gevent-24.11.1/.coveragerc

[run]
# In coverage 4.0b3, concurrency=gevent is exactly equivalent to
# concurrency=greenlet, except it causes coverage itself to import
# gevent. That messes up our coverage numbers for top-level
# statements, so we use greenlet instead. See https://github.com/gevent/gevent/pull/655#issuecomment-141198002
# See also .coveragerc-pypy
concurrency = greenlet
parallel = True
source = gevent
omit =
    # This is for <= 2.7.8, which we don't test
    */gevent/_ssl2.py
    */gevent/libev/_corecffi_build.py
    */gevent/libuv/_corecffi_build.py
    */gevent/win32util.py
    # having concurrency=greenlet means that the Queue class
    # which is used from multiple real threads doesn't
    # properly get covered.
    */gevent/_threading.py
    # local.so sometimes gets included, and it can't be parsed
    # as source, so it fails the whole process.
    *.so
    */gevent/libev/*.so
    */gevent/libuv/*.so
    */gevent/resolver/*.so

# New in 5.0; required for the GHA coveralls submission.
# Perhaps this obsoletes the source section in [paths]?
relative_files = True

[report]
# Coverage is run on Linux under cPython 2/3 and pypy
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
    except ImportError:
    if __name__ == .__main__.:
    if sys.platform == 'win32':
    if mswindows:
    if is_windows:
    if WIN:
    self.fail

omit =
    # local.so sometimes gets included, and it can't be parsed
    # as source, so it fails the whole process.
    # coverage 4.5 needs this specified here, 4.4.2 needed it in [run]
    *.so
    /tmp/test_*
    # Third-party vendored code
    src/gevent/_tblib.py

# [paths]
# # Combine source and paths from the Travis CI installs so they all get
# # collapsed during combining. Otherwise, coveralls.io reports
# # many different files (/lib/pythonX.Y/site-packages/gevent/...) and we don't
# # get a good aggregate number.
# source =
#     src/
#     */lib/*/site-packages/
#     */pypy*/site-packages/

# Local Variables:
# mode: conf
# End:

gevent-24.11.1/.coveragerc-pypy

[run]
# This is just like .coveragerc, but
# used for PyPy running. pypy doesn't support concurrency=greenlet
parallel = True
source = gevent
omit =
    # This is for <= 2.7.8, which we don't test
    src/gevent/_ssl2.py
    src/gevent/libev/_corecffi_build.py
    src/gevent/libuv/_corecffi_build.py
    src/gevent/win32util.py
    # having concurrency=greenlet means that the Queue class
    # which is used from multiple real threads doesn't
    # properly get covered.
    src/gevent/_threading.py
    test_*
    # local.so sometimes gets included, and it can't be parsed
    # as source, so it fails the whole process.
*.so [report] # Coverage is run on Linux under cPython 2/3 and pypy, so # exclude branches that are windows specific or pypy # specific exclude_lines = pragma: no cover def __repr__ raise AssertionError raise NotImplementedError except ImportError: if __name__ == .__main__.: if sys.platform == 'win32': if mswindows: if is_windows: gevent-24.11.1/.github/000077500000000000000000000000001471441230600145125ustar00rootroot00000000000000gevent-24.11.1/.github/ISSUE_TEMPLATE.md000066400000000000000000000017071471441230600172240ustar00rootroot00000000000000* gevent version: Please note how you installed it: From source, from PyPI, from your operating system's package, etc. * Python version: Please be as specific as possible. For example, "cPython 2.7.9 downloaded from python.org" * Operating System: Please be as specific as possible. For example, "Raspbian (Debian Linux 8.0 Linux 4.9.35-v7+ armv7l)" ### Description: **REPLACE ME**: What are you trying to get done, what has happened, what went wrong, and what did you expect? **IMPORTANT**: Please DO NOT POST SCREENSHOTS OF TEXT. Copy and paste the actual text. This means: No screenshots of your editor. No screenshots of tracebacks or console output. ```python-traceback Remember to put tracebacks in literal blocks ``` ### What I've run: **REPLACE ME**: Paste short, self contained, correct example code (See http://sscce.org), tracebacks, etc, here ```python "Put python code in python blocks" ``` gevent-24.11.1/.github/dependabot.yml000066400000000000000000000011041471441230600173360ustar00rootroot00000000000000# Keep GitHub Actions up to date with GitHub's Dependabot... # https://docs.github.com/en/code-security/dependabot/working-with-dependabot/keeping-your-actions-up-to-date-with-dependabot # https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem version: 2 updates: - package-ecosystem: github-actions directory: / groups: github-actions: patterns: - "*" # Group all Actions updates into a single larger pull request schedule: interval: monthly gevent-24.11.1/.github/workflows/000077500000000000000000000000001471441230600165475ustar00rootroot00000000000000gevent-24.11.1/.github/workflows/ci.yml000066400000000000000000000603451471441230600176750ustar00rootroot00000000000000### # Initially copied from # https://github.com/actions/starter-workflows/blob/main/ci/python-package.yml # # Original comment follows. ### ### # This workflow will install Python dependencies, run tests and lint with a variety of Python versions # For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions ### ### # Important notes on GitHub actions: # # - We only get 2,000 free minutes a month # - We only get 500MB of artifact storage # - Cache storage is limited to 7 days and 5GB. # - macOS minutes are 10x as expensive as Linux minutes # - windows minutes are twice as expensive. # # So keep those workflows light. # # In December 2020, github only supports x86/64. If we wanted to test # gevent on other architectures, we might be able to use docker # emulation, but there's no native support. # # Another major downside: You can't just re-run the job for one part # of the matrix. So if there's a transient test failure that hit, say, 3.8, # to get a clean run every version of Python runs again. That's bad. 
# https://github.community/t/ability-to-rerun-just-a-single-job-in-a-workflow/17234/65 name: gevent testing # Triggers the workflow on push or pull request events on: [push, pull_request] # Limiting to particular branches might be helpful to conserve minutes. #on: # push: # branches: [ $default-branch ] # pull_request: # branches: [ $default-branch ] env: # Weirdly, this has to be a top-level key, not ``defaults.env`` PYTHONHASHSEED: 8675309 PYTHONUNBUFFERED: 1 PYTHONDONTWRITEBYTECODE: 1 PIP_UPGRADE_STRATEGY: eager # Don't get warnings about Python 2 support being deprecated. We # know. The env var works for pip 20. PIP_NO_PYTHON_VERSION_WARNING: 1 PIP_NO_WARN_SCRIPT_LOCATION: 1 GEVENTSETUP_EV_VERIFY: 1 # Disable some warnings produced by libev especially and also some Cython generated code. # These are shared between GCC and clang so it must be a minimal set. # TODO: Figure out how to set env vars per platform without resorting to inline scripting. # Note that changing the value of these variables invalidates configure caches CFLAGS: -O3 -pipe -Wno-strict-aliasing -Wno-comment CPPFLAGS: -DEV_VERIFY=1 # Uploading built wheels for releases. # TWINE_PASSWORD is encrypted and stored directly in the # travis repo settings. TWINE_USERNAME: __token__ ### # caching ### # CCACHE_DIR: ~/.ccache # Using ~ here makes it not find its cache. CC: "ccache gcc" CCACHE_NOCPP2: true CCACHE_SLOPPINESS: file_macro,time_macros,include_file_ctime,include_file_mtime CCACHE_NOHASHDIR: true # jobs: test: runs-on: ${{ matrix.os }} strategy: # fail-fast is for the entire job, and defaults to true, # when adding a new Python version that we expect to have test failures for, # it's good to set this to false so we can be sure that none of the # stable versions fail as we make modifications for the new version. # See also ``continue-on-error``. # https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#jobsjob_idstrategyfail-fast fail-fast: false matrix: # 3.10+ needs more work: dnspython for example doesn't work # with it. That means for the bulk of our testing we need to # stick to 3.9. # # PyPy 7.3.13 started crashing for unknown reasons. 7.3.15 # still crashes. The crash is somewhere in # ``gevent.tests.test__queue gevent.tests.test__real_greenlet # gevent.tests.test__refcount_core # gevent.tests.test__resolver_dnspython`` # Seems resolved in 7.3.17 # # CAREFUL: Some of the tests are only run on specific versions of Python, # as dictated by the conditions found below. So when you change a version, # for example to force a specific patch release, don't forget to change conditions! # XXX: We could probably make this easier on ourself by adding a specific # key to the matrix in the version we care about and checking for that matrix key. python-version: ["3.12", "pypy-3.10-v7.3.17", '3.9', '3.10', '3.11', "3.13"] os: [macos-latest, ubuntu-latest] exclude: # The bulk of the testing is on Linux and Windows (appveyor). # Experience shows that it's sufficient to only test the latest # version on macOS. However, that does mean you need to # manually upload macOS wheels for those versions. # # XXX: Automate this part with another job. # # - os: macos-latest # python-version: 3.8 # - os: macos-latest # python-version: 3.9 # - os: macos-latest # python-version: 3.10 - os: macos-latest python-version: "pypy-3.10-v7.3.15" # On Arm, the only version of 3.9 available is 3.9.13, which is too # ancient to run the tests we need. 
- os: macos-latest python-version: "3.9" steps: - name: checkout uses: actions/checkout@v4 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v5 with: python-version: ${{ matrix.python-version }} cache: 'pip' cache-dependency-path: setup.py - name: Install ccache (ubuntu) if: startsWith(runner.os, 'Linux') run: | sudo apt-get install -y ccache sed gcc echo CCACHE_DIR=$HOME/.ccache >>$GITHUB_ENV mkdir -p $HOME/.ccache - name: Install ccache (macos) if: startsWith(runner.os, 'macOS') run: | brew install ccache echo CFLAGS=$CFLAGS -Wno-parentheses-equality >>$GITHUB_ENV echo CCACHE_DIR=$HOME/.ccache >>$GITHUB_ENV mkdir -p $HOME/.ccache - name: Set coverage status # coverage is too slow on PyPy. We can't submit it from macOS (see that action), # so don't bother taking the speed hit there either. # Coverage can't run the test_interpreters.py tests because greenlet can't # run there and that's how coverage is configured. Right now, we only have that # on Python 3.12, so take the quick way out and nix that version too. # Remember this condition needs to be synced with the coveralls/report step. if: ${{ !startsWith(matrix.python-version, 'pypy') && !startsWith(matrix.python-version, '3.12') && startsWith(runner.os, 'Linux') }} run: | echo G_USE_COV=--coverage >> $GITHUB_ENV ### # Caching. # This actually *restores* a cache and schedules a cleanup action # to save the cache. So it must come before the thing we want to use # the cache. ### - name: Cache ~/.ccache uses: actions/cache@v4 # This is repeated in an explicit save always step below # because normally it won't save anything if there's a cache hit! # Which is silly, because things in the cache might have (will have) # been changed. with: path: ~/.ccache/** key: ${{ runner.os }}-ccache2-${{ matrix.python-version }} - name: Cache config.cache # Store the configure caches. Having a cache can speed up c-ares # configure from 2-3 minutes to 20 seconds. uses: actions/cache@v4 with: path: deps/*/config.cache # XXX: This should probably include a hash of each configure # script (which is possible with hashFiles()). We don't have a restore-keys that doesn't include # the CFLAGS becouse the scripts fail to run if they get # different CFLAGS, CC, CPPFLAGS, etc, and GHA offers no way # to manually clear the cache. At one time, we had a # restore-key configured, and it still seems to be used even # without that setting here. The whole thing is being # matched even without the CFLAGS matching. Perhaps the - is # a generic search separator? key: ${{ runner.os }}-${{ matrix.os }}-configcache3-${{ matrix.python-version }}-${{ env.CFLAGS }} # Install gevent. Yes, this will create different files each time, # leading to a fresh cache. But because of CCache stats, we had already been doing # that (before we learned about CCACHE_NOSTATS). # We don't install using the requirements file for speed (reduced deps) and because an editable # install doesn't work in the cache. # First, the build dependencies (see setup.cfg) # so that we don't have to use build isolation and can better use the cache; # Note that we can't use -U for cffi and greenlet on PyPy. # The -q is because Pypy-2.7 sometimes started raising # UnicodeEncodeError: 'ascii' codec can't encode character u'\u2588' in position 6: ordinal not in range(128) # when downloading files. This started sometime in mid 2020. It's from # pip's vendored progress.bar class. 
- name: Install dependencies run: | pip install -U pip pip install -U -q setuptools wheel twine pip install -q -U 'cffi;platform_python_implementation=="CPython"' pip install -q -U 'cython>=3.0.2' # Use a debug version of greenlet to help catch any errors earlier. CFLAGS="$CFLAGS -Og -g -UNDEBUG" pip install -v --no-binary :all: 'greenlet>=2.0.0;platform_python_implementation=="CPython" and python_version < "3.11"' CFLAGS="$CFLAGS -Og -g -UNDEBUG" pip install -v --no-binary :all: 'greenlet>=3.0rc3;platform_python_implementation=="CPython" and python_version >= "3.11"' - name: Build gevent (non-Mac) if: ${{ ! startsWith(runner.os, 'Mac') }} run: | # Next, build the wheel *in place*. This helps ccache, and also lets us cache the configure # output (pip install uses a random temporary directory, making this difficult) python setup.py build_ext -i python setup.py bdist_wheel env: # Ensure we test with assertions enabled. # As opposed to the manylinux builds, which we distribute and # thus only use O3 (because Ofast enables fast-math, which has # process-wide effects), we test with Ofast here, because we # expect that some people will compile it themselves with that setting. CPPFLAGS: "-Ofast -UNDEBUG" - name: Build gevent (Mac) if: startsWith(runner.os, 'Mac') run: | # Next, build the wheel *in place*. This helps ccache, and also lets us cache the configure # output (pip install uses a random temporary directory, making this difficult) python setup.py build_ext -i python setup.py bdist_wheel # Something in the build system isn't detecting that we're building for both, # so we're getting tagged with just x86_64. Force the universal2 tag. # (I've verified that the .so files are in fact universal, with both architectures.) echo 'Done building' ls -l dist # (wheel tags --abi-tag universal2 dist/*x86_64.whl && ((rm dist/*universal2*universal2.whl || rm dist/*universal2*x86_86.whl) || rm dist/*x86_64.whl)) || true # XXX: That can produce invalid filenames, for some reason. 3.11 came up with # gevent-23.7.1.dev0-cp311-universal2-macosx_10_9_universal2.whl, which is not valid. # gevent-23.9.1.dev0-cp38-universal2-macosx_11_0_x86_64.whl has also shown up. # It's not clear why, because greenlet didn't do that. Maybe because it was already universal? # So we attempt to only do this for non-universal wheels. ls -l dist env: # Unlike the above, we are actually distributing these # wheels, so they need to be built for production use. CPPFLAGS: "-O3" # Build for both architectures ARCHFLAGS: "-arch x86_64 -arch arm64" # Force the wheel tag. _PYTHON_HOST_PLATFORM: "macosx-11.0-universal2" - name: Check gevent build run: | ls -l dist twine check dist/*whl - name: Cache ~/.ccache uses: actions/cache/save@v4 if: always() with: path: ~/.ccache/** key: ${{ runner.os }}-ccache2-${{ matrix.python-version }} - name: Upload gevent wheel uses: actions/upload-artifact@v4 with: name: gevent-${{ runner.os }}-${{ matrix.python-version }}.whl path: dist/*whl - name: Publish package to PyPI (mac) # We cannot 'uses: pypa/gh-action-pypi-publish@v1.11.0' because # that's apparently a container action, and those don't run on # the Mac. 
if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags') && startsWith(runner.os, 'Mac') env: TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }} run: | twine upload --skip-existing dist/* - name: Install gevent run: | WHL=$(ls dist/*whl) pip install -U "$WHL[test]" - name: Report environment details run: | python --version python -c 'import greenlet; print(greenlet, greenlet.__version__)' python -c 'import gevent; print(gevent.__version__)' python -c 'from gevent._compat import get_clock_info; print(get_clock_info("perf_counter"))' python -c 'import gevent.core; print(gevent.core.loop)' python -c 'import gevent.ares; print(gevent.ares)' echo CCache stats ccache --version ccache -s -v - name: "Tests: Basic" run: | python -m gevent.tests --second-chance $G_USE_COV # For the CPython interpreters, unless we have reason to expect # different behaviour across the versions (e.g., as measured by coverage) # it's sufficient to run the full suite on the current version # and oldest version. - name: "Tests: subproccess and FileObjectThread" if: startsWith(runner.os, 'Linux') || (startsWith(runner.os, 'Mac') && matrix.python-version == '3.12.2') # Now, the non-default threaded file object. # In the past, we included all test files that had a reference to 'subprocess'' somewhere in their # text. The monkey-patched stdlib tests were specifically included here. # However, we now always also test on AppVeyor (Windows) which only has GEVENT_FILE=thread, # so we can save a lot of CI time by reducing the set and excluding the stdlib tests without # losing any coverage. env: GEVENT_FILE: thread run: | python -m gevent.tests --second-chance $G_USE_COV `(cd src/gevent/tests >/dev/null && ls test__*subprocess*.py)` - name: "Tests: c-ares resolver" # This sometimes fails on mac. # && (matrix.python-version == '3.11.8') if: startsWith(runner.os, 'Linux') env: GEVENT_RESOLVER: ares run: | python -mgevent.tests --second-chance $G_USE_COV --ignore tests_that_dont_use_resolver.txt - name: "Tests: dnspython resolver" # This has known issues on Pypy-3.6. dnspython resolver not # supported under anything newer than 3.10, so far. if: (matrix.python-version == '3.9') && startsWith(runner.os, 'Linux') env: GEVENT_RESOLVER: dnspython run: | python -mgevent.tests --second-chance $G_USE_COV --ignore tests_that_dont_use_resolver.txt - name: "Tests: leakchecks" # Run the leaktests; # This is incredibly important and we MUST have an environment that successfully passes # these tests. if: (startsWith(matrix.python-version, '3.11.8')) && startsWith(runner.os, 'Linux') env: GEVENTTEST_LEAKCHECK: 1 run: | python -m gevent.tests --second-chance --ignore tests_that_dont_do_leakchecks.txt - name: "Tests: PURE_PYTHON" # No compiled cython modules on CPython, using the default backend. Get coverage here. # We should only need to run this for a single version. 
if: (matrix.python-version == '3.11.8') && startsWith(runner.os, 'Linux') env: PURE_PYTHON: 1 run: | python -mgevent.tests --second-chance --coverage - name: "Tests: libuv" if: (startsWith(matrix.python-version, '3.11.8')) env: GEVENT_LOOP: libuv run: | python -m gevent.tests --second-chance $G_USE_COV - name: "Tests: libev-cffi" if: (matrix.python-version == '3.11.8') && startsWith(runner.os, 'Linux') env: GEVENT_LOOP: libev-cffi run: | python -m gevent.tests --second-chance $G_USE_COV - name: Report coverage if: ${{ !startsWith(matrix.python-version, 'pypy') }} run: | python -m coverage combine || true python -m coverage report -i || true python -m coverage xml -i || true - name: Coveralls Parallel uses: coverallsapp/github-action@v2 # 20230707: On macOS, this installs coveralls from homebrew. # It then runs ``coveralls report``. But that is producing # a usage error from ``coveralls`` (report is not recognized) Presumably the # brew and action versions are out of sync? if: ${{ !startsWith(matrix.python-version, 'pypy') && !startsWith(matrix.python-version, '3.12') && startsWith(runner.os, 'Linux') }} with: flag-name: run-${{ join(matrix.*, '-') }} parallel: true format: cobertura - name: Lint if: matrix.python-version == '3.10' && startsWith(runner.os, 'Linux') # We only need to do this on one version. # We do this here rather than a separate job to avoid the compilation overhead. # 20230707: Python 3.11 crashes inside pylint/astroid on _ssl3.py; # reverting to Python 3.10 solved that. # TODO: Revisit this when we have caching of that part. run: | pip install -U pylint python -m pylint --rcfile=.pylintrc gevent coveralls_finish: needs: test runs-on: ubuntu-latest steps: - name: Coveralls Finished uses: coverallsapp/github-action@v2 with: parallel-finished: true test_no_embed: runs-on: ${{ matrix.os }} strategy: matrix: python-version: ['3.11'] os: [ubuntu-latest] steps: - name: checkout uses: actions/checkout@v4 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v5 with: python-version: ${{ matrix.python-version }} cache: 'pip' cache-dependency-path: setup.py - name: Install ccache (ubuntu) if: startsWith(runner.os, 'Linux') run: | sudo apt-get install -y ccache sed gcc echo CCACHE_DIR=$HOME/.ccache >>$GITHUB_ENV mkdir -p $HOME/.ccache - name: Cache ~/.ccache uses: actions/cache@v4 with: path: ~/.ccache/** key: ${{ runner.os }}-ccache2_embed-${{ matrix.python-version }} - name: Cache config.cache # Store the configure caches. Having a cache can speed up c-ares # configure from 2-3 minutes to 20 seconds. uses: actions/cache@v4 with: path: deps/*/config.cache # XXX: This should probably include a hash of each configure # script We don't have a restore-keys that doesn't include # the CFLAGS becouse the scripts fail to run if they get # different CFLAGS, CC, CPPFLAGS, etc, and GHA offers no way # to manually clear the cache. At one time, we had a # restore-key configured, and it still seems to be used even # without that setting here. The whole thing is being # matched even without the CFLAGS matching. Perhaps the - is # a generic search separator? 
key: ${{ runner.os }}-${{ matrix.os }}-configcache_embed-${{ matrix.python-version }}-${{ env.CFLAGS }} - name: Install dependencies run: | pip install -U pip pip install -U -q setuptools wheel twine pip install -q -U 'cffi;platform_python_implementation=="CPython"' pip install -q -U 'cython>=3.0' pip install 'greenlet>=2.0.0; platform_python_implementation=="CPython"' - name: build libs and gevent env: GEVENTSETUP_EMBED: 0 GEVENTSETUP_EV_VERIFY: 1 run: | # These need to be absolute paths export BUILD_LIBS="$HOME/.libs/" mkdir -p $BUILD_LIBS export LDFLAGS=-L$BUILD_LIBS/lib export CPPFLAGS="-I$BUILD_LIBS/include" env | sort echo which sed? `which sed` echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$BUILD_LIBS/lib >>$GITHUB_ENV (pushd deps/libev && sh ./configure -C --prefix=$BUILD_LIBS && make install && popd) (pushd deps/c-ares && sh ./configure -C --prefix=$BUILD_LIBS && make -j4 install && popd) (pushd deps/libuv && ./autogen.sh && sh ./configure -C --disable-static --prefix=$BUILD_LIBS && make -j4 install && popd) # libev builds a manpage each time, and it includes today's date, so it frequently changes. # delete to avoid repacking the archive rm -rf $BUILD_LIBS/share/man/ ls -l $BUILD_LIBS $BUILD_LIBS/lib $BUILD_LIBS/include python setup.py bdist_wheel pip uninstall -y gevent pip install -U `ls dist/*whl`[test] # Test that we're actually linking # to the .so file. objdump -p build/lib*/gevent/libev/_corecffi*so | grep "NEEDED.*libev.so" objdump -p build/lib*/gevent/libev/corecext*so | grep "NEEDED.*libev.so" objdump -p build/lib*/gevent/libuv/_corecffi*so | grep "NEEDED.*libuv.so" objdump -p build/lib*/gevent/resolver/cares*so | grep "NEEDED.*libcares.so" - name: test non-embedded run: | # Verify that we got non-embedded builds python -c 'import gevent.libev.corecffi as CF; assert not CF.LIBEV_EMBED' python -c 'import gevent.libuv.loop as CF; assert not CF.libuv.LIBUV_EMBED' python -mgevent.tests --second-chance manylinux: runs-on: ubuntu-latest # If we have 'needs: test', then these wait to start running until # all the test matrix passes. That's good, because these take a # long time, and they take a long time to kill if something goes # wrong. OTOH, if one of the tests fail, and this is a release tag, # we have to notice that and try restarting things so that the # wheels get built and uploaded. For that reason, it's simplest to # remove this for release branches. needs: test strategy: matrix: python-version: [3.9] image: # 2014 is EOL as of June 2024. But # it is "still widely used" and has extended (for-pay) # support hrough 2028... 
- manylinux2014_aarch64 - manylinux2014_ppc64le - manylinux2014_s390x - manylinux2014_x86_64 - musllinux_1_1_x86_64 - musllinux_1_1_aarch64 name: ${{ matrix.image }} steps: - name: checkout uses: actions/checkout@v4 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v5 with: python-version: ${{ matrix.python-version }} - name: Cache ~/.ccache uses: actions/cache@v4 with: path: ~/.ccache/** key: ${{ runner.os }}-ccache_${{ matrix.config[2] }}-${{ matrix.config[0] }} - name: Set up QEMU uses: docker/setup-qemu-action@v3 with: platforms: all - name: Build and test gevent env: DOCKER_IMAGE: quay.io/pypa/${{ matrix.image }} GEVENT_MANYLINUX_NAME: ${{ matrix.image }} run: scripts/releases/make-manylinux - name: Publish package to PyPI uses: pypa/gh-action-pypi-publish@v1.11.0 if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags') with: user: __token__ password: ${{ secrets.TWINE_PASSWORD }} skip_existing: true packages_dir: wheelhouse/ - name: Upload gevent wheels uses: actions/upload-artifact@v4 with: path: wheelhouse/*whl name: ${{ matrix.image }}_wheels.zip # TODO: # * Use YAML syntax to share snippets, like the old .travis.yml did gevent-24.11.1/.gitignore000066400000000000000000000051251471441230600151450ustar00rootroot00000000000000*.py[cod] build/ .runtimes .tox/ *.so *.o *.egg-info gevent.*.[ch] src/gevent/__pycache__ # Things generated by cython src/gevent/_semaphore.c src/gevent/local.c src/gevent/greenlet.c src/gevent/_ident.c src/gevent/_imap.c src/gevent/event.c src/gevent/_hub_local.c src/gevent/_waiter.c src/gevent/_tracer.c src/gevent/queue.c src/gevent/_hub_primitives.c src/gevent/_greenlet_primitives.c src/gevent/_abstract_linkable.c src/gevent/libev/corecext.c src/gevent/libev/corecext.h src/gevent/libev/_corecffi.c src/gevent/libev/_corecffi.o src/gevent/resolver/cares.c src/gevent/_generated_include/*.h # Cython annotations src/gevent/*.html src/gevent/libev/*.html src/gevent/resolver/*.html Makefile.ext MANIFEST *_flymake.py .coverage\.* htmlcov/ .coverage docs/_build docs/__pycache__ # Artifacts of configuring in place deps/c-ares/config.log deps/c-ares/config.status deps/c-ares/config.cache deps/c-ares/stamp-h1 deps/c-ares/stamp-h2 deps/c-ares/ares_build.h.orig deps/c-ares/ares_config.h deps/c-ares/ares_build.h deps/c-ares/.libs deps/c-ares/*.o deps/c-ares/*.lo deps/c-ares/*.la deps/c-ares/.deps deps/c-ares/acountry deps/c-ares/adig deps/c-ares/ahost deps/c-ares/Makefile deps/c-ares/libtool deps/c-ares/libcares.pc deps/c-ares/test/.deps deps/c-ares/test/Makefile deps/c-ares/test/config.log deps/c-ares/test/config.status deps/c-ares/test/libtool deps/c-ares/test/stamp-h1 deps/c-ares/include/Makefile deps/c-ares/include/stamp-h2 deps/c-ares/src/Makefile deps/c-ares/src/lib/Makefile deps/c-ares/src/lib/ares_config.h deps/c-ares/src/lib/stamp-h1 deps/c-ares/src/tools/Makefile deps/c-ares/autom4te.cache/ deps/c-ares/configure~ deps/c-ares/src/lib/.deps/ deps/c-ares/src/lib/ares_config.h.in~ deps/c-ares/src/lib/dsa/.deps/ deps/c-ares/src/lib/dsa/.dirstamp deps/c-ares/src/lib/event/.deps/ deps/c-ares/src/lib/event/.dirstamp deps/c-ares/src/lib/legacy/.deps/ deps/c-ares/src/lib/legacy/.dirstamp deps/c-ares/src/lib/record/.deps/ deps/c-ares/src/lib/record/.dirstamp deps/c-ares/src/lib/str/.deps/ deps/c-ares/src/lib/str/.dirstamp deps/c-ares/src/lib/util/.deps/ deps/c-ares/src/lib/util/.dirstamp deps/c-ares/src/tools/.deps/ deps/libev/.deps deps/libev/Makefile deps/libev/config.log deps/libev/config.h deps/libev/config.status 
deps/libev/config.cache deps/libev/configure-output.txt deps/libev/libtool deps/libev/stamp-h1 deps/libev/.libs deps/libev/*.lo deps/libev/*.la deps/libev/*.o deps/libuv/.deps deps/libuv/Makefile deps/libuv/config.log deps/libuv/config.h deps/libuv/config.status deps/libev/config.cache deps/libduv/libtool deps/libuv/stamp-h1 deps/libuv/.libs deps/libuv/*.lo deps/libuv/*.la deps/libuv/*.o gevent-24.11.1/.landscape.yml000066400000000000000000000075201471441230600157110ustar00rootroot00000000000000doc-warnings: no # experimental, raises an exception test-warnings: no strictness: veryhigh max-line-length: 160 # We don't use any of the auto-detected things, and # auto-detection slows down startup autodetect: false requirements: - dev-requirements.txt python-targets: - 2 # - 3 # landscape.io seems to fail if we run both py2 and py3? ignore-paths: - examples/webchat/ - deps/libuv/ - doc/ - build/ - deps/ - dist - .eggs # util creates lots of warnings. ideally they should be fixed, # but that code doesn't change often - util # likewise with scripts - scripts/ # This file has invalid syntax for Python 3, which is how # landscape.io runs things... - src/gevent/_util_py2.py # ...and this file has invalid syntax for Python 2, which is how # travis currently runs things. sigh. - src/gevent/_socket3.py # This is vendored with minimal changes - src/gevent/_tblib.py # likewise - src/greentest/_six.py # This triggers https://github.com/PyCQA/pylint/issues/846 on Travis, # but the file is really small, so it's better to skip this one # file than disable that whole check. - src/gevent/core.py # sadly, this one is complicated - setup.py # This crashes (infinite recursion) trying to get the mro() of a class that extends # build_ext - _setuputils.py - src/greentest/getaddrinfo_module.py ignore-patterns: # disabled code - ^src/greentest/xtest_.*py # standard library code - ^src/greentest/2.* - ^src/greentest/3.* # benchmarks that aren't used/changed much - ^src/greentest/bench_.*py pyroma: run: true mccabe: # We have way too many violations of the complexity measure. # We should enable this and fix them one at a time, but that's # more refactoring than I want to do initially. run: false pyflakes: disable: # F821: undefined name; caught better by pylint, where it can be # controlled for the whole file/per-line - F821 # F401: unused import; same story - F401 # F811: redefined function; same story - F811 # F403: wildcard import; same story - F403 pep8: disable: # N805: first arg should be self; fails on metaclasses and # classmethods; pylint does a better job - N805 # N802: function names should be lower-case; comes from Windows # funcs and unittest-style asserts and factory funcs - N802 # N801: class names should use CapWords - N801 # N803: argument name should be lower-case; comes up with using # the class name as a keyword-argument - N803 # N813: camelCase imported as lowercase; socketcommon - N813 # N806: variable in function should be lowercase; but sometimes we # want constant-looking names, especially for closures - N806 # N812: lowercase imported as non-lowercase; from greenlet import # greenlet as RawGreenlet - N812 # E261: at least two spaces before inline comment. Really? Who does # that? - E261 # E265: Block comment should start with "# ". This arises from # commenting out individual lines of code. 
- E265 # N806: variable in function should be lowercase; but sometimes we # want constant-looking names, especially for closures - N806 # W503 line break before binary operator (I like and/or on the # next line, it makes more sense) - W503 # E266: too many leading '#' for block comment. (Multiple # can # set off blocks) - E266 # E402 module level import not at top of file. (happens in # setup.py, some test cases) - E402 # E702: multiple expressions on one line semicolon # (happens for monkey-patch)) - E702 # E731: do not assign a lambda expression, use a def # simpler than a def sometimes, and prevents redefinition warnings - E731 # E302/303: Too many/too few blank lines (between classes, etc) # This is *really* nitpicky. - E302 - E303 gevent-24.11.1/.pylintrc000066400000000000000000000216351471441230600150260ustar00rootroot00000000000000[MASTER] load-plugins=pylint.extensions.bad_builtin, pylint.extensions.code_style, pylint.extensions.dict_init_mutate, pylint.extensions.dunder, pylint.extensions.comparison_placement, pylint.extensions.confusing_elif, pylint.extensions.for_any_all, pylint.extensions.consider_refactoring_into_while_condition, pylint.extensions.check_elif, pylint.extensions.eq_without_hash, pylint.extensions.overlapping_exceptions, # pylint.extensions.comparetozero, # Takes out ``if x == 0:`` and wants you to write ``if not x:`` # but in many cases, the == 0 is actually much more clear. # pylint.extensions.mccabe, # We have too many too-complex methods. We should enable this and fix them # one by one. # pylint.extensions.redefined_variable_type, # We use that pattern during initialization. # magic_value wants you to not use arbitrary strings and numbers # inline in the code. But it's overzealous and has way too many false # positives. Trust people to do the most readable thing. # pylint.extensions.magic_value # Empty comment would be good, except it detects blank lines within # a single comment block. # # Those are often used to separate paragraphs, like here. # pylint.extensions.empty_comment, # consider_ternary_expression is a nice check, but is also overzealous. # Trust the human to do the readable thing. # pylint.extensions.consider_ternary_expression, # redefined_loop_name tends to catch us with things like # for name in (a, b, c): name = name + '_column' ... # pylint.extensions.redefined_loop_name, # This wants you to turn ``x in (1, 2)`` into ``x in {1, 2}``. # They both result in the LOAD_CONST bytecode, one a tuple one a # frozenset. In theory a set lookup using hashing is faster than # a linear scan of a tuple; but if the tuple is small, it can often # actually be faster to scan the tuple. # pylint.extensions.set_membership, # Fix zope.cachedescriptors.property.Lazy; the property-classes doesn't seem to # do anything. # https://stackoverflow.com/questions/51160955/pylint-how-to-specify-a-self-defined-property-decorator-with-property-classes # For releases prior to 2.14.2, this needs to be a one-line, quoted string. After that, # a multi-line string. # - Make zope.cachedescriptors.property.Lazy look like a property; # fixes pylint thinking it is a method. # - Run in Pure Python mode (ignore C extensions that respect this); # fixes some issues with zope.interface, like IFoo.providedby(ob) # claiming not to have the right number of parameters...except no, it does not. 
init-hook = import astroid.bases astroid.bases.POSSIBLE_PROPERTIES.add('Lazy') astroid.bases.POSSIBLE_PROPERTIES.add('LazyOnClass') astroid.bases.POSSIBLE_PROPERTIES.add('readproperty') astroid.bases.POSSIBLE_PROPERTIES.add('non_overridable') import os os.environ['PURE_PYTHON'] = ("1") # Ending on a quoted string # breaks pylint 2.14.5 (it strips the trailing quote. This is # probably because it tries to handle one-line quoted strings as well as multi-blocks). # The parens around it fix the issue. extension-pkg-whitelist=gevent.greenlet,gevent.libuv._corecffi,gevent.libev._corecffi,gevent.libev._corecffi.lib,gevent.local,gevent._ident # Control the amount of potential inferred values when inferring a single # object. This can help the performance when dealing with large functions or # complex, nested conditions. # gevent: The changes for Python 3.7 in _ssl3.py lead to infinite recursion # in pylint 2.3.1/astroid 2.2.5 in that file unless we this this to 1 # from the default of 100. limit-inference-results=1 [MESSAGES CONTROL] # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifier separated by comma (,) or put this option # multiple time (only on the command line, not in the configuration file where # it should appear only once). # NOTE: comments must go ABOVE the statement. In Python 2, mixing in # comments disables all directives that follow, while in Python 3, putting # comments at the end of the line does the same thing (though Py3 supports # mixing) # invalid-name, ; We get lots of these, especially in scripts. should fix many of them # protected-access, ; We have many cases of this; legit ones need to be examinid and commented, then this removed # no-self-use, ; common in superclasses with extension points # too-few-public-methods, ; Exception and marker classes get tagged with this # exec-used, ; should tag individual instances with this, there are some but not too many # global-statement, ; should tag individual instances # multiple-statements, ; "from gevent import monkey; monkey.patch_all()" # locally-disabled, ; yes, we know we're doing this. don't replace one warning with another # cyclic-import, ; most of these are deferred imports # too-many-arguments, ; these are almost always because that's what the stdlib does # redefined-builtin, ; likewise: these tend to be keyword arguments like len= in the stdlib # undefined-all-variable, ; XXX: This crashes with pylint 1.5.4 on Travis (but not locally on Py2/3 # ; or landscape.io on Py3). The file causing the problem is unclear. UPDATE: identified and disabled # that file. # see https://github.com/PyCQA/pylint/issues/846 # useless-suppression: the only way to avoid repeating it for specific statements everywhere that we # do Py2/Py3 stuff is to put it here. Sadly this means that we might get better but not realize it. # duplicate-code: Yeah, the compatibility ssl modules are much the same # In pylint 1.8.0, inconsistent-return-statements are created for the wrong reasons. # This code raises it, even though there's only one return (the implicit 'return None' is presumably # what triggers it): # def foo(): # if baz: # return 1 # In Pylint 2dev1, needed for Python 3.7, we get spurious 'useless return' errors: # @property # def foo(self): # return None # generates useless-return # Pylint 2.4 adds import-outside-toplevel. But we do that a lot to defer imports because of patching. # Pylint 2.4 adds self-assigning-variable. 
But we do *that* to avoid unused-import when we # "export" the variable and don't have a __all__. # Pylint 2.6+ adds some python-3-only things that don't apply: raise-missing-from, super-with-arguments, consider-using-f-string, redundant-u-string-prefix # unnecessary-lambda-assignment: New check introduced in v2.14.0 # unnecessary-dunder-call: New check introduced in v2.14.0 # consider-using-assignment-expr: wants you to use the walrus operator. # It hits way too much and its not clear they would be improvements. # confusing-consecutive-elif: Are they though? disable=wrong-import-position, wrong-import-order, missing-docstring, ungrouped-imports, invalid-name, protected-access, too-few-public-methods, exec-used, global-statement, multiple-statements, locally-disabled, cyclic-import, too-many-arguments, redefined-builtin, useless-suppression, duplicate-code, undefined-all-variable, inconsistent-return-statements, useless-return, useless-object-inheritance, import-outside-toplevel, self-assigning-variable, raise-missing-from, super-with-arguments, consider-using-f-string, consider-using-assignment-expr, redundant-u-string-prefix, unnecessary-lambda-assignment, unnecessary-dunder-call, use-dict-literal, confusing-consecutive-elif, enable=consider-using-augmented-assign [FORMAT] # duplicated from setup.cfg max-line-length=160 max-module-lines=1100 [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. #notes=FIXME,XXX,TODO # Disable that, we don't want them in the report (???) notes= [VARIABLES] dummy-variables-rgx=_.* [TYPECHECK] # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E1101 when accessed. Python regular # expressions are accepted. # gevent: this is helpful for py3/py2 code. generated-members=exc_clear # List of classes names for which member attributes should not be checked # (useful for classes with attributes dynamically set). This can work # with qualified names. #ignored-classes=SSLContext, SSLSocket, greenlet, Greenlet, parent, dead # List of module names for which member attributes should not be checked # (useful for modules/projects where namespaces are manipulated during runtime # and thus existing member attributes cannot be deduced by static analysis. It # supports qualified module names, as well as Unix pattern matching. ignored-modules=gevent._corecffi,gevent.os,os,greenlet,threading,gevent.libev.corecffi,gevent.socket,gevent.core,gevent.testing.support [DESIGN] max-attributes=12 max-parents=10 # XXX: Eww! max-positional-arguments=20 [BASIC] bad-functions=input # Prospector turns ot unsafe-load-any-extension by default, but # pylint leaves it off. This is the proximal cause of the # undefined-all-variable crash. unsafe-load-any-extension = yes # Local Variables: # mode: conf # End: gevent-24.11.1/.readthedocs.yml000066400000000000000000000013131471441230600162360ustar00rootroot00000000000000# .readthedocs.yml # Read the Docs configuration file # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details # Some things can only be configured on the RTD dashboard. 
# Those that we may have changed from the default include: # Analytics code: # Show Version Warning: False # Single Version: True # Required version: 2 # Build documentation in the docs/ directory with Sphinx sphinx: builder: html configuration: docs/conf.py # Set the version of Python and requirements required to build your # docs build: # os is required for some reason os: ubuntu-22.04 tools: python: "3.11" python: install: - method: pip path: . extra_requirements: - docs gevent-24.11.1/AUTHORS000066400000000000000000000024271471441230600142270ustar00rootroot00000000000000Gevent is written and maintained by Denis Bilenko Matt Iversen Steffen Prince Jason Madden and the contributors (ordered by the date of first contribution): Jason Toffaletti Mike Barton Ludvig Ericson Marcus Cavanaugh Matt Goodall Ralf Schmitt Daniele Varrazzo Nicholas Piël Örjan Persson Uriel Katz Ted Suzman Randall Leeds Erik Näslund Alexey Borzenkov David Hain Dmitry Chechik Ned Rockson Tommie Gannert Shaun Lindsay Andreas Blixt Nick Barkas Galfy Pundee Alexander Boudkar Damien Churchill Tom Lynn Shaun Cutts David LaBissoniere Alexandre Kandalintsev Geert Jansen Vitaly Kruglikov Saúl Ibarra Corretgé Oliver Beattie Bobby Powers Anton Patrushev Jan-Philip Gehrcke Alex Gaynor 陈小玉 Philip Conrad Heungsub Lee Ron Rothman See https://github.com/gevent/gevent/graphs/contributors for more info. Gevent is inspired by and uses some code from eventlet which was written by Bob Ipollito Donovan Preston The win32util module is taken from Twisted. The tblib module is taken from python-tblib by Ionel Cristian Mărieș. Some modules (local, ssl) contain code from the Python standard library. If your code is used in gevent and you are not mentioned above, please contact the maintainer. gevent-24.11.1/CHANGES.rst000066400000000000000000000724111471441230600147610ustar00rootroot00000000000000=========== Changelog =========== .. currentmodule:: gevent .. towncrier release notes start 24.11.1 (2024-11-11) ==================== Bugfixes -------- - Remove some legacy code that supported Python 2 for compatibility with the upcoming releases of Cython 3.1. Also, the ``PeriodicMonitorThreadStartedEvent`` now properly implements the ``IPeriodicMonitorThreadStartedEvent`` interface. The ``EventLoopBlocked`` event includes the hub which was blocked, and it is notified before the report is printed so that event listeners can modify the report. See :issue:`2076`. 24.10.3 (2024-10-18) ==================== Bugfixes -------- - Fix clearing stack frames on Python 3.13. This is invoked when you fork after having used the thread pool. See :issue:`2067`. - Distribute manylinux2014 wheels for x86_64. See :issue:`2068`. - Stop switching to the hub in the after fork hook in a child process. This could lead to strange behaviour, and is different than what all other versions of Python do. 24.10.2 (2024-10-11) ==================== Bugfixes -------- - Workaround a Cython bug compiling on GCC14. See :issue:`2049`. 24.10.1 (2024-10-09) ==================== Features -------- - Update the bundled c-ares to 1.33.1. - Add support for Python 3.13. - The functions and classes in ``gevent.subprocess`` no longer accept ``stdout=STDOUT`` and raise a ``ValueError``. Several additions and changes to the ``queue`` module, including: - ``Queue.shutdown`` is available on all versions of Python. - ``LifoQueue`` is now a joinable queue. - gevent.monkey changed from a module to a package. The public API remains the same. 
For this release, private APIs (undocumented, marked internal, or beginning with an underscore) are also preserved. However, these may be changed or removed at any time in the future. If you are using one of these APIs and cannot replace it, please contact the gevent team. Bugfixes -------- - For platforms that don't have ``socketpair``, upgrade our fallback code to avoid a security issue. See :issue:`2048`. Deprecations and Removals ------------------------- - Remove support for Python 3.8, which has reached the end of its support lifecycle. See :issue:`remove_py38`. 24.2.1 (2024-02-14) =================== Bugfixes -------- - Add support for Python patch releases 3.11.8 and 3.12.2, which changed internal details of threading. As a result of these changes, note that it is no longer possible to change the ``__class__`` of a ``gevent.threading._DummyThread`` object on those versions. See :issue:`2020`. Other ----- Other updates for compatibility with the standard library include: - Errors raised from ``subprocess.Popen`` may not have a filename set. - ``SSLSocket.recv_into`` and ``SSLSocket.read`` no longer require the buffer to implement ``len`` and now work with buffers whose size is not 1. - gh-108310: Fix CVE-2023-40217: Check for & avoid the ssl pre-close flaw. In addition: - Drop ``setuptools`` to a soft test dependency. - Drop support for very old versions of CFFI. - Update bundled c-ares from 1.19.1 to 1.26.0. - Locks created by gevent, but acquired from multiple different threads (not recommended), no longer spin to implement timeouts and interruptible blocking. Instead, they use the native functionality of the Python 3 lock. This may improve some scenarios. See :issue:`2013`. 23.9.1 (2023-09-12) =================== Bugfixes -------- - Require greenlet 3.0 on Python 3.11 and Python 3.12; greenlet 3.0 is recommended for all platforms. This fixes a number of obscure crashes on all versions of Python, as well as fixing a fairly common problem on Python 3.11+ that could manifest as either a crash or as a ``SystemError``. See :issue:`1985`. ---- 23.9.0.post1 (2023-09-02) ========================= - Fix Windows wheel builds. - Fix macOS wheel builds. 23.9.0 (2023-09-01) =================== Bugfixes -------- - Make ``gevent.select.select`` accept arbitrary iterables, not just sequences. That is, you can now pass in a generator of file descriptors instead of a realized list. Internally, arbitrary iterables are copied into lists. This better matches what the standard library does. Thanks to David Salvisberg. See :issue:`1979`. - On Python 3.11 and newer, opt out of Cython's fast exception manipulation, which *may* be causing problems in certain circumstances when combined with greenlets. On all versions of Python, adjust some error handling in the default C-based loop. This fixes several assertion failures on debug versions of CPython. Hopefully it has a positive impact under real conditions. See :issue:`1985`. - Make ``gevent.pywsgi`` comply more closely with the HTTP specification for chunked transfer encoding. In particular, we are much stricter about trailers, and trailers that are invalid (too long or featuring disallowed characters) forcibly close the connection to the client *after* the results have been sent. Trailers otherwise continue to be ignored and are not available to the WSGI application. Previously, carefully crafted invalid trailers in chunked requests on keep-alive connections might appear as two requests to ``gevent.pywsgi``. 
Because this was handled exactly as a normal keep-alive connection with two requests, the WSGI application should handle it normally. However, if you were counting on some upstream server to filter incoming requests based on paths or header fields, and the upstream server simply passed trailers through without validating them, then this embedded second request would bypass those checks. (If the upstream server validated that the trailers meet the HTTP specification, this could not occur, because characters that are required in an HTTP request, like a space, are not allowed in trailers.) CVE-2023-41419 was reserved for this. Our thanks to the original reporters, Keran Mu (mkr22@mails.tsinghua.edu.cn) and Jianjun Chen (jianjun@tsinghua.edu.cn), from Tsinghua University and Zhongguancun Laboratory. See :issue:`1989`. ---- 23.7.0 (2023-07-11) =================== Features -------- - Add preliminary support for Python 3.12, using greenlet 3.0a1. This is somewhat tricky to build from source at this time, and there is one known issue: On Python 3.12b3, dumping tracebacks of greenlets is not available. :issue:`1969`. - Update the bundled c-ares version to 1.19.1. See :issue:`1947`. Bugfixes -------- - Fix an edge case connecting a non-blocking ``SSLSocket`` that could result in an AttributeError. In a change to match the standard library, calling ``sock.connect_ex()`` on a subclass of ``socket`` no longer calls the subclass's ``connect`` method. Initial fix by Priyankar Jain. See :issue:`1932`. - Make gevent's ``FileObjectThread`` (mostly used on Windows) implement ``readinto`` cooperatively. PR by Kirill Smelkov. See :issue:`1948`. - Work around an ``AttributeError`` during cyclic garbage collection when Python finalizers (``__del__`` and the like) attempt to use gevent APIs. This is not a recommended practice, and it is unclear if catching this ``AttributeError`` will fix any problems or just shift them. (If we could determine the root situation that results in this cycle, we might be able to solve it.) See :issue:`1961`. Deprecations and Removals ------------------------- - Remove support for obsolete Python versions. This is everything prior to 3.8. Related changes include: - Stop using ``pkg_resources`` to find entry points (plugins). Instead, use ``importlib.metadata``. - Honor ``sys.unraisablehook`` when a callback function produces an exception, and handling the exception in the hub *also* produces an exception. In older versions, these would be simply printed. - ``setup.py`` no longer includes the ``setup_requires`` keyword. Installation with a tool that understands ``pyproject.toml`` is recommended. - The bundled tblib has been updated to version 2.0. ---- 22.10.2 (2022-10-31) ==================== Bugfixes -------- - Update to greenlet 2.0. This fixes a deallocation issue that required a change in greenlet's ABI. The design of greenlet 2.0 is intended to prevent future fixes and enhancements from requiring an ABI change, making it easier to update gevent and greenlet independently. .. caution:: greenlet 2.0 requires a modern-ish C++ compiler. This may mean certain older platforms are no longer supported. See :issue:`1909`. ---- 22.10.1 (2022-10-14) ==================== Features -------- - Update bundled libuv to 1.44.2. See :issue:`1913`. Misc ---- - See :issue:`1898`., See :issue:`1910`., See :issue:`1915`. ---- 22.08.0 (2022-10-08) ==================== Features -------- - Windows: Test and provide binary wheels for PyPy3.7. 
Note that there may be issues with subprocesses, signals, and it may be slow. See :issue:`1798`. - Upgrade embedded c-ares to 1.18.1. See :issue:`1847`. - Upgrade bundled libuv to 1.42.0 from 1.40.0. See :issue:`1851`. - Added preliminary support for Python 3.11 (rc2 and later). Some platforms may or may not have binary wheels at this time. .. important:: Support for legacy versions of Python, including 2.7 and 3.6, will be ending soon. The maintenance burden has become too great and the maintainer's time is too limited. Ideally, there will be a release of gevent compatible with a final release of greenlet 2.0 that still supports those legacy versions, but that may not be possible; this may be the final release to support them. :class:`gevent.threadpool.ThreadPool` can now optionally expire idle threads. This is used by default in the implicit thread pool used for DNS requests and other user-submitted tasks; other uses of a thread-pool need to opt-in to this. See :issue:`1867`. Bugfixes -------- - Truly disable the effects of compiling with ``-ffast-math``. See :issue:`1864`. ---- 21.12.0 (2021-12-11) ==================== Features -------- - Update autoconf files for Apple Silicon Macs. Note that while there are reports of compiling gevent on Apple Silicon Macs now, this is *not* a tested configuration. There may be some remaining issues with CFFI on some systems as well. See :issue:`1721`. - Build and upload CPython 3.10 binary manylinux wheels. Unfortunately, this required us to stop building and uploading CPython 2.7 binary manylinux wheels. Binary wheels for 2.7 continue to be available for Windows and macOS. See :issue:`1822`. - Test and distribute musllinux_1_1 wheels. See :issue:`1837`. - Update the tested versions of PyPy2 and PyPy3. For PyPy2, there should be no user visible changes, but for PyPy3, support has moved from Python 3.6 to Python 3.7. See :issue:`1843`. Bugfixes -------- - Try to avoid linking to two different Python runtime DLLs on Windows. See :issue:`1814`. - Stop compiling manylinux wheels with ``-ffast-math.`` This was implicit in ``-Ofast``, but could alter the global state of the process. Analysis and fix thanks to Ilya Konstantinov. See :issue:`1820`. - Fix hanging the interpreter on shutdown if gevent monkey patching occurred on a non-main thread in Python 3.9.8 and above. (Note that this is not a recommended practice.) See :issue:`1839`. ---- 21.8.0 (2021-08-05) =================== Features -------- - Update the embedded c-ares from 1.16.1 to 1.17.1. See :issue:`1758`. - Add support for Python 3.10rc1 and newer. As part of this, the minimum required greenlet version was increased to 1.1.0 (on CPython), and the minimum version of Cython needed to build gevent from a source checkout is 3.0a9. Note that the dnspython resolver is not available on Python 3.10. See :issue:`1790`. - Update from Cython 3.0a6 to 3.0a9. See :issue:`1801`. Misc ---- - See :issue:`1789`. ---- 21.1.2 (2021-01-20) =================== Features -------- - Update the embedded libev from 4.31 to 4.33. See :issue:`1754`. - Update the embedded libuv from 1.38.0 to 1.40.0. See :issue:`1755`. Misc ---- - See :issue:`1753`. ---- 21.1.1 (2021-01-18) =================== Bugfixes -------- Fix a ``TypeError`` on startup on Python 2 with ``zope.schema`` installed. Reported by Josh Zuech. 
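
The thread-pool change described in the 22.08.0 entry above (optional
expiration of idle worker threads) is easiest to see with a small
example. This is only a sketch: the ``idle_task_timeout`` keyword used
below is an assumption about how the opt-in is spelled and should be
checked against the API reference for the release you are running;
everything else uses the long-standing ``ThreadPool`` API.

.. code-block:: python

    # Sketch: running blocking work on gevent's ThreadPool, opting in
    # to idle-thread expiration (the keyword name is an assumption).
    import time

    from gevent.threadpool import ThreadPool

    def blocking_io(n):
        time.sleep(0.1)   # stand-in for work that must run in a real thread
        return n * 2

    pool = ThreadPool(4)  # default behaviour: idle workers are kept alive
    print(pool.map(blocking_io, range(8)))

    # Hypothetical opt-in: expire workers idle for more than 5 seconds.
    expiring_pool = ThreadPool(4, idle_task_timeout=5.0)
    print(expiring_pool.spawn(blocking_io, 21).get())
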
21.1.0 (2021-01-15)
===================

Bugfixes
--------

- Make gevent ``FileObjects`` more closely match the semantics of
  native file objects for the ``name`` attribute:

  - Objects opened from a file descriptor integer have that integer as
    their ``name``. (Note that this is the Python 3 semantics; Python 2
    native file objects returned from ``os.fdopen()`` have the string
    "<fdopen>" as their name, but here gevent always follows Python 3.)

  - The ``name`` remains accessible after the file object is closed.

  Thanks to Dan Milon.
  See :issue:`1745`.

Misc
----

Make ``gevent.event.AsyncResult`` print a warning when it detects
improper cross-thread usage instead of hanging.

``AsyncResult`` has *never* been safe to use from multiple threads.
It, like most gevent objects, is intended to work with greenlets from
a single thread. Using ``AsyncResult`` from multiple threads has
undefined semantics. The safest way to communicate between threads is
using an event loop async watcher.

Those undefined semantics changed in recent gevent versions, making it
more likely that an abused ``AsyncResult`` would misbehave in ways
that could cause the program to hang.

Now, when ``AsyncResult`` detects a situation that would hang, it
prints a warning to stderr. Note that this is best-effort, and hangs
are still possible, especially under PyPy 7.3.3.

At the same time, ``AsyncResult`` is tuned to behave more like it did
in older versions, meaning that the hang is once again much less
likely. If you were getting lucky and using ``AsyncResult``
successfully across threads, this may restore your luck. In addition,
cross-thread wakeups are faster. Note that the gevent hub now uses an
extra file descriptor to implement this.

Similar changes apply to ``gevent.event.Event`` (see :issue:`1735`).

See :issue:`1739`.

----


20.12.1 (2020-12-27)
====================

Features
--------

- Make :class:`gevent.Greenlet` objects function as context managers.
  When the ``with`` suite finishes, execution doesn't continue until
  the greenlet is finished. This can be a simpler alternative to a
  :class:`gevent.pool.Group` when the lifetime of greenlets can be
  lexically scoped. Suggested by André Caron.
  See :issue:`1324`.

Bugfixes
--------

- Make gevent's ``Semaphore`` objects properly handle native thread
  identifiers larger than can be stored in a C ``long`` on Python 3,
  instead of raising an ``OverflowError``.

  Reported by TheYOSH.
  See :issue:`1733`.

----


20.12.0 (2020-12-22)
====================

Features
--------

- Make worker threads created by :class:`gevent.threadpool.ThreadPool`
  install the :func:`threading.setprofile` and :func:`threading.settrace`
  hooks while tasks are running. This provides visibility to profiling
  and tracing tools like yappi. Reported by Suhail Muhammed.
  See :issue:`1678`.

- Drop support for Python 3.5.

Bugfixes
--------

- Incorrectly passing an exception *instance* instead of an exception
  *type* to `gevent.Greenlet.kill` or `gevent.killall` no longer prints
  an exception to stderr.
  See :issue:`1663`.
- Make destroying a hub try harder to more forcibly stop loop processing
  when there are outstanding callbacks or IO operations scheduled.
  Thanks to Josh Snyder (:issue:`1686`) and Jan-Philip Gehrcke
  (:issue:`1669`).
  See :issue:`1686`.
- Improve the ability to use monkey-patched locks, and
  `gevent.lock.BoundedSemaphore`, across threads, especially when the
  various threads might not have a gevent hub or any other active
  greenlets. In particular, this handles some cases that previously
  raised ``LoopExit`` or would hang.
Note that this may not be reliable on PyPy on Windows; such an environment is not currently recommended. The semaphore tries to avoid creating a hub if it seems unnecessary, automatically creating one in the single-threaded case when it would block, but not in the multi-threaded case. While the differences should be correctly detected, it's possible there are corner cases where they might not be. If your application appears to hang acquiring semaphores, but adding a call to ``gevent.get_hub()`` in the thread attempting to acquire the semaphore before doing so fixes it, please file an issue. See :issue:`1698`. - Make error reporting when a greenlet suffers a `RecursionError` more reliable. Reported by Dan Milon. See :issue:`1704`. - gevent.pywsgi: Avoid printing an extra traceback ("TypeError: not enough arguments for format string") to standard error on certain invalid client requests. Reported by Steven Grimm. See :issue:`1708`. - Add support for PyPy2 7.3.3. See :issue:`1709`. - Python 2: Make ``gevent.subprocess.Popen.stdin`` objects have a ``write`` method that guarantees to write the entire argument in binary, unbuffered mode. This may require multiple trips around the event loop, but more closely matches the behaviour of the Python 2 standard library (and gevent prior to 1.5). The number of bytes written is still returned (instead of ``None``). See :issue:`1711`. - Make `gevent.pywsgi` stop trying to enforce the rules for reading chunked input or ``Content-Length`` terminated input when the connection is being upgraded, for example to a websocket connection. Likewise, if the protocol was switched by returning a ``101`` status, stop trying to automatically chunk the responses. Reported by Kavindu Santhusa. See :issue:`1712`. - Remove the ``__dict__`` attribute from `gevent.socket.socket` objects. The standard library socket do not have a ``__dict__``. Noticed by Carson Ip. As part of this refactoring, share more common socket code between Python 2 and Python 3. See :issue:`1724`. ---- 20.9.0 (2020-09-22) =================== Features -------- - The embedded libev is now asked to detect the availability of ``clock_gettime`` and use the realtime and/or monotonic clocks, if they are available. On Linux, this can reduce the number of system calls libev makes. Originally provided by Josh Snyder. See :issue:`1648`. Bugfixes -------- - On CPython, depend on greenlet >= 0.4.17. This version is binary incompatible with earlier releases on CPython 3.7 and later. On Python 3.7 and above, the module ``gevent.contextvars`` is no longer monkey-patched into the standard library. contextvars are now both greenlet and asyncio task local. See :issue:`1656`. See :issue:`1674`. - The ``DummyThread`` objects created automatically by certain operations when the standard library threading module is monkey-patched now match the naming convention the standard library uses ("Dummy-12345"). Previously (since gevent 1.2a2) they used "DummyThread-12345". See :issue:`1659`. - Fix compatibility with dnspython 2. .. caution:: This currently means that it can be imported. But it cannot yet be used. gevent has a pinned dependency on dnspython < 2 for now. See :issue:`1661`. ---- 20.6.2 (2020-06-16) =================== Features -------- - It is now possible to build and use the embedded libuv on a Cygwin platform. Note that Cygwin is not an officially supported platform of upstream libuv and is not tested by gevent, so the actual working status is unknown, and this may bitrot in future releases. 
Thanks to berkakinci for the patch. See :issue:`1645`. Bugfixes -------- - Relax the version constraint for psutil on PyPy. Previously it was pinned to 5.6.3 for PyPy2, except for on Windows, where it was excluded. It is now treated the same as CPython again. See :issue:`1643`. ---- 20.6.1 (2020-06-10) =================== Features -------- - gevent's CI is now tested on Ubuntu 18.04 (Bionic), an upgrade from 16.04 (Xenial). See :issue:`1623`. Bugfixes -------- - On Python 2, the dnspython resolver can be used without having selectors2 installed. Previously, an ImportError would be raised. See :issue:`1641`. - Python 3 ``gevent.ssl.SSLSocket`` objects no longer attempt to catch ``ConnectionResetError`` and treat it the same as an ``SSLError`` with ``SSL_ERROR_EOF`` (typically by suppressing it). This was a difference from the way the standard library behaved (which is to raise the exception). It was added to gevent during early testing of OpenSSL 1.1 and TLS 1.3. See :issue:`1637`. ---- 20.6.0 (2020-06-06) =================== Features -------- - Add ``gevent.selectors`` containing ``GeventSelector``. This selector implementation uses gevent details to attempt to reduce overhead when polling many file descriptors, only some of which become ready at any given time. This is monkey-patched as ``selectors.DefaultSelector`` by default. This is available on Python 2 if the ``selectors2`` backport is installed. (This backport is installed automatically using the ``recommended`` extra.) When monkey-patching, ``selectors`` is made available as an alias to this module. See :issue:`1532`. - Depend on greenlet >= 0.4.16. This is required for CPython 3.9 and 3.10a0. See :issue:`1627`. - Add support for Python 3.9. No binary wheels are available yet, however. See :issue:`1628`. Bugfixes -------- - ``gevent.socket.create_connection`` and ``gevent.socket.socket.connect`` no longer ignore IPv6 scope IDs. Any IP address (IPv4 or IPv6) is no longer subject to an extra call to ``getaddrinfo``. Depending on the resolver in use, this is likely to change the number and order of greenlet switches. (On Windows, in particular test cases when there are no other greenlets running, it has been observed to lead to ``LoopExit`` in scenarios that didn't produce that before.) See :issue:`1634`. ---- 20.5.2 (2020-05-28) =================== Bugfixes -------- - Forking a process that had use the threadpool to run tasks that created their own hub would fail to clean up the threadpool by raising ``greenlet.error``. See :issue:`1631`. ---- 20.5.1 (2020-05-26) =================== Features -------- - Waiters on Event and Semaphore objects that call ``wait()`` or ``acquire()``, respectively, that find the Event already set, or the Semaphore available, no longer "cut in line" and run before any previously scheduled greenlets. They now run in the order in which they arrived, just as waiters that had to block in those methods do. See :issue:`1520`. - Update tested PyPy version from 7.3.0 to 7.3.1 on Linux. See :issue:`1569`. - Make ``zope.interface``, ``zope.event`` and (by extension) ``setuptools`` required dependencies. The ``events`` install extra now does nothing and will be removed in 2021. See :issue:`1619`. - Update bundled libuv from 1.36.0 to 1.38.0. See :issue:`1621`. - Update bundled c-ares from 1.16.0 to 1.16.1. On macOS, stop trying to adjust c-ares headers to make them universal. See :issue:`1624`. 
Bugfixes -------- - Make gevent locks that are monkey-patched usually work across native threads as well as across greenlets within a single thread. Locks that are only used in a single thread do not take a performance hit. While cross-thread locking is relatively expensive, and not a recommended programming pattern, it can happen unwittingly, for example when using the threadpool and ``logging``. Before, cross-thread lock uses might succeed, or, if the lock was contended, raise ``greenlet.error``. Now, in the contended case, if the lock has been acquired by the main thread at least once, it should correctly block in any thread, cooperating with the event loop of both threads. In certain (hopefully rare) cases, it might be possible for contended case to raise ``LoopExit`` when previously it would have raised ``greenlet.error``; if these cases are a practical concern, please open an issue. Also, the underlying Semaphore always behaves in an atomic fashion (as if the GIL was not released) when PURE_PYTHON is set. Previously, it only correctly did so on PyPy. See :issue:`1437`. - Rename gevent's C accelerator extension modules using a prefix to avoid clashing with other C extensions. See :issue:`1480`. - Using ``gevent.wait`` on an ``Event`` more than once, when that Event is already set, could previously raise an AssertionError. As part of this, exceptions raised in the main greenlet will now include a more complete traceback from the failing greenlet. See :issue:`1540`. - Avoid closing the same Python libuv watcher IO object twice. Under some circumstances (only seen on Windows), that could lead to program crashes. See :issue:`1587`. - gevent can now be built using Cython 3.0a5 and newer. The PyPI distribution uses this version. The libev extension was incompatible with this. As part of this, certain internal, undocumented names have been changed. (Technically, gevent can be built with Cython 3.0a2 and above. However, up through 3.0a4 compiling with Cython 3 results in gevent's test for memory leaks failing. See `this Cython issue `_.) See :issue:`1599`. - Destroying a hub after joining it didn't necessarily clean up all resources associated with the hub, especially if the hub had been created in a secondary thread that was exiting. The hub and its parent greenlet could be kept alive. Now, destroying a hub drops the reference to the hub and ensures it cannot be switched to again. (Though using a new blocking API call may still create a new hub.) Joining a hub also cleans up some (small) memory resources that might have stuck around for longer before as well. See :issue:`1601`. - Fix some potential crashes under libuv when using ``gevent.signal_handler``. The crashes were seen running the test suite and were non-deterministic. See :issue:`1606`. ---- 20.5.0 (2020-05-01) =================== Features -------- - Update bundled c-ares to version 1.16.0. `Changes `_. See :issue:`1588`. - Update all the bundled ``config.guess`` and ``config.sub`` scripts. See :issue:`1589`. - Update bundled libuv from 1.34.0 to 1.36.0. See :issue:`1597`. Bugfixes -------- - Use ``ares_getaddrinfo`` instead of a manual lookup. This requires c-ares 1.16.0. Note that this may change the results, in particular their order. As part of this, certain parts of the c-ares extension were adapted to use modern Cython idioms. A few minor errors and discrepancies were fixed as well, such as ``gethostbyaddr('localhost')`` working on Python 3 and failing on Python 2. 
The DNSpython resolver now raises the expected TypeError in more cases instead of an AttributeError. See :issue:`1012`. - The c-ares and DNSPython resolvers now raise exceptions much more consistently with the standard resolver. Types and errnos are substantially more likely to match what the standard library produces. Depending on the system and configuration, results may not match exactly, at least with DNSPython. There are still some rare cases where the system resolver can raise ``herror`` but DNSPython will raise ``gaierror`` or vice versa. There doesn't seem to be a deterministic way to account for this. On PyPy, ``getnameinfo`` can produce results when CPython raises ``socket.error``, and gevent's DNSPython resolver also raises ``socket.error``. In addition, several other small discrepancies were addressed, including handling of localhost and broadcast host names. .. note:: This has been tested on Linux (CentOS and Ubuntu), macOS, and Windows. It hasn't been tested on other platforms, so results are unknown for them. The c-ares support, in particular, is using some additional socket functions and defines. Please let the maintainers know if this introduces issues. See :issue:`1459`. ---- 20.04.0 (2020-04-22) ==================== Features -------- - Let CI (Travis and Appveyor) build and upload release wheels for Windows, macOS and manylinux. As part of this, (a subset of) gevent's tests can run if the standard library's ``test.support`` module has been stripped. See :issue:`1555`. - Update tested PyPy version from 7.2.0 on Windows to 7.3.1. See :issue:`1569`. Bugfixes -------- - Fix a spurious warning about watchers and resource leaks on libuv on Windows. Reported by Stéphane Rainville. See :issue:`1564`. - Make monkey-patching properly remove ``select.epoll`` and ``select.kqueue``. Reported by Kirill Smelkov. See :issue:`1570`. - Make it possible to monkey-patch :mod:`contextvars` before Python 3.7 if a non-standard backport that uses the same name as the standard library does is installed. Previously this would raise an error. Reported by Simon Davy. See :issue:`1572`. - Fix destroying the libuv default loop and then using the default loop again. See :issue:`1580`. - libuv loops that have watched children can now exit. Previously, the SIGCHLD watcher kept the loop alive even if there were no longer any watched children. See :issue:`1581`. Deprecations and Removals ------------------------- - PyPy no longer uses the Python allocation functions for libuv and libev allocations. See :issue:`1569`. Misc ---- - See :issue:`1367`. gevent-24.11.1/CONTRIBUTING.rst000066400000000000000000000040351471441230600156150ustar00rootroot00000000000000======================== Contributing to gevent ======================== Please see `contribution-guide.org `_ for general details on what we need from contributors. If you're filing a bug that needs a code example, please be sure it's a `Short, Self Contained, Correct, Example `_ Thanks! gevent-specific details ======================= For information on building gevent, and adding and updating test cases, see `the development documentation `_. There are a number of systems in place to help ensure gevent is of the highest possible quality: - A test suite is run for every push and pull request submitted. Github Actions is used to test on Linux and macOS, and `AppVeyor`_ runs the builds on Windows. Pull requests with tests that don't pass will be automatically failed. .. 
image:: https://github.com/gevent/gevent/workflows/gevent%20testing/badge.svg :target: https://github.com/gevent/gevent/actions .. image:: https://ci.appveyor.com/api/projects/status/q4kl21ng2yo2ixur?svg=true :target: https://ci.appveyor.com/project/denik/gevent - Builds on Github Actions automatically submit updates to `coveralls.io`_ to monitor test coverage. Pull requests that don't feature adequate test coverage will be automatically failed. .. image:: https://coveralls.io/repos/gevent/gevent/badge.svg?branch=master&service=github :target: https://coveralls.io/github/gevent/gevent?branch=master - Github Actions builds also run `pylint `_ to enforce code quality conventions (PEP8 compliance and the like). .. _coveralls.io: https://coveralls.io/github/gevent/gevent .. _AppVeyor: https://ci.appveyor.com/project/denik/gevent Pull requests that don't pass those checks will be automatically failed. But don't worry, it's all about context. Most of the time failing checks are easy to fix, and occasionally a PR will be accepted even with failing checks to be fixed by the maintainers. gevent-24.11.1/LICENSE000066400000000000000000000023231471441230600141570ustar00rootroot00000000000000MIT License Except when otherwise stated (look at the beginning of each file) the software and the documentation in this project are copyrighted by: Denis Bilenko and the contributors, http://www.gevent.org Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
gevent-24.11.1/MANIFEST.in000066400000000000000000000043111471441230600147070ustar00rootroot00000000000000recursive-include src/greentest * recursive-include examples * recursive-include src/gevent * recursive-include docs * recursive-include deps * include LICENSE include NOTICE include README.rst include CONTRIBUTING.rst include TODO include changelog.rst include MANIFEST.in include AUTHORS include Makefile.ext include known_failures.py include *.yml include *.txt include _setup*.py include CHANGES.rst include pyproject.toml include .coveragerc include .coveragerc-pypy include tox.ini include .pep8 include .pylintrc recursive-include .github *.md recursive-include benchmarks *.sh *.py recursive-include appveyor *.cmd recursive-include appveyor *.ps1 recursive-include scripts *.sh *.py include scripts/releases/make-manylinux ### Artifacts of configuring/building in place # These we want, they come from the Makefile step #- recursive-exclude gevent corecext.pyx *.c *.h # This we want if we're on PyPy it's moved there ahead of time # by setup.py #- prune gevent/libev prune */__pycache__ global-exclude *.so global-exclude *.o global-exclude *.lo global-exclude *.la global-exclude .dirstamp global-exclude config.log config.status config.cache prune docs/_build global-exclude *.pyc recursive-exclude src/greentest .coverage prune src/greentest/htmlcov recursive-exclude deps/c-ares stamp-h? ares_build.h.orig # This is the output of _corecffi_build.py and may be particular # to each CFFI version/platform recursive-exclude src/gevent _corecffi.c exclude configure-output exclude configure-output.txt exclude deps/TAGS exclude deps/libev/configure-output.txt exclude deps/c-ares/ares_build.h exclude deps/c-ares/ares_config.h exclude deps/c-ares/libcares.pc exclude deps/c-ares/libtool exclude deps/c-ares/Makefile prune deps/c-ares/.deps prune deps/c-ares/.libs prune deps/libev/.deps prune deps/libev/.libs recursive-exclude deps/libev Makefile libtool stamp-h? config.h prune deps/libuv/.deps prune deps/libuv/.libs prune deps/libuv/src/.deps prune deps/libuv/src/unix/.deps prune deps/libuv/src/win/.deps prune deps/libuv/test/.deps prune deps/libuv/autom4te.cache prune deps/libuv/m4 recursive-exclude deps/libuv Makefile Makefile.in ar-lib aclocal.m4 compile configure depcomp install-sh libtool libuv.pc ltmain.sh missing gevent-24.11.1/NOTICE000066400000000000000000000076441471441230600140710ustar00rootroot00000000000000gevent is licensed under the MIT license. See the LICENSE file for the complete license. Portions of this software may have other licenses. ============================================= greentest/2.7 greentest/2.7.8 greentest/2.7pypy greentest/3.3 greentest/3.4 greentest/3.5 ----------------- Copyright (c) 2001-2016 Python Software Foundation; All Rights Reserved PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 -------------------------------------------- 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation. 2. 
Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001-2016 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python. 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement. ============================================ gevent/libuv/_corecffi_source.c gevent/libuv/_corecffi_cdef.c Originally based on code from https://github.com/veegee/guv Copyright (c) 2014 V G Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. =========================================== gevent-24.11.1/README.rst000066400000000000000000000017261471441230600146470ustar00rootroot00000000000000======== gevent ======== .. image:: https://github.com/gevent/gevent/workflows/gevent%20testing/badge.svg :target: https://github.com/gevent/gevent/actions .. 
image:: https://ci.appveyor.com/api/projects/status/bqxl88yhpho223jg?svg=true :target: https://ci.appveyor.com/project/denik/gevent .. image:: https://coveralls.io/repos/gevent/gevent/badge.svg?branch=master&service=github :target: https://coveralls.io/github/gevent/gevent?branch=master .. include:: docs/_about.rst Read the documentation online at http://www.gevent.org. Post issues on the `bug tracker`_, discuss and ask open ended questions on the `mailing list`_, and find announcements and information on the blog_ and `twitter (@gevent)`_. .. include:: docs/install.rst .. _bug tracker: https://github.com/gevent/gevent/issues .. _mailing list: http://groups.google.com/group/gevent .. _blog: https://dev.nextthought.com/blog/categories/gevent.html .. _twitter (@gevent): http://twitter.com/gevent gevent-24.11.1/TODO000066400000000000000000000001071471441230600136400ustar00rootroot00000000000000The issue tracker is hosted at https://github.com/gevent/gevent/issues gevent-24.11.1/_setupares.py000066400000000000000000000105201471441230600156740ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ setup helpers for c-ares. """ from __future__ import print_function, absolute_import, division import os import os.path import shutil import sys from _setuputils import Extension import distutils.sysconfig # to get CFLAGS to pass into c-ares configure script pylint:disable=import-error from _setuputils import WIN from _setuputils import quoted_dep_abspath from _setuputils import system from _setuputils import should_embed from _setuputils import LIBRARIES from _setuputils import DEFINE_MACROS from _setuputils import glob_many from _setuputils import dep_abspath from _setuputils import RUNNING_ON_CI from _setuputils import RUNNING_FROM_CHECKOUT from _setuputils import cythonize1 from _setuputils import get_include_dirs CARES_EMBED = should_embed('c-ares') # See #616, trouble building for a 32-bit python on a 64-bit platform # (Linux). _distutils_cflags = distutils.sysconfig.get_config_var("CFLAGS") or '' cflags = _distutils_cflags + ((' ' + os.environ['CFLAGS']) if os.environ.get("CFLAGS") else '') cflags = ('CFLAGS="%s"' % (cflags,)) if cflags else '' # Use -r, not -e, for support of old solaris. See # https://github.com/gevent/gevent/issues/777 ares_configure_command = ' '.join([ "(cd ", quoted_dep_abspath('c-ares'), " && if [ -r include/ares_build.h ]; then cp include/ares_build.h include/ares_build.h.orig; fi ", " && sh ./configure --disable-dependency-tracking --disable-tests -C " + cflags, " && cp src/lib/ares_config.h include/ares_build.h \"$OLDPWD\" ", " && cat include/ares_build.h ", " && if [ -r include/ares_build.h.orig ]; then mv include/ares_build.h.orig include/ares_build.h; fi)", "> configure-output.txt" ]) if 'GEVENT_MANYLINUX' in os.environ: # Assumes that c-ares is pre-configured. 
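    # Note: the check above is a bare ``'GEVENT_MANYLINUX' in os.environ``
    # test, so the variable merely has to be present (any value, even an
    # empty one) for the pre-configured path to be taken.  For example (the
    # exact value and command here are illustrative assumptions, not part of
    # this file), a manylinux build driver might run:
    #
    #   GEVENT_MANYLINUX=1 python -m pip wheel .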
ares_configure_command = '(echo preconfigured) > configure-output.txt' def configure_ares(bext, ext): print("Embedding c-ares", bext, ext) bdir = os.path.join(bext.build_temp, 'c-ares', 'include') ext.include_dirs.insert(0, bdir) print("Inserted ", bdir, "in include dirs", ext.include_dirs) if not os.path.isdir(bdir): os.makedirs(bdir) if WIN: src = "deps\\c-ares\\include\\ares_build.h.dist" dest = os.path.join(bdir, "ares_build.h") print("Copying %r to %r" % (src, dest)) shutil.copy(src, dest) return cwd = os.getcwd() os.chdir(bdir) try: if os.path.exists('ares_config.h') and os.path.exists('ares_build.h'): return try: system(ares_configure_command) except: with open('configure-output.txt', 'r') as t: print(t.read(), file=sys.stderr) raise finally: os.chdir(cwd) ARES = Extension( name='gevent.resolver.cares', sources=[ 'src/gevent/resolver/cares.pyx' ], include_dirs=get_include_dirs(*( [os.path.join(dep_abspath('c-ares'), 'include'), os.path.join(dep_abspath('c-ares'), 'src', 'lib')] if CARES_EMBED else [])), libraries=list(LIBRARIES), define_macros=list(DEFINE_MACROS), depends=glob_many( 'src/gevent/resolver/cares_*.[ch]') ) ares_required = RUNNING_ON_CI and RUNNING_FROM_CHECKOUT ARES.optional = not ares_required if CARES_EMBED: ARES.sources += glob_many('deps/c-ares/src/lib/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/dsa/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/str/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/record/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/util/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/event/*.c') ARES.sources += glob_many('deps/c-ares/src/lib/legacy/*.c') ARES.configure = configure_ares if WIN: ARES.libraries += ['advapi32'] ARES.define_macros += [('CARES_STATICLIB', '')] else: ARES.define_macros += [('HAVE_CONFIG_H', '')] if sys.platform != 'darwin': ARES.libraries += ['rt'] else: # libresolv dependency introduced in # c-ares 1.16.1. ARES.libraries += ['resolv'] ARES.define_macros += [('CARES_EMBED', '1')] else: ARES.libraries.append('cares') ARES.define_macros += [('HAVE_NETDB_H', '')] ARES.configure = lambda bext, ext: print("c-ares not embedded, not configuring", bext, ext) ARES = cythonize1(ARES) gevent-24.11.1/_setuplibev.py000066400000000000000000000112251471441230600160460ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ setup helpers for libev. Importing this module should have no side-effects; in particular, it shouldn't attempt to cythonize anything. """ from __future__ import print_function, absolute_import, division import os.path from _setuputils import Extension from _setuputils import system from _setuputils import dep_abspath from _setuputils import quoted_dep_abspath from _setuputils import WIN from _setuputils import LIBRARIES from _setuputils import DEFINE_MACROS from _setuputils import glob_many from _setuputils import should_embed from _setuputils import get_include_dirs LIBEV_EMBED = should_embed('libev') # Configure libev in place libev_configure_command = ' '.join([ "(cd ", quoted_dep_abspath('libev'), " && sh ./configure -C > configure-output.txt", ")", ]) def configure_libev(build_command=None, extension=None): # pylint:disable=unused-argument # build_command is an instance of ConfiguringBuildExt. # extension is an instance of the setuptools Extension object. # # This is invoked while `build_command` is in the middle of its `run()` # method. 
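    # (For context: ``build_extension()`` later in this file attaches this
    # function to the libev extension as ``CORE.configure``, and
    # ``ConfiguringBuildExt.gevent_prepare()`` in ``_setuputils.py`` calls
    # ``ext.configure(self, ext)`` for any extension that defines a
    # ``configure`` attribute, just before that extension is compiled.)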
# Both of these arguments are unused here so that we can use this function # both from a build command and from libev/_corecffi_build.py if WIN: return libev_path = dep_abspath('libev') config_path = os.path.join(libev_path, 'config.h') if os.path.exists(config_path): print("Not configuring libev, 'config.h' already exists") return system(libev_configure_command) def build_extension(): # Return the un-cythonized extension. # This can be used to access things like `libraries` and `include_dirs` # and `define_macros` so we DRY. include_dirs = get_include_dirs() include_dirs.append(os.path.abspath(os.path.join('src', 'gevent', 'libev'))) if LIBEV_EMBED: include_dirs.append(dep_abspath('libev')) CORE = Extension(name='gevent.libev.corecext', sources=[ 'src/gevent/libev/corecext.pyx', 'src/gevent/libev/callbacks.c', ], include_dirs=include_dirs, libraries=list(LIBRARIES), define_macros=list(DEFINE_MACROS), depends=glob_many('src/gevent/libev/callbacks.*', 'src/gevent/libev/stathelper.c', 'src/gevent/libev/libev*.h', 'deps/libev/*.[ch]')) # While we don't actually use periodic watchers, # on Windows we need to enable them to work around an issue # in libev 4.33 where ``have_monotonic`` is not defined. EV_PERIODIC_ENABLE = "0" if WIN: CORE.define_macros.append(('EV_STANDALONE', '1')) EV_PERIODIC_ENABLE = "1" # QQQ libev can also use -lm, however it seems to be added implicitly if LIBEV_EMBED: CORE.define_macros += [ ('LIBEV_EMBED', '1'), # we don't use void* data in the cython implementation; # the CFFI implementation does and removes this line. ('EV_COMMON', ''), # libev watchers that we don't use currently: ('EV_CLEANUP_ENABLE', '0'), ('EV_EMBED_ENABLE', '0'), ("EV_PERIODIC_ENABLE", EV_PERIODIC_ENABLE), # Time keeping. If possible, use the realtime and/or monotonic # clocks. On Linux, this can reduce the number of observable syscalls. # On older linux, such as the version in manylinux2010, this requires # linking to lib rt. We handle this in make-manylinux. Newer versions # generally don't need that. ("EV_USE_REALTIME", "1"), ("EV_USE_MONOTONIC", "1"), # use the builtin floor() function. Every modern platform should # have this, right? ("EV_USE_FLOOR", "1"), ] CORE.configure = configure_libev if os.environ.get('GEVENTSETUP_EV_VERIFY') is not None: CORE.define_macros.append( ('EV_VERIFY', os.environ['GEVENTSETUP_EV_VERIFY'])) # EV_VERIFY is implemented using assert(), which only works if # NDEBUG is *not* defined. distutils likes to define NDEBUG by default, # meaning that we get no verification in embedded mode. Since that's the # most common testing configuration, that's not good. CORE.undef_macros.append('NDEBUG') else: CORE.define_macros += [('LIBEV_EMBED', '0')] CORE.libraries.append('ev') CORE.configure = lambda *args: print("libev not embedded, not configuring") return CORE gevent-24.11.1/_setuputils.py000066400000000000000000000456651471441230600161240ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ gevent build utilities. 
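This module collects the helpers shared by the ``_setup*.py`` build scripts:
environment-variable driven configuration (the ``GEVENTSETUP_*`` family of
keys), include-directory discovery, small process and file helpers for running
``configure`` scripts, the ``cythonize1`` wrapper, and the
``ConfiguringBuildExt`` and ``GeventClean`` command classes defined below.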
""" from __future__ import print_function, absolute_import, division import re import os import os.path import sys import sysconfig from distutils import sysconfig as dist_sysconfig from subprocess import check_call from glob import glob from setuptools import Extension as _Extension from setuptools.command.build_ext import build_ext THIS_DIR = os.path.dirname(__file__) ## Exported configurations PYPY = hasattr(sys, 'pypy_version_info') WIN = sys.platform.startswith('win') PY311 = sys.version_info[:2] >= (3, 11) PY312 = sys.version_info[:2] >= (3, 12) RUNNING_ON_TRAVIS = os.environ.get('TRAVIS') RUNNING_ON_APPVEYOR = os.environ.get('APPVEYOR') RUNNING_ON_GITHUB_ACTIONS = os.environ.get('GITHUB_ACTIONS') RUNNING_ON_CI = RUNNING_ON_TRAVIS or RUNNING_ON_APPVEYOR or RUNNING_ON_GITHUB_ACTIONS RUNNING_FROM_CHECKOUT = os.path.isdir(os.path.join(THIS_DIR, ".git")) LIBRARIES = [] DEFINE_MACROS = [] if WIN: LIBRARIES += ['ws2_32'] DEFINE_MACROS += [('FD_SETSIZE', '1024'), ('_WIN32', '1')] ### File handling def quoted_abspath(*segments): return '"' + os.path.abspath(os.path.join(*segments)) + '"' def read(*names): """Read a file path relative to this file.""" with open(os.path.join(THIS_DIR, *names)) as f: return f.read() def read_version(name="src/gevent/__init__.py"): contents = read(name) version = re.search(r"__version__\s*=\s*'(.*)'", contents, re.M).group(1) assert version, "could not read version" return version def dep_abspath(depname, *extra): return os.path.abspath(os.path.join('deps', depname, *extra)) def quoted_dep_abspath(depname): return quoted_abspath(dep_abspath(depname)) def glob_many(*globs): """ Return a list of all the glob patterns expanded. """ result = [] for pattern in globs: result.extend(glob(pattern)) return sorted(result) ## Configuration # Environment variables that are intended to be used outside of our own # CI should be documented in ``installing_from_source.rst``. # They should all begin with ``GEVENTSETUP_`` def bool_from_environ(key): value = os.environ.get(key) if not value: return value = value.lower().strip() if value in ('1', 'true', 'on', 'yes'): return True if value in ('0', 'false', 'off', 'no'): return False raise ValueError('Environment variable %r has invalid value %r. ' 'Please set it to 1, 0 or an empty string' % (key, value)) def _check_embed(key, defkey, path=None, warn=False): """ Find a boolean value, configured in the environment at *key* or *defkey* (typically, *defkey* will be shared by several calls). If those don't exist, then check for the existence of *path* and return that (if path is given) """ value = bool_from_environ(key) if value is None: value = bool_from_environ(defkey) if value is not None: if warn: print("Warning: gevent setup: legacy environment key %s or %s found" % (key, defkey)) return value return os.path.exists(path) if path is not None else None def should_embed(dep_name): """ Check the configuration for the dep_name and see if it should be embedded. Environment keys are derived from the dep name: libev becomes GEVENTSETUP_EMBED_LIBEV and c-ares becomes GEVENTSETUP_EMBED_CARES. 
""" path = dep_abspath(dep_name) normal_dep_key = dep_name.replace('-', '').upper() default_key = 'GEVENTSETUP_EMBED' dep_key = default_key + '_' + normal_dep_key result = _check_embed(dep_key, default_key) if result is not None: return result # Not defined, check legacy settings, and fallback to the path legacy_default_key = 'EMBED' legacy_dep_key = normal_dep_key + '_' + legacy_default_key return _check_embed(legacy_dep_key, legacy_default_key, path, warn=True) ## Headers def get_include_dirs(*extra_paths): """ Return additional include directories that might be needed to compile extensions. Specifically, we need the greenlet.h header in many of our extensions. """ # setuptools will put the normal include directory for Python.h on the # include path automatically. We don't want to override that with # a different Python.h if we can avoid it: On older versions of Python, # that can cause issues with debug builds (see https://github.com/gevent/gevent/issues/1461) # so order matters here. # # sysconfig.get_path('include') will return the path to the main include # directory. In a virtual environment, that's a symlink to the main # Python installation include directory: # sysconfig.get_path('include') -> /path/to/venv/include/python3.8 # /path/to/venv/include/python3.7 -> /pythondir/include/python3.8 # # distutils.sysconfig.get_python_inc() returns the main Python installation # include directory: # distutils.sysconfig.get_python_inc() -> /pythondir/include/python3.8 # # Neither sysconfig dir is not enough if we're in a virtualenv; the greenlet.h # header goes into a site/ subdir. See https://github.com/pypa/pip/issues/4610 dist_inc_dir = os.path.abspath(dist_sysconfig.get_python_inc()) # 1 sys_inc_dir = os.path.abspath(sysconfig.get_path("include")) # 2 venv_include_dir = os.path.join( sys.prefix, 'include', 'site', 'python' + sysconfig.get_python_version() ) venv_include_dir = os.path.abspath(venv_include_dir) # If we're installed via buildout, and buildout also installs # greenlet, we have *NO* access to greenlet.h at all. So include # our own copy as a fallback. dep_inc_dir = os.path.abspath('deps') # 3 return [ p for p in (dist_inc_dir, sys_inc_dir, dep_inc_dir) + extra_paths if os.path.exists(p) ] ## Processes def _system(cmd, cwd=None, env=None, **kwargs): sys.stdout.write('Running %r in %s\n' % (cmd, cwd or os.getcwd())) sys.stdout.flush() if 'shell' not in kwargs: kwargs['shell'] = True env = env or os.environ.copy() return check_call(cmd, cwd=cwd, env=env, **kwargs) def system(cmd, cwd=None, env=None, **kwargs): if _system(cmd, cwd=cwd, env=env, **kwargs): sys.exit(1) ### # Cython ### COMMON_UTILITY_INCLUDE_DIR = "src/gevent/_generated_include" # Based on code from # http://cython.readthedocs.io/en/latest/src/reference/compilation.html#distributing-cython-modules def _dummy_cythonize(extensions, **_kwargs): for extension in extensions: sources = [] for sfile in extension.sources: path, ext = os.path.splitext(sfile) if ext in ('.pyx', '.py'): ext = '.c' sfile = path + ext sources.append(sfile) extension.sources[:] = sources return extensions try: from Cython.Build import cythonize except ImportError: # The .c files had better already exist. cythonize = _dummy_cythonize def cythonize1(ext): # All the directories we have .pxd files # and .h files that are included regardless of # embed settings. standard_include_paths = [ 'src/gevent', 'src/gevent/libev', 'src/gevent/resolver', # This is for generated include files; see below. 
'.', ] if PY311: # The "fast" code is Cython for manipulating # exceptions is, unfortunately, broken, at least in 3.0.2. # The implementation of __Pyx__GetException() doesn't properly set # tstate->current_exception when it normalizes exceptions, # causing assertion errors. # This definitely seems to be a problem on 3.12, and MAY # be a problem on 3.11 (#1985) ext.define_macros.append(('CYTHON_FAST_THREAD_STATE', '0')) try: new_ext = cythonize( [ext], include_path=standard_include_paths, annotate=True, compiler_directives={ 'language_level': '3str', 'always_allow_keywords': False, 'infer_types': True, 'nonecheck': False, }, # XXX: Cython developers say: "Please use C macros instead # of Pyrex defines. Taking this kind of decision based on # the runtime environment of the build is wrong, it needs # to be taken at C compile time." # # They also say, "The 'IF' statement is deprecated and # will be removed in a future Cython version. Consider # using runtime conditions or C macros instead. See # https://github.com/cython/cython/issues/4310" # # And: " The 'DEF' statement is deprecated and will be # removed in a future Cython version. Consider using # global variables, constants, and in-place literals # instead." #compile_time_env={ # #}, # The common_utility_include_dir (not well documented) # causes Cython to emit separate files for much of the # static support code. Each of the modules then includes # the static files they need. They have hash names based # on digest of all the relevant compiler directives, # including those set here and those set in the file. It's # worth monitoring to be sure that we don't start to get # divergent copies; make sure files declare the same # options. # # The value used here must be listed in the above ``include_path``, # and included in sdists. Files will be included based on this # full path, so its parent directory, ``.``, must be on the runtime # include path. common_utility_include_dir=COMMON_UTILITY_INCLUDE_DIR, # The ``cache`` argument is not well documented, but causes Cython to # cache to disk some intermediate results. In the past, this was # incompatible with ``common_utility_include_dir``, but not anymore. # However, it seems to only function on posix (it spawns ``du``). # It doesn't seem to buy us much speed, and results in a bunch of # ResourceWarnings about unclosed files. # cache="build/cycache", )[0] except ValueError: # 'invalid literal for int() with base 10: '3str' # This is seen when an older version of Cython is installed. # It's a bit of a chicken-and-egg, though, because installing # from dev-requirements first scans this egg for its requirements # before doing any updates. import traceback traceback.print_exc() new_ext = _dummy_cythonize([ext])[0] for optional_attr in ('configure', 'optional'): if hasattr(ext, optional_attr): setattr(new_ext, optional_attr, getattr(ext, optional_attr)) new_ext.extra_compile_args.extend(IGNORE_THIRD_PARTY_WARNINGS) new_ext.include_dirs.extend(standard_include_paths) return new_ext # A tuple of arguments to add to ``extra_compile_args`` # to ignore warnings from third-party code we can't do anything # about. IGNORE_THIRD_PARTY_WARNINGS = () if sys.platform == 'darwin': # macos, or other platforms using clang # (TODO: How to detect clang outside those platforms?) IGNORE_THIRD_PARTY_WARNINGS += ( # If clang is old and doesn't support the warning, these # are ignored, albeit not silently. # The first two are all over the place from Cython. 
'-Wno-unreachable-code', '-Wno-deprecated-declarations', # generic, started with some xcode update '-Wno-incompatible-sysroot', # libuv '-Wno-tautological-compare', '-Wno-implicit-function-declaration', # libev '-Wno-unused-value', '-Wno-macro-redefined', ) ## Distutils extensions class BuildFailed(Exception): pass from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError # pylint:disable=no-name-in-module,import-error ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError, IOError) class ConfiguringBuildExt(build_ext): # CFFI subclasses this class with its own, that overrides run() # and invokes a `pre_run` method, if defined. The run() method is # called only once from setup.py (this class is only instantiated # once per invocation of setup()); run() in turn calls # `build_extension` for every defined extension. # For extensions we control, we let them define a `configure` # callable attribute, and we invoke that before building. But we # can't control the Extension object that CFFI creates. The best # we can do is provide a global hook that we can invoke in pre_run(). gevent_pre_run_actions = () @classmethod def gevent_add_pre_run_action(cls, action): # Actions should be idempotent. cls.gevent_pre_run_actions += (action,) def finalize_options(self): # Setting parallel to true can break builds when we need to configure # embedded libraries, which we do by changing directories. If that # happens while we're compiling, we may not be able to find source code. build_ext.finalize_options(self) def gevent_prepare(self, ext): configure = getattr(ext, 'configure', None) if configure: configure(self, ext) def build_extension(self, ext): self.gevent_prepare(ext) try: return build_ext.build_extension(self, ext) except ext_errors: if getattr(ext, 'optional', False): raise BuildFailed() raise def pre_run(self, *_args): # Called only from CFFI. # With mulitple extensions, this probably gets called multiple # times. for action in self.gevent_pre_run_actions: action() class Extension(_Extension): # This class has a few functions: # # 1. Make pylint happy in terms of attributes we use. # 2. Add default arguments, often platform specific. def __init__(self, *args, **kwargs): self.libraries = [] self.define_macros = [] # Python 2 has this as an old-style class for some reason # so super() doesn't work. _Extension.__init__(self, *args, **kwargs) # pylint:disable=no-member,non-parent-init-called from distutils.command.clean import clean # pylint:disable=no-name-in-module,import-error from distutils import log # pylint:disable=no-name-in-module from distutils.dir_util import remove_tree # pylint:disable=no-name-in-module,import-error class GeventClean(clean): BASE_GEVENT_SRC = os.path.join('src', 'gevent') def __find_directories_in(self, top, named=None): """ Iterate directories, beneath and including *top* ignoring '.' entries. """ for dirpath, dirnames, _ in os.walk(top): # Modify dirnames in place to prevent walk from # recursing into hidden directories. 
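            # (``os.walk`` is walking top-down here, so it only honors
            # changes made to the very list object it yielded; that is why
            # the slice assignment below is used rather than rebinding
            # ``dirnames`` to a new list.)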
dirnames[:] = [x for x in dirnames if not x.startswith('.')] for dirname in dirnames: if named is None or named == dirname: yield os.path.join(dirpath, dirname) def __glob_under(self, base, file_pat): return glob_many( os.path.join(base, file_pat), *(os.path.join(x, file_pat) for x in self.__find_directories_in(base))) def __remove_dirs(self, remove_file): dirs_to_remove = [ 'htmlcov', '.eggs', COMMON_UTILITY_INCLUDE_DIR, ] if self.all: dirs_to_remove += [ # tox '.tox', # instal.sh for pyenv '.runtimes', # Built wheels from manylinux 'wheelhouse', # Doc build os.path.join('.', 'docs', '_build'), ] dir_finders = [ # All python cache dirs (self.__find_directories_in, '.', '__pycache__'), ] for finder in dir_finders: func = finder[0] args = finder[1:] dirs_to_remove.extend(func(*args)) for f in sorted(dirs_to_remove): remove_file(f) def run(self): clean.run(self) if self.dry_run: def remove_file(f): if os.path.isdir(f): remove_tree(f, dry_run=self.dry_run) elif os.path.exists(f): log.info("Would remove '%s'", f) else: def remove_file(f): if os.path.isdir(f): remove_tree(f, dry_run=self.dry_run) elif os.path.exists(f): log.info("Removing '%s'", f) os.remove(f) # Remove directories first before searching for individual files self.__remove_dirs(remove_file) def glob_gevent(file_path): return glob(os.path.join(self.BASE_GEVENT_SRC, file_path)) def glob_gevent_and_under(file_pat): return self.__glob_under(self.BASE_GEVENT_SRC, file_pat) def glob_root_and_under(file_pat): return self.__glob_under('.', file_pat) files_to_remove = [ '.coverage', # One-off cython-generated code that doesn't # follow a globbale-pattern os.path.join(self.BASE_GEVENT_SRC, 'libev', 'corecext.c'), os.path.join(self.BASE_GEVENT_SRC, 'libev', 'corecext.h'), os.path.join(self.BASE_GEVENT_SRC, 'resolver', 'cares.c'), os.path.join(self.BASE_GEVENT_SRC, 'resolver', 'cares.c'), ] def dep_configure_artifacts(dep): for f in ( 'config.h', 'config.log', 'config.status', 'config.cache', 'configure-output.txt', '.libs' ): yield os.path.join('deps', dep, f) file_finders = [ # The base gevent directory contains # only generated .c code. Remove it. (glob_gevent, "*.c"), # Any .html files found in the gevent directory # are the result of Cython annotations. Remove them. (glob_gevent_and_under, "*.html"), # Any compiled binaries have to go (glob_gevent_and_under, "*.so"), (glob_gevent_and_under, "*.pyd"), (glob_root_and_under, "*.o"), # Compiled python files too (glob_gevent_and_under, "*.pyc"), (glob_gevent_and_under, "*.pyo"), # Artifacts of building dependencies in place (dep_configure_artifacts, 'libev'), (dep_configure_artifacts, 'libuv'), (dep_configure_artifacts, 'c-ares'), ] for func, pat in file_finders: files_to_remove.extend(func(pat)) for f in sorted(files_to_remove): remove_file(f) gevent-24.11.1/appveyor.yml000066400000000000000000000172671471441230600155570ustar00rootroot00000000000000clone_depth: 50 max_jobs: 8 shallow_clone: true build: parallel: true verbosity: minimal image: Visual Studio 2022 environment: global: APPVEYOR_SAVE_CACHE_ON_ERROR: "true" # SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the # /E:ON and /V:ON options are not enabled in the batch script interpreter # See: http://stackoverflow.com/a/13751649/163740 CMD_IN_ENV: "cmd /E:ON /V:ON /C .\\appveyor\\run_with_env.cmd" # Use a fixed hash seed for reproducability PYTHONHASHSEED: 8675309 # Disable tests that use external network resources; # too often we get failures to resolve DNS names or failures # to connect on AppVeyor. 
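    # (The value is consumed by gevent's own test runner; a leading "-" is
    # understood as *removing* the named resource from the default set,
    # rather than enabling it.)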
GEVENTTEST_USE_RESOURCES: "-network" # Don't get warnings about Python 2 support being deprecated. We # know. PIP_NO_PYTHON_VERSION_WARNING: 1 PIP_UPGRADE_STRATEGY: eager # Stay on the legacy implementation of editable installs. # Recent versions of setuptools has changed how editable installs work, # which broke Appveyor builds. # For a relevant upstream issue, see https://github.com/pypa/setuptools/issues/3557 SETUPTOOLS_ENABLE_FEATURES: "legacy-editable" # Enable this if debugging a resource leak. Otherwise # it slows things down. # PYTHONTRACEMALLOC: 10 ## # Upload settings for twine. TWINE_USERNAME: "__token__" TWINE_PASSWORD: secure: uXZ6Juhz2hElaTsaJ2Hnemm+YoYbjpkoT5NFFlj4xxSlZvUrjoiOdvPqxxCaNYozWIRM5QmXlj1nOF8nZDpzx7oAyVIMT2x3z9iI0C/G5r4G8uvbJJq6wpJRI5HQ3sE39qLK2MCPZJ3BTu/uvVgWWqQ6wInKXxNqDGyf9IgZOv3/sCd4CwD7bEqlwHzyeh9a2o17a5J1YMhL03LVRcrlmjN8/Ds642FtnF/e+VAhUdtZvU1ze8rfeR7KCe4ehOmy18dh5joPX8TJKbg/AJlIYQ== matrix: # http://www.appveyor.com/docs/installed-software#python # Fully supported 64-bit versions, with testing. This should be # all the current (non EOL) versions. - PYTHON: "C:\\Python313-x64" PYTHON_VERSION: "3.13.0" PYTHON_ARCH: "64" PYTHON_EXE: python - PYTHON: "C:\\Python312-x64" PYTHON_VERSION: "3.12.0b" PYTHON_ARCH: "64" PYTHON_EXE: python # 64-bit - PYTHON: "C:\\Python311-x64" PYTHON_VERSION: "3.11.0" PYTHON_ARCH: "64" PYTHON_EXE: python # TODO: What's the latest pypy? # - PYTHON: "C:\\pypy3.7-v7.3.7-win64" # PYTHON_ID: "pypy3" # PYTHON_EXE: pypy3w # PYTHON_VERSION: "3.7.x" # PYTHON_ARCH: "64" # APPVEYOR_BUILD_WORKER_IMAGE: Visual Studio 2019 - PYTHON: "C:\\Python310-x64" PYTHON_VERSION: "3.10.0" PYTHON_ARCH: "64" PYTHON_EXE: python - PYTHON: "C:\\Python39-x64" PYTHON_VERSION: "3.9.x" PYTHON_ARCH: "64" PYTHON_EXE: python # 32-bit, wheel only (no testing) - PYTHON: "C:\\Python39" PYTHON_VERSION: "3.9.x" PYTHON_ARCH: "32" PYTHON_EXE: python GWHEEL_ONLY: true # Also test a Python version not pre-installed # See: https://github.com/ogrisel/python-appveyor-demo/issues/10 # - PYTHON: "C:\\Python266" # PYTHON_VERSION: "2.6.6" # PYTHON_ARCH: "32" # PYTHON_EXE: python # matrix: # allow_failures: # - PYTHON_ID: "pypy" install: # If there is a newer build queued for the same PR, cancel this one. # The AppVeyor 'rollout builds' option is supposed to serve the same # purpose but it is problematic because it tends to cancel builds pushed # directly to master instead of just PR builds (or the converse). # credits: JuliaLang developers. - ps: if ($env:APPVEYOR_PULL_REQUEST_NUMBER -and $env:APPVEYOR_BUILD_NUMBER -ne ((Invoke-RestMethod ` https://ci.appveyor.com/api/projects/$env:APPVEYOR_ACCOUNT_NAME/$env:APPVEYOR_PROJECT_SLUG/history?recordsNumber=50).builds | ` Where-Object pullRequestId -eq $env:APPVEYOR_PULL_REQUEST_NUMBER)[0].buildNumber) { ` throw "There are newer queued builds for this pull request, failing early." } - ECHO "Filesystem root:" - ps: "ls \"C:/\"" - ECHO "Installed SDKs:" - ps: "if(Test-Path(\"C:/Program Files/Microsoft SDKs/Windows\")) {ls \"C:/Program Files/Microsoft SDKs/Windows\";}" # Install Python (from the official .msi of http://python.org) and pip when # not already installed. 
# PyPy portion based on https://github.com/wbond/asn1crypto/blob/master/appveyor.yml - ps: $env:PYTMP = "${env:TMP}\py"; if (!(Test-Path "$env:PYTMP")) { New-Item -ItemType directory -Path "$env:PYTMP" | Out-Null; } if ("${env:PYTHON_ID}" -eq "pypy") { if (!(Test-Path "${env:PYTMP}\pypy2-v7.3.6-win64.zip")) { (New-Object Net.WebClient).DownloadFile('https://downloads.python.org/pypy/pypy2.7-v7.3.6-win64.zip', "${env:PYTMP}\pypy2-v7.3.6-win64.zip"); } 7z x -y "${env:PYTMP}\pypy2-v7.3.6-win64.zip" -oC:\ | Out-Null; } elseif ("${env:PYTHON_ID}" -eq "pypy3") { if (!(Test-Path "${env:PYTMP}\pypy3.7-v7.3.7-win64.zip")) { (New-Object Net.WebClient).DownloadFile("https://downloads.python.org/pypy/pypy3.7-v7.3.7-win64.zip", "${env:PYTMP}\pypy3.7-v7.3.7-win64.zip"); } 7z x -y "${env:PYTMP}\pypy3.7-v7.3.7-win64.zip" -oC:\ | Out-Null; } elseif (-not(Test-Path($env:PYTHON))) { & appveyor\install.ps1; } # Prepend newly installed Python to the PATH of this build (this cannot be # done from inside the powershell script as it would require to restart # the parent CMD process). - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PYTHON%\\bin;%PATH%" - "SET PYEXE=%PYTHON%\\%PYTHON_EXE%.exe" # Check that we have the expected version and architecture for Python - "%PYEXE% --version" - "%PYEXE% -c \"import struct; print(struct.calcsize('P') * 8)\"" # Upgrade to the latest version of pip to avoid it displaying warnings # about it being out of date. Do this here instead of above in # powershell because the annoying 'DEPRECATION:blahblahblah 2.7 blahblahblah' # breaks powershell. - "%CMD_IN_ENV% %PYEXE% -mensurepip -U --user" - "%CMD_IN_ENV% %PYEXE% -mpip install -U --user pip" - ps: "if(Test-Path(\"${env:PYTHON}\\bin\")) {ls ${env:PYTHON}\\bin;}" - ps: "if(Test-Path(\"${env:PYTHON}\\Scripts\")) {ls ${env:PYTHON}\\Scripts;}" cache: - "%TMP%\\py\\" - '%LOCALAPPDATA%\pip\Cache -> appveyor.yml,setup.py' build_script: # Build the compiled extension - "%CMD_IN_ENV% %PYEXE% -m pip install -U wheel" - "%CMD_IN_ENV% %PYEXE% -m pip install -U setuptools" - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -m pip install -U -e .[test] test_script: # Run the project tests - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -c "import greenlet; print(greenlet, greenlet.__version__)" - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -c "import gevent.core; print(gevent.core.loop)" - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -c "import gevent; print(gevent.config.settings['resolver'].get_options())" - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -c "from gevent._compat import get_clock_info; print(get_clock_info('perf_counter'))" - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -mgevent.tests.known_failures - if not "%GWHEEL_ONLY%"=="true" %PYEXE% -mgevent.tests --second-chance --config known_failures.py after_test: # pycparser can't be built correctly in an isolated environment. # See # https://ci.appveyor.com/project/denik/gevent/builds/23810605/job/83aw4u67artt002b#L602 # So we violate DRY and repeate some requirements in order to use # --no-build-isolation - "%CMD_IN_ENV% %PYEXE% -m pip install wheel cython setuptools cffi twine" - "%CMD_IN_ENV% %PYEXE% -m pip wheel --no-build-isolation . -w dist" - ps: "ls dist" artifacts: # Archive the generated wheel package in the ci.appveyor.com build report. 
- path: dist\gevent*whl deploy_script: - ps: if ($env:APPVEYOR_REPO_TAG -eq $TRUE) { twine upload --skip-existing dist/gevent* } deploy: on gevent-24.11.1/appveyor/000077500000000000000000000000001471441230600150175ustar00rootroot00000000000000gevent-24.11.1/appveyor/install.ps1000066400000000000000000000160331471441230600171150ustar00rootroot00000000000000# Sample script to install Python and pip under Windows # Authors: Olivier Grisel, Jonathan Helmus, Kyle Kastner, and Alex Willmer # License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/ $MINICONDA_URL = "http://repo.continuum.io/miniconda/" $BASE_URL = "https://www.python.org/ftp/python/" $GET_PIP_URL = "https://bootstrap.pypa.io/get-pip.py" $GET_PIP_PATH = "C:\get-pip.py" $PYTHON_PRERELEASE_REGEX = @" (?x) (?\d+) \. (?\d+) \. (?\d+) (?[a-z]{1,2}\d+) "@ function Download ($filename, $url) { $webclient = New-Object System.Net.WebClient $basedir = $pwd.Path + "\" $filepath = $basedir + $filename if (Test-Path $filename) { Write-Host "Reusing" $filepath return $filepath } # Download and retry up to 3 times in case of network transient errors. Write-Host "Downloading" $filename "from" $url $retry_attempts = 2 for ($i = 0; $i -lt $retry_attempts; $i++) { try { $webclient.DownloadFile($url, $filepath) break } Catch [Exception]{ Start-Sleep 1 } } if (Test-Path $filepath) { Write-Host "File saved at" $filepath } else { # Retry once to get the error message if any at the last try $webclient.DownloadFile($url, $filepath) } return $filepath } function ParsePythonVersion ($python_version) { if ($python_version -match $PYTHON_PRERELEASE_REGEX) { return ([int]$matches.major, [int]$matches.minor, [int]$matches.micro, $matches.prerelease) } $version_obj = [version]$python_version return ($version_obj.major, $version_obj.minor, $version_obj.build, "") } function DownloadPython ($python_version, $platform_suffix) { $major, $minor, $micro, $prerelease = ParsePythonVersion $python_version if (($major -le 2 -and $micro -eq 0) ` -or ($major -eq 3 -and $minor -le 2 -and $micro -eq 0) ` ) { $dir = "$major.$minor" $python_version = "$major.$minor$prerelease" } else { $dir = "$major.$minor.$micro" } if ($prerelease) { if (($major -le 2) ` -or ($major -eq 3 -and $minor -eq 1) ` -or ($major -eq 3 -and $minor -eq 2) ` -or ($major -eq 3 -and $minor -eq 3) ` ) { $dir = "$dir/prev" } } if (($major -le 2) -or ($major -le 3 -and $minor -le 4)) { $ext = "msi" if ($platform_suffix) { $platform_suffix = ".$platform_suffix" } } else { $ext = "exe" if ($platform_suffix) { $platform_suffix = "-$platform_suffix" } } $filename = "python-$python_version$platform_suffix.$ext" $url = "$BASE_URL$dir/$filename" $filepath = Download $filename $url return $filepath } function InstallPython ($python_version, $architecture, $python_home) { Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home if (Test-Path $python_home) { Write-Host $python_home "already exists, skipping." 
return $false } if ($architecture -eq "32") { $platform_suffix = "" } else { $platform_suffix = "amd64" } $installer_path = DownloadPython $python_version $platform_suffix $installer_ext = [System.IO.Path]::GetExtension($installer_path) Write-Host "Installing $installer_path to $python_home" $install_log = $python_home + ".log" if ($installer_ext -eq '.msi') { InstallPythonMSI $installer_path $python_home $install_log } else { InstallPythonEXE $installer_path $python_home $install_log } if (Test-Path $python_home) { Write-Host "Python $python_version ($architecture) installation complete" } else { Write-Host "Failed to install Python in $python_home" Get-Content -Path $install_log Exit 1 } } function InstallPythonEXE ($exepath, $python_home, $install_log) { $install_args = "/quiet InstallAllUsers=1 TargetDir=$python_home" RunCommand $exepath $install_args } function InstallPythonMSI ($msipath, $python_home, $install_log) { $install_args = "/qn /log $install_log /i $msipath TARGETDIR=$python_home" $uninstall_args = "/qn /x $msipath" RunCommand "msiexec.exe" $install_args if (-not(Test-Path $python_home)) { Write-Host "Python seems to be installed else-where, reinstalling." RunCommand "msiexec.exe" $uninstall_args RunCommand "msiexec.exe" $install_args } } function RunCommand ($command, $command_args) { Write-Host $command $command_args Start-Process -FilePath $command -ArgumentList $command_args -Wait -Passthru } function InstallPip ($python_home) { $pip_path = $python_home + "\Scripts\pip.exe" $python_path = $python_home + "\python.exe" if (-not(Test-Path $pip_path)) { Write-Host "Installing pip..." $webclient = New-Object System.Net.WebClient $webclient.DownloadFile($GET_PIP_URL, $GET_PIP_PATH) Write-Host "Executing:" $python_path $GET_PIP_PATH & $python_path $GET_PIP_PATH } else { Write-Host "pip already installed." } } function DownloadMiniconda ($python_version, $platform_suffix) { if ($python_version -eq "3.4") { $filename = "Miniconda3-3.5.5-Windows-" + $platform_suffix + ".exe" } else { $filename = "Miniconda-3.5.5-Windows-" + $platform_suffix + ".exe" } $url = $MINICONDA_URL + $filename $filepath = Download $filename $url return $filepath } function InstallMiniconda ($python_version, $architecture, $python_home) { Write-Host "Installing Python" $python_version "for" $architecture "bit architecture to" $python_home if (Test-Path $python_home) { Write-Host $python_home "already exists, skipping." return $false } if ($architecture -eq "32") { $platform_suffix = "x86" } else { $platform_suffix = "x86_64" } $filepath = DownloadMiniconda $python_version $platform_suffix Write-Host "Installing" $filepath "to" $python_home $install_log = $python_home + ".log" $args = "/S /D=$python_home" Write-Host $filepath $args Start-Process -FilePath $filepath -ArgumentList $args -Wait -Passthru if (Test-Path $python_home) { Write-Host "Python $python_version ($architecture) installation complete" } else { Write-Host "Failed to install Python in $python_home" Get-Content -Path $install_log Exit 1 } } function InstallMinicondaPip ($python_home) { $pip_path = $python_home + "\Scripts\pip.exe" $conda_path = $python_home + "\Scripts\conda.exe" if (-not(Test-Path $pip_path)) { Write-Host "Installing pip..." $args = "install --yes pip" Write-Host $conda_path $args Start-Process -FilePath "$conda_path" -ArgumentList $args -Wait -Passthru } else { Write-Host "pip already installed." 
} } function main () { InstallPython $env:PYTHON_VERSION $env:PYTHON_ARCH $env:PYTHON InstallPip $env:PYTHON } main gevent-24.11.1/appveyor/run_with_env.cmd000066400000000000000000000064461471441230600202250ustar00rootroot00000000000000:: To build extensions for 64 bit Python 3, we need to configure environment :: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 4 (SDK v7.1) :: :: To build extensions for 64 bit Python 2, we need to configure environment :: variables to use the MSVC 2008 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 3.5 (SDK v7.0) :: :: 32 bit builds, and 64-bit builds for 3.5 and beyond, do not require specific :: environment configurations. :: :: Note: this script needs to be run with the /E:ON and /V:ON flags for the :: cmd interpreter, at least for (SDK v7.0) :: :: More details at: :: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows :: http://stackoverflow.com/a/13751649/163740 :: :: Author: Olivier Grisel :: License: CC0 1.0 Universal: http://creativecommons.org/publicdomain/zero/1.0/ :: :: Notes about batch files for Python people: :: :: Quotes in values are literally part of the values: :: SET FOO="bar" :: FOO is now five characters long: " b a r " :: If you don't want quotes, don't include them on the right-hand side. :: :: The CALL lines at the end of this file look redundant, but if you move them :: outside of the IF clauses, they do not run properly in the SET_SDK_64==Y :: case, I don't know why. @ECHO OFF SET COMMAND_TO_RUN=%* SET WIN_SDK_ROOT=C:\Program Files\Microsoft SDKs\Windows SET WIN_WDK=c:\Program Files (x86)\Windows Kits\10\Include\wdf :: Extract the major and minor versions, and allow for the minor version to be :: more than 9. This requires the version number to have two dots in it. SET MAJOR_PYTHON_VERSION=%PYTHON_VERSION:~0,1% IF "%PYTHON_VERSION:~3,1%" == "." ( SET MINOR_PYTHON_VERSION=%PYTHON_VERSION:~2,1% ) ELSE ( SET MINOR_PYTHON_VERSION=%PYTHON_VERSION:~2,2% ) :: Based on the Python version, determine what SDK version to use, and whether :: to set the SDK for 64-bit. 
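:: (Summary of the checks below: Python 2.x uses SDK v7.0 and Python 3.x uses SDK
:: v7.1; the special 64-bit SDK environment is only applied for Python 2.x and
:: 3.0-3.4, while later versions fall through to the default MSVC setup.)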
IF %MAJOR_PYTHON_VERSION% == 2 ( SET WINDOWS_SDK_VERSION="v7.0" SET SET_SDK_64=Y ) ELSE ( IF %MAJOR_PYTHON_VERSION% == 3 ( SET WINDOWS_SDK_VERSION="v7.1" IF %MINOR_PYTHON_VERSION% LEQ 4 ( SET SET_SDK_64=Y ) ELSE ( SET SET_SDK_64=N IF EXIST "%WIN_WDK%" ( :: See: https://connect.microsoft.com/VisualStudio/feedback/details/1610302/ REN "%WIN_WDK%" 0wdf ) ) ) ELSE ( ECHO Unsupported Python version: "%MAJOR_PYTHON_VERSION%" EXIT 1 ) ) IF %PYTHON_ARCH% == 64 ( IF %SET_SDK_64% == Y ( ECHO Configuring Windows SDK %WINDOWS_SDK_VERSION% for Python %MAJOR_PYTHON_VERSION% on a 64 bit architecture SET DISTUTILS_USE_SDK=1 SET MSSdk=1 "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Setup\WindowsSdkVer.exe" -q -version:%WINDOWS_SDK_VERSION% "%WIN_SDK_ROOT%\%WINDOWS_SDK_VERSION%\Bin\SetEnv.cmd" /x64 /release ECHO Executing: %COMMAND_TO_RUN% call %COMMAND_TO_RUN% || EXIT 1 ) ELSE ( ECHO Using default MSVC build environment for 64 bit architecture ECHO Executing: %COMMAND_TO_RUN% call %COMMAND_TO_RUN% || EXIT 1 ) ) ELSE ( ECHO Using default MSVC build environment for 32 bit architecture ECHO Executing: %COMMAND_TO_RUN% call %COMMAND_TO_RUN% || EXIT 1 ) gevent-24.11.1/benchmarks/000077500000000000000000000000001471441230600152675ustar00rootroot00000000000000gevent-24.11.1/benchmarks/.gitignore000066400000000000000000000000321471441230600172520ustar00rootroot00000000000000data/*.json data/*/*.json gevent-24.11.1/benchmarks/bench_dns_resolver.py000066400000000000000000000063211471441230600215070ustar00rootroot00000000000000from __future__ import absolute_import, print_function, division # Best run with dnsmasq configured as a caching nameserver # with no timeouts and configured to point there via # /etc/resolv.conf and GEVENT_RESOLVER_NAMESERVERS # Remember to use --inherit-environ to make that work! # dnsmasq -d --cache-size=100000 --local-ttl=1000000 --neg-ttl=10000000 # --max-ttl=100000000 --min-cache-ttl=10000000000 --no-poll --auth-ttl=100000000000 from gevent import monkey; monkey.patch_all() import sys import socket import perf import gevent from zope.dottedname.resolve import resolve as drresolve blacklist = { 22, 55, 68, 69, 72, 52, 94, 62, 54, 71, 73, 74, 34, 36, 83, 86, 79, 81, 98, 99, 120, 130, 152, 161, 165, 169, 172, 199, 205, 239, 235, 254, 256, 286, 299, 259, 229, 190, 185, 182, 173, 160, 158, 153, 139, 138, 131, 129, 127, 125, 116, 112, 110, 106, } RUN_COUNT = 15 if hasattr(sys, 'pypy_version_info') else 5 def quiet(f, n): try: f(n) except socket.gaierror: pass def resolve_seq(res, count=10, begin=0): for index in range(begin, count + begin): if index in blacklist: continue try: res.gethostbyname('x%s.com' % index) except socket.gaierror: pass def resolve_par(res, count=10, begin=0): gs = [] for index in range(begin, count + begin): if index in blacklist: continue gs.append(gevent.spawn(quiet, res.gethostbyname, 'x%s.com' % index)) gevent.joinall(gs) N = 300 def run_all(resolver_name, resolve): res = drresolve('gevent.resolver.' + resolver_name + '.Resolver') res = res() # dnspython looks up cname aliases by default, but c-ares does not. # dnsmasq can only cache one address with a given cname at a time, # and many of our addresses clash on that, so dnspython is put at a # severe disadvantage. We turn that off here. 
res._getaliases = lambda hostname, family: [] if N > 150: # 150 is the max concurrency in dnsmasq count = N // 3 resolve(res, count=count) resolve(res, count=count, begin=count) resolve(res, count=count, begin=count * 2) else: resolve(res, count=N) def main(): def worker_cmd(cmd, args): cmd.extend(args.benchmark) runner = perf.Runner(processes=5, values=3, add_cmdline_args=worker_cmd) all_names = 'dnspython', 'blocking', 'ares', 'thread' runner.argparser.add_argument('benchmark', nargs='*', default='all', choices=all_names + ('all',)) args = runner.parse_args() if 'all' in args.benchmark or args.benchmark == 'all': args.benchmark = ['all'] names = all_names else: names = args.benchmark for name in names: runner.bench_func(name + ' sequential', run_all, name, resolve_seq, inner_loops=N) runner.bench_func(name + ' parallel', run_all, name, resolve_par, inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_get_memory.py000066400000000000000000000052541471441230600211550ustar00rootroot00000000000000""" Benchmarking for getting the memoryview of an object. https://github.com/gevent/gevent/issues/1318 """ from __future__ import print_function # pylint:disable=unidiomatic-typecheck try: xrange except NameError: xrange = range try: buffer except NameError: buffer = memoryview import perf from gevent._greenlet_primitives import get_memory as cy_get_memory def get_memory_gevent14(data): try: mv = memoryview(data) if mv.shape: return mv # No shape, probably working with a ctypes object, # or something else exotic that supports the buffer interface return mv.tobytes() except TypeError: # fixes "python2.7 array.array doesn't support memoryview used in # gevent.socket.send" issue # (http://code.google.com/p/gevent/issues/detail?id=94) return buffer(data) def get_memory_is(data): try: mv = memoryview(data) if type(data) is not memoryview else data if mv.shape: return mv # No shape, probably working with a ctypes object, # or something else exotic that supports the buffer interface return mv.tobytes() except TypeError: # fixes "python2.7 array.array doesn't support memoryview used in # gevent.socket.send" issue # (http://code.google.com/p/gevent/issues/detail?id=94) return buffer(data) def get_memory_inst(data): try: mv = memoryview(data) if not isinstance(data, memoryview) else data if mv.shape: return mv # No shape, probably working with a ctypes object, # or something else exotic that supports the buffer interface return mv.tobytes() except TypeError: # fixes "python2.7 array.array doesn't support memoryview used in # gevent.socket.send" issue # (http://code.google.com/p/gevent/issues/detail?id=94) return buffer(data) N = 100 DATA = { 'bytestring': b'abc123', 'bytearray': bytearray(b'abc123'), 'memoryview': memoryview(b'abc123'), } def test(loops, func, arg): t0 = perf.perf_counter() for __ in range(loops): for _ in xrange(N): func(arg) return perf.perf_counter() - t0 def main(): runner = perf.Runner() for func, name in ( (get_memory_gevent14, 'gevent14-py'), (cy_get_memory, 'inst-cy'), (get_memory_inst, 'inst-py'), (get_memory_is, 'is-py'), ): for arg_name, arg in DATA.items(): runner.bench_time_func( '%s - %s' % (name, arg_name), test, func, arg, inner_loops=N ) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_hub.py000066400000000000000000000047741471441230600175720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for hub primitive operations. 
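Covers plain greenlet switches through the hub, waiting on an already-ready
watcher, cancelling waits, and hub.wait() across many linkable objects.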
""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import pyperf as perf from pyperf import perf_counter import gevent from greenlet import greenlet from greenlet import getcurrent N = 1000 def bench_switch(): class Parent(type(gevent.get_hub())): def run(self): parent = self.parent for _ in range(N): parent.switch() def child(): parent = getcurrent().parent # Back to the hub, which in turn goes # back to the main greenlet for _ in range(N): parent.switch() hub = Parent(None, None) child_greenlet = greenlet(child, hub) for _ in range(N): child_greenlet.switch() def bench_wait_ready(): class Watcher(object): def start(self, cb, obj): # Immediately switch back to the waiter, mark as ready cb(obj) def stop(self): pass watcher = Watcher() hub = gevent.get_hub() for _ in range(1000): hub.wait(watcher) def bench_cancel_wait(): class Watcher(object): active = True callback = object() def close(self): pass watcher = Watcher() hub = gevent.get_hub() loop = hub.loop for _ in range(1000): # Schedule all the callbacks. hub.cancel_wait(watcher, None, True) # Run them! for cb in loop._callbacks: if cb.callback: cb.callback(*cb.args) cb.stop() # so the real loop won't do it # destroy the loop so we don't keep building these functions # up hub.destroy(True) def bench_wait_func_ready(): from gevent.hub import wait class ToWatch(object): def rawlink(self, cb): cb(self) watched_objects = [ToWatch() for _ in range(N)] t0 = perf_counter() wait(watched_objects) return perf_counter() - t0 def main(): runner = perf.Runner() runner.bench_func('multiple wait ready', bench_wait_func_ready, inner_loops=N) runner.bench_func('wait ready', bench_wait_ready, inner_loops=N) runner.bench_func('cancel wait', bench_cancel_wait, inner_loops=N) runner.bench_func('switch', bench_switch, inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_local.py000066400000000000000000000036011471441230600200720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for thread locals. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import perf from gevent.local import local as glocal from threading import local as nlocal class GLocalSub(glocal): pass class NativeSub(nlocal): pass benchmarks = [] def _populate(l): for i in range(10): setattr(l, 'attr' + str(i), i) def bench_getattr(loops, local): t0 = perf.perf_counter() for _ in range(loops): # pylint:disable=pointless-statement local.attr0 local.attr1 local.attr2 local.attr3 local.attr4 local.attr5 local.attr6 local.attr7 local.attr8 local.attr9 return perf.perf_counter() - t0 def bench_setattr(loops, local): t0 = perf.perf_counter() for _ in range(loops): local.attr0 = 0 local.attr1 = 1 local.attr2 = 2 local.attr3 = 3 local.attr4 = 4 local.attr5 = 5 local.attr6 = 6 local.attr7 = 7 local.attr8 = 8 local.attr9 = 9 return perf.perf_counter() - t0 def main(): runner = perf.Runner() for name, obj in (('gevent', glocal()), ('gevent sub', GLocalSub()), ('native', nlocal()), ('native sub', NativeSub())): _populate(obj) benchmarks.append( runner.bench_time_func('getattr ' + name, bench_getattr, obj, inner_loops=10)) benchmarks.append( runner.bench_time_func('setattr ' + name, bench_setattr, obj, inner_loops=10)) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_pool.py000066400000000000000000000005001471441230600177440ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for greenlet pool. 
""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import gevent.pool import bench_threadpool bench_threadpool.ThreadPool = gevent.pool.Pool if __name__ == '__main__': bench_threadpool.main() gevent-24.11.1/benchmarks/bench_queue.py000066400000000000000000000050331471441230600201250ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for gevent.queue """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import pyperf as perf import gevent from gevent import queue N = 1000 def _b_no_block(q): for i in range(N): q.put(i) for i in range(N): j = q.get() assert i == j, (i, j) def bench_unbounded_queue_noblock(kind=queue.UnboundQueue): _b_no_block(kind()) def bench_bounded_queue_noblock(kind=queue.Queue): _b_no_block(kind(N + 1)) def bench_bounded_queue_block(kind=queue.Queue, hub=False): q = kind(1) def get(): for i in range(N): j = q.get() assert i == j return "Finished" # Run putters in the main greenlet g = gevent.spawn(get) if not hub: for i in range(N): q.put(i) else: # putters in the hub def put(): assert gevent.getcurrent() is gevent.get_hub() for i in range(N): q.put(i) h = gevent.get_hub() h.loop.run_callback(put) h.join() g.join() assert g.value == 'Finished' def main(): runner = perf.Runner() runner.bench_func('bench_unbounded_queue_noblock', bench_unbounded_queue_noblock, inner_loops=N) runner.bench_func('bench_bounded_queue_noblock', bench_bounded_queue_noblock, inner_loops=N) runner.bench_func('bench_bounded_queue_block', bench_bounded_queue_block, inner_loops=N) runner.bench_func('bench_channel', bench_bounded_queue_block, queue.Channel, inner_loops=N) runner.bench_func('bench_bounded_queue_block_hub', bench_bounded_queue_block, queue.Queue, True, inner_loops=N) runner.bench_func('bench_channel_hub', bench_bounded_queue_block, queue.Channel, True, inner_loops=N) runner.bench_func('bench_unbounded_priority_queue_noblock', bench_unbounded_queue_noblock, queue.PriorityQueue, inner_loops=N) runner.bench_func('bench_bounded_priority_queue_noblock', bench_bounded_queue_noblock, queue.PriorityQueue, inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_sendall.py000066400000000000000000000021431471441230600204220ustar00rootroot00000000000000#! /usr/bin/env python from __future__ import print_function, division, absolute_import import perf from gevent import socket from gevent.server import StreamServer def recvall(sock, _): while sock.recv(4096): pass N = 10 runs = [] def benchmark(conn, data): spent_total = 0 for _ in range(N): start = perf.perf_counter() conn.sendall(data) spent = perf.perf_counter() - start spent_total += spent runs.append(spent_total) return spent_total def main(): runner = perf.Runner() server = StreamServer(("127.0.0.1", 0), recvall) server.start() MB = 1024 * 1024 length = 50 * MB data = b"x" * length conn = socket.create_connection((server.server_host, server.server_port)) runner.bench_func('sendall', benchmark, conn, data, inner_loops=N) conn.close() server.stop() if runs: total = sum(runs) avg = total / len(runs) # This is really only true if the perf_counter counts in seconds time print("~ %.2f MB/s" % (length * N / avg / MB)) if __name__ == "__main__": main() gevent-24.11.1/benchmarks/bench_sleep0.py000066400000000000000000000020121471441230600201630ustar00rootroot00000000000000""" Benchmarking sleep(0) performance. 
""" from __future__ import print_function import perf try: xrange except NameError: xrange = range N = 100 def test(loops, sleep, arg): t0 = perf.perf_counter() for __ in range(loops): for _ in xrange(N): sleep(arg) return perf.perf_counter() - t0 def bench_gevent(loops, arg): from gevent import sleep from gevent import setswitchinterval setswitchinterval(1000) return test(loops, sleep, arg) def bench_eventlet(loops, arg): from eventlet import sleep return test(loops, sleep, arg) def main(): runner = perf.Runner() for arg in (0, -1, 0.00001, 0.001): runner.bench_time_func('gevent sleep(%s)' % (arg,), bench_gevent, arg, inner_loops=N) runner.bench_time_func('eventlet sleep(%s)' % (arg,), bench_eventlet, arg, inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_socket.py000066400000000000000000000073101471441230600202710ustar00rootroot00000000000000#! /usr/bin/env python """ Basic socket benchmarks. """ from __future__ import print_function, division, absolute_import import os import sys import perf import gevent from gevent import socket as gsocket import socket import threading def recvall(sock, _): while sock.recv(4096): pass N = 10 MB = 1024 * 1024 length = 50 * MB BIG_DATA = b"x" * length SMALL_DATA = b'x' * 1000 def _sendto(loops, conn, data, to_send=None): addr = ('127.0.0.1', 55678) spent_total = 0 sent = 0 to_send = len(data) if to_send is None else to_send for __ in range(loops): for _ in range(N): start = perf.perf_counter() while sent < to_send: sent += conn.sendto(data, 0, addr) spent = perf.perf_counter() - start spent_total += spent return spent_total def _sendall(loops, conn, data): start = perf.perf_counter() for __ in range(loops): for _ in range(N): conn.sendall(data) taken = perf.perf_counter() - start conn.close() return taken def bench_native_udp(loops): conn = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: return _sendto(loops, conn, SMALL_DATA, len(BIG_DATA)) finally: conn.close() def bench_gevent_udp(loops): conn = gsocket.socket(socket.AF_INET, socket.SOCK_DGRAM) try: return _sendto(loops, conn, SMALL_DATA, len(BIG_DATA)) finally: conn.close() def _do_sendall(loops, send, recv): for s in send, recv: os.set_inheritable(s.fileno(), True) pid = os.fork() if not pid: send.close() recvall(recv, None) recv.close() sys.exit() return 0 else: try: return _sendall(loops, send, BIG_DATA) finally: send.close() recv.close() def bench_native_thread_default_socketpair(loops): send, recv = socket.socketpair() t = threading.Thread(target=recvall, args=(recv, None)) t.daemon = True t.start() return _sendall(loops, send, BIG_DATA) def bench_gevent_greenlet_default_socketpair(loops): send, recv = gsocket.socketpair() gevent.spawn(recvall, recv, None) return _sendall(loops, send, BIG_DATA) def bench_gevent_forked_socketpair(loops): send, recv = gsocket.socketpair() return _do_sendall(loops, send, recv) def bench_native_forked_socketpair(loops): send, recv = socket.socketpair() return _do_sendall(loops, send, recv) def main(): if '--profile' in sys.argv: import cProfile import pstats import io pr = cProfile.Profile() pr.enable() for _ in range(2): bench_gevent_forked_socketpair(2) pr.disable() s = io.StringIO() sortby = 'cumulative' ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) return runner = perf.Runner() runner.bench_time_func( 'gevent socketpair sendall greenlet', bench_gevent_greenlet_default_socketpair, inner_loops=N) runner.bench_time_func( 'native socketpair sendall thread', 
bench_native_thread_default_socketpair, inner_loops=N) runner.bench_time_func( 'gevent socketpair sendall fork', bench_gevent_forked_socketpair, inner_loops=N) runner.bench_time_func( 'native socketpair sendall fork', bench_native_forked_socketpair, inner_loops=N) runner.bench_time_func( 'native udp sendto', bench_native_udp, inner_loops=N) runner.bench_time_func( 'gevent udp sendto', bench_gevent_udp, inner_loops=N) if __name__ == "__main__": main() gevent-24.11.1/benchmarks/bench_spawn.py000066400000000000000000000134511471441230600201340ustar00rootroot00000000000000""" Benchmarking spawn() performance. """ from __future__ import print_function from __future__ import absolute_import from __future__ import division from pyperf import perf_counter from pyperf import Runner try: xrange except NameError: xrange = range N = 10000 counter = 0 def incr(**_kwargs): global counter counter += 1 def noop(_p): pass class Options(object): # TODO: Add back an argument for that eventlet_hub = None loops = None def __init__(self, sleep, join, **kwargs): self.kwargs = kwargs self.sleep = sleep self.join = join class Times(object): def __init__(self, spawn_duration, sleep_duration=-1, join_duration=-1): self.spawn_duration = spawn_duration self.sleep_duration = sleep_duration self.join_duration = join_duration def _test(spawn, sleep, options): global counter counter = 0 before_spawn = perf_counter() for _ in xrange(N): spawn(incr, **options.kwargs) spawn_duration = perf_counter() - before_spawn if options.sleep: assert counter == 0, counter before_sleep = perf_counter() sleep(0) sleep_duration = perf_counter() - before_sleep assert counter == N, (counter, N) else: sleep_duration = -1 if options.join: before_join = perf_counter() options.join() join_duration = perf_counter() - before_join else: join_duration = -1 return Times(spawn_duration, sleep_duration, join_duration) def test(spawn, sleep, options): all_times = [ _test(spawn, sleep, options) for _ in xrange(options.loops) ] spawn_duration = sum(x.spawn_duration for x in all_times) sleep_duration = sum(x.sleep_duration for x in all_times) join_duration = sum(x.sleep_duration for x in all_times if x != -1) return Times(spawn_duration, sleep_duration, join_duration) def bench_none(options): from time import sleep options.sleep = False def spawn(f, **kwargs): return f(**kwargs) return test(spawn, sleep, options) def bench_gevent(options): from gevent import spawn, sleep return test(spawn, sleep, options) def bench_geventraw(options): from gevent import sleep, spawn_raw return test(spawn_raw, sleep, options) def bench_geventpool(options): from gevent import sleep from gevent.pool import Pool p = Pool() if options.join: options.join = p.join times = test(p.spawn, sleep, options) return times try: __import__('eventlet') except ImportError: pass else: def bench_eventlet(options): from eventlet import spawn, sleep if options.eventlet_hub is not None: from eventlet.hubs import use_hub use_hub(options.eventlet_hub) return test(spawn, sleep, options) def all(): result = [x for x in globals() if x.startswith('bench_') and x != 'bench_all'] result.sort() result = [x.replace('bench_', '') for x in result] return result def main(argv=None): import os import sys if argv is None: argv = sys.argv[1:] env_options = [ '--inherit-environ', ','.join([k for k in os.environ if k.startswith(('GEVENT', 'PYTHON', 'ZS', # experimental zodbshootout config 'RS', # relstorage config 'COVERAGE'))])] # This is a default, so put it early argv[0:0] = env_options def worker_cmd(cmd, 
args): cmd.extend(args.benchmark) runner = Runner(add_cmdline_args=worker_cmd) runner.argparser.add_argument('benchmark', nargs='*', default='all', choices=all() + ['all']) def spawn_time(loops, func, options): options.loops = loops times = func(options) return times.spawn_duration def sleep_time(loops, func, options): options.loops = loops times = func(options) return times.sleep_duration def join_time(loops, func, options): options.loops = loops times = func(options) return times.join_duration args = runner.parse_args(argv) if 'all' in args.benchmark or args.benchmark == 'all': args.benchmark = ['all'] names = all() else: names = args.benchmark names = sorted(set(names)) for name in names: runner.bench_time_func(name + ' spawn', spawn_time, globals()['bench_' + name], Options(sleep=False, join=False), inner_loops=N) if name != 'none': runner.bench_time_func(name + ' sleep', sleep_time, globals()['bench_' + name], Options(sleep=True, join=False), inner_loops=N) if 'geventpool' in names: runner.bench_time_func('geventpool join', join_time, bench_geventpool, Options(sleep=True, join=True), inner_loops=N) for name in names: runner.bench_time_func(name + ' spawn kwarg', spawn_time, globals()['bench_' + name], Options(sleep=False, join=False, foo=1, bar='hello'), inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_subprocess.py000066400000000000000000000030541471441230600211720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for thread locals. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import perf from gevent import subprocess as gsubprocess import subprocess as nsubprocess N = 10 def _bench_spawn(module, loops, close_fds=True): total = 0 for _ in range(loops): t0 = perf.perf_counter() procs = [module.Popen('/usr/bin/true', close_fds=close_fds) for _ in range(N)] t1 = perf.perf_counter() for p in procs: p.communicate() p.poll() total += (t1 - t0) return total def bench_spawn_native(loops, close_fds=True): return _bench_spawn(nsubprocess, loops, close_fds) def bench_spawn_gevent(loops, close_fds=True): return _bench_spawn(gsubprocess, loops, close_fds) def main(): runner = perf.Runner() runner.bench_time_func('spawn native no close_fds', bench_spawn_native, False, inner_loops=N) runner.bench_time_func('spawn gevent no close_fds', bench_spawn_gevent, False, inner_loops=N) runner.bench_time_func('spawn native close_fds', bench_spawn_native, inner_loops=N) runner.bench_time_func('spawn gevent close_fds', bench_spawn_gevent, inner_loops=N) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_threadpool.py000066400000000000000000000047721471441230600211530ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for thread pool. 
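Times apply and spawn/get on a single-worker pool, plus map, imap, and
imap_unordered with both one worker and PAR_COUNT parallel workers.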
""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import pyperf as perf from gevent.threadpool import ThreadPool try: xrange = xrange except NameError: xrange = range def noop(): "Does nothing" def identity(i): return i PAR_COUNT = 5 N = 20 def bench_apply(loops): pool = ThreadPool(1) t0 = perf.perf_counter() for _ in xrange(loops): for _ in xrange(N): pool.apply(noop) pool.join() pool.kill() return perf.perf_counter() - t0 def bench_spawn_wait(loops): pool = ThreadPool(1) t0 = perf.perf_counter() for _ in xrange(loops): for _ in xrange(N): r = pool.spawn(noop) r.get() pool.join() pool.kill() return perf.perf_counter() - t0 def _map(pool, pool_func, loops): data = [1] * N t0 = perf.perf_counter() # Must collect for imap to finish for _ in xrange(loops): list(pool_func(identity, data)) pool.join() pool.kill() return perf.perf_counter() - t0 def _ppool(): pool = ThreadPool(PAR_COUNT) pool.size = PAR_COUNT return pool def bench_map_seq(loops): pool = ThreadPool(1) return _map(pool, pool.map, loops) def bench_map_par(loops): pool = _ppool() return _map(pool, pool.map, loops) def bench_imap_seq(loops): pool = ThreadPool(1) return _map(pool, pool.imap, loops) def bench_imap_par(loops): pool = _ppool() return _map(pool, pool.imap, loops) def bench_imap_un_seq(loops): pool = ThreadPool(1) return _map(pool, pool.imap_unordered, loops) def bench_imap_un_par(loops): pool = _ppool() return _map(pool, pool.imap_unordered, loops) def main(): runner = perf.Runner() runner.bench_time_func('imap_unordered_seq', bench_imap_un_seq) runner.bench_time_func('imap_unordered_par', bench_imap_un_par) runner.bench_time_func('imap_seq', bench_imap_seq) runner.bench_time_func('imap_par', bench_imap_par) runner.bench_time_func('map_seq', bench_map_seq) runner.bench_time_func('map_par', bench_map_par) runner.bench_time_func('apply', bench_apply) runner.bench_time_func('spawn', bench_spawn_wait) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/bench_tracer.py000066400000000000000000000044061471441230600202640ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Benchmarks for gevent.queue """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import contextlib import perf import greenlet import gevent from gevent import _tracer as monitor N = 1000 @contextlib.contextmanager def tracer(cls, *args): inst = cls(*args) try: yield finally: inst.kill() def _run(loops): duration = 0 for _ in range(loops): g1 = None def switch(): parent = gevent.getcurrent().parent for _ in range(N): parent.switch() g1 = gevent.Greenlet(switch) g1.parent = gevent.getcurrent() t1 = perf.perf_counter() for _ in range(N): g1.switch() t2 = perf.perf_counter() duration += t2 - t1 return duration def bench_no_trace(loops): return _run(loops) def bench_trivial_tracer(loops): def trivial(_event, _args): return greenlet.settrace(trivial) try: return _run(loops) finally: greenlet.settrace(None) def bench_monitor_tracer(loops): with tracer(monitor.GreenletTracer): return _run(loops) def bench_hub_switch_tracer(loops): # use current as the hub, since tracer fires # when we switch into that greenlet with tracer(monitor.HubSwitchTracer, gevent.getcurrent(), 1): return _run(loops) def bench_max_switch_tracer(loops): # use object() as the hub, since tracer fires # when switch into something that's *not* the hub with tracer(monitor.MaxSwitchTracer, object, 1): return _run(loops) def main(): runner = perf.Runner() 
runner.bench_time_func( "no tracer", bench_no_trace, inner_loops=N ) runner.bench_time_func( "trivial tracer", bench_trivial_tracer, inner_loops=N ) runner.bench_time_func( "monitor tracer", bench_monitor_tracer, inner_loops=N ) runner.bench_time_func( "max switch tracer", bench_max_switch_tracer, inner_loops=N ) runner.bench_time_func( "hub switch tracer", bench_hub_switch_tracer, inner_loops=N ) if __name__ == '__main__': main() gevent-24.11.1/benchmarks/micro.sh000077500000000000000000000017601471441230600167430ustar00rootroot00000000000000#!/bin/sh set -e -x PYTHON=${PYTHON:=python} $PYTHON -c 'from __future__ import print_function; import gevent.core; print(gevent.__version__, gevent.core.get_version(), getattr(gevent.core, "get_method", lambda: "n/a")(), getattr(gevent, "get_hub", lambda: "n/a")())' $PYTHON -mperf timeit -s'obj = Exception(); obj.x=5' 'obj.x' $PYTHON -mperf timeit -s'from gevent import get_hub; get_hub()' 'get_hub()' $PYTHON -mperf timeit -s'from gevent import getcurrent' 'getcurrent()' $PYTHON -mperf timeit -s'from gevent import spawn; f = lambda : 5' 'spawn(f)' $PYTHON -mperf timeit -s'from gevent import spawn; f = lambda : 5' 'spawn(f).join()' $PYTHON -mperf timeit -s'from gevent import spawn, wait; from gevent.hub import xrange; f = lambda : 5' 'for _ in xrange(10000): spawn(f)' 'wait()' $PYTHON -mperf timeit -s'from gevent import spawn_raw; f = lambda : 5' 'spawn_raw(f)' benchmarks/micro_run_callback.sh benchmarks/micro_semaphore.sh benchmarks/micro_sleep.sh benchmarks/micro_greenlet_link.sh gevent-24.11.1/benchmarks/micro_greenlet_link.sh000077500000000000000000000020511471441230600216370ustar00rootroot00000000000000#!/bin/sh set -e -x PYTHON=${PYTHON:=python} $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda: 5' 'for _ in xrange(1000): g.link(l)' $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda *args: 5' 'for _ in xrange(10): g.link(l);' 'g.join()' $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda *args: 5' 'for _ in xrange(100): g.link(l);' 'g.join()' $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda *args: 5' 'for _ in xrange(1000): g.link(l);' 'g.join()' $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda *args: 5' 'for _ in xrange(10000): g.link(l);' 'g.join()' $PYTHON -mperf timeit -s'from gevent import spawn; from gevent.hub import xrange; g = spawn(lambda: 5); l = lambda *args: 5' 'for _ in xrange(100000): g.link(l);' 'g.join()' gevent-24.11.1/benchmarks/micro_run_callback.sh000077500000000000000000000017771471441230600214530ustar00rootroot00000000000000#!/bin/sh set -e -x PYTHON=${PYTHON:=python} $PYTHON -mperf timeit -s'from gevent import get_hub; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'run_cb(f)' $PYTHON -mperf timeit -s'from gevent import wait,get_hub; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'run_cb(f)' 'wait()' $PYTHON -mperf timeit -s'from gevent import get_hub; from gevent.hub import xrange; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'for _ in xrange(100): run_cb(f)' $PYTHON -mperf timeit -s'from gevent import wait,get_hub; from gevent.hub import xrange; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'for _ in xrange(100): run_cb(f)' 'wait()' $PYTHON -mperf timeit -s'from gevent 
import get_hub; from gevent.hub import xrange; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'for _ in xrange(10000): run_cb(f)' $PYTHON -mperf timeit -s'from gevent import wait,get_hub; from gevent.hub import xrange; run_cb = get_hub().loop.run_callback; f = lambda : 5' 'for _ in xrange(10000): run_cb(f)' 'wait()' gevent-24.11.1/benchmarks/micro_semaphore.sh000077500000000000000000000010321471441230600207760ustar00rootroot00000000000000#!/bin/sh set -e -x PYTHON=${PYTHON:=python} $PYTHON -mpyperf timeit -s'from gevent.lock import Semaphore; s = Semaphore()' 's.release()' $PYTHON -mpyperf timeit -s'from gevent.lock import Semaphore; from gevent import spawn_raw; s = Semaphore(0)' 'spawn_raw(s.release); s.acquire()' $PYTHON -mpyperf timeit -s'from gevent.lock import Semaphore; from gevent import spawn_raw; s = Semaphore(0)' 'spawn_raw(s.release); spawn_raw(s.release); spawn_raw(s.release); spawn_raw(s.release); s.acquire(); s.acquire(); s.acquire(); s.acquire()' gevent-24.11.1/benchmarks/micro_sleep.sh000077500000000000000000000005751471441230600201360ustar00rootroot00000000000000#!/bin/sh set -e -x PYTHON=${PYTHON:=python} $PYTHON -m perf timeit -s 'from gevent import sleep; f = lambda : 5' 'sleep(0)' $PYTHON -m perf timeit -s 'from gevent import sleep; f = lambda : 5' 'sleep(0.00001)' $PYTHON -m perf timeit -s 'from gevent import sleep; f = lambda : 5' 'sleep(0.0001)' $PYTHON -m perf timeit -s 'from gevent import sleep; f = lambda : 5' 'sleep(0.001)' gevent-24.11.1/deps/000077500000000000000000000000001471441230600141055ustar00rootroot00000000000000gevent-24.11.1/deps/README.rst000066400000000000000000000072371471441230600156050ustar00rootroot00000000000000================================ Managing Embedded Dependencies ================================ * Generate patches with ``git diff --patch --minimal -b`` Updating libev ============== Download and unpack the tarball into libev/. Remove these extra files:: rm -f libev/Makefile.am rm -f libev/Symbols.ev rm -f libev/Symbols.event rm -f libev/TODO rm -f libev/aclocal.m4 rm -f libev/autogen.sh rm -f libev/compile rm -f libev/configure.ac rm -f libev/libev.m4 rm -f libev/mkinstalldirs Check if 'config.guess' and/or 'config.sub' went backwards in time (the 'timestamp' and 'copyright' dates). If so, revert it (or update from the latest source http://git.savannah.gnu.org/gitweb/?p=config.git;a=tree ) Updating c-ares =============== - Download and clean up the c-ares Makefile.in[c] and configure script to empty out the MANPAGES variables so that we don't have to ship those in the sdist:: export CARES_VER=1.33.1 cd deps/ wget https://github.com/c-ares/c-ares/releases/download/v$CARES_VER/c-ares-$CARES_VER.tar.gz tar -xf c-ares-$CARES_VER.tar.gz rm -rf c-ares c-ares-$CARES_VER.tar.gz mv c-ares-$CARES_VER c-ares cp c-ares/include/ares_build.h c-ares/include/ares_build.h.dist rm -rf c-ares/docs rm -rf c-ares/test rm -rf c-ares/cmake rm -f c-ares/maketgz rm -f c-ares/CMakeLists.txt rm -f c-ares/RELEASE-PROCEDURE.md c-ares/CONTRIBUTING.md c-ares/SECURITY.md rm -f c-ares/*.cmake c-ares/*.cmake.in rm -rf c-ares/config/ rm -f c-ares/INSTALL.md c-ares/LICENSE.md c-ares/DEVELOPER-NOTES.md rm -f c-ares/buildconf.bat rm -f c-ares/Makefile* git apply cares-make.patch At this point there might be new files in c-ares that need added to git, evaluate them and add them. Note that the patch may not apply cleanly. If not, commit the changes before the patch. 
Then manually apply them by editing the three files to remove the references to ``docs`` and ``test``; this is easiest to do by reading the existing patch file and searching for the relevant lines in the target files. Once this is working correctly, create the new patch using ``git diff -p --minimal -w`` (note that you cannot directly redirect the output of this into ``cares-make.patch``, or you'll get the diff of the patch itself in the diff!). - Follow the same 'config.guess' and 'config.sub' steps as libev, except the files belong in the ``config/`` subdir. Updating libuv ============== - Clean up the libuv tree, and apply the patches to libuv (this whole sequence is meant to be copied and pasted into the terminal):: export LIBUV_VER=v1.38.0 cd deps/ wget https://dist.libuv.org/dist/$LIBUV_VER/libuv-$LIBUV_VER.tar.gz tar -xf libuv-$LIBUV_VER.tar.gz rm libuv-$LIBUV_VER.tar.gz rm -rf libuv mv libuv-$LIBUV_VER libuv rm -rf libuv/.github rm -rf libuv/.readthedocs.yaml rm -rf libuv/LINKS.md rm -rf libuv/docs rm -rf libuv/samples rm -rf libuv/test/*.[ch] libuv/test/test.gyp # must leave the fixtures/ dir rm -rf libuv/tools rm -f libuv/android-configure* rm -f libuv/uv_win_longpath.manifest At this point there might be new files in libuv that need added to git and the build process. Evaluate those and add them to git and to ``src/gevent/libuv/_corecffi_build.py`` as needed. Then check if there are changes to the build system (e.g., the .gyp files) that need to be accounted for in our build file. .. caution:: Pay special attention to the m4 directory. New .m4 files that need to be added may not actually show up in git output. See https://github.com/libuv/libuv/issues/2862 - Follow the same 'config.guess' and 'config.sub' steps as libev. gevent-24.11.1/deps/c-ares/000077500000000000000000000000001471441230600152575ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/AUTHORS000066400000000000000000000023521471441230600163310ustar00rootroot00000000000000c-ares is based on ares, and these are the people that have worked on it since the fork was made: Albert Chin Alex Loukissas Alexander Klauer Alexander Lazic Alexey Simak Andreas Rieke Andrew Andkjar Andrew Ayer Andrew C. Morrow Ashish Sharma Ben Greear Ben Noordhuis BogDan Vatra Brad House Brad Spencer Bram Matthys Chris Araman Dan Fandrich Daniel Johnson Daniel Stenberg David Drysdale David Stuart Denis Bilenko Dima Tisnek Dirk Manske Dominick Meglio Doug Goldstein Doug Kwan Duncan Wilcox Eino Tuominen Erik Kline Fedor Indutny Frederic Germain Geert Uytterhoeven George Neill Gisle Vanem Google LLC Gregor Jasny Guenter Knauf Guilherme Balena Versiani Gunter Knauf Henrik Stoerner Jakub Hrozek James Bursa Jérémy Lal John Schember Keith Shaw Lei Shi Marko Kreen Michael Wallner Mike Crowe Nick Alcock Nick Mathewson Nicolas "Pixel" Noble Ning Dong Oleg Pudeyev Patrick Valsecchi Patrik Thunstrom Paul Saab Peter Pentchev Phil Blundell Poul Thomas Lomholt Ravi Pratap Robin Cornelius Saúl Ibarra Corretgé Sebastian at basti79.de Shmulik Regev Stefan Bühler Steinar H. Gunderson Svante Karlsson Tofu Linden Tom Hughes Tor Arntsen Viktor Szakats Vlad Dinulescu William Ahern Yang Tse hpopescu at ixiacom.com liren at vivisimo.com nordsturm saghul gevent-24.11.1/deps/c-ares/Makefile.in000066400000000000000000000772311471441230600173360ustar00rootroot00000000000000# Makefile.in generated by automake 1.17 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2024 Free Software Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ ############################################################# # # Copyright (C) the Massachusetts Institute of Technology. # Copyright (C) Daniel Stenberg # # Permission to use, copy, modify, and distribute this # software and its documentation for any purpose and without # fee is hereby granted, provided that the above copyright # notice appear in all copies and that both that copyright # notice and this permission notice appear in supporting # documentation, and that the name of M.I.T. not be used in # advertising or publicity pertaining to distribution of the # software without specific, written prior permission. # M.I.T. makes no representations about the suitability of # this software for any purpose. It is provided "as is" # without express or implied warranty. # # SPDX-License-Identifier: MIT # ############################################################# VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) ;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) am__rm_f = rm -f $(am__rm_f_notfound) am__rm_rf = rm -rf $(am__rm_f_notfound) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = . 
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ax_ac_append_to_file.m4 \ $(top_srcdir)/m4/ax_ac_print_to_file.m4 \ $(top_srcdir)/m4/ax_add_am_macro_static.m4 \ $(top_srcdir)/m4/ax_am_macros_static.m4 \ $(top_srcdir)/m4/ax_append_compile_flags.m4 \ $(top_srcdir)/m4/ax_append_flag.m4 \ $(top_srcdir)/m4/ax_append_link_flags.m4 \ $(top_srcdir)/m4/ax_check_compile_flag.m4 \ $(top_srcdir)/m4/ax_check_gnu_make.m4 \ $(top_srcdir)/m4/ax_check_link_flag.m4 \ $(top_srcdir)/m4/ax_check_user_namespace.m4 \ $(top_srcdir)/m4/ax_check_uts_namespace.m4 \ $(top_srcdir)/m4/ax_code_coverage.m4 \ $(top_srcdir)/m4/ax_compiler_vendor.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx_14.m4 \ $(top_srcdir)/m4/ax_file_escapes.m4 \ $(top_srcdir)/m4/ax_pthread.m4 \ $(top_srcdir)/m4/ax_require_defined.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/pkg.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \ $(am__configure_deps) $(am__DIST_COMMON) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/src/lib/ares_config.h \ $(top_builddir)/include/ares_build.h CONFIG_CLEAN_FILES = libcares.pc CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && echo $$files | $(am__xargs_n) 40 $(am__rm_f); }; \ } am__installdirs = "$(DESTDIR)$(pkgconfigdir)" DATA = $(pkgconfig_DATA) RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ cscope distdir distdir-am dist dist-all distcheck am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/libcares.pc.in \ $(top_srcdir)/config/compile $(top_srcdir)/config/config.guess \ $(top_srcdir)/config/config.sub \ $(top_srcdir)/config/install-sh $(top_srcdir)/config/ltmain.sh \ $(top_srcdir)/config/missing AUTHORS INSTALL.md README.md \ config/compile config/config.guess config/config.sub \ config/install-sh config/ltmain.sh config/missing DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ if test -d "$(distdir)"; then \ find "$(distdir)" -type d ! -perm -700 -exec chmod u+rwx {} ';' \ ; rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi am__post_remove_distdir = $(am__remove_distdir) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = -9 DIST_TARGETS = dist-gzip # Exists only to be overridden by the user if desired. AM_DISTCHECK_DVI_TARGET = dvi distuninstallcheck_listfiles = find . -type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' distcleancheck_listfiles = \ find . \( -type f -a \! 
\ \( -name .nfs* -o -name .smb* -o -name .__afs* \) \) -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_CFLAGS = @AM_CFLAGS@ AM_CPPFLAGS = @AM_CPPFLAGS@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AS = @AS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BUILD_SUBDIRS = @BUILD_SUBDIRS@ CARES_PRIVATE_LIBS = @CARES_PRIVATE_LIBS@ CARES_RANDOM_FILE = @CARES_RANDOM_FILE@ CARES_SYMBOL_HIDING_CFLAG = @CARES_SYMBOL_HIDING_CFLAG@ CARES_VERSION_INFO = @CARES_VERSION_INFO@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CODE_COVERAGE_CFLAGS = @CODE_COVERAGE_CFLAGS@ CODE_COVERAGE_CPPFLAGS = @CODE_COVERAGE_CPPFLAGS@ CODE_COVERAGE_CXXFLAGS = @CODE_COVERAGE_CXXFLAGS@ CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@ CODE_COVERAGE_LIBS = @CODE_COVERAGE_LIBS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GCOV = @GCOV@ GENHTML = @GENHTML@ GMOCK112_CFLAGS = @GMOCK112_CFLAGS@ GMOCK112_LIBS = @GMOCK112_LIBS@ GMOCK_CFLAGS = @GMOCK_CFLAGS@ GMOCK_LIBS = @GMOCK_LIBS@ GREP = @GREP@ HAVE_CXX14 = @HAVE_CXX14@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LCOV = @LCOV@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKGCONFIG_CFLAGS = @PKGCONFIG_CFLAGS@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_CXX = @PTHREAD_CXX@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__rm_f_notfound = @am__rm_f_notfound@ am__tar = @am__tar@ am__untar = @am__untar@ am__xargs_n = @am__xargs_n@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ ifGNUmake = @ifGNUmake@ ifnGNUmake = @ifnGNUmake@ includedir = @includedir@ infodir = @infodir@ install_sh = 
@install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign nostdinc 1.9.6 ACLOCAL_AMFLAGS = -I m4 --install MSVCFILES = buildconf.bat # adig and ahost are just sample programs and thus not mentioned with the # regular sources and headers EXTRA_DIST = AUTHORS $(man_MANS) RELEASE-NOTES.md \ c-ares-config.cmake.in libcares.pc.cmake libcares.pc.in buildconf \ README.msvc $(MSVCFILES) INSTALL.md README.md LICENSE.md \ CMakeLists.txt Makefile.dj Makefile.m32 Makefile.netware Makefile.msvc \ Makefile.Watcom CONTRIBUTING.md SECURITY.md DEVELOPER-NOTES.md \ cmake/EnableWarnings.cmake CLEANFILES = $(PDFPAGES) $(HTMLPAGES) DISTCLEANFILES = include/ares_build.h DIST_SUBDIRS = include src SUBDIRS = @BUILD_SUBDIRS@ pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = libcares.pc # where to install the c-ares headers libcares_ladir = $(includedir) all: all-recursive .SUFFIXES: am--refresh: Makefile @: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): libcares.pc: $(top_builddir)/config.status $(srcdir)/libcares.pc.in cd $(top_builddir) && $(SHELL) ./config.status $@ mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs distclean-libtool: -rm -f libtool config.lt install-pkgconfigDATA: $(pkgconfig_DATA) @$(NORMAL_INSTALL) @list='$(pkgconfig_DATA)'; test -n "$(pkgconfigdir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(pkgconfigdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(pkgconfigdir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pkgconfigdir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(pkgconfigdir)" || exit $$?; \ done uninstall-pkgconfigDATA: @$(NORMAL_UNINSTALL) @list='$(pkgconfig_DATA)'; test -n "$(pkgconfigdir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgconfigdir)'; $(am__uninstall_files_from_dir) # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscope: cscope.files test ! -s cscope.files \ || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS) clean-cscope: -rm -f cscope.files cscope.files: clean-cscope cscopelist cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags -rm -f cscope.out cscope.in.out cscope.po.out cscope.files distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) $(am__remove_distdir) $(AM_V_at)$(MKDIR_P) "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$(top_distdir)" distdir="$(distdir)" \ dist-hook -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz $(am__post_remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 $(am__post_remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz $(am__post_remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz $(am__post_remove_distdir) dist-zstd: distdir tardir=$(distdir) && $(am__tar) | zstd -c $${ZSTD_CLEVEL-$${ZSTD_OPT--19}} >$(distdir).tar.zst $(am__post_remove_distdir) dist-tarZ: distdir @echo WARNING: "Support for distribution archives compressed with" \ "legacy program 'compress' is deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__post_remove_distdir) dist-shar: distdir @echo WARNING: "Support for shar distribution archives is" \ "deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz $(am__post_remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__post_remove_distdir) dist dist-all: $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:' $(am__post_remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. 
distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ eval GZIP= gzip -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ eval GZIP= gzip -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ *.tar.zst*) \ zstd -dc $(distdir).tar.zst | $(am__untar) ;;\ esac chmod -R a-w $(distdir) chmod u+w $(distdir) mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build/sub \ && ../../configure \ $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ --srcdir=../.. --prefix="$$dc_install_base" \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) $(AM_DISTCHECK_DVI_TARGET) \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__post_remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @test -n '$(distuninstallcheck_dir)' || { \ echo 'ERROR: trying to run $@ with an empty' \ '$$(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ $(am__cd) '$(distuninstallcheck_dir)' || { \ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-recursive all-am: Makefile $(DATA) installdirs: installdirs-recursive installdirs-am: for dir in "$(DESTDIR)$(pkgconfigdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: -$(am__rm_f) $(CLEANFILES) distclean-generic: -$(am__rm_f) $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES) -$(am__rm_f) $(DISTCLEANFILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -f Makefile distclean-am: clean-am distclean-generic distclean-libtool \ distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-pkgconfigDATA install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-pkgconfigDATA .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \ am--refresh check check-am clean clean-cscope clean-generic \ clean-libtool cscope cscopelist-am ctags ctags-am dist \ dist-all dist-bzip2 dist-gzip dist-hook dist-lzip dist-shar \ dist-tarZ dist-xz dist-zip dist-zstd distcheck distclean \ distclean-generic distclean-libtool distclean-tags \ distcleancheck distdir distuninstallcheck dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-pkgconfigDATA install-ps install-ps-am install-strip \ installcheck installcheck-am installdirs installdirs-am \ maintainer-clean maintainer-clean-generic mostlyclean \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am uninstall-pkgconfigDATA .PRECIOUS: Makefile # Make files named 
*.dist replace the file without .dist extension dist-hook: find $(distdir) -name "*.dist" -exec rm {} \; (distit=`find $(srcdir) -name "*.dist"`; \ for file in $$distit; do \ strip=`echo $$file | sed -e s/^$(srcdir)// -e s/\.dist//`; \ cp $$file $(distdir)$$strip; \ done) # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: # Tell GNU make to disable its built-in pattern rules. %:: %,v %:: RCS/%,v %:: RCS/% %:: s.% %:: SCCS/s.% gevent-24.11.1/deps/c-ares/README.md000066400000000000000000000171121471441230600165400ustar00rootroot00000000000000# [![c-ares logo](https://c-ares.org/art/c-ares-logo.svg)](https://c-ares.org/) [![Build Status](https://api.cirrus-ci.com/github/c-ares/c-ares.svg?branch=main)](https://cirrus-ci.com/github/c-ares/c-ares) [![Windows Build Status](https://ci.appveyor.com/api/projects/status/aevgc5914tm72pvs/branch/main?svg=true)](https://ci.appveyor.com/project/c-ares/c-ares/branch/main) [![Coverage Status](https://coveralls.io/repos/github/c-ares/c-ares/badge.svg?branch=main)](https://coveralls.io/github/c-ares/c-ares?branch=main) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/291/badge)](https://bestpractices.coreinfrastructure.org/projects/291) [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/c-ares.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:c-ares) [![Bugs](https://sonarcloud.io/api/project_badges/measure?project=c-ares_c-ares&metric=bugs)](https://sonarcloud.io/summary/new_code?id=c-ares_c-ares) [![Coverity Scan Status](https://scan.coverity.com/projects/c-ares/badge.svg)](https://scan.coverity.com/projects/c-ares) - [Overview](#overview) - [Code](#code) - [Communication](#communication) - [Release Keys](#release-keys) - [Verifying signatures](#verifying-signatures) - [Features](#features) - [RFCs and Proposals](#supported-rfcs-and-proposals) ## Overview [c-ares](https://c-ares.org) is a modern DNS (stub) resolver library, written in C. It provides interfaces for asynchronous queries while trying to abstract the intricacies of the underlying DNS protocol. It was originally intended for applications which need to perform DNS queries without blocking, or need to perform multiple DNS queries in parallel. One of the goals of c-ares is to be a better DNS resolver than is provided by your system, regardless of which system you use. We recommend using the c-ares library in all network applications even if the initial goal of asynchronous resolution is not necessary to your application. c-ares will build with any C89 compiler and is [MIT licensed](LICENSE.md), which makes it suitable for both free and commercial software. c-ares runs on Linux, FreeBSD, OpenBSD, MacOS, Solaris, AIX, Windows, Android, iOS and many more operating systems. c-ares has a strong focus on security, implementing safe parsers and data builders used throughout the code, thus avoiding many of the common pitfalls of other C libraries. Through automated testing with our extensive testing framework, c-ares is constantly validated with a range of static and dynamic analyzers, as well as being constantly fuzzed by [OSS Fuzz](https://github.com/google/oss-fuzz). While c-ares has been around for over 20 years, it has been actively maintained both in regards to the latest DNS RFCs as well as updated to follow the latest best practices in regards to C coding standards. 
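As a concrete illustration of the asynchronous model described above, the sketch below performs a single host lookup with `ares_getaddrinfo()` and drives it with the traditional `select()`-based processing loop. This is a minimal, illustrative example only, not code shipped in this tree: error handling is abbreviated, the POSIX `select()` loop is just one of several ways to integrate c-ares with an application, and `www.example.com` is a placeholder host name. See the c-ares man pages for authoritative usage.

```c
/*
 * Minimal, illustrative sketch of one asynchronous lookup with c-ares,
 * driven by the classic POSIX select() loop.  Placeholder host name and
 * abbreviated error handling; see ares_getaddrinfo(3), ares_fds(3) and
 * ares_process(3) for authoritative usage.
 */
#include <ares.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>

static void addrinfo_cb(void *arg, int status, int timeouts,
                        struct ares_addrinfo *result)
{
    struct ares_addrinfo_node *node;
    (void)arg;
    (void)timeouts;
    if (status != ARES_SUCCESS) {
        fprintf(stderr, "lookup failed: %s\n", ares_strerror(status));
        return;
    }
    for (node = result->nodes; node != NULL; node = node->ai_next)
        printf("resolved an address (family %d)\n", node->ai_family);
    ares_freeaddrinfo(result);
}

int main(void)
{
    ares_channel channel;
    struct ares_addrinfo_hints hints;

    ares_library_init(ARES_LIB_INIT_ALL);
    if (ares_init(&channel) != ARES_SUCCESS)
        return 1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;   /* ask for both A and AAAA records */

    /* Returns immediately; addrinfo_cb fires once the answer arrives. */
    ares_getaddrinfo(channel, "www.example.com", NULL, &hints,
                     addrinfo_cb, NULL);

    /* Drive the resolver until no queries remain in flight. */
    for (;;) {
        fd_set readers, writers;
        struct timeval tv, *tvp;
        int nfds;

        FD_ZERO(&readers);
        FD_ZERO(&writers);
        nfds = ares_fds(channel, &readers, &writers);
        if (nfds == 0)
            break;
        tvp = ares_timeout(channel, NULL, &tv);
        select(nfds, &readers, &writers, NULL, tvp);
        ares_process(channel, &readers, &writers);
    }

    ares_destroy(channel);
    ares_library_cleanup();
    return 0;
}
```

The same callback-driven flow applies when c-ares is plugged into a real event loop (as gevent does through its own loop) instead of `select()`.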
## Code The full source code and revision history is available in our [GitHub repository](https://github.com/c-ares/c-ares). Our signed releases are available in the [release archives](https://c-ares.org/download/). See the [INSTALL.md](INSTALL.md) file for build information. ## Communication **Issues** and **Feature Requests** should be reported to our [GitHub Issues](https://github.com/c-ares/c-ares/issues) page. **Discussions** around c-ares and its use are held on [GitHub Discussions](https://github.com/c-ares/c-ares/discussions/categories/q-a) or the [Mailing List](https://lists.haxx.se/mailman/listinfo/c-ares). Mailing List archive [here](https://lists.haxx.se/pipermail/c-ares/). Please do not mail volunteers privately about c-ares. **Security vulnerabilities** are treated according to our [Security Procedure](SECURITY.md); please email c-ares-security at haxx.se if you suspect one. ## Release keys Primary GPG keys for c-ares Releasers (some Releasers sign with subkeys): * **Daniel Stenberg** <> `27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2` * **Brad House** <> `DA7D64E4C82C6294CB73A20E22E3D13B5411B7CA` To import the full set of trusted release keys (including subkeys possibly used to sign releases): ```bash gpg --keyserver hkps://keyserver.ubuntu.com --recv-keys 27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2 # Daniel Stenberg gpg --keyserver hkps://keys.openpgp.org --recv-keys DA7D64E4C82C6294CB73A20E22E3D13B5411B7CA # Brad House ``` ### Verifying signatures For each release `c-ares-X.Y.Z.tar.gz` there is a corresponding `c-ares-X.Y.Z.tar.gz.asc` file which contains the detached signature for the release. After fetching all of the possible valid signing keys and loading into your keychain as per the prior section, you can simply run the command below on the downloaded package and detached signature: ```bash % gpg -v --verify c-ares-1.29.0.tar.gz.asc c-ares-1.29.0.tar.gz gpg: enabled compatibility flags: gpg: Signature made Fri May 24 02:50:38 2024 EDT gpg: using RSA key 27EDEAF22F3ABCEB50DB9A125CC908FDB71E12C2 gpg: using pgp trust model gpg: Good signature from "Daniel Stenberg " [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 27ED EAF2 2F3A BCEB 50DB 9A12 5CC9 08FD B71E 12C2 gpg: binary signature, digest algorithm SHA512, key algorithm rsa2048 ``` ## Features ### Supported RFCs and Proposals - [RFC1035](https://datatracker.ietf.org/doc/html/rfc1035). Initial/Base DNS RFC - [RFC2671](https://datatracker.ietf.org/doc/html/rfc2671), [RFC6891](https://datatracker.ietf.org/doc/html/rfc6891). EDNS0 option (meta-RR) - [RFC3596](https://datatracker.ietf.org/doc/html/rfc3596). IPv6 Address. `AAAA` Record. - [RFC2782](https://datatracker.ietf.org/doc/html/rfc2782). Server Selection. `SRV` Record. - [RFC3403](https://datatracker.ietf.org/doc/html/rfc3403). Naming Authority Pointer. `NAPTR` Record. - [RFC6698](https://datatracker.ietf.org/doc/html/rfc6698). DNS-Based Authentication of Named Entities (DANE) Transport Layer Security (TLS) Protocol. `TLSA` Record. - [RFC9460](https://datatracker.ietf.org/doc/html/rfc9460). General Purpose Service Binding, Service Binding type for use with HTTPS. `SVCB` and `HTTPS` Records. - [RFC7553](https://datatracker.ietf.org/doc/html/rfc7553). Uniform Resource Identifier. `URI` Record. - [RFC6844](https://datatracker.ietf.org/doc/html/rfc6844). Certification Authority Authorization. `CAA` Record.
- [RFC2535](https://datatracker.ietf.org/doc/html/rfc2535), [RFC2931](https://datatracker.ietf.org/doc/html/rfc2931). `SIG0` Record. Only basic parser, not full implementation. - [RFC7873](https://datatracker.ietf.org/doc/html/rfc7873), [RFC9018](https://datatracker.ietf.org/doc/html/rfc9018). DNS Cookie off-path dns poisoning and amplification mitigation. - [draft-vixie-dnsext-dns0x20-00](https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00). DNS 0x20 query name case randomization to prevent cache poisioning attacks. - [RFC7686](https://datatracker.ietf.org/doc/html/rfc7686). Reject queries for `.onion` domain names with `NXDOMAIN`. - [RFC2606](https://datatracker.ietf.org/doc/html/rfc2606), [RFC6761](https://datatracker.ietf.org/doc/html/rfc6761). Special case treatment for `localhost`/`.localhost`. - [RFC2308](https://datatracker.ietf.org/doc/html/rfc2308), [RFC9520](https://datatracker.ietf.org/doc/html/rfc9520). Negative Caching of DNS Resolution Failures. - [RFC6724](https://datatracker.ietf.org/doc/html/rfc6724). IPv6 address sorting as used by `ares_getaddrinfo()`. - [RFC7413](https://datatracker.ietf.org/doc/html/rfc7413). TCP FastOpen (TFO) for 0-RTT TCP Connection Resumption. gevent-24.11.1/deps/c-ares/README.msvc000066400000000000000000000071551471441230600171160ustar00rootroot00000000000000 ___ __ _ _ __ ___ ___ / __| ___ / _` | '__/ _ \/ __| | (_ |___| (_| | | | __/\__ \ \___| \__,_|_| \___||___/ How to build c-ares using MSVC or Visual Studio ================================================= How to build using MSVC from the command line --------------------------------------------- Open a command prompt window and ensure that the environment is properly set up in order to use MSVC or Visual Studio compiler tools. Change to c-ares source folder where Makefile.msvc file is located and run: > nmake -f Makefile.msvc This will build all c-ares libraries as well as three sample programs. Once the above command has finished a new folder named MSVCXX will exist below the folder where makefile.msvc is found. The name of the folder depends on the MSVC compiler version being used to build c-ares. Below the MSVCXX folder there will exist four folders named 'cares', 'ahost', and 'adig'. The 'cares' folder is the one that holds the c-ares libraries you have just generated, the other three hold sample programs that use the libraries. The above command builds four versions of the c-ares library, dynamic and static versions and each one in release and debug flavours. Each of these is found in folders named dll-release, dll-debug, lib-release, and lib-debug, which hang from the 'cares' folder mentioned above. Each sample program also has folders with the same names to reflect which library version it is using. How to install using MSVC from the command line ----------------------------------------------- In order to allow easy usage of c-ares libraries it may be convenient to install c-ares libraries and header files to a common subdirectory tree. Once that c-ares libraries have been built using procedure described above, use same command prompt window to define environment variable INSTALL_DIR to designate the top subdirectory where installation of c-ares libraries and header files will be done. > set INSTALL_DIR=c:\c-ares Afterwards, run following command to actually perform the installation: > nmake -f Makefile.msvc install Installation procedure will copy c-ares libraries to subdirectory 'lib' and c-ares header files to subdirectory 'include' below the INSTALL_DIR subdir. 
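For illustration, a program of your own (for example a source file named myapp.c; the name is only a placeholder) could then be compiled against the installed tree roughly along these lines, using the import library for the dynamic c-ares library described further below:

  > cl /I c:\c-ares\include myapp.c /link /LIBPATH:c:\c-ares\lib cares.lib

When using the import library cares.lib, the corresponding cares.dll must be available at run time. If you link against the static library (libcares.lib) instead, remember to also define the preprocessor symbol CARES_STATICLIB, as explained in the section 'How to use c-ares static libraries' near the end of this file.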
When environment variable INSTALL_DIR is not defined, installation is done to c-ares source folder where Makefile.msvc file is located. Relationship between c-ares library file names and versions ----------------------------------------------------------- c-ares static release library version files: libcares.lib -> static release library c-ares static debug library version files: libcaresd.lib -> static debug library c-ares dynamic release library version files: cares.dll -> dynamic release library cares.lib -> import library for the dynamic release library cares.exp -> export file for the dynamic release library c-ares dynamic debug library version files: caresd.dll -> dynamic debug library caresd.lib -> import library for the dynamic debug library caresd.exp -> export file for the dynamic debug library caresd.pdb -> debug symbol file for the dynamic debug library How to use c-ares static libraries ---------------------------------- When using the c-ares static library in your program, you will have to define preprocessor symbol CARES_STATICLIB while building your program, otherwise you will get errors at linkage stage. Have Fun! gevent-24.11.1/deps/c-ares/RELEASE-NOTES.md000066400000000000000000000050471471441230600175550ustar00rootroot00000000000000## c-ares version 1.33.1 - August 23 2024 This is a bugfix release. Bugfixes: * Work around systemd-resolved quirk that returns unexpected codes for single label names. Also adds test cases to validate the work around works and will continue to work in future releases. [PR #863](https://github.com/c-ares/c-ares/pull/863), See Also https://github.com/systemd/systemd/issues/34101 * Fix sysconfig ndots default value, also adds containerized test case to prevent future regressions. [PR #862](https://github.com/c-ares/c-ares/pull/862) * Fix blank DNS name returning error code rather than valid record for commands like: `adig -t SOA .`. Also adds test case to prevent future regressions. [9e574af](https://github.com/c-ares/c-ares/commit/9e574af) * Fix calculation of query times > 1s. [2b2eae7](https://github.com/c-ares/c-ares/commit/2b2eae7) * Fix building on old Linux releases that don't have `TCP_FASTOPEN_CONNECT`. [b7a89b9](https://github.com/c-ares/c-ares/commit/b7a89b9) * Fix minor Android build warnings. [PR #848](https://github.com/c-ares/c-ares/pull/848) Thanks go to these friendly people for their efforts and contributions for this release: * Brad House (@bradh352) * Erik Lax (@eriklax) * Hans-Christian Egtvedt (@egtvedt) * Mikael Lindemann (@mikaellindemann) * Nodar Chkuaselidze (@nodech) ## c-ares version 1.33.0 - August 2 2024 This is a feature and bugfix release. Features: * Add DNS cookie support (RFC7873 + RFC9018) to help prevent off-path cache poisoning attacks. [PR #833](https://github.com/c-ares/c-ares/pull/833) * Implement TCP FastOpen (TFO) RFC7413, which will make TCP reconnects 0-RTT on supported systems. [PR #840](https://github.com/c-ares/c-ares/pull/840) Changes: * Reorganize source tree. [PR #822](https://github.com/c-ares/c-ares/pull/822) * Refactoring of connection handling to prevent code duplication. [PR #839](https://github.com/c-ares/c-ares/pull/839) * New dynamic array data structure to prevent simple logic flaws in array handling in various code paths. [PR #841](https://github.com/c-ares/c-ares/pull/841) Bugfixes: * `ares_destroy()` race condition during shutdown due to missing lock. [PR #831](https://github.com/c-ares/c-ares/pull/831) * Android: Preserve thread name after attaching it to JVM. 
[PR #838](https://github.com/c-ares/c-ares/pull/838) * Windows UWP (Store) support fix. [PR #845](https://github.com/c-ares/c-ares/pull/845) Thanks go to these friendly people for their efforts and contributions for this release: * Brad House (@bradh352) * Yauheni Khnykin (@Hsilgos) gevent-24.11.1/deps/c-ares/aclocal.m4000066400000000000000000001460501471441230600171250ustar00rootroot00000000000000# generated automatically by aclocal 1.17 -*- Autoconf -*- # Copyright (C) 1996-2024 Free Software Foundation, Inc. # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])]) m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.72],, [m4_warning([this file was generated for autoconf 2.72. You have another version of autoconf. It may work, but is not guaranteed to. If you have problems, you may need to regenerate the build system entirely. To do so, use the procedure documented by the package, typically 'autoreconf'.])]) # Copyright (C) 2002-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- # Automake X.Y traces this macro to ensure aclocal.m4 has been # generated from the m4 files accompanying Automake X.Y. # (This private macro should not be called outside this file.) AC_DEFUN([AM_AUTOMAKE_VERSION], [am__api_version='1.17' dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to dnl require some minimum version. Point them to the right macro. m4_if([$1], [1.17], [], [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl ]) # _AM_AUTOCONF_VERSION(VERSION) # ----------------------------- # aclocal traces this macro to find the Autoconf version. # This is a private macro too. Using m4_define simplifies # the logic in aclocal, which can simply ignore this definition. m4_define([_AM_AUTOCONF_VERSION], []) # AM_SET_CURRENT_AUTOMAKE_VERSION # ------------------------------- # Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced. # This function is AC_REQUIREd by AM_INIT_AUTOMAKE. AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], [AM_AUTOMAKE_VERSION([1.17])dnl m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl _AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))]) # AM_AUX_DIR_EXPAND -*- Autoconf -*- # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets # $ac_aux_dir to '$srcdir/foo'. In other projects, it is set to # '$srcdir', '$srcdir/..', or '$srcdir/../..'. # # Of course, Automake must honor this variable whenever it calls a # tool from the auxiliary directory. 
The problem is that $srcdir (and # therefore $ac_aux_dir as well) can be either absolute or relative, # depending on how configure is run. This is pretty annoying, since # it makes $ac_aux_dir quite unusable in subdirectories: in the top # source directory, any form will work fine, but in subdirectories a # relative path needs to be adjusted first. # # $ac_aux_dir/missing # fails when called from a subdirectory if $ac_aux_dir is relative # $top_srcdir/$ac_aux_dir/missing # fails if $ac_aux_dir is absolute, # fails when called from a subdirectory in a VPATH build with # a relative $ac_aux_dir # # The reason of the latter failure is that $top_srcdir and $ac_aux_dir # are both prefixed by $srcdir. In an in-source build this is usually # harmless because $srcdir is '.', but things will broke when you # start a VPATH build or use an absolute $srcdir. # # So we could use something similar to $top_srcdir/$ac_aux_dir/missing, # iff we strip the leading $srcdir from $ac_aux_dir. That would be: # am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"` # and then we would define $MISSING as # MISSING="\${SHELL} $am_aux_dir/missing" # This will work as long as MISSING is not called from configure, because # unfortunately $(top_srcdir) has no meaning in configure. # However there are other variables, like CC, which are often used in # configure, and could therefore not use this "fixed" $ac_aux_dir. # # Another solution, used here, is to always expand $ac_aux_dir to an # absolute PATH. The drawback is that using absolute paths prevent a # configured tree to be moved without reconfiguration. AC_DEFUN([AM_AUX_DIR_EXPAND], [AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl # Expand $ac_aux_dir to an absolute path. am_aux_dir=`cd "$ac_aux_dir" && pwd` ]) # AM_COND_IF -*- Autoconf -*- # Copyright (C) 2008-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_COND_IF # _AM_COND_ELSE # _AM_COND_ENDIF # -------------- # These macros are only used for tracing. m4_define([_AM_COND_IF]) m4_define([_AM_COND_ELSE]) m4_define([_AM_COND_ENDIF]) # AM_COND_IF(COND, [IF-TRUE], [IF-FALSE]) # --------------------------------------- # If the shell condition COND is true, execute IF-TRUE, otherwise execute # IF-FALSE. Allow automake to learn about conditional instantiating macros # (the AC_CONFIG_FOOS). AC_DEFUN([AM_COND_IF], [m4_ifndef([_AM_COND_VALUE_$1], [m4_fatal([$0: no such condition "$1"])])dnl _AM_COND_IF([$1])dnl if test -z "$$1_TRUE"; then : m4_n([$2])[]dnl m4_ifval([$3], [_AM_COND_ELSE([$1])dnl else $3 ])dnl _AM_COND_ENDIF([$1])dnl fi[]dnl ]) # AM_CONDITIONAL -*- Autoconf -*- # Copyright (C) 1997-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_CONDITIONAL(NAME, SHELL-CONDITION) # ------------------------------------- # Define a conditional. 
AC_DEFUN([AM_CONDITIONAL], [AC_PREREQ([2.52])dnl m4_if([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl AC_SUBST([$1_TRUE])dnl AC_SUBST([$1_FALSE])dnl _AM_SUBST_NOTMAKE([$1_TRUE])dnl _AM_SUBST_NOTMAKE([$1_FALSE])dnl m4_define([_AM_COND_VALUE_$1], [$2])dnl if $2; then $1_TRUE= $1_FALSE='#' else $1_TRUE='#' $1_FALSE= fi AC_CONFIG_COMMANDS_PRE( [if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then AC_MSG_ERROR([[conditional "$1" was never defined. Usually this means the macro was only invoked conditionally.]]) fi])]) # Copyright (C) 1999-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be # written in clear, in which case automake, when reading aclocal.m4, # will think it sees a *use*, and therefore will trigger all it's # C support machinery. Also note that it means that autoscan, seeing # CC etc. in the Makefile, will ask for an AC_PROG_CC use... # _AM_DEPENDENCIES(NAME) # ---------------------- # See how the compiler implements dependency checking. # NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GJC". # We try a few techniques and use that to set a single cache variable. # # We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was # modified to invoke _AM_DEPENDENCIES(CC); we would have a circular # dependency, and given that the user is not expected to run this macro, # just rely on AC_PROG_CC. AC_DEFUN([_AM_DEPENDENCIES], [AC_REQUIRE([AM_SET_DEPDIR])dnl AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl AC_REQUIRE([AM_MAKE_INCLUDE])dnl AC_REQUIRE([AM_DEP_TRACK])dnl m4_if([$1], [CC], [depcc="$CC" am_compiler_list=], [$1], [CXX], [depcc="$CXX" am_compiler_list=], [$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'], [$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'], [$1], [UPC], [depcc="$UPC" am_compiler_list=], [$1], [GCJ], [depcc="$GCJ" am_compiler_list='gcc3 gcc'], [depcc="$$1" am_compiler_list=]) AC_CACHE_CHECK([dependency style of $depcc], [am_cv_$1_dependencies_compiler_type], [if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. 
mkdir sub am_cv_$1_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` fi am__universal=false m4_case([$1], [CC], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac], [CXX], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac]) for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thus: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_$1_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_$1_dependencies_compiler_type=none fi ]) AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) AM_CONDITIONAL([am__fastdep$1], [ test "x$enable_dependency_tracking" != xno \ && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) ]) # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. # This macro is AC_REQUIREd in _AM_DEPENDENCIES. 
AC_DEFUN([AM_SET_DEPDIR], [AC_REQUIRE([AM_SET_LEADING_DOT])dnl AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl ]) # AM_DEP_TRACK # ------------ AC_DEFUN([AM_DEP_TRACK], [AC_ARG_ENABLE([dependency-tracking], [dnl AS_HELP_STRING( [--enable-dependency-tracking], [do not reject slow dependency extractors]) AS_HELP_STRING( [--disable-dependency-tracking], [speeds up one-time build])]) if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno]) AC_SUBST([AMDEPBACKSLASH])dnl _AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl AC_SUBST([am__nodep])dnl _AM_SUBST_NOTMAKE([am__nodep])dnl ]) # Generate code to set up dependency tracking. -*- Autoconf -*- # Copyright (C) 1999-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_OUTPUT_DEPENDENCY_COMMANDS # ------------------------------ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS], [{ # Older Autoconf quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. # TODO: see whether this extra hack can be removed once we start # requiring Autoconf 2.70 or later. AS_CASE([$CONFIG_FILES], [*\'*], [eval set x "$CONFIG_FILES"], [*], [set x $CONFIG_FILES]) shift # Used to flag and report bootstrapping failures. am_rc=0 for am_mf do # Strip MF so we end up with the name of the file. am_mf=`AS_ECHO(["$am_mf"]) | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile which includes # dependency-tracking related rules and includes. # Grep'ing the whole file directly is not great: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \ || continue am_dirpart=`AS_DIRNAME(["$am_mf"])` am_filepart=`AS_BASENAME(["$am_mf"])` AM_RUN_LOG([cd "$am_dirpart" \ && sed -e '/# am--include-marker/d' "$am_filepart" \ | $MAKE -f - am--depfiles]) || am_rc=$? done if test $am_rc -ne 0; then AC_MSG_FAILURE([Something went wrong bootstrapping makefile fragments for automatic dependency tracking. If GNU make was not used, consider re-running the configure script with MAKE="gmake" (or whatever is necessary). You can also try re-running configure with the '--disable-dependency-tracking' option to at least be able to build the package (albeit without support for automatic dependency tracking).]) fi AS_UNSET([am_dirpart]) AS_UNSET([am_filepart]) AS_UNSET([am_mf]) AS_UNSET([am_rc]) rm -f conftest-deps.mk } ])# _AM_OUTPUT_DEPENDENCY_COMMANDS # AM_OUTPUT_DEPENDENCY_COMMANDS # ----------------------------- # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking is enabled. # This creates each '.Po' and '.Plo' makefile fragment that we'll need in # order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], [AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}"])]) # Do all the work for Automake. -*- Autoconf -*- # Copyright (C) 1996-2024 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This macro actually does too much. Some checks are only needed if # your package does certain things. But this isn't really a big deal. dnl Redefine AC_PROG_CC to automatically invoke _AM_PROG_CC_C_O. m4_define([AC_PROG_CC], m4_defn([AC_PROG_CC]) [_AM_PROG_CC_C_O ]) # AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) # AM_INIT_AUTOMAKE([OPTIONS]) # ----------------------------------------------- # The call with PACKAGE and VERSION arguments is the old style # call (pre autoconf-2.50), which is being phased out. PACKAGE # and VERSION should now be passed to AC_INIT and removed from # the call to AM_INIT_AUTOMAKE. # We support both call styles for the transition. After # the next Automake release, Autoconf can make the AC_INIT # arguments mandatory, and then we can depend on a new Autoconf # release and drop the old call support. AC_DEFUN([AM_INIT_AUTOMAKE], [AC_PREREQ([2.65])dnl m4_ifdef([_$0_ALREADY_INIT], [m4_fatal([$0 expanded multiple times ]m4_defn([_$0_ALREADY_INIT]))], [m4_define([_$0_ALREADY_INIT], m4_expansion_stack)])dnl dnl Autoconf wants to disallow AM_ names. We explicitly allow dnl the ones we care about. m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl AC_REQUIRE([AC_PROG_INSTALL])dnl if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl # test to see if srcdir already configured if test -f $srcdir/config.status; then AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi AC_SUBST([CYGPATH_W]) # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], [AC_DIAGNOSE([obsolete], [$0: two- and three-arguments forms are deprecated.]) m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT. m4_if( m4_ifset([AC_PACKAGE_NAME], [ok]):m4_ifset([AC_PACKAGE_VERSION], [ok]), [ok:ok],, [m4_fatal([AC_INIT should be called with package and version arguments])])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, [AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package]) AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl # Some tools Automake needs. AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}]) AM_MISSING_PROG([AUTOCONF], [autoconf]) AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}]) AM_MISSING_PROG([AUTOHEADER], [autoheader]) AM_MISSING_PROG([MAKEINFO], [makeinfo]) AC_REQUIRE([AM_PROG_INSTALL_SH])dnl AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl AC_REQUIRE([AC_PROG_MKDIR_P])dnl # For better backward compatibility. To be removed once Automake 1.9.x # dies out for good. For more background, see: # # AC_SUBST([mkdir_p], ['$(MKDIR_P)']) # We need awk for the "check" target (and possibly the TAP driver). 
The # system "awk" is bad on some platforms. AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([AC_PROG_MAKE_SET])dnl AC_REQUIRE([AM_SET_LEADING_DOT])dnl _AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])], [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])], [_AM_PROG_TAR([v7])])]) _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], [_AM_DEPENDENCIES([CC])], [m4_define([AC_PROG_CC], m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], [_AM_DEPENDENCIES([CXX])], [m4_define([AC_PROG_CXX], m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJC], [_AM_DEPENDENCIES([OBJC])], [m4_define([AC_PROG_OBJC], m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJCXX], [_AM_DEPENDENCIES([OBJCXX])], [m4_define([AC_PROG_OBJCXX], m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl ]) # Variables for tags utilities; see am/tags.am if test -z "$CTAGS"; then CTAGS=ctags fi AC_SUBST([CTAGS]) if test -z "$ETAGS"; then ETAGS=etags fi AC_SUBST([ETAGS]) if test -z "$CSCOPE"; then CSCOPE=cscope fi AC_SUBST([CSCOPE]) AC_REQUIRE([_AM_SILENT_RULES])dnl dnl The testsuite driver may need to know about EXEEXT, so add the dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This dnl macro is hooked onto _AC_COMPILER_EXEEXT early, see below. AC_CONFIG_COMMANDS_PRE(dnl [m4_provide_if([_AM_COMPILER_EXEEXT], [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl AC_REQUIRE([_AM_PROG_RM_F]) AC_REQUIRE([_AM_PROG_XARGS_N]) dnl The trailing newline in this macro's definition is deliberate, for dnl backward compatibility and to allow trailing 'dnl'-style comments dnl after the AM_INIT_AUTOMAKE invocation. See automake bug#16841. ]) dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further dnl mangled by Autoconf and run in a shell conditional statement. m4_define([_AC_COMPILER_EXEEXT], m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])]) # When config.status generates a header, we must update the stamp-h file. # This file resides in the same directory as the config header # that is generated. The stamp files are numbered to have different names. # Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the # loop where config.status creates the headers, so we can generate # our stamp files there. AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], [# Compute $1's index in $config_headers. _am_arg=$1 _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count]) # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_SH # ------------------ # Define $install_sh. AC_DEFUN([AM_PROG_INSTALL_SH], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl if test x"${install_sh+set}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi AC_SUBST([install_sh])]) # Copyright (C) 2003-2024 Free Software Foundation, Inc. 
# # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # Check whether the underlying file-system supports filenames # with a leading dot. For instance MS-DOS doesn't. AC_DEFUN([AM_SET_LEADING_DOT], [rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null AC_SUBST([am__leading_dot])]) # Add --enable-maintainer-mode option to configure. -*- Autoconf -*- # From Jim Meyering # Copyright (C) 1996-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MAINTAINER_MODE([DEFAULT-MODE]) # ---------------------------------- # Control maintainer-specific portions of Makefiles. # Default is to disable them, unless 'enable' is passed literally. # For symmetry, 'disable' may be passed as well. Anyway, the user # can override the default with the --enable/--disable switch. AC_DEFUN([AM_MAINTAINER_MODE], [m4_case(m4_default([$1], [disable]), [enable], [m4_define([am_maintainer_other], [disable])], [disable], [m4_define([am_maintainer_other], [enable])], [m4_define([am_maintainer_other], [enable]) m4_warn([syntax], [unexpected argument to AM@&t@_MAINTAINER_MODE: $1])]) AC_MSG_CHECKING([whether to enable maintainer-specific portions of Makefiles]) dnl maintainer-mode's default is 'disable' unless 'enable' is passed AC_ARG_ENABLE([maintainer-mode], [AS_HELP_STRING([--]am_maintainer_other[-maintainer-mode], am_maintainer_other[ make rules and dependencies not useful (and sometimes confusing) to the casual installer])], [USE_MAINTAINER_MODE=$enableval], [USE_MAINTAINER_MODE=]m4_if(am_maintainer_other, [enable], [no], [yes])) AC_MSG_RESULT([$USE_MAINTAINER_MODE]) AM_CONDITIONAL([MAINTAINER_MODE], [test $USE_MAINTAINER_MODE = yes]) MAINT=$MAINTAINER_MODE_TRUE AC_SUBST([MAINT])dnl ] ) # Check to see how 'make' treats includes. -*- Autoconf -*- # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MAKE_INCLUDE() # ----------------- # Check whether make has an 'include' directive that can support all # the idioms we need for our automatic dependency tracking code. AC_DEFUN([AM_MAKE_INCLUDE], [AC_MSG_CHECKING([whether ${MAKE-make} supports the include directive]) cat > confinc.mk << 'END' am__doit: @echo this is the am__doit target >confinc.out .PHONY: am__doit END am__include="#" am__quote= # BSD make does it like this. echo '.include "confinc.mk" # ignored' > confmf.BSD # Other make implementations (GNU, Solaris 10, AIX) do it like this. echo 'include confinc.mk # ignored' > confmf.GNU _am_result=no for s in GNU BSD; do AM_RUN_LOG([${MAKE-make} -f confmf.$s && cat confinc.out]) AS_CASE([$?:`cat confinc.out 2>/dev/null`], ['0:this is the am__doit target'], [AS_CASE([$s], [BSD], [am__include='.include' am__quote='"'], [am__include='include' am__quote=''])]) if test "$am__include" != "#"; then _am_result="yes ($s style)" break fi done rm -f confinc.* confmf.* AC_MSG_RESULT([${_am_result}]) AC_SUBST([am__include])]) AC_SUBST([am__quote])]) # Fake the existence of programs that GNU maintainers use. 
-*- Autoconf -*- # Copyright (C) 1997-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ AC_DEFUN([AM_MISSING_PROG], [AC_REQUIRE([AM_MISSING_HAS_RUN]) $1=${$1-"${am_missing_run}$2"} AC_SUBST($1)]) # AM_MISSING_HAS_RUN # ------------------ # Define MISSING if not defined so far and test if it is modern enough. # If it is, set am_missing_run to use it, otherwise, to nothing. AC_DEFUN([AM_MISSING_HAS_RUN], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl AC_REQUIRE_AUX_FILE([missing])dnl if test x"${MISSING+set}" != xset; then MISSING="\${SHELL} '$am_aux_dir/missing'" fi # Use eval to expand $SHELL if eval "$MISSING --is-lightweight"; then am_missing_run="$MISSING " else am_missing_run= AC_MSG_WARN(['missing' script is too old or missing]) fi ]) # Helper functions for option handling. -*- Autoconf -*- # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_MANGLE_OPTION(NAME) # ----------------------- AC_DEFUN([_AM_MANGLE_OPTION], [[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) # _AM_SET_OPTION(NAME) # -------------------- # Set option NAME. Presently that only means defining a flag for this option. AC_DEFUN([_AM_SET_OPTION], [m4_define(_AM_MANGLE_OPTION([$1]), [1])]) # _AM_SET_OPTIONS(OPTIONS) # ------------------------ # OPTIONS is a space-separated list of Automake options. AC_DEFUN([_AM_SET_OPTIONS], [m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) # _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) # ------------------------------------------- # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. AC_DEFUN([_AM_IF_OPTION], [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) # Copyright (C) 1999-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_PROG_CC_C_O # --------------- # Like AC_PROG_CC_C_O, but changed for automake. We rewrite AC_PROG_CC # to automatically call this. AC_DEFUN([_AM_PROG_CC_C_O], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl AC_REQUIRE_AUX_FILE([compile])dnl AC_LANG_PUSH([C])dnl AC_CACHE_CHECK( [whether $CC understands -c and -o together], [am_cv_prog_cc_c_o], [AC_LANG_CONFTEST([AC_LANG_PROGRAM([])]) # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if AM_RUN_LOG([$CC -c conftest.$ac_ext -o conftest2.$ac_objext]) \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i]) if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi AC_LANG_POP([C])]) # For backward compatibility. 
AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])]) # Copyright (C) 2022-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_PROG_RM_F # --------------- # Check whether 'rm -f' without any arguments works. # https://bugs.gnu.org/10828 AC_DEFUN([_AM_PROG_RM_F], [am__rm_f_notfound= AS_IF([(rm -f && rm -fr && rm -rf) 2>/dev/null], [], [am__rm_f_notfound='""']) AC_SUBST(am__rm_f_notfound) ]) # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_RUN_LOG(COMMAND) # ------------------- # Run COMMAND, save the exit status in ac_status, and log it. # (This has been adapted from Autoconf's _AC_RUN_LOG macro.) AC_DEFUN([AM_RUN_LOG], [{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD ($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD (exit $ac_status); }]) # Check to make sure that the build environment is sane. -*- Autoconf -*- # Copyright (C) 1996-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_SLEEP_FRACTIONAL_SECONDS # ---------------------------- AC_DEFUN([_AM_SLEEP_FRACTIONAL_SECONDS], [dnl AC_CACHE_CHECK([whether sleep supports fractional seconds], am_cv_sleep_fractional_seconds, [dnl AS_IF([sleep 0.001 2>/dev/null], [am_cv_sleep_fractional_seconds=yes], [am_cv_sleep_fractional_seconds=no]) ])]) # _AM_FILESYSTEM_TIMESTAMP_RESOLUTION # ----------------------------------- # Determine the filesystem's resolution for file modification # timestamps. The coarsest we know of is FAT, with a resolution # of only two seconds, even with the most recent "exFAT" extensions. # The finest (e.g. ext4 with large inodes, XFS, ZFS) is one # nanosecond, matching clock_gettime. However, it is probably not # possible to delay execution of a shell script for less than one # millisecond, due to process creation overhead and scheduling # granularity, so we don't check for anything finer than that. (See below.) AC_DEFUN([_AM_FILESYSTEM_TIMESTAMP_RESOLUTION], [dnl AC_REQUIRE([_AM_SLEEP_FRACTIONAL_SECONDS]) AC_CACHE_CHECK([filesystem timestamp resolution], am_cv_filesystem_timestamp_resolution, [dnl # Default to the worst case. am_cv_filesystem_timestamp_resolution=2 # Only try to go finer than 1 sec if sleep can do it. # Don't try 1 sec, because if 0.01 sec and 0.1 sec don't work, # - 1 sec is not much of a win compared to 2 sec, and # - it takes 2 seconds to perform the test whether 1 sec works. # # Instead, just use the default 2s on platforms that have 1s resolution, # accept the extra 1s delay when using $sleep in the Automake tests, in # exchange for not incurring the 2s delay for running the test for all # packages. # am_try_resolutions= if test "$am_cv_sleep_fractional_seconds" = yes; then # Even a millisecond often causes a bunch of false positives, # so just try a hundredth of a second. The time saved between .001 and # .01 is not terribly consequential. 
am_try_resolutions="0.01 0.1 $am_try_resolutions" fi # In order to catch current-generation FAT out, we must *modify* files # that already exist; the *creation* timestamp is finer. Use names # that make ls -t sort them differently when they have equal # timestamps than when they have distinct timestamps, keeping # in mind that ls -t prints the *newest* file first. rm -f conftest.ts? : > conftest.ts1 : > conftest.ts2 : > conftest.ts3 # Make sure ls -t actually works. Do 'set' in a subshell so we don't # clobber the current shell's arguments. (Outer-level square brackets # are removed by m4; they're present so that m4 does not expand # ; be careful, easy to get confused.) if ( set X `[ls -t conftest.ts[12]]` && { test "$[]*" != "X conftest.ts1 conftest.ts2" || test "$[]*" != "X conftest.ts2 conftest.ts1"; } ); then :; else # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". _AS_ECHO_UNQUOTED( ["Bad output from ls -t: \"`[ls -t conftest.ts[12]]`\""], [AS_MESSAGE_LOG_FD]) AC_MSG_FAILURE([ls -t produces unexpected output. Make sure there is not a broken ls alias in your environment.]) fi for am_try_res in $am_try_resolutions; do # Any one fine-grained sleep might happen to cross the boundary # between two values of a coarser actual resolution, but if we do # two fine-grained sleeps in a row, at least one of them will fall # entirely within a coarse interval. echo alpha > conftest.ts1 sleep $am_try_res echo beta > conftest.ts2 sleep $am_try_res echo gamma > conftest.ts3 # We assume that 'ls -t' will make use of high-resolution # timestamps if the operating system supports them at all. if (set X `ls -t conftest.ts?` && test "$[]2" = conftest.ts3 && test "$[]3" = conftest.ts2 && test "$[]4" = conftest.ts1); then # # Ok, ls -t worked. If we're at a resolution of 1 second, we're done, # because we don't need to test make. make_ok=true if test $am_try_res != 1; then # But if we've succeeded so far with a subsecond resolution, we # have one more thing to check: make. It can happen that # everything else supports the subsecond mtimes, but make doesn't; # notably on macOS, which ships make 3.81 from 2006 (the last one # released under GPLv2). https://bugs.gnu.org/68808 # # We test $MAKE if it is defined in the environment, else "make". # It might get overridden later, but our hope is that in practice # it does not matter: it is the system "make" which is (by far) # the most likely to be broken, whereas if the user overrides it, # probably they did so with a better, or at least not worse, make. # https://lists.gnu.org/archive/html/automake/2024-06/msg00051.html # # Create a Makefile (real tab character here): rm -f conftest.mk echo 'conftest.ts1: conftest.ts2' >conftest.mk echo ' touch conftest.ts2' >>conftest.mk # # Now, running # touch conftest.ts1; touch conftest.ts2; make # should touch ts1 because ts2 is newer. This could happen by luck, # but most often, it will fail if make's support is insufficient. So # test for several consecutive successes. # # (We reuse conftest.ts[12] because we still want to modify existing # files, not create new ones, per above.) 
n=0 make=${MAKE-make} until test $n -eq 3; do echo one > conftest.ts1 sleep $am_try_res echo two > conftest.ts2 # ts2 should now be newer than ts1 if $make -f conftest.mk | grep 'up to date' >/dev/null; then make_ok=false break # out of $n loop fi n=`expr $n + 1` done fi # if $make_ok; then # Everything we know to check worked out, so call this resolution good. am_cv_filesystem_timestamp_resolution=$am_try_res break # out of $am_try_res loop fi # Otherwise, we'll go on to check the next resolution. fi done rm -f conftest.ts? # (end _am_filesystem_timestamp_resolution) ])]) # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_REQUIRE([_AM_FILESYSTEM_TIMESTAMP_RESOLUTION]) # This check should not be cached, as it may vary across builds of # different projects. AC_MSG_CHECKING([whether build environment is sane]) # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[[\\\"\#\$\&\'\`$am_lf]]*) AC_MSG_ERROR([unsafe absolute working directory name]);; esac case $srcdir in *[[\\\"\#\$\&\'\`$am_lf\ \ ]]*) AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);; esac # Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). am_build_env_is_sane=no am_has_slept=no rm -f conftest.file for am_try in 1 2; do echo "timestamp, slept: $am_has_slept" > conftest.file if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$[]*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi test "$[]2" = conftest.file ); then am_build_env_is_sane=yes break fi # Just in case. sleep "$am_cv_filesystem_timestamp_resolution" am_has_slept=yes done AC_MSG_RESULT([$am_build_env_is_sane]) if test "$am_build_env_is_sane" = no; then AC_MSG_ERROR([newly created file is older than distributed files! Check your system clock]) fi # If we didn't sleep, we still need to ensure time stamps of config.status and # generated files are strictly newer. am_sleep_pid= AS_IF([test -e conftest.file || grep 'slept: no' conftest.file >/dev/null 2>&1],, [dnl ( sleep "$am_cv_filesystem_timestamp_resolution" ) & am_sleep_pid=$! ]) AC_CONFIG_COMMANDS_PRE( [AC_MSG_CHECKING([that generated files are newer than configure]) if test -n "$am_sleep_pid"; then # Hide warnings about reused PIDs. wait $am_sleep_pid 2>/dev/null fi AC_MSG_RESULT([done])]) rm -f conftest.file ]) # Copyright (C) 2009-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_SILENT_RULES # ---------------- # Enable less verbose build rules support. AC_DEFUN([_AM_SILENT_RULES], [AM_DEFAULT_VERBOSITY=1 AC_ARG_ENABLE([silent-rules], [dnl AS_HELP_STRING( [--enable-silent-rules], [less verbose build output (undo: "make V=1")]) AS_HELP_STRING( [--disable-silent-rules], [verbose build output (undo: "make V=0")])dnl ]) dnl dnl A few 'make' implementations (e.g., NonStop OS and NextStep) dnl do not support nested variable expansions. dnl See automake bug#9928 and bug#10237. 
am_make=${MAKE-make} AC_CACHE_CHECK([whether $am_make supports nested variables], [am_cv_make_support_nested_variables], [if AS_ECHO([['TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit']]) | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi]) AC_SUBST([AM_V])dnl AM_SUBST_NOTMAKE([AM_V])dnl AC_SUBST([AM_DEFAULT_V])dnl AM_SUBST_NOTMAKE([AM_DEFAULT_V])dnl AC_SUBST([AM_DEFAULT_VERBOSITY])dnl AM_BACKSLASH='\' AC_SUBST([AM_BACKSLASH])dnl _AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl dnl Delay evaluation of AM_DEFAULT_VERBOSITY to the end to allow multiple calls dnl to AM_SILENT_RULES to change the default value. AC_CONFIG_COMMANDS_PRE([dnl case $enable_silent_rules in @%:@ ((( yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; esac if test $am_cv_make_support_nested_variables = yes; then dnl Using '$V' instead of '$(V)' breaks IRIX make. AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi ])dnl ]) # AM_SILENT_RULES([DEFAULT]) # -------------------------- # Set the default verbosity level to DEFAULT ("yes" being less verbose, "no" or # empty being verbose). AC_DEFUN([AM_SILENT_RULES], [AC_REQUIRE([_AM_SILENT_RULES]) AM_DEFAULT_VERBOSITY=m4_if([$1], [yes], [0], [1])]) # Copyright (C) 2001-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_STRIP # --------------------- # One issue with vendor 'install' (even GNU) is that you can't # specify the program used to strip binaries. This is especially # annoying in cross-compiling environments, where the build's strip # is unlikely to handle the host's binaries. # Fortunately install-sh will honor a STRIPPROG variable, so we # always use install-sh in "make install-strip", and initialize # STRIPPROG with the value of the STRIP variable (set by the user). AC_DEFUN([AM_PROG_INSTALL_STRIP], [AC_REQUIRE([AM_PROG_INSTALL_SH])dnl # Installed binaries are usually stripped using 'strip' when the user # run "make install-strip". However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the 'STRIP' environment variable to overrule this program. dnl Don't test for $cross_compiling = yes, because it might be 'maybe'. if test "$cross_compiling" != no; then AC_CHECK_TOOL([STRIP], [strip], :) fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" AC_SUBST([INSTALL_STRIP_PROGRAM])]) # Copyright (C) 2006-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_SUBST_NOTMAKE(VARIABLE) # --------------------------- # Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in. # This macro is traced by Automake. AC_DEFUN([_AM_SUBST_NOTMAKE]) # AM_SUBST_NOTMAKE(VARIABLE) # -------------------------- # Public sister of _AM_SUBST_NOTMAKE. AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)]) # Check how to create a tarball. -*- Autoconf -*- # Copyright (C) 2004-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
# _AM_PROG_TAR(FORMAT) # -------------------- # Check how to create a tarball in format FORMAT. # FORMAT should be one of 'v7', 'ustar', or 'pax'. # # Substitute a variable $(am__tar) that is a command # writing to stdout a FORMAT-tarball containing the directory # $tardir. # tardir=directory && $(am__tar) > result.tar # # Substitute a variable $(am__untar) that extracts such # a tarball read from stdin. # $(am__untar) < result.tar # AC_DEFUN([_AM_PROG_TAR], [# Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AC_SUBST([AMTAR], ['$${TAR-tar}']) # We'll loop over all known methods to create a tar archive until one works. _am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' m4_if([$1], [v7], [am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'], [m4_case([$1], [ustar], [# The POSIX 1988 'ustar' format is defined with fixed-size fields. # There is notably a 21 bits limit for the UID and the GID. In fact, # the 'pax' utility can hang on bigger UID/GID (see automake bug#8343 # and bug#13588). am_max_uid=2097151 # 2^21 - 1 am_max_gid=$am_max_uid # The $UID and $GID variables are not portable, so we need to resort # to the POSIX-mandated id(1) utility. Errors in the 'id' calls # below are definitely unexpected, so allow the users to see them # (that is, avoid stderr redirection). am_uid=`id -u || echo unknown` am_gid=`id -g || echo unknown` AC_MSG_CHECKING([whether UID '$am_uid' is supported by ustar format]) if test x$am_uid = xunknown; then AC_MSG_WARN([ancient id detected; assuming current UID is ok, but dist-ustar might not work]) elif test $am_uid -le $am_max_uid; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) _am_tools=none fi AC_MSG_CHECKING([whether GID '$am_gid' is supported by ustar format]) if test x$am_gid = xunknown; then AC_MSG_WARN([ancient id detected; assuming current GID is ok, but dist-ustar might not work]) elif test $am_gid -le $am_max_gid; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) _am_tools=none fi], [pax], [], [m4_fatal([Unknown tar format])]) AC_MSG_CHECKING([how to create a $1 tar archive]) # Go ahead even if we have the value already cached. We do so because we # need to set the values for the 'am__tar' and 'am__untar' variables. _am_tools=${am_cv_prog_tar_$1-$_am_tools} for _am_tool in $_am_tools; do case $_am_tool in gnutar) for _am_tar in tar gnutar gtar; do AM_RUN_LOG([$_am_tar --version]) && break done am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"' am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"' am__untar="$_am_tar -xf -" ;; plaintar) # Must skip GNU tar: if it does not support --format= it doesn't create # ustar tarball either. (tar --version) >/dev/null 2>&1 && continue am__tar='tar chf - "$$tardir"' am__tar_='tar chf - "$tardir"' am__untar='tar xf -' ;; pax) am__tar='pax -L -x $1 -w "$$tardir"' am__tar_='pax -L -x $1 -w "$tardir"' am__untar='pax -r' ;; cpio) am__tar='find "$$tardir" -print | cpio -o -H $1 -L' am__tar_='find "$tardir" -print | cpio -o -H $1 -L' am__untar='cpio -i -H $1 -d' ;; none) am__tar=false am__tar_=false am__untar=false ;; esac # If the value was cached, stop now. We just wanted to have am__tar # and am__untar set. test -n "${am_cv_prog_tar_$1}" && break # tar/untar a dummy directory, and stop if the command works.
rm -rf conftest.dir mkdir conftest.dir echo GrepMe > conftest.dir/file AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar]) rm -rf conftest.dir if test -s conftest.tar; then AM_RUN_LOG([$am__untar <conftest.tar]) AM_RUN_LOG([cat conftest.dir/file]) grep GrepMe conftest.dir/file >/dev/null 2>&1 && break fi done rm -rf conftest.dir AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool]) AC_MSG_RESULT([$am_cv_prog_tar_$1])]) AC_SUBST([am__tar]) AC_SUBST([am__untar]) ]) # _AM_PROG_TAR # Copyright (C) 2022-2024 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # _AM_PROG_XARGS_N # ---------------- # Check whether 'xargs -n' works. It should work everywhere, so the fallback # is not optimized at all as we never expect to use it. AC_DEFUN([_AM_PROG_XARGS_N], [AC_CACHE_CHECK([xargs -n works], am_cv_xargs_n_works, [dnl AS_IF([test "`echo 1 2 3 | xargs -n2 echo`" = "1 2 3"], [am_cv_xargs_n_works=yes], [am_cv_xargs_n_works=no])]) AS_IF([test "$am_cv_xargs_n_works" = yes], [am__xargs_n='xargs -n'], [dnl am__xargs_n='am__xargs_n () { shift; sed "s/ /\\n/g" | while read am__xargs_n_arg; do "$@" "$am__xargs_n_arg"; done; }' ])dnl AC_SUBST(am__xargs_n) ]) m4_include([m4/ax_ac_append_to_file.m4]) m4_include([m4/ax_ac_print_to_file.m4]) m4_include([m4/ax_add_am_macro_static.m4]) m4_include([m4/ax_am_macros_static.m4]) m4_include([m4/ax_append_compile_flags.m4]) m4_include([m4/ax_append_flag.m4]) m4_include([m4/ax_append_link_flags.m4]) m4_include([m4/ax_check_compile_flag.m4]) m4_include([m4/ax_check_gnu_make.m4]) m4_include([m4/ax_check_link_flag.m4]) m4_include([m4/ax_check_user_namespace.m4]) m4_include([m4/ax_check_uts_namespace.m4]) m4_include([m4/ax_code_coverage.m4]) m4_include([m4/ax_compiler_vendor.m4]) m4_include([m4/ax_cxx_compile_stdcxx.m4]) m4_include([m4/ax_cxx_compile_stdcxx_14.m4]) m4_include([m4/ax_file_escapes.m4]) m4_include([m4/ax_pthread.m4]) m4_include([m4/ax_require_defined.m4]) m4_include([m4/libtool.m4]) m4_include([m4/ltoptions.m4]) m4_include([m4/ltsugar.m4]) m4_include([m4/ltversion.m4]) m4_include([m4/lt~obsolete.m4]) m4_include([m4/pkg.m4]) gevent-24.11.1/deps/c-ares/aminclude_static.am000066400000000000000000000151341471441230600211120ustar00rootroot00000000000000 # aminclude_static.am generated automatically by Autoconf # from AX_AM_MACROS_STATIC on Mon Sep 23 15:51:56 CDT 2024 # Code coverage # # Optional: # - CODE_COVERAGE_DIRECTORY: Top-level directory for code coverage reporting. # Multiple directories may be specified, separated by whitespace. # (Default: $(top_builddir)) # - CODE_COVERAGE_OUTPUT_FILE: Filename and path for the .info file generated # by lcov for code coverage. (Default: # $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage.info) # - CODE_COVERAGE_OUTPUT_DIRECTORY: Directory for generated code coverage # reports to be created. (Default: # $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage) # - CODE_COVERAGE_BRANCH_COVERAGE: Set to 1 to enforce branch coverage, # set to 0 to disable it and leave empty to stay with the default. # (Default: empty) # - CODE_COVERAGE_LCOV_SHOPTS_DEFAULT: Extra options shared between both lcov # instances. (Default: based on ) # - CODE_COVERAGE_LCOV_SHOPTS: Extra options to shared between both lcov # instances. (Default: ) # - CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH: --gcov-tool pathtogcov # - CODE_COVERAGE_LCOV_OPTIONS_DEFAULT: Extra options to pass to the # collecting lcov instance.
(Default: ) # - CODE_COVERAGE_LCOV_OPTIONS: Extra options to pass to the collecting lcov # instance. (Default: ) # - CODE_COVERAGE_LCOV_RMOPTS_DEFAULT: Extra options to pass to the filtering # lcov instance. (Default: empty) # - CODE_COVERAGE_LCOV_RMOPTS: Extra options to pass to the filtering lcov # instance. (Default: ) # - CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT: Extra options to pass to the # genhtml instance. (Default: based on ) # - CODE_COVERAGE_GENHTML_OPTIONS: Extra options to pass to the genhtml # instance. (Default: ) # - CODE_COVERAGE_IGNORE_PATTERN: Extra glob pattern of files to ignore # # The generated report will be titled using the $(PACKAGE_NAME) and # $(PACKAGE_VERSION). In order to add the current git hash to the title, # use the git-version-gen script, available online. # Optional variables # run only on top dir if CODE_COVERAGE_ENABLED ifeq ($(abs_builddir), $(abs_top_builddir)) CODE_COVERAGE_DIRECTORY ?= $(top_builddir) CODE_COVERAGE_OUTPUT_FILE ?= $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage.info CODE_COVERAGE_OUTPUT_DIRECTORY ?= $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage CODE_COVERAGE_BRANCH_COVERAGE ?= CODE_COVERAGE_LCOV_SHOPTS_DEFAULT ?= $(if $(CODE_COVERAGE_BRANCH_COVERAGE),--rc lcov_branch_coverage=$(CODE_COVERAGE_BRANCH_COVERAGE)) CODE_COVERAGE_LCOV_SHOPTS ?= $(CODE_COVERAGE_LCOV_SHOPTS_DEFAULT) CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH ?= --gcov-tool "$(GCOV)" CODE_COVERAGE_LCOV_OPTIONS_DEFAULT ?= $(CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH) CODE_COVERAGE_LCOV_OPTIONS ?= $(CODE_COVERAGE_LCOV_OPTIONS_DEFAULT) CODE_COVERAGE_LCOV_RMOPTS_DEFAULT ?= CODE_COVERAGE_LCOV_RMOPTS ?= $(CODE_COVERAGE_LCOV_RMOPTS_DEFAULT) CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT ?=$(if $(CODE_COVERAGE_BRANCH_COVERAGE),--rc genhtml_branch_coverage=$(CODE_COVERAGE_BRANCH_COVERAGE)) CODE_COVERAGE_GENHTML_OPTIONS ?= $(CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT) CODE_COVERAGE_IGNORE_PATTERN ?= GITIGNOREFILES := $(GITIGNOREFILES) $(CODE_COVERAGE_OUTPUT_FILE) $(CODE_COVERAGE_OUTPUT_DIRECTORY) code_coverage_v_lcov_cap = $(code_coverage_v_lcov_cap_$(V)) code_coverage_v_lcov_cap_ = $(code_coverage_v_lcov_cap_$(AM_DEFAULT_VERBOSITY)) code_coverage_v_lcov_cap_0 = @echo " LCOV --capture" $(CODE_COVERAGE_OUTPUT_FILE); code_coverage_v_lcov_ign = $(code_coverage_v_lcov_ign_$(V)) code_coverage_v_lcov_ign_ = $(code_coverage_v_lcov_ign_$(AM_DEFAULT_VERBOSITY)) code_coverage_v_lcov_ign_0 = @echo " LCOV --remove /tmp/*" $(CODE_COVERAGE_IGNORE_PATTERN); code_coverage_v_genhtml = $(code_coverage_v_genhtml_$(V)) code_coverage_v_genhtml_ = $(code_coverage_v_genhtml_$(AM_DEFAULT_VERBOSITY)) code_coverage_v_genhtml_0 = @echo " GEN " "$(CODE_COVERAGE_OUTPUT_DIRECTORY)"; code_coverage_quiet = $(code_coverage_quiet_$(V)) code_coverage_quiet_ = $(code_coverage_quiet_$(AM_DEFAULT_VERBOSITY)) code_coverage_quiet_0 = --quiet # sanitizes the test-name: replaces with underscores: dashes and dots code_coverage_sanitize = $(subst -,_,$(subst .,_,$(1))) # Use recursive makes in order to ignore errors during check check-code-coverage: -$(AM_V_at)$(MAKE) $(AM_MAKEFLAGS) -k check $(AM_V_at)$(MAKE) $(AM_MAKEFLAGS) code-coverage-capture # Capture code coverage data code-coverage-capture: code-coverage-capture-hook $(code_coverage_v_lcov_cap)$(LCOV) $(code_coverage_quiet) $(addprefix --directory ,$(CODE_COVERAGE_DIRECTORY)) --capture --output-file "$(CODE_COVERAGE_OUTPUT_FILE).tmp" --test-name "$(call code_coverage_sanitize,$(PACKAGE_NAME)-$(PACKAGE_VERSION))" --no-checksum --compat-libtool $(CODE_COVERAGE_LCOV_SHOPTS) $(CODE_COVERAGE_LCOV_OPTIONS) 
$(code_coverage_v_lcov_ign)$(LCOV) $(code_coverage_quiet) $(addprefix --directory ,$(CODE_COVERAGE_DIRECTORY)) --remove "$(CODE_COVERAGE_OUTPUT_FILE).tmp" "/tmp/*" $(CODE_COVERAGE_IGNORE_PATTERN) --output-file "$(CODE_COVERAGE_OUTPUT_FILE)" $(CODE_COVERAGE_LCOV_SHOPTS) $(CODE_COVERAGE_LCOV_RMOPTS) -@rm -f "$(CODE_COVERAGE_OUTPUT_FILE).tmp" $(code_coverage_v_genhtml)LANG=C $(GENHTML) $(code_coverage_quiet) $(addprefix --prefix ,$(CODE_COVERAGE_DIRECTORY)) --output-directory "$(CODE_COVERAGE_OUTPUT_DIRECTORY)" --title "$(PACKAGE_NAME)-$(PACKAGE_VERSION) Code Coverage" --legend --show-details "$(CODE_COVERAGE_OUTPUT_FILE)" $(CODE_COVERAGE_GENHTML_OPTIONS) @echo "file://$(abs_builddir)/$(CODE_COVERAGE_OUTPUT_DIRECTORY)/index.html" code-coverage-clean: -$(LCOV) --directory $(top_builddir) -z -rm -rf "$(CODE_COVERAGE_OUTPUT_FILE)" "$(CODE_COVERAGE_OUTPUT_FILE).tmp" "$(CODE_COVERAGE_OUTPUT_DIRECTORY)" -find . \( -name "*.gcda" -o -name "*.gcno" -o -name "*.gcov" \) -delete code-coverage-dist-clean: AM_DISTCHECK_CONFIGURE_FLAGS := $(AM_DISTCHECK_CONFIGURE_FLAGS) --disable-code-coverage else # ifneq ($(abs_builddir), $(abs_top_builddir)) check-code-coverage: code-coverage-capture: code-coverage-capture-hook code-coverage-clean: code-coverage-dist-clean: endif # ifeq ($(abs_builddir), $(abs_top_builddir)) else #! CODE_COVERAGE_ENABLED # Use recursive makes in order to ignore errors during check check-code-coverage: @echo "Need to reconfigure with --enable-code-coverage" # Capture code coverage data code-coverage-capture: code-coverage-capture-hook @echo "Need to reconfigure with --enable-code-coverage" code-coverage-clean: code-coverage-dist-clean: endif #CODE_COVERAGE_ENABLED # Hook rule executed before code-coverage-capture, overridable by the user code-coverage-capture-hook: .PHONY: check-code-coverage code-coverage-capture code-coverage-dist-clean code-coverage-clean code-coverage-capture-hook gevent-24.11.1/deps/c-ares/buildconf000077500000000000000000000003231471441230600171500ustar00rootroot00000000000000#!/bin/sh # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT echo "*** Do not use buildconf. Instead, just use: autoreconf -fi" >&2 exec ${AUTORECONF:-autoreconf} -fi "${@}" gevent-24.11.1/deps/c-ares/config/000077500000000000000000000000001471441230600165245ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/config/compile000077500000000000000000000167051471441230600201130ustar00rootroot00000000000000#! /bin/sh # Wrapper for compilers which do not understand '-c -o'. scriptversion=2024-06-19.01; # UTC # Copyright (C) 1999-2024 Free Software Foundation, Inc. # Written by Tom Tromey . # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. 
# This file is maintained in Automake, please report # bugs to or send patches to # . nl=' ' # We need space, tab and new line, in precisely that order. Quoting is # there to prevent tools from complaining about whitespace usage. IFS=" "" $nl" file_conv= # func_file_conv build_file lazy # Convert a $build file to $host form and store it in $file # Currently only supports Windows hosts. If the determined conversion # type is listed in (the comma separated) LAZY, no conversion will # take place. func_file_conv () { file=$1 case $file in / | /[!/]*) # absolute file, and not a UNC file if test -z "$file_conv"; then # lazily determine how to convert abs files case `uname -s` in MINGW*) file_conv=mingw ;; CYGWIN* | MSYS*) file_conv=cygwin ;; *) file_conv=wine ;; esac fi case $file_conv/,$2, in *,$file_conv,*) ;; mingw/*) file=`cmd //C echo "$file " | sed -e 's/"\(.*\) " *$/\1/'` ;; cygwin/* | msys/*) file=`cygpath -m "$file" || echo "$file"` ;; wine/*) file=`winepath -w "$file" || echo "$file"` ;; esac ;; esac } # func_cl_dashL linkdir # Make cl look for libraries in LINKDIR func_cl_dashL () { func_file_conv "$1" if test -z "$lib_path"; then lib_path=$file else lib_path="$lib_path;$file" fi linker_opts="$linker_opts -LIBPATH:$file" } # func_cl_dashl library # Do a library search-path lookup for cl func_cl_dashl () { lib=$1 found=no save_IFS=$IFS IFS=';' for dir in $lib_path $LIB do IFS=$save_IFS if $shared && test -f "$dir/$lib.dll.lib"; then found=yes lib=$dir/$lib.dll.lib break fi if test -f "$dir/$lib.lib"; then found=yes lib=$dir/$lib.lib break fi if test -f "$dir/lib$lib.a"; then found=yes lib=$dir/lib$lib.a break fi done IFS=$save_IFS if test "$found" != yes; then lib=$lib.lib fi } # func_cl_wrapper cl arg... # Adjust compile command to suit cl func_cl_wrapper () { # Assume a capable shell lib_path= shared=: linker_opts= for arg do if test -n "$eat"; then eat= else case $1 in -o) # configure might choose to run compile as 'compile cc -o foo foo.c'. eat=1 case $2 in *.o | *.lo | *.[oO][bB][jJ]) func_file_conv "$2" set x "$@" -Fo"$file" shift ;; *) func_file_conv "$2" set x "$@" -Fe"$file" shift ;; esac ;; -I) eat=1 func_file_conv "$2" mingw set x "$@" -I"$file" shift ;; -I*) func_file_conv "${1#-I}" mingw set x "$@" -I"$file" shift ;; -l) eat=1 func_cl_dashl "$2" set x "$@" "$lib" shift ;; -l*) func_cl_dashl "${1#-l}" set x "$@" "$lib" shift ;; -L) eat=1 func_cl_dashL "$2" ;; -L*) func_cl_dashL "${1#-L}" ;; -static) shared=false ;; -Wl,*) arg=${1#-Wl,} save_ifs="$IFS"; IFS=',' for flag in $arg; do IFS="$save_ifs" linker_opts="$linker_opts $flag" done IFS="$save_ifs" ;; -Xlinker) eat=1 linker_opts="$linker_opts $2" ;; -*) set x "$@" "$1" shift ;; *.cc | *.CC | *.cxx | *.CXX | *.[cC]++) func_file_conv "$1" set x "$@" -Tp"$file" shift ;; *.c | *.cpp | *.CPP | *.lib | *.LIB | *.Lib | *.OBJ | *.obj | *.[oO]) func_file_conv "$1" mingw set x "$@" "$file" shift ;; *) set x "$@" "$1" shift ;; esac fi shift done if test -n "$linker_opts"; then linker_opts="-link$linker_opts" fi exec "$@" $linker_opts exit 1 } eat= case $1 in '') echo "$0: No command. Try '$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: compile [--help] [--version] PROGRAM [ARGS] Wrapper for compilers which do not understand '-c -o'. Remove '-o dest.o' from ARGS, run PROGRAM with the remaining arguments, and rename the output as expected. If you are trying to build a whole package this is not the right script to run: please start by reading the file 'INSTALL'. Report bugs to . GNU Automake home page: . 
General help using GNU software: . EOF exit $? ;; -v | --v*) echo "compile (GNU Automake) $scriptversion" exit $? ;; cl | *[/\\]cl | cl.exe | *[/\\]cl.exe | \ clang-cl | *[/\\]clang-cl | clang-cl.exe | *[/\\]clang-cl.exe | \ icl | *[/\\]icl | icl.exe | *[/\\]icl.exe ) func_cl_wrapper "$@" # Doesn't return... ;; esac ofile= cfile= for arg do if test -n "$eat"; then eat= else case $1 in -o) # configure might choose to run compile as 'compile cc -o foo foo.c'. # So we strip '-o arg' only if arg is an object. eat=1 case $2 in *.o | *.obj) ofile=$2 ;; *) set x "$@" -o "$2" shift ;; esac ;; *.c) cfile=$1 set x "$@" "$1" shift ;; *) set x "$@" "$1" shift ;; esac fi shift done if test -z "$ofile" || test -z "$cfile"; then # If no '-o' option was seen then we might have been invoked from a # pattern rule where we don't need one. That is ok -- this is a # normal compilation that the losing compiler can handle. If no # '.c' file was seen then we are probably linking. That is also # ok. exec "$@" fi # Name of file we expect compiler to create. cofile=`echo "$cfile" | sed 's|^.*[\\/]||; s|^[a-zA-Z]:||; s/\.c$/.o/'` # Create the lock directory. # Note: use '[/\\:.-]' here to ensure that we don't use the same name # that we are using for the .o file. Also, base the name on the expected # object file name, since that is what matters with a parallel build. lockdir=`echo "$cofile" | sed -e 's|[/\\:.-]|_|g'`.d while true; do if mkdir "$lockdir" >/dev/null 2>&1; then break fi sleep 1 done # FIXME: race condition here if user kills between mkdir and trap. trap "rmdir '$lockdir'; exit 1" 1 2 15 # Run the compile. "$@" ret=$? if test -f "$cofile"; then test "$cofile" = "$ofile" || mv "$cofile" "$ofile" elif test -f "${cofile}bj"; then test "${cofile}bj" = "$ofile" || mv "${cofile}bj" "$ofile" fi rmdir "$lockdir" exit $ret # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/c-ares/config/config.guess000066400000000000000000001430671471441230600210540ustar00rootroot00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2024 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2024-07-27' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner; maintained since 2000 by Ben Elliston. 
# # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.guess # # Please send patches to . # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system '$me' is run on. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2024 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi # Just in case it came from the environment. GUESS= # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, 'CC_FOR_BUILD' used to be named 'HOST_CC'. We still # use 'HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. tmp= # shellcheck disable=SC2172 trap 'test -z "$tmp" || rm -fr "$tmp"' 0 1 2 13 15 set_cc_for_build() { # prevent multiple calls if $tmp is already set test "$tmp" && return 0 : "${TMPDIR=/tmp}" # shellcheck disable=SC2039,SC3028 { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir "$tmp" 2>/dev/null) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir "$tmp" 2>/dev/null) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } dummy=$tmp/dummy case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in ,,) echo "int x;" > "$dummy.c" for driver in cc gcc c17 c99 c89 ; do if ($driver -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then CC_FOR_BUILD=$driver break fi done if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac } # This is needed to find uname on a Pyramid OSx when run in the BSD universe. 
# (ghazi@noc.rutgers.edu 1994-08-24) if test -f /.attbin/uname ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown case $UNAME_SYSTEM in Linux|GNU|GNU/*) LIBC=unknown set_cc_for_build cat <<-EOF > "$dummy.c" #if defined(__ANDROID__) LIBC=android #else #include #if defined(__UCLIBC__) LIBC=uclibc #elif defined(__dietlibc__) LIBC=dietlibc #elif defined(__GLIBC__) LIBC=gnu #elif defined(__LLVM_LIBC__) LIBC=llvm #else #include /* First heuristic to detect musl libc. */ #ifdef __DEFINED_va_list LIBC=musl #endif #endif #endif EOF cc_set_libc=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'` eval "$cc_set_libc" # Second heuristic to detect musl libc. if [ "$LIBC" = unknown ] && command -v ldd >/dev/null && ldd --version 2>&1 | grep -q ^musl; then LIBC=musl fi # If the system lacks a compiler, then just pick glibc. # We could probably try harder. if [ "$LIBC" = unknown ]; then LIBC=gnu fi ;; esac # Note: order is significant - the case branches are not exclusive. case $UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \ /sbin/sysctl -n hw.machine_arch 2>/dev/null || \ /usr/sbin/sysctl -n hw.machine_arch 2>/dev/null || \ echo unknown)` case $UNAME_MACHINE_ARCH in aarch64eb) machine=aarch64_be-unknown ;; armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; earmv*) arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'` endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'` machine=${arch}${endian}-unknown ;; *) machine=$UNAME_MACHINE_ARCH-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently (or will in the future) and ABI. case $UNAME_MACHINE_ARCH in earm*) os=netbsdelf ;; arm*|i386|m68k|ns32k|sh3*|sparc|vax) set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # Determine ABI tags. case $UNAME_MACHINE_ARCH in earm*) expr='s/^earmv[0-9]/-eabi/;s/eb$//' abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"` ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case $UNAME_VERSION in Debian*) release='-gnu' ;; *) release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. 
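# For example (illustrative, not exhaustive): an amd64 NetBSD 10 host would typically be reported by this branch as something like x86_64-unknown-netbsd10.0; the exact tuple depends on the machine architecture, object format and release detected above.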
GUESS=$machine-${os}${release}${abi-} ;; *:Bitrig:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-bitrig$UNAME_RELEASE ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-openbsd$UNAME_RELEASE ;; *:SecBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/SecBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-secbsd$UNAME_RELEASE ;; *:LibertyBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-libertybsd$UNAME_RELEASE ;; *:MidnightBSD:*:*) GUESS=$UNAME_MACHINE-unknown-midnightbsd$UNAME_RELEASE ;; *:ekkoBSD:*:*) GUESS=$UNAME_MACHINE-unknown-ekkobsd$UNAME_RELEASE ;; *:SolidBSD:*:*) GUESS=$UNAME_MACHINE-unknown-solidbsd$UNAME_RELEASE ;; *:OS108:*:*) GUESS=$UNAME_MACHINE-unknown-os108_$UNAME_RELEASE ;; macppc:MirBSD:*:*) GUESS=powerpc-unknown-mirbsd$UNAME_RELEASE ;; *:MirBSD:*:*) GUESS=$UNAME_MACHINE-unknown-mirbsd$UNAME_RELEASE ;; *:Sortix:*:*) GUESS=$UNAME_MACHINE-unknown-sortix ;; *:Twizzler:*:*) GUESS=$UNAME_MACHINE-unknown-twizzler ;; *:Redox:*:*) GUESS=$UNAME_MACHINE-unknown-redox ;; mips:OSF1:*.*) GUESS=mips-dec-osf1 ;; alpha:OSF1:*:*) # Reset EXIT trap before exiting to avoid spurious non-zero exit code. trap '' 0 case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case $ALPHA_CPU_TYPE in "EV4 (21064)") UNAME_MACHINE=alpha ;; "EV4.5 (21064)") UNAME_MACHINE=alpha ;; "LCA4 (21066/21068)") UNAME_MACHINE=alpha ;; "EV5 (21164)") UNAME_MACHINE=alphaev5 ;; "EV5.6 (21164A)") UNAME_MACHINE=alphaev56 ;; "EV5.6 (21164PC)") UNAME_MACHINE=alphapca56 ;; "EV5.7 (21164PC)") UNAME_MACHINE=alphapca57 ;; "EV6 (21264)") UNAME_MACHINE=alphaev6 ;; "EV6.7 (21264A)") UNAME_MACHINE=alphaev67 ;; "EV6.8CB (21264C)") UNAME_MACHINE=alphaev68 ;; "EV6.8AL (21264B)") UNAME_MACHINE=alphaev68 ;; "EV6.8CX (21264D)") UNAME_MACHINE=alphaev68 ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE=alphaev69 ;; "EV7 (21364)") UNAME_MACHINE=alphaev7 ;; "EV7.9 (21364A)") UNAME_MACHINE=alphaev79 ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. OSF_REL=`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` GUESS=$UNAME_MACHINE-dec-osf$OSF_REL ;; Amiga*:UNIX_System_V:4.0:*) GUESS=m68k-unknown-sysv4 ;; *:[Aa]miga[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-amigaos ;; *:[Mm]orph[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-morphos ;; *:OS/390:*:*) GUESS=i370-ibm-openedition ;; *:z/VM:*:*) GUESS=s390-ibm-zvmoe ;; *:OS400:*:*) GUESS=powerpc-ibm-os400 ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) GUESS=arm-acorn-riscix$UNAME_RELEASE ;; arm*:riscos:*:*|arm*:RISCOS:*:*) GUESS=arm-unknown-riscos ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) GUESS=hppa1.1-hitachi-hiuxmpp ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. 
case `(/bin/universe) 2>/dev/null` in att) GUESS=pyramid-pyramid-sysv3 ;; *) GUESS=pyramid-pyramid-bsd ;; esac ;; NILE*:*:*:dcosx) GUESS=pyramid-pyramid-svr4 ;; DRS?6000:unix:4.0:6*) GUESS=sparc-icl-nx6 ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) GUESS=sparc-icl-nx7 ;; esac ;; s390x:SunOS:*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$UNAME_MACHINE-ibm-solaris2$SUN_REL ;; sun4H:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-hal-solaris2$SUN_REL ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris2$SUN_REL ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) GUESS=i386-pc-auroraux$UNAME_RELEASE ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) set_cc_for_build SUN_ARCH=i386 # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -m64 -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH=x86_64 fi fi SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$SUN_ARCH-pc-solaris2$SUN_REL ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris3$SUN_REL ;; sun4*:SunOS:*:*) case `/usr/bin/arch -k` in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like '4.1.3-JL'. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/'` GUESS=sparc-sun-sunos$SUN_REL ;; sun3*:SunOS:*:*) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3 case `/bin/arch` in sun3) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun4) GUESS=sparc-sun-sunos$UNAME_RELEASE ;; esac ;; aushp:SunOS:*:*) GUESS=sparc-auspex-sunos$UNAME_RELEASE ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. But MiNT is downward compatible to TOS, so this should # be no problem. 
atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) GUESS=m68k-milan-mint$UNAME_RELEASE ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) GUESS=m68k-hades-mint$UNAME_RELEASE ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) GUESS=m68k-unknown-mint$UNAME_RELEASE ;; m68k:machten:*:*) GUESS=m68k-apple-machten$UNAME_RELEASE ;; powerpc:machten:*:*) GUESS=powerpc-apple-machten$UNAME_RELEASE ;; RISC*:Mach:*:*) GUESS=mips-dec-mach_bsd4.3 ;; RISC*:ULTRIX:*:*) GUESS=mips-dec-ultrix$UNAME_RELEASE ;; VAX*:ULTRIX*:*:*) GUESS=vax-dec-ultrix$UNAME_RELEASE ;; 2020:CLIX:*:* | 2430:CLIX:*:*) GUESS=clipper-intergraph-clix$UNAME_RELEASE ;; mips:*:*:UMIPS | mips:*:*:RISCos) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`"$dummy" "$dummyarg"` && { echo "$SYSTEM_NAME"; exit; } GUESS=mips-mips-riscos$UNAME_RELEASE ;; Motorola:PowerMAX_OS:*:*) GUESS=powerpc-motorola-powermax ;; Motorola:*:4.3:PL8-*) GUESS=powerpc-harris-powermax ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) GUESS=powerpc-harris-powermax ;; Night_Hawk:Power_UNIX:*:*) GUESS=powerpc-harris-powerunix ;; m88k:CX/UX:7*:*) GUESS=m88k-harris-cxux7 ;; m88k:*:4*:R4*) GUESS=m88k-motorola-sysv4 ;; m88k:*:3*:R3*) GUESS=m88k-motorola-sysv3 ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if test "$UNAME_PROCESSOR" = mc88100 || test "$UNAME_PROCESSOR" = mc88110 then if test "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx || \ test "$TARGET_BINARY_INTERFACE"x = x then GUESS=m88k-dg-dgux$UNAME_RELEASE else GUESS=m88k-dg-dguxbcs$UNAME_RELEASE fi else GUESS=i586-dg-dgux$UNAME_RELEASE fi ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) GUESS=m88k-dolphin-sysv3 ;; M88*:*:R3*:*) # Delta 88k system running SVR3 GUESS=m88k-motorola-sysv3 ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) GUESS=m88k-tektronix-sysv3 ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) GUESS=m68k-tektronix-bsd ;; *:IRIX*:*:*) IRIX_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/g'` GUESS=mips-sgi-irix$IRIX_REL ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
GUESS=romp-ibm-aix # uname -m gives an 8 hex-code CPU id ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) GUESS=i386-ibm-aix ;; ia64:AIX:*:*) if test -x /usr/bin/oslevel ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$UNAME_MACHINE-ibm-aix$IBM_REV ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include int main () { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` then GUESS=$SYSTEM_NAME else GUESS=rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then GUESS=rs6000-ibm-aix3.2.4 else GUESS=rs6000-ibm-aix3.2 fi ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if test -x /usr/bin/lslpp ; then IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | \ awk -F: '{ print $3 }' | sed s/[0-9]*$/0/` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$IBM_ARCH-ibm-aix$IBM_REV ;; *:AIX:*:*) GUESS=rs6000-ibm-aix ;; ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*) GUESS=romp-ibm-bsd4.4 ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and GUESS=romp-ibm-bsd$UNAME_RELEASE # 4.3 with uname added to ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) GUESS=rs6000-bull-bosx ;; DPX/2?00:B.O.S.:*:*) GUESS=m68k-bull-sysv3 ;; 9000/[34]??:4.3bsd:1.*:*) GUESS=m68k-hp-bsd ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) GUESS=m68k-hp-bsd4.4 ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` case $UNAME_MACHINE in 9000/31?) HP_ARCH=m68000 ;; 9000/[34]??) HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if test -x /usr/bin/getconf; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case $sc_cpu_version in 523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0 528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case $sc_kernel_bits in 32) HP_ARCH=hppa2.0n ;; 64) HP_ARCH=hppa2.0w ;; '') HP_ARCH=hppa2.0 ;; # HP-UX 10.20 esac ;; esac fi if test "$HP_ARCH" = ""; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #define _HPUX_SOURCE #include #include int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if test "$HP_ARCH" = hppa2.0w then set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code. 
GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH=hppa2.0w else HP_ARCH=hppa64 fi fi GUESS=$HP_ARCH-hp-hpux$HPUX_REV ;; ia64:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` GUESS=ia64-hp-hpux$HPUX_REV ;; 3050*:HI-UX:*:*) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. */ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } GUESS=unknown-hitachi-hiuxwe2 ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*) GUESS=hppa1.1-hp-bsd ;; 9000/8??:4.3bsd:*:*) GUESS=hppa1.0-hp-bsd ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) GUESS=hppa1.0-hp-mpeix ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*) GUESS=hppa1.1-hp-osf ;; hp8??:OSF1:*:*) GUESS=hppa1.0-hp-osf ;; i*86:OSF1:*:*) if test -x /usr/sbin/sysversion ; then GUESS=$UNAME_MACHINE-unknown-osf1mk else GUESS=$UNAME_MACHINE-unknown-osf1 fi ;; parisc*:Lites*:*:*) GUESS=hppa1.1-hp-lites ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) GUESS=c1-convex-bsd ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) GUESS=c34-convex-bsd ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) GUESS=c38-convex-bsd ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) GUESS=c4-convex-bsd ;; CRAY*Y-MP:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=ymp-cray-unicos$CRAY_REL ;; CRAY*[A-Z]90:*:*:*) echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=t90-cray-unicos$CRAY_REL ;; CRAY*T3E:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=alphaev5-cray-unicosmk$CRAY_REL ;; CRAY*SV1:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=sv1-cray-unicos$CRAY_REL ;; *:UNICOS/mp:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=craynv-cray-unicosmp$CRAY_REL ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'` GUESS=${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'` GUESS=sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ 
Embedded/OS:*:*) GUESS=$UNAME_MACHINE-pc-bsdi$UNAME_RELEASE ;; sparc*:BSD/OS:*:*) GUESS=sparc-unknown-bsdi$UNAME_RELEASE ;; *:BSD/OS:*:*) GUESS=$UNAME_MACHINE-unknown-bsdi$UNAME_RELEASE ;; arm:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` set_cc_for_build if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabi else FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabihf fi ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in amd64) UNAME_PROCESSOR=x86_64 ;; i386) UNAME_PROCESSOR=i586 ;; esac FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL ;; i*:CYGWIN*:*) GUESS=$UNAME_MACHINE-pc-cygwin ;; *:MINGW64*:*) GUESS=$UNAME_MACHINE-pc-mingw64 ;; *:MINGW*:*) GUESS=$UNAME_MACHINE-pc-mingw32 ;; *:MSYS*:*) GUESS=$UNAME_MACHINE-pc-msys ;; i*:PW*:*) GUESS=$UNAME_MACHINE-pc-pw32 ;; *:SerenityOS:*:*) GUESS=$UNAME_MACHINE-pc-serenity ;; *:Interix*:*) case $UNAME_MACHINE in x86) GUESS=i586-pc-interix$UNAME_RELEASE ;; authenticamd | genuineintel | EM64T) GUESS=x86_64-unknown-interix$UNAME_RELEASE ;; IA64) GUESS=ia64-unknown-interix$UNAME_RELEASE ;; esac ;; i*:UWIN*:*) GUESS=$UNAME_MACHINE-pc-uwin ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) GUESS=x86_64-pc-cygwin ;; prep*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=powerpcle-unknown-solaris2$SUN_REL ;; *:GNU:*:*) # the GNU system GNU_ARCH=`echo "$UNAME_MACHINE" | sed -e 's,[-/].*$,,'` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's,/.*$,,'` GUESS=$GNU_ARCH-unknown-$LIBC$GNU_REL ;; *:GNU/*:*:*) # other systems with GNU libc and userland GNU_SYS=`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-$GNU_SYS$GNU_REL-$LIBC ;; x86_64:[Mm]anagarm:*:*|i?86:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-pc-managarm-mlibc" ;; *:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-unknown-managarm-mlibc" ;; *:Minix:*:*) GUESS=$UNAME_MACHINE-unknown-minix ;; aarch64:Linux:*:*) set_cc_for_build CPU=$UNAME_MACHINE LIBCABI=$LIBC if test "$CC_FOR_BUILD" != no_compiler_found; then ABI=64 sed 's/^ //' << EOF > "$dummy.c" #ifdef __ARM_EABI__ #ifdef __ARM_PCS_VFP ABI=eabihf #else ABI=eabi #endif #endif EOF cc_set_abi=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^ABI' | sed 's, ,,g'` eval "$cc_set_abi" case $ABI in eabi | eabihf) CPU=armv8l; LIBCABI=$LIBC$ABI ;; esac fi GUESS=$CPU-unknown-linux-$LIBCABI ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' /proc/cpuinfo 2>/dev/null` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" 
= 0 ; then LIBC=gnulibc1 ; fi GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arc:Linux:*:* | arceb:Linux:*:* | arc32:Linux:*:* | arc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arm*:Linux:*:*) set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then GUESS=$UNAME_MACHINE-unknown-linux-$LIBC else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabi else GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabihf fi fi ;; avr32*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; cris:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; crisv32:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; e2k:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; frv:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; hexagon:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:Linux:*:*) GUESS=$UNAME_MACHINE-pc-linux-$LIBC ;; ia64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; k1om:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; kvx:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; kvx:cos:*:*) GUESS=$UNAME_MACHINE-unknown-cos ;; kvx:mbr:*:*) GUESS=$UNAME_MACHINE-unknown-mbr ;; loongarch32:Linux:*:* | loongarch64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m32r*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m68*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; mips:Linux:*:* | mips64:Linux:*:*) set_cc_for_build IS_GLIBC=0 test x"${LIBC}" = xgnu && IS_GLIBC=1 sed 's/^ //' << EOF > "$dummy.c" #undef CPU #undef mips #undef mipsel #undef mips64 #undef mips64el #if ${IS_GLIBC} && defined(_ABI64) LIBCABI=gnuabi64 #else #if ${IS_GLIBC} && defined(_ABIN32) LIBCABI=gnuabin32 #else LIBCABI=${LIBC} #endif #endif #if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa64r6 #else #if ${IS_GLIBC} && !defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa32r6 #else #if defined(__mips64) CPU=mips64 #else CPU=mips #endif #endif #endif #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) MIPS_ENDIAN=el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) MIPS_ENDIAN= #else MIPS_ENDIAN= #endif #endif EOF cc_set_vars=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU\|^MIPS_ENDIAN\|^LIBCABI'` eval "$cc_set_vars" test "x$CPU" != x && { echo "$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI"; exit; } ;; mips64el:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; openrisc*:Linux:*:*) GUESS=or1k-unknown-linux-$LIBC ;; or32:Linux:*:* | or1k*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; padre:Linux:*:*) GUESS=sparc-unknown-linux-$LIBC ;; parisc64:Linux:*:* | hppa64:Linux:*:*) GUESS=hppa64-unknown-linux-$LIBC ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) GUESS=hppa1.1-unknown-linux-$LIBC ;; PA8*) GUESS=hppa2.0-unknown-linux-$LIBC ;; *) GUESS=hppa-unknown-linux-$LIBC ;; esac ;; ppc64:Linux:*:*) GUESS=powerpc64-unknown-linux-$LIBC ;; ppc:Linux:*:*) GUESS=powerpc-unknown-linux-$LIBC ;; ppc64le:Linux:*:*) GUESS=powerpc64le-unknown-linux-$LIBC ;; ppcle:Linux:*:*) GUESS=powerpcle-unknown-linux-$LIBC ;; riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; s390:Linux:*:* | s390x:Linux:*:*) GUESS=$UNAME_MACHINE-ibm-linux-$LIBC ;; sh64*:Linux:*:*) 
GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sh*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sparc:Linux:*:* | sparc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; tile*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; vax:Linux:*:*) GUESS=$UNAME_MACHINE-dec-linux-$LIBC ;; x86_64:Linux:*:*) set_cc_for_build CPU=$UNAME_MACHINE LIBCABI=$LIBC if test "$CC_FOR_BUILD" != no_compiler_found; then ABI=64 sed 's/^ //' << EOF > "$dummy.c" #ifdef __i386__ ABI=x86 #else #ifdef __ILP32__ ABI=x32 #endif #endif EOF cc_set_abi=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^ABI' | sed 's, ,,g'` eval "$cc_set_abi" case $ABI in x86) CPU=i686 ;; x32) LIBCABI=${LIBC}x32 ;; esac fi GUESS=$CPU-pc-linux-$LIBCABI ;; xtensa*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. GUESS=i386-sequent-sysv4 ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. GUESS=$UNAME_MACHINE-pc-sysv4.2uw$UNAME_VERSION ;; i*86:OS/2:*:*) # If we were able to find 'uname', then EMX Unix compatibility # is probably installed. GUESS=$UNAME_MACHINE-pc-os2-emx ;; i*86:XTS-300:*:STOP) GUESS=$UNAME_MACHINE-unknown-stop ;; i*86:atheos:*:*) GUESS=$UNAME_MACHINE-unknown-atheos ;; i*86:syllable:*:*) GUESS=$UNAME_MACHINE-pc-syllable ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) GUESS=i386-unknown-lynxos$UNAME_RELEASE ;; i*86:*DOS:*:*) GUESS=$UNAME_MACHINE-pc-msdosdjgpp ;; i*86:*:4.*:*) UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then GUESS=$UNAME_MACHINE-univel-sysv$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv$UNAME_REL fi ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac GUESS=$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 GUESS=$UNAME_MACHINE-pc-sco$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv32 fi ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configure will decide that # this is a cross-build. GUESS=i586-pc-msdosdjgpp ;; Intel:Mach:3*:*) GUESS=i386-pc-mach3 ;; paragon:*:*:*) GUESS=i860-intel-osf1 ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then GUESS=i860-stardent-sysv$UNAME_RELEASE # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. 
GUESS=i860-unknown-sysv$UNAME_RELEASE # Unknown i860-SVR4 fi ;; mini*:CTIX:SYS*5:*) # "miniframe" GUESS=m68010-convergent-sysv ;; mc68k:UNIX:SYSTEM5:3.51m) GUESS=m68k-convergent-sysv ;; M680?0:D-NIX:5.3:*) GUESS=m68k-diab-dnix ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) GUESS=m68k-unknown-lynxos$UNAME_RELEASE ;; mc68030:UNIX_System_V:4.*:*) GUESS=m68k-atari-sysv4 ;; TSUNAMI:LynxOS:2.*:*) GUESS=sparc-unknown-lynxos$UNAME_RELEASE ;; rs6000:LynxOS:2.*:*) GUESS=rs6000-unknown-lynxos$UNAME_RELEASE ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) GUESS=powerpc-unknown-lynxos$UNAME_RELEASE ;; SM[BE]S:UNIX_SV:*:*) GUESS=mips-dde-sysv$UNAME_RELEASE ;; RM*:ReliantUNIX-*:*:*) GUESS=mips-sni-sysv4 ;; RM*:SINIX-*:*:*) GUESS=mips-sni-sysv4 ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` GUESS=$UNAME_MACHINE-sni-sysv4 else GUESS=ns32k-sni-sysv fi ;; PENTIUM:*:4.0*:*) # Unisys 'ClearPath HMP IX 4000' SVR4/MP effort # says GUESS=i586-unisys-sysv4 ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes . # How about differentiating between stratus architectures? -djm GUESS=hppa1.1-stratus-sysv4 ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. GUESS=i860-stratus-sysv4 ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. GUESS=$UNAME_MACHINE-stratus-vos ;; *:VOS:*:*) # From Paul.Green@stratus.com. GUESS=hppa1.1-stratus-vos ;; mc68*:A/UX:*:*) GUESS=m68k-apple-aux$UNAME_RELEASE ;; news*:NEWS-OS:6*:*) GUESS=mips-sony-newsos6 ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if test -d /usr/nec; then GUESS=mips-nec-sysv$UNAME_RELEASE else GUESS=mips-unknown-sysv$UNAME_RELEASE fi ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. GUESS=powerpc-be-beos ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. GUESS=powerpc-apple-beos ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. GUESS=i586-pc-beos ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
GUESS=i586-pc-haiku ;; ppc:Haiku:*:*) # Haiku running on Apple PowerPC GUESS=powerpc-apple-haiku ;; *:Haiku:*:*) # Haiku modern gcc (not bound by BeOS compat) GUESS=$UNAME_MACHINE-unknown-haiku ;; SX-4:SUPER-UX:*:*) GUESS=sx4-nec-superux$UNAME_RELEASE ;; SX-5:SUPER-UX:*:*) GUESS=sx5-nec-superux$UNAME_RELEASE ;; SX-6:SUPER-UX:*:*) GUESS=sx6-nec-superux$UNAME_RELEASE ;; SX-7:SUPER-UX:*:*) GUESS=sx7-nec-superux$UNAME_RELEASE ;; SX-8:SUPER-UX:*:*) GUESS=sx8-nec-superux$UNAME_RELEASE ;; SX-8R:SUPER-UX:*:*) GUESS=sx8r-nec-superux$UNAME_RELEASE ;; SX-ACE:SUPER-UX:*:*) GUESS=sxace-nec-superux$UNAME_RELEASE ;; Power*:Rhapsody:*:*) GUESS=powerpc-apple-rhapsody$UNAME_RELEASE ;; *:Rhapsody:*:*) GUESS=$UNAME_MACHINE-apple-rhapsody$UNAME_RELEASE ;; arm64:Darwin:*:*) GUESS=aarch64-apple-darwin$UNAME_RELEASE ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in unknown) UNAME_PROCESSOR=powerpc ;; esac if command -v xcode-select > /dev/null 2> /dev/null && \ ! xcode-select --print-path > /dev/null 2> /dev/null ; then # Avoid executing cc if there is no toolchain installed as # cc will be a stub that puts up a graphical alert # prompting the user to install developer tools. CC_FOR_BUILD=no_compiler_found else set_cc_for_build fi if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then case $UNAME_PROCESSOR in i386) UNAME_PROCESSOR=x86_64 ;; powerpc) UNAME_PROCESSOR=powerpc64 ;; esac fi # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_PPC >/dev/null then UNAME_PROCESSOR=powerpc fi elif test "$UNAME_PROCESSOR" = i386 ; then # uname -m returns i386 or x86_64 UNAME_PROCESSOR=$UNAME_MACHINE fi GUESS=$UNAME_PROCESSOR-apple-darwin$UNAME_RELEASE ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = x86; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi GUESS=$UNAME_PROCESSOR-$UNAME_MACHINE-nto-qnx$UNAME_RELEASE ;; *:QNX:*:4*) GUESS=i386-pc-qnx ;; NEO-*:NONSTOP_KERNEL:*:*) GUESS=neo-tandem-nsk$UNAME_RELEASE ;; NSE-*:NONSTOP_KERNEL:*:*) GUESS=nse-tandem-nsk$UNAME_RELEASE ;; NSR-*:NONSTOP_KERNEL:*:*) GUESS=nsr-tandem-nsk$UNAME_RELEASE ;; NSV-*:NONSTOP_KERNEL:*:*) GUESS=nsv-tandem-nsk$UNAME_RELEASE ;; NSX-*:NONSTOP_KERNEL:*:*) GUESS=nsx-tandem-nsk$UNAME_RELEASE ;; *:NonStop-UX:*:*) GUESS=mips-compaq-nonstopux ;; BS2000:POSIX*:*:*) GUESS=bs2000-siemens-sysv ;; DS/*:UNIX_System_V:*:*) GUESS=$UNAME_MACHINE-$UNAME_SYSTEM-$UNAME_RELEASE ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. 
if test "${cputype-}" = 386; then UNAME_MACHINE=i386 elif test "x${cputype-}" != x; then UNAME_MACHINE=$cputype fi GUESS=$UNAME_MACHINE-unknown-plan9 ;; *:TOPS-10:*:*) GUESS=pdp10-unknown-tops10 ;; *:TENEX:*:*) GUESS=pdp10-unknown-tenex ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) GUESS=pdp10-dec-tops20 ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) GUESS=pdp10-xkl-tops20 ;; *:TOPS-20:*:*) GUESS=pdp10-unknown-tops20 ;; *:ITS:*:*) GUESS=pdp10-unknown-its ;; SEI:*:*:SEIUX) GUESS=mips-sei-seiux$UNAME_RELEASE ;; *:DragonFly:*:*) DRAGONFLY_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-dragonfly$DRAGONFLY_REL ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case $UNAME_MACHINE in A*) GUESS=alpha-dec-vms ;; I*) GUESS=ia64-dec-vms ;; V*) GUESS=vax-dec-vms ;; esac ;; *:XENIX:*:SysV) GUESS=i386-pc-xenix ;; i*86:skyos:*:*) SKYOS_REL=`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'` GUESS=$UNAME_MACHINE-pc-skyos$SKYOS_REL ;; i*86:rdos:*:*) GUESS=$UNAME_MACHINE-pc-rdos ;; i*86:Fiwix:*:*) GUESS=$UNAME_MACHINE-pc-fiwix ;; *:AROS:*:*) GUESS=$UNAME_MACHINE-unknown-aros ;; x86_64:VMkernel:*:*) GUESS=$UNAME_MACHINE-unknown-esx ;; amd64:Isilon\ OneFS:*:*) GUESS=x86_64-unknown-onefs ;; *:Unleashed:*:*) GUESS=$UNAME_MACHINE-unknown-unleashed$UNAME_RELEASE ;; *:Ironclad:*:*) GUESS=$UNAME_MACHINE-unknown-ironclad ;; esac # Do we have a guess based on uname results? if test "x$GUESS" != x; then echo "$GUESS" exit fi # No uname command or uname output not recognized. set_cc_for_build cat > "$dummy.c" < #include #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #include #if defined(_SIZE_T_) || defined(SIGLOST) #include #endif #endif #endif int main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know.... */ printf ("mips-sony-bsd\n"); exit (0); #else #include printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? 
*/ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) #if !defined (ultrix) #include #if defined (BSD) #if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); #else #if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); #else printf ("vax-dec-bsd\n"); exit (0); #endif #endif #else printf ("vax-dec-bsd\n"); exit (0); #endif #else #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname un; uname (&un); printf ("vax-dec-ultrix%s\n", un.release); exit (0); #else printf ("vax-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname *un; uname (&un); printf ("mips-dec-ultrix%s\n", un.release); exit (0); #else printf ("mips-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo "$ISP-apollo-$SYSTYPE"; exit; } echo "$0: unable to guess system type" >&2 case $UNAME_MACHINE:$UNAME_SYSTEM in mips:Linux | mips64:Linux) # If we got here on MIPS GNU/Linux, output extra information. cat >&2 <&2 <&2 </dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = "$UNAME_MACHINE" UNAME_RELEASE = "$UNAME_RELEASE" UNAME_SYSTEM = "$UNAME_SYSTEM" UNAME_VERSION = "$UNAME_VERSION" EOF fi exit 1 # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/c-ares/config/config.sub000066400000000000000000001154411471441230600205120ustar00rootroot00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2024 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268,SC2162 # see below for rationale timestamp='2024-05-27' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. 
This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches to . # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.sub # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS Canonicalize a configuration name. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright 1992-2024 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; *local*) # First pass through any local machine types. 
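# A minimal usage sketch, assumed rather than taken from this script:
# config.guess prints a guessed triplet for the build machine, and
# config.sub canonicalizes aliases into the CPU-MANUFACTURER-OS (or
# CPU-MANUFACTURER-KERNEL-OS) form described above.  The sample outputs
# are typical results and can differ between versions of these scripts:
#
#   ./config.guess                    # e.g. x86_64-pc-linux-gnu
#   ./config.sub i386-linux           # -> i386-pc-linux-gnu
#   ./config.sub amd64-freebsd        # -> x86_64-unknown-freebsd
#   ./config.sub "`./config.guess`"   # canonicalize the guessed triplet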
echo "$1" exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Split fields of configuration type saved_IFS=$IFS IFS="-" read field1 field2 field3 field4 <&2 exit 1 ;; *-*-*-*) basic_machine=$field1-$field2 basic_os=$field3-$field4 ;; *-*-*) # Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two # parts maybe_os=$field2-$field3 case $maybe_os in cloudabi*-eabi* \ | kfreebsd*-gnu* \ | knetbsd*-gnu* \ | kopensolaris*-gnu* \ | linux-* \ | managarm-* \ | netbsd*-eabi* \ | netbsd*-gnu* \ | nto-qnx* \ | os2-emx* \ | rtmk-nova* \ | storm-chaos* \ | uclinux-gnu* \ | uclinux-uclibc* \ | windows-* ) basic_machine=$field1 basic_os=$maybe_os ;; android-linux) basic_machine=$field1-unknown basic_os=linux-android ;; *) basic_machine=$field1-$field2 basic_os=$field3 ;; esac ;; *-*) case $field1-$field2 in # Shorthands that happen to contain a single dash convex-c[12] | convex-c3[248]) basic_machine=$field2-convex basic_os= ;; decstation-3100) basic_machine=mips-dec basic_os= ;; *-*) # Second component is usually, but not always the OS case $field2 in # Do not treat sunos as a manufacturer sun*os*) basic_machine=$field1 basic_os=$field2 ;; # Manufacturers 3100* \ | 32* \ | 3300* \ | 3600* \ | 7300* \ | acorn \ | altos* \ | apollo \ | apple \ | atari \ | att* \ | axis \ | be \ | bull \ | cbm \ | ccur \ | cisco \ | commodore \ | convergent* \ | convex* \ | cray \ | crds \ | dec* \ | delta* \ | dg \ | digital \ | dolphin \ | encore* \ | gould \ | harris \ | highlevel \ | hitachi* \ | hp \ | ibm* \ | intergraph \ | isi* \ | knuth \ | masscomp \ | microblaze* \ | mips* \ | motorola* \ | ncr* \ | news \ | next \ | ns \ | oki \ | omron* \ | pc533* \ | rebel \ | rom68k \ | rombug \ | semi \ | sequent* \ | siemens \ | sgi* \ | siemens \ | sim \ | sni \ | sony* \ | stratus \ | sun \ | sun[234]* \ | tektronix \ | tti* \ | ultra \ | unicom* \ | wec \ | winbond \ | wrs) basic_machine=$field1-$field2 basic_os= ;; zephyr*) basic_machine=$field1-unknown basic_os=$field2 ;; *) basic_machine=$field1 basic_os=$field2 ;; esac ;; esac ;; *) # Convert single-component short-hands not valid as part of # multi-component configurations. 
case $field1 in 386bsd) basic_machine=i386-pc basic_os=bsd ;; a29khif) basic_machine=a29k-amd basic_os=udi ;; adobe68k) basic_machine=m68010-adobe basic_os=scout ;; alliant) basic_machine=fx80-alliant basic_os= ;; altos | altos3068) basic_machine=m68k-altos basic_os= ;; am29k) basic_machine=a29k-none basic_os=bsd ;; amdahl) basic_machine=580-amdahl basic_os=sysv ;; amiga) basic_machine=m68k-unknown basic_os= ;; amigaos | amigados) basic_machine=m68k-unknown basic_os=amigaos ;; amigaunix | amix) basic_machine=m68k-unknown basic_os=sysv4 ;; apollo68) basic_machine=m68k-apollo basic_os=sysv ;; apollo68bsd) basic_machine=m68k-apollo basic_os=bsd ;; aros) basic_machine=i386-pc basic_os=aros ;; aux) basic_machine=m68k-apple basic_os=aux ;; balance) basic_machine=ns32k-sequent basic_os=dynix ;; blackfin) basic_machine=bfin-unknown basic_os=linux ;; cegcc) basic_machine=arm-unknown basic_os=cegcc ;; cray) basic_machine=j90-cray basic_os=unicos ;; crds | unos) basic_machine=m68k-crds basic_os= ;; da30) basic_machine=m68k-da30 basic_os= ;; decstation | pmax | pmin | dec3100 | decstatn) basic_machine=mips-dec basic_os= ;; delta88) basic_machine=m88k-motorola basic_os=sysv3 ;; dicos) basic_machine=i686-pc basic_os=dicos ;; djgpp) basic_machine=i586-pc basic_os=msdosdjgpp ;; ebmon29k) basic_machine=a29k-amd basic_os=ebmon ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson basic_os=ose ;; gmicro) basic_machine=tron-gmicro basic_os=sysv ;; go32) basic_machine=i386-pc basic_os=go32 ;; h8300hms) basic_machine=h8300-hitachi basic_os=hms ;; h8300xray) basic_machine=h8300-hitachi basic_os=xray ;; h8500hms) basic_machine=h8500-hitachi basic_os=hms ;; harris) basic_machine=m88k-harris basic_os=sysv3 ;; hp300 | hp300hpux) basic_machine=m68k-hp basic_os=hpux ;; hp300bsd) basic_machine=m68k-hp basic_os=bsd ;; hppaosf) basic_machine=hppa1.1-hp basic_os=osf ;; hppro) basic_machine=hppa1.1-hp basic_os=proelf ;; i386mach) basic_machine=i386-mach basic_os=mach ;; isi68 | isi) basic_machine=m68k-isi basic_os=sysv ;; m68knommu) basic_machine=m68k-unknown basic_os=linux ;; magnum | m3230) basic_machine=mips-mips basic_os=sysv ;; merlin) basic_machine=ns32k-utek basic_os=sysv ;; mingw64) basic_machine=x86_64-pc basic_os=mingw64 ;; mingw32) basic_machine=i686-pc basic_os=mingw32 ;; mingw32ce) basic_machine=arm-unknown basic_os=mingw32ce ;; monitor) basic_machine=m68k-rom68k basic_os=coff ;; morphos) basic_machine=powerpc-unknown basic_os=morphos ;; moxiebox) basic_machine=moxie-unknown basic_os=moxiebox ;; msdos) basic_machine=i386-pc basic_os=msdos ;; msys) basic_machine=i686-pc basic_os=msys ;; mvs) basic_machine=i370-ibm basic_os=mvs ;; nacl) basic_machine=le32-unknown basic_os=nacl ;; ncr3000) basic_machine=i486-ncr basic_os=sysv4 ;; netbsd386) basic_machine=i386-pc basic_os=netbsd ;; netwinder) basic_machine=armv4l-rebel basic_os=linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony basic_os=newsos ;; news1000) basic_machine=m68030-sony basic_os=newsos ;; necv70) basic_machine=v70-nec basic_os=sysv ;; nh3000) basic_machine=m68k-harris basic_os=cxux ;; nh[45]000) basic_machine=m88k-harris basic_os=cxux ;; nindy960) basic_machine=i960-intel basic_os=nindy ;; mon960) basic_machine=i960-intel basic_os=mon960 ;; nonstopux) basic_machine=mips-compaq basic_os=nonstopux ;; os400) basic_machine=powerpc-ibm basic_os=os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson basic_os=ose ;; os68k) basic_machine=m68k-none basic_os=os68k ;; paragon) basic_machine=i860-intel basic_os=osf ;; parisc) 
basic_machine=hppa-unknown basic_os=linux ;; psp) basic_machine=mipsallegrexel-sony basic_os=psp ;; pw32) basic_machine=i586-unknown basic_os=pw32 ;; rdos | rdos64) basic_machine=x86_64-pc basic_os=rdos ;; rdos32) basic_machine=i386-pc basic_os=rdos ;; rom68k) basic_machine=m68k-rom68k basic_os=coff ;; sa29200) basic_machine=a29k-amd basic_os=udi ;; sei) basic_machine=mips-sei basic_os=seiux ;; sequent) basic_machine=i386-sequent basic_os= ;; sps7) basic_machine=m68k-bull basic_os=sysv2 ;; st2000) basic_machine=m68k-tandem basic_os= ;; stratus) basic_machine=i860-stratus basic_os=sysv4 ;; sun2) basic_machine=m68000-sun basic_os= ;; sun2os3) basic_machine=m68000-sun basic_os=sunos3 ;; sun2os4) basic_machine=m68000-sun basic_os=sunos4 ;; sun3) basic_machine=m68k-sun basic_os= ;; sun3os3) basic_machine=m68k-sun basic_os=sunos3 ;; sun3os4) basic_machine=m68k-sun basic_os=sunos4 ;; sun4) basic_machine=sparc-sun basic_os= ;; sun4os3) basic_machine=sparc-sun basic_os=sunos3 ;; sun4os4) basic_machine=sparc-sun basic_os=sunos4 ;; sun4sol2) basic_machine=sparc-sun basic_os=solaris2 ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun basic_os= ;; sv1) basic_machine=sv1-cray basic_os=unicos ;; symmetry) basic_machine=i386-sequent basic_os=dynix ;; t3e) basic_machine=alphaev5-cray basic_os=unicos ;; t90) basic_machine=t90-cray basic_os=unicos ;; toad1) basic_machine=pdp10-xkl basic_os=tops20 ;; tpf) basic_machine=s390x-ibm basic_os=tpf ;; udi29k) basic_machine=a29k-amd basic_os=udi ;; ultra3) basic_machine=a29k-nyu basic_os=sym1 ;; v810 | necv810) basic_machine=v810-nec basic_os=none ;; vaxv) basic_machine=vax-dec basic_os=sysv ;; vms) basic_machine=vax-dec basic_os=vms ;; vsta) basic_machine=i386-pc basic_os=vsta ;; vxworks960) basic_machine=i960-wrs basic_os=vxworks ;; vxworks68) basic_machine=m68k-wrs basic_os=vxworks ;; vxworks29k) basic_machine=a29k-wrs basic_os=vxworks ;; xbox) basic_machine=i686-pc basic_os=mingw32 ;; ymp) basic_machine=ymp-cray basic_os=unicos ;; *) basic_machine=$1 basic_os= ;; esac ;; esac # Decode 1-component or ad-hoc basic machines case $basic_machine in # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) cpu=hppa1.1 vendor=winbond ;; op50n) cpu=hppa1.1 vendor=oki ;; op60c) cpu=hppa1.1 vendor=oki ;; ibm*) cpu=i370 vendor=ibm ;; orion105) cpu=clipper vendor=highlevel ;; mac | mpw | mac-mpw) cpu=m68k vendor=apple ;; pmac | pmac-mpw) cpu=powerpc vendor=apple ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) cpu=m68000 vendor=att ;; 3b*) cpu=we32k vendor=att ;; bluegene*) cpu=powerpc vendor=ibm basic_os=cnk ;; decsystem10* | dec10*) cpu=pdp10 vendor=dec basic_os=tops10 ;; decsystem20* | dec20*) cpu=pdp10 vendor=dec basic_os=tops20 ;; delta | 3300 | delta-motorola | 3300-motorola | motorola-delta | motorola-3300) cpu=m68k vendor=motorola ;; # This used to be dpx2*, but that gets the RS6000-based # DPX/20 and the x86-based DPX/2-100 wrong. 
See # https://oldskool.silicium.org/stations/bull_dpx20.htm # https://www.feb-patrimoine.com/english/bull_dpx2.htm # https://www.feb-patrimoine.com/english/unix_and_bull.htm dpx2 | dpx2[23]00 | dpx2[23]xx) cpu=m68k vendor=bull ;; dpx2100 | dpx21xx) cpu=i386 vendor=bull ;; dpx20) cpu=rs6000 vendor=bull ;; encore | umax | mmax) cpu=ns32k vendor=encore ;; elxsi) cpu=elxsi vendor=elxsi basic_os=${basic_os:-bsd} ;; fx2800) cpu=i860 vendor=alliant ;; genix) cpu=ns32k vendor=ns ;; h3050r* | hiux*) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) cpu=m68000 vendor=hp ;; hp9k3[2-9][0-9]) cpu=m68k vendor=hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) cpu=hppa1.1 vendor=hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; i*86v32) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv32 ;; i*86v4*) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv4 ;; i*86v) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv ;; i*86sol2) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=solaris2 ;; j90 | j90-cray) cpu=j90 vendor=cray basic_os=${basic_os:-unicos} ;; iris | iris4d) cpu=mips vendor=sgi case $basic_os in irix*) ;; *) basic_os=irix4 ;; esac ;; miniframe) cpu=m68000 vendor=convergent ;; *mint | mint[0-9]* | *MiNT | *MiNT[0-9]*) cpu=m68k vendor=atari basic_os=mint ;; news-3600 | risc-news) cpu=mips vendor=sony basic_os=newsos ;; next | m*-next) cpu=m68k vendor=next ;; np1) cpu=np1 vendor=gould ;; op50n-* | op60c-*) cpu=hppa1.1 vendor=oki basic_os=proelf ;; pa-hitachi) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; pbd) cpu=sparc vendor=tti ;; pbb) cpu=m68k vendor=tti ;; pc532) cpu=ns32k vendor=pc532 ;; pn) cpu=pn vendor=gould ;; power) cpu=power vendor=ibm ;; ps2) cpu=i386 vendor=ibm ;; rm[46]00) cpu=mips vendor=siemens ;; rtpc | rtpc-*) cpu=romp vendor=ibm ;; sde) cpu=mipsisa32 vendor=sde basic_os=${basic_os:-elf} ;; simso-wrs) cpu=sparclite vendor=wrs basic_os=vxworks ;; tower | tower-32) cpu=m68k vendor=ncr ;; vpp*|vx|vx-*) cpu=f301 vendor=fujitsu ;; w65) cpu=w65 vendor=wdc ;; w89k-*) cpu=hppa1.1 vendor=winbond basic_os=proelf ;; none) cpu=none vendor=none ;; leon|leon[3-9]) cpu=sparc vendor=$basic_machine ;; leon-*|leon[3-9]-*) cpu=sparc vendor=`echo "$basic_machine" | sed 's/-.*//'` ;; *-*) saved_IFS=$IFS IFS="-" read cpu vendor <&2 exit 1 ;; esac ;; esac # Here we canonicalize certain aliases for manufacturers. case $vendor in digital*) vendor=dec ;; commodore*) vendor=cbm ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if test x"$basic_os" != x then # First recognize some ad-hoc cases, or perhaps split kernel-os, or else just # set os. 
obj= case $basic_os in gnu/linux*) kernel=linux os=`echo "$basic_os" | sed -e 's|gnu/linux|gnu|'` ;; os2-emx) kernel=os2 os=`echo "$basic_os" | sed -e 's|os2-emx|emx|'` ;; nto-qnx*) kernel=nto os=`echo "$basic_os" | sed -e 's|nto-qnx|qnx|'` ;; *-*) saved_IFS=$IFS IFS="-" read kernel os <&2 fi ;; *) echo "Invalid configuration '$1': OS '$os' not recognized" 1>&2 exit 1 ;; esac case $obj in aout* | coff* | elf* | pe*) ;; '') # empty is fine ;; *) echo "Invalid configuration '$1': Machine code format '$obj' not recognized" 1>&2 exit 1 ;; esac # Here we handle the constraint that a (synthetic) cpu and os are # valid only in combination with each other and nowhere else. case $cpu-$os in # The "javascript-unknown-ghcjs" triple is used by GHC; we # accept it here in order to tolerate that, but reject any # variations. javascript-ghcjs) ;; javascript-* | *-ghcjs) echo "Invalid configuration '$1': cpu '$cpu' is not valid with os '$os$obj'" 1>&2 exit 1 ;; esac # As a final step for OS-related things, validate the OS-kernel combination # (given a valid OS), if there is a kernel. case $kernel-$os-$obj in linux-gnu*- | linux-android*- | linux-dietlibc*- | linux-llvm*- \ | linux-mlibc*- | linux-musl*- | linux-newlib*- \ | linux-relibc*- | linux-uclibc*- | linux-ohos*- ) ;; uclinux-uclibc*- | uclinux-gnu*- ) ;; managarm-mlibc*- | managarm-kernel*- ) ;; windows*-msvc*-) ;; -dietlibc*- | -llvm*- | -mlibc*- | -musl*- | -newlib*- | -relibc*- \ | -uclibc*- ) # These are just libc implementations, not actual OSes, and thus # require a kernel. echo "Invalid configuration '$1': libc '$os' needs explicit kernel." 1>&2 exit 1 ;; -kernel*- ) echo "Invalid configuration '$1': '$os' needs explicit kernel." 1>&2 exit 1 ;; *-kernel*- ) echo "Invalid configuration '$1': '$kernel' does not support '$os'." 1>&2 exit 1 ;; *-msvc*- ) echo "Invalid configuration '$1': '$os' needs 'windows'." 1>&2 exit 1 ;; kfreebsd*-gnu*- | knetbsd*-gnu*- | netbsd*-gnu*- | kopensolaris*-gnu*-) ;; vxworks-simlinux- | vxworks-simwindows- | vxworks-spe-) ;; nto-qnx*-) ;; os2-emx-) ;; rtmk-nova-) ;; *-eabi*- | *-gnueabi*-) ;; none--*) # None (no kernel, i.e. freestanding / bare metal), # can be paired with an machine code file format ;; -*-) # Blank kernel with real OS is always fine. ;; --*) # Blank kernel and OS with real machine code file format is always fine. ;; *-*-*) echo "Invalid configuration '$1': Kernel '$kernel' not known to work with OS '$os'." 1>&2 exit 1 ;; esac # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. 
case $vendor in unknown) case $cpu-$os in *-riscix*) vendor=acorn ;; *-sunos* | *-solaris*) vendor=sun ;; *-cnk* | *-aix*) vendor=ibm ;; *-beos*) vendor=be ;; *-hpux*) vendor=hp ;; *-mpeix*) vendor=hp ;; *-hiux*) vendor=hitachi ;; *-unos*) vendor=crds ;; *-dgux*) vendor=dg ;; *-luna*) vendor=omron ;; *-genix*) vendor=ns ;; *-clix*) vendor=intergraph ;; *-mvs* | *-opened*) vendor=ibm ;; *-os400*) vendor=ibm ;; s390-* | s390x-*) vendor=ibm ;; *-ptx*) vendor=sequent ;; *-tpf*) vendor=ibm ;; *-vxsim* | *-vxworks* | *-windiss*) vendor=wrs ;; *-aux*) vendor=apple ;; *-hms*) vendor=hitachi ;; *-mpw* | *-macos*) vendor=apple ;; *-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*) vendor=atari ;; *-vos*) vendor=stratus ;; esac ;; esac echo "$cpu-$vendor${kernel:+-$kernel}${os:+-$os}${obj:+-$obj}" exit # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/c-ares/config/depcomp000077500000000000000000000562171471441230600201140ustar00rootroot00000000000000#! /bin/sh # depcomp - compile a program generating dependencies as side-effects scriptversion=2024-06-19.01; # UTC # Copyright (C) 1999-2024 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Alexandre Oliva . case $1 in '') echo "$0: No command. Try '$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: depcomp [--help] [--version] PROGRAM [ARGS] Run PROGRAMS ARGS to compile a file, generating dependencies as side-effects. Environment variables: depmode Dependency tracking mode. source Source file read by 'PROGRAMS ARGS'. object Object file output by 'PROGRAMS ARGS'. DEPDIR directory where to store dependencies. depfile Dependency file to output. tmpdepfile Temporary file to use when outputting dependencies. libtool Whether libtool is used (yes/no). Report bugs to . GNU Automake home page: . General help using GNU software: . EOF exit $? ;; -v | --v*) echo "depcomp (GNU Automake) $scriptversion" exit $? ;; esac # Get the directory component of the given path, and save it in the # global variables '$dir'. Note that this directory component will # be either empty or ending with a '/' character. This is deliberate. set_dir_from () { case $1 in */*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;; *) dir=;; esac } # Get the suffix-stripped basename of the given path, and save it the # global variable '$base'. set_base_from () { base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'` } # If no dependency file was actually created by the compiler invocation, # we still have to create a dummy depfile, to avoid errors with the # Makefile "include basename.Plo" scheme. 
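# A hypothetical invocation sketch, not part of the upstream script: an
# automake-generated Makefile runs the real compiler command through this
# wrapper and selects the behaviour purely via the environment variables
# documented in the usage text above.  The file names (foo.c, foo.o,
# .deps/foo.Po) are placeholders:
#
#   depmode=gcc3 source=foo.c object=foo.o libtool=no \
#   depfile=.deps/foo.Po tmpdepfile=.deps/foo.TPo \
#   ./depcomp gcc -I. -O2 -c -o foo.o foo.c
#
# The compilation proceeds as usual; the side effect is a makefile
# fragment .deps/foo.Po listing the headers foo.o depends on.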
make_dummy_depfile () { echo "#dummy" > "$depfile" } # Factor out some common post-processing of the generated depfile. # Requires the auxiliary global variable '$tmpdepfile' to be set. aix_post_process_depfile () { # If the compiler actually managed to produce a dependency file, # post-process it. if test -f "$tmpdepfile"; then # Each line is of the form 'foo.o: dependency.h'. # Do two passes, one to just change these to # $object: dependency.h # and one to simply output # dependency.h: # which is needed to avoid the deleted-header problem. { sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile" sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile" } > "$depfile" rm -f "$tmpdepfile" else make_dummy_depfile fi } # A tabulation character. tab=' ' # A newline character. nl=' ' # Character ranges might be problematic outside the C locale. # These definitions help. upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ lower=abcdefghijklmnopqrstuvwxyz alpha=${upper}${lower} if test -z "$depmode" || test -z "$source" || test -z "$object"; then echo "depcomp: Variables source, object and depmode must be set" 1>&2 exit 1 fi # Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po. depfile=${depfile-`echo "$object" | sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`} tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`} rm -f "$tmpdepfile" # Avoid interference from the environment. gccflag= dashmflag= # Some modes work just like other modes, but use different flags. We # parameterize here, but still list the modes in the big case below, # to make depend.m4 easier to write. Note that we *cannot* use a case # here, because this file can only contain one case statement. if test "$depmode" = hp; then # HP compiler uses -M and no extra arg. gccflag=-M depmode=gcc fi if test "$depmode" = dashXmstdout; then # This is just like dashmstdout with a different argument. dashmflag=-xM depmode=dashmstdout fi cygpath_u="cygpath -u -f -" if test "$depmode" = msvcmsys; then # This is just like msvisualcpp but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvisualcpp fi if test "$depmode" = msvc7msys; then # This is just like msvc7 but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvc7 fi if test "$depmode" = xlc; then # IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information. gccflag=-qmakedep=gcc,-MF depmode=gcc fi case "$depmode" in gcc3) ## gcc 3 implements dependency tracking that does exactly what ## we want. Yay! Note: for some reason libtool 1.4 doesn't like ## it if -MD -MP comes after the -MF stuff. Hmm. ## Unfortunately, FreeBSD c89 acceptance of flags depends upon ## the command line argument order; so add the flags where they ## appear in depend2.am. Note that the slowdown incurred here ## affects only configure: in makefiles, %FASTDEP% shortcuts this. for arg do case $arg in -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;; *) set fnord "$@" "$arg" ;; esac shift # fnord shift # $arg done "$@" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi mv "$tmpdepfile" "$depfile" ;; gcc) ## Note that this doesn't just cater to obsolete pre-3.x GCC compilers. ## but also to in-use compilers like IBM xlc/xlC and the HP C compiler. ## (see the conditional assignment to $gccflag above). 
## There are various ways to get dependency output from gcc. Here's ## why we pick this rather obscure method: ## - Don't want to use -MD because we'd like the dependencies to end ## up in a subdir. Having to rename by hand is ugly. ## (We might end up doing this anyway to support other compilers.) ## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like ## -MM, not -M (despite what the docs say). Also, it might not be ## supported by the other compilers which use the 'gcc' depmode. ## - Using -M directly means running the compiler twice (even worse ## than renaming). if test -z "$gccflag"; then gccflag=-MD, fi "$@" -Wp,"$gccflag$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The second -e expression handles DOS-style file names with drive # letters. sed -e 's/^[^:]*: / /' \ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile" ## This next piece of magic avoids the "deleted header file" problem. ## The problem is that when a header file which appears in a .P file ## is deleted, the dependency causes make to die (because there is ## typically no way to rebuild the header). We avoid this by adding ## dummy dependencies for each header file. Too bad gcc doesn't do ## this for us directly. ## Some versions of gcc put a space before the ':'. On the theory ## that the space means something, we add a space to the output as ## well. hp depmode also adds that space, but also prefixes the VPATH ## to the object. Take care to not repeat it in the output. ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; sgi) if test "$libtool" = yes; then "$@" "-Wp,-MDupdate,$tmpdepfile" else "$@" -MDupdate "$tmpdepfile" fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files echo "$object : \\" > "$depfile" # Clip off the initial element (the dependent). Don't try to be # clever and replace this with sed code, as IRIX sed won't handle # lines with more than a fixed number of characters (4096 in # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines; # the IRIX cc adds comments like '#:fec' to the end of the # dependency line. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \ | tr "$nl" ' ' >> "$depfile" echo >> "$depfile" # The second pass generates a dummy entry for each header file. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \ >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" ;; xlc) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; aix) # The C for AIX Compiler uses -M and outputs the dependencies # in a .u file. In older versions, this file always lives in the # current directory. Also, the AIX compiler puts '$object:' at the # start of each line; $object doesn't have directory information. # Version 6 uses the directory in both cases. 
set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.u tmpdepfile2=$base.u tmpdepfile3=$dir.libs/$base.u "$@" -Wc,-M else tmpdepfile1=$dir$base.u tmpdepfile2=$dir$base.u tmpdepfile3=$dir$base.u "$@" -M fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done aix_post_process_depfile ;; tcc) # tcc (Tiny C Compiler) understand '-MD -MF file' since version 0.9.26 # FIXME: That version still under development at the moment of writing. # Make that this statement remains true also for stable, released # versions. # It will wrap lines (doesn't matter whether long or short) with a # trailing '\', as in: # # foo.o : \ # foo.c \ # foo.h \ # # It will put a trailing '\' even on the last line, and will use leading # spaces rather than leading tabs (at least since its commit 0394caf7 # "Emit spaces for -MD"). "$@" -MD -MF "$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each non-empty line is of the form 'foo.o : \' or ' dep.h \'. # We have to change lines of the first kind to '$object: \'. sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile" # And for each line of the second kind, we have to emit a 'dep.h:' # dummy dependency, to avoid the deleted-header problem. sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile" rm -f "$tmpdepfile" ;; ## The order of this option in the case statement is important, since the ## shell code in configure will try each of these formats in the order ## listed in this file. A plain '-MD' option would be understood by many ## compilers, so we must ensure this comes after the gcc and icc options. pgcc) # Portland's C compiler understands '-MD'. # Will always output deps to 'file.d' where file is the root name of the # source file under compilation, even if file resides in a subdirectory. # The object file name does not affect the name of the '.d' file. # pgcc 10.2 will output # foo.o: sub/foo.c sub/foo.h # and will wrap long lines using '\' : # foo.o: sub/foo.c ... \ # sub/foo.h ... \ # ... set_dir_from "$object" # Use the source, not the object, to determine the base name, since # that's sadly what pgcc will do too. set_base_from "$source" tmpdepfile=$base.d # For projects that build the same source file twice into different object # files, the pgcc approach of using the *source* file root name can cause # problems in parallel builds. Use a locking strategy to avoid stomping on # the same $tmpdepfile. lockdir=$base.d-lock trap " echo '$0: caught signal, cleaning up...' >&2 rmdir '$lockdir' exit 1 " 1 2 13 15 numtries=100 i=$numtries while test $i -gt 0; do # mkdir is a portable test-and-set. if mkdir "$lockdir" 2>/dev/null; then # This process acquired the lock. "$@" -MD stat=$? # Release the lock. rmdir "$lockdir" break else # If the lock is being held by a different process, wait # until the winning process is done or we timeout. while test -d "$lockdir" && test $i -gt 0; do sleep 1 i=`expr $i - 1` done fi i=`expr $i - 1` done trap - 1 2 13 15 if test $i -le 0; then echo "$0: failed to acquire lock after $numtries attempts" >&2 echo "$0: check lockdir '$lockdir'" >&2 exit 1 fi if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each line is of the form `foo.o: dependent.h', # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'. 
# Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this invocation # correctly. Breaking it into two sed invocations is a workaround. sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp2) # The "hp" stanza above does not work with aCC (C++) and HP's ia64 # compilers, which have integrated preprocessors. The correct option # to use with these is +Maked; it writes dependencies to a file named # 'foo.d', which lands next to the object file, wherever that # happens to be. # Much of this is similar to the tru64 case; see comments there. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.d tmpdepfile2=$dir.libs/$base.d "$@" -Wc,+Maked else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d "$@" +Maked fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile" # Add 'dependent.h:' lines. sed -ne '2,${ s/^ *// s/ \\*$// s/$/:/ p }' "$tmpdepfile" >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" "$tmpdepfile2" ;; tru64) # The Tru64 compiler uses -MD to generate dependencies as a side # effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'. # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put # dependencies in 'foo.d' instead, so we check for that too. # Subdirectories are respected. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then # Libtool generates 2 separate objects for the 2 libraries. These # two compilations output dependencies in $dir.libs/$base.o.d and # in $dir$base.o.d. We have to check for both files, because # one of the two compilations can be disabled. We should prefer # $dir$base.o.d over $dir.libs/$base.o.d because the latter is # automatically cleaned when .libs/ is deleted, while ignoring # the former would cause a distcleancheck panic. tmpdepfile1=$dir$base.o.d # libtool 1.5 tmpdepfile2=$dir.libs/$base.o.d # Likewise. tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504 "$@" -Wc,-MD else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d tmpdepfile3=$dir$base.d "$@" -MD fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done # Same post-processing that is required for AIX mode. aix_post_process_depfile ;; msvc7) if test "$libtool" = yes; then showIncludes=-Wc,-showIncludes else showIncludes=-showIncludes fi "$@" $showIncludes > "$tmpdepfile" stat=$? grep -v '^Note: including file: ' "$tmpdepfile" if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The first sed program below extracts the file names and escapes # backslashes for cygpath. The second sed program outputs the file # name when reading, but also accumulates all include files in the # hold buffer in order to output them again at the end. This only # works with sed implementations that can handle large buffers. 
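# An assumed illustration (the paths are placeholders) of what the
# following pipeline does with MSVC's -showIncludes output: each line of
# the form
#
#   Note: including file: c:\src\foo.h
#
# is turned into a prerequisite of $object in "$depfile", and every such
# header is also emitted as an empty target, which is the same
# deleted-header safeguard used by the other depmodes above.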
sed < "$tmpdepfile" -n ' /^Note: including file: *\(.*\)/ { s//\1/ s/\\/\\\\/g p }' | $cygpath_u | sort -u | sed -n ' s/ /\\ /g s/\(.*\)/'"$tab"'\1 \\/p s/.\(.*\) \\/\1:/ H $ { s/.*/'"$tab"'/ G p }' >> "$depfile" echo >> "$depfile" # make sure the fragment doesn't end with a backslash rm -f "$tmpdepfile" ;; msvc7msys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; #nosideeffect) # This comment above is used by automake to tell side-effect # dependency tracking mechanisms from slower ones. dashmstdout) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout, regardless of -o. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done test -z "$dashmflag" && dashmflag=-M # Require at least two characters before searching for ':' # in the target name. This is to cope with DOS-style filenames: # a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise. "$@" $dashmflag | sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile" rm -f "$depfile" cat < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this sed invocation # correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; dashXmstdout) # This case only exists to satisfy depend.m4. It is never actually # run, as this mode is specially recognized in the preamble. exit 1 ;; makedepend) "$@" || exit $? # Remove any Libtool call if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # X makedepend shift cleared=no eat=no for arg do case $cleared in no) set ""; shift cleared=yes ;; esac if test $eat = yes; then eat=no continue fi case "$arg" in -D*|-I*) set fnord "$@" "$arg"; shift ;; # Strip any option that makedepend may not understand. Remove # the object too, otherwise makedepend will parse it as a source file. -arch) eat=yes ;; -*|$object) ;; *) set fnord "$@" "$arg"; shift ;; esac done obj_suffix=`echo "$object" | sed 's/^.*\././'` touch "$tmpdepfile" ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@" rm -f "$depfile" # makedepend may prepend the VPATH from the source file name to the object. # No need to regex-escape $object, excess matching of '.' is harmless. sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process the last invocation # correctly. Breaking it into two sed invocations is a workaround. sed '1,2d' "$tmpdepfile" \ | tr ' ' "$nl" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" "$tmpdepfile".bak ;; cpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. 
IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done "$@" -E \ | sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ | sed '$ s: \\$::' > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" cat < "$tmpdepfile" >> "$depfile" sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; msvisualcpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi IFS=" " for arg do case "$arg" in -o) shift ;; $object) shift ;; "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI") set fnord "$@" shift shift ;; *) set fnord "$@" "$arg" shift shift ;; esac done "$@" -E 2>/dev/null | sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile" echo "$tab" >> "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile" rm -f "$tmpdepfile" ;; msvcmsys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; none) exec "$@" ;; *) echo "Unknown depmode $depmode" 1>&2 exit 1 ;; esac exit 0 # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/c-ares/config/install-sh000077500000000000000000000361151471441230600205360ustar00rootroot00000000000000#!/bin/sh # install - install a program, script, or datafile scriptversion=2024-06-19.01; # UTC # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. # # Copyright (C) 1994 X Consortium # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN # AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC- # TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # # Except as contained in this notice, the name of the X Consortium shall not # be used in advertising or otherwise to promote the sale, use or other deal- # ings in this Software without prior written authorization from the X Consor- # tium. 
# # # FSF changes to this file are in the public domain. # # Calling this script install-sh is preferred over install.sh, to prevent # 'make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. tab=' ' nl=' ' IFS=" $tab$nl" # Set DOITPROG to "echo" to test this script. doit=${DOITPROG-} doit_exec=${doit:-exec} # Put in absolute file names if you don't have them in your path; # or use environment vars. chgrpprog=${CHGRPPROG-chgrp} chmodprog=${CHMODPROG-chmod} chownprog=${CHOWNPROG-chown} cmpprog=${CMPPROG-cmp} cpprog=${CPPROG-cp} mkdirprog=${MKDIRPROG-mkdir} mvprog=${MVPROG-mv} rmprog=${RMPROG-rm} stripprog=${STRIPPROG-strip} posix_mkdir= # Desired mode of installed file. mode=0755 # Create dirs (including intermediate dirs) using mode 755. # This is like GNU 'install' as of coreutils 8.32 (2020). mkdir_umask=22 backupsuffix= chgrpcmd= chmodcmd=$chmodprog chowncmd= mvcmd=$mvprog rmcmd="$rmprog -f" stripcmd= src= dst= dir_arg= dst_arg= copy_on_change=false is_target_a_directory=possibly usage="\ Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE or: $0 [OPTION]... SRCFILES... DIRECTORY or: $0 [OPTION]... -t DIRECTORY SRCFILES... or: $0 [OPTION]... -d DIRECTORIES... In the 1st form, copy SRCFILE to DSTFILE. In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. In the 4th, create DIRECTORIES. Options: --help display this help and exit. --version display version info and exit. -c (ignored) -C install only if different (preserve data modification time) -d create directories instead of installing files. -g GROUP $chgrpprog installed files to GROUP. -m MODE $chmodprog installed files to MODE. -o USER $chownprog installed files to USER. -p pass -p to $cpprog. -s $stripprog installed files. -S SUFFIX attempt to back up existing files, with suffix SUFFIX. -t DIRECTORY install into DIRECTORY. -T report an error if DSTFILE is a directory. Environment variables override the default commands: CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG By default, rm is invoked with -f; when overridden with RMPROG, it's up to you to specify -f if you want it. If -S is not specified, no backups are attempted. Report bugs to . GNU Automake home page: . General help using GNU software: ." while test $# -ne 0; do case $1 in -c) ;; -C) copy_on_change=true;; -d) dir_arg=true;; -g) chgrpcmd="$chgrpprog $2" shift;; --help) echo "$usage"; exit $?;; -m) mode=$2 case $mode in *' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*) echo "$0: invalid mode: $mode" >&2 exit 1;; esac shift;; -o) chowncmd="$chownprog $2" shift;; -p) cpprog="$cpprog -p";; -s) stripcmd=$stripprog;; -S) backupsuffix="$2" shift;; -t) is_target_a_directory=always dst_arg=$2 # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac shift;; -T) is_target_a_directory=never;; --version) echo "$0 (GNU Automake) $scriptversion"; exit $?;; --) shift break;; -*) echo "$0: invalid option: $1" >&2 exit 1;; *) break;; esac shift done # We allow the use of options -d and -T together, by making -d # take the precedence; this is for compatibility with GNU install. if test -n "$dir_arg"; then if test -n "$dst_arg"; then echo "$0: target directory not allowed when installing a directory." >&2 exit 1 fi fi if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then # When -d is used, all remaining arguments are directories to create. 
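# As a hedged, hypothetical example of the -d form:
#
#   ./install-sh -d -m 755 /tmp/pkg/bin /tmp/pkg/lib
#
# creates both directory trees and applies the requested mode to the
# directories named on the command line.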
# When -t is used, the destination is already specified. # Otherwise, the last argument is the destination. Remove it from $@. for arg do if test -n "$dst_arg"; then # $@ is not empty: it contains at least $arg. set fnord "$@" "$dst_arg" shift # fnord fi shift # arg dst_arg=$arg # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac done fi if test $# -eq 0; then if test -z "$dir_arg"; then echo "$0: no input file specified." >&2 exit 1 fi # It's OK to call 'install-sh -d' without argument. # This can happen when creating conditional directories. exit 0 fi if test -z "$dir_arg"; then if test $# -gt 1 || test "$is_target_a_directory" = always; then if test ! -d "$dst_arg"; then echo "$0: $dst_arg: Is not a directory." >&2 exit 1 fi fi fi if test -z "$dir_arg"; then do_exit='(exit $ret); exit $ret' trap "ret=129; $do_exit" 1 trap "ret=130; $do_exit" 2 trap "ret=141; $do_exit" 13 trap "ret=143; $do_exit" 15 # Set umask so as not to create temps with too-generous modes. # However, 'strip' requires both read and write access to temps. case $mode in # Optimize common cases. *644) cp_umask=133;; *755) cp_umask=22;; *[0-7]) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw='% 200' fi cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;; *) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw=,u+rw fi cp_umask=$mode$u_plus_rw;; esac fi for src do # Protect names problematic for 'test' and other utilities. case $src in -* | [=\(\)!]) src=./$src;; esac if test -n "$dir_arg"; then dst=$src dstdir=$dst test -d "$dstdir" dstdir_status=$? # Don't chown directories that already exist. if test $dstdir_status = 0; then chowncmd="" fi else # Waiting for this to be detected by the "$cpprog $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if test ! -f "$src" && test ! -d "$src"; then echo "$0: $src does not exist." >&2 exit 1 fi if test -z "$dst_arg"; then echo "$0: no destination specified." >&2 exit 1 fi dst=$dst_arg # If destination is a directory, append the input filename. if test -d "$dst"; then if test "$is_target_a_directory" = never; then echo "$0: $dst_arg: Is a directory" >&2 exit 1 fi dstdir=$dst dstbase=`basename "$src"` case $dst in */) dst=$dst$dstbase;; *) dst=$dst/$dstbase;; esac dstdir_status=0 else dstdir=`dirname "$dst"` test -d "$dstdir" dstdir_status=$? fi fi case $dstdir in */) dstdirslash=$dstdir;; *) dstdirslash=$dstdir/;; esac obsolete_mkdir_used=false if test $dstdir_status != 0; then case $posix_mkdir in '') # With -d, create the new directory with the user-specified mode. # Otherwise, rely on $mkdir_umask. if test -n "$dir_arg"; then mkdir_mode=-m$mode else mkdir_mode= fi posix_mkdir=false # The $RANDOM variable is not portable (e.g., dash). Use it # here however when possible just to lower collision chance. tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$ trap ' ret=$? rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null exit $ret ' 0 # Because "mkdir -p" follows existing symlinks and we likely work # directly in world-writable /tmp, make sure that the '$tmpdir' # directory is successfully created first before we actually test # 'mkdir -p'. if (umask $mkdir_umask && $mkdirprog $mkdir_mode "$tmpdir" && exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1 then if test -z "$dir_arg" || { # Check for POSIX incompatibility with -m. 
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or # other-writable bit of parent directory when it shouldn't. # FreeBSD 6.1 mkdir -m -p sets mode of existing directory. test_tmpdir="$tmpdir/a" ls_ld_tmpdir=`ls -ld "$test_tmpdir"` case $ls_ld_tmpdir in d????-?r-*) different_mode=700;; d????-?--*) different_mode=755;; *) false;; esac && $mkdirprog -m$different_mode -p -- "$test_tmpdir" && { ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"` test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1" } } then posix_mkdir=: fi rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" else # Remove any dirs left behind by ancient mkdir implementations. rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null fi trap '' 0;; esac if $posix_mkdir && ( umask $mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir" ) then : else # mkdir does not conform to POSIX, # or it failed possibly due to a race condition. Create the # directory the slow way, step by step, checking for races as we go. case $dstdir in /*) prefix='/';; [-=\(\)!]*) prefix='./';; *) prefix='';; esac oIFS=$IFS IFS=/ set -f set fnord $dstdir shift set +f IFS=$oIFS prefixes= for d do test X"$d" = X && continue prefix=$prefix$d if test -d "$prefix"; then prefixes= else if $posix_mkdir; then (umask $mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break # Don't fail if two instances are running concurrently. test -d "$prefix" || exit 1 else case $prefix in *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;; *) qprefix=$prefix;; esac prefixes="$prefixes '$qprefix'" fi fi prefix=$prefix/ done if test -n "$prefixes"; then # Don't fail if two instances are running concurrently. (umask $mkdir_umask && eval "\$doit_exec \$mkdirprog $prefixes") || test -d "$dstdir" || exit 1 obsolete_mkdir_used=true fi fi fi if test -n "$dir_arg"; then { test -z "$chowncmd" || $doit $chowncmd "$dst"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } && { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false || test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1 else # Make a couple of temp file names in the proper directory. dsttmp=${dstdirslash}_inst.$$_ rmtmp=${dstdirslash}_rm.$$_ # Trap to clean up those temp files at exit. trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 # Copy the file name to the temp name. (umask $cp_umask && { test -z "$stripcmd" || { # Create $dsttmp read-write so that cp doesn't create it read-only, # which would cause strip to fail. if test -z "$doit"; then : >"$dsttmp" # No need to fork-exec 'touch'. else $doit touch "$dsttmp" fi } } && $doit_exec $cpprog "$src" "$dsttmp") && # and set any options; do chmod last to preserve setuid bits. # # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $cpprog $src $dsttmp" command. # { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } && { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } && # If -C, don't bother to copy if it wouldn't change the file. if $copy_on_change && old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` && new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` && set -f && set X $old && old=:$2:$4:$5:$6 && set X $new && new=:$2:$4:$5:$6 && set +f && test "$old" = "$new" && $cmpprog "$dst" "$dsttmp" >/dev/null 2>&1 then rm -f "$dsttmp" else # If $backupsuffix is set, and the file being installed # already exists, attempt a backup. 
Don't worry if it fails, # e.g., if mv doesn't support -f. if test -n "$backupsuffix" && test -f "$dst"; then $doit $mvcmd -f "$dst" "$dst$backupsuffix" 2>/dev/null fi # Rename the file to the real destination. $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null || # The rename failed, perhaps because mv can't rename something else # to itself, or perhaps because mv is so ancient that it does not # support -f. { # Now remove or move aside any old file at destination location. # We try this two ways since rm can't unlink itself on some # systems and the destination file might be busy for other # reasons. In this case, the final cleanup might fail but the new # file should still install successfully. { test ! -f "$dst" || $doit $rmcmd "$dst" 2>/dev/null || { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null && { $doit $rmcmd "$rmtmp" 2>/dev/null; :; } } || { echo "$0: cannot unlink or rename $dst" >&2 (exit 1); exit 1 } } && # Now rename the file to the real destination. $doit $mvcmd "$dsttmp" "$dst" } fi || exit 1 trap '' 0 fi done # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/c-ares/config/ltmain.sh000066400000000000000000012123531471441230600203530ustar00rootroot00000000000000#! /usr/bin/env sh ## DO NOT EDIT - This file generated from ./build-aux/ltmain.in ## by inline-source v2019-02-19.15 # libtool (GNU libtool) 2.4.7 # Provide generalized library-building support services. # Written by Gordon Matzigkeit , 1996 # Copyright (C) 1996-2019, 2021-2022 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . PROGRAM=libtool PACKAGE=libtool VERSION=2.4.7 package_revision=2.4.7 ## ------ ## ## Usage. ## ## ------ ## # Run './libtool --help' for help with using this script from the # command line. ## ------------------------------- ## ## User overridable command paths. ## ## ------------------------------- ## # After configure completes, it has a better idea of some of the # shell tools we need than the defaults used by the functions shared # with bootstrap, so set those here where they can still be over- # ridden by the user, but otherwise take precedence. : ${AUTOCONF="autoconf"} : ${AUTOMAKE="automake"} ## -------------------------- ## ## Source external libraries. ## ## -------------------------- ## # Much of our low-level functionality needs to be sourced from external # libraries, which are installed to $pkgauxdir. 
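# A hedged aside on the ': ${AUTOCONF="autoconf"}' defaults above: the
# no-colon expansion assigns only when the variable is truly unset, so an
# environment setting wins whenever one is provided (paths shown are
# hypothetical):
#
#   unset AUTOCONF;             : ${AUTOCONF="autoconf"}   # -> autoconf
#   AUTOCONF=/opt/bin/autoconf; : ${AUTOCONF="autoconf"}   # -> /opt/bin/autoconf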
# Set a version string for this script. scriptversion=2019-02-19.15; # UTC # General shell script boiler plate, and helper functions. # Written by Gary V. Vaughan, 2004 # This is free software. There is NO warranty; not even for # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copyright (C) 2004-2019, 2021 Bootstrap Authors # # This file is dual licensed under the terms of the MIT license # , and GPL version 2 or later # . You must apply one of # these licenses when using or redistributing this software or any of # the files within it. See the URLs above, or the file `LICENSE` # included in the Bootstrap distribution for the full license texts. # Please report bugs or propose patches to: # ## ------ ## ## Usage. ## ## ------ ## # Evaluate this file near the top of your script to gain access to # the functions and variables defined here: # # . `echo "$0" | ${SED-sed} 's|[^/]*$||'`/build-aux/funclib.sh # # If you need to override any of the default environment variable # settings, do that before evaluating this file. ## -------------------- ## ## Shell normalisation. ## ## -------------------- ## # Some shells need a little help to be as Bourne compatible as possible. # Before doing anything else, make sure all that help has been provided! DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in *posix*) set -o posix ;; esac fi # NLS nuisances: We save the old values in case they are required later. _G_user_locale= _G_safe_locale= for _G_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test set = \"\${$_G_var+set}\"; then save_$_G_var=\$$_G_var $_G_var=C export $_G_var _G_user_locale=\"$_G_var=\\\$save_\$_G_var; \$_G_user_locale\" _G_safe_locale=\"$_G_var=C; \$_G_safe_locale\" fi" done # These NLS vars are set unconditionally (bootstrap issue #24). Unset those # in case the environment reset is needed later and the $save_* variant is not # defined (see the code above). LC_ALL=C LANGUAGE=C export LANGUAGE LC_ALL # Make sure IFS has a sensible default sp=' ' nl=' ' IFS="$sp $nl" # There are apparently some retarded systems that use ';' as a PATH separator! if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # func_unset VAR # -------------- # Portably unset VAR. # In some shells, an 'unset VAR' statement leaves a non-zero return # status if VAR is already unset, which might be problematic if the # statement is used at the end of a function (thus poisoning its return # value) or when 'set -e' is active (causing even a spurious abort of # the script in this case). func_unset () { { eval $1=; (eval unset $1) >/dev/null 2>&1 && eval unset $1 || : ; } } # Make sure CDPATH doesn't cause `cd` commands to output the target dir. func_unset CDPATH # Make sure ${,E,F}GREP behave sanely. func_unset GREP_OPTIONS ## ------------------------- ## ## Locate command utilities. ## ## ------------------------- ## # func_executable_p FILE # ---------------------- # Check that FILE is an executable regular file. 
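# For example (hedged):
#
#   func_executable_p /bin/sh && echo "ok to run"
#
# succeeds only when the argument is a regular file that also carries
# execute permission; directories and dangling symlinks are rejected.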
func_executable_p () { test -f "$1" && test -x "$1" } # func_path_progs PROGS_LIST CHECK_FUNC [PATH] # -------------------------------------------- # Search for either a program that responds to --version with output # containing "GNU", or else returned by CHECK_FUNC otherwise, by # trying all the directories in PATH with each of the elements of # PROGS_LIST. # # CHECK_FUNC should accept the path to a candidate program, and # set $func_check_prog_result if it truncates its output less than # $_G_path_prog_max characters. func_path_progs () { _G_progs_list=$1 _G_check_func=$2 _G_PATH=${3-"$PATH"} _G_path_prog_max=0 _G_path_prog_found=false _G_save_IFS=$IFS; IFS=${PATH_SEPARATOR-:} for _G_dir in $_G_PATH; do IFS=$_G_save_IFS test -z "$_G_dir" && _G_dir=. for _G_prog_name in $_G_progs_list; do for _exeext in '' .EXE; do _G_path_prog=$_G_dir/$_G_prog_name$_exeext func_executable_p "$_G_path_prog" || continue case `"$_G_path_prog" --version 2>&1` in *GNU*) func_path_progs_result=$_G_path_prog _G_path_prog_found=: ;; *) $_G_check_func $_G_path_prog func_path_progs_result=$func_check_prog_result ;; esac $_G_path_prog_found && break 3 done done done IFS=$_G_save_IFS test -z "$func_path_progs_result" && { echo "no acceptable sed could be found in \$PATH" >&2 exit 1 } } # We want to be able to use the functions in this file before configure # has figured out where the best binaries are kept, which means we have # to search for them ourselves - except when the results are already set # where we skip the searches. # Unless the user overrides by setting SED, search the path for either GNU # sed, or the sed that truncates its output the least. test -z "$SED" && { _G_sed_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for _G_i in 1 2 3 4 5 6 7; do _G_sed_script=$_G_sed_script$nl$_G_sed_script done echo "$_G_sed_script" 2>/dev/null | sed 99q >conftest.sed _G_sed_script= func_check_prog_sed () { _G_path_prog=$1 _G_count=0 printf 0123456789 >conftest.in while : do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo '' >> conftest.nl "$_G_path_prog" -f conftest.sed conftest.out 2>/dev/null || break diff conftest.out conftest.nl >/dev/null 2>&1 || break _G_count=`expr $_G_count + 1` if test "$_G_count" -gt "$_G_path_prog_max"; then # Best one so far, save it but keep looking for a better one func_check_prog_result=$_G_path_prog _G_path_prog_max=$_G_count fi # 10*(2^10) chars as input seems more than enough test 10 -lt "$_G_count" && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out } func_path_progs "sed gsed" func_check_prog_sed "$PATH:/usr/xpg4/bin" rm -f conftest.sed SED=$func_path_progs_result } # Unless the user overrides by setting GREP, search the path for either GNU # grep, or the grep that truncates its output the least. 
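# (A hedged summary of the probe below, which mirrors the SED search just
# above: with GREP unset, 'grep' and 'ggrep' are tried in each directory of
# $PATH plus /usr/xpg4/bin, a GNU build is accepted immediately, and
# otherwise the candidate that survives the longest doubling input wins.
# Exporting e.g. GREP=/usr/gnu/bin/grep beforehand, a hypothetical path,
# skips the probe entirely.)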
test -z "$GREP" && { func_check_prog_grep () { _G_path_prog=$1 _G_count=0 _G_path_prog_max=0 printf 0123456789 >conftest.in while : do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo 'GREP' >> conftest.nl "$_G_path_prog" -e 'GREP$' -e '-(cannot match)-' conftest.out 2>/dev/null || break diff conftest.out conftest.nl >/dev/null 2>&1 || break _G_count=`expr $_G_count + 1` if test "$_G_count" -gt "$_G_path_prog_max"; then # Best one so far, save it but keep looking for a better one func_check_prog_result=$_G_path_prog _G_path_prog_max=$_G_count fi # 10*(2^10) chars as input seems more than enough test 10 -lt "$_G_count" && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out } func_path_progs "grep ggrep" func_check_prog_grep "$PATH:/usr/xpg4/bin" GREP=$func_path_progs_result } ## ------------------------------- ## ## User overridable command paths. ## ## ------------------------------- ## # All uppercase variable names are used for environment variables. These # variables can be overridden by the user before calling a script that # uses them if a suitable command of that name is not already available # in the command search PATH. : ${CP="cp -f"} : ${ECHO="printf %s\n"} : ${EGREP="$GREP -E"} : ${FGREP="$GREP -F"} : ${LN_S="ln -s"} : ${MAKE="make"} : ${MKDIR="mkdir"} : ${MV="mv -f"} : ${RM="rm -f"} : ${SHELL="${CONFIG_SHELL-/bin/sh}"} ## -------------------- ## ## Useful sed snippets. ## ## -------------------- ## sed_dirname='s|/[^/]*$||' sed_basename='s|^.*/||' # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. sed_quote_subst='s|\([`"$\\]\)|\\\1|g' # Same as above, but do not quote variable references. sed_double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution that turns a string into a regex matching for the # string literally. sed_make_literal_regex='s|[].[^$\\*\/]|\\&|g' # Sed substitution that converts a w32 file name or path # that contains forward slashes, into one that contains # (escaped) backslashes. A very naive implementation. sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' # Re-'\' parameter expansions in output of sed_double_quote_subst that # were '\'-ed in input to the same. If an odd number of '\' preceded a # '$' in input to sed_double_quote_subst, that '$' was protected from # expansion. Since each input '\' is now two '\'s, look for any number # of runs of four '\'s followed by two '\'s and then a '$'. '\' that '$'. _G_bs='\\' _G_bs2='\\\\' _G_bs4='\\\\\\\\' _G_dollar='\$' sed_double_backslash="\ s/$_G_bs4/&\\ /g s/^$_G_bs2$_G_dollar/$_G_bs&/ s/\\([^$_G_bs]\\)$_G_bs2$_G_dollar/\\1$_G_bs2$_G_bs$_G_dollar/g s/\n//g" # require_check_ifs_backslash # --------------------------- # Check if we can use backslash as IFS='\' separator, and set # $check_ifs_backshlash_broken to ':' or 'false'. require_check_ifs_backslash=func_require_check_ifs_backslash func_require_check_ifs_backslash () { _G_save_IFS=$IFS IFS='\' _G_check_ifs_backshlash='a\\b' for _G_i in $_G_check_ifs_backshlash do case $_G_i in a) check_ifs_backshlash_broken=false ;; '') break ;; *) check_ifs_backshlash_broken=: break ;; esac done IFS=$_G_save_IFS require_check_ifs_backslash=: } ## ----------------- ## ## Global variables. 
## ## ----------------- ## # Except for the global variables explicitly listed below, the following # functions in the '^func_' namespace, and the '^require_' namespace # variables initialised in the 'Resource management' section, sourcing # this file will not pollute your global namespace with anything # else. There's no portable way to scope variables in Bourne shell # though, so actually running these functions will sometimes place # results into a variable named after the function, and often use # temporary variables in the '^_G_' namespace. If you are careful to # avoid using those namespaces casually in your sourcing script, things # should continue to work as you expect. And, of course, you can freely # overwrite any of the functions or variables defined here before # calling anything to customize them. EXIT_SUCCESS=0 EXIT_FAILURE=1 EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing. EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake. # Allow overriding, eg assuming that you follow the convention of # putting '$debug_cmd' at the start of all your functions, you can get # bash to show function call trace with: # # debug_cmd='eval echo "${FUNCNAME[0]} $*" >&2' bash your-script-name debug_cmd=${debug_cmd-":"} exit_cmd=: # By convention, finish your script with: # # exit $exit_status # # so that you can set exit_status to non-zero if you want to indicate # something went wrong during execution without actually bailing out at # the point of failure. exit_status=$EXIT_SUCCESS # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh # is ksh but when the shell is invoked as "sh" and the current value of # the _XPG environment variable is not equal to 1 (one), the special # positional parameter $0, within a function call, is the name of the # function. progpath=$0 # The name of this program. progname=`$ECHO "$progpath" |$SED "$sed_basename"` # Make sure we have an absolute progpath for reexecution: case $progpath in [\\/]*|[A-Za-z]:\\*) ;; *[\\/]*) progdir=`$ECHO "$progpath" |$SED "$sed_dirname"` progdir=`cd "$progdir" && pwd` progpath=$progdir/$progname ;; *) _G_IFS=$IFS IFS=${PATH_SEPARATOR-:} for progdir in $PATH; do IFS=$_G_IFS test -x "$progdir/$progname" && break done IFS=$_G_IFS test -n "$progdir" || progdir=`pwd` progpath=$progdir/$progname ;; esac ## ----------------- ## ## Standard options. ## ## ----------------- ## # The following options affect the operation of the functions defined # below, and should be set appropriately depending on run-time para- # meters passed on the command line. opt_dry_run=false opt_quiet=false opt_verbose=false # Categories 'all' and 'none' are always available. Append any others # you will pass as the first argument to func_warning from your own # code. warning_categories= # By default, display warnings according to 'opt_warning_types'. Set # 'warning_func' to ':' to elide all warnings, or func_fatal_error to # treat the next displayed warning as a fatal error. warning_func=func_warn_and_continue # Set to 'all' to display all warnings, 'none' to suppress all # warnings, or a space delimited list of some subset of # 'warning_categories' to display only the listed warnings. opt_warning_types=all ## -------------------- ## ## Resource management. 
## ## -------------------- ## # This section contains definitions for functions that each ensure a # particular resource (a file, or a non-empty configuration variable for # example) is available, and if appropriate to extract default values # from pertinent package files. Call them using their associated # 'require_*' variable to ensure that they are executed, at most, once. # # It's entirely deliberate that calling these functions can set # variables that don't obey the namespace limitations obeyed by the rest # of this file, in order that that they be as useful as possible to # callers. # require_term_colors # ------------------- # Allow display of bold text on terminals that support it. require_term_colors=func_require_term_colors func_require_term_colors () { $debug_cmd test -t 1 && { # COLORTERM and USE_ANSI_COLORS environment variables take # precedence, because most terminfo databases neglect to describe # whether color sequences are supported. test -n "${COLORTERM+set}" && : ${USE_ANSI_COLORS="1"} if test 1 = "$USE_ANSI_COLORS"; then # Standard ANSI escape sequences tc_reset='' tc_bold=''; tc_standout='' tc_red=''; tc_green='' tc_blue=''; tc_cyan='' else # Otherwise trust the terminfo database after all. test -n "`tput sgr0 2>/dev/null`" && { tc_reset=`tput sgr0` test -n "`tput bold 2>/dev/null`" && tc_bold=`tput bold` tc_standout=$tc_bold test -n "`tput smso 2>/dev/null`" && tc_standout=`tput smso` test -n "`tput setaf 1 2>/dev/null`" && tc_red=`tput setaf 1` test -n "`tput setaf 2 2>/dev/null`" && tc_green=`tput setaf 2` test -n "`tput setaf 4 2>/dev/null`" && tc_blue=`tput setaf 4` test -n "`tput setaf 5 2>/dev/null`" && tc_cyan=`tput setaf 5` } fi } require_term_colors=: } ## ----------------- ## ## Function library. ## ## ----------------- ## # This section contains a variety of useful functions to call in your # scripts. Take note of the portable wrappers for features provided by # some modern shells, which will fall back to slower equivalents on # less featureful shells. # func_append VAR VALUE # --------------------- # Append VALUE onto the existing contents of VAR. # We should try to minimise forks, especially on Windows where they are # unreasonably slow, so skip the feature probes when bash or zsh are # being used: if test set = "${BASH_VERSION+set}${ZSH_VERSION+set}"; then : ${_G_HAVE_ARITH_OP="yes"} : ${_G_HAVE_XSI_OPS="yes"} # The += operator was introduced in bash 3.1 case $BASH_VERSION in [12].* | 3.0 | 3.0*) ;; *) : ${_G_HAVE_PLUSEQ_OP="yes"} ;; esac fi # _G_HAVE_PLUSEQ_OP # Can be empty, in which case the shell is probed, "yes" if += is # useable or anything else if it does not work. test -z "$_G_HAVE_PLUSEQ_OP" \ && (eval 'x=a; x+=" b"; test "a b" = "$x"') 2>/dev/null \ && _G_HAVE_PLUSEQ_OP=yes if test yes = "$_G_HAVE_PLUSEQ_OP" then # This is an XSI compatible shell, allowing a faster implementation... eval 'func_append () { $debug_cmd eval "$1+=\$2" }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_append () { $debug_cmd eval "$1=\$$1\$2" } fi # func_append_quoted VAR VALUE # ---------------------------- # Quote VALUE and append to the end of shell variable VAR, separated # by a space. 
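# For example (hedged, hypothetical names):
#
#   cmd=echo
#   func_append_quoted cmd 'hello world'
#   eval "$cmd"     # runs: echo "hello world", keeping the value one word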
if test yes = "$_G_HAVE_PLUSEQ_OP"; then eval 'func_append_quoted () { $debug_cmd func_quote_arg pretty "$2" eval "$1+=\\ \$func_quote_arg_result" }' else func_append_quoted () { $debug_cmd func_quote_arg pretty "$2" eval "$1=\$$1\\ \$func_quote_arg_result" } fi # func_append_uniq VAR VALUE # -------------------------- # Append unique VALUE onto the existing contents of VAR, assuming # entries are delimited by the first character of VALUE. For example: # # func_append_uniq options " --another-option option-argument" # # will only append to $options if " --another-option option-argument " # is not already present somewhere in $options already (note spaces at # each end implied by leading space in second argument). func_append_uniq () { $debug_cmd eval _G_current_value='`$ECHO $'$1'`' _G_delim=`expr "$2" : '\(.\)'` case $_G_delim$_G_current_value$_G_delim in *"$2$_G_delim"*) ;; *) func_append "$@" ;; esac } # func_arith TERM... # ------------------ # Set func_arith_result to the result of evaluating TERMs. test -z "$_G_HAVE_ARITH_OP" \ && (eval 'test 2 = $(( 1 + 1 ))') 2>/dev/null \ && _G_HAVE_ARITH_OP=yes if test yes = "$_G_HAVE_ARITH_OP"; then eval 'func_arith () { $debug_cmd func_arith_result=$(( $* )) }' else func_arith () { $debug_cmd func_arith_result=`expr "$@"` } fi # func_basename FILE # ------------------ # Set func_basename_result to FILE with everything up to and including # the last / stripped. if test yes = "$_G_HAVE_XSI_OPS"; then # If this shell supports suffix pattern removal, then use it to avoid # forking. Hide the definitions single quotes in case the shell chokes # on unsupported syntax... _b='func_basename_result=${1##*/}' _d='case $1 in */*) func_dirname_result=${1%/*}$2 ;; * ) func_dirname_result=$3 ;; esac' else # ...otherwise fall back to using sed. _b='func_basename_result=`$ECHO "$1" |$SED "$sed_basename"`' _d='func_dirname_result=`$ECHO "$1" |$SED "$sed_dirname"` if test "X$func_dirname_result" = "X$1"; then func_dirname_result=$3 else func_append func_dirname_result "$2" fi' fi eval 'func_basename () { $debug_cmd '"$_b"' }' # func_dirname FILE APPEND NONDIR_REPLACEMENT # ------------------------------------------- # Compute the dirname of FILE. If nonempty, add APPEND to the result, # otherwise set result to NONDIR_REPLACEMENT. eval 'func_dirname () { $debug_cmd '"$_d"' }' # func_dirname_and_basename FILE APPEND NONDIR_REPLACEMENT # -------------------------------------------------------- # Perform func_basename and func_dirname in a single function # call: # dirname: Compute the dirname of FILE. If nonempty, # add APPEND to the result, otherwise set result # to NONDIR_REPLACEMENT. # value returned in "$func_dirname_result" # basename: Compute filename of FILE. # value retuned in "$func_basename_result" # For efficiency, we do not delegate to the functions above but instead # duplicate the functionality here. eval 'func_dirname_and_basename () { $debug_cmd '"$_b"' '"$_d"' }' # func_echo ARG... # ---------------- # Echo program name prefixed message. func_echo () { $debug_cmd _G_message=$* func_echo_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_IFS $ECHO "$progname: $_G_line" done IFS=$func_echo_IFS } # func_echo_all ARG... # -------------------- # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "$*" } # func_echo_infix_1 INFIX ARG... # ------------------------------ # Echo program name, followed by INFIX on the first line, with any # additional lines not showing INFIX. 
func_echo_infix_1 () { $debug_cmd $require_term_colors _G_infix=$1; shift _G_indent=$_G_infix _G_prefix="$progname: $_G_infix: " _G_message=$* # Strip color escape sequences before counting printable length for _G_tc in "$tc_reset" "$tc_bold" "$tc_standout" "$tc_red" "$tc_green" "$tc_blue" "$tc_cyan" do test -n "$_G_tc" && { _G_esc_tc=`$ECHO "$_G_tc" | $SED "$sed_make_literal_regex"` _G_indent=`$ECHO "$_G_indent" | $SED "s|$_G_esc_tc||g"` } done _G_indent="$progname: "`echo "$_G_indent" | $SED 's|.| |g'`" " ## exclude from sc_prohibit_nested_quotes func_echo_infix_1_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_infix_1_IFS $ECHO "$_G_prefix$tc_bold$_G_line$tc_reset" >&2 _G_prefix=$_G_indent done IFS=$func_echo_infix_1_IFS } # func_error ARG... # ----------------- # Echo program name prefixed message to standard error. func_error () { $debug_cmd $require_term_colors func_echo_infix_1 " $tc_standout${tc_red}error$tc_reset" "$*" >&2 } # func_fatal_error ARG... # ----------------------- # Echo program name prefixed message to standard error, and exit. func_fatal_error () { $debug_cmd func_error "$*" exit $EXIT_FAILURE } # func_grep EXPRESSION FILENAME # ----------------------------- # Check whether EXPRESSION matches any line of FILENAME, without output. func_grep () { $debug_cmd $GREP "$1" "$2" >/dev/null 2>&1 } # func_len STRING # --------------- # Set func_len_result to the length of STRING. STRING may not # start with a hyphen. test -z "$_G_HAVE_XSI_OPS" \ && (eval 'x=a/b/c; test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \ && _G_HAVE_XSI_OPS=yes if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_len () { $debug_cmd func_len_result=${#1} }' else func_len () { $debug_cmd func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len` } fi # func_mkdir_p DIRECTORY-PATH # --------------------------- # Make sure the entire path to DIRECTORY-PATH is available. func_mkdir_p () { $debug_cmd _G_directory_path=$1 _G_dir_list= if test -n "$_G_directory_path" && test : != "$opt_dry_run"; then # Protect directory names starting with '-' case $_G_directory_path in -*) _G_directory_path=./$_G_directory_path ;; esac # While some portion of DIR does not yet exist... while test ! -d "$_G_directory_path"; do # ...make a list in topmost first order. Use a colon delimited # list incase some portion of path contains whitespace. _G_dir_list=$_G_directory_path:$_G_dir_list # If the last portion added has no slash in it, the list is done case $_G_directory_path in */*) ;; *) break ;; esac # ...otherwise throw away the child directory and loop _G_directory_path=`$ECHO "$_G_directory_path" | $SED -e "$sed_dirname"` done _G_dir_list=`$ECHO "$_G_dir_list" | $SED 's|:*$||'` func_mkdir_p_IFS=$IFS; IFS=: for _G_dir in $_G_dir_list; do IFS=$func_mkdir_p_IFS # mkdir can fail with a 'File exist' error if two processes # try to create one of the directories concurrently. Don't # stop in that case! $MKDIR "$_G_dir" 2>/dev/null || : done IFS=$func_mkdir_p_IFS # Bail out if we (or some other process) failed to create a directory. test -d "$_G_directory_path" || \ func_fatal_error "Failed to create '$1'" fi } # func_mktempdir [BASENAME] # ------------------------- # Make a temporary directory that won't clash with other running # libtool processes, and avoids race conditions if possible. If # given, BASENAME is the basename for that directory. 
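# For example (hedged, the name shown is hypothetical):
#
#   my_tmpdir=`func_mktempdir`     # e.g. /tmp/libtool-AbCd1234
#
# In dry-run mode only a name is echoed; otherwise the directory is
# created, preferring 'mktemp -d' and falling back to $RANDOM plus the
# PID under a 0077 umask.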
func_mktempdir () { $debug_cmd _G_template=${TMPDIR-/tmp}/${1-$progname} if test : = "$opt_dry_run"; then # Return a directory name, but don't create it in dry-run mode _G_tmpdir=$_G_template-$$ else # If mktemp works, use that first and foremost _G_tmpdir=`mktemp -d "$_G_template-XXXXXXXX" 2>/dev/null` if test ! -d "$_G_tmpdir"; then # Failing that, at least try and use $RANDOM to avoid a race _G_tmpdir=$_G_template-${RANDOM-0}$$ func_mktempdir_umask=`umask` umask 0077 $MKDIR "$_G_tmpdir" umask $func_mktempdir_umask fi # If we're not in dry-run mode, bomb out on failure test -d "$_G_tmpdir" || \ func_fatal_error "cannot create temporary directory '$_G_tmpdir'" fi $ECHO "$_G_tmpdir" } # func_normal_abspath PATH # ------------------------ # Remove doubled-up and trailing slashes, "." path components, # and cancel out any ".." path components in PATH after making # it an absolute path. func_normal_abspath () { $debug_cmd # These SED scripts presuppose an absolute path with a trailing slash. _G_pathcar='s|^/\([^/]*\).*$|\1|' _G_pathcdr='s|^/[^/]*||' _G_removedotparts=':dotsl s|/\./|/|g t dotsl s|/\.$|/|' _G_collapseslashes='s|/\{1,\}|/|g' _G_finalslash='s|/*$|/|' # Start from root dir and reassemble the path. func_normal_abspath_result= func_normal_abspath_tpath=$1 func_normal_abspath_altnamespace= case $func_normal_abspath_tpath in "") # Empty path, that just means $cwd. func_stripname '' '/' "`pwd`" func_normal_abspath_result=$func_stripname_result return ;; # The next three entries are used to spot a run of precisely # two leading slashes without using negated character classes; # we take advantage of case's first-match behaviour. ///*) # Unusual form of absolute path, do nothing. ;; //*) # Not necessarily an ordinary path; POSIX reserves leading '//' # and for example Cygwin uses it to access remote file shares # over CIFS/SMB, so we conserve a leading double slash if found. func_normal_abspath_altnamespace=/ ;; /*) # Absolute path, do nothing. ;; *) # Relative path, prepend $cwd. func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath ;; esac # Cancel out all the simple stuff to save iterations. We also want # the path to end with a slash for ease of parsing, so make sure # there is one (and only one) here. func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_removedotparts" -e "$_G_collapseslashes" -e "$_G_finalslash"` while :; do # Processed it all yet? if test / = "$func_normal_abspath_tpath"; then # If we ascended to the root using ".." the result may be empty now. if test -z "$func_normal_abspath_result"; then func_normal_abspath_result=/ fi break fi func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_pathcar"` func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_pathcdr"` # Figure out what to do with it case $func_normal_abspath_tcomponent in "") # Trailing empty path component, ignore it. ;; ..) # Parent dir; strip last assembled component from result. func_dirname "$func_normal_abspath_result" func_normal_abspath_result=$func_dirname_result ;; *) # Actual path component, append it. func_append func_normal_abspath_result "/$func_normal_abspath_tcomponent" ;; esac done # Restore leading double-slash if one was found on entry. func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result } # func_notquiet ARG... # -------------------- # Echo program name prefixed message only when not in quiet mode. 
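# For example (hedged): 'func_notquiet "creating $output"' prints a line
# such as 'libtool: creating libfoo.la' unless quiet mode ($opt_quiet) is
# in effect.  The variable and library name are hypothetical.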
func_notquiet () { $debug_cmd $opt_quiet || func_echo ${1+"$@"} # A bug in bash halts the script if the last line of a function # fails when set -e is in force, so we need another command to # work around that: : } # func_relative_path SRCDIR DSTDIR # -------------------------------- # Set func_relative_path_result to the relative path from SRCDIR to DSTDIR. func_relative_path () { $debug_cmd func_relative_path_result= func_normal_abspath "$1" func_relative_path_tlibdir=$func_normal_abspath_result func_normal_abspath "$2" func_relative_path_tbindir=$func_normal_abspath_result # Ascend the tree starting from libdir while :; do # check if we have found a prefix of bindir case $func_relative_path_tbindir in $func_relative_path_tlibdir) # found an exact match func_relative_path_tcancelled= break ;; $func_relative_path_tlibdir*) # found a matching prefix func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir" func_relative_path_tcancelled=$func_stripname_result if test -z "$func_relative_path_result"; then func_relative_path_result=. fi break ;; *) func_dirname $func_relative_path_tlibdir func_relative_path_tlibdir=$func_dirname_result if test -z "$func_relative_path_tlibdir"; then # Have to descend all the way to the root! func_relative_path_result=../$func_relative_path_result func_relative_path_tcancelled=$func_relative_path_tbindir break fi func_relative_path_result=../$func_relative_path_result ;; esac done # Now calculate path; take care to avoid doubling-up slashes. func_stripname '' '/' "$func_relative_path_result" func_relative_path_result=$func_stripname_result func_stripname '/' '/' "$func_relative_path_tcancelled" if test -n "$func_stripname_result"; then func_append func_relative_path_result "/$func_stripname_result" fi # Normalisation. If bindir is libdir, return '.' else relative path. if test -n "$func_relative_path_result"; then func_stripname './' '' "$func_relative_path_result" func_relative_path_result=$func_stripname_result fi test -n "$func_relative_path_result" || func_relative_path_result=. : } # func_quote_portable EVAL ARG # ---------------------------- # Internal function to portably implement func_quote_arg. Note that we still # keep attention to performance here so we as much as possible try to avoid # calling sed binary (so far O(N) complexity as long as func_append is O(1)). func_quote_portable () { $debug_cmd $require_check_ifs_backslash func_quote_portable_result=$2 # one-time-loop (easy break) while true do if $1; then func_quote_portable_result=`$ECHO "$2" | $SED \ -e "$sed_double_quote_subst" -e "$sed_double_backslash"` break fi # Quote for eval. case $func_quote_portable_result in *[\\\`\"\$]*) # Fallback to sed for $func_check_bs_ifs_broken=:, or when the string # contains the shell wildcard characters. 
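# (Hedged examples of values that land on this slower sed path: anything
# mixing a character that needs escaping (backslash, backquote, double
# quote, dollar) with a glob character ('[', '*', '?'), say a hypothetical
# '$destdir/*.la'; and, when the IFS='\' quirk probed earlier is present,
# every value containing such characters.)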
case $check_ifs_backshlash_broken$func_quote_portable_result in :*|*[\[\*\?]*) func_quote_portable_result=`$ECHO "$func_quote_portable_result" \ | $SED "$sed_quote_subst"` break ;; esac func_quote_portable_old_IFS=$IFS for _G_char in '\' '`' '"' '$' do # STATE($1) PREV($2) SEPARATOR($3) set start "" "" func_quote_portable_result=dummy"$_G_char$func_quote_portable_result$_G_char"dummy IFS=$_G_char for _G_part in $func_quote_portable_result do case $1 in quote) func_append func_quote_portable_result "$3$2" set quote "$_G_part" "\\$_G_char" ;; start) set first "" "" func_quote_portable_result= ;; first) set quote "$_G_part" "" ;; esac done done IFS=$func_quote_portable_old_IFS ;; *) ;; esac break done func_quote_portable_unquoted_result=$func_quote_portable_result case $func_quote_portable_result in # double-quote args containing shell metacharacters to delay # word splitting, command substitution and variable expansion # for a subsequent eval. # many bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") func_quote_portable_result=\"$func_quote_portable_result\" ;; esac } # func_quotefast_eval ARG # ----------------------- # Quote one ARG (internal). This is equivalent to 'func_quote_arg eval ARG', # but optimized for speed. Result is stored in $func_quotefast_eval. if test xyes = `(x=; printf -v x %q yes; echo x"$x") 2>/dev/null`; then printf -v _GL_test_printf_tilde %q '~' if test '\~' = "$_GL_test_printf_tilde"; then func_quotefast_eval () { printf -v func_quotefast_eval_result %q "$1" } else # Broken older Bash implementations. Make those faster too if possible. func_quotefast_eval () { case $1 in '~'*) func_quote_portable false "$1" func_quotefast_eval_result=$func_quote_portable_result ;; *) printf -v func_quotefast_eval_result %q "$1" ;; esac } fi else func_quotefast_eval () { func_quote_portable false "$1" func_quotefast_eval_result=$func_quote_portable_result } fi # func_quote_arg MODEs ARG # ------------------------ # Quote one ARG to be evaled later. MODEs argument may contain zero or more # specifiers listed below separated by ',' character. This function returns two # values: # i) func_quote_arg_result # double-quoted (when needed), suitable for a subsequent eval # ii) func_quote_arg_unquoted_result # has all characters that are still active within double # quotes backslashified. Available only if 'unquoted' is specified. # # Available modes: # ---------------- # 'eval' (default) # - escape shell special characters # 'expand' # - the same as 'eval'; but do not quote variable references # 'pretty' # - request aesthetic output, i.e. '"a b"' instead of 'a\ b'. This might # be used later in func_quote to get output like: 'echo "a b"' instead # of 'echo a\ b'. This is slower than default on some shells. # 'unquoted' # - produce also $func_quote_arg_unquoted_result which does not contain # wrapping double-quotes. 
# # Examples for 'func_quote_arg pretty,unquoted string': # # string | *_result | *_unquoted_result # ------------+-----------------------+------------------- # " | \" | \" # a b | "a b" | a b # "a b" | "\"a b\"" | \"a b\" # * | "*" | * # z="${x-$y}" | "z=\"\${x-\$y}\"" | z=\"\${x-\$y}\" # # Examples for 'func_quote_arg pretty,unquoted,expand string': # # string | *_result | *_unquoted_result # --------------+---------------------+-------------------- # z="${x-$y}" | "z=\"${x-$y}\"" | z=\"${x-$y}\" func_quote_arg () { _G_quote_expand=false case ,$1, in *,expand,*) _G_quote_expand=: ;; esac case ,$1, in *,pretty,*|*,expand,*|*,unquoted,*) func_quote_portable $_G_quote_expand "$2" func_quote_arg_result=$func_quote_portable_result func_quote_arg_unquoted_result=$func_quote_portable_unquoted_result ;; *) # Faster quote-for-eval for some shells. func_quotefast_eval "$2" func_quote_arg_result=$func_quotefast_eval_result ;; esac } # func_quote MODEs ARGs... # ------------------------ # Quote all ARGs to be evaled later and join them into single command. See # func_quote_arg's description for more info. func_quote () { $debug_cmd _G_func_quote_mode=$1 ; shift func_quote_result= while test 0 -lt $#; do func_quote_arg "$_G_func_quote_mode" "$1" if test -n "$func_quote_result"; then func_append func_quote_result " $func_quote_arg_result" else func_append func_quote_result "$func_quote_arg_result" fi shift done } # func_stripname PREFIX SUFFIX NAME # --------------------------------- # strip PREFIX and SUFFIX from NAME, and store in func_stripname_result. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_stripname () { $debug_cmd # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are # positional parameters, so assign one to ordinary variable first. func_stripname_result=$3 func_stripname_result=${func_stripname_result#"$1"} func_stripname_result=${func_stripname_result%"$2"} }' else func_stripname () { $debug_cmd case $2 in .*) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%\\\\$2\$%%"`;; *) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%$2\$%%"`;; esac } fi # func_show_eval CMD [FAIL_EXP] # ----------------------------- # Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. func_show_eval () { $debug_cmd _G_cmd=$1 _G_fail_exp=${2-':'} func_quote_arg pretty,expand "$_G_cmd" eval "func_notquiet $func_quote_arg_result" $opt_dry_run || { eval "$_G_cmd" _G_status=$? if test 0 -ne "$_G_status"; then eval "(exit $_G_status); $_G_fail_exp" fi } } # func_show_eval_locale CMD [FAIL_EXP] # ------------------------------------ # Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. Use the saved locale for evaluation. func_show_eval_locale () { $debug_cmd _G_cmd=$1 _G_fail_exp=${2-':'} $opt_quiet || { func_quote_arg expand,pretty "$_G_cmd" eval "func_echo $func_quote_arg_result" } $opt_dry_run || { eval "$_G_user_locale $_G_cmd" _G_status=$? eval "$_G_safe_locale" if test 0 -ne "$_G_status"; then eval "(exit $_G_status); $_G_fail_exp" fi } } # func_tr_sh # ---------- # Turn $1 into a string suitable for a shell variable name. # Result is stored in $func_tr_sh_result. 
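# For example (hedged): 'func_tr_sh 9to5-data' leaves '_9to5_data' in
# $func_tr_sh_result; the input shown is hypothetical.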
All characters # not in the set a-zA-Z0-9_ are replaced with '_'. Further, # if $1 begins with a digit, a '_' is prepended as well. func_tr_sh () { $debug_cmd case $1 in [0-9]* | *[!a-zA-Z0-9_]*) func_tr_sh_result=`$ECHO "$1" | $SED -e 's/^\([0-9]\)/_\1/' -e 's/[^a-zA-Z0-9_]/_/g'` ;; * ) func_tr_sh_result=$1 ;; esac } # func_verbose ARG... # ------------------- # Echo program name prefixed message in verbose mode only. func_verbose () { $debug_cmd $opt_verbose && func_echo "$*" : } # func_warn_and_continue ARG... # ----------------------------- # Echo program name prefixed warning message to standard error. func_warn_and_continue () { $debug_cmd $require_term_colors func_echo_infix_1 "${tc_red}warning$tc_reset" "$*" >&2 } # func_warning CATEGORY ARG... # ---------------------------- # Echo program name prefixed warning message to standard error. Warning # messages can be filtered according to CATEGORY, where this function # elides messages where CATEGORY is not listed in the global variable # 'opt_warning_types'. func_warning () { $debug_cmd # CATEGORY must be in the warning_categories list! case " $warning_categories " in *" $1 "*) ;; *) func_internal_error "invalid warning category '$1'" ;; esac _G_category=$1 shift case " $opt_warning_types " in *" $_G_category "*) $warning_func ${1+"$@"} ;; esac } # func_sort_ver VER1 VER2 # ----------------------- # 'sort -V' is not generally available. # Note this deviates from the version comparison in automake # in that it treats 1.5 < 1.5.0, and treats 1.4.4a < 1.4-p3a # but this should suffice as we won't be specifying old # version formats or redundant trailing .0 in bootstrap.conf. # If we did want full compatibility then we should probably # use m4_version_compare from autoconf. func_sort_ver () { $debug_cmd printf '%s\n%s\n' "$1" "$2" \ | sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n -k 5,5n -k 6,6n -k 7,7n -k 8,8n -k 9,9n } # func_lt_ver PREV CURR # --------------------- # Return true if PREV and CURR are in the correct order according to # func_sort_ver, otherwise false. Use it like this: # # func_lt_ver "$prev_ver" "$proposed_ver" || func_fatal_error "..." func_lt_ver () { $debug_cmd test "x$1" = x`func_sort_ver "$1" "$2" | $SED 1q` } # Local variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-pattern: "10/scriptversion=%:y-%02m-%02d.%02H; # UTC" # time-stamp-time-zone: "UTC" # End: #! /bin/sh # A portable, pluggable option parser for Bourne shell. # Written by Gary V. Vaughan, 2010 # This is free software. There is NO warranty; not even for # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # # Copyright (C) 2010-2019, 2021 Bootstrap Authors # # This file is dual licensed under the terms of the MIT license # , and GPL version 2 or later # . You must apply one of # these licenses when using or redistributing this software or any of # the files within it. See the URLs above, or the file `LICENSE` # included in the Bootstrap distribution for the full license texts. # Please report bugs or propose patches to: # # Set a version string for this script. scriptversion=2019-02-19.15; # UTC ## ------ ## ## Usage. ## ## ------ ## # This file is a library for parsing options in your shell scripts along # with assorted other useful supporting features that you can make use # of too. # # For the simplest scripts you might need only: # # #!/bin/sh # . relative/path/to/funclib.sh # . 
relative/path/to/options-parser # scriptversion=1.0 # func_options ${1+"$@"} # eval set dummy "$func_options_result"; shift # ...rest of your script... # # In order for the '--version' option to work, you will need to have a # suitably formatted comment like the one at the top of this file # starting with '# Written by ' and ending with '# Copyright'. # # For '-h' and '--help' to work, you will also need a one line # description of your script's purpose in a comment directly above the # '# Written by ' line, like the one at the top of this file. # # The default options also support '--debug', which will turn on shell # execution tracing (see the comment above debug_cmd below for another # use), and '--verbose' and the func_verbose function to allow your script # to display verbose messages only when your user has specified # '--verbose'. # # After sourcing this file, you can plug in processing for additional # options by amending the variables from the 'Configuration' section # below, and following the instructions in the 'Option parsing' # section further down. ## -------------- ## ## Configuration. ## ## -------------- ## # You should override these variables in your script after sourcing this # file so that they reflect the customisations you have added to the # option parser. # The usage line for option parsing errors and the start of '-h' and # '--help' output messages. You can embed shell variables for delayed # expansion at the time the message is displayed, but you will need to # quote other shell meta-characters carefully to prevent them being # expanded when the contents are evaled. usage='$progpath [OPTION]...' # Short help message in response to '-h' and '--help'. Add to this or # override it after sourcing this library to reflect the full set of # options your script accepts. usage_message="\ --debug enable verbose shell tracing -W, --warnings=CATEGORY report the warnings falling in CATEGORY [all] -v, --verbose verbosely report processing --version print version information and exit -h, --help print short or long help message and exit " # Additional text appended to 'usage_message' in response to '--help'. long_help_message=" Warning categories include: 'all' show all warnings 'none' turn off all the warnings 'error' warnings are treated as fatal errors" # Help message printed before fatal option parsing errors. fatal_help="Try '\$progname --help' for more information." ## ------------------------- ## ## Hook function management. ## ## ------------------------- ## # This section contains functions for adding, removing, and running hooks # in the main code. A hook is just a list of function names that can be # run in order later on. # func_hookable FUNC_NAME # ----------------------- # Declare that FUNC_NAME will run hooks added with # 'func_add_hook FUNC_NAME ...'. func_hookable () { $debug_cmd func_append hookable_fns " $1" } # func_add_hook FUNC_NAME HOOK_FUNC # --------------------------------- # Request that FUNC_NAME call HOOK_FUNC before it returns. FUNC_NAME must # first have been declared "hookable" by a call to 'func_hookable'. func_add_hook () { $debug_cmd case " $hookable_fns " in *" $1 "*) ;; *) func_fatal_error "'$1' does not accept hook functions." ;; esac eval func_append ${1}_hooks '" $2"' } # func_remove_hook FUNC_NAME HOOK_FUNC # ------------------------------------ # Remove HOOK_FUNC from the list of hook functions to be called by # FUNC_NAME. 
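# Example (illustrative; 'my_cleanup_hook' is a hypothetical function
# defined by the sourcing script -- func_options_prep is declared hookable
# later in this file):
#
#   func_add_hook func_options_prep my_cleanup_hook
#   ...            # later, when the hook is no longer wanted:
#   func_remove_hook func_options_prep my_cleanup_hook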
func_remove_hook () { $debug_cmd eval ${1}_hooks='`$ECHO "\$'$1'_hooks" |$SED "s| '$2'||"`' } # func_propagate_result FUNC_NAME_A FUNC_NAME_B # --------------------------------------------- # If the *_result variable of FUNC_NAME_A _is set_, assign its value to # *_result variable of FUNC_NAME_B. func_propagate_result () { $debug_cmd func_propagate_result_result=: if eval "test \"\${${1}_result+set}\" = set" then eval "${2}_result=\$${1}_result" else func_propagate_result_result=false fi } # func_run_hooks FUNC_NAME [ARG]... # --------------------------------- # Run all hook functions registered to FUNC_NAME. # It's assumed that the list of hook functions contains nothing more # than a whitespace-delimited list of legal shell function names, and # no effort is wasted trying to catch shell meta-characters or preserve # whitespace. func_run_hooks () { $debug_cmd case " $hookable_fns " in *" $1 "*) ;; *) func_fatal_error "'$1' does not support hook functions." ;; esac eval _G_hook_fns=\$$1_hooks; shift for _G_hook in $_G_hook_fns; do func_unset "${_G_hook}_result" eval $_G_hook '${1+"$@"}' func_propagate_result $_G_hook func_run_hooks if $func_propagate_result_result; then eval set dummy "$func_run_hooks_result"; shift fi done } ## --------------- ## ## Option parsing. ## ## --------------- ## # In order to add your own option parsing hooks, you must accept the # full positional parameter list from your hook function. You may remove # or edit any options that you action, and then pass back the remaining # unprocessed options in '_result', escaped # suitably for 'eval'. # # The '_result' variable is automatically unset # before your hook gets called; for best performance, only set the # *_result variable when necessary (i.e. don't call the 'func_quote' # function unnecessarily because it can be an expensive operation on some # machines). # # Like this: # # my_options_prep () # { # $debug_cmd # # # Extend the existing usage message. # usage_message=$usage_message' # -s, --silent don'\''t print informational messages # ' # # No change in '$@' (ignored completely by this hook). Leave # # my_options_prep_result variable intact. # } # func_add_hook func_options_prep my_options_prep # # # my_silent_option () # { # $debug_cmd # # args_changed=false # # # Note that, for efficiency, we parse as many options as we can # # recognise in a loop before passing the remainder back to the # # caller on the first unrecognised argument we encounter. # while test $# -gt 0; do # opt=$1; shift # case $opt in # --silent|-s) opt_silent=: # args_changed=: # ;; # # Separate non-argument short options: # -s*) func_split_short_opt "$_G_opt" # set dummy "$func_split_short_opt_name" \ # "-$func_split_short_opt_arg" ${1+"$@"} # shift # args_changed=: # ;; # *) # Make sure the first unrecognised option "$_G_opt" # # is added back to "$@" in case we need it later, # # if $args_changed was set to 'true'. # set dummy "$_G_opt" ${1+"$@"}; shift; break ;; # esac # done # # # Only call 'func_quote' here if we processed at least one argument. # if $args_changed; then # func_quote eval ${1+"$@"} # my_silent_option_result=$func_quote_result # fi # } # func_add_hook func_parse_options my_silent_option # # # my_option_validation () # { # $debug_cmd # # $opt_silent && $opt_verbose && func_fatal_help "\ # '--silent' and '--verbose' options are mutually exclusive." # } # func_add_hook func_validate_options my_option_validation # # You'll also need to manually amend $usage_message to reflect the extra # options you parse. 
It's preferable to append if you can, so that # multiple option parsing hooks can be added safely. # func_options_finish [ARG]... # ---------------------------- # Finishing the option parse loop (call 'func_options' hooks ATM). func_options_finish () { $debug_cmd func_run_hooks func_options ${1+"$@"} func_propagate_result func_run_hooks func_options_finish } # func_options [ARG]... # --------------------- # All the functions called inside func_options are hookable. See the # individual implementations for details. func_hookable func_options func_options () { $debug_cmd _G_options_quoted=false for my_func in options_prep parse_options validate_options options_finish do func_unset func_${my_func}_result func_unset func_run_hooks_result eval func_$my_func '${1+"$@"}' func_propagate_result func_$my_func func_options if $func_propagate_result_result; then eval set dummy "$func_options_result"; shift _G_options_quoted=: fi done $_G_options_quoted || { # As we (func_options) are top-level options-parser function and # nobody quoted "$@" for us yet, we need to do it explicitly for # caller. func_quote eval ${1+"$@"} func_options_result=$func_quote_result } } # func_options_prep [ARG]... # -------------------------- # All initialisations required before starting the option parse loop. # Note that when calling hook functions, we pass through the list of # positional parameters. If a hook function modifies that list, and # needs to propagate that back to rest of this script, then the complete # modified list must be put in 'func_run_hooks_result' before returning. func_hookable func_options_prep func_options_prep () { $debug_cmd # Option defaults: opt_verbose=false opt_warning_types= func_run_hooks func_options_prep ${1+"$@"} func_propagate_result func_run_hooks func_options_prep } # func_parse_options [ARG]... # --------------------------- # The main option parsing loop. func_hookable func_parse_options func_parse_options () { $debug_cmd _G_parse_options_requote=false # this just eases exit handling while test $# -gt 0; do # Defer to hook functions for initial option parsing, so they # get priority in the event of reusing an option name. func_run_hooks func_parse_options ${1+"$@"} func_propagate_result func_run_hooks func_parse_options if $func_propagate_result_result; then eval set dummy "$func_parse_options_result"; shift # Even though we may have changed "$@", we passed the "$@" array # down into the hook and it quoted it for us (because we are in # this if-branch). No need to quote it again. _G_parse_options_requote=false fi # Break out of the loop if we already parsed every option. test $# -gt 0 || break # We expect that one of the options parsed in this function matches # and thus we remove _G_opt from "$@" and need to re-quote. 
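        # For reference (an illustrative walk-through, not extra logic): an
        # argument such as '--warnings=error' matches none of the literal
        # patterns below, so the '--*=*' branch splits it into '--warnings'
        # 'error' and it is re-examined on the next pass; likewise '-Wall'
        # is split by the '-W*' branch into '-W' 'all'.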
_G_match_parse_options=: _G_opt=$1 shift case $_G_opt in --debug|-x) debug_cmd='set -x' func_echo "enabling shell trace mode" >&2 $debug_cmd ;; --no-warnings|--no-warning|--no-warn) set dummy --warnings none ${1+"$@"} shift ;; --warnings|--warning|-W) if test $# = 0 && func_missing_arg $_G_opt; then _G_parse_options_requote=: break fi case " $warning_categories $1" in *" $1 "*) # trailing space prevents matching last $1 above func_append_uniq opt_warning_types " $1" ;; *all) opt_warning_types=$warning_categories ;; *none) opt_warning_types=none warning_func=: ;; *error) opt_warning_types=$warning_categories warning_func=func_fatal_error ;; *) func_fatal_error \ "unsupported warning category: '$1'" ;; esac shift ;; --verbose|-v) opt_verbose=: ;; --version) func_version ;; -\?|-h) func_usage ;; --help) func_help ;; # Separate optargs to long options (plugins may need this): --*=*) func_split_equals "$_G_opt" set dummy "$func_split_equals_lhs" \ "$func_split_equals_rhs" ${1+"$@"} shift ;; # Separate optargs to short options: -W*) func_split_short_opt "$_G_opt" set dummy "$func_split_short_opt_name" \ "$func_split_short_opt_arg" ${1+"$@"} shift ;; # Separate non-argument short options: -\?*|-h*|-v*|-x*) func_split_short_opt "$_G_opt" set dummy "$func_split_short_opt_name" \ "-$func_split_short_opt_arg" ${1+"$@"} shift ;; --) _G_parse_options_requote=: ; break ;; -*) func_fatal_help "unrecognised option: '$_G_opt'" ;; *) set dummy "$_G_opt" ${1+"$@"}; shift _G_match_parse_options=false break ;; esac if $_G_match_parse_options; then _G_parse_options_requote=: fi done if $_G_parse_options_requote; then # save modified positional parameters for caller func_quote eval ${1+"$@"} func_parse_options_result=$func_quote_result fi } # func_validate_options [ARG]... # ------------------------------ # Perform any sanity checks on option settings and/or unconsumed # arguments. func_hookable func_validate_options func_validate_options () { $debug_cmd # Display all warnings if -W was not given. test -n "$opt_warning_types" || opt_warning_types=" $warning_categories" func_run_hooks func_validate_options ${1+"$@"} func_propagate_result func_run_hooks func_validate_options # Bail if the options were screwed! $exit_cmd $EXIT_FAILURE } ## ----------------- ## ## Helper functions. ## ## ----------------- ## # This section contains the helper functions used by the rest of the # hookable option parser framework in ascii-betical order. # func_fatal_help ARG... # ---------------------- # Echo program name prefixed message to standard error, followed by # a help hint, and exit. func_fatal_help () { $debug_cmd eval \$ECHO \""Usage: $usage"\" eval \$ECHO \""$fatal_help"\" func_error ${1+"$@"} exit $EXIT_FAILURE } # func_help # --------- # Echo long help message to standard output and exit. func_help () { $debug_cmd func_usage_message $ECHO "$long_help_message" exit 0 } # func_missing_arg ARGNAME # ------------------------ # Echo program name prefixed message to standard error and set global # exit_cmd. func_missing_arg () { $debug_cmd func_error "Missing argument for '$1'." exit_cmd=exit } # func_split_equals STRING # ------------------------ # Set func_split_equals_lhs and func_split_equals_rhs shell variables # after splitting STRING at the '=' sign. test -z "$_G_HAVE_XSI_OPS" \ && (eval 'x=a/b/c; test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \ && _G_HAVE_XSI_OPS=yes if test yes = "$_G_HAVE_XSI_OPS" then # This is an XSI compatible shell, allowing a faster implementation... 
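  # Example (illustrative): 'func_split_equals --mode=link' sets
  # func_split_equals_lhs='--mode' and func_split_equals_rhs='link';
  # for a bare '--mode' with no '=', func_split_equals_rhs is set empty.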
eval 'func_split_equals () { $debug_cmd func_split_equals_lhs=${1%%=*} func_split_equals_rhs=${1#*=} if test "x$func_split_equals_lhs" = "x$1"; then func_split_equals_rhs= fi }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_split_equals () { $debug_cmd func_split_equals_lhs=`expr "x$1" : 'x\([^=]*\)'` func_split_equals_rhs= test "x$func_split_equals_lhs=" = "x$1" \ || func_split_equals_rhs=`expr "x$1" : 'x[^=]*=\(.*\)$'` } fi #func_split_equals # func_split_short_opt SHORTOPT # ----------------------------- # Set func_split_short_opt_name and func_split_short_opt_arg shell # variables after splitting SHORTOPT after the 2nd character. if test yes = "$_G_HAVE_XSI_OPS" then # This is an XSI compatible shell, allowing a faster implementation... eval 'func_split_short_opt () { $debug_cmd func_split_short_opt_arg=${1#??} func_split_short_opt_name=${1%"$func_split_short_opt_arg"} }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_split_short_opt () { $debug_cmd func_split_short_opt_name=`expr "x$1" : 'x\(-.\)'` func_split_short_opt_arg=`expr "x$1" : 'x-.\(.*\)$'` } fi #func_split_short_opt # func_usage # ---------- # Echo short help message to standard output and exit. func_usage () { $debug_cmd func_usage_message $ECHO "Run '$progname --help |${PAGER-more}' for full usage" exit 0 } # func_usage_message # ------------------ # Echo short help message to standard output. func_usage_message () { $debug_cmd eval \$ECHO \""Usage: $usage"\" echo $SED -n 's|^# || /^Written by/{ x;p;x } h /^Written by/q' < "$progpath" echo eval \$ECHO \""$usage_message"\" } # func_version # ------------ # Echo version message to standard output and exit. # The version message is extracted from the calling file's header # comments, with leading '# ' stripped: # 1. First display the progname and version # 2. Followed by the header comment line matching /^# Written by / # 3. Then a blank line followed by the first following line matching # /^# Copyright / # 4. Immediately followed by any lines between the previous matches, # except lines preceding the intervening completely blank line. # For example, see the header comments of this file. func_version () { $debug_cmd printf '%s\n' "$progname $scriptversion" $SED -n ' /^# Written by /!b s|^# ||; p; n :fwd2blnk /./ { n b fwd2blnk } p; n :holdwrnt s|^# || s|^# *$|| /^Copyright /!{ /./H n b holdwrnt } s|\((C)\)[ 0-9,-]*[ ,-]\([1-9][0-9]* \)|\1 \2| G s|\(\n\)\n*|\1|g p; q' < "$progpath" exit $? } # Local variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-pattern: "30/scriptversion=%:y-%02m-%02d.%02H; # UTC" # time-stamp-time-zone: "UTC" # End: # Set a version string. scriptversion='(GNU libtool) 2.4.7' # func_echo ARG... # ---------------- # Libtool also displays the current mode in messages, so override # funclib.sh func_echo with this custom definition. func_echo () { $debug_cmd _G_message=$* func_echo_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_IFS $ECHO "$progname${opt_mode+: $opt_mode}: $_G_line" done IFS=$func_echo_IFS } # func_warning ARG... # ------------------- # Libtool warnings are not categorized, so override funclib.sh # func_warning with this simpler definition. func_warning () { $debug_cmd $warning_func ${1+"$@"} } ## ---------------- ## ## Options parsing. ## ## ---------------- ## # Hook in the functions to make sure our own options are parsed during # the option parsing loop. usage='$progpath [OPTION]... 
[MODE-ARG]...' # Short help message in response to '-h'. usage_message="Options: --config show all configuration variables --debug enable verbose shell tracing -n, --dry-run display commands without modifying any files --features display basic configuration information and exit --mode=MODE use operation mode MODE --no-warnings equivalent to '-Wnone' --preserve-dup-deps don't remove duplicate dependency libraries --quiet, --silent don't print informational messages --tag=TAG use configuration variables from tag TAG -v, --verbose print more informational messages than default --version print version information -W, --warnings=CATEGORY report the warnings falling in CATEGORY [all] -h, --help, --help-all print short, long, or detailed help message " # Additional text appended to 'usage_message' in response to '--help'. func_help () { $debug_cmd func_usage_message $ECHO "$long_help_message MODE must be one of the following: clean remove files from the build directory compile compile a source file into a libtool object execute automatically set library path, then run a program finish complete the installation of libtool libraries install install libraries or executables link create a library or an executable uninstall remove libraries from an installed directory MODE-ARGS vary depending on the MODE. When passed as first option, '--mode=MODE' may be abbreviated as 'MODE' or a unique abbreviation of that. Try '$progname --help --mode=MODE' for a more detailed description of MODE. When reporting a bug, please describe a test case to reproduce it and include the following information: host-triplet: $host shell: $SHELL compiler: $LTCC compiler flags: $LTCFLAGS linker: $LD (gnu? $with_gnu_ld) version: $progname (GNU libtool) 2.4.7 automake: `($AUTOMAKE --version) 2>/dev/null |$SED 1q` autoconf: `($AUTOCONF --version) 2>/dev/null |$SED 1q` Report bugs to . GNU libtool home page: . General help using GNU software: ." exit 0 } # func_lo2o OBJECT-NAME # --------------------- # Transform OBJECT-NAME from a '.lo' suffix to the platform specific # object suffix. lo2o=s/\\.lo\$/.$objext/ o2lo=s/\\.$objext\$/.lo/ if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_lo2o () { case $1 in *.lo) func_lo2o_result=${1%.lo}.$objext ;; * ) func_lo2o_result=$1 ;; esac }' # func_xform LIBOBJ-OR-SOURCE # --------------------------- # Transform LIBOBJ-OR-SOURCE from a '.o' or '.c' (or otherwise) # suffix to a '.lo' libtool-object suffix. eval 'func_xform () { func_xform_result=${1%.*}.lo }' else # ...otherwise fall back to using sed. func_lo2o () { func_lo2o_result=`$ECHO "$1" | $SED "$lo2o"` } func_xform () { func_xform_result=`$ECHO "$1" | $SED 's|\.[^.]*$|.lo|'` } fi # func_fatal_configuration ARG... # ------------------------------- # Echo program name prefixed message to standard error, followed by # a configuration failure hint, and exit. func_fatal_configuration () { func_fatal_error ${1+"$@"} \ "See the $PACKAGE documentation for more information." \ "Fatal configuration error." } # func_config # ----------- # Display the configuration for all the tags in this script. func_config () { re_begincf='^# ### BEGIN LIBTOOL' re_endcf='^# ### END LIBTOOL' # Default configuration. $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" # Now print the configurations for the tags. for tagname in $taglist; do $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" done exit $? } # func_features # ------------- # Display the features supported by this script. 
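# Example of typical '--features' output (illustrative; the host triplet
# and the enable/disable lines depend entirely on how this copy of the
# script was configured):
#
#   $ ./libtool --features
#   host: x86_64-pc-linux-gnu
#   enable shared libraries
#   enable static libraries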
func_features () { echo "host: $host" if test yes = "$build_libtool_libs"; then echo "enable shared libraries" else echo "disable shared libraries" fi if test yes = "$build_old_libs"; then echo "enable static libraries" else echo "disable static libraries" fi exit $? } # func_enable_tag TAGNAME # ----------------------- # Verify that TAGNAME is valid, and either flag an error and exit, or # enable the TAGNAME tag. We also add TAGNAME to the global $taglist # variable here. func_enable_tag () { # Global variable: tagname=$1 re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" sed_extractcf=/$re_begincf/,/$re_endcf/p # Validate tagname. case $tagname in *[!-_A-Za-z0-9,/]*) func_fatal_error "invalid tag name: $tagname" ;; esac # Don't test for the "default" C tag, as we know it's # there but not specially marked. case $tagname in CC) ;; *) if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then taglist="$taglist $tagname" # Evaluate the configuration. Be careful to quote the path # and the sed script, to avoid splitting on whitespace, but # also don't use non-portable quotes within backquotes within # quotes we have to do it in 2 steps: extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` eval "$extractedcf" else func_error "ignoring unknown tag $tagname" fi ;; esac } # func_check_version_match # ------------------------ # Ensure that we are using m4 macros, and libtool script from the same # release of libtool. func_check_version_match () { if test "$package_revision" != "$macro_revision"; then if test "$VERSION" != "$macro_version"; then if test -z "$macro_version"; then cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from an older release. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from $PACKAGE $macro_version. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF fi else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, $progname: but the definition of this LT_INIT comes from revision $macro_revision. $progname: You should recreate aclocal.m4 with macros from revision $package_revision $progname: of $PACKAGE $VERSION and run autoconf again. _LT_EOF fi exit $EXIT_MISMATCH fi } # libtool_options_prep [ARG]... # ----------------------------- # Preparation for options parsed by libtool. 
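# Example (illustrative): because of the shorthand handling below,
#
#   libtool compile cc -c foo.c
#
# is rewritten to, and behaves exactly like,
#
#   libtool --mode compile cc -c foo.c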
libtool_options_prep () { $debug_mode # Option defaults: opt_config=false opt_dlopen= opt_dry_run=false opt_help=false opt_mode= opt_preserve_dup_deps=false opt_quiet=false nonopt= preserve_args= _G_rc_lt_options_prep=: # Shorthand for --mode=foo, only valid as the first argument case $1 in clean|clea|cle|cl) shift; set dummy --mode clean ${1+"$@"}; shift ;; compile|compil|compi|comp|com|co|c) shift; set dummy --mode compile ${1+"$@"}; shift ;; execute|execut|execu|exec|exe|ex|e) shift; set dummy --mode execute ${1+"$@"}; shift ;; finish|finis|fini|fin|fi|f) shift; set dummy --mode finish ${1+"$@"}; shift ;; install|instal|insta|inst|ins|in|i) shift; set dummy --mode install ${1+"$@"}; shift ;; link|lin|li|l) shift; set dummy --mode link ${1+"$@"}; shift ;; uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) shift; set dummy --mode uninstall ${1+"$@"}; shift ;; *) _G_rc_lt_options_prep=false ;; esac if $_G_rc_lt_options_prep; then # Pass back the list of options. func_quote eval ${1+"$@"} libtool_options_prep_result=$func_quote_result fi } func_add_hook func_options_prep libtool_options_prep # libtool_parse_options [ARG]... # --------------------------------- # Provide handling for libtool specific options. libtool_parse_options () { $debug_cmd _G_rc_lt_parse_options=false # Perform our own loop to consume as many options as possible in # each iteration. while test $# -gt 0; do _G_match_lt_parse_options=: _G_opt=$1 shift case $_G_opt in --dry-run|--dryrun|-n) opt_dry_run=: ;; --config) func_config ;; --dlopen|-dlopen) opt_dlopen="${opt_dlopen+$opt_dlopen }$1" shift ;; --preserve-dup-deps) opt_preserve_dup_deps=: ;; --features) func_features ;; --finish) set dummy --mode finish ${1+"$@"}; shift ;; --help) opt_help=: ;; --help-all) opt_help=': help-all' ;; --mode) test $# = 0 && func_missing_arg $_G_opt && break opt_mode=$1 case $1 in # Valid mode arguments: clean|compile|execute|finish|install|link|relink|uninstall) ;; # Catch anything else as an error *) func_error "invalid argument for $_G_opt" exit_cmd=exit break ;; esac shift ;; --no-silent|--no-quiet) opt_quiet=false func_append preserve_args " $_G_opt" ;; --no-warnings|--no-warning|--no-warn) opt_warning=false func_append preserve_args " $_G_opt" ;; --no-verbose) opt_verbose=false func_append preserve_args " $_G_opt" ;; --silent|--quiet) opt_quiet=: opt_verbose=false func_append preserve_args " $_G_opt" ;; --tag) test $# = 0 && func_missing_arg $_G_opt && break opt_tag=$1 func_append preserve_args " $_G_opt $1" func_enable_tag "$1" shift ;; --verbose|-v) opt_quiet=false opt_verbose=: func_append preserve_args " $_G_opt" ;; # An option not handled by this hook function: *) set dummy "$_G_opt" ${1+"$@"} ; shift _G_match_lt_parse_options=false break ;; esac $_G_match_lt_parse_options && _G_rc_lt_parse_options=: done if $_G_rc_lt_parse_options; then # save modified positional parameters for caller func_quote eval ${1+"$@"} libtool_parse_options_result=$func_quote_result fi } func_add_hook func_parse_options libtool_parse_options # libtool_validate_options [ARG]... # --------------------------------- # Perform any sanity checks on option settings and/or unconsumed # arguments. 
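# Example (illustrative): once a mode is known, later usage errors point at
# the mode-specific help text, e.g. with '--mode=compile' the hint becomes
#
#   Try './libtool --help --mode=compile' for more information.
#
# ('./libtool' stands in for whatever $progname happens to be.)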
libtool_validate_options () { # save first non-option argument if test 0 -lt $#; then nonopt=$1 shift fi # preserve --debug test : = "$debug_cmd" || func_append preserve_args " --debug" case $host in # Solaris2 added to fix http://debbugs.gnu.org/cgi/bugreport.cgi?bug=16452 # see also: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=59788 *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* | *os2*) # don't eliminate duplications in $postdeps and $predeps opt_duplicate_compiler_generated_deps=: ;; *) opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps ;; esac $opt_help || { # Sanity checks first: func_check_version_match test yes != "$build_libtool_libs" \ && test yes != "$build_old_libs" \ && func_fatal_configuration "not configured to build any kind of library" # Darwin sucks eval std_shrext=\"$shrext_cmds\" # Only execute mode is allowed to have -dlopen flags. if test -n "$opt_dlopen" && test execute != "$opt_mode"; then func_error "unrecognized option '-dlopen'" $ECHO "$help" 1>&2 exit $EXIT_FAILURE fi # Change the help message to a mode-specific one. generic_help=$help help="Try '$progname --help --mode=$opt_mode' for more information." } # Pass back the unparsed argument list func_quote eval ${1+"$@"} libtool_validate_options_result=$func_quote_result } func_add_hook func_validate_options libtool_validate_options # Process options as early as possible so that --help and --version # can return quickly. func_options ${1+"$@"} eval set dummy "$func_options_result"; shift ## ----------- ## ## Main. ## ## ----------- ## magic='%%%MAGIC variable%%%' magic_exe='%%%MAGIC EXE variable%%%' # Global variables. extracted_archives= extracted_serial=0 # If this variable is set in any of the actions, the command in it # will be execed at the end. This prevents here-documents from being # left over by shells. exec_cmd= # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } # func_generated_by_libtool # True iff stdin has been generated by Libtool. This function is only # a basic sanity check; it will hardly flush out determined imposters. func_generated_by_libtool_p () { $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 } # func_lalib_p file # True iff FILE is a libtool '.la' library or '.lo' object file. # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_lalib_p () { test -f "$1" && $SED -e 4q "$1" 2>/dev/null | func_generated_by_libtool_p } # func_lalib_unsafe_p file # True iff FILE is a libtool '.la' library or '.lo' object file. # This function implements the same check as func_lalib_p without # resorting to external programs. To this end, it redirects stdin and # closes it afterwards, without saving the original file descriptor. # As a safety measure, use it only where a negative result would be # fatal anyway. Works if 'file' does not exist. func_lalib_unsafe_p () { lalib_p=no if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then for lalib_p_l in 1 2 3 4 do read lalib_p_line case $lalib_p_line in \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; esac done exec 0<&5 5<&- fi test yes = "$lalib_p" } # func_ltwrapper_script_p file # True iff FILE is a libtool wrapper script # This function is only a basic sanity check; it will hardly flush out # determined imposters. 
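# Example (illustrative; './hello' is a hypothetical uninstalled program
# wrapper produced by an earlier 'libtool --mode=link' run):
#
#   func_ltwrapper_script_p ./hello && echo "./hello is a wrapper script"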
func_ltwrapper_script_p () { test -f "$1" && $lt_truncate_bin < "$1" 2>/dev/null | func_generated_by_libtool_p } # func_ltwrapper_executable_p file # True iff FILE is a libtool wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_executable_p () { func_ltwrapper_exec_suffix= case $1 in *.exe) ;; *) func_ltwrapper_exec_suffix=.exe ;; esac $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 } # func_ltwrapper_scriptname file # Assumes file is an ltwrapper_executable # uses $file to determine the appropriate filename for a # temporary ltwrapper_script. func_ltwrapper_scriptname () { func_dirname_and_basename "$1" "" "." func_stripname '' '.exe' "$func_basename_result" func_ltwrapper_scriptname_result=$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper } # func_ltwrapper_p file # True iff FILE is a libtool wrapper script or wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_p () { func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" } # func_execute_cmds commands fail_cmd # Execute tilde-delimited COMMANDS. # If FAIL_CMD is given, eval that upon failure. # FAIL_CMD may read-access the current command in variable CMD! func_execute_cmds () { $debug_cmd save_ifs=$IFS; IFS='~' for cmd in $1; do IFS=$sp$nl eval cmd=\"$cmd\" IFS=$save_ifs func_show_eval "$cmd" "${2-:}" done IFS=$save_ifs } # func_source file # Source FILE, adding directory component if necessary. # Note that it is not necessary on cygwin/mingw to append a dot to # FILE even if both FILE and FILE.exe exist: automatic-append-.exe # behavior happens only for exec(3), not for open(2)! Also, sourcing # 'FILE.' does not work on cygwin managed mounts. func_source () { $debug_cmd case $1 in */* | *\\*) . "$1" ;; *) . "./$1" ;; esac } # func_resolve_sysroot PATH # Replace a leading = in PATH with a sysroot. Store the result into # func_resolve_sysroot_result func_resolve_sysroot () { func_resolve_sysroot_result=$1 case $func_resolve_sysroot_result in =*) func_stripname '=' '' "$func_resolve_sysroot_result" func_resolve_sysroot_result=$lt_sysroot$func_stripname_result ;; esac } # func_replace_sysroot PATH # If PATH begins with the sysroot, replace it with = and # store the result into func_replace_sysroot_result. func_replace_sysroot () { case $lt_sysroot:$1 in ?*:"$lt_sysroot"*) func_stripname "$lt_sysroot" '' "$1" func_replace_sysroot_result='='$func_stripname_result ;; *) # Including no sysroot. func_replace_sysroot_result=$1 ;; esac } # func_infer_tag arg # Infer tagged configuration to use if any are available and # if one wasn't chosen via the "--tag" command line option. # Only attempt this if the compiler in the base compile # command doesn't match the default compiler. # arg is usually of the form 'gcc ...' func_infer_tag () { $debug_cmd if test -n "$available_tags" && test -z "$tagname"; then CC_quoted= for arg in $CC; do func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case $@ in # Blanks in the command may have been stripped by the calling shell, # but not from the CC environment variable when configure was run. " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;; # Blanks at the start of $base_compile will cause this to fail # if we don't check for them as well. 
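      # For reference (an illustrative walk-through, not extra logic): if
      # this script was configured with a CXX tag and the user runs
      # 'libtool --mode=compile g++ -c foo.cpp', the default $CC patterns
      # above do not match, so the loop below scans the tagged
      # configurations and selects the one whose CC matches 'g++ ...'.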
*) for z in $available_tags; do if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then # Evaluate the configuration. eval "`$SED -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`" CC_quoted= for arg in $CC; do # Double-quote args containing other shell metacharacters. func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case "$@ " in " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) # The compiler in the base compile command matches # the one in the tagged configuration. # Assume this is the tagged configuration we want. tagname=$z break ;; esac fi done # If $tagname still isn't set, then no tagged configuration # was found and let the user know that the "--tag" command # line option must be used. if test -z "$tagname"; then func_echo "unable to infer tagged configuration" func_fatal_error "specify a tag with '--tag'" # else # func_verbose "using $tagname tagged configuration" fi ;; esac fi } # func_write_libtool_object output_name pic_name nonpic_name # Create a libtool object file (analogous to a ".la" file), # but don't create it if we're doing a dry run. func_write_libtool_object () { write_libobj=$1 if test yes = "$build_libtool_libs"; then write_lobj=\'$2\' else write_lobj=none fi if test yes = "$build_old_libs"; then write_oldobj=\'$3\' else write_oldobj=none fi $opt_dry_run || { cat >${write_libobj}T </dev/null` if test "$?" -eq 0 && test -n "$func_convert_core_file_wine_to_w32_tmp"; then func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" | $SED -e "$sed_naive_backslashify"` else func_convert_core_file_wine_to_w32_result= fi fi } # end: func_convert_core_file_wine_to_w32 # func_convert_core_path_wine_to_w32 ARG # Helper function used by path conversion functions when $build is *nix, and # $host is mingw, cygwin, or some other w32 environment. Relies on a correctly # configured wine environment available, with the winepath program in $build's # $PATH. Assumes ARG has no leading or trailing path separator characters. # # ARG is path to be converted from $build format to win32. # Result is available in $func_convert_core_path_wine_to_w32_result. # Unconvertible file (directory) names in ARG are skipped; if no directory names # are convertible, then the result may be empty. func_convert_core_path_wine_to_w32 () { $debug_cmd # unfortunately, winepath doesn't convert paths, only file names func_convert_core_path_wine_to_w32_result= if test -n "$1"; then oldIFS=$IFS IFS=: for func_convert_core_path_wine_to_w32_f in $1; do IFS=$oldIFS func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f" if test -n "$func_convert_core_file_wine_to_w32_result"; then if test -z "$func_convert_core_path_wine_to_w32_result"; then func_convert_core_path_wine_to_w32_result=$func_convert_core_file_wine_to_w32_result else func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result" fi fi done IFS=$oldIFS fi } # end: func_convert_core_path_wine_to_w32 # func_cygpath ARGS... # Wrapper around calling the cygpath program via LT_CYGPATH. This is used when # when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2) # $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. 
In case (1) or # (2), returns the Cygwin file name or path in func_cygpath_result (input # file name or path is assumed to be in w32 format, as previously converted # from $build's *nix or MSYS format). In case (3), returns the w32 file name # or path in func_cygpath_result (input file name or path is assumed to be in # Cygwin format). Returns an empty string on error. # # ARGS are passed to cygpath, with the last one being the file name or path to # be converted. # # Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH # environment variable; do not put it in $PATH. func_cygpath () { $debug_cmd if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null` if test "$?" -ne 0; then # on failure, ensure result is empty func_cygpath_result= fi else func_cygpath_result= func_error "LT_CYGPATH is empty or specifies non-existent file: '$LT_CYGPATH'" fi } #end: func_cygpath # func_convert_core_msys_to_w32 ARG # Convert file name or path ARG from MSYS format to w32 format. Return # result in func_convert_core_msys_to_w32_result. func_convert_core_msys_to_w32 () { $debug_cmd # awkward: cmd appends spaces to result func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null | $SED -e 's/[ ]*$//' -e "$sed_naive_backslashify"` } #end: func_convert_core_msys_to_w32 # func_convert_file_check ARG1 ARG2 # Verify that ARG1 (a file name in $build format) was converted to $host # format in ARG2. Otherwise, emit an error message, but continue (resetting # func_to_host_file_result to ARG1). func_convert_file_check () { $debug_cmd if test -z "$2" && test -n "$1"; then func_error "Could not determine host file name corresponding to" func_error " '$1'" func_error "Continuing, but uninstalled executables may not work." # Fallback: func_to_host_file_result=$1 fi } # end func_convert_file_check # func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH # Verify that FROM_PATH (a path in $build format) was converted to $host # format in TO_PATH. Otherwise, emit an error message, but continue, resetting # func_to_host_file_result to a simplistic fallback value (see below). func_convert_path_check () { $debug_cmd if test -z "$4" && test -n "$3"; then func_error "Could not determine the host path corresponding to" func_error " '$3'" func_error "Continuing, but uninstalled executables may not work." # Fallback. This is a deliberately simplistic "conversion" and # should not be "improved". See libtool.info. if test "x$1" != "x$2"; then lt_replace_pathsep_chars="s|$1|$2|g" func_to_host_path_result=`echo "$3" | $SED -e "$lt_replace_pathsep_chars"` else func_to_host_path_result=$3 fi fi } # end func_convert_path_check # func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG # Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT # and appending REPL if ORIG matches BACKPAT. func_convert_path_front_back_pathsep () { $debug_cmd case $4 in $1 ) func_to_host_path_result=$3$func_to_host_path_result ;; esac case $4 in $2 ) func_append func_to_host_path_result "$3" ;; esac } # end func_convert_path_front_back_pathsep ################################################## # $build to $host FILE NAME CONVERSION FUNCTIONS # ################################################## # invoked via '$to_host_file_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # Result will be available in $func_to_host_file_result. 
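# Example (illustrative; assumes an MSYS $build with a mingw $host, so that
# $to_host_file_cmd is func_convert_file_msys_to_w32):
#
#   func_to_host_file /c/Users/me/foo.o
#   # -> func_to_host_file_result is roughly 'C:\Users\me\foo.o'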
# func_to_host_file ARG # Converts the file name ARG from $build format to $host format. Return result # in func_to_host_file_result. func_to_host_file () { $debug_cmd $to_host_file_cmd "$1" } # end func_to_host_file # func_to_tool_file ARG LAZY # converts the file name ARG from $build format to toolchain format. Return # result in func_to_tool_file_result. If the conversion in use is listed # in (the comma separated) LAZY, no conversion takes place. func_to_tool_file () { $debug_cmd case ,$2, in *,"$to_tool_file_cmd",*) func_to_tool_file_result=$1 ;; *) $to_tool_file_cmd "$1" func_to_tool_file_result=$func_to_host_file_result ;; esac } # end func_to_tool_file # func_convert_file_noop ARG # Copy ARG to func_to_host_file_result. func_convert_file_noop () { func_to_host_file_result=$1 } # end func_convert_file_noop # func_convert_file_msys_to_w32 ARG # Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_file_result. func_convert_file_msys_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_to_host_file_result=$func_convert_core_msys_to_w32_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_w32 # func_convert_file_cygwin_to_w32 ARG # Convert file name ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_file_cygwin_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then # because $build is cygwin, we call "the" cygpath in $PATH; no need to use # LT_CYGPATH in this case. func_to_host_file_result=`cygpath -m "$1"` fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_cygwin_to_w32 # func_convert_file_nix_to_w32 ARG # Convert file name ARG from *nix to w32 format. Requires a wine environment # and a working winepath. Returns result in func_to_host_file_result. func_convert_file_nix_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_file_wine_to_w32 "$1" func_to_host_file_result=$func_convert_core_file_wine_to_w32_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_w32 # func_convert_file_msys_to_cygwin ARG # Convert file name ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. func_convert_file_msys_to_cygwin () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_cygpath -u "$func_convert_core_msys_to_w32_result" func_to_host_file_result=$func_cygpath_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_cygwin # func_convert_file_nix_to_cygwin ARG # Convert file name ARG from *nix to Cygwin format. Requires Cygwin installed # in a wine environment, working winepath, and LT_CYGPATH set. Returns result # in func_to_host_file_result. func_convert_file_nix_to_cygwin () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then # convert from *nix to w32, then use cygpath to convert from w32 to cygwin. 
func_convert_core_file_wine_to_w32 "$1" func_cygpath -u "$func_convert_core_file_wine_to_w32_result" func_to_host_file_result=$func_cygpath_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_cygwin ############################################# # $build to $host PATH CONVERSION FUNCTIONS # ############################################# # invoked via '$to_host_path_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # The result will be available in $func_to_host_path_result. # # Path separators are also converted from $build format to $host format. If # ARG begins or ends with a path separator character, it is preserved (but # converted to $host format) on output. # # All path conversion functions are named using the following convention: # file name conversion function : func_convert_file_X_to_Y () # path conversion function : func_convert_path_X_to_Y () # where, for any given $build/$host combination the 'X_to_Y' value is the # same. If conversion functions are added for new $build/$host combinations, # the two new functions must follow this pattern, or func_init_to_host_path_cmd # will break. # func_init_to_host_path_cmd # Ensures that function "pointer" variable $to_host_path_cmd is set to the # appropriate value, based on the value of $to_host_file_cmd. to_host_path_cmd= func_init_to_host_path_cmd () { $debug_cmd if test -z "$to_host_path_cmd"; then func_stripname 'func_convert_file_' '' "$to_host_file_cmd" to_host_path_cmd=func_convert_path_$func_stripname_result fi } # func_to_host_path ARG # Converts the path ARG from $build format to $host format. Return result # in func_to_host_path_result. func_to_host_path () { $debug_cmd func_init_to_host_path_cmd $to_host_path_cmd "$1" } # end func_to_host_path # func_convert_path_noop ARG # Copy ARG to func_to_host_path_result. func_convert_path_noop () { func_to_host_path_result=$1 } # end func_convert_path_noop # func_convert_path_msys_to_w32 ARG # Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_path_result. func_convert_path_msys_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # Remove leading and trailing path separator characters from ARG. MSYS # behavior is inconsistent here; cygpath turns them into '.;' and ';.'; # and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result=$func_convert_core_msys_to_w32_result func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_msys_to_w32 # func_convert_path_cygwin_to_w32 ARG # Convert path ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_path_cygwin_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"` func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_cygwin_to_w32 # func_convert_path_nix_to_w32 ARG # Convert path ARG from *nix to w32 format. 
Requires a wine environment and # a working winepath. Returns result in func_to_host_file_result. func_convert_path_nix_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result=$func_convert_core_path_wine_to_w32_result func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_nix_to_w32 # func_convert_path_msys_to_cygwin ARG # Convert path ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. func_convert_path_msys_to_cygwin () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_msys_to_w32_result" func_to_host_path_result=$func_cygpath_result func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_msys_to_cygwin # func_convert_path_nix_to_cygwin ARG # Convert path ARG from *nix to Cygwin format. Requires Cygwin installed in a # a wine environment, working winepath, and LT_CYGPATH set. Returns result in # func_to_host_file_result. func_convert_path_nix_to_cygwin () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # Remove leading and trailing path separator characters from # ARG. msys behavior is inconsistent here, cygpath turns them # into '.;' and ';.', and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result" func_to_host_path_result=$func_cygpath_result func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_nix_to_cygwin # func_dll_def_p FILE # True iff FILE is a Windows DLL '.def' file. # Keep in sync with _LT_DLL_DEF_P in libtool.m4 func_dll_def_p () { $debug_cmd func_dll_def_p_tmp=`$SED -n \ -e 's/^[ ]*//' \ -e '/^\(;.*\)*$/d' \ -e 's/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p' \ -e q \ "$1"` test DEF = "$func_dll_def_p_tmp" } # func_mode_compile arg... func_mode_compile () { $debug_cmd # Get the compilation command and the source file. base_compile= srcfile=$nonopt # always keep a non-empty value in "srcfile" suppress_opt=yes suppress_output= arg_mode=normal libobj= later= pie_flag= for arg do case $arg_mode in arg ) # do not "continue". Instead, add this to base_compile lastarg=$arg arg_mode=normal ;; target ) libobj=$arg arg_mode=normal continue ;; normal ) # Accept any command-line options. case $arg in -o) test -n "$libobj" && \ func_fatal_error "you cannot specify '-o' more than once" arg_mode=target continue ;; -pie | -fpie | -fPIE) func_append pie_flag " $arg" continue ;; -shared | -static | -prefer-pic | -prefer-non-pic) func_append later " $arg" continue ;; -no-suppress) suppress_opt=no continue ;; -Xcompiler) arg_mode=arg # the next one goes into the "base_compile" arg list continue # The current "srcfile" will either be retained or ;; # replaced later. I would guess that would be a bug. 
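      # For reference (illustrative): '-Wc,-O2,-pipe' handled below has the
      # same effect as '-Xcompiler -O2 -Xcompiler -pipe' above: both '-O2'
      # and '-pipe' end up appended to $base_compile.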
-Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result lastarg= save_ifs=$IFS; IFS=, for arg in $args; do IFS=$save_ifs func_append_quoted lastarg "$arg" done IFS=$save_ifs func_stripname ' ' '' "$lastarg" lastarg=$func_stripname_result # Add the arguments to base_compile. func_append base_compile " $lastarg" continue ;; *) # Accept the current argument as the source file. # The previous "srcfile" becomes the current argument. # lastarg=$srcfile srcfile=$arg ;; esac # case $arg ;; esac # case $arg_mode # Aesthetically quote the previous argument. func_append_quoted base_compile "$lastarg" done # for arg case $arg_mode in arg) func_fatal_error "you must specify an argument for -Xcompile" ;; target) func_fatal_error "you must specify a target with '-o'" ;; *) # Get the name of the library object. test -z "$libobj" && { func_basename "$srcfile" libobj=$func_basename_result } ;; esac # Recognize several different file suffixes. # If the user specifies -o file.o, it is replaced with file.lo case $libobj in *.[cCFSifmso] | \ *.ada | *.adb | *.ads | *.asm | \ *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \ *.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup) func_xform "$libobj" libobj=$func_xform_result ;; esac case $libobj in *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;; *) func_fatal_error "cannot determine name of library object from '$libobj'" ;; esac func_infer_tag $base_compile for arg in $later; do case $arg in -shared) test yes = "$build_libtool_libs" \ || func_fatal_configuration "cannot build a shared library" build_old_libs=no continue ;; -static) build_libtool_libs=no build_old_libs=yes continue ;; -prefer-pic) pic_mode=yes continue ;; -prefer-non-pic) pic_mode=no continue ;; esac done func_quote_arg pretty "$libobj" test "X$libobj" != "X$func_quote_arg_result" \ && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \ && func_warning "libobj name '$libobj' may not contain shell special characters." func_dirname_and_basename "$obj" "/" "" objname=$func_basename_result xdir=$func_dirname_result lobj=$xdir$objdir/$objname test -z "$base_compile" && \ func_fatal_help "you must specify a compilation command" # Delete any leftover library objects. if test yes = "$build_old_libs"; then removelist="$obj $lobj $libobj ${libobj}T" else removelist="$lobj $libobj ${libobj}T" fi # On Cygwin there's no "real" PIC flag so we must build both object types case $host_os in cygwin* | mingw* | pw32* | os2* | cegcc*) pic_mode=default ;; esac if test no = "$pic_mode" && test pass_all != "$deplibs_check_method"; then # non-PIC code in shared libraries is not supported pic_mode=default fi # Calculate the filename of the output object if compiler does # not support -o with -c if test no = "$compiler_c_o"; then output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.$objext lockfile=$output_obj.lock else output_obj= need_locks=no lockfile= fi # Lock this critical section if it is needed # We use this script file to make the link, it avoids creating a new file if test yes = "$need_locks"; then until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done elif test warn = "$need_locks"; then if test -f "$lockfile"; then $ECHO "\ *** ERROR, $lockfile exists and contains: `cat $lockfile 2>/dev/null` This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. 
If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi func_append removelist " $output_obj" $ECHO "$srcfile" > "$lockfile" fi $opt_dry_run || $RM $removelist func_append removelist " $lockfile" trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15 func_to_tool_file "$srcfile" func_convert_file_msys_to_w32 srcfile=$func_to_tool_file_result func_quote_arg pretty "$srcfile" qsrcfile=$func_quote_arg_result # Only build a PIC object if we are building libtool libraries. if test yes = "$build_libtool_libs"; then # Without this assignment, base_compile gets emptied. fbsd_hideous_sh_bug=$base_compile if test no != "$pic_mode"; then command="$base_compile $qsrcfile $pic_flag" else # Don't build PIC code command="$base_compile $qsrcfile" fi func_mkdir_p "$xdir$objdir" if test -z "$output_obj"; then # Place PIC objects in $objdir func_append command " -o $lobj" fi func_show_eval_locale "$command" \ 'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE' if test warn = "$need_locks" && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed, then go on to compile the next one if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then func_show_eval '$MV "$output_obj" "$lobj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi # Allow error messages only from the first compilation. if test yes = "$suppress_opt"; then suppress_output=' >/dev/null 2>&1' fi fi # Only build a position-dependent object if we build old libraries. if test yes = "$build_old_libs"; then if test yes != "$pic_mode"; then # Don't build PIC code command="$base_compile $qsrcfile$pie_flag" else command="$base_compile $qsrcfile $pic_flag" fi if test yes = "$compiler_c_o"; then func_append command " -o $obj" fi # Suppress compiler output if we already did a PIC compilation. func_append command "$suppress_output" func_show_eval_locale "$command" \ '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' if test warn = "$need_locks" && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." 
$opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then func_show_eval '$MV "$output_obj" "$obj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi fi $opt_dry_run || { func_write_libtool_object "$libobj" "$objdir/$objname" "$objname" # Unlock the critical section if it was locked if test no != "$need_locks"; then removelist=$lockfile $RM "$lockfile" fi } exit $EXIT_SUCCESS } $opt_help || { test compile = "$opt_mode" && func_mode_compile ${1+"$@"} } func_mode_help () { # We need to display help for each of the modes. case $opt_mode in "") # Generic help is extracted from the usage comments # at the start of this file. func_help ;; clean) $ECHO \ "Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE... Remove files from the build directory. RM is the name of the program to use to delete files associated with each FILE (typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed to RM. If FILE is a libtool library, object or program, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." ;; compile) $ECHO \ "Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE Compile a source file into a libtool library object. This mode accepts the following additional options: -o OUTPUT-FILE set the output file name to OUTPUT-FILE -no-suppress do not suppress compiler output for multiple passes -prefer-pic try to build PIC objects only -prefer-non-pic try to build non-PIC objects only -shared do not build a '.o' file suitable for static linking -static only build a '.o' file suitable for static linking -Wc,FLAG -Xcompiler FLAG pass FLAG directly to the compiler COMPILE-COMMAND is a command to be used in creating a 'standard' object file from the given SOURCEFILE. The output file name is determined by removing the directory component from SOURCEFILE, then substituting the C source code suffix '.c' with the library object suffix, '.lo'." ;; execute) $ECHO \ "Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]... Automatically set library path, then run a program. This mode accepts the following additional options: -dlopen FILE add the directory containing FILE to the library path This mode sets the library path environment variable according to '-dlopen' flags. If any of the ARGS are libtool executable wrappers, then they are translated into their corresponding uninstalled binary, and any of their required library directories are added to the library path. Then, COMMAND is executed, with ARGS as arguments." ;; finish) $ECHO \ "Usage: $progname [OPTION]... --mode=finish [LIBDIR]... Complete the installation of libtool libraries. Each LIBDIR is a directory that contains libtool libraries. The commands that this mode executes may require superuser privileges. Use the '--dry-run' option if you just want to see what would be executed." ;; install) $ECHO \ "Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND... Install executables or libraries. INSTALL-COMMAND is the installation command. The first component should be either the 'install' or 'cp' program. The following components of INSTALL-COMMAND are treated specially: -inst-prefix-dir PREFIX-DIR Use PREFIX-DIR as a staging area for installation The rest of the components are interpreted as arguments to that command (only BSD-compatible install options are recognized)." ;; link) $ECHO \ "Usage: $progname [OPTION]... --mode=link LINK-COMMAND... 
Link object files or libraries together to form another library, or to create an executable program. LINK-COMMAND is a command using the C compiler that you would use to create a program from several object files. The following components of LINK-COMMAND are treated specially: -all-static do not do any dynamic linking at all -avoid-version do not add a version suffix if possible -bindir BINDIR specify path to binaries directory (for systems where libraries must be found in the PATH setting at runtime) -dlopen FILE '-dlpreopen' FILE if it cannot be dlopened at runtime -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3) -export-symbols SYMFILE try to export only the symbols listed in SYMFILE -export-symbols-regex REGEX try to export only the symbols matching REGEX -LLIBDIR search LIBDIR for required installed libraries -lNAME OUTPUT-FILE requires the installed library libNAME -module build a library that can dlopened -no-fast-install disable the fast-install mode -no-install link a not-installable executable -no-undefined declare that a library does not refer to external symbols -o OUTPUT-FILE create OUTPUT-FILE from the specified objects -objectlist FILE use a list of object files found in FILE to specify objects -os2dllname NAME force a short DLL name on OS/2 (no effect on other OSes) -precious-files-regex REGEX don't remove output files matching REGEX -release RELEASE specify package release information -rpath LIBDIR the created library will eventually be installed in LIBDIR -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries -shared only do dynamic linking of libtool libraries -shrext SUFFIX override the standard shared library file extension -static do not do any dynamic linking of uninstalled libtool libraries -static-libtool-libs do not do any dynamic linking of libtool libraries -version-info CURRENT[:REVISION[:AGE]] specify library version info [each variable defaults to 0] -weak LIBNAME declare that the target provides the LIBNAME interface -Wc,FLAG -Xcompiler FLAG pass linker-specific FLAG directly to the compiler -Wa,FLAG -Xassembler FLAG pass linker-specific FLAG directly to the assembler -Wl,FLAG -Xlinker FLAG pass linker-specific FLAG directly to the linker -XCClinker FLAG pass link-specific FLAG to the compiler driver (CC) All other options (arguments beginning with '-') are ignored. Every other argument is treated as a filename. Files ending in '.la' are treated as uninstalled libtool libraries, other files are standard or library object files. If the OUTPUT-FILE ends in '.la', then a libtool library is created, only library objects ('.lo' files) may be specified, and '-rpath' is required, except when creating a convenience library. If OUTPUT-FILE ends in '.a' or '.lib', then a standard library is created using 'ar' and 'ranlib', or on Windows using 'lib'. If OUTPUT-FILE ends in '.lo' or '.$objext', then a reloadable object file is created, otherwise an executable program is created." ;; uninstall) $ECHO \ "Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE... Remove libraries from an installation directory. RM is the name of the program to use to delete files associated with each FILE (typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed to RM. If FILE is a libtool library, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." 
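# NOTE (editorial addition, not part of upstream libtool): the mode help text
# above is easier to follow with concrete invocations.  These are illustrative
# only, assuming a hypothetical C source file foo.c and prefix /usr/local:
#   libtool --mode=compile gcc -g -O -c foo.c                      # -> foo.lo
#   libtool --mode=link gcc -g -O -o libfoo.la foo.lo -rpath /usr/local/lib
#   libtool --mode=install install -c libfoo.la /usr/local/lib/libfoo.la
#   libtool --mode=finish /usr/local/lib
#   libtool --mode=execute gdb ./main
#   libtool --mode=uninstall rm -f /usr/local/lib/libfoo.la
# Exact compiler flags and paths vary by project; see the GNU libtool manual.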
;; *) func_fatal_help "invalid operation mode '$opt_mode'" ;; esac echo $ECHO "Try '$progname --help' for more information about other modes." } # Now that we've collected a possible --mode arg, show help if necessary if $opt_help; then if test : = "$opt_help"; then func_mode_help else { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do func_mode_help done } | $SED -n '1p; 2,$s/^Usage:/ or: /p' { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do echo func_mode_help done } | $SED '1d /^When reporting/,/^Report/{ H d } $x /information about other modes/d /more detailed .*MODE/d s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/' fi exit $? fi # func_mode_execute arg... func_mode_execute () { $debug_cmd # The first argument is the command name. cmd=$nonopt test -z "$cmd" && \ func_fatal_help "you must specify a COMMAND" # Handle -dlopen flags immediately. for file in $opt_dlopen; do test -f "$file" \ || func_fatal_help "'$file' is not a file" dir= case $file in *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "'$lib' is not a valid libtool archive" # Read the libtool library. dlname= library_names= func_source "$file" # Skip this library if it cannot be dlopened. if test -z "$dlname"; then # Warn if it was a shared library. test -n "$library_names" && \ func_warning "'$file' was not linked with '-export-dynamic'" continue fi func_dirname "$file" "" "." dir=$func_dirname_result if test -f "$dir/$objdir/$dlname"; then func_append dir "/$objdir" else if test ! -f "$dir/$dlname"; then func_fatal_error "cannot find '$dlname' in '$dir' or '$dir/$objdir'" fi fi ;; *.lo) # Just add the directory containing the .lo file. func_dirname "$file" "" "." dir=$func_dirname_result ;; *) func_warning "'-dlopen' is ignored for non-libtool libraries and objects" continue ;; esac # Get the absolute pathname. absdir=`cd "$dir" && pwd` test -n "$absdir" && dir=$absdir # Now add the directory to shlibpath_var. if eval "test -z \"\$$shlibpath_var\""; then eval "$shlibpath_var=\"\$dir\"" else eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" fi done # This variable tells wrapper scripts just to set shlibpath_var # rather than running their programs. libtool_execute_magic=$magic # Check if any of the arguments is a wrapper script. args= for file do case $file in -* | *.la | *.lo ) ;; *) # Do a test to see if this is really a libtool program. if func_ltwrapper_script_p "$file"; then func_source "$file" # Transform arg to wrapped name. file=$progdir/$program elif func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" func_source "$func_ltwrapper_scriptname_result" # Transform arg to wrapped name. file=$progdir/$program fi ;; esac # Quote arguments (to preserve shell metacharacters). func_append_quoted args "$file" done if $opt_dry_run; then # Display what would be done. if test -n "$shlibpath_var"; then eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" echo "export $shlibpath_var" fi $ECHO "$cmd$args" exit $EXIT_SUCCESS else if test -n "$shlibpath_var"; then # Export the shlibpath_var. eval "export $shlibpath_var" fi # Restore saved environment variables for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${save_$lt_var+set}\" = set; then $lt_var=\$save_$lt_var; export $lt_var else $lt_unset $lt_var fi" done # Now prepare to actually exec the command. 
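# NOTE (editorial addition, not part of upstream libtool): at this point the
# net effect of execute mode is roughly equivalent to running
#   $shlibpath_var=<dirs of -dlopen'ed libs>:<previous value> COMMAND ARGS...
# (LD_LIBRARY_PATH on typical ELF systems), with any libtool wrapper scripts
# among the arguments already replaced by their real uninstalled binaries.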
exec_cmd=\$cmd$args fi } test execute = "$opt_mode" && func_mode_execute ${1+"$@"} # func_mode_finish arg... func_mode_finish () { $debug_cmd libs= libdirs= admincmds= for opt in "$nonopt" ${1+"$@"} do if test -d "$opt"; then func_append libdirs " $opt" elif test -f "$opt"; then if func_lalib_unsafe_p "$opt"; then func_append libs " $opt" else func_warning "'$opt' is not a valid libtool archive" fi else func_fatal_error "invalid argument '$opt'" fi done if test -n "$libs"; then if test -n "$lt_sysroot"; then sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"` sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;" else sysroot_cmd= fi # Remove sysroot references if $opt_dry_run; then for lib in $libs; do echo "removing references to $lt_sysroot and '=' prefixes from $lib" done else tmpdir=`func_mktempdir` for lib in $libs; do $SED -e "$sysroot_cmd s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \ > $tmpdir/tmp-la mv -f $tmpdir/tmp-la $lib done ${RM}r "$tmpdir" fi fi if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then for libdir in $libdirs; do if test -n "$finish_cmds"; then # Do each command in the finish commands. func_execute_cmds "$finish_cmds" 'admincmds="$admincmds '"$cmd"'"' fi if test -n "$finish_eval"; then # Do the single finish_eval. eval cmds=\"$finish_eval\" $opt_dry_run || eval "$cmds" || func_append admincmds " $cmds" fi done fi # Exit here if they wanted silent mode. $opt_quiet && exit $EXIT_SUCCESS if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then echo "----------------------------------------------------------------------" echo "Libraries have been installed in:" for libdir in $libdirs; do $ECHO " $libdir" done echo echo "If you ever happen to want to link against installed libraries" echo "in a given directory, LIBDIR, you must either use libtool, and" echo "specify the full pathname of the library, or use the '-LLIBDIR'" echo "flag during linking and do at least one of the following:" if test -n "$shlibpath_var"; then echo " - add LIBDIR to the '$shlibpath_var' environment variable" echo " during execution" fi if test -n "$runpath_var"; then echo " - add LIBDIR to the '$runpath_var' environment variable" echo " during linking" fi if test -n "$hardcode_libdir_flag_spec"; then libdir=LIBDIR eval flag=\"$hardcode_libdir_flag_spec\" $ECHO " - use the '$flag' linker flag" fi if test -n "$admincmds"; then $ECHO " - have your system administrator run these commands:$admincmds" fi if test -f /etc/ld.so.conf; then echo " - have your system administrator add LIBDIR to '/etc/ld.so.conf'" fi echo echo "See any operating system documentation about shared libraries for" case $host in solaris2.[6789]|solaris2.1[0-9]) echo "more information, such as the ld(1), crle(1) and ld.so(8) manual" echo "pages." ;; *) echo "more information, such as the ld(1) and ld.so(8) manual pages." ;; esac echo "----------------------------------------------------------------------" fi exit $EXIT_SUCCESS } test finish = "$opt_mode" && func_mode_finish ${1+"$@"} # func_mode_install arg... func_mode_install () { $debug_cmd # There may be an optional sh(1) argument at the beginning of # install_prog (especially on Windows NT). if test "$SHELL" = "$nonopt" || test /bin/sh = "$nonopt" || # Allow the use of GNU shtool's install command. case $nonopt in *shtool*) :;; *) false;; esac then # Aesthetically quote it. 
func_quote_arg pretty "$nonopt" install_prog="$func_quote_arg_result " arg=$1 shift else install_prog= arg=$nonopt fi # The real first argument should be the name of the installation program. # Aesthetically quote it. func_quote_arg pretty "$arg" func_append install_prog "$func_quote_arg_result" install_shared_prog=$install_prog case " $install_prog " in *[\\\ /]cp\ *) install_cp=: ;; *) install_cp=false ;; esac # We need to accept at least all the BSD install flags. dest= files= opts= prev= install_type= isdir=false stripme= no_mode=: for arg do arg2= if test -n "$dest"; then func_append files " $dest" dest=$arg continue fi case $arg in -d) isdir=: ;; -f) if $install_cp; then :; else prev=$arg fi ;; -g | -m | -o) prev=$arg ;; -s) stripme=" -s" continue ;; -*) ;; *) # If the previous option needed an argument, then skip it. if test -n "$prev"; then if test X-m = "X$prev" && test -n "$install_override_mode"; then arg2=$install_override_mode no_mode=false fi prev= else dest=$arg continue fi ;; esac # Aesthetically quote the argument. func_quote_arg pretty "$arg" func_append install_prog " $func_quote_arg_result" if test -n "$arg2"; then func_quote_arg pretty "$arg2" fi func_append install_shared_prog " $func_quote_arg_result" done test -z "$install_prog" && \ func_fatal_help "you must specify an install program" test -n "$prev" && \ func_fatal_help "the '$prev' option requires an argument" if test -n "$install_override_mode" && $no_mode; then if $install_cp; then :; else func_quote_arg pretty "$install_override_mode" func_append install_shared_prog " -m $func_quote_arg_result" fi fi if test -z "$files"; then if test -z "$dest"; then func_fatal_help "no file or destination specified" else func_fatal_help "you must specify a destination" fi fi # Strip any trailing slash from the destination. func_stripname '' '/' "$dest" dest=$func_stripname_result # Check to see that the destination is a directory. test -d "$dest" && isdir=: if $isdir; then destdir=$dest destname= else func_dirname_and_basename "$dest" "" "." destdir=$func_dirname_result destname=$func_basename_result # Not a directory, so check to see that there is only one file specified. set dummy $files; shift test "$#" -gt 1 && \ func_fatal_help "'$dest' is not a directory" fi case $destdir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) for file in $files; do case $file in *.lo) ;; *) func_fatal_help "'$destdir' must be an absolute directory name" ;; esac done ;; esac # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic=$magic staticlibs= future_libdirs= current_libdirs= for file in $files; do # Do each installation. case $file in *.$libext) # Do the static libraries later. func_append staticlibs " $file" ;; *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "'$file' is not a valid libtool archive" library_names= old_library= relink_command= func_source "$file" # Add the libdir to current_libdirs if it is the destination. if test "X$destdir" = "X$libdir"; then case "$current_libdirs " in *" $libdir "*) ;; *) func_append current_libdirs " $libdir" ;; esac else # Note the libdir as a future libdir. case "$future_libdirs " in *" $libdir "*) ;; *) func_append future_libdirs " $libdir" ;; esac fi func_dirname "$file" "/" "" dir=$func_dirname_result func_append dir "$objdir" if test -n "$relink_command"; then # Determine the prefix the user has applied to our future dir. 
inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"` # Don't allow the user to place us outside of our expected # location b/c this prevents finding dependent libraries that # are installed to the same prefix. # At present, this check doesn't affect windows .dll's that # are installed into $libdir/../bin (currently, that works fine) # but it's something to keep an eye on. test "$inst_prefix_dir" = "$destdir" && \ func_fatal_error "error: cannot install '$file' to a directory not ending in $libdir" if test -n "$inst_prefix_dir"; then # Stick the inst_prefix_dir data into the link command. relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` else relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"` fi func_warning "relinking '$file'" func_show_eval "$relink_command" \ 'func_fatal_error "error: relink '\''$file'\'' with the above command before installing it"' fi # See the names of the shared library. set dummy $library_names; shift if test -n "$1"; then realname=$1 shift srcname=$realname test -n "$relink_command" && srcname=${realname}T # Install the shared library and build the symlinks. func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \ 'exit $?' tstripme=$stripme case $host_os in cygwin* | mingw* | pw32* | cegcc*) case $realname in *.dll.a) tstripme= ;; esac ;; os2*) case $realname in *_dll.a) tstripme= ;; esac ;; esac if test -n "$tstripme" && test -n "$striplib"; then func_show_eval "$striplib $destdir/$realname" 'exit $?' fi if test "$#" -gt 0; then # Delete the old symlinks, and create new ones. # Try 'ln -sf' first, because the 'ln' binary might depend on # the symlink we replace! Solaris /bin/ln does not understand -f, # so we also need to try rm && ln -s. for linkname do test "$linkname" != "$realname" \ && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" done fi # Do each command in the postinstall commands. lib=$destdir/$realname func_execute_cmds "$postinstall_cmds" 'exit $?' fi # Install the pseudo-library for information purposes. func_basename "$file" name=$func_basename_result instname=$dir/${name}i func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' # Maybe install the static library, too. test -n "$old_library" && func_append staticlibs " $dir/$old_library" ;; *.lo) # Install (i.e. copy) a libtool object. # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile=$destdir/$destname else func_basename "$file" destfile=$func_basename_result destfile=$destdir/$destfile fi # Deduce the name of the destination old-style object file. case $destfile in *.lo) func_lo2o "$destfile" staticdest=$func_lo2o_result ;; *.$objext) staticdest=$destfile destfile= ;; *) func_fatal_help "cannot copy a libtool object to '$destfile'" ;; esac # Install the libtool object if requested. test -n "$destfile" && \ func_show_eval "$install_prog $file $destfile" 'exit $?' # Install the old object if enabled. if test yes = "$build_old_libs"; then # Deduce the name of the old-style object file. func_lo2o "$file" staticobj=$func_lo2o_result func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' fi exit $EXIT_SUCCESS ;; *) # Figure out destination file name, if it wasn't already specified. 
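# NOTE (editorial addition, not part of upstream libtool): for a typical
# GNU/Linux install of a hypothetical libfoo.la, the *.la branch above ends up
# copying the real shared object (e.g. .libs/libfoo.so.1.2.3), recreating the
# libfoo.so.1 and libfoo.so symlinks from $library_names, installing the .la
# pseudo-library, and queueing libfoo.a on $staticlibs when old libraries are
# built.  The file names here are illustrative; the actual names come from the
# sourced .la file.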
if test -n "$destname"; then destfile=$destdir/$destname else func_basename "$file" destfile=$func_basename_result destfile=$destdir/$destfile fi # If the file is missing, and there is a .exe on the end, strip it # because it is most likely a libtool script we actually want to # install stripped_ext= case $file in *.exe) if test ! -f "$file"; then func_stripname '' '.exe' "$file" file=$func_stripname_result stripped_ext=.exe fi ;; esac # Do a test to see if this is really a libtool program. case $host in *cygwin* | *mingw*) if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" wrapper=$func_ltwrapper_scriptname_result else func_stripname '' '.exe' "$file" wrapper=$func_stripname_result fi ;; *) wrapper=$file ;; esac if func_ltwrapper_script_p "$wrapper"; then notinst_deplibs= relink_command= func_source "$wrapper" # Check the variables that should have been set. test -z "$generated_by_libtool_version" && \ func_fatal_error "invalid libtool wrapper script '$wrapper'" finalize=: for lib in $notinst_deplibs; do # Check to see that each library is installed. libdir= if test -f "$lib"; then func_source "$lib" fi libfile=$libdir/`$ECHO "$lib" | $SED 's%^.*/%%g'` if test -n "$libdir" && test ! -f "$libfile"; then func_warning "'$lib' has not been installed in '$libdir'" finalize=false fi done relink_command= func_source "$wrapper" outputname= if test no = "$fast_install" && test -n "$relink_command"; then $opt_dry_run || { if $finalize; then tmpdir=`func_mktempdir` func_basename "$file$stripped_ext" file=$func_basename_result outputname=$tmpdir/$file # Replace the output file specification. relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'` $opt_quiet || { func_quote_arg expand,pretty "$relink_command" eval "func_echo $func_quote_arg_result" } if eval "$relink_command"; then : else func_error "error: relink '$file' with the above command before installing it" $opt_dry_run || ${RM}r "$tmpdir" continue fi file=$outputname else func_warning "cannot relink '$file'" fi } else # Install the binary that we compiled earlier. file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"` fi fi # remove .exe since cygwin /usr/bin/install will append another # one anyway case $install_prog,$host in */usr/bin/install*,*cygwin*) case $file:$destfile in *.exe:*.exe) # this is ok ;; *.exe:*) destfile=$destfile.exe ;; *:*.exe) func_stripname '' '.exe' "$destfile" destfile=$func_stripname_result ;; esac ;; esac func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' $opt_dry_run || if test -n "$outputname"; then ${RM}r "$tmpdir" fi ;; esac done for file in $staticlibs; do func_basename "$file" name=$func_basename_result # Set up the ranlib parameters. oldlib=$destdir/$name func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result func_show_eval "$install_prog \$file \$oldlib" 'exit $?' if test -n "$stripme" && test -n "$old_striplib"; then func_show_eval "$old_striplib $tool_oldlib" 'exit $?' fi # Do each command in the postinstall commands. func_execute_cmds "$old_postinstall_cmds" 'exit $?' done test -n "$future_libdirs" && \ func_warning "remember to run '$progname --finish$future_libdirs'" if test -n "$current_libdirs"; then # Maybe just do a dry run. 
$opt_dry_run && current_libdirs=" -n$current_libdirs" exec_cmd='$SHELL "$progpath" $preserve_args --finish$current_libdirs' else exit $EXIT_SUCCESS fi } test install = "$opt_mode" && func_mode_install ${1+"$@"} # func_generate_dlsyms outputname originator pic_p # Extract symbols from dlprefiles and create ${outputname}S.o with # a dlpreopen symbol table. func_generate_dlsyms () { $debug_cmd my_outputname=$1 my_originator=$2 my_pic_p=${3-false} my_prefix=`$ECHO "$my_originator" | $SED 's%[^a-zA-Z0-9]%_%g'` my_dlsyms= if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then if test -n "$NM" && test -n "$global_symbol_pipe"; then my_dlsyms=${my_outputname}S.c else func_error "not configured to extract global symbols from dlpreopened files" fi fi if test -n "$my_dlsyms"; then case $my_dlsyms in "") ;; *.c) # Discover the nlist of each of the dlfiles. nlist=$output_objdir/$my_outputname.nm func_show_eval "$RM $nlist ${nlist}S ${nlist}T" # Parse the name list into a source file. func_verbose "creating $output_objdir/$my_dlsyms" $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\ /* $my_dlsyms - symbol resolution table for '$my_outputname' dlsym emulation. */ /* Generated by $PROGRAM (GNU $PACKAGE) $VERSION */ #ifdef __cplusplus extern \"C\" { #endif #if defined __GNUC__ && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4)) #pragma GCC diagnostic ignored \"-Wstrict-prototypes\" #endif /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0) /* External symbol declarations for the compiler. */\ " if test yes = "$dlself"; then func_verbose "generating symbol list for '$output'" $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist" # Add our own program objects to the symbol list. 
progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP` for progfile in $progfiles; do func_to_tool_file "$progfile" func_convert_file_msys_to_w32 func_verbose "extracting global C symbols from '$func_to_tool_file_result'" $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'" done if test -n "$exclude_expsyms"; then $opt_dry_run || { eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi if test -n "$export_symbols_regex"; then $opt_dry_run || { eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi # Prepare the list of exported symbols if test -z "$export_symbols"; then export_symbols=$output_objdir/$outputname.exp $opt_dry_run || { $RM $export_symbols eval "$SED -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"' ;; esac } else $opt_dry_run || { eval "$SED -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"' eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$nlist" >> "$output_objdir/$outputname.def"' ;; esac } fi fi for dlprefile in $dlprefiles; do func_verbose "extracting global C symbols from '$dlprefile'" func_basename "$dlprefile" name=$func_basename_result case $host in *cygwin* | *mingw* | *cegcc* ) # if an import library, we need to obtain dlname if func_win32_import_lib_p "$dlprefile"; then func_tr_sh "$dlprefile" eval "curr_lafile=\$libfile_$func_tr_sh_result" dlprefile_dlbasename= if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then # Use subshell, to avoid clobbering current variable values dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"` if test -n "$dlprefile_dlname"; then func_basename "$dlprefile_dlname" dlprefile_dlbasename=$func_basename_result else # no lafile. user explicitly requested -dlpreopen . $sharedlib_from_linklib_cmd "$dlprefile" dlprefile_dlbasename=$sharedlib_from_linklib_result fi fi $opt_dry_run || { if test -n "$dlprefile_dlbasename"; then eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"' else func_warning "Could not compute DLL name from $name" eval '$ECHO ": $name " >> "$nlist"' fi func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe | $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'" } else # not an import lib $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } fi ;; *) $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } ;; esac done $opt_dry_run || { # Make sure we have at least an empty file. test -f "$nlist" || : > "$nlist" if test -n "$exclude_expsyms"; then $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T $MV "$nlist"T "$nlist" fi # Try sorting and uniquifying the output. 
if $GREP -v "^: " < "$nlist" | if sort -k 3 /dev/null 2>&1; then sort -k 3 else sort +2 fi | uniq > "$nlist"S; then : else $GREP -v "^: " < "$nlist" > "$nlist"S fi if test -f "$nlist"S; then eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"' else echo '/* NONE */' >> "$output_objdir/$my_dlsyms" fi func_show_eval '$RM "${nlist}I"' if test -n "$global_symbol_to_import"; then eval "$global_symbol_to_import"' < "$nlist"S > "$nlist"I' fi echo >> "$output_objdir/$my_dlsyms" "\ /* The mapping between symbol names and symbols. */ typedef struct { const char *name; void *address; } lt_dlsymlist; extern LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[];\ " if test -s "$nlist"I; then echo >> "$output_objdir/$my_dlsyms" "\ static void lt_syminit(void) { LT_DLSYM_CONST lt_dlsymlist *symbol = lt_${my_prefix}_LTX_preloaded_symbols; for (; symbol->name; ++symbol) {" $SED 's/.*/ if (STREQ (symbol->name, \"&\")) symbol->address = (void *) \&&;/' < "$nlist"I >> "$output_objdir/$my_dlsyms" echo >> "$output_objdir/$my_dlsyms" "\ } }" fi echo >> "$output_objdir/$my_dlsyms" "\ LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[] = { {\"$my_originator\", (void *) 0}," if test -s "$nlist"I; then echo >> "$output_objdir/$my_dlsyms" "\ {\"@INIT@\", (void *) <_syminit}," fi case $need_lib_prefix in no) eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; *) eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; esac echo >> "$output_objdir/$my_dlsyms" "\ {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt_${my_prefix}_LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif\ " } # !$opt_dry_run pic_flag_for_symtable= case "$compile_command " in *" -static "*) ;; *) case $host in # compiling the symbol table file with pic_flag works around # a FreeBSD bug that causes programs to crash when -lm is # linked before any other PIC object. But we must not use # pic_flag when linking with -static. The problem exists in # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1. *-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*) pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;; *-*-hpux*) pic_flag_for_symtable=" $pic_flag" ;; *) $my_pic_p && pic_flag_for_symtable=" $pic_flag" ;; esac ;; esac symtab_cflags= for arg in $LTCFLAGS; do case $arg in -pie | -fpie | -fPIE) ;; *) func_append symtab_cflags " $arg" ;; esac done # Now compile the dynamic symbol file. func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?' # Clean up the generated files. func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T" "${nlist}I"' # Transform the symbol file into the correct name. 
symfileobj=$output_objdir/${my_outputname}S.$objext case $host in *cygwin* | *mingw* | *cegcc* ) if test -f "$output_objdir/$my_outputname.def"; then compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` else compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` fi ;; *) compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` ;; esac ;; *) func_fatal_error "unknown suffix for '$my_dlsyms'" ;; esac else # We keep going just in case the user didn't refer to # lt_preloaded_symbols. The linker will fail if global_symbol_pipe # really was required. # Nullify the symbol file. compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"` finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"` fi } # func_cygming_gnu_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is a GNU/binutils-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_gnu_implib_p () { $debug_cmd func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'` test -n "$func_cygming_gnu_implib_tmp" } # func_cygming_ms_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is an MS-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_ms_implib_p () { $debug_cmd func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'` test -n "$func_cygming_ms_implib_tmp" } # func_win32_libid arg # return the library type of file 'arg' # # Need a lot of goo to handle *both* DLLs and import libs # Has to be a shell function in order to 'eat' the argument # that is supplied when $file_magic_command is called. # Despite the name, also deal with 64 bit binaries. func_win32_libid () { $debug_cmd win32_libid_type=unknown win32_fileres=`file -L $1 2>/dev/null` case $win32_fileres in *ar\ archive\ import\ library*) # definitely import win32_libid_type="x86 archive import" ;; *ar\ archive*) # could be an import, or static # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD. if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then case $nm_interface in "MS dumpbin") if func_cygming_ms_implib_p "$1" || func_cygming_gnu_implib_p "$1" then win32_nmres=import else win32_nmres= fi ;; *) func_to_tool_file "$1" func_convert_file_msys_to_w32 win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" | $SED -n -e ' 1,100{ / I /{ s|.*|import| p q } }'` ;; esac case $win32_nmres in import*) win32_libid_type="x86 archive import";; *) win32_libid_type="x86 archive static";; esac fi ;; *DLL*) win32_libid_type="x86 DLL" ;; *executable*) # but shell scripts are "executable" too... 
case $win32_fileres in *MS\ Windows\ PE\ Intel*) win32_libid_type="x86 DLL" ;; esac ;; esac $ECHO "$win32_libid_type" } # func_cygming_dll_for_implib ARG # # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib () { $debug_cmd sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"` } # func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs # # The is the core of a fallback implementation of a # platform-specific function to extract the name of the # DLL associated with the specified import library LIBNAME. # # SECTION_NAME is either .idata$6 or .idata$7, depending # on the platform and compiler that created the implib. # # Echos the name of the DLL associated with the # specified import library. func_cygming_dll_for_implib_fallback_core () { $debug_cmd match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"` $OBJDUMP -s --section "$1" "$2" 2>/dev/null | $SED '/^Contents of section '"$match_literal"':/{ # Place marker at beginning of archive member dllname section s/.*/====MARK====/ p d } # These lines can sometimes be longer than 43 characters, but # are always uninteresting /:[ ]*file format pe[i]\{,1\}-/d /^In archive [^:]*:/d # Ensure marker is printed /^====MARK====/p # Remove all lines with less than 43 characters /^.\{43\}/!d # From remaining lines, remove first 43 characters s/^.\{43\}//' | $SED -n ' # Join marker and all lines until next marker into a single line /^====MARK====/ b para H $ b para b :para x s/\n//g # Remove the marker s/^====MARK====// # Remove trailing dots and whitespace s/[\. \t]*$// # Print /./p' | # we now have a list, one entry per line, of the stringified # contents of the appropriate section of all members of the # archive that possess that section. Heuristic: eliminate # all those that have a first or second character that is # a '.' (that is, objdump's representation of an unprintable # character.) This should work for all archives with less than # 0x302f exports -- but will fail for DLLs whose name actually # begins with a literal '.' or a single character followed by # a '.'. # # Of those that remain, print the first one. $SED -e '/^\./d;/^.\./d;q' } # func_cygming_dll_for_implib_fallback ARG # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # # This fallback implementation is for use when $DLLTOOL # does not support the --identify-strict option. 
# Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib_fallback () { $debug_cmd if func_cygming_gnu_implib_p "$1"; then # binutils import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"` elif func_cygming_ms_implib_p "$1"; then # ms-generated import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"` else # unknown sharedlib_from_linklib_result= fi } # func_extract_an_archive dir oldlib func_extract_an_archive () { $debug_cmd f_ex_an_ar_dir=$1; shift f_ex_an_ar_oldlib=$1 if test yes = "$lock_old_archive_extraction"; then lockfile=$f_ex_an_ar_oldlib.lock until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done fi func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \ 'stat=$?; rm -f "$lockfile"; exit $stat' if test yes = "$lock_old_archive_extraction"; then $opt_dry_run || rm -f "$lockfile" fi if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then : else func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" fi } # func_extract_archives gentop oldlib ... func_extract_archives () { $debug_cmd my_gentop=$1; shift my_oldlibs=${1+"$@"} my_oldobjs= my_xlib= my_xabs= my_xdir= for my_xlib in $my_oldlibs; do # Extract the objects. case $my_xlib in [\\/]* | [A-Za-z]:[\\/]*) my_xabs=$my_xlib ;; *) my_xabs=`pwd`"/$my_xlib" ;; esac func_basename "$my_xlib" my_xlib=$func_basename_result my_xlib_u=$my_xlib while :; do case " $extracted_archives " in *" $my_xlib_u "*) func_arith $extracted_serial + 1 extracted_serial=$func_arith_result my_xlib_u=lt$extracted_serial-$my_xlib ;; *) break ;; esac done extracted_archives="$extracted_archives $my_xlib_u" my_xdir=$my_gentop/$my_xlib_u func_mkdir_p "$my_xdir" case $host in *-darwin*) func_verbose "Extracting $my_xabs" # Do not bother doing anything if just a dry run $opt_dry_run || { darwin_orig_dir=`pwd` cd $my_xdir || exit $? 
darwin_archive=$my_xabs darwin_curdir=`pwd` func_basename "$darwin_archive" darwin_base_archive=$func_basename_result darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` if test -n "$darwin_arches"; then darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` darwin_arch= func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" for darwin_arch in $darwin_arches; do func_mkdir_p "unfat-$$/$darwin_base_archive-$darwin_arch" $LIPO -thin $darwin_arch -output "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive" "$darwin_archive" cd "unfat-$$/$darwin_base_archive-$darwin_arch" func_extract_an_archive "`pwd`" "$darwin_base_archive" cd "$darwin_curdir" $RM "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive" done # $darwin_arches ## Okay now we've a bunch of thin objects, gotta fatten them up :) darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$sed_basename" | sort -u` darwin_file= darwin_files= for darwin_file in $darwin_filelist; do darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP` $LIPO -create -output "$darwin_file" $darwin_files done # $darwin_filelist $RM -rf unfat-$$ cd "$darwin_orig_dir" else cd $darwin_orig_dir func_extract_an_archive "$my_xdir" "$my_xabs" fi # $darwin_arches } # !$opt_dry_run ;; *) func_extract_an_archive "$my_xdir" "$my_xabs" ;; esac my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP` done func_extract_archives_result=$my_oldobjs } # func_emit_wrapper [arg=no] # # Emit a libtool wrapper script on stdout. # Don't directly open a file because we may want to # incorporate the script contents within a cygwin/mingw # wrapper executable. Must ONLY be called from within # func_mode_link because it depends on a number of variables # set therein. # # ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR # variable will take. If 'yes', then the emitted script # will assume that the directory where it is stored is # the $objdir directory. This is a cygwin/mingw-specific # behavior. func_emit_wrapper () { func_emit_wrapper_arg1=${1-no} $ECHO "\ #! $SHELL # $output - temporary wrapper script for $objdir/$outputname # Generated by $PROGRAM (GNU $PACKAGE) $VERSION # # The $output program cannot be directly executed until all the libtool # libraries that it depends on are installed. # # This wrapper script should never be moved out of the build directory. # If it is, it will not operate correctly. # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. sed_quote_subst='$sed_quote_subst' # Be Bourne compatible if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH relink_command=\"$relink_command\" # This environment variable determines our operation mode. 
if test \"\$libtool_install_magic\" = \"$magic\"; then # install mode needs the following variables: generated_by_libtool_version='$macro_version' notinst_deplibs='$notinst_deplibs' else # When we are sourced in execute mode, \$file and \$ECHO are already set. if test \"\$libtool_execute_magic\" != \"$magic\"; then file=\"\$0\"" func_quote_arg pretty "$ECHO" qECHO=$func_quote_arg_result $ECHO "\ # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } ECHO=$qECHO fi # Very basic option parsing. These options are (a) specific to # the libtool wrapper, (b) are identical between the wrapper # /script/ and the wrapper /executable/ that is used only on # windows platforms, and (c) all begin with the string "--lt-" # (application programs are unlikely to have options that match # this pattern). # # There are only two supported options: --lt-debug and # --lt-dump-script. There is, deliberately, no --lt-help. # # The first argument to this parsing function should be the # script's $0 value, followed by "$@". lt_option_debug= func_parse_lt_options () { lt_script_arg0=\$0 shift for lt_opt do case \"\$lt_opt\" in --lt-debug) lt_option_debug=1 ;; --lt-dump-script) lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\` test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=. lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\` cat \"\$lt_dump_D/\$lt_dump_F\" exit 0 ;; --lt-*) \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2 exit 1 ;; esac done # Print the debug banner immediately: if test -n \"\$lt_option_debug\"; then echo \"$outputname:$output:\$LINENO: libtool wrapper (GNU $PACKAGE) $VERSION\" 1>&2 fi } # Used when --lt-debug. Prints its arguments to stdout # (redirection is the responsibility of the caller) func_lt_dump_args () { lt_dump_args_N=1; for lt_arg do \$ECHO \"$outputname:$output:\$LINENO: newargv[\$lt_dump_args_N]: \$lt_arg\" lt_dump_args_N=\`expr \$lt_dump_args_N + 1\` done } # Core function for launching the target application func_exec_program_core () { " case $host in # Backslashes separate directories on plain windows *-*-mingw | *-*-os2* | *-cegcc*) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir\\\\\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} " ;; *) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir/\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir/\$program\" \${1+\"\$@\"} " ;; esac $ECHO "\ \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 exit 1 } # A function to encapsulate launching the target application # Strips options in the --lt-* namespace from \$@ and # launches target application with the remaining arguments. func_exec_program () { case \" \$* \" in *\\ --lt-*) for lt_wr_arg do case \$lt_wr_arg in --lt-*) ;; *) set x \"\$@\" \"\$lt_wr_arg\"; shift;; esac shift done ;; esac func_exec_program_core \${1+\"\$@\"} } # Parse options func_parse_lt_options \"\$0\" \${1+\"\$@\"} # Find the directory that this script lives in. thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\` test \"x\$thisdir\" = \"x\$file\" && thisdir=. # Follow symbolic links until we get to the real thisdir. 
file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\` while test -n \"\$file\"; do destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\` # If there was a directory component, then change thisdir. if test \"x\$destdir\" != \"x\$file\"; then case \"\$destdir\" in [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; *) thisdir=\"\$thisdir/\$destdir\" ;; esac fi file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\` file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\` done # Usually 'no', except on cygwin/mingw when embedded into # the cwrapper. WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1 if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then # special case for '.' if test \"\$thisdir\" = \".\"; then thisdir=\`pwd\` fi # remove .libs from thisdir case \"\$thisdir\" in *[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;; $objdir ) thisdir=. ;; esac fi # Try to get the absolute directory name. absdir=\`cd \"\$thisdir\" && pwd\` test -n \"\$absdir\" && thisdir=\"\$absdir\" " if test yes = "$fast_install"; then $ECHO "\ program=lt-'$outputname'$exeext progdir=\"\$thisdir/$objdir\" if test ! -f \"\$progdir/\$program\" || { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | $SED 1q\`; \\ test \"X\$file\" != \"X\$progdir/\$program\"; }; then file=\"\$\$-\$program\" if test ! -d \"\$progdir\"; then $MKDIR \"\$progdir\" else $RM \"\$progdir/\$file\" fi" $ECHO "\ # relink executable if necessary if test -n \"\$relink_command\"; then if relink_command_output=\`eval \$relink_command 2>&1\`; then : else \$ECHO \"\$relink_command_output\" >&2 $RM \"\$progdir/\$file\" exit 1 fi fi $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || { $RM \"\$progdir/\$program\"; $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } $RM \"\$progdir/\$file\" fi" else $ECHO "\ program='$outputname' progdir=\"\$thisdir/$objdir\" " fi $ECHO "\ if test -f \"\$progdir/\$program\"; then" # fixup the dll searchpath if we need to. # # Fix the DLL searchpath if we need to. Do this before prepending # to shlibpath, because on Windows, both are PATH and uninstalled # libraries must come first. if test -n "$dllsearchpath"; then $ECHO "\ # Add the dll search path components to the executable PATH PATH=$dllsearchpath:\$PATH " fi # Export our shlibpath_var if we have one. if test yes = "$shlibpath_overrides_runpath" && test -n "$shlibpath_var" && test -n "$temp_rpath"; then $ECHO "\ # Add our own library path to $shlibpath_var $shlibpath_var=\"$temp_rpath\$$shlibpath_var\" # Some systems cannot cope with colon-terminated $shlibpath_var # The second colon is a workaround for a bug in BeOS R4 sed $shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\` export $shlibpath_var " fi $ECHO "\ if test \"\$libtool_execute_magic\" != \"$magic\"; then # Run the actual program with our arguments. func_exec_program \${1+\"\$@\"} fi else # The program doesn't exist. \$ECHO \"\$0: error: '\$progdir/\$program' does not exist\" 1>&2 \$ECHO \"This script is just a wrapper for \$program.\" 1>&2 \$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2 exit 1 fi fi\ " } # func_emit_cwrapperexe_src # emit the source code for a wrapper executable on stdout # Must ONLY be called from within func_mode_link because # it depends on a number of variable set therein. 
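# NOTE (editorial addition, not part of upstream libtool): the shell wrapper
# emitted by func_emit_wrapper above is not directly usable by Win32
# CreateProcess(), so on cygwin/mingw hosts func_emit_cwrapperexe_src below
# generates C source for a small wrapper .exe that adjusts PATH and
# $shlibpath_var and then spawns the real program from $objdir.  The wrapper
# script text itself is embedded via lt_dump_script (see --lt-dump-script).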
func_emit_cwrapperexe_src () { cat < #include #ifdef _MSC_VER # include # include # include #else # include # include # ifdef __CYGWIN__ # include # endif #endif #include #include #include #include #include #include #include #include #define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0) /* declarations of non-ANSI functions */ #if defined __MINGW32__ # ifdef __STRICT_ANSI__ int _putenv (const char *); # endif #elif defined __CYGWIN__ # ifdef __STRICT_ANSI__ char *realpath (const char *, char *); int putenv (char *); int setenv (const char *, const char *, int); # endif /* #elif defined other_platform || defined ... */ #endif /* portability defines, excluding path handling macros */ #if defined _MSC_VER # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv # define S_IXUSR _S_IEXEC #elif defined __MINGW32__ # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv #elif defined __CYGWIN__ # define HAVE_SETENV # define FOPEN_WB "wb" /* #elif defined other platforms ... */ #endif #if defined PATH_MAX # define LT_PATHMAX PATH_MAX #elif defined MAXPATHLEN # define LT_PATHMAX MAXPATHLEN #else # define LT_PATHMAX 1024 #endif #ifndef S_IXOTH # define S_IXOTH 0 #endif #ifndef S_IXGRP # define S_IXGRP 0 #endif /* path handling portability macros */ #ifndef DIR_SEPARATOR # define DIR_SEPARATOR '/' # define PATH_SEPARATOR ':' #endif #if defined _WIN32 || defined __MSDOS__ || defined __DJGPP__ || \ defined __OS2__ # define HAVE_DOS_BASED_FILE_SYSTEM # define FOPEN_WB "wb" # ifndef DIR_SEPARATOR_2 # define DIR_SEPARATOR_2 '\\' # endif # ifndef PATH_SEPARATOR_2 # define PATH_SEPARATOR_2 ';' # endif #endif #ifndef DIR_SEPARATOR_2 # define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) #else /* DIR_SEPARATOR_2 */ # define IS_DIR_SEPARATOR(ch) \ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) #endif /* DIR_SEPARATOR_2 */ #ifndef PATH_SEPARATOR_2 # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) #else /* PATH_SEPARATOR_2 */ # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) #endif /* PATH_SEPARATOR_2 */ #ifndef FOPEN_WB # define FOPEN_WB "w" #endif #ifndef _O_BINARY # define _O_BINARY 0 #endif #define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) #define XFREE(stale) do { \ if (stale) { free (stale); stale = 0; } \ } while (0) #if defined LT_DEBUGWRAPPER static int lt_debug = 1; #else static int lt_debug = 0; #endif const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */ void *xmalloc (size_t num); char *xstrdup (const char *string); const char *base_name (const char *name); char *find_executable (const char *wrapper); char *chase_symlinks (const char *pathspec); int make_executable (const char *path); int check_executable (const char *path); char *strendzap (char *str, const char *pat); void lt_debugprintf (const char *file, int line, const char *fmt, ...); void lt_fatal (const char *file, int line, const char *message, ...); static const char *nonnull (const char *s); static const char *nonempty (const char *s); void lt_setenv (const char *name, const char *value); char *lt_extend_str (const char *orig_value, const char *add, int to_end); void lt_update_exe_path (const char *name, const char *value); void lt_update_lib_path (const char *name, const char *value); char **prepare_spawn (char **argv); void lt_dump_script (FILE *f); EOF cat <= 0) && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) return 1; else return 0; } int make_executable (const 
char *path) { int rval = 0; struct stat st; lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n", nonempty (path)); if ((!path) || (!*path)) return 0; if (stat (path, &st) >= 0) { rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR); } return rval; } /* Searches for the full path of the wrapper. Returns newly allocated full path name if found, NULL otherwise Does not chase symlinks, even on platforms that support them. */ char * find_executable (const char *wrapper) { int has_slash = 0; const char *p; const char *p_next; /* static buffer for getcwd */ char tmp[LT_PATHMAX + 1]; size_t tmp_len; char *concat_name; lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n", nonempty (wrapper)); if ((wrapper == NULL) || (*wrapper == '\0')) return NULL; /* Absolute path? */ #if defined HAVE_DOS_BASED_FILE_SYSTEM if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } else { #endif if (IS_DIR_SEPARATOR (wrapper[0])) { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } #if defined HAVE_DOS_BASED_FILE_SYSTEM } #endif for (p = wrapper; *p; p++) if (*p == '/') { has_slash = 1; break; } if (!has_slash) { /* no slashes; search PATH */ const char *path = getenv ("PATH"); if (path != NULL) { for (p = path; *p; p = p_next) { const char *q; size_t p_len; for (q = p; *q; q++) if (IS_PATH_SEPARATOR (*q)) break; p_len = (size_t) (q - p); p_next = (*q == '\0' ? q : q + 1); if (p_len == 0) { /* empty path: current directory */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); } else { concat_name = XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, p, p_len); concat_name[p_len] = '/'; strcpy (concat_name + p_len + 1, wrapper); } if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } } /* not found in PATH; assume curdir */ } /* Relative path | not found in path: prepend cwd */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); return NULL; } char * chase_symlinks (const char *pathspec) { #ifndef S_ISLNK return xstrdup (pathspec); #else char buf[LT_PATHMAX]; struct stat s; char *tmp_pathspec = xstrdup (pathspec); char *p; int has_symlinks = 0; while (strlen (tmp_pathspec) && !has_symlinks) { lt_debugprintf (__FILE__, __LINE__, "checking path component for symlinks: %s\n", tmp_pathspec); if (lstat (tmp_pathspec, &s) == 0) { if (S_ISLNK (s.st_mode) != 0) { has_symlinks = 1; break; } /* search backwards for last DIR_SEPARATOR */ p = tmp_pathspec + strlen (tmp_pathspec) - 1; while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) p--; if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) { /* no more DIR_SEPARATORS left */ break; } *p = '\0'; } else { lt_fatal (__FILE__, __LINE__, "error accessing file \"%s\": %s", tmp_pathspec, nonnull (strerror (errno))); } } XFREE 
(tmp_pathspec); if (!has_symlinks) { return xstrdup (pathspec); } tmp_pathspec = realpath (pathspec, buf); if (tmp_pathspec == 0) { lt_fatal (__FILE__, __LINE__, "could not follow symlinks for %s", pathspec); } return xstrdup (tmp_pathspec); #endif } char * strendzap (char *str, const char *pat) { size_t len, patlen; assert (str != NULL); assert (pat != NULL); len = strlen (str); patlen = strlen (pat); if (patlen <= len) { str += len - patlen; if (STREQ (str, pat)) *str = '\0'; } return str; } void lt_debugprintf (const char *file, int line, const char *fmt, ...) { va_list args; if (lt_debug) { (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line); va_start (args, fmt); (void) vfprintf (stderr, fmt, args); va_end (args); } } static void lt_error_core (int exit_status, const char *file, int line, const char *mode, const char *message, va_list ap) { fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode); vfprintf (stderr, message, ap); fprintf (stderr, ".\n"); if (exit_status >= 0) exit (exit_status); } void lt_fatal (const char *file, int line, const char *message, ...) { va_list ap; va_start (ap, message); lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap); va_end (ap); } static const char * nonnull (const char *s) { return s ? s : "(null)"; } static const char * nonempty (const char *s) { return (s && !*s) ? "(empty)" : nonnull (s); } void lt_setenv (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_setenv) setting '%s' to '%s'\n", nonnull (name), nonnull (value)); { #ifdef HAVE_SETENV /* always make a copy, for consistency with !HAVE_SETENV */ char *str = xstrdup (value); setenv (name, str, 1); #else size_t len = strlen (name) + 1 + strlen (value) + 1; char *str = XMALLOC (char, len); sprintf (str, "%s=%s", name, value); if (putenv (str) != EXIT_SUCCESS) { XFREE (str); } #endif } } char * lt_extend_str (const char *orig_value, const char *add, int to_end) { char *new_value; if (orig_value && *orig_value) { size_t orig_value_len = strlen (orig_value); size_t add_len = strlen (add); new_value = XMALLOC (char, add_len + orig_value_len + 1); if (to_end) { strcpy (new_value, orig_value); strcpy (new_value + orig_value_len, add); } else { strcpy (new_value, add); strcpy (new_value + add_len, orig_value); } } else { new_value = xstrdup (add); } return new_value; } void lt_update_exe_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_exe_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); /* some systems can't cope with a ':'-terminated path #' */ size_t len = strlen (new_value); while ((len > 0) && IS_PATH_SEPARATOR (new_value[len-1])) { new_value[--len] = '\0'; } lt_setenv (name, new_value); XFREE (new_value); } } void lt_update_lib_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_lib_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); } } EOF case $host_os in mingw*) cat <<"EOF" /* Prepares an argument vector before calling spawn(). Note that spawn() does not by itself call the command interpreter (getenv ("COMSPEC") != NULL ? 
getenv ("COMSPEC") : ({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&v); v.dwPlatformId == VER_PLATFORM_WIN32_NT; }) ? "cmd.exe" : "command.com"). Instead it simply concatenates the arguments, separated by ' ', and calls CreateProcess(). We must quote the arguments since Win32 CreateProcess() interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a special way: - Space and tab are interpreted as delimiters. They are not treated as delimiters if they are surrounded by double quotes: "...". - Unescaped double quotes are removed from the input. Their only effect is that within double quotes, space and tab are treated like normal characters. - Backslashes not followed by double quotes are not special. - But 2*n+1 backslashes followed by a double quote become n backslashes followed by a double quote (n >= 0): \" -> " \\\" -> \" \\\\\" -> \\" */ #define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" #define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" char ** prepare_spawn (char **argv) { size_t argc; char **new_argv; size_t i; /* Count number of arguments. */ for (argc = 0; argv[argc] != NULL; argc++) ; /* Allocate new argument vector. */ new_argv = XMALLOC (char *, argc + 1); /* Put quoted arguments into the new argument vector. */ for (i = 0; i < argc; i++) { const char *string = argv[i]; if (string[0] == '\0') new_argv[i] = xstrdup ("\"\""); else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL) { int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL); size_t length; unsigned int backslashes; const char *s; char *quoted_string; char *p; length = 0; backslashes = 0; if (quote_around) length++; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') length += backslashes + 1; length++; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) length += backslashes + 1; quoted_string = XMALLOC (char, length + 1); p = quoted_string; backslashes = 0; if (quote_around) *p++ = '"'; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') { unsigned int j; for (j = backslashes + 1; j > 0; j--) *p++ = '\\'; } *p++ = c; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) { unsigned int j; for (j = backslashes; j > 0; j--) *p++ = '\\'; *p++ = '"'; } *p = '\0'; new_argv[i] = quoted_string; } else new_argv[i] = (char *) string; } new_argv[argc] = NULL; return new_argv; } EOF ;; esac cat <<"EOF" void lt_dump_script (FILE* f) { EOF func_emit_wrapper yes | $SED -n -e ' s/^\(.\{79\}\)\(..*\)/\1\ \2/ h s/\([\\"]\)/\\\1/g s/$/\\n/ s/\([^\n]*\).*/ fputs ("\1", f);/p g D' cat <<"EOF" } EOF } # end: func_emit_cwrapperexe_src # func_win32_import_lib_p ARG # True if ARG is an import lib, as indicated by $file_magic_cmd func_win32_import_lib_p () { $debug_cmd case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in *import*) : ;; *) false ;; esac } # func_suncc_cstd_abi # !!ONLY CALL THIS FOR SUN CC AFTER $compile_command IS FULLY EXPANDED!! # Several compiler flags select an ABI that is incompatible with the # Cstd library. Avoid specifying it if any are in CXXFLAGS. 
func_suncc_cstd_abi () { $debug_cmd case " $compile_command " in *" -compat=g "*|*\ -std=c++[0-9][0-9]\ *|*" -library=stdcxx4 "*|*" -library=stlport4 "*) suncc_use_cstd_abi=no ;; *) suncc_use_cstd_abi=yes ;; esac } # func_mode_link arg... func_mode_link () { $debug_cmd case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) # It is impossible to link a dll without this setting, and # we shouldn't force the makefile maintainer to figure out # what system we are compiling for in order to pass an extra # flag for every libtool invocation. # allow_undefined=no # FIXME: Unfortunately, there are problems with the above when trying # to make a dll that has undefined symbols, in which case not # even a static library is built. For now, we need to specify # -no-undefined on the libtool link line when we can be certain # that all symbols are satisfied, otherwise we get a static library. allow_undefined=yes ;; *) allow_undefined=yes ;; esac libtool_args=$nonopt base_compile="$nonopt $@" compile_command=$nonopt finalize_command=$nonopt compile_rpath= finalize_rpath= compile_shlibpath= finalize_shlibpath= convenience= old_convenience= deplibs= old_deplibs= compiler_flags= linker_flags= dllsearchpath= lib_search_path=`pwd` inst_prefix_dir= new_inherited_linker_flags= avoid_version=no bindir= dlfiles= dlprefiles= dlself=no export_dynamic=no export_symbols= export_symbols_regex= generated= libobjs= ltlibs= module=no no_install=no objs= os2dllname= non_pic_objects= precious_files_regex= prefer_static_libs=no preload=false prev= prevarg= release= rpath= xrpath= perm_rpath= temp_rpath= thread_safe=no vinfo= vinfo_number=no weak_libs= single_module=$wl-single_module func_infer_tag $base_compile # We need to know -static, to get the right output filenames. for arg do case $arg in -shared) test yes != "$build_libtool_libs" \ && func_fatal_configuration "cannot build a shared library" build_old_libs=no break ;; -all-static | -static | -static-libtool-libs) case $arg in -all-static) if test yes = "$build_libtool_libs" && test -z "$link_static_flag"; then func_warning "complete static linking is impossible in this configuration" fi if test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; -static) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=built ;; -static-libtool-libs) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; esac build_libtool_libs=no build_old_libs=yes break ;; esac done # See if our shared archives depend on static archives. test -n "$old_archive_from_new_cmds" && build_old_libs=yes # Go through the arguments, transforming them on the way. while test "$#" -gt 0; do arg=$1 shift func_quote_arg pretty,unquoted "$arg" qarg=$func_quote_arg_unquoted_result func_append libtool_args " $func_quote_arg_result" # If the previous option needs an argument, assign it. if test -n "$prev"; then case $prev in output) func_append compile_command " @OUTPUT@" func_append finalize_command " @OUTPUT@" ;; esac case $prev in bindir) bindir=$arg prev= continue ;; dlfiles|dlprefiles) $preload || { # Add the symbol object into the linking commands. func_append compile_command " @SYMFILE@" func_append finalize_command " @SYMFILE@" preload=: } case $arg in *.la | *.lo) ;; # We handle these cases below. 
force) if test no = "$dlself"; then dlself=needless export_dynamic=yes fi prev= continue ;; self) if test dlprefiles = "$prev"; then dlself=yes elif test dlfiles = "$prev" && test yes != "$dlopen_self"; then dlself=yes else dlself=needless export_dynamic=yes fi prev= continue ;; *) if test dlfiles = "$prev"; then func_append dlfiles " $arg" else func_append dlprefiles " $arg" fi prev= continue ;; esac ;; expsyms) export_symbols=$arg test -f "$arg" \ || func_fatal_error "symbol file '$arg' does not exist" prev= continue ;; expsyms_regex) export_symbols_regex=$arg prev= continue ;; framework) case $host in *-*-darwin*) case "$deplibs " in *" $qarg.ltframework "*) ;; *) func_append deplibs " $qarg.ltframework" # this is fixed later ;; esac ;; esac prev= continue ;; inst_prefix) inst_prefix_dir=$arg prev= continue ;; mllvm) # Clang does not use LLVM to link, so we can simply discard any # '-mllvm $arg' options when doing the link step. prev= continue ;; objectlist) if test -f "$arg"; then save_arg=$arg moreargs= for fil in `cat "$save_arg"` do # func_append moreargs " $fil" arg=$fil # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test none = "$pic_object" && test none = "$non_pic_object"; then func_fatal_error "cannot find name of object for '$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result if test none != "$pic_object"; then # Prepend the subdirectory the object is found in. pic_object=$xdir$pic_object if test dlfiles = "$prev"; then if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test dlprefiles = "$prev"; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg=$pic_object fi # Non-PIC object. if test none != "$non_pic_object"; then # Prepend the subdirectory the object is found in. non_pic_object=$xdir$non_pic_object # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test none = "$pic_object"; then arg=$non_pic_object fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object=$pic_object func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "'$arg' is not a valid libtool object" fi fi done else func_fatal_error "link input file '$arg' does not exist" fi arg=$save_arg prev= continue ;; os2dllname) os2dllname=$arg prev= continue ;; precious_regex) precious_files_regex=$arg prev= continue ;; release) release=-$arg prev= continue ;; rpath | xrpath) # We need an absolute path. 
case $arg in [\\/]* | [A-Za-z]:[\\/]*) ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac if test rpath = "$prev"; then case "$rpath " in *" $arg "*) ;; *) func_append rpath " $arg" ;; esac else case "$xrpath " in *" $arg "*) ;; *) func_append xrpath " $arg" ;; esac fi prev= continue ;; shrext) shrext_cmds=$arg prev= continue ;; weak) func_append weak_libs " $arg" prev= continue ;; xassembler) func_append compiler_flags " -Xassembler $qarg" prev= func_append compile_command " -Xassembler $qarg" func_append finalize_command " -Xassembler $qarg" continue ;; xcclinker) func_append linker_flags " $qarg" func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xcompiler) func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xlinker) func_append linker_flags " $qarg" func_append compiler_flags " $wl$qarg" prev= func_append compile_command " $wl$qarg" func_append finalize_command " $wl$qarg" continue ;; *) eval "$prev=\"\$arg\"" prev= continue ;; esac fi # test -n "$prev" prevarg=$arg case $arg in -all-static) if test -n "$link_static_flag"; then # See comment for -static flag below, for more details. func_append compile_command " $link_static_flag" func_append finalize_command " $link_static_flag" fi continue ;; -allow-undefined) # FIXME: remove this flag sometime in the future. func_fatal_error "'-allow-undefined' must not be used because it is the default" ;; -avoid-version) avoid_version=yes continue ;; -bindir) prev=bindir continue ;; -dlopen) prev=dlfiles continue ;; -dlpreopen) prev=dlprefiles continue ;; -export-dynamic) export_dynamic=yes continue ;; -export-symbols | -export-symbols-regex) if test -n "$export_symbols" || test -n "$export_symbols_regex"; then func_fatal_error "more than one -exported-symbols argument is not allowed" fi if test X-export-symbols = "X$arg"; then prev=expsyms else prev=expsyms_regex fi continue ;; -framework) prev=framework continue ;; -inst-prefix-dir) prev=inst_prefix continue ;; # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:* # so, if we see these flags be careful not to treat them like -L -L[A-Z][A-Z]*:*) case $with_gcc/$host in no/*-*-irix* | /*-*-irix*) func_append compile_command " $arg" func_append finalize_command " $arg" ;; esac continue ;; -L*) func_stripname "-L" '' "$arg" if test -z "$func_stripname_result"; then if test "$#" -gt 0; then func_fatal_error "require no space between '-L' and '$1'" else func_fatal_error "need path for '-L' option" fi fi func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # We need an absolute path. 
case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) absdir=`cd "$dir" && pwd` test -z "$absdir" && \ func_fatal_error "cannot determine absolute directory name of '$dir'" dir=$absdir ;; esac case "$deplibs " in *" -L$dir "* | *" $arg "*) # Will only happen for absolute or sysroot arguments ;; *) # Preserve sysroot, but never include relative directories case $dir in [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;; *) func_append deplibs " -L$dir" ;; esac func_append lib_search_path " $dir" ;; esac case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'` case :$dllsearchpath: in *":$dir:"*) ;; ::) dllsearchpath=$dir;; *) func_append dllsearchpath ":$dir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac continue ;; -l*) if test X-lc = "X$arg" || test X-lm = "X$arg"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*) # These systems don't actually have a C or math library (as such) continue ;; *-*-os2*) # These systems don't actually have a C library (as such) test X-lc = "X$arg" && continue ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig* | *-*-midnightbsd*) # Do not include libc due to us having libc/libc_r. test X-lc = "X$arg" && continue ;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C and math libraries are in the System framework func_append deplibs " System.ltframework" continue ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype test X-lc = "X$arg" && continue ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work test X-lc = "X$arg" && continue ;; esac elif test X-lc_r = "X$arg"; then case $host in *-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig* | *-*-midnightbsd*) # Do not include libc_r directly, use -pthread flag. continue ;; esac fi func_append deplibs " $arg" continue ;; -mllvm) prev=mllvm continue ;; -module) module=yes continue ;; # Tru64 UNIX uses -model [arg] to determine the layout of C++ # classes, name mangling, and exception handling. # Darwin uses the -arch flag to determine output architecture. -model|-arch|-isysroot|--sysroot) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" prev=xcompiler continue ;; # Solaris ld rejects as of 11.4. Refer to Oracle bug 22985199. -pthread) case $host in *solaris2*) ;; *) case "$new_inherited_linker_flags " in *" $arg "*) ;; * ) func_append new_inherited_linker_flags " $arg" ;; esac ;; esac continue ;; -mt|-mthreads|-kthread|-Kthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case "$new_inherited_linker_flags " in *" $arg "*) ;; * ) func_append new_inherited_linker_flags " $arg" ;; esac continue ;; -multi_module) single_module=$wl-multi_module continue ;; -no-fast-install) fast_install=no continue ;; -no-install) case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*) # The PATH hackery in wrapper scripts is required on Windows # and Darwin in order for the loader to find any dlls it needs. 
func_warning "'-no-install' is ignored for $host" func_warning "assuming '-no-fast-install' instead" fast_install=no ;; *) no_install=yes ;; esac continue ;; -no-undefined) allow_undefined=no continue ;; -objectlist) prev=objectlist continue ;; -os2dllname) prev=os2dllname continue ;; -o) prev=output ;; -precious-files-regex) prev=precious_regex continue ;; -release) prev=release continue ;; -rpath) prev=rpath continue ;; -R) prev=xrpath continue ;; -R*) func_stripname '-R' '' "$arg" dir=$func_stripname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; =*) func_stripname '=' '' "$dir" dir=$lt_sysroot$func_stripname_result ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac continue ;; -shared) # The effects of -shared are defined in a previous loop. continue ;; -shrext) prev=shrext continue ;; -static | -static-libtool-libs) # The effects of -static are defined in a previous loop. # We used to do the same as -all-static on platforms that # didn't have a PIC flag, but the assumption that the effects # would be equivalent was wrong. It would break on at least # Digital Unix and AIX. continue ;; -thread-safe) thread_safe=yes continue ;; -version-info) prev=vinfo continue ;; -version-number) prev=vinfo vinfo_number=yes continue ;; -weak) prev=weak continue ;; -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result arg= save_ifs=$IFS; IFS=, for flag in $args; do IFS=$save_ifs func_quote_arg pretty "$flag" func_append arg " $func_quote_arg_result" func_append compiler_flags " $func_quote_arg_result" done IFS=$save_ifs func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Wl,*) func_stripname '-Wl,' '' "$arg" args=$func_stripname_result arg= save_ifs=$IFS; IFS=, for flag in $args; do IFS=$save_ifs func_quote_arg pretty "$flag" func_append arg " $wl$func_quote_arg_result" func_append compiler_flags " $wl$func_quote_arg_result" func_append linker_flags " $func_quote_arg_result" done IFS=$save_ifs func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Xassembler) prev=xassembler continue ;; -Xcompiler) prev=xcompiler continue ;; -Xlinker) prev=xlinker continue ;; -XCClinker) prev=xcclinker continue ;; # -msg_* for osf cc -msg_*) func_quote_arg pretty "$arg" arg=$func_quote_arg_result ;; # Flags to be passed through unchanged, with rationale: # -64, -mips[0-9] enable 64-bit mode for the SGI compiler # -r[0-9][0-9]* specify processor for the SGI compiler # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler # +DA*, +DD* enable 64-bit mode for the HP compiler # -q* compiler args for the IBM compiler # -m*, -t[45]*, -txscale* architecture-specific flags for GCC # -F/path path to uninstalled frameworks, gcc on darwin # -p, -pg, --coverage, -fprofile-* profiling flags for GCC # -fstack-protector* stack protector flags for GCC # @file GCC response files # -tp=* Portland pgcc target processor selection # --sysroot=* for sysroot support # -O*, -g*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization # -specs=* GCC specs files # -stdlib=* select c++ std lib with clang # -fsanitize=* Clang/GCC memory and address sanitizer # -fuse-ld=* Linker select flags for GCC # -Wa,* Pass flags directly to the assembler -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \ -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \ -O*|-g*|-flto*|-fwhopr*|-fuse-linker-plugin|-fstack-protector*|-stdlib=*| \ 
-specs=*|-fsanitize=*|-fuse-ld=*|-Wa,*) func_quote_arg pretty "$arg" arg=$func_quote_arg_result func_append compile_command " $arg" func_append finalize_command " $arg" func_append compiler_flags " $arg" continue ;; -Z*) if test os2 = "`expr $host : '.*\(os2\)'`"; then # OS/2 uses -Zxxx to specify OS/2-specific options compiler_flags="$compiler_flags $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case $arg in -Zlinker | -Zstack) prev=xcompiler ;; esac continue else # Otherwise treat like 'Some other compiler flag' below func_quote_arg pretty "$arg" arg=$func_quote_arg_result fi ;; # Some other compiler flag. -* | +*) func_quote_arg pretty "$arg" arg=$func_quote_arg_result ;; *.$objext) # A standard object. func_append objs " $arg" ;; *.lo) # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test none = "$pic_object" && test none = "$non_pic_object"; then func_fatal_error "cannot find name of object for '$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result test none = "$pic_object" || { # Prepend the subdirectory the object is found in. pic_object=$xdir$pic_object if test dlfiles = "$prev"; then if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test dlprefiles = "$prev"; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg=$pic_object } # Non-PIC object. if test none != "$non_pic_object"; then # Prepend the subdirectory the object is found in. non_pic_object=$xdir$non_pic_object # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test none = "$pic_object"; then arg=$non_pic_object fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object=$pic_object func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "'$arg' is not a valid libtool object" fi fi ;; *.$libext) # An archive. func_append deplibs " $arg" func_append old_deplibs " $arg" continue ;; *.la) # A libtool-controlled library. func_resolve_sysroot "$arg" if test dlfiles = "$prev"; then # This library was specified with -dlopen. func_append dlfiles " $func_resolve_sysroot_result" prev= elif test dlprefiles = "$prev"; then # The library was specified with -dlpreopen. func_append dlprefiles " $func_resolve_sysroot_result" prev= else func_append deplibs " $func_resolve_sysroot_result" fi continue ;; # Some other compiler argument. *) # Unknown arguments in both finalize_command and compile_command need # to be aesthetically quoted because they are evaled later. func_quote_arg pretty "$arg" arg=$func_quote_arg_result ;; esac # arg # Now actually substitute the argument into the commands. 
if test -n "$arg"; then func_append compile_command " $arg" func_append finalize_command " $arg" fi done # argument parsing loop test -n "$prev" && \ func_fatal_help "the '$prevarg' option requires an argument" if test yes = "$export_dynamic" && test -n "$export_dynamic_flag_spec"; then eval arg=\"$export_dynamic_flag_spec\" func_append compile_command " $arg" func_append finalize_command " $arg" fi oldlibs= # calculate the name of the file, without its directory func_basename "$output" outputname=$func_basename_result libobjs_save=$libobjs if test -n "$shlibpath_var"; then # get the directories listed in $shlibpath_var eval shlib_search_path=\`\$ECHO \"\$$shlibpath_var\" \| \$SED \'s/:/ /g\'\` else shlib_search_path= fi eval sys_lib_search_path=\"$sys_lib_search_path_spec\" eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\" # Definition is injected by LT_CONFIG during libtool generation. func_munge_path_list sys_lib_dlsearch_path "$LT_SYS_LIBRARY_PATH" func_dirname "$output" "/" "" output_objdir=$func_dirname_result$objdir func_to_tool_file "$output_objdir/" tool_output_objdir=$func_to_tool_file_result # Create the object directory. func_mkdir_p "$output_objdir" # Determine the type of output case $output in "") func_fatal_help "you must specify an output file" ;; *.$libext) linkmode=oldlib ;; *.lo | *.$objext) linkmode=obj ;; *.la) linkmode=lib ;; *) linkmode=prog ;; # Anything else should be a program. esac specialdeplibs= libs= # Find all interdependent deplibs by searching for libraries # that are linked more than once (e.g. -la -lb -la) for deplib in $deplibs; do if $opt_preserve_dup_deps; then case "$libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append libs " $deplib" done if test lib = "$linkmode"; then libs="$predeps $libs $compiler_lib_search_path $postdeps" # Compute libraries that are listed more than once in $predeps # $postdeps and mark them as special (i.e., whose duplicates are # not to be eliminated). pre_post_deps= if $opt_duplicate_compiler_generated_deps; then for pre_post_dep in $predeps $postdeps; do case "$pre_post_deps " in *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;; esac func_append pre_post_deps " $pre_post_dep" done fi pre_post_deps= fi deplibs= newdependency_libs= newlib_search_path= need_relink=no # whether we're linking any uninstalled libtool libraries notinst_deplibs= # not-installed libtool libraries notinst_path= # paths that contain not-installed libtool libraries case $linkmode in lib) passes="conv dlpreopen link" for file in $dlfiles $dlprefiles; do case $file in *.la) ;; *) func_fatal_help "libraries can '-dlopen' only libtool libraries: $file" ;; esac done ;; prog) compile_deplibs= finalize_deplibs= alldeplibs=false newdlfiles= newdlprefiles= passes="conv scan dlopen dlpreopen link" ;; *) passes="conv" ;; esac for pass in $passes; do # The preopen pass in lib mode reverses $deplibs; put it back here # so that -L comes before libs that need it for instance... 
if test lib,link = "$linkmode,$pass"; then ## FIXME: Find the place where the list is rebuilt in the wrong ## order, and fix it there properly tmp_deplibs= for deplib in $deplibs; do tmp_deplibs="$deplib $tmp_deplibs" done deplibs=$tmp_deplibs fi if test lib,link = "$linkmode,$pass" || test prog,scan = "$linkmode,$pass"; then libs=$deplibs deplibs= fi if test prog = "$linkmode"; then case $pass in dlopen) libs=$dlfiles ;; dlpreopen) libs=$dlprefiles ;; link) libs="$deplibs %DEPLIBS% $dependency_libs" ;; esac fi if test lib,dlpreopen = "$linkmode,$pass"; then # Collect and forward deplibs of preopened libtool libs for lib in $dlprefiles; do # Ignore non-libtool-libs dependency_libs= func_resolve_sysroot "$lib" case $lib in *.la) func_source "$func_resolve_sysroot_result" ;; esac # Collect preopened libtool deplibs, except any this library # has declared as weak libs for deplib in $dependency_libs; do func_basename "$deplib" deplib_base=$func_basename_result case " $weak_libs " in *" $deplib_base "*) ;; *) func_append deplibs " $deplib" ;; esac done done libs=$dlprefiles fi if test dlopen = "$pass"; then # Collect dlpreopened libraries save_deplibs=$deplibs deplibs= fi for deplib in $libs; do lib= found=false case $deplib in -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append compiler_flags " $deplib" if test lib = "$linkmode"; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -l*) if test lib != "$linkmode" && test prog != "$linkmode"; then func_warning "'-l' is ignored for archives/objects" continue fi func_stripname '-l' '' "$deplib" name=$func_stripname_result if test lib = "$linkmode"; then searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path" else searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path" fi for searchdir in $searchdirs; do for search_ext in .la $std_shrext .so .a; do # Search the libtool library lib=$searchdir/lib$name$search_ext if test -f "$lib"; then if test .la = "$search_ext"; then found=: else found=false fi break 2 fi done done if $found; then # deplib is a libtool library # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib, # We need to do some special things here, and not later. if test yes = "$allow_libtool_libs_with_static_runtimes"; then case " $predeps $postdeps " in *" $deplib "*) if func_lalib_p "$lib"; then library_names= old_library= func_source "$lib" for l in $old_library $library_names; do ll=$l done if test "X$ll" = "X$old_library"; then # only static version available found=false func_dirname "$lib" "" "." 
ladir=$func_dirname_result lib=$ladir/$old_library if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs" fi continue fi fi ;; *) ;; esac fi else # deplib doesn't seem to be a libtool library if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs" fi continue fi ;; # -l *.ltframework) if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" if test lib = "$linkmode"; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -L*) case $linkmode in lib) deplibs="$deplib $deplibs" test conv = "$pass" && continue newdependency_libs="$deplib $newdependency_libs" func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; prog) if test conv = "$pass"; then deplibs="$deplib $deplibs" continue fi if test scan = "$pass"; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; *) func_warning "'-L' is ignored for archives/objects" ;; esac # linkmode continue ;; # -L -R*) if test link = "$pass"; then func_stripname '-R' '' "$deplib" func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # Make sure the xrpath contains only unique directories. case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac fi deplibs="$deplib $deplibs" continue ;; *.la) func_resolve_sysroot "$deplib" lib=$func_resolve_sysroot_result ;; *.$libext) if test conv = "$pass"; then deplibs="$deplib $deplibs" continue fi case $linkmode in lib) # Linking convenience modules into shared libraries is allowed, # but linking other static libraries is non-portable. case " $dlpreconveniencelibs " in *" $deplib "*) ;; *) valid_a_lib=false case $deplibs_check_method in match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \ | $EGREP "$match_pattern_regex" > /dev/null; then valid_a_lib=: fi ;; pass_all) valid_a_lib=: ;; esac if $valid_a_lib; then echo $ECHO "*** Warning: Linking the shared library $output against the" $ECHO "*** static library $deplib is not portable!" deplibs="$deplib $deplibs" else echo $ECHO "*** Warning: Trying to link with static lib archive $deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because the file extensions .$libext of this argument makes me believe" echo "*** that it is just a static archive that I should not use here." 
fi ;; esac continue ;; prog) if test link != "$pass"; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi continue ;; esac # linkmode ;; # *.$libext *.lo | *.$objext) if test conv = "$pass"; then deplibs="$deplib $deplibs" elif test prog = "$linkmode"; then if test dlpreopen = "$pass" || test yes != "$dlopen_support" || test no = "$build_libtool_libs"; then # If there is no dlopen support or we're linking statically, # we need to preload. func_append newdlprefiles " $deplib" compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append newdlfiles " $deplib" fi fi continue ;; %DEPLIBS%) alldeplibs=: continue ;; esac # case $deplib $found || test -f "$lib" \ || func_fatal_error "cannot find the library '$lib' or unhandled argument '$deplib'" # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$lib" \ || func_fatal_error "'$lib' is not a valid libtool archive" func_dirname "$lib" "" "." ladir=$func_dirname_result dlname= dlopen= dlpreopen= libdir= library_names= old_library= inherited_linker_flags= # If the library was installed with an old release of libtool, # it will not redefine variables installed, or shouldnotlink installed=yes shouldnotlink=no avoidtemprpath= # Read the .la file func_source "$lib" # Convert "-framework foo" to "foo.ltframework" if test -n "$inherited_linker_flags"; then tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'` for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do case " $new_inherited_linker_flags " in *" $tmp_inherited_linker_flag "*) ;; *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";; esac done fi dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` if test lib,link = "$linkmode,$pass" || test prog,scan = "$linkmode,$pass" || { test prog != "$linkmode" && test lib != "$linkmode"; }; then test -n "$dlopen" && func_append dlfiles " $dlopen" test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen" fi if test conv = "$pass"; then # Only check for convenience libraries deplibs="$lib $deplibs" if test -z "$libdir"; then if test -z "$old_library"; then func_fatal_error "cannot find name of link library for '$lib'" fi # It is a libtool convenience library, so add in its objects. func_append convenience " $ladir/$objdir/$old_library" func_append old_convenience " $ladir/$objdir/$old_library" elif test prog != "$linkmode" && test lib != "$linkmode"; then func_fatal_error "'$lib' is not a convenience library" fi tmp_libs= for deplib in $dependency_libs; do deplibs="$deplib $deplibs" if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done continue fi # $pass = conv # Get the name of the library we link against. linklib= if test -n "$old_library" && { test yes = "$prefer_static_libs" || test built,no = "$prefer_static_libs,$installed"; }; then linklib=$old_library else for l in $old_library $library_names; do linklib=$l done fi if test -z "$linklib"; then func_fatal_error "cannot find name of link library for '$lib'" fi # This library was specified with -dlopen. 
if test dlopen = "$pass"; then test -z "$libdir" \ && func_fatal_error "cannot -dlopen a convenience library: '$lib'" if test -z "$dlname" || test yes != "$dlopen_support" || test no = "$build_libtool_libs" then # If there is no dlname, no dlopen support or we're linking # statically, we need to preload. We also need to preload any # dependent libraries so libltdl's deplib preloader doesn't # bomb out in the load deplibs phase. func_append dlprefiles " $lib $dependency_libs" else func_append newdlfiles " $lib" fi continue fi # $pass = dlopen # We need an absolute path. case $ladir in [\\/]* | [A-Za-z]:[\\/]*) abs_ladir=$ladir ;; *) abs_ladir=`cd "$ladir" && pwd` if test -z "$abs_ladir"; then func_warning "cannot determine absolute directory name of '$ladir'" func_warning "passing it literally to the linker, although it might fail" abs_ladir=$ladir fi ;; esac func_basename "$lib" laname=$func_basename_result # Find the relevant object directory and library name. if test yes = "$installed"; then if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then func_warning "library '$lib' was moved." dir=$ladir absdir=$abs_ladir libdir=$abs_ladir else dir=$lt_sysroot$libdir absdir=$lt_sysroot$libdir fi test yes = "$hardcode_automatic" && avoidtemprpath=yes else if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then dir=$ladir absdir=$abs_ladir # Remove this search path later func_append notinst_path " $abs_ladir" else dir=$ladir/$objdir absdir=$abs_ladir/$objdir # Remove this search path later func_append notinst_path " $abs_ladir" fi fi # $installed = yes func_stripname 'lib' '.la' "$laname" name=$func_stripname_result # This library was specified with -dlpreopen. if test dlpreopen = "$pass"; then if test -z "$libdir" && test prog = "$linkmode"; then func_fatal_error "only libraries may -dlpreopen a convenience library: '$lib'" fi case $host in # special handling for platforms with PE-DLLs. *cygwin* | *mingw* | *cegcc* ) # Linker will automatically link against shared library if both # static and shared are present. Therefore, ensure we extract # symbols from the import library if a shared library is present # (otherwise, the dlopen module name will be incorrect). We do # this by putting the import library name into $newdlprefiles. # We recover the dlopen module name by 'saving' the la file # name in a special purpose variable, and (later) extracting the # dlname from the la file. if test -n "$dlname"; then func_tr_sh "$dir/$linklib" eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname" func_append newdlprefiles " $dir/$linklib" else func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" fi ;; * ) # Prefer using a static library (so that no silly _DYNAMIC symbols # are required to link). if test -n "$old_library"; then func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" # Otherwise, use the dlname, so that lt_dlopen finds it. 
elif test -n "$dlname"; then func_append newdlprefiles " $dir/$dlname" else func_append newdlprefiles " $dir/$linklib" fi ;; esac fi # $pass = dlpreopen if test -z "$libdir"; then # Link the convenience library if test lib = "$linkmode"; then deplibs="$dir/$old_library $deplibs" elif test prog,link = "$linkmode,$pass"; then compile_deplibs="$dir/$old_library $compile_deplibs" finalize_deplibs="$dir/$old_library $finalize_deplibs" else deplibs="$lib $deplibs" # used for prog,scan pass fi continue fi if test prog = "$linkmode" && test link != "$pass"; then func_append newlib_search_path " $ladir" deplibs="$lib $deplibs" linkalldeplibs=false if test no != "$link_all_deplibs" || test -z "$library_names" || test no = "$build_libtool_libs"; then linkalldeplibs=: fi tmp_libs= for deplib in $dependency_libs; do case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; esac # Need to link against all dependency_libs? if $linkalldeplibs; then deplibs="$deplib $deplibs" else # Need to hardcode shared library paths # or/and link against static libraries newdependency_libs="$deplib $newdependency_libs" fi if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done # for deplib continue fi # $linkmode = prog... if test prog,link = "$linkmode,$pass"; then if test -n "$library_names" && { { test no = "$prefer_static_libs" || test built,yes = "$prefer_static_libs,$installed"; } || test -z "$old_library"; }; then # We need to hardcode the library path if test -n "$shlibpath_var" && test -z "$avoidtemprpath"; then # Make sure the rpath contains only unique directories. case $temp_rpath: in *"$absdir:"*) ;; *) func_append temp_rpath "$absdir:" ;; esac fi # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi # $linkmode,$pass = prog,link... if $alldeplibs && { test pass_all = "$deplibs_check_method" || { test yes = "$build_libtool_libs" && test -n "$library_names"; }; }; then # We only need to search for static libraries continue fi fi link_static=no # Whether the deplib will be linked statically use_static_libs=$prefer_static_libs if test built = "$use_static_libs" && test yes = "$installed"; then use_static_libs=no fi if test -n "$library_names" && { test no = "$use_static_libs" || test -z "$old_library"; }; then case $host in *cygwin* | *mingw* | *cegcc* | *os2*) # No point in relinking DLLs because paths are not encoded func_append notinst_deplibs " $lib" need_relink=no ;; *) if test no = "$installed"; then func_append notinst_deplibs " $lib" need_relink=yes fi ;; esac # This is a shared library # Warn about portability, can't link against -module's on some # systems (darwin). Don't bleat about dlopened modules though! 
dlopenmodule= for dlpremoduletest in $dlprefiles; do if test "X$dlpremoduletest" = "X$lib"; then dlopenmodule=$dlpremoduletest break fi done if test -z "$dlopenmodule" && test yes = "$shouldnotlink" && test link = "$pass"; then echo if test prog = "$linkmode"; then $ECHO "*** Warning: Linking the executable $output against the loadable module" else $ECHO "*** Warning: Linking the shared library $output against the loadable module" fi $ECHO "*** $linklib is not portable!" fi if test lib = "$linkmode" && test yes = "$hardcode_into_libs"; then # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi if test -n "$old_archive_from_expsyms_cmds"; then # figure out the soname set dummy $library_names shift realname=$1 shift libname=`eval "\\$ECHO \"$libname_spec\""` # use dlname if we got it. it's perfectly good, no? if test -n "$dlname"; then soname=$dlname elif test -n "$soname_spec"; then # bleh windows case $host in *cygwin* | mingw* | *cegcc* | *os2*) func_arith $current - $age major=$func_arith_result versuffix=-$major ;; esac eval soname=\"$soname_spec\" else soname=$realname fi # Make a new name for the extract_expsyms_cmds to use soroot=$soname func_basename "$soroot" soname=$func_basename_result func_stripname 'lib' '.dll' "$soname" newlib=libimp-$func_stripname_result.a # If the library has no export list, then create one now if test -f "$output_objdir/$soname-def"; then : else func_verbose "extracting exported symbol list from '$soname'" func_execute_cmds "$extract_expsyms_cmds" 'exit $?' fi # Create $newlib if test -f "$output_objdir/$newlib"; then :; else func_verbose "generating import library for '$soname'" func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?' 
fi # make sure the library variables are pointing to the new library dir=$output_objdir linklib=$newlib fi # test -n "$old_archive_from_expsyms_cmds" if test prog = "$linkmode" || test relink != "$opt_mode"; then add_shlibpath= add_dir= add= lib_linked=yes case $hardcode_action in immediate | unsupported) if test no = "$hardcode_direct"; then add=$dir/$linklib case $host in *-*-sco3.2v5.0.[024]*) add_dir=-L$dir ;; *-*-sysv4*uw2*) add_dir=-L$dir ;; *-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \ *-*-unixware7*) add_dir=-L$dir ;; *-*-darwin* ) # if the lib is a (non-dlopened) module then we cannot # link against it, someone is ignoring the earlier warnings if /usr/bin/file -L $add 2> /dev/null | $GREP ": [^:]* bundle" >/dev/null; then if test "X$dlopenmodule" != "X$lib"; then $ECHO "*** Warning: lib $linklib is a module, not a shared library" if test -z "$old_library"; then echo echo "*** And there doesn't seem to be a static archive available" echo "*** The link will probably fail, sorry" else add=$dir/$old_library fi elif test -n "$old_library"; then add=$dir/$old_library fi fi esac elif test no = "$hardcode_minus_L"; then case $host in *-*-sunos*) add_shlibpath=$dir ;; esac add_dir=-L$dir add=-l$name elif test no = "$hardcode_shlibpath_var"; then add_shlibpath=$dir add=-l$name else lib_linked=no fi ;; relink) if test yes = "$hardcode_direct" && test no = "$hardcode_direct_absolute"; then add=$dir/$linklib elif test yes = "$hardcode_minus_L"; then add_dir=-L$absdir # Try looking first in the location we're being installed to. if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add=-l$name elif test yes = "$hardcode_shlibpath_var"; then add_shlibpath=$dir add=-l$name else lib_linked=no fi ;; *) lib_linked=no ;; esac if test yes != "$lib_linked"; then func_fatal_configuration "unsupported hardcode properties" fi if test -n "$add_shlibpath"; then case :$compile_shlibpath: in *":$add_shlibpath:"*) ;; *) func_append compile_shlibpath "$add_shlibpath:" ;; esac fi if test prog = "$linkmode"; then test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs" test -n "$add" && compile_deplibs="$add $compile_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" if test yes != "$hardcode_direct" && test yes != "$hardcode_minus_L" && test yes = "$hardcode_shlibpath_var"; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac fi fi fi if test prog = "$linkmode" || test relink = "$opt_mode"; then add_shlibpath= add_dir= add= # Finalize command for both is simple: just hardcode it. if test yes = "$hardcode_direct" && test no = "$hardcode_direct_absolute"; then add=$libdir/$linklib elif test yes = "$hardcode_minus_L"; then add_dir=-L$libdir add=-l$name elif test yes = "$hardcode_shlibpath_var"; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac add=-l$name elif test yes = "$hardcode_automatic"; then if test -n "$inst_prefix_dir" && test -f "$inst_prefix_dir$libdir/$linklib"; then add=$inst_prefix_dir$libdir/$linklib else add=$libdir/$linklib fi else # We cannot seem to hardcode it, guess we'll fake it. add_dir=-L$libdir # Try looking first in the location we're being installed to. 
if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add=-l$name fi if test prog = "$linkmode"; then test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs" test -n "$add" && finalize_deplibs="$add $finalize_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" fi fi elif test prog = "$linkmode"; then # Here we assume that one of hardcode_direct or hardcode_minus_L # is not unsupported. This is valid on all known static and # shared platforms. if test unsupported != "$hardcode_direct"; then test -n "$old_library" && linklib=$old_library compile_deplibs="$dir/$linklib $compile_deplibs" finalize_deplibs="$dir/$linklib $finalize_deplibs" else compile_deplibs="-l$name -L$dir $compile_deplibs" finalize_deplibs="-l$name -L$dir $finalize_deplibs" fi elif test yes = "$build_libtool_libs"; then # Not a shared library if test pass_all != "$deplibs_check_method"; then # We're trying link a shared library against a static one # but the system doesn't support it. # Just print a warning and add the library to dependency_libs so # that the program can be linked against the static library. echo $ECHO "*** Warning: This system cannot link to static lib archive $lib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have." if test yes = "$module"; then echo "*** But as you try to build a module library, libtool will still create " echo "*** a static module, that should work as long as the dlopening application" echo "*** is linked with the -dlopen flag to resolve symbols at runtime." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using 'nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** 'nm' from GNU binutils and a full rebuild may help." fi if test no = "$build_old_libs"; then build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi else deplibs="$dir/$old_library $deplibs" link_static=yes fi fi # link shared/static library? if test lib = "$linkmode"; then if test -n "$dependency_libs" && { test yes != "$hardcode_into_libs" || test yes = "$build_old_libs" || test yes = "$link_static"; }; then # Extract -R from dependency_libs temp_deplibs= for libdir in $dependency_libs; do case $libdir in -R*) func_stripname '-R' '' "$libdir" temp_xrpath=$func_stripname_result case " $xrpath " in *" $temp_xrpath "*) ;; *) func_append xrpath " $temp_xrpath";; esac;; *) func_append temp_deplibs " $libdir";; esac done dependency_libs=$temp_deplibs fi func_append newlib_search_path " $absdir" # Link against this library test no = "$link_static" && newdependency_libs="$abs_ladir/$laname $newdependency_libs" # ... 
and its dependency_libs tmp_libs= for deplib in $dependency_libs; do newdependency_libs="$deplib $newdependency_libs" case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result";; *) func_resolve_sysroot "$deplib" ;; esac if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $func_resolve_sysroot_result "*) func_append specialdeplibs " $func_resolve_sysroot_result" ;; esac fi func_append tmp_libs " $func_resolve_sysroot_result" done if test no != "$link_all_deplibs"; then # Add the search paths of all dependency libraries for deplib in $dependency_libs; do path= case $deplib in -L*) path=$deplib ;; *.la) func_resolve_sysroot "$deplib" deplib=$func_resolve_sysroot_result func_dirname "$deplib" "" "." dir=$func_dirname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) absdir=$dir ;; *) absdir=`cd "$dir" && pwd` if test -z "$absdir"; then func_warning "cannot determine absolute directory name of '$dir'" absdir=$dir fi ;; esac if $GREP "^installed=no" $deplib > /dev/null; then case $host in *-*-darwin*) depdepl= eval deplibrary_names=`$SED -n -e 's/^library_names=\(.*\)$/\1/p' $deplib` if test -n "$deplibrary_names"; then for tmp in $deplibrary_names; do depdepl=$tmp done if test -f "$absdir/$objdir/$depdepl"; then depdepl=$absdir/$objdir/$depdepl darwin_install_name=`$OTOOL -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` if test -z "$darwin_install_name"; then darwin_install_name=`$OTOOL64 -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` fi func_append compiler_flags " $wl-dylib_file $wl$darwin_install_name:$depdepl" func_append linker_flags " -dylib_file $darwin_install_name:$depdepl" path= fi fi ;; *) path=-L$absdir/$objdir ;; esac else eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $deplib` test -z "$libdir" && \ func_fatal_error "'$deplib' is not a valid libtool archive" test "$absdir" != "$libdir" && \ func_warning "'$deplib' seems to be moved" path=-L$absdir fi ;; esac case " $deplibs " in *" $path "*) ;; *) deplibs="$path $deplibs" ;; esac done fi # link_all_deplibs != no fi # linkmode = lib done # for deplib in $libs if test link = "$pass"; then if test prog = "$linkmode"; then compile_deplibs="$new_inherited_linker_flags $compile_deplibs" finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs" else compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` fi fi dependency_libs=$newdependency_libs if test dlpreopen = "$pass"; then # Link the dlpreopened libraries before other libraries for deplib in $save_deplibs; do deplibs="$deplib $deplibs" done fi if test dlopen != "$pass"; then test conv = "$pass" || { # Make sure lib_search_path contains only unique directories. 
lib_search_path= for dir in $newlib_search_path; do case "$lib_search_path " in *" $dir "*) ;; *) func_append lib_search_path " $dir" ;; esac done newlib_search_path= } if test prog,link = "$linkmode,$pass"; then vars="compile_deplibs finalize_deplibs" else vars=deplibs fi for var in $vars dependency_libs; do # Add libraries to $var in reverse order eval tmp_libs=\"\$$var\" new_libs= for deplib in $tmp_libs; do # FIXME: Pedantically, this is the right thing to do, so # that some nasty dependency loop isn't accidentally # broken: #new_libs="$deplib $new_libs" # Pragmatically, this seems to cause very few problems in # practice: case $deplib in -L*) new_libs="$deplib $new_libs" ;; -R*) ;; *) # And here is the reason: when a library appears more # than once as an explicit dependence of a library, or # is implicitly linked in more than once by the # compiler, it is considered special, and multiple # occurrences thereof are not removed. Compare this # with having the same library being listed as a # dependency of multiple other libraries: in this case, # we know (pedantically, we assume) the library does not # need to be listed more than once, so we keep only the # last copy. This is not always right, but it is rare # enough that we require users that really mean to play # such unportable linking tricks to link the library # using -Wl,-lname, so that libtool does not consider it # for duplicate removal. case " $specialdeplibs " in *" $deplib "*) new_libs="$deplib $new_libs" ;; *) case " $new_libs " in *" $deplib "*) ;; *) new_libs="$deplib $new_libs" ;; esac ;; esac ;; esac done tmp_libs= for deplib in $new_libs; do case $deplib in -L*) case " $tmp_libs " in *" $deplib "*) ;; *) func_append tmp_libs " $deplib" ;; esac ;; *) func_append tmp_libs " $deplib" ;; esac done eval $var=\"$tmp_libs\" done # for var fi # Add Sun CC postdeps if required: test CXX = "$tagname" && { case $host_os in linux*) case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C++ 5.9 func_suncc_cstd_abi if test no != "$suncc_use_cstd_abi"; then func_append postdeps ' -library=Cstd -library=Crun' fi ;; esac ;; solaris*) func_cc_basename "$CC" case $func_cc_basename_result in CC* | sunCC*) func_suncc_cstd_abi if test no != "$suncc_use_cstd_abi"; then func_append postdeps ' -library=Cstd -library=Crun' fi ;; esac ;; esac } # Last step: remove runtime libs from dependency_libs # (they stay in deplibs) tmp_libs= for i in $dependency_libs; do case " $predeps $postdeps $compiler_lib_search_path " in *" $i "*) i= ;; esac if test -n "$i"; then func_append tmp_libs " $i" fi done dependency_libs=$tmp_libs done # for pass if test prog = "$linkmode"; then dlfiles=$newdlfiles fi if test prog = "$linkmode" || test lib = "$linkmode"; then dlprefiles=$newdlprefiles fi case $linkmode in oldlib) if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then func_warning "'-dlopen' is ignored for archives" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "'-l' and '-L' are ignored for archives" ;; esac test -n "$rpath" && \ func_warning "'-rpath' is ignored for archives" test -n "$xrpath" && \ func_warning "'-R' is ignored for archives" test -n "$vinfo" && \ func_warning "'-version-info/-version-number' is ignored for archives" test -n "$release" && \ func_warning "'-release' is ignored for archives" test -n "$export_symbols$export_symbols_regex" && \ func_warning "'-export-symbols' is ignored for archives" # Now set the variables for building old libraries. 
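# --- Illustrative sketch (not part of ltmain.sh): "keep only the last copy" --
# The long comment in the deplib-reordering loop above explains that ordinary
# duplicate dependencies are collapsed so that only the final occurrence
# survives.  A standalone approximation of that rule, with hypothetical flags:
lt_demo_keep_last() {
  lt_demo_out=
  for lt_demo_w in "$@"; do
    lt_demo_new=
    for lt_demo_o in $lt_demo_out; do
      test "$lt_demo_o" = "$lt_demo_w" || lt_demo_new="$lt_demo_new $lt_demo_o"
    done
    lt_demo_out="$lt_demo_new $lt_demo_w"   # drop any earlier copy, append anew
  done
  printf '%s\n' "$lt_demo_out"
}
# lt_demo_keep_last -lfoo -lbar -lfoo   prints " -lbar -lfoo"
# -----------------------------------------------------------------------------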
build_libtool_libs=no oldlibs=$output func_append objs "$old_deplibs" ;; lib) # Make sure we only generate libraries of the form 'libNAME.la'. case $outputname in lib*) func_stripname 'lib' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" ;; *) test no = "$module" \ && func_fatal_help "libtool library '$output' must begin with 'lib'" if test no != "$need_lib_prefix"; then # Add the "lib" prefix for modules if required func_stripname '' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" else func_stripname '' '.la' "$outputname" libname=$func_stripname_result fi ;; esac if test -n "$objs"; then if test pass_all != "$deplibs_check_method"; then func_fatal_error "cannot build libtool library '$output' from non-libtool objects on this host:$objs" else echo $ECHO "*** Warning: Linking the shared library $output against the non-libtool" $ECHO "*** objects $objs is not portable!" func_append libobjs " $objs" fi fi test no = "$dlself" \ || func_warning "'-dlopen self' is ignored for libtool libraries" set dummy $rpath shift test 1 -lt "$#" \ && func_warning "ignoring multiple '-rpath's for a libtool library" install_libdir=$1 oldlibs= if test -z "$rpath"; then if test yes = "$build_libtool_libs"; then # Building a libtool convenience library. # Some compilers have problems with a '.al' extension so # convenience libraries should have the same extension an # archive normally would. oldlibs="$output_objdir/$libname.$libext $oldlibs" build_libtool_libs=convenience build_old_libs=yes fi test -n "$vinfo" && \ func_warning "'-version-info/-version-number' is ignored for convenience libraries" test -n "$release" && \ func_warning "'-release' is ignored for convenience libraries" else # Parse the version information argument. save_ifs=$IFS; IFS=: set dummy $vinfo 0 0 0 shift IFS=$save_ifs test -n "$7" && \ func_fatal_help "too many parameters to '-version-info'" # convert absolute version numbers to libtool ages # this retains compatibility with .la files and attempts # to make the code below a bit more comprehensible case $vinfo_number in yes) number_major=$1 number_minor=$2 number_revision=$3 # # There are really only two kinds -- those that # use the current revision as the major version # and those that subtract age and use age as # a minor version. But, then there is irix # that has an extra 1 added just for fun # case $version_type in # correct linux to gnu/linux during the next big refactor darwin|freebsd-elf|linux|midnightbsd-elf|osf|windows|none) func_arith $number_major + $number_minor current=$func_arith_result age=$number_minor revision=$number_revision ;; freebsd-aout|qnx|sunos) current=$number_major revision=$number_minor age=0 ;; irix|nonstopux) func_arith $number_major + $number_minor current=$func_arith_result age=$number_minor revision=$number_minor lt_irix_increment=no ;; esac ;; no) current=$1 revision=$2 age=$3 ;; esac # Check that each of the things are valid numbers. 
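# --- Worked example (not part of ltmain.sh): libtool interface versions ------
# The triple parsed above is CURRENT:REVISION:AGE (-version-info) or
# MAJOR:MINOR:REVISION (-version-number, converted to the former here).
# For version_type 'linux' (GNU/Linux ELF), the case statement further below
# computes major=.(CURRENT-AGE) and versuffix=.(CURRENT-AGE).AGE.REVISION, so
# for a hypothetical libfoo:
#   -version-info 3:2:1    ->  libfoo.so.2.1.2   (soname libfoo.so.2)
#   -version-number 2:1:3  ->  CURRENT=3 AGE=1 REVISION=3 -> libfoo.so.2.1.3
# The AGE <= CURRENT check just below is what keeps the soname's major number
# (CURRENT-AGE) nonnegative.
# -----------------------------------------------------------------------------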
case $current in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "CURRENT '$current' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac case $revision in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "REVISION '$revision' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac case $age in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "AGE '$age' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac if test "$age" -gt "$current"; then func_error "AGE '$age' is greater than the current interface number '$current'" func_fatal_error "'$vinfo' is not valid version information" fi # Calculate the version variables. major= versuffix= verstring= case $version_type in none) ;; darwin) # Like Linux, but with the current version available in # verstring for coding it into the library header func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision # Darwin ld doesn't like 0 for these options... func_arith $current + 1 minor_current=$func_arith_result xlcverstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision" verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" # On Darwin other compilers case $CC in nagfor*) verstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision" ;; *) verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" ;; esac ;; freebsd-aout) major=.$current versuffix=.$current.$revision ;; freebsd-elf | midnightbsd-elf) func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision ;; irix | nonstopux) if test no = "$lt_irix_increment"; then func_arith $current - $age else func_arith $current - $age + 1 fi major=$func_arith_result case $version_type in nonstopux) verstring_prefix=nonstopux ;; *) verstring_prefix=sgi ;; esac verstring=$verstring_prefix$major.$revision # Add in all the interfaces that we are compatible with. loop=$revision while test 0 -ne "$loop"; do func_arith $revision - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring=$verstring_prefix$major.$iface:$verstring done # Before this point, $major must not contain '.'. major=.$major versuffix=$major.$revision ;; linux) # correct to gnu/linux during the next big refactor func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision ;; osf) func_arith $current - $age major=.$func_arith_result versuffix=.$current.$age.$revision verstring=$current.$age.$revision # Add in all the interfaces that we are compatible with. loop=$age while test 0 -ne "$loop"; do func_arith $current - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring=$verstring:$iface.0 done # Make executables depend on our current version. func_append verstring ":$current.0" ;; qnx) major=.$current versuffix=.$current ;; sco) major=.$current versuffix=.$current ;; sunos) major=.$current versuffix=.$current.$revision ;; windows) # Use '-' rather than '.', since we only want one # extension on DOS 8.3 file systems. 
func_arith $current - $age major=$func_arith_result versuffix=-$major ;; *) func_fatal_configuration "unknown library version type '$version_type'" ;; esac # Clear the version info if we defaulted, and they specified a release. if test -z "$vinfo" && test -n "$release"; then major= case $version_type in darwin) # we can't check for "0.0" in archive_cmds due to quoting # problems, so we reset it completely verstring= ;; *) verstring=0.0 ;; esac if test no = "$need_version"; then versuffix= else versuffix=.0.0 fi fi # Remove version info from name if versioning should be avoided if test yes,no = "$avoid_version,$need_version"; then major= versuffix= verstring= fi # Check to see if the archive will have undefined symbols. if test yes = "$allow_undefined"; then if test unsupported = "$allow_undefined_flag"; then if test yes = "$build_old_libs"; then func_warning "undefined symbols not allowed in $host shared libraries; building static only" build_libtool_libs=no else func_fatal_error "can't build $host shared library unless -no-undefined is specified" fi fi else # Don't allow undefined symbols. allow_undefined_flag=$no_undefined_flag fi fi func_generate_dlsyms "$libname" "$libname" : func_append libobjs " $symfileobj" test " " = "$libobjs" && libobjs= if test relink != "$opt_mode"; then # Remove our outputs, but don't remove object files since they # may have been created when compiling PIC objects. removelist= tempremovelist=`$ECHO "$output_objdir/*"` for p in $tempremovelist; do case $p in *.$objext | *.gcno) ;; $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/$libname$release.*) if test -n "$precious_files_regex"; then if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1 then continue fi fi func_append removelist " $p" ;; *) ;; esac done test -n "$removelist" && \ func_show_eval "${RM}r \$removelist" fi # Now set the variables for building old libraries. if test yes = "$build_old_libs" && test convenience != "$build_libtool_libs"; then func_append oldlibs " $output_objdir/$libname.$libext" # Transform .lo files to .o files. oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; $lo2o" | $NL2SP` fi # Eliminate all temporary directories. #for path in $notinst_path; do # lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"` # deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"` # dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"` #done if test -n "$xrpath"; then # If the user specified any rpath flags, then add them. temp_xrpath= for libdir in $xrpath; do func_replace_sysroot "$libdir" func_append temp_xrpath " -R$func_replace_sysroot_result" case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done if test yes != "$hardcode_into_libs" || test yes = "$build_old_libs"; then dependency_libs="$temp_xrpath $dependency_libs" fi fi # Make sure dlfiles contains only unique files that won't be dlpreopened old_dlfiles=$dlfiles dlfiles= for lib in $old_dlfiles; do case " $dlprefiles $dlfiles " in *" $lib "*) ;; *) func_append dlfiles " $lib" ;; esac done # Make sure dlprefiles contains only unique files old_dlprefiles=$dlprefiles dlprefiles= for lib in $old_dlprefiles; do case "$dlprefiles " in *" $lib "*) ;; *) func_append dlprefiles " $lib" ;; esac done if test yes = "$build_libtool_libs"; then if test -n "$rpath"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*) # these systems don't actually have a c library (as such)! 
;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C library is in the System framework func_append deplibs " System.ltframework" ;; *-*-netbsd*) # Don't link with libc until the a.out ld.so is fixed. ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-midnightbsd*) # Do not include libc due to us having libc/libc_r. ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work ;; *) # Add libc to deplibs on all other systems if necessary. if test yes = "$build_libtool_need_lc"; then func_append deplibs " -lc" fi ;; esac fi # Transform deplibs into only deplibs that can be linked in shared. name_save=$name libname_save=$libname release_save=$release versuffix_save=$versuffix major_save=$major # I'm not sure if I'm treating the release correctly. I think # release should show up in the -l (ie -lgmp5) so we don't want to # add it in twice. Is that correct? release= versuffix= major= newdeplibs= droppeddeps=no case $deplibs_check_method in pass_all) # Don't check for shared/static. Everything works. # This might be a little naive. We might want to check # whether the library exists or not. But this is on # osf3 & osf4 and I'm not really sure... Just # implementing what was already the behavior. newdeplibs=$deplibs ;; test_compile) # This code stresses the "libraries are programs" paradigm to its # limits. Maybe even breaks it. We compile a program, linking it # against the deplibs as a proxy for the library. Then we can check # whether they linked in statically or dynamically with ldd. $opt_dry_run || $RM conftest.c cat > conftest.c </dev/null` $nocaseglob else potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null` fi for potent_lib in $potential_libs; do # Follow soft links. if ls -lLd "$potent_lib" 2>/dev/null | $GREP " -> " >/dev/null; then continue fi # The statement above tries to avoid entering an # endless loop below, in case of cyclic links. # We might still enter an endless loop, since a link # loop can be closed while we follow links, # but so what? potlib=$potent_lib while test -h "$potlib" 2>/dev/null; do potliblink=`ls -ld $potlib | $SED 's/.* -> //'` case $potliblink in [\\/]* | [A-Za-z]:[\\/]*) potlib=$potliblink;; *) potlib=`$ECHO "$potlib" | $SED 's|[^/]*$||'`"$potliblink";; esac done if eval $file_magic_cmd \"\$potlib\" 2>/dev/null | $SED -e 10q | $EGREP "$file_magic_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib= break 2 fi done done fi if test -n "$a_deplib"; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib"; then $ECHO "*** with $libname but no candidates were found. (...for file magic test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a file magic. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. 
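# --- Illustrative sketch (not part of ltmain.sh): following soft links -------
# The file_magic branch above chases a chain of symlinks by parsing "ls -ld"
# output before applying $file_magic_cmd.  A simplified standalone version of
# that loop (no cycle protection, as the comment above notes), assuming
# POSIX sh and hypothetical paths:
lt_demo_resolve_link() {
  lt_demo_p=$1
  while test -h "$lt_demo_p" 2>/dev/null; do
    lt_demo_t=`ls -ld "$lt_demo_p" | sed 's/.* -> //'`
    case $lt_demo_t in
      [\\/]* | [A-Za-z]:[\\/]*) lt_demo_p=$lt_demo_t ;;                   # absolute target
      *) lt_demo_p=`echo "$lt_demo_p" | sed 's|[^/]*$||'`"$lt_demo_t" ;;  # relative to link dir
    esac
  done
  echo "$lt_demo_p"
}
# lt_demo_resolve_link /usr/lib/libfoo.so   might print /usr/lib/libfoo.so.2.1.2
# -----------------------------------------------------------------------------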
;; match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` for a_deplib in $deplibs; do case $a_deplib in -l*) func_stripname -l '' "$a_deplib" name=$func_stripname_result if test yes = "$allow_libtool_libs_with_static_runtimes"; then case " $predeps $postdeps " in *" $a_deplib "*) func_append newdeplibs " $a_deplib" a_deplib= ;; esac fi if test -n "$a_deplib"; then libname=`eval "\\$ECHO \"$libname_spec\""` for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do potential_libs=`ls $i/$libname[.-]* 2>/dev/null` for potent_lib in $potential_libs; do potlib=$potent_lib # see symlink-check above in file_magic test if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \ $EGREP "$match_pattern_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib= break 2 fi done done fi if test -n "$a_deplib"; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib"; then $ECHO "*** with $libname but no candidates were found. (...for regex pattern test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a regex pattern. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. ;; none | unknown | *) newdeplibs= tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'` if test yes = "$allow_libtool_libs_with_static_runtimes"; then for i in $predeps $postdeps; do # can't use Xsed below, because $i might contain '/' tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s|$i||"` done fi case $tmp_deplibs in *[!\ \ ]*) echo if test none = "$deplibs_check_method"; then echo "*** Warning: inter-library dependencies are not supported in this platform." else echo "*** Warning: inter-library dependencies are not known to be supported." fi echo "*** All declared inter-library dependencies are being dropped." droppeddeps=yes ;; esac ;; esac versuffix=$versuffix_save major=$major_save release=$release_save libname=$libname_save name=$name_save case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library with the System framework newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac if test yes = "$droppeddeps"; then if test yes = "$module"; then echo echo "*** Warning: libtool could not satisfy all declared inter-library" $ECHO "*** dependencies of module $libname. Therefore, libtool will create" echo "*** a static module, that should work as long as the dlopening" echo "*** application is linked with the -dlopen flag." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using 'nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** 'nm' from GNU binutils and a full rebuild may help." 
fi if test no = "$build_old_libs"; then oldlibs=$output_objdir/$libname.$libext build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi else echo "*** The inter-library dependencies that have been dropped here will be" echo "*** automatically added whenever a program is linked with this library" echo "*** or is declared to -dlopen it." if test no = "$allow_undefined"; then echo echo "*** Since this library must not contain undefined symbols," echo "*** because either the platform does not support them or" echo "*** it was explicitly requested with -no-undefined," echo "*** libtool will only create a static version of it." if test no = "$build_old_libs"; then oldlibs=$output_objdir/$libname.$libext build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi fi fi # Done checking deplibs! deplibs=$newdeplibs fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" case $host in *-*-darwin*) newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done deplibs=$new_libs # All the library-specific variables (install_libdir is set above). library_names= old_library= dlname= # Test again, we may have decided not to build it any more if test yes = "$build_libtool_libs"; then # Remove $wl instances when linking with ld. # FIXME: should test the right _cmds variable. case $archive_cmds in *\$LD\ *) wl= ;; esac if test yes = "$hardcode_into_libs"; then # Hardcode the library paths hardcode_libdirs= dep_rpath= rpath=$finalize_rpath test relink = "$opt_mode" || rpath=$compile_rpath$rpath for libdir in $rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then func_replace_sysroot "$libdir" libdir=$func_replace_sysroot_result if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append dep_rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval "dep_rpath=\"$hardcode_libdir_flag_spec\"" fi if test -n "$runpath_var" && test -n "$perm_rpath"; then # We should set the runpath_var. 
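# --- Illustrative sketch (not part of ltmain.sh): runpath_var assembly -------
# On hosts where run-time search paths travel in an environment variable
# (runpath_var, e.g. LD_RUN_PATH), the loop below joins the collected
# perm_rpath directories into a colon-terminated prefix.  The same idiom with
# hypothetical directories:
lt_demo_rpath=
for lt_demo_dir in /opt/foo/lib /usr/local/lib; do
  lt_demo_rpath="$lt_demo_rpath$lt_demo_dir:"
done
# lt_demo_rpath is now "/opt/foo/lib:/usr/local/lib:", ready to be placed in
# front of whatever the run-path variable already holds for the link step.
# -----------------------------------------------------------------------------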
rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var" fi test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs" fi shlibpath=$finalize_shlibpath test relink = "$opt_mode" || shlibpath=$compile_shlibpath$shlibpath if test -n "$shlibpath"; then eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var" fi # Get the real and link names of the library. eval shared_ext=\"$shrext_cmds\" eval library_names=\"$library_names_spec\" set dummy $library_names shift realname=$1 shift if test -n "$soname_spec"; then eval soname=\"$soname_spec\" else soname=$realname fi if test -z "$dlname"; then dlname=$soname fi lib=$output_objdir/$realname linknames= for link do func_append linknames " $link" done # Use standard objects if they are pic test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP` test "X$libobjs" = "X " && libobjs= delfiles= if test -n "$export_symbols" && test -n "$include_expsyms"; then $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp" export_symbols=$output_objdir/$libname.uexp func_append delfiles " $export_symbols" fi orig_export_symbols= case $host_os in cygwin* | mingw* | cegcc*) if test -n "$export_symbols" && test -z "$export_symbols_regex"; then # exporting using user supplied symfile func_dll_def_p "$export_symbols" || { # and it's NOT already a .def file. Must figure out # which of the given symbols are data symbols and tag # them as such. So, trigger use of export_symbols_cmds. # export_symbols gets reassigned inside the "prepare # the list of exported symbols" if statement, so the # include_expsyms logic still works. orig_export_symbols=$export_symbols export_symbols= always_export_symbols=yes } fi ;; esac # Prepare the list of exported symbols if test -z "$export_symbols"; then if test yes = "$always_export_symbols" || test -n "$export_symbols_regex"; then func_verbose "generating symbol list for '$libname.la'" export_symbols=$output_objdir/$libname.exp $opt_dry_run || $RM $export_symbols cmds=$export_symbols_cmds save_ifs=$IFS; IFS='~' for cmd1 in $cmds; do IFS=$save_ifs # Take the normal branch if the nm_file_list_spec branch # doesn't work or if tool conversion is not needed. case $nm_file_list_spec~$to_tool_file_cmd in *~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*) try_normal_branch=yes eval cmd=\"$cmd1\" func_len " $cmd" len=$func_len_result ;; *) try_normal_branch=no ;; esac if test yes = "$try_normal_branch" \ && { test "$len" -lt "$max_cmd_len" \ || test "$max_cmd_len" -le -1; } then func_show_eval "$cmd" 'exit $?' skipped_export=false elif test -n "$nm_file_list_spec"; then func_basename "$output" output_la=$func_basename_result save_libobjs=$libobjs save_output=$output output=$output_objdir/$output_la.nm func_to_tool_file "$output" libobjs=$nm_file_list_spec$func_to_tool_file_result func_append delfiles " $output" func_verbose "creating $NM input file list: $output" for obj in $save_libobjs; do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > "$output" eval cmd=\"$cmd1\" func_show_eval "$cmd" 'exit $?' output=$save_output libobjs=$save_libobjs skipped_export=false else # The command line is too long to execute in one step. func_verbose "using reloadable object file for export list..." skipped_export=: # Break out early, otherwise skipped_export may be # set to false by a later but shorter cmd. 
break fi done IFS=$save_ifs if test -n "$export_symbols_regex" && test : != "$skipped_export"; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi fi if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols=$export_symbols test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test : != "$skipped_export" && test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for '$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands, which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi tmp_deplibs= for test_deplib in $deplibs; do case " $convenience " in *" $test_deplib "*) ;; *) func_append tmp_deplibs " $test_deplib" ;; esac done deplibs=$tmp_deplibs if test -n "$convenience"; then if test -n "$whole_archive_flag_spec" && test yes = "$compiler_needs_object" && test -z "$libobjs"; then # extract the archives, so we have objects to list. # TODO: could optimize this to just extract one archive. whole_archive_flag_spec= fi if test -n "$whole_archive_flag_spec"; then save_libobjs=$libobjs eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= else gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $convenience func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi fi if test yes = "$thread_safe" && test -n "$thread_safe_flag_spec"; then eval flag=\"$thread_safe_flag_spec\" func_append linker_flags " $flag" fi # Make a backup of the uninstalled library when relinking if test relink = "$opt_mode"; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $? fi # Do each of the archive commands. if test yes = "$module" && test -n "$module_cmds"; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then eval test_cmds=\"$module_expsym_cmds\" cmds=$module_expsym_cmds else eval test_cmds=\"$module_cmds\" cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then eval test_cmds=\"$archive_expsym_cmds\" cmds=$archive_expsym_cmds else eval test_cmds=\"$archive_cmds\" cmds=$archive_cmds fi fi if test : != "$skipped_export" && func_len " $test_cmds" && len=$func_len_result && test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then : else # The command line is too long to link in one step, link piecewise # or, if using GNU ld and skipped_export is not :, use a linker # script. # Save the value of $output and $libobjs because we want to # use them later. 
If we have whole_archive_flag_spec, we # want to use save_libobjs as it was before # whole_archive_flag_spec was expanded, because we can't # assume the linker understands whole_archive_flag_spec. # This may have to be revisited, in case too many # convenience libraries get linked in and end up exceeding # the spec. if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then save_libobjs=$libobjs fi save_output=$output func_basename "$output" output_la=$func_basename_result # Clear the reloadable object creation command queue and # initialize k to one. test_cmds= concat_cmds= objlist= last_robj= k=1 if test -n "$save_libobjs" && test : != "$skipped_export" && test yes = "$with_gnu_ld"; then output=$output_objdir/$output_la.lnkscript func_verbose "creating GNU ld script: $output" echo 'INPUT (' > $output for obj in $save_libobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done echo ')' >> $output func_append delfiles " $output" func_to_tool_file "$output" output=$func_to_tool_file_result elif test -n "$save_libobjs" && test : != "$skipped_export" && test -n "$file_list_spec"; then output=$output_objdir/$output_la.lnk func_verbose "creating linker input file list: $output" : > $output set x $save_libobjs shift firstobj= if test yes = "$compiler_needs_object"; then firstobj="$1 " shift fi for obj do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done func_append delfiles " $output" func_to_tool_file "$output" output=$firstobj\"$file_list_spec$func_to_tool_file_result\" else if test -n "$save_libobjs"; then func_verbose "creating reloadable object files..." output=$output_objdir/$output_la-$k.$objext eval test_cmds=\"$reload_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 # Loop over the list of objects to be linked. for obj in $save_libobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result if test -z "$objlist" || test "$len" -lt "$max_cmd_len"; then func_append objlist " $obj" else # The command $test_cmds is almost too long, add a # command to the queue. if test 1 -eq "$k"; then # The first file doesn't have a previous command to add. reload_objs=$objlist eval concat_cmds=\"$reload_cmds\" else # All subsequent reloadable object files will link in # the last one created. reload_objs="$objlist $last_robj" eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\" fi last_robj=$output_objdir/$output_la-$k.$objext func_arith $k + 1 k=$func_arith_result output=$output_objdir/$output_la-$k.$objext objlist=" $obj" func_len " $last_robj" func_arith $len0 + $func_len_result len=$func_arith_result fi done # Handle the remaining objects by creating one last # reloadable object file. All subsequent reloadable object # files will link in the last one created. test -z "$concat_cmds" || concat_cmds=$concat_cmds~ reload_objs="$objlist $last_robj" eval concat_cmds=\"\$concat_cmds$reload_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" fi func_append delfiles " $output" else output= fi ${skipped_export-false} && { func_verbose "generating symbol list for '$libname.la'" export_symbols=$output_objdir/$libname.exp $opt_dry_run || $RM $export_symbols libobjs=$output # Append the command to create the export file. 
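# --- Illustrative sketch (not part of ltmain.sh): GNU ld "INPUT" script ------
# When the expanded link command would exceed max_cmd_len and GNU ld is in
# use, the branch above writes the object list into a linker script and
# passes that single file to the linker instead.  For hypothetical objects
# a.o b.o c.o the generated $output_la.lnkscript contains just:
#   INPUT (
#   a.o
#   b.o
#   c.o
#   )
# The link command itself is still whatever archive_cmds/archive_expsym_cmds
# expand to; only the object list ($libobjs) is replaced by the script's name.
# -----------------------------------------------------------------------------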
test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" fi } test -n "$save_libobjs" && func_verbose "creating a temporary reloadable object file: $output" # Loop through the commands generated above and execute them. save_ifs=$IFS; IFS='~' for cmd in $concat_cmds; do IFS=$save_ifs $opt_quiet || { func_quote_arg expand,pretty "$cmd" eval "func_echo $func_quote_arg_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? # Restore the uninstalled library and exit if test relink = "$opt_mode"; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS=$save_ifs if test -n "$export_symbols_regex" && ${skipped_export-false}; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi ${skipped_export-false} && { if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols=$export_symbols test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for '$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands, which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi } libobjs=$output # Restore the value of output. output=$save_output if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= fi # Expand the library linking commands again to reset the # value of $libobjs for piecewise linking. # Do each of the archive commands. if test yes = "$module" && test -n "$module_cmds"; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then cmds=$module_expsym_cmds else cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then cmds=$archive_expsym_cmds else cmds=$archive_cmds fi fi fi if test -n "$delfiles"; then # Append the command to remove temporary files to $cmds. eval cmds=\"\$cmds~\$RM $delfiles\" fi # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi save_ifs=$IFS; IFS='~' for cmd in $cmds; do IFS=$sp$nl eval cmd=\"$cmd\" IFS=$save_ifs $opt_quiet || { func_quote_arg expand,pretty "$cmd" eval "func_echo $func_quote_arg_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? 
# Restore the uninstalled library and exit if test relink = "$opt_mode"; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS=$save_ifs # Restore the uninstalled library and exit if test relink = "$opt_mode"; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $? if test -n "$convenience"; then if test -z "$whole_archive_flag_spec"; then func_show_eval '${RM}r "$gentop"' fi fi exit $EXIT_SUCCESS fi # Create links to the real library. for linkname in $linknames; do if test "$realname" != "$linkname"; then func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?' fi done # If -module or -export-dynamic was specified, set the dlname. if test yes = "$module" || test yes = "$export_dynamic"; then # On all known operating systems, these are identical. dlname=$soname fi fi ;; obj) if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then func_warning "'-dlopen' is ignored for objects" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "'-l' and '-L' are ignored for objects" ;; esac test -n "$rpath" && \ func_warning "'-rpath' is ignored for objects" test -n "$xrpath" && \ func_warning "'-R' is ignored for objects" test -n "$vinfo" && \ func_warning "'-version-info' is ignored for objects" test -n "$release" && \ func_warning "'-release' is ignored for objects" case $output in *.lo) test -n "$objs$old_deplibs" && \ func_fatal_error "cannot build library object '$output' from non-libtool objects" libobj=$output func_lo2o "$libobj" obj=$func_lo2o_result ;; *) libobj= obj=$output ;; esac # Delete the old objects. $opt_dry_run || $RM $obj $libobj # Objects from convenience libraries. This assumes # single-version convenience libraries. Whenever we create # different ones for PIC/non-PIC, this we'll have to duplicate # the extraction. reload_conv_objs= gentop= # if reload_cmds runs $LD directly, get rid of -Wl from # whole_archive_flag_spec and hope we can get by with turning comma # into space. case $reload_cmds in *\$LD[\ \$]*) wl= ;; esac if test -n "$convenience"; then if test -n "$whole_archive_flag_spec"; then eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\" test -n "$wl" || tmp_whole_archive_flags=`$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'` reload_conv_objs=$reload_objs\ $tmp_whole_archive_flags else gentop=$output_objdir/${obj}x func_append generated " $gentop" func_extract_archives $gentop $convenience reload_conv_objs="$reload_objs $func_extract_archives_result" fi fi # If we're not building shared, we need to use non_pic_objs test yes = "$build_libtool_libs" || libobjs=$non_pic_objects # Create the old-style object. reload_objs=$objs$old_deplibs' '`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; /\.lib$/d; $lo2o" | $NL2SP`' '$reload_conv_objs output=$obj func_execute_cmds "$reload_cmds" 'exit $?' # Exit if we aren't doing a library object file. if test -z "$libobj"; then if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS fi test yes = "$build_libtool_libs" || { if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi # Create an invalid libtool object if no PIC, so that we don't # accidentally link it into a program. # $show "echo timestamp > $libobj" # $opt_dry_run || eval "echo timestamp > $libobj" || exit $? 
exit $EXIT_SUCCESS } if test -n "$pic_flag" || test default != "$pic_mode"; then # Only do commands if we really have different PIC objects. reload_objs="$libobjs $reload_conv_objs" output=$libobj func_execute_cmds "$reload_cmds" 'exit $?' fi if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS ;; prog) case $host in *cygwin*) func_stripname '' '.exe' "$output" output=$func_stripname_result.exe;; esac test -n "$vinfo" && \ func_warning "'-version-info' is ignored for programs" test -n "$release" && \ func_warning "'-release' is ignored for programs" $preload \ && test unknown,unknown,unknown = "$dlopen_support,$dlopen_self,$dlopen_self_static" \ && func_warning "'LT_INIT([dlopen])' not used. Assuming no dlopen support." case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library is the System framework compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac case $host in *-*-darwin*) # Don't allow lazy linking, it breaks C++ global constructors # But is supposedly fixed on 10.4 or later (yay!). if test CXX = "$tagname"; then case ${MACOSX_DEPLOYMENT_TARGET-10.0} in 10.[0123]) func_append compile_command " $wl-bind_at_load" func_append finalize_command " $wl-bind_at_load" ;; esac fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $compile_deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $compile_deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done compile_deplibs=$new_libs func_append compile_command " $compile_deplibs" func_append finalize_command " $finalize_deplibs" if test -n "$rpath$xrpath"; then # If the user specified any rpath flags, then add them. for libdir in $rpath $xrpath; do # This is the magic to use -rpath. case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done fi # Now hardcode the library paths rpath= hardcode_libdirs= for libdir in $compile_rpath $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. 
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$libdir" | $SED -e 's*/lib$*/bin*'` case :$dllsearchpath: in *":$libdir:"*) ;; ::) dllsearchpath=$libdir;; *) func_append dllsearchpath ":$libdir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval rpath=\" $hardcode_libdir_flag_spec\" fi compile_rpath=$rpath rpath= hardcode_libdirs= for libdir in $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$finalize_perm_rpath " in *" $libdir "*) ;; *) func_append finalize_perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval rpath=\" $hardcode_libdir_flag_spec\" fi finalize_rpath=$rpath if test -n "$libobjs" && test yes = "$build_old_libs"; then # Transform all the library objects into standard objects. compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP` finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP` fi func_generate_dlsyms "$outputname" "@PROGRAM@" false # template prelinking step if test -n "$prelink_cmds"; then func_execute_cmds "$prelink_cmds" 'exit $?' fi wrappers_required=: case $host in *cegcc* | *mingw32ce*) # Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway. wrappers_required=false ;; *cygwin* | *mingw* ) test yes = "$build_libtool_libs" || wrappers_required=false ;; *) if test no = "$need_relink" || test yes != "$build_libtool_libs"; then wrappers_required=false fi ;; esac $wrappers_required || { # Replace the output file specification. compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'` link_command=$compile_command$compile_rpath # We have no uninstalled library dependencies, so finalize right now. exit_status=0 func_show_eval "$link_command" 'exit_status=$?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Delete the generated files. 
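# --- Illustrative note (not part of ltmain.sh): the lib -> bin heuristic -----
# For PE hosts (cygwin/mingw/cegcc and friends) handled earlier in this
# branch, each rpath directory is also added to dllsearchpath together with
# its sibling "bin" directory, on the assumption that DLLs live next to the
# executables that use them:
#   /usr/local/lib  ->  also search /usr/local/bin
# (that mapping is the sed substitution 's*/lib$*/bin*' seen above; the path
# is a hypothetical example).
# -----------------------------------------------------------------------------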
if test -f "$output_objdir/${outputname}S.$objext"; then func_show_eval '$RM "$output_objdir/${outputname}S.$objext"' fi exit $exit_status } if test -n "$compile_shlibpath$finalize_shlibpath"; then compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command" fi if test -n "$finalize_shlibpath"; then finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command" fi compile_var= finalize_var= if test -n "$runpath_var"; then if test -n "$perm_rpath"; then # We should set the runpath_var. rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done compile_var="$runpath_var=\"$rpath\$$runpath_var\" " fi if test -n "$finalize_perm_rpath"; then # We should set the runpath_var. rpath= for dir in $finalize_perm_rpath; do func_append rpath "$dir:" done finalize_var="$runpath_var=\"$rpath\$$runpath_var\" " fi fi if test yes = "$no_install"; then # We don't need to create a wrapper script. link_command=$compile_var$compile_command$compile_rpath # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'` # Delete the old output file. $opt_dry_run || $RM $output # Link the executable and exit func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi exit $EXIT_SUCCESS fi case $hardcode_action,$fast_install in relink,*) # Fast installation is not supported link_command=$compile_var$compile_command$compile_rpath relink_command=$finalize_var$finalize_command$finalize_rpath func_warning "this platform does not like uninstalled shared libraries" func_warning "'$output' will be relinked during installation" ;; *,yes) link_command=$finalize_var$compile_command$finalize_rpath relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'` ;; *,no) link_command=$compile_var$compile_command$compile_rpath relink_command=$finalize_var$finalize_command$finalize_rpath ;; *,needless) link_command=$finalize_var$compile_command$finalize_rpath relink_command= ;; esac # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'` # Delete the old output files. $opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output_objdir/$outputname" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Now create the wrapper script. func_verbose "creating $output" # Quote the relink command for shipping. 
if test -n "$relink_command"; then # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_arg pretty "$var_value" relink_command="$var=$func_quote_arg_result; export $var; $relink_command" fi done func_quote eval cd "`pwd`" func_quote_arg pretty,unquoted "($func_quote_result; $relink_command)" relink_command=$func_quote_arg_unquoted_result fi # Only actually do things if not in dry run mode. $opt_dry_run || { # win32 will think the script is a binary if it has # a .exe suffix, so we strip it off here. case $output in *.exe) func_stripname '' '.exe' "$output" output=$func_stripname_result ;; esac # test for cygwin because mv fails w/o .exe extensions case $host in *cygwin*) exeext=.exe func_stripname '' '.exe' "$outputname" outputname=$func_stripname_result ;; *) exeext= ;; esac case $host in *cygwin* | *mingw* ) func_dirname_and_basename "$output" "" "." output_name=$func_basename_result output_path=$func_dirname_result cwrappersource=$output_path/$objdir/lt-$output_name.c cwrapper=$output_path/$output_name.exe $RM $cwrappersource $cwrapper trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15 func_emit_cwrapperexe_src > $cwrappersource # The wrapper executable is built using the $host compiler, # because it contains $host paths and files. If cross- # compiling, it, like the target executable, must be # executed on the $host or under an emulation environment. $opt_dry_run || { $LTCC $LTCFLAGS -o $cwrapper $cwrappersource $STRIP $cwrapper } # Now, create the wrapper script for func_source use: func_ltwrapper_scriptname $cwrapper $RM $func_ltwrapper_scriptname_result trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15 $opt_dry_run || { # note: this script will not be executed, so do not chmod. if test "x$build" = "x$host"; then $cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result else func_emit_wrapper no > $func_ltwrapper_scriptname_result fi } ;; * ) $RM $output trap "$RM $output; exit $EXIT_FAILURE" 1 2 15 func_emit_wrapper no > $output chmod +x $output ;; esac } exit $EXIT_SUCCESS ;; esac # See if we need to build an old-fashioned archive. for oldlib in $oldlibs; do case $build_libtool_libs in convenience) oldobjs="$libobjs_save $symfileobj" addlibs=$convenience build_libtool_libs=no ;; module) oldobjs=$libobjs_save addlibs=$old_convenience build_libtool_libs=no ;; *) oldobjs="$old_deplibs $non_pic_objects" $preload && test -f "$symfileobj" \ && func_append oldobjs " $symfileobj" addlibs=$old_convenience ;; esac if test -n "$addlibs"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $addlibs func_append oldobjs " $func_extract_archives_result" fi # Do each command in the archive commands. if test -n "$old_archive_from_new_cmds" && test yes = "$build_libtool_libs"; then cmds=$old_archive_from_new_cmds else # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append oldobjs " $func_extract_archives_result" fi # POSIX demands no paths to be encoded in archives. 
We have # to avoid creating archives with duplicate basenames if we # might have to extract them afterwards, e.g., when creating a # static archive out of a convenience library, or when linking # the entirety of a libtool archive into another (currently # not supported by libtool). if (for obj in $oldobjs do func_basename "$obj" $ECHO "$func_basename_result" done | sort | sort -uc >/dev/null 2>&1); then : else echo "copying selected object files to avoid basename conflicts..." gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_mkdir_p "$gentop" save_oldobjs=$oldobjs oldobjs= counter=1 for obj in $save_oldobjs do func_basename "$obj" objbase=$func_basename_result case " $oldobjs " in " ") oldobjs=$obj ;; *[\ /]"$objbase "*) while :; do # Make sure we don't pick an alternate name that also # overlaps. newobj=lt$counter-$objbase func_arith $counter + 1 counter=$func_arith_result case " $oldobjs " in *[\ /]"$newobj "*) ;; *) if test ! -f "$gentop/$newobj"; then break; fi ;; esac done func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj" func_append oldobjs " $gentop/$newobj" ;; *) func_append oldobjs " $obj" ;; esac done fi func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result eval cmds=\"$old_archive_cmds\" func_len " $cmds" len=$func_len_result if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then cmds=$old_archive_cmds elif test -n "$archiver_list_spec"; then func_verbose "using command file archive linking..." for obj in $oldobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > $output_objdir/$libname.libcmd func_to_tool_file "$output_objdir/$libname.libcmd" oldobjs=" $archiver_list_spec$func_to_tool_file_result" cmds=$old_archive_cmds else # the command line is too long to link in one step, link in parts func_verbose "using piecewise archive linking..." save_RANLIB=$RANLIB RANLIB=: objlist= concat_cmds= save_oldobjs=$oldobjs oldobjs= # Is there a better way of finding the last object in the list? for obj in $save_oldobjs do last_oldobj=$obj done eval test_cmds=\"$old_archive_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 for obj in $save_oldobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result func_append objlist " $obj" if test "$len" -lt "$max_cmd_len"; then : else # the above command should be used before it gets too long oldobjs=$objlist if test "$obj" = "$last_oldobj"; then RANLIB=$save_RANLIB fi test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\$concat_cmds$old_archive_cmds\" objlist= len=$len0 fi done RANLIB=$save_RANLIB oldobjs=$objlist if test -z "$oldobjs"; then eval cmds=\"\$concat_cmds\" else eval cmds=\"\$concat_cmds~\$old_archive_cmds\" fi fi fi func_execute_cmds "$cmds" 'exit $?' done test -n "$generated" && \ func_show_eval "${RM}r$generated" # Now create the libtool archive. 
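# --- Illustrative sketch (not part of ltmain.sh): piecewise "ar" updates -----
# When the static-archive command would be too long, the loop above splits
# the object list and expands old_archive_cmds once per chunk.  With a
# typical old_archive_cmds of '$AR cru $oldlib$oldobjs~$RANLIB $oldlib' and
# hypothetical objects, the executed commands amount to roughly:
#   ar cru libfoo.a a.o b.o c.o      # first chunk; RANLIB is ':' (a no-op)
#   : libfoo.a
#   ar cru libfoo.a d.o e.o          # chunk containing the last object;
#   ranlib libfoo.a                  # the real RANLIB has been restored
# so the symbol index is only rebuilt once, after the final chunk.
# -----------------------------------------------------------------------------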
case $output in *.la) old_library= test yes = "$build_old_libs" && old_library=$libname.$libext func_verbose "creating $output" # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_arg pretty,unquoted "$var_value" relink_command="$var=$func_quote_arg_unquoted_result; export $var; $relink_command" fi done # Quote the link command for shipping. func_quote eval cd "`pwd`" relink_command="($func_quote_result; $SHELL \"$progpath\" $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)" func_quote_arg pretty,unquoted "$relink_command" relink_command=$func_quote_arg_unquoted_result if test yes = "$hardcode_automatic"; then relink_command= fi # Only create the output if not a dry run. $opt_dry_run || { for installed in no yes; do if test yes = "$installed"; then if test -z "$install_libdir"; then break fi output=$output_objdir/${outputname}i # Replace all uninstalled libtool libraries with the installed ones newdependency_libs= for deplib in $dependency_libs; do case $deplib in *.la) func_basename "$deplib" name=$func_basename_result func_resolve_sysroot "$deplib" eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result` test -z "$libdir" && \ func_fatal_error "'$deplib' is not a valid libtool archive" func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name" ;; -L*) func_stripname -L '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -L$func_replace_sysroot_result" ;; -R*) func_stripname -R '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -R$func_replace_sysroot_result" ;; *) func_append newdependency_libs " $deplib" ;; esac done dependency_libs=$newdependency_libs newdlfiles= for lib in $dlfiles; do case $lib in *.la) func_basename "$lib" name=$func_basename_result eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "'$lib' is not a valid libtool archive" func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name" ;; *) func_append newdlfiles " $lib" ;; esac done dlfiles=$newdlfiles newdlprefiles= for lib in $dlprefiles; do case $lib in *.la) # Only pass preopened files to the pseudo-archive (for # eventual linking with the app. 
that links it) if we # didn't already link the preopened objects directly into # the library: func_basename "$lib" name=$func_basename_result eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "'$lib' is not a valid libtool archive" func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name" ;; esac done dlprefiles=$newdlprefiles else newdlfiles= for lib in $dlfiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlfiles " $abs" done dlfiles=$newdlfiles newdlprefiles= for lib in $dlprefiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlprefiles " $abs" done dlprefiles=$newdlprefiles fi $RM $output # place dlname in correct position for cygwin # In fact, it would be nice if we could use this code for all target # systems that can't hard-code library paths into their executables # and that have no shared library path variable independent of PATH, # but it turns out we can't easily determine that from inspecting # libtool variables, so we have to hard-code the OSs to which it # applies here; at the moment, that means platforms that use the PE # object format with DLL files. See the long comment at the top of # tests/bindir.at for full details. tdlname=$dlname case $host,$output,$installed,$module,$dlname in *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll) # If a -bindir argument was supplied, place the dll there. if test -n "$bindir"; then func_relative_path "$install_libdir" "$bindir" tdlname=$func_relative_path_result/$dlname else # Otherwise fall back on heuristic. tdlname=../bin/$dlname fi ;; esac $ECHO > $output "\ # $outputname - a libtool library file # Generated by $PROGRAM (GNU $PACKAGE) $VERSION # # Please DO NOT delete this file! # It is necessary for linking the library. # The name that we can dlopen(3). dlname='$tdlname' # Names of this library. library_names='$library_names' # The name of the static archive. old_library='$old_library' # Linker flags that cannot go in dependency_libs. inherited_linker_flags='$new_inherited_linker_flags' # Libraries that this one depends upon. dependency_libs='$dependency_libs' # Names of additional weak libraries provided by this library weak_library_names='$weak_libs' # Version information for $libname. current=$current age=$age revision=$revision # Is this an already installed library? installed=$installed # Should we warn about portability when linking against -modules? shouldnotlink=$module # Files to dlopen/dlpreopen dlopen='$dlfiles' dlpreopen='$dlprefiles' # Directory that this library needs to be installed in: libdir='$install_libdir'" if test no,yes = "$installed,$need_relink"; then $ECHO >> $output "\ relink_command=\"$relink_command\"" fi done } # Do a symbolic link so that the libtool archive can be found in # LD_LIBRARY_PATH before the program is installed. func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?' ;; esac exit $EXIT_SUCCESS } if test link = "$opt_mode" || test relink = "$opt_mode"; then func_mode_link ${1+"$@"} fi # func_mode_uninstall arg... func_mode_uninstall () { $debug_cmd RM=$nonopt files= rmforce=false exit_status=0 # This variable tells wrapper scripts just to set variables rather # than running their programs. 
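# Illustrative note (an assumption about typical usage, not taken from this
# script): package Makefiles normally reach the clean/uninstall modes below
# through the libtool wrapper, for example:
#
#   ./libtool --mode=clean rm -f foo.lo libfoo.la
#   ./libtool --mode=uninstall rm -f /usr/local/lib/libfoo.la
#
# The object, library and installation paths above are hypothetical.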
libtool_install_magic=$magic for arg do case $arg in -f) func_append RM " $arg"; rmforce=: ;; -*) func_append RM " $arg" ;; *) func_append files " $arg" ;; esac done test -z "$RM" && \ func_fatal_help "you must specify an RM program" rmdirs= for file in $files; do func_dirname "$file" "" "." dir=$func_dirname_result if test . = "$dir"; then odir=$objdir else odir=$dir/$objdir fi func_basename "$file" name=$func_basename_result test uninstall = "$opt_mode" && odir=$dir # Remember odir for removal later, being careful to avoid duplicates if test clean = "$opt_mode"; then case " $rmdirs " in *" $odir "*) ;; *) func_append rmdirs " $odir" ;; esac fi # Don't error if the file doesn't exist and rm -f was used. if { test -L "$file"; } >/dev/null 2>&1 || { test -h "$file"; } >/dev/null 2>&1 || test -f "$file"; then : elif test -d "$file"; then exit_status=1 continue elif $rmforce; then continue fi rmfiles=$file case $name in *.la) # Possibly a libtool archive, so verify it. if func_lalib_p "$file"; then func_source $dir/$name # Delete the libtool libraries and symlinks. for n in $library_names; do func_append rmfiles " $odir/$n" done test -n "$old_library" && func_append rmfiles " $odir/$old_library" case $opt_mode in clean) case " $library_names " in *" $dlname "*) ;; *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;; esac test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i" ;; uninstall) if test -n "$library_names"; then # Do each command in the postuninstall commands. func_execute_cmds "$postuninstall_cmds" '$rmforce || exit_status=1' fi if test -n "$old_library"; then # Do each command in the old_postuninstall commands. func_execute_cmds "$old_postuninstall_cmds" '$rmforce || exit_status=1' fi # FIXME: should reinstall the best remaining shared library. ;; esac fi ;; *.lo) # Possibly a libtool object, so verify it. if func_lalib_p "$file"; then # Read the .lo file func_source $dir/$name # Add PIC object to the list of files to remove. if test -n "$pic_object" && test none != "$pic_object"; then func_append rmfiles " $dir/$pic_object" fi # Add non-PIC object to the list of files to remove. if test -n "$non_pic_object" && test none != "$non_pic_object"; then func_append rmfiles " $dir/$non_pic_object" fi fi ;; *) if test clean = "$opt_mode"; then noexename=$name case $file in *.exe) func_stripname '' '.exe' "$file" file=$func_stripname_result func_stripname '' '.exe' "$name" noexename=$func_stripname_result # $file with .exe has already been added to rmfiles, # add $file without .exe func_append rmfiles " $file" ;; esac # Do a test to see if this is a libtool program. 
        if func_ltwrapper_p "$file"; then
          if func_ltwrapper_executable_p "$file"; then
            func_ltwrapper_scriptname "$file"
            relink_command=
            func_source $func_ltwrapper_scriptname_result
            func_append rmfiles " $func_ltwrapper_scriptname_result"
          else
            relink_command=
            func_source $dir/$noexename
          fi

          # note $name still contains .exe if it was in $file originally
          # as does the version of $file that was added into $rmfiles
          func_append rmfiles " $odir/$name $odir/${name}S.$objext"
          if test yes = "$fast_install" && test -n "$relink_command"; then
            func_append rmfiles " $odir/lt-$name"
          fi
          if test "X$noexename" != "X$name"; then
            func_append rmfiles " $odir/lt-$noexename.c"
          fi
        fi
      fi
      ;;
    esac
    func_show_eval "$RM $rmfiles" 'exit_status=1'
  done

  # Try to remove the $objdir's in the directories where we deleted files
  for dir in $rmdirs; do
    if test -d "$dir"; then
      func_show_eval "rmdir $dir >/dev/null 2>&1"
    fi
  done

  exit $exit_status
}

if test uninstall = "$opt_mode" || test clean = "$opt_mode"; then
  func_mode_uninstall ${1+"$@"}
fi

test -z "$opt_mode" && {
  help=$generic_help
  func_fatal_help "you must specify a MODE"
}

test -z "$exec_cmd" && \
  func_fatal_help "invalid operation mode '$opt_mode'"

if test -n "$exec_cmd"; then
  eval exec "$exec_cmd"
  exit $EXIT_FAILURE
fi

exit $exit_status

# The TAGs below are defined such that we never get into a situation
# where we disable both kinds of libraries.  Given conflicting
# choices, we go for a static library, that is the most portable,
# since we can't tell whether shared libraries were disabled because
# the user asked for that or because the platform doesn't support
# them.  This is particularly important on AIX, because we don't
# support having both static and shared libraries enabled at the same
# time on that platform, so we default to a shared-only configuration.
# If a disable-shared tag is given, we'll fallback to a static-only
# configuration.  But we'll never go from static-only to shared-only.

# ### BEGIN LIBTOOL TAG CONFIG: disable-shared
build_libtool_libs=no
build_old_libs=yes
# ### END LIBTOOL TAG CONFIG: disable-shared

# ### BEGIN LIBTOOL TAG CONFIG: disable-static
build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac`
# ### END LIBTOOL TAG CONFIG: disable-static

# Local Variables:
# mode:shell-script
# sh-indentation:2
# End:
gevent-24.11.1/deps/c-ares/config/missing000077500000000000000000000170601471441230600201270ustar00rootroot00000000000000#! /bin/sh
# Common wrapper for a few potentially missing GNU and other programs.

scriptversion=2024-06-07.14; # UTC

# shellcheck disable=SC2006,SC2268 # we must support pre-POSIX shells

# Copyright (C) 1996-2024 Free Software Foundation, Inc.
# Originally written by Fran,cois Pinard , 1996.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see .
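# Illustrative note (an assumption about typical usage, not part of this
# script): automake-generated Makefiles point maintainer tools at this
# wrapper so that an absent tool produces advice instead of a hard failure,
# for example:
#
#   ACLOCAL = ${SHELL} '/path/to/config/missing' aclocal-1.16
#   AUTOCONF = ${SHELL} '/path/to/config/missing' autoconf
#
# Running "./missing makeinfo" by hand prints the same advice when makeinfo
# is not installed.  The path and tool versions above are hypothetical.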
# As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try '$0 --help' for more information" exit 1 fi case $1 in --is-lightweight) # Used by our autoconf macros to check whether the available missing # script is modern enough. exit 0 ;; --run) # Back-compat with the calling convention used by older automake. shift ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this fails due to PROGRAM being missing or too old. Options: -h, --help display this help and exit -v, --version output version information and exit Supported PROGRAM values: aclocal autoconf autogen autoheader autom4te automake autoreconf bison flex help2man lex makeinfo perl yacc Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and 'g' are ignored when checking the name. Report bugs to . GNU Automake home page: . General help using GNU software: ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing (GNU Automake) $scriptversion" exit $? ;; -*) echo 1>&2 "$0: unknown '$1' option" echo 1>&2 "Try '$0 --help' for more information" exit 1 ;; esac # Run the given program, remember its exit status. "$@"; st=$? # If it succeeded, we are done. test $st -eq 0 && exit 0 # Also exit now if we it failed (or wasn't found), and '--version' was # passed; such an option is passed most likely to detect whether the # program is present and works. case $2 in --version|--help) exit $st;; esac # Exit code 63 means version mismatch. This often happens when the user # tries to use an ancient version of a tool on a file that requires a # minimum version. if test $st -eq 63; then msg="probably too old" elif test $st -eq 127; then # Program was missing. msg="missing on your system" else # Program was found and executed, but failed. Give up. exit $st fi perl_URL=https://www.perl.org/ flex_URL=https://github.com/westes/flex gnu_software_URL=https://www.gnu.org/software program_details () { case $1 in aclocal|automake|autoreconf) echo "The '$1' program is part of the GNU Automake package:" echo "<$gnu_software_URL/automake>" echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/autoconf>" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; autoconf|autom4te|autoheader) echo "The '$1' program is part of the GNU Autoconf package:" echo "<$gnu_software_URL/autoconf/>" echo "It also requires GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; *) : ;; esac } give_advice () { # Normalize program name to check for. normalized_program=`echo "$1" | sed ' s/^gnu-//; t s/^gnu//; t s/^g//; t'` printf '%s\n' "'$1' is $msg." configure_deps="'configure.ac' or m4 files included by 'configure.ac'" autoheader_deps="'acconfig.h'" automake_deps="'Makefile.am'" aclocal_deps="'acinclude.m4'" case $normalized_program in aclocal*) echo "You should only need it if you modified $aclocal_deps or" echo "$configure_deps." ;; autoconf*) echo "You should only need it if you modified $configure_deps." ;; autogen*) echo "You should only need it if you modified a '.def' or '.tpl' file." 
echo "You may want to install the GNU AutoGen package:" echo "<$gnu_software_URL/autogen/>" ;; autoheader*) echo "You should only need it if you modified $autoheader_deps or" echo "$configure_deps." ;; automake*) echo "You should only need it if you modified $automake_deps or" echo "$configure_deps." ;; autom4te*) echo "You might have modified some maintainer files that require" echo "the 'autom4te' program to be rebuilt." ;; autoreconf*) echo "You should only need it if you modified $aclocal_deps or" echo "$automake_deps or $autoheader_deps or $automake_deps or" echo "$configure_deps." ;; bison*|yacc*) echo "You should only need it if you modified a '.y' file." echo "You may want to install the GNU Bison package:" echo "<$gnu_software_URL/bison/>" ;; help2man*) echo "You should only need it if you modified a dependency" \ "of a man page." echo "You may want to install the GNU Help2man package:" echo "<$gnu_software_URL/help2man/>" ;; lex*|flex*) echo "You should only need it if you modified a '.l' file." echo "You may want to install the Fast Lexical Analyzer package:" echo "<$flex_URL>" ;; makeinfo*) echo "You should only need it if you modified a '.texi' file, or" echo "any other file indirectly affecting the aspect of the manual." echo "You might want to install the Texinfo package:" echo "<$gnu_software_URL/texinfo/>" echo "The spurious makeinfo call might also be the consequence of" echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might" echo "want to install GNU make:" echo "<$gnu_software_URL/make/>" ;; perl*) echo "You should only need it to run GNU Autoconf, GNU Automake, " echo " assorted other tools, or if you modified a Perl source file." echo "You may want to install the Perl 5 language interpreter:" echo "<$perl_URL>" ;; *) echo "You might have modified some files without having the proper" echo "tools for further handling them. Check the 'README' file, it" echo "often tells you about the needed prerequisites for installing" echo "this package. You may also peek at any GNU archive site, in" echo "case some other package contains this missing '$1' program." ;; esac program_details "$normalized_program" } give_advice "$1" | sed -e '1s/^/WARNING: /' \ -e '2,$s/^/ /' >&2 # Propagate the correct exit status (expected to be 127 for a program # not found, 63 for a program that failed due to version mismatch). exit $st # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/c-ares/configure000077500000000000000000031667631471441230600172140ustar00rootroot00000000000000#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.72 for c-ares 1.33.1. # # Report bugs to . # # # Copyright (C) 1992-1996, 1998-2017, 2020-2023 Free Software Foundation, # Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. 
alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case e in #( e) case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac ;; esac fi # Reset variables that may have inherited troublesome values from # the environment. # IFS needs to be set, to space, tab, and newline, in precisely that order. # (If _AS_PATH_WALK were called with IFS unset, it would have the # side effect of setting IFS to empty, thus disabling word splitting.) # Quoting is to prevent editors from complaining about space-tab. as_nl=' ' export as_nl IFS=" "" $as_nl" PS1='$ ' PS2='> ' PS4='+ ' # Ensure predictable behavior from utilities with locale-dependent output. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # We cannot yet rely on "unset" to work, but we need these variables # to be unset--not just set to an empty or harmless value--now, to # avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct # also avoids known problems related to "unset" and subshell syntax # in other old shells (e.g. bash 2.01 and pdksh 5.2.14). for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH do eval test \${$as_var+y} \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done # Ensure that fds 0, 1, and 2 are open. if (exec 3>&0) 2>/dev/null; then :; else exec 0&1) 2>/dev/null; then :; else exec 1>/dev/null; fi if (exec 3>&2) ; then :; else exec 2>/dev/null; fi # The user is always right. if ${PATH_SEPARATOR+false} :; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac test -r "$as_dir$0" && as_myself=$as_dir$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as 'sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Use a proper internal environment variable to ensure we don't fall # into an infinite loop, continuously re-executing ourselves. if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then _as_can_reexec=no; export _as_can_reexec; # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed 'exec'. printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi # We don't want this to propagate to other subprocesses. { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test \${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. 
alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case e in #( e) case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ) then : else case e in #( e) exitcode=1; echo positional parameters were not saved. ;; esac fi test x\$exitcode = x0 || exit 1 blah=\$(echo \$(echo blah)) test x\"\$blah\" = xblah || exit 1 test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1 test \$(( 1 + 1 )) = 2 || exit 1" if (eval "$as_required") 2>/dev/null then : as_have_required=yes else case e in #( e) as_have_required=no ;; esac fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null then : else case e in #( e) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. as_shell=$as_dir$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && as_run=a "$as_shell" -c "$as_bourne_compatible""$as_required" 2>/dev/null then : CONFIG_SHELL=$as_shell as_have_required=yes if as_run=a "$as_shell" -c "$as_bourne_compatible""$as_suggested" 2>/dev/null then : break 2 fi fi done;; esac as_found=false done IFS=$as_save_IFS if $as_found then : else case e in #( e) if { test -f "$SHELL" || test -f "$SHELL.exe"; } && as_run=a "$SHELL" -c "$as_bourne_compatible""$as_required" 2>/dev/null then : CONFIG_SHELL=$SHELL as_have_required=yes fi ;; esac fi if test "x$CONFIG_SHELL" != x then : export CONFIG_SHELL # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. 
BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed 'exec'. printf "%s\n" "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi if test x$as_have_required = xno then : printf "%s\n" "$0: This script requires a shell more modern than all" printf "%s\n" "$0: the shells that I found on your system." if test ${ZSH_VERSION+y} ; then printf "%s\n" "$0: In particular, zsh $ZSH_VERSION has bugs and should" printf "%s\n" "$0: be upgraded to zsh 4.3.4 or later." else printf "%s\n" "$0: Please tell bug-autoconf@gnu.org and c-ares mailing $0: list: http://lists.haxx.se/listinfo/c-ares about your $0: system, including any error possibly output before this $0: message. Then install a modern shell, or manually run $0: the script under such a shell if you do have one." fi exit 1 fi ;; esac fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. ## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null then : eval 'as_fn_append () { eval $1+=\$2 }' else case e in #( e) as_fn_append () { eval $1=\$$1\$2 } ;; esac fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. 
Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null then : eval 'as_fn_arith () { as_val=$(( $* )) }' else case e in #( e) as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } ;; esac fi # as_fn_arith # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi printf "%s\n" "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. :-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' t clear :clear s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { printf "%s\n" "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # If we had to re-execute with $CONFIG_SHELL, we're ensured to have # already done that, so ensure we don't try to do so again and fall # in an infinite loop. This has already happened in practice. _as_can_reexec=no; export _as_can_reexec # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } # Determine whether it's possible to make 'echo' print without a newline. # These variables are no longer used directly by Autoconf, but are AC_SUBSTed # for compatibility with existing Makefiles. ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac # For backward compatibility with old third-party macros, we provide # the shell variables $as_echo and $as_echo_n. New code should use # AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively. 
as_echo='printf %s\n' as_echo_n='printf %s' rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both 'ln -s file dir' and 'ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; 'ln -s' creates a wrapper executable. # In both cases, we have to default to 'cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_sed_cpp="y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" as_tr_cpp="eval sed '$as_sed_cpp'" # deprecated # Sed expression to map a string onto a valid variable name. as_sed_sh="y%*+%pp%;s%[^_$as_cr_alnum]%_%g" as_tr_sh="eval sed '$as_sed_sh'" # deprecated SHELL=${CONFIG_SHELL-/bin/sh} test -n "$DJDIR" || exec 7<&0 &1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='c-ares' PACKAGE_TARNAME='c-ares' PACKAGE_VERSION='1.33.1' PACKAGE_STRING='c-ares 1.33.1' PACKAGE_BUGREPORT='c-ares mailing list: http://lists.haxx.se/listinfo/c-ares' PACKAGE_URL='' ac_unique_file="src/lib/ares_ipv6.h" # Factoring default headers for most tests. 
ac_includes_default="\ #include #ifdef HAVE_STDIO_H # include #endif #ifdef HAVE_STDLIB_H # include #endif #ifdef HAVE_STRING_H # include #endif #ifdef HAVE_INTTYPES_H # include #endif #ifdef HAVE_STDINT_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef HAVE_UNISTD_H # include #endif" ac_header_c_list= enable_year2038=no ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS BUILD_SUBDIRS PKGCONFIG_CFLAGS AM_CPPFLAGS AM_CFLAGS BUILD_TESTS_FALSE BUILD_TESTS_TRUE GMOCK112_LIBS GMOCK112_CFLAGS GMOCK_LIBS GMOCK_CFLAGS PKG_CONFIG_LIBDIR PKG_CONFIG_PATH PKG_CONFIG CARES_PRIVATE_LIBS PTHREAD_CFLAGS PTHREAD_LIBS PTHREAD_CXX PTHREAD_CC ax_pthread_config CPP CARES_SYMBOL_HIDING_CFLAG CARES_SYMBOL_HIDING_FALSE CARES_SYMBOL_HIDING_TRUE CARES_USE_NO_UNDEFINED_FALSE CARES_USE_NO_UNDEFINED_TRUE CODE_COVERAGE_LIBS CODE_COVERAGE_CXXFLAGS CODE_COVERAGE_CFLAGS CODE_COVERAGE_CPPFLAGS GENHTML LCOV GCOV ifnGNUmake ifGNUmake CODE_COVERAGE_ENABLED CODE_COVERAGE_ENABLED_FALSE CODE_COVERAGE_ENABLED_TRUE MAINT MAINTAINER_MODE_FALSE MAINTAINER_MODE_TRUE CARES_RANDOM_FILE CXXCPP LT_SYS_LIBRARY_PATH OTOOL64 OTOOL LIPO NMEDIT DSYMUTIL MANIFEST_TOOL RANLIB ac_ct_AR AR FILECMD LN_S NM ac_ct_DUMPBIN DUMPBIN LD FGREP EGREP GREP SED host_os host_vendor host_cpu host build_os build_vendor build_cpu build LIBTOOL OBJDUMP DLLTOOL AS am__xargs_n am__rm_f_notfound AM_BACKSLASH AM_DEFAULT_VERBOSITY AM_DEFAULT_V AM_V CSCOPE ETAGS CTAGS am__fastdepCXX_FALSE am__fastdepCXX_TRUE CXXDEPMODE am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE am__nodep AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__include DEPDIR am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL VERSION PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM HAVE_CXX14 ac_ct_CXX CXXFLAGS CXX OBJEXT EXEEXT ac_ct_CC CPPFLAGS LDFLAGS CFLAGS CC CARES_VERSION_INFO target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir runstatedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL am__quote' ac_subst_files='' ac_user_opts=' enable_option_checking enable_dependency_tracking enable_silent_rules enable_shared enable_static with_pic enable_fast_install with_aix_soname with_gnu_ld with_sysroot enable_libtool_lock enable_warnings enable_symbol_hiding enable_tests enable_cares_threads with_random enable_maintainer_mode with_gcov enable_code_coverage enable_largefile enable_libgcc enable_year2038 ' ac_precious_vars='build_alias host_alias target_alias CC CFLAGS LDFLAGS LIBS CPPFLAGS CXX CXXFLAGS CCC LT_SYS_LIBRARY_PATH CXXCPP CPP PKG_CONFIG PKG_CONFIG_PATH PKG_CONFIG_LIBDIR GMOCK_CFLAGS GMOCK_LIBS GMOCK112_CFLAGS GMOCK112_LIBS' # Initialize some variables set by options. ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. 
# These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' runstatedir='${localstatedir}/run' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: '$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? 
"invalid feature name: '$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. 
with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -runstatedir | --runstatedir | --runstatedi | --runstated \ | --runstate | --runstat | --runsta | --runst | --runs \ | --run | --ru | --r) ac_prev=runstatedir ;; -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \ | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \ | --run=* | --ru=* | --r=*) runstatedir=$ac_optarg ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | 
--shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: '$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: '$ac_useropt'" ac_useropt_orig=$ac_useropt ac_useropt=`printf "%s\n" "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: '$ac_option' Try '$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: '$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. printf "%s\n" "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && printf "%s\n" "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? 
"missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir runstatedir do eval ac_val=\$$ac_var # Remove trailing slashes. case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: '$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but 'cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. 
cat <<_ACEOF 'configure' configures c-ares 1.33.1 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print 'checking ...' messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for '--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or '..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, 'make install' will install all the files in '$ac_default_prefix/bin', '$ac_default_prefix/lib' etc. You can specify an installation prefix other than '$ac_default_prefix' using '--prefix', for instance '--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/c-ares] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of c-ares 1.33.1:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-dependency-tracking do not reject slow dependency extractors --disable-dependency-tracking speeds up one-time build --enable-silent-rules less verbose build output (undo: "make V=1") --disable-silent-rules verbose build output (undo: "make V=0") --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] 
--enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-libtool-lock avoid locking (might break parallel builds) --disable-warnings Disable strict compiler warnings --disable-symbol-hiding Disable symbol hiding. Enabled by default if the compiler supports it. --disable-tests disable building of test suite. Built by default if GoogleTest is found. --disable-cares-threads Disable building of thread safety support --enable-maintainer-mode enable make rules and dependencies not useful (and sometimes confusing) to the casual installer --enable-code-coverage Whether to enable code coverage support --disable-largefile omit support for large files --enable-libgcc use libgcc when linking --enable-year2038 support timestamps after 2038 Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use both] --with-aix-soname=aix|svr4|both shared library versioning (aka "SONAME") variant to provide on AIX, [default=aix]. --with-gnu-ld assume the C compiler uses GNU ld [default=no] --with-sysroot[=DIR] Search for dependent libraries within DIR (or the compiler's sysroot if not specified). --with-random=FILE read randomness from FILE (default=/dev/urandom) --with-gcov=GCOV use given GCOV for coverage (GCOV=gcov). Some influential environment variables: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I if you have headers in a nonstandard directory CXX C++ compiler command CXXFLAGS C++ compiler flags LT_SYS_LIBRARY_PATH User-defined run-time library search path. CXXCPP C++ preprocessor CPP C preprocessor PKG_CONFIG path to pkg-config utility PKG_CONFIG_PATH directories to add to pkg-config's search path PKG_CONFIG_LIBDIR path overriding pkg-config's built-in search path GMOCK_CFLAGS C compiler flags for GMOCK, overriding pkg-config GMOCK_LIBS linker flags for GMOCK, overriding pkg-config GMOCK112_CFLAGS C compiler flags for GMOCK112, overriding pkg-config GMOCK112_LIBS linker flags for GMOCK112, overriding pkg-config Use these variables to override the choices made by 'configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to . _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. 
ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for configure.gnu first; this name is used for a wrapper for # Metaconfig's "Configure" on case-insensitive file systems. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else printf "%s\n" "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF c-ares configure 1.33.1 generated by GNU Autoconf 2.72 Copyright (C) 2023 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 printf %s "checking for $2... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$3=yes" else case e in #( e) eval "$3=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_cxx_try_compile LINENO # ---------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. 
ac_fn_cxx_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_compile # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext } then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 printf %s "checking for $2... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case declares $2. For example, HP-UX 11i declares gettimeofday. */ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (void); below. */ #include #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char $2 (void); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main (void) { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : eval "$3=yes" else case e in #( e) eval "$3=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext ;; esac fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_cxx_try_cpp LINENO # ------------------------ # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || test ! -s conftest.err } then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_cpp # ac_fn_cxx_try_link LINENO # ------------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest.beam conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext } then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_link # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. 
ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! -s conftest.err } then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 ;; esac fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_check_header_preproc LINENO HEADER VAR # ---------------------------------------------- # Tests whether HEADER exists and can be preprocessed (in isolation), setting # the cache variable VAR accordingly. ac_fn_c_check_header_preproc () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 printf %s "checking for $2... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_c_try_cpp "$LINENO" then : eval "$3=yes" else case e in #( e) eval "$3=no" ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext ;; esac fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_preproc # ac_fn_check_decl LINENO SYMBOL VAR INCLUDES EXTRA-OPTIONS FLAG-VAR # ------------------------------------------------------------------ # Tests whether SYMBOL is declared in INCLUDES, setting cache variable VAR # accordingly. Pass EXTRA-OPTIONS to the compiler, using FLAG-VAR. ac_fn_check_decl () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack as_decl_name=`echo $2|sed 's/ *(.*//'` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $as_decl_name is declared" >&5 printf %s "checking whether $as_decl_name is declared... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else case e in #( e) as_decl_use=`echo $2|sed -e 's/(/((/' -e 's/)/) 0&/' -e 's/,/) 0& (/g'` eval ac_save_FLAGS=\$$6 as_fn_append $6 " $5" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main (void) { #ifndef $as_decl_name #ifdef __cplusplus (void) $as_decl_use; #else (void) $as_decl_name; #endif #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$3=yes" else case e in #( e) eval "$3=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext eval $6=\$ac_save_FLAGS ;; esac fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_check_decl # ac_fn_c_check_type LINENO TYPE VAR INCLUDES # ------------------------------------------- # Tests whether TYPE exists after having included INCLUDES, setting cache # variable VAR accordingly. 
ac_fn_c_check_type () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 printf %s "checking for $2... " >&6; } if eval test \${$3+y} then : printf %s "(cached) " >&6 else case e in #( e) eval "$3=no" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main (void) { if (sizeof ($2)) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main (void) { if (sizeof (($2))) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) eval "$3=yes" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi eval ac_res=\$$3 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_type # ac_fn_c_check_member LINENO AGGR MEMBER VAR INCLUDES # ---------------------------------------------------- # Tries to find if the field MEMBER exists in type AGGR, after including # INCLUDES, setting cache variable VAR accordingly. ac_fn_c_check_member () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $2.$3" >&5 printf %s "checking for $2.$3... " >&6; } if eval test \${$4+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $5 int main (void) { static $2 ac_aggr; if (ac_aggr.$3) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$4=yes" else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $5 int main (void) { static $2 ac_aggr; if (sizeof ac_aggr.$3) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$4=yes" else case e in #( e) eval "$4=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi eval ac_res=\$$4 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_member # ac_fn_c_try_run LINENO # ---------------------- # Try to run conftest.$ac_ext, and return whether this succeeded. Assumes that # executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; } then : ac_retval=0 else case e in #( e) printf "%s\n" "$as_me: program exited with status $ac_status" >&5 printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status ;; esac fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run ac_configure_args_raw= for ac_arg do case $ac_arg in *\'*) ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append ac_configure_args_raw " '$ac_arg'" done case $ac_configure_args_raw in *$as_nl*) ac_safe_unquote= ;; *) ac_unsafe_z='|&;<>()$`\\"*?[ '' ' # This string ends in space, tab. ac_unsafe_a="$ac_unsafe_z#~" ac_safe_unquote="s/ '\\([^$ac_unsafe_a][^$ac_unsafe_z]*\\)'/ \\1/g" ac_configure_args_raw=` printf "%s\n" "$ac_configure_args_raw" | sed "$ac_safe_unquote"`;; esac cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by c-ares $as_me 1.33.1, which was generated by GNU Autoconf 2.72. Invocation command line was $ $0$ac_configure_args_raw _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac printf "%s\n" "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`printf "%s\n" "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. 
else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Sanitize IFS. IFS=" "" $as_nl" # Save into config.log some information that might help in debugging. { echo printf "%s\n" "## ---------------- ## ## Cache variables. ## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo printf "%s\n" "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac printf "%s\n" "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then printf "%s\n" "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`printf "%s\n" "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac printf "%s\n" "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then printf "%s\n" "## ----------- ## ## confdefs.h. ## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && printf "%s\n" "$as_me: caught signal $ac_signal" printf "%s\n" "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h printf "%s\n" "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. 
printf "%s\n" "#define PACKAGE_NAME \"$PACKAGE_NAME\"" >>confdefs.h printf "%s\n" "#define PACKAGE_TARNAME \"$PACKAGE_TARNAME\"" >>confdefs.h printf "%s\n" "#define PACKAGE_VERSION \"$PACKAGE_VERSION\"" >>confdefs.h printf "%s\n" "#define PACKAGE_STRING \"$PACKAGE_STRING\"" >>confdefs.h printf "%s\n" "#define PACKAGE_BUGREPORT \"$PACKAGE_BUGREPORT\"" >>confdefs.h printf "%s\n" "#define PACKAGE_URL \"$PACKAGE_URL\"" >>confdefs.h # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. if test -n "$CONFIG_SITE"; then ac_site_files="$CONFIG_SITE" elif test "x$prefix" != xNONE; then ac_site_files="$prefix/share/config.site $prefix/etc/config.site" else ac_site_files="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi for ac_site_file in $ac_site_files do case $ac_site_file in #( */*) : ;; #( *) : ac_site_file=./$ac_site_file ;; esac if test -f "$ac_site_file" && test -r "$ac_site_file"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 printf "%s\n" "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . "$ac_site_file" \ || { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See 'config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 printf "%s\n" "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 printf "%s\n" "$as_me: creating cache $cache_file" >&6;} >$cache_file fi as_fn_append ac_header_c_list " stdio.h stdio_h HAVE_STDIO_H" # Test code for whether the C compiler supports C89 (global declarations) ac_c_conftest_c89_globals=' /* Does the compiler advertise C89 conformance? Do not test the value of __STDC__, because some compilers set it to 0 while being otherwise adequately conformant. */ #if !defined __STDC__ # error "Compiler does not advertise C89 conformance" #endif #include #include struct stat; /* Most of the following tests are stolen from RCS 5.7 src/conf.sh. */ struct buf { int x; }; struct buf * (*rcsopen) (struct buf *, struct stat *, int); static char *e (char **p, int i) { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* C89 style stringification. */ #define noexpand_stringify(a) #a const char *stringified = noexpand_stringify(arbitrary+token=sequence); /* C89 style token pasting. Exercises some of the corner cases that e.g. old MSVC gets wrong, but not very hard. */ #define noexpand_concat(a,b) a##b #define expand_concat(a,b) noexpand_concat(a,b) extern int vA; extern int vbee; #define aye A #define bee B int *pvA = &expand_concat(v,aye); int *pvbee = &noexpand_concat(v,bee); /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not \xHH hex character constants. These do not provoke an error unfortunately, instead are silently treated as an "x". 
The following induces an error, until -std is added to get proper ANSI mode. Curiously \x00 != x always comes out true, for an array size at least. It is necessary to write \x00 == 0 to get something that is true only with -std. */ int osf4_cc_array ['\''\x00'\'' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) '\''x'\'' int xlc6_cc_array[FOO(a) == '\''x'\'' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, int *(*)(struct buf *, struct stat *, int), int, int);' # Test code for whether the C compiler supports C89 (body of main). ac_c_conftest_c89_main=' ok |= (argc == 0 || f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]); ' # Test code for whether the C compiler supports C99 (global declarations) ac_c_conftest_c99_globals=' /* Does the compiler advertise C99 conformance? */ #if !defined __STDC_VERSION__ || __STDC_VERSION__ < 199901L # error "Compiler does not advertise C99 conformance" #endif // See if C++-style comments work. #include extern int puts (const char *); extern int printf (const char *, ...); extern int dprintf (int, const char *, ...); extern void *malloc (size_t); extern void free (void *); // Check varargs macros. These examples are taken from C99 6.10.3.5. // dprintf is used instead of fprintf to avoid needing to declare // FILE and stderr. #define debug(...) dprintf (2, __VA_ARGS__) #define showlist(...) puts (#__VA_ARGS__) #define report(test,...) ((test) ? puts (#test) : printf (__VA_ARGS__)) static void test_varargs_macros (void) { int x = 1234; int y = 5678; debug ("Flag"); debug ("X = %d\n", x); showlist (The first, second, and third items.); report (x>y, "x is %d but y is %d", x, y); } // Check long long types. #define BIG64 18446744073709551615ull #define BIG32 4294967295ul #define BIG_OK (BIG64 / BIG32 == 4294967297ull && BIG64 % BIG32 == 0) #if !BIG_OK #error "your preprocessor is broken" #endif #if BIG_OK #else #error "your preprocessor is broken" #endif static long long int bignum = -9223372036854775807LL; static unsigned long long int ubignum = BIG64; struct incomplete_array { int datasize; double data[]; }; struct named_init { int number; const wchar_t *name; double average; }; typedef const char *ccp; static inline int test_restrict (ccp restrict text) { // Iterate through items via the restricted pointer. // Also check for declarations in for loops. for (unsigned int i = 0; *(text+i) != '\''\0'\''; ++i) continue; return 0; } // Check varargs and va_copy. static bool test_varargs (const char *format, ...) { va_list args; va_start (args, format); va_list args_copy; va_copy (args_copy, args); const char *str = ""; int number = 0; float fnumber = 0; while (*format) { switch (*format++) { case '\''s'\'': // string str = va_arg (args_copy, const char *); break; case '\''d'\'': // int number = va_arg (args_copy, int); break; case '\''f'\'': // float fnumber = va_arg (args_copy, double); break; default: break; } } va_end (args_copy); va_end (args); return *str && number && fnumber; } ' # Test code for whether the C compiler supports C99 (body of main). ac_c_conftest_c99_main=' // Check bool. _Bool success = false; success |= (argc != 0); // Check restrict. if (test_restrict ("String literal") == 0) success = true; char *restrict newvar = "Another string"; // Check varargs. 
success &= test_varargs ("s, d'\'' f .", "string", 65, 34.234); test_varargs_macros (); // Check flexible array members. struct incomplete_array *ia = malloc (sizeof (struct incomplete_array) + (sizeof (double) * 10)); ia->datasize = 10; for (int i = 0; i < ia->datasize; ++i) ia->data[i] = i * 1.234; // Work around memory leak warnings. free (ia); // Check named initializers. struct named_init ni = { .number = 34, .name = L"Test wide string", .average = 543.34343, }; ni.number = 58; int dynamic_array[ni.number]; dynamic_array[0] = argv[0][0]; dynamic_array[ni.number - 1] = 543; // work around unused variable warnings ok |= (!success || bignum == 0LL || ubignum == 0uLL || newvar[0] == '\''x'\'' || dynamic_array[ni.number - 1] != 543); ' # Test code for whether the C compiler supports C11 (global declarations) ac_c_conftest_c11_globals=' /* Does the compiler advertise C11 conformance? */ #if !defined __STDC_VERSION__ || __STDC_VERSION__ < 201112L # error "Compiler does not advertise C11 conformance" #endif // Check _Alignas. char _Alignas (double) aligned_as_double; char _Alignas (0) no_special_alignment; extern char aligned_as_int; char _Alignas (0) _Alignas (int) aligned_as_int; // Check _Alignof. enum { int_alignment = _Alignof (int), int_array_alignment = _Alignof (int[100]), char_alignment = _Alignof (char) }; _Static_assert (0 < -_Alignof (int), "_Alignof is signed"); // Check _Noreturn. int _Noreturn does_not_return (void) { for (;;) continue; } // Check _Static_assert. struct test_static_assert { int x; _Static_assert (sizeof (int) <= sizeof (long int), "_Static_assert does not work in struct"); long int y; }; // Check UTF-8 literals. #define u8 syntax error! char const utf8_literal[] = u8"happens to be ASCII" "another string"; // Check duplicate typedefs. typedef long *long_ptr; typedef long int *long_ptr; typedef long_ptr long_ptr; // Anonymous structures and unions -- taken from C11 6.7.2.1 Example 1. struct anonymous { union { struct { int i; int j; }; struct { int k; long int l; } w; }; int m; } v1; ' # Test code for whether the C compiler supports C11 (body of main). ac_c_conftest_c11_main=' _Static_assert ((offsetof (struct anonymous, i) == offsetof (struct anonymous, w.k)), "Anonymous union alignment botch"); v1.i = 2; v1.w.k = 5; ok |= v1.i != 5; ' # Test code for whether the C compiler supports C11 (complete). ac_c_conftest_c11_program="${ac_c_conftest_c89_globals} ${ac_c_conftest_c99_globals} ${ac_c_conftest_c11_globals} int main (int argc, char **argv) { int ok = 0; ${ac_c_conftest_c89_main} ${ac_c_conftest_c99_main} ${ac_c_conftest_c11_main} return ok; } " # Test code for whether the C compiler supports C99 (complete). ac_c_conftest_c99_program="${ac_c_conftest_c89_globals} ${ac_c_conftest_c99_globals} int main (int argc, char **argv) { int ok = 0; ${ac_c_conftest_c89_main} ${ac_c_conftest_c99_main} return ok; } " # Test code for whether the C compiler supports C89 (complete). 
ac_c_conftest_c89_program="${ac_c_conftest_c89_globals} int main (int argc, char **argv) { int ok = 0; ${ac_c_conftest_c89_main} return ok; } " as_fn_append ac_header_c_list " stdlib.h stdlib_h HAVE_STDLIB_H" as_fn_append ac_header_c_list " string.h string_h HAVE_STRING_H" as_fn_append ac_header_c_list " inttypes.h inttypes_h HAVE_INTTYPES_H" as_fn_append ac_header_c_list " stdint.h stdint_h HAVE_STDINT_H" as_fn_append ac_header_c_list " strings.h strings_h HAVE_STRINGS_H" as_fn_append ac_header_c_list " sys/stat.h sys_stat_h HAVE_SYS_STAT_H" as_fn_append ac_header_c_list " sys/types.h sys_types_h HAVE_SYS_TYPES_H" as_fn_append ac_header_c_list " unistd.h unistd_h HAVE_UNISTD_H" as_fn_append ac_header_c_list " wchar.h wchar_h HAVE_WCHAR_H" as_fn_append ac_header_c_list " minix/config.h minix_config_h HAVE_MINIX_CONFIG_H" # Test code for whether the C++ compiler supports C++98 (global declarations) ac_cxx_conftest_cxx98_globals=' // Does the compiler advertise C++98 conformance? #if !defined __cplusplus || __cplusplus < 199711L # error "Compiler does not advertise C++98 conformance" #endif // These inclusions are to reject old compilers that // lack the unsuffixed header files. #include #include // and are *not* freestanding headers in C++98. extern void assert (int); namespace std { extern int strcmp (const char *, const char *); } // Namespaces, exceptions, and templates were all added after "C++ 2.0". using std::exception; using std::strcmp; namespace { void test_exception_syntax() { try { throw "test"; } catch (const char *s) { // Extra parentheses suppress a warning when building autoconf itself, // due to lint rules shared with more typical C programs. assert (!(strcmp) (s, "test")); } } template struct test_template { T const val; explicit test_template(T t) : val(t) {} template T add(U u) { return static_cast(u) + val; } }; } // anonymous namespace ' # Test code for whether the C++ compiler supports C++98 (body of main) ac_cxx_conftest_cxx98_main=' assert (argc); assert (! argv[0]); { test_exception_syntax (); test_template tt (2.0); assert (tt.add (4) == 6.0); assert (true && !false); } ' # Test code for whether the C++ compiler supports C++11 (global declarations) ac_cxx_conftest_cxx11_globals=' // Does the compiler advertise C++ 2011 conformance? #if !defined __cplusplus || __cplusplus < 201103L # error "Compiler does not advertise C++11 conformance" #endif namespace cxx11test { constexpr int get_val() { return 20; } struct testinit { int i; double d; }; class delegate { public: delegate(int n) : n(n) {} delegate(): delegate(2354) {} virtual int getval() { return this->n; }; protected: int n; }; class overridden : public delegate { public: overridden(int n): delegate(n) {} virtual int getval() override final { return this->n * 2; } }; class nocopy { public: nocopy(int i): i(i) {} nocopy() = default; nocopy(const nocopy&) = delete; nocopy & operator=(const nocopy&) = delete; private: int i; }; // for testing lambda expressions template Ret eval(Fn f, Ret v) { return f(v); } // for testing variadic templates and trailing return types template auto sum(V first) -> V { return first; } template auto sum(V first, Args... 
rest) -> V { return first + sum(rest...); } } ' # Test code for whether the C++ compiler supports C++11 (body of main) ac_cxx_conftest_cxx11_main=' { // Test auto and decltype auto a1 = 6538; auto a2 = 48573953.4; auto a3 = "String literal"; int total = 0; for (auto i = a3; *i; ++i) { total += *i; } decltype(a2) a4 = 34895.034; } { // Test constexpr short sa[cxx11test::get_val()] = { 0 }; } { // Test initializer lists cxx11test::testinit il = { 4323, 435234.23544 }; } { // Test range-based for int array[] = {9, 7, 13, 15, 4, 18, 12, 10, 5, 3, 14, 19, 17, 8, 6, 20, 16, 2, 11, 1}; for (auto &x : array) { x += 23; } } { // Test lambda expressions using cxx11test::eval; assert (eval ([](int x) { return x*2; }, 21) == 42); double d = 2.0; assert (eval ([&](double x) { return d += x; }, 3.0) == 5.0); assert (d == 5.0); assert (eval ([=](double x) mutable { return d += x; }, 4.0) == 9.0); assert (d == 5.0); } { // Test use of variadic templates using cxx11test::sum; auto a = sum(1); auto b = sum(1, 2); auto c = sum(1.0, 2.0, 3.0); } { // Test constructor delegation cxx11test::delegate d1; cxx11test::delegate d2(); cxx11test::delegate d3(45); } { // Test override and final cxx11test::overridden o1(55464); } { // Test nullptr char *c = nullptr; } { // Test template brackets test_template<::test_template> v(test_template(12)); } { // Unicode literals char const *utf8 = u8"UTF-8 string \u2500"; char16_t const *utf16 = u"UTF-8 string \u2500"; char32_t const *utf32 = U"UTF-32 string \u2500"; } ' # Test code for whether the C compiler supports C++11 (complete). ac_cxx_conftest_cxx11_program="${ac_cxx_conftest_cxx98_globals} ${ac_cxx_conftest_cxx11_globals} int main (int argc, char **argv) { int ok = 0; ${ac_cxx_conftest_cxx98_main} ${ac_cxx_conftest_cxx11_main} return ok; } " # Test code for whether the C compiler supports C++98 (complete). ac_cxx_conftest_cxx98_program="${ac_cxx_conftest_cxx98_globals} int main (int argc, char **argv) { int ok = 0; ${ac_cxx_conftest_cxx98_main} return ok; } " # Auxiliary files required by this configure script. ac_aux_files="config.guess config.sub ltmain.sh missing install-sh compile" # Locations in which to look for auxiliary files. ac_aux_dir_candidates="${srcdir}/config" # Search for a directory containing all of the required auxiliary files, # $ac_aux_files, from the $PATH-style list $ac_aux_dir_candidates. # If we don't find one directory that contains all the files we need, # we report the set of missing files from the *first* directory in # $ac_aux_dir_candidates and give up. ac_missing_aux_files="" ac_first_candidate=: printf "%s\n" "$as_me:${as_lineno-$LINENO}: looking for aux files: $ac_aux_files" >&5 as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in $ac_aux_dir_candidates do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac as_found=: printf "%s\n" "$as_me:${as_lineno-$LINENO}: trying $as_dir" >&5 ac_aux_dir_found=yes ac_install_sh= for ac_aux in $ac_aux_files do # As a special case, if "install-sh" is required, that requirement # can be satisfied by any of "install-sh", "install.sh", or "shtool", # and $ac_install_sh is set appropriately for whichever one is found. 
if test x"$ac_aux" = x"install-sh" then if test -f "${as_dir}install-sh"; then printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}install-sh found" >&5 ac_install_sh="${as_dir}install-sh -c" elif test -f "${as_dir}install.sh"; then printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}install.sh found" >&5 ac_install_sh="${as_dir}install.sh -c" elif test -f "${as_dir}shtool"; then printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}shtool found" >&5 ac_install_sh="${as_dir}shtool install -c" else ac_aux_dir_found=no if $ac_first_candidate; then ac_missing_aux_files="${ac_missing_aux_files} install-sh" else break fi fi else if test -f "${as_dir}${ac_aux}"; then printf "%s\n" "$as_me:${as_lineno-$LINENO}: ${as_dir}${ac_aux} found" >&5 else ac_aux_dir_found=no if $ac_first_candidate; then ac_missing_aux_files="${ac_missing_aux_files} ${ac_aux}" else break fi fi fi done if test "$ac_aux_dir_found" = yes; then ac_aux_dir="$as_dir" break fi ac_first_candidate=false as_found=false done IFS=$as_save_IFS if $as_found then : else case e in #( e) as_fn_error $? "cannot find required auxiliary files:$ac_missing_aux_files" "$LINENO" 5 ;; esac fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. # They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. if test -f "${ac_aux_dir}config.guess"; then ac_config_guess="$SHELL ${ac_aux_dir}config.guess" fi if test -f "${ac_aux_dir}config.sub"; then ac_config_sub="$SHELL ${ac_aux_dir}config.sub" fi if test -f "$ac_aux_dir/configure"; then ac_configure="$SHELL ${ac_aux_dir}configure" fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: '$ac_var' was set to '$ac_old_val' in the previous run" >&5 printf "%s\n" "$as_me: error: '$ac_var' was set to '$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: '$ac_var' was not set in the previous run" >&5 printf "%s\n" "$as_me: error: '$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: '$ac_var' has changed since the previous run:" >&5 printf "%s\n" "$as_me: error: '$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in '$ac_var' since the previous run:" >&5 printf "%s\n" "$as_me: warning: ignoring whitespace changes in '$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: former value: '$ac_old_val'" >&5 printf "%s\n" "$as_me: former value: '$ac_old_val'" >&2;} { printf "%s\n" "$as_me:${as_lineno-$LINENO}: current value: '$ac_new_val'" >&5 printf "%s\n" "$as_me: current value: '$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. 
if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`printf "%s\n" "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 printf "%s\n" "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run '${MAKE-make} distclean' and/or 'rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. ## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CARES_VERSION_INFO="20:1:18" ac_config_headers="$ac_config_headers src/lib/ares_config.h include/ares_build.h" # Expand $ac_aux_dir to an absolute path. am_aux_dir=`cd "$ac_aux_dir" && pwd` ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then if test "$as_dir$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. 
shift ac_cv_prog_CC="$as_dir$ac_word${1+' '}$@" fi fi fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}clang", so it can be a program name with args. set dummy ${ac_tool_prefix}clang; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}clang" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "clang", so it can be a program name with args. set dummy clang; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="clang" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi fi test -z "$CC" && { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See 'config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion -version; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. 
{ printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5 printf %s "checking whether the C compiler works... " >&6; } ac_link_default=`printf "%s\n" "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then : # Autoconf-2.13 could set the ac_cv_exeext variable to 'no'. # So ignore a value of 'no', otherwise this would lead to 'EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. break;; *.* ) if test ${ac_cv_exeext+y} && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an '-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else case e in #( e) ac_file='' ;; esac fi if test -z "$ac_file" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error 77 "C compiler cannot create executables See 'config.log' for more details" "$LINENO" 5; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5 printf %s "checking for C compiler default output file name... " >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 printf "%s\n" "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 printf %s "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then : # If both 'conftest.exe' and 'conftest' are 'present' (well, observable) # catch 'conftest.exe'. 
For instance with Cygwin, 'ls conftest' will # work properly (i.e., refer to 'conftest.exe'), while it won't with # 'rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else case e in #( e) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See 'config.log' for more details" "$LINENO" 5; } ;; esac fi rm -f conftest conftest$ac_cv_exeext { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 printf "%s\n" "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main (void) { FILE *f = fopen ("conftest.out", "w"); if (!f) return 1; return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 printf %s "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error 77 "cannot run C compiled programs. If you meant to cross compile, use '--host'. See 'config.log' for more details" "$LINENO" 5; } fi fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 printf "%s\n" "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext \ conftest.o conftest.obj conftest.out ac_clean_files=$ac_clean_files_save { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 printf %s "checking for suffix of object files... " >&6; } if test ${ac_cv_objext+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else case e in #( e) printf "%s\n" "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See 'config.log' for more details" "$LINENO" 5; } ;; esac fi rm -f conftest.$ac_cv_objext conftest.$ac_ext ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 printf "%s\n" "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C" >&5 printf %s "checking whether the compiler supports GNU C... " >&6; } if test ${ac_cv_c_compiler_gnu+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_compiler_gnu=yes else case e in #( e) ac_compiler_gnu=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 printf "%s\n" "$ac_cv_c_compiler_gnu" >&6; } ac_compiler_gnu=$ac_cv_c_compiler_gnu if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+y} ac_save_CFLAGS=$CFLAGS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 printf %s "checking whether $CC accepts -g... " >&6; } if test ${ac_cv_prog_cc_g+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_g=yes else case e in #( e) CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 printf "%s\n" "$ac_cv_prog_cc_g" >&6; } if test $ac_test_CFLAGS; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi ac_prog_cc_stdc=no if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C11 features" >&5 printf %s "checking for $CC option to enable C11 features... 
" >&6; } if test ${ac_cv_prog_cc_c11+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c11=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_c_conftest_c11_program _ACEOF for ac_arg in '' -std=gnu11 do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c11=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c11" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c11" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c11" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c11" >&5 printf "%s\n" "$ac_cv_prog_cc_c11" >&6; } CC="$CC $ac_cv_prog_cc_c11" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c11 ac_prog_cc_stdc=c11 ;; esac fi fi if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C99 features" >&5 printf %s "checking for $CC option to enable C99 features... " >&6; } if test ${ac_cv_prog_cc_c99+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c99=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_c_conftest_c99_program _ACEOF for ac_arg in '' -std=gnu99 -std=c99 -c99 -qlanglvl=extc1x -qlanglvl=extc99 -AC99 -D_STDC_C99= do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c99=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c99" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c99" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c99" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99" >&5 printf "%s\n" "$ac_cv_prog_cc_c99" >&6; } CC="$CC $ac_cv_prog_cc_c99" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c99 ac_prog_cc_stdc=c99 ;; esac fi fi if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C89 features" >&5 printf %s "checking for $CC option to enable C89 features... " >&6; } if test ${ac_cv_prog_cc_c89+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_c_conftest_c89_program _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c89" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c89" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 printf "%s\n" "$ac_cv_prog_cc_c89" >&6; } CC="$CC $ac_cv_prog_cc_c89" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c89 ac_prog_cc_stdc=c89 ;; esac fi fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5 printf %s "checking whether $CC understands -c and -o together... " >&6; } if test ${am_cv_prog_cc_c_o+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5 ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5 printf "%s\n" "$am_cv_prog_cc_c_o" >&6; } if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. 
# A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_header= ac_cache= for ac_item in $ac_header_c_list do if test $ac_cache; then ac_fn_c_check_header_compile "$LINENO" $ac_header ac_cv_header_$ac_cache "$ac_includes_default" if eval test \"x\$ac_cv_header_$ac_cache\" = xyes; then printf "%s\n" "#define $ac_item 1" >> confdefs.h fi ac_header= ac_cache= elif test $ac_header; then ac_cache=$ac_item else ac_header=$ac_item fi done if test $ac_cv_header_stdlib_h = yes && test $ac_cv_header_string_h = yes then : printf "%s\n" "#define STDC_HEADERS 1" >>confdefs.h fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether it is safe to define __EXTENSIONS__" >&5 printf %s "checking whether it is safe to define __EXTENSIONS__... " >&6; } if test ${ac_cv_safe_to_define___extensions__+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ # define __EXTENSIONS__ 1 $ac_includes_default int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_safe_to_define___extensions__=yes else case e in #( e) ac_cv_safe_to_define___extensions__=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_safe_to_define___extensions__" >&5 printf "%s\n" "$ac_cv_safe_to_define___extensions__" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether _XOPEN_SOURCE should be defined" >&5 printf %s "checking whether _XOPEN_SOURCE should be defined... " >&6; } if test ${ac_cv_should_define__xopen_source+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_should_define__xopen_source=no if test $ac_cv_header_wchar_h = yes then : cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include mbstate_t x; int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #define _XOPEN_SOURCE 500 #include mbstate_t x; int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_should_define__xopen_source=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_should_define__xopen_source" >&5 printf "%s\n" "$ac_cv_should_define__xopen_source" >&6; } printf "%s\n" "#define _ALL_SOURCE 1" >>confdefs.h printf "%s\n" "#define _DARWIN_C_SOURCE 1" >>confdefs.h printf "%s\n" "#define _GNU_SOURCE 1" >>confdefs.h printf "%s\n" "#define _HPUX_ALT_XOPEN_SOCKET_API 1" >>confdefs.h printf "%s\n" "#define _NETBSD_SOURCE 1" >>confdefs.h printf "%s\n" "#define _OPENBSD_SOURCE 1" >>confdefs.h printf "%s\n" "#define _POSIX_PTHREAD_SEMANTICS 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_ATTRIBS_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_BFP_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_DFP_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_FUNCS_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_IEC_60559_TYPES_EXT__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_LIB_EXT2__ 1" >>confdefs.h printf "%s\n" "#define __STDC_WANT_MATH_SPEC_FUNCS__ 1" >>confdefs.h printf "%s\n" "#define _TANDEM_SOURCE 1" >>confdefs.h if test $ac_cv_header_minix_config_h = yes then : MINIX=yes printf "%s\n" "#define _MINIX 1" >>confdefs.h printf "%s\n" "#define _POSIX_SOURCE 1" >>confdefs.h printf "%s\n" "#define _POSIX_1_SOURCE 2" >>confdefs.h else case e in #( e) MINIX= ;; esac fi if test $ac_cv_safe_to_define___extensions__ = yes then : printf "%s\n" "#define __EXTENSIONS__ 1" >>confdefs.h fi if test $ac_cv_should_define__xopen_source = yes then : printf "%s\n" "#define _XOPEN_SOURCE 500" >>confdefs.h fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++ do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 printf "%s\n" "$CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++ do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CXX="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 printf "%s\n" "$ac_ct_CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C++" >&5 printf %s "checking whether the compiler supports GNU C++... " >&6; } if test ${ac_cv_cxx_compiler_gnu+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_compiler_gnu=yes else case e in #( e) ac_compiler_gnu=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 printf "%s\n" "$ac_cv_cxx_compiler_gnu" >&6; } ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+y} ac_save_CXXFLAGS=$CXXFLAGS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 printf %s "checking whether $CXX accepts -g... " >&6; } if test ${ac_cv_prog_cxx_g+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_g=yes else case e in #( e) CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : else case e in #( e) ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 printf "%s\n" "$ac_cv_prog_cxx_g" >&6; } if test $ac_test_CXXFLAGS; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_prog_cxx_stdcxx=no if test x$ac_prog_cxx_stdcxx = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++11 features" >&5 printf %s "checking for $CXX option to enable C++11 features... " >&6; } if test ${ac_cv_prog_cxx_cxx11+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cxx_cxx11=no ac_save_CXX=$CXX cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_cxx_conftest_cxx11_program _ACEOF for ac_arg in '' -std=gnu++11 -std=gnu++0x -std=c++11 -std=c++0x -qlanglvl=extended0x -AA do CXX="$ac_save_CXX $ac_arg" if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_cxx11=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cxx_cxx11" != "xno" && break done rm -f conftest.$ac_ext CXX=$ac_save_CXX ;; esac fi if test "x$ac_cv_prog_cxx_cxx11" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cxx_cxx11" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx11" >&5 printf "%s\n" "$ac_cv_prog_cxx_cxx11" >&6; } CXX="$CXX $ac_cv_prog_cxx_cxx11" ;; esac fi ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx11 ac_prog_cxx_stdcxx=cxx11 ;; esac fi fi if test x$ac_prog_cxx_stdcxx = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++98 features" >&5 printf %s "checking for $CXX option to enable C++98 features... " >&6; } if test ${ac_cv_prog_cxx_cxx98+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cxx_cxx98=no ac_save_CXX=$CXX cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_cxx_conftest_cxx98_program _ACEOF for ac_arg in '' -std=gnu++98 -std=c++98 -qlanglvl=extended -AA do CXX="$ac_save_CXX $ac_arg" if ac_fn_cxx_try_compile "$LINENO" then : ac_cv_prog_cxx_cxx98=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cxx_cxx98" != "xno" && break done rm -f conftest.$ac_ext CXX=$ac_save_CXX ;; esac fi if test "x$ac_cv_prog_cxx_cxx98" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cxx_cxx98" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx98" >&5 printf "%s\n" "$ac_cv_prog_cxx_cxx98" >&6; } CXX="$CXX $ac_cv_prog_cxx_cxx98" ;; esac fi ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx98 ac_prog_cxx_stdcxx=cxx98 ;; esac fi fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ax_cxx_compile_alternatives="14 1y" ax_cxx_compile_cxx14_required=false ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_success=no if test x$ac_success = xno; then for alternative in ${ax_cxx_compile_alternatives}; do for switch in -std=c++${alternative} +std=c++${alternative} "-h std=c++${alternative}" MSVC; do if test x"$switch" = xMSVC; then switch=-std:c++${alternative} cachevar=`printf "%s\n" "ax_cv_cxx_compile_cxx14_${switch}_MSVC" | sed "$as_sed_sh"` else cachevar=`printf "%s\n" "ax_cv_cxx_compile_cxx14_$switch" | sed "$as_sed_sh"` fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CXX supports C++14 features with $switch" >&5 printf %s "checking whether $CXX supports C++14 features with $switch... 
" >&6; } if eval test \${$cachevar+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_CXX="$CXX" CXX="$CXX $switch" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ // If the compiler admits that it is not ready for C++11, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" // MSVC always sets __cplusplus to 199711L in older versions; newer versions // only set it correctly if /Zc:__cplusplus is specified as well as a // /std:c++NN switch: // https://devblogs.microsoft.com/cppblog/msvc-now-correctly-reports-__cplusplus/ #elif __cplusplus < 201103L && !defined _MSC_VER #error "This is not a C++11 compiler" #else namespace cxx11 { namespace test_static_assert { template struct check { static_assert(sizeof(int) <= sizeof(T), "not big enough"); }; } namespace test_final_override { struct Base { virtual ~Base() {} virtual void f() {} }; struct Derived : public Base { virtual ~Derived() override {} virtual void f() override {} }; } namespace test_double_right_angle_brackets { template < typename T > struct check {}; typedef check single_type; typedef check> double_type; typedef check>> triple_type; typedef check>>> quadruple_type; } namespace test_decltype { int f() { int a = 1; decltype(a) b = 2; return a + b; } } namespace test_type_deduction { template < typename T1, typename T2 > struct is_same { static const bool value = false; }; template < typename T > struct is_same { static const bool value = true; }; template < typename T1, typename T2 > auto add(T1 a1, T2 a2) -> decltype(a1 + a2) { return a1 + a2; } int test(const int c, volatile int v) { static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == false, ""); auto ac = c; auto av = v; auto sumi = ac + av + 'x'; auto sumf = ac + av + 1.0; static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == true, ""); return (sumf > 0.0) ? sumi : add(c, v); } } namespace test_noexcept { int f() { return 0; } int g() noexcept { return 0; } static_assert(noexcept(f()) == false, ""); static_assert(noexcept(g()) == true, ""); } namespace test_constexpr { template < typename CharT > unsigned long constexpr strlen_c_r(const CharT *const s, const unsigned long acc) noexcept { return *s ? 
strlen_c_r(s + 1, acc + 1) : acc; } template < typename CharT > unsigned long constexpr strlen_c(const CharT *const s) noexcept { return strlen_c_r(s, 0UL); } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("1") == 1UL, ""); static_assert(strlen_c("example") == 7UL, ""); static_assert(strlen_c("another\0example") == 7UL, ""); } namespace test_rvalue_references { template < int N > struct answer { static constexpr int value = N; }; answer<1> f(int&) { return answer<1>(); } answer<2> f(const int&) { return answer<2>(); } answer<3> f(int&&) { return answer<3>(); } void test() { int i = 0; const int c = 0; static_assert(decltype(f(i))::value == 1, ""); static_assert(decltype(f(c))::value == 2, ""); static_assert(decltype(f(0))::value == 3, ""); } } namespace test_uniform_initialization { struct test { static const int zero {}; static const int one {1}; }; static_assert(test::zero == 0, ""); static_assert(test::one == 1, ""); } namespace test_lambdas { void test1() { auto lambda1 = [](){}; auto lambda2 = lambda1; lambda1(); lambda2(); } int test2() { auto a = [](int i, int j){ return i + j; }(1, 2); auto b = []() -> int { return '0'; }(); auto c = [=](){ return a + b; }(); auto d = [&](){ return c; }(); auto e = [a, &b](int x) mutable { const auto identity = [](int y){ return y; }; for (auto i = 0; i < a; ++i) a += b--; return x + identity(a + b); }(0); return a + b + c + d + e; } int test3() { const auto nullary = [](){ return 0; }; const auto unary = [](int x){ return x; }; using nullary_t = decltype(nullary); using unary_t = decltype(unary); const auto higher1st = [](nullary_t f){ return f(); }; const auto higher2nd = [unary](nullary_t f1){ return [unary, f1](unary_t f2){ return f2(unary(f1())); }; }; return higher1st(nullary) + higher2nd(nullary)(unary); } } namespace test_variadic_templates { template struct sum; template struct sum { static constexpr auto value = N0 + sum::value; }; template <> struct sum<> { static constexpr auto value = 0; }; static_assert(sum<>::value == 0, ""); static_assert(sum<1>::value == 1, ""); static_assert(sum<23>::value == 23, ""); static_assert(sum<1, 2>::value == 3, ""); static_assert(sum<5, 5, 11>::value == 21, ""); static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, ""); } // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function // because of this. namespace test_template_alias_sfinae { struct foo {}; template using member = typename T::member_type; template void func(...) {} template void func(member*) {} void test(); void test() { func(0); } } } // namespace cxx11 #endif // __cplusplus >= 201103L // If the compiler admits that it is not ready for C++14, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" #elif __cplusplus < 201402L && !defined _MSC_VER #error "This is not a C++14 compiler" #else namespace cxx14 { namespace test_polymorphic_lambdas { int test() { const auto lambda = [](auto&&... args){ const auto istiny = [](auto x){ return (sizeof(x) == 1UL) ? 1 : 0; }; const int aretiny[] = { istiny(args)... 
}; return aretiny[0]; }; return lambda(1, 1L, 1.0f, '1'); } } namespace test_binary_literals { constexpr auto ivii = 0b0000000000101010; static_assert(ivii == 42, "wrong value"); } namespace test_generalized_constexpr { template < typename CharT > constexpr unsigned long strlen_c(const CharT *const s) noexcept { auto length = 0UL; for (auto p = s; *p; ++p) ++length; return length; } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("x") == 1UL, ""); static_assert(strlen_c("test") == 4UL, ""); static_assert(strlen_c("another\0test") == 7UL, ""); } namespace test_lambda_init_capture { int test() { auto x = 0; const auto lambda1 = [a = x](int b){ return a + b; }; const auto lambda2 = [a = lambda1(x)](){ return a; }; return lambda2(); } } namespace test_digit_separators { constexpr auto ten_million = 100'000'000; static_assert(ten_million == 100000000, ""); } namespace test_return_type_deduction { auto f(int& x) { return x; } decltype(auto) g(int& x) { return x; } template < typename T1, typename T2 > struct is_same { static constexpr auto value = false; }; template < typename T > struct is_same { static constexpr auto value = true; }; int test() { auto x = 0; static_assert(is_same::value, ""); static_assert(is_same::value, ""); return x; } } } // namespace cxx14 #endif // __cplusplus >= 201402L _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : eval $cachevar=yes else case e in #( e) eval $cachevar=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CXX="$ac_save_CXX" ;; esac fi eval ac_res=\$$cachevar { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if eval test x\$$cachevar = xyes; then CXX="$CXX $switch" if test -n "$CXXCPP" ; then CXXCPP="$CXXCPP $switch" fi ac_success=yes break fi done if test x$ac_success = xyes; then break fi done fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test x$ax_cxx_compile_cxx14_required = xtrue; then if test x$ac_success = xno; then as_fn_error $? "*** A compiler with support for C++14 language features is required." "$LINENO" 5 fi fi if test x$ac_success = xno; then HAVE_CXX14=0 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: No compiler with C++14 support was found" >&5 printf "%s\n" "$as_me: No compiler with C++14 support was found" >&6;} else HAVE_CXX14=1 printf "%s\n" "#define HAVE_CXX14 1" >>confdefs.h fi am__api_version='1.17' # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 printf %s "checking for a BSD-compatible install... 
" >&6; } if test -z "$INSTALL"; then if test ${ac_cv_path_install+y} then : printf %s "(cached) " >&6 else case e in #( e) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac # Account for fact that we put trailing slashes in our PATH walk. case $as_dir in #(( ./ | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir/" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir ;; esac fi if test ${ac_cv_path_install+y}; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 printf "%s\n" "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether sleep supports fractional seconds" >&5 printf %s "checking whether sleep supports fractional seconds... " >&6; } if test ${am_cv_sleep_fractional_seconds+y} then : printf %s "(cached) " >&6 else case e in #( e) if sleep 0.001 2>/dev/null then : am_cv_sleep_fractional_seconds=yes else case e in #( e) am_cv_sleep_fractional_seconds=no ;; esac fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_sleep_fractional_seconds" >&5 printf "%s\n" "$am_cv_sleep_fractional_seconds" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking filesystem timestamp resolution" >&5 printf %s "checking filesystem timestamp resolution... " >&6; } if test ${am_cv_filesystem_timestamp_resolution+y} then : printf %s "(cached) " >&6 else case e in #( e) # Default to the worst case. am_cv_filesystem_timestamp_resolution=2 # Only try to go finer than 1 sec if sleep can do it. # Don't try 1 sec, because if 0.01 sec and 0.1 sec don't work, # - 1 sec is not much of a win compared to 2 sec, and # - it takes 2 seconds to perform the test whether 1 sec works. 
# # Instead, just use the default 2s on platforms that have 1s resolution, # accept the extra 1s delay when using $sleep in the Automake tests, in # exchange for not incurring the 2s delay for running the test for all # packages. # am_try_resolutions= if test "$am_cv_sleep_fractional_seconds" = yes; then # Even a millisecond often causes a bunch of false positives, # so just try a hundredth of a second. The time saved between .001 and # .01 is not terribly consequential. am_try_resolutions="0.01 0.1 $am_try_resolutions" fi # In order to catch current-generation FAT out, we must *modify* files # that already exist; the *creation* timestamp is finer. Use names # that make ls -t sort them differently when they have equal # timestamps than when they have distinct timestamps, keeping # in mind that ls -t prints the *newest* file first. rm -f conftest.ts? : > conftest.ts1 : > conftest.ts2 : > conftest.ts3 # Make sure ls -t actually works. Do 'set' in a subshell so we don't # clobber the current shell's arguments. (Outer-level square brackets # are removed by m4; they're present so that m4 does not expand # ; be careful, easy to get confused.) if ( set X `ls -t conftest.ts[12]` && { test "$*" != "X conftest.ts1 conftest.ts2" || test "$*" != "X conftest.ts2 conftest.ts1"; } ); then :; else # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". printf "%s\n" ""Bad output from ls -t: \"`ls -t conftest.ts[12]`\""" >&5 { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "ls -t produces unexpected output. Make sure there is not a broken ls alias in your environment. See 'config.log' for more details" "$LINENO" 5; } fi for am_try_res in $am_try_resolutions; do # Any one fine-grained sleep might happen to cross the boundary # between two values of a coarser actual resolution, but if we do # two fine-grained sleeps in a row, at least one of them will fall # entirely within a coarse interval. echo alpha > conftest.ts1 sleep $am_try_res echo beta > conftest.ts2 sleep $am_try_res echo gamma > conftest.ts3 # We assume that 'ls -t' will make use of high-resolution # timestamps if the operating system supports them at all. if (set X `ls -t conftest.ts?` && test "$2" = conftest.ts3 && test "$3" = conftest.ts2 && test "$4" = conftest.ts1); then # # Ok, ls -t worked. If we're at a resolution of 1 second, we're done, # because we don't need to test make. make_ok=true if test $am_try_res != 1; then # But if we've succeeded so far with a subsecond resolution, we # have one more thing to check: make. It can happen that # everything else supports the subsecond mtimes, but make doesn't; # notably on macOS, which ships make 3.81 from 2006 (the last one # released under GPLv2). https://bugs.gnu.org/68808 # # We test $MAKE if it is defined in the environment, else "make". # It might get overridden later, but our hope is that in practice # it does not matter: it is the system "make" which is (by far) # the most likely to be broken, whereas if the user overrides it, # probably they did so with a better, or at least not worse, make. 
# https://lists.gnu.org/archive/html/automake/2024-06/msg00051.html # # Create a Makefile (real tab character here): rm -f conftest.mk echo 'conftest.ts1: conftest.ts2' >conftest.mk echo ' touch conftest.ts2' >>conftest.mk # # Now, running # touch conftest.ts1; touch conftest.ts2; make # should touch ts1 because ts2 is newer. This could happen by luck, # but most often, it will fail if make's support is insufficient. So # test for several consecutive successes. # # (We reuse conftest.ts[12] because we still want to modify existing # files, not create new ones, per above.) n=0 make=${MAKE-make} until test $n -eq 3; do echo one > conftest.ts1 sleep $am_try_res echo two > conftest.ts2 # ts2 should now be newer than ts1 if $make -f conftest.mk | grep 'up to date' >/dev/null; then make_ok=false break # out of $n loop fi n=`expr $n + 1` done fi # if $make_ok; then # Everything we know to check worked out, so call this resolution good. am_cv_filesystem_timestamp_resolution=$am_try_res break # out of $am_try_res loop fi # Otherwise, we'll go on to check the next resolution. fi done rm -f conftest.ts? # (end _am_filesystem_timestamp_resolution) ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_filesystem_timestamp_resolution" >&5 printf "%s\n" "$am_cv_filesystem_timestamp_resolution" >&6; } # This check should not be cached, as it may vary across builds of # different projects. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 printf %s "checking whether build environment is sane... " >&6; } # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: '$srcdir'" "$LINENO" 5;; esac # Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). am_build_env_is_sane=no am_has_slept=no rm -f conftest.file for am_try in 1 2; do echo "timestamp, slept: $am_has_slept" > conftest.file if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi test "$2" = conftest.file ); then am_build_env_is_sane=yes break fi # Just in case. sleep "$am_cv_filesystem_timestamp_resolution" am_has_slept=yes done { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_build_env_is_sane" >&5 printf "%s\n" "$am_build_env_is_sane" >&6; } if test "$am_build_env_is_sane" = no; then as_fn_error $? "newly created file is older than distributed files! Check your system clock" "$LINENO" 5 fi # If we didn't sleep, we still need to ensure time stamps of config.status and # generated files are strictly newer. am_sleep_pid= if test -e conftest.file || grep 'slept: no' conftest.file >/dev/null 2>&1 then : else case e in #( e) ( sleep "$am_cv_filesystem_timestamp_resolution" ) & am_sleep_pid=$! ;; esac fi rm -f conftest.file test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. 
# By default was 's,x,x', remove it if useless. ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`printf "%s\n" "$program_transform_name" | sed "$ac_script"` if test x"${MISSING+set}" != xset; then MISSING="\${SHELL} '$am_aux_dir/missing'" fi # Use eval to expand $SHELL if eval "$MISSING --is-lightweight"; then am_missing_run="$MISSING " else am_missing_run= { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: 'missing' script is too old or missing" >&5 printf "%s\n" "$as_me: WARNING: 'missing' script is too old or missing" >&2;} fi if test x"${install_sh+set}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using 'strip' when the user # run "make install-strip". However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the 'STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_STRIP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 printf "%s\n" "$STRIP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_STRIP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 printf "%s\n" "$ac_ct_STRIP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a race-free mkdir -p" >&5 printf %s "checking for a race-free mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if test ${ac_cv_path_mkdir+y} then : printf %s "(cached) " >&6 else case e in #( e) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do as_fn_executable_p "$as_dir$ac_prog$ac_exec_ext" || continue case `"$as_dir$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir ('*'coreutils) '* | \ *'BusyBox '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS ;; esac fi test -d ./--version && rmdir ./--version if test ${ac_cv_path_mkdir+y}; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use plain mkdir -p, # in the hope it doesn't have the bugs of ancient mkdir. MKDIR_P='mkdir -p' fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 printf "%s\n" "$MKDIR_P" >&6; } for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_AWK+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 printf "%s\n" "$AWK" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$AWK" && break done { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 printf %s "checking whether ${MAKE-make} sets \$(MAKE)... 
" >&6; } set x ${MAKE-make} ac_make=`printf "%s\n" "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval test \${ac_cv_prog_make_${ac_make}_set+y} then : printf %s "(cached) " >&6 else case e in #( e) cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make ;; esac fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } SET_MAKE= else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} supports the include directive" >&5 printf %s "checking whether ${MAKE-make} supports the include directive... " >&6; } cat > confinc.mk << 'END' am__doit: @echo this is the am__doit target >confinc.out .PHONY: am__doit END am__include="#" am__quote= # BSD make does it like this. echo '.include "confinc.mk" # ignored' > confmf.BSD # Other make implementations (GNU, Solaris 10, AIX) do it like this. echo 'include confinc.mk # ignored' > confmf.GNU _am_result=no for s in GNU BSD; do { echo "$as_me:$LINENO: ${MAKE-make} -f confmf.$s && cat confinc.out" >&5 (${MAKE-make} -f confmf.$s && cat confinc.out) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } case $?:`cat confinc.out 2>/dev/null` in #( '0:this is the am__doit target') : case $s in #( BSD) : am__include='.include' am__quote='"' ;; #( *) : am__include='include' am__quote='' ;; esac ;; #( *) : ;; esac if test "$am__include" != "#"; then _am_result="yes ($s style)" break fi done rm -f confinc.* confmf.* { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ${_am_result}" >&5 printf "%s\n" "${_am_result}" >&6; } # Check whether --enable-dependency-tracking was given. if test ${enable_dependency_tracking+y} then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi AM_DEFAULT_VERBOSITY=1 # Check whether --enable-silent-rules was given. if test ${enable_silent_rules+y} then : enableval=$enable_silent_rules; fi am_make=${MAKE-make} { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5 printf %s "checking whether $am_make supports nested variables... 
" >&6; } if test ${am_cv_make_support_nested_variables+y} then : printf %s "(cached) " >&6 else case e in #( e) if printf "%s\n" 'TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5 printf "%s\n" "$am_cv_make_support_nested_variables" >&6; } AM_BACKSLASH='\' am__rm_f_notfound= if (rm -f && rm -fr && rm -rf) 2>/dev/null then : else case e in #( e) am__rm_f_notfound='""' ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking xargs -n works" >&5 printf %s "checking xargs -n works... " >&6; } if test ${am_cv_xargs_n_works+y} then : printf %s "(cached) " >&6 else case e in #( e) if test "`echo 1 2 3 | xargs -n2 echo`" = "1 2 3" then : am_cv_xargs_n_works=yes else case e in #( e) am_cv_xargs_n_works=no ;; esac fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_xargs_n_works" >&5 printf "%s\n" "$am_cv_xargs_n_works" >&6; } if test "$am_cv_xargs_n_works" = yes then : am__xargs_n='xargs -n' else case e in #( e) am__xargs_n='am__xargs_n () { shift; sed "s/ /\\n/g" | while read am__xargs_n_arg; do "" "$am__xargs_n_arg"; done; }' ;; esac fi if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='c-ares' VERSION='1.33.1' printf "%s\n" "#define PACKAGE \"$PACKAGE\"" >>confdefs.h printf "%s\n" "#define VERSION \"$VERSION\"" >>confdefs.h # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # For better backward compatibility. To be removed once Automake 1.9.x # dies out for good. For more background, see: # # mkdir_p='$(MKDIR_P)' # We need awk for the "check" target (and possibly the TAP driver). The # system "awk" is bad on some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AMTAR='$${TAR-tar}' # We'll loop over all known methods to create a tar archive until one works. _am_tools='gnutar pax cpio none' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' depcc="$CC" am_compiler_list= { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 printf %s "checking dependency style of $depcc... " >&6; } if test ${am_cv_CC_dependencies_compiler_type+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. 
For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thus: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 printf "%s\n" "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi depcc="$CXX" am_compiler_list= { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 printf %s "checking dependency style of $depcc... " >&6; } if test ${am_cv_CXX_dependencies_compiler_type+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CXX_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. 
am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thus: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CXX_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CXX_dependencies_compiler_type=none fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_CXX_dependencies_compiler_type" >&5 printf "%s\n" "$am_cv_CXX_dependencies_compiler_type" >&6; } CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then am__fastdepCXX_TRUE= am__fastdepCXX_FALSE='#' else am__fastdepCXX_TRUE='#' am__fastdepCXX_FALSE= fi # Variables for tags utilities; see am/tags.am if test -z "$CTAGS"; then CTAGS=ctags fi if test -z "$ETAGS"; then ETAGS=etags fi if test -z "$CSCOPE"; then CSCOPE=cscope fi case `pwd` in *\ * | *\ *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5 printf "%s\n" "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;; esac macro_version='2.4.7' macro_revision='2.4.7' ltmain=$ac_aux_dir/ltmain.sh # Make sure we can run config.sub. $SHELL "${ac_aux_dir}config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL ${ac_aux_dir}config.sub" "$LINENO" 5 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 printf %s "checking build system type... " >&6; } if test ${ac_cv_build+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "${ac_aux_dir}config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "${ac_aux_dir}config.sub" $ac_build_alias` || as_fn_error $? "$SHELL ${ac_aux_dir}config.sub $ac_build_alias failed" "$LINENO" 5 ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 printf "%s\n" "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 printf %s "checking host system type... 
" >&6; } if test ${ac_cv_host+y} then : printf %s "(cached) " >&6 else case e in #( e) if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "${ac_aux_dir}config.sub" $host_alias` || as_fn_error $? "$SHELL ${ac_aux_dir}config.sub $host_alias failed" "$LINENO" 5 fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 printf "%s\n" "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac # Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\(["`$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5 printf %s "checking how to print strings... " >&6; } # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "" } case $ECHO in printf*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: printf" >&5 printf "%s\n" "printf" >&6; } ;; print*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: print -r" >&5 printf "%s\n" "print -r" >&6; } ;; *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: cat" >&5 printf "%s\n" "cat" >&6; } ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5 printf %s "checking for a sed that does not truncate output... " >&6; } if test ${ac_cv_path_SED+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for ac_i in 1 2 3 4 5 6 7; do ac_script="$ac_script$as_nl$ac_script" done echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed { ac_script=; unset ac_script;} if test -z "$SED"; then ac_path_SED_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in sed gsed do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_SED" || continue # Check for GNU ac_path_SED and select it if it is found. 
# Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in #( *GNU*) ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" '' >> "conftest.nl" "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_SED_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_SED="$ac_path_SED" ac_path_SED_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_SED_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_SED"; then as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5 fi else ac_cv_path_SED=$SED fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5 printf "%s\n" "$ac_cv_path_SED" >&6; } SED="$ac_cv_path_SED" rm -f conftest.sed test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 printf %s "checking for grep that handles long lines and -e... " >&6; } if test ${ac_cv_path_GREP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in grep ggrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_GREP" || continue # Check for GNU ac_path_GREP and select it if it is found. # Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in #( *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 printf "%s\n" "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 printf %s "checking for egrep... 
" >&6; } if test ${ac_cv_path_EGREP+y} then : printf %s "(cached) " >&6 else case e in #( e) if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in egrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. # Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in #( *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 printf "%s\n" "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" EGREP_TRADITIONAL=$EGREP ac_cv_path_EGREP_TRADITIONAL=$EGREP { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5 printf %s "checking for fgrep... " >&6; } if test ${ac_cv_path_FGREP+y} then : printf %s "(cached) " >&6 else case e in #( e) if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1 then ac_cv_path_FGREP="$GREP -F" else if test -z "$FGREP"; then ac_path_FGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in fgrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_FGREP" || continue # Check for GNU ac_path_FGREP and select it if it is found. 
# Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in #( *GNU*) ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'FGREP' >> "conftest.nl" "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_FGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_FGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_FGREP"; then as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_FGREP=$FGREP fi fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5 printf "%s\n" "$ac_cv_path_FGREP" >&6; } FGREP="$ac_cv_path_FGREP" test -z "$GREP" && GREP=grep # Check whether --with-gnu-ld was given. if test ${with_gnu_ld+y} then : withval=$with_gnu_ld; test no = "$withval" || with_gnu_ld=yes else case e in #( e) with_gnu_ld=no ;; esac fi ac_prog=ld if test yes = "$GCC"; then # Check if gcc -print-prog-name=ld gives a path. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 printf %s "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return, which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD=$ac_prog ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test yes = "$with_gnu_ld"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 printf %s "checking for GNU ld... " >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 printf %s "checking for non-GNU ld... " >&6; } fi if test ${lt_cv_path_LD+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$LD"; then lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD=$ac_dir/$ac_prog # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 printf "%s\n" "$LD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 printf %s "checking if the linker ($LD) is GNU ld... 
" >&6; } if test ${lt_cv_prog_gnu_ld+y} then : printf %s "(cached) " >&6 else case e in #( e) # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 &5 printf "%s\n" "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5 printf %s "checking for BSD- or MS-compatible name lister (nm)... " >&6; } if test ${lt_cv_path_NM+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM=$NM else lt_nm_to_check=${ac_tool_prefix}nm if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. tmp_nm=$ac_dir/$lt_tmp_nm if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then # Check to see if the nm accepts a BSD-compat flag. # Adding the 'sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file # MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty case $build_os in mingw*) lt_bad_file=conftest.nm/nofile ;; *) lt_bad_file=/dev/null ;; esac case `"$tmp_nm" -B $lt_bad_file 2>&1 | $SED '1q'` in *$lt_bad_file* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break 2 ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | $SED '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break 2 ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS=$lt_save_ifs done : ${lt_cv_path_NM=no} fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5 printf "%s\n" "$lt_cv_path_NM" >&6; } if test no != "$lt_cv_path_NM"; then NM=$lt_cv_path_NM else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else if test -n "$ac_tool_prefix"; then for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_DUMPBIN+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$DUMPBIN"; then ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi DUMPBIN=$ac_cv_prog_DUMPBIN if test -n "$DUMPBIN"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5 printf "%s\n" "$DUMPBIN" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$DUMPBIN" && break done fi if test -z "$DUMPBIN"; then ac_ct_DUMPBIN=$DUMPBIN for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_DUMPBIN+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_DUMPBIN"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN if test -n "$ac_ct_DUMPBIN"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5 printf "%s\n" "$ac_ct_DUMPBIN" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_DUMPBIN" && break done if test "x$ac_ct_DUMPBIN" = x; then DUMPBIN=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DUMPBIN=$ac_ct_DUMPBIN fi fi case `$DUMPBIN -symbols -headers /dev/null 2>&1 | $SED '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols -headers" ;; *) DUMPBIN=: ;; esac fi if test : != "$DUMPBIN"; then NM=$DUMPBIN fi fi test -z "$NM" && NM=nm { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5 printf %s "checking the name lister ($NM) interface... 
" >&6; } if test ${lt_cv_nm_interface+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: output\"" >&5) cat conftest.out >&5 if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5 printf "%s\n" "$lt_cv_nm_interface" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5 printf %s "checking whether ln -s works... " >&6; } LN_S=$as_ln_s if test "$LN_S" = "ln -s"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5 printf "%s\n" "no, using $LN_S" >&6; } fi # find the maximum length of command line arguments { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5 printf %s "checking the maximum length of command line arguments... " >&6; } if test ${lt_cv_sys_max_cmd_len+y} then : printf %s "(cached) " >&6 else case e in #( e) i=0 teststring=ABCD case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; bitrig* | darwin* | dragonfly* | freebsd* | midnightbsd* | netbsd* | openbsd*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. 
Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | $SED 's/.*[ ]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len" && \ test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test X`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test 17 != "$i" # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac ;; esac fi if test -n "$lt_cv_sys_max_cmd_len"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5 printf "%s\n" "$lt_cv_sys_max_cmd_len" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none" >&5 printf "%s\n" "none" >&6; } fi max_cmd_len=$lt_cv_sys_max_cmd_len : ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5 printf %s "checking how to convert $build file names to $host format... 
" >&6; } if test ${lt_cv_to_host_file_cmd+y} then : printf %s "(cached) " >&6 else case e in #( e) case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac ;; esac fi to_host_file_cmd=$lt_cv_to_host_file_cmd { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5 printf "%s\n" "$lt_cv_to_host_file_cmd" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5 printf %s "checking how to convert $build file names to toolchain format... " >&6; } if test ${lt_cv_to_tool_file_cmd+y} then : printf %s "(cached) " >&6 else case e in #( e) #assume ordinary cross tools, or native build. lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac ;; esac fi to_tool_file_cmd=$lt_cv_to_tool_file_cmd { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5 printf "%s\n" "$lt_cv_to_tool_file_cmd" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5 printf %s "checking for $LD option to reload object files... " >&6; } if test ${lt_cv_ld_reload_flag+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_ld_reload_flag='-r' ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5 printf "%s\n" "$lt_cv_ld_reload_flag" >&6; } reload_flag=$lt_cv_ld_reload_flag case $reload_flag in "" | " "*) ;; *) reload_flag=" $reload_flag" ;; esac reload_cmds='$LD$reload_flag -o $output$reload_objs' case $host_os in cygwin* | mingw* | pw32* | cegcc*) if test yes != "$GCC"; then reload_cmds=false fi ;; darwin*) if test yes = "$GCC"; then reload_cmds='$LTCC $LTCFLAGS -nostdlib $wl-r -o $output$reload_objs' else reload_cmds='$LD$reload_flag -o $output$reload_objs' fi ;; esac if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}file", so it can be a program name with args. set dummy ${ac_tool_prefix}file; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_FILECMD+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$FILECMD"; then ac_cv_prog_FILECMD="$FILECMD" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_FILECMD="${ac_tool_prefix}file" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi FILECMD=$ac_cv_prog_FILECMD if test -n "$FILECMD"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $FILECMD" >&5 printf "%s\n" "$FILECMD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_FILECMD"; then ac_ct_FILECMD=$FILECMD # Extract the first word of "file", so it can be a program name with args. set dummy file; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_FILECMD+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_FILECMD"; then ac_cv_prog_ac_ct_FILECMD="$ac_ct_FILECMD" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_FILECMD="file" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_FILECMD=$ac_cv_prog_ac_ct_FILECMD if test -n "$ac_ct_FILECMD"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_FILECMD" >&5 printf "%s\n" "$ac_ct_FILECMD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_FILECMD" = x; then FILECMD=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac FILECMD=$ac_ct_FILECMD fi else FILECMD="$ac_cv_prog_FILECMD" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. set dummy ${ac_tool_prefix}objdump; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_OBJDUMP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 printf "%s\n" "$OBJDUMP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. set dummy objdump; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_OBJDUMP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 printf "%s\n" "$ac_ct_OBJDUMP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi test -z "$OBJDUMP" && OBJDUMP=objdump { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5 printf %s "checking how to recognize dependent libraries... " >&6; } if test ${lt_cv_deplibs_check_method+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # 'unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # that responds to the $file_magic_cmd with a given extended regex. # If you have 'file' or equivalent on your system and you're not sure # whether 'pass_all' will *always* work, you probably want this one. 
case $host_os in aix[4-9]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[45]*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='$FILECMD -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. if ( file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly* | midnightbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=$FILECMD lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=$FILECMD case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]' lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[3-9]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=$FILECMD lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd* | bitrig*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; os2*) lt_cv_deplibs_check_method=pass_all ;; esac ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5 printf "%s\n" "$lt_cv_deplibs_check_method" >&6; } file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_DLLTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 printf "%s\n" "$DLLTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_DLLTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 printf "%s\n" "$ac_ct_DLLTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi test -z "$DLLTOOL" && DLLTOOL=dlltool { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5 printf %s "checking how to associate runtime and link libraries... 
" >&6; } if test ${lt_cv_sharedlib_from_linklib_cmd+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh; # decide which one to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd=$ECHO ;; esac ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5 printf "%s\n" "$lt_cv_sharedlib_from_linklib_cmd" >&6; } sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO if test -n "$ac_tool_prefix"; then for ac_prog in ar do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_AR+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$AR"; then ac_cv_prog_AR="$AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi AR=$ac_cv_prog_AR if test -n "$AR"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AR" >&5 printf "%s\n" "$AR" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$AR" && break done fi if test -z "$AR"; then ac_ct_AR=$AR for ac_prog in ar do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_AR+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_AR"; then ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AR="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_AR=$ac_cv_prog_ac_ct_AR if test -n "$ac_ct_AR"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5 printf "%s\n" "$ac_ct_AR" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_AR" && break done if test "x$ac_ct_AR" = x; then AR="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AR=$ac_ct_AR fi fi : ${AR=ar} # Use ARFLAGS variable as AR's operation code to sync the variable naming with # Automake. If both AR_FLAGS and ARFLAGS are specified, AR_FLAGS should have # higher priority because thats what people were doing historically (setting # ARFLAGS for automake and AR_FLAGS for libtool). FIXME: Make the AR_FLAGS # variable obsoleted/removed. test ${AR_FLAGS+y} || AR_FLAGS=${ARFLAGS-cr} lt_ar_flags=$AR_FLAGS # Make AR_FLAGS overridable by 'make ARFLAGS='. Don't try to run-time override # by AR_FLAGS because that was never working and AR_FLAGS is about to die. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5 printf %s "checking for archiver @FILE support... " >&6; } if test ${lt_cv_ar_at_file+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_ar_at_file=no cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5' { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test 0 -eq "$ac_status"; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test 0 -ne "$ac_status"; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5 printf "%s\n" "$lt_cv_ar_at_file" >&6; } if test no = "$lt_cv_ar_at_file"; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_STRIP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 printf "%s\n" "$STRIP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_STRIP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 printf "%s\n" "$ac_ct_STRIP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi test -z "$STRIP" && STRIP=: if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. set dummy ${ac_tool_prefix}ranlib; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_RANLIB+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi RANLIB=$ac_cv_prog_RANLIB if test -n "$RANLIB"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5 printf "%s\n" "$RANLIB" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_RANLIB"; then ac_ct_RANLIB=$RANLIB # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_RANLIB+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_RANLIB"; then ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_RANLIB="ranlib" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB if test -n "$ac_ct_RANLIB"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5 printf "%s\n" "$ac_ct_RANLIB" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_RANLIB" = x; then RANLIB=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac RANLIB=$ac_ct_RANLIB fi else RANLIB="$ac_cv_prog_RANLIB" fi test -z "$RANLIB" && RANLIB=: # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in bitrig* | openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Check for command to grab the raw symbol name followed by C symbol from nm. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5 printf %s "checking command to parse $NM output from $compiler object... " >&6; } if test ${lt_cv_sys_global_symbol_pipe+y} then : printf %s "(cached) " >&6 else case e in #( e) # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! 
;)] # Character class describing NM global symbol codes. symcode='[BCDEGRST]' # Regexp to match symbols that can be accessed directly from C. sympat='\([_A-Za-z][_A-Za-z0-9]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[BCDT]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[ABCDGISTW]' ;; hpux*) if test ia64 = "$host_cpu"; then symcode='[ABCDEGRST]' fi ;; irix* | nonstopux*) symcode='[BCDEGRST]' ;; osf*) symcode='[BCDEGQRST]' ;; solaris*) symcode='[BDRT]' ;; sco3.2v5*) symcode='[DT]' ;; sysv4.2uw2*) symcode='[DT]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[ABDT]' ;; sysv4) symcode='[DFNSTU]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[ABCDGIRSTW]' ;; esac if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Gets list of data symbols to import. lt_cv_sys_global_symbol_to_import="$SED -n -e 's/^I .* \(.*\)$/\1/p'" # Adjust the below global symbol transforms to fixup imported variables. lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'" lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'" lt_c_name_lib_hook="\ -e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\ -e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'" else # Disable hooks by default. lt_cv_sys_global_symbol_to_import= lt_cdecl_hook= lt_c_name_hook= lt_c_name_lib_hook= fi # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="$SED -n"\ $lt_cdecl_hook\ " -e 's/^T .* \(.*\)$/extern int \1();/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="$SED -n"\ $lt_c_name_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'" # Transform an extracted symbol line into symbol name with lib prefix and # symbol address. lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="$SED -n"\ $lt_c_name_lib_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function, # D for any global variable and I for any imported variable. # Also find C++ and __fastcall symbols from MSVC++ or ICC, # which start with @ or ?. 
lt_cv_sys_global_symbol_pipe="$AWK '"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\ " /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\ " /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\ " {split(\$ 0,a,/\||\r/); split(a[2],s)};"\ " s[1]~/^[@?]/{print f,s[1],s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx" else lt_cv_sys_global_symbol_pipe="$SED -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | $SED '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Now try to grab the symbols. nlist=conftest.nm if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist\""; } >&5 (eval $NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. 
mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS=conftstm.$ac_objext CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest$ac_exeext; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&5 fi else echo "cannot find nm_test_var in $nlist" >&5 fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 fi else echo "$progname: failed program was:" >&5 cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test yes = "$pipe_works"; then break else lt_cv_sys_global_symbol_pipe= fi done ;; esac fi if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: failed" >&5 printf "%s\n" "failed" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ok" >&5 printf "%s\n" "ok" >&6; } fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then nm_file_list_spec='@' fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5 printf %s "checking for sysroot... " >&6; } # Check whether --with-sysroot was given. if test ${with_sysroot+y} then : withval=$with_sysroot; else case e in #( e) with_sysroot=no ;; esac fi lt_sysroot= case $with_sysroot in #( yes) if test yes = "$GCC"; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | $SED -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $with_sysroot" >&5 printf "%s\n" "$with_sysroot" >&6; } as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5 ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5 printf "%s\n" "${lt_sysroot:-no}" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for a working dd" >&5 printf %s "checking for a working dd... 
" >&6; } if test ${ac_cv_path_lt_DD+y} then : printf %s "(cached) " >&6 else case e in #( e) printf 0123456789abcdef0123456789abcdef >conftest.i cat conftest.i conftest.i >conftest2.i : ${lt_DD:=$DD} if test -z "$lt_DD"; then ac_path_lt_DD_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in dd do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_lt_DD="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_lt_DD" || continue if "$ac_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=: fi $ac_path_lt_DD_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_lt_DD"; then : fi else ac_cv_path_lt_DD=$lt_DD fi rm -f conftest.i conftest2.i conftest.out ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_lt_DD" >&5 printf "%s\n" "$ac_cv_path_lt_DD" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to truncate binary pipes" >&5 printf %s "checking how to truncate binary pipes... " >&6; } if test ${lt_cv_truncate_bin+y} then : printf %s "(cached) " >&6 else case e in #( e) printf 0123456789abcdef0123456789abcdef >conftest.i cat conftest.i conftest.i >conftest2.i lt_cv_truncate_bin= if "$ac_cv_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1" fi rm -f conftest.i conftest2.i conftest.out test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q" ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_truncate_bin" >&5 printf "%s\n" "$lt_cv_truncate_bin" >&6; } # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. func_cc_basename () { for cc_temp in $*""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` } # Check whether --enable-libtool-lock was given. if test ${enable_libtool_lock+y} then : enableval=$enable_libtool_lock; fi test no = "$enable_libtool_lock" || enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out what ABI is being produced by ac_compile, and set mode # options accordingly. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `$FILECMD conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE=32 ;; *ELF-64*) HPUX_IA64_MODE=64 ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then if test yes = "$lt_cv_prog_gnu_ld"; then case `$FILECMD conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `$FILECMD conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; mips64*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then emul=elf case `$FILECMD conftest.$ac_objext` in *32-bit*) emul="${emul}32" ;; *64-bit*) emul="${emul}64" ;; esac case `$FILECMD conftest.$ac_objext` in *MSB*) emul="${emul}btsmip" ;; *LSB*) emul="${emul}ltsmip" ;; esac case `$FILECMD conftest.$ac_objext` in *N32*) emul="${emul}n32" ;; esac LD="${LD-ld} -m $emul" fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. Note that the listed cases only cover the # situations where additional linker options are needed (such as when # doing 32-bit compilation for a host where ld defaults to 64-bit, or # vice versa); the common cases where no linker options are needed do # not appear in the list. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `$FILECMD conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) case `$FILECMD conftest.o` in *x86-64*) LD="${LD-ld} -m elf32_x86_64" ;; *) LD="${LD-ld} -m elf_i386" ;; esac ;; powerpc64le-*linux*) LD="${LD-ld} -m elf32lppclinux" ;; powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; powerpcle-*linux*) LD="${LD-ld} -m elf64lppc" ;; powerpc-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS=$CFLAGS CFLAGS="$CFLAGS -belf" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5 printf %s "checking whether the C compiler needs -belf... " >&6; } if test ${lt_cv_cc_needs_belf+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : lt_cv_cc_needs_belf=yes else case e in #( e) lt_cv_cc_needs_belf=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5 printf "%s\n" "$lt_cv_cc_needs_belf" >&6; } if test yes != "$lt_cv_cc_needs_belf"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS=$SAVE_CFLAGS fi ;; *-*solaris*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `$FILECMD conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*|x86_64-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD=${LD-ld}_sol2 fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks=$enable_libtool_lock if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args. set dummy ${ac_tool_prefix}mt; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_MANIFEST_TOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$MANIFEST_TOOL"; then ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL if test -n "$MANIFEST_TOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5 printf "%s\n" "$MANIFEST_TOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_MANIFEST_TOOL"; then ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL # Extract the first word of "mt", so it can be a program name with args. set dummy mt; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_MANIFEST_TOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_MANIFEST_TOOL"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL if test -n "$ac_ct_MANIFEST_TOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5 printf "%s\n" "$ac_ct_MANIFEST_TOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_MANIFEST_TOOL" = x; then MANIFEST_TOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL fi else MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL" fi test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5 printf %s "checking if $MANIFEST_TOOL is a manifest tool... " >&6; } if test ${lt_cv_path_mainfest_tool+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5 $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&5 if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5 printf "%s\n" "$lt_cv_path_mainfest_tool" >&6; } if test yes != "$lt_cv_path_mainfest_tool"; then MANIFEST_TOOL=: fi case $host_os in rhapsody* | darwin*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args. set dummy ${ac_tool_prefix}dsymutil; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_DSYMUTIL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$DSYMUTIL"; then ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi DSYMUTIL=$ac_cv_prog_DSYMUTIL if test -n "$DSYMUTIL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5 printf "%s\n" "$DSYMUTIL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_DSYMUTIL"; then ac_ct_DSYMUTIL=$DSYMUTIL # Extract the first word of "dsymutil", so it can be a program name with args. set dummy dsymutil; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... 
" >&6; } if test ${ac_cv_prog_ac_ct_DSYMUTIL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_DSYMUTIL"; then ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL if test -n "$ac_ct_DSYMUTIL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5 printf "%s\n" "$ac_ct_DSYMUTIL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_DSYMUTIL" = x; then DSYMUTIL=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DSYMUTIL=$ac_ct_DSYMUTIL fi else DSYMUTIL="$ac_cv_prog_DSYMUTIL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args. set dummy ${ac_tool_prefix}nmedit; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_NMEDIT+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$NMEDIT"; then ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi NMEDIT=$ac_cv_prog_NMEDIT if test -n "$NMEDIT"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5 printf "%s\n" "$NMEDIT" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_NMEDIT"; then ac_ct_NMEDIT=$NMEDIT # Extract the first word of "nmedit", so it can be a program name with args. set dummy nmedit; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_NMEDIT+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_NMEDIT"; then ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_NMEDIT="nmedit" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT if test -n "$ac_ct_NMEDIT"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5 printf "%s\n" "$ac_ct_NMEDIT" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_NMEDIT" = x; then NMEDIT=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac NMEDIT=$ac_ct_NMEDIT fi else NMEDIT="$ac_cv_prog_NMEDIT" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args. set dummy ${ac_tool_prefix}lipo; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_LIPO+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$LIPO"; then ac_cv_prog_LIPO="$LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi LIPO=$ac_cv_prog_LIPO if test -n "$LIPO"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5 printf "%s\n" "$LIPO" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_LIPO"; then ac_ct_LIPO=$LIPO # Extract the first word of "lipo", so it can be a program name with args. set dummy lipo; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_LIPO+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_LIPO"; then ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_LIPO="lipo" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO if test -n "$ac_ct_LIPO"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5 printf "%s\n" "$ac_ct_LIPO" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_LIPO" = x; then LIPO=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac LIPO=$ac_ct_LIPO fi else LIPO="$ac_cv_prog_LIPO" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args. set dummy ${ac_tool_prefix}otool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_OTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$OTOOL"; then ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi OTOOL=$ac_cv_prog_OTOOL if test -n "$OTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5 printf "%s\n" "$OTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL"; then ac_ct_OTOOL=$OTOOL # Extract the first word of "otool", so it can be a program name with args. set dummy otool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_OTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_OTOOL"; then ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL="otool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL if test -n "$ac_ct_OTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5 printf "%s\n" "$ac_ct_OTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_OTOOL" = x; then OTOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL=$ac_ct_OTOOL fi else OTOOL="$ac_cv_prog_OTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args. set dummy ${ac_tool_prefix}otool64; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_OTOOL64+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$OTOOL64"; then ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi OTOOL64=$ac_cv_prog_OTOOL64 if test -n "$OTOOL64"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5 printf "%s\n" "$OTOOL64" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL64"; then ac_ct_OTOOL64=$OTOOL64 # Extract the first word of "otool64", so it can be a program name with args. set dummy otool64; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_OTOOL64+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_OTOOL64"; then ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL64="otool64" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64 if test -n "$ac_ct_OTOOL64"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5 printf "%s\n" "$ac_ct_OTOOL64" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_OTOOL64" = x; then OTOOL64=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL64=$ac_ct_OTOOL64 fi else OTOOL64="$ac_cv_prog_OTOOL64" fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5 printf %s "checking for -single_module linker flag... " >&6; } if test ${lt_cv_apple_cc_single_mod+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_apple_cc_single_mod=no if test -z "$LT_MULTI_MODULE"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&5 $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&5 # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test 0 = "$_lt_result"; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&5 fi rm -rf libconftest.dylib* rm -f conftest.* fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5 printf "%s\n" "$lt_cv_apple_cc_single_mod" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5 printf %s "checking for -exported_symbols_list linker flag... " >&6; } if test ${lt_cv_ld_exported_symbols_list+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : lt_cv_ld_exported_symbols_list=yes else case e in #( e) lt_cv_ld_exported_symbols_list=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5 printf "%s\n" "$lt_cv_ld_exported_symbols_list" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5 printf %s "checking for -force_load linker flag... " >&6; } if test ${lt_cv_ld_force_load+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5 $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5 echo "$AR $AR_FLAGS libconftest.a conftest.o" >&5 $AR $AR_FLAGS libconftest.a conftest.o 2>&5 echo "$RANLIB libconftest.a" >&5 $RANLIB libconftest.a 2>&5 cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5 $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&5 elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then lt_cv_ld_force_load=yes else cat conftest.err >&5 fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5 printf "%s\n" "$lt_cv_ld_force_load" >&6; } case $host_os in rhapsody* | darwin1.[012]) _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; darwin*) case $MACOSX_DEPLOYMENT_TARGET,$host in 10.[012],*|,*powerpc*-darwin[5-8]*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; *) _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test yes = "$lt_cv_apple_cc_single_mod"; then _lt_dar_single_mod='$single_module' fi _lt_dar_needs_single_mod=no case $host_os in rhapsody* | darwin1.*) _lt_dar_needs_single_mod=yes ;; darwin*) # When targeting Mac OS X 10.4 (darwin 8) or later, # -single_module is the default and -multi_module is unsupported. # The toolchain on macOS 10.14 (darwin 18) and later cannot # target any OS version that needs -single_module. 
case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*-darwin[567].*|10.[0-3],*-darwin[5-9].*|10.[0-3],*-darwin1[0-7].*) _lt_dar_needs_single_mod=yes ;; esac ;; esac if test yes = "$lt_cv_ld_exported_symbols_list"; then _lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib' fi if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac # func_munge_path_list VARIABLE PATH # ----------------------------------- # VARIABLE is name of variable containing _space_ separated list of # directories to be munged by the contents of PATH, which is string # having a format: # "DIR[:DIR]:" # string "DIR[ DIR]" will be prepended to VARIABLE # ":DIR[:DIR]" # string "DIR[ DIR]" will be appended to VARIABLE # "DIRP[:DIRP]::[DIRA:]DIRA" # string "DIRP[ DIRP]" will be prepended to VARIABLE and string # "DIRA[ DIRA]" will be appended to VARIABLE # "DIR[:DIR]" # VARIABLE will be replaced by "DIR[ DIR]" func_munge_path_list () { case x$2 in x) ;; *:) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\" ;; x:*) eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\" ;; *::*) eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\" eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\" ;; *) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\" ;; esac } ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default " if test "x$ac_cv_header_dlfcn_h" = xyes then : printf "%s\n" "#define HAVE_DLFCN_H 1" >>confdefs.h fi func_stripname_cnf () { case $2 in .*) func_stripname_result=`$ECHO "$3" | $SED "s%^$1%%; s%\\\\$2\$%%"`;; *) func_stripname_result=`$ECHO "$3" | $SED "s%^$1%%; s%$2\$%%"`;; esac } # func_stripname_cnf # Set options enable_win32_dll=yes case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-cegcc*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}as", so it can be a program name with args. set dummy ${ac_tool_prefix}as; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_AS+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$AS"; then ac_cv_prog_AS="$AS" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_AS="${ac_tool_prefix}as" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi AS=$ac_cv_prog_AS if test -n "$AS"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AS" >&5 printf "%s\n" "$AS" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_AS"; then ac_ct_AS=$AS # Extract the first word of "as", so it can be a program name with args. set dummy as; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_AS+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_AS"; then ac_cv_prog_ac_ct_AS="$ac_ct_AS" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AS="as" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_AS=$ac_cv_prog_ac_ct_AS if test -n "$ac_ct_AS"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AS" >&5 printf "%s\n" "$ac_ct_AS" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_AS" = x; then AS="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AS=$ac_ct_AS fi else AS="$ac_cv_prog_AS" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_DLLTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 printf "%s\n" "$DLLTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_DLLTOOL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 printf "%s\n" "$ac_ct_DLLTOOL" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. set dummy ${ac_tool_prefix}objdump; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_OBJDUMP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 printf "%s\n" "$OBJDUMP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. set dummy objdump; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_OBJDUMP+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 printf "%s\n" "$ac_ct_OBJDUMP" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi ;; esac test -z "$AS" && AS=as test -z "$DLLTOOL" && DLLTOOL=dlltool test -z "$OBJDUMP" && OBJDUMP=objdump enable_dlopen=no # Check whether --enable-shared was given. if test ${enable_shared+y} then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_shared=yes ;; esac fi # Check whether --enable-static was given. if test ${enable_static+y} then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_static=yes ;; esac fi # Check whether --with-pic was given. if test ${with_pic+y} then : withval=$with_pic; lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for lt_pkg in $withval; do IFS=$lt_save_ifs if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) pic_mode=default ;; esac fi # Check whether --enable-fast-install was given. if test ${enable_fast_install+y} then : enableval=$enable_fast_install; p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_fast_install=yes ;; esac fi shared_archive_member_spec= case $host,$enable_shared in power*-*-aix[5-9]*,yes) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking which variant of shared library versioning to provide" >&5 printf %s "checking which variant of shared library versioning to provide... 
" >&6; } # Check whether --with-aix-soname was given. if test ${with_aix_soname+y} then : withval=$with_aix_soname; case $withval in aix|svr4|both) ;; *) as_fn_error $? "Unknown argument to --with-aix-soname" "$LINENO" 5 ;; esac lt_cv_with_aix_soname=$with_aix_soname else case e in #( e) if test ${lt_cv_with_aix_soname+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_with_aix_soname=aix ;; esac fi with_aix_soname=$lt_cv_with_aix_soname ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $with_aix_soname" >&5 printf "%s\n" "$with_aix_soname" >&6; } if test aix != "$with_aix_soname"; then # For the AIX way of multilib, we name the shared archive member # based on the bitwidth used, traditionally 'shr.o' or 'shr_64.o', # and 'shr.imp' or 'shr_64.imp', respectively, for the Import File. # Even when GNU compilers ignore OBJECT_MODE but need '-maix64' flag, # the AIX toolchain works better with OBJECT_MODE set (default 32). if test 64 = "${OBJECT_MODE-32}"; then shared_archive_member_spec=shr_64 else shared_archive_member_spec=shr fi fi ;; *) with_aix_soname=aix ;; esac # This can be used to rebuild libtool when needed LIBTOOL_DEPS=$ltmain # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' test -z "$LN_S" && LN_S="ln -s" if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5 printf %s "checking for objdir... " >&6; } if test ${lt_cv_objdir+y} then : printf %s "(cached) " >&6 else case e in #( e) rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5 printf "%s\n" "$lt_cv_objdir" >&6; } objdir=$lt_cv_objdir printf "%s\n" "#define LT_OBJDIR \"$lt_cv_objdir/\"" >>confdefs.h case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a '.a' archive for static linking (except MSVC and # ICC, which need '.lib'). libext=a with_gnu_ld=$lt_cv_prog_gnu_ld old_CC=$CC old_CFLAGS=$CFLAGS # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o func_cc_basename $compiler cc_basename=$func_cc_basename_result # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5 printf %s "checking for ${ac_tool_prefix}file... " >&6; } if test ${lt_cv_path_MAGIC_CMD+y} then : printf %s "(cached) " >&6 else case e in #( e) case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD=$MAGIC_CMD lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. 
if test -f "$ac_dir/${ac_tool_prefix}file"; then lt_cv_path_MAGIC_CMD=$ac_dir/"${ac_tool_prefix}file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS=$lt_save_ifs MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac ;; esac fi MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 printf "%s\n" "$MAGIC_CMD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for file" >&5 printf %s "checking for file... " >&6; } if test ${lt_cv_path_MAGIC_CMD+y} then : printf %s "(cached) " >&6 else case e in #( e) case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD=$MAGIC_CMD lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/file"; then lt_cv_path_MAGIC_CMD=$ac_dir/"file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS=$lt_save_ifs MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac ;; esac fi MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 printf "%s\n" "$MAGIC_CMD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi else MAGIC_CMD=: fi fi fi ;; esac # Use C for the default configuration in the libtool script lt_save_CC=$CC ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Source file extension for C test sources. 
ac_ext=c # Object file extension for compiled C test sources. objext=o objext=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then lt_prog_compiler_no_builtin_flag= if test yes = "$GCC"; then case $cc_basename in nvcc*) lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;; *) lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 printf %s "checking if $compiler supports -fno-rtti -fno-exceptions... " >&6; } if test ${lt_cv_prog_compiler_rtti_exceptions+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_rtti_exceptions=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-fno-rtti -fno-exceptions" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_rtti_exceptions=yes fi fi $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 printf "%s\n" "$lt_cv_prog_compiler_rtti_exceptions" >&6; } if test yes = "$lt_cv_prog_compiler_rtti_exceptions"; then lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" else : fi fi lt_prog_compiler_wl= lt_prog_compiler_pic= lt_prog_compiler_static= if test yes = "$GCC"; then lt_prog_compiler_wl='-Wl,' lt_prog_compiler_static='-static' case $host_os in aix*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' fi lt_prog_compiler_pic='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the '-m68020' flag to GCC prevents building anything better, # like '-m68040'. lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic='-DDLL_EXPORT' case $host_os in os2*) lt_prog_compiler_static='$wl-static' ;; esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) lt_prog_compiler_pic='-fPIC' ;; esac ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. lt_prog_compiler_can_build_shared=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic=-Kconform_pic fi ;; *) lt_prog_compiler_pic='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 lt_prog_compiler_wl='-Xlinker ' if test -n "$lt_prog_compiler_pic"; then lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. 
case $host_os in aix*) lt_prog_compiler_wl='-Wl,' if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' else lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' fi ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' case $cc_basename in nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; esac ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic='-DDLL_EXPORT' case $host_os in os2*) lt_prog_compiler_static='$wl-static' ;; esac ;; hpux9* | hpux10* | hpux11*) lt_prog_compiler_wl='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? lt_prog_compiler_static='$wl-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) lt_prog_compiler_wl='-Wl,' # PIC (with -KPIC) is the default. lt_prog_compiler_static='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64, which still supported -KPIC. ecc*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; # Lahey Fortran 8.1. lf95*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='--shared' lt_prog_compiler_static='--static' ;; nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; tcc*) # Fabrice Bellard et al's Tiny C Compiler lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; ccc*) lt_prog_compiler_wl='-Wl,' # All Alpha code is PIC. lt_prog_compiler_static='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-qpic' lt_prog_compiler_static='-qstaticlink' ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='' ;; *Sun\ F* | *Sun*Fortran*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Wl,' ;; *Intel*\ [CF]*Compiler*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; *Portland\ Group*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; esac ;; esac ;; newsos6) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. 
lt_prog_compiler_pic='-fPIC -shared' ;; osf3* | osf4* | osf5*) lt_prog_compiler_wl='-Wl,' # All OSF/1 code is PIC. lt_prog_compiler_static='-non_shared' ;; rdos*) lt_prog_compiler_static='-non_shared' ;; solaris*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) lt_prog_compiler_wl='-Qoption ld ';; *) lt_prog_compiler_wl='-Wl,';; esac ;; sunos4*) lt_prog_compiler_wl='-Qoption ld ' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic='-Kconform_pic' lt_prog_compiler_static='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; unicos*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_can_build_shared=no ;; uts4*) lt_prog_compiler_pic='-pic' lt_prog_compiler_static='-Bstatic' ;; *) lt_prog_compiler_can_build_shared=no ;; esac fi case $host_os in # For platforms that do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic= ;; *) lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 printf %s "checking for $compiler option to produce PIC... " >&6; } if test ${lt_cv_prog_compiler_pic+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_pic=$lt_prog_compiler_pic ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5 printf "%s\n" "$lt_cv_prog_compiler_pic" >&6; } lt_prog_compiler_pic=$lt_cv_prog_compiler_pic # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 printf %s "checking if $compiler PIC flag $lt_prog_compiler_pic works... " >&6; } if test ${lt_cv_prog_compiler_pic_works+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_pic_works=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic -DPIC" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works=yes fi fi $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5 printf "%s\n" "$lt_cv_prog_compiler_pic_works" >&6; } if test yes = "$lt_cv_prog_compiler_pic_works"; then case $lt_prog_compiler_pic in "" | " "*) ;; *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; esac else lt_prog_compiler_pic= lt_prog_compiler_can_build_shared=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 printf %s "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if test ${lt_cv_prog_compiler_static_works+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_static_works=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works=yes fi else lt_cv_prog_compiler_static_works=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5 printf "%s\n" "$lt_cv_prog_compiler_static_works" >&6; } if test yes = "$lt_cv_prog_compiler_static_works"; then : else lt_prog_compiler_static= fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if test ${lt_cv_prog_compiler_c_o+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 
2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 printf "%s\n" "$lt_cv_prog_compiler_c_o" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if test ${lt_cv_prog_compiler_c_o+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 printf "%s\n" "$lt_cv_prog_compiler_c_o" >&6; } hard_links=nottested if test no = "$lt_cv_prog_compiler_c_o" && test no != "$need_locks"; then # do not overwrite the value of need_locks provided by the user { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 printf %s "checking if we can lock with hard links... " >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 printf "%s\n" "$hard_links" >&6; } if test no = "$hard_links"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&5 printf "%s\n" "$as_me: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 printf %s "checking whether the $compiler linker ($LD) supports shared libraries... 
" >&6; } runpath_var= allow_undefined_flag= always_export_symbols=no archive_cmds= archive_expsym_cmds= compiler_needs_object=no enable_shared_with_static_runtimes=no export_dynamic_flag_spec= export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' hardcode_automatic=no hardcode_direct=no hardcode_direct_absolute=no hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_minus_L=no hardcode_shlibpath_var=unsupported inherit_rpath=no link_all_deplibs=unknown module_cmds= module_expsym_cmds= old_archive_from_new_cmds= old_archive_from_expsyms_cmds= thread_safe_flag_spec= whole_archive_flag_spec= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list include_expsyms= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ' (' and ')$', so one must not match beginning or # end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc', # as well as any symbol that contains 'd'. exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ and ICC port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++ or Intel C++ Compiler. if test yes != "$GCC"; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++ or ICC) with_gnu_ld=yes ;; openbsd* | bitrig*) with_gnu_ld=no ;; esac ld_shlibs=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test yes = "$with_gnu_ld"; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;; *\ \(GNU\ Binutils\)\ [3-9]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test yes = "$lt_use_gnu_ld_interface"; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='$wl' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' export_dynamic_flag_spec='$wl--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else whole_archive_flag_spec= fi supports_anon_versioning=no case `$LD -v | $SED -e 's/([^)]\+)\s\+//' 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... 
*\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test ia64 != "$host_cpu"; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. _LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, # as there is no search path for DLLs. hardcode_libdir_flag_spec='-L$libdir' export_dynamic_flag_spec='$wl--export-all-symbols' allow_undefined_flag=unsupported always_export_symbols=no enable_shared_with_static_runtimes=yes export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file, use it as # is; otherwise, prepend EXPORTS... 
archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs=no fi ;; haiku*) archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' link_all_deplibs=yes ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported shrext_cmds=.dll archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' old_archive_From_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' enable_shared_with_static_runtimes=yes file_list_spec='@' ;; interix[3-9]*) hardcode_direct=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='$wl-rpath,$libdir' export_dynamic_flag_spec='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds='$SED "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test linux-dietlibc = "$host_os"; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test no = "$tmp_diet" then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 whole_archive_flag_spec= tmp_sharedflag='--shared' ;; nagfor*) # NAGFOR 5.3 tmp_sharedflag='-Wl,-shared' ;; xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' compiler_needs_object=yes ;; esac case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C 5.9 whole_archive_flag_spec='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' compiler_needs_object=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' if test yes = "$supports_anon_versioning"; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib' fi case $cc_basename in tcc*) export_dynamic_flag_spec='-rdynamic' ;; xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test yes = "$supports_anon_versioning"; then 
archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else ld_shlibs=no fi ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac ;; sunos4*) archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= hardcode_direct=yes hardcode_shlibpath_var=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac if test no = "$ld_shlibs"; then runpath_var= hardcode_libdir_flag_spec= export_dynamic_flag_spec= whole_archive_flag_spec= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) allow_undefined_flag=unsupported always_export_symbols=yes archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. hardcode_minus_L=yes if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag= else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to GNU nm, but means don't demangle to AIX nm. # Without the "-l" option, or with the "-B" option, AIX nm treats # weak defined symbols like other global defined symbols, whereas # GNU nm marks them as "W". # While the 'weak' keyword is ignored in the Export File, we need # it in the Import File for the 'aix-soname' feature, so we have # to replace the "-B" option with "-P" for AIX nm. if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else export_symbols_cmds='`func_echo_all $NM | $SED -e '\''s/B\([^B]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "L") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && (substr(\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # have runtime linking enabled, and use it for executables. 
# For shared libraries, we enable/disable runtime linking # depending on the kind of the shared library created - # when "with_aix_soname,aix_use_runtimelinking" is: # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables # "aix,yes" lib.so shared, rtl:yes, for executables # lib.a static archive # "both,no" lib.so.V(shr.o) shared, rtl:yes # lib.a(lib.so.V) shared, rtl:no, for executables # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a(lib.so.V) shared, rtl:no # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a static archive case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then aix_use_runtimelinking=yes break fi done if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then # With aix-soname=svr4, we create the lib.so.V shared archives only, # so we don't have lib.a shared libs to link our executables. # We have to force runtime linking in this case. aix_use_runtimelinking=yes LDFLAGS="$LDFLAGS -Wl,-brtl" fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. archive_cmds='' hardcode_direct=yes hardcode_direct_absolute=yes hardcode_libdir_separator=':' link_all_deplibs=yes file_list_spec='$wl-f,' case $with_aix_soname,$aix_use_runtimelinking in aix,*) ;; # traditional, no import file svr4,* | *,yes) # use import file # The Import File defines what to hardcode. hardcode_direct=no hardcode_direct_absolute=no ;; esac if test yes = "$GCC"; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L=yes hardcode_libdir_flag_spec='-L$libdir' hardcode_libdir_separator= fi ;; esac shared_flag='-shared' if test yes = "$aix_use_runtimelinking"; then shared_flag="$shared_flag "'$wl-G' fi # Need to ensure runtime linking is disabled for the traditional # shared library, or the linker may eventually find shared libraries # /with/ Import File - we do not want to mix them. shared_flag_aix='-shared' shared_flag_svr4='-shared $wl-G' else # not using gcc if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test yes = "$aix_use_runtimelinking"; then shared_flag='$wl-G' else shared_flag='$wl-bM:SRE' fi shared_flag_aix='$wl-bM:SRE' shared_flag_svr4='$wl-G' fi fi export_dynamic_flag_spec='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. 
always_export_symbols=yes if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag='-berok' # Determine the default libpath from the value encoded in an # empty executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if test ${lt_cv_aix_libpath_+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=/usr/lib:/lib fi ;; esac fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else if test ia64 = "$host_cpu"; then hardcode_libdir_flag_spec='$wl-R $libdir:/usr/lib:/lib' allow_undefined_flag="-z nodefs" archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if test ${lt_cv_aix_libpath_+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=/usr/lib:/lib fi ;; esac fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag=' $wl-bernotok' allow_undefined_flag=' $wl-berok' if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. 
whole_archive_flag_spec='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec='$convenience' fi archive_cmds_need_lc=yes archive_expsym_cmds='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' # -brtl affects multiple linker settings, -berok does not and is overridden later compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([, ]\\)%-berok\\1%g"`' if test svr4 != "$with_aix_soname"; then # This is similar to how AIX traditionally builds its shared libraries. archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' fi if test aix != "$with_aix_soname"; then archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' else # used by -dlpreopen to get the symbols archive_expsym_cmds="$archive_expsym_cmds"'~$MV $output_objdir/$realname.d/$soname $output_objdir' fi archive_expsym_cmds="$archive_expsym_cmds"'~$RM -r $output_objdir/$realname.d' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; bsdi[45]*) export_dynamic_flag_spec=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++ or Intel C++ Compiler. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl* | icl*) # Native MSVC or ICC hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported always_export_symbols=yes file_list_spec='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. 
archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp "$export_symbols" "$output_objdir/$soname.def"; echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; else $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, )='true' enable_shared_with_static_runtimes=yes exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib old_postinstall_cmds='chmod 644 $oldlib' postlink_cmds='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile=$lt_outputfile.exe lt_tool_outputfile=$lt_tool_outputfile.exe ;; esac~ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC and ICC wrapper hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. old_archive_from_new_cmds='true' # FIXME: Should let the user specify the lib program. 
old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs' enable_shared_with_static_runtimes=yes ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc=no hardcode_direct=no hardcode_automatic=yes hardcode_shlibpath_var=unsupported if test yes = "$lt_cv_ld_force_load"; then whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec='' fi link_all_deplibs=yes allow_undefined_flag=$_lt_dar_allow_undefined case $cc_basename in ifort*|nagfor*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test yes = "$_lt_dar_can_shared"; then output_verbose_link_cmd=func_echo_all archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil" module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil" archive_expsym_cmds="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil" module_expsym_cmds="$SED -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil" else ld_shlibs=no fi ;; dgux*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. freebsd* | dragonfly* | midnightbsd*) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; hpux9*) if test yes = "$GCC"; then archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' fi hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes export_dynamic_flag_spec='$wl-E' ;; hpux10*) if test yes,no = "$GCC,$with_gnu_ld"; then archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test no = "$with_gnu_ld"; then hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes fi ;; hpux11*) if test yes,no = "$GCC,$with_gnu_ld"; then case $host_cpu in hppa*64*) archive_cmds='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) archive_cmds='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5 printf %s "checking if $CC understands -b... " >&6; } if test ${lt_cv_prog_compiler__b+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler__b=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS -b" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler__b=yes fi else lt_cv_prog_compiler__b=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5 printf "%s\n" "$lt_cv_prog_compiler__b" >&6; } if test yes = "$lt_cv_prog_compiler__b"; then archive_cmds='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi ;; esac fi if test no = "$with_gnu_ld"; then hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: case $host_cpu in hppa*64*|ia64*) hardcode_direct=no hardcode_shlibpath_var=no ;; *) hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test yes = "$GCC"; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5 printf %s "checking whether the $host_os linker accepts -exported_symbol... " >&6; } if test ${lt_cv_irix_exported_symbol+y} then : printf %s "(cached) " >&6 else case e in #( e) save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int foo (void) { return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : lt_cv_irix_exported_symbol=yes else case e in #( e) lt_cv_irix_exported_symbol=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5 printf "%s\n" "$lt_cv_irix_exported_symbol" >&6; } if test yes = "$lt_cv_irix_exported_symbol"; then archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib' fi else archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: inherit_rpath=yes link_all_deplibs=yes ;; linux*) case $cc_basename in tcc*) # Fabrice Bellard et al's Tiny C Compiler ld_shlibs=yes archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; newsos6) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: hardcode_shlibpath_var=no ;; *nto* | *qnx*) ;; openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then hardcode_direct=yes hardcode_shlibpath_var=no hardcode_direct_absolute=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols' hardcode_libdir_flag_spec='$wl-rpath,$libdir' export_dynamic_flag_spec='$wl-E' else 
archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='$wl-rpath,$libdir' fi else ld_shlibs=no fi ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported shrext_cmds=.dll archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' old_archive_From_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' enable_shared_with_static_runtimes=yes file_list_spec='@' ;; osf3*) if test yes = "$GCC"; then allow_undefined_flag=' $wl-expect_unresolved $wl\*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test yes = "$GCC"; then allow_undefined_flag=' $wl-expect_unresolved $wl\*' archive_cmds='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly hardcode_libdir_flag_spec='-rpath $libdir' fi archive_cmds_need_lc='no' hardcode_libdir_separator=: 
;; solaris*) no_undefined_flag=' -z defs' if test yes = "$GCC"; then wlarc='$wl' archive_cmds='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' archive_cmds='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='$wl' archive_cmds='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi hardcode_libdir_flag_spec='-R$libdir' hardcode_shlibpath_var=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands '-z linker_flag'. GCC discards it without '$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test yes = "$GCC"; then whole_archive_flag_spec='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' else whole_archive_flag_spec='-z allextract$convenience -z defaultextract' fi ;; esac link_all_deplibs=yes ;; sunos4*) if test sequent = "$host_vendor"; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. archive_cmds='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi hardcode_libdir_flag_spec='-L$libdir' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; sysv4) case $host_vendor in sni) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. 
archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' reload_cmds='$CC -r -o $output$reload_objs' hardcode_direct=no ;; motorola) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' hardcode_shlibpath_var=no ;; sysv4.3*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no export_dynamic_flag_spec='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes ld_shlibs=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag='$wl-z,text' archive_cmds_need_lc=no hardcode_shlibpath_var=no runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. no_undefined_flag='$wl-z,text' allow_undefined_flag='$wl-z,nodefs' archive_cmds_need_lc=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='$wl-R,$libdir' hardcode_libdir_separator=':' link_all_deplibs=yes export_dynamic_flag_spec='$wl-Bexport' runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; *) ld_shlibs=no ;; esac if test sni = "$host_vendor"; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) export_dynamic_flag_spec='$wl-Blargedynsym' ;; esac fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5 printf "%s\n" "$ld_shlibs" >&6; } test no = "$ld_shlibs" && can_build_shared=no with_gnu_ld=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc" in x|xyes) # Assume -lc should be added archive_cmds_need_lc=yes if test yes,yes = "$GCC,$enable_shared"; then case $archive_cmds in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 printf %s "checking whether -lc should be explicitly linked in... 
" >&6; } if test ${lt_cv_archive_cmds_need_lc+y} then : printf %s "(cached) " >&6 else case e in #( e) $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl pic_flag=$lt_prog_compiler_pic compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag allow_undefined_flag= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc=no else lt_cv_archive_cmds_need_lc=yes fi allow_undefined_flag=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5 printf "%s\n" "$lt_cv_archive_cmds_need_lc" >&6; } archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc ;; esac fi ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 printf %s "checking dynamic linker characteristics... " >&6; } if test yes = "$GCC"; then case $host_os in darwin*) lt_awk_arg='/^libraries:/,/LR/' ;; *) lt_awk_arg='/^libraries:/' ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq='s|=\([A-Za-z]:\)|\1|g' ;; *) lt_sed_strip_eq='s|=/|/|g' ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary... lt_tmp_lt_search_path_spec= lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` # ...but if some path component already ends with the multilib dir we assume # that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer). 
case "$lt_multi_os_dir; $lt_search_path_spec " in "/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*) lt_multi_os_dir= ;; esac for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir" elif test -n "$lt_multi_os_dir"; then test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS = " "; FS = "/|\n";} { lt_foo = ""; lt_count = 0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo = "/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[lt_foo]++; } if (lt_freq[lt_foo] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's|/\([A-Za-z]:\)|\1|g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=.so postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='$libname$release$shared_ext$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test ia64 = "$host_cpu"; then # AIX 5 supports IA64 library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line '#! .'. This would cause the generated library to # depend on '.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # Using Import Files as archive members, it is possible to support # filename-based versioning of shared library archives on AIX. While # this would work for both with and without runtime linking, it will # prevent static linking of such archives. So we do filename-based # shared library versioning with .so extension only, which is used # when both runtime linking and shared linking is enabled. # Unfortunately, runtime linking may impact performance, so we do # not want this to be the default eventually. Also, we use the # versioned .so libs for executables only if there is the -brtl # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only. 
# To allow for filename-based versioning support, we need to create # libNAME.so.V as an archive file, containing: # *) an Import File, referring to the versioned filename of the # archive as well as the shared archive member, telling the # bitwidth (32 or 64) of that shared object, and providing the # list of exported symbols of that shared object, eventually # decorated with the 'weak' keyword # *) the shared object with the F_LOADONLY flag set, to really avoid # it being seen by the linker. # At run time we better use the real file rather than another symlink, # but for link time we create the symlink libNAME.so -> libNAME.so.V case $with_aix_soname,$aix_use_runtimelinking in # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. aix,yes) # traditional libtool dynamic_linker='AIX unversionable lib.so' # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; aix,no) # traditional AIX only dynamic_linker='AIX lib.a(lib.so.V)' # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' ;; svr4,*) # full svr4 only dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # We do not specify a path in Import Files, so LIBPATH fires. shlibpath_overrides_runpath=yes ;; *,yes) # both, prefer svr4 dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o), lib.a(lib.so.V)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # unpreferred sharedlib libNAME.a needs extra handling postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"' postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"' # We do not specify a path in Import Files, so LIBPATH fires. shlibpath_overrides_runpath=yes ;; *,no) # both, prefer aix dynamic_linker="AIX lib.a(lib.so.V), lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)' postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"' ;; esac shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. 
# When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='$libname$shared_ext' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo $libname | $SED -e 's/^lib/cyg/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api" ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo $libname | $SED -e 's/^lib/pw/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl* | *,icl*) # Native MSVC or ICC libname_spec='$name' soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' library_names_spec='$libname.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. 
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec=$LIB if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC and ICC wrapper library_names_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='$libname$release$major$shared_ext $libname$shared_ext' soname_spec='$libname$release$major$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib" sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly* | midnightbsd*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. 
if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=no sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' if test 32 = "$HPUX_IA64_MODE"; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" sys_lib_dlsearch_path_spec=/usr/lib/hpux32 else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" sys_lib_dlsearch_path_spec=/usr/lib/hpux64 fi ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... 
postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test yes = "$lt_cv_prog_gnu_ld"; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff" sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; linux*android*) version_type=none # Android doesn't support versioned libraries. need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext' soname_spec='$libname$release$shared_ext' finish_cmds= shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes dynamic_linker='Android linker' # Don't embed -rpath directories since the linker doesn't support them. hardcode_libdir_flag_spec='-L$libdir' ;; # This must be glibc/ELF. linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if test ${lt_cv_shlibpath_overrides_runpath+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir ;; esac fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Ideally, we could use ldconfig to report *all* directores which are # searched for libraries, however this is still not possible. Aside from not # being certain /sbin/ldconfig is available, command # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64, # even though it is searched at run-time. Try to do the best guess by # appending ld.so.conf contents (and includes) to the search path. if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. dynamic_linker='GNU/Linux ld.so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd* | bitrig*) version_type=sunos sys_lib_dlsearch_path_spec=/usr/lib need_lib_prefix=no if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then need_version=no else need_version=yes fi library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; os2*) libname_spec='$name' version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no # OS/2 can only load a DLL with a base name of 8 characters or less. 
soname_spec='`test -n "$os2dllname" && libname="$os2dllname"; v=$($ECHO $release$versuffix | tr -d .-); n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _); $ECHO $n$v`$shared_ext' library_names_spec='${libname}_dll.$libext' dynamic_linker='OS/2 ld.exe' shlibpath_var=BEGINLIBPATH sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; $ECHO \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test yes = "$with_gnu_ld"; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec; then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext' soname_spec='$libname$shared_ext.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=sco need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test yes = "$with_gnu_ld"; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec 
/lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 printf "%s\n" "$dynamic_linker" >&6; } test no = "$dynamic_linker" && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test yes = "$GCC"; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec fi if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec fi # remember unaugmented sys_lib_dlsearch_path content for libtool script decls... configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec # ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH" # to be used as default LT_SYS_LIBRARY_PATH value in generated libtool configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 printf %s "checking how to hardcode library paths into programs... " >&6; } hardcode_action= if test -n "$hardcode_libdir_flag_spec" || test -n "$runpath_var" || test yes = "$hardcode_automatic"; then # We can hardcode non-existent directories. if test no != "$hardcode_direct" && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, )" && test no != "$hardcode_minus_L"; then # Linking always hardcodes the temporary library directory. hardcode_action=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. 
hardcode_action=unsupported fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5 printf "%s\n" "$hardcode_action" >&6; } if test relink = "$hardcode_action" || test yes = "$inherit_rpath"; then # Fast installation is not supported enable_fast_install=no elif test yes = "$shlibpath_overrides_runpath" || test no = "$enable_shared"; then # Fast installation is not necessary enable_fast_install=needless fi if test yes != "$enable_dlopen"; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen=load_add_on lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen=LoadLibrary lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen=dlopen lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 printf %s "checking for dlopen in -ldl... " >&6; } if test ${ac_cv_lib_dl_dlopen+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char dlopen (void); int main (void) { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_cv_lib_dl_dlopen=yes else case e in #( e) ac_cv_lib_dl_dlopen=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 printf "%s\n" "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl else case e in #( e) lt_cv_dlopen=dyld lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; esac fi ;; tpf*) # Don't try to run any link tests for TPF. We know it's impossible # because TPF is a cross-compiler, and we know how we open DSOs. lt_cv_dlopen=dlopen lt_cv_dlopen_libs= lt_cv_dlopen_self=no ;; *) ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load" if test "x$ac_cv_func_shl_load" = xyes then : lt_cv_dlopen=shl_load else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5 printf %s "checking for shl_load in -ldld... " >&6; } if test ${ac_cv_lib_dld_shl_load+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). 
*/ #ifdef __cplusplus extern "C" #endif char shl_load (void); int main (void) { return shl_load (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_cv_lib_dld_shl_load=yes else case e in #( e) ac_cv_lib_dld_shl_load=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5 printf "%s\n" "$ac_cv_lib_dld_shl_load" >&6; } if test "x$ac_cv_lib_dld_shl_load" = xyes then : lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld else case e in #( e) ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes then : lt_cv_dlopen=dlopen else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 printf %s "checking for dlopen in -ldl... " >&6; } if test ${ac_cv_lib_dl_dlopen+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char dlopen (void); int main (void) { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_cv_lib_dl_dlopen=yes else case e in #( e) ac_cv_lib_dl_dlopen=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 printf "%s\n" "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5 printf %s "checking for dlopen in -lsvld... " >&6; } if test ${ac_cv_lib_svld_dlopen+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_check_lib_save_LIBS=$LIBS LIBS="-lsvld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char dlopen (void); int main (void) { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_cv_lib_svld_dlopen=yes else case e in #( e) ac_cv_lib_svld_dlopen=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5 printf "%s\n" "$ac_cv_lib_svld_dlopen" >&6; } if test "x$ac_cv_lib_svld_dlopen" = xyes then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5 printf %s "checking for dld_link in -ldld... 
" >&6; } if test ${ac_cv_lib_dld_dld_link+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char dld_link (void); int main (void) { return dld_link (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_cv_lib_dld_dld_link=yes else case e in #( e) ac_cv_lib_dld_dld_link=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5 printf "%s\n" "$ac_cv_lib_dld_dld_link" >&6; } if test "x$ac_cv_lib_dld_dld_link" = xyes then : lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld fi ;; esac fi ;; esac fi ;; esac fi ;; esac fi ;; esac fi ;; esac if test no = "$lt_cv_dlopen"; then enable_dlopen=no else enable_dlopen=yes fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS=$CPPFLAGS test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS=$LDFLAGS wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS=$LIBS LIBS="$lt_cv_dlopen_libs $LIBS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5 printf %s "checking whether a program can dlopen itself... " >&6; } if test ${lt_cv_dlopen_self+y} then : printf %s "(cached) " >&6 else case e in #( e) if test yes = "$cross_compiling"; then : lt_cv_dlopen_self=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; esac else : # compilation failed lt_cv_dlopen_self=no fi fi rm -fr conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5 printf "%s\n" "$lt_cv_dlopen_self" >&6; } if test yes = "$lt_cv_dlopen_self"; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5 printf %s "checking whether a statically linked program can dlopen itself... " >&6; } if test ${lt_cv_dlopen_self_static+y} then : printf %s "(cached) " >&6 else case e in #( e) if test yes = "$cross_compiling"; then : lt_cv_dlopen_self_static=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? 
case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; esac else : # compilation failed lt_cv_dlopen_self_static=no fi fi rm -fr conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5 printf "%s\n" "$lt_cv_dlopen_self_static" >&6; } fi CPPFLAGS=$save_CPPFLAGS LDFLAGS=$save_LDFLAGS LIBS=$save_LIBS ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi striplib= old_striplib= { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5 printf %s "checking whether stripping libraries is possible... " >&6; } if test -z "$STRIP"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } else if $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then old_striplib="$STRIP --strip-debug" striplib="$STRIP --strip-unneeded" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } else case $host_os in darwin*) # FIXME - insert some real tests, host_os isn't really good enough striplib="$STRIP -x" old_striplib="$STRIP -S" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } ;; freebsd*) if $STRIP -V 2>&1 | $GREP "elftoolchain" >/dev/null; then old_striplib="$STRIP --strip-debug" striplib="$STRIP --strip-unneeded" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi ;; *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } ;; esac fi fi # Report what library types will actually be built { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5 printf %s "checking if libtool supports shared libraries... " >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5 printf "%s\n" "$can_build_shared" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5 printf %s "checking whether to build shared libraries... " >&6; } test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[4-9]*) if test ia64 != "$host_cpu"; then case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in yes,aix,yes) ;; # shared object as lib.so file only yes,svr4,*) ;; # shared object as lib.so archive member only yes,*) enable_static=no ;; # shared object in lib.a archive as well esac fi ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5 printf "%s\n" "$enable_shared" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5 printf %s "checking whether to build static libraries... " >&6; } # Make sure either enable_shared or enable_static is yes. 
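# If building shared libraries ended up disabled (or is impossible on
# this platform), fall back to building static libraries so the build
# still produces usable archives.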
test yes = "$enable_shared" || enable_static=yes { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5 printf "%s\n" "$enable_static" >&6; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CC=$lt_save_CC if test -n "$CXX" && ( test no != "$CXX" && ( (test g++ = "$CXX" && `g++ -v >/dev/null 2>&1` ) || (test g++ != "$CXX"))); then ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to run the C++ preprocessor" >&5 printf %s "checking how to run the C++ preprocessor... " >&6; } if test -z "$CXXCPP"; then if test ${ac_cv_prog_CXXCPP+y} then : printf %s "(cached) " >&6 else case e in #( e) # Double quotes because $CXX needs to be expanded for CXXCPP in "$CXX -E" cpp /lib/cpp do ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO" then : else case e in #( e) # Broken: fails on valid input. continue ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_cxx_try_cpp "$LINENO" then : # Broken: success on invalid input. continue else case e in #( e) # Passes both tests. ac_preproc_ok=: break ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of 'break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok then : break fi done ac_cv_prog_CXXCPP=$CXXCPP ;; esac fi CXXCPP=$ac_cv_prog_CXXCPP else ac_cv_prog_CXXCPP=$CXXCPP fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CXXCPP" >&5 printf "%s\n" "$CXXCPP" >&6; } ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO" then : else case e in #( e) # Broken: fails on valid input. continue ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_cxx_try_cpp "$LINENO" then : # Broken: success on invalid input. continue else case e in #( e) # Passes both tests. ac_preproc_ok=: break ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of 'break', _AC_PREPROC_IFELSE's cleaning code was skipped. 
rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok then : else case e in #( e) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "C++ preprocessor \"$CXXCPP\" fails sanity check See 'config.log' for more details" "$LINENO" 5; } ;; esac fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu else _lt_caught_CXX_error=yes fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu archive_cmds_need_lc_CXX=no allow_undefined_flag_CXX= always_export_symbols_CXX=no archive_expsym_cmds_CXX= compiler_needs_object_CXX=no export_dynamic_flag_spec_CXX= hardcode_direct_CXX=no hardcode_direct_absolute_CXX=no hardcode_libdir_flag_spec_CXX= hardcode_libdir_separator_CXX= hardcode_minus_L_CXX=no hardcode_shlibpath_var_CXX=unsupported hardcode_automatic_CXX=no inherit_rpath_CXX=no module_cmds_CXX= module_expsym_cmds_CXX= link_all_deplibs_CXX=unknown old_archive_cmds_CXX=$old_archive_cmds reload_flag_CXX=$reload_flag reload_cmds_CXX=$reload_cmds no_undefined_flag_CXX= whole_archive_flag_spec_CXX= enable_shared_with_static_runtimes_CXX=no # Source file extension for C++ test sources. ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o objext_CXX=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test yes != "$_lt_caught_CXX_error"; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* # Allow CC to be a program name with arguments. 
lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC compiler_CXX=$CC func_cc_basename $compiler cc_basename=$func_cc_basename_result if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately if test yes = "$GXX"; then lt_prog_compiler_no_builtin_flag_CXX=' -fno-builtin' else lt_prog_compiler_no_builtin_flag_CXX= fi if test yes = "$GXX"; then # Set up default GNU C++ configuration # Check whether --with-gnu-ld was given. if test ${with_gnu_ld+y} then : withval=$with_gnu_ld; test no = "$withval" || with_gnu_ld=yes else case e in #( e) with_gnu_ld=no ;; esac fi ac_prog=ld if test yes = "$GCC"; then # Check if gcc -print-prog-name=ld gives a path. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 printf %s "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return, which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD=$ac_prog ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test yes = "$with_gnu_ld"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 printf %s "checking for GNU ld... " >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 printf %s "checking for non-GNU ld... " >&6; } fi if test ${lt_cv_path_LD+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$LD"; then lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD=$ac_dir/$ac_prog # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 printf "%s\n" "$LD" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 printf %s "checking if the linker ($LD) is GNU ld... " >&6; } if test ${lt_cv_prog_gnu_ld+y} then : printf %s "(cached) " >&6 else case e in #( e) # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 &5 printf "%s\n" "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld # Check if GNU C++ uses GNU ld as the underlying linker, since the # archiving commands below assume that GNU ld is being used. 
if test yes = "$with_gnu_ld"; then archive_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' hardcode_libdir_flag_spec_CXX='$wl-rpath $wl$libdir' export_dynamic_flag_spec_CXX='$wl--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) wlarc='$wl' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec_CXX=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else whole_archive_flag_spec_CXX= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 printf %s "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } ld_shlibs_CXX=yes case $host_os in aix3*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aix[4-9]*) if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag= else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # have runtime linking enabled, and use it for executables. # For shared libraries, we enable/disable runtime linking # depending on the kind of the shared library created - # when "with_aix_soname,aix_use_runtimelinking" is: # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables # "aix,yes" lib.so shared, rtl:yes, for executables # lib.a static archive # "both,no" lib.so.V(shr.o) shared, rtl:yes # lib.a(lib.so.V) shared, rtl:no, for executables # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a(lib.so.V) shared, rtl:no # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a static archive case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then # With aix-soname=svr4, we create the lib.so.V shared archives only, # so we don't have lib.a shared libs to link our executables. # We have to force runtime linking in this case. 
aix_use_runtimelinking=yes LDFLAGS="$LDFLAGS -Wl,-brtl" fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. archive_cmds_CXX='' hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes file_list_spec_CXX='$wl-f,' case $with_aix_soname,$aix_use_runtimelinking in aix,*) ;; # no import file svr4,* | *,yes) # use import file # The Import File defines what to hardcode. hardcode_direct_CXX=no hardcode_direct_absolute_CXX=no ;; esac if test yes = "$GXX"; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct_CXX=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L_CXX=yes hardcode_libdir_flag_spec_CXX='-L$libdir' hardcode_libdir_separator_CXX= fi esac shared_flag='-shared' if test yes = "$aix_use_runtimelinking"; then shared_flag=$shared_flag' $wl-G' fi # Need to ensure runtime linking is disabled for the traditional # shared library, or the linker may eventually find shared libraries # /with/ Import File - we do not want to mix them. shared_flag_aix='-shared' shared_flag_svr4='-shared $wl-G' else # not using gcc if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test yes = "$aix_use_runtimelinking"; then shared_flag='$wl-G' else shared_flag='$wl-bM:SRE' fi shared_flag_aix='$wl-bM:SRE' shared_flag_svr4='$wl-G' fi fi export_dynamic_flag_spec_CXX='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to # export. always_export_symbols_CXX=yes if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. # The "-G" linker flag allows undefined symbols. no_undefined_flag_CXX='-bernotok' # Determine the default libpath from the value encoded in an empty # executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if test ${lt_cv_aix_libpath__CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO" then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. 
if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=/usr/lib:/lib fi ;; esac fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='$wl-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else if test ia64 = "$host_cpu"; then hardcode_libdir_flag_spec_CXX='$wl-R $libdir:/usr/lib:/lib' allow_undefined_flag_CXX="-z nodefs" archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if test ${lt_cv_aix_libpath__CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO" then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=/usr/lib:/lib fi ;; esac fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag_CXX=' $wl-bernotok' allow_undefined_flag_CXX=' $wl-berok' if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. whole_archive_flag_spec_CXX='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec_CXX='$convenience' fi archive_cmds_need_lc_CXX=yes archive_expsym_cmds_CXX='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' # -brtl affects multiple linker settings, -berok does not and is overridden later compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([, ]\\)%-berok\\1%g"`' if test svr4 != "$with_aix_soname"; then # This is similar to how AIX traditionally builds its shared # libraries. Need -bnortl late, we may have -brtl in LDFLAGS. 
archive_expsym_cmds_CXX="$archive_expsym_cmds_CXX"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' fi if test aix != "$with_aix_soname"; then archive_expsym_cmds_CXX="$archive_expsym_cmds_CXX"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' else # used by -dlpreopen to get the symbols archive_expsym_cmds_CXX="$archive_expsym_cmds_CXX"'~$MV $output_objdir/$realname.d/$soname $output_objdir' fi archive_expsym_cmds_CXX="$archive_expsym_cmds_CXX"'~$RM -r $output_objdir/$realname.d' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag_CXX=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds_CXX='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' else ld_shlibs_CXX=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl* | ,icl* | no,icl*) # Native MSVC or ICC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. hardcode_libdir_flag_spec_CXX=' ' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=yes file_list_spec_CXX='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' archive_expsym_cmds_CXX='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp "$export_symbols" "$output_objdir/$soname.def"; echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; else $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true' enable_shared_with_static_runtimes_CXX=yes # Don't use ranlib old_postinstall_cmds_CXX='chmod 644 $oldlib' postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile=$lt_outputfile.exe lt_tool_outputfile=$lt_tool_outputfile.exe ;; esac~ func_to_tool_file "$lt_outputfile"~ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless, # as there is no search path for DLLs. hardcode_libdir_flag_spec_CXX='-L$libdir' export_dynamic_flag_spec_CXX='$wl--export-all-symbols' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=no enable_shared_with_static_runtimes_CXX=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file, use it as # is; otherwise, prepend EXPORTS... archive_expsym_cmds_CXX='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs_CXX=no fi ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc_CXX=no hardcode_direct_CXX=no hardcode_automatic_CXX=yes hardcode_shlibpath_var_CXX=unsupported if test yes = "$lt_cv_ld_force_load"; then whole_archive_flag_spec_CXX='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec_CXX='' fi link_all_deplibs_CXX=yes allow_undefined_flag_CXX=$_lt_dar_allow_undefined case $cc_basename in ifort*|nagfor*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test yes = "$_lt_dar_can_shared"; then output_verbose_link_cmd=func_echo_all archive_cmds_CXX="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil" module_cmds_CXX="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil" archive_expsym_cmds_CXX="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil" module_expsym_cmds_CXX="$SED -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil" if test yes = "$_lt_dar_needs_single_mod" -a yes != "$lt_cv_apple_cc_single_mod"; then archive_cmds_CXX="\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib 
\$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dsymutil" archive_expsym_cmds_CXX="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dar_export_syms$_lt_dsymutil" fi else ld_shlibs_CXX=no fi ;; os2*) hardcode_libdir_flag_spec_CXX='-L$libdir' hardcode_minus_L_CXX=yes allow_undefined_flag_CXX=unsupported shrext_cmds=.dll archive_cmds_CXX='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' archive_expsym_cmds_CXX='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' old_archive_From_new_cmds_CXX='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' enable_shared_with_static_runtimes_CXX=yes file_list_spec_CXX='@' ;; dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF ld_shlibs_CXX=no ;; freebsd-elf*) archive_cmds_need_lc_CXX=no ;; freebsd* | dragonfly* | midnightbsd*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions ld_shlibs_CXX=yes ;; haiku*) archive_cmds_CXX='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' link_all_deplibs_CXX=yes ;; hpux9*) hardcode_libdir_flag_spec_CXX='$wl+b $wl$libdir' hardcode_libdir_separator_CXX=: export_dynamic_flag_spec_CXX='$wl-E' hardcode_direct_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) archive_cmds_CXX='$RM $output_objdir/$soname~$CC -b $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
# # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes = "$GXX"; then archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; hpux10*|hpux11*) if test no = "$with_gnu_ld"; then hardcode_libdir_flag_spec_CXX='$wl+b $wl$libdir' hardcode_libdir_separator_CXX=: case $host_cpu in hppa*64*|ia64*) ;; *) export_dynamic_flag_spec_CXX='$wl-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no ;; *) hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. ;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -b $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes = "$GXX"; then if test no = "$with_gnu_ld"; then case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -shared -nostdlib -fPIC $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; interix[3-9]*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='$wl-rpath,$libdir' export_dynamic_flag_spec_CXX='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. 
To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. archive_cmds_CXX='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds_CXX='$SED "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ archive_cmds_CXX='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. old_archive_cmds_CXX='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test yes = "$GXX"; then if test no = "$with_gnu_ld"; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` -o $lib' fi fi link_all_deplibs_CXX=yes ;; esac hardcode_libdir_flag_spec_CXX='$wl-rpath $wl$libdir' hardcode_libdir_separator_CXX=: inherit_rpath_CXX=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' archive_expsym_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib $wl-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. 
output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' hardcode_libdir_flag_spec_CXX='$wl-rpath,$libdir' export_dynamic_flag_spec_CXX='$wl--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. case `$CC -V 2>&1` in *"Version 7."*) archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac archive_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; esac archive_cmds_need_lc_CXX=no hardcode_libdir_flag_spec_CXX='$wl-rpath,$libdir' export_dynamic_flag_spec_CXX='$wl--export-dynamic' whole_archive_flag_spec_CXX='$wl--whole-archive$convenience $wl--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [1-5].* | *pgcpp\ [1-5].*) prelink_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' old_archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='$wl--rpath $wl$libdir' export_dynamic_flag_spec_CXX='$wl--export-dynamic' whole_archive_flag_spec_CXX='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" 
&& new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' ;; cxx*) # Compaq C++ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib $wl-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec_CXX='-rpath $libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld hardcode_libdir_flag_spec_CXX='$wl-rpath $wl$libdir' export_dynamic_flag_spec_CXX='$wl--export-dynamic' archive_cmds_CXX='$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' if test yes = "$supports_anon_versioning"; then archive_expsym_cmds_CXX='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C++ 5.9 no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file $wl$export_symbols' hardcode_libdir_flag_spec_CXX='-R$libdir' whole_archive_flag_spec_CXX='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' compiler_needs_object_CXX=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; m88k*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds_CXX='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) ld_shlibs_CXX=yes ;; openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no hardcode_direct_absolute_CXX=yes archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' hardcode_libdir_flag_spec_CXX='$wl-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`"; then archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file,$export_symbols -o $lib' export_dynamic_flag_spec_CXX='$wl-E' whole_archive_flag_spec_CXX=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else ld_shlibs_CXX=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' hardcode_libdir_flag_spec_CXX='$wl-rpath,$libdir' hardcode_libdir_separator_CXX=: # Archives containing C++ object files must be created using # the KAI C++ compiler. 
case $host in osf3*) old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; *) old_archive_cmds_CXX='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; cxx*) case $host in osf3*) allow_undefined_flag_CXX=' $wl-expect_unresolved $wl\*' archive_cmds_CXX='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $soname `test -n "$verstring" && func_echo_all "$wl-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' hardcode_libdir_flag_spec_CXX='$wl-rpath $wl$libdir' ;; *) allow_undefined_flag_CXX=' -expect_unresolved \*' archive_cmds_CXX='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' archive_expsym_cmds_CXX='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname $wl-input $wl$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~ $RM $lib.exp' hardcode_libdir_flag_spec_CXX='-rpath $libdir' ;; esac hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes,no = "$GXX,$with_gnu_ld"; then allow_undefined_flag_CXX=' $wl-expect_unresolved $wl\*' case $host in osf3*) archive_cmds_CXX='$CC -shared -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' ;; *) archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='$wl-rpath $wl$libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ archive_cmds_need_lc_CXX=yes no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G$allow_undefined_flag $wl-M $wl$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_shlibpath_var_CXX=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands '-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) whole_archive_flag_spec_CXX='-z allextract$convenience -z defaultextract' ;; esac link_all_deplibs_CXX=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. old_archive_cmds_CXX='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test yes,no = "$GXX,$with_gnu_ld"; then no_undefined_flag_CXX=' $wl-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require '-G' NOT '-shared' on this # platform. 
archive_cmds_CXX='$CC -G -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi hardcode_libdir_flag_spec_CXX='$wl-R $wl$libdir' case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) whole_archive_flag_spec_CXX='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag_CXX='$wl-z,text' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
no_undefined_flag_CXX='$wl-z,text' allow_undefined_flag_CXX='$wl-z,nodefs' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='$wl-R,$libdir' hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes export_dynamic_flag_spec_CXX='$wl-Bexport' runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' old_archive_cmds_CXX='$CC -Tprelink_objects $oldobjs~ '"$old_archive_cmds_CXX" reload_cmds_CXX='$CC -Tprelink_objects $reload_objs~ '"$reload_cmds_CXX" ;; *) archive_cmds_CXX='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; vxworks*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 printf "%s\n" "$ld_shlibs_CXX" >&6; } test no = "$ld_shlibs_CXX" && can_build_shared=no GCC_CXX=$GXX LD_CXX=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... # Dependencies to place before and after the object being linked: predep_objects_CXX= postdep_objects_CXX= predeps_CXX= postdeps_CXX= compiler_lib_search_path_CXX= cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case $prev$p in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. if test x-L = "$p" || test x-R = "$p"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test no = "$pre_test_object_deps_done"; then case $prev in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. 
if test -z "$compiler_lib_search_path_CXX"; then compiler_lib_search_path_CXX=$prev$p else compiler_lib_search_path_CXX="${compiler_lib_search_path_CXX} $prev$p" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$postdeps_CXX"; then postdeps_CXX=$prev$p else postdeps_CXX="${postdeps_CXX} $prev$p" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test no = "$pre_test_object_deps_done"; then if test -z "$predep_objects_CXX"; then predep_objects_CXX=$p else predep_objects_CXX="$predep_objects_CXX $p" fi else if test -z "$postdep_objects_CXX"; then postdep_objects_CXX=$p else postdep_objects_CXX="$postdep_objects_CXX $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. rm -f a.out a.exe else echo "libtool.m4: error: problem compiling CXX test program" fi $RM -f confest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken case $host_os in interix[3-9]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. predep_objects_CXX= postdep_objects_CXX= postdeps_CXX= ;; esac case " $postdeps_CXX " in *" -lc "*) archive_cmds_need_lc_CXX=no ;; esac compiler_lib_search_dirs_CXX= if test -n "${compiler_lib_search_path_CXX}"; then compiler_lib_search_dirs_CXX=`echo " ${compiler_lib_search_path_CXX}" | $SED -e 's! -L! !g' -e 's!^ !!'` fi lt_prog_compiler_wl_CXX= lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX= # C++ specific cases for pic, static, wl, etc. if test yes = "$GXX"; then lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-static' case $host_os in aix*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' fi lt_prog_compiler_pic_CXX='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic_CXX='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the '-m68020' flag to GCC prevents building anything better, # like '-m68040'. lt_prog_compiler_pic_CXX='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic_CXX='-DDLL_EXPORT' case $host_os in os2*) lt_prog_compiler_static_CXX='$wl-static' ;; esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic_CXX='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all lt_prog_compiler_pic_CXX= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static_CXX= ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. 
;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic_CXX=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac else case $host_os in aix[4-9]*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' else lt_prog_compiler_static_CXX='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, CXX)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic_CXX='-DDLL_EXPORT' ;; dgux*) case $cc_basename in ec++*) lt_prog_compiler_pic_CXX='-KPIC' ;; ghcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; freebsd* | dragonfly* | midnightbsd*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='$wl-a ${wl}archive' if test ia64 != "$host_cpu"; then lt_prog_compiler_pic_CXX='+Z' fi ;; aCC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='$wl-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic_CXX='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # KAI C++ Compiler lt_prog_compiler_wl_CXX='--backend -Wl,' lt_prog_compiler_pic_CXX='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64, which still supported -KPIC. lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fPIC' lt_prog_compiler_static_CXX='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fpic' lt_prog_compiler_static_CXX='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; xlc* | xlC* | bgxl[cC]* | mpixl[cC]*) # IBM XL 8.0, 9.0 on PPC and BlueGene lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-qpic' lt_prog_compiler_static_CXX='-qstaticlink' ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C++ 5.9 lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) lt_prog_compiler_pic_CXX='-W c,exportall' ;; *) ;; esac ;; netbsd*) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) lt_prog_compiler_wl_CXX='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 lt_prog_compiler_pic_CXX='-pic' ;; cxx*) # Digital/Compaq C++ lt_prog_compiler_wl_CXX='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x lt_prog_compiler_pic_CXX='-pic' lt_prog_compiler_static_CXX='-Bstatic' ;; lcc*) # Lucid lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 lt_prog_compiler_pic_CXX='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) lt_prog_compiler_can_build_shared_CXX=no ;; esac fi case $host_os in # For platforms that do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic_CXX= ;; *) lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC" ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 printf %s "checking for $compiler option to produce PIC... " >&6; } if test ${lt_cv_prog_compiler_pic_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5 printf "%s\n" "$lt_cv_prog_compiler_pic_CXX" >&6; } lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic_CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works" >&5 printf %s "checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works... " >&6; } if test ${lt_cv_prog_compiler_pic_works_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_pic_works_CXX=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic_CXX -DPIC" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. 
# Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works_CXX=yes fi fi $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works_CXX" >&5 printf "%s\n" "$lt_cv_prog_compiler_pic_works_CXX" >&6; } if test yes = "$lt_cv_prog_compiler_pic_works_CXX"; then case $lt_prog_compiler_pic_CXX in "" | " "*) ;; *) lt_prog_compiler_pic_CXX=" $lt_prog_compiler_pic_CXX" ;; esac else lt_prog_compiler_pic_CXX= lt_prog_compiler_can_build_shared_CXX=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl_CXX eval lt_tmp_static_flag=\"$lt_prog_compiler_static_CXX\" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 printf %s "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if test ${lt_cv_prog_compiler_static_works_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_static_works_CXX=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works_CXX=yes fi else lt_cv_prog_compiler_static_works_CXX=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works_CXX" >&5 printf "%s\n" "$lt_cv_prog_compiler_static_works_CXX" >&6; } if test yes = "$lt_cv_prog_compiler_static_works_CXX"; then : else lt_prog_compiler_static_CXX= fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if test ${lt_cv_prog_compiler_c_o_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. 
lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 printf "%s\n" "$lt_cv_prog_compiler_c_o_CXX" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 printf %s "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if test ${lt_cv_prog_compiler_c_o_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 printf "%s\n" "$lt_cv_prog_compiler_c_o_CXX" >&6; } hard_links=nottested if test no = "$lt_cv_prog_compiler_c_o_CXX" && test no != "$need_locks"; then # do not overwrite the value of need_locks provided by the user { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 printf %s "checking if we can lock with hard links... 
" >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 printf "%s\n" "$hard_links" >&6; } if test no = "$hard_links"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&5 printf "%s\n" "$as_me: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 printf %s "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' case $host_os in aix[4-9]*) # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to GNU nm, but means don't demangle to AIX nm. # Without the "-l" option, or with the "-B" option, AIX nm treats # weak defined symbols like other global defined symbols, whereas # GNU nm marks them as "W". # While the 'weak' keyword is ignored in the Export File, we need # it in the Import File for the 'aix-soname' feature, so we have # to replace the "-B" option with "-P" for AIX nm. if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds_CXX='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else export_symbols_cmds_CXX='`func_echo_all $NM | $SED -e '\''s/B\([^B]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "L") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && (substr(\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi ;; pw32*) export_symbols_cmds_CXX=$ltdll_cmds ;; cygwin* | mingw* | cegcc*) case $cc_basename in cl* | icl*) exclude_expsyms_CXX='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' ;; esac ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 printf "%s\n" "$ld_shlibs_CXX" >&6; } test no = "$ld_shlibs_CXX" && can_build_shared=no with_gnu_ld_CXX=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc_CXX" in x|xyes) # Assume -lc should be added archive_cmds_need_lc_CXX=yes if test yes,yes = "$GCC,$enable_shared"; then case $archive_cmds_CXX in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. 
If gcc already passes -lc # to ld, don't add -lc before -lgcc. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 printf %s "checking whether -lc should be explicitly linked in... " >&6; } if test ${lt_cv_archive_cmds_need_lc_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl_CXX pic_flag=$lt_prog_compiler_pic_CXX compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag_CXX allow_undefined_flag_CXX= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc_CXX=no else lt_cv_archive_cmds_need_lc_CXX=yes fi allow_undefined_flag_CXX=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc_CXX" >&5 printf "%s\n" "$lt_cv_archive_cmds_need_lc_CXX" >&6; } archive_cmds_need_lc_CXX=$lt_cv_archive_cmds_need_lc_CXX ;; esac fi ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 printf %s "checking dynamic linker characteristics... " >&6; } library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=.so postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='$libname$release$shared_ext$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test ia64 = "$host_cpu"; then # AIX 5 supports IA64 library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line '#! .'. This would cause the generated library to # depend on '.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. 
case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # Using Import Files as archive members, it is possible to support # filename-based versioning of shared library archives on AIX. While # this would work for both with and without runtime linking, it will # prevent static linking of such archives. So we do filename-based # shared library versioning with .so extension only, which is used # when both runtime linking and shared linking is enabled. # Unfortunately, runtime linking may impact performance, so we do # not want this to be the default eventually. Also, we use the # versioned .so libs for executables only if there is the -brtl # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only. # To allow for filename-based versioning support, we need to create # libNAME.so.V as an archive file, containing: # *) an Import File, referring to the versioned filename of the # archive as well as the shared archive member, telling the # bitwidth (32 or 64) of that shared object, and providing the # list of exported symbols of that shared object, eventually # decorated with the 'weak' keyword # *) the shared object with the F_LOADONLY flag set, to really avoid # it being seen by the linker. # At run time we better use the real file rather than another symlink, # but for link time we create the symlink libNAME.so -> libNAME.so.V case $with_aix_soname,$aix_use_runtimelinking in # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. aix,yes) # traditional libtool dynamic_linker='AIX unversionable lib.so' # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; aix,no) # traditional AIX only dynamic_linker='AIX lib.a(lib.so.V)' # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' ;; svr4,*) # full svr4 only dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # We do not specify a path in Import Files, so LIBPATH fires. shlibpath_overrides_runpath=yes ;; *,yes) # both, prefer svr4 dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o), lib.a(lib.so.V)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # unpreferred sharedlib libNAME.a needs extra handling postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"' postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"' # We do not specify a path in Import Files, so LIBPATH fires. 
shlibpath_overrides_runpath=yes ;; *,no) # both, prefer aix dynamic_linker="AIX lib.a(lib.so.V), lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)' postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"' ;; esac shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='$libname$shared_ext' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo $libname | $SED -e 's/^lib/cyg/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo $libname | $SED -e 's/^lib/pw/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl* | *,icl*) # Native MSVC or ICC libname_spec='$name' soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' library_names_spec='$libname.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec=$LIB if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC and ICC wrapper library_names_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='$libname$release$major$shared_ext $libname$shared_ext' soname_spec='$libname$release$major$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly* | midnightbsd*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=no sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' if test 32 = "$HPUX_IA64_MODE"; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" sys_lib_dlsearch_path_spec=/usr/lib/hpux32 else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" sys_lib_dlsearch_path_spec=/usr/lib/hpux64 fi ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test yes = "$lt_cv_prog_gnu_ld"; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff" sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; linux*android*) version_type=none # Android doesn't support versioned libraries. need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext' soname_spec='$libname$release$shared_ext' finish_cmds= shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes dynamic_linker='Android linker' # Don't embed -rpath directories since the linker doesn't support them. hardcode_libdir_flag_spec_CXX='-L$libdir' ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if test ${lt_cv_shlibpath_overrides_runpath+y} then : printf %s "(cached) " >&6 else case e in #( e) lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl_CXX\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec_CXX\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO" then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir ;; esac fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Ideally, we could use ldconfig to report *all* directores which are # searched for libraries, however this is still not possible. Aside from not # being certain /sbin/ldconfig is available, command # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64, # even though it is searched at run-time. Try to do the best guess by # appending ld.so.conf contents (and includes) to the search path. if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
dynamic_linker='GNU/Linux ld.so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd* | bitrig*) version_type=sunos sys_lib_dlsearch_path_spec=/usr/lib need_lib_prefix=no if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then need_version=no else need_version=yes fi library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; os2*) libname_spec='$name' version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no # OS/2 can only load a DLL with a base name of 8 characters or less. soname_spec='`test -n "$os2dllname" && libname="$os2dllname"; v=$($ECHO $release$versuffix | tr -d .-); n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _); $ECHO $n$v`$shared_ext' library_names_spec='${libname}_dll.$libext' dynamic_linker='OS/2 ld.exe' shlibpath_var=BEGINLIBPATH sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; $ECHO \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test yes = "$with_gnu_ld"; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec; then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext' soname_spec='$libname$shared_ext.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=sco need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test yes = "$with_gnu_ld"; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. 
version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 printf "%s\n" "$dynamic_linker" >&6; } test no = "$dynamic_linker" && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test yes = "$GCC"; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec fi if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec fi # remember unaugmented sys_lib_dlsearch_path content for libtool script decls... configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec # ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH" # to be used as default LT_SYS_LIBRARY_PATH value in generated libtool configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 printf %s "checking how to hardcode library paths into programs... " >&6; } hardcode_action_CXX= if test -n "$hardcode_libdir_flag_spec_CXX" || test -n "$runpath_var_CXX" || test yes = "$hardcode_automatic_CXX"; then # We can hardcode non-existent directories. if test no != "$hardcode_direct_CXX" && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, CXX)" && test no != "$hardcode_minus_L_CXX"; then # Linking always hardcodes the temporary library directory. hardcode_action_CXX=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action_CXX=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. 
hardcode_action_CXX=unsupported fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $hardcode_action_CXX" >&5 printf "%s\n" "$hardcode_action_CXX" >&6; } if test relink = "$hardcode_action_CXX" || test yes = "$inherit_rpath_CXX"; then # Fast installation is not supported enable_fast_install=no elif test yes = "$shlibpath_overrides_runpath" || test no = "$enable_shared"; then # Fast installation is not necessary enable_fast_install=needless fi fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test yes != "$_lt_caught_CXX_error" ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_config_commands="$ac_config_commands libtool" # Only expand once: ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then if test "$as_dir$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. 
shift ac_cv_prog_CC="$as_dir$ac_word${1+' '}$@" fi fi fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}clang", so it can be a program name with args. set dummy ${ac_tool_prefix}clang; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}clang" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi CC=$ac_cv_prog_CC if test -n "$CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 printf "%s\n" "$CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "clang", so it can be a program name with args. set dummy clang; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="clang" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 printf "%s\n" "$ac_ct_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi fi test -z "$CC" && { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See 'config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion -version; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" printf "%s\n" "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C" >&5 printf %s "checking whether the compiler supports GNU C... " >&6; } if test ${ac_cv_c_compiler_gnu+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_compiler_gnu=yes else case e in #( e) ac_compiler_gnu=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 printf "%s\n" "$ac_cv_c_compiler_gnu" >&6; } ac_compiler_gnu=$ac_cv_c_compiler_gnu if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+y} ac_save_CFLAGS=$CFLAGS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 printf %s "checking whether $CC accepts -g... " >&6; } if test ${ac_cv_prog_cc_g+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_g=yes else case e in #( e) CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 printf "%s\n" "$ac_cv_prog_cc_g" >&6; } if test $ac_test_CFLAGS; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi ac_prog_cc_stdc=no if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C11 features" >&5 printf %s "checking for $CC option to enable C11 features... " >&6; } if test ${ac_cv_prog_cc_c11+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c11=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_c_conftest_c11_program _ACEOF for ac_arg in '' -std=gnu11 do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c11=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c11" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c11" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c11" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c11" >&5 printf "%s\n" "$ac_cv_prog_cc_c11" >&6; } CC="$CC $ac_cv_prog_cc_c11" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c11 ac_prog_cc_stdc=c11 ;; esac fi fi if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C99 features" >&5 printf %s "checking for $CC option to enable C99 features... " >&6; } if test ${ac_cv_prog_cc_c99+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c99=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $ac_c_conftest_c99_program _ACEOF for ac_arg in '' -std=gnu99 -std=c99 -c99 -qlanglvl=extc1x -qlanglvl=extc99 -AC99 -D_STDC_C99= do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c99=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c99" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c99" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c99" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99" >&5 printf "%s\n" "$ac_cv_prog_cc_c99" >&6; } CC="$CC $ac_cv_prog_cc_c99" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c99 ac_prog_cc_stdc=c99 ;; esac fi fi if test x$ac_prog_cc_stdc = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C89 features" >&5 printf %s "checking for $CC option to enable C89 features... " >&6; } if test ${ac_cv_prog_cc_c89+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $ac_c_conftest_c89_program _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO" then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext conftest.beam test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC ;; esac fi if test "x$ac_cv_prog_cc_c89" = xno then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 printf "%s\n" "unsupported" >&6; } else case e in #( e) if test "x$ac_cv_prog_cc_c89" = x then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 printf "%s\n" "none needed" >&6; } else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 printf "%s\n" "$ac_cv_prog_cc_c89" >&6; } CC="$CC $ac_cv_prog_cc_c89" ;; esac fi ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c89 ac_prog_cc_stdc=c89 ;; esac fi fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5 printf %s "checking whether $CC understands -c and -o together... " >&6; } if test ${am_cv_prog_cc_c_o+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5 ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5 printf "%s\n" "$am_cv_prog_cc_c_o" >&6; } if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 printf %s "checking for egrep... 
" >&6; } if test ${ac_cv_path_EGREP+y} then : printf %s "(cached) " >&6 else case e in #( e) if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in egrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. # Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in #( *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 printf "%s\n" "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" EGREP_TRADITIONAL=$EGREP ac_cv_path_EGREP_TRADITIONAL=$EGREP { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for C compiler vendor" >&5 printf %s "checking for C compiler vendor... " >&6; } if test ${ax_cv_c_compiler_vendor+y} then : printf %s "(cached) " >&6 else case e in #( e) vendors=" intel: __ICC,__ECC,__INTEL_COMPILER ibm: __xlc__,__xlC__,__IBMC__,__IBMCPP__,__ibmxl__ pathscale: __PATHCC__,__PATHSCALE__ clang: __clang__ cray: _CRAYC fujitsu: __FUJITSU sdcc: SDCC,__SDCC sx: _SX nvhpc: __NVCOMPILER portland: __PGI gnu: __GNUC__ sun: __SUNPRO_C,__SUNPRO_CC,__SUNPRO_F90,__SUNPRO_F95 hp: __HP_cc,__HP_aCC dec: __DECC,__DECCXX,__DECC_VER,__DECCXX_VER borland: __BORLANDC__,__CODEGEARC__,__TURBOC__ comeau: __COMO__ kai: __KCC lcc: __LCC__ sgi: __sgi,sgi microsoft: _MSC_VER metrowerks: __MWERKS__ watcom: __WATCOMC__ tcc: __TINYC__ unknown: UNKNOWN " for ventest in $vendors; do case $ventest in *:) vendor=$ventest continue ;; *) vencpp="defined("`echo $ventest | sed 's/,/) || defined(/g'`")" ;; esac cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { #if !($vencpp) thisisanerror; #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : break fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext done ax_cv_c_compiler_vendor=`echo $vendor | cut -d: -f1` ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_c_compiler_vendor" >&5 printf "%s\n" "$ax_cv_c_compiler_vendor" >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether this is native windows" >&5 printf %s "checking whether this is native windows... " >&6; } ac_cv_native_windows=no ac_cv_windows=no case $host_os in mingw*) ac_cv_native_windows=yes ac_cv_windows=yes ;; cygwin*) ac_cv_windows=yes ;; esac if test "$ax_cv_c_compiler_vendor" = "microsoft" ; then ac_cv_native_windows=yes ac_cv_windows=yes fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_native_windows" >&5 printf "%s\n" "$ac_cv_native_windows" >&6; } # Check whether --enable-shared was given. if test ${enable_shared+y} then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_shared=yes ;; esac fi if test "x$ac_cv_windows" = "xyes" then : # Check whether --enable-static was given. if test ${enable_static+y} then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_static=no ;; esac fi else case e in #( e) # Check whether --enable-static was given. if test ${enable_static+y} then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS=$lt_save_ifs ;; esac else case e in #( e) enable_static=yes ;; esac fi ;; esac fi # Check whether --enable-warnings was given. if test ${enable_warnings+y} then : enableval=$enable_warnings; enable_warnings=${enableval} else case e in #( e) enable_warnings=yes ;; esac fi # Check whether --enable-symbol-hiding was given. if test ${enable_symbol_hiding+y} then : enableval=$enable_symbol_hiding; symbol_hiding="$enableval" if test "$symbol_hiding" = "no" -a "x$enable_shared" = "xyes" ; then case $host_os in cygwin* | mingw* | pw32* | cegcc*) as_fn_error $? "Cannot disable symbol hiding on windows" "$LINENO" 5 ;; esac fi else case e in #( e) if test "x$enable_shared" = "xyes" ; then symbol_hiding="maybe" else symbol_hiding="no" fi ;; esac fi # Check whether --enable-tests was given. if test ${enable_tests+y} then : enableval=$enable_tests; build_tests="$enableval" else case e in #( e) if test "x$HAVE_CXX14" = "x1" && test "x$cross_compiling" = "xno" ; then build_tests="maybe" else build_tests="no" fi ;; esac fi # Check whether --enable-cares-threads was given. 
if test ${enable_cares_threads+y} then : enableval=$enable_cares_threads; CARES_THREADS=${enableval} else case e in #( e) CARES_THREADS=yes ;; esac fi # Check whether --with-random was given. if test ${with_random+y} then : withval=$with_random; CARES_RANDOM_FILE="$withval" else case e in #( e) CARES_RANDOM_FILE="/dev/urandom" ;; esac fi if test -n "$CARES_RANDOM_FILE" && test X"$CARES_RANDOM_FILE" != Xno ; then printf "%s\n" "#define CARES_RANDOM_FILE \"$CARES_RANDOM_FILE\"" >>confdefs.h fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to enable maintainer-specific portions of Makefiles" >&5 printf %s "checking whether to enable maintainer-specific portions of Makefiles... " >&6; } # Check whether --enable-maintainer-mode was given. if test ${enable_maintainer_mode+y} then : enableval=$enable_maintainer_mode; USE_MAINTAINER_MODE=$enableval else case e in #( e) USE_MAINTAINER_MODE=no ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $USE_MAINTAINER_MODE" >&5 printf "%s\n" "$USE_MAINTAINER_MODE" >&6; } if test $USE_MAINTAINER_MODE = yes; then MAINTAINER_MODE_TRUE= MAINTAINER_MODE_FALSE='#' else MAINTAINER_MODE_TRUE='#' MAINTAINER_MODE_FALSE= fi MAINT=$MAINTAINER_MODE_TRUE AM_DEFAULT_VERBOSITY=0 # allow to override gcov location # Check whether --with-gcov was given. if test ${with_gcov+y} then : withval=$with_gcov; _AX_CODE_COVERAGE_GCOV_PROG_WITH=$with_gcov else case e in #( e) _AX_CODE_COVERAGE_GCOV_PROG_WITH=gcov ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build with code coverage support" >&5 printf %s "checking whether to build with code coverage support... " >&6; } # Check whether --enable-code-coverage was given. if test ${enable_code_coverage+y} then : enableval=$enable_code_coverage; else case e in #( e) enable_code_coverage=no ;; esac fi if test "x$enable_code_coverage" = xyes; then CODE_COVERAGE_ENABLED_TRUE= CODE_COVERAGE_ENABLED_FALSE='#' else CODE_COVERAGE_ENABLED_TRUE='#' CODE_COVERAGE_ENABLED_FALSE= fi CODE_COVERAGE_ENABLED=$enable_code_coverage { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $enable_code_coverage" >&5 printf "%s\n" "$enable_code_coverage" >&6; } if test "x$enable_code_coverage" = xyes then : for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_AWK+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 printf "%s\n" "$AWK" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$AWK" && break done { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for GNU make" >&5 printf %s "checking for GNU make... 
" >&6; } if test ${_cv_gnu_make_command+y} then : printf %s "(cached) " >&6 else case e in #( e) _cv_gnu_make_command="" ; for a in "$MAKE" make gmake gnumake ; do if test -z "$a" ; then continue ; fi ; if "$a" --version 2> /dev/null | grep GNU 2>&1 > /dev/null ; then _cv_gnu_make_command=$a ; AX_CHECK_GNU_MAKE_HEADLINE=$("$a" --version 2> /dev/null | grep "GNU Make") ax_check_gnu_make_version=$(echo ${AX_CHECK_GNU_MAKE_HEADLINE} | ${AWK} -F " " '{ print $(NF); }') break ; fi done ; ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $_cv_gnu_make_command" >&5 printf "%s\n" "$_cv_gnu_make_command" >&6; } if test "x$_cv_gnu_make_command" = x"" then : ifGNUmake="#" else case e in #( e) ifGNUmake="" ;; esac fi if test "x$_cv_gnu_make_command" = x"" then : ifnGNUmake="" else case e in #( e) ifnGNUmake="#" ;; esac fi if test "x$_cv_gnu_make_command" = x"" then : { ax_cv_gnu_make_command=; unset ax_cv_gnu_make_command;} else case e in #( e) ax_cv_gnu_make_command=${_cv_gnu_make_command} ;; esac fi if test "x$_cv_gnu_make_command" = x"" then : as_fn_error $? "not using GNU make that is needed for coverage" "$LINENO" 5 fi # check for gcov if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}$_AX_CODE_COVERAGE_GCOV_PROG_WITH", so it can be a program name with args. set dummy ${ac_tool_prefix}$_AX_CODE_COVERAGE_GCOV_PROG_WITH; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_GCOV+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$GCOV"; then ac_cv_prog_GCOV="$GCOV" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_GCOV="${ac_tool_prefix}$_AX_CODE_COVERAGE_GCOV_PROG_WITH" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi GCOV=$ac_cv_prog_GCOV if test -n "$GCOV"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $GCOV" >&5 printf "%s\n" "$GCOV" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_prog_GCOV"; then ac_ct_GCOV=$GCOV # Extract the first word of "$_AX_CODE_COVERAGE_GCOV_PROG_WITH", so it can be a program name with args. set dummy $_AX_CODE_COVERAGE_GCOV_PROG_WITH; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ac_ct_GCOV+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ac_ct_GCOV"; then ac_cv_prog_ac_ct_GCOV="$ac_ct_GCOV" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_GCOV="$_AX_CODE_COVERAGE_GCOV_PROG_WITH" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi ac_ct_GCOV=$ac_cv_prog_ac_ct_GCOV if test -n "$ac_ct_GCOV"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_ct_GCOV" >&5 printf "%s\n" "$ac_ct_GCOV" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_ct_GCOV" = x; then GCOV=":" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac GCOV=$ac_ct_GCOV fi else GCOV="$ac_cv_prog_GCOV" fi if test "X$GCOV" = "X:" then : as_fn_error $? "gcov is needed to do coverage" "$LINENO" 5 fi if test "$GCC" = "no" then : as_fn_error $? "not compiling with gcc, which is required for gcov code coverage" "$LINENO" 5 fi # Extract the first word of "lcov", so it can be a program name with args. set dummy lcov; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_LCOV+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$LCOV"; then ac_cv_prog_LCOV="$LCOV" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_LCOV="lcov" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi LCOV=$ac_cv_prog_LCOV if test -n "$LCOV"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $LCOV" >&5 printf "%s\n" "$LCOV" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi # Extract the first word of "genhtml", so it can be a program name with args. set dummy genhtml; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_GENHTML+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$GENHTML"; then ac_cv_prog_GENHTML="$GENHTML" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_GENHTML="genhtml" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi GENHTML=$ac_cv_prog_GENHTML if test -n "$GENHTML"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $GENHTML" >&5 printf "%s\n" "$GENHTML" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test x"$LCOV" = x then : as_fn_error $? 
"To enable code coverage reporting you must have lcov installed" "$LINENO" 5 fi if test x"$GENHTML" = x then : as_fn_error $? "Could not find genhtml from the lcov package" "$LINENO" 5 fi CODE_COVERAGE_CPPFLAGS="-DNDEBUG" CODE_COVERAGE_CFLAGS="-O0 -g -fprofile-arcs -ftest-coverage" CODE_COVERAGE_CXXFLAGS="-O0 -g -fprofile-arcs -ftest-coverage" CODE_COVERAGE_LIBS="-lgcov" fi # Check whether --enable-largefile was given. if test ${enable_largefile+y} then : enableval=$enable_largefile; fi if test "$enable_largefile,$enable_year2038" != no,no then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option to enable large file support" >&5 printf %s "checking for $CC option to enable large file support... " >&6; } if test ${ac_cv_sys_largefile_opts+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_CC="$CC" ac_opt_found=no for ac_opt in "none needed" "-D_FILE_OFFSET_BITS=64" "-D_LARGE_FILES=1" "-n32"; do if test x"$ac_opt" != x"none needed" then : CC="$ac_save_CC $ac_opt" fi cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #ifndef FTYPE # define FTYPE off_t #endif /* Check that FTYPE can represent 2**63 - 1 correctly. We can't simply define LARGE_FTYPE to be 9223372036854775807, since some C++ compilers masquerading as C compilers incorrectly reject 9223372036854775807. */ #define LARGE_FTYPE (((FTYPE) 1 << 31 << 31) - 1 + ((FTYPE) 1 << 31 << 31)) int FTYPE_is_large[(LARGE_FTYPE % 2147483629 == 721 && LARGE_FTYPE % 2147483647 == 1) ? 1 : -1]; int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : if test x"$ac_opt" = x"none needed" then : # GNU/Linux s390x and alpha need _FILE_OFFSET_BITS=64 for wide ino_t. CC="$CC -DFTYPE=ino_t" if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) CC="$CC -D_FILE_OFFSET_BITS=64" if ac_fn_c_try_compile "$LINENO" then : ac_opt='-D_FILE_OFFSET_BITS=64' fi rm -f core conftest.err conftest.$ac_objext conftest.beam ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam fi ac_cv_sys_largefile_opts=$ac_opt ac_opt_found=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext test $ac_opt_found = no || break done CC="$ac_save_CC" test $ac_opt_found = yes || ac_cv_sys_largefile_opts="support not detected" ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sys_largefile_opts" >&5 printf "%s\n" "$ac_cv_sys_largefile_opts" >&6; } ac_have_largefile=yes case $ac_cv_sys_largefile_opts in #( "none needed") : ;; #( "supported through gnulib") : ;; #( "support not detected") : ac_have_largefile=no ;; #( "-D_FILE_OFFSET_BITS=64") : printf "%s\n" "#define _FILE_OFFSET_BITS 64" >>confdefs.h ;; #( "-D_LARGE_FILES=1") : printf "%s\n" "#define _LARGE_FILES 1" >>confdefs.h ;; #( "-n32") : CC="$CC -n32" ;; #( *) : as_fn_error $? "internal error: bad value for \$ac_cv_sys_largefile_opts" "$LINENO" 5 ;; esac if test "$enable_year2038" != no then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC option for timestamps after 2038" >&5 printf %s "checking for $CC option for timestamps after 2038... " >&6; } if test ${ac_cv_sys_year2038_opts+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_CPPFLAGS="$CPPFLAGS" ac_opt_found=no for ac_opt in "none needed" "-D_TIME_BITS=64" "-D__MINGW_USE_VC2005_COMPAT" "-U_USE_32_BIT_TIME_T -D__MINGW_USE_VC2005_COMPAT"; do if test x"$ac_opt" != x"none needed" then : CPPFLAGS="$ac_save_CPPFLAGS $ac_opt" fi cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include /* Check that time_t can represent 2**32 - 1 correctly. */ #define LARGE_TIME_T \\ ((time_t) (((time_t) 1 << 30) - 1 + 3 * ((time_t) 1 << 30))) int verify_time_t_range[(LARGE_TIME_T / 65537 == 65535 && LARGE_TIME_T % 65537 == 0) ? 1 : -1]; int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ac_cv_sys_year2038_opts="$ac_opt" ac_opt_found=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext test $ac_opt_found = no || break done CPPFLAGS="$ac_save_CPPFLAGS" test $ac_opt_found = yes || ac_cv_sys_year2038_opts="support not detected" ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_sys_year2038_opts" >&5 printf "%s\n" "$ac_cv_sys_year2038_opts" >&6; } ac_have_year2038=yes case $ac_cv_sys_year2038_opts in #( "none needed") : ;; #( "support not detected") : ac_have_year2038=no ;; #( "-D_TIME_BITS=64") : printf "%s\n" "#define _TIME_BITS 64" >>confdefs.h ;; #( "-D__MINGW_USE_VC2005_COMPAT") : printf "%s\n" "#define __MINGW_USE_VC2005_COMPAT 1" >>confdefs.h ;; #( "-U_USE_32_BIT_TIME_T"*) : { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "the 'time_t' type is currently forced to be 32-bit. It will stop working after mid-January 2038. Remove _USE_32BIT_TIME_T from the compiler flags. See 'config.log' for more details" "$LINENO" 5; } ;; #( *) : as_fn_error $? "internal error: bad value for \$ac_cv_sys_year2038_opts" "$LINENO" 5 ;; esac fi fi case $host_os in solaris*) printf "%s\n" "#define ETC_INET 1" >>confdefs.h ;; esac case $host_os in solaris2*) if test "x$GCC" = 'xyes'; then for flag in -mimpure-text; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_ldflags__$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the linker accepts $flag" >&5 printf %s "checking whether the linker accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$LDFLAGS LDFLAGS="$LDFLAGS $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if eval test \"x\$"$as_CACHEVAR"\" = x"yes" then : if test ${LDFLAGS+y} then : case " $LDFLAGS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : LDFLAGS already contains \$flag"; } >&5 (: LDFLAGS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : LDFLAGS=\"\$LDFLAGS \$flag\""; } >&5 (: LDFLAGS="$LDFLAGS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } LDFLAGS="$LDFLAGS $flag" ;; esac else case e in #( e) LDFLAGS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done fi ;; *) ;; esac cares_use_no_undefined=no case $host_os in cygwin* | mingw* | pw32* | cegcc* | os2* | aix*) cares_use_no_undefined=yes ;; *) ;; esac if test "$cares_use_no_undefined" = 'yes'; then CARES_USE_NO_UNDEFINED_TRUE= CARES_USE_NO_UNDEFINED_FALSE='#' else CARES_USE_NO_UNDEFINED_TRUE='#' CARES_USE_NO_UNDEFINED_FALSE= fi if test "$ac_cv_native_windows" = "yes" ; then AM_CPPFLAGS="$AM_CPPFLAGS -D_WIN32_WINNT=0x0602 -DWIN32_LEAN_AND_MEAN" fi if test "$ac_cv_native_windows" = "yes" -a "x$enable_shared" = "xyes" -a "x$enable_static" = "xyes" ; then as_fn_error $? "Windows cannot build both static and shared simultaneously, specify --disable-shared or --disable-static" "$LINENO" 5 fi if test "x$enable_shared" = "xno" -a "x$enable_static" = "xyes" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether we need CARES_STATICLIB definition" >&5 printf %s "checking whether we need CARES_STATICLIB definition... " >&6; } if test "$ac_cv_native_windows" = "yes" ; then if test ${AM_CPPFLAGS+y} then : case " $AM_CPPFLAGS " in *" -DCARES_STATICLIB "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CPPFLAGS already contains -DCARES_STATICLIB"; } >&5 (: AM_CPPFLAGS already contains -DCARES_STATICLIB) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CPPFLAGS=\"\$AM_CPPFLAGS -DCARES_STATICLIB\""; } >&5 (: AM_CPPFLAGS="$AM_CPPFLAGS -DCARES_STATICLIB") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } AM_CPPFLAGS="$AM_CPPFLAGS -DCARES_STATICLIB" ;; esac else case e in #( e) AM_CPPFLAGS="-DCARES_STATICLIB" ;; esac fi PKGCONFIG_CFLAGS="-DCARES_STATICLIB" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi CARES_SYMBOL_HIDING_CFLAG="" if test "$symbol_hiding" != "no" ; then compiler_supports_symbol_hiding="no" if test "$ac_cv_windows" = "yes" ; then compiler_supports_symbol_hiding="yes" else case "$ax_cv_c_compiler_vendor" in clang|gnu|intel) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts " >&5 printf %s "checking whether C compiler accepts ... " >&6; } if test ${ax_cv_check_cflags__+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS " cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : ax_cv_check_cflags__=yes else case e in #( e) ax_cv_check_cflags__=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_check_cflags__" >&5 printf "%s\n" "$ax_cv_check_cflags__" >&6; } if test x"$ax_cv_check_cflags__" = xyes then : : else case e in #( e) : ;; esac fi for flag in -fvisibility=hidden; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags__$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... 
" >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${CARES_SYMBOL_HIDING_CFLAG+y} then : case " $CARES_SYMBOL_HIDING_CFLAG " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : CARES_SYMBOL_HIDING_CFLAG already contains \$flag"; } >&5 (: CARES_SYMBOL_HIDING_CFLAG already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : CARES_SYMBOL_HIDING_CFLAG=\"\$CARES_SYMBOL_HIDING_CFLAG \$flag\""; } >&5 (: CARES_SYMBOL_HIDING_CFLAG="$CARES_SYMBOL_HIDING_CFLAG $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } CARES_SYMBOL_HIDING_CFLAG="$CARES_SYMBOL_HIDING_CFLAG $flag" ;; esac else case e in #( e) CARES_SYMBOL_HIDING_CFLAG="$flag" ;; esac fi else case e in #( e) : ;; esac fi done if test "x$CARES_SYMBOL_HIDING_CFLAG" != "x" ; then compiler_supports_symbol_hiding="yes" fi ;; sun) for flag in -xldscope=hidden; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags__$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${CARES_SYMBOL_HIDING_CFLAG+y} then : case " $CARES_SYMBOL_HIDING_CFLAG " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : CARES_SYMBOL_HIDING_CFLAG already contains \$flag"; } >&5 (: CARES_SYMBOL_HIDING_CFLAG already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : CARES_SYMBOL_HIDING_CFLAG=\"\$CARES_SYMBOL_HIDING_CFLAG \$flag\""; } >&5 (: CARES_SYMBOL_HIDING_CFLAG="$CARES_SYMBOL_HIDING_CFLAG $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } CARES_SYMBOL_HIDING_CFLAG="$CARES_SYMBOL_HIDING_CFLAG $flag" ;; esac else case e in #( e) CARES_SYMBOL_HIDING_CFLAG="$flag" ;; esac fi else case e in #( e) : ;; esac fi done if test "x$CARES_SYMBOL_HIDING_CFLAG" != "x" ; then compiler_supports_symbol_hiding="yes" fi ;; esac fi if test "$compiler_supports_symbol_hiding" = "no" ; then if test "$symbol_hiding" = "yes" ; then as_fn_error $? "Compiler does not support symbol hiding" "$LINENO" 5 else symbol_hiding="no" fi else printf "%s\n" "#define CARES_SYMBOL_HIDING 1 " >>confdefs.h symbol_hiding="yes" fi fi if test "x$symbol_hiding" = "xyes"; then CARES_SYMBOL_HIDING_TRUE= CARES_SYMBOL_HIDING_FALSE='#' else CARES_SYMBOL_HIDING_TRUE='#' CARES_SYMBOL_HIDING_FALSE= fi if test "$enable_warnings" = "yes"; then for flag in -Wall -Wextra -Waggregate-return -Wcast-align -Wcast-qual -Wconversion -Wdeclaration-after-statement -Wdouble-promotion -Wfloat-equal -Wformat-security -Winit-self -Wjump-misses-init -Wlogical-op -Wmissing-braces -Wmissing-declarations -Wmissing-format-attribute -Wmissing-include-dirs -Wmissing-prototypes -Wnested-externs -Wno-coverage-mismatch -Wold-style-definition -Wpacked -Wpedantic -Wpointer-arith -Wredundant-decls -Wshadow -Wsign-conversion -Wstrict-overflow -Wstrict-prototypes -Wtrampolines -Wundef -Wunreachable-code -Wunused -Wvariadic-macros -Wvla -Wwrite-strings -Werror=implicit-int -Werror=implicit-function-declaration -Werror=partial-availability -Wno-long-long ; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags_-Werror_$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS -Werror $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${AM_CFLAGS+y} then : case " $AM_CFLAGS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS already contains \$flag"; } >&5 (: AM_CFLAGS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS=\"\$AM_CFLAGS \$flag\""; } >&5 (: AM_CFLAGS="$AM_CFLAGS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } AM_CFLAGS="$AM_CFLAGS $flag" ;; esac else case e in #( e) AM_CFLAGS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done case $host_os in *android*) for flag in -std=c99; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags_-Werror_$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... 
" >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS -Werror $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${AM_CFLAGS+y} then : case " $AM_CFLAGS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS already contains \$flag"; } >&5 (: AM_CFLAGS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS=\"\$AM_CFLAGS \$flag\""; } >&5 (: AM_CFLAGS="$AM_CFLAGS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } AM_CFLAGS="$AM_CFLAGS $flag" ;; esac else case e in #( e) AM_CFLAGS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done ;; *) for flag in -std=c90; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags_-Werror_$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS -Werror $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${AM_CFLAGS+y} then : case " $AM_CFLAGS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS already contains \$flag"; } >&5 (: AM_CFLAGS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS=\"\$AM_CFLAGS \$flag\""; } >&5 (: AM_CFLAGS="$AM_CFLAGS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } AM_CFLAGS="$AM_CFLAGS $flag" ;; esac else case e in #( e) AM_CFLAGS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done ;; esac fi if test "$ax_cv_c_compiler_vendor" = "intel"; then for flag in -shared-intel; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_cflags__$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts $flag" >&5 printf %s "checking whether C compiler accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$CFLAGS CFLAGS="$CFLAGS $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if test x"`eval 'as_val=${'$as_CACHEVAR'};printf "%s\n" "$as_val"'`" = xyes then : if test ${AM_CFLAGS+y} then : case " $AM_CFLAGS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS already contains \$flag"; } >&5 (: AM_CFLAGS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : AM_CFLAGS=\"\$AM_CFLAGS \$flag\""; } >&5 (: AM_CFLAGS="$AM_CFLAGS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } AM_CFLAGS="$AM_CFLAGS $flag" ;; esac else case e in #( e) AM_CFLAGS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done fi if test "$ac_cv_native_windows" = "yes" ; then ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 printf %s "checking how to run the C preprocessor... " >&6; } # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if test ${ac_cv_prog_CPP+y} then : printf %s "(cached) " >&6 else case e in #( e) # Double quotes because $CC needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" cpp /lib/cpp do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO" then : else case e in #( e) # Broken: fails on valid input. continue ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO" then : # Broken: success on invalid input. continue else case e in #( e) # Passes both tests. ac_preproc_ok=: break ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of 'break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok then : break fi done ac_cv_prog_CPP=$CPP ;; esac fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 printf "%s\n" "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO" then : else case e in #( e) # Broken: fails on valid input. continue ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO" then : # Broken: success on invalid input. continue else case e in #( e) # Passes both tests. ac_preproc_ok=: break ;; esac fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of 'break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok then : else case e in #( e) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "C preprocessor \"$CPP\" fails sanity check See 'config.log' for more details" "$LINENO" 5; } ;; esac fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_fn_c_check_header_preproc "$LINENO" "windows.h" "ac_cv_header_windows_h" if test "x$ac_cv_header_windows_h" = xyes then : printf "%s\n" "#define HAVE_WINDOWS_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "winsock2.h" "ac_cv_header_winsock2_h" if test "x$ac_cv_header_winsock2_h" = xyes then : printf "%s\n" "#define HAVE_WINSOCK2_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "ws2tcpip.h" "ac_cv_header_ws2tcpip_h" if test "x$ac_cv_header_ws2tcpip_h" = xyes then : printf "%s\n" "#define HAVE_WS2TCPIP_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "iphlpapi.h" "ac_cv_header_iphlpapi_h" if test "x$ac_cv_header_iphlpapi_h" = xyes then : printf "%s\n" "#define HAVE_IPHLPAPI_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "netioapi.h" "ac_cv_header_netioapi_h" if test "x$ac_cv_header_netioapi_h" = xyes then : printf "%s\n" "#define HAVE_NETIOAPI_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "ws2ipdef.h" "ac_cv_header_ws2ipdef_h" if test "x$ac_cv_header_ws2ipdef_h" = xyes then : printf "%s\n" "#define HAVE_WS2IPDEF_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "winternl.h" "ac_cv_header_winternl_h" if test "x$ac_cv_header_winternl_h" = xyes then : printf "%s\n" "#define HAVE_WINTERNL_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "ntdef.h" "ac_cv_header_ntdef_h" if test "x$ac_cv_header_ntdef_h" = xyes then : printf "%s\n" "#define HAVE_NTDEF_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "ntstatus.h" "ac_cv_header_ntstatus_h" if test "x$ac_cv_header_ntstatus_h" = xyes then : printf "%s\n" "#define HAVE_NTSTATUS_H 1" >>confdefs.h fi ac_fn_c_check_header_preproc "$LINENO" "mswsock.h" "ac_cv_header_mswsock_h" if test "x$ac_cv_header_mswsock_h" = xyes then : printf "%s\n" "#define HAVE_MSWSOCK_H 1" >>confdefs.h fi if test "$ac_cv_header_winsock2_h" = "yes"; then LIBS="$LIBS -lws2_32 -liphlpapi" fi fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing getservbyport" >&5 printf %s "checking for library containing getservbyport... " >&6; } if test ${ac_cv_search_getservbyport+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char getservbyport (void); int main (void) { return getservbyport (); ; return 0; } _ACEOF for ac_lib in '' nsl socket resolv do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO" then : ac_cv_search_getservbyport=$ac_res fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext if test ${ac_cv_search_getservbyport+y} then : break fi done if test ${ac_cv_search_getservbyport+y} then : else case e in #( e) ac_cv_search_getservbyport=no ;; esac fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_getservbyport" >&5 printf "%s\n" "$ac_cv_search_getservbyport" >&6; } ac_res=$ac_cv_search_getservbyport if test "$ac_res" != no then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking if libxnet is required" >&5 printf %s "checking if libxnet is required... " >&6; } need_xnet=no case $host_os in hpux*) XNET_LIBS="" for flag in -lxnet; do as_CACHEVAR=`printf "%s\n" "ax_cv_check_ldflags__$flag" | sed "$as_sed_sh"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether the linker accepts $flag" >&5 printf %s "checking whether the linker accepts $flag... " >&6; } if eval test \${$as_CACHEVAR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_check_save_flags=$LDFLAGS LDFLAGS="$LDFLAGS $flag" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : eval "$as_CACHEVAR=yes" else case e in #( e) eval "$as_CACHEVAR=no" ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$ax_check_save_flags ;; esac fi eval ac_res=\$$as_CACHEVAR { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if eval test \"x\$"$as_CACHEVAR"\" = x"yes" then : if test ${XNET_LIBS+y} then : case " $XNET_LIBS " in *" $flag "*) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : XNET_LIBS already contains \$flag"; } >&5 (: XNET_LIBS already contains $flag) 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } ;; *) { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: : XNET_LIBS=\"\$XNET_LIBS \$flag\""; } >&5 (: XNET_LIBS="$XNET_LIBS $flag") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } XNET_LIBS="$XNET_LIBS $flag" ;; esac else case e in #( e) XNET_LIBS="$flag" ;; esac fi else case e in #( e) : ;; esac fi done if test "x$XNET_LIBS" != "x" ; then LIBS="$LIBS $XNET_LIBS" need_xnet=yes fi ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $need_xnet" >&5 printf "%s\n" "$need_xnet" >&6; } if test "x$host_vendor" = "xibm" -a "x$host_os" = "xopenedition" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing res_init" >&5 printf %s "checking for library containing res_init... 
" >&6; } if test ${ac_cv_search_res_init+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char res_init (void); int main (void) { return res_init (); ; return 0; } _ACEOF for ac_lib in '' resolv do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO" then : ac_cv_search_res_init=$ac_res fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext if test ${ac_cv_search_res_init+y} then : break fi done if test ${ac_cv_search_res_init+y} then : else case e in #( e) ac_cv_search_res_init=no ;; esac fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_res_init" >&5 printf "%s\n" "$ac_cv_search_res_init" >&6; } ac_res=$ac_cv_search_res_init if test "$ac_res" != no then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" printf "%s\n" "#define CARES_USE_LIBRESOLV 1" >>confdefs.h else case e in #( e) as_fn_error $? "Unable to find libresolv which is required for z/OS" "$LINENO" 5 ;; esac fi fi if test "x$host_vendor" = "xapple" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for iOS minimum version 10 or later" >&5 printf %s "checking for iOS minimum version 10 or later... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include int main (void) { #if TARGET_OS_IPHONE == 0 || __IPHONE_OS_VERSION_MIN_REQUIRED < 100000 #error Not iOS 10 or later #endif return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } ac_cv_ios_10="yes" else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi if test "x$host_vendor" = "xapple" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for macOS minimum version 10.12 or later" >&5 printf %s "checking for macOS minimum version 10.12 or later... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include int main (void) { #ifndef MAC_OS_X_VERSION_10_12 # define MAC_OS_X_VERSION_10_12 101200 #endif #if MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_12 #error Not macOS 10.12 or later #endif return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } ac_cv_macos_10_12="yes" else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to use libgcc" >&5 printf %s "checking whether to use libgcc... " >&6; } # Check whether --enable-libgcc was given. 
if test ${enable_libgcc+y} then : enableval=$enable_libgcc; case "$enableval" in yes) LIBS="$LIBS -lgcc" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } ;; *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } ;; esac else case e in #( e) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } ;; esac fi ac_fn_c_check_header_compile "$LINENO" "malloc.h" "ac_cv_header_malloc_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_malloc_h" = xyes then : printf "%s\n" "#define HAVE_MALLOC_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "memory.h" "ac_cv_header_memory_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_memory_h" = xyes then : printf "%s\n" "#define HAVE_MEMORY_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "AvailabilityMacros.h" "ac_cv_header_AvailabilityMacros_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_AvailabilityMacros_h" = xyes then : printf "%s\n" "#define HAVE_AVAILABILITYMACROS_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/types.h" "ac_cv_header_sys_types_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_types_h" = xyes then : printf "%s\n" "#define HAVE_SYS_TYPES_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/time.h" "ac_cv_header_sys_time_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_time_h" = xyes then : printf "%s\n" "#define HAVE_SYS_TIME_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/select.h" "ac_cv_header_sys_select_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_select_h" = xyes then : printf "%s\n" "#define HAVE_SYS_SELECT_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/socket.h" "ac_cv_header_sys_socket_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_socket_h" = xyes then : printf "%s\n" "#define HAVE_SYS_SOCKET_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/filio.h" "ac_cv_header_sys_filio_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_filio_h" = xyes 
then : printf "%s\n" "#define HAVE_SYS_FILIO_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/ioctl.h" "ac_cv_header_sys_ioctl_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_ioctl_h" = xyes then : printf "%s\n" "#define HAVE_SYS_IOCTL_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/param.h" "ac_cv_header_sys_param_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_param_h" = xyes then : printf "%s\n" "#define HAVE_SYS_PARAM_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/uio.h" "ac_cv_header_sys_uio_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_uio_h" = xyes then : printf "%s\n" "#define HAVE_SYS_UIO_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/random.h" "ac_cv_header_sys_random_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_random_h" = xyes then : printf "%s\n" "#define HAVE_SYS_RANDOM_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/event.h" "ac_cv_header_sys_event_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_event_h" = xyes then : printf "%s\n" "#define HAVE_SYS_EVENT_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "sys/epoll.h" "ac_cv_header_sys_epoll_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_sys_epoll_h" = xyes then : printf "%s\n" "#define HAVE_SYS_EPOLL_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "assert.h" "ac_cv_header_assert_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_assert_h" = xyes then : printf "%s\n" "#define HAVE_ASSERT_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "iphlpapi.h" "ac_cv_header_iphlpapi_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_iphlpapi_h" = xyes then : printf "%s\n" "#define HAVE_IPHLPAPI_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "netioapi.h" "ac_cv_header_netioapi_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_netioapi_h" = 
xyes then : printf "%s\n" "#define HAVE_NETIOAPI_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "netdb.h" "ac_cv_header_netdb_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_netdb_h" = xyes then : printf "%s\n" "#define HAVE_NETDB_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "netinet/in.h" "ac_cv_header_netinet_in_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_netinet_in_h" = xyes then : printf "%s\n" "#define HAVE_NETINET_IN_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "netinet6/in6.h" "ac_cv_header_netinet6_in6_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_netinet6_in6_h" = xyes then : printf "%s\n" "#define HAVE_NETINET6_IN6_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "netinet/tcp.h" "ac_cv_header_netinet_tcp_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_netinet_tcp_h" = xyes then : printf "%s\n" "#define HAVE_NETINET_TCP_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "net/if.h" "ac_cv_header_net_if_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_net_if_h" = xyes then : printf "%s\n" "#define HAVE_NET_IF_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "ifaddrs.h" "ac_cv_header_ifaddrs_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_ifaddrs_h" = xyes then : printf "%s\n" "#define HAVE_IFADDRS_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "fcntl.h" "ac_cv_header_fcntl_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_fcntl_h" = xyes then : printf "%s\n" "#define HAVE_FCNTL_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "errno.h" "ac_cv_header_errno_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_errno_h" = xyes then : printf "%s\n" "#define HAVE_ERRNO_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "socket.h" "ac_cv_header_socket_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_socket_h" = xyes then : printf "%s\n" 
"#define HAVE_SOCKET_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "strings.h" "ac_cv_header_strings_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_strings_h" = xyes then : printf "%s\n" "#define HAVE_STRINGS_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "stdbool.h" "ac_cv_header_stdbool_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_stdbool_h" = xyes then : printf "%s\n" "#define HAVE_STDBOOL_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "time.h" "ac_cv_header_time_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_time_h" = xyes then : printf "%s\n" "#define HAVE_TIME_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "poll.h" "ac_cv_header_poll_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_poll_h" = xyes then : printf "%s\n" "#define HAVE_POLL_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "limits.h" "ac_cv_header_limits_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_limits_h" = xyes then : printf "%s\n" "#define HAVE_LIMITS_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "arpa/nameser.h" "ac_cv_header_arpa_nameser_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_arpa_nameser_h" = xyes then : printf "%s\n" "#define HAVE_ARPA_NAMESER_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "arpa/nameser_compat.h" "ac_cv_header_arpa_nameser_compat_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_arpa_nameser_compat_h" = xyes then : printf "%s\n" "#define HAVE_ARPA_NAMESER_COMPAT_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "arpa/inet.h" "ac_cv_header_arpa_inet_h" " #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif #ifdef HAVE_ARPA_NAMESER_H #include #endif #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif " if test "x$ac_cv_header_arpa_inet_h" = xyes then : printf "%s\n" "#define HAVE_ARPA_INET_H 1" >>confdefs.h fi cares_all_includes=" #include #include #ifdef HAVE_AVAILABILITYMACROS_H # include #endif #ifdef HAVE_SYS_UIO_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_TCP_H # include #endif #ifdef HAVE_SYS_FILIO_H # include #endif #ifdef HAVE_SYS_IOCTL_H # include #endif #ifdef HAVE_UNISTD_H # include #endif #ifdef 
HAVE_STRING_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_TIME_H # include #endif #ifdef HAVE_SYS_TIME_H # include #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef HAVE_SYS_RANDOM_H # include #endif #ifdef HAVE_SYS_EVENT_H # include #endif #ifdef HAVE_SYS_EPOLL_H # include #endif #ifdef HAVE_SYS_SOCKET_H # include #endif #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_FCNTL_H # include #endif #ifdef HAVE_POLL_H # include #endif #ifdef HAVE_NET_IF_H # include #endif #ifdef HAVE_IFADDRS_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETINET_TCP_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_RESOLV_H # include #endif #ifdef HAVE_IPHLPAPI_H # include #endif #ifdef HAVE_NETIOAPI_H # include #endif #ifdef HAVE_WINSOCK2_H # include #endif #ifdef HAVE_WS2IPDEF_H # include #endif #ifdef HAVE_WS2TCPIP_H # include #endif #ifdef HAVE_WINDOWS_H # include #endif " { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $CC options needed to detect all undeclared functions" >&5 printf %s "checking for $CC options needed to detect all undeclared functions... " >&6; } if test ${ac_cv_c_undeclared_builtin_options+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_CFLAGS=$CFLAGS ac_cv_c_undeclared_builtin_options='cannot detect' for ac_arg in '' -fno-builtin; do CFLAGS="$ac_save_CFLAGS $ac_arg" # This test program should *not* compile successfully. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main (void) { (void) strchr; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : else case e in #( e) # This test program should compile successfully. # No library function is consistently available on # freestanding implementations, so test against a dummy # declaration. Include always-available headers on the # off chance that they somehow elicit warnings. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include #include extern void ac_decl (int, char *); int main (void) { (void) ac_decl (0, (char *) 0); (void) ac_decl; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO" then : if test x"$ac_arg" = x then : ac_cv_c_undeclared_builtin_options='none needed' else case e in #( e) ac_cv_c_undeclared_builtin_options=$ac_arg ;; esac fi break fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext done CFLAGS=$ac_save_CFLAGS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_undeclared_builtin_options" >&5 printf "%s\n" "$ac_cv_c_undeclared_builtin_options" >&6; } case $ac_cv_c_undeclared_builtin_options in #( 'cannot detect') : { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? 
"cannot make $CC report undeclared builtins See 'config.log' for more details" "$LINENO" 5; } ;; #( 'none needed') : ac_c_undeclared_builtin_options='' ;; #( *) : ac_c_undeclared_builtin_options=$ac_cv_c_undeclared_builtin_options ;; esac ac_fn_check_decl "$LINENO" "HAVE_ARPA_NAMESER_H" "ac_cv_have_decl_HAVE_ARPA_NAMESER_H" "$ac_includes_default" "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_HAVE_ARPA_NAMESER_H" = xyes then : cat >>confdefs.h <<_EOF #define CARES_HAVE_ARPA_NAMESER_H 1 _EOF fi ac_fn_check_decl "$LINENO" "HAVE_ARPA_NAMESER_COMPAT_H" "ac_cv_have_decl_HAVE_ARPA_NAMESER_COMPAT_H" "$ac_includes_default" "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_HAVE_ARPA_NAMESER_COMPAT_H" = xyes then : cat >>confdefs.h <<_EOF #define CARES_HAVE_ARPA_NAMESER_COMPAT_H 1 _EOF fi ac_fn_c_check_type "$LINENO" "long long" "ac_cv_type_long_long" "$ac_includes_default" if test "x$ac_cv_type_long_long" = xyes then : printf "%s\n" "#define HAVE_LONGLONG 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "ssize_t" "ac_cv_type_ssize_t" "$ac_includes_default" if test "x$ac_cv_type_ssize_t" = xyes then : CARES_TYPEOF_ARES_SSIZE_T=ssize_t else case e in #( e) CARES_TYPEOF_ARES_SSIZE_T=int ;; esac fi printf "%s\n" "#define CARES_TYPEOF_ARES_SSIZE_T ${CARES_TYPEOF_ARES_SSIZE_T}" >>confdefs.h ac_fn_c_check_type "$LINENO" "socklen_t" "ac_cv_type_socklen_t" "$cares_all_includes " if test "x$ac_cv_type_socklen_t" = xyes then : printf "%s\n" "#define HAVE_SOCKLEN_T /**/" >>confdefs.h cat >>confdefs.h <<_EOF #define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t _EOF else case e in #( e) cat >>confdefs.h <<_EOF #define CARES_TYPEOF_ARES_SOCKLEN_T int _EOF ;; esac fi ac_fn_c_check_type "$LINENO" "SOCKET" "ac_cv_type_SOCKET" "$cares_all_includes " if test "x$ac_cv_type_SOCKET" = xyes then : fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime" >&5 printf %s "checking for library containing clock_gettime... " >&6; } if test ${ac_cv_search_clock_gettime+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). 
*/ #ifdef __cplusplus extern "C" #endif char clock_gettime (void); int main (void) { return clock_gettime (); ; return 0; } _ACEOF for ac_lib in '' rt posix4 do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO" then : ac_cv_search_clock_gettime=$ac_res fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext if test ${ac_cv_search_clock_gettime+y} then : break fi done if test ${ac_cv_search_clock_gettime+y} then : else case e in #( e) ac_cv_search_clock_gettime=no ;; esac fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime" >&5 printf "%s\n" "$ac_cv_search_clock_gettime" >&6; } ac_res=$ac_cv_search_clock_gettime if test "$ac_res" != no then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" fi ac_fn_check_decl "$LINENO" "recv" "ac_cv_have_decl_recv" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_recv" = xyes then : printf "%s\n" "#define HAVE_RECV 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "recvfrom" "ac_cv_have_decl_recvfrom" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_recvfrom" = xyes then : printf "%s\n" "#define HAVE_RECVFROM 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "send" "ac_cv_have_decl_send" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_send" = xyes then : printf "%s\n" "#define HAVE_SEND 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "getnameinfo" "ac_cv_have_decl_getnameinfo" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_getnameinfo" = xyes then : printf "%s\n" "#define HAVE_GETNAMEINFO 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "gethostname" "ac_cv_have_decl_gethostname" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_gethostname" = xyes then : printf "%s\n" "#define HAVE_GETHOSTNAME 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "connect" "ac_cv_have_decl_connect" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_connect" = xyes then : printf "%s\n" "#define HAVE_CONNECT 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "connectx" "ac_cv_have_decl_connectx" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_connectx" = xyes then : printf "%s\n" "#define HAVE_CONNECTX 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "closesocket" "ac_cv_have_decl_closesocket" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_closesocket" = xyes then : printf "%s\n" "#define HAVE_CLOSESOCKET 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "CloseSocket" "ac_cv_have_decl_CloseSocket" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_CloseSocket" = xyes then : printf "%s\n" "#define HAVE_CLOSESOCKET_CAMEL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "fcntl" "ac_cv_have_decl_fcntl" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_fcntl" = xyes then : printf "%s\n" "#define HAVE_FCNTL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "getenv" "ac_cv_have_decl_getenv" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_getenv" = xyes then : printf "%s\n" "#define HAVE_GETENV 1" >>confdefs.h 
fi ac_fn_check_decl "$LINENO" "gethostname" "ac_cv_have_decl_gethostname" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_gethostname" = xyes then : printf "%s\n" "#define HAVE_GETHOSTNAME 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "getrandom" "ac_cv_have_decl_getrandom" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_getrandom" = xyes then : printf "%s\n" "#define HAVE_GETRANDOM 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "getservbyport_r" "ac_cv_have_decl_getservbyport_r" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_getservbyport_r" = xyes then : printf "%s\n" "#define HAVE_GETSERVBYPORT_R 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "inet_net_pton" "ac_cv_have_decl_inet_net_pton" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_inet_net_pton" = xyes then : printf "%s\n" "#define HAVE_INET_NET_PTON 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "inet_ntop" "ac_cv_have_decl_inet_ntop" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_inet_ntop" = xyes then : printf "%s\n" "#define HAVE_INET_NTOP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "inet_pton" "ac_cv_have_decl_inet_pton" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_inet_pton" = xyes then : printf "%s\n" "#define HAVE_INET_PTON 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "ioctl" "ac_cv_have_decl_ioctl" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_ioctl" = xyes then : printf "%s\n" "#define HAVE_IOCTL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "ioctlsocket" "ac_cv_have_decl_ioctlsocket" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_ioctlsocket" = xyes then : printf "%s\n" "#define HAVE_IOCTLSOCKET 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "IoctlSocket" "ac_cv_have_decl_IoctlSocket" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_IoctlSocket" = xyes then : printf "%s\n" "#define HAVE_IOCTLSOCKET_CAMEL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "setsockopt" "ac_cv_have_decl_setsockopt" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_setsockopt" = xyes then : printf "%s\n" "#define HAVE_SETSOCKOPT 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "socket" "ac_cv_have_decl_socket" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_socket" = xyes then : printf "%s\n" "#define HAVE_SOCKET 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "strcasecmp" "ac_cv_have_decl_strcasecmp" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_strcasecmp" = xyes then : printf "%s\n" "#define HAVE_STRCASECMP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "strdup" "ac_cv_have_decl_strdup" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_strdup" = xyes then : printf "%s\n" "#define HAVE_STRDUP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "stricmp" "ac_cv_have_decl_stricmp" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_stricmp" = xyes then : printf "%s\n" "#define HAVE_STRICMP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "strncasecmp" "ac_cv_have_decl_strncasecmp" "$cares_all_includes " 
"$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_strncasecmp" = xyes then : printf "%s\n" "#define HAVE_STRNCASECMP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "strncmpi" "ac_cv_have_decl_strncmpi" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_strncmpi" = xyes then : printf "%s\n" "#define HAVE_STRNCMPI 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "strnicmp" "ac_cv_have_decl_strnicmp" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_strnicmp" = xyes then : printf "%s\n" "#define HAVE_STRNICMP 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "writev" "ac_cv_have_decl_writev" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_writev" = xyes then : printf "%s\n" "#define HAVE_WRITEV 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "arc4random_buf" "ac_cv_have_decl_arc4random_buf" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_arc4random_buf" = xyes then : printf "%s\n" "#define HAVE_ARC4RANDOM_BUF 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "stat" "ac_cv_have_decl_stat" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_stat" = xyes then : printf "%s\n" "#define HAVE_STAT 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "gettimeofday" "ac_cv_have_decl_gettimeofday" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_gettimeofday" = xyes then : printf "%s\n" "#define HAVE_GETTIMEOFDAY 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "clock_gettime" "ac_cv_have_decl_clock_gettime" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_clock_gettime" = xyes then : printf "%s\n" "#define HAVE_CLOCK_GETTIME 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "if_indextoname" "ac_cv_have_decl_if_indextoname" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_if_indextoname" = xyes then : printf "%s\n" "#define HAVE_IF_INDEXTONAME 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "if_nametoindex" "ac_cv_have_decl_if_nametoindex" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_if_nametoindex" = xyes then : printf "%s\n" "#define HAVE_IF_NAMETOINDEX 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "getifaddrs" "ac_cv_have_decl_getifaddrs" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_getifaddrs" = xyes then : printf "%s\n" "#define HAVE_GETIFADDRS 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "poll" "ac_cv_have_decl_poll" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_poll" = xyes then : printf "%s\n" "#define HAVE_POLL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "pipe" "ac_cv_have_decl_pipe" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_pipe" = xyes then : printf "%s\n" "#define HAVE_PIPE 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "pipe2" "ac_cv_have_decl_pipe2" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_pipe2" = xyes then : printf "%s\n" "#define HAVE_PIPE2 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "kqueue" "ac_cv_have_decl_kqueue" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_kqueue" = xyes then : printf "%s\n" "#define HAVE_KQUEUE 1" >>confdefs.h fi 
ac_fn_check_decl "$LINENO" "epoll_create1" "ac_cv_have_decl_epoll_create1" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_epoll_create1" = xyes then : printf "%s\n" "#define HAVE_EPOLL 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "ConvertInterfaceIndexToLuid" "ac_cv_have_decl_ConvertInterfaceIndexToLuid" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_ConvertInterfaceIndexToLuid" = xyes then : printf "%s\n" "#define HAVE_CONVERTINTERFACEINDEXTOLUID 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "ConvertInterfaceLuidToNameA" "ac_cv_have_decl_ConvertInterfaceLuidToNameA" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_ConvertInterfaceLuidToNameA" = xyes then : printf "%s\n" "#define HAVE_CONVERTINTERFACELUIDTONAMEA 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "NotifyIpInterfaceChange" "ac_cv_have_decl_NotifyIpInterfaceChange" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_NotifyIpInterfaceChange" = xyes then : printf "%s\n" "#define HAVE_NOTIFYIPINTERFACECHANGE 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "RegisterWaitForSingleObject" "ac_cv_have_decl_RegisterWaitForSingleObject" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_RegisterWaitForSingleObject" = xyes then : printf "%s\n" "#define HAVE_REGISTERWAITFORSINGLEOBJECT 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "__system_property_get" "ac_cv_have_decl___system_property_get" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl___system_property_get" = xyes then : printf "%s\n" "#define HAVE___SYSTEM_PROPERTY_GET 1" >>confdefs.h fi if test "x$ac_cv_type_ssize_t" = "xyes" -a "x$ac_cv_type_socklen_t" = "xyes" -a "x$ac_cv_native_windows" != "xyes" ; then recvfrom_type_retv="ssize_t" recvfrom_type_arg3="size_t" else recvfrom_type_retv="int" recvfrom_type_arg3="int" fi if test "x$ac_cv_type_SOCKET" = "xyes" ; then recvfrom_type_arg1="SOCKET" else recvfrom_type_arg1="int" fi if test "x$ac_cv_type_socklen_t" = "xyes" ; then recvfrom_type_arg6="socklen_t *" getnameinfo_type_arg2="socklen_t" getnameinfo_type_arg46="socklen_t" else recvfrom_type_arg6="int *" getnameinfo_type_arg2="int" getnameinfo_type_arg46="int" fi if test "x$ac_cv_native_windows" = "xyes" ; then recv_type_arg2="char *" else recv_type_arg2="void *" fi recv_type_retv=${recvfrom_type_retv} send_type_retv=${recvfrom_type_retv} recv_type_arg1=${recvfrom_type_arg1} recvfrom_type_arg2=${recv_type_arg2} send_type_arg1=${recvfrom_type_arg1} recv_type_arg3=${recvfrom_type_arg3} send_type_arg3=${recvfrom_type_arg3} gethostname_type_arg2=${recvfrom_type_arg3} recvfrom_qual_arg5= recvfrom_type_arg4=int recvfrom_type_arg5="struct sockaddr *" recv_type_arg4=int getnameinfo_type_arg1="struct sockaddr *" getnameinfo_type_arg7=int send_type_arg2="const void *" send_type_arg4=int printf "%s\n" "#define RECVFROM_TYPE_RETV ${recvfrom_type_retv} " >>confdefs.h printf "%s\n" "#define RECVFROM_TYPE_ARG1 ${recvfrom_type_arg1} " >>confdefs.h printf "%s\n" "#define RECVFROM_TYPE_ARG2 ${recvfrom_type_arg2} " >>confdefs.h printf "%s\n" "#define RECVFROM_TYPE_ARG3 ${recvfrom_type_arg3} " >>confdefs.h printf "%s\n" "#define RECVFROM_TYPE_ARG4 ${recvfrom_type_arg4} " >>confdefs.h printf "%s\n" "#define RECVFROM_TYPE_ARG5 ${recvfrom_type_arg5} " >>confdefs.h printf "%s\n" "#define RECVFROM_QUAL_ARG5 ${recvfrom_qual_arg5}" >>confdefs.h 
printf "%s\n" "#define RECV_TYPE_RETV ${recv_type_retv} " >>confdefs.h printf "%s\n" "#define RECV_TYPE_ARG1 ${recv_type_arg1} " >>confdefs.h printf "%s\n" "#define RECV_TYPE_ARG2 ${recv_type_arg2} " >>confdefs.h printf "%s\n" "#define RECV_TYPE_ARG3 ${recv_type_arg3} " >>confdefs.h printf "%s\n" "#define RECV_TYPE_ARG4 ${recv_type_arg4} " >>confdefs.h printf "%s\n" "#define SEND_TYPE_RETV ${send_type_retv} " >>confdefs.h printf "%s\n" "#define SEND_TYPE_ARG1 ${send_type_arg1} " >>confdefs.h printf "%s\n" "#define SEND_TYPE_ARG2 ${send_type_arg2} " >>confdefs.h printf "%s\n" "#define SEND_TYPE_ARG3 ${send_type_arg3} " >>confdefs.h printf "%s\n" "#define SEND_TYPE_ARG4 ${send_type_arg4} " >>confdefs.h printf "%s\n" "#define GETNAMEINFO_TYPE_ARG1 ${getnameinfo_type_arg1} " >>confdefs.h printf "%s\n" "#define GETNAMEINFO_TYPE_ARG2 ${getnameinfo_type_arg2} " >>confdefs.h printf "%s\n" "#define GETNAMEINFO_TYPE_ARG7 ${getnameinfo_type_arg7} " >>confdefs.h printf "%s\n" "#define GETNAMEINFO_TYPE_ARG46 ${getnameinfo_type_arg46} " >>confdefs.h printf "%s\n" "#define GETHOSTNAME_TYPE_ARG2 ${gethostname_type_arg2} " >>confdefs.h if test "$ac_cv_have_decl_getservbyport_r" = "yes" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking number of arguments for getservbyport_r()" >&5 printf %s "checking number of arguments for getservbyport_r()... " >&6; } getservbyport_r_args=6 case $host_os in solaris*) getservbyport_r_args=5 ;; aix*|openbsd*) getservbyport_r_args=4 ;; esac { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $getservbyport_r_args" >&5 printf "%s\n" "$getservbyport_r_args" >&6; } printf "%s\n" "#define GETSERVBYPORT_R_ARGS $getservbyport_r_args " >>confdefs.h fi if test "$ac_cv_have_decl_getservbyname_r" = "yes" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking number of arguments for getservbyname_r()" >&5 printf %s "checking number of arguments for getservbyname_r()... 
" >&6; } getservbyname_r_args=6 case $host_os in solaris*) getservbyname_r_args=5 ;; aix*|openbsd*) getservbyname_r_args=4 ;; esac printf "%s\n" "#define GETSERVBYNAME_R_ARGS $getservbyname_r_args " >>confdefs.h { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $getservbyname_r_args" >&5 printf "%s\n" "$getservbyname_r_args" >&6; } fi ac_fn_c_check_type "$LINENO" "size_t" "ac_cv_type_size_t" "$ac_includes_default" if test "x$ac_cv_type_size_t" = xyes then : else case e in #( e) printf "%s\n" "#define size_t unsigned int" >>confdefs.h ;; esac fi ac_fn_check_decl "$LINENO" "AF_INET6" "ac_cv_have_decl_AF_INET6" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_AF_INET6" = xyes then : printf "%s\n" "#define HAVE_AF_INET6 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "PF_INET6" "ac_cv_have_decl_PF_INET6" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_PF_INET6" = xyes then : printf "%s\n" "#define HAVE_PF_INET6 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "struct in6_addr" "ac_cv_type_struct_in6_addr" "$cares_all_includes " if test "x$ac_cv_type_struct_in6_addr" = xyes then : printf "%s\n" "#define HAVE_STRUCT_IN6_ADDR 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "struct sockaddr_in6" "ac_cv_type_struct_sockaddr_in6" "$cares_all_includes " if test "x$ac_cv_type_struct_sockaddr_in6" = xyes then : printf "%s\n" "#define HAVE_STRUCT_SOCKADDR_IN6 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "struct sockaddr_storage" "ac_cv_type_struct_sockaddr_storage" "$cares_all_includes " if test "x$ac_cv_type_struct_sockaddr_storage" = xyes then : printf "%s\n" "#define HAVE_STRUCT_SOCKADDR_STORAGE 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "struct addrinfo" "ac_cv_type_struct_addrinfo" "$cares_all_includes " if test "x$ac_cv_type_struct_addrinfo" = xyes then : printf "%s\n" "#define HAVE_STRUCT_ADDRINFO 1" >>confdefs.h fi ac_fn_c_check_type "$LINENO" "struct timeval" "ac_cv_type_struct_timeval" "$cares_all_includes " if test "x$ac_cv_type_struct_timeval" = xyes then : printf "%s\n" "#define HAVE_STRUCT_TIMEVAL 1" >>confdefs.h fi ac_fn_c_check_member "$LINENO" "struct sockaddr_in6" "sin6_scope_id" "ac_cv_member_struct_sockaddr_in6_sin6_scope_id" "$cares_all_includes " if test "x$ac_cv_member_struct_sockaddr_in6_sin6_scope_id" = xyes then : printf "%s\n" "#define HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID 1" >>confdefs.h fi ac_fn_c_check_member "$LINENO" "struct addrinfo" "ai_flags" "ac_cv_member_struct_addrinfo_ai_flags" "$cares_all_includes " if test "x$ac_cv_member_struct_addrinfo_ai_flags" = xyes then : printf "%s\n" "#define HAVE_STRUCT_ADDRINFO_AI_FLAGS 1" >>confdefs.h fi ac_fn_check_decl "$LINENO" "FIONBIO" "ac_cv_have_decl_FIONBIO" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_FIONBIO" = xyes then : fi ac_fn_check_decl "$LINENO" "O_NONBLOCK" "ac_cv_have_decl_O_NONBLOCK" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_O_NONBLOCK" = xyes then : fi ac_fn_check_decl "$LINENO" "SO_NONBLOCK" "ac_cv_have_decl_SO_NONBLOCK" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_SO_NONBLOCK" = xyes then : fi ac_fn_check_decl "$LINENO" "MSG_NOSIGNAL" "ac_cv_have_decl_MSG_NOSIGNAL" "$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_MSG_NOSIGNAL" = xyes then : fi ac_fn_check_decl "$LINENO" "CLOCK_MONOTONIC" "ac_cv_have_decl_CLOCK_MONOTONIC" 
"$cares_all_includes " "$ac_c_undeclared_builtin_options" "CFLAGS" if test "x$ac_cv_have_decl_CLOCK_MONOTONIC" = xyes then : fi if test "$ac_cv_have_decl_CLOCK_MONOTONIC" = "yes" -a "$ac_cv_have_decl_clock_gettime" = "yes" ; then printf "%s\n" "#define HAVE_CLOCK_GETTIME_MONOTONIC 1 " >>confdefs.h fi if test "$ac_cv_have_decl_FIONBIO" = "yes" -a "$ac_cv_have_decl_ioctl" = "yes" ; then printf "%s\n" "#define HAVE_IOCTL_FIONBIO 1 " >>confdefs.h fi if test "$ac_cv_have_decl_FIONBIO" = "yes" -a "$ac_cv_have_decl_ioctlsocket" = "yes" ; then printf "%s\n" "#define HAVE_IOCTLSOCKET_FIONBIO 1 " >>confdefs.h fi if test "$ac_cv_have_decl_SO_NONBLOCK" = "yes" -a "$ac_cv_have_decl_setsockopt" = "yes" ; then printf "%s\n" "#define HAVE_SETSOCKOPT_SO_NONBLOCK 1 " >>confdefs.h fi if test "$ac_cv_have_decl_O_NONBLOCK" = "yes" -a "$ac_cv_have_decl_fcntl" = "yes" ; then printf "%s\n" "#define HAVE_FCNTL_O_NONBLOCK 1 " >>confdefs.h fi if test "x$ac_cv_header_sys_types_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_SYS_TYPES_H 1 _EOF fi if test "x$ac_cv_header_sys_socket_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_SYS_SOCKET_H 1 _EOF fi if test "x$ac_cv_header_sys_select_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_SYS_SELECT_H 1 _EOF fi if test "x$ac_cv_header_ws2tcpip_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_WS2TCPIP_H 1 _EOF fi if test "x$ac_cv_header_winsock2_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_WINSOCK2_H 1 _EOF fi if test "x$ac_cv_header_windows_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_WINDOWS_H 1 _EOF fi if test "x$ac_cv_header_arpa_nameser_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_ARPA_NAMESER_H 1 _EOF fi if test "x$ac_cv_header_arpa_nameser_compa_h" = "xyes" ; then cat >>confdefs.h <<_EOF #define CARES_HAVE_ARPA_NAMESER_COMPA_H 1 _EOF fi if test "${CARES_THREADS}" = "yes" -a "x${ac_cv_native_windows}" != "xyes" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for egrep -e" >&5 printf %s "checking for egrep -e... " >&6; } if test ${ac_cv_path_EGREP_TRADITIONAL+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -z "$EGREP_TRADITIONAL"; then ac_path_EGREP_TRADITIONAL_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in grep ggrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP_TRADITIONAL="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP_TRADITIONAL" || continue # Check for GNU ac_path_EGREP_TRADITIONAL and select it if it is found. 
# Check for GNU $ac_path_EGREP_TRADITIONAL case `"$ac_path_EGREP_TRADITIONAL" --version 2>&1` in #( *GNU*) ac_cv_path_EGREP_TRADITIONAL="$ac_path_EGREP_TRADITIONAL" ac_path_EGREP_TRADITIONAL_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'EGREP_TRADITIONAL' >> "conftest.nl" "$ac_path_EGREP_TRADITIONAL" -E 'EGR(EP|AC)_TRADITIONAL$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_TRADITIONAL_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP_TRADITIONAL="$ac_path_EGREP_TRADITIONAL" ac_path_EGREP_TRADITIONAL_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_TRADITIONAL_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP_TRADITIONAL"; then : fi else ac_cv_path_EGREP_TRADITIONAL=$EGREP_TRADITIONAL fi if test "$ac_cv_path_EGREP_TRADITIONAL" then : ac_cv_path_EGREP_TRADITIONAL="$ac_cv_path_EGREP_TRADITIONAL -E" else case e in #( e) if test -z "$EGREP_TRADITIONAL"; then ac_path_EGREP_TRADITIONAL_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_prog in egrep do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP_TRADITIONAL="$as_dir$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP_TRADITIONAL" || continue # Check for GNU ac_path_EGREP_TRADITIONAL and select it if it is found. # Check for GNU $ac_path_EGREP_TRADITIONAL case `"$ac_path_EGREP_TRADITIONAL" --version 2>&1` in #( *GNU*) ac_cv_path_EGREP_TRADITIONAL="$ac_path_EGREP_TRADITIONAL" ac_path_EGREP_TRADITIONAL_found=:;; #( *) ac_count=0 printf %s 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" printf "%s\n" 'EGREP_TRADITIONAL' >> "conftest.nl" "$ac_path_EGREP_TRADITIONAL" 'EGR(EP|AC)_TRADITIONAL$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_TRADITIONAL_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP_TRADITIONAL="$ac_path_EGREP_TRADITIONAL" ac_path_EGREP_TRADITIONAL_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_TRADITIONAL_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP_TRADITIONAL"; then as_fn_error $? 
"no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP_TRADITIONAL=$EGREP_TRADITIONAL fi ;; esac fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP_TRADITIONAL" >&5 printf "%s\n" "$ac_cv_path_EGREP_TRADITIONAL" >&6; } EGREP_TRADITIONAL=$ac_cv_path_EGREP_TRADITIONAL ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on Tru64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then ax_pthread_save_CC="$CC" ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" if test "x$PTHREAD_CC" != "x" then : CC="$PTHREAD_CC" fi if test "x$PTHREAD_CXX" != "x" then : CXX="$PTHREAD_CXX" fi CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS" >&5 printf %s "checking for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char pthread_join (void); int main (void) { return pthread_join (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5 printf "%s\n" "$ax_pthread_ok" >&6; } if test "x$ax_pthread_ok" = "xno"; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi CC="$ax_pthread_save_CC" CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items with a "," contain both # C compiler flags (before ",") and linker flags (after ","). Other items # starting with a "-" are C compiler flags, and remaining items are # library names, except for "none" which indicates that we try without # any flags at all, and "pthread-config" which is a program returning # the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 # (Note: HP C rejects this with "bad form for `-t' option") # -pthreads: Solaris/gcc (Note: HP C also rejects) # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads and # -D_REENTRANT too), HP C (must be checked before -lpthread, which # is present but should not be used directly; and before -mthreads, # because the compiler interprets this as "-mt" + "-hreads") # -mthreads: Mingw32/gcc, Lynx/gcc # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case $host_os in freebsd*) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) ax_pthread_flags="-kthread lthread $ax_pthread_flags" ;; hpux*) # From the cc(1) man page: "[-mt] Sets various -D flags to enable # multi-threading and also sets -lpthread." ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" ;; openedition*) # IBM z/OS requires a feature-test macro to be defined in order to # enable POSIX threads at all, so give the user a hint if this is # not set. (We don't define these ourselves, as they can affect # other portions of the system API in unpredictable ways.) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ # if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) AX_PTHREAD_ZOS_MISSING # endif _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP_TRADITIONAL "AX_PTHREAD_ZOS_MISSING" >/dev/null 2>&1 then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support." >&5 printf "%s\n" "$as_me: WARNING: IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support." >&2;} fi rm -rf conftest* ;; solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (N.B.: The stubs are missing # pthread_cleanup_push, or rather a function called by this macro, # so we could check for that, but who knows whether they'll stub # that too in a future libc.) So we'll check first for the # standard Solaris way of linking pthreads (-mt -lpthread). ax_pthread_flags="-mt,-lpthread pthread $ax_pthread_flags" ;; esac # Are we compiling with Clang? { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC is Clang" >&5 printf %s "checking whether $CC is Clang... " >&6; } if test ${ax_cv_PTHREAD_CLANG+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_CLANG=no # Note that Autoconf sets GCC=yes for Clang as well as GCC if test "x$GCC" = "xyes"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ /* Note: Clang 2.7 lacks __clang_[a-z]+__ */ # if defined(__clang__) && defined(__llvm__) AX_PTHREAD_CC_IS_CLANG # endif _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP_TRADITIONAL "AX_PTHREAD_CC_IS_CLANG" >/dev/null 2>&1 then : ax_cv_PTHREAD_CLANG=yes fi rm -rf conftest* fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_CLANG" >&5 printf "%s\n" "$ax_cv_PTHREAD_CLANG" >&6; } ax_pthread_clang="$ax_cv_PTHREAD_CLANG" # GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) # Note that for GCC and Clang -pthread generally implies -lpthread, # except when -nostdlib is passed. # This is problematic using libtool to build C++ shared libraries with pthread: # [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25460 # [2] https://bugzilla.redhat.com/show_bug.cgi?id=661333 # [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468555 # To solve this, first try -pthread together with -lpthread for GCC if test "x$GCC" = "xyes" then : ax_pthread_flags="-pthread,-lpthread -pthread -pthreads $ax_pthread_flags" fi # Clang takes -pthread (never supported any other flag), but we'll try with -lpthread first if test "x$ax_pthread_clang" = "xyes" then : ax_pthread_flags="-pthread,-lpthread -pthread" fi # The presence of a feature test macro requesting re-entrant function # definitions is, on some systems, a strong hint that pthreads support is # correctly enabled case $host_os in darwin* | hpux* | linux* | osf* | solaris*) ax_pthread_check_macro="_REENTRANT" ;; aix*) ax_pthread_check_macro="_THREAD_SAFE" ;; *) ax_pthread_check_macro="--" ;; esac if test "x$ax_pthread_check_macro" = "x--" then : ax_pthread_check_cond=0 else case e in #( e) ax_pthread_check_cond="!defined($ax_pthread_check_macro)" ;; esac fi if test "x$ax_pthread_ok" = "xno"; then for ax_pthread_try_flag in $ax_pthread_flags; do case $ax_pthread_try_flag in none) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work without any flags" >&5 printf %s "checking whether pthreads work without any flags... " >&6; } ;; *,*) PTHREAD_CFLAGS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\1/"` PTHREAD_LIBS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\2/"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with \"$PTHREAD_CFLAGS\" and \"$PTHREAD_LIBS\"" >&5 printf %s "checking whether pthreads work with \"$PTHREAD_CFLAGS\" and \"$PTHREAD_LIBS\"... " >&6; } ;; -*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with $ax_pthread_try_flag" >&5 printf %s "checking whether pthreads work with $ax_pthread_try_flag... " >&6; } PTHREAD_CFLAGS="$ax_pthread_try_flag" ;; pthread-config) # Extract the first word of "pthread-config", so it can be a program name with args. set dummy pthread-config; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ax_pthread_config+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ax_pthread_config"; then ac_cv_prog_ax_pthread_config="$ax_pthread_config" # Let the user override the test. 
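# The fallback branch below searches $PATH for the pthread-config helper that
# ships with the GNU Pth emulation library; if it is found, its --cflags,
# --ldflags and --libs output is later used to populate PTHREAD_CFLAGS and
# PTHREAD_LIBS.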
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ax_pthread_config="yes" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_ax_pthread_config" && ac_cv_prog_ax_pthread_config="no" fi ;; esac fi ax_pthread_config=$ac_cv_prog_ax_pthread_config if test -n "$ax_pthread_config"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_config" >&5 printf "%s\n" "$ax_pthread_config" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ax_pthread_config" = "xno" then : continue fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for the pthreads library -l$ax_pthread_try_flag" >&5 printf %s "checking for the pthreads library -l$ax_pthread_try_flag... " >&6; } PTHREAD_LIBS="-l$ax_pthread_try_flag" ;; esac ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include # if $ax_pthread_check_cond # error "$ax_pthread_check_macro must be defined" # endif static void *some_global = NULL; static void routine(void *a) { /* To avoid any unused-parameter or unused-but-set-parameter warning. */ some_global = a; } static void *start_routine(void *a) { return a; } int main (void) { pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */ ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5 printf "%s\n" "$ax_pthread_ok" >&6; } if test "x$ax_pthread_ok" = "xyes" then : break fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Clang needs special handling, because older versions handle the -pthread # option in a rather... idiosyncratic way if test "x$ax_pthread_clang" = "xyes"; then # Clang takes -pthread; it has never supported any other flag # (Note 1: This will need to be revisited if a system that Clang # supports has POSIX threads in a separate library. This tends not # to be the way of modern systems, but it's conceivable.) # (Note 2: On some systems, notably Darwin, -pthread is not needed # to get POSIX threads support; the API is always present and # active. We could reasonably leave PTHREAD_CFLAGS empty. 
But # -pthread does define _REENTRANT, and while the Darwin headers # ignore this macro, third-party headers might not.) # However, older versions of Clang make a point of warning the user # that, in an invocation where only linking and no compilation is # taking place, the -pthread option has no effect ("argument unused # during compilation"). They expect -pthread to be passed in only # when source code is being compiled. # # Problem is, this is at odds with the way Automake and most other # C build frameworks function, which is that the same flags used in # compilation (CFLAGS) are also used in linking. Many systems # supported by AX_PTHREAD require exactly this for POSIX threads # support, and in fact it is often not straightforward to specify a # flag that is used only in the compilation phase and not in # linking. Such a scenario is extremely rare in practice. # # Even though use of the -pthread flag in linking would only print # a warning, this can be a nuisance for well-run software projects # that build with -Werror. So if the active version of Clang has # this misfeature, we search for an option to squash it. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether Clang needs flag to prevent \"argument unused\" warning when linking with -pthread" >&5 printf %s "checking whether Clang needs flag to prevent \"argument unused\" warning when linking with -pthread... " >&6; } if test ${ax_cv_PTHREAD_CLANG_NO_WARN_FLAG+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown # Create an alternate version of $ac_link that compiles and # links in two steps (.c -> .o, .o -> exe) instead of one # (.c -> exe), because the warning occurs only in the second # step ax_pthread_save_ac_link="$ac_link" ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' ax_pthread_link_step=`printf "%s\n" "$ac_link" | sed "$ax_pthread_sed"` ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" ax_pthread_save_CFLAGS="$CFLAGS" for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do if test "x$ax_pthread_try" = "xunknown" then : break fi CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" ac_link="$ax_pthread_save_ac_link" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main(void){return 0;} _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_link="$ax_pthread_2step_ac_link" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main(void){return 0;} _ACEOF if ac_fn_c_try_link "$LINENO" then : break fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext done ac_link="$ax_pthread_save_ac_link" CFLAGS="$ax_pthread_save_CFLAGS" if test "x$ax_pthread_try" = "x" then : ax_pthread_try=no fi ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" >&5 printf "%s\n" "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" >&6; } case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in no | unknown) ;; *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; esac fi # $ax_pthread_clang = yes # Various other checks: if test "x$ax_pthread_ok" = "xyes"; then ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for joinable pthread attribute" >&5 printf %s "checking for joinable pthread attribute... " >&6; } if test ${ax_cv_PTHREAD_JOINABLE_ATTR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_JOINABLE_ATTR=unknown for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main (void) { int attr = $ax_pthread_attr; return attr /* ; */ ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext done ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_JOINABLE_ATTR" >&5 printf "%s\n" "$ax_cv_PTHREAD_JOINABLE_ATTR" >&6; } if test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ test "x$ax_pthread_joinable_attr_defined" != "xyes" then : printf "%s\n" "#define PTHREAD_CREATE_JOINABLE $ax_cv_PTHREAD_JOINABLE_ATTR" >>confdefs.h ax_pthread_joinable_attr_defined=yes fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether more special flags are required for pthreads" >&5 printf %s "checking whether more special flags are required for pthreads... " >&6; } if test ${ax_cv_PTHREAD_SPECIAL_FLAGS+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_SPECIAL_FLAGS=no case $host_os in solaris*) ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" ;; esac ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_SPECIAL_FLAGS" >&5 printf "%s\n" "$ax_cv_PTHREAD_SPECIAL_FLAGS" >&6; } if test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ test "x$ax_pthread_special_flags_added" != "xyes" then : PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" ax_pthread_special_flags_added=yes fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for PTHREAD_PRIO_INHERIT" >&5 printf %s "checking for PTHREAD_PRIO_INHERIT... " >&6; } if test ${ax_cv_PTHREAD_PRIO_INHERIT+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include int main (void) { int i = PTHREAD_PRIO_INHERIT; return i; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_cv_PTHREAD_PRIO_INHERIT=yes else case e in #( e) ax_cv_PTHREAD_PRIO_INHERIT=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_PRIO_INHERIT" >&5 printf "%s\n" "$ax_cv_PTHREAD_PRIO_INHERIT" >&6; } if test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ test "x$ax_pthread_prio_inherit_defined" != "xyes" then : printf "%s\n" "#define HAVE_PTHREAD_PRIO_INHERIT 1" >>confdefs.h ax_pthread_prio_inherit_defined=yes fi CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" # More AIX lossage: compile with *_r variant if test "x$GCC" != "xyes"; then case $host_os in aix*) case "x/$CC" in #( x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6) : #handle absolute path differently from PATH based program lookup case "x$CC" in #( x/*) : if as_fn_executable_p ${CC}_r then : PTHREAD_CC="${CC}_r" fi if test "x${CXX}" != "x" then : if as_fn_executable_p ${CXX}_r then : PTHREAD_CXX="${CXX}_r" fi fi ;; #( *) : for ac_prog in ${CC}_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_PTHREAD_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$PTHREAD_CC"; then ac_cv_prog_PTHREAD_CC="$PTHREAD_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_PTHREAD_CC="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi PTHREAD_CC=$ac_cv_prog_PTHREAD_CC if test -n "$PTHREAD_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CC" >&5 printf "%s\n" "$PTHREAD_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$PTHREAD_CC" && break done test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" if test "x${CXX}" != "x" then : for ac_prog in ${CXX}_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_PTHREAD_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$PTHREAD_CXX"; then ac_cv_prog_PTHREAD_CXX="$PTHREAD_CXX" # Let the user override the test. 
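# The PATH search below looks for an "_r"-suffixed compiler driver (cc_r,
# xlc_r, and similar), the thread-safe invocation that AIX toolchains expect,
# and records it in PTHREAD_CC, falling back to the plain $CC when no such
# driver exists.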
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_PTHREAD_CXX="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi PTHREAD_CXX=$ac_cv_prog_PTHREAD_CXX if test -n "$PTHREAD_CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CXX" >&5 printf "%s\n" "$PTHREAD_CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$PTHREAD_CXX" && break done test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" fi ;; esac ;; #( *) : ;; esac ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test "x$ax_pthread_ok" = "xyes"; then : else ax_pthread_ok=no { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: threads requested but not supported" >&5 printf "%s\n" "$as_me: WARNING: threads requested but not supported" >&2;} CARES_THREADS=no fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test "${CARES_THREADS}" = "yes" ; then ac_fn_c_check_header_compile "$LINENO" "pthread.h" "ac_cv_header_pthread_h" "$ac_includes_default" if test "x$ac_cv_header_pthread_h" = xyes then : printf "%s\n" "#define HAVE_PTHREAD_H 1" >>confdefs.h fi ac_fn_c_check_header_compile "$LINENO" "pthread_np.h" "ac_cv_header_pthread_np_h" "$ac_includes_default" if test "x$ac_cv_header_pthread_np_h" = xyes then : printf "%s\n" "#define HAVE_PTHREAD_NP_H 1" >>confdefs.h fi LIBS="$PTHREAD_LIBS $LIBS" AM_CFLAGS="$AM_CFLAGS $PTHREAD_CFLAGS" CC="$PTHREAD_CC" CXX="$PTHREAD_CXX" fi fi if test "${CARES_THREADS}" = "yes" ; then printf "%s\n" "#define CARES_THREADS 1 " >>confdefs.h fi CARES_PRIVATE_LIBS="$LIBS" BUILD_SUBDIRS="include src" if test "x$build_tests" != "xno" -a "x$HAVE_CXX14" = "0" ; then if test "x$build_tests" = "xmaybe" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cannot build tests without a CXX14 compiler" >&5 printf "%s\n" "$as_me: WARNING: cannot build tests without a CXX14 compiler" >&2;} build_tests=no else as_fn_error $? "*** Building tests requires a CXX14 compiler" "$LINENO" 5 fi fi if test "x$build_tests" != "xno" -a "x$cross_compiling" = "xyes" ; then if test "x$build_tests" = "xmaybe" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cannot build tests when cross compiling" >&5 printf "%s\n" "$as_me: WARNING: cannot build tests when cross compiling" >&2;} build_tests=no else as_fn_error $? "*** Tests not supported when cross compiling" "$LINENO" 5 fi fi if test "x$build_tests" != "xno" ; then if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}pkg-config", so it can be a program name with args. set dummy ${ac_tool_prefix}pkg-config; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... 
" >&6; } if test ${ac_cv_path_PKG_CONFIG+y} then : printf %s "(cached) " >&6 else case e in #( e) case $PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_PKG_CONFIG="$PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_path_PKG_CONFIG="$as_dir$ac_word$ac_exec_ext" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac ;; esac fi PKG_CONFIG=$ac_cv_path_PKG_CONFIG if test -n "$PKG_CONFIG"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PKG_CONFIG" >&5 printf "%s\n" "$PKG_CONFIG" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi fi if test -z "$ac_cv_path_PKG_CONFIG"; then ac_pt_PKG_CONFIG=$PKG_CONFIG # Extract the first word of "pkg-config", so it can be a program name with args. set dummy pkg-config; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_path_ac_pt_PKG_CONFIG+y} then : printf %s "(cached) " >&6 else case e in #( e) case $ac_pt_PKG_CONFIG in [\\/]* | ?:[\\/]*) ac_cv_path_ac_pt_PKG_CONFIG="$ac_pt_PKG_CONFIG" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_path_ac_pt_PKG_CONFIG="$as_dir$ac_word$ac_exec_ext" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS ;; esac ;; esac fi ac_pt_PKG_CONFIG=$ac_cv_path_ac_pt_PKG_CONFIG if test -n "$ac_pt_PKG_CONFIG"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_pt_PKG_CONFIG" >&5 printf "%s\n" "$ac_pt_PKG_CONFIG" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ac_pt_PKG_CONFIG" = x; then PKG_CONFIG="" else case $cross_compiling:$ac_tool_warned in yes:) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 printf "%s\n" "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac PKG_CONFIG=$ac_pt_PKG_CONFIG fi else PKG_CONFIG="$ac_cv_path_PKG_CONFIG" fi fi if test -n "$PKG_CONFIG"; then _pkg_min_version=0.9.0 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking pkg-config is at least version $_pkg_min_version" >&5 printf %s "checking pkg-config is at least version $_pkg_min_version... " >&6; } if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } PKG_CONFIG="" fi fi pkg_failed=no { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for gmock" >&5 printf %s "checking for gmock... 
" >&6; } if test -n "$GMOCK_CFLAGS"; then pkg_cv_GMOCK_CFLAGS="$GMOCK_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gmock\""; } >&5 ($PKG_CONFIG --exists --print-errors "gmock") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GMOCK_CFLAGS=`$PKG_CONFIG --cflags "gmock" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$GMOCK_LIBS"; then pkg_cv_GMOCK_LIBS="$GMOCK_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gmock\""; } >&5 ($PKG_CONFIG --exists --print-errors "gmock") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GMOCK_LIBS=`$PKG_CONFIG --libs "gmock" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then GMOCK_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "gmock" 2>&1` else GMOCK_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "gmock" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$GMOCK_PKG_ERRORS" >&5 have_gmock=no elif test $pkg_failed = untried; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } have_gmock=no else GMOCK_CFLAGS=$pkg_cv_GMOCK_CFLAGS GMOCK_LIBS=$pkg_cv_GMOCK_LIBS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } have_gmock=yes fi if test "x$have_gmock" = "xno" ; then if test "x$build_tests" = "xmaybe" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: gmock could not be found, not building tests" >&5 printf "%s\n" "$as_me: WARNING: gmock could not be found, not building tests" >&2;} build_tests=no else as_fn_error $? "tests require gmock" "$LINENO" 5 fi else pkg_failed=no { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for gmock >= 1.12.0" >&5 printf %s "checking for gmock >= 1.12.0... " >&6; } if test -n "$GMOCK112_CFLAGS"; then pkg_cv_GMOCK112_CFLAGS="$GMOCK112_CFLAGS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gmock >= 1.12.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "gmock >= 1.12.0") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GMOCK112_CFLAGS=`$PKG_CONFIG --cflags "gmock >= 1.12.0" 2>/dev/null` test "x$?" != "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test -n "$GMOCK112_LIBS"; then pkg_cv_GMOCK112_LIBS="$GMOCK112_LIBS" elif test -n "$PKG_CONFIG"; then if test -n "$PKG_CONFIG" && \ { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"gmock >= 1.12.0\""; } >&5 ($PKG_CONFIG --exists --print-errors "gmock >= 1.12.0") 2>&5 ac_status=$? printf "%s\n" "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then pkg_cv_GMOCK112_LIBS=`$PKG_CONFIG --libs "gmock >= 1.12.0" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes else pkg_failed=yes fi else pkg_failed=untried fi if test $pkg_failed = yes; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi if test $_pkg_short_errors_supported = yes; then GMOCK112_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "gmock >= 1.12.0" 2>&1` else GMOCK112_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "gmock >= 1.12.0" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$GMOCK112_PKG_ERRORS" >&5 have_gmock_v112=no elif test $pkg_failed = untried; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } have_gmock_v112=no else GMOCK112_CFLAGS=$pkg_cv_GMOCK112_CFLAGS GMOCK112_LIBS=$pkg_cv_GMOCK112_LIBS { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: yes" >&5 printf "%s\n" "yes" >&6; } have_gmock_v112=yes fi if test "x$have_gmock_v112" = "xyes" ; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether user namespaces are supported" >&5 printf %s "checking whether user namespaces are supported... " >&6; } if test ${ax_cv_user_namespace+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test "$cross_compiling" = yes then : ax_cv_user_namespace=no else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #define _GNU_SOURCE #include #include #include #include #include #include #include int userfn(void *d) { usleep(100000); /* synchronize by sleep */ return (getuid() != 0); } char userst[1024*1024]; int main() { char buffer[1024]; int rc, status, fd; pid_t child = clone(userfn, userst + 1024*1024, CLONE_NEWUSER|SIGCHLD, 0); if (child < 0) return 1; snprintf(buffer, sizeof(buffer), "/proc/%d/uid_map", child); fd = open(buffer, O_CREAT|O_WRONLY|O_TRUNC, 0755); snprintf(buffer, sizeof(buffer), "0 %d 1\n", getuid()); write(fd, buffer, strlen(buffer)); close(fd); rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } _ACEOF if ac_fn_c_try_run "$LINENO" then : ax_cv_user_namespace=yes else case e in #( e) ax_cv_user_namespace=no ;; esac fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_user_namespace" >&5 printf "%s\n" "$ax_cv_user_namespace" >&6; } if test "$ax_cv_user_namespace" = yes; then printf "%s\n" "#define HAVE_USER_NAMESPACE 1" >>confdefs.h fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether UTS namespaces are supported" >&5 printf %s "checking whether UTS namespaces are supported... 
" >&6; } if test ${ax_cv_uts_namespace+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test "$cross_compiling" = yes then : ax_cv_uts_namespace=no else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #define _GNU_SOURCE #include #include #include #include #include #include #include #include int utsfn(void *d) { char buffer[1024]; const char *name = "autoconftest"; int rc = sethostname(name, strlen(name)); if (rc != 0) return 1; gethostname(buffer, 1024); return (strcmp(buffer, name) != 0); } char st2[1024*1024]; int fn(void *d) { pid_t child; int rc, status; usleep(100000); /* synchronize by sleep */ if (getuid() != 0) return 1; child = clone(utsfn, st2 + 1024*1024, CLONE_NEWUTS|SIGCHLD, 0); if (child < 0) return 1; rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } char st[1024*1024]; int main() { char buffer[1024]; int rc, status, fd; pid_t child = clone(fn, st + 1024*1024, CLONE_NEWUSER|SIGCHLD, 0); if (child < 0) return 1; snprintf(buffer, sizeof(buffer), "/proc/%d/uid_map", child); fd = open(buffer, O_CREAT|O_WRONLY|O_TRUNC, 0755); snprintf(buffer, sizeof(buffer), "0 %d 1\n", getuid()); write(fd, buffer, strlen(buffer)); close(fd); rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } _ACEOF if ac_fn_c_try_run "$LINENO" then : ax_cv_uts_namespace=yes else case e in #( e) ax_cv_uts_namespace=no ;; esac fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext ;; esac fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_uts_namespace" >&5 printf "%s\n" "$ax_cv_uts_namespace" >&6; } if test "$ax_cv_uts_namespace" = yes; then printf "%s\n" "#define HAVE_UTS_NAMESPACE 1" >>confdefs.h fi fi fi fi if test "x$build_tests" != "xno" ; then build_tests=yes ax_cxx_compile_alternatives="14 1y" ax_cxx_compile_cxx14_required=true ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_success=no if test x$ac_success = xno; then for alternative in ${ax_cxx_compile_alternatives}; do for switch in -std=c++${alternative} +std=c++${alternative} "-h std=c++${alternative}" MSVC; do if test x"$switch" = xMSVC; then switch=-std:c++${alternative} cachevar=`printf "%s\n" "ax_cv_cxx_compile_cxx14_${switch}_MSVC" | sed "$as_sed_sh"` else cachevar=`printf "%s\n" "ax_cv_cxx_compile_cxx14_$switch" | sed "$as_sed_sh"` fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CXX supports C++14 features with $switch" >&5 printf %s "checking whether $CXX supports C++14 features with $switch... " >&6; } if eval test \${$cachevar+y} then : printf %s "(cached) " >&6 else case e in #( e) ac_save_CXX="$CXX" CXX="$CXX $switch" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ // If the compiler admits that it is not ready for C++11, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" // MSVC always sets __cplusplus to 199711L in older versions; newer versions // only set it correctly if /Zc:__cplusplus is specified as well as a // /std:c++NN switch: // https://devblogs.microsoft.com/cppblog/msvc-now-correctly-reports-__cplusplus/ #elif __cplusplus < 201103L && !defined _MSC_VER #error "This is not a C++11 compiler" #else namespace cxx11 { namespace test_static_assert { template struct check { static_assert(sizeof(int) <= sizeof(T), "not big enough"); }; } namespace test_final_override { struct Base { virtual ~Base() {} virtual void f() {} }; struct Derived : public Base { virtual ~Derived() override {} virtual void f() override {} }; } namespace test_double_right_angle_brackets { template < typename T > struct check {}; typedef check single_type; typedef check> double_type; typedef check>> triple_type; typedef check>>> quadruple_type; } namespace test_decltype { int f() { int a = 1; decltype(a) b = 2; return a + b; } } namespace test_type_deduction { template < typename T1, typename T2 > struct is_same { static const bool value = false; }; template < typename T > struct is_same { static const bool value = true; }; template < typename T1, typename T2 > auto add(T1 a1, T2 a2) -> decltype(a1 + a2) { return a1 + a2; } int test(const int c, volatile int v) { static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == false, ""); auto ac = c; auto av = v; auto sumi = ac + av + 'x'; auto sumf = ac + av + 1.0; static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == true, ""); return (sumf > 0.0) ? sumi : add(c, v); } } namespace test_noexcept { int f() { return 0; } int g() noexcept { return 0; } static_assert(noexcept(f()) == false, ""); static_assert(noexcept(g()) == true, ""); } namespace test_constexpr { template < typename CharT > unsigned long constexpr strlen_c_r(const CharT *const s, const unsigned long acc) noexcept { return *s ? 
strlen_c_r(s + 1, acc + 1) : acc; } template < typename CharT > unsigned long constexpr strlen_c(const CharT *const s) noexcept { return strlen_c_r(s, 0UL); } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("1") == 1UL, ""); static_assert(strlen_c("example") == 7UL, ""); static_assert(strlen_c("another\0example") == 7UL, ""); } namespace test_rvalue_references { template < int N > struct answer { static constexpr int value = N; }; answer<1> f(int&) { return answer<1>(); } answer<2> f(const int&) { return answer<2>(); } answer<3> f(int&&) { return answer<3>(); } void test() { int i = 0; const int c = 0; static_assert(decltype(f(i))::value == 1, ""); static_assert(decltype(f(c))::value == 2, ""); static_assert(decltype(f(0))::value == 3, ""); } } namespace test_uniform_initialization { struct test { static const int zero {}; static const int one {1}; }; static_assert(test::zero == 0, ""); static_assert(test::one == 1, ""); } namespace test_lambdas { void test1() { auto lambda1 = [](){}; auto lambda2 = lambda1; lambda1(); lambda2(); } int test2() { auto a = [](int i, int j){ return i + j; }(1, 2); auto b = []() -> int { return '0'; }(); auto c = [=](){ return a + b; }(); auto d = [&](){ return c; }(); auto e = [a, &b](int x) mutable { const auto identity = [](int y){ return y; }; for (auto i = 0; i < a; ++i) a += b--; return x + identity(a + b); }(0); return a + b + c + d + e; } int test3() { const auto nullary = [](){ return 0; }; const auto unary = [](int x){ return x; }; using nullary_t = decltype(nullary); using unary_t = decltype(unary); const auto higher1st = [](nullary_t f){ return f(); }; const auto higher2nd = [unary](nullary_t f1){ return [unary, f1](unary_t f2){ return f2(unary(f1())); }; }; return higher1st(nullary) + higher2nd(nullary)(unary); } } namespace test_variadic_templates { template struct sum; template struct sum { static constexpr auto value = N0 + sum::value; }; template <> struct sum<> { static constexpr auto value = 0; }; static_assert(sum<>::value == 0, ""); static_assert(sum<1>::value == 1, ""); static_assert(sum<23>::value == 23, ""); static_assert(sum<1, 2>::value == 3, ""); static_assert(sum<5, 5, 11>::value == 21, ""); static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, ""); } // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function // because of this. namespace test_template_alias_sfinae { struct foo {}; template using member = typename T::member_type; template void func(...) {} template void func(member*) {} void test(); void test() { func(0); } } } // namespace cxx11 #endif // __cplusplus >= 201103L // If the compiler admits that it is not ready for C++14, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" #elif __cplusplus < 201402L && !defined _MSC_VER #error "This is not a C++14 compiler" #else namespace cxx14 { namespace test_polymorphic_lambdas { int test() { const auto lambda = [](auto&&... args){ const auto istiny = [](auto x){ return (sizeof(x) == 1UL) ? 1 : 0; }; const int aretiny[] = { istiny(args)... 
}; return aretiny[0]; }; return lambda(1, 1L, 1.0f, '1'); } } namespace test_binary_literals { constexpr auto ivii = 0b0000000000101010; static_assert(ivii == 42, "wrong value"); } namespace test_generalized_constexpr { template < typename CharT > constexpr unsigned long strlen_c(const CharT *const s) noexcept { auto length = 0UL; for (auto p = s; *p; ++p) ++length; return length; } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("x") == 1UL, ""); static_assert(strlen_c("test") == 4UL, ""); static_assert(strlen_c("another\0test") == 7UL, ""); } namespace test_lambda_init_capture { int test() { auto x = 0; const auto lambda1 = [a = x](int b){ return a + b; }; const auto lambda2 = [a = lambda1(x)](){ return a; }; return lambda2(); } } namespace test_digit_separators { constexpr auto ten_million = 100'000'000; static_assert(ten_million == 100000000, ""); } namespace test_return_type_deduction { auto f(int& x) { return x; } decltype(auto) g(int& x) { return x; } template < typename T1, typename T2 > struct is_same { static constexpr auto value = false; }; template < typename T > struct is_same { static constexpr auto value = true; }; int test() { auto x = 0; static_assert(is_same::value, ""); static_assert(is_same::value, ""); return x; } } } // namespace cxx14 #endif // __cplusplus >= 201402L _ACEOF if ac_fn_cxx_try_compile "$LINENO" then : eval $cachevar=yes else case e in #( e) eval $cachevar=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext CXX="$ac_save_CXX" ;; esac fi eval ac_res=\$$cachevar { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 printf "%s\n" "$ac_res" >&6; } if eval test x\$$cachevar = xyes; then CXX="$CXX $switch" if test -n "$CXXCPP" ; then CXXCPP="$CXXCPP $switch" fi ac_success=yes break fi done if test x$ac_success = xyes; then break fi done fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test x$ax_cxx_compile_cxx14_required = xtrue; then if test x$ac_success = xno; then as_fn_error $? "*** A compiler with support for C++14 language features is required." "$LINENO" 5 fi fi if test x$ac_success = xno; then HAVE_CXX14=0 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: No compiler with C++14 support was found" >&5 printf "%s\n" "$as_me: No compiler with C++14 support was found" >&6;} else HAVE_CXX14=1 printf "%s\n" "#define HAVE_CXX14 1" >>confdefs.h fi if test "$ac_cv_native_windows" != "yes" ; then ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on Tru64 or Sequent). # It gets checked for in the link test anyway. 
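# This is the second expansion of the pthread probe in this configure script.
# It runs only when the test suite is going to be built ($build_tests != "no")
# and the host is not native Windows, so that the test programs are compiled
# and linked with the same PTHREAD_CFLAGS/PTHREAD_LIBS discovery used above
# for the library itself.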
# First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then ax_pthread_save_CC="$CC" ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" if test "x$PTHREAD_CC" != "x" then : CC="$PTHREAD_CC" fi if test "x$PTHREAD_CXX" != "x" then : CXX="$PTHREAD_CXX" fi CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS" >&5 printf %s "checking for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. The 'extern "C"' is for builds by C++ compilers; although this is not generally supported in C code supporting it here has little cost and some practical benefit (sr 110532). */ #ifdef __cplusplus extern "C" #endif char pthread_join (void); int main (void) { return pthread_join (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5 printf "%s\n" "$ax_pthread_ok" >&6; } if test "x$ax_pthread_ok" = "xno"; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi CC="$ax_pthread_save_CC" CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items with a "," contain both # C compiler flags (before ",") and linker flags (after ","). Other items # starting with a "-" are C compiler flags, and remaining items are # library names, except for "none" which indicates that we try without # any flags at all, and "pthread-config" which is a program returning # the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 # (Note: HP C rejects this with "bad form for `-t' option") # -pthreads: Solaris/gcc (Note: HP C also rejects) # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads and # -D_REENTRANT too), HP C (must be checked before -lpthread, which # is present but should not be used directly; and before -mthreads, # because the compiler interprets this as "-mt" + "-hreads") # -mthreads: Mingw32/gcc, Lynx/gcc # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case $host_os in freebsd*) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) ax_pthread_flags="-kthread lthread $ax_pthread_flags" ;; hpux*) # From the cc(1) man page: "[-mt] Sets various -D flags to enable # multi-threading and also sets -lpthread." ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" ;; openedition*) # IBM z/OS requires a feature-test macro to be defined in order to # enable POSIX threads at all, so give the user a hint if this is # not set. (We don't define these ourselves, as they can affect # other portions of the system API in unpredictable ways.) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ # if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) AX_PTHREAD_ZOS_MISSING # endif _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP_TRADITIONAL "AX_PTHREAD_ZOS_MISSING" >/dev/null 2>&1 then : { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support." >&5 printf "%s\n" "$as_me: WARNING: IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support." >&2;} fi rm -rf conftest* ;; solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (N.B.: The stubs are missing # pthread_cleanup_push, or rather a function called by this macro, # so we could check for that, but who knows whether they'll stub # that too in a future libc.) So we'll check first for the # standard Solaris way of linking pthreads (-mt -lpthread). ax_pthread_flags="-mt,-lpthread pthread $ax_pthread_flags" ;; esac # Are we compiling with Clang? { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether $CC is Clang" >&5 printf %s "checking whether $CC is Clang... " >&6; } if test ${ax_cv_PTHREAD_CLANG+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_CLANG=no # Note that Autoconf sets GCC=yes for Clang as well as GCC if test "x$GCC" = "xyes"; then cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ /* Note: Clang 2.7 lacks __clang_[a-z]+__ */ # if defined(__clang__) && defined(__llvm__) AX_PTHREAD_CC_IS_CLANG # endif _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP_TRADITIONAL "AX_PTHREAD_CC_IS_CLANG" >/dev/null 2>&1 then : ax_cv_PTHREAD_CLANG=yes fi rm -rf conftest* fi ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_CLANG" >&5 printf "%s\n" "$ax_cv_PTHREAD_CLANG" >&6; } ax_pthread_clang="$ax_cv_PTHREAD_CLANG" # GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) # Note that for GCC and Clang -pthread generally implies -lpthread, # except when -nostdlib is passed. # This is problematic using libtool to build C++ shared libraries with pthread: # [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25460 # [2] https://bugzilla.redhat.com/show_bug.cgi?id=661333 # [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468555 # To solve this, first try -pthread together with -lpthread for GCC if test "x$GCC" = "xyes" then : ax_pthread_flags="-pthread,-lpthread -pthread -pthreads $ax_pthread_flags" fi # Clang takes -pthread (never supported any other flag), but we'll try with -lpthread first if test "x$ax_pthread_clang" = "xyes" then : ax_pthread_flags="-pthread,-lpthread -pthread" fi # The presence of a feature test macro requesting re-entrant function # definitions is, on some systems, a strong hint that pthreads support is # correctly enabled case $host_os in darwin* | hpux* | linux* | osf* | solaris*) ax_pthread_check_macro="_REENTRANT" ;; aix*) ax_pthread_check_macro="_THREAD_SAFE" ;; *) ax_pthread_check_macro="--" ;; esac if test "x$ax_pthread_check_macro" = "x--" then : ax_pthread_check_cond=0 else case e in #( e) ax_pthread_check_cond="!defined($ax_pthread_check_macro)" ;; esac fi if test "x$ax_pthread_ok" = "xno"; then for ax_pthread_try_flag in $ax_pthread_flags; do case $ax_pthread_try_flag in none) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work without any flags" >&5 printf %s "checking whether pthreads work without any flags... " >&6; } ;; *,*) PTHREAD_CFLAGS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\1/"` PTHREAD_LIBS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\2/"` { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with \"$PTHREAD_CFLAGS\" and \"$PTHREAD_LIBS\"" >&5 printf %s "checking whether pthreads work with \"$PTHREAD_CFLAGS\" and \"$PTHREAD_LIBS\"... " >&6; } ;; -*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with $ax_pthread_try_flag" >&5 printf %s "checking whether pthreads work with $ax_pthread_try_flag... " >&6; } PTHREAD_CFLAGS="$ax_pthread_try_flag" ;; pthread-config) # Extract the first word of "pthread-config", so it can be a program name with args. set dummy pthread-config; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_ax_pthread_config+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$ax_pthread_config"; then ac_cv_prog_ax_pthread_config="$ax_pthread_config" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_ax_pthread_config="yes" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_ax_pthread_config" && ac_cv_prog_ax_pthread_config="no" fi ;; esac fi ax_pthread_config=$ac_cv_prog_ax_pthread_config if test -n "$ax_pthread_config"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_config" >&5 printf "%s\n" "$ax_pthread_config" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi if test "x$ax_pthread_config" = "xno" then : continue fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for the pthreads library -l$ax_pthread_try_flag" >&5 printf %s "checking for the pthreads library -l$ax_pthread_try_flag... " >&6; } PTHREAD_LIBS="-l$ax_pthread_try_flag" ;; esac ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include # if $ax_pthread_check_cond # error "$ax_pthread_check_macro must be defined" # endif static void *some_global = NULL; static void routine(void *a) { /* To avoid any unused-parameter or unused-but-set-parameter warning. */ some_global = a; } static void *start_routine(void *a) { return a; } int main (void) { pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */ ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_pthread_ok" >&5 printf "%s\n" "$ax_pthread_ok" >&6; } if test "x$ax_pthread_ok" = "xyes" then : break fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Clang needs special handling, because older versions handle the -pthread # option in a rather... idiosyncratic way if test "x$ax_pthread_clang" = "xyes"; then # Clang takes -pthread; it has never supported any other flag # (Note 1: This will need to be revisited if a system that Clang # supports has POSIX threads in a separate library. This tends not # to be the way of modern systems, but it's conceivable.) # (Note 2: On some systems, notably Darwin, -pthread is not needed # to get POSIX threads support; the API is always present and # active. We could reasonably leave PTHREAD_CFLAGS empty. 
But # -pthread does define _REENTRANT, and while the Darwin headers # ignore this macro, third-party headers might not.) # However, older versions of Clang make a point of warning the user # that, in an invocation where only linking and no compilation is # taking place, the -pthread option has no effect ("argument unused # during compilation"). They expect -pthread to be passed in only # when source code is being compiled. # # Problem is, this is at odds with the way Automake and most other # C build frameworks function, which is that the same flags used in # compilation (CFLAGS) are also used in linking. Many systems # supported by AX_PTHREAD require exactly this for POSIX threads # support, and in fact it is often not straightforward to specify a # flag that is used only in the compilation phase and not in # linking. Such a scenario is extremely rare in practice. # # Even though use of the -pthread flag in linking would only print # a warning, this can be a nuisance for well-run software projects # that build with -Werror. So if the active version of Clang has # this misfeature, we search for an option to squash it. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether Clang needs flag to prevent \"argument unused\" warning when linking with -pthread" >&5 printf %s "checking whether Clang needs flag to prevent \"argument unused\" warning when linking with -pthread... " >&6; } if test ${ax_cv_PTHREAD_CLANG_NO_WARN_FLAG+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown # Create an alternate version of $ac_link that compiles and # links in two steps (.c -> .o, .o -> exe) instead of one # (.c -> exe), because the warning occurs only in the second # step ax_pthread_save_ac_link="$ac_link" ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' ax_pthread_link_step=`printf "%s\n" "$ac_link" | sed "$ax_pthread_sed"` ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" ax_pthread_save_CFLAGS="$CFLAGS" for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do if test "x$ax_pthread_try" = "xunknown" then : break fi CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" ac_link="$ax_pthread_save_ac_link" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main(void){return 0;} _ACEOF if ac_fn_c_try_link "$LINENO" then : ac_link="$ax_pthread_2step_ac_link" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main(void){return 0;} _ACEOF if ac_fn_c_try_link "$LINENO" then : break fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext done ac_link="$ax_pthread_save_ac_link" CFLAGS="$ax_pthread_save_CFLAGS" if test "x$ax_pthread_try" = "x" then : ax_pthread_try=no fi ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" >&5 printf "%s\n" "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" >&6; } case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in no | unknown) ;; *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; esac fi # $ax_pthread_clang = yes # Various other checks: if test "x$ax_pthread_ok" = "xyes"; then ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for joinable pthread attribute" >&5 printf %s "checking for joinable pthread attribute... " >&6; } if test ${ax_cv_PTHREAD_JOINABLE_ATTR+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_JOINABLE_ATTR=unknown for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main (void) { int attr = $ax_pthread_attr; return attr /* ; */ ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext done ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_JOINABLE_ATTR" >&5 printf "%s\n" "$ax_cv_PTHREAD_JOINABLE_ATTR" >&6; } if test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ test "x$ax_pthread_joinable_attr_defined" != "xyes" then : printf "%s\n" "#define PTHREAD_CREATE_JOINABLE $ax_cv_PTHREAD_JOINABLE_ATTR" >>confdefs.h ax_pthread_joinable_attr_defined=yes fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether more special flags are required for pthreads" >&5 printf %s "checking whether more special flags are required for pthreads... " >&6; } if test ${ax_cv_PTHREAD_SPECIAL_FLAGS+y} then : printf %s "(cached) " >&6 else case e in #( e) ax_cv_PTHREAD_SPECIAL_FLAGS=no case $host_os in solaris*) ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" ;; esac ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_SPECIAL_FLAGS" >&5 printf "%s\n" "$ax_cv_PTHREAD_SPECIAL_FLAGS" >&6; } if test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ test "x$ax_pthread_special_flags_added" != "xyes" then : PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" ax_pthread_special_flags_added=yes fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for PTHREAD_PRIO_INHERIT" >&5 printf %s "checking for PTHREAD_PRIO_INHERIT... " >&6; } if test ${ax_cv_PTHREAD_PRIO_INHERIT+y} then : printf %s "(cached) " >&6 else case e in #( e) cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include int main (void) { int i = PTHREAD_PRIO_INHERIT; return i; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO" then : ax_cv_PTHREAD_PRIO_INHERIT=yes else case e in #( e) ax_cv_PTHREAD_PRIO_INHERIT=no ;; esac fi rm -f core conftest.err conftest.$ac_objext conftest.beam \ conftest$ac_exeext conftest.$ac_ext ;; esac fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $ax_cv_PTHREAD_PRIO_INHERIT" >&5 printf "%s\n" "$ax_cv_PTHREAD_PRIO_INHERIT" >&6; } if test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ test "x$ax_pthread_prio_inherit_defined" != "xyes" then : printf "%s\n" "#define HAVE_PTHREAD_PRIO_INHERIT 1" >>confdefs.h ax_pthread_prio_inherit_defined=yes fi CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" # More AIX lossage: compile with *_r variant if test "x$GCC" != "xyes"; then case $host_os in aix*) case "x/$CC" in #( x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6) : #handle absolute path differently from PATH based program lookup case "x$CC" in #( x/*) : if as_fn_executable_p ${CC}_r then : PTHREAD_CC="${CC}_r" fi if test "x${CXX}" != "x" then : if as_fn_executable_p ${CXX}_r then : PTHREAD_CXX="${CXX}_r" fi fi ;; #( *) : for ac_prog in ${CC}_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_PTHREAD_CC+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$PTHREAD_CC"; then ac_cv_prog_PTHREAD_CC="$PTHREAD_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_PTHREAD_CC="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi PTHREAD_CC=$ac_cv_prog_PTHREAD_CC if test -n "$PTHREAD_CC"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CC" >&5 printf "%s\n" "$PTHREAD_CC" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$PTHREAD_CC" && break done test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" if test "x${CXX}" != "x" then : for ac_prog in ${CXX}_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 printf %s "checking for $ac_word... " >&6; } if test ${ac_cv_prog_PTHREAD_CXX+y} then : printf %s "(cached) " >&6 else case e in #( e) if test -n "$PTHREAD_CXX"; then ac_cv_prog_PTHREAD_CXX="$PTHREAD_CXX" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir$ac_word$ac_exec_ext"; then ac_cv_prog_PTHREAD_CXX="$ac_prog" printf "%s\n" "$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi ;; esac fi PTHREAD_CXX=$ac_cv_prog_PTHREAD_CXX if test -n "$PTHREAD_CXX"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CXX" >&5 printf "%s\n" "$PTHREAD_CXX" >&6; } else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: no" >&5 printf "%s\n" "no" >&6; } fi test -n "$PTHREAD_CXX" && break done test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" fi ;; esac ;; #( *) : ;; esac ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test "x$ax_pthread_ok" = "xyes"; then CARES_TEST_PTHREADS="yes" : else ax_pthread_ok=no as_fn_error $? "threading required for tests" "$LINENO" 5 fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu fi BUILD_SUBDIRS="${BUILD_SUBDIRS} test" fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking whether to build tests" >&5 printf %s "checking whether to build tests... " >&6; } { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: $build_tests" >&5 printf "%s\n" "$build_tests" >&6; } if test "x$build_tests" = "xyes"; then BUILD_TESTS_TRUE= BUILD_TESTS_FALSE='#' else BUILD_TESTS_TRUE='#' BUILD_TESTS_FALSE= fi ac_config_files="$ac_config_files Makefile include/Makefile src/Makefile src/lib/Makefile src/tools/Makefile libcares.pc" if test -z "$BUILD_TESTS_TRUE"; then : ac_config_files="$ac_config_files test/Makefile" fi cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # 'ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* 'ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. 
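# Annotation (illustrative sketch, not part of the Autoconf-generated output):
# the confcache assembled just below is copied to $cache_file and reused by
# later configure runs when caching is enabled, or when config.status re-runs
# configure. A minimal usage sketch, assuming a fresh build directory:
#
#   ./configure -C                  # -C / --config-cache populates ./config.cache
#   ./configure -C                  # later runs reuse the cached ac_cv_* results
#   ./config.status --recheck       # re-runs configure with the original options
#
# Only *_cv_* variables are kept in the cache; ac_cv_env_* entries recorded
# from the environment take precedence when the cache is loaded.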
( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 printf "%s\n" "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # 'set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # 'set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test ${\1+y} || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 printf "%s\n" "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { printf "%s\n" "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 printf "%s\n" "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' DEFS=-DHAVE_CONFIG_H ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`printf "%s\n" "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs { printf "%s\n" "$as_me:${as_lineno-$LINENO}: checking that generated files are newer than configure" >&5 printf %s "checking that generated files are newer than configure... " >&6; } if test -n "$am_sleep_pid"; then # Hide warnings about reused PIDs. wait $am_sleep_pid 2>/dev/null fi { printf "%s\n" "$as_me:${as_lineno-$LINENO}: result: done" >&5 printf "%s\n" "done" >&6; } if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCXX\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi case $enable_silent_rules in # ((( yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; esac if test $am_cv_make_support_nested_variables = yes; then AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${MAINTAINER_MODE_TRUE}" && test -z "${MAINTAINER_MODE_FALSE}"; then as_fn_error $? "conditional \"MAINTAINER_MODE\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${CODE_COVERAGE_ENABLED_TRUE}" && test -z "${CODE_COVERAGE_ENABLED_FALSE}"; then as_fn_error $? "conditional \"CODE_COVERAGE_ENABLED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi # Check whether --enable-year2038 was given. if test ${enable_year2038+y} then : enableval=$enable_year2038; fi if test -z "${CARES_USE_NO_UNDEFINED_TRUE}" && test -z "${CARES_USE_NO_UNDEFINED_FALSE}"; then as_fn_error $? "conditional \"CARES_USE_NO_UNDEFINED\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${CARES_SYMBOL_HIDING_TRUE}" && test -z "${CARES_SYMBOL_HIDING_FALSE}"; then as_fn_error $? "conditional \"CARES_SYMBOL_HIDING\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${BUILD_TESTS_TRUE}" && test -z "${BUILD_TESTS_FALSE}"; then as_fn_error $? "conditional \"BUILD_TESTS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 printf "%s\n" "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1 then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case e in #( e) case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac ;; esac fi # Reset variables that may have inherited troublesome values from # the environment. # IFS needs to be set, to space, tab, and newline, in precisely that order. # (If _AS_PATH_WALK were called with IFS unset, it would have the # side effect of setting IFS to empty, thus disabling word splitting.) # Quoting is to prevent editors from complaining about space-tab. as_nl=' ' export as_nl IFS=" "" $as_nl" PS1='$ ' PS2='> ' PS4='+ ' # Ensure predictable behavior from utilities with locale-dependent output. 
LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # We cannot yet rely on "unset" to work, but we need these variables # to be unset--not just set to an empty or harmless value--now, to # avoid bugs in old shells (e.g. pre-3.0 UWIN ksh). This construct # also avoids known problems related to "unset" and subshell syntax # in other old shells (e.g. bash 2.01 and pdksh 5.2.14). for as_var in BASH_ENV ENV MAIL MAILPATH CDPATH do eval test \${$as_var+y} \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done # Ensure that fds 0, 1, and 2 are open. if (exec 3>&0) 2>/dev/null; then :; else exec 0&1) 2>/dev/null; then :; else exec 1>/dev/null; fi if (exec 3>&2) ; then :; else exec 2>/dev/null; fi # The user is always right. if ${PATH_SEPARATOR+false} :; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS case $as_dir in #((( '') as_dir=./ ;; */) ;; *) as_dir=$as_dir/ ;; esac test -r "$as_dir$0" && as_myself=$as_dir$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as 'sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then printf "%s\n" "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi printf "%s\n" "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null then : eval 'as_fn_append () { eval $1+=\$2 }' else case e in #( e) as_fn_append () { eval $1=\$$1\$2 } ;; esac fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null then : eval 'as_fn_arith () { as_val=$(( $* )) }' else case e in #( e) as_fn_arith () { as_val=`expr "$@" || test $? 
-eq 1` } ;; esac fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # Determine whether it's possible to make 'echo' print without a newline. # These variables are no longer used directly by Autoconf, but are AC_SUBSTed # for compatibility with existing Makefiles. ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac # For backward compatibility with old third-party macros, we provide # the shell variables $as_echo and $as_echo_n. New code should use # AS_ECHO(["message"]) and AS_ECHO_N(["message"]), respectively. as_echo='printf %s\n' as_echo_n='printf %s' rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both 'ln -s file dir' and 'ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; 'ln -s' creates a wrapper executable. # In both cases, we have to default to 'cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`printf "%s\n" "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. 
as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_sed_cpp="y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" as_tr_cpp="eval sed '$as_sed_cpp'" # deprecated # Sed expression to map a string onto a valid variable name. as_sed_sh="y%*+%pp%;s%[^_$as_cr_alnum]%_%g" as_tr_sh="eval sed '$as_sed_sh'" # deprecated exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. ## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by c-ares $as_me 1.33.1, which was generated by GNU Autoconf 2.72. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac case $ac_config_headers in *" "*) set x $ac_config_headers; shift; ac_config_headers=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_headers="$ac_config_headers" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ '$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE --header=FILE[:TEMPLATE] instantiate the configuration header FILE Configuration files: $config_files Configuration headers: $config_headers Configuration commands: $config_commands Report bugs to ." _ACEOF ac_cs_config=`printf "%s\n" "$ac_configure_args" | sed "$ac_safe_unquote"` ac_cs_config_escaped=`printf "%s\n" "$ac_cs_config" | sed "s/^ //; s/'/'\\\\\\\\''/g"` cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config='$ac_cs_config_escaped' ac_cs_version="\\ c-ares config.status 1.33.1 configured by $0, generated by GNU Autoconf 2.72, with options \\"\$ac_cs_config\\" Copyright (C) 2023 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. 
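# Annotation (illustrative only): example invocations of the argument handling
# implemented below, matching the usage text printed above. The file and header
# names are taken from this project's own configuration targets:
#
#   ./config.status                                  # instantiate all files and headers
#   ./config.status src/lib/Makefile                 # regenerate a single listed file
#   ./config.status --header=src/lib/ares_config.h   # regenerate one configuration header
#   ./config.status --recheck                        # re-run configure with the saved options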
ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) printf "%s\n" "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) printf "%s\n" "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`printf "%s\n" "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append CONFIG_HEADERS " '$ac_optarg'" ac_need_defaults=false;; --he | --h) # Conflict between --help and --header as_fn_error $? "ambiguous option: '$1' Try '$0 --help' for more information.";; --help | --hel | -h ) printf "%s\n" "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: '$1' Try '$0 --help' for more information." ;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \printf "%s\n" "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX printf "%s\n" "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}" # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. 
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`' macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`' AS='`$ECHO "$AS" | $SED "$delay_single_quote_subst"`' DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`' OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`' enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`' enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`' pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`' enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`' shared_archive_member_spec='`$ECHO "$shared_archive_member_spec" | $SED "$delay_single_quote_subst"`' SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`' ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`' PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`' host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`' host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`' host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`' build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`' build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`' build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`' SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`' Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`' GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`' EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`' FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`' LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`' NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`' LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`' max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`' ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`' exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`' lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`' lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`' lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`' lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`' lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`' reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`' reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`' FILECMD='`$ECHO "$FILECMD" | $SED "$delay_single_quote_subst"`' deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`' file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`' file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`' want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`' sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`' AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`' lt_ar_flags='`$ECHO "$lt_ar_flags" | $SED "$delay_single_quote_subst"`' AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`' archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`' STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`' RANLIB='`$ECHO "$RANLIB" | $SED 
"$delay_single_quote_subst"`' old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`' old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`' old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED "$delay_single_quote_subst"`' lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`' CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`' CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`' compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`' GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_import='`$ECHO "$lt_cv_sys_global_symbol_to_import" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`' lt_cv_nm_interface='`$ECHO "$lt_cv_nm_interface" | $SED "$delay_single_quote_subst"`' nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`' lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`' lt_cv_truncate_bin='`$ECHO "$lt_cv_truncate_bin" | $SED "$delay_single_quote_subst"`' objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`' MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`' need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`' MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`' DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`' NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`' LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`' OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`' OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`' libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`' shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`' extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec='`$ECHO "$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`' compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED 
"$delay_single_quote_subst"`' archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`' module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`' module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`' with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`' allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`' no_undefined_flag='`$ECHO "$no_undefined_flag" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`' hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`' hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`' hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`' inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`' link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`' always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`' export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`' exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`' include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`' prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`' postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`' file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`' variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`' need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`' need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`' version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`' runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`' libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`' library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`' soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`' install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`' postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`' postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`' finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`' finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`' hardcode_into_libs='`$ECHO "$hardcode_into_libs" | $SED "$delay_single_quote_subst"`' sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`' configure_time_dlsearch_path='`$ECHO "$configure_time_dlsearch_path" | $SED "$delay_single_quote_subst"`' configure_time_lt_sys_library_path='`$ECHO "$configure_time_lt_sys_library_path" | 
$SED "$delay_single_quote_subst"`' hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`' enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`' enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`' enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`' old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`' striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs='`$ECHO "$compiler_lib_search_dirs" | $SED "$delay_single_quote_subst"`' predep_objects='`$ECHO "$predep_objects" | $SED "$delay_single_quote_subst"`' postdep_objects='`$ECHO "$postdep_objects" | $SED "$delay_single_quote_subst"`' predeps='`$ECHO "$predeps" | $SED "$delay_single_quote_subst"`' postdeps='`$ECHO "$postdeps" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path='`$ECHO "$compiler_lib_search_path" | $SED "$delay_single_quote_subst"`' LD_CXX='`$ECHO "$LD_CXX" | $SED "$delay_single_quote_subst"`' reload_flag_CXX='`$ECHO "$reload_flag_CXX" | $SED "$delay_single_quote_subst"`' reload_cmds_CXX='`$ECHO "$reload_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`' GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes_CXX='`$ECHO "$enable_shared_with_static_runtimes_CXX" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec_CXX='`$ECHO "$export_dynamic_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec_CXX='`$ECHO "$whole_archive_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' compiler_needs_object_CXX='`$ECHO "$compiler_needs_object_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds_CXX='`$ECHO "$old_archive_from_new_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds_CXX='`$ECHO "$old_archive_from_expsyms_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_CXX='`$ECHO "$archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds_CXX='`$ECHO "$archive_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_cmds_CXX='`$ECHO "$module_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_expsym_cmds_CXX='`$ECHO "$module_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' with_gnu_ld_CXX='`$ECHO "$with_gnu_ld_CXX" | $SED "$delay_single_quote_subst"`' allow_undefined_flag_CXX='`$ECHO "$allow_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' no_undefined_flag_CXX='`$ECHO "$no_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec_CXX='`$ECHO "$hardcode_libdir_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator_CXX='`$ECHO "$hardcode_libdir_separator_CXX" | $SED 
"$delay_single_quote_subst"`' hardcode_direct_CXX='`$ECHO "$hardcode_direct_CXX" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute_CXX='`$ECHO "$hardcode_direct_absolute_CXX" | $SED "$delay_single_quote_subst"`' hardcode_minus_L_CXX='`$ECHO "$hardcode_minus_L_CXX" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_single_quote_subst"`' hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`' inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`' link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`' always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`' export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`' exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`' include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`' prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`' postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`' file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`' predep_objects_CXX='`$ECHO "$predep_objects_CXX" | $SED "$delay_single_quote_subst"`' postdep_objects_CXX='`$ECHO "$postdep_objects_CXX" | $SED "$delay_single_quote_subst"`' predeps_CXX='`$ECHO "$predeps_CXX" | $SED "$delay_single_quote_subst"`' postdeps_CXX='`$ECHO "$postdeps_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path_CXX='`$ECHO "$compiler_lib_search_path_CXX" | $SED "$delay_single_quote_subst"`' LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } # Quote evaled strings. 
for var in AS \ DLLTOOL \ OBJDUMP \ SHELL \ ECHO \ PATH_SEPARATOR \ SED \ GREP \ EGREP \ FGREP \ LD \ NM \ LN_S \ lt_SP2NL \ lt_NL2SP \ reload_flag \ FILECMD \ deplibs_check_method \ file_magic_cmd \ file_magic_glob \ want_nocaseglob \ sharedlib_from_linklib_cmd \ AR \ archiver_list_spec \ STRIP \ RANLIB \ CC \ CFLAGS \ compiler \ lt_cv_sys_global_symbol_pipe \ lt_cv_sys_global_symbol_to_cdecl \ lt_cv_sys_global_symbol_to_import \ lt_cv_sys_global_symbol_to_c_name_address \ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \ lt_cv_nm_interface \ nm_file_list_spec \ lt_cv_truncate_bin \ lt_prog_compiler_no_builtin_flag \ lt_prog_compiler_pic \ lt_prog_compiler_wl \ lt_prog_compiler_static \ lt_cv_prog_compiler_c_o \ need_locks \ MANIFEST_TOOL \ DSYMUTIL \ NMEDIT \ LIPO \ OTOOL \ OTOOL64 \ shrext_cmds \ export_dynamic_flag_spec \ whole_archive_flag_spec \ compiler_needs_object \ with_gnu_ld \ allow_undefined_flag \ no_undefined_flag \ hardcode_libdir_flag_spec \ hardcode_libdir_separator \ exclude_expsyms \ include_expsyms \ file_list_spec \ variables_saved_for_relink \ libname_spec \ library_names_spec \ soname_spec \ install_override_mode \ finish_eval \ old_striplib \ striplib \ compiler_lib_search_dirs \ predep_objects \ postdep_objects \ predeps \ postdeps \ compiler_lib_search_path \ LD_CXX \ reload_flag_CXX \ compiler_CXX \ lt_prog_compiler_no_builtin_flag_CXX \ lt_prog_compiler_pic_CXX \ lt_prog_compiler_wl_CXX \ lt_prog_compiler_static_CXX \ lt_cv_prog_compiler_c_o_CXX \ export_dynamic_flag_spec_CXX \ whole_archive_flag_spec_CXX \ compiler_needs_object_CXX \ with_gnu_ld_CXX \ allow_undefined_flag_CXX \ no_undefined_flag_CXX \ hardcode_libdir_flag_spec_CXX \ hardcode_libdir_separator_CXX \ exclude_expsyms_CXX \ include_expsyms_CXX \ file_list_spec_CXX \ compiler_lib_search_dirs_CXX \ predep_objects_CXX \ postdep_objects_CXX \ predeps_CXX \ postdeps_CXX \ compiler_lib_search_path_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in reload_cmds \ old_postinstall_cmds \ old_postuninstall_cmds \ old_archive_cmds \ extract_expsyms_cmds \ old_archive_from_new_cmds \ old_archive_from_expsyms_cmds \ archive_cmds \ archive_expsym_cmds \ module_cmds \ module_expsym_cmds \ export_symbols_cmds \ prelink_cmds \ postlink_cmds \ postinstall_cmds \ postuninstall_cmds \ finish_cmds \ sys_lib_search_path_spec \ configure_time_dlsearch_path \ configure_time_lt_sys_library_path \ reload_cmds_CXX \ old_archive_cmds_CXX \ old_archive_from_new_cmds_CXX \ old_archive_from_expsyms_cmds_CXX \ archive_cmds_CXX \ archive_expsym_cmds_CXX \ module_cmds_CXX \ module_expsym_cmds_CXX \ export_symbols_cmds_CXX \ prelink_cmds_CXX \ postlink_cmds_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done ac_aux_dir='$ac_aux_dir' # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes INIT. 
if test -n "\${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi PACKAGE='$PACKAGE' VERSION='$VERSION' RM='$RM' ofile='$ofile' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. for ac_config_target in $ac_config_targets do case $ac_config_target in "src/lib/ares_config.h") CONFIG_HEADERS="$CONFIG_HEADERS src/lib/ares_config.h" ;; "include/ares_build.h") CONFIG_HEADERS="$CONFIG_HEADERS include/ares_build.h" ;; "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; "include/Makefile") CONFIG_FILES="$CONFIG_FILES include/Makefile" ;; "src/Makefile") CONFIG_FILES="$CONFIG_FILES src/Makefile" ;; "src/lib/Makefile") CONFIG_FILES="$CONFIG_FILES src/lib/Makefile" ;; "src/tools/Makefile") CONFIG_FILES="$CONFIG_FILES src/tools/Makefile" ;; "libcares.pc") CONFIG_FILES="$CONFIG_FILES libcares.pc" ;; "test/Makefile") CONFIG_FILES="$CONFIG_FILES test/Makefile" ;; *) as_fn_error $? "invalid argument: '$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test ${CONFIG_FILES+y} || CONFIG_FILES=$config_files test ${CONFIG_HEADERS+y} || CONFIG_HEADERS=$config_headers test ${CONFIG_COMMANDS+y} || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to '$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with './config.status config.h'. if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? 
"could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" # Set up the scripts for CONFIG_HEADERS section. # No need to generate them if there are no CONFIG_HEADERS. # This happens for instance with './config.status Makefile'. if test -n "$CONFIG_HEADERS"; then cat >"$ac_tmp/defines.awk" <<\_ACAWK || BEGIN { _ACEOF # Transform confdefs.h into an awk script 'defines.awk', embedded as # here-document in config.status, that substitutes the proper values into # config.h.in to produce config.h. # Create a delimiter string that does not exist in confdefs.h, to ease # handling of long lines. ac_delim='%!_!# ' for ac_last_try in false false :; do ac_tt=`sed -n "/$ac_delim/p" confdefs.h` if test -z "$ac_tt"; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done # For the awk script, D is an array of macro values keyed by name, # likewise P contains macro parameters if any. Preserve backslash # newline sequences. 
ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]* sed -n ' s/.\{148\}/&'"$ac_delim"'/g t rset :rset s/^[ ]*#[ ]*define[ ][ ]*/ / t def d :def s/\\$// t bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3"/p s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p d :bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3\\\\\\n"\\/p t cont s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p t cont d :cont n s/.\{148\}/&'"$ac_delim"'/g t clear :clear s/\\$// t bsnlc s/["\\]/\\&/g; s/^/"/; s/$/"/p d :bsnlc s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p b cont ' >$CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 for (key in D) D_is_set[key] = 1 FS = "" } /^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ { line = \$ 0 split(line, arg, " ") if (arg[1] == "#") { defundef = arg[2] mac1 = arg[3] } else { defundef = substr(arg[1], 2) mac1 = arg[2] } split(mac1, mac2, "(") #) macro = mac2[1] prefix = substr(line, 1, index(line, defundef) - 1) if (D_is_set[macro]) { # Preserve the white space surrounding the "#". print prefix "define", macro P[macro] D[macro] next } else { # Replace #undef with comments. This is necessary, for example, # in the case of _POSIX_SOURCE, which is predefined and required # on some systems where configure will not decide to define it. if (defundef == "undef") { print "/*", prefix defundef, macro, "*/" next } } } { print } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 as_fn_error $? "could not setup config headers machinery" "$LINENO" 5 fi # test -n "$CONFIG_HEADERS" eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag '$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain ':'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: '$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`printf "%s\n" "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is 'configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` printf "%s\n" "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. $configure_input" { printf "%s\n" "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 printf "%s\n" "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`printf "%s\n" "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? 
"could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`printf "%s\n" "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`printf "%s\n" "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 printf "%s\n" "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when '$srcdir' = '.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable 'datarootdir' which seems to be undefined. Please make sure it is defined" >&5 printf "%s\n" "$as_me: WARNING: $ac_file contains a reference to the variable 'datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :H) # # CONFIG_HEADER # if test x"$ac_file" != x-; then { printf "%s\n" "/* $configure_input */" >&1 \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" } >"$ac_tmp/config.h" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5 printf "%s\n" "$as_me: $ac_file is unchanged" >&6;} else rm -f "$ac_file" mv "$ac_tmp/config.h" "$ac_file" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 fi else printf "%s\n" "/* $configure_input */" >&1 \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \ || as_fn_error $? "could not create -" "$LINENO" 5 fi # Compute "$ac_file"'s index in $config_headers. _am_arg="$ac_file" _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" || $as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$_am_arg" : 'X\(//\)[^/]' \| \ X"$_am_arg" : 'X\(//\)$' \| \ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$_am_arg" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'`/stamp-h$_am_stamp_count ;; :C) { printf "%s\n" "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 printf "%s\n" "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Older Autoconf quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. # TODO: see whether this extra hack can be removed once we start # requiring Autoconf 2.70 or later. 
case $CONFIG_FILES in #( *\'*) : eval set x "$CONFIG_FILES" ;; #( *) : set x $CONFIG_FILES ;; #( *) : ;; esac shift # Used to flag and report bootstrapping failures. am_rc=0 for am_mf do # Strip MF so we end up with the name of the file. am_mf=`printf "%s\n" "$am_mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile which includes # dependency-tracking related rules and includes. # Grep'ing the whole file directly is not great: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \ || continue am_dirpart=`$as_dirname -- "$am_mf" || $as_expr X"$am_mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$am_mf" : 'X\(//\)[^/]' \| \ X"$am_mf" : 'X\(//\)$' \| \ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X"$am_mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` am_filepart=`$as_basename -- "$am_mf" || $as_expr X/"$am_mf" : '.*/\([^/][^/]*\)/*$' \| \ X"$am_mf" : 'X\(//\)$' \| \ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null || printf "%s\n" X/"$am_mf" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` { echo "$as_me:$LINENO: cd "$am_dirpart" \ && sed -e '/# am--include-marker/d' "$am_filepart" \ | $MAKE -f - am--depfiles" >&5 (cd "$am_dirpart" \ && sed -e '/# am--include-marker/d' "$am_filepart" \ | $MAKE -f - am--depfiles) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } || am_rc=$? done if test $am_rc -ne 0; then { { printf "%s\n" "$as_me:${as_lineno-$LINENO}: error: in '$ac_pwd':" >&5 printf "%s\n" "$as_me: error: in '$ac_pwd':" >&2;} as_fn_error $? "Something went wrong bootstrapping makefile fragments for automatic dependency tracking. If GNU make was not used, consider re-running the configure script with MAKE=\"gmake\" (or whatever is necessary). You can also try re-running configure with the '--disable-dependency-tracking' option to at least be able to build the package (albeit without support for automatic dependency tracking). See 'config.log' for more details" "$LINENO" 5; } fi { am_dirpart=; unset am_dirpart;} { am_filepart=; unset am_filepart;} { am_mf=; unset am_mf;} { am_rc=; unset am_rc;} rm -f conftest-deps.mk } ;; "libtool":C) # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi cfgfile=${ofile}T trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # Generated automatically by $as_me ($PACKAGE) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # Provide generalized library-building support services. # Written by Gordon Matzigkeit, 1996 # Copyright (C) 2014 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of of the License, or # (at your option) any later version. 
# # As a special exception to the GNU General Public License, if you # distribute this file as part of a program or library that is built # using GNU Libtool, you may include this file under the same # distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # The names of the tagged configurations supported by this script. available_tags='CXX ' # Configured defaults for sys_lib_dlsearch_path munging. : \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"} # ### BEGIN LIBTOOL CONFIG # Which release of libtool.m4 was used? macro_version=$macro_version macro_revision=$macro_revision # Assembler program. AS=$lt_AS # DLL creation program. DLLTOOL=$lt_DLLTOOL # Object dumper program. OBJDUMP=$lt_OBJDUMP # Whether or not to build shared libraries. build_libtool_libs=$enable_shared # Whether or not to build static libraries. build_old_libs=$enable_static # What type of objects to build. pic_mode=$pic_mode # Whether or not to optimize for fast installation. fast_install=$enable_fast_install # Shared archive member basename,for filename based shared library versioning on AIX. shared_archive_member_spec=$shared_archive_member_spec # Shell to use when invoking shell scripts. SHELL=$lt_SHELL # An echo program that protects backslashes. ECHO=$lt_ECHO # The PATH separator for the build system. PATH_SEPARATOR=$lt_PATH_SEPARATOR # The host system. host_alias=$host_alias host=$host host_os=$host_os # The build system. build_alias=$build_alias build=$build build_os=$build_os # A sed program that does not truncate output. SED=$lt_SED # Sed that helps us avoid accidentally triggering echo(1) options like -n. Xsed="\$SED -e 1s/^X//" # A grep program that handles long lines. GREP=$lt_GREP # An ERE matcher. EGREP=$lt_EGREP # A literal string matcher. FGREP=$lt_FGREP # A BSD- or MS-compatible name lister. NM=$lt_NM # Whether we need soft or hard links. LN_S=$lt_LN_S # What is the maximum length of a command? max_cmd_len=$max_cmd_len # Object file suffix (normally "o"). objext=$ac_objext # Executable file suffix (normally ""). exeext=$exeext # whether the shell understands "unset". lt_unset=$lt_unset # turn spaces into newlines. SP2NL=$lt_lt_SP2NL # turn newlines into spaces. NL2SP=$lt_lt_NL2SP # convert \$build file names to \$host format. to_host_file_cmd=$lt_cv_to_host_file_cmd # convert \$build files to toolchain format. to_tool_file_cmd=$lt_cv_to_tool_file_cmd # A file(cmd) program that detects file types. FILECMD=$lt_FILECMD # Method to check whether dependent libraries are shared objects. deplibs_check_method=$lt_deplibs_check_method # Command to use when deplibs_check_method = "file_magic". file_magic_cmd=$lt_file_magic_cmd # How to find potential files when deplibs_check_method = "file_magic". file_magic_glob=$lt_file_magic_glob # Find potential files using nocaseglob when deplibs_check_method = "file_magic". want_nocaseglob=$lt_want_nocaseglob # Command to associate shared and link libraries. sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd # The archiver. AR=$lt_AR # Flags to create an archive (by configure). lt_ar_flags=$lt_ar_flags # Flags to create an archive. 
AR_FLAGS=\${ARFLAGS-"\$lt_ar_flags"} # How to feed a file listing to the archiver. archiver_list_spec=$lt_archiver_list_spec # A symbol stripping program. STRIP=$lt_STRIP # Commands used to install an old-style archive. RANLIB=$lt_RANLIB old_postinstall_cmds=$lt_old_postinstall_cmds old_postuninstall_cmds=$lt_old_postuninstall_cmds # Whether to use a lock for old archive extraction. lock_old_archive_extraction=$lock_old_archive_extraction # A C compiler. LTCC=$lt_CC # LTCC compiler flags. LTCFLAGS=$lt_CFLAGS # Take the output of nm and produce a listing of raw symbols and C names. global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe # Transform the output of nm in a proper C declaration. global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl # Transform the output of nm into a list of symbols to manually relocate. global_symbol_to_import=$lt_lt_cv_sys_global_symbol_to_import # Transform the output of nm in a C name address pair. global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address # Transform the output of nm in a C name address pair when lib prefix is needed. global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix # The name lister interface. nm_interface=$lt_lt_cv_nm_interface # Specify filename containing input files for \$NM. nm_file_list_spec=$lt_nm_file_list_spec # The root where to search for dependent libraries,and where our libraries should be installed. lt_sysroot=$lt_sysroot # Command to truncate a binary pipe. lt_truncate_bin=$lt_lt_cv_truncate_bin # The name of the directory that contains temporary libtool files. objdir=$objdir # Used to examine libraries when file_magic_cmd begins with "file". MAGIC_CMD=$MAGIC_CMD # Must we lock files when doing compilation? need_locks=$lt_need_locks # Manifest tool. MANIFEST_TOOL=$lt_MANIFEST_TOOL # Tool to manipulate archived DWARF debug symbol files on Mac OS X. DSYMUTIL=$lt_DSYMUTIL # Tool to change global to local symbols on Mac OS X. NMEDIT=$lt_NMEDIT # Tool to manipulate fat objects and archives on Mac OS X. LIPO=$lt_LIPO # ldd/readelf like tool for Mach-O binaries on Mac OS X. OTOOL=$lt_OTOOL # ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4. OTOOL64=$lt_OTOOL64 # Old archive suffix (normally "a"). libext=$libext # Shared library suffix (normally ".so"). shrext_cmds=$lt_shrext_cmds # The commands to extract the exported symbol list from a shared archive. extract_expsyms_cmds=$lt_extract_expsyms_cmds # Variables whose values should be saved in libtool wrapper scripts and # restored at link time. variables_saved_for_relink=$lt_variables_saved_for_relink # Do we need the "lib" prefix for modules? need_lib_prefix=$need_lib_prefix # Do we need a version for libraries? need_version=$need_version # Library versioning type. version_type=$version_type # Shared library runtime path variable. runpath_var=$runpath_var # Shared library path variable. shlibpath_var=$shlibpath_var # Is shlibpath searched before the hard-coded library search path? shlibpath_overrides_runpath=$shlibpath_overrides_runpath # Format of library name prefix. libname_spec=$lt_libname_spec # List of archive names. First name is the real one, the rest are links. # The last name is the one that the linker finds with -lNAME library_names_spec=$lt_library_names_spec # The coded name of the library, if different from the real name. soname_spec=$lt_soname_spec # Permission mode override for installation of shared libraries. 
install_override_mode=$lt_install_override_mode # Command to use after installation of a shared archive. postinstall_cmds=$lt_postinstall_cmds # Command to use after uninstallation of a shared archive. postuninstall_cmds=$lt_postuninstall_cmds # Commands used to finish a libtool library installation in a directory. finish_cmds=$lt_finish_cmds # As "finish_cmds", except a single script fragment to be evaled but # not shown. finish_eval=$lt_finish_eval # Whether we should hardcode library paths into libraries. hardcode_into_libs=$hardcode_into_libs # Compile-time system search path for libraries. sys_lib_search_path_spec=$lt_sys_lib_search_path_spec # Detected run-time system search path for libraries. sys_lib_dlsearch_path_spec=$lt_configure_time_dlsearch_path # Explicit LT_SYS_LIBRARY_PATH set during ./configure time. configure_time_lt_sys_library_path=$lt_configure_time_lt_sys_library_path # Whether dlopen is supported. dlopen_support=$enable_dlopen # Whether dlopen of programs is supported. dlopen_self=$enable_dlopen_self # Whether dlopen of statically linked programs is supported. dlopen_self_static=$enable_dlopen_self_static # Commands to strip libraries. old_striplib=$lt_old_striplib striplib=$lt_striplib # The linker used to build libraries. LD=$lt_LD # How to create reloadable object files. reload_flag=$lt_reload_flag reload_cmds=$lt_reload_cmds # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds # A language specific compiler. CC=$lt_compiler # Is the compiler the GNU compiler? with_gcc=$GCC # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds archive_expsym_cmds=$lt_archive_expsym_cmds # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds module_expsym_cmds=$lt_module_expsym_cmds # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag # Flag to hardcode \$libdir into a binary during linking. 
# This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \$shlibpath_var if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms # Symbols that must always be exported. include_expsyms=$lt_include_expsyms # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds # Specify filename containing input files. file_list_spec=$lt_file_list_spec # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action # The directories searched by this compiler when creating a shared library. compiler_lib_search_dirs=$lt_compiler_lib_search_dirs # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects postdep_objects=$lt_postdep_objects predeps=$lt_predeps postdeps=$lt_postdeps # The library search path used internally by the compiler when linking # a shared library. 
compiler_lib_search_path=$lt_compiler_lib_search_path # ### END LIBTOOL CONFIG _LT_EOF cat <<'_LT_EOF' >> "$cfgfile" # ### BEGIN FUNCTIONS SHARED WITH CONFIGURE # func_munge_path_list VARIABLE PATH # ----------------------------------- # VARIABLE is name of variable containing _space_ separated list of # directories to be munged by the contents of PATH, which is string # having a format: # "DIR[:DIR]:" # string "DIR[ DIR]" will be prepended to VARIABLE # ":DIR[:DIR]" # string "DIR[ DIR]" will be appended to VARIABLE # "DIRP[:DIRP]::[DIRA:]DIRA" # string "DIRP[ DIRP]" will be prepended to VARIABLE and string # "DIRA[ DIRA]" will be appended to VARIABLE # "DIR[:DIR]" # VARIABLE will be replaced by "DIR[ DIR]" func_munge_path_list () { case x$2 in x) ;; *:) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\" ;; x:*) eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\" ;; *::*) eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\" eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\" ;; *) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\" ;; esac } # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. func_cc_basename () { for cc_temp in $*""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` } # ### END FUNCTIONS SHARED WITH CONFIGURE _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac ltmain=$ac_aux_dir/ltmain.sh # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? $SED '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" cat <<_LT_EOF >> "$ofile" # ### BEGIN LIBTOOL TAG CONFIG: CXX # The linker used to build libraries. LD=$lt_LD_CXX # How to create reloadable object files. reload_flag=$lt_reload_flag_CXX reload_cmds=$lt_reload_cmds_CXX # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds_CXX # A language specific compiler. CC=$lt_compiler_CXX # Is the compiler the GNU compiler? with_gcc=$GCC_CXX # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic_CXX # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl_CXX # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static_CXX # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o_CXX # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc_CXX # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_CXX # Compiler flag to allow reflexive dlopens. 
export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_CXX # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec_CXX # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object_CXX # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_CXX # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_CXX # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds_CXX archive_expsym_cmds=$lt_archive_expsym_cmds_CXX # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds_CXX module_expsym_cmds=$lt_module_expsym_cmds_CXX # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld_CXX # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag_CXX # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag_CXX # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_CXX # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator_CXX # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct_CXX # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \$shlibpath_var if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute_CXX # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L_CXX # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var_CXX # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic_CXX # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath_CXX # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs_CXX # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols_CXX # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds_CXX # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms_CXX # Symbols that must always be exported. include_expsyms=$lt_include_expsyms_CXX # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds_CXX # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds_CXX # Specify filename containing input files. file_list_spec=$lt_file_list_spec_CXX # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action_CXX # The directories searched by this compiler when creating a shared library. 
compiler_lib_search_dirs=$lt_compiler_lib_search_dirs_CXX # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects_CXX postdep_objects=$lt_postdep_objects_CXX predeps=$lt_predeps_CXX postdeps=$lt_postdeps_CXX # The library search path used internally by the compiler when linking # a shared library. compiler_lib_search_path=$lt_compiler_lib_search_path_CXX # ### END LIBTOOL TAG CONFIG: CXX _LT_EOF ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. $ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { printf "%s\n" "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 printf "%s\n" "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi gevent-24.11.1/deps/c-ares/configure.ac000066400000000000000000001015371471441230600175540ustar00rootroot00000000000000dnl Copyright (C) The c-ares project and its contributors dnl SPDX-License-Identifier: MIT AC_PREREQ([2.69]) AC_INIT([c-ares], [1.33.1], [c-ares mailing list: http://lists.haxx.se/listinfo/c-ares]) CARES_VERSION_INFO="20:1:18" dnl This flag accepts an argument of the form current[:revision[:age]]. So, dnl passing -version-info 3:12:1 sets current to 3, revision to 12, and age to dnl 1. dnl dnl If either revision or age are omitted, they default to 0. Also note that age dnl must be less than or equal to the current interface number. dnl dnl Here are a set of rules to help you update your library version information: dnl dnl 1.Start with version information of 0:0:0 for each libtool library. dnl dnl 2.Update the version information only immediately before a public release of dnl your software. More frequent updates are unnecessary, and only guarantee dnl that the current interface number gets larger faster. dnl dnl 3.If the library source code has changed at all since the last update, then dnl increment revision (c:r+1:a) dnl dnl 4.If any interfaces have been added, removed, or changed since the last dnl update, increment current, and set revision to 0. (c+1:r=0:a) dnl dnl 5.If any interfaces have been added since the last public release, then dnl increment age. (c:r:a+1) dnl dnl 6.If any interfaces have been removed since the last public release, then dnl set age to 0. 
(c:r:a=0) dnl AC_SUBST([CARES_VERSION_INFO]) AC_CONFIG_SRCDIR([src/lib/ares_ipv6.h]) AC_CONFIG_HEADERS([src/lib/ares_config.h include/ares_build.h]) AC_CONFIG_AUX_DIR(config) AC_CONFIG_MACRO_DIR([m4]) AC_USE_SYSTEM_EXTENSIONS AX_CXX_COMPILE_STDCXX_14([noext],[optional]) AM_INIT_AUTOMAKE([foreign subdir-objects 1.9.6]) LT_INIT([win32-dll,pic,disable-fast-install,aix-soname=svr4]) AC_LANG([C]) AC_PROG_CC AM_PROG_CC_C_O AC_PROG_EGREP AC_PROG_INSTALL AC_CANONICAL_HOST AX_COMPILER_VENDOR AC_MSG_CHECKING([whether this is native windows]) ac_cv_native_windows=no ac_cv_windows=no case $host_os in mingw*) ac_cv_native_windows=yes ac_cv_windows=yes ;; cygwin*) ac_cv_windows=yes ;; esac if test "$ax_cv_c_compiler_vendor" = "microsoft" ; then ac_cv_native_windows=yes ac_cv_windows=yes fi AC_MSG_RESULT($ac_cv_native_windows) AC_ENABLE_SHARED dnl Disable static builds by default on Windows unless overwritten since Windows dnl can't simultaneously build shared and static with autotools. AS_IF([test "x$ac_cv_windows" = "xyes"], [AC_DISABLE_STATIC], [AC_ENABLE_STATIC]) AC_ARG_ENABLE(warnings, AS_HELP_STRING([--disable-warnings],[Disable strict compiler warnings]), [ enable_warnings=${enableval} ], [ enable_warnings=yes ]) AC_ARG_ENABLE(symbol-hiding, AS_HELP_STRING([--disable-symbol-hiding], [Disable symbol hiding. Enabled by default if the compiler supports it.]), [ symbol_hiding="$enableval" if test "$symbol_hiding" = "no" -a "x$enable_shared" = "xyes" ; then case $host_os in cygwin* | mingw* | pw32* | cegcc*) AC_MSG_ERROR([Cannot disable symbol hiding on windows]) ;; esac fi ], [ if test "x$enable_shared" = "xyes" ; then symbol_hiding="maybe" else symbol_hiding="no" fi ] ) AC_ARG_ENABLE(tests, AS_HELP_STRING([--disable-tests], [disable building of test suite. Built by default if GoogleTest is found.]), [ build_tests="$enableval" ], [ if test "x$HAVE_CXX14" = "x1" && test "x$cross_compiling" = "xno" ; then build_tests="maybe" else build_tests="no" fi ] ) AC_ARG_ENABLE(cares-threads, AS_HELP_STRING([--disable-cares-threads], [Disable building of thread safety support]), [ CARES_THREADS=${enableval} ], [ CARES_THREADS=yes ]) AC_ARG_WITH(random, AS_HELP_STRING([--with-random=FILE], [read randomness from FILE (default=/dev/urandom)]), [ CARES_RANDOM_FILE="$withval" ], [ CARES_RANDOM_FILE="/dev/urandom" ] ) if test -n "$CARES_RANDOM_FILE" && test X"$CARES_RANDOM_FILE" != Xno ; then AC_SUBST(CARES_RANDOM_FILE) AC_DEFINE_UNQUOTED(CARES_RANDOM_FILE, "$CARES_RANDOM_FILE", [a suitable file/device to read random data from]) fi AM_MAINTAINER_MODE m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])]) dnl CARES_DEFINE_UNQUOTED (VARIABLE, [VALUE]) dnl ------------------------------------------------- dnl Like AC_DEFINE_UNQUOTED this macro will define a C preprocessor dnl symbol that can be further used in custom template configuration dnl files. This macro, unlike AC_DEFINE_UNQUOTED, does not use a third dnl argument for the description. Symbol definitions done with this dnl macro are intended to be exclusively used in handcrafted *.h.in dnl template files. Contrary to what AC_DEFINE_UNQUOTED does, this one dnl prevents autoheader generation and insertion of symbol template dnl stub and definition into the first configuration header file. Do dnl not use this macro as a replacement for AC_DEFINE_UNQUOTED, each dnl one serves different functional needs. 
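dnl
dnl Illustrative sketch (editor's note; the example call is copied from later in
dnl this file and the expansion shown is inferred from the macro body below): a
dnl call such as
dnl
dnl   CARES_DEFINE_UNQUOTED([CARES_TYPEOF_ARES_SOCKLEN_T], [socklen_t])
dnl
dnl simply appends
dnl
dnl   #define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t
dnl
dnl to confdefs.h (the value defaults to 1 when the second argument is omitted),
dnl whereas an equivalent AC_DEFINE_UNQUOTED call would additionally register the
dnl symbol with autoheader and emit a template stub into the first configuration
dnl header. Symbols defined this way are therefore only meant to be consumed by
dnl the handcrafted *.h.in templates mentioned above.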
AC_DEFUN([CARES_DEFINE_UNQUOTED], [ cat >>confdefs.h <<_EOF [@%:@define] $1 ifelse($#, 2, [$2], 1) _EOF ]) AX_CODE_COVERAGE AC_SYS_LARGEFILE case $host_os in solaris*) AC_DEFINE(ETC_INET, 1, [if a /etc/inet dir is being used]) ;; esac dnl solaris needed flag case $host_os in solaris2*) if test "x$GCC" = 'xyes'; then AX_APPEND_LINK_FLAGS([-mimpure-text]) fi ;; *) ;; esac dnl -no-undefined libtool (not linker) flag for windows cares_use_no_undefined=no case $host_os in cygwin* | mingw* | pw32* | cegcc* | os2* | aix*) cares_use_no_undefined=yes ;; *) ;; esac AM_CONDITIONAL([CARES_USE_NO_UNDEFINED], [test "$cares_use_no_undefined" = 'yes']) if test "$ac_cv_native_windows" = "yes" ; then AM_CPPFLAGS="$AM_CPPFLAGS -D_WIN32_WINNT=0x0602 -DWIN32_LEAN_AND_MEAN" fi dnl Windows can only build shared or static, not both at the same time if test "$ac_cv_native_windows" = "yes" -a "x$enable_shared" = "xyes" -a "x$enable_static" = "xyes" ; then AC_MSG_ERROR([Windows cannot build both static and shared simultaneously, specify --disable-shared or --disable-static]) fi dnl Only windows requires CARES_STATICLIB definition if test "x$enable_shared" = "xno" -a "x$enable_static" = "xyes" ; then AC_MSG_CHECKING([whether we need CARES_STATICLIB definition]) if test "$ac_cv_native_windows" = "yes" ; then AX_APPEND_FLAG([-DCARES_STATICLIB], [AM_CPPFLAGS]) PKGCONFIG_CFLAGS="-DCARES_STATICLIB" AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi fi dnl Test for symbol hiding CARES_SYMBOL_HIDING_CFLAG="" if test "$symbol_hiding" != "no" ; then compiler_supports_symbol_hiding="no" if test "$ac_cv_windows" = "yes" ; then compiler_supports_symbol_hiding="yes" else case "$ax_cv_c_compiler_vendor" in clang|gnu|intel) AX_APPEND_COMPILE_FLAGS([-fvisibility=hidden], [CARES_SYMBOL_HIDING_CFLAG]) if test "x$CARES_SYMBOL_HIDING_CFLAG" != "x" ; then compiler_supports_symbol_hiding="yes" fi ;; sun) AX_APPEND_COMPILE_FLAGS([-xldscope=hidden], [CARES_SYMBOL_HIDING_CFLAG]) if test "x$CARES_SYMBOL_HIDING_CFLAG" != "x" ; then compiler_supports_symbol_hiding="yes" fi ;; esac fi if test "$compiler_supports_symbol_hiding" = "no" ; then if test "$symbol_hiding" = "yes" ; then AC_MSG_ERROR([Compiler does not support symbol hiding]) else symbol_hiding="no" fi else AC_DEFINE([CARES_SYMBOL_HIDING], [ 1 ], [Set to 1 if non-pubilc shared library symbols are hidden]) symbol_hiding="yes" fi fi AM_CONDITIONAL(CARES_SYMBOL_HIDING, test "x$symbol_hiding" = "xyes") AC_SUBST(CARES_SYMBOL_HIDING_CFLAG) if test "$enable_warnings" = "yes"; then AX_APPEND_COMPILE_FLAGS([-Wall -Wextra -Waggregate-return -Wcast-align -Wcast-qual -Wconversion -Wdeclaration-after-statement -Wdouble-promotion -Wfloat-equal -Wformat-security -Winit-self -Wjump-misses-init -Wlogical-op -Wmissing-braces -Wmissing-declarations -Wmissing-format-attribute -Wmissing-include-dirs -Wmissing-prototypes -Wnested-externs -Wno-coverage-mismatch -Wold-style-definition -Wpacked -Wpedantic -Wpointer-arith -Wredundant-decls -Wshadow -Wsign-conversion -Wstrict-overflow -Wstrict-prototypes -Wtrampolines -Wundef -Wunreachable-code -Wunused -Wvariadic-macros -Wvla -Wwrite-strings -Werror=implicit-int -Werror=implicit-function-declaration -Werror=partial-availability -Wno-long-long ], [AM_CFLAGS], [-Werror]) dnl Android requires c99, all others should use c90 case $host_os in *android*) AX_APPEND_COMPILE_FLAGS([-std=c99], [AM_CFLAGS], [-Werror]) ;; *) AX_APPEND_COMPILE_FLAGS([-std=c90], [AM_CFLAGS], [-Werror]) ;; esac fi if test "$ax_cv_c_compiler_vendor" = "intel"; then 
AX_APPEND_COMPILE_FLAGS([-shared-intel], [AM_CFLAGS]) fi if test "$ac_cv_native_windows" = "yes" ; then dnl we use [ - ] in the 4th argument to tell AC_CHECK_HEADERS to simply dnl check for existence of the headers, not usability. This is because dnl on windows, header order matters, and you need to include headers *after* dnl other headers, AC_CHECK_HEADERS only allows you to specify headers that dnl must be included *before* the header being checked. AC_CHECK_HEADERS([windows.h winsock2.h ws2tcpip.h iphlpapi.h netioapi.h ws2ipdef.h winternl.h ntdef.h ntstatus.h mswsock.h ], [], [], [-]) dnl Windows builds require linking to iphlpapi if test "$ac_cv_header_winsock2_h" = "yes"; then LIBS="$LIBS -lws2_32 -liphlpapi" fi fi dnl ********************************************************************** dnl Checks for libraries. dnl ********************************************************************** dnl see if libnsl or libsocket are required AC_SEARCH_LIBS([getservbyport], [nsl socket resolv]) AC_MSG_CHECKING([if libxnet is required]) need_xnet=no case $host_os in hpux*) XNET_LIBS="" AX_APPEND_LINK_FLAGS([-lxnet], [XNET_LIBS]) if test "x$XNET_LIBS" != "x" ; then LIBS="$LIBS $XNET_LIBS" need_xnet=yes fi ;; esac AC_MSG_RESULT($need_xnet) dnl resolv lib for z/OS AS_IF([test "x$host_vendor" = "xibm" -a "x$host_os" = "xopenedition" ], [ AC_SEARCH_LIBS([res_init], [resolv], [ AC_DEFINE([CARES_USE_LIBRESOLV], [1], [Use resolver library to configure cares]) ], [ AC_MSG_ERROR([Unable to find libresolv which is required for z/OS]) ]) ]) dnl iOS 10? AS_IF([test "x$host_vendor" = "xapple"], [ AC_MSG_CHECKING([for iOS minimum version 10 or later]) AC_COMPILE_IFELSE([ AC_LANG_PROGRAM([[ #include #include #include ]], [[ #if TARGET_OS_IPHONE == 0 || __IPHONE_OS_VERSION_MIN_REQUIRED < 100000 #error Not iOS 10 or later #endif return 0; ]]) ],[ AC_MSG_RESULT([yes]) ac_cv_ios_10="yes" ],[ AC_MSG_RESULT([no]) ]) ]) dnl macOS 10.12? AS_IF([test "x$host_vendor" = "xapple"], [ AC_MSG_CHECKING([for macOS minimum version 10.12 or later]) AC_COMPILE_IFELSE([ AC_LANG_PROGRAM([[ #include #include #include ]], [[ #ifndef MAC_OS_X_VERSION_10_12 # define MAC_OS_X_VERSION_10_12 101200 #endif #if MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_12 #error Not macOS 10.12 or later #endif return 0; ]]) ],[ AC_MSG_RESULT([yes]) ac_cv_macos_10_12="yes" ],[ AC_MSG_RESULT([no]) ]) ]) AC_MSG_CHECKING([whether to use libgcc]) AC_ARG_ENABLE(libgcc, AS_HELP_STRING([--enable-libgcc],[use libgcc when linking]), [ case "$enableval" in yes) LIBS="$LIBS -lgcc" AC_MSG_RESULT(yes) ;; *) AC_MSG_RESULT(no) ;; esac ], AC_MSG_RESULT(no) ) dnl check for a few basic system headers we need. It would be nice if we could dnl split these on separate lines, but for some reason autotools on Windows doesn't dnl allow this, even tried ending lines with a backslash. AC_CHECK_HEADERS([malloc.h memory.h AvailabilityMacros.h sys/types.h sys/time.h sys/select.h sys/socket.h sys/filio.h sys/ioctl.h sys/param.h sys/uio.h sys/random.h sys/event.h sys/epoll.h assert.h iphlpapi.h netioapi.h netdb.h netinet/in.h netinet6/in6.h netinet/tcp.h net/if.h ifaddrs.h fcntl.h errno.h socket.h strings.h stdbool.h time.h poll.h limits.h arpa/nameser.h arpa/nameser_compat.h arpa/inet.h ], dnl to do if not found [], dnl to do if found [], dnl default includes [ #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_TIME_H #include #endif dnl We do this default-include simply to make sure that the nameser_compat.h dnl header *REALLY* can be include after the new nameser.h. 
It seems AIX 5.1 dnl (and others?) is not designed to allow this. #ifdef HAVE_ARPA_NAMESER_H #include #endif dnl *Sigh* these are needed in order for net/if.h to get properly detected. #ifdef HAVE_SYS_SOCKET_H #include #endif #ifdef HAVE_NETINET_IN_H #include #endif ] ) cares_all_includes=" #include #include #ifdef HAVE_AVAILABILITYMACROS_H # include #endif #ifdef HAVE_SYS_UIO_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_TCP_H # include #endif #ifdef HAVE_SYS_FILIO_H # include #endif #ifdef HAVE_SYS_IOCTL_H # include #endif #ifdef HAVE_UNISTD_H # include #endif #ifdef HAVE_STRING_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_TIME_H # include #endif #ifdef HAVE_SYS_TIME_H # include #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef HAVE_SYS_RANDOM_H # include #endif #ifdef HAVE_SYS_EVENT_H # include #endif #ifdef HAVE_SYS_EPOLL_H # include #endif #ifdef HAVE_SYS_SOCKET_H # include #endif #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_FCNTL_H # include #endif #ifdef HAVE_POLL_H # include #endif #ifdef HAVE_NET_IF_H # include #endif #ifdef HAVE_IFADDRS_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETINET_TCP_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_RESOLV_H # include #endif #ifdef HAVE_IPHLPAPI_H # include #endif #ifdef HAVE_NETIOAPI_H # include #endif #ifdef HAVE_WINSOCK2_H # include #endif #ifdef HAVE_WS2IPDEF_H # include #endif #ifdef HAVE_WS2TCPIP_H # include #endif #ifdef HAVE_WINDOWS_H # include #endif " AC_CHECK_DECL([HAVE_ARPA_NAMESER_H],[CARES_DEFINE_UNQUOTED([CARES_HAVE_ARPA_NAMESER_H])], []) AC_CHECK_DECL([HAVE_ARPA_NAMESER_COMPAT_H],[CARES_DEFINE_UNQUOTED([CARES_HAVE_ARPA_NAMESER_COMPAT_H])],[]) AC_CHECK_TYPE(long long, [AC_DEFINE(HAVE_LONGLONG, 1, [Define to 1 if the compiler supports the 'long long' data type.])]) AC_CHECK_TYPE(ssize_t, [ CARES_TYPEOF_ARES_SSIZE_T=ssize_t ], [ CARES_TYPEOF_ARES_SSIZE_T=int ]) AC_DEFINE_UNQUOTED([CARES_TYPEOF_ARES_SSIZE_T], ${CARES_TYPEOF_ARES_SSIZE_T}, [the signed version of size_t]) AC_CHECK_TYPE(socklen_t, [ AC_DEFINE(HAVE_SOCKLEN_T, [], [socklen_t]) CARES_DEFINE_UNQUOTED([CARES_TYPEOF_ARES_SOCKLEN_T], [socklen_t]) ], [ CARES_DEFINE_UNQUOTED([CARES_TYPEOF_ARES_SOCKLEN_T], [int]) ], $cares_all_includes ) AC_CHECK_TYPE(SOCKET, [], [], $cares_all_includes) dnl ############################################################################### dnl clock_gettime might require an external library AC_SEARCH_LIBS([clock_gettime], [rt posix4]) dnl Use AC_CHECK_DECL not AC_CHECK_FUNCS, while this doesn't do a linkage test, dnl it just makes sure the headers define it, this is the only thing without dnl a complex workaround on Windows that will do what we need. See: dnl https://github.com/msys2/msys2/wiki/Porting/f87a222118b1008ebc166ad237f04edb759c8f4c#calling-conventions-stdcall-and-autotools dnl https://lists.gnu.org/archive/html/autoconf/2013-05/msg00085.html dnl and for a more complex workaround, we'd need to use AC_LINK_IFELSE like: dnl https://mailman.videolan.org/pipermail/vlc-devel/2015-March/101802.html dnl which would require we check each individually and provide function arguments dnl for the test. 
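dnl
dnl Illustrative sketch (editor's note; the exact conftest shape is an assumption
dnl about autoconf internals): each AC_CHECK_DECL below only tries to *compile* a
dnl tiny program against $cares_all_includes, roughly of the form
dnl
dnl   /* ...$cares_all_includes expanded here... */
dnl   int main (void)
dnl   {
dnl   #ifndef recv
dnl     (void) recv;
dnl   #endif
dnl     return 0;
dnl   }
dnl
dnl so the check succeeds whenever the headers declare the symbol and never has
dnl to link against the Windows socket import libraries, which is where the
dnl stdcall calling-convention issues referenced above would otherwise arise.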
AC_CHECK_DECL(recv, [AC_DEFINE([HAVE_RECV], 1, [Define to 1 if you have `recv`] )], [], $cares_all_includes) AC_CHECK_DECL(recvfrom, [AC_DEFINE([HAVE_RECVFROM], 1, [Define to 1 if you have `recvfrom`] )], [], $cares_all_includes) AC_CHECK_DECL(send, [AC_DEFINE([HAVE_SEND], 1, [Define to 1 if you have `send`] )], [], $cares_all_includes) AC_CHECK_DECL(getnameinfo, [AC_DEFINE([HAVE_GETNAMEINFO], 1, [Define to 1 if you have `getnameinfo`] )], [], $cares_all_includes) AC_CHECK_DECL(gethostname, [AC_DEFINE([HAVE_GETHOSTNAME], 1, [Define to 1 if you have `gethostname`] )], [], $cares_all_includes) AC_CHECK_DECL(connect, [AC_DEFINE([HAVE_CONNECT], 1, [Define to 1 if you have `connect`] )], [], $cares_all_includes) AC_CHECK_DECL(connectx, [AC_DEFINE([HAVE_CONNECTX], 1, [Define to 1 if you have `connectx`] )], [], $cares_all_includes) AC_CHECK_DECL(closesocket, [AC_DEFINE([HAVE_CLOSESOCKET], 1, [Define to 1 if you have `closesocket`] )], [], $cares_all_includes) AC_CHECK_DECL(CloseSocket, [AC_DEFINE([HAVE_CLOSESOCKET_CAMEL], 1, [Define to 1 if you have `CloseSocket`] )], [], $cares_all_includes) AC_CHECK_DECL(fcntl, [AC_DEFINE([HAVE_FCNTL], 1, [Define to 1 if you have `fcntl`] )], [], $cares_all_includes) AC_CHECK_DECL(getenv, [AC_DEFINE([HAVE_GETENV], 1, [Define to 1 if you have `getenv`] )], [], $cares_all_includes) AC_CHECK_DECL(gethostname, [AC_DEFINE([HAVE_GETHOSTNAME], 1, [Define to 1 if you have `gethostname`] )], [], $cares_all_includes) AC_CHECK_DECL(getrandom, [AC_DEFINE([HAVE_GETRANDOM], 1, [Define to 1 if you have `getrandom`] )], [], $cares_all_includes) AC_CHECK_DECL(getservbyport_r, [AC_DEFINE([HAVE_GETSERVBYPORT_R], 1, [Define to 1 if you have `getservbyport_r`])], [], $cares_all_includes) AC_CHECK_DECL(inet_net_pton, [AC_DEFINE([HAVE_INET_NET_PTON], 1, [Define to 1 if you have `inet_net_pton`] )], [], $cares_all_includes) AC_CHECK_DECL(inet_ntop, [AC_DEFINE([HAVE_INET_NTOP], 1, [Define to 1 if you have `inet_ntop`] )], [], $cares_all_includes) AC_CHECK_DECL(inet_pton, [AC_DEFINE([HAVE_INET_PTON], 1, [Define to 1 if you have `inet_pton`] )], [], $cares_all_includes) AC_CHECK_DECL(ioctl, [AC_DEFINE([HAVE_IOCTL], 1, [Define to 1 if you have `ioctl`] )], [], $cares_all_includes) AC_CHECK_DECL(ioctlsocket, [AC_DEFINE([HAVE_IOCTLSOCKET], 1, [Define to 1 if you have `ioctlsocket`] )], [], $cares_all_includes) AC_CHECK_DECL(IoctlSocket, [AC_DEFINE([HAVE_IOCTLSOCKET_CAMEL], 1, [Define to 1 if you have `IoctlSocket`] )], [], $cares_all_includes) AC_CHECK_DECL(setsockopt, [AC_DEFINE([HAVE_SETSOCKOPT], 1, [Define to 1 if you have `setsockopt`] )], [], $cares_all_includes) AC_CHECK_DECL(socket, [AC_DEFINE([HAVE_SOCKET], 1, [Define to 1 if you have `socket`] )], [], $cares_all_includes) AC_CHECK_DECL(strcasecmp, [AC_DEFINE([HAVE_STRCASECMP], 1, [Define to 1 if you have `strcasecmp`] )], [], $cares_all_includes) AC_CHECK_DECL(strdup, [AC_DEFINE([HAVE_STRDUP], 1, [Define to 1 if you have `strdup`] )], [], $cares_all_includes) AC_CHECK_DECL(stricmp, [AC_DEFINE([HAVE_STRICMP], 1, [Define to 1 if you have `stricmp`] )], [], $cares_all_includes) AC_CHECK_DECL(strncasecmp, [AC_DEFINE([HAVE_STRNCASECMP], 1, [Define to 1 if you have `strncasecmp`] )], [], $cares_all_includes) AC_CHECK_DECL(strncmpi, [AC_DEFINE([HAVE_STRNCMPI], 1, [Define to 1 if you have `strncmpi`] )], [], $cares_all_includes) AC_CHECK_DECL(strnicmp, [AC_DEFINE([HAVE_STRNICMP], 1, [Define to 1 if you have `strnicmp`] )], [], $cares_all_includes) AC_CHECK_DECL(writev, [AC_DEFINE([HAVE_WRITEV], 1, [Define to 1 if you have `writev`] )], 
[], $cares_all_includes) AC_CHECK_DECL(arc4random_buf, [AC_DEFINE([HAVE_ARC4RANDOM_BUF], 1, [Define to 1 if you have `arc4random_buf`] )], [], $cares_all_includes) AC_CHECK_DECL(stat, [AC_DEFINE([HAVE_STAT], 1, [Define to 1 if you have `stat`] )], [], $cares_all_includes) AC_CHECK_DECL(gettimeofday, [AC_DEFINE([HAVE_GETTIMEOFDAY], 1, [Define to 1 if you have `gettimeofday`] )], [], $cares_all_includes) AC_CHECK_DECL(clock_gettime, [AC_DEFINE([HAVE_CLOCK_GETTIME], 1, [Define to 1 if you have `clock_gettime`] )], [], $cares_all_includes) AC_CHECK_DECL(if_indextoname, [AC_DEFINE([HAVE_IF_INDEXTONAME], 1, [Define to 1 if you have `if_indextoname`] )], [], $cares_all_includes) AC_CHECK_DECL(if_nametoindex, [AC_DEFINE([HAVE_IF_NAMETOINDEX], 1, [Define to 1 if you have `if_nametoindex`] )], [], $cares_all_includes) AC_CHECK_DECL(getifaddrs, [AC_DEFINE([HAVE_GETIFADDRS], 1, [Define to 1 if you have `getifaddrs`] )], [], $cares_all_includes) AC_CHECK_DECL(poll, [AC_DEFINE([HAVE_POLL], 1, [Define to 1 if you have `poll`] )], [], $cares_all_includes) AC_CHECK_DECL(pipe, [AC_DEFINE([HAVE_PIPE], 1, [Define to 1 if you have `pipe`] )], [], $cares_all_includes) AC_CHECK_DECL(pipe2, [AC_DEFINE([HAVE_PIPE2], 1, [Define to 1 if you have `pipe2`] )], [], $cares_all_includes) AC_CHECK_DECL(kqueue, [AC_DEFINE([HAVE_KQUEUE], 1, [Define to 1 if you have `kqueue`] )], [], $cares_all_includes) AC_CHECK_DECL(epoll_create1, [AC_DEFINE([HAVE_EPOLL], 1, [Define to 1 if you have `epoll_{create1,ctl,wait}`])], [], $cares_all_includes) AC_CHECK_DECL(ConvertInterfaceIndexToLuid, [AC_DEFINE([HAVE_CONVERTINTERFACEINDEXTOLUID], 1, [Define to 1 if you have `ConvertInterfaceIndexToLuid`])], [], $cares_all_includes) AC_CHECK_DECL(ConvertInterfaceLuidToNameA, [AC_DEFINE([HAVE_CONVERTINTERFACELUIDTONAMEA], 1, [Define to 1 if you have `ConvertInterfaceLuidToNameA`])], [], $cares_all_includes) AC_CHECK_DECL(NotifyIpInterfaceChange, [AC_DEFINE([HAVE_NOTIFYIPINTERFACECHANGE], 1, [Define to 1 if you have `NotifyIpInterfaceChange`] )], [], $cares_all_includes) AC_CHECK_DECL(RegisterWaitForSingleObject, [AC_DEFINE([HAVE_REGISTERWAITFORSINGLEOBJECT], 1, [Define to 1 if you have `RegisterWaitForSingleObject`])], [], $cares_all_includes) AC_CHECK_DECL(__system_property_get, [AC_DEFINE([HAVE___SYSTEM_PROPERTY_GET], 1, [Define to 1 if you have `__system_property_get`] )], [], $cares_all_includes) dnl ############################################################################### dnl recv, recvfrom, send, getnameinfo, gethostname dnl ARGUMENTS AND RETURN VALUES if test "x$ac_cv_type_ssize_t" = "xyes" -a "x$ac_cv_type_socklen_t" = "xyes" -a "x$ac_cv_native_windows" != "xyes" ; then recvfrom_type_retv="ssize_t" recvfrom_type_arg3="size_t" else recvfrom_type_retv="int" recvfrom_type_arg3="int" fi if test "x$ac_cv_type_SOCKET" = "xyes" ; then dnl If the SOCKET type is defined, it uses socket ... should be windows only recvfrom_type_arg1="SOCKET" else recvfrom_type_arg1="int" fi if test "x$ac_cv_type_socklen_t" = "xyes" ; then recvfrom_type_arg6="socklen_t *" getnameinfo_type_arg2="socklen_t" getnameinfo_type_arg46="socklen_t" else recvfrom_type_arg6="int *" getnameinfo_type_arg2="int" getnameinfo_type_arg46="int" fi if test "x$ac_cv_native_windows" = "xyes" ; then recv_type_arg2="char *" else recv_type_arg2="void *" fi dnl Functions are typically consistent so the equivalent fields map ... 
equivalently recv_type_retv=${recvfrom_type_retv} send_type_retv=${recvfrom_type_retv} recv_type_arg1=${recvfrom_type_arg1} recvfrom_type_arg2=${recv_type_arg2} send_type_arg1=${recvfrom_type_arg1} recv_type_arg3=${recvfrom_type_arg3} send_type_arg3=${recvfrom_type_arg3} gethostname_type_arg2=${recvfrom_type_arg3} dnl These should always be "sane" values to use always recvfrom_qual_arg5= recvfrom_type_arg4=int recvfrom_type_arg5="struct sockaddr *" recv_type_arg4=int getnameinfo_type_arg1="struct sockaddr *" getnameinfo_type_arg7=int send_type_arg2="const void *" send_type_arg4=int AC_DEFINE_UNQUOTED([RECVFROM_TYPE_RETV], [ ${recvfrom_type_retv} ], [ recvfrom() return value ]) AC_DEFINE_UNQUOTED([RECVFROM_TYPE_ARG1], [ ${recvfrom_type_arg1} ], [ recvfrom() arg1 type ]) AC_DEFINE_UNQUOTED([RECVFROM_TYPE_ARG2], [ ${recvfrom_type_arg2} ], [ recvfrom() arg2 type ]) AC_DEFINE_UNQUOTED([RECVFROM_TYPE_ARG3], [ ${recvfrom_type_arg3} ], [ recvfrom() arg3 type ]) AC_DEFINE_UNQUOTED([RECVFROM_TYPE_ARG4], [ ${recvfrom_type_arg4} ], [ recvfrom() arg4 type ]) AC_DEFINE_UNQUOTED([RECVFROM_TYPE_ARG5], [ ${recvfrom_type_arg5} ], [ recvfrom() arg5 type ]) AC_DEFINE_UNQUOTED([RECVFROM_QUAL_ARG5], [ ${recvfrom_qual_arg5}], [ recvfrom() arg5 qualifier]) AC_DEFINE_UNQUOTED([RECV_TYPE_RETV], [ ${recv_type_retv} ], [ recv() return value ]) AC_DEFINE_UNQUOTED([RECV_TYPE_ARG1], [ ${recv_type_arg1} ], [ recv() arg1 type ]) AC_DEFINE_UNQUOTED([RECV_TYPE_ARG2], [ ${recv_type_arg2} ], [ recv() arg2 type ]) AC_DEFINE_UNQUOTED([RECV_TYPE_ARG3], [ ${recv_type_arg3} ], [ recv() arg3 type ]) AC_DEFINE_UNQUOTED([RECV_TYPE_ARG4], [ ${recv_type_arg4} ], [ recv() arg4 type ]) AC_DEFINE_UNQUOTED([SEND_TYPE_RETV], [ ${send_type_retv} ], [ send() return value ]) AC_DEFINE_UNQUOTED([SEND_TYPE_ARG1], [ ${send_type_arg1} ], [ send() arg1 type ]) AC_DEFINE_UNQUOTED([SEND_TYPE_ARG2], [ ${send_type_arg2} ], [ send() arg2 type ]) AC_DEFINE_UNQUOTED([SEND_TYPE_ARG3], [ ${send_type_arg3} ], [ send() arg3 type ]) AC_DEFINE_UNQUOTED([SEND_TYPE_ARG4], [ ${send_type_arg4} ], [ send() arg4 type ]) AC_DEFINE_UNQUOTED([GETNAMEINFO_TYPE_ARG1], [ ${getnameinfo_type_arg1} ], [ getnameinfo() arg1 type ]) AC_DEFINE_UNQUOTED([GETNAMEINFO_TYPE_ARG2], [ ${getnameinfo_type_arg2} ], [ getnameinfo() arg2 type ]) AC_DEFINE_UNQUOTED([GETNAMEINFO_TYPE_ARG7], [ ${getnameinfo_type_arg7} ], [ getnameinfo() arg7 type ]) AC_DEFINE_UNQUOTED([GETNAMEINFO_TYPE_ARG46], [ ${getnameinfo_type_arg46} ], [ getnameinfo() arg4 and 6 type ]) AC_DEFINE_UNQUOTED([GETHOSTNAME_TYPE_ARG2], [ ${gethostname_type_arg2} ], [ gethostname() arg2 type ]) dnl ############################################################################### if test "$ac_cv_have_decl_getservbyport_r" = "yes" ; then AC_MSG_CHECKING([number of arguments for getservbyport_r()]) getservbyport_r_args=6 case $host_os in solaris*) getservbyport_r_args=5 ;; aix*|openbsd*) getservbyport_r_args=4 ;; esac AC_MSG_RESULT([$getservbyport_r_args]) AC_DEFINE_UNQUOTED([GETSERVBYPORT_R_ARGS], [ $getservbyport_r_args ], [ number of arguments for getservbyport_r() ]) fi if test "$ac_cv_have_decl_getservbyname_r" = "yes" ; then AC_MSG_CHECKING([number of arguments for getservbyname_r()]) getservbyname_r_args=6 case $host_os in solaris*) getservbyname_r_args=5 ;; aix*|openbsd*) getservbyname_r_args=4 ;; esac AC_DEFINE_UNQUOTED([GETSERVBYNAME_R_ARGS], [ $getservbyname_r_args ], [ number of arguments for getservbyname_r() ]) AC_MSG_RESULT([$getservbyname_r_args]) fi AC_TYPE_SIZE_T AC_CHECK_DECL(AF_INET6, 
[AC_DEFINE([HAVE_AF_INET6],1,[Define to 1 if you have AF_INET6])], [], $cares_all_includes) AC_CHECK_DECL(PF_INET6, [AC_DEFINE([HAVE_PF_INET6],1,[Define to 1 if you have PF_INET6])], [], $cares_all_includes) AC_CHECK_TYPES(struct in6_addr, [], [], $cares_all_includes) AC_CHECK_TYPES(struct sockaddr_in6, [], [], $cares_all_includes) AC_CHECK_TYPES(struct sockaddr_storage, [], [], $cares_all_includes) AC_CHECK_TYPES(struct addrinfo, [], [], $cares_all_includes) AC_CHECK_TYPES(struct timeval, [], [], $cares_all_includes) AC_CHECK_MEMBERS(struct sockaddr_in6.sin6_scope_id, [], [], $cares_all_includes) AC_CHECK_MEMBERS(struct addrinfo.ai_flags, [], [], $cares_all_includes) AC_CHECK_DECL(FIONBIO, [], [], $cares_all_includes) AC_CHECK_DECL(O_NONBLOCK, [], [], $cares_all_includes) AC_CHECK_DECL(SO_NONBLOCK, [], [], $cares_all_includes) AC_CHECK_DECL(MSG_NOSIGNAL, [], [], $cares_all_includes) AC_CHECK_DECL(CLOCK_MONOTONIC, [], [], $cares_all_includes) if test "$ac_cv_have_decl_CLOCK_MONOTONIC" = "yes" -a "$ac_cv_have_decl_clock_gettime" = "yes" ; then AC_DEFINE([HAVE_CLOCK_GETTIME_MONOTONIC], [ 1 ], [ clock_gettime() with CLOCK_MONOTONIC support ]) fi if test "$ac_cv_have_decl_FIONBIO" = "yes" -a "$ac_cv_have_decl_ioctl" = "yes" ; then AC_DEFINE([HAVE_IOCTL_FIONBIO], [ 1 ], [ ioctl() with FIONBIO support ]) fi if test "$ac_cv_have_decl_FIONBIO" = "yes" -a "$ac_cv_have_decl_ioctlsocket" = "yes" ; then AC_DEFINE([HAVE_IOCTLSOCKET_FIONBIO], [ 1 ], [ ioctlsocket() with FIONBIO support ]) fi if test "$ac_cv_have_decl_SO_NONBLOCK" = "yes" -a "$ac_cv_have_decl_setsockopt" = "yes" ; then AC_DEFINE([HAVE_SETSOCKOPT_SO_NONBLOCK], [ 1 ], [ setsockopt() with SO_NONBLOCK support ]) fi if test "$ac_cv_have_decl_O_NONBLOCK" = "yes" -a "$ac_cv_have_decl_fcntl" = "yes" ; then AC_DEFINE([HAVE_FCNTL_O_NONBLOCK], [ 1 ], [ fcntl() with O_NONBLOCK support ]) fi dnl ares_build.h.in specific defines if test "x$ac_cv_header_sys_types_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_SYS_TYPES_H],[1]) fi if test "x$ac_cv_header_sys_socket_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_SYS_SOCKET_H],[1]) fi if test "x$ac_cv_header_sys_select_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_SYS_SELECT_H],[1]) fi if test "x$ac_cv_header_ws2tcpip_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_WS2TCPIP_H],[1]) fi if test "x$ac_cv_header_winsock2_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_WINSOCK2_H],[1]) fi if test "x$ac_cv_header_windows_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_WINDOWS_H],[1]) fi if test "x$ac_cv_header_arpa_nameser_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_ARPA_NAMESER_H],[1]) fi if test "x$ac_cv_header_arpa_nameser_compat_h" = "xyes" ; then CARES_DEFINE_UNQUOTED([CARES_HAVE_ARPA_NAMESER_COMPAT_H],[1]) fi dnl ------------ THREADING -------------- dnl windows always supports threads, only check non-windows systems.
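dnl A minimal, hedged illustration (not part of the upstream configure.ac): when
dnl the checks below succeed, CARES_THREADS is AC_DEFINEd to 1 and the library's
dnl internal locking wrappers compile down to real pthread calls on non-Windows
dnl systems, roughly along these lines:
dnl   #ifdef CARES_THREADS
dnl   #  include <pthread.h>
dnl      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
dnl      pthread_mutex_lock(&lock);
dnl      /* ... touch shared state ... */
dnl      pthread_mutex_unlock(&lock);
dnl   #else
dnl      /* threading disabled: the same wrappers reduce to no-ops */
dnl   #endif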
if test "${CARES_THREADS}" = "yes" -a "x${ac_cv_native_windows}" != "xyes" ; then AX_PTHREAD([ ], [ AC_MSG_WARN([threads requested but not supported]) CARES_THREADS=no ]) if test "${CARES_THREADS}" = "yes" ; then AC_CHECK_HEADERS([pthread.h pthread_np.h]) LIBS="$PTHREAD_LIBS $LIBS" AM_CFLAGS="$AM_CFLAGS $PTHREAD_CFLAGS" CC="$PTHREAD_CC" CXX="$PTHREAD_CXX" fi fi if test "${CARES_THREADS}" = "yes" ; then AC_DEFINE([CARES_THREADS], [ 1 ], [Threading enabled]) fi CARES_PRIVATE_LIBS="$LIBS" AC_SUBST(CARES_PRIVATE_LIBS) BUILD_SUBDIRS="include src" dnl ******** TESTS ******* if test "x$build_tests" != "xno" -a "x$HAVE_CXX14" = "0" ; then if test "x$build_tests" = "xmaybe" ; then AC_MSG_WARN([cannot build tests without a CXX14 compiler]) build_tests=no else AC_MSG_ERROR([*** Building tests requires a CXX14 compiler]) fi fi if test "x$build_tests" != "xno" -a "x$cross_compiling" = "xyes" ; then if test "x$build_tests" = "xmaybe" ; then AC_MSG_WARN([cannot build tests when cross compiling]) build_tests=no else AC_MSG_ERROR([*** Tests not supported when cross compiling]) fi fi if test "x$build_tests" != "xno" ; then PKG_CHECK_MODULES([GMOCK], [gmock], [ have_gmock=yes ], [ have_gmock=no ]) if test "x$have_gmock" = "xno" ; then if test "x$build_tests" = "xmaybe" ; then AC_MSG_WARN([gmock could not be found, not building tests]) build_tests=no else AC_MSG_ERROR([tests require gmock]) fi else PKG_CHECK_MODULES([GMOCK112], [gmock >= 1.12.0], [ have_gmock_v112=yes ], [ have_gmock_v112=no ]) if test "x$have_gmock_v112" = "xyes" ; then AX_CHECK_USER_NAMESPACE AX_CHECK_UTS_NAMESPACE fi fi fi if test "x$build_tests" != "xno" ; then build_tests=yes AX_CXX_COMPILE_STDCXX_14([noext],[mandatory]) if test "$ac_cv_native_windows" != "yes" ; then AX_PTHREAD([ CARES_TEST_PTHREADS="yes" ], [ AC_MSG_ERROR([threading required for tests]) ]) fi BUILD_SUBDIRS="${BUILD_SUBDIRS} test" fi AC_MSG_CHECKING([whether to build tests]) AC_MSG_RESULT([$build_tests]) AM_CONDITIONAL(BUILD_TESTS, test "x$build_tests" = "xyes") AC_SUBST(AM_CFLAGS) AC_SUBST(AM_CPPFLAGS) AC_SUBST(PKGCONFIG_CFLAGS) AC_SUBST(BUILD_SUBDIRS) AC_CONFIG_FILES([Makefile include/Makefile src/Makefile src/lib/Makefile src/tools/Makefile libcares.pc ]) AM_COND_IF([BUILD_TESTS], [AC_CONFIG_FILES([test/Makefile])]) AC_OUTPUT gevent-24.11.1/deps/c-ares/include/000077500000000000000000000000001471441230600167025ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/include/CMakeLists.txt000066400000000000000000000007641471441230600214510ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT # Write ares_build.h configuration file. This is an installed file. 
CONFIGURE_FILE (ares_build.h.cmake ${PROJECT_BINARY_DIR}/ares_build.h) # Headers installation target IF (CARES_INSTALL) SET (CARES_HEADERS ares.h ares_version.h "${PROJECT_BINARY_DIR}/ares_build.h" ares_dns.h ares_dns_record.h ares_nameser.h) INSTALL (FILES ${CARES_HEADERS} COMPONENT Devel DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}) ENDIF () gevent-24.11.1/deps/c-ares/include/Makefile.am000066400000000000000000000005541471441230600207420ustar00rootroot00000000000000# Copyright (C) Daniel Stenberg # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign nostdinc 1.9.6 ACLOCAL_AMFLAGS = -I m4 --install # what headers to install on 'make install': include_HEADERS = ares.h ares_version.h ares_build.h ares_dns.h ares_dns_record.h ares_nameser.h EXTRA_DIST = ares_build.h.cmake ares_build.h.in ares_build.h.dist CMakeLists.txt gevent-24.11.1/deps/c-ares/include/Makefile.in000066400000000000000000000460661471441230600207630ustar00rootroot00000000000000# Makefile.in generated by automake 1.17 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2024 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) am__rm_f = rm -f $(am__rm_f_notfound) am__rm_rf = rm -rf $(am__rm_f_notfound) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = include ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ax_ac_append_to_file.m4 \ $(top_srcdir)/m4/ax_ac_print_to_file.m4 \ $(top_srcdir)/m4/ax_add_am_macro_static.m4 \ $(top_srcdir)/m4/ax_am_macros_static.m4 \ $(top_srcdir)/m4/ax_append_compile_flags.m4 \ $(top_srcdir)/m4/ax_append_flag.m4 \ $(top_srcdir)/m4/ax_append_link_flags.m4 \ $(top_srcdir)/m4/ax_check_compile_flag.m4 \ $(top_srcdir)/m4/ax_check_gnu_make.m4 \ $(top_srcdir)/m4/ax_check_link_flag.m4 \ $(top_srcdir)/m4/ax_check_user_namespace.m4 \ $(top_srcdir)/m4/ax_check_uts_namespace.m4 \ $(top_srcdir)/m4/ax_code_coverage.m4 \ $(top_srcdir)/m4/ax_compiler_vendor.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx_14.m4 \ $(top_srcdir)/m4/ax_file_escapes.m4 \ $(top_srcdir)/m4/ax_pthread.m4 \ $(top_srcdir)/m4/ax_require_defined.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/pkg.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(include_HEADERS) \ $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/src/lib/ares_config.h ares_build.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ 
n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && echo $$files | $(am__xargs_n) 40 $(am__rm_f); }; \ } am__installdirs = "$(DESTDIR)$(includedir)" HEADERS = $(include_HEADERS) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) \ ares_build.h.in # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
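# A hedged illustration (not emitted by automake itself): piping a list that
# contains repeats through the awk program in $(am__uniquify_input), e.g.
#   printf 'ares.h\nares_dns.h\nares.h\n' | $(am__uniquify_input)
# prints each name exactly once (ares.h and ares_dns.h), in no guaranteed
# order, which is all the tags/ctags/ID targets below require.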
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/ares_build.h.in DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_CFLAGS = @AM_CFLAGS@ AM_CPPFLAGS = @AM_CPPFLAGS@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AS = @AS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BUILD_SUBDIRS = @BUILD_SUBDIRS@ CARES_PRIVATE_LIBS = @CARES_PRIVATE_LIBS@ CARES_RANDOM_FILE = @CARES_RANDOM_FILE@ CARES_SYMBOL_HIDING_CFLAG = @CARES_SYMBOL_HIDING_CFLAG@ CARES_VERSION_INFO = @CARES_VERSION_INFO@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CODE_COVERAGE_CFLAGS = @CODE_COVERAGE_CFLAGS@ CODE_COVERAGE_CPPFLAGS = @CODE_COVERAGE_CPPFLAGS@ CODE_COVERAGE_CXXFLAGS = @CODE_COVERAGE_CXXFLAGS@ CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@ CODE_COVERAGE_LIBS = @CODE_COVERAGE_LIBS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GCOV = @GCOV@ GENHTML = @GENHTML@ GMOCK112_CFLAGS = @GMOCK112_CFLAGS@ GMOCK112_LIBS = @GMOCK112_LIBS@ GMOCK_CFLAGS = @GMOCK_CFLAGS@ GMOCK_LIBS = @GMOCK_LIBS@ GREP = @GREP@ HAVE_CXX14 = @HAVE_CXX14@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LCOV = @LCOV@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKGCONFIG_CFLAGS = @PKGCONFIG_CFLAGS@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_CXX = @PTHREAD_CXX@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__rm_f_notfound = @am__rm_f_notfound@ am__tar = @am__tar@ am__untar = @am__untar@ am__xargs_n = @am__xargs_n@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ 
exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ ifGNUmake = @ifGNUmake@ ifnGNUmake = @ifnGNUmake@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Copyright (C) Daniel Stenberg # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign nostdinc 1.9.6 ACLOCAL_AMFLAGS = -I m4 --install # what headers to install on 'make install': include_HEADERS = ares.h ares_version.h ares_build.h ares_dns.h ares_dns_record.h ares_nameser.h EXTRA_DIST = ares_build.h.cmake ares_build.h.in ares_build.h.dist CMakeLists.txt all: ares_build.h $(MAKE) $(AM_MAKEFLAGS) all-am .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign include/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign include/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): ares_build.h: stamp-h2 @test -f $@ || rm -f stamp-h2 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h2 stamp-h2: $(srcdir)/ares_build.h.in $(top_builddir)/config.status $(AM_V_at)rm -f stamp-h2 $(AM_V_GEN)cd $(top_builddir) && $(SHELL) ./config.status include/ares_build.h distclean-hdr: -rm -f ares_build.h stamp-h2 mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs install-includeHEADERS: $(include_HEADERS) @$(NORMAL_INSTALL) @list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(includedir)'"; \ $(MKDIR_P) "$(DESTDIR)$(includedir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(includedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(includedir)" || exit $$?; \ done uninstall-includeHEADERS: @$(NORMAL_UNINSTALL) @list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ 
dir='$(DESTDIR)$(includedir)'; $(am__uninstall_files_from_dir) ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(HEADERS) ares_build.h installdirs: for dir in "$(DESTDIR)$(includedir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -$(am__rm_f) $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic distclean-hdr distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-includeHEADERS install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-includeHEADERS .MAKE: all install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \ clean-libtool cscopelist-am ctags ctags-am distclean \ distclean-generic distclean-hdr distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-includeHEADERS install-info \ install-info-am install-man install-pdf install-pdf-am \ install-ps install-ps-am install-strip installcheck \ installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-generic \ mostlyclean-libtool pdf pdf-am ps ps-am tags tags-am uninstall \ uninstall-am uninstall-includeHEADERS .PRECIOUS: Makefile # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: # Tell GNU make to disable its built-in pattern rules. %:: %,v %:: RCS/%,v %:: RCS/% %:: s.% %:: SCCS/s.% gevent-24.11.1/deps/c-ares/include/ares.h000066400000000000000000001013771471441230600200160ustar00rootroot00000000000000/* MIT License * * Copyright (c) Massachusetts Institute of Technology * Copyright (c) Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
 * * SPDX-License-Identifier: MIT */ #ifndef ARES__H #define ARES__H #include "ares_version.h" /* c-ares version defines */ #include "ares_build.h" /* c-ares build definitions */ #if defined(_WIN32) # ifndef WIN32_LEAN_AND_MEAN # define WIN32_LEAN_AND_MEAN # endif #endif #ifdef CARES_HAVE_SYS_TYPES_H # include <sys/types.h> #endif #ifdef CARES_HAVE_SYS_SOCKET_H # include <sys/socket.h> #endif #ifdef CARES_HAVE_SYS_SELECT_H # include <sys/select.h> #endif #ifdef CARES_HAVE_WINSOCK2_H # include <winsock2.h> /* To aid with linking against a static c-ares build, let's tell the Microsoft * compiler to pull in needed dependencies */ # ifdef _MSC_VER # pragma comment(lib, "ws2_32") # pragma comment(lib, "advapi32") # pragma comment(lib, "iphlpapi") # endif #endif #ifdef CARES_HAVE_WS2TCPIP_H # include <ws2tcpip.h> #endif #ifdef CARES_HAVE_WINDOWS_H # include <windows.h> #endif /* HP-UX systems version 9, 10 and 11 lack sys/select.h and so do oldish libc5-based Linux systems. Only include it on systems that are known to require it! */ #if defined(_AIX) || defined(__NOVELL_LIBC__) || defined(__NetBSD__) || \ defined(__minix) || defined(__SYMBIAN32__) || defined(__INTEGRITY) || \ defined(ANDROID) || defined(__ANDROID__) || defined(__OpenBSD__) || \ defined(__QNXNTO__) || defined(__MVS__) || defined(__HAIKU__) # include <sys/select.h> #endif #if (defined(NETWARE) && !defined(__NOVELL_LIBC__)) # include <sys/bsdskt.h> #endif #if !defined(_WIN32) # include <netinet/in.h> #endif #ifdef WATT32 # include <tcp.h> #endif #if defined(ANDROID) || defined(__ANDROID__) # include <jni.h> #endif typedef CARES_TYPEOF_ARES_SOCKLEN_T ares_socklen_t; typedef CARES_TYPEOF_ARES_SSIZE_T ares_ssize_t; #ifdef __cplusplus extern "C" { #endif /* ** c-ares external API function linkage decorations. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(__SYMBIAN32__) # ifdef CARES_STATICLIB # define CARES_EXTERN # else # ifdef CARES_BUILDING_LIBRARY # define CARES_EXTERN __declspec(dllexport) # else # define CARES_EXTERN __declspec(dllimport) # endif # endif #else # if defined(__GNUC__) && __GNUC__ >= 4 # define CARES_EXTERN __attribute__((visibility("default"))) # elif defined(__INTEL_COMPILER) && __INTEL_COMPILER >= 900 # define CARES_EXTERN __attribute__((visibility("default"))) # elif defined(__SUNPRO_C) # define CARES_EXTERN _global # else # define CARES_EXTERN # endif #endif #ifdef __GNUC__ # define CARES_GCC_VERSION \ (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__) #else # define CARES_GCC_VERSION 0 #endif #ifndef __has_attribute # define __has_attribute(x) 0 #endif #ifdef CARES_NO_DEPRECATED # define CARES_DEPRECATED # define CARES_DEPRECATED_FOR(f) #else # if CARES_GCC_VERSION >= 30200 || __has_attribute(__deprecated__) # define CARES_DEPRECATED __attribute__((__deprecated__)) # else # define CARES_DEPRECATED # endif # if CARES_GCC_VERSION >= 40500 || defined(__clang__) # define CARES_DEPRECATED_FOR(f) \ __attribute__((deprecated("Use " #f " instead"))) # elif defined(_MSC_VER) # define CARES_DEPRECATED_FOR(f) __declspec(deprecated("Use " #f " instead")) # else # define CARES_DEPRECATED_FOR(f) CARES_DEPRECATED # endif #endif typedef enum { ARES_SUCCESS = 0, /* Server error codes (ARES_ENODATA indicates no relevant answer) */ ARES_ENODATA = 1, ARES_EFORMERR = 2, ARES_ESERVFAIL = 3, ARES_ENOTFOUND = 4, ARES_ENOTIMP = 5, ARES_EREFUSED = 6, /* Locally generated error codes */ ARES_EBADQUERY = 7, ARES_EBADNAME = 8, ARES_EBADFAMILY = 9, ARES_EBADRESP = 10, ARES_ECONNREFUSED = 11, ARES_ETIMEOUT = 12, ARES_EOF = 13, ARES_EFILE = 14, ARES_ENOMEM = 15, ARES_EDESTRUCTION = 16, ARES_EBADSTR = 17, /* ares_getnameinfo error codes */ ARES_EBADFLAGS = 18,
/* ares_getaddrinfo error codes */ ARES_ENONAME = 19, ARES_EBADHINTS = 20, /* Uninitialized library error code */ ARES_ENOTINITIALIZED = 21, /* introduced in 1.7.0 */ /* ares_library_init error codes */ ARES_ELOADIPHLPAPI = 22, /* introduced in 1.7.0 */ ARES_EADDRGETNETWORKPARAMS = 23, /* introduced in 1.7.0 */ /* More error codes */ ARES_ECANCELLED = 24, /* introduced in 1.7.0 */ /* More ares_getaddrinfo error codes */ ARES_ESERVICE = 25, /* ares_getaddrinfo() was passed a text service name that * is not recognized. introduced in 1.16.0 */ ARES_ENOSERVER = 26 /* No DNS servers were configured */ } ares_status_t; typedef enum { ARES_FALSE = 0, ARES_TRUE = 1 } ares_bool_t; /*! Values for ARES_OPT_EVENT_THREAD */ typedef enum { /*! Default (best choice) event system */ ARES_EVSYS_DEFAULT = 0, /*! Win32 IOCP/AFD_POLL event system */ ARES_EVSYS_WIN32 = 1, /*! Linux epoll */ ARES_EVSYS_EPOLL = 2, /*! BSD/MacOS kqueue */ ARES_EVSYS_KQUEUE = 3, /*! POSIX poll() */ ARES_EVSYS_POLL = 4, /*! last fallback on Unix-like systems, select() */ ARES_EVSYS_SELECT = 5 } ares_evsys_t; /* Flag values */ #define ARES_FLAG_USEVC (1 << 0) #define ARES_FLAG_PRIMARY (1 << 1) #define ARES_FLAG_IGNTC (1 << 2) #define ARES_FLAG_NORECURSE (1 << 3) #define ARES_FLAG_STAYOPEN (1 << 4) #define ARES_FLAG_NOSEARCH (1 << 5) #define ARES_FLAG_NOALIASES (1 << 6) #define ARES_FLAG_NOCHECKRESP (1 << 7) #define ARES_FLAG_EDNS (1 << 8) #define ARES_FLAG_NO_DFLT_SVR (1 << 9) #define ARES_FLAG_DNS0x20 (1 << 10) /* Option mask values */ #define ARES_OPT_FLAGS (1 << 0) #define ARES_OPT_TIMEOUT (1 << 1) #define ARES_OPT_TRIES (1 << 2) #define ARES_OPT_NDOTS (1 << 3) #define ARES_OPT_UDP_PORT (1 << 4) #define ARES_OPT_TCP_PORT (1 << 5) #define ARES_OPT_SERVERS (1 << 6) #define ARES_OPT_DOMAINS (1 << 7) #define ARES_OPT_LOOKUPS (1 << 8) #define ARES_OPT_SOCK_STATE_CB (1 << 9) #define ARES_OPT_SORTLIST (1 << 10) #define ARES_OPT_SOCK_SNDBUF (1 << 11) #define ARES_OPT_SOCK_RCVBUF (1 << 12) #define ARES_OPT_TIMEOUTMS (1 << 13) #define ARES_OPT_ROTATE (1 << 14) #define ARES_OPT_EDNSPSZ (1 << 15) #define ARES_OPT_NOROTATE (1 << 16) #define ARES_OPT_RESOLVCONF (1 << 17) #define ARES_OPT_HOSTS_FILE (1 << 18) #define ARES_OPT_UDP_MAX_QUERIES (1 << 19) #define ARES_OPT_MAXTIMEOUTMS (1 << 20) #define ARES_OPT_QUERY_CACHE (1 << 21) #define ARES_OPT_EVENT_THREAD (1 << 22) #define ARES_OPT_SERVER_FAILOVER (1 << 23) /* Nameinfo flag values */ #define ARES_NI_NOFQDN (1 << 0) #define ARES_NI_NUMERICHOST (1 << 1) #define ARES_NI_NAMEREQD (1 << 2) #define ARES_NI_NUMERICSERV (1 << 3) #define ARES_NI_DGRAM (1 << 4) #define ARES_NI_TCP 0 #define ARES_NI_UDP ARES_NI_DGRAM #define ARES_NI_SCTP (1 << 5) #define ARES_NI_DCCP (1 << 6) #define ARES_NI_NUMERICSCOPE (1 << 7) #define ARES_NI_LOOKUPHOST (1 << 8) #define ARES_NI_LOOKUPSERVICE (1 << 9) /* Reserved for future use */ #define ARES_NI_IDN (1 << 10) #define ARES_NI_IDN_ALLOW_UNASSIGNED (1 << 11) #define ARES_NI_IDN_USE_STD3_ASCII_RULES (1 << 12) /* Addrinfo flag values */ #define ARES_AI_CANONNAME (1 << 0) #define ARES_AI_NUMERICHOST (1 << 1) #define ARES_AI_PASSIVE (1 << 2) #define ARES_AI_NUMERICSERV (1 << 3) #define ARES_AI_V4MAPPED (1 << 4) #define ARES_AI_ALL (1 << 5) #define ARES_AI_ADDRCONFIG (1 << 6) #define ARES_AI_NOSORT (1 << 7) #define ARES_AI_ENVHOSTS (1 << 8) /* Reserved for future use */ #define ARES_AI_IDN (1 << 10) #define ARES_AI_IDN_ALLOW_UNASSIGNED (1 << 11) #define ARES_AI_IDN_USE_STD3_ASCII_RULES (1 << 12) #define ARES_AI_CANONIDN (1 << 13) #define ARES_AI_MASK \ (ARES_AI_CANONNAME | 
ARES_AI_NUMERICHOST | ARES_AI_PASSIVE | \ ARES_AI_NUMERICSERV | ARES_AI_V4MAPPED | ARES_AI_ALL | ARES_AI_ADDRCONFIG) #define ARES_GETSOCK_MAXNUM \ 16 /* ares_getsock() can return info about this \ many sockets */ #define ARES_GETSOCK_READABLE(bits, num) (bits & (1 << (num))) #define ARES_GETSOCK_WRITABLE(bits, num) \ (bits & (1 << ((num) + ARES_GETSOCK_MAXNUM))) /* c-ares library initialization flag values */ #define ARES_LIB_INIT_NONE (0) #define ARES_LIB_INIT_WIN32 (1 << 0) #define ARES_LIB_INIT_ALL (ARES_LIB_INIT_WIN32) /* Server state callback flag values */ #define ARES_SERV_STATE_UDP (1 << 0) /* Query used UDP */ #define ARES_SERV_STATE_TCP (1 << 1) /* Query used TCP */ /* * Typedef our socket type */ #ifndef ares_socket_typedef # if defined(_WIN32) && !defined(WATT32) typedef SOCKET ares_socket_t; # define ARES_SOCKET_BAD INVALID_SOCKET # else typedef int ares_socket_t; # define ARES_SOCKET_BAD -1 # endif # define ares_socket_typedef #endif /* ares_socket_typedef */ typedef void (*ares_sock_state_cb)(void *data, ares_socket_t socket_fd, int readable, int writable); struct apattern; /* Options controlling server failover behavior. * The retry chance is the probability (1/N) by which we will retry a failed * server instead of the best server when selecting a server to send queries * to. * The retry delay is the minimum time in milliseconds to wait between doing * such retries (applied per-server). */ struct ares_server_failover_options { unsigned short retry_chance; size_t retry_delay; }; /* NOTE about the ares_options struct to users and developers. This struct will remain looking like this. It will not be extended nor shrunk in future releases, but all new options will be set by ares_set_*() options instead of with the ares_init_options() function. Eventually (in a galaxy far far away), all options will be settable by ares_set_*() options and the ares_init_options() function will become deprecated. When new options are added to c-ares, they are not added to this struct. And they are not "saved" with the ares_save_options() function but instead we encourage the use of the ares_dup() function. Needless to say, if you add config options to c-ares you need to make sure ares_dup() duplicates this new option. */ struct ares_options { int flags; int timeout; /* in seconds or milliseconds, depending on options */ int tries; int ndots; unsigned short udp_port; /* host byte order */ unsigned short tcp_port; /* host byte order */ int socket_send_buffer_size; int socket_receive_buffer_size; struct in_addr *servers; int nservers; char **domains; int ndomains; char *lookups; ares_sock_state_cb sock_state_cb; void *sock_state_cb_data; struct apattern *sortlist; int nsort; int ednspsz; char *resolvconf_path; char *hosts_path; int udp_max_queries; int maxtimeout; /* in milliseconds */ unsigned int qcache_max_ttl; /* Maximum TTL for query cache, 0=disabled */ ares_evsys_t evsys; struct ares_server_failover_options server_failover_opts; }; struct hostent; struct timeval; struct sockaddr; struct ares_channeldata; struct ares_addrinfo; struct ares_addrinfo_hints; /* Legacy typedef, don't use, you can't specify "const" */ typedef struct ares_channeldata *ares_channel; /* Current main channel typedef */ typedef struct ares_channeldata ares_channel_t; /* * NOTE: before c-ares 1.7.0 we would most often use the system in6_addr * struct below when ares itself was built, but many apps would use this * private version since the header checked a HAVE_* define for it. 
Starting * with 1.7.0 we always declare and use our own to stop relying on the * system's one. */ struct ares_in6_addr { union { unsigned char _S6_u8[16]; } _S6_un; }; struct ares_addr { int family; union { struct in_addr addr4; struct ares_in6_addr addr6; } addr; }; /* DNS record parser, writer, and helpers */ #include "ares_dns_record.h" typedef void (*ares_callback)(void *arg, int status, int timeouts, unsigned char *abuf, int alen); typedef void (*ares_callback_dnsrec)(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec); typedef void (*ares_host_callback)(void *arg, int status, int timeouts, struct hostent *hostent); typedef void (*ares_nameinfo_callback)(void *arg, int status, int timeouts, char *node, char *service); typedef int (*ares_sock_create_callback)(ares_socket_t socket_fd, int type, void *data); typedef int (*ares_sock_config_callback)(ares_socket_t socket_fd, int type, void *data); typedef void (*ares_addrinfo_callback)(void *arg, int status, int timeouts, struct ares_addrinfo *res); typedef void (*ares_server_state_callback)(const char *server_string, ares_bool_t success, int flags, void *data); CARES_EXTERN int ares_library_init(int flags); CARES_EXTERN int ares_library_init_mem(int flags, void *(*amalloc)(size_t size), void (*afree)(void *ptr), void *(*arealloc)(void *ptr, size_t size)); #if defined(ANDROID) || defined(__ANDROID__) CARES_EXTERN void ares_library_init_jvm(JavaVM *jvm); CARES_EXTERN int ares_library_init_android(jobject connectivity_manager); CARES_EXTERN int ares_library_android_initialized(void); #endif CARES_EXTERN int ares_library_initialized(void); CARES_EXTERN void ares_library_cleanup(void); CARES_EXTERN const char *ares_version(int *version); CARES_EXTERN CARES_DEPRECATED_FOR(ares_init_options) int ares_init( ares_channel_t **channelptr); CARES_EXTERN int ares_init_options(ares_channel_t **channelptr, const struct ares_options *options, int optmask); CARES_EXTERN int ares_save_options(const ares_channel_t *channel, struct ares_options *options, int *optmask); CARES_EXTERN void ares_destroy_options(struct ares_options *options); CARES_EXTERN int ares_dup(ares_channel_t **dest, const ares_channel_t *src); CARES_EXTERN ares_status_t ares_reinit(ares_channel_t *channel); CARES_EXTERN void ares_destroy(ares_channel_t *channel); CARES_EXTERN void ares_cancel(ares_channel_t *channel); /* These next 3 configure local binding for the out-going socket * connection. Use these to specify source IP and/or network device * on multi-homed systems. */ CARES_EXTERN void ares_set_local_ip4(ares_channel_t *channel, unsigned int local_ip); /* local_ip6 should be 16 bytes in length */ CARES_EXTERN void ares_set_local_ip6(ares_channel_t *channel, const unsigned char *local_ip6); /* local_dev_name should be null terminated. 
*/ CARES_EXTERN void ares_set_local_dev(ares_channel_t *channel, const char *local_dev_name); CARES_EXTERN void ares_set_socket_callback(ares_channel_t *channel, ares_sock_create_callback callback, void *user_data); CARES_EXTERN void ares_set_socket_configure_callback( ares_channel_t *channel, ares_sock_config_callback callback, void *user_data); CARES_EXTERN void ares_set_server_state_callback(ares_channel_t *channel, ares_server_state_callback callback, void *user_data); CARES_EXTERN int ares_set_sortlist(ares_channel_t *channel, const char *sortstr); CARES_EXTERN void ares_getaddrinfo(ares_channel_t *channel, const char *node, const char *service, const struct ares_addrinfo_hints *hints, ares_addrinfo_callback callback, void *arg); CARES_EXTERN void ares_freeaddrinfo(struct ares_addrinfo *ai); /* * Virtual function set to have user-managed socket IO. * Note that all functions need to be defined, and when * set, the library will not do any bind nor set any * socket options, assuming the client handles these * through either socket creation or the * ares_sock_config_callback call. */ struct iovec; struct ares_socket_functions { ares_socket_t (*asocket)(int, int, int, void *); int (*aclose)(ares_socket_t, void *); int (*aconnect)(ares_socket_t, const struct sockaddr *, ares_socklen_t, void *); ares_ssize_t (*arecvfrom)(ares_socket_t, void *, size_t, int, struct sockaddr *, ares_socklen_t *, void *); ares_ssize_t (*asendv)(ares_socket_t, const struct iovec *, int, void *); }; CARES_EXTERN void ares_set_socket_functions(ares_channel_t *channel, const struct ares_socket_functions *funcs, void *user_data); CARES_EXTERN CARES_DEPRECATED_FOR(ares_send_dnsrec) void ares_send( ares_channel_t *channel, const unsigned char *qbuf, int qlen, ares_callback callback, void *arg); /*! Send a DNS query as an ares_dns_record_t with a callback containing the * parsed DNS record. * * \param[in] channel Pointer to channel on which queries will be sent. * \param[in] dnsrec DNS Record to send * \param[in] callback Callback function invoked on completion or failure of * the query sequence. * \param[in] arg Additional argument passed to the callback function. * \param[out] qid Query ID * \return One of the c-ares status codes. */ CARES_EXTERN ares_status_t ares_send_dnsrec(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg, unsigned short *qid); CARES_EXTERN CARES_DEPRECATED_FOR(ares_query_dnsrec) void ares_query( ares_channel_t *channel, const char *name, int dnsclass, int type, ares_callback callback, void *arg); /*! Perform a DNS query with a callback containing the parsed DNS record. * * \param[in] channel Pointer to channel on which queries will be sent. * \param[in] name Query name * \param[in] dnsclass DNS Class * \param[in] type DNS Record Type * \param[in] callback Callback function invoked on completion or failure of * the query sequence. * \param[in] arg Additional argument passed to the callback function. * \param[out] qid Query ID * \return One of the c-ares status codes. */ CARES_EXTERN ares_status_t ares_query_dnsrec(ares_channel_t *channel, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, ares_callback_dnsrec callback, void *arg, unsigned short *qid); CARES_EXTERN CARES_DEPRECATED_FOR(ares_search_dnsrec) void ares_search( ares_channel_t *channel, const char *name, int dnsclass, int type, ares_callback callback, void *arg); /*! Search for a complete DNS message. 
* * \param[in] channel Pointer to channel on which queries will be sent. * \param[in] dnsrec Pointer to initialized and filled DNS record object. * \param[in] callback Callback function invoked on completion or failure of * the query sequence. * \param[in] arg Additional argument passed to the callback function. * \return One of the c-ares status codes. In all cases, except * ARES_EFORMERR due to misuse, this error code will also be sent * to the provided callback. */ CARES_EXTERN ares_status_t ares_search_dnsrec(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg); CARES_EXTERN CARES_DEPRECATED_FOR(ares_getaddrinfo) void ares_gethostbyname( ares_channel_t *channel, const char *name, int family, ares_host_callback callback, void *arg); CARES_EXTERN int ares_gethostbyname_file(ares_channel_t *channel, const char *name, int family, struct hostent **host); CARES_EXTERN void ares_gethostbyaddr(ares_channel_t *channel, const void *addr, int addrlen, int family, ares_host_callback callback, void *arg); CARES_EXTERN void ares_getnameinfo(ares_channel_t *channel, const struct sockaddr *sa, ares_socklen_t salen, int flags, ares_nameinfo_callback callback, void *arg); CARES_EXTERN CARES_DEPRECATED_FOR( ARES_OPT_EVENT_THREAD or ARES_OPT_SOCK_STATE_CB) int ares_fds(const ares_channel_t *channel, fd_set *read_fds, fd_set *write_fds); CARES_EXTERN CARES_DEPRECATED_FOR( ARES_OPT_EVENT_THREAD or ARES_OPT_SOCK_STATE_CB) int ares_getsock(const ares_channel_t *channel, ares_socket_t *socks, int numsocks); CARES_EXTERN struct timeval *ares_timeout(const ares_channel_t *channel, struct timeval *maxtv, struct timeval *tv); CARES_EXTERN CARES_DEPRECATED_FOR(ares_process_fd) void ares_process( ares_channel_t *channel, fd_set *read_fds, fd_set *write_fds); CARES_EXTERN void ares_process_fd(ares_channel_t *channel, ares_socket_t read_fd, ares_socket_t write_fd); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_record_create) int ares_create_query( const char *name, int dnsclass, int type, unsigned short id, int rd, unsigned char **buf, int *buflen, int max_udp_size); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_record_create) int ares_mkquery( const char *name, int dnsclass, int type, unsigned short id, int rd, unsigned char **buf, int *buflen); CARES_EXTERN int ares_expand_name(const unsigned char *encoded, const unsigned char *abuf, int alen, char **s, long *enclen); CARES_EXTERN int ares_expand_string(const unsigned char *encoded, const unsigned char *abuf, int alen, unsigned char **s, long *enclen); struct ares_addrttl { struct in_addr ipaddr; int ttl; }; struct ares_addr6ttl { struct ares_in6_addr ip6addr; int ttl; }; struct ares_caa_reply { struct ares_caa_reply *next; int critical; unsigned char *property; size_t plength; /* plength excludes null termination */ unsigned char *value; size_t length; /* length excludes null termination */ }; struct ares_srv_reply { struct ares_srv_reply *next; char *host; unsigned short priority; unsigned short weight; unsigned short port; }; struct ares_mx_reply { struct ares_mx_reply *next; char *host; unsigned short priority; }; struct ares_txt_reply { struct ares_txt_reply *next; unsigned char *txt; size_t length; /* length excludes null termination */ }; /* NOTE: This structure is a superset of ares_txt_reply */ struct ares_txt_ext { struct ares_txt_ext *next; unsigned char *txt; size_t length; /* 1 - if start of new record * 0 - if a chunk in the same record */ unsigned char record_start; }; struct ares_naptr_reply { struct 
ares_naptr_reply *next; unsigned char *flags; unsigned char *service; unsigned char *regexp; char *replacement; unsigned short order; unsigned short preference; }; struct ares_soa_reply { char *nsname; char *hostmaster; unsigned int serial; unsigned int refresh; unsigned int retry; unsigned int expire; unsigned int minttl; }; struct ares_uri_reply { struct ares_uri_reply *next; unsigned short priority; unsigned short weight; char *uri; int ttl; }; /* * Similar to addrinfo, but with extra ttl and missing canonname. */ struct ares_addrinfo_node { int ai_ttl; int ai_flags; int ai_family; int ai_socktype; int ai_protocol; ares_socklen_t ai_addrlen; struct sockaddr *ai_addr; struct ares_addrinfo_node *ai_next; }; /* * alias - label of the resource record. * name - value (canonical name) of the resource record. * See RFC2181 10.1.1. CNAME terminology. */ struct ares_addrinfo_cname { int ttl; char *alias; char *name; struct ares_addrinfo_cname *next; }; struct ares_addrinfo { struct ares_addrinfo_cname *cnames; struct ares_addrinfo_node *nodes; char *name; }; struct ares_addrinfo_hints { int ai_flags; int ai_family; int ai_socktype; int ai_protocol; }; /* ** Parse the buffer, starting at *abuf and of length alen bytes, previously ** obtained from an ares_search call. Put the results in *host, if nonnull. ** Also, if addrttls is nonnull, put up to *naddrttls IPv4 addresses along with ** their TTLs in that array, and set *naddrttls to the number of addresses ** so written. */ CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_a_reply( const unsigned char *abuf, int alen, struct hostent **host, struct ares_addrttl *addrttls, int *naddrttls); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_aaaa_reply( const unsigned char *abuf, int alen, struct hostent **host, struct ares_addr6ttl *addrttls, int *naddrttls); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_caa_reply( const unsigned char *abuf, int alen, struct ares_caa_reply **caa_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_ptr_reply( const unsigned char *abuf, int alen, const void *addr, int addrlen, int family, struct hostent **host); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_ns_reply( const unsigned char *abuf, int alen, struct hostent **host); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_srv_reply( const unsigned char *abuf, int alen, struct ares_srv_reply **srv_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_mx_reply( const unsigned char *abuf, int alen, struct ares_mx_reply **mx_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_txt_reply( const unsigned char *abuf, int alen, struct ares_txt_reply **txt_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_txt_reply_ext( const unsigned char *abuf, int alen, struct ares_txt_ext **txt_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_naptr_reply( const unsigned char *abuf, int alen, struct ares_naptr_reply **naptr_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_soa_reply( const unsigned char *abuf, int alen, struct ares_soa_reply **soa_out); CARES_EXTERN CARES_DEPRECATED_FOR(ares_dns_parse) int ares_parse_uri_reply( const unsigned char *abuf, int alen, struct ares_uri_reply **uri_out); CARES_EXTERN void ares_free_string(void *str); CARES_EXTERN void ares_free_hostent(struct hostent *host); CARES_EXTERN void ares_free_data(void *dataptr); CARES_EXTERN const char *ares_strerror(int 
code); struct ares_addr_node { struct ares_addr_node *next; int family; union { struct in_addr addr4; struct ares_in6_addr addr6; } addr; }; struct ares_addr_port_node { struct ares_addr_port_node *next; int family; union { struct in_addr addr4; struct ares_in6_addr addr6; } addr; int udp_port; int tcp_port; }; CARES_EXTERN CARES_DEPRECATED_FOR(ares_set_servers_csv) int ares_set_servers( ares_channel_t *channel, const struct ares_addr_node *servers); CARES_EXTERN CARES_DEPRECATED_FOR(ares_set_servers_ports_csv) int ares_set_servers_ports(ares_channel_t *channel, const struct ares_addr_port_node *servers); /* Incoming string format: host[:port][,host[:port]]... */ CARES_EXTERN int ares_set_servers_csv(ares_channel_t *channel, const char *servers); CARES_EXTERN int ares_set_servers_ports_csv(ares_channel_t *channel, const char *servers); CARES_EXTERN char *ares_get_servers_csv(const ares_channel_t *channel); CARES_EXTERN CARES_DEPRECATED_FOR(ares_get_servers_csv) int ares_get_servers( const ares_channel_t *channel, struct ares_addr_node **servers); CARES_EXTERN CARES_DEPRECATED_FOR(ares_get_servers_csv) int ares_get_servers_ports(const ares_channel_t *channel, struct ares_addr_port_node **servers); CARES_EXTERN const char *ares_inet_ntop(int af, const void *src, char *dst, ares_socklen_t size); CARES_EXTERN int ares_inet_pton(int af, const char *src, void *dst); /*! Whether or not the c-ares library was built with threadsafety * * \return ARES_TRUE if built with threadsafety, ARES_FALSE if not */ CARES_EXTERN ares_bool_t ares_threadsafety(void); /*! Block until notified that there are no longer any queries in queue, or * the specified timeout has expired. * * \param[in] channel Initialized ares channel * \param[in] timeout_ms Number of milliseconds to wait for the queue to be * empty. -1 for Infinite. * \return ARES_ENOTIMP if not built with threading support, ARES_ETIMEOUT * if requested timeout expires, ARES_SUCCESS when queue is empty. */ CARES_EXTERN ares_status_t ares_queue_wait_empty(ares_channel_t *channel, int timeout_ms); /*! Retrieve the total number of active queries pending answers from servers. * Some c-ares requests may spawn multiple queries, such as ares_getaddrinfo() * when using AF_UNSPEC, which will be reflected in this number. * * \param[in] channel Initialized ares channel * \return Number of active queries to servers */ CARES_EXTERN size_t ares_queue_active_queries(const ares_channel_t *channel); #ifdef __cplusplus } #endif #endif /* ARES__H */ gevent-24.11.1/deps/c-ares/include/ares_build.h000066400000000000000000000154241471441230600211720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2009 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __CARES_BUILD_H #define __CARES_BUILD_H /* ================================================================ */ /* NOTES FOR CONFIGURE CAPABLE SYSTEMS */ /* ================================================================ */ /* * NOTE 1: * ------- * * See file ares_build.h.in, run configure, and forget that this file * exists it is only used for non-configure systems. * But you can keep reading if you want ;-) * */ /* ================================================================ */ /* NOTES FOR NON-CONFIGURE SYSTEMS */ /* ================================================================ */ /* * NOTE 1: * ------- * * Nothing in this file is intended to be modified or adjusted by the * c-ares library user nor by the c-ares library builder. * * If you think that something actually needs to be changed, adjusted * or fixed in this file, then, report it on the c-ares development * mailing list: http://lists.haxx.se/listinfo/c-ares/ * * Try to keep one section per platform, compiler and architecture, * otherwise, if an existing section is reused for a different one and * later on the original is adjusted, probably the piggybacking one can * be adversely changed. * * In order to differentiate between platforms/compilers/architectures * use only compiler built in predefined preprocessor symbols. * * This header file shall only export symbols which are 'cares' or 'CARES' * prefixed, otherwise public name space would be polluted. * * NOTE 2: * ------- * * Right now you might be staring at file ares_build.h.dist or ares_build.h, * this is due to the following reason: file ares_build.h.dist is renamed * to ares_build.h when the c-ares source code distribution archive file is * created. * * File ares_build.h.dist is not included in the distribution archive. * File ares_build.h is not present in the git tree. * * The distributed ares_build.h file is only intended to be used on systems * which can not run the also distributed configure script. * * On systems capable of running the configure script, the configure process * will overwrite the distributed ares_build.h file with one that is suitable * and specific to the library being configured and built, which is generated * from the ares_build.h.in template file. * * If you check out from git on a non-configure platform, you must run the * appropriate buildconf* script to set up ares_build.h and other local files. 
* */ /* ================================================================ */ /* DEFINITION OF THESE SYMBOLS SHALL NOT TAKE PLACE ANYWHERE ELSE */ /* ================================================================ */ #ifdef CARES_TYPEOF_ARES_SOCKLEN_T # error "CARES_TYPEOF_ARES_SOCKLEN_T shall not be defined except in ares_build.h" Error Compilation_aborted_CARES_TYPEOF_ARES_SOCKLEN_T_already_defined #endif /* ================================================================ */ /* EXTERNAL INTERFACE SETTINGS FOR NON-CONFIGURE SYSTEMS ONLY */ /* ================================================================ */ #if defined(__DJGPP__) || defined(__GO32__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__SALFORDC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__BORLANDC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__TURBOC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__WATCOMC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__POCC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__LCC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__SYMBIAN32__) # define CARES_TYPEOF_ARES_SOCKLEN_T unsigned int #elif defined(__MWERKS__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(_WIN32_WCE) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__VMS) # define CARES_TYPEOF_ARES_SOCKLEN_T unsigned int #elif defined(__OS400__) # if defined(__ILEC400__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(__MVS__) # if defined(__IBMC__) || defined(__IBMCPP__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(__370__) # if defined(__IBMC__) || defined(__IBMCPP__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(TPF) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(_WIN32) # define WIN32_LEAN_AND_MEAN # define CARES_TYPEOF_ARES_SOCKLEN_T int # define CARES_HAVE_WINDOWS_H 1 # define CARES_HAVE_SYS_TYPES_H 1 # if defined(WATT32) # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # else # define CARES_HAVE_WS2TCPIP_H 1 # define CARES_HAVE_WINSOCK2_H 1 # endif #elif defined(__GNUC__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 #else # error "Unknown non-configure build target!" Error Compilation_aborted_Unknown_non_configure_build_target #endif /* Data type definition of ares_ssize_t. */ #ifdef _WIN32 # ifdef _WIN64 # define CARES_TYPEOF_ARES_SSIZE_T __int64 # else # define CARES_TYPEOF_ARES_SSIZE_T long # endif #else # define CARES_TYPEOF_ARES_SSIZE_T ssize_t #endif #endif /* __CARES_BUILD_H */ gevent-24.11.1/deps/c-ares/include/ares_build.h.cmake000066400000000000000000000022041471441230600222410ustar00rootroot00000000000000#ifndef __CARES_BUILD_H #define __CARES_BUILD_H /* * Copyright (C) The c-ares project and its contributors * SPDX-License-Identifier: MIT */ #define CARES_TYPEOF_ARES_SOCKLEN_T @CARES_TYPEOF_ARES_SOCKLEN_T@ #define CARES_TYPEOF_ARES_SSIZE_T @CARES_TYPEOF_ARES_SSIZE_T@ /* Prefix names with CARES_ to make sure they don't conflict with other config.h * files. 
We need to include some dependent headers that may be system specific * for C-Ares */ #cmakedefine CARES_HAVE_SYS_TYPES_H #cmakedefine CARES_HAVE_SYS_SOCKET_H #cmakedefine CARES_HAVE_SYS_SELECT_H #cmakedefine CARES_HAVE_WINDOWS_H #cmakedefine CARES_HAVE_WS2TCPIP_H #cmakedefine CARES_HAVE_WINSOCK2_H #cmakedefine CARES_HAVE_ARPA_NAMESER_H #cmakedefine CARES_HAVE_ARPA_NAMESER_COMPAT_H #ifdef CARES_HAVE_SYS_TYPES_H # include #endif #ifdef CARES_HAVE_SYS_SOCKET_H # include #endif #ifdef CARES_HAVE_SYS_SELECT_H # include #endif #ifdef CARES_HAVE_WINSOCK2_H # include #endif #ifdef CARES_HAVE_WS2TCPIP_H # include #endif #ifdef CARES_HAVE_WINDOWS_H # include #endif #endif /* __CARES_BUILD_H */ gevent-24.11.1/deps/c-ares/include/ares_build.h.dist000066400000000000000000000154241471441230600221340ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2009 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __CARES_BUILD_H #define __CARES_BUILD_H /* ================================================================ */ /* NOTES FOR CONFIGURE CAPABLE SYSTEMS */ /* ================================================================ */ /* * NOTE 1: * ------- * * See file ares_build.h.in, run configure, and forget that this file * exists it is only used for non-configure systems. * But you can keep reading if you want ;-) * */ /* ================================================================ */ /* NOTES FOR NON-CONFIGURE SYSTEMS */ /* ================================================================ */ /* * NOTE 1: * ------- * * Nothing in this file is intended to be modified or adjusted by the * c-ares library user nor by the c-ares library builder. * * If you think that something actually needs to be changed, adjusted * or fixed in this file, then, report it on the c-ares development * mailing list: http://lists.haxx.se/listinfo/c-ares/ * * Try to keep one section per platform, compiler and architecture, * otherwise, if an existing section is reused for a different one and * later on the original is adjusted, probably the piggybacking one can * be adversely changed. * * In order to differentiate between platforms/compilers/architectures * use only compiler built in predefined preprocessor symbols. * * This header file shall only export symbols which are 'cares' or 'CARES' * prefixed, otherwise public name space would be polluted. 
* * NOTE 2: * ------- * * Right now you might be staring at file ares_build.h.dist or ares_build.h, * this is due to the following reason: file ares_build.h.dist is renamed * to ares_build.h when the c-ares source code distribution archive file is * created. * * File ares_build.h.dist is not included in the distribution archive. * File ares_build.h is not present in the git tree. * * The distributed ares_build.h file is only intended to be used on systems * which can not run the also distributed configure script. * * On systems capable of running the configure script, the configure process * will overwrite the distributed ares_build.h file with one that is suitable * and specific to the library being configured and built, which is generated * from the ares_build.h.in template file. * * If you check out from git on a non-configure platform, you must run the * appropriate buildconf* script to set up ares_build.h and other local files. * */ /* ================================================================ */ /* DEFINITION OF THESE SYMBOLS SHALL NOT TAKE PLACE ANYWHERE ELSE */ /* ================================================================ */ #ifdef CARES_TYPEOF_ARES_SOCKLEN_T # error "CARES_TYPEOF_ARES_SOCKLEN_T shall not be defined except in ares_build.h" Error Compilation_aborted_CARES_TYPEOF_ARES_SOCKLEN_T_already_defined #endif /* ================================================================ */ /* EXTERNAL INTERFACE SETTINGS FOR NON-CONFIGURE SYSTEMS ONLY */ /* ================================================================ */ #if defined(__DJGPP__) || defined(__GO32__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__SALFORDC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__BORLANDC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__TURBOC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__WATCOMC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__POCC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__LCC__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__SYMBIAN32__) # define CARES_TYPEOF_ARES_SOCKLEN_T unsigned int #elif defined(__MWERKS__) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(_WIN32_WCE) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(__VMS) # define CARES_TYPEOF_ARES_SOCKLEN_T unsigned int #elif defined(__OS400__) # if defined(__ILEC400__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(__MVS__) # if defined(__IBMC__) || defined(__IBMCPP__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(__370__) # if defined(__IBMC__) || defined(__IBMCPP__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # endif #elif defined(TPF) # define CARES_TYPEOF_ARES_SOCKLEN_T int #elif defined(_WIN32) # define WIN32_LEAN_AND_MEAN # define CARES_TYPEOF_ARES_SOCKLEN_T int # define CARES_HAVE_WINDOWS_H 1 # define CARES_HAVE_SYS_TYPES_H 1 # if defined(WATT32) # define CARES_HAVE_SYS_SOCKET_H 1 # define CARES_HAVE_SYS_SELECT_H 1 # else # define CARES_HAVE_WS2TCPIP_H 1 # define CARES_HAVE_WINSOCK2_H 1 # endif #elif defined(__GNUC__) # define CARES_TYPEOF_ARES_SOCKLEN_T socklen_t # define CARES_HAVE_SYS_TYPES_H 1 # define CARES_HAVE_SYS_SOCKET_H 1 # define 
CARES_HAVE_SYS_SELECT_H 1 #else # error "Unknown non-configure build target!" Error Compilation_aborted_Unknown_non_configure_build_target #endif /* Data type definition of ares_ssize_t. */ #ifdef _WIN32 # ifdef _WIN64 # define CARES_TYPEOF_ARES_SSIZE_T __int64 # else # define CARES_TYPEOF_ARES_SSIZE_T long # endif #else # define CARES_TYPEOF_ARES_SSIZE_T ssize_t #endif #endif /* __CARES_BUILD_H */ gevent-24.11.1/deps/c-ares/include/ares_build.h.in000066400000000000000000000021241471441230600215700ustar00rootroot00000000000000#ifndef __CARES_BUILD_H #define __CARES_BUILD_H /* * Copyright (C) The c-ares project and its contributors * SPDX-License-Identifier: MIT */ #define CARES_TYPEOF_ARES_SOCKLEN_T @CARES_TYPEOF_ARES_SOCKLEN_T@ #define CARES_TYPEOF_ARES_SSIZE_T @CARES_TYPEOF_ARES_SSIZE_T@ /* Prefix names with CARES_ to make sure they don't conflict with other config.h * files. We need to include some dependent headers that may be system specific * for C-Ares */ #undef CARES_HAVE_SYS_TYPES_H #undef CARES_HAVE_SYS_SOCKET_H #undef CARES_HAVE_SYS_SELECT_H #undef CARES_HAVE_WINDOWS_H #undef CARES_HAVE_WS2TCPIP_H #undef CARES_HAVE_WINSOCK2_H #undef CARES_HAVE_ARPA_NAMESER_H #undef CARES_HAVE_ARPA_NAMESER_COMPAT_H #ifdef CARES_HAVE_SYS_TYPES_H # include #endif #ifdef CARES_HAVE_SYS_SOCKET_H # include #endif #ifdef CARES_HAVE_SYS_SELECT_H # include #endif #ifdef CARES_HAVE_WINSOCK2_H # include #endif #ifdef CARES_HAVE_WS2TCPIP_H # include #endif #ifdef CARES_HAVE_WINDOWS_H # include #endif #endif /* __CARES_BUILD_H */ gevent-24.11.1/deps/c-ares/include/ares_dns.h000066400000000000000000000132521471441230600206540ustar00rootroot00000000000000/* MIT License * * Copyright (c) Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_DNS_H #define HEADER_CARES_DNS_H /* * NOTE TO INTEGRATORS: * * This header is made public due to legacy projects relying on it. * Please do not use the macros within this header, or include this * header in your project as it may be removed in the future. */ /* * Macro DNS__16BIT reads a network short (16 bit) given in network * byte order, and returns its value as an unsigned short. 
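 *
 * Illustrative sketch (not part of the original c-ares header, added only to
 * clarify the byte-order handling): given
 *
 *   unsigned char buf[2] = { 0x12, 0x34 };
 *
 * DNS__16BIT(buf) evaluates to 0x1234, while DNS__SET16BIT(buf, 0x1234)
 * writes the same two bytes back in network (big-endian) order.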
*/ #define DNS__16BIT(p) \ ((unsigned short)((unsigned int)0xffff & \ (((unsigned int)((unsigned char)(p)[0]) << 8U) | \ ((unsigned int)((unsigned char)(p)[1]))))) /* * Macro DNS__32BIT reads a network long (32 bit) given in network * byte order, and returns its value as an unsigned int. */ #define DNS__32BIT(p) \ ((unsigned int)(((unsigned int)((unsigned char)(p)[0]) << 24U) | \ ((unsigned int)((unsigned char)(p)[1]) << 16U) | \ ((unsigned int)((unsigned char)(p)[2]) << 8U) | \ ((unsigned int)((unsigned char)(p)[3])))) #define DNS__SET16BIT(p, v) \ (((p)[0] = (unsigned char)(((v) >> 8) & 0xff)), \ ((p)[1] = (unsigned char)((v) & 0xff))) #define DNS__SET32BIT(p, v) \ (((p)[0] = (unsigned char)(((v) >> 24) & 0xff)), \ ((p)[1] = (unsigned char)(((v) >> 16) & 0xff)), \ ((p)[2] = (unsigned char)(((v) >> 8) & 0xff)), \ ((p)[3] = (unsigned char)((v) & 0xff))) #if 0 /* we cannot use this approach on systems where we can't access 16/32 bit data on un-aligned addresses */ # define DNS__16BIT(p) ntohs(*(unsigned short *)(p)) # define DNS__32BIT(p) ntohl(*(unsigned long *)(p)) # define DNS__SET16BIT(p, v) *(unsigned short *)(p) = htons(v) # define DNS__SET32BIT(p, v) *(unsigned long *)(p) = htonl(v) #endif /* Macros for parsing a DNS header */ #define DNS_HEADER_QID(h) DNS__16BIT(h) #define DNS_HEADER_QR(h) (((h)[2] >> 7) & 0x1) #define DNS_HEADER_OPCODE(h) (((h)[2] >> 3) & 0xf) #define DNS_HEADER_AA(h) (((h)[2] >> 2) & 0x1) #define DNS_HEADER_TC(h) (((h)[2] >> 1) & 0x1) #define DNS_HEADER_RD(h) ((h)[2] & 0x1) #define DNS_HEADER_RA(h) (((h)[3] >> 7) & 0x1) #define DNS_HEADER_Z(h) (((h)[3] >> 4) & 0x7) #define DNS_HEADER_RCODE(h) ((h)[3] & 0xf) #define DNS_HEADER_QDCOUNT(h) DNS__16BIT((h) + 4) #define DNS_HEADER_ANCOUNT(h) DNS__16BIT((h) + 6) #define DNS_HEADER_NSCOUNT(h) DNS__16BIT((h) + 8) #define DNS_HEADER_ARCOUNT(h) DNS__16BIT((h) + 10) /* Macros for constructing a DNS header */ #define DNS_HEADER_SET_QID(h, v) DNS__SET16BIT(h, v) #define DNS_HEADER_SET_QR(h, v) ((h)[2] |= (unsigned char)(((v) & 0x1) << 7)) #define DNS_HEADER_SET_OPCODE(h, v) \ ((h)[2] |= (unsigned char)(((v) & 0xf) << 3)) #define DNS_HEADER_SET_AA(h, v) ((h)[2] |= (unsigned char)(((v) & 0x1) << 2)) #define DNS_HEADER_SET_TC(h, v) ((h)[2] |= (unsigned char)(((v) & 0x1) << 1)) #define DNS_HEADER_SET_RD(h, v) ((h)[2] |= (unsigned char)((v) & 0x1)) #define DNS_HEADER_SET_RA(h, v) ((h)[3] |= (unsigned char)(((v) & 0x1) << 7)) #define DNS_HEADER_SET_Z(h, v) ((h)[3] |= (unsigned char)(((v) & 0x7) << 4)) #define DNS_HEADER_SET_RCODE(h, v) ((h)[3] |= (unsigned char)((v) & 0xf)) #define DNS_HEADER_SET_QDCOUNT(h, v) DNS__SET16BIT((h) + 4, v) #define DNS_HEADER_SET_ANCOUNT(h, v) DNS__SET16BIT((h) + 6, v) #define DNS_HEADER_SET_NSCOUNT(h, v) DNS__SET16BIT((h) + 8, v) #define DNS_HEADER_SET_ARCOUNT(h, v) DNS__SET16BIT((h) + 10, v) /* Macros for parsing the fixed part of a DNS question */ #define DNS_QUESTION_TYPE(q) DNS__16BIT(q) #define DNS_QUESTION_CLASS(q) DNS__16BIT((q) + 2) /* Macros for constructing the fixed part of a DNS question */ #define DNS_QUESTION_SET_TYPE(q, v) DNS__SET16BIT(q, v) #define DNS_QUESTION_SET_CLASS(q, v) DNS__SET16BIT((q) + 2, v) /* Macros for parsing the fixed part of a DNS resource record */ #define DNS_RR_TYPE(r) DNS__16BIT(r) #define DNS_RR_CLASS(r) DNS__16BIT((r) + 2) #define DNS_RR_TTL(r) DNS__32BIT((r) + 4) #define DNS_RR_LEN(r) DNS__16BIT((r) + 8) /* Macros for constructing the fixed part of a DNS resource record */ #define DNS_RR_SET_TYPE(r, v) DNS__SET16BIT(r, v) #define DNS_RR_SET_CLASS(r, v) 
DNS__SET16BIT((r) + 2, v) #define DNS_RR_SET_TTL(r, v) DNS__SET32BIT((r) + 4, v) #define DNS_RR_SET_LEN(r, v) DNS__SET16BIT((r) + 8, v) #endif /* HEADER_CARES_DNS_H */ gevent-24.11.1/deps/c-ares/include/ares_dns_record.h000066400000000000000000001351121471441230600222120ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_DNS_RECORD_H #define __ARES_DNS_RECORD_H /* Include ares.h, not this file directly */ #ifdef __cplusplus extern "C" { #endif /*! \addtogroup ares_dns_record DNS Record Handling * * This is a set of functions to create and manipulate DNS records. * * @{ */ /*! DNS Record types handled by c-ares. Some record types may only be valid * on requests (e.g. ARES_REC_TYPE_ANY), and some may only be valid on * responses */ typedef enum { ARES_REC_TYPE_A = 1, /*!< Host address. */ ARES_REC_TYPE_NS = 2, /*!< Authoritative server. */ ARES_REC_TYPE_CNAME = 5, /*!< Canonical name. */ ARES_REC_TYPE_SOA = 6, /*!< Start of authority zone. */ ARES_REC_TYPE_PTR = 12, /*!< Domain name pointer. */ ARES_REC_TYPE_HINFO = 13, /*!< Host information. */ ARES_REC_TYPE_MX = 15, /*!< Mail routing information. */ ARES_REC_TYPE_TXT = 16, /*!< Text strings. */ ARES_REC_TYPE_SIG = 24, /*!< RFC 2535 / RFC 2931. SIG Record */ ARES_REC_TYPE_AAAA = 28, /*!< RFC 3596. Ip6 Address. */ ARES_REC_TYPE_SRV = 33, /*!< RFC 2782. Server Selection. */ ARES_REC_TYPE_NAPTR = 35, /*!< RFC 3403. Naming Authority Pointer */ ARES_REC_TYPE_OPT = 41, /*!< RFC 6891. EDNS0 option (meta-RR) */ ARES_REC_TYPE_TLSA = 52, /*!< RFC 6698. DNS-Based Authentication of Named * Entities (DANE) Transport Layer Security * (TLS) Protocol: TLSA */ ARES_REC_TYPE_SVCB = 64, /*!< RFC 9460. General Purpose Service Binding */ ARES_REC_TYPE_HTTPS = 65, /*!< RFC 9460. Service Binding type for use with * HTTPS */ ARES_REC_TYPE_ANY = 255, /*!< Wildcard match. Not response RR. */ ARES_REC_TYPE_URI = 256, /*!< RFC 7553. Uniform Resource Identifier */ ARES_REC_TYPE_CAA = 257, /*!< RFC 6844. Certification Authority * Authorization. */ ARES_REC_TYPE_RAW_RR = 65536 /*!< Used as an indicator that the RR record * is not parsed, but provided in wire * format */ } ares_dns_rec_type_t; /*! DNS Classes for requests and responses. 
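 *
 * (Illustrative note, not part of the original c-ares header: a typical
 *  lookup pairs a class from this enum, almost always ARES_CLASS_IN, with one
 *  of the record types above, e.g.
 *    ares_dns_record_query_add(dnsrec, "example.com",
 *                              ARES_REC_TYPE_A, ARES_CLASS_IN);
 *  using ares_dns_record_query_add() as declared later in this header.)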
*/ typedef enum { ARES_CLASS_IN = 1, /*!< Internet */ ARES_CLASS_CHAOS = 3, /*!< CHAOS */ ARES_CLASS_HESOID = 4, /*!< Hesoid [Dyer 87] */ ARES_CLASS_NONE = 254, /*!< RFC 2136 */ ARES_CLASS_ANY = 255 /*!< Any class (requests only) */ } ares_dns_class_t; /*! DNS RR Section type */ typedef enum { ARES_SECTION_ANSWER = 1, /*!< Answer section */ ARES_SECTION_AUTHORITY = 2, /*!< Authority section */ ARES_SECTION_ADDITIONAL = 3 /*!< Additional information section */ } ares_dns_section_t; /*! DNS Header opcodes */ typedef enum { ARES_OPCODE_QUERY = 0, /*!< Standard query */ ARES_OPCODE_IQUERY = 1, /*!< Inverse query. Obsolete. */ ARES_OPCODE_STATUS = 2, /*!< Name server status query */ ARES_OPCODE_NOTIFY = 4, /*!< Zone change notification (RFC 1996) */ ARES_OPCODE_UPDATE = 5 /*!< Zone update message (RFC2136) */ } ares_dns_opcode_t; /*! DNS Header flags */ typedef enum { ARES_FLAG_QR = 1 << 0, /*!< QR. If set, is a response */ ARES_FLAG_AA = 1 << 1, /*!< Authoritative Answer. If set, is authoritative */ ARES_FLAG_TC = 1 << 2, /*!< Truncation. If set, is truncated response */ ARES_FLAG_RD = 1 << 3, /*!< Recursion Desired. If set, recursion is desired */ ARES_FLAG_RA = 1 << 4, /*!< Recursion Available. If set, server supports * recursion */ ARES_FLAG_AD = 1 << 5, /*!< RFC 2065. Authentic Data bit indicates in a * response that the data included has been verified by * the server providing it */ ARES_FLAG_CD = 1 << 6 /*!< RFC 2065. Checking Disabled bit indicates in a * query that non-verified data is acceptable to the * resolver sending the query. */ } ares_dns_flags_t; /*! DNS Response Codes from server */ typedef enum { ARES_RCODE_NOERROR = 0, /*!< Success */ ARES_RCODE_FORMERR = 1, /*!< Format error. The name server was unable * to interpret the query. */ ARES_RCODE_SERVFAIL = 2, /*!< Server Failure. The name server was * unable to process this query due to a * problem with the nameserver */ ARES_RCODE_NXDOMAIN = 3, /*!< Name Error. Meaningful only for * responses from an authoritative name * server, this code signifies that the * domain name referenced in the query does * not exist. */ ARES_RCODE_NOTIMP = 4, /*!< Not implemented. The name server does * not support the requested kind of * query */ ARES_RCODE_REFUSED = 5, /*!< Refused. The name server refuses to * perform the specified operation for * policy reasons. */ ARES_RCODE_YXDOMAIN = 6, /*!< RFC 2136. Some name that ought not to * exist, does exist. */ ARES_RCODE_YXRRSET = 7, /*!< RFC 2136. Some RRset that ought to not * exist, does exist. */ ARES_RCODE_NXRRSET = 8, /*!< RFC 2136. Some RRset that ought to exist, * does not exist. */ ARES_RCODE_NOTAUTH = 9, /*!< RFC 2136. The server is not authoritative * for the zone named in the Zone section. */ ARES_RCODE_NOTZONE = 10, /*!< RFC 2136. A name used in the Prerequisite * or Update Section is not within the zone * denoted by the Zone Section. */ ARES_RCODE_DSOTYPEI = 11, /*!< RFC 8409. DSO-TYPE Not implemented */ ARES_RCODE_BADSIG = 16, /*!< RFC 8945. TSIG Signature Failure */ ARES_RCODE_BADKEY = 17, /*!< RFC 8945. Key not recognized. */ ARES_RCODE_BADTIME = 18, /*!< RFC 8945. Signature out of time window. */ ARES_RCODE_BADMODE = 19, /*!< RFC 2930. Bad TKEY Mode */ ARES_RCODE_BADNAME = 20, /*!< RFC 2930. Duplicate Key Name */ ARES_RCODE_BADALG = 21, /*!< RFC 2930. Algorithm not supported */ ARES_RCODE_BADTRUNC = 22, /*!< RFC 8945. Bad Truncation */ ARES_RCODE_BADCOOKIE = 23 /*!< RFC 7873. Bad/missing Server Cookie */ } ares_dns_rcode_t; /*! 
Data types used */ typedef enum { ARES_DATATYPE_INADDR = 1, /*!< struct in_addr * type */ ARES_DATATYPE_INADDR6 = 2, /*!< struct ares_in6_addr * type */ ARES_DATATYPE_U8 = 3, /*!< 8bit unsigned integer */ ARES_DATATYPE_U16 = 4, /*!< 16bit unsigned integer */ ARES_DATATYPE_U32 = 5, /*!< 32bit unsigned integer */ ARES_DATATYPE_NAME = 6, /*!< Null-terminated string of a domain name */ ARES_DATATYPE_STR = 7, /*!< Null-terminated string */ ARES_DATATYPE_BIN = 8, /*!< Binary data */ ARES_DATATYPE_BINP = 9, /*!< Officially defined as binary data, but likely * printable. Guaranteed to have a NULL * terminator for convenience (not included in * length) */ ARES_DATATYPE_OPT = 10, /*!< Array of options. 16bit identifier, BIN * data. */ ARES_DATATYPE_ABINP = 11 /*!< Array of binary data, likely printable. * Guaranteed to have a NULL terminator for * convenience (not included in length) */ } ares_dns_datatype_t; /*! Keys used for all RR Types. We take the record type and multiply by 100 * to ensure we have a proper offset between keys so we can keep these sorted */ typedef enum { /*! A Record. Address. Datatype: INADDR */ ARES_RR_A_ADDR = (ARES_REC_TYPE_A * 100) + 1, /*! NS Record. Name. Datatype: NAME */ ARES_RR_NS_NSDNAME = (ARES_REC_TYPE_NS * 100) + 1, /*! CNAME Record. CName. Datatype: NAME */ ARES_RR_CNAME_CNAME = (ARES_REC_TYPE_CNAME * 100) + 1, /*! SOA Record. MNAME, Primary Source of Data. Datatype: NAME */ ARES_RR_SOA_MNAME = (ARES_REC_TYPE_SOA * 100) + 1, /*! SOA Record. RNAME, Mailbox of person responsible. Datatype: NAME */ ARES_RR_SOA_RNAME = (ARES_REC_TYPE_SOA * 100) + 2, /*! SOA Record. Serial, version. Datatype: U32 */ ARES_RR_SOA_SERIAL = (ARES_REC_TYPE_SOA * 100) + 3, /*! SOA Record. Refresh, zone refersh interval. Datatype: U32 */ ARES_RR_SOA_REFRESH = (ARES_REC_TYPE_SOA * 100) + 4, /*! SOA Record. Retry, failed refresh retry interval. Datatype: U32 */ ARES_RR_SOA_RETRY = (ARES_REC_TYPE_SOA * 100) + 5, /*! SOA Record. Expire, upper limit on authority. Datatype: U32 */ ARES_RR_SOA_EXPIRE = (ARES_REC_TYPE_SOA * 100) + 6, /*! SOA Record. Minimum, RR TTL. Datatype: U32 */ ARES_RR_SOA_MINIMUM = (ARES_REC_TYPE_SOA * 100) + 7, /*! PTR Record. DNAME, pointer domain. Datatype: NAME */ ARES_RR_PTR_DNAME = (ARES_REC_TYPE_PTR * 100) + 1, /*! HINFO Record. CPU. Datatype: STR */ ARES_RR_HINFO_CPU = (ARES_REC_TYPE_HINFO * 100) + 1, /*! HINFO Record. OS. Datatype: STR */ ARES_RR_HINFO_OS = (ARES_REC_TYPE_HINFO * 100) + 2, /*! MX Record. Preference. Datatype: U16 */ ARES_RR_MX_PREFERENCE = (ARES_REC_TYPE_MX * 100) + 1, /*! MX Record. Exchange, domain. Datatype: NAME */ ARES_RR_MX_EXCHANGE = (ARES_REC_TYPE_MX * 100) + 2, /*! TXT Record. Data. Datatype: ABINP */ ARES_RR_TXT_DATA = (ARES_REC_TYPE_TXT * 100) + 1, /*! SIG Record. Type Covered. Datatype: U16 */ ARES_RR_SIG_TYPE_COVERED = (ARES_REC_TYPE_SIG * 100) + 1, /*! SIG Record. Algorithm. Datatype: U8 */ ARES_RR_SIG_ALGORITHM = (ARES_REC_TYPE_SIG * 100) + 2, /*! SIG Record. Labels. Datatype: U8 */ ARES_RR_SIG_LABELS = (ARES_REC_TYPE_SIG * 100) + 3, /*! SIG Record. Original TTL. Datatype: U32 */ ARES_RR_SIG_ORIGINAL_TTL = (ARES_REC_TYPE_SIG * 100) + 4, /*! SIG Record. Signature Expiration. Datatype: U32 */ ARES_RR_SIG_EXPIRATION = (ARES_REC_TYPE_SIG * 100) + 5, /*! SIG Record. Signature Inception. Datatype: U32 */ ARES_RR_SIG_INCEPTION = (ARES_REC_TYPE_SIG * 100) + 6, /*! SIG Record. Key Tag. Datatype: U16 */ ARES_RR_SIG_KEY_TAG = (ARES_REC_TYPE_SIG * 100) + 7, /*! SIG Record. Signers Name. 
Datatype: NAME */ ARES_RR_SIG_SIGNERS_NAME = (ARES_REC_TYPE_SIG * 100) + 8, /*! SIG Record. Signature. Datatype: BIN */ ARES_RR_SIG_SIGNATURE = (ARES_REC_TYPE_SIG * 100) + 9, /*! AAAA Record. Address. Datatype: INADDR6 */ ARES_RR_AAAA_ADDR = (ARES_REC_TYPE_AAAA * 100) + 1, /*! SRV Record. Priority. Datatype: U16 */ ARES_RR_SRV_PRIORITY = (ARES_REC_TYPE_SRV * 100) + 2, /*! SRV Record. Weight. Datatype: U16 */ ARES_RR_SRV_WEIGHT = (ARES_REC_TYPE_SRV * 100) + 3, /*! SRV Record. Port. Datatype: U16 */ ARES_RR_SRV_PORT = (ARES_REC_TYPE_SRV * 100) + 4, /*! SRV Record. Target domain. Datatype: NAME */ ARES_RR_SRV_TARGET = (ARES_REC_TYPE_SRV * 100) + 5, /*! NAPTR Record. Order. Datatype: U16 */ ARES_RR_NAPTR_ORDER = (ARES_REC_TYPE_NAPTR * 100) + 1, /*! NAPTR Record. Preference. Datatype: U16 */ ARES_RR_NAPTR_PREFERENCE = (ARES_REC_TYPE_NAPTR * 100) + 2, /*! NAPTR Record. Flags. Datatype: STR */ ARES_RR_NAPTR_FLAGS = (ARES_REC_TYPE_NAPTR * 100) + 3, /*! NAPTR Record. Services. Datatype: STR */ ARES_RR_NAPTR_SERVICES = (ARES_REC_TYPE_NAPTR * 100) + 4, /*! NAPTR Record. Regexp. Datatype: STR */ ARES_RR_NAPTR_REGEXP = (ARES_REC_TYPE_NAPTR * 100) + 5, /*! NAPTR Record. Replacement. Datatype: NAME */ ARES_RR_NAPTR_REPLACEMENT = (ARES_REC_TYPE_NAPTR * 100) + 6, /*! OPT Record. UDP Size. Datatype: U16 */ ARES_RR_OPT_UDP_SIZE = (ARES_REC_TYPE_OPT * 100) + 1, /*! OPT Record. Version. Datatype: U8 */ ARES_RR_OPT_VERSION = (ARES_REC_TYPE_OPT * 100) + 3, /*! OPT Record. Flags. Datatype: U16 */ ARES_RR_OPT_FLAGS = (ARES_REC_TYPE_OPT * 100) + 4, /*! OPT Record. Options. Datatype: OPT */ ARES_RR_OPT_OPTIONS = (ARES_REC_TYPE_OPT * 100) + 5, /*! TLSA Record. Certificate Usage. Datatype: U8 */ ARES_RR_TLSA_CERT_USAGE = (ARES_REC_TYPE_TLSA * 100) + 1, /*! TLSA Record. Selector. Datatype: U8 */ ARES_RR_TLSA_SELECTOR = (ARES_REC_TYPE_TLSA * 100) + 2, /*! TLSA Record. Matching Type. Datatype: U8 */ ARES_RR_TLSA_MATCH = (ARES_REC_TYPE_TLSA * 100) + 3, /*! TLSA Record. Certificate Association Data. Datatype: BIN */ ARES_RR_TLSA_DATA = (ARES_REC_TYPE_TLSA * 100) + 4, /*! SVCB Record. SvcPriority. Datatype: U16 */ ARES_RR_SVCB_PRIORITY = (ARES_REC_TYPE_SVCB * 100) + 1, /*! SVCB Record. TargetName. Datatype: NAME */ ARES_RR_SVCB_TARGET = (ARES_REC_TYPE_SVCB * 100) + 2, /*! SVCB Record. SvcParams. Datatype: OPT */ ARES_RR_SVCB_PARAMS = (ARES_REC_TYPE_SVCB * 100) + 3, /*! HTTPS Record. SvcPriority. Datatype: U16 */ ARES_RR_HTTPS_PRIORITY = (ARES_REC_TYPE_HTTPS * 100) + 1, /*! HTTPS Record. TargetName. Datatype: NAME */ ARES_RR_HTTPS_TARGET = (ARES_REC_TYPE_HTTPS * 100) + 2, /*! HTTPS Record. SvcParams. Datatype: OPT */ ARES_RR_HTTPS_PARAMS = (ARES_REC_TYPE_HTTPS * 100) + 3, /*! URI Record. Priority. Datatype: U16 */ ARES_RR_URI_PRIORITY = (ARES_REC_TYPE_URI * 100) + 1, /*! URI Record. Weight. Datatype: U16 */ ARES_RR_URI_WEIGHT = (ARES_REC_TYPE_URI * 100) + 2, /*! URI Record. Target domain. Datatype: NAME */ ARES_RR_URI_TARGET = (ARES_REC_TYPE_URI * 100) + 3, /*! CAA Record. Critical flag. Datatype: U8 */ ARES_RR_CAA_CRITICAL = (ARES_REC_TYPE_CAA * 100) + 1, /*! CAA Record. Tag/Property. Datatype: STR */ ARES_RR_CAA_TAG = (ARES_REC_TYPE_CAA * 100) + 2, /*! CAA Record. Value. Datatype: BINP */ ARES_RR_CAA_VALUE = (ARES_REC_TYPE_CAA * 100) + 3, /*! RAW Record. RR Type. Datatype: U16 */ ARES_RR_RAW_RR_TYPE = (ARES_REC_TYPE_RAW_RR * 100) + 1, /*! RAW Record. RR Data. Datatype: BIN */ ARES_RR_RAW_RR_DATA = (ARES_REC_TYPE_RAW_RR * 100) + 2 } ares_dns_rr_key_t; /*! 
TLSA Record ARES_RR_TLSA_CERT_USAGE known values */ typedef enum { /*! Certificate Usage 0. CA Constraint. */ ARES_TLSA_USAGE_CA = 0, /*! Certificate Usage 1. Service Certificate Constraint. */ ARES_TLSA_USAGE_SERVICE = 1, /*! Certificate Usage 2. Trust Anchor Assertion. */ ARES_TLSA_USAGE_TRUSTANCHOR = 2, /*! Certificate Usage 3. Domain-issued certificate. */ ARES_TLSA_USAGE_DOMAIN = 3 } ares_tlsa_usage_t; /*! TLSA Record ARES_RR_TLSA_SELECTOR known values */ typedef enum { /*! Full Certificate */ ARES_TLSA_SELECTOR_FULL = 0, /*! DER-encoded SubjectPublicKeyInfo */ ARES_TLSA_SELECTOR_SUBJPUBKEYINFO = 1 } ares_tlsa_selector_t; /*! TLSA Record ARES_RR_TLSA_MATCH known values */ typedef enum { /*! Exact match */ ARES_TLSA_MATCH_EXACT = 0, /*! Sha256 match */ ARES_TLSA_MATCH_SHA256 = 1, /*! Sha512 match */ ARES_TLSA_MATCH_SHA512 = 2 } ares_tlsa_match_t; /*! SVCB (and HTTPS) RR known parameters */ typedef enum { /*! Mandatory keys in this RR (RFC 9460 Section 8) */ ARES_SVCB_PARAM_MANDATORY = 0, /*! Additional supported protocols (RFC 9460 Section 7.1) */ ARES_SVCB_PARAM_ALPN = 1, /*! No support for default protocol (RFC 9460 Section 7.1) */ ARES_SVCB_PARAM_NO_DEFAULT_ALPN = 2, /*! Port for alternative endpoint (RFC 9460 Section 7.2) */ ARES_SVCB_PARAM_PORT = 3, /*! IPv4 address hints (RFC 9460 Section 7.3) */ ARES_SVCB_PARAM_IPV4HINT = 4, /*! RESERVED (held for Encrypted ClientHello) */ ARES_SVCB_PARAM_ECH = 5, /*! IPv6 address hints (RFC 9460 Section 7.3) */ ARES_SVCB_PARAM_IPV6HINT = 6 } ares_svcb_param_t; /*! OPT RR known parameters */ typedef enum { /*! RFC 8764. Apple's DNS Long-Lived Queries Protocol */ ARES_OPT_PARAM_LLQ = 1, /*! http://files.dns-sd.org/draft-sekar-dns-ul.txt: Update Lease */ ARES_OPT_PARAM_UL = 2, /*! RFC 5001. Name Server Identification */ ARES_OPT_PARAM_NSID = 3, /*! RFC 6975. DNSSEC Algorithm Understood */ ARES_OPT_PARAM_DAU = 5, /*! RFC 6975. DS Hash Understood */ ARES_OPT_PARAM_DHU = 6, /*! RFC 6975. NSEC3 Hash Understood */ ARES_OPT_PARAM_N3U = 7, /*! RFC 7871. Client Subnet */ ARES_OPT_PARAM_EDNS_CLIENT_SUBNET = 8, /*! RFC 7314. Expire Timer */ ARES_OPT_PARAM_EDNS_EXPIRE = 9, /*! RFC 7873. Client and Server Cookies */ ARES_OPT_PARAM_COOKIE = 10, /*! RFC 7828. TCP Keepalive timeout */ ARES_OPT_PARAM_EDNS_TCP_KEEPALIVE = 11, /*! RFC 7830. Padding */ ARES_OPT_PARAM_PADDING = 12, /*! RFC 7901. Chain query requests */ ARES_OPT_PARAM_CHAIN = 13, /*! RFC 8145. Signaling Trust Anchor Knowledge in DNSSEC */ ARES_OPT_PARAM_EDNS_KEY_TAG = 14, /*! RFC 8914. Extended ERROR code and message */ ARES_OPT_PARAM_EXTENDED_DNS_ERROR = 15 } ares_opt_param_t; /*! Data type for option records for keys like ARES_RR_OPT_OPTIONS and * ARES_RR_HTTPS_PARAMS returned by ares_dns_opt_get_datatype() */ typedef enum { /*! No value allowed for this option */ ARES_OPT_DATATYPE_NONE = 1, /*! List of strings, each prefixed with a single octet representing the length */ ARES_OPT_DATATYPE_STR_LIST = 2, /*! List of 8bit integers, concatenated */ ARES_OPT_DATATYPE_U8_LIST = 3, /*! 16bit integer in network byte order */ ARES_OPT_DATATYPE_U16 = 4, /*! list of 16bit integer in network byte order, concatenated. */ ARES_OPT_DATATYPE_U16_LIST = 5, /*! 32bit integer in network byte order */ ARES_OPT_DATATYPE_U32 = 6, /*! list 32bit integer in network byte order, concatenated */ ARES_OPT_DATATYPE_U32_LIST = 7, /*! List of ipv4 addresses in network byte order, concatenated */ ARES_OPT_DATATYPE_INADDR4_LIST = 8, /*! 
List of ipv6 addresses in network byte order, concatenated */ ARES_OPT_DATATYPE_INADDR6_LIST = 9, /*! Binary Data */ ARES_OPT_DATATYPE_BIN = 10, /*! DNS Domain Name Format */ ARES_OPT_DATATYPE_NAME = 11 } ares_dns_opt_datatype_t; /*! Data type for flags to ares_dns_parse() */ typedef enum { /*! Parse Answers from RFC 1035 that allow name compression as RAW */ ARES_DNS_PARSE_AN_BASE_RAW = 1 << 0, /*! Parse Authority from RFC 1035 that allow name compression as RAW */ ARES_DNS_PARSE_NS_BASE_RAW = 1 << 1, /*! Parse Additional from RFC 1035 that allow name compression as RAW */ ARES_DNS_PARSE_AR_BASE_RAW = 1 << 2, /*! Parse Answers from later RFCs (no name compression) RAW */ ARES_DNS_PARSE_AN_EXT_RAW = 1 << 3, /*! Parse Authority from later RFCs (no name compression) as RAW */ ARES_DNS_PARSE_NS_EXT_RAW = 1 << 4, /*! Parse Additional from later RFCs (no name compression) as RAW */ ARES_DNS_PARSE_AR_EXT_RAW = 1 << 5 } ares_dns_parse_flags_t; /*! String representation of DNS Record Type * * \param[in] type DNS Record Type * \return string */ CARES_EXTERN const char *ares_dns_rec_type_tostr(ares_dns_rec_type_t type); /*! String representation of DNS Class * * \param[in] qclass DNS Class * \return string */ CARES_EXTERN const char *ares_dns_class_tostr(ares_dns_class_t qclass); /*! String representation of DNS OpCode * * \param[in] opcode DNS OpCode * \return string */ CARES_EXTERN const char *ares_dns_opcode_tostr(ares_dns_opcode_t opcode); /*! String representation of DNS Resource Record Parameter * * \param[in] key DNS Resource Record parameter * \return string */ CARES_EXTERN const char *ares_dns_rr_key_tostr(ares_dns_rr_key_t key); /*! String representation of DNS Resource Record section * * \param[in] section Section * \return string */ CARES_EXTERN const char *ares_dns_section_tostr(ares_dns_section_t section); /*! Convert DNS class name as string to ares_dns_class_t * * \param[out] qclass Pointer passed by reference to write class * \param[in] str String to convert * \return ARES_TRUE on success */ CARES_EXTERN ares_bool_t ares_dns_class_fromstr(ares_dns_class_t *qclass, const char *str); /*! Convert DNS record type as string to ares_dns_rec_type_t * * \param[out] qtype Pointer passed by reference to write record type * \param[in] str String to convert * \return ARES_TRUE on success */ CARES_EXTERN ares_bool_t ares_dns_rec_type_fromstr(ares_dns_rec_type_t *qtype, const char *str); /*! Convert DNS response code as string to from ares_dns_rcode_t * * \param[in] rcode Response code to convert * \return ARES_TRUE on success */ CARES_EXTERN const char *ares_dns_rcode_tostr(ares_dns_rcode_t rcode); /*! Convert any valid ip address (ipv4 or ipv6) into struct ares_addr and * return the starting pointer of the network byte order address and the * length of the address (4 or 16). * * \param[in] ipaddr ASCII string form of the ip address * \param[in,out] addr Must set "family" member to one of AF_UNSPEC, * AF_INET, AF_INET6 on input. * \param[out] out_len Length of binary form address * \return Pointer to start of binary address or NULL on error. */ CARES_EXTERN const void *ares_dns_pton(const char *ipaddr, struct ares_addr *addr, size_t *out_len); /*! Convert an ip address into the PTR format for in-addr.arpa or in6.arpa * * \param[in] addr properly filled address structure * \return String representing PTR, use ares_free_string() to free */ CARES_EXTERN char *ares_dns_addr_to_ptr(const struct ares_addr *addr); /*! 
The options/parameters extensions to some RRs can be somewhat opaque, this * is a helper to return the best match for a datatype for interpreting the * option record. * * \param[in] key Key associated with options/parameters * \param[in] opt Option Key/Parameter * \return Datatype */ CARES_EXTERN ares_dns_opt_datatype_t ares_dns_opt_get_datatype(ares_dns_rr_key_t key, unsigned short opt); /*! The options/parameters extensions to some RRs can be somewhat opaque, this * is a helper to return the name if the option is known. * * \param[in] key Key associated with options/parameters * \param[in] opt Option Key/Parameter * \return name, or NULL if not known. */ CARES_EXTERN const char *ares_dns_opt_get_name(ares_dns_rr_key_t key, unsigned short opt); /*! Retrieve a list of Resource Record keys that can be set or retrieved for * the Resource record type. * * \param[in] type Record Type * \param[out] cnt Number of keys returned * \return array of keys associated with Resource Record */ CARES_EXTERN const ares_dns_rr_key_t * ares_dns_rr_get_keys(ares_dns_rec_type_t type, size_t *cnt); /*! Retrieve the datatype associated with a Resource Record key. * * \param[in] key Resource Record Key * \return datatype */ CARES_EXTERN ares_dns_datatype_t ares_dns_rr_key_datatype(ares_dns_rr_key_t key); /*! Retrieve the DNS Resource Record type associated with a Resource Record key. * * \param[in] key Resource Record Key * \return DNS Resource Record Type */ CARES_EXTERN ares_dns_rec_type_t ares_dns_rr_key_to_rec_type(ares_dns_rr_key_t key); /*! Opaque data type representing a DNS RR (Resource Record) */ struct ares_dns_rr; /*! Typedef for opaque data type representing a DNS RR (Resource Record) */ typedef struct ares_dns_rr ares_dns_rr_t; /*! Opaque data type representing a DNS Query Data QD Packet */ struct ares_dns_qd; /*! Typedef for opaque data type representing a DNS Query Data QD Packet */ typedef struct ares_dns_qd ares_dns_qd_t; /*! Opaque data type representing a DNS Packet */ struct ares_dns_record; /*! Typedef for opaque data type representing a DNS Packet */ typedef struct ares_dns_record ares_dns_record_t; /*! Create a new DNS record object * * \param[out] dnsrec Pointer passed by reference for a newly allocated * record object. Must be ares_dns_record_destroy()'d by * caller. * \param[in] id DNS Query ID. If structuring a new query to be sent * with ares_send(), this value should be zero. * \param[in] flags DNS Flags from \ares_dns_flags_t * \param[in] opcode DNS OpCode (typically ARES_OPCODE_QUERY) * \param[in] rcode DNS RCode * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_create(ares_dns_record_t **dnsrec, unsigned short id, unsigned short flags, ares_dns_opcode_t opcode, ares_dns_rcode_t rcode); /*! Destroy a DNS record object * * \param[in] dnsrec Initialized record object */ CARES_EXTERN void ares_dns_record_destroy(ares_dns_record_t *dnsrec); /*! Get the DNS Query ID * * \param[in] dnsrec Initialized record object * \return DNS query id */ CARES_EXTERN unsigned short ares_dns_record_get_id(const ares_dns_record_t *dnsrec); /*! Overwrite the DNS query id * * \param[in] dnsrec Initialized record object * \param[in] id DNS query id * \return ARES_TRUE on success, ARES_FALSE on usage error */ CARES_EXTERN ares_bool_t ares_dns_record_set_id(ares_dns_record_t *dnsrec, unsigned short id); /*! 
Get the DNS Record Flags * * \param[in] dnsrec Initialized record object * \return One or more \ares_dns_flags_t */ CARES_EXTERN unsigned short ares_dns_record_get_flags(const ares_dns_record_t *dnsrec); /*! Get the DNS Record OpCode * * \param[in] dnsrec Initialized record object * \return opcode */ CARES_EXTERN ares_dns_opcode_t ares_dns_record_get_opcode(const ares_dns_record_t *dnsrec); /*! Get the DNS Record RCode * * \param[in] dnsrec Initialized record object * \return rcode */ CARES_EXTERN ares_dns_rcode_t ares_dns_record_get_rcode(const ares_dns_record_t *dnsrec); /*! Add a query to the DNS Record. Typically a record will have only 1 * query. Most DNS servers will reject queries with more than 1 question. * * \param[in] dnsrec Initialized record object * \param[in] name Name/Hostname of request * \param[in] qtype Type of query * \param[in] qclass Class of query (typically ARES_CLASS_IN) * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_query_add(ares_dns_record_t *dnsrec, const char *name, ares_dns_rec_type_t qtype, ares_dns_class_t qclass); /*! Replace the question name with a new name. This may be used when performing * a search with aliases. * * Note that this will invalidate the name pointer returned from * ares_dns_record_query_get(). * * \param[in] dnsrec Initialized record object * \param[in] idx Index of question (typically 0) * \param[in] name Name to use as replacement. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_query_set_name( ares_dns_record_t *dnsrec, size_t idx, const char *name); /*! Replace the question type with a different type. This may be used when * needing to query more than one address class (e.g. A and AAAA) * * \param[in] dnsrec Initialized record object * \param[in] idx Index of question (typically 0) * \param[in] qtype Record Type to use as replacement. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_query_set_type( ares_dns_record_t *dnsrec, size_t idx, ares_dns_rec_type_t qtype); /*! Get the count of queries in the DNS Record * * \param[in] dnsrec Initialized record object * \return count of queries */ CARES_EXTERN size_t ares_dns_record_query_cnt(const ares_dns_record_t *dnsrec); /*! Get the data about the query at the provided index. * * \param[in] dnsrec Initialized record object * \param[in] idx Index of query * \param[out] name Optional. Returns name, may pass NULL if not desired. * This pointer will be invalided by any call to * ares_dns_record_query_set_name(). * \param[out] qtype Optional. Returns record type, may pass NULL. * \param[out] qclass Optional. Returns class, may pass NULL. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_query_get( const ares_dns_record_t *dnsrec, size_t idx, const char **name, ares_dns_rec_type_t *qtype, ares_dns_class_t *qclass); /*! Get the count of Resource Records in the provided section * * \param[in] dnsrec Initialized record object * \param[in] sect Section. ARES_SECTION_ANSWER is most used. * \return count of resource records. */ CARES_EXTERN size_t ares_dns_record_rr_cnt(const ares_dns_record_t *dnsrec, ares_dns_section_t sect); /*! Add a Resource Record to the DNS Record. * * \param[out] rr_out Pointer to created resource record. This pointer * is owned by the DNS record itself, this is just made * available to facilitate adding RR-specific fields. 
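 *                    (Illustrative sketch, not part of the original c-ares
 *                    header: after
 *                      ares_dns_record_rr_add(&rr, dnsrec, ARES_SECTION_ANSWER,
 *                                             "example.com", ARES_REC_TYPE_A,
 *                                             ARES_CLASS_IN, 300);
 *                    a caller would typically populate the record via
 *                    ares_dns_rr_set_addr(rr, ARES_RR_A_ADDR, &addr).)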
* \param[in] dnsrec Initialized record object * \param[in] sect Section to add resource record to * \param[in] name Resource Record name/hostname * \param[in] type Record Type * \param[in] rclass Class * \param[in] ttl TTL * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_record_rr_add( ares_dns_rr_t **rr_out, ares_dns_record_t *dnsrec, ares_dns_section_t sect, const char *name, ares_dns_rec_type_t type, ares_dns_class_t rclass, unsigned int ttl); /*! Fetch a writable resource record based on the section and index. * * \param[in] dnsrec Initialized record object * \param[in] sect Section for resource record * \param[in] idx Index of resource record in section * \return NULL on misuse, otherwise a writable pointer to the resource record */ CARES_EXTERN ares_dns_rr_t *ares_dns_record_rr_get(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx); /*! Fetch a non-writeable resource record based on the section and index. * * \param[in] dnsrec Initialized record object * \param[in] sect Section for resource record * \param[in] idx Index of resource record in section * \return NULL on misuse, otherwise a const pointer to the resource record */ CARES_EXTERN const ares_dns_rr_t * ares_dns_record_rr_get_const(const ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx); /*! Remove the resource record based on the section and index * * \param[in] dnsrec Initialized record object * \param[in] sect Section for resource record * \param[in] idx Index of resource record in section * \return ARES_SUCCESS on success, otherwise an error code. */ CARES_EXTERN ares_status_t ares_dns_record_rr_del(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx); /*! Retrieve the resource record Name/Hostname * * \param[in] rr Pointer to resource record * \return Name */ CARES_EXTERN const char *ares_dns_rr_get_name(const ares_dns_rr_t *rr); /*! Retrieve the resource record type * * \param[in] rr Pointer to resource record * \return type */ CARES_EXTERN ares_dns_rec_type_t ares_dns_rr_get_type(const ares_dns_rr_t *rr); /*! Retrieve the resource record class * * \param[in] rr Pointer to resource record * \return class */ CARES_EXTERN ares_dns_class_t ares_dns_rr_get_class(const ares_dns_rr_t *rr); /*! Retrieve the resource record TTL * * \param[in] rr Pointer to resource record * \return TTL */ CARES_EXTERN unsigned int ares_dns_rr_get_ttl(const ares_dns_rr_t *rr); /*! Set ipv4 address data type for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_INADDR * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] addr Pointer to ipv4 address to use. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_addr(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const struct in_addr *addr); /*! Set ipv6 address data type for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_INADDR6 * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] addr Pointer to ipv6 address to use. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_addr6(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const struct ares_in6_addr *addr); /*! Set string data for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_STR or ARES_DATATYPE_NAME. 
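 * For example (illustrative sketch, not part of the original c-ares header),
 * ares_dns_rr_set_str(rr, ARES_RR_CNAME_CNAME, "target.example.com") sets the
 * canonical name on a CNAME resource record, since ARES_RR_CNAME_CNAME has
 * datatype ARES_DATATYPE_NAME.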
* * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val Pointer to string to set. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_str(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const char *val); /*! Set 8bit unsigned integer for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_U8 * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val 8bit unsigned integer * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_u8(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned char val); /*! Set 16bit unsigned integer for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_U16 * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val 16bit unsigned integer * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_u16(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short val); /*! Set 32bit unsigned integer for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_U32 * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val 32bit unsigned integer * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_u32(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned int val); /*! Set binary (BIN or BINP) data for specified resource record and key. Can * only be used on keys with datatype ARES_DATATYPE_BIN or ARES_DATATYPE_BINP. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val Pointer to binary data. * \param[in] len Length of binary data * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_bin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const unsigned char *val, size_t len); /*! Add binary array value (ABINP) data for specified resource record and key. * Can only be used on keys with datatype ARES_DATATYPE_ABINP. The value will * Be added as the last element in the array. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] val Pointer to binary data. * \param[in] len Length of binary data * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_add_abin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const unsigned char *val, size_t len); /*! Delete binary array value (ABINP) data for specified resource record and * key by specified index. Can only be used on keys with datatype * ARES_DATATYPE_ABINP. The value at the index will be deleted. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] idx Index to delete * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_del_abin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx); /*! Set the option for the RR * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] opt Option record key id. * \param[out] val Optional. Value to associate with option. * \param[out] val_len Length of value passed. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_rr_set_opt(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, const unsigned char *val, size_t val_len); /*! 
Delete the option for the RR by id * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] opt Option record key id. * \return ARES_SUCCESS if removed, ARES_ENOTFOUND if not found */ CARES_EXTERN ares_status_t ares_dns_rr_del_opt_byid(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt); /*! Retrieve a pointer to the ipv4 address. Can only be used on keys with * datatype ARES_DATATYPE_INADDR. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return pointer to ipv4 address or NULL on error */ CARES_EXTERN const struct in_addr * ares_dns_rr_get_addr(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve a pointer to the ipv6 address. Can only be used on keys with * datatype ARES_DATATYPE_INADDR6. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return pointer to ipv6 address or NULL on error */ CARES_EXTERN const struct ares_in6_addr * ares_dns_rr_get_addr6(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve a pointer to the string. Can only be used on keys with * datatype ARES_DATATYPE_STR and ARES_DATATYPE_NAME. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return pointer string or NULL on error */ CARES_EXTERN const char *ares_dns_rr_get_str(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve an 8bit unsigned integer. Can only be used on keys with * datatype ARES_DATATYPE_U8. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return 8bit unsigned integer */ CARES_EXTERN unsigned char ares_dns_rr_get_u8(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve an 16bit unsigned integer. Can only be used on keys with * datatype ARES_DATATYPE_U16. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return 16bit unsigned integer */ CARES_EXTERN unsigned short ares_dns_rr_get_u16(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve an 32bit unsigned integer. Can only be used on keys with * datatype ARES_DATATYPE_U32. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return 32bit unsigned integer */ CARES_EXTERN unsigned int ares_dns_rr_get_u32(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve a pointer to the binary data. Can only be used on keys with * datatype ARES_DATATYPE_BIN, ARES_DATATYPE_BINP, or ARES_DATATYPE_ABINP. * If BINP or ABINP, the data is guaranteed to have a NULL terminator which * is NOT included in the length. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[out] len Length of binary data returned * \return pointer binary data or NULL on error */ CARES_EXTERN const unsigned char * ares_dns_rr_get_bin(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t *len); /*! Retrieve the count of the array of stored binary values. Can only be used on * keys with datatype ARES_DATATYPE_ABINP. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return count of values */ CARES_EXTERN size_t ares_dns_rr_get_abin_cnt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve a pointer to the binary array data from the specified index. Can * only be used on keys with datatype ARES_DATATYPE_ABINP. If ABINP, the data * is guaranteed to have a NULL terminator which is NOT included in the length. 
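 *
 * Illustrative sketch (not part of the original c-ares header): to walk the
 * strings of a TXT record one element at a time, a caller might do
 *
 *   size_t i, cnt = ares_dns_rr_get_abin_cnt(rr, ARES_RR_TXT_DATA);
 *   for (i = 0; i < cnt; i++) {
 *     size_t len;
 *     const unsigned char *chunk =
 *       ares_dns_rr_get_abin(rr, ARES_RR_TXT_DATA, i, &len);
 *     handle_txt_chunk(chunk, len);   (handle_txt_chunk is hypothetical)
 *   }
 *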
* If you want all array members concatenated, you may use ares_dns_rr_get_bin() * instead. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] idx Index of value to retrieve * \param[out] len Length of binary data returned * \return pointer to binary data or NULL on error */ CARES_EXTERN const unsigned char * ares_dns_rr_get_abin(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx, size_t *len); /*! Retrieve the number of options stored for the RR. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \return count, or 0 if none. */ CARES_EXTERN size_t ares_dns_rr_get_opt_cnt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key); /*! Retrieve the option for the RR by index. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] idx Index of option record * \param[out] val Optional. Pointer passed by reference to hold value. * Options may not have values. Value if returned is * guaranteed to be NULL terminated, however in most * cases it is not printable. * \param[out] val_len Optional. Pointer passed by reference to hold value * length. * \return option key/id on success, 65535 on misuse. */ CARES_EXTERN unsigned short ares_dns_rr_get_opt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx, const unsigned char **val, size_t *val_len); /*! Retrieve the option for the RR by the option key/id. * * \param[in] dns_rr Pointer to resource record * \param[in] key DNS Resource Record Key * \param[in] opt Option record key id (this is not the index). * \param[out] val Optional. Pointer passed by reference to hold value. * Options may not have values. Value if returned is * guaranteed to be NULL terminated, however in most cases * it is not printable. * \param[out] val_len Optional. Pointer passed by reference to hold value * length. * \return ARES_TRUE on success, ARES_FALSE on misuse. */ CARES_EXTERN ares_bool_t ares_dns_rr_get_opt_byid(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, const unsigned char **val, size_t *val_len); /*! Parse a complete DNS message. * * \param[in] buf pointer to bytes to be parsed * \param[in] buf_len Length of buf provided * \param[in] flags Flags dictating how the message should be parsed. * \param[out] dnsrec Pointer passed by reference for a new DNS record object * that must be ares_dns_record_destroy()'d by caller. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_parse(const unsigned char *buf, size_t buf_len, unsigned int flags, ares_dns_record_t **dnsrec); /*! Write a complete DNS message * * \param[in] dnsrec Pointer to initialized and filled DNS record object. * \param[out] buf Pointer passed by reference to be filled in with the * DNS message. Must be ares_free()'d by caller. * \param[out] buf_len Length of returned buffer containing DNS message. * \return ARES_SUCCESS on success */ CARES_EXTERN ares_status_t ares_dns_write(const ares_dns_record_t *dnsrec, unsigned char **buf, size_t *buf_len); /*! Duplicate a complete DNS message. This does not copy internal members * (such as the ttl decrement capability). * * \param[in] dnsrec Pointer to initialized and filled DNS record object. * \return duplicated DNS record object, or NULL on out of memory. */ CARES_EXTERN ares_dns_record_t * ares_dns_record_duplicate(const ares_dns_record_t *dnsrec); /*! 
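 * \par Example (editorial addition, not part of the upstream header)
 * A minimal sketch of the parse/write round trip documented above. The
 * wire / wire_len input buffer is an assumed caller-supplied wire-format
 * message, the flags value 0 is used to request default parse behaviour,
 * and error handling is reduced to early cleanup.
 * \code
 * ares_dns_record_t *dnsrec = NULL;
 * unsigned char *out = NULL;
 * size_t out_len = 0;
 *
 * // Parse the wire-format message into a DNS record object.
 * if (ares_dns_parse(wire, wire_len, 0, &dnsrec) != ARES_SUCCESS)
 *   return;
 *
 * // Serialize it back to wire format; the buffer must be ares_free()'d.
 * if (ares_dns_write(dnsrec, &out, &out_len) == ARES_SUCCESS) {
 *   // ... use out / out_len ...
 *   ares_free(out);
 * }
 *
 * ares_dns_record_destroy(dnsrec);
 * \endcode
 */
/*! 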
@} */ #ifdef __cplusplus } #endif #endif /* __ARES_DNS_RECORD_H */ gevent-24.11.1/deps/c-ares/include/ares_nameser.h000066400000000000000000000320571471441230600215260ustar00rootroot00000000000000/* MIT License * * Copyright (c) Massachusetts Institute of Technology * Copyright (c) Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef ARES_NAMESER_H #define ARES_NAMESER_H #include "ares_build.h" #ifdef CARES_HAVE_ARPA_NAMESER_H # include #endif #ifdef CARES_HAVE_ARPA_NAMESER_COMPAT_H # include #endif /* ============================================================================ * arpa/nameser.h may or may not provide ALL of the below defines, so check * each one individually and set if not * ============================================================================ */ #ifndef NS_PACKETSZ # define NS_PACKETSZ 512 /* maximum packet size */ #endif #ifndef NS_MAXDNAME # define NS_MAXDNAME 256 /* maximum domain name */ #endif #ifndef NS_MAXCDNAME # define NS_MAXCDNAME 255 /* maximum compressed domain name */ #endif #ifndef NS_MAXLABEL # define NS_MAXLABEL 63 #endif #ifndef NS_HFIXEDSZ # define NS_HFIXEDSZ 12 /* #/bytes of fixed data in header */ #endif #ifndef NS_QFIXEDSZ # define NS_QFIXEDSZ 4 /* #/bytes of fixed data in query */ #endif #ifndef NS_RRFIXEDSZ # define NS_RRFIXEDSZ 10 /* #/bytes of fixed data in r record */ #endif #ifndef NS_INT16SZ # define NS_INT16SZ 2 #endif #ifndef NS_INADDRSZ # define NS_INADDRSZ 4 #endif #ifndef NS_IN6ADDRSZ # define NS_IN6ADDRSZ 16 #endif #ifndef NS_CMPRSFLGS # define NS_CMPRSFLGS 0xc0 /* Flag bits indicating name compression. */ #endif #ifndef NS_DEFAULTPORT # define NS_DEFAULTPORT 53 /* For both TCP and UDP. */ #endif /* ============================================================================ * arpa/nameser.h should provide these enumerations always, so if not found, * provide them * ============================================================================ */ #ifndef CARES_HAVE_ARPA_NAMESER_H typedef enum __ns_class { ns_c_invalid = 0, /* Cookie. */ ns_c_in = 1, /* Internet. */ ns_c_2 = 2, /* unallocated/unsupported. */ ns_c_chaos = 3, /* MIT Chaos-net. */ ns_c_hs = 4, /* MIT Hesiod. */ /* Query class values which do not appear in resource records */ ns_c_none = 254, /* for prereq. sections in update requests */ ns_c_any = 255, /* Wildcard match. */ ns_c_max = 65536 } ns_class; typedef enum __ns_type { ns_t_invalid = 0, /* Cookie. */ ns_t_a = 1, /* Host address. 
*/ ns_t_ns = 2, /* Authoritative server. */ ns_t_md = 3, /* Mail destination. */ ns_t_mf = 4, /* Mail forwarder. */ ns_t_cname = 5, /* Canonical name. */ ns_t_soa = 6, /* Start of authority zone. */ ns_t_mb = 7, /* Mailbox domain name. */ ns_t_mg = 8, /* Mail group member. */ ns_t_mr = 9, /* Mail rename name. */ ns_t_null = 10, /* Null resource record. */ ns_t_wks = 11, /* Well known service. */ ns_t_ptr = 12, /* Domain name pointer. */ ns_t_hinfo = 13, /* Host information. */ ns_t_minfo = 14, /* Mailbox information. */ ns_t_mx = 15, /* Mail routing information. */ ns_t_txt = 16, /* Text strings. */ ns_t_rp = 17, /* Responsible person. */ ns_t_afsdb = 18, /* AFS cell database. */ ns_t_x25 = 19, /* X_25 calling address. */ ns_t_isdn = 20, /* ISDN calling address. */ ns_t_rt = 21, /* Router. */ ns_t_nsap = 22, /* NSAP address. */ ns_t_nsap_ptr = 23, /* Reverse NSAP lookup (deprecated). */ ns_t_sig = 24, /* Security signature. */ ns_t_key = 25, /* Security key. */ ns_t_px = 26, /* X.400 mail mapping. */ ns_t_gpos = 27, /* Geographical position (withdrawn). */ ns_t_aaaa = 28, /* Ip6 Address. */ ns_t_loc = 29, /* Location Information. */ ns_t_nxt = 30, /* Next domain (security). */ ns_t_eid = 31, /* Endpoint identifier. */ ns_t_nimloc = 32, /* Nimrod Locator. */ ns_t_srv = 33, /* Server Selection. */ ns_t_atma = 34, /* ATM Address */ ns_t_naptr = 35, /* Naming Authority PoinTeR */ ns_t_kx = 36, /* Key Exchange */ ns_t_cert = 37, /* Certification record */ ns_t_a6 = 38, /* IPv6 address (deprecates AAAA) */ ns_t_dname = 39, /* Non-terminal DNAME (for IPv6) */ ns_t_sink = 40, /* Kitchen sink (experimental) */ ns_t_opt = 41, /* EDNS0 option (meta-RR) */ ns_t_apl = 42, /* Address prefix list (RFC3123) */ ns_t_ds = 43, /* Delegation Signer (RFC4034) */ ns_t_sshfp = 44, /* SSH Key Fingerprint (RFC4255) */ ns_t_rrsig = 46, /* Resource Record Signature (RFC4034) */ ns_t_nsec = 47, /* Next Secure (RFC4034) */ ns_t_dnskey = 48, /* DNS Public Key (RFC4034) */ ns_t_tkey = 249, /* Transaction key */ ns_t_tsig = 250, /* Transaction signature. */ ns_t_ixfr = 251, /* Incremental zone transfer. */ ns_t_axfr = 252, /* Transfer zone of authority. */ ns_t_mailb = 253, /* Transfer mailbox records. */ ns_t_maila = 254, /* Transfer mail agent records. */ ns_t_any = 255, /* Wildcard match. */ ns_t_uri = 256, /* Uniform Resource Identifier (RFC7553) */ ns_t_caa = 257, /* Certification Authority Authorization. */ ns_t_max = 65536 } ns_type; typedef enum __ns_opcode { ns_o_query = 0, /* Standard query. */ ns_o_iquery = 1, /* Inverse query (deprecated/unsupported). */ ns_o_status = 2, /* Name server status query (unsupported). */ /* Opcode 3 is undefined/reserved. */ ns_o_notify = 4, /* Zone change notification. */ ns_o_update = 5, /* Zone update message. */ ns_o_max = 6 } ns_opcode; typedef enum __ns_rcode { ns_r_noerror = 0, /* No error occurred. */ ns_r_formerr = 1, /* Format error. */ ns_r_servfail = 2, /* Server failure. */ ns_r_nxdomain = 3, /* Name error. */ ns_r_notimpl = 4, /* Unimplemented. */ ns_r_refused = 5, /* Operation refused. 
*/ /* these are for BIND_UPDATE */ ns_r_yxdomain = 6, /* Name exists */ ns_r_yxrrset = 7, /* RRset exists */ ns_r_nxrrset = 8, /* RRset does not exist */ ns_r_notauth = 9, /* Not authoritative for zone */ ns_r_notzone = 10, /* Zone of record different from zone section */ ns_r_max = 11, /* The following are TSIG extended errors */ ns_r_badsig = 16, ns_r_badkey = 17, ns_r_badtime = 18 } ns_rcode; #endif /* CARES_HAVE_ARPA_NAMESER_H */ /* ============================================================================ * arpa/nameser_compat.h typically sets these. However on some systems * arpa/nameser.h does, but may not set all of them. Lets conditionally * define each * ============================================================================ */ #ifndef PACKETSZ # define PACKETSZ NS_PACKETSZ #endif #ifndef MAXDNAME # define MAXDNAME NS_MAXDNAME #endif #ifndef MAXCDNAME # define MAXCDNAME NS_MAXCDNAME #endif #ifndef MAXLABEL # define MAXLABEL NS_MAXLABEL #endif #ifndef HFIXEDSZ # define HFIXEDSZ NS_HFIXEDSZ #endif #ifndef QFIXEDSZ # define QFIXEDSZ NS_QFIXEDSZ #endif #ifndef RRFIXEDSZ # define RRFIXEDSZ NS_RRFIXEDSZ #endif #ifndef INDIR_MASK # define INDIR_MASK NS_CMPRSFLGS #endif #ifndef NAMESERVER_PORT # define NAMESERVER_PORT NS_DEFAULTPORT #endif /* opcodes */ #ifndef O_QUERY # define O_QUERY 0 /* ns_o_query */ #endif #ifndef O_IQUERY # define O_IQUERY 1 /* ns_o_iquery */ #endif #ifndef O_STATUS # define O_STATUS 2 /* ns_o_status */ #endif #ifndef O_NOTIFY # define O_NOTIFY 4 /* ns_o_notify */ #endif #ifndef O_UPDATE # define O_UPDATE 5 /* ns_o_update */ #endif /* response codes */ #ifndef SERVFAIL # define SERVFAIL ns_r_servfail #endif #ifndef NOTIMP # define NOTIMP ns_r_notimpl #endif #ifndef REFUSED # define REFUSED ns_r_refused #endif #if defined(_WIN32) && !defined(HAVE_ARPA_NAMESER_COMPAT_H) && defined(NOERROR) # undef NOERROR /* it seems this is already defined in winerror.h */ #endif #ifndef NOERROR # define NOERROR ns_r_noerror #endif #ifndef FORMERR # define FORMERR ns_r_formerr #endif #ifndef NXDOMAIN # define NXDOMAIN ns_r_nxdomain #endif /* Non-standard response codes, use numeric values */ #ifndef YXDOMAIN # define YXDOMAIN 6 /* ns_r_yxdomain */ #endif #ifndef YXRRSET # define YXRRSET 7 /* ns_r_yxrrset */ #endif #ifndef NXRRSET # define NXRRSET 8 /* ns_r_nxrrset */ #endif #ifndef NOTAUTH # define NOTAUTH 9 /* ns_r_notauth */ #endif #ifndef NOTZONE # define NOTZONE 10 /* ns_r_notzone */ #endif #ifndef TSIG_BADSIG # define TSIG_BADSIG 16 /* ns_r_badsig */ #endif #ifndef TSIG_BADKEY # define TSIG_BADKEY 17 /* ns_r_badkey */ #endif #ifndef TSIG_BADTIME # define TSIG_BADTIME 18 /* ns_r_badtime */ #endif /* classes */ #ifndef C_IN # define C_IN 1 /* ns_c_in */ #endif #ifndef C_CHAOS # define C_CHAOS 3 /* ns_c_chaos */ #endif #ifndef C_HS # define C_HS 4 /* ns_c_hs */ #endif #ifndef C_NONE # define C_NONE 254 /* ns_c_none */ #endif #ifndef C_ANY # define C_ANY 255 /* ns_c_any */ #endif /* types */ #ifndef T_A # define T_A 1 /* ns_t_a */ #endif #ifndef T_NS # define T_NS 2 /* ns_t_ns */ #endif #ifndef T_MD # define T_MD 3 /* ns_t_md */ #endif #ifndef T_MF # define T_MF 4 /* ns_t_mf */ #endif #ifndef T_CNAME # define T_CNAME 5 /* ns_t_cname */ #endif #ifndef T_SOA # define T_SOA 6 /* ns_t_soa */ #endif #ifndef T_MB # define T_MB 7 /* ns_t_mb */ #endif #ifndef T_MG # define T_MG 8 /* ns_t_mg */ #endif #ifndef T_MR # define T_MR 9 /* ns_t_mr */ #endif #ifndef T_NULL # define T_NULL 10 /* ns_t_null */ #endif #ifndef T_WKS # define T_WKS 11 /* ns_t_wks */ #endif #ifndef T_PTR # define 
T_PTR 12 /* ns_t_ptr */ #endif #ifndef T_HINFO # define T_HINFO 13 /* ns_t_hinfo */ #endif #ifndef T_MINFO # define T_MINFO 14 /* ns_t_minfo */ #endif #ifndef T_MX # define T_MX 15 /* ns_t_mx */ #endif #ifndef T_TXT # define T_TXT 16 /* ns_t_txt */ #endif #ifndef T_RP # define T_RP 17 /* ns_t_rp */ #endif #ifndef T_AFSDB # define T_AFSDB 18 /* ns_t_afsdb */ #endif #ifndef T_X25 # define T_X25 19 /* ns_t_x25 */ #endif #ifndef T_ISDN # define T_ISDN 20 /* ns_t_isdn */ #endif #ifndef T_RT # define T_RT 21 /* ns_t_rt */ #endif #ifndef T_NSAP # define T_NSAP 22 /* ns_t_nsap */ #endif #ifndef T_NSAP_PTR # define T_NSAP_PTR 23 /* ns_t_nsap_ptr */ #endif #ifndef T_SIG # define T_SIG 24 /* ns_t_sig */ #endif #ifndef T_KEY # define T_KEY 25 /* ns_t_key */ #endif #ifndef T_PX # define T_PX 26 /* ns_t_px */ #endif #ifndef T_GPOS # define T_GPOS 27 /* ns_t_gpos */ #endif #ifndef T_AAAA # define T_AAAA 28 /* ns_t_aaaa */ #endif #ifndef T_LOC # define T_LOC 29 /* ns_t_loc */ #endif #ifndef T_NXT # define T_NXT 30 /* ns_t_nxt */ #endif #ifndef T_EID # define T_EID 31 /* ns_t_eid */ #endif #ifndef T_NIMLOC # define T_NIMLOC 32 /* ns_t_nimloc */ #endif #ifndef T_SRV # define T_SRV 33 /* ns_t_srv */ #endif #ifndef T_ATMA # define T_ATMA 34 /* ns_t_atma */ #endif #ifndef T_NAPTR # define T_NAPTR 35 /* ns_t_naptr */ #endif #ifndef T_KX # define T_KX 36 /* ns_t_kx */ #endif #ifndef T_CERT # define T_CERT 37 /* ns_t_cert */ #endif #ifndef T_A6 # define T_A6 38 /* ns_t_a6 */ #endif #ifndef T_DNAME # define T_DNAME 39 /* ns_t_dname */ #endif #ifndef T_SINK # define T_SINK 40 /* ns_t_sink */ #endif #ifndef T_OPT # define T_OPT 41 /* ns_t_opt */ #endif #ifndef T_APL # define T_APL 42 /* ns_t_apl */ #endif #ifndef T_DS # define T_DS 43 /* ns_t_ds */ #endif #ifndef T_SSHFP # define T_SSHFP 44 /* ns_t_sshfp */ #endif #ifndef T_RRSIG # define T_RRSIG 46 /* ns_t_rrsig */ #endif #ifndef T_NSEC # define T_NSEC 47 /* ns_t_nsec */ #endif #ifndef T_DNSKEY # define T_DNSKEY 48 /* ns_t_dnskey */ #endif #ifndef T_TKEY # define T_TKEY 249 /* ns_t_tkey */ #endif #ifndef T_TSIG # define T_TSIG 250 /* ns_t_tsig */ #endif #ifndef T_IXFR # define T_IXFR 251 /* ns_t_ixfr */ #endif #ifndef T_AXFR # define T_AXFR 252 /* ns_t_axfr */ #endif #ifndef T_MAILB # define T_MAILB 253 /* ns_t_mailb */ #endif #ifndef T_MAILA # define T_MAILA 254 /* ns_t_maila */ #endif #ifndef T_ANY # define T_ANY 255 /* ns_t_any */ #endif #ifndef T_URI # define T_URI 256 /* ns_t_uri */ #endif #ifndef T_CAA # define T_CAA 257 /* ns_t_caa */ #endif #ifndef T_MAX # define T_MAX 65536 /* ns_t_max */ #endif #endif /* ARES_NAMESER_H */ gevent-24.11.1/deps/c-ares/include/ares_version.h000066400000000000000000000033011471441230600215470ustar00rootroot00000000000000/* MIT License * * Copyright (c) Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef ARES__VERSION_H #define ARES__VERSION_H /* This is the global package copyright */ #define ARES_COPYRIGHT "2004 - 2024 Daniel Stenberg, ." #define ARES_VERSION_MAJOR 1 #define ARES_VERSION_MINOR 33 #define ARES_VERSION_PATCH 1 #define ARES_VERSION \ ((ARES_VERSION_MAJOR << 16) | (ARES_VERSION_MINOR << 8) | \ (ARES_VERSION_PATCH)) #define ARES_VERSION_STR "1.33.1" #define CARES_HAVE_ARES_LIBRARY_INIT 1 #define CARES_HAVE_ARES_LIBRARY_CLEANUP 1 #endif gevent-24.11.1/deps/c-ares/libcares.pc.in000066400000000000000000000012401471441230600177710ustar00rootroot00000000000000#*************************************************************************** # Project ___ __ _ _ __ ___ ___ # / __|____ / _` | '__/ _ \/ __| # | (_|_____| (_| | | | __/\__ \ # \___| \__,_|_| \___||___/ # # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT prefix=@prefix@ exec_prefix=@exec_prefix@ libdir=@libdir@ includedir=@includedir@ Name: c-ares URL: http://c-ares.org/ Description: asynchronous DNS lookup library Version: @VERSION@ Requires: Requires.private: Cflags: -I${includedir} @PKGCONFIG_CFLAGS@ Libs: -L${libdir} -lcares Libs.private: @CARES_PRIVATE_LIBS@ gevent-24.11.1/deps/c-ares/m4/000077500000000000000000000000001471441230600155775ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/m4/ax_ac_append_to_file.m4000066400000000000000000000016221471441230600221450ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_ac_append_to_file.html # =========================================================================== # # SYNOPSIS # # AX_AC_APPEND_TO_FILE([FILE],[DATA]) # # DESCRIPTION # # Appends the specified data to the specified Autoconf is run. If you want # to append to a file when configure is run use AX_APPEND_TO_FILE instead. # # LICENSE # # Copyright (c) 2009 Allan Caffee # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 10 AC_DEFUN([AX_AC_APPEND_TO_FILE],[ AC_REQUIRE([AX_FILE_ESCAPES]) m4_esyscmd( AX_FILE_ESCAPES [ printf "%s" "$2" >> "$1" ]) ]) gevent-24.11.1/deps/c-ares/m4/ax_ac_print_to_file.m4000066400000000000000000000016111471441230600220300ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_ac_print_to_file.html # =========================================================================== # # SYNOPSIS # # AX_AC_PRINT_TO_FILE([FILE],[DATA]) # # DESCRIPTION # # Writes the specified data to the specified file when Autoconf is run. If # you want to print to a file when configure is run use AX_PRINT_TO_FILE # instead. 
# # LICENSE # # Copyright (c) 2009 Allan Caffee # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 10 AC_DEFUN([AX_AC_PRINT_TO_FILE],[ m4_esyscmd( AC_REQUIRE([AX_FILE_ESCAPES]) [ printf "%s" "$2" > "$1" ]) ]) gevent-24.11.1/deps/c-ares/m4/ax_add_am_macro_static.m4000066400000000000000000000015251471441230600224710ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_add_am_macro_static.html # =========================================================================== # # SYNOPSIS # # AX_ADD_AM_MACRO_STATIC([RULE]) # # DESCRIPTION # # Adds the specified rule to $AMINCLUDE. # # LICENSE # # Copyright (c) 2009 Tom Howard # Copyright (c) 2009 Allan Caffee # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 8 AC_DEFUN([AX_ADD_AM_MACRO_STATIC],[ AC_REQUIRE([AX_AM_MACROS_STATIC]) AX_AC_APPEND_TO_FILE(AMINCLUDE_STATIC,[$1]) ]) gevent-24.11.1/deps/c-ares/m4/ax_am_macros_static.m4000066400000000000000000000021251471441230600220410ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_am_macros_static.html # =========================================================================== # # SYNOPSIS # # AX_AM_MACROS_STATIC # # DESCRIPTION # # Adds support for macros that create Automake rules. You must manually # add the following line # # include $(top_srcdir)/aminclude_static.am # # to your Makefile.am files. # # LICENSE # # Copyright (c) 2009 Tom Howard # Copyright (c) 2009 Allan Caffee # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 11 AC_DEFUN([AMINCLUDE_STATIC],[aminclude_static.am]) AC_DEFUN([AX_AM_MACROS_STATIC], [ AX_AC_PRINT_TO_FILE(AMINCLUDE_STATIC,[ # ]AMINCLUDE_STATIC[ generated automatically by Autoconf # from AX_AM_MACROS_STATIC on ]m4_esyscmd([LC_ALL=C date])[ ]) ]) gevent-24.11.1/deps/c-ares/m4/ax_append_compile_flags.m4000066400000000000000000000055111471441230600226660ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_append_compile_flags.html # =========================================================================== # # SYNOPSIS # # AX_APPEND_COMPILE_FLAGS([FLAG1 FLAG2 ...], [FLAGS-VARIABLE], [EXTRA-FLAGS]) # # DESCRIPTION # # For every FLAG1, FLAG2 it is checked whether the compiler works with the # flag. If it does, the flag is added FLAGS-VARIABLE # # If FLAGS-VARIABLE is not specified, the current language's flags (e.g. # CFLAGS) is used. During the check the flag is always added to the # current language's flags. # # If EXTRA-FLAGS is defined, it is added to the current language's default # flags (e.g. CFLAGS) when the check is done. The check is thus made with # the flags: "CFLAGS EXTRA-FLAGS FLAG". This can for example be used to # force the compiler to issue an error when a bad flag is given. 
# # NOTE: This macro depends on the AX_APPEND_FLAG and # AX_CHECK_COMPILE_FLAG. Please keep this macro in sync with # AX_APPEND_LINK_FLAGS. # # LICENSE # # Copyright (c) 2011 Maarten Bosmans # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 3 AC_DEFUN([AX_APPEND_COMPILE_FLAGS], [AC_REQUIRE([AX_CHECK_COMPILE_FLAG]) AC_REQUIRE([AX_APPEND_FLAG]) for flag in $1; do AX_CHECK_COMPILE_FLAG([$flag], [AX_APPEND_FLAG([$flag], [$2])], [], [$3]) done ])dnl AX_APPEND_COMPILE_FLAGS gevent-24.11.1/deps/c-ares/m4/ax_append_flag.m4000066400000000000000000000053041471441230600207730ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_append_flag.html # =========================================================================== # # SYNOPSIS # # AX_APPEND_FLAG(FLAG, [FLAGS-VARIABLE]) # # DESCRIPTION # # FLAG is appended to the FLAGS-VARIABLE shell variable, with a space # added in between. # # If FLAGS-VARIABLE is not specified, the current language's flags (e.g. # CFLAGS) is used. FLAGS-VARIABLE is not changed if it already contains # FLAG. If FLAGS-VARIABLE is unset in the shell, it is set to exactly # FLAG. # # NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. # # LICENSE # # Copyright (c) 2008 Guido U. Draheim # Copyright (c) 2011 Maarten Bosmans # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. 
You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 2 AC_DEFUN([AX_APPEND_FLAG], [AC_PREREQ(2.59)dnl for _AC_LANG_PREFIX AS_VAR_PUSHDEF([FLAGS], [m4_default($2,_AC_LANG_PREFIX[FLAGS])])dnl AS_VAR_SET_IF(FLAGS, [case " AS_VAR_GET(FLAGS) " in *" $1 "*) AC_RUN_LOG([: FLAGS already contains $1]) ;; *) AC_RUN_LOG([: FLAGS="$FLAGS $1"]) AS_VAR_SET(FLAGS, ["AS_VAR_GET(FLAGS) $1"]) ;; esac], [AS_VAR_SET(FLAGS,["$1"])]) AS_VAR_POPDEF([FLAGS])dnl ])dnl AX_APPEND_FLAG gevent-24.11.1/deps/c-ares/m4/ax_append_link_flags.m4000066400000000000000000000032571471441230600222000ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_append_link_flags.html # =========================================================================== # # SYNOPSIS # # AX_APPEND_LINK_FLAGS([FLAG1 FLAG2 ...], [FLAGS-VARIABLE], [EXTRA-FLAGS], [INPUT]) # # DESCRIPTION # # For every FLAG1, FLAG2 it is checked whether the linker works with the # flag. If it does, the flag is added FLAGS-VARIABLE # # If FLAGS-VARIABLE is not specified, the linker's flags (LDFLAGS) is # used. During the check the flag is always added to the linker's flags. # # If EXTRA-FLAGS is defined, it is added to the linker's default flags # when the check is done. The check is thus made with the flags: "LDFLAGS # EXTRA-FLAGS FLAG". This can for example be used to force the linker to # issue an error when a bad flag is given. # # INPUT gives an alternative input source to AC_COMPILE_IFELSE. # # NOTE: This macro depends on the AX_APPEND_FLAG and AX_CHECK_LINK_FLAG. # Please keep this macro in sync with AX_APPEND_COMPILE_FLAGS. # # LICENSE # # Copyright (c) 2011 Maarten Bosmans # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 7 AC_DEFUN([AX_APPEND_LINK_FLAGS], [AX_REQUIRE_DEFINED([AX_CHECK_LINK_FLAG]) AX_REQUIRE_DEFINED([AX_APPEND_FLAG]) for flag in $1; do AX_CHECK_LINK_FLAG([$flag], [AX_APPEND_FLAG([$flag], [m4_default([$2], [LDFLAGS])])], [], [$3], [$4]) done ])dnl AX_APPEND_LINK_FLAGS gevent-24.11.1/deps/c-ares/m4/ax_check_compile_flag.m4000066400000000000000000000062511471441230600223130ustar00rootroot00000000000000# =========================================================================== # http://www.gnu.org/software/autoconf-archive/ax_check_compile_flag.html # =========================================================================== # # SYNOPSIS # # AX_CHECK_COMPILE_FLAG(FLAG, [ACTION-SUCCESS], [ACTION-FAILURE], [EXTRA-FLAGS]) # # DESCRIPTION # # Check whether the given FLAG works with the current language's compiler # or gives an error. (Warnings, however, are ignored) # # ACTION-SUCCESS/ACTION-FAILURE are shell commands to execute on # success/failure. # # If EXTRA-FLAGS is defined, it is added to the current language's default # flags (e.g. 
CFLAGS) when the check is done. The check is thus made with # the flags: "CFLAGS EXTRA-FLAGS FLAG". This can for example be used to # force the compiler to issue an error when a bad flag is given. # # NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. Please keep this # macro in sync with AX_CHECK_{PREPROC,LINK}_FLAG. # # LICENSE # # Copyright (c) 2008 Guido U. Draheim # Copyright (c) 2011 Maarten Bosmans # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 2 AC_DEFUN([AX_CHECK_COMPILE_FLAG], [AC_PREREQ(2.59)dnl for _AC_LANG_PREFIX AS_VAR_PUSHDEF([CACHEVAR],[ax_cv_check_[]_AC_LANG_ABBREV[]flags_$4_$1])dnl AC_CACHE_CHECK([whether _AC_LANG compiler accepts $1], CACHEVAR, [ ax_check_save_flags=$[]_AC_LANG_PREFIX[]FLAGS _AC_LANG_PREFIX[]FLAGS="$[]_AC_LANG_PREFIX[]FLAGS $4 $1" AC_COMPILE_IFELSE([AC_LANG_PROGRAM()], [AS_VAR_SET(CACHEVAR,[yes])], [AS_VAR_SET(CACHEVAR,[no])]) _AC_LANG_PREFIX[]FLAGS=$ax_check_save_flags]) AS_IF([test x"AS_VAR_GET(CACHEVAR)" = xyes], [m4_default([$2], :)], [m4_default([$3], :)]) AS_VAR_POPDEF([CACHEVAR])dnl ])dnl AX_CHECK_COMPILE_FLAGS gevent-24.11.1/deps/c-ares/m4/ax_check_gnu_make.m4000066400000000000000000000077271471441230600214710ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_check_gnu_make.html # =========================================================================== # # SYNOPSIS # # AX_CHECK_GNU_MAKE([run-if-true],[run-if-false]) # # DESCRIPTION # # This macro searches for a GNU version of make. If a match is found: # # * The makefile variable `ifGNUmake' is set to the empty string, otherwise # it is set to "#". This is useful for including a special features in a # Makefile, which cannot be handled by other versions of make. # * The makefile variable `ifnGNUmake' is set to #, otherwise # it is set to the empty string. This is useful for including a special # features in a Makefile, which can be handled # by other versions of make or to specify else like clause. # * The variable `_cv_gnu_make_command` is set to the command to invoke # GNU make if it exists, the empty string otherwise. 
# * The variable `ax_cv_gnu_make_command` is set to the command to invoke # GNU make by copying `_cv_gnu_make_command`, otherwise it is unset. # * If GNU Make is found, its version is extracted from the output of # `make --version` as the last field of a record of space-separated # columns and saved into the variable `ax_check_gnu_make_version`. # * Additionally if GNU Make is found, run shell code run-if-true # else run shell code run-if-false. # # Here is an example of its use: # # Makefile.in might contain: # # # A failsafe way of putting a dependency rule into a makefile # $(DEPEND): # $(CC) -MM $(srcdir)/*.c > $(DEPEND) # # @ifGNUmake@ ifeq ($(DEPEND),$(wildcard $(DEPEND))) # @ifGNUmake@ include $(DEPEND) # @ifGNUmake@ else # fallback code # @ifGNUmake@ endif # # Then configure.in would normally contain: # # AX_CHECK_GNU_MAKE() # AC_OUTPUT(Makefile) # # Then perhaps to cause gnu make to override any other make, we could do # something like this (note that GNU make always looks for GNUmakefile # first): # # if ! test x$_cv_gnu_make_command = x ; then # mv Makefile GNUmakefile # echo .DEFAULT: > Makefile ; # echo \ $_cv_gnu_make_command \$@ >> Makefile; # fi # # Then, if any (well almost any) other make is called, and GNU make also # exists, then the other make wraps the GNU make. # # LICENSE # # Copyright (c) 2008 John Darrington # Copyright (c) 2015 Enrico M. Crisostomo # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 12 AC_DEFUN([AX_CHECK_GNU_MAKE],dnl [AC_PROG_AWK AC_CACHE_CHECK([for GNU make],[_cv_gnu_make_command],[dnl _cv_gnu_make_command="" ; dnl Search all the common names for GNU make for a in "$MAKE" make gmake gnumake ; do if test -z "$a" ; then continue ; fi ; if "$a" --version 2> /dev/null | grep GNU 2>&1 > /dev/null ; then _cv_gnu_make_command=$a ; AX_CHECK_GNU_MAKE_HEADLINE=$("$a" --version 2> /dev/null | grep "GNU Make") ax_check_gnu_make_version=$(echo ${AX_CHECK_GNU_MAKE_HEADLINE} | ${AWK} -F " " '{ print $(NF); }') break ; fi done ;]) dnl If there was a GNU version, then set @ifGNUmake@ to the empty string, '#' otherwise AS_VAR_IF([_cv_gnu_make_command], [""], [AS_VAR_SET([ifGNUmake], ["#"])], [AS_VAR_SET([ifGNUmake], [""])]) AS_VAR_IF([_cv_gnu_make_command], [""], [AS_VAR_SET([ifnGNUmake], [""])], [AS_VAR_SET([ifnGNUmake], ["#"])]) AS_VAR_IF([_cv_gnu_make_command], [""], [AS_UNSET(ax_cv_gnu_make_command)], [AS_VAR_SET([ax_cv_gnu_make_command], [${_cv_gnu_make_command}])]) AS_VAR_IF([_cv_gnu_make_command], [""],[$2],[$1]) AC_SUBST([ifGNUmake]) AC_SUBST([ifnGNUmake]) ]) gevent-24.11.1/deps/c-ares/m4/ax_check_link_flag.m4000066400000000000000000000036441471441230600216230ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_check_link_flag.html # =========================================================================== # # SYNOPSIS # # AX_CHECK_LINK_FLAG(FLAG, [ACTION-SUCCESS], [ACTION-FAILURE], [EXTRA-FLAGS], [INPUT]) # # DESCRIPTION # # Check whether the given FLAG works with the linker or gives an error. # (Warnings, however, are ignored) # # ACTION-SUCCESS/ACTION-FAILURE are shell commands to execute on # success/failure. # # If EXTRA-FLAGS is defined, it is added to the linker's default flags # when the check is done. 
The check is thus made with the flags: "LDFLAGS # EXTRA-FLAGS FLAG". This can for example be used to force the linker to # issue an error when a bad flag is given. # # INPUT gives an alternative input source to AC_LINK_IFELSE. # # NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. Please keep this # macro in sync with AX_CHECK_{PREPROC,COMPILE}_FLAG. # # LICENSE # # Copyright (c) 2008 Guido U. Draheim # Copyright (c) 2011 Maarten Bosmans # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 6 AC_DEFUN([AX_CHECK_LINK_FLAG], [AC_PREREQ(2.64)dnl for _AC_LANG_PREFIX and AS_VAR_IF AS_VAR_PUSHDEF([CACHEVAR],[ax_cv_check_ldflags_$4_$1])dnl AC_CACHE_CHECK([whether the linker accepts $1], CACHEVAR, [ ax_check_save_flags=$LDFLAGS LDFLAGS="$LDFLAGS $4 $1" AC_LINK_IFELSE([m4_default([$5],[AC_LANG_PROGRAM()])], [AS_VAR_SET(CACHEVAR,[yes])], [AS_VAR_SET(CACHEVAR,[no])]) LDFLAGS=$ax_check_save_flags]) AS_VAR_IF(CACHEVAR,yes, [m4_default([$2], :)], [m4_default([$3], :)]) AS_VAR_POPDEF([CACHEVAR])dnl ])dnl AX_CHECK_LINK_FLAGS gevent-24.11.1/deps/c-ares/m4/ax_check_user_namespace.m4000066400000000000000000000027551471441230600226710ustar00rootroot00000000000000# -*- Autoconf -*- # SYNOPSIS # # AX_CHECK_USER_NAMESPACE # # DESCRIPTION # # This macro checks whether the local system supports Linux user namespaces. # If so, it calls AC_DEFINE(HAVE_USER_NAMESPACE). # # Copyright (C) The c-ares team # SPDX-License-Identifier: MIT AC_DEFUN([AX_CHECK_USER_NAMESPACE],[dnl AC_CACHE_CHECK([whether user namespaces are supported], ax_cv_user_namespace,[ AC_LANG_PUSH([C]) AC_RUN_IFELSE([AC_LANG_SOURCE([[ #define _GNU_SOURCE #include #include #include #include #include #include #include int userfn(void *d) { usleep(100000); /* synchronize by sleep */ return (getuid() != 0); } char userst[1024*1024]; int main() { char buffer[1024]; int rc, status, fd; pid_t child = clone(userfn, userst + 1024*1024, CLONE_NEWUSER|SIGCHLD, 0); if (child < 0) return 1; snprintf(buffer, sizeof(buffer), "/proc/%d/uid_map", child); fd = open(buffer, O_CREAT|O_WRONLY|O_TRUNC, 0755); snprintf(buffer, sizeof(buffer), "0 %d 1\n", getuid()); write(fd, buffer, strlen(buffer)); close(fd); rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } ]])],[ax_cv_user_namespace=yes],[ax_cv_user_namespace=no],[ax_cv_user_namespace=no]) AC_LANG_POP([C]) ]) if test "$ax_cv_user_namespace" = yes; then AC_DEFINE([HAVE_USER_NAMESPACE],[1],[Whether user namespaces are available]) fi ]) # AX_CHECK_USER_NAMESPACE gevent-24.11.1/deps/c-ares/m4/ax_check_uts_namespace.m4000066400000000000000000000040741471441230600225220ustar00rootroot00000000000000# -*- Autoconf -*- # SYNOPSIS # # AX_CHECK_UTS_NAMESPACE # # DESCRIPTION # # This macro checks whether the local system supports Linux UTS namespaces. # Also requires user namespaces to be available, so that non-root users # can enter the namespace. # If so, it calls AC_DEFINE(HAVE_UTS_NAMESPACE). 
# # Copyright (C) The c-ares team # SPDX-License-Identifier: MIT AC_DEFUN([AX_CHECK_UTS_NAMESPACE],[dnl AC_CACHE_CHECK([whether UTS namespaces are supported], ax_cv_uts_namespace,[ AC_LANG_PUSH([C]) AC_RUN_IFELSE([AC_LANG_SOURCE([[ #define _GNU_SOURCE #include #include #include #include #include #include #include #include int utsfn(void *d) { char buffer[1024]; const char *name = "autoconftest"; int rc = sethostname(name, strlen(name)); if (rc != 0) return 1; gethostname(buffer, 1024); return (strcmp(buffer, name) != 0); } char st2[1024*1024]; int fn(void *d) { pid_t child; int rc, status; usleep(100000); /* synchronize by sleep */ if (getuid() != 0) return 1; child = clone(utsfn, st2 + 1024*1024, CLONE_NEWUTS|SIGCHLD, 0); if (child < 0) return 1; rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } char st[1024*1024]; int main() { char buffer[1024]; int rc, status, fd; pid_t child = clone(fn, st + 1024*1024, CLONE_NEWUSER|SIGCHLD, 0); if (child < 0) return 1; snprintf(buffer, sizeof(buffer), "/proc/%d/uid_map", child); fd = open(buffer, O_CREAT|O_WRONLY|O_TRUNC, 0755); snprintf(buffer, sizeof(buffer), "0 %d 1\n", getuid()); write(fd, buffer, strlen(buffer)); close(fd); rc = waitpid(child, &status, 0); if (rc <= 0) return 1; if (!WIFEXITED(status)) return 1; return WEXITSTATUS(status); } ]]) ],[ax_cv_uts_namespace=yes],[ax_cv_uts_namespace=no],[ax_cv_uts_namespace=no]) AC_LANG_POP([C]) ]) if test "$ax_cv_uts_namespace" = yes; then AC_DEFINE([HAVE_UTS_NAMESPACE],[1],[Whether UTS namespaces are available]) fi ]) # AX_CHECK_UTS_NAMESPACE gevent-24.11.1/deps/c-ares/m4/ax_code_coverage.m4000066400000000000000000000276121471441230600213260ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_code_coverage.html # =========================================================================== # # SYNOPSIS # # AX_CODE_COVERAGE() # # DESCRIPTION # # Defines CODE_COVERAGE_CPPFLAGS, CODE_COVERAGE_CFLAGS, # CODE_COVERAGE_CXXFLAGS and CODE_COVERAGE_LIBS which should be included # in the CPPFLAGS, CFLAGS CXXFLAGS and LIBS/LIBADD variables of every # build target (program or library) which should be built with code # coverage support. Also add rules using AX_ADD_AM_MACRO_STATIC; and # $enable_code_coverage which can be used in subsequent configure output. # CODE_COVERAGE_ENABLED is defined and substituted, and corresponds to the # value of the --enable-code-coverage option, which defaults to being # disabled. # # Test also for gcov program and create GCOV variable that could be # substituted. # # Note that all optimization flags in CFLAGS must be disabled when code # coverage is enabled. # # Usage example: # # configure.ac: # # AX_CODE_COVERAGE # # Makefile.am: # # include $(top_srcdir)/aminclude_static.am # # my_program_LIBS = ... $(CODE_COVERAGE_LIBS) ... # my_program_CPPFLAGS = ... $(CODE_COVERAGE_CPPFLAGS) ... # my_program_CFLAGS = ... $(CODE_COVERAGE_CFLAGS) ... # my_program_CXXFLAGS = ... $(CODE_COVERAGE_CXXFLAGS) ... # # clean-local: code-coverage-clean # distclean-local: code-coverage-dist-clean # # This results in a "check-code-coverage" rule being added to any # Makefile.am which do "include $(top_srcdir)/aminclude_static.am" # (assuming the module has been configured with --enable-code-coverage). 
# Running `make check-code-coverage` in that directory will run the # module's test suite (`make check`) and build a code coverage report # detailing the code which was touched, then print the URI for the report. # # This code was derived from Makefile.decl in GLib, originally licensed # under LGPLv2.1+. # # LICENSE # # Copyright (c) 2012, 2016 Philip Withnall # Copyright (c) 2012 Xan Lopez # Copyright (c) 2012 Christian Persch # Copyright (c) 2012 Paolo Borelli # Copyright (c) 2012 Dan Winship # Copyright (c) 2015,2018 Bastien ROUCARIES # # This library is free software; you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation; either version 2.1 of the License, or (at # your option) any later version. # # This library is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser # General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with this program. If not, see . #serial 34 m4_define(_AX_CODE_COVERAGE_RULES,[ AX_ADD_AM_MACRO_STATIC([ # Code coverage # # Optional: # - CODE_COVERAGE_DIRECTORY: Top-level directory for code coverage reporting. # Multiple directories may be specified, separated by whitespace. # (Default: \$(top_builddir)) # - CODE_COVERAGE_OUTPUT_FILE: Filename and path for the .info file generated # by lcov for code coverage. (Default: # \$(PACKAGE_NAME)-\$(PACKAGE_VERSION)-coverage.info) # - CODE_COVERAGE_OUTPUT_DIRECTORY: Directory for generated code coverage # reports to be created. (Default: # \$(PACKAGE_NAME)-\$(PACKAGE_VERSION)-coverage) # - CODE_COVERAGE_BRANCH_COVERAGE: Set to 1 to enforce branch coverage, # set to 0 to disable it and leave empty to stay with the default. # (Default: empty) # - CODE_COVERAGE_LCOV_SHOPTS_DEFAULT: Extra options shared between both lcov # instances. (Default: based on $CODE_COVERAGE_BRANCH_COVERAGE) # - CODE_COVERAGE_LCOV_SHOPTS: Extra options to shared between both lcov # instances. (Default: $CODE_COVERAGE_LCOV_SHOPTS_DEFAULT) # - CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH: --gcov-tool pathtogcov # - CODE_COVERAGE_LCOV_OPTIONS_DEFAULT: Extra options to pass to the # collecting lcov instance. (Default: $CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH) # - CODE_COVERAGE_LCOV_OPTIONS: Extra options to pass to the collecting lcov # instance. (Default: $CODE_COVERAGE_LCOV_OPTIONS_DEFAULT) # - CODE_COVERAGE_LCOV_RMOPTS_DEFAULT: Extra options to pass to the filtering # lcov instance. (Default: empty) # - CODE_COVERAGE_LCOV_RMOPTS: Extra options to pass to the filtering lcov # instance. (Default: $CODE_COVERAGE_LCOV_RMOPTS_DEFAULT) # - CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT: Extra options to pass to the # genhtml instance. (Default: based on $CODE_COVERAGE_BRANCH_COVERAGE) # - CODE_COVERAGE_GENHTML_OPTIONS: Extra options to pass to the genhtml # instance. (Default: $CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT) # - CODE_COVERAGE_IGNORE_PATTERN: Extra glob pattern of files to ignore # # The generated report will be titled using the \$(PACKAGE_NAME) and # \$(PACKAGE_VERSION). In order to add the current git hash to the title, # use the git-version-gen script, available online. 
# Optional variables # run only on top dir if CODE_COVERAGE_ENABLED ifeq (\$(abs_builddir), \$(abs_top_builddir)) CODE_COVERAGE_DIRECTORY ?= \$(top_builddir) CODE_COVERAGE_OUTPUT_FILE ?= \$(PACKAGE_NAME)-\$(PACKAGE_VERSION)-coverage.info CODE_COVERAGE_OUTPUT_DIRECTORY ?= \$(PACKAGE_NAME)-\$(PACKAGE_VERSION)-coverage CODE_COVERAGE_BRANCH_COVERAGE ?= CODE_COVERAGE_LCOV_SHOPTS_DEFAULT ?= \$(if \$(CODE_COVERAGE_BRANCH_COVERAGE),\ --rc lcov_branch_coverage=\$(CODE_COVERAGE_BRANCH_COVERAGE)) CODE_COVERAGE_LCOV_SHOPTS ?= \$(CODE_COVERAGE_LCOV_SHOPTS_DEFAULT) CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH ?= --gcov-tool \"\$(GCOV)\" CODE_COVERAGE_LCOV_OPTIONS_DEFAULT ?= \$(CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH) CODE_COVERAGE_LCOV_OPTIONS ?= \$(CODE_COVERAGE_LCOV_OPTIONS_DEFAULT) CODE_COVERAGE_LCOV_RMOPTS_DEFAULT ?= CODE_COVERAGE_LCOV_RMOPTS ?= \$(CODE_COVERAGE_LCOV_RMOPTS_DEFAULT) CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT ?=\ \$(if \$(CODE_COVERAGE_BRANCH_COVERAGE),\ --rc genhtml_branch_coverage=\$(CODE_COVERAGE_BRANCH_COVERAGE)) CODE_COVERAGE_GENHTML_OPTIONS ?= \$(CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT) CODE_COVERAGE_IGNORE_PATTERN ?= GITIGNOREFILES := \$(GITIGNOREFILES) \$(CODE_COVERAGE_OUTPUT_FILE) \$(CODE_COVERAGE_OUTPUT_DIRECTORY) code_coverage_v_lcov_cap = \$(code_coverage_v_lcov_cap_\$(V)) code_coverage_v_lcov_cap_ = \$(code_coverage_v_lcov_cap_\$(AM_DEFAULT_VERBOSITY)) code_coverage_v_lcov_cap_0 = @echo \" LCOV --capture\" \$(CODE_COVERAGE_OUTPUT_FILE); code_coverage_v_lcov_ign = \$(code_coverage_v_lcov_ign_\$(V)) code_coverage_v_lcov_ign_ = \$(code_coverage_v_lcov_ign_\$(AM_DEFAULT_VERBOSITY)) code_coverage_v_lcov_ign_0 = @echo \" LCOV --remove /tmp/*\" \$(CODE_COVERAGE_IGNORE_PATTERN); code_coverage_v_genhtml = \$(code_coverage_v_genhtml_\$(V)) code_coverage_v_genhtml_ = \$(code_coverage_v_genhtml_\$(AM_DEFAULT_VERBOSITY)) code_coverage_v_genhtml_0 = @echo \" GEN \" \"\$(CODE_COVERAGE_OUTPUT_DIRECTORY)\"; code_coverage_quiet = \$(code_coverage_quiet_\$(V)) code_coverage_quiet_ = \$(code_coverage_quiet_\$(AM_DEFAULT_VERBOSITY)) code_coverage_quiet_0 = --quiet # sanitizes the test-name: replaces with underscores: dashes and dots code_coverage_sanitize = \$(subst -,_,\$(subst .,_,\$(1))) # Use recursive makes in order to ignore errors during check check-code-coverage: -\$(AM_V_at)\$(MAKE) \$(AM_MAKEFLAGS) -k check \$(AM_V_at)\$(MAKE) \$(AM_MAKEFLAGS) code-coverage-capture # Capture code coverage data code-coverage-capture: code-coverage-capture-hook \$(code_coverage_v_lcov_cap)\$(LCOV) \$(code_coverage_quiet) \$(addprefix --directory ,\$(CODE_COVERAGE_DIRECTORY)) --capture --output-file \"\$(CODE_COVERAGE_OUTPUT_FILE).tmp\" --test-name \"\$(call code_coverage_sanitize,\$(PACKAGE_NAME)-\$(PACKAGE_VERSION))\" --no-checksum --compat-libtool \$(CODE_COVERAGE_LCOV_SHOPTS) \$(CODE_COVERAGE_LCOV_OPTIONS) \$(code_coverage_v_lcov_ign)\$(LCOV) \$(code_coverage_quiet) \$(addprefix --directory ,\$(CODE_COVERAGE_DIRECTORY)) --remove \"\$(CODE_COVERAGE_OUTPUT_FILE).tmp\" \"/tmp/*\" \$(CODE_COVERAGE_IGNORE_PATTERN) --output-file \"\$(CODE_COVERAGE_OUTPUT_FILE)\" \$(CODE_COVERAGE_LCOV_SHOPTS) \$(CODE_COVERAGE_LCOV_RMOPTS) -@rm -f \"\$(CODE_COVERAGE_OUTPUT_FILE).tmp\" \$(code_coverage_v_genhtml)LANG=C \$(GENHTML) \$(code_coverage_quiet) \$(addprefix --prefix ,\$(CODE_COVERAGE_DIRECTORY)) --output-directory \"\$(CODE_COVERAGE_OUTPUT_DIRECTORY)\" --title \"\$(PACKAGE_NAME)-\$(PACKAGE_VERSION) Code Coverage\" --legend --show-details \"\$(CODE_COVERAGE_OUTPUT_FILE)\" \$(CODE_COVERAGE_GENHTML_OPTIONS) @echo 
\"file://\$(abs_builddir)/\$(CODE_COVERAGE_OUTPUT_DIRECTORY)/index.html\" code-coverage-clean: -\$(LCOV) --directory \$(top_builddir) -z -rm -rf \"\$(CODE_COVERAGE_OUTPUT_FILE)\" \"\$(CODE_COVERAGE_OUTPUT_FILE).tmp\" \"\$(CODE_COVERAGE_OUTPUT_DIRECTORY)\" -find . \\( -name \"*.gcda\" -o -name \"*.gcno\" -o -name \"*.gcov\" \\) -delete code-coverage-dist-clean: A][M_DISTCHECK_CONFIGURE_FLAGS := \$(A][M_DISTCHECK_CONFIGURE_FLAGS) --disable-code-coverage else # ifneq (\$(abs_builddir), \$(abs_top_builddir)) check-code-coverage: code-coverage-capture: code-coverage-capture-hook code-coverage-clean: code-coverage-dist-clean: endif # ifeq (\$(abs_builddir), \$(abs_top_builddir)) else #! CODE_COVERAGE_ENABLED # Use recursive makes in order to ignore errors during check check-code-coverage: @echo \"Need to reconfigure with --enable-code-coverage\" # Capture code coverage data code-coverage-capture: code-coverage-capture-hook @echo \"Need to reconfigure with --enable-code-coverage\" code-coverage-clean: code-coverage-dist-clean: endif #CODE_COVERAGE_ENABLED # Hook rule executed before code-coverage-capture, overridable by the user code-coverage-capture-hook: .PHONY: check-code-coverage code-coverage-capture code-coverage-dist-clean code-coverage-clean code-coverage-capture-hook ]) ]) AC_DEFUN([_AX_CODE_COVERAGE_ENABLED],[ AX_CHECK_GNU_MAKE([],AC_MSG_ERROR([not using GNU make that is needed for coverage])) AC_REQUIRE([AX_ADD_AM_MACRO_STATIC]) # check for gcov AC_CHECK_TOOL([GCOV], [$_AX_CODE_COVERAGE_GCOV_PROG_WITH], [:]) AS_IF([test "X$GCOV" = "X:"], AC_MSG_ERROR([gcov is needed to do coverage])) AC_SUBST([GCOV]) dnl Check if gcc is being used AS_IF([ test "$GCC" = "no" ], [ AC_MSG_ERROR([not compiling with gcc, which is required for gcov code coverage]) ]) AC_CHECK_PROG([LCOV], [lcov], [lcov]) AC_CHECK_PROG([GENHTML], [genhtml], [genhtml]) AS_IF([ test x"$LCOV" = x ], [ AC_MSG_ERROR([To enable code coverage reporting you must have lcov installed]) ]) AS_IF([ test x"$GENHTML" = x ], [ AC_MSG_ERROR([Could not find genhtml from the lcov package]) ]) dnl Build the code coverage flags dnl Define CODE_COVERAGE_LDFLAGS for backwards compatibility CODE_COVERAGE_CPPFLAGS="-DNDEBUG" CODE_COVERAGE_CFLAGS="-O0 -g -fprofile-arcs -ftest-coverage" CODE_COVERAGE_CXXFLAGS="-O0 -g -fprofile-arcs -ftest-coverage" CODE_COVERAGE_LIBS="-lgcov" AC_SUBST([CODE_COVERAGE_CPPFLAGS]) AC_SUBST([CODE_COVERAGE_CFLAGS]) AC_SUBST([CODE_COVERAGE_CXXFLAGS]) AC_SUBST([CODE_COVERAGE_LIBS]) ]) AC_DEFUN([AX_CODE_COVERAGE],[ dnl Check for --enable-code-coverage # allow to override gcov location AC_ARG_WITH([gcov], [AS_HELP_STRING([--with-gcov[=GCOV]], [use given GCOV for coverage (GCOV=gcov).])], [_AX_CODE_COVERAGE_GCOV_PROG_WITH=$with_gcov], [_AX_CODE_COVERAGE_GCOV_PROG_WITH=gcov]) AC_MSG_CHECKING([whether to build with code coverage support]) AC_ARG_ENABLE([code-coverage], AS_HELP_STRING([--enable-code-coverage], [Whether to enable code coverage support]),, enable_code_coverage=no) AM_CONDITIONAL([CODE_COVERAGE_ENABLED], [test "x$enable_code_coverage" = xyes]) AC_SUBST([CODE_COVERAGE_ENABLED], [$enable_code_coverage]) AC_MSG_RESULT($enable_code_coverage) AS_IF([ test "x$enable_code_coverage" = xyes ], [ _AX_CODE_COVERAGE_ENABLED ]) _AX_CODE_COVERAGE_RULES ]) gevent-24.11.1/deps/c-ares/m4/ax_compiler_vendor.m4000066400000000000000000000103621471441230600217220ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_compiler_vendor.html 
# =========================================================================== # # SYNOPSIS # # AX_COMPILER_VENDOR # # DESCRIPTION # # Determine the vendor of the C, C++ or Fortran compiler. The vendor is # returned in the cache variable $ax_cv_c_compiler_vendor for C, # $ax_cv_cxx_compiler_vendor for C++ or $ax_cv_fc_compiler_vendor for # (modern) Fortran. The value is one of "intel", "ibm", "pathscale", # "clang" (LLVM), "cray", "fujitsu", "sdcc", "sx", "nvhpc" (NVIDIA HPC # Compiler), "portland" (PGI), "gnu" (GCC), "sun" (Oracle Developer # Studio), "hp", "dec", "borland", "comeau", "kai", "lcc", "sgi", # "microsoft", "metrowerks", "watcom", "tcc" (Tiny CC) or "unknown" (if # the compiler cannot be determined). # # To check for a Fortran compiler, you must first call AC_FC_PP_SRCEXT # with an appropriate preprocessor-enabled extension. For example: # # AC_LANG_PUSH([Fortran]) # AC_PROG_FC # AC_FC_PP_SRCEXT([F]) # AX_COMPILER_VENDOR # AC_LANG_POP([Fortran]) # # LICENSE # # Copyright (c) 2008 Steven G. Johnson # Copyright (c) 2008 Matteo Frigo # Copyright (c) 2018-19 John Zaitseff # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 32 AC_DEFUN([AX_COMPILER_VENDOR], [dnl AC_CACHE_CHECK([for _AC_LANG compiler vendor], ax_cv_[]_AC_LANG_ABBREV[]_compiler_vendor, [dnl dnl If you modify this list of vendors, please add similar support dnl to ax_compiler_version.m4 if at all possible. dnl dnl Note: Do NOT check for GCC first since some other compilers dnl define __GNUC__ to remain compatible with it. Compilers that dnl are very slow to start (such as Intel) are listed first. 
vendors=" intel: __ICC,__ECC,__INTEL_COMPILER ibm: __xlc__,__xlC__,__IBMC__,__IBMCPP__,__ibmxl__ pathscale: __PATHCC__,__PATHSCALE__ clang: __clang__ cray: _CRAYC fujitsu: __FUJITSU sdcc: SDCC,__SDCC sx: _SX nvhpc: __NVCOMPILER portland: __PGI gnu: __GNUC__ sun: __SUNPRO_C,__SUNPRO_CC,__SUNPRO_F90,__SUNPRO_F95 hp: __HP_cc,__HP_aCC dec: __DECC,__DECCXX,__DECC_VER,__DECCXX_VER borland: __BORLANDC__,__CODEGEARC__,__TURBOC__ comeau: __COMO__ kai: __KCC lcc: __LCC__ sgi: __sgi,sgi microsoft: _MSC_VER metrowerks: __MWERKS__ watcom: __WATCOMC__ tcc: __TINYC__ unknown: UNKNOWN " for ventest in $vendors; do case $ventest in *:) vendor=$ventest continue ;; *) vencpp="defined("`echo $ventest | sed 's/,/) || defined(/g'`")" ;; esac AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [[ #if !($vencpp) thisisanerror; #endif ]])], [break]) done ax_cv_[]_AC_LANG_ABBREV[]_compiler_vendor=`echo $vendor | cut -d: -f1` ]) ])dnl gevent-24.11.1/deps/c-ares/m4/ax_cxx_compile_stdcxx.m4000066400000000000000000000520731471441230600224470ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx.html # =========================================================================== # # SYNOPSIS # # AX_CXX_COMPILE_STDCXX(VERSION, [ext|noext], [mandatory|optional]) # # DESCRIPTION # # Check for baseline language coverage in the compiler for the specified # version of the C++ standard. If necessary, add switches to CXX and # CXXCPP to enable support. VERSION may be '11', '14', '17', or '20' for # the respective C++ standard version. # # The second argument, if specified, indicates whether you insist on an # extended mode (e.g. -std=gnu++11) or a strict conformance mode (e.g. # -std=c++11). If neither is specified, you get whatever works, with # preference for no added switch, and then for an extended mode. # # The third argument, if specified 'mandatory' or if left unspecified, # indicates that baseline support for the specified C++ standard is # required and that the macro should error out if no mode with that # support is found. If specified 'optional', then configuration proceeds # regardless, after defining HAVE_CXX${VERSION} if and only if a # supporting mode is found. # # LICENSE # # Copyright (c) 2008 Benjamin Kosnik # Copyright (c) 2012 Zack Weinberg # Copyright (c) 2013 Roy Stogner # Copyright (c) 2014, 2015 Google Inc.; contributed by Alexey Sokolov # Copyright (c) 2015 Paul Norman # Copyright (c) 2015 Moritz Klammler # Copyright (c) 2016, 2018 Krzesimir Nowak # Copyright (c) 2019 Enji Cooper # Copyright (c) 2020 Jason Merrill # Copyright (c) 2021 Jörn Heusipp # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 18 dnl This macro is based on the code from the AX_CXX_COMPILE_STDCXX_11 macro dnl (serial version number 13). 
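dnl
dnl Illustrative sketch only (not part of the upstream macro): following the
dnl argument forms described above, a configure.ac might probe for C++17 as an
dnl optional feature while requiring C++14 in strict-conformance mode. The
dnl second line uses the AX_CXX_COMPILE_STDCXX_14 convenience wrapper, which
dnl simply forwards its arguments to this macro.
dnl
dnl   AX_CXX_COMPILE_STDCXX([17], [noext], [optional])
dnl   AX_CXX_COMPILE_STDCXX_14([noext], [mandatory])
dnl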
AC_DEFUN([AX_CXX_COMPILE_STDCXX], [dnl m4_if([$1], [11], [ax_cxx_compile_alternatives="11 0x"], [$1], [14], [ax_cxx_compile_alternatives="14 1y"], [$1], [17], [ax_cxx_compile_alternatives="17 1z"], [$1], [20], [ax_cxx_compile_alternatives="20"], [m4_fatal([invalid first argument `$1' to AX_CXX_COMPILE_STDCXX])])dnl m4_if([$2], [], [], [$2], [ext], [], [$2], [noext], [], [m4_fatal([invalid second argument `$2' to AX_CXX_COMPILE_STDCXX])])dnl m4_if([$3], [], [ax_cxx_compile_cxx$1_required=true], [$3], [mandatory], [ax_cxx_compile_cxx$1_required=true], [$3], [optional], [ax_cxx_compile_cxx$1_required=false], [m4_fatal([invalid third argument `$3' to AX_CXX_COMPILE_STDCXX])]) AC_LANG_PUSH([C++])dnl ac_success=no m4_if([$2], [], [dnl AC_CACHE_CHECK(whether $CXX supports C++$1 features by default, ax_cv_cxx_compile_cxx$1, [AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_testbody_$1])], [ax_cv_cxx_compile_cxx$1=yes], [ax_cv_cxx_compile_cxx$1=no])]) if test x$ax_cv_cxx_compile_cxx$1 = xyes; then ac_success=yes fi]) m4_if([$2], [noext], [], [dnl if test x$ac_success = xno; then for alternative in ${ax_cxx_compile_alternatives}; do switch="-std=gnu++${alternative}" cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx$1_$switch]) AC_CACHE_CHECK(whether $CXX supports C++$1 features with $switch, $cachevar, [ac_save_CXX="$CXX" CXX="$CXX $switch" AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_testbody_$1])], [eval $cachevar=yes], [eval $cachevar=no]) CXX="$ac_save_CXX"]) if eval test x\$$cachevar = xyes; then CXX="$CXX $switch" if test -n "$CXXCPP" ; then CXXCPP="$CXXCPP $switch" fi ac_success=yes break fi done fi]) m4_if([$2], [ext], [], [dnl if test x$ac_success = xno; then dnl HP's aCC needs +std=c++11 according to: dnl http://h21007.www2.hp.com/portal/download/files/unprot/aCxx/PDF_Release_Notes/769149-001.pdf dnl Cray's crayCC needs "-h std=c++11" dnl MSVC needs -std:c++NN for C++17 and later (default is C++14) for alternative in ${ax_cxx_compile_alternatives}; do for switch in -std=c++${alternative} +std=c++${alternative} "-h std=c++${alternative}" MSVC; do if test x"$switch" = xMSVC; then dnl AS_TR_SH maps both `:` and `=` to `_` so -std:c++17 would collide dnl with -std=c++17. We suffix the cache variable name with _MSVC to dnl avoid this. 
switch=-std:c++${alternative} cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx$1_${switch}_MSVC]) else cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx$1_$switch]) fi AC_CACHE_CHECK(whether $CXX supports C++$1 features with $switch, $cachevar, [ac_save_CXX="$CXX" CXX="$CXX $switch" AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_testbody_$1])], [eval $cachevar=yes], [eval $cachevar=no]) CXX="$ac_save_CXX"]) if eval test x\$$cachevar = xyes; then CXX="$CXX $switch" if test -n "$CXXCPP" ; then CXXCPP="$CXXCPP $switch" fi ac_success=yes break fi done if test x$ac_success = xyes; then break fi done fi]) AC_LANG_POP([C++]) if test x$ax_cxx_compile_cxx$1_required = xtrue; then if test x$ac_success = xno; then AC_MSG_ERROR([*** A compiler with support for C++$1 language features is required.]) fi fi if test x$ac_success = xno; then HAVE_CXX$1=0 AC_MSG_NOTICE([No compiler with C++$1 support was found]) else HAVE_CXX$1=1 AC_DEFINE(HAVE_CXX$1,1, [define if the compiler supports basic C++$1 syntax]) fi AC_SUBST(HAVE_CXX$1) ]) dnl Test body for checking C++11 support m4_define([_AX_CXX_COMPILE_STDCXX_testbody_11], _AX_CXX_COMPILE_STDCXX_testbody_new_in_11 ) dnl Test body for checking C++14 support m4_define([_AX_CXX_COMPILE_STDCXX_testbody_14], _AX_CXX_COMPILE_STDCXX_testbody_new_in_11 _AX_CXX_COMPILE_STDCXX_testbody_new_in_14 ) dnl Test body for checking C++17 support m4_define([_AX_CXX_COMPILE_STDCXX_testbody_17], _AX_CXX_COMPILE_STDCXX_testbody_new_in_11 _AX_CXX_COMPILE_STDCXX_testbody_new_in_14 _AX_CXX_COMPILE_STDCXX_testbody_new_in_17 ) dnl Test body for checking C++20 support m4_define([_AX_CXX_COMPILE_STDCXX_testbody_20], _AX_CXX_COMPILE_STDCXX_testbody_new_in_11 _AX_CXX_COMPILE_STDCXX_testbody_new_in_14 _AX_CXX_COMPILE_STDCXX_testbody_new_in_17 _AX_CXX_COMPILE_STDCXX_testbody_new_in_20 ) dnl Tests for new features in C++11 m4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_11], [[ // If the compiler admits that it is not ready for C++11, why torture it? // Hopefully, this will speed up the test. 
#ifndef __cplusplus #error "This is not a C++ compiler" // MSVC always sets __cplusplus to 199711L in older versions; newer versions // only set it correctly if /Zc:__cplusplus is specified as well as a // /std:c++NN switch: // https://devblogs.microsoft.com/cppblog/msvc-now-correctly-reports-__cplusplus/ #elif __cplusplus < 201103L && !defined _MSC_VER #error "This is not a C++11 compiler" #else namespace cxx11 { namespace test_static_assert { template struct check { static_assert(sizeof(int) <= sizeof(T), "not big enough"); }; } namespace test_final_override { struct Base { virtual ~Base() {} virtual void f() {} }; struct Derived : public Base { virtual ~Derived() override {} virtual void f() override {} }; } namespace test_double_right_angle_brackets { template < typename T > struct check {}; typedef check single_type; typedef check> double_type; typedef check>> triple_type; typedef check>>> quadruple_type; } namespace test_decltype { int f() { int a = 1; decltype(a) b = 2; return a + b; } } namespace test_type_deduction { template < typename T1, typename T2 > struct is_same { static const bool value = false; }; template < typename T > struct is_same { static const bool value = true; }; template < typename T1, typename T2 > auto add(T1 a1, T2 a2) -> decltype(a1 + a2) { return a1 + a2; } int test(const int c, volatile int v) { static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == false, ""); auto ac = c; auto av = v; auto sumi = ac + av + 'x'; auto sumf = ac + av + 1.0; static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == true, ""); static_assert(is_same::value == false, ""); static_assert(is_same::value == true, ""); return (sumf > 0.0) ? sumi : add(c, v); } } namespace test_noexcept { int f() { return 0; } int g() noexcept { return 0; } static_assert(noexcept(f()) == false, ""); static_assert(noexcept(g()) == true, ""); } namespace test_constexpr { template < typename CharT > unsigned long constexpr strlen_c_r(const CharT *const s, const unsigned long acc) noexcept { return *s ? 
strlen_c_r(s + 1, acc + 1) : acc; } template < typename CharT > unsigned long constexpr strlen_c(const CharT *const s) noexcept { return strlen_c_r(s, 0UL); } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("1") == 1UL, ""); static_assert(strlen_c("example") == 7UL, ""); static_assert(strlen_c("another\0example") == 7UL, ""); } namespace test_rvalue_references { template < int N > struct answer { static constexpr int value = N; }; answer<1> f(int&) { return answer<1>(); } answer<2> f(const int&) { return answer<2>(); } answer<3> f(int&&) { return answer<3>(); } void test() { int i = 0; const int c = 0; static_assert(decltype(f(i))::value == 1, ""); static_assert(decltype(f(c))::value == 2, ""); static_assert(decltype(f(0))::value == 3, ""); } } namespace test_uniform_initialization { struct test { static const int zero {}; static const int one {1}; }; static_assert(test::zero == 0, ""); static_assert(test::one == 1, ""); } namespace test_lambdas { void test1() { auto lambda1 = [](){}; auto lambda2 = lambda1; lambda1(); lambda2(); } int test2() { auto a = [](int i, int j){ return i + j; }(1, 2); auto b = []() -> int { return '0'; }(); auto c = [=](){ return a + b; }(); auto d = [&](){ return c; }(); auto e = [a, &b](int x) mutable { const auto identity = [](int y){ return y; }; for (auto i = 0; i < a; ++i) a += b--; return x + identity(a + b); }(0); return a + b + c + d + e; } int test3() { const auto nullary = [](){ return 0; }; const auto unary = [](int x){ return x; }; using nullary_t = decltype(nullary); using unary_t = decltype(unary); const auto higher1st = [](nullary_t f){ return f(); }; const auto higher2nd = [unary](nullary_t f1){ return [unary, f1](unary_t f2){ return f2(unary(f1())); }; }; return higher1st(nullary) + higher2nd(nullary)(unary); } } namespace test_variadic_templates { template struct sum; template struct sum { static constexpr auto value = N0 + sum::value; }; template <> struct sum<> { static constexpr auto value = 0; }; static_assert(sum<>::value == 0, ""); static_assert(sum<1>::value == 1, ""); static_assert(sum<23>::value == 23, ""); static_assert(sum<1, 2>::value == 3, ""); static_assert(sum<5, 5, 11>::value == 21, ""); static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, ""); } // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function // because of this. namespace test_template_alias_sfinae { struct foo {}; template using member = typename T::member_type; template void func(...) {} template void func(member*) {} void test(); void test() { func(0); } } } // namespace cxx11 #endif // __cplusplus >= 201103L ]]) dnl Tests for new features in C++14 m4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_14], [[ // If the compiler admits that it is not ready for C++14, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" #elif __cplusplus < 201402L && !defined _MSC_VER #error "This is not a C++14 compiler" #else namespace cxx14 { namespace test_polymorphic_lambdas { int test() { const auto lambda = [](auto&&... args){ const auto istiny = [](auto x){ return (sizeof(x) == 1UL) ? 1 : 0; }; const int aretiny[] = { istiny(args)... 
}; return aretiny[0]; }; return lambda(1, 1L, 1.0f, '1'); } } namespace test_binary_literals { constexpr auto ivii = 0b0000000000101010; static_assert(ivii == 42, "wrong value"); } namespace test_generalized_constexpr { template < typename CharT > constexpr unsigned long strlen_c(const CharT *const s) noexcept { auto length = 0UL; for (auto p = s; *p; ++p) ++length; return length; } static_assert(strlen_c("") == 0UL, ""); static_assert(strlen_c("x") == 1UL, ""); static_assert(strlen_c("test") == 4UL, ""); static_assert(strlen_c("another\0test") == 7UL, ""); } namespace test_lambda_init_capture { int test() { auto x = 0; const auto lambda1 = [a = x](int b){ return a + b; }; const auto lambda2 = [a = lambda1(x)](){ return a; }; return lambda2(); } } namespace test_digit_separators { constexpr auto ten_million = 100'000'000; static_assert(ten_million == 100000000, ""); } namespace test_return_type_deduction { auto f(int& x) { return x; } decltype(auto) g(int& x) { return x; } template < typename T1, typename T2 > struct is_same { static constexpr auto value = false; }; template < typename T > struct is_same { static constexpr auto value = true; }; int test() { auto x = 0; static_assert(is_same::value, ""); static_assert(is_same::value, ""); return x; } } } // namespace cxx14 #endif // __cplusplus >= 201402L ]]) dnl Tests for new features in C++17 m4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_17], [[ // If the compiler admits that it is not ready for C++17, why torture it? // Hopefully, this will speed up the test. #ifndef __cplusplus #error "This is not a C++ compiler" #elif __cplusplus < 201703L && !defined _MSC_VER #error "This is not a C++17 compiler" #else #include #include #include namespace cxx17 { namespace test_constexpr_lambdas { constexpr int foo = [](){return 42;}(); } namespace test::nested_namespace::definitions { } namespace test_fold_expression { template int multiply(Args... args) { return (args * ... * 1); } template bool all(Args... 
args) { return (args && ...); } } namespace test_extended_static_assert { static_assert (true); } namespace test_auto_brace_init_list { auto foo = {5}; auto bar {5}; static_assert(std::is_same, decltype(foo)>::value); static_assert(std::is_same::value); } namespace test_typename_in_template_template_parameter { template typename X> struct D; } namespace test_fallthrough_nodiscard_maybe_unused_attributes { int f1() { return 42; } [[nodiscard]] int f2() { [[maybe_unused]] auto unused = f1(); switch (f1()) { case 17: f1(); [[fallthrough]]; case 42: f1(); } return f1(); } } namespace test_extended_aggregate_initialization { struct base1 { int b1, b2 = 42; }; struct base2 { base2() { b3 = 42; } int b3; }; struct derived : base1, base2 { int d; }; derived d1 {{1, 2}, {}, 4}; // full initialization derived d2 {{}, {}, 4}; // value-initialized bases } namespace test_general_range_based_for_loop { struct iter { int i; int& operator* () { return i; } const int& operator* () const { return i; } iter& operator++() { ++i; return *this; } }; struct sentinel { int i; }; bool operator== (const iter& i, const sentinel& s) { return i.i == s.i; } bool operator!= (const iter& i, const sentinel& s) { return !(i == s); } struct range { iter begin() const { return {0}; } sentinel end() const { return {5}; } }; void f() { range r {}; for (auto i : r) { [[maybe_unused]] auto v = i; } } } namespace test_lambda_capture_asterisk_this_by_value { struct t { int i; int foo() { return [*this]() { return i; }(); } }; } namespace test_enum_class_construction { enum class byte : unsigned char {}; byte foo {42}; } namespace test_constexpr_if { template int f () { if constexpr(cond) { return 13; } else { return 42; } } } namespace test_selection_statement_with_initializer { int f() { return 13; } int f2() { if (auto i = f(); i > 0) { return 3; } switch (auto i = f(); i + 4) { case 17: return 2; default: return 1; } } } namespace test_template_argument_deduction_for_class_templates { template struct pair { pair (T1 p1, T2 p2) : m1 {p1}, m2 {p2} {} T1 m1; T2 m2; }; void f() { [[maybe_unused]] auto p = pair{13, 42u}; } } namespace test_non_type_auto_template_parameters { template struct B {}; B<5> b1; B<'a'> b2; } namespace test_structured_bindings { int arr[2] = { 1, 2 }; std::pair pr = { 1, 2 }; auto f1() -> int(&)[2] { return arr; } auto f2() -> std::pair& { return pr; } struct S { int x1 : 2; volatile double y1; }; S f3() { return {}; } auto [ x1, y1 ] = f1(); auto& [ xr1, yr1 ] = f1(); auto [ x2, y2 ] = f2(); auto& [ xr2, yr2 ] = f2(); const auto [ x3, y3 ] = f3(); } namespace test_exception_spec_type_system { struct Good {}; struct Bad {}; void g1() noexcept; void g2(); template Bad f(T*, T*); template Good f(T1*, T2*); static_assert (std::is_same_v); } namespace test_inline_variables { template void f(T) {} template inline T g(T) { return T{}; } template<> inline void f<>(int) {} template<> int g<>(int) { return 5; } } } // namespace cxx17 #endif // __cplusplus < 201703L && !defined _MSC_VER ]]) dnl Tests for new features in C++20 m4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_20], [[ #ifndef __cplusplus #error "This is not a C++ compiler" #elif __cplusplus < 202002L && !defined _MSC_VER #error "This is not a C++20 compiler" #else #include namespace cxx20 { // As C++20 supports feature test macros in the standard, there is no // immediate need to actually test for feature availability on the // Autoconf side. 
} // namespace cxx20 #endif // __cplusplus < 202002L && !defined _MSC_VER ]]) gevent-24.11.1/deps/c-ares/m4/ax_cxx_compile_stdcxx_14.m4000066400000000000000000000025131471441230600227450ustar00rootroot00000000000000# ============================================================================= # https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx_14.html # ============================================================================= # # SYNOPSIS # # AX_CXX_COMPILE_STDCXX_14([ext|noext], [mandatory|optional]) # # DESCRIPTION # # Check for baseline language coverage in the compiler for the C++14 # standard; if necessary, add switches to CXX and CXXCPP to enable # support. # # This macro is a convenience alias for calling the AX_CXX_COMPILE_STDCXX # macro with the version set to C++14. The two optional arguments are # forwarded literally as the second and third argument respectively. # Please see the documentation for the AX_CXX_COMPILE_STDCXX macro for # more information. If you want to use this macro, you also need to # download the ax_cxx_compile_stdcxx.m4 file. # # LICENSE # # Copyright (c) 2015 Moritz Klammler # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 5 AX_REQUIRE_DEFINED([AX_CXX_COMPILE_STDCXX]) AC_DEFUN([AX_CXX_COMPILE_STDCXX_14], [AX_CXX_COMPILE_STDCXX([14], [$1], [$2])]) gevent-24.11.1/deps/c-ares/m4/ax_file_escapes.m4000066400000000000000000000013731471441230600211570ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_file_escapes.html # =========================================================================== # # SYNOPSIS # # AX_FILE_ESCAPES # # DESCRIPTION # # Writes the specified data to the specified file. # # LICENSE # # Copyright (c) 2008 Tom Howard # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 8 AC_DEFUN([AX_FILE_ESCAPES],[ AX_DOLLAR="\$" AX_SRB="\\135" AX_SLB="\\133" AX_BS="\\\\" AX_DQ="\"" ]) gevent-24.11.1/deps/c-ares/m4/ax_pthread.m4000066400000000000000000000540341471441230600201660ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_pthread.html # =========================================================================== # # SYNOPSIS # # AX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) # # DESCRIPTION # # This macro figures out how to build C programs using POSIX threads. It # sets the PTHREAD_LIBS output variable to the threads library and linker # flags, and the PTHREAD_CFLAGS output variable to any special C compiler # flags that are needed. (The user can also force certain compiler # flags/libs to be tested by setting these environment variables.) # # Also sets PTHREAD_CC and PTHREAD_CXX to any special C compiler that is # needed for multi-threaded programs (defaults to the value of CC # respectively CXX otherwise). (This is necessary on e.g. AIX to use the # special cc_r/CC_r compiler alias.) # # NOTE: You are assumed to not only compile your program with these flags, # but also to link with them as well. 
For example, you might link with # $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # $PTHREAD_CXX $CXXFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # # If you are only building threaded programs, you may wish to use these # variables in your default LIBS, CFLAGS, and CC: # # LIBS="$PTHREAD_LIBS $LIBS" # CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # CXXFLAGS="$CXXFLAGS $PTHREAD_CFLAGS" # CC="$PTHREAD_CC" # CXX="$PTHREAD_CXX" # # In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute constant # has a nonstandard name, this macro defines PTHREAD_CREATE_JOINABLE to # that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). # # Also HAVE_PTHREAD_PRIO_INHERIT is defined if pthread is found and the # PTHREAD_PRIO_INHERIT symbol is defined when compiling with # PTHREAD_CFLAGS. # # ACTION-IF-FOUND is a list of shell commands to run if a threads library # is found, and ACTION-IF-NOT-FOUND is a list of commands to run it if it # is not found. If ACTION-IF-FOUND is not specified, the default action # will define HAVE_PTHREAD. # # Please let the authors know if this macro fails on any platform, or if # you have any other suggestions or comments. This macro was based on work # by SGJ on autoconf scripts for FFTW (http://www.fftw.org/) (with help # from M. Frigo), as well as ac_pthread and hb_pthread macros posted by # Alejandro Forero Cuervo to the autoconf macro repository. We are also # grateful for the helpful feedback of numerous users. # # Updated for Autoconf 2.68 by Daniel Richard G. # # LICENSE # # Copyright (c) 2008 Steven G. Johnson # Copyright (c) 2011 Daniel Richard G. # Copyright (c) 2019 Marc Stevens # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . # # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 31 AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) AC_DEFUN([AX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_REQUIRE([AC_PROG_CC]) AC_REQUIRE([AC_PROG_SED]) AC_LANG_PUSH([C]) ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on Tru64 or Sequent). # It gets checked for in the link test anyway. 
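#
# Illustrative sketch only (not part of the upstream macro): a typical
# configure.ac consumes AX_PTHREAD roughly as below, erroring out when no
# POSIX thread support is found and otherwise folding the discovered flags
# into the default build variables, as the description above recommends.
#
#   AX_PTHREAD([], [AC_MSG_ERROR([POSIX threads are required])])
#   LIBS="$PTHREAD_LIBS $LIBS"
#   CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
#   CC="$PTHREAD_CC"
#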
# First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then ax_pthread_save_CC="$CC" ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" AS_IF([test "x$PTHREAD_CC" != "x"], [CC="$PTHREAD_CC"]) AS_IF([test "x$PTHREAD_CXX" != "x"], [CXX="$PTHREAD_CXX"]) CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS]) AC_LINK_IFELSE([AC_LANG_CALL([], [pthread_join])], [ax_pthread_ok=yes]) AC_MSG_RESULT([$ax_pthread_ok]) if test "x$ax_pthread_ok" = "xno"; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi CC="$ax_pthread_save_CC" CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items with a "," contain both # C compiler flags (before ",") and linker flags (after ","). Other items # starting with a "-" are C compiler flags, and remaining items are # library names, except for "none" which indicates that we try without # any flags at all, and "pthread-config" which is a program returning # the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 # (Note: HP C rejects this with "bad form for `-t' option") # -pthreads: Solaris/gcc (Note: HP C also rejects) # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads and # -D_REENTRANT too), HP C (must be checked before -lpthread, which # is present but should not be used directly; and before -mthreads, # because the compiler interprets this as "-mt" + "-hreads") # -mthreads: Mingw32/gcc, Lynx/gcc # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case $host_os in freebsd*) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) ax_pthread_flags="-kthread lthread $ax_pthread_flags" ;; hpux*) # From the cc(1) man page: "[-mt] Sets various -D flags to enable # multi-threading and also sets -lpthread." ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" ;; openedition*) # IBM z/OS requires a feature-test macro to be defined in order to # enable POSIX threads at all, so give the user a hint if this is # not set. (We don't define these ourselves, as they can affect # other portions of the system API in unpredictable ways.) 
AC_EGREP_CPP([AX_PTHREAD_ZOS_MISSING], [ # if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) AX_PTHREAD_ZOS_MISSING # endif ], [AC_MSG_WARN([IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support.])]) ;; solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (N.B.: The stubs are missing # pthread_cleanup_push, or rather a function called by this macro, # so we could check for that, but who knows whether they'll stub # that too in a future libc.) So we'll check first for the # standard Solaris way of linking pthreads (-mt -lpthread). ax_pthread_flags="-mt,-lpthread pthread $ax_pthread_flags" ;; esac # Are we compiling with Clang? AC_CACHE_CHECK([whether $CC is Clang], [ax_cv_PTHREAD_CLANG], [ax_cv_PTHREAD_CLANG=no # Note that Autoconf sets GCC=yes for Clang as well as GCC if test "x$GCC" = "xyes"; then AC_EGREP_CPP([AX_PTHREAD_CC_IS_CLANG], [/* Note: Clang 2.7 lacks __clang_[a-z]+__ */ # if defined(__clang__) && defined(__llvm__) AX_PTHREAD_CC_IS_CLANG # endif ], [ax_cv_PTHREAD_CLANG=yes]) fi ]) ax_pthread_clang="$ax_cv_PTHREAD_CLANG" # GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) # Note that for GCC and Clang -pthread generally implies -lpthread, # except when -nostdlib is passed. # This is problematic using libtool to build C++ shared libraries with pthread: # [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=25460 # [2] https://bugzilla.redhat.com/show_bug.cgi?id=661333 # [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=468555 # To solve this, first try -pthread together with -lpthread for GCC AS_IF([test "x$GCC" = "xyes"], [ax_pthread_flags="-pthread,-lpthread -pthread -pthreads $ax_pthread_flags"]) # Clang takes -pthread (never supported any other flag), but we'll try with -lpthread first AS_IF([test "x$ax_pthread_clang" = "xyes"], [ax_pthread_flags="-pthread,-lpthread -pthread"]) # The presence of a feature test macro requesting re-entrant function # definitions is, on some systems, a strong hint that pthreads support is # correctly enabled case $host_os in darwin* | hpux* | linux* | osf* | solaris*) ax_pthread_check_macro="_REENTRANT" ;; aix*) ax_pthread_check_macro="_THREAD_SAFE" ;; *) ax_pthread_check_macro="--" ;; esac AS_IF([test "x$ax_pthread_check_macro" = "x--"], [ax_pthread_check_cond=0], [ax_pthread_check_cond="!defined($ax_pthread_check_macro)"]) if test "x$ax_pthread_ok" = "xno"; then for ax_pthread_try_flag in $ax_pthread_flags; do case $ax_pthread_try_flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; *,*) PTHREAD_CFLAGS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\1/"` PTHREAD_LIBS=`echo $ax_pthread_try_flag | sed "s/^\(.*\),\(.*\)$/\2/"` AC_MSG_CHECKING([whether pthreads work with "$PTHREAD_CFLAGS" and "$PTHREAD_LIBS"]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $ax_pthread_try_flag]) PTHREAD_CFLAGS="$ax_pthread_try_flag" ;; pthread-config) AC_CHECK_PROG([ax_pthread_config], [pthread-config], [yes], [no]) AS_IF([test "x$ax_pthread_config" = "xno"], [continue]) PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$ax_pthread_try_flag]) PTHREAD_LIBS="-l$ax_pthread_try_flag" ;; esac ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Check for various functions. 
We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. AC_LINK_IFELSE([AC_LANG_PROGRAM([#include # if $ax_pthread_check_cond # error "$ax_pthread_check_macro must be defined" # endif static void *some_global = NULL; static void routine(void *a) { /* To avoid any unused-parameter or unused-but-set-parameter warning. */ some_global = a; } static void *start_routine(void *a) { return a; }], [pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */])], [ax_pthread_ok=yes], []) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" AC_MSG_RESULT([$ax_pthread_ok]) AS_IF([test "x$ax_pthread_ok" = "xyes"], [break]) PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Clang needs special handling, because older versions handle the -pthread # option in a rather... idiosyncratic way if test "x$ax_pthread_clang" = "xyes"; then # Clang takes -pthread; it has never supported any other flag # (Note 1: This will need to be revisited if a system that Clang # supports has POSIX threads in a separate library. This tends not # to be the way of modern systems, but it's conceivable.) # (Note 2: On some systems, notably Darwin, -pthread is not needed # to get POSIX threads support; the API is always present and # active. We could reasonably leave PTHREAD_CFLAGS empty. But # -pthread does define _REENTRANT, and while the Darwin headers # ignore this macro, third-party headers might not.) # However, older versions of Clang make a point of warning the user # that, in an invocation where only linking and no compilation is # taking place, the -pthread option has no effect ("argument unused # during compilation"). They expect -pthread to be passed in only # when source code is being compiled. # # Problem is, this is at odds with the way Automake and most other # C build frameworks function, which is that the same flags used in # compilation (CFLAGS) are also used in linking. Many systems # supported by AX_PTHREAD require exactly this for POSIX threads # support, and in fact it is often not straightforward to specify a # flag that is used only in the compilation phase and not in # linking. Such a scenario is extremely rare in practice. # # Even though use of the -pthread flag in linking would only print # a warning, this can be a nuisance for well-run software projects # that build with -Werror. So if the active version of Clang has # this misfeature, we search for an option to squash it. 
AC_CACHE_CHECK([whether Clang needs flag to prevent "argument unused" warning when linking with -pthread], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown # Create an alternate version of $ac_link that compiles and # links in two steps (.c -> .o, .o -> exe) instead of one # (.c -> exe), because the warning occurs only in the second # step ax_pthread_save_ac_link="$ac_link" ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' ax_pthread_link_step=`AS_ECHO(["$ac_link"]) | sed "$ax_pthread_sed"` ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" ax_pthread_save_CFLAGS="$CFLAGS" for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do AS_IF([test "x$ax_pthread_try" = "xunknown"], [break]) CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" ac_link="$ax_pthread_save_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [ac_link="$ax_pthread_2step_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [break]) ]) done ac_link="$ax_pthread_save_ac_link" CFLAGS="$ax_pthread_save_CFLAGS" AS_IF([test "x$ax_pthread_try" = "x"], [ax_pthread_try=no]) ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" ]) case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in no | unknown) ;; *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; esac fi # $ax_pthread_clang = yes # Various other checks: if test "x$ax_pthread_ok" = "xyes"; then ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. AC_CACHE_CHECK([for joinable pthread attribute], [ax_cv_PTHREAD_JOINABLE_ATTR], [ax_cv_PTHREAD_JOINABLE_ATTR=unknown for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], [int attr = $ax_pthread_attr; return attr /* ; */])], [ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break], []) done ]) AS_IF([test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ test "x$ax_pthread_joinable_attr_defined" != "xyes"], [AC_DEFINE_UNQUOTED([PTHREAD_CREATE_JOINABLE], [$ax_cv_PTHREAD_JOINABLE_ATTR], [Define to necessary symbol if this constant uses a non-standard name on your system.]) ax_pthread_joinable_attr_defined=yes ]) AC_CACHE_CHECK([whether more special flags are required for pthreads], [ax_cv_PTHREAD_SPECIAL_FLAGS], [ax_cv_PTHREAD_SPECIAL_FLAGS=no case $host_os in solaris*) ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" ;; esac ]) AS_IF([test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ test "x$ax_pthread_special_flags_added" != "xyes"], [PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" ax_pthread_special_flags_added=yes]) AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], [ax_cv_PTHREAD_PRIO_INHERIT], [AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]], [[int i = PTHREAD_PRIO_INHERIT; return i;]])], [ax_cv_PTHREAD_PRIO_INHERIT=yes], [ax_cv_PTHREAD_PRIO_INHERIT=no]) ]) AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ test "x$ax_pthread_prio_inherit_defined" != "xyes"], [AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], [1], [Have PTHREAD_PRIO_INHERIT.]) ax_pthread_prio_inherit_defined=yes ]) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" # More AIX lossage: compile with *_r variant if test "x$GCC" != "xyes"; then case $host_os in aix*) AS_CASE(["x/$CC"], 
[x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], [#handle absolute path differently from PATH based program lookup AS_CASE(["x$CC"], [x/*], [ AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"]) AS_IF([test "x${CXX}" != "x"], [AS_IF([AS_EXECUTABLE_P([${CXX}_r])],[PTHREAD_CXX="${CXX}_r"])]) ], [ AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC]) AS_IF([test "x${CXX}" != "x"], [AC_CHECK_PROGS([PTHREAD_CXX],[${CXX}_r],[$CXX])]) ] ) ]) ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" test -n "$PTHREAD_CXX" || PTHREAD_CXX="$CXX" AC_SUBST([PTHREAD_LIBS]) AC_SUBST([PTHREAD_CFLAGS]) AC_SUBST([PTHREAD_CC]) AC_SUBST([PTHREAD_CXX]) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test "x$ax_pthread_ok" = "xyes"; then ifelse([$1],,[AC_DEFINE([HAVE_PTHREAD],[1],[Define if you have POSIX threads libraries and header files.])],[$1]) : else ax_pthread_ok=no $2 fi AC_LANG_POP ])dnl AX_PTHREAD gevent-24.11.1/deps/c-ares/m4/ax_require_defined.m4000066400000000000000000000023021471441230600216600ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_require_defined.html # =========================================================================== # # SYNOPSIS # # AX_REQUIRE_DEFINED(MACRO) # # DESCRIPTION # # AX_REQUIRE_DEFINED is a simple helper for making sure other macros have # been defined and thus are available for use. This avoids random issues # where a macro isn't expanded. Instead the configure script emits a # non-fatal: # # ./configure: line 1673: AX_CFLAGS_WARN_ALL: command not found # # It's like AC_REQUIRE except it doesn't expand the required macro. # # Here's an example: # # AX_REQUIRE_DEFINED([AX_CHECK_LINK_FLAG]) # # LICENSE # # Copyright (c) 2014 Mike Frysinger # # Copying and distribution of this file, with or without modification, are # permitted in any medium without royalty provided the copyright notice # and this notice are preserved. This file is offered as-is, without any # warranty. #serial 2 AC_DEFUN([AX_REQUIRE_DEFINED], [dnl m4_ifndef([$1], [m4_fatal([macro ]$1[ is not defined; is a m4 file missing?])]) ])dnl AX_REQUIRE_DEFINED gevent-24.11.1/deps/c-ares/m4/libtool.m4000066400000000000000000011307371471441230600175210ustar00rootroot00000000000000# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*- # # Copyright (C) 1996-2001, 2003-2019, 2021-2022 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. m4_define([_LT_COPYING], [dnl # Copyright (C) 2014 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of of the License, or # (at your option) any later version. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program or library that is built # using GNU Libtool, you may include this file under the same # distribution terms that you use for the rest of that program. 
# # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . ]) # serial 59 LT_INIT # LT_PREREQ(VERSION) # ------------------ # Complain and exit if this libtool version is less that VERSION. m4_defun([LT_PREREQ], [m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1, [m4_default([$3], [m4_fatal([Libtool version $1 or higher is required], 63)])], [$2])]) # _LT_CHECK_BUILDDIR # ------------------ # Complain if the absolute build directory name contains unusual characters m4_defun([_LT_CHECK_BUILDDIR], [case `pwd` in *\ * | *\ *) AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;; esac ]) # LT_INIT([OPTIONS]) # ------------------ AC_DEFUN([LT_INIT], [AC_PREREQ([2.62])dnl We use AC_PATH_PROGS_FEATURE_CHECK AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl AC_BEFORE([$0], [LT_LANG])dnl AC_BEFORE([$0], [LT_OUTPUT])dnl AC_BEFORE([$0], [LTDL_INIT])dnl m4_require([_LT_CHECK_BUILDDIR])dnl dnl Autoconf doesn't catch unexpanded LT_ macros by default: m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4 dnl unless we require an AC_DEFUNed macro: AC_REQUIRE([LTOPTIONS_VERSION])dnl AC_REQUIRE([LTSUGAR_VERSION])dnl AC_REQUIRE([LTVERSION_VERSION])dnl AC_REQUIRE([LTOBSOLETE_VERSION])dnl m4_require([_LT_PROG_LTMAIN])dnl _LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}]) dnl Parse OPTIONS _LT_SET_OPTIONS([$0], [$1]) # This can be used to rebuild libtool when needed LIBTOOL_DEPS=$ltmain # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' AC_SUBST(LIBTOOL)dnl _LT_SETUP # Only expand once: m4_define([LT_INIT]) ])# LT_INIT # Old names: AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT]) AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PROG_LIBTOOL], []) dnl AC_DEFUN([AM_PROG_LIBTOOL], []) # _LT_PREPARE_CC_BASENAME # ----------------------- m4_defun([_LT_PREPARE_CC_BASENAME], [ # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. func_cc_basename () { for cc_temp in @S|@*""; do case $cc_temp in compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; \-*) ;; *) break;; esac done func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` } ])# _LT_PREPARE_CC_BASENAME # _LT_CC_BASENAME(CC) # ------------------- # It would be clearer to call AC_REQUIREs from _LT_PREPARE_CC_BASENAME, # but that macro is also expanded into generated libtool script, which # arranges for $SED and $ECHO to be set by different means. m4_defun([_LT_CC_BASENAME], [m4_require([_LT_PREPARE_CC_BASENAME])dnl AC_REQUIRE([_LT_DECL_SED])dnl AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl func_cc_basename $1 cc_basename=$func_cc_basename_result ]) # _LT_FILEUTILS_DEFAULTS # ---------------------- # It is okay to use these file commands and assume they have been set # sensibly after 'm4_require([_LT_FILEUTILS_DEFAULTS])'. 
m4_defun([_LT_FILEUTILS_DEFAULTS], [: ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} ])# _LT_FILEUTILS_DEFAULTS # _LT_SETUP # --------- m4_defun([_LT_SETUP], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl _LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl dnl _LT_DECL([], [host_alias], [0], [The host system])dnl _LT_DECL([], [host], [0])dnl _LT_DECL([], [host_os], [0])dnl dnl _LT_DECL([], [build_alias], [0], [The build system])dnl _LT_DECL([], [build], [0])dnl _LT_DECL([], [build_os], [0])dnl dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl dnl AC_REQUIRE([AC_PROG_LN_S])dnl test -z "$LN_S" && LN_S="ln -s" _LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl dnl AC_REQUIRE([LT_CMD_MAX_LEN])dnl _LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl m4_require([_LT_CMD_RELOAD])dnl m4_require([_LT_DECL_FILECMD])dnl m4_require([_LT_CHECK_MAGIC_METHOD])dnl m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl m4_require([_LT_CMD_OLD_ARCHIVE])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_WITH_SYSROOT])dnl m4_require([_LT_CMD_TRUNCATE])dnl _LT_CONFIG_LIBTOOL_INIT([ # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi ]) if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi _LT_CHECK_OBJDIR m4_require([_LT_TAG_COMPILER])dnl case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a '.a' archive for static linking (except MSVC and # ICC, which need '.lib'). libext=a with_gnu_ld=$lt_cv_prog_gnu_ld old_CC=$CC old_CFLAGS=$CFLAGS # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o _LT_CC_BASENAME([$compiler]) # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then _LT_PATH_MAGIC fi ;; esac # Use C for the default configuration in the libtool script LT_SUPPORTED_TAG([CC]) _LT_LANG_C_CONFIG _LT_LANG_DEFAULT_CONFIG _LT_CONFIG_COMMANDS ])# _LT_SETUP # _LT_PREPARE_SED_QUOTE_VARS # -------------------------- # Define a few sed substitution that help us do robust quoting. m4_defun([_LT_PREPARE_SED_QUOTE_VARS], [# Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\([["`$\\]]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\([["`\\]]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. 
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ]) # _LT_PROG_LTMAIN # --------------- # Note that this code is called both from 'configure', and 'config.status' # now that we use AC_CONFIG_COMMANDS to generate libtool. Notably, # 'config.status' has no value for ac_aux_dir unless we are using Automake, # so we pass a copy along to make sure it has a sensible value anyway. m4_defun([_LT_PROG_LTMAIN], [m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl _LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir']) ltmain=$ac_aux_dir/ltmain.sh ])# _LT_PROG_LTMAIN ## ------------------------------------- ## ## Accumulate code for creating libtool. ## ## ------------------------------------- ## # So that we can recreate a full libtool script including additional # tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS # in macros and then make a single call at the end using the 'libtool' # label. # _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS]) # ---------------------------------------- # Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL_INIT], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_INIT], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_INIT]) # _LT_CONFIG_LIBTOOL([COMMANDS]) # ------------------------------ # Register COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS]) # _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS]) # ----------------------------------------------------- m4_defun([_LT_CONFIG_SAVE_COMMANDS], [_LT_CONFIG_LIBTOOL([$1]) _LT_CONFIG_LIBTOOL_INIT([$2]) ]) # _LT_FORMAT_COMMENT([COMMENT]) # ----------------------------- # Add leading comment marks to the start of each line, and a trailing # full-stop to the whole comment if one is not present already. m4_define([_LT_FORMAT_COMMENT], [m4_ifval([$1], [ m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])], [['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.]) )]) ## ------------------------ ## ## FIXME: Eliminate VARNAME ## ## ------------------------ ## # _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?]) # ------------------------------------------------------------------- # CONFIGNAME is the name given to the value in the libtool script. # VARNAME is the (base) name used in the configure script. # VALUE may be 0, 1 or 2 for a computed quote escaped value based on # VARNAME. Any other value will be used directly. 
m4_define([_LT_DECL], [lt_if_append_uniq([lt_decl_varnames], [$2], [, ], [lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name], [m4_ifval([$1], [$1], [$2])]) lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3]) m4_ifval([$4], [lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])]) lt_dict_add_subkey([lt_decl_dict], [$2], [tagged?], [m4_ifval([$5], [yes], [no])])]) ]) # _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION]) # -------------------------------------------------------- m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])]) # lt_decl_tag_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_tag_varnames], [_lt_decl_filter([tagged?], [yes], $@)]) # _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..]) # --------------------------------------------------------- m4_define([_lt_decl_filter], [m4_case([$#], [0], [m4_fatal([$0: too few arguments: $#])], [1], [m4_fatal([$0: too few arguments: $#: $1])], [2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)], [3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)], [lt_dict_filter([lt_decl_dict], $@)])[]dnl ]) # lt_decl_quote_varnames([SEPARATOR], [VARNAME1...]) # -------------------------------------------------- m4_define([lt_decl_quote_varnames], [_lt_decl_filter([value], [1], $@)]) # lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_dquote_varnames], [_lt_decl_filter([value], [2], $@)]) # lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_varnames_tagged], [m4_assert([$# <= 2])dnl _$0(m4_quote(m4_default([$1], [[, ]])), m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]), m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))]) m4_define([_lt_decl_varnames_tagged], [m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])]) # lt_decl_all_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_all_varnames], [_$0(m4_quote(m4_default([$1], [[, ]])), m4_if([$2], [], m4_quote(lt_decl_varnames), m4_quote(m4_shift($@))))[]dnl ]) m4_define([_lt_decl_all_varnames], [lt_join($@, lt_decl_varnames_tagged([$1], lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl ]) # _LT_CONFIG_STATUS_DECLARE([VARNAME]) # ------------------------------------ # Quote a variable value, and forward it to 'config.status' so that its # declaration there will have the same value as in 'configure'. VARNAME # must have a single quote delimited value for this to work. m4_define([_LT_CONFIG_STATUS_DECLARE], [$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`']) # _LT_CONFIG_STATUS_DECLARATIONS # ------------------------------ # We delimit libtool config variables with single quotes, so when # we write them to config.status, we have to be sure to quote all # embedded single quotes properly. 
In configure, this macro expands # each variable declared with _LT_DECL (and _LT_TAGDECL) into: # # ='`$ECHO "$" | $SED "$delay_single_quote_subst"`' m4_defun([_LT_CONFIG_STATUS_DECLARATIONS], [m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames), [m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAGS # ---------------- # Output comment and list of tags supported by the script m4_defun([_LT_LIBTOOL_TAGS], [_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl available_tags='_LT_TAGS'dnl ]) # _LT_LIBTOOL_DECLARE(VARNAME, [TAG]) # ----------------------------------- # Extract the dictionary values for VARNAME (optionally with TAG) and # expand to a commented shell variable setting: # # # Some comment about what VAR is for. # visible_name=$lt_internal_name m4_define([_LT_LIBTOOL_DECLARE], [_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [description])))[]dnl m4_pushdef([_libtool_name], m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])), [0], [_libtool_name=[$]$1], [1], [_libtool_name=$lt_[]$1], [2], [_libtool_name=$lt_[]$1], [_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl ]) # _LT_LIBTOOL_CONFIG_VARS # ----------------------- # Produce commented declarations of non-tagged libtool config variables # suitable for insertion in the LIBTOOL CONFIG section of the 'libtool' # script. Tagged libtool config variables (even for the LIBTOOL CONFIG # section) are produced by _LT_LIBTOOL_TAG_VARS. m4_defun([_LT_LIBTOOL_CONFIG_VARS], [m4_foreach([_lt_var], m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAG_VARS(TAG) # ------------------------- m4_define([_LT_LIBTOOL_TAG_VARS], [m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])]) # _LT_TAGVAR(VARNAME, [TAGNAME]) # ------------------------------ m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])]) # _LT_CONFIG_COMMANDS # ------------------- # Send accumulated output to $CONFIG_STATUS. Thanks to the lists of # variables for single and double quote escaping we saved from calls # to _LT_DECL, we can put quote escaped variables declarations # into 'config.status', and then the shell code to quote escape them in # for loops in 'config.status'. Finally, any additional code accumulated # from calls to _LT_CONFIG_LIBTOOL_INIT is expanded. m4_defun([_LT_CONFIG_COMMANDS], [AC_PROVIDE_IFELSE([LT_OUTPUT], dnl If the libtool generation code has been placed in $CONFIG_LT, dnl instead of duplicating it all over again into config.status, dnl then we will have config.status run $CONFIG_LT later, so it dnl needs to know what name is stored there: [AC_CONFIG_COMMANDS([libtool], [$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])], dnl If the libtool generation code is destined for config.status, dnl expand the accumulated commands and init code now: [AC_CONFIG_COMMANDS([libtool], [_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])]) ])#_LT_CONFIG_COMMANDS # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT], [ # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. 
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' _LT_CONFIG_STATUS_DECLARATIONS LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$[]1 _LTECHO_EOF' } # Quote evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_quote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_dquote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done _LT_OUTPUT_LIBTOOL_INIT ]) # _LT_GENERATED_FILE_INIT(FILE, [COMMENT]) # ------------------------------------ # Generate a child script FILE with all initialization necessary to # reuse the environment learned by the parent script, and make the # file executable. If COMMENT is supplied, it is inserted after the # '#!' sequence but before initialization text begins. After this # macro, additional text can be appended to FILE to form the body of # the child script. The macro ends with non-zero status if the # file could not be fully written (such as if the disk is full). m4_ifdef([AS_INIT_GENERATED], [m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])], [m4_defun([_LT_GENERATED_FILE_INIT], [m4_require([AS_PREPARE])]dnl [m4_pushdef([AS_MESSAGE_LOG_FD])]dnl [lt_write_fail=0 cat >$1 <<_ASEOF || lt_write_fail=1 #! $SHELL # Generated by $as_me. $2 SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$1 <<\_ASEOF || lt_write_fail=1 AS_SHELL_SANITIZE _AS_PREPARE exec AS_MESSAGE_FD>&1 _ASEOF test 0 = "$lt_write_fail" && chmod +x $1[]dnl m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT # LT_OUTPUT # --------- # This macro allows early generation of the libtool script (before # AC_OUTPUT is called), incase it is used in configure for compilation # tests. AC_DEFUN([LT_OUTPUT], [: ${CONFIG_LT=./config.lt} AC_MSG_NOTICE([creating $CONFIG_LT]) _LT_GENERATED_FILE_INIT(["$CONFIG_LT"], [# Run this file to recreate a libtool stub with the current configuration.]) cat >>"$CONFIG_LT" <<\_LTEOF lt_cl_silent=false exec AS_MESSAGE_LOG_FD>>config.log { echo AS_BOX([Running $as_me.]) } >&AS_MESSAGE_LOG_FD lt_cl_help="\ '$as_me' creates a local libtool stub from the current configuration, for use in further configure time tests before the real libtool is generated. Usage: $[0] [[OPTIONS]] -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files Report bugs to ." lt_cl_version="\ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION]) configured by $[0], generated by m4_PACKAGE_STRING. Copyright (C) 2011 Free Software Foundation, Inc. 
This config.lt script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." while test 0 != $[#] do case $[1] in --version | --v* | -V ) echo "$lt_cl_version"; exit 0 ;; --help | --h* | -h ) echo "$lt_cl_help"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --quiet | --q* | --silent | --s* | -q ) lt_cl_silent=: ;; -*) AC_MSG_ERROR([unrecognized option: $[1] Try '$[0] --help' for more information.]) ;; *) AC_MSG_ERROR([unrecognized argument: $[1] Try '$[0] --help' for more information.]) ;; esac shift done if $lt_cl_silent; then exec AS_MESSAGE_FD>/dev/null fi _LTEOF cat >>"$CONFIG_LT" <<_LTEOF _LT_OUTPUT_LIBTOOL_COMMANDS_INIT _LTEOF cat >>"$CONFIG_LT" <<\_LTEOF AC_MSG_NOTICE([creating $ofile]) _LT_OUTPUT_LIBTOOL_COMMANDS AS_EXIT(0) _LTEOF chmod +x "$CONFIG_LT" # configure is writing to config.log, but config.lt does its own redirection, # appending to config.log, which fails on DOS, as config.log is still kept # open by configure. Here we exec the FD to /dev/null, effectively closing # config.log, so it can be properly (re)opened and appended to by config.lt. lt_cl_success=: test yes = "$silent" && lt_config_lt_args="$lt_config_lt_args --quiet" exec AS_MESSAGE_LOG_FD>/dev/null $SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false exec AS_MESSAGE_LOG_FD>>config.log $lt_cl_success || AS_EXIT(1) ])# LT_OUTPUT # _LT_CONFIG(TAG) # --------------- # If TAG is the built-in tag, create an initial libtool script with a # default configuration from the untagged config vars. Otherwise add code # to config.status for appending the configuration named by TAG from the # matching tagged config vars. m4_defun([_LT_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_CONFIG_SAVE_COMMANDS([ m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl m4_if(_LT_TAG, [C], [ # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi cfgfile=${ofile}T trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # Generated automatically by $as_me ($PACKAGE) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # Provide generalized library-building support services. # Written by Gordon Matzigkeit, 1996 _LT_COPYING _LT_LIBTOOL_TAGS # Configured defaults for sys_lib_dlsearch_path munging. : \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"} # ### BEGIN LIBTOOL CONFIG _LT_LIBTOOL_CONFIG_VARS _LT_LIBTOOL_TAG_VARS # ### END LIBTOOL CONFIG _LT_EOF cat <<'_LT_EOF' >> "$cfgfile" # ### BEGIN FUNCTIONS SHARED WITH CONFIGURE _LT_PREPARE_MUNGE_PATH_LIST _LT_PREPARE_CC_BASENAME # ### END FUNCTIONS SHARED WITH CONFIGURE _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac _LT_PROG_LTMAIN # We use sed instead of cat because bash on DJGPP gets confused if # it finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too?
$SED '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ], [cat <<_LT_EOF >> "$ofile" dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded dnl in a comment (ie after a #). # ### BEGIN LIBTOOL TAG CONFIG: $1 _LT_LIBTOOL_TAG_VARS(_LT_TAG) # ### END LIBTOOL TAG CONFIG: $1 _LT_EOF ])dnl /m4_if ], [m4_if([$1], [], [ PACKAGE='$PACKAGE' VERSION='$VERSION' RM='$RM' ofile='$ofile'], []) ])dnl /_LT_CONFIG_SAVE_COMMANDS ])# _LT_CONFIG # LT_SUPPORTED_TAG(TAG) # --------------------- # Trace this macro to discover what tags are supported by the libtool # --tag option, using: # autoconf --trace 'LT_SUPPORTED_TAG:$1' AC_DEFUN([LT_SUPPORTED_TAG], []) # C support is built-in for now m4_define([_LT_LANG_C_enabled], []) m4_define([_LT_TAGS], []) # LT_LANG(LANG) # ------------- # Enable libtool support for the given language if not already enabled. AC_DEFUN([LT_LANG], [AC_BEFORE([$0], [LT_OUTPUT])dnl m4_case([$1], [C], [_LT_LANG(C)], [C++], [_LT_LANG(CXX)], [Go], [_LT_LANG(GO)], [Java], [_LT_LANG(GCJ)], [Fortran 77], [_LT_LANG(F77)], [Fortran], [_LT_LANG(FC)], [Windows Resource], [_LT_LANG(RC)], [m4_ifdef([_LT_LANG_]$1[_CONFIG], [_LT_LANG($1)], [m4_fatal([$0: unsupported language: "$1"])])])dnl ])# LT_LANG # _LT_LANG(LANGNAME) # ------------------ m4_defun([_LT_LANG], [m4_ifdef([_LT_LANG_]$1[_enabled], [], [LT_SUPPORTED_TAG([$1])dnl m4_append([_LT_TAGS], [$1 ])dnl m4_define([_LT_LANG_]$1[_enabled], [])dnl _LT_LANG_$1_CONFIG($1)])dnl ])# _LT_LANG m4_ifndef([AC_PROG_GO], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_GO. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_GO], [AC_LANG_PUSH(Go)dnl AC_ARG_VAR([GOC], [Go compiler command])dnl AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl _AC_ARG_VAR_LDFLAGS()dnl AC_CHECK_TOOL(GOC, gccgo) if test -z "$GOC"; then if test -n "$ac_tool_prefix"; then AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo]) fi fi if test -z "$GOC"; then AC_CHECK_PROG(GOC, gccgo, gccgo, false) fi ])#m4_defun ])#m4_ifndef # _LT_LANG_DEFAULT_CONFIG # ----------------------- m4_defun([_LT_LANG_DEFAULT_CONFIG], [AC_PROVIDE_IFELSE([AC_PROG_CXX], [LT_LANG(CXX)], [m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])]) AC_PROVIDE_IFELSE([AC_PROG_F77], [LT_LANG(F77)], [m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])]) AC_PROVIDE_IFELSE([AC_PROG_FC], [LT_LANG(FC)], [m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])]) dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal dnl pulling things in needlessly. 
AC_PROVIDE_IFELSE([AC_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([A][M_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([LT_PROG_GCJ], [LT_LANG(GCJ)], [m4_ifdef([AC_PROG_GCJ], [m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([A][M_PROG_GCJ], [m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([LT_PROG_GCJ], [m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])]) AC_PROVIDE_IFELSE([AC_PROG_GO], [LT_LANG(GO)], [m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])]) AC_PROVIDE_IFELSE([LT_PROG_RC], [LT_LANG(RC)], [m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])]) ])# _LT_LANG_DEFAULT_CONFIG # Obsolete macros: AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)]) AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)]) AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)]) AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)]) AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_CXX], []) dnl AC_DEFUN([AC_LIBTOOL_F77], []) dnl AC_DEFUN([AC_LIBTOOL_FC], []) dnl AC_DEFUN([AC_LIBTOOL_GCJ], []) dnl AC_DEFUN([AC_LIBTOOL_RC], []) # _LT_TAG_COMPILER # ---------------- m4_defun([_LT_TAG_COMPILER], [AC_REQUIRE([AC_PROG_CC])dnl _LT_DECL([LTCC], [CC], [1], [A C compiler])dnl _LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl _LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl _LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC ])# _LT_TAG_COMPILER # _LT_COMPILER_BOILERPLATE # ------------------------ # Check for compiler boilerplate output or warnings with # the simple compiler test code. m4_defun([_LT_COMPILER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ])# _LT_COMPILER_BOILERPLATE # _LT_LINKER_BOILERPLATE # ---------------------- # Check for linker boilerplate output or warnings with # the simple link test code. 
m4_defun([_LT_LINKER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ])# _LT_LINKER_BOILERPLATE # _LT_REQUIRED_DARWIN_CHECKS # ------------------------- m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[ case $host_os in rhapsody* | darwin*) AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:]) AC_CHECK_TOOL([NMEDIT], [nmedit], [:]) AC_CHECK_TOOL([LIPO], [lipo], [:]) AC_CHECK_TOOL([OTOOL], [otool], [:]) AC_CHECK_TOOL([OTOOL64], [otool64], [:]) _LT_DECL([], [DSYMUTIL], [1], [Tool to manipulate archived DWARF debug symbol files on Mac OS X]) _LT_DECL([], [NMEDIT], [1], [Tool to change global to local symbols on Mac OS X]) _LT_DECL([], [LIPO], [1], [Tool to manipulate fat objects and archives on Mac OS X]) _LT_DECL([], [OTOOL], [1], [ldd/readelf like tool for Mach-O binaries on Mac OS X]) _LT_DECL([], [OTOOL64], [1], [ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4]) AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod], [lt_cv_apple_cc_single_mod=no if test -z "$LT_MULTI_MODULE"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test 0 = "$_lt_result"; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -rf libconftest.dylib* rm -f conftest.* fi]) AC_CACHE_CHECK([for -exported_symbols_list linker flag], [lt_cv_ld_exported_symbols_list], [lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [lt_cv_ld_exported_symbols_list=yes], [lt_cv_ld_exported_symbols_list=no]) LDFLAGS=$save_LDFLAGS ]) AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load], [lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD echo "$AR $AR_FLAGS libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD $AR $AR_FLAGS libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? 
if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then lt_cv_ld_force_load=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM ]) case $host_os in rhapsody* | darwin1.[[012]]) _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; darwin*) case $MACOSX_DEPLOYMENT_TARGET,$host in 10.[[012]],*|,*powerpc*-darwin[[5-8]]*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; *) _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test yes = "$lt_cv_apple_cc_single_mod"; then _lt_dar_single_mod='$single_module' fi _lt_dar_needs_single_mod=no case $host_os in rhapsody* | darwin1.*) _lt_dar_needs_single_mod=yes ;; darwin*) # When targeting Mac OS X 10.4 (darwin 8) or later, # -single_module is the default and -multi_module is unsupported. # The toolchain on macOS 10.14 (darwin 18) and later cannot # target any OS version that needs -single_module. case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*-darwin[[567]].*|10.[[0-3]],*-darwin[[5-9]].*|10.[[0-3]],*-darwin1[[0-7]].*) _lt_dar_needs_single_mod=yes ;; esac ;; esac if test yes = "$lt_cv_ld_exported_symbols_list"; then _lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib' fi if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac ]) # _LT_DARWIN_LINKER_FEATURES([TAG]) # --------------------------------- # Checks for linker and compiler features on darwin m4_defun([_LT_DARWIN_LINKER_FEATURES], [ m4_require([_LT_REQUIRED_DARWIN_CHECKS]) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported if test yes = "$lt_cv_ld_force_load"; then _LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes], [FC], [_LT_TAGVAR(compiler_needs_object, $1)=yes]) else _LT_TAGVAR(whole_archive_flag_spec, $1)='' fi _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=$_lt_dar_allow_undefined case $cc_basename in ifort*|nagfor*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test yes = "$_lt_dar_can_shared"; then output_verbose_link_cmd=func_echo_all _LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil" _LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil" _LT_TAGVAR(archive_expsym_cmds, $1)="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil" _LT_TAGVAR(module_expsym_cmds, $1)="$SED -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC 
\$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil" m4_if([$1], [CXX], [ if test yes = "$_lt_dar_needs_single_mod" -a yes != "$lt_cv_apple_cc_single_mod"; then _LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dsymutil" _LT_TAGVAR(archive_expsym_cmds, $1)="$SED 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dar_export_syms$_lt_dsymutil" fi ],[]) else _LT_TAGVAR(ld_shlibs, $1)=no fi ]) # _LT_SYS_MODULE_PATH_AIX([TAGNAME]) # ---------------------------------- # Links a minimal program and checks the executable # for the system default hardcoded library path. In most cases, # this is /usr/lib:/lib, but when the MPI compilers are used # the location of the communication and MPI libs are included too. # If we don't find anything, use the default library path according # to the aix ld manual. # Store the results from the different compilers for each TAGNAME. # Allow to override them for all tags through lt_cv_aix_libpath. m4_defun([_LT_SYS_MODULE_PATH_AIX], [m4_require([_LT_DECL_SED])dnl if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])], [AC_LINK_IFELSE([AC_LANG_PROGRAM],[ lt_aix_libpath_sed='[ /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }]' _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi],[]) if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=/usr/lib:/lib fi ]) aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1]) fi ])# _LT_SYS_MODULE_PATH_AIX # _LT_SHELL_INIT(ARG) # ------------------- m4_define([_LT_SHELL_INIT], [m4_divert_text([M4SH-INIT], [$1 ])])# _LT_SHELL_INIT # _LT_PROG_ECHO_BACKSLASH # ----------------------- # Find how we can fake an echo command that does not interpret backslash. # In particular, with Autoconf 2.60 or later we add some code to the start # of the generated configure script that will find a shell with a builtin # printf (that we can use as an echo command). m4_defun([_LT_PROG_ECHO_BACKSLASH], [ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO AC_MSG_CHECKING([how to print strings]) # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $[]1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. 
func_echo_all () { $ECHO "$*" } case $ECHO in printf*) AC_MSG_RESULT([printf]) ;; print*) AC_MSG_RESULT([print -r]) ;; *) AC_MSG_RESULT([cat]) ;; esac m4_ifdef([_AS_DETECT_SUGGESTED], [_AS_DETECT_SUGGESTED([ test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test "X`printf %s $ECHO`" = "X$ECHO" \ || test "X`print -r -- $ECHO`" = "X$ECHO" )])]) _LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts]) _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes]) ])# _LT_PROG_ECHO_BACKSLASH # _LT_WITH_SYSROOT # ---------------- AC_DEFUN([_LT_WITH_SYSROOT], [m4_require([_LT_DECL_SED])dnl AC_MSG_CHECKING([for sysroot]) AC_ARG_WITH([sysroot], [AS_HELP_STRING([--with-sysroot@<:@=DIR@:>@], [Search for dependent libraries within DIR (or the compiler's sysroot if not specified).])], [], [with_sysroot=no]) dnl lt_sysroot will always be passed unquoted. We quote it here dnl in case the user passed a directory name. lt_sysroot= case $with_sysroot in #( yes) if test yes = "$GCC"; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | $SED -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) AC_MSG_RESULT([$with_sysroot]) AC_MSG_ERROR([The sysroot must be an absolute path.]) ;; esac AC_MSG_RESULT([${lt_sysroot:-no}]) _LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl [dependent libraries, and where our libraries should be installed.])]) # _LT_ENABLE_LOCK # --------------- m4_defun([_LT_ENABLE_LOCK], [AC_ARG_ENABLE([libtool-lock], [AS_HELP_STRING([--disable-libtool-lock], [avoid locking (might break parallel builds)])]) test no = "$enable_libtool_lock" || enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out what ABI is being produced by ac_compile, and set mode # options accordingly. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `$FILECMD conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE=32 ;; *ELF-64*) HPUX_IA64_MODE=64 ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then if test yes = "$lt_cv_prog_gnu_ld"; then case `$FILECMD conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `$FILECMD conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; mips64*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. 
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then emul=elf case `$FILECMD conftest.$ac_objext` in *32-bit*) emul="${emul}32" ;; *64-bit*) emul="${emul}64" ;; esac case `$FILECMD conftest.$ac_objext` in *MSB*) emul="${emul}btsmip" ;; *LSB*) emul="${emul}ltsmip" ;; esac case `$FILECMD conftest.$ac_objext` in *N32*) emul="${emul}n32" ;; esac LD="${LD-ld} -m $emul" fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. Note that the listed cases only cover the # situations where additional linker options are needed (such as when # doing 32-bit compilation for a host where ld defaults to 64-bit, or # vice versa); the common cases where no linker options are needed do # not appear in the list. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `$FILECMD conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) case `$FILECMD conftest.o` in *x86-64*) LD="${LD-ld} -m elf32_x86_64" ;; *) LD="${LD-ld} -m elf_i386" ;; esac ;; powerpc64le-*linux*) LD="${LD-ld} -m elf32lppclinux" ;; powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; powerpcle-*linux*) LD="${LD-ld} -m elf64lppc" ;; powerpc-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS=$CFLAGS CFLAGS="$CFLAGS -belf" AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf, [AC_LANG_PUSH(C) AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no]) AC_LANG_POP]) if test yes != "$lt_cv_cc_needs_belf"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS=$SAVE_CFLAGS fi ;; *-*solaris*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `$FILECMD conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*|x86_64-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD=${LD-ld}_sol2 fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks=$enable_libtool_lock ])# _LT_ENABLE_LOCK # _LT_PROG_AR # ----------- m4_defun([_LT_PROG_AR], [AC_CHECK_TOOLS(AR, [ar], false) : ${AR=ar} _LT_DECL([], [AR], [1], [The archiver]) # Use ARFLAGS variable as AR's operation code to sync the variable naming with # Automake. If both AR_FLAGS and ARFLAGS are specified, AR_FLAGS should have # higher priority because thats what people were doing historically (setting # ARFLAGS for automake and AR_FLAGS for libtool). FIXME: Make the AR_FLAGS # variable obsoleted/removed. 
test ${AR_FLAGS+y} || AR_FLAGS=${ARFLAGS-cr} lt_ar_flags=$AR_FLAGS _LT_DECL([], [lt_ar_flags], [0], [Flags to create an archive (by configure)]) # Make AR_FLAGS overridable by 'make ARFLAGS='. Don't try to run-time override # by AR_FLAGS because that was never working and AR_FLAGS is about to die. _LT_DECL([], [AR_FLAGS], [\@S|@{ARFLAGS-"\@S|@lt_ar_flags"}], [Flags to create an archive]) AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file], [lt_cv_ar_at_file=no AC_COMPILE_IFELSE([AC_LANG_PROGRAM], [echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD' AC_TRY_EVAL([lt_ar_try]) if test 0 -eq "$ac_status"; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a AC_TRY_EVAL([lt_ar_try]) if test 0 -ne "$ac_status"; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a ]) ]) if test no = "$lt_cv_ar_at_file"; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi _LT_DECL([], [archiver_list_spec], [1], [How to feed a file listing to the archiver]) ])# _LT_PROG_AR # _LT_CMD_OLD_ARCHIVE # ------------------- m4_defun([_LT_CMD_OLD_ARCHIVE], [_LT_PROG_AR AC_CHECK_TOOL(STRIP, strip, :) test -z "$STRIP" && STRIP=: _LT_DECL([], [STRIP], [1], [A symbol stripping program]) AC_CHECK_TOOL(RANLIB, ranlib, :) test -z "$RANLIB" && RANLIB=: _LT_DECL([], [RANLIB], [1], [Commands used to install an old-style archive]) # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in bitrig* | openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac _LT_DECL([], [old_postinstall_cmds], [2]) _LT_DECL([], [old_postuninstall_cmds], [2]) _LT_TAGDECL([], [old_archive_cmds], [2], [Commands used to build an old-style archive]) _LT_DECL([], [lock_old_archive_extraction], [0], [Whether to use a lock for old archive extraction]) ])# _LT_CMD_OLD_ARCHIVE # _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------------------- # Check whether the given compiler option works AC_DEFUN([_LT_COMPILER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl AC_CACHE_CHECK([$1], [$2], [$2=no m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4]) echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$3" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? 
= $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi fi $RM conftest* ]) if test yes = "[$]$2"; then m4_if([$5], , :, [$5]) else m4_if([$6], , :, [$6]) fi ])# _LT_COMPILER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], []) # _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------- # Check whether the given linker option works AC_DEFUN([_LT_LINKER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl AC_CACHE_CHECK([$1], [$2], [$2=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS $3" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&AS_MESSAGE_LOG_FD $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi else $2=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS ]) if test yes = "[$]$2"; then m4_if([$4], , :, [$4]) else m4_if([$5], , :, [$5]) fi ])# _LT_LINKER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], []) # LT_CMD_MAX_LEN #--------------- AC_DEFUN([LT_CMD_MAX_LEN], [AC_REQUIRE([AC_CANONICAL_HOST])dnl # find the maximum length of command line arguments AC_MSG_CHECKING([the maximum length of command line arguments]) AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl i=0 teststring=ABCD case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. 
lt_cv_sys_max_cmd_len=8192; ;; bitrig* | darwin* | dragonfly* | freebsd* | midnightbsd* | netbsd* | openbsd*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | $SED 's/.*[[ ]]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len" && \ test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test X`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test 17 != "$i" # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. 
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac ]) if test -n "$lt_cv_sys_max_cmd_len"; then AC_MSG_RESULT($lt_cv_sys_max_cmd_len) else AC_MSG_RESULT(none) fi max_cmd_len=$lt_cv_sys_max_cmd_len _LT_DECL([], [max_cmd_len], [0], [What is the maximum length of a command?]) ])# LT_CMD_MAX_LEN # Old name: AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], []) # _LT_HEADER_DLFCN # ---------------- m4_defun([_LT_HEADER_DLFCN], [AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl ])# _LT_HEADER_DLFCN # _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE, # ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING) # ---------------------------------------------------------------- m4_defun([_LT_TRY_DLOPEN_SELF], [m4_require([_LT_HEADER_DLFCN])dnl if test yes = "$cross_compiling"; then : [$4] else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF [#line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include <dlfcn.h> #endif #include <stdio.h> #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; }] _LT_EOF if AC_TRY_EVAL(ac_link) && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) $1 ;; x$lt_dlneed_uscore) $2 ;; x$lt_dlunknown|x*) $3 ;; esac else : # compilation failed $3 fi fi rm -fr conftest* ])# _LT_TRY_DLOPEN_SELF # LT_SYS_DLOPEN_SELF # ------------------ AC_DEFUN([LT_SYS_DLOPEN_SELF], [m4_require([_LT_HEADER_DLFCN])dnl if test yes != "$enable_dlopen"; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen=load_add_on lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen=LoadLibrary lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen=dlopen lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it AC_CHECK_LIB([dl], [dlopen], [lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],[ lt_cv_dlopen=dyld lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ]) ;; tpf*) # Don't try to run any link tests for TPF. We know it's impossible # because TPF is a cross-compiler, and we know how we open DSOs.
lt_cv_dlopen=dlopen lt_cv_dlopen_libs= lt_cv_dlopen_self=no ;; *) AC_CHECK_FUNC([shl_load], [lt_cv_dlopen=shl_load], [AC_CHECK_LIB([dld], [shl_load], [lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld], [AC_CHECK_FUNC([dlopen], [lt_cv_dlopen=dlopen], [AC_CHECK_LIB([dl], [dlopen], [lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl], [AC_CHECK_LIB([svld], [dlopen], [lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld], [AC_CHECK_LIB([dld], [dld_link], [lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld]) ]) ]) ]) ]) ]) ;; esac if test no = "$lt_cv_dlopen"; then enable_dlopen=no else enable_dlopen=yes fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS=$CPPFLAGS test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS=$LDFLAGS wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS=$LIBS LIBS="$lt_cv_dlopen_libs $LIBS" AC_CACHE_CHECK([whether a program can dlopen itself], lt_cv_dlopen_self, [dnl _LT_TRY_DLOPEN_SELF( lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes, lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross) ]) if test yes = "$lt_cv_dlopen_self"; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" AC_CACHE_CHECK([whether a statically linked program can dlopen itself], lt_cv_dlopen_self_static, [dnl _LT_TRY_DLOPEN_SELF( lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross) ]) fi CPPFLAGS=$save_CPPFLAGS LDFLAGS=$save_LDFLAGS LIBS=$save_LIBS ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi _LT_DECL([dlopen_support], [enable_dlopen], [0], [Whether dlopen is supported]) _LT_DECL([dlopen_self], [enable_dlopen_self], [0], [Whether dlopen of programs is supported]) _LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0], [Whether dlopen of statically linked programs is supported]) ])# LT_SYS_DLOPEN_SELF # Old name: AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], []) # _LT_COMPILER_C_O([TAGNAME]) # --------------------------- # Check to see if options -c and -o are simultaneously supported by compiler. # This macro does not hard code the compiler like AC_PROG_CC_C_O. m4_defun([_LT_COMPILER_C_O], [m4_require([_LT_DECL_SED])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_TAG_COMPILER])dnl AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? 
= $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes fi fi chmod u+w . 2>&AS_MESSAGE_LOG_FD $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* ]) _LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1], [Does compiler simultaneously support -c and -o options?]) ])# _LT_COMPILER_C_O # _LT_COMPILER_FILE_LOCKS([TAGNAME]) # ---------------------------------- # Check to see if we can do hard links to lock some files if needed m4_defun([_LT_COMPILER_FILE_LOCKS], [m4_require([_LT_ENABLE_LOCK])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_COMPILER_C_O([$1]) hard_links=nottested if test no = "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" && test no != "$need_locks"; then # do not overwrite the value of need_locks provided by the user AC_MSG_CHECKING([if we can lock with hard links]) hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no AC_MSG_RESULT([$hard_links]) if test no = "$hard_links"; then AC_MSG_WARN(['$CC' does not support '-c -o', so 'make -j' may be unsafe]) need_locks=warn fi else need_locks=no fi _LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?]) ])# _LT_COMPILER_FILE_LOCKS # _LT_CHECK_OBJDIR # ---------------- m4_defun([_LT_CHECK_OBJDIR], [AC_CACHE_CHECK([for objdir], [lt_cv_objdir], [rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null]) objdir=$lt_cv_objdir _LT_DECL([], [objdir], [0], [The name of the directory that contains temporary libtool files])dnl m4_pattern_allow([LT_OBJDIR])dnl AC_DEFINE_UNQUOTED([LT_OBJDIR], "$lt_cv_objdir/", [Define to the sub-directory where libtool stores uninstalled libraries.]) ])# _LT_CHECK_OBJDIR # _LT_LINKER_HARDCODE_LIBPATH([TAGNAME]) # -------------------------------------- # Check hardcoding attributes. m4_defun([_LT_LINKER_HARDCODE_LIBPATH], [AC_MSG_CHECKING([how to hardcode library paths into programs]) _LT_TAGVAR(hardcode_action, $1)= if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" || test -n "$_LT_TAGVAR(runpath_var, $1)" || test yes = "$_LT_TAGVAR(hardcode_automatic, $1)"; then # We can hardcode non-existent directories. if test no != "$_LT_TAGVAR(hardcode_direct, $1)" && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" && test no != "$_LT_TAGVAR(hardcode_minus_L, $1)"; then # Linking always hardcodes the temporary library directory. _LT_TAGVAR(hardcode_action, $1)=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. 
_LT_TAGVAR(hardcode_action, $1)=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. _LT_TAGVAR(hardcode_action, $1)=unsupported fi AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)]) if test relink = "$_LT_TAGVAR(hardcode_action, $1)" || test yes = "$_LT_TAGVAR(inherit_rpath, $1)"; then # Fast installation is not supported enable_fast_install=no elif test yes = "$shlibpath_overrides_runpath" || test no = "$enable_shared"; then # Fast installation is not necessary enable_fast_install=needless fi _LT_TAGDECL([], [hardcode_action], [0], [How to hardcode a shared library path into an executable]) ])# _LT_LINKER_HARDCODE_LIBPATH # _LT_CMD_STRIPLIB # ---------------- m4_defun([_LT_CMD_STRIPLIB], [m4_require([_LT_DECL_EGREP]) striplib= old_striplib= AC_MSG_CHECKING([whether stripping libraries is possible]) if test -z "$STRIP"; then AC_MSG_RESULT([no]) else if $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then old_striplib="$STRIP --strip-debug" striplib="$STRIP --strip-unneeded" AC_MSG_RESULT([yes]) else case $host_os in darwin*) # FIXME - insert some real tests, host_os isn't really good enough striplib="$STRIP -x" old_striplib="$STRIP -S" AC_MSG_RESULT([yes]) ;; freebsd*) if $STRIP -V 2>&1 | $GREP "elftoolchain" >/dev/null; then old_striplib="$STRIP --strip-debug" striplib="$STRIP --strip-unneeded" AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi ;; *) AC_MSG_RESULT([no]) ;; esac fi fi _LT_DECL([], [old_striplib], [1], [Commands to strip libraries]) _LT_DECL([], [striplib], [1]) ])# _LT_CMD_STRIPLIB # _LT_PREPARE_MUNGE_PATH_LIST # --------------------------- # Make sure func_munge_path_list() is defined correctly. m4_defun([_LT_PREPARE_MUNGE_PATH_LIST], [[# func_munge_path_list VARIABLE PATH # ----------------------------------- # VARIABLE is name of variable containing _space_ separated list of # directories to be munged by the contents of PATH, which is string # having a format: # "DIR[:DIR]:" # string "DIR[ DIR]" will be prepended to VARIABLE # ":DIR[:DIR]" # string "DIR[ DIR]" will be appended to VARIABLE # "DIRP[:DIRP]::[DIRA:]DIRA" # string "DIRP[ DIRP]" will be prepended to VARIABLE and string # "DIRA[ DIRA]" will be appended to VARIABLE # "DIR[:DIR]" # VARIABLE will be replaced by "DIR[ DIR]" func_munge_path_list () { case x@S|@2 in x) ;; *:) eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'` \@S|@@S|@1\" ;; x:*) eval @S|@1=\"\@S|@@S|@1 `$ECHO @S|@2 | $SED 's/:/ /g'`\" ;; *::*) eval @S|@1=\"\@S|@@S|@1\ `$ECHO @S|@2 | $SED -e 's/.*:://' -e 's/:/ /g'`\" eval @S|@1=\"`$ECHO @S|@2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \@S|@@S|@1\" ;; *) eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'`\" ;; esac } ]])# _LT_PREPARE_PATH_LIST # _LT_SYS_DYNAMIC_LINKER([TAG]) # ----------------------------- # PORTME Fill in your ld.so characteristics m4_defun([_LT_SYS_DYNAMIC_LINKER], [AC_REQUIRE([AC_CANONICAL_HOST])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_OBJDUMP])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl m4_require([_LT_PREPARE_MUNGE_PATH_LIST])dnl AC_MSG_CHECKING([dynamic linker characteristics]) m4_if([$1], [], [ if test yes = "$GCC"; then case $host_os in darwin*) lt_awk_arg='/^libraries:/,/LR/' ;; *) lt_awk_arg='/^libraries:/' ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq='s|=\([[A-Za-z]]:\)|\1|g' ;; *) lt_sed_strip_eq='s|=/|/|g' ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case 
$lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary... lt_tmp_lt_search_path_spec= lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` # ...but if some path component already ends with the multilib dir we assume # that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer). case "$lt_multi_os_dir; $lt_search_path_spec " in "/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*) lt_multi_os_dir= ;; esac for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir" elif test -n "$lt_multi_os_dir"; then test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS = " "; FS = "/|\n";} { lt_foo = ""; lt_count = 0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo = "/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[[lt_foo]]++; } if (lt_freq[[lt_foo]] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's|/\([[A-Za-z]]:\)|\1|g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi]) library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=.so postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown AC_ARG_VAR([LT_SYS_LIBRARY_PATH], [User-defined run-time library search path.]) case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='$libname$release$shared_ext$major' ;; aix[[4-9]]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test ia64 = "$host_cpu"; then # AIX 5 supports IA64 library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line '#! .'. This would cause the generated library to # depend on '.', always an invalid library. 
This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[[01]] | aix4.[[01]].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # Using Import Files as archive members, it is possible to support # filename-based versioning of shared library archives on AIX. While # this would work for both with and without runtime linking, it will # prevent static linking of such archives. So we do filename-based # shared library versioning with .so extension only, which is used # when both runtime linking and shared linking is enabled. # Unfortunately, runtime linking may impact performance, so we do # not want this to be the default eventually. Also, we use the # versioned .so libs for executables only if there is the -brtl # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only. # To allow for filename-based versioning support, we need to create # libNAME.so.V as an archive file, containing: # *) an Import File, referring to the versioned filename of the # archive as well as the shared archive member, telling the # bitwidth (32 or 64) of that shared object, and providing the # list of exported symbols of that shared object, eventually # decorated with the 'weak' keyword # *) the shared object with the F_LOADONLY flag set, to really avoid # it being seen by the linker. # At run time we better use the real file rather than another symlink, # but for link time we create the symlink libNAME.so -> libNAME.so.V case $with_aix_soname,$aix_use_runtimelinking in # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. aix,yes) # traditional libtool dynamic_linker='AIX unversionable lib.so' # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; aix,no) # traditional AIX only dynamic_linker='AIX lib.a[(]lib.so.V[)]' # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' ;; svr4,*) # full svr4 only dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)]" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # We do not specify a path in Import Files, so LIBPATH fires. shlibpath_overrides_runpath=yes ;; *,yes) # both, prefer svr4 dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)], lib.a[(]lib.so.V[)]" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # unpreferred sharedlib libNAME.a needs extra handling postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"' postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"' # We do not specify a path in Import Files, so LIBPATH fires. 
shlibpath_overrides_runpath=yes ;; *,no) # both, prefer aix dynamic_linker="AIX lib.a[(]lib.so.V[)], lib.so.V[(]$shared_archive_member_spec.o[)]" library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)' postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"' ;; esac shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='$libname$shared_ext' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[[45]]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo $libname | $SED -e 's/^lib/cyg/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"]) ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo $libname | $SED -e 's/^lib/pw/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl* | *,icl*) # Native MSVC or ICC libname_spec='$name' soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' library_names_spec='$libname.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec=$LIB if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC and ICC wrapper library_names_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='$libname$release$major$shared_ext $libname$shared_ext' soname_spec='$libname$release$major$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"]) sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly* | midnightbsd*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[[23]].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[[01]]* | freebsdelf3.[[01]]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \ freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=no sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' if test 32 = "$HPUX_IA64_MODE"; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" sys_lib_dlsearch_path_spec=/usr/lib/hpux32 else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" sys_lib_dlsearch_path_spec=/usr/lib/hpux64 fi ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[[3-9]]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test yes = "$lt_cv_prog_gnu_ld"; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff" sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; linux*android*) version_type=none # Android doesn't support versioned libraries. need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext' soname_spec='$libname$release$shared_ext' finish_cmds= shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes dynamic_linker='Android linker' # Don't embed -rpath directories since the linker doesn't support them. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath], [lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \ LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\"" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null], [lt_cv_shlibpath_overrides_runpath=yes])]) LDFLAGS=$save_LDFLAGS libdir=$save_libdir ]) shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Ideally, we could use ldconfig to report *all* directores which are # searched for libraries, however this is still not possible. Aside from not # being certain /sbin/ldconfig is available, command # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64, # even though it is searched at run-time. Try to do the best guess by # appending ld.so.conf contents (and includes) to the search path. if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
dynamic_linker='GNU/Linux ld.so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd* | bitrig*) version_type=sunos sys_lib_dlsearch_path_spec=/usr/lib need_lib_prefix=no if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then need_version=no else need_version=yes fi library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; os2*) libname_spec='$name' version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no # OS/2 can only load a DLL with a base name of 8 characters or less. soname_spec='`test -n "$os2dllname" && libname="$os2dllname"; v=$($ECHO $release$versuffix | tr -d .-); n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _); $ECHO $n$v`$shared_ext' library_names_spec='${libname}_dll.$libext' dynamic_linker='OS/2 ld.exe' shlibpath_var=BEGINLIBPATH sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; $ECHO \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test yes = "$with_gnu_ld"; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec; then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext' soname_spec='$libname$shared_ext.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=sco need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test yes = "$with_gnu_ld"; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. 
version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac AC_MSG_RESULT([$dynamic_linker]) test no = "$dynamic_linker" && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test yes = "$GCC"; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec fi if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec fi # remember unaugmented sys_lib_dlsearch_path content for libtool script decls... configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec # ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH" # to be used as default LT_SYS_LIBRARY_PATH value in generated libtool configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH _LT_DECL([], [variables_saved_for_relink], [1], [Variables whose values should be saved in libtool wrapper scripts and restored at link time]) _LT_DECL([], [need_lib_prefix], [0], [Do we need the "lib" prefix for modules?]) _LT_DECL([], [need_version], [0], [Do we need a version for libraries?]) _LT_DECL([], [version_type], [0], [Library versioning type]) _LT_DECL([], [runpath_var], [0], [Shared library runtime path variable]) _LT_DECL([], [shlibpath_var], [0],[Shared library path variable]) _LT_DECL([], [shlibpath_overrides_runpath], [0], [Is shlibpath searched before the hard-coded library search path?]) _LT_DECL([], [libname_spec], [1], [Format of library name prefix]) _LT_DECL([], [library_names_spec], [1], [[List of archive names. First name is the real one, the rest are links. 
The last name is the one that the linker finds with -lNAME]]) _LT_DECL([], [soname_spec], [1], [[The coded name of the library, if different from the real name]]) _LT_DECL([], [install_override_mode], [1], [Permission mode override for installation of shared libraries]) _LT_DECL([], [postinstall_cmds], [2], [Command to use after installation of a shared archive]) _LT_DECL([], [postuninstall_cmds], [2], [Command to use after uninstallation of a shared archive]) _LT_DECL([], [finish_cmds], [2], [Commands used to finish a libtool library installation in a directory]) _LT_DECL([], [finish_eval], [1], [[As "finish_cmds", except a single script fragment to be evaled but not shown]]) _LT_DECL([], [hardcode_into_libs], [0], [Whether we should hardcode library paths into libraries]) _LT_DECL([], [sys_lib_search_path_spec], [2], [Compile-time system search path for libraries]) _LT_DECL([sys_lib_dlsearch_path_spec], [configure_time_dlsearch_path], [2], [Detected run-time system search path for libraries]) _LT_DECL([], [configure_time_lt_sys_library_path], [2], [Explicit LT_SYS_LIBRARY_PATH set during ./configure time]) ])# _LT_SYS_DYNAMIC_LINKER # _LT_PATH_TOOL_PREFIX(TOOL) # -------------------------- # find a file program that can recognize shared library AC_DEFUN([_LT_PATH_TOOL_PREFIX], [m4_require([_LT_DECL_EGREP])dnl AC_MSG_CHECKING([for $1]) AC_CACHE_VAL(lt_cv_path_MAGIC_CMD, [case $MAGIC_CMD in [[\\/*] | ?:[\\/]*]) lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD=$MAGIC_CMD lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR dnl $ac_dummy forces splitting on constant user-supplied paths. dnl POSIX.2 word splitting is done only on the output of word expansions, dnl not every word. This closes a longstanding sh security hole. ac_dummy="m4_if([$2], , $PATH, [$2])" for ac_dir in $ac_dummy; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$1"; then lt_cv_path_MAGIC_CMD=$ac_dir/"$1" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. 
Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS=$lt_save_ifs MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac]) MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then AC_MSG_RESULT($MAGIC_CMD) else AC_MSG_RESULT(no) fi _LT_DECL([], [MAGIC_CMD], [0], [Used to examine libraries when file_magic_cmd begins with "file"])dnl ])# _LT_PATH_TOOL_PREFIX # Old name: AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], []) # _LT_PATH_MAGIC # -------------- # find a file program that can recognize a shared library m4_defun([_LT_PATH_MAGIC], [_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH) if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then _LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH) else MAGIC_CMD=: fi fi ])# _LT_PATH_MAGIC # LT_PATH_LD # ---------- # find the pathname to the GNU or non-GNU linker AC_DEFUN([LT_PATH_LD], [AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PROG_ECHO_BACKSLASH])dnl AC_ARG_WITH([gnu-ld], [AS_HELP_STRING([--with-gnu-ld], [assume the C compiler uses GNU ld @<:@default=no@:>@])], [test no = "$withval" || with_gnu_ld=yes], [with_gnu_ld=no])dnl ac_prog=ld if test yes = "$GCC"; then # Check if gcc -print-prog-name=ld gives a path. AC_MSG_CHECKING([for ld used by $CC]) case $host in *-*-mingw*) # gcc leaves a trailing carriage return, which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [[\\/]]* | ?:[[\\/]]*) re_direlt='/[[^/]][[^/]]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD=$ac_prog ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test yes = "$with_gnu_ld"; then AC_MSG_CHECKING([for GNU ld]) else AC_MSG_CHECKING([for non-GNU ld]) fi AC_CACHE_VAL(lt_cv_path_LD, [if test -z "$LD"; then lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD=$ac_dir/$ac_prog # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. 
case `"$lt_cv_path_LD" -v 2>&1 &1 conftest.i cat conftest.i conftest.i >conftest2.i : ${lt_DD:=$DD} AC_PATH_PROGS_FEATURE_CHECK([lt_DD], [dd], [if "$ac_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=: fi]) rm -f conftest.i conftest2.i conftest.out]) ])# _LT_PATH_DD # _LT_CMD_TRUNCATE # ---------------- # find command to truncate a binary pipe m4_defun([_LT_CMD_TRUNCATE], [m4_require([_LT_PATH_DD]) AC_CACHE_CHECK([how to truncate binary pipes], [lt_cv_truncate_bin], [printf 0123456789abcdef0123456789abcdef >conftest.i cat conftest.i conftest.i >conftest2.i lt_cv_truncate_bin= if "$ac_cv_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1" fi rm -f conftest.i conftest2.i conftest.out test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q"]) _LT_DECL([lt_truncate_bin], [lt_cv_truncate_bin], [1], [Command to truncate a binary pipe]) ])# _LT_CMD_TRUNCATE # _LT_CHECK_MAGIC_METHOD # ---------------------- # how to check for library dependencies # -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_MAGIC_METHOD], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) AC_CACHE_CHECK([how to recognize dependent libraries], lt_cv_deplibs_check_method, [lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # 'unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # that responds to the $file_magic_cmd with a given extended regex. # If you have 'file' or equivalent on your system and you're not sure # whether 'pass_all' will *always* work, you probably want this one. case $host_os in aix[[4-9]]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[[45]]*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='$FILECMD -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. if ( file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' 
lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly* | midnightbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=$FILECMD lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=$FILECMD case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]'] lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[[3-9]]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=$FILECMD lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd* | bitrig*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; os2*) lt_cv_deplibs_check_method=pass_all ;; esac ]) 
file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown _LT_DECL([], [deplibs_check_method], [1], [Method to check whether dependent libraries are shared objects]) _LT_DECL([], [file_magic_cmd], [1], [Command to use when deplibs_check_method = "file_magic"]) _LT_DECL([], [file_magic_glob], [1], [How to find potential files when deplibs_check_method = "file_magic"]) _LT_DECL([], [want_nocaseglob], [1], [Find potential files using nocaseglob when deplibs_check_method = "file_magic"]) ])# _LT_CHECK_MAGIC_METHOD # LT_PATH_NM # ---------- # find the pathname to a BSD- or MS-compatible name lister AC_DEFUN([LT_PATH_NM], [AC_REQUIRE([AC_PROG_CC])dnl AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM, [if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM=$NM else lt_nm_to_check=${ac_tool_prefix}nm if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. tmp_nm=$ac_dir/$lt_tmp_nm if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then # Check to see if the nm accepts a BSD-compat flag. # Adding the 'sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file # MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty case $build_os in mingw*) lt_bad_file=conftest.nm/nofile ;; *) lt_bad_file=/dev/null ;; esac case `"$tmp_nm" -B $lt_bad_file 2>&1 | $SED '1q'` in *$lt_bad_file* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break 2 ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | $SED '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break 2 ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS=$lt_save_ifs done : ${lt_cv_path_NM=no} fi]) if test no != "$lt_cv_path_NM"; then NM=$lt_cv_path_NM else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. 
else AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :) case `$DUMPBIN -symbols -headers /dev/null 2>&1 | $SED '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols -headers" ;; *) DUMPBIN=: ;; esac fi AC_SUBST([DUMPBIN]) if test : != "$DUMPBIN"; then NM=$DUMPBIN fi fi test -z "$NM" && NM=nm AC_SUBST([NM]) _LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface], [lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD) cat conftest.out >&AS_MESSAGE_LOG_FD if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest*]) ])# LT_PATH_NM # Old names: AU_ALIAS([AM_PROG_NM], [LT_PATH_NM]) AU_ALIAS([AC_PROG_NM], [LT_PATH_NM]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_PROG_NM], []) dnl AC_DEFUN([AC_PROG_NM], []) # _LT_CHECK_SHAREDLIB_FROM_LINKLIB # -------------------------------- # how to determine the name of the shared library # associated with a specific link library. # -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) m4_require([_LT_DECL_DLLTOOL]) AC_CACHE_CHECK([how to associate runtime and link libraries], lt_cv_sharedlib_from_linklib_cmd, [lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh; # decide which one to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd=$ECHO ;; esac ]) sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO _LT_DECL([], [sharedlib_from_linklib_cmd], [1], [Command to associate shared and link libraries]) ])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB # _LT_PATH_MANIFEST_TOOL # ---------------------- # locate the manifest tool m4_defun([_LT_PATH_MANIFEST_TOOL], [AC_CHECK_TOOL(MANIFEST_TOOL, mt, :) test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool], [lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&AS_MESSAGE_LOG_FD if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest*]) if test yes != "$lt_cv_path_mainfest_tool"; then MANIFEST_TOOL=: fi _LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl ])# _LT_PATH_MANIFEST_TOOL # _LT_DLL_DEF_P([FILE]) # --------------------- # True iff FILE is a Windows DLL '.def' file. 
# Keep in sync with func_dll_def_p in the libtool script AC_DEFUN([_LT_DLL_DEF_P], [dnl test DEF = "`$SED -n dnl -e '\''s/^[[ ]]*//'\'' dnl Strip leading whitespace -e '\''/^\(;.*\)*$/d'\'' dnl Delete empty lines and comments -e '\''s/^\(EXPORTS\|LIBRARY\)\([[ ]].*\)*$/DEF/p'\'' dnl -e q dnl Only consider the first "real" line $1`" dnl ])# _LT_DLL_DEF_P # LT_LIB_M # -------- # check for math library AC_DEFUN([LT_LIB_M], [AC_REQUIRE([AC_CANONICAL_HOST])dnl LIBM= case $host in *-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*) # These system don't have libm, or don't need it ;; *-ncr-sysv4.3*) AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM=-lmw) AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm") ;; *) AC_CHECK_LIB(m, cos, LIBM=-lm) ;; esac AC_SUBST([LIBM]) ])# LT_LIB_M # Old name: AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_CHECK_LIBM], []) # _LT_COMPILER_NO_RTTI([TAGNAME]) # ------------------------------- m4_defun([_LT_COMPILER_NO_RTTI], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= if test yes = "$GCC"; then case $cc_basename in nvcc*) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;; *) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;; esac _LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions], lt_cv_prog_compiler_rtti_exceptions, [-fno-rtti -fno-exceptions], [], [_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"]) fi _LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1], [Compiler flag to turn off builtin functions]) ])# _LT_COMPILER_NO_RTTI # _LT_CMD_GLOBAL_SYMBOLS # ---------------------- m4_defun([_LT_CMD_GLOBAL_SYMBOLS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([LT_PATH_NM])dnl AC_REQUIRE([LT_PATH_LD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_TAG_COMPILER])dnl # Check for command to grab the raw symbol name followed by C symbol from nm. AC_MSG_CHECKING([command to parse $NM output from $compiler object]) AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [ # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[[BCDEGRST]]' # Regexp to match symbols that can be accessed directly from C. sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[[BCDT]]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[[ABCDGISTW]]' ;; hpux*) if test ia64 = "$host_cpu"; then symcode='[[ABCDEGRST]]' fi ;; irix* | nonstopux*) symcode='[[BCDEGRST]]' ;; osf*) symcode='[[BCDEGQRST]]' ;; solaris*) symcode='[[BDRT]]' ;; sco3.2v5*) symcode='[[DT]]' ;; sysv4.2uw2*) symcode='[[DT]]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[[ABDT]]' ;; sysv4) symcode='[[DFNSTU]]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[[ABCDGIRSTW]]' ;; esac if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Gets list of data symbols to import. lt_cv_sys_global_symbol_to_import="$SED -n -e 's/^I .* \(.*\)$/\1/p'" # Adjust the below global symbol transforms to fixup imported variables. 
lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'" lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'" lt_c_name_lib_hook="\ -e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\ -e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'" else # Disable hooks by default. lt_cv_sys_global_symbol_to_import= lt_cdecl_hook= lt_c_name_hook= lt_c_name_lib_hook= fi # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="$SED -n"\ $lt_cdecl_hook\ " -e 's/^T .* \(.*\)$/extern int \1();/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="$SED -n"\ $lt_c_name_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'" # Transform an extracted symbol line into symbol name with lib prefix and # symbol address. lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="$SED -n"\ $lt_c_name_lib_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function, # D for any global variable and I for any imported variable. # Also find C++ and __fastcall symbols from MSVC++ or ICC, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK ['"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\ " /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\ " /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\ " {split(\$ 0,a,/\||\r/); split(a[2],s)};"\ " s[1]~/^[@?]/{print f,s[1],s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx]" else lt_cv_sys_global_symbol_pipe="$SED -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | $SED '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if AC_TRY_EVAL(ac_compile); then # Now try to grab the symbols. nlist=conftest.nm if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then # Try sorting and uniquifying the output. 
if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT@&t@_DLSYM_CONST #elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT@&t@_DLSYM_CONST #else # define LT@&t@_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT@&t@_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[[]] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS=conftstm.$ac_objext CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)" if AC_TRY_EVAL(ac_link) && test -s conftest$ac_exeext; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD fi else echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test yes = "$pipe_works"; then break else lt_cv_sys_global_symbol_pipe= fi done ]) if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then AC_MSG_RESULT(failed) else AC_MSG_RESULT(ok) fi # Response file support. 
if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then nm_file_list_spec='@' fi _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1], [Take the output of nm and produce a listing of raw symbols and C names]) _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1], [Transform the output of nm in a proper C declaration]) _LT_DECL([global_symbol_to_import], [lt_cv_sys_global_symbol_to_import], [1], [Transform the output of nm into a list of symbols to manually relocate]) _LT_DECL([global_symbol_to_c_name_address], [lt_cv_sys_global_symbol_to_c_name_address], [1], [Transform the output of nm in a C name address pair]) _LT_DECL([global_symbol_to_c_name_address_lib_prefix], [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1], [Transform the output of nm in a C name address pair when lib prefix is needed]) _LT_DECL([nm_interface], [lt_cv_nm_interface], [1], [The name lister interface]) _LT_DECL([], [nm_file_list_spec], [1], [Specify filename containing input files for $NM]) ]) # _LT_CMD_GLOBAL_SYMBOLS # _LT_COMPILER_PIC([TAGNAME]) # --------------------------- m4_defun([_LT_COMPILER_PIC], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_wl, $1)= _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)= m4_if([$1], [CXX], [ # C++ specific cases for pic, static, wl, etc. if test yes = "$GXX"; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the '-m68020' flag to GCC prevents building anything better, # like '-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) case $host_os in os2*) _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static' ;; esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. 
case $host_cpu in hppa*64*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac else case $host_os in aix[[4-9]]*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; dgux*) case $cc_basename in ec++*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; ghcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; freebsd* | dragonfly* | midnightbsd*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive' if test ia64 != "$host_cpu"; then _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' fi ;; aCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # KAI C++ Compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64, which still supported -KPIC. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
_LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL 8.0, 9.0 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall' ;; *) ;; esac ;; netbsd*) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; cxx*) # Digital/Compaq C++ _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; lcc*) # Lucid _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ], [ if test yes = "$GCC"; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the '-m68020' flag to GCC prevents building anything better, # like '-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
# Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) case $host_os in os2*) _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static' ;; esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker ' if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then _LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. case $host_os in aix*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' case $cc_basename in nagfor*) # NAG Fortran compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) case $host_os in os2*) _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static' ;; esac ;; hpux9* | hpux10* | hpux11*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC (with -KPIC) is the default. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64, which still supported -KPIC. 
ecc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # Lahey Fortran 8.1. lf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared' _LT_TAGVAR(lt_prog_compiler_static, $1)='--static' ;; nagfor*) # NAG Fortran compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; tcc*) # Fabrice Bellard et al's Tiny C Compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; ccc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All Alpha code is PIC. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='' ;; *Sun\ F* | *Sun*Fortran*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' ;; *Intel*\ [[CF]]*Compiler*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; *Portland\ Group*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; esac ;; newsos6) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All OSF/1 code is PIC. 
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; rdos*) _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; solaris*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';; *) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';; esac ;; sunos4*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; unicos*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; uts4*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ]) case $host_os in # For platforms that do not support PIC, -DPIC is meaningless: *djgpp*) _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])" ;; esac AC_CACHE_CHECK([for $compiler option to produce PIC], [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)], [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)]) _LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1) # # Check to make sure the PIC flag actually works. # if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then _LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works], [_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)], [$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [], [case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in "" | " "*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;; esac], [_LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no]) fi _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1], [Additional compiler flags for building library objects]) _LT_TAGDECL([wl], [lt_prog_compiler_wl], [1], [How to pass a linker flag through the compiler]) # # Check to make sure the static flag actually works. # wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\" _LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works], _LT_TAGVAR(lt_cv_prog_compiler_static_works, $1), $lt_tmp_static_flag, [], [_LT_TAGVAR(lt_prog_compiler_static, $1)=]) _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1], [Compiler flag to prevent dynamic linking]) ])# _LT_COMPILER_PIC # _LT_LINKER_SHLIBS([TAGNAME]) # ---------------------------- # See if the linker supports building shared libraries. 
m4_defun([_LT_LINKER_SHLIBS], [AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl m4_require([_LT_PATH_MANIFEST_TOOL])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_TAG_COMPILER])dnl AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) m4_if([$1], [CXX], [ _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'] case $host_os in aix[[4-9]]*) # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to GNU nm, but means don't demangle to AIX nm. # Without the "-l" option, or with the "-B" option, AIX nm treats # weak defined symbols like other global defined symbols, whereas # GNU nm marks them as "W". # While the 'weak' keyword is ignored in the Export File, we need # it in the Import File for the 'aix-soname' feature, so we have # to replace the "-B" option with "-P" for AIX nm. if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else _LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "L") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi ;; pw32*) _LT_TAGVAR(export_symbols_cmds, $1)=$ltdll_cmds ;; cygwin* | mingw* | cegcc*) case $cc_basename in cl* | icl*) _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' ;; *) _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'] ;; esac ;; *) _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' ;; esac ], [ runpath_var= _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_cmds, $1)= _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(compiler_needs_object, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= 
_LT_TAGVAR(old_archive_from_new_cmds, $1)= _LT_TAGVAR(old_archive_from_expsyms_cmds, $1)= _LT_TAGVAR(thread_safe_flag_spec, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list _LT_TAGVAR(include_expsyms, $1)= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ' (' and ')$', so one must not match beginning or # end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc', # as well as any symbol that contains 'd'. _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'] # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. dnl Note also adjust exclude_expsyms for C++ above. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ and ICC port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++ or Intel C++ Compiler. if test yes != "$GCC"; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++ or ICC) with_gnu_ld=yes ;; openbsd* | bitrig*) with_gnu_ld=no ;; esac _LT_TAGVAR(ld_shlibs, $1)=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test yes = "$with_gnu_ld"; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;; *\ \(GNU\ Binutils\)\ [[3-9]]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test yes = "$lt_use_gnu_ld_interface"; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='$wl' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then _LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else _LT_TAGVAR(whole_archive_flag_spec, $1)= fi supports_anon_versioning=no case `$LD -v | $SED -e 's/([[^)]]\+)\s\+//' 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. 
case $host_os in aix[[3-9]]*) # On AIX/PPC, the GNU linker is very broken if test ia64 != "$host_cpu"; then _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. _LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='' ;; m68k) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'] if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file, use it as # is; otherwise, prepend EXPORTS... 
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; haiku*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; os2*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=unsupported shrext_cmds=.dll _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' ;; interix[[3-9]]*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$SED "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test linux-dietlibc = "$host_os"; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test no = "$tmp_diet" then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 _LT_TAGVAR(whole_archive_flag_spec, $1)= tmp_sharedflag='--shared' ;; nagfor*) # NAGFOR 5.3 tmp_sharedflag='-Wl,-shared' ;; xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes ;; esac case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C 5.9 _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac _LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' if test yes = "$supports_anon_versioning"; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib' fi case $cc_basename in tcc*) _LT_TAGVAR(export_dynamic_flag_spec, $1)='-rdynamic' ;; xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience 
--no-whole-archive' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test yes = "$supports_anon_versioning"; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*) _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; sunos4*) _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac if test no = "$_LT_TAGVAR(ld_shlibs, $1)"; then runpath_var= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. _LT_TAGVAR(hardcode_minus_L, $1)=yes if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. _LT_TAGVAR(hardcode_direct, $1)=unsupported fi ;; aix[[4-9]]*) if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag= else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to GNU nm, but means don't demangle to AIX nm. # Without the "-l" option, or with the "-B" option, AIX nm treats # weak defined symbols like other global defined symbols, whereas # GNU nm marks them as "W". # While the 'weak' keyword is ignored in the Export File, we need # it in the Import File for the 'aix-soname' feature, so we have # to replace the "-B" option with "-P" for AIX nm. if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else _LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "L") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. 
If -brtl is somewhere in LDFLAGS, we # have runtime linking enabled, and use it for executables. # For shared libraries, we enable/disable runtime linking # depending on the kind of the shared library created - # when "with_aix_soname,aix_use_runtimelinking" is: # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables # "aix,yes" lib.so shared, rtl:yes, for executables # lib.a static archive # "both,no" lib.so.V(shr.o) shared, rtl:yes # lib.a(lib.so.V) shared, rtl:no, for executables # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a(lib.so.V) shared, rtl:no # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a static archive case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then aix_use_runtimelinking=yes break fi done if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then # With aix-soname=svr4, we create the lib.so.V shared archives only, # so we don't have lib.a shared libs to link our executables. # We have to force runtime linking in this case. aix_use_runtimelinking=yes LDFLAGS="$LDFLAGS -Wl,-brtl" fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. _LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(file_list_spec, $1)='$wl-f,' case $with_aix_soname,$aix_use_runtimelinking in aix,*) ;; # traditional, no import file svr4,* | *,yes) # use import file # The Import File defines what to hardcode. _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no ;; esac if test yes = "$GCC"; then case $host_os in aix4.[[012]]|aix4.[[012]].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 _LT_TAGVAR(hardcode_direct, $1)=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)= fi ;; esac shared_flag='-shared' if test yes = "$aix_use_runtimelinking"; then shared_flag="$shared_flag "'$wl-G' fi # Need to ensure runtime linking is disabled for the traditional # shared library, or the linker may eventually find shared libraries # /with/ Import File - we do not want to mix them. shared_flag_aix='-shared' shared_flag_svr4='-shared $wl-G' else # not using gcc if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. 
The following line is correct: shared_flag='-G' else if test yes = "$aix_use_runtimelinking"; then shared_flag='$wl-G' else shared_flag='$wl-bM:SRE' fi shared_flag_aix='$wl-bM:SRE' shared_flag_svr4='$wl-G' fi fi _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. _LT_TAGVAR(always_export_symbols, $1)=yes if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. _LT_TAGVAR(allow_undefined_flag, $1)='-berok' # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else if test ia64 = "$host_cpu"; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib' _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs" _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. _LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok' _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok' if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience' fi _LT_TAGVAR(archive_cmds_need_lc, $1)=yes _LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' # -brtl affects multiple linker settings, -berok does not and is overridden later compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`' if test svr4 != "$with_aix_soname"; then # This is similar to how AIX traditionally builds its shared libraries. _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' fi if test aix != "$with_aix_soname"; then _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! 
$soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' else # used by -dlpreopen to get the symbols _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir' fi _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='' ;; m68k) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac ;; bsdi[[45]]*) _LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++ or Intel C++ Compiler. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl* | icl*) # Native MSVC or ICC _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then cp "$export_symbols" "$output_objdir/$soname.def"; echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; else $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile=$lt_outputfile.exe lt_tool_outputfile=$lt_tool_outputfile.exe ;; esac~ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC and ICC wrapper _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' # FIXME: Should let the user specify the lib program. _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; dgux*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. 
freebsd* | dragonfly* | midnightbsd*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; hpux9*) if test yes = "$GCC"; then _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' ;; hpux10*) if test yes,no = "$GCC,$with_gnu_ld"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test no = "$with_gnu_ld"; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes fi ;; hpux11*) if test yes,no = "$GCC,$with_gnu_ld"; then case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) m4_if($1, [], [ # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) _LT_LINKER_OPTION([if $CC understands -b], _LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b], [_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'], [_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])], [_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags']) ;; esac fi if test no = "$with_gnu_ld"; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # 
hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test yes = "$GCC"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol], [lt_cv_irix_exported_symbol], [save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null" AC_LINK_IFELSE( [AC_LANG_SOURCE( [AC_LANG_CASE([C], [[int foo (void) { return 0; }]], [C++], [[int foo (void) { return 0; }]], [Fortran 77], [[ subroutine foo end]], [Fortran], [[ subroutine foo end]])])], [lt_cv_irix_exported_symbol=yes], [lt_cv_irix_exported_symbol=no]) LDFLAGS=$save_LDFLAGS]) if test yes = "$lt_cv_irix_exported_symbol"; then _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib' fi else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes _LT_TAGVAR(link_all_deplibs, $1)=yes ;; linux*) case $cc_basename in tcc*) # Fabrice Bellard et al's Tiny C Compiler _LT_TAGVAR(ld_shlibs, $1)=yes _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else _LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; newsos6) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *nto* | *qnx*) ;; openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs 
$compiler_flags $wl-retain-symbols-file,$export_symbols' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' fi else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; os2*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=unsupported shrext_cmds=.dll _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' ;; osf3*) if test yes = "$GCC"; then _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test yes = "$GCC"; then _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o 
$lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; solaris*) _LT_TAGVAR(no_undefined_flag, $1)=' -z defs' if test yes = "$GCC"; then wlarc='$wl' _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' _LT_TAGVAR(archive_cmds, $1)='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='$wl' _LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands '-z linker_flag'. GCC discards it without '$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test yes = "$GCC"; then _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' else _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' fi ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes ;; sunos4*) if test sequent = "$host_vendor"; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4) case $host_vendor in sni) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. 
_LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs' _LT_TAGVAR(hardcode_direct, $1)=no ;; motorola) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4.3*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes _LT_TAGVAR(ld_shlibs, $1)=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' _LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport' runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(ld_shlibs, $1)=no ;; esac if test sni = "$host_vendor"; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Blargedynsym' ;; esac fi fi ]) AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)]) test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no _LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld _LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl _LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl _LT_DECL([], [extract_expsyms_cmds], [2], [The commands to extract the exported symbol list from a shared archive]) # # Do we need to explicitly link libc? # case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in x|xyes) # Assume -lc should be added _LT_TAGVAR(archive_cmds_need_lc, $1)=yes if test yes,yes = "$GCC,$enable_shared"; then case $_LT_TAGVAR(archive_cmds, $1) in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. AC_CACHE_CHECK([whether -lc should be explicitly linked in], [lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1), [$RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if AC_TRY_EVAL(ac_compile) 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1) compiler_flags=-v linker_flags=-v verstring= output_objdir=. 
libname=conftest lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1) _LT_TAGVAR(allow_undefined_flag, $1)= if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) then lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no else lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes fi _LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* ]) _LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1) ;; esac fi ;; esac _LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0], [Whether or not to add -lc for building shared libraries]) _LT_TAGDECL([allow_libtool_libs_with_static_runtimes], [enable_shared_with_static_runtimes], [0], [Whether or not to disallow shared libs when runtime libs are static]) _LT_TAGDECL([], [export_dynamic_flag_spec], [1], [Compiler flag to allow reflexive dlopens]) _LT_TAGDECL([], [whole_archive_flag_spec], [1], [Compiler flag to generate shared objects directly from archives]) _LT_TAGDECL([], [compiler_needs_object], [1], [Whether the compiler copes with passing no objects directly]) _LT_TAGDECL([], [old_archive_from_new_cmds], [2], [Create an old-style archive from a shared archive]) _LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2], [Create a temporary old-style archive to link instead of a shared archive]) _LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive]) _LT_TAGDECL([], [archive_expsym_cmds], [2]) _LT_TAGDECL([], [module_cmds], [2], [Commands used to build a loadable module if different from building a shared archive.]) _LT_TAGDECL([], [module_expsym_cmds], [2]) _LT_TAGDECL([], [with_gnu_ld], [1], [Whether we are building with GNU ld or not]) _LT_TAGDECL([], [allow_undefined_flag], [1], [Flag that allows shared libraries with undefined symbols to be built]) _LT_TAGDECL([], [no_undefined_flag], [1], [Flag that enforces no undefined symbols]) _LT_TAGDECL([], [hardcode_libdir_flag_spec], [1], [Flag to hardcode $libdir into a binary during linking. 
This must work even if $libdir does not exist]) _LT_TAGDECL([], [hardcode_libdir_separator], [1], [Whether we need a single "-rpath" flag with a separated argument]) _LT_TAGDECL([], [hardcode_direct], [0], [Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_direct_absolute], [0], [Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes DIR into the resulting binary and the resulting library dependency is "absolute", i.e impossible to change by setting $shlibpath_var if the library is relocated]) _LT_TAGDECL([], [hardcode_minus_L], [0], [Set to "yes" if using the -LDIR flag during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_shlibpath_var], [0], [Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_automatic], [0], [Set to "yes" if building a shared library automatically hardcodes DIR into the library and all subsequent libraries and executables linked against it]) _LT_TAGDECL([], [inherit_rpath], [0], [Set to yes if linker adds runtime paths of dependent libraries to runtime path list]) _LT_TAGDECL([], [link_all_deplibs], [0], [Whether libtool must link a program against all its dependency libraries]) _LT_TAGDECL([], [always_export_symbols], [0], [Set to "yes" if exported symbols are required]) _LT_TAGDECL([], [export_symbols_cmds], [2], [The commands to list exported symbols]) _LT_TAGDECL([], [exclude_expsyms], [1], [Symbols that should not be listed in the preloaded symbols]) _LT_TAGDECL([], [include_expsyms], [1], [Symbols that must always be exported]) _LT_TAGDECL([], [prelink_cmds], [2], [Commands necessary for linking programs (against libraries) with templates]) _LT_TAGDECL([], [postlink_cmds], [2], [Commands necessary for finishing linking programs]) _LT_TAGDECL([], [file_list_spec], [1], [Specify filename containing input files]) dnl FIXME: Not yet implemented dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1], dnl [Compiler flag to generate thread safe objects]) ])# _LT_LINKER_SHLIBS # _LT_LANG_C_CONFIG([TAG]) # ------------------------ # Ensure that the configuration variables for a C compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write # the compiler configuration to 'libtool'. m4_defun([_LT_LANG_C_CONFIG], [m4_require([_LT_DECL_EGREP])dnl lt_save_CC=$CC AC_LANG_PUSH(C) # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' _LT_TAG_COMPILER # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... 
if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) LT_SYS_DLOPEN_SELF _LT_CMD_STRIPLIB # Report what library types will actually be built AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test ia64 != "$host_cpu"; then case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in yes,aix,yes) ;; # shared object as lib.so file only yes,svr4,*) ;; # shared object as lib.so archive member only yes,*) enable_static=no ;; # shared object in lib.a archive as well esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_CONFIG($1) fi AC_LANG_POP CC=$lt_save_CC ])# _LT_LANG_C_CONFIG # _LT_LANG_CXX_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a C++ compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write # the compiler configuration to 'libtool'. m4_defun([_LT_LANG_CXX_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PATH_MANIFEST_TOOL])dnl if test -n "$CXX" && ( test no != "$CXX" && ( (test g++ = "$CXX" && `g++ -v >/dev/null 2>&1` ) || (test g++ != "$CXX"))); then AC_PROG_CXXCPP else _lt_caught_CXX_error=yes fi AC_LANG_PUSH(C++) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(compiler_needs_object, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for C++ test sources. ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. 
if test yes != "$_lt_caught_CXX_error"; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately if test yes = "$GXX"; then _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' else _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= fi if test yes = "$GXX"; then # Set up default GNU C++ configuration LT_PATH_LD # Check if GNU C++ uses GNU ld as the underlying linker, since the # archiving commands below assume that GNU ld is being used. if test yes = "$with_gnu_ld"; then _LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) wlarc='$wl' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then _LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) _LT_TAGVAR(ld_shlibs, $1)=yes case $host_os in aix3*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aix[[4-9]]*) if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag= else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # have runtime linking enabled, and use it for executables. # For shared libraries, we enable/disable runtime linking # depending on the kind of the shared library created - # when "with_aix_soname,aix_use_runtimelinking" is: # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables # "aix,yes" lib.so shared, rtl:yes, for executables # lib.a static archive # "both,no" lib.so.V(shr.o) shared, rtl:yes # lib.a(lib.so.V) shared, rtl:no, for executables # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a(lib.so.V) shared, rtl:no # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a static archive case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then # With aix-soname=svr4, we create the lib.so.V shared archives only, # so we don't have lib.a shared libs to link our executables. # We have to force runtime linking in this case. aix_use_runtimelinking=yes LDFLAGS="$LDFLAGS -Wl,-brtl" fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. _LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(file_list_spec, $1)='$wl-f,' case $with_aix_soname,$aix_use_runtimelinking in aix,*) ;; # no import file svr4,* | *,yes) # use import file # The Import File defines what to hardcode. _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no ;; esac if test yes = "$GXX"; then case $host_os in aix4.[[012]]|aix4.[[012]].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 _LT_TAGVAR(hardcode_direct, $1)=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. 
Setting hardcode_minus_L # to unsupported forces relinking _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)= fi esac shared_flag='-shared' if test yes = "$aix_use_runtimelinking"; then shared_flag=$shared_flag' $wl-G' fi # Need to ensure runtime linking is disabled for the traditional # shared library, or the linker may eventually find shared libraries # /with/ Import File - we do not want to mix them. shared_flag_aix='-shared' shared_flag_svr4='-shared $wl-G' else # not using gcc if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test yes = "$aix_use_runtimelinking"; then shared_flag='$wl-G' else shared_flag='$wl-bM:SRE' fi shared_flag_aix='$wl-bM:SRE' shared_flag_svr4='$wl-G' fi fi _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to # export. _LT_TAGVAR(always_export_symbols, $1)=yes if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. # The "-G" linker flag allows undefined symbols. _LT_TAGVAR(no_undefined_flag, $1)='-bernotok' # Determine the default libpath from the value encoded in an empty # executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else if test ia64 = "$host_cpu"; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib' _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs" _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. _LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok' _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok' if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience' fi _LT_TAGVAR(archive_cmds_need_lc, $1)=yes _LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' # -brtl affects multiple linker settings, -berok does not and is overridden later compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`' if test svr4 != "$with_aix_soname"; then # This is similar to how AIX traditionally builds its shared # libraries. Need -bnortl late, we may have -brtl in LDFLAGS. 
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' fi if test aix != "$with_aix_soname"; then _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' else # used by -dlpreopen to get the symbols _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir' fi _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl* | ,icl* | no,icl*) # Native MSVC or ICC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then cp "$export_symbols" "$output_objdir/$soname.def"; echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; else $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile=$lt_outputfile.exe lt_tool_outputfile=$lt_tool_outputfile.exe ;; esac~ func_to_tool_file "$lt_outputfile"~ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file, use it as # is; otherwise, prepend EXPORTS... _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; os2*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=unsupported shrext_cmds=.dll _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' _LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' ;; dgux*) case $cc_basename 
in ec++*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF _LT_TAGVAR(ld_shlibs, $1)=no ;; freebsd-elf*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; freebsd* | dragonfly* | midnightbsd*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions _LT_TAGVAR(ld_shlibs, $1)=yes ;; haiku*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; hpux9*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes = "$GXX"; then _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; hpux10*|hpux11*) if test no = "$with_gnu_ld"; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) ;; *) _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. 
;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes = "$GXX"; then if test no = "$with_gnu_ld"; then case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; interix[[3-9]]*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$SED "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
_LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test yes = "$GXX"; then if test no = "$with_gnu_ld"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` -o $lib' fi fi _LT_TAGVAR(link_all_deplibs, $1)=yes ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib $wl-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. 
case `$CC -V 2>&1` in *"Version 7."*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac _LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; esac _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*) _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl--rpath $wl$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' ;; cxx*) # Compaq C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib $wl-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' 
_LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' _LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' if test yes = "$supports_anon_versioning"; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | $SED 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file $wl$export_symbols' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; m88k*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) _LT_TAGVAR(ld_shlibs, $1)=yes ;; openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`"; then _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file,$export_symbols -o $lib' _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' _LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Archives containing C++ object files must be created using # the KAI C++ compiler. 
case $host in osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; *) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; cxx*) case $host in osf3*) _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $soname `test -n "$verstring" && func_echo_all "$wl-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' ;; *) _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname $wl-input $wl$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~ $RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' ;; esac _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test yes,no = "$GXX,$with_gnu_ld"; then _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' case $host in osf3*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(archive_cmds_need_lc,$1)=yes _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G$allow_undefined_flag $wl-M $wl$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands '-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test yes,no = "$GXX,$with_gnu_ld"; then _LT_TAGVAR(no_undefined_flag, $1)=' $wl-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require '-G' NOT '-shared' on this # platform. 
_LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $wl$libdir' case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' _LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport' runpath_var='LD_RUN_PATH' case $cc_basename in CC*) _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~ '"$_LT_TAGVAR(old_archive_cmds, $1)" _LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~ '"$_LT_TAGVAR(reload_cmds, $1)" ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; vxworks*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)]) test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no _LT_TAGVAR(GCC, $1)=$GXX _LT_TAGVAR(LD, $1)=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test yes != "$_lt_caught_CXX_error" AC_LANG_POP ])# _LT_LANG_CXX_CONFIG # _LT_FUNC_STRIPNAME_CNF # ---------------------- # func_stripname_cnf prefix suffix name # strip PREFIX and SUFFIX off of NAME. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). # # This function is identical to the (non-XSI) version of func_stripname, # except this one can be used by m4 code that may be executed by configure, # rather than the libtool script. m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl AC_REQUIRE([_LT_DECL_SED]) AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH]) func_stripname_cnf () { case @S|@2 in .*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%\\\\@S|@2\$%%"`;; *) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%@S|@2\$%%"`;; esac } # func_stripname_cnf ])# _LT_FUNC_STRIPNAME_CNF # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME]) # --------------------------------- # Figure out "hidden" library dependencies from verbose # compiler output when linking a shared library. # Parse the compiler output and extract the necessary # objects, libraries and library flags. 
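# For example, on a GNU/Linux system a verbose g++ link line for the
# conftest object typically looks something like
#   ... crtbeginS.o conftest.o -lstdc++ -lm -lgcc_s -lc ... crtendS.o ...
# and the parsing loop below would then record crtbeginS.o as a predep
# object, crtendS.o as a postdep object, and -lstdc++ -lm ... as postdeps,
# while any -L directories seen before the conftest object feed
# compiler_lib_search_path.  The exact file and library names vary by
# compiler and platform; the line above is only an illustration.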
m4_defun([_LT_SYS_HIDDEN_LIBDEPS], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl # Dependencies to place before and after the object being linked: _LT_TAGVAR(predep_objects, $1)= _LT_TAGVAR(postdep_objects, $1)= _LT_TAGVAR(predeps, $1)= _LT_TAGVAR(postdeps, $1)= _LT_TAGVAR(compiler_lib_search_path, $1)= dnl we can't use the lt_simple_compile_test_code here, dnl because it contains code intended for an executable, dnl not a library. It's possible we should let each dnl tag define a new lt_????_link_test_code variable, dnl but it's only used here... m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF int a; void foo (void) { a = 0; } _LT_EOF ], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF ], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer*4 a a=0 return end _LT_EOF ], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer a a=0 return end _LT_EOF ], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF public class foo { private int a; public void bar (void) { a = 0; } }; _LT_EOF ], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF package foo func foo() { } _LT_EOF ]) _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac dnl Parse the compiler output and extract the necessary dnl objects, libraries and library flags. if AC_TRY_EVAL(ac_compile); then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case $prev$p in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. if test x-L = "$p" || test x-R = "$p"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test no = "$pre_test_object_deps_done"; then case $prev in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then _LT_TAGVAR(compiler_lib_search_path, $1)=$prev$p else _LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} $prev$p" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$_LT_TAGVAR(postdeps, $1)"; then _LT_TAGVAR(postdeps, $1)=$prev$p else _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} $prev$p" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. 
if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test no = "$pre_test_object_deps_done"; then if test -z "$_LT_TAGVAR(predep_objects, $1)"; then _LT_TAGVAR(predep_objects, $1)=$p else _LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p" fi else if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then _LT_TAGVAR(postdep_objects, $1)=$p else _LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. rm -f a.out a.exe else echo "libtool.m4: error: problem compiling $1 test program" fi $RM -f confest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken m4_if([$1], [CXX], [case $host_os in interix[[3-9]]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. _LT_TAGVAR(predep_objects,$1)= _LT_TAGVAR(postdep_objects,$1)= _LT_TAGVAR(postdeps,$1)= ;; esac ]) case " $_LT_TAGVAR(postdeps, $1) " in *" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; esac _LT_TAGVAR(compiler_lib_search_dirs, $1)= if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then _LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | $SED -e 's! -L! !g' -e 's!^ !!'` fi _LT_TAGDECL([], [compiler_lib_search_dirs], [1], [The directories searched by this compiler when creating a shared library]) _LT_TAGDECL([], [predep_objects], [1], [Dependencies to place before and after the objects being linked to create a shared library]) _LT_TAGDECL([], [postdep_objects], [1]) _LT_TAGDECL([], [predeps], [1]) _LT_TAGDECL([], [postdeps], [1]) _LT_TAGDECL([], [compiler_lib_search_path], [1], [The library search path used internally by the compiler when linking a shared library]) ])# _LT_SYS_HIDDEN_LIBDEPS # _LT_LANG_F77_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a Fortran 77 compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_F77_CONFIG], [AC_LANG_PUSH(Fortran 77) if test -z "$F77" || test no = "$F77"; then _lt_disable_F77=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for f77 test sources. ac_ext=f # Object file extension for compiled f77 test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the F77 compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. 
if test yes != "$_lt_disable_F77"; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${F77-"f77"} CFLAGS=$FFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) GCC=$G77 if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test ia64 != "$host_cpu"; then case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in yes,aix,yes) ;; # shared object as lib.so file only yes,svr4,*) ;; # shared object as lib.so archive member only yes,*) enable_static=no ;; # shared object in lib.a archive as well esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)=$G77 _LT_TAGVAR(LD, $1)=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS fi # test yes != "$_lt_disable_F77" AC_LANG_POP ])# _LT_LANG_F77_CONFIG # _LT_LANG_FC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for a Fortran compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_FC_CONFIG], [AC_LANG_PUSH(Fortran) if test -z "$FC" || test no = "$FC"; then _lt_disable_FC=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for fc test sources. ac_ext=${ac_fc_srcext-f} # Object file extension for compiled fc test sources. 
objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the FC compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test yes != "$_lt_disable_FC"; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${FC-"f95"} CFLAGS=$FCFLAGS compiler=$CC GCC=$ac_cv_fc_compiler_gnu _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test ia64 != "$host_cpu"; then case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in yes,aix,yes) ;; # shared object as lib.so file only yes,svr4,*) ;; # shared object as lib.so archive member only yes,*) enable_static=no ;; # shared object in lib.a archive as well esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)=$ac_cv_fc_compiler_gnu _LT_TAGVAR(LD, $1)=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS fi # test yes != "$_lt_disable_FC" AC_LANG_POP ])# _LT_LANG_FC_CONFIG # _LT_LANG_GCJ_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Java Compiler compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_GCJ_CONFIG], [AC_REQUIRE([LT_PROG_GCJ])dnl AC_LANG_SAVE # Source file extension for Java test sources. ac_ext=java # Object file extension for compiled Java test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="class foo {}" # Code to be used in simple link tests lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. 
lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GCJ-"gcj"} CFLAGS=$GCJFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)=$LD _LT_CC_BASENAME([$compiler]) # GCJ did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GCJ_CONFIG # _LT_LANG_GO_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Go compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_GO_CONFIG], [AC_REQUIRE([LT_PROG_GO])dnl AC_LANG_SAVE # Source file extension for Go test sources. ac_ext=go # Object file extension for compiled Go test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="package main; func main() { }" # Code to be used in simple link tests lt_simple_link_test_code='package main; func main() { }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GOC-"gccgo"} CFLAGS=$GOFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)=$LD _LT_CC_BASENAME([$compiler]) # Go did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GO_CONFIG # _LT_LANG_RC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for the Windows resource compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_RC_CONFIG], [AC_REQUIRE([LT_PROG_RC])dnl AC_LANG_SAVE # Source file extension for RC test sources. ac_ext=rc # Object file extension for compiled RC test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' # Code to be used in simple link tests lt_simple_link_test_code=$lt_simple_compile_test_code # ltmain only uses $CC for tagged configurations so make sure $CC is set. 
_LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC= CC=${RC-"windres"} CFLAGS= compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes if test -n "$compiler"; then : _LT_CONFIG($1) fi GCC=$lt_save_GCC AC_LANG_RESTORE CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_RC_CONFIG # LT_PROG_GCJ # ----------- AC_DEFUN([LT_PROG_GCJ], [m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ], [m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ], [AC_CHECK_TOOL(GCJ, gcj,) test set = "${GCJFLAGS+set}" || GCJFLAGS="-g -O2" AC_SUBST(GCJFLAGS)])])[]dnl ]) # Old name: AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_GCJ], []) # LT_PROG_GO # ---------- AC_DEFUN([LT_PROG_GO], [AC_CHECK_TOOL(GOC, gccgo,) ]) # LT_PROG_RC # ---------- AC_DEFUN([LT_PROG_RC], [AC_CHECK_TOOL(RC, windres,) ]) # Old name: AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_RC], []) # _LT_DECL_EGREP # -------------- # If we don't have a new enough Autoconf to choose the best grep # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_EGREP], [AC_REQUIRE([AC_PROG_EGREP])dnl AC_REQUIRE([AC_PROG_FGREP])dnl test -z "$GREP" && GREP=grep _LT_DECL([], [GREP], [1], [A grep program that handles long lines]) _LT_DECL([], [EGREP], [1], [An ERE matcher]) _LT_DECL([], [FGREP], [1], [A literal string matcher]) dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too AC_SUBST([GREP]) ]) # _LT_DECL_OBJDUMP # -------------- # If we don't have a new enough Autoconf to choose the best objdump # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_OBJDUMP], [AC_CHECK_TOOL(OBJDUMP, objdump, false) test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper]) AC_SUBST([OBJDUMP]) ]) # _LT_DECL_DLLTOOL # ---------------- # Ensure DLLTOOL variable is set. m4_defun([_LT_DECL_DLLTOOL], [AC_CHECK_TOOL(DLLTOOL, dlltool, false) test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program]) AC_SUBST([DLLTOOL]) ]) # _LT_DECL_FILECMD # ---------------- # Check for a file(cmd) program that can be used to detect file type and magic m4_defun([_LT_DECL_FILECMD], [AC_CHECK_TOOL([FILECMD], [file], [:]) _LT_DECL([], [FILECMD], [1], [A file(cmd) program that detects file types]) ])# _LD_DECL_FILECMD # _LT_DECL_SED # ------------ # Check for a fully-functional sed program, that truncates # as few characters as possible. Prefer GNU sed if found. m4_defun([_LT_DECL_SED], [AC_PROG_SED test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" _LT_DECL([], [SED], [1], [A sed program that does not truncate output]) _LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"], [Sed that helps us avoid accidentally triggering echo(1) options like -n]) ])# _LT_DECL_SED m4_ifndef([AC_PROG_SED], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_SED. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_SED], [AC_MSG_CHECKING([for a sed that does not truncate output]) AC_CACHE_VAL(lt_cv_path_SED, [# Loop through the user's path and test for sed and gsed. 
# Then use that list of sed's as ones to test for truncation. as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for lt_ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" fi done done done IFS=$as_save_IFS lt_ac_max=0 lt_ac_count=0 # Add /usr/xpg4/bin/sed as it is typically found on Solaris # along with /bin/sed that truncates output. for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do test ! -f "$lt_ac_sed" && continue cat /dev/null > conftest.in lt_ac_count=0 echo $ECHO_N "0123456789$ECHO_C" >conftest.in # Check for GNU sed and select it if it is found. if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then lt_cv_path_SED=$lt_ac_sed break fi while true; do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo >>conftest.nl $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break cmp -s conftest.out conftest.nl || break # 10000 chars as input seems more than enough test 10 -lt "$lt_ac_count" && break lt_ac_count=`expr $lt_ac_count + 1` if test "$lt_ac_count" -gt "$lt_ac_max"; then lt_ac_max=$lt_ac_count lt_cv_path_SED=$lt_ac_sed fi done done ]) SED=$lt_cv_path_SED AC_SUBST([SED]) AC_MSG_RESULT([$SED]) ])#AC_PROG_SED ])#m4_ifndef # Old name: AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_SED], []) # _LT_CHECK_SHELL_FEATURES # ------------------------ # Find out whether the shell is Bourne or XSI compatible, # or has some other useful features. m4_defun([_LT_CHECK_SHELL_FEATURES], [if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi _LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac _LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl ])# _LT_CHECK_SHELL_FEATURES # _LT_PATH_CONVERSION_FUNCTIONS # ----------------------------- # Determine what file name conversion functions should be used by # func_to_host_file (and, implicitly, by func_to_host_path). These are needed # for certain cross-compile configurations and native mingw. 
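# For example, a $build of *-*-cygwin* with a $host of *-*-mingw* selects
# func_convert_file_cygwin_to_w32 below, which would map a path such as
# /cygdrive/c/src/lib.la to roughly c:/src/lib.la, while ordinary native
# builds keep func_convert_file_noop (no conversion).  The converter
# functions themselves live in ltmain.sh; only their names are chosen
# here, and the sample path above is purely illustrative.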
m4_defun([_LT_PATH_CONVERSION_FUNCTIONS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_MSG_CHECKING([how to convert $build file names to $host format]) AC_CACHE_VAL(lt_cv_to_host_file_cmd, [case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac ]) to_host_file_cmd=$lt_cv_to_host_file_cmd AC_MSG_RESULT([$lt_cv_to_host_file_cmd]) _LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd], [0], [convert $build file names to $host format])dnl AC_MSG_CHECKING([how to convert $build file names to toolchain format]) AC_CACHE_VAL(lt_cv_to_tool_file_cmd, [#assume ordinary cross tools, or native build. lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac ]) to_tool_file_cmd=$lt_cv_to_tool_file_cmd AC_MSG_RESULT([$lt_cv_to_tool_file_cmd]) _LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd], [0], [convert $build files to toolchain format])dnl ])# _LT_PATH_CONVERSION_FUNCTIONS gevent-24.11.1/deps/c-ares/m4/ltoptions.m4000066400000000000000000000342751471441230600201070ustar00rootroot00000000000000# Helper functions for option handling. -*- Autoconf -*- # # Copyright (C) 2004-2005, 2007-2009, 2011-2019, 2021-2022 Free # Software Foundation, Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 8 ltoptions.m4 # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])]) # _LT_MANGLE_OPTION(MACRO-NAME, OPTION-NAME) # ------------------------------------------ m4_define([_LT_MANGLE_OPTION], [[_LT_OPTION_]m4_bpatsubst($1__$2, [[^a-zA-Z0-9_]], [_])]) # _LT_SET_OPTION(MACRO-NAME, OPTION-NAME) # --------------------------------------- # Set option OPTION-NAME for macro MACRO-NAME, and if there is a # matching handler defined, dispatch to it. Other OPTION-NAMEs are # saved as a flag. m4_define([_LT_SET_OPTION], [m4_define(_LT_MANGLE_OPTION([$1], [$2]))dnl m4_ifdef(_LT_MANGLE_DEFUN([$1], [$2]), _LT_MANGLE_DEFUN([$1], [$2]), [m4_warning([Unknown $1 option '$2'])])[]dnl ]) # _LT_IF_OPTION(MACRO-NAME, OPTION-NAME, IF-SET, [IF-NOT-SET]) # ------------------------------------------------------------ # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. m4_define([_LT_IF_OPTION], [m4_ifdef(_LT_MANGLE_OPTION([$1], [$2]), [$3], [$4])]) # _LT_UNLESS_OPTIONS(MACRO-NAME, OPTION-LIST, IF-NOT-SET) # ------------------------------------------------------- # Execute IF-NOT-SET unless all options in OPTION-LIST for MACRO-NAME # are set. 
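# For example, the call used by _LT_SET_OPTIONS below,
#   _LT_UNLESS_OPTIONS([LT_INIT], [shared disable-shared], [_LT_ENABLE_SHARED])
# expands _LT_ENABLE_SHARED only when neither 'shared' nor 'disable-shared'
# was passed to LT_INIT, i.e. it installs the default handler for that
# pair of opposing options.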
m4_define([_LT_UNLESS_OPTIONS], [m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [m4_ifdef(_LT_MANGLE_OPTION([$1], _LT_Option), [m4_define([$0_found])])])[]dnl m4_ifdef([$0_found], [m4_undefine([$0_found])], [$3 ])[]dnl ]) # _LT_SET_OPTIONS(MACRO-NAME, OPTION-LIST) # ---------------------------------------- # OPTION-LIST is a space-separated list of Libtool options associated # with MACRO-NAME. If any OPTION has a matching handler declared with # LT_OPTION_DEFINE, dispatch to that macro; otherwise complain about # the unknown option and exit. m4_defun([_LT_SET_OPTIONS], [# Set options m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [_LT_SET_OPTION([$1], _LT_Option)]) m4_if([$1],[LT_INIT],[ dnl dnl Simply set some default values (i.e off) if boolean options were not dnl specified: _LT_UNLESS_OPTIONS([LT_INIT], [dlopen], [enable_dlopen=no ]) _LT_UNLESS_OPTIONS([LT_INIT], [win32-dll], [enable_win32_dll=no ]) dnl dnl If no reference was made to various pairs of opposing options, then dnl we run the default mode handler for the pair. For example, if neither dnl 'shared' nor 'disable-shared' was passed, we enable building of shared dnl archives by default: _LT_UNLESS_OPTIONS([LT_INIT], [shared disable-shared], [_LT_ENABLE_SHARED]) _LT_UNLESS_OPTIONS([LT_INIT], [static disable-static], [_LT_ENABLE_STATIC]) _LT_UNLESS_OPTIONS([LT_INIT], [pic-only no-pic], [_LT_WITH_PIC]) _LT_UNLESS_OPTIONS([LT_INIT], [fast-install disable-fast-install], [_LT_ENABLE_FAST_INSTALL]) _LT_UNLESS_OPTIONS([LT_INIT], [aix-soname=aix aix-soname=both aix-soname=svr4], [_LT_WITH_AIX_SONAME([aix])]) ]) ])# _LT_SET_OPTIONS ## --------------------------------- ## ## Macros to handle LT_INIT options. ## ## --------------------------------- ## # _LT_MANGLE_DEFUN(MACRO-NAME, OPTION-NAME) # ----------------------------------------- m4_define([_LT_MANGLE_DEFUN], [[_LT_OPTION_DEFUN_]m4_bpatsubst(m4_toupper([$1__$2]), [[^A-Z0-9_]], [_])]) # LT_OPTION_DEFINE(MACRO-NAME, OPTION-NAME, CODE) # ----------------------------------------------- m4_define([LT_OPTION_DEFINE], [m4_define(_LT_MANGLE_DEFUN([$1], [$2]), [$3])[]dnl ])# LT_OPTION_DEFINE # dlopen # ------ LT_OPTION_DEFINE([LT_INIT], [dlopen], [enable_dlopen=yes ]) AU_DEFUN([AC_LIBTOOL_DLOPEN], [_LT_SET_OPTION([LT_INIT], [dlopen]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the 'dlopen' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN], []) # win32-dll # --------- # Declare package support for building win32 dll's. 
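# For example, a package that ships DLLs on Windows hosts would request
# this option from its configure.ac with
#   LT_INIT([win32-dll])
# which causes the handler below to probe for the as, dlltool and objdump
# tools on cygwin/mingw/pw32/cegcc hosts.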
LT_OPTION_DEFINE([LT_INIT], [win32-dll], [enable_win32_dll=yes case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-cegcc*) AC_CHECK_TOOL(AS, as, false) AC_CHECK_TOOL(DLLTOOL, dlltool, false) AC_CHECK_TOOL(OBJDUMP, objdump, false) ;; esac test -z "$AS" && AS=as _LT_DECL([], [AS], [1], [Assembler program])dnl test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program])dnl test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [Object dumper program])dnl ])# win32-dll AU_DEFUN([AC_LIBTOOL_WIN32_DLL], [AC_REQUIRE([AC_CANONICAL_HOST])dnl _LT_SET_OPTION([LT_INIT], [win32-dll]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the 'win32-dll' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_WIN32_DLL], []) # _LT_ENABLE_SHARED([DEFAULT]) # ---------------------------- # implement the --enable-shared flag, and supports the 'shared' and # 'disable-shared' LT_INIT options. # DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'. m4_define([_LT_ENABLE_SHARED], [m4_define([_LT_ENABLE_SHARED_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([shared], [AS_HELP_STRING([--enable-shared@<:@=PKGS@:>@], [build shared libraries @<:@default=]_LT_ENABLE_SHARED_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS=$lt_save_ifs ;; esac], [enable_shared=]_LT_ENABLE_SHARED_DEFAULT) _LT_DECL([build_libtool_libs], [enable_shared], [0], [Whether or not to build shared libraries]) ])# _LT_ENABLE_SHARED LT_OPTION_DEFINE([LT_INIT], [shared], [_LT_ENABLE_SHARED([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-shared], [_LT_ENABLE_SHARED([no])]) # Old names: AC_DEFUN([AC_ENABLE_SHARED], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[shared]) ]) AC_DEFUN([AC_DISABLE_SHARED], [_LT_SET_OPTION([LT_INIT], [disable-shared]) ]) AU_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)]) AU_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_SHARED], []) dnl AC_DEFUN([AM_DISABLE_SHARED], []) # _LT_ENABLE_STATIC([DEFAULT]) # ---------------------------- # implement the --enable-static flag, and support the 'static' and # 'disable-static' LT_INIT options. # DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'. m4_define([_LT_ENABLE_STATIC], [m4_define([_LT_ENABLE_STATIC_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([static], [AS_HELP_STRING([--enable-static@<:@=PKGS@:>@], [build static libraries @<:@default=]_LT_ENABLE_STATIC_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS=$lt_save_ifs ;; esac], [enable_static=]_LT_ENABLE_STATIC_DEFAULT) _LT_DECL([build_old_libs], [enable_static], [0], [Whether or not to build static libraries]) ])# _LT_ENABLE_STATIC LT_OPTION_DEFINE([LT_INIT], [static], [_LT_ENABLE_STATIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-static], [_LT_ENABLE_STATIC([no])]) # Old names: AC_DEFUN([AC_ENABLE_STATIC], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[static]) ]) AC_DEFUN([AC_DISABLE_STATIC], [_LT_SET_OPTION([LT_INIT], [disable-static]) ]) AU_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)]) AU_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_STATIC], []) dnl AC_DEFUN([AM_DISABLE_STATIC], []) # _LT_ENABLE_FAST_INSTALL([DEFAULT]) # ---------------------------------- # implement the --enable-fast-install flag, and support the 'fast-install' # and 'disable-fast-install' LT_INIT options. # DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'. m4_define([_LT_ENABLE_FAST_INSTALL], [m4_define([_LT_ENABLE_FAST_INSTALL_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([fast-install], [AS_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@], [optimize for fast installation @<:@default=]_LT_ENABLE_FAST_INSTALL_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS=$lt_save_ifs ;; esac], [enable_fast_install=]_LT_ENABLE_FAST_INSTALL_DEFAULT) _LT_DECL([fast_install], [enable_fast_install], [0], [Whether or not to optimize for fast installation])dnl ])# _LT_ENABLE_FAST_INSTALL LT_OPTION_DEFINE([LT_INIT], [fast-install], [_LT_ENABLE_FAST_INSTALL([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-fast-install], [_LT_ENABLE_FAST_INSTALL([no])]) # Old names: AU_DEFUN([AC_ENABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the 'fast-install' option into LT_INIT's first parameter.]) ]) AU_DEFUN([AC_DISABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], [disable-fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the 'disable-fast-install' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_ENABLE_FAST_INSTALL], []) dnl AC_DEFUN([AM_DISABLE_FAST_INSTALL], []) # _LT_WITH_AIX_SONAME([DEFAULT]) # ---------------------------------- # implement the --with-aix-soname flag, and support the `aix-soname=aix' # and `aix-soname=both' and `aix-soname=svr4' LT_INIT options. DEFAULT # is either `aix', `both' or `svr4'. If omitted, it defaults to `aix'. 
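# For example, a package may change the default in its configure.ac with
#   LT_INIT([aix-soname=svr4])
# and a user may still override that choice at configure time with
#   ./configure --with-aix-soname=both
# Only PowerPC AIX hosts (AIX 5 and later) building shared libraries are
# affected; on every other host the case statement below falls back to 'aix'.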
m4_define([_LT_WITH_AIX_SONAME], [m4_define([_LT_WITH_AIX_SONAME_DEFAULT], [m4_if($1, svr4, svr4, m4_if($1, both, both, aix))])dnl shared_archive_member_spec= case $host,$enable_shared in power*-*-aix[[5-9]]*,yes) AC_MSG_CHECKING([which variant of shared library versioning to provide]) AC_ARG_WITH([aix-soname], [AS_HELP_STRING([--with-aix-soname=aix|svr4|both], [shared library versioning (aka "SONAME") variant to provide on AIX, @<:@default=]_LT_WITH_AIX_SONAME_DEFAULT[@:>@.])], [case $withval in aix|svr4|both) ;; *) AC_MSG_ERROR([Unknown argument to --with-aix-soname]) ;; esac lt_cv_with_aix_soname=$with_aix_soname], [AC_CACHE_VAL([lt_cv_with_aix_soname], [lt_cv_with_aix_soname=]_LT_WITH_AIX_SONAME_DEFAULT) with_aix_soname=$lt_cv_with_aix_soname]) AC_MSG_RESULT([$with_aix_soname]) if test aix != "$with_aix_soname"; then # For the AIX way of multilib, we name the shared archive member # based on the bitwidth used, traditionally 'shr.o' or 'shr_64.o', # and 'shr.imp' or 'shr_64.imp', respectively, for the Import File. # Even when GNU compilers ignore OBJECT_MODE but need '-maix64' flag, # the AIX toolchain works better with OBJECT_MODE set (default 32). if test 64 = "${OBJECT_MODE-32}"; then shared_archive_member_spec=shr_64 else shared_archive_member_spec=shr fi fi ;; *) with_aix_soname=aix ;; esac _LT_DECL([], [shared_archive_member_spec], [0], [Shared archive member basename, for filename based shared library versioning on AIX])dnl ])# _LT_WITH_AIX_SONAME LT_OPTION_DEFINE([LT_INIT], [aix-soname=aix], [_LT_WITH_AIX_SONAME([aix])]) LT_OPTION_DEFINE([LT_INIT], [aix-soname=both], [_LT_WITH_AIX_SONAME([both])]) LT_OPTION_DEFINE([LT_INIT], [aix-soname=svr4], [_LT_WITH_AIX_SONAME([svr4])]) # _LT_WITH_PIC([MODE]) # -------------------- # implement the --with-pic flag, and support the 'pic-only' and 'no-pic' # LT_INIT options. # MODE is either 'yes' or 'no'. If omitted, it defaults to 'both'. m4_define([_LT_WITH_PIC], [AC_ARG_WITH([pic], [AS_HELP_STRING([--with-pic@<:@=PKGS@:>@], [try to use only PIC/non-PIC objects @<:@default=use both@:>@])], [lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. 
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for lt_pkg in $withval; do IFS=$lt_save_ifs if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS=$lt_save_ifs ;; esac], [pic_mode=m4_default([$1], [default])]) _LT_DECL([], [pic_mode], [0], [What type of objects to build])dnl ])# _LT_WITH_PIC LT_OPTION_DEFINE([LT_INIT], [pic-only], [_LT_WITH_PIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [no-pic], [_LT_WITH_PIC([no])]) # Old name: AU_DEFUN([AC_LIBTOOL_PICMODE], [_LT_SET_OPTION([LT_INIT], [pic-only]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the 'pic-only' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_PICMODE], []) ## ----------------- ## ## LTDL_INIT Options ## ## ----------------- ## m4_define([_LTDL_MODE], []) LT_OPTION_DEFINE([LTDL_INIT], [nonrecursive], [m4_define([_LTDL_MODE], [nonrecursive])]) LT_OPTION_DEFINE([LTDL_INIT], [recursive], [m4_define([_LTDL_MODE], [recursive])]) LT_OPTION_DEFINE([LTDL_INIT], [subproject], [m4_define([_LTDL_MODE], [subproject])]) m4_define([_LTDL_TYPE], []) LT_OPTION_DEFINE([LTDL_INIT], [installable], [m4_define([_LTDL_TYPE], [installable])]) LT_OPTION_DEFINE([LTDL_INIT], [convenience], [m4_define([_LTDL_TYPE], [convenience])]) gevent-24.11.1/deps/c-ares/m4/ltsugar.m4000066400000000000000000000104531471441230600175250ustar00rootroot00000000000000# ltsugar.m4 -- libtool m4 base layer. -*-Autoconf-*- # # Copyright (C) 2004-2005, 2007-2008, 2011-2019, 2021-2022 Free Software # Foundation, Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 6 ltsugar.m4 # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTSUGAR_VERSION], [m4_if([0.1])]) # lt_join(SEP, ARG1, [ARG2...]) # ----------------------------- # Produce ARG1SEPARG2...SEPARGn, omitting [] arguments and their # associated separator. # Needed until we can rely on m4_join from Autoconf 2.62, since all earlier # versions in m4sugar had bugs. m4_define([lt_join], [m4_if([$#], [1], [], [$#], [2], [[$2]], [m4_if([$2], [], [], [[$2]_])$0([$1], m4_shift(m4_shift($@)))])]) m4_define([_lt_join], [m4_if([$#$2], [2], [], [m4_if([$2], [], [], [[$1$2]])$0([$1], m4_shift(m4_shift($@)))])]) # lt_car(LIST) # lt_cdr(LIST) # ------------ # Manipulate m4 lists. # These macros are necessary as long as will still need to support # Autoconf-2.59, which quotes differently. m4_define([lt_car], [[$1]]) m4_define([lt_cdr], [m4_if([$#], 0, [m4_fatal([$0: cannot be called without arguments])], [$#], 1, [], [m4_dquote(m4_shift($@))])]) m4_define([lt_unquote], $1) # lt_append(MACRO-NAME, STRING, [SEPARATOR]) # ------------------------------------------ # Redefine MACRO-NAME to hold its former content plus 'SEPARATOR''STRING'. # Note that neither SEPARATOR nor STRING are expanded; they are appended # to MACRO-NAME as is (leaving the expansion for when MACRO-NAME is invoked). # No SEPARATOR is output if MACRO-NAME was previously undefined (different # than defined and empty). # # This macro is needed until we can rely on Autoconf 2.62, since earlier # versions of m4sugar mistakenly expanded SEPARATOR but not STRING. 
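# For example (the macro name 'my_list' is just a placeholder):
#   lt_append([my_list], [foo])        my_list now expands to 'foo'
#   lt_append([my_list], [bar], [ ])   my_list now expands to 'foo bar'
# because no separator is emitted on the first call, while my_list is
# still undefined.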
m4_define([lt_append], [m4_define([$1], m4_ifdef([$1], [m4_defn([$1])[$3]])[$2])]) # lt_combine(SEP, PREFIX-LIST, INFIX, SUFFIX1, [SUFFIX2...]) # ---------------------------------------------------------- # Produce a SEP delimited list of all paired combinations of elements of # PREFIX-LIST with SUFFIX1 through SUFFIXn. Each element of the list # has the form PREFIXmINFIXSUFFIXn. # Needed until we can rely on m4_combine added in Autoconf 2.62. m4_define([lt_combine], [m4_if(m4_eval([$# > 3]), [1], [m4_pushdef([_Lt_sep], [m4_define([_Lt_sep], m4_defn([lt_car]))])]]dnl [[m4_foreach([_Lt_prefix], [$2], [m4_foreach([_Lt_suffix], ]m4_dquote(m4_dquote(m4_shift(m4_shift(m4_shift($@)))))[, [_Lt_sep([$1])[]m4_defn([_Lt_prefix])[$3]m4_defn([_Lt_suffix])])])])]) # lt_if_append_uniq(MACRO-NAME, VARNAME, [SEPARATOR], [UNIQ], [NOT-UNIQ]) # ----------------------------------------------------------------------- # Iff MACRO-NAME does not yet contain VARNAME, then append it (delimited # by SEPARATOR if supplied) and expand UNIQ, else NOT-UNIQ. m4_define([lt_if_append_uniq], [m4_ifdef([$1], [m4_if(m4_index([$3]m4_defn([$1])[$3], [$3$2$3]), [-1], [lt_append([$1], [$2], [$3])$4], [$5])], [lt_append([$1], [$2], [$3])$4])]) # lt_dict_add(DICT, KEY, VALUE) # ----------------------------- m4_define([lt_dict_add], [m4_define([$1($2)], [$3])]) # lt_dict_add_subkey(DICT, KEY, SUBKEY, VALUE) # -------------------------------------------- m4_define([lt_dict_add_subkey], [m4_define([$1($2:$3)], [$4])]) # lt_dict_fetch(DICT, KEY, [SUBKEY]) # ---------------------------------- m4_define([lt_dict_fetch], [m4_ifval([$3], m4_ifdef([$1($2:$3)], [m4_defn([$1($2:$3)])]), m4_ifdef([$1($2)], [m4_defn([$1($2)])]))]) # lt_if_dict_fetch(DICT, KEY, [SUBKEY], VALUE, IF-TRUE, [IF-FALSE]) # ----------------------------------------------------------------- m4_define([lt_if_dict_fetch], [m4_if(lt_dict_fetch([$1], [$2], [$3]), [$4], [$5], [$6])]) # lt_dict_filter(DICT, [SUBKEY], VALUE, [SEPARATOR], KEY, [...]) # -------------------------------------------------------------- m4_define([lt_dict_filter], [m4_if([$5], [], [], [lt_join(m4_quote(m4_default([$4], [[, ]])), lt_unquote(m4_split(m4_normalize(m4_foreach(_Lt_key, lt_car([m4_shiftn(4, $@)]), [lt_if_dict_fetch([$1], _Lt_key, [$2], [$3], [_Lt_key ])])))))])[]dnl ]) gevent-24.11.1/deps/c-ares/m4/ltversion.m4000066400000000000000000000013121471441230600200630ustar00rootroot00000000000000# ltversion.m4 -- version numbers -*- Autoconf -*- # # Copyright (C) 2004, 2011-2019, 2021-2022 Free Software Foundation, # Inc. # Written by Scott James Remnant, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # @configure_input@ # serial 4245 ltversion.m4 # This file is part of GNU Libtool m4_define([LT_PACKAGE_VERSION], [2.4.7]) m4_define([LT_PACKAGE_REVISION], [2.4.7]) AC_DEFUN([LTVERSION_VERSION], [macro_version='2.4.7' macro_revision='2.4.7' _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?]) _LT_DECL(, macro_revision, 0) ]) gevent-24.11.1/deps/c-ares/m4/lt~obsolete.m4000066400000000000000000000140071471441230600204150ustar00rootroot00000000000000# lt~obsolete.m4 -- aclocal satisfying obsolete definitions. -*-Autoconf-*- # # Copyright (C) 2004-2005, 2007, 2009, 2011-2019, 2021-2022 Free # Software Foundation, Inc. # Written by Scott James Remnant, 2004. 
# # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 5 lt~obsolete.m4 # These exist entirely to fool aclocal when bootstrapping libtool. # # In the past libtool.m4 has provided macros via AC_DEFUN (or AU_DEFUN), # which have later been changed to m4_define as they aren't part of the # exported API, or moved to Autoconf or Automake where they belong. # # The trouble is, aclocal is a bit thick. It'll see the old AC_DEFUN # in /usr/share/aclocal/libtool.m4 and remember it, then when it sees us # using a macro with the same name in our local m4/libtool.m4 it'll # pull the old libtool.m4 in (it doesn't see our shiny new m4_define # and doesn't know about Autoconf macros at all.) # # So we provide this file, which has a silly filename so it's always # included after everything else. This provides aclocal with the # AC_DEFUNs it wants, but when m4 processes it, it doesn't do anything # because those macros already exist, or will be overwritten later. # We use AC_DEFUN over AU_DEFUN for compatibility with aclocal-1.6. # # Anytime we withdraw an AC_DEFUN or AU_DEFUN, remember to add it here. # Yes, that means every name once taken will need to remain here until # we give up compatibility with versions before 1.7, at which point # we need to keep only those names which we still refer to. # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTOBSOLETE_VERSION], [m4_if([1])]) m4_ifndef([AC_LIBTOOL_LINKER_OPTION], [AC_DEFUN([AC_LIBTOOL_LINKER_OPTION])]) m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP])]) m4_ifndef([_LT_AC_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH])]) m4_ifndef([_LT_AC_SHELL_INIT], [AC_DEFUN([_LT_AC_SHELL_INIT])]) m4_ifndef([_LT_AC_SYS_LIBPATH_AIX], [AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX])]) m4_ifndef([_LT_PROG_LTMAIN], [AC_DEFUN([_LT_PROG_LTMAIN])]) m4_ifndef([_LT_AC_TAGVAR], [AC_DEFUN([_LT_AC_TAGVAR])]) m4_ifndef([AC_LTDL_ENABLE_INSTALL], [AC_DEFUN([AC_LTDL_ENABLE_INSTALL])]) m4_ifndef([AC_LTDL_PREOPEN], [AC_DEFUN([AC_LTDL_PREOPEN])]) m4_ifndef([_LT_AC_SYS_COMPILER], [AC_DEFUN([_LT_AC_SYS_COMPILER])]) m4_ifndef([_LT_AC_LOCK], [AC_DEFUN([_LT_AC_LOCK])]) m4_ifndef([AC_LIBTOOL_SYS_OLD_ARCHIVE], [AC_DEFUN([AC_LIBTOOL_SYS_OLD_ARCHIVE])]) m4_ifndef([_LT_AC_TRY_DLOPEN_SELF], [AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF])]) m4_ifndef([AC_LIBTOOL_PROG_CC_C_O], [AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O])]) m4_ifndef([AC_LIBTOOL_SYS_HARD_LINK_LOCKS], [AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS])]) m4_ifndef([AC_LIBTOOL_OBJDIR], [AC_DEFUN([AC_LIBTOOL_OBJDIR])]) m4_ifndef([AC_LTDL_OBJDIR], [AC_DEFUN([AC_LTDL_OBJDIR])]) m4_ifndef([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH], [AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH])]) m4_ifndef([AC_LIBTOOL_SYS_LIB_STRIP], [AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP])]) m4_ifndef([AC_PATH_MAGIC], [AC_DEFUN([AC_PATH_MAGIC])]) m4_ifndef([AC_PROG_LD_GNU], [AC_DEFUN([AC_PROG_LD_GNU])]) m4_ifndef([AC_PROG_LD_RELOAD_FLAG], [AC_DEFUN([AC_PROG_LD_RELOAD_FLAG])]) m4_ifndef([AC_DEPLIBS_CHECK_METHOD], [AC_DEFUN([AC_DEPLIBS_CHECK_METHOD])]) m4_ifndef([AC_LIBTOOL_PROG_COMPILER_NO_RTTI], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI])]) m4_ifndef([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], [AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE])]) m4_ifndef([AC_LIBTOOL_PROG_COMPILER_PIC], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC])]) m4_ifndef([AC_LIBTOOL_PROG_LD_SHLIBS], [AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS])]) 
m4_ifndef([AC_LIBTOOL_POSTDEP_PREDEP], [AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP])]) m4_ifndef([LT_AC_PROG_EGREP], [AC_DEFUN([LT_AC_PROG_EGREP])]) m4_ifndef([LT_AC_PROG_SED], [AC_DEFUN([LT_AC_PROG_SED])]) m4_ifndef([_LT_CC_BASENAME], [AC_DEFUN([_LT_CC_BASENAME])]) m4_ifndef([_LT_COMPILER_BOILERPLATE], [AC_DEFUN([_LT_COMPILER_BOILERPLATE])]) m4_ifndef([_LT_LINKER_BOILERPLATE], [AC_DEFUN([_LT_LINKER_BOILERPLATE])]) m4_ifndef([_AC_PROG_LIBTOOL], [AC_DEFUN([_AC_PROG_LIBTOOL])]) m4_ifndef([AC_LIBTOOL_SETUP], [AC_DEFUN([AC_LIBTOOL_SETUP])]) m4_ifndef([_LT_AC_CHECK_DLFCN], [AC_DEFUN([_LT_AC_CHECK_DLFCN])]) m4_ifndef([AC_LIBTOOL_SYS_DYNAMIC_LINKER], [AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER])]) m4_ifndef([_LT_AC_TAGCONFIG], [AC_DEFUN([_LT_AC_TAGCONFIG])]) m4_ifndef([AC_DISABLE_FAST_INSTALL], [AC_DEFUN([AC_DISABLE_FAST_INSTALL])]) m4_ifndef([_LT_AC_LANG_CXX], [AC_DEFUN([_LT_AC_LANG_CXX])]) m4_ifndef([_LT_AC_LANG_F77], [AC_DEFUN([_LT_AC_LANG_F77])]) m4_ifndef([_LT_AC_LANG_GCJ], [AC_DEFUN([_LT_AC_LANG_GCJ])]) m4_ifndef([AC_LIBTOOL_LANG_C_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG])]) m4_ifndef([_LT_AC_LANG_C_CONFIG], [AC_DEFUN([_LT_AC_LANG_C_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_CXX_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG])]) m4_ifndef([_LT_AC_LANG_CXX_CONFIG], [AC_DEFUN([_LT_AC_LANG_CXX_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_F77_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG])]) m4_ifndef([_LT_AC_LANG_F77_CONFIG], [AC_DEFUN([_LT_AC_LANG_F77_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_GCJ_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG])]) m4_ifndef([_LT_AC_LANG_GCJ_CONFIG], [AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_RC_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG])]) m4_ifndef([_LT_AC_LANG_RC_CONFIG], [AC_DEFUN([_LT_AC_LANG_RC_CONFIG])]) m4_ifndef([AC_LIBTOOL_CONFIG], [AC_DEFUN([AC_LIBTOOL_CONFIG])]) m4_ifndef([_LT_AC_FILE_LTDLL_C], [AC_DEFUN([_LT_AC_FILE_LTDLL_C])]) m4_ifndef([_LT_REQUIRED_DARWIN_CHECKS], [AC_DEFUN([_LT_REQUIRED_DARWIN_CHECKS])]) m4_ifndef([_LT_AC_PROG_CXXCPP], [AC_DEFUN([_LT_AC_PROG_CXXCPP])]) m4_ifndef([_LT_PREPARE_SED_QUOTE_VARS], [AC_DEFUN([_LT_PREPARE_SED_QUOTE_VARS])]) m4_ifndef([_LT_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_PROG_ECHO_BACKSLASH])]) m4_ifndef([_LT_PROG_F77], [AC_DEFUN([_LT_PROG_F77])]) m4_ifndef([_LT_PROG_FC], [AC_DEFUN([_LT_PROG_FC])]) m4_ifndef([_LT_PROG_CXX], [AC_DEFUN([_LT_PROG_CXX])]) gevent-24.11.1/deps/c-ares/m4/pkg.m4000066400000000000000000000240071471441230600166250ustar00rootroot00000000000000# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*- # serial 12 (pkg-config-0.29.2) dnl Copyright © 2004 Scott James Remnant . dnl Copyright © 2012-2015 Dan Nicholson dnl dnl This program is free software; you can redistribute it and/or modify dnl it under the terms of the GNU General Public License as published by dnl the Free Software Foundation; either version 2 of the License, or dnl (at your option) any later version. dnl dnl This program is distributed in the hope that it will be useful, but dnl WITHOUT ANY WARRANTY; without even the implied warranty of dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU dnl General Public License for more details. dnl dnl You should have received a copy of the GNU General Public License dnl along with this program; if not, write to the Free Software dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA dnl 02111-1307, USA. 
dnl dnl As a special exception to the GNU General Public License, if you dnl distribute this file as part of a program that contains a dnl configuration script generated by Autoconf, you may include it under dnl the same distribution terms that you use for the rest of that dnl program. dnl PKG_PREREQ(MIN-VERSION) dnl ----------------------- dnl Since: 0.29 dnl dnl Verify that the version of the pkg-config macros are at least dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's dnl installed version of pkg-config, this checks the developer's version dnl of pkg.m4 when generating configure. dnl dnl To ensure that this macro is defined, also add: dnl m4_ifndef([PKG_PREREQ], dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])]) dnl dnl See the "Since" comment for each macro you use to see what version dnl of the macros you require. m4_defun([PKG_PREREQ], [m4_define([PKG_MACROS_VERSION], [0.29.2]) m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1, [m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])]) ])dnl PKG_PREREQ dnl PKG_PROG_PKG_CONFIG([MIN-VERSION]) dnl ---------------------------------- dnl Since: 0.16 dnl dnl Search for the pkg-config tool and set the PKG_CONFIG variable to dnl first found in the path. Checks that the version of pkg-config found dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is dnl used since that's the first version where most current features of dnl pkg-config existed. AC_DEFUN([PKG_PROG_PKG_CONFIG], [m4_pattern_forbid([^_?PKG_[A-Z_]+$]) m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$]) m4_pattern_allow([^PKG_CONFIG_(DISABLE_UNINSTALLED|TOP_BUILD_DIR|DEBUG_SPEW)$]) AC_ARG_VAR([PKG_CONFIG], [path to pkg-config utility]) AC_ARG_VAR([PKG_CONFIG_PATH], [directories to add to pkg-config's search path]) AC_ARG_VAR([PKG_CONFIG_LIBDIR], [path overriding pkg-config's built-in search path]) if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then AC_PATH_TOOL([PKG_CONFIG], [pkg-config]) fi if test -n "$PKG_CONFIG"; then _pkg_min_version=m4_default([$1], [0.9.0]) AC_MSG_CHECKING([pkg-config is at least version $_pkg_min_version]) if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) PKG_CONFIG="" fi fi[]dnl ])dnl PKG_PROG_PKG_CONFIG dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND]) dnl ------------------------------------------------------------------- dnl Since: 0.18 dnl dnl Check to see whether a particular set of modules exists. Similar to dnl PKG_CHECK_MODULES(), but does not set variables or print errors. dnl dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG]) dnl only at the first occurence in configure.ac, so if the first place dnl it's called might be skipped (such as if it is within an "if", you dnl have to call PKG_CHECK_EXISTS manually AC_DEFUN([PKG_CHECK_EXISTS], [AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl if test -n "$PKG_CONFIG" && \ AC_RUN_LOG([$PKG_CONFIG --exists --print-errors "$1"]); then m4_default([$2], [:]) m4_ifvaln([$3], [else $3])dnl fi]) dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES]) dnl --------------------------------------------- dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting dnl pkg_failed based on the result. m4_define([_PKG_CONFIG], [if test -n "$$1"; then pkg_cv_[]$1="$$1" elif test -n "$PKG_CONFIG"; then PKG_CHECK_EXISTS([$3], [pkg_cv_[]$1=`$PKG_CONFIG --[]$2 "$3" 2>/dev/null` test "x$?" 
!= "x0" && pkg_failed=yes ], [pkg_failed=yes]) else pkg_failed=untried fi[]dnl ])dnl _PKG_CONFIG dnl _PKG_SHORT_ERRORS_SUPPORTED dnl --------------------------- dnl Internal check to see if pkg-config supports short errors. AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED], [AC_REQUIRE([PKG_PROG_PKG_CONFIG]) if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then _pkg_short_errors_supported=yes else _pkg_short_errors_supported=no fi[]dnl ])dnl _PKG_SHORT_ERRORS_SUPPORTED dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND], dnl [ACTION-IF-NOT-FOUND]) dnl -------------------------------------------------------------- dnl Since: 0.4.0 dnl dnl Note that if there is a possibility the first call to dnl PKG_CHECK_MODULES might not happen, you should be sure to include an dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac AC_DEFUN([PKG_CHECK_MODULES], [AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl AC_ARG_VAR([$1][_LIBS], [linker flags for $1, overriding pkg-config])dnl pkg_failed=no AC_MSG_CHECKING([for $2]) _PKG_CONFIG([$1][_CFLAGS], [cflags], [$2]) _PKG_CONFIG([$1][_LIBS], [libs], [$2]) m4_define([_PKG_TEXT], [Alternatively, you may set the environment variables $1[]_CFLAGS and $1[]_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details.]) if test $pkg_failed = yes; then AC_MSG_RESULT([no]) _PKG_SHORT_ERRORS_SUPPORTED if test $_pkg_short_errors_supported = yes; then $1[]_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "$2" 2>&1` else $1[]_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "$2" 2>&1` fi # Put the nasty error message in config.log where it belongs echo "$$1[]_PKG_ERRORS" >&AS_MESSAGE_LOG_FD m4_default([$4], [AC_MSG_ERROR( [Package requirements ($2) were not met: $$1_PKG_ERRORS Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. _PKG_TEXT])[]dnl ]) elif test $pkg_failed = untried; then AC_MSG_RESULT([no]) m4_default([$4], [AC_MSG_FAILURE( [The pkg-config script could not be found or is too old. Make sure it is in your PATH or set the PKG_CONFIG environment variable to the full path to pkg-config. _PKG_TEXT To get pkg-config, see .])[]dnl ]) else $1[]_CFLAGS=$pkg_cv_[]$1[]_CFLAGS $1[]_LIBS=$pkg_cv_[]$1[]_LIBS AC_MSG_RESULT([yes]) $3 fi[]dnl ])dnl PKG_CHECK_MODULES dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND], dnl [ACTION-IF-NOT-FOUND]) dnl --------------------------------------------------------------------- dnl Since: 0.29 dnl dnl Checks for existence of MODULES and gathers its build flags with dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags dnl and VARIABLE-PREFIX_LIBS from --libs. dnl dnl Note that if there is a possibility the first call to dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to dnl include an explicit call to PKG_PROG_PKG_CONFIG in your dnl configure.ac. AC_DEFUN([PKG_CHECK_MODULES_STATIC], [AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl _save_PKG_CONFIG=$PKG_CONFIG PKG_CONFIG="$PKG_CONFIG --static" PKG_CHECK_MODULES($@) PKG_CONFIG=$_save_PKG_CONFIG[]dnl ])dnl PKG_CHECK_MODULES_STATIC dnl PKG_INSTALLDIR([DIRECTORY]) dnl ------------------------- dnl Since: 0.27 dnl dnl Substitutes the variable pkgconfigdir as the location where a module dnl should install pkg-config .pc files. By default the directory is dnl $libdir/pkgconfig, but the default can be changed by passing dnl DIRECTORY. 
The user can override through the --with-pkgconfigdir dnl parameter. AC_DEFUN([PKG_INSTALLDIR], [m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])]) m4_pushdef([pkg_description], [pkg-config installation directory @<:@]pkg_default[@:>@]) AC_ARG_WITH([pkgconfigdir], [AS_HELP_STRING([--with-pkgconfigdir], pkg_description)],, [with_pkgconfigdir=]pkg_default) AC_SUBST([pkgconfigdir], [$with_pkgconfigdir]) m4_popdef([pkg_default]) m4_popdef([pkg_description]) ])dnl PKG_INSTALLDIR dnl PKG_NOARCH_INSTALLDIR([DIRECTORY]) dnl -------------------------------- dnl Since: 0.27 dnl dnl Substitutes the variable noarch_pkgconfigdir as the location where a dnl module should install arch-independent pkg-config .pc files. By dnl default the directory is $datadir/pkgconfig, but the default can be dnl changed by passing DIRECTORY. The user can override through the dnl --with-noarch-pkgconfigdir parameter. AC_DEFUN([PKG_NOARCH_INSTALLDIR], [m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])]) m4_pushdef([pkg_description], [pkg-config arch-independent installation directory @<:@]pkg_default[@:>@]) AC_ARG_WITH([noarch-pkgconfigdir], [AS_HELP_STRING([--with-noarch-pkgconfigdir], pkg_description)],, [with_noarch_pkgconfigdir=]pkg_default) AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir]) m4_popdef([pkg_default]) m4_popdef([pkg_description]) ])dnl PKG_NOARCH_INSTALLDIR dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE, dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND]) dnl ------------------------------------------- dnl Since: 0.28 dnl dnl Retrieves the value of the pkg-config variable for the given module. AC_DEFUN([PKG_CHECK_VAR], [AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl _PKG_CONFIG([$1], [variable="][$3]["], [$2]) AS_VAR_COPY([$1], [pkg_cv_][$1]) AS_VAR_IF([$1], [""], [$5], [$4])dnl ])dnl PKG_CHECK_VAR gevent-24.11.1/deps/c-ares/src/000077500000000000000000000000001471441230600160465ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/CMakeLists.txt000066400000000000000000000002071471441230600206050ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT ADD_SUBDIRECTORY (lib) ADD_SUBDIRECTORY (tools) gevent-24.11.1/deps/c-ares/src/Makefile.am000066400000000000000000000002031471441230600200750ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT EXTRA_DIST=CMakeLists.txt SUBDIRS=lib tools gevent-24.11.1/deps/c-ares/src/Makefile.in000066400000000000000000000506211471441230600201170ustar00rootroot00000000000000# Makefile.in generated by automake 1.17 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2024 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) am__rm_f = rm -f $(am__rm_f_notfound) am__rm_rf = rm -rf $(am__rm_f_notfound) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = src ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ax_ac_append_to_file.m4 \ $(top_srcdir)/m4/ax_ac_print_to_file.m4 \ $(top_srcdir)/m4/ax_add_am_macro_static.m4 \ $(top_srcdir)/m4/ax_am_macros_static.m4 \ $(top_srcdir)/m4/ax_append_compile_flags.m4 \ $(top_srcdir)/m4/ax_append_flag.m4 \ $(top_srcdir)/m4/ax_append_link_flags.m4 \ $(top_srcdir)/m4/ax_check_compile_flag.m4 \ $(top_srcdir)/m4/ax_check_gnu_make.m4 \ $(top_srcdir)/m4/ax_check_link_flag.m4 \ $(top_srcdir)/m4/ax_check_user_namespace.m4 \ $(top_srcdir)/m4/ax_check_uts_namespace.m4 \ $(top_srcdir)/m4/ax_code_coverage.m4 \ $(top_srcdir)/m4/ax_compiler_vendor.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx_14.m4 \ $(top_srcdir)/m4/ax_file_escapes.m4 \ $(top_srcdir)/m4/ax_pthread.m4 \ $(top_srcdir)/m4/ax_require_defined.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/pkg.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/src/lib/ares_config.h \ $(top_builddir)/include/ares_build.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive 
cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir distdir-am am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` DIST_SUBDIRS = $(SUBDIRS) am__DIST_COMMON = $(srcdir)/Makefile.in DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_CFLAGS = @AM_CFLAGS@ AM_CPPFLAGS = @AM_CPPFLAGS@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AS = @AS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BUILD_SUBDIRS = @BUILD_SUBDIRS@ CARES_PRIVATE_LIBS = @CARES_PRIVATE_LIBS@ CARES_RANDOM_FILE = @CARES_RANDOM_FILE@ CARES_SYMBOL_HIDING_CFLAG = @CARES_SYMBOL_HIDING_CFLAG@ CARES_VERSION_INFO = @CARES_VERSION_INFO@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CODE_COVERAGE_CFLAGS = @CODE_COVERAGE_CFLAGS@ CODE_COVERAGE_CPPFLAGS = @CODE_COVERAGE_CPPFLAGS@ CODE_COVERAGE_CXXFLAGS = @CODE_COVERAGE_CXXFLAGS@ CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@ CODE_COVERAGE_LIBS = @CODE_COVERAGE_LIBS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GCOV = @GCOV@ GENHTML = @GENHTML@ 
GMOCK112_CFLAGS = @GMOCK112_CFLAGS@ GMOCK112_LIBS = @GMOCK112_LIBS@ GMOCK_CFLAGS = @GMOCK_CFLAGS@ GMOCK_LIBS = @GMOCK_LIBS@ GREP = @GREP@ HAVE_CXX14 = @HAVE_CXX14@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LCOV = @LCOV@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKGCONFIG_CFLAGS = @PKGCONFIG_CFLAGS@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_CXX = @PTHREAD_CXX@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__rm_f_notfound = @am__rm_f_notfound@ am__tar = @am__tar@ am__untar = @am__untar@ am__xargs_n = @am__xargs_n@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ ifGNUmake = @ifGNUmake@ ifnGNUmake = @ifnGNUmake@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT EXTRA_DIST = CMakeLists.txt SUBDIRS = lib tools all: all-recursive .SUFFIXES: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign src/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign src/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. # To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! 
-f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -$(am__rm_f) $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-recursive clean-am: clean-generic clean-libtool mostlyclean-am distclean: distclean-recursive -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: .MAKE: $(am__recursive_targets) install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am check \ check-am clean clean-generic clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-generic distclean-libtool \ distclean-tags distdir dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-dvi \ install-dvi-am install-exec install-exec-am install-html \ install-html-am install-info install-info-am install-man \ install-pdf install-pdf-am install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ installdirs-am maintainer-clean maintainer-clean-generic \ mostlyclean mostlyclean-generic mostlyclean-libtool pdf pdf-am \ ps ps-am tags tags-am uninstall uninstall-am .PRECIOUS: Makefile # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: # Tell GNU make to disable its built-in pattern rules. %:: %,v %:: RCS/%,v %:: RCS/% %:: s.% %:: SCCS/s.% gevent-24.11.1/deps/c-ares/src/lib/000077500000000000000000000000001471441230600166145ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/CMakeLists.txt000066400000000000000000000111241471441230600213530ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT # Transform Makefile.inc transform_makefile_inc("Makefile.inc" "${PROJECT_BINARY_DIR}/src/lib/Makefile.inc.cmake") include(${PROJECT_BINARY_DIR}/src/lib/Makefile.inc.cmake) # Write ares_config.h configuration file. This is used only for the build. 
CONFIGURE_FILE (ares_config.h.cmake ${PROJECT_BINARY_DIR}/ares_config.h) # Build the dynamic/shared library IF (CARES_SHARED) ADD_LIBRARY (${PROJECT_NAME} SHARED ${CSOURCES}) # Include resource file in windows builds for versioned DLLs IF (WIN32) TARGET_SOURCES (${PROJECT_NAME} PRIVATE cares.rc) ENDIF() # Convert CARES_LIB_VERSIONINFO libtool version format into VERSION and SOVERSION # Convert from ":" separated into CMake list format using ";" STRING (REPLACE ":" ";" CARES_LIB_VERSIONINFO ${CARES_LIB_VERSIONINFO}) LIST (GET CARES_LIB_VERSIONINFO 0 CARES_LIB_VERSION_CURRENT) LIST (GET CARES_LIB_VERSIONINFO 1 CARES_LIB_VERSION_REVISION) LIST (GET CARES_LIB_VERSIONINFO 2 CARES_LIB_VERSION_AGE) MATH (EXPR CARES_LIB_VERSION_MAJOR "${CARES_LIB_VERSION_CURRENT} - ${CARES_LIB_VERSION_AGE}") SET (CARES_LIB_VERSION_MINOR "${CARES_LIB_VERSION_AGE}") SET (CARES_LIB_VERSION_RELEASE "${CARES_LIB_VERSION_REVISION}") SET_TARGET_PROPERTIES (${PROJECT_NAME} PROPERTIES EXPORT_NAME cares OUTPUT_NAME cares COMPILE_PDB_NAME cares SOVERSION ${CARES_LIB_VERSION_MAJOR} VERSION "${CARES_LIB_VERSION_MAJOR}.${CARES_LIB_VERSION_MINOR}.${CARES_LIB_VERSION_RELEASE}" C_STANDARD 90 ) IF (ANDROID) SET_TARGET_PROPERTIES (${PROJECT_NAME} PROPERTIES C_STANDARD 99) ENDIF () IF (CARES_SYMBOL_HIDING) SET_TARGET_PROPERTIES (${PROJECT_NAME} PROPERTIES C_VISIBILITY_PRESET hidden VISIBILITY_INLINES_HIDDEN YES ) ENDIF () TARGET_INCLUDE_DIRECTORIES (${PROJECT_NAME} PUBLIC "$" "$" "$" "$" PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}" ) TARGET_COMPILE_DEFINITIONS (${PROJECT_NAME} PRIVATE HAVE_CONFIG_H=1 CARES_BUILDING_LIBRARY) TARGET_LINK_LIBRARIES (${PROJECT_NAME} PUBLIC ${CARES_DEPENDENT_LIBS} PRIVATE ${CMAKE_THREAD_LIBS_INIT} ) IF (CARES_INSTALL) INSTALL (TARGETS ${PROJECT_NAME} EXPORT ${PROJECT_NAME}-targets COMPONENT Library ${TARGETS_INST_DEST} ) IF (MSVC) INSTALL(FILES $ DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT Library OPTIONAL ) ENDIF () ENDIF () SET (STATIC_SUFFIX "_static") # For chain building: add alias targets that look like import libs that would be returned by find_package(c-ares). ADD_LIBRARY (${PROJECT_NAME}::cares_shared ALIAS ${PROJECT_NAME}) ADD_LIBRARY (${PROJECT_NAME}::cares ALIAS ${PROJECT_NAME}) ENDIF () # Build the static library IF (CARES_STATIC) SET (LIBNAME ${PROJECT_NAME}${STATIC_SUFFIX}) ADD_LIBRARY (${LIBNAME} STATIC ${CSOURCES}) SET_TARGET_PROPERTIES (${LIBNAME} PROPERTIES EXPORT_NAME cares${STATIC_SUFFIX} OUTPUT_NAME cares${STATIC_SUFFIX} COMPILE_PDB_NAME cares${STATIC_SUFFIX} C_STANDARD 90 ) IF (ANDROID) SET_TARGET_PROPERTIES (${LIBNAME} PROPERTIES C_STANDARD 99) ENDIF () IF (CARES_STATIC_PIC) SET_TARGET_PROPERTIES (${LIBNAME} PROPERTIES POSITION_INDEPENDENT_CODE True) ENDIF () TARGET_INCLUDE_DIRECTORIES (${LIBNAME} PUBLIC "$" "$" "$" "$" PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}" ) TARGET_COMPILE_DEFINITIONS (${LIBNAME} PRIVATE HAVE_CONFIG_H=1 CARES_BUILDING_LIBRARY) # Only matters on Windows IF (WIN32 OR CYGWIN) TARGET_COMPILE_DEFINITIONS (${LIBNAME} PUBLIC CARES_STATICLIB) ENDIF() TARGET_LINK_LIBRARIES (${LIBNAME} PUBLIC ${CARES_DEPENDENT_LIBS}) IF (CARES_INSTALL) INSTALL (TARGETS ${LIBNAME} EXPORT ${PROJECT_NAME}-targets COMPONENT Devel ${TARGETS_INST_DEST} ) ENDIF () # For chain building: add alias targets that look like import libs that would be returned by find_package(c-ares). ADD_LIBRARY (${PROJECT_NAME}::cares_static ALIAS ${LIBNAME}) IF (NOT TARGET ${PROJECT_NAME}::cares) # Only use static for the generic alias if shared lib wasn't built. 
ADD_LIBRARY (${PROJECT_NAME}::cares ALIAS ${LIBNAME}) ENDIF () ENDIF () gevent-24.11.1/deps/c-ares/src/lib/Makefile.am000066400000000000000000000033561471441230600206570ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign subdir-objects nostdinc 1.9.6 ACLOCAL_AMFLAGS = -I m4 --install # Specify our include paths here, and do it relative to $(top_srcdir) and # $(top_builddir), to ensure that these paths which belong to the library # being currently built and tested are searched before the library which # might possibly already be installed in the system. AM_CPPFLAGS += -I$(top_builddir)/include \ -I$(top_builddir)/src/lib \ -I$(top_srcdir)/include \ -I$(top_srcdir)/src/lib lib_LTLIBRARIES = libcares.la man_MANS = $(MANPAGES) # adig and ahost are just sample programs and thus not mentioned with the # regular sources and headers EXTRA_DIST = Makefile.inc config-win32.h CMakeLists.txt \ ares_config.h.in ares_config.h.cmake cares.rc \ $(CSOURCES) $(HHEADERS) config-dos.h DISTCLEANFILES = ares_config.h DIST_SUBDIRS = libcares_la_LDFLAGS = -version-info @CARES_VERSION_INFO@ if CARES_USE_NO_UNDEFINED libcares_la_LDFLAGS += -no-undefined endif libcares_la_CFLAGS_EXTRA = libcares_la_CPPFLAGS_EXTRA = -DCARES_BUILDING_LIBRARY if CARES_SYMBOL_HIDING libcares_la_CFLAGS_EXTRA += @CARES_SYMBOL_HIDING_CFLAG@ libcares_la_CPPFLAGS_EXTRA += -DCARES_SYMBOL_HIDING endif include $(top_srcdir)/aminclude_static.am libcares_la_LIBS = $(CODE_COVERAGE_LIBS) libcares_la_CFLAGS_EXTRA += $(CODE_COVERAGE_CFLAGS) libcares_la_CPPFLAGS_EXTRA += $(CODE_COVERAGE_CPPFLAGS) libcares_la_CFLAGS = $(AM_CFLAGS) $(libcares_la_CFLAGS_EXTRA) libcares_la_CPPFLAGS = $(AM_CPPFLAGS) $(libcares_la_CPPFLAGS_EXTRA) # Makefile.inc provides the CSOURCES and HHEADERS defines include Makefile.inc libcares_la_SOURCES = $(CSOURCES) $(HHEADERS) gevent-24.11.1/deps/c-ares/src/lib/Makefile.in000066400000000000000000005425031471441230600206720ustar00rootroot00000000000000# Makefile.in generated by automake 1.17 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2024 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # aminclude_static.am generated automatically by Autoconf # from AX_AM_MACROS_STATIC on Mon Sep 23 15:51:56 CDT 2024 # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) am__rm_f = rm -f $(am__rm_f_notfound) am__rm_rf = rm -rf $(am__rm_f_notfound) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ @CARES_USE_NO_UNDEFINED_TRUE@am__append_1 = -no-undefined @CARES_SYMBOL_HIDING_TRUE@am__append_2 = @CARES_SYMBOL_HIDING_CFLAG@ @CARES_SYMBOL_HIDING_TRUE@am__append_3 = -DCARES_SYMBOL_HIDING subdir = src/lib SUBDIRS = ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ax_ac_append_to_file.m4 \ $(top_srcdir)/m4/ax_ac_print_to_file.m4 \ $(top_srcdir)/m4/ax_add_am_macro_static.m4 \ $(top_srcdir)/m4/ax_am_macros_static.m4 \ $(top_srcdir)/m4/ax_append_compile_flags.m4 \ $(top_srcdir)/m4/ax_append_flag.m4 \ $(top_srcdir)/m4/ax_append_link_flags.m4 \ $(top_srcdir)/m4/ax_check_compile_flag.m4 \ $(top_srcdir)/m4/ax_check_gnu_make.m4 \ $(top_srcdir)/m4/ax_check_link_flag.m4 \ $(top_srcdir)/m4/ax_check_user_namespace.m4 \ $(top_srcdir)/m4/ax_check_uts_namespace.m4 \ $(top_srcdir)/m4/ax_code_coverage.m4 \ $(top_srcdir)/m4/ax_compiler_vendor.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx_14.m4 \ $(top_srcdir)/m4/ax_file_escapes.m4 \ $(top_srcdir)/m4/ax_pthread.m4 \ $(top_srcdir)/m4/ax_require_defined.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/pkg.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = ares_config.h $(top_builddir)/include/ares_build.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 
's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && echo $$files | $(am__xargs_n) 40 $(am__rm_f); }; \ } am__installdirs = "$(DESTDIR)$(libdir)" LTLIBRARIES = $(lib_LTLIBRARIES) libcares_la_LIBADD = am__dirstamp = $(am__leading_dot)dirstamp am__objects_1 = libcares_la-ares__addrinfo2hostent.lo \ libcares_la-ares__addrinfo_localhost.lo \ libcares_la-ares__close_sockets.lo \ libcares_la-ares__hosts_file.lo \ libcares_la-ares__parse_into_addrinfo.lo \ libcares_la-ares__socket.lo libcares_la-ares__sortaddrinfo.lo \ libcares_la-ares_android.lo libcares_la-ares_cancel.lo \ libcares_la-ares_cookie.lo libcares_la-ares_data.lo \ libcares_la-ares_destroy.lo libcares_la-ares_free_hostent.lo \ libcares_la-ares_free_string.lo \ libcares_la-ares_freeaddrinfo.lo \ libcares_la-ares_getaddrinfo.lo libcares_la-ares_getenv.lo \ libcares_la-ares_gethostbyaddr.lo \ libcares_la-ares_gethostbyname.lo \ libcares_la-ares_getnameinfo.lo libcares_la-ares_init.lo \ libcares_la-ares_library_init.lo libcares_la-ares_metrics.lo \ libcares_la-ares_options.lo libcares_la-ares_platform.lo \ libcares_la-ares_process.lo libcares_la-ares_qcache.lo \ libcares_la-ares_query.lo libcares_la-ares_search.lo \ libcares_la-ares_send.lo libcares_la-ares_strerror.lo \ libcares_la-ares_sysconfig.lo \ libcares_la-ares_sysconfig_files.lo \ libcares_la-ares_sysconfig_mac.lo \ libcares_la-ares_sysconfig_win.lo libcares_la-ares_timeout.lo \ libcares_la-ares_update_servers.lo libcares_la-ares_version.lo \ libcares_la-inet_net_pton.lo libcares_la-inet_ntop.lo \ libcares_la-windows_port.lo dsa/libcares_la-ares__array.lo \ dsa/libcares_la-ares__htable.lo \ dsa/libcares_la-ares__htable_asvp.lo \ dsa/libcares_la-ares__htable_strvp.lo \ dsa/libcares_la-ares__htable_szvp.lo \ dsa/libcares_la-ares__htable_vpvp.lo \ dsa/libcares_la-ares__llist.lo dsa/libcares_la-ares__slist.lo \ event/libcares_la-ares_event_configchg.lo \ event/libcares_la-ares_event_epoll.lo \ event/libcares_la-ares_event_kqueue.lo \ event/libcares_la-ares_event_poll.lo \ event/libcares_la-ares_event_select.lo \ event/libcares_la-ares_event_thread.lo \ event/libcares_la-ares_event_wake_pipe.lo \ event/libcares_la-ares_event_win32.lo \ legacy/libcares_la-ares_create_query.lo \ legacy/libcares_la-ares_expand_name.lo \ legacy/libcares_la-ares_expand_string.lo \ legacy/libcares_la-ares_fds.lo \ legacy/libcares_la-ares_getsock.lo \ legacy/libcares_la-ares_parse_a_reply.lo \ legacy/libcares_la-ares_parse_aaaa_reply.lo \ legacy/libcares_la-ares_parse_caa_reply.lo \ legacy/libcares_la-ares_parse_mx_reply.lo \ legacy/libcares_la-ares_parse_naptr_reply.lo \ legacy/libcares_la-ares_parse_ns_reply.lo \ legacy/libcares_la-ares_parse_ptr_reply.lo \ 
legacy/libcares_la-ares_parse_soa_reply.lo \ legacy/libcares_la-ares_parse_srv_reply.lo \ legacy/libcares_la-ares_parse_txt_reply.lo \ legacy/libcares_la-ares_parse_uri_reply.lo \ record/libcares_la-ares_dns_mapping.lo \ record/libcares_la-ares_dns_multistring.lo \ record/libcares_la-ares_dns_name.lo \ record/libcares_la-ares_dns_parse.lo \ record/libcares_la-ares_dns_record.lo \ record/libcares_la-ares_dns_write.lo \ str/libcares_la-ares__buf.lo \ str/libcares_la-ares_strcasecmp.lo str/libcares_la-ares_str.lo \ str/libcares_la-ares_strsplit.lo \ util/libcares_la-ares__iface_ips.lo \ util/libcares_la-ares__threads.lo \ util/libcares_la-ares__timeval.lo \ util/libcares_la-ares_math.lo util/libcares_la-ares_rand.lo am__objects_2 = am_libcares_la_OBJECTS = $(am__objects_1) $(am__objects_2) libcares_la_OBJECTS = $(am_libcares_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = libcares_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(libcares_la_CFLAGS) \ $(CFLAGS) $(libcares_la_LDFLAGS) $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = depcomp = $(SHELL) $(top_srcdir)/config/depcomp am__maybe_remake_depfiles = depfiles am__depfiles_remade = \ ./$(DEPDIR)/libcares_la-ares__addrinfo2hostent.Plo \ ./$(DEPDIR)/libcares_la-ares__addrinfo_localhost.Plo \ ./$(DEPDIR)/libcares_la-ares__close_sockets.Plo \ ./$(DEPDIR)/libcares_la-ares__hosts_file.Plo \ ./$(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Plo \ ./$(DEPDIR)/libcares_la-ares__socket.Plo \ ./$(DEPDIR)/libcares_la-ares__sortaddrinfo.Plo \ ./$(DEPDIR)/libcares_la-ares_android.Plo \ ./$(DEPDIR)/libcares_la-ares_cancel.Plo \ ./$(DEPDIR)/libcares_la-ares_cookie.Plo \ ./$(DEPDIR)/libcares_la-ares_data.Plo \ ./$(DEPDIR)/libcares_la-ares_destroy.Plo \ ./$(DEPDIR)/libcares_la-ares_free_hostent.Plo \ ./$(DEPDIR)/libcares_la-ares_free_string.Plo \ ./$(DEPDIR)/libcares_la-ares_freeaddrinfo.Plo \ ./$(DEPDIR)/libcares_la-ares_getaddrinfo.Plo \ ./$(DEPDIR)/libcares_la-ares_getenv.Plo \ ./$(DEPDIR)/libcares_la-ares_gethostbyaddr.Plo \ ./$(DEPDIR)/libcares_la-ares_gethostbyname.Plo \ ./$(DEPDIR)/libcares_la-ares_getnameinfo.Plo \ ./$(DEPDIR)/libcares_la-ares_init.Plo \ ./$(DEPDIR)/libcares_la-ares_library_init.Plo \ ./$(DEPDIR)/libcares_la-ares_metrics.Plo \ ./$(DEPDIR)/libcares_la-ares_options.Plo \ ./$(DEPDIR)/libcares_la-ares_platform.Plo \ ./$(DEPDIR)/libcares_la-ares_process.Plo \ ./$(DEPDIR)/libcares_la-ares_qcache.Plo \ ./$(DEPDIR)/libcares_la-ares_query.Plo \ ./$(DEPDIR)/libcares_la-ares_search.Plo \ ./$(DEPDIR)/libcares_la-ares_send.Plo \ ./$(DEPDIR)/libcares_la-ares_strerror.Plo \ ./$(DEPDIR)/libcares_la-ares_sysconfig.Plo \ ./$(DEPDIR)/libcares_la-ares_sysconfig_files.Plo \ ./$(DEPDIR)/libcares_la-ares_sysconfig_mac.Plo \ ./$(DEPDIR)/libcares_la-ares_sysconfig_win.Plo \ ./$(DEPDIR)/libcares_la-ares_timeout.Plo \ ./$(DEPDIR)/libcares_la-ares_update_servers.Plo \ ./$(DEPDIR)/libcares_la-ares_version.Plo \ ./$(DEPDIR)/libcares_la-inet_net_pton.Plo \ ./$(DEPDIR)/libcares_la-inet_ntop.Plo \ ./$(DEPDIR)/libcares_la-windows_port.Plo \ dsa/$(DEPDIR)/libcares_la-ares__array.Plo \ dsa/$(DEPDIR)/libcares_la-ares__htable.Plo \ 
dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Plo \ dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Plo \ dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Plo \ dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Plo \ dsa/$(DEPDIR)/libcares_la-ares__llist.Plo \ dsa/$(DEPDIR)/libcares_la-ares__slist.Plo \ event/$(DEPDIR)/libcares_la-ares_event_configchg.Plo \ event/$(DEPDIR)/libcares_la-ares_event_epoll.Plo \ event/$(DEPDIR)/libcares_la-ares_event_kqueue.Plo \ event/$(DEPDIR)/libcares_la-ares_event_poll.Plo \ event/$(DEPDIR)/libcares_la-ares_event_select.Plo \ event/$(DEPDIR)/libcares_la-ares_event_thread.Plo \ event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Plo \ event/$(DEPDIR)/libcares_la-ares_event_win32.Plo \ legacy/$(DEPDIR)/libcares_la-ares_create_query.Plo \ legacy/$(DEPDIR)/libcares_la-ares_expand_name.Plo \ legacy/$(DEPDIR)/libcares_la-ares_expand_string.Plo \ legacy/$(DEPDIR)/libcares_la-ares_fds.Plo \ legacy/$(DEPDIR)/libcares_la-ares_getsock.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Plo \ legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_mapping.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_multistring.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_name.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_parse.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_record.Plo \ record/$(DEPDIR)/libcares_la-ares_dns_write.Plo \ str/$(DEPDIR)/libcares_la-ares__buf.Plo \ str/$(DEPDIR)/libcares_la-ares_str.Plo \ str/$(DEPDIR)/libcares_la-ares_strcasecmp.Plo \ str/$(DEPDIR)/libcares_la-ares_strsplit.Plo \ util/$(DEPDIR)/libcares_la-ares__iface_ips.Plo \ util/$(DEPDIR)/libcares_la-ares__threads.Plo \ util/$(DEPDIR)/libcares_la-ares__timeval.Plo \ util/$(DEPDIR)/libcares_la-ares_math.Plo \ util/$(DEPDIR)/libcares_la-ares_rand.Plo am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(libcares_la_SOURCES) DIST_SOURCES = $(libcares_la_SOURCES) RECURSIVE_TARGETS = all-recursive check-recursive cscopelist-recursive \ ctags-recursive dvi-recursive html-recursive info-recursive \ install-data-recursive install-dvi-recursive \ install-exec-recursive install-html-recursive \ install-info-recursive install-pdf-recursive \ install-ps-recursive install-recursive installcheck-recursive \ installdirs-recursive pdf-recursive ps-recursive \ tags-recursive uninstall-recursive am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR 
in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac RECURSIVE_CLEAN_TARGETS = mostlyclean-recursive clean-recursive \ distclean-recursive maintainer-clean-recursive am__recursive_targets = \ $(RECURSIVE_TARGETS) \ $(RECURSIVE_CLEAN_TARGETS) \ $(am__extra_recursive_targets) AM_RECURSIVE_TARGETS = $(am__recursive_targets:-recursive=) TAGS CTAGS \ distdir distdir-am am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) \ ares_config.h.in # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.inc \ $(srcdir)/ares_config.h.in $(top_srcdir)/aminclude_static.am \ $(top_srcdir)/config/depcomp DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) am__relativize = \ dir0=`pwd`; \ sed_first='s,^\([^/]*\)/.*$$,\1,'; \ sed_rest='s,^[^/]*/*,,'; \ sed_last='s,^.*/\([^/]*\)$$,\1,'; \ sed_butlast='s,/*[^/]*$$,,'; \ while test -n "$$dir1"; do \ first=`echo "$$dir1" | sed -e "$$sed_first"`; \ if test "$$first" != "."; then \ if test "$$first" = ".."; then \ dir2=`echo "$$dir0" | sed -e "$$sed_last"`/"$$dir2"; \ dir0=`echo "$$dir0" | sed -e "$$sed_butlast"`; \ else \ first2=`echo "$$dir2" | sed -e "$$sed_first"`; \ if test "$$first2" = "$$first"; then \ dir2=`echo "$$dir2" | sed -e "$$sed_rest"`; \ else \ dir2="../$$dir2"; \ fi; \ dir0="$$dir0"/"$$first"; \ fi; \ fi; \ dir1=`echo "$$dir1" | sed -e "$$sed_rest"`; \ done; \ reldir="$$dir2" ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_CFLAGS = @AM_CFLAGS@ # Specify our include paths here, and do it relative to $(top_srcdir) and # $(top_builddir), to ensure that these paths which belong to the library # being currently built and tested are searched before the library which # might possibly already be installed in the system. 
AM_CPPFLAGS = @AM_CPPFLAGS@ -I$(top_builddir)/include \ -I$(top_builddir)/src/lib -I$(top_srcdir)/include \ -I$(top_srcdir)/src/lib AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AS = @AS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BUILD_SUBDIRS = @BUILD_SUBDIRS@ CARES_PRIVATE_LIBS = @CARES_PRIVATE_LIBS@ CARES_RANDOM_FILE = @CARES_RANDOM_FILE@ CARES_SYMBOL_HIDING_CFLAG = @CARES_SYMBOL_HIDING_CFLAG@ CARES_VERSION_INFO = @CARES_VERSION_INFO@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CODE_COVERAGE_CFLAGS = @CODE_COVERAGE_CFLAGS@ CODE_COVERAGE_CPPFLAGS = @CODE_COVERAGE_CPPFLAGS@ CODE_COVERAGE_CXXFLAGS = @CODE_COVERAGE_CXXFLAGS@ CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@ CODE_COVERAGE_LIBS = @CODE_COVERAGE_LIBS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GCOV = @GCOV@ GENHTML = @GENHTML@ GMOCK112_CFLAGS = @GMOCK112_CFLAGS@ GMOCK112_LIBS = @GMOCK112_LIBS@ GMOCK_CFLAGS = @GMOCK_CFLAGS@ GMOCK_LIBS = @GMOCK_LIBS@ GREP = @GREP@ HAVE_CXX14 = @HAVE_CXX14@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LCOV = @LCOV@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKGCONFIG_CFLAGS = @PKGCONFIG_CFLAGS@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_CXX = @PTHREAD_CXX@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__rm_f_notfound = @am__rm_f_notfound@ am__tar = @am__tar@ am__untar = @am__untar@ am__xargs_n = @am__xargs_n@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ ifGNUmake = @ifGNUmake@ ifnGNUmake = @ifnGNUmake@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir 
= @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign subdir-objects nostdinc 1.9.6 ACLOCAL_AMFLAGS = -I m4 --install lib_LTLIBRARIES = libcares.la man_MANS = $(MANPAGES) # adig and ahost are just sample programs and thus not mentioned with the # regular sources and headers EXTRA_DIST = Makefile.inc config-win32.h CMakeLists.txt \ ares_config.h.in ares_config.h.cmake cares.rc \ $(CSOURCES) $(HHEADERS) config-dos.h DISTCLEANFILES = ares_config.h DIST_SUBDIRS = libcares_la_LDFLAGS = -version-info @CARES_VERSION_INFO@ \ $(am__append_1) libcares_la_CFLAGS_EXTRA = $(am__append_2) $(CODE_COVERAGE_CFLAGS) libcares_la_CPPFLAGS_EXTRA = -DCARES_BUILDING_LIBRARY $(am__append_3) \ $(CODE_COVERAGE_CPPFLAGS) @CODE_COVERAGE_ENABLED_TRUE@GITIGNOREFILES := $(GITIGNOREFILES) $(CODE_COVERAGE_OUTPUT_FILE) $(CODE_COVERAGE_OUTPUT_DIRECTORY) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_cap = $(code_coverage_v_lcov_cap_$(V)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_cap_ = $(code_coverage_v_lcov_cap_$(AM_DEFAULT_VERBOSITY)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_cap_0 = @echo " LCOV --capture" $(CODE_COVERAGE_OUTPUT_FILE); @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_ign = $(code_coverage_v_lcov_ign_$(V)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_ign_ = $(code_coverage_v_lcov_ign_$(AM_DEFAULT_VERBOSITY)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_lcov_ign_0 = @echo " LCOV --remove /tmp/*" $(CODE_COVERAGE_IGNORE_PATTERN); @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_genhtml = $(code_coverage_v_genhtml_$(V)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_genhtml_ = $(code_coverage_v_genhtml_$(AM_DEFAULT_VERBOSITY)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_v_genhtml_0 = @echo " GEN " "$(CODE_COVERAGE_OUTPUT_DIRECTORY)"; @CODE_COVERAGE_ENABLED_TRUE@code_coverage_quiet = $(code_coverage_quiet_$(V)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_quiet_ = $(code_coverage_quiet_$(AM_DEFAULT_VERBOSITY)) @CODE_COVERAGE_ENABLED_TRUE@code_coverage_quiet_0 = --quiet # sanitizes the test-name: replaces with underscores: dashes and dots @CODE_COVERAGE_ENABLED_TRUE@code_coverage_sanitize = $(subst -,_,$(subst .,_,$(1))) @CODE_COVERAGE_ENABLED_TRUE@AM_DISTCHECK_CONFIGURE_FLAGS := $(AM_DISTCHECK_CONFIGURE_FLAGS) --disable-code-coverage libcares_la_LIBS = $(CODE_COVERAGE_LIBS) libcares_la_CFLAGS = $(AM_CFLAGS) $(libcares_la_CFLAGS_EXTRA) libcares_la_CPPFLAGS = $(AM_CPPFLAGS) $(libcares_la_CPPFLAGS_EXTRA) CSOURCES = ares__addrinfo2hostent.c \ ares__addrinfo_localhost.c \ ares__close_sockets.c \ ares__hosts_file.c \ ares__parse_into_addrinfo.c \ ares__socket.c \ ares__sortaddrinfo.c \ ares_android.c \ ares_cancel.c \ ares_cookie.c \ ares_data.c \ ares_destroy.c \ ares_free_hostent.c \ ares_free_string.c \ ares_freeaddrinfo.c \ ares_getaddrinfo.c \ ares_getenv.c \ ares_gethostbyaddr.c \ ares_gethostbyname.c \ ares_getnameinfo.c \ ares_init.c \ ares_library_init.c \ ares_metrics.c \ ares_options.c \ ares_platform.c \ ares_process.c \ ares_qcache.c 
\ ares_query.c \ ares_search.c \ ares_send.c \ ares_strerror.c \ ares_sysconfig.c \ ares_sysconfig_files.c \ ares_sysconfig_mac.c \ ares_sysconfig_win.c \ ares_timeout.c \ ares_update_servers.c \ ares_version.c \ inet_net_pton.c \ inet_ntop.c \ windows_port.c \ dsa/ares__array.c \ dsa/ares__htable.c \ dsa/ares__htable_asvp.c \ dsa/ares__htable_strvp.c \ dsa/ares__htable_szvp.c \ dsa/ares__htable_vpvp.c \ dsa/ares__llist.c \ dsa/ares__slist.c \ event/ares_event_configchg.c \ event/ares_event_epoll.c \ event/ares_event_kqueue.c \ event/ares_event_poll.c \ event/ares_event_select.c \ event/ares_event_thread.c \ event/ares_event_wake_pipe.c \ event/ares_event_win32.c \ legacy/ares_create_query.c \ legacy/ares_expand_name.c \ legacy/ares_expand_string.c \ legacy/ares_fds.c \ legacy/ares_getsock.c \ legacy/ares_parse_a_reply.c \ legacy/ares_parse_aaaa_reply.c \ legacy/ares_parse_caa_reply.c \ legacy/ares_parse_mx_reply.c \ legacy/ares_parse_naptr_reply.c \ legacy/ares_parse_ns_reply.c \ legacy/ares_parse_ptr_reply.c \ legacy/ares_parse_soa_reply.c \ legacy/ares_parse_srv_reply.c \ legacy/ares_parse_txt_reply.c \ legacy/ares_parse_uri_reply.c \ record/ares_dns_mapping.c \ record/ares_dns_multistring.c \ record/ares_dns_name.c \ record/ares_dns_parse.c \ record/ares_dns_record.c \ record/ares_dns_write.c \ str/ares__buf.c \ str/ares_strcasecmp.c \ str/ares_str.c \ str/ares_strsplit.c \ util/ares__iface_ips.c \ util/ares__threads.c \ util/ares__timeval.c \ util/ares_math.c \ util/ares_rand.c HHEADERS = ares_android.h \ ares_data.h \ ares_getenv.h \ ares_inet_net_pton.h \ ares_ipv6.h \ ares_platform.h \ ares_private.h \ ares_setup.h \ dsa/ares__array.h \ dsa/ares__htable.h \ dsa/ares__htable_asvp.h \ dsa/ares__htable_strvp.h \ dsa/ares__htable_szvp.h \ dsa/ares__htable_vpvp.h \ dsa/ares__llist.h \ dsa/ares__slist.h \ event/ares_event.h \ event/ares_event_win32.h \ record/ares_dns_multistring.h \ record/ares_dns_private.h \ str/ares__buf.h \ str/ares_strcasecmp.h \ str/ares_str.h \ str/ares_strsplit.h \ util/ares__iface_ips.h \ util/ares__threads.h \ thirdparty/apple/dnsinfo.h # Makefile.inc provides the CSOURCES and HHEADERS defines libcares_la_SOURCES = $(CSOURCES) $(HHEADERS) all: ares_config.h $(MAKE) $(AM_MAKEFLAGS) all-recursive .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(top_srcdir)/aminclude_static.am $(srcdir)/Makefile.inc $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign src/lib/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign src/lib/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(top_srcdir)/aminclude_static.am $(srcdir)/Makefile.inc $(am__empty): $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): ares_config.h: stamp-h1 @test -f $@ || rm -f stamp-h1 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h1 stamp-h1: $(srcdir)/ares_config.h.in $(top_builddir)/config.status $(AM_V_at)rm -f stamp-h1 $(AM_V_GEN)cd $(top_builddir) && $(SHELL) ./config.status src/lib/ares_config.h $(srcdir)/ares_config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) $(AM_V_GEN)($(am__cd) $(top_srcdir) && $(AUTOHEADER)) $(AM_V_at)rm -f stamp-h1 $(AM_V_at)touch $@ distclean-hdr: -rm -f ares_config.h stamp-h1 install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(libdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(libdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -$(am__rm_f) $(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ echo rm -f $${locs}; \ $(am__rm_f) $${locs} dsa/$(am__dirstamp): @$(MKDIR_P) dsa @: >>dsa/$(am__dirstamp) dsa/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) dsa/$(DEPDIR) @: >>dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__array.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__htable.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__htable_asvp.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__htable_strvp.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__htable_szvp.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__htable_vpvp.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__llist.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) dsa/libcares_la-ares__slist.lo: dsa/$(am__dirstamp) \ dsa/$(DEPDIR)/$(am__dirstamp) event/$(am__dirstamp): @$(MKDIR_P) event @: >>event/$(am__dirstamp) event/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) event/$(DEPDIR) @: >>event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_configchg.lo: event/$(am__dirstamp) \ 
event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_epoll.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_kqueue.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_poll.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_select.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_thread.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_wake_pipe.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) event/libcares_la-ares_event_win32.lo: event/$(am__dirstamp) \ event/$(DEPDIR)/$(am__dirstamp) legacy/$(am__dirstamp): @$(MKDIR_P) legacy @: >>legacy/$(am__dirstamp) legacy/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) legacy/$(DEPDIR) @: >>legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_create_query.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_expand_name.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_expand_string.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_fds.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_getsock.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_a_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_aaaa_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_caa_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_mx_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_naptr_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_ns_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_ptr_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_soa_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_srv_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_txt_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) legacy/libcares_la-ares_parse_uri_reply.lo: legacy/$(am__dirstamp) \ legacy/$(DEPDIR)/$(am__dirstamp) record/$(am__dirstamp): @$(MKDIR_P) record @: >>record/$(am__dirstamp) record/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) record/$(DEPDIR) @: >>record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_mapping.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_multistring.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_name.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_parse.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_record.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) record/libcares_la-ares_dns_write.lo: record/$(am__dirstamp) \ record/$(DEPDIR)/$(am__dirstamp) str/$(am__dirstamp): @$(MKDIR_P) str @: >>str/$(am__dirstamp) str/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) str/$(DEPDIR) @: >>str/$(DEPDIR)/$(am__dirstamp) str/libcares_la-ares__buf.lo: str/$(am__dirstamp) \ str/$(DEPDIR)/$(am__dirstamp) str/libcares_la-ares_strcasecmp.lo: str/$(am__dirstamp) \ 
str/$(DEPDIR)/$(am__dirstamp) str/libcares_la-ares_str.lo: str/$(am__dirstamp) \ str/$(DEPDIR)/$(am__dirstamp) str/libcares_la-ares_strsplit.lo: str/$(am__dirstamp) \ str/$(DEPDIR)/$(am__dirstamp) util/$(am__dirstamp): @$(MKDIR_P) util @: >>util/$(am__dirstamp) util/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) util/$(DEPDIR) @: >>util/$(DEPDIR)/$(am__dirstamp) util/libcares_la-ares__iface_ips.lo: util/$(am__dirstamp) \ util/$(DEPDIR)/$(am__dirstamp) util/libcares_la-ares__threads.lo: util/$(am__dirstamp) \ util/$(DEPDIR)/$(am__dirstamp) util/libcares_la-ares__timeval.lo: util/$(am__dirstamp) \ util/$(DEPDIR)/$(am__dirstamp) util/libcares_la-ares_math.lo: util/$(am__dirstamp) \ util/$(DEPDIR)/$(am__dirstamp) util/libcares_la-ares_rand.lo: util/$(am__dirstamp) \ util/$(DEPDIR)/$(am__dirstamp) libcares.la: $(libcares_la_OBJECTS) $(libcares_la_DEPENDENCIES) $(EXTRA_libcares_la_DEPENDENCIES) $(AM_V_CCLD)$(libcares_la_LINK) -rpath $(libdir) $(libcares_la_OBJECTS) $(libcares_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) -rm -f dsa/*.$(OBJEXT) -rm -f dsa/*.lo -rm -f event/*.$(OBJEXT) -rm -f event/*.lo -rm -f legacy/*.$(OBJEXT) -rm -f legacy/*.lo -rm -f record/*.$(OBJEXT) -rm -f record/*.lo -rm -f str/*.$(OBJEXT) -rm -f str/*.lo -rm -f util/*.$(OBJEXT) -rm -f util/*.lo distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__addrinfo2hostent.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__addrinfo_localhost.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__close_sockets.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__hosts_file.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__socket.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares__sortaddrinfo.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_android.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_cancel.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_cookie.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_data.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_destroy.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_free_hostent.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_free_string.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_freeaddrinfo.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_getaddrinfo.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_getenv.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_gethostbyaddr.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_gethostbyname.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ 
@am__quote@./$(DEPDIR)/libcares_la-ares_getnameinfo.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_init.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_library_init.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_metrics.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_options.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_platform.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_process.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_qcache.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_query.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_search.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_send.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_strerror.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_sysconfig.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_sysconfig_files.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_sysconfig_mac.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_sysconfig_win.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_timeout.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_update_servers.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-ares_version.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-inet_net_pton.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-inet_ntop.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libcares_la-windows_port.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__array.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__htable.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__llist.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@dsa/$(DEPDIR)/libcares_la-ares__slist.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_configchg.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_epoll.Plo@am__quote@ 
# am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_kqueue.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_poll.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_select.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_thread.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@event/$(DEPDIR)/libcares_la-ares_event_win32.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_create_query.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_expand_name.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_expand_string.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_fds.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_getsock.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_mapping.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_multistring.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_name.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_parse.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_record.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@record/$(DEPDIR)/libcares_la-ares_dns_write.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@str/$(DEPDIR)/libcares_la-ares__buf.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ 
@am__quote@str/$(DEPDIR)/libcares_la-ares_str.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@str/$(DEPDIR)/libcares_la-ares_strcasecmp.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@str/$(DEPDIR)/libcares_la-ares_strsplit.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@util/$(DEPDIR)/libcares_la-ares__iface_ips.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@util/$(DEPDIR)/libcares_la-ares__threads.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@util/$(DEPDIR)/libcares_la-ares__timeval.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@util/$(DEPDIR)/libcares_la-ares_math.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@util/$(DEPDIR)/libcares_la-ares_rand.Plo@am__quote@ # am--include-marker $(am__depfiles_remade): @$(MKDIR_P) $(@D) @: >>$@ am--depfiles: $(am__depfiles_remade) .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\ @am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\ @am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.lo$$||'`;\ @am__fastdepCC_TRUE@ $(LTCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< libcares_la-ares__addrinfo2hostent.lo: ares__addrinfo2hostent.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__addrinfo2hostent.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__addrinfo2hostent.Tpo -c -o libcares_la-ares__addrinfo2hostent.lo `test -f 'ares__addrinfo2hostent.c' || echo '$(srcdir)/'`ares__addrinfo2hostent.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__addrinfo2hostent.Tpo $(DEPDIR)/libcares_la-ares__addrinfo2hostent.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__addrinfo2hostent.c' object='libcares_la-ares__addrinfo2hostent.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) 
$(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__addrinfo2hostent.lo `test -f 'ares__addrinfo2hostent.c' || echo '$(srcdir)/'`ares__addrinfo2hostent.c libcares_la-ares__addrinfo_localhost.lo: ares__addrinfo_localhost.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__addrinfo_localhost.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__addrinfo_localhost.Tpo -c -o libcares_la-ares__addrinfo_localhost.lo `test -f 'ares__addrinfo_localhost.c' || echo '$(srcdir)/'`ares__addrinfo_localhost.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__addrinfo_localhost.Tpo $(DEPDIR)/libcares_la-ares__addrinfo_localhost.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__addrinfo_localhost.c' object='libcares_la-ares__addrinfo_localhost.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__addrinfo_localhost.lo `test -f 'ares__addrinfo_localhost.c' || echo '$(srcdir)/'`ares__addrinfo_localhost.c libcares_la-ares__close_sockets.lo: ares__close_sockets.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__close_sockets.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__close_sockets.Tpo -c -o libcares_la-ares__close_sockets.lo `test -f 'ares__close_sockets.c' || echo '$(srcdir)/'`ares__close_sockets.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__close_sockets.Tpo $(DEPDIR)/libcares_la-ares__close_sockets.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__close_sockets.c' object='libcares_la-ares__close_sockets.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__close_sockets.lo `test -f 'ares__close_sockets.c' || echo '$(srcdir)/'`ares__close_sockets.c libcares_la-ares__hosts_file.lo: ares__hosts_file.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__hosts_file.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__hosts_file.Tpo -c -o libcares_la-ares__hosts_file.lo `test -f 'ares__hosts_file.c' || echo '$(srcdir)/'`ares__hosts_file.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__hosts_file.Tpo $(DEPDIR)/libcares_la-ares__hosts_file.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__hosts_file.c' object='libcares_la-ares__hosts_file.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ 
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__hosts_file.lo `test -f 'ares__hosts_file.c' || echo '$(srcdir)/'`ares__hosts_file.c libcares_la-ares__parse_into_addrinfo.lo: ares__parse_into_addrinfo.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__parse_into_addrinfo.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Tpo -c -o libcares_la-ares__parse_into_addrinfo.lo `test -f 'ares__parse_into_addrinfo.c' || echo '$(srcdir)/'`ares__parse_into_addrinfo.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Tpo $(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__parse_into_addrinfo.c' object='libcares_la-ares__parse_into_addrinfo.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__parse_into_addrinfo.lo `test -f 'ares__parse_into_addrinfo.c' || echo '$(srcdir)/'`ares__parse_into_addrinfo.c libcares_la-ares__socket.lo: ares__socket.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__socket.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__socket.Tpo -c -o libcares_la-ares__socket.lo `test -f 'ares__socket.c' || echo '$(srcdir)/'`ares__socket.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__socket.Tpo $(DEPDIR)/libcares_la-ares__socket.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__socket.c' object='libcares_la-ares__socket.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__socket.lo `test -f 'ares__socket.c' || echo '$(srcdir)/'`ares__socket.c libcares_la-ares__sortaddrinfo.lo: ares__sortaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares__sortaddrinfo.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares__sortaddrinfo.Tpo -c -o libcares_la-ares__sortaddrinfo.lo `test -f 'ares__sortaddrinfo.c' || echo '$(srcdir)/'`ares__sortaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares__sortaddrinfo.Tpo $(DEPDIR)/libcares_la-ares__sortaddrinfo.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares__sortaddrinfo.c' object='libcares_la-ares__sortaddrinfo.lo' libtool=yes @AMDEPBACKSLASH@ 
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares__sortaddrinfo.lo `test -f 'ares__sortaddrinfo.c' || echo '$(srcdir)/'`ares__sortaddrinfo.c libcares_la-ares_android.lo: ares_android.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_android.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_android.Tpo -c -o libcares_la-ares_android.lo `test -f 'ares_android.c' || echo '$(srcdir)/'`ares_android.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_android.Tpo $(DEPDIR)/libcares_la-ares_android.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_android.c' object='libcares_la-ares_android.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_android.lo `test -f 'ares_android.c' || echo '$(srcdir)/'`ares_android.c libcares_la-ares_cancel.lo: ares_cancel.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_cancel.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_cancel.Tpo -c -o libcares_la-ares_cancel.lo `test -f 'ares_cancel.c' || echo '$(srcdir)/'`ares_cancel.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_cancel.Tpo $(DEPDIR)/libcares_la-ares_cancel.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_cancel.c' object='libcares_la-ares_cancel.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_cancel.lo `test -f 'ares_cancel.c' || echo '$(srcdir)/'`ares_cancel.c libcares_la-ares_cookie.lo: ares_cookie.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_cookie.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_cookie.Tpo -c -o libcares_la-ares_cookie.lo `test -f 'ares_cookie.c' || echo '$(srcdir)/'`ares_cookie.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_cookie.Tpo $(DEPDIR)/libcares_la-ares_cookie.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_cookie.c' object='libcares_la-ares_cookie.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC 
$(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_cookie.lo `test -f 'ares_cookie.c' || echo '$(srcdir)/'`ares_cookie.c libcares_la-ares_data.lo: ares_data.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_data.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_data.Tpo -c -o libcares_la-ares_data.lo `test -f 'ares_data.c' || echo '$(srcdir)/'`ares_data.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_data.Tpo $(DEPDIR)/libcares_la-ares_data.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_data.c' object='libcares_la-ares_data.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_data.lo `test -f 'ares_data.c' || echo '$(srcdir)/'`ares_data.c libcares_la-ares_destroy.lo: ares_destroy.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_destroy.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_destroy.Tpo -c -o libcares_la-ares_destroy.lo `test -f 'ares_destroy.c' || echo '$(srcdir)/'`ares_destroy.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_destroy.Tpo $(DEPDIR)/libcares_la-ares_destroy.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_destroy.c' object='libcares_la-ares_destroy.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_destroy.lo `test -f 'ares_destroy.c' || echo '$(srcdir)/'`ares_destroy.c libcares_la-ares_free_hostent.lo: ares_free_hostent.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_free_hostent.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_free_hostent.Tpo -c -o libcares_la-ares_free_hostent.lo `test -f 'ares_free_hostent.c' || echo '$(srcdir)/'`ares_free_hostent.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_free_hostent.Tpo $(DEPDIR)/libcares_la-ares_free_hostent.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_free_hostent.c' object='libcares_la-ares_free_hostent.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) 
$(CFLAGS) -c -o libcares_la-ares_free_hostent.lo `test -f 'ares_free_hostent.c' || echo '$(srcdir)/'`ares_free_hostent.c libcares_la-ares_free_string.lo: ares_free_string.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_free_string.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_free_string.Tpo -c -o libcares_la-ares_free_string.lo `test -f 'ares_free_string.c' || echo '$(srcdir)/'`ares_free_string.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_free_string.Tpo $(DEPDIR)/libcares_la-ares_free_string.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_free_string.c' object='libcares_la-ares_free_string.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_free_string.lo `test -f 'ares_free_string.c' || echo '$(srcdir)/'`ares_free_string.c libcares_la-ares_freeaddrinfo.lo: ares_freeaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_freeaddrinfo.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_freeaddrinfo.Tpo -c -o libcares_la-ares_freeaddrinfo.lo `test -f 'ares_freeaddrinfo.c' || echo '$(srcdir)/'`ares_freeaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_freeaddrinfo.Tpo $(DEPDIR)/libcares_la-ares_freeaddrinfo.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_freeaddrinfo.c' object='libcares_la-ares_freeaddrinfo.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_freeaddrinfo.lo `test -f 'ares_freeaddrinfo.c' || echo '$(srcdir)/'`ares_freeaddrinfo.c libcares_la-ares_getaddrinfo.lo: ares_getaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_getaddrinfo.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_getaddrinfo.Tpo -c -o libcares_la-ares_getaddrinfo.lo `test -f 'ares_getaddrinfo.c' || echo '$(srcdir)/'`ares_getaddrinfo.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_getaddrinfo.Tpo $(DEPDIR)/libcares_la-ares_getaddrinfo.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getaddrinfo.c' object='libcares_la-ares_getaddrinfo.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) 
$(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_getaddrinfo.lo `test -f 'ares_getaddrinfo.c' || echo '$(srcdir)/'`ares_getaddrinfo.c libcares_la-ares_getenv.lo: ares_getenv.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_getenv.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_getenv.Tpo -c -o libcares_la-ares_getenv.lo `test -f 'ares_getenv.c' || echo '$(srcdir)/'`ares_getenv.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_getenv.Tpo $(DEPDIR)/libcares_la-ares_getenv.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getenv.c' object='libcares_la-ares_getenv.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_getenv.lo `test -f 'ares_getenv.c' || echo '$(srcdir)/'`ares_getenv.c libcares_la-ares_gethostbyaddr.lo: ares_gethostbyaddr.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_gethostbyaddr.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_gethostbyaddr.Tpo -c -o libcares_la-ares_gethostbyaddr.lo `test -f 'ares_gethostbyaddr.c' || echo '$(srcdir)/'`ares_gethostbyaddr.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_gethostbyaddr.Tpo $(DEPDIR)/libcares_la-ares_gethostbyaddr.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_gethostbyaddr.c' object='libcares_la-ares_gethostbyaddr.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_gethostbyaddr.lo `test -f 'ares_gethostbyaddr.c' || echo '$(srcdir)/'`ares_gethostbyaddr.c libcares_la-ares_gethostbyname.lo: ares_gethostbyname.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_gethostbyname.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_gethostbyname.Tpo -c -o libcares_la-ares_gethostbyname.lo `test -f 'ares_gethostbyname.c' || echo '$(srcdir)/'`ares_gethostbyname.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_gethostbyname.Tpo $(DEPDIR)/libcares_la-ares_gethostbyname.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_gethostbyname.c' object='libcares_la-ares_gethostbyname.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) 
$(CFLAGS) -c -o libcares_la-ares_gethostbyname.lo `test -f 'ares_gethostbyname.c' || echo '$(srcdir)/'`ares_gethostbyname.c libcares_la-ares_getnameinfo.lo: ares_getnameinfo.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_getnameinfo.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_getnameinfo.Tpo -c -o libcares_la-ares_getnameinfo.lo `test -f 'ares_getnameinfo.c' || echo '$(srcdir)/'`ares_getnameinfo.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_getnameinfo.Tpo $(DEPDIR)/libcares_la-ares_getnameinfo.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getnameinfo.c' object='libcares_la-ares_getnameinfo.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_getnameinfo.lo `test -f 'ares_getnameinfo.c' || echo '$(srcdir)/'`ares_getnameinfo.c libcares_la-ares_init.lo: ares_init.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_init.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_init.Tpo -c -o libcares_la-ares_init.lo `test -f 'ares_init.c' || echo '$(srcdir)/'`ares_init.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_init.Tpo $(DEPDIR)/libcares_la-ares_init.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_init.c' object='libcares_la-ares_init.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_init.lo `test -f 'ares_init.c' || echo '$(srcdir)/'`ares_init.c libcares_la-ares_library_init.lo: ares_library_init.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_library_init.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_library_init.Tpo -c -o libcares_la-ares_library_init.lo `test -f 'ares_library_init.c' || echo '$(srcdir)/'`ares_library_init.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_library_init.Tpo $(DEPDIR)/libcares_la-ares_library_init.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_library_init.c' object='libcares_la-ares_library_init.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_library_init.lo `test -f 'ares_library_init.c' || 
echo '$(srcdir)/'`ares_library_init.c libcares_la-ares_metrics.lo: ares_metrics.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_metrics.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_metrics.Tpo -c -o libcares_la-ares_metrics.lo `test -f 'ares_metrics.c' || echo '$(srcdir)/'`ares_metrics.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_metrics.Tpo $(DEPDIR)/libcares_la-ares_metrics.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_metrics.c' object='libcares_la-ares_metrics.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_metrics.lo `test -f 'ares_metrics.c' || echo '$(srcdir)/'`ares_metrics.c libcares_la-ares_options.lo: ares_options.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_options.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_options.Tpo -c -o libcares_la-ares_options.lo `test -f 'ares_options.c' || echo '$(srcdir)/'`ares_options.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_options.Tpo $(DEPDIR)/libcares_la-ares_options.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_options.c' object='libcares_la-ares_options.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_options.lo `test -f 'ares_options.c' || echo '$(srcdir)/'`ares_options.c libcares_la-ares_platform.lo: ares_platform.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_platform.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_platform.Tpo -c -o libcares_la-ares_platform.lo `test -f 'ares_platform.c' || echo '$(srcdir)/'`ares_platform.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_platform.Tpo $(DEPDIR)/libcares_la-ares_platform.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_platform.c' object='libcares_la-ares_platform.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_platform.lo `test -f 'ares_platform.c' || echo '$(srcdir)/'`ares_platform.c libcares_la-ares_process.lo: ares_process.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC 
$(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_process.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_process.Tpo -c -o libcares_la-ares_process.lo `test -f 'ares_process.c' || echo '$(srcdir)/'`ares_process.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_process.Tpo $(DEPDIR)/libcares_la-ares_process.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_process.c' object='libcares_la-ares_process.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_process.lo `test -f 'ares_process.c' || echo '$(srcdir)/'`ares_process.c libcares_la-ares_qcache.lo: ares_qcache.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_qcache.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_qcache.Tpo -c -o libcares_la-ares_qcache.lo `test -f 'ares_qcache.c' || echo '$(srcdir)/'`ares_qcache.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_qcache.Tpo $(DEPDIR)/libcares_la-ares_qcache.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_qcache.c' object='libcares_la-ares_qcache.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_qcache.lo `test -f 'ares_qcache.c' || echo '$(srcdir)/'`ares_qcache.c libcares_la-ares_query.lo: ares_query.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_query.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_query.Tpo -c -o libcares_la-ares_query.lo `test -f 'ares_query.c' || echo '$(srcdir)/'`ares_query.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_query.Tpo $(DEPDIR)/libcares_la-ares_query.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_query.c' object='libcares_la-ares_query.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_query.lo `test -f 'ares_query.c' || echo '$(srcdir)/'`ares_query.c libcares_la-ares_search.lo: ares_search.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_search.lo -MD -MP -MF 
$(DEPDIR)/libcares_la-ares_search.Tpo -c -o libcares_la-ares_search.lo `test -f 'ares_search.c' || echo '$(srcdir)/'`ares_search.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_search.Tpo $(DEPDIR)/libcares_la-ares_search.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_search.c' object='libcares_la-ares_search.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_search.lo `test -f 'ares_search.c' || echo '$(srcdir)/'`ares_search.c libcares_la-ares_send.lo: ares_send.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_send.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_send.Tpo -c -o libcares_la-ares_send.lo `test -f 'ares_send.c' || echo '$(srcdir)/'`ares_send.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_send.Tpo $(DEPDIR)/libcares_la-ares_send.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_send.c' object='libcares_la-ares_send.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_send.lo `test -f 'ares_send.c' || echo '$(srcdir)/'`ares_send.c libcares_la-ares_strerror.lo: ares_strerror.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_strerror.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_strerror.Tpo -c -o libcares_la-ares_strerror.lo `test -f 'ares_strerror.c' || echo '$(srcdir)/'`ares_strerror.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_strerror.Tpo $(DEPDIR)/libcares_la-ares_strerror.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_strerror.c' object='libcares_la-ares_strerror.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_strerror.lo `test -f 'ares_strerror.c' || echo '$(srcdir)/'`ares_strerror.c libcares_la-ares_sysconfig.lo: ares_sysconfig.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_sysconfig.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_sysconfig.Tpo -c -o libcares_la-ares_sysconfig.lo `test -f 'ares_sysconfig.c' || echo '$(srcdir)/'`ares_sysconfig.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) 
$(DEPDIR)/libcares_la-ares_sysconfig.Tpo $(DEPDIR)/libcares_la-ares_sysconfig.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_sysconfig.c' object='libcares_la-ares_sysconfig.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_sysconfig.lo `test -f 'ares_sysconfig.c' || echo '$(srcdir)/'`ares_sysconfig.c libcares_la-ares_sysconfig_files.lo: ares_sysconfig_files.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_sysconfig_files.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_sysconfig_files.Tpo -c -o libcares_la-ares_sysconfig_files.lo `test -f 'ares_sysconfig_files.c' || echo '$(srcdir)/'`ares_sysconfig_files.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_sysconfig_files.Tpo $(DEPDIR)/libcares_la-ares_sysconfig_files.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_sysconfig_files.c' object='libcares_la-ares_sysconfig_files.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_sysconfig_files.lo `test -f 'ares_sysconfig_files.c' || echo '$(srcdir)/'`ares_sysconfig_files.c libcares_la-ares_sysconfig_mac.lo: ares_sysconfig_mac.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_sysconfig_mac.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_sysconfig_mac.Tpo -c -o libcares_la-ares_sysconfig_mac.lo `test -f 'ares_sysconfig_mac.c' || echo '$(srcdir)/'`ares_sysconfig_mac.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_sysconfig_mac.Tpo $(DEPDIR)/libcares_la-ares_sysconfig_mac.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_sysconfig_mac.c' object='libcares_la-ares_sysconfig_mac.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_sysconfig_mac.lo `test -f 'ares_sysconfig_mac.c' || echo '$(srcdir)/'`ares_sysconfig_mac.c libcares_la-ares_sysconfig_win.lo: ares_sysconfig_win.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_sysconfig_win.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_sysconfig_win.Tpo -c -o libcares_la-ares_sysconfig_win.lo `test -f 'ares_sysconfig_win.c' 
|| echo '$(srcdir)/'`ares_sysconfig_win.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_sysconfig_win.Tpo $(DEPDIR)/libcares_la-ares_sysconfig_win.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_sysconfig_win.c' object='libcares_la-ares_sysconfig_win.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_sysconfig_win.lo `test -f 'ares_sysconfig_win.c' || echo '$(srcdir)/'`ares_sysconfig_win.c libcares_la-ares_timeout.lo: ares_timeout.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_timeout.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_timeout.Tpo -c -o libcares_la-ares_timeout.lo `test -f 'ares_timeout.c' || echo '$(srcdir)/'`ares_timeout.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_timeout.Tpo $(DEPDIR)/libcares_la-ares_timeout.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_timeout.c' object='libcares_la-ares_timeout.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_timeout.lo `test -f 'ares_timeout.c' || echo '$(srcdir)/'`ares_timeout.c libcares_la-ares_update_servers.lo: ares_update_servers.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_update_servers.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_update_servers.Tpo -c -o libcares_la-ares_update_servers.lo `test -f 'ares_update_servers.c' || echo '$(srcdir)/'`ares_update_servers.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_update_servers.Tpo $(DEPDIR)/libcares_la-ares_update_servers.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_update_servers.c' object='libcares_la-ares_update_servers.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_update_servers.lo `test -f 'ares_update_servers.c' || echo '$(srcdir)/'`ares_update_servers.c libcares_la-ares_version.lo: ares_version.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-ares_version.lo -MD -MP -MF $(DEPDIR)/libcares_la-ares_version.Tpo -c -o libcares_la-ares_version.lo `test -f 'ares_version.c' || echo 
'$(srcdir)/'`ares_version.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-ares_version.Tpo $(DEPDIR)/libcares_la-ares_version.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_version.c' object='libcares_la-ares_version.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-ares_version.lo `test -f 'ares_version.c' || echo '$(srcdir)/'`ares_version.c libcares_la-inet_net_pton.lo: inet_net_pton.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-inet_net_pton.lo -MD -MP -MF $(DEPDIR)/libcares_la-inet_net_pton.Tpo -c -o libcares_la-inet_net_pton.lo `test -f 'inet_net_pton.c' || echo '$(srcdir)/'`inet_net_pton.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-inet_net_pton.Tpo $(DEPDIR)/libcares_la-inet_net_pton.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='inet_net_pton.c' object='libcares_la-inet_net_pton.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-inet_net_pton.lo `test -f 'inet_net_pton.c' || echo '$(srcdir)/'`inet_net_pton.c libcares_la-inet_ntop.lo: inet_ntop.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-inet_ntop.lo -MD -MP -MF $(DEPDIR)/libcares_la-inet_ntop.Tpo -c -o libcares_la-inet_ntop.lo `test -f 'inet_ntop.c' || echo '$(srcdir)/'`inet_ntop.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-inet_ntop.Tpo $(DEPDIR)/libcares_la-inet_ntop.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='inet_ntop.c' object='libcares_la-inet_ntop.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-inet_ntop.lo `test -f 'inet_ntop.c' || echo '$(srcdir)/'`inet_ntop.c libcares_la-windows_port.lo: windows_port.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT libcares_la-windows_port.lo -MD -MP -MF $(DEPDIR)/libcares_la-windows_port.Tpo -c -o libcares_la-windows_port.lo `test -f 'windows_port.c' || echo '$(srcdir)/'`windows_port.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/libcares_la-windows_port.Tpo $(DEPDIR)/libcares_la-windows_port.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ 
$(AM_V_CC)source='windows_port.c' object='libcares_la-windows_port.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o libcares_la-windows_port.lo `test -f 'windows_port.c' || echo '$(srcdir)/'`windows_port.c dsa/libcares_la-ares__array.lo: dsa/ares__array.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__array.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__array.Tpo -c -o dsa/libcares_la-ares__array.lo `test -f 'dsa/ares__array.c' || echo '$(srcdir)/'`dsa/ares__array.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__array.Tpo dsa/$(DEPDIR)/libcares_la-ares__array.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__array.c' object='dsa/libcares_la-ares__array.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__array.lo `test -f 'dsa/ares__array.c' || echo '$(srcdir)/'`dsa/ares__array.c dsa/libcares_la-ares__htable.lo: dsa/ares__htable.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__htable.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__htable.Tpo -c -o dsa/libcares_la-ares__htable.lo `test -f 'dsa/ares__htable.c' || echo '$(srcdir)/'`dsa/ares__htable.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__htable.Tpo dsa/$(DEPDIR)/libcares_la-ares__htable.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__htable.c' object='dsa/libcares_la-ares__htable.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__htable.lo `test -f 'dsa/ares__htable.c' || echo '$(srcdir)/'`dsa/ares__htable.c dsa/libcares_la-ares__htable_asvp.lo: dsa/ares__htable_asvp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__htable_asvp.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Tpo -c -o dsa/libcares_la-ares__htable_asvp.lo `test -f 'dsa/ares__htable_asvp.c' || echo '$(srcdir)/'`dsa/ares__htable_asvp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Tpo dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Plo 
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__htable_asvp.c' object='dsa/libcares_la-ares__htable_asvp.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__htable_asvp.lo `test -f 'dsa/ares__htable_asvp.c' || echo '$(srcdir)/'`dsa/ares__htable_asvp.c dsa/libcares_la-ares__htable_strvp.lo: dsa/ares__htable_strvp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__htable_strvp.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Tpo -c -o dsa/libcares_la-ares__htable_strvp.lo `test -f 'dsa/ares__htable_strvp.c' || echo '$(srcdir)/'`dsa/ares__htable_strvp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Tpo dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__htable_strvp.c' object='dsa/libcares_la-ares__htable_strvp.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__htable_strvp.lo `test -f 'dsa/ares__htable_strvp.c' || echo '$(srcdir)/'`dsa/ares__htable_strvp.c dsa/libcares_la-ares__htable_szvp.lo: dsa/ares__htable_szvp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__htable_szvp.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Tpo -c -o dsa/libcares_la-ares__htable_szvp.lo `test -f 'dsa/ares__htable_szvp.c' || echo '$(srcdir)/'`dsa/ares__htable_szvp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Tpo dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__htable_szvp.c' object='dsa/libcares_la-ares__htable_szvp.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__htable_szvp.lo `test -f 'dsa/ares__htable_szvp.c' || echo '$(srcdir)/'`dsa/ares__htable_szvp.c dsa/libcares_la-ares__htable_vpvp.lo: dsa/ares__htable_vpvp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__htable_vpvp.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Tpo -c -o 
dsa/libcares_la-ares__htable_vpvp.lo `test -f 'dsa/ares__htable_vpvp.c' || echo '$(srcdir)/'`dsa/ares__htable_vpvp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Tpo dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__htable_vpvp.c' object='dsa/libcares_la-ares__htable_vpvp.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__htable_vpvp.lo `test -f 'dsa/ares__htable_vpvp.c' || echo '$(srcdir)/'`dsa/ares__htable_vpvp.c dsa/libcares_la-ares__llist.lo: dsa/ares__llist.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__llist.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__llist.Tpo -c -o dsa/libcares_la-ares__llist.lo `test -f 'dsa/ares__llist.c' || echo '$(srcdir)/'`dsa/ares__llist.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__llist.Tpo dsa/$(DEPDIR)/libcares_la-ares__llist.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__llist.c' object='dsa/libcares_la-ares__llist.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__llist.lo `test -f 'dsa/ares__llist.c' || echo '$(srcdir)/'`dsa/ares__llist.c dsa/libcares_la-ares__slist.lo: dsa/ares__slist.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT dsa/libcares_la-ares__slist.lo -MD -MP -MF dsa/$(DEPDIR)/libcares_la-ares__slist.Tpo -c -o dsa/libcares_la-ares__slist.lo `test -f 'dsa/ares__slist.c' || echo '$(srcdir)/'`dsa/ares__slist.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) dsa/$(DEPDIR)/libcares_la-ares__slist.Tpo dsa/$(DEPDIR)/libcares_la-ares__slist.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='dsa/ares__slist.c' object='dsa/libcares_la-ares__slist.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o dsa/libcares_la-ares__slist.lo `test -f 'dsa/ares__slist.c' || echo '$(srcdir)/'`dsa/ares__slist.c event/libcares_la-ares_event_configchg.lo: event/ares_event_configchg.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_configchg.lo -MD -MP -MF 
event/$(DEPDIR)/libcares_la-ares_event_configchg.Tpo -c -o event/libcares_la-ares_event_configchg.lo `test -f 'event/ares_event_configchg.c' || echo '$(srcdir)/'`event/ares_event_configchg.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_configchg.Tpo event/$(DEPDIR)/libcares_la-ares_event_configchg.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_configchg.c' object='event/libcares_la-ares_event_configchg.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_configchg.lo `test -f 'event/ares_event_configchg.c' || echo '$(srcdir)/'`event/ares_event_configchg.c event/libcares_la-ares_event_epoll.lo: event/ares_event_epoll.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_epoll.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_epoll.Tpo -c -o event/libcares_la-ares_event_epoll.lo `test -f 'event/ares_event_epoll.c' || echo '$(srcdir)/'`event/ares_event_epoll.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_epoll.Tpo event/$(DEPDIR)/libcares_la-ares_event_epoll.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_epoll.c' object='event/libcares_la-ares_event_epoll.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_epoll.lo `test -f 'event/ares_event_epoll.c' || echo '$(srcdir)/'`event/ares_event_epoll.c event/libcares_la-ares_event_kqueue.lo: event/ares_event_kqueue.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_kqueue.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_kqueue.Tpo -c -o event/libcares_la-ares_event_kqueue.lo `test -f 'event/ares_event_kqueue.c' || echo '$(srcdir)/'`event/ares_event_kqueue.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_kqueue.Tpo event/$(DEPDIR)/libcares_la-ares_event_kqueue.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_kqueue.c' object='event/libcares_la-ares_event_kqueue.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_kqueue.lo `test -f 'event/ares_event_kqueue.c' || echo '$(srcdir)/'`event/ares_event_kqueue.c 
event/libcares_la-ares_event_poll.lo: event/ares_event_poll.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_poll.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_poll.Tpo -c -o event/libcares_la-ares_event_poll.lo `test -f 'event/ares_event_poll.c' || echo '$(srcdir)/'`event/ares_event_poll.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_poll.Tpo event/$(DEPDIR)/libcares_la-ares_event_poll.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_poll.c' object='event/libcares_la-ares_event_poll.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_poll.lo `test -f 'event/ares_event_poll.c' || echo '$(srcdir)/'`event/ares_event_poll.c event/libcares_la-ares_event_select.lo: event/ares_event_select.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_select.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_select.Tpo -c -o event/libcares_la-ares_event_select.lo `test -f 'event/ares_event_select.c' || echo '$(srcdir)/'`event/ares_event_select.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_select.Tpo event/$(DEPDIR)/libcares_la-ares_event_select.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_select.c' object='event/libcares_la-ares_event_select.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_select.lo `test -f 'event/ares_event_select.c' || echo '$(srcdir)/'`event/ares_event_select.c event/libcares_la-ares_event_thread.lo: event/ares_event_thread.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_thread.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_thread.Tpo -c -o event/libcares_la-ares_event_thread.lo `test -f 'event/ares_event_thread.c' || echo '$(srcdir)/'`event/ares_event_thread.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_thread.Tpo event/$(DEPDIR)/libcares_la-ares_event_thread.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_thread.c' object='event/libcares_la-ares_event_thread.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) 
$(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_thread.lo `test -f 'event/ares_event_thread.c' || echo '$(srcdir)/'`event/ares_event_thread.c event/libcares_la-ares_event_wake_pipe.lo: event/ares_event_wake_pipe.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_wake_pipe.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Tpo -c -o event/libcares_la-ares_event_wake_pipe.lo `test -f 'event/ares_event_wake_pipe.c' || echo '$(srcdir)/'`event/ares_event_wake_pipe.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Tpo event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_wake_pipe.c' object='event/libcares_la-ares_event_wake_pipe.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_wake_pipe.lo `test -f 'event/ares_event_wake_pipe.c' || echo '$(srcdir)/'`event/ares_event_wake_pipe.c event/libcares_la-ares_event_win32.lo: event/ares_event_win32.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT event/libcares_la-ares_event_win32.lo -MD -MP -MF event/$(DEPDIR)/libcares_la-ares_event_win32.Tpo -c -o event/libcares_la-ares_event_win32.lo `test -f 'event/ares_event_win32.c' || echo '$(srcdir)/'`event/ares_event_win32.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) event/$(DEPDIR)/libcares_la-ares_event_win32.Tpo event/$(DEPDIR)/libcares_la-ares_event_win32.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='event/ares_event_win32.c' object='event/libcares_la-ares_event_win32.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o event/libcares_la-ares_event_win32.lo `test -f 'event/ares_event_win32.c' || echo '$(srcdir)/'`event/ares_event_win32.c legacy/libcares_la-ares_create_query.lo: legacy/ares_create_query.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_create_query.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_create_query.Tpo -c -o legacy/libcares_la-ares_create_query.lo `test -f 'legacy/ares_create_query.c' || echo '$(srcdir)/'`legacy/ares_create_query.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_create_query.Tpo legacy/$(DEPDIR)/libcares_la-ares_create_query.Plo 
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_create_query.c' object='legacy/libcares_la-ares_create_query.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_create_query.lo `test -f 'legacy/ares_create_query.c' || echo '$(srcdir)/'`legacy/ares_create_query.c legacy/libcares_la-ares_expand_name.lo: legacy/ares_expand_name.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_expand_name.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_expand_name.Tpo -c -o legacy/libcares_la-ares_expand_name.lo `test -f 'legacy/ares_expand_name.c' || echo '$(srcdir)/'`legacy/ares_expand_name.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_expand_name.Tpo legacy/$(DEPDIR)/libcares_la-ares_expand_name.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_expand_name.c' object='legacy/libcares_la-ares_expand_name.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_expand_name.lo `test -f 'legacy/ares_expand_name.c' || echo '$(srcdir)/'`legacy/ares_expand_name.c legacy/libcares_la-ares_expand_string.lo: legacy/ares_expand_string.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_expand_string.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_expand_string.Tpo -c -o legacy/libcares_la-ares_expand_string.lo `test -f 'legacy/ares_expand_string.c' || echo '$(srcdir)/'`legacy/ares_expand_string.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_expand_string.Tpo legacy/$(DEPDIR)/libcares_la-ares_expand_string.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_expand_string.c' object='legacy/libcares_la-ares_expand_string.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_expand_string.lo `test -f 'legacy/ares_expand_string.c' || echo '$(srcdir)/'`legacy/ares_expand_string.c legacy/libcares_la-ares_fds.lo: legacy/ares_fds.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_fds.lo -MD -MP -MF 
legacy/$(DEPDIR)/libcares_la-ares_fds.Tpo -c -o legacy/libcares_la-ares_fds.lo `test -f 'legacy/ares_fds.c' || echo '$(srcdir)/'`legacy/ares_fds.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_fds.Tpo legacy/$(DEPDIR)/libcares_la-ares_fds.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_fds.c' object='legacy/libcares_la-ares_fds.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_fds.lo `test -f 'legacy/ares_fds.c' || echo '$(srcdir)/'`legacy/ares_fds.c legacy/libcares_la-ares_getsock.lo: legacy/ares_getsock.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_getsock.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_getsock.Tpo -c -o legacy/libcares_la-ares_getsock.lo `test -f 'legacy/ares_getsock.c' || echo '$(srcdir)/'`legacy/ares_getsock.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_getsock.Tpo legacy/$(DEPDIR)/libcares_la-ares_getsock.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_getsock.c' object='legacy/libcares_la-ares_getsock.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_getsock.lo `test -f 'legacy/ares_getsock.c' || echo '$(srcdir)/'`legacy/ares_getsock.c legacy/libcares_la-ares_parse_a_reply.lo: legacy/ares_parse_a_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_a_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Tpo -c -o legacy/libcares_la-ares_parse_a_reply.lo `test -f 'legacy/ares_parse_a_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_a_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_a_reply.c' object='legacy/libcares_la-ares_parse_a_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_a_reply.lo `test -f 'legacy/ares_parse_a_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_a_reply.c legacy/libcares_la-ares_parse_aaaa_reply.lo: legacy/ares_parse_aaaa_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) 
$(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_aaaa_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Tpo -c -o legacy/libcares_la-ares_parse_aaaa_reply.lo `test -f 'legacy/ares_parse_aaaa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_aaaa_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_aaaa_reply.c' object='legacy/libcares_la-ares_parse_aaaa_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_aaaa_reply.lo `test -f 'legacy/ares_parse_aaaa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_aaaa_reply.c legacy/libcares_la-ares_parse_caa_reply.lo: legacy/ares_parse_caa_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_caa_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Tpo -c -o legacy/libcares_la-ares_parse_caa_reply.lo `test -f 'legacy/ares_parse_caa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_caa_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_caa_reply.c' object='legacy/libcares_la-ares_parse_caa_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_caa_reply.lo `test -f 'legacy/ares_parse_caa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_caa_reply.c legacy/libcares_la-ares_parse_mx_reply.lo: legacy/ares_parse_mx_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_mx_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Tpo -c -o legacy/libcares_la-ares_parse_mx_reply.lo `test -f 'legacy/ares_parse_mx_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_mx_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_mx_reply.c' object='legacy/libcares_la-ares_parse_mx_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC 
$(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_mx_reply.lo `test -f 'legacy/ares_parse_mx_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_mx_reply.c legacy/libcares_la-ares_parse_naptr_reply.lo: legacy/ares_parse_naptr_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_naptr_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Tpo -c -o legacy/libcares_la-ares_parse_naptr_reply.lo `test -f 'legacy/ares_parse_naptr_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_naptr_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_naptr_reply.c' object='legacy/libcares_la-ares_parse_naptr_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_naptr_reply.lo `test -f 'legacy/ares_parse_naptr_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_naptr_reply.c legacy/libcares_la-ares_parse_ns_reply.lo: legacy/ares_parse_ns_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_ns_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Tpo -c -o legacy/libcares_la-ares_parse_ns_reply.lo `test -f 'legacy/ares_parse_ns_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_ns_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_ns_reply.c' object='legacy/libcares_la-ares_parse_ns_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_ns_reply.lo `test -f 'legacy/ares_parse_ns_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_ns_reply.c legacy/libcares_la-ares_parse_ptr_reply.lo: legacy/ares_parse_ptr_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_ptr_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Tpo -c -o legacy/libcares_la-ares_parse_ptr_reply.lo `test -f 'legacy/ares_parse_ptr_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_ptr_reply.c 
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_ptr_reply.c' object='legacy/libcares_la-ares_parse_ptr_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_ptr_reply.lo `test -f 'legacy/ares_parse_ptr_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_ptr_reply.c legacy/libcares_la-ares_parse_soa_reply.lo: legacy/ares_parse_soa_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_soa_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Tpo -c -o legacy/libcares_la-ares_parse_soa_reply.lo `test -f 'legacy/ares_parse_soa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_soa_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_soa_reply.c' object='legacy/libcares_la-ares_parse_soa_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_soa_reply.lo `test -f 'legacy/ares_parse_soa_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_soa_reply.c legacy/libcares_la-ares_parse_srv_reply.lo: legacy/ares_parse_srv_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_srv_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Tpo -c -o legacy/libcares_la-ares_parse_srv_reply.lo `test -f 'legacy/ares_parse_srv_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_srv_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_srv_reply.c' object='legacy/libcares_la-ares_parse_srv_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_srv_reply.lo `test -f 'legacy/ares_parse_srv_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_srv_reply.c legacy/libcares_la-ares_parse_txt_reply.lo: legacy/ares_parse_txt_reply.c @am__fastdepCC_TRUE@ 
$(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_txt_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Tpo -c -o legacy/libcares_la-ares_parse_txt_reply.lo `test -f 'legacy/ares_parse_txt_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_txt_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_txt_reply.c' object='legacy/libcares_la-ares_parse_txt_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_txt_reply.lo `test -f 'legacy/ares_parse_txt_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_txt_reply.c legacy/libcares_la-ares_parse_uri_reply.lo: legacy/ares_parse_uri_reply.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT legacy/libcares_la-ares_parse_uri_reply.lo -MD -MP -MF legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Tpo -c -o legacy/libcares_la-ares_parse_uri_reply.lo `test -f 'legacy/ares_parse_uri_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_uri_reply.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Tpo legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='legacy/ares_parse_uri_reply.c' object='legacy/libcares_la-ares_parse_uri_reply.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o legacy/libcares_la-ares_parse_uri_reply.lo `test -f 'legacy/ares_parse_uri_reply.c' || echo '$(srcdir)/'`legacy/ares_parse_uri_reply.c record/libcares_la-ares_dns_mapping.lo: record/ares_dns_mapping.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_mapping.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_mapping.Tpo -c -o record/libcares_la-ares_dns_mapping.lo `test -f 'record/ares_dns_mapping.c' || echo '$(srcdir)/'`record/ares_dns_mapping.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_mapping.Tpo record/$(DEPDIR)/libcares_la-ares_dns_mapping.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_mapping.c' object='record/libcares_la-ares_dns_mapping.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) 
$(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_mapping.lo `test -f 'record/ares_dns_mapping.c' || echo '$(srcdir)/'`record/ares_dns_mapping.c record/libcares_la-ares_dns_multistring.lo: record/ares_dns_multistring.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_multistring.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_multistring.Tpo -c -o record/libcares_la-ares_dns_multistring.lo `test -f 'record/ares_dns_multistring.c' || echo '$(srcdir)/'`record/ares_dns_multistring.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_multistring.Tpo record/$(DEPDIR)/libcares_la-ares_dns_multistring.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_multistring.c' object='record/libcares_la-ares_dns_multistring.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_multistring.lo `test -f 'record/ares_dns_multistring.c' || echo '$(srcdir)/'`record/ares_dns_multistring.c record/libcares_la-ares_dns_name.lo: record/ares_dns_name.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_name.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_name.Tpo -c -o record/libcares_la-ares_dns_name.lo `test -f 'record/ares_dns_name.c' || echo '$(srcdir)/'`record/ares_dns_name.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_name.Tpo record/$(DEPDIR)/libcares_la-ares_dns_name.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_name.c' object='record/libcares_la-ares_dns_name.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_name.lo `test -f 'record/ares_dns_name.c' || echo '$(srcdir)/'`record/ares_dns_name.c record/libcares_la-ares_dns_parse.lo: record/ares_dns_parse.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_parse.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_parse.Tpo -c -o record/libcares_la-ares_dns_parse.lo `test -f 'record/ares_dns_parse.c' || echo '$(srcdir)/'`record/ares_dns_parse.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_parse.Tpo record/$(DEPDIR)/libcares_la-ares_dns_parse.Plo 
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_parse.c' object='record/libcares_la-ares_dns_parse.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_parse.lo `test -f 'record/ares_dns_parse.c' || echo '$(srcdir)/'`record/ares_dns_parse.c record/libcares_la-ares_dns_record.lo: record/ares_dns_record.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_record.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_record.Tpo -c -o record/libcares_la-ares_dns_record.lo `test -f 'record/ares_dns_record.c' || echo '$(srcdir)/'`record/ares_dns_record.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_record.Tpo record/$(DEPDIR)/libcares_la-ares_dns_record.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_record.c' object='record/libcares_la-ares_dns_record.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_record.lo `test -f 'record/ares_dns_record.c' || echo '$(srcdir)/'`record/ares_dns_record.c record/libcares_la-ares_dns_write.lo: record/ares_dns_write.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT record/libcares_la-ares_dns_write.lo -MD -MP -MF record/$(DEPDIR)/libcares_la-ares_dns_write.Tpo -c -o record/libcares_la-ares_dns_write.lo `test -f 'record/ares_dns_write.c' || echo '$(srcdir)/'`record/ares_dns_write.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) record/$(DEPDIR)/libcares_la-ares_dns_write.Tpo record/$(DEPDIR)/libcares_la-ares_dns_write.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='record/ares_dns_write.c' object='record/libcares_la-ares_dns_write.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o record/libcares_la-ares_dns_write.lo `test -f 'record/ares_dns_write.c' || echo '$(srcdir)/'`record/ares_dns_write.c str/libcares_la-ares__buf.lo: str/ares__buf.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT str/libcares_la-ares__buf.lo -MD -MP -MF str/$(DEPDIR)/libcares_la-ares__buf.Tpo -c -o str/libcares_la-ares__buf.lo `test -f 'str/ares__buf.c' || echo 
'$(srcdir)/'`str/ares__buf.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) str/$(DEPDIR)/libcares_la-ares__buf.Tpo str/$(DEPDIR)/libcares_la-ares__buf.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='str/ares__buf.c' object='str/libcares_la-ares__buf.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o str/libcares_la-ares__buf.lo `test -f 'str/ares__buf.c' || echo '$(srcdir)/'`str/ares__buf.c str/libcares_la-ares_strcasecmp.lo: str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT str/libcares_la-ares_strcasecmp.lo -MD -MP -MF str/$(DEPDIR)/libcares_la-ares_strcasecmp.Tpo -c -o str/libcares_la-ares_strcasecmp.lo `test -f 'str/ares_strcasecmp.c' || echo '$(srcdir)/'`str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) str/$(DEPDIR)/libcares_la-ares_strcasecmp.Tpo str/$(DEPDIR)/libcares_la-ares_strcasecmp.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='str/ares_strcasecmp.c' object='str/libcares_la-ares_strcasecmp.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o str/libcares_la-ares_strcasecmp.lo `test -f 'str/ares_strcasecmp.c' || echo '$(srcdir)/'`str/ares_strcasecmp.c str/libcares_la-ares_str.lo: str/ares_str.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT str/libcares_la-ares_str.lo -MD -MP -MF str/$(DEPDIR)/libcares_la-ares_str.Tpo -c -o str/libcares_la-ares_str.lo `test -f 'str/ares_str.c' || echo '$(srcdir)/'`str/ares_str.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) str/$(DEPDIR)/libcares_la-ares_str.Tpo str/$(DEPDIR)/libcares_la-ares_str.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='str/ares_str.c' object='str/libcares_la-ares_str.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o str/libcares_la-ares_str.lo `test -f 'str/ares_str.c' || echo '$(srcdir)/'`str/ares_str.c str/libcares_la-ares_strsplit.lo: str/ares_strsplit.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT str/libcares_la-ares_strsplit.lo -MD -MP -MF str/$(DEPDIR)/libcares_la-ares_strsplit.Tpo -c -o str/libcares_la-ares_strsplit.lo `test -f 'str/ares_strsplit.c' || echo '$(srcdir)/'`str/ares_strsplit.c 
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) str/$(DEPDIR)/libcares_la-ares_strsplit.Tpo str/$(DEPDIR)/libcares_la-ares_strsplit.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='str/ares_strsplit.c' object='str/libcares_la-ares_strsplit.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o str/libcares_la-ares_strsplit.lo `test -f 'str/ares_strsplit.c' || echo '$(srcdir)/'`str/ares_strsplit.c util/libcares_la-ares__iface_ips.lo: util/ares__iface_ips.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT util/libcares_la-ares__iface_ips.lo -MD -MP -MF util/$(DEPDIR)/libcares_la-ares__iface_ips.Tpo -c -o util/libcares_la-ares__iface_ips.lo `test -f 'util/ares__iface_ips.c' || echo '$(srcdir)/'`util/ares__iface_ips.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) util/$(DEPDIR)/libcares_la-ares__iface_ips.Tpo util/$(DEPDIR)/libcares_la-ares__iface_ips.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='util/ares__iface_ips.c' object='util/libcares_la-ares__iface_ips.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o util/libcares_la-ares__iface_ips.lo `test -f 'util/ares__iface_ips.c' || echo '$(srcdir)/'`util/ares__iface_ips.c util/libcares_la-ares__threads.lo: util/ares__threads.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT util/libcares_la-ares__threads.lo -MD -MP -MF util/$(DEPDIR)/libcares_la-ares__threads.Tpo -c -o util/libcares_la-ares__threads.lo `test -f 'util/ares__threads.c' || echo '$(srcdir)/'`util/ares__threads.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) util/$(DEPDIR)/libcares_la-ares__threads.Tpo util/$(DEPDIR)/libcares_la-ares__threads.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='util/ares__threads.c' object='util/libcares_la-ares__threads.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o util/libcares_la-ares__threads.lo `test -f 'util/ares__threads.c' || echo '$(srcdir)/'`util/ares__threads.c util/libcares_la-ares__timeval.lo: util/ares__timeval.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT util/libcares_la-ares__timeval.lo -MD -MP -MF util/$(DEPDIR)/libcares_la-ares__timeval.Tpo -c -o 
util/libcares_la-ares__timeval.lo `test -f 'util/ares__timeval.c' || echo '$(srcdir)/'`util/ares__timeval.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) util/$(DEPDIR)/libcares_la-ares__timeval.Tpo util/$(DEPDIR)/libcares_la-ares__timeval.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='util/ares__timeval.c' object='util/libcares_la-ares__timeval.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o util/libcares_la-ares__timeval.lo `test -f 'util/ares__timeval.c' || echo '$(srcdir)/'`util/ares__timeval.c util/libcares_la-ares_math.lo: util/ares_math.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT util/libcares_la-ares_math.lo -MD -MP -MF util/$(DEPDIR)/libcares_la-ares_math.Tpo -c -o util/libcares_la-ares_math.lo `test -f 'util/ares_math.c' || echo '$(srcdir)/'`util/ares_math.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) util/$(DEPDIR)/libcares_la-ares_math.Tpo util/$(DEPDIR)/libcares_la-ares_math.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='util/ares_math.c' object='util/libcares_la-ares_math.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o util/libcares_la-ares_math.lo `test -f 'util/ares_math.c' || echo '$(srcdir)/'`util/ares_math.c util/libcares_la-ares_rand.lo: util/ares_rand.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -MT util/libcares_la-ares_rand.lo -MD -MP -MF util/$(DEPDIR)/libcares_la-ares_rand.Tpo -c -o util/libcares_la-ares_rand.lo `test -f 'util/ares_rand.c' || echo '$(srcdir)/'`util/ares_rand.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) util/$(DEPDIR)/libcares_la-ares_rand.Tpo util/$(DEPDIR)/libcares_la-ares_rand.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='util/ares_rand.c' object='util/libcares_la-ares_rand.lo' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libcares_la_CPPFLAGS) $(CPPFLAGS) $(libcares_la_CFLAGS) $(CFLAGS) -c -o util/libcares_la-ares_rand.lo `test -f 'util/ares_rand.c' || echo '$(srcdir)/'`util/ares_rand.c mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs -rm -rf dsa/.libs dsa/_libs -rm -rf event/.libs event/_libs -rm -rf legacy/.libs legacy/_libs -rm -rf record/.libs record/_libs -rm -rf str/.libs str/_libs -rm -rf util/.libs util/_libs # This directory's subdirectories are mostly independent; you can cd # into them and run 'make' without going through this Makefile. 
# To change the values of 'make' variables: instead of editing Makefiles, # (1) if the variable is set in 'config.status', edit 'config.status' # (which will cause the Makefiles to be regenerated when you run 'make'); # (2) otherwise, pass the desired values on the 'make' command line. $(am__recursive_targets): @fail=; \ if $(am__make_keepgoing); then \ failcom='fail=yes'; \ else \ failcom='exit 1'; \ fi; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ ($(am__cd) $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-recursive TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! -f $$subdir/TAGS || \ set "$$@" "$$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-recursive CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-recursive cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done @list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ $(am__make_dryrun) \ || test -d "$(distdir)/$$subdir" \ || $(MKDIR_P) "$(distdir)/$$subdir" \ || exit 1; \ dir1=$$subdir; dir2="$(distdir)/$$subdir"; \ $(am__relativize); \ new_distdir=$$reldir; \ dir1=$$subdir; dir2="$(top_distdir)"; \ $(am__relativize); \ new_top_distdir=$$reldir; \ echo " (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) top_distdir="$$new_top_distdir" distdir="$$new_distdir" \\"; \ echo " am__remove_distdir=: am__skip_length_check=: am__skip_mode_fix=: distdir)"; \ ($(am__cd) $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$new_top_distdir" \ distdir="$$new_distdir" \ am__remove_distdir=: \ am__skip_length_check=: \ am__skip_mode_fix=: \ distdir) \ || exit 1; \ fi; \ done check-am: all-am check: check-recursive all-am: Makefile $(LTLIBRARIES) ares_config.h installdirs: installdirs-recursive installdirs-am: for dir in "$(DESTDIR)$(libdir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -$(am__rm_f) $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES) -$(am__rm_f) $(DISTCLEANFILES) -$(am__rm_f) dsa/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) dsa/$(am__dirstamp) -$(am__rm_f) event/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) event/$(am__dirstamp) -$(am__rm_f) legacy/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) legacy/$(am__dirstamp) -$(am__rm_f) record/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) record/$(am__dirstamp) -$(am__rm_f) str/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) str/$(am__dirstamp) -$(am__rm_f) util/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) util/$(am__dirstamp) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
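# Example usage (illustrative only, referring to the notes above about this
# directory's mostly independent subdirectories and about overriding 'make'
# variables): a single subdirectory can be built on its own, and a variable
# such as CFLAGS can be passed on the command line instead of editing any
# Makefile:
#
#   cd <subdir> && make
#   make CFLAGS='-O2 -g'
#
# The '<subdir>' placeholder and the CFLAGS value shown here are examples
# only; any subdirectory listed in $(SUBDIRS) and any variable not recorded
# in 'config.status' can be substituted.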
clean: clean-recursive clean-am: clean-generic clean-libLTLIBRARIES clean-libtool \ mostlyclean-am distclean: distclean-recursive -rm -f ./$(DEPDIR)/libcares_la-ares__addrinfo2hostent.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__addrinfo_localhost.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__close_sockets.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__hosts_file.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__socket.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__sortaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_android.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_cancel.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_cookie.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_data.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_destroy.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_free_hostent.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_free_string.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_freeaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getenv.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_gethostbyaddr.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_gethostbyname.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getnameinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_init.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_library_init.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_metrics.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_options.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_platform.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_process.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_qcache.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_query.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_search.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_send.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_strerror.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig_files.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig_mac.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig_win.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_timeout.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_update_servers.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_version.Plo -rm -f ./$(DEPDIR)/libcares_la-inet_net_pton.Plo -rm -f ./$(DEPDIR)/libcares_la-inet_ntop.Plo -rm -f ./$(DEPDIR)/libcares_la-windows_port.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__array.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__llist.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__slist.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_configchg.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_epoll.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_kqueue.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_poll.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_select.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_thread.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_win32.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_create_query.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_expand_name.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_expand_string.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_fds.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_getsock.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Plo -rm -f 
legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_mapping.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_multistring.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_name.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_parse.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_record.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_write.Plo -rm -f str/$(DEPDIR)/libcares_la-ares__buf.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_str.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_strcasecmp.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_strsplit.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__iface_ips.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__threads.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__timeval.Plo -rm -f util/$(DEPDIR)/libcares_la-ares_math.Plo -rm -f util/$(DEPDIR)/libcares_la-ares_rand.Plo -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-hdr distclean-tags dvi: dvi-recursive dvi-am: html: html-recursive html-am: info: info-recursive info-am: install-data-am: install-dvi: install-dvi-recursive install-dvi-am: install-exec-am: install-libLTLIBRARIES install-html: install-html-recursive install-html-am: install-info: install-info-recursive install-info-am: install-man: install-pdf: install-pdf-recursive install-pdf-am: install-ps: install-ps-recursive install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f ./$(DEPDIR)/libcares_la-ares__addrinfo2hostent.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__addrinfo_localhost.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__close_sockets.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__hosts_file.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__parse_into_addrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__socket.Plo -rm -f ./$(DEPDIR)/libcares_la-ares__sortaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_android.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_cancel.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_cookie.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_data.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_destroy.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_free_hostent.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_free_string.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_freeaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getaddrinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getenv.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_gethostbyaddr.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_gethostbyname.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_getnameinfo.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_init.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_library_init.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_metrics.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_options.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_platform.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_process.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_qcache.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_query.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_search.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_send.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_strerror.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig_files.Plo -rm -f 
./$(DEPDIR)/libcares_la-ares_sysconfig_mac.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_sysconfig_win.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_timeout.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_update_servers.Plo -rm -f ./$(DEPDIR)/libcares_la-ares_version.Plo -rm -f ./$(DEPDIR)/libcares_la-inet_net_pton.Plo -rm -f ./$(DEPDIR)/libcares_la-inet_ntop.Plo -rm -f ./$(DEPDIR)/libcares_la-windows_port.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__array.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_asvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_strvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_szvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__htable_vpvp.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__llist.Plo -rm -f dsa/$(DEPDIR)/libcares_la-ares__slist.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_configchg.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_epoll.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_kqueue.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_poll.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_select.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_thread.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_wake_pipe.Plo -rm -f event/$(DEPDIR)/libcares_la-ares_event_win32.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_create_query.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_expand_name.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_expand_string.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_fds.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_getsock.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_a_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_aaaa_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_caa_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_mx_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_naptr_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_ns_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_ptr_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_soa_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_srv_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_txt_reply.Plo -rm -f legacy/$(DEPDIR)/libcares_la-ares_parse_uri_reply.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_mapping.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_multistring.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_name.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_parse.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_record.Plo -rm -f record/$(DEPDIR)/libcares_la-ares_dns_write.Plo -rm -f str/$(DEPDIR)/libcares_la-ares__buf.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_str.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_strcasecmp.Plo -rm -f str/$(DEPDIR)/libcares_la-ares_strsplit.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__iface_ips.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__threads.Plo -rm -f util/$(DEPDIR)/libcares_la-ares__timeval.Plo -rm -f util/$(DEPDIR)/libcares_la-ares_math.Plo -rm -f util/$(DEPDIR)/libcares_la-ares_rand.Plo -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-libLTLIBRARIES .MAKE: $(am__recursive_targets) all install-am install-strip .PHONY: $(am__recursive_targets) CTAGS GTAGS TAGS all all-am \ am--depfiles check check-am clean clean-generic \ clean-libLTLIBRARIES clean-libtool cscopelist-am ctags \ ctags-am distclean distclean-compile 
distclean-generic \ distclean-hdr distclean-libtool distclean-tags distdir dvi \ dvi-am html html-am info info-am install install-am \ install-data install-data-am install-dvi install-dvi-am \ install-exec install-exec-am install-html install-html-am \ install-info install-info-am install-libLTLIBRARIES \ install-man install-pdf install-pdf-am install-ps \ install-ps-am install-strip installcheck installcheck-am \ installdirs installdirs-am maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am uninstall-libLTLIBRARIES .PRECIOUS: Makefile # Code coverage # # Optional: # - CODE_COVERAGE_DIRECTORY: Top-level directory for code coverage reporting. # Multiple directories may be specified, separated by whitespace. # (Default: $(top_builddir)) # - CODE_COVERAGE_OUTPUT_FILE: Filename and path for the .info file generated # by lcov for code coverage. (Default: # $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage.info) # - CODE_COVERAGE_OUTPUT_DIRECTORY: Directory for generated code coverage # reports to be created. (Default: # $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage) # - CODE_COVERAGE_BRANCH_COVERAGE: Set to 1 to enforce branch coverage, # set to 0 to disable it and leave empty to stay with the default. # (Default: empty) # - CODE_COVERAGE_LCOV_SHOPTS_DEFAULT: Extra options shared between both lcov # instances. (Default: based on ) # - CODE_COVERAGE_LCOV_SHOPTS: Extra options to shared between both lcov # instances. (Default: ) # - CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH: --gcov-tool pathtogcov # - CODE_COVERAGE_LCOV_OPTIONS_DEFAULT: Extra options to pass to the # collecting lcov instance. (Default: ) # - CODE_COVERAGE_LCOV_OPTIONS: Extra options to pass to the collecting lcov # instance. (Default: ) # - CODE_COVERAGE_LCOV_RMOPTS_DEFAULT: Extra options to pass to the filtering # lcov instance. (Default: empty) # - CODE_COVERAGE_LCOV_RMOPTS: Extra options to pass to the filtering lcov # instance. (Default: ) # - CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT: Extra options to pass to the # genhtml instance. (Default: based on ) # - CODE_COVERAGE_GENHTML_OPTIONS: Extra options to pass to the genhtml # instance. (Default: ) # - CODE_COVERAGE_IGNORE_PATTERN: Extra glob pattern of files to ignore # # The generated report will be titled using the $(PACKAGE_NAME) and # $(PACKAGE_VERSION). In order to add the current git hash to the title, # use the git-version-gen script, available online. 
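# Example usage (illustrative only): assuming the tree was configured with
# --enable-code-coverage, the check-code-coverage rule below can be combined
# with the variables documented above, for instance:
#
#   ./configure --enable-code-coverage
#   make check-code-coverage \
#       CODE_COVERAGE_OUTPUT_DIRECTORY=coverage-html \
#       CODE_COVERAGE_BRANCH_COVERAGE=1
#
# The output directory name and the branch-coverage setting are placeholder
# values; any of the CODE_COVERAGE_* variables listed above can be overridden
# the same way.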
# Optional variables # run only on top dir @CODE_COVERAGE_ENABLED_TRUE@ ifeq ($(abs_builddir), $(abs_top_builddir)) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_DIRECTORY ?= $(top_builddir) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_OUTPUT_FILE ?= $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage.info @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_OUTPUT_DIRECTORY ?= $(PACKAGE_NAME)-$(PACKAGE_VERSION)-coverage @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_BRANCH_COVERAGE ?= @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_SHOPTS_DEFAULT ?= $(if $(CODE_COVERAGE_BRANCH_COVERAGE),--rc lcov_branch_coverage=$(CODE_COVERAGE_BRANCH_COVERAGE)) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_SHOPTS ?= $(CODE_COVERAGE_LCOV_SHOPTS_DEFAULT) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH ?= --gcov-tool "$(GCOV)" @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_OPTIONS_DEFAULT ?= $(CODE_COVERAGE_LCOV_OPTIONS_GCOVPATH) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_OPTIONS ?= $(CODE_COVERAGE_LCOV_OPTIONS_DEFAULT) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_RMOPTS_DEFAULT ?= @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_LCOV_RMOPTS ?= $(CODE_COVERAGE_LCOV_RMOPTS_DEFAULT) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT ?=$(if $(CODE_COVERAGE_BRANCH_COVERAGE),--rc genhtml_branch_coverage=$(CODE_COVERAGE_BRANCH_COVERAGE)) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_GENHTML_OPTIONS ?= $(CODE_COVERAGE_GENHTML_OPTIONS_DEFAULT) @CODE_COVERAGE_ENABLED_TRUE@CODE_COVERAGE_IGNORE_PATTERN ?= # Use recursive makes in order to ignore errors during check @CODE_COVERAGE_ENABLED_TRUE@check-code-coverage: @CODE_COVERAGE_ENABLED_TRUE@ -$(AM_V_at)$(MAKE) $(AM_MAKEFLAGS) -k check @CODE_COVERAGE_ENABLED_TRUE@ $(AM_V_at)$(MAKE) $(AM_MAKEFLAGS) code-coverage-capture # Capture code coverage data @CODE_COVERAGE_ENABLED_TRUE@code-coverage-capture: code-coverage-capture-hook @CODE_COVERAGE_ENABLED_TRUE@ $(code_coverage_v_lcov_cap)$(LCOV) $(code_coverage_quiet) $(addprefix --directory ,$(CODE_COVERAGE_DIRECTORY)) --capture --output-file "$(CODE_COVERAGE_OUTPUT_FILE).tmp" --test-name "$(call code_coverage_sanitize,$(PACKAGE_NAME)-$(PACKAGE_VERSION))" --no-checksum --compat-libtool $(CODE_COVERAGE_LCOV_SHOPTS) $(CODE_COVERAGE_LCOV_OPTIONS) @CODE_COVERAGE_ENABLED_TRUE@ $(code_coverage_v_lcov_ign)$(LCOV) $(code_coverage_quiet) $(addprefix --directory ,$(CODE_COVERAGE_DIRECTORY)) --remove "$(CODE_COVERAGE_OUTPUT_FILE).tmp" "/tmp/*" $(CODE_COVERAGE_IGNORE_PATTERN) --output-file "$(CODE_COVERAGE_OUTPUT_FILE)" $(CODE_COVERAGE_LCOV_SHOPTS) $(CODE_COVERAGE_LCOV_RMOPTS) @CODE_COVERAGE_ENABLED_TRUE@ -@rm -f "$(CODE_COVERAGE_OUTPUT_FILE).tmp" @CODE_COVERAGE_ENABLED_TRUE@ $(code_coverage_v_genhtml)LANG=C $(GENHTML) $(code_coverage_quiet) $(addprefix --prefix ,$(CODE_COVERAGE_DIRECTORY)) --output-directory "$(CODE_COVERAGE_OUTPUT_DIRECTORY)" --title "$(PACKAGE_NAME)-$(PACKAGE_VERSION) Code Coverage" --legend --show-details "$(CODE_COVERAGE_OUTPUT_FILE)" $(CODE_COVERAGE_GENHTML_OPTIONS) @CODE_COVERAGE_ENABLED_TRUE@ @echo "file://$(abs_builddir)/$(CODE_COVERAGE_OUTPUT_DIRECTORY)/index.html" @CODE_COVERAGE_ENABLED_TRUE@code-coverage-clean: @CODE_COVERAGE_ENABLED_TRUE@ -$(LCOV) --directory $(top_builddir) -z @CODE_COVERAGE_ENABLED_TRUE@ -rm -rf "$(CODE_COVERAGE_OUTPUT_FILE)" "$(CODE_COVERAGE_OUTPUT_FILE).tmp" "$(CODE_COVERAGE_OUTPUT_DIRECTORY)" @CODE_COVERAGE_ENABLED_TRUE@ -find . 
\( -name "*.gcda" -o -name "*.gcno" -o -name "*.gcov" \) -delete @CODE_COVERAGE_ENABLED_TRUE@code-coverage-dist-clean: @CODE_COVERAGE_ENABLED_TRUE@ else # ifneq ($(abs_builddir), $(abs_top_builddir)) @CODE_COVERAGE_ENABLED_TRUE@check-code-coverage: @CODE_COVERAGE_ENABLED_TRUE@code-coverage-capture: code-coverage-capture-hook @CODE_COVERAGE_ENABLED_TRUE@code-coverage-clean: @CODE_COVERAGE_ENABLED_TRUE@code-coverage-dist-clean: @CODE_COVERAGE_ENABLED_TRUE@ endif # ifeq ($(abs_builddir), $(abs_top_builddir)) # Use recursive makes in order to ignore errors during check @CODE_COVERAGE_ENABLED_FALSE@check-code-coverage: @CODE_COVERAGE_ENABLED_FALSE@ @echo "Need to reconfigure with --enable-code-coverage" # Capture code coverage data @CODE_COVERAGE_ENABLED_FALSE@code-coverage-capture: code-coverage-capture-hook @CODE_COVERAGE_ENABLED_FALSE@ @echo "Need to reconfigure with --enable-code-coverage" @CODE_COVERAGE_ENABLED_FALSE@code-coverage-clean: @CODE_COVERAGE_ENABLED_FALSE@code-coverage-dist-clean: # Hook rule executed before code-coverage-capture, overridable by the user code-coverage-capture-hook: .PHONY: check-code-coverage code-coverage-capture code-coverage-dist-clean code-coverage-clean code-coverage-capture-hook # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: # Tell GNU make to disable its built-in pattern rules. %:: %,v %:: RCS/%,v %:: RCS/% %:: s.% %:: SCCS/s.% gevent-24.11.1/deps/c-ares/src/lib/Makefile.inc000066400000000000000000000061311471441230600210250ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT CSOURCES = ares__addrinfo2hostent.c \ ares__addrinfo_localhost.c \ ares__close_sockets.c \ ares__hosts_file.c \ ares__parse_into_addrinfo.c \ ares__socket.c \ ares__sortaddrinfo.c \ ares_android.c \ ares_cancel.c \ ares_cookie.c \ ares_data.c \ ares_destroy.c \ ares_free_hostent.c \ ares_free_string.c \ ares_freeaddrinfo.c \ ares_getaddrinfo.c \ ares_getenv.c \ ares_gethostbyaddr.c \ ares_gethostbyname.c \ ares_getnameinfo.c \ ares_init.c \ ares_library_init.c \ ares_metrics.c \ ares_options.c \ ares_platform.c \ ares_process.c \ ares_qcache.c \ ares_query.c \ ares_search.c \ ares_send.c \ ares_strerror.c \ ares_sysconfig.c \ ares_sysconfig_files.c \ ares_sysconfig_mac.c \ ares_sysconfig_win.c \ ares_timeout.c \ ares_update_servers.c \ ares_version.c \ inet_net_pton.c \ inet_ntop.c \ windows_port.c \ dsa/ares__array.c \ dsa/ares__htable.c \ dsa/ares__htable_asvp.c \ dsa/ares__htable_strvp.c \ dsa/ares__htable_szvp.c \ dsa/ares__htable_vpvp.c \ dsa/ares__llist.c \ dsa/ares__slist.c \ event/ares_event_configchg.c \ event/ares_event_epoll.c \ event/ares_event_kqueue.c \ event/ares_event_poll.c \ event/ares_event_select.c \ event/ares_event_thread.c \ event/ares_event_wake_pipe.c \ event/ares_event_win32.c \ legacy/ares_create_query.c \ legacy/ares_expand_name.c \ legacy/ares_expand_string.c \ legacy/ares_fds.c \ legacy/ares_getsock.c \ legacy/ares_parse_a_reply.c \ legacy/ares_parse_aaaa_reply.c \ legacy/ares_parse_caa_reply.c \ legacy/ares_parse_mx_reply.c \ legacy/ares_parse_naptr_reply.c \ legacy/ares_parse_ns_reply.c \ legacy/ares_parse_ptr_reply.c \ legacy/ares_parse_soa_reply.c \ legacy/ares_parse_srv_reply.c \ legacy/ares_parse_txt_reply.c \ legacy/ares_parse_uri_reply.c \ record/ares_dns_mapping.c \ record/ares_dns_multistring.c \ record/ares_dns_name.c \ record/ares_dns_parse.c \ record/ares_dns_record.c \ 
record/ares_dns_write.c \ str/ares__buf.c \ str/ares_strcasecmp.c \ str/ares_str.c \ str/ares_strsplit.c \ util/ares__iface_ips.c \ util/ares__threads.c \ util/ares__timeval.c \ util/ares_math.c \ util/ares_rand.c HHEADERS = ares_android.h \ ares_data.h \ ares_getenv.h \ ares_inet_net_pton.h \ ares_ipv6.h \ ares_platform.h \ ares_private.h \ ares_setup.h \ dsa/ares__array.h \ dsa/ares__htable.h \ dsa/ares__htable_asvp.h \ dsa/ares__htable_strvp.h \ dsa/ares__htable_szvp.h \ dsa/ares__htable_vpvp.h \ dsa/ares__llist.h \ dsa/ares__slist.h \ event/ares_event.h \ event/ares_event_win32.h \ record/ares_dns_multistring.h \ record/ares_dns_private.h \ str/ares__buf.h \ str/ares_strcasecmp.h \ str/ares_str.h \ str/ares_strsplit.h \ util/ares__iface_ips.h \ util/ares__threads.h \ thirdparty/apple/dnsinfo.h gevent-24.11.1/deps/c-ares/src/lib/ares__addrinfo2hostent.c000066400000000000000000000172101471441230600234070ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2005 Dominick Meglio * Copyright (c) 2019 Andrew Selivanov * Copyright (c) 2021 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_LIMITS_H # include #endif ares_status_t ares__addrinfo2hostent(const struct ares_addrinfo *ai, int family, struct hostent **host) { struct ares_addrinfo_node *next; struct ares_addrinfo_cname *next_cname; char **aliases = NULL; char *addrs = NULL; size_t naliases = 0; size_t naddrs = 0; size_t alias = 0; size_t i; if (ai == NULL || host == NULL) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Use the first node of the response as the family, since hostent can only * represent one family. We assume getaddrinfo() returned a sorted list if * the user requested AF_UNSPEC. 
*/ if (family == AF_UNSPEC && ai->nodes) { family = ai->nodes->ai_family; } if (family != AF_INET && family != AF_INET6) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } *host = ares_malloc(sizeof(**host)); if (!(*host)) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(*host, 0, sizeof(**host)); next = ai->nodes; while (next) { if (next->ai_family == family) { ++naddrs; } next = next->ai_next; } next_cname = ai->cnames; while (next_cname) { if (next_cname->alias) { ++naliases; } next_cname = next_cname->next; } aliases = ares_malloc((naliases + 1) * sizeof(char *)); if (!aliases) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } (*host)->h_aliases = aliases; memset(aliases, 0, (naliases + 1) * sizeof(char *)); if (naliases) { for (next_cname = ai->cnames; next_cname != NULL; next_cname = next_cname->next) { if (next_cname->alias == NULL) { continue; } aliases[alias] = ares_strdup(next_cname->alias); if (!aliases[alias]) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } alias++; } } (*host)->h_addr_list = ares_malloc((naddrs + 1) * sizeof(char *)); if (!(*host)->h_addr_list) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } memset((*host)->h_addr_list, 0, (naddrs + 1) * sizeof(char *)); if (ai->cnames) { (*host)->h_name = ares_strdup(ai->cnames->name); if ((*host)->h_name == NULL && ai->cnames->name) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } } else { (*host)->h_name = ares_strdup(ai->name); if ((*host)->h_name == NULL && ai->name) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } } (*host)->h_addrtype = (HOSTENT_ADDRTYPE_TYPE)family; if (family == AF_INET) { (*host)->h_length = sizeof(struct in_addr); } if (family == AF_INET6) { (*host)->h_length = sizeof(struct ares_in6_addr); } if (naddrs) { addrs = ares_malloc(naddrs * (size_t)(*host)->h_length); if (!addrs) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } i = 0; for (next = ai->nodes; next != NULL; next = next->ai_next) { if (next->ai_family != family) { continue; } (*host)->h_addr_list[i] = addrs + (i * (size_t)(*host)->h_length); if (family == AF_INET6) { memcpy((*host)->h_addr_list[i], &(CARES_INADDR_CAST(const struct sockaddr_in6 *, next->ai_addr) ->sin6_addr), (size_t)(*host)->h_length); } if (family == AF_INET) { memcpy((*host)->h_addr_list[i], &(CARES_INADDR_CAST(const struct sockaddr_in *, next->ai_addr) ->sin_addr), (size_t)(*host)->h_length); } ++i; } if (i == 0) { ares_free(addrs); } } if (naddrs == 0 && naliases == 0) { ares_free_hostent(*host); *host = NULL; return ARES_ENODATA; } return ARES_SUCCESS; /* LCOV_EXCL_START: OutOfMemory */ enomem: ares_free_hostent(*host); *host = NULL; return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } ares_status_t ares__addrinfo2addrttl(const struct ares_addrinfo *ai, int family, size_t req_naddrttls, struct ares_addrttl *addrttls, struct ares_addr6ttl *addr6ttls, size_t *naddrttls) { struct ares_addrinfo_node *next; struct ares_addrinfo_cname *next_cname; int cname_ttl = INT_MAX; if (family != AF_INET && family != AF_INET6) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ai == NULL || naddrttls == NULL) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (family == AF_INET && addrttls == NULL) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (family == AF_INET6 && addr6ttls == NULL) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (req_naddrttls == 0) { return ARES_EBADQUERY; /* LCOV_EXCL_LINE: DefensiveCoding */ } *naddrttls = 0; next_cname = ai->cnames; while (next_cname) { if 
(next_cname->ttl < cname_ttl) { cname_ttl = next_cname->ttl; } next_cname = next_cname->next; } for (next = ai->nodes; next != NULL; next = next->ai_next) { if (next->ai_family != family) { continue; } if (*naddrttls >= req_naddrttls) { break; } if (family == AF_INET6) { if (next->ai_ttl > cname_ttl) { addr6ttls[*naddrttls].ttl = cname_ttl; } else { addr6ttls[*naddrttls].ttl = next->ai_ttl; } memcpy(&addr6ttls[*naddrttls].ip6addr, &(CARES_INADDR_CAST(const struct sockaddr_in6 *, next->ai_addr) ->sin6_addr), sizeof(struct ares_in6_addr)); } else { if (next->ai_ttl > cname_ttl) { addrttls[*naddrttls].ttl = cname_ttl; } else { addrttls[*naddrttls].ttl = next->ai_ttl; } memcpy(&addrttls[*naddrttls].ipaddr, &(CARES_INADDR_CAST(const struct sockaddr_in *, next->ai_addr) ->sin_addr), sizeof(struct in_addr)); } (*naddrttls)++; } return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares__addrinfo_localhost.c000066400000000000000000000151021471441230600237660ustar00rootroot00000000000000/* MIT License * * Copyright (c) Massachusetts Institute of Technology * Copyright (c) Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #if defined(USE_WINSOCK) # if defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0600 # include # endif # if defined(HAVE_IPHLPAPI_H) # include # endif # if defined(HAVE_NETIOAPI_H) # include # endif #endif ares_status_t ares_append_ai_node(int aftype, unsigned short port, unsigned int ttl, const void *adata, struct ares_addrinfo_node **nodes) { struct ares_addrinfo_node *node; node = ares__append_addrinfo_node(nodes); if (!node) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(node, 0, sizeof(*node)); if (aftype == AF_INET) { struct sockaddr_in *sin = ares_malloc(sizeof(*sin)); if (!sin) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(sin, 0, sizeof(*sin)); memcpy(&sin->sin_addr.s_addr, adata, sizeof(sin->sin_addr.s_addr)); sin->sin_family = AF_INET; sin->sin_port = htons(port); node->ai_addr = (struct sockaddr *)sin; node->ai_family = AF_INET; node->ai_addrlen = sizeof(*sin); node->ai_addr = (struct sockaddr *)sin; node->ai_ttl = (int)ttl; } if (aftype == AF_INET6) { struct sockaddr_in6 *sin6 = ares_malloc(sizeof(*sin6)); if (!sin6) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(sin6, 0, sizeof(*sin6)); memcpy(&sin6->sin6_addr.s6_addr, adata, sizeof(sin6->sin6_addr.s6_addr)); sin6->sin6_family = AF_INET6; sin6->sin6_port = htons(port); node->ai_addr = (struct sockaddr *)sin6; node->ai_family = AF_INET6; node->ai_addrlen = sizeof(*sin6); node->ai_addr = (struct sockaddr *)sin6; node->ai_ttl = (int)ttl; } return ARES_SUCCESS; } static ares_status_t ares__default_loopback_addrs(int aftype, unsigned short port, struct ares_addrinfo_node **nodes) { ares_status_t status = ARES_SUCCESS; if (aftype == AF_UNSPEC || aftype == AF_INET6) { struct ares_in6_addr addr6; ares_inet_pton(AF_INET6, "::1", &addr6); status = ares_append_ai_node(AF_INET6, port, 0, &addr6, nodes); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (aftype == AF_UNSPEC || aftype == AF_INET) { struct in_addr addr4; ares_inet_pton(AF_INET, "127.0.0.1", &addr4); status = ares_append_ai_node(AF_INET, port, 0, &addr4, nodes); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return status; } static ares_status_t ares__system_loopback_addrs(int aftype, unsigned short port, struct ares_addrinfo_node **nodes) { #if defined(USE_WINSOCK) && defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0600 && \ !defined(__WATCOMC__) PMIB_UNICASTIPADDRESS_TABLE table; unsigned int i; ares_status_t status = ARES_ENOTFOUND; *nodes = NULL; if (GetUnicastIpAddressTable((ADDRESS_FAMILY)aftype, &table) != NO_ERROR) { return ARES_ENOTFOUND; } for (i = 0; i < table->NumEntries; i++) { if (table->Table[i].InterfaceLuid.Info.IfType != IF_TYPE_SOFTWARE_LOOPBACK) { continue; } if (table->Table[i].Address.si_family == AF_INET) { status = ares_append_ai_node(table->Table[i].Address.si_family, port, 0, &table->Table[i].Address.Ipv4.sin_addr, nodes); } else if (table->Table[i].Address.si_family == AF_INET6) { status = ares_append_ai_node(table->Table[i].Address.si_family, port, 0, &table->Table[i].Address.Ipv6.sin6_addr, nodes); } else { /* Ignore any others */ continue; } if (status != ARES_SUCCESS) { goto fail; } } if (*nodes == NULL) { status = ARES_ENOTFOUND; } fail: FreeMibTable(table); if (status != ARES_SUCCESS) { ares__freeaddrinfo_nodes(*nodes); *nodes = NULL; } return status; 
#else (void)aftype; (void)port; (void)nodes; /* Not supported on any other OS at this time */ return ARES_ENOTFOUND; #endif } ares_status_t ares__addrinfo_localhost(const char *name, unsigned short port, const struct ares_addrinfo_hints *hints, struct ares_addrinfo *ai) { struct ares_addrinfo_node *nodes = NULL; ares_status_t status; /* Validate family */ switch (hints->ai_family) { case AF_INET: case AF_INET6: case AF_UNSPEC: break; default: /* LCOV_EXCL_LINE: DefensiveCoding */ return ARES_EBADFAMILY; /* LCOV_EXCL_LINE: DefensiveCoding */ } ai->name = ares_strdup(name); if (!ai->name) { goto enomem; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__system_loopback_addrs(hints->ai_family, port, &nodes); if (status == ARES_ENOTFOUND) { status = ares__default_loopback_addrs(hints->ai_family, port, &nodes); } ares__addrinfo_cat_nodes(&ai->nodes, nodes); return status; /* LCOV_EXCL_START: OutOfMemory */ enomem: ares__freeaddrinfo_nodes(nodes); ares_free(ai->name); ai->name = NULL; return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } gevent-24.11.1/deps/c-ares/src/lib/ares__close_sockets.c000066400000000000000000000107361471441230600230000ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include static void ares__requeue_queries(ares_conn_t *conn, ares_status_t requeue_status) { ares_query_t *query; ares_timeval_t now; ares__tvnow(&now); while ((query = ares__llist_first_val(conn->queries_to_conn)) != NULL) { ares__requeue_query(query, &now, requeue_status, ARES_TRUE, NULL); } } void ares__close_connection(ares_conn_t *conn, ares_status_t requeue_status) { ares_server_t *server = conn->server; ares_channel_t *channel = server->channel; /* Unlink */ ares__llist_node_claim( ares__htable_asvp_get_direct(channel->connnode_by_socket, conn->fd)); ares__htable_asvp_remove(channel->connnode_by_socket, conn->fd); if (conn->flags & ARES_CONN_FLAG_TCP) { /* Reset any existing input and output buffer. 
*/ ares__buf_consume(server->tcp_parser, ares__buf_len(server->tcp_parser)); ares__buf_consume(server->tcp_send, ares__buf_len(server->tcp_send)); server->tcp_conn = NULL; } /* Requeue queries to other connections */ ares__requeue_queries(conn, requeue_status); ares__llist_destroy(conn->queries_to_conn); SOCK_STATE_CALLBACK(channel, conn->fd, 0, 0); ares__close_socket(channel, conn->fd); ares_free(conn); } void ares__close_sockets(ares_server_t *server) { ares__llist_node_t *node; while ((node = ares__llist_node_first(server->connections)) != NULL) { ares_conn_t *conn = ares__llist_node_val(node); ares__close_connection(conn, ARES_SUCCESS); } } void ares__check_cleanup_conns(const ares_channel_t *channel) { ares__slist_node_t *snode; if (channel == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Iterate across each server */ for (snode = ares__slist_node_first(channel->servers); snode != NULL; snode = ares__slist_node_next(snode)) { ares_server_t *server = ares__slist_node_val(snode); ares__llist_node_t *cnode; /* Iterate across each connection */ cnode = ares__llist_node_first(server->connections); while (cnode != NULL) { ares__llist_node_t *next = ares__llist_node_next(cnode); ares_conn_t *conn = ares__llist_node_val(cnode); ares_bool_t do_cleanup = ARES_FALSE; cnode = next; /* Has connections, not eligible */ if (ares__llist_len(conn->queries_to_conn)) { continue; } /* If we are configured not to stay open, close it out */ if (!(channel->flags & ARES_FLAG_STAYOPEN)) { do_cleanup = ARES_TRUE; } /* If the associated server has failures, close it out. Resetting the * connection (and specifically the source port number) can help resolve * situations where packets are being dropped. */ if (conn->server->consec_failures > 0) { do_cleanup = ARES_TRUE; } /* If the udp connection hit its max queries, always close it */ if (!(conn->flags & ARES_CONN_FLAG_TCP) && channel->udp_max_queries > 0 && conn->total_queries >= channel->udp_max_queries) { do_cleanup = ARES_TRUE; } if (!do_cleanup) { continue; } /* Clean it up */ ares__close_connection(conn, ARES_SUCCESS); } } } gevent-24.11.1/deps/c-ares/src/lib/ares__hosts_file.c000066400000000000000000000624671471441230600223070ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include #include "ares_platform.h" /* HOSTS FILE PROCESSING OVERVIEW * ============================== * The hosts file on the system contains static entries to be processed locally * rather than querying the nameserver. Each row is an IP address followed by * a list of space delimited hostnames that match the ip address. This is used * for both forward and reverse lookups. * * We are caching the entire parsed hosts file for performance reasons. Some * files may be quite sizable and as per Issue #458 can approach 1/2MB in size, * and the parse overhead on a rapid succession of queries can be quite large. * The entries are stored in forwards and backwards hashtables so we can get * O(1) performance on lookup. The file is cached until the file modification * timestamp changes. * * The hosts file processing is quite unique. It has to merge all related hosts * and ips into a single entry due to file formatting requirements. For * instance take the below: * * 127.0.0.1 localhost.localdomain localhost * ::1 localhost.localdomain localhost * 192.168.1.1 host.example.com host * 192.168.1.5 host.example.com host * 2620:1234::1 host.example.com host6.example.com host6 host * * This will yield 2 entries. * 1) ips: 127.0.0.1,::1 * hosts: localhost.localdomain,localhost * 2) ips: 192.168.1.1,192.168.1.5,2620:1234::1 * hosts: host.example.com,host,host6.example.com,host6 * * It could be argued that if searching for 192.168.1.1 that the 'host6' * hostnames should not be returned, but this implementation will return them * since they are related. It is unlikely this will matter in the real world. */ struct ares_hosts_file { time_t ts; /*! cache the filename so we know if the filename changes it automatically * invalidates the cache */ char *filename; /*! iphash is the owner of the 'entry' object as there is only ever a single * match to the object. */ ares__htable_strvp_t *iphash; /*! hosthash does not own the entry so won't free on destruction */ ares__htable_strvp_t *hosthash; }; struct ares_hosts_entry { size_t refcnt; /*! 
If the entry is stored multiple times in the * ip address hash, we have to reference count it */ ares__llist_t *ips; ares__llist_t *hosts; }; const void *ares_dns_pton(const char *ipaddr, struct ares_addr *addr, size_t *out_len) { const void *ptr = NULL; size_t ptr_len = 0; if (ipaddr == NULL || addr == NULL || out_len == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } *out_len = 0; if (addr->family == AF_INET && ares_inet_pton(AF_INET, ipaddr, &addr->addr.addr4) > 0) { ptr = &addr->addr.addr4; ptr_len = sizeof(addr->addr.addr4); } else if (addr->family == AF_INET6 && ares_inet_pton(AF_INET6, ipaddr, &addr->addr.addr6) > 0) { ptr = &addr->addr.addr6; ptr_len = sizeof(addr->addr.addr6); } else if (addr->family == AF_UNSPEC) { if (ares_inet_pton(AF_INET, ipaddr, &addr->addr.addr4) > 0) { addr->family = AF_INET; ptr = &addr->addr.addr4; ptr_len = sizeof(addr->addr.addr4); } else if (ares_inet_pton(AF_INET6, ipaddr, &addr->addr.addr6) > 0) { addr->family = AF_INET6; ptr = &addr->addr.addr6; ptr_len = sizeof(addr->addr.addr6); } } *out_len = ptr_len; return ptr; } static ares_bool_t ares__normalize_ipaddr(const char *ipaddr, char *out, size_t out_len) { struct ares_addr data; const void *addr; size_t addr_len = 0; memset(&data, 0, sizeof(data)); data.family = AF_UNSPEC; addr = ares_dns_pton(ipaddr, &data, &addr_len); if (addr == NULL) { return ARES_FALSE; } if (!ares_inet_ntop(data.family, addr, out, (ares_socklen_t)out_len)) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ARES_TRUE; } static void ares__hosts_entry_destroy(ares_hosts_entry_t *entry) { if (entry == NULL) { return; } /* Honor reference counting */ if (entry->refcnt != 0) { entry->refcnt--; } if (entry->refcnt > 0) { return; } ares__llist_destroy(entry->hosts); ares__llist_destroy(entry->ips); ares_free(entry); } static void ares__hosts_entry_destroy_cb(void *entry) { ares__hosts_entry_destroy(entry); } void ares__hosts_file_destroy(ares_hosts_file_t *hf) { if (hf == NULL) { return; } ares_free(hf->filename); ares__htable_strvp_destroy(hf->hosthash); ares__htable_strvp_destroy(hf->iphash); ares_free(hf); } static ares_hosts_file_t *ares__hosts_file_create(const char *filename) { ares_hosts_file_t *hf = ares_malloc_zero(sizeof(*hf)); if (hf == NULL) { goto fail; } hf->ts = time(NULL); hf->filename = ares_strdup(filename); if (hf->filename == NULL) { goto fail; } hf->iphash = ares__htable_strvp_create(ares__hosts_entry_destroy_cb); if (hf->iphash == NULL) { goto fail; } hf->hosthash = ares__htable_strvp_create(NULL); if (hf->hosthash == NULL) { goto fail; } return hf; fail: ares__hosts_file_destroy(hf); return NULL; } typedef enum { ARES_MATCH_NONE = 0, ARES_MATCH_IPADDR = 1, ARES_MATCH_HOST = 2 } ares_hosts_file_match_t; static ares_status_t ares__hosts_file_merge_entry( const ares_hosts_file_t *hf, ares_hosts_entry_t *existing, ares_hosts_entry_t *entry, ares_hosts_file_match_t matchtype) { ares__llist_node_t *node; /* If we matched on IP address, we know there can only be 1, so there's no * reason to do anything */ if (matchtype != ARES_MATCH_IPADDR) { while ((node = ares__llist_node_first(entry->ips)) != NULL) { const char *ipaddr = ares__llist_node_val(node); if (ares__htable_strvp_get_direct(hf->iphash, ipaddr) != NULL) { ares__llist_node_destroy(node); continue; } ares__llist_node_move_parent_last(node, existing->ips); } } while ((node = ares__llist_node_first(entry->hosts)) != NULL) { const char *hostname = ares__llist_node_val(node); if (ares__htable_strvp_get_direct(hf->hosthash, hostname) 
!= NULL) { ares__llist_node_destroy(node); continue; } ares__llist_node_move_parent_last(node, existing->hosts); } ares__hosts_entry_destroy(entry); return ARES_SUCCESS; } static ares_hosts_file_match_t ares__hosts_file_match(const ares_hosts_file_t *hf, ares_hosts_entry_t *entry, ares_hosts_entry_t **match) { ares__llist_node_t *node; *match = NULL; for (node = ares__llist_node_first(entry->ips); node != NULL; node = ares__llist_node_next(node)) { const char *ipaddr = ares__llist_node_val(node); *match = ares__htable_strvp_get_direct(hf->iphash, ipaddr); if (*match != NULL) { return ARES_MATCH_IPADDR; } } for (node = ares__llist_node_first(entry->hosts); node != NULL; node = ares__llist_node_next(node)) { const char *host = ares__llist_node_val(node); *match = ares__htable_strvp_get_direct(hf->hosthash, host); if (*match != NULL) { return ARES_MATCH_HOST; } } return ARES_MATCH_NONE; } /*! entry is invalidated upon calling this function, always, even on error */ static ares_status_t ares__hosts_file_add(ares_hosts_file_t *hosts, ares_hosts_entry_t *entry) { ares_hosts_entry_t *match = NULL; ares_status_t status = ARES_SUCCESS; ares__llist_node_t *node; ares_hosts_file_match_t matchtype; size_t num_hostnames; /* Record the number of hostnames in this entry file. If we merge into an * existing record, these will be *appended* to the entry, so we'll count * backwards when adding to the hosts hashtable */ num_hostnames = ares__llist_len(entry->hosts); matchtype = ares__hosts_file_match(hosts, entry, &match); if (matchtype != ARES_MATCH_NONE) { status = ares__hosts_file_merge_entry(hosts, match, entry, matchtype); if (status != ARES_SUCCESS) { ares__hosts_entry_destroy(entry); /* LCOV_EXCL_LINE: DefensiveCoding */ return status; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* entry was invalidated above by merging */ entry = match; } if (matchtype != ARES_MATCH_IPADDR) { const char *ipaddr = ares__llist_last_val(entry->ips); if (!ares__htable_strvp_get(hosts->iphash, ipaddr, NULL)) { if (!ares__htable_strvp_insert(hosts->iphash, ipaddr, entry)) { ares__hosts_entry_destroy(entry); return ARES_ENOMEM; } entry->refcnt++; } } /* Go backwards, on a merge, hostnames are appended. Breakout once we've * consumed all the hosts that we appended */ for (node = ares__llist_node_last(entry->hosts); node != NULL; node = ares__llist_node_prev(node)) { const char *val = ares__llist_node_val(node); if (num_hostnames == 0) { break; } num_hostnames--; /* first hostname match wins. 
If we detect a duplicate hostname for another * ip it will automatically be added to the same entry */ if (ares__htable_strvp_get(hosts->hosthash, val, NULL)) { continue; } if (!ares__htable_strvp_insert(hosts->hosthash, val, entry)) { return ARES_ENOMEM; } } return ARES_SUCCESS; } static ares_bool_t ares__hosts_entry_isdup(ares_hosts_entry_t *entry, const char *host) { ares__llist_node_t *node; for (node = ares__llist_node_first(entry->ips); node != NULL; node = ares__llist_node_next(node)) { const char *myhost = ares__llist_node_val(node); if (strcasecmp(myhost, host) == 0) { return ARES_TRUE; } } return ARES_FALSE; } static ares_status_t ares__parse_hosts_hostnames(ares__buf_t *buf, ares_hosts_entry_t *entry) { entry->hosts = ares__llist_create(ares_free); if (entry->hosts == NULL) { return ARES_ENOMEM; } /* Parse hostnames and aliases */ while (ares__buf_len(buf)) { char hostname[256]; char *temp; ares_status_t status; unsigned char comment = '#'; ares__buf_consume_whitespace(buf, ARES_FALSE); if (ares__buf_len(buf) == 0) { break; } /* See if it is a comment, if so stop processing */ if (ares__buf_begins_with(buf, &comment, 1)) { break; } ares__buf_tag(buf); /* Must be at end of line */ if (ares__buf_consume_nonwhitespace(buf) == 0) { break; } status = ares__buf_tag_fetch_string(buf, hostname, sizeof(hostname)); if (status != ARES_SUCCESS) { /* Bad entry, just ignore as long as its not the first. If its the first, * it must be valid */ if (ares__llist_len(entry->hosts) == 0) { return ARES_EBADSTR; } continue; } /* Validate it is a valid hostname characterset */ if (!ares__is_hostname(hostname)) { continue; } /* Don't add a duplicate to the same entry */ if (ares__hosts_entry_isdup(entry, hostname)) { continue; } /* Add to list */ temp = ares_strdup(hostname); if (temp == NULL) { return ARES_ENOMEM; } if (ares__llist_insert_last(entry->hosts, temp) == NULL) { ares_free(temp); return ARES_ENOMEM; } } /* Must have at least 1 entry */ if (ares__llist_len(entry->hosts) == 0) { return ARES_EBADSTR; } return ARES_SUCCESS; } static ares_status_t ares__parse_hosts_ipaddr(ares__buf_t *buf, ares_hosts_entry_t **entry_out) { char addr[INET6_ADDRSTRLEN]; char *temp; ares_hosts_entry_t *entry = NULL; ares_status_t status; *entry_out = NULL; ares__buf_tag(buf); ares__buf_consume_nonwhitespace(buf); status = ares__buf_tag_fetch_string(buf, addr, sizeof(addr)); if (status != ARES_SUCCESS) { return status; } /* Validate and normalize the ip address format */ if (!ares__normalize_ipaddr(addr, addr, sizeof(addr))) { return ARES_EBADSTR; } entry = ares_malloc_zero(sizeof(*entry)); if (entry == NULL) { return ARES_ENOMEM; } entry->ips = ares__llist_create(ares_free); if (entry->ips == NULL) { ares__hosts_entry_destroy(entry); return ARES_ENOMEM; } temp = ares_strdup(addr); if (temp == NULL) { ares__hosts_entry_destroy(entry); return ARES_ENOMEM; } if (ares__llist_insert_first(entry->ips, temp) == NULL) { ares_free(temp); ares__hosts_entry_destroy(entry); return ARES_ENOMEM; } *entry_out = entry; return ARES_SUCCESS; } static ares_status_t ares__parse_hosts(const char *filename, ares_hosts_file_t **out) { ares__buf_t *buf = NULL; ares_status_t status = ARES_EBADRESP; ares_hosts_file_t *hf = NULL; ares_hosts_entry_t *entry = NULL; *out = NULL; buf = ares__buf_create(); if (buf == NULL) { status = ARES_ENOMEM; goto done; } status = ares__buf_load_file(filename, buf); if (status != ARES_SUCCESS) { goto done; } hf = ares__hosts_file_create(filename); if (hf == NULL) { status = ARES_ENOMEM; goto done; } while 
(ares__buf_len(buf)) { unsigned char comment = '#'; /* -- Start of new line here -- */ /* Consume any leading whitespace */ ares__buf_consume_whitespace(buf, ARES_FALSE); if (ares__buf_len(buf) == 0) { break; } /* See if it is a comment, if so, consume remaining line */ if (ares__buf_begins_with(buf, &comment, 1)) { ares__buf_consume_line(buf, ARES_TRUE); continue; } /* Pull off ip address */ status = ares__parse_hosts_ipaddr(buf, &entry); if (status == ARES_ENOMEM) { goto done; } if (status != ARES_SUCCESS) { /* Bad line, consume and go onto next */ ares__buf_consume_line(buf, ARES_TRUE); continue; } /* Parse of the hostnames */ status = ares__parse_hosts_hostnames(buf, entry); if (status == ARES_ENOMEM) { goto done; } else if (status != ARES_SUCCESS) { /* Bad line, consume and go onto next */ ares__hosts_entry_destroy(entry); entry = NULL; ares__buf_consume_line(buf, ARES_TRUE); continue; } /* Append the successful entry to the hosts file */ status = ares__hosts_file_add(hf, entry); entry = NULL; /* is always invalidated by this function, even on error */ if (status != ARES_SUCCESS) { goto done; } /* Go to next line */ ares__buf_consume_line(buf, ARES_TRUE); } status = ARES_SUCCESS; done: ares__hosts_entry_destroy(entry); ares__buf_destroy(buf); if (status != ARES_SUCCESS) { ares__hosts_file_destroy(hf); } else { *out = hf; } return status; } static ares_bool_t ares__hosts_expired(const char *filename, const ares_hosts_file_t *hf) { time_t mod_ts = 0; #ifdef HAVE_STAT struct stat st; if (stat(filename, &st) == 0) { mod_ts = st.st_mtime; } #elif defined(_WIN32) struct _stat st; if (_stat(filename, &st) == 0) { mod_ts = st.st_mtime; } #else (void)filename; #endif if (hf == NULL) { return ARES_TRUE; } /* Expire every 60s if we can't get a time */ if (mod_ts == 0) { mod_ts = time(NULL) - 60; /* LCOV_EXCL_LINE: only on systems without stat() */ } /* If filenames are different, its expired */ if (strcasecmp(hf->filename, filename) != 0) { return ARES_TRUE; } if (hf->ts <= mod_ts) { return ARES_TRUE; } return ARES_FALSE; } static ares_status_t ares__hosts_path(const ares_channel_t *channel, ares_bool_t use_env, char **path) { char *path_hosts = NULL; *path = NULL; if (channel->hosts_path) { path_hosts = ares_strdup(channel->hosts_path); if (!path_hosts) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (use_env) { if (path_hosts) { ares_free(path_hosts); } path_hosts = ares_strdup(getenv("CARES_HOSTS")); if (!path_hosts) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (!path_hosts) { #if defined(USE_WINSOCK) char PATH_HOSTS[MAX_PATH] = ""; char tmp[MAX_PATH]; HKEY hkeyHosts; DWORD dwLength = sizeof(tmp); if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, WIN_NS_NT_KEY, 0, KEY_READ, &hkeyHosts) != ERROR_SUCCESS) { return ARES_ENOTFOUND; } RegQueryValueExA(hkeyHosts, DATABASEPATH, NULL, NULL, (LPBYTE)tmp, &dwLength); ExpandEnvironmentStringsA(tmp, PATH_HOSTS, MAX_PATH); RegCloseKey(hkeyHosts); strcat(PATH_HOSTS, WIN_PATH_HOSTS); #elif defined(WATT32) const char *PATH_HOSTS = _w32_GetHostsFile(); if (!PATH_HOSTS) { return ARES_ENOTFOUND; } #endif path_hosts = ares_strdup(PATH_HOSTS); if (!path_hosts) { return ARES_ENOMEM; } } *path = path_hosts; return ARES_SUCCESS; } static ares_status_t ares__hosts_update(ares_channel_t *channel, ares_bool_t use_env) { ares_status_t status; char *filename = NULL; status = ares__hosts_path(channel, use_env, &filename); if (status != ARES_SUCCESS) { return status; } if (!ares__hosts_expired(filename, channel->hf)) { ares_free(filename); return 
ARES_SUCCESS; } ares__hosts_file_destroy(channel->hf); channel->hf = NULL; status = ares__parse_hosts(filename, &channel->hf); ares_free(filename); return status; } ares_status_t ares__hosts_search_ipaddr(ares_channel_t *channel, ares_bool_t use_env, const char *ipaddr, const ares_hosts_entry_t **entry) { ares_status_t status; char addr[INET6_ADDRSTRLEN]; *entry = NULL; status = ares__hosts_update(channel, use_env); if (status != ARES_SUCCESS) { return status; } if (channel->hf == NULL) { return ARES_ENOTFOUND; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (!ares__normalize_ipaddr(ipaddr, addr, sizeof(addr))) { return ARES_EBADNAME; } *entry = ares__htable_strvp_get_direct(channel->hf->iphash, addr); if (*entry == NULL) { return ARES_ENOTFOUND; } return ARES_SUCCESS; } ares_status_t ares__hosts_search_host(ares_channel_t *channel, ares_bool_t use_env, const char *host, const ares_hosts_entry_t **entry) { ares_status_t status; *entry = NULL; status = ares__hosts_update(channel, use_env); if (status != ARES_SUCCESS) { return status; } if (channel->hf == NULL) { return ARES_ENOTFOUND; /* LCOV_EXCL_LINE: DefensiveCoding */ } *entry = ares__htable_strvp_get_direct(channel->hf->hosthash, host); if (*entry == NULL) { return ARES_ENOTFOUND; } return ARES_SUCCESS; } static ares_status_t ares__hosts_ai_append_cnames(const ares_hosts_entry_t *entry, struct ares_addrinfo_cname **cnames_out) { struct ares_addrinfo_cname *cname = NULL; struct ares_addrinfo_cname *cnames = NULL; const char *primaryhost; ares__llist_node_t *node; ares_status_t status; size_t cnt = 0; node = ares__llist_node_first(entry->hosts); primaryhost = ares__llist_node_val(node); /* Skip to next node to start with aliases */ node = ares__llist_node_next(node); while (node != NULL) { const char *host = ares__llist_node_val(node); /* Cap at 100 entries. 
, some people use * https://github.com/StevenBlack/hosts and we don't need 200k+ aliases */ cnt++; if (cnt > 100) { break; /* LCOV_EXCL_LINE: DefensiveCoding */ } cname = ares__append_addrinfo_cname(&cnames); if (cname == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cname->alias = ares_strdup(host); if (cname->alias == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cname->name = ares_strdup(primaryhost); if (cname->name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } node = ares__llist_node_next(node); } /* No entries, add only primary */ if (cnames == NULL) { cname = ares__append_addrinfo_cname(&cnames); if (cname == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cname->name = ares_strdup(primaryhost); if (cname->name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: if (status != ARES_SUCCESS) { ares__freeaddrinfo_cnames(cnames); /* LCOV_EXCL_LINE: DefensiveCoding */ return status; /* LCOV_EXCL_LINE: DefensiveCoding */ } *cnames_out = cnames; return ARES_SUCCESS; } ares_status_t ares__hosts_entry_to_addrinfo(const ares_hosts_entry_t *entry, const char *name, int family, unsigned short port, ares_bool_t want_cnames, struct ares_addrinfo *ai) { ares_status_t status; struct ares_addrinfo_cname *cnames = NULL; struct ares_addrinfo_node *ainodes = NULL; ares__llist_node_t *node; switch (family) { case AF_INET: case AF_INET6: case AF_UNSPEC: break; default: /* LCOV_EXCL_LINE: DefensiveCoding */ return ARES_EBADFAMILY; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (name != NULL) { ai->name = ares_strdup(name); if (ai->name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } for (node = ares__llist_node_first(entry->ips); node != NULL; node = ares__llist_node_next(node)) { struct ares_addr addr; const void *ptr = NULL; size_t ptr_len = 0; const char *ipaddr = ares__llist_node_val(node); memset(&addr, 0, sizeof(addr)); addr.family = family; ptr = ares_dns_pton(ipaddr, &addr, &ptr_len); if (ptr == NULL) { continue; } status = ares_append_ai_node(addr.family, port, 0, ptr, &ainodes); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } } if (want_cnames) { status = ares__hosts_ai_append_cnames(entry, &cnames); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } } status = ARES_SUCCESS; done: if (status != ARES_SUCCESS) { /* LCOV_EXCL_START: defensive coding */ ares__freeaddrinfo_cnames(cnames); ares__freeaddrinfo_nodes(ainodes); ares_free(ai->name); ai->name = NULL; return status; /* LCOV_EXCL_STOP */ } ares__addrinfo_cat_cnames(&ai->cnames, cnames); ares__addrinfo_cat_nodes(&ai->nodes, ainodes); return status; } ares_status_t ares__hosts_entry_to_hostent(const ares_hosts_entry_t *entry, int family, struct hostent **hostent) { ares_status_t status; struct ares_addrinfo *ai = ares_malloc_zero(sizeof(*ai)); *hostent = NULL; if (ai == NULL) { return ARES_ENOMEM; } status = ares__hosts_entry_to_addrinfo(entry, NULL, family, 0, ARES_TRUE, ai); if (status != ARES_SUCCESS) { goto done; } status = ares__addrinfo2hostent(ai, family, hostent); if (status != ARES_SUCCESS) { goto done; } done: ares_freeaddrinfo(ai); if (status != ARES_SUCCESS) { 
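/* Conversion failed at one of the steps above; free any partially built
 * hostent so the caller is never handed a stale pointer. */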
ares_free_hostent(*hostent); *hostent = NULL; } return status; } gevent-24.11.1/deps/c-ares/src/lib/ares__parse_into_addrinfo.c000066400000000000000000000133301471441230600241420ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2019 Andrew Selivanov * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_LIMITS_H # include #endif ares_status_t ares__parse_into_addrinfo(const ares_dns_record_t *dnsrec, ares_bool_t cname_only_is_enodata, unsigned short port, struct ares_addrinfo *ai) { ares_status_t status; size_t i; size_t ancount; const char *hostname = NULL; ares_bool_t got_a = ARES_FALSE; ares_bool_t got_aaaa = ARES_FALSE; ares_bool_t got_cname = ARES_FALSE; struct ares_addrinfo_cname *cnames = NULL; struct ares_addrinfo_node *nodes = NULL; /* Save question hostname */ status = ares_dns_record_query_get(dnsrec, 0, &hostname, NULL, NULL); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } ancount = ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); if (ancount == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ancount; i++) { ares_dns_rec_type_t rtype; const ares_dns_rr_t *rr = ares_dns_record_rr_get_const(dnsrec, ARES_SECTION_ANSWER, i); if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN) { continue; } rtype = ares_dns_rr_get_type(rr); /* Issue #683 * Old code did this hostname sanity check, however it appears this is * flawed logic. Other resolvers don't do this sanity check. Leaving * this code commented out for future reference. * * rname = ares_dns_rr_get_name(rr); * if ((rtype == ARES_REC_TYPE_A || rtype == ARES_REC_TYPE_AAAA) && * strcasecmp(rname, hostname) != 0) { * continue; * } */ if (rtype == ARES_REC_TYPE_CNAME) { struct ares_addrinfo_cname *cname; got_cname = ARES_TRUE; /* replace hostname with data from cname * SA: Seems wrong as it introduces order dependency. 
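* For illustration with a hypothetical chain (CNAME a -> b, then CNAME
* b -> c): 'hostname' is rewritten to "b" and then to "c", and that final
* value is what gets saved as ai->name below. If the same records arrived
* in the opposite order, ai->name would end up as "b" instead, which is
* the order dependency noted above.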
*/ hostname = ares_dns_rr_get_str(rr, ARES_RR_CNAME_CNAME); cname = ares__append_addrinfo_cname(&cnames); if (cname == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cname->ttl = (int)ares_dns_rr_get_ttl(rr); cname->alias = ares_strdup(ares_dns_rr_get_name(rr)); if (cname->alias == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cname->name = ares_strdup(hostname); if (cname->name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } else if (rtype == ARES_REC_TYPE_A) { got_a = ARES_TRUE; status = ares_append_ai_node(AF_INET, port, ares_dns_rr_get_ttl(rr), ares_dns_rr_get_addr(rr, ARES_RR_A_ADDR), &nodes); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } else if (rtype == ARES_REC_TYPE_AAAA) { got_aaaa = ARES_TRUE; status = ares_append_ai_node(AF_INET6, port, ares_dns_rr_get_ttl(rr), ares_dns_rr_get_addr6(rr, ARES_RR_AAAA_ADDR), &nodes); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } else { continue; } } if (!got_a && !got_aaaa && (!got_cname || (got_cname && cname_only_is_enodata))) { status = ARES_ENODATA; goto done; } /* save the hostname as ai->name */ if (ai->name == NULL || strcasecmp(ai->name, hostname) != 0) { ares_free(ai->name); ai->name = ares_strdup(hostname); if (ai->name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (got_a || got_aaaa) { ares__addrinfo_cat_nodes(&ai->nodes, nodes); nodes = NULL; } if (got_cname) { ares__addrinfo_cat_cnames(&ai->cnames, cnames); cnames = NULL; } done: ares__freeaddrinfo_cnames(cnames); ares__freeaddrinfo_nodes(nodes); /* compatibility */ if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } return status; } gevent-24.11.1/deps/c-ares/src/lib/ares__socket.c000066400000000000000000000554761471441230600214420ustar00rootroot00000000000000/* MIT License * * Copyright (c) Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_UIO_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETINET_TCP_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_SYS_IOCTL_H # include #endif #ifdef NETWARE # include #endif #include #include #include #if defined(__linux__) && defined(TCP_FASTOPEN_CONNECT) # define TFO_SUPPORTED 1 # define TFO_SKIP_CONNECT 0 # define TFO_USE_SENDTO 0 # define TFO_USE_CONNECTX 0 # define TFO_CLIENT_SOCKOPT TCP_FASTOPEN_CONNECT #elif defined(__FreeBSD__) && defined(TCP_FASTOPEN) # define TFO_SUPPORTED 1 # define TFO_SKIP_CONNECT 1 # define TFO_USE_SENDTO 1 # define TFO_USE_CONNECTX 0 # define TFO_CLIENT_SOCKOPT TCP_FASTOPEN #elif defined(__APPLE__) && defined(HAVE_CONNECTX) # define TFO_SUPPORTED 1 # define TFO_SKIP_CONNECT 0 # define TFO_USE_SENDTO 0 # define TFO_USE_CONNECTX 1 # undef TFO_CLIENT_SOCKOPT #else # define TFO_SUPPORTED 0 #endif #ifndef HAVE_WRITEV /* Structure for scatter/gather I/O. */ struct iovec { void *iov_base; /* Pointer to data. */ size_t iov_len; /* Length of data. */ }; #endif /* Return 1 if the specified error number describes a readiness error, or 0 * otherwise. This is mostly for HP-UX, which could return EAGAIN or * EWOULDBLOCK. See this man page * * http://devrsrc1.external.hp.com/STKS/cgi-bin/man2html? * manpage=/usr/share/man/man2.Z/send.2 */ ares_bool_t ares__socket_try_again(int errnum) { #if !defined EWOULDBLOCK && !defined EAGAIN # error "Neither EWOULDBLOCK nor EAGAIN defined" #endif #ifdef EWOULDBLOCK if (errnum == EWOULDBLOCK) { return ARES_TRUE; } #endif #if defined EAGAIN && EAGAIN != EWOULDBLOCK if (errnum == EAGAIN) { return ARES_TRUE; } #endif return ARES_FALSE; } ares_ssize_t ares__socket_recv(ares_channel_t *channel, ares_socket_t s, void *data, size_t data_len) { if (channel->sock_funcs && channel->sock_funcs->arecvfrom) { return channel->sock_funcs->arecvfrom(s, data, data_len, 0, 0, 0, channel->sock_func_cb_data); } return (ares_ssize_t)recv((RECV_TYPE_ARG1)s, (RECV_TYPE_ARG2)data, (RECV_TYPE_ARG3)data_len, (RECV_TYPE_ARG4)(0)); } ares_ssize_t ares__socket_recvfrom(ares_channel_t *channel, ares_socket_t s, void *data, size_t data_len, int flags, struct sockaddr *from, ares_socklen_t *from_len) { if (channel->sock_funcs && channel->sock_funcs->arecvfrom) { return channel->sock_funcs->arecvfrom(s, data, data_len, flags, from, from_len, channel->sock_func_cb_data); } #ifdef HAVE_RECVFROM return (ares_ssize_t)recvfrom(s, data, (RECVFROM_TYPE_ARG3)data_len, flags, from, from_len); #else return ares__socket_recv(channel, s, data, data_len); #endif } /* Use like: * struct sockaddr_storage sa_storage; * ares_socklen_t salen = sizeof(sa_storage); * struct sockaddr *sa = (struct sockaddr *)&sa_storage; * ares__conn_set_sockaddr(conn, sa, &salen); */ static ares_status_t ares__conn_set_sockaddr(const ares_conn_t *conn, struct sockaddr *sa, ares_socklen_t *salen) { const ares_server_t *server = conn->server; unsigned short port = conn->flags & ARES_CONN_FLAG_TCP ? 
server->tcp_port : server->udp_port; struct sockaddr_in *sin; struct sockaddr_in6 *sin6; switch (server->addr.family) { case AF_INET: sin = (struct sockaddr_in *)(void *)sa; if (*salen < (ares_socklen_t)sizeof(*sin)) { return ARES_EFORMERR; } *salen = sizeof(*sin); memset(sin, 0, sizeof(*sin)); sin->sin_family = AF_INET; sin->sin_port = htons(port); memcpy(&sin->sin_addr, &server->addr.addr.addr4, sizeof(sin->sin_addr)); return ARES_SUCCESS; case AF_INET6: sin6 = (struct sockaddr_in6 *)(void *)sa; if (*salen < (ares_socklen_t)sizeof(*sin6)) { return ARES_EFORMERR; } *salen = sizeof(*sin6); memset(sin6, 0, sizeof(*sin6)); sin6->sin6_family = AF_INET6; sin6->sin6_port = htons(port); memcpy(&sin6->sin6_addr, &server->addr.addr.addr6, sizeof(sin6->sin6_addr)); #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID sin6->sin6_scope_id = server->ll_scope; #endif return ARES_SUCCESS; default: break; } return ARES_EBADFAMILY; } static ares_status_t ares_conn_set_self_ip(ares_conn_t *conn, ares_bool_t early) { struct sockaddr_storage sa_storage; int rv; ares_socklen_t len = sizeof(sa_storage); /* We call this twice on TFO, if we already have the IP we can go ahead and * skip processing */ if (!early && conn->self_ip.family != AF_UNSPEC) { return ARES_SUCCESS; } memset(&sa_storage, 0, sizeof(sa_storage)); rv = getsockname(conn->fd, (struct sockaddr *)(void *)&sa_storage, &len); if (rv != 0) { /* During TCP FastOpen, we can't get the IP this early since connect() * may not be called. That's ok, we'll try again later */ if (early && conn->flags & ARES_CONN_FLAG_TCP && conn->flags & ARES_CONN_FLAG_TFO) { memset(&conn->self_ip, 0, sizeof(conn->self_ip)); return ARES_SUCCESS; } return ARES_ECONNREFUSED; } if (!ares_sockaddr_to_ares_addr(&conn->self_ip, NULL, (struct sockaddr *)(void *)&sa_storage)) { return ARES_ECONNREFUSED; } return ARES_SUCCESS; } ares_ssize_t ares__conn_write(ares_conn_t *conn, const void *data, size_t len) { ares_channel_t *channel = conn->server->channel; int flags = 0; #ifdef HAVE_MSG_NOSIGNAL flags |= MSG_NOSIGNAL; #endif if (channel->sock_funcs && channel->sock_funcs->asendv) { struct iovec vec; vec.iov_base = (void *)((size_t)data); /* Cast off const */ vec.iov_len = len; return channel->sock_funcs->asendv(conn->fd, &vec, 1, channel->sock_func_cb_data); } if (conn->flags & ARES_CONN_FLAG_TFO_INITIAL) { conn->flags &= ~((unsigned int)ARES_CONN_FLAG_TFO_INITIAL); #if defined(TFO_USE_SENDTO) && TFO_USE_SENDTO { struct sockaddr_storage sa_storage; ares_socklen_t salen = sizeof(sa_storage); struct sockaddr *sa = (struct sockaddr *)&sa_storage; ares_status_t status; ares_ssize_t rv; status = ares__conn_set_sockaddr(conn, sa, &salen); if (status != ARES_SUCCESS) { return status; } rv = (ares_ssize_t)sendto((SEND_TYPE_ARG1)conn->fd, (SEND_TYPE_ARG2)data, (SEND_TYPE_ARG3)len, (SEND_TYPE_ARG4)flags, sa, salen); /* If using TFO, we might not have been able to get an IP earlier, since * we hadn't informed the OS of the destination. When using sendto() * now we have so we should be able to fetch it */ ares_conn_set_self_ip(conn, ARES_TRUE); return rv; } #endif } return (ares_ssize_t)send((SEND_TYPE_ARG1)conn->fd, (SEND_TYPE_ARG2)data, (SEND_TYPE_ARG3)len, (SEND_TYPE_ARG4)flags); } /* * setsocknonblock sets the given socket to either blocking or non-blocking * mode based on the 'nonblock' boolean argument. This function is highly * portable. 
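* As used below (see configure_socket), setsocknonblock(conn->fd, 1) puts
* the socket into non-blocking mode so that connect() and send() return
* immediately with EINPROGRESS/EWOULDBLOCK rather than stalling the caller.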
*/ static int setsocknonblock(ares_socket_t sockfd, /* operate on this */ int nonblock /* TRUE or FALSE */) { #if defined(USE_BLOCKING_SOCKETS) return 0; /* returns success */ #elif defined(HAVE_FCNTL_O_NONBLOCK) /* most recent unix versions */ int flags; flags = fcntl(sockfd, F_GETFL, 0); if (nonblock) { return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK); } else { return fcntl(sockfd, F_SETFL, flags & (~O_NONBLOCK)); /* LCOV_EXCL_LINE */ } #elif defined(HAVE_IOCTL_FIONBIO) /* older unix versions */ int flags = nonblock ? 1 : 0; return ioctl(sockfd, FIONBIO, &flags); #elif defined(HAVE_IOCTLSOCKET_FIONBIO) # ifdef WATT32 char flags = nonblock ? 1 : 0; # else /* Windows */ unsigned long flags = nonblock ? 1UL : 0UL; # endif return ioctlsocket(sockfd, (long)FIONBIO, &flags); #elif defined(HAVE_IOCTLSOCKET_CAMEL_FIONBIO) /* Amiga */ long flags = nonblock ? 1L : 0L; return IoctlSocket(sockfd, FIONBIO, flags); #elif defined(HAVE_SETSOCKOPT_SO_NONBLOCK) /* BeOS */ long b = nonblock ? 1L : 0L; return setsockopt(sockfd, SOL_SOCKET, SO_NONBLOCK, &b, sizeof(b)); #else # error "no non-blocking method was found/used/set" #endif } #if defined(IPV6_V6ONLY) && defined(USE_WINSOCK) /* It makes support for IPv4-mapped IPv6 addresses. * Linux kernel, NetBSD, FreeBSD and Darwin: default is off; * Windows Vista and later: default is on; * DragonFly BSD: acts like off, and dummy setting; * OpenBSD and earlier Windows: unsupported. * Linux: controlled by /proc/sys/net/ipv6/bindv6only. */ static void set_ipv6_v6only(ares_socket_t sockfd, int on) { (void)setsockopt(sockfd, IPPROTO_IPV6, IPV6_V6ONLY, (void *)&on, sizeof(on)); } #else # define set_ipv6_v6only(s, v) #endif static ares_status_t configure_socket(ares_conn_t *conn) { union { struct sockaddr sa; struct sockaddr_in sa4; struct sockaddr_in6 sa6; } local; ares_socklen_t bindlen = 0; ares_server_t *server = conn->server; ares_channel_t *channel = server->channel; /* do not set options for user-managed sockets */ if (channel->sock_funcs && channel->sock_funcs->asocket) { return ARES_SUCCESS; } (void)setsocknonblock(conn->fd, 1); #if defined(FD_CLOEXEC) && !defined(MSDOS) /* Configure the socket fd as close-on-exec. */ if (fcntl(conn->fd, F_SETFD, FD_CLOEXEC) != 0) { return ARES_ECONNREFUSED; /* LCOV_EXCL_LINE */ } #endif /* No need to emit SIGPIPE on socket errors */ #if defined(SO_NOSIGPIPE) { int opt = 1; (void)setsockopt(conn->fd, SOL_SOCKET, SO_NOSIGPIPE, (void *)&opt, sizeof(opt)); } #endif /* Set the socket's send and receive buffer sizes. */ if (channel->socket_send_buffer_size > 0 && setsockopt(conn->fd, SOL_SOCKET, SO_SNDBUF, (void *)&channel->socket_send_buffer_size, sizeof(channel->socket_send_buffer_size)) != 0) { return ARES_ECONNREFUSED; /* LCOV_EXCL_LINE: UntestablePath */ } if (channel->socket_receive_buffer_size > 0 && setsockopt(conn->fd, SOL_SOCKET, SO_RCVBUF, (void *)&channel->socket_receive_buffer_size, sizeof(channel->socket_receive_buffer_size)) != 0) { return ARES_ECONNREFUSED; /* LCOV_EXCL_LINE: UntestablePath */ } #ifdef SO_BINDTODEVICE if (ares_strlen(channel->local_dev_name)) { /* Only root can do this, and usually not fatal if it doesn't work, so * just continue on. 
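* The return value of the setsockopt() call below is deliberately discarded
* via the (void) cast for exactly that reason.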
*/ (void)setsockopt(conn->fd, SOL_SOCKET, SO_BINDTODEVICE, channel->local_dev_name, sizeof(channel->local_dev_name)); } #endif if (server->addr.family == AF_INET && channel->local_ip4) { memset(&local.sa4, 0, sizeof(local.sa4)); local.sa4.sin_family = AF_INET; local.sa4.sin_addr.s_addr = htonl(channel->local_ip4); bindlen = sizeof(local.sa4); } else if (server->addr.family == AF_INET6 && server->ll_scope == 0 && memcmp(channel->local_ip6, ares_in6addr_any._S6_un._S6_u8, sizeof(channel->local_ip6)) != 0) { /* Only if not link-local and an ip other than "::" is specified */ memset(&local.sa6, 0, sizeof(local.sa6)); local.sa6.sin6_family = AF_INET6; memcpy(&local.sa6.sin6_addr, channel->local_ip6, sizeof(channel->local_ip6)); bindlen = sizeof(local.sa6); } if (bindlen && bind(conn->fd, &local.sa, bindlen) < 0) { return ARES_ECONNREFUSED; } if (server->addr.family == AF_INET6) { set_ipv6_v6only(conn->fd, 0); } if (conn->flags & ARES_CONN_FLAG_TCP) { int opt = 1; #ifdef TCP_NODELAY /* * Disable the Nagle algorithm (only relevant for TCP sockets, and thus not * in configure_socket). In general, in DNS lookups we're pretty much * interested in firing off a single request and then waiting for a reply, * so batching isn't very interesting. */ if (setsockopt(conn->fd, IPPROTO_TCP, TCP_NODELAY, (void *)&opt, sizeof(opt)) != 0) { return ARES_ECONNREFUSED; } #endif #if defined(TFO_CLIENT_SOCKOPT) if (conn->flags & ARES_CONN_FLAG_TFO && setsockopt(conn->fd, IPPROTO_TCP, TFO_CLIENT_SOCKOPT, (void *)&opt, sizeof(opt)) != 0) { /* Disable TFO if flag can't be set. */ conn->flags &= ~((unsigned int)ARES_CONN_FLAG_TFO); } #endif } return ARES_SUCCESS; } ares_bool_t ares_sockaddr_to_ares_addr(struct ares_addr *ares_addr, unsigned short *port, const struct sockaddr *sockaddr) { if (sockaddr->sa_family == AF_INET) { /* NOTE: memcpy sockaddr_in due to alignment issues found by UBSAN due to * dnsinfo packing on MacOS */ struct sockaddr_in sockaddr_in; memcpy(&sockaddr_in, sockaddr, sizeof(sockaddr_in)); ares_addr->family = AF_INET; memcpy(&ares_addr->addr.addr4, &(sockaddr_in.sin_addr), sizeof(ares_addr->addr.addr4)); if (port) { *port = ntohs(sockaddr_in.sin_port); } return ARES_TRUE; } if (sockaddr->sa_family == AF_INET6) { /* NOTE: memcpy sockaddr_in6 due to alignment issues found by UBSAN due to * dnsinfo packing on MacOS */ struct sockaddr_in6 sockaddr_in6; memcpy(&sockaddr_in6, sockaddr, sizeof(sockaddr_in6)); ares_addr->family = AF_INET6; memcpy(&ares_addr->addr.addr6, &(sockaddr_in6.sin6_addr), sizeof(ares_addr->addr.addr6)); if (port) { *port = ntohs(sockaddr_in6.sin6_port); } return ARES_TRUE; } return ARES_FALSE; } static ares_status_t ares__conn_connect(ares_conn_t *conn, struct sockaddr *sa, ares_socklen_t salen) { /* Normal non TCPFastOpen style connect */ if (!(conn->flags & ARES_CONN_FLAG_TFO)) { return ares__connect_socket(conn->server->channel, conn->fd, sa, salen); } /* FreeBSD don't want any sort of connect() so skip */ #if defined(TFO_SKIP_CONNECT) && TFO_SKIP_CONNECT return ARES_SUCCESS; #elif defined(TFO_USE_CONNECTX) && TFO_USE_CONNECTX { int rv; int err; do { sa_endpoints_t endpoints; memset(&endpoints, 0, sizeof(endpoints)); endpoints.sae_dstaddr = sa; endpoints.sae_dstaddrlen = salen; rv = connectx(conn->fd, &endpoints, SAE_ASSOCID_ANY, CONNECT_DATA_IDEMPOTENT | CONNECT_RESUME_ON_READ_WRITE, NULL, 0, NULL, NULL); err = SOCKERRNO; if (rv == -1 && err != EINPROGRESS && err != EWOULDBLOCK) { return ARES_ECONNREFUSED; } } while (rv == -1 && err == EINTR); } return ARES_SUCCESS; #elif 
defined(TFO_SUPPORTED) && TFO_SUPPORTED return ares__connect_socket(conn->server->channel, conn->fd, sa, salen); #else /* Shouldn't be possible */ return ARES_ECONNREFUSED; #endif } ares_status_t ares__open_connection(ares_conn_t **conn_out, ares_channel_t *channel, ares_server_t *server, ares_bool_t is_tcp) { ares_status_t status; struct sockaddr_storage sa_storage; ares_socklen_t salen = sizeof(sa_storage); struct sockaddr *sa = (struct sockaddr *)&sa_storage; ares_conn_t *conn; ares__llist_node_t *node = NULL; int stype = is_tcp ? SOCK_STREAM : SOCK_DGRAM; *conn_out = NULL; conn = ares_malloc(sizeof(*conn)); if (conn == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(conn, 0, sizeof(*conn)); conn->fd = ARES_SOCKET_BAD; conn->server = server; conn->queries_to_conn = ares__llist_create(NULL); conn->flags = is_tcp ? ARES_CONN_FLAG_TCP : ARES_CONN_FLAG_NONE; /* Enable TFO if the OS supports it and we were passed in data to send during * the connect. It might be disabled later if an error is encountered. Make * sure a user isn't overriding anything. */ if (conn->flags & ARES_CONN_FLAG_TCP && channel->sock_funcs == NULL && TFO_SUPPORTED) { conn->flags |= ARES_CONN_FLAG_TFO; } if (conn->queries_to_conn == NULL) { /* LCOV_EXCL_START: OutOfMemory */ status = ARES_ENOMEM; goto done; /* LCOV_EXCL_STOP */ } /* Convert into the struct sockaddr structure needed by the OS */ status = ares__conn_set_sockaddr(conn, sa, &salen); if (status != ARES_SUCCESS) { goto done; } /* Acquire a socket. */ conn->fd = ares__open_socket(channel, server->addr.family, stype, 0); if (conn->fd == ARES_SOCKET_BAD) { status = ARES_ECONNREFUSED; goto done; } /* Configure it. */ status = configure_socket(conn); if (status != ARES_SUCCESS) { goto done; } if (channel->sock_config_cb) { int err = channel->sock_config_cb(conn->fd, stype, channel->sock_config_cb_data); if (err < 0) { status = ARES_ECONNREFUSED; goto done; } } /* Connect */ status = ares__conn_connect(conn, sa, salen); if (status != ARES_SUCCESS) { goto done; } if (channel->sock_create_cb) { int err = channel->sock_create_cb(conn->fd, stype, channel->sock_create_cb_data); if (err < 0) { status = ARES_ECONNREFUSED; goto done; } } /* Let the connection know we haven't written our first packet yet for TFO */ if (conn->flags & ARES_CONN_FLAG_TFO) { conn->flags |= ARES_CONN_FLAG_TFO_INITIAL; } /* Need to store our own ip for DNS cookie support */ status = ares_conn_set_self_ip(conn, ARES_FALSE); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: UntestablePath */ } /* TCP connections are thrown to the end as we don't spawn multiple TCP * connections. UDP connections are put on front where the newest connection * can be quickly pulled */ if (is_tcp) { node = ares__llist_insert_last(server->connections, conn); } else { node = ares__llist_insert_first(server->connections, conn); } if (node == NULL) { /* LCOV_EXCL_START: OutOfMemory */ status = ARES_ENOMEM; goto done; /* LCOV_EXCL_STOP */ } /* Register globally to quickly map event on file descriptor to connection * node object */ if (!ares__htable_asvp_insert(channel->connnode_by_socket, conn->fd, node)) { /* LCOV_EXCL_START: OutOfMemory */ status = ARES_ENOMEM; goto done; /* LCOV_EXCL_STOP */ } SOCK_STATE_CALLBACK(channel, conn->fd, 1, is_tcp ? 
1 : 0); if (is_tcp) { server->tcp_conn = conn; } done: if (status != ARES_SUCCESS) { ares__llist_node_claim(node); ares__llist_destroy(conn->queries_to_conn); ares__close_socket(channel, conn->fd); ares_free(conn); } else { *conn_out = conn; } return status; } ares_socket_t ares__open_socket(ares_channel_t *channel, int af, int type, int protocol) { if (channel->sock_funcs && channel->sock_funcs->asocket) { return channel->sock_funcs->asocket(af, type, protocol, channel->sock_func_cb_data); } return socket(af, type, protocol); } ares_status_t ares__connect_socket(ares_channel_t *channel, ares_socket_t sockfd, const struct sockaddr *addr, ares_socklen_t addrlen) { int rv; int err; do { if (channel->sock_funcs && channel->sock_funcs->aconnect) { rv = channel->sock_funcs->aconnect(sockfd, addr, addrlen, channel->sock_func_cb_data); } else { rv = connect(sockfd, addr, addrlen); } err = SOCKERRNO; if (rv == -1 && err != EINPROGRESS && err != EWOULDBLOCK) { return ARES_ECONNREFUSED; } } while (rv == -1 && err == EINTR); return ARES_SUCCESS; } void ares__close_socket(ares_channel_t *channel, ares_socket_t s) { if (s == ARES_SOCKET_BAD) { return; } if (channel->sock_funcs && channel->sock_funcs->aclose) { channel->sock_funcs->aclose(s, channel->sock_func_cb_data); } else { sclose(s); } } void ares_set_socket_callback(ares_channel_t *channel, ares_sock_create_callback cb, void *data) { if (channel == NULL) { return; } channel->sock_create_cb = cb; channel->sock_create_cb_data = data; } void ares_set_socket_configure_callback(ares_channel_t *channel, ares_sock_config_callback cb, void *data) { if (channel == NULL || channel->optmask & ARES_OPT_EVENT_THREAD) { return; } channel->sock_config_cb = cb; channel->sock_config_cb_data = data; } void ares_set_socket_functions(ares_channel_t *channel, const struct ares_socket_functions *funcs, void *data) { if (channel == NULL || channel->optmask & ARES_OPT_EVENT_THREAD) { return; } channel->sock_funcs = funcs; channel->sock_func_cb_data = data; } gevent-24.11.1/deps/c-ares/src/lib/ares__sortaddrinfo.c000066400000000000000000000336031471441230600226340ustar00rootroot00000000000000/* * Original file name getaddrinfo.c * Lifted from the 'Android Bionic' project with the BSD license. */ /* * Copyright (C) 1995, 1996, 1997, and 1998 WIDE Project. * Copyright (C) 2018 The Android Open Source Project * Copyright (C) 2019 Andrew Selivanov * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * 3. Neither the name of the project nor the names of its contributors * may be used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. 
IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. * * SPDX-License-Identifier: BSD-3-Clause */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #include #include struct addrinfo_sort_elem { struct ares_addrinfo_node *ai; ares_bool_t has_src_addr; ares_sockaddr src_addr; size_t original_order; }; #define ARES_IPV6_ADDR_MC_SCOPE(a) ((a)->s6_addr[1] & 0x0f) #define ARES_IPV6_ADDR_SCOPE_NODELOCAL 0x01 #define ARES_IPV6_ADDR_SCOPE_INTFACELOCAL 0x01 #define ARES_IPV6_ADDR_SCOPE_LINKLOCAL 0x02 #define ARES_IPV6_ADDR_SCOPE_SITELOCAL 0x05 #define ARES_IPV6_ADDR_SCOPE_ORGLOCAL 0x08 #define ARES_IPV6_ADDR_SCOPE_GLOBAL 0x0e #define ARES_IN_LOOPBACK(a) \ ((((long unsigned int)(a)) & 0xff000000) == 0x7f000000) /* RFC 4193. */ #define ARES_IN6_IS_ADDR_ULA(a) (((a)->s6_addr[0] & 0xfe) == 0xfc) /* These macros are modelled after the ones in . */ /* RFC 4380, section 2.6 */ #define ARES_IN6_IS_ADDR_TEREDO(a) \ ((*(const unsigned int *)(const void *)(&(a)->s6_addr[0]) == \ ntohl(0x20010000))) /* RFC 3056, section 2. */ #define ARES_IN6_IS_ADDR_6TO4(a) \ (((a)->s6_addr[0] == 0x20) && ((a)->s6_addr[1] == 0x02)) /* 6bone testing address area (3ffe::/16), deprecated in RFC 3701. */ #define ARES_IN6_IS_ADDR_6BONE(a) \ (((a)->s6_addr[0] == 0x3f) && ((a)->s6_addr[1] == 0xfe)) static int get_scope(const struct sockaddr *addr) { if (addr->sa_family == AF_INET6) { const struct sockaddr_in6 *addr6 = CARES_INADDR_CAST(const struct sockaddr_in6 *, addr); if (IN6_IS_ADDR_MULTICAST(&addr6->sin6_addr)) { return ARES_IPV6_ADDR_MC_SCOPE(&addr6->sin6_addr); } else if (IN6_IS_ADDR_LOOPBACK(&addr6->sin6_addr) || IN6_IS_ADDR_LINKLOCAL(&addr6->sin6_addr)) { /* * RFC 4291 section 2.5.3 says loopback is to be treated as having * link-local scope. */ return ARES_IPV6_ADDR_SCOPE_LINKLOCAL; } else if (IN6_IS_ADDR_SITELOCAL(&addr6->sin6_addr)) { return ARES_IPV6_ADDR_SCOPE_SITELOCAL; } else { return ARES_IPV6_ADDR_SCOPE_GLOBAL; } } else if (addr->sa_family == AF_INET) { const struct sockaddr_in *addr4 = CARES_INADDR_CAST(const struct sockaddr_in *, addr); unsigned long int na = ntohl(addr4->sin_addr.s_addr); if (ARES_IN_LOOPBACK(na) || /* 127.0.0.0/8 */ (na & 0xffff0000) == 0xa9fe0000) /* 169.254.0.0/16 */ { return ARES_IPV6_ADDR_SCOPE_LINKLOCAL; } else { /* * RFC 6724 section 3.2. Other IPv4 addresses, including private * addresses and shared addresses (100.64.0.0/10), are assigned global * scope. */ return ARES_IPV6_ADDR_SCOPE_GLOBAL; } } else { /* * This should never happen. * Return a scope with low priority as a last resort. 
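* (ARES_IPV6_ADDR_SCOPE_NODELOCAL, defined as 0x01 above, is the narrowest
* scope value used in this file.)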
*/ return ARES_IPV6_ADDR_SCOPE_NODELOCAL; } } static int get_label(const struct sockaddr *addr) { if (addr->sa_family == AF_INET) { return 4; } else if (addr->sa_family == AF_INET6) { const struct sockaddr_in6 *addr6 = CARES_INADDR_CAST(const struct sockaddr_in6 *, addr); if (IN6_IS_ADDR_LOOPBACK(&addr6->sin6_addr)) { return 0; } else if (IN6_IS_ADDR_V4MAPPED(&addr6->sin6_addr)) { return 4; } else if (ARES_IN6_IS_ADDR_6TO4(&addr6->sin6_addr)) { return 2; } else if (ARES_IN6_IS_ADDR_TEREDO(&addr6->sin6_addr)) { return 5; } else if (ARES_IN6_IS_ADDR_ULA(&addr6->sin6_addr)) { return 13; } else if (IN6_IS_ADDR_V4COMPAT(&addr6->sin6_addr)) { return 3; } else if (IN6_IS_ADDR_SITELOCAL(&addr6->sin6_addr)) { return 11; } else if (ARES_IN6_IS_ADDR_6BONE(&addr6->sin6_addr)) { return 12; } else { /* All other IPv6 addresses, including global unicast addresses. */ return 1; } } else { /* * This should never happen. * Return a semi-random label as a last resort. */ return 1; } } /* * Get the precedence for a given IPv4/IPv6 address. * RFC 6724, section 2.1. */ static int get_precedence(const struct sockaddr *addr) { if (addr->sa_family == AF_INET) { return 35; } else if (addr->sa_family == AF_INET6) { const struct sockaddr_in6 *addr6 = CARES_INADDR_CAST(const struct sockaddr_in6 *, addr); if (IN6_IS_ADDR_LOOPBACK(&addr6->sin6_addr)) { return 50; } else if (IN6_IS_ADDR_V4MAPPED(&addr6->sin6_addr)) { return 35; } else if (ARES_IN6_IS_ADDR_6TO4(&addr6->sin6_addr)) { return 30; } else if (ARES_IN6_IS_ADDR_TEREDO(&addr6->sin6_addr)) { return 5; } else if (ARES_IN6_IS_ADDR_ULA(&addr6->sin6_addr)) { return 3; } else if (IN6_IS_ADDR_V4COMPAT(&addr6->sin6_addr) || IN6_IS_ADDR_SITELOCAL(&addr6->sin6_addr) || ARES_IN6_IS_ADDR_6BONE(&addr6->sin6_addr)) { return 1; } else { /* All other IPv6 addresses, including global unicast addresses. */ return 40; } } else { return 1; } } /* * Find number of matching initial bits between the two addresses a1 and a2. */ static size_t common_prefix_len(const struct in6_addr *a1, const struct in6_addr *a2) { const unsigned char *p1 = (const unsigned char *)a1; const unsigned char *p2 = (const unsigned char *)a2; size_t i; for (i = 0; i < sizeof(*a1); ++i) { unsigned char x; size_t j; if (p1[i] == p2[i]) { continue; } x = p1[i] ^ p2[i]; for (j = 0; j < CHAR_BIT; ++j) { if (x & (1 << (CHAR_BIT - 1))) { return i * CHAR_BIT + j; } x <<= 1; } } return sizeof(*a1) * CHAR_BIT; } /* * Compare two source/destination address pairs. * RFC 6724, section 6. */ static int rfc6724_compare(const void *ptr1, const void *ptr2) { const struct addrinfo_sort_elem *a1 = (const struct addrinfo_sort_elem *)ptr1; const struct addrinfo_sort_elem *a2 = (const struct addrinfo_sort_elem *)ptr2; int scope_src1; int scope_dst1; int scope_match1; int scope_src2; int scope_dst2; int scope_match2; int label_src1; int label_dst1; int label_match1; int label_src2; int label_dst2; int label_match2; int precedence1; int precedence2; size_t prefixlen1; size_t prefixlen2; /* Rule 1: Avoid unusable destinations. */ if (a1->has_src_addr != a2->has_src_addr) { return ((int)a2->has_src_addr) - ((int)a1->has_src_addr); } /* Rule 2: Prefer matching scope. 
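* For example, a destination whose scope equals that of its candidate
* source address (say, global-to-global) sorts ahead of one whose scopes
* differ (say, a link-local source paired with a global destination).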
*/ scope_src1 = ARES_IPV6_ADDR_SCOPE_NODELOCAL; if (a1->has_src_addr) { scope_src1 = get_scope(&a1->src_addr.sa); } scope_dst1 = get_scope(a1->ai->ai_addr); scope_match1 = (scope_src1 == scope_dst1); scope_src2 = ARES_IPV6_ADDR_SCOPE_NODELOCAL; if (a2->has_src_addr) { scope_src2 = get_scope(&a2->src_addr.sa); } scope_dst2 = get_scope(a2->ai->ai_addr); scope_match2 = (scope_src2 == scope_dst2); if (scope_match1 != scope_match2) { return scope_match2 - scope_match1; } /* Rule 3: Avoid deprecated addresses. */ /* Rule 4: Prefer home addresses. */ /* Rule 5: Prefer matching label. */ label_src1 = 1; if (a1->has_src_addr) { label_src1 = get_label(&a1->src_addr.sa); } label_dst1 = get_label(a1->ai->ai_addr); label_match1 = (label_src1 == label_dst1); label_src2 = 1; if (a2->has_src_addr) { label_src2 = get_label(&a2->src_addr.sa); } label_dst2 = get_label(a2->ai->ai_addr); label_match2 = (label_src2 == label_dst2); if (label_match1 != label_match2) { return label_match2 - label_match1; } /* Rule 6: Prefer higher precedence. */ precedence1 = get_precedence(a1->ai->ai_addr); precedence2 = get_precedence(a2->ai->ai_addr); if (precedence1 != precedence2) { return precedence2 - precedence1; } /* Rule 7: Prefer native transport. */ /* Rule 8: Prefer smaller scope. */ if (scope_dst1 != scope_dst2) { return scope_dst1 - scope_dst2; } /* Rule 9: Use longest matching prefix. */ if (a1->has_src_addr && a1->ai->ai_addr->sa_family == AF_INET6 && a2->has_src_addr && a2->ai->ai_addr->sa_family == AF_INET6) { const struct sockaddr_in6 *a1_src = &a1->src_addr.sa6; const struct sockaddr_in6 *a1_dst = CARES_INADDR_CAST(const struct sockaddr_in6 *, a1->ai->ai_addr); const struct sockaddr_in6 *a2_src = &a2->src_addr.sa6; const struct sockaddr_in6 *a2_dst = CARES_INADDR_CAST(const struct sockaddr_in6 *, a2->ai->ai_addr); prefixlen1 = common_prefix_len(&a1_src->sin6_addr, &a1_dst->sin6_addr); prefixlen2 = common_prefix_len(&a2_src->sin6_addr, &a2_dst->sin6_addr); if (prefixlen1 != prefixlen2) { return (int)prefixlen2 - (int)prefixlen1; } } /* * Rule 10: Leave the order unchanged. * We need this since qsort() is not necessarily stable. */ return ((int)a1->original_order) - ((int)a2->original_order); } /* * Find the source address that will be used if trying to connect to the given * address. * * Returns 1 if a source address was found, 0 if the address is unreachable * and -1 if a fatal error occurred. If 0 or 1, the contents of src_addr are * undefined. */ static int find_src_addr(ares_channel_t *channel, const struct sockaddr *addr, struct sockaddr *src_addr) { ares_socket_t sock; ares_socklen_t len; switch (addr->sa_family) { case AF_INET: len = sizeof(struct sockaddr_in); break; case AF_INET6: len = sizeof(struct sockaddr_in6); break; default: /* No known usable source address for non-INET families. */ return 0; } sock = ares__open_socket(channel, addr->sa_family, SOCK_DGRAM, IPPROTO_UDP); if (sock == ARES_SOCKET_BAD) { if (SOCKERRNO == EAFNOSUPPORT) { return 0; } else { return -1; } } if (ares__connect_socket(channel, sock, addr, len) != ARES_SUCCESS) { ares__close_socket(channel, sock); return 0; } if (getsockname(sock, src_addr, &len) != 0) { ares__close_socket(channel, sock); return -1; } ares__close_socket(channel, sock); return 1; } /* * Sort the linked list starting at sentinel->ai_next in RFC6724 order. * Will leave the list unchanged if an error occurs. 
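* For example, if the list holds an IPv6 and an IPv4 destination but only
* an IPv4 source address can be found for this host, Rule 1 (avoid
* unusable destinations) moves the IPv4 entry to the front, while entries
* that rank equally keep their original relative order (Rule 10).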
*/ ares_status_t ares__sortaddrinfo(ares_channel_t *channel, struct ares_addrinfo_node *list_sentinel) { struct ares_addrinfo_node *cur; size_t nelem = 0; size_t i; int has_src_addr; struct addrinfo_sort_elem *elems; cur = list_sentinel->ai_next; while (cur) { ++nelem; cur = cur->ai_next; } if (!nelem) { return ARES_ENODATA; } elems = (struct addrinfo_sort_elem *)ares_malloc( nelem * sizeof(struct addrinfo_sort_elem)); if (!elems) { return ARES_ENOMEM; } /* * Convert the linked list to an array that also contains the candidate * source address for each destination address. */ for (i = 0, cur = list_sentinel->ai_next; i < nelem; ++i, cur = cur->ai_next) { assert(cur != NULL); elems[i].ai = cur; elems[i].original_order = i; has_src_addr = find_src_addr(channel, cur->ai_addr, &elems[i].src_addr.sa); if (has_src_addr == -1) { ares_free(elems); return ARES_ENOTFOUND; } elems[i].has_src_addr = (has_src_addr == 1) ? ARES_TRUE : ARES_FALSE; } /* Sort the addresses, and rearrange the linked list so it matches the sorted * order. */ qsort((void *)elems, nelem, sizeof(struct addrinfo_sort_elem), rfc6724_compare); list_sentinel->ai_next = elems[0].ai; for (i = 0; i < nelem - 1; ++i) { elems[i].ai->ai_next = elems[i + 1].ai; } elems[nelem - 1].ai->ai_next = NULL; ares_free(elems); return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares_android.c000066400000000000000000000335371471441230600214250ustar00rootroot00000000000000/* MIT License * * Copyright (c) John Schember * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #if defined(ANDROID) || defined(__ANDROID__) # include "ares_private.h" # include # include # include "ares_android.h" static JavaVM *android_jvm = NULL; static jobject android_connectivity_manager = NULL; /* ConnectivityManager.getActiveNetwork */ static jmethodID android_cm_active_net_mid = NULL; /* ConnectivityManager.getLinkProperties */ static jmethodID android_cm_link_props_mid = NULL; /* LinkProperties.getDnsServers */ static jmethodID android_lp_dns_servers_mid = NULL; /* LinkProperties.getDomains */ static jmethodID android_lp_domains_mid = NULL; /* List.size */ static jmethodID android_list_size_mid = NULL; /* List.get */ static jmethodID android_list_get_mid = NULL; /* InetAddress.getHostAddress */ static jmethodID android_ia_host_addr_mid = NULL; static jclass jni_get_class(JNIEnv *env, const char *path) { jclass cls = NULL; if (env == NULL || path == NULL || *path == '\0') { return NULL; } cls = (*env)->FindClass(env, path); if ((*env)->ExceptionOccurred(env)) { (*env)->ExceptionClear(env); return NULL; } return cls; } static jmethodID jni_get_method_id(JNIEnv *env, jclass cls, const char *func_name, const char *signature) { jmethodID mid = NULL; if (env == NULL || cls == NULL || func_name == NULL || *func_name == '\0' || signature == NULL || *signature == '\0') { return NULL; } mid = (*env)->GetMethodID(env, cls, func_name, signature); if ((*env)->ExceptionOccurred(env)) { (*env)->ExceptionClear(env); return NULL; } return mid; } static int jvm_attach(JNIEnv **env) { char name[17] = {0}; JavaVMAttachArgs args; args.version = JNI_VERSION_1_6; if (prctl(PR_GET_NAME, name) == 0) { args.name = name; } else { args.name = NULL; } args.group = NULL; return (*android_jvm)->AttachCurrentThread(android_jvm, env, &args); } void ares_library_init_jvm(JavaVM *jvm) { android_jvm = jvm; } int ares_library_init_android(jobject connectivity_manager) { JNIEnv *env = NULL; int need_detatch = 0; int res; ares_status_t ret = ARES_ENOTINITIALIZED; jclass obj_cls = NULL; if (android_jvm == NULL) { goto cleanup; } res = (*android_jvm)->GetEnv(android_jvm, (void **)&env, JNI_VERSION_1_6); if (res == JNI_EDETACHED) { env = NULL; res = jvm_attach(&env); need_detatch = 1; } if (res != JNI_OK || env == NULL) { goto cleanup; } android_connectivity_manager = (*env)->NewGlobalRef(env, connectivity_manager); if (android_connectivity_manager == NULL) { goto cleanup; } /* Initialization has succeeded. Now attempt to cache the methods that will be * called by ares_get_android_server_list. */ ret = ARES_SUCCESS; /* ConnectivityManager in API 1. */ obj_cls = jni_get_class(env, "android/net/ConnectivityManager"); if (obj_cls == NULL) { goto cleanup; } /* ConnectivityManager.getActiveNetwork in API 23. */ android_cm_active_net_mid = jni_get_method_id( env, obj_cls, "getActiveNetwork", "()Landroid/net/Network;"); if (android_cm_active_net_mid == NULL) { goto cleanup; } /* ConnectivityManager.getLinkProperties in API 21. */ android_cm_link_props_mid = jni_get_method_id(env, obj_cls, "getLinkProperties", "(Landroid/net/Network;)Landroid/net/LinkProperties;"); if (android_cm_link_props_mid == NULL) { goto cleanup; } /* LinkProperties in API 21. */ (*env)->DeleteLocalRef(env, obj_cls); obj_cls = jni_get_class(env, "android/net/LinkProperties"); if (obj_cls == NULL) { goto cleanup; } /* getDnsServers in API 21. 
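
     Illustrative note (added for clarity, not part of the upstream sources):
     the method cached here corresponds to the Java declaration

       public java.util.List<java.net.InetAddress> getDnsServers()

     and generic type parameters are erased, which is why the JNI signature
     passed below is "()Ljava/util/List;".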
*/ android_lp_dns_servers_mid = jni_get_method_id(env, obj_cls, "getDnsServers", "()Ljava/util/List;"); if (android_lp_dns_servers_mid == NULL) { goto cleanup; } /* getDomains in API 21. */ android_lp_domains_mid = jni_get_method_id(env, obj_cls, "getDomains", "()Ljava/lang/String;"); if (android_lp_domains_mid == NULL) { goto cleanup; } (*env)->DeleteLocalRef(env, obj_cls); obj_cls = jni_get_class(env, "java/util/List"); if (obj_cls == NULL) { goto cleanup; } android_list_size_mid = jni_get_method_id(env, obj_cls, "size", "()I"); if (android_list_size_mid == NULL) { goto cleanup; } android_list_get_mid = jni_get_method_id(env, obj_cls, "get", "(I)Ljava/lang/Object;"); if (android_list_get_mid == NULL) { goto cleanup; } (*env)->DeleteLocalRef(env, obj_cls); obj_cls = jni_get_class(env, "java/net/InetAddress"); if (obj_cls == NULL) { goto cleanup; } android_ia_host_addr_mid = jni_get_method_id(env, obj_cls, "getHostAddress", "()Ljava/lang/String;"); if (android_ia_host_addr_mid == NULL) { goto cleanup; } (*env)->DeleteLocalRef(env, obj_cls); goto done; cleanup: if (obj_cls != NULL) { (*env)->DeleteLocalRef(env, obj_cls); } android_cm_active_net_mid = NULL; android_cm_link_props_mid = NULL; android_lp_dns_servers_mid = NULL; android_lp_domains_mid = NULL; android_list_size_mid = NULL; android_list_get_mid = NULL; android_ia_host_addr_mid = NULL; done: if (need_detatch) { (*android_jvm)->DetachCurrentThread(android_jvm); } return (int)ret; } int ares_library_android_initialized(void) { if (android_jvm == NULL || android_connectivity_manager == NULL) { return ARES_ENOTINITIALIZED; } return ARES_SUCCESS; } void ares_library_cleanup_android(void) { JNIEnv *env = NULL; int need_detatch = 0; int res; if (android_jvm == NULL || android_connectivity_manager == NULL) { return; } res = (*android_jvm)->GetEnv(android_jvm, (void **)&env, JNI_VERSION_1_6); if (res == JNI_EDETACHED) { env = NULL; res = jvm_attach(&env); need_detatch = 1; } if (res != JNI_OK || env == NULL) { return; } android_cm_active_net_mid = NULL; android_cm_link_props_mid = NULL; android_lp_dns_servers_mid = NULL; android_lp_domains_mid = NULL; android_list_size_mid = NULL; android_list_get_mid = NULL; android_ia_host_addr_mid = NULL; (*env)->DeleteGlobalRef(env, android_connectivity_manager); android_connectivity_manager = NULL; if (need_detatch) { (*android_jvm)->DetachCurrentThread(android_jvm); } } char **ares_get_android_server_list(size_t max_servers, size_t *num_servers) { JNIEnv *env = NULL; jobject active_network = NULL; jobject link_properties = NULL; jobject server_list = NULL; jobject server = NULL; jstring str = NULL; jint nserv; const char *ch_server_address; int res; size_t i; char **dns_list = NULL; int need_detatch = 0; if (android_jvm == NULL || android_connectivity_manager == NULL || max_servers == 0 || num_servers == NULL) { return NULL; } if (android_cm_active_net_mid == NULL || android_cm_link_props_mid == NULL || android_lp_dns_servers_mid == NULL || android_list_size_mid == NULL || android_list_get_mid == NULL || android_ia_host_addr_mid == NULL) { return NULL; } res = (*android_jvm)->GetEnv(android_jvm, (void **)&env, JNI_VERSION_1_6); if (res == JNI_EDETACHED) { env = NULL; res = jvm_attach(&env); need_detatch = 1; } if (res != JNI_OK || env == NULL) { goto done; } /* JNI below is equivalent to this Java code. 
import android.content.Context; import android.net.ConnectivityManager; import android.net.LinkProperties; import android.net.Network; import java.net.InetAddress; import java.util.List; ConnectivityManager cm = (ConnectivityManager)this.getApplicationContext() .getSystemService(Context.CONNECTIVITY_SERVICE); Network an = cm.getActiveNetwork(); LinkProperties lp = cm.getLinkProperties(an); List dns = lp.getDnsServers(); for (InetAddress ia: dns) { String ha = ia.getHostAddress(); } Note: The JNI ConnectivityManager object and all method IDs were previously initialized in ares_library_init_android. */ active_network = (*env)->CallObjectMethod(env, android_connectivity_manager, android_cm_active_net_mid); if (active_network == NULL) { goto done; } link_properties = (*env)->CallObjectMethod(env, android_connectivity_manager, android_cm_link_props_mid, active_network); if (link_properties == NULL) { goto done; } server_list = (*env)->CallObjectMethod(env, link_properties, android_lp_dns_servers_mid); if (server_list == NULL) { goto done; } nserv = (*env)->CallIntMethod(env, server_list, android_list_size_mid); if (nserv > (jint)max_servers) { nserv = (jint)max_servers; } if (nserv <= 0) { goto done; } *num_servers = (size_t)nserv; dns_list = ares_malloc(sizeof(*dns_list) * (*num_servers)); for (i = 0; i < *num_servers; i++) { size_t len = 64; server = (*env)->CallObjectMethod(env, server_list, android_list_get_mid, (jint)i); dns_list[i] = ares_malloc(len); dns_list[i][0] = 0; if (server == NULL) { continue; } str = (*env)->CallObjectMethod(env, server, android_ia_host_addr_mid); ch_server_address = (*env)->GetStringUTFChars(env, str, 0); ares_strcpy(dns_list[i], ch_server_address, len); (*env)->ReleaseStringUTFChars(env, str, ch_server_address); (*env)->DeleteLocalRef(env, str); (*env)->DeleteLocalRef(env, server); } done: if ((*env)->ExceptionOccurred(env)) { (*env)->ExceptionClear(env); } if (server_list != NULL) { (*env)->DeleteLocalRef(env, server_list); } if (link_properties != NULL) { (*env)->DeleteLocalRef(env, link_properties); } if (active_network != NULL) { (*env)->DeleteLocalRef(env, active_network); } if (need_detatch) { (*android_jvm)->DetachCurrentThread(android_jvm); } return dns_list; } char *ares_get_android_search_domains_list(void) { JNIEnv *env = NULL; jobject active_network = NULL; jobject link_properties = NULL; jstring domains = NULL; const char *domain; int res; char *domain_list = NULL; int need_detatch = 0; if (android_jvm == NULL || android_connectivity_manager == NULL) { return NULL; } if (android_cm_active_net_mid == NULL || android_cm_link_props_mid == NULL || android_lp_domains_mid == NULL) { return NULL; } res = (*android_jvm)->GetEnv(android_jvm, (void **)&env, JNI_VERSION_1_6); if (res == JNI_EDETACHED) { env = NULL; res = jvm_attach(&env); need_detatch = 1; } if (res != JNI_OK || env == NULL) { goto done; } /* JNI below is equivalent to this Java code. import android.content.Context; import android.net.ConnectivityManager; import android.net.LinkProperties; ConnectivityManager cm = (ConnectivityManager)this.getApplicationContext() .getSystemService(Context.CONNECTIVITY_SERVICE); Network an = cm.getActiveNetwork(); LinkProperties lp = cm.getLinkProperties(an); String domains = lp.getDomains(); for (String domain: domains.split(",")) { String d = domain; } Note: The JNI ConnectivityManager object and all method IDs were previously initialized in ares_library_init_android. 
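
     Illustrative sketch (added for clarity, not part of the upstream
     sources) of how a hypothetical caller could consume the result;
     use_dns_server() is just a placeholder, and both the array and every
     entry come from ares_malloc() and are therefore released with
     ares_free():

       size_t nservers = 0;
       char **servers = ares_get_android_server_list(3, &nservers);
       if (servers != NULL) {
         size_t i;
         for (i = 0; i < nservers; i++) {
           use_dns_server(servers[i]);
           ares_free(servers[i]);
         }
         ares_free(servers);
       }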
*/ active_network = (*env)->CallObjectMethod(env, android_connectivity_manager, android_cm_active_net_mid); if (active_network == NULL) { goto done; } link_properties = (*env)->CallObjectMethod(env, android_connectivity_manager, android_cm_link_props_mid, active_network); if (link_properties == NULL) { goto done; } /* Get the domains. It is a common separated list of domains to search. */ domains = (*env)->CallObjectMethod(env, link_properties, android_lp_domains_mid); if (domains == NULL) { goto done; } /* Split on , */ domain = (*env)->GetStringUTFChars(env, domains, 0); domain_list = ares_strdup(domain); (*env)->ReleaseStringUTFChars(env, domains, domain); (*env)->DeleteLocalRef(env, domains); done: if ((*env)->ExceptionOccurred(env)) { (*env)->ExceptionClear(env); } if (link_properties != NULL) { (*env)->DeleteLocalRef(env, link_properties); } if (active_network != NULL) { (*env)->DeleteLocalRef(env, active_network); } if (need_detatch) { (*android_jvm)->DetachCurrentThread(android_jvm); } return domain_list; } #else /* warning: ISO C forbids an empty translation unit */ typedef int dummy_make_iso_compilers_happy; #endif gevent-24.11.1/deps/c-ares/src/lib/ares_android.h000066400000000000000000000027501471441230600214230ustar00rootroot00000000000000/* MIT License * * Copyright (c) John Schember * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_ANDROID_H__ #define __ARES_ANDROID_H__ #if defined(ANDROID) || defined(__ANDROID__) char **ares_get_android_server_list(size_t max_servers, size_t *num_servers); char *ares_get_android_search_domains_list(void); void ares_library_cleanup_android(void); #endif #endif /* __ARES_ANDROID_H__ */ gevent-24.11.1/deps/c-ares/src/lib/ares_cancel.c000066400000000000000000000056601471441230600212260ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" /* * ares_cancel() cancels all ongoing requests/resolves that might be going on * on the given channel. It does NOT kill the channel, use ares_destroy() for * that. */ void ares_cancel(ares_channel_t *channel) { if (channel == NULL) { return; } ares__channel_lock(channel); if (ares__llist_len(channel->all_queries) > 0) { ares__llist_node_t *node = NULL; ares__llist_node_t *next = NULL; /* Swap list heads, so that only those queries which were present on entry * into this function are cancelled. New queries added by callbacks of * queries being cancelled will not be cancelled themselves. */ ares__llist_t *list_copy = channel->all_queries; channel->all_queries = ares__llist_create(NULL); /* Out of memory, this function doesn't return a result code though so we * can't report to caller */ if (channel->all_queries == NULL) { channel->all_queries = list_copy; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } node = ares__llist_node_first(list_copy); while (node != NULL) { ares_query_t *query; /* Cache next since this node is being deleted */ next = ares__llist_node_next(node); query = ares__llist_node_claim(node); query->node_all_queries = NULL; /* NOTE: its possible this may enqueue new queries */ query->callback(query->arg, ARES_ECANCELLED, 0, NULL); ares__free_query(query); node = next; } ares__llist_destroy(list_copy); } /* See if the connections should be cleaned up */ ares__check_cleanup_conns(channel); ares_queue_notify_empty(channel); done: ares__channel_unlock(channel); } gevent-24.11.1/deps/c-ares/src/lib/ares_config.h.cmake000066400000000000000000000345721471441230600223360ustar00rootroot00000000000000/* Copyright (C) The c-ares project and its contributors * SPDX-License-Identifier: MIT */ /* Generated from ares_config.h.cmake */ /* Define if building universal (internal helper macro) */ #undef AC_APPLE_UNIVERSAL_BUILD /* Defined for build with symbol hiding. */ #cmakedefine CARES_SYMBOL_HIDING /* Use resolver library to configure cares */ #cmakedefine CARES_USE_LIBRESOLV /* if a /etc/inet dir is being used */ #undef ETC_INET /* Define to the type of arg 2 for gethostname. */ #define GETHOSTNAME_TYPE_ARG2 @GETHOSTNAME_TYPE_ARG2@ /* Define to the type qualifier of arg 1 for getnameinfo. */ #define GETNAMEINFO_QUAL_ARG1 @GETNAMEINFO_QUAL_ARG1@ /* Define to the type of arg 1 for getnameinfo. */ #define GETNAMEINFO_TYPE_ARG1 @GETNAMEINFO_TYPE_ARG1@ /* Define to the type of arg 2 for getnameinfo. */ #define GETNAMEINFO_TYPE_ARG2 @GETNAMEINFO_TYPE_ARG2@ /* Define to the type of args 4 and 6 for getnameinfo. */ #define GETNAMEINFO_TYPE_ARG46 @GETNAMEINFO_TYPE_ARG46@ /* Define to the type of arg 7 for getnameinfo. */ #define GETNAMEINFO_TYPE_ARG7 @GETNAMEINFO_TYPE_ARG7@ /* Specifies the number of arguments to getservbyport_r */ #define GETSERVBYPORT_R_ARGS @GETSERVBYPORT_R_ARGS@ /* Specifies the number of arguments to getservbyname_r */ #define GETSERVBYNAME_R_ARGS @GETSERVBYNAME_R_ARGS@ /* Define to 1 if you have AF_INET6. 
*/ #cmakedefine HAVE_AF_INET6 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_ARPA_INET_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_ARPA_NAMESER_COMPAT_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_ARPA_NAMESER_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_ASSERT_H 1 /* Define to 1 if you have the clock_gettime function and monotonic timer. */ #cmakedefine HAVE_CLOCK_GETTIME_MONOTONIC 1 /* Define to 1 if you have the closesocket function. */ #cmakedefine HAVE_CLOSESOCKET 1 /* Define to 1 if you have the CloseSocket camel case function. */ #cmakedefine HAVE_CLOSESOCKET_CAMEL 1 /* Define to 1 if you have the connect function. */ #cmakedefine HAVE_CONNECT 1 /* Define to 1 if you have the connectx function. */ #cmakedefine HAVE_CONNECTX 1 /* define if the compiler supports basic C++11 syntax */ #cmakedefine HAVE_CXX11 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_DLFCN_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_ERRNO_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_POLL_H 1 /* Define to 1 if you have the poll function. */ #cmakedefine HAVE_POLL 1 /* Define to 1 if you have the pipe function. */ #cmakedefine HAVE_PIPE 1 /* Define to 1 if you have the pipe2 function. */ #cmakedefine HAVE_PIPE2 1 /* Define to 1 if you have the kqueue function. */ #cmakedefine HAVE_KQUEUE 1 /* Define to 1 if you have the epoll{_create,ctl,wait} functions. */ #cmakedefine HAVE_EPOLL 1 /* Define to 1 if you have the fcntl function. */ #cmakedefine HAVE_FCNTL 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_FCNTL_H 1 /* Define to 1 if you have a working fcntl O_NONBLOCK function. */ #cmakedefine HAVE_FCNTL_O_NONBLOCK 1 /* Define to 1 if you have the freeaddrinfo function. */ #cmakedefine HAVE_FREEADDRINFO 1 /* Define to 1 if you have a working getaddrinfo function. */ #cmakedefine HAVE_GETADDRINFO 1 /* Define to 1 if the getaddrinfo function is threadsafe. */ #cmakedefine HAVE_GETADDRINFO_THREADSAFE 1 /* Define to 1 if you have the getenv function. */ #cmakedefine HAVE_GETENV 1 /* Define to 1 if you have the gethostname function. */ #cmakedefine HAVE_GETHOSTNAME 1 /* Define to 1 if you have the getnameinfo function. */ #cmakedefine HAVE_GETNAMEINFO 1 /* Define to 1 if you have the getrandom function. */ #cmakedefine HAVE_GETRANDOM 1 /* Define to 1 if you have the getservbyport_r function. */ #cmakedefine HAVE_GETSERVBYPORT_R 1 /* Define to 1 if you have the getservbyname_r function. */ #cmakedefine HAVE_GETSERVBYNAME_R 1 /* Define to 1 if you have the `gettimeofday' function. */ #cmakedefine HAVE_GETTIMEOFDAY 1 /* Define to 1 if you have the `if_indextoname' function. */ #cmakedefine HAVE_IF_INDEXTONAME 1 /* Define to 1 if you have the `if_nametoindex' function. */ #cmakedefine HAVE_IF_NAMETOINDEX 1 /* Define to 1 if you have the `ConvertInterfaceIndexToLuid' function. */ #cmakedefine HAVE_CONVERTINTERFACEINDEXTOLUID 1 /* Define to 1 if you have the `ConvertInterfaceLuidToNameA' function. */ #cmakedefine HAVE_CONVERTINTERFACELUIDTONAMEA 1 /* Define to 1 if you have the `NotifyIpInterfaceChange' function. */ #cmakedefine HAVE_NOTIFYIPINTERFACECHANGE 1 /* Define to 1 if you have the `RegisterWaitForSingleObject' function. */ #cmakedefine HAVE_REGISTERWAITFORSINGLEOBJECT 1 /* Define to 1 if you have a IPv6 capable working inet_net_pton function. 
*/ #cmakedefine HAVE_INET_NET_PTON 1 /* Define to 1 if you have a IPv6 capable working inet_ntop function. */ #cmakedefine HAVE_INET_NTOP 1 /* Define to 1 if you have a IPv6 capable working inet_pton function. */ #cmakedefine HAVE_INET_PTON 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_INTTYPES_H 1 /* Define to 1 if you have the ioctl function. */ #cmakedefine HAVE_IOCTL 1 /* Define to 1 if you have the ioctlsocket function. */ #cmakedefine HAVE_IOCTLSOCKET 1 /* Define to 1 if you have the IoctlSocket camel case function. */ #cmakedefine HAVE_IOCTLSOCKET_CAMEL 1 /* Define to 1 if you have a working IoctlSocket camel case FIONBIO function. */ #cmakedefine HAVE_IOCTLSOCKET_CAMEL_FIONBIO 1 /* Define to 1 if you have a working ioctlsocket FIONBIO function. */ #cmakedefine HAVE_IOCTLSOCKET_FIONBIO 1 /* Define to 1 if you have a working ioctl FIONBIO function. */ #cmakedefine HAVE_IOCTL_FIONBIO 1 /* Define to 1 if you have a working ioctl SIOCGIFADDR function. */ #cmakedefine HAVE_IOCTL_SIOCGIFADDR 1 /* Define to 1 if you have the `resolve' library (-lresolve). */ #cmakedefine HAVE_LIBRESOLV 1 /* Define to 1 if you have iphlpapi.h */ #cmakedefine HAVE_IPHLPAPI_H 1 /* Define to 1 if you have netioapi.h */ #cmakedefine HAVE_NETIOAPI_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_LIMITS_H 1 /* Define to 1 if the compiler supports the 'long long' data type. */ #cmakedefine HAVE_LONGLONG 1 /* Define to 1 if you have the malloc.h header file. */ #cmakedefine HAVE_MALLOC_H 1 /* Define to 1 if you have the memory.h header file. */ #cmakedefine HAVE_MEMORY_H 1 /* Define to 1 if you have the AvailabilityMacros.h header file. */ #cmakedefine HAVE_AVAILABILITYMACROS_H 1 /* Define to 1 if you have the MSG_NOSIGNAL flag. */ #cmakedefine HAVE_MSG_NOSIGNAL 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_NETDB_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_NETINET_IN_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_NETINET6_IN6_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_NETINET_TCP_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_NET_IF_H 1 /* Define to 1 if you have PF_INET6. */ #cmakedefine HAVE_PF_INET6 1 /* Define to 1 if you have the recv function. */ #cmakedefine HAVE_RECV 1 /* Define to 1 if you have the recvfrom function. */ #cmakedefine HAVE_RECVFROM 1 /* Define to 1 if you have the send function. */ #cmakedefine HAVE_SEND 1 /* Define to 1 if you have the setsockopt function. */ #cmakedefine HAVE_SETSOCKOPT 1 /* Define to 1 if you have a working setsockopt SO_NONBLOCK function. */ #cmakedefine HAVE_SETSOCKOPT_SO_NONBLOCK 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SIGNAL_H 1 /* Define to 1 if your struct sockaddr_in6 has sin6_scope_id. */ #cmakedefine HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID 1 /* Define to 1 if you have the socket function. */ #cmakedefine HAVE_SOCKET 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SOCKET_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STDBOOL_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STDINT_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STDLIB_H 1 /* Define to 1 if you have the strcasecmp function. */ #cmakedefine HAVE_STRCASECMP 1 /* Define to 1 if you have the strcmpi function. */ #cmakedefine HAVE_STRCMPI 1 /* Define to 1 if you have the strdup function. 
*/ #cmakedefine HAVE_STRDUP 1 /* Define to 1 if you have the stricmp function. */ #cmakedefine HAVE_STRICMP 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STRINGS_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STRING_H 1 /* Define to 1 if you have the strncasecmp function. */ #cmakedefine HAVE_STRNCASECMP 1 /* Define to 1 if you have the strncmpi function. */ #cmakedefine HAVE_STRNCMPI 1 /* Define to 1 if you have the strnicmp function. */ #cmakedefine HAVE_STRNICMP 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_STROPTS_H 1 /* Define to 1 if you have struct addrinfo. */ #cmakedefine HAVE_STRUCT_ADDRINFO 1 /* Define to 1 if you have struct in6_addr. */ #cmakedefine HAVE_STRUCT_IN6_ADDR 1 /* Define to 1 if you have struct sockaddr_in6. */ #cmakedefine HAVE_STRUCT_SOCKADDR_IN6 1 /* if struct sockaddr_storage is defined */ #cmakedefine HAVE_STRUCT_SOCKADDR_STORAGE 1 /* Define to 1 if you have the timeval struct. */ #cmakedefine HAVE_STRUCT_TIMEVAL 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_IOCTL_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_PARAM_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_RANDOM_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_EVENT_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_EPOLL_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_SELECT_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_SOCKET_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_STAT_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_TIME_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_TYPES_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_SYS_UIO_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_TIME_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_IFADDRS_H 1 /* Define to 1 if you have the header file. */ #cmakedefine HAVE_UNISTD_H 1 /* Define to 1 if you have the windows.h header file. */ #cmakedefine HAVE_WINDOWS_H 1 /* Define to 1 if you have the winsock2.h header file. */ #cmakedefine HAVE_WINSOCK2_H 1 /* Define to 1 if you have the winsock.h header file. */ #cmakedefine HAVE_WINSOCK_H 1 /* Define to 1 if you have the mswsock.h header file. */ #cmakedefine HAVE_MSWSOCK_H 1 /* Define to 1 if you have the winternl.h header file. */ #cmakedefine HAVE_WINTERNL_H 1 /* Define to 1 if you have the ntstatus.h header file. */ #cmakedefine HAVE_NTSTATUS_H 1 /* Define to 1 if you have the ntdef.h header file. */ #cmakedefine HAVE_NTDEF_H 1 /* Define to 1 if you have the writev function. */ #cmakedefine HAVE_WRITEV 1 /* Define to 1 if you have the ws2tcpip.h header file. */ #cmakedefine HAVE_WS2TCPIP_H 1 /* Define to 1 if you have the __system_property_get function */ #cmakedefine HAVE___SYSTEM_PROPERTY_GET 1 /* Define if have arc4random_buf() */ #cmakedefine HAVE_ARC4RANDOM_BUF 1 /* Define if have getifaddrs() */ #cmakedefine HAVE_GETIFADDRS 1 /* Define if have stat() */ #cmakedefine HAVE_STAT 1 /* a suitable file/device to read random data from */ #cmakedefine CARES_RANDOM_FILE "@CARES_RANDOM_FILE@" /* Define to the type qualifier pointed by arg 5 for recvfrom. */ #define RECVFROM_QUAL_ARG5 @RECVFROM_QUAL_ARG5@ /* Define to the type of arg 1 for recvfrom. 
*/ #define RECVFROM_TYPE_ARG1 @RECVFROM_TYPE_ARG1@ /* Define to the type pointed by arg 2 for recvfrom. */ #define RECVFROM_TYPE_ARG2 @RECVFROM_TYPE_ARG2@ /* Define to 1 if the type pointed by arg 2 for recvfrom is void. */ #cmakedefine01 RECVFROM_TYPE_ARG2_IS_VOID /* Define to the type of arg 3 for recvfrom. */ #define RECVFROM_TYPE_ARG3 @RECVFROM_TYPE_ARG3@ /* Define to the type of arg 4 for recvfrom. */ #define RECVFROM_TYPE_ARG4 @RECVFROM_TYPE_ARG4@ /* Define to the type pointed by arg 5 for recvfrom. */ #define RECVFROM_TYPE_ARG5 @RECVFROM_TYPE_ARG5@ /* Define to 1 if the type pointed by arg 5 for recvfrom is void. */ #cmakedefine01 RECVFROM_TYPE_ARG5_IS_VOID /* Define to the type pointed by arg 6 for recvfrom. */ #define RECVFROM_TYPE_ARG6 @RECVFROM_TYPE_ARG6@ /* Define to 1 if the type pointed by arg 6 for recvfrom is void. */ #cmakedefine01 RECVFROM_TYPE_ARG6_IS_VOID /* Define to the function return type for recvfrom. */ #define RECVFROM_TYPE_RETV @RECVFROM_TYPE_RETV@ /* Define to the type of arg 1 for recv. */ #define RECV_TYPE_ARG1 @RECV_TYPE_ARG1@ /* Define to the type of arg 2 for recv. */ #define RECV_TYPE_ARG2 @RECV_TYPE_ARG2@ /* Define to the type of arg 3 for recv. */ #define RECV_TYPE_ARG3 @RECV_TYPE_ARG3@ /* Define to the type of arg 4 for recv. */ #define RECV_TYPE_ARG4 @RECV_TYPE_ARG4@ /* Define to the function return type for recv. */ #define RECV_TYPE_RETV @RECV_TYPE_RETV@ /* Define to the type of arg 1 for send. */ #define SEND_TYPE_ARG1 @SEND_TYPE_ARG1@ /* Define to the type of arg 2 for send. */ #define SEND_TYPE_ARG2 @SEND_TYPE_ARG2@ /* Define to the type of arg 3 for send. */ #define SEND_TYPE_ARG3 @SEND_TYPE_ARG3@ /* Define to the type of arg 4 for send. */ #define SEND_TYPE_ARG4 @SEND_TYPE_ARG4@ /* Define to the function return type for send. */ #define SEND_TYPE_RETV @SEND_TYPE_RETV@ /* Define to disable non-blocking sockets. */ #undef USE_BLOCKING_SOCKETS /* Define to avoid automatic inclusion of winsock.h */ #undef WIN32_LEAN_AND_MEAN /* Define to 1 if you have the pthread.h header file. */ #cmakedefine HAVE_PTHREAD_H 1 /* Define to 1 if you have the pthread_np.h header file. */ #cmakedefine HAVE_PTHREAD_NP_H 1 /* Define to 1 if threads are enabled */ #cmakedefine CARES_THREADS 1 /* Define to 1 if pthread_init() exists */ #cmakedefine HAVE_PTHREAD_INIT 1 gevent-24.11.1/deps/c-ares/src/lib/ares_config.h.in000066400000000000000000000360301471441230600216530ustar00rootroot00000000000000/* src/lib/ares_config.h.in. Generated from configure.ac by autoheader. 
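
   Illustrative note (added for clarity, not part of the upstream template):
   autoheader emits a plain "#undef FOO" for every feature macro below; when
   the generated configure script runs, each line is rewritten either to a
   real definition such as

     #define HAVE_GETADDRINFO 1

   or to a commented-out #undef when the feature is not detected.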
*/ /* a suitable file/device to read random data from */ #undef CARES_RANDOM_FILE /* Set to 1 if non-pubilc shared library symbols are hidden */ #undef CARES_SYMBOL_HIDING /* Threading enabled */ #undef CARES_THREADS /* the signed version of size_t */ #undef CARES_TYPEOF_ARES_SSIZE_T /* Use resolver library to configure cares */ #undef CARES_USE_LIBRESOLV /* if a /etc/inet dir is being used */ #undef ETC_INET /* gethostname() arg2 type */ #undef GETHOSTNAME_TYPE_ARG2 /* getnameinfo() arg1 type */ #undef GETNAMEINFO_TYPE_ARG1 /* getnameinfo() arg2 type */ #undef GETNAMEINFO_TYPE_ARG2 /* getnameinfo() arg4 and 6 type */ #undef GETNAMEINFO_TYPE_ARG46 /* getnameinfo() arg7 type */ #undef GETNAMEINFO_TYPE_ARG7 /* number of arguments for getservbyname_r() */ #undef GETSERVBYNAME_R_ARGS /* number of arguments for getservbyport_r() */ #undef GETSERVBYPORT_R_ARGS /* Define to 1 if you have AF_INET6 */ #undef HAVE_AF_INET6 /* Define to 1 if you have `arc4random_buf` */ #undef HAVE_ARC4RANDOM_BUF /* Define to 1 if you have the header file. */ #undef HAVE_ARPA_INET_H /* Define to 1 if you have the header file. */ #undef HAVE_ARPA_NAMESER_COMPAT_H /* Define to 1 if you have the header file. */ #undef HAVE_ARPA_NAMESER_H /* Define to 1 if you have the header file. */ #undef HAVE_ASSERT_H /* Define to 1 if you have the header file. */ #undef HAVE_AVAILABILITYMACROS_H /* Define to 1 if you have `clock_gettime` */ #undef HAVE_CLOCK_GETTIME /* clock_gettime() with CLOCK_MONOTONIC support */ #undef HAVE_CLOCK_GETTIME_MONOTONIC /* Define to 1 if you have `closesocket` */ #undef HAVE_CLOSESOCKET /* Define to 1 if you have `CloseSocket` */ #undef HAVE_CLOSESOCKET_CAMEL /* Define to 1 if you have `connect` */ #undef HAVE_CONNECT /* Define to 1 if you have `connectx` */ #undef HAVE_CONNECTX /* Define to 1 if you have `ConvertInterfaceIndexToLuid` */ #undef HAVE_CONVERTINTERFACEINDEXTOLUID /* Define to 1 if you have `ConvertInterfaceLuidToNameA` */ #undef HAVE_CONVERTINTERFACELUIDTONAMEA /* define if the compiler supports basic C++14 syntax */ #undef HAVE_CXX14 /* Define to 1 if you have the header file. */ #undef HAVE_DLFCN_H /* Define to 1 if you have `epoll_{create1,ctl,wait}` */ #undef HAVE_EPOLL /* Define to 1 if you have the header file. */ #undef HAVE_ERRNO_H /* Define to 1 if you have `fcntl` */ #undef HAVE_FCNTL /* Define to 1 if you have the header file. */ #undef HAVE_FCNTL_H /* fcntl() with O_NONBLOCK support */ #undef HAVE_FCNTL_O_NONBLOCK /* Define to 1 if you have `getenv` */ #undef HAVE_GETENV /* Define to 1 if you have `gethostname` */ #undef HAVE_GETHOSTNAME /* Define to 1 if you have `getifaddrs` */ #undef HAVE_GETIFADDRS /* Define to 1 if you have `getnameinfo` */ #undef HAVE_GETNAMEINFO /* Define to 1 if you have `getrandom` */ #undef HAVE_GETRANDOM /* Define to 1 if you have `getservbyport_r` */ #undef HAVE_GETSERVBYPORT_R /* Define to 1 if you have `gettimeofday` */ #undef HAVE_GETTIMEOFDAY /* Define to 1 if you have the header file. */ #undef HAVE_IFADDRS_H /* Define to 1 if you have `if_indextoname` */ #undef HAVE_IF_INDEXTONAME /* Define to 1 if you have `if_nametoindex` */ #undef HAVE_IF_NAMETOINDEX /* Define to 1 if you have `inet_net_pton` */ #undef HAVE_INET_NET_PTON /* Define to 1 if you have `inet_ntop` */ #undef HAVE_INET_NTOP /* Define to 1 if you have `inet_pton` */ #undef HAVE_INET_PTON /* Define to 1 if you have the header file. 
*/ #undef HAVE_INTTYPES_H /* Define to 1 if you have `ioctl` */ #undef HAVE_IOCTL /* Define to 1 if you have `ioctlsocket` */ #undef HAVE_IOCTLSOCKET /* Define to 1 if you have `IoctlSocket` */ #undef HAVE_IOCTLSOCKET_CAMEL /* ioctlsocket() with FIONBIO support */ #undef HAVE_IOCTLSOCKET_FIONBIO /* ioctl() with FIONBIO support */ #undef HAVE_IOCTL_FIONBIO /* Define to 1 if you have the header file. */ #undef HAVE_IPHLPAPI_H /* Define to 1 if you have `kqueue` */ #undef HAVE_KQUEUE /* Define to 1 if you have the header file. */ #undef HAVE_LIMITS_H /* Define to 1 if the compiler supports the 'long long' data type. */ #undef HAVE_LONGLONG /* Define to 1 if you have the header file. */ #undef HAVE_MALLOC_H /* Define to 1 if you have the header file. */ #undef HAVE_MEMORY_H /* Define to 1 if you have the header file. */ #undef HAVE_MINIX_CONFIG_H /* Define to 1 if you have the header file. */ #undef HAVE_MSWSOCK_H /* Define to 1 if you have the header file. */ #undef HAVE_NETDB_H /* Define to 1 if you have the header file. */ #undef HAVE_NETINET6_IN6_H /* Define to 1 if you have the header file. */ #undef HAVE_NETINET_IN_H /* Define to 1 if you have the header file. */ #undef HAVE_NETINET_TCP_H /* Define to 1 if you have the header file. */ #undef HAVE_NETIOAPI_H /* Define to 1 if you have the header file. */ #undef HAVE_NET_IF_H /* Define to 1 if you have `NotifyIpInterfaceChange` */ #undef HAVE_NOTIFYIPINTERFACECHANGE /* Define to 1 if you have the header file. */ #undef HAVE_NTDEF_H /* Define to 1 if you have the header file. */ #undef HAVE_NTSTATUS_H /* Define to 1 if you have PF_INET6 */ #undef HAVE_PF_INET6 /* Define to 1 if you have `pipe` */ #undef HAVE_PIPE /* Define to 1 if you have `pipe2` */ #undef HAVE_PIPE2 /* Define to 1 if you have `poll` */ #undef HAVE_POLL /* Define to 1 if you have the header file. */ #undef HAVE_POLL_H /* Define to 1 if you have the header file. */ #undef HAVE_PTHREAD_H /* Define to 1 if you have the header file. */ #undef HAVE_PTHREAD_NP_H /* Have PTHREAD_PRIO_INHERIT. */ #undef HAVE_PTHREAD_PRIO_INHERIT /* Define to 1 if you have `recv` */ #undef HAVE_RECV /* Define to 1 if you have `recvfrom` */ #undef HAVE_RECVFROM /* Define to 1 if you have `RegisterWaitForSingleObject` */ #undef HAVE_REGISTERWAITFORSINGLEOBJECT /* Define to 1 if you have `send` */ #undef HAVE_SEND /* Define to 1 if you have `setsockopt` */ #undef HAVE_SETSOCKOPT /* setsockopt() with SO_NONBLOCK support */ #undef HAVE_SETSOCKOPT_SO_NONBLOCK /* Define to 1 if you have `socket` */ #undef HAVE_SOCKET /* Define to 1 if you have the header file. */ #undef HAVE_SOCKET_H /* socklen_t */ #undef HAVE_SOCKLEN_T /* Define to 1 if you have `stat` */ #undef HAVE_STAT /* Define to 1 if you have the header file. */ #undef HAVE_STDBOOL_H /* Define to 1 if you have the header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the header file. */ #undef HAVE_STDIO_H /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H /* Define to 1 if you have `strcasecmp` */ #undef HAVE_STRCASECMP /* Define to 1 if you have `strdup` */ #undef HAVE_STRDUP /* Define to 1 if you have `stricmp` */ #undef HAVE_STRICMP /* Define to 1 if you have the header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the header file. */ #undef HAVE_STRING_H /* Define to 1 if you have `strncasecmp` */ #undef HAVE_STRNCASECMP /* Define to 1 if you have `strncmpi` */ #undef HAVE_STRNCMPI /* Define to 1 if you have `strnicmp` */ #undef HAVE_STRNICMP /* Define to 1 if the system has the type 'struct addrinfo'. 
*/ #undef HAVE_STRUCT_ADDRINFO /* Define to 1 if 'ai_flags' is a member of 'struct addrinfo'. */ #undef HAVE_STRUCT_ADDRINFO_AI_FLAGS /* Define to 1 if the system has the type 'struct in6_addr'. */ #undef HAVE_STRUCT_IN6_ADDR /* Define to 1 if the system has the type 'struct sockaddr_in6'. */ #undef HAVE_STRUCT_SOCKADDR_IN6 /* Define to 1 if 'sin6_scope_id' is a member of 'struct sockaddr_in6'. */ #undef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID /* Define to 1 if the system has the type 'struct sockaddr_storage'. */ #undef HAVE_STRUCT_SOCKADDR_STORAGE /* Define to 1 if the system has the type 'struct timeval'. */ #undef HAVE_STRUCT_TIMEVAL /* Define to 1 if you have the header file. */ #undef HAVE_SYS_EPOLL_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_EVENT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_FILIO_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_IOCTL_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_PARAM_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_RANDOM_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SELECT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SOCKET_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TIME_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_UIO_H /* Define to 1 if you have the header file. */ #undef HAVE_TIME_H /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H /* Whether user namespaces are available */ #undef HAVE_USER_NAMESPACE /* Whether UTS namespaces are available */ #undef HAVE_UTS_NAMESPACE /* Define to 1 if you have the header file. */ #undef HAVE_WCHAR_H /* Define to 1 if you have the header file. */ #undef HAVE_WINDOWS_H /* Define to 1 if you have the header file. */ #undef HAVE_WINSOCK2_H /* Define to 1 if you have the header file. */ #undef HAVE_WINTERNL_H /* Define to 1 if you have `writev` */ #undef HAVE_WRITEV /* Define to 1 if you have the header file. */ #undef HAVE_WS2IPDEF_H /* Define to 1 if you have the header file. */ #undef HAVE_WS2TCPIP_H /* Define to 1 if you have `__system_property_get` */ #undef HAVE___SYSTEM_PROPERTY_GET /* Define to the sub-directory where libtool stores uninstalled libraries. */ #undef LT_OBJDIR /* Name of package */ #undef PACKAGE /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define to necessary symbol if this constant uses a non-standard name on your system. 
*/ #undef PTHREAD_CREATE_JOINABLE /* recvfrom() arg5 qualifier */ #undef RECVFROM_QUAL_ARG5 /* recvfrom() arg1 type */ #undef RECVFROM_TYPE_ARG1 /* recvfrom() arg2 type */ #undef RECVFROM_TYPE_ARG2 /* recvfrom() arg3 type */ #undef RECVFROM_TYPE_ARG3 /* recvfrom() arg4 type */ #undef RECVFROM_TYPE_ARG4 /* recvfrom() arg5 type */ #undef RECVFROM_TYPE_ARG5 /* recvfrom() return value */ #undef RECVFROM_TYPE_RETV /* recv() arg1 type */ #undef RECV_TYPE_ARG1 /* recv() arg2 type */ #undef RECV_TYPE_ARG2 /* recv() arg3 type */ #undef RECV_TYPE_ARG3 /* recv() arg4 type */ #undef RECV_TYPE_ARG4 /* recv() return value */ #undef RECV_TYPE_RETV /* send() arg1 type */ #undef SEND_TYPE_ARG1 /* send() arg2 type */ #undef SEND_TYPE_ARG2 /* send() arg3 type */ #undef SEND_TYPE_ARG3 /* send() arg4 type */ #undef SEND_TYPE_ARG4 /* send() return value */ #undef SEND_TYPE_RETV /* Define to 1 if all of the C89 standard headers exist (not just the ones required in a freestanding environment). This macro is provided for backward compatibility; new code need not use it. */ #undef STDC_HEADERS /* Enable extensions on AIX, Interix, z/OS. */ #ifndef _ALL_SOURCE # undef _ALL_SOURCE #endif /* Enable general extensions on macOS. */ #ifndef _DARWIN_C_SOURCE # undef _DARWIN_C_SOURCE #endif /* Enable general extensions on Solaris. */ #ifndef __EXTENSIONS__ # undef __EXTENSIONS__ #endif /* Enable GNU extensions on systems that have them. */ #ifndef _GNU_SOURCE # undef _GNU_SOURCE #endif /* Enable X/Open compliant socket functions that do not require linking with -lxnet on HP-UX 11.11. */ #ifndef _HPUX_ALT_XOPEN_SOCKET_API # undef _HPUX_ALT_XOPEN_SOCKET_API #endif /* Identify the host operating system as Minix. This macro does not affect the system headers' behavior. A future release of Autoconf may stop defining this macro. */ #ifndef _MINIX # undef _MINIX #endif /* Enable general extensions on NetBSD. Enable NetBSD compatibility extensions on Minix. */ #ifndef _NETBSD_SOURCE # undef _NETBSD_SOURCE #endif /* Enable OpenBSD compatibility extensions on NetBSD. Oddly enough, this does nothing on OpenBSD. */ #ifndef _OPENBSD_SOURCE # undef _OPENBSD_SOURCE #endif /* Define to 1 if needed for POSIX-compatible behavior. */ #ifndef _POSIX_SOURCE # undef _POSIX_SOURCE #endif /* Define to 2 if needed for POSIX-compatible behavior. */ #ifndef _POSIX_1_SOURCE # undef _POSIX_1_SOURCE #endif /* Enable POSIX-compatible threading on Solaris. */ #ifndef _POSIX_PTHREAD_SEMANTICS # undef _POSIX_PTHREAD_SEMANTICS #endif /* Enable extensions specified by ISO/IEC TS 18661-5:2014. */ #ifndef __STDC_WANT_IEC_60559_ATTRIBS_EXT__ # undef __STDC_WANT_IEC_60559_ATTRIBS_EXT__ #endif /* Enable extensions specified by ISO/IEC TS 18661-1:2014. */ #ifndef __STDC_WANT_IEC_60559_BFP_EXT__ # undef __STDC_WANT_IEC_60559_BFP_EXT__ #endif /* Enable extensions specified by ISO/IEC TS 18661-2:2015. */ #ifndef __STDC_WANT_IEC_60559_DFP_EXT__ # undef __STDC_WANT_IEC_60559_DFP_EXT__ #endif /* Enable extensions specified by C23 Annex F. */ #ifndef __STDC_WANT_IEC_60559_EXT__ # undef __STDC_WANT_IEC_60559_EXT__ #endif /* Enable extensions specified by ISO/IEC TS 18661-4:2015. */ #ifndef __STDC_WANT_IEC_60559_FUNCS_EXT__ # undef __STDC_WANT_IEC_60559_FUNCS_EXT__ #endif /* Enable extensions specified by C23 Annex H and ISO/IEC TS 18661-3:2015. */ #ifndef __STDC_WANT_IEC_60559_TYPES_EXT__ # undef __STDC_WANT_IEC_60559_TYPES_EXT__ #endif /* Enable extensions specified by ISO/IEC TR 24731-2:2010. 
*/ #ifndef __STDC_WANT_LIB_EXT2__ # undef __STDC_WANT_LIB_EXT2__ #endif /* Enable extensions specified by ISO/IEC 24747:2009. */ #ifndef __STDC_WANT_MATH_SPEC_FUNCS__ # undef __STDC_WANT_MATH_SPEC_FUNCS__ #endif /* Enable extensions on HP NonStop. */ #ifndef _TANDEM_SOURCE # undef _TANDEM_SOURCE #endif /* Enable X/Open extensions. Define to 500 only if necessary to make mbstate_t available. */ #ifndef _XOPEN_SOURCE # undef _XOPEN_SOURCE #endif /* Version number of package */ #undef VERSION /* Number of bits in a file offset, on hosts where this is settable. */ #undef _FILE_OFFSET_BITS /* Define to 1 on platforms where this makes off_t a 64-bit type. */ #undef _LARGE_FILES /* Number of bits in time_t, on hosts where this is settable. */ #undef _TIME_BITS /* Define to 1 on platforms where this makes time_t a 64-bit type. */ #undef __MINGW_USE_VC2005_COMPAT /* Define as 'unsigned int' if doesn't define. */ #undef size_t gevent-24.11.1/deps/c-ares/src/lib/ares_cookie.c000066400000000000000000000450461471441230600212540ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ /* DNS cookies are a simple form of learned mutual authentication supported by * most DNS server implementations these days and can help prevent DNS Cache * Poisoning attacks for clients and DNS amplification attacks for servers. * * A good overview is here: * https://www.dotmagazine.online/issues/digital-responsibility-and-sustainability/dns-cookies-transaction-mechanism * * RFCs used for implementation are * [RFC7873](https://datatracker.ietf.org/doc/html/rfc7873) which is extended by * [RFC9018](https://datatracker.ietf.org/doc/html/rfc9018). * * Though this could be used on TCP, the likelihood of it being useful is small * and could cause some issues. TCP is better used as a fallback in case there * are issues with DNS Cookie support in the upstream servers (e.g. AnyCast * cluster issues). * * While most recursive DNS servers support DNS Cookies, public DNS servers like * Google (8.8.8.8, 8.8.4.4) and CloudFlare (1.1.1.1, 1.0.0.1) don't seem to * have this enabled yet for unknown reasons. * * The risk to having DNS Cookie support always enabled is nearly zero as there * is built-in detection support and it will simply bypass using cookies if the * remote server doesn't support it. 
The problem arises if a remote server * supports DNS cookies, then stops supporting them (such as if an administrator * reconfigured the server, or maybe there are different servers in a cluster * with different configurations). We need to detect this behavior by tracking * how much time has gone by since we received our last valid cookie reply, and * if we exceed the threshold, reset all cookie parameters like we haven't * attempted a request yet. * * ## Implementation Plan * * ### Constants: * - `COOKIE_CLIENT_TIMEOUT`: 86400s (1 day) * - How often to regenerate the per-server client cookie, even if our * source ip address hasn't changed. * - `COOKIE_UNSUPPORTED_TIMEOUT`: 300s (5 minutes) * - If a server responds without a cookie in the reply, this is how long to * wait before attempting to send a client cookie again. * - `COOKIE_REGRESSION_TIMEOUT`: 120s (2 minutes) * - If a server was once known to return cookies, and all of a sudden stops * returning cookies (but the reply is otherwise valid), this is how long * to continue to attempt to use cookies before giving up and resetting. * Such an event would cause an outage for this duration, but since a * cache poisoning attack should be dropping invalid replies we should be * able to still get the valid reply and not assume it is a server * regression just because we received replies without cookies. * - `COOKIE_RESEND_MAX`: 3 * - Maximum times to resend a query to a server due to the server responding * with `BAD_COOKIE`, after this, we switch to TCP. * * ### Per-server variables: * - `cookie.state`: Known state of cookie support, enumeration. * - `INITIAL` (0): Initial state, not yet determined. Used during startup. * - `GENERATED` (1): Cookie has been generated and sent to a server, but no * validated response yet. * - `SUPPORTED` (2): Server has been determined to properly support cookies * - `UNSUPPORTED` (3): Server has been determined to not support cookies * - `cookie.client` : 8 byte randomly generated client cookie * - `cookie.client_ts`: Timestamp client cookie was generated * - `cookie.client_ip`: IP address client used to connect to server * - `cookie.server`: 8 to 32 byte server cookie * - `cookie.server_len`: length of server cookie * - `cookie.unsupported_ts`: Timestamp of last attempt to use a cookies, but * it was determined that the server didn't support them. * * ### Per-query variables: * - `query.client_cookie`: Duplicate of `cookie.client` at the point in time * the query is put on the wire. This should be available in the * `ares_dns_record_t` for the request for verification purposes so we don't * actually need to duplicate this, just naming it here for the ease of * documentation below. * - `query.cookie_try_count`: Number of tries to send a cookie but receive * `BAD_COOKIE` responses. Used to know when we need to switch to TCP. * * ### Procedure: * **NOTE**: These steps will all be done after obtaining a connection handle as * some of these steps depend on determining the source ip address for the * connection. * * 1. If the query is not using EDNS, then **skip any remaining processing**. * 2. If using TCP, ensure there is no EDNS cookie opt (10) set (there may have * been if this is a resend after upgrade to TCP), then **skip any remaining * processing**. * 3. If `cookie.state == SUPPORTED`, `cookie.unsupported_ts` is non-zero, and * evaluates greater than `COOKIE_REGRESSION_TIMEOUT`, then clear all cookie * settings, set `cookie.state = INITIAL`. Continue to next step (4) * 4. 
If `cookie.state == UNSUPPORTED` * - If `cookie.unsupported_ts` evaluates less than * `COOKIE_UNSUPPORTED_TIMEOUT` * - Ensure there is no EDNS cookie opt (10) set (shouldn't be unless * requestor had put this themselves), then **skip any remaining * processing** as we don't want to try to send cookies. * - Otherwise: * - clear all cookie settings, set `cookie.state = INITIAL`. * - Continue to next step (5) which will send a new cookie. * 5. If `cookie.state == INITIAL`: * - randomly generate new `cookie.client` * - set `cookie.client_ts` to the current timestamp. * - set `cookie.state = GENERATED`. * - set `cookie.client_ip` to the current source ip address. * 6. If `cookie.state == GENERATED || cookie.state == SUPPORTED` and * `cookie.client_ip` does not match the current source ip address: * - clear `cookie.server` * - randomly generate new `cookie.client` * - set `cookie.client_ts` to the current timestamp. * - set `cookie.client_ip` to the current source ip address. * - do not change the `cookie.state` * 7. If `cookie.state == SUPPORTED` and `cookie.client_ts` evaluation exceeds * `COOKIE_CLIENT_TIMEOUT`: * - clear `cookie.server` * - randomly generate new `cookie.client` * - set `cookie.client_ts` to the current timestamp. * - set `cookie.client_ip` to the current source ip address. * - do not change the `cookie.state` * 8. Generate EDNS OPT record (10) for client cookie. The option value will be * the `cookie.client` concatenated with the `cookie.server`. If there is no * known server cookie, it will not be appended. Copy `cookie.client` to * `query.client_cookie` to handle possible client cookie changes by other * queries before a reply is received (technically this is in the cached * `ares_dns_record_t` so no need to manually do this). Send request to * server. * 9. Evaluate response: * 1. If invalid EDNS OPT cookie (10) length sent back in response (valid * length is 16-40), or bad client cookie value (validate first 8 bytes * against `query.client_cookie` not `cookie.client`), **drop response** * as if it hadn't been received. This is likely a spoofing attack. * Wait for valid response up to normal response timeout. * 2. If a EDNS OPT cookie (10) server cookie is returned: * - set `cookie.unsupported_ts` to zero and `cookie.state = SUPPORTED`. * We can confirm this server supports cookies based on the existence * of this record. * - If a new EDNS OPT cookie (10) server cookie is in the response, and * the `client.cookie` matches the `query.client_cookie` still (hasn't * been rotated by some other parallel query), save it as * `cookie.server`. * 3. If dns response `rcode` is `BAD_COOKIE`: * - Ensure a EDNS OPT cookie (10) is returned, otherwise **drop * response**, this is completely invalid and likely an spoof of some * sort. * - Otherwise * - Increment `query.cookie_try_count` * - If `query.cookie_try_count >= COOKIE_RESEND_MAX`, set * `query.using_tcp` to force the next attempt to use TCP. * - **Requeue the query**, but do not increment the normal * `try_count` as a `BAD_COOKIE` reply isn't a normal try failure. * This should end up going all the way back to step 1 on the next * attempt. * 4. If EDNS OPT cookie (10) is **NOT** returned in the response: * - If `cookie.state == SUPPORTED` * - if `cookie.unsupported_ts` is zero, set to the current timestamp. 
* - Drop the response, wait for a valid response to be returned * - if `cookie.state == GENERATED` * - clear all cookie settings * - set `cookie.state = UNSUPPORTED` * - set `cookie.unsupported_ts` to the current time * - Accept response (state should be `UNSUPPORTED` if we're here) */ #include "ares_private.h" /* 1 day */ #define COOKIE_CLIENT_TIMEOUT_MS (86400 * 1000) /* 5 minutes */ #define COOKIE_UNSUPPORTED_TIMEOUT_MS (300 * 1000) /* 2 minutes */ #define COOKIE_REGRESSION_TIMEOUT_MS (120 * 1000) #define COOKIE_RESEND_MAX 3 static const unsigned char * ares_dns_cookie_fetch(const ares_dns_record_t *dnsrec, size_t *len) { const ares_dns_rr_t *rr = ares_dns_get_opt_rr_const(dnsrec); const unsigned char *val = NULL; *len = 0; if (rr == NULL) { return NULL; } if (!ares_dns_rr_get_opt_byid(rr, ARES_RR_OPT_OPTIONS, ARES_OPT_PARAM_COOKIE, &val, len)) { return NULL; } return val; } static ares_bool_t timeval_is_set(const ares_timeval_t *tv) { if (tv->sec != 0 && tv->usec != 0) { return ARES_TRUE; } return ARES_FALSE; } static ares_bool_t timeval_expired(const ares_timeval_t *tv, const ares_timeval_t *now, unsigned long millsecs) { ares_int64_t tvdiff_ms; ares_timeval_t tvdiff; ares__timeval_diff(&tvdiff, tv, now); tvdiff_ms = tvdiff.sec * 1000 + tvdiff.usec / 1000; if (tvdiff_ms >= (ares_int64_t)millsecs) { return ARES_TRUE; } return ARES_FALSE; } static void ares_cookie_clear(ares_cookie_t *cookie) { memset(cookie, 0, sizeof(*cookie)); cookie->state = ARES_COOKIE_INITIAL; } static void ares_cookie_generate(ares_cookie_t *cookie, ares_conn_t *conn, const ares_timeval_t *now) { ares_channel_t *channel = conn->server->channel; ares__rand_bytes(channel->rand_state, cookie->client, sizeof(cookie->client)); memcpy(&cookie->client_ts, now, sizeof(cookie->client_ts)); memcpy(&cookie->client_ip, &conn->self_ip, sizeof(cookie->client_ip)); } static void ares_cookie_clear_server(ares_cookie_t *cookie) { memset(cookie->server, 0, sizeof(cookie->server)); cookie->server_len = 0; } static ares_bool_t ares_addr_equal(const struct ares_addr *addr1, const struct ares_addr *addr2) { if (addr1->family != addr2->family) { return ARES_FALSE; } switch (addr1->family) { case AF_INET: if (memcmp(&addr1->addr.addr4, &addr2->addr.addr4, sizeof(addr1->addr.addr4)) == 0) { return ARES_TRUE; } break; case AF_INET6: /* This structure is weird, and due to padding SonarCloud complains if * you don't punch all the way down. 
At some point we should rework * this structure */ if (memcmp(&addr1->addr.addr6._S6_un._S6_u8, &addr2->addr.addr6._S6_un._S6_u8, sizeof(addr1->addr.addr6._S6_un._S6_u8)) == 0) { return ARES_TRUE; } break; default: break; /* LCOV_EXCL_LINE */ } return ARES_FALSE; } ares_status_t ares_cookie_apply(ares_dns_record_t *dnsrec, ares_conn_t *conn, const ares_timeval_t *now) { ares_server_t *server = conn->server; ares_cookie_t *cookie = &server->cookie; ares_dns_rr_t *rr = ares_dns_get_opt_rr(dnsrec); unsigned char c[40]; size_t c_len; /* If there is no OPT record, then EDNS isn't supported, and therefore * cookies can't be supported */ if (rr == NULL) { return ARES_SUCCESS; } /* No cookies on TCP, make sure we remove one if one is present */ if (conn->flags & ARES_CONN_FLAG_TCP) { ares_dns_rr_del_opt_byid(rr, ARES_RR_OPT_OPTIONS, ARES_OPT_PARAM_COOKIE); return ARES_SUCCESS; } /* Look for regression */ if (cookie->state == ARES_COOKIE_SUPPORTED && timeval_is_set(&cookie->unsupported_ts) && timeval_expired(&cookie->unsupported_ts, now, COOKIE_REGRESSION_TIMEOUT_MS)) { ares_cookie_clear(cookie); } /* Handle unsupported state */ if (cookie->state == ARES_COOKIE_UNSUPPORTED) { /* If timer hasn't expired, just delete any possible cookie and return */ if (!timeval_expired(&cookie->unsupported_ts, now, COOKIE_REGRESSION_TIMEOUT_MS)) { ares_dns_rr_del_opt_byid(rr, ARES_RR_OPT_OPTIONS, ARES_OPT_PARAM_COOKIE); return ARES_SUCCESS; } /* We want to try to "learn" again */ ares_cookie_clear(cookie); } /* Generate a new cookie */ if (cookie->state == ARES_COOKIE_INITIAL) { ares_cookie_generate(cookie, conn, now); cookie->state = ARES_COOKIE_GENERATED; } /* Regenerate the cookie and clear the server cookie if the client ip has * changed */ if ((cookie->state == ARES_COOKIE_GENERATED || cookie->state == ARES_COOKIE_SUPPORTED) && !ares_addr_equal(&conn->self_ip, &cookie->client_ip)) { ares_cookie_clear_server(cookie); ares_cookie_generate(cookie, conn, now); } /* If the client cookie has reached its maximum time, refresh it */ if (cookie->state == ARES_COOKIE_SUPPORTED && timeval_expired(&cookie->client_ts, now, COOKIE_CLIENT_TIMEOUT_MS)) { ares_cookie_clear_server(cookie); ares_cookie_generate(cookie, conn, now); } /* Generate the full cookie which is the client cookie concatenated with the * server cookie (if there is one) and apply it. 
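   *
   * Layout sketch of the resulting EDNS COOKIE option value (added for
   * clarity, not part of the upstream sources); the client half is always
   * 8 bytes, the server half is 8-32 bytes and only present once the server
   * has handed one back:
   *
   *   +------------------+-----------------------------+
   *   | client (8 bytes) | server (8-32 bytes, if any)  |
   *   +------------------+-----------------------------+
   *
   * so c_len below is 8 while we have no server cookie yet, and
   * 8 + cookie->server_len afterwards.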
*/ memcpy(c, cookie->client, sizeof(cookie->client)); if (cookie->server_len) { memcpy(c + sizeof(cookie->client), cookie->server, cookie->server_len); } c_len = sizeof(cookie->client) + cookie->server_len; return ares_dns_rr_set_opt(rr, ARES_RR_OPT_OPTIONS, ARES_OPT_PARAM_COOKIE, c, c_len); } ares_status_t ares_cookie_validate(ares_query_t *query, const ares_dns_record_t *dnsresp, ares_conn_t *conn, const ares_timeval_t *now) { ares_server_t *server = conn->server; ares_cookie_t *cookie = &server->cookie; const ares_dns_record_t *dnsreq = query->query; const unsigned char *resp_cookie; size_t resp_cookie_len; const unsigned char *req_cookie; size_t req_cookie_len; resp_cookie = ares_dns_cookie_fetch(dnsresp, &resp_cookie_len); /* Invalid cookie length, drop */ if (resp_cookie && (resp_cookie_len < 8 || resp_cookie_len > 40)) { return ARES_EBADRESP; } req_cookie = ares_dns_cookie_fetch(dnsreq, &req_cookie_len); /* Didn't request cookies, so we can stop evaluating */ if (req_cookie == NULL) { return ARES_SUCCESS; } /* If 8-byte prefix for returned cookie doesn't match the requested cookie, * drop for spoofing */ if (resp_cookie && memcmp(req_cookie, resp_cookie, 8) != 0) { return ARES_EBADRESP; } if (resp_cookie && resp_cookie_len > 8) { /* Make sure we record that we successfully received a cookie response */ cookie->state = ARES_COOKIE_SUPPORTED; memset(&cookie->unsupported_ts, 0, sizeof(cookie->unsupported_ts)); /* If client cookie hasn't been rotated, save the returned server cookie */ if (memcmp(cookie->client, req_cookie, sizeof(cookie->client)) == 0) { cookie->server_len = resp_cookie_len - 8; memcpy(cookie->server, resp_cookie + 8, cookie->server_len); } } if (ares_dns_record_get_rcode(dnsresp) == ARES_RCODE_BADCOOKIE) { /* Illegal to return BADCOOKIE but no cookie, drop */ if (resp_cookie == NULL) { return ARES_EBADRESP; } /* If we have too many attempts to send a cookie, we need to requeue as * tcp */ query->cookie_try_count++; if (query->cookie_try_count >= COOKIE_RESEND_MAX) { query->using_tcp = ARES_TRUE; } /* Resend the request, hopefully it will work the next time as we should * have recorded a server cookie */ ares__requeue_query(query, now, ARES_SUCCESS, ARES_FALSE /* Don't increment try count */, NULL); /* Parent needs to drop this response */ return ARES_EBADRESP; } /* We've got a response with a server cookie, and we've done all the * evaluation we can, return success */ if (resp_cookie_len > 8) { return ARES_SUCCESS; } if (cookie->state == ARES_COOKIE_SUPPORTED) { /* If we're not currently tracking an error time yet, start */ if (!timeval_is_set(&cookie->unsupported_ts)) { memcpy(&cookie->unsupported_ts, now, sizeof(cookie->unsupported_ts)); } /* Drop it since we expected a cookie */ return ARES_EBADRESP; } if (cookie->state == ARES_COOKIE_GENERATED) { ares_cookie_clear(cookie); cookie->state = ARES_COOKIE_UNSUPPORTED; memcpy(&cookie->unsupported_ts, now, sizeof(cookie->unsupported_ts)); } /* Cookie state should be UNSUPPORTED if we're here */ return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares_data.c000066400000000000000000000115061471441230600207060ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2009 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the 
Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include #include #include "ares_data.h" /* ** ares_free_data() - c-ares external API function. ** ** This function must be used by the application to free data memory that ** has been internally allocated by some c-ares function and for which a ** pointer has already been returned to the calling application. The list ** of c-ares functions returning pointers that must be free'ed using this ** function is: ** ** ares_get_servers() ** ares_parse_srv_reply() ** ares_parse_txt_reply() */ void ares_free_data(void *dataptr) { while (dataptr != NULL) { struct ares_data *ptr; void *next_data = NULL; #ifdef __INTEL_COMPILER # pragma warning(push) # pragma warning(disable : 1684) /* 1684: conversion from pointer to same-sized integral type */ #endif ptr = (void *)((char *)dataptr - offsetof(struct ares_data, data)); #ifdef __INTEL_COMPILER # pragma warning(pop) #endif if (ptr->mark != ARES_DATATYPE_MARK) { return; } switch (ptr->type) { case ARES_DATATYPE_MX_REPLY: next_data = ptr->data.mx_reply.next; ares_free(ptr->data.mx_reply.host); break; case ARES_DATATYPE_SRV_REPLY: next_data = ptr->data.srv_reply.next; ares_free(ptr->data.srv_reply.host); break; case ARES_DATATYPE_URI_REPLY: next_data = ptr->data.uri_reply.next; ares_free(ptr->data.uri_reply.uri); break; case ARES_DATATYPE_TXT_REPLY: case ARES_DATATYPE_TXT_EXT: next_data = ptr->data.txt_reply.next; ares_free(ptr->data.txt_reply.txt); break; case ARES_DATATYPE_ADDR_NODE: next_data = ptr->data.addr_node.next; break; case ARES_DATATYPE_ADDR_PORT_NODE: next_data = ptr->data.addr_port_node.next; break; case ARES_DATATYPE_NAPTR_REPLY: next_data = ptr->data.naptr_reply.next; ares_free(ptr->data.naptr_reply.flags); ares_free(ptr->data.naptr_reply.service); ares_free(ptr->data.naptr_reply.regexp); ares_free(ptr->data.naptr_reply.replacement); break; case ARES_DATATYPE_SOA_REPLY: ares_free(ptr->data.soa_reply.nsname); ares_free(ptr->data.soa_reply.hostmaster); break; case ARES_DATATYPE_CAA_REPLY: next_data = ptr->data.caa_reply.next; ares_free(ptr->data.caa_reply.property); ares_free(ptr->data.caa_reply.value); break; default: return; } ares_free(ptr); dataptr = next_data; } } /* ** ares_malloc_data() - c-ares internal helper function. ** ** This function allocates memory for a c-ares private ares_data struct ** for the specified ares_datatype, initializes c-ares private fields ** and zero initializes those which later might be used from the public ** API. It returns an interior pointer which can be passed by c-ares ** functions to the calling application, and that must be free'ed using ** c-ares external API function ares_free_data(). 
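 *
 * Illustrative use of that contract from the caller's side (hypothetical
 * buffer names, shown only for clarity):
 *
 *   struct ares_srv_reply *srv = NULL;
 *   if (ares_parse_srv_reply(abuf, alen, &srv) == ARES_SUCCESS) {
 *     ... walk the srv->next chain ...
 *     ares_free_data(srv);   (a single call frees every node in the list)
 *   }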
*/ void *ares_malloc_data(ares_datatype type) { struct ares_data *ptr; ptr = ares_malloc_zero(sizeof(*ptr)); if (!ptr) { return NULL; } switch (type) { case ARES_DATATYPE_MX_REPLY: case ARES_DATATYPE_SRV_REPLY: case ARES_DATATYPE_URI_REPLY: case ARES_DATATYPE_TXT_EXT: case ARES_DATATYPE_TXT_REPLY: case ARES_DATATYPE_CAA_REPLY: case ARES_DATATYPE_ADDR_NODE: case ARES_DATATYPE_ADDR_PORT_NODE: case ARES_DATATYPE_NAPTR_REPLY: case ARES_DATATYPE_SOA_REPLY: break; default: ares_free(ptr); return NULL; } ptr->mark = ARES_DATATYPE_MARK; ptr->type = type; return &ptr->data; } gevent-24.11.1/deps/c-ares/src/lib/ares_data.h000066400000000000000000000076021471441230600207150ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2009 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_DATA_H #define __ARES_DATA_H typedef enum { ARES_DATATYPE_UNKNOWN = 1, /* unknown data type - introduced in 1.7.0 */ ARES_DATATYPE_SRV_REPLY, /* struct ares_srv_reply - introduced in 1.7.0 */ ARES_DATATYPE_TXT_REPLY, /* struct ares_txt_reply - introduced in 1.7.0 */ ARES_DATATYPE_TXT_EXT, /* struct ares_txt_ext - introduced in 1.11.0 */ ARES_DATATYPE_ADDR_NODE, /* struct ares_addr_node - introduced in 1.7.1 */ ARES_DATATYPE_MX_REPLY, /* struct ares_mx_reply - introduced in 1.7.2 */ ARES_DATATYPE_NAPTR_REPLY, /* struct ares_naptr_reply - introduced in 1.7.6 */ ARES_DATATYPE_SOA_REPLY, /* struct ares_soa_reply - introduced in 1.9.0 */ ARES_DATATYPE_URI_REPLY, /* struct ares_uri_reply */ #if 0 ARES_DATATYPE_ADDR6TTL, /* struct ares_addrttl */ ARES_DATATYPE_ADDRTTL, /* struct ares_addr6ttl */ ARES_DATATYPE_HOSTENT, /* struct hostent */ ARES_DATATYPE_OPTIONS, /* struct ares_options */ #endif ARES_DATATYPE_ADDR_PORT_NODE, /* struct ares_addr_port_node - introduced in 1.11.0 */ ARES_DATATYPE_CAA_REPLY, /* struct ares_caa_reply - introduced in 1.17 */ ARES_DATATYPE_LAST /* not used - introduced in 1.7.0 */ } ares_datatype; #define ARES_DATATYPE_MARK 0xbead /* * ares_data struct definition is internal to c-ares and shall not * be exposed by the public API in order to allow future changes * and extensions to it without breaking ABI. This will be used * internally by c-ares as the container of multiple types of data * dynamically allocated for which a reference will be returned * to the calling application. 
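 *
 * (ares_free_data() in ares_data.c recovers this container from such an
 * interior pointer by subtracting offsetof(struct ares_data, data) and then
 * checks the private 'mark' field against ARES_DATATYPE_MARK, so a pointer
 * that was not handed out by c-ares is rejected rather than freed.)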
* * c-ares API functions returning a pointer to c-ares internally * allocated data will actually be returning an interior pointer * into this ares_data struct. * * All this is 'invisible' to the calling application, the only * requirement is that this kind of data must be free'ed by the * calling application using ares_free_data() with the pointer * it has received from a previous c-ares function call. */ struct ares_data { ares_datatype type; /* Actual data type identifier. */ unsigned int mark; /* Private ares_data signature. */ union { struct ares_txt_reply txt_reply; struct ares_txt_ext txt_ext; struct ares_srv_reply srv_reply; struct ares_addr_node addr_node; struct ares_addr_port_node addr_port_node; struct ares_mx_reply mx_reply; struct ares_naptr_reply naptr_reply; struct ares_soa_reply soa_reply; struct ares_caa_reply caa_reply; struct ares_uri_reply uri_reply; } data; }; void *ares_malloc_data(ares_datatype type); #endif /* __ARES_DATA_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_destroy.c000066400000000000000000000121001471441230600214550ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "event/ares_event.h" #include void ares_destroy(ares_channel_t *channel) { size_t i; ares__llist_node_t *node = NULL; if (channel == NULL) { return; } /* Mark as being shutdown */ ares__channel_lock(channel); channel->sys_up = ARES_FALSE; ares__channel_unlock(channel); /* Disable configuration change monitoring. We can't hold a lock because * some cleanup routines, such as on Windows, are synchronous operations. * What we've observed is a system config change event was triggered right * at shutdown time and it tries to take the channel lock and the destruction * waits for that event to complete before it continues so we get a channel * lock deadlock at shutdown if we hold a lock during this process. */ if (channel->optmask & ARES_OPT_EVENT_THREAD) { ares_event_thread_t *e = channel->sock_state_cb_data; if (e && e->configchg) { ares_event_configchg_destroy(e->configchg); e->configchg = NULL; } } /* Wait for reinit thread to exit if there was one pending, can't be * holding a lock as the thread may take locks. 
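 *
 * (ares__thread_join() below blocks until that reinit worker has finished;
 * doing the join while holding the channel lock could deadlock against any
 * lock the worker itself tries to take, hence the join happens unlocked.)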
*/ if (channel->reinit_thread != NULL) { void *rv; ares__thread_join(channel->reinit_thread, &rv); channel->reinit_thread = NULL; } /* Lock because callbacks will be triggered, and any system-generated * callbacks need to hold a channel lock. */ ares__channel_lock(channel); /* Destroy all queries */ node = ares__llist_node_first(channel->all_queries); while (node != NULL) { ares__llist_node_t *next = ares__llist_node_next(node); ares_query_t *query = ares__llist_node_claim(node); query->node_all_queries = NULL; query->callback(query->arg, ARES_EDESTRUCTION, 0, NULL); ares__free_query(query); node = next; } ares_queue_notify_empty(channel); #ifndef NDEBUG /* Freeing the query should remove it from all the lists in which it sits, * so all query lists should be empty now. */ assert(ares__llist_len(channel->all_queries) == 0); assert(ares__htable_szvp_num_keys(channel->queries_by_qid) == 0); assert(ares__slist_len(channel->queries_by_timeout) == 0); #endif ares__destroy_servers_state(channel); #ifndef NDEBUG assert(ares__htable_asvp_num_keys(channel->connnode_by_socket) == 0); #endif /* No more callbacks will be triggered after this point, unlock */ ares__channel_unlock(channel); /* Shut down the event thread */ if (channel->optmask & ARES_OPT_EVENT_THREAD) { ares_event_thread_destroy(channel); } if (channel->domains) { for (i = 0; i < channel->ndomains; i++) { ares_free(channel->domains[i]); } ares_free(channel->domains); } ares__llist_destroy(channel->all_queries); ares__slist_destroy(channel->queries_by_timeout); ares__htable_szvp_destroy(channel->queries_by_qid); ares__htable_asvp_destroy(channel->connnode_by_socket); ares_free(channel->sortlist); ares_free(channel->lookups); ares_free(channel->resolvconf_path); ares_free(channel->hosts_path); ares__destroy_rand_state(channel->rand_state); ares__hosts_file_destroy(channel->hf); ares__qcache_destroy(channel->qcache); ares__channel_threading_destroy(channel); ares_free(channel); } void ares__destroy_server(ares_server_t *server) { if (server == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__close_sockets(server); ares__llist_destroy(server->connections); ares__buf_destroy(server->tcp_parser); ares__buf_destroy(server->tcp_send); ares_free(server); } void ares__destroy_servers_state(ares_channel_t *channel) { ares__slist_node_t *node; while ((node = ares__slist_node_first(channel->servers)) != NULL) { ares_server_t *server = ares__slist_node_claim(node); ares__destroy_server(server); } ares__slist_destroy(channel->servers); channel->servers = NULL; } gevent-24.11.1/deps/c-ares/src/lib/ares_free_hostent.c000066400000000000000000000034361471441230600224650ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETDB_H # include #endif void ares_free_hostent(struct hostent *host) { char **p; if (!host) { return; } ares_free(host->h_name); for (p = host->h_aliases; p && *p; p++) { ares_free(*p); } ares_free(host->h_aliases); if (host->h_addr_list) { ares_free( host->h_addr_list[0]); /* no matter if there is one or many entries, there is only one malloc for all of them */ ares_free(host->h_addr_list); } ares_free(host); } gevent-24.11.1/deps/c-ares/src/lib/ares_free_string.c000066400000000000000000000025261471441230600223060ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2000 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" void ares_free_string(void *str) { ares_free(str); } gevent-24.11.1/deps/c-ares/src/lib/ares_freeaddrinfo.c000066400000000000000000000037761471441230600224370ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2019 Andrew Selivanov * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETDB_H # include #endif void ares__freeaddrinfo_cnames(struct ares_addrinfo_cname *head) { struct ares_addrinfo_cname *current; while (head) { current = head; head = head->next; ares_free(current->alias); ares_free(current->name); ares_free(current); } } void ares__freeaddrinfo_nodes(struct ares_addrinfo_node *head) { struct ares_addrinfo_node *current; while (head) { current = head; head = head->ai_next; ares_free(current->ai_addr); ares_free(current); } } void ares_freeaddrinfo(struct ares_addrinfo *ai) { if (ai == NULL) { return; } ares__freeaddrinfo_cnames(ai->cnames); ares__freeaddrinfo_nodes(ai->nodes); ares_free(ai->name); ares_free(ai); } gevent-24.11.1/deps/c-ares/src/lib/ares_getaddrinfo.c000066400000000000000000000504301471441230600222620ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998, 2011, 2013 Massachusetts Institute of Technology * Copyright (c) 2017 Christian Ammer * Copyright (c) 2019 Andrew Selivanov * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_GETSERVBYNAME_R # if !defined(GETSERVBYNAME_R_ARGS) || (GETSERVBYNAME_R_ARGS < 4) || \ (GETSERVBYNAME_R_ARGS > 6) # error "you MUST specify a valid number of arguments for getservbyname_r" # endif #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #ifdef HAVE_STRINGS_H # include #endif #include #ifdef HAVE_LIMITS_H # include #endif #include "ares_dns.h" #ifdef _WIN32 # include "ares_platform.h" #endif struct host_query { ares_channel_t *channel; char *name; unsigned short port; /* in host order */ ares_addrinfo_callback callback; void *arg; struct ares_addrinfo_hints hints; int sent_family; /* this family is what was is being used */ size_t timeouts; /* number of timeouts we saw for this request */ char *lookups; /* Duplicate memory from channel because of ares_reinit() */ const char *remaining_lookups; /* types of lookup we need to perform ("fb" by default, file and dns respectively) */ /* Search order for names */ char **names; size_t names_cnt; size_t next_name_idx; /* next name index being attempted */ struct ares_addrinfo *ai; /* store results between lookups */ unsigned short qid_a; /* qid for A request */ unsigned short qid_aaaa; /* qid for AAAA request */ size_t remaining; /* number of DNS answers waiting for */ /* Track nodata responses to possibly override final result */ size_t nodata_cnt; }; static const struct ares_addrinfo_hints default_hints = { 0, /* ai_flags */ AF_UNSPEC, /* ai_family */ 0, /* ai_socktype */ 0, /* ai_protocol */ }; /* forward declarations */ static ares_bool_t next_dns_lookup(struct host_query *hquery); struct ares_addrinfo_cname * ares__append_addrinfo_cname(struct ares_addrinfo_cname **head) { struct ares_addrinfo_cname *tail = ares_malloc_zero(sizeof(*tail)); struct ares_addrinfo_cname *last = *head; if (tail == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } if (!last) { *head = tail; return tail; } while (last->next) { last = last->next; } last->next = tail; return tail; } void ares__addrinfo_cat_cnames(struct ares_addrinfo_cname **head, struct ares_addrinfo_cname *tail) { struct ares_addrinfo_cname *last = *head; if (!last) { *head = tail; return; } while (last->next) { last = last->next; } last->next = tail; } /* Allocate new addrinfo and append to the tail. */ struct ares_addrinfo_node * ares__append_addrinfo_node(struct ares_addrinfo_node **head) { struct ares_addrinfo_node *tail = ares_malloc_zero(sizeof(*tail)); struct ares_addrinfo_node *last = *head; if (tail == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } if (!last) { *head = tail; return tail; } while (last->ai_next) { last = last->ai_next; } last->ai_next = tail; return tail; } void ares__addrinfo_cat_nodes(struct ares_addrinfo_node **head, struct ares_addrinfo_node *tail) { struct ares_addrinfo_node *last = *head; if (!last) { *head = tail; return; } while (last->ai_next) { last = last->ai_next; } last->ai_next = tail; } /* Resolve service name into port number given in host byte order. * If not resolved, return 0. 
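 *
 * For example, lookup_service("https", 0) queries getservbyname("https",
 * "tcp") and on typical systems yields 443; an unknown service name yields
 * 0, in which case ares_getaddrinfo_int() falls back to parsing the string
 * as a numeric port.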
*/ static unsigned short lookup_service(const char *service, int flags) { const char *proto; struct servent *sep; #ifdef HAVE_GETSERVBYNAME_R struct servent se; char tmpbuf[4096]; #endif if (service) { if (flags & ARES_NI_UDP) { proto = "udp"; } else if (flags & ARES_NI_SCTP) { proto = "sctp"; } else if (flags & ARES_NI_DCCP) { proto = "dccp"; } else { proto = "tcp"; } #ifdef HAVE_GETSERVBYNAME_R memset(&se, 0, sizeof(se)); sep = &se; memset(tmpbuf, 0, sizeof(tmpbuf)); # if GETSERVBYNAME_R_ARGS == 6 if (getservbyname_r(service, proto, &se, (void *)tmpbuf, sizeof(tmpbuf), &sep) != 0) { sep = NULL; /* LCOV_EXCL_LINE: buffer large so this never fails */ } # elif GETSERVBYNAME_R_ARGS == 5 sep = getservbyname_r(service, proto, &se, (void *)tmpbuf, sizeof(tmpbuf)); # elif GETSERVBYNAME_R_ARGS == 4 if (getservbyname_r(service, proto, &se, (void *)tmpbuf) != 0) { sep = NULL; } # else /* Lets just hope the OS uses TLS! */ sep = getservbyname(service, proto); # endif #else /* Lets just hope the OS uses TLS! */ # if (defined(NETWARE) && !defined(__NOVELL_LIBC__)) sep = getservbyname(service, (char *)proto); # else sep = getservbyname(service, proto); # endif #endif return (sep ? ntohs((unsigned short)sep->s_port) : 0); } return 0; } /* If the name looks like an IP address or an error occurred, * fake up a host entry, end the query immediately, and return true. * Otherwise return false. */ static ares_bool_t fake_addrinfo(const char *name, unsigned short port, const struct ares_addrinfo_hints *hints, struct ares_addrinfo *ai, ares_addrinfo_callback callback, void *arg) { struct ares_addrinfo_cname *cname; ares_status_t status = ARES_SUCCESS; ares_bool_t result = ARES_FALSE; int family = hints->ai_family; if (family == AF_INET || family == AF_INET6 || family == AF_UNSPEC) { /* It only looks like an IP address if it's all numbers and dots. */ size_t numdots = 0; ares_bool_t valid = ARES_TRUE; const char *p; for (p = name; *p; p++) { if (!ares__isdigit(*p) && *p != '.') { valid = ARES_FALSE; break; } else if (*p == '.') { numdots++; } } /* if we don't have 3 dots, it is illegal * (although inet_pton doesn't think so). */ if (numdots != 3 || !valid) { result = ARES_FALSE; } else { struct in_addr addr4; result = ares_inet_pton(AF_INET, name, &addr4) < 1 ? ARES_FALSE : ARES_TRUE; if (result) { status = ares_append_ai_node(AF_INET, port, 0, &addr4, &ai->nodes); if (status != ARES_SUCCESS) { callback(arg, (int)status, 0, NULL); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_TRUE; /* LCOV_EXCL_LINE: OutOfMemory */ } } } } if (!result && (family == AF_INET6 || family == AF_UNSPEC)) { struct ares_in6_addr addr6; result = ares_inet_pton(AF_INET6, name, &addr6) < 1 ? ARES_FALSE : ARES_TRUE; if (result) { status = ares_append_ai_node(AF_INET6, port, 0, &addr6, &ai->nodes); if (status != ARES_SUCCESS) { callback(arg, (int)status, 0, NULL); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_TRUE; /* LCOV_EXCL_LINE: OutOfMemory */ } } } if (!result) { return ARES_FALSE; } if (hints->ai_flags & ARES_AI_CANONNAME) { cname = ares__append_addrinfo_cname(&ai->cnames); if (!cname) { /* LCOV_EXCL_START: OutOfMemory */ ares_freeaddrinfo(ai); callback(arg, ARES_ENOMEM, 0, NULL); return ARES_TRUE; /* LCOV_EXCL_STOP */ } /* Duplicate the name, to avoid a constness violation. 
*/ cname->name = ares_strdup(name); if (!cname->name) { ares_freeaddrinfo(ai); callback(arg, ARES_ENOMEM, 0, NULL); return ARES_TRUE; } } ai->nodes->ai_socktype = hints->ai_socktype; ai->nodes->ai_protocol = hints->ai_protocol; callback(arg, ARES_SUCCESS, 0, ai); return ARES_TRUE; } static void hquery_free(struct host_query *hquery, ares_bool_t cleanup_ai) { if (cleanup_ai) { ares_freeaddrinfo(hquery->ai); } ares__strsplit_free(hquery->names, hquery->names_cnt); ares_free(hquery->name); ares_free(hquery->lookups); ares_free(hquery); } static void end_hquery(struct host_query *hquery, ares_status_t status) { struct ares_addrinfo_node sentinel; struct ares_addrinfo_node *next; if (status == ARES_SUCCESS) { if (!(hquery->hints.ai_flags & ARES_AI_NOSORT) && hquery->ai->nodes) { sentinel.ai_next = hquery->ai->nodes; ares__sortaddrinfo(hquery->channel, &sentinel); hquery->ai->nodes = sentinel.ai_next; } next = hquery->ai->nodes; while (next) { next->ai_socktype = hquery->hints.ai_socktype; next->ai_protocol = hquery->hints.ai_protocol; next = next->ai_next; } } else { /* Clean up what we have collected by so far. */ ares_freeaddrinfo(hquery->ai); hquery->ai = NULL; } hquery->callback(hquery->arg, (int)status, (int)hquery->timeouts, hquery->ai); hquery_free(hquery, ARES_FALSE); } ares_bool_t ares__is_localhost(const char *name) { /* RFC6761 6.3 says : The domain "localhost." and any names falling within * ".localhost." */ size_t len; if (name == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (strcmp(name, "localhost") == 0) { return ARES_TRUE; } len = ares_strlen(name); if (len < 10 /* strlen(".localhost") */) { return ARES_FALSE; } if (strcmp(name + (len - 10 /* strlen(".localhost") */), ".localhost") == 0) { return ARES_TRUE; } return ARES_FALSE; } static ares_status_t file_lookup(struct host_query *hquery) { const ares_hosts_entry_t *entry; ares_status_t status; /* Per RFC 7686, reject queries for ".onion" domain names with NXDOMAIN. */ if (ares__is_onion_domain(hquery->name)) { return ARES_ENOTFOUND; } status = ares__hosts_search_host( hquery->channel, (hquery->hints.ai_flags & ARES_AI_ENVHOSTS) ? ARES_TRUE : ARES_FALSE, hquery->name, &entry); if (status != ARES_SUCCESS) { goto done; } status = ares__hosts_entry_to_addrinfo( entry, hquery->name, hquery->hints.ai_family, hquery->port, (hquery->hints.ai_flags & ARES_AI_CANONNAME) ? ARES_TRUE : ARES_FALSE, hquery->ai); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } done: /* RFC6761 section 6.3 #3 states that "Name resolution APIs and libraries * SHOULD recognize localhost names as special and SHOULD always return the * IP loopback address for address queries". * We will also ignore ALL errors when trying to resolve localhost, such * as permissions errors reading /etc/hosts or a malformed /etc/hosts */ if (status != ARES_SUCCESS && status != ARES_ENOMEM && ares__is_localhost(hquery->name)) { return ares__addrinfo_localhost(hquery->name, hquery->port, &hquery->hints, hquery->ai); } return status; } static void next_lookup(struct host_query *hquery, ares_status_t status) { switch (*hquery->remaining_lookups) { case 'b': /* RFC6761 section 6.3 #3 says "Name resolution APIs SHOULD NOT send * queries for localhost names to their configured caching DNS * server(s)." * Otherwise, DNS lookup. 
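 *
 * ('b' is the DNS lookup, 'f' the hosts-file lookup handled below; the
 * order of the two is taken from hquery->remaining_lookups, "fb" by
 * default as described in struct host_query above.)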
*/ if (!ares__is_localhost(hquery->name) && next_dns_lookup(hquery)) { break; } hquery->remaining_lookups++; next_lookup(hquery, status); break; case 'f': /* Host file lookup */ if (file_lookup(hquery) == ARES_SUCCESS) { end_hquery(hquery, ARES_SUCCESS); break; } hquery->remaining_lookups++; next_lookup(hquery, status); break; default: /* No lookup left */ end_hquery(hquery, status); break; } } static void terminate_retries(const struct host_query *hquery, unsigned short qid) { unsigned short term_qid = (qid == hquery->qid_a) ? hquery->qid_aaaa : hquery->qid_a; const ares_channel_t *channel = hquery->channel; ares_query_t *query = NULL; /* No other outstanding queries, nothing to do */ if (!hquery->remaining) { return; } query = ares__htable_szvp_get_direct(channel->queries_by_qid, term_qid); if (query == NULL) { return; } query->no_retries = ARES_TRUE; } static void host_callback(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec) { struct host_query *hquery = (struct host_query *)arg; ares_status_t addinfostatus = ARES_SUCCESS; hquery->timeouts += timeouts; hquery->remaining--; if (status == ARES_SUCCESS) { if (dnsrec == NULL) { addinfostatus = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ } else { addinfostatus = ares__parse_into_addrinfo(dnsrec, ARES_TRUE, hquery->port, hquery->ai); } if (addinfostatus == ARES_SUCCESS) { terminate_retries(hquery, ares_dns_record_get_id(dnsrec)); } } if (!hquery->remaining) { if (status == ARES_EDESTRUCTION || status == ARES_ECANCELLED) { /* must make sure we don't do next_lookup() on destroy or cancel, * and return the appropriate status. We won't return a partial * result in this case. */ end_hquery(hquery, status); } else if (addinfostatus != ARES_SUCCESS && addinfostatus != ARES_ENODATA) { /* error in parsing result e.g. no memory */ if (addinfostatus == ARES_EBADRESP && hquery->ai->nodes) { /* We got a bad response from server, but at least one query * ended with ARES_SUCCESS */ end_hquery(hquery, ARES_SUCCESS); } else { end_hquery(hquery, addinfostatus); } } else if (hquery->ai->nodes) { /* at least one query ended with ARES_SUCCESS */ end_hquery(hquery, ARES_SUCCESS); } else if (status == ARES_ENOTFOUND || status == ARES_ENODATA || addinfostatus == ARES_ENODATA) { if (status == ARES_ENODATA || addinfostatus == ARES_ENODATA) { hquery->nodata_cnt++; } next_lookup(hquery, hquery->nodata_cnt ? ARES_ENODATA : status); } else if ( (status == ARES_ESERVFAIL || status == ARES_EREFUSED) && ares__name_label_cnt(hquery->names[hquery->next_name_idx-1]) == 1 ) { /* Issue #852, systemd-resolved may return SERVFAIL or REFUSED on a * single label domain name. */ next_lookup(hquery, hquery->nodata_cnt ? ARES_ENODATA : status); } else { end_hquery(hquery, status); } } /* at this point we keep on waiting for the next query to finish */ } static void ares_getaddrinfo_int(ares_channel_t *channel, const char *name, const char *service, const struct ares_addrinfo_hints *hints, ares_addrinfo_callback callback, void *arg) { struct host_query *hquery; unsigned short port = 0; int family; struct ares_addrinfo *ai; ares_status_t status; if (!hints) { hints = &default_hints; } family = hints->ai_family; /* Right now we only know how to look up Internet addresses and unspec means try both basically. 
*/ if (family != AF_INET && family != AF_INET6 && family != AF_UNSPEC) { callback(arg, ARES_ENOTIMP, 0, NULL); return; } if (ares__is_onion_domain(name)) { callback(arg, ARES_ENOTFOUND, 0, NULL); return; } if (service) { if (hints->ai_flags & ARES_AI_NUMERICSERV) { unsigned long val; errno = 0; val = strtoul(service, NULL, 0); if ((val == 0 && errno != 0) || val > 65535) { callback(arg, ARES_ESERVICE, 0, NULL); return; } port = (unsigned short)val; } else { port = lookup_service(service, 0); if (!port) { unsigned long val; errno = 0; val = strtoul(service, NULL, 0); if ((val == 0 && errno != 0) || val > 65535) { callback(arg, ARES_ESERVICE, 0, NULL); return; } port = (unsigned short)val; } } } ai = ares_malloc_zero(sizeof(*ai)); if (!ai) { callback(arg, ARES_ENOMEM, 0, NULL); return; } if (fake_addrinfo(name, port, hints, ai, callback, arg)) { return; } /* Allocate and fill in the host query structure. */ hquery = ares_malloc_zero(sizeof(*hquery)); if (!hquery) { ares_freeaddrinfo(ai); callback(arg, ARES_ENOMEM, 0, NULL); return; } hquery->port = port; hquery->channel = channel; hquery->hints = *hints; hquery->sent_family = -1; /* nothing is sent yet */ hquery->callback = callback; hquery->arg = arg; hquery->ai = ai; hquery->name = ares_strdup(name); if (hquery->name == NULL) { hquery_free(hquery, ARES_TRUE); callback(arg, ARES_ENOMEM, 0, NULL); return; } status = ares__search_name_list(channel, name, &hquery->names, &hquery->names_cnt); if (status != ARES_SUCCESS) { hquery_free(hquery, ARES_TRUE); callback(arg, (int)status, 0, NULL); return; } hquery->next_name_idx = 0; hquery->lookups = ares_strdup(channel->lookups); if (hquery->lookups == NULL) { hquery_free(hquery, ARES_TRUE); callback(arg, ARES_ENOMEM, 0, NULL); return; } hquery->remaining_lookups = hquery->lookups; /* Start performing lookups according to channel->lookups. 
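 *
 * From here on the request is driven entirely by callbacks. A hypothetical
 * caller of the public wrapper ares_getaddrinfo() below would look like:
 *
 *   struct ares_addrinfo_hints hints = { ARES_AI_CANONNAME, AF_UNSPEC, 0, 0 };
 *   ares_getaddrinfo(channel, "example.com", "https", &hints, my_cb, my_arg);
 *
 * where my_cb() eventually receives the resulting ares_addrinfo list (or an
 * error status) and is expected to release it with ares_freeaddrinfo().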
*/ next_lookup(hquery, ARES_ECONNREFUSED /* initial error code */); } void ares_getaddrinfo(ares_channel_t *channel, const char *name, const char *service, const struct ares_addrinfo_hints *hints, ares_addrinfo_callback callback, void *arg) { if (channel == NULL) { return; } ares__channel_lock(channel); ares_getaddrinfo_int(channel, name, service, hints, callback, arg); ares__channel_unlock(channel); } static ares_bool_t next_dns_lookup(struct host_query *hquery) { const char *name = NULL; if (hquery->next_name_idx >= hquery->names_cnt) { return ARES_FALSE; } name = hquery->names[hquery->next_name_idx++]; /* NOTE: hquery may be invalidated during the call to ares_query_qid(), * so should not be referenced after this point */ switch (hquery->hints.ai_family) { case AF_INET: hquery->remaining += 1; ares_query_nolock(hquery->channel, name, ARES_CLASS_IN, ARES_REC_TYPE_A, host_callback, hquery, &hquery->qid_a); break; case AF_INET6: hquery->remaining += 1; ares_query_nolock(hquery->channel, name, ARES_CLASS_IN, ARES_REC_TYPE_AAAA, host_callback, hquery, &hquery->qid_aaaa); break; case AF_UNSPEC: hquery->remaining += 2; ares_query_nolock(hquery->channel, name, ARES_CLASS_IN, ARES_REC_TYPE_A, host_callback, hquery, &hquery->qid_a); ares_query_nolock(hquery->channel, name, ARES_CLASS_IN, ARES_REC_TYPE_AAAA, host_callback, hquery, &hquery->qid_aaaa); break; default: break; } return ARES_TRUE; } gevent-24.11.1/deps/c-ares/src/lib/ares_getenv.c000066400000000000000000000026141471441230600212650ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_getenv.h" #ifndef HAVE_GETENV char *ares_getenv(const char *name) { return NULL; } #endif gevent-24.11.1/deps/c-ares/src/lib/ares_getenv.h000066400000000000000000000026531471441230600212750ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_GETENV_H #define HEADER_CARES_GETENV_H #ifndef HAVE_GETENV extern char *ares_getenv(const char *name); #endif #endif /* HEADER_CARES_GETENV_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_gethostbyaddr.c000066400000000000000000000165531471441230600226470ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #include "ares_inet_net_pton.h" #include "ares_platform.h" struct addr_query { /* Arguments passed to ares_gethostbyaddr() */ ares_channel_t *channel; struct ares_addr addr; ares_host_callback callback; void *arg; char *lookups; /* duplicate memory from channel for ares_reinit() */ const char *remaining_lookups; size_t timeouts; }; static void next_lookup(struct addr_query *aquery); static void addr_callback(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec); static void end_aquery(struct addr_query *aquery, ares_status_t status, struct hostent *host); static ares_status_t file_lookup(ares_channel_t *channel, const struct ares_addr *addr, struct hostent **host); void ares_gethostbyaddr_nolock(ares_channel_t *channel, const void *addr, int addrlen, int family, ares_host_callback callback, void *arg) { struct addr_query *aquery; if (family != AF_INET && family != AF_INET6) { callback(arg, ARES_ENOTIMP, 0, NULL); return; } if ((family == AF_INET && addrlen != sizeof(aquery->addr.addr.addr4)) || (family == AF_INET6 && addrlen != sizeof(aquery->addr.addr.addr6))) { callback(arg, ARES_ENOTIMP, 0, NULL); return; } aquery = ares_malloc(sizeof(struct addr_query)); if (!aquery) { callback(arg, ARES_ENOMEM, 0, NULL); return; } aquery->lookups = ares_strdup(channel->lookups); if (aquery->lookups == NULL) { /* LCOV_EXCL_START: OutOfMemory */ ares_free(aquery); callback(arg, ARES_ENOMEM, 0, NULL); return; /* LCOV_EXCL_STOP */ } aquery->channel = channel; if (family == AF_INET) { memcpy(&aquery->addr.addr.addr4, addr, sizeof(aquery->addr.addr.addr4)); } else { memcpy(&aquery->addr.addr.addr6, addr, sizeof(aquery->addr.addr.addr6)); } aquery->addr.family = family; aquery->callback = callback; aquery->arg = arg; aquery->remaining_lookups = aquery->lookups; aquery->timeouts = 0; next_lookup(aquery); } void ares_gethostbyaddr(ares_channel_t *channel, const void *addr, int addrlen, int family, ares_host_callback callback, void *arg) { if (channel == NULL) { return; } ares__channel_lock(channel); ares_gethostbyaddr_nolock(channel, addr, addrlen, family, callback, arg); ares__channel_unlock(channel); } static void next_lookup(struct addr_query *aquery) { const char *p; ares_status_t status; struct hostent *host; char *name; for (p = aquery->remaining_lookups; *p; p++) { switch (*p) { case 'b': name = ares_dns_addr_to_ptr(&aquery->addr); if (name == NULL) { end_aquery(aquery, ARES_ENOMEM, NULL); /* LCOV_EXCL_LINE: OutOfMemory */ return; /* LCOV_EXCL_LINE: OutOfMemory */ } aquery->remaining_lookups = p + 1; ares_query_nolock(aquery->channel, name, ARES_CLASS_IN, ARES_REC_TYPE_PTR, addr_callback, aquery, NULL); ares_free(name); return; case 'f': status = file_lookup(aquery->channel, &aquery->addr, &host); /* this status check below previously checked for !ARES_ENOTFOUND, but we should not assume that this single error code is the one that can occur, as that is in fact no longer the case */ if (status == ARES_SUCCESS) { end_aquery(aquery, status, host); return; } break; default: break; } } end_aquery(aquery, ARES_ENOTFOUND, NULL); } static void addr_callback(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec) { struct addr_query *aquery = (struct addr_query *)arg; struct hostent *host; size_t addrlen; aquery->timeouts += timeouts; if (status == 
ARES_SUCCESS) { if (aquery->addr.family == AF_INET) { addrlen = sizeof(aquery->addr.addr.addr4); status = ares_parse_ptr_reply_dnsrec(dnsrec, &aquery->addr.addr.addr4, (int)addrlen, AF_INET, &host); } else { addrlen = sizeof(aquery->addr.addr.addr6); status = ares_parse_ptr_reply_dnsrec(dnsrec, &aquery->addr.addr.addr6, (int)addrlen, AF_INET6, &host); } end_aquery(aquery, status, host); } else if (status == ARES_EDESTRUCTION || status == ARES_ECANCELLED) { end_aquery(aquery, status, NULL); } else { next_lookup(aquery); } } static void end_aquery(struct addr_query *aquery, ares_status_t status, struct hostent *host) { aquery->callback(aquery->arg, (int)status, (int)aquery->timeouts, host); if (host) { ares_free_hostent(host); } ares_free(aquery->lookups); ares_free(aquery); } static ares_status_t file_lookup(ares_channel_t *channel, const struct ares_addr *addr, struct hostent **host) { char ipaddr[INET6_ADDRSTRLEN]; const void *ptr = NULL; const ares_hosts_entry_t *entry; ares_status_t status; if (addr->family == AF_INET) { ptr = &addr->addr.addr4; } else if (addr->family == AF_INET6) { ptr = &addr->addr.addr6; } if (ptr == NULL) { return ARES_ENOTFOUND; } if (!ares_inet_ntop(addr->family, ptr, ipaddr, sizeof(ipaddr))) { return ARES_ENOTFOUND; } status = ares__hosts_search_ipaddr(channel, ARES_FALSE, ipaddr, &entry); if (status != ARES_SUCCESS) { return status; } status = ares__hosts_entry_to_hostent(entry, addr->family, host); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares_gethostbyname.c000066400000000000000000000246221471441230600226510ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998, 2011, 2013 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #ifdef HAVE_STRINGS_H # include #endif #include "ares_inet_net_pton.h" #include "ares_platform.h" static void sort_addresses(const struct hostent *host, const struct apattern *sortlist, size_t nsort); static void sort6_addresses(const struct hostent *host, const struct apattern *sortlist, size_t nsort); static size_t get_address_index(const struct in_addr *addr, const struct apattern *sortlist, size_t nsort); static size_t get6_address_index(const struct ares_in6_addr *addr, const struct apattern *sortlist, size_t nsort); struct host_query { ares_host_callback callback; void *arg; ares_channel_t *channel; }; static void ares_gethostbyname_callback(void *arg, int status, int timeouts, struct ares_addrinfo *result) { struct hostent *hostent = NULL; struct host_query *ghbn_arg = arg; if (status == ARES_SUCCESS) { status = (int)ares__addrinfo2hostent(result, AF_UNSPEC, &hostent); } /* addrinfo2hostent will only return ENODATA if there are no addresses _and_ * no cname/aliases. However, gethostbyname will return ENODATA even if there * is cname/alias data */ if (status == ARES_SUCCESS && hostent && (!hostent->h_addr_list || !hostent->h_addr_list[0])) { status = ARES_ENODATA; } if (status == ARES_SUCCESS && ghbn_arg->channel->nsort && hostent) { if (hostent->h_addrtype == AF_INET6) { sort6_addresses(hostent, ghbn_arg->channel->sortlist, ghbn_arg->channel->nsort); } if (hostent->h_addrtype == AF_INET) { sort_addresses(hostent, ghbn_arg->channel->sortlist, ghbn_arg->channel->nsort); } } ghbn_arg->callback(ghbn_arg->arg, status, timeouts, hostent); ares_freeaddrinfo(result); ares_free(ghbn_arg); ares_free_hostent(hostent); } void ares_gethostbyname(ares_channel_t *channel, const char *name, int family, ares_host_callback callback, void *arg) { struct ares_addrinfo_hints hints; struct host_query *ghbn_arg; if (!callback) { return; } memset(&hints, 0, sizeof(hints)); hints.ai_flags = ARES_AI_CANONNAME; hints.ai_family = family; ghbn_arg = ares_malloc(sizeof(*ghbn_arg)); if (!ghbn_arg) { callback(arg, ARES_ENOMEM, 0, NULL); return; } ghbn_arg->callback = callback; ghbn_arg->arg = arg; ghbn_arg->channel = channel; /* NOTE: ares_getaddrinfo() locks the channel, we don't use the channel * outside so no need to lock */ ares_getaddrinfo(channel, name, NULL, &hints, ares_gethostbyname_callback, ghbn_arg); } static void sort_addresses(const struct hostent *host, const struct apattern *sortlist, size_t nsort) { struct in_addr a1; struct in_addr a2; int i1; int i2; size_t ind1; size_t ind2; /* This is a simple insertion sort, not optimized at all. i1 walks * through the address list, with the loop invariant that everything * to the left of i1 is sorted. In the loop body, the value at i1 is moved * back through the list (via i2) until it is in sorted order. */ for (i1 = 0; host->h_addr_list[i1]; i1++) { memcpy(&a1, host->h_addr_list[i1], sizeof(struct in_addr)); ind1 = get_address_index(&a1, sortlist, nsort); for (i2 = i1 - 1; i2 >= 0; i2--) { memcpy(&a2, host->h_addr_list[i2], sizeof(struct in_addr)); ind2 = get_address_index(&a2, sortlist, nsort); if (ind2 <= ind1) { break; } memcpy(host->h_addr_list[i2 + 1], &a2, sizeof(struct in_addr)); } memcpy(host->h_addr_list[i2 + 1], &a1, sizeof(struct in_addr)); } } /* Find the first entry in sortlist which matches addr. Return nsort * if none of them match. 
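 *
 * sort_addresses() above uses this index as its sort key: addresses that
 * match an earlier sortlist entry are placed first, and addresses matching
 * no entry (index == nsort) end up last.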
*/ static size_t get_address_index(const struct in_addr *addr, const struct apattern *sortlist, size_t nsort) { size_t i; struct ares_addr aaddr; memset(&aaddr, 0, sizeof(aaddr)); aaddr.family = AF_INET; memcpy(&aaddr.addr.addr4, addr, 4); for (i = 0; i < nsort; i++) { if (sortlist[i].addr.family != AF_INET) { continue; } if (ares__subnet_match(&aaddr, &sortlist[i].addr, sortlist[i].mask)) { break; } } return i; } static void sort6_addresses(const struct hostent *host, const struct apattern *sortlist, size_t nsort) { struct ares_in6_addr a1; struct ares_in6_addr a2; int i1; int i2; size_t ind1; size_t ind2; /* This is a simple insertion sort, not optimized at all. i1 walks * through the address list, with the loop invariant that everything * to the left of i1 is sorted. In the loop body, the value at i1 is moved * back through the list (via i2) until it is in sorted order. */ for (i1 = 0; host->h_addr_list[i1]; i1++) { memcpy(&a1, host->h_addr_list[i1], sizeof(struct ares_in6_addr)); ind1 = get6_address_index(&a1, sortlist, nsort); for (i2 = i1 - 1; i2 >= 0; i2--) { memcpy(&a2, host->h_addr_list[i2], sizeof(struct ares_in6_addr)); ind2 = get6_address_index(&a2, sortlist, nsort); if (ind2 <= ind1) { break; } memcpy(host->h_addr_list[i2 + 1], &a2, sizeof(struct ares_in6_addr)); } memcpy(host->h_addr_list[i2 + 1], &a1, sizeof(struct ares_in6_addr)); } } /* Find the first entry in sortlist which matches addr. Return nsort * if none of them match. */ static size_t get6_address_index(const struct ares_in6_addr *addr, const struct apattern *sortlist, size_t nsort) { size_t i; struct ares_addr aaddr; memset(&aaddr, 0, sizeof(aaddr)); aaddr.family = AF_INET6; memcpy(&aaddr.addr.addr6, addr, 16); for (i = 0; i < nsort; i++) { if (sortlist[i].addr.family != AF_INET6) { continue; } if (ares__subnet_match(&aaddr, &sortlist[i].addr, sortlist[i].mask)) { break; } } return i; } static ares_status_t ares__hostent_localhost(const char *name, int family, struct hostent **host_out) { ares_status_t status; struct ares_addrinfo *ai = NULL; struct ares_addrinfo_hints hints; memset(&hints, 0, sizeof(hints)); hints.ai_family = family; ai = ares_malloc_zero(sizeof(*ai)); if (ai == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__addrinfo_localhost(name, 0, &hints, ai); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__addrinfo2hostent(ai, family, host_out); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } done: ares_freeaddrinfo(ai); return status; } /* I really have no idea why this is exposed as a public function, but since * it is, we can't kill this legacy function. */ static ares_status_t ares_gethostbyname_file_int(ares_channel_t *channel, const char *name, int family, struct hostent **host) { const ares_hosts_entry_t *entry; ares_status_t status; /* We only take the channel to ensure that ares_init() been called. */ if (channel == NULL || name == NULL || host == NULL) { /* Anything will do, really. This seems fine, and is consistent with other error cases. */ if (host != NULL) { *host = NULL; } return ARES_ENOTFOUND; } /* Per RFC 7686, reject queries for ".onion" domain names with NXDOMAIN. 
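 *
 * (ARES_ENOTFOUND is how that NXDOMAIN-style rejection is reported through
 * this API.)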
*/ if (ares__is_onion_domain(name)) { return ARES_ENOTFOUND; } status = ares__hosts_search_host(channel, ARES_FALSE, name, &entry); if (status != ARES_SUCCESS) { goto done; } status = ares__hosts_entry_to_hostent(entry, family, host); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } done: /* RFC6761 section 6.3 #3 states that "Name resolution APIs and libraries * SHOULD recognize localhost names as special and SHOULD always return the * IP loopback address for address queries". * We will also ignore ALL errors when trying to resolve localhost, such * as permissions errors reading /etc/hosts or a malformed /etc/hosts */ if (status != ARES_SUCCESS && status != ARES_ENOMEM && ares__is_localhost(name)) { return ares__hostent_localhost(name, family, host); } return status; } int ares_gethostbyname_file(ares_channel_t *channel, const char *name, int family, struct hostent **host) { ares_status_t status; if (channel == NULL) { return ARES_ENOTFOUND; } ares__channel_lock(channel); status = ares_gethostbyname_file_int(channel, name, family, host); ares__channel_unlock(channel); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/ares_getnameinfo.c000066400000000000000000000332171471441230600222740ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2005, 2013 Dominick Meglio * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_GETSERVBYPORT_R # if !defined(GETSERVBYPORT_R_ARGS) || (GETSERVBYPORT_R_ARGS < 4) || \ (GETSERVBYPORT_R_ARGS > 6) # error "you MUST specify a valid number of arguments for getservbyport_r" # endif #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #ifdef HAVE_NET_IF_H # include #endif #if defined(USE_WINSOCK) && defined(HAVE_IPHLPAPI_H) # include #endif #include "ares_ipv6.h" struct nameinfo_query { ares_nameinfo_callback callback; void *arg; union { struct sockaddr_in addr4; struct sockaddr_in6 addr6; } addr; int family; unsigned int flags; size_t timeouts; }; #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID # define IPBUFSIZ \ (sizeof("ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255") + IF_NAMESIZE) #else # define IPBUFSIZ (sizeof("ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255")) #endif static void nameinfo_callback(void *arg, int status, int timeouts, struct hostent *host); static char *lookup_service(unsigned short port, unsigned int flags, char *buf, size_t buflen); #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID static void append_scopeid(const struct sockaddr_in6 *addr6, unsigned int flags, char *buf, size_t buflen); #endif static char *ares_striendstr(const char *s1, const char *s2); static void ares_getnameinfo_int(ares_channel_t *channel, const struct sockaddr *sa, ares_socklen_t salen, int flags_int, ares_nameinfo_callback callback, void *arg) { const struct sockaddr_in *addr = NULL; const struct sockaddr_in6 *addr6 = NULL; struct nameinfo_query *niquery; unsigned short port = 0; unsigned int flags = (unsigned int)flags_int; /* Validate socket address family and length */ if (sa && sa->sa_family == AF_INET && salen >= (ares_socklen_t)sizeof(struct sockaddr_in)) { addr = CARES_INADDR_CAST(const struct sockaddr_in *, sa); port = addr->sin_port; } else if (sa && sa->sa_family == AF_INET6 && salen >= (ares_socklen_t)sizeof(struct sockaddr_in6)) { addr6 = CARES_INADDR_CAST(const struct sockaddr_in6 *, sa); port = addr6->sin6_port; } else { callback(arg, ARES_ENOTIMP, 0, NULL, NULL); return; } /* If neither, assume they want a host */ if (!(flags & ARES_NI_LOOKUPSERVICE) && !(flags & ARES_NI_LOOKUPHOST)) { flags |= ARES_NI_LOOKUPHOST; } /* All they want is a service, no need for DNS */ if ((flags & ARES_NI_LOOKUPSERVICE) && !(flags & ARES_NI_LOOKUPHOST)) { char buf[33]; char *service; service = lookup_service((unsigned short)(port & 0xffff), flags, buf, sizeof(buf)); callback(arg, ARES_SUCCESS, 0, NULL, service); return; } /* They want a host lookup */ if (flags & ARES_NI_LOOKUPHOST) { /* A numeric host can be handled without DNS */ if (flags & ARES_NI_NUMERICHOST) { char ipbuf[IPBUFSIZ]; char srvbuf[33]; char *service = NULL; ipbuf[0] = 0; /* Specifying not to lookup a host, but then saying a host * is required has to be illegal. 
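 *
 * Illustrative flag combinations (hypothetical caller values, not part of
 * the original source), assuming sa holds 192.0.2.1 port 80:
 *
 *   ARES_NI_NUMERICHOST | ARES_NI_NAMEREQD
 *     -> callback(arg, ARES_EBADFLAGS, 0, NULL, NULL), a contradictory ask
 *
 *   ARES_NI_LOOKUPHOST | ARES_NI_LOOKUPSERVICE | ARES_NI_NUMERICHOST
 *     -> callback(arg, ARES_SUCCESS, 0, "192.0.2.1", service), where service
 *        is whatever lookup_service() maps port 80 to, with no DNS traffic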
*/ if (flags & ARES_NI_NAMEREQD) { callback(arg, ARES_EBADFLAGS, 0, NULL, NULL); return; } if (sa->sa_family == AF_INET6) { ares_inet_ntop(AF_INET6, &addr6->sin6_addr, ipbuf, IPBUFSIZ); /* If the system supports scope IDs, use it */ #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID append_scopeid(addr6, flags, ipbuf, sizeof(ipbuf)); #endif } else { ares_inet_ntop(AF_INET, &addr->sin_addr, ipbuf, IPBUFSIZ); } /* They also want a service */ if (flags & ARES_NI_LOOKUPSERVICE) { service = lookup_service((unsigned short)(port & 0xffff), flags, srvbuf, sizeof(srvbuf)); } callback(arg, ARES_SUCCESS, 0, ipbuf, service); return; } else { /* This is where a DNS lookup becomes necessary */ niquery = ares_malloc(sizeof(struct nameinfo_query)); if (!niquery) { callback(arg, ARES_ENOMEM, 0, NULL, NULL); return; } niquery->callback = callback; niquery->arg = arg; niquery->flags = flags; niquery->timeouts = 0; if (sa->sa_family == AF_INET) { niquery->family = AF_INET; memcpy(&niquery->addr.addr4, addr, sizeof(niquery->addr.addr4)); ares_gethostbyaddr_nolock(channel, &addr->sin_addr, sizeof(struct in_addr), AF_INET, nameinfo_callback, niquery); } else { niquery->family = AF_INET6; memcpy(&niquery->addr.addr6, addr6, sizeof(niquery->addr.addr6)); ares_gethostbyaddr_nolock(channel, &addr6->sin6_addr, sizeof(struct ares_in6_addr), AF_INET6, nameinfo_callback, niquery); } } } } void ares_getnameinfo(ares_channel_t *channel, const struct sockaddr *sa, ares_socklen_t salen, int flags_int, ares_nameinfo_callback callback, void *arg) { if (channel == NULL) { return; } ares__channel_lock(channel); ares_getnameinfo_int(channel, sa, salen, flags_int, callback, arg); ares__channel_unlock(channel); } static void nameinfo_callback(void *arg, int status, int timeouts, struct hostent *host) { struct nameinfo_query *niquery = (struct nameinfo_query *)arg; char srvbuf[33]; char *service = NULL; niquery->timeouts += (size_t)timeouts; if (status == ARES_SUCCESS) { /* They want a service too */ if (niquery->flags & ARES_NI_LOOKUPSERVICE) { if (niquery->family == AF_INET) { service = lookup_service(niquery->addr.addr4.sin_port, niquery->flags, srvbuf, sizeof(srvbuf)); } else { service = lookup_service(niquery->addr.addr6.sin6_port, niquery->flags, srvbuf, sizeof(srvbuf)); } } /* NOFQDN means we have to strip off the domain name portion. We do this by determining our own domain name, then searching the string for this domain name and removing it. 
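 * Illustrative example (hypothetical names, not part of the original
 * source): if gethostname() reports "client1.example.com", the local domain
 * is ".example.com", so a PTR result of "printer.example.com" is trimmed to
 * "printer" before being passed to the callback.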
*/ #ifdef HAVE_GETHOSTNAME if (niquery->flags & ARES_NI_NOFQDN) { char buf[255]; const char *domain; gethostname(buf, 255); if ((domain = strchr(buf, '.')) != NULL) { char *end = ares_striendstr(host->h_name, domain); if (end) { *end = 0; } } } #endif niquery->callback(niquery->arg, ARES_SUCCESS, (int)niquery->timeouts, host->h_name, service); ares_free(niquery); return; } /* We couldn't find the host, but it's OK, we can use the IP */ else if (status == ARES_ENOTFOUND && !(niquery->flags & ARES_NI_NAMEREQD)) { char ipbuf[IPBUFSIZ]; if (niquery->family == AF_INET) { ares_inet_ntop(AF_INET, &niquery->addr.addr4.sin_addr, ipbuf, IPBUFSIZ); } else { ares_inet_ntop(AF_INET6, &niquery->addr.addr6.sin6_addr, ipbuf, IPBUFSIZ); #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID append_scopeid(&niquery->addr.addr6, niquery->flags, ipbuf, sizeof(ipbuf)); #endif } /* They want a service too */ if (niquery->flags & ARES_NI_LOOKUPSERVICE) { if (niquery->family == AF_INET) { service = lookup_service(niquery->addr.addr4.sin_port, niquery->flags, srvbuf, sizeof(srvbuf)); } else { service = lookup_service(niquery->addr.addr6.sin6_port, niquery->flags, srvbuf, sizeof(srvbuf)); } } niquery->callback(niquery->arg, ARES_SUCCESS, (int)niquery->timeouts, ipbuf, service); ares_free(niquery); return; } niquery->callback(niquery->arg, status, (int)niquery->timeouts, NULL, NULL); ares_free(niquery); } static char *lookup_service(unsigned short port, unsigned int flags, char *buf, size_t buflen) { const char *proto; struct servent *sep; #ifdef HAVE_GETSERVBYPORT_R struct servent se; #endif char tmpbuf[4096]; const char *name; size_t name_len; if (port) { if (flags & ARES_NI_NUMERICSERV) { sep = NULL; } else { if (flags & ARES_NI_UDP) { proto = "udp"; } else if (flags & ARES_NI_SCTP) { proto = "sctp"; } else if (flags & ARES_NI_DCCP) { proto = "dccp"; } else { proto = "tcp"; } #ifdef HAVE_GETSERVBYPORT_R memset(&se, 0, sizeof(se)); sep = &se; memset(tmpbuf, 0, sizeof(tmpbuf)); # if GETSERVBYPORT_R_ARGS == 6 if (getservbyport_r(port, proto, &se, (void *)tmpbuf, sizeof(tmpbuf), &sep) != 0) { sep = NULL; /* LCOV_EXCL_LINE: buffer large so this never fails */ } # elif GETSERVBYPORT_R_ARGS == 5 sep = getservbyport_r(port, proto, &se, (void *)tmpbuf, sizeof(tmpbuf)); # elif GETSERVBYPORT_R_ARGS == 4 if (getservbyport_r(port, proto, &se, (void *)tmpbuf) != 0) { sep = NULL; } # else /* Lets just hope the OS uses TLS! */ sep = getservbyport(port, proto); # endif #else /* Lets just hope the OS uses TLS! 
*/ # if (defined(NETWARE) && !defined(__NOVELL_LIBC__)) sep = getservbyport(port, (char *)proto); # else sep = getservbyport(port, proto); # endif #endif } if (sep && sep->s_name) { /* get service name */ name = sep->s_name; } else { /* get port as a string */ snprintf(tmpbuf, sizeof(tmpbuf), "%u", (unsigned int)ntohs(port)); name = tmpbuf; } name_len = ares_strlen(name); if (name_len < buflen) { /* return it if buffer big enough */ memcpy(buf, name, name_len + 1); } else { /* avoid reusing previous one */ buf[0] = '\0'; /* LCOV_EXCL_LINE: no real service names are too big */ } return buf; } buf[0] = '\0'; return NULL; } #ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID static void append_scopeid(const struct sockaddr_in6 *addr6, unsigned int flags, char *buf, size_t buflen) { # ifdef HAVE_IF_INDEXTONAME int is_ll; int is_mcll; # endif char tmpbuf[IF_NAMESIZE + 2]; size_t bufl; tmpbuf[0] = '%'; # ifdef HAVE_IF_INDEXTONAME is_ll = IN6_IS_ADDR_LINKLOCAL(&addr6->sin6_addr); is_mcll = IN6_IS_ADDR_MC_LINKLOCAL(&addr6->sin6_addr); if ((flags & ARES_NI_NUMERICSCOPE) || (!is_ll && !is_mcll)) { snprintf(&tmpbuf[1], sizeof(tmpbuf) - 1, "%lu", (unsigned long)addr6->sin6_scope_id); } else { if (if_indextoname(addr6->sin6_scope_id, &tmpbuf[1]) == NULL) { snprintf(&tmpbuf[1], sizeof(tmpbuf) - 1, "%lu", (unsigned long)addr6->sin6_scope_id); } } # else snprintf(&tmpbuf[1], sizeof(tmpbuf) - 1, "%lu", (unsigned long)addr6->sin6_scope_id); (void)flags; # endif tmpbuf[IF_NAMESIZE + 1] = '\0'; bufl = ares_strlen(buf); if (bufl + ares_strlen(tmpbuf) < buflen) { /* only append the scopeid string if it fits in the target buffer */ ares_strcpy(&buf[bufl], tmpbuf, buflen - bufl); } } #endif /* Determines if s1 ends with the string in s2 (case-insensitive) */ static char *ares_striendstr(const char *s1, const char *s2) { const char *c1; const char *c2; const char *c1_begin; int lo1; int lo2; size_t s1_len = ares_strlen(s1); size_t s2_len = ares_strlen(s2); if (s1 == NULL || s2 == NULL) { return NULL; } /* If the substr is longer than the full str, it can't match */ if (s2_len > s1_len) { return NULL; } /* Jump to the end of s1 minus the length of s2 */ c1_begin = s1 + s1_len - s2_len; c1 = c1_begin; c2 = s2; while (c2 < s2 + s2_len) { lo1 = ares__tolower((unsigned char)*c1); lo2 = ares__tolower((unsigned char)*c2); if (lo1 != lo2) { return NULL; } else { c1++; c2++; } } /* Cast off const */ return (char *)((size_t)c1_begin); } ares_bool_t ares__is_onion_domain(const char *name) { if (ares_striendstr(name, ".onion")) { return ARES_TRUE; } if (ares_striendstr(name, ".onion.")) { return ARES_TRUE; } return ARES_FALSE; } gevent-24.11.1/deps/c-ares/src/lib/ares_inet_net_pton.h000066400000000000000000000027351471441230600226530ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2005 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_INET_NET_PTON_H #define HEADER_CARES_INET_NET_PTON_H #ifdef HAVE_INET_NET_PTON # define ares_inet_net_pton(w, x, y, z) inet_net_pton(w, x, y, z) #else int ares_inet_net_pton(int af, const char *src, void *dst, size_t size); #endif #endif /* HEADER_CARES_INET_NET_PTON_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_init.c000066400000000000000000000414271471441230600207450ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2007 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #if defined(ANDROID) || defined(__ANDROID__) # include # include "ares_android.h" /* From the Bionic sources */ # define DNS_PROP_NAME_PREFIX "net.dns" # define MAX_DNS_PROPERTIES 8 #endif #if defined(CARES_USE_LIBRESOLV) # include #endif #if defined(USE_WINSOCK) && defined(HAVE_IPHLPAPI_H) # include #endif #include "ares_inet_net_pton.h" #include "ares_platform.h" #include "event/ares_event.h" int ares_init(ares_channel_t **channelptr) { return ares_init_options(channelptr, NULL, 0); } static int ares_query_timeout_cmp_cb(const void *arg1, const void *arg2) { const ares_query_t *q1 = arg1; const ares_query_t *q2 = arg2; if (q1->timeout.sec > q2->timeout.sec) { return 1; } if (q1->timeout.sec < q2->timeout.sec) { return -1; } if (q1->timeout.usec > q2->timeout.usec) { return 1; } if (q1->timeout.usec < q2->timeout.usec) { return -1; } return 0; } static int server_sort_cb(const void *data1, const void *data2) { const ares_server_t *s1 = data1; const ares_server_t *s2 = data2; if (s1->consec_failures < s2->consec_failures) { return -1; } if (s1->consec_failures > s2->consec_failures) { return 1; } if (s1->idx < s2->idx) { return -1; } if (s1->idx > s2->idx) { return 1; } return 0; } static void server_destroy_cb(void *data) { if (data == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__destroy_server(data); } static ares_status_t init_by_defaults(ares_channel_t *channel) { char *hostname = NULL; ares_status_t rc = ARES_SUCCESS; #ifdef HAVE_GETHOSTNAME const char *dot; #endif struct ares_addr addr; ares__llist_t *sconfig = NULL; /* Enable EDNS by default */ if (!(channel->optmask & ARES_OPT_FLAGS)) { channel->flags = ARES_FLAG_EDNS; } if (channel->ednspsz == 0) { channel->ednspsz = EDNSPACKETSZ; } if (channel->timeout == 0) { channel->timeout = DEFAULT_TIMEOUT; } if (channel->tries == 0) { channel->tries = DEFAULT_TRIES; } if (ares__slist_len(channel->servers) == 0) { /* Add a default local named server to the channel unless configured not * to (in which case return an error). */ if (channel->flags & ARES_FLAG_NO_DFLT_SVR) { rc = ARES_ENOSERVER; goto error; } addr.family = AF_INET; addr.addr.addr4.s_addr = htonl(INADDR_LOOPBACK); rc = ares__sconfig_append(&sconfig, &addr, 0, 0, NULL); if (rc != ARES_SUCCESS) { goto error; /* LCOV_EXCL_LINE: OutOfMemory */ } rc = ares__servers_update(channel, sconfig, ARES_FALSE); ares__llist_destroy(sconfig); if (rc != ARES_SUCCESS) { goto error; } } #if defined(USE_WINSOCK) # define toolong(x) (x == -1) && (SOCKERRNO == WSAEFAULT) #elif defined(ENAMETOOLONG) # define toolong(x) \ (x == -1) && ((SOCKERRNO == ENAMETOOLONG) || (SOCKERRNO == EINVAL)) #else # define toolong(x) (x == -1) && (SOCKERRNO == EINVAL) #endif if (channel->ndomains == 0) { /* Derive a default domain search list from the kernel hostname, * or set it to empty if the hostname isn't helpful. 
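 * Illustrative example (hypothetical hostnames, not part of the original
 * source): a kernel hostname of "build1.example.com" produces a single
 * default search domain of "example.com", while a hostname without a dot,
 * such as "build1", leaves ndomains at 0 and no search list is derived.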
*/ #ifndef HAVE_GETHOSTNAME channel->ndomains = 0; /* default to none */ #else GETHOSTNAME_TYPE_ARG2 lenv = 64; size_t len = 64; int res; channel->ndomains = 0; /* default to none */ hostname = ares_malloc(len); if (!hostname) { rc = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto error; /* LCOV_EXCL_LINE: OutOfMemory */ } do { res = gethostname(hostname, lenv); if (toolong(res)) { char *p; len *= 2; lenv *= 2; p = ares_realloc(hostname, len); if (!p) { rc = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto error; /* LCOV_EXCL_LINE: OutOfMemory */ } hostname = p; continue; } else if (res) { /* Lets not treat a gethostname failure as critical, since we * are ok if gethostname doesn't even exist */ *hostname = '\0'; break; } } while (res != 0); dot = strchr(hostname, '.'); if (dot) { /* a dot was found */ channel->domains = ares_malloc(sizeof(char *)); if (!channel->domains) { rc = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto error; /* LCOV_EXCL_LINE: OutOfMemory */ } channel->domains[0] = ares_strdup(dot + 1); if (!channel->domains[0]) { rc = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto error; /* LCOV_EXCL_LINE: OutOfMemory */ } channel->ndomains = 1; } #endif } if (channel->nsort == 0) { channel->sortlist = NULL; } if (!channel->lookups) { channel->lookups = ares_strdup("fb"); if (!channel->lookups) { rc = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* Set default fields for server failover behavior */ if (!(channel->optmask & ARES_OPT_SERVER_FAILOVER)) { channel->server_retry_chance = DEFAULT_SERVER_RETRY_CHANCE; channel->server_retry_delay = DEFAULT_SERVER_RETRY_DELAY; } error: if (hostname) { ares_free(hostname); } return rc; } int ares_init_options(ares_channel_t **channelptr, const struct ares_options *options, int optmask) { ares_channel_t *channel; ares_status_t status = ARES_SUCCESS; if (ares_library_initialized() != ARES_SUCCESS) { return ARES_ENOTINITIALIZED; /* LCOV_EXCL_LINE: n/a on non-WinSock */ } channel = ares_malloc_zero(sizeof(*channel)); if (!channel) { *channelptr = NULL; return ARES_ENOMEM; } /* We are in a good state */ channel->sys_up = ARES_TRUE; /* One option where zero is valid, so set default value here */ channel->ndots = 1; status = ares__channel_threading_init(channel); if (status != ARES_SUCCESS) { goto done; } /* Generate random key */ channel->rand_state = ares__init_rand_state(); if (channel->rand_state == NULL) { status = ARES_ENOMEM; DEBUGF(fprintf(stderr, "Error: init_id_key failed: %s\n", ares_strerror(status))); goto done; } /* Initialize Server List */ channel->servers = ares__slist_create(channel->rand_state, server_sort_cb, server_destroy_cb); if (channel->servers == NULL) { status = ARES_ENOMEM; goto done; } /* Initialize our lists of queries */ channel->all_queries = ares__llist_create(NULL); if (channel->all_queries == NULL) { status = ARES_ENOMEM; goto done; } channel->queries_by_qid = ares__htable_szvp_create(NULL); if (channel->queries_by_qid == NULL) { status = ARES_ENOMEM; goto done; } channel->queries_by_timeout = ares__slist_create(channel->rand_state, ares_query_timeout_cmp_cb, NULL); if (channel->queries_by_timeout == NULL) { status = ARES_ENOMEM; goto done; } channel->connnode_by_socket = ares__htable_asvp_create(NULL); if (channel->connnode_by_socket == NULL) { status = ARES_ENOMEM; goto done; } /* Initialize configuration by each of the four sources, from highest * precedence to lowest. 
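 * For example, a timeout supplied by the caller through ARES_OPT_TIMEOUTMS
 * is applied first and, per the precedence noted here, is not replaced by
 * the timeout found in the system configuration; init_by_defaults() at the
 * end only fills in whatever is still unset after the earlier passes.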
*/ status = ares__init_by_options(channel, options, optmask); if (status != ARES_SUCCESS) { DEBUGF(fprintf(stderr, "Error: init_by_options failed: %s\n", ares_strerror(status))); /* If we fail to apply user-specified options, fail the whole init process */ goto done; } /* Go ahead and let it initialize the query cache even if the ttl is 0 and * completely unused. This reduces the number of different code paths that * might be followed even if there is a minor performance hit. */ status = ares__qcache_create(channel->rand_state, channel->qcache_max_ttl, &channel->qcache); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } if (status == ARES_SUCCESS) { status = ares__init_by_sysconfig(channel); if (status != ARES_SUCCESS) { DEBUGF(fprintf(stderr, "Error: init_by_sysconfig failed: %s\n", ares_strerror(status))); } } /* * No matter what failed or succeeded, seed defaults to provide * useful behavior for things that we missed. */ status = init_by_defaults(channel); if (status != ARES_SUCCESS) { DEBUGF(fprintf(stderr, "Error: init_by_defaults failed: %s\n", ares_strerror(status))); goto done; } /* Initialize the event thread */ if (channel->optmask & ARES_OPT_EVENT_THREAD) { ares_event_thread_t *e = NULL; status = ares_event_thread_init(channel); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: UntestablePath */ } /* Initialize monitor for configuration changes. In some rare cases, * ARES_ENOTIMP may occur (OpenWatcom), ignore this. */ e = channel->sock_state_cb_data; status = ares_event_configchg_init(&e->configchg, e); if (status != ARES_SUCCESS && status != ARES_ENOTIMP) { goto done; /* LCOV_EXCL_LINE: UntestablePath */ } status = ARES_SUCCESS; } done: if (status != ARES_SUCCESS) { ares_destroy(channel); return (int)status; } *channelptr = channel; return ARES_SUCCESS; } static void *ares_reinit_thread(void *arg) { ares_channel_t *channel = arg; ares_status_t status; /* ares__init_by_sysconfig() will lock when applying the config, but not * when retrieving. */ status = ares__init_by_sysconfig(channel); if (status != ARES_SUCCESS) { DEBUGF(fprintf(stderr, "Error: init_by_sysconfig failed: %s\n", ares_strerror(status))); } ares__channel_lock(channel); /* Flush cached queries on reinit */ if (status == ARES_SUCCESS && channel->qcache) { ares__qcache_flush(channel->qcache); } channel->reinit_pending = ARES_FALSE; ares__channel_unlock(channel); return NULL; } ares_status_t ares_reinit(ares_channel_t *channel) { ares_status_t status = ARES_SUCCESS; if (channel == NULL) { return ARES_EFORMERR; } ares__channel_lock(channel); /* If a reinit is already in process, lets not do it again. Or if we are * shutting down, skip. */ if (!channel->sys_up || channel->reinit_pending) { ares__channel_unlock(channel); return ARES_SUCCESS; } channel->reinit_pending = ARES_TRUE; ares__channel_unlock(channel); if (ares_threadsafety()) { /* clean up the prior reinit process's thread. 
We know the thread isn't * running since reinit_pending was false */ if (channel->reinit_thread != NULL) { void *rv; ares__thread_join(channel->reinit_thread, &rv); channel->reinit_thread = NULL; } /* Spawn a new thread */ status = ares__thread_create(&channel->reinit_thread, ares_reinit_thread, channel); if (status != ARES_SUCCESS) { /* LCOV_EXCL_START: UntestablePath */ ares__channel_lock(channel); channel->reinit_pending = ARES_FALSE; ares__channel_unlock(channel); /* LCOV_EXCL_STOP */ } } else { /* Threading support not available, call directly */ ares_reinit_thread(channel); } return status; } /* ares_dup() duplicates a channel handle with all its options and returns a new channel handle */ int ares_dup(ares_channel_t **dest, const ares_channel_t *src) { struct ares_options opts; ares_status_t rc; int optmask; if (dest == NULL || src == NULL) { return ARES_EFORMERR; } *dest = NULL; /* in case of failure return NULL explicitly */ /* First get the options supported by the old ares_save_options() function, which is most of them */ rc = (ares_status_t)ares_save_options(src, &opts, &optmask); if (rc != ARES_SUCCESS) { ares_destroy_options(&opts); goto done; } /* Then create the new channel with those options */ rc = (ares_status_t)ares_init_options(dest, &opts, optmask); /* destroy the options copy to not leak any memory */ ares_destroy_options(&opts); if (rc != ARES_SUCCESS) { goto done; } ares__channel_lock(src); /* Now clone the options that ares_save_options() doesn't support, but are * user-provided */ (*dest)->sock_create_cb = src->sock_create_cb; (*dest)->sock_create_cb_data = src->sock_create_cb_data; (*dest)->sock_config_cb = src->sock_config_cb; (*dest)->sock_config_cb_data = src->sock_config_cb_data; (*dest)->sock_funcs = src->sock_funcs; (*dest)->sock_func_cb_data = src->sock_func_cb_data; (*dest)->server_state_cb = src->server_state_cb; (*dest)->server_state_cb_data = src->server_state_cb_data; ares_strcpy((*dest)->local_dev_name, src->local_dev_name, sizeof((*dest)->local_dev_name)); (*dest)->local_ip4 = src->local_ip4; memcpy((*dest)->local_ip6, src->local_ip6, sizeof(src->local_ip6)); ares__channel_unlock(src); /* Servers are a bit unique as ares_init_options() only allows ipv4 servers * and not a port per server, but there are other user specified ways, that * too will toggle the optmask ARES_OPT_SERVERS to let us know. If that's * the case, pull them in. * * We don't want to clone system-configuration servers though. 
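 * For example, servers installed explicitly through ARES_OPT_SERVERS or one
 * of the ares_set_servers_*() calls are copied into the duplicate, while
 * servers that only came from the system configuration (resolv.conf, the
 * registry, etc.) are skipped; the new channel rediscovers those during its
 * own configuration pass.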
* * We must use the "csv" format to get things like link-local address support */ if (optmask & ARES_OPT_SERVERS) { char *csv = ares_get_servers_csv(src); if (csv == NULL) { /* LCOV_EXCL_START: OutOfMemory */ ares_destroy(*dest); *dest = NULL; rc = ARES_ENOMEM; goto done; /* LCOV_EXCL_STOP */ } rc = (ares_status_t)ares_set_servers_ports_csv(*dest, csv); ares_free_string(csv); if (rc != ARES_SUCCESS) { /* LCOV_EXCL_START: OutOfMemory */ ares_destroy(*dest); *dest = NULL; goto done; /* LCOV_EXCL_STOP */ } } rc = ARES_SUCCESS; done: return (int)rc; /* everything went fine */ } void ares_set_local_ip4(ares_channel_t *channel, unsigned int local_ip) { if (channel == NULL) { return; } ares__channel_lock(channel); channel->local_ip4 = local_ip; ares__channel_unlock(channel); } /* local_ip6 should be 16 bytes in length */ void ares_set_local_ip6(ares_channel_t *channel, const unsigned char *local_ip6) { if (channel == NULL) { return; } ares__channel_lock(channel); memcpy(&channel->local_ip6, local_ip6, sizeof(channel->local_ip6)); ares__channel_unlock(channel); } /* local_dev_name should be null terminated. */ void ares_set_local_dev(ares_channel_t *channel, const char *local_dev_name) { if (channel == NULL) { return; } ares__channel_lock(channel); ares_strcpy(channel->local_dev_name, local_dev_name, sizeof(channel->local_dev_name)); channel->local_dev_name[sizeof(channel->local_dev_name) - 1] = 0; ares__channel_unlock(channel); } int ares_set_sortlist(ares_channel_t *channel, const char *sortstr) { size_t nsort = 0; struct apattern *sortlist = NULL; ares_status_t status; if (!channel) { return ARES_ENODATA; } ares__channel_lock(channel); status = ares__parse_sortlist(&sortlist, &nsort, sortstr); if (status == ARES_SUCCESS && sortlist) { if (channel->sortlist) { ares_free(channel->sortlist); } channel->sortlist = sortlist; channel->nsort = nsort; /* Save sortlist as if it was passed in as an option */ channel->optmask |= ARES_OPT_SORTLIST; } ares__channel_unlock(channel); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/ares_ipv6.h000066400000000000000000000054351471441230600206720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2005 Dominick Meglio * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #ifndef ARES_IPV6_H #define ARES_IPV6_H #ifdef HAVE_NETINET6_IN6_H # include #endif #if defined(USE_WINSOCK) # if defined(HAVE_IPHLPAPI_H) # include # endif # if defined(HAVE_NETIOAPI_H) # include # endif #endif #ifndef HAVE_PF_INET6 # define PF_INET6 AF_INET6 #endif #ifndef HAVE_STRUCT_SOCKADDR_IN6 struct sockaddr_in6 { unsigned short sin6_family; unsigned short sin6_port; unsigned long sin6_flowinfo; struct ares_in6_addr sin6_addr; unsigned int sin6_scope_id; }; #endif typedef union { struct sockaddr sa; struct sockaddr_in sa4; struct sockaddr_in6 sa6; } ares_sockaddr; #ifndef HAVE_STRUCT_ADDRINFO struct addrinfo { int ai_flags; int ai_family; int ai_socktype; int ai_protocol; ares_socklen_t ai_addrlen; /* Follow rfc3493 struct addrinfo */ char *ai_canonname; struct sockaddr *ai_addr; struct addrinfo *ai_next; }; #endif #ifndef NS_IN6ADDRSZ # ifndef HAVE_STRUCT_IN6_ADDR /* We cannot have it set to zero, so we pick a fixed value here */ # define NS_IN6ADDRSZ 16 # else # define NS_IN6ADDRSZ sizeof(struct in6_addr) # endif #endif #ifndef NS_INADDRSZ # define NS_INADDRSZ sizeof(struct in_addr) #endif #ifndef NS_INT16SZ # define NS_INT16SZ 2 #endif #ifndef IF_NAMESIZE # ifdef IFNAMSIZ # define IF_NAMESIZE IFNAMSIZ # else # define IF_NAMESIZE 256 # endif #endif /* Defined in inet_net_pton.c for no particular reason. */ extern const struct ares_in6_addr ares_in6addr_any; /* :: */ #endif /* ARES_IPV6_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_library_init.c000066400000000000000000000077141471441230600224720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" /* library-private global and unique instance vars */ #if defined(ANDROID) || defined(__ANDROID__) # include "ares_android.h" #endif /* library-private global vars with source visibility restricted to this file */ static unsigned int ares_initialized; static int ares_init_flags; /* library-private global vars with visibility across the whole library */ /* Some systems may return either NULL or a valid pointer on malloc(0). c-ares * should never call malloc(0) so lets return NULL so we're more likely to find * an issue if it were to occur. 
*/ static void *default_malloc(size_t size) { if (size == 0) { return NULL; } return malloc(size); } #if defined(_WIN32) /* We need indirections to handle Windows DLL rules. */ static void *default_realloc(void *p, size_t size) { return realloc(p, size); } static void default_free(void *p) { free(p); } #else # define default_realloc realloc # define default_free free #endif void *(*ares_malloc)(size_t size) = default_malloc; void *(*ares_realloc)(void *ptr, size_t size) = default_realloc; void (*ares_free)(void *ptr) = default_free; void *ares_malloc_zero(size_t size) { void *ptr = ares_malloc(size); if (ptr != NULL) { memset(ptr, 0, size); } return ptr; } void *ares_realloc_zero(void *ptr, size_t orig_size, size_t new_size) { void *p = ares_realloc(ptr, new_size); if (p == NULL) { return NULL; } if (new_size > orig_size) { memset((unsigned char *)p + orig_size, 0, new_size - orig_size); } return p; } int ares_library_init(int flags) { if (ares_initialized) { ares_initialized++; return ARES_SUCCESS; } ares_initialized++; /* NOTE: ARES_LIB_INIT_WIN32 flag no longer used */ ares_init_flags = flags; return ARES_SUCCESS; } int ares_library_init_mem(int flags, void *(*amalloc)(size_t size), void (*afree)(void *ptr), void *(*arealloc)(void *ptr, size_t size)) { if (amalloc) { ares_malloc = amalloc; } if (arealloc) { ares_realloc = arealloc; } if (afree) { ares_free = afree; } return ares_library_init(flags); } void ares_library_cleanup(void) { if (!ares_initialized) { return; } ares_initialized--; if (ares_initialized) { return; } /* NOTE: ARES_LIB_INIT_WIN32 flag no longer used */ #if defined(ANDROID) || defined(__ANDROID__) ares_library_cleanup_android(); #endif ares_init_flags = ARES_LIB_INIT_NONE; ares_malloc = malloc; ares_realloc = realloc; ares_free = free; } int ares_library_initialized(void) { #ifdef USE_WINSOCK if (!ares_initialized) { return ARES_ENOTINITIALIZED; } #endif return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares_metrics.c000066400000000000000000000223441471441230600214450ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ /* IMPLEMENTATION NOTES * ==================== * * With very little effort we should be able to determine fairly proper timeouts * we can use based on prior query history. We track in order to be able to * auto-scale when network conditions change (e.g. 
maybe there is a provider * failover and timings change due to that). Apple appears to do this within * their system resolver in MacOS. Obviously we should have a minimum, maximum, * and initial value to make sure the algorithm doesn't somehow go off the * rails. * * Values: * - Minimum Timeout: 250ms (approximate RTT half-way around the globe) * - Maximum Timeout: 5000ms (Recommended timeout in RFC 1123), can be reduced * by ARES_OPT_MAXTIMEOUTMS, but otherwise the bound specified by the option * caps the retry timeout. * - Initial Timeout: User-specified via configuration or ARES_OPT_TIMEOUTMS * - Average latency multiplier: 5x (a local DNS server returning a cached value * will be quicker than if it needs to recurse so we need to account for this) * - Minimum Count for Average: 3. This is the minimum number of queries we * need to form an average for the bucket. * * Per-server buckets for tracking latency over time (these are ephemeral * meaning they don't persist once a channel is destroyed). We record both the * current timespan for the bucket and the immediate preceding timespan in case * of roll-overs we can still maintain recent metrics for calculations: * - 1 minute * - 15 minutes * - 1 hr * - 1 day * - since inception * * Each bucket would contain: * - timestamp (divided by interval) * - minimum latency * - maximum latency * - total time * - count * NOTE: average latency is (total time / count), we will calculate this * dynamically when needed * * Basic algorithm for calculating timeout to use would be: * - Scan from most recent bucket to least recent * - Check timestamp of bucket, if doesn't match current time, continue to next * bucket * - Check count of bucket, if its not at least the "Minimum Count for Average", * check the previous bucket, otherwise continue to next bucket * - If we reached the end with no bucket match, use "Initial Timeout" * - If bucket is selected, take ("total time" / count) as Average latency, * multiply by "Average Latency Multiplier", bound by "Minimum Timeout" and * "Maximum Timeout" * NOTE: The timeout calculated may not be the timeout used. If we are retrying * the query on the same server another time, then it will use a larger value * * On each query reply where the response is legitimate (proper response or * NXDOMAIN) and not something like a server error: * - Cycle through each bucket in order * - Check timestamp of bucket against current timestamp, if out of date * overwrite previous entry with values, clear current values * - Compare current minimum and maximum recorded latency against query time and * adjust if necessary * - Increment "count" by 1 and "total time" by the query time * * Other Notes: * - This is always-on, the only user-configurable value is the initial * timeout which will simply re-uses the current option. * - Minimum and Maximum latencies for a bucket are currently unused but are * there in case we find a need for them in the future. */ #include "ares_private.h" /*! Minimum timeout value. Chosen due to it being approximately RTT half-way * around the world */ #define MIN_TIMEOUT_MS 250 /*! Multiplier to apply to average latency to come up with an initial timeout */ #define AVG_TIMEOUT_MULTIPLIER 5 /*! Upper timeout bounds, only used if channel->maxtimeout not set */ #define MAX_TIMEOUT_MS 5000 /*! 
Minimum queries required to form an average */ #define MIN_COUNT_FOR_AVERAGE 3 static time_t ares_metric_timestamp(ares_server_bucket_t bucket, const ares_timeval_t *now, ares_bool_t is_previous) { time_t divisor = 1; /* Silence bogus MSVC warning by setting default value */ switch (bucket) { case ARES_METRIC_1MINUTE: divisor = 60; break; case ARES_METRIC_15MINUTES: divisor = 15 * 60; break; case ARES_METRIC_1HOUR: divisor = 60 * 60; break; case ARES_METRIC_1DAY: divisor = 24 * 60 * 60; break; case ARES_METRIC_INCEPTION: return is_previous ? 0 : 1; case ARES_METRIC_COUNT: return 0; /* Invalid! */ } if (is_previous) { if (divisor >= now->sec) { return 0; } return (time_t)((now->sec - divisor) / divisor); } return (time_t)(now->sec / divisor); } void ares_metrics_record(const ares_query_t *query, ares_server_t *server, ares_status_t status, const ares_dns_record_t *dnsrec) { ares_timeval_t now; ares_timeval_t tvdiff; unsigned int query_ms; ares_dns_rcode_t rcode; ares_server_bucket_t i; if (status != ARES_SUCCESS) { return; } if (server == NULL) { return; } ares__tvnow(&now); rcode = ares_dns_record_get_rcode(dnsrec); if (rcode != ARES_RCODE_NOERROR && rcode != ARES_RCODE_NXDOMAIN) { return; } ares__timeval_diff(&tvdiff, &query->ts, &now); query_ms = (unsigned int)((tvdiff.sec * 1000) + (tvdiff.usec / 1000)); if (query_ms == 0) { query_ms = 1; } /* Place in each bucket */ for (i = 0; i < ARES_METRIC_COUNT; i++) { time_t ts = ares_metric_timestamp(i, &now, ARES_FALSE); /* Copy metrics to prev and clear */ if (ts != server->metrics[i].ts) { server->metrics[i].prev_ts = server->metrics[i].ts; server->metrics[i].prev_total_ms = server->metrics[i].total_ms; server->metrics[i].prev_total_count = server->metrics[i].total_count; server->metrics[i].ts = ts; server->metrics[i].latency_min_ms = 0; server->metrics[i].latency_max_ms = 0; server->metrics[i].total_ms = 0; server->metrics[i].total_count = 0; } if (server->metrics[i].latency_min_ms == 0 || server->metrics[i].latency_min_ms > query_ms) { server->metrics[i].latency_min_ms = query_ms; } if (query_ms > server->metrics[i].latency_max_ms) { server->metrics[i].latency_min_ms = query_ms; } server->metrics[i].total_count++; server->metrics[i].total_ms += (ares_uint64_t)query_ms; } } size_t ares_metrics_server_timeout(const ares_server_t *server, const ares_timeval_t *now) { const ares_channel_t *channel = server->channel; ares_server_bucket_t i; size_t timeout_ms = 0; size_t max_timeout_ms; for (i = 0; i < ARES_METRIC_COUNT; i++) { time_t ts = ares_metric_timestamp(i, now, ARES_FALSE); /* This ts has been invalidated, see if we should use the previous * time period */ if (ts != server->metrics[i].ts || server->metrics[i].total_count < MIN_COUNT_FOR_AVERAGE) { time_t prev_ts = ares_metric_timestamp(i, now, ARES_TRUE); if (prev_ts != server->metrics[i].prev_ts || server->metrics[i].prev_total_count < MIN_COUNT_FOR_AVERAGE) { /* Move onto next bucket */ continue; } /* Calculate average time for previous bucket */ timeout_ms = (size_t)(server->metrics[i].prev_total_ms / server->metrics[i].prev_total_count); } else { /* Calculate average time for current bucket*/ timeout_ms = (size_t)(server->metrics[i].total_ms / server->metrics[i].total_count); } /* Multiply average by constant to get timeout value */ timeout_ms *= AVG_TIMEOUT_MULTIPLIER; break; } /* If we're here, that means its the first query for the server, so we just * use the initial default timeout */ if (timeout_ms == 0) { timeout_ms = channel->timeout; } /* don't go below lower bounds */ if 
(timeout_ms < MIN_TIMEOUT_MS) { timeout_ms = MIN_TIMEOUT_MS; } /* don't go above upper bounds */ max_timeout_ms = channel->maxtimeout ? channel->maxtimeout : MAX_TIMEOUT_MS; if (timeout_ms > max_timeout_ms) { timeout_ms = max_timeout_ms; } return timeout_ms; } gevent-24.11.1/deps/c-ares/src/lib/ares_options.c000066400000000000000000000346301471441230600214730ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2008 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_data.h" #include "ares_inet_net_pton.h" void ares_destroy_options(struct ares_options *options) { int i; ares_free(options->servers); for (i = 0; options->domains && i < options->ndomains; i++) { ares_free(options->domains[i]); } ares_free(options->domains); ares_free(options->sortlist); ares_free(options->lookups); ares_free(options->resolvconf_path); ares_free(options->hosts_path); } static struct in_addr *ares_save_opt_servers(const ares_channel_t *channel, int *nservers) { ares__slist_node_t *snode; struct in_addr *out = ares_malloc_zero(ares__slist_len(channel->servers) * sizeof(*out)); *nservers = 0; if (out == NULL) { return NULL; } for (snode = ares__slist_node_first(channel->servers); snode != NULL; snode = ares__slist_node_next(snode)) { const ares_server_t *server = ares__slist_node_val(snode); if (server->addr.family != AF_INET) { continue; } memcpy(&out[*nservers], &server->addr.addr.addr4, sizeof(*out)); (*nservers)++; } return out; } /* Save options from initialized channel */ int ares_save_options(const ares_channel_t *channel, struct ares_options *options, int *optmask) { size_t i; /* NOTE: We can't zero the whole thing out, this is because the size of the * struct ares_options changes over time, so if someone compiled * with an older version, their struct size might be smaller and * we might overwrite their memory! So using the optmask is critical * here, as they could have only set options they knew about. 
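 *
 * Illustrative caller pattern (hypothetical code, not part of the original
 * source):
 *
 *   struct ares_options opts;
 *   int optmask = 0;
 *   if (ares_save_options(channel, &opts, &optmask) == ARES_SUCCESS) {
 *     ... consult only the members whose ARES_OPT_* bit is set in optmask ...
 *   }
 *   ares_destroy_options(&opts);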
* * Unfortunately ares_destroy_options() doesn't take an optmask, so * there are a few pointers we *must* zero out otherwise we won't * know if they were allocated or not */ options->servers = NULL; options->domains = NULL; options->sortlist = NULL; options->lookups = NULL; options->resolvconf_path = NULL; options->hosts_path = NULL; if (!ARES_CONFIG_CHECK(channel)) { return ARES_ENODATA; } if (channel->optmask & ARES_OPT_FLAGS) { options->flags = (int)channel->flags; } /* We convert ARES_OPT_TIMEOUT to ARES_OPT_TIMEOUTMS in * ares__init_by_options() */ if (channel->optmask & ARES_OPT_TIMEOUTMS) { options->timeout = (int)channel->timeout; } if (channel->optmask & ARES_OPT_TRIES) { options->tries = (int)channel->tries; } if (channel->optmask & ARES_OPT_NDOTS) { options->ndots = (int)channel->ndots; } if (channel->optmask & ARES_OPT_MAXTIMEOUTMS) { options->maxtimeout = (int)channel->maxtimeout; } if (channel->optmask & ARES_OPT_UDP_PORT) { options->udp_port = channel->udp_port; } if (channel->optmask & ARES_OPT_TCP_PORT) { options->tcp_port = channel->tcp_port; } if (channel->optmask & ARES_OPT_SOCK_STATE_CB) { options->sock_state_cb = channel->sock_state_cb; options->sock_state_cb_data = channel->sock_state_cb_data; } if (channel->optmask & ARES_OPT_SERVERS) { options->servers = ares_save_opt_servers(channel, &options->nservers); if (options->servers == NULL) { return ARES_ENOMEM; } } if (channel->optmask & ARES_OPT_DOMAINS) { options->domains = NULL; if (channel->ndomains) { options->domains = ares_malloc(channel->ndomains * sizeof(char *)); if (!options->domains) { return ARES_ENOMEM; } for (i = 0; i < channel->ndomains; i++) { options->domains[i] = ares_strdup(channel->domains[i]); if (!options->domains[i]) { options->ndomains = (int)i; return ARES_ENOMEM; } } } options->ndomains = (int)channel->ndomains; } if (channel->optmask & ARES_OPT_LOOKUPS) { options->lookups = ares_strdup(channel->lookups); if (!options->lookups && channel->lookups) { return ARES_ENOMEM; } } if (channel->optmask & ARES_OPT_SORTLIST) { options->sortlist = NULL; if (channel->nsort) { options->sortlist = ares_malloc(channel->nsort * sizeof(struct apattern)); if (!options->sortlist) { return ARES_ENOMEM; } for (i = 0; i < channel->nsort; i++) { options->sortlist[i] = channel->sortlist[i]; } } options->nsort = (int)channel->nsort; } if (channel->optmask & ARES_OPT_RESOLVCONF) { options->resolvconf_path = ares_strdup(channel->resolvconf_path); if (!options->resolvconf_path) { return ARES_ENOMEM; } } if (channel->optmask & ARES_OPT_HOSTS_FILE) { options->hosts_path = ares_strdup(channel->hosts_path); if (!options->hosts_path) { return ARES_ENOMEM; } } if (channel->optmask & ARES_OPT_SOCK_SNDBUF && channel->socket_send_buffer_size > 0) { options->socket_send_buffer_size = channel->socket_send_buffer_size; } if (channel->optmask & ARES_OPT_SOCK_RCVBUF && channel->socket_receive_buffer_size > 0) { options->socket_receive_buffer_size = channel->socket_receive_buffer_size; } if (channel->optmask & ARES_OPT_EDNSPSZ) { options->ednspsz = (int)channel->ednspsz; } if (channel->optmask & ARES_OPT_UDP_MAX_QUERIES) { options->udp_max_queries = (int)channel->udp_max_queries; } if (channel->optmask & ARES_OPT_QUERY_CACHE) { options->qcache_max_ttl = channel->qcache_max_ttl; } if (channel->optmask & ARES_OPT_EVENT_THREAD) { options->evsys = channel->evsys; } /* Set options for server failover behavior */ if (channel->optmask & ARES_OPT_SERVER_FAILOVER) { options->server_failover_opts.retry_chance = channel->server_retry_chance; 
options->server_failover_opts.retry_delay = channel->server_retry_delay; } *optmask = (int)channel->optmask; return ARES_SUCCESS; } static ares_status_t ares__init_options_servers(ares_channel_t *channel, const struct in_addr *servers, size_t nservers) { ares__llist_t *slist = NULL; ares_status_t status; status = ares_in_addr_to_server_config_llist(servers, nservers, &slist); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__servers_update(channel, slist, ARES_TRUE); ares__llist_destroy(slist); return status; } ares_status_t ares__init_by_options(ares_channel_t *channel, const struct ares_options *options, int optmask) { size_t i; if (channel == NULL) { return ARES_ENODATA; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (options == NULL) { if (optmask != 0) { return ARES_ENODATA; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ARES_SUCCESS; } /* Easy stuff. */ /* Event Thread requires threading support and is incompatible with socket * state callbacks */ if (optmask & ARES_OPT_EVENT_THREAD) { if (!ares_threadsafety()) { return ARES_ENOTIMP; } if (optmask & ARES_OPT_SOCK_STATE_CB) { return ARES_EFORMERR; } channel->evsys = options->evsys; } if (optmask & ARES_OPT_FLAGS) { channel->flags = (unsigned int)options->flags; } if (optmask & ARES_OPT_TIMEOUTMS) { /* Apparently some integrations were passing -1 to tell c-ares to use * the default instead of just omitting the optmask */ if (options->timeout <= 0) { optmask &= ~(ARES_OPT_TIMEOUTMS); } else { channel->timeout = (unsigned int)options->timeout; } } else if (optmask & ARES_OPT_TIMEOUT) { optmask &= ~(ARES_OPT_TIMEOUT); /* Apparently some integrations were passing -1 to tell c-ares to use * the default instead of just omitting the optmask */ if (options->timeout > 0) { /* Convert to milliseconds */ optmask |= ARES_OPT_TIMEOUTMS; channel->timeout = (unsigned int)options->timeout * 1000; } } if (optmask & ARES_OPT_TRIES) { if (options->tries <= 0) { optmask &= ~(ARES_OPT_TRIES); } else { channel->tries = (size_t)options->tries; } } if (optmask & ARES_OPT_NDOTS) { if (options->ndots < 0) { optmask &= ~(ARES_OPT_NDOTS); } else { channel->ndots = (size_t)options->ndots; } } if (optmask & ARES_OPT_MAXTIMEOUTMS) { if (options->maxtimeout <= 0) { optmask &= ~(ARES_OPT_MAXTIMEOUTMS); } else { channel->maxtimeout = (size_t)options->maxtimeout; } } if (optmask & ARES_OPT_ROTATE) { channel->rotate = ARES_TRUE; } if (optmask & ARES_OPT_NOROTATE) { channel->rotate = ARES_FALSE; } if (optmask & ARES_OPT_UDP_PORT) { channel->udp_port = options->udp_port; } if (optmask & ARES_OPT_TCP_PORT) { channel->tcp_port = options->tcp_port; } if (optmask & ARES_OPT_SOCK_STATE_CB) { channel->sock_state_cb = options->sock_state_cb; channel->sock_state_cb_data = options->sock_state_cb_data; } if (optmask & ARES_OPT_SOCK_SNDBUF) { if (options->socket_send_buffer_size <= 0) { optmask &= ~(ARES_OPT_SOCK_SNDBUF); } else { channel->socket_send_buffer_size = options->socket_send_buffer_size; } } if (optmask & ARES_OPT_SOCK_RCVBUF) { if (options->socket_receive_buffer_size <= 0) { optmask &= ~(ARES_OPT_SOCK_RCVBUF); } else { channel->socket_receive_buffer_size = options->socket_receive_buffer_size; } } if (optmask & ARES_OPT_EDNSPSZ) { if (options->ednspsz <= 0) { optmask &= ~(ARES_OPT_EDNSPSZ); } else { channel->ednspsz = (size_t)options->ednspsz; } } /* Copy the domains, if given. Keep channel->ndomains consistent so * we can clean up in case of error. 
*/ if (optmask & ARES_OPT_DOMAINS && options->ndomains > 0) { channel->domains = ares_malloc_zero((size_t)options->ndomains * sizeof(char *)); if (!channel->domains) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } channel->ndomains = (size_t)options->ndomains; for (i = 0; i < (size_t)options->ndomains; i++) { channel->domains[i] = ares_strdup(options->domains[i]); if (!channel->domains[i]) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } } /* Set lookups, if given. */ if (optmask & ARES_OPT_LOOKUPS) { if (options->lookups == NULL) { optmask &= ~(ARES_OPT_LOOKUPS); } else { channel->lookups = ares_strdup(options->lookups); if (!channel->lookups) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } } /* copy sortlist */ if (optmask & ARES_OPT_SORTLIST && options->nsort > 0) { channel->nsort = (size_t)options->nsort; channel->sortlist = ares_malloc((size_t)options->nsort * sizeof(struct apattern)); if (!channel->sortlist) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < (size_t)options->nsort; i++) { channel->sortlist[i] = options->sortlist[i]; } } /* Set path for resolv.conf file, if given. */ if (optmask & ARES_OPT_RESOLVCONF) { if (options->resolvconf_path == NULL) { optmask &= ~(ARES_OPT_RESOLVCONF); } else { channel->resolvconf_path = ares_strdup(options->resolvconf_path); if (channel->resolvconf_path == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } } /* Set path for hosts file, if given. */ if (optmask & ARES_OPT_HOSTS_FILE) { if (options->hosts_path == NULL) { optmask &= ~(ARES_OPT_HOSTS_FILE); } else { channel->hosts_path = ares_strdup(options->hosts_path); if (channel->hosts_path == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } } if (optmask & ARES_OPT_UDP_MAX_QUERIES) { if (options->udp_max_queries <= 0) { optmask &= ~(ARES_OPT_UDP_MAX_QUERIES); } else { channel->udp_max_queries = (size_t)options->udp_max_queries; } } /* As of c-ares 1.31.0, the Query Cache is on by default. The only way to * disable it is to set options->qcache_max_ttl = 0 while specifying the * ARES_OPT_QUERY_CACHE which will actually disable it completely. 
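 *
 * Illustrative sketch (hypothetical caller code, not part of the original
 * source): to turn the cache off entirely a caller would pass
 *
 *   struct ares_options opts;
 *   memset(&opts, 0, sizeof(opts));
 *   opts.qcache_max_ttl = 0;
 *   ares_init_options(&channel, &opts, ARES_OPT_QUERY_CACHE);
 *
 * whereas leaving ARES_OPT_QUERY_CACHE out of the mask keeps the cache
 * enabled with the 3600 second default applied below.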
*/ if (optmask & ARES_OPT_QUERY_CACHE) { /* qcache_max_ttl is unsigned unlike the others */ channel->qcache_max_ttl = options->qcache_max_ttl; } else { optmask |= ARES_OPT_QUERY_CACHE; channel->qcache_max_ttl = 3600; } /* Initialize the ipv4 servers if provided */ if (optmask & ARES_OPT_SERVERS) { if (options->nservers <= 0) { optmask &= ~(ARES_OPT_SERVERS); } else { ares_status_t status; status = ares__init_options_servers(channel, options->servers, (size_t)options->nservers); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } } /* Set fields for server failover behavior */ if (optmask & ARES_OPT_SERVER_FAILOVER) { channel->server_retry_chance = options->server_failover_opts.retry_chance; channel->server_retry_delay = options->server_failover_opts.retry_delay; } channel->optmask = (unsigned int)optmask; return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/ares_platform.c000066400000000000000000020577011471441230600216320ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_platform.h" #if defined(_WIN32) && !defined(MSDOS) # define V_PLATFORM_WIN32s 0 # define V_PLATFORM_WIN32_WINDOWS 1 # define V_PLATFORM_WIN32_NT 2 # define V_PLATFORM_WIN32_CE 3 win_platform ares__getplatform(void) { OSVERSIONINFOEX OsvEx; memset(&OsvEx, 0, sizeof(OsvEx)); OsvEx.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX); # ifdef _MSC_VER # pragma warning(push) # pragma warning(disable : 4996) /* warning C4996: 'GetVersionExW': was \ declared deprecated */ # endif if (!GetVersionEx((void *)&OsvEx)) { memset(&OsvEx, 0, sizeof(OsvEx)); OsvEx.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); if (!GetVersionEx((void *)&OsvEx)) { return WIN_UNKNOWN; } } # ifdef _MSC_VER # pragma warning(pop) # endif switch (OsvEx.dwPlatformId) { case V_PLATFORM_WIN32s: return WIN_3X; case V_PLATFORM_WIN32_WINDOWS: return WIN_9X; case V_PLATFORM_WIN32_NT: return WIN_NT; case V_PLATFORM_WIN32_CE: return WIN_CE; default: return WIN_UNKNOWN; } } #endif /* _WIN32 && ! 
MSDOS */ #if defined(_WIN32_WCE) /* IANA Well Known Ports are in range 0-1023 */ # define USE_IANA_WELL_KNOWN_PORTS 1 /* IANA Registered Ports are in range 1024-49151 */ # define USE_IANA_REGISTERED_PORTS 1 struct pvt_servent { char *s_name; char **s_aliases; unsigned short s_port; char *s_proto; }; /* * Ref: http://www.iana.org/assignments/port-numbers */ static struct pvt_servent IANAports[] = { # ifdef USE_IANA_WELL_KNOWN_PORTS { "tcpmux", { NULL }, 1, "tcp" }, { "tcpmux", { NULL }, 1, "udp" }, { "compressnet", { NULL }, 2, "tcp" }, { "compressnet", { NULL }, 2, "udp" }, { "compressnet", { NULL }, 3, "tcp" }, { "compressnet", { NULL }, 3, "udp" }, { "rje", { NULL }, 5, "tcp" }, { "rje", { NULL }, 5, "udp" }, { "echo", { NULL }, 7, "tcp" }, { "echo", { NULL }, 7, "udp" }, { "discard", { NULL }, 9, "tcp" }, { "discard", { NULL }, 9, "udp" }, { "discard", { NULL }, 9, "sctp" }, { "discard", { NULL }, 9, "dccp" }, { "systat", { NULL }, 11, "tcp" }, { "systat", { NULL }, 11, "udp" }, { "daytime", { NULL }, 13, "tcp" }, { "daytime", { NULL }, 13, "udp" }, { "qotd", { NULL }, 17, "tcp" }, { "qotd", { NULL }, 17, "udp" }, { "msp", { NULL }, 18, "tcp" }, { "msp", { NULL }, 18, "udp" }, { "chargen", { NULL }, 19, "tcp" }, { "chargen", { NULL }, 19, "udp" }, { "ftp-data", { NULL }, 20, "tcp" }, { "ftp-data", { NULL }, 20, "udp" }, { "ftp-data", { NULL }, 20, "sctp" }, { "ftp", { NULL }, 21, "tcp" }, { "ftp", { NULL }, 21, "udp" }, { "ftp", { NULL }, 21, "sctp" }, { "ssh", { NULL }, 22, "tcp" }, { "ssh", { NULL }, 22, "udp" }, { "ssh", { NULL }, 22, "sctp" }, { "telnet", { NULL }, 23, "tcp" }, { "telnet", { NULL }, 23, "udp" }, { "smtp", { NULL }, 25, "tcp" }, { "smtp", { NULL }, 25, "udp" }, { "nsw-fe", { NULL }, 27, "tcp" }, { "nsw-fe", { NULL }, 27, "udp" }, { "msg-icp", { NULL }, 29, "tcp" }, { "msg-icp", { NULL }, 29, "udp" }, { "msg-auth", { NULL }, 31, "tcp" }, { "msg-auth", { NULL }, 31, "udp" }, { "dsp", { NULL }, 33, "tcp" }, { "dsp", { NULL }, 33, "udp" }, { "time", { NULL }, 37, "tcp" }, { "time", { NULL }, 37, "udp" }, { "rap", { NULL }, 38, "tcp" }, { "rap", { NULL }, 38, "udp" }, { "rlp", { NULL }, 39, "tcp" }, { "rlp", { NULL }, 39, "udp" }, { "graphics", { NULL }, 41, "tcp" }, { "graphics", { NULL }, 41, "udp" }, { "name", { NULL }, 42, "tcp" }, { "name", { NULL }, 42, "udp" }, { "nameserver", { NULL }, 42, "tcp" }, { "nameserver", { NULL }, 42, "udp" }, { "nicname", { NULL }, 43, "tcp" }, { "nicname", { NULL }, 43, "udp" }, { "mpm-flags", { NULL }, 44, "tcp" }, { "mpm-flags", { NULL }, 44, "udp" }, { "mpm", { NULL }, 45, "tcp" }, { "mpm", { NULL }, 45, "udp" }, { "mpm-snd", { NULL }, 46, "tcp" }, { "mpm-snd", { NULL }, 46, "udp" }, { "ni-ftp", { NULL }, 47, "tcp" }, { "ni-ftp", { NULL }, 47, "udp" }, { "auditd", { NULL }, 48, "tcp" }, { "auditd", { NULL }, 48, "udp" }, { "tacacs", { NULL }, 49, "tcp" }, { "tacacs", { NULL }, 49, "udp" }, { "re-mail-ck", { NULL }, 50, "tcp" }, { "re-mail-ck", { NULL }, 50, "udp" }, { "la-maint", { NULL }, 51, "tcp" }, { "la-maint", { NULL }, 51, "udp" }, { "xns-time", { NULL }, 52, "tcp" }, { "xns-time", { NULL }, 52, "udp" }, { "domain", { NULL }, 53, "tcp" }, { "domain", { NULL }, 53, "udp" }, { "xns-ch", { NULL }, 54, "tcp" }, { "xns-ch", { NULL }, 54, "udp" }, { "isi-gl", { NULL }, 55, "tcp" }, { "isi-gl", { NULL }, 55, "udp" }, { "xns-auth", { NULL }, 56, "tcp" }, { "xns-auth", { NULL }, 56, "udp" }, { "xns-mail", { NULL }, 58, "tcp" }, { "xns-mail", { NULL }, 58, "udp" }, { "ni-mail", { NULL }, 61, "tcp" }, { "ni-mail", { NULL }, 61, "udp" }, 
{ "acas", { NULL }, 62, "tcp" }, { "acas", { NULL }, 62, "udp" }, { "whois++", { NULL }, 63, "tcp" }, { "whois++", { NULL }, 63, "udp" }, { "covia", { NULL }, 64, "tcp" }, { "covia", { NULL }, 64, "udp" }, { "tacacs-ds", { NULL }, 65, "tcp" }, { "tacacs-ds", { NULL }, 65, "udp" }, { "sql*net", { NULL }, 66, "tcp" }, { "sql*net", { NULL }, 66, "udp" }, { "bootps", { NULL }, 67, "tcp" }, { "bootps", { NULL }, 67, "udp" }, { "bootpc", { NULL }, 68, "tcp" }, { "bootpc", { NULL }, 68, "udp" }, { "tftp", { NULL }, 69, "tcp" }, { "tftp", { NULL }, 69, "udp" }, { "gopher", { NULL }, 70, "tcp" }, { "gopher", { NULL }, 70, "udp" }, { "netrjs-1", { NULL }, 71, "tcp" }, { "netrjs-1", { NULL }, 71, "udp" }, { "netrjs-2", { NULL }, 72, "tcp" }, { "netrjs-2", { NULL }, 72, "udp" }, { "netrjs-3", { NULL }, 73, "tcp" }, { "netrjs-3", { NULL }, 73, "udp" }, { "netrjs-4", { NULL }, 74, "tcp" }, { "netrjs-4", { NULL }, 74, "udp" }, { "deos", { NULL }, 76, "tcp" }, { "deos", { NULL }, 76, "udp" }, { "vettcp", { NULL }, 78, "tcp" }, { "vettcp", { NULL }, 78, "udp" }, { "finger", { NULL }, 79, "tcp" }, { "finger", { NULL }, 79, "udp" }, { "http", { NULL }, 80, "tcp" }, { "http", { NULL }, 80, "udp" }, { "www", { NULL }, 80, "tcp" }, { "www", { NULL }, 80, "udp" }, { "www-http", { NULL }, 80, "tcp" }, { "www-http", { NULL }, 80, "udp" }, { "http", { NULL }, 80, "sctp" }, { "xfer", { NULL }, 82, "tcp" }, { "xfer", { NULL }, 82, "udp" }, { "mit-ml-dev", { NULL }, 83, "tcp" }, { "mit-ml-dev", { NULL }, 83, "udp" }, { "ctf", { NULL }, 84, "tcp" }, { "ctf", { NULL }, 84, "udp" }, { "mit-ml-dev", { NULL }, 85, "tcp" }, { "mit-ml-dev", { NULL }, 85, "udp" }, { "mfcobol", { NULL }, 86, "tcp" }, { "mfcobol", { NULL }, 86, "udp" }, { "kerberos", { NULL }, 88, "tcp" }, { "kerberos", { NULL }, 88, "udp" }, { "su-mit-tg", { NULL }, 89, "tcp" }, { "su-mit-tg", { NULL }, 89, "udp" }, { "dnsix", { NULL }, 90, "tcp" }, { "dnsix", { NULL }, 90, "udp" }, { "mit-dov", { NULL }, 91, "tcp" }, { "mit-dov", { NULL }, 91, "udp" }, { "npp", { NULL }, 92, "tcp" }, { "npp", { NULL }, 92, "udp" }, { "dcp", { NULL }, 93, "tcp" }, { "dcp", { NULL }, 93, "udp" }, { "objcall", { NULL }, 94, "tcp" }, { "objcall", { NULL }, 94, "udp" }, { "supdup", { NULL }, 95, "tcp" }, { "supdup", { NULL }, 95, "udp" }, { "dixie", { NULL }, 96, "tcp" }, { "dixie", { NULL }, 96, "udp" }, { "swift-rvf", { NULL }, 97, "tcp" }, { "swift-rvf", { NULL }, 97, "udp" }, { "tacnews", { NULL }, 98, "tcp" }, { "tacnews", { NULL }, 98, "udp" }, { "metagram", { NULL }, 99, "tcp" }, { "metagram", { NULL }, 99, "udp" }, { "newacct", { NULL }, 100, "tcp" }, { "hostname", { NULL }, 101, "tcp" }, { "hostname", { NULL }, 101, "udp" }, { "iso-tsap", { NULL }, 102, "tcp" }, { "iso-tsap", { NULL }, 102, "udp" }, { "gppitnp", { NULL }, 103, "tcp" }, { "gppitnp", { NULL }, 103, "udp" }, { "acr-nema", { NULL }, 104, "tcp" }, { "acr-nema", { NULL }, 104, "udp" }, { "cso", { NULL }, 105, "tcp" }, { "cso", { NULL }, 105, "udp" }, { "csnet-ns", { NULL }, 105, "tcp" }, { "csnet-ns", { NULL }, 105, "udp" }, { "3com-tsmux", { NULL }, 106, "tcp" }, { "3com-tsmux", { NULL }, 106, "udp" }, { "rtelnet", { NULL }, 107, "tcp" }, { "rtelnet", { NULL }, 107, "udp" }, { "snagas", { NULL }, 108, "tcp" }, { "snagas", { NULL }, 108, "udp" }, { "pop2", { NULL }, 109, "tcp" }, { "pop2", { NULL }, 109, "udp" }, { "pop3", { NULL }, 110, "tcp" }, { "pop3", { NULL }, 110, "udp" }, { "sunrpc", { NULL }, 111, "tcp" }, { "sunrpc", { NULL }, 111, "udp" }, { "mcidas", { NULL }, 112, "tcp" }, { "mcidas", { NULL }, 
112, "udp" }, { "ident", { NULL }, 113, "tcp" }, { "auth", { NULL }, 113, "tcp" }, { "auth", { NULL }, 113, "udp" }, { "sftp", { NULL }, 115, "tcp" }, { "sftp", { NULL }, 115, "udp" }, { "ansanotify", { NULL }, 116, "tcp" }, { "ansanotify", { NULL }, 116, "udp" }, { "uucp-path", { NULL }, 117, "tcp" }, { "uucp-path", { NULL }, 117, "udp" }, { "sqlserv", { NULL }, 118, "tcp" }, { "sqlserv", { NULL }, 118, "udp" }, { "nntp", { NULL }, 119, "tcp" }, { "nntp", { NULL }, 119, "udp" }, { "cfdptkt", { NULL }, 120, "tcp" }, { "cfdptkt", { NULL }, 120, "udp" }, { "erpc", { NULL }, 121, "tcp" }, { "erpc", { NULL }, 121, "udp" }, { "smakynet", { NULL }, 122, "tcp" }, { "smakynet", { NULL }, 122, "udp" }, { "ntp", { NULL }, 123, "tcp" }, { "ntp", { NULL }, 123, "udp" }, { "ansatrader", { NULL }, 124, "tcp" }, { "ansatrader", { NULL }, 124, "udp" }, { "locus-map", { NULL }, 125, "tcp" }, { "locus-map", { NULL }, 125, "udp" }, { "nxedit", { NULL }, 126, "tcp" }, { "nxedit", { NULL }, 126, "udp" }, { "locus-con", { NULL }, 127, "tcp" }, { "locus-con", { NULL }, 127, "udp" }, { "gss-xlicen", { NULL }, 128, "tcp" }, { "gss-xlicen", { NULL }, 128, "udp" }, { "pwdgen", { NULL }, 129, "tcp" }, { "pwdgen", { NULL }, 129, "udp" }, { "cisco-fna", { NULL }, 130, "tcp" }, { "cisco-fna", { NULL }, 130, "udp" }, { "cisco-tna", { NULL }, 131, "tcp" }, { "cisco-tna", { NULL }, 131, "udp" }, { "cisco-sys", { NULL }, 132, "tcp" }, { "cisco-sys", { NULL }, 132, "udp" }, { "statsrv", { NULL }, 133, "tcp" }, { "statsrv", { NULL }, 133, "udp" }, { "ingres-net", { NULL }, 134, "tcp" }, { "ingres-net", { NULL }, 134, "udp" }, { "epmap", { NULL }, 135, "tcp" }, { "epmap", { NULL }, 135, "udp" }, { "profile", { NULL }, 136, "tcp" }, { "profile", { NULL }, 136, "udp" }, { "netbios-ns", { NULL }, 137, "tcp" }, { "netbios-ns", { NULL }, 137, "udp" }, { "netbios-dgm", { NULL }, 138, "tcp" }, { "netbios-dgm", { NULL }, 138, "udp" }, { "netbios-ssn", { NULL }, 139, "tcp" }, { "netbios-ssn", { NULL }, 139, "udp" }, { "emfis-data", { NULL }, 140, "tcp" }, { "emfis-data", { NULL }, 140, "udp" }, { "emfis-cntl", { NULL }, 141, "tcp" }, { "emfis-cntl", { NULL }, 141, "udp" }, { "bl-idm", { NULL }, 142, "tcp" }, { "bl-idm", { NULL }, 142, "udp" }, { "imap", { NULL }, 143, "tcp" }, { "imap", { NULL }, 143, "udp" }, { "uma", { NULL }, 144, "tcp" }, { "uma", { NULL }, 144, "udp" }, { "uaac", { NULL }, 145, "tcp" }, { "uaac", { NULL }, 145, "udp" }, { "iso-tp0", { NULL }, 146, "tcp" }, { "iso-tp0", { NULL }, 146, "udp" }, { "iso-ip", { NULL }, 147, "tcp" }, { "iso-ip", { NULL }, 147, "udp" }, { "jargon", { NULL }, 148, "tcp" }, { "jargon", { NULL }, 148, "udp" }, { "aed-512", { NULL }, 149, "tcp" }, { "aed-512", { NULL }, 149, "udp" }, { "sql-net", { NULL }, 150, "tcp" }, { "sql-net", { NULL }, 150, "udp" }, { "hems", { NULL }, 151, "tcp" }, { "hems", { NULL }, 151, "udp" }, { "bftp", { NULL }, 152, "tcp" }, { "bftp", { NULL }, 152, "udp" }, { "sgmp", { NULL }, 153, "tcp" }, { "sgmp", { NULL }, 153, "udp" }, { "netsc-prod", { NULL }, 154, "tcp" }, { "netsc-prod", { NULL }, 154, "udp" }, { "netsc-dev", { NULL }, 155, "tcp" }, { "netsc-dev", { NULL }, 155, "udp" }, { "sqlsrv", { NULL }, 156, "tcp" }, { "sqlsrv", { NULL }, 156, "udp" }, { "knet-cmp", { NULL }, 157, "tcp" }, { "knet-cmp", { NULL }, 157, "udp" }, { "pcmail-srv", { NULL }, 158, "tcp" }, { "pcmail-srv", { NULL }, 158, "udp" }, { "nss-routing", { NULL }, 159, "tcp" }, { "nss-routing", { NULL }, 159, "udp" }, { "sgmp-traps", { NULL }, 160, "tcp" }, { "sgmp-traps", { NULL }, 160, "udp" 
}, { "snmp", { NULL }, 161, "tcp" }, { "snmp", { NULL }, 161, "udp" }, { "snmptrap", { NULL }, 162, "tcp" }, { "snmptrap", { NULL }, 162, "udp" }, { "cmip-man", { NULL }, 163, "tcp" }, { "cmip-man", { NULL }, 163, "udp" }, { "cmip-agent", { NULL }, 164, "tcp" }, { "cmip-agent", { NULL }, 164, "udp" }, { "xns-courier", { NULL }, 165, "tcp" }, { "xns-courier", { NULL }, 165, "udp" }, { "s-net", { NULL }, 166, "tcp" }, { "s-net", { NULL }, 166, "udp" }, { "namp", { NULL }, 167, "tcp" }, { "namp", { NULL }, 167, "udp" }, { "rsvd", { NULL }, 168, "tcp" }, { "rsvd", { NULL }, 168, "udp" }, { "send", { NULL }, 169, "tcp" }, { "send", { NULL }, 169, "udp" }, { "print-srv", { NULL }, 170, "tcp" }, { "print-srv", { NULL }, 170, "udp" }, { "multiplex", { NULL }, 171, "tcp" }, { "multiplex", { NULL }, 171, "udp" }, { "cl/1", { NULL }, 172, "tcp" }, { "cl/1", { NULL }, 172, "udp" }, { "xyplex-mux", { NULL }, 173, "tcp" }, { "xyplex-mux", { NULL }, 173, "udp" }, { "mailq", { NULL }, 174, "tcp" }, { "mailq", { NULL }, 174, "udp" }, { "vmnet", { NULL }, 175, "tcp" }, { "vmnet", { NULL }, 175, "udp" }, { "genrad-mux", { NULL }, 176, "tcp" }, { "genrad-mux", { NULL }, 176, "udp" }, { "xdmcp", { NULL }, 177, "tcp" }, { "xdmcp", { NULL }, 177, "udp" }, { "nextstep", { NULL }, 178, "tcp" }, { "nextstep", { NULL }, 178, "udp" }, { "bgp", { NULL }, 179, "tcp" }, { "bgp", { NULL }, 179, "udp" }, { "bgp", { NULL }, 179, "sctp" }, { "ris", { NULL }, 180, "tcp" }, { "ris", { NULL }, 180, "udp" }, { "unify", { NULL }, 181, "tcp" }, { "unify", { NULL }, 181, "udp" }, { "audit", { NULL }, 182, "tcp" }, { "audit", { NULL }, 182, "udp" }, { "ocbinder", { NULL }, 183, "tcp" }, { "ocbinder", { NULL }, 183, "udp" }, { "ocserver", { NULL }, 184, "tcp" }, { "ocserver", { NULL }, 184, "udp" }, { "remote-kis", { NULL }, 185, "tcp" }, { "remote-kis", { NULL }, 185, "udp" }, { "kis", { NULL }, 186, "tcp" }, { "kis", { NULL }, 186, "udp" }, { "aci", { NULL }, 187, "tcp" }, { "aci", { NULL }, 187, "udp" }, { "mumps", { NULL }, 188, "tcp" }, { "mumps", { NULL }, 188, "udp" }, { "qft", { NULL }, 189, "tcp" }, { "qft", { NULL }, 189, "udp" }, { "gacp", { NULL }, 190, "tcp" }, { "gacp", { NULL }, 190, "udp" }, { "prospero", { NULL }, 191, "tcp" }, { "prospero", { NULL }, 191, "udp" }, { "osu-nms", { NULL }, 192, "tcp" }, { "osu-nms", { NULL }, 192, "udp" }, { "srmp", { NULL }, 193, "tcp" }, { "srmp", { NULL }, 193, "udp" }, { "irc", { NULL }, 194, "tcp" }, { "irc", { NULL }, 194, "udp" }, { "dn6-nlm-aud", { NULL }, 195, "tcp" }, { "dn6-nlm-aud", { NULL }, 195, "udp" }, { "dn6-smm-red", { NULL }, 196, "tcp" }, { "dn6-smm-red", { NULL }, 196, "udp" }, { "dls", { NULL }, 197, "tcp" }, { "dls", { NULL }, 197, "udp" }, { "dls-mon", { NULL }, 198, "tcp" }, { "dls-mon", { NULL }, 198, "udp" }, { "smux", { NULL }, 199, "tcp" }, { "smux", { NULL }, 199, "udp" }, { "src", { NULL }, 200, "tcp" }, { "src", { NULL }, 200, "udp" }, { "at-rtmp", { NULL }, 201, "tcp" }, { "at-rtmp", { NULL }, 201, "udp" }, { "at-nbp", { NULL }, 202, "tcp" }, { "at-nbp", { NULL }, 202, "udp" }, { "at-3", { NULL }, 203, "tcp" }, { "at-3", { NULL }, 203, "udp" }, { "at-echo", { NULL }, 204, "tcp" }, { "at-echo", { NULL }, 204, "udp" }, { "at-5", { NULL }, 205, "tcp" }, { "at-5", { NULL }, 205, "udp" }, { "at-zis", { NULL }, 206, "tcp" }, { "at-zis", { NULL }, 206, "udp" }, { "at-7", { NULL }, 207, "tcp" }, { "at-7", { NULL }, 207, "udp" }, { "at-8", { NULL }, 208, "tcp" }, { "at-8", { NULL }, 208, "udp" }, { "qmtp", { NULL }, 209, "tcp" }, { "qmtp", { NULL }, 209, "udp" 
}, { "z39.50", { NULL }, 210, "tcp" }, { "z39.50", { NULL }, 210, "udp" }, { "914c/g", { NULL }, 211, "tcp" }, { "914c/g", { NULL }, 211, "udp" }, { "anet", { NULL }, 212, "tcp" }, { "anet", { NULL }, 212, "udp" }, { "ipx", { NULL }, 213, "tcp" }, { "ipx", { NULL }, 213, "udp" }, { "vmpwscs", { NULL }, 214, "tcp" }, { "vmpwscs", { NULL }, 214, "udp" }, { "softpc", { NULL }, 215, "tcp" }, { "softpc", { NULL }, 215, "udp" }, { "CAIlic", { NULL }, 216, "tcp" }, { "CAIlic", { NULL }, 216, "udp" }, { "dbase", { NULL }, 217, "tcp" }, { "dbase", { NULL }, 217, "udp" }, { "mpp", { NULL }, 218, "tcp" }, { "mpp", { NULL }, 218, "udp" }, { "uarps", { NULL }, 219, "tcp" }, { "uarps", { NULL }, 219, "udp" }, { "imap3", { NULL }, 220, "tcp" }, { "imap3", { NULL }, 220, "udp" }, { "fln-spx", { NULL }, 221, "tcp" }, { "fln-spx", { NULL }, 221, "udp" }, { "rsh-spx", { NULL }, 222, "tcp" }, { "rsh-spx", { NULL }, 222, "udp" }, { "cdc", { NULL }, 223, "tcp" }, { "cdc", { NULL }, 223, "udp" }, { "masqdialer", { NULL }, 224, "tcp" }, { "masqdialer", { NULL }, 224, "udp" }, { "direct", { NULL }, 242, "tcp" }, { "direct", { NULL }, 242, "udp" }, { "sur-meas", { NULL }, 243, "tcp" }, { "sur-meas", { NULL }, 243, "udp" }, { "inbusiness", { NULL }, 244, "tcp" }, { "inbusiness", { NULL }, 244, "udp" }, { "link", { NULL }, 245, "tcp" }, { "link", { NULL }, 245, "udp" }, { "dsp3270", { NULL }, 246, "tcp" }, { "dsp3270", { NULL }, 246, "udp" }, { "subntbcst_tftp", { NULL }, 247, "tcp" }, { "subntbcst_tftp", { NULL }, 247, "udp" }, { "bhfhs", { NULL }, 248, "tcp" }, { "bhfhs", { NULL }, 248, "udp" }, { "rap", { NULL }, 256, "tcp" }, { "rap", { NULL }, 256, "udp" }, { "set", { NULL }, 257, "tcp" }, { "set", { NULL }, 257, "udp" }, { "esro-gen", { NULL }, 259, "tcp" }, { "esro-gen", { NULL }, 259, "udp" }, { "openport", { NULL }, 260, "tcp" }, { "openport", { NULL }, 260, "udp" }, { "nsiiops", { NULL }, 261, "tcp" }, { "nsiiops", { NULL }, 261, "udp" }, { "arcisdms", { NULL }, 262, "tcp" }, { "arcisdms", { NULL }, 262, "udp" }, { "hdap", { NULL }, 263, "tcp" }, { "hdap", { NULL }, 263, "udp" }, { "bgmp", { NULL }, 264, "tcp" }, { "bgmp", { NULL }, 264, "udp" }, { "x-bone-ctl", { NULL }, 265, "tcp" }, { "x-bone-ctl", { NULL }, 265, "udp" }, { "sst", { NULL }, 266, "tcp" }, { "sst", { NULL }, 266, "udp" }, { "td-service", { NULL }, 267, "tcp" }, { "td-service", { NULL }, 267, "udp" }, { "td-replica", { NULL }, 268, "tcp" }, { "td-replica", { NULL }, 268, "udp" }, { "manet", { NULL }, 269, "tcp" }, { "manet", { NULL }, 269, "udp" }, { "gist", { NULL }, 270, "udp" }, { "http-mgmt", { NULL }, 280, "tcp" }, { "http-mgmt", { NULL }, 280, "udp" }, { "personal-link", { NULL }, 281, "tcp" }, { "personal-link", { NULL }, 281, "udp" }, { "cableport-ax", { NULL }, 282, "tcp" }, { "cableport-ax", { NULL }, 282, "udp" }, { "rescap", { NULL }, 283, "tcp" }, { "rescap", { NULL }, 283, "udp" }, { "corerjd", { NULL }, 284, "tcp" }, { "corerjd", { NULL }, 284, "udp" }, { "fxp", { NULL }, 286, "tcp" }, { "fxp", { NULL }, 286, "udp" }, { "k-block", { NULL }, 287, "tcp" }, { "k-block", { NULL }, 287, "udp" }, { "novastorbakcup", { NULL }, 308, "tcp" }, { "novastorbakcup", { NULL }, 308, "udp" }, { "entrusttime", { NULL }, 309, "tcp" }, { "entrusttime", { NULL }, 309, "udp" }, { "bhmds", { NULL }, 310, "tcp" }, { "bhmds", { NULL }, 310, "udp" }, { "asip-webadmin", { NULL }, 311, "tcp" }, { "asip-webadmin", { NULL }, 311, "udp" }, { "vslmp", { NULL }, 312, "tcp" }, { "vslmp", { NULL }, 312, "udp" }, { "magenta-logic", { NULL }, 313, "tcp" }, { 
"magenta-logic", { NULL }, 313, "udp" }, { "opalis-robot", { NULL }, 314, "tcp" }, { "opalis-robot", { NULL }, 314, "udp" }, { "dpsi", { NULL }, 315, "tcp" }, { "dpsi", { NULL }, 315, "udp" }, { "decauth", { NULL }, 316, "tcp" }, { "decauth", { NULL }, 316, "udp" }, { "zannet", { NULL }, 317, "tcp" }, { "zannet", { NULL }, 317, "udp" }, { "pkix-timestamp", { NULL }, 318, "tcp" }, { "pkix-timestamp", { NULL }, 318, "udp" }, { "ptp-event", { NULL }, 319, "tcp" }, { "ptp-event", { NULL }, 319, "udp" }, { "ptp-general", { NULL }, 320, "tcp" }, { "ptp-general", { NULL }, 320, "udp" }, { "pip", { NULL }, 321, "tcp" }, { "pip", { NULL }, 321, "udp" }, { "rtsps", { NULL }, 322, "tcp" }, { "rtsps", { NULL }, 322, "udp" }, { "texar", { NULL }, 333, "tcp" }, { "texar", { NULL }, 333, "udp" }, { "pdap", { NULL }, 344, "tcp" }, { "pdap", { NULL }, 344, "udp" }, { "pawserv", { NULL }, 345, "tcp" }, { "pawserv", { NULL }, 345, "udp" }, { "zserv", { NULL }, 346, "tcp" }, { "zserv", { NULL }, 346, "udp" }, { "fatserv", { NULL }, 347, "tcp" }, { "fatserv", { NULL }, 347, "udp" }, { "csi-sgwp", { NULL }, 348, "tcp" }, { "csi-sgwp", { NULL }, 348, "udp" }, { "mftp", { NULL }, 349, "tcp" }, { "mftp", { NULL }, 349, "udp" }, { "matip-type-a", { NULL }, 350, "tcp" }, { "matip-type-a", { NULL }, 350, "udp" }, { "matip-type-b", { NULL }, 351, "tcp" }, { "matip-type-b", { NULL }, 351, "udp" }, { "bhoetty", { NULL }, 351, "tcp" }, { "bhoetty", { NULL }, 351, "udp" }, { "dtag-ste-sb", { NULL }, 352, "tcp" }, { "dtag-ste-sb", { NULL }, 352, "udp" }, { "bhoedap4", { NULL }, 352, "tcp" }, { "bhoedap4", { NULL }, 352, "udp" }, { "ndsauth", { NULL }, 353, "tcp" }, { "ndsauth", { NULL }, 353, "udp" }, { "bh611", { NULL }, 354, "tcp" }, { "bh611", { NULL }, 354, "udp" }, { "datex-asn", { NULL }, 355, "tcp" }, { "datex-asn", { NULL }, 355, "udp" }, { "cloanto-net-1", { NULL }, 356, "tcp" }, { "cloanto-net-1", { NULL }, 356, "udp" }, { "bhevent", { NULL }, 357, "tcp" }, { "bhevent", { NULL }, 357, "udp" }, { "shrinkwrap", { NULL }, 358, "tcp" }, { "shrinkwrap", { NULL }, 358, "udp" }, { "nsrmp", { NULL }, 359, "tcp" }, { "nsrmp", { NULL }, 359, "udp" }, { "scoi2odialog", { NULL }, 360, "tcp" }, { "scoi2odialog", { NULL }, 360, "udp" }, { "semantix", { NULL }, 361, "tcp" }, { "semantix", { NULL }, 361, "udp" }, { "srssend", { NULL }, 362, "tcp" }, { "srssend", { NULL }, 362, "udp" }, { "rsvp_tunnel", { NULL }, 363, "tcp" }, { "rsvp_tunnel", { NULL }, 363, "udp" }, { "aurora-cmgr", { NULL }, 364, "tcp" }, { "aurora-cmgr", { NULL }, 364, "udp" }, { "dtk", { NULL }, 365, "tcp" }, { "dtk", { NULL }, 365, "udp" }, { "odmr", { NULL }, 366, "tcp" }, { "odmr", { NULL }, 366, "udp" }, { "mortgageware", { NULL }, 367, "tcp" }, { "mortgageware", { NULL }, 367, "udp" }, { "qbikgdp", { NULL }, 368, "tcp" }, { "qbikgdp", { NULL }, 368, "udp" }, { "rpc2portmap", { NULL }, 369, "tcp" }, { "rpc2portmap", { NULL }, 369, "udp" }, { "codaauth2", { NULL }, 370, "tcp" }, { "codaauth2", { NULL }, 370, "udp" }, { "clearcase", { NULL }, 371, "tcp" }, { "clearcase", { NULL }, 371, "udp" }, { "ulistproc", { NULL }, 372, "tcp" }, { "ulistproc", { NULL }, 372, "udp" }, { "legent-1", { NULL }, 373, "tcp" }, { "legent-1", { NULL }, 373, "udp" }, { "legent-2", { NULL }, 374, "tcp" }, { "legent-2", { NULL }, 374, "udp" }, { "hassle", { NULL }, 375, "tcp" }, { "hassle", { NULL }, 375, "udp" }, { "nip", { NULL }, 376, "tcp" }, { "nip", { NULL }, 376, "udp" }, { "tnETOS", { NULL }, 377, "tcp" }, { "tnETOS", { NULL }, 377, "udp" }, { "dsETOS", { NULL }, 378, 
"tcp" }, { "dsETOS", { NULL }, 378, "udp" }, { "is99c", { NULL }, 379, "tcp" }, { "is99c", { NULL }, 379, "udp" }, { "is99s", { NULL }, 380, "tcp" }, { "is99s", { NULL }, 380, "udp" }, { "hp-collector", { NULL }, 381, "tcp" }, { "hp-collector", { NULL }, 381, "udp" }, { "hp-managed-node", { NULL }, 382, "tcp" }, { "hp-managed-node", { NULL }, 382, "udp" }, { "hp-alarm-mgr", { NULL }, 383, "tcp" }, { "hp-alarm-mgr", { NULL }, 383, "udp" }, { "arns", { NULL }, 384, "tcp" }, { "arns", { NULL }, 384, "udp" }, { "ibm-app", { NULL }, 385, "tcp" }, { "ibm-app", { NULL }, 385, "udp" }, { "asa", { NULL }, 386, "tcp" }, { "asa", { NULL }, 386, "udp" }, { "aurp", { NULL }, 387, "tcp" }, { "aurp", { NULL }, 387, "udp" }, { "unidata-ldm", { NULL }, 388, "tcp" }, { "unidata-ldm", { NULL }, 388, "udp" }, { "ldap", { NULL }, 389, "tcp" }, { "ldap", { NULL }, 389, "udp" }, { "uis", { NULL }, 390, "tcp" }, { "uis", { NULL }, 390, "udp" }, { "synotics-relay", { NULL }, 391, "tcp" }, { "synotics-relay", { NULL }, 391, "udp" }, { "synotics-broker", { NULL }, 392, "tcp" }, { "synotics-broker", { NULL }, 392, "udp" }, { "meta5", { NULL }, 393, "tcp" }, { "meta5", { NULL }, 393, "udp" }, { "embl-ndt", { NULL }, 394, "tcp" }, { "embl-ndt", { NULL }, 394, "udp" }, { "netcp", { NULL }, 395, "tcp" }, { "netcp", { NULL }, 395, "udp" }, { "netware-ip", { NULL }, 396, "tcp" }, { "netware-ip", { NULL }, 396, "udp" }, { "mptn", { NULL }, 397, "tcp" }, { "mptn", { NULL }, 397, "udp" }, { "kryptolan", { NULL }, 398, "tcp" }, { "kryptolan", { NULL }, 398, "udp" }, { "iso-tsap-c2", { NULL }, 399, "tcp" }, { "iso-tsap-c2", { NULL }, 399, "udp" }, { "osb-sd", { NULL }, 400, "tcp" }, { "osb-sd", { NULL }, 400, "udp" }, { "ups", { NULL }, 401, "tcp" }, { "ups", { NULL }, 401, "udp" }, { "genie", { NULL }, 402, "tcp" }, { "genie", { NULL }, 402, "udp" }, { "decap", { NULL }, 403, "tcp" }, { "decap", { NULL }, 403, "udp" }, { "nced", { NULL }, 404, "tcp" }, { "nced", { NULL }, 404, "udp" }, { "ncld", { NULL }, 405, "tcp" }, { "ncld", { NULL }, 405, "udp" }, { "imsp", { NULL }, 406, "tcp" }, { "imsp", { NULL }, 406, "udp" }, { "timbuktu", { NULL }, 407, "tcp" }, { "timbuktu", { NULL }, 407, "udp" }, { "prm-sm", { NULL }, 408, "tcp" }, { "prm-sm", { NULL }, 408, "udp" }, { "prm-nm", { NULL }, 409, "tcp" }, { "prm-nm", { NULL }, 409, "udp" }, { "decladebug", { NULL }, 410, "tcp" }, { "decladebug", { NULL }, 410, "udp" }, { "rmt", { NULL }, 411, "tcp" }, { "rmt", { NULL }, 411, "udp" }, { "synoptics-trap", { NULL }, 412, "tcp" }, { "synoptics-trap", { NULL }, 412, "udp" }, { "smsp", { NULL }, 413, "tcp" }, { "smsp", { NULL }, 413, "udp" }, { "infoseek", { NULL }, 414, "tcp" }, { "infoseek", { NULL }, 414, "udp" }, { "bnet", { NULL }, 415, "tcp" }, { "bnet", { NULL }, 415, "udp" }, { "silverplatter", { NULL }, 416, "tcp" }, { "silverplatter", { NULL }, 416, "udp" }, { "onmux", { NULL }, 417, "tcp" }, { "onmux", { NULL }, 417, "udp" }, { "hyper-g", { NULL }, 418, "tcp" }, { "hyper-g", { NULL }, 418, "udp" }, { "ariel1", { NULL }, 419, "tcp" }, { "ariel1", { NULL }, 419, "udp" }, { "smpte", { NULL }, 420, "tcp" }, { "smpte", { NULL }, 420, "udp" }, { "ariel2", { NULL }, 421, "tcp" }, { "ariel2", { NULL }, 421, "udp" }, { "ariel3", { NULL }, 422, "tcp" }, { "ariel3", { NULL }, 422, "udp" }, { "opc-job-start", { NULL }, 423, "tcp" }, { "opc-job-start", { NULL }, 423, "udp" }, { "opc-job-track", { NULL }, 424, "tcp" }, { "opc-job-track", { NULL }, 424, "udp" }, { "icad-el", { NULL }, 425, "tcp" }, { "icad-el", { NULL }, 425, "udp" }, { 
"smartsdp", { NULL }, 426, "tcp" }, { "smartsdp", { NULL }, 426, "udp" }, { "svrloc", { NULL }, 427, "tcp" }, { "svrloc", { NULL }, 427, "udp" }, { "ocs_cmu", { NULL }, 428, "tcp" }, { "ocs_cmu", { NULL }, 428, "udp" }, { "ocs_amu", { NULL }, 429, "tcp" }, { "ocs_amu", { NULL }, 429, "udp" }, { "utmpsd", { NULL }, 430, "tcp" }, { "utmpsd", { NULL }, 430, "udp" }, { "utmpcd", { NULL }, 431, "tcp" }, { "utmpcd", { NULL }, 431, "udp" }, { "iasd", { NULL }, 432, "tcp" }, { "iasd", { NULL }, 432, "udp" }, { "nnsp", { NULL }, 433, "tcp" }, { "nnsp", { NULL }, 433, "udp" }, { "mobileip-agent", { NULL }, 434, "tcp" }, { "mobileip-agent", { NULL }, 434, "udp" }, { "mobilip-mn", { NULL }, 435, "tcp" }, { "mobilip-mn", { NULL }, 435, "udp" }, { "dna-cml", { NULL }, 436, "tcp" }, { "dna-cml", { NULL }, 436, "udp" }, { "comscm", { NULL }, 437, "tcp" }, { "comscm", { NULL }, 437, "udp" }, { "dsfgw", { NULL }, 438, "tcp" }, { "dsfgw", { NULL }, 438, "udp" }, { "dasp", { NULL }, 439, "tcp" }, { "dasp", { NULL }, 439, "udp" }, { "sgcp", { NULL }, 440, "tcp" }, { "sgcp", { NULL }, 440, "udp" }, { "decvms-sysmgt", { NULL }, 441, "tcp" }, { "decvms-sysmgt", { NULL }, 441, "udp" }, { "cvc_hostd", { NULL }, 442, "tcp" }, { "cvc_hostd", { NULL }, 442, "udp" }, { "https", { NULL }, 443, "tcp" }, { "https", { NULL }, 443, "udp" }, { "https", { NULL }, 443, "sctp" }, { "snpp", { NULL }, 444, "tcp" }, { "snpp", { NULL }, 444, "udp" }, { "microsoft-ds", { NULL }, 445, "tcp" }, { "microsoft-ds", { NULL }, 445, "udp" }, { "ddm-rdb", { NULL }, 446, "tcp" }, { "ddm-rdb", { NULL }, 446, "udp" }, { "ddm-dfm", { NULL }, 447, "tcp" }, { "ddm-dfm", { NULL }, 447, "udp" }, { "ddm-ssl", { NULL }, 448, "tcp" }, { "ddm-ssl", { NULL }, 448, "udp" }, { "as-servermap", { NULL }, 449, "tcp" }, { "as-servermap", { NULL }, 449, "udp" }, { "tserver", { NULL }, 450, "tcp" }, { "tserver", { NULL }, 450, "udp" }, { "sfs-smp-net", { NULL }, 451, "tcp" }, { "sfs-smp-net", { NULL }, 451, "udp" }, { "sfs-config", { NULL }, 452, "tcp" }, { "sfs-config", { NULL }, 452, "udp" }, { "creativeserver", { NULL }, 453, "tcp" }, { "creativeserver", { NULL }, 453, "udp" }, { "contentserver", { NULL }, 454, "tcp" }, { "contentserver", { NULL }, 454, "udp" }, { "creativepartnr", { NULL }, 455, "tcp" }, { "creativepartnr", { NULL }, 455, "udp" }, { "macon-tcp", { NULL }, 456, "tcp" }, { "macon-udp", { NULL }, 456, "udp" }, { "scohelp", { NULL }, 457, "tcp" }, { "scohelp", { NULL }, 457, "udp" }, { "appleqtc", { NULL }, 458, "tcp" }, { "appleqtc", { NULL }, 458, "udp" }, { "ampr-rcmd", { NULL }, 459, "tcp" }, { "ampr-rcmd", { NULL }, 459, "udp" }, { "skronk", { NULL }, 460, "tcp" }, { "skronk", { NULL }, 460, "udp" }, { "datasurfsrv", { NULL }, 461, "tcp" }, { "datasurfsrv", { NULL }, 461, "udp" }, { "datasurfsrvsec", { NULL }, 462, "tcp" }, { "datasurfsrvsec", { NULL }, 462, "udp" }, { "alpes", { NULL }, 463, "tcp" }, { "alpes", { NULL }, 463, "udp" }, { "kpasswd", { NULL }, 464, "tcp" }, { "kpasswd", { NULL }, 464, "udp" }, { "urd", { NULL }, 465, "tcp" }, { "igmpv3lite", { NULL }, 465, "udp" }, { "digital-vrc", { NULL }, 466, "tcp" }, { "digital-vrc", { NULL }, 466, "udp" }, { "mylex-mapd", { NULL }, 467, "tcp" }, { "mylex-mapd", { NULL }, 467, "udp" }, { "photuris", { NULL }, 468, "tcp" }, { "photuris", { NULL }, 468, "udp" }, { "rcp", { NULL }, 469, "tcp" }, { "rcp", { NULL }, 469, "udp" }, { "scx-proxy", { NULL }, 470, "tcp" }, { "scx-proxy", { NULL }, 470, "udp" }, { "mondex", { NULL }, 471, "tcp" }, { "mondex", { NULL }, 471, "udp" }, { "ljk-login", 
{ NULL }, 472, "tcp" }, { "ljk-login", { NULL }, 472, "udp" }, { "hybrid-pop", { NULL }, 473, "tcp" }, { "hybrid-pop", { NULL }, 473, "udp" }, { "tn-tl-w1", { NULL }, 474, "tcp" }, { "tn-tl-w2", { NULL }, 474, "udp" }, { "tcpnethaspsrv", { NULL }, 475, "tcp" }, { "tcpnethaspsrv", { NULL }, 475, "udp" }, { "tn-tl-fd1", { NULL }, 476, "tcp" }, { "tn-tl-fd1", { NULL }, 476, "udp" }, { "ss7ns", { NULL }, 477, "tcp" }, { "ss7ns", { NULL }, 477, "udp" }, { "spsc", { NULL }, 478, "tcp" }, { "spsc", { NULL }, 478, "udp" }, { "iafserver", { NULL }, 479, "tcp" }, { "iafserver", { NULL }, 479, "udp" }, { "iafdbase", { NULL }, 480, "tcp" }, { "iafdbase", { NULL }, 480, "udp" }, { "ph", { NULL }, 481, "tcp" }, { "ph", { NULL }, 481, "udp" }, { "bgs-nsi", { NULL }, 482, "tcp" }, { "bgs-nsi", { NULL }, 482, "udp" }, { "ulpnet", { NULL }, 483, "tcp" }, { "ulpnet", { NULL }, 483, "udp" }, { "integra-sme", { NULL }, 484, "tcp" }, { "integra-sme", { NULL }, 484, "udp" }, { "powerburst", { NULL }, 485, "tcp" }, { "powerburst", { NULL }, 485, "udp" }, { "avian", { NULL }, 486, "tcp" }, { "avian", { NULL }, 486, "udp" }, { "saft", { NULL }, 487, "tcp" }, { "saft", { NULL }, 487, "udp" }, { "gss-http", { NULL }, 488, "tcp" }, { "gss-http", { NULL }, 488, "udp" }, { "nest-protocol", { NULL }, 489, "tcp" }, { "nest-protocol", { NULL }, 489, "udp" }, { "micom-pfs", { NULL }, 490, "tcp" }, { "micom-pfs", { NULL }, 490, "udp" }, { "go-login", { NULL }, 491, "tcp" }, { "go-login", { NULL }, 491, "udp" }, { "ticf-1", { NULL }, 492, "tcp" }, { "ticf-1", { NULL }, 492, "udp" }, { "ticf-2", { NULL }, 493, "tcp" }, { "ticf-2", { NULL }, 493, "udp" }, { "pov-ray", { NULL }, 494, "tcp" }, { "pov-ray", { NULL }, 494, "udp" }, { "intecourier", { NULL }, 495, "tcp" }, { "intecourier", { NULL }, 495, "udp" }, { "pim-rp-disc", { NULL }, 496, "tcp" }, { "pim-rp-disc", { NULL }, 496, "udp" }, { "dantz", { NULL }, 497, "tcp" }, { "dantz", { NULL }, 497, "udp" }, { "siam", { NULL }, 498, "tcp" }, { "siam", { NULL }, 498, "udp" }, { "iso-ill", { NULL }, 499, "tcp" }, { "iso-ill", { NULL }, 499, "udp" }, { "isakmp", { NULL }, 500, "tcp" }, { "isakmp", { NULL }, 500, "udp" }, { "stmf", { NULL }, 501, "tcp" }, { "stmf", { NULL }, 501, "udp" }, { "asa-appl-proto", { NULL }, 502, "tcp" }, { "asa-appl-proto", { NULL }, 502, "udp" }, { "intrinsa", { NULL }, 503, "tcp" }, { "intrinsa", { NULL }, 503, "udp" }, { "citadel", { NULL }, 504, "tcp" }, { "citadel", { NULL }, 504, "udp" }, { "mailbox-lm", { NULL }, 505, "tcp" }, { "mailbox-lm", { NULL }, 505, "udp" }, { "ohimsrv", { NULL }, 506, "tcp" }, { "ohimsrv", { NULL }, 506, "udp" }, { "crs", { NULL }, 507, "tcp" }, { "crs", { NULL }, 507, "udp" }, { "xvttp", { NULL }, 508, "tcp" }, { "xvttp", { NULL }, 508, "udp" }, { "snare", { NULL }, 509, "tcp" }, { "snare", { NULL }, 509, "udp" }, { "fcp", { NULL }, 510, "tcp" }, { "fcp", { NULL }, 510, "udp" }, { "passgo", { NULL }, 511, "tcp" }, { "passgo", { NULL }, 511, "udp" }, { "exec", { NULL }, 512, "tcp" }, { "comsat", { NULL }, 512, "udp" }, { "biff", { NULL }, 512, "udp" }, { "login", { NULL }, 513, "tcp" }, { "who", { NULL }, 513, "udp" }, { "shell", { NULL }, 514, "tcp" }, { "syslog", { NULL }, 514, "udp" }, { "printer", { NULL }, 515, "tcp" }, { "printer", { NULL }, 515, "udp" }, { "videotex", { NULL }, 516, "tcp" }, { "videotex", { NULL }, 516, "udp" }, { "talk", { NULL }, 517, "tcp" }, { "talk", { NULL }, 517, "udp" }, { "ntalk", { NULL }, 518, "tcp" }, { "ntalk", { NULL }, 518, "udp" }, { "utime", { NULL }, 519, "tcp" }, { "utime", { NULL 
}, 519, "udp" }, { "efs", { NULL }, 520, "tcp" }, { "router", { NULL }, 520, "udp" }, { "ripng", { NULL }, 521, "tcp" }, { "ripng", { NULL }, 521, "udp" }, { "ulp", { NULL }, 522, "tcp" }, { "ulp", { NULL }, 522, "udp" }, { "ibm-db2", { NULL }, 523, "tcp" }, { "ibm-db2", { NULL }, 523, "udp" }, { "ncp", { NULL }, 524, "tcp" }, { "ncp", { NULL }, 524, "udp" }, { "timed", { NULL }, 525, "tcp" }, { "timed", { NULL }, 525, "udp" }, { "tempo", { NULL }, 526, "tcp" }, { "tempo", { NULL }, 526, "udp" }, { "stx", { NULL }, 527, "tcp" }, { "stx", { NULL }, 527, "udp" }, { "custix", { NULL }, 528, "tcp" }, { "custix", { NULL }, 528, "udp" }, { "irc-serv", { NULL }, 529, "tcp" }, { "irc-serv", { NULL }, 529, "udp" }, { "courier", { NULL }, 530, "tcp" }, { "courier", { NULL }, 530, "udp" }, { "conference", { NULL }, 531, "tcp" }, { "conference", { NULL }, 531, "udp" }, { "netnews", { NULL }, 532, "tcp" }, { "netnews", { NULL }, 532, "udp" }, { "netwall", { NULL }, 533, "tcp" }, { "netwall", { NULL }, 533, "udp" }, { "windream", { NULL }, 534, "tcp" }, { "windream", { NULL }, 534, "udp" }, { "iiop", { NULL }, 535, "tcp" }, { "iiop", { NULL }, 535, "udp" }, { "opalis-rdv", { NULL }, 536, "tcp" }, { "opalis-rdv", { NULL }, 536, "udp" }, { "nmsp", { NULL }, 537, "tcp" }, { "nmsp", { NULL }, 537, "udp" }, { "gdomap", { NULL }, 538, "tcp" }, { "gdomap", { NULL }, 538, "udp" }, { "apertus-ldp", { NULL }, 539, "tcp" }, { "apertus-ldp", { NULL }, 539, "udp" }, { "uucp", { NULL }, 540, "tcp" }, { "uucp", { NULL }, 540, "udp" }, { "uucp-rlogin", { NULL }, 541, "tcp" }, { "uucp-rlogin", { NULL }, 541, "udp" }, { "commerce", { NULL }, 542, "tcp" }, { "commerce", { NULL }, 542, "udp" }, { "klogin", { NULL }, 543, "tcp" }, { "klogin", { NULL }, 543, "udp" }, { "kshell", { NULL }, 544, "tcp" }, { "kshell", { NULL }, 544, "udp" }, { "appleqtcsrvr", { NULL }, 545, "tcp" }, { "appleqtcsrvr", { NULL }, 545, "udp" }, { "dhcpv6-client", { NULL }, 546, "tcp" }, { "dhcpv6-client", { NULL }, 546, "udp" }, { "dhcpv6-server", { NULL }, 547, "tcp" }, { "dhcpv6-server", { NULL }, 547, "udp" }, { "afpovertcp", { NULL }, 548, "tcp" }, { "afpovertcp", { NULL }, 548, "udp" }, { "idfp", { NULL }, 549, "tcp" }, { "idfp", { NULL }, 549, "udp" }, { "new-rwho", { NULL }, 550, "tcp" }, { "new-rwho", { NULL }, 550, "udp" }, { "cybercash", { NULL }, 551, "tcp" }, { "cybercash", { NULL }, 551, "udp" }, { "devshr-nts", { NULL }, 552, "tcp" }, { "devshr-nts", { NULL }, 552, "udp" }, { "pirp", { NULL }, 553, "tcp" }, { "pirp", { NULL }, 553, "udp" }, { "rtsp", { NULL }, 554, "tcp" }, { "rtsp", { NULL }, 554, "udp" }, { "dsf", { NULL }, 555, "tcp" }, { "dsf", { NULL }, 555, "udp" }, { "remotefs", { NULL }, 556, "tcp" }, { "remotefs", { NULL }, 556, "udp" }, { "openvms-sysipc", { NULL }, 557, "tcp" }, { "openvms-sysipc", { NULL }, 557, "udp" }, { "sdnskmp", { NULL }, 558, "tcp" }, { "sdnskmp", { NULL }, 558, "udp" }, { "teedtap", { NULL }, 559, "tcp" }, { "teedtap", { NULL }, 559, "udp" }, { "rmonitor", { NULL }, 560, "tcp" }, { "rmonitor", { NULL }, 560, "udp" }, { "monitor", { NULL }, 561, "tcp" }, { "monitor", { NULL }, 561, "udp" }, { "chshell", { NULL }, 562, "tcp" }, { "chshell", { NULL }, 562, "udp" }, { "nntps", { NULL }, 563, "tcp" }, { "nntps", { NULL }, 563, "udp" }, { "9pfs", { NULL }, 564, "tcp" }, { "9pfs", { NULL }, 564, "udp" }, { "whoami", { NULL }, 565, "tcp" }, { "whoami", { NULL }, 565, "udp" }, { "streettalk", { NULL }, 566, "tcp" }, { "streettalk", { NULL }, 566, "udp" }, { "banyan-rpc", { NULL }, 567, "tcp" }, { 
"banyan-rpc", { NULL }, 567, "udp" }, { "ms-shuttle", { NULL }, 568, "tcp" }, { "ms-shuttle", { NULL }, 568, "udp" }, { "ms-rome", { NULL }, 569, "tcp" }, { "ms-rome", { NULL }, 569, "udp" }, { "meter", { NULL }, 570, "tcp" }, { "meter", { NULL }, 570, "udp" }, { "meter", { NULL }, 571, "tcp" }, { "meter", { NULL }, 571, "udp" }, { "sonar", { NULL }, 572, "tcp" }, { "sonar", { NULL }, 572, "udp" }, { "banyan-vip", { NULL }, 573, "tcp" }, { "banyan-vip", { NULL }, 573, "udp" }, { "ftp-agent", { NULL }, 574, "tcp" }, { "ftp-agent", { NULL }, 574, "udp" }, { "vemmi", { NULL }, 575, "tcp" }, { "vemmi", { NULL }, 575, "udp" }, { "ipcd", { NULL }, 576, "tcp" }, { "ipcd", { NULL }, 576, "udp" }, { "vnas", { NULL }, 577, "tcp" }, { "vnas", { NULL }, 577, "udp" }, { "ipdd", { NULL }, 578, "tcp" }, { "ipdd", { NULL }, 578, "udp" }, { "decbsrv", { NULL }, 579, "tcp" }, { "decbsrv", { NULL }, 579, "udp" }, { "sntp-heartbeat", { NULL }, 580, "tcp" }, { "sntp-heartbeat", { NULL }, 580, "udp" }, { "bdp", { NULL }, 581, "tcp" }, { "bdp", { NULL }, 581, "udp" }, { "scc-security", { NULL }, 582, "tcp" }, { "scc-security", { NULL }, 582, "udp" }, { "philips-vc", { NULL }, 583, "tcp" }, { "philips-vc", { NULL }, 583, "udp" }, { "keyserver", { NULL }, 584, "tcp" }, { "keyserver", { NULL }, 584, "udp" }, { "password-chg", { NULL }, 586, "tcp" }, { "password-chg", { NULL }, 586, "udp" }, { "submission", { NULL }, 587, "tcp" }, { "submission", { NULL }, 587, "udp" }, { "cal", { NULL }, 588, "tcp" }, { "cal", { NULL }, 588, "udp" }, { "eyelink", { NULL }, 589, "tcp" }, { "eyelink", { NULL }, 589, "udp" }, { "tns-cml", { NULL }, 590, "tcp" }, { "tns-cml", { NULL }, 590, "udp" }, { "http-alt", { NULL }, 591, "tcp" }, { "http-alt", { NULL }, 591, "udp" }, { "eudora-set", { NULL }, 592, "tcp" }, { "eudora-set", { NULL }, 592, "udp" }, { "http-rpc-epmap", { NULL }, 593, "tcp" }, { "http-rpc-epmap", { NULL }, 593, "udp" }, { "tpip", { NULL }, 594, "tcp" }, { "tpip", { NULL }, 594, "udp" }, { "cab-protocol", { NULL }, 595, "tcp" }, { "cab-protocol", { NULL }, 595, "udp" }, { "smsd", { NULL }, 596, "tcp" }, { "smsd", { NULL }, 596, "udp" }, { "ptcnameservice", { NULL }, 597, "tcp" }, { "ptcnameservice", { NULL }, 597, "udp" }, { "sco-websrvrmg3", { NULL }, 598, "tcp" }, { "sco-websrvrmg3", { NULL }, 598, "udp" }, { "acp", { NULL }, 599, "tcp" }, { "acp", { NULL }, 599, "udp" }, { "ipcserver", { NULL }, 600, "tcp" }, { "ipcserver", { NULL }, 600, "udp" }, { "syslog-conn", { NULL }, 601, "tcp" }, { "syslog-conn", { NULL }, 601, "udp" }, { "xmlrpc-beep", { NULL }, 602, "tcp" }, { "xmlrpc-beep", { NULL }, 602, "udp" }, { "idxp", { NULL }, 603, "tcp" }, { "idxp", { NULL }, 603, "udp" }, { "tunnel", { NULL }, 604, "tcp" }, { "tunnel", { NULL }, 604, "udp" }, { "soap-beep", { NULL }, 605, "tcp" }, { "soap-beep", { NULL }, 605, "udp" }, { "urm", { NULL }, 606, "tcp" }, { "urm", { NULL }, 606, "udp" }, { "nqs", { NULL }, 607, "tcp" }, { "nqs", { NULL }, 607, "udp" }, { "sift-uft", { NULL }, 608, "tcp" }, { "sift-uft", { NULL }, 608, "udp" }, { "npmp-trap", { NULL }, 609, "tcp" }, { "npmp-trap", { NULL }, 609, "udp" }, { "npmp-local", { NULL }, 610, "tcp" }, { "npmp-local", { NULL }, 610, "udp" }, { "npmp-gui", { NULL }, 611, "tcp" }, { "npmp-gui", { NULL }, 611, "udp" }, { "hmmp-ind", { NULL }, 612, "tcp" }, { "hmmp-ind", { NULL }, 612, "udp" }, { "hmmp-op", { NULL }, 613, "tcp" }, { "hmmp-op", { NULL }, 613, "udp" }, { "sshell", { NULL }, 614, "tcp" }, { "sshell", { NULL }, 614, "udp" }, { "sco-inetmgr", { NULL }, 615, "tcp" }, { 
"sco-inetmgr", { NULL }, 615, "udp" }, { "sco-sysmgr", { NULL }, 616, "tcp" }, { "sco-sysmgr", { NULL }, 616, "udp" }, { "sco-dtmgr", { NULL }, 617, "tcp" }, { "sco-dtmgr", { NULL }, 617, "udp" }, { "dei-icda", { NULL }, 618, "tcp" }, { "dei-icda", { NULL }, 618, "udp" }, { "compaq-evm", { NULL }, 619, "tcp" }, { "compaq-evm", { NULL }, 619, "udp" }, { "sco-websrvrmgr", { NULL }, 620, "tcp" }, { "sco-websrvrmgr", { NULL }, 620, "udp" }, { "escp-ip", { NULL }, 621, "tcp" }, { "escp-ip", { NULL }, 621, "udp" }, { "collaborator", { NULL }, 622, "tcp" }, { "collaborator", { NULL }, 622, "udp" }, { "oob-ws-http", { NULL }, 623, "tcp" }, { "asf-rmcp", { NULL }, 623, "udp" }, { "cryptoadmin", { NULL }, 624, "tcp" }, { "cryptoadmin", { NULL }, 624, "udp" }, { "dec_dlm", { NULL }, 625, "tcp" }, { "dec_dlm", { NULL }, 625, "udp" }, { "asia", { NULL }, 626, "tcp" }, { "asia", { NULL }, 626, "udp" }, { "passgo-tivoli", { NULL }, 627, "tcp" }, { "passgo-tivoli", { NULL }, 627, "udp" }, { "qmqp", { NULL }, 628, "tcp" }, { "qmqp", { NULL }, 628, "udp" }, { "3com-amp3", { NULL }, 629, "tcp" }, { "3com-amp3", { NULL }, 629, "udp" }, { "rda", { NULL }, 630, "tcp" }, { "rda", { NULL }, 630, "udp" }, { "ipp", { NULL }, 631, "tcp" }, { "ipp", { NULL }, 631, "udp" }, { "bmpp", { NULL }, 632, "tcp" }, { "bmpp", { NULL }, 632, "udp" }, { "servstat", { NULL }, 633, "tcp" }, { "servstat", { NULL }, 633, "udp" }, { "ginad", { NULL }, 634, "tcp" }, { "ginad", { NULL }, 634, "udp" }, { "rlzdbase", { NULL }, 635, "tcp" }, { "rlzdbase", { NULL }, 635, "udp" }, { "ldaps", { NULL }, 636, "tcp" }, { "ldaps", { NULL }, 636, "udp" }, { "lanserver", { NULL }, 637, "tcp" }, { "lanserver", { NULL }, 637, "udp" }, { "mcns-sec", { NULL }, 638, "tcp" }, { "mcns-sec", { NULL }, 638, "udp" }, { "msdp", { NULL }, 639, "tcp" }, { "msdp", { NULL }, 639, "udp" }, { "entrust-sps", { NULL }, 640, "tcp" }, { "entrust-sps", { NULL }, 640, "udp" }, { "repcmd", { NULL }, 641, "tcp" }, { "repcmd", { NULL }, 641, "udp" }, { "esro-emsdp", { NULL }, 642, "tcp" }, { "esro-emsdp", { NULL }, 642, "udp" }, { "sanity", { NULL }, 643, "tcp" }, { "sanity", { NULL }, 643, "udp" }, { "dwr", { NULL }, 644, "tcp" }, { "dwr", { NULL }, 644, "udp" }, { "pssc", { NULL }, 645, "tcp" }, { "pssc", { NULL }, 645, "udp" }, { "ldp", { NULL }, 646, "tcp" }, { "ldp", { NULL }, 646, "udp" }, { "dhcp-failover", { NULL }, 647, "tcp" }, { "dhcp-failover", { NULL }, 647, "udp" }, { "rrp", { NULL }, 648, "tcp" }, { "rrp", { NULL }, 648, "udp" }, { "cadview-3d", { NULL }, 649, "tcp" }, { "cadview-3d", { NULL }, 649, "udp" }, { "obex", { NULL }, 650, "tcp" }, { "obex", { NULL }, 650, "udp" }, { "ieee-mms", { NULL }, 651, "tcp" }, { "ieee-mms", { NULL }, 651, "udp" }, { "hello-port", { NULL }, 652, "tcp" }, { "hello-port", { NULL }, 652, "udp" }, { "repscmd", { NULL }, 653, "tcp" }, { "repscmd", { NULL }, 653, "udp" }, { "aodv", { NULL }, 654, "tcp" }, { "aodv", { NULL }, 654, "udp" }, { "tinc", { NULL }, 655, "tcp" }, { "tinc", { NULL }, 655, "udp" }, { "spmp", { NULL }, 656, "tcp" }, { "spmp", { NULL }, 656, "udp" }, { "rmc", { NULL }, 657, "tcp" }, { "rmc", { NULL }, 657, "udp" }, { "tenfold", { NULL }, 658, "tcp" }, { "tenfold", { NULL }, 658, "udp" }, { "mac-srvr-admin", { NULL }, 660, "tcp" }, { "mac-srvr-admin", { NULL }, 660, "udp" }, { "hap", { NULL }, 661, "tcp" }, { "hap", { NULL }, 661, "udp" }, { "pftp", { NULL }, 662, "tcp" }, { "pftp", { NULL }, 662, "udp" }, { "purenoise", { NULL }, 663, "tcp" }, { "purenoise", { NULL }, 663, "udp" }, { "oob-ws-https", { NULL 
}, 664, "tcp" }, { "asf-secure-rmcp", { NULL }, 664, "udp" }, { "sun-dr", { NULL }, 665, "tcp" }, { "sun-dr", { NULL }, 665, "udp" }, { "mdqs", { NULL }, 666, "tcp" }, { "mdqs", { NULL }, 666, "udp" }, { "doom", { NULL }, 666, "tcp" }, { "doom", { NULL }, 666, "udp" }, { "disclose", { NULL }, 667, "tcp" }, { "disclose", { NULL }, 667, "udp" }, { "mecomm", { NULL }, 668, "tcp" }, { "mecomm", { NULL }, 668, "udp" }, { "meregister", { NULL }, 669, "tcp" }, { "meregister", { NULL }, 669, "udp" }, { "vacdsm-sws", { NULL }, 670, "tcp" }, { "vacdsm-sws", { NULL }, 670, "udp" }, { "vacdsm-app", { NULL }, 671, "tcp" }, { "vacdsm-app", { NULL }, 671, "udp" }, { "vpps-qua", { NULL }, 672, "tcp" }, { "vpps-qua", { NULL }, 672, "udp" }, { "cimplex", { NULL }, 673, "tcp" }, { "cimplex", { NULL }, 673, "udp" }, { "acap", { NULL }, 674, "tcp" }, { "acap", { NULL }, 674, "udp" }, { "dctp", { NULL }, 675, "tcp" }, { "dctp", { NULL }, 675, "udp" }, { "vpps-via", { NULL }, 676, "tcp" }, { "vpps-via", { NULL }, 676, "udp" }, { "vpp", { NULL }, 677, "tcp" }, { "vpp", { NULL }, 677, "udp" }, { "ggf-ncp", { NULL }, 678, "tcp" }, { "ggf-ncp", { NULL }, 678, "udp" }, { "mrm", { NULL }, 679, "tcp" }, { "mrm", { NULL }, 679, "udp" }, { "entrust-aaas", { NULL }, 680, "tcp" }, { "entrust-aaas", { NULL }, 680, "udp" }, { "entrust-aams", { NULL }, 681, "tcp" }, { "entrust-aams", { NULL }, 681, "udp" }, { "xfr", { NULL }, 682, "tcp" }, { "xfr", { NULL }, 682, "udp" }, { "corba-iiop", { NULL }, 683, "tcp" }, { "corba-iiop", { NULL }, 683, "udp" }, { "corba-iiop-ssl", { NULL }, 684, "tcp" }, { "corba-iiop-ssl", { NULL }, 684, "udp" }, { "mdc-portmapper", { NULL }, 685, "tcp" }, { "mdc-portmapper", { NULL }, 685, "udp" }, { "hcp-wismar", { NULL }, 686, "tcp" }, { "hcp-wismar", { NULL }, 686, "udp" }, { "asipregistry", { NULL }, 687, "tcp" }, { "asipregistry", { NULL }, 687, "udp" }, { "realm-rusd", { NULL }, 688, "tcp" }, { "realm-rusd", { NULL }, 688, "udp" }, { "nmap", { NULL }, 689, "tcp" }, { "nmap", { NULL }, 689, "udp" }, { "vatp", { NULL }, 690, "tcp" }, { "vatp", { NULL }, 690, "udp" }, { "msexch-routing", { NULL }, 691, "tcp" }, { "msexch-routing", { NULL }, 691, "udp" }, { "hyperwave-isp", { NULL }, 692, "tcp" }, { "hyperwave-isp", { NULL }, 692, "udp" }, { "connendp", { NULL }, 693, "tcp" }, { "connendp", { NULL }, 693, "udp" }, { "ha-cluster", { NULL }, 694, "tcp" }, { "ha-cluster", { NULL }, 694, "udp" }, { "ieee-mms-ssl", { NULL }, 695, "tcp" }, { "ieee-mms-ssl", { NULL }, 695, "udp" }, { "rushd", { NULL }, 696, "tcp" }, { "rushd", { NULL }, 696, "udp" }, { "uuidgen", { NULL }, 697, "tcp" }, { "uuidgen", { NULL }, 697, "udp" }, { "olsr", { NULL }, 698, "tcp" }, { "olsr", { NULL }, 698, "udp" }, { "accessnetwork", { NULL }, 699, "tcp" }, { "accessnetwork", { NULL }, 699, "udp" }, { "epp", { NULL }, 700, "tcp" }, { "epp", { NULL }, 700, "udp" }, { "lmp", { NULL }, 701, "tcp" }, { "lmp", { NULL }, 701, "udp" }, { "iris-beep", { NULL }, 702, "tcp" }, { "iris-beep", { NULL }, 702, "udp" }, { "elcsd", { NULL }, 704, "tcp" }, { "elcsd", { NULL }, 704, "udp" }, { "agentx", { NULL }, 705, "tcp" }, { "agentx", { NULL }, 705, "udp" }, { "silc", { NULL }, 706, "tcp" }, { "silc", { NULL }, 706, "udp" }, { "borland-dsj", { NULL }, 707, "tcp" }, { "borland-dsj", { NULL }, 707, "udp" }, { "entrust-kmsh", { NULL }, 709, "tcp" }, { "entrust-kmsh", { NULL }, 709, "udp" }, { "entrust-ash", { NULL }, 710, "tcp" }, { "entrust-ash", { NULL }, 710, "udp" }, { "cisco-tdp", { NULL }, 711, "tcp" }, { "cisco-tdp", { NULL }, 711, "udp" }, 
{ "tbrpf", { NULL }, 712, "tcp" }, { "tbrpf", { NULL }, 712, "udp" }, { "iris-xpc", { NULL }, 713, "tcp" }, { "iris-xpc", { NULL }, 713, "udp" }, { "iris-xpcs", { NULL }, 714, "tcp" }, { "iris-xpcs", { NULL }, 714, "udp" }, { "iris-lwz", { NULL }, 715, "tcp" }, { "iris-lwz", { NULL }, 715, "udp" }, { "pana", { NULL }, 716, "udp" }, { "netviewdm1", { NULL }, 729, "tcp" }, { "netviewdm1", { NULL }, 729, "udp" }, { "netviewdm2", { NULL }, 730, "tcp" }, { "netviewdm2", { NULL }, 730, "udp" }, { "netviewdm3", { NULL }, 731, "tcp" }, { "netviewdm3", { NULL }, 731, "udp" }, { "netgw", { NULL }, 741, "tcp" }, { "netgw", { NULL }, 741, "udp" }, { "netrcs", { NULL }, 742, "tcp" }, { "netrcs", { NULL }, 742, "udp" }, { "flexlm", { NULL }, 744, "tcp" }, { "flexlm", { NULL }, 744, "udp" }, { "fujitsu-dev", { NULL }, 747, "tcp" }, { "fujitsu-dev", { NULL }, 747, "udp" }, { "ris-cm", { NULL }, 748, "tcp" }, { "ris-cm", { NULL }, 748, "udp" }, { "kerberos-adm", { NULL }, 749, "tcp" }, { "kerberos-adm", { NULL }, 749, "udp" }, { "rfile", { NULL }, 750, "tcp" }, { "loadav", { NULL }, 750, "udp" }, { "kerberos-iv", { NULL }, 750, "udp" }, { "pump", { NULL }, 751, "tcp" }, { "pump", { NULL }, 751, "udp" }, { "qrh", { NULL }, 752, "tcp" }, { "qrh", { NULL }, 752, "udp" }, { "rrh", { NULL }, 753, "tcp" }, { "rrh", { NULL }, 753, "udp" }, { "tell", { NULL }, 754, "tcp" }, { "tell", { NULL }, 754, "udp" }, { "nlogin", { NULL }, 758, "tcp" }, { "nlogin", { NULL }, 758, "udp" }, { "con", { NULL }, 759, "tcp" }, { "con", { NULL }, 759, "udp" }, { "ns", { NULL }, 760, "tcp" }, { "ns", { NULL }, 760, "udp" }, { "rxe", { NULL }, 761, "tcp" }, { "rxe", { NULL }, 761, "udp" }, { "quotad", { NULL }, 762, "tcp" }, { "quotad", { NULL }, 762, "udp" }, { "cycleserv", { NULL }, 763, "tcp" }, { "cycleserv", { NULL }, 763, "udp" }, { "omserv", { NULL }, 764, "tcp" }, { "omserv", { NULL }, 764, "udp" }, { "webster", { NULL }, 765, "tcp" }, { "webster", { NULL }, 765, "udp" }, { "phonebook", { NULL }, 767, "tcp" }, { "phonebook", { NULL }, 767, "udp" }, { "vid", { NULL }, 769, "tcp" }, { "vid", { NULL }, 769, "udp" }, { "cadlock", { NULL }, 770, "tcp" }, { "cadlock", { NULL }, 770, "udp" }, { "rtip", { NULL }, 771, "tcp" }, { "rtip", { NULL }, 771, "udp" }, { "cycleserv2", { NULL }, 772, "tcp" }, { "cycleserv2", { NULL }, 772, "udp" }, { "submit", { NULL }, 773, "tcp" }, { "notify", { NULL }, 773, "udp" }, { "rpasswd", { NULL }, 774, "tcp" }, { "acmaint_dbd", { NULL }, 774, "udp" }, { "entomb", { NULL }, 775, "tcp" }, { "acmaint_transd", { NULL }, 775, "udp" }, { "wpages", { NULL }, 776, "tcp" }, { "wpages", { NULL }, 776, "udp" }, { "multiling-http", { NULL }, 777, "tcp" }, { "multiling-http", { NULL }, 777, "udp" }, { "wpgs", { NULL }, 780, "tcp" }, { "wpgs", { NULL }, 780, "udp" }, { "mdbs_daemon", { NULL }, 800, "tcp" }, { "mdbs_daemon", { NULL }, 800, "udp" }, { "device", { NULL }, 801, "tcp" }, { "device", { NULL }, 801, "udp" }, { "fcp-udp", { NULL }, 810, "tcp" }, { "fcp-udp", { NULL }, 810, "udp" }, { "itm-mcell-s", { NULL }, 828, "tcp" }, { "itm-mcell-s", { NULL }, 828, "udp" }, { "pkix-3-ca-ra", { NULL }, 829, "tcp" }, { "pkix-3-ca-ra", { NULL }, 829, "udp" }, { "netconf-ssh", { NULL }, 830, "tcp" }, { "netconf-ssh", { NULL }, 830, "udp" }, { "netconf-beep", { NULL }, 831, "tcp" }, { "netconf-beep", { NULL }, 831, "udp" }, { "netconfsoaphttp", { NULL }, 832, "tcp" }, { "netconfsoaphttp", { NULL }, 832, "udp" }, { "netconfsoapbeep", { NULL }, 833, "tcp" }, { "netconfsoapbeep", { NULL }, 833, "udp" }, { "dhcp-failover2", 
{ NULL }, 847, "tcp" }, { "dhcp-failover2", { NULL }, 847, "udp" }, { "gdoi", { NULL }, 848, "tcp" }, { "gdoi", { NULL }, 848, "udp" }, { "iscsi", { NULL }, 860, "tcp" }, { "iscsi", { NULL }, 860, "udp" }, { "owamp-control", { NULL }, 861, "tcp" }, { "owamp-control", { NULL }, 861, "udp" }, { "twamp-control", { NULL }, 862, "tcp" }, { "twamp-control", { NULL }, 862, "udp" }, { "rsync", { NULL }, 873, "tcp" }, { "rsync", { NULL }, 873, "udp" }, { "iclcnet-locate", { NULL }, 886, "tcp" }, { "iclcnet-locate", { NULL }, 886, "udp" }, { "iclcnet_svinfo", { NULL }, 887, "tcp" }, { "iclcnet_svinfo", { NULL }, 887, "udp" }, { "accessbuilder", { NULL }, 888, "tcp" }, { "accessbuilder", { NULL }, 888, "udp" }, { "cddbp", { NULL }, 888, "tcp" }, { "omginitialrefs", { NULL }, 900, "tcp" }, { "omginitialrefs", { NULL }, 900, "udp" }, { "smpnameres", { NULL }, 901, "tcp" }, { "smpnameres", { NULL }, 901, "udp" }, { "ideafarm-door", { NULL }, 902, "tcp" }, { "ideafarm-door", { NULL }, 902, "udp" }, { "ideafarm-panic", { NULL }, 903, "tcp" }, { "ideafarm-panic", { NULL }, 903, "udp" }, { "kink", { NULL }, 910, "tcp" }, { "kink", { NULL }, 910, "udp" }, { "xact-backup", { NULL }, 911, "tcp" }, { "xact-backup", { NULL }, 911, "udp" }, { "apex-mesh", { NULL }, 912, "tcp" }, { "apex-mesh", { NULL }, 912, "udp" }, { "apex-edge", { NULL }, 913, "tcp" }, { "apex-edge", { NULL }, 913, "udp" }, { "ftps-data", { NULL }, 989, "tcp" }, { "ftps-data", { NULL }, 989, "udp" }, { "ftps", { NULL }, 990, "tcp" }, { "ftps", { NULL }, 990, "udp" }, { "nas", { NULL }, 991, "tcp" }, { "nas", { NULL }, 991, "udp" }, { "telnets", { NULL }, 992, "tcp" }, { "telnets", { NULL }, 992, "udp" }, { "imaps", { NULL }, 993, "tcp" }, { "imaps", { NULL }, 993, "udp" }, { "ircs", { NULL }, 994, "tcp" }, { "ircs", { NULL }, 994, "udp" }, { "pop3s", { NULL }, 995, "tcp" }, { "pop3s", { NULL }, 995, "udp" }, { "vsinet", { NULL }, 996, "tcp" }, { "vsinet", { NULL }, 996, "udp" }, { "maitrd", { NULL }, 997, "tcp" }, { "maitrd", { NULL }, 997, "udp" }, { "busboy", { NULL }, 998, "tcp" }, { "puparp", { NULL }, 998, "udp" }, { "garcon", { NULL }, 999, "tcp" }, { "applix", { NULL }, 999, "udp" }, { "puprouter", { NULL }, 999, "tcp" }, { "puprouter", { NULL }, 999, "udp" }, { "cadlock2", { NULL }, 1000, "tcp" }, { "cadlock2", { NULL }, 1000, "udp" }, { "surf", { NULL }, 1010, "tcp" }, { "surf", { NULL }, 1010, "udp" }, { "exp1", { NULL }, 1021, "tcp" }, { "exp1", { NULL }, 1021, "udp" }, { "exp2", { NULL }, 1022, "tcp" }, { "exp2", { NULL }, 1022, "udp" }, # endif /* USE_IANA_WELL_KNOWN_PORTS */ # ifdef USE_IANA_REGISTERED_PORTS { "blackjack", { NULL }, 1025, "tcp" }, { "blackjack", { NULL }, 1025, "udp" }, { "cap", { NULL }, 1026, "tcp" }, { "cap", { NULL }, 1026, "udp" }, { "solid-mux", { NULL }, 1029, "tcp" }, { "solid-mux", { NULL }, 1029, "udp" }, { "iad1", { NULL }, 1030, "tcp" }, { "iad1", { NULL }, 1030, "udp" }, { "iad2", { NULL }, 1031, "tcp" }, { "iad2", { NULL }, 1031, "udp" }, { "iad3", { NULL }, 1032, "tcp" }, { "iad3", { NULL }, 1032, "udp" }, { "netinfo-local", { NULL }, 1033, "tcp" }, { "netinfo-local", { NULL }, 1033, "udp" }, { "activesync", { NULL }, 1034, "tcp" }, { "activesync", { NULL }, 1034, "udp" }, { "mxxrlogin", { NULL }, 1035, "tcp" }, { "mxxrlogin", { NULL }, 1035, "udp" }, { "nsstp", { NULL }, 1036, "tcp" }, { "nsstp", { NULL }, 1036, "udp" }, { "ams", { NULL }, 1037, "tcp" }, { "ams", { NULL }, 1037, "udp" }, { "mtqp", { NULL }, 1038, "tcp" }, { "mtqp", { NULL }, 1038, "udp" }, { "sbl", { NULL }, 1039, "tcp" }, { 
"sbl", { NULL }, 1039, "udp" }, { "netarx", { NULL }, 1040, "tcp" }, { "netarx", { NULL }, 1040, "udp" }, { "danf-ak2", { NULL }, 1041, "tcp" }, { "danf-ak2", { NULL }, 1041, "udp" }, { "afrog", { NULL }, 1042, "tcp" }, { "afrog", { NULL }, 1042, "udp" }, { "boinc-client", { NULL }, 1043, "tcp" }, { "boinc-client", { NULL }, 1043, "udp" }, { "dcutility", { NULL }, 1044, "tcp" }, { "dcutility", { NULL }, 1044, "udp" }, { "fpitp", { NULL }, 1045, "tcp" }, { "fpitp", { NULL }, 1045, "udp" }, { "wfremotertm", { NULL }, 1046, "tcp" }, { "wfremotertm", { NULL }, 1046, "udp" }, { "neod1", { NULL }, 1047, "tcp" }, { "neod1", { NULL }, 1047, "udp" }, { "neod2", { NULL }, 1048, "tcp" }, { "neod2", { NULL }, 1048, "udp" }, { "td-postman", { NULL }, 1049, "tcp" }, { "td-postman", { NULL }, 1049, "udp" }, { "cma", { NULL }, 1050, "tcp" }, { "cma", { NULL }, 1050, "udp" }, { "optima-vnet", { NULL }, 1051, "tcp" }, { "optima-vnet", { NULL }, 1051, "udp" }, { "ddt", { NULL }, 1052, "tcp" }, { "ddt", { NULL }, 1052, "udp" }, { "remote-as", { NULL }, 1053, "tcp" }, { "remote-as", { NULL }, 1053, "udp" }, { "brvread", { NULL }, 1054, "tcp" }, { "brvread", { NULL }, 1054, "udp" }, { "ansyslmd", { NULL }, 1055, "tcp" }, { "ansyslmd", { NULL }, 1055, "udp" }, { "vfo", { NULL }, 1056, "tcp" }, { "vfo", { NULL }, 1056, "udp" }, { "startron", { NULL }, 1057, "tcp" }, { "startron", { NULL }, 1057, "udp" }, { "nim", { NULL }, 1058, "tcp" }, { "nim", { NULL }, 1058, "udp" }, { "nimreg", { NULL }, 1059, "tcp" }, { "nimreg", { NULL }, 1059, "udp" }, { "polestar", { NULL }, 1060, "tcp" }, { "polestar", { NULL }, 1060, "udp" }, { "kiosk", { NULL }, 1061, "tcp" }, { "kiosk", { NULL }, 1061, "udp" }, { "veracity", { NULL }, 1062, "tcp" }, { "veracity", { NULL }, 1062, "udp" }, { "kyoceranetdev", { NULL }, 1063, "tcp" }, { "kyoceranetdev", { NULL }, 1063, "udp" }, { "jstel", { NULL }, 1064, "tcp" }, { "jstel", { NULL }, 1064, "udp" }, { "syscomlan", { NULL }, 1065, "tcp" }, { "syscomlan", { NULL }, 1065, "udp" }, { "fpo-fns", { NULL }, 1066, "tcp" }, { "fpo-fns", { NULL }, 1066, "udp" }, { "instl_boots", { NULL }, 1067, "tcp" }, { "instl_boots", { NULL }, 1067, "udp" }, { "instl_bootc", { NULL }, 1068, "tcp" }, { "instl_bootc", { NULL }, 1068, "udp" }, { "cognex-insight", { NULL }, 1069, "tcp" }, { "cognex-insight", { NULL }, 1069, "udp" }, { "gmrupdateserv", { NULL }, 1070, "tcp" }, { "gmrupdateserv", { NULL }, 1070, "udp" }, { "bsquare-voip", { NULL }, 1071, "tcp" }, { "bsquare-voip", { NULL }, 1071, "udp" }, { "cardax", { NULL }, 1072, "tcp" }, { "cardax", { NULL }, 1072, "udp" }, { "bridgecontrol", { NULL }, 1073, "tcp" }, { "bridgecontrol", { NULL }, 1073, "udp" }, { "warmspotMgmt", { NULL }, 1074, "tcp" }, { "warmspotMgmt", { NULL }, 1074, "udp" }, { "rdrmshc", { NULL }, 1075, "tcp" }, { "rdrmshc", { NULL }, 1075, "udp" }, { "dab-sti-c", { NULL }, 1076, "tcp" }, { "dab-sti-c", { NULL }, 1076, "udp" }, { "imgames", { NULL }, 1077, "tcp" }, { "imgames", { NULL }, 1077, "udp" }, { "avocent-proxy", { NULL }, 1078, "tcp" }, { "avocent-proxy", { NULL }, 1078, "udp" }, { "asprovatalk", { NULL }, 1079, "tcp" }, { "asprovatalk", { NULL }, 1079, "udp" }, { "socks", { NULL }, 1080, "tcp" }, { "socks", { NULL }, 1080, "udp" }, { "pvuniwien", { NULL }, 1081, "tcp" }, { "pvuniwien", { NULL }, 1081, "udp" }, { "amt-esd-prot", { NULL }, 1082, "tcp" }, { "amt-esd-prot", { NULL }, 1082, "udp" }, { "ansoft-lm-1", { NULL }, 1083, "tcp" }, { "ansoft-lm-1", { NULL }, 1083, "udp" }, { "ansoft-lm-2", { NULL }, 1084, "tcp" }, { "ansoft-lm-2", 
{ NULL }, 1084, "udp" }, { "webobjects", { NULL }, 1085, "tcp" }, { "webobjects", { NULL }, 1085, "udp" }, { "cplscrambler-lg", { NULL }, 1086, "tcp" }, { "cplscrambler-lg", { NULL }, 1086, "udp" }, { "cplscrambler-in", { NULL }, 1087, "tcp" }, { "cplscrambler-in", { NULL }, 1087, "udp" }, { "cplscrambler-al", { NULL }, 1088, "tcp" }, { "cplscrambler-al", { NULL }, 1088, "udp" }, { "ff-annunc", { NULL }, 1089, "tcp" }, { "ff-annunc", { NULL }, 1089, "udp" }, { "ff-fms", { NULL }, 1090, "tcp" }, { "ff-fms", { NULL }, 1090, "udp" }, { "ff-sm", { NULL }, 1091, "tcp" }, { "ff-sm", { NULL }, 1091, "udp" }, { "obrpd", { NULL }, 1092, "tcp" }, { "obrpd", { NULL }, 1092, "udp" }, { "proofd", { NULL }, 1093, "tcp" }, { "proofd", { NULL }, 1093, "udp" }, { "rootd", { NULL }, 1094, "tcp" }, { "rootd", { NULL }, 1094, "udp" }, { "nicelink", { NULL }, 1095, "tcp" }, { "nicelink", { NULL }, 1095, "udp" }, { "cnrprotocol", { NULL }, 1096, "tcp" }, { "cnrprotocol", { NULL }, 1096, "udp" }, { "sunclustermgr", { NULL }, 1097, "tcp" }, { "sunclustermgr", { NULL }, 1097, "udp" }, { "rmiactivation", { NULL }, 1098, "tcp" }, { "rmiactivation", { NULL }, 1098, "udp" }, { "rmiregistry", { NULL }, 1099, "tcp" }, { "rmiregistry", { NULL }, 1099, "udp" }, { "mctp", { NULL }, 1100, "tcp" }, { "mctp", { NULL }, 1100, "udp" }, { "pt2-discover", { NULL }, 1101, "tcp" }, { "pt2-discover", { NULL }, 1101, "udp" }, { "adobeserver-1", { NULL }, 1102, "tcp" }, { "adobeserver-1", { NULL }, 1102, "udp" }, { "adobeserver-2", { NULL }, 1103, "tcp" }, { "adobeserver-2", { NULL }, 1103, "udp" }, { "xrl", { NULL }, 1104, "tcp" }, { "xrl", { NULL }, 1104, "udp" }, { "ftranhc", { NULL }, 1105, "tcp" }, { "ftranhc", { NULL }, 1105, "udp" }, { "isoipsigport-1", { NULL }, 1106, "tcp" }, { "isoipsigport-1", { NULL }, 1106, "udp" }, { "isoipsigport-2", { NULL }, 1107, "tcp" }, { "isoipsigport-2", { NULL }, 1107, "udp" }, { "ratio-adp", { NULL }, 1108, "tcp" }, { "ratio-adp", { NULL }, 1108, "udp" }, { "webadmstart", { NULL }, 1110, "tcp" }, { "nfsd-keepalive", { NULL }, 1110, "udp" }, { "lmsocialserver", { NULL }, 1111, "tcp" }, { "lmsocialserver", { NULL }, 1111, "udp" }, { "icp", { NULL }, 1112, "tcp" }, { "icp", { NULL }, 1112, "udp" }, { "ltp-deepspace", { NULL }, 1113, "tcp" }, { "ltp-deepspace", { NULL }, 1113, "udp" }, { "mini-sql", { NULL }, 1114, "tcp" }, { "mini-sql", { NULL }, 1114, "udp" }, { "ardus-trns", { NULL }, 1115, "tcp" }, { "ardus-trns", { NULL }, 1115, "udp" }, { "ardus-cntl", { NULL }, 1116, "tcp" }, { "ardus-cntl", { NULL }, 1116, "udp" }, { "ardus-mtrns", { NULL }, 1117, "tcp" }, { "ardus-mtrns", { NULL }, 1117, "udp" }, { "sacred", { NULL }, 1118, "tcp" }, { "sacred", { NULL }, 1118, "udp" }, { "bnetgame", { NULL }, 1119, "tcp" }, { "bnetgame", { NULL }, 1119, "udp" }, { "bnetfile", { NULL }, 1120, "tcp" }, { "bnetfile", { NULL }, 1120, "udp" }, { "rmpp", { NULL }, 1121, "tcp" }, { "rmpp", { NULL }, 1121, "udp" }, { "availant-mgr", { NULL }, 1122, "tcp" }, { "availant-mgr", { NULL }, 1122, "udp" }, { "murray", { NULL }, 1123, "tcp" }, { "murray", { NULL }, 1123, "udp" }, { "hpvmmcontrol", { NULL }, 1124, "tcp" }, { "hpvmmcontrol", { NULL }, 1124, "udp" }, { "hpvmmagent", { NULL }, 1125, "tcp" }, { "hpvmmagent", { NULL }, 1125, "udp" }, { "hpvmmdata", { NULL }, 1126, "tcp" }, { "hpvmmdata", { NULL }, 1126, "udp" }, { "kwdb-commn", { NULL }, 1127, "tcp" }, { "kwdb-commn", { NULL }, 1127, "udp" }, { "saphostctrl", { NULL }, 1128, "tcp" }, { "saphostctrl", { NULL }, 1128, "udp" }, { "saphostctrls", { NULL }, 1129, 
"tcp" }, { "saphostctrls", { NULL }, 1129, "udp" }, { "casp", { NULL }, 1130, "tcp" }, { "casp", { NULL }, 1130, "udp" }, { "caspssl", { NULL }, 1131, "tcp" }, { "caspssl", { NULL }, 1131, "udp" }, { "kvm-via-ip", { NULL }, 1132, "tcp" }, { "kvm-via-ip", { NULL }, 1132, "udp" }, { "dfn", { NULL }, 1133, "tcp" }, { "dfn", { NULL }, 1133, "udp" }, { "aplx", { NULL }, 1134, "tcp" }, { "aplx", { NULL }, 1134, "udp" }, { "omnivision", { NULL }, 1135, "tcp" }, { "omnivision", { NULL }, 1135, "udp" }, { "hhb-gateway", { NULL }, 1136, "tcp" }, { "hhb-gateway", { NULL }, 1136, "udp" }, { "trim", { NULL }, 1137, "tcp" }, { "trim", { NULL }, 1137, "udp" }, { "encrypted_admin", { NULL }, 1138, "tcp" }, { "encrypted_admin", { NULL }, 1138, "udp" }, { "evm", { NULL }, 1139, "tcp" }, { "evm", { NULL }, 1139, "udp" }, { "autonoc", { NULL }, 1140, "tcp" }, { "autonoc", { NULL }, 1140, "udp" }, { "mxomss", { NULL }, 1141, "tcp" }, { "mxomss", { NULL }, 1141, "udp" }, { "edtools", { NULL }, 1142, "tcp" }, { "edtools", { NULL }, 1142, "udp" }, { "imyx", { NULL }, 1143, "tcp" }, { "imyx", { NULL }, 1143, "udp" }, { "fuscript", { NULL }, 1144, "tcp" }, { "fuscript", { NULL }, 1144, "udp" }, { "x9-icue", { NULL }, 1145, "tcp" }, { "x9-icue", { NULL }, 1145, "udp" }, { "audit-transfer", { NULL }, 1146, "tcp" }, { "audit-transfer", { NULL }, 1146, "udp" }, { "capioverlan", { NULL }, 1147, "tcp" }, { "capioverlan", { NULL }, 1147, "udp" }, { "elfiq-repl", { NULL }, 1148, "tcp" }, { "elfiq-repl", { NULL }, 1148, "udp" }, { "bvtsonar", { NULL }, 1149, "tcp" }, { "bvtsonar", { NULL }, 1149, "udp" }, { "blaze", { NULL }, 1150, "tcp" }, { "blaze", { NULL }, 1150, "udp" }, { "unizensus", { NULL }, 1151, "tcp" }, { "unizensus", { NULL }, 1151, "udp" }, { "winpoplanmess", { NULL }, 1152, "tcp" }, { "winpoplanmess", { NULL }, 1152, "udp" }, { "c1222-acse", { NULL }, 1153, "tcp" }, { "c1222-acse", { NULL }, 1153, "udp" }, { "resacommunity", { NULL }, 1154, "tcp" }, { "resacommunity", { NULL }, 1154, "udp" }, { "nfa", { NULL }, 1155, "tcp" }, { "nfa", { NULL }, 1155, "udp" }, { "iascontrol-oms", { NULL }, 1156, "tcp" }, { "iascontrol-oms", { NULL }, 1156, "udp" }, { "iascontrol", { NULL }, 1157, "tcp" }, { "iascontrol", { NULL }, 1157, "udp" }, { "dbcontrol-oms", { NULL }, 1158, "tcp" }, { "dbcontrol-oms", { NULL }, 1158, "udp" }, { "oracle-oms", { NULL }, 1159, "tcp" }, { "oracle-oms", { NULL }, 1159, "udp" }, { "olsv", { NULL }, 1160, "tcp" }, { "olsv", { NULL }, 1160, "udp" }, { "health-polling", { NULL }, 1161, "tcp" }, { "health-polling", { NULL }, 1161, "udp" }, { "health-trap", { NULL }, 1162, "tcp" }, { "health-trap", { NULL }, 1162, "udp" }, { "sddp", { NULL }, 1163, "tcp" }, { "sddp", { NULL }, 1163, "udp" }, { "qsm-proxy", { NULL }, 1164, "tcp" }, { "qsm-proxy", { NULL }, 1164, "udp" }, { "qsm-gui", { NULL }, 1165, "tcp" }, { "qsm-gui", { NULL }, 1165, "udp" }, { "qsm-remote", { NULL }, 1166, "tcp" }, { "qsm-remote", { NULL }, 1166, "udp" }, { "cisco-ipsla", { NULL }, 1167, "tcp" }, { "cisco-ipsla", { NULL }, 1167, "udp" }, { "cisco-ipsla", { NULL }, 1167, "sctp" }, { "vchat", { NULL }, 1168, "tcp" }, { "vchat", { NULL }, 1168, "udp" }, { "tripwire", { NULL }, 1169, "tcp" }, { "tripwire", { NULL }, 1169, "udp" }, { "atc-lm", { NULL }, 1170, "tcp" }, { "atc-lm", { NULL }, 1170, "udp" }, { "atc-appserver", { NULL }, 1171, "tcp" }, { "atc-appserver", { NULL }, 1171, "udp" }, { "dnap", { NULL }, 1172, "tcp" }, { "dnap", { NULL }, 1172, "udp" }, { "d-cinema-rrp", { NULL }, 1173, "tcp" }, { "d-cinema-rrp", { NULL }, 
1173, "udp" }, { "fnet-remote-ui", { NULL }, 1174, "tcp" }, { "fnet-remote-ui", { NULL }, 1174, "udp" }, { "dossier", { NULL }, 1175, "tcp" }, { "dossier", { NULL }, 1175, "udp" }, { "indigo-server", { NULL }, 1176, "tcp" }, { "indigo-server", { NULL }, 1176, "udp" }, { "dkmessenger", { NULL }, 1177, "tcp" }, { "dkmessenger", { NULL }, 1177, "udp" }, { "sgi-storman", { NULL }, 1178, "tcp" }, { "sgi-storman", { NULL }, 1178, "udp" }, { "b2n", { NULL }, 1179, "tcp" }, { "b2n", { NULL }, 1179, "udp" }, { "mc-client", { NULL }, 1180, "tcp" }, { "mc-client", { NULL }, 1180, "udp" }, { "3comnetman", { NULL }, 1181, "tcp" }, { "3comnetman", { NULL }, 1181, "udp" }, { "accelenet", { NULL }, 1182, "tcp" }, { "accelenet-data", { NULL }, 1182, "udp" }, { "llsurfup-http", { NULL }, 1183, "tcp" }, { "llsurfup-http", { NULL }, 1183, "udp" }, { "llsurfup-https", { NULL }, 1184, "tcp" }, { "llsurfup-https", { NULL }, 1184, "udp" }, { "catchpole", { NULL }, 1185, "tcp" }, { "catchpole", { NULL }, 1185, "udp" }, { "mysql-cluster", { NULL }, 1186, "tcp" }, { "mysql-cluster", { NULL }, 1186, "udp" }, { "alias", { NULL }, 1187, "tcp" }, { "alias", { NULL }, 1187, "udp" }, { "hp-webadmin", { NULL }, 1188, "tcp" }, { "hp-webadmin", { NULL }, 1188, "udp" }, { "unet", { NULL }, 1189, "tcp" }, { "unet", { NULL }, 1189, "udp" }, { "commlinx-avl", { NULL }, 1190, "tcp" }, { "commlinx-avl", { NULL }, 1190, "udp" }, { "gpfs", { NULL }, 1191, "tcp" }, { "gpfs", { NULL }, 1191, "udp" }, { "caids-sensor", { NULL }, 1192, "tcp" }, { "caids-sensor", { NULL }, 1192, "udp" }, { "fiveacross", { NULL }, 1193, "tcp" }, { "fiveacross", { NULL }, 1193, "udp" }, { "openvpn", { NULL }, 1194, "tcp" }, { "openvpn", { NULL }, 1194, "udp" }, { "rsf-1", { NULL }, 1195, "tcp" }, { "rsf-1", { NULL }, 1195, "udp" }, { "netmagic", { NULL }, 1196, "tcp" }, { "netmagic", { NULL }, 1196, "udp" }, { "carrius-rshell", { NULL }, 1197, "tcp" }, { "carrius-rshell", { NULL }, 1197, "udp" }, { "cajo-discovery", { NULL }, 1198, "tcp" }, { "cajo-discovery", { NULL }, 1198, "udp" }, { "dmidi", { NULL }, 1199, "tcp" }, { "dmidi", { NULL }, 1199, "udp" }, { "scol", { NULL }, 1200, "tcp" }, { "scol", { NULL }, 1200, "udp" }, { "nucleus-sand", { NULL }, 1201, "tcp" }, { "nucleus-sand", { NULL }, 1201, "udp" }, { "caiccipc", { NULL }, 1202, "tcp" }, { "caiccipc", { NULL }, 1202, "udp" }, { "ssslic-mgr", { NULL }, 1203, "tcp" }, { "ssslic-mgr", { NULL }, 1203, "udp" }, { "ssslog-mgr", { NULL }, 1204, "tcp" }, { "ssslog-mgr", { NULL }, 1204, "udp" }, { "accord-mgc", { NULL }, 1205, "tcp" }, { "accord-mgc", { NULL }, 1205, "udp" }, { "anthony-data", { NULL }, 1206, "tcp" }, { "anthony-data", { NULL }, 1206, "udp" }, { "metasage", { NULL }, 1207, "tcp" }, { "metasage", { NULL }, 1207, "udp" }, { "seagull-ais", { NULL }, 1208, "tcp" }, { "seagull-ais", { NULL }, 1208, "udp" }, { "ipcd3", { NULL }, 1209, "tcp" }, { "ipcd3", { NULL }, 1209, "udp" }, { "eoss", { NULL }, 1210, "tcp" }, { "eoss", { NULL }, 1210, "udp" }, { "groove-dpp", { NULL }, 1211, "tcp" }, { "groove-dpp", { NULL }, 1211, "udp" }, { "lupa", { NULL }, 1212, "tcp" }, { "lupa", { NULL }, 1212, "udp" }, { "mpc-lifenet", { NULL }, 1213, "tcp" }, { "mpc-lifenet", { NULL }, 1213, "udp" }, { "kazaa", { NULL }, 1214, "tcp" }, { "kazaa", { NULL }, 1214, "udp" }, { "scanstat-1", { NULL }, 1215, "tcp" }, { "scanstat-1", { NULL }, 1215, "udp" }, { "etebac5", { NULL }, 1216, "tcp" }, { "etebac5", { NULL }, 1216, "udp" }, { "hpss-ndapi", { NULL }, 1217, "tcp" }, { "hpss-ndapi", { NULL }, 1217, "udp" }, { 
"aeroflight-ads", { NULL }, 1218, "tcp" }, { "aeroflight-ads", { NULL }, 1218, "udp" }, { "aeroflight-ret", { NULL }, 1219, "tcp" }, { "aeroflight-ret", { NULL }, 1219, "udp" }, { "qt-serveradmin", { NULL }, 1220, "tcp" }, { "qt-serveradmin", { NULL }, 1220, "udp" }, { "sweetware-apps", { NULL }, 1221, "tcp" }, { "sweetware-apps", { NULL }, 1221, "udp" }, { "nerv", { NULL }, 1222, "tcp" }, { "nerv", { NULL }, 1222, "udp" }, { "tgp", { NULL }, 1223, "tcp" }, { "tgp", { NULL }, 1223, "udp" }, { "vpnz", { NULL }, 1224, "tcp" }, { "vpnz", { NULL }, 1224, "udp" }, { "slinkysearch", { NULL }, 1225, "tcp" }, { "slinkysearch", { NULL }, 1225, "udp" }, { "stgxfws", { NULL }, 1226, "tcp" }, { "stgxfws", { NULL }, 1226, "udp" }, { "dns2go", { NULL }, 1227, "tcp" }, { "dns2go", { NULL }, 1227, "udp" }, { "florence", { NULL }, 1228, "tcp" }, { "florence", { NULL }, 1228, "udp" }, { "zented", { NULL }, 1229, "tcp" }, { "zented", { NULL }, 1229, "udp" }, { "periscope", { NULL }, 1230, "tcp" }, { "periscope", { NULL }, 1230, "udp" }, { "menandmice-lpm", { NULL }, 1231, "tcp" }, { "menandmice-lpm", { NULL }, 1231, "udp" }, { "univ-appserver", { NULL }, 1233, "tcp" }, { "univ-appserver", { NULL }, 1233, "udp" }, { "search-agent", { NULL }, 1234, "tcp" }, { "search-agent", { NULL }, 1234, "udp" }, { "mosaicsyssvc1", { NULL }, 1235, "tcp" }, { "mosaicsyssvc1", { NULL }, 1235, "udp" }, { "bvcontrol", { NULL }, 1236, "tcp" }, { "bvcontrol", { NULL }, 1236, "udp" }, { "tsdos390", { NULL }, 1237, "tcp" }, { "tsdos390", { NULL }, 1237, "udp" }, { "hacl-qs", { NULL }, 1238, "tcp" }, { "hacl-qs", { NULL }, 1238, "udp" }, { "nmsd", { NULL }, 1239, "tcp" }, { "nmsd", { NULL }, 1239, "udp" }, { "instantia", { NULL }, 1240, "tcp" }, { "instantia", { NULL }, 1240, "udp" }, { "nessus", { NULL }, 1241, "tcp" }, { "nessus", { NULL }, 1241, "udp" }, { "nmasoverip", { NULL }, 1242, "tcp" }, { "nmasoverip", { NULL }, 1242, "udp" }, { "serialgateway", { NULL }, 1243, "tcp" }, { "serialgateway", { NULL }, 1243, "udp" }, { "isbconference1", { NULL }, 1244, "tcp" }, { "isbconference1", { NULL }, 1244, "udp" }, { "isbconference2", { NULL }, 1245, "tcp" }, { "isbconference2", { NULL }, 1245, "udp" }, { "payrouter", { NULL }, 1246, "tcp" }, { "payrouter", { NULL }, 1246, "udp" }, { "visionpyramid", { NULL }, 1247, "tcp" }, { "visionpyramid", { NULL }, 1247, "udp" }, { "hermes", { NULL }, 1248, "tcp" }, { "hermes", { NULL }, 1248, "udp" }, { "mesavistaco", { NULL }, 1249, "tcp" }, { "mesavistaco", { NULL }, 1249, "udp" }, { "swldy-sias", { NULL }, 1250, "tcp" }, { "swldy-sias", { NULL }, 1250, "udp" }, { "servergraph", { NULL }, 1251, "tcp" }, { "servergraph", { NULL }, 1251, "udp" }, { "bspne-pcc", { NULL }, 1252, "tcp" }, { "bspne-pcc", { NULL }, 1252, "udp" }, { "q55-pcc", { NULL }, 1253, "tcp" }, { "q55-pcc", { NULL }, 1253, "udp" }, { "de-noc", { NULL }, 1254, "tcp" }, { "de-noc", { NULL }, 1254, "udp" }, { "de-cache-query", { NULL }, 1255, "tcp" }, { "de-cache-query", { NULL }, 1255, "udp" }, { "de-server", { NULL }, 1256, "tcp" }, { "de-server", { NULL }, 1256, "udp" }, { "shockwave2", { NULL }, 1257, "tcp" }, { "shockwave2", { NULL }, 1257, "udp" }, { "opennl", { NULL }, 1258, "tcp" }, { "opennl", { NULL }, 1258, "udp" }, { "opennl-voice", { NULL }, 1259, "tcp" }, { "opennl-voice", { NULL }, 1259, "udp" }, { "ibm-ssd", { NULL }, 1260, "tcp" }, { "ibm-ssd", { NULL }, 1260, "udp" }, { "mpshrsv", { NULL }, 1261, "tcp" }, { "mpshrsv", { NULL }, 1261, "udp" }, { "qnts-orb", { NULL }, 1262, "tcp" }, { "qnts-orb", { NULL }, 1262, 
"udp" }, { "dka", { NULL }, 1263, "tcp" }, { "dka", { NULL }, 1263, "udp" }, { "prat", { NULL }, 1264, "tcp" }, { "prat", { NULL }, 1264, "udp" }, { "dssiapi", { NULL }, 1265, "tcp" }, { "dssiapi", { NULL }, 1265, "udp" }, { "dellpwrappks", { NULL }, 1266, "tcp" }, { "dellpwrappks", { NULL }, 1266, "udp" }, { "epc", { NULL }, 1267, "tcp" }, { "epc", { NULL }, 1267, "udp" }, { "propel-msgsys", { NULL }, 1268, "tcp" }, { "propel-msgsys", { NULL }, 1268, "udp" }, { "watilapp", { NULL }, 1269, "tcp" }, { "watilapp", { NULL }, 1269, "udp" }, { "opsmgr", { NULL }, 1270, "tcp" }, { "opsmgr", { NULL }, 1270, "udp" }, { "excw", { NULL }, 1271, "tcp" }, { "excw", { NULL }, 1271, "udp" }, { "cspmlockmgr", { NULL }, 1272, "tcp" }, { "cspmlockmgr", { NULL }, 1272, "udp" }, { "emc-gateway", { NULL }, 1273, "tcp" }, { "emc-gateway", { NULL }, 1273, "udp" }, { "t1distproc", { NULL }, 1274, "tcp" }, { "t1distproc", { NULL }, 1274, "udp" }, { "ivcollector", { NULL }, 1275, "tcp" }, { "ivcollector", { NULL }, 1275, "udp" }, { "ivmanager", { NULL }, 1276, "tcp" }, { "ivmanager", { NULL }, 1276, "udp" }, { "miva-mqs", { NULL }, 1277, "tcp" }, { "miva-mqs", { NULL }, 1277, "udp" }, { "dellwebadmin-1", { NULL }, 1278, "tcp" }, { "dellwebadmin-1", { NULL }, 1278, "udp" }, { "dellwebadmin-2", { NULL }, 1279, "tcp" }, { "dellwebadmin-2", { NULL }, 1279, "udp" }, { "pictrography", { NULL }, 1280, "tcp" }, { "pictrography", { NULL }, 1280, "udp" }, { "healthd", { NULL }, 1281, "tcp" }, { "healthd", { NULL }, 1281, "udp" }, { "emperion", { NULL }, 1282, "tcp" }, { "emperion", { NULL }, 1282, "udp" }, { "productinfo", { NULL }, 1283, "tcp" }, { "productinfo", { NULL }, 1283, "udp" }, { "iee-qfx", { NULL }, 1284, "tcp" }, { "iee-qfx", { NULL }, 1284, "udp" }, { "neoiface", { NULL }, 1285, "tcp" }, { "neoiface", { NULL }, 1285, "udp" }, { "netuitive", { NULL }, 1286, "tcp" }, { "netuitive", { NULL }, 1286, "udp" }, { "routematch", { NULL }, 1287, "tcp" }, { "routematch", { NULL }, 1287, "udp" }, { "navbuddy", { NULL }, 1288, "tcp" }, { "navbuddy", { NULL }, 1288, "udp" }, { "jwalkserver", { NULL }, 1289, "tcp" }, { "jwalkserver", { NULL }, 1289, "udp" }, { "winjaserver", { NULL }, 1290, "tcp" }, { "winjaserver", { NULL }, 1290, "udp" }, { "seagulllms", { NULL }, 1291, "tcp" }, { "seagulllms", { NULL }, 1291, "udp" }, { "dsdn", { NULL }, 1292, "tcp" }, { "dsdn", { NULL }, 1292, "udp" }, { "pkt-krb-ipsec", { NULL }, 1293, "tcp" }, { "pkt-krb-ipsec", { NULL }, 1293, "udp" }, { "cmmdriver", { NULL }, 1294, "tcp" }, { "cmmdriver", { NULL }, 1294, "udp" }, { "ehtp", { NULL }, 1295, "tcp" }, { "ehtp", { NULL }, 1295, "udp" }, { "dproxy", { NULL }, 1296, "tcp" }, { "dproxy", { NULL }, 1296, "udp" }, { "sdproxy", { NULL }, 1297, "tcp" }, { "sdproxy", { NULL }, 1297, "udp" }, { "lpcp", { NULL }, 1298, "tcp" }, { "lpcp", { NULL }, 1298, "udp" }, { "hp-sci", { NULL }, 1299, "tcp" }, { "hp-sci", { NULL }, 1299, "udp" }, { "h323hostcallsc", { NULL }, 1300, "tcp" }, { "h323hostcallsc", { NULL }, 1300, "udp" }, { "ci3-software-1", { NULL }, 1301, "tcp" }, { "ci3-software-1", { NULL }, 1301, "udp" }, { "ci3-software-2", { NULL }, 1302, "tcp" }, { "ci3-software-2", { NULL }, 1302, "udp" }, { "sftsrv", { NULL }, 1303, "tcp" }, { "sftsrv", { NULL }, 1303, "udp" }, { "boomerang", { NULL }, 1304, "tcp" }, { "boomerang", { NULL }, 1304, "udp" }, { "pe-mike", { NULL }, 1305, "tcp" }, { "pe-mike", { NULL }, 1305, "udp" }, { "re-conn-proto", { NULL }, 1306, "tcp" }, { "re-conn-proto", { NULL }, 1306, "udp" }, { "pacmand", { NULL }, 1307, "tcp" }, 
{ "pacmand", { NULL }, 1307, "udp" }, { "odsi", { NULL }, 1308, "tcp" }, { "odsi", { NULL }, 1308, "udp" }, { "jtag-server", { NULL }, 1309, "tcp" }, { "jtag-server", { NULL }, 1309, "udp" }, { "husky", { NULL }, 1310, "tcp" }, { "husky", { NULL }, 1310, "udp" }, { "rxmon", { NULL }, 1311, "tcp" }, { "rxmon", { NULL }, 1311, "udp" }, { "sti-envision", { NULL }, 1312, "tcp" }, { "sti-envision", { NULL }, 1312, "udp" }, { "bmc_patroldb", { NULL }, 1313, "tcp" }, { "bmc_patroldb", { NULL }, 1313, "udp" }, { "pdps", { NULL }, 1314, "tcp" }, { "pdps", { NULL }, 1314, "udp" }, { "els", { NULL }, 1315, "tcp" }, { "els", { NULL }, 1315, "udp" }, { "exbit-escp", { NULL }, 1316, "tcp" }, { "exbit-escp", { NULL }, 1316, "udp" }, { "vrts-ipcserver", { NULL }, 1317, "tcp" }, { "vrts-ipcserver", { NULL }, 1317, "udp" }, { "krb5gatekeeper", { NULL }, 1318, "tcp" }, { "krb5gatekeeper", { NULL }, 1318, "udp" }, { "amx-icsp", { NULL }, 1319, "tcp" }, { "amx-icsp", { NULL }, 1319, "udp" }, { "amx-axbnet", { NULL }, 1320, "tcp" }, { "amx-axbnet", { NULL }, 1320, "udp" }, { "pip", { NULL }, 1321, "tcp" }, { "pip", { NULL }, 1321, "udp" }, { "novation", { NULL }, 1322, "tcp" }, { "novation", { NULL }, 1322, "udp" }, { "brcd", { NULL }, 1323, "tcp" }, { "brcd", { NULL }, 1323, "udp" }, { "delta-mcp", { NULL }, 1324, "tcp" }, { "delta-mcp", { NULL }, 1324, "udp" }, { "dx-instrument", { NULL }, 1325, "tcp" }, { "dx-instrument", { NULL }, 1325, "udp" }, { "wimsic", { NULL }, 1326, "tcp" }, { "wimsic", { NULL }, 1326, "udp" }, { "ultrex", { NULL }, 1327, "tcp" }, { "ultrex", { NULL }, 1327, "udp" }, { "ewall", { NULL }, 1328, "tcp" }, { "ewall", { NULL }, 1328, "udp" }, { "netdb-export", { NULL }, 1329, "tcp" }, { "netdb-export", { NULL }, 1329, "udp" }, { "streetperfect", { NULL }, 1330, "tcp" }, { "streetperfect", { NULL }, 1330, "udp" }, { "intersan", { NULL }, 1331, "tcp" }, { "intersan", { NULL }, 1331, "udp" }, { "pcia-rxp-b", { NULL }, 1332, "tcp" }, { "pcia-rxp-b", { NULL }, 1332, "udp" }, { "passwrd-policy", { NULL }, 1333, "tcp" }, { "passwrd-policy", { NULL }, 1333, "udp" }, { "writesrv", { NULL }, 1334, "tcp" }, { "writesrv", { NULL }, 1334, "udp" }, { "digital-notary", { NULL }, 1335, "tcp" }, { "digital-notary", { NULL }, 1335, "udp" }, { "ischat", { NULL }, 1336, "tcp" }, { "ischat", { NULL }, 1336, "udp" }, { "menandmice-dns", { NULL }, 1337, "tcp" }, { "menandmice-dns", { NULL }, 1337, "udp" }, { "wmc-log-svc", { NULL }, 1338, "tcp" }, { "wmc-log-svc", { NULL }, 1338, "udp" }, { "kjtsiteserver", { NULL }, 1339, "tcp" }, { "kjtsiteserver", { NULL }, 1339, "udp" }, { "naap", { NULL }, 1340, "tcp" }, { "naap", { NULL }, 1340, "udp" }, { "qubes", { NULL }, 1341, "tcp" }, { "qubes", { NULL }, 1341, "udp" }, { "esbroker", { NULL }, 1342, "tcp" }, { "esbroker", { NULL }, 1342, "udp" }, { "re101", { NULL }, 1343, "tcp" }, { "re101", { NULL }, 1343, "udp" }, { "icap", { NULL }, 1344, "tcp" }, { "icap", { NULL }, 1344, "udp" }, { "vpjp", { NULL }, 1345, "tcp" }, { "vpjp", { NULL }, 1345, "udp" }, { "alta-ana-lm", { NULL }, 1346, "tcp" }, { "alta-ana-lm", { NULL }, 1346, "udp" }, { "bbn-mmc", { NULL }, 1347, "tcp" }, { "bbn-mmc", { NULL }, 1347, "udp" }, { "bbn-mmx", { NULL }, 1348, "tcp" }, { "bbn-mmx", { NULL }, 1348, "udp" }, { "sbook", { NULL }, 1349, "tcp" }, { "sbook", { NULL }, 1349, "udp" }, { "editbench", { NULL }, 1350, "tcp" }, { "editbench", { NULL }, 1350, "udp" }, { "equationbuilder", { NULL }, 1351, "tcp" }, { "equationbuilder", { NULL }, 1351, "udp" }, { "lotusnote", { NULL }, 1352, "tcp" }, { 
"lotusnote", { NULL }, 1352, "udp" }, { "relief", { NULL }, 1353, "tcp" }, { "relief", { NULL }, 1353, "udp" }, { "XSIP-network", { NULL }, 1354, "tcp" }, { "XSIP-network", { NULL }, 1354, "udp" }, { "intuitive-edge", { NULL }, 1355, "tcp" }, { "intuitive-edge", { NULL }, 1355, "udp" }, { "cuillamartin", { NULL }, 1356, "tcp" }, { "cuillamartin", { NULL }, 1356, "udp" }, { "pegboard", { NULL }, 1357, "tcp" }, { "pegboard", { NULL }, 1357, "udp" }, { "connlcli", { NULL }, 1358, "tcp" }, { "connlcli", { NULL }, 1358, "udp" }, { "ftsrv", { NULL }, 1359, "tcp" }, { "ftsrv", { NULL }, 1359, "udp" }, { "mimer", { NULL }, 1360, "tcp" }, { "mimer", { NULL }, 1360, "udp" }, { "linx", { NULL }, 1361, "tcp" }, { "linx", { NULL }, 1361, "udp" }, { "timeflies", { NULL }, 1362, "tcp" }, { "timeflies", { NULL }, 1362, "udp" }, { "ndm-requester", { NULL }, 1363, "tcp" }, { "ndm-requester", { NULL }, 1363, "udp" }, { "ndm-server", { NULL }, 1364, "tcp" }, { "ndm-server", { NULL }, 1364, "udp" }, { "adapt-sna", { NULL }, 1365, "tcp" }, { "adapt-sna", { NULL }, 1365, "udp" }, { "netware-csp", { NULL }, 1366, "tcp" }, { "netware-csp", { NULL }, 1366, "udp" }, { "dcs", { NULL }, 1367, "tcp" }, { "dcs", { NULL }, 1367, "udp" }, { "screencast", { NULL }, 1368, "tcp" }, { "screencast", { NULL }, 1368, "udp" }, { "gv-us", { NULL }, 1369, "tcp" }, { "gv-us", { NULL }, 1369, "udp" }, { "us-gv", { NULL }, 1370, "tcp" }, { "us-gv", { NULL }, 1370, "udp" }, { "fc-cli", { NULL }, 1371, "tcp" }, { "fc-cli", { NULL }, 1371, "udp" }, { "fc-ser", { NULL }, 1372, "tcp" }, { "fc-ser", { NULL }, 1372, "udp" }, { "chromagrafx", { NULL }, 1373, "tcp" }, { "chromagrafx", { NULL }, 1373, "udp" }, { "molly", { NULL }, 1374, "tcp" }, { "molly", { NULL }, 1374, "udp" }, { "bytex", { NULL }, 1375, "tcp" }, { "bytex", { NULL }, 1375, "udp" }, { "ibm-pps", { NULL }, 1376, "tcp" }, { "ibm-pps", { NULL }, 1376, "udp" }, { "cichlid", { NULL }, 1377, "tcp" }, { "cichlid", { NULL }, 1377, "udp" }, { "elan", { NULL }, 1378, "tcp" }, { "elan", { NULL }, 1378, "udp" }, { "dbreporter", { NULL }, 1379, "tcp" }, { "dbreporter", { NULL }, 1379, "udp" }, { "telesis-licman", { NULL }, 1380, "tcp" }, { "telesis-licman", { NULL }, 1380, "udp" }, { "apple-licman", { NULL }, 1381, "tcp" }, { "apple-licman", { NULL }, 1381, "udp" }, { "udt_os", { NULL }, 1382, "tcp" }, { "udt_os", { NULL }, 1382, "udp" }, { "gwha", { NULL }, 1383, "tcp" }, { "gwha", { NULL }, 1383, "udp" }, { "os-licman", { NULL }, 1384, "tcp" }, { "os-licman", { NULL }, 1384, "udp" }, { "atex_elmd", { NULL }, 1385, "tcp" }, { "atex_elmd", { NULL }, 1385, "udp" }, { "checksum", { NULL }, 1386, "tcp" }, { "checksum", { NULL }, 1386, "udp" }, { "cadsi-lm", { NULL }, 1387, "tcp" }, { "cadsi-lm", { NULL }, 1387, "udp" }, { "objective-dbc", { NULL }, 1388, "tcp" }, { "objective-dbc", { NULL }, 1388, "udp" }, { "iclpv-dm", { NULL }, 1389, "tcp" }, { "iclpv-dm", { NULL }, 1389, "udp" }, { "iclpv-sc", { NULL }, 1390, "tcp" }, { "iclpv-sc", { NULL }, 1390, "udp" }, { "iclpv-sas", { NULL }, 1391, "tcp" }, { "iclpv-sas", { NULL }, 1391, "udp" }, { "iclpv-pm", { NULL }, 1392, "tcp" }, { "iclpv-pm", { NULL }, 1392, "udp" }, { "iclpv-nls", { NULL }, 1393, "tcp" }, { "iclpv-nls", { NULL }, 1393, "udp" }, { "iclpv-nlc", { NULL }, 1394, "tcp" }, { "iclpv-nlc", { NULL }, 1394, "udp" }, { "iclpv-wsm", { NULL }, 1395, "tcp" }, { "iclpv-wsm", { NULL }, 1395, "udp" }, { "dvl-activemail", { NULL }, 1396, "tcp" }, { "dvl-activemail", { NULL }, 1396, "udp" }, { "audio-activmail", { NULL }, 1397, "tcp" }, { 
"audio-activmail", { NULL }, 1397, "udp" }, { "video-activmail", { NULL }, 1398, "tcp" }, { "video-activmail", { NULL }, 1398, "udp" }, { "cadkey-licman", { NULL }, 1399, "tcp" }, { "cadkey-licman", { NULL }, 1399, "udp" }, { "cadkey-tablet", { NULL }, 1400, "tcp" }, { "cadkey-tablet", { NULL }, 1400, "udp" }, { "goldleaf-licman", { NULL }, 1401, "tcp" }, { "goldleaf-licman", { NULL }, 1401, "udp" }, { "prm-sm-np", { NULL }, 1402, "tcp" }, { "prm-sm-np", { NULL }, 1402, "udp" }, { "prm-nm-np", { NULL }, 1403, "tcp" }, { "prm-nm-np", { NULL }, 1403, "udp" }, { "igi-lm", { NULL }, 1404, "tcp" }, { "igi-lm", { NULL }, 1404, "udp" }, { "ibm-res", { NULL }, 1405, "tcp" }, { "ibm-res", { NULL }, 1405, "udp" }, { "netlabs-lm", { NULL }, 1406, "tcp" }, { "netlabs-lm", { NULL }, 1406, "udp" }, { "dbsa-lm", { NULL }, 1407, "tcp" }, { "dbsa-lm", { NULL }, 1407, "udp" }, { "sophia-lm", { NULL }, 1408, "tcp" }, { "sophia-lm", { NULL }, 1408, "udp" }, { "here-lm", { NULL }, 1409, "tcp" }, { "here-lm", { NULL }, 1409, "udp" }, { "hiq", { NULL }, 1410, "tcp" }, { "hiq", { NULL }, 1410, "udp" }, { "af", { NULL }, 1411, "tcp" }, { "af", { NULL }, 1411, "udp" }, { "innosys", { NULL }, 1412, "tcp" }, { "innosys", { NULL }, 1412, "udp" }, { "innosys-acl", { NULL }, 1413, "tcp" }, { "innosys-acl", { NULL }, 1413, "udp" }, { "ibm-mqseries", { NULL }, 1414, "tcp" }, { "ibm-mqseries", { NULL }, 1414, "udp" }, { "dbstar", { NULL }, 1415, "tcp" }, { "dbstar", { NULL }, 1415, "udp" }, { "novell-lu6.2", { NULL }, 1416, "tcp" }, { "novell-lu6.2", { NULL }, 1416, "udp" }, { "timbuktu-srv1", { NULL }, 1417, "tcp" }, { "timbuktu-srv1", { NULL }, 1417, "udp" }, { "timbuktu-srv2", { NULL }, 1418, "tcp" }, { "timbuktu-srv2", { NULL }, 1418, "udp" }, { "timbuktu-srv3", { NULL }, 1419, "tcp" }, { "timbuktu-srv3", { NULL }, 1419, "udp" }, { "timbuktu-srv4", { NULL }, 1420, "tcp" }, { "timbuktu-srv4", { NULL }, 1420, "udp" }, { "gandalf-lm", { NULL }, 1421, "tcp" }, { "gandalf-lm", { NULL }, 1421, "udp" }, { "autodesk-lm", { NULL }, 1422, "tcp" }, { "autodesk-lm", { NULL }, 1422, "udp" }, { "essbase", { NULL }, 1423, "tcp" }, { "essbase", { NULL }, 1423, "udp" }, { "hybrid", { NULL }, 1424, "tcp" }, { "hybrid", { NULL }, 1424, "udp" }, { "zion-lm", { NULL }, 1425, "tcp" }, { "zion-lm", { NULL }, 1425, "udp" }, { "sais", { NULL }, 1426, "tcp" }, { "sais", { NULL }, 1426, "udp" }, { "mloadd", { NULL }, 1427, "tcp" }, { "mloadd", { NULL }, 1427, "udp" }, { "informatik-lm", { NULL }, 1428, "tcp" }, { "informatik-lm", { NULL }, 1428, "udp" }, { "nms", { NULL }, 1429, "tcp" }, { "nms", { NULL }, 1429, "udp" }, { "tpdu", { NULL }, 1430, "tcp" }, { "tpdu", { NULL }, 1430, "udp" }, { "rgtp", { NULL }, 1431, "tcp" }, { "rgtp", { NULL }, 1431, "udp" }, { "blueberry-lm", { NULL }, 1432, "tcp" }, { "blueberry-lm", { NULL }, 1432, "udp" }, { "ms-sql-s", { NULL }, 1433, "tcp" }, { "ms-sql-s", { NULL }, 1433, "udp" }, { "ms-sql-m", { NULL }, 1434, "tcp" }, { "ms-sql-m", { NULL }, 1434, "udp" }, { "ibm-cics", { NULL }, 1435, "tcp" }, { "ibm-cics", { NULL }, 1435, "udp" }, { "saism", { NULL }, 1436, "tcp" }, { "saism", { NULL }, 1436, "udp" }, { "tabula", { NULL }, 1437, "tcp" }, { "tabula", { NULL }, 1437, "udp" }, { "eicon-server", { NULL }, 1438, "tcp" }, { "eicon-server", { NULL }, 1438, "udp" }, { "eicon-x25", { NULL }, 1439, "tcp" }, { "eicon-x25", { NULL }, 1439, "udp" }, { "eicon-slp", { NULL }, 1440, "tcp" }, { "eicon-slp", { NULL }, 1440, "udp" }, { "cadis-1", { NULL }, 1441, "tcp" }, { "cadis-1", { NULL }, 1441, "udp" }, { "cadis-2", { 
NULL }, 1442, "tcp" }, { "cadis-2", { NULL }, 1442, "udp" }, { "ies-lm", { NULL }, 1443, "tcp" }, { "ies-lm", { NULL }, 1443, "udp" }, { "marcam-lm", { NULL }, 1444, "tcp" }, { "marcam-lm", { NULL }, 1444, "udp" }, { "proxima-lm", { NULL }, 1445, "tcp" }, { "proxima-lm", { NULL }, 1445, "udp" }, { "ora-lm", { NULL }, 1446, "tcp" }, { "ora-lm", { NULL }, 1446, "udp" }, { "apri-lm", { NULL }, 1447, "tcp" }, { "apri-lm", { NULL }, 1447, "udp" }, { "oc-lm", { NULL }, 1448, "tcp" }, { "oc-lm", { NULL }, 1448, "udp" }, { "peport", { NULL }, 1449, "tcp" }, { "peport", { NULL }, 1449, "udp" }, { "dwf", { NULL }, 1450, "tcp" }, { "dwf", { NULL }, 1450, "udp" }, { "infoman", { NULL }, 1451, "tcp" }, { "infoman", { NULL }, 1451, "udp" }, { "gtegsc-lm", { NULL }, 1452, "tcp" }, { "gtegsc-lm", { NULL }, 1452, "udp" }, { "genie-lm", { NULL }, 1453, "tcp" }, { "genie-lm", { NULL }, 1453, "udp" }, { "interhdl_elmd", { NULL }, 1454, "tcp" }, { "interhdl_elmd", { NULL }, 1454, "udp" }, { "esl-lm", { NULL }, 1455, "tcp" }, { "esl-lm", { NULL }, 1455, "udp" }, { "dca", { NULL }, 1456, "tcp" }, { "dca", { NULL }, 1456, "udp" }, { "valisys-lm", { NULL }, 1457, "tcp" }, { "valisys-lm", { NULL }, 1457, "udp" }, { "nrcabq-lm", { NULL }, 1458, "tcp" }, { "nrcabq-lm", { NULL }, 1458, "udp" }, { "proshare1", { NULL }, 1459, "tcp" }, { "proshare1", { NULL }, 1459, "udp" }, { "proshare2", { NULL }, 1460, "tcp" }, { "proshare2", { NULL }, 1460, "udp" }, { "ibm_wrless_lan", { NULL }, 1461, "tcp" }, { "ibm_wrless_lan", { NULL }, 1461, "udp" }, { "world-lm", { NULL }, 1462, "tcp" }, { "world-lm", { NULL }, 1462, "udp" }, { "nucleus", { NULL }, 1463, "tcp" }, { "nucleus", { NULL }, 1463, "udp" }, { "msl_lmd", { NULL }, 1464, "tcp" }, { "msl_lmd", { NULL }, 1464, "udp" }, { "pipes", { NULL }, 1465, "tcp" }, { "pipes", { NULL }, 1465, "udp" }, { "oceansoft-lm", { NULL }, 1466, "tcp" }, { "oceansoft-lm", { NULL }, 1466, "udp" }, { "csdmbase", { NULL }, 1467, "tcp" }, { "csdmbase", { NULL }, 1467, "udp" }, { "csdm", { NULL }, 1468, "tcp" }, { "csdm", { NULL }, 1468, "udp" }, { "aal-lm", { NULL }, 1469, "tcp" }, { "aal-lm", { NULL }, 1469, "udp" }, { "uaiact", { NULL }, 1470, "tcp" }, { "uaiact", { NULL }, 1470, "udp" }, { "csdmbase", { NULL }, 1471, "tcp" }, { "csdmbase", { NULL }, 1471, "udp" }, { "csdm", { NULL }, 1472, "tcp" }, { "csdm", { NULL }, 1472, "udp" }, { "openmath", { NULL }, 1473, "tcp" }, { "openmath", { NULL }, 1473, "udp" }, { "telefinder", { NULL }, 1474, "tcp" }, { "telefinder", { NULL }, 1474, "udp" }, { "taligent-lm", { NULL }, 1475, "tcp" }, { "taligent-lm", { NULL }, 1475, "udp" }, { "clvm-cfg", { NULL }, 1476, "tcp" }, { "clvm-cfg", { NULL }, 1476, "udp" }, { "ms-sna-server", { NULL }, 1477, "tcp" }, { "ms-sna-server", { NULL }, 1477, "udp" }, { "ms-sna-base", { NULL }, 1478, "tcp" }, { "ms-sna-base", { NULL }, 1478, "udp" }, { "dberegister", { NULL }, 1479, "tcp" }, { "dberegister", { NULL }, 1479, "udp" }, { "pacerforum", { NULL }, 1480, "tcp" }, { "pacerforum", { NULL }, 1480, "udp" }, { "airs", { NULL }, 1481, "tcp" }, { "airs", { NULL }, 1481, "udp" }, { "miteksys-lm", { NULL }, 1482, "tcp" }, { "miteksys-lm", { NULL }, 1482, "udp" }, { "afs", { NULL }, 1483, "tcp" }, { "afs", { NULL }, 1483, "udp" }, { "confluent", { NULL }, 1484, "tcp" }, { "confluent", { NULL }, 1484, "udp" }, { "lansource", { NULL }, 1485, "tcp" }, { "lansource", { NULL }, 1485, "udp" }, { "nms_topo_serv", { NULL }, 1486, "tcp" }, { "nms_topo_serv", { NULL }, 1486, "udp" }, { "localinfosrvr", { NULL }, 1487, "tcp" }, { 
"localinfosrvr", { NULL }, 1487, "udp" }, { "docstor", { NULL }, 1488, "tcp" }, { "docstor", { NULL }, 1488, "udp" }, { "dmdocbroker", { NULL }, 1489, "tcp" }, { "dmdocbroker", { NULL }, 1489, "udp" }, { "insitu-conf", { NULL }, 1490, "tcp" }, { "insitu-conf", { NULL }, 1490, "udp" }, { "stone-design-1", { NULL }, 1492, "tcp" }, { "stone-design-1", { NULL }, 1492, "udp" }, { "netmap_lm", { NULL }, 1493, "tcp" }, { "netmap_lm", { NULL }, 1493, "udp" }, { "ica", { NULL }, 1494, "tcp" }, { "ica", { NULL }, 1494, "udp" }, { "cvc", { NULL }, 1495, "tcp" }, { "cvc", { NULL }, 1495, "udp" }, { "liberty-lm", { NULL }, 1496, "tcp" }, { "liberty-lm", { NULL }, 1496, "udp" }, { "rfx-lm", { NULL }, 1497, "tcp" }, { "rfx-lm", { NULL }, 1497, "udp" }, { "sybase-sqlany", { NULL }, 1498, "tcp" }, { "sybase-sqlany", { NULL }, 1498, "udp" }, { "fhc", { NULL }, 1499, "tcp" }, { "fhc", { NULL }, 1499, "udp" }, { "vlsi-lm", { NULL }, 1500, "tcp" }, { "vlsi-lm", { NULL }, 1500, "udp" }, { "saiscm", { NULL }, 1501, "tcp" }, { "saiscm", { NULL }, 1501, "udp" }, { "shivadiscovery", { NULL }, 1502, "tcp" }, { "shivadiscovery", { NULL }, 1502, "udp" }, { "imtc-mcs", { NULL }, 1503, "tcp" }, { "imtc-mcs", { NULL }, 1503, "udp" }, { "evb-elm", { NULL }, 1504, "tcp" }, { "evb-elm", { NULL }, 1504, "udp" }, { "funkproxy", { NULL }, 1505, "tcp" }, { "funkproxy", { NULL }, 1505, "udp" }, { "utcd", { NULL }, 1506, "tcp" }, { "utcd", { NULL }, 1506, "udp" }, { "symplex", { NULL }, 1507, "tcp" }, { "symplex", { NULL }, 1507, "udp" }, { "diagmond", { NULL }, 1508, "tcp" }, { "diagmond", { NULL }, 1508, "udp" }, { "robcad-lm", { NULL }, 1509, "tcp" }, { "robcad-lm", { NULL }, 1509, "udp" }, { "mvx-lm", { NULL }, 1510, "tcp" }, { "mvx-lm", { NULL }, 1510, "udp" }, { "3l-l1", { NULL }, 1511, "tcp" }, { "3l-l1", { NULL }, 1511, "udp" }, { "wins", { NULL }, 1512, "tcp" }, { "wins", { NULL }, 1512, "udp" }, { "fujitsu-dtc", { NULL }, 1513, "tcp" }, { "fujitsu-dtc", { NULL }, 1513, "udp" }, { "fujitsu-dtcns", { NULL }, 1514, "tcp" }, { "fujitsu-dtcns", { NULL }, 1514, "udp" }, { "ifor-protocol", { NULL }, 1515, "tcp" }, { "ifor-protocol", { NULL }, 1515, "udp" }, { "vpad", { NULL }, 1516, "tcp" }, { "vpad", { NULL }, 1516, "udp" }, { "vpac", { NULL }, 1517, "tcp" }, { "vpac", { NULL }, 1517, "udp" }, { "vpvd", { NULL }, 1518, "tcp" }, { "vpvd", { NULL }, 1518, "udp" }, { "vpvc", { NULL }, 1519, "tcp" }, { "vpvc", { NULL }, 1519, "udp" }, { "atm-zip-office", { NULL }, 1520, "tcp" }, { "atm-zip-office", { NULL }, 1520, "udp" }, { "ncube-lm", { NULL }, 1521, "tcp" }, { "ncube-lm", { NULL }, 1521, "udp" }, { "ricardo-lm", { NULL }, 1522, "tcp" }, { "ricardo-lm", { NULL }, 1522, "udp" }, { "cichild-lm", { NULL }, 1523, "tcp" }, { "cichild-lm", { NULL }, 1523, "udp" }, { "ingreslock", { NULL }, 1524, "tcp" }, { "ingreslock", { NULL }, 1524, "udp" }, { "orasrv", { NULL }, 1525, "tcp" }, { "orasrv", { NULL }, 1525, "udp" }, { "prospero-np", { NULL }, 1525, "tcp" }, { "prospero-np", { NULL }, 1525, "udp" }, { "pdap-np", { NULL }, 1526, "tcp" }, { "pdap-np", { NULL }, 1526, "udp" }, { "tlisrv", { NULL }, 1527, "tcp" }, { "tlisrv", { NULL }, 1527, "udp" }, { "coauthor", { NULL }, 1529, "tcp" }, { "coauthor", { NULL }, 1529, "udp" }, { "rap-service", { NULL }, 1530, "tcp" }, { "rap-service", { NULL }, 1530, "udp" }, { "rap-listen", { NULL }, 1531, "tcp" }, { "rap-listen", { NULL }, 1531, "udp" }, { "miroconnect", { NULL }, 1532, "tcp" }, { "miroconnect", { NULL }, 1532, "udp" }, { "virtual-places", { NULL }, 1533, "tcp" }, { "virtual-places", { 
NULL }, 1533, "udp" }, { "micromuse-lm", { NULL }, 1534, "tcp" }, { "micromuse-lm", { NULL }, 1534, "udp" }, { "ampr-info", { NULL }, 1535, "tcp" }, { "ampr-info", { NULL }, 1535, "udp" }, { "ampr-inter", { NULL }, 1536, "tcp" }, { "ampr-inter", { NULL }, 1536, "udp" }, { "sdsc-lm", { NULL }, 1537, "tcp" }, { "sdsc-lm", { NULL }, 1537, "udp" }, { "3ds-lm", { NULL }, 1538, "tcp" }, { "3ds-lm", { NULL }, 1538, "udp" }, { "intellistor-lm", { NULL }, 1539, "tcp" }, { "intellistor-lm", { NULL }, 1539, "udp" }, { "rds", { NULL }, 1540, "tcp" }, { "rds", { NULL }, 1540, "udp" }, { "rds2", { NULL }, 1541, "tcp" }, { "rds2", { NULL }, 1541, "udp" }, { "gridgen-elmd", { NULL }, 1542, "tcp" }, { "gridgen-elmd", { NULL }, 1542, "udp" }, { "simba-cs", { NULL }, 1543, "tcp" }, { "simba-cs", { NULL }, 1543, "udp" }, { "aspeclmd", { NULL }, 1544, "tcp" }, { "aspeclmd", { NULL }, 1544, "udp" }, { "vistium-share", { NULL }, 1545, "tcp" }, { "vistium-share", { NULL }, 1545, "udp" }, { "abbaccuray", { NULL }, 1546, "tcp" }, { "abbaccuray", { NULL }, 1546, "udp" }, { "laplink", { NULL }, 1547, "tcp" }, { "laplink", { NULL }, 1547, "udp" }, { "axon-lm", { NULL }, 1548, "tcp" }, { "axon-lm", { NULL }, 1548, "udp" }, { "shivahose", { NULL }, 1549, "tcp" }, { "shivasound", { NULL }, 1549, "udp" }, { "3m-image-lm", { NULL }, 1550, "tcp" }, { "3m-image-lm", { NULL }, 1550, "udp" }, { "hecmtl-db", { NULL }, 1551, "tcp" }, { "hecmtl-db", { NULL }, 1551, "udp" }, { "pciarray", { NULL }, 1552, "tcp" }, { "pciarray", { NULL }, 1552, "udp" }, { "sna-cs", { NULL }, 1553, "tcp" }, { "sna-cs", { NULL }, 1553, "udp" }, { "caci-lm", { NULL }, 1554, "tcp" }, { "caci-lm", { NULL }, 1554, "udp" }, { "livelan", { NULL }, 1555, "tcp" }, { "livelan", { NULL }, 1555, "udp" }, { "veritas_pbx", { NULL }, 1556, "tcp" }, { "veritas_pbx", { NULL }, 1556, "udp" }, { "arbortext-lm", { NULL }, 1557, "tcp" }, { "arbortext-lm", { NULL }, 1557, "udp" }, { "xingmpeg", { NULL }, 1558, "tcp" }, { "xingmpeg", { NULL }, 1558, "udp" }, { "web2host", { NULL }, 1559, "tcp" }, { "web2host", { NULL }, 1559, "udp" }, { "asci-val", { NULL }, 1560, "tcp" }, { "asci-val", { NULL }, 1560, "udp" }, { "facilityview", { NULL }, 1561, "tcp" }, { "facilityview", { NULL }, 1561, "udp" }, { "pconnectmgr", { NULL }, 1562, "tcp" }, { "pconnectmgr", { NULL }, 1562, "udp" }, { "cadabra-lm", { NULL }, 1563, "tcp" }, { "cadabra-lm", { NULL }, 1563, "udp" }, { "pay-per-view", { NULL }, 1564, "tcp" }, { "pay-per-view", { NULL }, 1564, "udp" }, { "winddlb", { NULL }, 1565, "tcp" }, { "winddlb", { NULL }, 1565, "udp" }, { "corelvideo", { NULL }, 1566, "tcp" }, { "corelvideo", { NULL }, 1566, "udp" }, { "jlicelmd", { NULL }, 1567, "tcp" }, { "jlicelmd", { NULL }, 1567, "udp" }, { "tsspmap", { NULL }, 1568, "tcp" }, { "tsspmap", { NULL }, 1568, "udp" }, { "ets", { NULL }, 1569, "tcp" }, { "ets", { NULL }, 1569, "udp" }, { "orbixd", { NULL }, 1570, "tcp" }, { "orbixd", { NULL }, 1570, "udp" }, { "rdb-dbs-disp", { NULL }, 1571, "tcp" }, { "rdb-dbs-disp", { NULL }, 1571, "udp" }, { "chip-lm", { NULL }, 1572, "tcp" }, { "chip-lm", { NULL }, 1572, "udp" }, { "itscomm-ns", { NULL }, 1573, "tcp" }, { "itscomm-ns", { NULL }, 1573, "udp" }, { "mvel-lm", { NULL }, 1574, "tcp" }, { "mvel-lm", { NULL }, 1574, "udp" }, { "oraclenames", { NULL }, 1575, "tcp" }, { "oraclenames", { NULL }, 1575, "udp" }, { "moldflow-lm", { NULL }, 1576, "tcp" }, { "moldflow-lm", { NULL }, 1576, "udp" }, { "hypercube-lm", { NULL }, 1577, "tcp" }, { "hypercube-lm", { NULL }, 1577, "udp" }, { "jacobus-lm", { NULL 
}, 1578, "tcp" }, { "jacobus-lm", { NULL }, 1578, "udp" }, { "ioc-sea-lm", { NULL }, 1579, "tcp" }, { "ioc-sea-lm", { NULL }, 1579, "udp" }, { "tn-tl-r1", { NULL }, 1580, "tcp" }, { "tn-tl-r2", { NULL }, 1580, "udp" }, { "mil-2045-47001", { NULL }, 1581, "tcp" }, { "mil-2045-47001", { NULL }, 1581, "udp" }, { "msims", { NULL }, 1582, "tcp" }, { "msims", { NULL }, 1582, "udp" }, { "simbaexpress", { NULL }, 1583, "tcp" }, { "simbaexpress", { NULL }, 1583, "udp" }, { "tn-tl-fd2", { NULL }, 1584, "tcp" }, { "tn-tl-fd2", { NULL }, 1584, "udp" }, { "intv", { NULL }, 1585, "tcp" }, { "intv", { NULL }, 1585, "udp" }, { "ibm-abtact", { NULL }, 1586, "tcp" }, { "ibm-abtact", { NULL }, 1586, "udp" }, { "pra_elmd", { NULL }, 1587, "tcp" }, { "pra_elmd", { NULL }, 1587, "udp" }, { "triquest-lm", { NULL }, 1588, "tcp" }, { "triquest-lm", { NULL }, 1588, "udp" }, { "vqp", { NULL }, 1589, "tcp" }, { "vqp", { NULL }, 1589, "udp" }, { "gemini-lm", { NULL }, 1590, "tcp" }, { "gemini-lm", { NULL }, 1590, "udp" }, { "ncpm-pm", { NULL }, 1591, "tcp" }, { "ncpm-pm", { NULL }, 1591, "udp" }, { "commonspace", { NULL }, 1592, "tcp" }, { "commonspace", { NULL }, 1592, "udp" }, { "mainsoft-lm", { NULL }, 1593, "tcp" }, { "mainsoft-lm", { NULL }, 1593, "udp" }, { "sixtrak", { NULL }, 1594, "tcp" }, { "sixtrak", { NULL }, 1594, "udp" }, { "radio", { NULL }, 1595, "tcp" }, { "radio", { NULL }, 1595, "udp" }, { "radio-sm", { NULL }, 1596, "tcp" }, { "radio-bc", { NULL }, 1596, "udp" }, { "orbplus-iiop", { NULL }, 1597, "tcp" }, { "orbplus-iiop", { NULL }, 1597, "udp" }, { "picknfs", { NULL }, 1598, "tcp" }, { "picknfs", { NULL }, 1598, "udp" }, { "simbaservices", { NULL }, 1599, "tcp" }, { "simbaservices", { NULL }, 1599, "udp" }, { "issd", { NULL }, 1600, "tcp" }, { "issd", { NULL }, 1600, "udp" }, { "aas", { NULL }, 1601, "tcp" }, { "aas", { NULL }, 1601, "udp" }, { "inspect", { NULL }, 1602, "tcp" }, { "inspect", { NULL }, 1602, "udp" }, { "picodbc", { NULL }, 1603, "tcp" }, { "picodbc", { NULL }, 1603, "udp" }, { "icabrowser", { NULL }, 1604, "tcp" }, { "icabrowser", { NULL }, 1604, "udp" }, { "slp", { NULL }, 1605, "tcp" }, { "slp", { NULL }, 1605, "udp" }, { "slm-api", { NULL }, 1606, "tcp" }, { "slm-api", { NULL }, 1606, "udp" }, { "stt", { NULL }, 1607, "tcp" }, { "stt", { NULL }, 1607, "udp" }, { "smart-lm", { NULL }, 1608, "tcp" }, { "smart-lm", { NULL }, 1608, "udp" }, { "isysg-lm", { NULL }, 1609, "tcp" }, { "isysg-lm", { NULL }, 1609, "udp" }, { "taurus-wh", { NULL }, 1610, "tcp" }, { "taurus-wh", { NULL }, 1610, "udp" }, { "ill", { NULL }, 1611, "tcp" }, { "ill", { NULL }, 1611, "udp" }, { "netbill-trans", { NULL }, 1612, "tcp" }, { "netbill-trans", { NULL }, 1612, "udp" }, { "netbill-keyrep", { NULL }, 1613, "tcp" }, { "netbill-keyrep", { NULL }, 1613, "udp" }, { "netbill-cred", { NULL }, 1614, "tcp" }, { "netbill-cred", { NULL }, 1614, "udp" }, { "netbill-auth", { NULL }, 1615, "tcp" }, { "netbill-auth", { NULL }, 1615, "udp" }, { "netbill-prod", { NULL }, 1616, "tcp" }, { "netbill-prod", { NULL }, 1616, "udp" }, { "nimrod-agent", { NULL }, 1617, "tcp" }, { "nimrod-agent", { NULL }, 1617, "udp" }, { "skytelnet", { NULL }, 1618, "tcp" }, { "skytelnet", { NULL }, 1618, "udp" }, { "xs-openstorage", { NULL }, 1619, "tcp" }, { "xs-openstorage", { NULL }, 1619, "udp" }, { "faxportwinport", { NULL }, 1620, "tcp" }, { "faxportwinport", { NULL }, 1620, "udp" }, { "softdataphone", { NULL }, 1621, "tcp" }, { "softdataphone", { NULL }, 1621, "udp" }, { "ontime", { NULL }, 1622, "tcp" }, { "ontime", { NULL }, 1622, 
"udp" }, { "jaleosnd", { NULL }, 1623, "tcp" }, { "jaleosnd", { NULL }, 1623, "udp" }, { "udp-sr-port", { NULL }, 1624, "tcp" }, { "udp-sr-port", { NULL }, 1624, "udp" }, { "svs-omagent", { NULL }, 1625, "tcp" }, { "svs-omagent", { NULL }, 1625, "udp" }, { "shockwave", { NULL }, 1626, "tcp" }, { "shockwave", { NULL }, 1626, "udp" }, { "t128-gateway", { NULL }, 1627, "tcp" }, { "t128-gateway", { NULL }, 1627, "udp" }, { "lontalk-norm", { NULL }, 1628, "tcp" }, { "lontalk-norm", { NULL }, 1628, "udp" }, { "lontalk-urgnt", { NULL }, 1629, "tcp" }, { "lontalk-urgnt", { NULL }, 1629, "udp" }, { "oraclenet8cman", { NULL }, 1630, "tcp" }, { "oraclenet8cman", { NULL }, 1630, "udp" }, { "visitview", { NULL }, 1631, "tcp" }, { "visitview", { NULL }, 1631, "udp" }, { "pammratc", { NULL }, 1632, "tcp" }, { "pammratc", { NULL }, 1632, "udp" }, { "pammrpc", { NULL }, 1633, "tcp" }, { "pammrpc", { NULL }, 1633, "udp" }, { "loaprobe", { NULL }, 1634, "tcp" }, { "loaprobe", { NULL }, 1634, "udp" }, { "edb-server1", { NULL }, 1635, "tcp" }, { "edb-server1", { NULL }, 1635, "udp" }, { "isdc", { NULL }, 1636, "tcp" }, { "isdc", { NULL }, 1636, "udp" }, { "islc", { NULL }, 1637, "tcp" }, { "islc", { NULL }, 1637, "udp" }, { "ismc", { NULL }, 1638, "tcp" }, { "ismc", { NULL }, 1638, "udp" }, { "cert-initiator", { NULL }, 1639, "tcp" }, { "cert-initiator", { NULL }, 1639, "udp" }, { "cert-responder", { NULL }, 1640, "tcp" }, { "cert-responder", { NULL }, 1640, "udp" }, { "invision", { NULL }, 1641, "tcp" }, { "invision", { NULL }, 1641, "udp" }, { "isis-am", { NULL }, 1642, "tcp" }, { "isis-am", { NULL }, 1642, "udp" }, { "isis-ambc", { NULL }, 1643, "tcp" }, { "isis-ambc", { NULL }, 1643, "udp" }, { "saiseh", { NULL }, 1644, "tcp" }, { "sightline", { NULL }, 1645, "tcp" }, { "sightline", { NULL }, 1645, "udp" }, { "sa-msg-port", { NULL }, 1646, "tcp" }, { "sa-msg-port", { NULL }, 1646, "udp" }, { "rsap", { NULL }, 1647, "tcp" }, { "rsap", { NULL }, 1647, "udp" }, { "concurrent-lm", { NULL }, 1648, "tcp" }, { "concurrent-lm", { NULL }, 1648, "udp" }, { "kermit", { NULL }, 1649, "tcp" }, { "kermit", { NULL }, 1649, "udp" }, { "nkd", { NULL }, 1650, "tcp" }, { "nkd", { NULL }, 1650, "udp" }, { "shiva_confsrvr", { NULL }, 1651, "tcp" }, { "shiva_confsrvr", { NULL }, 1651, "udp" }, { "xnmp", { NULL }, 1652, "tcp" }, { "xnmp", { NULL }, 1652, "udp" }, { "alphatech-lm", { NULL }, 1653, "tcp" }, { "alphatech-lm", { NULL }, 1653, "udp" }, { "stargatealerts", { NULL }, 1654, "tcp" }, { "stargatealerts", { NULL }, 1654, "udp" }, { "dec-mbadmin", { NULL }, 1655, "tcp" }, { "dec-mbadmin", { NULL }, 1655, "udp" }, { "dec-mbadmin-h", { NULL }, 1656, "tcp" }, { "dec-mbadmin-h", { NULL }, 1656, "udp" }, { "fujitsu-mmpdc", { NULL }, 1657, "tcp" }, { "fujitsu-mmpdc", { NULL }, 1657, "udp" }, { "sixnetudr", { NULL }, 1658, "tcp" }, { "sixnetudr", { NULL }, 1658, "udp" }, { "sg-lm", { NULL }, 1659, "tcp" }, { "sg-lm", { NULL }, 1659, "udp" }, { "skip-mc-gikreq", { NULL }, 1660, "tcp" }, { "skip-mc-gikreq", { NULL }, 1660, "udp" }, { "netview-aix-1", { NULL }, 1661, "tcp" }, { "netview-aix-1", { NULL }, 1661, "udp" }, { "netview-aix-2", { NULL }, 1662, "tcp" }, { "netview-aix-2", { NULL }, 1662, "udp" }, { "netview-aix-3", { NULL }, 1663, "tcp" }, { "netview-aix-3", { NULL }, 1663, "udp" }, { "netview-aix-4", { NULL }, 1664, "tcp" }, { "netview-aix-4", { NULL }, 1664, "udp" }, { "netview-aix-5", { NULL }, 1665, "tcp" }, { "netview-aix-5", { NULL }, 1665, "udp" }, { "netview-aix-6", { NULL }, 1666, "tcp" }, { "netview-aix-6", { NULL 
}, 1666, "udp" }, { "netview-aix-7", { NULL }, 1667, "tcp" }, { "netview-aix-7", { NULL }, 1667, "udp" }, { "netview-aix-8", { NULL }, 1668, "tcp" }, { "netview-aix-8", { NULL }, 1668, "udp" }, { "netview-aix-9", { NULL }, 1669, "tcp" }, { "netview-aix-9", { NULL }, 1669, "udp" }, { "netview-aix-10", { NULL }, 1670, "tcp" }, { "netview-aix-10", { NULL }, 1670, "udp" }, { "netview-aix-11", { NULL }, 1671, "tcp" }, { "netview-aix-11", { NULL }, 1671, "udp" }, { "netview-aix-12", { NULL }, 1672, "tcp" }, { "netview-aix-12", { NULL }, 1672, "udp" }, { "proshare-mc-1", { NULL }, 1673, "tcp" }, { "proshare-mc-1", { NULL }, 1673, "udp" }, { "proshare-mc-2", { NULL }, 1674, "tcp" }, { "proshare-mc-2", { NULL }, 1674, "udp" }, { "pdp", { NULL }, 1675, "tcp" }, { "pdp", { NULL }, 1675, "udp" }, { "netcomm1", { NULL }, 1676, "tcp" }, { "netcomm2", { NULL }, 1676, "udp" }, { "groupwise", { NULL }, 1677, "tcp" }, { "groupwise", { NULL }, 1677, "udp" }, { "prolink", { NULL }, 1678, "tcp" }, { "prolink", { NULL }, 1678, "udp" }, { "darcorp-lm", { NULL }, 1679, "tcp" }, { "darcorp-lm", { NULL }, 1679, "udp" }, { "microcom-sbp", { NULL }, 1680, "tcp" }, { "microcom-sbp", { NULL }, 1680, "udp" }, { "sd-elmd", { NULL }, 1681, "tcp" }, { "sd-elmd", { NULL }, 1681, "udp" }, { "lanyon-lantern", { NULL }, 1682, "tcp" }, { "lanyon-lantern", { NULL }, 1682, "udp" }, { "ncpm-hip", { NULL }, 1683, "tcp" }, { "ncpm-hip", { NULL }, 1683, "udp" }, { "snaresecure", { NULL }, 1684, "tcp" }, { "snaresecure", { NULL }, 1684, "udp" }, { "n2nremote", { NULL }, 1685, "tcp" }, { "n2nremote", { NULL }, 1685, "udp" }, { "cvmon", { NULL }, 1686, "tcp" }, { "cvmon", { NULL }, 1686, "udp" }, { "nsjtp-ctrl", { NULL }, 1687, "tcp" }, { "nsjtp-ctrl", { NULL }, 1687, "udp" }, { "nsjtp-data", { NULL }, 1688, "tcp" }, { "nsjtp-data", { NULL }, 1688, "udp" }, { "firefox", { NULL }, 1689, "tcp" }, { "firefox", { NULL }, 1689, "udp" }, { "ng-umds", { NULL }, 1690, "tcp" }, { "ng-umds", { NULL }, 1690, "udp" }, { "empire-empuma", { NULL }, 1691, "tcp" }, { "empire-empuma", { NULL }, 1691, "udp" }, { "sstsys-lm", { NULL }, 1692, "tcp" }, { "sstsys-lm", { NULL }, 1692, "udp" }, { "rrirtr", { NULL }, 1693, "tcp" }, { "rrirtr", { NULL }, 1693, "udp" }, { "rrimwm", { NULL }, 1694, "tcp" }, { "rrimwm", { NULL }, 1694, "udp" }, { "rrilwm", { NULL }, 1695, "tcp" }, { "rrilwm", { NULL }, 1695, "udp" }, { "rrifmm", { NULL }, 1696, "tcp" }, { "rrifmm", { NULL }, 1696, "udp" }, { "rrisat", { NULL }, 1697, "tcp" }, { "rrisat", { NULL }, 1697, "udp" }, { "rsvp-encap-1", { NULL }, 1698, "tcp" }, { "rsvp-encap-1", { NULL }, 1698, "udp" }, { "rsvp-encap-2", { NULL }, 1699, "tcp" }, { "rsvp-encap-2", { NULL }, 1699, "udp" }, { "mps-raft", { NULL }, 1700, "tcp" }, { "mps-raft", { NULL }, 1700, "udp" }, { "l2f", { NULL }, 1701, "tcp" }, { "l2f", { NULL }, 1701, "udp" }, { "l2tp", { NULL }, 1701, "tcp" }, { "l2tp", { NULL }, 1701, "udp" }, { "deskshare", { NULL }, 1702, "tcp" }, { "deskshare", { NULL }, 1702, "udp" }, { "hb-engine", { NULL }, 1703, "tcp" }, { "hb-engine", { NULL }, 1703, "udp" }, { "bcs-broker", { NULL }, 1704, "tcp" }, { "bcs-broker", { NULL }, 1704, "udp" }, { "slingshot", { NULL }, 1705, "tcp" }, { "slingshot", { NULL }, 1705, "udp" }, { "jetform", { NULL }, 1706, "tcp" }, { "jetform", { NULL }, 1706, "udp" }, { "vdmplay", { NULL }, 1707, "tcp" }, { "vdmplay", { NULL }, 1707, "udp" }, { "gat-lmd", { NULL }, 1708, "tcp" }, { "gat-lmd", { NULL }, 1708, "udp" }, { "centra", { NULL }, 1709, "tcp" }, { "centra", { NULL }, 1709, "udp" }, { 
"impera", { NULL }, 1710, "tcp" }, { "impera", { NULL }, 1710, "udp" }, { "pptconference", { NULL }, 1711, "tcp" }, { "pptconference", { NULL }, 1711, "udp" }, { "registrar", { NULL }, 1712, "tcp" }, { "registrar", { NULL }, 1712, "udp" }, { "conferencetalk", { NULL }, 1713, "tcp" }, { "conferencetalk", { NULL }, 1713, "udp" }, { "sesi-lm", { NULL }, 1714, "tcp" }, { "sesi-lm", { NULL }, 1714, "udp" }, { "houdini-lm", { NULL }, 1715, "tcp" }, { "houdini-lm", { NULL }, 1715, "udp" }, { "xmsg", { NULL }, 1716, "tcp" }, { "xmsg", { NULL }, 1716, "udp" }, { "fj-hdnet", { NULL }, 1717, "tcp" }, { "fj-hdnet", { NULL }, 1717, "udp" }, { "h323gatedisc", { NULL }, 1718, "tcp" }, { "h323gatedisc", { NULL }, 1718, "udp" }, { "h323gatestat", { NULL }, 1719, "tcp" }, { "h323gatestat", { NULL }, 1719, "udp" }, { "h323hostcall", { NULL }, 1720, "tcp" }, { "h323hostcall", { NULL }, 1720, "udp" }, { "caicci", { NULL }, 1721, "tcp" }, { "caicci", { NULL }, 1721, "udp" }, { "hks-lm", { NULL }, 1722, "tcp" }, { "hks-lm", { NULL }, 1722, "udp" }, { "pptp", { NULL }, 1723, "tcp" }, { "pptp", { NULL }, 1723, "udp" }, { "csbphonemaster", { NULL }, 1724, "tcp" }, { "csbphonemaster", { NULL }, 1724, "udp" }, { "iden-ralp", { NULL }, 1725, "tcp" }, { "iden-ralp", { NULL }, 1725, "udp" }, { "iberiagames", { NULL }, 1726, "tcp" }, { "iberiagames", { NULL }, 1726, "udp" }, { "winddx", { NULL }, 1727, "tcp" }, { "winddx", { NULL }, 1727, "udp" }, { "telindus", { NULL }, 1728, "tcp" }, { "telindus", { NULL }, 1728, "udp" }, { "citynl", { NULL }, 1729, "tcp" }, { "citynl", { NULL }, 1729, "udp" }, { "roketz", { NULL }, 1730, "tcp" }, { "roketz", { NULL }, 1730, "udp" }, { "msiccp", { NULL }, 1731, "tcp" }, { "msiccp", { NULL }, 1731, "udp" }, { "proxim", { NULL }, 1732, "tcp" }, { "proxim", { NULL }, 1732, "udp" }, { "siipat", { NULL }, 1733, "tcp" }, { "siipat", { NULL }, 1733, "udp" }, { "cambertx-lm", { NULL }, 1734, "tcp" }, { "cambertx-lm", { NULL }, 1734, "udp" }, { "privatechat", { NULL }, 1735, "tcp" }, { "privatechat", { NULL }, 1735, "udp" }, { "street-stream", { NULL }, 1736, "tcp" }, { "street-stream", { NULL }, 1736, "udp" }, { "ultimad", { NULL }, 1737, "tcp" }, { "ultimad", { NULL }, 1737, "udp" }, { "gamegen1", { NULL }, 1738, "tcp" }, { "gamegen1", { NULL }, 1738, "udp" }, { "webaccess", { NULL }, 1739, "tcp" }, { "webaccess", { NULL }, 1739, "udp" }, { "encore", { NULL }, 1740, "tcp" }, { "encore", { NULL }, 1740, "udp" }, { "cisco-net-mgmt", { NULL }, 1741, "tcp" }, { "cisco-net-mgmt", { NULL }, 1741, "udp" }, { "3Com-nsd", { NULL }, 1742, "tcp" }, { "3Com-nsd", { NULL }, 1742, "udp" }, { "cinegrfx-lm", { NULL }, 1743, "tcp" }, { "cinegrfx-lm", { NULL }, 1743, "udp" }, { "ncpm-ft", { NULL }, 1744, "tcp" }, { "ncpm-ft", { NULL }, 1744, "udp" }, { "remote-winsock", { NULL }, 1745, "tcp" }, { "remote-winsock", { NULL }, 1745, "udp" }, { "ftrapid-1", { NULL }, 1746, "tcp" }, { "ftrapid-1", { NULL }, 1746, "udp" }, { "ftrapid-2", { NULL }, 1747, "tcp" }, { "ftrapid-2", { NULL }, 1747, "udp" }, { "oracle-em1", { NULL }, 1748, "tcp" }, { "oracle-em1", { NULL }, 1748, "udp" }, { "aspen-services", { NULL }, 1749, "tcp" }, { "aspen-services", { NULL }, 1749, "udp" }, { "sslp", { NULL }, 1750, "tcp" }, { "sslp", { NULL }, 1750, "udp" }, { "swiftnet", { NULL }, 1751, "tcp" }, { "swiftnet", { NULL }, 1751, "udp" }, { "lofr-lm", { NULL }, 1752, "tcp" }, { "lofr-lm", { NULL }, 1752, "udp" }, { "oracle-em2", { NULL }, 1754, "tcp" }, { "oracle-em2", { NULL }, 1754, "udp" }, { "ms-streaming", { NULL }, 1755, "tcp" }, { 
"ms-streaming", { NULL }, 1755, "udp" }, { "capfast-lmd", { NULL }, 1756, "tcp" }, { "capfast-lmd", { NULL }, 1756, "udp" }, { "cnhrp", { NULL }, 1757, "tcp" }, { "cnhrp", { NULL }, 1757, "udp" }, { "tftp-mcast", { NULL }, 1758, "tcp" }, { "tftp-mcast", { NULL }, 1758, "udp" }, { "spss-lm", { NULL }, 1759, "tcp" }, { "spss-lm", { NULL }, 1759, "udp" }, { "www-ldap-gw", { NULL }, 1760, "tcp" }, { "www-ldap-gw", { NULL }, 1760, "udp" }, { "cft-0", { NULL }, 1761, "tcp" }, { "cft-0", { NULL }, 1761, "udp" }, { "cft-1", { NULL }, 1762, "tcp" }, { "cft-1", { NULL }, 1762, "udp" }, { "cft-2", { NULL }, 1763, "tcp" }, { "cft-2", { NULL }, 1763, "udp" }, { "cft-3", { NULL }, 1764, "tcp" }, { "cft-3", { NULL }, 1764, "udp" }, { "cft-4", { NULL }, 1765, "tcp" }, { "cft-4", { NULL }, 1765, "udp" }, { "cft-5", { NULL }, 1766, "tcp" }, { "cft-5", { NULL }, 1766, "udp" }, { "cft-6", { NULL }, 1767, "tcp" }, { "cft-6", { NULL }, 1767, "udp" }, { "cft-7", { NULL }, 1768, "tcp" }, { "cft-7", { NULL }, 1768, "udp" }, { "bmc-net-adm", { NULL }, 1769, "tcp" }, { "bmc-net-adm", { NULL }, 1769, "udp" }, { "bmc-net-svc", { NULL }, 1770, "tcp" }, { "bmc-net-svc", { NULL }, 1770, "udp" }, { "vaultbase", { NULL }, 1771, "tcp" }, { "vaultbase", { NULL }, 1771, "udp" }, { "essweb-gw", { NULL }, 1772, "tcp" }, { "essweb-gw", { NULL }, 1772, "udp" }, { "kmscontrol", { NULL }, 1773, "tcp" }, { "kmscontrol", { NULL }, 1773, "udp" }, { "global-dtserv", { NULL }, 1774, "tcp" }, { "global-dtserv", { NULL }, 1774, "udp" }, { "femis", { NULL }, 1776, "tcp" }, { "femis", { NULL }, 1776, "udp" }, { "powerguardian", { NULL }, 1777, "tcp" }, { "powerguardian", { NULL }, 1777, "udp" }, { "prodigy-intrnet", { NULL }, 1778, "tcp" }, { "prodigy-intrnet", { NULL }, 1778, "udp" }, { "pharmasoft", { NULL }, 1779, "tcp" }, { "pharmasoft", { NULL }, 1779, "udp" }, { "dpkeyserv", { NULL }, 1780, "tcp" }, { "dpkeyserv", { NULL }, 1780, "udp" }, { "answersoft-lm", { NULL }, 1781, "tcp" }, { "answersoft-lm", { NULL }, 1781, "udp" }, { "hp-hcip", { NULL }, 1782, "tcp" }, { "hp-hcip", { NULL }, 1782, "udp" }, { "finle-lm", { NULL }, 1784, "tcp" }, { "finle-lm", { NULL }, 1784, "udp" }, { "windlm", { NULL }, 1785, "tcp" }, { "windlm", { NULL }, 1785, "udp" }, { "funk-logger", { NULL }, 1786, "tcp" }, { "funk-logger", { NULL }, 1786, "udp" }, { "funk-license", { NULL }, 1787, "tcp" }, { "funk-license", { NULL }, 1787, "udp" }, { "psmond", { NULL }, 1788, "tcp" }, { "psmond", { NULL }, 1788, "udp" }, { "hello", { NULL }, 1789, "tcp" }, { "hello", { NULL }, 1789, "udp" }, { "nmsp", { NULL }, 1790, "tcp" }, { "nmsp", { NULL }, 1790, "udp" }, { "ea1", { NULL }, 1791, "tcp" }, { "ea1", { NULL }, 1791, "udp" }, { "ibm-dt-2", { NULL }, 1792, "tcp" }, { "ibm-dt-2", { NULL }, 1792, "udp" }, { "rsc-robot", { NULL }, 1793, "tcp" }, { "rsc-robot", { NULL }, 1793, "udp" }, { "cera-bcm", { NULL }, 1794, "tcp" }, { "cera-bcm", { NULL }, 1794, "udp" }, { "dpi-proxy", { NULL }, 1795, "tcp" }, { "dpi-proxy", { NULL }, 1795, "udp" }, { "vocaltec-admin", { NULL }, 1796, "tcp" }, { "vocaltec-admin", { NULL }, 1796, "udp" }, { "uma", { NULL }, 1797, "tcp" }, { "uma", { NULL }, 1797, "udp" }, { "etp", { NULL }, 1798, "tcp" }, { "etp", { NULL }, 1798, "udp" }, { "netrisk", { NULL }, 1799, "tcp" }, { "netrisk", { NULL }, 1799, "udp" }, { "ansys-lm", { NULL }, 1800, "tcp" }, { "ansys-lm", { NULL }, 1800, "udp" }, { "msmq", { NULL }, 1801, "tcp" }, { "msmq", { NULL }, 1801, "udp" }, { "concomp1", { NULL }, 1802, "tcp" }, { "concomp1", { NULL }, 1802, "udp" }, { 
"hp-hcip-gwy", { NULL }, 1803, "tcp" }, { "hp-hcip-gwy", { NULL }, 1803, "udp" }, { "enl", { NULL }, 1804, "tcp" }, { "enl", { NULL }, 1804, "udp" }, { "enl-name", { NULL }, 1805, "tcp" }, { "enl-name", { NULL }, 1805, "udp" }, { "musiconline", { NULL }, 1806, "tcp" }, { "musiconline", { NULL }, 1806, "udp" }, { "fhsp", { NULL }, 1807, "tcp" }, { "fhsp", { NULL }, 1807, "udp" }, { "oracle-vp2", { NULL }, 1808, "tcp" }, { "oracle-vp2", { NULL }, 1808, "udp" }, { "oracle-vp1", { NULL }, 1809, "tcp" }, { "oracle-vp1", { NULL }, 1809, "udp" }, { "jerand-lm", { NULL }, 1810, "tcp" }, { "jerand-lm", { NULL }, 1810, "udp" }, { "scientia-sdb", { NULL }, 1811, "tcp" }, { "scientia-sdb", { NULL }, 1811, "udp" }, { "radius", { NULL }, 1812, "tcp" }, { "radius", { NULL }, 1812, "udp" }, { "radius-acct", { NULL }, 1813, "tcp" }, { "radius-acct", { NULL }, 1813, "udp" }, { "tdp-suite", { NULL }, 1814, "tcp" }, { "tdp-suite", { NULL }, 1814, "udp" }, { "mmpft", { NULL }, 1815, "tcp" }, { "mmpft", { NULL }, 1815, "udp" }, { "harp", { NULL }, 1816, "tcp" }, { "harp", { NULL }, 1816, "udp" }, { "rkb-oscs", { NULL }, 1817, "tcp" }, { "rkb-oscs", { NULL }, 1817, "udp" }, { "etftp", { NULL }, 1818, "tcp" }, { "etftp", { NULL }, 1818, "udp" }, { "plato-lm", { NULL }, 1819, "tcp" }, { "plato-lm", { NULL }, 1819, "udp" }, { "mcagent", { NULL }, 1820, "tcp" }, { "mcagent", { NULL }, 1820, "udp" }, { "donnyworld", { NULL }, 1821, "tcp" }, { "donnyworld", { NULL }, 1821, "udp" }, { "es-elmd", { NULL }, 1822, "tcp" }, { "es-elmd", { NULL }, 1822, "udp" }, { "unisys-lm", { NULL }, 1823, "tcp" }, { "unisys-lm", { NULL }, 1823, "udp" }, { "metrics-pas", { NULL }, 1824, "tcp" }, { "metrics-pas", { NULL }, 1824, "udp" }, { "direcpc-video", { NULL }, 1825, "tcp" }, { "direcpc-video", { NULL }, 1825, "udp" }, { "ardt", { NULL }, 1826, "tcp" }, { "ardt", { NULL }, 1826, "udp" }, { "asi", { NULL }, 1827, "tcp" }, { "asi", { NULL }, 1827, "udp" }, { "itm-mcell-u", { NULL }, 1828, "tcp" }, { "itm-mcell-u", { NULL }, 1828, "udp" }, { "optika-emedia", { NULL }, 1829, "tcp" }, { "optika-emedia", { NULL }, 1829, "udp" }, { "net8-cman", { NULL }, 1830, "tcp" }, { "net8-cman", { NULL }, 1830, "udp" }, { "myrtle", { NULL }, 1831, "tcp" }, { "myrtle", { NULL }, 1831, "udp" }, { "tht-treasure", { NULL }, 1832, "tcp" }, { "tht-treasure", { NULL }, 1832, "udp" }, { "udpradio", { NULL }, 1833, "tcp" }, { "udpradio", { NULL }, 1833, "udp" }, { "ardusuni", { NULL }, 1834, "tcp" }, { "ardusuni", { NULL }, 1834, "udp" }, { "ardusmul", { NULL }, 1835, "tcp" }, { "ardusmul", { NULL }, 1835, "udp" }, { "ste-smsc", { NULL }, 1836, "tcp" }, { "ste-smsc", { NULL }, 1836, "udp" }, { "csoft1", { NULL }, 1837, "tcp" }, { "csoft1", { NULL }, 1837, "udp" }, { "talnet", { NULL }, 1838, "tcp" }, { "talnet", { NULL }, 1838, "udp" }, { "netopia-vo1", { NULL }, 1839, "tcp" }, { "netopia-vo1", { NULL }, 1839, "udp" }, { "netopia-vo2", { NULL }, 1840, "tcp" }, { "netopia-vo2", { NULL }, 1840, "udp" }, { "netopia-vo3", { NULL }, 1841, "tcp" }, { "netopia-vo3", { NULL }, 1841, "udp" }, { "netopia-vo4", { NULL }, 1842, "tcp" }, { "netopia-vo4", { NULL }, 1842, "udp" }, { "netopia-vo5", { NULL }, 1843, "tcp" }, { "netopia-vo5", { NULL }, 1843, "udp" }, { "direcpc-dll", { NULL }, 1844, "tcp" }, { "direcpc-dll", { NULL }, 1844, "udp" }, { "altalink", { NULL }, 1845, "tcp" }, { "altalink", { NULL }, 1845, "udp" }, { "tunstall-pnc", { NULL }, 1846, "tcp" }, { "tunstall-pnc", { NULL }, 1846, "udp" }, { "slp-notify", { NULL }, 1847, "tcp" }, { "slp-notify", { NULL }, 
1847, "udp" }, { "fjdocdist", { NULL }, 1848, "tcp" }, { "fjdocdist", { NULL }, 1848, "udp" }, { "alpha-sms", { NULL }, 1849, "tcp" }, { "alpha-sms", { NULL }, 1849, "udp" }, { "gsi", { NULL }, 1850, "tcp" }, { "gsi", { NULL }, 1850, "udp" }, { "ctcd", { NULL }, 1851, "tcp" }, { "ctcd", { NULL }, 1851, "udp" }, { "virtual-time", { NULL }, 1852, "tcp" }, { "virtual-time", { NULL }, 1852, "udp" }, { "vids-avtp", { NULL }, 1853, "tcp" }, { "vids-avtp", { NULL }, 1853, "udp" }, { "buddy-draw", { NULL }, 1854, "tcp" }, { "buddy-draw", { NULL }, 1854, "udp" }, { "fiorano-rtrsvc", { NULL }, 1855, "tcp" }, { "fiorano-rtrsvc", { NULL }, 1855, "udp" }, { "fiorano-msgsvc", { NULL }, 1856, "tcp" }, { "fiorano-msgsvc", { NULL }, 1856, "udp" }, { "datacaptor", { NULL }, 1857, "tcp" }, { "datacaptor", { NULL }, 1857, "udp" }, { "privateark", { NULL }, 1858, "tcp" }, { "privateark", { NULL }, 1858, "udp" }, { "gammafetchsvr", { NULL }, 1859, "tcp" }, { "gammafetchsvr", { NULL }, 1859, "udp" }, { "sunscalar-svc", { NULL }, 1860, "tcp" }, { "sunscalar-svc", { NULL }, 1860, "udp" }, { "lecroy-vicp", { NULL }, 1861, "tcp" }, { "lecroy-vicp", { NULL }, 1861, "udp" }, { "mysql-cm-agent", { NULL }, 1862, "tcp" }, { "mysql-cm-agent", { NULL }, 1862, "udp" }, { "msnp", { NULL }, 1863, "tcp" }, { "msnp", { NULL }, 1863, "udp" }, { "paradym-31port", { NULL }, 1864, "tcp" }, { "paradym-31port", { NULL }, 1864, "udp" }, { "entp", { NULL }, 1865, "tcp" }, { "entp", { NULL }, 1865, "udp" }, { "swrmi", { NULL }, 1866, "tcp" }, { "swrmi", { NULL }, 1866, "udp" }, { "udrive", { NULL }, 1867, "tcp" }, { "udrive", { NULL }, 1867, "udp" }, { "viziblebrowser", { NULL }, 1868, "tcp" }, { "viziblebrowser", { NULL }, 1868, "udp" }, { "transact", { NULL }, 1869, "tcp" }, { "transact", { NULL }, 1869, "udp" }, { "sunscalar-dns", { NULL }, 1870, "tcp" }, { "sunscalar-dns", { NULL }, 1870, "udp" }, { "canocentral0", { NULL }, 1871, "tcp" }, { "canocentral0", { NULL }, 1871, "udp" }, { "canocentral1", { NULL }, 1872, "tcp" }, { "canocentral1", { NULL }, 1872, "udp" }, { "fjmpjps", { NULL }, 1873, "tcp" }, { "fjmpjps", { NULL }, 1873, "udp" }, { "fjswapsnp", { NULL }, 1874, "tcp" }, { "fjswapsnp", { NULL }, 1874, "udp" }, { "westell-stats", { NULL }, 1875, "tcp" }, { "westell-stats", { NULL }, 1875, "udp" }, { "ewcappsrv", { NULL }, 1876, "tcp" }, { "ewcappsrv", { NULL }, 1876, "udp" }, { "hp-webqosdb", { NULL }, 1877, "tcp" }, { "hp-webqosdb", { NULL }, 1877, "udp" }, { "drmsmc", { NULL }, 1878, "tcp" }, { "drmsmc", { NULL }, 1878, "udp" }, { "nettgain-nms", { NULL }, 1879, "tcp" }, { "nettgain-nms", { NULL }, 1879, "udp" }, { "vsat-control", { NULL }, 1880, "tcp" }, { "vsat-control", { NULL }, 1880, "udp" }, { "ibm-mqseries2", { NULL }, 1881, "tcp" }, { "ibm-mqseries2", { NULL }, 1881, "udp" }, { "ecsqdmn", { NULL }, 1882, "tcp" }, { "ecsqdmn", { NULL }, 1882, "udp" }, { "ibm-mqisdp", { NULL }, 1883, "tcp" }, { "ibm-mqisdp", { NULL }, 1883, "udp" }, { "idmaps", { NULL }, 1884, "tcp" }, { "idmaps", { NULL }, 1884, "udp" }, { "vrtstrapserver", { NULL }, 1885, "tcp" }, { "vrtstrapserver", { NULL }, 1885, "udp" }, { "leoip", { NULL }, 1886, "tcp" }, { "leoip", { NULL }, 1886, "udp" }, { "filex-lport", { NULL }, 1887, "tcp" }, { "filex-lport", { NULL }, 1887, "udp" }, { "ncconfig", { NULL }, 1888, "tcp" }, { "ncconfig", { NULL }, 1888, "udp" }, { "unify-adapter", { NULL }, 1889, "tcp" }, { "unify-adapter", { NULL }, 1889, "udp" }, { "wilkenlistener", { NULL }, 1890, "tcp" }, { "wilkenlistener", { NULL }, 1890, "udp" }, { "childkey-notif", 
{ NULL }, 1891, "tcp" }, { "childkey-notif", { NULL }, 1891, "udp" }, { "childkey-ctrl", { NULL }, 1892, "tcp" }, { "childkey-ctrl", { NULL }, 1892, "udp" }, { "elad", { NULL }, 1893, "tcp" }, { "elad", { NULL }, 1893, "udp" }, { "o2server-port", { NULL }, 1894, "tcp" }, { "o2server-port", { NULL }, 1894, "udp" }, { "b-novative-ls", { NULL }, 1896, "tcp" }, { "b-novative-ls", { NULL }, 1896, "udp" }, { "metaagent", { NULL }, 1897, "tcp" }, { "metaagent", { NULL }, 1897, "udp" }, { "cymtec-port", { NULL }, 1898, "tcp" }, { "cymtec-port", { NULL }, 1898, "udp" }, { "mc2studios", { NULL }, 1899, "tcp" }, { "mc2studios", { NULL }, 1899, "udp" }, { "ssdp", { NULL }, 1900, "tcp" }, { "ssdp", { NULL }, 1900, "udp" }, { "fjicl-tep-a", { NULL }, 1901, "tcp" }, { "fjicl-tep-a", { NULL }, 1901, "udp" }, { "fjicl-tep-b", { NULL }, 1902, "tcp" }, { "fjicl-tep-b", { NULL }, 1902, "udp" }, { "linkname", { NULL }, 1903, "tcp" }, { "linkname", { NULL }, 1903, "udp" }, { "fjicl-tep-c", { NULL }, 1904, "tcp" }, { "fjicl-tep-c", { NULL }, 1904, "udp" }, { "sugp", { NULL }, 1905, "tcp" }, { "sugp", { NULL }, 1905, "udp" }, { "tpmd", { NULL }, 1906, "tcp" }, { "tpmd", { NULL }, 1906, "udp" }, { "intrastar", { NULL }, 1907, "tcp" }, { "intrastar", { NULL }, 1907, "udp" }, { "dawn", { NULL }, 1908, "tcp" }, { "dawn", { NULL }, 1908, "udp" }, { "global-wlink", { NULL }, 1909, "tcp" }, { "global-wlink", { NULL }, 1909, "udp" }, { "ultrabac", { NULL }, 1910, "tcp" }, { "ultrabac", { NULL }, 1910, "udp" }, { "mtp", { NULL }, 1911, "tcp" }, { "mtp", { NULL }, 1911, "udp" }, { "rhp-iibp", { NULL }, 1912, "tcp" }, { "rhp-iibp", { NULL }, 1912, "udp" }, { "armadp", { NULL }, 1913, "tcp" }, { "armadp", { NULL }, 1913, "udp" }, { "elm-momentum", { NULL }, 1914, "tcp" }, { "elm-momentum", { NULL }, 1914, "udp" }, { "facelink", { NULL }, 1915, "tcp" }, { "facelink", { NULL }, 1915, "udp" }, { "persona", { NULL }, 1916, "tcp" }, { "persona", { NULL }, 1916, "udp" }, { "noagent", { NULL }, 1917, "tcp" }, { "noagent", { NULL }, 1917, "udp" }, { "can-nds", { NULL }, 1918, "tcp" }, { "can-nds", { NULL }, 1918, "udp" }, { "can-dch", { NULL }, 1919, "tcp" }, { "can-dch", { NULL }, 1919, "udp" }, { "can-ferret", { NULL }, 1920, "tcp" }, { "can-ferret", { NULL }, 1920, "udp" }, { "noadmin", { NULL }, 1921, "tcp" }, { "noadmin", { NULL }, 1921, "udp" }, { "tapestry", { NULL }, 1922, "tcp" }, { "tapestry", { NULL }, 1922, "udp" }, { "spice", { NULL }, 1923, "tcp" }, { "spice", { NULL }, 1923, "udp" }, { "xiip", { NULL }, 1924, "tcp" }, { "xiip", { NULL }, 1924, "udp" }, { "discovery-port", { NULL }, 1925, "tcp" }, { "discovery-port", { NULL }, 1925, "udp" }, { "egs", { NULL }, 1926, "tcp" }, { "egs", { NULL }, 1926, "udp" }, { "videte-cipc", { NULL }, 1927, "tcp" }, { "videte-cipc", { NULL }, 1927, "udp" }, { "emsd-port", { NULL }, 1928, "tcp" }, { "emsd-port", { NULL }, 1928, "udp" }, { "bandwiz-system", { NULL }, 1929, "tcp" }, { "bandwiz-system", { NULL }, 1929, "udp" }, { "driveappserver", { NULL }, 1930, "tcp" }, { "driveappserver", { NULL }, 1930, "udp" }, { "amdsched", { NULL }, 1931, "tcp" }, { "amdsched", { NULL }, 1931, "udp" }, { "ctt-broker", { NULL }, 1932, "tcp" }, { "ctt-broker", { NULL }, 1932, "udp" }, { "xmapi", { NULL }, 1933, "tcp" }, { "xmapi", { NULL }, 1933, "udp" }, { "xaapi", { NULL }, 1934, "tcp" }, { "xaapi", { NULL }, 1934, "udp" }, { "macromedia-fcs", { NULL }, 1935, "tcp" }, { "macromedia-fcs", { NULL }, 1935, "udp" }, { "jetcmeserver", { NULL }, 1936, "tcp" }, { "jetcmeserver", { NULL }, 1936, "udp" }, { 
"jwserver", { NULL }, 1937, "tcp" }, { "jwserver", { NULL }, 1937, "udp" }, { "jwclient", { NULL }, 1938, "tcp" }, { "jwclient", { NULL }, 1938, "udp" }, { "jvserver", { NULL }, 1939, "tcp" }, { "jvserver", { NULL }, 1939, "udp" }, { "jvclient", { NULL }, 1940, "tcp" }, { "jvclient", { NULL }, 1940, "udp" }, { "dic-aida", { NULL }, 1941, "tcp" }, { "dic-aida", { NULL }, 1941, "udp" }, { "res", { NULL }, 1942, "tcp" }, { "res", { NULL }, 1942, "udp" }, { "beeyond-media", { NULL }, 1943, "tcp" }, { "beeyond-media", { NULL }, 1943, "udp" }, { "close-combat", { NULL }, 1944, "tcp" }, { "close-combat", { NULL }, 1944, "udp" }, { "dialogic-elmd", { NULL }, 1945, "tcp" }, { "dialogic-elmd", { NULL }, 1945, "udp" }, { "tekpls", { NULL }, 1946, "tcp" }, { "tekpls", { NULL }, 1946, "udp" }, { "sentinelsrm", { NULL }, 1947, "tcp" }, { "sentinelsrm", { NULL }, 1947, "udp" }, { "eye2eye", { NULL }, 1948, "tcp" }, { "eye2eye", { NULL }, 1948, "udp" }, { "ismaeasdaqlive", { NULL }, 1949, "tcp" }, { "ismaeasdaqlive", { NULL }, 1949, "udp" }, { "ismaeasdaqtest", { NULL }, 1950, "tcp" }, { "ismaeasdaqtest", { NULL }, 1950, "udp" }, { "bcs-lmserver", { NULL }, 1951, "tcp" }, { "bcs-lmserver", { NULL }, 1951, "udp" }, { "mpnjsc", { NULL }, 1952, "tcp" }, { "mpnjsc", { NULL }, 1952, "udp" }, { "rapidbase", { NULL }, 1953, "tcp" }, { "rapidbase", { NULL }, 1953, "udp" }, { "abr-api", { NULL }, 1954, "tcp" }, { "abr-api", { NULL }, 1954, "udp" }, { "abr-secure", { NULL }, 1955, "tcp" }, { "abr-secure", { NULL }, 1955, "udp" }, { "vrtl-vmf-ds", { NULL }, 1956, "tcp" }, { "vrtl-vmf-ds", { NULL }, 1956, "udp" }, { "unix-status", { NULL }, 1957, "tcp" }, { "unix-status", { NULL }, 1957, "udp" }, { "dxadmind", { NULL }, 1958, "tcp" }, { "dxadmind", { NULL }, 1958, "udp" }, { "simp-all", { NULL }, 1959, "tcp" }, { "simp-all", { NULL }, 1959, "udp" }, { "nasmanager", { NULL }, 1960, "tcp" }, { "nasmanager", { NULL }, 1960, "udp" }, { "bts-appserver", { NULL }, 1961, "tcp" }, { "bts-appserver", { NULL }, 1961, "udp" }, { "biap-mp", { NULL }, 1962, "tcp" }, { "biap-mp", { NULL }, 1962, "udp" }, { "webmachine", { NULL }, 1963, "tcp" }, { "webmachine", { NULL }, 1963, "udp" }, { "solid-e-engine", { NULL }, 1964, "tcp" }, { "solid-e-engine", { NULL }, 1964, "udp" }, { "tivoli-npm", { NULL }, 1965, "tcp" }, { "tivoli-npm", { NULL }, 1965, "udp" }, { "slush", { NULL }, 1966, "tcp" }, { "slush", { NULL }, 1966, "udp" }, { "sns-quote", { NULL }, 1967, "tcp" }, { "sns-quote", { NULL }, 1967, "udp" }, { "lipsinc", { NULL }, 1968, "tcp" }, { "lipsinc", { NULL }, 1968, "udp" }, { "lipsinc1", { NULL }, 1969, "tcp" }, { "lipsinc1", { NULL }, 1969, "udp" }, { "netop-rc", { NULL }, 1970, "tcp" }, { "netop-rc", { NULL }, 1970, "udp" }, { "netop-school", { NULL }, 1971, "tcp" }, { "netop-school", { NULL }, 1971, "udp" }, { "intersys-cache", { NULL }, 1972, "tcp" }, { "intersys-cache", { NULL }, 1972, "udp" }, { "dlsrap", { NULL }, 1973, "tcp" }, { "dlsrap", { NULL }, 1973, "udp" }, { "drp", { NULL }, 1974, "tcp" }, { "drp", { NULL }, 1974, "udp" }, { "tcoflashagent", { NULL }, 1975, "tcp" }, { "tcoflashagent", { NULL }, 1975, "udp" }, { "tcoregagent", { NULL }, 1976, "tcp" }, { "tcoregagent", { NULL }, 1976, "udp" }, { "tcoaddressbook", { NULL }, 1977, "tcp" }, { "tcoaddressbook", { NULL }, 1977, "udp" }, { "unisql", { NULL }, 1978, "tcp" }, { "unisql", { NULL }, 1978, "udp" }, { "unisql-java", { NULL }, 1979, "tcp" }, { "unisql-java", { NULL }, 1979, "udp" }, { "pearldoc-xact", { NULL }, 1980, "tcp" }, { "pearldoc-xact", { NULL }, 1980, 
"udp" }, { "p2pq", { NULL }, 1981, "tcp" }, { "p2pq", { NULL }, 1981, "udp" }, { "estamp", { NULL }, 1982, "tcp" }, { "estamp", { NULL }, 1982, "udp" }, { "lhtp", { NULL }, 1983, "tcp" }, { "lhtp", { NULL }, 1983, "udp" }, { "bb", { NULL }, 1984, "tcp" }, { "bb", { NULL }, 1984, "udp" }, { "hsrp", { NULL }, 1985, "tcp" }, { "hsrp", { NULL }, 1985, "udp" }, { "licensedaemon", { NULL }, 1986, "tcp" }, { "licensedaemon", { NULL }, 1986, "udp" }, { "tr-rsrb-p1", { NULL }, 1987, "tcp" }, { "tr-rsrb-p1", { NULL }, 1987, "udp" }, { "tr-rsrb-p2", { NULL }, 1988, "tcp" }, { "tr-rsrb-p2", { NULL }, 1988, "udp" }, { "tr-rsrb-p3", { NULL }, 1989, "tcp" }, { "tr-rsrb-p3", { NULL }, 1989, "udp" }, { "mshnet", { NULL }, 1989, "tcp" }, { "mshnet", { NULL }, 1989, "udp" }, { "stun-p1", { NULL }, 1990, "tcp" }, { "stun-p1", { NULL }, 1990, "udp" }, { "stun-p2", { NULL }, 1991, "tcp" }, { "stun-p2", { NULL }, 1991, "udp" }, { "stun-p3", { NULL }, 1992, "tcp" }, { "stun-p3", { NULL }, 1992, "udp" }, { "ipsendmsg", { NULL }, 1992, "tcp" }, { "ipsendmsg", { NULL }, 1992, "udp" }, { "snmp-tcp-port", { NULL }, 1993, "tcp" }, { "snmp-tcp-port", { NULL }, 1993, "udp" }, { "stun-port", { NULL }, 1994, "tcp" }, { "stun-port", { NULL }, 1994, "udp" }, { "perf-port", { NULL }, 1995, "tcp" }, { "perf-port", { NULL }, 1995, "udp" }, { "tr-rsrb-port", { NULL }, 1996, "tcp" }, { "tr-rsrb-port", { NULL }, 1996, "udp" }, { "gdp-port", { NULL }, 1997, "tcp" }, { "gdp-port", { NULL }, 1997, "udp" }, { "x25-svc-port", { NULL }, 1998, "tcp" }, { "x25-svc-port", { NULL }, 1998, "udp" }, { "tcp-id-port", { NULL }, 1999, "tcp" }, { "tcp-id-port", { NULL }, 1999, "udp" }, { "cisco-sccp", { NULL }, 2000, "tcp" }, { "cisco-sccp", { NULL }, 2000, "udp" }, { "dc", { NULL }, 2001, "tcp" }, { "wizard", { NULL }, 2001, "udp" }, { "globe", { NULL }, 2002, "tcp" }, { "globe", { NULL }, 2002, "udp" }, { "brutus", { NULL }, 2003, "tcp" }, { "brutus", { NULL }, 2003, "udp" }, { "mailbox", { NULL }, 2004, "tcp" }, { "emce", { NULL }, 2004, "udp" }, { "berknet", { NULL }, 2005, "tcp" }, { "oracle", { NULL }, 2005, "udp" }, { "invokator", { NULL }, 2006, "tcp" }, { "raid-cd", { NULL }, 2006, "udp" }, { "dectalk", { NULL }, 2007, "tcp" }, { "raid-am", { NULL }, 2007, "udp" }, { "conf", { NULL }, 2008, "tcp" }, { "terminaldb", { NULL }, 2008, "udp" }, { "news", { NULL }, 2009, "tcp" }, { "whosockami", { NULL }, 2009, "udp" }, { "search", { NULL }, 2010, "tcp" }, { "pipe_server", { NULL }, 2010, "udp" }, { "raid-cc", { NULL }, 2011, "tcp" }, { "servserv", { NULL }, 2011, "udp" }, { "ttyinfo", { NULL }, 2012, "tcp" }, { "raid-ac", { NULL }, 2012, "udp" }, { "raid-am", { NULL }, 2013, "tcp" }, { "raid-cd", { NULL }, 2013, "udp" }, { "troff", { NULL }, 2014, "tcp" }, { "raid-sf", { NULL }, 2014, "udp" }, { "cypress", { NULL }, 2015, "tcp" }, { "raid-cs", { NULL }, 2015, "udp" }, { "bootserver", { NULL }, 2016, "tcp" }, { "bootserver", { NULL }, 2016, "udp" }, { "cypress-stat", { NULL }, 2017, "tcp" }, { "bootclient", { NULL }, 2017, "udp" }, { "terminaldb", { NULL }, 2018, "tcp" }, { "rellpack", { NULL }, 2018, "udp" }, { "whosockami", { NULL }, 2019, "tcp" }, { "about", { NULL }, 2019, "udp" }, { "xinupageserver", { NULL }, 2020, "tcp" }, { "xinupageserver", { NULL }, 2020, "udp" }, { "servexec", { NULL }, 2021, "tcp" }, { "xinuexpansion1", { NULL }, 2021, "udp" }, { "down", { NULL }, 2022, "tcp" }, { "xinuexpansion2", { NULL }, 2022, "udp" }, { "xinuexpansion3", { NULL }, 2023, "tcp" }, { "xinuexpansion3", { NULL }, 2023, "udp" }, { "xinuexpansion4", 
{ NULL }, 2024, "tcp" }, { "xinuexpansion4", { NULL }, 2024, "udp" }, { "ellpack", { NULL }, 2025, "tcp" }, { "xribs", { NULL }, 2025, "udp" }, { "scrabble", { NULL }, 2026, "tcp" }, { "scrabble", { NULL }, 2026, "udp" }, { "shadowserver", { NULL }, 2027, "tcp" }, { "shadowserver", { NULL }, 2027, "udp" }, { "submitserver", { NULL }, 2028, "tcp" }, { "submitserver", { NULL }, 2028, "udp" }, { "hsrpv6", { NULL }, 2029, "tcp" }, { "hsrpv6", { NULL }, 2029, "udp" }, { "device2", { NULL }, 2030, "tcp" }, { "device2", { NULL }, 2030, "udp" }, { "mobrien-chat", { NULL }, 2031, "tcp" }, { "mobrien-chat", { NULL }, 2031, "udp" }, { "blackboard", { NULL }, 2032, "tcp" }, { "blackboard", { NULL }, 2032, "udp" }, { "glogger", { NULL }, 2033, "tcp" }, { "glogger", { NULL }, 2033, "udp" }, { "scoremgr", { NULL }, 2034, "tcp" }, { "scoremgr", { NULL }, 2034, "udp" }, { "imsldoc", { NULL }, 2035, "tcp" }, { "imsldoc", { NULL }, 2035, "udp" }, { "e-dpnet", { NULL }, 2036, "tcp" }, { "e-dpnet", { NULL }, 2036, "udp" }, { "applus", { NULL }, 2037, "tcp" }, { "applus", { NULL }, 2037, "udp" }, { "objectmanager", { NULL }, 2038, "tcp" }, { "objectmanager", { NULL }, 2038, "udp" }, { "prizma", { NULL }, 2039, "tcp" }, { "prizma", { NULL }, 2039, "udp" }, { "lam", { NULL }, 2040, "tcp" }, { "lam", { NULL }, 2040, "udp" }, { "interbase", { NULL }, 2041, "tcp" }, { "interbase", { NULL }, 2041, "udp" }, { "isis", { NULL }, 2042, "tcp" }, { "isis", { NULL }, 2042, "udp" }, { "isis-bcast", { NULL }, 2043, "tcp" }, { "isis-bcast", { NULL }, 2043, "udp" }, { "rimsl", { NULL }, 2044, "tcp" }, { "rimsl", { NULL }, 2044, "udp" }, { "cdfunc", { NULL }, 2045, "tcp" }, { "cdfunc", { NULL }, 2045, "udp" }, { "sdfunc", { NULL }, 2046, "tcp" }, { "sdfunc", { NULL }, 2046, "udp" }, { "dls", { NULL }, 2047, "tcp" }, { "dls", { NULL }, 2047, "udp" }, { "dls-monitor", { NULL }, 2048, "tcp" }, { "dls-monitor", { NULL }, 2048, "udp" }, { "shilp", { NULL }, 2049, "tcp" }, { "shilp", { NULL }, 2049, "udp" }, { "nfs", { NULL }, 2049, "tcp" }, { "nfs", { NULL }, 2049, "udp" }, { "nfs", { NULL }, 2049, "sctp" }, { "av-emb-config", { NULL }, 2050, "tcp" }, { "av-emb-config", { NULL }, 2050, "udp" }, { "epnsdp", { NULL }, 2051, "tcp" }, { "epnsdp", { NULL }, 2051, "udp" }, { "clearvisn", { NULL }, 2052, "tcp" }, { "clearvisn", { NULL }, 2052, "udp" }, { "lot105-ds-upd", { NULL }, 2053, "tcp" }, { "lot105-ds-upd", { NULL }, 2053, "udp" }, { "weblogin", { NULL }, 2054, "tcp" }, { "weblogin", { NULL }, 2054, "udp" }, { "iop", { NULL }, 2055, "tcp" }, { "iop", { NULL }, 2055, "udp" }, { "omnisky", { NULL }, 2056, "tcp" }, { "omnisky", { NULL }, 2056, "udp" }, { "rich-cp", { NULL }, 2057, "tcp" }, { "rich-cp", { NULL }, 2057, "udp" }, { "newwavesearch", { NULL }, 2058, "tcp" }, { "newwavesearch", { NULL }, 2058, "udp" }, { "bmc-messaging", { NULL }, 2059, "tcp" }, { "bmc-messaging", { NULL }, 2059, "udp" }, { "teleniumdaemon", { NULL }, 2060, "tcp" }, { "teleniumdaemon", { NULL }, 2060, "udp" }, { "netmount", { NULL }, 2061, "tcp" }, { "netmount", { NULL }, 2061, "udp" }, { "icg-swp", { NULL }, 2062, "tcp" }, { "icg-swp", { NULL }, 2062, "udp" }, { "icg-bridge", { NULL }, 2063, "tcp" }, { "icg-bridge", { NULL }, 2063, "udp" }, { "icg-iprelay", { NULL }, 2064, "tcp" }, { "icg-iprelay", { NULL }, 2064, "udp" }, { "dlsrpn", { NULL }, 2065, "tcp" }, { "dlsrpn", { NULL }, 2065, "udp" }, { "aura", { NULL }, 2066, "tcp" }, { "aura", { NULL }, 2066, "udp" }, { "dlswpn", { NULL }, 2067, "tcp" }, { "dlswpn", { NULL }, 2067, "udp" }, { "avauthsrvprtcl", { 
NULL }, 2068, "tcp" }, { "avauthsrvprtcl", { NULL }, 2068, "udp" }, { "event-port", { NULL }, 2069, "tcp" }, { "event-port", { NULL }, 2069, "udp" }, { "ah-esp-encap", { NULL }, 2070, "tcp" }, { "ah-esp-encap", { NULL }, 2070, "udp" }, { "acp-port", { NULL }, 2071, "tcp" }, { "acp-port", { NULL }, 2071, "udp" }, { "msync", { NULL }, 2072, "tcp" }, { "msync", { NULL }, 2072, "udp" }, { "gxs-data-port", { NULL }, 2073, "tcp" }, { "gxs-data-port", { NULL }, 2073, "udp" }, { "vrtl-vmf-sa", { NULL }, 2074, "tcp" }, { "vrtl-vmf-sa", { NULL }, 2074, "udp" }, { "newlixengine", { NULL }, 2075, "tcp" }, { "newlixengine", { NULL }, 2075, "udp" }, { "newlixconfig", { NULL }, 2076, "tcp" }, { "newlixconfig", { NULL }, 2076, "udp" }, { "tsrmagt", { NULL }, 2077, "tcp" }, { "tsrmagt", { NULL }, 2077, "udp" }, { "tpcsrvr", { NULL }, 2078, "tcp" }, { "tpcsrvr", { NULL }, 2078, "udp" }, { "idware-router", { NULL }, 2079, "tcp" }, { "idware-router", { NULL }, 2079, "udp" }, { "autodesk-nlm", { NULL }, 2080, "tcp" }, { "autodesk-nlm", { NULL }, 2080, "udp" }, { "kme-trap-port", { NULL }, 2081, "tcp" }, { "kme-trap-port", { NULL }, 2081, "udp" }, { "infowave", { NULL }, 2082, "tcp" }, { "infowave", { NULL }, 2082, "udp" }, { "radsec", { NULL }, 2083, "tcp" }, { "radsec", { NULL }, 2083, "udp" }, { "sunclustergeo", { NULL }, 2084, "tcp" }, { "sunclustergeo", { NULL }, 2084, "udp" }, { "ada-cip", { NULL }, 2085, "tcp" }, { "ada-cip", { NULL }, 2085, "udp" }, { "gnunet", { NULL }, 2086, "tcp" }, { "gnunet", { NULL }, 2086, "udp" }, { "eli", { NULL }, 2087, "tcp" }, { "eli", { NULL }, 2087, "udp" }, { "ip-blf", { NULL }, 2088, "tcp" }, { "ip-blf", { NULL }, 2088, "udp" }, { "sep", { NULL }, 2089, "tcp" }, { "sep", { NULL }, 2089, "udp" }, { "lrp", { NULL }, 2090, "tcp" }, { "lrp", { NULL }, 2090, "udp" }, { "prp", { NULL }, 2091, "tcp" }, { "prp", { NULL }, 2091, "udp" }, { "descent3", { NULL }, 2092, "tcp" }, { "descent3", { NULL }, 2092, "udp" }, { "nbx-cc", { NULL }, 2093, "tcp" }, { "nbx-cc", { NULL }, 2093, "udp" }, { "nbx-au", { NULL }, 2094, "tcp" }, { "nbx-au", { NULL }, 2094, "udp" }, { "nbx-ser", { NULL }, 2095, "tcp" }, { "nbx-ser", { NULL }, 2095, "udp" }, { "nbx-dir", { NULL }, 2096, "tcp" }, { "nbx-dir", { NULL }, 2096, "udp" }, { "jetformpreview", { NULL }, 2097, "tcp" }, { "jetformpreview", { NULL }, 2097, "udp" }, { "dialog-port", { NULL }, 2098, "tcp" }, { "dialog-port", { NULL }, 2098, "udp" }, { "h2250-annex-g", { NULL }, 2099, "tcp" }, { "h2250-annex-g", { NULL }, 2099, "udp" }, { "amiganetfs", { NULL }, 2100, "tcp" }, { "amiganetfs", { NULL }, 2100, "udp" }, { "rtcm-sc104", { NULL }, 2101, "tcp" }, { "rtcm-sc104", { NULL }, 2101, "udp" }, { "zephyr-srv", { NULL }, 2102, "tcp" }, { "zephyr-srv", { NULL }, 2102, "udp" }, { "zephyr-clt", { NULL }, 2103, "tcp" }, { "zephyr-clt", { NULL }, 2103, "udp" }, { "zephyr-hm", { NULL }, 2104, "tcp" }, { "zephyr-hm", { NULL }, 2104, "udp" }, { "minipay", { NULL }, 2105, "tcp" }, { "minipay", { NULL }, 2105, "udp" }, { "mzap", { NULL }, 2106, "tcp" }, { "mzap", { NULL }, 2106, "udp" }, { "bintec-admin", { NULL }, 2107, "tcp" }, { "bintec-admin", { NULL }, 2107, "udp" }, { "comcam", { NULL }, 2108, "tcp" }, { "comcam", { NULL }, 2108, "udp" }, { "ergolight", { NULL }, 2109, "tcp" }, { "ergolight", { NULL }, 2109, "udp" }, { "umsp", { NULL }, 2110, "tcp" }, { "umsp", { NULL }, 2110, "udp" }, { "dsatp", { NULL }, 2111, "tcp" }, { "dsatp", { NULL }, 2111, "udp" }, { "idonix-metanet", { NULL }, 2112, "tcp" }, { "idonix-metanet", { NULL }, 2112, "udp" }, { 
"hsl-storm", { NULL }, 2113, "tcp" }, { "hsl-storm", { NULL }, 2113, "udp" }, { "newheights", { NULL }, 2114, "tcp" }, { "newheights", { NULL }, 2114, "udp" }, { "kdm", { NULL }, 2115, "tcp" }, { "kdm", { NULL }, 2115, "udp" }, { "ccowcmr", { NULL }, 2116, "tcp" }, { "ccowcmr", { NULL }, 2116, "udp" }, { "mentaclient", { NULL }, 2117, "tcp" }, { "mentaclient", { NULL }, 2117, "udp" }, { "mentaserver", { NULL }, 2118, "tcp" }, { "mentaserver", { NULL }, 2118, "udp" }, { "gsigatekeeper", { NULL }, 2119, "tcp" }, { "gsigatekeeper", { NULL }, 2119, "udp" }, { "qencp", { NULL }, 2120, "tcp" }, { "qencp", { NULL }, 2120, "udp" }, { "scientia-ssdb", { NULL }, 2121, "tcp" }, { "scientia-ssdb", { NULL }, 2121, "udp" }, { "caupc-remote", { NULL }, 2122, "tcp" }, { "caupc-remote", { NULL }, 2122, "udp" }, { "gtp-control", { NULL }, 2123, "tcp" }, { "gtp-control", { NULL }, 2123, "udp" }, { "elatelink", { NULL }, 2124, "tcp" }, { "elatelink", { NULL }, 2124, "udp" }, { "lockstep", { NULL }, 2125, "tcp" }, { "lockstep", { NULL }, 2125, "udp" }, { "pktcable-cops", { NULL }, 2126, "tcp" }, { "pktcable-cops", { NULL }, 2126, "udp" }, { "index-pc-wb", { NULL }, 2127, "tcp" }, { "index-pc-wb", { NULL }, 2127, "udp" }, { "net-steward", { NULL }, 2128, "tcp" }, { "net-steward", { NULL }, 2128, "udp" }, { "cs-live", { NULL }, 2129, "tcp" }, { "cs-live", { NULL }, 2129, "udp" }, { "xds", { NULL }, 2130, "tcp" }, { "xds", { NULL }, 2130, "udp" }, { "avantageb2b", { NULL }, 2131, "tcp" }, { "avantageb2b", { NULL }, 2131, "udp" }, { "solera-epmap", { NULL }, 2132, "tcp" }, { "solera-epmap", { NULL }, 2132, "udp" }, { "zymed-zpp", { NULL }, 2133, "tcp" }, { "zymed-zpp", { NULL }, 2133, "udp" }, { "avenue", { NULL }, 2134, "tcp" }, { "avenue", { NULL }, 2134, "udp" }, { "gris", { NULL }, 2135, "tcp" }, { "gris", { NULL }, 2135, "udp" }, { "appworxsrv", { NULL }, 2136, "tcp" }, { "appworxsrv", { NULL }, 2136, "udp" }, { "connect", { NULL }, 2137, "tcp" }, { "connect", { NULL }, 2137, "udp" }, { "unbind-cluster", { NULL }, 2138, "tcp" }, { "unbind-cluster", { NULL }, 2138, "udp" }, { "ias-auth", { NULL }, 2139, "tcp" }, { "ias-auth", { NULL }, 2139, "udp" }, { "ias-reg", { NULL }, 2140, "tcp" }, { "ias-reg", { NULL }, 2140, "udp" }, { "ias-admind", { NULL }, 2141, "tcp" }, { "ias-admind", { NULL }, 2141, "udp" }, { "tdmoip", { NULL }, 2142, "tcp" }, { "tdmoip", { NULL }, 2142, "udp" }, { "lv-jc", { NULL }, 2143, "tcp" }, { "lv-jc", { NULL }, 2143, "udp" }, { "lv-ffx", { NULL }, 2144, "tcp" }, { "lv-ffx", { NULL }, 2144, "udp" }, { "lv-pici", { NULL }, 2145, "tcp" }, { "lv-pici", { NULL }, 2145, "udp" }, { "lv-not", { NULL }, 2146, "tcp" }, { "lv-not", { NULL }, 2146, "udp" }, { "lv-auth", { NULL }, 2147, "tcp" }, { "lv-auth", { NULL }, 2147, "udp" }, { "veritas-ucl", { NULL }, 2148, "tcp" }, { "veritas-ucl", { NULL }, 2148, "udp" }, { "acptsys", { NULL }, 2149, "tcp" }, { "acptsys", { NULL }, 2149, "udp" }, { "dynamic3d", { NULL }, 2150, "tcp" }, { "dynamic3d", { NULL }, 2150, "udp" }, { "docent", { NULL }, 2151, "tcp" }, { "docent", { NULL }, 2151, "udp" }, { "gtp-user", { NULL }, 2152, "tcp" }, { "gtp-user", { NULL }, 2152, "udp" }, { "ctlptc", { NULL }, 2153, "tcp" }, { "ctlptc", { NULL }, 2153, "udp" }, { "stdptc", { NULL }, 2154, "tcp" }, { "stdptc", { NULL }, 2154, "udp" }, { "brdptc", { NULL }, 2155, "tcp" }, { "brdptc", { NULL }, 2155, "udp" }, { "trp", { NULL }, 2156, "tcp" }, { "trp", { NULL }, 2156, "udp" }, { "xnds", { NULL }, 2157, "tcp" }, { "xnds", { NULL }, 2157, "udp" }, { "touchnetplus", { NULL }, 
2158, "tcp" }, { "touchnetplus", { NULL }, 2158, "udp" }, { "gdbremote", { NULL }, 2159, "tcp" }, { "gdbremote", { NULL }, 2159, "udp" }, { "apc-2160", { NULL }, 2160, "tcp" }, { "apc-2160", { NULL }, 2160, "udp" }, { "apc-2161", { NULL }, 2161, "tcp" }, { "apc-2161", { NULL }, 2161, "udp" }, { "navisphere", { NULL }, 2162, "tcp" }, { "navisphere", { NULL }, 2162, "udp" }, { "navisphere-sec", { NULL }, 2163, "tcp" }, { "navisphere-sec", { NULL }, 2163, "udp" }, { "ddns-v3", { NULL }, 2164, "tcp" }, { "ddns-v3", { NULL }, 2164, "udp" }, { "x-bone-api", { NULL }, 2165, "tcp" }, { "x-bone-api", { NULL }, 2165, "udp" }, { "iwserver", { NULL }, 2166, "tcp" }, { "iwserver", { NULL }, 2166, "udp" }, { "raw-serial", { NULL }, 2167, "tcp" }, { "raw-serial", { NULL }, 2167, "udp" }, { "easy-soft-mux", { NULL }, 2168, "tcp" }, { "easy-soft-mux", { NULL }, 2168, "udp" }, { "brain", { NULL }, 2169, "tcp" }, { "brain", { NULL }, 2169, "udp" }, { "eyetv", { NULL }, 2170, "tcp" }, { "eyetv", { NULL }, 2170, "udp" }, { "msfw-storage", { NULL }, 2171, "tcp" }, { "msfw-storage", { NULL }, 2171, "udp" }, { "msfw-s-storage", { NULL }, 2172, "tcp" }, { "msfw-s-storage", { NULL }, 2172, "udp" }, { "msfw-replica", { NULL }, 2173, "tcp" }, { "msfw-replica", { NULL }, 2173, "udp" }, { "msfw-array", { NULL }, 2174, "tcp" }, { "msfw-array", { NULL }, 2174, "udp" }, { "airsync", { NULL }, 2175, "tcp" }, { "airsync", { NULL }, 2175, "udp" }, { "rapi", { NULL }, 2176, "tcp" }, { "rapi", { NULL }, 2176, "udp" }, { "qwave", { NULL }, 2177, "tcp" }, { "qwave", { NULL }, 2177, "udp" }, { "bitspeer", { NULL }, 2178, "tcp" }, { "bitspeer", { NULL }, 2178, "udp" }, { "vmrdp", { NULL }, 2179, "tcp" }, { "vmrdp", { NULL }, 2179, "udp" }, { "mc-gt-srv", { NULL }, 2180, "tcp" }, { "mc-gt-srv", { NULL }, 2180, "udp" }, { "eforward", { NULL }, 2181, "tcp" }, { "eforward", { NULL }, 2181, "udp" }, { "cgn-stat", { NULL }, 2182, "tcp" }, { "cgn-stat", { NULL }, 2182, "udp" }, { "cgn-config", { NULL }, 2183, "tcp" }, { "cgn-config", { NULL }, 2183, "udp" }, { "nvd", { NULL }, 2184, "tcp" }, { "nvd", { NULL }, 2184, "udp" }, { "onbase-dds", { NULL }, 2185, "tcp" }, { "onbase-dds", { NULL }, 2185, "udp" }, { "gtaua", { NULL }, 2186, "tcp" }, { "gtaua", { NULL }, 2186, "udp" }, { "ssmc", { NULL }, 2187, "tcp" }, { "ssmd", { NULL }, 2187, "udp" }, { "tivoconnect", { NULL }, 2190, "tcp" }, { "tivoconnect", { NULL }, 2190, "udp" }, { "tvbus", { NULL }, 2191, "tcp" }, { "tvbus", { NULL }, 2191, "udp" }, { "asdis", { NULL }, 2192, "tcp" }, { "asdis", { NULL }, 2192, "udp" }, { "drwcs", { NULL }, 2193, "tcp" }, { "drwcs", { NULL }, 2193, "udp" }, { "mnp-exchange", { NULL }, 2197, "tcp" }, { "mnp-exchange", { NULL }, 2197, "udp" }, { "onehome-remote", { NULL }, 2198, "tcp" }, { "onehome-remote", { NULL }, 2198, "udp" }, { "onehome-help", { NULL }, 2199, "tcp" }, { "onehome-help", { NULL }, 2199, "udp" }, { "ici", { NULL }, 2200, "tcp" }, { "ici", { NULL }, 2200, "udp" }, { "ats", { NULL }, 2201, "tcp" }, { "ats", { NULL }, 2201, "udp" }, { "imtc-map", { NULL }, 2202, "tcp" }, { "imtc-map", { NULL }, 2202, "udp" }, { "b2-runtime", { NULL }, 2203, "tcp" }, { "b2-runtime", { NULL }, 2203, "udp" }, { "b2-license", { NULL }, 2204, "tcp" }, { "b2-license", { NULL }, 2204, "udp" }, { "jps", { NULL }, 2205, "tcp" }, { "jps", { NULL }, 2205, "udp" }, { "hpocbus", { NULL }, 2206, "tcp" }, { "hpocbus", { NULL }, 2206, "udp" }, { "hpssd", { NULL }, 2207, "tcp" }, { "hpssd", { NULL }, 2207, "udp" }, { "hpiod", { NULL }, 2208, "tcp" }, { "hpiod", { NULL }, 
2208, "udp" }, { "rimf-ps", { NULL }, 2209, "tcp" }, { "rimf-ps", { NULL }, 2209, "udp" }, { "noaaport", { NULL }, 2210, "tcp" }, { "noaaport", { NULL }, 2210, "udp" }, { "emwin", { NULL }, 2211, "tcp" }, { "emwin", { NULL }, 2211, "udp" }, { "leecoposserver", { NULL }, 2212, "tcp" }, { "leecoposserver", { NULL }, 2212, "udp" }, { "kali", { NULL }, 2213, "tcp" }, { "kali", { NULL }, 2213, "udp" }, { "rpi", { NULL }, 2214, "tcp" }, { "rpi", { NULL }, 2214, "udp" }, { "ipcore", { NULL }, 2215, "tcp" }, { "ipcore", { NULL }, 2215, "udp" }, { "vtu-comms", { NULL }, 2216, "tcp" }, { "vtu-comms", { NULL }, 2216, "udp" }, { "gotodevice", { NULL }, 2217, "tcp" }, { "gotodevice", { NULL }, 2217, "udp" }, { "bounzza", { NULL }, 2218, "tcp" }, { "bounzza", { NULL }, 2218, "udp" }, { "netiq-ncap", { NULL }, 2219, "tcp" }, { "netiq-ncap", { NULL }, 2219, "udp" }, { "netiq", { NULL }, 2220, "tcp" }, { "netiq", { NULL }, 2220, "udp" }, { "rockwell-csp1", { NULL }, 2221, "tcp" }, { "rockwell-csp1", { NULL }, 2221, "udp" }, { "EtherNet/IP-1", { NULL }, 2222, "tcp" }, { "EtherNet/IP-1", { NULL }, 2222, "udp" }, { "rockwell-csp2", { NULL }, 2223, "tcp" }, { "rockwell-csp2", { NULL }, 2223, "udp" }, { "efi-mg", { NULL }, 2224, "tcp" }, { "efi-mg", { NULL }, 2224, "udp" }, { "rcip-itu", { NULL }, 2225, "tcp" }, { "rcip-itu", { NULL }, 2225, "sctp" }, { "di-drm", { NULL }, 2226, "tcp" }, { "di-drm", { NULL }, 2226, "udp" }, { "di-msg", { NULL }, 2227, "tcp" }, { "di-msg", { NULL }, 2227, "udp" }, { "ehome-ms", { NULL }, 2228, "tcp" }, { "ehome-ms", { NULL }, 2228, "udp" }, { "datalens", { NULL }, 2229, "tcp" }, { "datalens", { NULL }, 2229, "udp" }, { "queueadm", { NULL }, 2230, "tcp" }, { "queueadm", { NULL }, 2230, "udp" }, { "wimaxasncp", { NULL }, 2231, "tcp" }, { "wimaxasncp", { NULL }, 2231, "udp" }, { "ivs-video", { NULL }, 2232, "tcp" }, { "ivs-video", { NULL }, 2232, "udp" }, { "infocrypt", { NULL }, 2233, "tcp" }, { "infocrypt", { NULL }, 2233, "udp" }, { "directplay", { NULL }, 2234, "tcp" }, { "directplay", { NULL }, 2234, "udp" }, { "sercomm-wlink", { NULL }, 2235, "tcp" }, { "sercomm-wlink", { NULL }, 2235, "udp" }, { "nani", { NULL }, 2236, "tcp" }, { "nani", { NULL }, 2236, "udp" }, { "optech-port1-lm", { NULL }, 2237, "tcp" }, { "optech-port1-lm", { NULL }, 2237, "udp" }, { "aviva-sna", { NULL }, 2238, "tcp" }, { "aviva-sna", { NULL }, 2238, "udp" }, { "imagequery", { NULL }, 2239, "tcp" }, { "imagequery", { NULL }, 2239, "udp" }, { "recipe", { NULL }, 2240, "tcp" }, { "recipe", { NULL }, 2240, "udp" }, { "ivsd", { NULL }, 2241, "tcp" }, { "ivsd", { NULL }, 2241, "udp" }, { "foliocorp", { NULL }, 2242, "tcp" }, { "foliocorp", { NULL }, 2242, "udp" }, { "magicom", { NULL }, 2243, "tcp" }, { "magicom", { NULL }, 2243, "udp" }, { "nmsserver", { NULL }, 2244, "tcp" }, { "nmsserver", { NULL }, 2244, "udp" }, { "hao", { NULL }, 2245, "tcp" }, { "hao", { NULL }, 2245, "udp" }, { "pc-mta-addrmap", { NULL }, 2246, "tcp" }, { "pc-mta-addrmap", { NULL }, 2246, "udp" }, { "antidotemgrsvr", { NULL }, 2247, "tcp" }, { "antidotemgrsvr", { NULL }, 2247, "udp" }, { "ums", { NULL }, 2248, "tcp" }, { "ums", { NULL }, 2248, "udp" }, { "rfmp", { NULL }, 2249, "tcp" }, { "rfmp", { NULL }, 2249, "udp" }, { "remote-collab", { NULL }, 2250, "tcp" }, { "remote-collab", { NULL }, 2250, "udp" }, { "dif-port", { NULL }, 2251, "tcp" }, { "dif-port", { NULL }, 2251, "udp" }, { "njenet-ssl", { NULL }, 2252, "tcp" }, { "njenet-ssl", { NULL }, 2252, "udp" }, { "dtv-chan-req", { NULL }, 2253, "tcp" }, { "dtv-chan-req", { NULL }, 
2253, "udp" }, { "seispoc", { NULL }, 2254, "tcp" }, { "seispoc", { NULL }, 2254, "udp" }, { "vrtp", { NULL }, 2255, "tcp" }, { "vrtp", { NULL }, 2255, "udp" }, { "pcc-mfp", { NULL }, 2256, "tcp" }, { "pcc-mfp", { NULL }, 2256, "udp" }, { "simple-tx-rx", { NULL }, 2257, "tcp" }, { "simple-tx-rx", { NULL }, 2257, "udp" }, { "rcts", { NULL }, 2258, "tcp" }, { "rcts", { NULL }, 2258, "udp" }, { "acd-pm", { NULL }, 2259, "tcp" }, { "acd-pm", { NULL }, 2259, "udp" }, { "apc-2260", { NULL }, 2260, "tcp" }, { "apc-2260", { NULL }, 2260, "udp" }, { "comotionmaster", { NULL }, 2261, "tcp" }, { "comotionmaster", { NULL }, 2261, "udp" }, { "comotionback", { NULL }, 2262, "tcp" }, { "comotionback", { NULL }, 2262, "udp" }, { "ecwcfg", { NULL }, 2263, "tcp" }, { "ecwcfg", { NULL }, 2263, "udp" }, { "apx500api-1", { NULL }, 2264, "tcp" }, { "apx500api-1", { NULL }, 2264, "udp" }, { "apx500api-2", { NULL }, 2265, "tcp" }, { "apx500api-2", { NULL }, 2265, "udp" }, { "mfserver", { NULL }, 2266, "tcp" }, { "mfserver", { NULL }, 2266, "udp" }, { "ontobroker", { NULL }, 2267, "tcp" }, { "ontobroker", { NULL }, 2267, "udp" }, { "amt", { NULL }, 2268, "tcp" }, { "amt", { NULL }, 2268, "udp" }, { "mikey", { NULL }, 2269, "tcp" }, { "mikey", { NULL }, 2269, "udp" }, { "starschool", { NULL }, 2270, "tcp" }, { "starschool", { NULL }, 2270, "udp" }, { "mmcals", { NULL }, 2271, "tcp" }, { "mmcals", { NULL }, 2271, "udp" }, { "mmcal", { NULL }, 2272, "tcp" }, { "mmcal", { NULL }, 2272, "udp" }, { "mysql-im", { NULL }, 2273, "tcp" }, { "mysql-im", { NULL }, 2273, "udp" }, { "pcttunnell", { NULL }, 2274, "tcp" }, { "pcttunnell", { NULL }, 2274, "udp" }, { "ibridge-data", { NULL }, 2275, "tcp" }, { "ibridge-data", { NULL }, 2275, "udp" }, { "ibridge-mgmt", { NULL }, 2276, "tcp" }, { "ibridge-mgmt", { NULL }, 2276, "udp" }, { "bluectrlproxy", { NULL }, 2277, "tcp" }, { "bluectrlproxy", { NULL }, 2277, "udp" }, { "s3db", { NULL }, 2278, "tcp" }, { "s3db", { NULL }, 2278, "udp" }, { "xmquery", { NULL }, 2279, "tcp" }, { "xmquery", { NULL }, 2279, "udp" }, { "lnvpoller", { NULL }, 2280, "tcp" }, { "lnvpoller", { NULL }, 2280, "udp" }, { "lnvconsole", { NULL }, 2281, "tcp" }, { "lnvconsole", { NULL }, 2281, "udp" }, { "lnvalarm", { NULL }, 2282, "tcp" }, { "lnvalarm", { NULL }, 2282, "udp" }, { "lnvstatus", { NULL }, 2283, "tcp" }, { "lnvstatus", { NULL }, 2283, "udp" }, { "lnvmaps", { NULL }, 2284, "tcp" }, { "lnvmaps", { NULL }, 2284, "udp" }, { "lnvmailmon", { NULL }, 2285, "tcp" }, { "lnvmailmon", { NULL }, 2285, "udp" }, { "nas-metering", { NULL }, 2286, "tcp" }, { "nas-metering", { NULL }, 2286, "udp" }, { "dna", { NULL }, 2287, "tcp" }, { "dna", { NULL }, 2287, "udp" }, { "netml", { NULL }, 2288, "tcp" }, { "netml", { NULL }, 2288, "udp" }, { "dict-lookup", { NULL }, 2289, "tcp" }, { "dict-lookup", { NULL }, 2289, "udp" }, { "sonus-logging", { NULL }, 2290, "tcp" }, { "sonus-logging", { NULL }, 2290, "udp" }, { "eapsp", { NULL }, 2291, "tcp" }, { "eapsp", { NULL }, 2291, "udp" }, { "mib-streaming", { NULL }, 2292, "tcp" }, { "mib-streaming", { NULL }, 2292, "udp" }, { "npdbgmngr", { NULL }, 2293, "tcp" }, { "npdbgmngr", { NULL }, 2293, "udp" }, { "konshus-lm", { NULL }, 2294, "tcp" }, { "konshus-lm", { NULL }, 2294, "udp" }, { "advant-lm", { NULL }, 2295, "tcp" }, { "advant-lm", { NULL }, 2295, "udp" }, { "theta-lm", { NULL }, 2296, "tcp" }, { "theta-lm", { NULL }, 2296, "udp" }, { "d2k-datamover1", { NULL }, 2297, "tcp" }, { "d2k-datamover1", { NULL }, 2297, "udp" }, { "d2k-datamover2", { NULL }, 2298, "tcp" }, { 
"d2k-datamover2", { NULL }, 2298, "udp" }, { "pc-telecommute", { NULL }, 2299, "tcp" }, { "pc-telecommute", { NULL }, 2299, "udp" }, { "cvmmon", { NULL }, 2300, "tcp" }, { "cvmmon", { NULL }, 2300, "udp" }, { "cpq-wbem", { NULL }, 2301, "tcp" }, { "cpq-wbem", { NULL }, 2301, "udp" }, { "binderysupport", { NULL }, 2302, "tcp" }, { "binderysupport", { NULL }, 2302, "udp" }, { "proxy-gateway", { NULL }, 2303, "tcp" }, { "proxy-gateway", { NULL }, 2303, "udp" }, { "attachmate-uts", { NULL }, 2304, "tcp" }, { "attachmate-uts", { NULL }, 2304, "udp" }, { "mt-scaleserver", { NULL }, 2305, "tcp" }, { "mt-scaleserver", { NULL }, 2305, "udp" }, { "tappi-boxnet", { NULL }, 2306, "tcp" }, { "tappi-boxnet", { NULL }, 2306, "udp" }, { "pehelp", { NULL }, 2307, "tcp" }, { "pehelp", { NULL }, 2307, "udp" }, { "sdhelp", { NULL }, 2308, "tcp" }, { "sdhelp", { NULL }, 2308, "udp" }, { "sdserver", { NULL }, 2309, "tcp" }, { "sdserver", { NULL }, 2309, "udp" }, { "sdclient", { NULL }, 2310, "tcp" }, { "sdclient", { NULL }, 2310, "udp" }, { "messageservice", { NULL }, 2311, "tcp" }, { "messageservice", { NULL }, 2311, "udp" }, { "wanscaler", { NULL }, 2312, "tcp" }, { "wanscaler", { NULL }, 2312, "udp" }, { "iapp", { NULL }, 2313, "tcp" }, { "iapp", { NULL }, 2313, "udp" }, { "cr-websystems", { NULL }, 2314, "tcp" }, { "cr-websystems", { NULL }, 2314, "udp" }, { "precise-sft", { NULL }, 2315, "tcp" }, { "precise-sft", { NULL }, 2315, "udp" }, { "sent-lm", { NULL }, 2316, "tcp" }, { "sent-lm", { NULL }, 2316, "udp" }, { "attachmate-g32", { NULL }, 2317, "tcp" }, { "attachmate-g32", { NULL }, 2317, "udp" }, { "cadencecontrol", { NULL }, 2318, "tcp" }, { "cadencecontrol", { NULL }, 2318, "udp" }, { "infolibria", { NULL }, 2319, "tcp" }, { "infolibria", { NULL }, 2319, "udp" }, { "siebel-ns", { NULL }, 2320, "tcp" }, { "siebel-ns", { NULL }, 2320, "udp" }, { "rdlap", { NULL }, 2321, "tcp" }, { "rdlap", { NULL }, 2321, "udp" }, { "ofsd", { NULL }, 2322, "tcp" }, { "ofsd", { NULL }, 2322, "udp" }, { "3d-nfsd", { NULL }, 2323, "tcp" }, { "3d-nfsd", { NULL }, 2323, "udp" }, { "cosmocall", { NULL }, 2324, "tcp" }, { "cosmocall", { NULL }, 2324, "udp" }, { "ansysli", { NULL }, 2325, "tcp" }, { "ansysli", { NULL }, 2325, "udp" }, { "idcp", { NULL }, 2326, "tcp" }, { "idcp", { NULL }, 2326, "udp" }, { "xingcsm", { NULL }, 2327, "tcp" }, { "xingcsm", { NULL }, 2327, "udp" }, { "netrix-sftm", { NULL }, 2328, "tcp" }, { "netrix-sftm", { NULL }, 2328, "udp" }, { "nvd", { NULL }, 2329, "tcp" }, { "nvd", { NULL }, 2329, "udp" }, { "tscchat", { NULL }, 2330, "tcp" }, { "tscchat", { NULL }, 2330, "udp" }, { "agentview", { NULL }, 2331, "tcp" }, { "agentview", { NULL }, 2331, "udp" }, { "rcc-host", { NULL }, 2332, "tcp" }, { "rcc-host", { NULL }, 2332, "udp" }, { "snapp", { NULL }, 2333, "tcp" }, { "snapp", { NULL }, 2333, "udp" }, { "ace-client", { NULL }, 2334, "tcp" }, { "ace-client", { NULL }, 2334, "udp" }, { "ace-proxy", { NULL }, 2335, "tcp" }, { "ace-proxy", { NULL }, 2335, "udp" }, { "appleugcontrol", { NULL }, 2336, "tcp" }, { "appleugcontrol", { NULL }, 2336, "udp" }, { "ideesrv", { NULL }, 2337, "tcp" }, { "ideesrv", { NULL }, 2337, "udp" }, { "norton-lambert", { NULL }, 2338, "tcp" }, { "norton-lambert", { NULL }, 2338, "udp" }, { "3com-webview", { NULL }, 2339, "tcp" }, { "3com-webview", { NULL }, 2339, "udp" }, { "wrs_registry", { NULL }, 2340, "tcp" }, { "wrs_registry", { NULL }, 2340, "udp" }, { "xiostatus", { NULL }, 2341, "tcp" }, { "xiostatus", { NULL }, 2341, "udp" }, { "manage-exec", { NULL }, 2342, "tcp" }, { 
"manage-exec", { NULL }, 2342, "udp" }, { "nati-logos", { NULL }, 2343, "tcp" }, { "nati-logos", { NULL }, 2343, "udp" }, { "fcmsys", { NULL }, 2344, "tcp" }, { "fcmsys", { NULL }, 2344, "udp" }, { "dbm", { NULL }, 2345, "tcp" }, { "dbm", { NULL }, 2345, "udp" }, { "redstorm_join", { NULL }, 2346, "tcp" }, { "redstorm_join", { NULL }, 2346, "udp" }, { "redstorm_find", { NULL }, 2347, "tcp" }, { "redstorm_find", { NULL }, 2347, "udp" }, { "redstorm_info", { NULL }, 2348, "tcp" }, { "redstorm_info", { NULL }, 2348, "udp" }, { "redstorm_diag", { NULL }, 2349, "tcp" }, { "redstorm_diag", { NULL }, 2349, "udp" }, { "psbserver", { NULL }, 2350, "tcp" }, { "psbserver", { NULL }, 2350, "udp" }, { "psrserver", { NULL }, 2351, "tcp" }, { "psrserver", { NULL }, 2351, "udp" }, { "pslserver", { NULL }, 2352, "tcp" }, { "pslserver", { NULL }, 2352, "udp" }, { "pspserver", { NULL }, 2353, "tcp" }, { "pspserver", { NULL }, 2353, "udp" }, { "psprserver", { NULL }, 2354, "tcp" }, { "psprserver", { NULL }, 2354, "udp" }, { "psdbserver", { NULL }, 2355, "tcp" }, { "psdbserver", { NULL }, 2355, "udp" }, { "gxtelmd", { NULL }, 2356, "tcp" }, { "gxtelmd", { NULL }, 2356, "udp" }, { "unihub-server", { NULL }, 2357, "tcp" }, { "unihub-server", { NULL }, 2357, "udp" }, { "futrix", { NULL }, 2358, "tcp" }, { "futrix", { NULL }, 2358, "udp" }, { "flukeserver", { NULL }, 2359, "tcp" }, { "flukeserver", { NULL }, 2359, "udp" }, { "nexstorindltd", { NULL }, 2360, "tcp" }, { "nexstorindltd", { NULL }, 2360, "udp" }, { "tl1", { NULL }, 2361, "tcp" }, { "tl1", { NULL }, 2361, "udp" }, { "digiman", { NULL }, 2362, "tcp" }, { "digiman", { NULL }, 2362, "udp" }, { "mediacntrlnfsd", { NULL }, 2363, "tcp" }, { "mediacntrlnfsd", { NULL }, 2363, "udp" }, { "oi-2000", { NULL }, 2364, "tcp" }, { "oi-2000", { NULL }, 2364, "udp" }, { "dbref", { NULL }, 2365, "tcp" }, { "dbref", { NULL }, 2365, "udp" }, { "qip-login", { NULL }, 2366, "tcp" }, { "qip-login", { NULL }, 2366, "udp" }, { "service-ctrl", { NULL }, 2367, "tcp" }, { "service-ctrl", { NULL }, 2367, "udp" }, { "opentable", { NULL }, 2368, "tcp" }, { "opentable", { NULL }, 2368, "udp" }, { "l3-hbmon", { NULL }, 2370, "tcp" }, { "l3-hbmon", { NULL }, 2370, "udp" }, { "worldwire", { NULL }, 2371, "tcp" }, { "worldwire", { NULL }, 2371, "udp" }, { "lanmessenger", { NULL }, 2372, "tcp" }, { "lanmessenger", { NULL }, 2372, "udp" }, { "remographlm", { NULL }, 2373, "tcp" }, { "hydra", { NULL }, 2374, "tcp" }, { "compaq-https", { NULL }, 2381, "tcp" }, { "compaq-https", { NULL }, 2381, "udp" }, { "ms-olap3", { NULL }, 2382, "tcp" }, { "ms-olap3", { NULL }, 2382, "udp" }, { "ms-olap4", { NULL }, 2383, "tcp" }, { "ms-olap4", { NULL }, 2383, "udp" }, { "sd-request", { NULL }, 2384, "tcp" }, { "sd-capacity", { NULL }, 2384, "udp" }, { "sd-data", { NULL }, 2385, "tcp" }, { "sd-data", { NULL }, 2385, "udp" }, { "virtualtape", { NULL }, 2386, "tcp" }, { "virtualtape", { NULL }, 2386, "udp" }, { "vsamredirector", { NULL }, 2387, "tcp" }, { "vsamredirector", { NULL }, 2387, "udp" }, { "mynahautostart", { NULL }, 2388, "tcp" }, { "mynahautostart", { NULL }, 2388, "udp" }, { "ovsessionmgr", { NULL }, 2389, "tcp" }, { "ovsessionmgr", { NULL }, 2389, "udp" }, { "rsmtp", { NULL }, 2390, "tcp" }, { "rsmtp", { NULL }, 2390, "udp" }, { "3com-net-mgmt", { NULL }, 2391, "tcp" }, { "3com-net-mgmt", { NULL }, 2391, "udp" }, { "tacticalauth", { NULL }, 2392, "tcp" }, { "tacticalauth", { NULL }, 2392, "udp" }, { "ms-olap1", { NULL }, 2393, "tcp" }, { "ms-olap1", { NULL }, 2393, "udp" }, { "ms-olap2", { 
NULL }, 2394, "tcp" }, { "ms-olap2", { NULL }, 2394, "udp" }, { "lan900_remote", { NULL }, 2395, "tcp" }, { "lan900_remote", { NULL }, 2395, "udp" }, { "wusage", { NULL }, 2396, "tcp" }, { "wusage", { NULL }, 2396, "udp" }, { "ncl", { NULL }, 2397, "tcp" }, { "ncl", { NULL }, 2397, "udp" }, { "orbiter", { NULL }, 2398, "tcp" }, { "orbiter", { NULL }, 2398, "udp" }, { "fmpro-fdal", { NULL }, 2399, "tcp" }, { "fmpro-fdal", { NULL }, 2399, "udp" }, { "opequus-server", { NULL }, 2400, "tcp" }, { "opequus-server", { NULL }, 2400, "udp" }, { "cvspserver", { NULL }, 2401, "tcp" }, { "cvspserver", { NULL }, 2401, "udp" }, { "taskmaster2000", { NULL }, 2402, "tcp" }, { "taskmaster2000", { NULL }, 2402, "udp" }, { "taskmaster2000", { NULL }, 2403, "tcp" }, { "taskmaster2000", { NULL }, 2403, "udp" }, { "iec-104", { NULL }, 2404, "tcp" }, { "iec-104", { NULL }, 2404, "udp" }, { "trc-netpoll", { NULL }, 2405, "tcp" }, { "trc-netpoll", { NULL }, 2405, "udp" }, { "jediserver", { NULL }, 2406, "tcp" }, { "jediserver", { NULL }, 2406, "udp" }, { "orion", { NULL }, 2407, "tcp" }, { "orion", { NULL }, 2407, "udp" }, { "optimanet", { NULL }, 2408, "tcp" }, { "optimanet", { NULL }, 2408, "udp" }, { "sns-protocol", { NULL }, 2409, "tcp" }, { "sns-protocol", { NULL }, 2409, "udp" }, { "vrts-registry", { NULL }, 2410, "tcp" }, { "vrts-registry", { NULL }, 2410, "udp" }, { "netwave-ap-mgmt", { NULL }, 2411, "tcp" }, { "netwave-ap-mgmt", { NULL }, 2411, "udp" }, { "cdn", { NULL }, 2412, "tcp" }, { "cdn", { NULL }, 2412, "udp" }, { "orion-rmi-reg", { NULL }, 2413, "tcp" }, { "orion-rmi-reg", { NULL }, 2413, "udp" }, { "beeyond", { NULL }, 2414, "tcp" }, { "beeyond", { NULL }, 2414, "udp" }, { "codima-rtp", { NULL }, 2415, "tcp" }, { "codima-rtp", { NULL }, 2415, "udp" }, { "rmtserver", { NULL }, 2416, "tcp" }, { "rmtserver", { NULL }, 2416, "udp" }, { "composit-server", { NULL }, 2417, "tcp" }, { "composit-server", { NULL }, 2417, "udp" }, { "cas", { NULL }, 2418, "tcp" }, { "cas", { NULL }, 2418, "udp" }, { "attachmate-s2s", { NULL }, 2419, "tcp" }, { "attachmate-s2s", { NULL }, 2419, "udp" }, { "dslremote-mgmt", { NULL }, 2420, "tcp" }, { "dslremote-mgmt", { NULL }, 2420, "udp" }, { "g-talk", { NULL }, 2421, "tcp" }, { "g-talk", { NULL }, 2421, "udp" }, { "crmsbits", { NULL }, 2422, "tcp" }, { "crmsbits", { NULL }, 2422, "udp" }, { "rnrp", { NULL }, 2423, "tcp" }, { "rnrp", { NULL }, 2423, "udp" }, { "kofax-svr", { NULL }, 2424, "tcp" }, { "kofax-svr", { NULL }, 2424, "udp" }, { "fjitsuappmgr", { NULL }, 2425, "tcp" }, { "fjitsuappmgr", { NULL }, 2425, "udp" }, { "mgcp-gateway", { NULL }, 2427, "tcp" }, { "mgcp-gateway", { NULL }, 2427, "udp" }, { "ott", { NULL }, 2428, "tcp" }, { "ott", { NULL }, 2428, "udp" }, { "ft-role", { NULL }, 2429, "tcp" }, { "ft-role", { NULL }, 2429, "udp" }, { "venus", { NULL }, 2430, "tcp" }, { "venus", { NULL }, 2430, "udp" }, { "venus-se", { NULL }, 2431, "tcp" }, { "venus-se", { NULL }, 2431, "udp" }, { "codasrv", { NULL }, 2432, "tcp" }, { "codasrv", { NULL }, 2432, "udp" }, { "codasrv-se", { NULL }, 2433, "tcp" }, { "codasrv-se", { NULL }, 2433, "udp" }, { "pxc-epmap", { NULL }, 2434, "tcp" }, { "pxc-epmap", { NULL }, 2434, "udp" }, { "optilogic", { NULL }, 2435, "tcp" }, { "optilogic", { NULL }, 2435, "udp" }, { "topx", { NULL }, 2436, "tcp" }, { "topx", { NULL }, 2436, "udp" }, { "unicontrol", { NULL }, 2437, "tcp" }, { "unicontrol", { NULL }, 2437, "udp" }, { "msp", { NULL }, 2438, "tcp" }, { "msp", { NULL }, 2438, "udp" }, { "sybasedbsynch", { NULL }, 2439, "tcp" }, { 
"sybasedbsynch", { NULL }, 2439, "udp" }, { "spearway", { NULL }, 2440, "tcp" }, { "spearway", { NULL }, 2440, "udp" }, { "pvsw-inet", { NULL }, 2441, "tcp" }, { "pvsw-inet", { NULL }, 2441, "udp" }, { "netangel", { NULL }, 2442, "tcp" }, { "netangel", { NULL }, 2442, "udp" }, { "powerclientcsf", { NULL }, 2443, "tcp" }, { "powerclientcsf", { NULL }, 2443, "udp" }, { "btpp2sectrans", { NULL }, 2444, "tcp" }, { "btpp2sectrans", { NULL }, 2444, "udp" }, { "dtn1", { NULL }, 2445, "tcp" }, { "dtn1", { NULL }, 2445, "udp" }, { "bues_service", { NULL }, 2446, "tcp" }, { "bues_service", { NULL }, 2446, "udp" }, { "ovwdb", { NULL }, 2447, "tcp" }, { "ovwdb", { NULL }, 2447, "udp" }, { "hpppssvr", { NULL }, 2448, "tcp" }, { "hpppssvr", { NULL }, 2448, "udp" }, { "ratl", { NULL }, 2449, "tcp" }, { "ratl", { NULL }, 2449, "udp" }, { "netadmin", { NULL }, 2450, "tcp" }, { "netadmin", { NULL }, 2450, "udp" }, { "netchat", { NULL }, 2451, "tcp" }, { "netchat", { NULL }, 2451, "udp" }, { "snifferclient", { NULL }, 2452, "tcp" }, { "snifferclient", { NULL }, 2452, "udp" }, { "madge-ltd", { NULL }, 2453, "tcp" }, { "madge-ltd", { NULL }, 2453, "udp" }, { "indx-dds", { NULL }, 2454, "tcp" }, { "indx-dds", { NULL }, 2454, "udp" }, { "wago-io-system", { NULL }, 2455, "tcp" }, { "wago-io-system", { NULL }, 2455, "udp" }, { "altav-remmgt", { NULL }, 2456, "tcp" }, { "altav-remmgt", { NULL }, 2456, "udp" }, { "rapido-ip", { NULL }, 2457, "tcp" }, { "rapido-ip", { NULL }, 2457, "udp" }, { "griffin", { NULL }, 2458, "tcp" }, { "griffin", { NULL }, 2458, "udp" }, { "community", { NULL }, 2459, "tcp" }, { "community", { NULL }, 2459, "udp" }, { "ms-theater", { NULL }, 2460, "tcp" }, { "ms-theater", { NULL }, 2460, "udp" }, { "qadmifoper", { NULL }, 2461, "tcp" }, { "qadmifoper", { NULL }, 2461, "udp" }, { "qadmifevent", { NULL }, 2462, "tcp" }, { "qadmifevent", { NULL }, 2462, "udp" }, { "lsi-raid-mgmt", { NULL }, 2463, "tcp" }, { "lsi-raid-mgmt", { NULL }, 2463, "udp" }, { "direcpc-si", { NULL }, 2464, "tcp" }, { "direcpc-si", { NULL }, 2464, "udp" }, { "lbm", { NULL }, 2465, "tcp" }, { "lbm", { NULL }, 2465, "udp" }, { "lbf", { NULL }, 2466, "tcp" }, { "lbf", { NULL }, 2466, "udp" }, { "high-criteria", { NULL }, 2467, "tcp" }, { "high-criteria", { NULL }, 2467, "udp" }, { "qip-msgd", { NULL }, 2468, "tcp" }, { "qip-msgd", { NULL }, 2468, "udp" }, { "mti-tcs-comm", { NULL }, 2469, "tcp" }, { "mti-tcs-comm", { NULL }, 2469, "udp" }, { "taskman-port", { NULL }, 2470, "tcp" }, { "taskman-port", { NULL }, 2470, "udp" }, { "seaodbc", { NULL }, 2471, "tcp" }, { "seaodbc", { NULL }, 2471, "udp" }, { "c3", { NULL }, 2472, "tcp" }, { "c3", { NULL }, 2472, "udp" }, { "aker-cdp", { NULL }, 2473, "tcp" }, { "aker-cdp", { NULL }, 2473, "udp" }, { "vitalanalysis", { NULL }, 2474, "tcp" }, { "vitalanalysis", { NULL }, 2474, "udp" }, { "ace-server", { NULL }, 2475, "tcp" }, { "ace-server", { NULL }, 2475, "udp" }, { "ace-svr-prop", { NULL }, 2476, "tcp" }, { "ace-svr-prop", { NULL }, 2476, "udp" }, { "ssm-cvs", { NULL }, 2477, "tcp" }, { "ssm-cvs", { NULL }, 2477, "udp" }, { "ssm-cssps", { NULL }, 2478, "tcp" }, { "ssm-cssps", { NULL }, 2478, "udp" }, { "ssm-els", { NULL }, 2479, "tcp" }, { "ssm-els", { NULL }, 2479, "udp" }, { "powerexchange", { NULL }, 2480, "tcp" }, { "powerexchange", { NULL }, 2480, "udp" }, { "giop", { NULL }, 2481, "tcp" }, { "giop", { NULL }, 2481, "udp" }, { "giop-ssl", { NULL }, 2482, "tcp" }, { "giop-ssl", { NULL }, 2482, "udp" }, { "ttc", { NULL }, 2483, "tcp" }, { "ttc", { NULL }, 2483, "udp" }, { 
"ttc-ssl", { NULL }, 2484, "tcp" }, { "ttc-ssl", { NULL }, 2484, "udp" }, { "netobjects1", { NULL }, 2485, "tcp" }, { "netobjects1", { NULL }, 2485, "udp" }, { "netobjects2", { NULL }, 2486, "tcp" }, { "netobjects2", { NULL }, 2486, "udp" }, { "pns", { NULL }, 2487, "tcp" }, { "pns", { NULL }, 2487, "udp" }, { "moy-corp", { NULL }, 2488, "tcp" }, { "moy-corp", { NULL }, 2488, "udp" }, { "tsilb", { NULL }, 2489, "tcp" }, { "tsilb", { NULL }, 2489, "udp" }, { "qip-qdhcp", { NULL }, 2490, "tcp" }, { "qip-qdhcp", { NULL }, 2490, "udp" }, { "conclave-cpp", { NULL }, 2491, "tcp" }, { "conclave-cpp", { NULL }, 2491, "udp" }, { "groove", { NULL }, 2492, "tcp" }, { "groove", { NULL }, 2492, "udp" }, { "talarian-mqs", { NULL }, 2493, "tcp" }, { "talarian-mqs", { NULL }, 2493, "udp" }, { "bmc-ar", { NULL }, 2494, "tcp" }, { "bmc-ar", { NULL }, 2494, "udp" }, { "fast-rem-serv", { NULL }, 2495, "tcp" }, { "fast-rem-serv", { NULL }, 2495, "udp" }, { "dirgis", { NULL }, 2496, "tcp" }, { "dirgis", { NULL }, 2496, "udp" }, { "quaddb", { NULL }, 2497, "tcp" }, { "quaddb", { NULL }, 2497, "udp" }, { "odn-castraq", { NULL }, 2498, "tcp" }, { "odn-castraq", { NULL }, 2498, "udp" }, { "unicontrol", { NULL }, 2499, "tcp" }, { "unicontrol", { NULL }, 2499, "udp" }, { "rtsserv", { NULL }, 2500, "tcp" }, { "rtsserv", { NULL }, 2500, "udp" }, { "rtsclient", { NULL }, 2501, "tcp" }, { "rtsclient", { NULL }, 2501, "udp" }, { "kentrox-prot", { NULL }, 2502, "tcp" }, { "kentrox-prot", { NULL }, 2502, "udp" }, { "nms-dpnss", { NULL }, 2503, "tcp" }, { "nms-dpnss", { NULL }, 2503, "udp" }, { "wlbs", { NULL }, 2504, "tcp" }, { "wlbs", { NULL }, 2504, "udp" }, { "ppcontrol", { NULL }, 2505, "tcp" }, { "ppcontrol", { NULL }, 2505, "udp" }, { "jbroker", { NULL }, 2506, "tcp" }, { "jbroker", { NULL }, 2506, "udp" }, { "spock", { NULL }, 2507, "tcp" }, { "spock", { NULL }, 2507, "udp" }, { "jdatastore", { NULL }, 2508, "tcp" }, { "jdatastore", { NULL }, 2508, "udp" }, { "fjmpss", { NULL }, 2509, "tcp" }, { "fjmpss", { NULL }, 2509, "udp" }, { "fjappmgrbulk", { NULL }, 2510, "tcp" }, { "fjappmgrbulk", { NULL }, 2510, "udp" }, { "metastorm", { NULL }, 2511, "tcp" }, { "metastorm", { NULL }, 2511, "udp" }, { "citrixima", { NULL }, 2512, "tcp" }, { "citrixima", { NULL }, 2512, "udp" }, { "citrixadmin", { NULL }, 2513, "tcp" }, { "citrixadmin", { NULL }, 2513, "udp" }, { "facsys-ntp", { NULL }, 2514, "tcp" }, { "facsys-ntp", { NULL }, 2514, "udp" }, { "facsys-router", { NULL }, 2515, "tcp" }, { "facsys-router", { NULL }, 2515, "udp" }, { "maincontrol", { NULL }, 2516, "tcp" }, { "maincontrol", { NULL }, 2516, "udp" }, { "call-sig-trans", { NULL }, 2517, "tcp" }, { "call-sig-trans", { NULL }, 2517, "udp" }, { "willy", { NULL }, 2518, "tcp" }, { "willy", { NULL }, 2518, "udp" }, { "globmsgsvc", { NULL }, 2519, "tcp" }, { "globmsgsvc", { NULL }, 2519, "udp" }, { "pvsw", { NULL }, 2520, "tcp" }, { "pvsw", { NULL }, 2520, "udp" }, { "adaptecmgr", { NULL }, 2521, "tcp" }, { "adaptecmgr", { NULL }, 2521, "udp" }, { "windb", { NULL }, 2522, "tcp" }, { "windb", { NULL }, 2522, "udp" }, { "qke-llc-v3", { NULL }, 2523, "tcp" }, { "qke-llc-v3", { NULL }, 2523, "udp" }, { "optiwave-lm", { NULL }, 2524, "tcp" }, { "optiwave-lm", { NULL }, 2524, "udp" }, { "ms-v-worlds", { NULL }, 2525, "tcp" }, { "ms-v-worlds", { NULL }, 2525, "udp" }, { "ema-sent-lm", { NULL }, 2526, "tcp" }, { "ema-sent-lm", { NULL }, 2526, "udp" }, { "iqserver", { NULL }, 2527, "tcp" }, { "iqserver", { NULL }, 2527, "udp" }, { "ncr_ccl", { NULL }, 2528, "tcp" }, { "ncr_ccl", { 
NULL }, 2528, "udp" }, { "utsftp", { NULL }, 2529, "tcp" }, { "utsftp", { NULL }, 2529, "udp" }, { "vrcommerce", { NULL }, 2530, "tcp" }, { "vrcommerce", { NULL }, 2530, "udp" }, { "ito-e-gui", { NULL }, 2531, "tcp" }, { "ito-e-gui", { NULL }, 2531, "udp" }, { "ovtopmd", { NULL }, 2532, "tcp" }, { "ovtopmd", { NULL }, 2532, "udp" }, { "snifferserver", { NULL }, 2533, "tcp" }, { "snifferserver", { NULL }, 2533, "udp" }, { "combox-web-acc", { NULL }, 2534, "tcp" }, { "combox-web-acc", { NULL }, 2534, "udp" }, { "madcap", { NULL }, 2535, "tcp" }, { "madcap", { NULL }, 2535, "udp" }, { "btpp2audctr1", { NULL }, 2536, "tcp" }, { "btpp2audctr1", { NULL }, 2536, "udp" }, { "upgrade", { NULL }, 2537, "tcp" }, { "upgrade", { NULL }, 2537, "udp" }, { "vnwk-prapi", { NULL }, 2538, "tcp" }, { "vnwk-prapi", { NULL }, 2538, "udp" }, { "vsiadmin", { NULL }, 2539, "tcp" }, { "vsiadmin", { NULL }, 2539, "udp" }, { "lonworks", { NULL }, 2540, "tcp" }, { "lonworks", { NULL }, 2540, "udp" }, { "lonworks2", { NULL }, 2541, "tcp" }, { "lonworks2", { NULL }, 2541, "udp" }, { "udrawgraph", { NULL }, 2542, "tcp" }, { "udrawgraph", { NULL }, 2542, "udp" }, { "reftek", { NULL }, 2543, "tcp" }, { "reftek", { NULL }, 2543, "udp" }, { "novell-zen", { NULL }, 2544, "tcp" }, { "novell-zen", { NULL }, 2544, "udp" }, { "sis-emt", { NULL }, 2545, "tcp" }, { "sis-emt", { NULL }, 2545, "udp" }, { "vytalvaultbrtp", { NULL }, 2546, "tcp" }, { "vytalvaultbrtp", { NULL }, 2546, "udp" }, { "vytalvaultvsmp", { NULL }, 2547, "tcp" }, { "vytalvaultvsmp", { NULL }, 2547, "udp" }, { "vytalvaultpipe", { NULL }, 2548, "tcp" }, { "vytalvaultpipe", { NULL }, 2548, "udp" }, { "ipass", { NULL }, 2549, "tcp" }, { "ipass", { NULL }, 2549, "udp" }, { "ads", { NULL }, 2550, "tcp" }, { "ads", { NULL }, 2550, "udp" }, { "isg-uda-server", { NULL }, 2551, "tcp" }, { "isg-uda-server", { NULL }, 2551, "udp" }, { "call-logging", { NULL }, 2552, "tcp" }, { "call-logging", { NULL }, 2552, "udp" }, { "efidiningport", { NULL }, 2553, "tcp" }, { "efidiningport", { NULL }, 2553, "udp" }, { "vcnet-link-v10", { NULL }, 2554, "tcp" }, { "vcnet-link-v10", { NULL }, 2554, "udp" }, { "compaq-wcp", { NULL }, 2555, "tcp" }, { "compaq-wcp", { NULL }, 2555, "udp" }, { "nicetec-nmsvc", { NULL }, 2556, "tcp" }, { "nicetec-nmsvc", { NULL }, 2556, "udp" }, { "nicetec-mgmt", { NULL }, 2557, "tcp" }, { "nicetec-mgmt", { NULL }, 2557, "udp" }, { "pclemultimedia", { NULL }, 2558, "tcp" }, { "pclemultimedia", { NULL }, 2558, "udp" }, { "lstp", { NULL }, 2559, "tcp" }, { "lstp", { NULL }, 2559, "udp" }, { "labrat", { NULL }, 2560, "tcp" }, { "labrat", { NULL }, 2560, "udp" }, { "mosaixcc", { NULL }, 2561, "tcp" }, { "mosaixcc", { NULL }, 2561, "udp" }, { "delibo", { NULL }, 2562, "tcp" }, { "delibo", { NULL }, 2562, "udp" }, { "cti-redwood", { NULL }, 2563, "tcp" }, { "cti-redwood", { NULL }, 2563, "udp" }, { "hp-3000-telnet", { NULL }, 2564, "tcp" }, { "coord-svr", { NULL }, 2565, "tcp" }, { "coord-svr", { NULL }, 2565, "udp" }, { "pcs-pcw", { NULL }, 2566, "tcp" }, { "pcs-pcw", { NULL }, 2566, "udp" }, { "clp", { NULL }, 2567, "tcp" }, { "clp", { NULL }, 2567, "udp" }, { "spamtrap", { NULL }, 2568, "tcp" }, { "spamtrap", { NULL }, 2568, "udp" }, { "sonuscallsig", { NULL }, 2569, "tcp" }, { "sonuscallsig", { NULL }, 2569, "udp" }, { "hs-port", { NULL }, 2570, "tcp" }, { "hs-port", { NULL }, 2570, "udp" }, { "cecsvc", { NULL }, 2571, "tcp" }, { "cecsvc", { NULL }, 2571, "udp" }, { "ibp", { NULL }, 2572, "tcp" }, { "ibp", { NULL }, 2572, "udp" }, { "trustestablish", { NULL }, 
2573, "tcp" }, { "trustestablish", { NULL }, 2573, "udp" }, { "blockade-bpsp", { NULL }, 2574, "tcp" }, { "blockade-bpsp", { NULL }, 2574, "udp" }, { "hl7", { NULL }, 2575, "tcp" }, { "hl7", { NULL }, 2575, "udp" }, { "tclprodebugger", { NULL }, 2576, "tcp" }, { "tclprodebugger", { NULL }, 2576, "udp" }, { "scipticslsrvr", { NULL }, 2577, "tcp" }, { "scipticslsrvr", { NULL }, 2577, "udp" }, { "rvs-isdn-dcp", { NULL }, 2578, "tcp" }, { "rvs-isdn-dcp", { NULL }, 2578, "udp" }, { "mpfoncl", { NULL }, 2579, "tcp" }, { "mpfoncl", { NULL }, 2579, "udp" }, { "tributary", { NULL }, 2580, "tcp" }, { "tributary", { NULL }, 2580, "udp" }, { "argis-te", { NULL }, 2581, "tcp" }, { "argis-te", { NULL }, 2581, "udp" }, { "argis-ds", { NULL }, 2582, "tcp" }, { "argis-ds", { NULL }, 2582, "udp" }, { "mon", { NULL }, 2583, "tcp" }, { "mon", { NULL }, 2583, "udp" }, { "cyaserv", { NULL }, 2584, "tcp" }, { "cyaserv", { NULL }, 2584, "udp" }, { "netx-server", { NULL }, 2585, "tcp" }, { "netx-server", { NULL }, 2585, "udp" }, { "netx-agent", { NULL }, 2586, "tcp" }, { "netx-agent", { NULL }, 2586, "udp" }, { "masc", { NULL }, 2587, "tcp" }, { "masc", { NULL }, 2587, "udp" }, { "privilege", { NULL }, 2588, "tcp" }, { "privilege", { NULL }, 2588, "udp" }, { "quartus-tcl", { NULL }, 2589, "tcp" }, { "quartus-tcl", { NULL }, 2589, "udp" }, { "idotdist", { NULL }, 2590, "tcp" }, { "idotdist", { NULL }, 2590, "udp" }, { "maytagshuffle", { NULL }, 2591, "tcp" }, { "maytagshuffle", { NULL }, 2591, "udp" }, { "netrek", { NULL }, 2592, "tcp" }, { "netrek", { NULL }, 2592, "udp" }, { "mns-mail", { NULL }, 2593, "tcp" }, { "mns-mail", { NULL }, 2593, "udp" }, { "dts", { NULL }, 2594, "tcp" }, { "dts", { NULL }, 2594, "udp" }, { "worldfusion1", { NULL }, 2595, "tcp" }, { "worldfusion1", { NULL }, 2595, "udp" }, { "worldfusion2", { NULL }, 2596, "tcp" }, { "worldfusion2", { NULL }, 2596, "udp" }, { "homesteadglory", { NULL }, 2597, "tcp" }, { "homesteadglory", { NULL }, 2597, "udp" }, { "citriximaclient", { NULL }, 2598, "tcp" }, { "citriximaclient", { NULL }, 2598, "udp" }, { "snapd", { NULL }, 2599, "tcp" }, { "snapd", { NULL }, 2599, "udp" }, { "hpstgmgr", { NULL }, 2600, "tcp" }, { "hpstgmgr", { NULL }, 2600, "udp" }, { "discp-client", { NULL }, 2601, "tcp" }, { "discp-client", { NULL }, 2601, "udp" }, { "discp-server", { NULL }, 2602, "tcp" }, { "discp-server", { NULL }, 2602, "udp" }, { "servicemeter", { NULL }, 2603, "tcp" }, { "servicemeter", { NULL }, 2603, "udp" }, { "nsc-ccs", { NULL }, 2604, "tcp" }, { "nsc-ccs", { NULL }, 2604, "udp" }, { "nsc-posa", { NULL }, 2605, "tcp" }, { "nsc-posa", { NULL }, 2605, "udp" }, { "netmon", { NULL }, 2606, "tcp" }, { "netmon", { NULL }, 2606, "udp" }, { "connection", { NULL }, 2607, "tcp" }, { "connection", { NULL }, 2607, "udp" }, { "wag-service", { NULL }, 2608, "tcp" }, { "wag-service", { NULL }, 2608, "udp" }, { "system-monitor", { NULL }, 2609, "tcp" }, { "system-monitor", { NULL }, 2609, "udp" }, { "versa-tek", { NULL }, 2610, "tcp" }, { "versa-tek", { NULL }, 2610, "udp" }, { "lionhead", { NULL }, 2611, "tcp" }, { "lionhead", { NULL }, 2611, "udp" }, { "qpasa-agent", { NULL }, 2612, "tcp" }, { "qpasa-agent", { NULL }, 2612, "udp" }, { "smntubootstrap", { NULL }, 2613, "tcp" }, { "smntubootstrap", { NULL }, 2613, "udp" }, { "neveroffline", { NULL }, 2614, "tcp" }, { "neveroffline", { NULL }, 2614, "udp" }, { "firepower", { NULL }, 2615, "tcp" }, { "firepower", { NULL }, 2615, "udp" }, { "appswitch-emp", { NULL }, 2616, "tcp" }, { "appswitch-emp", { NULL }, 2616, "udp" }, 
{ "cmadmin", { NULL }, 2617, "tcp" }, { "cmadmin", { NULL }, 2617, "udp" }, { "priority-e-com", { NULL }, 2618, "tcp" }, { "priority-e-com", { NULL }, 2618, "udp" }, { "bruce", { NULL }, 2619, "tcp" }, { "bruce", { NULL }, 2619, "udp" }, { "lpsrecommender", { NULL }, 2620, "tcp" }, { "lpsrecommender", { NULL }, 2620, "udp" }, { "miles-apart", { NULL }, 2621, "tcp" }, { "miles-apart", { NULL }, 2621, "udp" }, { "metricadbc", { NULL }, 2622, "tcp" }, { "metricadbc", { NULL }, 2622, "udp" }, { "lmdp", { NULL }, 2623, "tcp" }, { "lmdp", { NULL }, 2623, "udp" }, { "aria", { NULL }, 2624, "tcp" }, { "aria", { NULL }, 2624, "udp" }, { "blwnkl-port", { NULL }, 2625, "tcp" }, { "blwnkl-port", { NULL }, 2625, "udp" }, { "gbjd816", { NULL }, 2626, "tcp" }, { "gbjd816", { NULL }, 2626, "udp" }, { "moshebeeri", { NULL }, 2627, "tcp" }, { "moshebeeri", { NULL }, 2627, "udp" }, { "dict", { NULL }, 2628, "tcp" }, { "dict", { NULL }, 2628, "udp" }, { "sitaraserver", { NULL }, 2629, "tcp" }, { "sitaraserver", { NULL }, 2629, "udp" }, { "sitaramgmt", { NULL }, 2630, "tcp" }, { "sitaramgmt", { NULL }, 2630, "udp" }, { "sitaradir", { NULL }, 2631, "tcp" }, { "sitaradir", { NULL }, 2631, "udp" }, { "irdg-post", { NULL }, 2632, "tcp" }, { "irdg-post", { NULL }, 2632, "udp" }, { "interintelli", { NULL }, 2633, "tcp" }, { "interintelli", { NULL }, 2633, "udp" }, { "pk-electronics", { NULL }, 2634, "tcp" }, { "pk-electronics", { NULL }, 2634, "udp" }, { "backburner", { NULL }, 2635, "tcp" }, { "backburner", { NULL }, 2635, "udp" }, { "solve", { NULL }, 2636, "tcp" }, { "solve", { NULL }, 2636, "udp" }, { "imdocsvc", { NULL }, 2637, "tcp" }, { "imdocsvc", { NULL }, 2637, "udp" }, { "sybaseanywhere", { NULL }, 2638, "tcp" }, { "sybaseanywhere", { NULL }, 2638, "udp" }, { "aminet", { NULL }, 2639, "tcp" }, { "aminet", { NULL }, 2639, "udp" }, { "sai_sentlm", { NULL }, 2640, "tcp" }, { "sai_sentlm", { NULL }, 2640, "udp" }, { "hdl-srv", { NULL }, 2641, "tcp" }, { "hdl-srv", { NULL }, 2641, "udp" }, { "tragic", { NULL }, 2642, "tcp" }, { "tragic", { NULL }, 2642, "udp" }, { "gte-samp", { NULL }, 2643, "tcp" }, { "gte-samp", { NULL }, 2643, "udp" }, { "travsoft-ipx-t", { NULL }, 2644, "tcp" }, { "travsoft-ipx-t", { NULL }, 2644, "udp" }, { "novell-ipx-cmd", { NULL }, 2645, "tcp" }, { "novell-ipx-cmd", { NULL }, 2645, "udp" }, { "and-lm", { NULL }, 2646, "tcp" }, { "and-lm", { NULL }, 2646, "udp" }, { "syncserver", { NULL }, 2647, "tcp" }, { "syncserver", { NULL }, 2647, "udp" }, { "upsnotifyprot", { NULL }, 2648, "tcp" }, { "upsnotifyprot", { NULL }, 2648, "udp" }, { "vpsipport", { NULL }, 2649, "tcp" }, { "vpsipport", { NULL }, 2649, "udp" }, { "eristwoguns", { NULL }, 2650, "tcp" }, { "eristwoguns", { NULL }, 2650, "udp" }, { "ebinsite", { NULL }, 2651, "tcp" }, { "ebinsite", { NULL }, 2651, "udp" }, { "interpathpanel", { NULL }, 2652, "tcp" }, { "interpathpanel", { NULL }, 2652, "udp" }, { "sonus", { NULL }, 2653, "tcp" }, { "sonus", { NULL }, 2653, "udp" }, { "corel_vncadmin", { NULL }, 2654, "tcp" }, { "corel_vncadmin", { NULL }, 2654, "udp" }, { "unglue", { NULL }, 2655, "tcp" }, { "unglue", { NULL }, 2655, "udp" }, { "kana", { NULL }, 2656, "tcp" }, { "kana", { NULL }, 2656, "udp" }, { "sns-dispatcher", { NULL }, 2657, "tcp" }, { "sns-dispatcher", { NULL }, 2657, "udp" }, { "sns-admin", { NULL }, 2658, "tcp" }, { "sns-admin", { NULL }, 2658, "udp" }, { "sns-query", { NULL }, 2659, "tcp" }, { "sns-query", { NULL }, 2659, "udp" }, { "gcmonitor", { NULL }, 2660, "tcp" }, { "gcmonitor", { NULL }, 2660, "udp" }, { 
"olhost", { NULL }, 2661, "tcp" }, { "olhost", { NULL }, 2661, "udp" }, { "bintec-capi", { NULL }, 2662, "tcp" }, { "bintec-capi", { NULL }, 2662, "udp" }, { "bintec-tapi", { NULL }, 2663, "tcp" }, { "bintec-tapi", { NULL }, 2663, "udp" }, { "patrol-mq-gm", { NULL }, 2664, "tcp" }, { "patrol-mq-gm", { NULL }, 2664, "udp" }, { "patrol-mq-nm", { NULL }, 2665, "tcp" }, { "patrol-mq-nm", { NULL }, 2665, "udp" }, { "extensis", { NULL }, 2666, "tcp" }, { "extensis", { NULL }, 2666, "udp" }, { "alarm-clock-s", { NULL }, 2667, "tcp" }, { "alarm-clock-s", { NULL }, 2667, "udp" }, { "alarm-clock-c", { NULL }, 2668, "tcp" }, { "alarm-clock-c", { NULL }, 2668, "udp" }, { "toad", { NULL }, 2669, "tcp" }, { "toad", { NULL }, 2669, "udp" }, { "tve-announce", { NULL }, 2670, "tcp" }, { "tve-announce", { NULL }, 2670, "udp" }, { "newlixreg", { NULL }, 2671, "tcp" }, { "newlixreg", { NULL }, 2671, "udp" }, { "nhserver", { NULL }, 2672, "tcp" }, { "nhserver", { NULL }, 2672, "udp" }, { "firstcall42", { NULL }, 2673, "tcp" }, { "firstcall42", { NULL }, 2673, "udp" }, { "ewnn", { NULL }, 2674, "tcp" }, { "ewnn", { NULL }, 2674, "udp" }, { "ttc-etap", { NULL }, 2675, "tcp" }, { "ttc-etap", { NULL }, 2675, "udp" }, { "simslink", { NULL }, 2676, "tcp" }, { "simslink", { NULL }, 2676, "udp" }, { "gadgetgate1way", { NULL }, 2677, "tcp" }, { "gadgetgate1way", { NULL }, 2677, "udp" }, { "gadgetgate2way", { NULL }, 2678, "tcp" }, { "gadgetgate2way", { NULL }, 2678, "udp" }, { "syncserverssl", { NULL }, 2679, "tcp" }, { "syncserverssl", { NULL }, 2679, "udp" }, { "pxc-sapxom", { NULL }, 2680, "tcp" }, { "pxc-sapxom", { NULL }, 2680, "udp" }, { "mpnjsomb", { NULL }, 2681, "tcp" }, { "mpnjsomb", { NULL }, 2681, "udp" }, { "ncdloadbalance", { NULL }, 2683, "tcp" }, { "ncdloadbalance", { NULL }, 2683, "udp" }, { "mpnjsosv", { NULL }, 2684, "tcp" }, { "mpnjsosv", { NULL }, 2684, "udp" }, { "mpnjsocl", { NULL }, 2685, "tcp" }, { "mpnjsocl", { NULL }, 2685, "udp" }, { "mpnjsomg", { NULL }, 2686, "tcp" }, { "mpnjsomg", { NULL }, 2686, "udp" }, { "pq-lic-mgmt", { NULL }, 2687, "tcp" }, { "pq-lic-mgmt", { NULL }, 2687, "udp" }, { "md-cg-http", { NULL }, 2688, "tcp" }, { "md-cg-http", { NULL }, 2688, "udp" }, { "fastlynx", { NULL }, 2689, "tcp" }, { "fastlynx", { NULL }, 2689, "udp" }, { "hp-nnm-data", { NULL }, 2690, "tcp" }, { "hp-nnm-data", { NULL }, 2690, "udp" }, { "itinternet", { NULL }, 2691, "tcp" }, { "itinternet", { NULL }, 2691, "udp" }, { "admins-lms", { NULL }, 2692, "tcp" }, { "admins-lms", { NULL }, 2692, "udp" }, { "pwrsevent", { NULL }, 2694, "tcp" }, { "pwrsevent", { NULL }, 2694, "udp" }, { "vspread", { NULL }, 2695, "tcp" }, { "vspread", { NULL }, 2695, "udp" }, { "unifyadmin", { NULL }, 2696, "tcp" }, { "unifyadmin", { NULL }, 2696, "udp" }, { "oce-snmp-trap", { NULL }, 2697, "tcp" }, { "oce-snmp-trap", { NULL }, 2697, "udp" }, { "mck-ivpip", { NULL }, 2698, "tcp" }, { "mck-ivpip", { NULL }, 2698, "udp" }, { "csoft-plusclnt", { NULL }, 2699, "tcp" }, { "csoft-plusclnt", { NULL }, 2699, "udp" }, { "tqdata", { NULL }, 2700, "tcp" }, { "tqdata", { NULL }, 2700, "udp" }, { "sms-rcinfo", { NULL }, 2701, "tcp" }, { "sms-rcinfo", { NULL }, 2701, "udp" }, { "sms-xfer", { NULL }, 2702, "tcp" }, { "sms-xfer", { NULL }, 2702, "udp" }, { "sms-chat", { NULL }, 2703, "tcp" }, { "sms-chat", { NULL }, 2703, "udp" }, { "sms-remctrl", { NULL }, 2704, "tcp" }, { "sms-remctrl", { NULL }, 2704, "udp" }, { "sds-admin", { NULL }, 2705, "tcp" }, { "sds-admin", { NULL }, 2705, "udp" }, { "ncdmirroring", { NULL }, 2706, "tcp" }, { 
"ncdmirroring", { NULL }, 2706, "udp" }, { "emcsymapiport", { NULL }, 2707, "tcp" }, { "emcsymapiport", { NULL }, 2707, "udp" }, { "banyan-net", { NULL }, 2708, "tcp" }, { "banyan-net", { NULL }, 2708, "udp" }, { "supermon", { NULL }, 2709, "tcp" }, { "supermon", { NULL }, 2709, "udp" }, { "sso-service", { NULL }, 2710, "tcp" }, { "sso-service", { NULL }, 2710, "udp" }, { "sso-control", { NULL }, 2711, "tcp" }, { "sso-control", { NULL }, 2711, "udp" }, { "aocp", { NULL }, 2712, "tcp" }, { "aocp", { NULL }, 2712, "udp" }, { "raventbs", { NULL }, 2713, "tcp" }, { "raventbs", { NULL }, 2713, "udp" }, { "raventdm", { NULL }, 2714, "tcp" }, { "raventdm", { NULL }, 2714, "udp" }, { "hpstgmgr2", { NULL }, 2715, "tcp" }, { "hpstgmgr2", { NULL }, 2715, "udp" }, { "inova-ip-disco", { NULL }, 2716, "tcp" }, { "inova-ip-disco", { NULL }, 2716, "udp" }, { "pn-requester", { NULL }, 2717, "tcp" }, { "pn-requester", { NULL }, 2717, "udp" }, { "pn-requester2", { NULL }, 2718, "tcp" }, { "pn-requester2", { NULL }, 2718, "udp" }, { "scan-change", { NULL }, 2719, "tcp" }, { "scan-change", { NULL }, 2719, "udp" }, { "wkars", { NULL }, 2720, "tcp" }, { "wkars", { NULL }, 2720, "udp" }, { "smart-diagnose", { NULL }, 2721, "tcp" }, { "smart-diagnose", { NULL }, 2721, "udp" }, { "proactivesrvr", { NULL }, 2722, "tcp" }, { "proactivesrvr", { NULL }, 2722, "udp" }, { "watchdog-nt", { NULL }, 2723, "tcp" }, { "watchdog-nt", { NULL }, 2723, "udp" }, { "qotps", { NULL }, 2724, "tcp" }, { "qotps", { NULL }, 2724, "udp" }, { "msolap-ptp2", { NULL }, 2725, "tcp" }, { "msolap-ptp2", { NULL }, 2725, "udp" }, { "tams", { NULL }, 2726, "tcp" }, { "tams", { NULL }, 2726, "udp" }, { "mgcp-callagent", { NULL }, 2727, "tcp" }, { "mgcp-callagent", { NULL }, 2727, "udp" }, { "sqdr", { NULL }, 2728, "tcp" }, { "sqdr", { NULL }, 2728, "udp" }, { "tcim-control", { NULL }, 2729, "tcp" }, { "tcim-control", { NULL }, 2729, "udp" }, { "nec-raidplus", { NULL }, 2730, "tcp" }, { "nec-raidplus", { NULL }, 2730, "udp" }, { "fyre-messanger", { NULL }, 2731, "tcp" }, { "fyre-messanger", { NULL }, 2731, "udp" }, { "g5m", { NULL }, 2732, "tcp" }, { "g5m", { NULL }, 2732, "udp" }, { "signet-ctf", { NULL }, 2733, "tcp" }, { "signet-ctf", { NULL }, 2733, "udp" }, { "ccs-software", { NULL }, 2734, "tcp" }, { "ccs-software", { NULL }, 2734, "udp" }, { "netiq-mc", { NULL }, 2735, "tcp" }, { "netiq-mc", { NULL }, 2735, "udp" }, { "radwiz-nms-srv", { NULL }, 2736, "tcp" }, { "radwiz-nms-srv", { NULL }, 2736, "udp" }, { "srp-feedback", { NULL }, 2737, "tcp" }, { "srp-feedback", { NULL }, 2737, "udp" }, { "ndl-tcp-ois-gw", { NULL }, 2738, "tcp" }, { "ndl-tcp-ois-gw", { NULL }, 2738, "udp" }, { "tn-timing", { NULL }, 2739, "tcp" }, { "tn-timing", { NULL }, 2739, "udp" }, { "alarm", { NULL }, 2740, "tcp" }, { "alarm", { NULL }, 2740, "udp" }, { "tsb", { NULL }, 2741, "tcp" }, { "tsb", { NULL }, 2741, "udp" }, { "tsb2", { NULL }, 2742, "tcp" }, { "tsb2", { NULL }, 2742, "udp" }, { "murx", { NULL }, 2743, "tcp" }, { "murx", { NULL }, 2743, "udp" }, { "honyaku", { NULL }, 2744, "tcp" }, { "honyaku", { NULL }, 2744, "udp" }, { "urbisnet", { NULL }, 2745, "tcp" }, { "urbisnet", { NULL }, 2745, "udp" }, { "cpudpencap", { NULL }, 2746, "tcp" }, { "cpudpencap", { NULL }, 2746, "udp" }, { "fjippol-swrly", { NULL }, 2747, "tcp" }, { "fjippol-swrly", { NULL }, 2747, "udp" }, { "fjippol-polsvr", { NULL }, 2748, "tcp" }, { "fjippol-polsvr", { NULL }, 2748, "udp" }, { "fjippol-cnsl", { NULL }, 2749, "tcp" }, { "fjippol-cnsl", { NULL }, 2749, "udp" }, { "fjippol-port1", { 
NULL }, 2750, "tcp" }, { "fjippol-port1", { NULL }, 2750, "udp" }, { "fjippol-port2", { NULL }, 2751, "tcp" }, { "fjippol-port2", { NULL }, 2751, "udp" }, { "rsisysaccess", { NULL }, 2752, "tcp" }, { "rsisysaccess", { NULL }, 2752, "udp" }, { "de-spot", { NULL }, 2753, "tcp" }, { "de-spot", { NULL }, 2753, "udp" }, { "apollo-cc", { NULL }, 2754, "tcp" }, { "apollo-cc", { NULL }, 2754, "udp" }, { "expresspay", { NULL }, 2755, "tcp" }, { "expresspay", { NULL }, 2755, "udp" }, { "simplement-tie", { NULL }, 2756, "tcp" }, { "simplement-tie", { NULL }, 2756, "udp" }, { "cnrp", { NULL }, 2757, "tcp" }, { "cnrp", { NULL }, 2757, "udp" }, { "apollo-status", { NULL }, 2758, "tcp" }, { "apollo-status", { NULL }, 2758, "udp" }, { "apollo-gms", { NULL }, 2759, "tcp" }, { "apollo-gms", { NULL }, 2759, "udp" }, { "sabams", { NULL }, 2760, "tcp" }, { "sabams", { NULL }, 2760, "udp" }, { "dicom-iscl", { NULL }, 2761, "tcp" }, { "dicom-iscl", { NULL }, 2761, "udp" }, { "dicom-tls", { NULL }, 2762, "tcp" }, { "dicom-tls", { NULL }, 2762, "udp" }, { "desktop-dna", { NULL }, 2763, "tcp" }, { "desktop-dna", { NULL }, 2763, "udp" }, { "data-insurance", { NULL }, 2764, "tcp" }, { "data-insurance", { NULL }, 2764, "udp" }, { "qip-audup", { NULL }, 2765, "tcp" }, { "qip-audup", { NULL }, 2765, "udp" }, { "compaq-scp", { NULL }, 2766, "tcp" }, { "compaq-scp", { NULL }, 2766, "udp" }, { "uadtc", { NULL }, 2767, "tcp" }, { "uadtc", { NULL }, 2767, "udp" }, { "uacs", { NULL }, 2768, "tcp" }, { "uacs", { NULL }, 2768, "udp" }, { "exce", { NULL }, 2769, "tcp" }, { "exce", { NULL }, 2769, "udp" }, { "veronica", { NULL }, 2770, "tcp" }, { "veronica", { NULL }, 2770, "udp" }, { "vergencecm", { NULL }, 2771, "tcp" }, { "vergencecm", { NULL }, 2771, "udp" }, { "auris", { NULL }, 2772, "tcp" }, { "auris", { NULL }, 2772, "udp" }, { "rbakcup1", { NULL }, 2773, "tcp" }, { "rbakcup1", { NULL }, 2773, "udp" }, { "rbakcup2", { NULL }, 2774, "tcp" }, { "rbakcup2", { NULL }, 2774, "udp" }, { "smpp", { NULL }, 2775, "tcp" }, { "smpp", { NULL }, 2775, "udp" }, { "ridgeway1", { NULL }, 2776, "tcp" }, { "ridgeway1", { NULL }, 2776, "udp" }, { "ridgeway2", { NULL }, 2777, "tcp" }, { "ridgeway2", { NULL }, 2777, "udp" }, { "gwen-sonya", { NULL }, 2778, "tcp" }, { "gwen-sonya", { NULL }, 2778, "udp" }, { "lbc-sync", { NULL }, 2779, "tcp" }, { "lbc-sync", { NULL }, 2779, "udp" }, { "lbc-control", { NULL }, 2780, "tcp" }, { "lbc-control", { NULL }, 2780, "udp" }, { "whosells", { NULL }, 2781, "tcp" }, { "whosells", { NULL }, 2781, "udp" }, { "everydayrc", { NULL }, 2782, "tcp" }, { "everydayrc", { NULL }, 2782, "udp" }, { "aises", { NULL }, 2783, "tcp" }, { "aises", { NULL }, 2783, "udp" }, { "www-dev", { NULL }, 2784, "tcp" }, { "www-dev", { NULL }, 2784, "udp" }, { "aic-np", { NULL }, 2785, "tcp" }, { "aic-np", { NULL }, 2785, "udp" }, { "aic-oncrpc", { NULL }, 2786, "tcp" }, { "aic-oncrpc", { NULL }, 2786, "udp" }, { "piccolo", { NULL }, 2787, "tcp" }, { "piccolo", { NULL }, 2787, "udp" }, { "fryeserv", { NULL }, 2788, "tcp" }, { "fryeserv", { NULL }, 2788, "udp" }, { "media-agent", { NULL }, 2789, "tcp" }, { "media-agent", { NULL }, 2789, "udp" }, { "plgproxy", { NULL }, 2790, "tcp" }, { "plgproxy", { NULL }, 2790, "udp" }, { "mtport-regist", { NULL }, 2791, "tcp" }, { "mtport-regist", { NULL }, 2791, "udp" }, { "f5-globalsite", { NULL }, 2792, "tcp" }, { "f5-globalsite", { NULL }, 2792, "udp" }, { "initlsmsad", { NULL }, 2793, "tcp" }, { "initlsmsad", { NULL }, 2793, "udp" }, { "livestats", { NULL }, 2795, "tcp" }, { "livestats", { NULL 
}, 2795, "udp" }, { "ac-tech", { NULL }, 2796, "tcp" }, { "ac-tech", { NULL }, 2796, "udp" }, { "esp-encap", { NULL }, 2797, "tcp" }, { "esp-encap", { NULL }, 2797, "udp" }, { "tmesis-upshot", { NULL }, 2798, "tcp" }, { "tmesis-upshot", { NULL }, 2798, "udp" }, { "icon-discover", { NULL }, 2799, "tcp" }, { "icon-discover", { NULL }, 2799, "udp" }, { "acc-raid", { NULL }, 2800, "tcp" }, { "acc-raid", { NULL }, 2800, "udp" }, { "igcp", { NULL }, 2801, "tcp" }, { "igcp", { NULL }, 2801, "udp" }, { "veritas-tcp1", { NULL }, 2802, "tcp" }, { "veritas-udp1", { NULL }, 2802, "udp" }, { "btprjctrl", { NULL }, 2803, "tcp" }, { "btprjctrl", { NULL }, 2803, "udp" }, { "dvr-esm", { NULL }, 2804, "tcp" }, { "dvr-esm", { NULL }, 2804, "udp" }, { "wta-wsp-s", { NULL }, 2805, "tcp" }, { "wta-wsp-s", { NULL }, 2805, "udp" }, { "cspuni", { NULL }, 2806, "tcp" }, { "cspuni", { NULL }, 2806, "udp" }, { "cspmulti", { NULL }, 2807, "tcp" }, { "cspmulti", { NULL }, 2807, "udp" }, { "j-lan-p", { NULL }, 2808, "tcp" }, { "j-lan-p", { NULL }, 2808, "udp" }, { "corbaloc", { NULL }, 2809, "tcp" }, { "corbaloc", { NULL }, 2809, "udp" }, { "netsteward", { NULL }, 2810, "tcp" }, { "netsteward", { NULL }, 2810, "udp" }, { "gsiftp", { NULL }, 2811, "tcp" }, { "gsiftp", { NULL }, 2811, "udp" }, { "atmtcp", { NULL }, 2812, "tcp" }, { "atmtcp", { NULL }, 2812, "udp" }, { "llm-pass", { NULL }, 2813, "tcp" }, { "llm-pass", { NULL }, 2813, "udp" }, { "llm-csv", { NULL }, 2814, "tcp" }, { "llm-csv", { NULL }, 2814, "udp" }, { "lbc-measure", { NULL }, 2815, "tcp" }, { "lbc-measure", { NULL }, 2815, "udp" }, { "lbc-watchdog", { NULL }, 2816, "tcp" }, { "lbc-watchdog", { NULL }, 2816, "udp" }, { "nmsigport", { NULL }, 2817, "tcp" }, { "nmsigport", { NULL }, 2817, "udp" }, { "rmlnk", { NULL }, 2818, "tcp" }, { "rmlnk", { NULL }, 2818, "udp" }, { "fc-faultnotify", { NULL }, 2819, "tcp" }, { "fc-faultnotify", { NULL }, 2819, "udp" }, { "univision", { NULL }, 2820, "tcp" }, { "univision", { NULL }, 2820, "udp" }, { "vrts-at-port", { NULL }, 2821, "tcp" }, { "vrts-at-port", { NULL }, 2821, "udp" }, { "ka0wuc", { NULL }, 2822, "tcp" }, { "ka0wuc", { NULL }, 2822, "udp" }, { "cqg-netlan", { NULL }, 2823, "tcp" }, { "cqg-netlan", { NULL }, 2823, "udp" }, { "cqg-netlan-1", { NULL }, 2824, "tcp" }, { "cqg-netlan-1", { NULL }, 2824, "udp" }, { "slc-systemlog", { NULL }, 2826, "tcp" }, { "slc-systemlog", { NULL }, 2826, "udp" }, { "slc-ctrlrloops", { NULL }, 2827, "tcp" }, { "slc-ctrlrloops", { NULL }, 2827, "udp" }, { "itm-lm", { NULL }, 2828, "tcp" }, { "itm-lm", { NULL }, 2828, "udp" }, { "silkp1", { NULL }, 2829, "tcp" }, { "silkp1", { NULL }, 2829, "udp" }, { "silkp2", { NULL }, 2830, "tcp" }, { "silkp2", { NULL }, 2830, "udp" }, { "silkp3", { NULL }, 2831, "tcp" }, { "silkp3", { NULL }, 2831, "udp" }, { "silkp4", { NULL }, 2832, "tcp" }, { "silkp4", { NULL }, 2832, "udp" }, { "glishd", { NULL }, 2833, "tcp" }, { "glishd", { NULL }, 2833, "udp" }, { "evtp", { NULL }, 2834, "tcp" }, { "evtp", { NULL }, 2834, "udp" }, { "evtp-data", { NULL }, 2835, "tcp" }, { "evtp-data", { NULL }, 2835, "udp" }, { "catalyst", { NULL }, 2836, "tcp" }, { "catalyst", { NULL }, 2836, "udp" }, { "repliweb", { NULL }, 2837, "tcp" }, { "repliweb", { NULL }, 2837, "udp" }, { "starbot", { NULL }, 2838, "tcp" }, { "starbot", { NULL }, 2838, "udp" }, { "nmsigport", { NULL }, 2839, "tcp" }, { "nmsigport", { NULL }, 2839, "udp" }, { "l3-exprt", { NULL }, 2840, "tcp" }, { "l3-exprt", { NULL }, 2840, "udp" }, { "l3-ranger", { NULL }, 2841, "tcp" }, { "l3-ranger", { NULL 
}, 2841, "udp" }, { "l3-hawk", { NULL }, 2842, "tcp" }, { "l3-hawk", { NULL }, 2842, "udp" }, { "pdnet", { NULL }, 2843, "tcp" }, { "pdnet", { NULL }, 2843, "udp" }, { "bpcp-poll", { NULL }, 2844, "tcp" }, { "bpcp-poll", { NULL }, 2844, "udp" }, { "bpcp-trap", { NULL }, 2845, "tcp" }, { "bpcp-trap", { NULL }, 2845, "udp" }, { "aimpp-hello", { NULL }, 2846, "tcp" }, { "aimpp-hello", { NULL }, 2846, "udp" }, { "aimpp-port-req", { NULL }, 2847, "tcp" }, { "aimpp-port-req", { NULL }, 2847, "udp" }, { "amt-blc-port", { NULL }, 2848, "tcp" }, { "amt-blc-port", { NULL }, 2848, "udp" }, { "fxp", { NULL }, 2849, "tcp" }, { "fxp", { NULL }, 2849, "udp" }, { "metaconsole", { NULL }, 2850, "tcp" }, { "metaconsole", { NULL }, 2850, "udp" }, { "webemshttp", { NULL }, 2851, "tcp" }, { "webemshttp", { NULL }, 2851, "udp" }, { "bears-01", { NULL }, 2852, "tcp" }, { "bears-01", { NULL }, 2852, "udp" }, { "ispipes", { NULL }, 2853, "tcp" }, { "ispipes", { NULL }, 2853, "udp" }, { "infomover", { NULL }, 2854, "tcp" }, { "infomover", { NULL }, 2854, "udp" }, { "msrp", { NULL }, 2855, "tcp" }, { "msrp", { NULL }, 2855, "udp" }, { "cesdinv", { NULL }, 2856, "tcp" }, { "cesdinv", { NULL }, 2856, "udp" }, { "simctlp", { NULL }, 2857, "tcp" }, { "simctlp", { NULL }, 2857, "udp" }, { "ecnp", { NULL }, 2858, "tcp" }, { "ecnp", { NULL }, 2858, "udp" }, { "activememory", { NULL }, 2859, "tcp" }, { "activememory", { NULL }, 2859, "udp" }, { "dialpad-voice1", { NULL }, 2860, "tcp" }, { "dialpad-voice1", { NULL }, 2860, "udp" }, { "dialpad-voice2", { NULL }, 2861, "tcp" }, { "dialpad-voice2", { NULL }, 2861, "udp" }, { "ttg-protocol", { NULL }, 2862, "tcp" }, { "ttg-protocol", { NULL }, 2862, "udp" }, { "sonardata", { NULL }, 2863, "tcp" }, { "sonardata", { NULL }, 2863, "udp" }, { "astromed-main", { NULL }, 2864, "tcp" }, { "astromed-main", { NULL }, 2864, "udp" }, { "pit-vpn", { NULL }, 2865, "tcp" }, { "pit-vpn", { NULL }, 2865, "udp" }, { "iwlistener", { NULL }, 2866, "tcp" }, { "iwlistener", { NULL }, 2866, "udp" }, { "esps-portal", { NULL }, 2867, "tcp" }, { "esps-portal", { NULL }, 2867, "udp" }, { "npep-messaging", { NULL }, 2868, "tcp" }, { "npep-messaging", { NULL }, 2868, "udp" }, { "icslap", { NULL }, 2869, "tcp" }, { "icslap", { NULL }, 2869, "udp" }, { "daishi", { NULL }, 2870, "tcp" }, { "daishi", { NULL }, 2870, "udp" }, { "msi-selectplay", { NULL }, 2871, "tcp" }, { "msi-selectplay", { NULL }, 2871, "udp" }, { "radix", { NULL }, 2872, "tcp" }, { "radix", { NULL }, 2872, "udp" }, { "dxmessagebase1", { NULL }, 2874, "tcp" }, { "dxmessagebase1", { NULL }, 2874, "udp" }, { "dxmessagebase2", { NULL }, 2875, "tcp" }, { "dxmessagebase2", { NULL }, 2875, "udp" }, { "sps-tunnel", { NULL }, 2876, "tcp" }, { "sps-tunnel", { NULL }, 2876, "udp" }, { "bluelance", { NULL }, 2877, "tcp" }, { "bluelance", { NULL }, 2877, "udp" }, { "aap", { NULL }, 2878, "tcp" }, { "aap", { NULL }, 2878, "udp" }, { "ucentric-ds", { NULL }, 2879, "tcp" }, { "ucentric-ds", { NULL }, 2879, "udp" }, { "synapse", { NULL }, 2880, "tcp" }, { "synapse", { NULL }, 2880, "udp" }, { "ndsp", { NULL }, 2881, "tcp" }, { "ndsp", { NULL }, 2881, "udp" }, { "ndtp", { NULL }, 2882, "tcp" }, { "ndtp", { NULL }, 2882, "udp" }, { "ndnp", { NULL }, 2883, "tcp" }, { "ndnp", { NULL }, 2883, "udp" }, { "flashmsg", { NULL }, 2884, "tcp" }, { "flashmsg", { NULL }, 2884, "udp" }, { "topflow", { NULL }, 2885, "tcp" }, { "topflow", { NULL }, 2885, "udp" }, { "responselogic", { NULL }, 2886, "tcp" }, { "responselogic", { NULL }, 2886, "udp" }, { "aironetddp", { NULL 
}, 2887, "tcp" }, { "aironetddp", { NULL }, 2887, "udp" }, { "spcsdlobby", { NULL }, 2888, "tcp" }, { "spcsdlobby", { NULL }, 2888, "udp" }, { "rsom", { NULL }, 2889, "tcp" }, { "rsom", { NULL }, 2889, "udp" }, { "cspclmulti", { NULL }, 2890, "tcp" }, { "cspclmulti", { NULL }, 2890, "udp" }, { "cinegrfx-elmd", { NULL }, 2891, "tcp" }, { "cinegrfx-elmd", { NULL }, 2891, "udp" }, { "snifferdata", { NULL }, 2892, "tcp" }, { "snifferdata", { NULL }, 2892, "udp" }, { "vseconnector", { NULL }, 2893, "tcp" }, { "vseconnector", { NULL }, 2893, "udp" }, { "abacus-remote", { NULL }, 2894, "tcp" }, { "abacus-remote", { NULL }, 2894, "udp" }, { "natuslink", { NULL }, 2895, "tcp" }, { "natuslink", { NULL }, 2895, "udp" }, { "ecovisiong6-1", { NULL }, 2896, "tcp" }, { "ecovisiong6-1", { NULL }, 2896, "udp" }, { "citrix-rtmp", { NULL }, 2897, "tcp" }, { "citrix-rtmp", { NULL }, 2897, "udp" }, { "appliance-cfg", { NULL }, 2898, "tcp" }, { "appliance-cfg", { NULL }, 2898, "udp" }, { "powergemplus", { NULL }, 2899, "tcp" }, { "powergemplus", { NULL }, 2899, "udp" }, { "quicksuite", { NULL }, 2900, "tcp" }, { "quicksuite", { NULL }, 2900, "udp" }, { "allstorcns", { NULL }, 2901, "tcp" }, { "allstorcns", { NULL }, 2901, "udp" }, { "netaspi", { NULL }, 2902, "tcp" }, { "netaspi", { NULL }, 2902, "udp" }, { "suitcase", { NULL }, 2903, "tcp" }, { "suitcase", { NULL }, 2903, "udp" }, { "m2ua", { NULL }, 2904, "tcp" }, { "m2ua", { NULL }, 2904, "udp" }, { "m2ua", { NULL }, 2904, "sctp" }, { "m3ua", { NULL }, 2905, "tcp" }, { "m3ua", { NULL }, 2905, "sctp" }, { "caller9", { NULL }, 2906, "tcp" }, { "caller9", { NULL }, 2906, "udp" }, { "webmethods-b2b", { NULL }, 2907, "tcp" }, { "webmethods-b2b", { NULL }, 2907, "udp" }, { "mao", { NULL }, 2908, "tcp" }, { "mao", { NULL }, 2908, "udp" }, { "funk-dialout", { NULL }, 2909, "tcp" }, { "funk-dialout", { NULL }, 2909, "udp" }, { "tdaccess", { NULL }, 2910, "tcp" }, { "tdaccess", { NULL }, 2910, "udp" }, { "blockade", { NULL }, 2911, "tcp" }, { "blockade", { NULL }, 2911, "udp" }, { "epicon", { NULL }, 2912, "tcp" }, { "epicon", { NULL }, 2912, "udp" }, { "boosterware", { NULL }, 2913, "tcp" }, { "boosterware", { NULL }, 2913, "udp" }, { "gamelobby", { NULL }, 2914, "tcp" }, { "gamelobby", { NULL }, 2914, "udp" }, { "tksocket", { NULL }, 2915, "tcp" }, { "tksocket", { NULL }, 2915, "udp" }, { "elvin_server", { NULL }, 2916, "tcp" }, { "elvin_server", { NULL }, 2916, "udp" }, { "elvin_client", { NULL }, 2917, "tcp" }, { "elvin_client", { NULL }, 2917, "udp" }, { "kastenchasepad", { NULL }, 2918, "tcp" }, { "kastenchasepad", { NULL }, 2918, "udp" }, { "roboer", { NULL }, 2919, "tcp" }, { "roboer", { NULL }, 2919, "udp" }, { "roboeda", { NULL }, 2920, "tcp" }, { "roboeda", { NULL }, 2920, "udp" }, { "cesdcdman", { NULL }, 2921, "tcp" }, { "cesdcdman", { NULL }, 2921, "udp" }, { "cesdcdtrn", { NULL }, 2922, "tcp" }, { "cesdcdtrn", { NULL }, 2922, "udp" }, { "wta-wsp-wtp-s", { NULL }, 2923, "tcp" }, { "wta-wsp-wtp-s", { NULL }, 2923, "udp" }, { "precise-vip", { NULL }, 2924, "tcp" }, { "precise-vip", { NULL }, 2924, "udp" }, { "mobile-file-dl", { NULL }, 2926, "tcp" }, { "mobile-file-dl", { NULL }, 2926, "udp" }, { "unimobilectrl", { NULL }, 2927, "tcp" }, { "unimobilectrl", { NULL }, 2927, "udp" }, { "redstone-cpss", { NULL }, 2928, "tcp" }, { "redstone-cpss", { NULL }, 2928, "udp" }, { "amx-webadmin", { NULL }, 2929, "tcp" }, { "amx-webadmin", { NULL }, 2929, "udp" }, { "amx-weblinx", { NULL }, 2930, "tcp" }, { "amx-weblinx", { NULL }, 2930, "udp" }, { "circle-x", { NULL 
}, 2931, "tcp" }, { "circle-x", { NULL }, 2931, "udp" }, { "incp", { NULL }, 2932, "tcp" }, { "incp", { NULL }, 2932, "udp" }, { "4-tieropmgw", { NULL }, 2933, "tcp" }, { "4-tieropmgw", { NULL }, 2933, "udp" }, { "4-tieropmcli", { NULL }, 2934, "tcp" }, { "4-tieropmcli", { NULL }, 2934, "udp" }, { "qtp", { NULL }, 2935, "tcp" }, { "qtp", { NULL }, 2935, "udp" }, { "otpatch", { NULL }, 2936, "tcp" }, { "otpatch", { NULL }, 2936, "udp" }, { "pnaconsult-lm", { NULL }, 2937, "tcp" }, { "pnaconsult-lm", { NULL }, 2937, "udp" }, { "sm-pas-1", { NULL }, 2938, "tcp" }, { "sm-pas-1", { NULL }, 2938, "udp" }, { "sm-pas-2", { NULL }, 2939, "tcp" }, { "sm-pas-2", { NULL }, 2939, "udp" }, { "sm-pas-3", { NULL }, 2940, "tcp" }, { "sm-pas-3", { NULL }, 2940, "udp" }, { "sm-pas-4", { NULL }, 2941, "tcp" }, { "sm-pas-4", { NULL }, 2941, "udp" }, { "sm-pas-5", { NULL }, 2942, "tcp" }, { "sm-pas-5", { NULL }, 2942, "udp" }, { "ttnrepository", { NULL }, 2943, "tcp" }, { "ttnrepository", { NULL }, 2943, "udp" }, { "megaco-h248", { NULL }, 2944, "tcp" }, { "megaco-h248", { NULL }, 2944, "udp" }, { "megaco-h248", { NULL }, 2944, "sctp" }, { "h248-binary", { NULL }, 2945, "tcp" }, { "h248-binary", { NULL }, 2945, "udp" }, { "h248-binary", { NULL }, 2945, "sctp" }, { "fjsvmpor", { NULL }, 2946, "tcp" }, { "fjsvmpor", { NULL }, 2946, "udp" }, { "gpsd", { NULL }, 2947, "tcp" }, { "gpsd", { NULL }, 2947, "udp" }, { "wap-push", { NULL }, 2948, "tcp" }, { "wap-push", { NULL }, 2948, "udp" }, { "wap-pushsecure", { NULL }, 2949, "tcp" }, { "wap-pushsecure", { NULL }, 2949, "udp" }, { "esip", { NULL }, 2950, "tcp" }, { "esip", { NULL }, 2950, "udp" }, { "ottp", { NULL }, 2951, "tcp" }, { "ottp", { NULL }, 2951, "udp" }, { "mpfwsas", { NULL }, 2952, "tcp" }, { "mpfwsas", { NULL }, 2952, "udp" }, { "ovalarmsrv", { NULL }, 2953, "tcp" }, { "ovalarmsrv", { NULL }, 2953, "udp" }, { "ovalarmsrv-cmd", { NULL }, 2954, "tcp" }, { "ovalarmsrv-cmd", { NULL }, 2954, "udp" }, { "csnotify", { NULL }, 2955, "tcp" }, { "csnotify", { NULL }, 2955, "udp" }, { "ovrimosdbman", { NULL }, 2956, "tcp" }, { "ovrimosdbman", { NULL }, 2956, "udp" }, { "jmact5", { NULL }, 2957, "tcp" }, { "jmact5", { NULL }, 2957, "udp" }, { "jmact6", { NULL }, 2958, "tcp" }, { "jmact6", { NULL }, 2958, "udp" }, { "rmopagt", { NULL }, 2959, "tcp" }, { "rmopagt", { NULL }, 2959, "udp" }, { "dfoxserver", { NULL }, 2960, "tcp" }, { "dfoxserver", { NULL }, 2960, "udp" }, { "boldsoft-lm", { NULL }, 2961, "tcp" }, { "boldsoft-lm", { NULL }, 2961, "udp" }, { "iph-policy-cli", { NULL }, 2962, "tcp" }, { "iph-policy-cli", { NULL }, 2962, "udp" }, { "iph-policy-adm", { NULL }, 2963, "tcp" }, { "iph-policy-adm", { NULL }, 2963, "udp" }, { "bullant-srap", { NULL }, 2964, "tcp" }, { "bullant-srap", { NULL }, 2964, "udp" }, { "bullant-rap", { NULL }, 2965, "tcp" }, { "bullant-rap", { NULL }, 2965, "udp" }, { "idp-infotrieve", { NULL }, 2966, "tcp" }, { "idp-infotrieve", { NULL }, 2966, "udp" }, { "ssc-agent", { NULL }, 2967, "tcp" }, { "ssc-agent", { NULL }, 2967, "udp" }, { "enpp", { NULL }, 2968, "tcp" }, { "enpp", { NULL }, 2968, "udp" }, { "essp", { NULL }, 2969, "tcp" }, { "essp", { NULL }, 2969, "udp" }, { "index-net", { NULL }, 2970, "tcp" }, { "index-net", { NULL }, 2970, "udp" }, { "netclip", { NULL }, 2971, "tcp" }, { "netclip", { NULL }, 2971, "udp" }, { "pmsm-webrctl", { NULL }, 2972, "tcp" }, { "pmsm-webrctl", { NULL }, 2972, "udp" }, { "svnetworks", { NULL }, 2973, "tcp" }, { "svnetworks", { NULL }, 2973, "udp" }, { "signal", { NULL }, 2974, "tcp" }, { "signal", { 
NULL }, 2974, "udp" }, { "fjmpcm", { NULL }, 2975, "tcp" }, { "fjmpcm", { NULL }, 2975, "udp" }, { "cns-srv-port", { NULL }, 2976, "tcp" }, { "cns-srv-port", { NULL }, 2976, "udp" }, { "ttc-etap-ns", { NULL }, 2977, "tcp" }, { "ttc-etap-ns", { NULL }, 2977, "udp" }, { "ttc-etap-ds", { NULL }, 2978, "tcp" }, { "ttc-etap-ds", { NULL }, 2978, "udp" }, { "h263-video", { NULL }, 2979, "tcp" }, { "h263-video", { NULL }, 2979, "udp" }, { "wimd", { NULL }, 2980, "tcp" }, { "wimd", { NULL }, 2980, "udp" }, { "mylxamport", { NULL }, 2981, "tcp" }, { "mylxamport", { NULL }, 2981, "udp" }, { "iwb-whiteboard", { NULL }, 2982, "tcp" }, { "iwb-whiteboard", { NULL }, 2982, "udp" }, { "netplan", { NULL }, 2983, "tcp" }, { "netplan", { NULL }, 2983, "udp" }, { "hpidsadmin", { NULL }, 2984, "tcp" }, { "hpidsadmin", { NULL }, 2984, "udp" }, { "hpidsagent", { NULL }, 2985, "tcp" }, { "hpidsagent", { NULL }, 2985, "udp" }, { "stonefalls", { NULL }, 2986, "tcp" }, { "stonefalls", { NULL }, 2986, "udp" }, { "identify", { NULL }, 2987, "tcp" }, { "identify", { NULL }, 2987, "udp" }, { "hippad", { NULL }, 2988, "tcp" }, { "hippad", { NULL }, 2988, "udp" }, { "zarkov", { NULL }, 2989, "tcp" }, { "zarkov", { NULL }, 2989, "udp" }, { "boscap", { NULL }, 2990, "tcp" }, { "boscap", { NULL }, 2990, "udp" }, { "wkstn-mon", { NULL }, 2991, "tcp" }, { "wkstn-mon", { NULL }, 2991, "udp" }, { "avenyo", { NULL }, 2992, "tcp" }, { "avenyo", { NULL }, 2992, "udp" }, { "veritas-vis1", { NULL }, 2993, "tcp" }, { "veritas-vis1", { NULL }, 2993, "udp" }, { "veritas-vis2", { NULL }, 2994, "tcp" }, { "veritas-vis2", { NULL }, 2994, "udp" }, { "idrs", { NULL }, 2995, "tcp" }, { "idrs", { NULL }, 2995, "udp" }, { "vsixml", { NULL }, 2996, "tcp" }, { "vsixml", { NULL }, 2996, "udp" }, { "rebol", { NULL }, 2997, "tcp" }, { "rebol", { NULL }, 2997, "udp" }, { "realsecure", { NULL }, 2998, "tcp" }, { "realsecure", { NULL }, 2998, "udp" }, { "remoteware-un", { NULL }, 2999, "tcp" }, { "remoteware-un", { NULL }, 2999, "udp" }, { "hbci", { NULL }, 3000, "tcp" }, { "hbci", { NULL }, 3000, "udp" }, { "remoteware-cl", { NULL }, 3000, "tcp" }, { "remoteware-cl", { NULL }, 3000, "udp" }, { "exlm-agent", { NULL }, 3002, "tcp" }, { "exlm-agent", { NULL }, 3002, "udp" }, { "remoteware-srv", { NULL }, 3002, "tcp" }, { "remoteware-srv", { NULL }, 3002, "udp" }, { "cgms", { NULL }, 3003, "tcp" }, { "cgms", { NULL }, 3003, "udp" }, { "csoftragent", { NULL }, 3004, "tcp" }, { "csoftragent", { NULL }, 3004, "udp" }, { "geniuslm", { NULL }, 3005, "tcp" }, { "geniuslm", { NULL }, 3005, "udp" }, { "ii-admin", { NULL }, 3006, "tcp" }, { "ii-admin", { NULL }, 3006, "udp" }, { "lotusmtap", { NULL }, 3007, "tcp" }, { "lotusmtap", { NULL }, 3007, "udp" }, { "midnight-tech", { NULL }, 3008, "tcp" }, { "midnight-tech", { NULL }, 3008, "udp" }, { "pxc-ntfy", { NULL }, 3009, "tcp" }, { "pxc-ntfy", { NULL }, 3009, "udp" }, { "gw", { NULL }, 3010, "tcp" }, { "ping-pong", { NULL }, 3010, "udp" }, { "trusted-web", { NULL }, 3011, "tcp" }, { "trusted-web", { NULL }, 3011, "udp" }, { "twsdss", { NULL }, 3012, "tcp" }, { "twsdss", { NULL }, 3012, "udp" }, { "gilatskysurfer", { NULL }, 3013, "tcp" }, { "gilatskysurfer", { NULL }, 3013, "udp" }, { "broker_service", { NULL }, 3014, "tcp" }, { "broker_service", { NULL }, 3014, "udp" }, { "nati-dstp", { NULL }, 3015, "tcp" }, { "nati-dstp", { NULL }, 3015, "udp" }, { "notify_srvr", { NULL }, 3016, "tcp" }, { "notify_srvr", { NULL }, 3016, "udp" }, { "event_listener", { NULL }, 3017, "tcp" }, { "event_listener", { NULL }, 3017, 
"udp" }, { "srvc_registry", { NULL }, 3018, "tcp" }, { "srvc_registry", { NULL }, 3018, "udp" }, { "resource_mgr", { NULL }, 3019, "tcp" }, { "resource_mgr", { NULL }, 3019, "udp" }, { "cifs", { NULL }, 3020, "tcp" }, { "cifs", { NULL }, 3020, "udp" }, { "agriserver", { NULL }, 3021, "tcp" }, { "agriserver", { NULL }, 3021, "udp" }, { "csregagent", { NULL }, 3022, "tcp" }, { "csregagent", { NULL }, 3022, "udp" }, { "magicnotes", { NULL }, 3023, "tcp" }, { "magicnotes", { NULL }, 3023, "udp" }, { "nds_sso", { NULL }, 3024, "tcp" }, { "nds_sso", { NULL }, 3024, "udp" }, { "arepa-raft", { NULL }, 3025, "tcp" }, { "arepa-raft", { NULL }, 3025, "udp" }, { "agri-gateway", { NULL }, 3026, "tcp" }, { "agri-gateway", { NULL }, 3026, "udp" }, { "LiebDevMgmt_C", { NULL }, 3027, "tcp" }, { "LiebDevMgmt_C", { NULL }, 3027, "udp" }, { "LiebDevMgmt_DM", { NULL }, 3028, "tcp" }, { "LiebDevMgmt_DM", { NULL }, 3028, "udp" }, { "LiebDevMgmt_A", { NULL }, 3029, "tcp" }, { "LiebDevMgmt_A", { NULL }, 3029, "udp" }, { "arepa-cas", { NULL }, 3030, "tcp" }, { "arepa-cas", { NULL }, 3030, "udp" }, { "eppc", { NULL }, 3031, "tcp" }, { "eppc", { NULL }, 3031, "udp" }, { "redwood-chat", { NULL }, 3032, "tcp" }, { "redwood-chat", { NULL }, 3032, "udp" }, { "pdb", { NULL }, 3033, "tcp" }, { "pdb", { NULL }, 3033, "udp" }, { "osmosis-aeea", { NULL }, 3034, "tcp" }, { "osmosis-aeea", { NULL }, 3034, "udp" }, { "fjsv-gssagt", { NULL }, 3035, "tcp" }, { "fjsv-gssagt", { NULL }, 3035, "udp" }, { "hagel-dump", { NULL }, 3036, "tcp" }, { "hagel-dump", { NULL }, 3036, "udp" }, { "hp-san-mgmt", { NULL }, 3037, "tcp" }, { "hp-san-mgmt", { NULL }, 3037, "udp" }, { "santak-ups", { NULL }, 3038, "tcp" }, { "santak-ups", { NULL }, 3038, "udp" }, { "cogitate", { NULL }, 3039, "tcp" }, { "cogitate", { NULL }, 3039, "udp" }, { "tomato-springs", { NULL }, 3040, "tcp" }, { "tomato-springs", { NULL }, 3040, "udp" }, { "di-traceware", { NULL }, 3041, "tcp" }, { "di-traceware", { NULL }, 3041, "udp" }, { "journee", { NULL }, 3042, "tcp" }, { "journee", { NULL }, 3042, "udp" }, { "brp", { NULL }, 3043, "tcp" }, { "brp", { NULL }, 3043, "udp" }, { "epp", { NULL }, 3044, "tcp" }, { "epp", { NULL }, 3044, "udp" }, { "responsenet", { NULL }, 3045, "tcp" }, { "responsenet", { NULL }, 3045, "udp" }, { "di-ase", { NULL }, 3046, "tcp" }, { "di-ase", { NULL }, 3046, "udp" }, { "hlserver", { NULL }, 3047, "tcp" }, { "hlserver", { NULL }, 3047, "udp" }, { "pctrader", { NULL }, 3048, "tcp" }, { "pctrader", { NULL }, 3048, "udp" }, { "nsws", { NULL }, 3049, "tcp" }, { "nsws", { NULL }, 3049, "udp" }, { "gds_db", { NULL }, 3050, "tcp" }, { "gds_db", { NULL }, 3050, "udp" }, { "galaxy-server", { NULL }, 3051, "tcp" }, { "galaxy-server", { NULL }, 3051, "udp" }, { "apc-3052", { NULL }, 3052, "tcp" }, { "apc-3052", { NULL }, 3052, "udp" }, { "dsom-server", { NULL }, 3053, "tcp" }, { "dsom-server", { NULL }, 3053, "udp" }, { "amt-cnf-prot", { NULL }, 3054, "tcp" }, { "amt-cnf-prot", { NULL }, 3054, "udp" }, { "policyserver", { NULL }, 3055, "tcp" }, { "policyserver", { NULL }, 3055, "udp" }, { "cdl-server", { NULL }, 3056, "tcp" }, { "cdl-server", { NULL }, 3056, "udp" }, { "goahead-fldup", { NULL }, 3057, "tcp" }, { "goahead-fldup", { NULL }, 3057, "udp" }, { "videobeans", { NULL }, 3058, "tcp" }, { "videobeans", { NULL }, 3058, "udp" }, { "qsoft", { NULL }, 3059, "tcp" }, { "qsoft", { NULL }, 3059, "udp" }, { "interserver", { NULL }, 3060, "tcp" }, { "interserver", { NULL }, 3060, "udp" }, { "cautcpd", { NULL }, 3061, "tcp" }, { "cautcpd", { NULL }, 3061, 
"udp" }, { "ncacn-ip-tcp", { NULL }, 3062, "tcp" }, { "ncacn-ip-tcp", { NULL }, 3062, "udp" }, { "ncadg-ip-udp", { NULL }, 3063, "tcp" }, { "ncadg-ip-udp", { NULL }, 3063, "udp" }, { "rprt", { NULL }, 3064, "tcp" }, { "rprt", { NULL }, 3064, "udp" }, { "slinterbase", { NULL }, 3065, "tcp" }, { "slinterbase", { NULL }, 3065, "udp" }, { "netattachsdmp", { NULL }, 3066, "tcp" }, { "netattachsdmp", { NULL }, 3066, "udp" }, { "fjhpjp", { NULL }, 3067, "tcp" }, { "fjhpjp", { NULL }, 3067, "udp" }, { "ls3bcast", { NULL }, 3068, "tcp" }, { "ls3bcast", { NULL }, 3068, "udp" }, { "ls3", { NULL }, 3069, "tcp" }, { "ls3", { NULL }, 3069, "udp" }, { "mgxswitch", { NULL }, 3070, "tcp" }, { "mgxswitch", { NULL }, 3070, "udp" }, { "csd-mgmt-port", { NULL }, 3071, "tcp" }, { "csd-mgmt-port", { NULL }, 3071, "udp" }, { "csd-monitor", { NULL }, 3072, "tcp" }, { "csd-monitor", { NULL }, 3072, "udp" }, { "vcrp", { NULL }, 3073, "tcp" }, { "vcrp", { NULL }, 3073, "udp" }, { "xbox", { NULL }, 3074, "tcp" }, { "xbox", { NULL }, 3074, "udp" }, { "orbix-locator", { NULL }, 3075, "tcp" }, { "orbix-locator", { NULL }, 3075, "udp" }, { "orbix-config", { NULL }, 3076, "tcp" }, { "orbix-config", { NULL }, 3076, "udp" }, { "orbix-loc-ssl", { NULL }, 3077, "tcp" }, { "orbix-loc-ssl", { NULL }, 3077, "udp" }, { "orbix-cfg-ssl", { NULL }, 3078, "tcp" }, { "orbix-cfg-ssl", { NULL }, 3078, "udp" }, { "lv-frontpanel", { NULL }, 3079, "tcp" }, { "lv-frontpanel", { NULL }, 3079, "udp" }, { "stm_pproc", { NULL }, 3080, "tcp" }, { "stm_pproc", { NULL }, 3080, "udp" }, { "tl1-lv", { NULL }, 3081, "tcp" }, { "tl1-lv", { NULL }, 3081, "udp" }, { "tl1-raw", { NULL }, 3082, "tcp" }, { "tl1-raw", { NULL }, 3082, "udp" }, { "tl1-telnet", { NULL }, 3083, "tcp" }, { "tl1-telnet", { NULL }, 3083, "udp" }, { "itm-mccs", { NULL }, 3084, "tcp" }, { "itm-mccs", { NULL }, 3084, "udp" }, { "pcihreq", { NULL }, 3085, "tcp" }, { "pcihreq", { NULL }, 3085, "udp" }, { "jdl-dbkitchen", { NULL }, 3086, "tcp" }, { "jdl-dbkitchen", { NULL }, 3086, "udp" }, { "asoki-sma", { NULL }, 3087, "tcp" }, { "asoki-sma", { NULL }, 3087, "udp" }, { "xdtp", { NULL }, 3088, "tcp" }, { "xdtp", { NULL }, 3088, "udp" }, { "ptk-alink", { NULL }, 3089, "tcp" }, { "ptk-alink", { NULL }, 3089, "udp" }, { "stss", { NULL }, 3090, "tcp" }, { "stss", { NULL }, 3090, "udp" }, { "1ci-smcs", { NULL }, 3091, "tcp" }, { "1ci-smcs", { NULL }, 3091, "udp" }, { "rapidmq-center", { NULL }, 3093, "tcp" }, { "rapidmq-center", { NULL }, 3093, "udp" }, { "rapidmq-reg", { NULL }, 3094, "tcp" }, { "rapidmq-reg", { NULL }, 3094, "udp" }, { "panasas", { NULL }, 3095, "tcp" }, { "panasas", { NULL }, 3095, "udp" }, { "ndl-aps", { NULL }, 3096, "tcp" }, { "ndl-aps", { NULL }, 3096, "udp" }, { "itu-bicc-stc", { NULL }, 3097, "sctp" }, { "umm-port", { NULL }, 3098, "tcp" }, { "umm-port", { NULL }, 3098, "udp" }, { "chmd", { NULL }, 3099, "tcp" }, { "chmd", { NULL }, 3099, "udp" }, { "opcon-xps", { NULL }, 3100, "tcp" }, { "opcon-xps", { NULL }, 3100, "udp" }, { "hp-pxpib", { NULL }, 3101, "tcp" }, { "hp-pxpib", { NULL }, 3101, "udp" }, { "slslavemon", { NULL }, 3102, "tcp" }, { "slslavemon", { NULL }, 3102, "udp" }, { "autocuesmi", { NULL }, 3103, "tcp" }, { "autocuesmi", { NULL }, 3103, "udp" }, { "autocuelog", { NULL }, 3104, "tcp" }, { "autocuetime", { NULL }, 3104, "udp" }, { "cardbox", { NULL }, 3105, "tcp" }, { "cardbox", { NULL }, 3105, "udp" }, { "cardbox-http", { NULL }, 3106, "tcp" }, { "cardbox-http", { NULL }, 3106, "udp" }, { "business", { NULL }, 3107, "tcp" }, { "business", { NULL }, 
3107, "udp" }, { "geolocate", { NULL }, 3108, "tcp" }, { "geolocate", { NULL }, 3108, "udp" }, { "personnel", { NULL }, 3109, "tcp" }, { "personnel", { NULL }, 3109, "udp" }, { "sim-control", { NULL }, 3110, "tcp" }, { "sim-control", { NULL }, 3110, "udp" }, { "wsynch", { NULL }, 3111, "tcp" }, { "wsynch", { NULL }, 3111, "udp" }, { "ksysguard", { NULL }, 3112, "tcp" }, { "ksysguard", { NULL }, 3112, "udp" }, { "cs-auth-svr", { NULL }, 3113, "tcp" }, { "cs-auth-svr", { NULL }, 3113, "udp" }, { "ccmad", { NULL }, 3114, "tcp" }, { "ccmad", { NULL }, 3114, "udp" }, { "mctet-master", { NULL }, 3115, "tcp" }, { "mctet-master", { NULL }, 3115, "udp" }, { "mctet-gateway", { NULL }, 3116, "tcp" }, { "mctet-gateway", { NULL }, 3116, "udp" }, { "mctet-jserv", { NULL }, 3117, "tcp" }, { "mctet-jserv", { NULL }, 3117, "udp" }, { "pkagent", { NULL }, 3118, "tcp" }, { "pkagent", { NULL }, 3118, "udp" }, { "d2000kernel", { NULL }, 3119, "tcp" }, { "d2000kernel", { NULL }, 3119, "udp" }, { "d2000webserver", { NULL }, 3120, "tcp" }, { "d2000webserver", { NULL }, 3120, "udp" }, { "vtr-emulator", { NULL }, 3122, "tcp" }, { "vtr-emulator", { NULL }, 3122, "udp" }, { "edix", { NULL }, 3123, "tcp" }, { "edix", { NULL }, 3123, "udp" }, { "beacon-port", { NULL }, 3124, "tcp" }, { "beacon-port", { NULL }, 3124, "udp" }, { "a13-an", { NULL }, 3125, "tcp" }, { "a13-an", { NULL }, 3125, "udp" }, { "ctx-bridge", { NULL }, 3127, "tcp" }, { "ctx-bridge", { NULL }, 3127, "udp" }, { "ndl-aas", { NULL }, 3128, "tcp" }, { "ndl-aas", { NULL }, 3128, "udp" }, { "netport-id", { NULL }, 3129, "tcp" }, { "netport-id", { NULL }, 3129, "udp" }, { "icpv2", { NULL }, 3130, "tcp" }, { "icpv2", { NULL }, 3130, "udp" }, { "netbookmark", { NULL }, 3131, "tcp" }, { "netbookmark", { NULL }, 3131, "udp" }, { "ms-rule-engine", { NULL }, 3132, "tcp" }, { "ms-rule-engine", { NULL }, 3132, "udp" }, { "prism-deploy", { NULL }, 3133, "tcp" }, { "prism-deploy", { NULL }, 3133, "udp" }, { "ecp", { NULL }, 3134, "tcp" }, { "ecp", { NULL }, 3134, "udp" }, { "peerbook-port", { NULL }, 3135, "tcp" }, { "peerbook-port", { NULL }, 3135, "udp" }, { "grubd", { NULL }, 3136, "tcp" }, { "grubd", { NULL }, 3136, "udp" }, { "rtnt-1", { NULL }, 3137, "tcp" }, { "rtnt-1", { NULL }, 3137, "udp" }, { "rtnt-2", { NULL }, 3138, "tcp" }, { "rtnt-2", { NULL }, 3138, "udp" }, { "incognitorv", { NULL }, 3139, "tcp" }, { "incognitorv", { NULL }, 3139, "udp" }, { "ariliamulti", { NULL }, 3140, "tcp" }, { "ariliamulti", { NULL }, 3140, "udp" }, { "vmodem", { NULL }, 3141, "tcp" }, { "vmodem", { NULL }, 3141, "udp" }, { "rdc-wh-eos", { NULL }, 3142, "tcp" }, { "rdc-wh-eos", { NULL }, 3142, "udp" }, { "seaview", { NULL }, 3143, "tcp" }, { "seaview", { NULL }, 3143, "udp" }, { "tarantella", { NULL }, 3144, "tcp" }, { "tarantella", { NULL }, 3144, "udp" }, { "csi-lfap", { NULL }, 3145, "tcp" }, { "csi-lfap", { NULL }, 3145, "udp" }, { "bears-02", { NULL }, 3146, "tcp" }, { "bears-02", { NULL }, 3146, "udp" }, { "rfio", { NULL }, 3147, "tcp" }, { "rfio", { NULL }, 3147, "udp" }, { "nm-game-admin", { NULL }, 3148, "tcp" }, { "nm-game-admin", { NULL }, 3148, "udp" }, { "nm-game-server", { NULL }, 3149, "tcp" }, { "nm-game-server", { NULL }, 3149, "udp" }, { "nm-asses-admin", { NULL }, 3150, "tcp" }, { "nm-asses-admin", { NULL }, 3150, "udp" }, { "nm-assessor", { NULL }, 3151, "tcp" }, { "nm-assessor", { NULL }, 3151, "udp" }, { "feitianrockey", { NULL }, 3152, "tcp" }, { "feitianrockey", { NULL }, 3152, "udp" }, { "s8-client-port", { NULL }, 3153, "tcp" }, { "s8-client-port", { 
NULL }, 3153, "udp" }, { "ccmrmi", { NULL }, 3154, "tcp" }, { "ccmrmi", { NULL }, 3154, "udp" }, { "jpegmpeg", { NULL }, 3155, "tcp" }, { "jpegmpeg", { NULL }, 3155, "udp" }, { "indura", { NULL }, 3156, "tcp" }, { "indura", { NULL }, 3156, "udp" }, { "e3consultants", { NULL }, 3157, "tcp" }, { "e3consultants", { NULL }, 3157, "udp" }, { "stvp", { NULL }, 3158, "tcp" }, { "stvp", { NULL }, 3158, "udp" }, { "navegaweb-port", { NULL }, 3159, "tcp" }, { "navegaweb-port", { NULL }, 3159, "udp" }, { "tip-app-server", { NULL }, 3160, "tcp" }, { "tip-app-server", { NULL }, 3160, "udp" }, { "doc1lm", { NULL }, 3161, "tcp" }, { "doc1lm", { NULL }, 3161, "udp" }, { "sflm", { NULL }, 3162, "tcp" }, { "sflm", { NULL }, 3162, "udp" }, { "res-sap", { NULL }, 3163, "tcp" }, { "res-sap", { NULL }, 3163, "udp" }, { "imprs", { NULL }, 3164, "tcp" }, { "imprs", { NULL }, 3164, "udp" }, { "newgenpay", { NULL }, 3165, "tcp" }, { "newgenpay", { NULL }, 3165, "udp" }, { "sossecollector", { NULL }, 3166, "tcp" }, { "sossecollector", { NULL }, 3166, "udp" }, { "nowcontact", { NULL }, 3167, "tcp" }, { "nowcontact", { NULL }, 3167, "udp" }, { "poweronnud", { NULL }, 3168, "tcp" }, { "poweronnud", { NULL }, 3168, "udp" }, { "serverview-as", { NULL }, 3169, "tcp" }, { "serverview-as", { NULL }, 3169, "udp" }, { "serverview-asn", { NULL }, 3170, "tcp" }, { "serverview-asn", { NULL }, 3170, "udp" }, { "serverview-gf", { NULL }, 3171, "tcp" }, { "serverview-gf", { NULL }, 3171, "udp" }, { "serverview-rm", { NULL }, 3172, "tcp" }, { "serverview-rm", { NULL }, 3172, "udp" }, { "serverview-icc", { NULL }, 3173, "tcp" }, { "serverview-icc", { NULL }, 3173, "udp" }, { "armi-server", { NULL }, 3174, "tcp" }, { "armi-server", { NULL }, 3174, "udp" }, { "t1-e1-over-ip", { NULL }, 3175, "tcp" }, { "t1-e1-over-ip", { NULL }, 3175, "udp" }, { "ars-master", { NULL }, 3176, "tcp" }, { "ars-master", { NULL }, 3176, "udp" }, { "phonex-port", { NULL }, 3177, "tcp" }, { "phonex-port", { NULL }, 3177, "udp" }, { "radclientport", { NULL }, 3178, "tcp" }, { "radclientport", { NULL }, 3178, "udp" }, { "h2gf-w-2m", { NULL }, 3179, "tcp" }, { "h2gf-w-2m", { NULL }, 3179, "udp" }, { "mc-brk-srv", { NULL }, 3180, "tcp" }, { "mc-brk-srv", { NULL }, 3180, "udp" }, { "bmcpatrolagent", { NULL }, 3181, "tcp" }, { "bmcpatrolagent", { NULL }, 3181, "udp" }, { "bmcpatrolrnvu", { NULL }, 3182, "tcp" }, { "bmcpatrolrnvu", { NULL }, 3182, "udp" }, { "cops-tls", { NULL }, 3183, "tcp" }, { "cops-tls", { NULL }, 3183, "udp" }, { "apogeex-port", { NULL }, 3184, "tcp" }, { "apogeex-port", { NULL }, 3184, "udp" }, { "smpppd", { NULL }, 3185, "tcp" }, { "smpppd", { NULL }, 3185, "udp" }, { "iiw-port", { NULL }, 3186, "tcp" }, { "iiw-port", { NULL }, 3186, "udp" }, { "odi-port", { NULL }, 3187, "tcp" }, { "odi-port", { NULL }, 3187, "udp" }, { "brcm-comm-port", { NULL }, 3188, "tcp" }, { "brcm-comm-port", { NULL }, 3188, "udp" }, { "pcle-infex", { NULL }, 3189, "tcp" }, { "pcle-infex", { NULL }, 3189, "udp" }, { "csvr-proxy", { NULL }, 3190, "tcp" }, { "csvr-proxy", { NULL }, 3190, "udp" }, { "csvr-sslproxy", { NULL }, 3191, "tcp" }, { "csvr-sslproxy", { NULL }, 3191, "udp" }, { "firemonrcc", { NULL }, 3192, "tcp" }, { "firemonrcc", { NULL }, 3192, "udp" }, { "spandataport", { NULL }, 3193, "tcp" }, { "spandataport", { NULL }, 3193, "udp" }, { "magbind", { NULL }, 3194, "tcp" }, { "magbind", { NULL }, 3194, "udp" }, { "ncu-1", { NULL }, 3195, "tcp" }, { "ncu-1", { NULL }, 3195, "udp" }, { "ncu-2", { NULL }, 3196, "tcp" }, { "ncu-2", { NULL }, 3196, "udp" }, { 
"embrace-dp-s", { NULL }, 3197, "tcp" }, { "embrace-dp-s", { NULL }, 3197, "udp" }, { "embrace-dp-c", { NULL }, 3198, "tcp" }, { "embrace-dp-c", { NULL }, 3198, "udp" }, { "dmod-workspace", { NULL }, 3199, "tcp" }, { "dmod-workspace", { NULL }, 3199, "udp" }, { "tick-port", { NULL }, 3200, "tcp" }, { "tick-port", { NULL }, 3200, "udp" }, { "cpq-tasksmart", { NULL }, 3201, "tcp" }, { "cpq-tasksmart", { NULL }, 3201, "udp" }, { "intraintra", { NULL }, 3202, "tcp" }, { "intraintra", { NULL }, 3202, "udp" }, { "netwatcher-mon", { NULL }, 3203, "tcp" }, { "netwatcher-mon", { NULL }, 3203, "udp" }, { "netwatcher-db", { NULL }, 3204, "tcp" }, { "netwatcher-db", { NULL }, 3204, "udp" }, { "isns", { NULL }, 3205, "tcp" }, { "isns", { NULL }, 3205, "udp" }, { "ironmail", { NULL }, 3206, "tcp" }, { "ironmail", { NULL }, 3206, "udp" }, { "vx-auth-port", { NULL }, 3207, "tcp" }, { "vx-auth-port", { NULL }, 3207, "udp" }, { "pfu-prcallback", { NULL }, 3208, "tcp" }, { "pfu-prcallback", { NULL }, 3208, "udp" }, { "netwkpathengine", { NULL }, 3209, "tcp" }, { "netwkpathengine", { NULL }, 3209, "udp" }, { "flamenco-proxy", { NULL }, 3210, "tcp" }, { "flamenco-proxy", { NULL }, 3210, "udp" }, { "avsecuremgmt", { NULL }, 3211, "tcp" }, { "avsecuremgmt", { NULL }, 3211, "udp" }, { "surveyinst", { NULL }, 3212, "tcp" }, { "surveyinst", { NULL }, 3212, "udp" }, { "neon24x7", { NULL }, 3213, "tcp" }, { "neon24x7", { NULL }, 3213, "udp" }, { "jmq-daemon-1", { NULL }, 3214, "tcp" }, { "jmq-daemon-1", { NULL }, 3214, "udp" }, { "jmq-daemon-2", { NULL }, 3215, "tcp" }, { "jmq-daemon-2", { NULL }, 3215, "udp" }, { "ferrari-foam", { NULL }, 3216, "tcp" }, { "ferrari-foam", { NULL }, 3216, "udp" }, { "unite", { NULL }, 3217, "tcp" }, { "unite", { NULL }, 3217, "udp" }, { "smartpackets", { NULL }, 3218, "tcp" }, { "smartpackets", { NULL }, 3218, "udp" }, { "wms-messenger", { NULL }, 3219, "tcp" }, { "wms-messenger", { NULL }, 3219, "udp" }, { "xnm-ssl", { NULL }, 3220, "tcp" }, { "xnm-ssl", { NULL }, 3220, "udp" }, { "xnm-clear-text", { NULL }, 3221, "tcp" }, { "xnm-clear-text", { NULL }, 3221, "udp" }, { "glbp", { NULL }, 3222, "tcp" }, { "glbp", { NULL }, 3222, "udp" }, { "digivote", { NULL }, 3223, "tcp" }, { "digivote", { NULL }, 3223, "udp" }, { "aes-discovery", { NULL }, 3224, "tcp" }, { "aes-discovery", { NULL }, 3224, "udp" }, { "fcip-port", { NULL }, 3225, "tcp" }, { "fcip-port", { NULL }, 3225, "udp" }, { "isi-irp", { NULL }, 3226, "tcp" }, { "isi-irp", { NULL }, 3226, "udp" }, { "dwnmshttp", { NULL }, 3227, "tcp" }, { "dwnmshttp", { NULL }, 3227, "udp" }, { "dwmsgserver", { NULL }, 3228, "tcp" }, { "dwmsgserver", { NULL }, 3228, "udp" }, { "global-cd-port", { NULL }, 3229, "tcp" }, { "global-cd-port", { NULL }, 3229, "udp" }, { "sftdst-port", { NULL }, 3230, "tcp" }, { "sftdst-port", { NULL }, 3230, "udp" }, { "vidigo", { NULL }, 3231, "tcp" }, { "vidigo", { NULL }, 3231, "udp" }, { "mdtp", { NULL }, 3232, "tcp" }, { "mdtp", { NULL }, 3232, "udp" }, { "whisker", { NULL }, 3233, "tcp" }, { "whisker", { NULL }, 3233, "udp" }, { "alchemy", { NULL }, 3234, "tcp" }, { "alchemy", { NULL }, 3234, "udp" }, { "mdap-port", { NULL }, 3235, "tcp" }, { "mdap-port", { NULL }, 3235, "udp" }, { "apparenet-ts", { NULL }, 3236, "tcp" }, { "apparenet-ts", { NULL }, 3236, "udp" }, { "apparenet-tps", { NULL }, 3237, "tcp" }, { "apparenet-tps", { NULL }, 3237, "udp" }, { "apparenet-as", { NULL }, 3238, "tcp" }, { "apparenet-as", { NULL }, 3238, "udp" }, { "apparenet-ui", { NULL }, 3239, "tcp" }, { "apparenet-ui", { NULL }, 3239, 
"udp" }, { "triomotion", { NULL }, 3240, "tcp" }, { "triomotion", { NULL }, 3240, "udp" }, { "sysorb", { NULL }, 3241, "tcp" }, { "sysorb", { NULL }, 3241, "udp" }, { "sdp-id-port", { NULL }, 3242, "tcp" }, { "sdp-id-port", { NULL }, 3242, "udp" }, { "timelot", { NULL }, 3243, "tcp" }, { "timelot", { NULL }, 3243, "udp" }, { "onesaf", { NULL }, 3244, "tcp" }, { "onesaf", { NULL }, 3244, "udp" }, { "vieo-fe", { NULL }, 3245, "tcp" }, { "vieo-fe", { NULL }, 3245, "udp" }, { "dvt-system", { NULL }, 3246, "tcp" }, { "dvt-system", { NULL }, 3246, "udp" }, { "dvt-data", { NULL }, 3247, "tcp" }, { "dvt-data", { NULL }, 3247, "udp" }, { "procos-lm", { NULL }, 3248, "tcp" }, { "procos-lm", { NULL }, 3248, "udp" }, { "ssp", { NULL }, 3249, "tcp" }, { "ssp", { NULL }, 3249, "udp" }, { "hicp", { NULL }, 3250, "tcp" }, { "hicp", { NULL }, 3250, "udp" }, { "sysscanner", { NULL }, 3251, "tcp" }, { "sysscanner", { NULL }, 3251, "udp" }, { "dhe", { NULL }, 3252, "tcp" }, { "dhe", { NULL }, 3252, "udp" }, { "pda-data", { NULL }, 3253, "tcp" }, { "pda-data", { NULL }, 3253, "udp" }, { "pda-sys", { NULL }, 3254, "tcp" }, { "pda-sys", { NULL }, 3254, "udp" }, { "semaphore", { NULL }, 3255, "tcp" }, { "semaphore", { NULL }, 3255, "udp" }, { "cpqrpm-agent", { NULL }, 3256, "tcp" }, { "cpqrpm-agent", { NULL }, 3256, "udp" }, { "cpqrpm-server", { NULL }, 3257, "tcp" }, { "cpqrpm-server", { NULL }, 3257, "udp" }, { "ivecon-port", { NULL }, 3258, "tcp" }, { "ivecon-port", { NULL }, 3258, "udp" }, { "epncdp2", { NULL }, 3259, "tcp" }, { "epncdp2", { NULL }, 3259, "udp" }, { "iscsi-target", { NULL }, 3260, "tcp" }, { "iscsi-target", { NULL }, 3260, "udp" }, { "winshadow", { NULL }, 3261, "tcp" }, { "winshadow", { NULL }, 3261, "udp" }, { "necp", { NULL }, 3262, "tcp" }, { "necp", { NULL }, 3262, "udp" }, { "ecolor-imager", { NULL }, 3263, "tcp" }, { "ecolor-imager", { NULL }, 3263, "udp" }, { "ccmail", { NULL }, 3264, "tcp" }, { "ccmail", { NULL }, 3264, "udp" }, { "altav-tunnel", { NULL }, 3265, "tcp" }, { "altav-tunnel", { NULL }, 3265, "udp" }, { "ns-cfg-server", { NULL }, 3266, "tcp" }, { "ns-cfg-server", { NULL }, 3266, "udp" }, { "ibm-dial-out", { NULL }, 3267, "tcp" }, { "ibm-dial-out", { NULL }, 3267, "udp" }, { "msft-gc", { NULL }, 3268, "tcp" }, { "msft-gc", { NULL }, 3268, "udp" }, { "msft-gc-ssl", { NULL }, 3269, "tcp" }, { "msft-gc-ssl", { NULL }, 3269, "udp" }, { "verismart", { NULL }, 3270, "tcp" }, { "verismart", { NULL }, 3270, "udp" }, { "csoft-prev", { NULL }, 3271, "tcp" }, { "csoft-prev", { NULL }, 3271, "udp" }, { "user-manager", { NULL }, 3272, "tcp" }, { "user-manager", { NULL }, 3272, "udp" }, { "sxmp", { NULL }, 3273, "tcp" }, { "sxmp", { NULL }, 3273, "udp" }, { "ordinox-server", { NULL }, 3274, "tcp" }, { "ordinox-server", { NULL }, 3274, "udp" }, { "samd", { NULL }, 3275, "tcp" }, { "samd", { NULL }, 3275, "udp" }, { "maxim-asics", { NULL }, 3276, "tcp" }, { "maxim-asics", { NULL }, 3276, "udp" }, { "awg-proxy", { NULL }, 3277, "tcp" }, { "awg-proxy", { NULL }, 3277, "udp" }, { "lkcmserver", { NULL }, 3278, "tcp" }, { "lkcmserver", { NULL }, 3278, "udp" }, { "admind", { NULL }, 3279, "tcp" }, { "admind", { NULL }, 3279, "udp" }, { "vs-server", { NULL }, 3280, "tcp" }, { "vs-server", { NULL }, 3280, "udp" }, { "sysopt", { NULL }, 3281, "tcp" }, { "sysopt", { NULL }, 3281, "udp" }, { "datusorb", { NULL }, 3282, "tcp" }, { "datusorb", { NULL }, 3282, "udp" }, { "net-assistant", { NULL }, 3283, "tcp" }, { "net-assistant", { NULL }, 3283, "udp" }, { "4talk", { NULL }, 3284, "tcp" }, { "4talk", 
{ NULL }, 3284, "udp" }, { "plato", { NULL }, 3285, "tcp" }, { "plato", { NULL }, 3285, "udp" }, { "e-net", { NULL }, 3286, "tcp" }, { "e-net", { NULL }, 3286, "udp" }, { "directvdata", { NULL }, 3287, "tcp" }, { "directvdata", { NULL }, 3287, "udp" }, { "cops", { NULL }, 3288, "tcp" }, { "cops", { NULL }, 3288, "udp" }, { "enpc", { NULL }, 3289, "tcp" }, { "enpc", { NULL }, 3289, "udp" }, { "caps-lm", { NULL }, 3290, "tcp" }, { "caps-lm", { NULL }, 3290, "udp" }, { "sah-lm", { NULL }, 3291, "tcp" }, { "sah-lm", { NULL }, 3291, "udp" }, { "cart-o-rama", { NULL }, 3292, "tcp" }, { "cart-o-rama", { NULL }, 3292, "udp" }, { "fg-fps", { NULL }, 3293, "tcp" }, { "fg-fps", { NULL }, 3293, "udp" }, { "fg-gip", { NULL }, 3294, "tcp" }, { "fg-gip", { NULL }, 3294, "udp" }, { "dyniplookup", { NULL }, 3295, "tcp" }, { "dyniplookup", { NULL }, 3295, "udp" }, { "rib-slm", { NULL }, 3296, "tcp" }, { "rib-slm", { NULL }, 3296, "udp" }, { "cytel-lm", { NULL }, 3297, "tcp" }, { "cytel-lm", { NULL }, 3297, "udp" }, { "deskview", { NULL }, 3298, "tcp" }, { "deskview", { NULL }, 3298, "udp" }, { "pdrncs", { NULL }, 3299, "tcp" }, { "pdrncs", { NULL }, 3299, "udp" }, { "mcs-fastmail", { NULL }, 3302, "tcp" }, { "mcs-fastmail", { NULL }, 3302, "udp" }, { "opsession-clnt", { NULL }, 3303, "tcp" }, { "opsession-clnt", { NULL }, 3303, "udp" }, { "opsession-srvr", { NULL }, 3304, "tcp" }, { "opsession-srvr", { NULL }, 3304, "udp" }, { "odette-ftp", { NULL }, 3305, "tcp" }, { "odette-ftp", { NULL }, 3305, "udp" }, { "mysql", { NULL }, 3306, "tcp" }, { "mysql", { NULL }, 3306, "udp" }, { "opsession-prxy", { NULL }, 3307, "tcp" }, { "opsession-prxy", { NULL }, 3307, "udp" }, { "tns-server", { NULL }, 3308, "tcp" }, { "tns-server", { NULL }, 3308, "udp" }, { "tns-adv", { NULL }, 3309, "tcp" }, { "tns-adv", { NULL }, 3309, "udp" }, { "dyna-access", { NULL }, 3310, "tcp" }, { "dyna-access", { NULL }, 3310, "udp" }, { "mcns-tel-ret", { NULL }, 3311, "tcp" }, { "mcns-tel-ret", { NULL }, 3311, "udp" }, { "appman-server", { NULL }, 3312, "tcp" }, { "appman-server", { NULL }, 3312, "udp" }, { "uorb", { NULL }, 3313, "tcp" }, { "uorb", { NULL }, 3313, "udp" }, { "uohost", { NULL }, 3314, "tcp" }, { "uohost", { NULL }, 3314, "udp" }, { "cdid", { NULL }, 3315, "tcp" }, { "cdid", { NULL }, 3315, "udp" }, { "aicc-cmi", { NULL }, 3316, "tcp" }, { "aicc-cmi", { NULL }, 3316, "udp" }, { "vsaiport", { NULL }, 3317, "tcp" }, { "vsaiport", { NULL }, 3317, "udp" }, { "ssrip", { NULL }, 3318, "tcp" }, { "ssrip", { NULL }, 3318, "udp" }, { "sdt-lmd", { NULL }, 3319, "tcp" }, { "sdt-lmd", { NULL }, 3319, "udp" }, { "officelink2000", { NULL }, 3320, "tcp" }, { "officelink2000", { NULL }, 3320, "udp" }, { "vnsstr", { NULL }, 3321, "tcp" }, { "vnsstr", { NULL }, 3321, "udp" }, { "sftu", { NULL }, 3326, "tcp" }, { "sftu", { NULL }, 3326, "udp" }, { "bbars", { NULL }, 3327, "tcp" }, { "bbars", { NULL }, 3327, "udp" }, { "egptlm", { NULL }, 3328, "tcp" }, { "egptlm", { NULL }, 3328, "udp" }, { "hp-device-disc", { NULL }, 3329, "tcp" }, { "hp-device-disc", { NULL }, 3329, "udp" }, { "mcs-calypsoicf", { NULL }, 3330, "tcp" }, { "mcs-calypsoicf", { NULL }, 3330, "udp" }, { "mcs-messaging", { NULL }, 3331, "tcp" }, { "mcs-messaging", { NULL }, 3331, "udp" }, { "mcs-mailsvr", { NULL }, 3332, "tcp" }, { "mcs-mailsvr", { NULL }, 3332, "udp" }, { "dec-notes", { NULL }, 3333, "tcp" }, { "dec-notes", { NULL }, 3333, "udp" }, { "directv-web", { NULL }, 3334, "tcp" }, { "directv-web", { NULL }, 3334, "udp" }, { "directv-soft", { NULL }, 3335, "tcp" }, { 
"directv-soft", { NULL }, 3335, "udp" }, { "directv-tick", { NULL }, 3336, "tcp" }, { "directv-tick", { NULL }, 3336, "udp" }, { "directv-catlg", { NULL }, 3337, "tcp" }, { "directv-catlg", { NULL }, 3337, "udp" }, { "anet-b", { NULL }, 3338, "tcp" }, { "anet-b", { NULL }, 3338, "udp" }, { "anet-l", { NULL }, 3339, "tcp" }, { "anet-l", { NULL }, 3339, "udp" }, { "anet-m", { NULL }, 3340, "tcp" }, { "anet-m", { NULL }, 3340, "udp" }, { "anet-h", { NULL }, 3341, "tcp" }, { "anet-h", { NULL }, 3341, "udp" }, { "webtie", { NULL }, 3342, "tcp" }, { "webtie", { NULL }, 3342, "udp" }, { "ms-cluster-net", { NULL }, 3343, "tcp" }, { "ms-cluster-net", { NULL }, 3343, "udp" }, { "bnt-manager", { NULL }, 3344, "tcp" }, { "bnt-manager", { NULL }, 3344, "udp" }, { "influence", { NULL }, 3345, "tcp" }, { "influence", { NULL }, 3345, "udp" }, { "trnsprntproxy", { NULL }, 3346, "tcp" }, { "trnsprntproxy", { NULL }, 3346, "udp" }, { "phoenix-rpc", { NULL }, 3347, "tcp" }, { "phoenix-rpc", { NULL }, 3347, "udp" }, { "pangolin-laser", { NULL }, 3348, "tcp" }, { "pangolin-laser", { NULL }, 3348, "udp" }, { "chevinservices", { NULL }, 3349, "tcp" }, { "chevinservices", { NULL }, 3349, "udp" }, { "findviatv", { NULL }, 3350, "tcp" }, { "findviatv", { NULL }, 3350, "udp" }, { "btrieve", { NULL }, 3351, "tcp" }, { "btrieve", { NULL }, 3351, "udp" }, { "ssql", { NULL }, 3352, "tcp" }, { "ssql", { NULL }, 3352, "udp" }, { "fatpipe", { NULL }, 3353, "tcp" }, { "fatpipe", { NULL }, 3353, "udp" }, { "suitjd", { NULL }, 3354, "tcp" }, { "suitjd", { NULL }, 3354, "udp" }, { "ordinox-dbase", { NULL }, 3355, "tcp" }, { "ordinox-dbase", { NULL }, 3355, "udp" }, { "upnotifyps", { NULL }, 3356, "tcp" }, { "upnotifyps", { NULL }, 3356, "udp" }, { "adtech-test", { NULL }, 3357, "tcp" }, { "adtech-test", { NULL }, 3357, "udp" }, { "mpsysrmsvr", { NULL }, 3358, "tcp" }, { "mpsysrmsvr", { NULL }, 3358, "udp" }, { "wg-netforce", { NULL }, 3359, "tcp" }, { "wg-netforce", { NULL }, 3359, "udp" }, { "kv-server", { NULL }, 3360, "tcp" }, { "kv-server", { NULL }, 3360, "udp" }, { "kv-agent", { NULL }, 3361, "tcp" }, { "kv-agent", { NULL }, 3361, "udp" }, { "dj-ilm", { NULL }, 3362, "tcp" }, { "dj-ilm", { NULL }, 3362, "udp" }, { "nati-vi-server", { NULL }, 3363, "tcp" }, { "nati-vi-server", { NULL }, 3363, "udp" }, { "creativeserver", { NULL }, 3364, "tcp" }, { "creativeserver", { NULL }, 3364, "udp" }, { "contentserver", { NULL }, 3365, "tcp" }, { "contentserver", { NULL }, 3365, "udp" }, { "creativepartnr", { NULL }, 3366, "tcp" }, { "creativepartnr", { NULL }, 3366, "udp" }, { "tip2", { NULL }, 3372, "tcp" }, { "tip2", { NULL }, 3372, "udp" }, { "lavenir-lm", { NULL }, 3373, "tcp" }, { "lavenir-lm", { NULL }, 3373, "udp" }, { "cluster-disc", { NULL }, 3374, "tcp" }, { "cluster-disc", { NULL }, 3374, "udp" }, { "vsnm-agent", { NULL }, 3375, "tcp" }, { "vsnm-agent", { NULL }, 3375, "udp" }, { "cdbroker", { NULL }, 3376, "tcp" }, { "cdbroker", { NULL }, 3376, "udp" }, { "cogsys-lm", { NULL }, 3377, "tcp" }, { "cogsys-lm", { NULL }, 3377, "udp" }, { "wsicopy", { NULL }, 3378, "tcp" }, { "wsicopy", { NULL }, 3378, "udp" }, { "socorfs", { NULL }, 3379, "tcp" }, { "socorfs", { NULL }, 3379, "udp" }, { "sns-channels", { NULL }, 3380, "tcp" }, { "sns-channels", { NULL }, 3380, "udp" }, { "geneous", { NULL }, 3381, "tcp" }, { "geneous", { NULL }, 3381, "udp" }, { "fujitsu-neat", { NULL }, 3382, "tcp" }, { "fujitsu-neat", { NULL }, 3382, "udp" }, { "esp-lm", { NULL }, 3383, "tcp" }, { "esp-lm", { NULL }, 3383, "udp" }, { "hp-clic", { NULL }, 
3384, "tcp" }, { "hp-clic", { NULL }, 3384, "udp" }, { "qnxnetman", { NULL }, 3385, "tcp" }, { "qnxnetman", { NULL }, 3385, "udp" }, { "gprs-data", { NULL }, 3386, "tcp" }, { "gprs-sig", { NULL }, 3386, "udp" }, { "backroomnet", { NULL }, 3387, "tcp" }, { "backroomnet", { NULL }, 3387, "udp" }, { "cbserver", { NULL }, 3388, "tcp" }, { "cbserver", { NULL }, 3388, "udp" }, { "ms-wbt-server", { NULL }, 3389, "tcp" }, { "ms-wbt-server", { NULL }, 3389, "udp" }, { "dsc", { NULL }, 3390, "tcp" }, { "dsc", { NULL }, 3390, "udp" }, { "savant", { NULL }, 3391, "tcp" }, { "savant", { NULL }, 3391, "udp" }, { "efi-lm", { NULL }, 3392, "tcp" }, { "efi-lm", { NULL }, 3392, "udp" }, { "d2k-tapestry1", { NULL }, 3393, "tcp" }, { "d2k-tapestry1", { NULL }, 3393, "udp" }, { "d2k-tapestry2", { NULL }, 3394, "tcp" }, { "d2k-tapestry2", { NULL }, 3394, "udp" }, { "dyna-lm", { NULL }, 3395, "tcp" }, { "dyna-lm", { NULL }, 3395, "udp" }, { "printer_agent", { NULL }, 3396, "tcp" }, { "printer_agent", { NULL }, 3396, "udp" }, { "cloanto-lm", { NULL }, 3397, "tcp" }, { "cloanto-lm", { NULL }, 3397, "udp" }, { "mercantile", { NULL }, 3398, "tcp" }, { "mercantile", { NULL }, 3398, "udp" }, { "csms", { NULL }, 3399, "tcp" }, { "csms", { NULL }, 3399, "udp" }, { "csms2", { NULL }, 3400, "tcp" }, { "csms2", { NULL }, 3400, "udp" }, { "filecast", { NULL }, 3401, "tcp" }, { "filecast", { NULL }, 3401, "udp" }, { "fxaengine-net", { NULL }, 3402, "tcp" }, { "fxaengine-net", { NULL }, 3402, "udp" }, { "nokia-ann-ch1", { NULL }, 3405, "tcp" }, { "nokia-ann-ch1", { NULL }, 3405, "udp" }, { "nokia-ann-ch2", { NULL }, 3406, "tcp" }, { "nokia-ann-ch2", { NULL }, 3406, "udp" }, { "ldap-admin", { NULL }, 3407, "tcp" }, { "ldap-admin", { NULL }, 3407, "udp" }, { "BESApi", { NULL }, 3408, "tcp" }, { "BESApi", { NULL }, 3408, "udp" }, { "networklens", { NULL }, 3409, "tcp" }, { "networklens", { NULL }, 3409, "udp" }, { "networklenss", { NULL }, 3410, "tcp" }, { "networklenss", { NULL }, 3410, "udp" }, { "biolink-auth", { NULL }, 3411, "tcp" }, { "biolink-auth", { NULL }, 3411, "udp" }, { "xmlblaster", { NULL }, 3412, "tcp" }, { "xmlblaster", { NULL }, 3412, "udp" }, { "svnet", { NULL }, 3413, "tcp" }, { "svnet", { NULL }, 3413, "udp" }, { "wip-port", { NULL }, 3414, "tcp" }, { "wip-port", { NULL }, 3414, "udp" }, { "bcinameservice", { NULL }, 3415, "tcp" }, { "bcinameservice", { NULL }, 3415, "udp" }, { "commandport", { NULL }, 3416, "tcp" }, { "commandport", { NULL }, 3416, "udp" }, { "csvr", { NULL }, 3417, "tcp" }, { "csvr", { NULL }, 3417, "udp" }, { "rnmap", { NULL }, 3418, "tcp" }, { "rnmap", { NULL }, 3418, "udp" }, { "softaudit", { NULL }, 3419, "tcp" }, { "softaudit", { NULL }, 3419, "udp" }, { "ifcp-port", { NULL }, 3420, "tcp" }, { "ifcp-port", { NULL }, 3420, "udp" }, { "bmap", { NULL }, 3421, "tcp" }, { "bmap", { NULL }, 3421, "udp" }, { "rusb-sys-port", { NULL }, 3422, "tcp" }, { "rusb-sys-port", { NULL }, 3422, "udp" }, { "xtrm", { NULL }, 3423, "tcp" }, { "xtrm", { NULL }, 3423, "udp" }, { "xtrms", { NULL }, 3424, "tcp" }, { "xtrms", { NULL }, 3424, "udp" }, { "agps-port", { NULL }, 3425, "tcp" }, { "agps-port", { NULL }, 3425, "udp" }, { "arkivio", { NULL }, 3426, "tcp" }, { "arkivio", { NULL }, 3426, "udp" }, { "websphere-snmp", { NULL }, 3427, "tcp" }, { "websphere-snmp", { NULL }, 3427, "udp" }, { "twcss", { NULL }, 3428, "tcp" }, { "twcss", { NULL }, 3428, "udp" }, { "gcsp", { NULL }, 3429, "tcp" }, { "gcsp", { NULL }, 3429, "udp" }, { "ssdispatch", { NULL }, 3430, "tcp" }, { "ssdispatch", { NULL }, 3430, "udp" 
}, { "ndl-als", { NULL }, 3431, "tcp" }, { "ndl-als", { NULL }, 3431, "udp" }, { "osdcp", { NULL }, 3432, "tcp" }, { "osdcp", { NULL }, 3432, "udp" }, { "alta-smp", { NULL }, 3433, "tcp" }, { "alta-smp", { NULL }, 3433, "udp" }, { "opencm", { NULL }, 3434, "tcp" }, { "opencm", { NULL }, 3434, "udp" }, { "pacom", { NULL }, 3435, "tcp" }, { "pacom", { NULL }, 3435, "udp" }, { "gc-config", { NULL }, 3436, "tcp" }, { "gc-config", { NULL }, 3436, "udp" }, { "autocueds", { NULL }, 3437, "tcp" }, { "autocueds", { NULL }, 3437, "udp" }, { "spiral-admin", { NULL }, 3438, "tcp" }, { "spiral-admin", { NULL }, 3438, "udp" }, { "hri-port", { NULL }, 3439, "tcp" }, { "hri-port", { NULL }, 3439, "udp" }, { "ans-console", { NULL }, 3440, "tcp" }, { "ans-console", { NULL }, 3440, "udp" }, { "connect-client", { NULL }, 3441, "tcp" }, { "connect-client", { NULL }, 3441, "udp" }, { "connect-server", { NULL }, 3442, "tcp" }, { "connect-server", { NULL }, 3442, "udp" }, { "ov-nnm-websrv", { NULL }, 3443, "tcp" }, { "ov-nnm-websrv", { NULL }, 3443, "udp" }, { "denali-server", { NULL }, 3444, "tcp" }, { "denali-server", { NULL }, 3444, "udp" }, { "monp", { NULL }, 3445, "tcp" }, { "monp", { NULL }, 3445, "udp" }, { "3comfaxrpc", { NULL }, 3446, "tcp" }, { "3comfaxrpc", { NULL }, 3446, "udp" }, { "directnet", { NULL }, 3447, "tcp" }, { "directnet", { NULL }, 3447, "udp" }, { "dnc-port", { NULL }, 3448, "tcp" }, { "dnc-port", { NULL }, 3448, "udp" }, { "hotu-chat", { NULL }, 3449, "tcp" }, { "hotu-chat", { NULL }, 3449, "udp" }, { "castorproxy", { NULL }, 3450, "tcp" }, { "castorproxy", { NULL }, 3450, "udp" }, { "asam", { NULL }, 3451, "tcp" }, { "asam", { NULL }, 3451, "udp" }, { "sabp-signal", { NULL }, 3452, "tcp" }, { "sabp-signal", { NULL }, 3452, "udp" }, { "pscupd", { NULL }, 3453, "tcp" }, { "pscupd", { NULL }, 3453, "udp" }, { "mira", { NULL }, 3454, "tcp" }, { "prsvp", { NULL }, 3455, "tcp" }, { "prsvp", { NULL }, 3455, "udp" }, { "vat", { NULL }, 3456, "tcp" }, { "vat", { NULL }, 3456, "udp" }, { "vat-control", { NULL }, 3457, "tcp" }, { "vat-control", { NULL }, 3457, "udp" }, { "d3winosfi", { NULL }, 3458, "tcp" }, { "d3winosfi", { NULL }, 3458, "udp" }, { "integral", { NULL }, 3459, "tcp" }, { "integral", { NULL }, 3459, "udp" }, { "edm-manager", { NULL }, 3460, "tcp" }, { "edm-manager", { NULL }, 3460, "udp" }, { "edm-stager", { NULL }, 3461, "tcp" }, { "edm-stager", { NULL }, 3461, "udp" }, { "edm-std-notify", { NULL }, 3462, "tcp" }, { "edm-std-notify", { NULL }, 3462, "udp" }, { "edm-adm-notify", { NULL }, 3463, "tcp" }, { "edm-adm-notify", { NULL }, 3463, "udp" }, { "edm-mgr-sync", { NULL }, 3464, "tcp" }, { "edm-mgr-sync", { NULL }, 3464, "udp" }, { "edm-mgr-cntrl", { NULL }, 3465, "tcp" }, { "edm-mgr-cntrl", { NULL }, 3465, "udp" }, { "workflow", { NULL }, 3466, "tcp" }, { "workflow", { NULL }, 3466, "udp" }, { "rcst", { NULL }, 3467, "tcp" }, { "rcst", { NULL }, 3467, "udp" }, { "ttcmremotectrl", { NULL }, 3468, "tcp" }, { "ttcmremotectrl", { NULL }, 3468, "udp" }, { "pluribus", { NULL }, 3469, "tcp" }, { "pluribus", { NULL }, 3469, "udp" }, { "jt400", { NULL }, 3470, "tcp" }, { "jt400", { NULL }, 3470, "udp" }, { "jt400-ssl", { NULL }, 3471, "tcp" }, { "jt400-ssl", { NULL }, 3471, "udp" }, { "jaugsremotec-1", { NULL }, 3472, "tcp" }, { "jaugsremotec-1", { NULL }, 3472, "udp" }, { "jaugsremotec-2", { NULL }, 3473, "tcp" }, { "jaugsremotec-2", { NULL }, 3473, "udp" }, { "ttntspauto", { NULL }, 3474, "tcp" }, { "ttntspauto", { NULL }, 3474, "udp" }, { "genisar-port", { NULL }, 3475, "tcp" }, { 
"genisar-port", { NULL }, 3475, "udp" }, { "nppmp", { NULL }, 3476, "tcp" }, { "nppmp", { NULL }, 3476, "udp" }, { "ecomm", { NULL }, 3477, "tcp" }, { "ecomm", { NULL }, 3477, "udp" }, { "stun", { NULL }, 3478, "tcp" }, { "stun", { NULL }, 3478, "udp" }, { "turn", { NULL }, 3478, "tcp" }, { "turn", { NULL }, 3478, "udp" }, { "stun-behavior", { NULL }, 3478, "tcp" }, { "stun-behavior", { NULL }, 3478, "udp" }, { "twrpc", { NULL }, 3479, "tcp" }, { "twrpc", { NULL }, 3479, "udp" }, { "plethora", { NULL }, 3480, "tcp" }, { "plethora", { NULL }, 3480, "udp" }, { "cleanerliverc", { NULL }, 3481, "tcp" }, { "cleanerliverc", { NULL }, 3481, "udp" }, { "vulture", { NULL }, 3482, "tcp" }, { "vulture", { NULL }, 3482, "udp" }, { "slim-devices", { NULL }, 3483, "tcp" }, { "slim-devices", { NULL }, 3483, "udp" }, { "gbs-stp", { NULL }, 3484, "tcp" }, { "gbs-stp", { NULL }, 3484, "udp" }, { "celatalk", { NULL }, 3485, "tcp" }, { "celatalk", { NULL }, 3485, "udp" }, { "ifsf-hb-port", { NULL }, 3486, "tcp" }, { "ifsf-hb-port", { NULL }, 3486, "udp" }, { "ltctcp", { NULL }, 3487, "tcp" }, { "ltcudp", { NULL }, 3487, "udp" }, { "fs-rh-srv", { NULL }, 3488, "tcp" }, { "fs-rh-srv", { NULL }, 3488, "udp" }, { "dtp-dia", { NULL }, 3489, "tcp" }, { "dtp-dia", { NULL }, 3489, "udp" }, { "colubris", { NULL }, 3490, "tcp" }, { "colubris", { NULL }, 3490, "udp" }, { "swr-port", { NULL }, 3491, "tcp" }, { "swr-port", { NULL }, 3491, "udp" }, { "tvdumtray-port", { NULL }, 3492, "tcp" }, { "tvdumtray-port", { NULL }, 3492, "udp" }, { "nut", { NULL }, 3493, "tcp" }, { "nut", { NULL }, 3493, "udp" }, { "ibm3494", { NULL }, 3494, "tcp" }, { "ibm3494", { NULL }, 3494, "udp" }, { "seclayer-tcp", { NULL }, 3495, "tcp" }, { "seclayer-tcp", { NULL }, 3495, "udp" }, { "seclayer-tls", { NULL }, 3496, "tcp" }, { "seclayer-tls", { NULL }, 3496, "udp" }, { "ipether232port", { NULL }, 3497, "tcp" }, { "ipether232port", { NULL }, 3497, "udp" }, { "dashpas-port", { NULL }, 3498, "tcp" }, { "dashpas-port", { NULL }, 3498, "udp" }, { "sccip-media", { NULL }, 3499, "tcp" }, { "sccip-media", { NULL }, 3499, "udp" }, { "rtmp-port", { NULL }, 3500, "tcp" }, { "rtmp-port", { NULL }, 3500, "udp" }, { "isoft-p2p", { NULL }, 3501, "tcp" }, { "isoft-p2p", { NULL }, 3501, "udp" }, { "avinstalldisc", { NULL }, 3502, "tcp" }, { "avinstalldisc", { NULL }, 3502, "udp" }, { "lsp-ping", { NULL }, 3503, "tcp" }, { "lsp-ping", { NULL }, 3503, "udp" }, { "ironstorm", { NULL }, 3504, "tcp" }, { "ironstorm", { NULL }, 3504, "udp" }, { "ccmcomm", { NULL }, 3505, "tcp" }, { "ccmcomm", { NULL }, 3505, "udp" }, { "apc-3506", { NULL }, 3506, "tcp" }, { "apc-3506", { NULL }, 3506, "udp" }, { "nesh-broker", { NULL }, 3507, "tcp" }, { "nesh-broker", { NULL }, 3507, "udp" }, { "interactionweb", { NULL }, 3508, "tcp" }, { "interactionweb", { NULL }, 3508, "udp" }, { "vt-ssl", { NULL }, 3509, "tcp" }, { "vt-ssl", { NULL }, 3509, "udp" }, { "xss-port", { NULL }, 3510, "tcp" }, { "xss-port", { NULL }, 3510, "udp" }, { "webmail-2", { NULL }, 3511, "tcp" }, { "webmail-2", { NULL }, 3511, "udp" }, { "aztec", { NULL }, 3512, "tcp" }, { "aztec", { NULL }, 3512, "udp" }, { "arcpd", { NULL }, 3513, "tcp" }, { "arcpd", { NULL }, 3513, "udp" }, { "must-p2p", { NULL }, 3514, "tcp" }, { "must-p2p", { NULL }, 3514, "udp" }, { "must-backplane", { NULL }, 3515, "tcp" }, { "must-backplane", { NULL }, 3515, "udp" }, { "smartcard-port", { NULL }, 3516, "tcp" }, { "smartcard-port", { NULL }, 3516, "udp" }, { "802-11-iapp", { NULL }, 3517, "tcp" }, { "802-11-iapp", { NULL }, 3517, "udp" 
}, { "artifact-msg", { NULL }, 3518, "tcp" }, { "artifact-msg", { NULL }, 3518, "udp" }, { "nvmsgd", { NULL }, 3519, "tcp" }, { "galileo", { NULL }, 3519, "udp" }, { "galileolog", { NULL }, 3520, "tcp" }, { "galileolog", { NULL }, 3520, "udp" }, { "mc3ss", { NULL }, 3521, "tcp" }, { "mc3ss", { NULL }, 3521, "udp" }, { "nssocketport", { NULL }, 3522, "tcp" }, { "nssocketport", { NULL }, 3522, "udp" }, { "odeumservlink", { NULL }, 3523, "tcp" }, { "odeumservlink", { NULL }, 3523, "udp" }, { "ecmport", { NULL }, 3524, "tcp" }, { "ecmport", { NULL }, 3524, "udp" }, { "eisport", { NULL }, 3525, "tcp" }, { "eisport", { NULL }, 3525, "udp" }, { "starquiz-port", { NULL }, 3526, "tcp" }, { "starquiz-port", { NULL }, 3526, "udp" }, { "beserver-msg-q", { NULL }, 3527, "tcp" }, { "beserver-msg-q", { NULL }, 3527, "udp" }, { "jboss-iiop", { NULL }, 3528, "tcp" }, { "jboss-iiop", { NULL }, 3528, "udp" }, { "jboss-iiop-ssl", { NULL }, 3529, "tcp" }, { "jboss-iiop-ssl", { NULL }, 3529, "udp" }, { "gf", { NULL }, 3530, "tcp" }, { "gf", { NULL }, 3530, "udp" }, { "joltid", { NULL }, 3531, "tcp" }, { "joltid", { NULL }, 3531, "udp" }, { "raven-rmp", { NULL }, 3532, "tcp" }, { "raven-rmp", { NULL }, 3532, "udp" }, { "raven-rdp", { NULL }, 3533, "tcp" }, { "raven-rdp", { NULL }, 3533, "udp" }, { "urld-port", { NULL }, 3534, "tcp" }, { "urld-port", { NULL }, 3534, "udp" }, { "ms-la", { NULL }, 3535, "tcp" }, { "ms-la", { NULL }, 3535, "udp" }, { "snac", { NULL }, 3536, "tcp" }, { "snac", { NULL }, 3536, "udp" }, { "ni-visa-remote", { NULL }, 3537, "tcp" }, { "ni-visa-remote", { NULL }, 3537, "udp" }, { "ibm-diradm", { NULL }, 3538, "tcp" }, { "ibm-diradm", { NULL }, 3538, "udp" }, { "ibm-diradm-ssl", { NULL }, 3539, "tcp" }, { "ibm-diradm-ssl", { NULL }, 3539, "udp" }, { "pnrp-port", { NULL }, 3540, "tcp" }, { "pnrp-port", { NULL }, 3540, "udp" }, { "voispeed-port", { NULL }, 3541, "tcp" }, { "voispeed-port", { NULL }, 3541, "udp" }, { "hacl-monitor", { NULL }, 3542, "tcp" }, { "hacl-monitor", { NULL }, 3542, "udp" }, { "qftest-lookup", { NULL }, 3543, "tcp" }, { "qftest-lookup", { NULL }, 3543, "udp" }, { "teredo", { NULL }, 3544, "tcp" }, { "teredo", { NULL }, 3544, "udp" }, { "camac", { NULL }, 3545, "tcp" }, { "camac", { NULL }, 3545, "udp" }, { "symantec-sim", { NULL }, 3547, "tcp" }, { "symantec-sim", { NULL }, 3547, "udp" }, { "interworld", { NULL }, 3548, "tcp" }, { "interworld", { NULL }, 3548, "udp" }, { "tellumat-nms", { NULL }, 3549, "tcp" }, { "tellumat-nms", { NULL }, 3549, "udp" }, { "ssmpp", { NULL }, 3550, "tcp" }, { "ssmpp", { NULL }, 3550, "udp" }, { "apcupsd", { NULL }, 3551, "tcp" }, { "apcupsd", { NULL }, 3551, "udp" }, { "taserver", { NULL }, 3552, "tcp" }, { "taserver", { NULL }, 3552, "udp" }, { "rbr-discovery", { NULL }, 3553, "tcp" }, { "rbr-discovery", { NULL }, 3553, "udp" }, { "questnotify", { NULL }, 3554, "tcp" }, { "questnotify", { NULL }, 3554, "udp" }, { "razor", { NULL }, 3555, "tcp" }, { "razor", { NULL }, 3555, "udp" }, { "sky-transport", { NULL }, 3556, "tcp" }, { "sky-transport", { NULL }, 3556, "udp" }, { "personalos-001", { NULL }, 3557, "tcp" }, { "personalos-001", { NULL }, 3557, "udp" }, { "mcp-port", { NULL }, 3558, "tcp" }, { "mcp-port", { NULL }, 3558, "udp" }, { "cctv-port", { NULL }, 3559, "tcp" }, { "cctv-port", { NULL }, 3559, "udp" }, { "iniserve-port", { NULL }, 3560, "tcp" }, { "iniserve-port", { NULL }, 3560, "udp" }, { "bmc-onekey", { NULL }, 3561, "tcp" }, { "bmc-onekey", { NULL }, 3561, "udp" }, { "sdbproxy", { NULL }, 3562, "tcp" }, { "sdbproxy", { 
NULL }, 3562, "udp" }, { "watcomdebug", { NULL }, 3563, "tcp" }, { "watcomdebug", { NULL }, 3563, "udp" }, { "esimport", { NULL }, 3564, "tcp" }, { "esimport", { NULL }, 3564, "udp" }, { "m2pa", { NULL }, 3565, "tcp" }, { "m2pa", { NULL }, 3565, "sctp" }, { "quest-data-hub", { NULL }, 3566, "tcp" }, { "oap", { NULL }, 3567, "tcp" }, { "oap", { NULL }, 3567, "udp" }, { "oap-s", { NULL }, 3568, "tcp" }, { "oap-s", { NULL }, 3568, "udp" }, { "mbg-ctrl", { NULL }, 3569, "tcp" }, { "mbg-ctrl", { NULL }, 3569, "udp" }, { "mccwebsvr-port", { NULL }, 3570, "tcp" }, { "mccwebsvr-port", { NULL }, 3570, "udp" }, { "megardsvr-port", { NULL }, 3571, "tcp" }, { "megardsvr-port", { NULL }, 3571, "udp" }, { "megaregsvrport", { NULL }, 3572, "tcp" }, { "megaregsvrport", { NULL }, 3572, "udp" }, { "tag-ups-1", { NULL }, 3573, "tcp" }, { "tag-ups-1", { NULL }, 3573, "udp" }, { "dmaf-server", { NULL }, 3574, "tcp" }, { "dmaf-caster", { NULL }, 3574, "udp" }, { "ccm-port", { NULL }, 3575, "tcp" }, { "ccm-port", { NULL }, 3575, "udp" }, { "cmc-port", { NULL }, 3576, "tcp" }, { "cmc-port", { NULL }, 3576, "udp" }, { "config-port", { NULL }, 3577, "tcp" }, { "config-port", { NULL }, 3577, "udp" }, { "data-port", { NULL }, 3578, "tcp" }, { "data-port", { NULL }, 3578, "udp" }, { "ttat3lb", { NULL }, 3579, "tcp" }, { "ttat3lb", { NULL }, 3579, "udp" }, { "nati-svrloc", { NULL }, 3580, "tcp" }, { "nati-svrloc", { NULL }, 3580, "udp" }, { "kfxaclicensing", { NULL }, 3581, "tcp" }, { "kfxaclicensing", { NULL }, 3581, "udp" }, { "press", { NULL }, 3582, "tcp" }, { "press", { NULL }, 3582, "udp" }, { "canex-watch", { NULL }, 3583, "tcp" }, { "canex-watch", { NULL }, 3583, "udp" }, { "u-dbap", { NULL }, 3584, "tcp" }, { "u-dbap", { NULL }, 3584, "udp" }, { "emprise-lls", { NULL }, 3585, "tcp" }, { "emprise-lls", { NULL }, 3585, "udp" }, { "emprise-lsc", { NULL }, 3586, "tcp" }, { "emprise-lsc", { NULL }, 3586, "udp" }, { "p2pgroup", { NULL }, 3587, "tcp" }, { "p2pgroup", { NULL }, 3587, "udp" }, { "sentinel", { NULL }, 3588, "tcp" }, { "sentinel", { NULL }, 3588, "udp" }, { "isomair", { NULL }, 3589, "tcp" }, { "isomair", { NULL }, 3589, "udp" }, { "wv-csp-sms", { NULL }, 3590, "tcp" }, { "wv-csp-sms", { NULL }, 3590, "udp" }, { "gtrack-server", { NULL }, 3591, "tcp" }, { "gtrack-server", { NULL }, 3591, "udp" }, { "gtrack-ne", { NULL }, 3592, "tcp" }, { "gtrack-ne", { NULL }, 3592, "udp" }, { "bpmd", { NULL }, 3593, "tcp" }, { "bpmd", { NULL }, 3593, "udp" }, { "mediaspace", { NULL }, 3594, "tcp" }, { "mediaspace", { NULL }, 3594, "udp" }, { "shareapp", { NULL }, 3595, "tcp" }, { "shareapp", { NULL }, 3595, "udp" }, { "iw-mmogame", { NULL }, 3596, "tcp" }, { "iw-mmogame", { NULL }, 3596, "udp" }, { "a14", { NULL }, 3597, "tcp" }, { "a14", { NULL }, 3597, "udp" }, { "a15", { NULL }, 3598, "tcp" }, { "a15", { NULL }, 3598, "udp" }, { "quasar-server", { NULL }, 3599, "tcp" }, { "quasar-server", { NULL }, 3599, "udp" }, { "trap-daemon", { NULL }, 3600, "tcp" }, { "trap-daemon", { NULL }, 3600, "udp" }, { "visinet-gui", { NULL }, 3601, "tcp" }, { "visinet-gui", { NULL }, 3601, "udp" }, { "infiniswitchcl", { NULL }, 3602, "tcp" }, { "infiniswitchcl", { NULL }, 3602, "udp" }, { "int-rcv-cntrl", { NULL }, 3603, "tcp" }, { "int-rcv-cntrl", { NULL }, 3603, "udp" }, { "bmc-jmx-port", { NULL }, 3604, "tcp" }, { "bmc-jmx-port", { NULL }, 3604, "udp" }, { "comcam-io", { NULL }, 3605, "tcp" }, { "comcam-io", { NULL }, 3605, "udp" }, { "splitlock", { NULL }, 3606, "tcp" }, { "splitlock", { NULL }, 3606, "udp" }, { "precise-i3", { NULL 
}, 3607, "tcp" }, { "precise-i3", { NULL }, 3607, "udp" }, { "trendchip-dcp", { NULL }, 3608, "tcp" }, { "trendchip-dcp", { NULL }, 3608, "udp" }, { "cpdi-pidas-cm", { NULL }, 3609, "tcp" }, { "cpdi-pidas-cm", { NULL }, 3609, "udp" }, { "echonet", { NULL }, 3610, "tcp" }, { "echonet", { NULL }, 3610, "udp" }, { "six-degrees", { NULL }, 3611, "tcp" }, { "six-degrees", { NULL }, 3611, "udp" }, { "hp-dataprotect", { NULL }, 3612, "tcp" }, { "hp-dataprotect", { NULL }, 3612, "udp" }, { "alaris-disc", { NULL }, 3613, "tcp" }, { "alaris-disc", { NULL }, 3613, "udp" }, { "sigma-port", { NULL }, 3614, "tcp" }, { "sigma-port", { NULL }, 3614, "udp" }, { "start-network", { NULL }, 3615, "tcp" }, { "start-network", { NULL }, 3615, "udp" }, { "cd3o-protocol", { NULL }, 3616, "tcp" }, { "cd3o-protocol", { NULL }, 3616, "udp" }, { "sharp-server", { NULL }, 3617, "tcp" }, { "sharp-server", { NULL }, 3617, "udp" }, { "aairnet-1", { NULL }, 3618, "tcp" }, { "aairnet-1", { NULL }, 3618, "udp" }, { "aairnet-2", { NULL }, 3619, "tcp" }, { "aairnet-2", { NULL }, 3619, "udp" }, { "ep-pcp", { NULL }, 3620, "tcp" }, { "ep-pcp", { NULL }, 3620, "udp" }, { "ep-nsp", { NULL }, 3621, "tcp" }, { "ep-nsp", { NULL }, 3621, "udp" }, { "ff-lr-port", { NULL }, 3622, "tcp" }, { "ff-lr-port", { NULL }, 3622, "udp" }, { "haipe-discover", { NULL }, 3623, "tcp" }, { "haipe-discover", { NULL }, 3623, "udp" }, { "dist-upgrade", { NULL }, 3624, "tcp" }, { "dist-upgrade", { NULL }, 3624, "udp" }, { "volley", { NULL }, 3625, "tcp" }, { "volley", { NULL }, 3625, "udp" }, { "bvcdaemon-port", { NULL }, 3626, "tcp" }, { "bvcdaemon-port", { NULL }, 3626, "udp" }, { "jamserverport", { NULL }, 3627, "tcp" }, { "jamserverport", { NULL }, 3627, "udp" }, { "ept-machine", { NULL }, 3628, "tcp" }, { "ept-machine", { NULL }, 3628, "udp" }, { "escvpnet", { NULL }, 3629, "tcp" }, { "escvpnet", { NULL }, 3629, "udp" }, { "cs-remote-db", { NULL }, 3630, "tcp" }, { "cs-remote-db", { NULL }, 3630, "udp" }, { "cs-services", { NULL }, 3631, "tcp" }, { "cs-services", { NULL }, 3631, "udp" }, { "distcc", { NULL }, 3632, "tcp" }, { "distcc", { NULL }, 3632, "udp" }, { "wacp", { NULL }, 3633, "tcp" }, { "wacp", { NULL }, 3633, "udp" }, { "hlibmgr", { NULL }, 3634, "tcp" }, { "hlibmgr", { NULL }, 3634, "udp" }, { "sdo", { NULL }, 3635, "tcp" }, { "sdo", { NULL }, 3635, "udp" }, { "servistaitsm", { NULL }, 3636, "tcp" }, { "servistaitsm", { NULL }, 3636, "udp" }, { "scservp", { NULL }, 3637, "tcp" }, { "scservp", { NULL }, 3637, "udp" }, { "ehp-backup", { NULL }, 3638, "tcp" }, { "ehp-backup", { NULL }, 3638, "udp" }, { "xap-ha", { NULL }, 3639, "tcp" }, { "xap-ha", { NULL }, 3639, "udp" }, { "netplay-port1", { NULL }, 3640, "tcp" }, { "netplay-port1", { NULL }, 3640, "udp" }, { "netplay-port2", { NULL }, 3641, "tcp" }, { "netplay-port2", { NULL }, 3641, "udp" }, { "juxml-port", { NULL }, 3642, "tcp" }, { "juxml-port", { NULL }, 3642, "udp" }, { "audiojuggler", { NULL }, 3643, "tcp" }, { "audiojuggler", { NULL }, 3643, "udp" }, { "ssowatch", { NULL }, 3644, "tcp" }, { "ssowatch", { NULL }, 3644, "udp" }, { "cyc", { NULL }, 3645, "tcp" }, { "cyc", { NULL }, 3645, "udp" }, { "xss-srv-port", { NULL }, 3646, "tcp" }, { "xss-srv-port", { NULL }, 3646, "udp" }, { "splitlock-gw", { NULL }, 3647, "tcp" }, { "splitlock-gw", { NULL }, 3647, "udp" }, { "fjcp", { NULL }, 3648, "tcp" }, { "fjcp", { NULL }, 3648, "udp" }, { "nmmp", { NULL }, 3649, "tcp" }, { "nmmp", { NULL }, 3649, "udp" }, { "prismiq-plugin", { NULL }, 3650, "tcp" }, { "prismiq-plugin", { NULL }, 3650, 
"udp" }, { "xrpc-registry", { NULL }, 3651, "tcp" }, { "xrpc-registry", { NULL }, 3651, "udp" }, { "vxcrnbuport", { NULL }, 3652, "tcp" }, { "vxcrnbuport", { NULL }, 3652, "udp" }, { "tsp", { NULL }, 3653, "tcp" }, { "tsp", { NULL }, 3653, "udp" }, { "vaprtm", { NULL }, 3654, "tcp" }, { "vaprtm", { NULL }, 3654, "udp" }, { "abatemgr", { NULL }, 3655, "tcp" }, { "abatemgr", { NULL }, 3655, "udp" }, { "abatjss", { NULL }, 3656, "tcp" }, { "abatjss", { NULL }, 3656, "udp" }, { "immedianet-bcn", { NULL }, 3657, "tcp" }, { "immedianet-bcn", { NULL }, 3657, "udp" }, { "ps-ams", { NULL }, 3658, "tcp" }, { "ps-ams", { NULL }, 3658, "udp" }, { "apple-sasl", { NULL }, 3659, "tcp" }, { "apple-sasl", { NULL }, 3659, "udp" }, { "can-nds-ssl", { NULL }, 3660, "tcp" }, { "can-nds-ssl", { NULL }, 3660, "udp" }, { "can-ferret-ssl", { NULL }, 3661, "tcp" }, { "can-ferret-ssl", { NULL }, 3661, "udp" }, { "pserver", { NULL }, 3662, "tcp" }, { "pserver", { NULL }, 3662, "udp" }, { "dtp", { NULL }, 3663, "tcp" }, { "dtp", { NULL }, 3663, "udp" }, { "ups-engine", { NULL }, 3664, "tcp" }, { "ups-engine", { NULL }, 3664, "udp" }, { "ent-engine", { NULL }, 3665, "tcp" }, { "ent-engine", { NULL }, 3665, "udp" }, { "eserver-pap", { NULL }, 3666, "tcp" }, { "eserver-pap", { NULL }, 3666, "udp" }, { "infoexch", { NULL }, 3667, "tcp" }, { "infoexch", { NULL }, 3667, "udp" }, { "dell-rm-port", { NULL }, 3668, "tcp" }, { "dell-rm-port", { NULL }, 3668, "udp" }, { "casanswmgmt", { NULL }, 3669, "tcp" }, { "casanswmgmt", { NULL }, 3669, "udp" }, { "smile", { NULL }, 3670, "tcp" }, { "smile", { NULL }, 3670, "udp" }, { "efcp", { NULL }, 3671, "tcp" }, { "efcp", { NULL }, 3671, "udp" }, { "lispworks-orb", { NULL }, 3672, "tcp" }, { "lispworks-orb", { NULL }, 3672, "udp" }, { "mediavault-gui", { NULL }, 3673, "tcp" }, { "mediavault-gui", { NULL }, 3673, "udp" }, { "wininstall-ipc", { NULL }, 3674, "tcp" }, { "wininstall-ipc", { NULL }, 3674, "udp" }, { "calltrax", { NULL }, 3675, "tcp" }, { "calltrax", { NULL }, 3675, "udp" }, { "va-pacbase", { NULL }, 3676, "tcp" }, { "va-pacbase", { NULL }, 3676, "udp" }, { "roverlog", { NULL }, 3677, "tcp" }, { "roverlog", { NULL }, 3677, "udp" }, { "ipr-dglt", { NULL }, 3678, "tcp" }, { "ipr-dglt", { NULL }, 3678, "udp" }, { "newton-dock", { NULL }, 3679, "tcp" }, { "newton-dock", { NULL }, 3679, "udp" }, { "npds-tracker", { NULL }, 3680, "tcp" }, { "npds-tracker", { NULL }, 3680, "udp" }, { "bts-x73", { NULL }, 3681, "tcp" }, { "bts-x73", { NULL }, 3681, "udp" }, { "cas-mapi", { NULL }, 3682, "tcp" }, { "cas-mapi", { NULL }, 3682, "udp" }, { "bmc-ea", { NULL }, 3683, "tcp" }, { "bmc-ea", { NULL }, 3683, "udp" }, { "faxstfx-port", { NULL }, 3684, "tcp" }, { "faxstfx-port", { NULL }, 3684, "udp" }, { "dsx-agent", { NULL }, 3685, "tcp" }, { "dsx-agent", { NULL }, 3685, "udp" }, { "tnmpv2", { NULL }, 3686, "tcp" }, { "tnmpv2", { NULL }, 3686, "udp" }, { "simple-push", { NULL }, 3687, "tcp" }, { "simple-push", { NULL }, 3687, "udp" }, { "simple-push-s", { NULL }, 3688, "tcp" }, { "simple-push-s", { NULL }, 3688, "udp" }, { "daap", { NULL }, 3689, "tcp" }, { "daap", { NULL }, 3689, "udp" }, { "svn", { NULL }, 3690, "tcp" }, { "svn", { NULL }, 3690, "udp" }, { "magaya-network", { NULL }, 3691, "tcp" }, { "magaya-network", { NULL }, 3691, "udp" }, { "intelsync", { NULL }, 3692, "tcp" }, { "intelsync", { NULL }, 3692, "udp" }, { "bmc-data-coll", { NULL }, 3695, "tcp" }, { "bmc-data-coll", { NULL }, 3695, "udp" }, { "telnetcpcd", { NULL }, 3696, "tcp" }, { "telnetcpcd", { NULL }, 3696, "udp" }, { 
"nw-license", { NULL }, 3697, "tcp" }, { "nw-license", { NULL }, 3697, "udp" }, { "sagectlpanel", { NULL }, 3698, "tcp" }, { "sagectlpanel", { NULL }, 3698, "udp" }, { "kpn-icw", { NULL }, 3699, "tcp" }, { "kpn-icw", { NULL }, 3699, "udp" }, { "lrs-paging", { NULL }, 3700, "tcp" }, { "lrs-paging", { NULL }, 3700, "udp" }, { "netcelera", { NULL }, 3701, "tcp" }, { "netcelera", { NULL }, 3701, "udp" }, { "ws-discovery", { NULL }, 3702, "tcp" }, { "ws-discovery", { NULL }, 3702, "udp" }, { "adobeserver-3", { NULL }, 3703, "tcp" }, { "adobeserver-3", { NULL }, 3703, "udp" }, { "adobeserver-4", { NULL }, 3704, "tcp" }, { "adobeserver-4", { NULL }, 3704, "udp" }, { "adobeserver-5", { NULL }, 3705, "tcp" }, { "adobeserver-5", { NULL }, 3705, "udp" }, { "rt-event", { NULL }, 3706, "tcp" }, { "rt-event", { NULL }, 3706, "udp" }, { "rt-event-s", { NULL }, 3707, "tcp" }, { "rt-event-s", { NULL }, 3707, "udp" }, { "sun-as-iiops", { NULL }, 3708, "tcp" }, { "sun-as-iiops", { NULL }, 3708, "udp" }, { "ca-idms", { NULL }, 3709, "tcp" }, { "ca-idms", { NULL }, 3709, "udp" }, { "portgate-auth", { NULL }, 3710, "tcp" }, { "portgate-auth", { NULL }, 3710, "udp" }, { "edb-server2", { NULL }, 3711, "tcp" }, { "edb-server2", { NULL }, 3711, "udp" }, { "sentinel-ent", { NULL }, 3712, "tcp" }, { "sentinel-ent", { NULL }, 3712, "udp" }, { "tftps", { NULL }, 3713, "tcp" }, { "tftps", { NULL }, 3713, "udp" }, { "delos-dms", { NULL }, 3714, "tcp" }, { "delos-dms", { NULL }, 3714, "udp" }, { "anoto-rendezv", { NULL }, 3715, "tcp" }, { "anoto-rendezv", { NULL }, 3715, "udp" }, { "wv-csp-sms-cir", { NULL }, 3716, "tcp" }, { "wv-csp-sms-cir", { NULL }, 3716, "udp" }, { "wv-csp-udp-cir", { NULL }, 3717, "tcp" }, { "wv-csp-udp-cir", { NULL }, 3717, "udp" }, { "opus-services", { NULL }, 3718, "tcp" }, { "opus-services", { NULL }, 3718, "udp" }, { "itelserverport", { NULL }, 3719, "tcp" }, { "itelserverport", { NULL }, 3719, "udp" }, { "ufastro-instr", { NULL }, 3720, "tcp" }, { "ufastro-instr", { NULL }, 3720, "udp" }, { "xsync", { NULL }, 3721, "tcp" }, { "xsync", { NULL }, 3721, "udp" }, { "xserveraid", { NULL }, 3722, "tcp" }, { "xserveraid", { NULL }, 3722, "udp" }, { "sychrond", { NULL }, 3723, "tcp" }, { "sychrond", { NULL }, 3723, "udp" }, { "blizwow", { NULL }, 3724, "tcp" }, { "blizwow", { NULL }, 3724, "udp" }, { "na-er-tip", { NULL }, 3725, "tcp" }, { "na-er-tip", { NULL }, 3725, "udp" }, { "array-manager", { NULL }, 3726, "tcp" }, { "array-manager", { NULL }, 3726, "udp" }, { "e-mdu", { NULL }, 3727, "tcp" }, { "e-mdu", { NULL }, 3727, "udp" }, { "e-woa", { NULL }, 3728, "tcp" }, { "e-woa", { NULL }, 3728, "udp" }, { "fksp-audit", { NULL }, 3729, "tcp" }, { "fksp-audit", { NULL }, 3729, "udp" }, { "client-ctrl", { NULL }, 3730, "tcp" }, { "client-ctrl", { NULL }, 3730, "udp" }, { "smap", { NULL }, 3731, "tcp" }, { "smap", { NULL }, 3731, "udp" }, { "m-wnn", { NULL }, 3732, "tcp" }, { "m-wnn", { NULL }, 3732, "udp" }, { "multip-msg", { NULL }, 3733, "tcp" }, { "multip-msg", { NULL }, 3733, "udp" }, { "synel-data", { NULL }, 3734, "tcp" }, { "synel-data", { NULL }, 3734, "udp" }, { "pwdis", { NULL }, 3735, "tcp" }, { "pwdis", { NULL }, 3735, "udp" }, { "rs-rmi", { NULL }, 3736, "tcp" }, { "rs-rmi", { NULL }, 3736, "udp" }, { "xpanel", { NULL }, 3737, "tcp" }, { "versatalk", { NULL }, 3738, "tcp" }, { "versatalk", { NULL }, 3738, "udp" }, { "launchbird-lm", { NULL }, 3739, "tcp" }, { "launchbird-lm", { NULL }, 3739, "udp" }, { "heartbeat", { NULL }, 3740, "tcp" }, { "heartbeat", { NULL }, 3740, "udp" }, { "wysdma", 
{ NULL }, 3741, "tcp" }, { "wysdma", { NULL }, 3741, "udp" }, { "cst-port", { NULL }, 3742, "tcp" }, { "cst-port", { NULL }, 3742, "udp" }, { "ipcs-command", { NULL }, 3743, "tcp" }, { "ipcs-command", { NULL }, 3743, "udp" }, { "sasg", { NULL }, 3744, "tcp" }, { "sasg", { NULL }, 3744, "udp" }, { "gw-call-port", { NULL }, 3745, "tcp" }, { "gw-call-port", { NULL }, 3745, "udp" }, { "linktest", { NULL }, 3746, "tcp" }, { "linktest", { NULL }, 3746, "udp" }, { "linktest-s", { NULL }, 3747, "tcp" }, { "linktest-s", { NULL }, 3747, "udp" }, { "webdata", { NULL }, 3748, "tcp" }, { "webdata", { NULL }, 3748, "udp" }, { "cimtrak", { NULL }, 3749, "tcp" }, { "cimtrak", { NULL }, 3749, "udp" }, { "cbos-ip-port", { NULL }, 3750, "tcp" }, { "cbos-ip-port", { NULL }, 3750, "udp" }, { "gprs-cube", { NULL }, 3751, "tcp" }, { "gprs-cube", { NULL }, 3751, "udp" }, { "vipremoteagent", { NULL }, 3752, "tcp" }, { "vipremoteagent", { NULL }, 3752, "udp" }, { "nattyserver", { NULL }, 3753, "tcp" }, { "nattyserver", { NULL }, 3753, "udp" }, { "timestenbroker", { NULL }, 3754, "tcp" }, { "timestenbroker", { NULL }, 3754, "udp" }, { "sas-remote-hlp", { NULL }, 3755, "tcp" }, { "sas-remote-hlp", { NULL }, 3755, "udp" }, { "canon-capt", { NULL }, 3756, "tcp" }, { "canon-capt", { NULL }, 3756, "udp" }, { "grf-port", { NULL }, 3757, "tcp" }, { "grf-port", { NULL }, 3757, "udp" }, { "apw-registry", { NULL }, 3758, "tcp" }, { "apw-registry", { NULL }, 3758, "udp" }, { "exapt-lmgr", { NULL }, 3759, "tcp" }, { "exapt-lmgr", { NULL }, 3759, "udp" }, { "adtempusclient", { NULL }, 3760, "tcp" }, { "adtempusclient", { NULL }, 3760, "udp" }, { "gsakmp", { NULL }, 3761, "tcp" }, { "gsakmp", { NULL }, 3761, "udp" }, { "gbs-smp", { NULL }, 3762, "tcp" }, { "gbs-smp", { NULL }, 3762, "udp" }, { "xo-wave", { NULL }, 3763, "tcp" }, { "xo-wave", { NULL }, 3763, "udp" }, { "mni-prot-rout", { NULL }, 3764, "tcp" }, { "mni-prot-rout", { NULL }, 3764, "udp" }, { "rtraceroute", { NULL }, 3765, "tcp" }, { "rtraceroute", { NULL }, 3765, "udp" }, { "listmgr-port", { NULL }, 3767, "tcp" }, { "listmgr-port", { NULL }, 3767, "udp" }, { "rblcheckd", { NULL }, 3768, "tcp" }, { "rblcheckd", { NULL }, 3768, "udp" }, { "haipe-otnk", { NULL }, 3769, "tcp" }, { "haipe-otnk", { NULL }, 3769, "udp" }, { "cindycollab", { NULL }, 3770, "tcp" }, { "cindycollab", { NULL }, 3770, "udp" }, { "paging-port", { NULL }, 3771, "tcp" }, { "paging-port", { NULL }, 3771, "udp" }, { "ctp", { NULL }, 3772, "tcp" }, { "ctp", { NULL }, 3772, "udp" }, { "ctdhercules", { NULL }, 3773, "tcp" }, { "ctdhercules", { NULL }, 3773, "udp" }, { "zicom", { NULL }, 3774, "tcp" }, { "zicom", { NULL }, 3774, "udp" }, { "ispmmgr", { NULL }, 3775, "tcp" }, { "ispmmgr", { NULL }, 3775, "udp" }, { "dvcprov-port", { NULL }, 3776, "tcp" }, { "dvcprov-port", { NULL }, 3776, "udp" }, { "jibe-eb", { NULL }, 3777, "tcp" }, { "jibe-eb", { NULL }, 3777, "udp" }, { "c-h-it-port", { NULL }, 3778, "tcp" }, { "c-h-it-port", { NULL }, 3778, "udp" }, { "cognima", { NULL }, 3779, "tcp" }, { "cognima", { NULL }, 3779, "udp" }, { "nnp", { NULL }, 3780, "tcp" }, { "nnp", { NULL }, 3780, "udp" }, { "abcvoice-port", { NULL }, 3781, "tcp" }, { "abcvoice-port", { NULL }, 3781, "udp" }, { "iso-tp0s", { NULL }, 3782, "tcp" }, { "iso-tp0s", { NULL }, 3782, "udp" }, { "bim-pem", { NULL }, 3783, "tcp" }, { "bim-pem", { NULL }, 3783, "udp" }, { "bfd-control", { NULL }, 3784, "tcp" }, { "bfd-control", { NULL }, 3784, "udp" }, { "bfd-echo", { NULL }, 3785, "tcp" }, { "bfd-echo", { NULL }, 3785, "udp" }, { 
"upstriggervsw", { NULL }, 3786, "tcp" }, { "upstriggervsw", { NULL }, 3786, "udp" }, { "fintrx", { NULL }, 3787, "tcp" }, { "fintrx", { NULL }, 3787, "udp" }, { "isrp-port", { NULL }, 3788, "tcp" }, { "isrp-port", { NULL }, 3788, "udp" }, { "remotedeploy", { NULL }, 3789, "tcp" }, { "remotedeploy", { NULL }, 3789, "udp" }, { "quickbooksrds", { NULL }, 3790, "tcp" }, { "quickbooksrds", { NULL }, 3790, "udp" }, { "tvnetworkvideo", { NULL }, 3791, "tcp" }, { "tvnetworkvideo", { NULL }, 3791, "udp" }, { "sitewatch", { NULL }, 3792, "tcp" }, { "sitewatch", { NULL }, 3792, "udp" }, { "dcsoftware", { NULL }, 3793, "tcp" }, { "dcsoftware", { NULL }, 3793, "udp" }, { "jaus", { NULL }, 3794, "tcp" }, { "jaus", { NULL }, 3794, "udp" }, { "myblast", { NULL }, 3795, "tcp" }, { "myblast", { NULL }, 3795, "udp" }, { "spw-dialer", { NULL }, 3796, "tcp" }, { "spw-dialer", { NULL }, 3796, "udp" }, { "idps", { NULL }, 3797, "tcp" }, { "idps", { NULL }, 3797, "udp" }, { "minilock", { NULL }, 3798, "tcp" }, { "minilock", { NULL }, 3798, "udp" }, { "radius-dynauth", { NULL }, 3799, "tcp" }, { "radius-dynauth", { NULL }, 3799, "udp" }, { "pwgpsi", { NULL }, 3800, "tcp" }, { "pwgpsi", { NULL }, 3800, "udp" }, { "ibm-mgr", { NULL }, 3801, "tcp" }, { "ibm-mgr", { NULL }, 3801, "udp" }, { "vhd", { NULL }, 3802, "tcp" }, { "vhd", { NULL }, 3802, "udp" }, { "soniqsync", { NULL }, 3803, "tcp" }, { "soniqsync", { NULL }, 3803, "udp" }, { "iqnet-port", { NULL }, 3804, "tcp" }, { "iqnet-port", { NULL }, 3804, "udp" }, { "tcpdataserver", { NULL }, 3805, "tcp" }, { "tcpdataserver", { NULL }, 3805, "udp" }, { "wsmlb", { NULL }, 3806, "tcp" }, { "wsmlb", { NULL }, 3806, "udp" }, { "spugna", { NULL }, 3807, "tcp" }, { "spugna", { NULL }, 3807, "udp" }, { "sun-as-iiops-ca", { NULL }, 3808, "tcp" }, { "sun-as-iiops-ca", { NULL }, 3808, "udp" }, { "apocd", { NULL }, 3809, "tcp" }, { "apocd", { NULL }, 3809, "udp" }, { "wlanauth", { NULL }, 3810, "tcp" }, { "wlanauth", { NULL }, 3810, "udp" }, { "amp", { NULL }, 3811, "tcp" }, { "amp", { NULL }, 3811, "udp" }, { "neto-wol-server", { NULL }, 3812, "tcp" }, { "neto-wol-server", { NULL }, 3812, "udp" }, { "rap-ip", { NULL }, 3813, "tcp" }, { "rap-ip", { NULL }, 3813, "udp" }, { "neto-dcs", { NULL }, 3814, "tcp" }, { "neto-dcs", { NULL }, 3814, "udp" }, { "lansurveyorxml", { NULL }, 3815, "tcp" }, { "lansurveyorxml", { NULL }, 3815, "udp" }, { "sunlps-http", { NULL }, 3816, "tcp" }, { "sunlps-http", { NULL }, 3816, "udp" }, { "tapeware", { NULL }, 3817, "tcp" }, { "tapeware", { NULL }, 3817, "udp" }, { "crinis-hb", { NULL }, 3818, "tcp" }, { "crinis-hb", { NULL }, 3818, "udp" }, { "epl-slp", { NULL }, 3819, "tcp" }, { "epl-slp", { NULL }, 3819, "udp" }, { "scp", { NULL }, 3820, "tcp" }, { "scp", { NULL }, 3820, "udp" }, { "pmcp", { NULL }, 3821, "tcp" }, { "pmcp", { NULL }, 3821, "udp" }, { "acp-discovery", { NULL }, 3822, "tcp" }, { "acp-discovery", { NULL }, 3822, "udp" }, { "acp-conduit", { NULL }, 3823, "tcp" }, { "acp-conduit", { NULL }, 3823, "udp" }, { "acp-policy", { NULL }, 3824, "tcp" }, { "acp-policy", { NULL }, 3824, "udp" }, { "ffserver", { NULL }, 3825, "tcp" }, { "ffserver", { NULL }, 3825, "udp" }, { "wormux", { NULL }, 3826, "tcp" }, { "wormux", { NULL }, 3826, "udp" }, { "netmpi", { NULL }, 3827, "tcp" }, { "netmpi", { NULL }, 3827, "udp" }, { "neteh", { NULL }, 3828, "tcp" }, { "neteh", { NULL }, 3828, "udp" }, { "neteh-ext", { NULL }, 3829, "tcp" }, { "neteh-ext", { NULL }, 3829, "udp" }, { "cernsysmgmtagt", { NULL }, 3830, "tcp" }, { "cernsysmgmtagt", { NULL }, 
3830, "udp" }, { "dvapps", { NULL }, 3831, "tcp" }, { "dvapps", { NULL }, 3831, "udp" }, { "xxnetserver", { NULL }, 3832, "tcp" }, { "xxnetserver", { NULL }, 3832, "udp" }, { "aipn-auth", { NULL }, 3833, "tcp" }, { "aipn-auth", { NULL }, 3833, "udp" }, { "spectardata", { NULL }, 3834, "tcp" }, { "spectardata", { NULL }, 3834, "udp" }, { "spectardb", { NULL }, 3835, "tcp" }, { "spectardb", { NULL }, 3835, "udp" }, { "markem-dcp", { NULL }, 3836, "tcp" }, { "markem-dcp", { NULL }, 3836, "udp" }, { "mkm-discovery", { NULL }, 3837, "tcp" }, { "mkm-discovery", { NULL }, 3837, "udp" }, { "sos", { NULL }, 3838, "tcp" }, { "sos", { NULL }, 3838, "udp" }, { "amx-rms", { NULL }, 3839, "tcp" }, { "amx-rms", { NULL }, 3839, "udp" }, { "flirtmitmir", { NULL }, 3840, "tcp" }, { "flirtmitmir", { NULL }, 3840, "udp" }, { "zfirm-shiprush3", { NULL }, 3841, "tcp" }, { "zfirm-shiprush3", { NULL }, 3841, "udp" }, { "nhci", { NULL }, 3842, "tcp" }, { "nhci", { NULL }, 3842, "udp" }, { "quest-agent", { NULL }, 3843, "tcp" }, { "quest-agent", { NULL }, 3843, "udp" }, { "rnm", { NULL }, 3844, "tcp" }, { "rnm", { NULL }, 3844, "udp" }, { "v-one-spp", { NULL }, 3845, "tcp" }, { "v-one-spp", { NULL }, 3845, "udp" }, { "an-pcp", { NULL }, 3846, "tcp" }, { "an-pcp", { NULL }, 3846, "udp" }, { "msfw-control", { NULL }, 3847, "tcp" }, { "msfw-control", { NULL }, 3847, "udp" }, { "item", { NULL }, 3848, "tcp" }, { "item", { NULL }, 3848, "udp" }, { "spw-dnspreload", { NULL }, 3849, "tcp" }, { "spw-dnspreload", { NULL }, 3849, "udp" }, { "qtms-bootstrap", { NULL }, 3850, "tcp" }, { "qtms-bootstrap", { NULL }, 3850, "udp" }, { "spectraport", { NULL }, 3851, "tcp" }, { "spectraport", { NULL }, 3851, "udp" }, { "sse-app-config", { NULL }, 3852, "tcp" }, { "sse-app-config", { NULL }, 3852, "udp" }, { "sscan", { NULL }, 3853, "tcp" }, { "sscan", { NULL }, 3853, "udp" }, { "stryker-com", { NULL }, 3854, "tcp" }, { "stryker-com", { NULL }, 3854, "udp" }, { "opentrac", { NULL }, 3855, "tcp" }, { "opentrac", { NULL }, 3855, "udp" }, { "informer", { NULL }, 3856, "tcp" }, { "informer", { NULL }, 3856, "udp" }, { "trap-port", { NULL }, 3857, "tcp" }, { "trap-port", { NULL }, 3857, "udp" }, { "trap-port-mom", { NULL }, 3858, "tcp" }, { "trap-port-mom", { NULL }, 3858, "udp" }, { "nav-port", { NULL }, 3859, "tcp" }, { "nav-port", { NULL }, 3859, "udp" }, { "sasp", { NULL }, 3860, "tcp" }, { "sasp", { NULL }, 3860, "udp" }, { "winshadow-hd", { NULL }, 3861, "tcp" }, { "winshadow-hd", { NULL }, 3861, "udp" }, { "giga-pocket", { NULL }, 3862, "tcp" }, { "giga-pocket", { NULL }, 3862, "udp" }, { "asap-tcp", { NULL }, 3863, "tcp" }, { "asap-udp", { NULL }, 3863, "udp" }, { "asap-sctp", { NULL }, 3863, "sctp" }, { "asap-tcp-tls", { NULL }, 3864, "tcp" }, { "asap-sctp-tls", { NULL }, 3864, "sctp" }, { "xpl", { NULL }, 3865, "tcp" }, { "xpl", { NULL }, 3865, "udp" }, { "dzdaemon", { NULL }, 3866, "tcp" }, { "dzdaemon", { NULL }, 3866, "udp" }, { "dzoglserver", { NULL }, 3867, "tcp" }, { "dzoglserver", { NULL }, 3867, "udp" }, { "diameter", { NULL }, 3868, "tcp" }, { "diameter", { NULL }, 3868, "sctp" }, { "ovsam-mgmt", { NULL }, 3869, "tcp" }, { "ovsam-mgmt", { NULL }, 3869, "udp" }, { "ovsam-d-agent", { NULL }, 3870, "tcp" }, { "ovsam-d-agent", { NULL }, 3870, "udp" }, { "avocent-adsap", { NULL }, 3871, "tcp" }, { "avocent-adsap", { NULL }, 3871, "udp" }, { "oem-agent", { NULL }, 3872, "tcp" }, { "oem-agent", { NULL }, 3872, "udp" }, { "fagordnc", { NULL }, 3873, "tcp" }, { "fagordnc", { NULL }, 3873, "udp" }, { "sixxsconfig", { NULL }, 
3874, "tcp" }, { "sixxsconfig", { NULL }, 3874, "udp" }, { "pnbscada", { NULL }, 3875, "tcp" }, { "pnbscada", { NULL }, 3875, "udp" }, { "dl_agent", { NULL }, 3876, "tcp" }, { "dl_agent", { NULL }, 3876, "udp" }, { "xmpcr-interface", { NULL }, 3877, "tcp" }, { "xmpcr-interface", { NULL }, 3877, "udp" }, { "fotogcad", { NULL }, 3878, "tcp" }, { "fotogcad", { NULL }, 3878, "udp" }, { "appss-lm", { NULL }, 3879, "tcp" }, { "appss-lm", { NULL }, 3879, "udp" }, { "igrs", { NULL }, 3880, "tcp" }, { "igrs", { NULL }, 3880, "udp" }, { "idac", { NULL }, 3881, "tcp" }, { "idac", { NULL }, 3881, "udp" }, { "msdts1", { NULL }, 3882, "tcp" }, { "msdts1", { NULL }, 3882, "udp" }, { "vrpn", { NULL }, 3883, "tcp" }, { "vrpn", { NULL }, 3883, "udp" }, { "softrack-meter", { NULL }, 3884, "tcp" }, { "softrack-meter", { NULL }, 3884, "udp" }, { "topflow-ssl", { NULL }, 3885, "tcp" }, { "topflow-ssl", { NULL }, 3885, "udp" }, { "nei-management", { NULL }, 3886, "tcp" }, { "nei-management", { NULL }, 3886, "udp" }, { "ciphire-data", { NULL }, 3887, "tcp" }, { "ciphire-data", { NULL }, 3887, "udp" }, { "ciphire-serv", { NULL }, 3888, "tcp" }, { "ciphire-serv", { NULL }, 3888, "udp" }, { "dandv-tester", { NULL }, 3889, "tcp" }, { "dandv-tester", { NULL }, 3889, "udp" }, { "ndsconnect", { NULL }, 3890, "tcp" }, { "ndsconnect", { NULL }, 3890, "udp" }, { "rtc-pm-port", { NULL }, 3891, "tcp" }, { "rtc-pm-port", { NULL }, 3891, "udp" }, { "pcc-image-port", { NULL }, 3892, "tcp" }, { "pcc-image-port", { NULL }, 3892, "udp" }, { "cgi-starapi", { NULL }, 3893, "tcp" }, { "cgi-starapi", { NULL }, 3893, "udp" }, { "syam-agent", { NULL }, 3894, "tcp" }, { "syam-agent", { NULL }, 3894, "udp" }, { "syam-smc", { NULL }, 3895, "tcp" }, { "syam-smc", { NULL }, 3895, "udp" }, { "sdo-tls", { NULL }, 3896, "tcp" }, { "sdo-tls", { NULL }, 3896, "udp" }, { "sdo-ssh", { NULL }, 3897, "tcp" }, { "sdo-ssh", { NULL }, 3897, "udp" }, { "senip", { NULL }, 3898, "tcp" }, { "senip", { NULL }, 3898, "udp" }, { "itv-control", { NULL }, 3899, "tcp" }, { "itv-control", { NULL }, 3899, "udp" }, { "udt_os", { NULL }, 3900, "tcp" }, { "udt_os", { NULL }, 3900, "udp" }, { "nimsh", { NULL }, 3901, "tcp" }, { "nimsh", { NULL }, 3901, "udp" }, { "nimaux", { NULL }, 3902, "tcp" }, { "nimaux", { NULL }, 3902, "udp" }, { "charsetmgr", { NULL }, 3903, "tcp" }, { "charsetmgr", { NULL }, 3903, "udp" }, { "omnilink-port", { NULL }, 3904, "tcp" }, { "omnilink-port", { NULL }, 3904, "udp" }, { "mupdate", { NULL }, 3905, "tcp" }, { "mupdate", { NULL }, 3905, "udp" }, { "topovista-data", { NULL }, 3906, "tcp" }, { "topovista-data", { NULL }, 3906, "udp" }, { "imoguia-port", { NULL }, 3907, "tcp" }, { "imoguia-port", { NULL }, 3907, "udp" }, { "hppronetman", { NULL }, 3908, "tcp" }, { "hppronetman", { NULL }, 3908, "udp" }, { "surfcontrolcpa", { NULL }, 3909, "tcp" }, { "surfcontrolcpa", { NULL }, 3909, "udp" }, { "prnrequest", { NULL }, 3910, "tcp" }, { "prnrequest", { NULL }, 3910, "udp" }, { "prnstatus", { NULL }, 3911, "tcp" }, { "prnstatus", { NULL }, 3911, "udp" }, { "gbmt-stars", { NULL }, 3912, "tcp" }, { "gbmt-stars", { NULL }, 3912, "udp" }, { "listcrt-port", { NULL }, 3913, "tcp" }, { "listcrt-port", { NULL }, 3913, "udp" }, { "listcrt-port-2", { NULL }, 3914, "tcp" }, { "listcrt-port-2", { NULL }, 3914, "udp" }, { "agcat", { NULL }, 3915, "tcp" }, { "agcat", { NULL }, 3915, "udp" }, { "wysdmc", { NULL }, 3916, "tcp" }, { "wysdmc", { NULL }, 3916, "udp" }, { "aftmux", { NULL }, 3917, "tcp" }, { "aftmux", { NULL }, 3917, "udp" }, { "pktcablemmcops", { 
NULL }, 3918, "tcp" }, { "pktcablemmcops", { NULL }, 3918, "udp" }, { "hyperip", { NULL }, 3919, "tcp" }, { "hyperip", { NULL }, 3919, "udp" }, { "exasoftport1", { NULL }, 3920, "tcp" }, { "exasoftport1", { NULL }, 3920, "udp" }, { "herodotus-net", { NULL }, 3921, "tcp" }, { "herodotus-net", { NULL }, 3921, "udp" }, { "sor-update", { NULL }, 3922, "tcp" }, { "sor-update", { NULL }, 3922, "udp" }, { "symb-sb-port", { NULL }, 3923, "tcp" }, { "symb-sb-port", { NULL }, 3923, "udp" }, { "mpl-gprs-port", { NULL }, 3924, "tcp" }, { "mpl-gprs-port", { NULL }, 3924, "udp" }, { "zmp", { NULL }, 3925, "tcp" }, { "zmp", { NULL }, 3925, "udp" }, { "winport", { NULL }, 3926, "tcp" }, { "winport", { NULL }, 3926, "udp" }, { "natdataservice", { NULL }, 3927, "tcp" }, { "natdataservice", { NULL }, 3927, "udp" }, { "netboot-pxe", { NULL }, 3928, "tcp" }, { "netboot-pxe", { NULL }, 3928, "udp" }, { "smauth-port", { NULL }, 3929, "tcp" }, { "smauth-port", { NULL }, 3929, "udp" }, { "syam-webserver", { NULL }, 3930, "tcp" }, { "syam-webserver", { NULL }, 3930, "udp" }, { "msr-plugin-port", { NULL }, 3931, "tcp" }, { "msr-plugin-port", { NULL }, 3931, "udp" }, { "dyn-site", { NULL }, 3932, "tcp" }, { "dyn-site", { NULL }, 3932, "udp" }, { "plbserve-port", { NULL }, 3933, "tcp" }, { "plbserve-port", { NULL }, 3933, "udp" }, { "sunfm-port", { NULL }, 3934, "tcp" }, { "sunfm-port", { NULL }, 3934, "udp" }, { "sdp-portmapper", { NULL }, 3935, "tcp" }, { "sdp-portmapper", { NULL }, 3935, "udp" }, { "mailprox", { NULL }, 3936, "tcp" }, { "mailprox", { NULL }, 3936, "udp" }, { "dvbservdsc", { NULL }, 3937, "tcp" }, { "dvbservdsc", { NULL }, 3937, "udp" }, { "dbcontrol_agent", { NULL }, 3938, "tcp" }, { "dbcontrol_agent", { NULL }, 3938, "udp" }, { "aamp", { NULL }, 3939, "tcp" }, { "aamp", { NULL }, 3939, "udp" }, { "xecp-node", { NULL }, 3940, "tcp" }, { "xecp-node", { NULL }, 3940, "udp" }, { "homeportal-web", { NULL }, 3941, "tcp" }, { "homeportal-web", { NULL }, 3941, "udp" }, { "srdp", { NULL }, 3942, "tcp" }, { "srdp", { NULL }, 3942, "udp" }, { "tig", { NULL }, 3943, "tcp" }, { "tig", { NULL }, 3943, "udp" }, { "sops", { NULL }, 3944, "tcp" }, { "sops", { NULL }, 3944, "udp" }, { "emcads", { NULL }, 3945, "tcp" }, { "emcads", { NULL }, 3945, "udp" }, { "backupedge", { NULL }, 3946, "tcp" }, { "backupedge", { NULL }, 3946, "udp" }, { "ccp", { NULL }, 3947, "tcp" }, { "ccp", { NULL }, 3947, "udp" }, { "apdap", { NULL }, 3948, "tcp" }, { "apdap", { NULL }, 3948, "udp" }, { "drip", { NULL }, 3949, "tcp" }, { "drip", { NULL }, 3949, "udp" }, { "namemunge", { NULL }, 3950, "tcp" }, { "namemunge", { NULL }, 3950, "udp" }, { "pwgippfax", { NULL }, 3951, "tcp" }, { "pwgippfax", { NULL }, 3951, "udp" }, { "i3-sessionmgr", { NULL }, 3952, "tcp" }, { "i3-sessionmgr", { NULL }, 3952, "udp" }, { "xmlink-connect", { NULL }, 3953, "tcp" }, { "xmlink-connect", { NULL }, 3953, "udp" }, { "adrep", { NULL }, 3954, "tcp" }, { "adrep", { NULL }, 3954, "udp" }, { "p2pcommunity", { NULL }, 3955, "tcp" }, { "p2pcommunity", { NULL }, 3955, "udp" }, { "gvcp", { NULL }, 3956, "tcp" }, { "gvcp", { NULL }, 3956, "udp" }, { "mqe-broker", { NULL }, 3957, "tcp" }, { "mqe-broker", { NULL }, 3957, "udp" }, { "mqe-agent", { NULL }, 3958, "tcp" }, { "mqe-agent", { NULL }, 3958, "udp" }, { "treehopper", { NULL }, 3959, "tcp" }, { "treehopper", { NULL }, 3959, "udp" }, { "bess", { NULL }, 3960, "tcp" }, { "bess", { NULL }, 3960, "udp" }, { "proaxess", { NULL }, 3961, "tcp" }, { "proaxess", { NULL }, 3961, "udp" }, { "sbi-agent", { NULL }, 3962, 
"tcp" }, { "sbi-agent", { NULL }, 3962, "udp" }, { "thrp", { NULL }, 3963, "tcp" }, { "thrp", { NULL }, 3963, "udp" }, { "sasggprs", { NULL }, 3964, "tcp" }, { "sasggprs", { NULL }, 3964, "udp" }, { "ati-ip-to-ncpe", { NULL }, 3965, "tcp" }, { "ati-ip-to-ncpe", { NULL }, 3965, "udp" }, { "bflckmgr", { NULL }, 3966, "tcp" }, { "bflckmgr", { NULL }, 3966, "udp" }, { "ppsms", { NULL }, 3967, "tcp" }, { "ppsms", { NULL }, 3967, "udp" }, { "ianywhere-dbns", { NULL }, 3968, "tcp" }, { "ianywhere-dbns", { NULL }, 3968, "udp" }, { "landmarks", { NULL }, 3969, "tcp" }, { "landmarks", { NULL }, 3969, "udp" }, { "lanrevagent", { NULL }, 3970, "tcp" }, { "lanrevagent", { NULL }, 3970, "udp" }, { "lanrevserver", { NULL }, 3971, "tcp" }, { "lanrevserver", { NULL }, 3971, "udp" }, { "iconp", { NULL }, 3972, "tcp" }, { "iconp", { NULL }, 3972, "udp" }, { "progistics", { NULL }, 3973, "tcp" }, { "progistics", { NULL }, 3973, "udp" }, { "citysearch", { NULL }, 3974, "tcp" }, { "citysearch", { NULL }, 3974, "udp" }, { "airshot", { NULL }, 3975, "tcp" }, { "airshot", { NULL }, 3975, "udp" }, { "opswagent", { NULL }, 3976, "tcp" }, { "opswagent", { NULL }, 3976, "udp" }, { "opswmanager", { NULL }, 3977, "tcp" }, { "opswmanager", { NULL }, 3977, "udp" }, { "secure-cfg-svr", { NULL }, 3978, "tcp" }, { "secure-cfg-svr", { NULL }, 3978, "udp" }, { "smwan", { NULL }, 3979, "tcp" }, { "smwan", { NULL }, 3979, "udp" }, { "acms", { NULL }, 3980, "tcp" }, { "acms", { NULL }, 3980, "udp" }, { "starfish", { NULL }, 3981, "tcp" }, { "starfish", { NULL }, 3981, "udp" }, { "eis", { NULL }, 3982, "tcp" }, { "eis", { NULL }, 3982, "udp" }, { "eisp", { NULL }, 3983, "tcp" }, { "eisp", { NULL }, 3983, "udp" }, { "mapper-nodemgr", { NULL }, 3984, "tcp" }, { "mapper-nodemgr", { NULL }, 3984, "udp" }, { "mapper-mapethd", { NULL }, 3985, "tcp" }, { "mapper-mapethd", { NULL }, 3985, "udp" }, { "mapper-ws_ethd", { NULL }, 3986, "tcp" }, { "mapper-ws_ethd", { NULL }, 3986, "udp" }, { "centerline", { NULL }, 3987, "tcp" }, { "centerline", { NULL }, 3987, "udp" }, { "dcs-config", { NULL }, 3988, "tcp" }, { "dcs-config", { NULL }, 3988, "udp" }, { "bv-queryengine", { NULL }, 3989, "tcp" }, { "bv-queryengine", { NULL }, 3989, "udp" }, { "bv-is", { NULL }, 3990, "tcp" }, { "bv-is", { NULL }, 3990, "udp" }, { "bv-smcsrv", { NULL }, 3991, "tcp" }, { "bv-smcsrv", { NULL }, 3991, "udp" }, { "bv-ds", { NULL }, 3992, "tcp" }, { "bv-ds", { NULL }, 3992, "udp" }, { "bv-agent", { NULL }, 3993, "tcp" }, { "bv-agent", { NULL }, 3993, "udp" }, { "iss-mgmt-ssl", { NULL }, 3995, "tcp" }, { "iss-mgmt-ssl", { NULL }, 3995, "udp" }, { "abcsoftware", { NULL }, 3996, "tcp" }, { "abcsoftware", { NULL }, 3996, "udp" }, { "agentsease-db", { NULL }, 3997, "tcp" }, { "agentsease-db", { NULL }, 3997, "udp" }, { "dnx", { NULL }, 3998, "tcp" }, { "dnx", { NULL }, 3998, "udp" }, { "nvcnet", { NULL }, 3999, "tcp" }, { "nvcnet", { NULL }, 3999, "udp" }, { "terabase", { NULL }, 4000, "tcp" }, { "terabase", { NULL }, 4000, "udp" }, { "newoak", { NULL }, 4001, "tcp" }, { "newoak", { NULL }, 4001, "udp" }, { "pxc-spvr-ft", { NULL }, 4002, "tcp" }, { "pxc-spvr-ft", { NULL }, 4002, "udp" }, { "pxc-splr-ft", { NULL }, 4003, "tcp" }, { "pxc-splr-ft", { NULL }, 4003, "udp" }, { "pxc-roid", { NULL }, 4004, "tcp" }, { "pxc-roid", { NULL }, 4004, "udp" }, { "pxc-pin", { NULL }, 4005, "tcp" }, { "pxc-pin", { NULL }, 4005, "udp" }, { "pxc-spvr", { NULL }, 4006, "tcp" }, { "pxc-spvr", { NULL }, 4006, "udp" }, { "pxc-splr", { NULL }, 4007, "tcp" }, { "pxc-splr", { NULL }, 4007, "udp" 
}, { "netcheque", { NULL }, 4008, "tcp" }, { "netcheque", { NULL }, 4008, "udp" }, { "chimera-hwm", { NULL }, 4009, "tcp" }, { "chimera-hwm", { NULL }, 4009, "udp" }, { "samsung-unidex", { NULL }, 4010, "tcp" }, { "samsung-unidex", { NULL }, 4010, "udp" }, { "altserviceboot", { NULL }, 4011, "tcp" }, { "altserviceboot", { NULL }, 4011, "udp" }, { "pda-gate", { NULL }, 4012, "tcp" }, { "pda-gate", { NULL }, 4012, "udp" }, { "acl-manager", { NULL }, 4013, "tcp" }, { "acl-manager", { NULL }, 4013, "udp" }, { "taiclock", { NULL }, 4014, "tcp" }, { "taiclock", { NULL }, 4014, "udp" }, { "talarian-mcast1", { NULL }, 4015, "tcp" }, { "talarian-mcast1", { NULL }, 4015, "udp" }, { "talarian-mcast2", { NULL }, 4016, "tcp" }, { "talarian-mcast2", { NULL }, 4016, "udp" }, { "talarian-mcast3", { NULL }, 4017, "tcp" }, { "talarian-mcast3", { NULL }, 4017, "udp" }, { "talarian-mcast4", { NULL }, 4018, "tcp" }, { "talarian-mcast4", { NULL }, 4018, "udp" }, { "talarian-mcast5", { NULL }, 4019, "tcp" }, { "talarian-mcast5", { NULL }, 4019, "udp" }, { "trap", { NULL }, 4020, "tcp" }, { "trap", { NULL }, 4020, "udp" }, { "nexus-portal", { NULL }, 4021, "tcp" }, { "nexus-portal", { NULL }, 4021, "udp" }, { "dnox", { NULL }, 4022, "tcp" }, { "dnox", { NULL }, 4022, "udp" }, { "esnm-zoning", { NULL }, 4023, "tcp" }, { "esnm-zoning", { NULL }, 4023, "udp" }, { "tnp1-port", { NULL }, 4024, "tcp" }, { "tnp1-port", { NULL }, 4024, "udp" }, { "partimage", { NULL }, 4025, "tcp" }, { "partimage", { NULL }, 4025, "udp" }, { "as-debug", { NULL }, 4026, "tcp" }, { "as-debug", { NULL }, 4026, "udp" }, { "bxp", { NULL }, 4027, "tcp" }, { "bxp", { NULL }, 4027, "udp" }, { "dtserver-port", { NULL }, 4028, "tcp" }, { "dtserver-port", { NULL }, 4028, "udp" }, { "ip-qsig", { NULL }, 4029, "tcp" }, { "ip-qsig", { NULL }, 4029, "udp" }, { "jdmn-port", { NULL }, 4030, "tcp" }, { "jdmn-port", { NULL }, 4030, "udp" }, { "suucp", { NULL }, 4031, "tcp" }, { "suucp", { NULL }, 4031, "udp" }, { "vrts-auth-port", { NULL }, 4032, "tcp" }, { "vrts-auth-port", { NULL }, 4032, "udp" }, { "sanavigator", { NULL }, 4033, "tcp" }, { "sanavigator", { NULL }, 4033, "udp" }, { "ubxd", { NULL }, 4034, "tcp" }, { "ubxd", { NULL }, 4034, "udp" }, { "wap-push-http", { NULL }, 4035, "tcp" }, { "wap-push-http", { NULL }, 4035, "udp" }, { "wap-push-https", { NULL }, 4036, "tcp" }, { "wap-push-https", { NULL }, 4036, "udp" }, { "ravehd", { NULL }, 4037, "tcp" }, { "ravehd", { NULL }, 4037, "udp" }, { "fazzt-ptp", { NULL }, 4038, "tcp" }, { "fazzt-ptp", { NULL }, 4038, "udp" }, { "fazzt-admin", { NULL }, 4039, "tcp" }, { "fazzt-admin", { NULL }, 4039, "udp" }, { "yo-main", { NULL }, 4040, "tcp" }, { "yo-main", { NULL }, 4040, "udp" }, { "houston", { NULL }, 4041, "tcp" }, { "houston", { NULL }, 4041, "udp" }, { "ldxp", { NULL }, 4042, "tcp" }, { "ldxp", { NULL }, 4042, "udp" }, { "nirp", { NULL }, 4043, "tcp" }, { "nirp", { NULL }, 4043, "udp" }, { "ltp", { NULL }, 4044, "tcp" }, { "ltp", { NULL }, 4044, "udp" }, { "npp", { NULL }, 4045, "tcp" }, { "npp", { NULL }, 4045, "udp" }, { "acp-proto", { NULL }, 4046, "tcp" }, { "acp-proto", { NULL }, 4046, "udp" }, { "ctp-state", { NULL }, 4047, "tcp" }, { "ctp-state", { NULL }, 4047, "udp" }, { "wafs", { NULL }, 4049, "tcp" }, { "wafs", { NULL }, 4049, "udp" }, { "cisco-wafs", { NULL }, 4050, "tcp" }, { "cisco-wafs", { NULL }, 4050, "udp" }, { "cppdp", { NULL }, 4051, "tcp" }, { "cppdp", { NULL }, 4051, "udp" }, { "interact", { NULL }, 4052, "tcp" }, { "interact", { NULL }, 4052, "udp" }, { "ccu-comm-1", { NULL }, 
4053, "tcp" }, { "ccu-comm-1", { NULL }, 4053, "udp" }, { "ccu-comm-2", { NULL }, 4054, "tcp" }, { "ccu-comm-2", { NULL }, 4054, "udp" }, { "ccu-comm-3", { NULL }, 4055, "tcp" }, { "ccu-comm-3", { NULL }, 4055, "udp" }, { "lms", { NULL }, 4056, "tcp" }, { "lms", { NULL }, 4056, "udp" }, { "wfm", { NULL }, 4057, "tcp" }, { "wfm", { NULL }, 4057, "udp" }, { "kingfisher", { NULL }, 4058, "tcp" }, { "kingfisher", { NULL }, 4058, "udp" }, { "dlms-cosem", { NULL }, 4059, "tcp" }, { "dlms-cosem", { NULL }, 4059, "udp" }, { "dsmeter_iatc", { NULL }, 4060, "tcp" }, { "dsmeter_iatc", { NULL }, 4060, "udp" }, { "ice-location", { NULL }, 4061, "tcp" }, { "ice-location", { NULL }, 4061, "udp" }, { "ice-slocation", { NULL }, 4062, "tcp" }, { "ice-slocation", { NULL }, 4062, "udp" }, { "ice-router", { NULL }, 4063, "tcp" }, { "ice-router", { NULL }, 4063, "udp" }, { "ice-srouter", { NULL }, 4064, "tcp" }, { "ice-srouter", { NULL }, 4064, "udp" }, { "avanti_cdp", { NULL }, 4065, "tcp" }, { "avanti_cdp", { NULL }, 4065, "udp" }, { "pmas", { NULL }, 4066, "tcp" }, { "pmas", { NULL }, 4066, "udp" }, { "idp", { NULL }, 4067, "tcp" }, { "idp", { NULL }, 4067, "udp" }, { "ipfltbcst", { NULL }, 4068, "tcp" }, { "ipfltbcst", { NULL }, 4068, "udp" }, { "minger", { NULL }, 4069, "tcp" }, { "minger", { NULL }, 4069, "udp" }, { "tripe", { NULL }, 4070, "tcp" }, { "tripe", { NULL }, 4070, "udp" }, { "aibkup", { NULL }, 4071, "tcp" }, { "aibkup", { NULL }, 4071, "udp" }, { "zieto-sock", { NULL }, 4072, "tcp" }, { "zieto-sock", { NULL }, 4072, "udp" }, { "iRAPP", { NULL }, 4073, "tcp" }, { "iRAPP", { NULL }, 4073, "udp" }, { "cequint-cityid", { NULL }, 4074, "tcp" }, { "cequint-cityid", { NULL }, 4074, "udp" }, { "perimlan", { NULL }, 4075, "tcp" }, { "perimlan", { NULL }, 4075, "udp" }, { "seraph", { NULL }, 4076, "tcp" }, { "seraph", { NULL }, 4076, "udp" }, { "ascomalarm", { NULL }, 4077, "udp" }, { "cssp", { NULL }, 4078, "tcp" }, { "santools", { NULL }, 4079, "tcp" }, { "santools", { NULL }, 4079, "udp" }, { "lorica-in", { NULL }, 4080, "tcp" }, { "lorica-in", { NULL }, 4080, "udp" }, { "lorica-in-sec", { NULL }, 4081, "tcp" }, { "lorica-in-sec", { NULL }, 4081, "udp" }, { "lorica-out", { NULL }, 4082, "tcp" }, { "lorica-out", { NULL }, 4082, "udp" }, { "lorica-out-sec", { NULL }, 4083, "tcp" }, { "lorica-out-sec", { NULL }, 4083, "udp" }, { "fortisphere-vm", { NULL }, 4084, "udp" }, { "ezmessagesrv", { NULL }, 4085, "tcp" }, { "ftsync", { NULL }, 4086, "udp" }, { "applusservice", { NULL }, 4087, "tcp" }, { "npsp", { NULL }, 4088, "tcp" }, { "opencore", { NULL }, 4089, "tcp" }, { "opencore", { NULL }, 4089, "udp" }, { "omasgport", { NULL }, 4090, "tcp" }, { "omasgport", { NULL }, 4090, "udp" }, { "ewinstaller", { NULL }, 4091, "tcp" }, { "ewinstaller", { NULL }, 4091, "udp" }, { "ewdgs", { NULL }, 4092, "tcp" }, { "ewdgs", { NULL }, 4092, "udp" }, { "pvxpluscs", { NULL }, 4093, "tcp" }, { "pvxpluscs", { NULL }, 4093, "udp" }, { "sysrqd", { NULL }, 4094, "tcp" }, { "sysrqd", { NULL }, 4094, "udp" }, { "xtgui", { NULL }, 4095, "tcp" }, { "xtgui", { NULL }, 4095, "udp" }, { "bre", { NULL }, 4096, "tcp" }, { "bre", { NULL }, 4096, "udp" }, { "patrolview", { NULL }, 4097, "tcp" }, { "patrolview", { NULL }, 4097, "udp" }, { "drmsfsd", { NULL }, 4098, "tcp" }, { "drmsfsd", { NULL }, 4098, "udp" }, { "dpcp", { NULL }, 4099, "tcp" }, { "dpcp", { NULL }, 4099, "udp" }, { "igo-incognito", { NULL }, 4100, "tcp" }, { "igo-incognito", { NULL }, 4100, "udp" }, { "brlp-0", { NULL }, 4101, "tcp" }, { "brlp-0", { NULL }, 4101, "udp" 
}, { "brlp-1", { NULL }, 4102, "tcp" }, { "brlp-1", { NULL }, 4102, "udp" }, { "brlp-2", { NULL }, 4103, "tcp" }, { "brlp-2", { NULL }, 4103, "udp" }, { "brlp-3", { NULL }, 4104, "tcp" }, { "brlp-3", { NULL }, 4104, "udp" }, { "shofarplayer", { NULL }, 4105, "tcp" }, { "shofarplayer", { NULL }, 4105, "udp" }, { "synchronite", { NULL }, 4106, "tcp" }, { "synchronite", { NULL }, 4106, "udp" }, { "j-ac", { NULL }, 4107, "tcp" }, { "j-ac", { NULL }, 4107, "udp" }, { "accel", { NULL }, 4108, "tcp" }, { "accel", { NULL }, 4108, "udp" }, { "izm", { NULL }, 4109, "tcp" }, { "izm", { NULL }, 4109, "udp" }, { "g2tag", { NULL }, 4110, "tcp" }, { "g2tag", { NULL }, 4110, "udp" }, { "xgrid", { NULL }, 4111, "tcp" }, { "xgrid", { NULL }, 4111, "udp" }, { "apple-vpns-rp", { NULL }, 4112, "tcp" }, { "apple-vpns-rp", { NULL }, 4112, "udp" }, { "aipn-reg", { NULL }, 4113, "tcp" }, { "aipn-reg", { NULL }, 4113, "udp" }, { "jomamqmonitor", { NULL }, 4114, "tcp" }, { "jomamqmonitor", { NULL }, 4114, "udp" }, { "cds", { NULL }, 4115, "tcp" }, { "cds", { NULL }, 4115, "udp" }, { "smartcard-tls", { NULL }, 4116, "tcp" }, { "smartcard-tls", { NULL }, 4116, "udp" }, { "hillrserv", { NULL }, 4117, "tcp" }, { "hillrserv", { NULL }, 4117, "udp" }, { "netscript", { NULL }, 4118, "tcp" }, { "netscript", { NULL }, 4118, "udp" }, { "assuria-slm", { NULL }, 4119, "tcp" }, { "assuria-slm", { NULL }, 4119, "udp" }, { "e-builder", { NULL }, 4121, "tcp" }, { "e-builder", { NULL }, 4121, "udp" }, { "fprams", { NULL }, 4122, "tcp" }, { "fprams", { NULL }, 4122, "udp" }, { "z-wave", { NULL }, 4123, "tcp" }, { "z-wave", { NULL }, 4123, "udp" }, { "tigv2", { NULL }, 4124, "tcp" }, { "tigv2", { NULL }, 4124, "udp" }, { "opsview-envoy", { NULL }, 4125, "tcp" }, { "opsview-envoy", { NULL }, 4125, "udp" }, { "ddrepl", { NULL }, 4126, "tcp" }, { "ddrepl", { NULL }, 4126, "udp" }, { "unikeypro", { NULL }, 4127, "tcp" }, { "unikeypro", { NULL }, 4127, "udp" }, { "nufw", { NULL }, 4128, "tcp" }, { "nufw", { NULL }, 4128, "udp" }, { "nuauth", { NULL }, 4129, "tcp" }, { "nuauth", { NULL }, 4129, "udp" }, { "fronet", { NULL }, 4130, "tcp" }, { "fronet", { NULL }, 4130, "udp" }, { "stars", { NULL }, 4131, "tcp" }, { "stars", { NULL }, 4131, "udp" }, { "nuts_dem", { NULL }, 4132, "tcp" }, { "nuts_dem", { NULL }, 4132, "udp" }, { "nuts_bootp", { NULL }, 4133, "tcp" }, { "nuts_bootp", { NULL }, 4133, "udp" }, { "nifty-hmi", { NULL }, 4134, "tcp" }, { "nifty-hmi", { NULL }, 4134, "udp" }, { "cl-db-attach", { NULL }, 4135, "tcp" }, { "cl-db-attach", { NULL }, 4135, "udp" }, { "cl-db-request", { NULL }, 4136, "tcp" }, { "cl-db-request", { NULL }, 4136, "udp" }, { "cl-db-remote", { NULL }, 4137, "tcp" }, { "cl-db-remote", { NULL }, 4137, "udp" }, { "nettest", { NULL }, 4138, "tcp" }, { "nettest", { NULL }, 4138, "udp" }, { "thrtx", { NULL }, 4139, "tcp" }, { "thrtx", { NULL }, 4139, "udp" }, { "cedros_fds", { NULL }, 4140, "tcp" }, { "cedros_fds", { NULL }, 4140, "udp" }, { "oirtgsvc", { NULL }, 4141, "tcp" }, { "oirtgsvc", { NULL }, 4141, "udp" }, { "oidocsvc", { NULL }, 4142, "tcp" }, { "oidocsvc", { NULL }, 4142, "udp" }, { "oidsr", { NULL }, 4143, "tcp" }, { "oidsr", { NULL }, 4143, "udp" }, { "vvr-control", { NULL }, 4145, "tcp" }, { "vvr-control", { NULL }, 4145, "udp" }, { "tgcconnect", { NULL }, 4146, "tcp" }, { "tgcconnect", { NULL }, 4146, "udp" }, { "vrxpservman", { NULL }, 4147, "tcp" }, { "vrxpservman", { NULL }, 4147, "udp" }, { "hhb-handheld", { NULL }, 4148, "tcp" }, { "hhb-handheld", { NULL }, 4148, "udp" }, { "agslb", { NULL }, 
4149, "tcp" }, { "agslb", { NULL }, 4149, "udp" }, { "PowerAlert-nsa", { NULL }, 4150, "tcp" }, { "PowerAlert-nsa", { NULL }, 4150, "udp" }, { "menandmice_noh", { NULL }, 4151, "tcp" }, { "menandmice_noh", { NULL }, 4151, "udp" }, { "idig_mux", { NULL }, 4152, "tcp" }, { "idig_mux", { NULL }, 4152, "udp" }, { "mbl-battd", { NULL }, 4153, "tcp" }, { "mbl-battd", { NULL }, 4153, "udp" }, { "atlinks", { NULL }, 4154, "tcp" }, { "atlinks", { NULL }, 4154, "udp" }, { "bzr", { NULL }, 4155, "tcp" }, { "bzr", { NULL }, 4155, "udp" }, { "stat-results", { NULL }, 4156, "tcp" }, { "stat-results", { NULL }, 4156, "udp" }, { "stat-scanner", { NULL }, 4157, "tcp" }, { "stat-scanner", { NULL }, 4157, "udp" }, { "stat-cc", { NULL }, 4158, "tcp" }, { "stat-cc", { NULL }, 4158, "udp" }, { "nss", { NULL }, 4159, "tcp" }, { "nss", { NULL }, 4159, "udp" }, { "jini-discovery", { NULL }, 4160, "tcp" }, { "jini-discovery", { NULL }, 4160, "udp" }, { "omscontact", { NULL }, 4161, "tcp" }, { "omscontact", { NULL }, 4161, "udp" }, { "omstopology", { NULL }, 4162, "tcp" }, { "omstopology", { NULL }, 4162, "udp" }, { "silverpeakpeer", { NULL }, 4163, "tcp" }, { "silverpeakpeer", { NULL }, 4163, "udp" }, { "silverpeakcomm", { NULL }, 4164, "tcp" }, { "silverpeakcomm", { NULL }, 4164, "udp" }, { "altcp", { NULL }, 4165, "tcp" }, { "altcp", { NULL }, 4165, "udp" }, { "joost", { NULL }, 4166, "tcp" }, { "joost", { NULL }, 4166, "udp" }, { "ddgn", { NULL }, 4167, "tcp" }, { "ddgn", { NULL }, 4167, "udp" }, { "pslicser", { NULL }, 4168, "tcp" }, { "pslicser", { NULL }, 4168, "udp" }, { "iadt", { NULL }, 4169, "tcp" }, { "iadt-disc", { NULL }, 4169, "udp" }, { "d-cinema-csp", { NULL }, 4170, "tcp" }, { "ml-svnet", { NULL }, 4171, "tcp" }, { "pcoip", { NULL }, 4172, "tcp" }, { "pcoip", { NULL }, 4172, "udp" }, { "smcluster", { NULL }, 4174, "tcp" }, { "bccp", { NULL }, 4175, "tcp" }, { "tl-ipcproxy", { NULL }, 4176, "tcp" }, { "wello", { NULL }, 4177, "tcp" }, { "wello", { NULL }, 4177, "udp" }, { "storman", { NULL }, 4178, "tcp" }, { "storman", { NULL }, 4178, "udp" }, { "MaxumSP", { NULL }, 4179, "tcp" }, { "MaxumSP", { NULL }, 4179, "udp" }, { "httpx", { NULL }, 4180, "tcp" }, { "httpx", { NULL }, 4180, "udp" }, { "macbak", { NULL }, 4181, "tcp" }, { "macbak", { NULL }, 4181, "udp" }, { "pcptcpservice", { NULL }, 4182, "tcp" }, { "pcptcpservice", { NULL }, 4182, "udp" }, { "gmmp", { NULL }, 4183, "tcp" }, { "gmmp", { NULL }, 4183, "udp" }, { "universe_suite", { NULL }, 4184, "tcp" }, { "universe_suite", { NULL }, 4184, "udp" }, { "wcpp", { NULL }, 4185, "tcp" }, { "wcpp", { NULL }, 4185, "udp" }, { "boxbackupstore", { NULL }, 4186, "tcp" }, { "csc_proxy", { NULL }, 4187, "tcp" }, { "vatata", { NULL }, 4188, "tcp" }, { "vatata", { NULL }, 4188, "udp" }, { "pcep", { NULL }, 4189, "tcp" }, { "sieve", { NULL }, 4190, "tcp" }, { "dsmipv6", { NULL }, 4191, "udp" }, { "azeti", { NULL }, 4192, "tcp" }, { "azeti-bd", { NULL }, 4192, "udp" }, { "pvxplusio", { NULL }, 4193, "tcp" }, { "eims-admin", { NULL }, 4199, "tcp" }, { "eims-admin", { NULL }, 4199, "udp" }, { "corelccam", { NULL }, 4300, "tcp" }, { "corelccam", { NULL }, 4300, "udp" }, { "d-data", { NULL }, 4301, "tcp" }, { "d-data", { NULL }, 4301, "udp" }, { "d-data-control", { NULL }, 4302, "tcp" }, { "d-data-control", { NULL }, 4302, "udp" }, { "srcp", { NULL }, 4303, "tcp" }, { "srcp", { NULL }, 4303, "udp" }, { "owserver", { NULL }, 4304, "tcp" }, { "owserver", { NULL }, 4304, "udp" }, { "batman", { NULL }, 4305, "tcp" }, { "batman", { NULL }, 4305, "udp" }, { "pinghgl", 
{ NULL }, 4306, "tcp" }, { "pinghgl", { NULL }, 4306, "udp" }, { "visicron-vs", { NULL }, 4307, "tcp" }, { "visicron-vs", { NULL }, 4307, "udp" }, { "compx-lockview", { NULL }, 4308, "tcp" }, { "compx-lockview", { NULL }, 4308, "udp" }, { "dserver", { NULL }, 4309, "tcp" }, { "dserver", { NULL }, 4309, "udp" }, { "mirrtex", { NULL }, 4310, "tcp" }, { "mirrtex", { NULL }, 4310, "udp" }, { "p6ssmc", { NULL }, 4311, "tcp" }, { "pscl-mgt", { NULL }, 4312, "tcp" }, { "perrla", { NULL }, 4313, "tcp" }, { "fdt-rcatp", { NULL }, 4320, "tcp" }, { "fdt-rcatp", { NULL }, 4320, "udp" }, { "rwhois", { NULL }, 4321, "tcp" }, { "rwhois", { NULL }, 4321, "udp" }, { "trim-event", { NULL }, 4322, "tcp" }, { "trim-event", { NULL }, 4322, "udp" }, { "trim-ice", { NULL }, 4323, "tcp" }, { "trim-ice", { NULL }, 4323, "udp" }, { "balour", { NULL }, 4324, "tcp" }, { "balour", { NULL }, 4324, "udp" }, { "geognosisman", { NULL }, 4325, "tcp" }, { "geognosisman", { NULL }, 4325, "udp" }, { "geognosis", { NULL }, 4326, "tcp" }, { "geognosis", { NULL }, 4326, "udp" }, { "jaxer-web", { NULL }, 4327, "tcp" }, { "jaxer-web", { NULL }, 4327, "udp" }, { "jaxer-manager", { NULL }, 4328, "tcp" }, { "jaxer-manager", { NULL }, 4328, "udp" }, { "publiqare-sync", { NULL }, 4329, "tcp" }, { "gaia", { NULL }, 4340, "tcp" }, { "gaia", { NULL }, 4340, "udp" }, { "lisp-data", { NULL }, 4341, "tcp" }, { "lisp-data", { NULL }, 4341, "udp" }, { "lisp-cons", { NULL }, 4342, "tcp" }, { "lisp-control", { NULL }, 4342, "udp" }, { "unicall", { NULL }, 4343, "tcp" }, { "unicall", { NULL }, 4343, "udp" }, { "vinainstall", { NULL }, 4344, "tcp" }, { "vinainstall", { NULL }, 4344, "udp" }, { "m4-network-as", { NULL }, 4345, "tcp" }, { "m4-network-as", { NULL }, 4345, "udp" }, { "elanlm", { NULL }, 4346, "tcp" }, { "elanlm", { NULL }, 4346, "udp" }, { "lansurveyor", { NULL }, 4347, "tcp" }, { "lansurveyor", { NULL }, 4347, "udp" }, { "itose", { NULL }, 4348, "tcp" }, { "itose", { NULL }, 4348, "udp" }, { "fsportmap", { NULL }, 4349, "tcp" }, { "fsportmap", { NULL }, 4349, "udp" }, { "net-device", { NULL }, 4350, "tcp" }, { "net-device", { NULL }, 4350, "udp" }, { "plcy-net-svcs", { NULL }, 4351, "tcp" }, { "plcy-net-svcs", { NULL }, 4351, "udp" }, { "pjlink", { NULL }, 4352, "tcp" }, { "pjlink", { NULL }, 4352, "udp" }, { "f5-iquery", { NULL }, 4353, "tcp" }, { "f5-iquery", { NULL }, 4353, "udp" }, { "qsnet-trans", { NULL }, 4354, "tcp" }, { "qsnet-trans", { NULL }, 4354, "udp" }, { "qsnet-workst", { NULL }, 4355, "tcp" }, { "qsnet-workst", { NULL }, 4355, "udp" }, { "qsnet-assist", { NULL }, 4356, "tcp" }, { "qsnet-assist", { NULL }, 4356, "udp" }, { "qsnet-cond", { NULL }, 4357, "tcp" }, { "qsnet-cond", { NULL }, 4357, "udp" }, { "qsnet-nucl", { NULL }, 4358, "tcp" }, { "qsnet-nucl", { NULL }, 4358, "udp" }, { "omabcastltkm", { NULL }, 4359, "tcp" }, { "omabcastltkm", { NULL }, 4359, "udp" }, { "matrix_vnet", { NULL }, 4360, "tcp" }, { "nacnl", { NULL }, 4361, "udp" }, { "afore-vdp-disc", { NULL }, 4362, "udp" }, { "wxbrief", { NULL }, 4368, "tcp" }, { "wxbrief", { NULL }, 4368, "udp" }, { "epmd", { NULL }, 4369, "tcp" }, { "epmd", { NULL }, 4369, "udp" }, { "elpro_tunnel", { NULL }, 4370, "tcp" }, { "elpro_tunnel", { NULL }, 4370, "udp" }, { "l2c-control", { NULL }, 4371, "tcp" }, { "l2c-disc", { NULL }, 4371, "udp" }, { "l2c-data", { NULL }, 4372, "tcp" }, { "l2c-data", { NULL }, 4372, "udp" }, { "remctl", { NULL }, 4373, "tcp" }, { "remctl", { NULL }, 4373, "udp" }, { "psi-ptt", { NULL }, 4374, "tcp" }, { "tolteces", { NULL }, 4375, "tcp" }, 
{ "tolteces", { NULL }, 4375, "udp" }, { "bip", { NULL }, 4376, "tcp" }, { "bip", { NULL }, 4376, "udp" }, { "cp-spxsvr", { NULL }, 4377, "tcp" }, { "cp-spxsvr", { NULL }, 4377, "udp" }, { "cp-spxdpy", { NULL }, 4378, "tcp" }, { "cp-spxdpy", { NULL }, 4378, "udp" }, { "ctdb", { NULL }, 4379, "tcp" }, { "ctdb", { NULL }, 4379, "udp" }, { "xandros-cms", { NULL }, 4389, "tcp" }, { "xandros-cms", { NULL }, 4389, "udp" }, { "wiegand", { NULL }, 4390, "tcp" }, { "wiegand", { NULL }, 4390, "udp" }, { "apwi-imserver", { NULL }, 4391, "tcp" }, { "apwi-rxserver", { NULL }, 4392, "tcp" }, { "apwi-rxspooler", { NULL }, 4393, "tcp" }, { "apwi-disc", { NULL }, 4394, "udp" }, { "omnivisionesx", { NULL }, 4395, "tcp" }, { "omnivisionesx", { NULL }, 4395, "udp" }, { "fly", { NULL }, 4396, "tcp" }, { "ds-srv", { NULL }, 4400, "tcp" }, { "ds-srv", { NULL }, 4400, "udp" }, { "ds-srvr", { NULL }, 4401, "tcp" }, { "ds-srvr", { NULL }, 4401, "udp" }, { "ds-clnt", { NULL }, 4402, "tcp" }, { "ds-clnt", { NULL }, 4402, "udp" }, { "ds-user", { NULL }, 4403, "tcp" }, { "ds-user", { NULL }, 4403, "udp" }, { "ds-admin", { NULL }, 4404, "tcp" }, { "ds-admin", { NULL }, 4404, "udp" }, { "ds-mail", { NULL }, 4405, "tcp" }, { "ds-mail", { NULL }, 4405, "udp" }, { "ds-slp", { NULL }, 4406, "tcp" }, { "ds-slp", { NULL }, 4406, "udp" }, { "nacagent", { NULL }, 4407, "tcp" }, { "slscc", { NULL }, 4408, "tcp" }, { "netcabinet-com", { NULL }, 4409, "tcp" }, { "itwo-server", { NULL }, 4410, "tcp" }, { "netrockey6", { NULL }, 4425, "tcp" }, { "netrockey6", { NULL }, 4425, "udp" }, { "beacon-port-2", { NULL }, 4426, "tcp" }, { "beacon-port-2", { NULL }, 4426, "udp" }, { "drizzle", { NULL }, 4427, "tcp" }, { "omviserver", { NULL }, 4428, "tcp" }, { "omviagent", { NULL }, 4429, "tcp" }, { "rsqlserver", { NULL }, 4430, "tcp" }, { "rsqlserver", { NULL }, 4430, "udp" }, { "wspipe", { NULL }, 4431, "tcp" }, { "netblox", { NULL }, 4441, "udp" }, { "saris", { NULL }, 4442, "tcp" }, { "saris", { NULL }, 4442, "udp" }, { "pharos", { NULL }, 4443, "tcp" }, { "pharos", { NULL }, 4443, "udp" }, { "krb524", { NULL }, 4444, "tcp" }, { "krb524", { NULL }, 4444, "udp" }, { "nv-video", { NULL }, 4444, "tcp" }, { "nv-video", { NULL }, 4444, "udp" }, { "upnotifyp", { NULL }, 4445, "tcp" }, { "upnotifyp", { NULL }, 4445, "udp" }, { "n1-fwp", { NULL }, 4446, "tcp" }, { "n1-fwp", { NULL }, 4446, "udp" }, { "n1-rmgmt", { NULL }, 4447, "tcp" }, { "n1-rmgmt", { NULL }, 4447, "udp" }, { "asc-slmd", { NULL }, 4448, "tcp" }, { "asc-slmd", { NULL }, 4448, "udp" }, { "privatewire", { NULL }, 4449, "tcp" }, { "privatewire", { NULL }, 4449, "udp" }, { "camp", { NULL }, 4450, "tcp" }, { "camp", { NULL }, 4450, "udp" }, { "ctisystemmsg", { NULL }, 4451, "tcp" }, { "ctisystemmsg", { NULL }, 4451, "udp" }, { "ctiprogramload", { NULL }, 4452, "tcp" }, { "ctiprogramload", { NULL }, 4452, "udp" }, { "nssalertmgr", { NULL }, 4453, "tcp" }, { "nssalertmgr", { NULL }, 4453, "udp" }, { "nssagentmgr", { NULL }, 4454, "tcp" }, { "nssagentmgr", { NULL }, 4454, "udp" }, { "prchat-user", { NULL }, 4455, "tcp" }, { "prchat-user", { NULL }, 4455, "udp" }, { "prchat-server", { NULL }, 4456, "tcp" }, { "prchat-server", { NULL }, 4456, "udp" }, { "prRegister", { NULL }, 4457, "tcp" }, { "prRegister", { NULL }, 4457, "udp" }, { "mcp", { NULL }, 4458, "tcp" }, { "mcp", { NULL }, 4458, "udp" }, { "hpssmgmt", { NULL }, 4484, "tcp" }, { "hpssmgmt", { NULL }, 4484, "udp" }, { "assyst-dr", { NULL }, 4485, "tcp" }, { "icms", { NULL }, 4486, "tcp" }, { "icms", { NULL }, 4486, "udp" }, { 
"prex-tcp", { NULL }, 4487, "tcp" }, { "awacs-ice", { NULL }, 4488, "tcp" }, { "awacs-ice", { NULL }, 4488, "udp" }, { "ipsec-nat-t", { NULL }, 4500, "tcp" }, { "ipsec-nat-t", { NULL }, 4500, "udp" }, { "ehs", { NULL }, 4535, "tcp" }, { "ehs", { NULL }, 4535, "udp" }, { "ehs-ssl", { NULL }, 4536, "tcp" }, { "ehs-ssl", { NULL }, 4536, "udp" }, { "wssauthsvc", { NULL }, 4537, "tcp" }, { "wssauthsvc", { NULL }, 4537, "udp" }, { "swx-gate", { NULL }, 4538, "tcp" }, { "swx-gate", { NULL }, 4538, "udp" }, { "worldscores", { NULL }, 4545, "tcp" }, { "worldscores", { NULL }, 4545, "udp" }, { "sf-lm", { NULL }, 4546, "tcp" }, { "sf-lm", { NULL }, 4546, "udp" }, { "lanner-lm", { NULL }, 4547, "tcp" }, { "lanner-lm", { NULL }, 4547, "udp" }, { "synchromesh", { NULL }, 4548, "tcp" }, { "synchromesh", { NULL }, 4548, "udp" }, { "aegate", { NULL }, 4549, "tcp" }, { "aegate", { NULL }, 4549, "udp" }, { "gds-adppiw-db", { NULL }, 4550, "tcp" }, { "gds-adppiw-db", { NULL }, 4550, "udp" }, { "ieee-mih", { NULL }, 4551, "tcp" }, { "ieee-mih", { NULL }, 4551, "udp" }, { "menandmice-mon", { NULL }, 4552, "tcp" }, { "menandmice-mon", { NULL }, 4552, "udp" }, { "icshostsvc", { NULL }, 4553, "tcp" }, { "msfrs", { NULL }, 4554, "tcp" }, { "msfrs", { NULL }, 4554, "udp" }, { "rsip", { NULL }, 4555, "tcp" }, { "rsip", { NULL }, 4555, "udp" }, { "dtn-bundle-tcp", { NULL }, 4556, "tcp" }, { "dtn-bundle-udp", { NULL }, 4556, "udp" }, { "mtcevrunqss", { NULL }, 4557, "udp" }, { "mtcevrunqman", { NULL }, 4558, "udp" }, { "hylafax", { NULL }, 4559, "tcp" }, { "hylafax", { NULL }, 4559, "udp" }, { "kwtc", { NULL }, 4566, "tcp" }, { "kwtc", { NULL }, 4566, "udp" }, { "tram", { NULL }, 4567, "tcp" }, { "tram", { NULL }, 4567, "udp" }, { "bmc-reporting", { NULL }, 4568, "tcp" }, { "bmc-reporting", { NULL }, 4568, "udp" }, { "iax", { NULL }, 4569, "tcp" }, { "iax", { NULL }, 4569, "udp" }, { "rid", { NULL }, 4590, "tcp" }, { "l3t-at-an", { NULL }, 4591, "tcp" }, { "l3t-at-an", { NULL }, 4591, "udp" }, { "hrpd-ith-at-an", { NULL }, 4592, "udp" }, { "ipt-anri-anri", { NULL }, 4593, "tcp" }, { "ipt-anri-anri", { NULL }, 4593, "udp" }, { "ias-session", { NULL }, 4594, "tcp" }, { "ias-session", { NULL }, 4594, "udp" }, { "ias-paging", { NULL }, 4595, "tcp" }, { "ias-paging", { NULL }, 4595, "udp" }, { "ias-neighbor", { NULL }, 4596, "tcp" }, { "ias-neighbor", { NULL }, 4596, "udp" }, { "a21-an-1xbs", { NULL }, 4597, "tcp" }, { "a21-an-1xbs", { NULL }, 4597, "udp" }, { "a16-an-an", { NULL }, 4598, "tcp" }, { "a16-an-an", { NULL }, 4598, "udp" }, { "a17-an-an", { NULL }, 4599, "tcp" }, { "a17-an-an", { NULL }, 4599, "udp" }, { "piranha1", { NULL }, 4600, "tcp" }, { "piranha1", { NULL }, 4600, "udp" }, { "piranha2", { NULL }, 4601, "tcp" }, { "piranha2", { NULL }, 4601, "udp" }, { "mtsserver", { NULL }, 4602, "tcp" }, { "menandmice-upg", { NULL }, 4603, "tcp" }, { "playsta2-app", { NULL }, 4658, "tcp" }, { "playsta2-app", { NULL }, 4658, "udp" }, { "playsta2-lob", { NULL }, 4659, "tcp" }, { "playsta2-lob", { NULL }, 4659, "udp" }, { "smaclmgr", { NULL }, 4660, "tcp" }, { "smaclmgr", { NULL }, 4660, "udp" }, { "kar2ouche", { NULL }, 4661, "tcp" }, { "kar2ouche", { NULL }, 4661, "udp" }, { "oms", { NULL }, 4662, "tcp" }, { "oms", { NULL }, 4662, "udp" }, { "noteit", { NULL }, 4663, "tcp" }, { "noteit", { NULL }, 4663, "udp" }, { "ems", { NULL }, 4664, "tcp" }, { "ems", { NULL }, 4664, "udp" }, { "contclientms", { NULL }, 4665, "tcp" }, { "contclientms", { NULL }, 4665, "udp" }, { "eportcomm", { NULL }, 4666, "tcp" }, { "eportcomm", { 
NULL }, 4666, "udp" }, { "mmacomm", { NULL }, 4667, "tcp" }, { "mmacomm", { NULL }, 4667, "udp" }, { "mmaeds", { NULL }, 4668, "tcp" }, { "mmaeds", { NULL }, 4668, "udp" }, { "eportcommdata", { NULL }, 4669, "tcp" }, { "eportcommdata", { NULL }, 4669, "udp" }, { "light", { NULL }, 4670, "tcp" }, { "light", { NULL }, 4670, "udp" }, { "acter", { NULL }, 4671, "tcp" }, { "acter", { NULL }, 4671, "udp" }, { "rfa", { NULL }, 4672, "tcp" }, { "rfa", { NULL }, 4672, "udp" }, { "cxws", { NULL }, 4673, "tcp" }, { "cxws", { NULL }, 4673, "udp" }, { "appiq-mgmt", { NULL }, 4674, "tcp" }, { "appiq-mgmt", { NULL }, 4674, "udp" }, { "dhct-status", { NULL }, 4675, "tcp" }, { "dhct-status", { NULL }, 4675, "udp" }, { "dhct-alerts", { NULL }, 4676, "tcp" }, { "dhct-alerts", { NULL }, 4676, "udp" }, { "bcs", { NULL }, 4677, "tcp" }, { "bcs", { NULL }, 4677, "udp" }, { "traversal", { NULL }, 4678, "tcp" }, { "traversal", { NULL }, 4678, "udp" }, { "mgesupervision", { NULL }, 4679, "tcp" }, { "mgesupervision", { NULL }, 4679, "udp" }, { "mgemanagement", { NULL }, 4680, "tcp" }, { "mgemanagement", { NULL }, 4680, "udp" }, { "parliant", { NULL }, 4681, "tcp" }, { "parliant", { NULL }, 4681, "udp" }, { "finisar", { NULL }, 4682, "tcp" }, { "finisar", { NULL }, 4682, "udp" }, { "spike", { NULL }, 4683, "tcp" }, { "spike", { NULL }, 4683, "udp" }, { "rfid-rp1", { NULL }, 4684, "tcp" }, { "rfid-rp1", { NULL }, 4684, "udp" }, { "autopac", { NULL }, 4685, "tcp" }, { "autopac", { NULL }, 4685, "udp" }, { "msp-os", { NULL }, 4686, "tcp" }, { "msp-os", { NULL }, 4686, "udp" }, { "nst", { NULL }, 4687, "tcp" }, { "nst", { NULL }, 4687, "udp" }, { "mobile-p2p", { NULL }, 4688, "tcp" }, { "mobile-p2p", { NULL }, 4688, "udp" }, { "altovacentral", { NULL }, 4689, "tcp" }, { "altovacentral", { NULL }, 4689, "udp" }, { "prelude", { NULL }, 4690, "tcp" }, { "prelude", { NULL }, 4690, "udp" }, { "mtn", { NULL }, 4691, "tcp" }, { "mtn", { NULL }, 4691, "udp" }, { "conspiracy", { NULL }, 4692, "tcp" }, { "conspiracy", { NULL }, 4692, "udp" }, { "netxms-agent", { NULL }, 4700, "tcp" }, { "netxms-agent", { NULL }, 4700, "udp" }, { "netxms-mgmt", { NULL }, 4701, "tcp" }, { "netxms-mgmt", { NULL }, 4701, "udp" }, { "netxms-sync", { NULL }, 4702, "tcp" }, { "netxms-sync", { NULL }, 4702, "udp" }, { "npqes-test", { NULL }, 4703, "tcp" }, { "assuria-ins", { NULL }, 4704, "tcp" }, { "truckstar", { NULL }, 4725, "tcp" }, { "truckstar", { NULL }, 4725, "udp" }, { "a26-fap-fgw", { NULL }, 4726, "udp" }, { "fcis", { NULL }, 4727, "tcp" }, { "fcis-disc", { NULL }, 4727, "udp" }, { "capmux", { NULL }, 4728, "tcp" }, { "capmux", { NULL }, 4728, "udp" }, { "gsmtap", { NULL }, 4729, "udp" }, { "gearman", { NULL }, 4730, "tcp" }, { "gearman", { NULL }, 4730, "udp" }, { "remcap", { NULL }, 4731, "tcp" }, { "ohmtrigger", { NULL }, 4732, "udp" }, { "resorcs", { NULL }, 4733, "tcp" }, { "ipdr-sp", { NULL }, 4737, "tcp" }, { "ipdr-sp", { NULL }, 4737, "udp" }, { "solera-lpn", { NULL }, 4738, "tcp" }, { "solera-lpn", { NULL }, 4738, "udp" }, { "ipfix", { NULL }, 4739, "tcp" }, { "ipfix", { NULL }, 4739, "udp" }, { "ipfix", { NULL }, 4739, "sctp" }, { "ipfixs", { NULL }, 4740, "tcp" }, { "ipfixs", { NULL }, 4740, "sctp" }, { "ipfixs", { NULL }, 4740, "udp" }, { "lumimgrd", { NULL }, 4741, "tcp" }, { "lumimgrd", { NULL }, 4741, "udp" }, { "sicct", { NULL }, 4742, "tcp" }, { "sicct-sdp", { NULL }, 4742, "udp" }, { "openhpid", { NULL }, 4743, "tcp" }, { "openhpid", { NULL }, 4743, "udp" }, { "ifsp", { NULL }, 4744, "tcp" }, { "ifsp", { NULL }, 4744, "udp" }, 
{ "fmp", { NULL }, 4745, "tcp" }, { "fmp", { NULL }, 4745, "udp" }, { "profilemac", { NULL }, 4749, "tcp" }, { "profilemac", { NULL }, 4749, "udp" }, { "ssad", { NULL }, 4750, "tcp" }, { "ssad", { NULL }, 4750, "udp" }, { "spocp", { NULL }, 4751, "tcp" }, { "spocp", { NULL }, 4751, "udp" }, { "snap", { NULL }, 4752, "tcp" }, { "snap", { NULL }, 4752, "udp" }, { "bfd-multi-ctl", { NULL }, 4784, "tcp" }, { "bfd-multi-ctl", { NULL }, 4784, "udp" }, { "cncp", { NULL }, 4785, "udp" }, { "smart-install", { NULL }, 4786, "tcp" }, { "sia-ctrl-plane", { NULL }, 4787, "tcp" }, { "iims", { NULL }, 4800, "tcp" }, { "iims", { NULL }, 4800, "udp" }, { "iwec", { NULL }, 4801, "tcp" }, { "iwec", { NULL }, 4801, "udp" }, { "ilss", { NULL }, 4802, "tcp" }, { "ilss", { NULL }, 4802, "udp" }, { "notateit", { NULL }, 4803, "tcp" }, { "notateit-disc", { NULL }, 4803, "udp" }, { "aja-ntv4-disc", { NULL }, 4804, "udp" }, { "htcp", { NULL }, 4827, "tcp" }, { "htcp", { NULL }, 4827, "udp" }, { "varadero-0", { NULL }, 4837, "tcp" }, { "varadero-0", { NULL }, 4837, "udp" }, { "varadero-1", { NULL }, 4838, "tcp" }, { "varadero-1", { NULL }, 4838, "udp" }, { "varadero-2", { NULL }, 4839, "tcp" }, { "varadero-2", { NULL }, 4839, "udp" }, { "opcua-tcp", { NULL }, 4840, "tcp" }, { "opcua-udp", { NULL }, 4840, "udp" }, { "quosa", { NULL }, 4841, "tcp" }, { "quosa", { NULL }, 4841, "udp" }, { "gw-asv", { NULL }, 4842, "tcp" }, { "gw-asv", { NULL }, 4842, "udp" }, { "opcua-tls", { NULL }, 4843, "tcp" }, { "opcua-tls", { NULL }, 4843, "udp" }, { "gw-log", { NULL }, 4844, "tcp" }, { "gw-log", { NULL }, 4844, "udp" }, { "wcr-remlib", { NULL }, 4845, "tcp" }, { "wcr-remlib", { NULL }, 4845, "udp" }, { "contamac_icm", { NULL }, 4846, "tcp" }, { "contamac_icm", { NULL }, 4846, "udp" }, { "wfc", { NULL }, 4847, "tcp" }, { "wfc", { NULL }, 4847, "udp" }, { "appserv-http", { NULL }, 4848, "tcp" }, { "appserv-http", { NULL }, 4848, "udp" }, { "appserv-https", { NULL }, 4849, "tcp" }, { "appserv-https", { NULL }, 4849, "udp" }, { "sun-as-nodeagt", { NULL }, 4850, "tcp" }, { "sun-as-nodeagt", { NULL }, 4850, "udp" }, { "derby-repli", { NULL }, 4851, "tcp" }, { "derby-repli", { NULL }, 4851, "udp" }, { "unify-debug", { NULL }, 4867, "tcp" }, { "unify-debug", { NULL }, 4867, "udp" }, { "phrelay", { NULL }, 4868, "tcp" }, { "phrelay", { NULL }, 4868, "udp" }, { "phrelaydbg", { NULL }, 4869, "tcp" }, { "phrelaydbg", { NULL }, 4869, "udp" }, { "cc-tracking", { NULL }, 4870, "tcp" }, { "cc-tracking", { NULL }, 4870, "udp" }, { "wired", { NULL }, 4871, "tcp" }, { "wired", { NULL }, 4871, "udp" }, { "tritium-can", { NULL }, 4876, "tcp" }, { "tritium-can", { NULL }, 4876, "udp" }, { "lmcs", { NULL }, 4877, "tcp" }, { "lmcs", { NULL }, 4877, "udp" }, { "inst-discovery", { NULL }, 4878, "udp" }, { "wsdl-event", { NULL }, 4879, "tcp" }, { "hislip", { NULL }, 4880, "tcp" }, { "socp-t", { NULL }, 4881, "udp" }, { "socp-c", { NULL }, 4882, "udp" }, { "wmlserver", { NULL }, 4883, "tcp" }, { "hivestor", { NULL }, 4884, "tcp" }, { "hivestor", { NULL }, 4884, "udp" }, { "abbs", { NULL }, 4885, "tcp" }, { "abbs", { NULL }, 4885, "udp" }, { "lyskom", { NULL }, 4894, "tcp" }, { "lyskom", { NULL }, 4894, "udp" }, { "radmin-port", { NULL }, 4899, "tcp" }, { "radmin-port", { NULL }, 4899, "udp" }, { "hfcs", { NULL }, 4900, "tcp" }, { "hfcs", { NULL }, 4900, "udp" }, { "flr_agent", { NULL }, 4901, "tcp" }, { "magiccontrol", { NULL }, 4902, "tcp" }, { "lutap", { NULL }, 4912, "tcp" }, { "lutcp", { NULL }, 4913, "tcp" }, { "bones", { NULL }, 4914, "tcp" }, { 
"bones", { NULL }, 4914, "udp" }, { "frcs", { NULL }, 4915, "tcp" }, { "atsc-mh-ssc", { NULL }, 4937, "udp" }, { "eq-office-4940", { NULL }, 4940, "tcp" }, { "eq-office-4940", { NULL }, 4940, "udp" }, { "eq-office-4941", { NULL }, 4941, "tcp" }, { "eq-office-4941", { NULL }, 4941, "udp" }, { "eq-office-4942", { NULL }, 4942, "tcp" }, { "eq-office-4942", { NULL }, 4942, "udp" }, { "munin", { NULL }, 4949, "tcp" }, { "munin", { NULL }, 4949, "udp" }, { "sybasesrvmon", { NULL }, 4950, "tcp" }, { "sybasesrvmon", { NULL }, 4950, "udp" }, { "pwgwims", { NULL }, 4951, "tcp" }, { "pwgwims", { NULL }, 4951, "udp" }, { "sagxtsds", { NULL }, 4952, "tcp" }, { "sagxtsds", { NULL }, 4952, "udp" }, { "dbsyncarbiter", { NULL }, 4953, "tcp" }, { "ccss-qmm", { NULL }, 4969, "tcp" }, { "ccss-qmm", { NULL }, 4969, "udp" }, { "ccss-qsm", { NULL }, 4970, "tcp" }, { "ccss-qsm", { NULL }, 4970, "udp" }, { "webyast", { NULL }, 4984, "tcp" }, { "gerhcs", { NULL }, 4985, "tcp" }, { "mrip", { NULL }, 4986, "tcp" }, { "mrip", { NULL }, 4986, "udp" }, { "smar-se-port1", { NULL }, 4987, "tcp" }, { "smar-se-port1", { NULL }, 4987, "udp" }, { "smar-se-port2", { NULL }, 4988, "tcp" }, { "smar-se-port2", { NULL }, 4988, "udp" }, { "parallel", { NULL }, 4989, "tcp" }, { "parallel", { NULL }, 4989, "udp" }, { "busycal", { NULL }, 4990, "tcp" }, { "busycal", { NULL }, 4990, "udp" }, { "vrt", { NULL }, 4991, "tcp" }, { "vrt", { NULL }, 4991, "udp" }, { "hfcs-manager", { NULL }, 4999, "tcp" }, { "hfcs-manager", { NULL }, 4999, "udp" }, { "commplex-main", { NULL }, 5000, "tcp" }, { "commplex-main", { NULL }, 5000, "udp" }, { "commplex-link", { NULL }, 5001, "tcp" }, { "commplex-link", { NULL }, 5001, "udp" }, { "rfe", { NULL }, 5002, "tcp" }, { "rfe", { NULL }, 5002, "udp" }, { "fmpro-internal", { NULL }, 5003, "tcp" }, { "fmpro-internal", { NULL }, 5003, "udp" }, { "avt-profile-1", { NULL }, 5004, "tcp" }, { "avt-profile-1", { NULL }, 5004, "udp" }, { "avt-profile-1", { NULL }, 5004, "dccp" }, { "avt-profile-2", { NULL }, 5005, "tcp" }, { "avt-profile-2", { NULL }, 5005, "udp" }, { "avt-profile-2", { NULL }, 5005, "dccp" }, { "wsm-server", { NULL }, 5006, "tcp" }, { "wsm-server", { NULL }, 5006, "udp" }, { "wsm-server-ssl", { NULL }, 5007, "tcp" }, { "wsm-server-ssl", { NULL }, 5007, "udp" }, { "synapsis-edge", { NULL }, 5008, "tcp" }, { "synapsis-edge", { NULL }, 5008, "udp" }, { "winfs", { NULL }, 5009, "tcp" }, { "winfs", { NULL }, 5009, "udp" }, { "telelpathstart", { NULL }, 5010, "tcp" }, { "telelpathstart", { NULL }, 5010, "udp" }, { "telelpathattack", { NULL }, 5011, "tcp" }, { "telelpathattack", { NULL }, 5011, "udp" }, { "nsp", { NULL }, 5012, "tcp" }, { "nsp", { NULL }, 5012, "udp" }, { "fmpro-v6", { NULL }, 5013, "tcp" }, { "fmpro-v6", { NULL }, 5013, "udp" }, { "onpsocket", { NULL }, 5014, "udp" }, { "fmwp", { NULL }, 5015, "tcp" }, { "zenginkyo-1", { NULL }, 5020, "tcp" }, { "zenginkyo-1", { NULL }, 5020, "udp" }, { "zenginkyo-2", { NULL }, 5021, "tcp" }, { "zenginkyo-2", { NULL }, 5021, "udp" }, { "mice", { NULL }, 5022, "tcp" }, { "mice", { NULL }, 5022, "udp" }, { "htuilsrv", { NULL }, 5023, "tcp" }, { "htuilsrv", { NULL }, 5023, "udp" }, { "scpi-telnet", { NULL }, 5024, "tcp" }, { "scpi-telnet", { NULL }, 5024, "udp" }, { "scpi-raw", { NULL }, 5025, "tcp" }, { "scpi-raw", { NULL }, 5025, "udp" }, { "strexec-d", { NULL }, 5026, "tcp" }, { "strexec-d", { NULL }, 5026, "udp" }, { "strexec-s", { NULL }, 5027, "tcp" }, { "strexec-s", { NULL }, 5027, "udp" }, { "qvr", { NULL }, 5028, "tcp" }, { "infobright", { NULL }, 
5029, "tcp" }, { "infobright", { NULL }, 5029, "udp" }, { "surfpass", { NULL }, 5030, "tcp" }, { "surfpass", { NULL }, 5030, "udp" }, { "dmp", { NULL }, 5031, "udp" }, { "asnaacceler8db", { NULL }, 5042, "tcp" }, { "asnaacceler8db", { NULL }, 5042, "udp" }, { "swxadmin", { NULL }, 5043, "tcp" }, { "swxadmin", { NULL }, 5043, "udp" }, { "lxi-evntsvc", { NULL }, 5044, "tcp" }, { "lxi-evntsvc", { NULL }, 5044, "udp" }, { "osp", { NULL }, 5045, "tcp" }, { "vpm-udp", { NULL }, 5046, "udp" }, { "iscape", { NULL }, 5047, "udp" }, { "texai", { NULL }, 5048, "tcp" }, { "ivocalize", { NULL }, 5049, "tcp" }, { "ivocalize", { NULL }, 5049, "udp" }, { "mmcc", { NULL }, 5050, "tcp" }, { "mmcc", { NULL }, 5050, "udp" }, { "ita-agent", { NULL }, 5051, "tcp" }, { "ita-agent", { NULL }, 5051, "udp" }, { "ita-manager", { NULL }, 5052, "tcp" }, { "ita-manager", { NULL }, 5052, "udp" }, { "rlm", { NULL }, 5053, "tcp" }, { "rlm-admin", { NULL }, 5054, "tcp" }, { "unot", { NULL }, 5055, "tcp" }, { "unot", { NULL }, 5055, "udp" }, { "intecom-ps1", { NULL }, 5056, "tcp" }, { "intecom-ps1", { NULL }, 5056, "udp" }, { "intecom-ps2", { NULL }, 5057, "tcp" }, { "intecom-ps2", { NULL }, 5057, "udp" }, { "locus-disc", { NULL }, 5058, "udp" }, { "sds", { NULL }, 5059, "tcp" }, { "sds", { NULL }, 5059, "udp" }, { "sip", { NULL }, 5060, "tcp" }, { "sip", { NULL }, 5060, "udp" }, { "sip-tls", { NULL }, 5061, "tcp" }, { "sip-tls", { NULL }, 5061, "udp" }, { "na-localise", { NULL }, 5062, "tcp" }, { "na-localise", { NULL }, 5062, "udp" }, { "csrpc", { NULL }, 5063, "tcp" }, { "ca-1", { NULL }, 5064, "tcp" }, { "ca-1", { NULL }, 5064, "udp" }, { "ca-2", { NULL }, 5065, "tcp" }, { "ca-2", { NULL }, 5065, "udp" }, { "stanag-5066", { NULL }, 5066, "tcp" }, { "stanag-5066", { NULL }, 5066, "udp" }, { "authentx", { NULL }, 5067, "tcp" }, { "authentx", { NULL }, 5067, "udp" }, { "bitforestsrv", { NULL }, 5068, "tcp" }, { "i-net-2000-npr", { NULL }, 5069, "tcp" }, { "i-net-2000-npr", { NULL }, 5069, "udp" }, { "vtsas", { NULL }, 5070, "tcp" }, { "vtsas", { NULL }, 5070, "udp" }, { "powerschool", { NULL }, 5071, "tcp" }, { "powerschool", { NULL }, 5071, "udp" }, { "ayiya", { NULL }, 5072, "tcp" }, { "ayiya", { NULL }, 5072, "udp" }, { "tag-pm", { NULL }, 5073, "tcp" }, { "tag-pm", { NULL }, 5073, "udp" }, { "alesquery", { NULL }, 5074, "tcp" }, { "alesquery", { NULL }, 5074, "udp" }, { "cp-spxrpts", { NULL }, 5079, "udp" }, { "onscreen", { NULL }, 5080, "tcp" }, { "onscreen", { NULL }, 5080, "udp" }, { "sdl-ets", { NULL }, 5081, "tcp" }, { "sdl-ets", { NULL }, 5081, "udp" }, { "qcp", { NULL }, 5082, "tcp" }, { "qcp", { NULL }, 5082, "udp" }, { "qfp", { NULL }, 5083, "tcp" }, { "qfp", { NULL }, 5083, "udp" }, { "llrp", { NULL }, 5084, "tcp" }, { "llrp", { NULL }, 5084, "udp" }, { "encrypted-llrp", { NULL }, 5085, "tcp" }, { "encrypted-llrp", { NULL }, 5085, "udp" }, { "aprigo-cs", { NULL }, 5086, "tcp" }, { "car", { NULL }, 5090, "sctp" }, { "cxtp", { NULL }, 5091, "sctp" }, { "magpie", { NULL }, 5092, "udp" }, { "sentinel-lm", { NULL }, 5093, "tcp" }, { "sentinel-lm", { NULL }, 5093, "udp" }, { "hart-ip", { NULL }, 5094, "tcp" }, { "hart-ip", { NULL }, 5094, "udp" }, { "sentlm-srv2srv", { NULL }, 5099, "tcp" }, { "sentlm-srv2srv", { NULL }, 5099, "udp" }, { "socalia", { NULL }, 5100, "tcp" }, { "socalia", { NULL }, 5100, "udp" }, { "talarian-tcp", { NULL }, 5101, "tcp" }, { "talarian-udp", { NULL }, 5101, "udp" }, { "oms-nonsecure", { NULL }, 5102, "tcp" }, { "oms-nonsecure", { NULL }, 5102, "udp" }, { "actifio-c2c", { NULL }, 5103, 
"tcp" }, { "tinymessage", { NULL }, 5104, "udp" }, { "hughes-ap", { NULL }, 5105, "udp" }, { "taep-as-svc", { NULL }, 5111, "tcp" }, { "taep-as-svc", { NULL }, 5111, "udp" }, { "pm-cmdsvr", { NULL }, 5112, "tcp" }, { "pm-cmdsvr", { NULL }, 5112, "udp" }, { "ev-services", { NULL }, 5114, "tcp" }, { "autobuild", { NULL }, 5115, "tcp" }, { "emb-proj-cmd", { NULL }, 5116, "udp" }, { "gradecam", { NULL }, 5117, "tcp" }, { "nbt-pc", { NULL }, 5133, "tcp" }, { "nbt-pc", { NULL }, 5133, "udp" }, { "ppactivation", { NULL }, 5134, "tcp" }, { "erp-scale", { NULL }, 5135, "tcp" }, { "minotaur-sa", { NULL }, 5136, "udp" }, { "ctsd", { NULL }, 5137, "tcp" }, { "ctsd", { NULL }, 5137, "udp" }, { "rmonitor_secure", { NULL }, 5145, "tcp" }, { "rmonitor_secure", { NULL }, 5145, "udp" }, { "social-alarm", { NULL }, 5146, "tcp" }, { "atmp", { NULL }, 5150, "tcp" }, { "atmp", { NULL }, 5150, "udp" }, { "esri_sde", { NULL }, 5151, "tcp" }, { "esri_sde", { NULL }, 5151, "udp" }, { "sde-discovery", { NULL }, 5152, "tcp" }, { "sde-discovery", { NULL }, 5152, "udp" }, { "toruxserver", { NULL }, 5153, "tcp" }, { "bzflag", { NULL }, 5154, "tcp" }, { "bzflag", { NULL }, 5154, "udp" }, { "asctrl-agent", { NULL }, 5155, "tcp" }, { "asctrl-agent", { NULL }, 5155, "udp" }, { "rugameonline", { NULL }, 5156, "tcp" }, { "mediat", { NULL }, 5157, "tcp" }, { "snmpssh", { NULL }, 5161, "tcp" }, { "snmpssh-trap", { NULL }, 5162, "tcp" }, { "sbackup", { NULL }, 5163, "tcp" }, { "vpa", { NULL }, 5164, "tcp" }, { "vpa-disc", { NULL }, 5164, "udp" }, { "ife_icorp", { NULL }, 5165, "tcp" }, { "ife_icorp", { NULL }, 5165, "udp" }, { "winpcs", { NULL }, 5166, "tcp" }, { "winpcs", { NULL }, 5166, "udp" }, { "scte104", { NULL }, 5167, "tcp" }, { "scte104", { NULL }, 5167, "udp" }, { "scte30", { NULL }, 5168, "tcp" }, { "scte30", { NULL }, 5168, "udp" }, { "aol", { NULL }, 5190, "tcp" }, { "aol", { NULL }, 5190, "udp" }, { "aol-1", { NULL }, 5191, "tcp" }, { "aol-1", { NULL }, 5191, "udp" }, { "aol-2", { NULL }, 5192, "tcp" }, { "aol-2", { NULL }, 5192, "udp" }, { "aol-3", { NULL }, 5193, "tcp" }, { "aol-3", { NULL }, 5193, "udp" }, { "cpscomm", { NULL }, 5194, "tcp" }, { "targus-getdata", { NULL }, 5200, "tcp" }, { "targus-getdata", { NULL }, 5200, "udp" }, { "targus-getdata1", { NULL }, 5201, "tcp" }, { "targus-getdata1", { NULL }, 5201, "udp" }, { "targus-getdata2", { NULL }, 5202, "tcp" }, { "targus-getdata2", { NULL }, 5202, "udp" }, { "targus-getdata3", { NULL }, 5203, "tcp" }, { "targus-getdata3", { NULL }, 5203, "udp" }, { "3exmp", { NULL }, 5221, "tcp" }, { "xmpp-client", { NULL }, 5222, "tcp" }, { "hpvirtgrp", { NULL }, 5223, "tcp" }, { "hpvirtgrp", { NULL }, 5223, "udp" }, { "hpvirtctrl", { NULL }, 5224, "tcp" }, { "hpvirtctrl", { NULL }, 5224, "udp" }, { "hp-server", { NULL }, 5225, "tcp" }, { "hp-server", { NULL }, 5225, "udp" }, { "hp-status", { NULL }, 5226, "tcp" }, { "hp-status", { NULL }, 5226, "udp" }, { "perfd", { NULL }, 5227, "tcp" }, { "perfd", { NULL }, 5227, "udp" }, { "hpvroom", { NULL }, 5228, "tcp" }, { "csedaemon", { NULL }, 5232, "tcp" }, { "enfs", { NULL }, 5233, "tcp" }, { "eenet", { NULL }, 5234, "tcp" }, { "eenet", { NULL }, 5234, "udp" }, { "galaxy-network", { NULL }, 5235, "tcp" }, { "galaxy-network", { NULL }, 5235, "udp" }, { "padl2sim", { NULL }, 5236, "tcp" }, { "padl2sim", { NULL }, 5236, "udp" }, { "mnet-discovery", { NULL }, 5237, "tcp" }, { "mnet-discovery", { NULL }, 5237, "udp" }, { "downtools", { NULL }, 5245, "tcp" }, { "downtools-disc", { NULL }, 5245, "udp" }, { "capwap-control", { NULL }, 
5246, "udp" }, { "capwap-data", { NULL }, 5247, "udp" }, { "caacws", { NULL }, 5248, "tcp" }, { "caacws", { NULL }, 5248, "udp" }, { "caaclang2", { NULL }, 5249, "tcp" }, { "caaclang2", { NULL }, 5249, "udp" }, { "soagateway", { NULL }, 5250, "tcp" }, { "soagateway", { NULL }, 5250, "udp" }, { "caevms", { NULL }, 5251, "tcp" }, { "caevms", { NULL }, 5251, "udp" }, { "movaz-ssc", { NULL }, 5252, "tcp" }, { "movaz-ssc", { NULL }, 5252, "udp" }, { "kpdp", { NULL }, 5253, "tcp" }, { "3com-njack-1", { NULL }, 5264, "tcp" }, { "3com-njack-1", { NULL }, 5264, "udp" }, { "3com-njack-2", { NULL }, 5265, "tcp" }, { "3com-njack-2", { NULL }, 5265, "udp" }, { "xmpp-server", { NULL }, 5269, "tcp" }, { "xmp", { NULL }, 5270, "tcp" }, { "xmp", { NULL }, 5270, "udp" }, { "cuelink", { NULL }, 5271, "tcp" }, { "cuelink-disc", { NULL }, 5271, "udp" }, { "pk", { NULL }, 5272, "tcp" }, { "pk", { NULL }, 5272, "udp" }, { "xmpp-bosh", { NULL }, 5280, "tcp" }, { "undo-lm", { NULL }, 5281, "tcp" }, { "transmit-port", { NULL }, 5282, "tcp" }, { "transmit-port", { NULL }, 5282, "udp" }, { "presence", { NULL }, 5298, "tcp" }, { "presence", { NULL }, 5298, "udp" }, { "nlg-data", { NULL }, 5299, "tcp" }, { "nlg-data", { NULL }, 5299, "udp" }, { "hacl-hb", { NULL }, 5300, "tcp" }, { "hacl-hb", { NULL }, 5300, "udp" }, { "hacl-gs", { NULL }, 5301, "tcp" }, { "hacl-gs", { NULL }, 5301, "udp" }, { "hacl-cfg", { NULL }, 5302, "tcp" }, { "hacl-cfg", { NULL }, 5302, "udp" }, { "hacl-probe", { NULL }, 5303, "tcp" }, { "hacl-probe", { NULL }, 5303, "udp" }, { "hacl-local", { NULL }, 5304, "tcp" }, { "hacl-local", { NULL }, 5304, "udp" }, { "hacl-test", { NULL }, 5305, "tcp" }, { "hacl-test", { NULL }, 5305, "udp" }, { "sun-mc-grp", { NULL }, 5306, "tcp" }, { "sun-mc-grp", { NULL }, 5306, "udp" }, { "sco-aip", { NULL }, 5307, "tcp" }, { "sco-aip", { NULL }, 5307, "udp" }, { "cfengine", { NULL }, 5308, "tcp" }, { "cfengine", { NULL }, 5308, "udp" }, { "jprinter", { NULL }, 5309, "tcp" }, { "jprinter", { NULL }, 5309, "udp" }, { "outlaws", { NULL }, 5310, "tcp" }, { "outlaws", { NULL }, 5310, "udp" }, { "permabit-cs", { NULL }, 5312, "tcp" }, { "permabit-cs", { NULL }, 5312, "udp" }, { "rrdp", { NULL }, 5313, "tcp" }, { "rrdp", { NULL }, 5313, "udp" }, { "opalis-rbt-ipc", { NULL }, 5314, "tcp" }, { "opalis-rbt-ipc", { NULL }, 5314, "udp" }, { "hacl-poll", { NULL }, 5315, "tcp" }, { "hacl-poll", { NULL }, 5315, "udp" }, { "hpdevms", { NULL }, 5316, "tcp" }, { "hpdevms", { NULL }, 5316, "udp" }, { "bsfserver-zn", { NULL }, 5320, "tcp" }, { "bsfsvr-zn-ssl", { NULL }, 5321, "tcp" }, { "kfserver", { NULL }, 5343, "tcp" }, { "kfserver", { NULL }, 5343, "udp" }, { "xkotodrcp", { NULL }, 5344, "tcp" }, { "xkotodrcp", { NULL }, 5344, "udp" }, { "stuns", { NULL }, 5349, "tcp" }, { "stuns", { NULL }, 5349, "udp" }, { "turns", { NULL }, 5349, "tcp" }, { "turns", { NULL }, 5349, "udp" }, { "stun-behaviors", { NULL }, 5349, "tcp" }, { "stun-behaviors", { NULL }, 5349, "udp" }, { "nat-pmp-status", { NULL }, 5350, "tcp" }, { "nat-pmp-status", { NULL }, 5350, "udp" }, { "nat-pmp", { NULL }, 5351, "tcp" }, { "nat-pmp", { NULL }, 5351, "udp" }, { "dns-llq", { NULL }, 5352, "tcp" }, { "dns-llq", { NULL }, 5352, "udp" }, { "mdns", { NULL }, 5353, "tcp" }, { "mdns", { NULL }, 5353, "udp" }, { "mdnsresponder", { NULL }, 5354, "tcp" }, { "mdnsresponder", { NULL }, 5354, "udp" }, { "llmnr", { NULL }, 5355, "tcp" }, { "llmnr", { NULL }, 5355, "udp" }, { "ms-smlbiz", { NULL }, 5356, "tcp" }, { "ms-smlbiz", { NULL }, 5356, "udp" }, { "wsdapi", { NULL }, 
5357, "tcp" }, { "wsdapi", { NULL }, 5357, "udp" }, { "wsdapi-s", { NULL }, 5358, "tcp" }, { "wsdapi-s", { NULL }, 5358, "udp" }, { "ms-alerter", { NULL }, 5359, "tcp" }, { "ms-alerter", { NULL }, 5359, "udp" }, { "ms-sideshow", { NULL }, 5360, "tcp" }, { "ms-sideshow", { NULL }, 5360, "udp" }, { "ms-s-sideshow", { NULL }, 5361, "tcp" }, { "ms-s-sideshow", { NULL }, 5361, "udp" }, { "serverwsd2", { NULL }, 5362, "tcp" }, { "serverwsd2", { NULL }, 5362, "udp" }, { "net-projection", { NULL }, 5363, "tcp" }, { "net-projection", { NULL }, 5363, "udp" }, { "stresstester", { NULL }, 5397, "tcp" }, { "stresstester", { NULL }, 5397, "udp" }, { "elektron-admin", { NULL }, 5398, "tcp" }, { "elektron-admin", { NULL }, 5398, "udp" }, { "securitychase", { NULL }, 5399, "tcp" }, { "securitychase", { NULL }, 5399, "udp" }, { "excerpt", { NULL }, 5400, "tcp" }, { "excerpt", { NULL }, 5400, "udp" }, { "excerpts", { NULL }, 5401, "tcp" }, { "excerpts", { NULL }, 5401, "udp" }, { "mftp", { NULL }, 5402, "tcp" }, { "mftp", { NULL }, 5402, "udp" }, { "hpoms-ci-lstn", { NULL }, 5403, "tcp" }, { "hpoms-ci-lstn", { NULL }, 5403, "udp" }, { "hpoms-dps-lstn", { NULL }, 5404, "tcp" }, { "hpoms-dps-lstn", { NULL }, 5404, "udp" }, { "netsupport", { NULL }, 5405, "tcp" }, { "netsupport", { NULL }, 5405, "udp" }, { "systemics-sox", { NULL }, 5406, "tcp" }, { "systemics-sox", { NULL }, 5406, "udp" }, { "foresyte-clear", { NULL }, 5407, "tcp" }, { "foresyte-clear", { NULL }, 5407, "udp" }, { "foresyte-sec", { NULL }, 5408, "tcp" }, { "foresyte-sec", { NULL }, 5408, "udp" }, { "salient-dtasrv", { NULL }, 5409, "tcp" }, { "salient-dtasrv", { NULL }, 5409, "udp" }, { "salient-usrmgr", { NULL }, 5410, "tcp" }, { "salient-usrmgr", { NULL }, 5410, "udp" }, { "actnet", { NULL }, 5411, "tcp" }, { "actnet", { NULL }, 5411, "udp" }, { "continuus", { NULL }, 5412, "tcp" }, { "continuus", { NULL }, 5412, "udp" }, { "wwiotalk", { NULL }, 5413, "tcp" }, { "wwiotalk", { NULL }, 5413, "udp" }, { "statusd", { NULL }, 5414, "tcp" }, { "statusd", { NULL }, 5414, "udp" }, { "ns-server", { NULL }, 5415, "tcp" }, { "ns-server", { NULL }, 5415, "udp" }, { "sns-gateway", { NULL }, 5416, "tcp" }, { "sns-gateway", { NULL }, 5416, "udp" }, { "sns-agent", { NULL }, 5417, "tcp" }, { "sns-agent", { NULL }, 5417, "udp" }, { "mcntp", { NULL }, 5418, "tcp" }, { "mcntp", { NULL }, 5418, "udp" }, { "dj-ice", { NULL }, 5419, "tcp" }, { "dj-ice", { NULL }, 5419, "udp" }, { "cylink-c", { NULL }, 5420, "tcp" }, { "cylink-c", { NULL }, 5420, "udp" }, { "netsupport2", { NULL }, 5421, "tcp" }, { "netsupport2", { NULL }, 5421, "udp" }, { "salient-mux", { NULL }, 5422, "tcp" }, { "salient-mux", { NULL }, 5422, "udp" }, { "virtualuser", { NULL }, 5423, "tcp" }, { "virtualuser", { NULL }, 5423, "udp" }, { "beyond-remote", { NULL }, 5424, "tcp" }, { "beyond-remote", { NULL }, 5424, "udp" }, { "br-channel", { NULL }, 5425, "tcp" }, { "br-channel", { NULL }, 5425, "udp" }, { "devbasic", { NULL }, 5426, "tcp" }, { "devbasic", { NULL }, 5426, "udp" }, { "sco-peer-tta", { NULL }, 5427, "tcp" }, { "sco-peer-tta", { NULL }, 5427, "udp" }, { "telaconsole", { NULL }, 5428, "tcp" }, { "telaconsole", { NULL }, 5428, "udp" }, { "base", { NULL }, 5429, "tcp" }, { "base", { NULL }, 5429, "udp" }, { "radec-corp", { NULL }, 5430, "tcp" }, { "radec-corp", { NULL }, 5430, "udp" }, { "park-agent", { NULL }, 5431, "tcp" }, { "park-agent", { NULL }, 5431, "udp" }, { "postgresql", { NULL }, 5432, "tcp" }, { "postgresql", { NULL }, 5432, "udp" }, { "pyrrho", { NULL }, 5433, "tcp" }, { 
"pyrrho", { NULL }, 5433, "udp" }, { "sgi-arrayd", { NULL }, 5434, "tcp" }, { "sgi-arrayd", { NULL }, 5434, "udp" }, { "sceanics", { NULL }, 5435, "tcp" }, { "sceanics", { NULL }, 5435, "udp" }, { "pmip6-cntl", { NULL }, 5436, "udp" }, { "pmip6-data", { NULL }, 5437, "udp" }, { "spss", { NULL }, 5443, "tcp" }, { "spss", { NULL }, 5443, "udp" }, { "surebox", { NULL }, 5453, "tcp" }, { "surebox", { NULL }, 5453, "udp" }, { "apc-5454", { NULL }, 5454, "tcp" }, { "apc-5454", { NULL }, 5454, "udp" }, { "apc-5455", { NULL }, 5455, "tcp" }, { "apc-5455", { NULL }, 5455, "udp" }, { "apc-5456", { NULL }, 5456, "tcp" }, { "apc-5456", { NULL }, 5456, "udp" }, { "silkmeter", { NULL }, 5461, "tcp" }, { "silkmeter", { NULL }, 5461, "udp" }, { "ttl-publisher", { NULL }, 5462, "tcp" }, { "ttl-publisher", { NULL }, 5462, "udp" }, { "ttlpriceproxy", { NULL }, 5463, "tcp" }, { "ttlpriceproxy", { NULL }, 5463, "udp" }, { "quailnet", { NULL }, 5464, "tcp" }, { "quailnet", { NULL }, 5464, "udp" }, { "netops-broker", { NULL }, 5465, "tcp" }, { "netops-broker", { NULL }, 5465, "udp" }, { "fcp-addr-srvr1", { NULL }, 5500, "tcp" }, { "fcp-addr-srvr1", { NULL }, 5500, "udp" }, { "fcp-addr-srvr2", { NULL }, 5501, "tcp" }, { "fcp-addr-srvr2", { NULL }, 5501, "udp" }, { "fcp-srvr-inst1", { NULL }, 5502, "tcp" }, { "fcp-srvr-inst1", { NULL }, 5502, "udp" }, { "fcp-srvr-inst2", { NULL }, 5503, "tcp" }, { "fcp-srvr-inst2", { NULL }, 5503, "udp" }, { "fcp-cics-gw1", { NULL }, 5504, "tcp" }, { "fcp-cics-gw1", { NULL }, 5504, "udp" }, { "checkoutdb", { NULL }, 5505, "tcp" }, { "checkoutdb", { NULL }, 5505, "udp" }, { "amc", { NULL }, 5506, "tcp" }, { "amc", { NULL }, 5506, "udp" }, { "sgi-eventmond", { NULL }, 5553, "tcp" }, { "sgi-eventmond", { NULL }, 5553, "udp" }, { "sgi-esphttp", { NULL }, 5554, "tcp" }, { "sgi-esphttp", { NULL }, 5554, "udp" }, { "personal-agent", { NULL }, 5555, "tcp" }, { "personal-agent", { NULL }, 5555, "udp" }, { "freeciv", { NULL }, 5556, "tcp" }, { "freeciv", { NULL }, 5556, "udp" }, { "farenet", { NULL }, 5557, "tcp" }, { "westec-connect", { NULL }, 5566, "tcp" }, { "m-oap", { NULL }, 5567, "tcp" }, { "m-oap", { NULL }, 5567, "udp" }, { "sdt", { NULL }, 5568, "tcp" }, { "sdt", { NULL }, 5568, "udp" }, { "sdmmp", { NULL }, 5573, "tcp" }, { "sdmmp", { NULL }, 5573, "udp" }, { "lsi-bobcat", { NULL }, 5574, "tcp" }, { "ora-oap", { NULL }, 5575, "tcp" }, { "fdtracks", { NULL }, 5579, "tcp" }, { "tmosms0", { NULL }, 5580, "tcp" }, { "tmosms0", { NULL }, 5580, "udp" }, { "tmosms1", { NULL }, 5581, "tcp" }, { "tmosms1", { NULL }, 5581, "udp" }, { "fac-restore", { NULL }, 5582, "tcp" }, { "fac-restore", { NULL }, 5582, "udp" }, { "tmo-icon-sync", { NULL }, 5583, "tcp" }, { "tmo-icon-sync", { NULL }, 5583, "udp" }, { "bis-web", { NULL }, 5584, "tcp" }, { "bis-web", { NULL }, 5584, "udp" }, { "bis-sync", { NULL }, 5585, "tcp" }, { "bis-sync", { NULL }, 5585, "udp" }, { "ininmessaging", { NULL }, 5597, "tcp" }, { "ininmessaging", { NULL }, 5597, "udp" }, { "mctfeed", { NULL }, 5598, "tcp" }, { "mctfeed", { NULL }, 5598, "udp" }, { "esinstall", { NULL }, 5599, "tcp" }, { "esinstall", { NULL }, 5599, "udp" }, { "esmmanager", { NULL }, 5600, "tcp" }, { "esmmanager", { NULL }, 5600, "udp" }, { "esmagent", { NULL }, 5601, "tcp" }, { "esmagent", { NULL }, 5601, "udp" }, { "a1-msc", { NULL }, 5602, "tcp" }, { "a1-msc", { NULL }, 5602, "udp" }, { "a1-bs", { NULL }, 5603, "tcp" }, { "a1-bs", { NULL }, 5603, "udp" }, { "a3-sdunode", { NULL }, 5604, "tcp" }, { "a3-sdunode", { NULL }, 5604, "udp" }, { "a4-sdunode", { 
NULL }, 5605, "tcp" }, { "a4-sdunode", { NULL }, 5605, "udp" }, { "ninaf", { NULL }, 5627, "tcp" }, { "ninaf", { NULL }, 5627, "udp" }, { "htrust", { NULL }, 5628, "tcp" }, { "htrust", { NULL }, 5628, "udp" }, { "symantec-sfdb", { NULL }, 5629, "tcp" }, { "symantec-sfdb", { NULL }, 5629, "udp" }, { "precise-comm", { NULL }, 5630, "tcp" }, { "precise-comm", { NULL }, 5630, "udp" }, { "pcanywheredata", { NULL }, 5631, "tcp" }, { "pcanywheredata", { NULL }, 5631, "udp" }, { "pcanywherestat", { NULL }, 5632, "tcp" }, { "pcanywherestat", { NULL }, 5632, "udp" }, { "beorl", { NULL }, 5633, "tcp" }, { "beorl", { NULL }, 5633, "udp" }, { "xprtld", { NULL }, 5634, "tcp" }, { "xprtld", { NULL }, 5634, "udp" }, { "sfmsso", { NULL }, 5635, "tcp" }, { "sfm-db-server", { NULL }, 5636, "tcp" }, { "cssc", { NULL }, 5637, "tcp" }, { "amqps", { NULL }, 5671, "tcp" }, { "amqps", { NULL }, 5671, "udp" }, { "amqp", { NULL }, 5672, "tcp" }, { "amqp", { NULL }, 5672, "udp" }, { "amqp", { NULL }, 5672, "sctp" }, { "jms", { NULL }, 5673, "tcp" }, { "jms", { NULL }, 5673, "udp" }, { "hyperscsi-port", { NULL }, 5674, "tcp" }, { "hyperscsi-port", { NULL }, 5674, "udp" }, { "v5ua", { NULL }, 5675, "tcp" }, { "v5ua", { NULL }, 5675, "udp" }, { "v5ua", { NULL }, 5675, "sctp" }, { "raadmin", { NULL }, 5676, "tcp" }, { "raadmin", { NULL }, 5676, "udp" }, { "questdb2-lnchr", { NULL }, 5677, "tcp" }, { "questdb2-lnchr", { NULL }, 5677, "udp" }, { "rrac", { NULL }, 5678, "tcp" }, { "rrac", { NULL }, 5678, "udp" }, { "dccm", { NULL }, 5679, "tcp" }, { "dccm", { NULL }, 5679, "udp" }, { "auriga-router", { NULL }, 5680, "tcp" }, { "auriga-router", { NULL }, 5680, "udp" }, { "ncxcp", { NULL }, 5681, "tcp" }, { "ncxcp", { NULL }, 5681, "udp" }, { "brightcore", { NULL }, 5682, "udp" }, { "ggz", { NULL }, 5688, "tcp" }, { "ggz", { NULL }, 5688, "udp" }, { "qmvideo", { NULL }, 5689, "tcp" }, { "qmvideo", { NULL }, 5689, "udp" }, { "proshareaudio", { NULL }, 5713, "tcp" }, { "proshareaudio", { NULL }, 5713, "udp" }, { "prosharevideo", { NULL }, 5714, "tcp" }, { "prosharevideo", { NULL }, 5714, "udp" }, { "prosharedata", { NULL }, 5715, "tcp" }, { "prosharedata", { NULL }, 5715, "udp" }, { "prosharerequest", { NULL }, 5716, "tcp" }, { "prosharerequest", { NULL }, 5716, "udp" }, { "prosharenotify", { NULL }, 5717, "tcp" }, { "prosharenotify", { NULL }, 5717, "udp" }, { "dpm", { NULL }, 5718, "tcp" }, { "dpm", { NULL }, 5718, "udp" }, { "dpm-agent", { NULL }, 5719, "tcp" }, { "dpm-agent", { NULL }, 5719, "udp" }, { "ms-licensing", { NULL }, 5720, "tcp" }, { "ms-licensing", { NULL }, 5720, "udp" }, { "dtpt", { NULL }, 5721, "tcp" }, { "dtpt", { NULL }, 5721, "udp" }, { "msdfsr", { NULL }, 5722, "tcp" }, { "msdfsr", { NULL }, 5722, "udp" }, { "omhs", { NULL }, 5723, "tcp" }, { "omhs", { NULL }, 5723, "udp" }, { "omsdk", { NULL }, 5724, "tcp" }, { "omsdk", { NULL }, 5724, "udp" }, { "ms-ilm", { NULL }, 5725, "tcp" }, { "ms-ilm-sts", { NULL }, 5726, "tcp" }, { "asgenf", { NULL }, 5727, "tcp" }, { "io-dist-data", { NULL }, 5728, "tcp" }, { "io-dist-group", { NULL }, 5728, "udp" }, { "openmail", { NULL }, 5729, "tcp" }, { "openmail", { NULL }, 5729, "udp" }, { "unieng", { NULL }, 5730, "tcp" }, { "unieng", { NULL }, 5730, "udp" }, { "ida-discover1", { NULL }, 5741, "tcp" }, { "ida-discover1", { NULL }, 5741, "udp" }, { "ida-discover2", { NULL }, 5742, "tcp" }, { "ida-discover2", { NULL }, 5742, "udp" }, { "watchdoc-pod", { NULL }, 5743, "tcp" }, { "watchdoc-pod", { NULL }, 5743, "udp" }, { "watchdoc", { NULL }, 5744, "tcp" }, { "watchdoc", { 
NULL }, 5744, "udp" }, { "fcopy-server", { NULL }, 5745, "tcp" }, { "fcopy-server", { NULL }, 5745, "udp" }, { "fcopys-server", { NULL }, 5746, "tcp" }, { "fcopys-server", { NULL }, 5746, "udp" }, { "tunatic", { NULL }, 5747, "tcp" }, { "tunatic", { NULL }, 5747, "udp" }, { "tunalyzer", { NULL }, 5748, "tcp" }, { "tunalyzer", { NULL }, 5748, "udp" }, { "rscd", { NULL }, 5750, "tcp" }, { "rscd", { NULL }, 5750, "udp" }, { "openmailg", { NULL }, 5755, "tcp" }, { "openmailg", { NULL }, 5755, "udp" }, { "x500ms", { NULL }, 5757, "tcp" }, { "x500ms", { NULL }, 5757, "udp" }, { "openmailns", { NULL }, 5766, "tcp" }, { "openmailns", { NULL }, 5766, "udp" }, { "s-openmail", { NULL }, 5767, "tcp" }, { "s-openmail", { NULL }, 5767, "udp" }, { "openmailpxy", { NULL }, 5768, "tcp" }, { "openmailpxy", { NULL }, 5768, "udp" }, { "spramsca", { NULL }, 5769, "tcp" }, { "spramsca", { NULL }, 5769, "udp" }, { "spramsd", { NULL }, 5770, "tcp" }, { "spramsd", { NULL }, 5770, "udp" }, { "netagent", { NULL }, 5771, "tcp" }, { "netagent", { NULL }, 5771, "udp" }, { "dali-port", { NULL }, 5777, "tcp" }, { "dali-port", { NULL }, 5777, "udp" }, { "vts-rpc", { NULL }, 5780, "tcp" }, { "3par-evts", { NULL }, 5781, "tcp" }, { "3par-evts", { NULL }, 5781, "udp" }, { "3par-mgmt", { NULL }, 5782, "tcp" }, { "3par-mgmt", { NULL }, 5782, "udp" }, { "3par-mgmt-ssl", { NULL }, 5783, "tcp" }, { "3par-mgmt-ssl", { NULL }, 5783, "udp" }, { "ibar", { NULL }, 5784, "udp" }, { "3par-rcopy", { NULL }, 5785, "tcp" }, { "3par-rcopy", { NULL }, 5785, "udp" }, { "cisco-redu", { NULL }, 5786, "udp" }, { "waascluster", { NULL }, 5787, "udp" }, { "xtreamx", { NULL }, 5793, "tcp" }, { "xtreamx", { NULL }, 5793, "udp" }, { "spdp", { NULL }, 5794, "udp" }, { "icmpd", { NULL }, 5813, "tcp" }, { "icmpd", { NULL }, 5813, "udp" }, { "spt-automation", { NULL }, 5814, "tcp" }, { "spt-automation", { NULL }, 5814, "udp" }, { "wherehoo", { NULL }, 5859, "tcp" }, { "wherehoo", { NULL }, 5859, "udp" }, { "ppsuitemsg", { NULL }, 5863, "tcp" }, { "ppsuitemsg", { NULL }, 5863, "udp" }, { "rfb", { NULL }, 5900, "tcp" }, { "rfb", { NULL }, 5900, "udp" }, { "cm", { NULL }, 5910, "tcp" }, { "cm", { NULL }, 5910, "udp" }, { "cpdlc", { NULL }, 5911, "tcp" }, { "cpdlc", { NULL }, 5911, "udp" }, { "fis", { NULL }, 5912, "tcp" }, { "fis", { NULL }, 5912, "udp" }, { "ads-c", { NULL }, 5913, "tcp" }, { "ads-c", { NULL }, 5913, "udp" }, { "indy", { NULL }, 5963, "tcp" }, { "indy", { NULL }, 5963, "udp" }, { "mppolicy-v5", { NULL }, 5968, "tcp" }, { "mppolicy-v5", { NULL }, 5968, "udp" }, { "mppolicy-mgr", { NULL }, 5969, "tcp" }, { "mppolicy-mgr", { NULL }, 5969, "udp" }, { "couchdb", { NULL }, 5984, "tcp" }, { "couchdb", { NULL }, 5984, "udp" }, { "wsman", { NULL }, 5985, "tcp" }, { "wsman", { NULL }, 5985, "udp" }, { "wsmans", { NULL }, 5986, "tcp" }, { "wsmans", { NULL }, 5986, "udp" }, { "wbem-rmi", { NULL }, 5987, "tcp" }, { "wbem-rmi", { NULL }, 5987, "udp" }, { "wbem-http", { NULL }, 5988, "tcp" }, { "wbem-http", { NULL }, 5988, "udp" }, { "wbem-https", { NULL }, 5989, "tcp" }, { "wbem-https", { NULL }, 5989, "udp" }, { "wbem-exp-https", { NULL }, 5990, "tcp" }, { "wbem-exp-https", { NULL }, 5990, "udp" }, { "nuxsl", { NULL }, 5991, "tcp" }, { "nuxsl", { NULL }, 5991, "udp" }, { "consul-insight", { NULL }, 5992, "tcp" }, { "consul-insight", { NULL }, 5992, "udp" }, { "cvsup", { NULL }, 5999, "tcp" }, { "cvsup", { NULL }, 5999, "udp" }, { "ndl-ahp-svc", { NULL }, 6064, "tcp" }, { "ndl-ahp-svc", { NULL }, 6064, "udp" }, { "winpharaoh", { NULL }, 6065, "tcp" }, { 
"winpharaoh", { NULL }, 6065, "udp" }, { "ewctsp", { NULL }, 6066, "tcp" }, { "ewctsp", { NULL }, 6066, "udp" }, { "gsmp", { NULL }, 6068, "tcp" }, { "gsmp", { NULL }, 6068, "udp" }, { "trip", { NULL }, 6069, "tcp" }, { "trip", { NULL }, 6069, "udp" }, { "messageasap", { NULL }, 6070, "tcp" }, { "messageasap", { NULL }, 6070, "udp" }, { "ssdtp", { NULL }, 6071, "tcp" }, { "ssdtp", { NULL }, 6071, "udp" }, { "diagnose-proc", { NULL }, 6072, "tcp" }, { "diagnose-proc", { NULL }, 6072, "udp" }, { "directplay8", { NULL }, 6073, "tcp" }, { "directplay8", { NULL }, 6073, "udp" }, { "max", { NULL }, 6074, "tcp" }, { "max", { NULL }, 6074, "udp" }, { "dpm-acm", { NULL }, 6075, "tcp" }, { "miami-bcast", { NULL }, 6083, "udp" }, { "p2p-sip", { NULL }, 6084, "tcp" }, { "konspire2b", { NULL }, 6085, "tcp" }, { "konspire2b", { NULL }, 6085, "udp" }, { "pdtp", { NULL }, 6086, "tcp" }, { "pdtp", { NULL }, 6086, "udp" }, { "ldss", { NULL }, 6087, "tcp" }, { "ldss", { NULL }, 6087, "udp" }, { "raxa-mgmt", { NULL }, 6099, "tcp" }, { "synchronet-db", { NULL }, 6100, "tcp" }, { "synchronet-db", { NULL }, 6100, "udp" }, { "synchronet-rtc", { NULL }, 6101, "tcp" }, { "synchronet-rtc", { NULL }, 6101, "udp" }, { "synchronet-upd", { NULL }, 6102, "tcp" }, { "synchronet-upd", { NULL }, 6102, "udp" }, { "rets", { NULL }, 6103, "tcp" }, { "rets", { NULL }, 6103, "udp" }, { "dbdb", { NULL }, 6104, "tcp" }, { "dbdb", { NULL }, 6104, "udp" }, { "primaserver", { NULL }, 6105, "tcp" }, { "primaserver", { NULL }, 6105, "udp" }, { "mpsserver", { NULL }, 6106, "tcp" }, { "mpsserver", { NULL }, 6106, "udp" }, { "etc-control", { NULL }, 6107, "tcp" }, { "etc-control", { NULL }, 6107, "udp" }, { "sercomm-scadmin", { NULL }, 6108, "tcp" }, { "sercomm-scadmin", { NULL }, 6108, "udp" }, { "globecast-id", { NULL }, 6109, "tcp" }, { "globecast-id", { NULL }, 6109, "udp" }, { "softcm", { NULL }, 6110, "tcp" }, { "softcm", { NULL }, 6110, "udp" }, { "spc", { NULL }, 6111, "tcp" }, { "spc", { NULL }, 6111, "udp" }, { "dtspcd", { NULL }, 6112, "tcp" }, { "dtspcd", { NULL }, 6112, "udp" }, { "dayliteserver", { NULL }, 6113, "tcp" }, { "wrspice", { NULL }, 6114, "tcp" }, { "xic", { NULL }, 6115, "tcp" }, { "xtlserv", { NULL }, 6116, "tcp" }, { "daylitetouch", { NULL }, 6117, "tcp" }, { "spdy", { NULL }, 6121, "tcp" }, { "bex-webadmin", { NULL }, 6122, "tcp" }, { "bex-webadmin", { NULL }, 6122, "udp" }, { "backup-express", { NULL }, 6123, "tcp" }, { "backup-express", { NULL }, 6123, "udp" }, { "pnbs", { NULL }, 6124, "tcp" }, { "pnbs", { NULL }, 6124, "udp" }, { "nbt-wol", { NULL }, 6133, "tcp" }, { "nbt-wol", { NULL }, 6133, "udp" }, { "pulsonixnls", { NULL }, 6140, "tcp" }, { "pulsonixnls", { NULL }, 6140, "udp" }, { "meta-corp", { NULL }, 6141, "tcp" }, { "meta-corp", { NULL }, 6141, "udp" }, { "aspentec-lm", { NULL }, 6142, "tcp" }, { "aspentec-lm", { NULL }, 6142, "udp" }, { "watershed-lm", { NULL }, 6143, "tcp" }, { "watershed-lm", { NULL }, 6143, "udp" }, { "statsci1-lm", { NULL }, 6144, "tcp" }, { "statsci1-lm", { NULL }, 6144, "udp" }, { "statsci2-lm", { NULL }, 6145, "tcp" }, { "statsci2-lm", { NULL }, 6145, "udp" }, { "lonewolf-lm", { NULL }, 6146, "tcp" }, { "lonewolf-lm", { NULL }, 6146, "udp" }, { "montage-lm", { NULL }, 6147, "tcp" }, { "montage-lm", { NULL }, 6147, "udp" }, { "ricardo-lm", { NULL }, 6148, "tcp" }, { "ricardo-lm", { NULL }, 6148, "udp" }, { "tal-pod", { NULL }, 6149, "tcp" }, { "tal-pod", { NULL }, 6149, "udp" }, { "efb-aci", { NULL }, 6159, "tcp" }, { "patrol-ism", { NULL }, 6161, "tcp" }, { "patrol-ism", 
{ NULL }, 6161, "udp" }, { "patrol-coll", { NULL }, 6162, "tcp" }, { "patrol-coll", { NULL }, 6162, "udp" }, { "pscribe", { NULL }, 6163, "tcp" }, { "pscribe", { NULL }, 6163, "udp" }, { "lm-x", { NULL }, 6200, "tcp" }, { "lm-x", { NULL }, 6200, "udp" }, { "radmind", { NULL }, 6222, "tcp" }, { "radmind", { NULL }, 6222, "udp" }, { "jeol-nsdtp-1", { NULL }, 6241, "tcp" }, { "jeol-nsddp-1", { NULL }, 6241, "udp" }, { "jeol-nsdtp-2", { NULL }, 6242, "tcp" }, { "jeol-nsddp-2", { NULL }, 6242, "udp" }, { "jeol-nsdtp-3", { NULL }, 6243, "tcp" }, { "jeol-nsddp-3", { NULL }, 6243, "udp" }, { "jeol-nsdtp-4", { NULL }, 6244, "tcp" }, { "jeol-nsddp-4", { NULL }, 6244, "udp" }, { "tl1-raw-ssl", { NULL }, 6251, "tcp" }, { "tl1-raw-ssl", { NULL }, 6251, "udp" }, { "tl1-ssh", { NULL }, 6252, "tcp" }, { "tl1-ssh", { NULL }, 6252, "udp" }, { "crip", { NULL }, 6253, "tcp" }, { "crip", { NULL }, 6253, "udp" }, { "gld", { NULL }, 6267, "tcp" }, { "grid", { NULL }, 6268, "tcp" }, { "grid", { NULL }, 6268, "udp" }, { "grid-alt", { NULL }, 6269, "tcp" }, { "grid-alt", { NULL }, 6269, "udp" }, { "bmc-grx", { NULL }, 6300, "tcp" }, { "bmc-grx", { NULL }, 6300, "udp" }, { "bmc_ctd_ldap", { NULL }, 6301, "tcp" }, { "bmc_ctd_ldap", { NULL }, 6301, "udp" }, { "ufmp", { NULL }, 6306, "tcp" }, { "ufmp", { NULL }, 6306, "udp" }, { "scup", { NULL }, 6315, "tcp" }, { "scup-disc", { NULL }, 6315, "udp" }, { "abb-escp", { NULL }, 6316, "tcp" }, { "abb-escp", { NULL }, 6316, "udp" }, { "repsvc", { NULL }, 6320, "tcp" }, { "repsvc", { NULL }, 6320, "udp" }, { "emp-server1", { NULL }, 6321, "tcp" }, { "emp-server1", { NULL }, 6321, "udp" }, { "emp-server2", { NULL }, 6322, "tcp" }, { "emp-server2", { NULL }, 6322, "udp" }, { "sflow", { NULL }, 6343, "tcp" }, { "sflow", { NULL }, 6343, "udp" }, { "gnutella-svc", { NULL }, 6346, "tcp" }, { "gnutella-svc", { NULL }, 6346, "udp" }, { "gnutella-rtr", { NULL }, 6347, "tcp" }, { "gnutella-rtr", { NULL }, 6347, "udp" }, { "adap", { NULL }, 6350, "tcp" }, { "adap", { NULL }, 6350, "udp" }, { "pmcs", { NULL }, 6355, "tcp" }, { "pmcs", { NULL }, 6355, "udp" }, { "metaedit-mu", { NULL }, 6360, "tcp" }, { "metaedit-mu", { NULL }, 6360, "udp" }, { "metaedit-se", { NULL }, 6370, "tcp" }, { "metaedit-se", { NULL }, 6370, "udp" }, { "metatude-mds", { NULL }, 6382, "tcp" }, { "metatude-mds", { NULL }, 6382, "udp" }, { "clariion-evr01", { NULL }, 6389, "tcp" }, { "clariion-evr01", { NULL }, 6389, "udp" }, { "metaedit-ws", { NULL }, 6390, "tcp" }, { "metaedit-ws", { NULL }, 6390, "udp" }, { "faxcomservice", { NULL }, 6417, "tcp" }, { "faxcomservice", { NULL }, 6417, "udp" }, { "syserverremote", { NULL }, 6418, "tcp" }, { "svdrp", { NULL }, 6419, "tcp" }, { "nim-vdrshell", { NULL }, 6420, "tcp" }, { "nim-vdrshell", { NULL }, 6420, "udp" }, { "nim-wan", { NULL }, 6421, "tcp" }, { "nim-wan", { NULL }, 6421, "udp" }, { "pgbouncer", { NULL }, 6432, "tcp" }, { "sun-sr-https", { NULL }, 6443, "tcp" }, { "sun-sr-https", { NULL }, 6443, "udp" }, { "sge_qmaster", { NULL }, 6444, "tcp" }, { "sge_qmaster", { NULL }, 6444, "udp" }, { "sge_execd", { NULL }, 6445, "tcp" }, { "sge_execd", { NULL }, 6445, "udp" }, { "mysql-proxy", { NULL }, 6446, "tcp" }, { "mysql-proxy", { NULL }, 6446, "udp" }, { "skip-cert-recv", { NULL }, 6455, "tcp" }, { "skip-cert-send", { NULL }, 6456, "udp" }, { "lvision-lm", { NULL }, 6471, "tcp" }, { "lvision-lm", { NULL }, 6471, "udp" }, { "sun-sr-http", { NULL }, 6480, "tcp" }, { "sun-sr-http", { NULL }, 6480, "udp" }, { "servicetags", { NULL }, 6481, "tcp" }, { "servicetags", { NULL 
}, 6481, "udp" }, { "ldoms-mgmt", { NULL }, 6482, "tcp" }, { "ldoms-mgmt", { NULL }, 6482, "udp" }, { "SunVTS-RMI", { NULL }, 6483, "tcp" }, { "SunVTS-RMI", { NULL }, 6483, "udp" }, { "sun-sr-jms", { NULL }, 6484, "tcp" }, { "sun-sr-jms", { NULL }, 6484, "udp" }, { "sun-sr-iiop", { NULL }, 6485, "tcp" }, { "sun-sr-iiop", { NULL }, 6485, "udp" }, { "sun-sr-iiops", { NULL }, 6486, "tcp" }, { "sun-sr-iiops", { NULL }, 6486, "udp" }, { "sun-sr-iiop-aut", { NULL }, 6487, "tcp" }, { "sun-sr-iiop-aut", { NULL }, 6487, "udp" }, { "sun-sr-jmx", { NULL }, 6488, "tcp" }, { "sun-sr-jmx", { NULL }, 6488, "udp" }, { "sun-sr-admin", { NULL }, 6489, "tcp" }, { "sun-sr-admin", { NULL }, 6489, "udp" }, { "boks", { NULL }, 6500, "tcp" }, { "boks", { NULL }, 6500, "udp" }, { "boks_servc", { NULL }, 6501, "tcp" }, { "boks_servc", { NULL }, 6501, "udp" }, { "boks_servm", { NULL }, 6502, "tcp" }, { "boks_servm", { NULL }, 6502, "udp" }, { "boks_clntd", { NULL }, 6503, "tcp" }, { "boks_clntd", { NULL }, 6503, "udp" }, { "badm_priv", { NULL }, 6505, "tcp" }, { "badm_priv", { NULL }, 6505, "udp" }, { "badm_pub", { NULL }, 6506, "tcp" }, { "badm_pub", { NULL }, 6506, "udp" }, { "bdir_priv", { NULL }, 6507, "tcp" }, { "bdir_priv", { NULL }, 6507, "udp" }, { "bdir_pub", { NULL }, 6508, "tcp" }, { "bdir_pub", { NULL }, 6508, "udp" }, { "mgcs-mfp-port", { NULL }, 6509, "tcp" }, { "mgcs-mfp-port", { NULL }, 6509, "udp" }, { "mcer-port", { NULL }, 6510, "tcp" }, { "mcer-port", { NULL }, 6510, "udp" }, { "netconf-tls", { NULL }, 6513, "tcp" }, { "syslog-tls", { NULL }, 6514, "tcp" }, { "syslog-tls", { NULL }, 6514, "udp" }, { "syslog-tls", { NULL }, 6514, "dccp" }, { "elipse-rec", { NULL }, 6515, "tcp" }, { "elipse-rec", { NULL }, 6515, "udp" }, { "lds-distrib", { NULL }, 6543, "tcp" }, { "lds-distrib", { NULL }, 6543, "udp" }, { "lds-dump", { NULL }, 6544, "tcp" }, { "lds-dump", { NULL }, 6544, "udp" }, { "apc-6547", { NULL }, 6547, "tcp" }, { "apc-6547", { NULL }, 6547, "udp" }, { "apc-6548", { NULL }, 6548, "tcp" }, { "apc-6548", { NULL }, 6548, "udp" }, { "apc-6549", { NULL }, 6549, "tcp" }, { "apc-6549", { NULL }, 6549, "udp" }, { "fg-sysupdate", { NULL }, 6550, "tcp" }, { "fg-sysupdate", { NULL }, 6550, "udp" }, { "sum", { NULL }, 6551, "tcp" }, { "sum", { NULL }, 6551, "udp" }, { "xdsxdm", { NULL }, 6558, "tcp" }, { "xdsxdm", { NULL }, 6558, "udp" }, { "sane-port", { NULL }, 6566, "tcp" }, { "sane-port", { NULL }, 6566, "udp" }, { "esp", { NULL }, 6567, "tcp" }, { "esp", { NULL }, 6567, "udp" }, { "canit_store", { NULL }, 6568, "tcp" }, { "rp-reputation", { NULL }, 6568, "udp" }, { "affiliate", { NULL }, 6579, "tcp" }, { "affiliate", { NULL }, 6579, "udp" }, { "parsec-master", { NULL }, 6580, "tcp" }, { "parsec-master", { NULL }, 6580, "udp" }, { "parsec-peer", { NULL }, 6581, "tcp" }, { "parsec-peer", { NULL }, 6581, "udp" }, { "parsec-game", { NULL }, 6582, "tcp" }, { "parsec-game", { NULL }, 6582, "udp" }, { "joaJewelSuite", { NULL }, 6583, "tcp" }, { "joaJewelSuite", { NULL }, 6583, "udp" }, { "mshvlm", { NULL }, 6600, "tcp" }, { "mstmg-sstp", { NULL }, 6601, "tcp" }, { "wsscomfrmwk", { NULL }, 6602, "tcp" }, { "odette-ftps", { NULL }, 6619, "tcp" }, { "odette-ftps", { NULL }, 6619, "udp" }, { "kftp-data", { NULL }, 6620, "tcp" }, { "kftp-data", { NULL }, 6620, "udp" }, { "kftp", { NULL }, 6621, "tcp" }, { "kftp", { NULL }, 6621, "udp" }, { "mcftp", { NULL }, 6622, "tcp" }, { "mcftp", { NULL }, 6622, "udp" }, { "ktelnet", { NULL }, 6623, "tcp" }, { "ktelnet", { NULL }, 6623, "udp" }, { "datascaler-db", { NULL }, 
6624, "tcp" }, { "datascaler-ctl", { NULL }, 6625, "tcp" }, { "wago-service", { NULL }, 6626, "tcp" }, { "wago-service", { NULL }, 6626, "udp" }, { "nexgen", { NULL }, 6627, "tcp" }, { "nexgen", { NULL }, 6627, "udp" }, { "afesc-mc", { NULL }, 6628, "tcp" }, { "afesc-mc", { NULL }, 6628, "udp" }, { "mxodbc-connect", { NULL }, 6632, "tcp" }, { "pcs-sf-ui-man", { NULL }, 6655, "tcp" }, { "emgmsg", { NULL }, 6656, "tcp" }, { "palcom-disc", { NULL }, 6657, "udp" }, { "vocaltec-gold", { NULL }, 6670, "tcp" }, { "vocaltec-gold", { NULL }, 6670, "udp" }, { "p4p-portal", { NULL }, 6671, "tcp" }, { "p4p-portal", { NULL }, 6671, "udp" }, { "vision_server", { NULL }, 6672, "tcp" }, { "vision_server", { NULL }, 6672, "udp" }, { "vision_elmd", { NULL }, 6673, "tcp" }, { "vision_elmd", { NULL }, 6673, "udp" }, { "vfbp", { NULL }, 6678, "tcp" }, { "vfbp-disc", { NULL }, 6678, "udp" }, { "osaut", { NULL }, 6679, "tcp" }, { "osaut", { NULL }, 6679, "udp" }, { "clever-ctrace", { NULL }, 6687, "tcp" }, { "clever-tcpip", { NULL }, 6688, "tcp" }, { "tsa", { NULL }, 6689, "tcp" }, { "tsa", { NULL }, 6689, "udp" }, { "babel", { NULL }, 6697, "udp" }, { "kti-icad-srvr", { NULL }, 6701, "tcp" }, { "kti-icad-srvr", { NULL }, 6701, "udp" }, { "e-design-net", { NULL }, 6702, "tcp" }, { "e-design-net", { NULL }, 6702, "udp" }, { "e-design-web", { NULL }, 6703, "tcp" }, { "e-design-web", { NULL }, 6703, "udp" }, { "frc-hp", { NULL }, 6704, "sctp" }, { "frc-mp", { NULL }, 6705, "sctp" }, { "frc-lp", { NULL }, 6706, "sctp" }, { "ibprotocol", { NULL }, 6714, "tcp" }, { "ibprotocol", { NULL }, 6714, "udp" }, { "fibotrader-com", { NULL }, 6715, "tcp" }, { "fibotrader-com", { NULL }, 6715, "udp" }, { "bmc-perf-agent", { NULL }, 6767, "tcp" }, { "bmc-perf-agent", { NULL }, 6767, "udp" }, { "bmc-perf-mgrd", { NULL }, 6768, "tcp" }, { "bmc-perf-mgrd", { NULL }, 6768, "udp" }, { "adi-gxp-srvprt", { NULL }, 6769, "tcp" }, { "adi-gxp-srvprt", { NULL }, 6769, "udp" }, { "plysrv-http", { NULL }, 6770, "tcp" }, { "plysrv-http", { NULL }, 6770, "udp" }, { "plysrv-https", { NULL }, 6771, "tcp" }, { "plysrv-https", { NULL }, 6771, "udp" }, { "dgpf-exchg", { NULL }, 6785, "tcp" }, { "dgpf-exchg", { NULL }, 6785, "udp" }, { "smc-jmx", { NULL }, 6786, "tcp" }, { "smc-jmx", { NULL }, 6786, "udp" }, { "smc-admin", { NULL }, 6787, "tcp" }, { "smc-admin", { NULL }, 6787, "udp" }, { "smc-http", { NULL }, 6788, "tcp" }, { "smc-http", { NULL }, 6788, "udp" }, { "smc-https", { NULL }, 6789, "tcp" }, { "smc-https", { NULL }, 6789, "udp" }, { "hnmp", { NULL }, 6790, "tcp" }, { "hnmp", { NULL }, 6790, "udp" }, { "hnm", { NULL }, 6791, "tcp" }, { "hnm", { NULL }, 6791, "udp" }, { "acnet", { NULL }, 6801, "tcp" }, { "acnet", { NULL }, 6801, "udp" }, { "pentbox-sim", { NULL }, 6817, "tcp" }, { "ambit-lm", { NULL }, 6831, "tcp" }, { "ambit-lm", { NULL }, 6831, "udp" }, { "netmo-default", { NULL }, 6841, "tcp" }, { "netmo-default", { NULL }, 6841, "udp" }, { "netmo-http", { NULL }, 6842, "tcp" }, { "netmo-http", { NULL }, 6842, "udp" }, { "iccrushmore", { NULL }, 6850, "tcp" }, { "iccrushmore", { NULL }, 6850, "udp" }, { "acctopus-cc", { NULL }, 6868, "tcp" }, { "acctopus-st", { NULL }, 6868, "udp" }, { "muse", { NULL }, 6888, "tcp" }, { "muse", { NULL }, 6888, "udp" }, { "jetstream", { NULL }, 6901, "tcp" }, { "xsmsvc", { NULL }, 6936, "tcp" }, { "xsmsvc", { NULL }, 6936, "udp" }, { "bioserver", { NULL }, 6946, "tcp" }, { "bioserver", { NULL }, 6946, "udp" }, { "otlp", { NULL }, 6951, "tcp" }, { "otlp", { NULL }, 6951, "udp" }, { "jmact3", { NULL }, 
6961, "tcp" }, { "jmact3", { NULL }, 6961, "udp" }, { "jmevt2", { NULL }, 6962, "tcp" }, { "jmevt2", { NULL }, 6962, "udp" }, { "swismgr1", { NULL }, 6963, "tcp" }, { "swismgr1", { NULL }, 6963, "udp" }, { "swismgr2", { NULL }, 6964, "tcp" }, { "swismgr2", { NULL }, 6964, "udp" }, { "swistrap", { NULL }, 6965, "tcp" }, { "swistrap", { NULL }, 6965, "udp" }, { "swispol", { NULL }, 6966, "tcp" }, { "swispol", { NULL }, 6966, "udp" }, { "acmsoda", { NULL }, 6969, "tcp" }, { "acmsoda", { NULL }, 6969, "udp" }, { "MobilitySrv", { NULL }, 6997, "tcp" }, { "MobilitySrv", { NULL }, 6997, "udp" }, { "iatp-highpri", { NULL }, 6998, "tcp" }, { "iatp-highpri", { NULL }, 6998, "udp" }, { "iatp-normalpri", { NULL }, 6999, "tcp" }, { "iatp-normalpri", { NULL }, 6999, "udp" }, { "afs3-fileserver", { NULL }, 7000, "tcp" }, { "afs3-fileserver", { NULL }, 7000, "udp" }, { "afs3-callback", { NULL }, 7001, "tcp" }, { "afs3-callback", { NULL }, 7001, "udp" }, { "afs3-prserver", { NULL }, 7002, "tcp" }, { "afs3-prserver", { NULL }, 7002, "udp" }, { "afs3-vlserver", { NULL }, 7003, "tcp" }, { "afs3-vlserver", { NULL }, 7003, "udp" }, { "afs3-kaserver", { NULL }, 7004, "tcp" }, { "afs3-kaserver", { NULL }, 7004, "udp" }, { "afs3-volser", { NULL }, 7005, "tcp" }, { "afs3-volser", { NULL }, 7005, "udp" }, { "afs3-errors", { NULL }, 7006, "tcp" }, { "afs3-errors", { NULL }, 7006, "udp" }, { "afs3-bos", { NULL }, 7007, "tcp" }, { "afs3-bos", { NULL }, 7007, "udp" }, { "afs3-update", { NULL }, 7008, "tcp" }, { "afs3-update", { NULL }, 7008, "udp" }, { "afs3-rmtsys", { NULL }, 7009, "tcp" }, { "afs3-rmtsys", { NULL }, 7009, "udp" }, { "ups-onlinet", { NULL }, 7010, "tcp" }, { "ups-onlinet", { NULL }, 7010, "udp" }, { "talon-disc", { NULL }, 7011, "tcp" }, { "talon-disc", { NULL }, 7011, "udp" }, { "talon-engine", { NULL }, 7012, "tcp" }, { "talon-engine", { NULL }, 7012, "udp" }, { "microtalon-dis", { NULL }, 7013, "tcp" }, { "microtalon-dis", { NULL }, 7013, "udp" }, { "microtalon-com", { NULL }, 7014, "tcp" }, { "microtalon-com", { NULL }, 7014, "udp" }, { "talon-webserver", { NULL }, 7015, "tcp" }, { "talon-webserver", { NULL }, 7015, "udp" }, { "dpserve", { NULL }, 7020, "tcp" }, { "dpserve", { NULL }, 7020, "udp" }, { "dpserveadmin", { NULL }, 7021, "tcp" }, { "dpserveadmin", { NULL }, 7021, "udp" }, { "ctdp", { NULL }, 7022, "tcp" }, { "ctdp", { NULL }, 7022, "udp" }, { "ct2nmcs", { NULL }, 7023, "tcp" }, { "ct2nmcs", { NULL }, 7023, "udp" }, { "vmsvc", { NULL }, 7024, "tcp" }, { "vmsvc", { NULL }, 7024, "udp" }, { "vmsvc-2", { NULL }, 7025, "tcp" }, { "vmsvc-2", { NULL }, 7025, "udp" }, { "op-probe", { NULL }, 7030, "tcp" }, { "op-probe", { NULL }, 7030, "udp" }, { "arcp", { NULL }, 7070, "tcp" }, { "arcp", { NULL }, 7070, "udp" }, { "iwg1", { NULL }, 7071, "tcp" }, { "iwg1", { NULL }, 7071, "udp" }, { "empowerid", { NULL }, 7080, "tcp" }, { "empowerid", { NULL }, 7080, "udp" }, { "lazy-ptop", { NULL }, 7099, "tcp" }, { "lazy-ptop", { NULL }, 7099, "udp" }, { "font-service", { NULL }, 7100, "tcp" }, { "font-service", { NULL }, 7100, "udp" }, { "elcn", { NULL }, 7101, "tcp" }, { "elcn", { NULL }, 7101, "udp" }, { "aes-x170", { NULL }, 7107, "udp" }, { "virprot-lm", { NULL }, 7121, "tcp" }, { "virprot-lm", { NULL }, 7121, "udp" }, { "scenidm", { NULL }, 7128, "tcp" }, { "scenidm", { NULL }, 7128, "udp" }, { "scenccs", { NULL }, 7129, "tcp" }, { "scenccs", { NULL }, 7129, "udp" }, { "cabsm-comm", { NULL }, 7161, "tcp" }, { "cabsm-comm", { NULL }, 7161, "udp" }, { "caistoragemgr", { NULL }, 7162, "tcp" }, { 
"caistoragemgr", { NULL }, 7162, "udp" }, { "cacsambroker", { NULL }, 7163, "tcp" }, { "cacsambroker", { NULL }, 7163, "udp" }, { "fsr", { NULL }, 7164, "tcp" }, { "fsr", { NULL }, 7164, "udp" }, { "doc-server", { NULL }, 7165, "tcp" }, { "doc-server", { NULL }, 7165, "udp" }, { "aruba-server", { NULL }, 7166, "tcp" }, { "aruba-server", { NULL }, 7166, "udp" }, { "casrmagent", { NULL }, 7167, "tcp" }, { "cnckadserver", { NULL }, 7168, "tcp" }, { "ccag-pib", { NULL }, 7169, "tcp" }, { "ccag-pib", { NULL }, 7169, "udp" }, { "nsrp", { NULL }, 7170, "tcp" }, { "nsrp", { NULL }, 7170, "udp" }, { "drm-production", { NULL }, 7171, "tcp" }, { "drm-production", { NULL }, 7171, "udp" }, { "zsecure", { NULL }, 7173, "tcp" }, { "clutild", { NULL }, 7174, "tcp" }, { "clutild", { NULL }, 7174, "udp" }, { "fodms", { NULL }, 7200, "tcp" }, { "fodms", { NULL }, 7200, "udp" }, { "dlip", { NULL }, 7201, "tcp" }, { "dlip", { NULL }, 7201, "udp" }, { "ramp", { NULL }, 7227, "tcp" }, { "ramp", { NULL }, 7227, "udp" }, { "citrixupp", { NULL }, 7228, "tcp" }, { "citrixuppg", { NULL }, 7229, "tcp" }, { "pads", { NULL }, 7237, "tcp" }, { "cnap", { NULL }, 7262, "tcp" }, { "cnap", { NULL }, 7262, "udp" }, { "watchme-7272", { NULL }, 7272, "tcp" }, { "watchme-7272", { NULL }, 7272, "udp" }, { "oma-rlp", { NULL }, 7273, "tcp" }, { "oma-rlp", { NULL }, 7273, "udp" }, { "oma-rlp-s", { NULL }, 7274, "tcp" }, { "oma-rlp-s", { NULL }, 7274, "udp" }, { "oma-ulp", { NULL }, 7275, "tcp" }, { "oma-ulp", { NULL }, 7275, "udp" }, { "oma-ilp", { NULL }, 7276, "tcp" }, { "oma-ilp", { NULL }, 7276, "udp" }, { "oma-ilp-s", { NULL }, 7277, "tcp" }, { "oma-ilp-s", { NULL }, 7277, "udp" }, { "oma-dcdocbs", { NULL }, 7278, "tcp" }, { "oma-dcdocbs", { NULL }, 7278, "udp" }, { "ctxlic", { NULL }, 7279, "tcp" }, { "ctxlic", { NULL }, 7279, "udp" }, { "itactionserver1", { NULL }, 7280, "tcp" }, { "itactionserver1", { NULL }, 7280, "udp" }, { "itactionserver2", { NULL }, 7281, "tcp" }, { "itactionserver2", { NULL }, 7281, "udp" }, { "mzca-action", { NULL }, 7282, "tcp" }, { "mzca-alert", { NULL }, 7282, "udp" }, { "lcm-server", { NULL }, 7365, "tcp" }, { "lcm-server", { NULL }, 7365, "udp" }, { "mindfilesys", { NULL }, 7391, "tcp" }, { "mindfilesys", { NULL }, 7391, "udp" }, { "mrssrendezvous", { NULL }, 7392, "tcp" }, { "mrssrendezvous", { NULL }, 7392, "udp" }, { "nfoldman", { NULL }, 7393, "tcp" }, { "nfoldman", { NULL }, 7393, "udp" }, { "fse", { NULL }, 7394, "tcp" }, { "fse", { NULL }, 7394, "udp" }, { "winqedit", { NULL }, 7395, "tcp" }, { "winqedit", { NULL }, 7395, "udp" }, { "hexarc", { NULL }, 7397, "tcp" }, { "hexarc", { NULL }, 7397, "udp" }, { "rtps-discovery", { NULL }, 7400, "tcp" }, { "rtps-discovery", { NULL }, 7400, "udp" }, { "rtps-dd-ut", { NULL }, 7401, "tcp" }, { "rtps-dd-ut", { NULL }, 7401, "udp" }, { "rtps-dd-mt", { NULL }, 7402, "tcp" }, { "rtps-dd-mt", { NULL }, 7402, "udp" }, { "ionixnetmon", { NULL }, 7410, "tcp" }, { "ionixnetmon", { NULL }, 7410, "udp" }, { "mtportmon", { NULL }, 7421, "tcp" }, { "mtportmon", { NULL }, 7421, "udp" }, { "pmdmgr", { NULL }, 7426, "tcp" }, { "pmdmgr", { NULL }, 7426, "udp" }, { "oveadmgr", { NULL }, 7427, "tcp" }, { "oveadmgr", { NULL }, 7427, "udp" }, { "ovladmgr", { NULL }, 7428, "tcp" }, { "ovladmgr", { NULL }, 7428, "udp" }, { "opi-sock", { NULL }, 7429, "tcp" }, { "opi-sock", { NULL }, 7429, "udp" }, { "xmpv7", { NULL }, 7430, "tcp" }, { "xmpv7", { NULL }, 7430, "udp" }, { "pmd", { NULL }, 7431, "tcp" }, { "pmd", { NULL }, 7431, "udp" }, { "faximum", { NULL }, 7437, "tcp" }, 
{ "faximum", { NULL }, 7437, "udp" }, { "oracleas-https", { NULL }, 7443, "tcp" }, { "oracleas-https", { NULL }, 7443, "udp" }, { "rise", { NULL }, 7473, "tcp" }, { "rise", { NULL }, 7473, "udp" }, { "telops-lmd", { NULL }, 7491, "tcp" }, { "telops-lmd", { NULL }, 7491, "udp" }, { "silhouette", { NULL }, 7500, "tcp" }, { "silhouette", { NULL }, 7500, "udp" }, { "ovbus", { NULL }, 7501, "tcp" }, { "ovbus", { NULL }, 7501, "udp" }, { "acplt", { NULL }, 7509, "tcp" }, { "ovhpas", { NULL }, 7510, "tcp" }, { "ovhpas", { NULL }, 7510, "udp" }, { "pafec-lm", { NULL }, 7511, "tcp" }, { "pafec-lm", { NULL }, 7511, "udp" }, { "saratoga", { NULL }, 7542, "tcp" }, { "saratoga", { NULL }, 7542, "udp" }, { "atul", { NULL }, 7543, "tcp" }, { "atul", { NULL }, 7543, "udp" }, { "nta-ds", { NULL }, 7544, "tcp" }, { "nta-ds", { NULL }, 7544, "udp" }, { "nta-us", { NULL }, 7545, "tcp" }, { "nta-us", { NULL }, 7545, "udp" }, { "cfs", { NULL }, 7546, "tcp" }, { "cfs", { NULL }, 7546, "udp" }, { "cwmp", { NULL }, 7547, "tcp" }, { "cwmp", { NULL }, 7547, "udp" }, { "tidp", { NULL }, 7548, "tcp" }, { "tidp", { NULL }, 7548, "udp" }, { "nls-tl", { NULL }, 7549, "tcp" }, { "nls-tl", { NULL }, 7549, "udp" }, { "sncp", { NULL }, 7560, "tcp" }, { "sncp", { NULL }, 7560, "udp" }, { "cfw", { NULL }, 7563, "tcp" }, { "vsi-omega", { NULL }, 7566, "tcp" }, { "vsi-omega", { NULL }, 7566, "udp" }, { "dell-eql-asm", { NULL }, 7569, "tcp" }, { "aries-kfinder", { NULL }, 7570, "tcp" }, { "aries-kfinder", { NULL }, 7570, "udp" }, { "sun-lm", { NULL }, 7588, "tcp" }, { "sun-lm", { NULL }, 7588, "udp" }, { "indi", { NULL }, 7624, "tcp" }, { "indi", { NULL }, 7624, "udp" }, { "simco", { NULL }, 7626, "tcp" }, { "simco", { NULL }, 7626, "sctp" }, { "soap-http", { NULL }, 7627, "tcp" }, { "soap-http", { NULL }, 7627, "udp" }, { "zen-pawn", { NULL }, 7628, "tcp" }, { "zen-pawn", { NULL }, 7628, "udp" }, { "xdas", { NULL }, 7629, "tcp" }, { "xdas", { NULL }, 7629, "udp" }, { "hawk", { NULL }, 7630, "tcp" }, { "tesla-sys-msg", { NULL }, 7631, "tcp" }, { "pmdfmgt", { NULL }, 7633, "tcp" }, { "pmdfmgt", { NULL }, 7633, "udp" }, { "cuseeme", { NULL }, 7648, "tcp" }, { "cuseeme", { NULL }, 7648, "udp" }, { "imqstomp", { NULL }, 7672, "tcp" }, { "imqstomps", { NULL }, 7673, "tcp" }, { "imqtunnels", { NULL }, 7674, "tcp" }, { "imqtunnels", { NULL }, 7674, "udp" }, { "imqtunnel", { NULL }, 7675, "tcp" }, { "imqtunnel", { NULL }, 7675, "udp" }, { "imqbrokerd", { NULL }, 7676, "tcp" }, { "imqbrokerd", { NULL }, 7676, "udp" }, { "sun-user-https", { NULL }, 7677, "tcp" }, { "sun-user-https", { NULL }, 7677, "udp" }, { "pando-pub", { NULL }, 7680, "tcp" }, { "pando-pub", { NULL }, 7680, "udp" }, { "collaber", { NULL }, 7689, "tcp" }, { "collaber", { NULL }, 7689, "udp" }, { "klio", { NULL }, 7697, "tcp" }, { "klio", { NULL }, 7697, "udp" }, { "em7-secom", { NULL }, 7700, "tcp" }, { "sync-em7", { NULL }, 7707, "tcp" }, { "sync-em7", { NULL }, 7707, "udp" }, { "scinet", { NULL }, 7708, "tcp" }, { "scinet", { NULL }, 7708, "udp" }, { "medimageportal", { NULL }, 7720, "tcp" }, { "medimageportal", { NULL }, 7720, "udp" }, { "nsdeepfreezectl", { NULL }, 7724, "tcp" }, { "nsdeepfreezectl", { NULL }, 7724, "udp" }, { "nitrogen", { NULL }, 7725, "tcp" }, { "nitrogen", { NULL }, 7725, "udp" }, { "freezexservice", { NULL }, 7726, "tcp" }, { "freezexservice", { NULL }, 7726, "udp" }, { "trident-data", { NULL }, 7727, "tcp" }, { "trident-data", { NULL }, 7727, "udp" }, { "smip", { NULL }, 7734, "tcp" }, { "smip", { NULL }, 7734, "udp" }, { "aiagent", { NULL }, 
7738, "tcp" }, { "aiagent", { NULL }, 7738, "udp" }, { "scriptview", { NULL }, 7741, "tcp" }, { "scriptview", { NULL }, 7741, "udp" }, { "msss", { NULL }, 7742, "tcp" }, { "sstp-1", { NULL }, 7743, "tcp" }, { "sstp-1", { NULL }, 7743, "udp" }, { "raqmon-pdu", { NULL }, 7744, "tcp" }, { "raqmon-pdu", { NULL }, 7744, "udp" }, { "prgp", { NULL }, 7747, "tcp" }, { "prgp", { NULL }, 7747, "udp" }, { "cbt", { NULL }, 7777, "tcp" }, { "cbt", { NULL }, 7777, "udp" }, { "interwise", { NULL }, 7778, "tcp" }, { "interwise", { NULL }, 7778, "udp" }, { "vstat", { NULL }, 7779, "tcp" }, { "vstat", { NULL }, 7779, "udp" }, { "accu-lmgr", { NULL }, 7781, "tcp" }, { "accu-lmgr", { NULL }, 7781, "udp" }, { "minivend", { NULL }, 7786, "tcp" }, { "minivend", { NULL }, 7786, "udp" }, { "popup-reminders", { NULL }, 7787, "tcp" }, { "popup-reminders", { NULL }, 7787, "udp" }, { "office-tools", { NULL }, 7789, "tcp" }, { "office-tools", { NULL }, 7789, "udp" }, { "q3ade", { NULL }, 7794, "tcp" }, { "q3ade", { NULL }, 7794, "udp" }, { "pnet-conn", { NULL }, 7797, "tcp" }, { "pnet-conn", { NULL }, 7797, "udp" }, { "pnet-enc", { NULL }, 7798, "tcp" }, { "pnet-enc", { NULL }, 7798, "udp" }, { "altbsdp", { NULL }, 7799, "tcp" }, { "altbsdp", { NULL }, 7799, "udp" }, { "asr", { NULL }, 7800, "tcp" }, { "asr", { NULL }, 7800, "udp" }, { "ssp-client", { NULL }, 7801, "tcp" }, { "ssp-client", { NULL }, 7801, "udp" }, { "rbt-wanopt", { NULL }, 7810, "tcp" }, { "rbt-wanopt", { NULL }, 7810, "udp" }, { "apc-7845", { NULL }, 7845, "tcp" }, { "apc-7845", { NULL }, 7845, "udp" }, { "apc-7846", { NULL }, 7846, "tcp" }, { "apc-7846", { NULL }, 7846, "udp" }, { "mobileanalyzer", { NULL }, 7869, "tcp" }, { "rbt-smc", { NULL }, 7870, "tcp" }, { "pss", { NULL }, 7880, "tcp" }, { "pss", { NULL }, 7880, "udp" }, { "ubroker", { NULL }, 7887, "tcp" }, { "ubroker", { NULL }, 7887, "udp" }, { "mevent", { NULL }, 7900, "tcp" }, { "mevent", { NULL }, 7900, "udp" }, { "tnos-sp", { NULL }, 7901, "tcp" }, { "tnos-sp", { NULL }, 7901, "udp" }, { "tnos-dp", { NULL }, 7902, "tcp" }, { "tnos-dp", { NULL }, 7902, "udp" }, { "tnos-dps", { NULL }, 7903, "tcp" }, { "tnos-dps", { NULL }, 7903, "udp" }, { "qo-secure", { NULL }, 7913, "tcp" }, { "qo-secure", { NULL }, 7913, "udp" }, { "t2-drm", { NULL }, 7932, "tcp" }, { "t2-drm", { NULL }, 7932, "udp" }, { "t2-brm", { NULL }, 7933, "tcp" }, { "t2-brm", { NULL }, 7933, "udp" }, { "supercell", { NULL }, 7967, "tcp" }, { "supercell", { NULL }, 7967, "udp" }, { "micromuse-ncps", { NULL }, 7979, "tcp" }, { "micromuse-ncps", { NULL }, 7979, "udp" }, { "quest-vista", { NULL }, 7980, "tcp" }, { "quest-vista", { NULL }, 7980, "udp" }, { "sossd-collect", { NULL }, 7981, "tcp" }, { "sossd-agent", { NULL }, 7982, "tcp" }, { "sossd-disc", { NULL }, 7982, "udp" }, { "pushns", { NULL }, 7997, "tcp" }, { "usicontentpush", { NULL }, 7998, "udp" }, { "irdmi2", { NULL }, 7999, "tcp" }, { "irdmi2", { NULL }, 7999, "udp" }, { "irdmi", { NULL }, 8000, "tcp" }, { "irdmi", { NULL }, 8000, "udp" }, { "vcom-tunnel", { NULL }, 8001, "tcp" }, { "vcom-tunnel", { NULL }, 8001, "udp" }, { "teradataordbms", { NULL }, 8002, "tcp" }, { "teradataordbms", { NULL }, 8002, "udp" }, { "mcreport", { NULL }, 8003, "tcp" }, { "mcreport", { NULL }, 8003, "udp" }, { "mxi", { NULL }, 8005, "tcp" }, { "mxi", { NULL }, 8005, "udp" }, { "http-alt", { NULL }, 8008, "tcp" }, { "http-alt", { NULL }, 8008, "udp" }, { "qbdb", { NULL }, 8019, "tcp" }, { "qbdb", { NULL }, 8019, "udp" }, { "intu-ec-svcdisc", { NULL }, 8020, "tcp" }, { "intu-ec-svcdisc", { 
NULL }, 8020, "udp" }, { "intu-ec-client", { NULL }, 8021, "tcp" }, { "intu-ec-client", { NULL }, 8021, "udp" }, { "oa-system", { NULL }, 8022, "tcp" }, { "oa-system", { NULL }, 8022, "udp" }, { "ca-audit-da", { NULL }, 8025, "tcp" }, { "ca-audit-da", { NULL }, 8025, "udp" }, { "ca-audit-ds", { NULL }, 8026, "tcp" }, { "ca-audit-ds", { NULL }, 8026, "udp" }, { "pro-ed", { NULL }, 8032, "tcp" }, { "pro-ed", { NULL }, 8032, "udp" }, { "mindprint", { NULL }, 8033, "tcp" }, { "mindprint", { NULL }, 8033, "udp" }, { "vantronix-mgmt", { NULL }, 8034, "tcp" }, { "vantronix-mgmt", { NULL }, 8034, "udp" }, { "ampify", { NULL }, 8040, "tcp" }, { "ampify", { NULL }, 8040, "udp" }, { "fs-agent", { NULL }, 8042, "tcp" }, { "fs-server", { NULL }, 8043, "tcp" }, { "fs-mgmt", { NULL }, 8044, "tcp" }, { "senomix01", { NULL }, 8052, "tcp" }, { "senomix01", { NULL }, 8052, "udp" }, { "senomix02", { NULL }, 8053, "tcp" }, { "senomix02", { NULL }, 8053, "udp" }, { "senomix03", { NULL }, 8054, "tcp" }, { "senomix03", { NULL }, 8054, "udp" }, { "senomix04", { NULL }, 8055, "tcp" }, { "senomix04", { NULL }, 8055, "udp" }, { "senomix05", { NULL }, 8056, "tcp" }, { "senomix05", { NULL }, 8056, "udp" }, { "senomix06", { NULL }, 8057, "tcp" }, { "senomix06", { NULL }, 8057, "udp" }, { "senomix07", { NULL }, 8058, "tcp" }, { "senomix07", { NULL }, 8058, "udp" }, { "senomix08", { NULL }, 8059, "tcp" }, { "senomix08", { NULL }, 8059, "udp" }, { "gadugadu", { NULL }, 8074, "tcp" }, { "gadugadu", { NULL }, 8074, "udp" }, { "http-alt", { NULL }, 8080, "tcp" }, { "http-alt", { NULL }, 8080, "udp" }, { "sunproxyadmin", { NULL }, 8081, "tcp" }, { "sunproxyadmin", { NULL }, 8081, "udp" }, { "us-cli", { NULL }, 8082, "tcp" }, { "us-cli", { NULL }, 8082, "udp" }, { "us-srv", { NULL }, 8083, "tcp" }, { "us-srv", { NULL }, 8083, "udp" }, { "d-s-n", { NULL }, 8086, "tcp" }, { "d-s-n", { NULL }, 8086, "udp" }, { "simplifymedia", { NULL }, 8087, "tcp" }, { "simplifymedia", { NULL }, 8087, "udp" }, { "radan-http", { NULL }, 8088, "tcp" }, { "radan-http", { NULL }, 8088, "udp" }, { "jamlink", { NULL }, 8091, "tcp" }, { "sac", { NULL }, 8097, "tcp" }, { "sac", { NULL }, 8097, "udp" }, { "xprint-server", { NULL }, 8100, "tcp" }, { "xprint-server", { NULL }, 8100, "udp" }, { "ldoms-migr", { NULL }, 8101, "tcp" }, { "mtl8000-matrix", { NULL }, 8115, "tcp" }, { "mtl8000-matrix", { NULL }, 8115, "udp" }, { "cp-cluster", { NULL }, 8116, "tcp" }, { "cp-cluster", { NULL }, 8116, "udp" }, { "privoxy", { NULL }, 8118, "tcp" }, { "privoxy", { NULL }, 8118, "udp" }, { "apollo-data", { NULL }, 8121, "tcp" }, { "apollo-data", { NULL }, 8121, "udp" }, { "apollo-admin", { NULL }, 8122, "tcp" }, { "apollo-admin", { NULL }, 8122, "udp" }, { "paycash-online", { NULL }, 8128, "tcp" }, { "paycash-online", { NULL }, 8128, "udp" }, { "paycash-wbp", { NULL }, 8129, "tcp" }, { "paycash-wbp", { NULL }, 8129, "udp" }, { "indigo-vrmi", { NULL }, 8130, "tcp" }, { "indigo-vrmi", { NULL }, 8130, "udp" }, { "indigo-vbcp", { NULL }, 8131, "tcp" }, { "indigo-vbcp", { NULL }, 8131, "udp" }, { "dbabble", { NULL }, 8132, "tcp" }, { "dbabble", { NULL }, 8132, "udp" }, { "isdd", { NULL }, 8148, "tcp" }, { "isdd", { NULL }, 8148, "udp" }, { "patrol", { NULL }, 8160, "tcp" }, { "patrol", { NULL }, 8160, "udp" }, { "patrol-snmp", { NULL }, 8161, "tcp" }, { "patrol-snmp", { NULL }, 8161, "udp" }, { "vmware-fdm", { NULL }, 8182, "tcp" }, { "vmware-fdm", { NULL }, 8182, "udp" }, { "proremote", { NULL }, 8183, "tcp" }, { "itach", { NULL }, 8184, "tcp" }, { "itach", { NULL }, 8184, 
"udp" }, { "spytechphone", { NULL }, 8192, "tcp" }, { "spytechphone", { NULL }, 8192, "udp" }, { "blp1", { NULL }, 8194, "tcp" }, { "blp1", { NULL }, 8194, "udp" }, { "blp2", { NULL }, 8195, "tcp" }, { "blp2", { NULL }, 8195, "udp" }, { "vvr-data", { NULL }, 8199, "tcp" }, { "vvr-data", { NULL }, 8199, "udp" }, { "trivnet1", { NULL }, 8200, "tcp" }, { "trivnet1", { NULL }, 8200, "udp" }, { "trivnet2", { NULL }, 8201, "tcp" }, { "trivnet2", { NULL }, 8201, "udp" }, { "lm-perfworks", { NULL }, 8204, "tcp" }, { "lm-perfworks", { NULL }, 8204, "udp" }, { "lm-instmgr", { NULL }, 8205, "tcp" }, { "lm-instmgr", { NULL }, 8205, "udp" }, { "lm-dta", { NULL }, 8206, "tcp" }, { "lm-dta", { NULL }, 8206, "udp" }, { "lm-sserver", { NULL }, 8207, "tcp" }, { "lm-sserver", { NULL }, 8207, "udp" }, { "lm-webwatcher", { NULL }, 8208, "tcp" }, { "lm-webwatcher", { NULL }, 8208, "udp" }, { "rexecj", { NULL }, 8230, "tcp" }, { "rexecj", { NULL }, 8230, "udp" }, { "synapse-nhttps", { NULL }, 8243, "tcp" }, { "synapse-nhttps", { NULL }, 8243, "udp" }, { "pando-sec", { NULL }, 8276, "tcp" }, { "pando-sec", { NULL }, 8276, "udp" }, { "synapse-nhttp", { NULL }, 8280, "tcp" }, { "synapse-nhttp", { NULL }, 8280, "udp" }, { "blp3", { NULL }, 8292, "tcp" }, { "blp3", { NULL }, 8292, "udp" }, { "hiperscan-id", { NULL }, 8293, "tcp" }, { "blp4", { NULL }, 8294, "tcp" }, { "blp4", { NULL }, 8294, "udp" }, { "tmi", { NULL }, 8300, "tcp" }, { "tmi", { NULL }, 8300, "udp" }, { "amberon", { NULL }, 8301, "tcp" }, { "amberon", { NULL }, 8301, "udp" }, { "tnp-discover", { NULL }, 8320, "tcp" }, { "tnp-discover", { NULL }, 8320, "udp" }, { "tnp", { NULL }, 8321, "tcp" }, { "tnp", { NULL }, 8321, "udp" }, { "server-find", { NULL }, 8351, "tcp" }, { "server-find", { NULL }, 8351, "udp" }, { "cruise-enum", { NULL }, 8376, "tcp" }, { "cruise-enum", { NULL }, 8376, "udp" }, { "cruise-swroute", { NULL }, 8377, "tcp" }, { "cruise-swroute", { NULL }, 8377, "udp" }, { "cruise-config", { NULL }, 8378, "tcp" }, { "cruise-config", { NULL }, 8378, "udp" }, { "cruise-diags", { NULL }, 8379, "tcp" }, { "cruise-diags", { NULL }, 8379, "udp" }, { "cruise-update", { NULL }, 8380, "tcp" }, { "cruise-update", { NULL }, 8380, "udp" }, { "m2mservices", { NULL }, 8383, "tcp" }, { "m2mservices", { NULL }, 8383, "udp" }, { "cvd", { NULL }, 8400, "tcp" }, { "cvd", { NULL }, 8400, "udp" }, { "sabarsd", { NULL }, 8401, "tcp" }, { "sabarsd", { NULL }, 8401, "udp" }, { "abarsd", { NULL }, 8402, "tcp" }, { "abarsd", { NULL }, 8402, "udp" }, { "admind", { NULL }, 8403, "tcp" }, { "admind", { NULL }, 8403, "udp" }, { "svcloud", { NULL }, 8404, "tcp" }, { "svbackup", { NULL }, 8405, "tcp" }, { "espeech", { NULL }, 8416, "tcp" }, { "espeech", { NULL }, 8416, "udp" }, { "espeech-rtp", { NULL }, 8417, "tcp" }, { "espeech-rtp", { NULL }, 8417, "udp" }, { "cybro-a-bus", { NULL }, 8442, "tcp" }, { "cybro-a-bus", { NULL }, 8442, "udp" }, { "pcsync-https", { NULL }, 8443, "tcp" }, { "pcsync-https", { NULL }, 8443, "udp" }, { "pcsync-http", { NULL }, 8444, "tcp" }, { "pcsync-http", { NULL }, 8444, "udp" }, { "npmp", { NULL }, 8450, "tcp" }, { "npmp", { NULL }, 8450, "udp" }, { "cisco-avp", { NULL }, 8470, "tcp" }, { "pim-port", { NULL }, 8471, "tcp" }, { "pim-port", { NULL }, 8471, "sctp" }, { "otv", { NULL }, 8472, "tcp" }, { "otv", { NULL }, 8472, "udp" }, { "vp2p", { NULL }, 8473, "tcp" }, { "vp2p", { NULL }, 8473, "udp" }, { "noteshare", { NULL }, 8474, "tcp" }, { "noteshare", { NULL }, 8474, "udp" }, { "fmtp", { NULL }, 8500, "tcp" }, { "fmtp", { NULL }, 8500, "udp" 
}, { "rtsp-alt", { NULL }, 8554, "tcp" }, { "rtsp-alt", { NULL }, 8554, "udp" }, { "d-fence", { NULL }, 8555, "tcp" }, { "d-fence", { NULL }, 8555, "udp" }, { "oap-admin", { NULL }, 8567, "tcp" }, { "oap-admin", { NULL }, 8567, "udp" }, { "asterix", { NULL }, 8600, "tcp" }, { "asterix", { NULL }, 8600, "udp" }, { "canon-mfnp", { NULL }, 8610, "tcp" }, { "canon-mfnp", { NULL }, 8610, "udp" }, { "canon-bjnp1", { NULL }, 8611, "tcp" }, { "canon-bjnp1", { NULL }, 8611, "udp" }, { "canon-bjnp2", { NULL }, 8612, "tcp" }, { "canon-bjnp2", { NULL }, 8612, "udp" }, { "canon-bjnp3", { NULL }, 8613, "tcp" }, { "canon-bjnp3", { NULL }, 8613, "udp" }, { "canon-bjnp4", { NULL }, 8614, "tcp" }, { "canon-bjnp4", { NULL }, 8614, "udp" }, { "sun-as-jmxrmi", { NULL }, 8686, "tcp" }, { "sun-as-jmxrmi", { NULL }, 8686, "udp" }, { "vnyx", { NULL }, 8699, "tcp" }, { "vnyx", { NULL }, 8699, "udp" }, { "dtp-net", { NULL }, 8732, "udp" }, { "ibus", { NULL }, 8733, "tcp" }, { "ibus", { NULL }, 8733, "udp" }, { "mc-appserver", { NULL }, 8763, "tcp" }, { "mc-appserver", { NULL }, 8763, "udp" }, { "openqueue", { NULL }, 8764, "tcp" }, { "openqueue", { NULL }, 8764, "udp" }, { "ultraseek-http", { NULL }, 8765, "tcp" }, { "ultraseek-http", { NULL }, 8765, "udp" }, { "dpap", { NULL }, 8770, "tcp" }, { "dpap", { NULL }, 8770, "udp" }, { "msgclnt", { NULL }, 8786, "tcp" }, { "msgclnt", { NULL }, 8786, "udp" }, { "msgsrvr", { NULL }, 8787, "tcp" }, { "msgsrvr", { NULL }, 8787, "udp" }, { "sunwebadmin", { NULL }, 8800, "tcp" }, { "sunwebadmin", { NULL }, 8800, "udp" }, { "truecm", { NULL }, 8804, "tcp" }, { "truecm", { NULL }, 8804, "udp" }, { "dxspider", { NULL }, 8873, "tcp" }, { "dxspider", { NULL }, 8873, "udp" }, { "cddbp-alt", { NULL }, 8880, "tcp" }, { "cddbp-alt", { NULL }, 8880, "udp" }, { "secure-mqtt", { NULL }, 8883, "tcp" }, { "secure-mqtt", { NULL }, 8883, "udp" }, { "ddi-tcp-1", { NULL }, 8888, "tcp" }, { "ddi-udp-1", { NULL }, 8888, "udp" }, { "ddi-tcp-2", { NULL }, 8889, "tcp" }, { "ddi-udp-2", { NULL }, 8889, "udp" }, { "ddi-tcp-3", { NULL }, 8890, "tcp" }, { "ddi-udp-3", { NULL }, 8890, "udp" }, { "ddi-tcp-4", { NULL }, 8891, "tcp" }, { "ddi-udp-4", { NULL }, 8891, "udp" }, { "ddi-tcp-5", { NULL }, 8892, "tcp" }, { "ddi-udp-5", { NULL }, 8892, "udp" }, { "ddi-tcp-6", { NULL }, 8893, "tcp" }, { "ddi-udp-6", { NULL }, 8893, "udp" }, { "ddi-tcp-7", { NULL }, 8894, "tcp" }, { "ddi-udp-7", { NULL }, 8894, "udp" }, { "ospf-lite", { NULL }, 8899, "tcp" }, { "ospf-lite", { NULL }, 8899, "udp" }, { "jmb-cds1", { NULL }, 8900, "tcp" }, { "jmb-cds1", { NULL }, 8900, "udp" }, { "jmb-cds2", { NULL }, 8901, "tcp" }, { "jmb-cds2", { NULL }, 8901, "udp" }, { "manyone-http", { NULL }, 8910, "tcp" }, { "manyone-http", { NULL }, 8910, "udp" }, { "manyone-xml", { NULL }, 8911, "tcp" }, { "manyone-xml", { NULL }, 8911, "udp" }, { "wcbackup", { NULL }, 8912, "tcp" }, { "wcbackup", { NULL }, 8912, "udp" }, { "dragonfly", { NULL }, 8913, "tcp" }, { "dragonfly", { NULL }, 8913, "udp" }, { "twds", { NULL }, 8937, "tcp" }, { "cumulus-admin", { NULL }, 8954, "tcp" }, { "cumulus-admin", { NULL }, 8954, "udp" }, { "sunwebadmins", { NULL }, 8989, "tcp" }, { "sunwebadmins", { NULL }, 8989, "udp" }, { "http-wmap", { NULL }, 8990, "tcp" }, { "http-wmap", { NULL }, 8990, "udp" }, { "https-wmap", { NULL }, 8991, "tcp" }, { "https-wmap", { NULL }, 8991, "udp" }, { "bctp", { NULL }, 8999, "tcp" }, { "bctp", { NULL }, 8999, "udp" }, { "cslistener", { NULL }, 9000, "tcp" }, { "cslistener", { NULL }, 9000, "udp" }, { "etlservicemgr", { NULL }, 
9001, "tcp" }, { "etlservicemgr", { NULL }, 9001, "udp" }, { "dynamid", { NULL }, 9002, "tcp" }, { "dynamid", { NULL }, 9002, "udp" }, { "ogs-client", { NULL }, 9007, "udp" }, { "ogs-server", { NULL }, 9008, "tcp" }, { "pichat", { NULL }, 9009, "tcp" }, { "pichat", { NULL }, 9009, "udp" }, { "sdr", { NULL }, 9010, "tcp" }, { "tambora", { NULL }, 9020, "tcp" }, { "tambora", { NULL }, 9020, "udp" }, { "panagolin-ident", { NULL }, 9021, "tcp" }, { "panagolin-ident", { NULL }, 9021, "udp" }, { "paragent", { NULL }, 9022, "tcp" }, { "paragent", { NULL }, 9022, "udp" }, { "swa-1", { NULL }, 9023, "tcp" }, { "swa-1", { NULL }, 9023, "udp" }, { "swa-2", { NULL }, 9024, "tcp" }, { "swa-2", { NULL }, 9024, "udp" }, { "swa-3", { NULL }, 9025, "tcp" }, { "swa-3", { NULL }, 9025, "udp" }, { "swa-4", { NULL }, 9026, "tcp" }, { "swa-4", { NULL }, 9026, "udp" }, { "versiera", { NULL }, 9050, "tcp" }, { "fio-cmgmt", { NULL }, 9051, "tcp" }, { "glrpc", { NULL }, 9080, "tcp" }, { "glrpc", { NULL }, 9080, "udp" }, { "lcs-ap", { NULL }, 9082, "sctp" }, { "emc-pp-mgmtsvc", { NULL }, 9083, "tcp" }, { "aurora", { NULL }, 9084, "tcp" }, { "aurora", { NULL }, 9084, "udp" }, { "aurora", { NULL }, 9084, "sctp" }, { "ibm-rsyscon", { NULL }, 9085, "tcp" }, { "ibm-rsyscon", { NULL }, 9085, "udp" }, { "net2display", { NULL }, 9086, "tcp" }, { "net2display", { NULL }, 9086, "udp" }, { "classic", { NULL }, 9087, "tcp" }, { "classic", { NULL }, 9087, "udp" }, { "sqlexec", { NULL }, 9088, "tcp" }, { "sqlexec", { NULL }, 9088, "udp" }, { "sqlexec-ssl", { NULL }, 9089, "tcp" }, { "sqlexec-ssl", { NULL }, 9089, "udp" }, { "websm", { NULL }, 9090, "tcp" }, { "websm", { NULL }, 9090, "udp" }, { "xmltec-xmlmail", { NULL }, 9091, "tcp" }, { "xmltec-xmlmail", { NULL }, 9091, "udp" }, { "XmlIpcRegSvc", { NULL }, 9092, "tcp" }, { "XmlIpcRegSvc", { NULL }, 9092, "udp" }, { "hp-pdl-datastr", { NULL }, 9100, "tcp" }, { "hp-pdl-datastr", { NULL }, 9100, "udp" }, { "pdl-datastream", { NULL }, 9100, "tcp" }, { "pdl-datastream", { NULL }, 9100, "udp" }, { "bacula-dir", { NULL }, 9101, "tcp" }, { "bacula-dir", { NULL }, 9101, "udp" }, { "bacula-fd", { NULL }, 9102, "tcp" }, { "bacula-fd", { NULL }, 9102, "udp" }, { "bacula-sd", { NULL }, 9103, "tcp" }, { "bacula-sd", { NULL }, 9103, "udp" }, { "peerwire", { NULL }, 9104, "tcp" }, { "peerwire", { NULL }, 9104, "udp" }, { "xadmin", { NULL }, 9105, "tcp" }, { "xadmin", { NULL }, 9105, "udp" }, { "astergate", { NULL }, 9106, "tcp" }, { "astergate-disc", { NULL }, 9106, "udp" }, { "astergatefax", { NULL }, 9107, "tcp" }, { "mxit", { NULL }, 9119, "tcp" }, { "mxit", { NULL }, 9119, "udp" }, { "dddp", { NULL }, 9131, "tcp" }, { "dddp", { NULL }, 9131, "udp" }, { "apani1", { NULL }, 9160, "tcp" }, { "apani1", { NULL }, 9160, "udp" }, { "apani2", { NULL }, 9161, "tcp" }, { "apani2", { NULL }, 9161, "udp" }, { "apani3", { NULL }, 9162, "tcp" }, { "apani3", { NULL }, 9162, "udp" }, { "apani4", { NULL }, 9163, "tcp" }, { "apani4", { NULL }, 9163, "udp" }, { "apani5", { NULL }, 9164, "tcp" }, { "apani5", { NULL }, 9164, "udp" }, { "sun-as-jpda", { NULL }, 9191, "tcp" }, { "sun-as-jpda", { NULL }, 9191, "udp" }, { "wap-wsp", { NULL }, 9200, "tcp" }, { "wap-wsp", { NULL }, 9200, "udp" }, { "wap-wsp-wtp", { NULL }, 9201, "tcp" }, { "wap-wsp-wtp", { NULL }, 9201, "udp" }, { "wap-wsp-s", { NULL }, 9202, "tcp" }, { "wap-wsp-s", { NULL }, 9202, "udp" }, { "wap-wsp-wtp-s", { NULL }, 9203, "tcp" }, { "wap-wsp-wtp-s", { NULL }, 9203, "udp" }, { "wap-vcard", { NULL }, 9204, "tcp" }, { "wap-vcard", { NULL }, 9204, 
"udp" }, { "wap-vcal", { NULL }, 9205, "tcp" }, { "wap-vcal", { NULL }, 9205, "udp" }, { "wap-vcard-s", { NULL }, 9206, "tcp" }, { "wap-vcard-s", { NULL }, 9206, "udp" }, { "wap-vcal-s", { NULL }, 9207, "tcp" }, { "wap-vcal-s", { NULL }, 9207, "udp" }, { "rjcdb-vcards", { NULL }, 9208, "tcp" }, { "rjcdb-vcards", { NULL }, 9208, "udp" }, { "almobile-system", { NULL }, 9209, "tcp" }, { "almobile-system", { NULL }, 9209, "udp" }, { "oma-mlp", { NULL }, 9210, "tcp" }, { "oma-mlp", { NULL }, 9210, "udp" }, { "oma-mlp-s", { NULL }, 9211, "tcp" }, { "oma-mlp-s", { NULL }, 9211, "udp" }, { "serverviewdbms", { NULL }, 9212, "tcp" }, { "serverviewdbms", { NULL }, 9212, "udp" }, { "serverstart", { NULL }, 9213, "tcp" }, { "serverstart", { NULL }, 9213, "udp" }, { "ipdcesgbs", { NULL }, 9214, "tcp" }, { "ipdcesgbs", { NULL }, 9214, "udp" }, { "insis", { NULL }, 9215, "tcp" }, { "insis", { NULL }, 9215, "udp" }, { "acme", { NULL }, 9216, "tcp" }, { "acme", { NULL }, 9216, "udp" }, { "fsc-port", { NULL }, 9217, "tcp" }, { "fsc-port", { NULL }, 9217, "udp" }, { "teamcoherence", { NULL }, 9222, "tcp" }, { "teamcoherence", { NULL }, 9222, "udp" }, { "mon", { NULL }, 9255, "tcp" }, { "mon", { NULL }, 9255, "udp" }, { "pegasus", { NULL }, 9278, "tcp" }, { "pegasus", { NULL }, 9278, "udp" }, { "pegasus-ctl", { NULL }, 9279, "tcp" }, { "pegasus-ctl", { NULL }, 9279, "udp" }, { "pgps", { NULL }, 9280, "tcp" }, { "pgps", { NULL }, 9280, "udp" }, { "swtp-port1", { NULL }, 9281, "tcp" }, { "swtp-port1", { NULL }, 9281, "udp" }, { "swtp-port2", { NULL }, 9282, "tcp" }, { "swtp-port2", { NULL }, 9282, "udp" }, { "callwaveiam", { NULL }, 9283, "tcp" }, { "callwaveiam", { NULL }, 9283, "udp" }, { "visd", { NULL }, 9284, "tcp" }, { "visd", { NULL }, 9284, "udp" }, { "n2h2server", { NULL }, 9285, "tcp" }, { "n2h2server", { NULL }, 9285, "udp" }, { "n2receive", { NULL }, 9286, "udp" }, { "cumulus", { NULL }, 9287, "tcp" }, { "cumulus", { NULL }, 9287, "udp" }, { "armtechdaemon", { NULL }, 9292, "tcp" }, { "armtechdaemon", { NULL }, 9292, "udp" }, { "storview", { NULL }, 9293, "tcp" }, { "storview", { NULL }, 9293, "udp" }, { "armcenterhttp", { NULL }, 9294, "tcp" }, { "armcenterhttp", { NULL }, 9294, "udp" }, { "armcenterhttps", { NULL }, 9295, "tcp" }, { "armcenterhttps", { NULL }, 9295, "udp" }, { "vrace", { NULL }, 9300, "tcp" }, { "vrace", { NULL }, 9300, "udp" }, { "sphinxql", { NULL }, 9306, "tcp" }, { "sphinxapi", { NULL }, 9312, "tcp" }, { "secure-ts", { NULL }, 9318, "tcp" }, { "secure-ts", { NULL }, 9318, "udp" }, { "guibase", { NULL }, 9321, "tcp" }, { "guibase", { NULL }, 9321, "udp" }, { "mpidcmgr", { NULL }, 9343, "tcp" }, { "mpidcmgr", { NULL }, 9343, "udp" }, { "mphlpdmc", { NULL }, 9344, "tcp" }, { "mphlpdmc", { NULL }, 9344, "udp" }, { "ctechlicensing", { NULL }, 9346, "tcp" }, { "ctechlicensing", { NULL }, 9346, "udp" }, { "fjdmimgr", { NULL }, 9374, "tcp" }, { "fjdmimgr", { NULL }, 9374, "udp" }, { "boxp", { NULL }, 9380, "tcp" }, { "boxp", { NULL }, 9380, "udp" }, { "d2dconfig", { NULL }, 9387, "tcp" }, { "d2ddatatrans", { NULL }, 9388, "tcp" }, { "adws", { NULL }, 9389, "tcp" }, { "otp", { NULL }, 9390, "tcp" }, { "fjinvmgr", { NULL }, 9396, "tcp" }, { "fjinvmgr", { NULL }, 9396, "udp" }, { "mpidcagt", { NULL }, 9397, "tcp" }, { "mpidcagt", { NULL }, 9397, "udp" }, { "sec-t4net-srv", { NULL }, 9400, "tcp" }, { "sec-t4net-srv", { NULL }, 9400, "udp" }, { "sec-t4net-clt", { NULL }, 9401, "tcp" }, { "sec-t4net-clt", { NULL }, 9401, "udp" }, { "sec-pc2fax-srv", { NULL }, 9402, "tcp" }, { 
"sec-pc2fax-srv", { NULL }, 9402, "udp" }, { "git", { NULL }, 9418, "tcp" }, { "git", { NULL }, 9418, "udp" }, { "tungsten-https", { NULL }, 9443, "tcp" }, { "tungsten-https", { NULL }, 9443, "udp" }, { "wso2esb-console", { NULL }, 9444, "tcp" }, { "wso2esb-console", { NULL }, 9444, "udp" }, { "sntlkeyssrvr", { NULL }, 9450, "tcp" }, { "sntlkeyssrvr", { NULL }, 9450, "udp" }, { "ismserver", { NULL }, 9500, "tcp" }, { "ismserver", { NULL }, 9500, "udp" }, { "sma-spw", { NULL }, 9522, "udp" }, { "mngsuite", { NULL }, 9535, "tcp" }, { "mngsuite", { NULL }, 9535, "udp" }, { "laes-bf", { NULL }, 9536, "tcp" }, { "laes-bf", { NULL }, 9536, "udp" }, { "trispen-sra", { NULL }, 9555, "tcp" }, { "trispen-sra", { NULL }, 9555, "udp" }, { "ldgateway", { NULL }, 9592, "tcp" }, { "ldgateway", { NULL }, 9592, "udp" }, { "cba8", { NULL }, 9593, "tcp" }, { "cba8", { NULL }, 9593, "udp" }, { "msgsys", { NULL }, 9594, "tcp" }, { "msgsys", { NULL }, 9594, "udp" }, { "pds", { NULL }, 9595, "tcp" }, { "pds", { NULL }, 9595, "udp" }, { "mercury-disc", { NULL }, 9596, "tcp" }, { "mercury-disc", { NULL }, 9596, "udp" }, { "pd-admin", { NULL }, 9597, "tcp" }, { "pd-admin", { NULL }, 9597, "udp" }, { "vscp", { NULL }, 9598, "tcp" }, { "vscp", { NULL }, 9598, "udp" }, { "robix", { NULL }, 9599, "tcp" }, { "robix", { NULL }, 9599, "udp" }, { "micromuse-ncpw", { NULL }, 9600, "tcp" }, { "micromuse-ncpw", { NULL }, 9600, "udp" }, { "streamcomm-ds", { NULL }, 9612, "tcp" }, { "streamcomm-ds", { NULL }, 9612, "udp" }, { "iadt-tls", { NULL }, 9614, "tcp" }, { "erunbook_agent", { NULL }, 9616, "tcp" }, { "erunbook_server", { NULL }, 9617, "tcp" }, { "condor", { NULL }, 9618, "tcp" }, { "condor", { NULL }, 9618, "udp" }, { "odbcpathway", { NULL }, 9628, "tcp" }, { "odbcpathway", { NULL }, 9628, "udp" }, { "uniport", { NULL }, 9629, "tcp" }, { "uniport", { NULL }, 9629, "udp" }, { "peoctlr", { NULL }, 9630, "tcp" }, { "peocoll", { NULL }, 9631, "tcp" }, { "mc-comm", { NULL }, 9632, "udp" }, { "pqsflows", { NULL }, 9640, "tcp" }, { "xmms2", { NULL }, 9667, "tcp" }, { "xmms2", { NULL }, 9667, "udp" }, { "tec5-sdctp", { NULL }, 9668, "tcp" }, { "tec5-sdctp", { NULL }, 9668, "udp" }, { "client-wakeup", { NULL }, 9694, "tcp" }, { "client-wakeup", { NULL }, 9694, "udp" }, { "ccnx", { NULL }, 9695, "tcp" }, { "ccnx", { NULL }, 9695, "udp" }, { "board-roar", { NULL }, 9700, "tcp" }, { "board-roar", { NULL }, 9700, "udp" }, { "l5nas-parchan", { NULL }, 9747, "tcp" }, { "l5nas-parchan", { NULL }, 9747, "udp" }, { "board-voip", { NULL }, 9750, "tcp" }, { "board-voip", { NULL }, 9750, "udp" }, { "rasadv", { NULL }, 9753, "tcp" }, { "rasadv", { NULL }, 9753, "udp" }, { "tungsten-http", { NULL }, 9762, "tcp" }, { "tungsten-http", { NULL }, 9762, "udp" }, { "davsrc", { NULL }, 9800, "tcp" }, { "davsrc", { NULL }, 9800, "udp" }, { "sstp-2", { NULL }, 9801, "tcp" }, { "sstp-2", { NULL }, 9801, "udp" }, { "davsrcs", { NULL }, 9802, "tcp" }, { "davsrcs", { NULL }, 9802, "udp" }, { "sapv1", { NULL }, 9875, "tcp" }, { "sapv1", { NULL }, 9875, "udp" }, { "sd", { NULL }, 9876, "tcp" }, { "sd", { NULL }, 9876, "udp" }, { "cyborg-systems", { NULL }, 9888, "tcp" }, { "cyborg-systems", { NULL }, 9888, "udp" }, { "gt-proxy", { NULL }, 9889, "tcp" }, { "gt-proxy", { NULL }, 9889, "udp" }, { "monkeycom", { NULL }, 9898, "tcp" }, { "monkeycom", { NULL }, 9898, "udp" }, { "sctp-tunneling", { NULL }, 9899, "tcp" }, { "sctp-tunneling", { NULL }, 9899, "udp" }, { "iua", { NULL }, 9900, "tcp" }, { "iua", { NULL }, 9900, "udp" }, { "iua", { NULL }, 9900, "sctp" 
}, { "enrp", { NULL }, 9901, "udp" }, { "enrp-sctp", { NULL }, 9901, "sctp" }, { "enrp-sctp-tls", { NULL }, 9902, "sctp" }, { "domaintime", { NULL }, 9909, "tcp" }, { "domaintime", { NULL }, 9909, "udp" }, { "sype-transport", { NULL }, 9911, "tcp" }, { "sype-transport", { NULL }, 9911, "udp" }, { "apc-9950", { NULL }, 9950, "tcp" }, { "apc-9950", { NULL }, 9950, "udp" }, { "apc-9951", { NULL }, 9951, "tcp" }, { "apc-9951", { NULL }, 9951, "udp" }, { "apc-9952", { NULL }, 9952, "tcp" }, { "apc-9952", { NULL }, 9952, "udp" }, { "acis", { NULL }, 9953, "tcp" }, { "acis", { NULL }, 9953, "udp" }, { "odnsp", { NULL }, 9966, "tcp" }, { "odnsp", { NULL }, 9966, "udp" }, { "dsm-scm-target", { NULL }, 9987, "tcp" }, { "dsm-scm-target", { NULL }, 9987, "udp" }, { "nsesrvr", { NULL }, 9988, "tcp" }, { "osm-appsrvr", { NULL }, 9990, "tcp" }, { "osm-appsrvr", { NULL }, 9990, "udp" }, { "osm-oev", { NULL }, 9991, "tcp" }, { "osm-oev", { NULL }, 9991, "udp" }, { "palace-1", { NULL }, 9992, "tcp" }, { "palace-1", { NULL }, 9992, "udp" }, { "palace-2", { NULL }, 9993, "tcp" }, { "palace-2", { NULL }, 9993, "udp" }, { "palace-3", { NULL }, 9994, "tcp" }, { "palace-3", { NULL }, 9994, "udp" }, { "palace-4", { NULL }, 9995, "tcp" }, { "palace-4", { NULL }, 9995, "udp" }, { "palace-5", { NULL }, 9996, "tcp" }, { "palace-5", { NULL }, 9996, "udp" }, { "palace-6", { NULL }, 9997, "tcp" }, { "palace-6", { NULL }, 9997, "udp" }, { "distinct32", { NULL }, 9998, "tcp" }, { "distinct32", { NULL }, 9998, "udp" }, { "distinct", { NULL }, 9999, "tcp" }, { "distinct", { NULL }, 9999, "udp" }, { "ndmp", { NULL }, 10000, "tcp" }, { "ndmp", { NULL }, 10000, "udp" }, { "scp-config", { NULL }, 10001, "tcp" }, { "scp-config", { NULL }, 10001, "udp" }, { "documentum", { NULL }, 10002, "tcp" }, { "documentum", { NULL }, 10002, "udp" }, { "documentum_s", { NULL }, 10003, "tcp" }, { "documentum_s", { NULL }, 10003, "udp" }, { "emcrmirccd", { NULL }, 10004, "tcp" }, { "emcrmird", { NULL }, 10005, "tcp" }, { "mvs-capacity", { NULL }, 10007, "tcp" }, { "mvs-capacity", { NULL }, 10007, "udp" }, { "octopus", { NULL }, 10008, "tcp" }, { "octopus", { NULL }, 10008, "udp" }, { "swdtp-sv", { NULL }, 10009, "tcp" }, { "swdtp-sv", { NULL }, 10009, "udp" }, { "rxapi", { NULL }, 10010, "tcp" }, { "zabbix-agent", { NULL }, 10050, "tcp" }, { "zabbix-agent", { NULL }, 10050, "udp" }, { "zabbix-trapper", { NULL }, 10051, "tcp" }, { "zabbix-trapper", { NULL }, 10051, "udp" }, { "qptlmd", { NULL }, 10055, "tcp" }, { "amanda", { NULL }, 10080, "tcp" }, { "amanda", { NULL }, 10080, "udp" }, { "famdc", { NULL }, 10081, "tcp" }, { "famdc", { NULL }, 10081, "udp" }, { "itap-ddtp", { NULL }, 10100, "tcp" }, { "itap-ddtp", { NULL }, 10100, "udp" }, { "ezmeeting-2", { NULL }, 10101, "tcp" }, { "ezmeeting-2", { NULL }, 10101, "udp" }, { "ezproxy-2", { NULL }, 10102, "tcp" }, { "ezproxy-2", { NULL }, 10102, "udp" }, { "ezrelay", { NULL }, 10103, "tcp" }, { "ezrelay", { NULL }, 10103, "udp" }, { "swdtp", { NULL }, 10104, "tcp" }, { "swdtp", { NULL }, 10104, "udp" }, { "bctp-server", { NULL }, 10107, "tcp" }, { "bctp-server", { NULL }, 10107, "udp" }, { "nmea-0183", { NULL }, 10110, "tcp" }, { "nmea-0183", { NULL }, 10110, "udp" }, { "netiq-endpoint", { NULL }, 10113, "tcp" }, { "netiq-endpoint", { NULL }, 10113, "udp" }, { "netiq-qcheck", { NULL }, 10114, "tcp" }, { "netiq-qcheck", { NULL }, 10114, "udp" }, { "netiq-endpt", { NULL }, 10115, "tcp" }, { "netiq-endpt", { NULL }, 10115, "udp" }, { "netiq-voipa", { NULL }, 10116, "tcp" }, { "netiq-voipa", { NULL }, 
10116, "udp" }, { "iqrm", { NULL }, 10117, "tcp" }, { "iqrm", { NULL }, 10117, "udp" }, { "bmc-perf-sd", { NULL }, 10128, "tcp" }, { "bmc-perf-sd", { NULL }, 10128, "udp" }, { "bmc-gms", { NULL }, 10129, "tcp" }, { "qb-db-server", { NULL }, 10160, "tcp" }, { "qb-db-server", { NULL }, 10160, "udp" }, { "snmptls", { NULL }, 10161, "tcp" }, { "snmpdtls", { NULL }, 10161, "udp" }, { "snmptls-trap", { NULL }, 10162, "tcp" }, { "snmpdtls-trap", { NULL }, 10162, "udp" }, { "trisoap", { NULL }, 10200, "tcp" }, { "trisoap", { NULL }, 10200, "udp" }, { "rsms", { NULL }, 10201, "tcp" }, { "rscs", { NULL }, 10201, "udp" }, { "apollo-relay", { NULL }, 10252, "tcp" }, { "apollo-relay", { NULL }, 10252, "udp" }, { "axis-wimp-port", { NULL }, 10260, "tcp" }, { "axis-wimp-port", { NULL }, 10260, "udp" }, { "blocks", { NULL }, 10288, "tcp" }, { "blocks", { NULL }, 10288, "udp" }, { "cosir", { NULL }, 10321, "tcp" }, { "hip-nat-t", { NULL }, 10500, "udp" }, { "MOS-lower", { NULL }, 10540, "tcp" }, { "MOS-lower", { NULL }, 10540, "udp" }, { "MOS-upper", { NULL }, 10541, "tcp" }, { "MOS-upper", { NULL }, 10541, "udp" }, { "MOS-aux", { NULL }, 10542, "tcp" }, { "MOS-aux", { NULL }, 10542, "udp" }, { "MOS-soap", { NULL }, 10543, "tcp" }, { "MOS-soap", { NULL }, 10543, "udp" }, { "MOS-soap-opt", { NULL }, 10544, "tcp" }, { "MOS-soap-opt", { NULL }, 10544, "udp" }, { "gap", { NULL }, 10800, "tcp" }, { "gap", { NULL }, 10800, "udp" }, { "lpdg", { NULL }, 10805, "tcp" }, { "lpdg", { NULL }, 10805, "udp" }, { "nbd", { NULL }, 10809, "tcp" }, { "nmc-disc", { NULL }, 10810, "udp" }, { "helix", { NULL }, 10860, "tcp" }, { "helix", { NULL }, 10860, "udp" }, { "rmiaux", { NULL }, 10990, "tcp" }, { "rmiaux", { NULL }, 10990, "udp" }, { "irisa", { NULL }, 11000, "tcp" }, { "irisa", { NULL }, 11000, "udp" }, { "metasys", { NULL }, 11001, "tcp" }, { "metasys", { NULL }, 11001, "udp" }, { "netapp-icmgmt", { NULL }, 11104, "tcp" }, { "netapp-icdata", { NULL }, 11105, "tcp" }, { "sgi-lk", { NULL }, 11106, "tcp" }, { "sgi-lk", { NULL }, 11106, "udp" }, { "vce", { NULL }, 11111, "tcp" }, { "vce", { NULL }, 11111, "udp" }, { "dicom", { NULL }, 11112, "tcp" }, { "dicom", { NULL }, 11112, "udp" }, { "suncacao-snmp", { NULL }, 11161, "tcp" }, { "suncacao-snmp", { NULL }, 11161, "udp" }, { "suncacao-jmxmp", { NULL }, 11162, "tcp" }, { "suncacao-jmxmp", { NULL }, 11162, "udp" }, { "suncacao-rmi", { NULL }, 11163, "tcp" }, { "suncacao-rmi", { NULL }, 11163, "udp" }, { "suncacao-csa", { NULL }, 11164, "tcp" }, { "suncacao-csa", { NULL }, 11164, "udp" }, { "suncacao-websvc", { NULL }, 11165, "tcp" }, { "suncacao-websvc", { NULL }, 11165, "udp" }, { "snss", { NULL }, 11171, "udp" }, { "oemcacao-jmxmp", { NULL }, 11172, "tcp" }, { "oemcacao-rmi", { NULL }, 11174, "tcp" }, { "oemcacao-websvc", { NULL }, 11175, "tcp" }, { "smsqp", { NULL }, 11201, "tcp" }, { "smsqp", { NULL }, 11201, "udp" }, { "wifree", { NULL }, 11208, "tcp" }, { "wifree", { NULL }, 11208, "udp" }, { "memcache", { NULL }, 11211, "tcp" }, { "memcache", { NULL }, 11211, "udp" }, { "imip", { NULL }, 11319, "tcp" }, { "imip", { NULL }, 11319, "udp" }, { "imip-channels", { NULL }, 11320, "tcp" }, { "imip-channels", { NULL }, 11320, "udp" }, { "arena-server", { NULL }, 11321, "tcp" }, { "arena-server", { NULL }, 11321, "udp" }, { "atm-uhas", { NULL }, 11367, "tcp" }, { "atm-uhas", { NULL }, 11367, "udp" }, { "hkp", { NULL }, 11371, "tcp" }, { "hkp", { NULL }, 11371, "udp" }, { "asgcypresstcps", { NULL }, 11489, "tcp" }, { "tempest-port", { NULL }, 11600, "tcp" }, { "tempest-port", 
{ NULL }, 11600, "udp" }, { "h323callsigalt", { NULL }, 11720, "tcp" }, { "h323callsigalt", { NULL }, 11720, "udp" }, { "intrepid-ssl", { NULL }, 11751, "tcp" }, { "intrepid-ssl", { NULL }, 11751, "udp" }, { "xoraya", { NULL }, 11876, "tcp" }, { "xoraya", { NULL }, 11876, "udp" }, { "x2e-disc", { NULL }, 11877, "udp" }, { "sysinfo-sp", { NULL }, 11967, "tcp" }, { "sysinfo-sp", { NULL }, 11967, "udp" }, { "wmereceiving", { NULL }, 11997, "sctp" }, { "wmedistribution", { NULL }, 11998, "sctp" }, { "wmereporting", { NULL }, 11999, "sctp" }, { "entextxid", { NULL }, 12000, "tcp" }, { "entextxid", { NULL }, 12000, "udp" }, { "entextnetwk", { NULL }, 12001, "tcp" }, { "entextnetwk", { NULL }, 12001, "udp" }, { "entexthigh", { NULL }, 12002, "tcp" }, { "entexthigh", { NULL }, 12002, "udp" }, { "entextmed", { NULL }, 12003, "tcp" }, { "entextmed", { NULL }, 12003, "udp" }, { "entextlow", { NULL }, 12004, "tcp" }, { "entextlow", { NULL }, 12004, "udp" }, { "dbisamserver1", { NULL }, 12005, "tcp" }, { "dbisamserver1", { NULL }, 12005, "udp" }, { "dbisamserver2", { NULL }, 12006, "tcp" }, { "dbisamserver2", { NULL }, 12006, "udp" }, { "accuracer", { NULL }, 12007, "tcp" }, { "accuracer", { NULL }, 12007, "udp" }, { "accuracer-dbms", { NULL }, 12008, "tcp" }, { "accuracer-dbms", { NULL }, 12008, "udp" }, { "edbsrvr", { NULL }, 12010, "tcp" }, { "vipera", { NULL }, 12012, "tcp" }, { "vipera", { NULL }, 12012, "udp" }, { "vipera-ssl", { NULL }, 12013, "tcp" }, { "vipera-ssl", { NULL }, 12013, "udp" }, { "rets-ssl", { NULL }, 12109, "tcp" }, { "rets-ssl", { NULL }, 12109, "udp" }, { "nupaper-ss", { NULL }, 12121, "tcp" }, { "nupaper-ss", { NULL }, 12121, "udp" }, { "cawas", { NULL }, 12168, "tcp" }, { "cawas", { NULL }, 12168, "udp" }, { "hivep", { NULL }, 12172, "tcp" }, { "hivep", { NULL }, 12172, "udp" }, { "linogridengine", { NULL }, 12300, "tcp" }, { "linogridengine", { NULL }, 12300, "udp" }, { "warehouse-sss", { NULL }, 12321, "tcp" }, { "warehouse-sss", { NULL }, 12321, "udp" }, { "warehouse", { NULL }, 12322, "tcp" }, { "warehouse", { NULL }, 12322, "udp" }, { "italk", { NULL }, 12345, "tcp" }, { "italk", { NULL }, 12345, "udp" }, { "tsaf", { NULL }, 12753, "tcp" }, { "tsaf", { NULL }, 12753, "udp" }, { "i-zipqd", { NULL }, 13160, "tcp" }, { "i-zipqd", { NULL }, 13160, "udp" }, { "bcslogc", { NULL }, 13216, "tcp" }, { "bcslogc", { NULL }, 13216, "udp" }, { "rs-pias", { NULL }, 13217, "tcp" }, { "rs-pias", { NULL }, 13217, "udp" }, { "emc-vcas-tcp", { NULL }, 13218, "tcp" }, { "emc-vcas-udp", { NULL }, 13218, "udp" }, { "powwow-client", { NULL }, 13223, "tcp" }, { "powwow-client", { NULL }, 13223, "udp" }, { "powwow-server", { NULL }, 13224, "tcp" }, { "powwow-server", { NULL }, 13224, "udp" }, { "doip-data", { NULL }, 13400, "tcp" }, { "doip-disc", { NULL }, 13400, "udp" }, { "bprd", { NULL }, 13720, "tcp" }, { "bprd", { NULL }, 13720, "udp" }, { "bpdbm", { NULL }, 13721, "tcp" }, { "bpdbm", { NULL }, 13721, "udp" }, { "bpjava-msvc", { NULL }, 13722, "tcp" }, { "bpjava-msvc", { NULL }, 13722, "udp" }, { "vnetd", { NULL }, 13724, "tcp" }, { "vnetd", { NULL }, 13724, "udp" }, { "bpcd", { NULL }, 13782, "tcp" }, { "bpcd", { NULL }, 13782, "udp" }, { "vopied", { NULL }, 13783, "tcp" }, { "vopied", { NULL }, 13783, "udp" }, { "nbdb", { NULL }, 13785, "tcp" }, { "nbdb", { NULL }, 13785, "udp" }, { "nomdb", { NULL }, 13786, "tcp" }, { "nomdb", { NULL }, 13786, "udp" }, { "dsmcc-config", { NULL }, 13818, "tcp" }, { "dsmcc-config", { NULL }, 13818, "udp" }, { "dsmcc-session", { NULL }, 13819, "tcp" }, { 
"dsmcc-session", { NULL }, 13819, "udp" }, { "dsmcc-passthru", { NULL }, 13820, "tcp" }, { "dsmcc-passthru", { NULL }, 13820, "udp" }, { "dsmcc-download", { NULL }, 13821, "tcp" }, { "dsmcc-download", { NULL }, 13821, "udp" }, { "dsmcc-ccp", { NULL }, 13822, "tcp" }, { "dsmcc-ccp", { NULL }, 13822, "udp" }, { "bmdss", { NULL }, 13823, "tcp" }, { "dta-systems", { NULL }, 13929, "tcp" }, { "dta-systems", { NULL }, 13929, "udp" }, { "medevolve", { NULL }, 13930, "tcp" }, { "scotty-ft", { NULL }, 14000, "tcp" }, { "scotty-ft", { NULL }, 14000, "udp" }, { "sua", { NULL }, 14001, "tcp" }, { "sua", { NULL }, 14001, "udp" }, { "sua", { NULL }, 14001, "sctp" }, { "sage-best-com1", { NULL }, 14033, "tcp" }, { "sage-best-com1", { NULL }, 14033, "udp" }, { "sage-best-com2", { NULL }, 14034, "tcp" }, { "sage-best-com2", { NULL }, 14034, "udp" }, { "vcs-app", { NULL }, 14141, "tcp" }, { "vcs-app", { NULL }, 14141, "udp" }, { "icpp", { NULL }, 14142, "tcp" }, { "icpp", { NULL }, 14142, "udp" }, { "gcm-app", { NULL }, 14145, "tcp" }, { "gcm-app", { NULL }, 14145, "udp" }, { "vrts-tdd", { NULL }, 14149, "tcp" }, { "vrts-tdd", { NULL }, 14149, "udp" }, { "vcscmd", { NULL }, 14150, "tcp" }, { "vad", { NULL }, 14154, "tcp" }, { "vad", { NULL }, 14154, "udp" }, { "cps", { NULL }, 14250, "tcp" }, { "cps", { NULL }, 14250, "udp" }, { "ca-web-update", { NULL }, 14414, "tcp" }, { "ca-web-update", { NULL }, 14414, "udp" }, { "hde-lcesrvr-1", { NULL }, 14936, "tcp" }, { "hde-lcesrvr-1", { NULL }, 14936, "udp" }, { "hde-lcesrvr-2", { NULL }, 14937, "tcp" }, { "hde-lcesrvr-2", { NULL }, 14937, "udp" }, { "hydap", { NULL }, 15000, "tcp" }, { "hydap", { NULL }, 15000, "udp" }, { "xpilot", { NULL }, 15345, "tcp" }, { "xpilot", { NULL }, 15345, "udp" }, { "3link", { NULL }, 15363, "tcp" }, { "3link", { NULL }, 15363, "udp" }, { "cisco-snat", { NULL }, 15555, "tcp" }, { "cisco-snat", { NULL }, 15555, "udp" }, { "bex-xr", { NULL }, 15660, "tcp" }, { "bex-xr", { NULL }, 15660, "udp" }, { "ptp", { NULL }, 15740, "tcp" }, { "ptp", { NULL }, 15740, "udp" }, { "2ping", { NULL }, 15998, "udp" }, { "programmar", { NULL }, 15999, "tcp" }, { "fmsas", { NULL }, 16000, "tcp" }, { "fmsascon", { NULL }, 16001, "tcp" }, { "gsms", { NULL }, 16002, "tcp" }, { "alfin", { NULL }, 16003, "udp" }, { "jwpc", { NULL }, 16020, "tcp" }, { "jwpc-bin", { NULL }, 16021, "tcp" }, { "sun-sea-port", { NULL }, 16161, "tcp" }, { "sun-sea-port", { NULL }, 16161, "udp" }, { "solaris-audit", { NULL }, 16162, "tcp" }, { "etb4j", { NULL }, 16309, "tcp" }, { "etb4j", { NULL }, 16309, "udp" }, { "pduncs", { NULL }, 16310, "tcp" }, { "pduncs", { NULL }, 16310, "udp" }, { "pdefmns", { NULL }, 16311, "tcp" }, { "pdefmns", { NULL }, 16311, "udp" }, { "netserialext1", { NULL }, 16360, "tcp" }, { "netserialext1", { NULL }, 16360, "udp" }, { "netserialext2", { NULL }, 16361, "tcp" }, { "netserialext2", { NULL }, 16361, "udp" }, { "netserialext3", { NULL }, 16367, "tcp" }, { "netserialext3", { NULL }, 16367, "udp" }, { "netserialext4", { NULL }, 16368, "tcp" }, { "netserialext4", { NULL }, 16368, "udp" }, { "connected", { NULL }, 16384, "tcp" }, { "connected", { NULL }, 16384, "udp" }, { "xoms", { NULL }, 16619, "tcp" }, { "newbay-snc-mc", { NULL }, 16900, "tcp" }, { "newbay-snc-mc", { NULL }, 16900, "udp" }, { "sgcip", { NULL }, 16950, "tcp" }, { "sgcip", { NULL }, 16950, "udp" }, { "intel-rci-mp", { NULL }, 16991, "tcp" }, { "intel-rci-mp", { NULL }, 16991, "udp" }, { "amt-soap-http", { NULL }, 16992, "tcp" }, { "amt-soap-http", { NULL }, 16992, "udp" }, { 
"amt-soap-https", { NULL }, 16993, "tcp" }, { "amt-soap-https", { NULL }, 16993, "udp" }, { "amt-redir-tcp", { NULL }, 16994, "tcp" }, { "amt-redir-tcp", { NULL }, 16994, "udp" }, { "amt-redir-tls", { NULL }, 16995, "tcp" }, { "amt-redir-tls", { NULL }, 16995, "udp" }, { "isode-dua", { NULL }, 17007, "tcp" }, { "isode-dua", { NULL }, 17007, "udp" }, { "soundsvirtual", { NULL }, 17185, "tcp" }, { "soundsvirtual", { NULL }, 17185, "udp" }, { "chipper", { NULL }, 17219, "tcp" }, { "chipper", { NULL }, 17219, "udp" }, { "integrius-stp", { NULL }, 17234, "tcp" }, { "integrius-stp", { NULL }, 17234, "udp" }, { "ssh-mgmt", { NULL }, 17235, "tcp" }, { "ssh-mgmt", { NULL }, 17235, "udp" }, { "db-lsp", { NULL }, 17500, "tcp" }, { "db-lsp-disc", { NULL }, 17500, "udp" }, { "ea", { NULL }, 17729, "tcp" }, { "ea", { NULL }, 17729, "udp" }, { "zep", { NULL }, 17754, "tcp" }, { "zep", { NULL }, 17754, "udp" }, { "zigbee-ip", { NULL }, 17755, "tcp" }, { "zigbee-ip", { NULL }, 17755, "udp" }, { "zigbee-ips", { NULL }, 17756, "tcp" }, { "zigbee-ips", { NULL }, 17756, "udp" }, { "sw-orion", { NULL }, 17777, "tcp" }, { "biimenu", { NULL }, 18000, "tcp" }, { "biimenu", { NULL }, 18000, "udp" }, { "radpdf", { NULL }, 18104, "tcp" }, { "racf", { NULL }, 18136, "tcp" }, { "opsec-cvp", { NULL }, 18181, "tcp" }, { "opsec-cvp", { NULL }, 18181, "udp" }, { "opsec-ufp", { NULL }, 18182, "tcp" }, { "opsec-ufp", { NULL }, 18182, "udp" }, { "opsec-sam", { NULL }, 18183, "tcp" }, { "opsec-sam", { NULL }, 18183, "udp" }, { "opsec-lea", { NULL }, 18184, "tcp" }, { "opsec-lea", { NULL }, 18184, "udp" }, { "opsec-omi", { NULL }, 18185, "tcp" }, { "opsec-omi", { NULL }, 18185, "udp" }, { "ohsc", { NULL }, 18186, "tcp" }, { "ohsc", { NULL }, 18186, "udp" }, { "opsec-ela", { NULL }, 18187, "tcp" }, { "opsec-ela", { NULL }, 18187, "udp" }, { "checkpoint-rtm", { NULL }, 18241, "tcp" }, { "checkpoint-rtm", { NULL }, 18241, "udp" }, { "gv-pf", { NULL }, 18262, "tcp" }, { "gv-pf", { NULL }, 18262, "udp" }, { "ac-cluster", { NULL }, 18463, "tcp" }, { "ac-cluster", { NULL }, 18463, "udp" }, { "rds-ib", { NULL }, 18634, "tcp" }, { "rds-ib", { NULL }, 18634, "udp" }, { "rds-ip", { NULL }, 18635, "tcp" }, { "rds-ip", { NULL }, 18635, "udp" }, { "ique", { NULL }, 18769, "tcp" }, { "ique", { NULL }, 18769, "udp" }, { "infotos", { NULL }, 18881, "tcp" }, { "infotos", { NULL }, 18881, "udp" }, { "apc-necmp", { NULL }, 18888, "tcp" }, { "apc-necmp", { NULL }, 18888, "udp" }, { "igrid", { NULL }, 19000, "tcp" }, { "igrid", { NULL }, 19000, "udp" }, { "j-link", { NULL }, 19020, "tcp" }, { "opsec-uaa", { NULL }, 19191, "tcp" }, { "opsec-uaa", { NULL }, 19191, "udp" }, { "ua-secureagent", { NULL }, 19194, "tcp" }, { "ua-secureagent", { NULL }, 19194, "udp" }, { "keysrvr", { NULL }, 19283, "tcp" }, { "keysrvr", { NULL }, 19283, "udp" }, { "keyshadow", { NULL }, 19315, "tcp" }, { "keyshadow", { NULL }, 19315, "udp" }, { "mtrgtrans", { NULL }, 19398, "tcp" }, { "mtrgtrans", { NULL }, 19398, "udp" }, { "hp-sco", { NULL }, 19410, "tcp" }, { "hp-sco", { NULL }, 19410, "udp" }, { "hp-sca", { NULL }, 19411, "tcp" }, { "hp-sca", { NULL }, 19411, "udp" }, { "hp-sessmon", { NULL }, 19412, "tcp" }, { "hp-sessmon", { NULL }, 19412, "udp" }, { "fxuptp", { NULL }, 19539, "tcp" }, { "fxuptp", { NULL }, 19539, "udp" }, { "sxuptp", { NULL }, 19540, "tcp" }, { "sxuptp", { NULL }, 19540, "udp" }, { "jcp", { NULL }, 19541, "tcp" }, { "jcp", { NULL }, 19541, "udp" }, { "iec-104-sec", { NULL }, 19998, "tcp" }, { "dnp-sec", { NULL }, 19999, "tcp" }, { "dnp-sec", { NULL 
}, 19999, "udp" }, { "dnp", { NULL }, 20000, "tcp" }, { "dnp", { NULL }, 20000, "udp" }, { "microsan", { NULL }, 20001, "tcp" }, { "microsan", { NULL }, 20001, "udp" }, { "commtact-http", { NULL }, 20002, "tcp" }, { "commtact-http", { NULL }, 20002, "udp" }, { "commtact-https", { NULL }, 20003, "tcp" }, { "commtact-https", { NULL }, 20003, "udp" }, { "openwebnet", { NULL }, 20005, "tcp" }, { "openwebnet", { NULL }, 20005, "udp" }, { "ss-idi-disc", { NULL }, 20012, "udp" }, { "ss-idi", { NULL }, 20013, "tcp" }, { "opendeploy", { NULL }, 20014, "tcp" }, { "opendeploy", { NULL }, 20014, "udp" }, { "nburn_id", { NULL }, 20034, "tcp" }, { "nburn_id", { NULL }, 20034, "udp" }, { "tmophl7mts", { NULL }, 20046, "tcp" }, { "tmophl7mts", { NULL }, 20046, "udp" }, { "mountd", { NULL }, 20048, "tcp" }, { "mountd", { NULL }, 20048, "udp" }, { "nfsrdma", { NULL }, 20049, "tcp" }, { "nfsrdma", { NULL }, 20049, "udp" }, { "nfsrdma", { NULL }, 20049, "sctp" }, { "tolfab", { NULL }, 20167, "tcp" }, { "tolfab", { NULL }, 20167, "udp" }, { "ipdtp-port", { NULL }, 20202, "tcp" }, { "ipdtp-port", { NULL }, 20202, "udp" }, { "ipulse-ics", { NULL }, 20222, "tcp" }, { "ipulse-ics", { NULL }, 20222, "udp" }, { "emwavemsg", { NULL }, 20480, "tcp" }, { "emwavemsg", { NULL }, 20480, "udp" }, { "track", { NULL }, 20670, "tcp" }, { "track", { NULL }, 20670, "udp" }, { "athand-mmp", { NULL }, 20999, "tcp" }, { "athand-mmp", { NULL }, 20999, "udp" }, { "irtrans", { NULL }, 21000, "tcp" }, { "irtrans", { NULL }, 21000, "udp" }, { "dfserver", { NULL }, 21554, "tcp" }, { "dfserver", { NULL }, 21554, "udp" }, { "vofr-gateway", { NULL }, 21590, "tcp" }, { "vofr-gateway", { NULL }, 21590, "udp" }, { "tvpm", { NULL }, 21800, "tcp" }, { "tvpm", { NULL }, 21800, "udp" }, { "webphone", { NULL }, 21845, "tcp" }, { "webphone", { NULL }, 21845, "udp" }, { "netspeak-is", { NULL }, 21846, "tcp" }, { "netspeak-is", { NULL }, 21846, "udp" }, { "netspeak-cs", { NULL }, 21847, "tcp" }, { "netspeak-cs", { NULL }, 21847, "udp" }, { "netspeak-acd", { NULL }, 21848, "tcp" }, { "netspeak-acd", { NULL }, 21848, "udp" }, { "netspeak-cps", { NULL }, 21849, "tcp" }, { "netspeak-cps", { NULL }, 21849, "udp" }, { "snapenetio", { NULL }, 22000, "tcp" }, { "snapenetio", { NULL }, 22000, "udp" }, { "optocontrol", { NULL }, 22001, "tcp" }, { "optocontrol", { NULL }, 22001, "udp" }, { "optohost002", { NULL }, 22002, "tcp" }, { "optohost002", { NULL }, 22002, "udp" }, { "optohost003", { NULL }, 22003, "tcp" }, { "optohost003", { NULL }, 22003, "udp" }, { "optohost004", { NULL }, 22004, "tcp" }, { "optohost004", { NULL }, 22004, "udp" }, { "optohost004", { NULL }, 22005, "tcp" }, { "optohost004", { NULL }, 22005, "udp" }, { "dcap", { NULL }, 22125, "tcp" }, { "gsidcap", { NULL }, 22128, "tcp" }, { "wnn6", { NULL }, 22273, "tcp" }, { "wnn6", { NULL }, 22273, "udp" }, { "cis", { NULL }, 22305, "tcp" }, { "cis", { NULL }, 22305, "udp" }, { "cis-secure", { NULL }, 22343, "tcp" }, { "cis-secure", { NULL }, 22343, "udp" }, { "WibuKey", { NULL }, 22347, "tcp" }, { "WibuKey", { NULL }, 22347, "udp" }, { "CodeMeter", { NULL }, 22350, "tcp" }, { "CodeMeter", { NULL }, 22350, "udp" }, { "vocaltec-wconf", { NULL }, 22555, "tcp" }, { "vocaltec-phone", { NULL }, 22555, "udp" }, { "talikaserver", { NULL }, 22763, "tcp" }, { "talikaserver", { NULL }, 22763, "udp" }, { "aws-brf", { NULL }, 22800, "tcp" }, { "aws-brf", { NULL }, 22800, "udp" }, { "brf-gw", { NULL }, 22951, "tcp" }, { "brf-gw", { NULL }, 22951, "udp" }, { "inovaport1", { NULL }, 23000, "tcp" }, { "inovaport1", 
{ NULL }, 23000, "udp" }, { "inovaport2", { NULL }, 23001, "tcp" }, { "inovaport2", { NULL }, 23001, "udp" }, { "inovaport3", { NULL }, 23002, "tcp" }, { "inovaport3", { NULL }, 23002, "udp" }, { "inovaport4", { NULL }, 23003, "tcp" }, { "inovaport4", { NULL }, 23003, "udp" }, { "inovaport5", { NULL }, 23004, "tcp" }, { "inovaport5", { NULL }, 23004, "udp" }, { "inovaport6", { NULL }, 23005, "tcp" }, { "inovaport6", { NULL }, 23005, "udp" }, { "s102", { NULL }, 23272, "udp" }, { "elxmgmt", { NULL }, 23333, "tcp" }, { "elxmgmt", { NULL }, 23333, "udp" }, { "novar-dbase", { NULL }, 23400, "tcp" }, { "novar-dbase", { NULL }, 23400, "udp" }, { "novar-alarm", { NULL }, 23401, "tcp" }, { "novar-alarm", { NULL }, 23401, "udp" }, { "novar-global", { NULL }, 23402, "tcp" }, { "novar-global", { NULL }, 23402, "udp" }, { "aequus", { NULL }, 23456, "tcp" }, { "aequus-alt", { NULL }, 23457, "tcp" }, { "med-ltp", { NULL }, 24000, "tcp" }, { "med-ltp", { NULL }, 24000, "udp" }, { "med-fsp-rx", { NULL }, 24001, "tcp" }, { "med-fsp-rx", { NULL }, 24001, "udp" }, { "med-fsp-tx", { NULL }, 24002, "tcp" }, { "med-fsp-tx", { NULL }, 24002, "udp" }, { "med-supp", { NULL }, 24003, "tcp" }, { "med-supp", { NULL }, 24003, "udp" }, { "med-ovw", { NULL }, 24004, "tcp" }, { "med-ovw", { NULL }, 24004, "udp" }, { "med-ci", { NULL }, 24005, "tcp" }, { "med-ci", { NULL }, 24005, "udp" }, { "med-net-svc", { NULL }, 24006, "tcp" }, { "med-net-svc", { NULL }, 24006, "udp" }, { "filesphere", { NULL }, 24242, "tcp" }, { "filesphere", { NULL }, 24242, "udp" }, { "vista-4gl", { NULL }, 24249, "tcp" }, { "vista-4gl", { NULL }, 24249, "udp" }, { "ild", { NULL }, 24321, "tcp" }, { "ild", { NULL }, 24321, "udp" }, { "intel_rci", { NULL }, 24386, "tcp" }, { "intel_rci", { NULL }, 24386, "udp" }, { "tonidods", { NULL }, 24465, "tcp" }, { "tonidods", { NULL }, 24465, "udp" }, { "binkp", { NULL }, 24554, "tcp" }, { "binkp", { NULL }, 24554, "udp" }, { "canditv", { NULL }, 24676, "tcp" }, { "canditv", { NULL }, 24676, "udp" }, { "flashfiler", { NULL }, 24677, "tcp" }, { "flashfiler", { NULL }, 24677, "udp" }, { "proactivate", { NULL }, 24678, "tcp" }, { "proactivate", { NULL }, 24678, "udp" }, { "tcc-http", { NULL }, 24680, "tcp" }, { "tcc-http", { NULL }, 24680, "udp" }, { "cslg", { NULL }, 24754, "tcp" }, { "find", { NULL }, 24922, "tcp" }, { "find", { NULL }, 24922, "udp" }, { "icl-twobase1", { NULL }, 25000, "tcp" }, { "icl-twobase1", { NULL }, 25000, "udp" }, { "icl-twobase2", { NULL }, 25001, "tcp" }, { "icl-twobase2", { NULL }, 25001, "udp" }, { "icl-twobase3", { NULL }, 25002, "tcp" }, { "icl-twobase3", { NULL }, 25002, "udp" }, { "icl-twobase4", { NULL }, 25003, "tcp" }, { "icl-twobase4", { NULL }, 25003, "udp" }, { "icl-twobase5", { NULL }, 25004, "tcp" }, { "icl-twobase5", { NULL }, 25004, "udp" }, { "icl-twobase6", { NULL }, 25005, "tcp" }, { "icl-twobase6", { NULL }, 25005, "udp" }, { "icl-twobase7", { NULL }, 25006, "tcp" }, { "icl-twobase7", { NULL }, 25006, "udp" }, { "icl-twobase8", { NULL }, 25007, "tcp" }, { "icl-twobase8", { NULL }, 25007, "udp" }, { "icl-twobase9", { NULL }, 25008, "tcp" }, { "icl-twobase9", { NULL }, 25008, "udp" }, { "icl-twobase10", { NULL }, 25009, "tcp" }, { "icl-twobase10", { NULL }, 25009, "udp" }, { "rna", { NULL }, 25471, "sctp" }, { "sauterdongle", { NULL }, 25576, "tcp" }, { "vocaltec-hos", { NULL }, 25793, "tcp" }, { "vocaltec-hos", { NULL }, 25793, "udp" }, { "tasp-net", { NULL }, 25900, "tcp" }, { "tasp-net", { NULL }, 25900, "udp" }, { "niobserver", { NULL }, 25901, "tcp" }, { 
"niobserver", { NULL }, 25901, "udp" }, { "nilinkanalyst", { NULL }, 25902, "tcp" }, { "nilinkanalyst", { NULL }, 25902, "udp" }, { "niprobe", { NULL }, 25903, "tcp" }, { "niprobe", { NULL }, 25903, "udp" }, { "quake", { NULL }, 26000, "tcp" }, { "quake", { NULL }, 26000, "udp" }, { "scscp", { NULL }, 26133, "tcp" }, { "scscp", { NULL }, 26133, "udp" }, { "wnn6-ds", { NULL }, 26208, "tcp" }, { "wnn6-ds", { NULL }, 26208, "udp" }, { "ezproxy", { NULL }, 26260, "tcp" }, { "ezproxy", { NULL }, 26260, "udp" }, { "ezmeeting", { NULL }, 26261, "tcp" }, { "ezmeeting", { NULL }, 26261, "udp" }, { "k3software-svr", { NULL }, 26262, "tcp" }, { "k3software-svr", { NULL }, 26262, "udp" }, { "k3software-cli", { NULL }, 26263, "tcp" }, { "k3software-cli", { NULL }, 26263, "udp" }, { "exoline-tcp", { NULL }, 26486, "tcp" }, { "exoline-udp", { NULL }, 26486, "udp" }, { "exoconfig", { NULL }, 26487, "tcp" }, { "exoconfig", { NULL }, 26487, "udp" }, { "exonet", { NULL }, 26489, "tcp" }, { "exonet", { NULL }, 26489, "udp" }, { "imagepump", { NULL }, 27345, "tcp" }, { "imagepump", { NULL }, 27345, "udp" }, { "jesmsjc", { NULL }, 27442, "tcp" }, { "jesmsjc", { NULL }, 27442, "udp" }, { "kopek-httphead", { NULL }, 27504, "tcp" }, { "kopek-httphead", { NULL }, 27504, "udp" }, { "ars-vista", { NULL }, 27782, "tcp" }, { "ars-vista", { NULL }, 27782, "udp" }, { "tw-auth-key", { NULL }, 27999, "tcp" }, { "tw-auth-key", { NULL }, 27999, "udp" }, { "nxlmd", { NULL }, 28000, "tcp" }, { "nxlmd", { NULL }, 28000, "udp" }, { "pqsp", { NULL }, 28001, "tcp" }, { "siemensgsm", { NULL }, 28240, "tcp" }, { "siemensgsm", { NULL }, 28240, "udp" }, { "sgsap", { NULL }, 29118, "sctp" }, { "otmp", { NULL }, 29167, "tcp" }, { "otmp", { NULL }, 29167, "udp" }, { "sbcap", { NULL }, 29168, "sctp" }, { "iuhsctpassoc", { NULL }, 29169, "sctp" }, { "pago-services1", { NULL }, 30001, "tcp" }, { "pago-services1", { NULL }, 30001, "udp" }, { "pago-services2", { NULL }, 30002, "tcp" }, { "pago-services2", { NULL }, 30002, "udp" }, { "kingdomsonline", { NULL }, 30260, "tcp" }, { "kingdomsonline", { NULL }, 30260, "udp" }, { "ovobs", { NULL }, 30999, "tcp" }, { "ovobs", { NULL }, 30999, "udp" }, { "autotrac-acp", { NULL }, 31020, "tcp" }, { "yawn", { NULL }, 31029, "udp" }, { "xqosd", { NULL }, 31416, "tcp" }, { "xqosd", { NULL }, 31416, "udp" }, { "tetrinet", { NULL }, 31457, "tcp" }, { "tetrinet", { NULL }, 31457, "udp" }, { "lm-mon", { NULL }, 31620, "tcp" }, { "lm-mon", { NULL }, 31620, "udp" }, { "dsx_monitor", { NULL }, 31685, "tcp" }, { "gamesmith-port", { NULL }, 31765, "tcp" }, { "gamesmith-port", { NULL }, 31765, "udp" }, { "iceedcp_tx", { NULL }, 31948, "tcp" }, { "iceedcp_tx", { NULL }, 31948, "udp" }, { "iceedcp_rx", { NULL }, 31949, "tcp" }, { "iceedcp_rx", { NULL }, 31949, "udp" }, { "iracinghelper", { NULL }, 32034, "tcp" }, { "iracinghelper", { NULL }, 32034, "udp" }, { "t1distproc60", { NULL }, 32249, "tcp" }, { "t1distproc60", { NULL }, 32249, "udp" }, { "apm-link", { NULL }, 32483, "tcp" }, { "apm-link", { NULL }, 32483, "udp" }, { "sec-ntb-clnt", { NULL }, 32635, "tcp" }, { "sec-ntb-clnt", { NULL }, 32635, "udp" }, { "DMExpress", { NULL }, 32636, "tcp" }, { "DMExpress", { NULL }, 32636, "udp" }, { "filenet-powsrm", { NULL }, 32767, "tcp" }, { "filenet-powsrm", { NULL }, 32767, "udp" }, { "filenet-tms", { NULL }, 32768, "tcp" }, { "filenet-tms", { NULL }, 32768, "udp" }, { "filenet-rpc", { NULL }, 32769, "tcp" }, { "filenet-rpc", { NULL }, 32769, "udp" }, { "filenet-nch", { NULL }, 32770, "tcp" }, { "filenet-nch", { NULL }, 
32770, "udp" }, { "filenet-rmi", { NULL }, 32771, "tcp" }, { "filenet-rmi", { NULL }, 32771, "udp" }, { "filenet-pa", { NULL }, 32772, "tcp" }, { "filenet-pa", { NULL }, 32772, "udp" }, { "filenet-cm", { NULL }, 32773, "tcp" }, { "filenet-cm", { NULL }, 32773, "udp" }, { "filenet-re", { NULL }, 32774, "tcp" }, { "filenet-re", { NULL }, 32774, "udp" }, { "filenet-pch", { NULL }, 32775, "tcp" }, { "filenet-pch", { NULL }, 32775, "udp" }, { "filenet-peior", { NULL }, 32776, "tcp" }, { "filenet-peior", { NULL }, 32776, "udp" }, { "filenet-obrok", { NULL }, 32777, "tcp" }, { "filenet-obrok", { NULL }, 32777, "udp" }, { "mlsn", { NULL }, 32801, "tcp" }, { "mlsn", { NULL }, 32801, "udp" }, { "retp", { NULL }, 32811, "tcp" }, { "idmgratm", { NULL }, 32896, "tcp" }, { "idmgratm", { NULL }, 32896, "udp" }, { "aurora-balaena", { NULL }, 33123, "tcp" }, { "aurora-balaena", { NULL }, 33123, "udp" }, { "diamondport", { NULL }, 33331, "tcp" }, { "diamondport", { NULL }, 33331, "udp" }, { "dgi-serv", { NULL }, 33333, "tcp" }, { "traceroute", { NULL }, 33434, "tcp" }, { "traceroute", { NULL }, 33434, "udp" }, { "snip-slave", { NULL }, 33656, "tcp" }, { "snip-slave", { NULL }, 33656, "udp" }, { "turbonote-2", { NULL }, 34249, "tcp" }, { "turbonote-2", { NULL }, 34249, "udp" }, { "p-net-local", { NULL }, 34378, "tcp" }, { "p-net-local", { NULL }, 34378, "udp" }, { "p-net-remote", { NULL }, 34379, "tcp" }, { "p-net-remote", { NULL }, 34379, "udp" }, { "dhanalakshmi", { NULL }, 34567, "tcp" }, { "profinet-rt", { NULL }, 34962, "tcp" }, { "profinet-rt", { NULL }, 34962, "udp" }, { "profinet-rtm", { NULL }, 34963, "tcp" }, { "profinet-rtm", { NULL }, 34963, "udp" }, { "profinet-cm", { NULL }, 34964, "tcp" }, { "profinet-cm", { NULL }, 34964, "udp" }, { "ethercat", { NULL }, 34980, "tcp" }, { "ethercat", { NULL }, 34980, "udp" }, { "allpeers", { NULL }, 36001, "tcp" }, { "allpeers", { NULL }, 36001, "udp" }, { "s1-control", { NULL }, 36412, "sctp" }, { "x2-control", { NULL }, 36422, "sctp" }, { "m2ap", { NULL }, 36443, "sctp" }, { "m3ap", { NULL }, 36444, "sctp" }, { "kastenxpipe", { NULL }, 36865, "tcp" }, { "kastenxpipe", { NULL }, 36865, "udp" }, { "neckar", { NULL }, 37475, "tcp" }, { "neckar", { NULL }, 37475, "udp" }, { "unisys-eportal", { NULL }, 37654, "tcp" }, { "unisys-eportal", { NULL }, 37654, "udp" }, { "galaxy7-data", { NULL }, 38201, "tcp" }, { "galaxy7-data", { NULL }, 38201, "udp" }, { "fairview", { NULL }, 38202, "tcp" }, { "fairview", { NULL }, 38202, "udp" }, { "agpolicy", { NULL }, 38203, "tcp" }, { "agpolicy", { NULL }, 38203, "udp" }, { "turbonote-1", { NULL }, 39681, "tcp" }, { "turbonote-1", { NULL }, 39681, "udp" }, { "safetynetp", { NULL }, 40000, "tcp" }, { "safetynetp", { NULL }, 40000, "udp" }, { "cscp", { NULL }, 40841, "tcp" }, { "cscp", { NULL }, 40841, "udp" }, { "csccredir", { NULL }, 40842, "tcp" }, { "csccredir", { NULL }, 40842, "udp" }, { "csccfirewall", { NULL }, 40843, "tcp" }, { "csccfirewall", { NULL }, 40843, "udp" }, { "ortec-disc", { NULL }, 40853, "udp" }, { "fs-qos", { NULL }, 41111, "tcp" }, { "fs-qos", { NULL }, 41111, "udp" }, { "tentacle", { NULL }, 41121, "tcp" }, { "crestron-cip", { NULL }, 41794, "tcp" }, { "crestron-cip", { NULL }, 41794, "udp" }, { "crestron-ctp", { NULL }, 41795, "tcp" }, { "crestron-ctp", { NULL }, 41795, "udp" }, { "candp", { NULL }, 42508, "tcp" }, { "candp", { NULL }, 42508, "udp" }, { "candrp", { NULL }, 42509, "tcp" }, { "candrp", { NULL }, 42509, "udp" }, { "caerpc", { NULL }, 42510, "tcp" }, { "caerpc", { NULL }, 42510, "udp" }, { 
"reachout", { NULL }, 43188, "tcp" }, { "reachout", { NULL }, 43188, "udp" }, { "ndm-agent-port", { NULL }, 43189, "tcp" }, { "ndm-agent-port", { NULL }, 43189, "udp" }, { "ip-provision", { NULL }, 43190, "tcp" }, { "ip-provision", { NULL }, 43190, "udp" }, { "noit-transport", { NULL }, 43191, "tcp" }, { "ew-mgmt", { NULL }, 43440, "tcp" }, { "ew-disc-cmd", { NULL }, 43440, "udp" }, { "ciscocsdb", { NULL }, 43441, "tcp" }, { "ciscocsdb", { NULL }, 43441, "udp" }, { "pmcd", { NULL }, 44321, "tcp" }, { "pmcd", { NULL }, 44321, "udp" }, { "pmcdproxy", { NULL }, 44322, "tcp" }, { "pmcdproxy", { NULL }, 44322, "udp" }, { "pcp", { NULL }, 44323, "udp" }, { "rbr-debug", { NULL }, 44553, "tcp" }, { "rbr-debug", { NULL }, 44553, "udp" }, { "EtherNet/IP-2", { NULL }, 44818, "tcp" }, { "EtherNet/IP-2", { NULL }, 44818, "udp" }, { "invision-ag", { NULL }, 45054, "tcp" }, { "invision-ag", { NULL }, 45054, "udp" }, { "eba", { NULL }, 45678, "tcp" }, { "eba", { NULL }, 45678, "udp" }, { "qdb2service", { NULL }, 45825, "tcp" }, { "qdb2service", { NULL }, 45825, "udp" }, { "ssr-servermgr", { NULL }, 45966, "tcp" }, { "ssr-servermgr", { NULL }, 45966, "udp" }, { "mediabox", { NULL }, 46999, "tcp" }, { "mediabox", { NULL }, 46999, "udp" }, { "mbus", { NULL }, 47000, "tcp" }, { "mbus", { NULL }, 47000, "udp" }, { "winrm", { NULL }, 47001, "tcp" }, { "dbbrowse", { NULL }, 47557, "tcp" }, { "dbbrowse", { NULL }, 47557, "udp" }, { "directplaysrvr", { NULL }, 47624, "tcp" }, { "directplaysrvr", { NULL }, 47624, "udp" }, { "ap", { NULL }, 47806, "tcp" }, { "ap", { NULL }, 47806, "udp" }, { "bacnet", { NULL }, 47808, "tcp" }, { "bacnet", { NULL }, 47808, "udp" }, { "nimcontroller", { NULL }, 48000, "tcp" }, { "nimcontroller", { NULL }, 48000, "udp" }, { "nimspooler", { NULL }, 48001, "tcp" }, { "nimspooler", { NULL }, 48001, "udp" }, { "nimhub", { NULL }, 48002, "tcp" }, { "nimhub", { NULL }, 48002, "udp" }, { "nimgtw", { NULL }, 48003, "tcp" }, { "nimgtw", { NULL }, 48003, "udp" }, { "nimbusdb", { NULL }, 48004, "tcp" }, { "nimbusdbctrl", { NULL }, 48005, "tcp" }, { "3gpp-cbsp", { NULL }, 48049, "tcp" }, { "isnetserv", { NULL }, 48128, "tcp" }, { "isnetserv", { NULL }, 48128, "udp" }, { "blp5", { NULL }, 48129, "tcp" }, { "blp5", { NULL }, 48129, "udp" }, { "com-bardac-dw", { NULL }, 48556, "tcp" }, { "com-bardac-dw", { NULL }, 48556, "udp" }, { "iqobject", { NULL }, 48619, "tcp" }, { "iqobject", { NULL }, 48619, "udp" }, # endif /* USE_IANA_REGISTERED_PORTS */ { NULL, { NULL }, 0, NULL } }; struct servent *getservbyport(int port, const char *proto) { unsigned short u_port; const char *protocol = NULL; int error = 0; size_t i; u_port = ntohs((unsigned short)port); if (proto) { switch (ares_strlen(proto)) { case 3: if (!strncasecmp(proto, "tcp", 3)) { protocol = "tcp"; } else if (!strncasecmp(proto, "udp", 3)) { protocol = "udp"; } else { error = WSAEFAULT; } break; case 4: if (!strncasecmp(proto, "sctp", 4)) { protocol = "sctp"; } else if (!strncasecmp(proto, "dccp", 4)) { protocol = "dccp"; } else { error = WSAEFAULT; } break; default: error = WSAEFAULT; } } if (!error) { for (i = 0; i < (sizeof(IANAports) / sizeof(IANAports[0])) - 1; i++) { if (u_port == IANAports[i].s_port) { if (!protocol || !strcasecmp(protocol, IANAports[i].s_proto)) { return (struct servent *)&IANAports[i]; } } } error = WSANO_DATA; } SET_SOCKERRNO(error); return NULL; } #endif /* _WIN32_WCE */ gevent-24.11.1/deps/c-ares/src/lib/ares_platform.h000066400000000000000000000031411471441230600216220ustar00rootroot00000000000000/* MIT License * * 
Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_PLATFORM_H #define HEADER_CARES_PLATFORM_H #if defined(_WIN32) && !defined(MSDOS) typedef enum { WIN_UNKNOWN, WIN_3X, WIN_9X, WIN_NT, WIN_CE } win_platform; win_platform ares__getplatform(void); #endif #if defined(_WIN32_WCE) struct servent *getservbyport(int port, const char *proto); #endif #endif /* HEADER_CARES_PLATFORM_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_private.h000066400000000000000000001031521471441230600214530ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2010 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_PRIVATE_H #define __ARES_PRIVATE_H /* ============================================================================ * NOTE: All c-ares source files should include ares_private.h as the first * header. 
* ============================================================================ */ #include "ares_setup.h" #include "ares.h" #ifdef HAVE_NETINET_IN_H # include #endif #define DEFAULT_TIMEOUT 2000 /* milliseconds */ #define DEFAULT_TRIES 3 #ifndef INADDR_NONE # define INADDR_NONE 0xffffffff #endif /* By using a double cast, we can get rid of the bogus warning of * warning: cast from 'const struct sockaddr *' to 'const struct sockaddr_in6 *' * increases required alignment from 1 to 4 [-Wcast-align] */ #define CARES_INADDR_CAST(type, var) ((type)((const void *)var)) #if defined(USE_WINSOCK) # define WIN_NS_9X "System\\CurrentControlSet\\Services\\VxD\\MSTCP" # define WIN_NS_NT_KEY "System\\CurrentControlSet\\Services\\Tcpip\\Parameters" # define WIN_DNSCLIENT "Software\\Policies\\Microsoft\\System\\DNSClient" # define WIN_NT_DNSCLIENT \ "Software\\Policies\\Microsoft\\Windows NT\\DNSClient" # define NAMESERVER "NameServer" # define DHCPNAMESERVER "DhcpNameServer" # define DATABASEPATH "DatabasePath" # define WIN_PATH_HOSTS "\\hosts" # define SEARCHLIST_KEY "SearchList" # define PRIMARYDNSSUFFIX_KEY "PrimaryDNSSuffix" # define INTERFACES_KEY "Interfaces" # define DOMAIN_KEY "Domain" # define DHCPDOMAIN_KEY "DhcpDomain" # define PATH_RESOLV_CONF "" #elif defined(WATT32) # define PATH_RESOLV_CONF "/dev/ENV/etc/resolv.conf" W32_FUNC const char *_w32_GetHostsFile(void); #elif defined(NETWARE) # define PATH_RESOLV_CONF "sys:/etc/resolv.cfg" # define PATH_HOSTS "sys:/etc/hosts" #elif defined(__riscos__) # define PATH_RESOLV_CONF "" # define PATH_HOSTS "InetDBase:Hosts" #elif defined(__HAIKU__) # define PATH_RESOLV_CONF "/system/settings/network/resolv.conf" # define PATH_HOSTS "/system/settings/network/hosts" #else # define PATH_RESOLV_CONF "/etc/resolv.conf" # ifdef ETC_INET # define PATH_HOSTS "/etc/inet/hosts" # else # define PATH_HOSTS "/etc/hosts" # endif #endif #include "ares_ipv6.h" struct ares_rand_state; typedef struct ares_rand_state ares_rand_state; #include "dsa/ares__array.h" #include "dsa/ares__llist.h" #include "dsa/ares__slist.h" #include "dsa/ares__htable_strvp.h" #include "dsa/ares__htable_szvp.h" #include "dsa/ares__htable_asvp.h" #include "dsa/ares__htable_vpvp.h" #include "record/ares_dns_multistring.h" #include "str/ares__buf.h" #include "record/ares_dns_private.h" #include "util/ares__iface_ips.h" #include "util/ares__threads.h" #ifndef HAVE_GETENV # include "ares_getenv.h" # define getenv(ptr) ares_getenv(ptr) #endif #include "str/ares_str.h" #include "str/ares_strsplit.h" #ifndef HAVE_STRCASECMP # include "str/ares_strcasecmp.h" # define strcasecmp(p1, p2) ares_strcasecmp(p1, p2) #endif #ifndef HAVE_STRNCASECMP # include "str/ares_strcasecmp.h" # define strncasecmp(p1, p2, n) ares_strncasecmp(p1, p2, n) #endif /********* EDNS defines section ******/ #define EDNSPACKETSZ \ 1232 /* Reasonable UDP payload size, as agreed by operators \ https://www.dnsflagday.net/2020/#faq */ #define MAXENDSSZ 4096 /* Maximum (local) limit for edns packet size */ #define EDNSFIXEDSZ 11 /* Size of EDNS header */ /********* EDNS defines section ******/ /* Default values for server failover behavior. We retry failed servers with * a 10% probability and a minimum delay of 5 seconds between retries. */ #define DEFAULT_SERVER_RETRY_CHANCE 10 #define DEFAULT_SERVER_RETRY_DELAY 5000 struct ares_query; typedef struct ares_query ares_query_t; struct ares_server; typedef struct ares_server ares_server_t; struct ares_conn; typedef struct ares_conn ares_conn_t; typedef enum { /*! 
No flags */ ARES_CONN_FLAG_NONE = 0, /*! TCP connection, not UDP */ ARES_CONN_FLAG_TCP = 1 << 0, /*! TCP Fast Open is enabled and being used if supported by the OS */ ARES_CONN_FLAG_TFO = 1 << 1, /*! TCP Fast Open has not yet sent its first packet. Gets unset on first * write to a connection */ ARES_CONN_FLAG_TFO_INITIAL = 1 << 2 } ares_conn_flags_t; struct ares_conn { ares_server_t *server; ares_socket_t fd; struct ares_addr self_ip; ares_conn_flags_t flags; /* total number of queries run on this connection since it was established */ size_t total_queries; /* list of outstanding queries to this connection */ ares__llist_t *queries_to_conn; }; #ifdef _MSC_VER typedef __int64 ares_int64_t; typedef unsigned __int64 ares_uint64_t; #else typedef long long ares_int64_t; typedef unsigned long long ares_uint64_t; #endif /*! struct timeval on some systems like Windows doesn't support 64bit time so * therefore can't be used due to Y2K38 issues. Make our own that does have * 64bit time. */ typedef struct { ares_int64_t sec; /*!< Seconds */ unsigned int usec; /*!< Microseconds. Can't be negative. */ } ares_timeval_t; /*! Various buckets for grouping history */ typedef enum { ARES_METRIC_1MINUTE = 0, /*!< Bucket for tracking over the last minute */ ARES_METRIC_15MINUTES, /*!< Bucket for tracking over the last 15 minutes */ ARES_METRIC_1HOUR, /*!< Bucket for tracking over the last hour */ ARES_METRIC_1DAY, /*!< Bucket for tracking over the last day */ ARES_METRIC_INCEPTION, /*!< Bucket for tracking since inception */ ARES_METRIC_COUNT /*!< Count of buckets, not a real bucket */ } ares_server_bucket_t; /*! Data metrics collected for each bucket */ typedef struct { time_t ts; /*!< Timestamp divided by bucket divisor */ unsigned int latency_min_ms; /*!< Minimum latency for queries */ unsigned int latency_max_ms; /*!< Maximum latency for queries */ ares_uint64_t total_ms; /*!< Cumulative query time for bucket */ ares_uint64_t total_count; /*!< Number of queries for bucket */ time_t prev_ts; /*!< Previous period bucket timestamp */ ares_uint64_t prev_total_ms; /*!< Previous period bucket cumulative query time */ ares_uint64_t prev_total_count; /*!< Previous period bucket query count */ } ares_server_metrics_t; typedef enum { ARES_COOKIE_INITIAL = 0, ARES_COOKIE_GENERATED = 1, ARES_COOKIE_SUPPORTED = 2, ARES_COOKIE_UNSUPPORTED = 3 } ares_cookie_state_t; /*! Structure holding tracking data for RFC 7873/9018 DNS cookies. * Implementation plan for this feature is here: * https://github.com/c-ares/c-ares/issues/620 */ typedef struct { /*! starts at INITIAL, transitions as needed. */ ares_cookie_state_t state; /*! randomly-generate client cookie */ unsigned char client[8]; /*! timestamp client cookie was generated, used for rotation purposes */ ares_timeval_t client_ts; /*! IP address last used for client to connect to server. If this changes * The client cookie gets invalidated */ struct ares_addr client_ip; /*! Server Cookie last received, 8-32 bytes in length */ unsigned char server[32]; /*! Length of server cookie on file. */ size_t server_len; /*! 
Timestamp of last attempt to use cookies, but it was determined that the * server didn't support them */ ares_timeval_t unsupported_ts; } ares_cookie_t; struct ares_server { /* Configuration */ size_t idx; /* index for server in system configuration */ struct ares_addr addr; unsigned short udp_port; /* host byte order */ unsigned short tcp_port; /* host byte order */ char ll_iface[64]; /* IPv6 Link Local Interface */ unsigned int ll_scope; /* IPv6 Link Local Scope */ size_t consec_failures; /* Consecutive query failure count * can be hard errors or timeouts */ ares__llist_t *connections; ares_conn_t *tcp_conn; /* The next time when we will retry this server if it has hit failures */ ares_timeval_t next_retry_time; /* TCP buffer since multiple responses can come back in one read, or partial * in a read */ ares__buf_t *tcp_parser; /* TCP output queue */ ares__buf_t *tcp_send; /*! Buckets for collecting metrics about the server */ ares_server_metrics_t metrics[ARES_METRIC_COUNT]; /*! RFC 7873/9018 DNS Cookies */ ares_cookie_t cookie; /* Link back to owning channel */ ares_channel_t *channel; }; /* State to represent a DNS query */ struct ares_query { /* Query ID from qbuf, for faster lookup, and current timeout */ unsigned short qid; /* host byte order */ ares_timeval_t ts; /*!< Timestamp query was sent */ ares_timeval_t timeout; ares_channel_t *channel; /* * Node object for each list entry the query belongs to in order to * make removal operations O(1). */ ares__slist_node_t *node_queries_by_timeout; ares__llist_node_t *node_queries_to_conn; ares__llist_node_t *node_all_queries; /* connection handle query is associated with */ ares_conn_t *conn; /* Query */ ares_dns_record_t *query; ares_callback_dnsrec callback; void *arg; /* Query status */ size_t try_count; /* Number of times we tried this query already. */ size_t cookie_try_count; /* Attempt count for cookie resends */ ares_bool_t using_tcp; ares_status_t error_status; size_t timeouts; /* number of timeouts we saw for this request */ ares_bool_t no_retries; /* do not perform any additional retries, this is * set when a query is to be canceled */ }; struct apattern { struct ares_addr addr; unsigned char mask; }; struct ares__qcache; typedef struct ares__qcache ares__qcache_t; struct ares_hosts_file; typedef struct ares_hosts_file ares_hosts_file_t; struct ares_channeldata { /* Configuration data */ unsigned int flags; size_t timeout; /* in milliseconds */ size_t tries; size_t ndots; size_t maxtimeout; /* in milliseconds */ ares_bool_t rotate; unsigned short udp_port; /* stored in network order */ unsigned short tcp_port; /* stored in network order */ int socket_send_buffer_size; /* setsockopt takes int */ int socket_receive_buffer_size; /* setsockopt takes int */ char **domains; size_t ndomains; struct apattern *sortlist; size_t nsort; char *lookups; size_t ednspsz; unsigned int qcache_max_ttl; ares_evsys_t evsys; unsigned int optmask; /* For binding to local devices and/or IP addresses. Leave * them null/zero for no binding. */ char local_dev_name[32]; unsigned int local_ip4; unsigned char local_ip6[16]; /* Thread safety lock */ ares__thread_mutex_t *lock; /* Conditional to wake waiters when queue is empty */ ares__thread_cond_t *cond_empty; /* Server addresses and communications state. Sorted by least consecutive * failures, followed by the configuration order if failures are equal. 
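 * As an illustration (a sketch of the stated policy, not the exact slist
 * comparator used here): a server with consec_failures == 0 sorts ahead of
 * one with consec_failures == 2, and two servers with equal failure counts
 * keep their configured order, i.e. the one with the smaller idx comes first.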
*/ ares__slist_t *servers; /* random state to use when generating new ids and generating retry penalties */ ares_rand_state *rand_state; /* All active queries in a single list */ ares__llist_t *all_queries; /* Queries bucketed by qid, for quickly dispatching DNS responses: */ ares__htable_szvp_t *queries_by_qid; /* Queries bucketed by timeout, for quickly handling timeouts: */ ares__slist_t *queries_by_timeout; /* Map linked list node member for connection to file descriptor. We use * the node instead of the connection object itself so we can quickly look * up a connection and remove it if necessary (as otherwise we'd have to * scan all connections) */ ares__htable_asvp_t *connnode_by_socket; ares_sock_state_cb sock_state_cb; void *sock_state_cb_data; ares_sock_create_callback sock_create_cb; void *sock_create_cb_data; ares_sock_config_callback sock_config_cb; void *sock_config_cb_data; const struct ares_socket_functions *sock_funcs; void *sock_func_cb_data; /* Path for resolv.conf file, configurable via ares_options */ char *resolvconf_path; /* Path for hosts file, configurable via ares_options */ char *hosts_path; /* Maximum UDP queries per connection allowed */ size_t udp_max_queries; /* Cache of local hosts file */ ares_hosts_file_t *hf; /* Query Cache */ ares__qcache_t *qcache; /* Fields controlling server failover behavior. * The retry chance is the probability (1/N) by which we will retry a failed * server instead of the best server when selecting a server to send queries * to. * The retry delay is the minimum time in milliseconds to wait between doing * such retries (applied per-server). */ unsigned short server_retry_chance; size_t server_retry_delay; /* Callback triggered when a server has a successful or failed response */ ares_server_state_callback server_state_cb; void *server_state_cb_data; /* TRUE if a reinit is pending. Reinit spawns a thread to read the system * configuration and then apply the configuration since configuration * reading may block. The thread handle is provided for waiting on thread * exit. */ ares_bool_t reinit_pending; ares__thread_t *reinit_thread; /* Whether the system is up or not. This is mainly to prevent deadlocks * and access violations during the cleanup process. Some things like * system config changes might get triggered and we need a flag to make * sure we don't take action. */ ares_bool_t sys_up; }; /* Does the domain end in ".onion" or ".onion."? Case-insensitive. */ ares_bool_t ares__is_onion_domain(const char *name); /* Memory management functions */ extern void *(*ares_malloc)(size_t size); extern void *(*ares_realloc)(void *ptr, size_t size); extern void (*ares_free)(void *ptr); void *ares_malloc_zero(size_t size); void *ares_realloc_zero(void *ptr, size_t orig_size, size_t new_size); /* return true if now is exactly check time or later */ ares_bool_t ares__timedout(const ares_timeval_t *now, const ares_timeval_t *check); /* Returns one of the normal ares status codes like ARES_SUCCESS */ ares_status_t ares__send_query(ares_query_t *query, const ares_timeval_t *now); ares_status_t ares__requeue_query(ares_query_t *query, const ares_timeval_t *now, ares_status_t status, ares_bool_t inc_try_count, const ares_dns_record_t *dnsrec); /*! Count the number of labels (dots+1) in a domain */ size_t ares__name_label_cnt(const char *name); /*! Retrieve a list of names to use for searching. The first successful * query in the list wins. 
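 * (Illustrative example, not taken from this file: with a single configured
 * search domain "example.com" and ndots set to 1, a lookup of "www" would
 * typically yield the candidate list { "www.example.com", "www" }, while
 * "www.example.com", which already contains enough dots, would be tried
 * as-is first.)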
This function also uses the HOSTSALIASES file * as well as uses channel configuration to determine the search order. * * \param[in] channel initialized ares channel * \param[in] name initial name being searched * \param[out] names array of names to attempt, use ares__strsplit_free() * when no longer needed. * \param[out] names_len number of names in array * \return ARES_SUCCESS on success, otherwise one of the other error codes. */ ares_status_t ares__search_name_list(const ares_channel_t *channel, const char *name, char ***names, size_t *names_len); /*! Function to create callback arg for converting from ares_callback_dnsrec * to ares_calback */ void *ares__dnsrec_convert_arg(ares_callback callback, void *arg); /*! Callback function used to convert from the ares_callback_dnsrec prototype to * the ares_callback prototype, by writing the result and passing that to * the inner callback. */ void ares__dnsrec_convert_cb(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec); void ares__close_connection(ares_conn_t *conn, ares_status_t requeue_status); void ares__close_sockets(ares_server_t *server); void ares__check_cleanup_conns(const ares_channel_t *channel); void ares__free_query(ares_query_t *query); ares_rand_state *ares__init_rand_state(void); void ares__destroy_rand_state(ares_rand_state *state); void ares__rand_bytes(ares_rand_state *state, unsigned char *buf, size_t len); unsigned short ares__generate_new_id(ares_rand_state *state); void ares__tvnow(ares_timeval_t *now); void ares__timeval_remaining(ares_timeval_t *remaining, const ares_timeval_t *now, const ares_timeval_t *tout); void ares__timeval_diff(ares_timeval_t *tvdiff, const ares_timeval_t *tvstart, const ares_timeval_t *tvstop); ares_status_t ares__expand_name_validated(const unsigned char *encoded, const unsigned char *abuf, size_t alen, char **s, size_t *enclen, ares_bool_t is_hostname); ares_status_t ares_expand_string_ex(const unsigned char *encoded, const unsigned char *abuf, size_t alen, unsigned char **s, size_t *enclen); ares_status_t ares__init_servers_state(ares_channel_t *channel); ares_status_t ares__init_by_options(ares_channel_t *channel, const struct ares_options *options, int optmask); ares_status_t ares__init_by_sysconfig(ares_channel_t *channel); typedef struct { ares__llist_t *sconfig; struct apattern *sortlist; size_t nsortlist; char **domains; size_t ndomains; char *lookups; size_t ndots; size_t tries; ares_bool_t rotate; size_t timeout_ms; ares_bool_t usevc; } ares_sysconfig_t; ares_status_t ares__sysconfig_set_options(ares_sysconfig_t *sysconfig, const char *str); ares_status_t ares__init_by_environment(ares_sysconfig_t *sysconfig); ares_status_t ares__init_sysconfig_files(const ares_channel_t *channel, ares_sysconfig_t *sysconfig); #ifdef __APPLE__ ares_status_t ares__init_sysconfig_macos(ares_sysconfig_t *sysconfig); #endif #ifdef USE_WINSOCK ares_status_t ares__init_sysconfig_windows(ares_sysconfig_t *sysconfig); #endif ares_status_t ares__parse_sortlist(struct apattern **sortlist, size_t *nsort, const char *str); void ares__destroy_servers_state(ares_channel_t *channel); /* Returns ARES_SUCCESS if alias found, alias is set. Returns ARES_ENOTFOUND * if not alias found. 
Returns other errors on critical failure like * ARES_ENOMEM */ ares_status_t ares__lookup_hostaliases(const ares_channel_t *channel, const char *name, char **alias); ares_status_t ares__cat_domain(const char *name, const char *domain, char **s); ares_status_t ares__sortaddrinfo(ares_channel_t *channel, struct ares_addrinfo_node *ai_node); void ares__freeaddrinfo_nodes(struct ares_addrinfo_node *ai_node); ares_bool_t ares__is_localhost(const char *name); struct ares_addrinfo_node * ares__append_addrinfo_node(struct ares_addrinfo_node **ai_node); void ares__addrinfo_cat_nodes(struct ares_addrinfo_node **head, struct ares_addrinfo_node *tail); void ares__freeaddrinfo_cnames(struct ares_addrinfo_cname *ai_cname); struct ares_addrinfo_cname * ares__append_addrinfo_cname(struct ares_addrinfo_cname **ai_cname); ares_status_t ares_append_ai_node(int aftype, unsigned short port, unsigned int ttl, const void *adata, struct ares_addrinfo_node **nodes); void ares__addrinfo_cat_cnames(struct ares_addrinfo_cname **head, struct ares_addrinfo_cname *tail); ares_status_t ares__parse_into_addrinfo(const ares_dns_record_t *dnsrec, ares_bool_t cname_only_is_enodata, unsigned short port, struct ares_addrinfo *ai); ares_status_t ares_parse_ptr_reply_dnsrec(const ares_dns_record_t *dnsrec, const void *addr, int addrlen, int family, struct hostent **host); ares_status_t ares__addrinfo2hostent(const struct ares_addrinfo *ai, int family, struct hostent **host); ares_status_t ares__addrinfo2addrttl(const struct ares_addrinfo *ai, int family, size_t req_naddrttls, struct ares_addrttl *addrttls, struct ares_addr6ttl *addr6ttls, size_t *naddrttls); ares_status_t ares__addrinfo_localhost(const char *name, unsigned short port, const struct ares_addrinfo_hints *hints, struct ares_addrinfo *ai); ares_status_t ares__open_connection(ares_conn_t **conn_out, ares_channel_t *channel, ares_server_t *server, ares_bool_t is_tcp); ares_bool_t ares_sockaddr_to_ares_addr(struct ares_addr *ares_addr, unsigned short *port, const struct sockaddr *sockaddr); ares_socket_t ares__open_socket(ares_channel_t *channel, int af, int type, int protocol); ares_bool_t ares__socket_try_again(int errnum); ares_ssize_t ares__conn_write(ares_conn_t *conn, const void *data, size_t len); ares_ssize_t ares__socket_recvfrom(ares_channel_t *channel, ares_socket_t s, void *data, size_t data_len, int flags, struct sockaddr *from, ares_socklen_t *from_len); ares_ssize_t ares__socket_recv(ares_channel_t *channel, ares_socket_t s, void *data, size_t data_len); void ares__close_socket(ares_channel_t *channel, ares_socket_t s); ares_status_t ares__connect_socket(ares_channel_t *channel, ares_socket_t sockfd, const struct sockaddr *addr, ares_socklen_t addrlen); void ares__destroy_server(ares_server_t *server); ares_status_t ares__servers_update(ares_channel_t *channel, ares__llist_t *server_list, ares_bool_t user_specified); ares_status_t ares__sconfig_append(ares__llist_t **sconfig, const struct ares_addr *addr, unsigned short udp_port, unsigned short tcp_port, const char *ll_iface); ares_status_t ares__sconfig_append_fromstr(ares__llist_t **sconfig, const char *str, ares_bool_t ignore_invalid); ares_status_t ares_in_addr_to_server_config_llist(const struct in_addr *servers, size_t nservers, ares__llist_t **llist); ares_status_t ares_get_server_addr(const ares_server_t *server, ares__buf_t *buf); struct ares_hosts_entry; typedef struct ares_hosts_entry ares_hosts_entry_t; void ares__hosts_file_destroy(ares_hosts_file_t *hf); ares_status_t 
ares__hosts_search_ipaddr(ares_channel_t *channel, ares_bool_t use_env, const char *ipaddr, const ares_hosts_entry_t **entry); ares_status_t ares__hosts_search_host(ares_channel_t *channel, ares_bool_t use_env, const char *host, const ares_hosts_entry_t **entry); ares_status_t ares__hosts_entry_to_hostent(const ares_hosts_entry_t *entry, int family, struct hostent **hostent); ares_status_t ares__hosts_entry_to_addrinfo(const ares_hosts_entry_t *entry, const char *name, int family, unsigned short port, ares_bool_t want_cnames, struct ares_addrinfo *ai); /* Same as ares_query_dnsrec() except does not take a channel lock. Use this * if a channel lock is already held */ ares_status_t ares_query_nolock(ares_channel_t *channel, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, ares_callback_dnsrec callback, void *arg, unsigned short *qid); /* Same as ares_send_dnsrec() except does not take a channel lock. Use this * if a channel lock is already held */ ares_status_t ares_send_nolock(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg, unsigned short *qid); /* Same as ares_gethostbyaddr() except does not take a channel lock. Use this * if a channel lock is already held */ void ares_gethostbyaddr_nolock(ares_channel_t *channel, const void *addr, int addrlen, int family, ares_host_callback callback, void *arg); /*! Parse a compressed DNS name as defined in RFC1035 starting at the current * offset within the buffer. * * It is assumed that either a const buffer is being used, or before * the message processing was started that ares__buf_reclaim() was called. * * \param[in] buf Initialized buffer object * \param[out] name Pointer passed by reference to be filled in with * allocated string of the parsed name that must be * ares_free()'d by the caller. * \param[in] is_hostname if ARES_TRUE, will validate the character set for * a valid hostname or will return error. * \return ARES_SUCCESS on success */ ares_status_t ares__dns_name_parse(ares__buf_t *buf, char **name, ares_bool_t is_hostname); /*! Write the DNS name to the buffer in the DNS domain-name syntax as a * series of labels. The maximum domain name length is 255 characters with * each label being a maximum of 63 characters. If the validate_hostname * flag is set, it will strictly validate the character set. * * \param[in,out] buf Initialized buffer object to write name to * \param[in,out] list Pointer passed by reference to maintain a list of * domain name to indexes used for name compression. * Pass NULL (not by reference) if name compression isn't * desired. Otherwise the list will be automatically * created upon first entry. * \param[in] validate_hostname Validate the hostname character set. * \param[in] name Name to write out, it may have escape * sequences. * \return ARES_SUCCESS on success, most likely ARES_EBADNAME if the name is * bad. */ ares_status_t ares__dns_name_write(ares__buf_t *buf, ares__llist_t **list, ares_bool_t validate_hostname, const char *name); /*! Check if the queue is empty, if so, wake any waiters. This is only * effective if built with threading support. * * Must be holding a channel lock when calling this function. 
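 * A typical call site, sketched here for illustration only (not a verbatim
 * excerpt):
 *   ares__channel_lock(channel);
 *   ... finish or destroy the last outstanding query ...
 *   ares_queue_notify_empty(channel);
 *   ares__channel_unlock(channel);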
* * \param[in] channel Initialized ares channel object */ void ares_queue_notify_empty(ares_channel_t *channel); #define SOCK_STATE_CALLBACK(c, s, r, w) \ do { \ if ((c)->sock_state_cb) { \ (c)->sock_state_cb((c)->sock_state_cb_data, (s), (r), (w)); \ } \ } while (0) #define ARES_CONFIG_CHECK(x) \ (x && x->lookups && ares__slist_len(x->servers) > 0 && x->timeout > 0 && \ x->tries > 0) ares_bool_t ares__subnet_match(const struct ares_addr *addr, const struct ares_addr *subnet, unsigned char netmask); ares_bool_t ares__addr_is_linklocal(const struct ares_addr *addr); ares_bool_t ares__is_64bit(void); size_t ares__round_up_pow2(size_t n); size_t ares__log2(size_t n); size_t ares__pow(size_t x, size_t y); size_t ares__count_digits(size_t n); size_t ares__count_hexdigits(size_t n); unsigned char ares__count_bits_u8(unsigned char x); void ares__qcache_destroy(ares__qcache_t *cache); ares_status_t ares__qcache_create(ares_rand_state *rand_state, unsigned int max_ttl, ares__qcache_t **cache_out); void ares__qcache_flush(ares__qcache_t *cache); ares_status_t ares_qcache_insert(ares_channel_t *channel, const ares_timeval_t *now, const ares_query_t *query, ares_dns_record_t *dnsrec); ares_status_t ares_qcache_fetch(ares_channel_t *channel, const ares_timeval_t *now, const ares_dns_record_t *dnsrec, const ares_dns_record_t **dnsrec_resp); void ares_metrics_record(const ares_query_t *query, ares_server_t *server, ares_status_t status, const ares_dns_record_t *dnsrec); size_t ares_metrics_server_timeout(const ares_server_t *server, const ares_timeval_t *now); ares_status_t ares_cookie_apply(ares_dns_record_t *dnsrec, ares_conn_t *conn, const ares_timeval_t *now); ares_status_t ares_cookie_validate(ares_query_t *query, const ares_dns_record_t *dnsresp, ares_conn_t *conn, const ares_timeval_t *now); ares_status_t ares__channel_threading_init(ares_channel_t *channel); void ares__channel_threading_destroy(ares_channel_t *channel); void ares__channel_lock(const ares_channel_t *channel); void ares__channel_unlock(const ares_channel_t *channel); struct ares_event_thread; typedef struct ares_event_thread ares_event_thread_t; void ares_event_thread_destroy(ares_channel_t *channel); ares_status_t ares_event_thread_init(ares_channel_t *channel); #ifdef _WIN32 # define HOSTENT_ADDRTYPE_TYPE short # define HOSTENT_LENGTH_TYPE short #else # define HOSTENT_ADDRTYPE_TYPE int # define HOSTENT_LENGTH_TYPE int #endif #endif /* __ARES_PRIVATE_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_process.c000066400000000000000000001170711471441230600214570ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2010 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_SYS_IOCTL_H # include #endif #ifdef NETWARE # include #endif #ifdef HAVE_STDINT_H # include #endif #include #include #include static void timeadd(ares_timeval_t *now, size_t millisecs); static void write_tcp_data(ares_channel_t *channel, fd_set *write_fds, ares_socket_t write_fd); static void read_packets(ares_channel_t *channel, fd_set *read_fds, ares_socket_t read_fd, const ares_timeval_t *now); static void process_timeouts(ares_channel_t *channel, const ares_timeval_t *now); static ares_status_t process_answer(ares_channel_t *channel, const unsigned char *abuf, size_t alen, ares_conn_t *conn, ares_bool_t tcp, const ares_timeval_t *now); static void handle_conn_error(ares_conn_t *conn, ares_bool_t critical_failure, ares_status_t failure_status); static ares_bool_t same_questions(const ares_query_t *query, const ares_dns_record_t *arec); static ares_bool_t same_address(const struct sockaddr *sa, const struct ares_addr *aa); static void end_query(ares_channel_t *channel, ares_server_t *server, ares_query_t *query, ares_status_t status, const ares_dns_record_t *dnsrec); static void ares__query_disassociate_from_conn(ares_query_t *query) { /* If its not part of a connection, it can't be tracked for timeouts either */ ares__slist_node_destroy(query->node_queries_by_timeout); ares__llist_node_destroy(query->node_queries_to_conn); query->node_queries_by_timeout = NULL; query->node_queries_to_conn = NULL; query->conn = NULL; } /* Invoke the server state callback after a success or failure */ static void invoke_server_state_cb(const ares_server_t *server, ares_bool_t success, int flags) { const ares_channel_t *channel = server->channel; ares__buf_t *buf; ares_status_t status; char *server_string; if (channel->server_state_cb == NULL) { return; } buf = ares__buf_create(); if (buf == NULL) { return; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares_get_server_addr(server, buf); if (status != ARES_SUCCESS) { ares__buf_destroy(buf); /* LCOV_EXCL_LINE: OutOfMemory */ return; /* LCOV_EXCL_LINE: OutOfMemory */ } server_string = ares__buf_finish_str(buf, NULL); buf = NULL; if (server_string == NULL) { return; /* LCOV_EXCL_LINE: OutOfMemory */ } channel->server_state_cb(server_string, success, flags, channel->server_state_cb_data); ares_free(server_string); } static void server_increment_failures(ares_server_t *server, ares_bool_t used_tcp) { ares__slist_node_t *node; const ares_channel_t *channel = server->channel; ares_timeval_t next_retry_time; node = ares__slist_node_find(channel->servers, server); if (node == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } server->consec_failures++; ares__slist_node_reinsert(node); ares__tvnow(&next_retry_time); timeadd(&next_retry_time, channel->server_retry_delay); server->next_retry_time = next_retry_time; invoke_server_state_cb(server, ARES_FALSE, used_tcp == ARES_TRUE ? 
ARES_SERV_STATE_TCP : ARES_SERV_STATE_UDP); } static void server_set_good(ares_server_t *server, ares_bool_t used_tcp) { ares__slist_node_t *node; const ares_channel_t *channel = server->channel; node = ares__slist_node_find(channel->servers, server); if (node == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (server->consec_failures > 0) { server->consec_failures = 0; ares__slist_node_reinsert(node); } server->next_retry_time.sec = 0; server->next_retry_time.usec = 0; invoke_server_state_cb(server, ARES_TRUE, used_tcp == ARES_TRUE ? ARES_SERV_STATE_TCP : ARES_SERV_STATE_UDP); } /* return true if now is exactly check time or later */ ares_bool_t ares__timedout(const ares_timeval_t *now, const ares_timeval_t *check) { ares_int64_t secs = (now->sec - check->sec); if (secs > 0) { return ARES_TRUE; /* yes, timed out */ } if (secs < 0) { return ARES_FALSE; /* nope, not timed out */ } /* if the full seconds were identical, check the sub second parts */ return ((ares_int64_t)now->usec - (ares_int64_t)check->usec) >= 0 ? ARES_TRUE : ARES_FALSE; } /* add the specific number of milliseconds to the time in the first argument */ static void timeadd(ares_timeval_t *now, size_t millisecs) { now->sec += (ares_int64_t)millisecs / 1000; now->usec += (unsigned int)((millisecs % 1000) * 1000); if (now->usec >= 1000000) { now->sec += now->usec / 1000000; now->usec %= 1000000; } } /* * generic process function */ static void processfds(ares_channel_t *channel, fd_set *read_fds, ares_socket_t read_fd, fd_set *write_fds, ares_socket_t write_fd) { ares_timeval_t now; if (channel == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__channel_lock(channel); ares__tvnow(&now); read_packets(channel, read_fds, read_fd, &now); process_timeouts(channel, &now); /* Write last as the other 2 operations might have triggered writes */ write_tcp_data(channel, write_fds, write_fd); /* See if any connections should be cleaned up */ ares__check_cleanup_conns(channel); ares__channel_unlock(channel); } /* Something interesting happened on the wire, or there was a timeout. * See what's up and respond accordingly. */ void ares_process(ares_channel_t *channel, fd_set *read_fds, fd_set *write_fds) { processfds(channel, read_fds, ARES_SOCKET_BAD, write_fds, ARES_SOCKET_BAD); } /* Something interesting happened on the wire, or there was a timeout. * See what's up and respond accordingly. */ void ares_process_fd(ares_channel_t *channel, ares_socket_t read_fd, /* use ARES_SOCKET_BAD or valid file descriptors */ ares_socket_t write_fd) { processfds(channel, NULL, read_fd, NULL, write_fd); } /* If any TCP sockets select true for writing, write out queued data * we have for them. */ static void write_tcp_data(ares_channel_t *channel, fd_set *write_fds, ares_socket_t write_fd) { ares__slist_node_t *node; if (!write_fds && (write_fd == ARES_SOCKET_BAD)) { /* no possible action */ return; } for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { ares_server_t *server = ares__slist_node_val(node); const unsigned char *data; size_t data_len; ares_ssize_t count; /* Make sure server has data to send and is selected in write_fds or write_fd. 
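 * Concretely, the check below skips a server when its tcp_send buffer is
 * empty, when it has no established TCP connection, or when the caller did
 * not report that connection's fd as writable.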
*/ if (ares__buf_len(server->tcp_send) == 0 || server->tcp_conn == NULL) { continue; } if (write_fds) { if (!FD_ISSET(server->tcp_conn->fd, write_fds)) { continue; } } else { if (server->tcp_conn->fd != write_fd) { continue; } } if (write_fds) { /* If there's an error and we close this socket, then open * another with the same fd to talk to another server, then we * don't want to think that it was the new socket that was * ready. This is not disastrous, but is likely to result in * extra system calls and confusion. */ FD_CLR(server->tcp_conn->fd, write_fds); } data = ares__buf_peek(server->tcp_send, &data_len); count = ares__conn_write(server->tcp_conn, data, data_len); if (count <= 0) { if (!ares__socket_try_again(SOCKERRNO)) { handle_conn_error(server->tcp_conn, ARES_TRUE, ARES_ECONNREFUSED); } continue; } /* Strip data written from the buffer */ ares__buf_consume(server->tcp_send, (size_t)count); /* Notify state callback all data is written */ if (ares__buf_len(server->tcp_send) == 0) { SOCK_STATE_CALLBACK(channel, server->tcp_conn->fd, 1, 0); } } } /* If any TCP socket selects true for reading, read some data, * allocate a buffer if we finish reading the length word, and process * a packet if we finish reading one. */ static void read_tcp_data(ares_channel_t *channel, ares_conn_t *conn, const ares_timeval_t *now) { ares_ssize_t count; ares_server_t *server = conn->server; /* Fetch buffer to store data we are reading */ size_t ptr_len = 65535; unsigned char *ptr; ptr = ares__buf_append_start(server->tcp_parser, &ptr_len); if (ptr == NULL) { handle_conn_error(conn, ARES_FALSE /* not critical to connection */, ARES_SUCCESS); return; /* bail out on malloc failure. TODO: make this function return error codes */ } /* Read from socket */ count = ares__socket_recv(channel, conn->fd, ptr, ptr_len); if (count <= 0) { ares__buf_append_finish(server->tcp_parser, 0); if (!(count == -1 && ares__socket_try_again(SOCKERRNO))) { handle_conn_error(conn, ARES_TRUE, ARES_ECONNREFUSED); } return; } /* Record amount of data read */ ares__buf_append_finish(server->tcp_parser, (size_t)count); /* Process all queued answers */ while (1) { unsigned short dns_len = 0; const unsigned char *data = NULL; size_t data_len = 0; ares_status_t status; /* Tag so we can roll back */ ares__buf_tag(server->tcp_parser); /* Read length indicator */ if (ares__buf_fetch_be16(server->tcp_parser, &dns_len) != ARES_SUCCESS) { ares__buf_tag_rollback(server->tcp_parser); break; } /* Not enough data for a full response yet */ if (ares__buf_consume(server->tcp_parser, dns_len) != ARES_SUCCESS) { ares__buf_tag_rollback(server->tcp_parser); break; } /* Can't fail except for misuse */ data = ares__buf_tag_fetch(server->tcp_parser, &data_len); if (data == NULL || data_len < 2) { ares__buf_tag_clear(server->tcp_parser); break; } /* Strip off 2 bytes length */ data += 2; data_len -= 2; /* We finished reading this answer; process it */ status = process_answer(channel, data, data_len, conn, ARES_TRUE, now); if (status != ARES_SUCCESS) { handle_conn_error(conn, ARES_TRUE, status); return; } /* Since we processed the answer, clear the tag so space can be reclaimed */ ares__buf_tag_clear(server->tcp_parser); } } static ares_socket_t *channel_socket_list(const ares_channel_t *channel, size_t *num) { ares__slist_node_t *snode; ares__array_t *arr = ares__array_create(sizeof(ares_socket_t), NULL); *num = 0; if (arr == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } for (snode = ares__slist_node_first(channel->servers); snode != NULL; snode = 
ares__slist_node_next(snode)) { ares_server_t *server = ares__slist_node_val(snode); ares__llist_node_t *node; for (node = ares__llist_node_first(server->connections); node != NULL; node = ares__llist_node_next(node)) { const ares_conn_t *conn = ares__llist_node_val(node); ares_socket_t *sptr; ares_status_t status; if (conn->fd == ARES_SOCKET_BAD) { continue; } status = ares__array_insert_last((void **)&sptr, arr); if (status != ARES_SUCCESS) { ares__array_destroy(arr); /* LCOV_EXCL_LINE: OutOfMemory */ return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } *sptr = conn->fd; } } return ares__array_finish(arr, num); } /* If any UDP sockets select true for reading, process them. */ static void read_udp_packets_fd(ares_channel_t *channel, ares_conn_t *conn, const ares_timeval_t *now) { ares_ssize_t read_len; unsigned char buf[MAXENDSSZ + 1]; #ifdef HAVE_RECVFROM ares_socklen_t fromlen; union { struct sockaddr sa; struct sockaddr_in sa4; struct sockaddr_in6 sa6; } from; memset(&from, 0, sizeof(from)); #endif /* To reduce event loop overhead, read and process as many * packets as we can. */ do { if (conn->fd == ARES_SOCKET_BAD) { read_len = -1; } else { if (conn->server->addr.family == AF_INET) { fromlen = sizeof(from.sa4); } else { fromlen = sizeof(from.sa6); } read_len = ares__socket_recvfrom(channel, conn->fd, (void *)buf, sizeof(buf), 0, &from.sa, &fromlen); } if (read_len == 0) { /* UDP is connectionless, so result code of 0 is a 0-length UDP * packet, and not an indication the connection is closed like on * tcp */ continue; } else if (read_len < 0) { if (ares__socket_try_again(SOCKERRNO)) { break; } handle_conn_error(conn, ARES_TRUE, ARES_ECONNREFUSED); return; #ifdef HAVE_RECVFROM } else if (!same_address(&from.sa, &conn->server->addr)) { /* The address the response comes from does not match the address we * sent the request to. Someone may be attempting to perform a cache * poisoning attack. */ continue; #endif } else { process_answer(channel, buf, (size_t)read_len, conn, ARES_FALSE, now); } /* Try to read again only if *we* set up the socket, otherwise it may be * a blocking socket and would cause recvfrom to hang. */ } while (read_len >= 0 && channel->sock_funcs == NULL); } static void read_packets(ares_channel_t *channel, fd_set *read_fds, ares_socket_t read_fd, const ares_timeval_t *now) { size_t i; ares_socket_t *socketlist = NULL; size_t num_sockets = 0; ares_conn_t *conn = NULL; ares__llist_node_t *node = NULL; if (!read_fds && (read_fd == ARES_SOCKET_BAD)) { /* no possible action */ return; } /* Single socket specified */ if (!read_fds) { node = ares__htable_asvp_get_direct(channel->connnode_by_socket, read_fd); if (node == NULL) { return; } conn = ares__llist_node_val(node); if (conn->flags & ARES_CONN_FLAG_TCP) { read_tcp_data(channel, conn, now); } else { read_udp_packets_fd(channel, conn, now); } return; } /* There is no good way to iterate across an fd_set, instead we must pull a * list of all known fds, and iterate across that checking against the fd_set. */ socketlist = channel_socket_list(channel, &num_sockets); for (i = 0; i < num_sockets; i++) { if (!FD_ISSET(socketlist[i], read_fds)) { continue; } /* If there's an error and we close this socket, then open * another with the same fd to talk to another server, then we * don't want to think that it was the new socket that was * ready. This is not disastrous, but is likely to result in * extra system calls and confusion. 
*/ FD_CLR(socketlist[i], read_fds); node = ares__htable_asvp_get_direct(channel->connnode_by_socket, socketlist[i]); if (node == NULL) { return; } conn = ares__llist_node_val(node); if (conn->flags & ARES_CONN_FLAG_TCP) { read_tcp_data(channel, conn, now); } else { read_udp_packets_fd(channel, conn, now); } } ares_free(socketlist); } /* If any queries have timed out, note the timeout and move them on. */ static void process_timeouts(ares_channel_t *channel, const ares_timeval_t *now) { ares__slist_node_t *node; /* Just keep popping off the first as this list will re-sort as things come * and go. We don't want to try to rely on 'next' as some operation might * cause a cleanup of that pointer and would become invalid */ while ((node = ares__slist_node_first(channel->queries_by_timeout)) != NULL) { ares_query_t *query = ares__slist_node_val(node); ares_conn_t *conn; /* Since this is sorted, as soon as we hit a query that isn't timed out, * break */ if (!ares__timedout(now, &query->timeout)) { break; } query->timeouts++; conn = query->conn; server_increment_failures(conn->server, query->using_tcp); ares__requeue_query(query, now, ARES_ETIMEOUT, ARES_TRUE, NULL); } } static ares_status_t rewrite_without_edns(ares_query_t *query) { ares_status_t status = ARES_SUCCESS; size_t i; ares_bool_t found_opt_rr = ARES_FALSE; /* Find and remove the OPT RR record */ for (i = 0; i < ares_dns_record_rr_cnt(query->query, ARES_SECTION_ADDITIONAL); i++) { const ares_dns_rr_t *rr; rr = ares_dns_record_rr_get(query->query, ARES_SECTION_ADDITIONAL, i); if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) { ares_dns_record_rr_del(query->query, ARES_SECTION_ADDITIONAL, i); found_opt_rr = ARES_TRUE; break; } } if (!found_opt_rr) { status = ARES_EFORMERR; goto done; } done: return status; } /* Handle an answer from a server. This must NEVER cleanup the * server connection! Return something other than ARES_SUCCESS to cause * the connection to be terminated after this call. */ static ares_status_t process_answer(ares_channel_t *channel, const unsigned char *abuf, size_t alen, ares_conn_t *conn, ares_bool_t tcp, const ares_timeval_t *now) { ares_query_t *query; /* Cache these as once ares__send_query() gets called, it may end up * invalidating the connection all-together */ ares_server_t *server = conn->server; ares_dns_record_t *rdnsrec = NULL; ares_status_t status; ares_bool_t is_cached = ARES_FALSE; /* Parse the response */ status = ares_dns_parse(abuf, alen, 0, &rdnsrec); if (status != ARES_SUCCESS) { /* Malformations are never accepted */ status = ARES_EBADRESP; goto cleanup; } /* Find the query corresponding to this packet. The queries are * hashed/bucketed by query id, so this lookup should be quick. */ query = ares__htable_szvp_get_direct(channel->queries_by_qid, ares_dns_record_get_id(rdnsrec)); if (!query) { /* We may have stopped listening for this query, that's ok */ status = ARES_SUCCESS; goto cleanup; } /* Both the query id and the questions must be the same. We will drop any * replies that aren't for the same query as this is considered invalid. */ if (!same_questions(query, rdnsrec)) { /* Possible qid conflict due to delayed response, that's ok */ status = ARES_SUCCESS; goto cleanup; } /* Validate DNS cookie in response. This function may need to requeue the * query. 
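* (This is the DNS Cookies mechanism of RFC 7873, which helps reject
 * off-path spoofed responses; a validation failure here simply drops the
 * reply.)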
*/ if (ares_cookie_validate(query, rdnsrec, conn, now) != ARES_SUCCESS) { /* Drop response and return */ status = ARES_SUCCESS; goto cleanup; } /* At this point we know we've received an answer for this query, so we should * remove it from the connection's queue so we can possibly invalidate the * connection. Delay cleaning up the connection though as we may enqueue * something new. */ ares__llist_node_destroy(query->node_queries_to_conn); query->node_queries_to_conn = NULL; /* If we use EDNS and server answers with FORMERR without an OPT RR, the * protocol extension is not understood by the responder. We must retry the * query without EDNS enabled. */ if (ares_dns_record_get_rcode(rdnsrec) == ARES_RCODE_FORMERR && ares_dns_get_opt_rr_const(query->query) != NULL && ares_dns_get_opt_rr_const(rdnsrec) == NULL) { status = rewrite_without_edns(query); if (status != ARES_SUCCESS) { end_query(channel, server, query, status, NULL); goto cleanup; } ares__send_query(query, now); status = ARES_SUCCESS; goto cleanup; } /* If we got a truncated UDP packet and are not ignoring truncation, * don't accept the packet, and switch the query to TCP if we hadn't * done so already. */ if (ares_dns_record_get_flags(rdnsrec) & ARES_FLAG_TC && !tcp && !(channel->flags & ARES_FLAG_IGNTC)) { query->using_tcp = ARES_TRUE; ares__send_query(query, now); status = ARES_SUCCESS; /* Switched to TCP is ok */ goto cleanup; } /* If we aren't passing through all error packets, discard packets * with SERVFAIL, NOTIMP, or REFUSED response codes. */ if (!(channel->flags & ARES_FLAG_NOCHECKRESP)) { ares_dns_rcode_t rcode = ares_dns_record_get_rcode(rdnsrec); if (rcode == ARES_RCODE_SERVFAIL || rcode == ARES_RCODE_NOTIMP || rcode == ARES_RCODE_REFUSED) { switch (rcode) { case ARES_RCODE_SERVFAIL: status = ARES_ESERVFAIL; break; case ARES_RCODE_NOTIMP: status = ARES_ENOTIMP; break; case ARES_RCODE_REFUSED: status = ARES_EREFUSED; break; default: break; } server_increment_failures(server, query->using_tcp); ares__requeue_query(query, now, status, ARES_TRUE, rdnsrec); /* Should any of these cause a connection termination? * Maybe SERVER_FAILURE? */ status = ARES_SUCCESS; goto cleanup; } } /* If cache insertion was successful, it took ownership. We ignore * other cache insertion failures. */ if (ares_qcache_insert(channel, now, query, rdnsrec) == ARES_SUCCESS) { is_cached = ARES_TRUE; } server_set_good(server, query->using_tcp); end_query(channel, server, query, ARES_SUCCESS, rdnsrec); status = ARES_SUCCESS; cleanup: /* Don't cleanup the cached pointer to the dns response */ if (!is_cached) { ares_dns_record_destroy(rdnsrec); } return status; } static void handle_conn_error(ares_conn_t *conn, ares_bool_t critical_failure, ares_status_t failure_status) { ares_server_t *server = conn->server; /* Increment failures first before requeue so it is unlikely to requeue * to the same server */ if (critical_failure) { server_increment_failures( server, (conn->flags & ARES_CONN_FLAG_TCP) ? 
ARES_TRUE : ARES_FALSE); } /* This will requeue any connections automatically */ ares__close_connection(conn, failure_status); } ares_status_t ares__requeue_query(ares_query_t *query, const ares_timeval_t *now, ares_status_t status, ares_bool_t inc_try_count, const ares_dns_record_t *dnsrec) { ares_channel_t *channel = query->channel; size_t max_tries = ares__slist_len(channel->servers) * channel->tries; ares__query_disassociate_from_conn(query); if (status != ARES_SUCCESS) { query->error_status = status; } if (inc_try_count) { query->try_count++; } if (query->try_count < max_tries && !query->no_retries) { return ares__send_query(query, now); } /* If we are here, all attempts to perform query failed. */ if (query->error_status == ARES_SUCCESS) { query->error_status = ARES_ETIMEOUT; } end_query(channel, NULL, query, query->error_status, dnsrec); return ARES_ETIMEOUT; } /* Pick a random server from the list, we first get a random number in the * range of the number of servers, then scan until we find that server in * the list */ static ares_server_t *ares__random_server(ares_channel_t *channel) { unsigned char c; size_t cnt; size_t idx; ares__slist_node_t *node; size_t num_servers = ares__slist_len(channel->servers); /* Silence coverity, not possible */ if (num_servers == 0) { return NULL; } ares__rand_bytes(channel->rand_state, &c, 1); cnt = c; idx = cnt % num_servers; cnt = 0; for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { if (cnt == idx) { return ares__slist_node_val(node); } cnt++; } return NULL; } /* Pick a server from the list with failover behavior. * * We default to using the first server in the sorted list of servers. That is * the server with the lowest number of consecutive failures and then the * highest priority server (by idx) if there is a draw. * * However, if a server temporarily goes down and hits some failures, then that * server will never be retried until all other servers hit the same number of * failures. This may prevent the server from being retried for a long time. * * To resolve this, with some probability we select a failed server to retry * instead. */ static ares_server_t *ares__failover_server(ares_channel_t *channel) { ares_server_t *first_server = ares__slist_first_val(channel->servers); const ares_server_t *last_server = ares__slist_last_val(channel->servers); unsigned short r; /* Defensive code against no servers being available on the channel. */ if (first_server == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* If no servers have failures, then prefer the first server in the list. */ if (last_server != NULL && last_server->consec_failures == 0) { return first_server; } /* If we are not configured with a server retry chance then return the first * server. */ if (channel->server_retry_chance == 0) { return first_server; } /* Generate a random value to decide whether to retry a failed server. The * probability to use is 1/channel->server_retry_chance, rounded up to a * precision of 1/2^B where B is the number of bits in the random value. * We use an unsigned short for the random value for increased precision. */ ares__rand_bytes(channel->rand_state, (unsigned char *)&r, sizeof(r)); if (r % channel->server_retry_chance == 0) { /* Select a suitable failed server to retry. 
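* For example, with a hypothetical server_retry_chance of 10, r % 10 == 0
 * holds for roughly 1 in 10 requests, so a previously failed server is
 * reconsidered about 10% of the time (provided its next_retry_time has
 * passed, which the loop below checks).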
*/ ares_timeval_t now; ares__slist_node_t *node; ares__tvnow(&now); for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { ares_server_t *node_val = ares__slist_node_val(node); if (node_val != NULL && node_val->consec_failures > 0 && ares__timedout(&now, &node_val->next_retry_time)) { return node_val; } } } /* If we have not returned yet, then return the first server. */ return first_server; } static size_t ares__calc_query_timeout(const ares_query_t *query, const ares_server_t *server, const ares_timeval_t *now) { const ares_channel_t *channel = query->channel; size_t timeout = ares_metrics_server_timeout(server, now); size_t timeplus = timeout; size_t rounds; size_t num_servers = ares__slist_len(channel->servers); if (num_servers == 0) { return 0; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* For each trip through the entire server list, we want to double the * retry from the last retry */ rounds = (query->try_count / num_servers); if (rounds > 0) { timeplus <<= rounds; } if (channel->maxtimeout && timeplus > channel->maxtimeout) { timeplus = channel->maxtimeout; } /* Add some jitter to the retry timeout. * * Jitter is needed in situation when resolve requests are performed * simultaneously from multiple hosts and DNS server throttle these requests. * Adding randomness allows to avoid synchronisation of retries. * * Value of timeplus adjusted randomly to the range [0.5 * timeplus, * timeplus]. */ if (rounds > 0) { unsigned short r; float delta_multiplier; ares__rand_bytes(channel->rand_state, (unsigned char *)&r, sizeof(r)); delta_multiplier = ((float)r / USHRT_MAX) * 0.5f; timeplus -= (size_t)((float)timeplus * delta_multiplier); } /* We want explicitly guarantee that timeplus is greater or equal to timeout * specified in channel options. */ if (timeplus < timeout) { timeplus = timeout; } return timeplus; } static ares_conn_t *ares__fetch_connection(const ares_channel_t *channel, ares_server_t *server, const ares_query_t *query) { ares__llist_node_t *node; ares_conn_t *conn; if (query->using_tcp) { return server->tcp_conn; } /* Fetch existing UDP connection */ node = ares__llist_node_first(server->connections); if (node == NULL) { return NULL; } conn = ares__llist_node_val(node); /* Not UDP, skip */ if (conn->flags & ARES_CONN_FLAG_TCP) { return NULL; } /* Used too many times */ if (channel->udp_max_queries > 0 && conn->total_queries >= channel->udp_max_queries) { return NULL; } return conn; } static ares_status_t ares__conn_query_write(ares_conn_t *conn, ares_query_t *query, const ares_timeval_t *now) { unsigned char *qbuf = NULL; size_t qbuf_len = 0; ares_ssize_t len; ares_server_t *server = conn->server; ares_channel_t *channel = server->channel; ares_status_t status; status = ares_cookie_apply(query->query, conn, now); if (status != ARES_SUCCESS) { return status; } if (conn->flags & ARES_CONN_FLAG_TCP) { size_t prior_len = ares__buf_len(server->tcp_send); status = ares_dns_write_buf_tcp(query->query, server->tcp_send); if (status != ARES_SUCCESS) { return status; } if (conn->flags & ARES_CONN_FLAG_TFO_INITIAL) { /* When using TFO, we need to put it on the wire immediately. */ size_t data_len; const unsigned char *data = NULL; data = ares__buf_peek(server->tcp_send, &data_len); len = ares__conn_write(conn, data, data_len); if (len <= 0) { if (ares__socket_try_again(SOCKERRNO)) { /* This means we must not have qualified for TFO, keep the data * buffered, wait on write signal. */ return ARES_SUCCESS; } /* TCP TFO might delay failure. 
Reflect that here */ return ARES_ECONNREFUSED; } /* Consume what was written */ ares__buf_consume(server->tcp_send, (size_t)len); return ARES_SUCCESS; } if (prior_len == 0) { SOCK_STATE_CALLBACK(channel, conn->fd, 1, 1); } return ARES_SUCCESS; } /* UDP Here */ status = ares_dns_write(query->query, &qbuf, &qbuf_len); if (status != ARES_SUCCESS) { return status; } len = ares__conn_write(conn, qbuf, qbuf_len); ares_free(qbuf); if (len == -1) { if (ares__socket_try_again(SOCKERRNO)) { return ARES_ESERVFAIL; } /* UDP is connection-less, but we might receive an ICMP unreachable which * means we can't talk to the remote host at all and that will be * reflected here */ return ARES_ECONNREFUSED; } return ARES_SUCCESS; } ares_status_t ares__send_query(ares_query_t *query, const ares_timeval_t *now) { ares_channel_t *channel = query->channel; ares_server_t *server; ares_conn_t *conn; size_t timeplus; ares_status_t status; /* Choose the server to send the query to */ if (channel->rotate) { /* Pull random server */ server = ares__random_server(channel); } else { /* Pull server with failover behavior */ server = ares__failover_server(channel); } if (server == NULL) { end_query(channel, server, query, ARES_ENOSERVER /* ? */, NULL); return ARES_ENOSERVER; } conn = ares__fetch_connection(channel, server, query); if (conn == NULL) { status = ares__open_connection(&conn, channel, server, query->using_tcp); switch (status) { /* Good result, continue on */ case ARES_SUCCESS: break; /* These conditions are retryable as they are server-specific * error codes */ case ARES_ECONNREFUSED: case ARES_EBADFAMILY: server_increment_failures(server, query->using_tcp); return ares__requeue_query(query, now, status, ARES_TRUE, NULL); /* Anything else is not retryable, likely ENOMEM */ default: end_query(channel, server, query, status, NULL); return status; } } /* Write the query */ status = ares__conn_query_write(conn, query, now); switch (status) { /* Good result, continue on */ case ARES_SUCCESS: break; case ARES_ENOMEM: /* Not retryable */ end_query(channel, server, query, status, NULL); return status; /* These conditions are retryable as they are server-specific * error codes */ case ARES_ECONNREFUSED: case ARES_EBADFAMILY: handle_conn_error(conn, ARES_TRUE, status); status = ares__requeue_query(query, now, status, ARES_TRUE, NULL); if (status == ARES_ETIMEOUT) { status = ARES_ECONNREFUSED; } return status; /* FIXME: Handle EAGAIN here since it likely can happen. Right now we * just requeue to a different server/connection. */ default: server_increment_failures(server, query->using_tcp); status = ares__requeue_query(query, now, status, ARES_TRUE, NULL); return status; } timeplus = ares__calc_query_timeout(query, server, now); /* Keep track of queries bucketed by timeout, so we can process * timeout events quickly. */ ares__slist_node_destroy(query->node_queries_by_timeout); query->ts = *now; query->timeout = *now; timeadd(&query->timeout, timeplus); query->node_queries_by_timeout = ares__slist_insert(channel->queries_by_timeout, query); if (!query->node_queries_by_timeout) { /* LCOV_EXCL_START: OutOfMemory */ end_query(channel, server, query, ARES_ENOMEM, NULL); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } /* Keep track of queries bucketed by connection, so we can process errors * quickly. 
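* Each connection keeps its own list of in-flight queries (queries_to_conn),
 * which is what lets a connection failure requeue exactly the queries that
 * were outstanding on that connection.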
*/ ares__llist_node_destroy(query->node_queries_to_conn); query->node_queries_to_conn = ares__llist_insert_last(conn->queries_to_conn, query); if (query->node_queries_to_conn == NULL) { /* LCOV_EXCL_START: OutOfMemory */ end_query(channel, server, query, ARES_ENOMEM, NULL); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } query->conn = conn; conn->total_queries++; return ARES_SUCCESS; } static ares_bool_t same_questions(const ares_query_t *query, const ares_dns_record_t *arec) { size_t i; ares_bool_t rv = ARES_FALSE; const ares_dns_record_t *qrec = query->query; const ares_channel_t *channel = query->channel; if (ares_dns_record_query_cnt(qrec) != ares_dns_record_query_cnt(arec)) { goto done; } for (i = 0; i < ares_dns_record_query_cnt(qrec); i++) { const char *qname = NULL; const char *aname = NULL; ares_dns_rec_type_t qtype; ares_dns_rec_type_t atype; ares_dns_class_t qclass; ares_dns_class_t aclass; if (ares_dns_record_query_get(qrec, i, &qname, &qtype, &qclass) != ARES_SUCCESS || qname == NULL) { goto done; } if (ares_dns_record_query_get(arec, i, &aname, &atype, &aclass) != ARES_SUCCESS || aname == NULL) { goto done; } if (qtype != atype || qclass != aclass) { goto done; } if (channel->flags & ARES_FLAG_DNS0x20 && !query->using_tcp) { /* NOTE: for DNS 0x20, part of the protection is to use a case-sensitive * comparison of the DNS query name. This expects the upstream DNS * server to preserve the case of the name in the response packet. * https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00 */ if (strcmp(qname, aname) != 0) { goto done; } } else { /* without DNS0x20 use case-insensitive matching */ if (strcasecmp(qname, aname) != 0) { goto done; } } } rv = ARES_TRUE; done: return rv; } static ares_bool_t same_address(const struct sockaddr *sa, const struct ares_addr *aa) { const void *addr1; const void *addr2; if (sa->sa_family == aa->family) { switch (aa->family) { case AF_INET: addr1 = &aa->addr.addr4; addr2 = &(CARES_INADDR_CAST(const struct sockaddr_in *, sa))->sin_addr; if (memcmp(addr1, addr2, sizeof(aa->addr.addr4)) == 0) { return ARES_TRUE; /* match */ } break; case AF_INET6: addr1 = &aa->addr.addr6; addr2 = &(CARES_INADDR_CAST(const struct sockaddr_in6 *, sa))->sin6_addr; if (memcmp(addr1, addr2, sizeof(aa->addr.addr6)) == 0) { return ARES_TRUE; /* match */ } break; default: break; /* LCOV_EXCL_LINE */ } } return ARES_FALSE; /* different */ } static void ares_detach_query(ares_query_t *query) { /* Remove the query from all the lists in which it is linked */ ares__query_disassociate_from_conn(query); ares__htable_szvp_remove(query->channel->queries_by_qid, query->qid); ares__llist_node_destroy(query->node_all_queries); query->node_all_queries = NULL; } static void end_query(ares_channel_t *channel, ares_server_t *server, ares_query_t *query, ares_status_t status, const ares_dns_record_t *dnsrec) { ares_metrics_record(query, server, status, dnsrec); /* Invoke the callback. */ query->callback(query->arg, status, query->timeouts, dnsrec); ares__free_query(query); /* Check and notify if no other queries are enqueued on the channel. This * must come after the callback and freeing the query for 2 reasons. * 1) The callback itself may enqueue a new query * 2) Technically the current query isn't detached until it is free()'d. 
*/ ares_queue_notify_empty(channel); } void ares__free_query(ares_query_t *query) { ares_detach_query(query); /* Zero out some important stuff, to help catch bugs */ query->callback = NULL; query->arg = NULL; /* Deallocate the memory associated with the query */ ares_dns_record_destroy(query->query); ares_free(query); } gevent-24.11.1/deps/c-ares/src/lib/ares_qcache.c000066400000000000000000000276651471441230600212360ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" struct ares__qcache { ares__htable_strvp_t *cache; ares__slist_t *expire; unsigned int max_ttl; }; typedef struct { char *key; ares_dns_record_t *dnsrec; time_t expire_ts; time_t insert_ts; } ares__qcache_entry_t; static char *ares__qcache_calc_key(const ares_dns_record_t *dnsrec) { ares__buf_t *buf = ares__buf_create(); size_t i; ares_status_t status; ares_dns_flags_t flags; if (dnsrec == NULL || buf == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Format is OPCODE|FLAGS[|QTYPE1|QCLASS1|QNAME1]... 
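* For example, a recursion-desired A/IN lookup of the hypothetical name
 * www.example.com would produce a key along the lines of
 * "QUERY|rd|A|IN|www.example.com" (the exact tokens come from the *_tostr
 * helpers used below).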
*/ status = ares__buf_append_str( buf, ares_dns_opcode_tostr(ares_dns_record_get_opcode(dnsrec))); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, '|'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } flags = ares_dns_record_get_flags(dnsrec); /* Only care about RD and CD */ if (flags & ARES_FLAG_RD) { status = ares__buf_append_str(buf, "rd"); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (flags & ARES_FLAG_CD) { status = ares__buf_append_str(buf, "cd"); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } for (i = 0; i < ares_dns_record_query_cnt(dnsrec); i++) { const char *name; size_t name_len; ares_dns_rec_type_t qtype; ares_dns_class_t qclass; status = ares_dns_record_query_get(dnsrec, i, &name, &qtype, &qclass); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: DefensiveCoding */ } status = ares__buf_append_byte(buf, '|'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_str(buf, ares_dns_rec_type_tostr(qtype)); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, '|'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_str(buf, ares_dns_class_tostr(qclass)); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, '|'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } /* On queries, a '.' may be appended to the name to indicate an explicit * name lookup without performing a search. Strip this since its not part * of a cached response. 
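* As a result, "www.example.com." and "www.example.com" map to the same
 * cache key.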
*/ name_len = ares_strlen(name); if (name_len && name[name_len - 1] == '.') { name_len--; } if (name_len > 0) { status = ares__buf_append(buf, (const unsigned char *)name, name_len); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } } return ares__buf_finish_str(buf, NULL); /* LCOV_EXCL_START: OutOfMemory */ fail: ares__buf_destroy(buf); return NULL; /* LCOV_EXCL_STOP */ } static void ares__qcache_expire(ares__qcache_t *cache, const ares_timeval_t *now) { ares__slist_node_t *node; if (cache == NULL) { return; } while ((node = ares__slist_node_first(cache->expire)) != NULL) { const ares__qcache_entry_t *entry = ares__slist_node_val(node); /* If now is NULL, we're flushing everything, so don't break */ if (now != NULL && entry->expire_ts > now->sec) { break; } ares__htable_strvp_remove(cache->cache, entry->key); ares__slist_node_destroy(node); } } void ares__qcache_flush(ares__qcache_t *cache) { ares__qcache_expire(cache, NULL /* flush all */); } void ares__qcache_destroy(ares__qcache_t *cache) { if (cache == NULL) { return; } ares__htable_strvp_destroy(cache->cache); ares__slist_destroy(cache->expire); ares_free(cache); } static int ares__qcache_entry_sort_cb(const void *arg1, const void *arg2) { const ares__qcache_entry_t *entry1 = arg1; const ares__qcache_entry_t *entry2 = arg2; if (entry1->expire_ts > entry2->expire_ts) { return 1; } if (entry1->expire_ts < entry2->expire_ts) { return -1; } return 0; } static void ares__qcache_entry_destroy_cb(void *arg) { ares__qcache_entry_t *entry = arg; if (entry == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares_free(entry->key); ares_dns_record_destroy(entry->dnsrec); ares_free(entry); } ares_status_t ares__qcache_create(ares_rand_state *rand_state, unsigned int max_ttl, ares__qcache_t **cache_out) { ares_status_t status = ARES_SUCCESS; ares__qcache_t *cache; cache = ares_malloc_zero(sizeof(*cache)); if (cache == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cache->cache = ares__htable_strvp_create(NULL); if (cache->cache == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cache->expire = ares__slist_create(rand_state, ares__qcache_entry_sort_cb, ares__qcache_entry_destroy_cb); if (cache->expire == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } cache->max_ttl = max_ttl; done: if (status != ARES_SUCCESS) { *cache_out = NULL; ares__qcache_destroy(cache); return status; } *cache_out = cache; return status; } static unsigned int ares__qcache_calc_minttl(ares_dns_record_t *dnsrec) { unsigned int minttl = 0xFFFFFFFF; size_t sect; for (sect = ARES_SECTION_ANSWER; sect <= ARES_SECTION_ADDITIONAL; sect++) { size_t i; for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, (ares_dns_section_t)sect); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, (ares_dns_section_t)sect, i); ares_dns_rec_type_t type = ares_dns_rr_get_type(rr); unsigned int ttl = ares_dns_rr_get_ttl(rr); /* TTL is meaningless on these record types */ if (type == ARES_REC_TYPE_OPT || type == ARES_REC_TYPE_SOA || type == ARES_REC_TYPE_SIG) { continue; } if (ttl < minttl) { minttl = ttl; } } } return minttl; } static unsigned int ares__qcache_soa_minimum(ares_dns_record_t *dnsrec) { size_t i; /* RFC 2308 Section 5 says its the minimum of MINIMUM and the TTL of the * record. 
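* For example, an SOA record with TTL 300 and MINIMUM 900 yields a
 * negative-cache TTL of 300, while TTL 7200 with MINIMUM 900 yields 900.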
*/ for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_AUTHORITY); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_AUTHORITY, i); ares_dns_rec_type_t type = ares_dns_rr_get_type(rr); unsigned int ttl; unsigned int minimum; if (type != ARES_REC_TYPE_SOA) { continue; } minimum = ares_dns_rr_get_u32(rr, ARES_RR_SOA_MINIMUM); ttl = ares_dns_rr_get_ttl(rr); if (ttl > minimum) { return minimum; } return ttl; } return 0; } /* On success, takes ownership of dnsrec */ static ares_status_t ares__qcache_insert(ares__qcache_t *qcache, ares_dns_record_t *qresp, const ares_dns_record_t *qreq, const ares_timeval_t *now) { ares__qcache_entry_t *entry; unsigned int ttl; ares_dns_rcode_t rcode = ares_dns_record_get_rcode(qresp); ares_dns_flags_t flags = ares_dns_record_get_flags(qresp); if (qcache == NULL || qresp == NULL) { return ARES_EFORMERR; } /* Only save NOERROR or NXDOMAIN */ if (rcode != ARES_RCODE_NOERROR && rcode != ARES_RCODE_NXDOMAIN) { return ARES_ENOTIMP; } /* Don't save truncated queries */ if (flags & ARES_FLAG_TC) { return ARES_ENOTIMP; } /* Look at SOA for NXDOMAIN for minimum */ if (rcode == ARES_RCODE_NXDOMAIN) { ttl = ares__qcache_soa_minimum(qresp); } else { ttl = ares__qcache_calc_minttl(qresp); } if (ttl > qcache->max_ttl) { ttl = qcache->max_ttl; } /* Don't cache something that is already expired */ if (ttl == 0) { return ARES_EREFUSED; } entry = ares_malloc_zero(sizeof(*entry)); if (entry == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } entry->dnsrec = qresp; entry->expire_ts = (time_t)now->sec + (time_t)ttl; entry->insert_ts = (time_t)now->sec; /* We can't guarantee the server responded with the same flags as the * request had, so we have to re-parse the request in order to generate the * key for caching, but we'll only do this once we know for sure we really * want to cache it */ entry->key = ares__qcache_calc_key(qreq); if (entry->key == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } if (!ares__htable_strvp_insert(qcache->cache, entry->key, entry)) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } if (ares__slist_insert(qcache->expire, entry) == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; /* LCOV_EXCL_START: OutOfMemory */ fail: if (entry != NULL && entry->key != NULL) { ares__htable_strvp_remove(qcache->cache, entry->key); ares_free(entry->key); ares_free(entry); } return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } ares_status_t ares_qcache_fetch(ares_channel_t *channel, const ares_timeval_t *now, const ares_dns_record_t *dnsrec, const ares_dns_record_t **dnsrec_resp) { char *key = NULL; ares__qcache_entry_t *entry; ares_status_t status = ARES_SUCCESS; if (channel == NULL || dnsrec == NULL || dnsrec_resp == NULL) { return ARES_EFORMERR; } if (channel->qcache == NULL) { return ARES_ENOTFOUND; } ares__qcache_expire(channel->qcache, now); key = ares__qcache_calc_key(dnsrec); if (key == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } entry = ares__htable_strvp_get_direct(channel->qcache->cache, key); if (entry == NULL) { status = ARES_ENOTFOUND; goto done; } ares_dns_record_write_ttl_decrement( entry->dnsrec, (unsigned int)(now->sec - entry->insert_ts)); *dnsrec_resp = entry->dnsrec; done: ares_free(key); return status; } ares_status_t ares_qcache_insert(ares_channel_t *channel, const ares_timeval_t *now, const ares_query_t *query, ares_dns_record_t *dnsrec) { return ares__qcache_insert(channel->qcache, dnsrec, query->query, now); } 
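/*
 * Illustrative sketch (not part of upstream c-ares): the effective cache
 * lifetime computed above is simply the smallest relevant RR TTL clamped to
 * the configured qcache max_ttl.  The helper and the values mentioned below
 * are hypothetical and exist only to demonstrate that calculation.
 */
static unsigned int example_effective_cache_ttl(const unsigned int *rr_ttls,
                                                size_t n, unsigned int max_ttl)
{
  unsigned int minttl = 0xFFFFFFFF;
  size_t       i;

  /* Smallest TTL across the relevant resource records, mirroring
   * ares__qcache_calc_minttl() */
  for (i = 0; i < n; i++) {
    if (rr_ttls[i] < minttl) {
      minttl = rr_ttls[i];
    }
  }

  /* Clamp to the configured maximum, mirroring ares__qcache_insert() */
  if (minttl > max_ttl) {
    minttl = max_ttl;
  }

  return minttl;
}
/* e.g. TTLs { 300, 60, 3600 } with max_ttl 3600 -> 60;
 *      TTLs { 86400 } with max_ttl 3600 -> 3600 */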
gevent-24.11.1/deps/c-ares/src/lib/ares_query.c000066400000000000000000000117471471441230600211510ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif typedef struct { ares_callback_dnsrec callback; void *arg; } ares_query_dnsrec_arg_t; static void ares_query_dnsrec_cb(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec) { ares_query_dnsrec_arg_t *qquery = arg; if (status != ARES_SUCCESS) { qquery->callback(qquery->arg, status, timeouts, dnsrec); } else { size_t ancount; ares_dns_rcode_t rcode; /* Pull the response code and answer count from the packet and convert any * errors. */ rcode = ares_dns_record_get_rcode(dnsrec); ancount = ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); status = ares_dns_query_reply_tostatus(rcode, ancount); qquery->callback(qquery->arg, status, timeouts, dnsrec); } ares_free(qquery); } ares_status_t ares_query_nolock(ares_channel_t *channel, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, ares_callback_dnsrec callback, void *arg, unsigned short *qid) { ares_status_t status; ares_dns_record_t *dnsrec = NULL; ares_dns_flags_t flags = 0; ares_query_dnsrec_arg_t *qquery = NULL; if (channel == NULL || name == NULL || callback == NULL) { /* LCOV_EXCL_START: DefensiveCoding */ status = ARES_EFORMERR; if (callback != NULL) { callback(arg, status, 0, NULL); } return status; /* LCOV_EXCL_STOP */ } if (!(channel->flags & ARES_FLAG_NORECURSE)) { flags |= ARES_FLAG_RD; } status = ares_dns_record_create_query( &dnsrec, name, dnsclass, type, 0, flags, (size_t)(channel->flags & ARES_FLAG_EDNS) ? channel->ednspsz : 0); if (status != ARES_SUCCESS) { callback(arg, status, 0, NULL); /* LCOV_EXCL_LINE: OutOfMemory */ return status; /* LCOV_EXCL_LINE: OutOfMemory */ } qquery = ares_malloc(sizeof(*qquery)); if (qquery == NULL) { /* LCOV_EXCL_START: OutOfMemory */ status = ARES_ENOMEM; callback(arg, status, 0, NULL); ares_dns_record_destroy(dnsrec); return status; /* LCOV_EXCL_STOP */ } qquery->callback = callback; qquery->arg = arg; /* Send it off. qcallback will be called when we get an answer. 
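* (In this code path the completion callback registered below is
 * ares_query_dnsrec_cb, which converts the response rcode and answer count
 * into an ares_status_t before invoking the caller's callback.)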
*/ status = ares_send_nolock(channel, dnsrec, ares_query_dnsrec_cb, qquery, qid); ares_dns_record_destroy(dnsrec); return status; } ares_status_t ares_query_dnsrec(ares_channel_t *channel, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, ares_callback_dnsrec callback, void *arg, unsigned short *qid) { ares_status_t status; if (channel == NULL) { return ARES_EFORMERR; } ares__channel_lock(channel); status = ares_query_nolock(channel, name, dnsclass, type, callback, arg, qid); ares__channel_unlock(channel); return status; } void ares_query(ares_channel_t *channel, const char *name, int dnsclass, int type, ares_callback callback, void *arg) { void *carg = NULL; if (channel == NULL) { return; } carg = ares__dnsrec_convert_arg(callback, arg); if (carg == NULL) { callback(arg, ARES_ENOMEM, 0, NULL, 0); /* LCOV_EXCL_LINE: OutOfMemory */ return; /* LCOV_EXCL_LINE: OutOfMemory */ } ares_query_dnsrec(channel, name, (ares_dns_class_t)dnsclass, (ares_dns_rec_type_t)type, ares__dnsrec_convert_cb, carg, NULL); } gevent-24.11.1/deps/c-ares/src/lib/ares_search.c000066400000000000000000000431011471441230600212360ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_STRINGS_H # include #endif struct search_query { /* Arguments passed to ares_search_dnsrec() */ ares_channel_t *channel; ares_callback_dnsrec callback; void *arg; /* Duplicate of DNS record passed to ares_search_dnsrec() */ ares_dns_record_t *dnsrec; /* Search order for names */ char **names; size_t names_cnt; /* State tracking progress through the search query */ size_t next_name_idx; /* next name index being attempted */ size_t timeouts; /* number of timeouts we saw for this request */ ares_bool_t ever_got_nodata; /* did we ever get ARES_ENODATA along the way? */ }; static void squery_free(struct search_query *squery) { if (squery == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__strsplit_free(squery->names, squery->names_cnt); ares_dns_record_destroy(squery->dnsrec); ares_free(squery); } /* End a search query by invoking the user callback and freeing the * search_query structure. 
*/ static void end_squery(struct search_query *squery, ares_status_t status, const ares_dns_record_t *dnsrec) { squery->callback(squery->arg, status, squery->timeouts, dnsrec); squery_free(squery); } static void search_callback(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec); static ares_status_t ares_search_next(ares_channel_t *channel, struct search_query *squery, ares_bool_t *skip_cleanup) { ares_status_t status; *skip_cleanup = ARES_FALSE; /* Misuse check */ if (squery->next_name_idx >= squery->names_cnt) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } status = ares_dns_record_query_set_name( squery->dnsrec, 0, squery->names[squery->next_name_idx++]); if (status != ARES_SUCCESS) { return status; } status = ares_send_nolock(channel, squery->dnsrec, search_callback, squery, NULL); if (status != ARES_EFORMERR) { *skip_cleanup = ARES_TRUE; } return status; } static void search_callback(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec) { struct search_query *squery = (struct search_query *)arg; ares_channel_t *channel = squery->channel; ares_status_t mystatus; ares_bool_t skip_cleanup = ARES_FALSE; squery->timeouts += timeouts; if (dnsrec) { ares_dns_rcode_t rcode = ares_dns_record_get_rcode(dnsrec); size_t ancount = ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); mystatus = ares_dns_query_reply_tostatus(rcode, ancount); } else { mystatus = status; } switch (mystatus) { case ARES_ENODATA: case ARES_ENOTFOUND: break; case ARES_ESERVFAIL: case ARES_EREFUSED: /* Issue #852, systemd-resolved may return SERVFAIL or REFUSED on a * single label domain name. */ if (ares__name_label_cnt(squery->names[squery->next_name_idx-1]) != 1) { end_squery(squery, mystatus, dnsrec); return; } break; default: end_squery(squery, mystatus, dnsrec); return; } /* If we ever get ARES_ENODATA along the way, record that; if the search * should run to the very end and we got at least one ARES_ENODATA, * then callers like ares_gethostbyname() may want to try a T_A search * even if the last domain we queried for T_AAAA resource records * returned ARES_ENOTFOUND. */ if (mystatus == ARES_ENODATA) { squery->ever_got_nodata = ARES_TRUE; } if (squery->next_name_idx < squery->names_cnt) { mystatus = ares_search_next(channel, squery, &skip_cleanup); if (mystatus != ARES_SUCCESS && !skip_cleanup) { end_squery(squery, mystatus, NULL); } return; } /* We have no more domains to search, return an appropriate response. 
*/ if (mystatus == ARES_ENOTFOUND && squery->ever_got_nodata) { end_squery(squery, ARES_ENODATA, NULL); return; } end_squery(squery, mystatus, NULL); } /* Determine if the domain should be looked up as-is, or if it is eligible * for search by appending domains */ static ares_bool_t ares__search_eligible(const ares_channel_t *channel, const char *name) { size_t len = ares_strlen(name); /* Name ends in '.', cannot search */ if (len && name[len - 1] == '.') { return ARES_FALSE; } if (channel->flags & ARES_FLAG_NOSEARCH) { return ARES_FALSE; } return ARES_TRUE; } size_t ares__name_label_cnt(const char *name) { const char *p; size_t ndots = 0; if (name == NULL) { return 0; } for (p = name; p != NULL && *p != 0; p++) { if (*p == '.') { ndots++; } } /* Label count is 1 greater than ndots */ return ndots+1; } ares_status_t ares__search_name_list(const ares_channel_t *channel, const char *name, char ***names, size_t *names_len) { ares_status_t status; char **list = NULL; size_t list_len = 0; char *alias = NULL; size_t ndots = 0; size_t idx = 0; size_t i; /* Perform HOSTALIASES resolution */ status = ares__lookup_hostaliases(channel, name, &alias); if (status == ARES_SUCCESS) { /* If hostalias succeeds, there is no searching, it is used as-is */ list_len = 1; list = ares_malloc_zero(sizeof(*list) * list_len); if (list == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } list[0] = alias; alias = NULL; goto done; } else if (status != ARES_ENOTFOUND) { goto done; } /* See if searching is eligible at all, if not, look up as-is only */ if (!ares__search_eligible(channel, name)) { list_len = 1; list = ares_malloc_zero(sizeof(*list) * list_len); if (list == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } list[0] = ares_strdup(name); if (list[0] == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } else { status = ARES_SUCCESS; } goto done; } /* Count the number of dots in name, 1 less than label count */ ndots = ares__name_label_cnt(name); if (ndots > 0) { ndots--; } /* Allocate an entry for each search domain, plus one for as-is */ list_len = channel->ndomains + 1; list = ares_malloc_zero(sizeof(*list) * list_len); if (list == NULL) { status = ARES_ENOMEM; goto done; } /* Set status here, its possible there are no search domains at all, so * status may be ARES_ENOTFOUND from ares__lookup_hostaliases(). */ status = ARES_SUCCESS; /* Try as-is first */ if (ndots >= channel->ndots) { list[idx] = ares_strdup(name); if (list[idx] == NULL) { status = ARES_ENOMEM; goto done; } idx++; } /* Append each search suffix to the name */ for (i = 0; i < channel->ndomains; i++) { status = ares__cat_domain(name, channel->domains[i], &list[idx]); if (status != ARES_SUCCESS) { goto done; } idx++; } /* Try as-is last */ if (ndots < channel->ndots) { list[idx] = ares_strdup(name); if (list[idx] == NULL) { status = ARES_ENOMEM; goto done; } idx++; } done: if (status == ARES_SUCCESS) { *names = list; *names_len = list_len; } else { ares__strsplit_free(list, list_len); } ares_free(alias); return status; } static ares_status_t ares_search_int(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg) { struct search_query *squery = NULL; const char *name; ares_status_t status = ARES_SUCCESS; ares_bool_t skip_cleanup = ARES_FALSE; /* Extract the name for the search. Note that searches are only supported for * DNS records containing a single query. 
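* (ares_search_next() rewrites the name of query index 0 in place for each
 * candidate suffix, so multi-question records cannot be searched.)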
*/ if (ares_dns_record_query_cnt(dnsrec) != 1) { status = ARES_EBADQUERY; goto fail; } status = ares_dns_record_query_get(dnsrec, 0, &name, NULL, NULL); if (status != ARES_SUCCESS) { goto fail; } /* Per RFC 7686, reject queries for ".onion" domain names with NXDOMAIN. */ if (ares__is_onion_domain(name)) { status = ARES_ENOTFOUND; goto fail; } /* Allocate a search_query structure to hold the state necessary for * doing multiple lookups. */ squery = ares_malloc_zero(sizeof(*squery)); if (squery == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } squery->channel = channel; /* Duplicate DNS record since, name will need to be rewritten */ squery->dnsrec = ares_dns_record_duplicate(dnsrec); if (squery->dnsrec == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } squery->callback = callback; squery->arg = arg; squery->timeouts = 0; squery->ever_got_nodata = ARES_FALSE; status = ares__search_name_list(channel, name, &squery->names, &squery->names_cnt); if (status != ARES_SUCCESS) { goto fail; } status = ares_search_next(channel, squery, &skip_cleanup); if (status != ARES_SUCCESS) { goto fail; } return status; fail: if (!skip_cleanup) { squery_free(squery); callback(arg, status, 0, NULL); } return status; } /* Callback argument structure passed to ares__dnsrec_convert_cb(). */ typedef struct { ares_callback callback; void *arg; } dnsrec_convert_arg_t; /*! Function to create callback arg for converting from ares_callback_dnsrec * to ares_calback */ void *ares__dnsrec_convert_arg(ares_callback callback, void *arg) { dnsrec_convert_arg_t *carg = ares_malloc_zero(sizeof(*carg)); if (carg == NULL) { return NULL; } carg->callback = callback; carg->arg = arg; return carg; } /*! Callback function used to convert from the ares_callback_dnsrec prototype to * the ares_callback prototype, by writing the result and passing that to * the inner callback. */ void ares__dnsrec_convert_cb(void *arg, ares_status_t status, size_t timeouts, const ares_dns_record_t *dnsrec) { dnsrec_convert_arg_t *carg = arg; unsigned char *abuf = NULL; size_t alen = 0; if (dnsrec != NULL) { ares_status_t mystatus = ares_dns_write(dnsrec, &abuf, &alen); if (mystatus != ARES_SUCCESS) { status = mystatus; } } carg->callback(carg->arg, (int)status, (int)timeouts, abuf, (int)alen); ares_free(abuf); ares_free(carg); } /* Search for a DNS name with given class and type. Wrapper around * ares_search_int() where the DNS record to search is first constructed. */ void ares_search(ares_channel_t *channel, const char *name, int dnsclass, int type, ares_callback callback, void *arg) { ares_status_t status; ares_dns_record_t *dnsrec = NULL; size_t max_udp_size; ares_dns_flags_t rd_flag; void *carg = NULL; if (channel == NULL || name == NULL) { return; } /* For now, ares_search_int() uses the ares_callback prototype. We need to * wrap the callback passed to this function in ares__dnsrec_convert_cb, to * convert from ares_callback_dnsrec to ares_callback. Allocate the convert * arg structure here. */ carg = ares__dnsrec_convert_arg(callback, arg); if (carg == NULL) { callback(arg, ARES_ENOMEM, 0, NULL, 0); return; } rd_flag = !(channel->flags & ARES_FLAG_NORECURSE) ? ARES_FLAG_RD : 0; max_udp_size = (channel->flags & ARES_FLAG_EDNS) ? 
channel->ednspsz : 0; status = ares_dns_record_create_query( &dnsrec, name, (ares_dns_class_t)dnsclass, (ares_dns_rec_type_t)type, 0, rd_flag, max_udp_size); if (status != ARES_SUCCESS) { callback(arg, (int)status, 0, NULL, 0); ares_free(carg); return; } ares__channel_lock(channel); ares_search_int(channel, dnsrec, ares__dnsrec_convert_cb, carg); ares__channel_unlock(channel); ares_dns_record_destroy(dnsrec); } /* Search for a DNS record. Wrapper around ares_search_int(). */ ares_status_t ares_search_dnsrec(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg) { ares_status_t status; if (channel == NULL || dnsrec == NULL || callback == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__channel_lock(channel); status = ares_search_int(channel, dnsrec, callback, arg); ares__channel_unlock(channel); return status; } /* Concatenate two domains. */ ares_status_t ares__cat_domain(const char *name, const char *domain, char **s) { size_t nlen = ares_strlen(name); size_t dlen = ares_strlen(domain); *s = ares_malloc(nlen + 1 + dlen + 1); if (!*s) { return ARES_ENOMEM; } memcpy(*s, name, nlen); (*s)[nlen] = '.'; if (strcmp(domain, ".") == 0) { /* Avoid appending the root domain to the separator, which would set *s to an ill-formed value (ending in two consecutive dots). */ dlen = 0; } memcpy(*s + nlen + 1, domain, dlen); (*s)[nlen + 1 + dlen] = 0; return ARES_SUCCESS; } ares_status_t ares__lookup_hostaliases(const ares_channel_t *channel, const char *name, char **alias) { ares_status_t status = ARES_SUCCESS; const char *hostaliases = NULL; ares__buf_t *buf = NULL; ares__llist_t *lines = NULL; ares__llist_node_t *node; if (channel == NULL || name == NULL || alias == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } *alias = NULL; /* Configuration says to not perform alias lookup */ if (channel->flags & ARES_FLAG_NOALIASES) { return ARES_ENOTFOUND; } /* If a domain has a '.', its not allowed to perform an alias lookup */ if (strchr(name, '.') != NULL) { return ARES_ENOTFOUND; } hostaliases = getenv("HOSTALIASES"); if (hostaliases == NULL) { status = ARES_ENOTFOUND; goto done; } buf = ares__buf_create(); if (buf == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_load_file(hostaliases, buf); if (status != ARES_SUCCESS) { goto done; } /* The HOSTALIASES file is structured as one alias per line. 
The first * field in the line is the simple hostname with no periods, followed by * whitespace, then the full domain name, e.g.: * * c-ares www.c-ares.org * curl www.curl.se */ status = ares__buf_split(buf, (const unsigned char *)"\n", 1, ARES_BUF_SPLIT_TRIM, 0, &lines); if (status != ARES_SUCCESS) { goto done; } for (node = ares__llist_node_first(lines); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *line = ares__llist_node_val(node); char hostname[64] = ""; char fqdn[256] = ""; /* Pull off hostname */ ares__buf_tag(line); ares__buf_consume_nonwhitespace(line); if (ares__buf_tag_fetch_string(line, hostname, sizeof(hostname)) != ARES_SUCCESS) { continue; } /* Match hostname */ if (strcasecmp(hostname, name) != 0) { continue; } /* consume whitespace */ ares__buf_consume_whitespace(line, ARES_TRUE); /* pull off fqdn */ ares__buf_tag(line); ares__buf_consume_nonwhitespace(line); if (ares__buf_tag_fetch_string(line, fqdn, sizeof(fqdn)) != ARES_SUCCESS || ares_strlen(fqdn) == 0) { continue; } /* Validate characterset */ if (!ares__is_hostname(fqdn)) { continue; } *alias = ares_strdup(fqdn); if (*alias == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Good! */ status = ARES_SUCCESS; goto done; } status = ARES_ENOTFOUND; done: ares__buf_destroy(buf); ares__llist_destroy(lines); return status; } gevent-24.11.1/deps/c-ares/src/lib/ares_send.c000066400000000000000000000200551471441230600207250ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #include "ares_nameser.h" static unsigned short generate_unique_qid(ares_channel_t *channel) { unsigned short id; do { id = ares__generate_new_id(channel->rand_state); } while (ares__htable_szvp_get(channel->queries_by_qid, id, NULL)); return id; } /* https://datatracker.ietf.org/doc/html/draft-vixie-dnsext-dns0x20-00 */ static ares_status_t ares_apply_dns0x20(ares_channel_t *channel, ares_dns_record_t *dnsrec) { ares_status_t status = ARES_SUCCESS; const char *name = NULL; char dns0x20name[256]; unsigned char randdata[256 / 8]; size_t len; size_t remaining_bits; size_t total_bits; size_t i; status = ares_dns_record_query_get(dnsrec, 0, &name, NULL, NULL); if (status != ARES_SUCCESS) { goto done; } len = ares_strlen(name); if (len == 0) { return ARES_SUCCESS; } if (len >= sizeof(dns0x20name)) { status = ARES_EBADNAME; goto done; } memset(dns0x20name, 0, sizeof(dns0x20name)); /* Fetch the minimum amount of random data we'd need for the string, which * is 1 bit per byte */ total_bits = ((len + 7) / 8) * 8; remaining_bits = total_bits; ares__rand_bytes(channel->rand_state, randdata, total_bits / 8); /* Randomly apply 0x20 to name */ for (i = 0; i < len; i++) { size_t bit; /* Only apply 0x20 to alpha characters */ if (!ares__isalpha(name[i])) { dns0x20name[i] = name[i]; continue; } /* coin flip */ bit = total_bits - remaining_bits; if (randdata[bit / 8] & (1 << (bit % 8))) { dns0x20name[i] = name[i] | 0x20; /* Set 0x20 */ } else { dns0x20name[i] = (char)(((unsigned char)name[i]) & 0xDF); /* Unset 0x20 */ } remaining_bits--; } status = ares_dns_record_query_set_name(dnsrec, 0, dns0x20name); done: return status; } ares_status_t ares_send_nolock(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg, unsigned short *qid) { ares_query_t *query; ares_timeval_t now; ares_status_t status; unsigned short id = generate_unique_qid(channel); const ares_dns_record_t *dnsrec_resp = NULL; ares__tvnow(&now); if (ares__slist_len(channel->servers) == 0) { callback(arg, ARES_ENOSERVER, 0, NULL); return ARES_ENOSERVER; } /* Check query cache */ status = ares_qcache_fetch(channel, &now, dnsrec, &dnsrec_resp); if (status != ARES_ENOTFOUND) { /* ARES_SUCCESS means we retrieved the cache, anything else is a critical * failure, all result in termination */ callback(arg, status, 0, dnsrec_resp); return status; } /* Allocate space for query and allocated fields. */ query = ares_malloc(sizeof(ares_query_t)); if (!query) { callback(arg, ARES_ENOMEM, 0, NULL); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(query, 0, sizeof(*query)); query->channel = channel; query->qid = id; query->timeout.sec = 0; query->timeout.usec = 0; query->using_tcp = (channel->flags & ARES_FLAG_USEVC) ? ARES_TRUE : ARES_FALSE; /* Duplicate Query */ status = ares_dns_record_duplicate_ex(&query->query, dnsrec); if (status != ARES_SUCCESS) { ares_free(query); callback(arg, status, 0, NULL); return status; } ares_dns_record_set_id(query->query, id); if (channel->flags & ARES_FLAG_DNS0x20 && !query->using_tcp) { status = ares_apply_dns0x20(channel, query->query); if (status != ARES_SUCCESS) { /* LCOV_EXCL_START: OutOfMemory */ callback(arg, status, 0, NULL); ares__free_query(query); return status; /* LCOV_EXCL_STOP */ } } /* Fill in query arguments. */ query->callback = callback; query->arg = arg; /* Initialize query status. 
*/ query->try_count = 0; query->error_status = ARES_SUCCESS; query->timeouts = 0; /* Initialize our list nodes. */ query->node_queries_by_timeout = NULL; query->node_queries_to_conn = NULL; /* Chain the query into the list of all queries. */ query->node_all_queries = ares__llist_insert_last(channel->all_queries, query); if (query->node_all_queries == NULL) { /* LCOV_EXCL_START: OutOfMemory */ callback(arg, ARES_ENOMEM, 0, NULL); ares__free_query(query); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } /* Keep track of queries bucketed by qid, so we can process DNS * responses quickly. */ if (!ares__htable_szvp_insert(channel->queries_by_qid, query->qid, query)) { /* LCOV_EXCL_START: OutOfMemory */ callback(arg, ARES_ENOMEM, 0, NULL); ares__free_query(query); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } /* Perform the first query action. */ status = ares__send_query(query, &now); if (status == ARES_SUCCESS && qid) { *qid = id; } return status; } ares_status_t ares_send_dnsrec(ares_channel_t *channel, const ares_dns_record_t *dnsrec, ares_callback_dnsrec callback, void *arg, unsigned short *qid) { ares_status_t status; if (channel == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__channel_lock(channel); status = ares_send_nolock(channel, dnsrec, callback, arg, qid); ares__channel_unlock(channel); return status; } void ares_send(ares_channel_t *channel, const unsigned char *qbuf, int qlen, ares_callback callback, void *arg) { ares_dns_record_t *dnsrec = NULL; ares_status_t status; void *carg = NULL; if (channel == NULL) { return; } /* Verify that the query is at least long enough to hold the header. */ if (qlen < HFIXEDSZ || qlen >= (1 << 16)) { callback(arg, ARES_EBADQUERY, 0, NULL, 0); return; } status = ares_dns_parse(qbuf, (size_t)qlen, 0, &dnsrec); if (status != ARES_SUCCESS) { callback(arg, (int)status, 0, NULL, 0); return; } carg = ares__dnsrec_convert_arg(callback, arg); if (carg == NULL) { /* LCOV_EXCL_START: OutOfMemory */ status = ARES_ENOMEM; ares_dns_record_destroy(dnsrec); callback(arg, (int)status, 0, NULL, 0); return; /* LCOV_EXCL_STOP */ } ares_send_dnsrec(channel, dnsrec, ares__dnsrec_convert_cb, carg, NULL); ares_dns_record_destroy(dnsrec); } size_t ares_queue_active_queries(const ares_channel_t *channel) { size_t len; if (channel == NULL) { return 0; } ares__channel_lock(channel); len = ares__llist_len(channel->all_queries); ares__channel_unlock(channel); return len; } gevent-24.11.1/deps/c-ares/src/lib/ares_setup.h000066400000000000000000000246171471441230600211510ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_SETUP_H #define __ARES_SETUP_H /* ============================================================================ * NOTE: This file is automatically included by ares_private.h and should not * typically be included directly. * All c-ares source files should include ares_private.h as the * first header. * ============================================================================ */ /* * Include configuration script results or hand-crafted * configuration file for platforms which lack config tool. */ #ifdef HAVE_CONFIG_H # include "ares_config.h" #else # ifdef _WIN32 # include "config-win32.h" # endif #endif /* HAVE_CONFIG_H */ /* * c-ares external interface definitions are also used internally, * and might also include required system header files to define them. */ #include "ares_build.h" /* ================================================================= */ /* No system header file shall be included in this file before this */ /* point. The only allowed ones are those included from ares_build.h */ /* ================================================================= */ /* * Include header files for windows builds before redefining anything. * Use this preproessor block only to include or exclude windows.h, * winsock2.h, ws2tcpip.h or winsock.h. Any other windows thing belongs * to any other further and independent block. Under Cygwin things work * just as under linux (e.g. ) and the winsock headers should * never be included when __CYGWIN__ is defined. configure script takes * care of this, not defining HAVE_WINDOWS_H, HAVE_WINSOCK_H, HAVE_WINSOCK2_H, * neither HAVE_WS2TCPIP_H when __CYGWIN__ is defined. */ #ifdef USE_WINSOCK # undef USE_WINSOCK #endif #ifdef HAVE_WINDOWS_H # ifndef WIN32_LEAN_AND_MEAN # define WIN32_LEAN_AND_MEAN # endif # include # ifdef HAVE_WINSOCK2_H # include # define USE_WINSOCK 2 # ifdef HAVE_WS2TCPIP_H # include # endif # else # ifdef HAVE_WINSOCK_H # include # define USE_WINSOCK 1 # endif # endif #endif #include #include #include #include #include #ifdef HAVE_ERRNO_H # include #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_MALLOC_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef HAVE_SYS_TIME_H # include #endif #ifdef HAVE_TIME_H # include #endif #ifdef HAVE_UNISTD_H # include #endif #ifdef HAVE_SYS_SOCKET_H # include #endif /* * Arg 2 type for gethostname in case it hasn't been defined in config file. */ #ifndef GETHOSTNAME_TYPE_ARG2 # ifdef USE_WINSOCK # define GETHOSTNAME_TYPE_ARG2 int # else # define GETHOSTNAME_TYPE_ARG2 size_t # endif #endif #ifdef __POCC__ # include # include # define ESRCH 3 #endif /* * Android does have the arpa/nameser.h header which is detected by configure * but it appears to be empty with recent NDK r7b / r7c, so we undefine here. * z/OS does have the arpa/nameser.h header which is detected by configure * but it is not fully implemented and missing identifiers, so udefine here. */ #if (defined(ANDROID) || defined(__ANDROID__) || defined(__MVS__)) && \ defined(HAVE_ARPA_NAMESER_H) # undef HAVE_ARPA_NAMESER_H #endif /* * Recent autoconf versions define these symbols in ares_config.h. 
We don't * want them (since they collide with the libcurl ones when we build * --enable-debug) so we undef them again here. */ #ifdef PACKAGE_STRING # undef PACKAGE_STRING #endif #ifdef PACKAGE_TARNAME # undef PACKAGE_TARNAME #endif #ifdef PACKAGE_VERSION # undef PACKAGE_VERSION #endif #ifdef PACKAGE_BUGREPORT # undef PACKAGE_BUGREPORT #endif #ifdef PACKAGE_NAME # undef PACKAGE_NAME #endif #ifdef VERSION # undef VERSION #endif #ifdef PACKAGE # undef PACKAGE #endif /* IPv6 compatibility */ #if !defined(HAVE_AF_INET6) # if defined(HAVE_PF_INET6) # define AF_INET6 PF_INET6 # else # define AF_INET6 AF_MAX + 1 # endif #endif #ifdef __hpux # if !defined(_XOPEN_SOURCE_EXTENDED) || defined(_KERNEL) # ifdef _APP32_64BIT_OFF_T # define OLD_APP32_64BIT_OFF_T _APP32_64BIT_OFF_T # undef _APP32_64BIT_OFF_T # else # undef OLD_APP32_64BIT_OFF_T # endif # endif #endif #ifdef __hpux # if !defined(_XOPEN_SOURCE_EXTENDED) || defined(_KERNEL) # ifdef OLD_APP32_64BIT_OFF_T # define _APP32_64BIT_OFF_T OLD_APP32_64BIT_OFF_T # undef OLD_APP32_64BIT_OFF_T # endif # endif #endif /* * Definition of timeval struct for platforms that don't have it. */ #ifndef HAVE_STRUCT_TIMEVAL struct timeval { long tv_sec; long tv_usec; }; #endif /* * Function-like macro definition used to close a socket. */ #if defined(HAVE_CLOSESOCKET) # define sclose(x) closesocket((x)) #elif defined(HAVE_CLOSESOCKET_CAMEL) # define sclose(x) CloseSocket((x)) #elif defined(HAVE_CLOSE_S) # define sclose(x) close_s((x)) #else # define sclose(x) close((x)) #endif /* * Macro used to include code only in debug builds. */ #ifdef DEBUGBUILD # define DEBUGF(x) x #else # define DEBUGF(x) \ do { \ } while (0) #endif /* * Macro SOCKERRNO / SET_SOCKERRNO() returns / sets the *socket-related* errno * (or equivalent) on this platform to hide platform details to code using it. */ #ifdef USE_WINSOCK # define SOCKERRNO ((int)WSAGetLastError()) # define SET_SOCKERRNO(x) (WSASetLastError((int)(x))) #else # define SOCKERRNO (errno) # define SET_SOCKERRNO(x) (errno = (x)) #endif /* * Macro ERRNO / SET_ERRNO() returns / sets the NOT *socket-related* errno * (or equivalent) on this platform to hide platform details to code using it. */ #if defined(WIN32) && !defined(WATT32) # define ERRNO ((int)GetLastError()) # define SET_ERRNO(x) (SetLastError((DWORD)(x))) #else # define ERRNO (errno) # define SET_ERRNO(x) (errno = (x)) #endif /* * Portable error number symbolic names defined to Winsock error codes. 
*/ #ifdef USE_WINSOCK # undef EBADF /* override definition in errno.h */ # define EBADF WSAEBADF # undef EINTR /* override definition in errno.h */ # define EINTR WSAEINTR # undef EINVAL /* override definition in errno.h */ # define EINVAL WSAEINVAL # undef EWOULDBLOCK /* override definition in errno.h */ # define EWOULDBLOCK WSAEWOULDBLOCK # undef EINPROGRESS /* override definition in errno.h */ # define EINPROGRESS WSAEINPROGRESS # undef EALREADY /* override definition in errno.h */ # define EALREADY WSAEALREADY # undef ENOTSOCK /* override definition in errno.h */ # define ENOTSOCK WSAENOTSOCK # undef EDESTADDRREQ /* override definition in errno.h */ # define EDESTADDRREQ WSAEDESTADDRREQ # undef EMSGSIZE /* override definition in errno.h */ # define EMSGSIZE WSAEMSGSIZE # undef EPROTOTYPE /* override definition in errno.h */ # define EPROTOTYPE WSAEPROTOTYPE # undef ENOPROTOOPT /* override definition in errno.h */ # define ENOPROTOOPT WSAENOPROTOOPT # undef EPROTONOSUPPORT /* override definition in errno.h */ # define EPROTONOSUPPORT WSAEPROTONOSUPPORT # define ESOCKTNOSUPPORT WSAESOCKTNOSUPPORT # undef EOPNOTSUPP /* override definition in errno.h */ # define EOPNOTSUPP WSAEOPNOTSUPP # define EPFNOSUPPORT WSAEPFNOSUPPORT # undef EAFNOSUPPORT /* override definition in errno.h */ # define EAFNOSUPPORT WSAEAFNOSUPPORT # undef EADDRINUSE /* override definition in errno.h */ # define EADDRINUSE WSAEADDRINUSE # undef EADDRNOTAVAIL /* override definition in errno.h */ # define EADDRNOTAVAIL WSAEADDRNOTAVAIL # undef ENETDOWN /* override definition in errno.h */ # define ENETDOWN WSAENETDOWN # undef ENETUNREACH /* override definition in errno.h */ # define ENETUNREACH WSAENETUNREACH # undef ENETRESET /* override definition in errno.h */ # define ENETRESET WSAENETRESET # undef ECONNABORTED /* override definition in errno.h */ # define ECONNABORTED WSAECONNABORTED # undef ECONNRESET /* override definition in errno.h */ # define ECONNRESET WSAECONNRESET # undef ENOBUFS /* override definition in errno.h */ # define ENOBUFS WSAENOBUFS # undef EISCONN /* override definition in errno.h */ # define EISCONN WSAEISCONN # undef ENOTCONN /* override definition in errno.h */ # define ENOTCONN WSAENOTCONN # define ESHUTDOWN WSAESHUTDOWN # define ETOOMANYREFS WSAETOOMANYREFS # undef ETIMEDOUT /* override definition in errno.h */ # define ETIMEDOUT WSAETIMEDOUT # undef ECONNREFUSED /* override definition in errno.h */ # define ECONNREFUSED WSAECONNREFUSED # undef ELOOP /* override definition in errno.h */ # define ELOOP WSAELOOP # ifndef ENAMETOOLONG /* possible previous definition in errno.h */ # define ENAMETOOLONG WSAENAMETOOLONG # endif # define EHOSTDOWN WSAEHOSTDOWN # undef EHOSTUNREACH /* override definition in errno.h */ # define EHOSTUNREACH WSAEHOSTUNREACH # ifndef ENOTEMPTY /* possible previous definition in errno.h */ # define ENOTEMPTY WSAENOTEMPTY # endif # define EPROCLIM WSAEPROCLIM # define EUSERS WSAEUSERS # define EDQUOT WSAEDQUOT # define ESTALE WSAESTALE # define EREMOTE WSAEREMOTE #endif #endif /* __ARES_SETUP_H */ gevent-24.11.1/deps/c-ares/src/lib/ares_strerror.c000066400000000000000000000063751471441230600216670ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without 
limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" const char *ares_strerror(int code) { ares_status_t status = (ares_status_t)code; switch (status) { case ARES_SUCCESS: return "Successful completion"; case ARES_ENODATA: return "DNS server returned answer with no data"; case ARES_EFORMERR: return "DNS server claims query was misformatted"; case ARES_ESERVFAIL: return "DNS server returned general failure"; case ARES_ENOTFOUND: return "Domain name not found"; case ARES_ENOTIMP: return "DNS server does not implement requested operation"; case ARES_EREFUSED: return "DNS server refused query"; case ARES_EBADQUERY: return "Misformatted DNS query"; case ARES_EBADNAME: return "Misformatted domain name"; case ARES_EBADFAMILY: return "Unsupported address family"; case ARES_EBADRESP: return "Misformatted DNS reply"; case ARES_ECONNREFUSED: return "Could not contact DNS servers"; case ARES_ETIMEOUT: return "Timeout while contacting DNS servers"; case ARES_EOF: return "End of file"; case ARES_EFILE: return "Error reading file"; case ARES_ENOMEM: return "Out of memory"; case ARES_EDESTRUCTION: return "Channel is being destroyed"; case ARES_EBADSTR: return "Misformatted string"; case ARES_EBADFLAGS: return "Illegal flags specified"; case ARES_ENONAME: return "Given hostname is not numeric"; case ARES_EBADHINTS: return "Illegal hints flags specified"; case ARES_ENOTINITIALIZED: return "c-ares library initialization not yet performed"; case ARES_ELOADIPHLPAPI: return "Error loading iphlpapi.dll"; case ARES_EADDRGETNETWORKPARAMS: return "Could not find GetNetworkParams function"; case ARES_ECANCELLED: return "DNS query cancelled"; case ARES_ESERVICE: return "Invalid service name or number"; case ARES_ENOSERVER: return "No DNS servers were configured"; } return "unknown"; } gevent-24.11.1/deps/c-ares/src/lib/ares_sysconfig.c000066400000000000000000000345001471441230600220000ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2007 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #if defined(ANDROID) || defined(__ANDROID__) # include # include "ares_android.h" /* From the Bionic sources */ # define DNS_PROP_NAME_PREFIX "net.dns" # define MAX_DNS_PROPERTIES 8 #endif #if defined(CARES_USE_LIBRESOLV) # include #endif #include "ares_inet_net_pton.h" #include "ares_platform.h" #if defined(__MVS__) static ares_status_t ares__init_sysconfig_mvs(ares_sysconfig_t *sysconfig) { struct __res_state *res = 0; size_t count4; size_t count6; int i; __STATEEXTIPV6 *v6; arse__llist_t *sconfig = NULL; ares_status_t status; if (0 == res) { int rc = res_init(); while (rc == -1 && h_errno == TRY_AGAIN) { rc = res_init(); } if (rc == -1) { return ARES_ENOMEM; } res = __res(); } v6 = res->__res_extIPv6; if (res->nscount > 0) { count4 = (size_t)res->nscount; } if (v6 && v6->__stat_nscount > 0) { count6 = (size_t)v6->__stat_nscount; } else { count6 = 0; } for (i = 0; i < count4; i++) { struct sockaddr_in *addr_in = &(res->nsaddr_list[i]); struct ares_addr addr; addr.addr.addr4.s_addr = addr_in->sin_addr.s_addr; addr.family = AF_INET; status = ares__sconfig_append(&sysconfig->sconfig, &addr, htons(addr_in->sin_port), htons(addr_in->sin_port), NULL); if (status != ARES_SUCCESS) { return status; } } for (i = 0; i < count6; i++) { struct sockaddr_in6 *addr_in = &(v6->__stat_nsaddr_list[i]); struct ares_addr addr; addr.family = AF_INET6; memcpy(&(addr.addr.addr6), &(addr_in->sin6_addr), sizeof(addr_in->sin6_addr)); status = ares__sconfig_append(&sysconfig->sconfig, &addr, htons(addr_in->sin_port), htons(addr_in->sin_port), NULL); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } #endif #if defined(__riscos__) static ares_status_t ares__init_sysconfig_riscos(ares_sysconfig_t *sysconfig) { char *line; ares_status_t status = ARES_SUCCESS; /* Under RISC OS, name servers are listed in the system variable Inet$Resolvers, space separated. 
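 * (A hypothetical illustration only: the variable might hold a value such as
 * "10.0.0.1 10.0.0.2"; the loop below splits it on spaces and hands each
 * address to ares__sconfig_append_fromstr() in turn.)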
*/ line = getenv("Inet$Resolvers"); if (line) { char *resolvers = ares_strdup(line); char *pos; char *space; if (!resolvers) { return ARES_ENOMEM; } pos = resolvers; do { space = strchr(pos, ' '); if (space) { *space = '\0'; } status = ares__sconfig_append_fromstr(&sysconfig->sconfig, pos, ARES_TRUE); if (status != ARES_SUCCESS) { break; } pos = space + 1; } while (space); ares_free(resolvers); } return status; } #endif #if defined(WATT32) static ares_status_t ares__init_sysconfig_watt32(ares_sysconfig_t *sysconfig) { size_t i; ares_status_t status; sock_init(); for (i = 0; def_nameservers[i]; i++) { struct ares_addr addr; addr.family = AF_INET; addr.addr.addr4.s_addr = htonl(def_nameservers[i]); status = ares__sconfig_append(&sysconfig->sconfig, &addr, 0, 0, NULL); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } #endif #if defined(ANDROID) || defined(__ANDROID__) static ares_status_t ares__init_sysconfig_android(ares_sysconfig_t *sysconfig) { size_t i; char **dns_servers; char *domains; size_t num_servers; ares_status_t status = ARES_EFILE; /* Use the Android connectivity manager to get a list * of DNS servers. As of Android 8 (Oreo) net.dns# * system properties are no longer available. Google claims this * improves privacy. Apps now need the ACCESS_NETWORK_STATE * permission and must use the ConnectivityManager which * is Java only. */ dns_servers = ares_get_android_server_list(MAX_DNS_PROPERTIES, &num_servers); if (dns_servers != NULL) { for (i = 0; i < num_servers; i++) { status = ares__sconfig_append_fromstr(&sysconfig->sconfig, dns_servers[i], ARES_TRUE); if (status != ARES_SUCCESS) { return status; } } for (i = 0; i < num_servers; i++) { ares_free(dns_servers[i]); } ares_free(dns_servers); } domains = ares_get_android_search_domains_list(); sysconfig->domains = ares__strsplit(domains, ", ", &sysconfig->ndomains); ares_free(domains); # ifdef HAVE___SYSTEM_PROPERTY_GET /* Old way using the system property still in place as * a fallback. Older android versions can still use this. * it's possible for older apps not not have added the new * permission and we want to try to avoid breaking those. * * We'll only run this if we don't have any dns servers * because this will get the same ones (if it works). 
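 * (Concretely, the loop below probes "net.dns1" .. "net.dns8", built from
 * DNS_PROP_NAME_PREFIX and MAX_DNS_PROPERTIES; any value found is appended
 * with ares__sconfig_append_fromstr(), the same helper used for the
 * ConnectivityManager results above.)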
*/ if (sysconfig->sconfig == NULL) { char propname[PROP_NAME_MAX]; char propvalue[PROP_VALUE_MAX] = ""; for (i = 1; i <= MAX_DNS_PROPERTIES; i++) { snprintf(propname, sizeof(propname), "%s%zu", DNS_PROP_NAME_PREFIX, i); if (__system_property_get(propname, propvalue) < 1) { break; } status = ares__sconfig_append_fromstr(&sysconfig->sconfig, propvalue, ARES_TRUE); if (status != ARES_SUCCESS) { return status; } } } # endif /* HAVE___SYSTEM_PROPERTY_GET */ return status; } #endif #if defined(CARES_USE_LIBRESOLV) static ares_status_t ares__init_sysconfig_libresolv(ares_sysconfig_t *sysconfig) { struct __res_state res; ares_status_t status = ARES_SUCCESS; union res_sockaddr_union addr[MAXNS]; int nscount; size_t i; size_t entries = 0; ares__buf_t *ipbuf = NULL; memset(&res, 0, sizeof(res)); if (res_ninit(&res) != 0 || !(res.options & RES_INIT)) { return ARES_EFILE; } nscount = res_getservers(&res, addr, MAXNS); for (i = 0; i < (size_t)nscount; ++i) { char ipaddr[INET6_ADDRSTRLEN] = ""; char *ipstr = NULL; unsigned short port = 0; unsigned int ll_scope = 0; sa_family_t family = addr[i].sin.sin_family; if (family == AF_INET) { ares_inet_ntop(family, &addr[i].sin.sin_addr, ipaddr, sizeof(ipaddr)); port = ntohs(addr[i].sin.sin_port); } else if (family == AF_INET6) { ares_inet_ntop(family, &addr[i].sin6.sin6_addr, ipaddr, sizeof(ipaddr)); port = ntohs(addr[i].sin6.sin6_port); ll_scope = addr[i].sin6.sin6_scope_id; } else { continue; } /* [ip]:port%iface */ ipbuf = ares__buf_create(); if (ipbuf == NULL) { status = ARES_ENOMEM; goto done; } status = ares__buf_append_str(ipbuf, "["); if (status != ARES_SUCCESS) { goto done; } status = ares__buf_append_str(ipbuf, ipaddr); if (status != ARES_SUCCESS) { goto done; } status = ares__buf_append_str(ipbuf, "]"); if (status != ARES_SUCCESS) { goto done; } if (port) { status = ares__buf_append_str(ipbuf, ":"); if (status != ARES_SUCCESS) { goto done; } status = ares__buf_append_num_dec(ipbuf, port, 0); if (status != ARES_SUCCESS) { goto done; } } if (ll_scope) { status = ares__buf_append_str(ipbuf, "%"); if (status != ARES_SUCCESS) { goto done; } status = ares__buf_append_num_dec(ipbuf, ll_scope, 0); if (status != ARES_SUCCESS) { goto done; } } ipstr = ares__buf_finish_str(ipbuf, NULL); ipbuf = NULL; if (ipstr == NULL) { status = ARES_ENOMEM; goto done; } status = ares__sconfig_append_fromstr(&sysconfig->sconfig, ipstr, ARES_TRUE); ares_free(ipstr); if (status != ARES_SUCCESS) { goto done; } } while ((entries < MAXDNSRCH) && res.dnsrch[entries]) { entries++; } if (entries) { sysconfig->domains = ares_malloc_zero(entries * sizeof(char *)); if (sysconfig->domains == NULL) { status = ARES_ENOMEM; goto done; } else { sysconfig->ndomains = entries; for (i = 0; i < sysconfig->ndomains; i++) { sysconfig->domains[i] = ares_strdup(res.dnsrch[i]); if (sysconfig->domains[i] == NULL) { status = ARES_ENOMEM; goto done; } } } } if (res.ndots >= 0) { sysconfig->ndots = (size_t)res.ndots; } /* Apple does not allow configuration of retry, so this is a static dummy * value, ignore */ # ifndef __APPLE__ if (res.retry > 0) { sysconfig->tries = (size_t)res.retry; } # endif if (res.options & RES_ROTATE) { sysconfig->rotate = ARES_TRUE; } if (res.retrans > 0) { /* Apple does not allow configuration of retrans, so this is a dummy value * that is extremely high (5s) */ # ifndef __APPLE__ if (res.retrans > 0) { sysconfig->timeout_ms = (unsigned int)res.retrans * 1000; } # endif } done: ares__buf_destroy(ipbuf); res_ndestroy(&res); return status; } #endif static void 
ares_sysconfig_free(ares_sysconfig_t *sysconfig) { ares__llist_destroy(sysconfig->sconfig); ares__strsplit_free(sysconfig->domains, sysconfig->ndomains); ares_free(sysconfig->sortlist); ares_free(sysconfig->lookups); memset(sysconfig, 0, sizeof(*sysconfig)); } static ares_status_t ares_sysconfig_apply(ares_channel_t *channel, const ares_sysconfig_t *sysconfig) { ares_status_t status; if (sysconfig->sconfig && !(channel->optmask & ARES_OPT_SERVERS)) { status = ares__servers_update(channel, sysconfig->sconfig, ARES_FALSE); if (status != ARES_SUCCESS) { return status; } } if (sysconfig->domains && !(channel->optmask & ARES_OPT_DOMAINS)) { /* Make sure we duplicate first then replace so even if there is * ARES_ENOMEM, the channel stays in a good state */ char **temp = ares__strsplit_duplicate(sysconfig->domains, sysconfig->ndomains); if (temp == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } ares__strsplit_free(channel->domains, channel->ndomains); channel->domains = temp; channel->ndomains = sysconfig->ndomains; } if (sysconfig->lookups && !(channel->optmask & ARES_OPT_LOOKUPS)) { char *temp = ares_strdup(sysconfig->lookups); if (temp == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } ares_free(channel->lookups); channel->lookups = temp; } if (sysconfig->sortlist && !(channel->optmask & ARES_OPT_SORTLIST)) { struct apattern *temp = ares_malloc(sizeof(*channel->sortlist) * sysconfig->nsortlist); if (temp == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memcpy(temp, sysconfig->sortlist, sizeof(*channel->sortlist) * sysconfig->nsortlist); ares_free(channel->sortlist); channel->sortlist = temp; channel->nsort = sysconfig->nsortlist; } if (!(channel->optmask & ARES_OPT_NDOTS)) { channel->ndots = sysconfig->ndots; } if (sysconfig->tries && !(channel->optmask & ARES_OPT_TRIES)) { channel->tries = sysconfig->tries; } if (sysconfig->timeout_ms && !(channel->optmask & ARES_OPT_TIMEOUTMS)) { channel->timeout = sysconfig->timeout_ms; } if (!(channel->optmask & (ARES_OPT_ROTATE | ARES_OPT_NOROTATE))) { channel->rotate = sysconfig->rotate; } if (sysconfig->usevc) { channel->flags |= ARES_FLAG_USEVC; } return ARES_SUCCESS; } ares_status_t ares__init_by_sysconfig(ares_channel_t *channel) { ares_status_t status; ares_sysconfig_t sysconfig; memset(&sysconfig, 0, sizeof(sysconfig)); sysconfig.ndots = 1; /* Default value if not otherwise set */ #if defined(USE_WINSOCK) status = ares__init_sysconfig_windows(&sysconfig); #elif defined(__MVS__) status = ares__init_sysconfig_mvs(&sysconfig); #elif defined(__riscos__) status = ares__init_sysconfig_riscos(&sysconfig); #elif defined(WATT32) status = ares__init_sysconfig_watt32(&sysconfig); #elif defined(ANDROID) || defined(__ANDROID__) status = ares__init_sysconfig_android(&sysconfig); #elif defined(__APPLE__) status = ares__init_sysconfig_macos(&sysconfig); #elif defined(CARES_USE_LIBRESOLV) status = ares__init_sysconfig_libresolv(&sysconfig); #else status = ares__init_sysconfig_files(channel, &sysconfig); #endif if (status != ARES_SUCCESS) { goto done; } /* Environment is supposed to override sysconfig */ status = ares__init_by_environment(&sysconfig); if (status != ARES_SUCCESS) { goto done; } /* Lock when applying the configuration to the channel. Don't need to * lock prior to this. 
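 * (Note the pattern in ares_sysconfig_apply() above: most fields are copied
 * onto the channel only when the matching ARES_OPT_* bit is absent from
 * channel->optmask, so options set explicitly at init keep precedence over
 * system configuration.)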
*/ ares__channel_lock(channel); status = ares_sysconfig_apply(channel, &sysconfig); ares__channel_unlock(channel); if (status != ARES_SUCCESS) { goto done; } done: ares_sysconfig_free(&sysconfig); return status; } gevent-24.11.1/deps/c-ares/src/lib/ares_sysconfig_files.c000066400000000000000000000562151471441230600231710ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2007 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #if defined(ANDROID) || defined(__ANDROID__) # include # include "ares_android.h" /* From the Bionic sources */ # define DNS_PROP_NAME_PREFIX "net.dns" # define MAX_DNS_PROPERTIES 8 #endif #if defined(CARES_USE_LIBRESOLV) # include #endif #if defined(USE_WINSOCK) && defined(HAVE_IPHLPAPI_H) # include #endif #include "ares_inet_net_pton.h" #include "ares_platform.h" static unsigned char ip_natural_mask(const struct ares_addr *addr) { const unsigned char *ptr = NULL; /* This is an odd one. If a raw ipv4 address is specified, then we take * what is called a natural mask, which means we look at the first octet * of the ip address and for values 0-127 we assume it is a class A (/8), * for values 128-191 we assume it is a class B (/16), and for 192-223 * we assume it is a class C (/24). 223-239 is Class D which and 240-255 is * Class E, however, there is no pre-defined mask for this, so we'll use * /24 as well as that's what the old code did. * * For IPv6, we'll use /64. 
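 * (Worked example: a first octet of 10 yields /8, 172 yields /16, and 192
 * yields /24; any IPv6 address yields /64.)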
*/ if (addr->family == AF_INET6) { return 64; } ptr = (const unsigned char *)&addr->addr.addr4; if (*ptr < 128) { return 8; } if (*ptr < 192) { return 16; } return 24; } static ares_bool_t sortlist_append(struct apattern **sortlist, size_t *nsort, const struct apattern *pat) { struct apattern *newsort; newsort = ares_realloc(*sortlist, (*nsort + 1) * sizeof(*newsort)); if (newsort == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: OutOfMemory */ } *sortlist = newsort; memcpy(&(*sortlist)[*nsort], pat, sizeof(**sortlist)); (*nsort)++; return ARES_TRUE; } static ares_status_t parse_sort(ares__buf_t *buf, struct apattern *pat) { ares_status_t status; const unsigned char ip_charset[] = "ABCDEFabcdef0123456789.:"; char ipaddr[INET6_ADDRSTRLEN] = ""; size_t addrlen; memset(pat, 0, sizeof(*pat)); /* Consume any leading whitespace */ ares__buf_consume_whitespace(buf, ARES_TRUE); /* If no length, just ignore, return ENOTFOUND as an indicator */ if (ares__buf_len(buf) == 0) { return ARES_ENOTFOUND; } ares__buf_tag(buf); /* Consume ip address */ if (ares__buf_consume_charset(buf, ip_charset, sizeof(ip_charset) - 1) == 0) { return ARES_EBADSTR; } /* Fetch ip address */ status = ares__buf_tag_fetch_string(buf, ipaddr, sizeof(ipaddr)); if (status != ARES_SUCCESS) { return status; } /* Parse it to make sure its valid */ pat->addr.family = AF_UNSPEC; if (ares_dns_pton(ipaddr, &pat->addr, &addrlen) == NULL) { return ARES_EBADSTR; } /* See if there is a subnet mask */ if (ares__buf_begins_with(buf, (const unsigned char *)"/", 1)) { char maskstr[16]; const unsigned char ipv4_charset[] = "0123456789."; /* Consume / */ ares__buf_consume(buf, 1); ares__buf_tag(buf); /* Consume mask */ if (ares__buf_consume_charset(buf, ipv4_charset, sizeof(ipv4_charset) - 1) == 0) { return ARES_EBADSTR; } /* Fetch mask */ status = ares__buf_tag_fetch_string(buf, maskstr, sizeof(maskstr)); if (status != ARES_SUCCESS) { return status; } if (ares_str_isnum(maskstr)) { /* Numeric mask */ int mask = atoi(maskstr); if (mask < 0 || mask > 128) { return ARES_EBADSTR; } if (pat->addr.family == AF_INET && mask > 32) { return ARES_EBADSTR; } pat->mask = (unsigned char)mask; } else { /* Ipv4 subnet style mask */ struct ares_addr maskaddr; const unsigned char *ptr; memset(&maskaddr, 0, sizeof(maskaddr)); maskaddr.family = AF_INET; if (ares_dns_pton(maskstr, &maskaddr, &addrlen) == NULL) { return ARES_EBADSTR; } ptr = (const unsigned char *)&maskaddr.addr.addr4; pat->mask = (unsigned char)(ares__count_bits_u8(ptr[0]) + ares__count_bits_u8(ptr[1]) + ares__count_bits_u8(ptr[2]) + ares__count_bits_u8(ptr[3])); } } else { pat->mask = ip_natural_mask(&pat->addr); } /* Consume any trailing whitespace */ ares__buf_consume_whitespace(buf, ARES_TRUE); /* If we have any trailing bytes other than whitespace, its a parse failure */ if (ares__buf_len(buf) != 0) { return ARES_EBADSTR; } return ARES_SUCCESS; } ares_status_t ares__parse_sortlist(struct apattern **sortlist, size_t *nsort, const char *str) { ares__buf_t *buf = NULL; ares__llist_t *list = NULL; ares_status_t status = ARES_SUCCESS; ares__llist_node_t *node = NULL; if (sortlist == NULL || nsort == NULL || str == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (*sortlist != NULL) { ares_free(*sortlist); } *sortlist = NULL; *nsort = 0; buf = ares__buf_create_const((const unsigned char *)str, ares_strlen(str)); if (buf == NULL) { status = ARES_ENOMEM; goto done; } /* Split on space or semicolon */ status = ares__buf_split(buf, (const unsigned char *)" ;", 2, 
ARES_BUF_SPLIT_NONE, 0, &list); if (status != ARES_SUCCESS) { goto done; } for (node = ares__llist_node_first(list); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *entry = ares__llist_node_val(node); struct apattern pat; status = parse_sort(entry, &pat); if (status != ARES_SUCCESS && status != ARES_ENOTFOUND) { goto done; } if (status != ARES_SUCCESS) { continue; } if (!sortlist_append(sortlist, nsort, &pat)) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: ares__buf_destroy(buf); ares__llist_destroy(list); if (status != ARES_SUCCESS) { ares_free(*sortlist); *sortlist = NULL; *nsort = 0; } return status; } static ares_status_t config_search(ares_sysconfig_t *sysconfig, const char *str, size_t max_domains) { if (sysconfig->domains && sysconfig->ndomains > 0) { /* if we already have some domains present, free them first */ ares__strsplit_free(sysconfig->domains, sysconfig->ndomains); sysconfig->domains = NULL; sysconfig->ndomains = 0; } sysconfig->domains = ares__strsplit(str, ", ", &sysconfig->ndomains); if (sysconfig->domains == NULL) { return ARES_ENOMEM; } /* Truncate if necessary */ if (max_domains && sysconfig->ndomains > max_domains) { size_t i; for (i = max_domains; i < sysconfig->ndomains; i++) { ares_free(sysconfig->domains[i]); sysconfig->domains[i] = NULL; } sysconfig->ndomains = max_domains; } return ARES_SUCCESS; } static ares_status_t buf_fetch_string(ares__buf_t *buf, char *str, size_t str_len) { ares_status_t status; ares__buf_tag(buf); ares__buf_consume(buf, ares__buf_len(buf)); status = ares__buf_tag_fetch_string(buf, str, str_len); return status; } static ares_status_t config_lookup(ares_sysconfig_t *sysconfig, ares__buf_t *buf, const char *separators) { ares_status_t status; char lookupstr[32]; size_t lookupstr_cnt = 0; ares__llist_t *lookups = NULL; ares__llist_node_t *node; size_t separators_len = ares_strlen(separators); status = ares__buf_split(buf, (const unsigned char *)separators, separators_len, ARES_BUF_SPLIT_TRIM, 0, &lookups); if (status != ARES_SUCCESS) { goto done; } memset(lookupstr, 0, sizeof(lookupstr)); for (node = ares__llist_node_first(lookups); node != NULL; node = ares__llist_node_next(node)) { char value[128]; char ch; ares__buf_t *valbuf = ares__llist_node_val(node); status = buf_fetch_string(valbuf, value, sizeof(value)); if (status != ARES_SUCCESS) { continue; } if (strcasecmp(value, "dns") == 0 || strcasecmp(value, "bind") == 0 || strcasecmp(value, "resolv") == 0 || strcasecmp(value, "resolve") == 0) { ch = 'b'; } else if (strcasecmp(value, "files") == 0 || strcasecmp(value, "file") == 0 || strcasecmp(value, "local") == 0) { ch = 'f'; } else { continue; } /* Look for a duplicate and ignore */ if (memchr(lookupstr, ch, lookupstr_cnt) == NULL) { lookupstr[lookupstr_cnt++] = ch; } } if (lookupstr_cnt) { ares_free(sysconfig->lookups); sysconfig->lookups = ares_strdup(lookupstr); if (sysconfig->lookups == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: if (status != ARES_ENOMEM) { status = ARES_SUCCESS; } ares__llist_destroy(lookups); return status; } static ares_status_t process_option(ares_sysconfig_t *sysconfig, ares__buf_t *option) { ares__llist_t *kv = NULL; char key[32] = ""; char val[32] = ""; unsigned int valint = 0; ares_status_t status; /* Split on : */ status = ares__buf_split(option, (const unsigned char *)":", 1, ARES_BUF_SPLIT_TRIM, 2, &kv); if (status != ARES_SUCCESS) { goto done; } 
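  /* Illustrative only: for a token such as "ndots:2" the split above leaves
   * kv holding ["ndots", "2"], so valint becomes 2; a bare flag such as
   * "rotate" yields a single element and valint stays 0. */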
status = buf_fetch_string(ares__llist_first_val(kv), key, sizeof(key)); if (status != ARES_SUCCESS) { goto done; } if (ares__llist_len(kv) == 2) { status = buf_fetch_string(ares__llist_last_val(kv), val, sizeof(val)); if (status != ARES_SUCCESS) { goto done; } valint = (unsigned int)strtoul(val, NULL, 10); } if (strcmp(key, "ndots") == 0) { sysconfig->ndots = valint; } else if (strcmp(key, "retrans") == 0 || strcmp(key, "timeout") == 0) { if (valint == 0) { return ARES_EFORMERR; } sysconfig->timeout_ms = valint * 1000; } else if (strcmp(key, "retry") == 0 || strcmp(key, "attempts") == 0) { if (valint == 0) { return ARES_EFORMERR; } sysconfig->tries = valint; } else if (strcmp(key, "rotate") == 0) { sysconfig->rotate = ARES_TRUE; } else if (strcmp(key, "use-vc") == 0 || strcmp(key, "usevc") == 0) { sysconfig->usevc = ARES_TRUE; } done: ares__llist_destroy(kv); return status; } ares_status_t ares__sysconfig_set_options(ares_sysconfig_t *sysconfig, const char *str) { ares__buf_t *buf = NULL; ares__llist_t *options = NULL; ares_status_t status; ares__llist_node_t *node; buf = ares__buf_create_const((const unsigned char *)str, ares_strlen(str)); if (buf == NULL) { return ARES_ENOMEM; } status = ares__buf_split(buf, (const unsigned char *)" \t", 2, ARES_BUF_SPLIT_TRIM, 0, &options); if (status != ARES_SUCCESS) { goto done; } for (node = ares__llist_node_first(options); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *valbuf = ares__llist_node_val(node); status = process_option(sysconfig, valbuf); /* Out of memory is the only fatal condition */ if (status == ARES_ENOMEM) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: ares__llist_destroy(options); ares__buf_destroy(buf); return status; } ares_status_t ares__init_by_environment(ares_sysconfig_t *sysconfig) { const char *localdomain; const char *res_options; ares_status_t status; localdomain = getenv("LOCALDOMAIN"); if (localdomain) { char *temp = ares_strdup(localdomain); if (temp == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } status = config_search(sysconfig, temp, 1); ares_free(temp); if (status != ARES_SUCCESS) { return status; } } res_options = getenv("RES_OPTIONS"); if (res_options) { status = ares__sysconfig_set_options(sysconfig, res_options); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } /* Configuration Files: * /etc/resolv.conf * - All Unix-like systems * - Comments start with ; or # * - Lines have a keyword followed by a value that is interpreted specific * to the keyword: * - Keywords: * - nameserver - IP address of nameserver with optional port (using a : * prefix). If using an ipv6 address and specifying a port, the ipv6 * address must be encapsulated in brackets. For link-local ipv6 * addresses, the interface can also be specified with a % prefix. e.g.: * "nameserver [fe80::1]:1234%iface" * This keyword may be specified multiple times. 
* - search - whitespace separated list of domains * - domain - obsolete, same as search except only a single domain * - lookup / hostresorder - local, bind, file, files * - sortlist - whitespace separated ip-address/netmask pairs * - options - options controlling resolver variables * - ndots:n - set ndots option * - timeout:n (retrans:n) - timeout per query attempt in seconds * - attempts:n (retry:n) - number of times resolver will send query * - rotate - round-robin selection of name servers * - use-vc / usevc - force tcp * /etc/nsswitch.conf * - Modern Linux, FreeBSD, HP-UX, Solaris * - Search order set via: * "hosts: files dns mdns4_minimal mdns4" * - files is /etc/hosts * - dns is dns * - mdns4_minimal does mdns only if ending in .local * - mdns4 does not limit to domains ending in .local * /etc/netsvc.conf * - AIX * - Search order set via: * "hosts = local , bind" * - bind is dns * - local is /etc/hosts * /etc/svc.conf * - Tru64 * - Same format as /etc/netsvc.conf * /etc/host.conf * - Early FreeBSD, Early Linux * - Not worth supporting, format varied based on system, FreeBSD used * just a line per search order, Linux used "order " and a comma * delimited list of "bind" and "hosts" */ /* This function will only return ARES_SUCCESS or ARES_ENOMEM. Any other * conditions are ignored. Users may mess up config files, but we want to * process anything we can. */ static ares_status_t parse_resolvconf_line(ares_sysconfig_t *sysconfig, ares__buf_t *line) { char option[32]; char value[512]; ares_status_t status = ARES_SUCCESS; /* Ignore lines beginning with a comment */ if (ares__buf_begins_with(line, (const unsigned char *)"#", 1) || ares__buf_begins_with(line, (const unsigned char *)";", 1)) { return ARES_SUCCESS; } ares__buf_tag(line); /* Shouldn't be possible, but if it happens, ignore the line. */ if (ares__buf_consume_nonwhitespace(line) == 0) { return ARES_SUCCESS; } status = ares__buf_tag_fetch_string(line, option, sizeof(option)); if (status != ARES_SUCCESS) { return ARES_SUCCESS; } ares__buf_consume_whitespace(line, ARES_TRUE); status = buf_fetch_string(line, value, sizeof(value)); if (status != ARES_SUCCESS) { return ARES_SUCCESS; } ares__str_trim(value); if (*value == 0) { return ARES_SUCCESS; } /* At this point we have a string option and a string value, both trimmed * of leading and trailing whitespace. Lets try to evaluate them */ if (strcmp(option, "domain") == 0) { /* Domain is legacy, don't overwrite an existing config set by search */ if (sysconfig->domains == NULL) { status = config_search(sysconfig, value, 1); } } else if (strcmp(option, "lookup") == 0 || strcmp(option, "hostresorder") == 0) { ares__buf_tag_rollback(line); status = config_lookup(sysconfig, line, " \t"); } else if (strcmp(option, "search") == 0) { status = config_search(sysconfig, value, 0); } else if (strcmp(option, "nameserver") == 0) { status = ares__sconfig_append_fromstr(&sysconfig->sconfig, value, ARES_TRUE); } else if (strcmp(option, "sortlist") == 0) { /* Ignore all failures except ENOMEM. If the sysadmin set a bad * sortlist, just ignore the sortlist, don't cause an inoperable * channel */ status = ares__parse_sortlist(&sysconfig->sortlist, &sysconfig->nsortlist, value); if (status != ARES_ENOMEM) { status = ARES_SUCCESS; } } else if (strcmp(option, "options") == 0) { status = ares__sysconfig_set_options(sysconfig, value); } return status; } /* This function will only return ARES_SUCCESS or ARES_ENOMEM. Any other * conditions are ignored. 
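 * (For illustration: a typical line handled here is "hosts: files dns",
 * which config_lookup() reduces to the lookup string "fb" -- 'f' for local
 * files, 'b' for DNS.)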
Users may mess up config files, but we want to * process anything we can. */ static ares_status_t parse_nsswitch_line(ares_sysconfig_t *sysconfig, ares__buf_t *line) { char option[32]; ares__buf_t *buf; ares_status_t status = ARES_SUCCESS; ares__llist_t *sects = NULL; /* Ignore lines beginning with a comment */ if (ares__buf_begins_with(line, (const unsigned char *)"#", 1)) { return ARES_SUCCESS; } /* database : values (space delimited) */ status = ares__buf_split(line, (const unsigned char *)":", 1, ARES_BUF_SPLIT_TRIM, 2, §s); if (status != ARES_SUCCESS || ares__llist_len(sects) != 2) { goto done; } buf = ares__llist_first_val(sects); status = buf_fetch_string(buf, option, sizeof(option)); if (status != ARES_SUCCESS) { goto done; } /* Only support "hosts:" */ if (strcmp(option, "hosts") != 0) { goto done; } /* Values are space separated */ buf = ares__llist_last_val(sects); status = config_lookup(sysconfig, buf, " \t"); done: ares__llist_destroy(sects); if (status != ARES_ENOMEM) { status = ARES_SUCCESS; } return status; } /* This function will only return ARES_SUCCESS or ARES_ENOMEM. Any other * conditions are ignored. Users may mess up config files, but we want to * process anything we can. */ static ares_status_t parse_svcconf_line(ares_sysconfig_t *sysconfig, ares__buf_t *line) { char option[32]; ares__buf_t *buf; ares_status_t status = ARES_SUCCESS; ares__llist_t *sects = NULL; /* Ignore lines beginning with a comment */ if (ares__buf_begins_with(line, (const unsigned char *)"#", 1)) { return ARES_SUCCESS; } /* database = values (comma delimited)*/ status = ares__buf_split(line, (const unsigned char *)"=", 1, ARES_BUF_SPLIT_TRIM, 2, §s); if (status != ARES_SUCCESS || ares__llist_len(sects) != 2) { goto done; } buf = ares__llist_first_val(sects); status = buf_fetch_string(buf, option, sizeof(option)); if (status != ARES_SUCCESS) { goto done; } /* Only support "hosts=" */ if (strcmp(option, "hosts") != 0) { goto done; } /* Values are comma separated */ buf = ares__llist_last_val(sects); status = config_lookup(sysconfig, buf, ","); done: ares__llist_destroy(sects); if (status != ARES_ENOMEM) { status = ARES_SUCCESS; } return status; } typedef ares_status_t (*line_callback_t)(ares_sysconfig_t *sysconfig, ares__buf_t *line); /* Should only return: * ARES_ENOTFOUND - file not found * ARES_EFILE - error reading file (perms) * ARES_ENOMEM - out of memory * ARES_SUCCESS - file processed, doesn't necessarily mean it was a good * file, but we're not erroring out if we can't parse * something (or anything at all) */ static ares_status_t process_config_lines(const char *filename, ares_sysconfig_t *sysconfig, line_callback_t cb) { ares_status_t status = ARES_SUCCESS; ares__llist_node_t *node; ares__llist_t *lines = NULL; ares__buf_t *buf = NULL; buf = ares__buf_create(); if (buf == NULL) { status = ARES_ENOMEM; goto done; } status = ares__buf_load_file(filename, buf); if (status != ARES_SUCCESS) { goto done; } status = ares__buf_split(buf, (const unsigned char *)"\n", 1, ARES_BUF_SPLIT_TRIM, 0, &lines); if (status != ARES_SUCCESS) { goto done; } for (node = ares__llist_node_first(lines); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *line = ares__llist_node_val(node); status = cb(sysconfig, line); if (status != ARES_SUCCESS) { goto done; } } done: ares__buf_destroy(buf); ares__llist_destroy(lines); return status; } ares_status_t ares__init_sysconfig_files(const ares_channel_t *channel, ares_sysconfig_t *sysconfig) { ares_status_t status = ARES_SUCCESS; /* Resolv.conf */ status = 
process_config_lines((channel->resolvconf_path != NULL) ? channel->resolvconf_path : PATH_RESOLV_CONF, sysconfig, parse_resolvconf_line); if (status != ARES_SUCCESS && status != ARES_ENOTFOUND) { goto done; } /* Nsswitch.conf */ status = process_config_lines("/etc/nsswitch.conf", sysconfig, parse_nsswitch_line); if (status != ARES_SUCCESS && status != ARES_ENOTFOUND) { goto done; } /* netsvc.conf */ status = process_config_lines("/etc/netsvc.conf", sysconfig, parse_svcconf_line); if (status != ARES_SUCCESS && status != ARES_ENOTFOUND) { goto done; } /* svc.conf */ status = process_config_lines("/etc/svc.conf", sysconfig, parse_svcconf_line); if (status != ARES_SUCCESS && status != ARES_ENOTFOUND) { goto done; } status = ARES_SUCCESS; done: return status; } gevent-24.11.1/deps/c-ares/src/lib/ares_sysconfig_mac.c000066400000000000000000000266041471441230600226260ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifdef __APPLE__ /* The DNS configuration for apple is stored in the system configuration * database. Apple does provide an emulated `/etc/resolv.conf` on MacOS (but * not iOS), it cannot, however, represent the entirety of the DNS * configuration. Alternatively, libresolv could be used to also retrieve some * system configuration, but it too is not capable of retrieving the entirety * of the DNS configuration. * * Attempts to use the preferred public API of `SCDynamicStoreCreate()` and * friends yielded incomplete DNS information. Instead, that leaves some apple * "internal" symbols from `configd` that we need to access in order to get the * entire configuration. We can see that we're not the only ones to do this as * Google Chrome also does: * https://chromium.googlesource.com/chromium/src/+/HEAD/net/dns/dns_config_watcher_mac.cc * These internal functions are what `libresolv` and `scutil` use to retrieve * the dns configuration. 
Since these symbols are not publicly available, we * will dynamically load the symbols from `libSystem` and import the `dnsinfo.h` * private header extracted from: * https://opensource.apple.com/source/configd/configd-1109.140.1/dnsinfo/dnsinfo.h */ /* The apple header uses anonymous unions which came with C11 */ # if defined(__clang__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wc11-extensions" # endif # include "ares_private.h" # include # include # include # include # include # include "thirdparty/apple/dnsinfo.h" # include # if MAC_OS_X_VERSION_MIN_REQUIRED >= 1080 /* MacOS 10.8 */ # include # endif typedef struct { void *handle; dns_config_t *(*dns_configuration_copy)(void); void (*dns_configuration_free)(dns_config_t *config); } dnsinfo_t; static void dnsinfo_destroy(dnsinfo_t *dnsinfo) { if (dnsinfo == NULL) { return; } if (dnsinfo->handle) { dlclose(dnsinfo->handle); } ares_free(dnsinfo); } static ares_status_t dnsinfo_init(dnsinfo_t **dnsinfo_out) { dnsinfo_t *dnsinfo = NULL; ares_status_t status = ARES_SUCCESS; size_t i; const char *searchlibs[] = { "/usr/lib/libSystem.dylib", "/System/Library/Frameworks/SystemConfiguration.framework/" "SystemConfiguration", NULL }; if (dnsinfo_out == NULL) { status = ARES_EFORMERR; goto done; } *dnsinfo_out = NULL; dnsinfo = ares_malloc_zero(sizeof(*dnsinfo)); if (dnsinfo == NULL) { status = ARES_ENOMEM; goto done; } for (i = 0; searchlibs[i] != NULL; i++) { dnsinfo->handle = dlopen(searchlibs[i], RTLD_LAZY /* | RTLD_NOLOAD */); if (dnsinfo->handle == NULL) { /* Fail, loop */ continue; } dnsinfo->dns_configuration_copy = (dns_config_t * (*)(void)) dlsym(dnsinfo->handle, "dns_configuration_copy"); dnsinfo->dns_configuration_free = (void (*)(dns_config_t *))dlsym( dnsinfo->handle, "dns_configuration_free"); if (dnsinfo->dns_configuration_copy != NULL && dnsinfo->dns_configuration_free != NULL) { break; } /* Fail, loop */ dlclose(dnsinfo->handle); dnsinfo->handle = NULL; } if (dnsinfo->dns_configuration_copy == NULL || dnsinfo->dns_configuration_free == NULL) { status = ARES_ESERVFAIL; goto done; } done: if (status == ARES_SUCCESS) { *dnsinfo_out = dnsinfo; } else { dnsinfo_destroy(dnsinfo); } return status; } static ares_bool_t search_is_duplicate(const ares_sysconfig_t *sysconfig, const char *name) { size_t i; for (i = 0; i < sysconfig->ndomains; i++) { if (strcasecmp(sysconfig->domains[i], name) == 0) { return ARES_TRUE; } } return ARES_FALSE; } static ares_status_t read_resolver(const dns_resolver_t *resolver, ares_sysconfig_t *sysconfig) { int i; unsigned short port = 0; ares_status_t status = ARES_SUCCESS; # if MAC_OS_X_VERSION_MIN_REQUIRED >= 1080 /* MacOS 10.8 */ /* XXX: resolver->domain is for domain-specific servers. When we implement * this support, we'll want to use this. But for now, we're going to * skip any servers which set this since we can't properly route. * MacOS used to use this setting for a different purpose in the * past however, so on versions of MacOS < 10.8 just ignore this * completely. */ if (resolver->domain != NULL) { return ARES_SUCCESS; } # endif # if MAC_OS_X_VERSION_MIN_REQUIRED >= 1080 /* MacOS 10.8 */ /* Check to see if DNS server should be used, base this on if the server is * reachable or can be reachable automatically if we send traffic that * direction. 
*/ if (!(resolver->reach_flags & (kSCNetworkFlagsReachable | kSCNetworkReachabilityFlagsConnectionOnTraffic))) { return ARES_SUCCESS; } # endif /* NOTE: it doesn't look like resolver->flags is relevant */ /* If there's no nameservers, nothing to do */ if (resolver->n_nameserver <= 0) { return ARES_SUCCESS; } /* Default port */ port = resolver->port; /* Append search list */ if (resolver->n_search > 0) { char **new_domains = ares_realloc_zero( sysconfig->domains, sizeof(*sysconfig->domains) * sysconfig->ndomains, sizeof(*sysconfig->domains) * (sysconfig->ndomains + (size_t)resolver->n_search)); if (new_domains == NULL) { return ARES_ENOMEM; } sysconfig->domains = new_domains; for (i = 0; i < resolver->n_search; i++) { const char *search; /* UBSAN: copy pointer using memcpy due to misalignment */ memcpy(&search, resolver->search + i, sizeof(search)); /* Skip duplicates */ if (search_is_duplicate(sysconfig, search)) { continue; } sysconfig->domains[sysconfig->ndomains] = ares_strdup(search); if (sysconfig->domains[sysconfig->ndomains] == NULL) { return ARES_ENOMEM; } sysconfig->ndomains++; } } /* NOTE: we're going to skip importing the sort addresses for now. Its * likely not used, its not obvious how to even configure such a thing. */ # if 0 for (i=0; in_sortaddr; i++) { char val[256]; inet_ntop(AF_INET, &resolver->sortaddr[i]->address, val, sizeof(val)); printf("\t\t%s/", val); inet_ntop(AF_INET, &resolver->sortaddr[i]->mask, val, sizeof(val)); printf("%s\n", val); } # endif if (resolver->options != NULL) { status = ares__sysconfig_set_options(sysconfig, resolver->options); if (status != ARES_SUCCESS) { return status; } } /* NOTE: * - resolver->timeout appears unused, always 0, so we ignore this * - resolver->service_identifier doesn't appear relevant to us * - resolver->cid also isn't relevant * - resolver->if_name we won't use since it isn't available in MacOS 10.8 * or earlier, use resolver->if_index instead to then lookup the name. */ /* XXX: resolver->search_order appears like it might be relevant, we might * need to sort the resulting list by this metric if we find in the future we * need to. That said, due to the automatic re-sorting we do, I'm not sure it * matters. Here's an article on this search order stuff: * https://www.cnet.com/tech/computing/os-x-10-6-3-and-dns-server-priority-changes/ */ for (i = 0; i < resolver->n_nameserver; i++) { struct ares_addr addr; unsigned short addrport; const struct sockaddr *sockaddr; char if_name_str[256] = ""; const char *if_name; /* UBSAN alignment workaround to fetch memory address */ memcpy(&sockaddr, resolver->nameserver + i, sizeof(sockaddr)); if (!ares_sockaddr_to_ares_addr(&addr, &addrport, sockaddr)) { continue; } if (addrport == 0) { addrport = port; } if_name = ares__if_indextoname(resolver->if_index, if_name_str, sizeof(if_name_str)); status = ares__sconfig_append(&sysconfig->sconfig, &addr, addrport, addrport, if_name); if (status != ARES_SUCCESS) { return status; } } return status; } static ares_status_t read_resolvers(dns_resolver_t **resolvers, int nresolvers, ares_sysconfig_t *sysconfig) { ares_status_t status = ARES_SUCCESS; int i; for (i = 0; status == ARES_SUCCESS && i < nresolvers; i++) { const dns_resolver_t *resolver_ptr; /* UBSAN doesn't like that this is unaligned, lets use memcpy to get the * address. 
Equivalent to: * resolver = resolvers[i] */ memcpy(&resolver_ptr, resolvers + i, sizeof(resolver_ptr)); status = read_resolver(resolver_ptr, sysconfig); } return status; } ares_status_t ares__init_sysconfig_macos(ares_sysconfig_t *sysconfig) { dnsinfo_t *dnsinfo = NULL; dns_config_t *sc_dns = NULL; ares_status_t status = ARES_SUCCESS; status = dnsinfo_init(&dnsinfo); if (status != ARES_SUCCESS) { goto done; } sc_dns = dnsinfo->dns_configuration_copy(); if (sc_dns == NULL) { status = ARES_ESERVFAIL; goto done; } /* There are `resolver`, `scoped_resolver`, and `service_specific_resolver` * settings. The `scoped_resolver` settings appear to be already available via * the `resolver` settings and likely are only relevant to link-local dns * servers which we can already detect via the address itself, so we'll ignore * the `scoped_resolver` section. It isn't clear what the * `service_specific_resolver` is used for, I haven't personally seen it * in use so we'll ignore this until at some point where we find we need it. * Likely this wasn't available via `/etc/resolv.conf` nor `libresolv` anyhow * so its not worse to prior configuration methods, worst case. */ status = read_resolvers(sc_dns->resolver, sc_dns->n_resolver, sysconfig); done: if (dnsinfo) { dnsinfo->dns_configuration_free(sc_dns); dnsinfo_destroy(dnsinfo); } return status; } # if defined(__clang__) # pragma GCC diagnostic pop # endif #else /* Prevent compiler warnings due to empty translation unit */ typedef int make_iso_compilers_happy; #endif gevent-24.11.1/deps/c-ares/src/lib/ares_sysconfig_win.c000066400000000000000000000463541471441230600226670ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2007 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_SYS_PARAM_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #if defined(USE_WINSOCK) # if defined(HAVE_IPHLPAPI_H) # include # endif # if defined(HAVE_NETIOAPI_H) # include # endif #endif #include "ares_inet_net_pton.h" #include "ares_platform.h" #if defined(USE_WINSOCK) /* * get_REG_SZ() * * Given a 'hKey' handle to an open registry key and a 'leafKeyName' pointer * to the name of the registry leaf key to be queried, fetch it's string * value and return a pointer in *outptr to a newly allocated memory area * holding it as a null-terminated string. * * Returns 0 and nullifies *outptr upon inability to return a string value. * * Returns 1 and sets *outptr when returning a dynamically allocated string. * * Supported on Windows NT 3.5 and newer. */ static ares_bool_t get_REG_SZ(HKEY hKey, const char *leafKeyName, char **outptr) { DWORD size = 0; int res; *outptr = NULL; /* Find out size of string stored in registry */ res = RegQueryValueExA(hKey, leafKeyName, 0, NULL, NULL, &size); if ((res != ERROR_SUCCESS && res != ERROR_MORE_DATA) || !size) { return ARES_FALSE; } /* Allocate buffer of indicated size plus one given that string might have been stored without null termination */ *outptr = ares_malloc(size + 1); if (!*outptr) { return ARES_FALSE; } /* Get the value for real */ res = RegQueryValueExA(hKey, leafKeyName, 0, NULL, (unsigned char *)*outptr, &size); if ((res != ERROR_SUCCESS) || (size == 1)) { ares_free(*outptr); *outptr = NULL; return ARES_FALSE; } /* Null terminate buffer always */ *(*outptr + size) = '\0'; return ARES_TRUE; } static void commanjoin(char **dst, const char * const src, const size_t len) { char *newbuf; size_t newsize; /* 1 for terminating 0 and 2 for , and terminating 0 */ newsize = len + (*dst ? (ares_strlen(*dst) + 2) : 1); newbuf = ares_realloc(*dst, newsize); if (!newbuf) { return; } if (*dst == NULL) { *newbuf = '\0'; } *dst = newbuf; if (ares_strlen(*dst) != 0) { strcat(*dst, ","); } strncat(*dst, src, len); } /* * commajoin() * * RTF code. */ static void commajoin(char **dst, const char *src) { commanjoin(dst, src, ares_strlen(src)); } /* A structure to hold the string form of IPv4 and IPv6 addresses so we can * sort them by a metric. */ typedef struct { /* The metric we sort them by. */ ULONG metric; /* Original index of the item, used as a secondary sort parameter to make * qsort() stable if the metrics are equal */ size_t orig_idx; /* Room enough for the string form of any IPv4 or IPv6 address that * ares_inet_ntop() will create. Based on the existing c-ares practice. */ char text[INET6_ADDRSTRLEN + 8 + 64]; /* [%s]:NNNNN%iface */ } Address; /* Sort Address values \a left and \a right by metric, returning the usual * indicators for qsort(). */ static int compareAddresses(const void *arg1, const void *arg2) { const Address * const left = arg1; const Address * const right = arg2; /* Lower metric the more preferred */ if (left->metric < right->metric) { return -1; } if (left->metric > right->metric) { return 1; } /* If metrics are equal, lower original index more preferred */ if (left->orig_idx < right->orig_idx) { return -1; } if (left->orig_idx > right->orig_idx) { return 1; } return 0; } /* There can be multiple routes to "the Internet". And there can be different * DNS servers associated with each of the interfaces that offer those routes. 
* We have to assume that any DNS server can serve any request. But, some DNS * servers may only respond if requested over their associated interface. But * we also want to use "the preferred route to the Internet" whenever possible * (and not use DNS servers on a non-preferred route even by forcing request * to go out on the associated non-preferred interface). i.e. We want to use * the DNS servers associated with the same interface that we would use to * make a general request to anything else. * * But, Windows won't sort the DNS servers by the metrics associated with the * routes and interfaces _even_ though it obviously sends IP packets based on * those same routes and metrics. So, we must do it ourselves. * * So, we sort the DNS servers by the same metric values used to determine how * an outgoing IP packet will go, thus effectively using the DNS servers * associated with the interface that the DNS requests themselves will * travel. This gives us optimal routing and avoids issues where DNS servers * won't respond to requests that don't arrive via some specific subnetwork * (and thus some specific interface). * * This function computes the metric we use to sort. On the interface * identified by \a luid, it determines the best route to \a dest and combines * that route's metric with \a interfaceMetric to compute a metric for the * destination address on that interface. This metric can be used as a weight * to sort the DNS server addresses associated with each interface (lower is * better). * * Note that by restricting the route search to the specific interface with * which the DNS servers are associated, this function asks the question "What * is the metric for sending IP packets to this DNS server?" which allows us * to sort the DNS servers correctly. */ static ULONG getBestRouteMetric(IF_LUID * const luid, /* Can't be const :( */ const SOCKADDR_INET * const dest, const ULONG interfaceMetric) { /* On this interface, get the best route to that destination. */ # if defined(__WATCOMC__) /* OpenWatcom's builtin Windows SDK does not have a definition for * MIB_IPFORWARD_ROW2, and also does not allow the usage of SOCKADDR_INET * as a variable. Let's work around this by returning the worst possible * metric, but only when using the OpenWatcom compiler. * It may be worth investigating using a different version of the Windows * SDK with OpenWatcom in the future, though this may be fixed in OpenWatcom * 2.0. */ return (ULONG)-1; # else MIB_IPFORWARD_ROW2 row; SOCKADDR_INET ignored; if (GetBestRoute2(/* The interface to use. The index is ignored since we are * passing a LUID. */ luid, 0, /* No specific source address. */ NULL, /* Our destination address. */ dest, /* No options. */ 0, /* The route row. */ &row, /* The best source address, which we don't need. */ &ignored) != NO_ERROR /* If the metric is "unused" (-1) or too large for us to add the two * metrics, use the worst possible, thus sorting this last. */ || row.Metric == (ULONG)-1 || row.Metric > ((ULONG)-1) - interfaceMetric) { /* Return the worst possible metric. */ return (ULONG)-1; } /* Return the metric value from that row, plus the interface metric. * * See * http://msdn.microsoft.com/en-us/library/windows/desktop/aa814494(v=vs.85).aspx * which describes the combination as a "sum". */ return row.Metric + interfaceMetric; # endif /* __WATCOMC__ */ } /* * get_DNS_Windows() * * Locates DNS info using GetAdaptersAddresses() function from the Internet * Protocol Helper (IP Helper) API. 
When located, this returns a pointer * in *outptr to a newly allocated memory area holding a null-terminated * string with a space or comma separated list of DNS IP addresses. * * Returns 0 and nullifies *outptr upon inability to return DNSes string. * * Returns 1 and sets *outptr when returning a dynamically allocated string. * * Implementation supports Windows XP and newer. */ # define IPAA_INITIAL_BUF_SZ 15 * 1024 # define IPAA_MAX_TRIES 3 static ares_bool_t get_DNS_Windows(char **outptr) { IP_ADAPTER_DNS_SERVER_ADDRESS *ipaDNSAddr; IP_ADAPTER_ADDRESSES *ipaa; IP_ADAPTER_ADDRESSES *newipaa; IP_ADAPTER_ADDRESSES *ipaaEntry; ULONG ReqBufsz = IPAA_INITIAL_BUF_SZ; ULONG Bufsz = IPAA_INITIAL_BUF_SZ; ULONG AddrFlags = 0; int trying = IPAA_MAX_TRIES; ULONG res; /* The capacity of addresses, in elements. */ size_t addressesSize; /* The number of elements in addresses. */ size_t addressesIndex = 0; /* The addresses we will sort. */ Address *addresses; union { struct sockaddr *sa; struct sockaddr_in *sa4; struct sockaddr_in6 *sa6; } namesrvr; *outptr = NULL; ipaa = ares_malloc(Bufsz); if (!ipaa) { return ARES_FALSE; } /* Start with enough room for a few DNS server addresses and we'll grow it * as we encounter more. */ addressesSize = 4; addresses = (Address *)ares_malloc(sizeof(Address) * addressesSize); if (addresses == NULL) { /* We need room for at least some addresses to function. */ ares_free(ipaa); return ARES_FALSE; } /* Usually this call succeeds with initial buffer size */ res = GetAdaptersAddresses(AF_UNSPEC, AddrFlags, NULL, ipaa, &ReqBufsz); if ((res != ERROR_BUFFER_OVERFLOW) && (res != ERROR_SUCCESS)) { goto done; } while ((res == ERROR_BUFFER_OVERFLOW) && (--trying)) { if (Bufsz < ReqBufsz) { newipaa = ares_realloc(ipaa, ReqBufsz); if (!newipaa) { goto done; } Bufsz = ReqBufsz; ipaa = newipaa; } res = GetAdaptersAddresses(AF_UNSPEC, AddrFlags, NULL, ipaa, &ReqBufsz); if (res == ERROR_SUCCESS) { break; } } if (res != ERROR_SUCCESS) { goto done; } for (ipaaEntry = ipaa; ipaaEntry; ipaaEntry = ipaaEntry->Next) { if (ipaaEntry->OperStatus != IfOperStatusUp) { continue; } /* For each interface, find any associated DNS servers as IPv4 or IPv6 * addresses. For each found address, find the best route to that DNS * server address _on_ _that_ _interface_ (at this moment in time) and * compute the resulting total metric, just as Windows routing will do. * Then, sort all the addresses found by the metric. */ for (ipaDNSAddr = ipaaEntry->FirstDnsServerAddress; ipaDNSAddr != NULL; ipaDNSAddr = ipaDNSAddr->Next) { char ipaddr[INET6_ADDRSTRLEN] = ""; namesrvr.sa = ipaDNSAddr->Address.lpSockaddr; if (namesrvr.sa->sa_family == AF_INET) { if ((namesrvr.sa4->sin_addr.S_un.S_addr == INADDR_ANY) || (namesrvr.sa4->sin_addr.S_un.S_addr == INADDR_NONE)) { continue; } /* Allocate room for another address, if necessary, else skip. 
*/ if (addressesIndex == addressesSize) { const size_t newSize = addressesSize + 4; Address * const newMem = (Address *)ares_realloc(addresses, sizeof(Address) * newSize); if (newMem == NULL) { continue; } addresses = newMem; addressesSize = newSize; } addresses[addressesIndex].metric = getBestRouteMetric( &ipaaEntry->Luid, (SOCKADDR_INET *)((void *)(namesrvr.sa)), ipaaEntry->Ipv4Metric); /* Record insertion index to make qsort stable */ addresses[addressesIndex].orig_idx = addressesIndex; if (!ares_inet_ntop(AF_INET, &namesrvr.sa4->sin_addr, ipaddr, sizeof(ipaddr))) { continue; } snprintf(addresses[addressesIndex].text, sizeof(addresses[addressesIndex].text), "[%s]:%u", ipaddr, ntohs(namesrvr.sa4->sin_port)); ++addressesIndex; } else if (namesrvr.sa->sa_family == AF_INET6) { unsigned int ll_scope = 0; struct ares_addr addr; if (memcmp(&namesrvr.sa6->sin6_addr, &ares_in6addr_any, sizeof(namesrvr.sa6->sin6_addr)) == 0) { continue; } /* Allocate room for another address, if necessary, else skip. */ if (addressesIndex == addressesSize) { const size_t newSize = addressesSize + 4; Address * const newMem = (Address *)ares_realloc(addresses, sizeof(Address) * newSize); if (newMem == NULL) { continue; } addresses = newMem; addressesSize = newSize; } /* See if its link-local */ memset(&addr, 0, sizeof(addr)); addr.family = AF_INET6; memcpy(&addr.addr.addr6, &namesrvr.sa6->sin6_addr, 16); if (ares__addr_is_linklocal(&addr)) { ll_scope = ipaaEntry->Ipv6IfIndex; } addresses[addressesIndex].metric = getBestRouteMetric( &ipaaEntry->Luid, (SOCKADDR_INET *)((void *)(namesrvr.sa)), ipaaEntry->Ipv6Metric); /* Record insertion index to make qsort stable */ addresses[addressesIndex].orig_idx = addressesIndex; if (!ares_inet_ntop(AF_INET6, &namesrvr.sa6->sin6_addr, ipaddr, sizeof(ipaddr))) { continue; } if (ll_scope) { snprintf(addresses[addressesIndex].text, sizeof(addresses[addressesIndex].text), "[%s]:%u%%%u", ipaddr, ntohs(namesrvr.sa6->sin6_port), ll_scope); } else { snprintf(addresses[addressesIndex].text, sizeof(addresses[addressesIndex].text), "[%s]:%u", ipaddr, ntohs(namesrvr.sa6->sin6_port)); } ++addressesIndex; } else { /* Skip non-IPv4/IPv6 addresses completely. */ continue; } } } /* Sort all of the textual addresses by their metric (and original index if * metrics are equal). */ qsort(addresses, addressesIndex, sizeof(*addresses), compareAddresses); /* Join them all into a single string, removing duplicates. */ { size_t i; for (i = 0; i < addressesIndex; ++i) { size_t j; /* Look for this address text appearing previously in the results. */ for (j = 0; j < i; ++j) { if (strcmp(addresses[j].text, addresses[i].text) == 0) { break; } } /* Iff we didn't emit this address already, emit it now. */ if (j == i) { /* Add that to outptr (if we can). */ commajoin(outptr, addresses[i].text); } } } done: ares_free(addresses); if (ipaa) { ares_free(ipaa); } if (!*outptr) { return ARES_FALSE; } return ARES_TRUE; } /* * get_SuffixList_Windows() * * Reads the "DNS Suffix Search List" from registry and writes the list items * whitespace separated to outptr. If the Search List is empty, the * "Primary Dns Suffix" is written to outptr. * * Returns 0 and nullifies *outptr upon inability to return the suffix list. * * Returns 1 and sets *outptr when returning a dynamically allocated string. 
* * Implementation supports Windows Server 2003 and newer */ static ares_bool_t get_SuffixList_Windows(char **outptr) { HKEY hKey; HKEY hKeyEnum; char keyName[256]; DWORD keyNameBuffSize; DWORD keyIdx = 0; char *p = NULL; *outptr = NULL; if (ares__getplatform() != WIN_NT) { return ARES_FALSE; } /* 1. Global DNS Suffix Search List */ if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, WIN_NS_NT_KEY, 0, KEY_READ, &hKey) == ERROR_SUCCESS) { get_REG_SZ(hKey, SEARCHLIST_KEY, outptr); if (get_REG_SZ(hKey, DOMAIN_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } RegCloseKey(hKey); } if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, WIN_NT_DNSCLIENT, 0, KEY_READ, &hKey) == ERROR_SUCCESS) { if (get_REG_SZ(hKey, SEARCHLIST_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } RegCloseKey(hKey); } /* 2. Connection Specific Search List composed of: * a. Primary DNS Suffix */ if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, WIN_DNSCLIENT, 0, KEY_READ, &hKey) == ERROR_SUCCESS) { if (get_REG_SZ(hKey, PRIMARYDNSSUFFIX_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } RegCloseKey(hKey); } /* b. Interface SearchList, Domain, DhcpDomain */ if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, WIN_NS_NT_KEY "\\" INTERFACES_KEY, 0, KEY_READ, &hKey) == ERROR_SUCCESS) { for (;;) { keyNameBuffSize = sizeof(keyName); if (RegEnumKeyExA(hKey, keyIdx++, keyName, &keyNameBuffSize, 0, NULL, NULL, NULL) != ERROR_SUCCESS) { break; } if (RegOpenKeyExA(hKey, keyName, 0, KEY_QUERY_VALUE, &hKeyEnum) != ERROR_SUCCESS) { continue; } /* p can be comma separated (SearchList) */ if (get_REG_SZ(hKeyEnum, SEARCHLIST_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } if (get_REG_SZ(hKeyEnum, DOMAIN_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } if (get_REG_SZ(hKeyEnum, DHCPDOMAIN_KEY, &p)) { commajoin(outptr, p); ares_free(p); p = NULL; } RegCloseKey(hKeyEnum); } RegCloseKey(hKey); } return *outptr != NULL ? ARES_TRUE : ARES_FALSE; } ares_status_t ares__init_sysconfig_windows(ares_sysconfig_t *sysconfig) { char *line = NULL; ares_status_t status = ARES_SUCCESS; if (get_DNS_Windows(&line)) { status = ares__sconfig_append_fromstr(&sysconfig->sconfig, line, ARES_TRUE); ares_free(line); if (status != ARES_SUCCESS) { goto done; } } if (get_SuffixList_Windows(&line)) { sysconfig->domains = ares__strsplit(line, ", ", &sysconfig->ndomains); ares_free(line); if (sysconfig->domains == NULL) { status = ARES_EFILE; } if (status != ARES_SUCCESS) { goto done; } } done: return status; } #endif gevent-24.11.1/deps/c-ares/src/lib/ares_timeout.c000066400000000000000000000102421471441230600214570ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_LIMITS_H # include #endif void ares__timeval_remaining(ares_timeval_t *remaining, const ares_timeval_t *now, const ares_timeval_t *tout) { memset(remaining, 0, sizeof(*remaining)); /* Expired! */ if (tout->sec < now->sec || (tout->sec == now->sec && tout->usec < now->usec)) { return; } remaining->sec = tout->sec - now->sec; if (tout->usec < now->usec) { remaining->sec -= 1; remaining->usec = (tout->usec + 1000000) - now->usec; } else { remaining->usec = tout->usec - now->usec; } } void ares__timeval_diff(ares_timeval_t *tvdiff, const ares_timeval_t *tvstart, const ares_timeval_t *tvstop) { tvdiff->sec = tvstop->sec - tvstart->sec; if (tvstop->usec > tvstart->usec) { tvdiff->usec = tvstop->usec - tvstart->usec; } else { tvdiff->sec -= 1; tvdiff->usec = tvstop->usec + 1000000 - tvstart->usec; } } static void ares_timeval_to_struct_timeval(struct timeval *tv, const ares_timeval_t *atv) { #ifdef USE_WINSOCK tv->tv_sec = (long)atv->sec; #else tv->tv_sec = (time_t)atv->sec; #endif tv->tv_usec = (int)atv->usec; } static void struct_timeval_to_ares_timeval(ares_timeval_t *atv, const struct timeval *tv) { atv->sec = (ares_int64_t)tv->tv_sec; atv->usec = (unsigned int)tv->tv_usec; } static struct timeval *ares_timeout_int(const ares_channel_t *channel, struct timeval *maxtv, struct timeval *tvbuf) { const ares_query_t *query; ares__slist_node_t *node; ares_timeval_t now; ares_timeval_t atvbuf; ares_timeval_t amaxtv; /* The minimum timeout of all queries is always the first entry in * channel->queries_by_timeout */ node = ares__slist_node_first(channel->queries_by_timeout); /* no queries/timeout */ if (node == NULL) { return maxtv; } query = ares__slist_node_val(node); ares__tvnow(&now); ares__timeval_remaining(&atvbuf, &now, &query->timeout); ares_timeval_to_struct_timeval(tvbuf, &atvbuf); if (maxtv == NULL) { return tvbuf; } /* Return the minimum time between maxtv and tvbuf */ struct_timeval_to_ares_timeval(&amaxtv, maxtv); if (atvbuf.sec > amaxtv.sec) { return maxtv; } if (atvbuf.sec < amaxtv.sec) { return tvbuf; } if (atvbuf.usec > amaxtv.usec) { return maxtv; } return tvbuf; } struct timeval *ares_timeout(const ares_channel_t *channel, struct timeval *maxtv, struct timeval *tvbuf) { struct timeval *rv; if (channel == NULL || tvbuf == NULL) { return NULL; } ares__channel_lock(channel); rv = ares_timeout_int(channel, maxtv, tvbuf); ares__channel_unlock(channel); return rv; } gevent-24.11.1/deps/c-ares/src/lib/ares_update_servers.c000066400000000000000000001043601471441230600230310ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2008 Daniel Stenberg * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or 
sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_SOCKET_H # include #endif #ifdef HAVE_NET_IF_H # include #endif #if defined(USE_WINSOCK) # if defined(HAVE_IPHLPAPI_H) # include # endif # if defined(HAVE_NETIOAPI_H) # include # endif #endif #include "ares_data.h" #include "ares_inet_net_pton.h" typedef struct { struct ares_addr addr; unsigned short tcp_port; unsigned short udp_port; char ll_iface[IF_NAMESIZE]; unsigned int ll_scope; } ares_sconfig_t; static ares_bool_t ares__addr_match(const struct ares_addr *addr1, const struct ares_addr *addr2) { if (addr1 == NULL && addr2 == NULL) { return ARES_TRUE; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (addr1 == NULL || addr2 == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (addr1->family != addr2->family) { return ARES_FALSE; } if (addr1->family == AF_INET && memcmp(&addr1->addr.addr4, &addr2->addr.addr4, sizeof(addr1->addr.addr4)) == 0) { return ARES_TRUE; } if (addr1->family == AF_INET6 && memcmp(&addr1->addr.addr6._S6_un._S6_u8, &addr2->addr.addr6._S6_un._S6_u8, sizeof(addr1->addr.addr6._S6_un._S6_u8)) == 0) { return ARES_TRUE; } return ARES_FALSE; } ares_bool_t ares__subnet_match(const struct ares_addr *addr, const struct ares_addr *subnet, unsigned char netmask) { const unsigned char *addr_ptr; const unsigned char *subnet_ptr; size_t len; size_t i; if (addr == NULL || subnet == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (addr->family != subnet->family) { return ARES_FALSE; } if (addr->family == AF_INET) { addr_ptr = (const unsigned char *)&addr->addr.addr4; subnet_ptr = (const unsigned char *)&subnet->addr.addr4; len = 4; if (netmask > 32) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } } else if (addr->family == AF_INET6) { addr_ptr = (const unsigned char *)&addr->addr.addr6; subnet_ptr = (const unsigned char *)&subnet->addr.addr6; len = 16; if (netmask > 128) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } } else { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } for (i = 0; i < len && netmask > 0; i++) { unsigned char mask = 0xff; if (netmask < 8) { mask <<= (8 - netmask); netmask = 0; } else { netmask -= 8; } if ((addr_ptr[i] & mask) != (subnet_ptr[i] & mask)) { return ARES_FALSE; } } return ARES_TRUE; } ares_bool_t ares__addr_is_linklocal(const struct ares_addr *addr) { struct ares_addr subnet; const unsigned char subnetaddr[16] = { 0xfe, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; /* fe80::/10 */ subnet.family = AF_INET6; memcpy(&subnet.addr.addr6, subnetaddr, 16); return ares__subnet_match(addr, &subnet, 10); } static 
ares_bool_t ares_server_blacklisted(const struct ares_addr *addr) { /* A list of blacklisted IPv6 subnets. */ const struct { const unsigned char netbase[16]; unsigned char netmask; } blacklist_v6[] = { /* fec0::/10 was deprecated by [RFC3879] in September 2004. Formerly a * Site-Local scoped address prefix. These are never valid DNS servers, * but are known to be returned at least sometimes on Windows and Android. */ { { 0xfe, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, 10 } }; size_t i; if (addr->family != AF_INET6) { return ARES_FALSE; } /* See if ipaddr matches any of the entries in the blacklist. */ for (i = 0; i < sizeof(blacklist_v6) / sizeof(*blacklist_v6); i++) { struct ares_addr subnet; subnet.family = AF_INET6; memcpy(&subnet.addr.addr6, blacklist_v6[i].netbase, 16); if (ares__subnet_match(addr, &subnet, blacklist_v6[i].netmask)) { return ARES_TRUE; } } return ARES_FALSE; } /* Parse address and port in these formats, either ipv4 or ipv6 addresses * are allowed: * ipaddr * ipv4addr:port * [ipaddr] * [ipaddr]:port * * Modifiers: %iface * * TODO: #domain modifier * * If a port is not specified, will set port to 0. * * Will fail if an IPv6 nameserver as detected by * ares_ipv6_server_blacklisted() * * Returns an error code on failure, else ARES_SUCCESS */ static ares_status_t parse_nameserver(ares__buf_t *buf, ares_sconfig_t *sconfig) { ares_status_t status; char ipaddr[INET6_ADDRSTRLEN] = ""; size_t addrlen; memset(sconfig, 0, sizeof(*sconfig)); /* Consume any leading whitespace */ ares__buf_consume_whitespace(buf, ARES_TRUE); /* pop off IP address. If it is in [ ] then it can be ipv4 or ipv6. If * not, ipv4 only */ if (ares__buf_begins_with(buf, (const unsigned char *)"[", 1)) { /* Consume [ */ ares__buf_consume(buf, 1); ares__buf_tag(buf); /* Consume until ] */ if (ares__buf_consume_until_charset(buf, (const unsigned char *)"]", 1, ARES_TRUE) == 0) { return ARES_EBADSTR; } status = ares__buf_tag_fetch_string(buf, ipaddr, sizeof(ipaddr)); if (status != ARES_SUCCESS) { return status; } /* Skip over ] */ ares__buf_consume(buf, 1); } else { size_t offset; /* Not in [ ], see if '.' 
is in first 4 characters, if it is, then its ipv4, * otherwise treat as ipv6 */ ares__buf_tag(buf); offset = ares__buf_consume_until_charset(buf, (const unsigned char *)".", 1, ARES_TRUE); ares__buf_tag_rollback(buf); ares__buf_tag(buf); if (offset > 0 && offset < 4) { /* IPv4 */ if (ares__buf_consume_charset(buf, (const unsigned char *)"0123456789.", 11) == 0) { return ARES_EBADSTR; } } else { /* IPv6 */ const unsigned char ipv6_charset[] = "ABCDEFabcdef0123456789.:"; if (ares__buf_consume_charset(buf, ipv6_charset, sizeof(ipv6_charset) - 1) == 0) { return ARES_EBADSTR; } } status = ares__buf_tag_fetch_string(buf, ipaddr, sizeof(ipaddr)); if (status != ARES_SUCCESS) { return status; } } /* Convert ip address from string to network byte order */ sconfig->addr.family = AF_UNSPEC; if (ares_dns_pton(ipaddr, &sconfig->addr, &addrlen) == NULL) { return ARES_EBADSTR; } /* Pull off port */ if (ares__buf_begins_with(buf, (const unsigned char *)":", 1)) { char portstr[6]; /* Consume : */ ares__buf_consume(buf, 1); ares__buf_tag(buf); /* Read numbers */ if (ares__buf_consume_charset(buf, (const unsigned char *)"0123456789", 10) == 0) { return ARES_EBADSTR; } status = ares__buf_tag_fetch_string(buf, portstr, sizeof(portstr)); if (status != ARES_SUCCESS) { return status; } sconfig->udp_port = (unsigned short)atoi(portstr); sconfig->tcp_port = sconfig->udp_port; } /* Pull off interface modifier */ if (ares__buf_begins_with(buf, (const unsigned char *)"%", 1)) { const unsigned char iface_charset[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz" "0123456789.-_\\:{}"; /* Consume % */ ares__buf_consume(buf, 1); ares__buf_tag(buf); if (ares__buf_consume_charset(buf, iface_charset, sizeof(iface_charset) - 1) == 0) { return ARES_EBADSTR; } status = ares__buf_tag_fetch_string(buf, sconfig->ll_iface, sizeof(sconfig->ll_iface)); if (status != ARES_SUCCESS) { return status; } } /* Consume any trailing whitespace so we can bail out if there is something * after we didn't read */ ares__buf_consume_whitespace(buf, ARES_TRUE); if (ares__buf_len(buf) != 0) { return ARES_EBADSTR; } return ARES_SUCCESS; } static ares_status_t ares__sconfig_linklocal(ares_sconfig_t *s, const char *ll_iface) { unsigned int ll_scope = 0; if (ares_str_isnum(ll_iface)) { char ifname[IF_NAMESIZE] = ""; ll_scope = (unsigned int)atoi(ll_iface); if (ares__if_indextoname(ll_scope, ifname, sizeof(ifname)) == NULL) { DEBUGF(fprintf(stderr, "Interface %s for ipv6 Link Local not found\n", ll_iface)); return ARES_ENOTFOUND; } ares_strcpy(s->ll_iface, ifname, sizeof(s->ll_iface)); s->ll_scope = ll_scope; return ARES_SUCCESS; } ll_scope = ares__if_nametoindex(ll_iface); if (ll_scope == 0) { DEBUGF(fprintf(stderr, "Interface %s for ipv6 Link Local not found\n", ll_iface)); return ARES_ENOTFOUND; } ares_strcpy(s->ll_iface, ll_iface, sizeof(s->ll_iface)); s->ll_scope = ll_scope; return ARES_SUCCESS; } ares_status_t ares__sconfig_append(ares__llist_t **sconfig, const struct ares_addr *addr, unsigned short udp_port, unsigned short tcp_port, const char *ll_iface) { ares_sconfig_t *s; ares_status_t status; if (sconfig == NULL || addr == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Silently skip blacklisted IPv6 servers. 
*/ if (ares_server_blacklisted(addr)) { return ARES_SUCCESS; } s = ares_malloc_zero(sizeof(*s)); if (s == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } if (*sconfig == NULL) { *sconfig = ares__llist_create(ares_free); if (*sconfig == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } memcpy(&s->addr, addr, sizeof(s->addr)); s->udp_port = udp_port; s->tcp_port = tcp_port; /* Handle link-local enumeration. If an interface is specified on a * non-link-local address, we'll simply end up ignoring that */ if (ares__addr_is_linklocal(&s->addr)) { if (ares_strlen(ll_iface) == 0) { /* Silently ignore this entry, we require an interface */ status = ARES_SUCCESS; goto fail; } status = ares__sconfig_linklocal(s, ll_iface); /* Silently ignore this entry, we can't validate the interface */ if (status != ARES_SUCCESS) { status = ARES_SUCCESS; goto fail; } } if (ares__llist_insert_last(*sconfig, s) == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; fail: ares_free(s); return status; } /* Add the IPv4 or IPv6 nameservers in str (separated by commas or spaces) to * the servers list, updating servers and nservers as required. * * If a nameserver is encapsulated in [ ] it may optionally include a port * suffix, e.g.: * [127.0.0.1]:59591 * * The extended format is required to support OpenBSD's resolv.conf format: * https://man.openbsd.org/OpenBSD-5.1/resolv.conf.5 * As well as MacOS libresolv that may include a non-default port number. * * This will silently ignore blacklisted IPv6 nameservers as detected by * ares_ipv6_server_blacklisted(). * * Returns an error code on failure, else ARES_SUCCESS. */ ares_status_t ares__sconfig_append_fromstr(ares__llist_t **sconfig, const char *str, ares_bool_t ignore_invalid) { ares_status_t status = ARES_SUCCESS; ares__buf_t *buf = NULL; ares__llist_t *list = NULL; ares__llist_node_t *node; /* On Windows, there may be more than one nameserver specified in the same * registry key, so we parse input as a space or comma separated list. */ buf = ares__buf_create_const((const unsigned char *)str, ares_strlen(str)); if (buf == NULL) { status = ARES_ENOMEM; goto done; } status = ares__buf_split(buf, (const unsigned char *)" ,", 2, ARES_BUF_SPLIT_NONE, 0, &list); if (status != ARES_SUCCESS) { goto done; } for (node = ares__llist_node_first(list); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *entry = ares__llist_node_val(node); ares_sconfig_t s; status = parse_nameserver(entry, &s); if (status != ARES_SUCCESS) { if (ignore_invalid) { continue; } else { goto done; } } status = ares__sconfig_append(sconfig, &s.addr, s.udp_port, s.tcp_port, s.ll_iface); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: ares__llist_destroy(list); ares__buf_destroy(buf); return status; } static unsigned short ares__sconfig_get_port(const ares_channel_t *channel, const ares_sconfig_t *s, ares_bool_t is_tcp) { unsigned short port = is_tcp ? s->tcp_port : s->udp_port; if (port == 0) { port = is_tcp ? 
channel->tcp_port : channel->udp_port; } if (port == 0) { port = 53; } return port; } static ares__slist_node_t *ares__server_find(ares_channel_t *channel, const ares_sconfig_t *s) { ares__slist_node_t *node; for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { const ares_server_t *server = ares__slist_node_val(node); if (!ares__addr_match(&server->addr, &s->addr)) { continue; } if (server->tcp_port != ares__sconfig_get_port(channel, s, ARES_TRUE)) { continue; } if (server->udp_port != ares__sconfig_get_port(channel, s, ARES_FALSE)) { continue; } return node; } return NULL; } static ares_bool_t ares__server_isdup(const ares_channel_t *channel, ares__llist_node_t *s) { /* Scan backwards to see if this is a duplicate */ ares__llist_node_t *prev; const ares_sconfig_t *server = ares__llist_node_val(s); for (prev = ares__llist_node_prev(s); prev != NULL; prev = ares__llist_node_prev(prev)) { const ares_sconfig_t *p = ares__llist_node_val(prev); if (!ares__addr_match(&server->addr, &p->addr)) { continue; } if (ares__sconfig_get_port(channel, server, ARES_TRUE) != ares__sconfig_get_port(channel, p, ARES_TRUE)) { continue; } if (ares__sconfig_get_port(channel, server, ARES_FALSE) != ares__sconfig_get_port(channel, p, ARES_FALSE)) { continue; } return ARES_TRUE; } return ARES_FALSE; } static ares_status_t ares__server_create(ares_channel_t *channel, const ares_sconfig_t *sconfig, size_t idx) { ares_status_t status; ares_server_t *server = ares_malloc_zero(sizeof(*server)); if (server == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } server->idx = idx; server->channel = channel; server->udp_port = ares__sconfig_get_port(channel, sconfig, ARES_FALSE); server->tcp_port = ares__sconfig_get_port(channel, sconfig, ARES_TRUE); server->addr.family = sconfig->addr.family; server->next_retry_time.sec = 0; server->next_retry_time.usec = 0; if (sconfig->addr.family == AF_INET) { memcpy(&server->addr.addr.addr4, &sconfig->addr.addr.addr4, sizeof(server->addr.addr.addr4)); } else if (sconfig->addr.family == AF_INET6) { memcpy(&server->addr.addr.addr6, &sconfig->addr.addr.addr6, sizeof(server->addr.addr.addr6)); } /* Copy over link-local settings */ if (ares_strlen(sconfig->ll_iface)) { ares_strcpy(server->ll_iface, sconfig->ll_iface, sizeof(server->ll_iface)); server->ll_scope = sconfig->ll_scope; } server->tcp_parser = ares__buf_create(); if (server->tcp_parser == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } server->tcp_send = ares__buf_create(); if (server->tcp_send == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } server->connections = ares__llist_create(NULL); if (server->connections == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } if (ares__slist_insert(channel->servers, server) == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ARES_SUCCESS; done: if (status != ARES_SUCCESS) { ares__destroy_server(server); /* LCOV_EXCL_LINE: OutOfMemory */ } return status; } static ares_bool_t ares__server_in_newconfig(const ares_server_t *server, ares__llist_t *srvlist) { ares__llist_node_t *node; const ares_channel_t *channel = server->channel; for (node = ares__llist_node_first(srvlist); node != NULL; node = ares__llist_node_next(node)) { const ares_sconfig_t *s = 
ares__llist_node_val(node); if (!ares__addr_match(&server->addr, &s->addr)) { continue; } if (server->tcp_port != ares__sconfig_get_port(channel, s, ARES_TRUE)) { continue; } if (server->udp_port != ares__sconfig_get_port(channel, s, ARES_FALSE)) { continue; } return ARES_TRUE; } return ARES_FALSE; } static ares_bool_t ares__servers_remove_stale(ares_channel_t *channel, ares__llist_t *srvlist) { ares_bool_t stale_removed = ARES_FALSE; ares__slist_node_t *snode = ares__slist_node_first(channel->servers); while (snode != NULL) { ares__slist_node_t *snext = ares__slist_node_next(snode); const ares_server_t *server = ares__slist_node_val(snode); if (!ares__server_in_newconfig(server, srvlist)) { /* This will clean up all server state via the destruction callback and * move any queries to new servers */ ares__slist_node_destroy(snode); stale_removed = ARES_TRUE; } snode = snext; } return stale_removed; } static void ares__servers_trim_single(ares_channel_t *channel) { while (ares__slist_len(channel->servers) > 1) { ares__slist_node_destroy(ares__slist_node_last(channel->servers)); } } ares_status_t ares__servers_update(ares_channel_t *channel, ares__llist_t *server_list, ares_bool_t user_specified) { ares__llist_node_t *node; size_t idx = 0; ares_status_t status; ares_bool_t list_changed = ARES_FALSE; if (channel == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* NOTE: a NULL or zero entry server list is considered valid due to * real-world people needing support for this for their test harnesses */ /* Add new entries */ for (node = ares__llist_node_first(server_list); node != NULL; node = ares__llist_node_next(node)) { const ares_sconfig_t *sconfig = ares__llist_node_val(node); ares__slist_node_t *snode; /* If a server has already appeared in the list of new servers, skip it. */ if (ares__server_isdup(channel, node)) { continue; } snode = ares__server_find(channel, sconfig); if (snode != NULL) { ares_server_t *server = ares__slist_node_val(snode); /* Copy over link-local settings. Its possible some of this data has * changed, maybe ... */ if (ares_strlen(sconfig->ll_iface)) { ares_strcpy(server->ll_iface, sconfig->ll_iface, sizeof(server->ll_iface)); server->ll_scope = sconfig->ll_scope; } if (server->idx != idx) { server->idx = idx; /* Index changed, reinsert node, doesn't require any memory * allocations so can't fail. */ ares__slist_node_reinsert(snode); } } else { status = ares__server_create(channel, sconfig, idx); if (status != ARES_SUCCESS) { goto done; } list_changed = ARES_TRUE; } idx++; } /* Remove any servers that don't exist in the current configuration */ if (ares__servers_remove_stale(channel, server_list)) { list_changed = ARES_TRUE; } /* Trim to one server if ARES_FLAG_PRIMARY is set. 
*/ if (channel->flags & ARES_FLAG_PRIMARY) { ares__servers_trim_single(channel); } if (user_specified) { /* Save servers as if they were passed in as an option */ channel->optmask |= ARES_OPT_SERVERS; } /* Clear any cached query results only if the server list changed */ if (list_changed) { ares__qcache_flush(channel->qcache); } status = ARES_SUCCESS; done: return status; } static ares_status_t ares_addr_node_to_server_config_llist(const struct ares_addr_node *servers, ares__llist_t **llist) { const struct ares_addr_node *node; ares__llist_t *s; *llist = NULL; s = ares__llist_create(ares_free); if (s == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } for (node = servers; node != NULL; node = node->next) { ares_sconfig_t *sconfig; /* Invalid entry */ if (node->family != AF_INET && node->family != AF_INET6) { continue; } sconfig = ares_malloc_zero(sizeof(*sconfig)); if (sconfig == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } sconfig->addr.family = node->family; if (node->family == AF_INET) { memcpy(&sconfig->addr.addr.addr4, &node->addr.addr4, sizeof(sconfig->addr.addr.addr4)); } else if (sconfig->addr.family == AF_INET6) { memcpy(&sconfig->addr.addr.addr6, &node->addr.addr6, sizeof(sconfig->addr.addr.addr6)); } if (ares__llist_insert_last(s, sconfig) == NULL) { ares_free(sconfig); /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } *llist = s; return ARES_SUCCESS; /* LCOV_EXCL_START: OutOfMemory */ fail: ares__llist_destroy(s); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } static ares_status_t ares_addr_port_node_to_server_config_llist( const struct ares_addr_port_node *servers, ares__llist_t **llist) { const struct ares_addr_port_node *node; ares__llist_t *s; *llist = NULL; s = ares__llist_create(ares_free); if (s == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } for (node = servers; node != NULL; node = node->next) { ares_sconfig_t *sconfig; /* Invalid entry */ if (node->family != AF_INET && node->family != AF_INET6) { continue; } sconfig = ares_malloc_zero(sizeof(*sconfig)); if (sconfig == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } sconfig->addr.family = node->family; if (node->family == AF_INET) { memcpy(&sconfig->addr.addr.addr4, &node->addr.addr4, sizeof(sconfig->addr.addr.addr4)); } else if (sconfig->addr.family == AF_INET6) { memcpy(&sconfig->addr.addr.addr6, &node->addr.addr6, sizeof(sconfig->addr.addr.addr6)); } sconfig->tcp_port = (unsigned short)node->tcp_port; sconfig->udp_port = (unsigned short)node->udp_port; if (ares__llist_insert_last(s, sconfig) == NULL) { ares_free(sconfig); /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } *llist = s; return ARES_SUCCESS; /* LCOV_EXCL_START: OutOfMemory */ fail: ares__llist_destroy(s); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } ares_status_t ares_in_addr_to_server_config_llist(const struct in_addr *servers, size_t nservers, ares__llist_t **llist) { size_t i; ares__llist_t *s; *llist = NULL; s = ares__llist_create(ares_free); if (s == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; servers != NULL && i < nservers; i++) { ares_sconfig_t *sconfig; sconfig = ares_malloc_zero(sizeof(*sconfig)); if (sconfig == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } sconfig->addr.family = AF_INET; memcpy(&sconfig->addr.addr.addr4, &servers[i], sizeof(sconfig->addr.addr.addr4)); if (ares__llist_insert_last(s, sconfig) == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } *llist = s; return ARES_SUCCESS; /* LCOV_EXCL_START: 
OutOfMemory */ fail: ares__llist_destroy(s); return ARES_ENOMEM; /* LCOV_EXCL_STOP */ } /* Write out the details of a server to a buffer */ ares_status_t ares_get_server_addr(const ares_server_t *server, ares__buf_t *buf) { ares_status_t status; char addr[INET6_ADDRSTRLEN]; /* ipv4addr or [ipv6addr] */ if (server->addr.family == AF_INET6) { status = ares__buf_append_byte(buf, '['); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } ares_inet_ntop(server->addr.family, &server->addr.addr, addr, sizeof(addr)); status = ares__buf_append_str(buf, addr); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } if (server->addr.family == AF_INET6) { status = ares__buf_append_byte(buf, ']'); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* :port */ status = ares__buf_append_byte(buf, ':'); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_num_dec(buf, server->udp_port, 0); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* %iface */ if (ares_strlen(server->ll_iface)) { status = ares__buf_append_byte(buf, '%'); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_str(buf, server->ll_iface); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; } int ares_get_servers(const ares_channel_t *channel, struct ares_addr_node **servers) { struct ares_addr_node *srvr_head = NULL; struct ares_addr_node *srvr_last = NULL; struct ares_addr_node *srvr_curr; ares_status_t status = ARES_SUCCESS; ares__slist_node_t *node; if (channel == NULL) { return ARES_ENODATA; } ares__channel_lock(channel); for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { const ares_server_t *server = ares__slist_node_val(node); /* Allocate storage for this server node appending it to the list */ srvr_curr = ares_malloc_data(ARES_DATATYPE_ADDR_NODE); if (!srvr_curr) { status = ARES_ENOMEM; break; } if (srvr_last) { srvr_last->next = srvr_curr; } else { srvr_head = srvr_curr; } srvr_last = srvr_curr; /* Fill this server node data */ srvr_curr->family = server->addr.family; if (srvr_curr->family == AF_INET) { memcpy(&srvr_curr->addr.addr4, &server->addr.addr.addr4, sizeof(srvr_curr->addr.addr4)); } else { memcpy(&srvr_curr->addr.addr6, &server->addr.addr.addr6, sizeof(srvr_curr->addr.addr6)); } } if (status != ARES_SUCCESS) { ares_free_data(srvr_head); srvr_head = NULL; } *servers = srvr_head; ares__channel_unlock(channel); return (int)status; } int ares_get_servers_ports(const ares_channel_t *channel, struct ares_addr_port_node **servers) { struct ares_addr_port_node *srvr_head = NULL; struct ares_addr_port_node *srvr_last = NULL; struct ares_addr_port_node *srvr_curr; ares_status_t status = ARES_SUCCESS; ares__slist_node_t *node; if (channel == NULL) { return ARES_ENODATA; } ares__channel_lock(channel); for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { const ares_server_t *server = ares__slist_node_val(node); /* Allocate storage for this server node appending it to the list */ srvr_curr = ares_malloc_data(ARES_DATATYPE_ADDR_PORT_NODE); if (!srvr_curr) { status = ARES_ENOMEM; break; } if (srvr_last) { srvr_last->next = srvr_curr; } else { srvr_head = srvr_curr; } srvr_last = srvr_curr; /* Fill this server node data */ srvr_curr->family = 
server->addr.family; srvr_curr->udp_port = server->udp_port; srvr_curr->tcp_port = server->tcp_port; if (srvr_curr->family == AF_INET) { memcpy(&srvr_curr->addr.addr4, &server->addr.addr.addr4, sizeof(srvr_curr->addr.addr4)); } else { memcpy(&srvr_curr->addr.addr6, &server->addr.addr.addr6, sizeof(srvr_curr->addr.addr6)); } } if (status != ARES_SUCCESS) { ares_free_data(srvr_head); srvr_head = NULL; } *servers = srvr_head; ares__channel_unlock(channel); return (int)status; } int ares_set_servers(ares_channel_t *channel, const struct ares_addr_node *servers) { ares__llist_t *slist; ares_status_t status; if (channel == NULL) { return ARES_ENODATA; } status = ares_addr_node_to_server_config_llist(servers, &slist); if (status != ARES_SUCCESS) { return (int)status; } ares__channel_lock(channel); status = ares__servers_update(channel, slist, ARES_TRUE); ares__channel_unlock(channel); ares__llist_destroy(slist); return (int)status; } int ares_set_servers_ports(ares_channel_t *channel, const struct ares_addr_port_node *servers) { ares__llist_t *slist; ares_status_t status; if (channel == NULL) { return ARES_ENODATA; } status = ares_addr_port_node_to_server_config_llist(servers, &slist); if (status != ARES_SUCCESS) { return (int)status; } ares__channel_lock(channel); status = ares__servers_update(channel, slist, ARES_TRUE); ares__channel_unlock(channel); ares__llist_destroy(slist); return (int)status; } /* Incoming string format: host[:port][,host[:port]]... */ /* IPv6 addresses with ports require square brackets [fe80::1]:53 */ static ares_status_t set_servers_csv(ares_channel_t *channel, const char *_csv) { ares_status_t status; ares__llist_t *slist = NULL; if (channel == NULL) { return ARES_ENODATA; } if (ares_strlen(_csv) == 0) { /* blank all servers */ ares__channel_lock(channel); status = ares__servers_update(channel, NULL, ARES_TRUE); ares__channel_unlock(channel); return status; } status = ares__sconfig_append_fromstr(&slist, _csv, ARES_FALSE); if (status != ARES_SUCCESS) { ares__llist_destroy(slist); return status; } ares__channel_lock(channel); status = ares__servers_update(channel, slist, ARES_TRUE); ares__channel_unlock(channel); ares__llist_destroy(slist); return status; } /* We'll go ahead and honor ports anyhow */ int ares_set_servers_csv(ares_channel_t *channel, const char *_csv) { return (int)set_servers_csv(channel, _csv); } int ares_set_servers_ports_csv(ares_channel_t *channel, const char *_csv) { return (int)set_servers_csv(channel, _csv); } char *ares_get_servers_csv(const ares_channel_t *channel) { ares__buf_t *buf = NULL; char *out = NULL; ares__slist_node_t *node; ares__channel_lock(channel); buf = ares__buf_create(); if (buf == NULL) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } for (node = ares__slist_node_first(channel->servers); node != NULL; node = ares__slist_node_next(node)) { ares_status_t status; const ares_server_t *server = ares__slist_node_val(node); if (ares__buf_len(buf)) { status = ares__buf_append_byte(buf, ','); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ares_get_server_addr(server, buf); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } out = ares__buf_finish_str(buf, NULL); buf = NULL; done: ares__channel_unlock(channel); ares__buf_destroy(buf); return out; } void ares_set_server_state_callback(ares_channel_t *channel, ares_server_state_callback cb, void *data) { if (channel == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } channel->server_state_cb = cb; 
channel->server_state_cb_data = data; } gevent-24.11.1/deps/c-ares/src/lib/ares_version.c000066400000000000000000000026271471441230600214660ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" const char *ares_version(int *version) { if (version) { *version = ARES_VERSION; } return ARES_VERSION_STR; } gevent-24.11.1/deps/c-ares/src/lib/cares.rc000066400000000000000000000050141471441230600202370ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2009 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include #include "../../include/ares_version.h" LANGUAGE 0x09,0x01 #define RC_VERSION ARES_VERSION_MAJOR, ARES_VERSION_MINOR, ARES_VERSION_PATCH, 0 VS_VERSION_INFO VERSIONINFO FILEVERSION RC_VERSION PRODUCTVERSION RC_VERSION FILEFLAGSMASK 0x3fL #if defined(DEBUGBUILD) || defined(_DEBUG) FILEFLAGS 1 #else FILEFLAGS 0 #endif FILEOS VOS__WINDOWS32 FILETYPE VFT_DLL FILESUBTYPE 0x0L BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904b0" BEGIN VALUE "CompanyName", "The c-ares library, https://c-ares.org/\0" #if defined(DEBUGBUILD) || defined(_DEBUG) VALUE "FileDescription", "c-ares Debug Shared Library\0" VALUE "FileVersion", ARES_VERSION_STR "\0" VALUE "InternalName", "c-ares\0" VALUE "OriginalFilename", "caresd.dll\0" #else VALUE "FileDescription", "c-ares Shared Library\0" VALUE "FileVersion", ARES_VERSION_STR "\0" VALUE "InternalName", "c-ares\0" VALUE "OriginalFilename", "cares.dll\0" #endif VALUE "ProductName", "The c-ares library\0" VALUE "ProductVersion", ARES_VERSION_STR "\0" VALUE "LegalCopyright", " " ARES_COPYRIGHT "\0" VALUE "License", "https://c-ares.org/license.html\0" END END BLOCK "VarFileInfo" BEGIN VALUE "Translation", 0x409, 1200 END END gevent-24.11.1/deps/c-ares/src/lib/config-dos.h000066400000000000000000000071211471441230600210160ustar00rootroot00000000000000#ifndef HEADER_CONFIG_DOS_H #define HEADER_CONFIG_DOS_H /* ================================================================ * ares/config-dos.h - Hand crafted config file for DOS * * Copyright (C) The c-ares project and its contributors * SPDX-License-Identifier: MIT * ================================================================ */ #define PACKAGE "c-ares" #define HAVE_ERRNO_H 1 #define HAVE_GETENV 1 #define HAVE_GETTIMEOFDAY 1 #define HAVE_IOCTLSOCKET 1 #define HAVE_IOCTLSOCKET_FIONBIO 1 #define HAVE_LIMITS_H 1 #define HAVE_NET_IF_H 1 #define HAVE_RECV 1 #define HAVE_RECVFROM 1 #define HAVE_SEND 1 #define HAVE_STRDUP 1 #define HAVE_STRICMP 1 #define HAVE_STRUCT_IN6_ADDR 1 #define HAVE_STRUCT_TIMEVAL 1 #define HAVE_SYS_IOCTL_H 1 #define HAVE_SYS_SOCKET_H 1 #define HAVE_SYS_STAT_H 1 #define HAVE_SYS_TYPES_H 1 #define HAVE_TIME_H 1 #define HAVE_UNISTD_H 1 #define HAVE_WRITEV 1 #define HAVE_STAT 1 #define HAVE_MALLOC_H 1 /* Qualifiers for send(), recv(), recvfrom() and getnameinfo(). */ #define SEND_TYPE_ARG1 int #define SEND_TYPE_ARG2 const void * #define SEND_TYPE_ARG3 int #define SEND_TYPE_ARG4 int #define SEND_TYPE_RETV int #define RECV_TYPE_ARG1 int #define RECV_TYPE_ARG2 void * #define RECV_TYPE_ARG3 int #define RECV_TYPE_ARG4 int #define RECV_TYPE_RETV int #define RECVFROM_TYPE_ARG1 int #define RECVFROM_TYPE_ARG2 void #define RECVFROM_TYPE_ARG3 int #define RECVFROM_TYPE_ARG4 int #define RECVFROM_TYPE_ARG5 struct sockaddr #define RECVFROM_TYPE_ARG6 int #define RECVFROM_TYPE_RETV int #define RECVFROM_TYPE_ARG2_IS_VOID 1 #define BSD /* Target HAVE_x section */ #if defined(DJGPP) # undef _SSIZE_T # include /* For 'ssize_t' */ # define HAVE_STRCASECMP 1 # define HAVE_STRNCASECMP 1 # define HAVE_SYS_TIME_H 1 # define HAVE_VARIADIC_MACROS_GCC 1 /* Because djgpp <= 2.03 doesn't have snprintf() etc. */ # if defined(DJGPP_MINOR) && DJGPP_MINOR < 4 # define _MPRINTF_REPLACE # endif #elif defined(__WATCOMC__) # define HAVE_STRCASECMP 1 #elif defined(__HIGHC__) # define HAVE_SYS_TIME_H 1 # define strerror(e) strerror_s_((e)) #endif /* This seems odd, can DOS build without WATT32? 
*/ #ifdef WATT32 # define HAVE_AF_INET6 1 # define HAVE_ARPA_INET_H 1 # define HAVE_ARPA_NAMESER_H 1 # define HAVE_CLOSE_S 1 # define HAVE_GETHOSTNAME 1 # define HAVE_NETDB_H 1 # define HAVE_NETINET_IN_H 1 # define HAVE_NETINET_TCP_H 1 # define HAVE_PF_INET6 1 # define HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID 1 # define HAVE_STRUCT_ADDRINFO 1 # define HAVE_STRUCT_IN6_ADDR 1 # define HAVE_STRUCT_SOCKADDR_IN6 1 # define HAVE_SYS_SOCKET_H 1 # define HAVE_SYS_IOCTL_H 1 # define HAVE_SYS_UIO_H 1 # define NS_INADDRSZ 4 # define HAVE_GETSERVBYPORT_R 1 # define GETSERVBYPORT_R_ARGS 6 # define HAVE_WRITEV 1 # define HAVE_IF_NAMETOINDEX 1 # define HAVE_IF_INDEXTONAME 1 #endif #endif /* HEADER_CONFIG_DOS_H */ gevent-24.11.1/deps/c-ares/src/lib/config-win32.h000066400000000000000000000276351471441230600212070ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2004 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_CONFIG_WIN32_H #define HEADER_CARES_CONFIG_WIN32_H /* ================================================================ */ /* c-ares/config-win32.h - Hand crafted config file for Windows */ /* ================================================================ */ /* ---------------------------------------------------------------- */ /* HEADER FILES */ /* ---------------------------------------------------------------- */ /* Define if you have the header file. */ #define HAVE_ASSERT_H 1 /* Define if you have the header file. */ #define HAVE_ERRNO_H 1 /* Define if you have the header file. */ #if defined(__MINGW32__) || defined(__POCC__) # define HAVE_GETOPT_H 1 #endif /* Define if you have the header file. */ #define HAVE_LIMITS_H 1 /* Define if you have the header file. */ #ifndef __SALFORDC__ # define HAVE_PROCESS_H 1 #endif /* Define if you have the header file. */ #define HAVE_SIGNAL_H 1 /* Define if you have the header file */ /* #define HAVE_SYS_TIME_H 1 */ /* Define if you have the header file. */ #define HAVE_TIME_H 1 /* Define if you have the header file. */ #if defined(__MINGW32__) || defined(__WATCOMC__) || defined(__LCC__) || \ defined(__POCC__) # define HAVE_UNISTD_H 1 #endif /* Define if you have the header file. */ #define HAVE_WINDOWS_H 1 /* Define if you have the header file. */ #define HAVE_WINSOCK_H 1 /* Define if you have the header file. */ #ifndef __SALFORDC__ # define HAVE_WINSOCK2_H 1 #endif /* Define if you have the header file. 
*/ #ifndef __SALFORDC__ # define HAVE_WS2TCPIP_H 1 #endif /* Define if you have header file */ #define HAVE_IPHLPAPI_H 1 /* Define if you have header file */ #if !defined(__WATCOMC__) && !defined(WATT32) # define HAVE_NETIOAPI_H 1 #endif #define HAVE_SYS_TYPES_H 1 #define HAVE_SYS_STAT_H 1 /* If we are building with OpenWatcom, we need to specify that we have * . */ #if defined(__WATCOMC__) # define HAVE_STDINT_H #endif /* ---------------------------------------------------------------- */ /* OTHER HEADER INFO */ /* ---------------------------------------------------------------- */ /* Define if you have the ANSI C header files. */ #define STDC_HEADERS 1 /* ---------------------------------------------------------------- */ /* FUNCTIONS */ /* ---------------------------------------------------------------- */ /* Define if you have the closesocket function. */ #define HAVE_CLOSESOCKET 1 /* Define if you have the getenv function. */ #define HAVE_GETENV 1 /* Define if you have the gethostname function. */ #define HAVE_GETHOSTNAME 1 /* Define if you have the ioctlsocket function. */ #define HAVE_IOCTLSOCKET 1 /* Define if you have a working ioctlsocket FIONBIO function. */ #define HAVE_IOCTLSOCKET_FIONBIO 1 /* Define if you have the strcasecmp function. */ /* #define HAVE_STRCASECMP 1 */ /* Define if you have the strdup function. */ #define HAVE_STRDUP 1 /* Define if you have the stricmp function. */ #define HAVE_STRICMP 1 /* Define if you have the strncasecmp function. */ /* #define HAVE_STRNCASECMP 1 */ /* Define if you have the strnicmp function. */ #define HAVE_STRNICMP 1 /* Define if you have the recv function. */ #define HAVE_RECV 1 /* Define to the type of arg 1 for recv. */ #define RECV_TYPE_ARG1 SOCKET /* Define to the type of arg 2 for recv. */ #define RECV_TYPE_ARG2 char * /* Define to the type of arg 3 for recv. */ #define RECV_TYPE_ARG3 int /* Define to the type of arg 4 for recv. */ #define RECV_TYPE_ARG4 int /* Define to the function return type for recv. */ #define RECV_TYPE_RETV int /* Define if you have the recvfrom function. */ #define HAVE_RECVFROM 1 /* Define to the type of arg 1 for recvfrom. */ #define RECVFROM_TYPE_ARG1 SOCKET /* Define to the type pointed by arg 2 for recvfrom. */ #define RECVFROM_TYPE_ARG2 char /* Define to the type of arg 3 for recvfrom. */ #define RECVFROM_TYPE_ARG3 int /* Define to the type of arg 4 for recvfrom. */ #define RECVFROM_TYPE_ARG4 int /* Define to the type pointed by arg 5 for recvfrom. */ #define RECVFROM_TYPE_ARG5 struct sockaddr /* Define to the type pointed by arg 6 for recvfrom. */ #define RECVFROM_TYPE_ARG6 int /* Define to the function return type for recvfrom. */ #define RECVFROM_TYPE_RETV int /* Define if you have the send function. */ #define HAVE_SEND 1 /* Define to the type of arg 1 for send. */ #define SEND_TYPE_ARG1 SOCKET /* Define to the type of arg 2 for send. */ #define SEND_TYPE_ARG2 const char * /* Define to the type of arg 3 for send. */ #define SEND_TYPE_ARG3 int /* Define to the type of arg 4 for send. */ #define SEND_TYPE_ARG4 int /* Define to the function return type for send. */ #define SEND_TYPE_RETV int /* Specifics for the Watt-32 tcp/ip stack. 
*/ #ifdef WATT32 # undef RECV_TYPE_ARG1 # define RECV_TYPE_ARG1 int # undef SEND_TYPE_ARG1 # define SEND_TYPE_ARG1 int # undef RECVFROM_TYPE_ARG1 # define RECVFROM_TYPE_ARG1 int # define NS_INADDRSZ 4 # define HAVE_ARPA_NAMESER_H 1 # define HAVE_ARPA_INET_H 1 # define HAVE_NETDB_H 1 # define HAVE_NETINET_IN_H 1 # define HAVE_SYS_SOCKET_H 1 # define HAVE_SYS_IOCTL_H 1 # define HAVE_NETINET_TCP_H 1 # define HAVE_AF_INET6 1 # define HAVE_PF_INET6 1 # define HAVE_STRUCT_IN6_ADDR 1 # define HAVE_STRUCT_SOCKADDR_IN6 1 # define HAVE_WRITEV 1 # define HAVE_IF_NAMETOINDEX 1 # define HAVE_IF_INDEXTONAME 1 # define HAVE_GETSERVBYPORT_R 1 # define GETSERVBYPORT_R_ARGS 6 # undef HAVE_WINSOCK_H # undef HAVE_WINSOCK2_H # undef HAVE_WS2TCPIP_H # undef HAVE_IPHLPAPI_H # undef HAVE_NETIOAPI_H #endif /* Threading support enabled */ #define CARES_THREADS 1 /* ---------------------------------------------------------------- */ /* TYPEDEF REPLACEMENTS */ /* ---------------------------------------------------------------- */ /* ---------------------------------------------------------------- */ /* TYPE SIZES */ /* ---------------------------------------------------------------- */ /* ---------------------------------------------------------------- */ /* STRUCT RELATED */ /* ---------------------------------------------------------------- */ /* Define if you have struct addrinfo. */ #define HAVE_STRUCT_ADDRINFO 1 /* Define if you have struct sockaddr_storage. */ #if !defined(__SALFORDC__) && !defined(__BORLANDC__) # define HAVE_STRUCT_SOCKADDR_STORAGE 1 #endif /* Define if you have struct timeval. */ #define HAVE_STRUCT_TIMEVAL 1 /* ---------------------------------------------------------------- */ /* COMPILER SPECIFIC */ /* ---------------------------------------------------------------- */ /* Define to avoid VS2005 complaining about portable C functions. */ #if defined(_MSC_VER) && (_MSC_VER >= 1400) # define _CRT_SECURE_NO_DEPRECATE 1 # define _CRT_NONSTDC_NO_DEPRECATE 1 #endif /* Set the Target to Win8 */ #if defined(_MSC_VER) && (_MSC_VER >= 1500) # define MSVC_MIN_TARGET 0x0602 #endif /* MSVC default target settings */ #if defined(_MSC_VER) && (_MSC_VER >= 1500) # ifndef _WIN32_WINNT # define _WIN32_WINNT MSVC_MIN_TARGET # endif # ifndef WINVER # define WINVER MSVC_MIN_TARGET # endif #endif /* When no build target is specified Pelles C 5.00 and later default build target is Windows Vista. */ #if defined(__POCC__) && (__POCC__ >= 500) # ifndef _WIN32_WINNT # define _WIN32_WINNT 0x0602 # endif # ifndef WINVER # define WINVER 0x0602 # endif #endif /* Availability of freeaddrinfo, getaddrinfo and getnameinfo functions is quite convoluted, compiler dependent and even build target dependent. 
*/ #if defined(HAVE_WS2TCPIP_H) # if defined(__POCC__) # define HAVE_FREEADDRINFO 1 # define HAVE_GETADDRINFO 1 # define HAVE_GETNAMEINFO 1 # elif defined(_WIN32_WINNT) && (_WIN32_WINNT >= 0x0501) # define HAVE_FREEADDRINFO 1 # define HAVE_GETADDRINFO 1 # define HAVE_GETNAMEINFO 1 # elif defined(_MSC_VER) && (_MSC_VER >= 1200) # define HAVE_FREEADDRINFO 1 # define HAVE_GETADDRINFO 1 # define HAVE_GETNAMEINFO 1 # endif #endif #if defined(__POCC__) # ifndef _MSC_VER # error Microsoft extensions /Ze compiler option is required # endif # ifndef __POCC__OLDNAMES # error Compatibility names /Go compiler option is required # endif #endif /* ---------------------------------------------------------------- */ /* IPV6 COMPATIBILITY */ /* ---------------------------------------------------------------- */ /* Define if you have address family AF_INET6. */ #ifdef HAVE_WINSOCK2_H # define HAVE_AF_INET6 1 #endif /* Define if you have protocol family PF_INET6. */ #ifdef HAVE_WINSOCK2_H # define HAVE_PF_INET6 1 #endif /* Define if you have struct in6_addr. */ #ifdef HAVE_WS2TCPIP_H # define HAVE_STRUCT_IN6_ADDR 1 #endif /* Define if you have struct sockaddr_in6. */ #ifdef HAVE_WS2TCPIP_H # define HAVE_STRUCT_SOCKADDR_IN6 1 #endif /* Define if you have sockaddr_in6 with scopeid. */ #ifdef HAVE_WS2TCPIP_H # define HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID 1 #endif /* Define to 1 if you have the `RegisterWaitForSingleObject' function. */ #define HAVE_REGISTERWAITFORSINGLEOBJECT 1 #if defined(_WIN32_WINNT) && (_WIN32_WINNT >= 0x0600) && \ !defined(__WATCOMC__) && !defined(WATT32) /* Define if you have if_nametoindex() */ # define HAVE_IF_NAMETOINDEX 1 /* Define if you have if_indextoname() */ # define HAVE_IF_INDEXTONAME 1 /* Define to 1 if you have the `ConvertInterfaceIndexToLuid' function. */ # define HAVE_CONVERTINTERFACEINDEXTOLUID 1 /* Define to 1 if you have the `ConvertInterfaceLuidToNameA' function. */ # define HAVE_CONVERTINTERFACELUIDTONAMEA 1 /* Define to 1 if you have the `NotifyIpInterfaceChange' function. */ # define HAVE_NOTIFYIPINTERFACECHANGE 1 #endif /* ---------------------------------------------------------------- */ /* Win CE */ /* ---------------------------------------------------------------- */ /* FIXME: A proper config-win32ce.h should be created to hold these */ /* * System error codes for Windows CE */ #if defined(_WIN32_WCE) && !defined(HAVE_ERRNO_H) # define ENOENT ERROR_FILE_NOT_FOUND # define ESRCH ERROR_PATH_NOT_FOUND # define ENOMEM ERROR_NOT_ENOUGH_MEMORY # define ENOSPC ERROR_INVALID_PARAMETER #endif #endif /* HEADER_CARES_CONFIG_WIN32_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/000077500000000000000000000000001471441230600173635ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__array.c000066400000000000000000000211701471441230600220170ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__array.h" #define ARES__ARRAY_MIN 4 struct ares__array { ares__array_destructor_t destruct; void *arr; size_t member_size; size_t cnt; size_t offset; size_t alloc_cnt; }; ares__array_t *ares__array_create(size_t member_size, ares__array_destructor_t destruct) { ares__array_t *arr; if (member_size == 0) { return NULL; } arr = ares_malloc_zero(sizeof(*arr)); if (arr == NULL) { return NULL; } arr->member_size = member_size; arr->destruct = destruct; return arr; } size_t ares__array_len(const ares__array_t *arr) { if (arr == NULL) { return 0; } return arr->cnt; } void *ares__array_at(ares__array_t *arr, size_t idx) { if (arr == NULL || idx >= arr->cnt) { return NULL; } return (unsigned char *)arr->arr + ((idx + arr->offset) * arr->member_size); } const void *ares__array_at_const(const ares__array_t *arr, size_t idx) { if (arr == NULL || idx >= arr->cnt) { return NULL; } return (unsigned char *)arr->arr + ((idx + arr->offset) * arr->member_size); } ares_status_t ares__array_sort(ares__array_t *arr, ares__array_cmp_t cmp) { if (arr == NULL || cmp == NULL) { return ARES_EFORMERR; } /* Nothing to sort */ if (arr->cnt < 2) { return ARES_SUCCESS; } qsort((unsigned char *)arr->arr + (arr->offset * arr->member_size), arr->cnt, arr->member_size, cmp); return ARES_SUCCESS; } void ares__array_destroy(ares__array_t *arr) { size_t i; if (arr == NULL) { return; } if (arr->destruct != NULL) { for (i = 0; i < arr->cnt; i++) { arr->destruct(ares__array_at(arr, i)); } } ares_free(arr->arr); ares_free(arr); } /* NOTE: this function operates on actual indexes, NOT indexes using the * arr->offset */ static ares_status_t ares__array_move(ares__array_t *arr, size_t dest_idx, size_t src_idx) { void *dest_ptr; const void *src_ptr; size_t nmembers; if (arr == NULL || dest_idx >= arr->alloc_cnt || src_idx >= arr->alloc_cnt) { return ARES_EFORMERR; } /* Nothing to do */ if (dest_idx == src_idx) { return ARES_SUCCESS; } dest_ptr = (unsigned char *)arr->arr + (dest_idx * arr->member_size); src_ptr = (unsigned char *)arr->arr + (src_idx * arr->member_size); /* Check to make sure shifting to the right won't overflow our allocation * boundary */ if (dest_idx > src_idx && arr->cnt + (dest_idx - src_idx) > arr->alloc_cnt) { return ARES_EFORMERR; } if (dest_idx < src_idx) { nmembers = arr->cnt - dest_idx; } else { nmembers = arr->cnt - src_idx; } memmove(dest_ptr, src_ptr, nmembers * arr->member_size); return ARES_SUCCESS; } void *ares__array_finish(ares__array_t *arr, size_t *num_members) { void *ptr; if (arr == NULL || num_members == NULL) { return NULL; } /* Make sure we move data to beginning of allocation */ if (arr->offset != 0) { if (ares__array_move(arr, 0, arr->offset) != ARES_SUCCESS) { return NULL; } arr->offset = 0; } ptr = arr->arr; *num_members = arr->cnt; ares_free(arr); return ptr; } ares_status_t ares__array_set_size(ares__array_t *arr, size_t size) { void *temp; if (arr == NULL || size == 0 || size < arr->cnt) { return ARES_EFORMERR; } /* Always operate on powers of 2 
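so growth is geometric; repeated single-element inserts then trigger only O(log n) reallocations in total, keeping appends amortized O(1)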
*/ size = ares__round_up_pow2(size); if (size < ARES__ARRAY_MIN) { size = ARES__ARRAY_MIN; } /* If our allocation size is already large enough, skip */ if (size <= arr->alloc_cnt) { return ARES_SUCCESS; } temp = ares_realloc_zero(arr->arr, arr->alloc_cnt * arr->member_size, size * arr->member_size); if (temp == NULL) { return ARES_ENOMEM; } arr->alloc_cnt = size; arr->arr = temp; return ARES_SUCCESS; } ares_status_t ares__array_insert_at(void **elem_ptr, ares__array_t *arr, size_t idx) { void *ptr; ares_status_t status; if (arr == NULL) { return ARES_EFORMERR; } /* Not >= since we are allowed to append to the end */ if (idx > arr->cnt) { return ARES_EFORMERR; } /* Allocate more if needed */ status = ares__array_set_size(arr, arr->cnt + 1); if (status != ARES_SUCCESS) { return status; } /* Shift if we have memory but not enough room at the end */ if (arr->cnt + 1 + arr->offset > arr->alloc_cnt) { status = ares__array_move(arr, 0, arr->offset); if (status != ARES_SUCCESS) { return status; } arr->offset = 0; } /* If we're inserting anywhere other than the end, we need to move some * elements out of the way */ if (idx != arr->cnt) { status = ares__array_move(arr, idx + arr->offset + 1, idx + arr->offset); if (status != ARES_SUCCESS) { return status; } } /* Ok, we're guaranteed to have a gap where we need it, lets zero it out, * and return it */ ptr = (unsigned char *)arr->arr + ((idx + arr->offset) * arr->member_size); memset(ptr, 0, arr->member_size); arr->cnt++; if (elem_ptr) { *elem_ptr = ptr; } return ARES_SUCCESS; } ares_status_t ares__array_insert_last(void **elem_ptr, ares__array_t *arr) { return ares__array_insert_at(elem_ptr, arr, ares__array_len(arr)); } ares_status_t ares__array_insert_first(void **elem_ptr, ares__array_t *arr) { return ares__array_insert_at(elem_ptr, arr, 0); } void *ares__array_first(ares__array_t *arr) { return ares__array_at(arr, 0); } void *ares__array_last(ares__array_t *arr) { size_t cnt = ares__array_len(arr); if (cnt == 0) { return NULL; } return ares__array_at(arr, cnt - 1); } const void *ares__array_first_const(const ares__array_t *arr) { return ares__array_at_const(arr, 0); } const void *ares__array_last_const(const ares__array_t *arr) { size_t cnt = ares__array_len(arr); if (cnt == 0) { return NULL; } return ares__array_at_const(arr, cnt - 1); } ares_status_t ares__array_claim_at(void *dest, size_t dest_size, ares__array_t *arr, size_t idx) { ares_status_t status; if (arr == NULL || idx >= arr->cnt) { return ARES_EFORMERR; } if (dest != NULL && dest_size < arr->member_size) { return ARES_EFORMERR; } if (dest) { memcpy(dest, ares__array_at(arr, idx), arr->member_size); } if (idx == 0) { /* Optimization, if first element, just increment offset, makes removing a * lot from the start quick */ arr->offset++; } else if (idx != arr->cnt - 1) { /* Must shift entire array if removing an element from the middle. Does * nothing if removing last element other than decrement count. 
*/ status = ares__array_move(arr, idx + arr->offset, idx + arr->offset + 1); if (status != ARES_SUCCESS) { return status; } } arr->cnt--; return ARES_SUCCESS; } ares_status_t ares__array_remove_at(ares__array_t *arr, size_t idx) { void *ptr = ares__array_at(arr, idx); if (arr == NULL || ptr == NULL) { return ARES_EFORMERR; } if (arr->destruct != NULL) { arr->destruct(ptr); } return ares__array_claim_at(NULL, 0, arr, idx); } ares_status_t ares__array_remove_first(ares__array_t *arr) { return ares__array_remove_at(arr, 0); } ares_status_t ares__array_remove_last(ares__array_t *arr) { size_t cnt = ares__array_len(arr); if (cnt == 0) { return ARES_EFORMERR; } return ares__array_remove_at(arr, cnt - 1); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__array.h000066400000000000000000000220201471441230600220170ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__ARRAY_H #define __ARES__ARRAY_H /*! \addtogroup ares__array Array Data Structure * * This is an array with helpers. It is meant to have as little overhead * as possible over direct array management by applications but to provide * safety and some optimization features. It can also return the array in * native form once all manipulation has been performed. * * @{ */ struct ares__array; /*! Opaque data structure for array */ typedef struct ares__array ares__array_t; /*! Callback to free user-defined member data * * \param[in] data pointer to member of array to be destroyed. The pointer * itself must not be destroyed, just the data it contains. */ typedef void (*ares__array_destructor_t)(void *data); /*! Callback to compare two array elements used for sorting * * \param[in] data1 array member 1 * \param[in] data2 array member 2 * \return < 0 if data1 < data2, > 0 if data1 > data2, 0 if data1 == data2 */ typedef int (*ares__array_cmp_t)(const void *data1, const void *data2); /*! Create an array object * * NOTE: members of the array are typically going to be an going to be a * struct with compiler/ABI specific padding to ensure proper alignment. * Care needs to be taken if using primitive types, especially floating * point numbers which size may not indicate the required alignment. * For example, a double may be 80 bits (10 bytes), but required * alignment of 16 bytes. In such a case, a member_size of 16 would be * required to be used. * * \param[in] destruct Optional. 
Destructor to call on a removed member * \param[in] member_size Size of array member, usually determined using * sizeof() for the member such as a struct. * * \return array object or NULL on out of memory */ ares__array_t *ares__array_create(size_t member_size, ares__array_destructor_t destruct); /*! Request the array be at least the requested size. Useful if the desired * array size is known prior to populating the array to prevent reallocations. * * \param[in] arr Initialized array object. * \param[in] size Minimum number of members * \return ARES_SUCCESS on success, ARES_EFORMERR on misuse, * ARES_ENOMEM on out of memory */ ares_status_t ares__array_set_size(ares__array_t *arr, size_t size); /*! Sort the array using the given comparison function. This is not * persistent, any future elements inserted will not maintain this sort. * * \param[in] arr Initialized array object. * \param[in] cb Sort callback * \return ARES_SUCCESS on success */ ares_status_t ares__array_sort(ares__array_t *arr, ares__array_cmp_t cmp); /*! Destroy an array object. If a destructor is set, will be called on each * member of the array. * * \param[in] arr Initialized array object. */ void ares__array_destroy(ares__array_t *arr); /*! Retrieve the array in the native format. This will also destroy the * container. It is the responsibility of the caller to free the returned * pointer and also any data within each array element. * * \param[in] arr Initialized array object * \param[out] num_members the number of members in the returned array * \return pointer to native array on success, NULL on failure. */ void *ares__array_finish(ares__array_t *arr, size_t *num_members); /*! Retrieve the number of members in the array * * \param[in] arr Initialized array object. * \return numbrer of members */ size_t ares__array_len(const ares__array_t *arr); /*! Insert a new array member at the given index * * \param[out] elem_ptr Optional. Pointer to the returned array element. * \param[in] arr Initialized array object. * \param[in] idx Index in array to place new element, will shift any * elements down that exist after this point. * \return ARES_SUCCESS on success, ARES_EFORMERR on bad index, * ARES_ENOMEM on out of memory. */ ares_status_t ares__array_insert_at(void **elem_ptr, ares__array_t *arr, size_t idx); /*! Insert a new array member at the end of the array * * \param[out] elem_ptr Optional. Pointer to the returned array element. * \param[in] arr Initialized array object. * \return ARES_SUCCESS on success, ARES_ENOMEM on out of memory. */ ares_status_t ares__array_insert_last(void **elem_ptr, ares__array_t *arr); /*! Insert a new array member at the beginning of the array * * \param[out] elem_ptr Optional. Pointer to the returned array element. * \param[in] arr Initialized array object. * \return ARES_SUCCESS on success, ARES_ENOMEM on out of memory. */ ares_status_t ares__array_insert_first(void **elem_ptr, ares__array_t *arr); /*! Fetch a pointer to the given element in the array * \param[in] array Initialized array object * \param[in] idx Index to fetch * \return pointer on success, NULL on failure */ void *ares__array_at(ares__array_t *arr, size_t idx); /*! Fetch a pointer to the first element in the array * \param[in] array Initialized array object * \return pointer on success, NULL on failure */ void *ares__array_first(ares__array_t *arr); /*! 
Fetch a pointer to the last element in the array * \param[in] array Initialized array object * \return pointer on success, NULL on failure */ void *ares__array_last(ares__array_t *arr); /*! Fetch a constant pointer to the given element in the array * \param[in] array Initialized array object * \param[in] idx Index to fetch * \return pointer on success, NULL on failure */ const void *ares__array_at_const(const ares__array_t *arr, size_t idx); /*! Fetch a constant pointer to the first element in the array * \param[in] array Initialized array object * \return pointer on success, NULL on failure */ const void *ares__array_first_const(const ares__array_t *arr); /*! Fetch a constant pointer to the last element in the array * \param[in] array Initialized array object * \return pointer on success, NULL on failure */ const void *ares__array_last_const(const ares__array_t *arr); /*! Claim the data from the specified array index, copying it to the buffer * provided by the caller. The index specified in the array will then be * removed (without calling any possible destructor) * * \param[in,out] dest Optional. Buffer to hold array member. Pass NULL * if not needed. This could leak memory if array * member needs destructor if not provided. * \param[in] dest_size Size of buffer provided, used as a sanity check. * Must match member_size provided to * ares__array_create() if dest_size specified. * \param[in] arr Initialized array object * \param[in] idx Index to claim * \return ARES_SUCCESS on success, ARES_EFORMERR on usage failure. */ ares_status_t ares__array_claim_at(void *dest, size_t dest_size, ares__array_t *arr, size_t idx); /*! Remove the member at the specified array index. The destructor will be * called. * * \param[in] arr Initialized array object * \param[in] idx Index to remove * \return ARES_SUCCESS if removed, ARES_EFORMERR on invalid use */ ares_status_t ares__array_remove_at(ares__array_t *arr, size_t idx); /*! Remove the first member of the array. * * \param[in] arr Initialized array object * \return ARES_SUCCESS if removed, ARES_EFORMERR on invalid use */ ares_status_t ares__array_remove_first(ares__array_t *arr); /*! Remove the last member of the array. * * \param[in] arr Initialized array object * \return ARES_SUCCESS if removed, ARES_EFORMERR on invalid use */ ares_status_t ares__array_remove_last(ares__array_t *arr); /*! @} */ #endif /* __ARES__ARRAY_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable.c000066400000000000000000000316471471441230600221520ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__llist.h" #include "ares__htable.h" #define ARES__HTABLE_MAX_BUCKETS (1U << 24) #define ARES__HTABLE_MIN_BUCKETS (1U << 4) #define ARES__HTABLE_EXPAND_PERCENT 75 struct ares__htable { ares__htable_hashfunc_t hash; ares__htable_bucket_key_t bucket_key; ares__htable_bucket_free_t bucket_free; ares__htable_key_eq_t key_eq; unsigned int seed; unsigned int size; size_t num_keys; size_t num_collisions; /* NOTE: if we converted buckets into ares__slist_t we could guarantee on * hash collisions we would have O(log n) worst case insert and search * performance. (We'd also need to make key_eq into a key_cmp to * support sort). That said, risk with a random hash seed is near zero, * and ares__slist_t is heavier weight, so I think using ares__llist_t * is an overall win. */ ares__llist_t **buckets; }; static unsigned int ares__htable_generate_seed(ares__htable_t *htable) { unsigned int seed = 0; time_t t = time(NULL); /* Mix stack address, heap address, and time to generate a random seed, it * doesn't have to be super secure, just quick. Likelihood of a hash * collision attack is very low with a small amount of effort */ seed |= (unsigned int)((size_t)htable & 0xFFFFFFFF); seed |= (unsigned int)((size_t)&seed & 0xFFFFFFFF); seed |= (unsigned int)(((ares_uint64_t)t) & 0xFFFFFFFF); return seed; } static void ares__htable_buckets_destroy(ares__llist_t **buckets, unsigned int size, ares_bool_t destroy_vals) { unsigned int i; if (buckets == NULL) { return; } for (i = 0; i < size; i++) { if (buckets[i] == NULL) { continue; } if (!destroy_vals) { ares__llist_replace_destructor(buckets[i], NULL); } ares__llist_destroy(buckets[i]); } ares_free(buckets); } void ares__htable_destroy(ares__htable_t *htable) { if (htable == NULL) { return; } ares__htable_buckets_destroy(htable->buckets, htable->size, ARES_TRUE); ares_free(htable); } ares__htable_t *ares__htable_create(ares__htable_hashfunc_t hash_func, ares__htable_bucket_key_t bucket_key, ares__htable_bucket_free_t bucket_free, ares__htable_key_eq_t key_eq) { ares__htable_t *htable = NULL; if (hash_func == NULL || bucket_key == NULL || bucket_free == NULL || key_eq == NULL) { goto fail; } htable = ares_malloc_zero(sizeof(*htable)); if (htable == NULL) { goto fail; } htable->hash = hash_func; htable->bucket_key = bucket_key; htable->bucket_free = bucket_free; htable->key_eq = key_eq; htable->seed = ares__htable_generate_seed(htable); htable->size = ARES__HTABLE_MIN_BUCKETS; htable->buckets = ares_malloc_zero(sizeof(*htable->buckets) * htable->size); if (htable->buckets == NULL) { goto fail; } return htable; fail: ares__htable_destroy(htable); return NULL; } const void **ares__htable_all_buckets(const ares__htable_t *htable, size_t *num) { const void **out = NULL; size_t cnt = 0; size_t i; if (htable == NULL || num == NULL) { return NULL; /* LCOV_EXCL_LINE */ } *num = 0; out = ares_malloc_zero(sizeof(*out) * htable->num_keys); if (out == NULL) { return NULL; /* LCOV_EXCL_LINE */ } for (i = 0; i < htable->size; i++) { ares__llist_node_t *node; for (node = ares__llist_node_first(htable->buckets[i]); node != NULL; node = ares__llist_node_next(node)) { out[cnt++] = ares__llist_node_val(node); } } *num = cnt; return out; } /*! 
Grabs the Hashtable index from the key and length. The h index is * the hash of the function reduced to the size of the bucket list. * We are doing "hash & (size - 1)" since we are guaranteeing a power of * 2 for size. This is equivalent to "hash % size", but should be more * efficient */ #define HASH_IDX(h, key) h->hash(key, h->seed) & (h->size - 1) static ares__llist_node_t *ares__htable_find(const ares__htable_t *htable, unsigned int idx, const void *key) { ares__llist_node_t *node = NULL; for (node = ares__llist_node_first(htable->buckets[idx]); node != NULL; node = ares__llist_node_next(node)) { if (htable->key_eq(key, htable->bucket_key(ares__llist_node_val(node)))) { break; } } return node; } static ares_bool_t ares__htable_expand(ares__htable_t *htable) { ares__llist_t **buckets = NULL; unsigned int old_size = htable->size; size_t i; ares__llist_t **prealloc_llist = NULL; size_t prealloc_llist_len = 0; ares_bool_t rv = ARES_FALSE; /* Not a failure, just won't expand */ if (old_size == ARES__HTABLE_MAX_BUCKETS) { return ARES_TRUE; /* LCOV_EXCL_LINE */ } htable->size <<= 1; /* We must pre-allocate all memory we'll need before moving entries to the * new hash array. Otherwise if there's a memory allocation failure in the * middle, we wouldn't be able to recover. */ buckets = ares_malloc_zero(sizeof(*buckets) * htable->size); if (buckets == NULL) { goto done; /* LCOV_EXCL_LINE */ } /* The maximum number of new llists we'll need is the number of collisions * that were recorded */ prealloc_llist_len = htable->num_collisions; if (prealloc_llist_len) { prealloc_llist = ares_malloc_zero(sizeof(*prealloc_llist) * prealloc_llist_len); if (prealloc_llist == NULL) { goto done; /* LCOV_EXCL_LINE */ } } for (i = 0; i < prealloc_llist_len; i++) { prealloc_llist[i] = ares__llist_create(htable->bucket_free); if (prealloc_llist[i] == NULL) { goto done; } } /* Iterate across all buckets and move the entries to the new buckets */ htable->num_collisions = 0; for (i = 0; i < old_size; i++) { ares__llist_node_t *node; /* Nothing in this bucket */ if (htable->buckets[i] == NULL) { continue; } /* Fast path optimization (most likely case), there is likely only a single * entry in both the source and destination, check for this to confirm and * if so, just move the bucket over */ if (ares__llist_len(htable->buckets[i]) == 1) { const void *val = ares__llist_first_val(htable->buckets[i]); size_t idx = HASH_IDX(htable, htable->bucket_key(val)); if (buckets[idx] == NULL) { /* Swap! */ buckets[idx] = htable->buckets[i]; htable->buckets[i] = NULL; continue; } } /* Slow path, collisions */ while ((node = ares__llist_node_first(htable->buckets[i])) != NULL) { const void *val = ares__llist_node_val(node); size_t idx = HASH_IDX(htable, htable->bucket_key(val)); /* Try fast path again as maybe we popped one collision off and the * next we can reuse the llist parent */ if (buckets[idx] == NULL && ares__llist_len(htable->buckets[i]) == 1) { /* Swap! 
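Only one entry remains in the source bucket and the destination bucket is empty, so adopt the whole llist rather than allocating a new one, mirroring the fast path above.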
*/ buckets[idx] = htable->buckets[i]; htable->buckets[i] = NULL; break; } /* Grab one off our preallocated list */ if (buckets[idx] == NULL) { /* Silence static analysis, this isn't possible but it doesn't know */ if (prealloc_llist == NULL || prealloc_llist_len == 0) { goto done; /* LCOV_EXCL_LINE */ } buckets[idx] = prealloc_llist[prealloc_llist_len - 1]; prealloc_llist_len--; } else { /* Collision occurred since the bucket wasn't empty */ htable->num_collisions++; } ares__llist_node_move_parent_first(node, buckets[idx]); } /* Abandoned bucket, destroy */ if (htable->buckets[i] != NULL) { ares__llist_destroy(htable->buckets[i]); htable->buckets[i] = NULL; } } /* We have guaranteed all the buckets have either been moved or destroyed, * so we just call ares_free() on the array and swap out the pointer */ ares_free(htable->buckets); htable->buckets = buckets; buckets = NULL; rv = ARES_TRUE; done: ares_free(buckets); /* destroy any unused preallocated buckets */ ares__htable_buckets_destroy(prealloc_llist, (unsigned int)prealloc_llist_len, ARES_FALSE); /* On failure, we need to restore the htable size */ if (rv != ARES_TRUE) { htable->size = old_size; /* LCOV_EXCL_LINE */ } return rv; } ares_bool_t ares__htable_insert(ares__htable_t *htable, void *bucket) { unsigned int idx = 0; ares__llist_node_t *node = NULL; const void *key = NULL; if (htable == NULL || bucket == NULL) { return ARES_FALSE; } key = htable->bucket_key(bucket); idx = HASH_IDX(htable, key); /* See if we have a matching bucket already, if so, replace it */ node = ares__htable_find(htable, idx, key); if (node != NULL) { ares__llist_node_replace(node, bucket); return ARES_TRUE; } /* Check to see if we should rehash because likelihood of collisions has * increased beyond our threshold */ if (htable->num_keys + 1 > (htable->size * ARES__HTABLE_EXPAND_PERCENT) / 100) { if (!ares__htable_expand(htable)) { return ARES_FALSE; /* LCOV_EXCL_LINE */ } /* If we expanded, need to calculate a new index */ idx = HASH_IDX(htable, key); } /* We lazily allocate the linked list */ if (htable->buckets[idx] == NULL) { htable->buckets[idx] = ares__llist_create(htable->bucket_free); if (htable->buckets[idx] == NULL) { return ARES_FALSE; } } node = ares__llist_insert_first(htable->buckets[idx], bucket); if (node == NULL) { return ARES_FALSE; } /* Track collisions for rehash stability */ if (ares__llist_len(htable->buckets[idx]) > 1) { htable->num_collisions++; } htable->num_keys++; return ARES_TRUE; } void *ares__htable_get(const ares__htable_t *htable, const void *key) { unsigned int idx; if (htable == NULL || key == NULL) { return NULL; } idx = HASH_IDX(htable, key); return ares__llist_node_val(ares__htable_find(htable, idx, key)); } ares_bool_t ares__htable_remove(ares__htable_t *htable, const void *key) { ares__llist_node_t *node; unsigned int idx; if (htable == NULL || key == NULL) { return ARES_FALSE; } idx = HASH_IDX(htable, key); node = ares__htable_find(htable, idx, key); if (node == NULL) { return ARES_FALSE; } htable->num_keys--; /* Reduce collisions */ if (ares__llist_len(ares__llist_node_parent(node)) > 1) { htable->num_collisions--; } ares__llist_node_destroy(node); return ARES_TRUE; } size_t ares__htable_num_keys(const ares__htable_t *htable) { if (htable == NULL) { return 0; } return htable->num_keys; } unsigned int ares__htable_hash_FNV1a(const unsigned char *key, size_t key_len, unsigned int seed) { /* recommended seed is 2166136261U, but we don't want collisions */ unsigned int hv = seed; size_t i; for (i = 0; i < key_len; i++) { 
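/* FNV-1a: XOR the byte into the hash first, then multiply by the 32-bit FNV
 * prime; the shift-and-add expression below is hv *= 0x01000193 written
 * without a multiply instruction. */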
hv ^= (unsigned int)key[i]; /* hv *= 0x01000193 */ hv += (hv << 1) + (hv << 4) + (hv << 7) + (hv << 8) + (hv << 24); } return hv; } /* Case insensitive version, meant for ASCII strings */ unsigned int ares__htable_hash_FNV1a_casecmp(const unsigned char *key, size_t key_len, unsigned int seed) { /* recommended seed is 2166136261U, but we don't want collisions */ unsigned int hv = seed; size_t i; for (i = 0; i < key_len; i++) { hv ^= (unsigned int)ares__tolower(key[i]); /* hv *= 0x01000193 */ hv += (hv << 1) + (hv << 4) + (hv << 7) + (hv << 8) + (hv << 24); } return hv; } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable.h000066400000000000000000000150141471441230600221450ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__HTABLE_H #define __ARES__HTABLE_H /*! \addtogroup ares__htable Base HashTable Data Structure * * This is a basic hashtable data structure that is meant to be wrapped * by a higher level implementation. This data structure is designed to * be callback-based in order to facilitate wrapping without needing to * worry about any underlying complexities of the hashtable implementation. * * This implementation supports automatic growing by powers of 2 when reaching * 75% capacity. A rehash will be performed on the expanded bucket list. * * Average time complexity: * - Insert: O(1) * - Search: O(1) * - Delete: O(1) * * @{ */ struct ares__htable; /*! Opaque data type for generic hash table implementation */ typedef struct ares__htable ares__htable_t; /*! Callback for generating a hash of the key. * * \param[in] key pointer to key to be hashed * \param[in] seed randomly generated seed used by hash function. * value is specific to the hashtable instance * but otherwise will not change between calls. * \return hash */ typedef unsigned int (*ares__htable_hashfunc_t)(const void *key, unsigned int seed); /*! Callback to free the bucket * * \param[in] bucket user provided bucket */ typedef void (*ares__htable_bucket_free_t)(void *bucket); /*! Callback to extract the key from the user-provided bucket * * \param[in] bucket user provided bucket * \return pointer to key held in bucket */ typedef const void *(*ares__htable_bucket_key_t)(const void *bucket); /*! 
Callback to compare two keys for equality * * \param[in] key1 first key * \param[in] key2 second key * \return ARES_TRUE if equal, ARES_FALSE if not */ typedef ares_bool_t (*ares__htable_key_eq_t)(const void *key1, const void *key2); /*! Destroy the initialized hashtable * * \param[in] htable initialized hashtable */ void ares__htable_destroy(ares__htable_t *htable); /*! Create a new hashtable * * \param[in] hash_func Required. Callback for Hash function. * \param[in] bucket_key Required. Callback to extract key from bucket. * \param[in] bucket_free Required. Callback to free bucket. * \param[in] key_eq Required. Callback to check for key equality. * \return initialized hashtable. NULL if out of memory or misuse. */ ares__htable_t *ares__htable_create(ares__htable_hashfunc_t hash_func, ares__htable_bucket_key_t bucket_key, ares__htable_bucket_free_t bucket_free, ares__htable_key_eq_t key_eq); /*! Count of keys from initialized hashtable * * \param[in] htable Initialized hashtable. * \return count of keys */ size_t ares__htable_num_keys(const ares__htable_t *htable); /*! Retrieve an array of buckets from the hashtable. This is mainly used as * a helper for retrieving an array of keys. * * \param[in] htable Initialized hashtable * \param[out] num Count of returned buckets * \return Array of pointers to the buckets. These are internal pointers * to data within the hashtable, so if the key is removed, there * will be a dangling pointer. It is expected wrappers will make * such values safe by duplicating them. */ const void **ares__htable_all_buckets(const ares__htable_t *htable, size_t *num); /*! Insert bucket into hashtable * * \param[in] htable Initialized hashtable. * \param[in] bucket User-provided bucket to insert. Takes "ownership". Not * allowed to be NULL. * \return ARES_TRUE on success, ARES_FALSE if out of memory */ ares_bool_t ares__htable_insert(ares__htable_t *htable, void *bucket); /*! Retrieve bucket from hashtable based on key. * * \param[in] htable Initialized hashtable * \param[in] key Pointer to key to use for comparison. * \return matching bucket, or NULL if not found. */ void *ares__htable_get(const ares__htable_t *htable, const void *key); /*! Remove bucket from hashtable by key * * \param[in] htable Initialized hashtable * \param[in] key Pointer to key to use for comparison * \return ARES_TRUE if found, ARES_FALSE if not found */ ares_bool_t ares__htable_remove(ares__htable_t *htable, const void *key); /*! FNV1a hash algorithm. Can be used as underlying primitive for building * a wrapper hashtable. * * \param[in] key pointer to key * \param[in] key_len Length of key * \param[in] seed Seed for generating hash * \return hash value */ unsigned int ares__htable_hash_FNV1a(const unsigned char *key, size_t key_len, unsigned int seed); /*! FNV1a hash algorithm, but converts all characters to lowercase before * hashing to make the hash case-insensitive. Can be used as underlying * primitive for building a wrapper hashtable. Used on string-based keys. * * \param[in] key pointer to key * \param[in] key_len Length of key * \param[in] seed Seed for generating hash * \return hash value */ unsigned int ares__htable_hash_FNV1a_casecmp(const unsigned char *key, size_t key_len, unsigned int seed); /*! 
@} */ #endif /* __ARES__HTABLE_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_asvp.c000066400000000000000000000125111471441230600231700ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__htable.h" #include "ares__htable_asvp.h" struct ares__htable_asvp { ares__htable_asvp_val_free_t free_val; ares__htable_t *hash; }; typedef struct { ares_socket_t key; void *val; ares__htable_asvp_t *parent; } ares__htable_asvp_bucket_t; void ares__htable_asvp_destroy(ares__htable_asvp_t *htable) { if (htable == NULL) { return; } ares__htable_destroy(htable->hash); ares_free(htable); } static unsigned int hash_func(const void *key, unsigned int seed) { const ares_socket_t *arg = key; return ares__htable_hash_FNV1a((const unsigned char *)arg, sizeof(*arg), seed); } static const void *bucket_key(const void *bucket) { const ares__htable_asvp_bucket_t *arg = bucket; return &arg->key; } static void bucket_free(void *bucket) { ares__htable_asvp_bucket_t *arg = bucket; if (arg->parent->free_val) { arg->parent->free_val(arg->val); } ares_free(arg); } static ares_bool_t key_eq(const void *key1, const void *key2) { const ares_socket_t *k1 = key1; const ares_socket_t *k2 = key2; if (*k1 == *k2) { return ARES_TRUE; } return ARES_FALSE; } ares__htable_asvp_t * ares__htable_asvp_create(ares__htable_asvp_val_free_t val_free) { ares__htable_asvp_t *htable = ares_malloc(sizeof(*htable)); if (htable == NULL) { goto fail; } htable->hash = ares__htable_create(hash_func, bucket_key, bucket_free, key_eq); if (htable->hash == NULL) { goto fail; } htable->free_val = val_free; return htable; fail: if (htable) { ares__htable_destroy(htable->hash); ares_free(htable); } return NULL; } ares_socket_t *ares__htable_asvp_keys(const ares__htable_asvp_t *htable, size_t *num) { const void **buckets = NULL; size_t cnt = 0; ares_socket_t *out = NULL; size_t i; if (htable == NULL || num == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } *num = 0; buckets = ares__htable_all_buckets(htable->hash, &cnt); if (buckets == NULL || cnt == 0) { return NULL; } out = ares_malloc_zero(sizeof(*out) * cnt); if (out == NULL) { ares_free(buckets); /* LCOV_EXCL_LINE: OutOfMemory */ return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < cnt; i++) { out[i] = ((const ares__htable_asvp_bucket_t *)buckets[i])->key; } ares_free(buckets); *num = cnt; return out; } 
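/* Usage sketch (illustrative only; not part of upstream c-ares): how a caller
 * might map an ares_socket_t to per-connection state with this table.  The
 * names "fd", "conn_state_t" and "conn_state_free()" are hypothetical
 * stand-ins for whatever the caller actually stores.
 *
 *   ares__htable_asvp_t *tbl = ares__htable_asvp_create(conn_state_free);
 *   conn_state_t        *st  = ares_malloc_zero(sizeof(*st));
 *
 *   if (tbl == NULL || st == NULL ||
 *       !ares__htable_asvp_insert(tbl, fd, st)) {
 *     conn_state_free(st);              failed insert: caller still owns st
 *   }
 *
 *   conn_state_t *found = ares__htable_asvp_get_direct(tbl, fd);
 *
 *   ares__htable_asvp_remove(tbl, fd);   runs conn_state_free() on the value
 *   ares__htable_asvp_destroy(tbl);      frees any values still in the table
 */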
ares_bool_t ares__htable_asvp_insert(ares__htable_asvp_t *htable, ares_socket_t key, void *val) { ares__htable_asvp_bucket_t *bucket = NULL; if (htable == NULL) { goto fail; } bucket = ares_malloc(sizeof(*bucket)); if (bucket == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } bucket->parent = htable; bucket->key = key; bucket->val = val; if (!ares__htable_insert(htable->hash, bucket)) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_TRUE; fail: if (bucket) { ares_free(bucket); /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_FALSE; } ares_bool_t ares__htable_asvp_get(const ares__htable_asvp_t *htable, ares_socket_t key, void **val) { ares__htable_asvp_bucket_t *bucket = NULL; if (val) { *val = NULL; } if (htable == NULL) { return ARES_FALSE; } bucket = ares__htable_get(htable->hash, &key); if (bucket == NULL) { return ARES_FALSE; } if (val) { *val = bucket->val; } return ARES_TRUE; } void *ares__htable_asvp_get_direct(const ares__htable_asvp_t *htable, ares_socket_t key) { void *val = NULL; ares__htable_asvp_get(htable, key, &val); return val; } ares_bool_t ares__htable_asvp_remove(ares__htable_asvp_t *htable, ares_socket_t key) { if (htable == NULL) { return ARES_FALSE; } return ares__htable_remove(htable->hash, &key); } size_t ares__htable_asvp_num_keys(const ares__htable_asvp_t *htable) { if (htable == NULL) { return 0; } return ares__htable_num_keys(htable->hash); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_asvp.h000066400000000000000000000113021471441230600231720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__HTABLE_ASVP_H #define __ARES__HTABLE_ASVP_H /*! \addtogroup ares__htable_asvp HashTable with ares_socket_t Key and * void pointer Value * * This data structure wraps the base ares__htable data structure in order to * split the key and value data types as ares_socket_t and void pointer, * respectively. * * Average time complexity: * - Insert: O(1) * - Search: O(1) * - Delete: O(1) * * @{ */ struct ares__htable_asvp; /*! Opaque data type for ares_socket_t key, void pointer hash table * implementation */ typedef struct ares__htable_asvp ares__htable_asvp_t; /*! Callback to free value stored in hashtable * * \param[in] val user-supplied value */ typedef void (*ares__htable_asvp_val_free_t)(void *val); /*! 
Destroy hashtable * * \param[in] htable Initialized hashtable */ void ares__htable_asvp_destroy(ares__htable_asvp_t *htable); /*! Create size_t key, void pointer value hash table * * \param[in] val_free Optional. Call back to free user-supplied value. If * NULL it is expected the caller will clean up any user * supplied values. */ ares__htable_asvp_t * ares__htable_asvp_create(ares__htable_asvp_val_free_t val_free); /*! Retrieve an array of keys from the hashtable. * * \param[in] htable Initialized hashtable * \param[out] num Count of returned keys * \return Array of keys in the hashtable. Must be free'd with ares_free(). */ ares_socket_t *ares__htable_asvp_keys(const ares__htable_asvp_t *htable, size_t *num); /*! Insert key/value into hash table * * \param[in] htable Initialized hash table * \param[in] key key to associate with value * \param[in] val value to store (takes ownership). May be NULL. * \return ARES_TRUE on success, ARES_FALSE on out of memory or misuse */ ares_bool_t ares__htable_asvp_insert(ares__htable_asvp_t *htable, ares_socket_t key, void *val); /*! Retrieve value from hashtable based on key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \param[out] val Optional. Pointer to store value. * \return ARES_TRUE on success, ARES_FALSE on failure */ ares_bool_t ares__htable_asvp_get(const ares__htable_asvp_t *htable, ares_socket_t key, void **val); /*! Retrieve value from hashtable directly as return value. Caveat to this * function over ares__htable_asvp_get() is that if a NULL value is stored * you cannot determine if the key is not found or the value is NULL. * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return value associated with key in hashtable or NULL */ void *ares__htable_asvp_get_direct(const ares__htable_asvp_t *htable, ares_socket_t key); /*! Remove a value from the hashtable by key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return ARES_TRUE if found, ARES_FALSE if not found */ ares_bool_t ares__htable_asvp_remove(ares__htable_asvp_t *htable, ares_socket_t key); /*! Retrieve the number of keys stored in the hash table * * \param[in] htable Initialized hash table * \return count */ size_t ares__htable_asvp_num_keys(const ares__htable_asvp_t *htable); /*! @} */ #endif /* __ARES__HTABLE_ASVP_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_strvp.c000066400000000000000000000111141471441230600233730ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__htable.h" #include "ares__htable_strvp.h" struct ares__htable_strvp { ares__htable_strvp_val_free_t free_val; ares__htable_t *hash; }; typedef struct { char *key; void *val; ares__htable_strvp_t *parent; } ares__htable_strvp_bucket_t; void ares__htable_strvp_destroy(ares__htable_strvp_t *htable) { if (htable == NULL) { return; } ares__htable_destroy(htable->hash); ares_free(htable); } static unsigned int hash_func(const void *key, unsigned int seed) { const char *arg = key; return ares__htable_hash_FNV1a_casecmp((const unsigned char *)arg, ares_strlen(arg), seed); } static const void *bucket_key(const void *bucket) { const ares__htable_strvp_bucket_t *arg = bucket; return arg->key; } static void bucket_free(void *bucket) { ares__htable_strvp_bucket_t *arg = bucket; if (arg->parent->free_val) { arg->parent->free_val(arg->val); } ares_free(arg->key); ares_free(arg); } static ares_bool_t key_eq(const void *key1, const void *key2) { const char *k1 = key1; const char *k2 = key2; if (strcasecmp(k1, k2) == 0) { return ARES_TRUE; } return ARES_FALSE; } ares__htable_strvp_t * ares__htable_strvp_create(ares__htable_strvp_val_free_t val_free) { ares__htable_strvp_t *htable = ares_malloc(sizeof(*htable)); if (htable == NULL) { goto fail; } htable->hash = ares__htable_create(hash_func, bucket_key, bucket_free, key_eq); if (htable->hash == NULL) { goto fail; } htable->free_val = val_free; return htable; fail: if (htable) { ares__htable_destroy(htable->hash); ares_free(htable); } return NULL; } ares_bool_t ares__htable_strvp_insert(ares__htable_strvp_t *htable, const char *key, void *val) { ares__htable_strvp_bucket_t *bucket = NULL; if (htable == NULL || key == NULL) { goto fail; } bucket = ares_malloc(sizeof(*bucket)); if (bucket == NULL) { goto fail; } bucket->parent = htable; bucket->key = ares_strdup(key); if (bucket->key == NULL) { goto fail; } bucket->val = val; if (!ares__htable_insert(htable->hash, bucket)) { goto fail; } return ARES_TRUE; fail: if (bucket) { ares_free(bucket->key); ares_free(bucket); } return ARES_FALSE; } ares_bool_t ares__htable_strvp_get(const ares__htable_strvp_t *htable, const char *key, void **val) { ares__htable_strvp_bucket_t *bucket = NULL; if (val) { *val = NULL; } if (htable == NULL || key == NULL) { return ARES_FALSE; } bucket = ares__htable_get(htable->hash, key); if (bucket == NULL) { return ARES_FALSE; } if (val) { *val = bucket->val; } return ARES_TRUE; } void *ares__htable_strvp_get_direct(const ares__htable_strvp_t *htable, const char *key) { void *val = NULL; ares__htable_strvp_get(htable, key, &val); return val; } ares_bool_t ares__htable_strvp_remove(ares__htable_strvp_t *htable, const char *key) { if (htable == NULL) { return ARES_FALSE; } return ares__htable_remove(htable->hash, key); } size_t ares__htable_strvp_num_keys(const ares__htable_strvp_t *htable) { if (htable == NULL) { return 0; } return ares__htable_num_keys(htable->hash); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_strvp.h000066400000000000000000000104031471441230600234000ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this 
software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__HTABLE_STRVP_H #define __ARES__HTABLE_STRVP_H /*! \addtogroup ares__htable_strvp HashTable with string Key and void pointer * Value * * This data structure wraps the base ares__htable data structure in order to * split the key and value data types as string and void pointer, respectively. * * Average time complexity: * - Insert: O(1) * - Search: O(1) * - Delete: O(1) * * @{ */ struct ares__htable_strvp; /*! Opaque data type for size_t key, void pointer hash table implementation */ typedef struct ares__htable_strvp ares__htable_strvp_t; /*! Callback to free value stored in hashtable * * \param[in] val user-supplied value */ typedef void (*ares__htable_strvp_val_free_t)(void *val); /*! Destroy hashtable * * \param[in] htable Initialized hashtable */ void ares__htable_strvp_destroy(ares__htable_strvp_t *htable); /*! Create string, void pointer value hash table * * \param[in] val_free Optional. Call back to free user-supplied value. If * NULL it is expected the caller will clean up any user * supplied values. */ ares__htable_strvp_t * ares__htable_strvp_create(ares__htable_strvp_val_free_t val_free); /*! Insert key/value into hash table * * \param[in] htable Initialized hash table * \param[in] key key to associate with value * \param[in] val value to store (takes ownership). May be NULL. * \return ARES_TRUE on success, ARES_FALSE on failure or out of memory */ ares_bool_t ares__htable_strvp_insert(ares__htable_strvp_t *htable, const char *key, void *val); /*! Retrieve value from hashtable based on key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \param[out] val Optional. Pointer to store value. * \return ARES_TRUE on success, ARES_FALSE on failure */ ares_bool_t ares__htable_strvp_get(const ares__htable_strvp_t *htable, const char *key, void **val); /*! Retrieve value from hashtable directly as return value. Caveat to this * function over ares__htable_strvp_get() is that if a NULL value is stored * you cannot determine if the key is not found or the value is NULL. * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return value associated with key in hashtable or NULL */ void *ares__htable_strvp_get_direct(const ares__htable_strvp_t *htable, const char *key); /*! 
Remove a value from the hashtable by key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return ARES_TRUE if found, ARES_FALSE if not */ ares_bool_t ares__htable_strvp_remove(ares__htable_strvp_t *htable, const char *key); /*! Retrieve the number of keys stored in the hash table * * \param[in] htable Initialized hash table * \return count */ size_t ares__htable_strvp_num_keys(const ares__htable_strvp_t *htable); /*! @} */ #endif /* __ARES__HTABLE_STRVP_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_szvp.c000066400000000000000000000106601471441230600232240ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__htable.h" #include "ares__htable_szvp.h" struct ares__htable_szvp { ares__htable_szvp_val_free_t free_val; ares__htable_t *hash; }; typedef struct { size_t key; void *val; ares__htable_szvp_t *parent; } ares__htable_szvp_bucket_t; void ares__htable_szvp_destroy(ares__htable_szvp_t *htable) { if (htable == NULL) { return; } ares__htable_destroy(htable->hash); ares_free(htable); } static unsigned int hash_func(const void *key, unsigned int seed) { const size_t *arg = key; return ares__htable_hash_FNV1a((const unsigned char *)arg, sizeof(*arg), seed); } static const void *bucket_key(const void *bucket) { const ares__htable_szvp_bucket_t *arg = bucket; return &arg->key; } static void bucket_free(void *bucket) { ares__htable_szvp_bucket_t *arg = bucket; if (arg->parent->free_val) { arg->parent->free_val(arg->val); } ares_free(arg); } static ares_bool_t key_eq(const void *key1, const void *key2) { const size_t *k1 = key1; const size_t *k2 = key2; if (*k1 == *k2) { return ARES_TRUE; } return ARES_FALSE; } ares__htable_szvp_t * ares__htable_szvp_create(ares__htable_szvp_val_free_t val_free) { ares__htable_szvp_t *htable = ares_malloc(sizeof(*htable)); if (htable == NULL) { goto fail; } htable->hash = ares__htable_create(hash_func, bucket_key, bucket_free, key_eq); if (htable->hash == NULL) { goto fail; } htable->free_val = val_free; return htable; fail: if (htable) { ares__htable_destroy(htable->hash); ares_free(htable); } return NULL; } ares_bool_t ares__htable_szvp_insert(ares__htable_szvp_t *htable, size_t key, void *val) { ares__htable_szvp_bucket_t *bucket = NULL; if (htable == NULL) { goto fail; } bucket = ares_malloc(sizeof(*bucket)); if (bucket == NULL) { goto fail; /* 
LCOV_EXCL_LINE: OutOfMemory */ } bucket->parent = htable; bucket->key = key; bucket->val = val; if (!ares__htable_insert(htable->hash, bucket)) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_TRUE; fail: if (bucket) { ares_free(bucket); /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_FALSE; } ares_bool_t ares__htable_szvp_get(const ares__htable_szvp_t *htable, size_t key, void **val) { ares__htable_szvp_bucket_t *bucket = NULL; if (val) { *val = NULL; } if (htable == NULL) { return ARES_FALSE; } bucket = ares__htable_get(htable->hash, &key); if (bucket == NULL) { return ARES_FALSE; } if (val) { *val = bucket->val; } return ARES_TRUE; } void *ares__htable_szvp_get_direct(const ares__htable_szvp_t *htable, size_t key) { void *val = NULL; ares__htable_szvp_get(htable, key, &val); return val; } ares_bool_t ares__htable_szvp_remove(ares__htable_szvp_t *htable, size_t key) { if (htable == NULL) { return ARES_FALSE; } return ares__htable_remove(htable->hash, &key); } size_t ares__htable_szvp_num_keys(const ares__htable_szvp_t *htable) { if (htable == NULL) { return 0; } return ares__htable_num_keys(htable->hash); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_szvp.h000066400000000000000000000102541471441230600232300ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__HTABLE_STVP_H #define __ARES__HTABLE_STVP_H /*! \addtogroup ares__htable_szvp HashTable with size_t Key and void pointer * Value * * This data structure wraps the base ares__htable data structure in order to * split the key and value data types as size_t and void pointer, respectively. * * Average time complexity: * - Insert: O(1) * - Search: O(1) * - Delete: O(1) * * @{ */ struct ares__htable_szvp; /*! Opaque data type for size_t key, void pointer hash table implementation */ typedef struct ares__htable_szvp ares__htable_szvp_t; /*! Callback to free value stored in hashtable * * \param[in] val user-supplied value */ typedef void (*ares__htable_szvp_val_free_t)(void *val); /*! Destroy hashtable * * \param[in] htable Initialized hashtable */ void ares__htable_szvp_destroy(ares__htable_szvp_t *htable); /*! Create size_t key, void pointer value hash table * * \param[in] val_free Optional. Call back to free user-supplied value. If * NULL it is expected the caller will clean up any user * supplied values. 
*/ ares__htable_szvp_t * ares__htable_szvp_create(ares__htable_szvp_val_free_t val_free); /*! Insert key/value into hash table * * \param[in] htable Initialized hash table * \param[in] key key to associate with value * \param[in] val value to store (takes ownership). May be NULL. * \return ARES_TRUE on success, ARES_FALSE on failure or out of memory */ ares_bool_t ares__htable_szvp_insert(ares__htable_szvp_t *htable, size_t key, void *val); /*! Retrieve value from hashtable based on key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \param[out] val Optional. Pointer to store value. * \return ARES_TRUE on success, ARES_FALSE on failure */ ares_bool_t ares__htable_szvp_get(const ares__htable_szvp_t *htable, size_t key, void **val); /*! Retrieve value from hashtable directly as return value. Caveat to this * function over ares__htable_szvp_get() is that if a NULL value is stored * you cannot determine if the key is not found or the value is NULL. * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return value associated with key in hashtable or NULL */ void *ares__htable_szvp_get_direct(const ares__htable_szvp_t *htable, size_t key); /*! Remove a value from the hashtable by key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return ARES_TRUE if found, ARES_FALSE if not */ ares_bool_t ares__htable_szvp_remove(ares__htable_szvp_t *htable, size_t key); /*! Retrieve the number of keys stored in the hash table * * \param[in] htable Initialized hash table * \return count */ size_t ares__htable_szvp_num_keys(const ares__htable_szvp_t *htable); /*! @} */ #endif /* __ARES__HTABLE_STVP_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_vpvp.c000066400000000000000000000112451471441230600232150ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__htable.h" #include "ares__htable_vpvp.h" struct ares__htable_vpvp { ares__htable_vpvp_key_free_t free_key; ares__htable_vpvp_val_free_t free_val; ares__htable_t *hash; }; typedef struct { void *key; void *val; ares__htable_vpvp_t *parent; } ares__htable_vpvp_bucket_t; void ares__htable_vpvp_destroy(ares__htable_vpvp_t *htable) { if (htable == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__htable_destroy(htable->hash); ares_free(htable); } static unsigned int hash_func(const void *key, unsigned int seed) { return ares__htable_hash_FNV1a((const unsigned char *)&key, sizeof(key), seed); } static const void *bucket_key(const void *bucket) { const ares__htable_vpvp_bucket_t *arg = bucket; return arg->key; } static void bucket_free(void *bucket) { ares__htable_vpvp_bucket_t *arg = bucket; if (arg->parent->free_key) { arg->parent->free_key(arg->key); } if (arg->parent->free_val) { arg->parent->free_val(arg->val); } ares_free(arg); } static ares_bool_t key_eq(const void *key1, const void *key2) { if (key1 == key2) { return ARES_TRUE; } return ARES_FALSE; } ares__htable_vpvp_t * ares__htable_vpvp_create(ares__htable_vpvp_key_free_t key_free, ares__htable_vpvp_val_free_t val_free) { ares__htable_vpvp_t *htable = ares_malloc(sizeof(*htable)); if (htable == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } htable->hash = ares__htable_create(hash_func, bucket_key, bucket_free, key_eq); if (htable->hash == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } htable->free_key = key_free; htable->free_val = val_free; return htable; /* LCOV_EXCL_START: OutOfMemory */ fail: if (htable) { ares__htable_destroy(htable->hash); ares_free(htable); } return NULL; /* LCOV_EXCL_STOP */ } ares_bool_t ares__htable_vpvp_insert(ares__htable_vpvp_t *htable, void *key, void *val) { ares__htable_vpvp_bucket_t *bucket = NULL; if (htable == NULL) { goto fail; } bucket = ares_malloc(sizeof(*bucket)); if (bucket == NULL) { goto fail; } bucket->parent = htable; bucket->key = key; bucket->val = val; if (!ares__htable_insert(htable->hash, bucket)) { goto fail; } return ARES_TRUE; fail: if (bucket) { ares_free(bucket); } return ARES_FALSE; } ares_bool_t ares__htable_vpvp_get(const ares__htable_vpvp_t *htable, const void *key, void **val) { ares__htable_vpvp_bucket_t *bucket = NULL; if (val) { *val = NULL; } if (htable == NULL) { return ARES_FALSE; } bucket = ares__htable_get(htable->hash, key); if (bucket == NULL) { return ARES_FALSE; } if (val) { *val = bucket->val; } return ARES_TRUE; } void *ares__htable_vpvp_get_direct(const ares__htable_vpvp_t *htable, const void *key) { void *val = NULL; ares__htable_vpvp_get(htable, key, &val); return val; } ares_bool_t ares__htable_vpvp_remove(ares__htable_vpvp_t *htable, const void *key) { if (htable == NULL) { return ARES_FALSE; } return ares__htable_remove(htable->hash, key); } size_t ares__htable_vpvp_num_keys(const ares__htable_vpvp_t *htable) { if (htable == NULL) { return 0; } return ares__htable_num_keys(htable->hash); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__htable_vpvp.h000066400000000000000000000112141471441230600232160ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, 
distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__HTABLE_VPVP_H #define __ARES__HTABLE_VPVP_H /*! \addtogroup ares__htable_vpvp HashTable with void pointer Key and void * pointer Value * * This data structure wraps the base ares__htable data structure in order to * split the key and value data types as size_t and void pointer, respectively. * * Average time complexity: * - Insert: O(1) * - Search: O(1) * - Delete: O(1) * * @{ */ struct ares__htable_vpvp; /*! Opaque data type for size_t key, void pointer hash table implementation */ typedef struct ares__htable_vpvp ares__htable_vpvp_t; /*! Callback to free key stored in hashtable * * \param[in] key user-supplied key */ typedef void (*ares__htable_vpvp_key_free_t)(void *key); /*! Callback to free value stored in hashtable * * \param[in] val user-supplied value */ typedef void (*ares__htable_vpvp_val_free_t)(void *val); /*! Destroy hashtable * * \param[in] htable Initialized hashtable */ void ares__htable_vpvp_destroy(ares__htable_vpvp_t *htable); /*! Create size_t key, void pointer value hash table * * \param[in] key_free Optional. Call back to free user-supplied key. If * NULL it is expected the caller will clean up any user * supplied keys. * \param[in] val_free Optional. Call back to free user-supplied value. If * NULL it is expected the caller will clean up any user * supplied values. */ ares__htable_vpvp_t * ares__htable_vpvp_create(ares__htable_vpvp_key_free_t key_free, ares__htable_vpvp_val_free_t val_free); /*! Insert key/value into hash table * * \param[in] htable Initialized hash table * \param[in] key key to associate with value * \param[in] val value to store (takes ownership). May be NULL. * \return ARES_TRUE on success, ARES_FALSE on failure or out of memory */ ares_bool_t ares__htable_vpvp_insert(ares__htable_vpvp_t *htable, void *key, void *val); /*! Retrieve value from hashtable based on key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \param[out] val Optional. Pointer to store value. * \return ARES_TRUE on success, ARES_FALSE on failure */ ares_bool_t ares__htable_vpvp_get(const ares__htable_vpvp_t *htable, const void *key, void **val); /*! Retrieve value from hashtable directly as return value. Caveat to this * function over ares__htable_vpvp_get() is that if a NULL value is stored * you cannot determine if the key is not found or the value is NULL. * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return value associated with key in hashtable or NULL */ void *ares__htable_vpvp_get_direct(const ares__htable_vpvp_t *htable, const void *key); /*! 
Remove a value from the hashtable by key * * \param[in] htable Initialized hash table * \param[in] key key to use to search * \return ARES_TRUE if found, ARES_FALSE if not */ ares_bool_t ares__htable_vpvp_remove(ares__htable_vpvp_t *htable, const void *key); /*! Retrieve the number of keys stored in the hash table * * \param[in] htable Initialized hash table * \return count */ size_t ares__htable_vpvp_num_keys(const ares__htable_vpvp_t *htable); /*! @} */ #endif /* __ARES__HTABLE_VPVP_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__llist.c000066400000000000000000000210001471441230600220200ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__llist.h" struct ares__llist { ares__llist_node_t *head; ares__llist_node_t *tail; ares__llist_destructor_t destruct; size_t cnt; }; struct ares__llist_node { void *data; ares__llist_node_t *prev; ares__llist_node_t *next; ares__llist_t *parent; }; ares__llist_t *ares__llist_create(ares__llist_destructor_t destruct) { ares__llist_t *list = ares_malloc_zero(sizeof(*list)); if (list == NULL) { return NULL; } list->destruct = destruct; return list; } void ares__llist_replace_destructor(ares__llist_t *list, ares__llist_destructor_t destruct) { if (list == NULL) { return; } list->destruct = destruct; } typedef enum { ARES__LLIST_INSERT_HEAD, ARES__LLIST_INSERT_TAIL, ARES__LLIST_INSERT_BEFORE } ares__llist_insert_type_t; static void ares__llist_attach_at(ares__llist_t *list, ares__llist_insert_type_t type, ares__llist_node_t *at, ares__llist_node_t *node) { if (list == NULL || node == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } node->parent = list; if (type == ARES__LLIST_INSERT_BEFORE && (at == list->head || at == NULL)) { type = ARES__LLIST_INSERT_HEAD; } switch (type) { case ARES__LLIST_INSERT_HEAD: node->next = list->head; node->prev = NULL; if (list->head) { list->head->prev = node; } list->head = node; break; case ARES__LLIST_INSERT_TAIL: node->next = NULL; node->prev = list->tail; if (list->tail) { list->tail->next = node; } list->tail = node; break; case ARES__LLIST_INSERT_BEFORE: node->next = at; node->prev = at->prev; at->prev = node; break; } if (list->tail == NULL) { list->tail = node; } if (list->head == NULL) { list->head = node; } list->cnt++; } static ares__llist_node_t *ares__llist_insert_at(ares__llist_t *list, ares__llist_insert_type_t type, 
ares__llist_node_t *at, void *val) { ares__llist_node_t *node = NULL; if (list == NULL || val == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } node = ares_malloc_zero(sizeof(*node)); if (node == NULL) { return NULL; } node->data = val; ares__llist_attach_at(list, type, at, node); return node; } ares__llist_node_t *ares__llist_insert_first(ares__llist_t *list, void *val) { return ares__llist_insert_at(list, ARES__LLIST_INSERT_HEAD, NULL, val); } ares__llist_node_t *ares__llist_insert_last(ares__llist_t *list, void *val) { return ares__llist_insert_at(list, ARES__LLIST_INSERT_TAIL, NULL, val); } ares__llist_node_t *ares__llist_insert_before(ares__llist_node_t *node, void *val) { if (node == NULL) { return NULL; } return ares__llist_insert_at(node->parent, ARES__LLIST_INSERT_BEFORE, node, val); } ares__llist_node_t *ares__llist_insert_after(ares__llist_node_t *node, void *val) { if (node == NULL) { return NULL; } if (node->next == NULL) { return ares__llist_insert_last(node->parent, val); } return ares__llist_insert_at(node->parent, ARES__LLIST_INSERT_BEFORE, node->next, val); } ares__llist_node_t *ares__llist_node_first(ares__llist_t *list) { if (list == NULL) { return NULL; } return list->head; } ares__llist_node_t *ares__llist_node_idx(ares__llist_t *list, size_t idx) { ares__llist_node_t *node; size_t cnt; if (list == NULL) { return NULL; } if (idx >= list->cnt) { return NULL; } node = list->head; for (cnt = 0; node != NULL && cnt < idx; cnt++) { node = node->next; } return node; } ares__llist_node_t *ares__llist_node_last(ares__llist_t *list) { if (list == NULL) { return NULL; } return list->tail; } ares__llist_node_t *ares__llist_node_next(ares__llist_node_t *node) { if (node == NULL) { return NULL; } return node->next; } ares__llist_node_t *ares__llist_node_prev(ares__llist_node_t *node) { if (node == NULL) { return NULL; } return node->prev; } void *ares__llist_node_val(ares__llist_node_t *node) { if (node == NULL) { return NULL; } return node->data; } size_t ares__llist_len(const ares__llist_t *list) { if (list == NULL) { return 0; } return list->cnt; } ares__llist_t *ares__llist_node_parent(ares__llist_node_t *node) { if (node == NULL) { return NULL; } return node->parent; } void *ares__llist_first_val(ares__llist_t *list) { return ares__llist_node_val(ares__llist_node_first(list)); } void *ares__llist_last_val(ares__llist_t *list) { return ares__llist_node_val(ares__llist_node_last(list)); } static void ares__llist_node_detach(ares__llist_node_t *node) { ares__llist_t *list; if (node == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } list = node->parent; if (node->prev) { node->prev->next = node->next; } if (node->next) { node->next->prev = node->prev; } if (node == list->head) { list->head = node->next; } if (node == list->tail) { list->tail = node->prev; } node->parent = NULL; list->cnt--; } void *ares__llist_node_claim(ares__llist_node_t *node) { void *val; if (node == NULL) { return NULL; } val = node->data; ares__llist_node_detach(node); ares_free(node); return val; } void ares__llist_node_destroy(ares__llist_node_t *node) { ares__llist_destructor_t destruct; void *val; if (node == NULL) { return; } destruct = node->parent->destruct; val = ares__llist_node_claim(node); if (val != NULL && destruct != NULL) { destruct(val); } } void ares__llist_node_replace(ares__llist_node_t *node, void *val) { ares__llist_destructor_t destruct; if (node == NULL) { return; } destruct = node->parent->destruct; if (destruct != NULL) { destruct(node->data); } node->data = 
val; } void ares__llist_clear(ares__llist_t *list) { ares__llist_node_t *node; if (list == NULL) { return; } while ((node = ares__llist_node_first(list)) != NULL) { ares__llist_node_destroy(node); } } void ares__llist_destroy(ares__llist_t *list) { if (list == NULL) { return; } ares__llist_clear(list); ares_free(list); } void ares__llist_node_move_parent_last(ares__llist_node_t *node, ares__llist_t *new_parent) { if (node == NULL || new_parent == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__llist_node_detach(node); ares__llist_attach_at(new_parent, ARES__LLIST_INSERT_TAIL, NULL, node); } void ares__llist_node_move_parent_first(ares__llist_node_t *node, ares__llist_t *new_parent) { if (node == NULL || new_parent == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__llist_node_detach(node); ares__llist_attach_at(new_parent, ARES__LLIST_INSERT_HEAD, NULL, node); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__llist.h000066400000000000000000000170651471441230600220450ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__LLIST_H #define __ARES__LLIST_H /*! \addtogroup ares__llist LinkedList Data Structure * * This is a doubly-linked list data structure. * * Average time complexity: * - Insert: O(1) -- head or tail * - Search: O(n) * - Delete: O(1) -- delete assumes you hold a node pointer * * @{ */ struct ares__llist; /*! Opaque data structure for linked list */ typedef struct ares__llist ares__llist_t; struct ares__llist_node; /*! Opaque data structure for a node in a linked list */ typedef struct ares__llist_node ares__llist_node_t; /*! Callback to free user-defined node data * * \param[in] data user supplied data */ typedef void (*ares__llist_destructor_t)(void *data); /*! Create a linked list object * * \param[in] destruct Optional. Destructor to call on all removed nodes * \return linked list object or NULL on out of memory */ ares__llist_t *ares__llist_create(ares__llist_destructor_t destruct); /*! Replace destructor for linked list nodes. Typically this is used * when wanting to disable the destructor by using NULL. * * \param[in] list Initialized linked list object * \param[in] destruct replacement destructor, NULL is allowed */ void ares__llist_replace_destructor(ares__llist_t *list, ares__llist_destructor_t destruct); /*! 
Insert value as the first node in the linked list * * \param[in] list Initialized linked list object * \param[in] val user-supplied value. * \return node object referencing place in list, or null if out of memory or * misuse */ ares__llist_node_t *ares__llist_insert_first(ares__llist_t *list, void *val); /*! Insert value as the last node in the linked list * * \param[in] list Initialized linked list object * \param[in] val user-supplied value. * \return node object referencing place in list, or null if out of memory or * misuse */ ares__llist_node_t *ares__llist_insert_last(ares__llist_t *list, void *val); /*! Insert value before specified node in the linked list * * \param[in] node node referenced to insert before * \param[in] val user-supplied value. * \return node object referencing place in list, or null if out of memory or * misuse */ ares__llist_node_t *ares__llist_insert_before(ares__llist_node_t *node, void *val); /*! Insert value after specified node in the linked list * * \param[in] node node referenced to insert after * \param[in] val user-supplied value. * \return node object referencing place in list, or null if out of memory or * misuse */ ares__llist_node_t *ares__llist_insert_after(ares__llist_node_t *node, void *val); /*! Obtain first node in list * * \param[in] list Initialized list object * \return first node in list or NULL if none */ ares__llist_node_t *ares__llist_node_first(ares__llist_t *list); /*! Obtain last node in list * * \param[in] list Initialized list object * \return last node in list or NULL if none */ ares__llist_node_t *ares__llist_node_last(ares__llist_t *list); /*! Obtain a node based on its index. This is an O(n) operation. * * \param[in] list Initialized list object * \param[in] idx Index of node to retrieve * \return node at index or NULL if invalid index */ ares__llist_node_t *ares__llist_node_idx(ares__llist_t *list, size_t idx); /*! Obtain next node in respect to specified node * * \param[in] node Node referenced * \return node or NULL if none */ ares__llist_node_t *ares__llist_node_next(ares__llist_node_t *node); /*! Obtain previous node in respect to specified node * * \param[in] node Node referenced * \return node or NULL if none */ ares__llist_node_t *ares__llist_node_prev(ares__llist_node_t *node); /*! Obtain value from node * * \param[in] node Node referenced * \return user provided value from node */ void *ares__llist_node_val(ares__llist_node_t *node); /*! Obtain the number of entries in the list * * \param[in] list Initialized list object * \return count */ size_t ares__llist_len(const ares__llist_t *list); /*! Clear all entries in the list, but don't destroy the list object. * * \param[in] list Initialized list object */ void ares__llist_clear(ares__llist_t *list); /*! Obtain list object from referenced node * * \param[in] node Node referenced * \return list object node belongs to */ ares__llist_t *ares__llist_node_parent(ares__llist_node_t *node); /*! Obtain the first user-supplied value in the list * * \param[in] list Initialized list object * \return first user supplied value or NULL if none */ void *ares__llist_first_val(ares__llist_t *list); /*! Obtain the last user-supplied value in the list * * \param[in] list Initialized list object * \return last user supplied value or NULL if none */ void *ares__llist_last_val(ares__llist_t *list); /*! Take ownership of user-supplied value in list without calling destructor. * Will unchain entry from list. 
* * \param[in] node Node referenced * \return user supplied value */ void *ares__llist_node_claim(ares__llist_node_t *node); /*! Replace user-supplied value for node * * \param[in] node Node referenced * \param[in] val new user-supplied value */ void ares__llist_node_replace(ares__llist_node_t *node, void *val); /*! Destroy the node, removing it from the list and calling destructor. * * \param[in] node Node referenced */ void ares__llist_node_destroy(ares__llist_node_t *node); /*! Destroy the list object and all nodes in the list. * * \param[in] list Initialized list object */ void ares__llist_destroy(ares__llist_t *list); /*! Detach node from the current list and re-attach it to the new list as the * last entry. * * \param[in] node node to move * \param[in] new_parent new list */ void ares__llist_node_move_parent_last(ares__llist_node_t *node, ares__llist_t *new_parent); /*! Detach node from the current list and re-attach it to the new list as the * first entry. * * \param[in] node node to move * \param[in] new_parent new list */ void ares__llist_node_move_parent_first(ares__llist_node_t *node, ares__llist_t *new_parent); /*! @} */ #endif /* __ARES__LLIST_H */ gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__slist.c000066400000000000000000000260211471441230600220370ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__slist.h" /* SkipList implementation */ #define ARES__SLIST_START_LEVELS 4 struct ares__slist { ares_rand_state *rand_state; unsigned char rand_data[8]; size_t rand_bits; ares__slist_node_t **head; size_t levels; ares__slist_node_t *tail; ares__slist_cmp_t cmp; ares__slist_destructor_t destruct; size_t cnt; }; struct ares__slist_node { void *data; ares__slist_node_t **prev; ares__slist_node_t **next; size_t levels; ares__slist_t *parent; }; ares__slist_t *ares__slist_create(ares_rand_state *rand_state, ares__slist_cmp_t cmp, ares__slist_destructor_t destruct) { ares__slist_t *list; if (rand_state == NULL || cmp == NULL) { return NULL; } list = ares_malloc_zero(sizeof(*list)); if (list == NULL) { return NULL; } list->rand_state = rand_state; list->cmp = cmp; list->destruct = destruct; list->levels = ARES__SLIST_START_LEVELS; list->head = ares_malloc_zero(sizeof(*list->head) * list->levels); if (list->head == NULL) { ares_free(list); return NULL; } return list; } static ares_bool_t ares__slist_coin_flip(ares__slist_t *list) { size_t total_bits = sizeof(list->rand_data) * 8; size_t bit; /* Refill random data used for coin flips. We pull this in 8 byte chunks. * ares__rand_bytes() has some built-in caching of its own so we don't need * to be excessive in caching ourselves. Prefer to require less memory per * skiplist */ if (list->rand_bits == 0) { ares__rand_bytes(list->rand_state, list->rand_data, sizeof(list->rand_data)); list->rand_bits = total_bits; } bit = total_bits - list->rand_bits; list->rand_bits--; return (list->rand_data[bit / 8] & (1 << (bit % 8))) ? ARES_TRUE : ARES_FALSE; } void ares__slist_replace_destructor(ares__slist_t *list, ares__slist_destructor_t destruct) { if (list == NULL) { return; } list->destruct = destruct; } static size_t ares__slist_max_level(const ares__slist_t *list) { size_t max_level = 0; if (list->cnt + 1 <= (1 << ARES__SLIST_START_LEVELS)) { max_level = ARES__SLIST_START_LEVELS; } else { max_level = ares__log2(ares__round_up_pow2(list->cnt + 1)); } if (list->levels > max_level) { max_level = list->levels; } return max_level; } static size_t ares__slist_calc_level(ares__slist_t *list) { size_t max_level = ares__slist_max_level(list); size_t level; for (level = 1; ares__slist_coin_flip(list) && level < max_level; level++) ; return level; } static void ares__slist_node_push(ares__slist_t *list, ares__slist_node_t *node) { size_t i; ares__slist_node_t *left = NULL; /* Scan from highest level in the slist, even if we're not using that number * of levels for this entry as this is what makes it O(log n) */ for (i = list->levels; i-- > 0;) { /* set left if left is NULL and the current node value is greater than the * head at this level */ if (left == NULL && list->head[i] != NULL && list->cmp(node->data, list->head[i]->data) > 0) { left = list->head[i]; } if (left != NULL) { /* scan forward to find our insertion point */ while (left->next[i] != NULL && list->cmp(node->data, left->next[i]->data) > 0) { left = left->next[i]; } } /* search only as we didn't randomly select this number of levels */ if (i >= node->levels) { continue; } if (left == NULL) { /* head insertion */ node->next[i] = list->head[i]; node->prev[i] = NULL; list->head[i] = node; } else { /* Chain */ node->next[i] = left->next[i]; node->prev[i] = left; left->next[i] = node; } if (node->next[i] != NULL) { /* chain prev */ node->next[i]->prev[i] = node; } else { if (i == 0) { /* update tail */ list->tail = node; } } } } 
ares__slist_node_t *ares__slist_insert(ares__slist_t *list, void *val) { ares__slist_node_t *node = NULL; if (list == NULL || val == NULL) { return NULL; } node = ares_malloc_zero(sizeof(*node)); if (node == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } node->data = val; node->parent = list; /* Randomly determine the number of levels we want to use */ node->levels = ares__slist_calc_level(list); /* Allocate array of next and prev nodes for linking each level */ node->next = ares_malloc_zero(sizeof(*node->next) * node->levels); if (node->next == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } node->prev = ares_malloc_zero(sizeof(*node->prev) * node->levels); if (node->prev == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } /* If the number of levels is greater than we currently support in the slist, * increase the count */ if (list->levels < node->levels) { void *ptr = ares_realloc_zero(list->head, sizeof(*list->head) * list->levels, sizeof(*list->head) * node->levels); if (ptr == NULL) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } list->head = ptr; list->levels = node->levels; } ares__slist_node_push(list, node); list->cnt++; return node; /* LCOV_EXCL_START: OutOfMemory */ fail: if (node) { ares_free(node->prev); ares_free(node->next); ares_free(node); } return NULL; /* LCOV_EXCL_STOP */ } static void ares__slist_node_pop(ares__slist_node_t *node) { ares__slist_t *list = node->parent; size_t i; /* relink each node at each level */ for (i = node->levels; i-- > 0;) { if (node->next[i] == NULL) { if (i == 0) { list->tail = node->prev[0]; } } else { node->next[i]->prev[i] = node->prev[i]; } if (node->prev[i] == NULL) { list->head[i] = node->next[i]; } else { node->prev[i]->next[i] = node->next[i]; } } memset(node->next, 0, sizeof(*node->next) * node->levels); memset(node->prev, 0, sizeof(*node->prev) * node->levels); } void *ares__slist_node_claim(ares__slist_node_t *node) { ares__slist_t *list; void *val; if (node == NULL) { return NULL; } list = node->parent; val = node->data; ares__slist_node_pop(node); ares_free(node->next); ares_free(node->prev); ares_free(node); list->cnt--; return val; } void ares__slist_node_reinsert(ares__slist_node_t *node) { ares__slist_t *list; if (node == NULL) { return; } list = node->parent; ares__slist_node_pop(node); ares__slist_node_push(list, node); } ares__slist_node_t *ares__slist_node_find(ares__slist_t *list, const void *val) { size_t i; ares__slist_node_t *node = NULL; int rv = -1; if (list == NULL || val == NULL) { return NULL; } /* Scan nodes starting at the highest level. For each level scan forward * until the value is between the prior and next node, or if equal quit * as we found a match */ for (i = list->levels; i-- > 0;) { if (node == NULL) { node = list->head[i]; } if (node == NULL) { continue; } do { rv = list->cmp(val, node->data); if (rv < 0) { /* back off, our value is greater than current node reference */ node = node->prev[i]; } else if (rv > 0) { /* move forward and try again. if it goes past, it will loop again and * go to previous entry */ node = node->next[i]; } /* rv == 0 will terminate loop */ } while (node != NULL && rv > 0); /* Found a match, no need to continue */ if (rv == 0) { break; } } /* no match */ if (rv != 0) { return NULL; } /* The list may have multiple entries that match. They're guaranteed to be * in order, but we're not guaranteed to have selected the _first_ matching * node. 
Lets scan backwards to find the first match */ while (node->prev[0] != NULL && list->cmp(node->prev[0]->data, val) == 0) { node = node->prev[0]; } return node; } ares__slist_node_t *ares__slist_node_first(ares__slist_t *list) { if (list == NULL) { return NULL; } return list->head[0]; } ares__slist_node_t *ares__slist_node_last(ares__slist_t *list) { if (list == NULL) { return NULL; } return list->tail; } ares__slist_node_t *ares__slist_node_next(ares__slist_node_t *node) { if (node == NULL) { return NULL; } return node->next[0]; } ares__slist_node_t *ares__slist_node_prev(ares__slist_node_t *node) { if (node == NULL) { return NULL; } return node->prev[0]; } void *ares__slist_node_val(ares__slist_node_t *node) { if (node == NULL) { return NULL; } return node->data; } size_t ares__slist_len(const ares__slist_t *list) { if (list == NULL) { return 0; } return list->cnt; } ares__slist_t *ares__slist_node_parent(ares__slist_node_t *node) { if (node == NULL) { return NULL; } return node->parent; } void *ares__slist_first_val(ares__slist_t *list) { return ares__slist_node_val(ares__slist_node_first(list)); } void *ares__slist_last_val(ares__slist_t *list) { return ares__slist_node_val(ares__slist_node_last(list)); } void ares__slist_node_destroy(ares__slist_node_t *node) { ares__slist_destructor_t destruct; void *val; if (node == NULL) { return; } destruct = node->parent->destruct; val = ares__slist_node_claim(node); if (val != NULL && destruct != NULL) { destruct(val); } } void ares__slist_destroy(ares__slist_t *list) { ares__slist_node_t *node; if (list == NULL) { return; } while ((node = ares__slist_node_first(list)) != NULL) { ares__slist_node_destroy(node); } ares_free(list->head); ares_free(list); } gevent-24.11.1/deps/c-ares/src/lib/dsa/ares__slist.h000066400000000000000000000156221471441230600220510ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__SLIST_H #define __ARES__SLIST_H /*! \addtogroup ares__slist SkipList Data Structure * * This data structure is known as a Skip List, which in essence is a sorted * linked list with multiple levels of linkage to gain some algorithmic * advantages. The usage symantecs are almost identical to what you'd expect * with a linked list. 
* * Average time complexity: * - Insert: O(log n) * - Search: O(log n) * - Delete: O(1) -- delete assumes you hold a node pointer * * It should be noted, however, that "effort" involved with an insert or * remove operation is higher than a normal linked list. For very small * lists this may be less efficient, but for any list with a moderate number * of entries this will prove much more efficient. * * This data structure is often compared with a Binary Search Tree in * functionality and usage. * * @{ */ struct ares__slist; /*! SkipList Object, opaque */ typedef struct ares__slist ares__slist_t; struct ares__slist_node; /*! SkipList Node Object, opaque */ typedef struct ares__slist_node ares__slist_node_t; /*! SkipList Node Value destructor callback * * \param[in] data User-defined data to destroy */ typedef void (*ares__slist_destructor_t)(void *data); /*! SkipList comparison function * * \param[in] data1 First user-defined data object * \param[in] data2 Second user-defined data object * \return < 0 if data1 < data1, > 0 if data1 > data2, 0 if data1 == data2 */ typedef int (*ares__slist_cmp_t)(const void *data1, const void *data2); /*! Create SkipList * * \param[in] rand_state Initialized ares random state. * \param[in] cmp SkipList comparison function * \param[in] destruct SkipList Node Value Destructor. Optional, use NULL. * \return Initialized SkipList Object or NULL on misuse or ENOMEM */ ares__slist_t *ares__slist_create(ares_rand_state *rand_state, ares__slist_cmp_t cmp, ares__slist_destructor_t destruct); /*! Replace SkipList Node Value Destructor * * \param[in] list Initialized SkipList Object * \param[in] destruct Replacement destructor. May be NULL. */ void ares__slist_replace_destructor(ares__slist_t *list, ares__slist_destructor_t destruct); /*! Insert Value into SkipList * * \param[in] list Initialized SkipList Object * \param[in] val Node Value. Must not be NULL. Function takes ownership * and will have destructor called. * \return SkipList Node Object or NULL on misuse or ENOMEM */ ares__slist_node_t *ares__slist_insert(ares__slist_t *list, void *val); /*! Fetch first node in SkipList * * \param[in] list Initialized SkipList Object * \return SkipList Node Object or NULL if none */ ares__slist_node_t *ares__slist_node_first(ares__slist_t *list); /*! Fetch last node in SkipList * * \param[in] list Initialized SkipList Object * \return SkipList Node Object or NULL if none */ ares__slist_node_t *ares__slist_node_last(ares__slist_t *list); /*! Fetch next node in SkipList * * \param[in] node SkipList Node Object * \return SkipList Node Object or NULL if none */ ares__slist_node_t *ares__slist_node_next(ares__slist_node_t *node); /*! Fetch previous node in SkipList * * \param[in] node SkipList Node Object * \return SkipList Node Object or NULL if none */ ares__slist_node_t *ares__slist_node_prev(ares__slist_node_t *node); /*! Fetch SkipList Node Object by Value * * \param[in] list Initialized SkipList Object * \param[in] val Object to use for comparison * \return SkipList Node Object or NULL if not found */ ares__slist_node_t *ares__slist_node_find(ares__slist_t *list, const void *val); /*! Fetch Node Value * * \param[in] node SkipList Node Object * \return user defined node value */ void *ares__slist_node_val(ares__slist_node_t *node); /*! Fetch number of entries in SkipList Object * * \param[in] list Initialized SkipList Object * \return number of entries */ size_t ares__slist_len(const ares__slist_t *list); /*! 
Fetch SkipList Object from SkipList Node * * \param[in] node SkipList Node Object * \return SkipList Object */ ares__slist_t *ares__slist_node_parent(ares__slist_node_t *node); /*! Fetch first Node Value in SkipList * * \param[in] list Initialized SkipList Object * \return user defined node value or NULL if none */ void *ares__slist_first_val(ares__slist_t *list); /*! Fetch last Node Value in SkipList * * \param[in] list Initialized SkipList Object * \return user defined node value or NULL if none */ void *ares__slist_last_val(ares__slist_t *list); /*! Take back ownership of Node Value in SkipList, remove from SkipList. * * \param[in] node SkipList Node Object * \return user defined node value */ void *ares__slist_node_claim(ares__slist_node_t *node); /*! The internals of the node have changed, thus its position in the sorted * list is no longer valid. This function will remove it and re-add it to * the proper position without needing to perform any memory allocations * and thus cannot fail. * * \param[in] node SkipList Node Object */ void ares__slist_node_reinsert(ares__slist_node_t *node); /*! Remove Node from SkipList, calling destructor for Node Value. * * \param[in] node SkipList Node Object */ void ares__slist_node_destroy(ares__slist_node_t *node); /*! Destroy SkipList Object. If there are any nodes, they will be destroyed. * * \param[in] list Initialized SkipList Object */ void ares__slist_destroy(ares__slist_t *list); /*! @} */ #endif /* __ARES__SLIST_H */ gevent-24.11.1/deps/c-ares/src/lib/event/000077500000000000000000000000001471441230600177355ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/event/ares_event.h000066400000000000000000000171311471441230600222440ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__EVENT_H #define __ARES__EVENT_H struct ares_event; typedef struct ares_event ares_event_t; typedef enum { ARES_EVENT_FLAG_NONE = 0, ARES_EVENT_FLAG_READ = 1 << 0, ARES_EVENT_FLAG_WRITE = 1 << 1, ARES_EVENT_FLAG_OTHER = 1 << 2 } ares_event_flags_t; typedef void (*ares_event_cb_t)(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags); typedef void (*ares_event_free_data_t)(void *data); typedef void (*ares_event_signal_cb_t)(const ares_event_t *event); struct ares_event { /*! Registered event thread this event is bound to */ ares_event_thread_t *e; /*! Flags to monitor. 
OTHER is only allowed if the socket is ARES_SOCKET_BAD. */ ares_event_flags_t flags; /*! Callback to be called when event is triggered */ ares_event_cb_t cb; /*! Socket to monitor, allowed to be ARES_SOCKET_BAD if not monitoring a * socket. */ ares_socket_t fd; /*! Data associated with event handle that will be passed to the callback. * Typically OS/event subsystem specific data. * Optional, may be NULL. */ /*! Data to be passed to callback. Optional, may be NULL. */ void *data; /*! When cleaning up the registered event (either when removed or during * shutdown), this function will be called to clean up the user-supplied * data. Optional, May be NULL. */ ares_event_free_data_t free_data_cb; /*! Callback to call to trigger an event. */ ares_event_signal_cb_t signal_cb; }; typedef struct { const char *name; ares_bool_t (*init)(ares_event_thread_t *e); void (*destroy)(ares_event_thread_t *e); ares_bool_t (*event_add)(ares_event_t *event); void (*event_del)(ares_event_t *event); void (*event_mod)(ares_event_t *event, ares_event_flags_t new_flags); size_t (*wait)(ares_event_thread_t *e, unsigned long timeout_ms); } ares_event_sys_t; struct ares_event_configchg; typedef struct ares_event_configchg ares_event_configchg_t; ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e); void ares_event_configchg_destroy(ares_event_configchg_t *configchg); struct ares_event_thread { /*! Whether the event thread should be online or not. Checked on every wake * event before sleeping. */ ares_bool_t isup; /*! Handle to the thread for joining during shutdown */ ares__thread_t *thread; /*! Lock to protect the data contained within the event thread itself */ ares__thread_mutex_t *mutex; /*! Reference to the ares channel, for being able to call things like * ares_timeout() and ares_process_fd(). */ ares_channel_t *channel; /*! Not-yet-processed event handle updates. These will get enqueued by a * thread other than the event thread itself. The event thread will then * be woken then process these updates itself */ ares__llist_t *ev_updates; /*! Registered socket event handles */ ares__htable_asvp_t *ev_sock_handles; /*! Registered custom event handles. Typically used for external triggering. */ ares__htable_vpvp_t *ev_cust_handles; /*! Pointer to the event handle which is used to signal and wake the event * thread itself. This is needed to be able to do things like update the * file descriptors being waited on and to wake the event subsystem during * shutdown */ ares_event_t *ev_signal; /*! Handle for configuration change monitoring */ ares_event_configchg_t *configchg; /* Event subsystem callbacks */ const ares_event_sys_t *ev_sys; /* Event subsystem private data */ void *ev_sys_data; }; /*! Queue an update for the event handle. * * Will search by the fd passed if not ARES_SOCKET_BAD to find a match and * perform an update or delete (depending on flags). Otherwise will add. * Do not use the event handle returned if its not guaranteed to be an add * operation. * * \param[out] event Event handle. Optional, can be NULL. This handle * will be invalidate quickly if the result of the * operation is not an ADD. * \param[in] e pointer to event thread handle * \param[in] flags flags for the event handle. Use * ARES_EVENT_FLAG_NONE if removing a socket from * queue (not valid if socket is ARES_SOCKET_BAD). * Non-socket events cannot be removed, and must have * ARES_EVENT_FLAG_OTHER set. * \param[in] cb Callback to call when * event is triggered. 
Required if flags is not * ARES_EVENT_FLAG_NONE. Not allowed to be * changed, ignored on modification. * \param[in] fd File descriptor/socket to monitor. May * be ARES_SOCKET_BAD if not monitoring file * descriptor. * \param[in] data Optional. Caller-supplied data to be passed to * callback. Only allowed on initial add, cannot be * modified later, ignored on modification. * \param[in] free_data_cb Optional. Callback to clean up caller-supplied * data. Only allowed on initial add, cannot be * modified later, ignored on modification. * \param[in] signal_cb Optional. Callback to call to trigger an event. * \return ARES_SUCCESS on success */ ares_status_t ares_event_update(ares_event_t **event, ares_event_thread_t *e, ares_event_flags_t flags, ares_event_cb_t cb, ares_socket_t fd, void *data, ares_event_free_data_t free_data_cb, ares_event_signal_cb_t signal_cb); #ifdef HAVE_PIPE ares_event_t *ares_pipeevent_create(ares_event_thread_t *e); #endif #ifdef HAVE_POLL extern const ares_event_sys_t ares_evsys_poll; #endif #ifdef HAVE_KQUEUE extern const ares_event_sys_t ares_evsys_kqueue; #endif #ifdef HAVE_EPOLL extern const ares_event_sys_t ares_evsys_epoll; #endif #ifdef _WIN32 extern const ares_event_sys_t ares_evsys_win32; #endif /* All systems have select(), but not all have a way to wake, so we require * pipe() to wake the select() */ #ifdef HAVE_PIPE extern const ares_event_sys_t ares_evsys_select; #endif #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_configchg.c000066400000000000000000000452471471441230600242570ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" #ifdef __ANDROID__ ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { (void)configchg; (void)e; /* No ability */ return ARES_ENOTIMP; } void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { /* No-op */ (void)configchg; } #elif defined(__linux__) # include struct ares_event_configchg { int inotify_fd; ares_event_thread_t *e; }; void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { if (configchg == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Tell event system to stop monitoring for changes. 
This will cause the * cleanup to be called */ ares_event_update(NULL, configchg->e, ARES_EVENT_FLAG_NONE, NULL, configchg->inotify_fd, NULL, NULL, NULL); } static void ares_event_configchg_free(void *data) { ares_event_configchg_t *configchg = data; if (configchg == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (configchg->inotify_fd >= 0) { close(configchg->inotify_fd); configchg->inotify_fd = -1; } ares_free(configchg); } static void ares_event_configchg_cb(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags) { const ares_event_configchg_t *configchg = data; /* Some systems cannot read integer variables if they are not * properly aligned. On other systems, incorrect alignment may * decrease performance. Hence, the buffer used for reading from * the inotify file descriptor should have the same alignment as * struct inotify_event. */ unsigned char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event)))); const struct inotify_event *event; ssize_t len; ares_bool_t triggered = ARES_FALSE; (void)fd; (void)flags; while (1) { const unsigned char *ptr; len = read(configchg->inotify_fd, buf, sizeof(buf)); if (len <= 0) { break; } /* Loop over all events in the buffer. Says kernel will check the buffer * size provided, so I assume it won't ever return partial events. */ for (ptr = buf; ptr < buf + len; ptr += sizeof(struct inotify_event) + event->len) { event = (const struct inotify_event *)((const void *)ptr); if (event->len == 0 || ares_strlen(event->name) == 0) { continue; } if (strcasecmp(event->name, "resolv.conf") == 0 || strcasecmp(event->name, "nsswitch.conf") == 0) { triggered = ARES_TRUE; } } } /* Only process after all events are read. No need to process more often as * we don't want to reload the config back to back */ if (triggered) { ares_reinit(e->channel); } } ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { ares_status_t status = ARES_SUCCESS; ares_event_configchg_t *c; (void)e; /* Not used by this implementation */ *configchg = NULL; c = ares_malloc_zero(sizeof(*c)); if (c == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } c->e = e; c->inotify_fd = inotify_init1(IN_NONBLOCK | IN_CLOEXEC); if (c->inotify_fd == -1) { status = ARES_ESERVFAIL; /* LCOV_EXCL_LINE: UntestablePath */ goto done; /* LCOV_EXCL_LINE: UntestablePath */ } /* We need to monitor /etc/resolv.conf, /etc/nsswitch.conf */ if (inotify_add_watch(c->inotify_fd, "/etc", IN_CREATE | IN_MODIFY | IN_MOVED_TO | IN_ONLYDIR) == -1) { status = ARES_ESERVFAIL; /* LCOV_EXCL_LINE: UntestablePath */ goto done; /* LCOV_EXCL_LINE: UntestablePath */ } status = ares_event_update(NULL, e, ARES_EVENT_FLAG_READ, ares_event_configchg_cb, c->inotify_fd, c, ares_event_configchg_free, NULL); done: if (status != ARES_SUCCESS) { ares_event_configchg_free(c); } else { *configchg = c; } return status; } #elif defined(USE_WINSOCK) # include # include # include # include struct ares_event_configchg { HANDLE ifchg_hnd; HKEY regip4; HANDLE regip4_event; HANDLE regip4_wait; HKEY regip6; HANDLE regip6_event; HANDLE regip6_wait; ares_event_thread_t *e; }; void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { if (configchg == NULL) { return; } # ifdef HAVE_NOTIFYIPINTERFACECHANGE if (configchg->ifchg_hnd != NULL) { CancelMibChangeNotify2(configchg->ifchg_hnd); configchg->ifchg_hnd = NULL; } # endif # ifdef HAVE_REGISTERWAITFORSINGLEOBJECT if (configchg->regip4_wait != NULL) { UnregisterWait(configchg->regip4_wait); 
configchg->regip4_wait = NULL; } if (configchg->regip6_wait != NULL) { UnregisterWait(configchg->regip6_wait); configchg->regip6_wait = NULL; } if (configchg->regip4 != NULL) { RegCloseKey(configchg->regip4); configchg->regip4 = NULL; } if (configchg->regip6 != NULL) { RegCloseKey(configchg->regip6); configchg->regip6 = NULL; } if (configchg->regip4_event != NULL) { CloseHandle(configchg->regip4_event); configchg->regip4_event = NULL; } if (configchg->regip6_event != NULL) { CloseHandle(configchg->regip6_event); configchg->regip6_event = NULL; } # endif ares_free(configchg); } # ifdef HAVE_NOTIFYIPINTERFACECHANGE static void NETIOAPI_API_ ares_event_configchg_ip_cb(PVOID CallerContext, PMIB_IPINTERFACE_ROW Row, MIB_NOTIFICATION_TYPE NotificationType) { ares_event_configchg_t *configchg = CallerContext; (void)Row; (void)NotificationType; ares_reinit(configchg->e->channel); } # endif static ares_bool_t ares_event_configchg_regnotify(ares_event_configchg_t *configchg) { # ifdef HAVE_REGISTERWAITFORSINGLEOBJECT # if defined(__WATCOMC__) && !defined(REG_NOTIFY_THREAD_AGNOSTIC) # define REG_NOTIFY_THREAD_AGNOSTIC 0x10000000L # endif DWORD flags = REG_NOTIFY_CHANGE_NAME | REG_NOTIFY_CHANGE_LAST_SET | REG_NOTIFY_THREAD_AGNOSTIC; if (RegNotifyChangeKeyValue(configchg->regip4, TRUE, flags, configchg->regip4_event, TRUE) != ERROR_SUCCESS) { return ARES_FALSE; } if (RegNotifyChangeKeyValue(configchg->regip6, TRUE, flags, configchg->regip6_event, TRUE) != ERROR_SUCCESS) { return ARES_FALSE; } # else (void)configchg; # endif return ARES_TRUE; } static VOID CALLBACK ares_event_configchg_reg_cb(PVOID lpParameter, BOOLEAN TimerOrWaitFired) { ares_event_configchg_t *configchg = lpParameter; (void)TimerOrWaitFired; ares_reinit(configchg->e->channel); /* Re-arm, as its single-shot. However, we don't know which one needs to * be re-armed, so we just do both */ ares_event_configchg_regnotify(configchg); } ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { ares_status_t status = ARES_SUCCESS; ares_event_configchg_t *c = NULL; c = ares_malloc_zero(sizeof(**configchg)); if (c == NULL) { return ARES_ENOMEM; } c->e = e; # ifdef HAVE_NOTIFYIPINTERFACECHANGE /* NOTE: If a user goes into the control panel and changes the network * adapter DNS addresses manually, this will NOT trigger a notification. * We've also tried listening on NotifyUnicastIpAddressChange(), but * that didn't get triggered either. 
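* Changes made that way do still update the per-interface registry keys, so the RegNotifyChangeKeyValue() watches set up below will observe them.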
*/ if (NotifyIpInterfaceChange(AF_UNSPEC, ares_event_configchg_ip_cb, c, FALSE, &c->ifchg_hnd) != NO_ERROR) { status = ARES_ESERVFAIL; goto done; } # endif # ifdef HAVE_REGISTERWAITFORSINGLEOBJECT /* Monitor HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters\Interfaces * and HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces * for changes via RegNotifyChangeKeyValue() */ if (RegOpenKeyExW( HKEY_LOCAL_MACHINE, L"SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters\\Interfaces", 0, KEY_NOTIFY, &c->regip4) != ERROR_SUCCESS) { status = ARES_ESERVFAIL; goto done; } if (RegOpenKeyExW( HKEY_LOCAL_MACHINE, L"SYSTEM\\CurrentControlSet\\Services\\Tcpip6\\Parameters\\Interfaces", 0, KEY_NOTIFY, &c->regip6) != ERROR_SUCCESS) { status = ARES_ESERVFAIL; goto done; } c->regip4_event = CreateEvent(NULL, TRUE, FALSE, NULL); if (c->regip4_event == NULL) { status = ARES_ESERVFAIL; goto done; } c->regip6_event = CreateEvent(NULL, TRUE, FALSE, NULL); if (c->regip6_event == NULL) { status = ARES_ESERVFAIL; goto done; } if (!RegisterWaitForSingleObject(&c->regip4_wait, c->regip4_event, ares_event_configchg_reg_cb, c, INFINITE, WT_EXECUTEDEFAULT)) { status = ARES_ESERVFAIL; goto done; } if (!RegisterWaitForSingleObject(&c->regip6_wait, c->regip6_event, ares_event_configchg_reg_cb, c, INFINITE, WT_EXECUTEDEFAULT)) { status = ARES_ESERVFAIL; goto done; } # endif if (!ares_event_configchg_regnotify(c)) { status = ARES_ESERVFAIL; goto done; } done: if (status != ARES_SUCCESS) { ares_event_configchg_destroy(c); } else { *configchg = c; } return status; } #elif defined(__APPLE__) # include # include # include # include # include struct ares_event_configchg { int fd; int token; }; void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { (void)configchg; /* Cleanup happens automatically */ } static void ares_event_configchg_free(void *data) { ares_event_configchg_t *configchg = data; if (configchg == NULL) { return; } if (configchg->fd >= 0) { notify_cancel(configchg->token); /* automatically closes fd */ configchg->fd = -1; } ares_free(configchg); } static void ares_event_configchg_cb(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags) { ares_event_configchg_t *configchg = data; ares_bool_t triggered = ARES_FALSE; (void)fd; (void)flags; while (1) { int t = 0; ssize_t len; len = read(configchg->fd, &t, sizeof(t)); if (len < (ssize_t)sizeof(t)) { break; } /* Token is read in network byte order (yeah, docs don't mention this) */ t = (int)ntohl(t); if (t != configchg->token) { continue; } triggered = ARES_TRUE; } /* Only process after all events are read. No need to process more often as * we don't want to reload the config back to back */ if (triggered) { ares_reinit(e->channel); } } ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { ares_status_t status = ARES_SUCCESS; void *handle = NULL; const char *(*pdns_configuration_notify_key)(void) = NULL; const char *notify_key = NULL; int flags; size_t i; const char *searchlibs[] = { "/usr/lib/libSystem.dylib", "/System/Library/Frameworks/SystemConfiguration.framework/" "SystemConfiguration", NULL }; *configchg = ares_malloc_zero(sizeof(**configchg)); if (*configchg == NULL) { return ARES_ENOMEM; } /* Load symbol as it isn't normally public */ for (i = 0; searchlibs[i] != NULL; i++) { handle = dlopen(searchlibs[i], RTLD_LAZY); if (handle == NULL) { /* Fail, loop! 
*/ continue; } pdns_configuration_notify_key = (const char *(*)(void))dlsym(handle, "dns_configuration_notify_key"); if (pdns_configuration_notify_key != NULL) { break; } /* Fail, loop! */ dlclose(handle); handle = NULL; } if (pdns_configuration_notify_key == NULL) { status = ARES_ESERVFAIL; goto done; } notify_key = pdns_configuration_notify_key(); if (notify_key == NULL) { status = ARES_ESERVFAIL; goto done; } if (notify_register_file_descriptor(notify_key, &(*configchg)->fd, 0, &(*configchg)->token) != NOTIFY_STATUS_OK) { status = ARES_ESERVFAIL; goto done; } /* Set file descriptor to non-blocking */ flags = fcntl((*configchg)->fd, F_GETFL, 0); fcntl((*configchg)->fd, F_SETFL, flags | O_NONBLOCK); /* Register file descriptor with event subsystem */ status = ares_event_update(NULL, e, ARES_EVENT_FLAG_READ, ares_event_configchg_cb, (*configchg)->fd, *configchg, ares_event_configchg_free, NULL); done: if (status != ARES_SUCCESS) { ares_event_configchg_free(*configchg); *configchg = NULL; } if (handle) { dlclose(handle); } return status; } #elif defined(HAVE_STAT) && !defined(_WIN32) # ifdef HAVE_SYS_TYPES_H # include # endif # ifdef HAVE_SYS_STAT_H # include # endif typedef struct { size_t size; time_t mtime; } fileinfo_t; struct ares_event_configchg { ares_bool_t isup; ares__thread_t *thread; ares__htable_strvp_t *filestat; ares__thread_mutex_t *lock; ares__thread_cond_t *wake; const char *resolvconf_path; ares_event_thread_t *e; }; static ares_status_t config_change_check(ares__htable_strvp_t *filestat, const char *resolvconf_path) { size_t i; const char *configfiles[5]; ares_bool_t changed = ARES_FALSE; configfiles[0] = resolvconf_path; configfiles[1] = "/etc/nsswitch.conf"; configfiles[2] = "/etc/netsvc.conf"; configfiles[3] = "/etc/svc.conf"; configfiles[4] = NULL; for (i = 0; configfiles[i] != NULL; i++) { fileinfo_t *fi = ares__htable_strvp_get_direct(filestat, configfiles[i]); struct stat st; if (stat(configfiles[i], &st) == 0) { if (fi == NULL) { fi = ares_malloc_zero(sizeof(*fi)); if (fi == NULL) { return ARES_ENOMEM; } if (!ares__htable_strvp_insert(filestat, configfiles[i], fi)) { ares_free(fi); return ARES_ENOMEM; } } if (fi->size != (size_t)st.st_size || fi->mtime != (time_t)st.st_mtime) { changed = ARES_TRUE; } fi->size = (size_t)st.st_size; fi->mtime = (time_t)st.st_mtime; } else if (fi != NULL) { /* File no longer exists, remove */ ares__htable_strvp_remove(filestat, configfiles[i]); changed = ARES_TRUE; } } if (changed) { return ARES_SUCCESS; } return ARES_ENOTFOUND; } static void *ares_event_configchg_thread(void *arg) { ares_event_configchg_t *c = arg; ares__thread_mutex_lock(c->lock); while (c->isup) { ares_status_t status; if (ares__thread_cond_timedwait(c->wake, c->lock, 30000) != ARES_ETIMEOUT) { continue; } /* make sure status didn't change even though we got a timeout */ if (!c->isup) { break; } status = config_change_check(c->filestat, c->resolvconf_path); if (status == ARES_SUCCESS) { ares_reinit(c->e->channel); } } ares__thread_mutex_unlock(c->lock); return NULL; } ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { ares_status_t status = ARES_SUCCESS; ares_event_configchg_t *c = NULL; *configchg = NULL; c = ares_malloc_zero(sizeof(*c)); if (c == NULL) { status = ARES_ENOMEM; goto done; } c->e = e; c->filestat = ares__htable_strvp_create(ares_free); if (c->filestat == NULL) { status = ARES_ENOMEM; goto done; } c->wake = ares__thread_cond_create(); if (c->wake == NULL) { status = ARES_ENOMEM; goto done; } 
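/* Prefer the resolvconf_path configured on the channel; fall back to the compiled-in PATH_RESOLV_CONF default when it is unset. */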
c->resolvconf_path = c->e->channel->resolvconf_path; if (c->resolvconf_path == NULL) { c->resolvconf_path = PATH_RESOLV_CONF; } status = config_change_check(c->filestat, c->resolvconf_path); if (status == ARES_ENOMEM) { goto done; } c->isup = ARES_TRUE; status = ares__thread_create(&c->thread, ares_event_configchg_thread, c); done: if (status != ARES_SUCCESS) { ares_event_configchg_destroy(c); } else { *configchg = c; } return status; } void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { if (configchg == NULL) { return; } if (configchg->lock) { ares__thread_mutex_lock(configchg->lock); } configchg->isup = ARES_FALSE; if (configchg->wake) { ares__thread_cond_signal(configchg->wake); } if (configchg->lock) { ares__thread_mutex_unlock(configchg->lock); } if (configchg->thread) { void *rv = NULL; ares__thread_join(configchg->thread, &rv); } ares__thread_mutex_destroy(configchg->lock); ares__thread_cond_destroy(configchg->wake); ares__htable_strvp_destroy(configchg->filestat); ares_free(configchg); } #else ares_status_t ares_event_configchg_init(ares_event_configchg_t **configchg, ares_event_thread_t *e) { /* No ability */ return ARES_ENOTIMP; } void ares_event_configchg_destroy(ares_event_configchg_t *configchg) { /* No-op */ } #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_epoll.c000066400000000000000000000132201471441230600234250ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" #ifdef HAVE_SYS_EPOLL_H # include #endif #ifdef HAVE_FCNTL_H # include #endif #ifdef HAVE_EPOLL typedef struct { int epoll_fd; } ares_evsys_epoll_t; static void ares_evsys_epoll_destroy(ares_event_thread_t *e) { ares_evsys_epoll_t *ep = NULL; if (e == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ep = e->ev_sys_data; if (ep == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ep->epoll_fd != -1) { close(ep->epoll_fd); } ares_free(ep); e->ev_sys_data = NULL; } static ares_bool_t ares_evsys_epoll_init(ares_event_thread_t *e) { ares_evsys_epoll_t *ep = NULL; ep = ares_malloc_zero(sizeof(*ep)); if (ep == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: OutOfMemory */ } e->ev_sys_data = ep; ep->epoll_fd = epoll_create1(EPOLL_CLOEXEC); if (ep->epoll_fd == -1) { ares_evsys_epoll_destroy(e); /* LCOV_EXCL_LINE: UntestablePath */ return ARES_FALSE; /* LCOV_EXCL_LINE: UntestablePath */ } e->ev_signal = ares_pipeevent_create(e); if (e->ev_signal == NULL) { ares_evsys_epoll_destroy(e); /* LCOV_EXCL_LINE: UntestablePath */ return ARES_FALSE; /* LCOV_EXCL_LINE: UntestablePath */ } return ARES_TRUE; } static ares_bool_t ares_evsys_epoll_event_add(ares_event_t *event) { const ares_event_thread_t *e = event->e; const ares_evsys_epoll_t *ep = e->ev_sys_data; struct epoll_event epev; memset(&epev, 0, sizeof(epev)); epev.data.fd = event->fd; epev.events = EPOLLRDHUP | EPOLLERR | EPOLLHUP; if (event->flags & ARES_EVENT_FLAG_READ) { epev.events |= EPOLLIN; } if (event->flags & ARES_EVENT_FLAG_WRITE) { epev.events |= EPOLLOUT; } if (epoll_ctl(ep->epoll_fd, EPOLL_CTL_ADD, event->fd, &epev) != 0) { return ARES_FALSE; /* LCOV_EXCL_LINE: UntestablePath */ } return ARES_TRUE; } static void ares_evsys_epoll_event_del(ares_event_t *event) { const ares_event_thread_t *e = event->e; const ares_evsys_epoll_t *ep = e->ev_sys_data; struct epoll_event epev; memset(&epev, 0, sizeof(epev)); epev.data.fd = event->fd; epoll_ctl(ep->epoll_fd, EPOLL_CTL_DEL, event->fd, &epev); } static void ares_evsys_epoll_event_mod(ares_event_t *event, ares_event_flags_t new_flags) { const ares_event_thread_t *e = event->e; const ares_evsys_epoll_t *ep = e->ev_sys_data; struct epoll_event epev; memset(&epev, 0, sizeof(epev)); epev.data.fd = event->fd; epev.events = EPOLLRDHUP | EPOLLERR | EPOLLHUP; if (new_flags & ARES_EVENT_FLAG_READ) { epev.events |= EPOLLIN; } if (new_flags & ARES_EVENT_FLAG_WRITE) { epev.events |= EPOLLOUT; } epoll_ctl(ep->epoll_fd, EPOLL_CTL_MOD, event->fd, &epev); } static size_t ares_evsys_epoll_wait(ares_event_thread_t *e, unsigned long timeout_ms) { struct epoll_event events[8]; size_t nevents = sizeof(events) / sizeof(*events); const ares_evsys_epoll_t *ep = e->ev_sys_data; int rv; size_t i; size_t cnt = 0; memset(events, 0, sizeof(events)); rv = epoll_wait(ep->epoll_fd, events, (int)nevents, (timeout_ms == 0) ? 
-1 : (int)timeout_ms); if (rv < 0) { return 0; /* LCOV_EXCL_LINE: UntestablePath */ } nevents = (size_t)rv; for (i = 0; i < nevents; i++) { ares_event_t *ev; ares_event_flags_t flags = 0; ev = ares__htable_asvp_get_direct(e->ev_sock_handles, (ares_socket_t)events[i].data.fd); if (ev == NULL || ev->cb == NULL) { continue; /* LCOV_EXCL_LINE: DefensiveCoding */ } cnt++; if (events[i].events & (EPOLLIN | EPOLLRDHUP | EPOLLHUP | EPOLLERR)) { flags |= ARES_EVENT_FLAG_READ; } if (events[i].events & EPOLLOUT) { flags |= ARES_EVENT_FLAG_WRITE; } ev->cb(e, ev->fd, ev->data, flags); } return cnt; } const ares_event_sys_t ares_evsys_epoll = { "epoll", ares_evsys_epoll_init, ares_evsys_epoll_destroy, ares_evsys_epoll_event_add, ares_evsys_epoll_event_del, ares_evsys_epoll_event_mod, ares_evsys_epoll_wait }; #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_kqueue.c000066400000000000000000000151011471441230600236110ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_EVENT_H # include #endif #ifdef HAVE_SYS_TIME_H # include #endif #ifdef HAVE_FCNTL_H # include #endif #ifdef HAVE_KQUEUE typedef struct { int kqueue_fd; struct kevent *changelist; size_t nchanges; size_t nchanges_alloc; } ares_evsys_kqueue_t; static void ares_evsys_kqueue_destroy(ares_event_thread_t *e) { ares_evsys_kqueue_t *kq = NULL; if (e == NULL) { return; } kq = e->ev_sys_data; if (kq == NULL) { return; } if (kq->kqueue_fd != -1) { close(kq->kqueue_fd); } ares_free(kq->changelist); ares_free(kq); e->ev_sys_data = NULL; } static ares_bool_t ares_evsys_kqueue_init(ares_event_thread_t *e) { ares_evsys_kqueue_t *kq = NULL; kq = ares_malloc_zero(sizeof(*kq)); if (kq == NULL) { return ARES_FALSE; } e->ev_sys_data = kq; kq->kqueue_fd = kqueue(); if (kq->kqueue_fd == -1) { ares_evsys_kqueue_destroy(e); return ARES_FALSE; } # ifdef FD_CLOEXEC fcntl(kq->kqueue_fd, F_SETFD, FD_CLOEXEC); # endif kq->nchanges_alloc = 8; kq->changelist = ares_malloc_zero(kq->nchanges_alloc * sizeof(*kq->changelist)); if (kq->changelist == NULL) { ares_evsys_kqueue_destroy(e); return ARES_FALSE; } e->ev_signal = ares_pipeevent_create(e); if (e->ev_signal == NULL) { ares_evsys_kqueue_destroy(e); return ARES_FALSE; } return ARES_TRUE; } static void ares_evsys_kqueue_enqueue(ares_evsys_kqueue_t *kq, int fd, int16_t filter, uint16_t flags) { size_t idx; if (kq == NULL) { return; } idx = kq->nchanges; kq->nchanges++; if (kq->nchanges > kq->nchanges_alloc) { kq->nchanges_alloc <<= 1; kq->changelist = ares_realloc_zero( kq->changelist, (kq->nchanges_alloc >> 1) * sizeof(*kq->changelist), kq->nchanges_alloc * sizeof(*kq->changelist)); } EV_SET(&kq->changelist[idx], fd, filter, flags, 0, 0, 0); } static void ares_evsys_kqueue_event_process(ares_event_t *event, ares_event_flags_t old_flags, ares_event_flags_t new_flags) { ares_event_thread_t *e = event->e; ares_evsys_kqueue_t *kq; if (e == NULL) { return; } kq = e->ev_sys_data; if (kq == NULL) { return; } if (new_flags & ARES_EVENT_FLAG_READ && !(old_flags & ARES_EVENT_FLAG_READ)) { ares_evsys_kqueue_enqueue(kq, event->fd, EVFILT_READ, EV_ADD | EV_ENABLE); } if (!(new_flags & ARES_EVENT_FLAG_READ) && old_flags & ARES_EVENT_FLAG_READ) { ares_evsys_kqueue_enqueue(kq, event->fd, EVFILT_READ, EV_DELETE); } if (new_flags & ARES_EVENT_FLAG_WRITE && !(old_flags & ARES_EVENT_FLAG_WRITE)) { ares_evsys_kqueue_enqueue(kq, event->fd, EVFILT_WRITE, EV_ADD | EV_ENABLE); } if (!(new_flags & ARES_EVENT_FLAG_WRITE) && old_flags & ARES_EVENT_FLAG_WRITE) { ares_evsys_kqueue_enqueue(kq, event->fd, EVFILT_WRITE, EV_DELETE); } } static ares_bool_t ares_evsys_kqueue_event_add(ares_event_t *event) { ares_evsys_kqueue_event_process(event, 0, event->flags); return ARES_TRUE; } static void ares_evsys_kqueue_event_del(ares_event_t *event) { ares_evsys_kqueue_event_process(event, event->flags, 0); } static void ares_evsys_kqueue_event_mod(ares_event_t *event, ares_event_flags_t new_flags) { ares_evsys_kqueue_event_process(event, event->flags, new_flags); } static size_t ares_evsys_kqueue_wait(ares_event_thread_t *e, unsigned long timeout_ms) { struct kevent events[8]; size_t nevents = sizeof(events) / sizeof(*events); ares_evsys_kqueue_t *kq = e->ev_sys_data; int rv; size_t i; struct timespec ts; struct timespec *timeout = NULL; size_t cnt = 0; if (timeout_ms != 0) { ts.tv_sec = (time_t)timeout_ms / 1000; ts.tv_nsec = (timeout_ms % 1000) * 1000 * 1000; timeout = 
&ts; } memset(events, 0, sizeof(events)); rv = kevent(kq->kqueue_fd, kq->changelist, (int)kq->nchanges, events, (int)nevents, timeout); if (rv < 0) { return 0; } /* Changelist was consumed */ kq->nchanges = 0; nevents = (size_t)rv; for (i = 0; i < nevents; i++) { ares_event_t *ev; ares_event_flags_t flags = 0; ev = ares__htable_asvp_get_direct(e->ev_sock_handles, (ares_socket_t)events[i].ident); if (ev == NULL || ev->cb == NULL) { continue; } cnt++; if (events[i].filter == EVFILT_READ || events[i].flags & (EV_EOF | EV_ERROR)) { flags |= ARES_EVENT_FLAG_READ; } else { flags |= ARES_EVENT_FLAG_WRITE; } ev->cb(e, ev->fd, ev->data, flags); } return cnt; } const ares_event_sys_t ares_evsys_kqueue = { "kqueue", ares_evsys_kqueue_init, ares_evsys_kqueue_destroy, ares_evsys_kqueue_event_add, ares_evsys_kqueue_event_del, ares_evsys_kqueue_event_mod, ares_evsys_kqueue_wait }; #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_poll.c000066400000000000000000000100621471441230600232610ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" #ifdef HAVE_POLL_H # include #endif #if defined(HAVE_POLL) static ares_bool_t ares_evsys_poll_init(ares_event_thread_t *e) { e->ev_signal = ares_pipeevent_create(e); if (e->ev_signal == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: UntestablePath */ } return ARES_TRUE; } static void ares_evsys_poll_destroy(ares_event_thread_t *e) { (void)e; } static ares_bool_t ares_evsys_poll_event_add(ares_event_t *event) { (void)event; return ARES_TRUE; } static void ares_evsys_poll_event_del(ares_event_t *event) { (void)event; } static void ares_evsys_poll_event_mod(ares_event_t *event, ares_event_flags_t new_flags) { (void)event; (void)new_flags; } static size_t ares_evsys_poll_wait(ares_event_thread_t *e, unsigned long timeout_ms) { size_t num_fds = 0; ares_socket_t *fdlist = ares__htable_asvp_keys(e->ev_sock_handles, &num_fds); struct pollfd *pollfd = NULL; int rv; size_t cnt = 0; size_t i; if (fdlist != NULL && num_fds) { pollfd = ares_malloc_zero(sizeof(*pollfd) * num_fds); if (pollfd == NULL) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < num_fds; i++) { const ares_event_t *ev = ares__htable_asvp_get_direct(e->ev_sock_handles, fdlist[i]); pollfd[i].fd = ev->fd; if (ev->flags & ARES_EVENT_FLAG_READ) { pollfd[i].events |= POLLIN; } if (ev->flags & ARES_EVENT_FLAG_WRITE) { pollfd[i].events |= POLLOUT; } } } ares_free(fdlist); rv = poll(pollfd, (nfds_t)num_fds, (timeout_ms == 0) ? -1 : (int)timeout_ms); if (rv <= 0) { goto done; } for (i = 0; pollfd != NULL && i < num_fds; i++) { ares_event_t *ev; ares_event_flags_t flags = 0; if (pollfd[i].revents == 0) { continue; } cnt++; ev = ares__htable_asvp_get_direct(e->ev_sock_handles, pollfd[i].fd); if (ev == NULL || ev->cb == NULL) { continue; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (pollfd[i].revents & (POLLERR | POLLHUP | POLLIN)) { flags |= ARES_EVENT_FLAG_READ; } if (pollfd[i].revents & POLLOUT) { flags |= ARES_EVENT_FLAG_WRITE; } ev->cb(e, pollfd[i].fd, ev->data, flags); } done: ares_free(pollfd); return cnt; } const ares_event_sys_t ares_evsys_poll = { "poll", ares_evsys_poll_init, ares_evsys_poll_destroy, /* NoOp */ ares_evsys_poll_event_add, /* NoOp */ ares_evsys_poll_event_del, /* NoOp */ ares_evsys_poll_event_mod, /* NoOp */ ares_evsys_poll_wait }; #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_select.c000066400000000000000000000105121471441230600235720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ /* Some systems might default to something low like 256 (NetBSD), lets define * this to assist. Really, no one should be using select, but lets be safe * anyhow */ #define FD_SETSIZE 4096 #include "ares_private.h" #include "ares_event.h" #ifdef HAVE_SYS_SELECT_H # include #endif /* All systems have select(), but not all have a way to wake, so we require * pipe() to wake the select() */ #if defined(HAVE_PIPE) static ares_bool_t ares_evsys_select_init(ares_event_thread_t *e) { e->ev_signal = ares_pipeevent_create(e); if (e->ev_signal == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: UntestablePath */ } return ARES_TRUE; } static void ares_evsys_select_destroy(ares_event_thread_t *e) { (void)e; } static ares_bool_t ares_evsys_select_event_add(ares_event_t *event) { (void)event; return ARES_TRUE; } static void ares_evsys_select_event_del(ares_event_t *event) { (void)event; } static void ares_evsys_select_event_mod(ares_event_t *event, ares_event_flags_t new_flags) { (void)event; (void)new_flags; } static size_t ares_evsys_select_wait(ares_event_thread_t *e, unsigned long timeout_ms) { size_t num_fds = 0; ares_socket_t *fdlist = ares__htable_asvp_keys(e->ev_sock_handles, &num_fds); int rv; size_t cnt = 0; size_t i; fd_set read_fds; fd_set write_fds; fd_set except_fds; int nfds = 0; struct timeval tv; struct timeval *tout = NULL; FD_ZERO(&read_fds); FD_ZERO(&write_fds); FD_ZERO(&except_fds); for (i = 0; i < num_fds; i++) { const ares_event_t *ev = ares__htable_asvp_get_direct(e->ev_sock_handles, fdlist[i]); if (ev->flags & ARES_EVENT_FLAG_READ) { FD_SET(ev->fd, &read_fds); } if (ev->flags & ARES_EVENT_FLAG_WRITE) { FD_SET(ev->fd, &write_fds); } FD_SET(ev->fd, &except_fds); if (ev->fd + 1 > nfds) { nfds = ev->fd + 1; } } if (timeout_ms) { tv.tv_sec = (int)(timeout_ms / 1000); tv.tv_usec = (int)((timeout_ms % 1000) * 1000); tout = &tv; } rv = select(nfds, &read_fds, &write_fds, &except_fds, tout); if (rv > 0) { for (i = 0; i < num_fds; i++) { ares_event_t *ev; ares_event_flags_t flags = 0; ev = ares__htable_asvp_get_direct(e->ev_sock_handles, fdlist[i]); if (ev == NULL || ev->cb == NULL) { continue; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (FD_ISSET(fdlist[i], &read_fds) || FD_ISSET(fdlist[i], &except_fds)) { flags |= ARES_EVENT_FLAG_READ; } if (FD_ISSET(fdlist[i], &write_fds)) { flags |= ARES_EVENT_FLAG_WRITE; } if (flags == 0) { continue; } cnt++; ev->cb(e, fdlist[i], ev->data, flags); } } ares_free(fdlist); return cnt; } const ares_event_sys_t ares_evsys_select = { "select", ares_evsys_select_init, ares_evsys_select_destroy, /* NoOp */ ares_evsys_select_event_add, /* NoOp */ ares_evsys_select_event_del, /* NoOp */ ares_evsys_select_event_mod, /* NoOp */ ares_evsys_select_wait }; #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_thread.c000066400000000000000000000347101471441230600235700ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the 
Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" static void ares_event_destroy_cb(void *arg) { ares_event_t *event = arg; if (event == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Unregister from the event thread if it was registered with one */ if (event->e) { const ares_event_thread_t *e = event->e; e->ev_sys->event_del(event); event->e = NULL; } if (event->free_data_cb && event->data) { event->free_data_cb(event->data); } ares_free(event); } static void ares_event_signal(const ares_event_t *event) { if (event == NULL || event->signal_cb == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } event->signal_cb(event); } static void ares_event_thread_wake(const ares_event_thread_t *e) { if (e == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares_event_signal(e->ev_signal); } /* See if a pending update already exists. We don't want to enqueue multiple * updates for the same event handle. Right now this is O(n) based on number * of updates already enqueued. In the future, it might make sense to make * this O(1) with a hashtable. * NOTE: in some cases a delete then re-add of the same fd, but really pointing * to a different destination can happen due to a quick close of a * connection then creation of a new one. So we need to look at the * flags and ignore any delete events when finding a match since we * need to process the delete always, it can't be combined with other * updates. */ static ares_event_t *ares_event_update_find(ares_event_thread_t *e, ares_socket_t fd, const void *data) { ares__llist_node_t *node; for (node = ares__llist_node_first(e->ev_updates); node != NULL; node = ares__llist_node_next(node)) { ares_event_t *ev = ares__llist_node_val(node); if (fd != ARES_SOCKET_BAD && fd == ev->fd && ev->flags != 0) { return ev; } if (fd == ARES_SOCKET_BAD && ev->fd == ARES_SOCKET_BAD && data == ev->data && ev->flags != 0) { return ev; } } return NULL; } ares_status_t ares_event_update(ares_event_t **event, ares_event_thread_t *e, ares_event_flags_t flags, ares_event_cb_t cb, ares_socket_t fd, void *data, ares_event_free_data_t free_data_cb, ares_event_signal_cb_t signal_cb) { ares_event_t *ev = NULL; ares_status_t status; if (e == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Callback must be specified if not a removal event. 
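* (A removal is indicated by passing ARES_EVENT_FLAG_NONE for flags.)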
*/ if (flags != ARES_EVENT_FLAG_NONE && cb == NULL) { return ARES_EFORMERR; } if (event != NULL) { *event = NULL; } /* Validate flags */ if (fd == ARES_SOCKET_BAD) { if (flags & (ARES_EVENT_FLAG_READ | ARES_EVENT_FLAG_WRITE)) { return ARES_EFORMERR; } if (!(flags & ARES_EVENT_FLAG_OTHER)) { return ARES_EFORMERR; } } else { if (flags & ARES_EVENT_FLAG_OTHER) { return ARES_EFORMERR; } } /* That's all the validation we can really do */ ares__thread_mutex_lock(e->mutex); /* See if we have a queued update already */ ev = ares_event_update_find(e, fd, data); if (ev == NULL) { /* Allocate a new one */ ev = ares_malloc_zero(sizeof(*ev)); if (ev == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } if (ares__llist_insert_last(e->ev_updates, ev) == NULL) { ares_free(ev); /* LCOV_EXCL_LINE: OutOfMemory */ status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } ev->flags = flags; ev->fd = fd; if (ev->cb == NULL) { ev->cb = cb; } if (ev->data == NULL) { ev->data = data; } if (ev->free_data_cb == NULL) { ev->free_data_cb = free_data_cb; } if (ev->signal_cb == NULL) { ev->signal_cb = signal_cb; } if (event != NULL) { *event = ev; } status = ARES_SUCCESS; done: if (status == ARES_SUCCESS) { /* Wake event thread if successful so it can pull the updates */ ares_event_thread_wake(e); } ares__thread_mutex_unlock(e->mutex); return status; } static void ares_event_thread_process_fd(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags) { (void)data; ares_process_fd(e->channel, (flags & ARES_EVENT_FLAG_READ) ? fd : ARES_SOCKET_BAD, (flags & ARES_EVENT_FLAG_WRITE) ? fd : ARES_SOCKET_BAD); } static void ares_event_thread_sockstate_cb(void *data, ares_socket_t socket_fd, int readable, int writable) { ares_event_thread_t *e = data; ares_event_flags_t flags = ARES_EVENT_FLAG_NONE; if (readable) { flags |= ARES_EVENT_FLAG_READ; } if (writable) { flags |= ARES_EVENT_FLAG_WRITE; } /* Update channel fd. This function will lock e->mutex and also wake the * event thread to process the update */ ares_event_update(NULL, e, flags, ares_event_thread_process_fd, socket_fd, NULL, NULL, NULL); } static void ares_event_process_updates(ares_event_thread_t *e) { ares__llist_node_t *node; /* Iterate across all updates and apply to internal list, removing from update * list */ while ((node = ares__llist_node_first(e->ev_updates)) != NULL) { ares_event_t *newev = ares__llist_node_claim(node); ares_event_t *oldev; if (newev->fd == ARES_SOCKET_BAD) { oldev = ares__htable_vpvp_get_direct(e->ev_cust_handles, newev->data); } else { oldev = ares__htable_asvp_get_direct(e->ev_sock_handles, newev->fd); } /* Adding new */ if (oldev == NULL) { newev->e = e; /* Don't try to add a new event if all flags are cleared, that's basically * someone trying to delete something already deleted. Also if it fails * to add, cleanup. 
*/ if (newev->flags == ARES_EVENT_FLAG_NONE || !e->ev_sys->event_add(newev)) { newev->e = NULL; ares_event_destroy_cb(newev); } else { if (newev->fd == ARES_SOCKET_BAD) { ares__htable_vpvp_insert(e->ev_cust_handles, newev->data, newev); } else { ares__htable_asvp_insert(e->ev_sock_handles, newev->fd, newev); } } continue; } /* Removal request */ if (newev->flags == ARES_EVENT_FLAG_NONE) { /* the callback for the removal will call e->ev_sys->event_del(e, event) */ if (newev->fd == ARES_SOCKET_BAD) { ares__htable_vpvp_remove(e->ev_cust_handles, newev->data); } else { ares__htable_asvp_remove(e->ev_sock_handles, newev->fd); } ares_free(newev); continue; } /* Modify request -- only flags can be changed */ e->ev_sys->event_mod(oldev, newev->flags); oldev->flags = newev->flags; ares_free(newev); } } static void ares_event_thread_cleanup(ares_event_thread_t *e) { /* Manually free any updates that weren't processed */ if (e->ev_updates != NULL) { ares__llist_node_t *node; while ((node = ares__llist_node_first(e->ev_updates)) != NULL) { ares_event_destroy_cb(ares__llist_node_claim(node)); } ares__llist_destroy(e->ev_updates); e->ev_updates = NULL; } if (e->ev_sock_handles != NULL) { ares__htable_asvp_destroy(e->ev_sock_handles); e->ev_sock_handles = NULL; } if (e->ev_cust_handles != NULL) { ares__htable_vpvp_destroy(e->ev_cust_handles); e->ev_cust_handles = NULL; } if (e->ev_sys != NULL && e->ev_sys->destroy != NULL) { e->ev_sys->destroy(e); e->ev_sys = NULL; } } static void *ares_event_thread(void *arg) { ares_event_thread_t *e = arg; ares__thread_mutex_lock(e->mutex); while (e->isup) { struct timeval tv; const struct timeval *tvout; unsigned long timeout_ms = 0; /* 0 = unlimited */ ares_event_process_updates(e); /* Don't hold a mutex while waiting on events or calling into anything * that might require a c-ares channel lock since a callback could be * triggered cross-thread */ ares__thread_mutex_unlock(e->mutex); tvout = ares_timeout(e->channel, NULL, &tv); if (tvout != NULL) { timeout_ms = (unsigned long)((tvout->tv_sec * 1000) + (tvout->tv_usec / 1000) + 1); } e->ev_sys->wait(e, timeout_ms); /* Each iteration should do timeout processing */ if (e->isup) { ares_process_fd(e->channel, ARES_SOCKET_BAD, ARES_SOCKET_BAD); } /* Relock before we loop again */ ares__thread_mutex_lock(e->mutex); } /* Lets cleanup while we're in the thread itself */ ares_event_thread_cleanup(e); ares__thread_mutex_unlock(e->mutex); return NULL; } static void ares_event_thread_destroy_int(ares_event_thread_t *e) { /* Wake thread and tell it to shutdown if it exists */ ares__thread_mutex_lock(e->mutex); if (e->isup) { e->isup = ARES_FALSE; ares_event_thread_wake(e); } ares__thread_mutex_unlock(e->mutex); /* Wait for thread to shutdown */ if (e->thread) { void *rv = NULL; ares__thread_join(e->thread, &rv); e->thread = NULL; } /* If the event thread ever got to the point of starting, this is a no-op * as it runs this same cleanup when it shuts down */ ares_event_thread_cleanup(e); ares__thread_mutex_destroy(e->mutex); e->mutex = NULL; ares_free(e); } void ares_event_thread_destroy(ares_channel_t *channel) { ares_event_thread_t *e = channel->sock_state_cb_data; if (e == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares_event_thread_destroy_int(e); channel->sock_state_cb_data = NULL; channel->sock_state_cb = NULL; } static const ares_event_sys_t *ares_event_fetch_sys(ares_evsys_t evsys) { switch (evsys) { case ARES_EVSYS_WIN32: #if defined(USE_WINSOCK) return &ares_evsys_win32; #else return NULL; #endif case 
ARES_EVSYS_EPOLL: #if defined(HAVE_EPOLL) return &ares_evsys_epoll; #else return NULL; #endif case ARES_EVSYS_KQUEUE: #if defined(HAVE_KQUEUE) return &ares_evsys_kqueue; #else return NULL; #endif case ARES_EVSYS_POLL: #if defined(HAVE_POLL) return &ares_evsys_poll; #else return NULL; #endif case ARES_EVSYS_SELECT: #if defined(HAVE_PIPE) return &ares_evsys_select; #else return NULL; #endif /* case ARES_EVSYS_DEFAULT: */ default: break; } /* default */ #if defined(USE_WINSOCK) return &ares_evsys_win32; #elif defined(HAVE_KQUEUE) return &ares_evsys_kqueue; #elif defined(HAVE_EPOLL) return &ares_evsys_epoll; #elif defined(HAVE_POLL) return &ares_evsys_poll; #elif defined(HAVE_PIPE) return &ares_evsys_select; #else return NULL; #endif } ares_status_t ares_event_thread_init(ares_channel_t *channel) { ares_event_thread_t *e; e = ares_malloc_zero(sizeof(*e)); if (e == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } e->mutex = ares__thread_mutex_create(); if (e->mutex == NULL) { ares_event_thread_destroy_int(e); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } e->ev_updates = ares__llist_create(NULL); if (e->ev_updates == NULL) { ares_event_thread_destroy_int(e); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } e->ev_sock_handles = ares__htable_asvp_create(ares_event_destroy_cb); if (e->ev_sock_handles == NULL) { ares_event_thread_destroy_int(e); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } e->ev_cust_handles = ares__htable_vpvp_create(NULL, ares_event_destroy_cb); if (e->ev_cust_handles == NULL) { ares_event_thread_destroy_int(e); /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } e->channel = channel; e->isup = ARES_TRUE; e->ev_sys = ares_event_fetch_sys(channel->evsys); if (e->ev_sys == NULL) { ares_event_thread_destroy_int(e); /* LCOV_EXCL_LINE: UntestablePath */ return ARES_ENOTIMP; /* LCOV_EXCL_LINE: UntestablePath */ } channel->sock_state_cb = ares_event_thread_sockstate_cb; channel->sock_state_cb_data = e; if (!e->ev_sys->init(e)) { /* LCOV_EXCL_START: UntestablePath */ ares_event_thread_destroy_int(e); channel->sock_state_cb = NULL; channel->sock_state_cb_data = NULL; return ARES_ESERVFAIL; /* LCOV_EXCL_STOP */ } /* Before starting the thread, process any possible events the initialization * might have enqueued as we may actually depend on these being valid * immediately upon return, which may mean before the thread is fully spawned * and processed the list itself. We don't want any sort of race conditions * (like the event system wake handle itself). 
*/ ares_event_process_updates(e); /* Start thread */ if (ares__thread_create(&e->thread, ares_event_thread, e) != ARES_SUCCESS) { /* LCOV_EXCL_START: UntestablePath */ ares_event_thread_destroy_int(e); channel->sock_state_cb = NULL; channel->sock_state_cb_data = NULL; return ARES_ESERVFAIL; /* LCOV_EXCL_STOP */ } return ARES_SUCCESS; } gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_wake_pipe.c000066400000000000000000000100541471441230600242600ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_event.h" #ifdef HAVE_UNISTD_H # include #endif #ifdef HAVE_FCNTL_H # include #endif #ifdef HAVE_PIPE typedef struct { int filedes[2]; } ares_pipeevent_t; static void ares_pipeevent_destroy(ares_pipeevent_t *p) { if (p->filedes[0] != -1) { close(p->filedes[0]); } if (p->filedes[1] != -1) { close(p->filedes[1]); } ares_free(p); } static void ares_pipeevent_destroy_cb(void *arg) { ares_pipeevent_destroy(arg); } static ares_pipeevent_t *ares_pipeevent_init(void) { ares_pipeevent_t *p = ares_malloc_zero(sizeof(*p)); if (p == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } p->filedes[0] = -1; p->filedes[1] = -1; # ifdef HAVE_PIPE2 if (pipe2(p->filedes, O_NONBLOCK | O_CLOEXEC) != 0) { ares_pipeevent_destroy(p); /* LCOV_EXCL_LINE: UntestablePath */ return NULL; /* LCOV_EXCL_LINE: UntestablePath */ } # else if (pipe(p->filedes) != 0) { ares_pipeevent_destroy(p); return NULL; } # ifdef O_NONBLOCK { int val; val = fcntl(p->filedes[0], F_GETFL, 0); if (val >= 0) { val |= O_NONBLOCK; } fcntl(p->filedes[0], F_SETFL, val); val = fcntl(p->filedes[1], F_GETFL, 0); if (val >= 0) { val |= O_NONBLOCK; } fcntl(p->filedes[1], F_SETFL, val); } # endif # ifdef O_CLOEXEC fcntl(p->filedes[0], F_SETFD, O_CLOEXEC); fcntl(p->filedes[1], F_SETFD, O_CLOEXEC); # endif # endif # ifdef F_SETNOSIGPIPE fcntl(p->filedes[0], F_SETNOSIGPIPE, 1); fcntl(p->filedes[1], F_SETNOSIGPIPE, 1); # endif return p; } static void ares_pipeevent_signal(const ares_event_t *e) { const ares_pipeevent_t *p; if (e == NULL || e->data == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } p = e->data; (void)write(p->filedes[1], "1", 1); } static void ares_pipeevent_cb(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags) { unsigned char buf[32]; const ares_pipeevent_t *p = NULL; (void)e; (void)fd; (void)flags; if (data == NULL) { return; /* 
LCOV_EXCL_LINE: DefensiveCoding */ } p = data; while (read(p->filedes[0], buf, sizeof(buf)) == sizeof(buf)) { /* Do nothing */ } } ares_event_t *ares_pipeevent_create(ares_event_thread_t *e) { ares_event_t *event = NULL; ares_pipeevent_t *p = NULL; ares_status_t status; p = ares_pipeevent_init(); if (p == NULL) { return NULL; } status = ares_event_update(&event, e, ARES_EVENT_FLAG_READ, ares_pipeevent_cb, p->filedes[0], p, ares_pipeevent_destroy_cb, ares_pipeevent_signal); if (status != ARES_SUCCESS) { ares_pipeevent_destroy(p); /* LCOV_EXCL_LINE: DefensiveCoding */ return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } return event; } #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_win32.c000066400000000000000000000767411471441230600232750ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ /* Uses an anonymous union */ #if defined(__clang__) || defined(__GNUC__) # pragma GCC diagnostic push # if defined(__clang__) # pragma GCC diagnostic ignored "-Wc11-extensions" # else # pragma GCC diagnostic ignored "-Wpedantic" # endif #endif #include "ares_private.h" #include "ares_event.h" #include "ares_event_win32.h" #ifdef HAVE_LIMITS_H # include #endif #if defined(USE_WINSOCK) /* IMPLEMENTATION NOTES * ==================== * * This implementation uses some undocumented functionality within Windows for * monitoring sockets. The Ancillary Function Driver (AFD) is the low level * implementation that Winsock2 sits on top of. Winsock2 unfortunately does * not expose the equivalent of epoll() or kqueue(), but it is possible to * access AFD directly and use along with IOCP to simulate the functionality. * We want to use IOCP if possible as it gives us the ability to monitor more * than just sockets (WSAPoll is not an option), and perform arbitrary callbacks * which means we can hook in non-socket related events. * * The information for this implementation was gathered from "wepoll" and * "libuv" which both use slight variants on this. We originally went with * an implementation methodology more similar to "libuv", but we had a few * user reports of crashes during shutdown and memory leaks due to some * events not being delivered for cleanup of closed sockets. * * Initialization: * 1. Dynamically load the NtDeviceIoControlFile, NtCreateFile, and * NtCancelIoFileEx internal symbols from ntdll.dll. 
(Don't believe * Microsoft's documentation for NtCancelIoFileEx as it documents an * invalid prototype). These functions are to open a reference to the * Ancillary Function Driver (AFD), and to submit and cancel POLL * requests. * 2. Create an IO Completion Port base handle via CreateIoCompletionPort() * that all socket events will be delivered through. * 3. Create a list of AFD Handles and track the number of poll requests * per AFD handle. When we exceed a pre-determined limit of poll requests * for a handle (128), we will automatically create a new handle. The * reason behind this is NtCancelIoFileEx uses a horrible algorithm for * issuing cancellations. See: * https://github.com/python-trio/trio/issues/52#issuecomment-548215128 * 4. Create a callback to be used to be able to interrupt waiting for IOCP * events, this may be called for allowing enqueuing of additional socket * events or removing socket events. PostQueuedCompletionStatus() is the * obvious choice. We can use the same container format, the event * delivered won't have an OVERLAPPED pointer so we can differentiate from * socket events. Use the container as the completion key. * * Socket Add: * 1. Create/Allocate a container for holding metadata about a socket * including: * - SOCKET base_socket; * - IO_STATUS_BLOCK iosb; -- Used by AFD POLL, returned as OVERLAPPED * - AFD_POLL_INFO afd_poll_info; -- Used by AFD POLL * - afd list node -- for tracking which AFD handle a POLL request was * submitted to. * 2. Call WSAIoctl(..., SIO_BASE_HANDLE, ...) to unwrap the SOCKET and get * the "base socket" we can use for polling. It appears this may fail so * we should call WSAIoctl(..., SIO_BSP_HANDLE_POLL, ...) as a fallback. * 3. Submit AFD POLL request (see "AFD POLL Request" section) * 4. Record a mapping between the "IO Status Block" and the socket container * so when events are delivered we can dereference. * * Socket Delete: * 1. Call * NtCancelIoFileEx(afd, iosb, &temp_iosb); * to cancel any pending operations. * 2. Tag the socket container as being queued for deletion * 3. Wait for an event to be delivered for the socket (cancel isn't * immediate, it delivers an event to know its complete). Delete only once * that event has been delivered. If we don't do this we could try to * access free()'d memory at a later point. * * Socket Modify: * 1. Call * NtCancelIoFileEx(afd, iosb, &temp_iosb) * to cancel any pending operation. * 2. When the event comes through that the cancel is complete, enqueue * another "AFD Poll Request" for the desired events. * * Event Wait: * 1. Call GetQueuedCompletionStatusEx() with the base IOCP handle, a * stack allocated array of OVERLAPPED_ENTRY's, and an appropriate * timeout. * 2. Iterate across returned events, if the lpOverlapped is NULL, then the * the CompletionKey is a pointer to the container registered via * PostQueuedCompletionStatus(), otherwise it is the "IO Status Block" * registered with the "AFD Poll Request" which needs to be dereferenced * to the "socket container". * 3. If it is a "socket container", disassociate it from the afd list node * it was previously submitted to. * 4. If it is a "socket container" check to see if we are cleaning up, if so, * clean it up. * 5. If it is a "socket container" that is still valid, Submit an * AFD POLL Request (see "AFD POLL Request"). We must re-enable the request * each time we receive a response, it is not persistent. * 6. 
Notify of any events received as indicated in the AFD_POLL_INFO * Handles[0].Events (NOTE: check NumberOfHandles > 0, and the status in * the IO_STATUS_BLOCK. If we received an AFD_POLL_LOCAL_CLOSE, clean up * the connection like the integrator requested it to be cleaned up. * * AFD Poll Request: * 1. Find an afd poll handle in the list that has fewer pending requests than * the limit. * 2. If an afd poll handle was not associated (e.g. due to all being over * limit), create a new afd poll handle by calling NtCreateFile() * with path \Device\Afd , then add the AFD handle to the IO Completion * Port. We can leave the completion key as blank since events for * multiple sockets will be delivered through this and we need to * differentiate via the OVERLAPPED member returned. Add the new AFD * handle to the list of handles. * 3. Initialize the AFD_POLL_INFO structure: * Exclusive = FALSE; // allow multiple requests * NumberOfHandles = 1; * Timeout.QuadPart = LLONG_MAX; * Handles[0].Handle = (HANDLE)base_socket; * Handles[0].Status = 0; * Handles[0].Events = AFD_POLL_LOCAL_CLOSE + additional events to wait for * such as AFD_POLL_RECEIVE, etc; * 4. Zero out the IO_STATUS_BLOCK structures * 5. Set the "Status" member of IO_STATUS_BLOCK to STATUS_PENDING * 6. Call * NtDeviceIoControlFile(afd, NULL, NULL, &iosb, * &iosb, IOCTL_AFD_POLL * &afd_poll_info, sizeof(afd_poll_info), * &afd_poll_info, sizeof(afd_poll_info)); * * * References: * - https://github.com/piscisaureus/wepoll/ * - https://github.com/libuv/libuv/ */ /* Cap the number of outstanding AFD poll requests per AFD handle due to known * slowdowns with large lists and NtCancelIoFileEx() */ # define AFD_POLL_PER_HANDLE 128 # include /* # define CARES_DEBUG 1 */ # ifdef __GNUC__ # define CARES_PRINTF_LIKE(fmt, args) \ __attribute__((format(printf, fmt, args))) # else # define CARES_PRINTF_LIKE(fmt, args) # endif static void CARES_DEBUG_LOG(const char *fmt, ...) CARES_PRINTF_LIKE(1, 2); static void CARES_DEBUG_LOG(const char *fmt, ...) { va_list ap; va_start(ap, fmt); # ifdef CARES_DEBUG vfprintf(stderr, fmt, ap); fflush(stderr); # endif va_end(ap); } typedef struct { /* Dynamically loaded symbols */ NtCreateFile_t NtCreateFile; NtDeviceIoControlFile_t NtDeviceIoControlFile; NtCancelIoFileEx_t NtCancelIoFileEx; /* Implementation details */ ares__slist_t *afd_handles; HANDLE iocp_handle; /* IO_STATUS_BLOCK * -> ares_evsys_win32_eventdata_t * mapping. There is * no completion key passed to IOCP with this method so we have to look * up based on the lpOverlapped returned (which is mapped to IO_STATUS_BLOCK) */ ares__htable_vpvp_t *sockets; /* Flag about whether or not we are shutting down */ ares_bool_t is_shutdown; } ares_evsys_win32_t; typedef enum { POLL_STATUS_NONE = 0, POLL_STATUS_PENDING = 1, POLL_STATUS_CANCEL = 2, POLL_STATUS_DESTROY = 3 } poll_status_t; typedef struct { /*! Pointer to parent event container */ ares_event_t *event; /*! Socket passed in to monitor */ SOCKET socket; /*! Base socket derived from provided socket */ SOCKET base_socket; /*! Structure for submitting AFD POLL requests (Internals!) */ AFD_POLL_INFO afd_poll_info; /*! Status of current polling operation */ poll_status_t poll_status; /*! IO Status Block structure submitted with AFD POLL requests and returned * with IOCP results as lpOverlapped (even though its a different structure) */ IO_STATUS_BLOCK iosb; /*! 
AFD handle node an outstanding poll request is associated with */ ares__slist_node_t *afd_handle_node; /* Lock is only for PostQueuedCompletionStatus() to prevent multiple * signals. Tracking via POLL_STATUS_PENDING/POLL_STATUS_NONE */ ares__thread_mutex_t *lock; } ares_evsys_win32_eventdata_t; static size_t ares_evsys_win32_wait(ares_event_thread_t *e, unsigned long timeout_ms); static void ares_iocpevent_signal(const ares_event_t *event) { ares_event_thread_t *e = event->e; ares_evsys_win32_t *ew = e->ev_sys_data; ares_evsys_win32_eventdata_t *ed = event->data; ares_bool_t queue_event = ARES_FALSE; ares__thread_mutex_lock(ed->lock); if (ed->poll_status != POLL_STATUS_PENDING) { ed->poll_status = POLL_STATUS_PENDING; queue_event = ARES_TRUE; } ares__thread_mutex_unlock(ed->lock); if (!queue_event) { return; } PostQueuedCompletionStatus(ew->iocp_handle, 0, (ULONG_PTR)event->data, NULL); } static void ares_iocpevent_cb(ares_event_thread_t *e, ares_socket_t fd, void *data, ares_event_flags_t flags) { ares_evsys_win32_eventdata_t *ed = data; (void)e; (void)fd; (void)flags; ares__thread_mutex_lock(ed->lock); ed->poll_status = POLL_STATUS_NONE; ares__thread_mutex_unlock(ed->lock); } static ares_event_t *ares_iocpevent_create(ares_event_thread_t *e) { ares_event_t *event = NULL; ares_status_t status; status = ares_event_update(&event, e, ARES_EVENT_FLAG_OTHER, ares_iocpevent_cb, ARES_SOCKET_BAD, NULL, NULL, ares_iocpevent_signal); if (status != ARES_SUCCESS) { return NULL; } return event; } static void ares_evsys_win32_destroy(ares_event_thread_t *e) { ares_evsys_win32_t *ew = NULL; if (e == NULL) { return; } CARES_DEBUG_LOG("** Win32 Event Destroy\n"); ew = e->ev_sys_data; if (ew == NULL) { return; } ew->is_shutdown = ARES_TRUE; CARES_DEBUG_LOG(" ** waiting on %lu remaining sockets to be destroyed\n", (unsigned long)ares__htable_vpvp_num_keys(ew->sockets)); while (ares__htable_vpvp_num_keys(ew->sockets)) { ares_evsys_win32_wait(e, 0); } CARES_DEBUG_LOG(" ** all sockets cleaned up\n"); if (ew->iocp_handle != NULL) { CloseHandle(ew->iocp_handle); } ares__slist_destroy(ew->afd_handles); ares__htable_vpvp_destroy(ew->sockets); ares_free(ew); e->ev_sys_data = NULL; } typedef struct { size_t poll_cnt; HANDLE afd_handle; } ares_afd_handle_t; static void ares_afd_handle_destroy(void *arg) { ares_afd_handle_t *hnd = arg; if (hnd != NULL && hnd->afd_handle != NULL) { CloseHandle(hnd->afd_handle); } ares_free(hnd); } static int ares_afd_handle_cmp(const void *data1, const void *data2) { const ares_afd_handle_t *hnd1 = data1; const ares_afd_handle_t *hnd2 = data2; if (hnd1->poll_cnt > hnd2->poll_cnt) { return 1; } if (hnd1->poll_cnt < hnd2->poll_cnt) { return -1; } return 0; } static void fill_object_attributes(OBJECT_ATTRIBUTES *attr, UNICODE_STRING *name, ULONG attributes) { memset(attr, 0, sizeof(*attr)); attr->Length = sizeof(*attr); attr->ObjectName = name; attr->Attributes = attributes; } # define UNICODE_STRING_CONSTANT(s) \ { (sizeof(s) - 1) * sizeof(wchar_t), sizeof(s) * sizeof(wchar_t), L##s } static ares__slist_node_t *ares_afd_handle_create(ares_evsys_win32_t *ew) { UNICODE_STRING afd_device_name = UNICODE_STRING_CONSTANT("\\Device\\Afd"); OBJECT_ATTRIBUTES afd_attributes; NTSTATUS status; IO_STATUS_BLOCK iosb; ares_afd_handle_t *afd = ares_malloc_zero(sizeof(*afd)); ares__slist_node_t *node = NULL; if (afd == NULL) { goto fail; } /* Open a handle to the AFD subsystem */ fill_object_attributes(&afd_attributes, &afd_device_name, 0); memset(&iosb, 0, sizeof(iosb)); iosb.Status = STATUS_PENDING; 
status = ew->NtCreateFile(&afd->afd_handle, SYNCHRONIZE, &afd_attributes, &iosb, NULL, 0, FILE_SHARE_READ | FILE_SHARE_WRITE, FILE_OPEN, 0, NULL, 0); if (status != STATUS_SUCCESS) { CARES_DEBUG_LOG("** Failed to create AFD endpoint\n"); goto fail; } if (CreateIoCompletionPort(afd->afd_handle, ew->iocp_handle, 0 /* CompletionKey */, 0) == NULL) { goto fail; } if (!SetFileCompletionNotificationModes(afd->afd_handle, FILE_SKIP_SET_EVENT_ON_HANDLE)) { goto fail; } node = ares__slist_insert(ew->afd_handles, afd); if (node == NULL) { goto fail; } return node; fail: ares_afd_handle_destroy(afd); return NULL; } /* Fetch the lowest poll count entry, but if it exceeds the limit, create a * new one and return that */ static ares__slist_node_t *ares_afd_handle_fetch(ares_evsys_win32_t *ew) { ares__slist_node_t *node = ares__slist_node_first(ew->afd_handles); ares_afd_handle_t *afd = ares__slist_node_val(node); if (afd != NULL && afd->poll_cnt < AFD_POLL_PER_HANDLE) { return node; } return ares_afd_handle_create(ew); } static ares_bool_t ares_evsys_win32_init(ares_event_thread_t *e) { ares_evsys_win32_t *ew = NULL; HMODULE ntdll; CARES_DEBUG_LOG("** Win32 Event Init\n"); ew = ares_malloc_zero(sizeof(*ew)); if (ew == NULL) { return ARES_FALSE; } e->ev_sys_data = ew; /* All apps should have ntdll.dll already loaded, so just get a handle to * this */ ntdll = GetModuleHandleA("ntdll.dll"); if (ntdll == NULL) { goto fail; } # ifdef __GNUC__ # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wpedantic" /* Without the (void *) cast we get: * warning: cast between incompatible function types from 'FARPROC' {aka 'long * long int (*)()'} to 'NTSTATUS (*)(...)'} [-Wcast-function-type] but with it * we get: warning: ISO C forbids conversion of function pointer to object * pointer type [-Wpedantic] look unsolvable short of killing the warning. 
*/ # endif /* Load Internal symbols not typically accessible */ ew->NtCreateFile = (NtCreateFile_t)(void *)GetProcAddress(ntdll, "NtCreateFile"); ew->NtDeviceIoControlFile = (NtDeviceIoControlFile_t)(void *)GetProcAddress( ntdll, "NtDeviceIoControlFile"); ew->NtCancelIoFileEx = (NtCancelIoFileEx_t)(void *)GetProcAddress(ntdll, "NtCancelIoFileEx"); # ifdef __GNUC__ # pragma GCC diagnostic pop # endif if (ew->NtCreateFile == NULL || ew->NtCancelIoFileEx == NULL || ew->NtDeviceIoControlFile == NULL) { goto fail; } ew->iocp_handle = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0); if (ew->iocp_handle == NULL) { goto fail; } ew->afd_handles = ares__slist_create( e->channel->rand_state, ares_afd_handle_cmp, ares_afd_handle_destroy); if (ew->afd_handles == NULL) { goto fail; } /* Create at least the first afd handle, so we know of any critical system * issues during startup */ if (ares_afd_handle_create(ew) == NULL) { goto fail; } e->ev_signal = ares_iocpevent_create(e); if (e->ev_signal == NULL) { goto fail; } ew->sockets = ares__htable_vpvp_create(NULL, NULL); if (ew->sockets == NULL) { goto fail; } return ARES_TRUE; fail: ares_evsys_win32_destroy(e); return ARES_FALSE; } static ares_socket_t ares_evsys_win32_basesocket(ares_socket_t socket) { while (1) { DWORD bytes; /* Not used */ ares_socket_t base_socket = ARES_SOCKET_BAD; int rv; rv = WSAIoctl(socket, SIO_BASE_HANDLE, NULL, 0, &base_socket, sizeof(base_socket), &bytes, NULL, NULL); if (rv != SOCKET_ERROR && base_socket != ARES_SOCKET_BAD) { socket = base_socket; break; } /* If we're here, an error occurred */ if (GetLastError() == WSAENOTSOCK) { /* This is critical, exit */ return ARES_SOCKET_BAD; } /* Work around known bug in Komodia based LSPs, use ARES_BSP_HANDLE_POLL * to retrieve the underlying socket to then loop and get the base socket: * https://docs.microsoft.com/en-us/windows/win32/winsock/winsock-ioctls * https://www.komodia.com/newwiki/index.php?title=Komodia%27s_Redirector_bug_fixes#Version_2.2.2.6 */ base_socket = ARES_SOCKET_BAD; rv = WSAIoctl(socket, SIO_BSP_HANDLE_POLL, NULL, 0, &base_socket, sizeof(base_socket), &bytes, NULL, NULL); if (rv != SOCKET_ERROR && base_socket != ARES_SOCKET_BAD && base_socket != socket) { socket = base_socket; continue; /* loop! */ } return ARES_SOCKET_BAD; } return socket; } static ares_bool_t ares_evsys_win32_afd_enqueue(ares_event_t *event, ares_event_flags_t flags) { ares_event_thread_t *e = event->e; ares_evsys_win32_t *ew = e->ev_sys_data; ares_evsys_win32_eventdata_t *ed = event->data; ares_afd_handle_t *afd; NTSTATUS status; if (e == NULL || ed == NULL || ew == NULL) { return ARES_FALSE; } /* Misuse */ if (ed->poll_status != POLL_STATUS_NONE) { return ARES_FALSE; } ed->afd_handle_node = ares_afd_handle_fetch(ew); /* System resource issue? 
*/ if (ed->afd_handle_node == NULL) { return ARES_FALSE; } afd = ares__slist_node_val(ed->afd_handle_node); /* Enqueue AFD Poll */ ed->afd_poll_info.Exclusive = FALSE; ed->afd_poll_info.NumberOfHandles = 1; ed->afd_poll_info.Timeout.QuadPart = LLONG_MAX; ed->afd_poll_info.Handles[0].Handle = (HANDLE)ed->base_socket; ed->afd_poll_info.Handles[0].Status = 0; ed->afd_poll_info.Handles[0].Events = AFD_POLL_LOCAL_CLOSE; if (flags & ARES_EVENT_FLAG_READ) { ed->afd_poll_info.Handles[0].Events |= (AFD_POLL_RECEIVE | AFD_POLL_DISCONNECT | AFD_POLL_ACCEPT | AFD_POLL_ABORT); } if (flags & ARES_EVENT_FLAG_WRITE) { ed->afd_poll_info.Handles[0].Events |= (AFD_POLL_SEND | AFD_POLL_CONNECT_FAIL); } if (flags == 0) { ed->afd_poll_info.Handles[0].Events |= AFD_POLL_DISCONNECT; } memset(&ed->iosb, 0, sizeof(ed->iosb)); ed->iosb.Status = STATUS_PENDING; status = ew->NtDeviceIoControlFile( afd->afd_handle, NULL, NULL, &ed->iosb, &ed->iosb, IOCTL_AFD_POLL, &ed->afd_poll_info, sizeof(ed->afd_poll_info), &ed->afd_poll_info, sizeof(ed->afd_poll_info)); if (status != STATUS_SUCCESS && status != STATUS_PENDING) { CARES_DEBUG_LOG("** afd_enqueue ed=%p FAILED\n", (void *)ed); ed->afd_handle_node = NULL; return ARES_FALSE; } /* Record that we submitted a poll request to this handle and tell it to * re-sort the node since we changed its sort value */ afd->poll_cnt++; ares__slist_node_reinsert(ed->afd_handle_node); ed->poll_status = POLL_STATUS_PENDING; CARES_DEBUG_LOG("++ afd_enqueue ed=%p flags=%X\n", (void *)ed, (unsigned int)flags); return ARES_TRUE; } static ares_bool_t ares_evsys_win32_afd_cancel(ares_evsys_win32_eventdata_t *ed) { IO_STATUS_BLOCK cancel_iosb; ares_evsys_win32_t *ew; NTSTATUS status; ares_afd_handle_t *afd; ew = ed->event->e->ev_sys_data; /* Misuse */ if (ed->poll_status != POLL_STATUS_PENDING) { return ARES_FALSE; } afd = ares__slist_node_val(ed->afd_handle_node); /* Misuse */ if (afd == NULL) { return ARES_FALSE; } ed->poll_status = POLL_STATUS_CANCEL; /* Not pending, nothing to do. Most likely that means there is a pending * event that hasn't yet been delivered otherwise it would be re-armed * already */ if (ed->iosb.Status != STATUS_PENDING) { CARES_DEBUG_LOG("** cancel not needed for ed=%p\n", (void *)ed); return ARES_FALSE; } status = ew->NtCancelIoFileEx(afd->afd_handle, &ed->iosb, &cancel_iosb); CARES_DEBUG_LOG("** Enqueued cancel for ed=%p, status = %lX\n", (void *)ed, status); /* NtCancelIoFileEx() may return STATUS_NOT_FOUND if the operation completed * just before calling NtCancelIoFileEx(), but we have not yet received the * notifiction (but it should be queued for the next IOCP event). */ if (status == STATUS_SUCCESS || status == STATUS_NOT_FOUND) { return ARES_TRUE; } return ARES_FALSE; } static void ares_evsys_win32_eventdata_destroy(ares_evsys_win32_t *ew, ares_evsys_win32_eventdata_t *ed) { if (ew == NULL || ed == NULL) { return; } CARES_DEBUG_LOG("-- deleting ed=%p (%s)\n", (void *)ed, (ed->socket == ARES_SOCKET_BAD) ? "data" : "socket"); /* These type of handles are deferred destroy. Update tracking. 
*/ if (ed->socket != ARES_SOCKET_BAD) { ares__htable_vpvp_remove(ew->sockets, &ed->iosb); } ares__thread_mutex_destroy(ed->lock); if (ed->event != NULL) { ed->event->data = NULL; } ares_free(ed); } static ares_bool_t ares_evsys_win32_event_add(ares_event_t *event) { ares_event_thread_t *e = event->e; ares_evsys_win32_t *ew = e->ev_sys_data; ares_evsys_win32_eventdata_t *ed; ares_bool_t rc = ARES_FALSE; ed = ares_malloc_zero(sizeof(*ed)); ed->event = event; ed->socket = event->fd; ed->base_socket = ARES_SOCKET_BAD; event->data = ed; CARES_DEBUG_LOG("++ add ed=%p (%s) flags=%X\n", (void *)ed, (ed->socket == ARES_SOCKET_BAD) ? "data" : "socket", (unsigned int)event->flags); /* Likely a signal event, not something we will directly handle. We create * the ares_evsys_win32_eventdata_t as the placeholder to use as the * IOCP Completion Key */ if (ed->socket == ARES_SOCKET_BAD) { ed->lock = ares__thread_mutex_create(); if (ed->lock == NULL) { goto done; } rc = ARES_TRUE; goto done; } ed->base_socket = ares_evsys_win32_basesocket(ed->socket); if (ed->base_socket == ARES_SOCKET_BAD) { goto done; } if (!ares__htable_vpvp_insert(ew->sockets, &ed->iosb, ed)) { goto done; } if (!ares_evsys_win32_afd_enqueue(event, event->flags)) { goto done; } rc = ARES_TRUE; done: if (!rc) { ares_evsys_win32_eventdata_destroy(ew, ed); event->data = NULL; } return rc; } static void ares_evsys_win32_event_del(ares_event_t *event) { ares_evsys_win32_eventdata_t *ed = event->data; /* Already cleaned up, likely a LOCAL_CLOSE */ if (ed == NULL) { return; } CARES_DEBUG_LOG("-- DELETE requested for ed=%p (%s)\n", (void *)ed, (ed->socket != ARES_SOCKET_BAD) ? "socket" : "data"); /* * Cancel pending AFD Poll operation. */ if (ed->socket != ARES_SOCKET_BAD) { ares_evsys_win32_afd_cancel(ed); ed->poll_status = POLL_STATUS_DESTROY; ed->event = NULL; } else { ares_evsys_win32_eventdata_destroy(event->e->ev_sys_data, ed); } event->data = NULL; } static void ares_evsys_win32_event_mod(ares_event_t *event, ares_event_flags_t new_flags) { ares_evsys_win32_eventdata_t *ed = event->data; /* Not for us */ if (event->fd == ARES_SOCKET_BAD || ed == NULL) { return; } CARES_DEBUG_LOG("** mod ed=%p new_flags=%X\n", (void *)ed, (unsigned int)new_flags); /* All we need to do is cancel the pending operation. When the event gets * delivered for the cancellation, it will automatically re-enqueue a new * event */ ares_evsys_win32_afd_cancel(ed); } static ares_bool_t ares_evsys_win32_process_other_event( ares_evsys_win32_t *ew, ares_evsys_win32_eventdata_t *ed, size_t i) { ares_event_t *event; /* NOTE: do NOT dereference 'ed' if during shutdown as this could be an * invalid pointer if the signal handle was cleaned up, but there was still a * pending event! 
*/ if (ew->is_shutdown) { CARES_DEBUG_LOG("\t\t** i=%lu, skip non-socket handle during shutdown\n", (unsigned long)i); return ARES_FALSE; } event = ed->event; CARES_DEBUG_LOG("\t\t** i=%lu, ed=%p (data)\n", (unsigned long)i, (void *)ed); event->cb(event->e, event->fd, event->data, ARES_EVENT_FLAG_OTHER); return ARES_TRUE; } static ares_bool_t ares_evsys_win32_process_socket_event( ares_evsys_win32_t *ew, ares_evsys_win32_eventdata_t *ed, size_t i) { ares_event_flags_t flags = 0; ares_event_t *event = NULL; ares_afd_handle_t *afd = NULL; /* Shouldn't be possible */ if (ed == NULL) { CARES_DEBUG_LOG("\t\t** i=%lu, Invalid handle.\n", (unsigned long)i); return ARES_FALSE; } event = ed->event; CARES_DEBUG_LOG("\t\t** i=%lu, ed=%p (socket)\n", (unsigned long)i, (void *)ed); /* Process events */ if (ed->poll_status == POLL_STATUS_PENDING && ed->iosb.Status == STATUS_SUCCESS && ed->afd_poll_info.NumberOfHandles > 0) { if (ed->afd_poll_info.Handles[0].Events & (AFD_POLL_RECEIVE | AFD_POLL_DISCONNECT | AFD_POLL_ACCEPT | AFD_POLL_ABORT)) { flags |= ARES_EVENT_FLAG_READ; } if (ed->afd_poll_info.Handles[0].Events & (AFD_POLL_SEND | AFD_POLL_CONNECT_FAIL)) { flags |= ARES_EVENT_FLAG_WRITE; } if (ed->afd_poll_info.Handles[0].Events & AFD_POLL_LOCAL_CLOSE) { CARES_DEBUG_LOG("\t\t** ed=%p LOCAL CLOSE\n", (void *)ed); ed->poll_status = POLL_STATUS_DESTROY; } } CARES_DEBUG_LOG("\t\t** ed=%p, iosb status=%lX, poll_status=%d, flags=%X\n", (void *)ed, (unsigned long)ed->iosb.Status, (int)ed->poll_status, (unsigned int)flags); /* Decrement poll count for AFD handle then resort, also disassociate * with socket */ afd = ares__slist_node_val(ed->afd_handle_node); afd->poll_cnt--; ares__slist_node_reinsert(ed->afd_handle_node); ed->afd_handle_node = NULL; /* Pending destroy, go ahead and kill it */ if (ed->poll_status == POLL_STATUS_DESTROY) { ares_evsys_win32_eventdata_destroy(ew, ed); return ARES_FALSE; } ed->poll_status = POLL_STATUS_NONE; /* Mask flags against current desired flags. We could have an event * queued that is outdated. */ flags &= event->flags; /* Don't actually do anything with the event that was delivered as we are * in a shutdown/cleanup process. Mostly just handling the delayed * destruction of sockets */ if (ew->is_shutdown) { return ARES_FALSE; } /* Re-enqueue so we can get more events on the socket, we either * received a real event, or a cancellation notice. Both cases we * re-queue using the current configured event flags. * * If we can't re-enqueue, that likely means the socket has been * closed, so we want to kill our reference to it */ if (!ares_evsys_win32_afd_enqueue(event, event->flags)) { ares_evsys_win32_eventdata_destroy(ew, ed); return ARES_FALSE; } /* No events we recognize to deliver */ if (flags == 0) { return ARES_FALSE; } event->cb(event->e, event->fd, event->data, flags); return ARES_TRUE; } static size_t ares_evsys_win32_wait(ares_event_thread_t *e, unsigned long timeout_ms) { ares_evsys_win32_t *ew = e->ev_sys_data; OVERLAPPED_ENTRY entries[16]; ULONG maxentries = sizeof(entries) / sizeof(*entries); ULONG nentries; BOOL status; size_t i; size_t cnt = 0; DWORD tout = (timeout_ms == 0) ? 
INFINITE : (DWORD)timeout_ms; CARES_DEBUG_LOG("** Wait Enter\n"); /* Process in a loop for as long as it fills the entire entries buffer, and * on subsequent attempts, ensure the timeout is 0 */ do { nentries = maxentries; status = GetQueuedCompletionStatusEx(ew->iocp_handle, entries, nentries, &nentries, tout, FALSE); /* Next loop around, we want to return instantly if there are no events to * be processed */ tout = 0; if (!status) { break; } CARES_DEBUG_LOG("\t** GetQueuedCompletionStatusEx returned %lu entries\n", (unsigned long)nentries); for (i = 0; i < (size_t)nentries; i++) { ares_evsys_win32_eventdata_t *ed = NULL; ares_bool_t rc; /* For things triggered via PostQueuedCompletionStatus() we have an * lpCompletionKey we can just use. Otherwise we need to dereference the * pointer returned in lpOverlapped to determine the referenced * socket */ if (entries[i].lpCompletionKey) { ed = (ares_evsys_win32_eventdata_t *)entries[i].lpCompletionKey; rc = ares_evsys_win32_process_other_event(ew, ed, i); } else { ed = ares__htable_vpvp_get_direct(ew->sockets, entries[i].lpOverlapped); rc = ares_evsys_win32_process_socket_event(ew, ed, i); } /* We processed actual events */ if (rc) { cnt++; } } } while (nentries == maxentries); CARES_DEBUG_LOG("** Wait Exit\n"); return cnt; } const ares_event_sys_t ares_evsys_win32 = { "win32", ares_evsys_win32_init, ares_evsys_win32_destroy, ares_evsys_win32_event_add, ares_evsys_win32_event_del, ares_evsys_win32_event_mod, ares_evsys_win32_wait }; #endif #if defined(__clang__) || defined(__GNUC__) # pragma GCC diagnostic pop #endif gevent-24.11.1/deps/c-ares/src/lib/event/ares_event_win32.h000066400000000000000000000132161471441230600232660ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #ifndef __ARES_EVENT_WIN32_H #define __ARES_EVENT_WIN32_H #ifdef _WIN32 # ifdef HAVE_WINSOCK2_H # include # endif # ifdef HAVE_WS2TCPIP_H # include # endif # ifdef HAVE_MSWSOCK_H # include # endif # ifdef HAVE_WINDOWS_H # include # endif /* From winternl.h */ /* If WDK is not installed and not using MinGW, provide the needed definitions */ typedef LONG NTSTATUS; typedef struct _IO_STATUS_BLOCK { union { NTSTATUS Status; PVOID Pointer; }; ULONG_PTR Information; } IO_STATUS_BLOCK, *PIO_STATUS_BLOCK; typedef VOID(NTAPI *PIO_APC_ROUTINE)(PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, ULONG Reserved); /* From ntstatus.h */ # define STATUS_SUCCESS ((NTSTATUS)0x00000000) # ifndef STATUS_PENDING # define STATUS_PENDING ((NTSTATUS)0x00000103L) # endif # define STATUS_CANCELLED ((NTSTATUS)0xC0000120L) # define STATUS_NOT_FOUND ((NTSTATUS)0xC0000225L) typedef struct _UNICODE_STRING { USHORT Length; USHORT MaximumLength; LPCWSTR Buffer; } UNICODE_STRING, *PUNICODE_STRING; typedef struct _OBJECT_ATTRIBUTES { ULONG Length; HANDLE RootDirectory; PUNICODE_STRING ObjectName; ULONG Attributes; PVOID SecurityDescriptor; PVOID SecurityQualityOfService; } OBJECT_ATTRIBUTES, *POBJECT_ATTRIBUTES; # ifndef FILE_OPEN # define FILE_OPEN 0x00000001UL # endif /* Not sure what headers might have these */ # define IOCTL_AFD_POLL 0x00012024 # define AFD_POLL_RECEIVE_BIT 0 # define AFD_POLL_RECEIVE (1 << AFD_POLL_RECEIVE_BIT) # define AFD_POLL_RECEIVE_EXPEDITED_BIT 1 # define AFD_POLL_RECEIVE_EXPEDITED (1 << AFD_POLL_RECEIVE_EXPEDITED_BIT) # define AFD_POLL_SEND_BIT 2 # define AFD_POLL_SEND (1 << AFD_POLL_SEND_BIT) # define AFD_POLL_DISCONNECT_BIT 3 # define AFD_POLL_DISCONNECT (1 << AFD_POLL_DISCONNECT_BIT) # define AFD_POLL_ABORT_BIT 4 # define AFD_POLL_ABORT (1 << AFD_POLL_ABORT_BIT) # define AFD_POLL_LOCAL_CLOSE_BIT 5 # define AFD_POLL_LOCAL_CLOSE (1 << AFD_POLL_LOCAL_CLOSE_BIT) # define AFD_POLL_CONNECT_BIT 6 # define AFD_POLL_CONNECT (1 << AFD_POLL_CONNECT_BIT) # define AFD_POLL_ACCEPT_BIT 7 # define AFD_POLL_ACCEPT (1 << AFD_POLL_ACCEPT_BIT) # define AFD_POLL_CONNECT_FAIL_BIT 8 # define AFD_POLL_CONNECT_FAIL (1 << AFD_POLL_CONNECT_FAIL_BIT) # define AFD_POLL_QOS_BIT 9 # define AFD_POLL_QOS (1 << AFD_POLL_QOS_BIT) # define AFD_POLL_GROUP_QOS_BIT 10 # define AFD_POLL_GROUP_QOS (1 << AFD_POLL_GROUP_QOS_BIT) # define AFD_NUM_POLL_EVENTS 11 # define AFD_POLL_ALL ((1 << AFD_NUM_POLL_EVENTS) - 1) typedef struct _AFD_POLL_HANDLE_INFO { HANDLE Handle; ULONG Events; NTSTATUS Status; } AFD_POLL_HANDLE_INFO, *PAFD_POLL_HANDLE_INFO; typedef struct _AFD_POLL_INFO { LARGE_INTEGER Timeout; ULONG NumberOfHandles; ULONG Exclusive; AFD_POLL_HANDLE_INFO Handles[1]; } AFD_POLL_INFO, *PAFD_POLL_INFO; /* Prototypes for dynamically loaded functions from ntdll.dll */ typedef NTSTATUS(NTAPI *NtCancelIoFileEx_t)(HANDLE FileHandle, PIO_STATUS_BLOCK IoRequestToCancel, PIO_STATUS_BLOCK IoStatusBlock); typedef NTSTATUS(NTAPI *NtDeviceIoControlFile_t)( HANDLE FileHandle, HANDLE Event, PIO_APC_ROUTINE ApcRoutine, PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, ULONG IoControlCode, PVOID InputBuffer, ULONG InputBufferLength, PVOID OutputBuffer, ULONG OutputBufferLength); typedef NTSTATUS(NTAPI *NtCreateFile_t)( PHANDLE FileHandle, ACCESS_MASK DesiredAccess, POBJECT_ATTRIBUTES ObjectAttributes, PIO_STATUS_BLOCK IoStatusBlock, PLARGE_INTEGER AllocationSize, ULONG FileAttributes, ULONG ShareAccess, ULONG CreateDisposition, ULONG CreateOptions, PVOID EaBuffer, ULONG EaLength); /* On UWP/Windows Store, these 
definitions aren't there for some reason */ # ifndef SIO_BSP_HANDLE_POLL # define SIO_BSP_HANDLE_POLL 0x4800001D # endif # ifndef SIO_BASE_HANDLE # define SIO_BASE_HANDLE 0x48000022 # endif # ifndef HANDLE_FLAG_INHERIT # define HANDLE_FLAG_INHERIT 0x00000001 # endif #endif /* _WIN32 */ #endif /* __ARES_EVENT_WIN32_H */ gevent-24.11.1/deps/c-ares/src/lib/inet_net_pton.c000066400000000000000000000263741471441230600216410ustar00rootroot00000000000000/* * Copyright (c) 2012 by Gilles Chehade * Copyright (c) 1996,1999 by Internet Software Consortium. * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #include "ares_ipv6.h" #include "ares_inet_net_pton.h" const struct ares_in6_addr ares_in6addr_any = { { { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } } }; /* * static int * inet_net_pton_ipv4(src, dst, size) * convert IPv4 network number from presentation to network format. * accepts hex octets, hex strings, decimal octets, and /CIDR. * "size" is in bytes and describes "dst". * return: * number of bits, either imputed classfully or specified with /CIDR, * or -1 if some failure occurred (check errno). ENOENT means it was * not an IPv4 network specification. * note: * network byte order assumed. this means 192.5.5.240/28 has * 0b11110000 in its fourth octet. * note: * On Windows we store the error in the thread errno, not * in the winsock error code. This is to avoid losing the * actual last winsock error. So use macro ERRNO to fetch the * errno this function sets when returning (-1), not SOCKERRNO. * author: * Paul Vixie (ISC), June 1996 */ static int ares_inet_net_pton_ipv4(const char *src, unsigned char *dst, size_t size) { static const char xdigits[] = "0123456789abcdef"; static const char digits[] = "0123456789"; int n; int ch; int tmp = 0; int dirty; int bits; const unsigned char *odst = dst; ch = *src++; if (ch == '0' && (src[0] == 'x' || src[0] == 'X') && ares__isascii(src[1]) && ares__isxdigit(src[1])) { /* Hexadecimal: Eat nybble string. */ if (!size) { goto emsgsize; } dirty = 0; src++; /* skip x or X. */ while ((ch = *src++) != '\0' && ares__isascii(ch) && ares__isxdigit(ch)) { if (ares__isupper(ch)) { ch = ares__tolower((unsigned char)ch); } n = (int)(strchr(xdigits, ch) - xdigits); if (dirty == 0) { tmp = n; } else { tmp = (tmp << 4) | n; } if (++dirty == 2) { if (!size--) { goto emsgsize; } *dst++ = (unsigned char)tmp; dirty = 0; } } if (dirty) { /* Odd trailing nybble? */ if (!size--) { goto emsgsize; } *dst++ = (unsigned char)(tmp << 4); } } else if (ares__isascii(ch) && ares__isdigit(ch)) { /* Decimal: eat dotted digit string. 
*/ for (;;) { tmp = 0; do { n = (int)(strchr(digits, ch) - digits); tmp *= 10; tmp += n; if (tmp > 255) { goto enoent; } } while ((ch = *src++) != '\0' && ares__isascii(ch) && ares__isdigit(ch)); if (!size--) { goto emsgsize; } *dst++ = (unsigned char)tmp; if (ch == '\0' || ch == '/') { break; } if (ch != '.') { goto enoent; } ch = *src++; if (!ares__isascii(ch) || !ares__isdigit(ch)) { goto enoent; } } } else { goto enoent; } bits = -1; if (ch == '/' && ares__isascii(src[0]) && ares__isdigit(src[0]) && dst > odst) { /* CIDR width specifier. Nothing can follow it. */ ch = *src++; /* Skip over the /. */ bits = 0; do { n = (int)(strchr(digits, ch) - digits); bits *= 10; bits += n; if (bits > 32) { goto enoent; } } while ((ch = *src++) != '\0' && ares__isascii(ch) && ares__isdigit(ch)); if (ch != '\0') { goto enoent; } } /* Firey death and destruction unless we prefetched EOS. */ if (ch != '\0') { goto enoent; } /* If nothing was written to the destination, we found no address. */ if (dst == odst) { goto enoent; /* LCOV_EXCL_LINE: all valid paths above increment dst */ } /* If no CIDR spec was given, infer width from net class. */ if (bits == -1) { if (*odst >= 240) { /* Class E */ bits = 32; } else if (*odst >= 224) { /* Class D */ bits = 8; } else if (*odst >= 192) { /* Class C */ bits = 24; } else if (*odst >= 128) { /* Class B */ bits = 16; } else { /* Class A */ bits = 8; } /* If imputed mask is narrower than specified octets, widen. */ if (bits < ((dst - odst) * 8)) { bits = (int)(dst - odst) * 8; } /* * If there are no additional bits specified for a class D * address adjust bits to 4. */ if (bits == 8 && *odst == 224) { bits = 4; } } /* Extend network to cover the actual mask. */ while (bits > ((dst - odst) * 8)) { if (!size--) { goto emsgsize; } *dst++ = '\0'; } return bits; enoent: SET_ERRNO(ENOENT); return -1; emsgsize: SET_ERRNO(EMSGSIZE); return -1; } static int getbits(const char *src, size_t *bitsp) { static const char digits[] = "0123456789"; size_t n; size_t val; char ch; val = 0; n = 0; while ((ch = *src++) != '\0') { const char *pch; pch = strchr(digits, ch); if (pch != NULL) { if (n++ != 0 && val == 0) { /* no leading zeros */ return 0; } val *= 10; val += (size_t)(pch - digits); if (val > 128) { /* range */ return 0; } continue; } return 0; } if (n == 0) { return 0; } *bitsp = val; return 1; } static int ares_inet_pton6(const char *src, unsigned char *dst) { static const char xdigits_l[] = "0123456789abcdef"; static const char xdigits_u[] = "0123456789ABCDEF"; unsigned char tmp[NS_IN6ADDRSZ]; unsigned char *tp; unsigned char *endp; unsigned char *colonp; const char *xdigits; const char *curtok; int ch; int saw_xdigit; int count_xdigit; unsigned int val; memset((tp = tmp), '\0', NS_IN6ADDRSZ); endp = tp + NS_IN6ADDRSZ; colonp = NULL; /* Leading :: requires some special handling. 
*/ if (*src == ':') { if (*++src != ':') { goto enoent; } } curtok = src; saw_xdigit = count_xdigit = 0; val = 0; while ((ch = *src++) != '\0') { const char *pch; if ((pch = strchr((xdigits = xdigits_l), ch)) == NULL) { pch = strchr((xdigits = xdigits_u), ch); } if (pch != NULL) { if (count_xdigit >= 4) { goto enoent; } val <<= 4; val |= (unsigned int)(pch - xdigits); if (val > 0xffff) { goto enoent; } saw_xdigit = 1; count_xdigit++; continue; } if (ch == ':') { curtok = src; if (!saw_xdigit) { if (colonp) { goto enoent; } colonp = tp; continue; } else if (*src == '\0') { goto enoent; } if (tp + NS_INT16SZ > endp) { goto enoent; } *tp++ = (unsigned char)(val >> 8) & 0xff; *tp++ = (unsigned char)val & 0xff; saw_xdigit = 0; count_xdigit = 0; val = 0; continue; } if (ch == '.' && ((tp + NS_INADDRSZ) <= endp) && ares_inet_net_pton_ipv4(curtok, tp, NS_INADDRSZ) > 0) { tp += NS_INADDRSZ; saw_xdigit = 0; break; /* '\0' was seen by inet_pton4(). */ } goto enoent; } if (saw_xdigit) { if (tp + NS_INT16SZ > endp) { goto enoent; } *tp++ = (unsigned char)(val >> 8) & 0xff; *tp++ = (unsigned char)val & 0xff; } if (colonp != NULL) { /* * Since some memmove()'s erroneously fail to handle * overlapping regions, we'll do the shift by hand. */ const int n = (int)(tp - colonp); int i; if (tp == endp) { goto enoent; } for (i = 1; i <= n; i++) { endp[-i] = colonp[n - i]; colonp[n - i] = 0; } tp = endp; } if (tp != endp) { goto enoent; } memcpy(dst, tmp, NS_IN6ADDRSZ); return 1; enoent: SET_ERRNO(ENOENT); return -1; } static int ares_inet_net_pton_ipv6(const char *src, unsigned char *dst, size_t size) { struct ares_in6_addr in6; int ret; size_t bits; size_t bytes; char buf[INET6_ADDRSTRLEN + sizeof("/128")]; char *sep; if (ares_strlen(src) >= sizeof buf) { SET_ERRNO(EMSGSIZE); return -1; } ares_strcpy(buf, src, sizeof buf); sep = strchr(buf, '/'); if (sep != NULL) { *sep++ = '\0'; } ret = ares_inet_pton6(buf, (unsigned char *)&in6); if (ret != 1) { return -1; } if (sep == NULL) { bits = 128; } else { if (!getbits(sep, &bits)) { SET_ERRNO(ENOENT); return -1; } } bytes = (bits + 7) / 8; if (bytes > size) { SET_ERRNO(EMSGSIZE); return -1; } memcpy(dst, &in6, bytes); return (int)bits; } /* * int * inet_net_pton(af, src, dst, size) * convert network number from presentation to network format. * accepts hex octets, hex strings, decimal octets, and /CIDR. * "size" is in bytes and describes "dst". * return: * number of bits, either imputed classfully or specified with /CIDR, * or -1 if some failure occurred (check errno). ENOENT means it was * not a valid network specification. * note: * On Windows we store the error in the thread errno, not * in the winsock error code. This is to avoid losing the * actual last winsock error. So use macro ERRNO to fetch the * errno this function sets when returning (-1), not SOCKERRNO. * author: * Paul Vixie (ISC), June 1996 */ int ares_inet_net_pton(int af, const char *src, void *dst, size_t size) { switch (af) { case AF_INET: return ares_inet_net_pton_ipv4(src, dst, size); case AF_INET6: return ares_inet_net_pton_ipv6(src, dst, size); default: SET_ERRNO(EAFNOSUPPORT); return -1; } } int ares_inet_pton(int af, const char *src, void *dst) { int result; size_t size; if (af == AF_INET) { size = sizeof(struct in_addr); } else if (af == AF_INET6) { size = sizeof(struct ares_in6_addr); } else { SET_ERRNO(EAFNOSUPPORT); return -1; } result = ares_inet_net_pton(af, src, dst, size); if (result == -1 && ERRNO == ENOENT) { return 0; } return (result > -1) ? 
1 : -1; } gevent-24.11.1/deps/c-ares/src/lib/inet_ntop.c000066400000000000000000000134231471441230600207620ustar00rootroot00000000000000/* * Copyright (c) 2004 by Internet Systems Consortium, Inc. ("ISC") * Copyright (c) 1996-1999 by Internet Software Consortium. * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #include "ares_nameser.h" #include "ares_ipv6.h" /* * WARNING: Don't even consider trying to compile this on a system where * sizeof(int) < 4. sizeof(int) > 4 is fine; all the world's not a VAX. */ static const char *inet_ntop4(const unsigned char *src, char *dst, size_t size); static const char *inet_ntop6(const unsigned char *src, char *dst, size_t size); /* char * * inet_ntop(af, src, dst, size) * convert a network format address to presentation format. * return: * pointer to presentation format address (`dst'), or NULL (see errno). * note: * On Windows we store the error in the thread errno, not * in the winsock error code. This is to avoid losing the * actual last winsock error. So use macro ERRNO to fetch the * errno this function sets when returning NULL, not SOCKERRNO. * author: * Paul Vixie, 1996. */ const char *ares_inet_ntop(int af, const void *src, char *dst, ares_socklen_t size) { switch (af) { case AF_INET: return inet_ntop4(src, dst, (size_t)size); case AF_INET6: return inet_ntop6(src, dst, (size_t)size); default: break; } SET_ERRNO(EAFNOSUPPORT); return NULL; } /* const char * * inet_ntop4(src, dst, size) * format an IPv4 address * return: * `dst' (as a const) * notes: * (1) uses no statics * (2) takes a unsigned char* not an in_addr as input * author: * Paul Vixie, 1996. */ static const char *inet_ntop4(const unsigned char *src, char *dst, size_t size) { static const char fmt[] = "%u.%u.%u.%u"; char tmp[sizeof("255.255.255.255")]; if (size < sizeof(tmp)) { SET_ERRNO(ENOSPC); return NULL; } if ((size_t)snprintf(tmp, sizeof(tmp), fmt, src[0], src[1], src[2], src[3]) >= size) { SET_ERRNO(ENOSPC); return NULL; } ares_strcpy(dst, tmp, size); return dst; } /* const char * * inet_ntop6(src, dst, size) * convert IPv6 binary address into presentation (printable) format * author: * Paul Vixie, 1996. */ static const char *inet_ntop6(const unsigned char *src, char *dst, size_t size) { /* * Note that int32_t and int16_t need only be "at least" large enough * to contain a value of the specified size. On some systems, like * Crays, there is no such thing as an integer variable with 16 bits. * Keep this in mind if you think this function should have been coded * to use pointer overlays. All the world's not a VAX. 
*/ char tmp[sizeof("ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255")]; char *tp; struct { ares_ssize_t base; size_t len; } best, cur; unsigned int words[NS_IN6ADDRSZ / NS_INT16SZ]; size_t i; /* * Preprocess: * Copy the input (bytewise) array into a wordwise array. * Find the longest run of 0x00's in src[] for :: shorthanding. */ memset(words, '\0', sizeof(words)); for (i = 0; i < NS_IN6ADDRSZ; i++) { words[i / 2] |= (unsigned int)(src[i] << ((1 - (i % 2)) << 3)); } best.base = -1; best.len = 0; cur.base = -1; cur.len = 0; for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) { if (words[i] == 0) { if (cur.base == -1) { cur.base = (ares_ssize_t)i; cur.len = 1; } else { cur.len++; } } else { if (cur.base != -1) { if (best.base == -1 || cur.len > best.len) { best = cur; } cur.base = -1; } } } if (cur.base != -1) { if (best.base == -1 || cur.len > best.len) { best = cur; } } if (best.base != -1 && best.len < 2) { best.base = -1; } /* * Format the result. */ tp = tmp; for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) { /* Are we inside the best run of 0x00's? */ if (best.base != -1 && i >= (size_t)best.base && i < ((size_t)best.base + best.len)) { if (i == (size_t)best.base) { *tp++ = ':'; } continue; } /* Are we following an initial run of 0x00s or any real hex? */ if (i != 0) { *tp++ = ':'; } /* Is this address an encapsulated IPv4? */ if (i == 6 && best.base == 0 && (best.len == 6 || (best.len == 7 && words[7] != 0x0001) || (best.len == 5 && words[5] == 0xffff))) { if (!inet_ntop4(src + 12, tp, sizeof(tmp) - (size_t)(tp - tmp))) { return (NULL); } tp += ares_strlen(tp); break; } tp += snprintf(tp, sizeof(tmp) - (size_t)(tp - tmp), "%x", words[i]); } /* Was it a trailing run of 0x00's? */ if (best.base != -1 && ((size_t)best.base + best.len) == (NS_IN6ADDRSZ / NS_INT16SZ)) { *tp++ = ':'; } *tp++ = '\0'; /* * Check for overflow, copy, and we're done. */ if ((size_t)(tp - tmp) > size) { SET_ERRNO(ENOSPC); return NULL; } ares_strcpy(dst, tmp, size); return dst; } gevent-24.11.1/deps/c-ares/src/lib/legacy/000077500000000000000000000000001471441230600200605ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_create_query.c000066400000000000000000000052161471441230600237320ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" static int ares_create_query_int(const char *name, int dnsclass, int type, unsigned short id, int rd, unsigned char **bufp, int *buflenp, int max_udp_size) { ares_status_t status; ares_dns_record_t *dnsrec = NULL; size_t len; ares_dns_flags_t rd_flag = rd ? ARES_FLAG_RD : 0; if (name == NULL || bufp == NULL || buflenp == NULL) { status = ARES_EFORMERR; goto done; } *bufp = NULL; *buflenp = 0; status = ares_dns_record_create_query( &dnsrec, name, (ares_dns_class_t)dnsclass, (ares_dns_rec_type_t)type, id, rd_flag, (size_t)max_udp_size); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_write(dnsrec, bufp, &len); if (status != ARES_SUCCESS) { goto done; } *buflenp = (int)len; done: ares_dns_record_destroy(dnsrec); return (int)status; } int ares_create_query(const char *name, int dnsclass, int type, unsigned short id, int rd, unsigned char **bufp, int *buflenp, int max_udp_size) { return ares_create_query_int(name, dnsclass, type, id, rd, bufp, buflenp, max_udp_size); } int ares_mkquery(const char *name, int dnsclass, int type, unsigned short id, int rd, unsigned char **buf, int *buflen) { return ares_create_query_int(name, dnsclass, type, id, rd, buf, buflen, 0); } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_expand_name.c000066400000000000000000000060261471441230600235210ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998, 2011 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #include "ares_nameser.h" ares_status_t ares__expand_name_validated(const unsigned char *encoded, const unsigned char *abuf, size_t alen, char **s, size_t *enclen, ares_bool_t is_hostname) { ares_status_t status; ares__buf_t *buf = NULL; size_t start_len; if (encoded == NULL || abuf == NULL || alen == 0 || enclen == NULL) { return ARES_EBADNAME; /* EFORMERR would be better */ } if (encoded < abuf || encoded >= abuf + alen) { return ARES_EBADNAME; /* EFORMERR would be better */ } *enclen = 0; /* NOTE: we allow 's' to be NULL to skip it */ if (s) { *s = NULL; } buf = ares__buf_create_const(abuf, alen); if (buf == NULL) { return ARES_ENOMEM; } status = ares__buf_set_position(buf, (size_t)(encoded - abuf)); if (status != ARES_SUCCESS) { goto done; } start_len = ares__buf_len(buf); status = ares__dns_name_parse(buf, s, is_hostname); if (status != ARES_SUCCESS) { goto done; } *enclen = start_len - ares__buf_len(buf); done: ares__buf_destroy(buf); return status; } int ares_expand_name(const unsigned char *encoded, const unsigned char *abuf, int alen, char **s, long *enclen) { /* Keep public API compatible */ size_t enclen_temp = 0; ares_status_t status; if (encoded == NULL || abuf == NULL || alen <= 0 || enclen == NULL) { return ARES_EBADNAME; } status = ares__expand_name_validated(encoded, abuf, (size_t)alen, s, &enclen_temp, ARES_FALSE); *enclen = (long)enclen_temp; return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_expand_string.c000066400000000000000000000064441471441230600241130ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #include "ares_nameser.h" /* Simply decodes a length-encoded character string. The first byte of the * input is the length of the string to be returned and the bytes thereafter * are the characters of the string. The returned result will be NULL * terminated. 
*/ ares_status_t ares_expand_string_ex(const unsigned char *encoded, const unsigned char *abuf, size_t alen, unsigned char **s, size_t *enclen) { ares_status_t status; ares__buf_t *buf = NULL; size_t start_len; size_t len = 0; if (encoded == NULL || abuf == NULL || alen == 0 || enclen == NULL) { return ARES_EBADSTR; /* EFORMERR would be better */ } if (encoded < abuf || encoded >= abuf + alen) { return ARES_EBADSTR; /* EFORMERR would be better */ } *enclen = 0; /* NOTE: we allow 's' to be NULL to skip it */ if (s) { *s = NULL; } buf = ares__buf_create_const(abuf, alen); if (buf == NULL) { return ARES_ENOMEM; } status = ares__buf_set_position(buf, (size_t)(encoded - abuf)); if (status != ARES_SUCCESS) { goto done; } start_len = ares__buf_len(buf); status = ares__buf_parse_dns_binstr(buf, ares__buf_len(buf), s, &len); /* hrm, no way to pass back 'len' with the prototype */ if (status != ARES_SUCCESS) { goto done; } *enclen = start_len - ares__buf_len(buf); done: ares__buf_destroy(buf); if (status == ARES_EBADNAME || status == ARES_EBADRESP) { status = ARES_EBADSTR; } return status; } int ares_expand_string(const unsigned char *encoded, const unsigned char *abuf, int alen, unsigned char **s, long *enclen) { ares_status_t status; size_t temp_enclen = 0; if (encoded == NULL || abuf == NULL || alen <= 0 || enclen == NULL) { return ARES_EBADRESP; } status = ares_expand_string_ex(encoded, abuf, (size_t)alen, s, &temp_enclen); *enclen = (long)temp_enclen; return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_fds.c000066400000000000000000000052311471441230600220130ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" int ares_fds(const ares_channel_t *channel, fd_set *read_fds, fd_set *write_fds) { ares_socket_t nfds; ares__slist_node_t *snode; /* Are there any active queries? 
*/
*/ size_t active_queries; if (channel == NULL || read_fds == NULL || write_fds == NULL) { return 0; } ares__channel_lock(channel); active_queries = ares__llist_len(channel->all_queries); nfds = 0; for (snode = ares__slist_node_first(channel->servers); snode != NULL; snode = ares__slist_node_next(snode)) { ares_server_t *server = ares__slist_node_val(snode); ares__llist_node_t *node; for (node = ares__llist_node_first(server->connections); node != NULL; node = ares__llist_node_next(node)) { const ares_conn_t *conn = ares__llist_node_val(node); if (!active_queries && !(conn->flags & ARES_CONN_FLAG_TCP)) { continue; } /* Silence coverity, shouldn't be possible */ if (conn->fd == ARES_SOCKET_BAD) { continue; } /* Always wait on read */ FD_SET(conn->fd, read_fds); if (conn->fd >= nfds) { nfds = conn->fd + 1; } /* TCP only wait on write if we have buffered data */ if (conn->flags & ARES_CONN_FLAG_TCP && ares__buf_len(server->tcp_send)) { FD_SET(conn->fd, write_fds); } } } ares__channel_unlock(channel); return (int)nfds; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_getsock.c000066400000000000000000000055341471441230600227040ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2005 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" int ares_getsock(const ares_channel_t *channel, ares_socket_t *socks, int numsocks) /* size of the 'socks' array */ { ares__slist_node_t *snode; size_t sockindex = 0; unsigned int bitmap = 0; unsigned int setbits = 0xffffffff; /* Are there any active queries? */ size_t active_queries; if (channel == NULL || numsocks <= 0) { return 0; } ares__channel_lock(channel); active_queries = ares__llist_len(channel->all_queries); for (snode = ares__slist_node_first(channel->servers); snode != NULL; snode = ares__slist_node_next(snode)) { ares_server_t *server = ares__slist_node_val(snode); ares__llist_node_t *node; for (node = ares__llist_node_first(server->connections); node != NULL; node = ares__llist_node_next(node)) { const ares_conn_t *conn = ares__llist_node_val(node); if (sockindex >= (size_t)numsocks || sockindex >= ARES_GETSOCK_MAXNUM) { break; } /* We only need to register interest in UDP sockets if we have * outstanding queries. 
*/ if (!active_queries && !(conn->flags & ARES_CONN_FLAG_TCP)) { continue; } socks[sockindex] = conn->fd; if (active_queries || conn->flags & ARES_CONN_FLAG_TCP) { bitmap |= ARES_GETSOCK_READABLE(setbits, sockindex); } if (conn->flags & ARES_CONN_FLAG_TCP && ares__buf_len(server->tcp_send)) { /* then the tcp socket is also writable! */ bitmap |= ARES_GETSOCK_WRITABLE(setbits, sockindex); } sockindex++; } } ares__channel_unlock(channel); return (int)bitmap; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_a_reply.c000066400000000000000000000057521471441230600240740ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2019 Andrew Selivanov * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_LIMITS_H # include #endif int ares_parse_a_reply(const unsigned char *abuf, int alen, struct hostent **host, struct ares_addrttl *addrttls, int *naddrttls) { struct ares_addrinfo ai; char *question_hostname = NULL; ares_status_t status; size_t req_naddrttls = 0; ares_dns_record_t *dnsrec = NULL; if (alen < 0) { return ARES_EBADRESP; } if (naddrttls) { req_naddrttls = (size_t)*naddrttls; *naddrttls = 0; } memset(&ai, 0, sizeof(ai)); status = ares_dns_parse(abuf, (size_t)alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto fail; } status = ares__parse_into_addrinfo(dnsrec, 0, 0, &ai); if (status != ARES_SUCCESS && status != ARES_ENODATA) { goto fail; } if (host != NULL) { status = ares__addrinfo2hostent(&ai, AF_INET, host); if (status != ARES_SUCCESS && status != ARES_ENODATA) { goto fail; /* LCOV_EXCL_LINE: DefensiveCoding */ } } if (addrttls != NULL && req_naddrttls) { size_t temp_naddrttls = 0; ares__addrinfo2addrttl(&ai, AF_INET, req_naddrttls, addrttls, NULL, &temp_naddrttls); *naddrttls = (int)temp_naddrttls; } fail: ares__freeaddrinfo_cnames(ai.cnames); ares__freeaddrinfo_nodes(ai.nodes); ares_free(ai.name); ares_free(question_hostname); ares_dns_record_destroy(dnsrec); if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_aaaa_reply.c000066400000000000000000000060741471441230600245350ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) 2005 Dominick Meglio * Copyright (c) 2019 Andrew Selivanov * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_LIMITS_H # include #endif #include "ares_inet_net_pton.h" int ares_parse_aaaa_reply(const unsigned char *abuf, int alen, struct hostent **host, struct ares_addr6ttl *addrttls, int *naddrttls) { struct ares_addrinfo ai; char *question_hostname = NULL; ares_status_t status; size_t req_naddrttls = 0; ares_dns_record_t *dnsrec = NULL; if (alen < 0) { return ARES_EBADRESP; } if (naddrttls) { req_naddrttls = (size_t)*naddrttls; *naddrttls = 0; } memset(&ai, 0, sizeof(ai)); status = ares_dns_parse(abuf, (size_t)alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto fail; } status = ares__parse_into_addrinfo(dnsrec, 0, 0, &ai); if (status != ARES_SUCCESS && status != ARES_ENODATA) { goto fail; } if (host != NULL) { status = ares__addrinfo2hostent(&ai, AF_INET6, host); if (status != ARES_SUCCESS && status != ARES_ENODATA) { goto fail; /* LCOV_EXCL_LINE: DefensiveCoding */ } } if (addrttls != NULL && req_naddrttls) { size_t temp_naddrttls = 0; ares__addrinfo2addrttl(&ai, AF_INET6, req_naddrttls, NULL, addrttls, &temp_naddrttls); *naddrttls = (int)temp_naddrttls; } fail: ares__freeaddrinfo_cnames(ai.cnames); ares__freeaddrinfo_nodes(ai.nodes); ares_free(question_hostname); ares_free(ai.name); ares_dns_record_destroy(dnsrec); if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_caa_reply.c000066400000000000000000000106401471441230600243700ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_caa_reply(const unsigned char *abuf, int alen_int, struct ares_caa_reply **caa_out) { ares_status_t status; size_t alen; struct ares_caa_reply *caa_head = NULL; struct ares_caa_reply *caa_last = NULL; struct ares_caa_reply *caa_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *caa_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const unsigned char *ptr; size_t ptr_len; const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* XXX: Why do we allow Chaos class? */ if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN && ares_dns_rr_get_class(rr) != ARES_CLASS_CHAOS) { continue; } /* Only looking for CAA records */ if (ares_dns_rr_get_type(rr) != ARES_REC_TYPE_CAA) { continue; } /* Allocate storage for this CAA answer appending it to the list */ caa_curr = ares_malloc_data(ARES_DATATYPE_CAA_REPLY); if (caa_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (caa_last) { caa_last->next = caa_curr; } else { caa_head = caa_curr; } caa_last = caa_curr; caa_curr->critical = ares_dns_rr_get_u8(rr, ARES_RR_CAA_CRITICAL); caa_curr->property = (unsigned char *)ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_CAA_TAG)); if (caa_curr->property == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ break; /* LCOV_EXCL_LINE: OutOfMemory */ } /* RFC6844 says this can only be ascii, so not sure why we're recording a * length */ caa_curr->plength = ares_strlen((const char *)caa_curr->property); ptr = ares_dns_rr_get_bin(rr, ARES_RR_CAA_VALUE, &ptr_len); if (ptr == NULL) { status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Wants NULL termination for some reason */ caa_curr->value = ares_malloc(ptr_len + 1); if (caa_curr->value == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } memcpy(caa_curr->value, ptr, ptr_len); caa_curr->value[ptr_len] = 0; caa_curr->length = ptr_len; } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (caa_head) { ares_free_data(caa_head); } } else { /* everything looks fine, return the data */ *caa_out = caa_head; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_mx_reply.c000066400000000000000000000065731471441230600243020ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * 
paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_mx_reply(const unsigned char *abuf, int alen_int, struct ares_mx_reply **mx_out) { ares_status_t status; size_t alen; struct ares_mx_reply *mx_head = NULL; struct ares_mx_reply *mx_last = NULL; struct ares_mx_reply *mx_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *mx_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_MX) { continue; } /* Allocate storage for this MX answer appending it to the list */ mx_curr = ares_malloc_data(ARES_DATATYPE_MX_REPLY); if (mx_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (mx_last) { mx_last->next = mx_curr; } else { mx_head = mx_curr; } mx_last = mx_curr; mx_curr->priority = ares_dns_rr_get_u16(rr, ARES_RR_MX_PREFERENCE); mx_curr->host = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_MX_EXCHANGE)); if (mx_curr->host == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (mx_head) { ares_free_data(mx_head); } } else { /* everything looks fine, return the data */ *mx_out = mx_head; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_naptr_reply.c000066400000000000000000000110211471441230600247620ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_naptr_reply(const unsigned char *abuf, int alen_int, struct ares_naptr_reply **naptr_out) { ares_status_t status; size_t alen; struct ares_naptr_reply *naptr_head = NULL; struct ares_naptr_reply *naptr_last = NULL; struct ares_naptr_reply *naptr_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *naptr_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_NAPTR) { continue; } /* Allocate storage for this NAPTR answer appending it to the list */ naptr_curr = ares_malloc_data(ARES_DATATYPE_NAPTR_REPLY); if (naptr_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (naptr_last) { naptr_last->next = naptr_curr; } else { naptr_head = naptr_curr; } naptr_last = naptr_curr; naptr_curr->order = ares_dns_rr_get_u16(rr, ARES_RR_NAPTR_ORDER); naptr_curr->preference = ares_dns_rr_get_u16(rr, ARES_RR_NAPTR_PREFERENCE); /* XXX: Why is this unsigned char * ? */ naptr_curr->flags = (unsigned char *)ares_strdup( ares_dns_rr_get_str(rr, ARES_RR_NAPTR_FLAGS)); if (naptr_curr->flags == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* XXX: Why is this unsigned char * ? */ naptr_curr->service = (unsigned char *)ares_strdup( ares_dns_rr_get_str(rr, ARES_RR_NAPTR_SERVICES)); if (naptr_curr->service == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* XXX: Why is this unsigned char * ? 
*/ naptr_curr->regexp = (unsigned char *)ares_strdup( ares_dns_rr_get_str(rr, ARES_RR_NAPTR_REGEXP)); if (naptr_curr->regexp == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } naptr_curr->replacement = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_NAPTR_REPLACEMENT)); if (naptr_curr->replacement == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (naptr_head) { ares_free_data(naptr_head); } } else { /* everything looks fine, return the data */ *naptr_out = naptr_head; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_ns_reply.c000066400000000000000000000111121471441230600242570ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif int ares_parse_ns_reply(const unsigned char *abuf, int alen_int, struct hostent **host) { ares_status_t status; size_t alen; size_t nscount = 0; struct hostent *hostent = NULL; const char *hostname = NULL; ares_dns_record_t *dnsrec = NULL; size_t i; size_t ancount; *host = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } ancount = ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); if (ancount == 0) { status = ARES_ENODATA; goto done; } /* Response structure */ hostent = ares_malloc(sizeof(*hostent)); if (hostent == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(hostent, 0, sizeof(*hostent)); hostent->h_addr_list = ares_malloc(sizeof(*hostent->h_addr_list)); if (hostent->h_addr_list == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } hostent->h_addr_list[0] = NULL; hostent->h_addrtype = AF_INET; hostent->h_length = sizeof(struct in_addr); /* Fill in hostname */ status = ares_dns_record_query_get(dnsrec, 0, &hostname, NULL, NULL); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } hostent->h_name = ares_strdup(hostname); if (hostent->h_name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Preallocate the maximum number + 1 */ hostent->h_aliases = ares_malloc((ancount + 1) * sizeof(*hostent->h_aliases)); if (hostent->h_aliases == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } memset(hostent->h_aliases, 0, (ancount + 1) * sizeof(*hostent->h_aliases)); for (i = 0; i < ancount; i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_NS) { continue; } hostname = ares_dns_rr_get_str(rr, ARES_RR_NS_NSDNAME); if (hostname == NULL) { status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } hostent->h_aliases[nscount] = ares_strdup(hostname); if (hostent->h_aliases[nscount] == NULL) { status = ARES_ENOMEM; goto done; } nscount++; } if (nscount == 0) { status = ARES_ENODATA; } else { status = ARES_SUCCESS; } done: if (status != ARES_SUCCESS) { ares_free_hostent(hostent); /* Compatibility */ if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } } else { *host = hostent; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_ptr_reply.c000066400000000000000000000143021471441230600244500ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom 
the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif ares_status_t ares_parse_ptr_reply_dnsrec(const ares_dns_record_t *dnsrec, const void *addr, int addrlen, int family, struct hostent **host) { ares_status_t status; size_t ptrcount = 0; struct hostent *hostent = NULL; const char *hostname = NULL; const char *ptrname = NULL; size_t i; size_t ancount; *host = NULL; /* Fetch name from query as we will use it to compare later on. Old code * did this check, so we'll retain it. */ status = ares_dns_record_query_get(dnsrec, 0, &ptrname, NULL, NULL); if (status != ARES_SUCCESS) { goto done; } ancount = ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); if (ancount == 0) { status = ARES_ENODATA; goto done; } /* Response structure */ hostent = ares_malloc(sizeof(*hostent)); if (hostent == NULL) { status = ARES_ENOMEM; goto done; } memset(hostent, 0, sizeof(*hostent)); hostent->h_addr_list = ares_malloc(2 * sizeof(*hostent->h_addr_list)); if (hostent->h_addr_list == NULL) { status = ARES_ENOMEM; goto done; } memset(hostent->h_addr_list, 0, 2 * sizeof(*hostent->h_addr_list)); if (addr != NULL && addrlen > 0) { hostent->h_addr_list[0] = ares_malloc((size_t)addrlen); if (hostent->h_addr_list[0] == NULL) { status = ARES_ENOMEM; goto done; } memcpy(hostent->h_addr_list[0], addr, (size_t)addrlen); } hostent->h_addrtype = (HOSTENT_ADDRTYPE_TYPE)family; hostent->h_length = (HOSTENT_LENGTH_TYPE)addrlen; /* Preallocate the maximum number + 1 */ hostent->h_aliases = ares_malloc((ancount + 1) * sizeof(*hostent->h_aliases)); if (hostent->h_aliases == NULL) { status = ARES_ENOMEM; goto done; } memset(hostent->h_aliases, 0, (ancount + 1) * sizeof(*hostent->h_aliases)); /* Cycle through answers */ for (i = 0; i < ancount; i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get_const(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN) { continue; } /* Any time we see a CNAME, replace our ptrname with its value */ if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_CNAME) { ptrname = ares_dns_rr_get_str(rr, ARES_RR_CNAME_CNAME); if (ptrname == NULL) { status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } } /* Handling for PTR records below this, otherwise skip */ if (ares_dns_rr_get_type(rr) != ARES_REC_TYPE_PTR) { continue; } /* Issue #683 * Old code compared the name in the rr to the ptrname, but I think this * is wrong since it was proven wrong for A & AAAA records. 
Leaving * this code commented out for future reference * * rname = ares_dns_rr_get_name(rr); * if (rname == NULL) { * status = ARES_EBADRESP; * goto done; * } * if (strcasecmp(ptrname, rname) != 0) { * continue; * } */ /* Save most recent PTR record as the hostname */ hostname = ares_dns_rr_get_str(rr, ARES_RR_PTR_DNAME); if (hostname == NULL) { status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Append as an alias */ hostent->h_aliases[ptrcount] = ares_strdup(hostname); if (hostent->h_aliases[ptrcount] == NULL) { status = ARES_ENOMEM; goto done; } ptrcount++; } if (ptrcount == 0) { status = ARES_ENODATA; goto done; } else { status = ARES_SUCCESS; } /* Fill in hostname */ hostent->h_name = ares_strdup(hostname); if (hostent->h_name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } done: if (status != ARES_SUCCESS) { ares_free_hostent(hostent); /* Compatibility */ if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } } else { *host = hostent; } return status; } int ares_parse_ptr_reply(const unsigned char *abuf, int alen_int, const void *addr, int addrlen, int family, struct hostent **host) { size_t alen; ares_dns_record_t *dnsrec = NULL; ares_status_t status; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } status = ares_parse_ptr_reply_dnsrec(dnsrec, addr, addrlen, family, host); done: ares_dns_record_destroy(dnsrec); if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_soa_reply.c000066400000000000000000000074061471441230600244340ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_soa_reply(const unsigned char *abuf, int alen_int, struct ares_soa_reply **soa_out) { ares_status_t status; size_t alen; struct ares_soa_reply *soa = NULL; ares_dns_record_t *dnsrec = NULL; size_t i; *soa_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_EBADRESP; /* ENODATA might make more sense */ goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_SOA) { continue; } /* allocate result struct */ soa = ares_malloc_data(ARES_DATATYPE_SOA_REPLY); if (soa == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } soa->serial = ares_dns_rr_get_u32(rr, ARES_RR_SOA_SERIAL); soa->refresh = ares_dns_rr_get_u32(rr, ARES_RR_SOA_REFRESH); soa->retry = ares_dns_rr_get_u32(rr, ARES_RR_SOA_RETRY); soa->expire = ares_dns_rr_get_u32(rr, ARES_RR_SOA_EXPIRE); soa->minttl = ares_dns_rr_get_u32(rr, ARES_RR_SOA_MINIMUM); soa->nsname = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_SOA_MNAME)); if (soa->nsname == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } soa->hostmaster = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_SOA_RNAME)); if (soa->hostmaster == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } break; } if (soa == NULL) { status = ARES_EBADRESP; } done: /* clean up on error */ if (status != ARES_SUCCESS) { ares_free_data(soa); /* Compatibility */ if (status == ARES_EBADNAME) { status = ARES_EBADRESP; } } else { /* everything looks fine, return the data */ *soa_out = soa; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_srv_reply.c000066400000000000000000000070471471441230600244650ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_srv_reply(const unsigned char *abuf, int alen_int, struct ares_srv_reply **srv_out) { ares_status_t status; size_t alen; struct ares_srv_reply *srv_head = NULL; struct ares_srv_reply *srv_last = NULL; struct ares_srv_reply *srv_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *srv_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_SRV) { continue; } /* Allocate storage for this SRV answer appending it to the list */ srv_curr = ares_malloc_data(ARES_DATATYPE_SRV_REPLY); if (srv_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (srv_last) { srv_last->next = srv_curr; } else { srv_head = srv_curr; } srv_last = srv_curr; srv_curr->priority = ares_dns_rr_get_u16(rr, ARES_RR_SRV_PRIORITY); srv_curr->weight = ares_dns_rr_get_u16(rr, ARES_RR_SRV_WEIGHT); srv_curr->port = ares_dns_rr_get_u16(rr, ARES_RR_SRV_PORT); srv_curr->host = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_SRV_TARGET)); if (srv_curr->host == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (srv_head) { ares_free_data(srv_head); } } else { /* everything looks fine, return the data */ *srv_out = srv_head; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_txt_reply.c000066400000000000000000000105751471441230600244720ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" static int ares__parse_txt_reply(const unsigned char *abuf, size_t alen, ares_bool_t ex, void **txt_out) { ares_status_t status; struct ares_txt_ext *txt_head = NULL; struct ares_txt_ext *txt_last = NULL; struct ares_txt_ext *txt_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *txt_out = NULL; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); size_t j; size_t cnt; if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* XXX: Why Chaos? */ if ((ares_dns_rr_get_class(rr) != ARES_CLASS_IN && ares_dns_rr_get_class(rr) != ARES_CLASS_CHAOS) || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_TXT) { continue; } cnt = ares_dns_rr_get_abin_cnt(rr, ARES_RR_TXT_DATA); for (j = 0; j < cnt; j++) { const unsigned char *ptr; size_t ptr_len; /* Allocate storage for this TXT answer appending it to the list */ txt_curr = ares_malloc_data(ex ? ARES_DATATYPE_TXT_EXT : ARES_DATATYPE_TXT_REPLY); if (txt_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (txt_last) { txt_last->next = txt_curr; } else { txt_head = txt_curr; } txt_last = txt_curr; /* Tag start on first for each TXT record */ if (ex && j == 0) { txt_curr->record_start = 1; } ptr = ares_dns_rr_get_abin(rr, ARES_RR_TXT_DATA, j, &ptr_len); txt_curr->txt = ares_malloc(ptr_len + 1); if (txt_curr->txt == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } memcpy(txt_curr->txt, ptr, ptr_len); txt_curr->txt[ptr_len] = 0; txt_curr->length = ptr_len; } } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (txt_head) { ares_free_data(txt_head); } } else { /* everything looks fine, return the data */ *txt_out = txt_head; } ares_dns_record_destroy(dnsrec); return (int)status; } int ares_parse_txt_reply(const unsigned char *abuf, int alen, struct ares_txt_reply **txt_out) { if (alen < 0) { return ARES_EBADRESP; } return ares__parse_txt_reply(abuf, (size_t)alen, ARES_FALSE, (void **)txt_out); } int ares_parse_txt_reply_ext(const unsigned char *abuf, int alen, struct ares_txt_ext **txt_out) { if (alen < 0) { return ARES_EBADRESP; } return ares__parse_txt_reply(abuf, (size_t)alen, ARES_TRUE, (void **)txt_out); } gevent-24.11.1/deps/c-ares/src/lib/legacy/ares_parse_uri_reply.c000066400000000000000000000067031471441230600244500ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to 
whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_data.h" int ares_parse_uri_reply(const unsigned char *abuf, int alen_int, struct ares_uri_reply **uri_out) { ares_status_t status; size_t alen; struct ares_uri_reply *uri_head = NULL; struct ares_uri_reply *uri_last = NULL; struct ares_uri_reply *uri_curr; ares_dns_record_t *dnsrec = NULL; size_t i; *uri_out = NULL; if (alen_int < 0) { return ARES_EBADRESP; } alen = (size_t)alen_int; status = ares_dns_parse(abuf, alen, 0, &dnsrec); if (status != ARES_SUCCESS) { goto done; } if (ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER) == 0) { status = ARES_ENODATA; goto done; } for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, ARES_SECTION_ANSWER, i); if (rr == NULL) { /* Shouldn't be possible */ status = ARES_EBADRESP; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares_dns_rr_get_class(rr) != ARES_CLASS_IN || ares_dns_rr_get_type(rr) != ARES_REC_TYPE_URI) { continue; } /* Allocate storage for this URI answer appending it to the list */ uri_curr = ares_malloc_data(ARES_DATATYPE_URI_REPLY); if (uri_curr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Link in the record */ if (uri_last) { uri_last->next = uri_curr; } else { uri_head = uri_curr; } uri_last = uri_curr; uri_curr->priority = ares_dns_rr_get_u16(rr, ARES_RR_URI_PRIORITY); uri_curr->weight = ares_dns_rr_get_u16(rr, ARES_RR_URI_WEIGHT); uri_curr->uri = ares_strdup(ares_dns_rr_get_str(rr, ARES_RR_URI_TARGET)); uri_curr->ttl = (int)ares_dns_rr_get_ttl(rr); if (uri_curr->uri == NULL) { status = ARES_ENOMEM; goto done; } } done: /* clean up on error */ if (status != ARES_SUCCESS) { if (uri_head) { ares_free_data(uri_head); } } else { /* everything looks fine, return the data */ *uri_out = uri_head; } ares_dns_record_destroy(dnsrec); return (int)status; } gevent-24.11.1/deps/c-ares/src/lib/record/000077500000000000000000000000001471441230600200725ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_mapping.c000066400000000000000000000654221471441230600235600ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice 
(including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" ares_bool_t ares_dns_opcode_isvalid(ares_dns_opcode_t opcode) { switch (opcode) { case ARES_OPCODE_QUERY: case ARES_OPCODE_IQUERY: case ARES_OPCODE_STATUS: case ARES_OPCODE_NOTIFY: case ARES_OPCODE_UPDATE: return ARES_TRUE; } return ARES_FALSE; } ares_bool_t ares_dns_rcode_isvalid(ares_dns_rcode_t rcode) { switch (rcode) { case ARES_RCODE_NOERROR: case ARES_RCODE_FORMERR: case ARES_RCODE_SERVFAIL: case ARES_RCODE_NXDOMAIN: case ARES_RCODE_NOTIMP: case ARES_RCODE_REFUSED: case ARES_RCODE_YXDOMAIN: case ARES_RCODE_YXRRSET: case ARES_RCODE_NXRRSET: case ARES_RCODE_NOTAUTH: case ARES_RCODE_NOTZONE: case ARES_RCODE_DSOTYPEI: case ARES_RCODE_BADSIG: case ARES_RCODE_BADKEY: case ARES_RCODE_BADTIME: case ARES_RCODE_BADMODE: case ARES_RCODE_BADNAME: case ARES_RCODE_BADALG: case ARES_RCODE_BADTRUNC: case ARES_RCODE_BADCOOKIE: return ARES_TRUE; } return ARES_FALSE; } ares_bool_t ares_dns_flags_arevalid(unsigned short flags) { unsigned short allflags = ARES_FLAG_QR | ARES_FLAG_AA | ARES_FLAG_TC | ARES_FLAG_RD | ARES_FLAG_RA | ARES_FLAG_AD | ARES_FLAG_CD; if (flags & ~allflags) { return ARES_FALSE; } return ARES_TRUE; } ares_bool_t ares_dns_rec_type_isvalid(ares_dns_rec_type_t type, ares_bool_t is_query) { switch (type) { case ARES_REC_TYPE_A: case ARES_REC_TYPE_NS: case ARES_REC_TYPE_CNAME: case ARES_REC_TYPE_SOA: case ARES_REC_TYPE_PTR: case ARES_REC_TYPE_HINFO: case ARES_REC_TYPE_MX: case ARES_REC_TYPE_TXT: case ARES_REC_TYPE_SIG: case ARES_REC_TYPE_AAAA: case ARES_REC_TYPE_SRV: case ARES_REC_TYPE_NAPTR: case ARES_REC_TYPE_OPT: case ARES_REC_TYPE_TLSA: case ARES_REC_TYPE_SVCB: case ARES_REC_TYPE_HTTPS: case ARES_REC_TYPE_ANY: case ARES_REC_TYPE_URI: case ARES_REC_TYPE_CAA: return ARES_TRUE; case ARES_REC_TYPE_RAW_RR: return is_query ? ARES_FALSE : ARES_TRUE; default: break; } return is_query ? ARES_TRUE : ARES_FALSE; } ares_bool_t ares_dns_rec_type_allow_name_compression(ares_dns_rec_type_t type) { /* Only record types defined in RFC1035 allow name compression within the * RDATA. 
Otherwise nameservers that don't understand an RR may not be * able to pass along the RR in a proper manner */ switch (type) { case ARES_REC_TYPE_A: case ARES_REC_TYPE_NS: case ARES_REC_TYPE_CNAME: case ARES_REC_TYPE_SOA: case ARES_REC_TYPE_PTR: case ARES_REC_TYPE_HINFO: case ARES_REC_TYPE_MX: case ARES_REC_TYPE_TXT: return ARES_TRUE; default: break; } return ARES_FALSE; } ares_bool_t ares_dns_class_isvalid(ares_dns_class_t qclass, ares_dns_rec_type_t type, ares_bool_t is_query) { /* If we don't understand the record type, we shouldn't validate the class * as there are some instances like on RFC 2391 (SIG RR) the class is * meaningless, but since we didn't support that record type, we didn't * know it shouldn't be validated */ if (type == ARES_REC_TYPE_RAW_RR) { return ARES_TRUE; } switch (qclass) { case ARES_CLASS_IN: case ARES_CLASS_CHAOS: case ARES_CLASS_HESOID: case ARES_CLASS_NONE: return ARES_TRUE; case ARES_CLASS_ANY: if (type == ARES_REC_TYPE_SIG) { return ARES_TRUE; } if (is_query) { return ARES_TRUE; } return ARES_FALSE; } return ARES_FALSE; } ares_bool_t ares_dns_section_isvalid(ares_dns_section_t sect) { switch (sect) { case ARES_SECTION_ANSWER: case ARES_SECTION_AUTHORITY: case ARES_SECTION_ADDITIONAL: return ARES_TRUE; } return ARES_FALSE; } ares_dns_rec_type_t ares_dns_rr_key_to_rec_type(ares_dns_rr_key_t key) { /* NOTE: due to the way we've numerated the keys, we can simply divide by * 100 to get the type rather than having to do a huge switch * statement. That said, we do then validate the type returned is * valid in case something completely bogus is passed in */ ares_dns_rec_type_t type = key / 100; if (!ares_dns_rec_type_isvalid(type, ARES_FALSE)) { return 0; } return type; } const char *ares_dns_rec_type_tostr(ares_dns_rec_type_t type) { switch (type) { case ARES_REC_TYPE_A: return "A"; case ARES_REC_TYPE_NS: return "NS"; case ARES_REC_TYPE_CNAME: return "CNAME"; case ARES_REC_TYPE_SOA: return "SOA"; case ARES_REC_TYPE_PTR: return "PTR"; case ARES_REC_TYPE_HINFO: return "HINFO"; case ARES_REC_TYPE_MX: return "MX"; case ARES_REC_TYPE_TXT: return "TXT"; case ARES_REC_TYPE_SIG: return "SIG"; case ARES_REC_TYPE_AAAA: return "AAAA"; case ARES_REC_TYPE_SRV: return "SRV"; case ARES_REC_TYPE_NAPTR: return "NAPTR"; case ARES_REC_TYPE_OPT: return "OPT"; case ARES_REC_TYPE_TLSA: return "TLSA"; case ARES_REC_TYPE_SVCB: return "SVCB"; case ARES_REC_TYPE_HTTPS: return "HTTPS"; case ARES_REC_TYPE_ANY: return "ANY"; case ARES_REC_TYPE_URI: return "URI"; case ARES_REC_TYPE_CAA: return "CAA"; case ARES_REC_TYPE_RAW_RR: return "RAWRR"; } return "UNKNOWN"; } const char *ares_dns_class_tostr(ares_dns_class_t qclass) { switch (qclass) { case ARES_CLASS_IN: return "IN"; case ARES_CLASS_CHAOS: return "CH"; case ARES_CLASS_HESOID: return "HS"; case ARES_CLASS_ANY: return "ANY"; case ARES_CLASS_NONE: return "NONE"; } return "UNKNOWN"; } const char *ares_dns_opcode_tostr(ares_dns_opcode_t opcode) { switch (opcode) { case ARES_OPCODE_QUERY: return "QUERY"; case ARES_OPCODE_IQUERY: return "IQUERY"; case ARES_OPCODE_STATUS: return "STATUS"; case ARES_OPCODE_NOTIFY: return "NOTIFY"; case ARES_OPCODE_UPDATE: return "UPDATE"; } return "UNKNOWN"; } const char *ares_dns_rr_key_tostr(ares_dns_rr_key_t key) { switch (key) { case ARES_RR_A_ADDR: return "ADDR"; case ARES_RR_NS_NSDNAME: return "NSDNAME"; case ARES_RR_CNAME_CNAME: return "CNAME"; case ARES_RR_SOA_MNAME: return "MNAME"; case ARES_RR_SOA_RNAME: return "RNAME"; case ARES_RR_SOA_SERIAL: return "SERIAL"; case ARES_RR_SOA_REFRESH: return "REFRESH"; 
case ARES_RR_SOA_RETRY: return "RETRY"; case ARES_RR_SOA_EXPIRE: return "EXPIRE"; case ARES_RR_SOA_MINIMUM: return "MINIMUM"; case ARES_RR_PTR_DNAME: return "DNAME"; case ARES_RR_AAAA_ADDR: return "ADDR"; case ARES_RR_HINFO_CPU: return "CPU"; case ARES_RR_HINFO_OS: return "OS"; case ARES_RR_MX_PREFERENCE: return "PREFERENCE"; case ARES_RR_MX_EXCHANGE: return "EXCHANGE"; case ARES_RR_TXT_DATA: return "DATA"; case ARES_RR_SIG_TYPE_COVERED: return "TYPE_COVERED"; case ARES_RR_SIG_ALGORITHM: return "ALGORITHM"; case ARES_RR_SIG_LABELS: return "LABELS"; case ARES_RR_SIG_ORIGINAL_TTL: return "ORIGINAL_TTL"; case ARES_RR_SIG_EXPIRATION: return "EXPIRATION"; case ARES_RR_SIG_INCEPTION: return "INCEPTION"; case ARES_RR_SIG_KEY_TAG: return "KEY_TAG"; case ARES_RR_SIG_SIGNERS_NAME: return "SIGNERS_NAME"; case ARES_RR_SIG_SIGNATURE: return "SIGNATURE"; case ARES_RR_SRV_PRIORITY: return "PRIORITY"; case ARES_RR_SRV_WEIGHT: return "WEIGHT"; case ARES_RR_SRV_PORT: return "PORT"; case ARES_RR_SRV_TARGET: return "TARGET"; case ARES_RR_NAPTR_ORDER: return "ORDER"; case ARES_RR_NAPTR_PREFERENCE: return "PREFERENCE"; case ARES_RR_NAPTR_FLAGS: return "FLAGS"; case ARES_RR_NAPTR_SERVICES: return "SERVICES"; case ARES_RR_NAPTR_REGEXP: return "REGEXP"; case ARES_RR_NAPTR_REPLACEMENT: return "REPLACEMENT"; case ARES_RR_OPT_UDP_SIZE: return "UDP_SIZE"; case ARES_RR_OPT_VERSION: return "VERSION"; case ARES_RR_OPT_FLAGS: return "FLAGS"; case ARES_RR_OPT_OPTIONS: return "OPTIONS"; case ARES_RR_TLSA_CERT_USAGE: return "CERT_USAGE"; case ARES_RR_TLSA_SELECTOR: return "SELECTOR"; case ARES_RR_TLSA_MATCH: return "MATCH"; case ARES_RR_TLSA_DATA: return "DATA"; case ARES_RR_SVCB_PRIORITY: return "PRIORITY"; case ARES_RR_SVCB_TARGET: return "TARGET"; case ARES_RR_SVCB_PARAMS: return "PARAMS"; case ARES_RR_HTTPS_PRIORITY: return "PRIORITY"; case ARES_RR_HTTPS_TARGET: return "TARGET"; case ARES_RR_HTTPS_PARAMS: return "PARAMS"; case ARES_RR_URI_PRIORITY: return "PRIORITY"; case ARES_RR_URI_WEIGHT: return "WEIGHT"; case ARES_RR_URI_TARGET: return "TARGET"; case ARES_RR_CAA_CRITICAL: return "CRITICAL"; case ARES_RR_CAA_TAG: return "TAG"; case ARES_RR_CAA_VALUE: return "VALUE"; case ARES_RR_RAW_RR_TYPE: return "TYPE"; case ARES_RR_RAW_RR_DATA: return "DATA"; } return "UNKNOWN"; } ares_dns_datatype_t ares_dns_rr_key_datatype(ares_dns_rr_key_t key) { switch (key) { case ARES_RR_A_ADDR: return ARES_DATATYPE_INADDR; case ARES_RR_AAAA_ADDR: return ARES_DATATYPE_INADDR6; case ARES_RR_NS_NSDNAME: case ARES_RR_CNAME_CNAME: case ARES_RR_SOA_MNAME: case ARES_RR_SOA_RNAME: case ARES_RR_PTR_DNAME: case ARES_RR_MX_EXCHANGE: case ARES_RR_SIG_SIGNERS_NAME: case ARES_RR_SRV_TARGET: case ARES_RR_SVCB_TARGET: case ARES_RR_HTTPS_TARGET: case ARES_RR_NAPTR_REPLACEMENT: case ARES_RR_URI_TARGET: return ARES_DATATYPE_NAME; case ARES_RR_HINFO_CPU: case ARES_RR_HINFO_OS: case ARES_RR_NAPTR_FLAGS: case ARES_RR_NAPTR_SERVICES: case ARES_RR_NAPTR_REGEXP: case ARES_RR_CAA_TAG: return ARES_DATATYPE_STR; case ARES_RR_SOA_SERIAL: case ARES_RR_SOA_REFRESH: case ARES_RR_SOA_RETRY: case ARES_RR_SOA_EXPIRE: case ARES_RR_SOA_MINIMUM: case ARES_RR_SIG_ORIGINAL_TTL: case ARES_RR_SIG_EXPIRATION: case ARES_RR_SIG_INCEPTION: return ARES_DATATYPE_U32; case ARES_RR_MX_PREFERENCE: case ARES_RR_SIG_TYPE_COVERED: case ARES_RR_SIG_KEY_TAG: case ARES_RR_SRV_PRIORITY: case ARES_RR_SRV_WEIGHT: case ARES_RR_SRV_PORT: case ARES_RR_NAPTR_ORDER: case ARES_RR_NAPTR_PREFERENCE: case ARES_RR_OPT_UDP_SIZE: case ARES_RR_OPT_FLAGS: case ARES_RR_SVCB_PRIORITY: case ARES_RR_HTTPS_PRIORITY: 
case ARES_RR_URI_PRIORITY: case ARES_RR_URI_WEIGHT: case ARES_RR_RAW_RR_TYPE: return ARES_DATATYPE_U16; case ARES_RR_SIG_ALGORITHM: case ARES_RR_SIG_LABELS: case ARES_RR_OPT_VERSION: case ARES_RR_TLSA_CERT_USAGE: case ARES_RR_TLSA_SELECTOR: case ARES_RR_TLSA_MATCH: case ARES_RR_CAA_CRITICAL: return ARES_DATATYPE_U8; case ARES_RR_CAA_VALUE: return ARES_DATATYPE_BINP; case ARES_RR_TXT_DATA: return ARES_DATATYPE_ABINP; case ARES_RR_SIG_SIGNATURE: case ARES_RR_TLSA_DATA: case ARES_RR_RAW_RR_DATA: return ARES_DATATYPE_BIN; case ARES_RR_OPT_OPTIONS: case ARES_RR_SVCB_PARAMS: case ARES_RR_HTTPS_PARAMS: return ARES_DATATYPE_OPT; } return 0; } static const ares_dns_rr_key_t rr_a_keys[] = { ARES_RR_A_ADDR }; static const ares_dns_rr_key_t rr_ns_keys[] = { ARES_RR_NS_NSDNAME }; static const ares_dns_rr_key_t rr_cname_keys[] = { ARES_RR_CNAME_CNAME }; static const ares_dns_rr_key_t rr_soa_keys[] = { ARES_RR_SOA_MNAME, ARES_RR_SOA_RNAME, ARES_RR_SOA_SERIAL, ARES_RR_SOA_REFRESH, ARES_RR_SOA_RETRY, ARES_RR_SOA_EXPIRE, ARES_RR_SOA_MINIMUM }; static const ares_dns_rr_key_t rr_ptr_keys[] = { ARES_RR_PTR_DNAME }; static const ares_dns_rr_key_t rr_hinfo_keys[] = { ARES_RR_HINFO_CPU, ARES_RR_HINFO_OS }; static const ares_dns_rr_key_t rr_mx_keys[] = { ARES_RR_MX_PREFERENCE, ARES_RR_MX_EXCHANGE }; static const ares_dns_rr_key_t rr_sig_keys[] = { ARES_RR_SIG_TYPE_COVERED, ARES_RR_SIG_ALGORITHM, ARES_RR_SIG_LABELS, ARES_RR_SIG_ORIGINAL_TTL, ARES_RR_SIG_EXPIRATION, ARES_RR_SIG_INCEPTION, ARES_RR_SIG_KEY_TAG, ARES_RR_SIG_SIGNERS_NAME, ARES_RR_SIG_SIGNATURE }; static const ares_dns_rr_key_t rr_txt_keys[] = { ARES_RR_TXT_DATA }; static const ares_dns_rr_key_t rr_aaaa_keys[] = { ARES_RR_AAAA_ADDR }; static const ares_dns_rr_key_t rr_srv_keys[] = { ARES_RR_SRV_PRIORITY, ARES_RR_SRV_WEIGHT, ARES_RR_SRV_PORT, ARES_RR_SRV_TARGET }; static const ares_dns_rr_key_t rr_naptr_keys[] = { ARES_RR_NAPTR_ORDER, ARES_RR_NAPTR_PREFERENCE, ARES_RR_NAPTR_FLAGS, ARES_RR_NAPTR_SERVICES, ARES_RR_NAPTR_REGEXP, ARES_RR_NAPTR_REPLACEMENT }; static const ares_dns_rr_key_t rr_opt_keys[] = { ARES_RR_OPT_UDP_SIZE, ARES_RR_OPT_VERSION, ARES_RR_OPT_FLAGS, ARES_RR_OPT_OPTIONS }; static const ares_dns_rr_key_t rr_tlsa_keys[] = { ARES_RR_TLSA_CERT_USAGE, ARES_RR_TLSA_SELECTOR, ARES_RR_TLSA_MATCH, ARES_RR_TLSA_DATA }; static const ares_dns_rr_key_t rr_svcb_keys[] = { ARES_RR_SVCB_PRIORITY, ARES_RR_SVCB_TARGET, ARES_RR_SVCB_PARAMS }; static const ares_dns_rr_key_t rr_https_keys[] = { ARES_RR_HTTPS_PRIORITY, ARES_RR_HTTPS_TARGET, ARES_RR_HTTPS_PARAMS }; static const ares_dns_rr_key_t rr_uri_keys[] = { ARES_RR_URI_PRIORITY, ARES_RR_URI_WEIGHT, ARES_RR_URI_TARGET }; static const ares_dns_rr_key_t rr_caa_keys[] = { ARES_RR_CAA_CRITICAL, ARES_RR_CAA_TAG, ARES_RR_CAA_VALUE }; static const ares_dns_rr_key_t rr_raw_rr_keys[] = { ARES_RR_RAW_RR_TYPE, ARES_RR_RAW_RR_DATA }; const ares_dns_rr_key_t *ares_dns_rr_get_keys(ares_dns_rec_type_t type, size_t *cnt) { if (cnt == NULL) { return NULL; } *cnt = 0; switch (type) { case ARES_REC_TYPE_A: *cnt = sizeof(rr_a_keys) / sizeof(*rr_a_keys); return rr_a_keys; case ARES_REC_TYPE_NS: *cnt = sizeof(rr_ns_keys) / sizeof(*rr_ns_keys); return rr_ns_keys; case ARES_REC_TYPE_CNAME: *cnt = sizeof(rr_cname_keys) / sizeof(*rr_cname_keys); return rr_cname_keys; case ARES_REC_TYPE_SOA: *cnt = sizeof(rr_soa_keys) / sizeof(*rr_soa_keys); return rr_soa_keys; case ARES_REC_TYPE_PTR: *cnt = sizeof(rr_ptr_keys) / sizeof(*rr_ptr_keys); return rr_ptr_keys; case ARES_REC_TYPE_HINFO: *cnt = sizeof(rr_hinfo_keys) / sizeof(*rr_hinfo_keys); 
return rr_hinfo_keys; case ARES_REC_TYPE_MX: *cnt = sizeof(rr_mx_keys) / sizeof(*rr_mx_keys); return rr_mx_keys; case ARES_REC_TYPE_TXT: *cnt = sizeof(rr_txt_keys) / sizeof(*rr_txt_keys); return rr_txt_keys; case ARES_REC_TYPE_SIG: *cnt = sizeof(rr_sig_keys) / sizeof(*rr_sig_keys); return rr_sig_keys; case ARES_REC_TYPE_AAAA: *cnt = sizeof(rr_aaaa_keys) / sizeof(*rr_aaaa_keys); return rr_aaaa_keys; case ARES_REC_TYPE_SRV: *cnt = sizeof(rr_srv_keys) / sizeof(*rr_srv_keys); return rr_srv_keys; case ARES_REC_TYPE_NAPTR: *cnt = sizeof(rr_naptr_keys) / sizeof(*rr_naptr_keys); return rr_naptr_keys; case ARES_REC_TYPE_OPT: *cnt = sizeof(rr_opt_keys) / sizeof(*rr_opt_keys); return rr_opt_keys; case ARES_REC_TYPE_TLSA: *cnt = sizeof(rr_tlsa_keys) / sizeof(*rr_tlsa_keys); return rr_tlsa_keys; case ARES_REC_TYPE_SVCB: *cnt = sizeof(rr_svcb_keys) / sizeof(*rr_svcb_keys); return rr_svcb_keys; case ARES_REC_TYPE_HTTPS: *cnt = sizeof(rr_https_keys) / sizeof(*rr_https_keys); return rr_https_keys; case ARES_REC_TYPE_ANY: /* Not real */ break; case ARES_REC_TYPE_URI: *cnt = sizeof(rr_uri_keys) / sizeof(*rr_uri_keys); return rr_uri_keys; case ARES_REC_TYPE_CAA: *cnt = sizeof(rr_caa_keys) / sizeof(*rr_caa_keys); return rr_caa_keys; case ARES_REC_TYPE_RAW_RR: *cnt = sizeof(rr_raw_rr_keys) / sizeof(*rr_raw_rr_keys); return rr_raw_rr_keys; } return NULL; } ares_bool_t ares_dns_class_fromstr(ares_dns_class_t *qclass, const char *str) { size_t i; static const struct { const char *name; ares_dns_class_t qclass; } list[] = { { "IN", ARES_CLASS_IN }, { "CH", ARES_CLASS_CHAOS }, { "HS", ARES_CLASS_HESOID }, { "NONE", ARES_CLASS_NONE }, { "ANY", ARES_CLASS_ANY }, { NULL, 0 } }; if (qclass == NULL || str == NULL) { return ARES_FALSE; } for (i = 0; list[i].name != NULL; i++) { if (strcasecmp(list[i].name, str) == 0) { *qclass = list[i].qclass; return ARES_TRUE; } } return ARES_FALSE; } ares_bool_t ares_dns_rec_type_fromstr(ares_dns_rec_type_t *qtype, const char *str) { size_t i; static const struct { const char *name; ares_dns_rec_type_t type; } list[] = { { "A", ARES_REC_TYPE_A }, { "NS", ARES_REC_TYPE_NS }, { "CNAME", ARES_REC_TYPE_CNAME }, { "SOA", ARES_REC_TYPE_SOA }, { "PTR", ARES_REC_TYPE_PTR }, { "HINFO", ARES_REC_TYPE_HINFO }, { "MX", ARES_REC_TYPE_MX }, { "TXT", ARES_REC_TYPE_TXT }, { "SIG", ARES_REC_TYPE_SIG }, { "AAAA", ARES_REC_TYPE_AAAA }, { "SRV", ARES_REC_TYPE_SRV }, { "NAPTR", ARES_REC_TYPE_NAPTR }, { "OPT", ARES_REC_TYPE_OPT }, { "TLSA", ARES_REC_TYPE_TLSA }, { "SVCB", ARES_REC_TYPE_SVCB }, { "HTTPS", ARES_REC_TYPE_HTTPS }, { "ANY", ARES_REC_TYPE_ANY }, { "URI", ARES_REC_TYPE_URI }, { "CAA", ARES_REC_TYPE_CAA }, { "RAW_RR", ARES_REC_TYPE_RAW_RR }, { NULL, 0 } }; if (qtype == NULL || str == NULL) { return ARES_FALSE; } for (i = 0; list[i].name != NULL; i++) { if (strcasecmp(list[i].name, str) == 0) { *qtype = list[i].type; return ARES_TRUE; } } return ARES_FALSE; } const char *ares_dns_section_tostr(ares_dns_section_t section) { switch (section) { case ARES_SECTION_ANSWER: return "ANSWER"; case ARES_SECTION_AUTHORITY: return "AUTHORITY"; case ARES_SECTION_ADDITIONAL: return "ADDITIONAL"; } return "UNKNOWN"; } static ares_dns_opt_datatype_t ares_dns_opt_get_type_opt(unsigned short opt) { ares_opt_param_t param = (ares_opt_param_t)opt; switch (param) { case ARES_OPT_PARAM_LLQ: /* Really it is u16 version, u16 opcode, u16 error, u64 id, u32 lease */ return ARES_OPT_DATATYPE_BIN; case ARES_OPT_PARAM_UL: return ARES_OPT_DATATYPE_U32; case ARES_OPT_PARAM_NSID: return ARES_OPT_DATATYPE_BIN; case 
ARES_OPT_PARAM_DAU: return ARES_OPT_DATATYPE_U8_LIST; case ARES_OPT_PARAM_DHU: return ARES_OPT_DATATYPE_U8_LIST; case ARES_OPT_PARAM_N3U: return ARES_OPT_DATATYPE_U8_LIST; case ARES_OPT_PARAM_EDNS_CLIENT_SUBNET: /* Really it is a u16 address family, u8 source prefix length, * u8 scope prefix length, address */ return ARES_OPT_DATATYPE_BIN; case ARES_OPT_PARAM_EDNS_EXPIRE: return ARES_OPT_DATATYPE_U32; case ARES_OPT_PARAM_COOKIE: /* 8 bytes for client, 16-40 bytes for server */ return ARES_OPT_DATATYPE_BIN; case ARES_OPT_PARAM_EDNS_TCP_KEEPALIVE: /* Timeout in 100ms intervals */ return ARES_OPT_DATATYPE_U16; case ARES_OPT_PARAM_PADDING: /* Arbitrary padding */ return ARES_OPT_DATATYPE_BIN; case ARES_OPT_PARAM_CHAIN: return ARES_OPT_DATATYPE_NAME; case ARES_OPT_PARAM_EDNS_KEY_TAG: return ARES_OPT_DATATYPE_U16_LIST; case ARES_OPT_PARAM_EXTENDED_DNS_ERROR: /* Really 16bit code followed by textual message */ return ARES_OPT_DATATYPE_BIN; } return ARES_OPT_DATATYPE_BIN; } static ares_dns_opt_datatype_t ares_dns_opt_get_type_svcb(unsigned short opt) { ares_svcb_param_t param = (ares_svcb_param_t)opt; switch (param) { case ARES_SVCB_PARAM_NO_DEFAULT_ALPN: return ARES_OPT_DATATYPE_NONE; case ARES_SVCB_PARAM_ECH: return ARES_OPT_DATATYPE_BIN; case ARES_SVCB_PARAM_MANDATORY: return ARES_OPT_DATATYPE_U16_LIST; case ARES_SVCB_PARAM_ALPN: return ARES_OPT_DATATYPE_STR_LIST; case ARES_SVCB_PARAM_PORT: return ARES_OPT_DATATYPE_U16; case ARES_SVCB_PARAM_IPV4HINT: return ARES_OPT_DATATYPE_INADDR4_LIST; case ARES_SVCB_PARAM_IPV6HINT: return ARES_OPT_DATATYPE_INADDR6_LIST; } return ARES_OPT_DATATYPE_BIN; } ares_dns_opt_datatype_t ares_dns_opt_get_datatype(ares_dns_rr_key_t key, unsigned short opt) { switch (key) { case ARES_RR_OPT_OPTIONS: return ares_dns_opt_get_type_opt(opt); case ARES_RR_SVCB_PARAMS: case ARES_RR_HTTPS_PARAMS: return ares_dns_opt_get_type_svcb(opt); default: break; } return ARES_OPT_DATATYPE_BIN; } static const char *ares_dns_opt_get_name_opt(unsigned short opt) { ares_opt_param_t param = (ares_opt_param_t)opt; switch (param) { case ARES_OPT_PARAM_LLQ: return "LLQ"; case ARES_OPT_PARAM_UL: return "UL"; case ARES_OPT_PARAM_NSID: return "NSID"; case ARES_OPT_PARAM_DAU: return "DAU"; case ARES_OPT_PARAM_DHU: return "DHU"; case ARES_OPT_PARAM_N3U: return "N3U"; case ARES_OPT_PARAM_EDNS_CLIENT_SUBNET: return "edns-client-subnet"; case ARES_OPT_PARAM_EDNS_EXPIRE: return "edns-expire"; case ARES_OPT_PARAM_COOKIE: return "COOKIE"; case ARES_OPT_PARAM_EDNS_TCP_KEEPALIVE: return "edns-tcp-keepalive"; case ARES_OPT_PARAM_PADDING: return "Padding"; case ARES_OPT_PARAM_CHAIN: return "CHAIN"; case ARES_OPT_PARAM_EDNS_KEY_TAG: return "edns-key-tag"; case ARES_OPT_PARAM_EXTENDED_DNS_ERROR: return "extended-dns-error"; } return NULL; } static const char *ares_dns_opt_get_name_svcb(unsigned short opt) { ares_svcb_param_t param = (ares_svcb_param_t)opt; switch (param) { case ARES_SVCB_PARAM_NO_DEFAULT_ALPN: return "no-default-alpn"; case ARES_SVCB_PARAM_ECH: return "ech"; case ARES_SVCB_PARAM_MANDATORY: return "mandatory"; case ARES_SVCB_PARAM_ALPN: return "alpn"; case ARES_SVCB_PARAM_PORT: return "port"; case ARES_SVCB_PARAM_IPV4HINT: return "ipv4hint"; case ARES_SVCB_PARAM_IPV6HINT: return "ipv6hint"; } return NULL; } const char *ares_dns_opt_get_name(ares_dns_rr_key_t key, unsigned short opt) { switch (key) { case ARES_RR_OPT_OPTIONS: return ares_dns_opt_get_name_opt(opt); case ARES_RR_SVCB_PARAMS: case ARES_RR_HTTPS_PARAMS: return ares_dns_opt_get_name_svcb(opt); default: break; } return NULL; } const 
char *ares_dns_rcode_tostr(ares_dns_rcode_t rcode) { switch (rcode) { case ARES_RCODE_NOERROR: return "NOERROR"; case ARES_RCODE_FORMERR: return "FORMERR"; case ARES_RCODE_SERVFAIL: return "SERVFAIL"; case ARES_RCODE_NXDOMAIN: return "NXDOMAIN"; case ARES_RCODE_NOTIMP: return "NOTIMP"; case ARES_RCODE_REFUSED: return "REFUSED"; case ARES_RCODE_YXDOMAIN: return "YXDOMAIN"; case ARES_RCODE_YXRRSET: return "YXRRSET"; case ARES_RCODE_NXRRSET: return "NXRRSET"; case ARES_RCODE_NOTAUTH: return "NOTAUTH"; case ARES_RCODE_NOTZONE: return "NOTZONE"; case ARES_RCODE_DSOTYPEI: return "DSOTYPEI"; case ARES_RCODE_BADSIG: return "BADSIG"; case ARES_RCODE_BADKEY: return "BADKEY"; case ARES_RCODE_BADTIME: return "BADTIME"; case ARES_RCODE_BADMODE: return "BADMODE"; case ARES_RCODE_BADNAME: return "BADNAME"; case ARES_RCODE_BADALG: return "BADALG"; case ARES_RCODE_BADTRUNC: return "BADTRUNC"; case ARES_RCODE_BADCOOKIE: return "BADCOOKIE"; } return "UNKNOWN"; } /* Convert an rcode and ancount from a query reply into an ares_status_t * value. Used internally by ares_search() and ares_query(). */ ares_status_t ares_dns_query_reply_tostatus(ares_dns_rcode_t rcode, size_t ancount) { ares_status_t status = ARES_SUCCESS; switch (rcode) { case ARES_RCODE_NOERROR: status = (ancount > 0) ? ARES_SUCCESS : ARES_ENODATA; break; case ARES_RCODE_FORMERR: status = ARES_EFORMERR; break; case ARES_RCODE_SERVFAIL: status = ARES_ESERVFAIL; break; case ARES_RCODE_NXDOMAIN: status = ARES_ENOTFOUND; break; case ARES_RCODE_NOTIMP: status = ARES_ENOTIMP; break; case ARES_RCODE_REFUSED: status = ARES_EREFUSED; break; default: break; } return status; } gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_multistring.c000066400000000000000000000130321471441230600244740ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_dns_private.h" typedef struct { unsigned char *data; size_t len; } multistring_data_t; struct ares__dns_multistring { /*! whether or not cached concatenated string is valid */ ares_bool_t cache_invalidated; /*! combined/concatenated string cache */ unsigned char *cache_str; /*! length of combined/concatenated string */ size_t cache_str_len; /*! 
Data making up strings */ ares__array_t *strs; /*!< multistring_data_t type */ }; static void ares__dns_multistring_free_cb(void *arg) { multistring_data_t *data = arg; if (data == NULL) { return; } ares_free(data->data); } ares__dns_multistring_t *ares__dns_multistring_create(void) { ares__dns_multistring_t *strs = ares_malloc_zero(sizeof(*strs)); if (strs == NULL) { return NULL; } strs->strs = ares__array_create(sizeof(multistring_data_t), ares__dns_multistring_free_cb); if (strs->strs == NULL) { ares_free(strs); return NULL; } return strs; } void ares__dns_multistring_clear(ares__dns_multistring_t *strs) { if (strs == NULL) { return; } while (ares__array_len(strs->strs)) { ares__array_remove_last(strs->strs); } } void ares__dns_multistring_destroy(ares__dns_multistring_t *strs) { if (strs == NULL) { return; } ares__dns_multistring_clear(strs); ares__array_destroy(strs->strs); ares_free(strs->cache_str); ares_free(strs); } ares_status_t ares__dns_multistring_replace_own(ares__dns_multistring_t *strs, size_t idx, unsigned char *str, size_t len) { multistring_data_t *data; if (strs == NULL || str == NULL || len == 0) { return ARES_EFORMERR; } strs->cache_invalidated = ARES_TRUE; data = ares__array_at(strs->strs, idx); if (data == NULL) { return ARES_EFORMERR; } ares_free(data->data); data->data = str; data->len = len; return ARES_SUCCESS; } ares_status_t ares__dns_multistring_del(ares__dns_multistring_t *strs, size_t idx) { if (strs == NULL) { return ARES_EFORMERR; } strs->cache_invalidated = ARES_TRUE; return ares__array_remove_at(strs->strs, idx); } ares_status_t ares__dns_multistring_add_own(ares__dns_multistring_t *strs, unsigned char *str, size_t len) { multistring_data_t *data; ares_status_t status; if (strs == NULL) { return ARES_EFORMERR; } strs->cache_invalidated = ARES_TRUE; /* NOTE: its ok to have an empty string added */ if (str == NULL && len != 0) { return ARES_EFORMERR; } status = ares__array_insert_last((void **)&data, strs->strs); if (status != ARES_SUCCESS) { return status; } data->data = str; data->len = len; return ARES_SUCCESS; } size_t ares__dns_multistring_cnt(const ares__dns_multistring_t *strs) { if (strs == NULL) { return 0; } return ares__array_len(strs->strs); } const unsigned char * ares__dns_multistring_get(const ares__dns_multistring_t *strs, size_t idx, size_t *len) { const multistring_data_t *data; if (strs == NULL || len == NULL) { return NULL; } data = ares__array_at_const(strs->strs, idx); if (data == NULL) { return NULL; } *len = data->len; return data->data; } const unsigned char * ares__dns_multistring_get_combined(ares__dns_multistring_t *strs, size_t *len) { ares__buf_t *buf = NULL; size_t i; if (strs == NULL || len == NULL) { return NULL; } *len = 0; /* Return cache if possible */ if (!strs->cache_invalidated) { *len = strs->cache_str_len; return strs->cache_str; } /* Clear cache */ ares_free(strs->cache_str); strs->cache_str = NULL; strs->cache_str_len = 0; buf = ares__buf_create(); for (i = 0; i < ares__array_len(strs->strs); i++) { const multistring_data_t *data = ares__array_at_const(strs->strs, i); if (data == NULL || ares__buf_append(buf, data->data, data->len) != ARES_SUCCESS) { ares__buf_destroy(buf); return NULL; } } strs->cache_str = (unsigned char *)ares__buf_finish_str(buf, &strs->cache_str_len); if (strs->cache_str != NULL) { strs->cache_invalidated = ARES_FALSE; } *len = strs->cache_str_len; return strs->cache_str; } 
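The multistring container above is how TXT-style record data is held: each chunk is stored as its own (data, length) entry, and ares__dns_multistring_get_combined() lazily concatenates the chunks, caching the result until the next add/replace/delete flips cache_invalidated. The sketch below is illustrative only and not part of c-ares: it assumes the internal headers (ares_private.h, ares_dns_private.h), the internal ares_malloc()/ares_free() allocators, and standard memcpy(), and simply exercises the API defined in this file.

/* Illustrative sketch (not part of c-ares): builds a two-chunk TXT-style
 * value and reads back the cached concatenation. Assumes the internal
 * c-ares headers and allocators are available. */
static ares_status_t example_multistring_usage(void)
{
  ares__dns_multistring_t *strs         = ares__dns_multistring_create();
  const char              *chunks[]     = { "hello ", "world" };
  const unsigned char     *combined     = NULL;
  size_t                   combined_len = 0;
  size_t                   i;
  ares_status_t            status       = ARES_ENOMEM;

  if (strs == NULL) {
    return ARES_ENOMEM;
  }

  for (i = 0; i < sizeof(chunks) / sizeof(*chunks); i++) {
    size_t         len  = ares_strlen(chunks[i]);
    unsigned char *copy = ares_malloc(len); /* assumed internal allocator */
    if (copy == NULL) {
      status = ARES_ENOMEM;
      goto done;
    }
    memcpy(copy, chunks[i], len);
    /* _add_own() takes ownership of `copy` on success and marks the cached
     * concatenation invalid; on failure the caller still owns the buffer. */
    status = ares__dns_multistring_add_own(strs, copy, len);
    if (status != ARES_SUCCESS) {
      ares_free(copy);
      goto done;
    }
  }

  /* First call rebuilds the cache ("hello world"); repeated calls return the
   * same cached buffer until the container is modified again. */
  combined = ares__dns_multistring_get_combined(strs, &combined_len);
  if (combined == NULL || combined_len != 11) {
    status = ARES_EBADRESP;
    goto done;
  }
  status = ARES_SUCCESS;

done:
  ares__dns_multistring_destroy(strs);
  return status;
}

Caching the concatenation this way keeps repeated reads of TXT data cheap while still allowing the underlying chunk list to be edited in place.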
gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_multistring.h000066400000000000000000000046611471441230600245110ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2024 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_DNS_MULTISTRING_H #define __ARES_DNS_MULTISTRING_H struct ares__dns_multistring; typedef struct ares__dns_multistring ares__dns_multistring_t; ares__dns_multistring_t *ares__dns_multistring_create(void); void ares__dns_multistring_clear(ares__dns_multistring_t *strs); void ares__dns_multistring_destroy(ares__dns_multistring_t *strs); ares_status_t ares__dns_multistring_replace_own(ares__dns_multistring_t *strs, size_t idx, unsigned char *str, size_t len); ares_status_t ares__dns_multistring_del(ares__dns_multistring_t *strs, size_t idx); ares_status_t ares__dns_multistring_add_own(ares__dns_multistring_t *strs, unsigned char *str, size_t len); size_t ares__dns_multistring_cnt(const ares__dns_multistring_t *strs); const unsigned char * ares__dns_multistring_get(const ares__dns_multistring_t *strs, size_t idx, size_t *len); const unsigned char * ares__dns_multistring_get_combined(ares__dns_multistring_t *strs, size_t *len); #endif gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_name.c000066400000000000000000000443431471441230600230440ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" typedef struct { char *name; size_t name_len; size_t idx; } ares_nameoffset_t; static void ares__nameoffset_free(void *arg) { ares_nameoffset_t *off = arg; if (off == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares_free(off->name); ares_free(off); } static ares_status_t ares__nameoffset_create(ares__llist_t **list, const char *name, size_t idx) { ares_status_t status; ares_nameoffset_t *off = NULL; if (list == NULL || name == NULL || ares_strlen(name) == 0 || ares_strlen(name) > 255) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (*list == NULL) { *list = ares__llist_create(ares__nameoffset_free); } if (*list == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } off = ares_malloc_zero(sizeof(*off)); if (off == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } off->name = ares_strdup(name); off->name_len = ares_strlen(off->name); off->idx = idx; if (ares__llist_insert_last(*list, off) == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; /* LCOV_EXCL_START: OutOfMemory */ fail: ares__nameoffset_free(off); return status; /* LCOV_EXCL_STOP */ } static const ares_nameoffset_t *ares__nameoffset_find(ares__llist_t *list, const char *name) { size_t name_len = ares_strlen(name); ares__llist_node_t *node; const ares_nameoffset_t *longest_match = NULL; if (list == NULL || name == NULL || name_len == 0) { return NULL; } for (node = ares__llist_node_first(list); node != NULL; node = ares__llist_node_next(node)) { const ares_nameoffset_t *val = ares__llist_node_val(node); size_t prefix_len; /* Can't be a match if the stored name is longer */ if (val->name_len > name_len) { continue; } /* Can't be the longest match if our existing longest match is longer */ if (longest_match != NULL && longest_match->name_len > val->name_len) { continue; } prefix_len = name_len - val->name_len; /* Due to DNS 0x20, lets not inadvertently mangle things, use case-sensitive * matching instead of case-insensitive. This may result in slightly * larger DNS queries overall. */ if (strcmp(val->name, name + prefix_len) != 0) { continue; } /* We need to make sure if `val->name` is "example.com" that name is * is separated by a label, e.g. "myexample.com" is not ok, however * "my.example.com" is, so we look for the preceding "." 
*/ if (prefix_len != 0 && name[prefix_len - 1] != '.') { continue; } longest_match = val; } return longest_match; } static void ares_dns_labels_free_cb(void *arg) { ares__buf_t **buf = arg; if (buf == NULL) { return; } ares__buf_destroy(*buf); } static ares__buf_t *ares_dns_labels_add(ares__array_t *labels) { ares__buf_t **buf; if (labels == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (ares__array_insert_last((void **)&buf, labels) != ARES_SUCCESS) { return NULL; } *buf = ares__buf_create(); if (*buf == NULL) { ares__array_remove_last(labels); return NULL; } return *buf; } static ares__buf_t *ares_dns_labels_get_last(ares__array_t *labels) { ares__buf_t **buf = ares__array_last(labels); if (buf == NULL) { return NULL; } return *buf; } static ares__buf_t *ares_dns_labels_get_at(ares__array_t *labels, size_t idx) { ares__buf_t **buf = ares__array_at(labels, idx); if (buf == NULL) { return NULL; } return *buf; } static void ares_dns_name_labels_del_last(ares__array_t *labels) { ares__array_remove_last(labels); } static ares_status_t ares_parse_dns_name_escape(ares__buf_t *namebuf, ares__buf_t *label, ares_bool_t validate_hostname) { ares_status_t status; unsigned char c; status = ares__buf_fetch_bytes(namebuf, &c, 1); if (status != ARES_SUCCESS) { return ARES_EBADNAME; } /* If next character is a digit, read 2 more digits */ if (ares__isdigit(c)) { size_t i; unsigned int val = 0; val = c - '0'; for (i = 0; i < 2; i++) { status = ares__buf_fetch_bytes(namebuf, &c, 1); if (status != ARES_SUCCESS) { return ARES_EBADNAME; } if (!ares__isdigit(c)) { return ARES_EBADNAME; } val *= 10; val += c - '0'; } /* Out of range */ if (val > 255) { return ARES_EBADNAME; } if (validate_hostname && !ares__is_hostnamech((unsigned char)val)) { return ARES_EBADNAME; } return ares__buf_append_byte(label, (unsigned char)val); } /* We can just output the character */ if (validate_hostname && !ares__is_hostnamech(c)) { return ARES_EBADNAME; } return ares__buf_append_byte(label, c); } static ares_status_t ares_split_dns_name(ares__array_t *labels, ares_bool_t validate_hostname, const char *name) { ares_status_t status; ares__buf_t *label = NULL; ares__buf_t *namebuf = NULL; size_t i; size_t total_len = 0; unsigned char c; if (name == NULL || labels == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Put name into a buffer for parsing */ namebuf = ares__buf_create(); if (namebuf == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } if (*name != '\0') { status = ares__buf_append(namebuf, (const unsigned char *)name, ares_strlen(name)); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* Start with 1 label */ label = ares_dns_labels_add(labels); if (label == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } while (ares__buf_fetch_bytes(namebuf, &c, 1) == ARES_SUCCESS) { /* New label */ if (c == '.') { label = ares_dns_labels_add(labels); if (label == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } continue; } /* Escape */ if (c == '\\') { status = ares_parse_dns_name_escape(namebuf, label, validate_hostname); if (status != ARES_SUCCESS) { goto done; } continue; } /* Output direct character */ if (validate_hostname && !ares__is_hostnamech(c)) { status = ARES_EBADNAME; goto done; } status = ares__buf_append_byte(label, c); if (status != ARES_SUCCESS) { goto done; /* 
LCOV_EXCL_LINE: OutOfMemory */ } } /* Remove trailing blank label */ if (ares__buf_len(ares_dns_labels_get_last(labels)) == 0) { ares_dns_name_labels_del_last(labels); } /* If someone passed in "." there could have been 2 blank labels, check for * that */ if (ares__array_len(labels) == 1 && ares__buf_len(ares_dns_labels_get_last(labels)) == 0) { ares_dns_name_labels_del_last(labels); } /* Scan to make sure label lengths are valid */ for (i = 0; i < ares__array_len(labels); i++) { const ares__buf_t *buf = ares_dns_labels_get_at(labels, i); size_t len = ares__buf_len(buf); /* No 0-length labels, and no labels over 63 bytes */ if (len == 0 || len > 63) { status = ARES_EBADNAME; goto done; } total_len += len; } /* Can't exceed maximum (unescaped) length */ if (ares__array_len(labels) && total_len + ares__array_len(labels) - 1 > 255) { status = ARES_EBADNAME; goto done; } status = ARES_SUCCESS; done: ares__buf_destroy(namebuf); return status; } ares_status_t ares__dns_name_write(ares__buf_t *buf, ares__llist_t **list, ares_bool_t validate_hostname, const char *name) { const ares_nameoffset_t *off = NULL; size_t name_len; size_t orig_name_len; size_t pos = ares__buf_len(buf); ares__array_t *labels = NULL; char name_copy[512]; ares_status_t status; if (buf == NULL || name == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } labels = ares__array_create(sizeof(ares__buf_t *), ares_dns_labels_free_cb); if (labels == NULL) { return ARES_ENOMEM; } /* NOTE: due to possible escaping, name_copy buffer is > 256 to allow for * this */ name_len = ares_strcpy(name_copy, name, sizeof(name_copy)); orig_name_len = name_len; /* Find longest match */ if (list != NULL) { off = ares__nameoffset_find(*list, name_copy); if (off != NULL && off->name_len != name_len) { /* truncate */ name_len -= (off->name_len + 1); name_copy[name_len] = 0; } } /* Output labels */ if (off == NULL || off->name_len != orig_name_len) { size_t i; status = ares_split_dns_name(labels, validate_hostname, name_copy); if (status != ARES_SUCCESS) { goto done; } for (i = 0; i < ares__array_len(labels); i++) { size_t len = 0; const ares__buf_t *lbuf = ares_dns_labels_get_at(labels, i); const unsigned char *ptr = ares__buf_peek(lbuf, &len); status = ares__buf_append_byte(buf, (unsigned char)(len & 0xFF)); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append(buf, ptr, len); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* If we are NOT jumping to another label, output terminator */ if (off == NULL) { status = ares__buf_append_byte(buf, 0); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } } /* Output name compression offset jump */ if (off != NULL) { unsigned short u16 = (unsigned short)0xC000 | (unsigned short)(off->idx & 0x3FFF); status = ares__buf_append_be16(buf, u16); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* Store pointer for future jumps as long as its not an exact match for * a prior entry */ if (list != NULL && (off == NULL || off->name_len != orig_name_len) && name_len > 0) { status = ares__nameoffset_create(list, name /* not truncated copy! 
*/, pos); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ARES_SUCCESS; done: ares__array_destroy(labels); return status; } /* Reserved characters for names that need to be escaped */ static ares_bool_t is_reservedch(int ch) { switch (ch) { case '"': case '.': case ';': case '\\': case '(': case ')': case '@': case '$': return ARES_TRUE; default: break; } return ARES_FALSE; } static ares_status_t ares__fetch_dnsname_into_buf(ares__buf_t *buf, ares__buf_t *dest, size_t len, ares_bool_t is_hostname) { size_t remaining_len; const unsigned char *ptr = ares__buf_peek(buf, &remaining_len); ares_status_t status; size_t i; if (buf == NULL || len == 0 || remaining_len < len) { return ARES_EBADRESP; } for (i = 0; i < len; i++) { unsigned char c = ptr[i]; /* Hostnames have a very specific allowed character set. Anything outside * of that (non-printable and reserved included) are disallowed */ if (is_hostname && !ares__is_hostnamech(c)) { status = ARES_EBADRESP; goto fail; } /* NOTE: dest may be NULL if the user is trying to skip the name. validation * still occurs above. */ if (dest == NULL) { continue; } /* Non-printable characters need to be output as \DDD */ if (!ares__isprint(c)) { unsigned char escape[4]; escape[0] = '\\'; escape[1] = '0' + (c / 100); escape[2] = '0' + ((c % 100) / 10); escape[3] = '0' + (c % 10); status = ares__buf_append(dest, escape, sizeof(escape)); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } continue; } /* Reserved characters need to be escaped, otherwise normal */ if (is_reservedch(c)) { status = ares__buf_append_byte(dest, '\\'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ares__buf_append_byte(dest, c); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ares__buf_consume(buf, len); fail: return status; } ares_status_t ares__dns_name_parse(ares__buf_t *buf, char **name, ares_bool_t is_hostname) { size_t save_offset = 0; unsigned char c; ares_status_t status; ares__buf_t *namebuf = NULL; size_t label_start = ares__buf_get_position(buf); if (buf == NULL) { return ARES_EFORMERR; } if (name != NULL) { namebuf = ares__buf_create(); if (namebuf == NULL) { status = ARES_ENOMEM; goto fail; } } /* The compression scheme allows a domain name in a message to be * represented as either: * * - a sequence of labels ending in a zero octet * - a pointer * - a sequence of labels ending with a pointer */ while (1) { /* Keep track of the minimum label starting position to prevent forward * jumping */ if (label_start > ares__buf_get_position(buf)) { label_start = ares__buf_get_position(buf); } status = ares__buf_fetch_bytes(buf, &c, 1); if (status != ARES_SUCCESS) { goto fail; } /* Pointer/Redirect */ if ((c & 0xc0) == 0xc0) { /* The pointer takes the form of a two octet sequence: * * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | 1 1| OFFSET | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * * The first two bits are ones. This allows a pointer to be distinguished * from a label, since the label must begin with two zero bits because * labels are restricted to 63 octets or less. (The 10 and 01 * combinations are reserved for future use.) The OFFSET field specifies * an offset from the start of the message (i.e., the first octet of the * ID field in the domain header). A zero offset specifies the first byte * of the ID field, etc. 
*/ size_t offset = (size_t)((c & 0x3F) << 8); /* Fetch second byte of the redirect length */ status = ares__buf_fetch_bytes(buf, &c, 1); if (status != ARES_SUCCESS) { goto fail; } offset |= ((size_t)c); /* According to RFC 1035 4.1.4: * In this scheme, an entire domain name or a list of labels at * the end of a domain name is replaced with a pointer to a prior * occurrence of the same name. * Note the word "prior", meaning it must go backwards. This was * confirmed via the ISC BIND code that it also prevents forward * pointers. */ if (offset >= label_start) { status = ARES_EBADNAME; goto fail; } /* First time we make a jump, save the current position */ if (save_offset == 0) { save_offset = ares__buf_get_position(buf); } status = ares__buf_set_position(buf, offset); if (status != ARES_SUCCESS) { status = ARES_EBADNAME; goto fail; } continue; } else if ((c & 0xc0) != 0) { /* 10 and 01 are reserved */ status = ARES_EBADNAME; goto fail; } else if (c == 0) { /* termination via zero octet*/ break; } /* New label */ /* Labels are separated by periods */ if (ares__buf_len(namebuf) != 0 && name != NULL) { status = ares__buf_append_byte(namebuf, '.'); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } status = ares__fetch_dnsname_into_buf(buf, namebuf, c, is_hostname); if (status != ARES_SUCCESS) { goto fail; } } /* Restore offset read after first redirect/pointer as this is where the DNS * message continues */ if (save_offset) { ares__buf_set_position(buf, save_offset); } if (name != NULL) { *name = ares__buf_finish_str(namebuf, NULL); if (*name == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; fail: /* We want badname response if we couldn't parse */ if (status == ARES_EBADRESP) { status = ARES_EBADNAME; } ares__buf_destroy(namebuf); return status; } gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_parse.c000066400000000000000000001104011471441230600232230ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include #ifdef HAVE_STDINT_H # include <stdint.h> #endif static size_t ares_dns_rr_remaining_len(const ares__buf_t *buf, size_t orig_len, size_t rdlength) { size_t used_len = orig_len - ares__buf_len(buf); if (used_len >= rdlength) { return 0; } return rdlength - used_len; } static ares_status_t ares_dns_parse_and_set_dns_name(ares__buf_t *buf, ares_bool_t is_hostname, ares_dns_rr_t *rr, ares_dns_rr_key_t key) { ares_status_t status; char *name = NULL; status = ares__dns_name_parse(buf, &name, is_hostname); if (status != ARES_SUCCESS) { return status; } status = ares_dns_rr_set_str_own(rr, key, name); if (status != ARES_SUCCESS) { ares_free(name); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_and_set_dns_str(ares__buf_t *buf, size_t max_len, ares_dns_rr_t *rr, ares_dns_rr_key_t key, ares_bool_t blank_allowed) { ares_status_t status; char *str = NULL; status = ares__buf_parse_dns_str(buf, max_len, &str); if (status != ARES_SUCCESS) { return status; } if (!blank_allowed && ares_strlen(str) == 0) { ares_free(str); return ARES_EBADRESP; } status = ares_dns_rr_set_str_own(rr, key, str); if (status != ARES_SUCCESS) { ares_free(str); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_and_set_dns_abin(ares__buf_t *buf, size_t max_len, ares_dns_rr_t *rr, ares_dns_rr_key_t key, ares_bool_t validate_printable) { ares_status_t status; ares__dns_multistring_t *strs = NULL; status = ares__buf_parse_dns_abinstr(buf, max_len, &strs, validate_printable); if (status != ARES_SUCCESS) { return status; } status = ares_dns_rr_set_abin_own(rr, key, strs); if (status != ARES_SUCCESS) { ares__dns_multistring_destroy(strs); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_and_set_be32(ares__buf_t *buf, ares_dns_rr_t *rr, ares_dns_rr_key_t key) { ares_status_t status; unsigned int u32; status = ares__buf_fetch_be32(buf, &u32); if (status != ARES_SUCCESS) { return status; } return ares_dns_rr_set_u32(rr, key, u32); } static ares_status_t ares_dns_parse_and_set_be16(ares__buf_t *buf, ares_dns_rr_t *rr, ares_dns_rr_key_t key) { ares_status_t status; unsigned short u16; status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { return status; } return ares_dns_rr_set_u16(rr, key, u16); } static ares_status_t ares_dns_parse_and_set_u8(ares__buf_t *buf, ares_dns_rr_t *rr, ares_dns_rr_key_t key) { ares_status_t status; unsigned char u8; status = ares__buf_fetch_bytes(buf, &u8, 1); if (status != ARES_SUCCESS) { return status; } return ares_dns_rr_set_u8(rr, key, u8); } static ares_status_t ares_dns_parse_rr_a(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { struct in_addr addr; ares_status_t status; (void)rdlength; /* Not needed */ status = ares__buf_fetch_bytes(buf, (unsigned char *)&addr, sizeof(addr)); if (status != ARES_SUCCESS) { return status; } return ares_dns_rr_set_addr(rr, ARES_RR_A_ADDR, &addr); } static ares_status_t ares_dns_parse_rr_ns(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { (void)rdlength; /* Not needed */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_NS_NSDNAME); } static ares_status_t ares_dns_parse_rr_cname(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { (void)rdlength; /* Not needed */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_CNAME_CNAME); } static ares_status_t ares_dns_parse_rr_soa(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; (void)rdlength; /* Not 
needed */ /* MNAME */ status = ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_SOA_MNAME); if (status != ARES_SUCCESS) { return status; } /* RNAME */ status = ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_SOA_RNAME); if (status != ARES_SUCCESS) { return status; } /* SERIAL */ status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SOA_SERIAL); if (status != ARES_SUCCESS) { return status; } /* REFRESH */ status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SOA_REFRESH); if (status != ARES_SUCCESS) { return status; } /* RETRY */ status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SOA_RETRY); if (status != ARES_SUCCESS) { return status; } /* EXPIRE */ status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SOA_EXPIRE); if (status != ARES_SUCCESS) { return status; } /* MINIMUM */ return ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SOA_MINIMUM); } static ares_status_t ares_dns_parse_rr_ptr(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { (void)rdlength; /* Not needed */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_PTR_DNAME); } static ares_status_t ares_dns_parse_rr_hinfo(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); (void)rdlength; /* Not needed */ /* CPU */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_HINFO_CPU, ARES_TRUE); if (status != ARES_SUCCESS) { return status; } /* OS */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_HINFO_OS, ARES_TRUE); return status; } static ares_status_t ares_dns_parse_rr_mx(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; (void)rdlength; /* Not needed */ /* PREFERENCE */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_MX_PREFERENCE); if (status != ARES_SUCCESS) { return status; } /* EXCHANGE */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_MX_EXCHANGE); } static ares_status_t ares_dns_parse_rr_txt(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { return ares_dns_parse_and_set_dns_abin(buf, rdlength, rr, ARES_RR_TXT_DATA, ARES_FALSE); } static ares_status_t ares_dns_parse_rr_sig(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); size_t len; unsigned char *data; status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SIG_TYPE_COVERED); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_SIG_ALGORITHM); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_SIG_LABELS); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SIG_ORIGINAL_TTL); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SIG_EXPIRATION); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_be32(buf, rr, ARES_RR_SIG_INCEPTION); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SIG_KEY_TAG); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_SIG_SIGNERS_NAME); if (status != ARES_SUCCESS) { return status; } len = ares_dns_rr_remaining_len(buf, orig_len, rdlength); if (len == 0) { return ARES_EBADRESP; } status = ares__buf_fetch_bytes_dup(buf, len, ARES_FALSE, &data); if (status != ARES_SUCCESS) { 
return status; } status = ares_dns_rr_set_bin_own(rr, ARES_RR_SIG_SIGNATURE, data, len); if (status != ARES_SUCCESS) { ares_free(data); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_aaaa(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { struct ares_in6_addr addr; ares_status_t status; (void)rdlength; /* Not needed */ status = ares__buf_fetch_bytes(buf, (unsigned char *)&addr, sizeof(addr)); if (status != ARES_SUCCESS) { return status; } return ares_dns_rr_set_addr6(rr, ARES_RR_AAAA_ADDR, &addr); } static ares_status_t ares_dns_parse_rr_srv(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; (void)rdlength; /* Not needed */ /* PRIORITY */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SRV_PRIORITY); if (status != ARES_SUCCESS) { return status; } /* WEIGHT */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SRV_WEIGHT); if (status != ARES_SUCCESS) { return status; } /* PORT */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SRV_PORT); if (status != ARES_SUCCESS) { return status; } /* TARGET */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_SRV_TARGET); } static ares_status_t ares_dns_parse_rr_naptr(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); /* ORDER */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_NAPTR_ORDER); if (status != ARES_SUCCESS) { return status; } /* PREFERENCE */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_NAPTR_PREFERENCE); if (status != ARES_SUCCESS) { return status; } /* FLAGS */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_NAPTR_FLAGS, ARES_TRUE); if (status != ARES_SUCCESS) { return status; } /* SERVICES */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_NAPTR_SERVICES, ARES_TRUE); if (status != ARES_SUCCESS) { return status; } /* REGEXP */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_NAPTR_REGEXP, ARES_TRUE); if (status != ARES_SUCCESS) { return status; } /* REPLACEMENT */ return ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_NAPTR_REPLACEMENT); } static ares_status_t ares_dns_parse_rr_opt(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength, unsigned short raw_class, unsigned int raw_ttl) { ares_status_t status; size_t orig_len = ares__buf_len(buf); unsigned short rcode_high; status = ares_dns_rr_set_u16(rr, ARES_RR_OPT_UDP_SIZE, raw_class); if (status != ARES_SUCCESS) { return status; } /* First 8 bits of TTL are an extended RCODE, and they go in the higher order * after the original 4-bit rcode */ rcode_high = (unsigned short)((raw_ttl >> 20) & 0x0FF0); rr->parent->raw_rcode |= rcode_high; status = ares_dns_rr_set_u8(rr, ARES_RR_OPT_VERSION, (unsigned char)(raw_ttl >> 16) & 0xFF); if (status != ARES_SUCCESS) { return status; } status = ares_dns_rr_set_u16(rr, ARES_RR_OPT_FLAGS, (unsigned short)(raw_ttl & 0xFFFF)); if (status != ARES_SUCCESS) { return status; } /* Parse options */ while (ares_dns_rr_remaining_len(buf, orig_len, rdlength)) { unsigned short opt = 0; unsigned short len = 0; unsigned char *val = NULL; /* Fetch be16 option */ status = ares__buf_fetch_be16(buf, &opt); if (status != ARES_SUCCESS) { return status; } /* Fetch be16 length */ status = ares__buf_fetch_be16(buf, &len); if (status != ARES_SUCCESS) { return status; } if (len) { status = 
ares__buf_fetch_bytes_dup(buf, len, ARES_TRUE, &val); if (status != ARES_SUCCESS) { return status; } } status = ares_dns_rr_set_opt_own(rr, ARES_RR_OPT_OPTIONS, opt, val, len); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_tlsa(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); size_t len; unsigned char *data; status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_TLSA_CERT_USAGE); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_TLSA_SELECTOR); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_TLSA_MATCH); if (status != ARES_SUCCESS) { return status; } len = ares_dns_rr_remaining_len(buf, orig_len, rdlength); if (len == 0) { return ARES_EBADRESP; } status = ares__buf_fetch_bytes_dup(buf, len, ARES_FALSE, &data); if (status != ARES_SUCCESS) { return status; } status = ares_dns_rr_set_bin_own(rr, ARES_RR_TLSA_DATA, data, len); if (status != ARES_SUCCESS) { ares_free(data); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_svcb(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_SVCB_PRIORITY); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_SVCB_TARGET); if (status != ARES_SUCCESS) { return status; } /* Parse params */ while (ares_dns_rr_remaining_len(buf, orig_len, rdlength)) { unsigned short opt = 0; unsigned short len = 0; unsigned char *val = NULL; /* Fetch be16 option */ status = ares__buf_fetch_be16(buf, &opt); if (status != ARES_SUCCESS) { return status; } /* Fetch be16 length */ status = ares__buf_fetch_be16(buf, &len); if (status != ARES_SUCCESS) { return status; } if (len) { status = ares__buf_fetch_bytes_dup(buf, len, ARES_TRUE, &val); if (status != ARES_SUCCESS) { return status; } } status = ares_dns_rr_set_opt_own(rr, ARES_RR_SVCB_PARAMS, opt, val, len); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_https(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { ares_status_t status; size_t orig_len = ares__buf_len(buf); status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_HTTPS_PRIORITY); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse_and_set_dns_name(buf, ARES_FALSE, rr, ARES_RR_HTTPS_TARGET); if (status != ARES_SUCCESS) { return status; } /* Parse params */ while (ares_dns_rr_remaining_len(buf, orig_len, rdlength)) { unsigned short opt = 0; unsigned short len = 0; unsigned char *val = NULL; /* Fetch be16 option */ status = ares__buf_fetch_be16(buf, &opt); if (status != ARES_SUCCESS) { return status; } /* Fetch be16 length */ status = ares__buf_fetch_be16(buf, &len); if (status != ARES_SUCCESS) { return status; } if (len) { status = ares__buf_fetch_bytes_dup(buf, len, ARES_TRUE, &val); if (status != ARES_SUCCESS) { return status; } } status = ares_dns_rr_set_opt_own(rr, ARES_RR_HTTPS_PARAMS, opt, val, len); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_uri(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { char *name = NULL; ares_status_t status; size_t orig_len = ares__buf_len(buf); size_t remaining_len; /* PRIORITY */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_URI_PRIORITY); 
if (status != ARES_SUCCESS) { return status; } /* WEIGHT */ status = ares_dns_parse_and_set_be16(buf, rr, ARES_RR_URI_WEIGHT); if (status != ARES_SUCCESS) { return status; } /* TARGET -- not in string format, rest of buffer, required to be * non-zero length */ remaining_len = ares_dns_rr_remaining_len(buf, orig_len, rdlength); if (remaining_len == 0) { status = ARES_EBADRESP; return status; } /* NOTE: Not in DNS string format */ status = ares__buf_fetch_str_dup(buf, remaining_len, &name); if (status != ARES_SUCCESS) { return status; } if (!ares__str_isprint(name, remaining_len)) { ares_free(name); return ARES_EBADRESP; } status = ares_dns_rr_set_str_own(rr, ARES_RR_URI_TARGET, name); if (status != ARES_SUCCESS) { ares_free(name); return status; } name = NULL; return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_caa(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength) { unsigned char *data = NULL; size_t data_len = 0; ares_status_t status; size_t orig_len = ares__buf_len(buf); /* CRITICAL */ status = ares_dns_parse_and_set_u8(buf, rr, ARES_RR_CAA_CRITICAL); if (status != ARES_SUCCESS) { return status; } /* Tag */ status = ares_dns_parse_and_set_dns_str( buf, ares_dns_rr_remaining_len(buf, orig_len, rdlength), rr, ARES_RR_CAA_TAG, ARES_FALSE); if (status != ARES_SUCCESS) { return status; } /* Value - binary! (remaining buffer */ data_len = ares_dns_rr_remaining_len(buf, orig_len, rdlength); if (data_len == 0) { status = ARES_EBADRESP; return status; } status = ares__buf_fetch_bytes_dup(buf, data_len, ARES_TRUE, &data); if (status != ARES_SUCCESS) { return status; } status = ares_dns_rr_set_bin_own(rr, ARES_RR_CAA_VALUE, data, data_len); if (status != ARES_SUCCESS) { ares_free(data); return status; } data = NULL; return ARES_SUCCESS; } static ares_status_t ares_dns_parse_rr_raw_rr(ares__buf_t *buf, ares_dns_rr_t *rr, size_t rdlength, unsigned short raw_type) { ares_status_t status; unsigned char *bytes = NULL; if (rdlength == 0) { return ARES_SUCCESS; } status = ares__buf_fetch_bytes_dup(buf, rdlength, ARES_FALSE, &bytes); if (status != ARES_SUCCESS) { return status; } /* Can't fail */ status = ares_dns_rr_set_u16(rr, ARES_RR_RAW_RR_TYPE, raw_type); if (status != ARES_SUCCESS) { ares_free(bytes); return status; } status = ares_dns_rr_set_bin_own(rr, ARES_RR_RAW_RR_DATA, bytes, rdlength); if (status != ARES_SUCCESS) { ares_free(bytes); return status; } return ARES_SUCCESS; } static ares_status_t ares_dns_parse_header(ares__buf_t *buf, unsigned int flags, ares_dns_record_t **dnsrec, unsigned short *qdcount, unsigned short *ancount, unsigned short *nscount, unsigned short *arcount) { ares_status_t status = ARES_EBADRESP; unsigned short u16; unsigned short id; unsigned short dns_flags = 0; ares_dns_opcode_t opcode; unsigned short rcode; (void)flags; /* currently unused */ if (buf == NULL || dnsrec == NULL || qdcount == NULL || ancount == NULL || nscount == NULL || arcount == NULL) { return ARES_EFORMERR; } *dnsrec = NULL; /* * RFC 1035 4.1.1. Header section format. * and Updated by RFC 2065 to add AD and CD bits. 
* 1 1 1 1 1 1 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | ID | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * |QR| Opcode |AA|TC|RD|RA| Z|AD|CD| RCODE | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | QDCOUNT | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | ANCOUNT | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | NSCOUNT | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | ARCOUNT | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ */ /* ID */ status = ares__buf_fetch_be16(buf, &id); if (status != ARES_SUCCESS) { goto fail; } /* Flags */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto fail; } /* QR */ if (u16 & 0x8000) { dns_flags |= ARES_FLAG_QR; } /* OPCODE */ opcode = (u16 >> 11) & 0xf; /* AA */ if (u16 & 0x400) { dns_flags |= ARES_FLAG_AA; } /* TC */ if (u16 & 0x200) { dns_flags |= ARES_FLAG_TC; } /* RD */ if (u16 & 0x100) { dns_flags |= ARES_FLAG_RD; } /* RA */ if (u16 & 0x80) { dns_flags |= ARES_FLAG_RA; } /* Z -- unused */ /* AD */ if (u16 & 0x20) { dns_flags |= ARES_FLAG_AD; } /* CD */ if (u16 & 0x10) { dns_flags |= ARES_FLAG_CD; } /* RCODE */ rcode = u16 & 0xf; /* QDCOUNT */ status = ares__buf_fetch_be16(buf, qdcount); if (status != ARES_SUCCESS) { goto fail; } /* ANCOUNT */ status = ares__buf_fetch_be16(buf, ancount); if (status != ARES_SUCCESS) { goto fail; } /* NSCOUNT */ status = ares__buf_fetch_be16(buf, nscount); if (status != ARES_SUCCESS) { goto fail; } /* ARCOUNT */ status = ares__buf_fetch_be16(buf, arcount); if (status != ARES_SUCCESS) { goto fail; } status = ares_dns_record_create(dnsrec, id, dns_flags, opcode, ARES_RCODE_NOERROR /* Temporary */); if (status != ARES_SUCCESS) { goto fail; } (*dnsrec)->raw_rcode = rcode; if (*ancount > 0) { status = ares_dns_record_rr_prealloc(*dnsrec, ARES_SECTION_ANSWER, *ancount); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (*nscount > 0) { status = ares_dns_record_rr_prealloc(*dnsrec, ARES_SECTION_AUTHORITY, *nscount); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } if (*arcount > 0) { status = ares_dns_record_rr_prealloc(*dnsrec, ARES_SECTION_ADDITIONAL, *arcount); if (status != ARES_SUCCESS) { goto fail; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; fail: ares_dns_record_destroy(*dnsrec); *dnsrec = NULL; *qdcount = 0; *ancount = 0; *nscount = 0; *arcount = 0; return status; } static ares_status_t ares_dns_parse_rr_data(ares__buf_t *buf, size_t rdlength, ares_dns_rr_t *rr, ares_dns_rec_type_t type, unsigned short raw_type, unsigned short raw_class, unsigned int raw_ttl) { switch (type) { case ARES_REC_TYPE_A: return ares_dns_parse_rr_a(buf, rr, rdlength); case ARES_REC_TYPE_NS: return ares_dns_parse_rr_ns(buf, rr, rdlength); case ARES_REC_TYPE_CNAME: return ares_dns_parse_rr_cname(buf, rr, rdlength); case ARES_REC_TYPE_SOA: return ares_dns_parse_rr_soa(buf, rr, rdlength); case ARES_REC_TYPE_PTR: return ares_dns_parse_rr_ptr(buf, rr, rdlength); case ARES_REC_TYPE_HINFO: return ares_dns_parse_rr_hinfo(buf, rr, rdlength); case ARES_REC_TYPE_MX: return ares_dns_parse_rr_mx(buf, rr, rdlength); case ARES_REC_TYPE_TXT: return ares_dns_parse_rr_txt(buf, rr, rdlength); case ARES_REC_TYPE_SIG: return ares_dns_parse_rr_sig(buf, rr, rdlength); case ARES_REC_TYPE_AAAA: return ares_dns_parse_rr_aaaa(buf, rr, rdlength); case ARES_REC_TYPE_SRV: return ares_dns_parse_rr_srv(buf, rr, rdlength); case ARES_REC_TYPE_NAPTR: return ares_dns_parse_rr_naptr(buf, 
rr, rdlength); case ARES_REC_TYPE_ANY: return ARES_EBADRESP; case ARES_REC_TYPE_OPT: return ares_dns_parse_rr_opt(buf, rr, rdlength, raw_class, raw_ttl); case ARES_REC_TYPE_TLSA: return ares_dns_parse_rr_tlsa(buf, rr, rdlength); case ARES_REC_TYPE_SVCB: return ares_dns_parse_rr_svcb(buf, rr, rdlength); case ARES_REC_TYPE_HTTPS: return ares_dns_parse_rr_https(buf, rr, rdlength); case ARES_REC_TYPE_URI: return ares_dns_parse_rr_uri(buf, rr, rdlength); case ARES_REC_TYPE_CAA: return ares_dns_parse_rr_caa(buf, rr, rdlength); case ARES_REC_TYPE_RAW_RR: return ares_dns_parse_rr_raw_rr(buf, rr, rdlength, raw_type); } return ARES_EFORMERR; } static ares_status_t ares_dns_parse_qd(ares__buf_t *buf, ares_dns_record_t *dnsrec) { char *name = NULL; unsigned short u16; ares_status_t status; ares_dns_rec_type_t type; ares_dns_class_t qclass; /* The question section is used to carry the "question" in most queries, * i.e., the parameters that define what is being asked. The section * contains QDCOUNT (usually 1) entries, each of the following format: * 1 1 1 1 1 1 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | | * / QNAME / * / / * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | QTYPE | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | QCLASS | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ */ /* Name */ status = ares__dns_name_parse(buf, &name, ARES_FALSE); if (status != ARES_SUCCESS) { goto done; } /* Type */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto done; } type = u16; /* Class */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto done; } qclass = u16; /* Add question */ status = ares_dns_record_query_add(dnsrec, name, type, qclass); if (status != ARES_SUCCESS) { goto done; } done: ares_free(name); return status; } static ares_status_t ares_dns_parse_rr(ares__buf_t *buf, unsigned int flags, ares_dns_section_t sect, ares_dns_record_t *dnsrec) { char *name = NULL; unsigned short u16; unsigned short raw_type; ares_status_t status; ares_dns_rec_type_t type; ares_dns_class_t qclass; unsigned int ttl; size_t rdlength; ares_dns_rr_t *rr = NULL; size_t remaining_len = 0; size_t processed_len = 0; ares_bool_t namecomp; /* All RRs have the same top level format shown below: * 1 1 1 1 1 1 * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | | * / / * / NAME / * | | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | TYPE | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | CLASS | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | TTL | * | | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ * | RDLENGTH | * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--| * / RDATA / * / / * +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+ */ /* Name */ status = ares__dns_name_parse(buf, &name, ARES_FALSE); if (status != ARES_SUCCESS) { goto done; } /* Type */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto done; } type = u16; raw_type = u16; /* Only used for raw rr data */ /* Class */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto done; } qclass = u16; /* TTL */ status = ares__buf_fetch_be32(buf, &ttl); if (status != ARES_SUCCESS) { goto done; } /* Length */ status = ares__buf_fetch_be16(buf, &u16); if (status != ARES_SUCCESS) { goto done; } rdlength = u16; if (!ares_dns_rec_type_isvalid(type, ARES_FALSE)) { type = ARES_REC_TYPE_RAW_RR; } namecomp = ares_dns_rec_type_allow_name_compression(type); 
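/* The per-section parse flags checked below let callers keep records in a
 * given section as unparsed RAW_RR data: the *_BASE_RAW flag applies to
 * types that allow name compression (namecomp set just above), while the
 * *_EXT_RAW flag applies to all other types. */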
if (sect == ARES_SECTION_ANSWER && (flags & (namecomp ? ARES_DNS_PARSE_AN_BASE_RAW : ARES_DNS_PARSE_AN_EXT_RAW))) { type = ARES_REC_TYPE_RAW_RR; } if (sect == ARES_SECTION_AUTHORITY && (flags & (namecomp ? ARES_DNS_PARSE_NS_BASE_RAW : ARES_DNS_PARSE_NS_EXT_RAW))) { type = ARES_REC_TYPE_RAW_RR; } if (sect == ARES_SECTION_ADDITIONAL && (flags & (namecomp ? ARES_DNS_PARSE_AR_BASE_RAW : ARES_DNS_PARSE_AR_EXT_RAW))) { type = ARES_REC_TYPE_RAW_RR; } /* Pull into another buffer for safety */ if (rdlength > ares__buf_len(buf)) { status = ARES_EBADRESP; goto done; } /* Add the base rr */ status = ares_dns_record_rr_add(&rr, dnsrec, sect, name, type, type == ARES_REC_TYPE_OPT ? ARES_CLASS_IN : qclass, type == ARES_REC_TYPE_OPT ? 0 : ttl); if (status != ARES_SUCCESS) { goto done; } /* Record the current remaining length in the buffer so we can tell how * much was processed */ remaining_len = ares__buf_len(buf); /* Fill in the data for the rr */ status = ares_dns_parse_rr_data(buf, rdlength, rr, type, raw_type, (unsigned short)qclass, ttl); if (status != ARES_SUCCESS) { goto done; } /* Determine how many bytes were processed */ processed_len = remaining_len - ares__buf_len(buf); /* If too many bytes were processed, error! */ if (processed_len > rdlength) { status = ARES_EBADRESP; goto done; } /* If too few bytes were processed, consume the unprocessed data for this * record as the parser may not have wanted/needed to use it */ if (processed_len < rdlength) { ares__buf_consume(buf, rdlength - processed_len); } done: ares_free(name); return status; } static ares_status_t ares_dns_parse_buf(ares__buf_t *buf, unsigned int flags, ares_dns_record_t **dnsrec) { ares_status_t status; unsigned short qdcount; unsigned short ancount; unsigned short nscount; unsigned short arcount; unsigned short i; if (buf == NULL || dnsrec == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Maximum DNS packet size is 64k, even over TCP */ if (ares__buf_len(buf) > 0xFFFF) { return ARES_EFORMERR; } /* All communications inside of the domain protocol are carried in a single * format called a message. The top level format of message is divided * into 5 sections (some of which are empty in certain cases) shown below: * * +---------------------+ * | Header | * +---------------------+ * | Question | the question for the name server * +---------------------+ * | Answer | RRs answering the question * +---------------------+ * | Authority | RRs pointing toward an authority * +---------------------+ * | Additional | RRs holding additional information * +---------------------+ */ /* Parse header */ status = ares_dns_parse_header(buf, flags, dnsrec, &qdcount, &ancount, &nscount, &arcount); if (status != ARES_SUCCESS) { goto fail; } /* Must have questions */ if (qdcount == 0) { status = ARES_EBADRESP; goto fail; } /* XXX: this should be controlled by a flag in case we want to allow * multiple questions. 
I think mDNS allows this */ if (qdcount > 1) { status = ARES_EBADRESP; goto fail; } /* Parse questions */ for (i = 0; i < qdcount; i++) { status = ares_dns_parse_qd(buf, *dnsrec); if (status != ARES_SUCCESS) { goto fail; } } /* Parse Answers */ for (i = 0; i < ancount; i++) { status = ares_dns_parse_rr(buf, flags, ARES_SECTION_ANSWER, *dnsrec); if (status != ARES_SUCCESS) { goto fail; } } /* Parse Authority */ for (i = 0; i < nscount; i++) { status = ares_dns_parse_rr(buf, flags, ARES_SECTION_AUTHORITY, *dnsrec); if (status != ARES_SUCCESS) { goto fail; } } /* Parse Additional */ for (i = 0; i < arcount; i++) { status = ares_dns_parse_rr(buf, flags, ARES_SECTION_ADDITIONAL, *dnsrec); if (status != ARES_SUCCESS) { goto fail; } } /* Finalize rcode now that if we have OPT it is processed */ if (!ares_dns_rcode_isvalid((*dnsrec)->raw_rcode)) { (*dnsrec)->rcode = ARES_RCODE_SERVFAIL; } else { (*dnsrec)->rcode = (ares_dns_rcode_t)(*dnsrec)->raw_rcode; } return ARES_SUCCESS; fail: ares_dns_record_destroy(*dnsrec); *dnsrec = NULL; return status; } ares_status_t ares_dns_parse(const unsigned char *buf, size_t buf_len, unsigned int flags, ares_dns_record_t **dnsrec) { ares__buf_t *parser = NULL; ares_status_t status; if (buf == NULL || buf_len == 0 || dnsrec == NULL) { return ARES_EFORMERR; } parser = ares__buf_create_const(buf, buf_len); if (parser == NULL) { return ARES_ENOMEM; } status = ares_dns_parse_buf(parser, flags, dnsrec); ares__buf_destroy(parser); return status; } gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_private.h000066400000000000000000000227511471441230600236020ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #ifndef __ARES_DNS_PRIVATE_H #define __ARES_DNS_PRIVATE_H ares_status_t ares_dns_record_duplicate_ex(ares_dns_record_t **dest, const ares_dns_record_t *src); ares_bool_t ares_dns_rec_type_allow_name_compression(ares_dns_rec_type_t type); ares_bool_t ares_dns_opcode_isvalid(ares_dns_opcode_t opcode); ares_bool_t ares_dns_rcode_isvalid(ares_dns_rcode_t rcode); ares_bool_t ares_dns_flags_arevalid(unsigned short flags); ares_bool_t ares_dns_rec_type_isvalid(ares_dns_rec_type_t type, ares_bool_t is_query); ares_bool_t ares_dns_class_isvalid(ares_dns_class_t qclass, ares_dns_rec_type_t type, ares_bool_t is_query); ares_bool_t ares_dns_section_isvalid(ares_dns_section_t sect); ares_status_t ares_dns_rr_set_str_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, char *val); ares_status_t ares_dns_rr_set_bin_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned char *val, size_t len); ares_status_t ares_dns_rr_set_abin_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, ares__dns_multistring_t *strs); ares_status_t ares_dns_rr_set_opt_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, unsigned char *val, size_t val_len); ares_status_t ares_dns_record_rr_prealloc(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t cnt); ares_dns_rr_t *ares_dns_get_opt_rr(ares_dns_record_t *rec); const ares_dns_rr_t *ares_dns_get_opt_rr_const(const ares_dns_record_t *rec); void ares_dns_record_write_ttl_decrement(ares_dns_record_t *dnsrec, unsigned int ttl_decrement); /* Same as ares_dns_write() but appends to an existing buffer object */ ares_status_t ares_dns_write_buf(const ares_dns_record_t *dnsrec, ares__buf_t *buf); /* Same as ares_dns_write_buf(), but prepends a 16bit length */ ares_status_t ares_dns_write_buf_tcp(const ares_dns_record_t *dnsrec, ares__buf_t *buf); /*! Create a DNS record object for a query. The arguments are the same as * those for ares_create_query(). * * \param[out] dnsrec DNS record object to create. * \param[in] name NUL-terminated name for the query. * \param[in] dnsclass Class for the query. * \param[in] type Type for the query. * \param[in] id Identifier for the query. * \param[in] flags Flags for the query. * \param[in] max_udp_size Maximum size of a UDP packet for EDNS. * \return ARES_SUCCESS on success, otherwise an error code. */ ares_status_t ares_dns_record_create_query(ares_dns_record_t **dnsrec, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, unsigned short id, ares_dns_flags_t flags, size_t max_udp_size); /*! Convert the RCODE and ANCOUNT from a DNS query reply into a status code. * * \param[in] rcode The RCODE from the reply. * \param[in] ancount The ANCOUNT from the reply. * \return An appropriate status code. 
*/ ares_status_t ares_dns_query_reply_tostatus(ares_dns_rcode_t rcode, size_t ancount); struct ares_dns_qd { char *name; ares_dns_rec_type_t qtype; ares_dns_class_t qclass; }; typedef struct { struct in_addr addr; } ares__dns_a_t; typedef struct { char *nsdname; } ares__dns_ns_t; typedef struct { char *cname; } ares__dns_cname_t; typedef struct { char *mname; char *rname; unsigned int serial; unsigned int refresh; unsigned int retry; unsigned int expire; unsigned int minimum; } ares__dns_soa_t; typedef struct { char *dname; } ares__dns_ptr_t; typedef struct { char *cpu; char *os; } ares__dns_hinfo_t; typedef struct { unsigned short preference; char *exchange; } ares__dns_mx_t; typedef struct { ares__dns_multistring_t *strs; } ares__dns_txt_t; typedef struct { unsigned short type_covered; unsigned char algorithm; unsigned char labels; unsigned int original_ttl; unsigned int expiration; unsigned int inception; unsigned short key_tag; char *signers_name; unsigned char *signature; size_t signature_len; } ares__dns_sig_t; typedef struct { struct ares_in6_addr addr; } ares__dns_aaaa_t; typedef struct { unsigned short priority; unsigned short weight; unsigned short port; char *target; } ares__dns_srv_t; typedef struct { unsigned short order; unsigned short preference; char *flags; char *services; char *regexp; char *replacement; } ares__dns_naptr_t; typedef struct { unsigned short opt; unsigned char *val; size_t val_len; } ares__dns_optval_t; typedef struct { unsigned short udp_size; /*!< taken from class */ unsigned char version; /*!< taken from bits 8-16 of ttl */ unsigned short flags; /*!< Flags, remaining 16 bits, though only * 1 currently defined */ ares__array_t *options; /*!< Type is ares__dns_optval_t */ } ares__dns_opt_t; typedef struct { unsigned char cert_usage; unsigned char selector; unsigned char match; unsigned char *data; size_t data_len; } ares__dns_tlsa_t; typedef struct { unsigned short priority; char *target; ares__array_t *params; /*!< Type is ares__dns_optval_t */ } ares__dns_svcb_t; typedef struct { unsigned short priority; unsigned short weight; char *target; } ares__dns_uri_t; typedef struct { unsigned char critical; char *tag; unsigned char *value; size_t value_len; } ares__dns_caa_t; /*! Raw, unparsed RR data */ typedef struct { unsigned short type; /*!< Not ares_rec_type_t because it likely isn't one * of those values since it wasn't parsed */ unsigned char *data; /*!< Raw RR data */ size_t length; /*!< Length of raw RR data */ } ares__dns_raw_rr_t; /*! DNS RR data structure */ struct ares_dns_rr { ares_dns_record_t *parent; char *name; ares_dns_rec_type_t type; ares_dns_class_t rclass; unsigned int ttl; union { ares__dns_a_t a; ares__dns_ns_t ns; ares__dns_cname_t cname; ares__dns_soa_t soa; ares__dns_ptr_t ptr; ares__dns_hinfo_t hinfo; ares__dns_mx_t mx; ares__dns_txt_t txt; ares__dns_sig_t sig; ares__dns_aaaa_t aaaa; ares__dns_srv_t srv; ares__dns_naptr_t naptr; ares__dns_opt_t opt; ares__dns_tlsa_t tlsa; ares__dns_svcb_t svcb; ares__dns_svcb_t https; /*!< https is a type of svcb, so this is right */ ares__dns_uri_t uri; ares__dns_caa_t caa; ares__dns_raw_rr_t raw_rr; } r; }; /*! 
DNS data structure */ struct ares_dns_record { unsigned short id; /*!< DNS query id */ unsigned short flags; /*!< One or more ares_dns_flags_t */ ares_dns_opcode_t opcode; /*!< DNS Opcode */ ares_dns_rcode_t rcode; /*!< DNS RCODE */ unsigned short raw_rcode; /*!< Raw rcode, used to ultimately form real * rcode after reading OPT record if it * exists */ unsigned int ttl_decrement; /*!< Special case to apply to writing out * this record, where it will decrement * the ttl of any resource records by * this amount. Used for cache */ ares__array_t *qd; /*!< Type is ares_dns_qd_t */ ares__array_t *an; /*!< Type is ares_dns_rr_t */ ares__array_t *ns; /*!< Type is ares_dns_rr_t */ ares__array_t *ar; /*!< Type is ares_dns_rr_t */ }; #endif gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_record.c000066400000000000000000001154511471441230600234010ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include #ifdef HAVE_STDINT_H # include #endif static void ares__dns_rr_free(ares_dns_rr_t *rr); static void ares_dns_qd_free_cb(void *arg) { ares_dns_qd_t *qd = arg; if (qd == NULL) { return; } ares_free(qd->name); } static void ares_dns_rr_free_cb(void *arg) { ares_dns_rr_t *rr = arg; if (rr == NULL) { return; } ares__dns_rr_free(rr); } ares_status_t ares_dns_record_create(ares_dns_record_t **dnsrec, unsigned short id, unsigned short flags, ares_dns_opcode_t opcode, ares_dns_rcode_t rcode) { if (dnsrec == NULL) { return ARES_EFORMERR; } *dnsrec = NULL; if (!ares_dns_opcode_isvalid(opcode) || !ares_dns_rcode_isvalid(rcode) || !ares_dns_flags_arevalid(flags)) { return ARES_EFORMERR; } *dnsrec = ares_malloc_zero(sizeof(**dnsrec)); if (*dnsrec == NULL) { return ARES_ENOMEM; } (*dnsrec)->id = id; (*dnsrec)->flags = flags; (*dnsrec)->opcode = opcode; (*dnsrec)->rcode = rcode; (*dnsrec)->qd = ares__array_create(sizeof(ares_dns_qd_t), ares_dns_qd_free_cb); (*dnsrec)->an = ares__array_create(sizeof(ares_dns_rr_t), ares_dns_rr_free_cb); (*dnsrec)->ns = ares__array_create(sizeof(ares_dns_rr_t), ares_dns_rr_free_cb); (*dnsrec)->ar = ares__array_create(sizeof(ares_dns_rr_t), ares_dns_rr_free_cb); if ((*dnsrec)->qd == NULL || (*dnsrec)->an == NULL || (*dnsrec)->ns == NULL || (*dnsrec)->ar == NULL) { ares_dns_record_destroy(*dnsrec); *dnsrec = NULL; return ARES_ENOMEM; } return ARES_SUCCESS; } unsigned short ares_dns_record_get_id(const ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return 0; } return dnsrec->id; } ares_bool_t ares_dns_record_set_id(ares_dns_record_t *dnsrec, unsigned short id) { if (dnsrec == NULL) { return ARES_FALSE; } dnsrec->id = id; return ARES_TRUE; } unsigned short ares_dns_record_get_flags(const ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return 0; } return dnsrec->flags; } ares_dns_opcode_t ares_dns_record_get_opcode(const ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return 0; } return dnsrec->opcode; } ares_dns_rcode_t ares_dns_record_get_rcode(const ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return 0; } return dnsrec->rcode; } static void ares__dns_rr_free(ares_dns_rr_t *rr) { ares_free(rr->name); switch (rr->type) { case ARES_REC_TYPE_A: case ARES_REC_TYPE_AAAA: case ARES_REC_TYPE_ANY: /* Nothing to free */ break; case ARES_REC_TYPE_NS: ares_free(rr->r.ns.nsdname); break; case ARES_REC_TYPE_CNAME: ares_free(rr->r.cname.cname); break; case ARES_REC_TYPE_SOA: ares_free(rr->r.soa.mname); ares_free(rr->r.soa.rname); break; case ARES_REC_TYPE_PTR: ares_free(rr->r.ptr.dname); break; case ARES_REC_TYPE_HINFO: ares_free(rr->r.hinfo.cpu); ares_free(rr->r.hinfo.os); break; case ARES_REC_TYPE_MX: ares_free(rr->r.mx.exchange); break; case ARES_REC_TYPE_TXT: ares__dns_multistring_destroy(rr->r.txt.strs); break; case ARES_REC_TYPE_SIG: ares_free(rr->r.sig.signers_name); ares_free(rr->r.sig.signature); break; case ARES_REC_TYPE_SRV: ares_free(rr->r.srv.target); break; case ARES_REC_TYPE_NAPTR: ares_free(rr->r.naptr.flags); ares_free(rr->r.naptr.services); ares_free(rr->r.naptr.regexp); ares_free(rr->r.naptr.replacement); break; case ARES_REC_TYPE_OPT: ares__array_destroy(rr->r.opt.options); break; case ARES_REC_TYPE_TLSA: ares_free(rr->r.tlsa.data); break; case ARES_REC_TYPE_SVCB: ares_free(rr->r.svcb.target); ares__array_destroy(rr->r.svcb.params); break; case ARES_REC_TYPE_HTTPS: ares_free(rr->r.https.target); ares__array_destroy(rr->r.https.params); break; case ARES_REC_TYPE_URI: 
ares_free(rr->r.uri.target); break; case ARES_REC_TYPE_CAA: ares_free(rr->r.caa.tag); ares_free(rr->r.caa.value); break; case ARES_REC_TYPE_RAW_RR: ares_free(rr->r.raw_rr.data); break; } } void ares_dns_record_destroy(ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return; } /* Free questions */ ares__array_destroy(dnsrec->qd); /* Free answers */ ares__array_destroy(dnsrec->an); /* Free authority */ ares__array_destroy(dnsrec->ns); /* Free additional */ ares__array_destroy(dnsrec->ar); ares_free(dnsrec); } size_t ares_dns_record_query_cnt(const ares_dns_record_t *dnsrec) { if (dnsrec == NULL) { return 0; } return ares__array_len(dnsrec->qd); } ares_status_t ares_dns_record_query_add(ares_dns_record_t *dnsrec, const char *name, ares_dns_rec_type_t qtype, ares_dns_class_t qclass) { size_t idx; ares_dns_qd_t *qd; ares_status_t status; if (dnsrec == NULL || name == NULL || !ares_dns_rec_type_isvalid(qtype, ARES_TRUE) || !ares_dns_class_isvalid(qclass, qtype, ARES_TRUE)) { return ARES_EFORMERR; } idx = ares__array_len(dnsrec->qd); status = ares__array_insert_last((void **)&qd, dnsrec->qd); if (status != ARES_SUCCESS) { return status; } qd->name = ares_strdup(name); if (qd->name == NULL) { ares__array_remove_at(dnsrec->qd, idx); return ARES_ENOMEM; } qd->qtype = qtype; qd->qclass = qclass; return ARES_SUCCESS; } ares_status_t ares_dns_record_query_set_name(ares_dns_record_t *dnsrec, size_t idx, const char *name) { char *orig_name = NULL; ares_dns_qd_t *qd; if (dnsrec == NULL || idx >= ares__array_len(dnsrec->qd) || name == NULL) { return ARES_EFORMERR; } qd = ares__array_at(dnsrec->qd, idx); orig_name = qd->name; qd->name = ares_strdup(name); if (qd->name == NULL) { qd->name = orig_name; /* LCOV_EXCL_LINE: OutOfMemory */ return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } ares_free(orig_name); return ARES_SUCCESS; } ares_status_t ares_dns_record_query_set_type(ares_dns_record_t *dnsrec, size_t idx, ares_dns_rec_type_t qtype) { ares_dns_qd_t *qd; if (dnsrec == NULL || idx >= ares__array_len(dnsrec->qd) || !ares_dns_rec_type_isvalid(qtype, ARES_TRUE)) { return ARES_EFORMERR; } qd = ares__array_at(dnsrec->qd, idx); qd->qtype = qtype; return ARES_SUCCESS; } ares_status_t ares_dns_record_query_get(const ares_dns_record_t *dnsrec, size_t idx, const char **name, ares_dns_rec_type_t *qtype, ares_dns_class_t *qclass) { const ares_dns_qd_t *qd; if (dnsrec == NULL || idx >= ares__array_len(dnsrec->qd)) { return ARES_EFORMERR; } qd = ares__array_at(dnsrec->qd, idx); if (name != NULL) { *name = qd->name; } if (qtype != NULL) { *qtype = qd->qtype; } if (qclass != NULL) { *qclass = qd->qclass; } return ARES_SUCCESS; } size_t ares_dns_record_rr_cnt(const ares_dns_record_t *dnsrec, ares_dns_section_t sect) { if (dnsrec == NULL || !ares_dns_section_isvalid(sect)) { return 0; } switch (sect) { case ARES_SECTION_ANSWER: return ares__array_len(dnsrec->an); case ARES_SECTION_AUTHORITY: return ares__array_len(dnsrec->ns); case ARES_SECTION_ADDITIONAL: return ares__array_len(dnsrec->ar); } return 0; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares_status_t ares_dns_record_rr_prealloc(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t cnt) { ares__array_t *arr = NULL; if (dnsrec == NULL || !ares_dns_section_isvalid(sect)) { return ARES_EFORMERR; } switch (sect) { case ARES_SECTION_ANSWER: arr = dnsrec->an; break; case ARES_SECTION_AUTHORITY: arr = dnsrec->ns; break; case ARES_SECTION_ADDITIONAL: arr = dnsrec->ar; break; } if (cnt < ares__array_len(arr)) { return ARES_EFORMERR; } return 
ares__array_set_size(arr, cnt); } ares_status_t ares_dns_record_rr_add(ares_dns_rr_t **rr_out, ares_dns_record_t *dnsrec, ares_dns_section_t sect, const char *name, ares_dns_rec_type_t type, ares_dns_class_t rclass, unsigned int ttl) { ares_dns_rr_t *rr = NULL; ares__array_t *arr = NULL; ares_status_t status; size_t idx; if (dnsrec == NULL || name == NULL || rr_out == NULL || !ares_dns_section_isvalid(sect) || !ares_dns_rec_type_isvalid(type, ARES_FALSE) || !ares_dns_class_isvalid(rclass, type, ARES_FALSE)) { return ARES_EFORMERR; } *rr_out = NULL; switch (sect) { case ARES_SECTION_ANSWER: arr = dnsrec->an; break; case ARES_SECTION_AUTHORITY: arr = dnsrec->ns; break; case ARES_SECTION_ADDITIONAL: arr = dnsrec->ar; break; } idx = ares__array_len(arr); status = ares__array_insert_last((void **)&rr, arr); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } rr->name = ares_strdup(name); if (rr->name == NULL) { ares__array_remove_at(arr, idx); return ARES_ENOMEM; } rr->parent = dnsrec; rr->type = type; rr->rclass = rclass; rr->ttl = ttl; *rr_out = rr; return ARES_SUCCESS; } ares_status_t ares_dns_record_rr_del(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx) { ares__array_t *arr = NULL; if (dnsrec == NULL || !ares_dns_section_isvalid(sect)) { return ARES_EFORMERR; } switch (sect) { case ARES_SECTION_ANSWER: arr = dnsrec->an; break; case ARES_SECTION_AUTHORITY: arr = dnsrec->ns; break; case ARES_SECTION_ADDITIONAL: arr = dnsrec->ar; break; } return ares__array_remove_at(arr, idx); } ares_dns_rr_t *ares_dns_record_rr_get(ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx) { ares__array_t *arr = NULL; if (dnsrec == NULL || !ares_dns_section_isvalid(sect)) { return NULL; } switch (sect) { case ARES_SECTION_ANSWER: arr = dnsrec->an; break; case ARES_SECTION_AUTHORITY: arr = dnsrec->ns; break; case ARES_SECTION_ADDITIONAL: arr = dnsrec->ar; break; } return ares__array_at(arr, idx); } const ares_dns_rr_t * ares_dns_record_rr_get_const(const ares_dns_record_t *dnsrec, ares_dns_section_t sect, size_t idx) { return ares_dns_record_rr_get((void *)((size_t)dnsrec), sect, idx); } const char *ares_dns_rr_get_name(const ares_dns_rr_t *rr) { if (rr == NULL) { return NULL; } return rr->name; } ares_dns_rec_type_t ares_dns_rr_get_type(const ares_dns_rr_t *rr) { if (rr == NULL) { return 0; } return rr->type; } ares_dns_class_t ares_dns_rr_get_class(const ares_dns_rr_t *rr) { if (rr == NULL) { return 0; } return rr->rclass; } unsigned int ares_dns_rr_get_ttl(const ares_dns_rr_t *rr) { if (rr == NULL) { return 0; } return rr->ttl; } static void *ares_dns_rr_data_ptr(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t **lenptr) { if (dns_rr == NULL || dns_rr->type != ares_dns_rr_key_to_rec_type(key)) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } switch (key) { case ARES_RR_A_ADDR: return &dns_rr->r.a.addr; case ARES_RR_NS_NSDNAME: return &dns_rr->r.ns.nsdname; case ARES_RR_CNAME_CNAME: return &dns_rr->r.cname.cname; case ARES_RR_SOA_MNAME: return &dns_rr->r.soa.mname; case ARES_RR_SOA_RNAME: return &dns_rr->r.soa.rname; case ARES_RR_SOA_SERIAL: return &dns_rr->r.soa.serial; case ARES_RR_SOA_REFRESH: return &dns_rr->r.soa.refresh; case ARES_RR_SOA_RETRY: return &dns_rr->r.soa.retry; case ARES_RR_SOA_EXPIRE: return &dns_rr->r.soa.expire; case ARES_RR_SOA_MINIMUM: return &dns_rr->r.soa.minimum; case ARES_RR_PTR_DNAME: return &dns_rr->r.ptr.dname; case ARES_RR_AAAA_ADDR: return &dns_rr->r.aaaa.addr; case ARES_RR_HINFO_CPU: return &dns_rr->r.hinfo.cpu; 
case ARES_RR_HINFO_OS: return &dns_rr->r.hinfo.os; case ARES_RR_MX_PREFERENCE: return &dns_rr->r.mx.preference; case ARES_RR_MX_EXCHANGE: return &dns_rr->r.mx.exchange; case ARES_RR_SIG_TYPE_COVERED: return &dns_rr->r.sig.type_covered; case ARES_RR_SIG_ALGORITHM: return &dns_rr->r.sig.algorithm; case ARES_RR_SIG_LABELS: return &dns_rr->r.sig.labels; case ARES_RR_SIG_ORIGINAL_TTL: return &dns_rr->r.sig.original_ttl; case ARES_RR_SIG_EXPIRATION: return &dns_rr->r.sig.expiration; case ARES_RR_SIG_INCEPTION: return &dns_rr->r.sig.inception; case ARES_RR_SIG_KEY_TAG: return &dns_rr->r.sig.key_tag; case ARES_RR_SIG_SIGNERS_NAME: return &dns_rr->r.sig.signers_name; case ARES_RR_SIG_SIGNATURE: if (lenptr == NULL) { return NULL; } *lenptr = &dns_rr->r.sig.signature_len; return &dns_rr->r.sig.signature; case ARES_RR_TXT_DATA: return &dns_rr->r.txt.strs; case ARES_RR_SRV_PRIORITY: return &dns_rr->r.srv.priority; case ARES_RR_SRV_WEIGHT: return &dns_rr->r.srv.weight; case ARES_RR_SRV_PORT: return &dns_rr->r.srv.port; case ARES_RR_SRV_TARGET: return &dns_rr->r.srv.target; case ARES_RR_NAPTR_ORDER: return &dns_rr->r.naptr.order; case ARES_RR_NAPTR_PREFERENCE: return &dns_rr->r.naptr.preference; case ARES_RR_NAPTR_FLAGS: return &dns_rr->r.naptr.flags; case ARES_RR_NAPTR_SERVICES: return &dns_rr->r.naptr.services; case ARES_RR_NAPTR_REGEXP: return &dns_rr->r.naptr.regexp; case ARES_RR_NAPTR_REPLACEMENT: return &dns_rr->r.naptr.replacement; case ARES_RR_OPT_UDP_SIZE: return &dns_rr->r.opt.udp_size; case ARES_RR_OPT_VERSION: return &dns_rr->r.opt.version; case ARES_RR_OPT_FLAGS: return &dns_rr->r.opt.flags; case ARES_RR_OPT_OPTIONS: return &dns_rr->r.opt.options; case ARES_RR_TLSA_CERT_USAGE: return &dns_rr->r.tlsa.cert_usage; case ARES_RR_TLSA_SELECTOR: return &dns_rr->r.tlsa.selector; case ARES_RR_TLSA_MATCH: return &dns_rr->r.tlsa.match; case ARES_RR_TLSA_DATA: if (lenptr == NULL) { return NULL; } *lenptr = &dns_rr->r.tlsa.data_len; return &dns_rr->r.tlsa.data; case ARES_RR_SVCB_PRIORITY: return &dns_rr->r.svcb.priority; case ARES_RR_SVCB_TARGET: return &dns_rr->r.svcb.target; case ARES_RR_SVCB_PARAMS: return &dns_rr->r.svcb.params; case ARES_RR_HTTPS_PRIORITY: return &dns_rr->r.https.priority; case ARES_RR_HTTPS_TARGET: return &dns_rr->r.https.target; case ARES_RR_HTTPS_PARAMS: return &dns_rr->r.https.params; case ARES_RR_URI_PRIORITY: return &dns_rr->r.uri.priority; case ARES_RR_URI_WEIGHT: return &dns_rr->r.uri.weight; case ARES_RR_URI_TARGET: return &dns_rr->r.uri.target; case ARES_RR_CAA_CRITICAL: return &dns_rr->r.caa.critical; case ARES_RR_CAA_TAG: return &dns_rr->r.caa.tag; case ARES_RR_CAA_VALUE: if (lenptr == NULL) { return NULL; } *lenptr = &dns_rr->r.caa.value_len; return &dns_rr->r.caa.value; case ARES_RR_RAW_RR_TYPE: return &dns_rr->r.raw_rr.type; case ARES_RR_RAW_RR_DATA: if (lenptr == NULL) { return NULL; } *lenptr = &dns_rr->r.raw_rr.length; return &dns_rr->r.raw_rr.data; } return NULL; } static const void *ares_dns_rr_data_ptr_const(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const size_t **lenptr) { /* We're going to cast off the const */ return ares_dns_rr_data_ptr((void *)((size_t)dns_rr), key, (void *)((size_t)lenptr)); } const struct in_addr *ares_dns_rr_get_addr(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { const struct in_addr *addr; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_INADDR) { return NULL; } addr = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (addr == NULL) { return NULL; } return addr; } const struct ares_in6_addr 
*ares_dns_rr_get_addr6(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { const struct ares_in6_addr *addr; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_INADDR6) { return NULL; } addr = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (addr == NULL) { return NULL; } return addr; } unsigned char ares_dns_rr_get_u8(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { const unsigned char *u8; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U8) { return 0; } u8 = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (u8 == NULL) { return 0; } return *u8; } unsigned short ares_dns_rr_get_u16(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { const unsigned short *u16; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U16) { return 0; } u16 = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (u16 == NULL) { return 0; } return *u16; } unsigned int ares_dns_rr_get_u32(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { const unsigned int *u32; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U32) { return 0; } u32 = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (u32 == NULL) { return 0; } return *u32; } const unsigned char *ares_dns_rr_get_bin(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t *len) { unsigned char * const *bin = NULL; size_t const *bin_len = NULL; if ((ares_dns_rr_key_datatype(key) != ARES_DATATYPE_BIN && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_BINP && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) || len == NULL) { return NULL; } /* Array of strings, return concatenated version */ if (ares_dns_rr_key_datatype(key) == ARES_DATATYPE_ABINP) { ares__dns_multistring_t * const *strs = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (strs == NULL) { return NULL; } return ares__dns_multistring_get_combined(*strs, len); } /* Not a multi-string, just straight binary data */ bin = ares_dns_rr_data_ptr_const(dns_rr, key, &bin_len); if (bin == NULL) { return NULL; } /* Shouldn't be possible */ if (bin_len == NULL) { return NULL; } *len = *bin_len; return *bin; } size_t ares_dns_rr_get_abin_cnt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { ares__dns_multistring_t * const *strs; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return 0; } strs = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (strs == NULL) { return 0; } return ares__dns_multistring_cnt(*strs); } const unsigned char *ares_dns_rr_get_abin(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx, size_t *len) { ares__dns_multistring_t * const *strs; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return NULL; } strs = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (strs == NULL) { return NULL; } return ares__dns_multistring_get(*strs, idx, len); } ares_status_t ares_dns_rr_del_abin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx) { ares__dns_multistring_t **strs; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return ARES_EFORMERR; } strs = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (strs == NULL) { return ARES_EFORMERR; } return ares__dns_multistring_del(*strs, idx); } ares_status_t ares_dns_rr_add_abin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const unsigned char *val, size_t len) { ares_status_t status; ares_dns_datatype_t datatype = ares_dns_rr_key_datatype(key); ares_bool_t is_nullterm = (datatype == ARES_DATATYPE_ABINP) ? ARES_TRUE : ARES_FALSE; size_t alloclen = is_nullterm ? 
len + 1 : len; unsigned char *temp; ares__dns_multistring_t **strs; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return ARES_EFORMERR; } strs = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (strs == NULL) { return ARES_EFORMERR; } if (*strs == NULL) { *strs = ares__dns_multistring_create(); if (*strs == NULL) { return ARES_ENOMEM; } } temp = ares_malloc(alloclen); if (temp == NULL) { return ARES_ENOMEM; } memcpy(temp, val, len); /* NULL-term ABINP */ if (is_nullterm) { temp[len] = 0; } status = ares__dns_multistring_add_own(*strs, temp, len); if (status != ARES_SUCCESS) { ares_free(temp); } return status; } const char *ares_dns_rr_get_str(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { char * const *str; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_STR && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_NAME) { return NULL; } str = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (str == NULL) { return NULL; } return *str; } size_t ares_dns_rr_get_opt_cnt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key) { ares__array_t * const *opts; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_OPT) { return 0; } opts = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (opts == NULL || *opts == NULL) { return 0; } return ares__array_len(*opts); } unsigned short ares_dns_rr_get_opt(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, size_t idx, const unsigned char **val, size_t *val_len) { ares__array_t * const *opts; const ares__dns_optval_t *opt; if (val) { *val = NULL; } if (val_len) { *val_len = 0; } if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_OPT) { return 65535; } opts = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (opts == NULL || *opts == NULL) { return 65535; } opt = ares__array_at(*opts, idx); if (opt == NULL) { return 65535; } if (val) { *val = opt->val; } if (val_len) { *val_len = opt->val_len; } return opt->opt; } ares_bool_t ares_dns_rr_get_opt_byid(const ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, const unsigned char **val, size_t *val_len) { ares__array_t * const *opts; size_t i; size_t cnt; const ares__dns_optval_t *optptr = NULL; if (val) { *val = NULL; } if (val_len) { *val_len = 0; } if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_OPT) { return ARES_FALSE; } opts = ares_dns_rr_data_ptr_const(dns_rr, key, NULL); if (opts == NULL || *opts == NULL) { return ARES_FALSE; } cnt = ares__array_len(*opts); for (i = 0; i < cnt; i++) { optptr = ares__array_at(*opts, i); if (optptr == NULL) { return ARES_FALSE; } if (optptr->opt == opt) { break; } } if (i >= cnt || optptr == NULL) { return ARES_FALSE; } if (val) { *val = optptr->val; } if (val_len) { *val_len = optptr->val_len; } return ARES_TRUE; } ares_status_t ares_dns_rr_set_addr(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const struct in_addr *addr) { struct in_addr *a; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_INADDR || addr == NULL) { return ARES_EFORMERR; } a = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (a == NULL) { return ARES_EFORMERR; } memcpy(a, addr, sizeof(*a)); return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_addr6(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const struct ares_in6_addr *addr) { struct ares_in6_addr *a; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_INADDR6 || addr == NULL) { return ARES_EFORMERR; } a = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (a == NULL) { return ARES_EFORMERR; } memcpy(a, addr, sizeof(*a)); return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_u8(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned char 
val) { unsigned char *u8; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U8) { return ARES_EFORMERR; } u8 = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (u8 == NULL) { return ARES_EFORMERR; } *u8 = val; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_u16(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short val) { unsigned short *u16; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U16) { return ARES_EFORMERR; } u16 = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (u16 == NULL) { return ARES_EFORMERR; } *u16 = val; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_u32(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned int val) { unsigned int *u32; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U32) { return ARES_EFORMERR; } u32 = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (u32 == NULL) { return ARES_EFORMERR; } *u32 = val; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_bin_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned char *val, size_t len) { unsigned char **bin; size_t *bin_len = NULL; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_BIN && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_BINP && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return ARES_EFORMERR; } if (ares_dns_rr_key_datatype(key) == ARES_DATATYPE_ABINP) { ares__dns_multistring_t **strs = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (strs == NULL) { return ARES_EFORMERR; } if (*strs == NULL) { *strs = ares__dns_multistring_create(); if (*strs == NULL) { return ARES_ENOMEM; } } /* Clear all existing entries as this is an override */ ares__dns_multistring_clear(*strs); return ares__dns_multistring_add_own(*strs, val, len); } bin = ares_dns_rr_data_ptr(dns_rr, key, &bin_len); if (bin == NULL || bin_len == NULL) { return ARES_EFORMERR; } if (*bin) { ares_free(*bin); } *bin = val; *bin_len = len; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_bin(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const unsigned char *val, size_t len) { ares_status_t status; ares_dns_datatype_t datatype = ares_dns_rr_key_datatype(key); ares_bool_t is_nullterm = (datatype == ARES_DATATYPE_BINP || datatype == ARES_DATATYPE_ABINP) ? ARES_TRUE : ARES_FALSE; size_t alloclen = is_nullterm ? 
len + 1 : len; unsigned char *temp = ares_malloc(alloclen); if (temp == NULL) { return ARES_ENOMEM; } memcpy(temp, val, len); /* NULL-term BINP */ if (is_nullterm) { temp[len] = 0; } status = ares_dns_rr_set_bin_own(dns_rr, key, temp, len); if (status != ARES_SUCCESS) { ares_free(temp); } return status; } ares_status_t ares_dns_rr_set_str_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, char *val) { char **str; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_STR && ares_dns_rr_key_datatype(key) != ARES_DATATYPE_NAME) { return ARES_EFORMERR; } str = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (str == NULL) { return ARES_EFORMERR; } if (*str) { ares_free(*str); } *str = val; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_str(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, const char *val) { ares_status_t status; char *temp = NULL; if (val != NULL) { temp = ares_strdup(val); if (temp == NULL) { return ARES_ENOMEM; } } status = ares_dns_rr_set_str_own(dns_rr, key, temp); if (status != ARES_SUCCESS) { ares_free(temp); } return status; } ares_status_t ares_dns_rr_set_abin_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, ares__dns_multistring_t *strs) { ares__dns_multistring_t **strs_ptr; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_ABINP) { return ARES_EFORMERR; } strs_ptr = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (strs_ptr == NULL) { return ARES_EFORMERR; } if (*strs_ptr != NULL) { ares__dns_multistring_destroy(*strs_ptr); } *strs_ptr = strs; return ARES_SUCCESS; } static void ares__dns_opt_free_cb(void *arg) { ares__dns_optval_t *opt = arg; if (opt == NULL) { return; } ares_free(opt->val); } ares_status_t ares_dns_rr_set_opt_own(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, unsigned char *val, size_t val_len) { ares__array_t **options; ares__dns_optval_t *optptr = NULL; size_t idx; size_t cnt; ares_status_t status; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_OPT) { return ARES_EFORMERR; } options = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (options == NULL) { return ARES_EFORMERR; } if (*options == NULL) { *options = ares__array_create(sizeof(ares__dns_optval_t), ares__dns_opt_free_cb); } if (*options == NULL) { return ARES_ENOMEM; } cnt = ares__array_len(*options); for (idx = 0; idx < cnt; idx++) { optptr = ares__array_at(*options, idx); if (optptr == NULL) { return ARES_EFORMERR; } if (optptr->opt == opt) { break; } } /* Duplicate entry, replace */ if (idx != cnt && optptr != NULL) { goto done; } status = ares__array_insert_last((void **)&optptr, *options); if (status != ARES_SUCCESS) { return status; } done: ares_free(optptr->val); optptr->opt = opt; optptr->val = val; optptr->val_len = val_len; return ARES_SUCCESS; } ares_status_t ares_dns_rr_set_opt(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt, const unsigned char *val, size_t val_len) { unsigned char *temp = NULL; ares_status_t status; if (val != NULL) { temp = ares_malloc(val_len + 1); if (temp == NULL) { return ARES_ENOMEM; } memcpy(temp, val, val_len); temp[val_len] = 0; } status = ares_dns_rr_set_opt_own(dns_rr, key, opt, temp, val_len); if (status != ARES_SUCCESS) { ares_free(temp); } return status; } ares_status_t ares_dns_rr_del_opt_byid(ares_dns_rr_t *dns_rr, ares_dns_rr_key_t key, unsigned short opt) { ares__array_t **options; const ares__dns_optval_t *optptr; size_t idx; size_t cnt; if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_OPT) { return ARES_EFORMERR; } options = ares_dns_rr_data_ptr(dns_rr, key, NULL); if (options == NULL) { return ARES_EFORMERR; } 
/* No options */ if (*options == NULL) { return ARES_SUCCESS; } cnt = ares__array_len(*options); for (idx = 0; idx < cnt; idx++) { optptr = ares__array_at_const(*options, idx); if (optptr == NULL) { return ARES_ENOTFOUND; } if (optptr->opt == opt) { return ares__array_remove_at(*options, idx); } } return ARES_ENOTFOUND; } char *ares_dns_addr_to_ptr(const struct ares_addr *addr) { ares__buf_t *buf = NULL; const unsigned char *ptr = NULL; size_t ptr_len = 0; size_t i; ares_status_t status; static const unsigned char hexbytes[] = "0123456789abcdef"; if (addr->family != AF_INET && addr->family != AF_INET6) { goto fail; } buf = ares__buf_create(); if (buf == NULL) { goto fail; } if (addr->family == AF_INET) { ptr = (const unsigned char *)&addr->addr.addr4; ptr_len = 4; } else { ptr = (const unsigned char *)&addr->addr.addr6; ptr_len = 16; } for (i = ptr_len; i > 0; i--) { if (addr->family == AF_INET) { status = ares__buf_append_num_dec(buf, (size_t)ptr[i - 1], 0); } else { unsigned char c; c = ptr[i - 1] & 0xF; status = ares__buf_append_byte(buf, hexbytes[c]); if (status != ARES_SUCCESS) { goto fail; } status = ares__buf_append_byte(buf, '.'); if (status != ARES_SUCCESS) { goto fail; } c = (ptr[i - 1] >> 4) & 0xF; status = ares__buf_append_byte(buf, hexbytes[c]); } if (status != ARES_SUCCESS) { goto fail; } status = ares__buf_append_byte(buf, '.'); if (status != ARES_SUCCESS) { goto fail; } } if (addr->family == AF_INET) { status = ares__buf_append(buf, (const unsigned char *)"in-addr.arpa", 12); } else { status = ares__buf_append(buf, (const unsigned char *)"ip6.arpa", 8); } if (status != ARES_SUCCESS) { goto fail; } return ares__buf_finish_str(buf, NULL); fail: ares__buf_destroy(buf); return NULL; } ares_dns_rr_t *ares_dns_get_opt_rr(ares_dns_record_t *rec) { size_t i; for (i = 0; i < ares_dns_record_rr_cnt(rec, ARES_SECTION_ADDITIONAL); i++) { ares_dns_rr_t *rr = ares_dns_record_rr_get(rec, ARES_SECTION_ADDITIONAL, i); if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) { return rr; } } return NULL; } const ares_dns_rr_t *ares_dns_get_opt_rr_const(const ares_dns_record_t *rec) { size_t i; for (i = 0; i < ares_dns_record_rr_cnt(rec, ARES_SECTION_ADDITIONAL); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get_const(rec, ARES_SECTION_ADDITIONAL, i); if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) { return rr; } } return NULL; } /* Construct a DNS record for a name with given class and type. Used internally * by ares_search() and ares_create_query(). 
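 *
 * Illustrative usage sketch (placeholder values, not a prescribed call): an
 * A query for "example.com" with recursion desired and EDNS advertising a
 * 1232-byte UDP payload could be built with
 *   ares_dns_record_create_query(&dnsrec, "example.com", ARES_CLASS_IN,
 *                                ARES_REC_TYPE_A, id, ARES_FLAG_RD, 1232);
 * where the hostname, id and UDP size are merely example inputs.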
*/ ares_status_t ares_dns_record_create_query(ares_dns_record_t **dnsrec, const char *name, ares_dns_class_t dnsclass, ares_dns_rec_type_t type, unsigned short id, ares_dns_flags_t flags, size_t max_udp_size) { ares_status_t status; ares_dns_rr_t *rr = NULL; if (dnsrec == NULL) { return ARES_EFORMERR; } *dnsrec = NULL; /* Per RFC 7686, reject queries for ".onion" domain names with NXDOMAIN */ if (ares__is_onion_domain(name)) { status = ARES_ENOTFOUND; goto done; } status = ares_dns_record_create(dnsrec, id, (unsigned short)flags, ARES_OPCODE_QUERY, ARES_RCODE_NOERROR); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_record_query_add(*dnsrec, name, type, dnsclass); if (status != ARES_SUCCESS) { goto done; } /* max_udp_size > 0 indicates EDNS, so send OPT RR as an additional record */ if (max_udp_size > 0) { /* max_udp_size must fit into a 16 bit unsigned integer field on the OPT * RR, so check here that it fits */ if (max_udp_size > 65535) { status = ARES_EFORMERR; goto done; } status = ares_dns_record_rr_add(&rr, *dnsrec, ARES_SECTION_ADDITIONAL, "", ARES_REC_TYPE_OPT, ARES_CLASS_IN, 0); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_rr_set_u16(rr, ARES_RR_OPT_UDP_SIZE, (unsigned short)max_udp_size); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_rr_set_u8(rr, ARES_RR_OPT_VERSION, 0); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_rr_set_u16(rr, ARES_RR_OPT_FLAGS, 0); if (status != ARES_SUCCESS) { goto done; } } done: if (status != ARES_SUCCESS) { ares_dns_record_destroy(*dnsrec); *dnsrec = NULL; } return status; } ares_status_t ares_dns_record_duplicate_ex(ares_dns_record_t **dest, const ares_dns_record_t *src) { unsigned char *data = NULL; size_t data_len = 0; ares_status_t status; if (dest == NULL || src == NULL) { return ARES_EFORMERR; } *dest = NULL; status = ares_dns_write(src, &data, &data_len); if (status != ARES_SUCCESS) { return status; } status = ares_dns_parse(data, data_len, 0, dest); ares_free(data); return status; } ares_dns_record_t *ares_dns_record_duplicate(const ares_dns_record_t *dnsrec) { ares_dns_record_t *dest = NULL; ares_dns_record_duplicate_ex(&dest, dnsrec); return dest; } gevent-24.11.1/deps/c-ares/src/lib/record/ares_dns_write.c000066400000000000000000001056651471441230600232630ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include #ifdef HAVE_STDINT_H # include #endif static ares_status_t ares_dns_write_header(const ares_dns_record_t *dnsrec, ares__buf_t *buf) { unsigned short u16; unsigned short opcode; unsigned short rcode; ares_status_t status; /* ID */ status = ares__buf_append_be16(buf, dnsrec->id); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Flags */ u16 = 0; /* QR */ if (dnsrec->flags & ARES_FLAG_QR) { u16 |= 0x8000; } /* OPCODE */ opcode = (unsigned short)(dnsrec->opcode & 0xF); opcode <<= 11; u16 |= opcode; /* AA */ if (dnsrec->flags & ARES_FLAG_AA) { u16 |= 0x400; } /* TC */ if (dnsrec->flags & ARES_FLAG_TC) { u16 |= 0x200; } /* RD */ if (dnsrec->flags & ARES_FLAG_RD) { u16 |= 0x100; } /* RA */ if (dnsrec->flags & ARES_FLAG_RA) { u16 |= 0x80; } /* Z -- unused */ /* AD */ if (dnsrec->flags & ARES_FLAG_AD) { u16 |= 0x20; } /* CD */ if (dnsrec->flags & ARES_FLAG_CD) { u16 |= 0x10; } /* RCODE */ if (dnsrec->rcode > 15 && ares_dns_get_opt_rr_const(dnsrec) == NULL) { /* Must have OPT RR in order to write extended error codes */ rcode = ARES_RCODE_SERVFAIL; } else { rcode = (unsigned short)(dnsrec->rcode & 0xF); } u16 |= rcode; status = ares__buf_append_be16(buf, u16); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* QDCOUNT */ status = ares__buf_append_be16( buf, (unsigned short)ares_dns_record_query_cnt(dnsrec)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* ANCOUNT */ status = ares__buf_append_be16( buf, (unsigned short)ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* NSCOUNT */ status = ares__buf_append_be16(buf, (unsigned short)ares_dns_record_rr_cnt( dnsrec, ARES_SECTION_AUTHORITY)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* ARCOUNT */ status = ares__buf_append_be16(buf, (unsigned short)ares_dns_record_rr_cnt( dnsrec, ARES_SECTION_ADDITIONAL)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; } static ares_status_t ares_dns_write_questions(const ares_dns_record_t *dnsrec, ares__llist_t **namelist, ares__buf_t *buf) { size_t i; for (i = 0; i < ares_dns_record_query_cnt(dnsrec); i++) { ares_status_t status; const char *name = NULL; ares_dns_rec_type_t qtype; ares_dns_class_t qclass; status = ares_dns_record_query_get(dnsrec, i, &name, &qtype, &qclass); if (status != ARES_SUCCESS) { return status; } /* Name */ status = ares__dns_name_write(buf, namelist, ARES_TRUE, name); if (status != ARES_SUCCESS) { return status; } /* Type */ status = ares__buf_append_be16(buf, (unsigned short)qtype); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Class */ status = ares__buf_append_be16(buf, (unsigned short)qclass); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; } static ares_status_t ares_dns_write_rr_name(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist, ares_bool_t validate_hostname, ares_dns_rr_key_t key) { const char *name; name = ares_dns_rr_get_str(rr, key); if (name == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__dns_name_write(buf, namelist, validate_hostname, name); } static ares_status_t ares_dns_write_rr_str(ares__buf_t *buf, const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { const char *str; size_t len; 
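  /* Descriptive note: this helper emits a DNS <character-string>, i.e. a
   * one-byte length prefix followed by at most 255 bytes of data; the length
   * check below enforces that limit.  For example (illustrative value only),
   * an HINFO CPU string "i386" is written as the five bytes
   * 0x04 'i' '3' '8' '6'. */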
ares_status_t status; str = ares_dns_rr_get_str(rr, key); if (str == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } len = ares_strlen(str); if (len > 255) { return ARES_EFORMERR; } /* Write 1 byte length */ status = ares__buf_append_byte(buf, (unsigned char)(len & 0xFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } if (len == 0) { return ARES_SUCCESS; } /* Write string */ return ares__buf_append(buf, (const unsigned char *)str, len); } static ares_status_t ares_dns_write_binstr(ares__buf_t *buf, const unsigned char *bin, size_t bin_len) { const unsigned char *ptr; size_t ptr_len; ares_status_t status; /* split into possible multiple 255-byte or less length strings */ ptr = bin; ptr_len = bin_len; do { size_t len = ptr_len; if (len > 255) { len = 255; } /* Length */ status = ares__buf_append_byte(buf, (unsigned char)(len & 0xFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* String */ if (len) { status = ares__buf_append(buf, ptr, len); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } ptr += len; ptr_len -= len; } while (ptr_len > 0); return ARES_SUCCESS; } static ares_status_t ares_dns_write_rr_abin(ares__buf_t *buf, const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { ares_status_t status = ARES_EFORMERR; size_t i; size_t cnt = ares_dns_rr_get_abin_cnt(rr, key); if (cnt == 0) { return ARES_EFORMERR; } for (i = 0; i < cnt; i++) { const unsigned char *bin; size_t bin_len; bin = ares_dns_rr_get_abin(rr, key, i, &bin_len); status = ares_dns_write_binstr(buf, bin, bin_len); if (status != ARES_SUCCESS) { break; } } return status; } static ares_status_t ares_dns_write_rr_be32(ares__buf_t *buf, const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U32) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__buf_append_be32(buf, ares_dns_rr_get_u32(rr, key)); } static ares_status_t ares_dns_write_rr_be16(ares__buf_t *buf, const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U16) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__buf_append_be16(buf, ares_dns_rr_get_u16(rr, key)); } static ares_status_t ares_dns_write_rr_u8(ares__buf_t *buf, const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { if (ares_dns_rr_key_datatype(key) != ARES_DATATYPE_U8) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__buf_append_byte(buf, ares_dns_rr_get_u8(rr, key)); } static ares_status_t ares_dns_write_rr_a(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { const struct in_addr *addr; (void)namelist; addr = ares_dns_rr_get_addr(rr, ARES_RR_A_ADDR); if (addr == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__buf_append(buf, (const unsigned char *)addr, sizeof(*addr)); } static ares_status_t ares_dns_write_rr_ns(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_NS_NSDNAME); } static ares_status_t ares_dns_write_rr_cname(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_CNAME_CNAME); } static ares_status_t ares_dns_write_rr_soa(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; /* MNAME */ status = ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, 
ARES_RR_SOA_MNAME); if (status != ARES_SUCCESS) { return status; } /* RNAME */ status = ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_SOA_RNAME); if (status != ARES_SUCCESS) { return status; } /* SERIAL */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SOA_SERIAL); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* REFRESH */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SOA_REFRESH); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* RETRY */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SOA_RETRY); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* EXPIRE */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SOA_EXPIRE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* MINIMUM */ return ares_dns_write_rr_be32(buf, rr, ARES_RR_SOA_MINIMUM); } static ares_status_t ares_dns_write_rr_ptr(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_PTR_DNAME); } static ares_status_t ares_dns_write_rr_hinfo(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; (void)namelist; /* CPU */ status = ares_dns_write_rr_str(buf, rr, ARES_RR_HINFO_CPU); if (status != ARES_SUCCESS) { return status; } /* OS */ return ares_dns_write_rr_str(buf, rr, ARES_RR_HINFO_OS); } static ares_status_t ares_dns_write_rr_mx(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; /* PREFERENCE */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_MX_PREFERENCE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* EXCHANGE */ return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_MX_EXCHANGE); } static ares_status_t ares_dns_write_rr_txt(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { (void)namelist; return ares_dns_write_rr_abin(buf, rr, ARES_RR_TXT_DATA); } static ares_status_t ares_dns_write_rr_sig(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; const unsigned char *data; size_t len = 0; (void)namelist; /* TYPE COVERED */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SIG_TYPE_COVERED); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* ALGORITHM */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_SIG_ALGORITHM); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* LABELS */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_SIG_LABELS); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* ORIGINAL TTL */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SIG_ORIGINAL_TTL); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* EXPIRATION */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SIG_EXPIRATION); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* INCEPTION */ status = ares_dns_write_rr_be32(buf, rr, ARES_RR_SIG_INCEPTION); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* KEY TAG */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SIG_KEY_TAG); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* SIGNERS NAME */ status = ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_SIG_SIGNERS_NAME); if (status != ARES_SUCCESS) { return status; } /* SIGNATURE -- binary, rest of 
buffer, required to be non-zero length */ data = ares_dns_rr_get_bin(rr, ARES_RR_SIG_SIGNATURE, &len); if (data == NULL || len == 0) { return ARES_EFORMERR; } return ares__buf_append(buf, data, len); } static ares_status_t ares_dns_write_rr_aaaa(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { const struct ares_in6_addr *addr; (void)namelist; addr = ares_dns_rr_get_addr6(rr, ARES_RR_AAAA_ADDR); if (addr == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } return ares__buf_append(buf, (const unsigned char *)addr, sizeof(*addr)); } static ares_status_t ares_dns_write_rr_srv(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; /* PRIORITY */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SRV_PRIORITY); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* WEIGHT */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SRV_WEIGHT); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* PORT */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SRV_PORT); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TARGET */ return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_SRV_TARGET); } static ares_status_t ares_dns_write_rr_naptr(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; /* ORDER */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_NAPTR_ORDER); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* PREFERENCE */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_NAPTR_PREFERENCE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* FLAGS */ status = ares_dns_write_rr_str(buf, rr, ARES_RR_NAPTR_FLAGS); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* SERVICES */ status = ares_dns_write_rr_str(buf, rr, ARES_RR_NAPTR_SERVICES); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* REGEXP */ status = ares_dns_write_rr_str(buf, rr, ARES_RR_NAPTR_REGEXP); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* REPLACEMENT */ return ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_NAPTR_REPLACEMENT); } static ares_status_t ares_dns_write_rr_opt(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { size_t len = ares__buf_len(buf); ares_status_t status; unsigned int ttl = 0; size_t i; unsigned short rcode = (unsigned short)((rr->parent->rcode >> 4) & 0xFF); (void)namelist; /* Coverity reports on this even though its not possible when taken * into context */ if (len == 0) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* We need to go back and overwrite the class and ttl that were emitted as * the OPT record overloads them for its own use (yes, very strange!) 
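 * Concretely, the CLASS slot is rewritten to carry the advertised UDP
 * payload size, and the TTL slot is repacked as
 * (upper 8 bits of the rcode) << 24 | (version) << 16 | (flags), which is
 * exactly what the rewrite just below emits.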
*/ status = ares__buf_set_length(buf, len - 2 /* RDLENGTH */ - 4 /* TTL */ - 2 /* CLASS */); if (status != ARES_SUCCESS) { return status; } /* Class -> UDP Size */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_OPT_UDP_SIZE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TTL -> rcode (u8) << 24 | version (u8) << 16 | flags (u16) */ ttl |= (unsigned int)rcode << 24; ttl |= (unsigned int)ares_dns_rr_get_u8(rr, ARES_RR_OPT_VERSION) << 16; ttl |= (unsigned int)ares_dns_rr_get_u16(rr, ARES_RR_OPT_FLAGS); status = ares__buf_append_be32(buf, ttl); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Now go back to real end */ status = ares__buf_set_length(buf, len); if (status != ARES_SUCCESS) { return status; } /* Append Options */ for (i = 0; i < ares_dns_rr_get_opt_cnt(rr, ARES_RR_OPT_OPTIONS); i++) { unsigned short opt; size_t val_len; const unsigned char *val; opt = ares_dns_rr_get_opt(rr, ARES_RR_OPT_OPTIONS, i, &val, &val_len); /* BE16 option */ status = ares__buf_append_be16(buf, opt); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* BE16 length */ status = ares__buf_append_be16(buf, (unsigned short)(val_len & 0xFFFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Value */ if (val && val_len) { status = ares__buf_append(buf, val, val_len); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } } return ARES_SUCCESS; } static ares_status_t ares_dns_write_rr_tlsa(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; const unsigned char *data; size_t len = 0; (void)namelist; /* CERT_USAGE */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_TLSA_CERT_USAGE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* SELECTOR */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_TLSA_SELECTOR); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* MATCH */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_TLSA_MATCH); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* DATA -- binary, rest of buffer, required to be non-zero length */ data = ares_dns_rr_get_bin(rr, ARES_RR_TLSA_DATA, &len); if (data == NULL || len == 0) { return ARES_EFORMERR; } return ares__buf_append(buf, data, len); } static ares_status_t ares_dns_write_rr_svcb(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; size_t i; /* PRIORITY */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_SVCB_PRIORITY); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TARGET */ status = ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_SVCB_TARGET); if (status != ARES_SUCCESS) { return status; } /* Append Params */ for (i = 0; i < ares_dns_rr_get_opt_cnt(rr, ARES_RR_SVCB_PARAMS); i++) { unsigned short opt; size_t val_len; const unsigned char *val; opt = ares_dns_rr_get_opt(rr, ARES_RR_SVCB_PARAMS, i, &val, &val_len); /* BE16 option */ status = ares__buf_append_be16(buf, opt); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* BE16 length */ status = ares__buf_append_be16(buf, (unsigned short)(val_len & 0xFFFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Value */ if (val && val_len) { status = ares__buf_append(buf, val, val_len); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: 
OutOfMemory */ } } } return ARES_SUCCESS; } static ares_status_t ares_dns_write_rr_https(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; size_t i; /* PRIORITY */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_HTTPS_PRIORITY); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TARGET */ status = ares_dns_write_rr_name(buf, rr, namelist, ARES_FALSE, ARES_RR_HTTPS_TARGET); if (status != ARES_SUCCESS) { return status; } /* Append Params */ for (i = 0; i < ares_dns_rr_get_opt_cnt(rr, ARES_RR_HTTPS_PARAMS); i++) { unsigned short opt; size_t val_len; const unsigned char *val; opt = ares_dns_rr_get_opt(rr, ARES_RR_HTTPS_PARAMS, i, &val, &val_len); /* BE16 option */ status = ares__buf_append_be16(buf, opt); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* BE16 length */ status = ares__buf_append_be16(buf, (unsigned short)(val_len & 0xFFFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Value */ if (val && val_len) { status = ares__buf_append(buf, val, val_len); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } } return ARES_SUCCESS; } static ares_status_t ares_dns_write_rr_uri(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { ares_status_t status; const char *target; (void)namelist; /* PRIORITY */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_URI_PRIORITY); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* WEIGHT */ status = ares_dns_write_rr_be16(buf, rr, ARES_RR_URI_WEIGHT); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TARGET -- not in DNS string format, rest of buffer, required to be * non-zero length */ target = ares_dns_rr_get_str(rr, ARES_RR_URI_TARGET); if (target == NULL || ares_strlen(target) == 0) { return ARES_EFORMERR; } return ares__buf_append(buf, (const unsigned char *)target, ares_strlen(target)); } static ares_status_t ares_dns_write_rr_caa(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { const unsigned char *data = NULL; size_t data_len = 0; ares_status_t status; (void)namelist; /* CRITICAL */ status = ares_dns_write_rr_u8(buf, rr, ARES_RR_CAA_CRITICAL); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Tag */ status = ares_dns_write_rr_str(buf, rr, ARES_RR_CAA_TAG); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Value - binary! 
(remaining buffer */ data = ares_dns_rr_get_bin(rr, ARES_RR_CAA_VALUE, &data_len); if (data == NULL || data_len == 0) { return ARES_EFORMERR; } return ares__buf_append(buf, data, data_len); } static ares_status_t ares_dns_write_rr_raw_rr(ares__buf_t *buf, const ares_dns_rr_t *rr, ares__llist_t **namelist) { size_t len = ares__buf_len(buf); ares_status_t status; const unsigned char *data = NULL; size_t data_len = 0; (void)namelist; /* Coverity reports on this even though its not possible when taken * into context */ if (len == 0) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* We need to go back and overwrite the type that was emitted by the parent * function */ status = ares__buf_set_length(buf, len - 2 /* RDLENGTH */ - 4 /* TTL */ - 2 /* CLASS */ - 2 /* TYPE */); if (status != ARES_SUCCESS) { return status; } status = ares_dns_write_rr_be16(buf, rr, ARES_RR_RAW_RR_TYPE); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Now go back to real end */ status = ares__buf_set_length(buf, len); if (status != ARES_SUCCESS) { return status; } /* Output raw data */ data = ares_dns_rr_get_bin(rr, ARES_RR_RAW_RR_DATA, &data_len); if (data == NULL) { return ARES_EFORMERR; } if (data_len == 0) { return ARES_SUCCESS; } return ares__buf_append(buf, data, data_len); } static ares_status_t ares_dns_write_rr(const ares_dns_record_t *dnsrec, ares__llist_t **namelist, ares_dns_section_t section, ares__buf_t *buf) { size_t i; for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, section); i++) { const ares_dns_rr_t *rr; ares_dns_rec_type_t type; ares_bool_t allow_compress; ares__llist_t **namelistptr = NULL; size_t pos_len; ares_status_t status; size_t rdlength; size_t end_length; unsigned int ttl; rr = ares_dns_record_rr_get_const(dnsrec, section, i); if (rr == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } type = ares_dns_rr_get_type(rr); allow_compress = ares_dns_rec_type_allow_name_compression(type); if (allow_compress) { namelistptr = namelist; } /* Name */ status = ares__dns_name_write(buf, namelist, ARES_TRUE, ares_dns_rr_get_name(rr)); if (status != ARES_SUCCESS) { return status; } /* Type */ status = ares__buf_append_be16(buf, (unsigned short)type); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Class */ status = ares__buf_append_be16(buf, (unsigned short)ares_dns_rr_get_class(rr)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* TTL */ ttl = ares_dns_rr_get_ttl(rr); if (rr->parent->ttl_decrement > ttl) { ttl = 0; } else { ttl -= rr->parent->ttl_decrement; } status = ares__buf_append_be32(buf, ttl); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Length */ pos_len = ares__buf_len(buf); /* Save to write real length later */ status = ares__buf_append_be16(buf, 0); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Data */ switch (type) { case ARES_REC_TYPE_A: status = ares_dns_write_rr_a(buf, rr, namelistptr); break; case ARES_REC_TYPE_NS: status = ares_dns_write_rr_ns(buf, rr, namelistptr); break; case ARES_REC_TYPE_CNAME: status = ares_dns_write_rr_cname(buf, rr, namelistptr); break; case ARES_REC_TYPE_SOA: status = ares_dns_write_rr_soa(buf, rr, namelistptr); break; case ARES_REC_TYPE_PTR: status = ares_dns_write_rr_ptr(buf, rr, namelistptr); break; case ARES_REC_TYPE_HINFO: status = ares_dns_write_rr_hinfo(buf, rr, namelistptr); break; case ARES_REC_TYPE_MX: status = ares_dns_write_rr_mx(buf, 
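/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * The surrounding record writer uses a "placeholder then back-patch" idiom for
 * RDLENGTH: append a zero 16-bit length, emit the variable-length RDATA, then
 * rewind the logical length with ares__buf_set_length() to overwrite the
 * placeholder before restoring the real end. A minimal stand-alone version of
 * that idiom, built only from functions declared in ares__buf.h, might look
 * like the following (write_len_prefixed and write_payload are hypothetical
 * names, not part of the library):
 *
 *   static ares_status_t write_len_prefixed(ares__buf_t *buf,
 *                                           ares_status_t (*write_payload)(ares__buf_t *))
 *   {
 *     size_t        pos_len = ares__buf_len(buf);            // offset of the placeholder
 *     size_t        end_len;
 *     ares_status_t status  = ares__buf_append_be16(buf, 0); // 16-bit placeholder
 *     if (status != ARES_SUCCESS) {
 *       return status;
 *     }
 *     status = write_payload(buf);                           // emit variable-length body
 *     if (status != ARES_SUCCESS) {
 *       return status;
 *     }
 *     end_len = ares__buf_len(buf);
 *     status  = ares__buf_set_length(buf, pos_len);          // rewind to the placeholder
 *     if (status != ARES_SUCCESS) {
 *       return status;
 *     }
 *     status = ares__buf_append_be16(buf, (unsigned short)((end_len - pos_len - 2) & 0xFFFF));
 *     if (status != ARES_SUCCESS) {
 *       return status;
 *     }
 *     return ares__buf_set_length(buf, end_len);             // restore the real end
 *   }
 */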
rr, namelistptr); break; case ARES_REC_TYPE_TXT: status = ares_dns_write_rr_txt(buf, rr, namelistptr); break; case ARES_REC_TYPE_SIG: status = ares_dns_write_rr_sig(buf, rr, namelistptr); break; case ARES_REC_TYPE_AAAA: status = ares_dns_write_rr_aaaa(buf, rr, namelistptr); break; case ARES_REC_TYPE_SRV: status = ares_dns_write_rr_srv(buf, rr, namelistptr); break; case ARES_REC_TYPE_NAPTR: status = ares_dns_write_rr_naptr(buf, rr, namelistptr); break; case ARES_REC_TYPE_ANY: status = ARES_EFORMERR; break; case ARES_REC_TYPE_OPT: status = ares_dns_write_rr_opt(buf, rr, namelistptr); break; case ARES_REC_TYPE_TLSA: status = ares_dns_write_rr_tlsa(buf, rr, namelistptr); break; case ARES_REC_TYPE_SVCB: status = ares_dns_write_rr_svcb(buf, rr, namelistptr); break; case ARES_REC_TYPE_HTTPS: status = ares_dns_write_rr_https(buf, rr, namelistptr); break; case ARES_REC_TYPE_URI: status = ares_dns_write_rr_uri(buf, rr, namelistptr); break; case ARES_REC_TYPE_CAA: status = ares_dns_write_rr_caa(buf, rr, namelistptr); break; case ARES_REC_TYPE_RAW_RR: status = ares_dns_write_rr_raw_rr(buf, rr, namelistptr); break; } if (status != ARES_SUCCESS) { return status; } /* Back off write pointer, write real length, then go back to proper * position */ end_length = ares__buf_len(buf); rdlength = end_length - pos_len - 2; status = ares__buf_set_length(buf, pos_len); if (status != ARES_SUCCESS) { return status; } status = ares__buf_append_be16(buf, (unsigned short)(rdlength & 0xFFFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_set_length(buf, end_length); if (status != ARES_SUCCESS) { return status; } } return ARES_SUCCESS; } ares_status_t ares_dns_write_buf(const ares_dns_record_t *dnsrec, ares__buf_t *buf) { ares__llist_t *namelist = NULL; size_t orig_len; ares_status_t status; if (dnsrec == NULL || buf == NULL) { return ARES_EFORMERR; } orig_len = ares__buf_len(buf); status = ares_dns_write_header(dnsrec, buf); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_write_questions(dnsrec, &namelist, buf); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_write_rr(dnsrec, &namelist, ARES_SECTION_ANSWER, buf); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_write_rr(dnsrec, &namelist, ARES_SECTION_AUTHORITY, buf); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_write_rr(dnsrec, &namelist, ARES_SECTION_ADDITIONAL, buf); if (status != ARES_SUCCESS) { goto done; } done: ares__llist_destroy(namelist); if (status != ARES_SUCCESS) { ares__buf_set_length(buf, orig_len); } return status; } ares_status_t ares_dns_write_buf_tcp(const ares_dns_record_t *dnsrec, ares__buf_t *buf) { ares_status_t status; size_t orig_len; size_t msg_len; size_t len; if (dnsrec == NULL || buf == NULL) { return ARES_EFORMERR; } orig_len = ares__buf_len(buf); /* Write placeholder for length */ status = ares__buf_append_be16(buf, 0); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } /* Write message */ status = ares_dns_write_buf(dnsrec, buf); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } len = ares__buf_len(buf); msg_len = len - orig_len - 2; if (msg_len > 65535) { status = ARES_EBADQUERY; goto done; } /* Now we need to overwrite the length, so we jump back to the original * message length, overwrite the section and jump back */ ares__buf_set_length(buf, orig_len); status = ares__buf_append_be16(buf, (unsigned short)(msg_len & 0xFFFF)); if (status != ARES_SUCCESS) { goto done; /* 
LCOV_EXCL_LINE: UntestablePath */ } ares__buf_set_length(buf, len); done: if (status != ARES_SUCCESS) { ares__buf_set_length(buf, orig_len); } return status; } ares_status_t ares_dns_write(const ares_dns_record_t *dnsrec, unsigned char **buf, size_t *buf_len) { ares__buf_t *b = NULL; ares_status_t status; if (buf == NULL || buf_len == NULL || dnsrec == NULL) { return ARES_EFORMERR; } *buf = NULL; *buf_len = 0; b = ares__buf_create(); if (b == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares_dns_write_buf(dnsrec, b); if (status != ARES_SUCCESS) { ares__buf_destroy(b); return status; } *buf = ares__buf_finish_bin(b, buf_len); return status; } void ares_dns_record_write_ttl_decrement(ares_dns_record_t *dnsrec, unsigned int ttl_decrement) { if (dnsrec == NULL) { return; } dnsrec->ttl_decrement = ttl_decrement; } gevent-24.11.1/deps/c-ares/src/lib/str/000077500000000000000000000000001471441230600174245ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/str/ares__buf.c000066400000000000000000000767751471441230600215430ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares__buf.h" #include #ifdef HAVE_STDINT_H # include #endif struct ares__buf { const unsigned char *data; /*!< pointer to start of data buffer */ size_t data_len; /*!< total size of data in buffer */ unsigned char *alloc_buf; /*!< Pointer to allocated data buffer, * not used for const buffers */ size_t alloc_buf_len; /*!< Size of allocated data buffer */ size_t offset; /*!< Current working offset in buffer */ size_t tag_offset; /*!< Tagged offset in buffer. Uses * SIZE_MAX if not set. 
*/ }; ares__buf_t *ares__buf_create(void) { ares__buf_t *buf = ares_malloc_zero(sizeof(*buf)); if (buf == NULL) { return NULL; } buf->tag_offset = SIZE_MAX; return buf; } ares__buf_t *ares__buf_create_const(const unsigned char *data, size_t data_len) { ares__buf_t *buf; if (data == NULL || data_len == 0) { return NULL; } buf = ares__buf_create(); if (buf == NULL) { return NULL; } buf->data = data; buf->data_len = data_len; return buf; } void ares__buf_destroy(ares__buf_t *buf) { if (buf == NULL) { return; } ares_free(buf->alloc_buf); ares_free(buf); } static ares_bool_t ares__buf_is_const(const ares__buf_t *buf) { if (buf == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (buf->data != NULL && buf->alloc_buf == NULL) { return ARES_TRUE; } return ARES_FALSE; } void ares__buf_reclaim(ares__buf_t *buf) { size_t prefix_size; size_t data_size; if (buf == NULL) { return; } if (ares__buf_is_const(buf)) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Silence coverity. All lengths are zero so would bail out later but * coverity doesn't know this */ if (buf->alloc_buf == NULL) { return; } if (buf->tag_offset != SIZE_MAX && buf->tag_offset < buf->offset) { prefix_size = buf->tag_offset; } else { prefix_size = buf->offset; } if (prefix_size == 0) { return; } data_size = buf->data_len - prefix_size; memmove(buf->alloc_buf, buf->alloc_buf + prefix_size, data_size); buf->data = buf->alloc_buf; buf->data_len = data_size; buf->offset -= prefix_size; if (buf->tag_offset != SIZE_MAX) { buf->tag_offset -= prefix_size; } } static ares_status_t ares__buf_ensure_space(ares__buf_t *buf, size_t needed_size) { size_t remaining_size; size_t alloc_size; unsigned char *ptr; if (buf == NULL) { return ARES_EFORMERR; } if (ares__buf_is_const(buf)) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* When calling ares__buf_finish_str() we end up adding a null terminator, * so we want to ensure the size is always sufficient for this as we don't * want an ARES_ENOMEM at that point */ needed_size++; /* No need to do an expensive move operation, we have enough to just append */ remaining_size = buf->alloc_buf_len - buf->data_len; if (remaining_size >= needed_size) { return ARES_SUCCESS; } /* See if just moving consumed data frees up enough space */ ares__buf_reclaim(buf); remaining_size = buf->alloc_buf_len - buf->data_len; if (remaining_size >= needed_size) { return ARES_SUCCESS; } alloc_size = buf->alloc_buf_len; /* Not yet started */ if (alloc_size == 0) { alloc_size = 16; /* Always shifts 1, so ends up being 32 minimum */ } /* Increase allocation by powers of 2 */ do { alloc_size <<= 1; remaining_size = alloc_size - buf->data_len; } while (remaining_size < needed_size); ptr = ares_realloc(buf->alloc_buf, alloc_size); if (ptr == NULL) { return ARES_ENOMEM; } buf->alloc_buf = ptr; buf->alloc_buf_len = alloc_size; buf->data = ptr; return ARES_SUCCESS; } ares_status_t ares__buf_set_length(ares__buf_t *buf, size_t len) { if (buf == NULL || ares__buf_is_const(buf)) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (len >= buf->alloc_buf_len - buf->offset) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } buf->data_len = len + buf->offset; return ARES_SUCCESS; } ares_status_t ares__buf_append(ares__buf_t *buf, const unsigned char *data, size_t data_len) { ares_status_t status; if (data == NULL && data_len != 0) { return ARES_EFORMERR; } if (data_len == 0) { return ARES_SUCCESS; } status = ares__buf_ensure_space(buf, data_len); if (status != ARES_SUCCESS) { 
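/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * ares__buf_ensure_space() above grows the backing allocation in powers of two
 * (32 bytes minimum) and always reserves one extra byte so that
 * ares__buf_finish_str() can NUL-terminate without reallocating. A minimal
 * build-a-string round trip with the public helpers from ares__buf.h could
 * look like this (variable names are illustrative only):
 *
 *   ares__buf_t *b = ares__buf_create();
 *   char        *out;
 *   size_t       out_len;
 *   if (b != NULL &&
 *       ares__buf_append_str(b, "host ")       == ARES_SUCCESS &&
 *       ares__buf_append_str(b, "example.com") == ARES_SUCCESS) {
 *     out = ares__buf_finish_str(b, &out_len); // frees b, returns "host example.com"
 *     ares_free(out);                          // caller owns the returned string
 *   } else {
 *     ares__buf_destroy(b);                    // destroy(NULL) is a no-op
 *   }
 */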
return status; } memcpy(buf->alloc_buf + buf->data_len, data, data_len); buf->data_len += data_len; return ARES_SUCCESS; } ares_status_t ares__buf_append_byte(ares__buf_t *buf, unsigned char b) { return ares__buf_append(buf, &b, 1); } ares_status_t ares__buf_append_be16(ares__buf_t *buf, unsigned short u16) { ares_status_t status; status = ares__buf_append_byte(buf, (unsigned char)((u16 >> 8) & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, (unsigned char)(u16 & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; } ares_status_t ares__buf_append_be32(ares__buf_t *buf, unsigned int u32) { ares_status_t status; status = ares__buf_append_byte(buf, ((unsigned char)(u32 >> 24) & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, ((unsigned char)(u32 >> 16) & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, ((unsigned char)(u32 >> 8) & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, ((unsigned char)u32 & 0xff)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; } unsigned char *ares__buf_append_start(ares__buf_t *buf, size_t *len) { ares_status_t status; if (len == NULL || *len == 0) { return NULL; } status = ares__buf_ensure_space(buf, *len); if (status != ARES_SUCCESS) { return NULL; } /* -1 for possible null terminator for ares__buf_finish_str() */ *len = buf->alloc_buf_len - buf->data_len - 1; return buf->alloc_buf + buf->data_len; } void ares__buf_append_finish(ares__buf_t *buf, size_t len) { if (buf == NULL) { return; } buf->data_len += len; } unsigned char *ares__buf_finish_bin(ares__buf_t *buf, size_t *len) { unsigned char *ptr = NULL; if (buf == NULL || len == NULL || ares__buf_is_const(buf)) { return NULL; } ares__buf_reclaim(buf); /* We don't want to return NULL except on failure, may be zero-length */ if (buf->alloc_buf == NULL && ares__buf_ensure_space(buf, 1) != ARES_SUCCESS) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } ptr = buf->alloc_buf; *len = buf->data_len; ares_free(buf); return ptr; } char *ares__buf_finish_str(ares__buf_t *buf, size_t *len) { char *ptr; size_t mylen; ptr = (char *)ares__buf_finish_bin(buf, &mylen); if (ptr == NULL) { return NULL; } if (len != NULL) { *len = mylen; } /* NOTE: ensured via ares__buf_ensure_space() that there is always at least * 1 extra byte available for this specific use-case */ ptr[mylen] = 0; return ptr; } void ares__buf_tag(ares__buf_t *buf) { if (buf == NULL) { return; } buf->tag_offset = buf->offset; } ares_status_t ares__buf_tag_rollback(ares__buf_t *buf) { if (buf == NULL || buf->tag_offset == SIZE_MAX) { return ARES_EFORMERR; } buf->offset = buf->tag_offset; buf->tag_offset = SIZE_MAX; return ARES_SUCCESS; } ares_status_t ares__buf_tag_clear(ares__buf_t *buf) { if (buf == NULL || buf->tag_offset == SIZE_MAX) { return ARES_EFORMERR; } buf->tag_offset = SIZE_MAX; return ARES_SUCCESS; } const unsigned char *ares__buf_tag_fetch(const ares__buf_t *buf, size_t *len) { if (buf == NULL || buf->tag_offset == SIZE_MAX || len == NULL) { return NULL; } *len = buf->offset - buf->tag_offset; return buf->data + buf->tag_offset; } size_t ares__buf_tag_length(const ares__buf_t *buf) { if (buf == NULL || buf->tag_offset == SIZE_MAX) { 
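/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * The tag helpers implemented around this point exist so a parser can
 * speculatively consume data and rewind when a complete message has not yet
 * arrived, e.g. a length-prefixed TCP DNS frame. A hedged sketch of that
 * pattern (try_read_frame is a hypothetical name, and ARES_EBADRESP is used
 * here merely as a "try again later" signal):
 *
 *   static ares_status_t try_read_frame(ares__buf_t *b)
 *   {
 *     unsigned short frame_len;
 *     ares_status_t  status;
 *     ares__buf_tag(b);                           // remember where we started
 *     status = ares__buf_fetch_be16(b, &frame_len);
 *     if (status != ARES_SUCCESS || ares__buf_len(b) < (size_t)frame_len) {
 *       ares__buf_tag_rollback(b);                // not enough data yet, rewind
 *       return ARES_EBADRESP;
 *     }
 *     status = ares__buf_consume(b, frame_len);   // a real parser would decode here
 *     ares__buf_tag_clear(b);                     // allow the space to be reclaimed
 *     return status;
 *   }
 */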
return 0; } return buf->offset - buf->tag_offset; } ares_status_t ares__buf_tag_fetch_bytes(const ares__buf_t *buf, unsigned char *bytes, size_t *len) { size_t ptr_len = 0; const unsigned char *ptr = ares__buf_tag_fetch(buf, &ptr_len); if (ptr == NULL || bytes == NULL || len == NULL) { return ARES_EFORMERR; } if (*len < ptr_len) { return ARES_EFORMERR; } *len = ptr_len; if (ptr_len > 0) { memcpy(bytes, ptr, ptr_len); } return ARES_SUCCESS; } ares_status_t ares__buf_tag_fetch_string(const ares__buf_t *buf, char *str, size_t len) { size_t out_len; ares_status_t status; size_t i; if (str == NULL || len == 0) { return ARES_EFORMERR; } /* Space for NULL terminator */ out_len = len - 1; status = ares__buf_tag_fetch_bytes(buf, (unsigned char *)str, &out_len); if (status != ARES_SUCCESS) { return status; } /* NULL terminate */ str[out_len] = 0; /* Validate string is printable */ for (i = 0; i < out_len; i++) { if (!ares__isprint(str[i])) { return ARES_EBADSTR; } } return ARES_SUCCESS; } static const unsigned char *ares__buf_fetch(const ares__buf_t *buf, size_t *len) { if (len != NULL) { *len = 0; } if (buf == NULL || len == NULL || buf->data == NULL) { return NULL; } *len = buf->data_len - buf->offset; if (*len == 0) { return NULL; } return buf->data + buf->offset; } ares_status_t ares__buf_consume(ares__buf_t *buf, size_t len) { size_t remaining_len = ares__buf_len(buf); if (remaining_len < len) { return ARES_EBADRESP; } buf->offset += len; return ARES_SUCCESS; } ares_status_t ares__buf_fetch_be16(ares__buf_t *buf, unsigned short *u16) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); unsigned int u32; if (buf == NULL || u16 == NULL || remaining_len < sizeof(*u16)) { return ARES_EBADRESP; } /* Do math in an unsigned int in order to prevent warnings due to automatic * conversion by the compiler from short to int during shifts */ u32 = ((unsigned int)(ptr[0]) << 8 | (unsigned int)ptr[1]); *u16 = (unsigned short)(u32 & 0xFFFF); return ares__buf_consume(buf, sizeof(*u16)); } ares_status_t ares__buf_fetch_be32(ares__buf_t *buf, unsigned int *u32) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); if (buf == NULL || u32 == NULL || remaining_len < sizeof(*u32)) { return ARES_EBADRESP; } *u32 = ((unsigned int)(ptr[0]) << 24 | (unsigned int)(ptr[1]) << 16 | (unsigned int)(ptr[2]) << 8 | (unsigned int)(ptr[3])); return ares__buf_consume(buf, sizeof(*u32)); } ares_status_t ares__buf_fetch_bytes(ares__buf_t *buf, unsigned char *bytes, size_t len) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); if (buf == NULL || bytes == NULL || len == 0 || remaining_len < len) { return ARES_EBADRESP; } memcpy(bytes, ptr, len); return ares__buf_consume(buf, len); } ares_status_t ares__buf_fetch_bytes_dup(ares__buf_t *buf, size_t len, ares_bool_t null_term, unsigned char **bytes) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); if (buf == NULL || bytes == NULL || len == 0 || remaining_len < len) { return ARES_EBADRESP; } *bytes = ares_malloc(null_term ? 
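/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * The big-endian fetch helpers above are the building blocks used elsewhere in
 * c-ares to pull the fixed counts out of a wire-format DNS header. As an
 * illustration only (the real header parsing lives in the dns record code, and
 * peek_counts is a hypothetical name), reading the question/answer counts from
 * a raw message could look like:
 *
 *   static ares_status_t peek_counts(const unsigned char *msg, size_t msg_len,
 *                                    unsigned short *qdcount, unsigned short *ancount)
 *   {
 *     ares__buf_t  *b = ares__buf_create_const(msg, msg_len);
 *     ares_status_t status;
 *     if (b == NULL) {
 *       return ARES_ENOMEM;
 *     }
 *     status = ares__buf_consume(b, 4);            // skip ID and flags
 *     if (status == ARES_SUCCESS) {
 *       status = ares__buf_fetch_be16(b, qdcount); // QDCOUNT
 *     }
 *     if (status == ARES_SUCCESS) {
 *       status = ares__buf_fetch_be16(b, ancount); // ANCOUNT
 *     }
 *     ares__buf_destroy(b);
 *     return status;
 *   }
 */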
len + 1 : len); if (*bytes == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memcpy(*bytes, ptr, len); if (null_term) { (*bytes)[len] = 0; } return ares__buf_consume(buf, len); } ares_status_t ares__buf_fetch_str_dup(ares__buf_t *buf, size_t len, char **str) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); if (buf == NULL || str == NULL || len == 0 || remaining_len < len) { return ARES_EBADRESP; } *str = ares_malloc(len + 1); if (*str == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } memcpy(*str, ptr, len); (*str)[len] = 0; return ares__buf_consume(buf, len); } ares_status_t ares__buf_fetch_bytes_into_buf(ares__buf_t *buf, ares__buf_t *dest, size_t len) { size_t remaining_len; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); ares_status_t status; if (buf == NULL || dest == NULL || len == 0 || remaining_len < len) { return ARES_EBADRESP; } status = ares__buf_append(dest, ptr, len); if (status != ARES_SUCCESS) { return status; } return ares__buf_consume(buf, len); } static ares_bool_t ares__is_whitespace(unsigned char c, ares_bool_t include_linefeed) { switch (c) { case '\r': case '\t': case ' ': case '\v': case '\f': return ARES_TRUE; case '\n': return include_linefeed; default: break; } return ARES_FALSE; } size_t ares__buf_consume_whitespace(ares__buf_t *buf, ares_bool_t include_linefeed) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); size_t i; if (ptr == NULL) { return 0; } for (i = 0; i < remaining_len; i++) { if (!ares__is_whitespace(ptr[i], include_linefeed)) { break; } } if (i > 0) { ares__buf_consume(buf, i); } return i; } size_t ares__buf_consume_nonwhitespace(ares__buf_t *buf) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); size_t i; if (ptr == NULL) { return 0; } for (i = 0; i < remaining_len; i++) { if (ares__is_whitespace(ptr[i], ARES_TRUE)) { break; } } if (i > 0) { ares__buf_consume(buf, i); } return i; } size_t ares__buf_consume_line(ares__buf_t *buf, ares_bool_t include_linefeed) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); size_t i; if (ptr == NULL) { return 0; } for (i = 0; i < remaining_len; i++) { if (ptr[i] == '\n') { goto done; } } done: if (include_linefeed && i < remaining_len && ptr[i] == '\n') { i++; } if (i > 0) { ares__buf_consume(buf, i); } return i; } size_t ares__buf_consume_until_charset(ares__buf_t *buf, const unsigned char *charset, size_t len, ares_bool_t require_charset) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); size_t i; ares_bool_t found = ARES_FALSE; if (ptr == NULL || charset == NULL || len == 0) { return 0; } for (i = 0; i < remaining_len; i++) { size_t j; for (j = 0; j < len; j++) { if (ptr[i] == charset[j]) { found = ARES_TRUE; goto done; } } } done: if (require_charset && !found) { return 0; } if (i > 0) { ares__buf_consume(buf, i); } return i; } size_t ares__buf_consume_charset(ares__buf_t *buf, const unsigned char *charset, size_t len) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); size_t i; if (ptr == NULL || charset == NULL || len == 0) { return 0; } for (i = 0; i < remaining_len; i++) { size_t j; for (j = 0; j < len; j++) { if (ptr[i] == charset[j]) { break; } } /* Not found */ if (j == len) { break; } } if (i > 0) { ares__buf_consume(buf, i); } return i; } static void ares__buf_destroy_cb(void *arg) { 
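/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * The consume_* helpers defined nearby are aimed at text/config parsing
 * (resolv.conf style input). Combined with the tag helpers they give a simple
 * whitespace tokenizer; a hedged sketch, assuming the caller has already
 * created "b" from the line to split:
 *
 *   char token[64];
 *   while (ares__buf_len(b) > 0) {
 *     ares__buf_consume_whitespace(b, ARES_TRUE);  // skip leading blanks/newlines
 *     ares__buf_tag(b);                            // mark token start
 *     if (ares__buf_consume_nonwhitespace(b) == 0) {
 *       break;                                     // nothing but whitespace left
 *     }
 *     if (ares__buf_tag_fetch_string(b, token, sizeof(token)) == ARES_SUCCESS) {
 *       // "token" now holds one printable, NUL-terminated word
 *     }
 *   }
 */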
ares__buf_destroy(arg); } static ares_bool_t ares__buf_split_isduplicate(ares__llist_t *list, const unsigned char *val, size_t len, ares__buf_split_t flags) { ares__llist_node_t *node; for (node = ares__llist_node_first(list); node != NULL; node = ares__llist_node_next(node)) { const ares__buf_t *buf = ares__llist_node_val(node); size_t plen = 0; const unsigned char *ptr = ares__buf_peek(buf, &plen); /* Can't be duplicate if lengths mismatch */ if (plen != len) { continue; } if (flags & ARES_BUF_SPLIT_CASE_INSENSITIVE) { if (ares__memeq_ci(ptr, val, len)) { return ARES_TRUE; } } else { if (memcmp(ptr, val, len) == 0) { return ARES_TRUE; } } } return ARES_FALSE; } ares_status_t ares__buf_split(ares__buf_t *buf, const unsigned char *delims, size_t delims_len, ares__buf_split_t flags, size_t max_sections, ares__llist_t **list) { ares_status_t status = ARES_SUCCESS; ares_bool_t first = ARES_TRUE; if (buf == NULL || delims == NULL || delims_len == 0 || list == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } *list = ares__llist_create(ares__buf_destroy_cb); if (*list == NULL) { status = ARES_ENOMEM; goto done; } while (ares__buf_len(buf)) { size_t len = 0; const unsigned char *ptr; if (first) { /* No delimiter yet, just tag the start */ ares__buf_tag(buf); } else { if (flags & ARES_BUF_SPLIT_DONT_CONSUME_DELIMS) { /* tag then eat delimiter so its first byte in buffer */ ares__buf_tag(buf); ares__buf_consume(buf, 1); } else { /* throw away delimiter */ ares__buf_consume(buf, 1); ares__buf_tag(buf); } } if (max_sections && ares__llist_len(*list) >= max_sections - 1) { ares__buf_consume(buf, ares__buf_len(buf)); } else { ares__buf_consume_until_charset(buf, delims, delims_len, ARES_FALSE); } ptr = ares__buf_tag_fetch(buf, &len); /* Shouldn't be possible */ if (ptr == NULL) { status = ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; } if (flags & ARES_BUF_SPLIT_LTRIM) { size_t i; for (i = 0; i < len; i++) { if (!ares__is_whitespace(ptr[i], ARES_TRUE)) { break; } } ptr += i; len -= i; } if (flags & ARES_BUF_SPLIT_RTRIM) { while (len > 0 && ares__is_whitespace(ptr[len - 1], ARES_TRUE)) { len--; } } if (len != 0 || flags & ARES_BUF_SPLIT_ALLOW_BLANK) { ares__buf_t *data; if (!(flags & ARES_BUF_SPLIT_NO_DUPLICATES) || !ares__buf_split_isduplicate(*list, ptr, len, flags)) { /* Since we don't allow const buffers of 0 length, and user wants * 0-length buffers, swap what we do here */ if (len) { data = ares__buf_create_const(ptr, len); } else { data = ares__buf_create(); } if (data == NULL) { status = ARES_ENOMEM; goto done; } if (ares__llist_insert_last(*list, data) == NULL) { ares__buf_destroy(data); status = ARES_ENOMEM; goto done; } } } first = ARES_FALSE; } done: if (status != ARES_SUCCESS) { ares__llist_destroy(*list); *list = NULL; } return status; } ares_bool_t ares__buf_begins_with(const ares__buf_t *buf, const unsigned char *data, size_t data_len) { size_t remaining_len = 0; const unsigned char *ptr = ares__buf_fetch(buf, &remaining_len); if (ptr == NULL || data == NULL || data_len == 0) { return ARES_FALSE; } if (data_len > remaining_len) { return ARES_FALSE; } if (memcmp(ptr, data, data_len) != 0) { return ARES_FALSE; } return ARES_TRUE; } size_t ares__buf_len(const ares__buf_t *buf) { if (buf == NULL) { return 0; } return buf->data_len - buf->offset; } const unsigned char *ares__buf_peek(const ares__buf_t *buf, size_t *len) { return ares__buf_fetch(buf, len); } size_t ares__buf_get_position(const ares__buf_t *buf) { if (buf == NULL) { return 0; } return 
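/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * ares__buf_split() above returns a linked list of const sub-buffers that point
 * back into the parsed buffer, which is how c-ares breaks apart things like
 * comma-separated option strings. A hedged usage sketch with a small literal
 * input ("a, b,,a" is just example data):
 *
 *   static const unsigned char comma[] = { ',' };
 *   ares__buf_t   *b    = ares__buf_create_const((const unsigned char *)"a, b,,a", 7);
 *   ares__llist_t *list = NULL;
 *   if (b != NULL &&
 *       ares__buf_split(b, comma, sizeof(comma),
 *                       ARES_BUF_SPLIT_TRIM | ARES_BUF_SPLIT_NO_DUPLICATES,
 *                       0, &list) == ARES_SUCCESS) {
 *     ares__llist_node_t *node;
 *     for (node = ares__llist_node_first(list); node != NULL;
 *          node = ares__llist_node_next(node)) {
 *       size_t               len  = 0;
 *       const ares__buf_t   *part = ares__llist_node_val(node);
 *       const unsigned char *ptr  = ares__buf_peek(part, &len);
 *       (void)ptr; (void)len;      // "a" and "b" survive; blanks and duplicates are dropped
 *     }
 *     ares__llist_destroy(list);   // also destroys the contained sub-buffers
 *   }
 *   ares__buf_destroy(b);
 */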
buf->offset; } ares_status_t ares__buf_set_position(ares__buf_t *buf, size_t idx) { if (buf == NULL) { return ARES_EFORMERR; } if (idx > buf->data_len) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } buf->offset = idx; return ARES_SUCCESS; } ares_status_t ares__buf_parse_dns_abinstr(ares__buf_t *buf, size_t remaining_len, ares__dns_multistring_t **strs, ares_bool_t validate_printable) { unsigned char len; ares_status_t status = ARES_EBADRESP; size_t orig_len = ares__buf_len(buf); if (buf == NULL) { return ARES_EFORMERR; } if (remaining_len == 0) { return ARES_EBADRESP; } if (strs != NULL) { *strs = ares__dns_multistring_create(); if (*strs == NULL) { return ARES_ENOMEM; } } while (orig_len - ares__buf_len(buf) < remaining_len) { status = ares__buf_fetch_bytes(buf, &len, 1); if (status != ARES_SUCCESS) { break; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (len) { /* When used by the _str() parser, it really needs to be validated to * be a valid printable ascii string. Do that here */ if (validate_printable && ares__buf_len(buf) >= len) { size_t mylen; const char *data = (const char *)ares__buf_peek(buf, &mylen); if (!ares__str_isprint(data, len)) { status = ARES_EBADSTR; break; } } if (strs != NULL) { unsigned char *data = NULL; status = ares__buf_fetch_bytes_dup(buf, len, ARES_TRUE, &data); if (status != ARES_SUCCESS) { break; } status = ares__dns_multistring_add_own(*strs, data, len); if (status != ARES_SUCCESS) { ares_free(data); break; } } else { status = ares__buf_consume(buf, len); if (status != ARES_SUCCESS) { break; } } } } if (status != ARES_SUCCESS && strs != NULL) { ares__dns_multistring_destroy(*strs); *strs = NULL; } return status; } static ares_status_t ares__buf_parse_dns_binstr_int(ares__buf_t *buf, size_t remaining_len, unsigned char **bin, size_t *bin_len, ares_bool_t validate_printable) { unsigned char len; ares_status_t status = ARES_EBADRESP; ares__buf_t *binbuf = NULL; if (buf == NULL) { return ARES_EFORMERR; } if (remaining_len == 0) { return ARES_EBADRESP; } binbuf = ares__buf_create(); if (binbuf == NULL) { return ARES_ENOMEM; } status = ares__buf_fetch_bytes(buf, &len, 1); if (status != ARES_SUCCESS) { goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } remaining_len--; if (len > remaining_len) { status = ARES_EBADRESP; goto done; } if (len) { /* When used by the _str() parser, it really needs to be validated to * be a valid printable ascii string. Do that here */ if (validate_printable && ares__buf_len(buf) >= len) { size_t mylen; const char *data = (const char *)ares__buf_peek(buf, &mylen); if (!ares__str_isprint(data, len)) { status = ARES_EBADSTR; goto done; } } if (bin != NULL) { status = ares__buf_fetch_bytes_into_buf(buf, binbuf, len); } else { status = ares__buf_consume(buf, len); } } done: if (status != ARES_SUCCESS) { ares__buf_destroy(binbuf); } else { if (bin != NULL) { size_t mylen = 0; /* NOTE: we use ares__buf_finish_str() here as we guarantee NULL * Termination even though we are technically returning binary data. 
*/ *bin = (unsigned char *)ares__buf_finish_str(binbuf, &mylen); *bin_len = mylen; } } return status; } ares_status_t ares__buf_parse_dns_binstr(ares__buf_t *buf, size_t remaining_len, unsigned char **bin, size_t *bin_len) { return ares__buf_parse_dns_binstr_int(buf, remaining_len, bin, bin_len, ARES_FALSE); } ares_status_t ares__buf_parse_dns_str(ares__buf_t *buf, size_t remaining_len, char **str) { size_t len; return ares__buf_parse_dns_binstr_int(buf, remaining_len, (unsigned char **)str, &len, ARES_TRUE); } ares_status_t ares__buf_append_num_dec(ares__buf_t *buf, size_t num, size_t len) { size_t i; size_t mod; if (len == 0) { len = ares__count_digits(num); } mod = ares__pow(10, len); for (i = len; i > 0; i--) { size_t digit = (num % mod); ares_status_t status; mod /= 10; /* Silence coverity. Shouldn't be possible since we calculate it above */ if (mod == 0) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } digit /= mod; status = ares__buf_append_byte(buf, '0' + (unsigned char)(digit & 0xFF)); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; } ares_status_t ares__buf_append_num_hex(ares__buf_t *buf, size_t num, size_t len) { size_t i; static const unsigned char hexbytes[] = "0123456789ABCDEF"; if (len == 0) { len = ares__count_hexdigits(num); } for (i = len; i > 0; i--) { ares_status_t status; status = ares__buf_append_byte(buf, hexbytes[(num >> ((i - 1) * 4)) & 0xF]); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; } ares_status_t ares__buf_append_str(ares__buf_t *buf, const char *str) { return ares__buf_append(buf, (const unsigned char *)str, ares_strlen(str)); } static ares_status_t ares__buf_hexdump_line(ares__buf_t *buf, size_t idx, const unsigned char *data, size_t len) { size_t i; ares_status_t status; /* Address */ status = ares__buf_append_num_hex(buf, idx, 6); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } /* | */ status = ares__buf_append_str(buf, " | "); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < 16; i++) { if (i >= len) { status = ares__buf_append_str(buf, " "); } else { status = ares__buf_append_num_hex(buf, data[i], 2); } if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__buf_append_byte(buf, ' '); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } /* | */ status = ares__buf_append_str(buf, " | "); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < 16; i++) { if (i >= len) { break; } status = ares__buf_append_byte(buf, ares__isprint(data[i]) ? 
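/* [Editorial note -- illustrative sketch, not part of the upstream c-ares sources.]
 * The surrounding ares__buf_hexdump_line() emits the classic
 * "offset | hex bytes | ASCII" layout, 16 bytes per row, with non-printable
 * bytes shown as '.'. Turning an arbitrary byte array into such a dump only
 * needs the public helpers declared in ares__buf.h (the raw[] contents below
 * are example data):
 *
 *   const unsigned char raw[] = { 0xde, 0xad, 0xbe, 0xef, 'O', 'K' };
 *   ares__buf_t        *b     = ares__buf_create();
 *   if (b != NULL && ares__buf_hexdump(b, raw, sizeof(raw)) == ARES_SUCCESS) {
 *     char *dump = ares__buf_finish_str(b, NULL); // one "000000 | de ad ..." row
 *     ares_free(dump);                            // caller owns the string
 *   } else {
 *     ares__buf_destroy(b);
 *   }
 */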
data[i] : '.'); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ares__buf_append_byte(buf, '\n'); } ares_status_t ares__buf_hexdump(ares__buf_t *buf, const unsigned char *data, size_t len) { size_t i; /* Each line is 16 bytes */ for (i = 0; i < len; i += 16) { ares_status_t status; status = ares__buf_hexdump_line(buf, i, data + i, len - i); if (status != ARES_SUCCESS) { return status; /* LCOV_EXCL_LINE: OutOfMemory */ } } return ARES_SUCCESS; } ares_status_t ares__buf_load_file(const char *filename, ares__buf_t *buf) { FILE *fp = NULL; unsigned char *ptr = NULL; size_t len = 0; size_t ptr_len = 0; long ftell_len = 0; ares_status_t status; if (filename == NULL || buf == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } fp = fopen(filename, "rb"); if (fp == NULL) { int error = ERRNO; switch (error) { case ENOENT: case ESRCH: status = ARES_ENOTFOUND; goto done; default: DEBUGF(fprintf(stderr, "fopen() failed with error: %d %s\n", error, strerror(error))); DEBUGF(fprintf(stderr, "Error opening file: %s\n", filename)); status = ARES_EFILE; goto done; } } /* Get length portably, fstat() is POSIX, not C */ if (fseek(fp, 0, SEEK_END) != 0) { status = ARES_EFILE; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } ftell_len = ftell(fp); if (ftell_len < 0) { status = ARES_EFILE; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } len = (size_t)ftell_len; if (fseek(fp, 0, SEEK_SET) != 0) { status = ARES_EFILE; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } if (len == 0) { status = ARES_SUCCESS; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Read entire data into buffer */ ptr_len = len; ptr = ares__buf_append_start(buf, &ptr_len); if (ptr == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } ptr_len = fread(ptr, 1, len, fp); if (ptr_len != len) { status = ARES_EFILE; /* LCOV_EXCL_LINE: DefensiveCoding */ goto done; /* LCOV_EXCL_LINE: DefensiveCoding */ } ares__buf_append_finish(buf, len); status = ARES_SUCCESS; done: if (fp != NULL) { fclose(fp); } return status; } gevent-24.11.1/deps/c-ares/src/lib/str/ares__buf.h000066400000000000000000000634361471441230600215360ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #ifndef __ARES__BUF_H #define __ARES__BUF_H /*! \addtogroup ares__buf Safe Data Builder and buffer * * This is a buffer building and parsing framework with a focus on security over * performance. All data to be read from the buffer will perform explicit length * validation and return a success/fail result. There are also various helpers * for writing data to the buffer which dynamically grows. * * All operations that fetch or consume data from the buffer will move forward * the internal pointer, thus marking the data as processed which may no longer * be accessible after certain operations (such as append). * * The helpers for this object are meant to be added as needed. If you can't * find it, write it! * * @{ */ struct ares__buf; /*! Opaque data type for the generic data buffer implementation */ typedef struct ares__buf ares__buf_t; /*! Create a new buffer object that dynamically allocates buffers for data. * * \return initialized buffer object or NULL if out of memory. */ ares__buf_t *ares__buf_create(void); /*! Create a new buffer object that uses a user-provided data pointer. The * data provided will not be manipulated, and cannot be appended to. This * is strictly used for parsing. * * \param[in] data Data to provide to buffer, must not be NULL. * \param[in] data_len Size of buffer provided, must be > 0 * * \return initialized buffer object or NULL if out of memory or misuse. */ ares__buf_t *ares__buf_create_const(const unsigned char *data, size_t data_len); /*! Destroy an initialized buffer object. * * \param[in] buf Initialized buf object */ void ares__buf_destroy(ares__buf_t *buf); /*! Append multiple bytes to a dynamic buffer object * * \param[in] buf Initialized buffer object * \param[in] data Data to copy to buffer object * \param[in] data_len Length of data to copy to buffer object. * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_append(ares__buf_t *buf, const unsigned char *data, size_t data_len); /*! Append a single byte to the dynamic buffer object * * \param[in] buf Initialized buffer object * \param[in] b Single byte to append to buffer object. * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_append_byte(ares__buf_t *buf, unsigned char b); /*! Append a null-terminated string to the dynamic buffer object * * \param[in] buf Initialized buffer object * \param[in] str String to append to buffer object. * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_append_str(ares__buf_t *buf, const char *str); /*! Append a 16bit Big Endian number to the buffer. * * \param[in] buf Initialized buffer object * \param[in] u16 16bit integer * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_append_be16(ares__buf_t *buf, unsigned short u16); /*! Append a 32bit Big Endian number to the buffer. * * \param[in] buf Initialized buffer object * \param[in] u32 32bit integer * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_append_be32(ares__buf_t *buf, unsigned int u32); /*! Append a number in ASCII decimal form. * * \param[in] buf Initialized buffer object * \param[in] num Number to print * \param[in] len Length to output, use 0 for no padding * \return ARES_SUCCESS on success */ ares_status_t ares__buf_append_num_dec(ares__buf_t *buf, size_t num, size_t len); /*! Append a number in ASCII hexadecimal form.
* * \param[in] buf Initialized buffer object * \param[in] num Number to print * \param[in] len Length to output, use 0 for no padding * \return ARES_SUCCESS on success */ ares_status_t ares__buf_append_num_hex(ares__buf_t *buf, size_t num, size_t len); /*! Sets the current buffer length. This *may* be used if there is a need to * override a prior position in the buffer, such as if there is a length * prefix that isn't easily predictable, and you must go back and overwrite * that position. * * Only valid on non-const buffers. Length provided must not exceed current * allocated buffer size, but otherwise there are very few protections on * this function. Use cautiously. * * \param[in] buf Initialized buffer object * \param[in] len Length to set * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_set_length(ares__buf_t *buf, size_t len); /*! Start a dynamic append operation that returns a buffer suitable for * writing. A desired minimum length is passed in, and the actual allocated * buffer size is returned which may be greater than the requested size. * No operation other than ares__buf_append_finish() is allowed on the * buffer after this request. * * \param[in] buf Initialized buffer object * \param[in,out] len Desired non-zero length passed in, actual buffer size * returned. * \return Pointer to writable buffer or NULL on failure (usage, out of mem) */ unsigned char *ares__buf_append_start(ares__buf_t *buf, size_t *len); /*! Finish a dynamic append operation. Called after * ares__buf_append_start() once desired data is written. * * \param[in] buf Initialized buffer object. * \param[in] len Length of data written. May be zero to terminate * operation. Must not be greater than returned from * ares__buf_append_start(). */ void ares__buf_append_finish(ares__buf_t *buf, size_t len); /*! Write the data provided to the buffer in a hexdump format. * * \param[in] buf Initialized buffer object. * \param[in] data Data to hex dump * \param[in] len Length of data to hexdump * \return ARES_SUCCESS on success. */ ares_status_t ares__buf_hexdump(ares__buf_t *buf, const unsigned char *data, size_t len); /*! Clean up ares__buf_t and return allocated pointer to unprocessed data. It * is the responsibility of the caller to ares_free() the returned buffer. * The passed in buf parameter is invalidated by this call. * * \param[in] buf Initialized buffer object. Can not be a "const" buffer. * \param[out] len Length of data returned * \return pointer to unprocessed data (may be zero length) or NULL on error. */ unsigned char *ares__buf_finish_bin(ares__buf_t *buf, size_t *len); /*! Clean up ares__buf_t and return allocated pointer to unprocessed data and * return it as a string (null terminated). It is the responsibility of the * caller to ares_free() the returned buffer. The passed in buf parameter is * invalidated by this call. * * This function in no way validates the data in this buffer is actually * a string, that characters are printable, or that there aren't multiple * NULL terminators. It is assumed that the caller will either validate that * themselves or has built this buffer with only a valid character set. * * \param[in] buf Initialized buffer object. Can not be a "const" buffer. * \param[out] len Optional. Length of data returned, or NULL if not needed. * \return pointer to unprocessed data or NULL on error. */ char *ares__buf_finish_str(ares__buf_t *buf, size_t *len); /*! 
Tag a position to save in the buffer in case parsing needs to rollback, * such as if insufficient data is available, but more data may be added in * the future. Only a single tag can be set per buffer object. Setting a * tag will override any pre-existing tag. * * \param[in] buf Initialized buffer object */ void ares__buf_tag(ares__buf_t *buf); /*! Rollback to a tagged position. Will automatically clear the tag. * * \param[in] buf Initialized buffer object * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_tag_rollback(ares__buf_t *buf); /*! Clear the tagged position without rolling back. You should do this any * time a tag is no longer needed as future append operations can reclaim * buffer space. * * \param[in] buf Initialized buffer object * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_tag_clear(ares__buf_t *buf); /*! Fetch the buffer and length of data starting from the tagged position up * to the _current_ position. It will not unset the tagged position. The * data may be invalidated by any future ares__buf_*() calls. * * \param[in] buf Initialized buffer object * \param[out] len Length between tag and current offset in buffer * \return NULL on failure (such as no tag), otherwise pointer to start of * buffer */ const unsigned char *ares__buf_tag_fetch(const ares__buf_t *buf, size_t *len); /*! Get the length of the current tag offset to the current position. * * \param[in] buf Initialized buffer object * \return length */ size_t ares__buf_tag_length(const ares__buf_t *buf); /*! Fetch the bytes starting from the tagged position up to the _current_ * position using the provided buffer. It will not unset the tagged position. * * \param[in] buf Initialized buffer object * \param[in,out] bytes Buffer to hold data * \param[in,out] len On input, buffer size, on output, bytes placed in * buffer. * \return ARES_SUCCESS if fetched, ARES_EFORMERR if insufficient buffer size */ ares_status_t ares__buf_tag_fetch_bytes(const ares__buf_t *buf, unsigned char *bytes, size_t *len); /*! Fetch the bytes starting from the tagged position up to the _current_ * position as a NULL-terminated string using the provided buffer. The data * is validated to be ASCII-printable data. It will not unset the tagged * position. * * \param[in] buf Initialized buffer object * \param[in,out] str Buffer to hold data * \param[in] len On input, buffer size, on output, bytes placed in * buffer. * \return ARES_SUCCESS if fetched, ARES_EFORMERR if insufficient buffer size, * ARES_EBADSTR if not printable ASCII */ ares_status_t ares__buf_tag_fetch_string(const ares__buf_t *buf, char *str, size_t len); /*! Consume the given number of bytes without reading them. * * \param[in] buf Initialized buffer object * \param[in] len Length to consume * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_consume(ares__buf_t *buf, size_t len); /*! Fetch a 16bit Big Endian number from the buffer. * * \param[in] buf Initialized buffer object * \param[out] u16 Buffer to hold 16bit integer * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_be16(ares__buf_t *buf, unsigned short *u16); /*! Fetch a 32bit Big Endian number from the buffer. * * \param[in] buf Initialized buffer object * \param[out] u32 Buffer to hold 32bit integer * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_be32(ares__buf_t *buf, unsigned int *u32); /*!
Fetch the requested number of bytes into the provided buffer * * \param[in] buf Initialized buffer object * \param[out] bytes Buffer to hold data * \param[in] len Requested number of bytes (must be > 0) * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_bytes(ares__buf_t *buf, unsigned char *bytes, size_t len); /*! Fetch the requested number of bytes and return a new buffer that must be * ares_free()'d by the caller. * * \param[in] buf Initialized buffer object * \param[in] len Requested number of bytes (must be > 0) * \param[in] null_term Even though this is considered binary data, the user * knows it may be a vald string, so add a null * terminator. * \param[out] bytes Pointer passed by reference. Will be allocated. * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_bytes_dup(ares__buf_t *buf, size_t len, ares_bool_t null_term, unsigned char **bytes); /*! Fetch the requested number of bytes and place them into the provided * dest buffer object. * * \param[in] buf Initialized buffer object * \param[out] dest Buffer object to append bytes. * \param[in] len Requested number of bytes (must be > 0) * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_bytes_into_buf(ares__buf_t *buf, ares__buf_t *dest, size_t len); /*! Fetch the requested number of bytes and return a new buffer that must be * ares_free()'d by the caller. The returned buffer is a null terminated * string. * * \param[in] buf Initialized buffer object * \param[in] len Requested number of bytes (must be > 0) * \param[out] str Pointer passed by reference. Will be allocated. * \return ARES_SUCCESS or one of the c-ares error codes */ ares_status_t ares__buf_fetch_str_dup(ares__buf_t *buf, size_t len, char **str); /*! Consume whitespace characters (0x09, 0x0B, 0x0C, 0x0D, 0x20, and optionally * 0x0A). * * \param[in] buf Initialized buffer object * \param[in] include_linefeed ARES_TRUE to include consuming 0x0A, * ARES_FALSE otherwise. * \return number of whitespace characters consumed */ size_t ares__buf_consume_whitespace(ares__buf_t *buf, ares_bool_t include_linefeed); /*! Consume any non-whitespace character (anything other than 0x09, 0x0B, 0x0C, * 0x0D, 0x20, and 0x0A). * * \param[in] buf Initialized buffer object * \return number of characters consumed */ size_t ares__buf_consume_nonwhitespace(ares__buf_t *buf); /*! Consume until a character in the character set provided is reached. Does * not include the character from the charset at the end. * * \param[in] buf Initialized buffer object * \param[in] charset character set * \param[in] len length of character set * \param[in] require_charset require we find a character from the charset. * if ARES_FALSE it will simply consume the * rest of the buffer. If ARES_TRUE will return * 0 if not found. * \return number of characters consumed */ size_t ares__buf_consume_until_charset(ares__buf_t *buf, const unsigned char *charset, size_t len, ares_bool_t require_charset); /*! Consume while the characters match the characters in the provided set. * * \param[in] buf Initialized buffer object * \param[in] charset character set * \param[in] len length of character set * \return number of characters consumed */ size_t ares__buf_consume_charset(ares__buf_t *buf, const unsigned char *charset, size_t len); /*! Consume from the current position until the end of the line, and optionally * the end of line character (0x0A) itself. 
* * \param[in] buf Initialized buffer object * \param[in] include_linefeed ARES_TRUE to include consuming 0x0A, * ARES_FALSE otherwise. * \return number of characters consumed */ size_t ares__buf_consume_line(ares__buf_t *buf, ares_bool_t include_linefeed); typedef enum { /*! No flags */ ARES_BUF_SPLIT_NONE = 0, /*! The delimiter will be the first character in the buffer, except the * first buffer since the start doesn't have a delimiter. This option is * incompatible with ARES_BUF_SPLIT_LTRIM since the delimiter is always * the first character. */ ARES_BUF_SPLIT_DONT_CONSUME_DELIMS = 1 << 0, /*! Allow blank sections, by default blank sections are not emitted. If using * ARES_BUF_SPLIT_DONT_CONSUME_DELIMS, the delimiter is not counted as part * of the section */ ARES_BUF_SPLIT_ALLOW_BLANK = 1 << 1, /*! Remove duplicate entries */ ARES_BUF_SPLIT_NO_DUPLICATES = 1 << 2, /*! Perform case-insensitive matching when comparing values */ ARES_BUF_SPLIT_CASE_INSENSITIVE = 1 << 3, /*! Trim leading whitespace from buffer */ ARES_BUF_SPLIT_LTRIM = 1 << 4, /*! Trim trailing whitespace from buffer */ ARES_BUF_SPLIT_RTRIM = 1 << 5, /*! Trim leading and trailing whitespace from buffer */ ARES_BUF_SPLIT_TRIM = (ARES_BUF_SPLIT_LTRIM | ARES_BUF_SPLIT_RTRIM) } ares__buf_split_t; /*! Split the provided buffer into multiple sub-buffers stored in the variable * pointed to by the linked list. The sub buffers are const buffers pointing * into the buf provided. * * \param[in] buf Initialized buffer object * \param[in] delims Possible delimiters * \param[in] delims_len Length of possible delimiters * \param[in] flags One more more flags * \param[in] max_sections Maximum number of sections. Use 0 for * unlimited. Useful for splitting key/value * pairs where the delimiter may be a valid * character in the value. A value of 1 would * have little usefulness and would effectively * ignore the delimiter itself. * \param[out] list Result. Depending on flags, this may be a * valid list with no elements. Use * ares__llist_destroy() to free the memory which * will also free the contained ares__buf_t * objects. * \return ARES_SUCCESS on success, or error like ARES_ENOMEM. */ ares_status_t ares__buf_split(ares__buf_t *buf, const unsigned char *delims, size_t delims_len, ares__buf_split_t flags, size_t max_sections, ares__llist_t **list); /*! Check the unprocessed buffer to see if it begins with the sequence of * characters provided. * * \param[in] buf Initialized buffer object * \param[in] data Bytes of data to compare. * \param[in] data_len Length of data to compare. * \return ARES_TRUE on match, ARES_FALSE otherwise. */ ares_bool_t ares__buf_begins_with(const ares__buf_t *buf, const unsigned char *data, size_t data_len); /*! Size of unprocessed remaining data length * * \param[in] buf Initialized buffer object * \return length remaining */ size_t ares__buf_len(const ares__buf_t *buf); /*! Retrieve a pointer to the currently unprocessed data. Generally this isn't * recommended to be used in practice. The returned pointer may be invalidated * by any future ares__buf_*() calls. * * \param[in] buf Initialized buffer object * \param[out] len Length of available data * \return Pointer to buffer of unprocessed data */ const unsigned char *ares__buf_peek(const ares__buf_t *buf, size_t *len); /*! Wipe any processed data from the beginning of the buffer. This will * move any remaining data to the front of the internally allocated buffer. * * Can not be used on const buffer objects. 
* * Typically not needed to call, as any new append operation will automatically * call this function if there is insufficient space to append the data in * order to try to avoid another memory allocation. * * It may be useful to call in order to ensure the current message being * processed is in the beginning of the buffer if there is an intent to use * ares__buf_set_position() and ares__buf_get_position() as may be necessary * when processing DNS compressed names. * * If there is an active tag, it will NOT clear the tag, it will use the tag * as the start of the unprocessed data rather than the current offset. If * a prior tag is no longer needed, it may be wise to call ares__buf_tag_clear(). * * \param[in] buf Initialized buffer object */ void ares__buf_reclaim(ares__buf_t *buf); /*! Set the current offset within the internal buffer. * * Typically this should not be used, if possible, use the ares__buf_tag*() * operations instead. * * One exception is DNS name compression which may reference back to * an index in the message. It may be necessary in such a case to call * ares__buf_reclaim() if using a dynamic (non-const) buffer before processing * such a message. * * \param[in] buf Initialized buffer object * \param[in] idx Index to set position * \return ARES_SUCCESS if valid index */ ares_status_t ares__buf_set_position(ares__buf_t *buf, size_t idx); /*! Get the current offset within the internal buffer. * * Typically this should not be used, if possible, use the ares__buf_tag*() * operations instead. * * This can be used to get the current position, useful for saving if a * jump via ares__buf_set_position() is performed and there is a need to restore the * current position for future operations. * * \param[in] buf Initialized buffer object * \return index of current position */ size_t ares__buf_get_position(const ares__buf_t *buf); /*! Parse a character-string as defined in RFC1035, as a null-terminated * string. * * \param[in] buf initialized buffer object * \param[in] remaining_len maximum length that should be used for parsing * the string, this is often less than the remaining * buffer and is based on the RR record length. * \param[out] name Pointer passed by reference to be filled in with * allocated string of the parsed data that must be * ares_free()'d by the caller. * \return ARES_SUCCESS on success */ ares_status_t ares__buf_parse_dns_str(ares__buf_t *buf, size_t remaining_len, char **name); /*! Parse an array of character strings as defined in RFC1035, as binary, * however, for convenience this does guarantee a NULL terminator (that is * not included in the length for each value). * * \param[in] buf initialized buffer object * \param[in] remaining_len maximum length that should be used for * parsing the string, this is often less than * the remaining buffer and is based on the RR * record length. * \param[out] strs Pointer passed by reference to be filled in * with * the array of values. * \param[in] validate_printable Validate the strings contain only printable * data. * \return ARES_SUCCESS on success */ ares_status_t ares__buf_parse_dns_abinstr(ares__buf_t *buf, size_t remaining_len, ares__dns_multistring_t **strs, ares_bool_t validate_printable); /*! Parse a character-string as defined in RFC1035, as binary, however for * convenience this does guarantee a NULL terminator (that is not included * in the returned length).
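 *
 * For example (where buf is some initialized buffer and remaining_len is
 * taken from the length of the resource record being parsed):
 *
 *   unsigned char *bin     = NULL;
 *   size_t         bin_len = 0;
 *   ares_status_t  status  = ares__buf_parse_dns_binstr(buf, remaining_len,
 *                                                       &bin, &bin_len);
 *
 * On ARES_SUCCESS, bin holds bin_len bytes plus a guaranteed NULL terminator
 * and must be released by the caller with ares_free().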
* * \param[in] buf initialized buffer object * \param[in] remaining_len maximum length that should be used for parsing * the string, this is often less than the remaining * buffer and is based on the RR record length. * \param[out] bin Pointer passed by reference to be filled in with * allocated string of the parsed that must be * ares_free()'d by the caller. * \param[out] bin_len Length of returned string. * \return ARES_SUCCESS on success */ ares_status_t ares__buf_parse_dns_binstr(ares__buf_t *buf, size_t remaining_len, unsigned char **bin, size_t *bin_len); /*! Load data from specified file path into provided buffer. The entire file * is loaded into memory. * * \param[in] filename complete path to file * \param[in,out] buf Initialized (non-const) buffer object to load data * into * \return ARES_ENOTFOUND if file not found, ARES_EFILE if issues reading * file, ARES_ENOMEM if out of memory, ARES_SUCCESS on success. */ ares_status_t ares__buf_load_file(const char *filename, ares__buf_t *buf); /*! @} */ #endif /* __ARES__BUF_H */ gevent-24.11.1/deps/c-ares/src/lib/str/ares_str.c000066400000000000000000000161041471441230600214140ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_str.h" #ifdef HAVE_STDINT_H # include #endif size_t ares_strlen(const char *str) { if (str == NULL) { return 0; } return strlen(str); } char *ares_strdup(const char *s1) { size_t len; char *out; if (s1 == NULL) { return NULL; } len = ares_strlen(s1); /* Don't see how this is possible */ if (len == SIZE_MAX) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } out = ares_malloc(len + 1); if (out == NULL) { return NULL; } if (len) { memcpy(out, s1, len); } out[len] = 0; return out; } size_t ares_strcpy(char *dest, const char *src, size_t dest_size) { size_t len = 0; if (dest == NULL || dest_size == 0) { return 0; /* LCOV_EXCL_LINE: DefensiveCoding */ } len = ares_strlen(src); if (len >= dest_size) { len = dest_size - 1; } if (len) { memcpy(dest, src, len); } dest[len] = 0; return len; } ares_bool_t ares_str_isnum(const char *str) { size_t i; if (str == NULL || *str == 0) { return ARES_FALSE; } for (i = 0; str[i] != 0; i++) { if (str[i] < '0' || str[i] > '9') { return ARES_FALSE; } } return ARES_TRUE; } void ares__str_rtrim(char *str) { size_t len; size_t i; if (str == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } len = ares_strlen(str); for (i = len; i > 0; i--) { if (!ares__isspace(str[i - 1])) { break; } } str[i] = 0; } void ares__str_ltrim(char *str) { size_t i; size_t len; if (str == NULL) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } for (i = 0; str[i] != 0 && ares__isspace(str[i]); i++) { /* Do nothing */ } if (i == 0) { return; } len = ares_strlen(str); if (i != len) { memmove(str, str + i, len - i); } str[len - i] = 0; } void ares__str_trim(char *str) { ares__str_ltrim(str); ares__str_rtrim(str); } /* tolower() is locale-specific. Use a lookup table fast conversion that only * operates on ASCII */ static const unsigned char ares__tolower_lookup[] = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2A, 0x2B, 0x2C, 0x2D, 0x2E, 0x2F, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3A, 0x3B, 0x3C, 0x3D, 0x3E, 0x3F, 0x40, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F, 0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8A, 0x8B, 0x8C, 0x8D, 0x8E, 0x8F, 0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9A, 0x9B, 0x9C, 0x9D, 0x9E, 0x9F, 0xA0, 0xA1, 0xA2, 0xA3, 0xA4, 0xA5, 0xA6, 0xA7, 0xA8, 0xA9, 0xAA, 0xAB, 0xAC, 0xAD, 0xAE, 0xAF, 0xB0, 0xB1, 0xB2, 0xB3, 0xB4, 0xB5, 0xB6, 0xB7, 0xB8, 0xB9, 0xBA, 0xBB, 0xBC, 0xBD, 0xBE, 0xBF, 0xC0, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7, 0xC8, 0xC9, 0xCA, 0xCB, 0xCC, 0xCD, 0xCE, 0xCF, 0xD0, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7, 0xD8, 0xD9, 0xDA, 0xDB, 0xDC, 0xDD, 0xDE, 0xDF, 0xE0, 0xE1, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7, 0xE8, 0xE9, 0xEA, 0xEB, 0xEC, 0xED, 0xEE, 0xEF, 0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7, 0xF8, 0xF9, 0xFA, 0xFB, 0xFC, 0xFD, 0xFE, 0xFF }; unsigned char ares__tolower(unsigned char c) { return ares__tolower_lookup[c]; } ares_bool_t ares__memeq_ci(const unsigned char *ptr, 
const unsigned char *val, size_t len) { size_t i; for (i = 0; i < len; i++) { if (ares__tolower_lookup[ptr[i]] != ares__tolower_lookup[val[i]]) { return ARES_FALSE; } } return ARES_TRUE; } ares_bool_t ares__isspace(int ch) { switch (ch) { case '\r': case '\t': case ' ': case '\v': case '\f': case '\n': return ARES_TRUE; default: break; } return ARES_FALSE; } ares_bool_t ares__isprint(int ch) { if (ch >= 0x20 && ch <= 0x7E) { return ARES_TRUE; } return ARES_FALSE; } /* Character set allowed by hostnames. This is to include the normal * domain name character set plus: * - underscores which are used in SRV records. * - Forward slashes such as are used for classless in-addr.arpa * delegation (CNAMEs) * - Asterisks may be used for wildcard domains in CNAMEs as seen in the * real world. * While RFC 2181 section 11 does state not to do validation, * that applies to servers, not clients. Vulnerabilities have been * reported when this validation is not performed. Security is more * important than edge-case compatibility (which is probably invalid * anyhow). */ ares_bool_t ares__is_hostnamech(int ch) { /* [A-Za-z0-9-*._/] * Don't use isalnum() as it is locale-specific */ if (ch >= 'A' && ch <= 'Z') { return ARES_TRUE; } if (ch >= 'a' && ch <= 'z') { return ARES_TRUE; } if (ch >= '0' && ch <= '9') { return ARES_TRUE; } if (ch == '-' || ch == '.' || ch == '_' || ch == '/' || ch == '*') { return ARES_TRUE; } return ARES_FALSE; } ares_bool_t ares__is_hostname(const char *str) { size_t i; if (str == NULL) { return ARES_FALSE; /* LCOV_EXCL_LINE: DefensiveCoding */ } for (i = 0; str[i] != 0; i++) { if (!ares__is_hostnamech(str[i])) { return ARES_FALSE; } } return ARES_TRUE; } ares_bool_t ares__str_isprint(const char *str, size_t len) { size_t i; if (str == NULL && len != 0) { return ARES_FALSE; } for (i = 0; i < len; i++) { if (!ares__isprint(str[i])) { return ARES_FALSE; } } return ARES_TRUE; } gevent-24.11.1/deps/c-ares/src/lib/str/ares_str.h000066400000000000000000000073761471441230600214340ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES_STR_H #define __ARES_STR_H char *ares_strdup(const char *s1); size_t ares_strlen(const char *str); /*! Copy string from source to destination with destination buffer size * provided. 
The destination is guaranteed to be null terminated, if the * provided buffer isn't large enough, only those bytes from the source that * will fit will be copied. * * \param[out] dest Destination buffer * \param[in] src Source to copy * \param[in] dest_size Size of destination buffer * \return String length. Will be at most dest_size-1 */ size_t ares_strcpy(char *dest, const char *src, size_t dest_size); ares_bool_t ares_str_isnum(const char *str); void ares__str_ltrim(char *str); void ares__str_rtrim(char *str); void ares__str_trim(char *str); unsigned char ares__tolower(unsigned char c); ares_bool_t ares__memeq_ci(const unsigned char *ptr, const unsigned char *val, size_t len); ares_bool_t ares__isspace(int ch); ares_bool_t ares__isprint(int ch); ares_bool_t ares__is_hostnamech(int ch); ares_bool_t ares__is_hostname(const char *str); /*! Validate the string provided is printable. The length specified must be * at least the size of the buffer provided. If a NULL-terminator is hit * before the length provided is hit, this will not be considered a valid * printable string. This does not validate that the string is actually * NULL terminated. * * \param[in] str Buffer containing string to evaluate. * \param[in] len Number of characters to evaluate within provided buffer. * If 0, will return TRUE since it did not hit an exception. * \return ARES_TRUE if the entire string is printable, ARES_FALSE if not. */ ares_bool_t ares__str_isprint(const char *str, size_t len); /* We only care about ASCII rules */ #define ares__isascii(x) (((unsigned char)x) <= 127) #define ares__isdigit(x) \ (((unsigned char)x) >= '0' && ((unsigned char)x) <= '9') #define ares__isxdigit(x) \ (ares__isdigit(x) || \ (((unsigned char)x) >= 'a' && ((unsigned char)x) <= 'f') || \ (((unsigned char)x) >= 'A' && ((unsigned char)x) <= 'F')) #define ares__isupper(x) \ (((unsigned char)x) >= 'A' && ((unsigned char)x) <= 'Z') #define ares__islower(x) \ (((unsigned char)x) >= 'a' && ((unsigned char)x) <= 'z') #define ares__isalpha(x) (ares__islower(x) || ares__isupper(x)) #endif /* __ARES_STR_H */ gevent-24.11.1/deps/c-ares/src/lib/str/ares_strcasecmp.c000066400000000000000000000042321471441230600227470ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include "ares_strcasecmp.h" #ifndef HAVE_STRCASECMP int ares_strcasecmp(const char *a, const char *b) { # if defined(HAVE_STRCMPI) return strcmpi(a, b); # elif defined(HAVE_STRICMP) return stricmp(a, b); # else size_t i; for (i = 0; i < (size_t)-1; i++) { int c1 = ares__tolower(a[i]); int c2 = ares__tolower(b[i]); if (c1 != c2) { return c1 - c2; } if (!c1) { break; } } return 0; # endif } #endif #ifndef HAVE_STRNCASECMP int ares_strncasecmp(const char *a, const char *b, size_t n) { # if defined(HAVE_STRNCMPI) return strncmpi(a, b, n); # elif defined(HAVE_STRNICMP) return strnicmp(a, b, n); # else size_t i; for (i = 0; i < n; i++) { int c1 = ares__tolower(a[i]); int c2 = ares__tolower(b[i]); if (c1 != c2) { return c1 - c2; } if (!c1) { break; } } return 0; # endif } #endif gevent-24.11.1/deps/c-ares/src/lib/str/ares_strcasecmp.h000066400000000000000000000030571471441230600227600ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_STRCASECMP_H #define HEADER_CARES_STRCASECMP_H #ifndef HAVE_STRCASECMP extern int ares_strcasecmp(const char *a, const char *b); #endif #ifndef HAVE_STRNCASECMP extern int ares_strncasecmp(const char *a, const char *b, size_t n); #endif #endif /* HEADER_CARES_STRCASECMP_H */ gevent-24.11.1/deps/c-ares/src/lib/str/ares_strsplit.c000066400000000000000000000070561471441230600224760ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2018 John Schember * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" void ares__strsplit_free(char **elms, size_t num_elm) { size_t i; if (elms == NULL) { return; } for (i = 0; i < num_elm; i++) { ares_free(elms[i]); } ares_free(elms); } char **ares__strsplit_duplicate(char **elms, size_t num_elm) { size_t i; char **out; if (elms == NULL || num_elm == 0) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } out = ares_malloc_zero(sizeof(*elms) * num_elm); if (out == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } for (i = 0; i < num_elm; i++) { out[i] = ares_strdup(elms[i]); if (out[i] == NULL) { ares__strsplit_free(out, num_elm); /* LCOV_EXCL_LINE: OutOfMemory */ return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } } return out; } char **ares__strsplit(const char *in, const char *delms, size_t *num_elm) { ares_status_t status; ares__buf_t *buf = NULL; ares__llist_t *llist = NULL; ares__llist_node_t *node; char **out = NULL; size_t cnt = 0; size_t idx = 0; if (in == NULL || delms == NULL || num_elm == NULL) { return NULL; /* LCOV_EXCL_LINE: DefensiveCoding */ } *num_elm = 0; buf = ares__buf_create_const((const unsigned char *)in, ares_strlen(in)); if (buf == NULL) { return NULL; } status = ares__buf_split( buf, (const unsigned char *)delms, ares_strlen(delms), ARES_BUF_SPLIT_NO_DUPLICATES | ARES_BUF_SPLIT_CASE_INSENSITIVE, 0, &llist); if (status != ARES_SUCCESS) { goto done; } cnt = ares__llist_len(llist); if (cnt == 0) { status = ARES_EFORMERR; goto done; } out = ares_malloc_zero(cnt * sizeof(*out)); if (out == NULL) { status = ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ goto done; /* LCOV_EXCL_LINE: OutOfMemory */ } for (node = ares__llist_node_first(llist); node != NULL; node = ares__llist_node_next(node)) { ares__buf_t *val = ares__llist_node_val(node); char *temp = NULL; status = ares__buf_fetch_str_dup(val, ares__buf_len(val), &temp); if (status != ARES_SUCCESS) { goto done; } out[idx++] = temp; } *num_elm = cnt; status = ARES_SUCCESS; done: ares__llist_destroy(llist); ares__buf_destroy(buf); if (status != ARES_SUCCESS) { ares__strsplit_free(out, cnt); out = NULL; } return out; } gevent-24.11.1/deps/c-ares/src/lib/str/ares_strsplit.h000066400000000000000000000041571471441230600225020ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2018 John Schember * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef HEADER_CARES_STRSPLIT_H #define HEADER_CARES_STRSPLIT_H /* Split a string on delms skipping empty or duplicate elements. * * param in String to split. * param delms String of characters to treat as a delimiter. * Each character in the string is a delimiter so * there can be multiple delimiters to split on. * E.g. ", " will split on all comma's and spaces. * Duplicate (case-insensitive) entries are removed. * param num_elm Return parameter of the number of elements * in the result array. * * returns an allocated array of allocated string elements. * */ char **ares__strsplit(const char *in, const char *delms, size_t *num_elm); /* Frees the result returned from ares__strsplit(). */ void ares__strsplit_free(char **elms, size_t num_elm); /* Duplicate the array */ char **ares__strsplit_duplicate(char **elms, size_t num_elm); #endif /* HEADER_CARES_STRSPLIT_H */ gevent-24.11.1/deps/c-ares/src/lib/thirdparty/000077500000000000000000000000001471441230600210065ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/thirdparty/apple/000077500000000000000000000000001471441230600221075ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/thirdparty/apple/dnsinfo.h000066400000000000000000000066641471441230600237340ustar00rootroot00000000000000/* * Copyright (c) 2004-2006, 2008, 2009, 2011 Apple Inc. All rights reserved. * * @APPLE_LICENSE_HEADER_START@ * * This file contains Original Code and/or Modifications of Original Code * as defined in and that are subject to the Apple Public Source License * Version 2.0 (the 'License'). You may not use this file except in * compliance with the License. Please obtain a copy of the License at * http://www.opensource.apple.com/apsl/ and read it before using this * file. * * The Original Code and all software distributed under the License are * distributed on an 'AS IS' basis, WITHOUT WARRANTY OF ANY KIND, EITHER * EXPRESS OR IMPLIED, AND APPLE HEREBY DISCLAIMS ALL SUCH WARRANTIES, * INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE, QUIET ENJOYMENT OR NON-INFRINGEMENT. * Please see the License for the specific language governing rights and * limitations under the License. 
* * @APPLE_LICENSE_HEADER_END@ */ #ifndef __DNSINFO_H__ #define __DNSINFO_H__ /* * These routines provide access to the systems DNS configuration */ #include #include #include #include #include #include #include #define DNSINFO_VERSION 20111104 #define DEFAULT_SEARCH_ORDER 200000 /* search order for the "default" resolver domain name */ #define DNS_PTR(type, name) \ union { \ type name; \ uint64_t _ ## name ## _p; \ } #define DNS_VAR(type, name) \ type name #pragma pack(4) typedef struct { struct in_addr address; struct in_addr mask; } dns_sortaddr_t; #pragma pack() #pragma pack(4) typedef struct { DNS_PTR(char *, domain); /* domain */ DNS_VAR(int32_t, n_nameserver); /* # nameserver */ DNS_PTR(struct sockaddr **, nameserver); DNS_VAR(uint16_t, port); /* port (in host byte order) */ DNS_VAR(int32_t, n_search); /* # search */ DNS_PTR(char **, search); DNS_VAR(int32_t, n_sortaddr); /* # sortaddr */ DNS_PTR(dns_sortaddr_t **, sortaddr); DNS_PTR(char *, options); /* options */ DNS_VAR(uint32_t, timeout); /* timeout */ DNS_VAR(uint32_t, search_order); /* search_order */ DNS_VAR(uint32_t, if_index); DNS_VAR(uint32_t, flags); #if MAC_OS_X_VERSION_MIN_REQUIRED < 1080 /* MacOS 10.8 */ DNS_VAR(uint32_t, reserved[6]); #else DNS_VAR(uint32_t, reach_flags); /* SCNetworkReachabilityFlags */ DNS_VAR(uint32_t, reserved[5]); #endif } dns_resolver_t; #pragma pack() #define DNS_RESOLVER_FLAGS_SCOPED 1 /* configuration is for scoped questions */ #pragma pack(4) typedef struct { DNS_VAR(int32_t, n_resolver); /* resolver configurations */ DNS_PTR(dns_resolver_t **, resolver); DNS_VAR(int32_t, n_scoped_resolver); /* "scoped" resolver configurations */ DNS_PTR(dns_resolver_t **, scoped_resolver); DNS_VAR(uint32_t, reserved[5]); } dns_config_t; #pragma pack() __BEGIN_DECLS /* * DNS configuration access APIs */ const char * dns_configuration_notify_key (void) __OSX_AVAILABLE_STARTING(__MAC_10_4,__IPHONE_2_0); dns_config_t * dns_configuration_copy (void) __OSX_AVAILABLE_STARTING(__MAC_10_4,__IPHONE_2_0); void dns_configuration_free (dns_config_t *config) __OSX_AVAILABLE_STARTING(__MAC_10_4,__IPHONE_2_0); #if MAC_OS_X_VERSION_MIN_REQUIRED >= 1080 void _dns_configuration_ack (dns_config_t *config, const char *bundle_id) __OSX_AVAILABLE_STARTING(__MAC_10_8, __IPHONE_6_0); #endif __END_DECLS #endif /* __DNSINFO_H__ */ gevent-24.11.1/deps/c-ares/src/lib/util/000077500000000000000000000000001471441230600175715ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/lib/util/ares__iface_ips.c000066400000000000000000000370241471441230600230360ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef USE_WINSOCK # include # include # if defined(HAVE_IPHLPAPI_H) # include # endif # if defined(HAVE_NETIOAPI_H) # include # endif #endif #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_SOCKET_H # include #endif #ifdef HAVE_NET_IF_H # include #endif #ifdef HAVE_IFADDRS_H # include #endif #ifdef HAVE_SYS_IOCTL_H # include #endif #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_NETDB_H # include #endif static ares_status_t ares__iface_ips_enumerate(ares__iface_ips_t *ips, const char *name); typedef struct { char *name; struct ares_addr addr; unsigned char netmask; unsigned int ll_scope; ares__iface_ip_flags_t flags; } ares__iface_ip_t; struct ares__iface_ips { ares__array_t *ips; /*!< Type is ares__iface_ip_t */ ares__iface_ip_flags_t enum_flags; }; static void ares__iface_ip_free_cb(void *arg) { ares__iface_ip_t *ip = arg; if (ip == NULL) { return; } ares_free(ip->name); } static ares__iface_ips_t *ares__iface_ips_alloc(ares__iface_ip_flags_t flags) { ares__iface_ips_t *ips = ares_malloc_zero(sizeof(*ips)); if (ips == NULL) { return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } ips->enum_flags = flags; ips->ips = ares__array_create(sizeof(ares__iface_ip_t), ares__iface_ip_free_cb); if (ips->ips == NULL) { ares_free(ips); /* LCOV_EXCL_LINE: OutOfMemory */ return NULL; /* LCOV_EXCL_LINE: OutOfMemory */ } return ips; } void ares__iface_ips_destroy(ares__iface_ips_t *ips) { if (ips == NULL) { return; } ares__array_destroy(ips->ips); ares_free(ips); } ares_status_t ares__iface_ips(ares__iface_ips_t **ips, ares__iface_ip_flags_t flags, const char *name) { ares_status_t status; if (ips == NULL) { return ARES_EFORMERR; } *ips = ares__iface_ips_alloc(flags); if (*ips == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } status = ares__iface_ips_enumerate(*ips, name); if (status != ARES_SUCCESS) { /* LCOV_EXCL_START: UntestablePath */ ares__iface_ips_destroy(*ips); *ips = NULL; return status; /* LCOV_EXCL_STOP */ } return ARES_SUCCESS; } static ares_status_t ares__iface_ips_add(ares__iface_ips_t *ips, ares__iface_ip_flags_t flags, const char *name, const struct ares_addr *addr, unsigned char netmask, unsigned int ll_scope) { ares__iface_ip_t *ip; ares_status_t status; if (ips == NULL || name == NULL || addr == NULL) { return ARES_EFORMERR; /* LCOV_EXCL_LINE: DefensiveCoding */ } /* Don't want loopback */ if (flags & ARES_IFACE_IP_LOOPBACK && !(ips->enum_flags & ARES_IFACE_IP_LOOPBACK)) { return ARES_SUCCESS; } /* Don't want offline */ if (flags & ARES_IFACE_IP_OFFLINE && !(ips->enum_flags & ARES_IFACE_IP_OFFLINE)) { return ARES_SUCCESS; } /* Check for link-local */ if (ares__addr_is_linklocal(addr)) { flags |= ARES_IFACE_IP_LINKLOCAL; } if (flags & ARES_IFACE_IP_LINKLOCAL && !(ips->enum_flags & ARES_IFACE_IP_LINKLOCAL)) { return ARES_SUCCESS; } /* Set address flag based on address provided */ if (addr->family == AF_INET) { flags |= ARES_IFACE_IP_V4; } if (addr->family == AF_INET6) { flags |= ARES_IFACE_IP_V6; } /* If they specified either v4 or v6 validate flags otherwise assume they * want to enumerate both */ if (ips->enum_flags & (ARES_IFACE_IP_V4 | ARES_IFACE_IP_V6)) { if (flags & ARES_IFACE_IP_V4 && !(ips->enum_flags & ARES_IFACE_IP_V4)) { return 
ARES_SUCCESS; } if (flags & ARES_IFACE_IP_V6 && !(ips->enum_flags & ARES_IFACE_IP_V6)) { return ARES_SUCCESS; } } status = ares__array_insert_last((void **)&ip, ips->ips); if (status != ARES_SUCCESS) { return status; } ip->flags = flags; ip->netmask = netmask; if (flags & ARES_IFACE_IP_LINKLOCAL) { ip->ll_scope = ll_scope; } memcpy(&ip->addr, addr, sizeof(*addr)); ip->name = ares_strdup(name); if (ip->name == NULL) { ares__array_remove_last(ips->ips); return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } return ARES_SUCCESS; } size_t ares__iface_ips_cnt(const ares__iface_ips_t *ips) { if (ips == NULL) { return 0; } return ares__array_len(ips->ips); } const char *ares__iface_ips_get_name(const ares__iface_ips_t *ips, size_t idx) { const ares__iface_ip_t *ip; if (ips == NULL) { return NULL; } ip = ares__array_at_const(ips->ips, idx); if (ip == NULL) { return NULL; } return ip->name; } const struct ares_addr *ares__iface_ips_get_addr(const ares__iface_ips_t *ips, size_t idx) { const ares__iface_ip_t *ip; if (ips == NULL) { return NULL; } ip = ares__array_at_const(ips->ips, idx); if (ip == NULL) { return NULL; } return &ip->addr; } ares__iface_ip_flags_t ares__iface_ips_get_flags(const ares__iface_ips_t *ips, size_t idx) { const ares__iface_ip_t *ip; if (ips == NULL) { return 0; } ip = ares__array_at_const(ips->ips, idx); if (ip == NULL) { return 0; } return ip->flags; } unsigned char ares__iface_ips_get_netmask(const ares__iface_ips_t *ips, size_t idx) { const ares__iface_ip_t *ip; if (ips == NULL) { return 0; } ip = ares__array_at_const(ips->ips, idx); if (ip == NULL) { return 0; } return ip->netmask; } unsigned int ares__iface_ips_get_ll_scope(const ares__iface_ips_t *ips, size_t idx) { const ares__iface_ip_t *ip; if (ips == NULL) { return 0; } ip = ares__array_at_const(ips->ips, idx); if (ip == NULL) { return 0; } return ip->ll_scope; } #ifdef USE_WINSOCK # if 0 static char *wcharp_to_charp(const wchar_t *in) { char *out; int len; len = WideCharToMultiByte(CP_UTF8, 0, in, -1, NULL, 0, NULL, NULL); if (len == -1) { return NULL; } out = ares_malloc_zero((size_t)len + 1); if (WideCharToMultiByte(CP_UTF8, 0, in, -1, out, len, NULL, NULL) == -1) { ares_free(out); return NULL; } return out; } # endif static ares_bool_t name_match(const char *name, const char *adapter_name, unsigned int ll_scope) { if (name == NULL || *name == 0) { return ARES_TRUE; } if (strcasecmp(name, adapter_name) == 0) { return ARES_TRUE; } if (ares_str_isnum(name) && (unsigned int)atoi(name) == ll_scope) { return ARES_TRUE; } return ARES_FALSE; } static ares_status_t ares__iface_ips_enumerate(ares__iface_ips_t *ips, const char *name) { ULONG myflags = GAA_FLAG_INCLUDE_PREFIX /*|GAA_FLAG_INCLUDE_ALL_INTERFACES */; ULONG outBufLen = 0; DWORD retval; IP_ADAPTER_ADDRESSES *addresses = NULL; IP_ADAPTER_ADDRESSES *address = NULL; ares_status_t status = ARES_SUCCESS; /* Get necessary buffer size */ GetAdaptersAddresses(AF_UNSPEC, myflags, NULL, NULL, &outBufLen); if (outBufLen == 0) { status = ARES_EFILE; goto done; } addresses = ares_malloc_zero(outBufLen); if (addresses == NULL) { status = ARES_ENOMEM; goto done; } retval = GetAdaptersAddresses(AF_UNSPEC, myflags, NULL, addresses, &outBufLen); if (retval != ERROR_SUCCESS) { status = ARES_EFILE; goto done; } for (address = addresses; address != NULL; address = address->Next) { IP_ADAPTER_UNICAST_ADDRESS *ipaddr = NULL; ares__iface_ip_flags_t addrflag = 0; char ifname[64] = ""; # if defined(HAVE_CONVERTINTERFACEINDEXTOLUID) && \ defined(HAVE_CONVERTINTERFACELUIDTONAMEA) /* 
Retrieve name from interface index. * address->AdapterName appears to be a GUID/UUID of some sort, not a name. * address->FriendlyName is user-changeable. * That said, this doesn't appear to help us out on systems that don't * have if_nametoindex() or if_indextoname() as they don't have these * functions either! */ NET_LUID luid; ConvertInterfaceIndexToLuid(address->IfIndex, &luid); ConvertInterfaceLuidToNameA(&luid, ifname, sizeof(ifname)); # else ares_strcpy(ifname, address->AdapterName, sizeof(ifname)); # endif if (address->OperStatus != IfOperStatusUp) { addrflag |= ARES_IFACE_IP_OFFLINE; } if (address->IfType == IF_TYPE_SOFTWARE_LOOPBACK) { addrflag |= ARES_IFACE_IP_LOOPBACK; } for (ipaddr = address->FirstUnicastAddress; ipaddr != NULL; ipaddr = ipaddr->Next) { struct ares_addr addr; if (ipaddr->Address.lpSockaddr->sa_family == AF_INET) { const struct sockaddr_in *sockaddr_in = (const struct sockaddr_in *)((void *)ipaddr->Address.lpSockaddr); addr.family = AF_INET; memcpy(&addr.addr.addr4, &sockaddr_in->sin_addr, sizeof(addr.addr.addr4)); } else if (ipaddr->Address.lpSockaddr->sa_family == AF_INET6) { const struct sockaddr_in6 *sockaddr_in6 = (const struct sockaddr_in6 *)((void *)ipaddr->Address.lpSockaddr); addr.family = AF_INET6; memcpy(&addr.addr.addr6, &sockaddr_in6->sin6_addr, sizeof(addr.addr.addr6)); } else { /* Unknown */ continue; } /* Sometimes windows may use numerics to indicate a DNS server's adapter, * which corresponds to the index rather than the name. Check and * validate both. */ if (!name_match(name, ifname, address->Ipv6IfIndex)) { continue; } status = ares__iface_ips_add(ips, addrflag, ifname, &addr, ipaddr->OnLinkPrefixLength /* netmask */, address->Ipv6IfIndex /* ll_scope */); if (status != ARES_SUCCESS) { goto done; } } } done: ares_free(addresses); return status; } #elif defined(HAVE_GETIFADDRS) static unsigned char count_addr_bits(const unsigned char *addr, size_t addr_len) { size_t i; unsigned char count = 0; for (i = 0; i < addr_len; i++) { count += ares__count_bits_u8(addr[i]); } return count; } static ares_status_t ares__iface_ips_enumerate(ares__iface_ips_t *ips, const char *name) { struct ifaddrs *ifap = NULL; struct ifaddrs *ifa = NULL; ares_status_t status = ARES_SUCCESS; if (getifaddrs(&ifap) != 0) { status = ARES_EFILE; goto done; } for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) { ares__iface_ip_flags_t addrflag = 0; struct ares_addr addr; unsigned char netmask = 0; unsigned int ll_scope = 0; if (ifa->ifa_addr == NULL) { continue; } if (!(ifa->ifa_flags & IFF_UP)) { addrflag |= ARES_IFACE_IP_OFFLINE; } if (ifa->ifa_flags & IFF_LOOPBACK) { addrflag |= ARES_IFACE_IP_LOOPBACK; } if (ifa->ifa_addr->sa_family == AF_INET) { const struct sockaddr_in *sockaddr_in = (const struct sockaddr_in *)((void *)ifa->ifa_addr); addr.family = AF_INET; memcpy(&addr.addr.addr4, &sockaddr_in->sin_addr, sizeof(addr.addr.addr4)); /* netmask */ sockaddr_in = (struct sockaddr_in *)((void *)ifa->ifa_netmask); netmask = count_addr_bits((const void *)&sockaddr_in->sin_addr, 4); } else if (ifa->ifa_addr->sa_family == AF_INET6) { const struct sockaddr_in6 *sockaddr_in6 = (const struct sockaddr_in6 *)((void *)ifa->ifa_addr); addr.family = AF_INET6; memcpy(&addr.addr.addr6, &sockaddr_in6->sin6_addr, sizeof(addr.addr.addr6)); /* netmask */ sockaddr_in6 = (struct sockaddr_in6 *)((void *)ifa->ifa_netmask); netmask = count_addr_bits((const void *)&sockaddr_in6->sin6_addr, 16); # ifdef HAVE_STRUCT_SOCKADDR_IN6_SIN6_SCOPE_ID ll_scope = sockaddr_in6->sin6_scope_id; # endif } else { /* 
unknown */ continue; } /* Name mismatch */ if (name != NULL && strcasecmp(ifa->ifa_name, name) != 0) { continue; } status = ares__iface_ips_add(ips, addrflag, ifa->ifa_name, &addr, netmask, ll_scope); if (status != ARES_SUCCESS) { goto done; } } done: freeifaddrs(ifap); return status; } #else static ares_status_t ares__iface_ips_enumerate(ares__iface_ips_t *ips, const char *name) { (void)ips; (void)name; return ARES_ENOTIMP; } #endif unsigned int ares__if_nametoindex(const char *name) { #ifdef HAVE_IF_NAMETOINDEX if (name == NULL) { return 0; } return if_nametoindex(name); #else ares_status_t status; ares__iface_ips_t *ips = NULL; size_t i; unsigned int index = 0; if (name == NULL) { return 0; } status = ares__iface_ips(&ips, ARES_IFACE_IP_V6 | ARES_IFACE_IP_LINKLOCAL, name); if (status != ARES_SUCCESS) { goto done; } for (i = 0; i < ares__iface_ips_cnt(ips); i++) { if (ares__iface_ips_get_flags(ips, i) & ARES_IFACE_IP_LINKLOCAL) { index = ares__iface_ips_get_ll_scope(ips, i); goto done; } } done: ares__iface_ips_destroy(ips); return index; #endif } const char *ares__if_indextoname(unsigned int index, char *name, size_t name_len) { #ifdef HAVE_IF_INDEXTONAME if (name_len < IF_NAMESIZE) { return NULL; } return if_indextoname(index, name); #else ares_status_t status; ares__iface_ips_t *ips = NULL; size_t i; const char *ptr = NULL; if (name == NULL || name_len < IF_NAMESIZE) { goto done; } if (index == 0) { goto done; } status = ares__iface_ips(&ips, ARES_IFACE_IP_V6 | ARES_IFACE_IP_LINKLOCAL, NULL); if (status != ARES_SUCCESS) { goto done; } for (i = 0; i < ares__iface_ips_cnt(ips); i++) { if (ares__iface_ips_get_flags(ips, i) & ARES_IFACE_IP_LINKLOCAL && ares__iface_ips_get_ll_scope(ips, i) == index) { ares_strcpy(name, ares__iface_ips_get_name(ips, i), name_len); ptr = name; goto done; } } done: ares__iface_ips_destroy(ips); return ptr; #endif } gevent-24.11.1/deps/c-ares/src/lib/util/ares__iface_ips.h000066400000000000000000000130361471441230600230400ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__IFACE_IPS_H #define __ARES__IFACE_IPS_H /*! Flags for interface ip addresses. */ typedef enum { ARES_IFACE_IP_V4 = 1 << 0, /*!< IPv4 address. During enumeration if * this flag is set ARES_IFACE_IP_V6 * is not, will only enumerate v4 * addresses. */ ARES_IFACE_IP_V6 = 1 << 1, /*!< IPv6 address. 
During enumeration if * this flag is set ARES_IFACE_IP_V4 * is not, will only enumerate v6 * addresses. */ ARES_IFACE_IP_LOOPBACK = 1 << 2, /*!< Loopback adapter */ ARES_IFACE_IP_OFFLINE = 1 << 3, /*!< Adapter offline */ ARES_IFACE_IP_LINKLOCAL = 1 << 4, /*!< Link-local ip address */ /*! Default, enumerate all ips for online interfaces, including loopback */ ARES_IFACE_IP_DEFAULT = (ARES_IFACE_IP_V4 | ARES_IFACE_IP_V6 | ARES_IFACE_IP_LOOPBACK | ARES_IFACE_IP_LINKLOCAL) } ares__iface_ip_flags_t; struct ares__iface_ips; /*! Opaque pointer for holding enumerated interface ip addresses */ typedef struct ares__iface_ips ares__iface_ips_t; /*! Destroy ip address enumeration created by ares__iface_ips(). * * \param[in] ips Initialized IP address enumeration structure */ void ares__iface_ips_destroy(ares__iface_ips_t *ips); /*! Enumerate ip addresses on interfaces * * \param[out] ips Returns initialized ip address structure * \param[in] flags Flags for enumeration * \param[in] name Interface name to enumerate, or NULL to enumerate all * \return ARES_ENOMEM on out of memory, ARES_ENOTIMP if not supported on * the system, ARES_SUCCESS on success */ ares_status_t ares__iface_ips(ares__iface_ips_t **ips, ares__iface_ip_flags_t flags, const char *name); /*! Count of ips enumerated * * \param[in] ips Initialized IP address enumeration structure * \return count */ size_t ares__iface_ips_cnt(const ares__iface_ips_t *ips); /*! Retrieve interface name * * \param[in] ips Initialized IP address enumeration structure * \param[in] idx Index of entry to pull * \return interface name */ const char *ares__iface_ips_get_name(const ares__iface_ips_t *ips, size_t idx); /*! Retrieve interface address * * \param[in] ips Initialized IP address enumeration structure * \param[in] idx Index of entry to pull * \return interface address */ const struct ares_addr *ares__iface_ips_get_addr(const ares__iface_ips_t *ips, size_t idx); /*! Retrieve interface address flags * * \param[in] ips Initialized IP address enumeration structure * \param[in] idx Index of entry to pull * \return interface address flags */ ares__iface_ip_flags_t ares__iface_ips_get_flags(const ares__iface_ips_t *ips, size_t idx); /*! Retrieve interface address netmask * * \param[in] ips Initialized IP address enumeration structure * \param[in] idx Index of entry to pull * \return interface address netmask */ unsigned char ares__iface_ips_get_netmask(const ares__iface_ips_t *ips, size_t idx); /*! Retrieve interface ipv6 link local scope * * \param[in] ips Initialized IP address enumeration structure * \param[in] idx Index of entry to pull * \return interface ipv6 link local scope */ unsigned int ares__iface_ips_get_ll_scope(const ares__iface_ips_t *ips, size_t idx); /*! Retrieve the interface index (aka link local scope) from the interface * name. * * \param[in] name Interface name * \return 0 on failure, index otherwise */ unsigned int ares__if_nametoindex(const char *name); /*! 
Retrieves the interface name from the index (aka link local scope) * * \param[in] index Interface index (> 0) * \param[in] name Buffer to hold name * \param[in] name_len Length of provided buffer, must be at least IF_NAMESIZE * \return NULL on failure, or pointer to name on success */ const char *ares__if_indextoname(unsigned int index, char *name, size_t name_len); #endif gevent-24.11.1/deps/c-ares/src/lib/util/ares__threads.c000066400000000000000000000324521471441230600225460ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #ifdef CARES_THREADS # ifdef _WIN32 struct ares__thread_mutex { CRITICAL_SECTION mutex; }; ares__thread_mutex_t *ares__thread_mutex_create(void) { ares__thread_mutex_t *mut = ares_malloc_zero(sizeof(*mut)); if (mut == NULL) { return NULL; } InitializeCriticalSection(&mut->mutex); return mut; } void ares__thread_mutex_destroy(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } DeleteCriticalSection(&mut->mutex); ares_free(mut); } void ares__thread_mutex_lock(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } EnterCriticalSection(&mut->mutex); } void ares__thread_mutex_unlock(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } LeaveCriticalSection(&mut->mutex); } struct ares__thread_cond { CONDITION_VARIABLE cond; }; ares__thread_cond_t *ares__thread_cond_create(void) { ares__thread_cond_t *cond = ares_malloc_zero(sizeof(*cond)); if (cond == NULL) { return NULL; } InitializeConditionVariable(&cond->cond); return cond; } void ares__thread_cond_destroy(ares__thread_cond_t *cond) { if (cond == NULL) { return; } ares_free(cond); } void ares__thread_cond_signal(ares__thread_cond_t *cond) { if (cond == NULL) { return; } WakeConditionVariable(&cond->cond); } void ares__thread_cond_broadcast(ares__thread_cond_t *cond) { if (cond == NULL) { return; } WakeAllConditionVariable(&cond->cond); } ares_status_t ares__thread_cond_wait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut) { if (cond == NULL || mut == NULL) { return ARES_EFORMERR; } SleepConditionVariableCS(&cond->cond, &mut->mutex, INFINITE); return ARES_SUCCESS; } ares_status_t ares__thread_cond_timedwait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut, unsigned long timeout_ms) { if (cond == NULL || mut == NULL) { return ARES_EFORMERR; } if (!SleepConditionVariableCS(&cond->cond, &mut->mutex, timeout_ms)) { return ARES_ETIMEOUT; } return 
ARES_SUCCESS; } struct ares__thread { HANDLE thread; DWORD id; void *(*func)(void *arg); void *arg; void *rv; }; /* Wrap for pthread compatibility */ static DWORD WINAPI ares__thread_func(LPVOID lpParameter) { ares__thread_t *thread = lpParameter; thread->rv = thread->func(thread->arg); return 0; } ares_status_t ares__thread_create(ares__thread_t **thread, ares__thread_func_t func, void *arg) { ares__thread_t *thr = NULL; if (func == NULL || thread == NULL) { return ARES_EFORMERR; } thr = ares_malloc_zero(sizeof(*thr)); if (thr == NULL) { return ARES_ENOMEM; } thr->func = func; thr->arg = arg; thr->thread = CreateThread(NULL, 0, ares__thread_func, thr, 0, &thr->id); if (thr->thread == NULL) { ares_free(thr); return ARES_ESERVFAIL; } *thread = thr; return ARES_SUCCESS; } ares_status_t ares__thread_join(ares__thread_t *thread, void **rv) { ares_status_t status = ARES_SUCCESS; if (thread == NULL) { return ARES_EFORMERR; } if (WaitForSingleObject(thread->thread, INFINITE) != WAIT_OBJECT_0) { status = ARES_ENOTFOUND; } else { CloseHandle(thread->thread); } if (status == ARES_SUCCESS && rv != NULL) { *rv = thread->rv; } ares_free(thread); return status; } # else /* !WIN32 == PTHREAD */ # include /* for clock_gettime() */ # ifdef HAVE_TIME_H # include # endif /* for gettimeofday() */ # ifdef HAVE_SYS_TIME_H # include # endif struct ares__thread_mutex { pthread_mutex_t mutex; }; ares__thread_mutex_t *ares__thread_mutex_create(void) { pthread_mutexattr_t attr; ares__thread_mutex_t *mut = ares_malloc_zero(sizeof(*mut)); if (mut == NULL) { return NULL; } if (pthread_mutexattr_init(&attr) != 0) { ares_free(mut); /* LCOV_EXCL_LINE: UntestablePath */ return NULL; /* LCOV_EXCL_LINE: UntestablePath */ } if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE) != 0) { goto fail; /* LCOV_EXCL_LINE: UntestablePath */ } if (pthread_mutex_init(&mut->mutex, &attr) != 0) { goto fail; /* LCOV_EXCL_LINE: UntestablePath */ } pthread_mutexattr_destroy(&attr); return mut; /* LCOV_EXCL_START: UntestablePath */ fail: pthread_mutexattr_destroy(&attr); ares_free(mut); return NULL; /* LCOV_EXCL_STOP */ } void ares__thread_mutex_destroy(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } pthread_mutex_destroy(&mut->mutex); ares_free(mut); } void ares__thread_mutex_lock(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } pthread_mutex_lock(&mut->mutex); } void ares__thread_mutex_unlock(ares__thread_mutex_t *mut) { if (mut == NULL) { return; } pthread_mutex_unlock(&mut->mutex); } struct ares__thread_cond { pthread_cond_t cond; }; ares__thread_cond_t *ares__thread_cond_create(void) { ares__thread_cond_t *cond = ares_malloc_zero(sizeof(*cond)); if (cond == NULL) { return NULL; } pthread_cond_init(&cond->cond, NULL); return cond; } void ares__thread_cond_destroy(ares__thread_cond_t *cond) { if (cond == NULL) { return; } pthread_cond_destroy(&cond->cond); ares_free(cond); } void ares__thread_cond_signal(ares__thread_cond_t *cond) { if (cond == NULL) { return; } pthread_cond_signal(&cond->cond); } void ares__thread_cond_broadcast(ares__thread_cond_t *cond) { if (cond == NULL) { return; } pthread_cond_broadcast(&cond->cond); } ares_status_t ares__thread_cond_wait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut) { if (cond == NULL || mut == NULL) { return ARES_EFORMERR; } pthread_cond_wait(&cond->cond, &mut->mutex); return ARES_SUCCESS; } static void ares__timespec_timeout(struct timespec *ts, unsigned long add_ms) { # if defined(HAVE_CLOCK_GETTIME) && defined(CLOCK_REALTIME) clock_gettime(CLOCK_REALTIME, ts); 
# elif defined(HAVE_GETTIMEOFDAY) struct timeval tv; gettimeofday(&tv, NULL); ts->tv_sec = tv.tv_sec; ts->tv_nsec = tv.tv_usec * 1000; # else # error cannot determine current system time # endif ts->tv_sec += (time_t)(add_ms / 1000); ts->tv_nsec += (long)((add_ms % 1000) * 1000000); /* Normalize if needed */ if (ts->tv_nsec >= 1000000000) { ts->tv_sec += ts->tv_nsec / 1000000000; ts->tv_nsec %= 1000000000; } } ares_status_t ares__thread_cond_timedwait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut, unsigned long timeout_ms) { struct timespec ts; if (cond == NULL || mut == NULL) { return ARES_EFORMERR; } ares__timespec_timeout(&ts, timeout_ms); if (pthread_cond_timedwait(&cond->cond, &mut->mutex, &ts) != 0) { return ARES_ETIMEOUT; } return ARES_SUCCESS; } struct ares__thread { pthread_t thread; }; ares_status_t ares__thread_create(ares__thread_t **thread, ares__thread_func_t func, void *arg) { ares__thread_t *thr = NULL; if (func == NULL || thread == NULL) { return ARES_EFORMERR; } thr = ares_malloc_zero(sizeof(*thr)); if (thr == NULL) { return ARES_ENOMEM; /* LCOV_EXCL_LINE: OutOfMemory */ } if (pthread_create(&thr->thread, NULL, func, arg) != 0) { ares_free(thr); /* LCOV_EXCL_LINE: UntestablePath */ return ARES_ESERVFAIL; /* LCOV_EXCL_LINE: UntestablePath */ } *thread = thr; return ARES_SUCCESS; } ares_status_t ares__thread_join(ares__thread_t *thread, void **rv) { void *ret = NULL; ares_status_t status = ARES_SUCCESS; if (thread == NULL) { return ARES_EFORMERR; } if (pthread_join(thread->thread, &ret) != 0) { status = ARES_ENOTFOUND; } ares_free(thread); if (status == ARES_SUCCESS && rv != NULL) { *rv = ret; } return status; } # endif ares_bool_t ares_threadsafety(void) { return ARES_TRUE; } #else /* !CARES_THREADS */ /* NoOp */ ares__thread_mutex_t *ares__thread_mutex_create(void) { return NULL; } void ares__thread_mutex_destroy(ares__thread_mutex_t *mut) { (void)mut; } void ares__thread_mutex_lock(ares__thread_mutex_t *mut) { (void)mut; } void ares__thread_mutex_unlock(ares__thread_mutex_t *mut) { (void)mut; } ares__thread_cond_t *ares__thread_cond_create(void) { return NULL; } void ares__thread_cond_destroy(ares__thread_cond_t *cond) { (void)cond; } void ares__thread_cond_signal(ares__thread_cond_t *cond) { (void)cond; } void ares__thread_cond_broadcast(ares__thread_cond_t *cond) { (void)cond; } ares_status_t ares__thread_cond_wait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut) { (void)cond; (void)mut; return ARES_ENOTIMP; } ares_status_t ares__thread_cond_timedwait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut, unsigned long timeout_ms) { (void)cond; (void)mut; (void)timeout_ms; return ARES_ENOTIMP; } ares_status_t ares__thread_create(ares__thread_t **thread, ares__thread_func_t func, void *arg) { (void)thread; (void)func; (void)arg; return ARES_ENOTIMP; } ares_status_t ares__thread_join(ares__thread_t *thread, void **rv) { (void)thread; (void)rv; return ARES_ENOTIMP; } ares_bool_t ares_threadsafety(void) { return ARES_FALSE; } #endif ares_status_t ares__channel_threading_init(ares_channel_t *channel) { ares_status_t status = ARES_SUCCESS; /* Threading is optional! 
*/ if (!ares_threadsafety()) { return ARES_SUCCESS; } channel->lock = ares__thread_mutex_create(); if (channel->lock == NULL) { status = ARES_ENOMEM; goto done; } channel->cond_empty = ares__thread_cond_create(); if (channel->cond_empty == NULL) { status = ARES_ENOMEM; goto done; } done: if (status != ARES_SUCCESS) { ares__channel_threading_destroy(channel); } return status; } void ares__channel_threading_destroy(ares_channel_t *channel) { ares__thread_mutex_destroy(channel->lock); channel->lock = NULL; ares__thread_cond_destroy(channel->cond_empty); channel->cond_empty = NULL; } void ares__channel_lock(const ares_channel_t *channel) { ares__thread_mutex_lock(channel->lock); } void ares__channel_unlock(const ares_channel_t *channel) { ares__thread_mutex_unlock(channel->lock); } /* Must not be holding a channel lock already, public function only */ ares_status_t ares_queue_wait_empty(ares_channel_t *channel, int timeout_ms) { ares_status_t status = ARES_SUCCESS; ares_timeval_t tout; if (!ares_threadsafety()) { return ARES_ENOTIMP; } if (channel == NULL) { return ARES_EFORMERR; } if (timeout_ms >= 0) { ares__tvnow(&tout); tout.sec += (ares_int64_t)(timeout_ms / 1000); tout.usec += (unsigned int)(timeout_ms % 1000) * 1000; } ares__thread_mutex_lock(channel->lock); while (ares__llist_len(channel->all_queries)) { if (timeout_ms < 0) { ares__thread_cond_wait(channel->cond_empty, channel->lock); } else { ares_timeval_t tv_remaining; ares_timeval_t tv_now; unsigned long tms; ares__tvnow(&tv_now); ares__timeval_remaining(&tv_remaining, &tv_now, &tout); tms = (unsigned long)((tv_remaining.sec * 1000) + (tv_remaining.usec / 1000)); if (tms == 0) { status = ARES_ETIMEOUT; } else { status = ares__thread_cond_timedwait(channel->cond_empty, channel->lock, tms); } /* If there was a timeout, don't loop. Otherwise, make sure this wasn't * a spurious wakeup by looping and checking the condition. */ if (status == ARES_ETIMEOUT) { break; } } } ares__thread_mutex_unlock(channel->lock); return status; } void ares_queue_notify_empty(ares_channel_t *channel) { if (channel == NULL) { return; } /* We are guaranteed to be holding a channel lock already */ if (ares__llist_len(channel->all_queries)) { return; } /* Notify all waiters of the conditional */ ares__thread_cond_broadcast(channel->cond_empty); } gevent-24.11.1/deps/c-ares/src/lib/util/ares__threads.h000066400000000000000000000050551471441230600225520ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #ifndef __ARES__THREADS_H #define __ARES__THREADS_H struct ares__thread_mutex; typedef struct ares__thread_mutex ares__thread_mutex_t; ares__thread_mutex_t *ares__thread_mutex_create(void); void ares__thread_mutex_destroy(ares__thread_mutex_t *mut); void ares__thread_mutex_lock(ares__thread_mutex_t *mut); void ares__thread_mutex_unlock(ares__thread_mutex_t *mut); struct ares__thread_cond; typedef struct ares__thread_cond ares__thread_cond_t; ares__thread_cond_t *ares__thread_cond_create(void); void ares__thread_cond_destroy(ares__thread_cond_t *cond); void ares__thread_cond_signal(ares__thread_cond_t *cond); void ares__thread_cond_broadcast(ares__thread_cond_t *cond); ares_status_t ares__thread_cond_wait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut); ares_status_t ares__thread_cond_timedwait(ares__thread_cond_t *cond, ares__thread_mutex_t *mut, unsigned long timeout_ms); struct ares__thread; typedef struct ares__thread ares__thread_t; typedef void *(*ares__thread_func_t)(void *arg); ares_status_t ares__thread_create(ares__thread_t **thread, ares__thread_func_t func, void *arg); ares_status_t ares__thread_join(ares__thread_t *thread, void **rv); #endif gevent-24.11.1/deps/c-ares/src/lib/util/ares__timeval.c000066400000000000000000000065151471441230600225560ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2008 Daniel Stenberg * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #if defined(_WIN32) && !defined(MSDOS) void ares__tvnow(ares_timeval_t *now) { /* QueryPerformanceCounters() has been around since Windows 2000, though * significant fixes were made in later versions. Documentation states * 1 microsecond or better resolution with a rollover not less than 100 years. * This differs from GetTickCount{64}() which has a resolution between 10 and * 16 ms. 
*/ LARGE_INTEGER freq; LARGE_INTEGER current; /* Not sure how long it takes to get the frequency, I see it recommended to * cache it */ QueryPerformanceFrequency(&freq); QueryPerformanceCounter(¤t); now->sec = current.QuadPart / freq.QuadPart; /* We want to prevent overflows so we get the remainder, then multiply to * microseconds before dividing */ now->usec = (unsigned int)(((current.QuadPart % freq.QuadPart) * 1000000) / freq.QuadPart); } #elif defined(HAVE_CLOCK_GETTIME_MONOTONIC) void ares__tvnow(ares_timeval_t *now) { /* clock_gettime() is guaranteed to be increased monotonically when the * monotonic clock is queried. Time starting point is unspecified, it * could be the system start-up time, the Epoch, or something else, * in any case the time starting point does not change once that the * system has started up. */ struct timespec tsnow; if (clock_gettime(CLOCK_MONOTONIC, &tsnow) == 0) { now->sec = (ares_int64_t)tsnow.tv_sec; now->usec = (unsigned int)(tsnow.tv_nsec / 1000); } else { /* LCOV_EXCL_START: FallbackCode */ struct timeval tv; (void)gettimeofday(&tv, NULL); now->sec = (ares_int64_t)tv.tv_sec; now->usec = (unsigned int)tv.tv_usec; /* LCOV_EXCL_STOP */ } } #elif defined(HAVE_GETTIMEOFDAY) void ares__tvnow(ares_timeval_t *now) { /* gettimeofday() is not granted to be increased monotonically, due to * clock drifting and external source time synchronization it can jump * forward or backward in time. */ struct timeval tv; (void)gettimeofday(&tv, NULL); now->sec = (ares_int64_t)tv.tv_sec; now->usec = (unsigned int)tv.tv_usec; } #else # error missing sub-second time retrieval function #endif gevent-24.11.1/deps/c-ares/src/lib/util/ares_math.c000066400000000000000000000077171471441230600217140ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_private.h" /* Uses public domain code snippets from * http://graphics.stanford.edu/~seander/bithacks.html */ static unsigned int ares__round_up_pow2_u32(unsigned int n) { /* NOTE: if already a power of 2, will return itself, not the next */ n--; n |= n >> 1; n |= n >> 2; n |= n >> 4; n |= n >> 8; n |= n >> 16; n++; return n; } static ares_int64_t ares__round_up_pow2_u64(ares_int64_t n) { /* NOTE: if already a power of 2, will return itself, not the next */ n--; n |= n >> 1; n |= n >> 2; n |= n >> 4; n |= n >> 8; n |= n >> 16; n |= n >> 32; n++; return n; } ares_bool_t ares__is_64bit(void) { #ifdef _MSC_VER # pragma warning(push) # pragma warning(disable : 4127) #endif return (sizeof(size_t) == 4) ? ARES_FALSE : ARES_TRUE; #ifdef _MSC_VER # pragma warning(pop) #endif } size_t ares__round_up_pow2(size_t n) { if (ares__is_64bit()) { return (size_t)ares__round_up_pow2_u64((ares_int64_t)n); } return (size_t)ares__round_up_pow2_u32((unsigned int)n); } size_t ares__log2(size_t n) { static const unsigned char tab32[32] = { 0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8, 31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9 }; static const unsigned char tab64[64] = { 63, 0, 58, 1, 59, 47, 53, 2, 60, 39, 48, 27, 54, 33, 42, 3, 61, 51, 37, 40, 49, 18, 28, 20, 55, 30, 34, 11, 43, 14, 22, 4, 62, 57, 46, 52, 38, 26, 32, 41, 50, 36, 17, 19, 29, 10, 13, 21, 56, 45, 25, 31, 35, 16, 9, 12, 44, 24, 15, 8, 23, 7, 6, 5 }; if (!ares__is_64bit()) { return tab32[(n * 0x077CB531) >> 27]; } return tab64[(n * 0x07EDD5E59A4E28C2) >> 58]; } /* x^y */ size_t ares__pow(size_t x, size_t y) { size_t res = 1; while (y > 0) { /* If y is odd, multiply x with result */ if (y & 1) { res = res * x; } /* y must be even now */ y = y >> 1; /* y /= 2; */ x = x * x; /* x^2 */ } return res; } size_t ares__count_digits(size_t n) { size_t digits; for (digits = 0; n > 0; digits++) { n /= 10; } if (digits == 0) { digits = 1; } return digits; } size_t ares__count_hexdigits(size_t n) { size_t digits; for (digits = 0; n > 0; digits++) { n /= 16; } if (digits == 0) { digits = 1; } return digits; } unsigned char ares__count_bits_u8(unsigned char x) { /* Implementation obtained from: * http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetTable */ #define B2(n) n, n + 1, n + 1, n + 2 #define B4(n) B2(n), B2(n + 1), B2(n + 1), B2(n + 2) #define B6(n) B4(n), B4(n + 1), B4(n + 1), B4(n + 2) static const unsigned char lookup[256] = { B6(0), B6(1), B6(1), B6(2) }; return lookup[x]; } gevent-24.11.1/deps/c-ares/src/lib/util/ares_rand.c000066400000000000000000000246421471441230600217030ustar00rootroot00000000000000/* MIT License * * Copyright (c) 2023 Brad House * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_private.h" #include /* Older MacOS versions require including AvailabilityMacros.h before * sys/random.h */ #ifdef HAVE_AVAILABILITYMACROS_H # include #endif #ifdef HAVE_SYS_RANDOM_H # include #endif typedef enum { ARES_RAND_OS = 1 << 0, /* OS-provided such as RtlGenRandom or arc4random */ ARES_RAND_FILE = 1 << 1, /* OS file-backed random number generator */ ARES_RAND_RC4 = 1 << 2 /* Internal RC4 based PRNG */ } ares_rand_backend; #define ARES_RC4_KEY_LEN 32 /* 256 bits */ typedef struct ares_rand_rc4 { unsigned char S[256]; size_t i; size_t j; } ares_rand_rc4; static unsigned int ares_u32_from_ptr(void *addr) { /* LCOV_EXCL_START: FallbackCode */ if (ares__is_64bit()) { return (unsigned int)((((ares_uint64_t)addr >> 32) & 0xFFFFFFFF) | ((ares_uint64_t)addr & 0xFFFFFFFF)); } return (unsigned int)((size_t)addr & 0xFFFFFFFF); /* LCOV_EXCL_STOP */ } /* initialize an rc4 key as the last possible fallback. */ static void ares_rc4_generate_key(ares_rand_rc4 *rc4_state, unsigned char *key, size_t key_len) { /* LCOV_EXCL_START: FallbackCode */ size_t i; size_t len = 0; unsigned int data; ares_timeval_t tv; if (key_len != ARES_RC4_KEY_LEN) { return; } /* Randomness is hard to come by. Maybe the system randomizes heap and stack * addresses. Maybe the current timestamp give us some randomness. Use * rc4_state (heap), &i (stack), and ares__tvnow() */ data = ares_u32_from_ptr(rc4_state); memcpy(key + len, &data, sizeof(data)); len += sizeof(data); data = ares_u32_from_ptr(&i); memcpy(key + len, &data, sizeof(data)); len += sizeof(data); ares__tvnow(&tv); data = (unsigned int)((tv.sec | tv.usec) & 0xFFFFFFFF); memcpy(key + len, &data, sizeof(data)); len += sizeof(data); srand(ares_u32_from_ptr(rc4_state) | ares_u32_from_ptr(&i) | (unsigned int)((tv.sec | tv.usec) & 0xFFFFFFFF)); for (i = len; i < key_len; i++) { key[i] = (unsigned char)(rand() % 256); /* LCOV_EXCL_LINE */ } /* LCOV_EXCL_STOP */ } #define ARES_SWAP_BYTE(a, b) \ do { \ unsigned char swapByte = *(a); \ *(a) = *(b); \ *(b) = swapByte; \ } while (0) static void ares_rc4_init(ares_rand_rc4 *rc4_state) { /* LCOV_EXCL_START: FallbackCode */ unsigned char key[ARES_RC4_KEY_LEN]; size_t i; size_t j; ares_rc4_generate_key(rc4_state, key, sizeof(key)); for (i = 0; i < sizeof(rc4_state->S); i++) { rc4_state->S[i] = i & 0xFF; } for (i = 0, j = 0; i < 256; i++) { j = (j + rc4_state->S[i] + key[i % sizeof(key)]) % 256; ARES_SWAP_BYTE(&rc4_state->S[i], &rc4_state->S[j]); } rc4_state->i = 0; rc4_state->j = 0; /* LCOV_EXCL_STOP */ } /* Just outputs the key schedule, no need to XOR with any data since we have * none */ static void ares_rc4_prng(ares_rand_rc4 *rc4_state, unsigned char *buf, size_t len) { /* LCOV_EXCL_START: FallbackCode */ unsigned char *S = rc4_state->S; size_t i = rc4_state->i; size_t j = rc4_state->j; size_t cnt; for (cnt = 0; cnt < len; cnt++) { i = (i + 1) % 256; j = (j + S[i]) % 256; ARES_SWAP_BYTE(&S[i], &S[j]); buf[cnt] = S[(S[i] + S[j]) % 256]; } rc4_state->i = i; rc4_state->j = j; /* LCOV_EXCL_STOP */ } struct ares_rand_state { ares_rand_backend type; ares_rand_backend bad_backends; union { FILE *rand_file; ares_rand_rc4 rc4; } state; /* Since except for RC4, random data will likely result in a syscall, lets * pre-pull 
256 bytes at a time. Every query will pull 2 bytes off this so * that means we should only need a syscall every 128 queries. 256bytes * appears to be a sweet spot that may be able to be served without * interruption */ unsigned char cache[256]; size_t cache_remaining; }; /* Define RtlGenRandom = SystemFunction036. This is in advapi32.dll. There is * no need to dynamically load this, other software used widely does not. * http://blogs.msdn.com/michael_howard/archive/2005/01/14/353379.aspx * https://docs.microsoft.com/en-us/windows/win32/api/ntsecapi/nf-ntsecapi-rtlgenrandom */ #ifdef _WIN32 BOOLEAN WINAPI SystemFunction036(PVOID RandomBuffer, ULONG RandomBufferLength); # ifndef RtlGenRandom # define RtlGenRandom(a, b) SystemFunction036(a, b) # endif #endif static ares_bool_t ares__init_rand_engine(ares_rand_state *state) { state->cache_remaining = 0; #if defined(HAVE_ARC4RANDOM_BUF) || defined(HAVE_GETRANDOM) || defined(_WIN32) if (!(state->bad_backends & ARES_RAND_OS)) { state->type = ARES_RAND_OS; return ARES_TRUE; } #endif #if defined(CARES_RANDOM_FILE) /* LCOV_EXCL_START: FallbackCode */ if (!(state->bad_backends & ARES_RAND_FILE)) { state->type = ARES_RAND_FILE; state->state.rand_file = fopen(CARES_RANDOM_FILE, "rb"); if (state->state.rand_file) { setvbuf(state->state.rand_file, NULL, _IONBF, 0); return ARES_TRUE; } } /* LCOV_EXCL_STOP */ /* Fall-Thru on failure to RC4 */ #endif /* LCOV_EXCL_START: FallbackCode */ state->type = ARES_RAND_RC4; ares_rc4_init(&state->state.rc4); /* LCOV_EXCL_STOP */ /* Currently cannot fail */ return ARES_TRUE; /* LCOV_EXCL_LINE: UntestablePath */ } ares_rand_state *ares__init_rand_state(void) { ares_rand_state *state = NULL; state = ares_malloc_zero(sizeof(*state)); if (!state) { return NULL; } if (!ares__init_rand_engine(state)) { ares_free(state); /* LCOV_EXCL_LINE: UntestablePath */ return NULL; /* LCOV_EXCL_LINE: UntestablePath */ } return state; } static void ares__clear_rand_state(ares_rand_state *state) { if (!state) { return; /* LCOV_EXCL_LINE: DefensiveCoding */ } switch (state->type) { case ARES_RAND_OS: break; /* LCOV_EXCL_START: FallbackCode */ case ARES_RAND_FILE: fclose(state->state.rand_file); break; case ARES_RAND_RC4: break; /* LCOV_EXCL_STOP */ } } static void ares__reinit_rand(ares_rand_state *state) { /* LCOV_EXCL_START: UntestablePath */ ares__clear_rand_state(state); ares__init_rand_engine(state); /* LCOV_EXCL_STOP */ } void ares__destroy_rand_state(ares_rand_state *state) { if (!state) { return; } ares__clear_rand_state(state); ares_free(state); } static void ares__rand_bytes_fetch(ares_rand_state *state, unsigned char *buf, size_t len) { while (1) { size_t bytes_read = 0; switch (state->type) { case ARES_RAND_OS: #ifdef _WIN32 RtlGenRandom(buf, (ULONG)len); return; #elif defined(HAVE_ARC4RANDOM_BUF) arc4random_buf(buf, len); return; #elif defined(HAVE_GETRANDOM) while (1) { size_t n = len - bytes_read; /* getrandom() on Linux always succeeds and is never * interrupted by a signal when requesting <= 256 bytes. */ ssize_t rv = getrandom(buf + bytes_read, n > 256 ? 256 : n, 0); if (rv <= 0) { /* We need to fall back to another backend */ if (errno == ENOSYS) { state->bad_backends |= ARES_RAND_OS; break; } continue; /* Just retry. 
*/ } bytes_read += (size_t)rv; if (bytes_read == len) { return; } } break; #else /* Shouldn't be possible to be here */ break; #endif /* LCOV_EXCL_START: FallbackCode */ case ARES_RAND_FILE: while (1) { size_t rv = fread(buf + bytes_read, 1, len - bytes_read, state->state.rand_file); if (rv == 0) { break; /* critical error, will reinit rand state */ } bytes_read += rv; if (bytes_read == len) { return; } } break; case ARES_RAND_RC4: ares_rc4_prng(&state->state.rc4, buf, len); return; /* LCOV_EXCL_STOP */ } /* If we didn't return before we got here, that means we had a critical rand * failure and need to reinitialized */ ares__reinit_rand(state); /* LCOV_EXCL_LINE: UntestablePath */ } } void ares__rand_bytes(ares_rand_state *state, unsigned char *buf, size_t len) { /* See if we need to refill the cache to serve the request, but if len is * excessive, we're not going to update our cache or serve from cache */ if (len > state->cache_remaining && len < sizeof(state->cache)) { size_t fetch_size = sizeof(state->cache) - state->cache_remaining; ares__rand_bytes_fetch(state, state->cache, fetch_size); state->cache_remaining = sizeof(state->cache); } /* Serve from cache */ if (len <= state->cache_remaining) { size_t offset = sizeof(state->cache) - state->cache_remaining; memcpy(buf, state->cache + offset, len); state->cache_remaining -= len; return; } /* Serve direct due to excess size of request */ ares__rand_bytes_fetch(state, buf, len); } unsigned short ares__generate_new_id(ares_rand_state *state) { unsigned short r = 0; ares__rand_bytes(state, (unsigned char *)&r, sizeof(r)); return r; } gevent-24.11.1/deps/c-ares/src/lib/windows_port.c000066400000000000000000000011231471441230600215130ustar00rootroot00000000000000/********************************************************************** * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (C) Daniel Stenberg * * SPDX-License-Identifier: MIT * */ #include "ares_private.h" /* only do the following on windows */ #if defined(_WIN32) && !defined(MSDOS) # ifdef __WATCOMC__ /* * Watcom needs a DllMain() in order to initialise the clib startup code. */ BOOL WINAPI DllMain(HINSTANCE hnd, DWORD reason, LPVOID reserved) { (void)hnd; (void)reason; (void)reserved; return (TRUE); } # endif #endif /* WIN32 builds only */ gevent-24.11.1/deps/c-ares/src/tools/000077500000000000000000000000001471441230600172065ustar00rootroot00000000000000gevent-24.11.1/deps/c-ares/src/tools/CMakeLists.txt000066400000000000000000000040541471441230600217510ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT IF (CARES_BUILD_TOOLS) # Transform Makefile.inc transform_makefile_inc("Makefile.inc" "${PROJECT_BINARY_DIR}/src/tools/Makefile.inc.cmake") include(${PROJECT_BINARY_DIR}/src/tools/Makefile.inc.cmake) # Build ahost ADD_EXECUTABLE (ahost ahost.c ${SAMPLESOURCES}) TARGET_INCLUDE_DIRECTORIES (ahost PUBLIC "$" "$" "$" "$" "$" PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}" ) SET_TARGET_PROPERTIES (ahost PROPERTIES C_STANDARD 90 ) IF (ANDROID) SET_TARGET_PROPERTIES (ahost PROPERTIES C_STANDARD 99) ENDIF () TARGET_COMPILE_DEFINITIONS (ahost PRIVATE HAVE_CONFIG_H=1 CARES_NO_DEPRECATED) TARGET_LINK_LIBRARIES (ahost PRIVATE ${PROJECT_NAME}) IF (CARES_INSTALL) INSTALL (TARGETS ahost COMPONENT Tools ${TARGETS_INST_DEST}) ENDIF () # Build adig ADD_EXECUTABLE (adig adig.c ${SAMPLESOURCES}) # Don't build adig and ahost in parallel. 
This is to prevent a Windows MSVC # build error due to them both using the same source files. ADD_DEPENDENCIES(adig ahost) TARGET_INCLUDE_DIRECTORIES (adig PUBLIC "$" "$" "$" "$" "$" PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}" ) SET_TARGET_PROPERTIES (adig PROPERTIES C_STANDARD 90 ) IF (ANDROID) SET_TARGET_PROPERTIES (adig PROPERTIES C_STANDARD 99) ENDIF () TARGET_COMPILE_DEFINITIONS (adig PRIVATE HAVE_CONFIG_H=1 CARES_NO_DEPRECATED) TARGET_LINK_LIBRARIES (adig PRIVATE ${PROJECT_NAME}) IF (CARES_INSTALL) INSTALL (TARGETS adig COMPONENT Tools ${TARGETS_INST_DEST}) ENDIF () ENDIF () gevent-24.11.1/deps/c-ares/src/tools/Makefile.am000066400000000000000000000022251471441230600212430ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign subdir-objects nostdinc 1.9.6 PROGS = ahost adig EXTRA_DIST = CMakeLists.txt Makefile.inc noinst_PROGRAMS =$(PROGS) # Specify our include paths here, and do it relative to $(top_srcdir) and # $(top_builddir), to ensure that these paths which belong to the library # being currently built and tested are searched before the library which # might possibly already be installed in the system. AM_CPPFLAGS += -I$(top_builddir)/include \ -I$(top_builddir)/src/lib \ -I$(top_srcdir)/include \ -I$(top_srcdir)/src/lib \ -DCARES_NO_DEPRECATED include Makefile.inc # We're not interested in code coverage of the test apps themselves, but need # to link with gcov if building with code coverage enabled LDADD = $(top_builddir)/src/lib/libcares.la $(CODE_COVERAGE_LIBS) ahost_SOURCES = ahost.c $(SAMPLESOURCES) $(SAMPLEHEADERS) ahost_CFLAGS = $(AM_CFLAGS) ahost_CPPFLAGS = $(AM_CPPFLAGS) adig_SOURCES = adig.c $(SAMPLESOURCES) $(SAMPLEHEADERS) adig_CFLAGS = $(AM_CFLAGS) adig_CPPFLAGS = $(AM_CPPFLAGS) gevent-24.11.1/deps/c-ares/src/tools/Makefile.in000066400000000000000000001057411471441230600212630ustar00rootroot00000000000000# Makefile.in generated by automake 1.17 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2024 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) am__rm_f = rm -f $(am__rm_f_notfound) am__rm_rf = rm -rf $(am__rm_f_notfound) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ noinst_PROGRAMS = $(am__EXEEXT_1) subdir = src/tools ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/ax_ac_append_to_file.m4 \ $(top_srcdir)/m4/ax_ac_print_to_file.m4 \ $(top_srcdir)/m4/ax_add_am_macro_static.m4 \ $(top_srcdir)/m4/ax_am_macros_static.m4 \ $(top_srcdir)/m4/ax_append_compile_flags.m4 \ $(top_srcdir)/m4/ax_append_flag.m4 \ $(top_srcdir)/m4/ax_append_link_flags.m4 \ $(top_srcdir)/m4/ax_check_compile_flag.m4 \ $(top_srcdir)/m4/ax_check_gnu_make.m4 \ $(top_srcdir)/m4/ax_check_link_flag.m4 \ $(top_srcdir)/m4/ax_check_user_namespace.m4 \ $(top_srcdir)/m4/ax_check_uts_namespace.m4 \ $(top_srcdir)/m4/ax_code_coverage.m4 \ $(top_srcdir)/m4/ax_compiler_vendor.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx.m4 \ $(top_srcdir)/m4/ax_cxx_compile_stdcxx_14.m4 \ $(top_srcdir)/m4/ax_file_escapes.m4 \ $(top_srcdir)/m4/ax_pthread.m4 \ $(top_srcdir)/m4/ax_require_defined.m4 \ $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ $(top_srcdir)/m4/lt~obsolete.m4 $(top_srcdir)/m4/pkg.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON) mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/src/lib/ares_config.h \ $(top_builddir)/include/ares_build.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__EXEEXT_1 = ahost$(EXEEXT) adig$(EXEEXT) PROGRAMS = $(noinst_PROGRAMS) am__dirstamp = $(am__leading_dot)dirstamp am__objects_1 = adig-ares_getopt.$(OBJEXT) \ ../lib/str/adig-ares_strcasecmp.$(OBJEXT) am__objects_2 = am_adig_OBJECTS = adig-adig.$(OBJEXT) $(am__objects_1) \ $(am__objects_2) adig_OBJECTS = $(am_adig_OBJECTS) adig_LDADD = $(LDADD) 
am__DEPENDENCIES_1 = adig_DEPENDENCIES = $(top_builddir)/src/lib/libcares.la \ $(am__DEPENDENCIES_1) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = adig_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(adig_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ am__objects_3 = ahost-ares_getopt.$(OBJEXT) \ ../lib/str/ahost-ares_strcasecmp.$(OBJEXT) am_ahost_OBJECTS = ahost-ahost.$(OBJEXT) $(am__objects_3) \ $(am__objects_2) ahost_OBJECTS = $(am_ahost_OBJECTS) ahost_LDADD = $(LDADD) ahost_DEPENDENCIES = $(top_builddir)/src/lib/libcares.la \ $(am__DEPENDENCIES_1) ahost_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(ahost_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = depcomp = $(SHELL) $(top_srcdir)/config/depcomp am__maybe_remake_depfiles = depfiles am__depfiles_remade = ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po \ ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po \ ./$(DEPDIR)/adig-adig.Po ./$(DEPDIR)/adig-ares_getopt.Po \ ./$(DEPDIR)/ahost-ahost.Po ./$(DEPDIR)/ahost-ares_getopt.Po am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(adig_SOURCES) $(ahost_SOURCES) DIST_SOURCES = $(adig_SOURCES) $(ahost_SOURCES) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP) # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. 
am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/Makefile.inc \ $(top_srcdir)/config/depcomp DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_CFLAGS = @AM_CFLAGS@ # Specify our include paths here, and do it relative to $(top_srcdir) and # $(top_builddir), to ensure that these paths which belong to the library # being currently built and tested are searched before the library which # might possibly already be installed in the system. AM_CPPFLAGS = @AM_CPPFLAGS@ -I$(top_builddir)/include \ -I$(top_builddir)/src/lib -I$(top_srcdir)/include \ -I$(top_srcdir)/src/lib -DCARES_NO_DEPRECATED AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AS = @AS@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ BUILD_SUBDIRS = @BUILD_SUBDIRS@ CARES_PRIVATE_LIBS = @CARES_PRIVATE_LIBS@ CARES_RANDOM_FILE = @CARES_RANDOM_FILE@ CARES_SYMBOL_HIDING_CFLAG = @CARES_SYMBOL_HIDING_CFLAG@ CARES_VERSION_INFO = @CARES_VERSION_INFO@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CODE_COVERAGE_CFLAGS = @CODE_COVERAGE_CFLAGS@ CODE_COVERAGE_CPPFLAGS = @CODE_COVERAGE_CPPFLAGS@ CODE_COVERAGE_CXXFLAGS = @CODE_COVERAGE_CXXFLAGS@ CODE_COVERAGE_ENABLED = @CODE_COVERAGE_ENABLED@ CODE_COVERAGE_LIBS = @CODE_COVERAGE_LIBS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CSCOPE = @CSCOPE@ CTAGS = @CTAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ ETAGS = @ETAGS@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ FILECMD = @FILECMD@ GCOV = @GCOV@ GENHTML = @GENHTML@ GMOCK112_CFLAGS = @GMOCK112_CFLAGS@ GMOCK112_LIBS = @GMOCK112_LIBS@ GMOCK_CFLAGS = @GMOCK_CFLAGS@ GMOCK_LIBS = @GMOCK_LIBS@ GREP = @GREP@ HAVE_CXX14 = @HAVE_CXX14@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LCOV = @LCOV@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PKGCONFIG_CFLAGS = @PKGCONFIG_CFLAGS@ PKG_CONFIG = @PKG_CONFIG@ PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@ PKG_CONFIG_PATH = @PKG_CONFIG_PATH@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_CXX = @PTHREAD_CXX@ PTHREAD_LIBS = @PTHREAD_LIBS@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ 
am__quote = @am__quote@ am__rm_f_notfound = @am__rm_f_notfound@ am__tar = @am__tar@ am__untar = @am__untar@ am__xargs_n = @am__xargs_n@ ax_pthread_config = @ax_pthread_config@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ ifGNUmake = @ifGNUmake@ ifnGNUmake = @ifnGNUmake@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT AUTOMAKE_OPTIONS = foreign subdir-objects nostdinc 1.9.6 PROGS = ahost adig EXTRA_DIST = CMakeLists.txt Makefile.inc # Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT SAMPLESOURCES = ares_getopt.c \ ../lib/str/ares_strcasecmp.c SAMPLEHEADERS = ares_getopt.h \ ../lib/str/ares_strcasecmp.h # We're not interested in code coverage of the test apps themselves, but need # to link with gcov if building with code coverage enabled LDADD = $(top_builddir)/src/lib/libcares.la $(CODE_COVERAGE_LIBS) ahost_SOURCES = ahost.c $(SAMPLESOURCES) $(SAMPLEHEADERS) ahost_CFLAGS = $(AM_CFLAGS) ahost_CPPFLAGS = $(AM_CPPFLAGS) adig_SOURCES = adig.c $(SAMPLESOURCES) $(SAMPLEHEADERS) adig_CFLAGS = $(AM_CFLAGS) adig_CPPFLAGS = $(AM_CPPFLAGS) all: all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(srcdir)/Makefile.inc $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ && { if test -f $@; then exit 0; else break; fi; }; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign src/tools/Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign src/tools/Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__maybe_remake_depfiles);; \ esac; $(srcdir)/Makefile.inc $(am__empty): $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(am__aclocal_m4_deps): clean-noinstPROGRAMS: $(am__rm_f) $(noinst_PROGRAMS) test -z "$(EXEEXT)" || $(am__rm_f) $(noinst_PROGRAMS:$(EXEEXT)=) ../lib/str/$(am__dirstamp): @$(MKDIR_P) ../lib/str @: >>../lib/str/$(am__dirstamp) ../lib/str/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) ../lib/str/$(DEPDIR) @: >>../lib/str/$(DEPDIR)/$(am__dirstamp) ../lib/str/adig-ares_strcasecmp.$(OBJEXT): ../lib/str/$(am__dirstamp) \ ../lib/str/$(DEPDIR)/$(am__dirstamp) adig$(EXEEXT): $(adig_OBJECTS) $(adig_DEPENDENCIES) $(EXTRA_adig_DEPENDENCIES) @rm -f adig$(EXEEXT) $(AM_V_CCLD)$(adig_LINK) $(adig_OBJECTS) $(adig_LDADD) $(LIBS) ../lib/str/ahost-ares_strcasecmp.$(OBJEXT): \ ../lib/str/$(am__dirstamp) \ ../lib/str/$(DEPDIR)/$(am__dirstamp) ahost$(EXEEXT): $(ahost_OBJECTS) $(ahost_DEPENDENCIES) $(EXTRA_ahost_DEPENDENCIES) @rm -f ahost$(EXEEXT) $(AM_V_CCLD)$(ahost_LINK) $(ahost_OBJECTS) $(ahost_LDADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) -rm -f ../lib/str/*.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/adig-adig.Po@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/adig-ares_getopt.Po@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ahost-ahost.Po@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ahost-ares_getopt.Po@am__quote@ # am--include-marker $(am__depfiles_remade): @$(MKDIR_P) $(@D) @: >>$@ am--depfiles: $(am__depfiles_remade) .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\ @am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\ @am__fastdepCC_TRUE@ $(COMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.lo$$||'`;\ 
@am__fastdepCC_TRUE@ $(LTCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCC_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< adig-adig.o: adig.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT adig-adig.o -MD -MP -MF $(DEPDIR)/adig-adig.Tpo -c -o adig-adig.o `test -f 'adig.c' || echo '$(srcdir)/'`adig.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/adig-adig.Tpo $(DEPDIR)/adig-adig.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='adig.c' object='adig-adig.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o adig-adig.o `test -f 'adig.c' || echo '$(srcdir)/'`adig.c adig-adig.obj: adig.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT adig-adig.obj -MD -MP -MF $(DEPDIR)/adig-adig.Tpo -c -o adig-adig.obj `if test -f 'adig.c'; then $(CYGPATH_W) 'adig.c'; else $(CYGPATH_W) '$(srcdir)/adig.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/adig-adig.Tpo $(DEPDIR)/adig-adig.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='adig.c' object='adig-adig.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o adig-adig.obj `if test -f 'adig.c'; then $(CYGPATH_W) 'adig.c'; else $(CYGPATH_W) '$(srcdir)/adig.c'; fi` adig-ares_getopt.o: ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT adig-ares_getopt.o -MD -MP -MF $(DEPDIR)/adig-ares_getopt.Tpo -c -o adig-ares_getopt.o `test -f 'ares_getopt.c' || echo '$(srcdir)/'`ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/adig-ares_getopt.Tpo $(DEPDIR)/adig-ares_getopt.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getopt.c' object='adig-ares_getopt.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o adig-ares_getopt.o `test -f 'ares_getopt.c' || echo '$(srcdir)/'`ares_getopt.c adig-ares_getopt.obj: ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT adig-ares_getopt.obj -MD -MP -MF $(DEPDIR)/adig-ares_getopt.Tpo -c -o adig-ares_getopt.obj `if test -f 'ares_getopt.c'; then $(CYGPATH_W) 'ares_getopt.c'; else $(CYGPATH_W) '$(srcdir)/ares_getopt.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/adig-ares_getopt.Tpo $(DEPDIR)/adig-ares_getopt.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getopt.c' object='adig-ares_getopt.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) 
$(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o adig-ares_getopt.obj `if test -f 'ares_getopt.c'; then $(CYGPATH_W) 'ares_getopt.c'; else $(CYGPATH_W) '$(srcdir)/ares_getopt.c'; fi` ../lib/str/adig-ares_strcasecmp.o: ../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT ../lib/str/adig-ares_strcasecmp.o -MD -MP -MF ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Tpo -c -o ../lib/str/adig-ares_strcasecmp.o `test -f '../lib/str/ares_strcasecmp.c' || echo '$(srcdir)/'`../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Tpo ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='../lib/str/ares_strcasecmp.c' object='../lib/str/adig-ares_strcasecmp.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o ../lib/str/adig-ares_strcasecmp.o `test -f '../lib/str/ares_strcasecmp.c' || echo '$(srcdir)/'`../lib/str/ares_strcasecmp.c ../lib/str/adig-ares_strcasecmp.obj: ../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -MT ../lib/str/adig-ares_strcasecmp.obj -MD -MP -MF ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Tpo -c -o ../lib/str/adig-ares_strcasecmp.obj `if test -f '../lib/str/ares_strcasecmp.c'; then $(CYGPATH_W) '../lib/str/ares_strcasecmp.c'; else $(CYGPATH_W) '$(srcdir)/../lib/str/ares_strcasecmp.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Tpo ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='../lib/str/ares_strcasecmp.c' object='../lib/str/adig-ares_strcasecmp.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(adig_CPPFLAGS) $(CPPFLAGS) $(adig_CFLAGS) $(CFLAGS) -c -o ../lib/str/adig-ares_strcasecmp.obj `if test -f '../lib/str/ares_strcasecmp.c'; then $(CYGPATH_W) '../lib/str/ares_strcasecmp.c'; else $(CYGPATH_W) '$(srcdir)/../lib/str/ares_strcasecmp.c'; fi` ahost-ahost.o: ahost.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT ahost-ahost.o -MD -MP -MF $(DEPDIR)/ahost-ahost.Tpo -c -o ahost-ahost.o `test -f 'ahost.c' || echo '$(srcdir)/'`ahost.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ahost-ahost.Tpo $(DEPDIR)/ahost-ahost.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ahost.c' object='ahost-ahost.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o ahost-ahost.o `test -f 'ahost.c' || echo '$(srcdir)/'`ahost.c ahost-ahost.obj: ahost.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT 
ahost-ahost.obj -MD -MP -MF $(DEPDIR)/ahost-ahost.Tpo -c -o ahost-ahost.obj `if test -f 'ahost.c'; then $(CYGPATH_W) 'ahost.c'; else $(CYGPATH_W) '$(srcdir)/ahost.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ahost-ahost.Tpo $(DEPDIR)/ahost-ahost.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ahost.c' object='ahost-ahost.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o ahost-ahost.obj `if test -f 'ahost.c'; then $(CYGPATH_W) 'ahost.c'; else $(CYGPATH_W) '$(srcdir)/ahost.c'; fi` ahost-ares_getopt.o: ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT ahost-ares_getopt.o -MD -MP -MF $(DEPDIR)/ahost-ares_getopt.Tpo -c -o ahost-ares_getopt.o `test -f 'ares_getopt.c' || echo '$(srcdir)/'`ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ahost-ares_getopt.Tpo $(DEPDIR)/ahost-ares_getopt.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getopt.c' object='ahost-ares_getopt.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o ahost-ares_getopt.o `test -f 'ares_getopt.c' || echo '$(srcdir)/'`ares_getopt.c ahost-ares_getopt.obj: ares_getopt.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT ahost-ares_getopt.obj -MD -MP -MF $(DEPDIR)/ahost-ares_getopt.Tpo -c -o ahost-ares_getopt.obj `if test -f 'ares_getopt.c'; then $(CYGPATH_W) 'ares_getopt.c'; else $(CYGPATH_W) '$(srcdir)/ares_getopt.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ahost-ares_getopt.Tpo $(DEPDIR)/ahost-ares_getopt.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='ares_getopt.c' object='ahost-ares_getopt.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o ahost-ares_getopt.obj `if test -f 'ares_getopt.c'; then $(CYGPATH_W) 'ares_getopt.c'; else $(CYGPATH_W) '$(srcdir)/ares_getopt.c'; fi` ../lib/str/ahost-ares_strcasecmp.o: ../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT ../lib/str/ahost-ares_strcasecmp.o -MD -MP -MF ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Tpo -c -o ../lib/str/ahost-ares_strcasecmp.o `test -f '../lib/str/ares_strcasecmp.c' || echo '$(srcdir)/'`../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Tpo ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='../lib/str/ares_strcasecmp.c' object='../lib/str/ahost-ares_strcasecmp.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o 
../lib/str/ahost-ares_strcasecmp.o `test -f '../lib/str/ares_strcasecmp.c' || echo '$(srcdir)/'`../lib/str/ares_strcasecmp.c ../lib/str/ahost-ares_strcasecmp.obj: ../lib/str/ares_strcasecmp.c @am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -MT ../lib/str/ahost-ares_strcasecmp.obj -MD -MP -MF ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Tpo -c -o ../lib/str/ahost-ares_strcasecmp.obj `if test -f '../lib/str/ares_strcasecmp.c'; then $(CYGPATH_W) '../lib/str/ares_strcasecmp.c'; else $(CYGPATH_W) '$(srcdir)/../lib/str/ares_strcasecmp.c'; fi` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Tpo ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='../lib/str/ares_strcasecmp.c' object='../lib/str/ahost-ares_strcasecmp.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ahost_CPPFLAGS) $(CPPFLAGS) $(ahost_CFLAGS) $(CFLAGS) -c -o ../lib/str/ahost-ares_strcasecmp.obj `if test -f '../lib/str/ares_strcasecmp.c'; then $(CYGPATH_W) '../lib/str/ares_strcasecmp.c'; else $(CYGPATH_W) '$(srcdir)/../lib/str/ares_strcasecmp.c'; fi` mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! 
-perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done check-am: all-am check: check-am all-am: Makefile $(PROGRAMS) installdirs: install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -$(am__rm_f) $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || $(am__rm_f) $(CONFIG_CLEAN_VPATH_FILES) -$(am__rm_f) ../lib/str/$(DEPDIR)/$(am__dirstamp) -$(am__rm_f) ../lib/str/$(am__dirstamp) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic clean-libtool clean-noinstPROGRAMS \ mostlyclean-am distclean: distclean-am -rm -f ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po -rm -f ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po -rm -f ./$(DEPDIR)/adig-adig.Po -rm -f ./$(DEPDIR)/adig-ares_getopt.Po -rm -f ./$(DEPDIR)/ahost-ahost.Po -rm -f ./$(DEPDIR)/ahost-ares_getopt.Po -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f ../lib/str/$(DEPDIR)/adig-ares_strcasecmp.Po -rm -f ../lib/str/$(DEPDIR)/ahost-ares_strcasecmp.Po -rm -f ./$(DEPDIR)/adig-adig.Po -rm -f ./$(DEPDIR)/adig-ares_getopt.Po -rm -f ./$(DEPDIR)/ahost-ahost.Po -rm -f ./$(DEPDIR)/ahost-ares_getopt.Po -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: .MAKE: install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am am--depfiles check check-am clean \ clean-generic clean-libtool clean-noinstPROGRAMS cscopelist-am \ ctags ctags-am distclean distclean-compile distclean-generic \ distclean-libtool distclean-tags distdir dvi dvi-am html \ html-am info info-am install install-am install-data \ install-data-am install-dvi install-dvi-am install-exec \ install-exec-am install-html install-html-am install-info \ install-info-am install-man install-pdf install-pdf-am \ install-ps install-ps-am install-strip installcheck \ installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags 
tags-am uninstall uninstall-am .PRECIOUS: Makefile # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: # Tell GNU make to disable its built-in pattern rules. %:: %,v %:: RCS/%,v %:: RCS/% %:: s.% %:: SCCS/s.% gevent-24.11.1/deps/c-ares/src/tools/Makefile.inc000066400000000000000000000003301471441230600214120ustar00rootroot00000000000000# Copyright (C) The c-ares project and its contributors # SPDX-License-Identifier: MIT SAMPLESOURCES = ares_getopt.c \ ../lib/str/ares_strcasecmp.c SAMPLEHEADERS = ares_getopt.h \ ../lib/str/ares_strcasecmp.h gevent-24.11.1/deps/c-ares/src/tools/adig.c000066400000000000000000000640231471441230600202630ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
* * SPDX-License-Identifier: MIT */ #include "ares_setup.h" #ifdef HAVE_NETINET_IN_H # include #endif #ifdef HAVE_ARPA_INET_H # include #endif #ifdef HAVE_NETDB_H # include #endif #include "ares_nameser.h" #ifdef HAVE_STRINGS_H # include #endif #include "ares.h" #include "ares_dns.h" #ifndef HAVE_STRDUP # include "str/ares_str.h" # define strdup(ptr) ares_strdup(ptr) #endif #ifndef HAVE_STRCASECMP # include "str/ares_strcasecmp.h" # define strcasecmp(p1, p2) ares_strcasecmp(p1, p2) #endif #ifndef HAVE_STRNCASECMP # include "str/ares_strcasecmp.h" # define strncasecmp(p1, p2, n) ares_strncasecmp(p1, p2, n) #endif #include "ares_getopt.h" typedef struct { ares_bool_t is_help; struct ares_options options; int optmask; ares_dns_class_t qclass; ares_dns_rec_type_t qtype; int args_processed; char *servers; char error[256]; } adig_config_t; typedef struct { const char *name; int value; } nv_t; static const nv_t configflags[] = { { "usevc", ARES_FLAG_USEVC }, { "primary", ARES_FLAG_PRIMARY }, { "igntc", ARES_FLAG_IGNTC }, { "norecurse", ARES_FLAG_NORECURSE }, { "stayopen", ARES_FLAG_STAYOPEN }, { "noaliases", ARES_FLAG_NOALIASES }, { "edns", ARES_FLAG_EDNS }, { "dns0x20", ARES_FLAG_DNS0x20 } }; static const size_t nconfigflags = sizeof(configflags) / sizeof(*configflags); static int lookup_flag(const nv_t *nv, size_t num_nv, const char *name) { size_t i; if (name == NULL) { return 0; } for (i = 0; i < num_nv; i++) { if (strcasecmp(nv[i].name, name) == 0) { return nv[i].value; } } return 0; } static void free_config(adig_config_t *config) { free(config->servers); memset(config, 0, sizeof(*config)); } static void print_help(void) { /* Split due to maximum c89 string literal of 509 bytes */ printf("adig version %s\n\n", ares_version(NULL)); printf( "usage: adig [-h] [-d] [-f flag] [[-s server] ...] [-T|U port] [-c class]\n" " [-t type] name ...\n\n"); printf(" -h : Display this help and exit.\n"); printf(" -d : Print some extra debugging output.\n"); printf( " -f flag : Add a behavior control flag. May be specified more than " "once\n" " to add additional flags. Possible values are:\n" " igntc - do not retry a truncated query as TCP, just\n" " return the truncated answer\n" " noaliases - don't honor the HOSTALIASES environment\n" " variable\n"); printf(" norecurse - don't query upstream servers recursively\n" " primary - use the first server\n" " stayopen - don't close the communication sockets\n" " usevc - use TCP only\n" " edns - use EDNS\n" " dns0x20 - enable DNS 0x20 support\n"); printf( " -s server : Connect to the specified DNS server, instead of the\n" " system's default one(s). Servers are tried in round-robin,\n" " if the previous one failed.\n"); printf(" -T port : Connect to the specified TCP port of DNS server.\n"); printf(" -U port : Connect to the specified UDP port of DNS server.\n"); printf(" -c class : Set the query class. Possible values for class are:\n" " ANY, CHAOS, HS and IN (default)\n"); printf( " -t type : Query records of the specified type. 
Possible values for\n" " type are:\n" " A (default), AAAA, ANY, CNAME, HINFO, MX, NAPTR, NS, PTR,\n" " SOA, SRV, TXT, TLSA, URI, CAA, SVCB, HTTPS\n\n"); } static ares_bool_t read_cmdline(int argc, const char * const *argv, adig_config_t *config) { ares_getopt_state_t state; int c; ares_getopt_init(&state, argc, argv); state.opterr = 0; while ((c = ares_getopt(&state, "dh?f:s:c:t:T:U:")) != -1) { int f; switch (c) { case 'd': #ifdef WATT32 dbug_init(); #endif break; case 'h': config->is_help = ARES_TRUE; return ARES_TRUE; case 'f': f = lookup_flag(configflags, nconfigflags, state.optarg); if (f == 0) { snprintf(config->error, sizeof(config->error), "flag %s unknown", state.optarg); } config->options.flags |= f; config->optmask |= ARES_OPT_FLAGS; break; case 's': if (state.optarg == NULL) { snprintf(config->error, sizeof(config->error), "%s", "missing servers"); return ARES_FALSE; } if (config->servers) { free(config->servers); } config->servers = strdup(state.optarg); break; case 'c': if (!ares_dns_class_fromstr(&config->qclass, state.optarg)) { snprintf(config->error, sizeof(config->error), "unrecognized class %s", state.optarg); return ARES_FALSE; } break; case 't': if (!ares_dns_rec_type_fromstr(&config->qtype, state.optarg)) { snprintf(config->error, sizeof(config->error), "unrecognized type %s", state.optarg); return ARES_FALSE; } break; case 'T': { /* Set the TCP port number. */ long port = strtol(state.optarg, NULL, 0); if (port <= 0 || port > 65535) { snprintf(config->error, sizeof(config->error), "invalid port number"); return ARES_FALSE; } config->options.tcp_port = (unsigned short)port; config->options.flags |= ARES_FLAG_USEVC; config->optmask |= ARES_OPT_TCP_PORT; } break; case 'U': { /* Set the TCP port number. */ long port = strtol(state.optarg, NULL, 0); if (port <= 0 || port > 65535) { snprintf(config->error, sizeof(config->error), "invalid port number"); return ARES_FALSE; } config->options.udp_port = (unsigned short)port; config->options.flags |= ARES_FLAG_USEVC; config->optmask |= ARES_OPT_UDP_PORT; } break; case ':': snprintf(config->error, sizeof(config->error), "%c requires an argument", state.optopt); return ARES_FALSE; default: snprintf(config->error, sizeof(config->error), "unrecognized option: %c", state.optopt); return ARES_FALSE; } } config->args_processed = state.optind; if (config->args_processed >= argc) { snprintf(config->error, sizeof(config->error), "missing query name"); return ARES_FALSE; } return ARES_TRUE; } static void print_flags(ares_dns_flags_t flags) { if (flags & ARES_FLAG_QR) { printf(" qr"); } if (flags & ARES_FLAG_AA) { printf(" aa"); } if (flags & ARES_FLAG_TC) { printf(" tc"); } if (flags & ARES_FLAG_RD) { printf(" rd"); } if (flags & ARES_FLAG_RA) { printf(" ra"); } if (flags & ARES_FLAG_AD) { printf(" ad"); } if (flags & ARES_FLAG_CD) { printf(" cd"); } } static void print_header(const ares_dns_record_t *dnsrec) { printf(";; ->>HEADER<<- opcode: %s, status: %s, id: %u\n", ares_dns_opcode_tostr(ares_dns_record_get_opcode(dnsrec)), ares_dns_rcode_tostr(ares_dns_record_get_rcode(dnsrec)), ares_dns_record_get_id(dnsrec)); printf(";; flags:"); print_flags(ares_dns_record_get_flags(dnsrec)); printf("; QUERY: %u, ANSWER: %u, AUTHORITY: %u, ADDITIONAL: %u\n\n", (unsigned int)ares_dns_record_query_cnt(dnsrec), (unsigned int)ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ANSWER), (unsigned int)ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_AUTHORITY), (unsigned int)ares_dns_record_rr_cnt(dnsrec, ARES_SECTION_ADDITIONAL)); } static void print_question(const 
ares_dns_record_t *dnsrec) { size_t i; printf(";; QUESTION SECTION:\n"); for (i = 0; i < ares_dns_record_query_cnt(dnsrec); i++) { const char *name; ares_dns_rec_type_t qtype; ares_dns_class_t qclass; size_t len; if (ares_dns_record_query_get(dnsrec, i, &name, &qtype, &qclass) != ARES_SUCCESS) { return; } if (name == NULL) { return; } len = strlen(name); printf(";%s.\t", name); if (len + 1 < 24) { printf("\t"); } if (len + 1 < 16) { printf("\t"); } printf("%s\t%s\n", ares_dns_class_tostr(qclass), ares_dns_rec_type_tostr(qtype)); } printf("\n"); } static void print_opt_none(const unsigned char *val, size_t val_len) { (void)val; if (val_len != 0) { printf("INVALID!"); } } static void print_opt_addr_list(const unsigned char *val, size_t val_len) { size_t i; if (val_len % 4 != 0) { printf("INVALID!"); return; } for (i = 0; i < val_len; i += 4) { char buf[256] = ""; ares_inet_ntop(AF_INET, val + i, buf, sizeof(buf)); if (i != 0) { printf(","); } printf("%s", buf); } } static void print_opt_addr6_list(const unsigned char *val, size_t val_len) { size_t i; if (val_len % 16 != 0) { printf("INVALID!"); return; } for (i = 0; i < val_len; i += 16) { char buf[256] = ""; ares_inet_ntop(AF_INET6, val + i, buf, sizeof(buf)); if (i != 0) { printf(","); } printf("%s", buf); } } static void print_opt_u8_list(const unsigned char *val, size_t val_len) { size_t i; for (i = 0; i < val_len; i++) { if (i != 0) { printf(","); } printf("%u", (unsigned int)val[i]); } } static void print_opt_u16_list(const unsigned char *val, size_t val_len) { size_t i; if (val_len < 2 || val_len % 2 != 0) { printf("INVALID!"); return; } for (i = 0; i < val_len; i += 2) { unsigned short u16 = 0; unsigned short c; /* Jumping over backwards to try to avoid odd compiler warnings */ c = (unsigned short)val[i]; u16 |= (unsigned short)((c << 8) & 0xFFFF); c = (unsigned short)val[i + 1]; u16 |= c; if (i != 0) { printf(","); } printf("%u", (unsigned int)u16); } } static void print_opt_u32_list(const unsigned char *val, size_t val_len) { size_t i; if (val_len < 4 || val_len % 4 != 0) { printf("INVALID!"); return; } for (i = 0; i < val_len; i += 4) { unsigned int u32 = 0; u32 |= (unsigned int)(val[i] << 24); u32 |= (unsigned int)(val[i + 1] << 16); u32 |= (unsigned int)(val[i + 2] << 8); u32 |= (unsigned int)(val[i + 3]); if (i != 0) { printf(","); } printf("%u", u32); } } static void print_opt_str_list(const unsigned char *val, size_t val_len) { size_t cnt = 0; printf("\""); while (val_len) { long read_len = 0; unsigned char *str = NULL; ares_status_t status; if (cnt) { printf(","); } status = (ares_status_t)ares_expand_string(val, val, (int)val_len, &str, &read_len); if (status != ARES_SUCCESS) { printf("INVALID"); break; } printf("%s", str); ares_free_string(str); val_len -= (size_t)read_len; val += read_len; cnt++; } printf("\""); } static void print_opt_name(const unsigned char *val, size_t val_len) { char *str = NULL; long read_len = 0; if (ares_expand_name(val, val, (int)val_len, &str, &read_len) != ARES_SUCCESS) { printf("INVALID!"); return; } printf("%s.", str); ares_free_string(str); } static void print_opt_bin(const unsigned char *val, size_t val_len) { size_t i; for (i = 0; i < val_len; i++) { printf("%02x", (unsigned int)val[i]); } } static ares_bool_t adig_isprint(int ch) { if (ch >= 0x20 && ch <= 0x7E) { return ARES_TRUE; } return ARES_FALSE; } static void print_opt_binp(const unsigned char *val, size_t val_len) { size_t i; printf("\""); for (i = 0; i < val_len; i++) { if (adig_isprint(val[i])) { printf("%c", val[i]); } else { 
printf("\\%03d", val[i]); } } printf("\""); } static void print_opts(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { size_t i; for (i = 0; i < ares_dns_rr_get_opt_cnt(rr, key); i++) { size_t val_len = 0; const unsigned char *val = NULL; unsigned short opt; const char *name; if (i != 0) { printf(" "); } opt = ares_dns_rr_get_opt(rr, key, i, &val, &val_len); name = ares_dns_opt_get_name(key, opt); if (name == NULL) { printf("key%u", (unsigned int)opt); } else { printf("%s", name); } if (val_len == 0) { return; } printf("="); switch (ares_dns_opt_get_datatype(key, opt)) { case ARES_OPT_DATATYPE_NONE: print_opt_none(val, val_len); break; case ARES_OPT_DATATYPE_U8_LIST: print_opt_u8_list(val, val_len); break; case ARES_OPT_DATATYPE_INADDR4_LIST: print_opt_addr_list(val, val_len); break; case ARES_OPT_DATATYPE_INADDR6_LIST: print_opt_addr6_list(val, val_len); break; case ARES_OPT_DATATYPE_U16: case ARES_OPT_DATATYPE_U16_LIST: print_opt_u16_list(val, val_len); break; case ARES_OPT_DATATYPE_U32: case ARES_OPT_DATATYPE_U32_LIST: print_opt_u32_list(val, val_len); break; case ARES_OPT_DATATYPE_STR_LIST: print_opt_str_list(val, val_len); break; case ARES_OPT_DATATYPE_BIN: print_opt_bin(val, val_len); break; case ARES_OPT_DATATYPE_NAME: print_opt_name(val, val_len); break; } } } static void print_addr(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { const struct in_addr *addr = ares_dns_rr_get_addr(rr, key); char buf[256] = ""; ares_inet_ntop(AF_INET, addr, buf, sizeof(buf)); printf("%s", buf); } static void print_addr6(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { const struct ares_in6_addr *addr = ares_dns_rr_get_addr6(rr, key); char buf[256] = ""; ares_inet_ntop(AF_INET6, addr, buf, sizeof(buf)); printf("%s", buf); } static void print_u8(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { unsigned char u8 = ares_dns_rr_get_u8(rr, key); printf("%u", (unsigned int)u8); } static void print_u16(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { unsigned short u16 = ares_dns_rr_get_u16(rr, key); printf("%u", (unsigned int)u16); } static void print_u32(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { unsigned int u32 = ares_dns_rr_get_u32(rr, key); printf("%u", u32); } static void print_name(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { const char *str = ares_dns_rr_get_str(rr, key); printf("%s.", str); } static void print_str(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { const char *str = ares_dns_rr_get_str(rr, key); printf("\"%s\"", str); } static void print_bin(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { size_t len = 0; const unsigned char *binp = ares_dns_rr_get_bin(rr, key, &len); print_opt_bin(binp, len); } static void print_binp(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { size_t len; const unsigned char *binp = ares_dns_rr_get_bin(rr, key, &len); print_opt_binp(binp, len); } static void print_abinp(const ares_dns_rr_t *rr, ares_dns_rr_key_t key) { size_t i; size_t cnt = ares_dns_rr_get_abin_cnt(rr, key); for (i = 0; i < cnt; i++) { size_t len; const unsigned char *binp = ares_dns_rr_get_abin(rr, key, i, &len); if (i != 0) { printf(" "); } print_opt_binp(binp, len); } } static void print_rr(const ares_dns_rr_t *rr) { const char *name = ares_dns_rr_get_name(rr); size_t len = 0; size_t keys_cnt = 0; ares_dns_rec_type_t rtype = ares_dns_rr_get_type(rr); const ares_dns_rr_key_t *keys = ares_dns_rr_get_keys(rtype, &keys_cnt); size_t i; if (name == NULL) { return; } len = strlen(name); printf("%s.\t", name); if (len < 24) { printf("\t"); } printf("%u\t%s\t%s\t", 
ares_dns_rr_get_ttl(rr), ares_dns_class_tostr(ares_dns_rr_get_class(rr)), ares_dns_rec_type_tostr(rtype)); /* Output params here */ for (i = 0; i < keys_cnt; i++) { ares_dns_datatype_t datatype = ares_dns_rr_key_datatype(keys[i]); if (i != 0) { printf(" "); } switch (datatype) { case ARES_DATATYPE_INADDR: print_addr(rr, keys[i]); break; case ARES_DATATYPE_INADDR6: print_addr6(rr, keys[i]); break; case ARES_DATATYPE_U8: print_u8(rr, keys[i]); break; case ARES_DATATYPE_U16: print_u16(rr, keys[i]); break; case ARES_DATATYPE_U32: print_u32(rr, keys[i]); break; case ARES_DATATYPE_NAME: print_name(rr, keys[i]); break; case ARES_DATATYPE_STR: print_str(rr, keys[i]); break; case ARES_DATATYPE_BIN: print_bin(rr, keys[i]); break; case ARES_DATATYPE_BINP: print_binp(rr, keys[i]); break; case ARES_DATATYPE_ABINP: print_abinp(rr, keys[i]); break; case ARES_DATATYPE_OPT: print_opts(rr, keys[i]); break; } } printf("\n"); } static const ares_dns_rr_t *has_opt(ares_dns_record_t *dnsrec, ares_dns_section_t section) { size_t i; for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, section); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, section, i); if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) { return rr; } } return NULL; } static void print_section(ares_dns_record_t *dnsrec, ares_dns_section_t section) { size_t i; if (ares_dns_record_rr_cnt(dnsrec, section) == 0 || (ares_dns_record_rr_cnt(dnsrec, section) == 1 && has_opt(dnsrec, section) != NULL)) { return; } printf(";; %s SECTION:\n", ares_dns_section_tostr(section)); for (i = 0; i < ares_dns_record_rr_cnt(dnsrec, section); i++) { const ares_dns_rr_t *rr = ares_dns_record_rr_get(dnsrec, section, i); if (ares_dns_rr_get_type(rr) == ARES_REC_TYPE_OPT) { continue; } print_rr(rr); } printf("\n"); } static void print_opt_psuedosection(ares_dns_record_t *dnsrec) { const ares_dns_rr_t *rr = has_opt(dnsrec, ARES_SECTION_ADDITIONAL); const unsigned char *cookie = NULL; size_t cookie_len = 0; if (rr == NULL) { return; } if (!ares_dns_rr_get_opt_byid(rr, ARES_RR_OPT_OPTIONS, ARES_OPT_PARAM_COOKIE, &cookie, &cookie_len)) { cookie = NULL; } printf(";; OPT PSEUDOSECTION:\n"); printf("; EDNS: version: %u, flags: %u; udp: %u\n", (unsigned int)ares_dns_rr_get_u8(rr, ARES_RR_OPT_VERSION), (unsigned int)ares_dns_rr_get_u16(rr, ARES_RR_OPT_FLAGS), (unsigned int)ares_dns_rr_get_u16(rr, ARES_RR_OPT_UDP_SIZE)); if (cookie) { printf("; COOKIE: "); print_opt_bin(cookie, cookie_len); printf(" (good)\n"); } } static void callback(void *arg, int status, int timeouts, unsigned char *abuf, int alen) { ares_dns_record_t *dnsrec = NULL; (void)arg; (void)timeouts; /* We got a "Server status" */ if (status >= ARES_SUCCESS && status <= ARES_EREFUSED) { printf(";; Got answer:"); } else { printf(";;"); } if (status != ARES_SUCCESS) { printf(" %s", ares_strerror(status)); } printf("\n"); if (abuf == NULL || alen == 0) { return; } status = (int)ares_dns_parse(abuf, (size_t)alen, 0, &dnsrec); if (status != ARES_SUCCESS) { fprintf(stderr, ";; FAILED TO PARSE DNS PACKET: %s\n", ares_strerror(status)); return; } print_header(dnsrec); print_opt_psuedosection(dnsrec); print_question(dnsrec); print_section(dnsrec, ARES_SECTION_ANSWER); print_section(dnsrec, ARES_SECTION_ADDITIONAL); print_section(dnsrec, ARES_SECTION_AUTHORITY); printf(";; MSG SIZE rcvd: %d\n\n", alen); ares_dns_record_destroy(dnsrec); } static ares_status_t enqueue_query(ares_channel_t *channel, const adig_config_t *config, const char *name) { ares_dns_record_t *dnsrec = NULL; ares_dns_rr_t *rr = NULL; ares_status_t 
status; unsigned char *buf = NULL; size_t buf_len = 0; unsigned short flags = 0; char *nametemp = NULL; if (!(config->options.flags & ARES_FLAG_NORECURSE)) { flags |= ARES_FLAG_RD; } status = ares_dns_record_create(&dnsrec, 0, flags, ARES_OPCODE_QUERY, ARES_RCODE_NOERROR); if (status != ARES_SUCCESS) { goto done; } /* If it is a PTR record, convert from ip address into in-arpa form * automatically */ if (config->qtype == ARES_REC_TYPE_PTR) { struct ares_addr addr; size_t len; addr.family = AF_UNSPEC; if (ares_dns_pton(name, &addr, &len) != NULL) { nametemp = ares_dns_addr_to_ptr(&addr); name = nametemp; } } status = ares_dns_record_query_add(dnsrec, name, config->qtype, config->qclass); if (status != ARES_SUCCESS) { goto done; } status = ares_dns_record_rr_add(&rr, dnsrec, ARES_SECTION_ADDITIONAL, "", ARES_REC_TYPE_OPT, ARES_CLASS_IN, 0); if (status != ARES_SUCCESS) { goto done; } ares_dns_rr_set_u16(rr, ARES_RR_OPT_UDP_SIZE, 1280); ares_dns_rr_set_u8(rr, ARES_RR_OPT_VERSION, 0); status = ares_dns_write(dnsrec, &buf, &buf_len); if (status != ARES_SUCCESS) { goto done; } ares_send(channel, buf, (int)buf_len, callback, NULL); ares_free_string(buf); done: ares_free_string(nametemp); ares_dns_record_destroy(dnsrec); return status; } static int event_loop(ares_channel_t *channel) { while (1) { fd_set read_fds; fd_set write_fds; int nfds; struct timeval tv; struct timeval *tvp; int count; FD_ZERO(&read_fds); FD_ZERO(&write_fds); memset(&tv, 0, sizeof(tv)); nfds = ares_fds(channel, &read_fds, &write_fds); if (nfds == 0) { break; } tvp = ares_timeout(channel, NULL, &tv); if (tvp == NULL) { break; } count = select(nfds, &read_fds, &write_fds, NULL, tvp); if (count < 0) { #ifdef USE_WINSOCK int err = WSAGetLastError(); #else int err = errno; #endif if (err != EAGAIN && err != EINTR) { fprintf(stderr, "select fail: %d", err); return 1; } } ares_process(channel, &read_fds, &write_fds); } return 0; } int main(int argc, char **argv) { ares_channel_t *channel = NULL; ares_status_t status; adig_config_t config; int i; int rv = 0; #ifdef USE_WINSOCK WORD wVersionRequested = MAKEWORD(USE_WINSOCK, USE_WINSOCK); WSADATA wsaData; WSAStartup(wVersionRequested, &wsaData); #endif status = (ares_status_t)ares_library_init(ARES_LIB_INIT_ALL); if (status != ARES_SUCCESS) { fprintf(stderr, "ares_library_init: %s\n", ares_strerror((int)status)); return 1; } memset(&config, 0, sizeof(config)); config.qclass = ARES_CLASS_IN; config.qtype = ARES_REC_TYPE_A; if (!read_cmdline(argc, (const char * const *)argv, &config)) { printf("\n** ERROR: %s\n\n", config.error); print_help(); rv = 1; goto done; } if (config.is_help) { print_help(); goto done; } status = (ares_status_t)ares_init_options(&channel, &config.options, config.optmask); if (status != ARES_SUCCESS) { fprintf(stderr, "ares_init_options: %s\n", ares_strerror((int)status)); rv = 1; goto done; } if (config.servers) { status = (ares_status_t)ares_set_servers_ports_csv(channel, config.servers); if (status != ARES_SUCCESS) { fprintf(stderr, "ares_set_servers_ports_csv: %s\n", ares_strerror((int)status)); rv = 1; goto done; } } /* Enqueue a query for each separate name */ for (i = config.args_processed; i < argc; i++) { status = enqueue_query(channel, &config, argv[i]); if (status != ARES_SUCCESS) { fprintf(stderr, "Failed to create query for %s: %s\n", argv[i], ares_strerror((int)status)); rv = 1; goto done; } } /* Debug */ printf("\n; <<>> c-ares DiG %s <<>>", ares_version(NULL)); for (i = config.args_processed; i < argc; i++) { printf(" %s", argv[i]); } printf("\n"); 
/* Process events */ rv = event_loop(channel); done: free_config(&config); ares_destroy(channel); ares_library_cleanup(); #ifdef USE_WINSOCK WSACleanup(); #endif return rv; } gevent-24.11.1/deps/c-ares/src/tools/ahost.c000066400000000000000000000206621471441230600204760ustar00rootroot00000000000000/* MIT License * * Copyright (c) 1998 Massachusetts Institute of Technology * Copyright (c) The c-ares project and its contributors * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice (including the next * paragraph) shall be included in all copies or substantial portions of the * Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. * * SPDX-License-Identifier: MIT */ #include "ares_setup.h" #if !defined(_WIN32) || defined(WATT32) # include # include # include #endif #ifdef HAVE_STRINGS_H # include #endif #include "ares.h" #include "ares_dns.h" #include "ares_getopt.h" #include "ares_ipv6.h" #ifndef HAVE_STRDUP # include "str/ares_str.h" # define strdup(ptr) ares_strdup(ptr) #endif #ifndef HAVE_STRCASECMP # include "str/ares_strcasecmp.h" # define strcasecmp(p1, p2) ares_strcasecmp(p1, p2) #endif #ifndef HAVE_STRNCASECMP # include "str/ares_strcasecmp.h" # define strncasecmp(p1, p2, n) ares_strncasecmp(p1, p2, n) #endif static void callback(void *arg, int status, int timeouts, struct hostent *host); static void ai_callback(void *arg, int status, int timeouts, struct ares_addrinfo *result); static void usage(void); static void print_help_info_ahost(void); int main(int argc, char **argv) { struct ares_options options; int optmask = 0; ares_channel_t *channel; int status; int nfds; int c; int addr_family = AF_UNSPEC; fd_set read_fds; fd_set write_fds; struct timeval *tvp; struct timeval tv; struct in_addr addr4; struct ares_in6_addr addr6; ares_getopt_state_t state; char *servers = NULL; #ifdef USE_WINSOCK WORD wVersionRequested = MAKEWORD(USE_WINSOCK, USE_WINSOCK); WSADATA wsaData; WSAStartup(wVersionRequested, &wsaData); #endif memset(&options, 0, sizeof(options)); status = ares_library_init(ARES_LIB_INIT_ALL); if (status != ARES_SUCCESS) { fprintf(stderr, "ares_library_init: %s\n", ares_strerror(status)); return 1; } ares_getopt_init(&state, argc, (const char * const *)argv); while ((c = ares_getopt(&state, "dt:h?D:s:")) != -1) { switch (c) { case 'd': #ifdef WATT32 dbug_init(); #endif break; case 'D': optmask |= ARES_OPT_DOMAINS; options.ndomains++; options.domains = (char **)realloc( options.domains, (size_t)options.ndomains * sizeof(char *)); options.domains[options.ndomains - 1] = strdup(state.optarg); break; case 't': if (!strcasecmp(state.optarg, "a")) { addr_family = AF_INET; } else if (!strcasecmp(state.optarg, 
"aaaa")) { addr_family = AF_INET6; } else if (!strcasecmp(state.optarg, "u")) { addr_family = AF_UNSPEC; } else { usage(); } break; case 's': if (state.optarg == NULL) { fprintf(stderr, "%s", "missing servers"); usage(); break; } if (servers) { free(servers); } servers = strdup(state.optarg); break; case 'h': case '?': print_help_info_ahost(); break; default: usage(); break; } } argc -= state.optind; argv += state.optind; if (argc < 1) { usage(); } status = ares_init_options(&channel, &options, optmask); if (status != ARES_SUCCESS) { free(servers); fprintf(stderr, "ares_init: %s\n", ares_strerror(status)); return 1; } if (servers) { status = ares_set_servers_csv(channel, servers); if (status != ARES_SUCCESS) { fprintf(stderr, "ares_set_serveres_csv: %s\n", ares_strerror(status)); free(servers); usage(); return 1; } free(servers); } /* Initiate the queries, one per command-line argument. */ for (; *argv; argv++) { if (ares_inet_pton(AF_INET, *argv, &addr4) == 1) { ares_gethostbyaddr(channel, &addr4, sizeof(addr4), AF_INET, callback, *argv); } else if (ares_inet_pton(AF_INET6, *argv, &addr6) == 1) { ares_gethostbyaddr(channel, &addr6, sizeof(addr6), AF_INET6, callback, *argv); } else { struct ares_addrinfo_hints hints; memset(&hints, 0, sizeof(hints)); hints.ai_family = addr_family; ares_getaddrinfo(channel, *argv, NULL, &hints, ai_callback, *argv); } } /* Wait for all queries to complete. */ for (;;) { int res; FD_ZERO(&read_fds); FD_ZERO(&write_fds); nfds = ares_fds(channel, &read_fds, &write_fds); if (nfds == 0) { break; } tvp = ares_timeout(channel, NULL, &tv); if (tvp == NULL) { break; } res = select(nfds, &read_fds, &write_fds, NULL, tvp); if (-1 == res) { break; } ares_process(channel, &read_fds, &write_fds); } ares_destroy(channel); ares_library_cleanup(); #ifdef USE_WINSOCK WSACleanup(); #endif return 0; } static void callback(void *arg, int status, int timeouts, struct hostent *host) { char **p; (void)timeouts; if (status != ARES_SUCCESS) { fprintf(stderr, "%s: %s\n", (char *)arg, ares_strerror(status)); return; } for (p = host->h_addr_list; *p; p++) { char addr_buf[46] = "??"; ares_inet_ntop(host->h_addrtype, *p, addr_buf, sizeof(addr_buf)); printf("%-32s\t%s", host->h_name, addr_buf); puts(""); } } static void ai_callback(void *arg, int status, int timeouts, struct ares_addrinfo *result) { struct ares_addrinfo_node *node = NULL; (void)timeouts; if (status != ARES_SUCCESS) { fprintf(stderr, "%s: %s\n", (char *)arg, ares_strerror(status)); return; } for (node = result->nodes; node != NULL; node = node->ai_next) { char addr_buf[64] = ""; const void *ptr = NULL; if (node->ai_family == AF_INET) { const struct sockaddr_in *in_addr = (const struct sockaddr_in *)((void *)node->ai_addr); ptr = &in_addr->sin_addr; } else if (node->ai_family == AF_INET6) { const struct sockaddr_in6 *in_addr = (const struct sockaddr_in6 *)((void *)node->ai_addr); ptr = &in_addr->sin6_addr; } else { continue; } ares_inet_ntop(node->ai_family, ptr, addr_buf, sizeof(addr_buf)); printf("%-32s\t%s\n", result->name, addr_buf); } ares_freeaddrinfo(result); } static void usage(void) { fprintf(stderr, "usage: ahost [-h] [-d] [[-D {domain}] ...] [-s {server}] " "[-t {a|aaaa|u}] {host|addr} ...\n"); exit(1); } /* Information from the man page. 
Formatting taken from man -h */ static void print_help_info_ahost(void) { /* Split due to maximum c89 string literal of 509 bytes */ printf("ahost, version %s\n\n", ARES_VERSION_STR); printf( "usage: ahost [-h] [-d] [-D domain] [-s server] [-t a|aaaa|u] host|addr " "...\n\n"); printf( " -h : Display this help and exit.\n" " -d : Print some extra debugging output.\n\n" " -D domain : Specify the domain to search instead of using the default " "values\n"); printf( " -s server : Connect to the specified DNS server, instead of the\n" " system's default one(s). Servers are tried in round-robin,\n" " if the previous one failed.\n" " -t type : If type is \"a\", print the A record.\n"); printf( " If type is \"aaaa\", print the AAAA record.\n" " If type is \"u\" (default), print both A and AAAA records.\n" "\n"); exit(0); } gevent-24.11.1/deps/c-ares/src/tools/ares_getopt.c000066400000000000000000000101061471441230600216640ustar00rootroot00000000000000/* * Original file name getopt.c Initial import into the c-ares source tree * on 2007-04-11. Lifted from version 5.2 of the 'Open Mash' project with * the modified BSD license, BSD license without the advertising clause. * */ /* * getopt.c -- * * Standard UNIX getopt function. Code is from BSD. * * Copyright (c) 1987-2001 The Regents of the University of California. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * A. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * B. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * C. Neither the names of the copyright holders nor the names of its * contributors may be used to endorse or promote products derived from this * software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS * IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. * * SPDX-License-Identifier: BSD-3-Clause */ #include #include #include #include "ares_getopt.h" #define BADCH (int)'?' #define BADARG (int)':' #define EMSG "" void ares_getopt_init(ares_getopt_state_t *state, int nargc, const char * const *nargv) { memset(state, 0, sizeof(*state)); state->opterr = 1; state->optind = 1; state->place = EMSG; state->argc = nargc; state->argv = nargv; } /* * ares_getopt -- * Parse argc/argv argument vector. 
*/ int ares_getopt(ares_getopt_state_t *state, const char *ostr) { const char *oli; /* option letter list index */ /* update scanning pointer */ if (!*state->place) { if (state->optind >= state->argc) { return -1; } state->place = state->argv[state->optind]; if (*(state->place) != '-') { return -1; } state->place++; /* found "--" */ if (*(state->place) == '-') { state->optind++; return -1; } /* Found just - */ if (!*(state->place)) { state->optopt = 0; return BADCH; } } /* option letter okay? */ state->optopt = *(state->place); state->place++; oli = strchr(ostr, state->optopt); if (oli == NULL) { if (!(*state->place)) { ++state->optind; } if (state->opterr) { (void)fprintf(stderr, "%s: illegal option -- %c\n", __FILE__, state->optopt); } return BADCH; } /* don't need argument */ if (*++oli != ':') { state->optarg = NULL; if (!*state->place) { ++state->optind; } } else { /* need an argument */ if (*state->place) { /* no white space */ state->optarg = state->place; } else if (state->argc <= ++state->optind) { /* no arg */ state->place = EMSG; if (*ostr == ':') { return BADARG; } if (state->opterr) { (void)fprintf(stderr, "%s: option requires an argument -- %c\n", __FILE__, state->optopt); } return BADARG; } else { /* white space */ state->optarg = state->argv[state->optind]; } state->place = EMSG; ++state->optind; } return state->optopt; /* dump back option letter */ } gevent-24.11.1/deps/c-ares/src/tools/ares_getopt.h000066400000000000000000000043741471441230600217030ustar00rootroot00000000000000#ifndef ARES_GETOPT_H #define ARES_GETOPT_H /* * Copyright (c) 1987-2001 The Regents of the University of California. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * A. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * B. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * C. Neither the names of the copyright holders nor the names of its * contributors may be used to endorse or promote products derived from this * software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS * IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. 
* * SPDX-License-Identifier: BSD-3-Clause */ typedef struct { const char *optarg; /* argument associated with option */ int optind; /* index into parent argv vector */ int opterr; /* if error message should be printed */ int optopt; /* character checked for validity */ const char *place; int argc; const char * const *argv; } ares_getopt_state_t; void ares_getopt_init(ares_getopt_state_t *state, int argc, const char * const *argv); int ares_getopt(ares_getopt_state_t *state, const char *ostr); #endif /* ARES_GETOPT_H */ gevent-24.11.1/deps/cares-make.patch000066400000000000000000000072761471441230600171520ustar00rootroot00000000000000diff --git a/deps/README.rst b/deps/README.rst index 8e331699..a0e3712d 100644 --- a/deps/README.rst +++ b/deps/README.rst @@ -53,7 +53,15 @@ Updating c-ares At this point there might be new files in c-ares that need added to git, evaluate them and add them. - Note that the patch may not apply cleanly. + Note that the patch may not apply cleanly. If not, commit the + changes before the patch. Then manually apply them by editing the + three files to remove the references to ``docs`` and ``test``; this + is easiest to do by reading the existing patch file and searching + for the relevant lines in the target files. Once this is working + correctly, create the new patch using ``git diff -p --minimal -w`` + (note that you cannot directly redirect the output of this into + ``cares-make.patch``, or you'll get the diff of the patch itself in + the diff!). - Follow the same 'config.guess' and 'config.sub' steps as libev. diff --git a/deps/c-ares/Makefile.am b/deps/c-ares/Makefile.am index eef3d3d1..c3e37f73 100644 --- a/deps/c-ares/Makefile.am +++ b/deps/c-ares/Makefile.am @@ -16,7 +16,7 @@ CLEANFILES = $(PDFPAGES) $(HTMLPAGES) DISTCLEANFILES = include/ares_build.h -DIST_SUBDIRS = include src test docs +DIST_SUBDIRS = include src SUBDIRS = @BUILD_SUBDIRS@ diff --git a/deps/c-ares/Makefile.in b/deps/c-ares/Makefile.in index 3dfa479a..db261682 100644 --- a/deps/c-ares/Makefile.in +++ b/deps/c-ares/Makefile.in @@ -413,7 +413,7 @@ EXTRA_DIST = AUTHORS CHANGES README.cares $(man_MANS) RELEASE-NOTES \ CLEANFILES = $(PDFPAGES) $(HTMLPAGES) DISTCLEANFILES = include/ares_build.h -DIST_SUBDIRS = include src test docs +DIST_SUBDIRS = include src SUBDIRS = @BUILD_SUBDIRS@ pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = libcares.pc diff --git a/deps/c-ares/configure b/deps/c-ares/configure index 2f182e0c..1a6af2cb 100755 --- a/deps/c-ares/configure +++ b/deps/c-ares/configure @@ -34612,18 +34612,18 @@ fi printf "%s\n" "$build_tests" >&6; } -BUILD_SUBDIRS="include src docs" +BUILD_SUBDIRS="include src" if test "x$build_tests" = "xyes" ; then subdirs="$subdirs test" - BUILD_SUBDIRS="${BUILD_SUBDIRS} test" + BUILD_SUBDIRS="${BUILD_SUBDIRS}" fi -ac_config_files="$ac_config_files Makefile include/Makefile src/Makefile src/lib/Makefile src/tools/Makefile docs/Makefile libcares.pc" +ac_config_files="$ac_config_files Makefile include/Makefile src/Makefile src/lib/Makefile src/tools/Makefile libcares.pc" cat >confcache <<\_ACEOF @@ -35769,7 +35769,6 @@ do "src/Makefile") CONFIG_FILES="$CONFIG_FILES src/Makefile" ;; "src/lib/Makefile") CONFIG_FILES="$CONFIG_FILES src/lib/Makefile" ;; "src/tools/Makefile") CONFIG_FILES="$CONFIG_FILES src/tools/Makefile" ;; - "docs/Makefile") CONFIG_FILES="$CONFIG_FILES docs/Makefile" ;; "libcares.pc") CONFIG_FILES="$CONFIG_FILES libcares.pc" ;; *) as_fn_error $? 
"invalid argument: \`$ac_config_target'" "$LINENO" 5;; @@ -37464,6 +37463,3 @@ done ## -------------------------------- ## ## End of distclean amending code ## ## -------------------------------- ## - - - diff --git a/deps/c-ares/configure.ac b/deps/c-ares/configure.ac index 54e79d6e..3b216010 100644 --- a/deps/c-ares/configure.ac +++ b/deps/c-ares/configure.ac @@ -954,7 +954,7 @@ fi AC_MSG_RESULT([$build_tests]) -BUILD_SUBDIRS="include src docs" +BUILD_SUBDIRS="include src" if test "x$build_tests" = "xyes" ; then AC_CONFIG_SUBDIRS([test]) BUILD_SUBDIRS="${BUILD_SUBDIRS} test" @@ -967,7 +967,6 @@ AC_CONFIG_FILES([Makefile \ src/Makefile \ src/lib/Makefile \ src/tools/Makefile \ - docs/Makefile \ libcares.pc ]) AC_OUTPUT gevent-24.11.1/deps/greenlet/000077500000000000000000000000001471441230600157125ustar00rootroot00000000000000gevent-24.11.1/deps/greenlet/greenlet.h000066400000000000000000000112231471441230600176670ustar00rootroot00000000000000/* -*- indent-tabs-mode: nil; tab-width: 4; -*- */ /* Greenlet object interface */ #ifndef Py_GREENLETOBJECT_H #define Py_GREENLETOBJECT_H #include #ifdef __cplusplus extern "C" { #endif /* This is deprecated and undocumented. It does not change. */ #define GREENLET_VERSION "1.0.0" #ifndef GREENLET_MODULE #define implementation_ptr_t void* #endif typedef struct _greenlet { PyObject_HEAD PyObject* weakreflist; PyObject* dict; implementation_ptr_t pimpl; } PyGreenlet; #define PyGreenlet_Check(op) (op && PyObject_TypeCheck(op, &PyGreenlet_Type)) /* C API functions */ /* Total number of symbols that are exported */ #define PyGreenlet_API_pointers 12 #define PyGreenlet_Type_NUM 0 #define PyExc_GreenletError_NUM 1 #define PyExc_GreenletExit_NUM 2 #define PyGreenlet_New_NUM 3 #define PyGreenlet_GetCurrent_NUM 4 #define PyGreenlet_Throw_NUM 5 #define PyGreenlet_Switch_NUM 6 #define PyGreenlet_SetParent_NUM 7 #define PyGreenlet_MAIN_NUM 8 #define PyGreenlet_STARTED_NUM 9 #define PyGreenlet_ACTIVE_NUM 10 #define PyGreenlet_GET_PARENT_NUM 11 #ifndef GREENLET_MODULE /* This section is used by modules that uses the greenlet C API */ static void** _PyGreenlet_API = NULL; # define PyGreenlet_Type \ (*(PyTypeObject*)_PyGreenlet_API[PyGreenlet_Type_NUM]) # define PyExc_GreenletError \ ((PyObject*)_PyGreenlet_API[PyExc_GreenletError_NUM]) # define PyExc_GreenletExit \ ((PyObject*)_PyGreenlet_API[PyExc_GreenletExit_NUM]) /* * PyGreenlet_New(PyObject *args) * * greenlet.greenlet(run, parent=None) */ # define PyGreenlet_New \ (*(PyGreenlet * (*)(PyObject * run, PyGreenlet * parent)) \ _PyGreenlet_API[PyGreenlet_New_NUM]) /* * PyGreenlet_GetCurrent(void) * * greenlet.getcurrent() */ # define PyGreenlet_GetCurrent \ (*(PyGreenlet * (*)(void)) _PyGreenlet_API[PyGreenlet_GetCurrent_NUM]) /* * PyGreenlet_Throw( * PyGreenlet *greenlet, * PyObject *typ, * PyObject *val, * PyObject *tb) * * g.throw(...) 
*/ # define PyGreenlet_Throw \ (*(PyObject * (*)(PyGreenlet * self, \ PyObject * typ, \ PyObject * val, \ PyObject * tb)) \ _PyGreenlet_API[PyGreenlet_Throw_NUM]) /* * PyGreenlet_Switch(PyGreenlet *greenlet, PyObject *args) * * g.switch(*args, **kwargs) */ # define PyGreenlet_Switch \ (*(PyObject * \ (*)(PyGreenlet * greenlet, PyObject * args, PyObject * kwargs)) \ _PyGreenlet_API[PyGreenlet_Switch_NUM]) /* * PyGreenlet_SetParent(PyObject *greenlet, PyObject *new_parent) * * g.parent = new_parent */ # define PyGreenlet_SetParent \ (*(int (*)(PyGreenlet * greenlet, PyGreenlet * nparent)) \ _PyGreenlet_API[PyGreenlet_SetParent_NUM]) /* * PyGreenlet_GetParent(PyObject* greenlet) * * return greenlet.parent; * * This could return NULL even if there is no exception active. * If it does not return NULL, you are responsible for decrementing the * reference count. */ # define PyGreenlet_GetParent \ (*(PyGreenlet* (*)(PyGreenlet*)) \ _PyGreenlet_API[PyGreenlet_GET_PARENT_NUM]) /* * deprecated, undocumented alias. */ # define PyGreenlet_GET_PARENT PyGreenlet_GetParent # define PyGreenlet_MAIN \ (*(int (*)(PyGreenlet*)) \ _PyGreenlet_API[PyGreenlet_MAIN_NUM]) # define PyGreenlet_STARTED \ (*(int (*)(PyGreenlet*)) \ _PyGreenlet_API[PyGreenlet_STARTED_NUM]) # define PyGreenlet_ACTIVE \ (*(int (*)(PyGreenlet*)) \ _PyGreenlet_API[PyGreenlet_ACTIVE_NUM]) /* Macro that imports greenlet and initializes C API */ /* NOTE: This has actually moved to ``greenlet._greenlet._C_API``, but we keep the older definition to be sure older code that might have a copy of the header still works. */ # define PyGreenlet_Import() \ { \ _PyGreenlet_API = (void**)PyCapsule_Import("greenlet._C_API", 0); \ } #endif /* GREENLET_MODULE */ #ifdef __cplusplus } #endif #endif /* !Py_GREENLETOBJECT_H */ gevent-24.11.1/deps/libev/000077500000000000000000000000001471441230600152065ustar00rootroot00000000000000gevent-24.11.1/deps/libev/Changes000066400000000000000000000775031471441230600165150ustar00rootroot00000000000000Revision history for libev, a high-performance and full-featured event loop. TODO: for next ABI/API change, consider moving EV__IOFDSSET into io->fd instead and provide a getter. TODO: document EV_TSTAMP_T 4.33 Wed Mar 18 13:22:29 CET 2020 - no changes w.r.t. 4.32. 4.32 (EV only) - the 4.31 timerfd code wrongly changed the priority of the signal fd watcher, which is usually harmless unless signal fds are also used (found via cpan tester service). - the documentation wrongly claimed that user may modify fd and events members in io watchers when the watcher was stopped (found by b_jonas). - new ev_io_modify mutator which changes only the events member, which can be faster. also added ev::io::set (int events) method to ev++.h. - officially allow a zero events mask for io watchers. this should work with older libev versions as well but was not officially allowed before. - do not wake up every minute when timerfd is used to detect timejumps. - do not wake up every minute when periodics are disabled and we have a monotonic clock. - support a lot more "uncommon" compile time configurations, such as ev_embed enabled but ev_timer disabled. - use a start/stop wrapper class to reduce code duplication in ev++.h and make it needlessly more c++-y. - the linux aio backend is no longer compiled in by default. - update to libecb version 0x00010008. 4.31 Fri Dec 20 21:58:29 CET 2019 - handle backends with minimum wait time a bit better by not waiting in the presence of already-expired timers (behaviour reported by Felipe Gasper). 
- new feature: use timerfd to detect timejumps quickly, can be disabled with the new EVFLAG_NOTIMERFD loop flag. - document EV_USE_SIGNALFD feature macro. 4.30 (EV only) - change non-autoconf test for __kernel_rwf_t by testing LINUX_VERSION_CODE, the most direct test I could find. - fix a bug in the io_uring backend that polled the wrong backend fd, causing it to not work in many cases. 4.29 (EV only) - add io uring autoconf and non-autoconf detection. - disable io_uring when some header files are too old. 4.28 (EV only) - linuxaio backend resulted in random memory corruption when loop is forked. - linuxaio backend might have tried to cancel an iocb multiple times (was unable to trigger this). - linuxaio backend now employs a generation counter to avoid handling spurious events from cancelled requests. - io_cancel can return EINTR, deal with it. also, assume io_submit also returns EINTR. - fix some other minor bugs in linuxaio backend. - ev_tstamp type can now be overriden by defining EV_TSTAMP_T. - cleanup: replace expect_true/false and noinline by their libecb counterparts. - move syscall infrastructure from ev_linuxaio.c to ev.c. - prepare io_uring integration. - tweak ev_floor. - epoll, poll, win32 Sleep and other places that use millisecond reslution now all try to round up times. - solaris port backend didn't compile. - abstract time constants into their macros, for more flexibility. 4.27 Thu Jun 27 22:43:44 CEST 2019 - linux aio backend almost completely rewritten to work around its limitations. - linux aio backend now requires linux 4.19+. - epoll backend now mandatory for linux aio backend. - fail assertions more aggressively on invalid fd's detected in the event loop, do not just silently fd_kill in case of user error. - ev_io_start/ev_io_stop now verify the watcher fd using a syscall when EV_VERIFY is 2 or higher. 4.26 (EV only) - update to libecb 0x00010006. - new experimental linux aio backend (linux 4.18+). - removed redundant 0-ptr check in ev_once. - updated/extended ev_set_allocator documentation. - replaced EMPTY2 macro by array_needsize_noinit. - minor code cleanups. - epoll backend now uses epoll_create1 also after fork. 4.25 Fri Dec 21 07:49:20 CET 2018 - INCOMPATIBLE CHANGE: EV_THROW was renamed to EV_NOEXCEPT (EV_THROW still provided) and now uses noexcept on C++11 or newer. - move the darwin select workaround higher in ev.c, as newer versions of darwin managed to break their broken select even more. - ANDROID => __ANDROID__ (reported by enh@google.com). - disable epoll_create1 on android because it has broken header files and google is unwilling to fix them (reported by enh@google.com). - avoid a minor compilation warning on win32. - c++: remove deprecated dynamic throw() specifications. - c++: improve the (unsupported) bad_loop exception class. - backport perl ev_periodic example to C, untested. - update libecb, biggets change is to include a memory fence in ECB_MEMORY_FENCE_RELEASE on x86/amd64. - minor autoconf/automake modernisation. 4.24 Wed Dec 28 05:19:55 CET 2016 - bump version to 4.24, as the release tarball inexplicably didn't have the right version in ev.h, even though the cvs-tagged version did have the right one (reported by Ales Teska). 4.23 Wed Nov 16 18:23:41 CET 2016 - move some declarations at the beginning to help certain retarded microsoft compilers, even though their documentation claims otherwise (reported by Ruslan Osmanov). 
4.22 Sun Dec 20 22:11:50 CET 2015 - when epoll detects unremovable fds in the fd set, rebuild only the epoll descriptor, not the signal pipe, to avoid SIGPIPE in ev_async_send. This doesn't solve it on fork, so document what needs to be done in ev_loop_fork (analyzed by Benjamin Mahler). - remove superfluous sys/timeb.h include on win32 (analyzed by Jason Madden). - updated libecb. 4.20 Sat Jun 20 13:01:43 CEST 2015 - prefer noexcept over throw () with C++ 11. - update ecb.h due to incompatibilities with c11. - fix a potential aliasing issue when reading and writing watcher callbacks. 4.19 Thu Sep 25 08:18:25 CEST 2014 - ev.h wasn't valid C++ anymore, which tripped compilers other than clang, msvc or gcc (analyzed by Raphael 'kena' Poss). Unfortunately, C++ doesn't support typedefs for function pointers fully, so the affected declarations have to spell out the types each time. - when not using autoconf, tighten the check for clock_gettime and related functionality. 4.18 Fri Sep 5 17:55:26 CEST 2014 - events on files were not always generated properly with the epoll backend (testcase by Assaf Inbal). - mark event pipe fd as cloexec after a fork (analyzed by Sami Farin). - (ecb) support m68k, m88k and sh (patch by Miod Vallat). - use a reasonable fallback for EV_NSIG instead of erroring out when we can't detect the signal set size. - in the absence of autoconf, do not use the clock syscall on glibc >= 2.17 (avoids the syscall AND -lrt on systems doing clock_gettime in userspace). - ensure extern "C" function pointers are used for externally-visible loop callbacks (not watcher callbacks yet). - (ecb) work around memory barriers and volatile apparently both being broken in visual studio 2008 and later (analysed and patch by Nicolas Noble). 4.15 Fri Mar 1 12:04:50 CET 2013 - destroying a non-default loop would stop the global waitpid watcher (Denis Bilenko). - queueing pending watchers of higher priority from a watcher now invokes them in a timely fashion (reported by Denis Bilenko). - add throw() to all libev functions that cannot throw exceptions, for further code size decrease when compiling for C++. - add throw () to callbacks that must not throw exceptions (allocator, syserr, loop acquire/release, periodic reschedule cbs). - fix event_base_loop return code, add event_get_callback, event_base_new, event_base_get_method calls to improve libevent 1.x emulation and add some libevent 2.x functionality (based on a patch by Jeff Davey). - add more memory fences to fix a bug reported by Jeff Davey. Better be overfenced than underprotected. - ev_run now returns a boolean status (true meaning watchers are still active). - ev_once: undef EV_ERROR in ev_kqueue.c, to avoid clashing with libev's EV_ERROR (reported by 191919). - (ecb) add memory fence support for xlC (Darin McBride). - (ecb) add memory fence support for gcc-mips (Anton Kirilov). - (ecb) add memory fence support for gcc-alpha (Christian Weisgerber). - work around some kernels losing file descriptors by leaking the kqueue descriptor in the child. - work around linux inotify not reporting IN_ATTRIB changes for directories in many cases. - include sys/syscall.h instead of plain syscall.h. - check for io watcher loops in ev_verify, check for the most common reported usage bug in ev_io_start. - choose socket vs. WSASocket at compiletime using EV_USE_WSASOCKET. - always use WSASend/WSARecv directly on windows, hoping that this works in all cases (unlike read/write/send/recv...). 
- try to detect signals around a fork faster (test program by Denis Bilenko). - work around recent glibc versions that leak memory in realloc. - rename ev::embed::set to ev::embed::set_embed to avoid clashing the watcher base set (loop) method. - rewrite the async/signal pipe logic to always keep a valid fd, which simplifies (and hopefully correctifies :) the race checking on fork, at the cost of one extra fd. - add fat, msdos, jffs2, ramfs, ntfs and btrfs to the list of inotify-supporting filesystems. - move orig_CFLAGS assignment to after AC_INIT, as newer autoconf versions ignore it before (https://bugzilla.redhat.com/show_bug.cgi?id=908096). - add some untested android support. - enum expressions must be of type int (reported by Juan Pablo L). 4.11 Sat Feb 4 19:52:39 CET 2012 - INCOMPATIBLE CHANGE: ev_timer_again now clears the pending status, as was documented already, but not implemented in the repeating case. - new compiletime symbols: EV_NO_SMP and EV_NO_THREADS. - fix a race where the workaround against the epoll fork bugs caused signals to not be handled anymore. - correct backend_fudge for most backends, and implement a windows specific workaround to avoid looping because we call both select and Sleep, both with different time resolutions. - document range and guarantees of ev_sleep. - document reasonable ranges for periodics interval and offset. - rename backend_fudge to backend_mintime to avoid future confusion :) - change the default periodic reschedule function to hopefully be more exact and correct even in corner cases or in the far future. - do not rely on -lm anymore: use it when available but use our own floor () if it is missing. This should make it easier to embed, as no external libraries are required. - strategically import macros from libecb and mark rarely-used functions as cache-cold (saving almost 2k code size on typical amd64 setups). - add Symbols.ev and Symbols.event files, that were missing. - fix backend_mintime value for epoll (was 1/1024, is 1/1000 now). - fix #3 "be smart about timeouts" to not "deadlock" when timeout == now, also improve the section overall. - avoid "AVOIDING FINISHING BEFORE RETURNING" idiom. - support new EV_API_STATIC mode to make all libev symbols static. - supply default CFLAGS of -g -O3 with gcc when original CFLAGS were empty. 4.04 Wed Feb 16 09:01:51 CET 2011 - fix two problems in the native win32 backend, where reuse of fd's with different underlying handles caused handles not to be removed or added to the select set (analyzed and tested by Bert Belder). - do no rely on ceil() in ev_e?poll.c. - backport libev to HP-UX versions before 11 v3. - configure did not detect nanosleep and clock_gettime properly when they are available in the libc (as opposed to -lrt). 4.03 Tue Jan 11 14:37:25 CET 2011 - officially support polling files with all backends. - support files, /dev/zero etc. the same way as select in the epoll backend, by generating events on our own. - ports backend: work around solaris bug 6874410 and many related ones (EINTR, maybe more), with no performance loss (note that the solaris bug report is actually wrong, reality is far more bizarre and broken than that). - define EV_READ/EV_WRITE as macros in event.h, as some programs use #ifdef to test for them. - new (experimental) function: ev_feed_signal. - new (to become default) EVFLAG_NOSIGMASK flag. - new EVBACKEND_MASK symbol. - updated COMMON IDIOMS SECTION. 
4.01 Fri Nov 5 21:51:29 CET 2010 - automake fucked it up, apparently, --add-missing -f is not quite enough to make it update its files, so 4.00 didn't install ev++.h and event.h on make install. grrr. - ev_loop(count|depth) didn't return anything (Robin Haberkorn). - change EV_UNDEF to 0xffffffff to silence some overzealous compilers. - use "(libev) " prefix for all libev error messages now. 4.00 Mon Oct 25 12:32:12 CEST 2010 - "PORTING FROM LIBEV 3.X TO 4.X" (in ev.pod) is recommended reading. - ev_embed_stop did not correctly stop the watcher (very good testcase by Vladimir Timofeev). - ev_run will now always update the current loop time - it erroneously didn't when idle watchers were active, causing timers not to fire. - fix a bug where a timeout of zero caused the timer not to fire in the libevent emulation (testcase by Péter Szabó). - applied win32 fixes by Michael Lenaghan (also James Mansion). - replace EV_MINIMAL by EV_FEATURES. - prefer EPOLL_CTL_ADD over EPOLL_CTL_MOD in some more cases, as it seems the former is *much* faster than the latter. - linux kernel version detection (for inotify bug workarounds) did not work properly. - reduce the number of spurious wake-ups with the ports backend. - remove dependency on sys/queue.h on freebsd (patch by Vanilla Hsu). - do async init within ev_async_start, not ev_async_set, which avoids an API quirk where the set function must be called in the C++ API even when there is nothing to set. - add (undocumented) EV_ENABLE when adding events with kqueue, this might help with OS X, which seems to need it despite documenting not to need it (helpfully pointed out by Tilghman Lesher). - do not use poll by default on freebsd, it's broken (what isn't on freebsd...). - allow to embed epoll on kernels >= 2.6.32. - configure now prepends -O3, not appends it, so one can still override it. - ev.pod: greatly expanded the portability section, added a porting section, a description of watcher states and made lots of minor fixes. - disable poll backend on AIX, the poll header spams the namespace and it's not worth working around dead platforms (reported and analyzed by Aivars Kalvans). - improve header file compatibility of the standalone eventfd code in an obscure case. - implement EV_AVOID_STDIO option. - do not use sscanf to parse linux version number (smaller, faster, no sscanf dependency). - new EV_CHILD_ENABLE and EV_SIGNAL_ENABLE configurable settings. - update libev.m4 HAVE_CLOCK_SYSCALL test for newer glibcs. - add section on accept() problems to the manpage. - rename EV_TIMEOUT to EV_TIMER. - rename ev_loop_count/depth/verify/loop/unloop. - remove ev_default_destroy and ev_default_fork. - switch to two-digit minor version. - work around an apparent gentoo compiler bug. - define _DARWIN_UNLIMITED_SELECT. just so. - use enum instead of #define for most constants. - improve compatibility to older C++ compilers. - (experimental) ev_run/ev_default_loop/ev_break/ev_loop_new have now default arguments when compiled as C++. - enable automake dependency tracking. - ev_loop_new no longer leaks memory when loop creation failed. - new ev_cleanup watcher type. 3.9 Thu Dec 31 07:59:59 CET 2009 - signalfd is no longer used by default and has to be requested explicitly - this means that easy to catch bugs become hard to catch race conditions, but the users have spoken. - point out the unspecified signal mask in the documentation, and that this is a race condition regardless of EV_SIGNALFD. - backport inotify code to C89. 
- inotify file descriptors could leak into child processes. - ev_stat watchers could keep an erroneous extra ref on the loop, preventing exit when unregistering all watchers (testcases provided by ry@tinyclouds.org). - implement EV_WIN32_HANDLE_TO_FD and EV_WIN32_CLOSE_FD configuration symbols to make it easier for apps to do their own fd management. - support EV_IDLE_ENABLE being disabled in ev++.h (patch by Didier Spezia). - take advantage of inotify_init1, if available, to set cloexec/nonblock on fd creation, to avoid races. - the signal handling pipe wasn't always initialised under windows (analysed by lekma). - changed minimum glibc requirement from glibc 2.9 to 2.7, for signalfd. - add missing string.h include (Denis F. Latypoff). - only replace ev_stat.prev when we detect an actual difference, so prev is (almost) always different to attr. this might have caused the problems with 04_stat.t. - add ev::timer->remaining () method to C++ API. 3.8 Sun Aug 9 14:30:45 CEST 2009 - incompatible change: do not necessarily reset signal handler to SIG_DFL when a sighandler is stopped. - ev_default_destroy did not properly free or zero some members, potentially causing crashes and memory corruption on repeated ev_default_destroy/ev_default_loop calls. - take advantage of signalfd on GNU/Linux systems. - document that the signal mask might be in an unspecified state when using libev's signal handling. - take advantage of some GNU/Linux calls to set cloexec/nonblock on fd creation, to avoid race conditions. 3.7 Fri Jul 17 16:36:32 CEST 2009 - ev_unloop and ev_loop wrongly used a global variable to exit loops, instead of using a per-loop variable (bug caught by accident...). - the ev_set_io_collect_interval interpretation has changed. - add new functionality: ev_set_userdata, ev_userdata, ev_set_invoke_pending_cb, ev_set_loop_release_cb, ev_invoke_pending, ev_pending_count, together with a long example about thread locking. - add ev_timer_remaining (as requested by Denis F. Latypoff). - add ev_loop_depth. - calling ev_unloop in fork/prepare watchers will no longer poll for new events. - Denis F. Latypoff corrected many typos in example code snippets. - honor autoconf detection of EV_USE_CLOCK_SYSCALL, also double- check that the syscall number is available before trying to use it (reported by ry@tinyclouds). - use GetSystemTimeAsFileTime instead of _timeb on windows, for slightly higher accuracy. - properly declare ev_loop_verify and ev_now_update even when !EV_MULTIPLICITY. - do not compile in any priority code when EV_MAXPRI == EV_MINPRI. - support EV_MINIMAL==2 for a reduced API. - actually 0-initialise struct sigaction when installing signals. - add section on hibernate and stopped processes to ev_timer docs. 3.6 Tue Apr 28 02:49:30 CEST 2009 - multiple timers becoming ready within an event loop iteration will be invoked in the "correct" order now. - do not leave the event loop early just because we have no active watchers, fixing a problem when embedding a kqueue loop that has active kernel events but no registered watchers (reported by blacksand blacksand). - correctly zero the idx values for arrays, so destroying and reinitialising the default loop actually works (patch by Malek Hadj-Ali). - implement ev_suspend and ev_resume. - new EV_CUSTOM revents flag for use by applications. - add documentation section about priorities. - add a glossary to the documentation. - extend the ev_fork description slightly. - optimize a jump out of call_pending. 
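        ev_timer_remaining, added in 3.7 above (and mirrored by the
        ev::timer->remaining () C++ method noted under 3.9), reports how
        long until a started timer would fire. The sketch below is an
        illustrative addition rather than libev source, and it uses the
        later 4.x spellings (EV_DEFAULT, ev_run); the 5-second delay is
        arbitrary:

            #include <ev.h>
            #include <stdio.h>

            static void timer_cb (struct ev_loop *loop, ev_timer *w, int revents)
            {
                printf ("timer fired\n");
            }

            int main (void)
            {
                struct ev_loop *loop = EV_DEFAULT;
                ev_timer w;

                ev_timer_init (&w, timer_cb, 5.0, 0.);  /* one-shot, after 5 seconds */
                ev_timer_start (loop, &w);

                /* Time left before this active watcher would trigger,
                   relative to the loop's notion of "now". */
                printf ("remaining: %f s\n", ev_timer_remaining (loop, &w));

                ev_run (loop, 0);
                return 0;
            }
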
3.53 Sun Feb 15 02:38:20 CET 2009 - fix a bug in event pipe creation on win32 that would cause a failed assertion on event loop creation (patch by Malek Hadj-Ali). - probe for CLOCK_REALTIME support at runtime as well and fall back to gettimeofday if there is an error, to support older operating systems with newer header files/libraries. - prefer gettimeofday over clock_gettime with USE_CLOCK_SYSCALL (default most everywhere), otherwise not. 3.52 Wed Jan 7 21:43:02 CET 2009 - fix compilation of select backend in fd_set mode when NFDBITS is missing (to get it to compile on QNX, reported by Rodrigo Campos). - better select-nfds handling when select backend is in fd_set mode. - diagnose fd_set overruns when select backend is in fd_set mode. - due to a thinko, instead of disabling everything but select on the borked OS X platform, everything but select was allowed (reported by Emanuele Giaquinta). - actually verify that local and remote port are matching in libev's socketpair emulation, which makes denial-of-service attacks harder (but not impossible - it's windows). Make sure it even works under vista, which thinks that getpeer/sockname should return fantasy port numbers. - include "libev" in all assertion messages for potentially clearer diagnostics. - event_get_version (libevent compatibility) returned a useless string instead of the expected version string (patch by W.C.A. Wijngaards). 3.51 Wed Dec 24 23:00:11 CET 2008 - fix a bug where an inotify watcher was added twice, causing freezes on hash collisions (reported and analysed by Graham Leggett). - new config symbol, EV_USE_CLOCK_SYSCALL, to make libev use a direct syscall - slower, but no dependency on librt et al. - assume negative return values != -1 signals success of port_getn (http://cvs.epicsol.org/cgi/viewcvs.cgi/epic5/source/newio.c?rev=1.52) (no known failure reports, but it doesn't hurt). - fork detection in ev_embed now stops and restarts the watcher automatically. - EXPERIMENTAL: default the method to operator () in ev++.h, to make it nicer to use functors (requested by Benedek László). - fixed const object callbacks in ev++.h. - replaced loop_ref argument of watcher.set (loop) by a direct ev_loop * in ev++.h, to avoid clashes with functor patch. - do not try to watch the empty string via inotify. - inotify watchers could be leaked under certain circumstances. - OS X 10.5 is actually even more broken than earlier versions, so fall back to select on that piece of garbage. - fixed some weirdness in the ev_embed documentation. 3.49 Wed Nov 19 11:26:53 CET 2008 - ev_stat watchers will now use inotify as a mere hint on kernels <2.6.25, or if the filesystem is not in the "known to be good" list. - better mingw32 compatibility (it's not as borked as native win32) (analysed by Roger Pack). - include stdio.h in the example program, as too many people are confused by the weird C language otherwise. I guess the next thing I get told is that the "..." ellipses in the examples don't compile with their C compiler. 3.48 Thu Oct 30 09:02:37 CET 2008 - further optimise away the EPOLL_CTL_ADD/MOD combo in the epoll backend by assuming the kernel event mask hasn't changed if ADD fails with EEXIST. - work around spurious event notification bugs in epoll by using a 32-bit generation counter. recreate kernel state if we receive spurious notifications or unwanted events. this is very costly, but I didn't come up with this horrible design. - use memset to initialise most arrays now and do away with the init functions. 
- expand time-out strategies into a "Be smart about timeouts" section. - drop the "struct" from all ev_watcher declarations in the documentation and did other clarifications (yeah, it was a mistake to have a struct AND a function called ev_loop). - fix a bug where ev_default would not initialise the default loop again after it was destroyed with ev_default_destroy. - rename syserr to ev_syserr to avoid name clashes when embedding, do similar changes for event.c. 3.45 Tue Oct 21 21:59:26 CEST 2008 - disable inotify usage on linux <2.6.25, as it is broken (reported by Yoann Vandoorselaere). - ev_stat erroneously would try to add inotify watchers even when inotify wasn't available (this should only have a performance impact). - ev_once now passes both timeout and io to the callback if both occur concurrently, instead of giving timeouts precedence. - disable EV_USE_INOTIFY when sys/inotify.h is too old. 3.44 Mon Sep 29 05:18:39 CEST 2008 - embed watchers now automatically invoke ev_loop_fork on the embedded loop when the parent loop forks. - new function: ev_now_update (loop). - verify_watcher was not marked static. - improve the "associating..." manpage section. - documentation tweaks here and there. 3.43 Sun Jul 6 05:34:41 CEST 2008 - include more include files on windows to get struct _stati64 (reported by Chris Hulbert, but doesn't quite fix his issue). - add missing #include in ev.c on windows (reported by Matt Tolton). 3.42 Tue Jun 17 12:12:07 CEST 2008 - work around yet another windows bug: FD_SET actually adds fd's multiple times to the fd_*SET*, despite official MSN docs claiming otherwise. Reported and well-analysed by Matt Tolton. - define NFDBITS to 0 when EV_SELECT_IS_WINSOCKET to make it compile (reported any analysed by Chris Hulbert). - fix a bug in ev_ebadf (this function is only used to catch programming errors in the libev user). reported by Matt Tolton. - fix a bug in fd_intern on win32 (could lead to compile errors under some circumstances, but would work correctly if it compiles). reported by Matt Tolton. - (try to) work around missing lstat on windows. - pass in the write fd set as except fd set under windows. windows is so uncontrollably lame that it requires this. this means that switching off oobinline is not supported (but tcp/ip doesn't have oob, so that would be stupid anyways. - use posix module symbol to auto-detect monotonic clock presence and some other default values. 3.41 Fri May 23 18:42:54 CEST 2008 - work around an obscure bug in winsocket select: if you provide only empty fd sets then select returns WSAEINVAL. how sucky. - improve timer scheduling stability and reduce use of time_epsilon. - use 1-based 2-heap for EV_MINIMAL, simplifies code, reduces codesize and makes for better cache-efficiency. - use 3-based 4-heap for !EV_MINIMAL. this makes better use of cpu cache lines and gives better growth behaviour than 2-based heaps. - cache timestamp within heap for !EV_MINIMAL, to avoid random memory accesses. - document/add EV_USE_4HEAP and EV_HEAP_CACHE_AT. - fix a potential aliasing issue in ev_timer_again. - add/document ev_periodic_at, retract direct access to ->at. - improve ev_stat docs. - add portability requirements section. - fix manpage headers etc. - normalise WSA error codes to lower range on windows. - add consistency check code that can be called automatically or on demand to check for internal structures (ev_loop_verify). 3.31 Wed Apr 16 20:45:04 CEST 2008 - added last minute fix for ev_poll.c by Brandon Black. 
3.3 Wed Apr 16 19:04:10 CEST 2008 - event_base_loopexit should return 0 on success (W.C.A. Wijngaards). - added linux eventfd support. - try to autodetect epoll and inotify support by libc header version if not using autoconf. - new symbols: EV_DEFAULT_UC and EV_DEFAULT_UC_. - declare functions defined in ev.h as inline if C99 or gcc are available. - enable inlining with gcc versions 2 and 3. - work around broken poll implementations potentially not clearing revents field in ev_poll (Brandon Black) (no such systems are known at this time). - work around a bug in realloc on openbsd and darwin, also makes the erroneous valgrind complaints go away (noted by various people). - fix ev_async_pending, add c++ wrapper for ev_async (based on patch sent by Johannes Deisenhofer). - add sensible set method to ev::embed. - made integer constants type int in ev.h. 3.2 Wed Apr 2 17:11:19 CEST 2008 - fix a 64 bit overflow issue in the select backend, by using fd_mask instead of int for the mask. - rename internal sighandler to avoid clash with very old perls. - entering ev_loop will not clear the ONESHOT or NONBLOCKING flags of any outer loops anymore. - add ev_async_pending. 3.1 Thu Mar 13 13:45:22 CET 2008 - implement ev_async watchers. - only initialise signal pipe on demand. - make use of sig_atomic_t configurable. - improved documentation. 3.0 Mon Jan 28 13:14:47 CET 2008 - API/ABI bump to version 3.0. - ev++.h includes "ev.h" by default now, not . - slightly improved documentation. - speed up signal detection after a fork. - only optionally return trace status changed in ev_child watchers. - experimental (and undocumented) loop wrappers for ev++.h. 2.01 Tue Dec 25 08:04:41 CET 2007 - separate Changes file. - fix ev_path_set => ev_stat_set typo. - remove event_compat.h from the libev tarball. - change how include files are found. - doc updates. - update licenses, explicitly allow for GPL relicensing. 2.0 Sat Dec 22 17:47:03 CET 2007 - new ev_sleep, ev_set_(io|timeout)_collect_interval. - removed epoll from embeddable fd set. - fix embed watchers. - renamed ev_embed.loop to other. - added exported Symbol tables. - undefine member wrapper macros at the end of ev.c. - respect EV_H in ev++.h. 1.86 Tue Dec 18 02:36:57 CET 2007 - fix memleak on loop destroy (not relevant for perl). 1.85 Fri Dec 14 20:32:40 CET 2007 - fix some aliasing issues w.r.t. timers and periodics (not relevant for perl). (for historic versions refer to EV/Changes, found in the Perl interface) 0.1 Wed Oct 31 21:31:48 CET 2007 - original version; hacked together in <24h. gevent-24.11.1/deps/libev/LICENSE000066400000000000000000000040071471441230600162140ustar00rootroot00000000000000All files in libev are Copyright (c)2007,2008,2009,2010,2011,2012,2013 Marc Alexander Lehmann. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Alternatively, the contents of this package may be used under the terms of the GNU General Public License ("GPL") version 2 or any later version, in which case the provisions of the GPL are applicable instead of the above. If you wish to allow the use of your version of this package only under the terms of the GPL and not to allow others to use your version of this file under the BSD license, indicate your decision by deleting the provisions above and replace them with the notice and other provisions required by the GPL in this and the other files of this package. If you do not delete the provisions above, a recipient may use your version of this file under either the BSD or the GPL. gevent-24.11.1/deps/libev/Makefile.in000066400000000000000000000740111471441230600172560ustar00rootroot00000000000000# Makefile.in generated by automake 1.16.1 from Makefile.am. # @configure_input@ # Copyright (C) 1994-2018 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ VPATH = @srcdir@ am__is_gnu_make = { \ if test -z '$(MAKELEVEL)'; then \ false; \ elif test -n '$(MAKE_HOST)'; then \ true; \ elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \ true; \ else \ false; \ fi; \ } am__make_running_with_option = \ case $${target_option-} in \ ?) 
;; \ *) echo "am__make_running_with_option: internal error: invalid" \ "target option '$${target_option-}' specified" >&2; \ exit 1;; \ esac; \ has_opt=no; \ sane_makeflags=$$MAKEFLAGS; \ if $(am__is_gnu_make); then \ sane_makeflags=$$MFLAGS; \ else \ case $$MAKEFLAGS in \ *\\[\ \ ]*) \ bs=\\; \ sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \ | sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \ esac; \ fi; \ skip_next=no; \ strip_trailopt () \ { \ flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \ }; \ for flg in $$sane_makeflags; do \ test $$skip_next = yes && { skip_next=no; continue; }; \ case $$flg in \ *=*|--*) continue;; \ -*I) strip_trailopt 'I'; skip_next=yes;; \ -*I?*) strip_trailopt 'I';; \ -*O) strip_trailopt 'O'; skip_next=yes;; \ -*O?*) strip_trailopt 'O';; \ -*l) strip_trailopt 'l'; skip_next=yes;; \ -*l?*) strip_trailopt 'l';; \ -[dEDm]) skip_next=yes;; \ -[JT]) skip_next=yes;; \ esac; \ case $$flg in \ *$$target_option*) has_opt=yes; break;; \ esac; \ done; \ test $$has_opt = yes am__make_dryrun = (target_option=n; $(am__make_running_with_option)) am__make_keepgoing = (target_option=k; $(am__make_running_with_option)) pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = . ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/libev.m4 \ $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) DIST_COMMON = $(srcdir)/Makefile.am $(top_srcdir)/configure \ $(am__configure_deps) $(include_HEADERS) $(am__DIST_COMMON) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs CONFIG_HEADER = config.h CONFIG_CLEAN_FILES = CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(man3dir)" \ "$(DESTDIR)$(includedir)" LTLIBRARIES = $(lib_LTLIBRARIES) libev_la_LIBADD = am_libev_la_OBJECTS = ev.lo event.lo libev_la_OBJECTS = $(am_libev_la_OBJECTS) AM_V_lt = $(am__v_lt_@AM_V@) am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@) am__v_lt_0 = --silent am__v_lt_1 = libev_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(libev_la_LDFLAGS) $(LDFLAGS) -o $@ AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false am__v_P_1 = : AM_V_GEN = $(am__v_GEN_@AM_V@) am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@) am__v_GEN_0 = @echo " GEN " $@; am__v_GEN_1 = AM_V_at = $(am__v_at_@AM_V@) am__v_at_ = $(am__v_at_@AM_DEFAULT_V@) am__v_at_0 = @ am__v_at_1 = DEFAULT_INCLUDES = -I.@am__isrc@ depcomp = $(SHELL) $(top_srcdir)/depcomp am__maybe_remake_depfiles = depfiles am__depfiles_remade = ./$(DEPDIR)/ev.Plo ./$(DEPDIR)/event.Plo am__mv = mv -f COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \ $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \ $(AM_CFLAGS) $(CFLAGS) AM_V_CC = $(am__v_CC_@AM_V@) am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@) am__v_CC_0 = @echo " CC " $@; am__v_CC_1 = CCLD = $(CC) LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \ $(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \ $(AM_LDFLAGS) $(LDFLAGS) -o $@ AM_V_CCLD = $(am__v_CCLD_@AM_V@) am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@) am__v_CCLD_0 = @echo " CCLD " $@; am__v_CCLD_1 = SOURCES = $(libev_la_SOURCES) DIST_SOURCES = $(libev_la_SOURCES) am__can_run_installinfo = \ case $$AM_UPDATE_INFO_DIR in \ n|no|NO) false;; \ *) (install-info --version) >/dev/null 2>&1;; \ esac man3dir = $(mandir)/man3 NROFF = nroff MANS = $(man_MANS) HEADERS = $(include_HEADERS) am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) \ $(LISP)config.h.in # Read a list of newline-separated strings from the standard input, # and print each of them once, without duplicates. Input order is # *not* preserved. am__uniquify_input = $(AWK) '\ BEGIN { nonempty = 0; } \ { items[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in items) print i; }; } \ ' # Make sure the list of sources is unique. This is necessary because, # e.g., the same source file might be shared among _SOURCES variables # for different programs/libraries. am__define_uniq_tagged_files = \ list='$(am__tagged_files)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | $(am__uniquify_input)` ETAGS = etags CTAGS = ctags CSCOPE = cscope AM_RECURSIVE_TARGETS = cscope am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/config.h.in README \ TODO compile config.guess config.sub depcomp install-sh \ ltmain.sh missing mkinstalldirs DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ if test -d "$(distdir)"; then \ find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi am__post_remove_distdir = $(am__remove_distdir) DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = --best DIST_TARGETS = dist-gzip distuninstallcheck_listfiles = find . 
-type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' distcleancheck_listfiles = find . -type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ GREP = @GREP@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@ MAINT = @MAINT@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ runstatedir = @runstatedir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ AUTOMAKE_OPTIONS = foreign VERSION_INFO = 4:0:0 EXTRA_DIST = LICENSE Changes libev.m4 autogen.sh \ ev_vars.h ev_wrap.h \ ev_epoll.c ev_select.c ev_poll.c ev_kqueue.c ev_port.c ev_linuxaio.c ev_iouring.c \ ev_win32.c \ ev.3 ev.pod Symbols.ev Symbols.event man_MANS = ev.3 include_HEADERS = ev.h ev++.h event.h lib_LTLIBRARIES = libev.la libev_la_SOURCES = ev.c event.c libev_la_LDFLAGS = -version-info $(VERSION_INFO) all: config.h $(MAKE) $(AM_MAKEFLAGS) all-am .SUFFIXES: .SUFFIXES: .c .lo .o .obj am--refresh: Makefile @: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ 
case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__maybe_remake_depfiles);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): config.h: stamp-h1 @test -f $@ || rm -f stamp-h1 @test -f $@ || $(MAKE) $(AM_MAKEFLAGS) stamp-h1 stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status @rm -f stamp-h1 cd $(top_builddir) && $(SHELL) ./config.status config.h $(srcdir)/config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) ($(am__cd) $(top_srcdir) && $(AUTOHEADER)) rm -f stamp-h1 touch $@ distclean-hdr: -rm -f config.h stamp-h1 install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(MKDIR_P) '$(DESTDIR)$(libdir)'"; \ $(MKDIR_P) "$(DESTDIR)$(libdir)" || exit 1; \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; \ locs=`for p in $$list; do echo $$p; done | \ sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \ sort -u`; \ test -z "$$locs" || { \ echo rm -f $${locs}; \ rm -f $${locs}; \ } libev.la: $(libev_la_OBJECTS) $(libev_la_DEPENDENCIES) $(EXTRA_libev_la_DEPENDENCIES) $(AM_V_CCLD)$(libev_la_LINK) -rpath $(libdir) $(libev_la_OBJECTS) $(libev_la_LIBADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ev.Plo@am__quote@ # am--include-marker @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/event.Plo@am__quote@ # am--include-marker $(am__depfiles_remade): @$(MKDIR_P) $(@D) @echo '# dummy' >$@-t && $(am__mv) $@-t $@ am--depfiles: $(am__depfiles_remade) .c.o: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) 
$(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $< .c.obj: @am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'` @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .c.lo: @am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $< @am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo @AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $< mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs distclean-libtool: -rm -f libtool config.lt install-man3: $(man_MANS) @$(NORMAL_INSTALL) @list1=''; \ list2='$(man_MANS)'; \ test -n "$(man3dir)" \ && test -n "`echo $$list1$$list2`" \ || exit 0; \ echo " $(MKDIR_P) '$(DESTDIR)$(man3dir)'"; \ $(MKDIR_P) "$(DESTDIR)$(man3dir)" || exit 1; \ { for i in $$list1; do echo "$$i"; done; \ if test -n "$$list2"; then \ for i in $$list2; do echo "$$i"; done \ | sed -n '/\.3[a-z]*$$/p'; \ fi; \ } | while read p; do \ if test -f $$p; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; echo "$$p"; \ done | \ sed -e 'n;s,.*/,,;p;h;s,.*\.,,;s,^[^3][0-9a-z]*$$,3,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,' | \ sed 'N;N;s,\n, ,g' | { \ list=; while read file base inst; do \ if test "$$base" = "$$inst"; then list="$$list $$file"; else \ echo " $(INSTALL_DATA) '$$file' '$(DESTDIR)$(man3dir)/$$inst'"; \ $(INSTALL_DATA) "$$file" "$(DESTDIR)$(man3dir)/$$inst" || exit $$?; \ fi; \ done; \ for i in $$list; do echo "$$i"; done | $(am__base_list) | \ while read files; do \ test -z "$$files" || { \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(man3dir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(man3dir)" || exit $$?; }; \ done; } uninstall-man3: @$(NORMAL_UNINSTALL) @list=''; test -n "$(man3dir)" || exit 0; \ files=`{ for i in $$list; do echo "$$i"; done; \ l2='$(man_MANS)'; for i in $$l2; do echo "$$i"; done | \ sed -n '/\.3[a-z]*$$/p'; \ } | sed -e 's,.*/,,;h;s,.*\.,,;s,^[^3][0-9a-z]*$$,3,;x' \ -e 's,\.[0-9a-z]*$$,,;$(transform);G;s,\n,.,'`; \ dir='$(DESTDIR)$(man3dir)'; $(am__uninstall_files_from_dir) install-includeHEADERS: $(include_HEADERS) @$(NORMAL_INSTALL) @list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \ if test -n "$$list"; then \ echo " $(MKDIR_P) '$(DESTDIR)$(includedir)'"; \ $(MKDIR_P) "$(DESTDIR)$(includedir)" || exit 1; \ fi; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(includedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(includedir)" || exit $$?; \ done uninstall-includeHEADERS: @$(NORMAL_UNINSTALL) @list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(includedir)'; $(am__uninstall_files_from_dir) ID: $(am__tagged_files) $(am__define_uniq_tagged_files); mkid -fID $$unique tags: tags-am TAGS: tags tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) set x; \ 
here=`pwd`; \ $(am__define_uniq_tagged_files); \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: ctags-am CTAGS: ctags ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files) $(am__define_uniq_tagged_files); \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" cscope: cscope.files test ! -s cscope.files \ || $(CSCOPE) -b -q $(AM_CSCOPEFLAGS) $(CSCOPEFLAGS) -i cscope.files $(CSCOPE_ARGS) clean-cscope: -rm -f cscope.files cscope.files: clean-cscope cscopelist cscopelist: cscopelist-am cscopelist-am: $(am__tagged_files) list='$(am__tagged_files)'; \ case "$(srcdir)" in \ [\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \ *) sdir=$(subdir)/$(srcdir) ;; \ esac; \ for i in $$list; do \ if test -f "$$i"; then \ echo "$(subdir)/$$i"; \ else \ echo "$$sdir/$$i"; \ fi; \ done >> $(top_builddir)/cscope.files distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags -rm -f cscope.out cscope.in.out cscope.po.out cscope.files distdir: $(BUILT_SOURCES) $(MAKE) $(AM_MAKEFLAGS) distdir-am distdir-am: $(DISTFILES) $(am__remove_distdir) test -d "$(distdir)" || mkdir "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! -perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).tar.gz $(am__post_remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 $(am__post_remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz $(am__post_remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz $(am__post_remove_distdir) dist-tarZ: distdir @echo WARNING: "Support for distribution archives compressed with" \ "legacy program 'compress' is deprecated." 
>&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__post_remove_distdir) dist-shar: distdir @echo WARNING: "Support for shar distribution archives is" \ "deprecated." >&2 @echo WARNING: "It will be removed altogether in Automake 2.0" >&2 shar $(distdir) | eval GZIP= gzip $(GZIP_ENV) -c >$(distdir).shar.gz $(am__post_remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__post_remove_distdir) dist dist-all: $(MAKE) $(AM_MAKEFLAGS) $(DIST_TARGETS) am__post_remove_distdir='@:' $(am__post_remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ eval GZIP= gzip $(GZIP_ENV) -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ esac chmod -R a-w $(distdir) chmod u+w $(distdir) mkdir $(distdir)/_build $(distdir)/_build/sub $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build/sub \ && ../../configure \ $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ --srcdir=../.. --prefix="$$dc_install_base" \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__post_remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @test -n '$(distuninstallcheck_dir)' || { \ echo 'ERROR: trying to run $@ with an empty' \ '$$(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ $(am__cd) '$(distuninstallcheck_dir)' || { \ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-am all-am: Makefile $(LTLIBRARIES) $(MANS) $(HEADERS) config.h installdirs: for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(man3dir)" "$(DESTDIR)$(includedir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic clean-libLTLIBRARIES clean-libtool \ mostlyclean-am distclean: distclean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -f ./$(DEPDIR)/ev.Plo -rm -f ./$(DEPDIR)/event.Plo -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-hdr distclean-libtool distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-includeHEADERS install-man install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-libLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-man3 install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -f ./$(DEPDIR)/ev.Plo -rm -f ./$(DEPDIR)/event.Plo -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-includeHEADERS uninstall-libLTLIBRARIES \ uninstall-man uninstall-man: uninstall-man3 .MAKE: all install-am install-strip .PHONY: CTAGS GTAGS TAGS all all-am am--depfiles am--refresh check \ check-am clean clean-cscope clean-generic clean-libLTLIBRARIES \ clean-libtool cscope cscopelist-am ctags ctags-am dist \ dist-all dist-bzip2 dist-gzip dist-lzip dist-shar dist-tarZ \ dist-xz dist-zip distcheck distclean distclean-compile \ distclean-generic distclean-hdr distclean-libtool \ distclean-tags distcleancheck distdir distuninstallcheck dvi \ dvi-am html html-am info info-am install install-am \ install-data install-data-am install-dvi install-dvi-am \ install-exec install-exec-am install-html install-html-am \ install-includeHEADERS install-info install-info-am \ install-libLTLIBRARIES install-man install-man3 install-pdf \ install-pdf-am install-ps install-ps-am install-strip \ installcheck 
installcheck-am installdirs maintainer-clean \ maintainer-clean-generic mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags tags-am uninstall uninstall-am uninstall-includeHEADERS \ uninstall-libLTLIBRARIES uninstall-man uninstall-man3 .PRECIOUS: Makefile ev.3: ev.pod pod2man -n LIBEV -r "libev-$(VERSION)" -c "libev - high performance full featured event loop" -s3 <$< >$@ # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: gevent-24.11.1/deps/libev/README000066400000000000000000000050361471441230600160720ustar00rootroot00000000000000libev is a high-performance event loop/event model with lots of features. (see benchmark at http://libev.schmorp.de/bench.html) ABOUT Homepage: http://software.schmorp.de/pkg/libev Mailinglist: libev@lists.schmorp.de http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev Library Documentation: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod Libev is modelled (very losely) after libevent and the Event perl module, but is faster, scales better and is more correct, and also more featureful. And also smaller. Yay. Some of the specialties of libev not commonly found elsewhere are: - extensive and detailed, readable documentation (not doxygen garbage). - fully supports fork, can detect fork in various ways and automatically re-arms kernel mechanisms that do not support fork. - highly optimised select, poll, linux epoll, linux aio, bsd kqueue and solaris event ports backends. - filesystem object (path) watching (with optional linux inotify support). - wallclock-based times (using absolute time, cron-like). - relative timers/timeouts (handle time jumps). - fast intra-thread communication between multiple event loops (with optional fast linux eventfd backend). - extremely easy to embed (fully documented, no dependencies, autoconf supported but optional). - very small codebase, no bloated library, simple code. - fully extensible by being able to plug into the event loop, integrate other event loops, integrate other event loop users. - very little memory use (small watchers, small event loop data). - optional C++ interface allowing method and function callbacks at no extra memory or runtime overhead. - optional Perl interface with similar characteristics (capable of running Glib/Gtk2 on libev). - support for other languages (multiple C++ interfaces, D, Ruby, Python) available from third-parties. Examples of programs that embed libev: the EV perl module, node.js, auditd, rxvt-unicode, gvpe (GNU Virtual Private Ethernet), the Deliantra MMORPG server (http://www.deliantra.net/), Rubinius (a next-generation Ruby VM), the Ebb web server, the Rev event toolkit. CONTRIBUTORS libev was written and designed by Marc Lehmann and Emanuele Giaquinta. The following people sent in patches or made other noteworthy contributions to the design (for minor patches, see the Changes file. If I forgot to include you, please shout at me, it was an accident): W.C.A. Wijngaards Christopher Layne Chris Brody gevent-24.11.1/deps/libev/config.guess000066400000000000000000001430461471441230600175330ustar00rootroot00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2024 Free Software Foundation, Inc. 
# shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2024-01-01' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner; maintained since 2000 by Ben Elliston. # # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.guess # # Please send patches to . # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system '$me' is run on. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2024 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi # Just in case it came from the environment. GUESS= # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, 'CC_FOR_BUILD' used to be named 'HOST_CC'. We still # use 'HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. 
tmp= # shellcheck disable=SC2172 trap 'test -z "$tmp" || rm -fr "$tmp"' 0 1 2 13 15 set_cc_for_build() { # prevent multiple calls if $tmp is already set test "$tmp" && return 0 : "${TMPDIR=/tmp}" # shellcheck disable=SC2039,SC3028 { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir "$tmp" 2>/dev/null) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir "$tmp" 2>/dev/null) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } dummy=$tmp/dummy case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in ,,) echo "int x;" > "$dummy.c" for driver in cc gcc c89 c99 ; do if ($driver -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then CC_FOR_BUILD=$driver break fi done if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac } # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 1994-08-24) if test -f /.attbin/uname ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown case $UNAME_SYSTEM in Linux|GNU|GNU/*) LIBC=unknown set_cc_for_build cat <<-EOF > "$dummy.c" #if defined(__ANDROID__) LIBC=android #else #include #if defined(__UCLIBC__) LIBC=uclibc #elif defined(__dietlibc__) LIBC=dietlibc #elif defined(__GLIBC__) LIBC=gnu #elif defined(__LLVM_LIBC__) LIBC=llvm #else #include /* First heuristic to detect musl libc. */ #ifdef __DEFINED_va_list LIBC=musl #endif #endif #endif EOF cc_set_libc=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'` eval "$cc_set_libc" # Second heuristic to detect musl libc. if [ "$LIBC" = unknown ] && command -v ldd >/dev/null && ldd --version 2>&1 | grep -q ^musl; then LIBC=musl fi # If the system lacks a compiler, then just pick glibc. # We could probably try harder. if [ "$LIBC" = unknown ]; then LIBC=gnu fi ;; esac # Note: order is significant - the case branches are not exclusive. case $UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". 
UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \ /sbin/sysctl -n hw.machine_arch 2>/dev/null || \ /usr/sbin/sysctl -n hw.machine_arch 2>/dev/null || \ echo unknown)` case $UNAME_MACHINE_ARCH in aarch64eb) machine=aarch64_be-unknown ;; armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; earmv*) arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'` endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'` machine=${arch}${endian}-unknown ;; *) machine=$UNAME_MACHINE_ARCH-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently (or will in the future) and ABI. case $UNAME_MACHINE_ARCH in earm*) os=netbsdelf ;; arm*|i386|m68k|ns32k|sh3*|sparc|vax) set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # Determine ABI tags. case $UNAME_MACHINE_ARCH in earm*) expr='s/^earmv[0-9]/-eabi/;s/eb$//' abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"` ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case $UNAME_VERSION in Debian*) release='-gnu' ;; *) release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. GUESS=$machine-${os}${release}${abi-} ;; *:Bitrig:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-bitrig$UNAME_RELEASE ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-openbsd$UNAME_RELEASE ;; *:SecBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/SecBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-secbsd$UNAME_RELEASE ;; *:LibertyBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-libertybsd$UNAME_RELEASE ;; *:MidnightBSD:*:*) GUESS=$UNAME_MACHINE-unknown-midnightbsd$UNAME_RELEASE ;; *:ekkoBSD:*:*) GUESS=$UNAME_MACHINE-unknown-ekkobsd$UNAME_RELEASE ;; *:SolidBSD:*:*) GUESS=$UNAME_MACHINE-unknown-solidbsd$UNAME_RELEASE ;; *:OS108:*:*) GUESS=$UNAME_MACHINE-unknown-os108_$UNAME_RELEASE ;; macppc:MirBSD:*:*) GUESS=powerpc-unknown-mirbsd$UNAME_RELEASE ;; *:MirBSD:*:*) GUESS=$UNAME_MACHINE-unknown-mirbsd$UNAME_RELEASE ;; *:Sortix:*:*) GUESS=$UNAME_MACHINE-unknown-sortix ;; *:Twizzler:*:*) GUESS=$UNAME_MACHINE-unknown-twizzler ;; *:Redox:*:*) GUESS=$UNAME_MACHINE-unknown-redox ;; mips:OSF1:*.*) GUESS=mips-dec-osf1 ;; alpha:OSF1:*:*) # Reset EXIT trap before exiting to avoid spurious non-zero exit code. trap '' 0 case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. 
ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case $ALPHA_CPU_TYPE in "EV4 (21064)") UNAME_MACHINE=alpha ;; "EV4.5 (21064)") UNAME_MACHINE=alpha ;; "LCA4 (21066/21068)") UNAME_MACHINE=alpha ;; "EV5 (21164)") UNAME_MACHINE=alphaev5 ;; "EV5.6 (21164A)") UNAME_MACHINE=alphaev56 ;; "EV5.6 (21164PC)") UNAME_MACHINE=alphapca56 ;; "EV5.7 (21164PC)") UNAME_MACHINE=alphapca57 ;; "EV6 (21264)") UNAME_MACHINE=alphaev6 ;; "EV6.7 (21264A)") UNAME_MACHINE=alphaev67 ;; "EV6.8CB (21264C)") UNAME_MACHINE=alphaev68 ;; "EV6.8AL (21264B)") UNAME_MACHINE=alphaev68 ;; "EV6.8CX (21264D)") UNAME_MACHINE=alphaev68 ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE=alphaev69 ;; "EV7 (21364)") UNAME_MACHINE=alphaev7 ;; "EV7.9 (21364A)") UNAME_MACHINE=alphaev79 ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. OSF_REL=`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` GUESS=$UNAME_MACHINE-dec-osf$OSF_REL ;; Amiga*:UNIX_System_V:4.0:*) GUESS=m68k-unknown-sysv4 ;; *:[Aa]miga[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-amigaos ;; *:[Mm]orph[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-morphos ;; *:OS/390:*:*) GUESS=i370-ibm-openedition ;; *:z/VM:*:*) GUESS=s390-ibm-zvmoe ;; *:OS400:*:*) GUESS=powerpc-ibm-os400 ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) GUESS=arm-acorn-riscix$UNAME_RELEASE ;; arm*:riscos:*:*|arm*:RISCOS:*:*) GUESS=arm-unknown-riscos ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) GUESS=hppa1.1-hitachi-hiuxmpp ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. case `(/bin/universe) 2>/dev/null` in att) GUESS=pyramid-pyramid-sysv3 ;; *) GUESS=pyramid-pyramid-bsd ;; esac ;; NILE*:*:*:dcosx) GUESS=pyramid-pyramid-svr4 ;; DRS?6000:unix:4.0:6*) GUESS=sparc-icl-nx6 ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) GUESS=sparc-icl-nx7 ;; esac ;; s390x:SunOS:*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$UNAME_MACHINE-ibm-solaris2$SUN_REL ;; sun4H:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-hal-solaris2$SUN_REL ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris2$SUN_REL ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) GUESS=i386-pc-auroraux$UNAME_RELEASE ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) set_cc_for_build SUN_ARCH=i386 # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -m64 -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH=x86_64 fi fi SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$SUN_ARCH-pc-solaris2$SUN_REL ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. 
SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris3$SUN_REL ;; sun4*:SunOS:*:*) case `/usr/bin/arch -k` in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like '4.1.3-JL'. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/'` GUESS=sparc-sun-sunos$SUN_REL ;; sun3*:SunOS:*:*) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3 case `/bin/arch` in sun3) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun4) GUESS=sparc-sun-sunos$UNAME_RELEASE ;; esac ;; aushp:SunOS:*:*) GUESS=sparc-auspex-sunos$UNAME_RELEASE ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. But MiNT is downward compatible to TOS, so this should # be no problem. atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) GUESS=m68k-milan-mint$UNAME_RELEASE ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) GUESS=m68k-hades-mint$UNAME_RELEASE ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) GUESS=m68k-unknown-mint$UNAME_RELEASE ;; m68k:machten:*:*) GUESS=m68k-apple-machten$UNAME_RELEASE ;; powerpc:machten:*:*) GUESS=powerpc-apple-machten$UNAME_RELEASE ;; RISC*:Mach:*:*) GUESS=mips-dec-mach_bsd4.3 ;; RISC*:ULTRIX:*:*) GUESS=mips-dec-ultrix$UNAME_RELEASE ;; VAX*:ULTRIX*:*:*) GUESS=vax-dec-ultrix$UNAME_RELEASE ;; 2020:CLIX:*:* | 2430:CLIX:*:*) GUESS=clipper-intergraph-clix$UNAME_RELEASE ;; mips:*:*:UMIPS | mips:*:*:RISCos) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`"$dummy" "$dummyarg"` && { echo "$SYSTEM_NAME"; exit; } GUESS=mips-mips-riscos$UNAME_RELEASE ;; Motorola:PowerMAX_OS:*:*) GUESS=powerpc-motorola-powermax ;; Motorola:*:4.3:PL8-*) GUESS=powerpc-harris-powermax ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) GUESS=powerpc-harris-powermax ;; Night_Hawk:Power_UNIX:*:*) GUESS=powerpc-harris-powerunix ;; m88k:CX/UX:7*:*) GUESS=m88k-harris-cxux7 ;; m88k:*:4*:R4*) GUESS=m88k-motorola-sysv4 ;; m88k:*:3*:R3*) GUESS=m88k-motorola-sysv3 ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if test "$UNAME_PROCESSOR" = mc88100 || test "$UNAME_PROCESSOR" = mc88110 then if test "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx || \ test 
"$TARGET_BINARY_INTERFACE"x = x then GUESS=m88k-dg-dgux$UNAME_RELEASE else GUESS=m88k-dg-dguxbcs$UNAME_RELEASE fi else GUESS=i586-dg-dgux$UNAME_RELEASE fi ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) GUESS=m88k-dolphin-sysv3 ;; M88*:*:R3*:*) # Delta 88k system running SVR3 GUESS=m88k-motorola-sysv3 ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) GUESS=m88k-tektronix-sysv3 ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) GUESS=m68k-tektronix-bsd ;; *:IRIX*:*:*) IRIX_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/g'` GUESS=mips-sgi-irix$IRIX_REL ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. GUESS=romp-ibm-aix # uname -m gives an 8 hex-code CPU id ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) GUESS=i386-ibm-aix ;; ia64:AIX:*:*) if test -x /usr/bin/oslevel ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$UNAME_MACHINE-ibm-aix$IBM_REV ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include main() { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` then GUESS=$SYSTEM_NAME else GUESS=rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then GUESS=rs6000-ibm-aix3.2.4 else GUESS=rs6000-ibm-aix3.2 fi ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if test -x /usr/bin/lslpp ; then IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | \ awk -F: '{ print $3 }' | sed s/[0-9]*$/0/` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$IBM_ARCH-ibm-aix$IBM_REV ;; *:AIX:*:*) GUESS=rs6000-ibm-aix ;; ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*) GUESS=romp-ibm-bsd4.4 ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and GUESS=romp-ibm-bsd$UNAME_RELEASE # 4.3 with uname added to ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) GUESS=rs6000-bull-bosx ;; DPX/2?00:B.O.S.:*:*) GUESS=m68k-bull-sysv3 ;; 9000/[34]??:4.3bsd:1.*:*) GUESS=m68k-hp-bsd ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) GUESS=m68k-hp-bsd4.4 ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` case $UNAME_MACHINE in 9000/31?) HP_ARCH=m68000 ;; 9000/[34]??) 
HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if test -x /usr/bin/getconf; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case $sc_cpu_version in 523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0 528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case $sc_kernel_bits in 32) HP_ARCH=hppa2.0n ;; 64) HP_ARCH=hppa2.0w ;; '') HP_ARCH=hppa2.0 ;; # HP-UX 10.20 esac ;; esac fi if test "$HP_ARCH" = ""; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #define _HPUX_SOURCE #include #include int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if test "$HP_ARCH" = hppa2.0w then set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code. GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH=hppa2.0w else HP_ARCH=hppa64 fi fi GUESS=$HP_ARCH-hp-hpux$HPUX_REV ;; ia64:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` GUESS=ia64-hp-hpux$HPUX_REV ;; 3050*:HI-UX:*:*) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. 
*/ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } GUESS=unknown-hitachi-hiuxwe2 ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*) GUESS=hppa1.1-hp-bsd ;; 9000/8??:4.3bsd:*:*) GUESS=hppa1.0-hp-bsd ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) GUESS=hppa1.0-hp-mpeix ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*) GUESS=hppa1.1-hp-osf ;; hp8??:OSF1:*:*) GUESS=hppa1.0-hp-osf ;; i*86:OSF1:*:*) if test -x /usr/sbin/sysversion ; then GUESS=$UNAME_MACHINE-unknown-osf1mk else GUESS=$UNAME_MACHINE-unknown-osf1 fi ;; parisc*:Lites*:*:*) GUESS=hppa1.1-hp-lites ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) GUESS=c1-convex-bsd ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) GUESS=c34-convex-bsd ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) GUESS=c38-convex-bsd ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) GUESS=c4-convex-bsd ;; CRAY*Y-MP:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=ymp-cray-unicos$CRAY_REL ;; CRAY*[A-Z]90:*:*:*) echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=t90-cray-unicos$CRAY_REL ;; CRAY*T3E:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=alphaev5-cray-unicosmk$CRAY_REL ;; CRAY*SV1:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=sv1-cray-unicos$CRAY_REL ;; *:UNICOS/mp:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=craynv-cray-unicosmp$CRAY_REL ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'` GUESS=${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'` GUESS=sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) GUESS=$UNAME_MACHINE-pc-bsdi$UNAME_RELEASE ;; sparc*:BSD/OS:*:*) GUESS=sparc-unknown-bsdi$UNAME_RELEASE ;; *:BSD/OS:*:*) GUESS=$UNAME_MACHINE-unknown-bsdi$UNAME_RELEASE ;; arm:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` set_cc_for_build if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabi else FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabihf fi ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in amd64) UNAME_PROCESSOR=x86_64 ;; i386) UNAME_PROCESSOR=i586 ;; esac 
FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL ;; i*:CYGWIN*:*) GUESS=$UNAME_MACHINE-pc-cygwin ;; *:MINGW64*:*) GUESS=$UNAME_MACHINE-pc-mingw64 ;; *:MINGW*:*) GUESS=$UNAME_MACHINE-pc-mingw32 ;; *:MSYS*:*) GUESS=$UNAME_MACHINE-pc-msys ;; i*:PW*:*) GUESS=$UNAME_MACHINE-pc-pw32 ;; *:SerenityOS:*:*) GUESS=$UNAME_MACHINE-pc-serenity ;; *:Interix*:*) case $UNAME_MACHINE in x86) GUESS=i586-pc-interix$UNAME_RELEASE ;; authenticamd | genuineintel | EM64T) GUESS=x86_64-unknown-interix$UNAME_RELEASE ;; IA64) GUESS=ia64-unknown-interix$UNAME_RELEASE ;; esac ;; i*:UWIN*:*) GUESS=$UNAME_MACHINE-pc-uwin ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) GUESS=x86_64-pc-cygwin ;; prep*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=powerpcle-unknown-solaris2$SUN_REL ;; *:GNU:*:*) # the GNU system GNU_ARCH=`echo "$UNAME_MACHINE" | sed -e 's,[-/].*$,,'` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's,/.*$,,'` GUESS=$GNU_ARCH-unknown-$LIBC$GNU_REL ;; *:GNU/*:*:*) # other systems with GNU libc and userland GNU_SYS=`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-$GNU_SYS$GNU_REL-$LIBC ;; x86_64:[Mm]anagarm:*:*|i?86:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-pc-managarm-mlibc" ;; *:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-unknown-managarm-mlibc" ;; *:Minix:*:*) GUESS=$UNAME_MACHINE-unknown-minix ;; aarch64:Linux:*:*) set_cc_for_build CPU=$UNAME_MACHINE LIBCABI=$LIBC if test "$CC_FOR_BUILD" != no_compiler_found; then ABI=64 sed 's/^ //' << EOF > "$dummy.c" #ifdef __ARM_EABI__ #ifdef __ARM_PCS_VFP ABI=eabihf #else ABI=eabi #endif #endif EOF cc_set_abi=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^ABI' | sed 's, ,,g'` eval "$cc_set_abi" case $ABI in eabi | eabihf) CPU=armv8l; LIBCABI=$LIBC$ABI ;; esac fi GUESS=$CPU-unknown-linux-$LIBCABI ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' /proc/cpuinfo 2>/dev/null` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" 
= 0 ; then LIBC=gnulibc1 ; fi GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arc:Linux:*:* | arceb:Linux:*:* | arc32:Linux:*:* | arc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arm*:Linux:*:*) set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then GUESS=$UNAME_MACHINE-unknown-linux-$LIBC else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabi else GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabihf fi fi ;; avr32*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; cris:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; crisv32:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; e2k:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; frv:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; hexagon:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:Linux:*:*) GUESS=$UNAME_MACHINE-pc-linux-$LIBC ;; ia64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; k1om:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; kvx:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; kvx:cos:*:*) GUESS=$UNAME_MACHINE-unknown-cos ;; kvx:mbr:*:*) GUESS=$UNAME_MACHINE-unknown-mbr ;; loongarch32:Linux:*:* | loongarch64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m32r*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m68*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; mips:Linux:*:* | mips64:Linux:*:*) set_cc_for_build IS_GLIBC=0 test x"${LIBC}" = xgnu && IS_GLIBC=1 sed 's/^ //' << EOF > "$dummy.c" #undef CPU #undef mips #undef mipsel #undef mips64 #undef mips64el #if ${IS_GLIBC} && defined(_ABI64) LIBCABI=gnuabi64 #else #if ${IS_GLIBC} && defined(_ABIN32) LIBCABI=gnuabin32 #else LIBCABI=${LIBC} #endif #endif #if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa64r6 #else #if ${IS_GLIBC} && !defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa32r6 #else #if defined(__mips64) CPU=mips64 #else CPU=mips #endif #endif #endif #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) MIPS_ENDIAN=el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) MIPS_ENDIAN= #else MIPS_ENDIAN= #endif #endif EOF cc_set_vars=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU\|^MIPS_ENDIAN\|^LIBCABI'` eval "$cc_set_vars" test "x$CPU" != x && { echo "$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI"; exit; } ;; mips64el:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; openrisc*:Linux:*:*) GUESS=or1k-unknown-linux-$LIBC ;; or32:Linux:*:* | or1k*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; padre:Linux:*:*) GUESS=sparc-unknown-linux-$LIBC ;; parisc64:Linux:*:* | hppa64:Linux:*:*) GUESS=hppa64-unknown-linux-$LIBC ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) GUESS=hppa1.1-unknown-linux-$LIBC ;; PA8*) GUESS=hppa2.0-unknown-linux-$LIBC ;; *) GUESS=hppa-unknown-linux-$LIBC ;; esac ;; ppc64:Linux:*:*) GUESS=powerpc64-unknown-linux-$LIBC ;; ppc:Linux:*:*) GUESS=powerpc-unknown-linux-$LIBC ;; ppc64le:Linux:*:*) GUESS=powerpc64le-unknown-linux-$LIBC ;; ppcle:Linux:*:*) GUESS=powerpcle-unknown-linux-$LIBC ;; riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; s390:Linux:*:* | s390x:Linux:*:*) GUESS=$UNAME_MACHINE-ibm-linux-$LIBC ;; sh64*:Linux:*:*) 
GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sh*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sparc:Linux:*:* | sparc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; tile*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; vax:Linux:*:*) GUESS=$UNAME_MACHINE-dec-linux-$LIBC ;; x86_64:Linux:*:*) set_cc_for_build CPU=$UNAME_MACHINE LIBCABI=$LIBC if test "$CC_FOR_BUILD" != no_compiler_found; then ABI=64 sed 's/^ //' << EOF > "$dummy.c" #ifdef __i386__ ABI=x86 #else #ifdef __ILP32__ ABI=x32 #endif #endif EOF cc_set_abi=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^ABI' | sed 's, ,,g'` eval "$cc_set_abi" case $ABI in x86) CPU=i686 ;; x32) LIBCABI=${LIBC}x32 ;; esac fi GUESS=$CPU-pc-linux-$LIBCABI ;; xtensa*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. GUESS=i386-sequent-sysv4 ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. GUESS=$UNAME_MACHINE-pc-sysv4.2uw$UNAME_VERSION ;; i*86:OS/2:*:*) # If we were able to find 'uname', then EMX Unix compatibility # is probably installed. GUESS=$UNAME_MACHINE-pc-os2-emx ;; i*86:XTS-300:*:STOP) GUESS=$UNAME_MACHINE-unknown-stop ;; i*86:atheos:*:*) GUESS=$UNAME_MACHINE-unknown-atheos ;; i*86:syllable:*:*) GUESS=$UNAME_MACHINE-pc-syllable ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) GUESS=i386-unknown-lynxos$UNAME_RELEASE ;; i*86:*DOS:*:*) GUESS=$UNAME_MACHINE-pc-msdosdjgpp ;; i*86:*:4.*:*) UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then GUESS=$UNAME_MACHINE-univel-sysv$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv$UNAME_REL fi ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac GUESS=$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 GUESS=$UNAME_MACHINE-pc-sco$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv32 fi ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configure will decide that # this is a cross-build. GUESS=i586-pc-msdosdjgpp ;; Intel:Mach:3*:*) GUESS=i386-pc-mach3 ;; paragon:*:*:*) GUESS=i860-intel-osf1 ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then GUESS=i860-stardent-sysv$UNAME_RELEASE # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. 
GUESS=i860-unknown-sysv$UNAME_RELEASE # Unknown i860-SVR4 fi ;; mini*:CTIX:SYS*5:*) # "miniframe" GUESS=m68010-convergent-sysv ;; mc68k:UNIX:SYSTEM5:3.51m) GUESS=m68k-convergent-sysv ;; M680?0:D-NIX:5.3:*) GUESS=m68k-diab-dnix ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) GUESS=m68k-unknown-lynxos$UNAME_RELEASE ;; mc68030:UNIX_System_V:4.*:*) GUESS=m68k-atari-sysv4 ;; TSUNAMI:LynxOS:2.*:*) GUESS=sparc-unknown-lynxos$UNAME_RELEASE ;; rs6000:LynxOS:2.*:*) GUESS=rs6000-unknown-lynxos$UNAME_RELEASE ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) GUESS=powerpc-unknown-lynxos$UNAME_RELEASE ;; SM[BE]S:UNIX_SV:*:*) GUESS=mips-dde-sysv$UNAME_RELEASE ;; RM*:ReliantUNIX-*:*:*) GUESS=mips-sni-sysv4 ;; RM*:SINIX-*:*:*) GUESS=mips-sni-sysv4 ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` GUESS=$UNAME_MACHINE-sni-sysv4 else GUESS=ns32k-sni-sysv fi ;; PENTIUM:*:4.0*:*) # Unisys 'ClearPath HMP IX 4000' SVR4/MP effort # says GUESS=i586-unisys-sysv4 ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes . # How about differentiating between stratus architectures? -djm GUESS=hppa1.1-stratus-sysv4 ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. GUESS=i860-stratus-sysv4 ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. GUESS=$UNAME_MACHINE-stratus-vos ;; *:VOS:*:*) # From Paul.Green@stratus.com. GUESS=hppa1.1-stratus-vos ;; mc68*:A/UX:*:*) GUESS=m68k-apple-aux$UNAME_RELEASE ;; news*:NEWS-OS:6*:*) GUESS=mips-sony-newsos6 ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if test -d /usr/nec; then GUESS=mips-nec-sysv$UNAME_RELEASE else GUESS=mips-unknown-sysv$UNAME_RELEASE fi ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. GUESS=powerpc-be-beos ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. GUESS=powerpc-apple-beos ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. GUESS=i586-pc-beos ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
GUESS=i586-pc-haiku ;; ppc:Haiku:*:*) # Haiku running on Apple PowerPC GUESS=powerpc-apple-haiku ;; *:Haiku:*:*) # Haiku modern gcc (not bound by BeOS compat) GUESS=$UNAME_MACHINE-unknown-haiku ;; SX-4:SUPER-UX:*:*) GUESS=sx4-nec-superux$UNAME_RELEASE ;; SX-5:SUPER-UX:*:*) GUESS=sx5-nec-superux$UNAME_RELEASE ;; SX-6:SUPER-UX:*:*) GUESS=sx6-nec-superux$UNAME_RELEASE ;; SX-7:SUPER-UX:*:*) GUESS=sx7-nec-superux$UNAME_RELEASE ;; SX-8:SUPER-UX:*:*) GUESS=sx8-nec-superux$UNAME_RELEASE ;; SX-8R:SUPER-UX:*:*) GUESS=sx8r-nec-superux$UNAME_RELEASE ;; SX-ACE:SUPER-UX:*:*) GUESS=sxace-nec-superux$UNAME_RELEASE ;; Power*:Rhapsody:*:*) GUESS=powerpc-apple-rhapsody$UNAME_RELEASE ;; *:Rhapsody:*:*) GUESS=$UNAME_MACHINE-apple-rhapsody$UNAME_RELEASE ;; arm64:Darwin:*:*) GUESS=aarch64-apple-darwin$UNAME_RELEASE ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in unknown) UNAME_PROCESSOR=powerpc ;; esac if command -v xcode-select > /dev/null 2> /dev/null && \ ! xcode-select --print-path > /dev/null 2> /dev/null ; then # Avoid executing cc if there is no toolchain installed as # cc will be a stub that puts up a graphical alert # prompting the user to install developer tools. CC_FOR_BUILD=no_compiler_found else set_cc_for_build fi if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then case $UNAME_PROCESSOR in i386) UNAME_PROCESSOR=x86_64 ;; powerpc) UNAME_PROCESSOR=powerpc64 ;; esac fi # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_PPC >/dev/null then UNAME_PROCESSOR=powerpc fi elif test "$UNAME_PROCESSOR" = i386 ; then # uname -m returns i386 or x86_64 UNAME_PROCESSOR=$UNAME_MACHINE fi GUESS=$UNAME_PROCESSOR-apple-darwin$UNAME_RELEASE ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = x86; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi GUESS=$UNAME_PROCESSOR-$UNAME_MACHINE-nto-qnx$UNAME_RELEASE ;; *:QNX:*:4*) GUESS=i386-pc-qnx ;; NEO-*:NONSTOP_KERNEL:*:*) GUESS=neo-tandem-nsk$UNAME_RELEASE ;; NSE-*:NONSTOP_KERNEL:*:*) GUESS=nse-tandem-nsk$UNAME_RELEASE ;; NSR-*:NONSTOP_KERNEL:*:*) GUESS=nsr-tandem-nsk$UNAME_RELEASE ;; NSV-*:NONSTOP_KERNEL:*:*) GUESS=nsv-tandem-nsk$UNAME_RELEASE ;; NSX-*:NONSTOP_KERNEL:*:*) GUESS=nsx-tandem-nsk$UNAME_RELEASE ;; *:NonStop-UX:*:*) GUESS=mips-compaq-nonstopux ;; BS2000:POSIX*:*:*) GUESS=bs2000-siemens-sysv ;; DS/*:UNIX_System_V:*:*) GUESS=$UNAME_MACHINE-$UNAME_SYSTEM-$UNAME_RELEASE ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. 
if test "${cputype-}" = 386; then UNAME_MACHINE=i386 elif test "x${cputype-}" != x; then UNAME_MACHINE=$cputype fi GUESS=$UNAME_MACHINE-unknown-plan9 ;; *:TOPS-10:*:*) GUESS=pdp10-unknown-tops10 ;; *:TENEX:*:*) GUESS=pdp10-unknown-tenex ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) GUESS=pdp10-dec-tops20 ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) GUESS=pdp10-xkl-tops20 ;; *:TOPS-20:*:*) GUESS=pdp10-unknown-tops20 ;; *:ITS:*:*) GUESS=pdp10-unknown-its ;; SEI:*:*:SEIUX) GUESS=mips-sei-seiux$UNAME_RELEASE ;; *:DragonFly:*:*) DRAGONFLY_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-dragonfly$DRAGONFLY_REL ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case $UNAME_MACHINE in A*) GUESS=alpha-dec-vms ;; I*) GUESS=ia64-dec-vms ;; V*) GUESS=vax-dec-vms ;; esac ;; *:XENIX:*:SysV) GUESS=i386-pc-xenix ;; i*86:skyos:*:*) SKYOS_REL=`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'` GUESS=$UNAME_MACHINE-pc-skyos$SKYOS_REL ;; i*86:rdos:*:*) GUESS=$UNAME_MACHINE-pc-rdos ;; i*86:Fiwix:*:*) GUESS=$UNAME_MACHINE-pc-fiwix ;; *:AROS:*:*) GUESS=$UNAME_MACHINE-unknown-aros ;; x86_64:VMkernel:*:*) GUESS=$UNAME_MACHINE-unknown-esx ;; amd64:Isilon\ OneFS:*:*) GUESS=x86_64-unknown-onefs ;; *:Unleashed:*:*) GUESS=$UNAME_MACHINE-unknown-unleashed$UNAME_RELEASE ;; *:Ironclad:*:*) GUESS=$UNAME_MACHINE-unknown-ironclad ;; esac # Do we have a guess based on uname results? if test "x$GUESS" != x; then echo "$GUESS" exit fi # No uname command or uname output not recognized. set_cc_for_build cat > "$dummy.c" < #include #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #include #if defined(_SIZE_T_) || defined(SIGLOST) #include #endif #endif #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know.... */ printf ("mips-sony-bsd\n"); exit (0); #else #include printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? 
*/ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) #if !defined (ultrix) #include #if defined (BSD) #if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); #else #if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); #else printf ("vax-dec-bsd\n"); exit (0); #endif #endif #else printf ("vax-dec-bsd\n"); exit (0); #endif #else #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname un; uname (&un); printf ("vax-dec-ultrix%s\n", un.release); exit (0); #else printf ("vax-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname *un; uname (&un); printf ("mips-dec-ultrix%s\n", un.release); exit (0); #else printf ("mips-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo "$ISP-apollo-$SYSTYPE"; exit; } echo "$0: unable to guess system type" >&2 case $UNAME_MACHINE:$UNAME_SYSTEM in mips:Linux | mips64:Linux) # If we got here on MIPS GNU/Linux, output extra information. cat >&2 <&2 <&2 </dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = "$UNAME_MACHINE" UNAME_RELEASE = "$UNAME_RELEASE" UNAME_SYSTEM = "$UNAME_SYSTEM" UNAME_VERSION = "$UNAME_VERSION" EOF fi exit 1 # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/libev/config.h.in000066400000000000000000000071141471441230600172340ustar00rootroot00000000000000/* config.h.in. Generated from configure.ac by autoheader. */ /* Define to 1 if you have the `clock_gettime' function. */ #undef HAVE_CLOCK_GETTIME /* Define to 1 to use the syscall interface for clock_gettime */ #undef HAVE_CLOCK_SYSCALL /* Define to 1 if you have the header file. */ #undef HAVE_DLFCN_H /* Define to 1 if you have the `epoll_ctl' function. */ #undef HAVE_EPOLL_CTL /* Define to 1 if you have the `eventfd' function. */ #undef HAVE_EVENTFD /* Define to 1 if the floor function is available */ #undef HAVE_FLOOR /* Define to 1 if you have the `inotify_init' function. */ #undef HAVE_INOTIFY_INIT /* Define to 1 if you have the header file. */ #undef HAVE_INTTYPES_H /* Define to 1 if linux/fs.h defined kernel_rwf_t */ #undef HAVE_KERNEL_RWF_T /* Define to 1 if you have the `kqueue' function. */ #undef HAVE_KQUEUE /* Define to 1 if you have the `rt' library (-lrt). */ #undef HAVE_LIBRT /* Define to 1 if you have the header file. */ #undef HAVE_LINUX_AIO_ABI_H /* Define to 1 if you have the header file. 
*/ #undef HAVE_LINUX_FS_H /* Define to 1 if you have the header file. */ #undef HAVE_MEMORY_H /* Define to 1 if you have the `nanosleep' function. */ #undef HAVE_NANOSLEEP /* Define to 1 if you have the `poll' function. */ #undef HAVE_POLL /* Define to 1 if you have the header file. */ #undef HAVE_POLL_H /* Define to 1 if you have the `port_create' function. */ #undef HAVE_PORT_CREATE /* Define to 1 if you have the header file. */ #undef HAVE_PORT_H /* Define to 1 if you have the `select' function. */ #undef HAVE_SELECT /* Define to 1 if you have the `signalfd' function. */ #undef HAVE_SIGNALFD /* Define to 1 if you have the header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H /* Define to 1 if you have the header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the header file. */ #undef HAVE_STRING_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_EPOLL_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_EVENTFD_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_EVENT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_INOTIFY_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SELECT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_SIGNALFD_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TIMERFD_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H /* Define to the sub-directory where libtool stores uninstalled libraries. */ #undef LT_OBJDIR /* Name of package */ #undef PACKAGE /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define to 1 if you have the ANSI C header files. */ #undef STDC_HEADERS /* Version number of package */ #undef VERSION gevent-24.11.1/deps/libev/config.sub000066400000000000000000001077561471441230600172060ustar00rootroot00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2024 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2024-01-01' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. 
This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches to . # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.sub # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS Canonicalize a configuration name. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright 1992-2024 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; *local*) # First pass through any local machine types. 
echo "$1" exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Split fields of configuration type # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read field1 field2 field3 field4 <&2 exit 1 ;; *-*-*-*) basic_machine=$field1-$field2 basic_os=$field3-$field4 ;; *-*-*) # Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two # parts maybe_os=$field2-$field3 case $maybe_os in nto-qnx* | linux-* | uclinux-uclibc* \ | uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* \ | netbsd*-eabi* | kopensolaris*-gnu* | cloudabi*-eabi* \ | storm-chaos* | os2-emx* | rtmk-nova* | managarm-* \ | windows-* ) basic_machine=$field1 basic_os=$maybe_os ;; android-linux) basic_machine=$field1-unknown basic_os=linux-android ;; *) basic_machine=$field1-$field2 basic_os=$field3 ;; esac ;; *-*) # A lone config we happen to match not fitting any pattern case $field1-$field2 in decstation-3100) basic_machine=mips-dec basic_os= ;; *-*) # Second component is usually, but not always the OS case $field2 in # Prevent following clause from handling this valid os sun*os*) basic_machine=$field1 basic_os=$field2 ;; zephyr*) basic_machine=$field1-unknown basic_os=$field2 ;; # Manufacturers dec* | mips* | sequent* | encore* | pc533* | sgi* | sony* \ | att* | 7300* | 3300* | delta* | motorola* | sun[234]* \ | unicom* | ibm* | next | hp | isi* | apollo | altos* \ | convergent* | ncr* | news | 32* | 3600* | 3100* \ | hitachi* | c[123]* | convex* | sun | crds | omron* | dg \ | ultra | tti* | harris | dolphin | highlevel | gould \ | cbm | ns | masscomp | apple | axis | knuth | cray \ | microblaze* | sim | cisco \ | oki | wec | wrs | winbond) basic_machine=$field1-$field2 basic_os= ;; *) basic_machine=$field1 basic_os=$field2 ;; esac ;; esac ;; *) # Convert single-component short-hands not valid as part of # multi-component configurations. 
case $field1 in 386bsd) basic_machine=i386-pc basic_os=bsd ;; a29khif) basic_machine=a29k-amd basic_os=udi ;; adobe68k) basic_machine=m68010-adobe basic_os=scout ;; alliant) basic_machine=fx80-alliant basic_os= ;; altos | altos3068) basic_machine=m68k-altos basic_os= ;; am29k) basic_machine=a29k-none basic_os=bsd ;; amdahl) basic_machine=580-amdahl basic_os=sysv ;; amiga) basic_machine=m68k-unknown basic_os= ;; amigaos | amigados) basic_machine=m68k-unknown basic_os=amigaos ;; amigaunix | amix) basic_machine=m68k-unknown basic_os=sysv4 ;; apollo68) basic_machine=m68k-apollo basic_os=sysv ;; apollo68bsd) basic_machine=m68k-apollo basic_os=bsd ;; aros) basic_machine=i386-pc basic_os=aros ;; aux) basic_machine=m68k-apple basic_os=aux ;; balance) basic_machine=ns32k-sequent basic_os=dynix ;; blackfin) basic_machine=bfin-unknown basic_os=linux ;; cegcc) basic_machine=arm-unknown basic_os=cegcc ;; convex-c1) basic_machine=c1-convex basic_os=bsd ;; convex-c2) basic_machine=c2-convex basic_os=bsd ;; convex-c32) basic_machine=c32-convex basic_os=bsd ;; convex-c34) basic_machine=c34-convex basic_os=bsd ;; convex-c38) basic_machine=c38-convex basic_os=bsd ;; cray) basic_machine=j90-cray basic_os=unicos ;; crds | unos) basic_machine=m68k-crds basic_os= ;; da30) basic_machine=m68k-da30 basic_os= ;; decstation | pmax | pmin | dec3100 | decstatn) basic_machine=mips-dec basic_os= ;; delta88) basic_machine=m88k-motorola basic_os=sysv3 ;; dicos) basic_machine=i686-pc basic_os=dicos ;; djgpp) basic_machine=i586-pc basic_os=msdosdjgpp ;; ebmon29k) basic_machine=a29k-amd basic_os=ebmon ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson basic_os=ose ;; gmicro) basic_machine=tron-gmicro basic_os=sysv ;; go32) basic_machine=i386-pc basic_os=go32 ;; h8300hms) basic_machine=h8300-hitachi basic_os=hms ;; h8300xray) basic_machine=h8300-hitachi basic_os=xray ;; h8500hms) basic_machine=h8500-hitachi basic_os=hms ;; harris) basic_machine=m88k-harris basic_os=sysv3 ;; hp300 | hp300hpux) basic_machine=m68k-hp basic_os=hpux ;; hp300bsd) basic_machine=m68k-hp basic_os=bsd ;; hppaosf) basic_machine=hppa1.1-hp basic_os=osf ;; hppro) basic_machine=hppa1.1-hp basic_os=proelf ;; i386mach) basic_machine=i386-mach basic_os=mach ;; isi68 | isi) basic_machine=m68k-isi basic_os=sysv ;; m68knommu) basic_machine=m68k-unknown basic_os=linux ;; magnum | m3230) basic_machine=mips-mips basic_os=sysv ;; merlin) basic_machine=ns32k-utek basic_os=sysv ;; mingw64) basic_machine=x86_64-pc basic_os=mingw64 ;; mingw32) basic_machine=i686-pc basic_os=mingw32 ;; mingw32ce) basic_machine=arm-unknown basic_os=mingw32ce ;; monitor) basic_machine=m68k-rom68k basic_os=coff ;; morphos) basic_machine=powerpc-unknown basic_os=morphos ;; moxiebox) basic_machine=moxie-unknown basic_os=moxiebox ;; msdos) basic_machine=i386-pc basic_os=msdos ;; msys) basic_machine=i686-pc basic_os=msys ;; mvs) basic_machine=i370-ibm basic_os=mvs ;; nacl) basic_machine=le32-unknown basic_os=nacl ;; ncr3000) basic_machine=i486-ncr basic_os=sysv4 ;; netbsd386) basic_machine=i386-pc basic_os=netbsd ;; netwinder) basic_machine=armv4l-rebel basic_os=linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony basic_os=newsos ;; news1000) basic_machine=m68030-sony basic_os=newsos ;; necv70) basic_machine=v70-nec basic_os=sysv ;; nh3000) basic_machine=m68k-harris basic_os=cxux ;; nh[45]000) basic_machine=m88k-harris basic_os=cxux ;; nindy960) basic_machine=i960-intel basic_os=nindy ;; mon960) basic_machine=i960-intel basic_os=mon960 ;; nonstopux) 
basic_machine=mips-compaq basic_os=nonstopux ;; os400) basic_machine=powerpc-ibm basic_os=os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson basic_os=ose ;; os68k) basic_machine=m68k-none basic_os=os68k ;; paragon) basic_machine=i860-intel basic_os=osf ;; parisc) basic_machine=hppa-unknown basic_os=linux ;; psp) basic_machine=mipsallegrexel-sony basic_os=psp ;; pw32) basic_machine=i586-unknown basic_os=pw32 ;; rdos | rdos64) basic_machine=x86_64-pc basic_os=rdos ;; rdos32) basic_machine=i386-pc basic_os=rdos ;; rom68k) basic_machine=m68k-rom68k basic_os=coff ;; sa29200) basic_machine=a29k-amd basic_os=udi ;; sei) basic_machine=mips-sei basic_os=seiux ;; sequent) basic_machine=i386-sequent basic_os= ;; sps7) basic_machine=m68k-bull basic_os=sysv2 ;; st2000) basic_machine=m68k-tandem basic_os= ;; stratus) basic_machine=i860-stratus basic_os=sysv4 ;; sun2) basic_machine=m68000-sun basic_os= ;; sun2os3) basic_machine=m68000-sun basic_os=sunos3 ;; sun2os4) basic_machine=m68000-sun basic_os=sunos4 ;; sun3) basic_machine=m68k-sun basic_os= ;; sun3os3) basic_machine=m68k-sun basic_os=sunos3 ;; sun3os4) basic_machine=m68k-sun basic_os=sunos4 ;; sun4) basic_machine=sparc-sun basic_os= ;; sun4os3) basic_machine=sparc-sun basic_os=sunos3 ;; sun4os4) basic_machine=sparc-sun basic_os=sunos4 ;; sun4sol2) basic_machine=sparc-sun basic_os=solaris2 ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun basic_os= ;; sv1) basic_machine=sv1-cray basic_os=unicos ;; symmetry) basic_machine=i386-sequent basic_os=dynix ;; t3e) basic_machine=alphaev5-cray basic_os=unicos ;; t90) basic_machine=t90-cray basic_os=unicos ;; toad1) basic_machine=pdp10-xkl basic_os=tops20 ;; tpf) basic_machine=s390x-ibm basic_os=tpf ;; udi29k) basic_machine=a29k-amd basic_os=udi ;; ultra3) basic_machine=a29k-nyu basic_os=sym1 ;; v810 | necv810) basic_machine=v810-nec basic_os=none ;; vaxv) basic_machine=vax-dec basic_os=sysv ;; vms) basic_machine=vax-dec basic_os=vms ;; vsta) basic_machine=i386-pc basic_os=vsta ;; vxworks960) basic_machine=i960-wrs basic_os=vxworks ;; vxworks68) basic_machine=m68k-wrs basic_os=vxworks ;; vxworks29k) basic_machine=a29k-wrs basic_os=vxworks ;; xbox) basic_machine=i686-pc basic_os=mingw32 ;; ymp) basic_machine=ymp-cray basic_os=unicos ;; *) basic_machine=$1 basic_os= ;; esac ;; esac # Decode 1-component or ad-hoc basic machines case $basic_machine in # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) cpu=hppa1.1 vendor=winbond ;; op50n) cpu=hppa1.1 vendor=oki ;; op60c) cpu=hppa1.1 vendor=oki ;; ibm*) cpu=i370 vendor=ibm ;; orion105) cpu=clipper vendor=highlevel ;; mac | mpw | mac-mpw) cpu=m68k vendor=apple ;; pmac | pmac-mpw) cpu=powerpc vendor=apple ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) cpu=m68000 vendor=att ;; 3b*) cpu=we32k vendor=att ;; bluegene*) cpu=powerpc vendor=ibm basic_os=cnk ;; decsystem10* | dec10*) cpu=pdp10 vendor=dec basic_os=tops10 ;; decsystem20* | dec20*) cpu=pdp10 vendor=dec basic_os=tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) cpu=m68k vendor=motorola ;; dpx2*) cpu=m68k vendor=bull basic_os=sysv3 ;; encore | umax | mmax) cpu=ns32k vendor=encore ;; elxsi) cpu=elxsi vendor=elxsi basic_os=${basic_os:-bsd} ;; fx2800) cpu=i860 vendor=alliant ;; genix) cpu=ns32k vendor=ns ;; h3050r* | hiux*) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) cpu=m68000 vendor=hp ;; hp9k3[2-9][0-9]) cpu=m68k vendor=hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) cpu=hppa1.1 vendor=hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; i*86v32) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv32 ;; i*86v4*) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv4 ;; i*86v) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv ;; i*86sol2) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=solaris2 ;; j90 | j90-cray) cpu=j90 vendor=cray basic_os=${basic_os:-unicos} ;; iris | iris4d) cpu=mips vendor=sgi case $basic_os in irix*) ;; *) basic_os=irix4 ;; esac ;; miniframe) cpu=m68000 vendor=convergent ;; *mint | mint[0-9]* | *MiNT | *MiNT[0-9]*) cpu=m68k vendor=atari basic_os=mint ;; news-3600 | risc-news) cpu=mips vendor=sony basic_os=newsos ;; next | m*-next) cpu=m68k vendor=next case $basic_os in openstep*) ;; nextstep*) ;; ns2*) basic_os=nextstep2 ;; *) basic_os=nextstep3 ;; esac ;; np1) cpu=np1 vendor=gould ;; op50n-* | op60c-*) cpu=hppa1.1 vendor=oki basic_os=proelf ;; pa-hitachi) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; pbd) cpu=sparc vendor=tti ;; pbb) cpu=m68k vendor=tti ;; pc532) cpu=ns32k vendor=pc532 ;; pn) cpu=pn vendor=gould ;; power) cpu=power vendor=ibm ;; ps2) cpu=i386 vendor=ibm ;; rm[46]00) cpu=mips vendor=siemens ;; rtpc | rtpc-*) cpu=romp vendor=ibm ;; sde) cpu=mipsisa32 vendor=sde basic_os=${basic_os:-elf} ;; simso-wrs) cpu=sparclite vendor=wrs basic_os=vxworks ;; tower | tower-32) cpu=m68k vendor=ncr ;; vpp*|vx|vx-*) cpu=f301 vendor=fujitsu ;; w65) cpu=w65 vendor=wdc ;; w89k-*) cpu=hppa1.1 vendor=winbond basic_os=proelf ;; none) cpu=none vendor=none ;; leon|leon[3-9]) cpu=sparc vendor=$basic_machine ;; leon-*|leon[3-9]-*) cpu=sparc vendor=`echo "$basic_machine" | sed 's/-.*//'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read cpu vendor <&2 exit 1 ;; esac ;; esac # Here we canonicalize certain aliases for manufacturers. case $vendor in digital*) vendor=dec ;; commodore*) vendor=cbm ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if test x"$basic_os" != x then # First recognize some ad-hoc cases, or perhaps split kernel-os, or else just # set os. 
obj= case $basic_os in gnu/linux*) kernel=linux os=`echo "$basic_os" | sed -e 's|gnu/linux|gnu|'` ;; os2-emx) kernel=os2 os=`echo "$basic_os" | sed -e 's|os2-emx|emx|'` ;; nto-qnx*) kernel=nto os=`echo "$basic_os" | sed -e 's|nto-qnx|qnx|'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read kernel os <&2 fi ;; *) echo "Invalid configuration '$1': OS '$os' not recognized" 1>&2 exit 1 ;; esac case $obj in aout* | coff* | elf* | pe*) ;; '') # empty is fine ;; *) echo "Invalid configuration '$1': Machine code format '$obj' not recognized" 1>&2 exit 1 ;; esac # Here we handle the constraint that a (synthetic) cpu and os are # valid only in combination with each other and nowhere else. case $cpu-$os in # The "javascript-unknown-ghcjs" triple is used by GHC; we # accept it here in order to tolerate that, but reject any # variations. javascript-ghcjs) ;; javascript-* | *-ghcjs) echo "Invalid configuration '$1': cpu '$cpu' is not valid with os '$os$obj'" 1>&2 exit 1 ;; esac # As a final step for OS-related things, validate the OS-kernel combination # (given a valid OS), if there is a kernel. case $kernel-$os-$obj in linux-gnu*- | linux-android*- | linux-dietlibc*- | linux-llvm*- \ | linux-mlibc*- | linux-musl*- | linux-newlib*- \ | linux-relibc*- | linux-uclibc*- ) ;; uclinux-uclibc*- ) ;; managarm-mlibc*- | managarm-kernel*- ) ;; windows*-msvc*-) ;; -dietlibc*- | -llvm*- | -mlibc*- | -musl*- | -newlib*- | -relibc*- \ | -uclibc*- ) # These are just libc implementations, not actual OSes, and thus # require a kernel. echo "Invalid configuration '$1': libc '$os' needs explicit kernel." 1>&2 exit 1 ;; -kernel*- ) echo "Invalid configuration '$1': '$os' needs explicit kernel." 1>&2 exit 1 ;; *-kernel*- ) echo "Invalid configuration '$1': '$kernel' does not support '$os'." 1>&2 exit 1 ;; *-msvc*- ) echo "Invalid configuration '$1': '$os' needs 'windows'." 1>&2 exit 1 ;; kfreebsd*-gnu*- | kopensolaris*-gnu*-) ;; vxworks-simlinux- | vxworks-simwindows- | vxworks-spe-) ;; nto-qnx*-) ;; os2-emx-) ;; *-eabi*- | *-gnueabi*-) ;; none--*) # None (no kernel, i.e. freestanding / bare metal), # can be paired with an machine code file format ;; -*-) # Blank kernel with real OS is always fine. ;; --*) # Blank kernel and OS with real machine code file format is always fine. ;; *-*-*) echo "Invalid configuration '$1': Kernel '$kernel' not known to work with OS '$os'." 1>&2 exit 1 ;; esac # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. 
case $vendor in unknown) case $cpu-$os in *-riscix*) vendor=acorn ;; *-sunos*) vendor=sun ;; *-cnk* | *-aix*) vendor=ibm ;; *-beos*) vendor=be ;; *-hpux*) vendor=hp ;; *-mpeix*) vendor=hp ;; *-hiux*) vendor=hitachi ;; *-unos*) vendor=crds ;; *-dgux*) vendor=dg ;; *-luna*) vendor=omron ;; *-genix*) vendor=ns ;; *-clix*) vendor=intergraph ;; *-mvs* | *-opened*) vendor=ibm ;; *-os400*) vendor=ibm ;; s390-* | s390x-*) vendor=ibm ;; *-ptx*) vendor=sequent ;; *-tpf*) vendor=ibm ;; *-vxsim* | *-vxworks* | *-windiss*) vendor=wrs ;; *-aux*) vendor=apple ;; *-hms*) vendor=hitachi ;; *-mpw* | *-macos*) vendor=apple ;; *-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*) vendor=atari ;; *-vos*) vendor=stratus ;; esac ;; esac echo "$cpu-$vendor${kernel:+-$kernel}${os:+-$os}${obj:+-$obj}" exit # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/libev/configure000077500000000000000000015275531471441230600171370ustar00rootroot00000000000000#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.69 for libev 4.33. # # # Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. 
# (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # Use a proper internal environment variable to ensure we don't fall # into an infinite loop, continuously re-executing ourselves. if test x"${_as_can_reexec}" != xno && test "x$CONFIG_SHELL" != x; then _as_can_reexec=no; export _as_can_reexec; # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 as_fn_exit 255 fi # We don't want this to propagate to other subprocesses. { _as_can_reexec=; unset _as_can_reexec;} if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ); then : else exitcode=1; echo positional parameters were not saved. 
fi test x\$exitcode = x0 || exit 1 test -x / || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1 test \$(( 1 + 1 )) = 2 || exit 1" if (eval "$as_required") 2>/dev/null; then : as_have_required=yes else as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null; then : else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. as_shell=$as_dir/$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$as_shell"; } 2>/dev/null; then : CONFIG_SHELL=$as_shell as_have_required=yes if { $as_echo "$as_bourne_compatible""$as_suggested" | as_run=a "$as_shell"; } 2>/dev/null; then : break 2 fi fi done;; esac as_found=false done $as_found || { if { test -f "$SHELL" || test -f "$SHELL.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$SHELL"; } 2>/dev/null; then : CONFIG_SHELL=$SHELL as_have_required=yes fi; } IFS=$as_save_IFS if test "x$CONFIG_SHELL" != x; then : export CONFIG_SHELL # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec $CONFIG_SHELL $as_opts "$as_myself" ${1+"$@"} # Admittedly, this is quite paranoid, since all the known shells bail # out after a failed `exec'. $as_echo "$0: could not re-execute with $CONFIG_SHELL" >&2 exit 255 fi if test x$as_have_required = xno; then : $as_echo "$0: This script requires a shell more modern than all" $as_echo "$0: the shells that I found on your system." if test x${ZSH_VERSION+set} = xset ; then $as_echo "$0: In particular, zsh $ZSH_VERSION has bugs and should" $as_echo "$0: be upgraded to zsh 4.3.4 or later." else $as_echo "$0: Please tell bug-autoconf@gnu.org about your system, $0: including any error possibly output before this $0: message. Then install a modern shell, or manually run $0: the script under such a shell if you do have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. 
## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 
2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. :-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # If we had to re-execute with $CONFIG_SHELL, we're ensured to have # already done that, so ensure we don't try to do so again and fall # in an infinite loop. This has already happened in practice. _as_can_reexec=no; export _as_can_reexec # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" SHELL=${CONFIG_SHELL-/bin/sh} test -n "$DJDIR" || exec 7<&0 &1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='libev' PACKAGE_TARNAME='libev' PACKAGE_VERSION='4.33' PACKAGE_STRING='libev 4.33' PACKAGE_BUGREPORT='' PACKAGE_URL='' ac_unique_file="ev_epoll.c" # Factoring default headers for most tests. 
ac_includes_default="\ #include #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef STDC_HEADERS # include # include #else # ifdef HAVE_STDLIB_H # include # endif #endif #ifdef HAVE_STRING_H # if !defined STDC_HEADERS && defined HAVE_MEMORY_H # include # endif # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_INTTYPES_H # include #endif #ifdef HAVE_STDINT_H # include #endif #ifdef HAVE_UNISTD_H # include #endif" ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS CPP LT_SYS_LIBRARY_PATH OTOOL64 OTOOL LIPO NMEDIT DSYMUTIL MANIFEST_TOOL RANLIB ac_ct_AR AR DLLTOOL OBJDUMP LN_S NM ac_ct_DUMPBIN DUMPBIN LD FGREP EGREP GREP SED host_os host_vendor host_cpu host build_os build_vendor build_cpu build LIBTOOL am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE am__nodep AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__include DEPDIR OBJEXT EXEEXT ac_ct_CC CPPFLAGS LDFLAGS CFLAGS CC MAINT MAINTAINER_MODE_FALSE MAINTAINER_MODE_TRUE AM_BACKSLASH AM_DEFAULT_VERBOSITY AM_DEFAULT_V AM_V am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL VERSION PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir runstatedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL am__quote' ac_subst_files='' ac_user_opts=' enable_option_checking enable_silent_rules enable_maintainer_mode enable_dependency_tracking enable_shared enable_static with_pic enable_fast_install with_aix_soname with_gnu_ld with_sysroot enable_libtool_lock ' ac_precious_vars='build_alias host_alias target_alias CC CFLAGS LDFLAGS LIBS CPPFLAGS LT_SYS_LIBRARY_PATH CPP' # Initialize some variables set by options. ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' runstatedir='${localstatedir}/run' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. 
if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. 
with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -runstatedir | --runstatedir | --runstatedi | --runstated \ | --runstate | --runstat | --runsta | --runst | --runs \ | --run | --ru | --r) ac_prev=runstatedir ;; -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \ | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \ | --run=* | --ru=* | --r=*) runstatedir=$ac_optarg ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | 
--shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. $as_echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && $as_echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? 
"missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir runstatedir do eval ac_val=\$$ac_var # Remove trailing slashes. case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. 
cat <<_ACEOF \`configure' configures libev 4.33 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --runstatedir=DIR modifiable per-process data [LOCALSTATEDIR/run] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/libev] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of libev 4.33:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-silent-rules less verbose build output (undo: "make V=1") --disable-silent-rules verbose build output (undo: "make V=0") --enable-maintainer-mode enable make rules and dependencies not useful (and sometimes confusing) to the casual installer --enable-dependency-tracking do not reject slow dependency extractors --disable-dependency-tracking speeds up one-time build 
--enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-libtool-lock avoid locking (might break parallel builds) Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use both] --with-aix-soname=aix|svr4|both shared library versioning (aka "SONAME") variant to provide on AIX, [default=aix]. --with-gnu-ld assume the C compiler uses GNU ld [default=no] --with-sysroot[=DIR] Search for dependent libraries within DIR (or the compiler's sysroot if not specified). Some influential environment variables: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I if you have headers in a nonstandard directory LT_SYS_LIBRARY_PATH User-defined run-time library search path. CPP C preprocessor Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to the package provider. _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for guested configure. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else $as_echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF libev configure 4.33 generated by GNU Autoconf 2.69 Copyright (C) 2012 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. 
## ## ------------------------ ## # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || test -x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_try_run LINENO # ---------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. Assumes # that executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then : ac_retval=0 else $as_echo "$as_me: program exited with status $ac_status" >&5 $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case declares $2. For example, HP-UX 11i declares gettimeofday. */ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (); below. Prefer to if __STDC__ is defined, since exists even on freestanding compilers. 
*/ #ifdef __STDC__ # include #else # include #endif #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char $2 (); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main () { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_c_check_header_mongrel LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists, giving a warning if it cannot be compiled using # the include files in INCLUDES and setting the cache variable VAR # accordingly. ac_fn_c_check_header_mongrel () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if eval \${$3+:} false; then : { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } else # Is the header compilable? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 usability" >&5 $as_echo_n "checking $2 usability... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_header_compiler=yes else ac_header_compiler=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_compiler" >&5 $as_echo "$ac_header_compiler" >&6; } # Is the header present? { $as_echo "$as_me:${as_lineno-$LINENO}: checking $2 presence" >&5 $as_echo_n "checking $2 presence... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <$2> _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : ac_header_preproc=yes else ac_header_preproc=no fi rm -f conftest.err conftest.i conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_header_preproc" >&5 $as_echo "$ac_header_preproc" >&6; } # So? What about this header? case $ac_header_compiler:$ac_header_preproc:$ac_c_preproc_warn_flag in #(( yes:no: ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&5 $as_echo "$as_me: WARNING: $2: accepted by the compiler, rejected by the preprocessor!" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; no:yes:* ) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: present but cannot be compiled" >&5 $as_echo "$as_me: WARNING: $2: present but cannot be compiled" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: check for missing prerequisite headers?" >&5 $as_echo "$as_me: WARNING: $2: check for missing prerequisite headers?" 
>&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: see the Autoconf documentation" >&5 $as_echo "$as_me: WARNING: $2: see the Autoconf documentation" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&5 $as_echo "$as_me: WARNING: $2: section \"Present But Cannot Be Compiled\"" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $2: proceeding with the compiler's result" >&5 $as_echo "$as_me: WARNING: $2: proceeding with the compiler's result" >&2;} ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=\$ac_header_compiler" fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_mongrel # ac_fn_c_check_type LINENO TYPE VAR INCLUDES # ------------------------------------------- # Tests whether TYPE exists after having included INCLUDES, setting cache # variable VAR accordingly. ac_fn_c_check_type () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else eval "$3=no" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof ($2)) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 int main () { if (sizeof (($2))) return 0; ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else eval "$3=yes" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_type cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by libev $as_me 4.33, which was generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. $as_echo "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. 
## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`$as_echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo $as_echo "## ---------------- ## ## Cache variables. ## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo $as_echo "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then $as_echo "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then $as_echo "## ----------- ## ## confdefs.h. 
## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && $as_echo "$as_me: caught signal $ac_signal" $as_echo "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h $as_echo "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_URL "$PACKAGE_URL" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. ac_site_file1=NONE ac_site_file2=NONE if test -n "$CONFIG_SITE"; then # We do not want a PATH search for config.site. case $CONFIG_SITE in #(( -*) ac_site_file1=./$CONFIG_SITE;; */*) ac_site_file1=$CONFIG_SITE;; *) ac_site_file1=./$CONFIG_SITE;; esac elif test "x$prefix" != xNONE; then ac_site_file1=$prefix/share/config.site ac_site_file2=$prefix/etc/config.site else ac_site_file1=$ac_default_prefix/share/config.site ac_site_file2=$ac_default_prefix/etc/config.site fi for ac_site_file in "$ac_site_file1" "$ac_site_file2" do test "x$ac_site_file" = xNONE && continue if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 $as_echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . "$ac_site_file" \ || { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 $as_echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { $as_echo "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 $as_echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. 
ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 $as_echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { $as_echo "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 $as_echo "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { $as_echo "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 $as_echo "$as_me: former value: \`$ac_old_val'" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 $as_echo "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`$as_echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 $as_echo "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`make distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. ## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu orig_CFLAGS="$CFLAGS" am__api_version='1.16' ac_aux_dir= for ac_dir in "$srcdir" "$srcdir/.." "$srcdir/../.."; do if test -f "$ac_dir/install-sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f "$ac_dir/install.sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f "$ac_dir/shtool"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then as_fn_error $? "cannot find install-sh, install.sh, or shtool in \"$srcdir\" \"$srcdir/..\" \"$srcdir/../..\"" "$LINENO" 5 fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. 
# They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var. ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var. ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 $as_echo_n "checking for a BSD-compatible install... " >&6; } if test -z "$INSTALL"; then if ${ac_cv_path_install+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in #(( ./ | .// | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir/$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 $as_echo "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. 
test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: '$srcdir'" "$LINENO" 5;; esac # Do 'set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( am_has_slept=no for am_try in 1 2; do echo "timestamp, slept: $am_has_slept" > conftest.file set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". as_fn_error $? "ls -t appears to fail. Make sure there is not a broken alias in your environment" "$LINENO" 5 fi if test "$2" = conftest.file || test $am_try -eq 2; then break fi # Just in case. sleep 1 am_has_slept=yes done test "$2" = conftest.file ) then # Ok. : else as_fn_error $? "newly created file is older than distributed files! Check your system clock" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } # If we didn't sleep, we still need to ensure time stamps of config.status and # generated files are strictly newer. am_sleep_pid= if grep 'slept: no' conftest.file >/dev/null 2>&1; then ( sleep 1 ) & am_sleep_pid=$! fi rm -f conftest.file test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. # By default was `s,x,x', remove it if useless. ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"` # Expand $ac_aux_dir to an absolute path. am_aux_dir=`cd "$ac_aux_dir" && pwd` if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --is-lightweight"; then am_missing_run="$MISSING " else am_missing_run= { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: 'missing' script is too old or missing" >&5 $as_echo "$as_me: WARNING: 'missing' script is too old or missing" >&2;} fi if test x"${install_sh+set}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using 'strip' when the user # run "make install-strip". 
However 'strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the 'STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a thread-safe mkdir -p" >&5 $as_echo_n "checking for a thread-safe mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if ${ac_cv_path_mkdir+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do as_fn_executable_p "$as_dir/$ac_prog$ac_exec_ext" || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS fi test -d ./--version && rmdir ./--version if test "${ac_cv_path_mkdir+set}" = set; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use the slow shell script. Don't cache a # value for MKDIR_P within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. MKDIR_P="$ac_install_sh -d" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null # Check whether --enable-silent-rules was given. if test "${enable_silent_rules+set}" = set; then : enableval=$enable_silent_rules; fi case $enable_silent_rules in # ((( yes) AM_DEFAULT_VERBOSITY=0;; no) AM_DEFAULT_VERBOSITY=1;; *) AM_DEFAULT_VERBOSITY=1;; esac am_make=${MAKE-make} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $am_make supports nested variables" >&5 $as_echo_n "checking whether $am_make supports nested variables... 
" >&6; } if ${am_cv_make_support_nested_variables+:} false; then : $as_echo_n "(cached) " >&6 else if $as_echo 'TRUE=$(BAR$(V)) BAR0=false BAR1=true V=1 am__doit: @$(TRUE) .PHONY: am__doit' | $am_make -f - >/dev/null 2>&1; then am_cv_make_support_nested_variables=yes else am_cv_make_support_nested_variables=no fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_make_support_nested_variables" >&5 $as_echo "$am_cv_make_support_nested_variables" >&6; } if test $am_cv_make_support_nested_variables = yes; then AM_V='$(V)' AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)' else AM_V=$AM_DEFAULT_VERBOSITY AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY fi AM_BACKSLASH='\' if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='libev' VERSION='4.33' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # For better backward compatibility. To be removed once Automake 1.9.x # dies out for good. For more background, see: # # mkdir_p='$(MKDIR_P)' # We need awk for the "check" target (and possibly the TAP driver). The # system "awk" is bad on some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AMTAR='$${TAR-tar}' # We'll loop over all known methods to create a tar archive until one works. _am_tools='gnutar pax cpio none' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' # POSIX will say in a future version that running "rm -f" with no argument # is OK; and we want to be able to make that assumption in our Makefile # recipes. So use an aggressive probe to check that the usage we want is # actually supported "in the wild" to an acceptable degree. # See automake bug#10828. # To make any issue more visible, cause the running configure to be aborted # by default if the 'rm' program in use doesn't match our expectations; the # user can still override this though. if rm -f && rm -fr && rm -rf; then : OK; else cat >&2 <<'END' Oops! Your 'rm' program seems unable to run without file operands specified on the command line, even when the '-f' option is present. This is contrary to the behaviour of most rm programs out there, and not conforming with the upcoming POSIX standard: Please tell bug-automake@gnu.org about your system, including the value of your $PATH and any error possibly output before this message. This can help us improve future automake versions. 
END if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then echo 'Configuration will proceed anyway, since you have set the' >&2 echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2 echo >&2 else cat >&2 <<'END' Aborting the configuration process, to ensure you take notice of the issue. You can download and install GNU coreutils to get an 'rm' implementation that behaves properly: . If you want to complete the configuration process using your problematic 'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM to "yes", and re-run configure. END as_fn_error $? "Your 'rm' program is bad, sorry." "$LINENO" 5 fi fi ac_config_headers="$ac_config_headers config.h" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to enable maintainer-specific portions of Makefiles" >&5 $as_echo_n "checking whether to enable maintainer-specific portions of Makefiles... " >&6; } # Check whether --enable-maintainer-mode was given. if test "${enable_maintainer_mode+set}" = set; then : enableval=$enable_maintainer_mode; USE_MAINTAINER_MODE=$enableval else USE_MAINTAINER_MODE=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $USE_MAINTAINER_MODE" >&5 $as_echo "$USE_MAINTAINER_MODE" >&6; } if test $USE_MAINTAINER_MODE = yes; then MAINTAINER_MODE_TRUE= MAINTAINER_MODE_FALSE='#' else MAINTAINER_MODE_TRUE='#' MAINTAINER_MODE_FALSE= fi MAINT=$MAINTAINER_MODE_TRUE ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. 
set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5 $as_echo_n "checking whether the C compiler works... " >&6; } ac_link_default=`$as_echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. break;; *.* ) if test "${ac_cv_exeext+set}" = set && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else ac_file='' fi if test -z "$ac_file"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5 $as_echo_n "checking for C compiler default output file name... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 $as_echo "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 $as_echo_n "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? 
$as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 $as_echo "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 $as_echo_n "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run C compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 $as_echo "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 $as_echo_n "checking for suffix of object files... " >&6; } if ${ac_cv_objext+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; }; then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 $as_echo "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include struct stat; /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. 
*/ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC understands -c and -o together" >&5 $as_echo_n "checking whether $CC understands -c and -o together... " >&6; } if ${am_cv_prog_cc_c_o+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF # Make sure it works both with $CC and with simple cc. # Following AC_PROG_CC_C_O, we do the test twice because some # compilers refuse to overwrite an existing .o file with -o, # though they will create one. am_cv_prog_cc_c_o=yes for am_i in 1 2; do if { echo "$as_me:$LINENO: $CC -c conftest.$ac_ext -o conftest2.$ac_objext" >&5 ($CC -c conftest.$ac_ext -o conftest2.$ac_objext) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 (exit $ac_status); } \ && test -f conftest2.$ac_objext; then : OK else am_cv_prog_cc_c_o=no break fi done rm -f core conftest* unset am_i fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_prog_cc_c_o" >&5 $as_echo "$am_cv_prog_cc_c_o" >&6; } if test "$am_cv_prog_cc_c_o" != yes; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} supports the include directive" >&5 $as_echo_n "checking whether ${MAKE-make} supports the include directive... " >&6; } cat > confinc.mk << 'END' am__doit: @echo this is the am__doit target >confinc.out .PHONY: am__doit END am__include="#" am__quote= # BSD make does it like this. echo '.include "confinc.mk" # ignored' > confmf.BSD # Other make implementations (GNU, Solaris 10, AIX) do it like this. echo 'include confinc.mk # ignored' > confmf.GNU _am_result=no for s in GNU BSD; do { echo "$as_me:$LINENO: ${MAKE-make} -f confmf.$s && cat confinc.out" >&5 (${MAKE-make} -f confmf.$s && cat confinc.out) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } case $?:`cat confinc.out 2>/dev/null` in #( '0:this is the am__doit target') : case $s in #( BSD) : am__include='.include' am__quote='"' ;; #( *) : am__include='include' am__quote='' ;; esac ;; #( *) : ;; esac if test "$am__include" != "#"; then _am_result="yes ($s style)" break fi done rm -f confinc.* confmf.* { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${_am_result}" >&5 $as_echo "${_am_result}" >&6; } # Check whether --enable-dependency-tracking was given. if test "${enable_dependency_tracking+set}" = set; then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named 'D' -- because '-MD' means "put the output # in D". rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. 
For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using ": > sub/conftst$i.h" creates only sub/conftst1.h with # Solaris 10 /bin/sh. echo '/* dummy */' > sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with '-c' and '-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle '-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs. am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # After this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested. if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok '-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi if test -z "$orig_CFLAGS"; then if test x$GCC = xyes; then CFLAGS="-g -O3" fi fi case `pwd` in *\ * | *\ *) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5 $as_echo "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;; esac macro_version='2.4.6' macro_revision='2.4.6' ltmain=$ac_aux_dir/ltmain.sh # Make sure we can run config.sub. $SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 $as_echo_n "checking build system type... " >&6; } if ${ac_cv_build+:} false; then : $as_echo_n "(cached) " >&6 else ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "$ac_aux_dir/config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "$ac_aux_dir/config.sub" $ac_build_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $ac_build_alias failed" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 $as_echo "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 $as_echo_n "checking host system type... " >&6; } if ${ac_cv_host+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "$ac_aux_dir/config.sub" $host_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $host_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 $as_echo "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? "invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac # Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\(["`$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. 
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5 $as_echo_n "checking how to print strings... " >&6; } # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "" } case $ECHO in printf*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: printf" >&5 $as_echo "printf" >&6; } ;; print*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: print -r" >&5 $as_echo "print -r" >&6; } ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: cat" >&5 $as_echo "cat" >&6; } ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5 $as_echo_n "checking for a sed that does not truncate output... " >&6; } if ${ac_cv_path_SED+:} false; then : $as_echo_n "(cached) " >&6 else ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for ac_i in 1 2 3 4 5 6 7; do ac_script="$ac_script$as_nl$ac_script" done echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed { ac_script=; unset ac_script;} if test -z "$SED"; then ac_path_SED_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_SED" || continue # Check for GNU ac_path_SED and select it if it is found. # Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in *GNU*) ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo '' >> "conftest.nl" "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_SED_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_SED="$ac_path_SED" ac_path_SED_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_SED_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_SED"; then as_fn_error $? 
"no acceptable sed could be found in \$PATH" "$LINENO" 5 fi else ac_cv_path_SED=$SED fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5 $as_echo "$ac_cv_path_SED" >&6; } SED="$ac_cv_path_SED" rm -f conftest.sed test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 $as_echo_n "checking for grep that handles long lines and -e... " >&6; } if ${ac_cv_path_GREP+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_GREP" || continue # Check for GNU ac_path_GREP and select it if it is found. # Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 $as_echo "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 $as_echo_n "checking for egrep... " >&6; } if ${ac_cv_path_EGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_EGREP" || continue # Check for GNU ac_path_EGREP and select it if it is found. 
# Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 $as_echo "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5 $as_echo_n "checking for fgrep... " >&6; } if ${ac_cv_path_FGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1 then ac_cv_path_FGREP="$GREP -F" else if test -z "$FGREP"; then ac_path_FGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in fgrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_FGREP" || continue # Check for GNU ac_path_FGREP and select it if it is found. # Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in *GNU*) ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'FGREP' >> "conftest.nl" "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_FGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_FGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_FGREP"; then as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_FGREP=$FGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5 $as_echo "$ac_cv_path_FGREP" >&6; } FGREP="$ac_cv_path_FGREP" test -z "$GREP" && GREP=grep # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test no = "$withval" || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test yes = "$GCC"; then # Check if gcc -print-prog-name=ld gives a path. 
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return, which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD=$ac_prog ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test yes = "$with_gnu_ld"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 $as_echo_n "checking for GNU ld... " >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 $as_echo_n "checking for non-GNU ld... " >&6; } fi if ${lt_cv_path_LD+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$LD"; then lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD=$ac_dir/$ac_prog # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 $as_echo "$LD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 $as_echo_n "checking if the linker ($LD) is GNU ld... " >&6; } if ${lt_cv_prog_gnu_ld+:} false; then : $as_echo_n "(cached) " >&6 else # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 &5 $as_echo "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5 $as_echo_n "checking for BSD- or MS-compatible name lister (nm)... " >&6; } if ${lt_cv_path_NM+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM=$NM else lt_nm_to_check=${ac_tool_prefix}nm if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. tmp_nm=$ac_dir/$lt_tmp_nm if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then # Check to see if the nm accepts a BSD-compat flag. 
# Adding the 'sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file # MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty case $build_os in mingw*) lt_bad_file=conftest.nm/nofile ;; *) lt_bad_file=/dev/null ;; esac case `"$tmp_nm" -B $lt_bad_file 2>&1 | sed '1q'` in *$lt_bad_file* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break 2 ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break 2 ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS=$lt_save_ifs done : ${lt_cv_path_NM=no} fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5 $as_echo "$lt_cv_path_NM" >&6; } if test no != "$lt_cv_path_NM"; then NM=$lt_cv_path_NM else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else if test -n "$ac_tool_prefix"; then for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DUMPBIN"; then ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DUMPBIN=$ac_cv_prog_DUMPBIN if test -n "$DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5 $as_echo "$DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$DUMPBIN" && break done fi if test -z "$DUMPBIN"; then ac_ct_DUMPBIN=$DUMPBIN for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DUMPBIN"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN if test -n "$ac_ct_DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5 $as_echo "$ac_ct_DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_DUMPBIN" && break done if test "x$ac_ct_DUMPBIN" = x; then DUMPBIN=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DUMPBIN=$ac_ct_DUMPBIN fi fi case `$DUMPBIN -symbols -headers /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols -headers" ;; *) DUMPBIN=: ;; esac fi if test : != "$DUMPBIN"; then NM=$DUMPBIN fi fi test -z "$NM" && NM=nm { $as_echo "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5 $as_echo_n "checking the name lister ($NM) interface... " >&6; } if ${lt_cv_nm_interface+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: output\"" >&5) cat conftest.out >&5 if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5 $as_echo "$lt_cv_nm_interface" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5 $as_echo_n "checking whether ln -s works... " >&6; } LN_S=$as_ln_s if test "$LN_S" = "ln -s"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5 $as_echo "no, using $LN_S" >&6; } fi # find the maximum length of command line arguments { $as_echo "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5 $as_echo_n "checking the maximum length of command line arguments... " >&6; } if ${lt_cv_sys_max_cmd_len+:} false; then : $as_echo_n "(cached) " >&6 else i=0 teststring=ABCD case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). 
# Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; bitrig* | darwin* | dragonfly* | freebsd* | netbsd* | openbsd*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[ ]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len" && \ test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test X`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test 17 != "$i" # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. 
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac fi if test -n "$lt_cv_sys_max_cmd_len"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5 $as_echo "$lt_cv_sys_max_cmd_len" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } fi max_cmd_len=$lt_cv_sys_max_cmd_len : ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5 $as_echo_n "checking how to convert $build file names to $host format... " >&6; } if ${lt_cv_to_host_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac fi to_host_file_cmd=$lt_cv_to_host_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5 $as_echo "$lt_cv_to_host_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5 $as_echo_n "checking how to convert $build file names to toolchain format... " >&6; } if ${lt_cv_to_tool_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else #assume ordinary cross tools, or native build. lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac fi to_tool_file_cmd=$lt_cv_to_tool_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5 $as_echo "$lt_cv_to_tool_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5 $as_echo_n "checking for $LD option to reload object files... " >&6; } if ${lt_cv_ld_reload_flag+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_reload_flag='-r' fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5 $as_echo "$lt_cv_ld_reload_flag" >&6; } reload_flag=$lt_cv_ld_reload_flag case $reload_flag in "" | " "*) ;; *) reload_flag=" $reload_flag" ;; esac reload_cmds='$LD$reload_flag -o $output$reload_objs' case $host_os in cygwin* | mingw* | pw32* | cegcc*) if test yes != "$GCC"; then reload_cmds=false fi ;; darwin*) if test yes = "$GCC"; then reload_cmds='$LTCC $LTCFLAGS -nostdlib $wl-r -o $output$reload_objs' else reload_cmds='$LD$reload_flag -o $output$reload_objs' fi ;; esac if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. 
set dummy ${ac_tool_prefix}objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 $as_echo "$OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. set dummy objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OBJDUMP="objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 $as_echo "$ac_ct_OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi test -z "$OBJDUMP" && OBJDUMP=objdump { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5 $as_echo_n "checking how to recognize dependent libraries... " >&6; } if ${lt_cv_deplibs_check_method+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # 'unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # that responds to the $file_magic_cmd with a given extended regex. # If you have 'file' or equivalent on your system and you're not sure # whether 'pass_all' will *always* work, you probably want this one. 
case $host_os in aix[4-9]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[45]*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='/usr/bin/file -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. if ( file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]' lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[3-9]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd* | bitrig*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; os2*) lt_cv_deplibs_check_method=pass_all ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5 $as_echo "$lt_cv_deplibs_check_method" >&6; } file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 $as_echo "$DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 $as_echo "$ac_ct_DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi test -z "$DLLTOOL" && DLLTOOL=dlltool { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5 $as_echo_n "checking how to associate runtime and link libraries... " >&6; } if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh; # decide which one to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd=$ECHO ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5 $as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; } sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO if test -n "$ac_tool_prefix"; then for ac_prog in ar do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AR"; then ac_cv_prog_AR="$AR" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AR=$ac_cv_prog_AR if test -n "$AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AR" >&5 $as_echo "$AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AR" && break done fi if test -z "$AR"; then ac_ct_AR=$AR for ac_prog in ar do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_AR"; then ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_AR="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_AR=$ac_cv_prog_ac_ct_AR if test -n "$ac_ct_AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5 $as_echo "$ac_ct_AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_AR" && break done if test "x$ac_ct_AR" = x; then AR="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AR=$ac_ct_AR fi fi : ${AR=ar} : ${AR_FLAGS=cru} { $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5 $as_echo_n "checking for archiver @FILE support... " >&6; } if ${lt_cv_ar_at_file+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ar_at_file=no cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5' { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test 0 -eq "$ac_status"; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? 
= $ac_status" >&5 test $ac_status = 0; } if test 0 -ne "$ac_status"; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5 $as_echo "$lt_cv_ar_at_file" >&6; } if test no = "$lt_cv_ar_at_file"; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi test -z "$STRIP" && STRIP=: if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. set dummy ${ac_tool_prefix}ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi RANLIB=$ac_cv_prog_RANLIB if test -n "$RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5 $as_echo "$RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_RANLIB"; then ac_ct_RANLIB=$RANLIB # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_RANLIB"; then ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_RANLIB="ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB if test -n "$ac_ct_RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5 $as_echo "$ac_ct_RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_RANLIB" = x; then RANLIB=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac RANLIB=$ac_ct_RANLIB fi else RANLIB="$ac_cv_prog_RANLIB" fi test -z "$RANLIB" && RANLIB=: # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in bitrig* | openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Check for command to grab the raw symbol name followed by C symbol from nm. { $as_echo "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5 $as_echo_n "checking command to parse $NM output from $compiler object... " >&6; } if ${lt_cv_sys_global_symbol_pipe+:} false; then : $as_echo_n "(cached) " >&6 else # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[BCDEGRST]' # Regexp to match symbols that can be accessed directly from C. sympat='\([_A-Za-z][_A-Za-z0-9]*\)' # Define system-specific variables. 
case $host_os in aix*) symcode='[BCDT]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[ABCDGISTW]' ;; hpux*) if test ia64 = "$host_cpu"; then symcode='[ABCDEGRST]' fi ;; irix* | nonstopux*) symcode='[BCDEGRST]' ;; osf*) symcode='[BCDEGQRST]' ;; solaris*) symcode='[BDRT]' ;; sco3.2v5*) symcode='[DT]' ;; sysv4.2uw2*) symcode='[DT]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[ABDT]' ;; sysv4) symcode='[DFNSTU]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[ABCDGIRSTW]' ;; esac if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Gets list of data symbols to import. lt_cv_sys_global_symbol_to_import="sed -n -e 's/^I .* \(.*\)$/\1/p'" # Adjust the below global symbol transforms to fixup imported variables. lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'" lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'" lt_c_name_lib_hook="\ -e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\ -e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'" else # Disable hooks by default. lt_cv_sys_global_symbol_to_import= lt_cdecl_hook= lt_c_name_hook= lt_c_name_lib_hook= fi # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="sed -n"\ $lt_cdecl_hook\ " -e 's/^T .* \(.*\)$/extern int \1();/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n"\ $lt_c_name_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'" # Transform an extracted symbol line into symbol name with lib prefix and # symbol address. lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n"\ $lt_c_name_lib_hook\ " -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ " -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\ " -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function, # D for any global variable and I for any imported variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. 
lt_cv_sys_global_symbol_pipe="$AWK '"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\ " /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\ " /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\ " {split(\$ 0,a,/\||\r/); split(a[2],s)};"\ " s[1]~/^[@?]/{print f,s[1],s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Now try to grab the symbols. nlist=conftest.nm $ECHO "$as_me:$LINENO: $NM conftest.$ac_objext | $lt_cv_sys_global_symbol_pipe > $nlist" >&5 if eval "$NM" conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist 2>&5 && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. 
mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS=conftstm.$ac_objext CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest$ac_exeext; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&5 fi else echo "cannot find nm_test_var in $nlist" >&5 fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 fi else echo "$progname: failed program was:" >&5 cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test yes = "$pipe_works"; then break else lt_cv_sys_global_symbol_pipe= fi done fi if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: failed" >&5 $as_echo "failed" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then nm_file_list_spec='@' fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5 $as_echo_n "checking for sysroot... " >&6; } # Check whether --with-sysroot was given. if test "${with_sysroot+set}" = set; then : withval=$with_sysroot; else with_sysroot=no fi lt_sysroot= case $with_sysroot in #( yes) if test yes = "$GCC"; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_sysroot" >&5 $as_echo "$with_sysroot" >&6; } as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5 $as_echo "${lt_sysroot:-no}" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a working dd" >&5 $as_echo_n "checking for a working dd... " >&6; } if ${ac_cv_path_lt_DD+:} false; then : $as_echo_n "(cached) " >&6 else printf 0123456789abcdef0123456789abcdef >conftest.i cat conftest.i conftest.i >conftest2.i : ${lt_DD:=$DD} if test -z "$lt_DD"; then ac_path_lt_DD_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in dd; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_lt_DD="$as_dir/$ac_prog$ac_exec_ext" as_fn_executable_p "$ac_path_lt_DD" || continue if "$ac_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=: fi $ac_path_lt_DD_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_lt_DD"; then : fi else ac_cv_path_lt_DD=$lt_DD fi rm -f conftest.i conftest2.i conftest.out fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_lt_DD" >&5 $as_echo "$ac_cv_path_lt_DD" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to truncate binary pipes" >&5 $as_echo_n "checking how to truncate binary pipes... 
" >&6; } if ${lt_cv_truncate_bin+:} false; then : $as_echo_n "(cached) " >&6 else printf 0123456789abcdef0123456789abcdef >conftest.i cat conftest.i conftest.i >conftest2.i lt_cv_truncate_bin= if "$ac_cv_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then cmp -s conftest.i conftest.out \ && lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1" fi rm -f conftest.i conftest2.i conftest.out test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_truncate_bin" >&5 $as_echo "$lt_cv_truncate_bin" >&6; } # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. func_cc_basename () { for cc_temp in $*""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` } # Check whether --enable-libtool-lock was given. if test "${enable_libtool_lock+set}" = set; then : enableval=$enable_libtool_lock; fi test no = "$enable_libtool_lock" || enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out what ABI is being produced by ac_compile, and set mode # options accordingly. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE=32 ;; *ELF-64*) HPUX_IA64_MODE=64 ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then if test yes = "$lt_cv_prog_gnu_ld"; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; mips64*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then emul=elf case `/usr/bin/file conftest.$ac_objext` in *32-bit*) emul="${emul}32" ;; *64-bit*) emul="${emul}64" ;; esac case `/usr/bin/file conftest.$ac_objext` in *MSB*) emul="${emul}btsmip" ;; *LSB*) emul="${emul}ltsmip" ;; esac case `/usr/bin/file conftest.$ac_objext` in *N32*) emul="${emul}n32" ;; esac LD="${LD-ld} -m $emul" fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. 
Note that the listed cases only cover the # situations where additional linker options are needed (such as when # doing 32-bit compilation for a host where ld defaults to 64-bit, or # vice versa); the common cases where no linker options are needed do # not appear in the list. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) case `/usr/bin/file conftest.o` in *x86-64*) LD="${LD-ld} -m elf32_x86_64" ;; *) LD="${LD-ld} -m elf_i386" ;; esac ;; powerpc64le-*linux*) LD="${LD-ld} -m elf32lppclinux" ;; powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; powerpcle-*linux*) LD="${LD-ld} -m elf64lppc" ;; powerpc-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS=$CFLAGS CFLAGS="$CFLAGS -belf" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5 $as_echo_n "checking whether the C compiler needs -belf... " >&6; } if ${lt_cv_cc_needs_belf+:} false; then : $as_echo_n "(cached) " >&6 else ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_cc_needs_belf=yes else lt_cv_cc_needs_belf=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5 $as_echo "$lt_cv_cc_needs_belf" >&6; } if test yes != "$lt_cv_cc_needs_belf"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS=$SAVE_CFLAGS fi ;; *-*solaris*) # Find out what ABI is being produced by ac_compile, and set linker # options accordingly. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*|x86_64-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. 
if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD=${LD-ld}_sol2 fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks=$enable_libtool_lock if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args. set dummy ${ac_tool_prefix}mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MANIFEST_TOOL"; then ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL if test -n "$MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5 $as_echo "$MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_MANIFEST_TOOL"; then ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL # Extract the first word of "mt", so it can be a program name with args. set dummy mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_MANIFEST_TOOL"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL if test -n "$ac_ct_MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5 $as_echo "$ac_ct_MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_MANIFEST_TOOL" = x; then MANIFEST_TOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL fi else MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL" fi test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5 $as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; } if ${lt_cv_path_mainfest_tool+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5 $MANIFEST_TOOL '-?' 
2>conftest.err > conftest.out cat conftest.err >&5 if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5 $as_echo "$lt_cv_path_mainfest_tool" >&6; } if test yes != "$lt_cv_path_mainfest_tool"; then MANIFEST_TOOL=: fi case $host_os in rhapsody* | darwin*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args. set dummy ${ac_tool_prefix}dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DSYMUTIL"; then ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DSYMUTIL=$ac_cv_prog_DSYMUTIL if test -n "$DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5 $as_echo "$DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DSYMUTIL"; then ac_ct_DSYMUTIL=$DSYMUTIL # Extract the first word of "dsymutil", so it can be a program name with args. set dummy dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DSYMUTIL"; then ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL if test -n "$ac_ct_DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5 $as_echo "$ac_ct_DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DSYMUTIL" = x; then DSYMUTIL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DSYMUTIL=$ac_ct_DSYMUTIL fi else DSYMUTIL="$ac_cv_prog_DSYMUTIL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args. set dummy ${ac_tool_prefix}nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NMEDIT"; then ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi NMEDIT=$ac_cv_prog_NMEDIT if test -n "$NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5 $as_echo "$NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_NMEDIT"; then ac_ct_NMEDIT=$NMEDIT # Extract the first word of "nmedit", so it can be a program name with args. set dummy nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_NMEDIT"; then ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_NMEDIT="nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT if test -n "$ac_ct_NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5 $as_echo "$ac_ct_NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_NMEDIT" = x; then NMEDIT=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac NMEDIT=$ac_ct_NMEDIT fi else NMEDIT="$ac_cv_prog_NMEDIT" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args. set dummy ${ac_tool_prefix}lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$LIPO"; then ac_cv_prog_LIPO="$LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi LIPO=$ac_cv_prog_LIPO if test -n "$LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5 $as_echo "$LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_LIPO"; then ac_ct_LIPO=$LIPO # Extract the first word of "lipo", so it can be a program name with args. set dummy lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_ac_ct_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_LIPO"; then ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_LIPO="lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO if test -n "$ac_ct_LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5 $as_echo "$ac_ct_LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_LIPO" = x; then LIPO=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac LIPO=$ac_ct_LIPO fi else LIPO="$ac_cv_prog_LIPO" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args. set dummy ${ac_tool_prefix}otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL"; then ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL=$ac_cv_prog_OTOOL if test -n "$OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5 $as_echo "$OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL"; then ac_ct_OTOOL=$OTOOL # Extract the first word of "otool", so it can be a program name with args. set dummy otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL"; then ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL="otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL if test -n "$ac_ct_OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5 $as_echo "$ac_ct_OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL" = x; then OTOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL=$ac_ct_OTOOL fi else OTOOL="$ac_cv_prog_OTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args. set dummy ${ac_tool_prefix}otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL64"; then ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL64=$ac_cv_prog_OTOOL64 if test -n "$OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5 $as_echo "$OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL64"; then ac_ct_OTOOL64=$OTOOL64 # Extract the first word of "otool64", so it can be a program name with args. set dummy otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL64"; then ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_OTOOL64="otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64 if test -n "$ac_ct_OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5 $as_echo "$ac_ct_OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL64" = x; then OTOOL64=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL64=$ac_ct_OTOOL64 fi else OTOOL64="$ac_cv_prog_OTOOL64" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5 $as_echo_n "checking for -single_module linker flag... " >&6; } if ${lt_cv_apple_cc_single_mod+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_apple_cc_single_mod=no if test -z "$LT_MULTI_MODULE"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&5 $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&5 # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test 0 = "$_lt_result"; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&5 fi rm -rf libconftest.dylib* rm -f conftest.* fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5 $as_echo "$lt_cv_apple_cc_single_mod" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5 $as_echo_n "checking for -exported_symbols_list linker flag... " >&6; } if ${lt_cv_ld_exported_symbols_list+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_ld_exported_symbols_list=yes else lt_cv_ld_exported_symbols_list=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5 $as_echo "$lt_cv_ld_exported_symbols_list" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5 $as_echo_n "checking for -force_load linker flag... 
" >&6; } if ${lt_cv_ld_force_load+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5 $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5 echo "$AR cru libconftest.a conftest.o" >&5 $AR cru libconftest.a conftest.o 2>&5 echo "$RANLIB libconftest.a" >&5 $RANLIB libconftest.a 2>&5 cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5 $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&5 elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then lt_cv_ld_force_load=yes else cat conftest.err >&5 fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5 $as_echo "$lt_cv_ld_force_load" >&6; } case $host_os in rhapsody* | darwin1.[012]) _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[91]*) _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; 10.[012][,.]*) _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; 10.*) _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test yes = "$lt_cv_apple_cc_single_mod"; then _lt_dar_single_mod='$single_module' fi if test yes = "$lt_cv_ld_exported_symbols_list"; then _lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib' fi if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac # func_munge_path_list VARIABLE PATH # ----------------------------------- # VARIABLE is name of variable containing _space_ separated list of # directories to be munged by the contents of PATH, which is string # having a format: # "DIR[:DIR]:" # string "DIR[ DIR]" will be prepended to VARIABLE # ":DIR[:DIR]" # string "DIR[ DIR]" will be appended to VARIABLE # "DIRP[:DIRP]::[DIRA:]DIRA" # string "DIRP[ DIRP]" will be prepended to VARIABLE and string # "DIRA[ DIRA]" will be appended to VARIABLE # "DIR[:DIR]" # VARIABLE will be replaced by "DIR[ DIR]" func_munge_path_list () { case x$2 in x) ;; *:) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\" ;; x:*) eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\" ;; *::*) eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\" eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\" ;; *) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\" ;; esac } ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 $as_echo_n "checking how to run the C preprocessor... 
" >&6; } # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if ${ac_cv_prog_CPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CPP needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CPP=$CPP fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 $as_echo "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C preprocessor \"$CPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include #include #include #include int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi # On IRIX 5.3, sys/types and inttypes.h are conflicting. for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ inttypes.h stdint.h unistd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default " if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done for ac_header in dlfcn.h do : ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default " if test "x$ac_cv_header_dlfcn_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_DLFCN_H 1 _ACEOF fi done # Set options enable_dlopen=no enable_win32_dll=no # Check whether --enable-shared was given. if test "${enable_shared+set}" = set; then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS=$lt_save_ifs ;; esac else enable_shared=yes fi # Check whether --enable-static was given. if test "${enable_static+set}" = set; then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS=$lt_save_ifs ;; esac else enable_static=yes fi # Check whether --with-pic was given. if test "${with_pic+set}" = set; then : withval=$with_pic; lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for lt_pkg in $withval; do IFS=$lt_save_ifs if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS=$lt_save_ifs ;; esac else pic_mode=default fi # Check whether --enable-fast-install was given. if test "${enable_fast_install+set}" = set; then : enableval=$enable_fast_install; p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR, for pkg in $enableval; do IFS=$lt_save_ifs if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS=$lt_save_ifs ;; esac else enable_fast_install=yes fi shared_archive_member_spec= case $host,$enable_shared in power*-*-aix[5-9]*,yes) { $as_echo "$as_me:${as_lineno-$LINENO}: checking which variant of shared library versioning to provide" >&5 $as_echo_n "checking which variant of shared library versioning to provide... " >&6; } # Check whether --with-aix-soname was given. if test "${with_aix_soname+set}" = set; then : withval=$with_aix_soname; case $withval in aix|svr4|both) ;; *) as_fn_error $? "Unknown argument to --with-aix-soname" "$LINENO" 5 ;; esac lt_cv_with_aix_soname=$with_aix_soname else if ${lt_cv_with_aix_soname+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_with_aix_soname=aix fi with_aix_soname=$lt_cv_with_aix_soname fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $with_aix_soname" >&5 $as_echo "$with_aix_soname" >&6; } if test aix != "$with_aix_soname"; then # For the AIX way of multilib, we name the shared archive member # based on the bitwidth used, traditionally 'shr.o' or 'shr_64.o', # and 'shr.imp' or 'shr_64.imp', respectively, for the Import File. # Even when GNU compilers ignore OBJECT_MODE but need '-maix64' flag, # the AIX toolchain works better with OBJECT_MODE set (default 32). if test 64 = "${OBJECT_MODE-32}"; then shared_archive_member_spec=shr_64 else shared_archive_member_spec=shr fi fi ;; *) with_aix_soname=aix ;; esac # This can be used to rebuild libtool when needed LIBTOOL_DEPS=$ltmain # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' test -z "$LN_S" && LN_S="ln -s" if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5 $as_echo_n "checking for objdir... " >&6; } if ${lt_cv_objdir+:} false; then : $as_echo_n "(cached) " >&6 else rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5 $as_echo "$lt_cv_objdir" >&6; } objdir=$lt_cv_objdir cat >>confdefs.h <<_ACEOF #define LT_OBJDIR "$lt_cv_objdir/" _ACEOF case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. 
if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a '.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a with_gnu_ld=$lt_cv_prog_gnu_ld old_CC=$CC old_CFLAGS=$CFLAGS # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o func_cc_basename $compiler cc_basename=$func_cc_basename_result # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5 $as_echo_n "checking for ${ac_tool_prefix}file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD=$MAGIC_CMD lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/${ac_tool_prefix}file"; then lt_cv_path_MAGIC_CMD=$ac_dir/"${ac_tool_prefix}file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS=$lt_save_ifs MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac fi MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for file" >&5 $as_echo_n "checking for file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD=$MAGIC_CMD lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. 
if test -f "$ac_dir/file"; then lt_cv_path_MAGIC_CMD=$ac_dir/"file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS=$lt_save_ifs MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac fi MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else MAGIC_CMD=: fi fi fi ;; esac # Use C for the default configuration in the libtool script lt_save_CC=$CC ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o objext=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* if test -n "$compiler"; then lt_prog_compiler_no_builtin_flag= if test yes = "$GCC"; then case $cc_basename in nvcc*) lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;; *) lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 $as_echo_n "checking if $compiler supports -fno-rtti -fno-exceptions... 
" >&6; } if ${lt_cv_prog_compiler_rtti_exceptions+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_rtti_exceptions=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-fno-rtti -fno-exceptions" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_rtti_exceptions=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 $as_echo "$lt_cv_prog_compiler_rtti_exceptions" >&6; } if test yes = "$lt_cv_prog_compiler_rtti_exceptions"; then lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" else : fi fi lt_prog_compiler_wl= lt_prog_compiler_pic= lt_prog_compiler_static= if test yes = "$GCC"; then lt_prog_compiler_wl='-Wl,' lt_prog_compiler_static='-static' case $host_os in aix*) # All AIX code is PIC. if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' fi lt_prog_compiler_pic='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the '-m68020' flag to GCC prevents building anything better, # like '-m68040'. lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic='-DDLL_EXPORT' case $host_os in os2*) lt_prog_compiler_static='$wl-static' ;; esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) lt_prog_compiler_pic='-fPIC' ;; esac ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. 
# Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. lt_prog_compiler_can_build_shared=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic=-Kconform_pic fi ;; *) lt_prog_compiler_pic='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 lt_prog_compiler_wl='-Xlinker ' if test -n "$lt_prog_compiler_pic"; then lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. case $host_os in aix*) lt_prog_compiler_wl='-Wl,' if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' else lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' fi ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' case $cc_basename in nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; esac ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic='-DDLL_EXPORT' case $host_os in os2*) lt_prog_compiler_static='$wl-static' ;; esac ;; hpux9* | hpux10* | hpux11*) lt_prog_compiler_wl='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? lt_prog_compiler_static='$wl-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) lt_prog_compiler_wl='-Wl,' # PIC (with -KPIC) is the default. lt_prog_compiler_static='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in # old Intel for x86_64, which still supported -KPIC. ecc*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; # Lahey Fortran 8.1. lf95*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='--shared' lt_prog_compiler_static='--static' ;; nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; tcc*) # Fabrice Bellard et al's Tiny C Compiler lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; ccc*) lt_prog_compiler_wl='-Wl,' # All Alpha code is PIC. 
lt_prog_compiler_static='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-qpic' lt_prog_compiler_static='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='' ;; *Sun\ F* | *Sun*Fortran*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Wl,' ;; *Intel*\ [CF]*Compiler*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; *Portland\ Group*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; esac ;; esac ;; newsos6) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; osf3* | osf4* | osf5*) lt_prog_compiler_wl='-Wl,' # All OSF/1 code is PIC. lt_prog_compiler_static='-non_shared' ;; rdos*) lt_prog_compiler_static='-non_shared' ;; solaris*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) lt_prog_compiler_wl='-Qoption ld ';; *) lt_prog_compiler_wl='-Wl,';; esac ;; sunos4*) lt_prog_compiler_wl='-Qoption ld ' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic='-Kconform_pic' lt_prog_compiler_static='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; unicos*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_can_build_shared=no ;; uts4*) lt_prog_compiler_pic='-pic' lt_prog_compiler_static='-Bstatic' ;; *) lt_prog_compiler_can_build_shared=no ;; esac fi case $host_os in # For platforms that do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic= ;; *) lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... " >&6; } if ${lt_cv_prog_compiler_pic+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic=$lt_prog_compiler_pic fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5 $as_echo "$lt_cv_prog_compiler_pic" >&6; } lt_prog_compiler_pic=$lt_cv_prog_compiler_pic # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic works... 
" >&6; } if ${lt_cv_prog_compiler_pic_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic -DPIC" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5 $as_echo "$lt_cv_prog_compiler_pic_works" >&6; } if test yes = "$lt_cv_prog_compiler_pic_works"; then case $lt_prog_compiler_pic in "" | " "*) ;; *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; esac else lt_prog_compiler_pic= lt_prog_compiler_can_build_shared=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works=yes fi else lt_cv_prog_compiler_static_works=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5 $as_echo "$lt_cv_prog_compiler_static_works" >&6; } if test yes = "$lt_cv_prog_compiler_static_works"; then : else lt_prog_compiler_static= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... 
" >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. 
$RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } hard_links=nottested if test no = "$lt_cv_prog_compiler_c_o" && test no != "$need_locks"; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... " >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test no = "$hard_links"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: '$CC' does not support '-c -o', so 'make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } runpath_var= allow_undefined_flag= always_export_symbols=no archive_cmds= archive_expsym_cmds= compiler_needs_object=no enable_shared_with_static_runtimes=no export_dynamic_flag_spec= export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' hardcode_automatic=no hardcode_direct=no hardcode_direct_absolute=no hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_minus_L=no hardcode_shlibpath_var=unsupported inherit_rpath=no link_all_deplibs=unknown module_cmds= module_expsym_cmds= old_archive_from_new_cmds= old_archive_from_expsyms_cmds= thread_safe_flag_spec= whole_archive_flag_spec= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list include_expsyms= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ' (' and ')$', so one must not match beginning or # end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc', # as well as any symbol that contains 'd'. exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. if test yes != "$GCC"; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd* | bitrig*) with_gnu_ld=no ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs=no ;; esac ld_shlibs=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test yes = "$with_gnu_ld"; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. 
However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;; *\ \(GNU\ Binutils\)\ [3-9]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test yes = "$lt_use_gnu_ld_interface"; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='$wl' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' export_dynamic_flag_spec='$wl--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else whole_archive_flag_spec= fi supports_anon_versioning=no case `$LD -v | $SED -e 's/(^)\+)\s\+//' 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test ia64 != "$host_cpu"; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. _LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, # as there is no search path for DLLs. 
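# ---------------------------------------------------------------------------
# [Editor's illustrative sketch -- not part of the libtool-generated script.]
# The export_symbols_cmds set a few lines below build the DLL export list by
# piping $NM output through $SED/sort/uniq.  A minimal standalone version of
# the same idea; the helper name is hypothetical and GNU-style nm output is
# assumed (POSIX sh):
lt_sketch_list_exports ()
{
  # usage: lt_sketch_list_exports object.o ... > symbols.exp
  # print the names of global code/data symbols, one per line, de-duplicated
  ${NM-nm} "$@" 2>/dev/null |
    sed -n 's/^.*[ ][BCDGRSTW][ ]\([^ ][^ ]*\)$/\1/p' |
    sort | uniq
}
# Example (not invoked during configure):
#   lt_sketch_list_exports conftest.$ac_objext > conftest.exp
# ---------------------------------------------------------------------------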
hardcode_libdir_flag_spec='-L$libdir' export_dynamic_flag_spec='$wl--export-all-symbols' allow_undefined_flag=unsupported always_export_symbols=no enable_shared_with_static_runtimes=yes export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file, use it as # is; otherwise, prepend EXPORTS... archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs=no fi ;; haiku*) archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' link_all_deplibs=yes ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported shrext_cmds=.dll archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' old_archive_From_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' enable_shared_with_static_runtimes=yes ;; interix[3-9]*) hardcode_direct=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='$wl-rpath,$libdir' export_dynamic_flag_spec='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
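# ---------------------------------------------------------------------------
# [Editor's illustrative sketch -- not part of the libtool-generated script.]
# The archive_cmds just below embed `expr ${RANDOM-$$} % 4096 / 2 \* 262144 +
# 1342177280`.  Worked out: 262144 is 256 KiB and 1342177280 is 0x50000000,
# so the slot index 0..2047 yields bases 0x50000000 .. 0x6FFC0000 in 256 KiB
# steps -- exactly the range described in the comment above.  Standalone
# sketch (hypothetical helper name):
lt_sketch_random_image_base ()
{
  # prints one 256 KiB-aligned address between 0x50000000 and 0x6FFC0000
  expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280
}
# Example (not invoked during configure):
#   lt_sketch_random_image_base    # e.g. 1610612736 (0x60000000)
# ---------------------------------------------------------------------------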
archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test linux-dietlibc = "$host_os"; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test no = "$tmp_diet" then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 whole_archive_flag_spec= tmp_sharedflag='--shared' ;; nagfor*) # NAGFOR 5.3 tmp_sharedflag='-Wl,-shared' ;; xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 whole_archive_flag_spec='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' compiler_needs_object=yes ;; esac case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C 5.9 whole_archive_flag_spec='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive' compiler_needs_object=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' if test yes = "$supports_anon_versioning"; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib' fi case $cc_basename in tcc*) export_dynamic_flag_spec='-rdynamic' ;; xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test yes = "$supports_anon_versioning"; then 
archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else ld_shlibs=no fi ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
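# ---------------------------------------------------------------------------
# [Editor's illustrative sketch -- not part of the libtool-generated script.]
# When $supports_anon_versioning is yes, the archive_expsym_cmds above turn a
# plain export-symbols file into a GNU ld version script of the form
# "{ global: sym1; sym2; local: *; };".  A standalone sketch of that step
# (hypothetical helper name, POSIX sh):
lt_sketch_make_version_script ()
{
  # usage: lt_sketch_make_version_script export_symbols_file > libfoo.ver
  echo '{ global:'
  sed -e 's/\(.*\)/\1;/' "$1"
  echo 'local: *; };'
}
# Example (not invoked during configure):
#   lt_sketch_make_version_script $export_symbols > $output_objdir/$libname.ver
# ---------------------------------------------------------------------------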
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac ;; sunos4*) archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= hardcode_direct=yes hardcode_shlibpath_var=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac if test no = "$ld_shlibs"; then runpath_var= hardcode_libdir_flag_spec= export_dynamic_flag_spec= whole_archive_flag_spec= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) allow_undefined_flag=unsupported always_export_symbols=yes archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. hardcode_minus_L=yes if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag= else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to GNU nm, but means don't demangle to AIX nm. # Without the "-l" option, or with the "-B" option, AIX nm treats # weak defined symbols like other global defined symbols, whereas # GNU nm marks them as "W". # While the 'weak' keyword is ignored in the Export File, we need # it in the Import File for the 'aix-soname' feature, so we have # to replace the "-B" option with "-P" for AIX nm. if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else export_symbols_cmds='`func_echo_all $NM | $SED -e '\''s/B\([^B]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && (substr(\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # have runtime linking enabled, and use it for executables. 
# For shared libraries, we enable/disable runtime linking # depending on the kind of the shared library created - # when "with_aix_soname,aix_use_runtimelinking" is: # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables # "aix,yes" lib.so shared, rtl:yes, for executables # lib.a static archive # "both,no" lib.so.V(shr.o) shared, rtl:yes # lib.a(lib.so.V) shared, rtl:no, for executables # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a(lib.so.V) shared, rtl:no # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables # lib.a static archive case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then aix_use_runtimelinking=yes break fi done if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then # With aix-soname=svr4, we create the lib.so.V shared archives only, # so we don't have lib.a shared libs to link our executables. # We have to force runtime linking in this case. aix_use_runtimelinking=yes LDFLAGS="$LDFLAGS -Wl,-brtl" fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. archive_cmds='' hardcode_direct=yes hardcode_direct_absolute=yes hardcode_libdir_separator=':' link_all_deplibs=yes file_list_spec='$wl-f,' case $with_aix_soname,$aix_use_runtimelinking in aix,*) ;; # traditional, no import file svr4,* | *,yes) # use import file # The Import File defines what to hardcode. hardcode_direct=no hardcode_direct_absolute=no ;; esac if test yes = "$GCC"; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L=yes hardcode_libdir_flag_spec='-L$libdir' hardcode_libdir_separator= fi ;; esac shared_flag='-shared' if test yes = "$aix_use_runtimelinking"; then shared_flag="$shared_flag "'$wl-G' fi # Need to ensure runtime linking is disabled for the traditional # shared library, or the linker may eventually find shared libraries # /with/ Import File - we do not want to mix them. shared_flag_aix='-shared' shared_flag_svr4='-shared $wl-G' else # not using gcc if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test yes = "$aix_use_runtimelinking"; then shared_flag='$wl-G' else shared_flag='$wl-bM:SRE' fi shared_flag_aix='$wl-bM:SRE' shared_flag_svr4='$wl-G' fi fi export_dynamic_flag_spec='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. 
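# ---------------------------------------------------------------------------
# [Editor's illustrative sketch -- not part of the libtool-generated script.]
# The AIX branch above decides whether run-time linking is in effect by
# scanning $LDFLAGS for -brtl (or -Wl,-brtl).  A standalone sketch of that
# test (hypothetical helper name, POSIX sh):
lt_sketch_aix_uses_rtl ()
{
  # usage: lt_sketch_aix_uses_rtl $LDFLAGS ; exit status 0 if -brtl was given
  for lt_sketch_flag in "$@"; do
    case $lt_sketch_flag in
      -brtl | -Wl,-brtl) return 0 ;;
    esac
  done
  return 1
}
# Example (not invoked during configure):
#   if lt_sketch_aix_uses_rtl $LDFLAGS; then aix_use_runtimelinking=yes; fi
# ---------------------------------------------------------------------------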
always_export_symbols=yes if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag='-berok' # Determine the default libpath from the value encoded in an # empty executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=/usr/lib:/lib fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else if test ia64 = "$host_cpu"; then hardcode_libdir_flag_spec='$wl-R $libdir:/usr/lib:/lib' allow_undefined_flag="-z nodefs" archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=/usr/lib:/lib fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag=' $wl-bernotok' allow_undefined_flag=' $wl-berok' if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. 
whole_archive_flag_spec='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec='$convenience' fi archive_cmds_need_lc=yes archive_expsym_cmds='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' # -brtl affects multiple linker settings, -berok does not and is overridden later compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([, ]\\)%-berok\\1%g"`' if test svr4 != "$with_aix_soname"; then # This is similar to how AIX traditionally builds its shared libraries. archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' fi if test aix != "$with_aix_soname"; then archive_expsym_cmds="$archive_expsym_cmds"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' else # used by -dlpreopen to get the symbols archive_expsym_cmds="$archive_expsym_cmds"'~$MV $output_objdir/$realname.d/$soname $output_objdir' fi archive_expsym_cmds="$archive_expsym_cmds"'~$RM -r $output_objdir/$realname.d' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; bsdi[45]*) export_dynamic_flag_spec=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl*) # Native MSVC hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported always_export_symbols=yes file_list_spec='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. 
archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' archive_expsym_cmds='if test DEF = "`$SED -n -e '\''s/^[ ]*//'\'' -e '\''/^\(;.*\)*$/d'\'' -e '\''s/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p'\'' -e q $export_symbols`" ; then cp "$export_symbols" "$output_objdir/$soname.def"; echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; else $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, )='true' enable_shared_with_static_runtimes=yes exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib old_postinstall_cmds='chmod 644 $oldlib' postlink_cmds='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile=$lt_outputfile.exe lt_tool_outputfile=$lt_tool_outputfile.exe ;; esac~ if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC wrapper hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. old_archive_from_new_cmds='true' # FIXME: Should let the user specify the lib program. 
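# ---------------------------------------------------------------------------
# [Editor's illustrative sketch -- not part of the libtool-generated script.]
# In the native MSVC (cl*) branch above, archive_expsym_cmds first check
# whether the export-symbols file is already a module-definition (.def) file:
# skip blank and ';' comment lines and see whether the first real line starts
# with EXPORTS or LIBRARY.  A standalone sketch of that check, mirroring the
# $SED expression used above (hypothetical helper name):
lt_sketch_is_def_file ()
{
  # usage: lt_sketch_is_def_file exports-file ; status 0 if it looks like .def
  lt_sketch_first=`sed -n -e 's/^[ ]*//' -e '/^\(;.*\)*$/d' -e 's/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p' -e q "$1"`
  test DEF = "$lt_sketch_first"
}
# Example (not invoked during configure):
#   if lt_sketch_is_def_file $export_symbols; then cp $export_symbols $output_objdir/$soname.def; fi
# ---------------------------------------------------------------------------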
old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs' enable_shared_with_static_runtimes=yes ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc=no hardcode_direct=no hardcode_automatic=yes hardcode_shlibpath_var=unsupported if test yes = "$lt_cv_ld_force_load"; then whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec='' fi link_all_deplibs=yes allow_undefined_flag=$_lt_dar_allow_undefined case $cc_basename in ifort*|nagfor*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test yes = "$_lt_dar_can_shared"; then output_verbose_link_cmd=func_echo_all archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil" module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil" archive_expsym_cmds="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil" module_expsym_cmds="sed -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil" else ld_shlibs=no fi ;; dgux*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. freebsd* | dragonfly*) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; hpux9*) if test yes = "$GCC"; then archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' fi hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes export_dynamic_flag_spec='$wl-E' ;; hpux10*) if test yes,no = "$GCC,$with_gnu_ld"; then archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test no = "$with_gnu_ld"; then hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes fi ;; hpux11*) if test yes,no = "$GCC,$with_gnu_ld"; then case $host_cpu in hppa*64*) archive_cmds='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) archive_cmds='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5 $as_echo_n "checking if $CC understands -b... " >&6; } if ${lt_cv_prog_compiler__b+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler__b=no save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS -b" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler__b=yes fi else lt_cv_prog_compiler__b=yes fi fi $RM -r conftest* LDFLAGS=$save_LDFLAGS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5 $as_echo "$lt_cv_prog_compiler__b" >&6; } if test yes = "$lt_cv_prog_compiler__b"; then archive_cmds='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi ;; esac fi if test no = "$with_gnu_ld"; then hardcode_libdir_flag_spec='$wl+b $wl$libdir' hardcode_libdir_separator=: case $host_cpu in hppa*64*|ia64*) hardcode_direct=no hardcode_shlibpath_var=no ;; *) hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test yes = "$GCC"; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. 
# This should be the same for all languages, so no per-tag cache variable. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5 $as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; } if ${lt_cv_irix_exported_symbol+:} false; then : $as_echo_n "(cached) " >&6 else save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int foo (void) { return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_irix_exported_symbol=yes else lt_cv_irix_exported_symbol=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5 $as_echo "$lt_cv_irix_exported_symbol" >&6; } if test yes = "$lt_cv_irix_exported_symbol"; then archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib' fi link_all_deplibs=no else archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: inherit_rpath=yes link_all_deplibs=yes ;; linux*) case $cc_basename in tcc*) # Fabrice Bellard et al's Tiny C Compiler ld_shlibs=yes archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; newsos6) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: hardcode_shlibpath_var=no ;; *nto* | *qnx*) ;; openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then hardcode_direct=yes hardcode_shlibpath_var=no hardcode_direct_absolute=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols' hardcode_libdir_flag_spec='$wl-rpath,$libdir' export_dynamic_flag_spec='$wl-E' else archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='$wl-rpath,$libdir' fi else ld_shlibs=no fi ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported shrext_cmds=.dll archive_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ 
$ECHO EXPORTS >> $output_objdir/$libname.def~ emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' archive_expsym_cmds='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ $ECHO EXPORTS >> $output_objdir/$libname.def~ prefix_cmds="$SED"~ if test EXPORTS = "`$SED 1q $export_symbols`"; then prefix_cmds="$prefix_cmds -e 1d"; fi~ prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ emximp -o $lib $output_objdir/$libname.def' old_archive_From_new_cmds='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' enable_shared_with_static_runtimes=yes ;; osf3*) if test yes = "$GCC"; then allow_undefined_flag=' $wl-expect_unresolved $wl\*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' hardcode_libdir_separator=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test yes = "$GCC"; then allow_undefined_flag=' $wl-expect_unresolved $wl\*' archive_cmds='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' hardcode_libdir_flag_spec='$wl-rpath $wl$libdir' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly hardcode_libdir_flag_spec='-rpath $libdir' fi archive_cmds_need_lc='no' hardcode_libdir_separator=: ;; solaris*) no_undefined_flag=' -z defs' if test yes = "$GCC"; then wlarc='$wl' archive_cmds='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' 
archive_cmds='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='$wl' archive_cmds='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi hardcode_libdir_flag_spec='-R$libdir' hardcode_shlibpath_var=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands '-z linker_flag'. GCC discards it without '$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test yes = "$GCC"; then whole_archive_flag_spec='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' else whole_archive_flag_spec='-z allextract$convenience -z defaultextract' fi ;; esac link_all_deplibs=yes ;; sunos4*) if test sequent = "$host_vendor"; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. archive_cmds='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi hardcode_libdir_flag_spec='-L$libdir' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; sysv4) case $host_vendor in sni) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' reload_cmds='$CC -r -o $output$reload_objs' hardcode_direct=no ;; motorola) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' hardcode_shlibpath_var=no ;; sysv4.3*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no export_dynamic_flag_spec='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes ld_shlibs=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag='$wl-z,text' archive_cmds_need_lc=no hardcode_shlibpath_var=no runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. 
If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. no_undefined_flag='$wl-z,text' allow_undefined_flag='$wl-z,nodefs' archive_cmds_need_lc=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='$wl-R,$libdir' hardcode_libdir_separator=':' link_all_deplibs=yes export_dynamic_flag_spec='$wl-Bexport' runpath_var='LD_RUN_PATH' if test yes = "$GCC"; then archive_cmds='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; *) ld_shlibs=no ;; esac if test sni = "$host_vendor"; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) export_dynamic_flag_spec='$wl-Blargedynsym' ;; esac fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5 $as_echo "$ld_shlibs" >&6; } test no = "$ld_shlibs" && can_build_shared=no with_gnu_ld=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc" in x|xyes) # Assume -lc should be added archive_cmds_need_lc=yes if test yes,yes = "$GCC,$enable_shared"; then case $archive_cmds in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 $as_echo_n "checking whether -lc should be explicitly linked in... " >&6; } if ${lt_cv_archive_cmds_need_lc+:} false; then : $as_echo_n "(cached) " >&6 else $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl pic_flag=$lt_prog_compiler_pic compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag allow_undefined_flag= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc=no else lt_cv_archive_cmds_need_lc=yes fi allow_undefined_flag=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5 $as_echo "$lt_cv_archive_cmds_need_lc" >&6; } archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc ;; esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 $as_echo_n "checking dynamic linker characteristics... 
" >&6; } if test yes = "$GCC"; then case $host_os in darwin*) lt_awk_arg='/^libraries:/,/LR/' ;; *) lt_awk_arg='/^libraries:/' ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq='s|=\([A-Za-z]:\)|\1|g' ;; *) lt_sed_strip_eq='s|=/|/|g' ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary... lt_tmp_lt_search_path_spec= lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` # ...but if some path component already ends with the multilib dir we assume # that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer). case "$lt_multi_os_dir; $lt_search_path_spec " in "/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*) lt_multi_os_dir= ;; esac for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir" elif test -n "$lt_multi_os_dir"; then test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS = " "; FS = "/|\n";} { lt_foo = ""; lt_count = 0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo = "/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[lt_foo]++; } if (lt_freq[lt_foo] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's|/\([A-Za-z]:\)|\1|g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=.so postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. 
soname_spec='$libname$release$shared_ext$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test ia64 = "$host_cpu"; then # AIX 5 supports IA64 library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line '#! .'. This would cause the generated library to # depend on '.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # Using Import Files as archive members, it is possible to support # filename-based versioning of shared library archives on AIX. While # this would work for both with and without runtime linking, it will # prevent static linking of such archives. So we do filename-based # shared library versioning with .so extension only, which is used # when both runtime linking and shared linking is enabled. # Unfortunately, runtime linking may impact performance, so we do # not want this to be the default eventually. Also, we use the # versioned .so libs for executables only if there is the -brtl # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only. # To allow for filename-based versioning support, we need to create # libNAME.so.V as an archive file, containing: # *) an Import File, referring to the versioned filename of the # archive as well as the shared archive member, telling the # bitwidth (32 or 64) of that shared object, and providing the # list of exported symbols of that shared object, eventually # decorated with the 'weak' keyword # *) the shared object with the F_LOADONLY flag set, to really avoid # it being seen by the linker. # At run time we better use the real file rather than another symlink, # but for link time we create the symlink libNAME.so -> libNAME.so.V case $with_aix_soname,$aix_use_runtimelinking in # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. aix,yes) # traditional libtool dynamic_linker='AIX unversionable lib.so' # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; aix,no) # traditional AIX only dynamic_linker='AIX lib.a(lib.so.V)' # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' ;; svr4,*) # full svr4 only dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # We do not specify a path in Import Files, so LIBPATH fires. 
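# [Illustrative sketch added by the editor; not part of the generated configure script]
# The AIX 4.0/4.1 branch above detects the old collect2 import-file bug by
# piping a preprocessor conditional through "$CC -E -" and grepping for the
# literal word "yes": the compile-time test runs without compiling or
# linking anything. The same trick in isolation (gcc assumed; the function
# name is ours):
is_gcc_2_97_or_newer () {
  { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
    echo ' yes '
    echo '#endif'
  } | gcc -E - | grep yes >/dev/null
}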
shlibpath_overrides_runpath=yes ;; *,yes) # both, prefer svr4 dynamic_linker="AIX lib.so.V($shared_archive_member_spec.o), lib.a(lib.so.V)" library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' # unpreferred sharedlib libNAME.a needs extra handling postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"' postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"' # We do not specify a path in Import Files, so LIBPATH fires. shlibpath_overrides_runpath=yes ;; *,no) # both, prefer aix dynamic_linker="AIX lib.a(lib.so.V), lib.so.V($shared_archive_member_spec.o)" library_names_spec='$libname$release.a $libname.a' soname_spec='$libname$release$shared_ext$major' # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)' postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"' ;; esac shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='$libname$shared_ext' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. 
$dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo $libname | sed -e 's/^lib/cyg/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api" ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo $libname | sed -e 's/^lib/pw/'``echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext' library_names_spec='$libname.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec=$LIB if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='$libname`echo $release | $SED -e 's/[.]/-/g'`$versuffix$shared_ext $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='$libname$release$major$shared_ext $libname$shared_ext' soname_spec='$libname$release$major$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib" sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=no sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' if test 32 = "$HPUX_IA64_MODE"; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" sys_lib_dlsearch_path_spec=/usr/lib/hpux32 else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" sys_lib_dlsearch_path_spec=/usr/lib/hpux64 fi ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
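# [Illustrative sketch added by the editor; not part of the generated configure script]
# On Darwin the shared-library extension above is deferred shell code: it is
# re-evaluated per library so that dlopen()able modules get ".so" while
# ordinary shared libraries get ".dylib". Equivalent helper (the function
# name and argument convention are ours):
darwin_shared_ext () {
  # $1 is "yes" when building a loadable module, anything else for a dylib.
  test ".$1" = .yes && echo .so || echo .dylib
}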
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test yes = "$lt_cv_prog_gnu_ld"; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff" sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; linux*android*) version_type=none # Android doesn't support versioned libraries. need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext' soname_spec='$libname$release$shared_ext' finish_cmds= shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes dynamic_linker='Android linker' # Don't embed -rpath directories since the linker doesn't support them. hardcode_libdir_flag_spec='-L$libdir' ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Ideally, we could use ldconfig to report *all* directores which are # searched for libraries, however this is still not possible. Aside from not # being certain /sbin/ldconfig is available, command # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64, # even though it is searched at run-time. Try to do the best guess by # appending ld.so.conf contents (and includes) to the search path. if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
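# [Illustrative sketch added by the editor; not part of the generated configure script]
# The cached test above links a trivial program with the -rpath flag pointing
# at a dummy directory and then asks objdump whether the binary carries
# DT_RUNPATH; if it does, LD_LIBRARY_PATH takes precedence over the embedded
# path at run time, so shlibpath_overrides_runpath is forced to yes. A manual
# re-run of the same probe, assuming gcc and GNU binutils (file names are
# placeholders):
check_dt_runpath () {
  echo 'int main(void){return 0;}' > rp_probe.c
  gcc -Wl,-rpath,/foo -o rp_probe rp_probe.c || return 1
  if objdump -p rp_probe | grep RUNPATH >/dev/null; then
    echo "linker emits DT_RUNPATH: LD_LIBRARY_PATH overrides it"
  else
    echo "linker emits DT_RPATH only: the embedded path wins"
  fi
  rm -f rp_probe rp_probe.c
}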
dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd* | bitrig*) version_type=sunos sys_lib_dlsearch_path_spec=/usr/lib need_lib_prefix=no if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then need_version=no else need_version=yes fi library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; os2*) libname_spec='$name' version_type=windows shrext_cmds=.dll need_version=no need_lib_prefix=no # OS/2 can only load a DLL with a base name of 8 characters or less. soname_spec='`test -n "$os2dllname" && libname="$os2dllname"; v=$($ECHO $release$versuffix | tr -d .-); n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _); $ECHO $n$v`$shared_ext' library_names_spec='${libname}_dll.$libext' dynamic_linker='OS/2 ld.exe' shlibpath_var=BEGINLIBPATH sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec postinstall_cmds='base_file=`basename \$file`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; $ECHO \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='$libname$release$shared_ext$major' library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test yes = "$with_gnu_ld"; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec; then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext' soname_spec='$libname$shared_ext.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=sco need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test yes = "$with_gnu_ld"; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. 
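# [Illustrative sketch added by the editor; not part of the generated configure script]
# The os2* branch a little above squeezes the DLL base name into OS/2's
# 8-character limit: the release and version suffix are stripped of dots and
# dashes, then the library name is truncated so that name plus version still
# fits in 8 bytes. The same arithmetic as a helper (function name and
# arguments are ours):
os2_dll_basename () {
  # $1 = library name (e.g. libev), $2 = release+version suffix (e.g. 4.33)
  v=`echo "$2" | tr -d .-`
  n=`echo "$1" | cut -b -$((8 - ${#v})) | tr . _`
  echo "$n$v"
}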
version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test no = "$dynamic_linker" && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test yes = "$GCC"; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec fi if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec fi # remember unaugmented sys_lib_dlsearch_path content for libtool script decls... configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec # ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH" # to be used as default LT_SYS_LIBRARY_PATH value in generated libtool configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action= if test -n "$hardcode_libdir_flag_spec" || test -n "$runpath_var" || test yes = "$hardcode_automatic"; then # We can hardcode non-existent directories. if test no != "$hardcode_direct" && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, )" && test no != "$hardcode_minus_L"; then # Linking always hardcodes the temporary library directory. hardcode_action=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. hardcode_action=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5 $as_echo "$hardcode_action" >&6; } if test relink = "$hardcode_action" || test yes = "$inherit_rpath"; then # Fast installation is not supported enable_fast_install=no elif test yes = "$shlibpath_overrides_runpath" || test no = "$enable_shared"; then # Fast installation is not necessary enable_fast_install=needless fi if test yes != "$enable_dlopen"; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen=load_add_on lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen=LoadLibrary lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen=dlopen lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... 
" >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl else lt_cv_dlopen=dyld lt_cv_dlopen_libs= lt_cv_dlopen_self=yes fi ;; tpf*) # Don't try to run any link tests for TPF. We know it's impossible # because TPF is a cross-compiler, and we know how we open DSOs. lt_cv_dlopen=dlopen lt_cv_dlopen_libs= lt_cv_dlopen_self=no ;; *) ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load" if test "x$ac_cv_func_shl_load" = xyes; then : lt_cv_dlopen=shl_load else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5 $as_echo_n "checking for shl_load in -ldld... " >&6; } if ${ac_cv_lib_dld_shl_load+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char shl_load (); int main () { return shl_load (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_shl_load=yes else ac_cv_lib_dld_shl_load=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5 $as_echo "$ac_cv_lib_dld_shl_load" >&6; } if test "x$ac_cv_lib_dld_shl_load" = xyes; then : lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld else ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : lt_cv_dlopen=dlopen else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5 $as_echo_n "checking for dlopen in -lsvld... " >&6; } if ${ac_cv_lib_svld_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lsvld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_svld_dlopen=yes else ac_cv_lib_svld_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5 $as_echo "$ac_cv_lib_svld_dlopen" >&6; } if test "x$ac_cv_lib_svld_dlopen" = xyes; then : lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5 $as_echo_n "checking for dld_link in -ldld... " >&6; } if ${ac_cv_lib_dld_dld_link+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dld_link (); int main () { return dld_link (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_dld_link=yes else ac_cv_lib_dld_dld_link=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5 $as_echo "$ac_cv_lib_dld_dld_link" >&6; } if test "x$ac_cv_lib_dld_dld_link" = xyes; then : lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld fi fi fi fi fi fi ;; esac if test no = "$lt_cv_dlopen"; then enable_dlopen=no else enable_dlopen=yes fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS=$CPPFLAGS test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS=$LDFLAGS wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS=$LIBS LIBS="$lt_cv_dlopen_libs $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5 $as_echo_n "checking whether a program can dlopen itself... 
" >&6; } if ${lt_cv_dlopen_self+:} false; then : $as_echo_n "(cached) " >&6 else if test yes = "$cross_compiling"; then : lt_cv_dlopen_self=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; esac else : # compilation failed lt_cv_dlopen_self=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5 $as_echo "$lt_cv_dlopen_self" >&6; } if test yes = "$lt_cv_dlopen_self"; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5 $as_echo_n "checking whether a statically linked program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self_static+:} false; then : $as_echo_n "(cached) " >&6 else if test yes = "$cross_compiling"; then : lt_cv_dlopen_self_static=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. 
*/ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; esac else : # compilation failed lt_cv_dlopen_self_static=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5 $as_echo "$lt_cv_dlopen_self_static" >&6; } fi CPPFLAGS=$save_CPPFLAGS LDFLAGS=$save_LDFLAGS LIBS=$save_LIBS ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi striplib= old_striplib= { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5 $as_echo_n "checking whether stripping libraries is possible... " >&6; } if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" test -z "$striplib" && striplib="$STRIP --strip-unneeded" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else # FIXME - insert some real tests, host_os isn't really good enough case $host_os in darwin*) if test -n "$STRIP"; then striplib="$STRIP -x" old_striplib="$STRIP -S" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac fi # Report what library types will actually be built { $as_echo "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5 $as_echo_n "checking if libtool supports shared libraries... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5 $as_echo "$can_build_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5 $as_echo_n "checking whether to build shared libraries... " >&6; } test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. 
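# [Illustrative sketch added by the editor; not part of the generated configure script]
# The dlopen detection above has two layers. First, AC_CHECK_LIB-style probes
# declare the symbol with a dummy prototype ("char dlopen ();") and merely
# try to link it against -ldl, -lsvld or -ldld, so no header is needed and
# the answer only says where the symbol resolves from. Second, the two
# conftest programs dlopen the running executable and encode in their exit
# status whether dlsym needs a leading underscore (1 = plain name,
# 2 = "_name", 0 = unknown or dlopen failed). The first layer, re-run by
# hand (cc assumed; file and function names are ours; the library order here
# is a simplification of the cascade above):
probe_dlopen_lib () {
  cat > dl_probe.c <<'EOF'
char dlopen ();
int main () { return dlopen (); }
EOF
  for lib in "" "-ldl" "-lsvld" "-ldld"; do
    if cc -o dl_probe dl_probe.c $lib 2>/dev/null; then
      echo "dlopen links with: ${lib:-no extra library}"
      break
    fi
  done
  rm -f dl_probe dl_probe.c
}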
case $host_os in aix3*) test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[4-9]*) if test ia64 != "$host_cpu"; then case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in yes,aix,yes) ;; # shared object as lib.so file only yes,svr4,*) ;; # shared object as lib.so archive member only yes,*) enable_static=no ;; # shared object in lib.a archive as well esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5 $as_echo "$enable_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5 $as_echo_n "checking whether to build static libraries... " >&6; } # Make sure either enable_shared or enable_static is yes. test yes = "$enable_shared" || enable_static=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5 $as_echo "$enable_static" >&6; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu CC=$lt_save_CC ac_config_commands="$ac_config_commands libtool" # Only expand once: for ac_header in sys/inotify.h sys/epoll.h sys/event.h port.h poll.h sys/timerfd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default" if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done for ac_header in sys/select.h sys/eventfd.h sys/signalfd.h linux/aio_abi.h linux/fs.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_mongrel "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default" if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done for ac_func in inotify_init epoll_ctl kqueue port_create poll select eventfd signalfd do : as_ac_var=`$as_echo "ac_cv_func_$ac_func" | $as_tr_sh` ac_fn_c_check_func "$LINENO" "$ac_func" "$as_ac_var" if eval test \"x\$"$as_ac_var"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_func" | $as_tr_cpp` 1 _ACEOF fi done for ac_func in clock_gettime do : ac_fn_c_check_func "$LINENO" "clock_gettime" "ac_cv_func_clock_gettime" if test "x$ac_cv_func_clock_gettime" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_CLOCK_GETTIME 1 _ACEOF else if test $(uname) = Linux; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for clock_gettime syscall" >&5 $as_echo_n "checking for clock_gettime syscall... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include #include #include int main () { struct timespec ts; int status = syscall (SYS_clock_gettime, CLOCK_REALTIME, &ts) ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_have_clock_syscall=1 $as_echo "#define HAVE_CLOCK_SYSCALL 1" >>confdefs.h { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi if test -z "$LIBEV_M4_AVOID_LIBRT" && test -z "$ac_have_clock_syscall"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for clock_gettime in -lrt" >&5 $as_echo_n "checking for clock_gettime in -lrt... " >&6; } if ${ac_cv_lib_rt_clock_gettime+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lrt $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char clock_gettime (); int main () { return clock_gettime (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_rt_clock_gettime=yes else ac_cv_lib_rt_clock_gettime=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_rt_clock_gettime" >&5 $as_echo "$ac_cv_lib_rt_clock_gettime" >&6; } if test "x$ac_cv_lib_rt_clock_gettime" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBRT 1 _ACEOF LIBS="-lrt $LIBS" fi unset ac_cv_func_clock_gettime for ac_func in clock_gettime do : ac_fn_c_check_func "$LINENO" "clock_gettime" "ac_cv_func_clock_gettime" if test "x$ac_cv_func_clock_gettime" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_CLOCK_GETTIME 1 _ACEOF fi done fi fi done for ac_func in nanosleep do : ac_fn_c_check_func "$LINENO" "nanosleep" "ac_cv_func_nanosleep" if test "x$ac_cv_func_nanosleep" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_NANOSLEEP 1 _ACEOF else if test -z "$LIBEV_M4_AVOID_LIBRT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for nanosleep in -lrt" >&5 $as_echo_n "checking for nanosleep in -lrt... " >&6; } if ${ac_cv_lib_rt_nanosleep+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lrt $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char nanosleep (); int main () { return nanosleep (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_rt_nanosleep=yes else ac_cv_lib_rt_nanosleep=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_rt_nanosleep" >&5 $as_echo "$ac_cv_lib_rt_nanosleep" >&6; } if test "x$ac_cv_lib_rt_nanosleep" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_LIBRT 1 _ACEOF LIBS="-lrt $LIBS" fi unset ac_cv_func_nanosleep for ac_func in nanosleep do : ac_fn_c_check_func "$LINENO" "nanosleep" "ac_cv_func_nanosleep" if test "x$ac_cv_func_nanosleep" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_NANOSLEEP 1 _ACEOF fi done fi fi done ac_fn_c_check_type "$LINENO" "__kernel_rwf_t" "ac_cv_type___kernel_rwf_t" "#include " if test "x$ac_cv_type___kernel_rwf_t" = xyes; then : $as_echo "#define HAVE_KERNEL_RWF_T 1" >>confdefs.h fi if test -z "$LIBEV_M4_AVOID_LIBM"; then LIBM=m fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for library containing floor" >&5 $as_echo_n "checking for library containing floor... " >&6; } if ${ac_cv_search_floor+:} false; then : $as_echo_n "(cached) " >&6 else ac_func_search_save_LIBS=$LIBS cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char floor (); int main () { return floor (); ; return 0; } _ACEOF for ac_lib in '' $LIBM; do if test -z "$ac_lib"; then ac_res="none required" else ac_res=-l$ac_lib LIBS="-l$ac_lib $ac_func_search_save_LIBS" fi if ac_fn_c_try_link "$LINENO"; then : ac_cv_search_floor=$ac_res fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext if ${ac_cv_search_floor+:} false; then : break fi done if ${ac_cv_search_floor+:} false; then : else ac_cv_search_floor=no fi rm conftest.$ac_ext LIBS=$ac_func_search_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_floor" >&5 $as_echo "$ac_cv_search_floor" >&6; } ac_res=$ac_cv_search_floor if test "$ac_res" != no; then : test "$ac_res" = "none required" || LIBS="$ac_res $LIBS" $as_echo "#define HAVE_FLOOR 1" >>confdefs.h fi ac_config_files="$ac_config_files Makefile" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. 
( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { $as_echo "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 $as_echo "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 $as_echo "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' DEFS=-DHAVE_CONFIG_H ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`$as_echo "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs { $as_echo "$as_me:${as_lineno-$LINENO}: checking that generated files are newer than configure" >&5 $as_echo_n "checking that generated files are newer than configure... " >&6; } if test -n "$am_sleep_pid"; then # Hide warnings about reused PIDs. wait $am_sleep_pid 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: done" >&5 $as_echo "done" >&6; } if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${MAINTAINER_MODE_TRUE}" && test -z "${MAINTAINER_MODE_FALSE}"; then as_fn_error $? "conditional \"MAINTAINER_MODE\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. 
Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 $as_echo "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! 
-f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. 
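# For illustration (the sed program is the one used by as_tr_sh, defined
# below): with the alphabet spelled out explicitly, a transliteration such as
#   sed "y%*+%pp%;s%[^_$as_cr_alnum]%_%g"
# produces the same bytes on every system, rather than relying on locale- or
# charset-dependent ranges such as a-z (which are not contiguous in EBCDIC).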
as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -pR'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -pR' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -pR' fi else as_ln_s='cp -pR' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi # as_fn_executable_p FILE # ----------------------- # Test if FILE is an executable regular file. as_fn_executable_p () { test -f "$1" && test -x "$1" } # as_fn_executable_p as_test_x='test -x' as_executable_p=as_fn_executable_p # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. ## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by libev $as_me 4.33, which was generated by GNU Autoconf 2.69. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac case $ac_config_headers in *" "*) set x $ac_config_headers; shift; ac_config_headers=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_headers="$ac_config_headers" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE --header=FILE[:TEMPLATE] instantiate the configuration header FILE Configuration files: $config_files Configuration headers: $config_headers Configuration commands: $config_commands Report bugs to the package provider." _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ libev config.status 4.33 configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" Copyright (C) 2012 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) $as_echo "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) $as_echo "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append CONFIG_HEADERS " '$ac_optarg'" ac_need_defaults=false;; --he | --h) # Conflict between --help and --header as_fn_error $? 
"ambiguous option: \`$1' Try \`$0 --help' for more information.";; --help | --hel | -h ) $as_echo "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." ;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X $SHELL '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX $as_echo "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" MAKE="${MAKE-make}" # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`' macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`' enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`' enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`' pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`' enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`' shared_archive_member_spec='`$ECHO "$shared_archive_member_spec" | $SED "$delay_single_quote_subst"`' SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`' ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`' PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`' host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`' host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`' host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`' build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`' build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`' build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`' SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`' Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`' GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`' EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`' FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`' LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`' NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`' LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`' max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`' ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`' exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`' lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`' lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`' lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`' lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`' 
lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`' reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`' reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`' OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`' deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`' file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`' file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`' want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`' DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`' sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`' AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`' AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`' archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`' STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`' RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`' old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`' old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`' old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED "$delay_single_quote_subst"`' lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`' CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`' CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`' compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`' GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_import='`$ECHO "$lt_cv_sys_global_symbol_to_import" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`' lt_cv_nm_interface='`$ECHO "$lt_cv_nm_interface" | $SED "$delay_single_quote_subst"`' nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`' lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`' lt_cv_truncate_bin='`$ECHO "$lt_cv_truncate_bin" | $SED "$delay_single_quote_subst"`' objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`' MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`' need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`' MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`' DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`' NMEDIT='`$ECHO "$NMEDIT" | $SED 
"$delay_single_quote_subst"`' LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`' OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`' OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`' libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`' shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`' extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec='`$ECHO "$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`' compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`' module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`' module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`' with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`' allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`' no_undefined_flag='`$ECHO "$no_undefined_flag" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`' hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`' hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`' hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`' inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`' link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`' always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`' export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`' exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`' include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`' prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`' postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`' file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`' variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`' need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`' need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`' version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`' runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`' 
shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`' libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`' library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`' soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`' install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`' postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`' postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`' finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`' finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`' hardcode_into_libs='`$ECHO "$hardcode_into_libs" | $SED "$delay_single_quote_subst"`' sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`' configure_time_dlsearch_path='`$ECHO "$configure_time_dlsearch_path" | $SED "$delay_single_quote_subst"`' configure_time_lt_sys_library_path='`$ECHO "$configure_time_lt_sys_library_path" | $SED "$delay_single_quote_subst"`' hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`' enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`' enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`' enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`' old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`' striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`' LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } # Quote evaled strings. for var in SHELL \ ECHO \ PATH_SEPARATOR \ SED \ GREP \ EGREP \ FGREP \ LD \ NM \ LN_S \ lt_SP2NL \ lt_NL2SP \ reload_flag \ OBJDUMP \ deplibs_check_method \ file_magic_cmd \ file_magic_glob \ want_nocaseglob \ DLLTOOL \ sharedlib_from_linklib_cmd \ AR \ AR_FLAGS \ archiver_list_spec \ STRIP \ RANLIB \ CC \ CFLAGS \ compiler \ lt_cv_sys_global_symbol_pipe \ lt_cv_sys_global_symbol_to_cdecl \ lt_cv_sys_global_symbol_to_import \ lt_cv_sys_global_symbol_to_c_name_address \ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \ lt_cv_nm_interface \ nm_file_list_spec \ lt_cv_truncate_bin \ lt_prog_compiler_no_builtin_flag \ lt_prog_compiler_pic \ lt_prog_compiler_wl \ lt_prog_compiler_static \ lt_cv_prog_compiler_c_o \ need_locks \ MANIFEST_TOOL \ DSYMUTIL \ NMEDIT \ LIPO \ OTOOL \ OTOOL64 \ shrext_cmds \ export_dynamic_flag_spec \ whole_archive_flag_spec \ compiler_needs_object \ with_gnu_ld \ allow_undefined_flag \ no_undefined_flag \ hardcode_libdir_flag_spec \ hardcode_libdir_separator \ exclude_expsyms \ include_expsyms \ file_list_spec \ variables_saved_for_relink \ libname_spec \ library_names_spec \ soname_spec \ install_override_mode \ finish_eval \ old_striplib \ striplib; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. 
for var in reload_cmds \ old_postinstall_cmds \ old_postuninstall_cmds \ old_archive_cmds \ extract_expsyms_cmds \ old_archive_from_new_cmds \ old_archive_from_expsyms_cmds \ archive_cmds \ archive_expsym_cmds \ module_cmds \ module_expsym_cmds \ export_symbols_cmds \ prelink_cmds \ postlink_cmds \ postinstall_cmds \ postuninstall_cmds \ finish_cmds \ sys_lib_search_path_spec \ configure_time_dlsearch_path \ configure_time_lt_sys_library_path; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done ac_aux_dir='$ac_aux_dir' # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi PACKAGE='$PACKAGE' VERSION='$VERSION' RM='$RM' ofile='$ofile' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. for ac_config_target in $ac_config_targets do case $ac_config_target in "config.h") CONFIG_HEADERS="$CONFIG_HEADERS config.h" ;; "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files test "${CONFIG_HEADERS+set}" = set || CONFIG_HEADERS=$config_headers test "${CONFIG_COMMANDS+set}" = set || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. 
if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" # Set up the scripts for CONFIG_HEADERS section. # No need to generate them if there are no CONFIG_HEADERS. # This happens for instance with `./config.status Makefile'. if test -n "$CONFIG_HEADERS"; then cat >"$ac_tmp/defines.awk" <<\_ACAWK || BEGIN { _ACEOF # Transform confdefs.h into an awk script `defines.awk', embedded as # here-document in config.status, that substitutes the proper values into # config.h.in to produce config.h. # Create a delimiter string that does not exist in confdefs.h, to ease # handling of long lines. ac_delim='%!_!# ' for ac_last_try in false false :; do ac_tt=`sed -n "/$ac_delim/p" confdefs.h` if test -z "$ac_tt"; then break elif $ac_last_try; then as_fn_error $? 
"could not make $CONFIG_HEADERS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done # For the awk script, D is an array of macro values keyed by name, # likewise P contains macro parameters if any. Preserve backslash # newline sequences. ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]* sed -n ' s/.\{148\}/&'"$ac_delim"'/g t rset :rset s/^[ ]*#[ ]*define[ ][ ]*/ / t def d :def s/\\$// t bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3"/p s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p d :bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3\\\\\\n"\\/p t cont s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p t cont d :cont n s/.\{148\}/&'"$ac_delim"'/g t clear :clear s/\\$// t bsnlc s/["\\]/\\&/g; s/^/"/; s/$/"/p d :bsnlc s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p b cont ' >$CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 for (key in D) D_is_set[key] = 1 FS = "" } /^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ { line = \$ 0 split(line, arg, " ") if (arg[1] == "#") { defundef = arg[2] mac1 = arg[3] } else { defundef = substr(arg[1], 2) mac1 = arg[2] } split(mac1, mac2, "(") #) macro = mac2[1] prefix = substr(line, 1, index(line, defundef) - 1) if (D_is_set[macro]) { # Preserve the white space surrounding the "#". print prefix "define", macro P[macro] D[macro] next } else { # Replace #undef with comments. This is necessary, for example, # in the case of _POSIX_SOURCE, which is predefined and required # on some systems where configure will not decide to define it. if (defundef == "undef") { print "/*", prefix defundef, macro, "*/" next } } } { print } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 as_fn_error $? "could not setup config headers machinery" "$LINENO" 5 fi # test -n "$CONFIG_HEADERS" eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`$as_echo "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` $as_echo "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. $configure_input" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 $as_echo "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. 
case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`$as_echo "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 $as_echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 $as_echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :H) # # CONFIG_HEADER # if test x"$ac_file" != x-; then { $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" } >"$ac_tmp/config.h" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then { $as_echo "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5 $as_echo "$as_me: $ac_file is unchanged" >&6;} else rm -f "$ac_file" mv "$ac_tmp/config.h" "$ac_file" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 fi else $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \ || as_fn_error $? "could not create -" "$LINENO" 5 fi # Compute "$ac_file"'s index in $config_headers. _am_arg="$ac_file" _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" || $as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$_am_arg" : 'X\(//\)[^/]' \| \ X"$_am_arg" : 'X\(//\)$' \| \ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$_am_arg" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'`/stamp-h$_am_stamp_count ;; :C) { $as_echo "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 $as_echo "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Older Autoconf quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. # TODO: see whether this extra hack can be removed once we start # requiring Autoconf 2.70 or later. case $CONFIG_FILES in #( *\'*) : eval set x "$CONFIG_FILES" ;; #( *) : set x $CONFIG_FILES ;; #( *) : ;; esac shift # Used to flag and report bootstrapping failures. 
am_rc=0 for am_mf do # Strip MF so we end up with the name of the file. am_mf=`$as_echo "$am_mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile which includes # dependency-tracking related rules and includes. # Grep'ing the whole file directly is not great: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. sed -n 's,^am--depfiles:.*,X,p' "$am_mf" | grep X >/dev/null 2>&1 \ || continue am_dirpart=`$as_dirname -- "$am_mf" || $as_expr X"$am_mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$am_mf" : 'X\(//\)[^/]' \| \ X"$am_mf" : 'X\(//\)$' \| \ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$am_mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` am_filepart=`$as_basename -- "$am_mf" || $as_expr X/"$am_mf" : '.*/\([^/][^/]*\)/*$' \| \ X"$am_mf" : 'X\(//\)$' \| \ X"$am_mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$am_mf" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` { echo "$as_me:$LINENO: cd "$am_dirpart" \ && sed -e '/# am--include-marker/d' "$am_filepart" \ | $MAKE -f - am--depfiles" >&5 (cd "$am_dirpart" \ && sed -e '/# am--include-marker/d' "$am_filepart" \ | $MAKE -f - am--depfiles) >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); } || am_rc=$? done if test $am_rc -ne 0; then { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "Something went wrong bootstrapping makefile fragments for automatic dependency tracking. Try re-running configure with the '--disable-dependency-tracking' option to at least be able to build the package (albeit without support for automatic dependency tracking). See \`config.log' for more details" "$LINENO" 5; } fi { am_dirpart=; unset am_dirpart;} { am_filepart=; unset am_filepart;} { am_mf=; unset am_mf;} { am_rc=; unset am_rc;} rm -f conftest-deps.mk } ;; "libtool":C) # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi cfgfile=${ofile}T trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # Generated automatically by $as_me ($PACKAGE) $VERSION # NOTE: Changes made to this file will be lost: look at ltmain.sh. # Provide generalized library-building support services. # Written by Gordon Matzigkeit, 1996 # Copyright (C) 2014 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of of the License, or # (at your option) any later version. # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program or library that is built # using GNU Libtool, you may include this file under the same # distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program. If not, see . # The names of the tagged configurations supported by this script. available_tags='' # Configured defaults for sys_lib_dlsearch_path munging. : \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"} # ### BEGIN LIBTOOL CONFIG # Which release of libtool.m4 was used? macro_version=$macro_version macro_revision=$macro_revision # Whether or not to build shared libraries. build_libtool_libs=$enable_shared # Whether or not to build static libraries. build_old_libs=$enable_static # What type of objects to build. pic_mode=$pic_mode # Whether or not to optimize for fast installation. fast_install=$enable_fast_install # Shared archive member basename,for filename based shared library versioning on AIX. shared_archive_member_spec=$shared_archive_member_spec # Shell to use when invoking shell scripts. SHELL=$lt_SHELL # An echo program that protects backslashes. ECHO=$lt_ECHO # The PATH separator for the build system. PATH_SEPARATOR=$lt_PATH_SEPARATOR # The host system. host_alias=$host_alias host=$host host_os=$host_os # The build system. build_alias=$build_alias build=$build build_os=$build_os # A sed program that does not truncate output. SED=$lt_SED # Sed that helps us avoid accidentally triggering echo(1) options like -n. Xsed="\$SED -e 1s/^X//" # A grep program that handles long lines. GREP=$lt_GREP # An ERE matcher. EGREP=$lt_EGREP # A literal string matcher. FGREP=$lt_FGREP # A BSD- or MS-compatible name lister. NM=$lt_NM # Whether we need soft or hard links. LN_S=$lt_LN_S # What is the maximum length of a command? max_cmd_len=$max_cmd_len # Object file suffix (normally "o"). objext=$ac_objext # Executable file suffix (normally ""). exeext=$exeext # whether the shell understands "unset". lt_unset=$lt_unset # turn spaces into newlines. SP2NL=$lt_lt_SP2NL # turn newlines into spaces. NL2SP=$lt_lt_NL2SP # convert \$build file names to \$host format. to_host_file_cmd=$lt_cv_to_host_file_cmd # convert \$build files to toolchain format. to_tool_file_cmd=$lt_cv_to_tool_file_cmd # An object symbol dumper. OBJDUMP=$lt_OBJDUMP # Method to check whether dependent libraries are shared objects. deplibs_check_method=$lt_deplibs_check_method # Command to use when deplibs_check_method = "file_magic". file_magic_cmd=$lt_file_magic_cmd # How to find potential files when deplibs_check_method = "file_magic". file_magic_glob=$lt_file_magic_glob # Find potential files using nocaseglob when deplibs_check_method = "file_magic". want_nocaseglob=$lt_want_nocaseglob # DLL creation program. DLLTOOL=$lt_DLLTOOL # Command to associate shared and link libraries. sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd # The archiver. AR=$lt_AR # Flags to create an archive. AR_FLAGS=$lt_AR_FLAGS # How to feed a file listing to the archiver. archiver_list_spec=$lt_archiver_list_spec # A symbol stripping program. STRIP=$lt_STRIP # Commands used to install an old-style archive. RANLIB=$lt_RANLIB old_postinstall_cmds=$lt_old_postinstall_cmds old_postuninstall_cmds=$lt_old_postuninstall_cmds # Whether to use a lock for old archive extraction. lock_old_archive_extraction=$lock_old_archive_extraction # A C compiler. LTCC=$lt_CC # LTCC compiler flags. LTCFLAGS=$lt_CFLAGS # Take the output of nm and produce a listing of raw symbols and C names. global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe # Transform the output of nm in a proper C declaration. 
global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl # Transform the output of nm into a list of symbols to manually relocate. global_symbol_to_import=$lt_lt_cv_sys_global_symbol_to_import # Transform the output of nm in a C name address pair. global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address # Transform the output of nm in a C name address pair when lib prefix is needed. global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix # The name lister interface. nm_interface=$lt_lt_cv_nm_interface # Specify filename containing input files for \$NM. nm_file_list_spec=$lt_nm_file_list_spec # The root where to search for dependent libraries,and where our libraries should be installed. lt_sysroot=$lt_sysroot # Command to truncate a binary pipe. lt_truncate_bin=$lt_lt_cv_truncate_bin # The name of the directory that contains temporary libtool files. objdir=$objdir # Used to examine libraries when file_magic_cmd begins with "file". MAGIC_CMD=$MAGIC_CMD # Must we lock files when doing compilation? need_locks=$lt_need_locks # Manifest tool. MANIFEST_TOOL=$lt_MANIFEST_TOOL # Tool to manipulate archived DWARF debug symbol files on Mac OS X. DSYMUTIL=$lt_DSYMUTIL # Tool to change global to local symbols on Mac OS X. NMEDIT=$lt_NMEDIT # Tool to manipulate fat objects and archives on Mac OS X. LIPO=$lt_LIPO # ldd/readelf like tool for Mach-O binaries on Mac OS X. OTOOL=$lt_OTOOL # ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4. OTOOL64=$lt_OTOOL64 # Old archive suffix (normally "a"). libext=$libext # Shared library suffix (normally ".so"). shrext_cmds=$lt_shrext_cmds # The commands to extract the exported symbol list from a shared archive. extract_expsyms_cmds=$lt_extract_expsyms_cmds # Variables whose values should be saved in libtool wrapper scripts and # restored at link time. variables_saved_for_relink=$lt_variables_saved_for_relink # Do we need the "lib" prefix for modules? need_lib_prefix=$need_lib_prefix # Do we need a version for libraries? need_version=$need_version # Library versioning type. version_type=$version_type # Shared library runtime path variable. runpath_var=$runpath_var # Shared library path variable. shlibpath_var=$shlibpath_var # Is shlibpath searched before the hard-coded library search path? shlibpath_overrides_runpath=$shlibpath_overrides_runpath # Format of library name prefix. libname_spec=$lt_libname_spec # List of archive names. First name is the real one, the rest are links. # The last name is the one that the linker finds with -lNAME library_names_spec=$lt_library_names_spec # The coded name of the library, if different from the real name. soname_spec=$lt_soname_spec # Permission mode override for installation of shared libraries. install_override_mode=$lt_install_override_mode # Command to use after installation of a shared archive. postinstall_cmds=$lt_postinstall_cmds # Command to use after uninstallation of a shared archive. postuninstall_cmds=$lt_postuninstall_cmds # Commands used to finish a libtool library installation in a directory. finish_cmds=$lt_finish_cmds # As "finish_cmds", except a single script fragment to be evaled but # not shown. finish_eval=$lt_finish_eval # Whether we should hardcode library paths into libraries. hardcode_into_libs=$hardcode_into_libs # Compile-time system search path for libraries. sys_lib_search_path_spec=$lt_sys_lib_search_path_spec # Detected run-time system search path for libraries. 
sys_lib_dlsearch_path_spec=$lt_configure_time_dlsearch_path # Explicit LT_SYS_LIBRARY_PATH set during ./configure time. configure_time_lt_sys_library_path=$lt_configure_time_lt_sys_library_path # Whether dlopen is supported. dlopen_support=$enable_dlopen # Whether dlopen of programs is supported. dlopen_self=$enable_dlopen_self # Whether dlopen of statically linked programs is supported. dlopen_self_static=$enable_dlopen_self_static # Commands to strip libraries. old_striplib=$lt_old_striplib striplib=$lt_striplib # The linker used to build libraries. LD=$lt_LD # How to create reloadable object files. reload_flag=$lt_reload_flag reload_cmds=$lt_reload_cmds # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds # A language specific compiler. CC=$lt_compiler # Is the compiler the GNU compiler? with_gcc=$GCC # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds archive_expsym_cmds=$lt_archive_expsym_cmds # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds module_expsym_cmds=$lt_module_expsym_cmds # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct # Set to "yes" if using DIR/libNAME\$shared_ext during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \$shlibpath_var if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. 
hardcode_minus_L=$hardcode_minus_L # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms # Symbols that must always be exported. include_expsyms=$lt_include_expsyms # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds # Specify filename containing input files. file_list_spec=$lt_file_list_spec # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action # ### END LIBTOOL CONFIG _LT_EOF cat <<'_LT_EOF' >> "$cfgfile" # ### BEGIN FUNCTIONS SHARED WITH CONFIGURE # func_munge_path_list VARIABLE PATH # ----------------------------------- # VARIABLE is name of variable containing _space_ separated list of # directories to be munged by the contents of PATH, which is string # having a format: # "DIR[:DIR]:" # string "DIR[ DIR]" will be prepended to VARIABLE # ":DIR[:DIR]" # string "DIR[ DIR]" will be appended to VARIABLE # "DIRP[:DIRP]::[DIRA:]DIRA" # string "DIRP[ DIRP]" will be prepended to VARIABLE and string # "DIRA[ DIRA]" will be appended to VARIABLE # "DIR[:DIR]" # VARIABLE will be replaced by "DIR[ DIR]" func_munge_path_list () { case x$2 in x) ;; *:) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'` \$$1\" ;; x:*) eval $1=\"\$$1 `$ECHO $2 | $SED 's/:/ /g'`\" ;; *::*) eval $1=\"\$$1\ `$ECHO $2 | $SED -e 's/.*:://' -e 's/:/ /g'`\" eval $1=\"`$ECHO $2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \$$1\" ;; *) eval $1=\"`$ECHO $2 | $SED 's/:/ /g'`\" ;; esac } # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. func_cc_basename () { for cc_temp in $*""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` } # ### END FUNCTIONS SHARED WITH CONFIGURE _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac ltmain=$ac_aux_dir/ltmain.sh # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? 
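# (sed with the script '$q' quits after printing the last line, so with
# auto-print on it copies the whole of ltmain.sh; it is used here purely
# as a text-mode 'cat' for the DJGPP reason described above.)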
sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? "write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. $ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi gevent-24.11.1/deps/libev/depcomp000077500000000000000000000560201471441230600165660ustar00rootroot00000000000000#! /bin/sh # depcomp - compile a program generating dependencies as side-effects scriptversion=2018-03-07.03; # UTC # Copyright (C) 1999-2018 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Alexandre Oliva . case $1 in '') echo "$0: No command. Try '$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: depcomp [--help] [--version] PROGRAM [ARGS] Run PROGRAMS ARGS to compile a file, generating dependencies as side-effects. Environment variables: depmode Dependency tracking mode. source Source file read by 'PROGRAMS ARGS'. object Object file output by 'PROGRAMS ARGS'. DEPDIR directory where to store dependencies. depfile Dependency file to output. tmpdepfile Temporary file to use when outputting dependencies. libtool Whether libtool is used (yes/no). Report bugs to . EOF exit $? ;; -v | --v*) echo "depcomp $scriptversion" exit $? ;; esac # Get the directory component of the given path, and save it in the # global variables '$dir'. 
Note that this directory component will # be either empty or ending with a '/' character. This is deliberate. set_dir_from () { case $1 in */*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;; *) dir=;; esac } # Get the suffix-stripped basename of the given path, and save it the # global variable '$base'. set_base_from () { base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'` } # If no dependency file was actually created by the compiler invocation, # we still have to create a dummy depfile, to avoid errors with the # Makefile "include basename.Plo" scheme. make_dummy_depfile () { echo "#dummy" > "$depfile" } # Factor out some common post-processing of the generated depfile. # Requires the auxiliary global variable '$tmpdepfile' to be set. aix_post_process_depfile () { # If the compiler actually managed to produce a dependency file, # post-process it. if test -f "$tmpdepfile"; then # Each line is of the form 'foo.o: dependency.h'. # Do two passes, one to just change these to # $object: dependency.h # and one to simply output # dependency.h: # which is needed to avoid the deleted-header problem. { sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile" sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile" } > "$depfile" rm -f "$tmpdepfile" else make_dummy_depfile fi } # A tabulation character. tab=' ' # A newline character. nl=' ' # Character ranges might be problematic outside the C locale. # These definitions help. upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ lower=abcdefghijklmnopqrstuvwxyz digits=0123456789 alpha=${upper}${lower} if test -z "$depmode" || test -z "$source" || test -z "$object"; then echo "depcomp: Variables source, object and depmode must be set" 1>&2 exit 1 fi # Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po. depfile=${depfile-`echo "$object" | sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`} tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`} rm -f "$tmpdepfile" # Avoid interferences from the environment. gccflag= dashmflag= # Some modes work just like other modes, but use different flags. We # parameterize here, but still list the modes in the big case below, # to make depend.m4 easier to write. Note that we *cannot* use a case # here, because this file can only contain one case statement. if test "$depmode" = hp; then # HP compiler uses -M and no extra arg. gccflag=-M depmode=gcc fi if test "$depmode" = dashXmstdout; then # This is just like dashmstdout with a different argument. dashmflag=-xM depmode=dashmstdout fi cygpath_u="cygpath -u -f -" if test "$depmode" = msvcmsys; then # This is just like msvisualcpp but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvisualcpp fi if test "$depmode" = msvc7msys; then # This is just like msvc7 but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvc7 fi if test "$depmode" = xlc; then # IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information. gccflag=-qmakedep=gcc,-MF depmode=gcc fi case "$depmode" in gcc3) ## gcc 3 implements dependency tracking that does exactly what ## we want. Yay! Note: for some reason libtool 1.4 doesn't like ## it if -MD -MP comes after the -MF stuff. Hmm. ## Unfortunately, FreeBSD c89 acceptance of flags depends upon ## the command line argument order; so add the flags where they ## appear in depend2.am. 
Note that the slowdown incurred here ## affects only configure: in makefiles, %FASTDEP% shortcuts this. for arg do case $arg in -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;; *) set fnord "$@" "$arg" ;; esac shift # fnord shift # $arg done "$@" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi mv "$tmpdepfile" "$depfile" ;; gcc) ## Note that this doesn't just cater to obsosete pre-3.x GCC compilers. ## but also to in-use compilers like IMB xlc/xlC and the HP C compiler. ## (see the conditional assignment to $gccflag above). ## There are various ways to get dependency output from gcc. Here's ## why we pick this rather obscure method: ## - Don't want to use -MD because we'd like the dependencies to end ## up in a subdir. Having to rename by hand is ugly. ## (We might end up doing this anyway to support other compilers.) ## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like ## -MM, not -M (despite what the docs say). Also, it might not be ## supported by the other compilers which use the 'gcc' depmode. ## - Using -M directly means running the compiler twice (even worse ## than renaming). if test -z "$gccflag"; then gccflag=-MD, fi "$@" -Wp,"$gccflag$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The second -e expression handles DOS-style file names with drive # letters. sed -e 's/^[^:]*: / /' \ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile" ## This next piece of magic avoids the "deleted header file" problem. ## The problem is that when a header file which appears in a .P file ## is deleted, the dependency causes make to die (because there is ## typically no way to rebuild the header). We avoid this by adding ## dummy dependencies for each header file. Too bad gcc doesn't do ## this for us directly. ## Some versions of gcc put a space before the ':'. On the theory ## that the space means something, we add a space to the output as ## well. hp depmode also adds that space, but also prefixes the VPATH ## to the object. Take care to not repeat it in the output. ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; sgi) if test "$libtool" = yes; then "$@" "-Wp,-MDupdate,$tmpdepfile" else "$@" -MDupdate "$tmpdepfile" fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files echo "$object : \\" > "$depfile" # Clip off the initial element (the dependent). Don't try to be # clever and replace this with sed code, as IRIX sed won't handle # lines with more than a fixed number of characters (4096 in # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines; # the IRIX cc adds comments like '#:fec' to the end of the # dependency line. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \ | tr "$nl" ' ' >> "$depfile" echo >> "$depfile" # The second pass generates a dummy entry for each header file. 
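    # (As in the gcc case above, these trailing 'dependency.h:' dummy
    # targets keep make from dying when a header listed in the depfile
    # is later deleted.)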
tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \ >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" ;; xlc) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; aix) # The C for AIX Compiler uses -M and outputs the dependencies # in a .u file. In older versions, this file always lives in the # current directory. Also, the AIX compiler puts '$object:' at the # start of each line; $object doesn't have directory information. # Version 6 uses the directory in both cases. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.u tmpdepfile2=$base.u tmpdepfile3=$dir.libs/$base.u "$@" -Wc,-M else tmpdepfile1=$dir$base.u tmpdepfile2=$dir$base.u tmpdepfile3=$dir$base.u "$@" -M fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done aix_post_process_depfile ;; tcc) # tcc (Tiny C Compiler) understand '-MD -MF file' since version 0.9.26 # FIXME: That version still under development at the moment of writing. # Make that this statement remains true also for stable, released # versions. # It will wrap lines (doesn't matter whether long or short) with a # trailing '\', as in: # # foo.o : \ # foo.c \ # foo.h \ # # It will put a trailing '\' even on the last line, and will use leading # spaces rather than leading tabs (at least since its commit 0394caf7 # "Emit spaces for -MD"). "$@" -MD -MF "$tmpdepfile" stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each non-empty line is of the form 'foo.o : \' or ' dep.h \'. # We have to change lines of the first kind to '$object: \'. sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile" # And for each line of the second kind, we have to emit a 'dep.h:' # dummy dependency, to avoid the deleted-header problem. sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile" rm -f "$tmpdepfile" ;; ## The order of this option in the case statement is important, since the ## shell code in configure will try each of these formats in the order ## listed in this file. A plain '-MD' option would be understood by many ## compilers, so we must ensure this comes after the gcc and icc options. pgcc) # Portland's C compiler understands '-MD'. # Will always output deps to 'file.d' where file is the root name of the # source file under compilation, even if file resides in a subdirectory. # The object file name does not affect the name of the '.d' file. # pgcc 10.2 will output # foo.o: sub/foo.c sub/foo.h # and will wrap long lines using '\' : # foo.o: sub/foo.c ... \ # sub/foo.h ... \ # ... set_dir_from "$object" # Use the source, not the object, to determine the base name, since # that's sadly what pgcc will do too. set_base_from "$source" tmpdepfile=$base.d # For projects that build the same source file twice into different object # files, the pgcc approach of using the *source* file root name can cause # problems in parallel builds. Use a locking strategy to avoid stomping on # the same $tmpdepfile. lockdir=$base.d-lock trap " echo '$0: caught signal, cleaning up...' >&2 rmdir '$lockdir' exit 1 " 1 2 13 15 numtries=100 i=$numtries while test $i -gt 0; do # mkdir is a portable test-and-set. if mkdir "$lockdir" 2>/dev/null; then # This process acquired the lock. "$@" -MD stat=$? 
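      # (The compiler exit status saved in $stat is only examined after
      # the lock has been released and the retry loop below has exited.)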
# Release the lock. rmdir "$lockdir" break else # If the lock is being held by a different process, wait # until the winning process is done or we timeout. while test -d "$lockdir" && test $i -gt 0; do sleep 1 i=`expr $i - 1` done fi i=`expr $i - 1` done trap - 1 2 13 15 if test $i -le 0; then echo "$0: failed to acquire lock after $numtries attempts" >&2 echo "$0: check lockdir '$lockdir'" >&2 exit 1 fi if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each line is of the form `foo.o: dependent.h', # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this invocation # correctly. Breaking it into two sed invocations is a workaround. sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp2) # The "hp" stanza above does not work with aCC (C++) and HP's ia64 # compilers, which have integrated preprocessors. The correct option # to use with these is +Maked; it writes dependencies to a file named # 'foo.d', which lands next to the object file, wherever that # happens to be. # Much of this is similar to the tru64 case; see comments there. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then tmpdepfile1=$dir$base.d tmpdepfile2=$dir.libs/$base.d "$@" -Wc,+Maked else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d "$@" +Maked fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile" # Add 'dependent.h:' lines. sed -ne '2,${ s/^ *// s/ \\*$// s/$/:/ p }' "$tmpdepfile" >> "$depfile" else make_dummy_depfile fi rm -f "$tmpdepfile" "$tmpdepfile2" ;; tru64) # The Tru64 compiler uses -MD to generate dependencies as a side # effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'. # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put # dependencies in 'foo.d' instead, so we check for that too. # Subdirectories are respected. set_dir_from "$object" set_base_from "$object" if test "$libtool" = yes; then # Libtool generates 2 separate objects for the 2 libraries. These # two compilations output dependencies in $dir.libs/$base.o.d and # in $dir$base.o.d. We have to check for both files, because # one of the two compilations can be disabled. We should prefer # $dir$base.o.d over $dir.libs/$base.o.d because the latter is # automatically cleaned when .libs/ is deleted, while ignoring # the former would cause a distcleancheck panic. tmpdepfile1=$dir$base.o.d # libtool 1.5 tmpdepfile2=$dir.libs/$base.o.d # Likewise. tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504 "$@" -Wc,-MD else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d tmpdepfile3=$dir$base.d "$@" -MD fi stat=$? if test $stat -ne 0; then rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done # Same post-processing that is required for AIX mode. aix_post_process_depfile ;; msvc7) if test "$libtool" = yes; then showIncludes=-Wc,-showIncludes else showIncludes=-showIncludes fi "$@" $showIncludes > "$tmpdepfile" stat=$? 
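  # Echo the captured compiler output back to stdout, minus the
  # '-showIncludes' notes, so warnings and errors still reach the user;
  # the include list itself is extracted into the depfile below.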
grep -v '^Note: including file: ' "$tmpdepfile" if test $stat -ne 0; then rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The first sed program below extracts the file names and escapes # backslashes for cygpath. The second sed program outputs the file # name when reading, but also accumulates all include files in the # hold buffer in order to output them again at the end. This only # works with sed implementations that can handle large buffers. sed < "$tmpdepfile" -n ' /^Note: including file: *\(.*\)/ { s//\1/ s/\\/\\\\/g p }' | $cygpath_u | sort -u | sed -n ' s/ /\\ /g s/\(.*\)/'"$tab"'\1 \\/p s/.\(.*\) \\/\1:/ H $ { s/.*/'"$tab"'/ G p }' >> "$depfile" echo >> "$depfile" # make sure the fragment doesn't end with a backslash rm -f "$tmpdepfile" ;; msvc7msys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; #nosideeffect) # This comment above is used by automake to tell side-effect # dependency tracking mechanisms from slower ones. dashmstdout) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout, regardless of -o. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done test -z "$dashmflag" && dashmflag=-M # Require at least two characters before searching for ':' # in the target name. This is to cope with DOS-style filenames: # a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise. "$@" $dashmflag | sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile" rm -f "$depfile" cat < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this sed invocation # correctly. Breaking it into two sed invocations is a workaround. tr ' ' "$nl" < "$tmpdepfile" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; dashXmstdout) # This case only exists to satisfy depend.m4. It is never actually # run, as this mode is specially recognized in the preamble. exit 1 ;; makedepend) "$@" || exit $? # Remove any Libtool call if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # X makedepend shift cleared=no eat=no for arg do case $cleared in no) set ""; shift cleared=yes ;; esac if test $eat = yes; then eat=no continue fi case "$arg" in -D*|-I*) set fnord "$@" "$arg"; shift ;; # Strip any option that makedepend may not understand. Remove # the object too, otherwise makedepend will parse it as a source file. -arch) eat=yes ;; -*|$object) ;; *) set fnord "$@" "$arg"; shift ;; esac done obj_suffix=`echo "$object" | sed 's/^.*\././'` touch "$tmpdepfile" ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@" rm -f "$depfile" # makedepend may prepend the VPATH from the source file name to the object. # No need to regex-escape $object, excess matching of '.' is harmless. sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process the last invocation # correctly. Breaking it into two sed invocations is a workaround. 
sed '1,2d' "$tmpdepfile" \ | tr ' ' "$nl" \ | sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" "$tmpdepfile".bak ;; cpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove '-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done "$@" -E \ | sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ | sed '$ s: \\$::' > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" cat < "$tmpdepfile" >> "$depfile" sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; msvisualcpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi IFS=" " for arg do case "$arg" in -o) shift ;; $object) shift ;; "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI") set fnord "$@" shift shift ;; *) set fnord "$@" "$arg" shift shift ;; esac done "$@" -E 2>/dev/null | sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile" echo "$tab" >> "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile" rm -f "$tmpdepfile" ;; msvcmsys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; none) exec "$@" ;; *) echo "Unknown depmode $depmode" 1>&2 exit 1 ;; esac exit 0 # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/libev/ev++.h000066400000000000000000000501051471441230600161200ustar00rootroot00000000000000/* * libev simple C++ wrapper classes * * Copyright (c) 2007,2008,2010,2018,2020 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #ifndef EVPP_H__ #define EVPP_H__ #ifdef EV_H # include EV_H #else # include "ev.h" #endif #ifndef EV_USE_STDEXCEPT # define EV_USE_STDEXCEPT 1 #endif #if EV_USE_STDEXCEPT # include #endif namespace ev { typedef ev_tstamp tstamp; enum { UNDEF = EV_UNDEF, NONE = EV_NONE, READ = EV_READ, WRITE = EV_WRITE, #if EV_COMPAT3 TIMEOUT = EV_TIMEOUT, #endif TIMER = EV_TIMER, PERIODIC = EV_PERIODIC, SIGNAL = EV_SIGNAL, CHILD = EV_CHILD, STAT = EV_STAT, IDLE = EV_IDLE, CHECK = EV_CHECK, PREPARE = EV_PREPARE, FORK = EV_FORK, ASYNC = EV_ASYNC, EMBED = EV_EMBED, # undef ERROR // some systems stupidly #define ERROR ERROR = EV_ERROR }; enum { AUTO = EVFLAG_AUTO, NOENV = EVFLAG_NOENV, FORKCHECK = EVFLAG_FORKCHECK, SELECT = EVBACKEND_SELECT, POLL = EVBACKEND_POLL, EPOLL = EVBACKEND_EPOLL, KQUEUE = EVBACKEND_KQUEUE, DEVPOLL = EVBACKEND_DEVPOLL, PORT = EVBACKEND_PORT }; enum { #if EV_COMPAT3 NONBLOCK = EVLOOP_NONBLOCK, ONESHOT = EVLOOP_ONESHOT, #endif NOWAIT = EVRUN_NOWAIT, ONCE = EVRUN_ONCE }; enum how_t { ONE = EVBREAK_ONE, ALL = EVBREAK_ALL }; struct bad_loop #if EV_USE_STDEXCEPT : std::exception #endif { #if EV_USE_STDEXCEPT const char *what () const EV_NOEXCEPT { return "libev event loop cannot be initialized, bad value of LIBEV_FLAGS?"; } #endif }; #ifdef EV_AX # undef EV_AX #endif #ifdef EV_AX_ # undef EV_AX_ #endif #if EV_MULTIPLICITY # define EV_AX raw_loop # define EV_AX_ raw_loop, #else # define EV_AX # define EV_AX_ #endif struct loop_ref { loop_ref (EV_P) EV_NOEXCEPT #if EV_MULTIPLICITY : EV_AX (EV_A) #endif { } bool operator == (const loop_ref &other) const EV_NOEXCEPT { #if EV_MULTIPLICITY return EV_AX == other.EV_AX; #else return true; #endif } bool operator != (const loop_ref &other) const EV_NOEXCEPT { #if EV_MULTIPLICITY return ! (*this == other); #else return false; #endif } #if EV_MULTIPLICITY bool operator == (const EV_P) const EV_NOEXCEPT { return this->EV_AX == EV_A; } bool operator != (const EV_P) const EV_NOEXCEPT { return ! 
(*this == EV_A); } operator struct ev_loop * () const EV_NOEXCEPT { return EV_AX; } operator const struct ev_loop * () const EV_NOEXCEPT { return EV_AX; } bool is_default () const EV_NOEXCEPT { return EV_AX == ev_default_loop (0); } #endif #if EV_COMPAT3 void loop (int flags = 0) { ev_run (EV_AX_ flags); } void unloop (how_t how = ONE) EV_NOEXCEPT { ev_break (EV_AX_ how); } #endif void run (int flags = 0) { ev_run (EV_AX_ flags); } void break_loop (how_t how = ONE) EV_NOEXCEPT { ev_break (EV_AX_ how); } void post_fork () EV_NOEXCEPT { ev_loop_fork (EV_AX); } unsigned int backend () const EV_NOEXCEPT { return ev_backend (EV_AX); } tstamp now () const EV_NOEXCEPT { return ev_now (EV_AX); } void ref () EV_NOEXCEPT { ev_ref (EV_AX); } void unref () EV_NOEXCEPT { ev_unref (EV_AX); } #if EV_FEATURE_API unsigned int iteration () const EV_NOEXCEPT { return ev_iteration (EV_AX); } unsigned int depth () const EV_NOEXCEPT { return ev_depth (EV_AX); } void set_io_collect_interval (tstamp interval) EV_NOEXCEPT { ev_set_io_collect_interval (EV_AX_ interval); } void set_timeout_collect_interval (tstamp interval) EV_NOEXCEPT { ev_set_timeout_collect_interval (EV_AX_ interval); } #endif // function callback void once (int fd, int events, tstamp timeout, void (*cb)(int, void *), void *arg = 0) EV_NOEXCEPT { ev_once (EV_AX_ fd, events, timeout, cb, arg); } // method callback template void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT { once (fd, events, timeout, method_thunk, object); } // default method == operator () template void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT { once (fd, events, timeout, method_thunk, object); } template static void method_thunk (int revents, void *arg) { (static_cast(arg)->*method) (revents); } // no-argument method callback template void once (int fd, int events, tstamp timeout, K *object) EV_NOEXCEPT { once (fd, events, timeout, method_noargs_thunk, object); } template static void method_noargs_thunk (int revents, void *arg) { (static_cast(arg)->*method) (); } // simpler function callback template void once (int fd, int events, tstamp timeout) EV_NOEXCEPT { once (fd, events, timeout, simpler_func_thunk); } template static void simpler_func_thunk (int revents, void *arg) { (*cb) (revents); } // simplest function callback template void once (int fd, int events, tstamp timeout) EV_NOEXCEPT { once (fd, events, timeout, simplest_func_thunk); } template static void simplest_func_thunk (int revents, void *arg) { (*cb) (); } void feed_fd_event (int fd, int revents) EV_NOEXCEPT { ev_feed_fd_event (EV_AX_ fd, revents); } void feed_signal_event (int signum) EV_NOEXCEPT { ev_feed_signal_event (EV_AX_ signum); } #if EV_MULTIPLICITY struct ev_loop* EV_AX; #endif }; #if EV_MULTIPLICITY struct dynamic_loop : loop_ref { dynamic_loop (unsigned int flags = AUTO) : loop_ref (ev_loop_new (flags)) { if (!EV_AX) throw bad_loop (); } ~dynamic_loop () EV_NOEXCEPT { ev_loop_destroy (EV_AX); EV_AX = 0; } private: dynamic_loop (const dynamic_loop &); dynamic_loop & operator= (const dynamic_loop &); }; #endif struct default_loop : loop_ref { default_loop (unsigned int flags = AUTO) #if EV_MULTIPLICITY : loop_ref (ev_default_loop (flags)) #endif { if ( #if EV_MULTIPLICITY !EV_AX #else !ev_default_loop (flags) #endif ) throw bad_loop (); } private: default_loop (const default_loop &); default_loop &operator = (const default_loop &); }; inline loop_ref get_default_loop () EV_NOEXCEPT { #if EV_MULTIPLICITY return ev_default_loop (0); #else return loop_ref (); 
#endif } #undef EV_AX #undef EV_AX_ #undef EV_PX #undef EV_PX_ #if EV_MULTIPLICITY # define EV_PX loop_ref EV_A # define EV_PX_ loop_ref EV_A_ #else # define EV_PX # define EV_PX_ #endif template struct base : ev_watcher { // scoped pause/unpause of a watcher struct freeze_guard { watcher &w; bool active; freeze_guard (watcher *self) EV_NOEXCEPT : w (*self), active (w.is_active ()) { if (active) w.stop (); } ~freeze_guard () { if (active) w.start (); } }; #if EV_MULTIPLICITY EV_PX; // loop set void set (EV_P) EV_NOEXCEPT { this->EV_A = EV_A; } #endif base (EV_PX) EV_NOEXCEPT #if EV_MULTIPLICITY : EV_A (EV_A) #endif { ev_init (this, 0); } void set_ (const void *data, void (*cb)(EV_P_ ev_watcher *w, int revents)) EV_NOEXCEPT { this->data = (void *)data; ev_set_cb (static_cast(this), cb); } // function callback template void set (void *data = 0) EV_NOEXCEPT { set_ (data, function_thunk); } template static void function_thunk (EV_P_ ev_watcher *w, int revents) { function (*static_cast(w), revents); } // method callback template void set (K *object) EV_NOEXCEPT { set_ (object, method_thunk); } // default method == operator () template void set (K *object) EV_NOEXCEPT { set_ (object, method_thunk); } template static void method_thunk (EV_P_ ev_watcher *w, int revents) { (static_cast(w->data)->*method) (*static_cast(w), revents); } // no-argument callback template void set (K *object) EV_NOEXCEPT { set_ (object, method_noargs_thunk); } template static void method_noargs_thunk (EV_P_ ev_watcher *w, int revents) { (static_cast(w->data)->*method) (); } void operator ()(int events = EV_UNDEF) { return ev_cb (static_cast(this)) (static_cast(this), events); } bool is_active () const EV_NOEXCEPT { return ev_is_active (static_cast(this)); } bool is_pending () const EV_NOEXCEPT { return ev_is_pending (static_cast(this)); } void feed_event (int revents) EV_NOEXCEPT { ev_feed_event (EV_A_ static_cast(this), revents); } }; inline tstamp now (EV_P) EV_NOEXCEPT { return ev_now (EV_A); } inline void delay (tstamp interval) EV_NOEXCEPT { ev_sleep (interval); } inline int version_major () EV_NOEXCEPT { return ev_version_major (); } inline int version_minor () EV_NOEXCEPT { return ev_version_minor (); } inline unsigned int supported_backends () EV_NOEXCEPT { return ev_supported_backends (); } inline unsigned int recommended_backends () EV_NOEXCEPT { return ev_recommended_backends (); } inline unsigned int embeddable_backends () EV_NOEXCEPT { return ev_embeddable_backends (); } inline void set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT { ev_set_allocator (cb); } inline void set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT { ev_set_syserr_cb (cb); } #if EV_MULTIPLICITY #define EV_CONSTRUCT(cppstem,cstem) \ (EV_PX = get_default_loop ()) EV_NOEXCEPT \ : base (EV_A) \ { \ } #else #define EV_CONSTRUCT(cppstem,cstem) \ () EV_NOEXCEPT \ { \ } #endif /* using a template here would require quite a few more lines, * so a macro solution was chosen */ #define EV_BEGIN_WATCHER(cppstem,cstem) \ \ struct cppstem : base \ { \ void start () EV_NOEXCEPT \ { \ ev_ ## cstem ## _start (EV_A_ static_cast(this)); \ } \ \ void stop () EV_NOEXCEPT \ { \ ev_ ## cstem ## _stop (EV_A_ static_cast(this)); \ } \ \ cppstem EV_CONSTRUCT(cppstem,cstem) \ \ ~cppstem () EV_NOEXCEPT \ { \ stop (); \ } \ \ using base::set; \ \ private: \ \ cppstem (const cppstem &o); \ \ cppstem &operator =(const cppstem &o); \ \ public: #define EV_END_WATCHER(cppstem,cstem) \ }; EV_BEGIN_WATCHER (io, io) void set (int fd, 
int events) EV_NOEXCEPT { freeze_guard freeze (this); ev_io_set (static_cast(this), fd, events); } void set (int events) EV_NOEXCEPT { freeze_guard freeze (this); ev_io_modify (static_cast(this), events); } void start (int fd, int events) EV_NOEXCEPT { set (fd, events); start (); } EV_END_WATCHER (io, io) EV_BEGIN_WATCHER (timer, timer) void set (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT { freeze_guard freeze (this); ev_timer_set (static_cast(this), after, repeat); } void start (ev_tstamp after, ev_tstamp repeat = 0.) EV_NOEXCEPT { set (after, repeat); start (); } void again () EV_NOEXCEPT { ev_timer_again (EV_A_ static_cast(this)); } ev_tstamp remaining () { return ev_timer_remaining (EV_A_ static_cast(this)); } EV_END_WATCHER (timer, timer) #if EV_PERIODIC_ENABLE EV_BEGIN_WATCHER (periodic, periodic) void set (ev_tstamp at, ev_tstamp interval = 0.) EV_NOEXCEPT { freeze_guard freeze (this); ev_periodic_set (static_cast(this), at, interval, 0); } void start (ev_tstamp at, ev_tstamp interval = 0.) EV_NOEXCEPT { set (at, interval); start (); } void again () EV_NOEXCEPT { ev_periodic_again (EV_A_ static_cast(this)); } EV_END_WATCHER (periodic, periodic) #endif #if EV_SIGNAL_ENABLE EV_BEGIN_WATCHER (sig, signal) void set (int signum) EV_NOEXCEPT { freeze_guard freeze (this); ev_signal_set (static_cast(this), signum); } void start (int signum) EV_NOEXCEPT { set (signum); start (); } EV_END_WATCHER (sig, signal) #endif #if EV_CHILD_ENABLE EV_BEGIN_WATCHER (child, child) void set (int pid, int trace = 0) EV_NOEXCEPT { freeze_guard freeze (this); ev_child_set (static_cast(this), pid, trace); } void start (int pid, int trace = 0) EV_NOEXCEPT { set (pid, trace); start (); } EV_END_WATCHER (child, child) #endif #if EV_STAT_ENABLE EV_BEGIN_WATCHER (stat, stat) void set (const char *path, ev_tstamp interval = 0.) EV_NOEXCEPT { freeze_guard freeze (this); ev_stat_set (static_cast(this), path, interval); } void start (const char *path, ev_tstamp interval = 0.) 
EV_NOEXCEPT { stop (); set (path, interval); start (); } void update () EV_NOEXCEPT { ev_stat_stat (EV_A_ static_cast(this)); } EV_END_WATCHER (stat, stat) #endif #if EV_IDLE_ENABLE EV_BEGIN_WATCHER (idle, idle) void set () EV_NOEXCEPT { } EV_END_WATCHER (idle, idle) #endif #if EV_PREPARE_ENABLE EV_BEGIN_WATCHER (prepare, prepare) void set () EV_NOEXCEPT { } EV_END_WATCHER (prepare, prepare) #endif #if EV_CHECK_ENABLE EV_BEGIN_WATCHER (check, check) void set () EV_NOEXCEPT { } EV_END_WATCHER (check, check) #endif #if EV_EMBED_ENABLE EV_BEGIN_WATCHER (embed, embed) void set_embed (struct ev_loop *embedded_loop) EV_NOEXCEPT { freeze_guard freeze (this); ev_embed_set (static_cast(this), embedded_loop); } void start (struct ev_loop *embedded_loop) EV_NOEXCEPT { set (embedded_loop); start (); } void sweep () { ev_embed_sweep (EV_A_ static_cast(this)); } EV_END_WATCHER (embed, embed) #endif #if EV_FORK_ENABLE EV_BEGIN_WATCHER (fork, fork) void set () EV_NOEXCEPT { } EV_END_WATCHER (fork, fork) #endif #if EV_ASYNC_ENABLE EV_BEGIN_WATCHER (async, async) void send () EV_NOEXCEPT { ev_async_send (EV_A_ static_cast(this)); } bool async_pending () EV_NOEXCEPT { return ev_async_pending (static_cast(this)); } EV_END_WATCHER (async, async) #endif #undef EV_PX #undef EV_PX_ #undef EV_CONSTRUCT #undef EV_BEGIN_WATCHER #undef EV_END_WATCHER } #endif gevent-24.11.1/deps/libev/ev.3000066400000000000000000010254131471441230600157120ustar00rootroot00000000000000.\" Automatically generated by Pod::Man 4.11 (Pod::Simple 3.35) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" .\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2). .\" Fear. Run. Save yourself. No user-serviceable parts. . \" fudge factors for nroff and troff .if n \{\ . ds #H 0 . ds #V .8m . ds #F .3m . ds #[ \f1 . ds #] \fP .\} .if t \{\ . ds #H ((1u-(\\\\n(.fu%2u))*.13m) . ds #V .6m . ds #F 0 . ds #[ \& . 
ds #] \& .\} . \" simple accents for nroff and troff .if n \{\ . ds ' \& . ds ` \& . ds ^ \& . ds , \& . ds ~ ~ . ds / .\} .if t \{\ . ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u" . ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u' . ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u' . ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u' . ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u' . ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u' .\} . \" troff and (daisy-wheel) nroff accents .ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V' .ds 8 \h'\*(#H'\(*b\h'-\*(#H' .ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#] .ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H' .ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u' .ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#] .ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#] .ds ae a\h'-(\w'a'u*4/10)'e .ds Ae A\h'-(\w'A'u*4/10)'E . \" corrections for vroff .if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u' .if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u' . \" for low resolution devices (crt and lpr) .if \n(.H>23 .if \n(.V>19 \ \{\ . ds : e . ds 8 ss . ds o a . ds d- d\h'-1'\(ga . ds D- D\h'-1'\(hy . ds th \o'bp' . ds Th \o'LP' . ds ae ae . ds Ae AE .\} .rm #[ #] #H #V #F C .\" ======================================================================== .\" .IX Title "LIBEV 3" .TH LIBEV 3 "2020-03-12" "libev-4.31" "libev - high performance full featured event loop" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" libev \- a high performance full\-featured event loop written in C .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 1 \& #include .Ve .SS "\s-1EXAMPLE PROGRAM\s0" .IX Subsection "EXAMPLE PROGRAM" .Vb 2 \& // a single header file is required \& #include \& \& #include // for puts \& \& // every watcher type has its own typedef\*(Aqd struct \& // with the name ev_TYPE \& ev_io stdin_watcher; \& ev_timer timeout_watcher; \& \& // all watcher callbacks have a similar signature \& // this callback is called when data is readable on stdin \& static void \& stdin_cb (EV_P_ ev_io *w, int revents) \& { \& puts ("stdin ready"); \& // for one\-shot events, one must manually stop the watcher \& // with its corresponding stop function. 
\& ev_io_stop (EV_A_ w); \& \& // this causes all nested ev_run\*(Aqs to stop iterating \& ev_break (EV_A_ EVBREAK_ALL); \& } \& \& // another callback, this time for a time\-out \& static void \& timeout_cb (EV_P_ ev_timer *w, int revents) \& { \& puts ("timeout"); \& // this causes the innermost ev_run to stop iterating \& ev_break (EV_A_ EVBREAK_ONE); \& } \& \& int \& main (void) \& { \& // use the default event loop unless you have special needs \& struct ev_loop *loop = EV_DEFAULT; \& \& // initialise an io watcher, then start it \& // this one will watch for stdin to become readable \& ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ); \& ev_io_start (loop, &stdin_watcher); \& \& // initialise a timer watcher, then start it \& // simple non\-repeating 5.5 second timeout \& ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.); \& ev_timer_start (loop, &timeout_watcher); \& \& // now wait for events to arrive \& ev_run (loop, 0); \& \& // break was called, so exit \& return 0; \& } .Ve .SH "ABOUT THIS DOCUMENT" .IX Header "ABOUT THIS DOCUMENT" This document documents the libev software package. .PP The newest version of this document is also available as an html-formatted web page you might find easier to navigate when reading it for the first time: . .PP While this document tries to be as complete as possible in documenting libev, its usage and the rationale behind its design, it is not a tutorial on event-based programming, nor will it introduce event-based programming with libev. .PP Familiarity with event based programming techniques in general is assumed throughout this document. .SH "WHAT TO READ WHEN IN A HURRY" .IX Header "WHAT TO READ WHEN IN A HURRY" This manual tries to be very detailed, but unfortunately, this also makes it very long. If you just want to know the basics of libev, I suggest reading \*(L"\s-1ANATOMY OF A WATCHER\*(R"\s0, then the \*(L"\s-1EXAMPLE PROGRAM\*(R"\s0 above and look up the missing functions in \*(L"\s-1GLOBAL FUNCTIONS\*(R"\s0 and the \f(CW\*(C`ev_io\*(C'\fR and \&\f(CW\*(C`ev_timer\*(C'\fR sections in \*(L"\s-1WATCHER TYPES\*(R"\s0. .SH "ABOUT LIBEV" .IX Header "ABOUT LIBEV" Libev is an event loop: you register interest in certain events (such as a file descriptor being readable or a timeout occurring), and it will manage these event sources and provide your program with events. .PP To do this, it must take more or less complete control over your process (or thread) by executing the \fIevent loop\fR handler, and will then communicate events via a callback mechanism. .PP You register interest in certain events by registering so-called \fIevent watchers\fR, which are relatively small C structures you initialise with the details of the event, and then hand it over to libev by \fIstarting\fR the watcher. 
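.PP
As a minimal sketch of this pattern (condensed from the example program
above), a watcher of some type is initialised with its
\&\f(CW\*(C`ev_TYPE_init\*(C'\fR macro and handed to an event loop with
\&\f(CW\*(C`ev_TYPE_start\*(C'\fR:
.PP
.Vb 5
\& static void cb (EV_P_ ev_timer *w, int revents) { /* ... */ }
\&
\& ev_timer w;
\& ev_timer_init (&w, cb, 5.5, 0.); // trigger once after 5.5 seconds
\& ev_timer_start (EV_DEFAULT, &w);
.Ve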
.SS "\s-1FEATURES\s0" .IX Subsection "FEATURES" Libev supports \f(CW\*(C`select\*(C'\fR, \f(CW\*(C`poll\*(C'\fR, the Linux-specific aio and \f(CW\*(C`epoll\*(C'\fR interfaces, the BSD-specific \f(CW\*(C`kqueue\*(C'\fR and the Solaris-specific event port mechanisms for file descriptor events (\f(CW\*(C`ev_io\*(C'\fR), the Linux \f(CW\*(C`inotify\*(C'\fR interface (for \f(CW\*(C`ev_stat\*(C'\fR), Linux eventfd/signalfd (for faster and cleaner inter-thread wakeup (\f(CW\*(C`ev_async\*(C'\fR)/signal handling (\f(CW\*(C`ev_signal\*(C'\fR)) relative timers (\f(CW\*(C`ev_timer\*(C'\fR), absolute timers with customised rescheduling (\f(CW\*(C`ev_periodic\*(C'\fR), synchronous signals (\f(CW\*(C`ev_signal\*(C'\fR), process status change events (\f(CW\*(C`ev_child\*(C'\fR), and event watchers dealing with the event loop mechanism itself (\f(CW\*(C`ev_idle\*(C'\fR, \f(CW\*(C`ev_embed\*(C'\fR, \f(CW\*(C`ev_prepare\*(C'\fR and \&\f(CW\*(C`ev_check\*(C'\fR watchers) as well as file watchers (\f(CW\*(C`ev_stat\*(C'\fR) and even limited support for fork events (\f(CW\*(C`ev_fork\*(C'\fR). .PP It also is quite fast (see this benchmark comparing it to libevent for example). .SS "\s-1CONVENTIONS\s0" .IX Subsection "CONVENTIONS" Libev is very configurable. In this manual the default (and most common) configuration will be described, which supports multiple event loops. For more info about various configuration options please have a look at \&\fB\s-1EMBED\s0\fR section in this manual. If libev was configured without support for multiple event loops, then all functions taking an initial argument of name \f(CW\*(C`loop\*(C'\fR (which is always of type \f(CW\*(C`struct ev_loop *\*(C'\fR) will not have this argument. .SS "\s-1TIME REPRESENTATION\s0" .IX Subsection "TIME REPRESENTATION" Libev represents time as a single floating point number, representing the (fractional) number of seconds since the (\s-1POSIX\s0) epoch (in practice somewhere near the beginning of 1970, details are complicated, don't ask). This type is called \f(CW\*(C`ev_tstamp\*(C'\fR, which is what you should use too. It usually aliases to the \f(CW\*(C`double\*(C'\fR type in C. When you need to do any calculations on it, you should treat it as some floating point value. .PP Unlike the name component \f(CW\*(C`stamp\*(C'\fR might indicate, it is also used for time differences (e.g. delays) throughout libev. .SH "ERROR HANDLING" .IX Header "ERROR HANDLING" Libev knows three classes of errors: operating system errors, usage errors and internal errors (bugs). .PP When libev catches an operating system error it cannot handle (for example a system call indicating a condition libev cannot fix), it calls the callback set via \f(CW\*(C`ev_set_syserr_cb\*(C'\fR, which is supposed to fix the problem or abort. The default is to print a diagnostic message and to call \f(CW\*(C`abort ()\*(C'\fR. .PP When libev detects a usage error such as a negative timer interval, then it will print a diagnostic message and abort (via the \f(CW\*(C`assert\*(C'\fR mechanism, so \f(CW\*(C`NDEBUG\*(C'\fR will disable this checking): these are programming errors in the libev caller and need to be fixed there. .PP Via the \f(CW\*(C`EV_FREQUENT\*(C'\fR macro you can compile in and/or enable extensive consistency checking code inside libev that can be used to check for internal inconsistencies, suually caused by application bugs. .PP Libev also has a few internal error-checking \f(CW\*(C`assert\*(C'\fRions. 
These do not trigger under normal circumstances, as they indicate either a bug in libev or worse. .SH "GLOBAL FUNCTIONS" .IX Header "GLOBAL FUNCTIONS" These functions can be called anytime, even before initialising the library in any way. .IP "ev_tstamp ev_time ()" 4 .IX Item "ev_tstamp ev_time ()" Returns the current time as libev would use it. Please note that the \&\f(CW\*(C`ev_now\*(C'\fR function is usually faster and also often returns the timestamp you actually want to know. Also interesting is the combination of \&\f(CW\*(C`ev_now_update\*(C'\fR and \f(CW\*(C`ev_now\*(C'\fR. .IP "ev_sleep (ev_tstamp interval)" 4 .IX Item "ev_sleep (ev_tstamp interval)" Sleep for the given interval: The current thread will be blocked until either it is interrupted or the given time interval has passed (approximately \- it might return a bit earlier even if not interrupted). Returns immediately if \f(CW\*(C`interval <= 0\*(C'\fR. .Sp Basically this is a sub-second-resolution \f(CW\*(C`sleep ()\*(C'\fR. .Sp The range of the \f(CW\*(C`interval\*(C'\fR is limited \- libev only guarantees to work with sleep times of up to one day (\f(CW\*(C`interval <= 86400\*(C'\fR). .IP "int ev_version_major ()" 4 .IX Item "int ev_version_major ()" .PD 0 .IP "int ev_version_minor ()" 4 .IX Item "int ev_version_minor ()" .PD You can find out the major and minor \s-1ABI\s0 version numbers of the library you linked against by calling the functions \f(CW\*(C`ev_version_major\*(C'\fR and \&\f(CW\*(C`ev_version_minor\*(C'\fR. If you want, you can compare against the global symbols \f(CW\*(C`EV_VERSION_MAJOR\*(C'\fR and \f(CW\*(C`EV_VERSION_MINOR\*(C'\fR, which specify the version of the library your program was compiled against. .Sp These version numbers refer to the \s-1ABI\s0 version of the library, not the release version. .Sp Usually, it's a good idea to terminate if the major versions mismatch, as this indicates an incompatible change. Minor versions are usually compatible to older versions, so a larger minor version alone is usually not a problem. .Sp Example: Make sure we haven't accidentally been linked against the wrong version (note, however, that this will not detect other \s-1ABI\s0 mismatches, such as \s-1LFS\s0 or reentrancy). .Sp .Vb 3 \& assert (("libev version mismatch", \& ev_version_major () == EV_VERSION_MAJOR \& && ev_version_minor () >= EV_VERSION_MINOR)); .Ve .IP "unsigned int ev_supported_backends ()" 4 .IX Item "unsigned int ev_supported_backends ()" Return the set of all backends (i.e. their corresponding \f(CW\*(C`EV_BACKEND_*\*(C'\fR value) compiled into this binary of libev (independent of their availability on the system you are running on). See \f(CW\*(C`ev_default_loop\*(C'\fR for a description of the set values. .Sp Example: make sure we have the epoll method, because yeah this is cool and a must have and can we have a torrent of it please!!!11 .Sp .Vb 2 \& assert (("sorry, no epoll, no sex", \& ev_supported_backends () & EVBACKEND_EPOLL)); .Ve .IP "unsigned int ev_recommended_backends ()" 4 .IX Item "unsigned int ev_recommended_backends ()" Return the set of all backends compiled into this binary of libev and also recommended for this platform, meaning it will work for most file descriptor types. This set is often smaller than the one returned by \&\f(CW\*(C`ev_supported_backends\*(C'\fR, as for example kqueue is broken on most BSDs and will not be auto-detected unless you explicitly request it (assuming you know what you are doing). 
This is the set of backends that libev will probe for if you specify no backends explicitly. .IP "unsigned int ev_embeddable_backends ()" 4 .IX Item "unsigned int ev_embeddable_backends ()" Returns the set of backends that are embeddable in other event loops. This value is platform-specific but can include backends not available on the current system. To find which embeddable backends might be supported on the current system, you would need to look at \f(CW\*(C`ev_embeddable_backends () & ev_supported_backends ()\*(C'\fR, likewise for recommended ones. .Sp See the description of \f(CW\*(C`ev_embed\*(C'\fR watchers for more info. .IP "ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())" 4 .IX Item "ev_set_allocator (void *(*cb)(void *ptr, long size) throw ())" Sets the allocation function to use (the prototype is similar \- the semantics are identical to the \f(CW\*(C`realloc\*(C'\fR C89/SuS/POSIX function). It is used to allocate and free memory (no surprises here). If it returns zero when memory needs to be allocated (\f(CW\*(C`size != 0\*(C'\fR), the library might abort or take some potentially destructive action. .Sp Since some systems (at least OpenBSD and Darwin) fail to implement correct \f(CW\*(C`realloc\*(C'\fR semantics, libev will use a wrapper around the system \&\f(CW\*(C`realloc\*(C'\fR and \f(CW\*(C`free\*(C'\fR functions by default. .Sp You could override this function in high-availability programs to, say, free some memory if it cannot allocate memory, to use a special allocator, or even to sleep a while and retry until some memory is available. .Sp Example: The following is the \f(CW\*(C`realloc\*(C'\fR function that libev itself uses which should work with \f(CW\*(C`realloc\*(C'\fR and \f(CW\*(C`free\*(C'\fR functions of all kinds and is probably a good basis for your own implementation. .Sp .Vb 5 \& static void * \& ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT \& { \& if (size) \& return realloc (ptr, size); \& \& free (ptr); \& return 0; \& } .Ve .Sp Example: Replace the libev allocator with one that waits a bit and then retries. .Sp .Vb 8 \& static void * \& persistent_realloc (void *ptr, size_t size) \& { \& if (!size) \& { \& free (ptr); \& return 0; \& } \& \& for (;;) \& { \& void *newptr = realloc (ptr, size); \& \& if (newptr) \& return newptr; \& \& sleep (60); \& } \& } \& \& ... \& ev_set_allocator (persistent_realloc); .Ve .IP "ev_set_syserr_cb (void (*cb)(const char *msg) throw ())" 4 .IX Item "ev_set_syserr_cb (void (*cb)(const char *msg) throw ())" Set the callback function to call on a retryable system call error (such as failed select, poll, epoll_wait). The message is a printable string indicating the system call or subsystem causing the problem. If this callback is set, then libev will expect it to remedy the situation, no matter what, when it returns. That is, libev will generally retry the requested operation, or, if the condition doesn't go away, do bad stuff (such as abort). .Sp Example: This is basically the same thing that libev does internally, too. .Sp .Vb 6 \& static void \& fatal_error (const char *msg) \& { \& perror (msg); \& abort (); \& } \& \& ... \& ev_set_syserr_cb (fatal_error); .Ve .IP "ev_feed_signal (int signum)" 4 .IX Item "ev_feed_signal (int signum)" This function can be used to \*(L"simulate\*(R" a signal receive. It is completely safe to call this function at any time, from any context, including signal handlers or random threads. 
.Sp Its main use is to customise signal handling in your process, especially in the presence of threads. For example, you could block signals by default in all threads (and specifying \f(CW\*(C`EVFLAG_NOSIGMASK\*(C'\fR when creating any loops), and in one thread, use \f(CW\*(C`sigwait\*(C'\fR or any other mechanism to wait for signals, then \*(L"deliver\*(R" them to libev by calling \&\f(CW\*(C`ev_feed_signal\*(C'\fR. .SH "FUNCTIONS CONTROLLING EVENT LOOPS" .IX Header "FUNCTIONS CONTROLLING EVENT LOOPS" An event loop is described by a \f(CW\*(C`struct ev_loop *\*(C'\fR (the \f(CW\*(C`struct\*(C'\fR is \&\fInot\fR optional in this case unless libev 3 compatibility is disabled, as libev 3 had an \f(CW\*(C`ev_loop\*(C'\fR function colliding with the struct name). .PP The library knows two types of such loops, the \fIdefault\fR loop, which supports child process events, and dynamically created event loops which do not. .IP "struct ev_loop *ev_default_loop (unsigned int flags)" 4 .IX Item "struct ev_loop *ev_default_loop (unsigned int flags)" This returns the \*(L"default\*(R" event loop object, which is what you should normally use when you just need \*(L"the event loop\*(R". Event loop objects and the \f(CW\*(C`flags\*(C'\fR parameter are described in more detail in the entry for \&\f(CW\*(C`ev_loop_new\*(C'\fR. .Sp If the default loop is already initialised then this function simply returns it (and ignores the flags. If that is troubling you, check \&\f(CW\*(C`ev_backend ()\*(C'\fR afterwards). Otherwise it will create it with the given flags, which should almost always be \f(CW0\fR, unless the caller is also the one calling \f(CW\*(C`ev_run\*(C'\fR or otherwise qualifies as \*(L"the main program\*(R". .Sp If you don't know what event loop to use, use the one returned from this function (or via the \f(CW\*(C`EV_DEFAULT\*(C'\fR macro). .Sp Note that this function is \fInot\fR thread-safe, so if you want to use it from multiple threads, you have to employ some kind of mutex (note also that this case is unlikely, as loops cannot be shared easily between threads anyway). .Sp The default loop is the only loop that can handle \f(CW\*(C`ev_child\*(C'\fR watchers, and to do this, it always registers a handler for \f(CW\*(C`SIGCHLD\*(C'\fR. If this is a problem for your application you can either create a dynamic loop with \&\f(CW\*(C`ev_loop_new\*(C'\fR which doesn't do that, or you can simply overwrite the \&\f(CW\*(C`SIGCHLD\*(C'\fR signal handler \fIafter\fR calling \f(CW\*(C`ev_default_init\*(C'\fR. .Sp Example: This is the most typical usage. .Sp .Vb 2 \& if (!ev_default_loop (0)) \& fatal ("could not initialise libev, bad $LIBEV_FLAGS in environment?"); .Ve .Sp Example: Restrict libev to the select and poll backends, and do not allow environment settings to be taken into account: .Sp .Vb 1 \& ev_default_loop (EVBACKEND_POLL | EVBACKEND_SELECT | EVFLAG_NOENV); .Ve .IP "struct ev_loop *ev_loop_new (unsigned int flags)" 4 .IX Item "struct ev_loop *ev_loop_new (unsigned int flags)" This will create and initialise a new event loop object. If the loop could not be initialised, returns false. .Sp This function is thread-safe, and one common way to use libev with threads is indeed to create one loop per thread, and using the default loop in the \*(L"main\*(R" or \*(L"initial\*(R" thread. .Sp The flags argument can be used to specify special behaviour or specific backends to use, and is usually specified as \f(CW0\fR (or \f(CW\*(C`EVFLAG_AUTO\*(C'\fR). 
.Sp The following flags are supported: .RS 4 .ie n .IP """EVFLAG_AUTO""" 4 .el .IP "\f(CWEVFLAG_AUTO\fR" 4 .IX Item "EVFLAG_AUTO" The default flags value. Use this if you have no clue (it's the right thing, believe me). .ie n .IP """EVFLAG_NOENV""" 4 .el .IP "\f(CWEVFLAG_NOENV\fR" 4 .IX Item "EVFLAG_NOENV" If this flag bit is or'ed into the flag value (or the program runs setuid or setgid) then libev will \fInot\fR look at the environment variable \&\f(CW\*(C`LIBEV_FLAGS\*(C'\fR. Otherwise (the default), this environment variable will override the flags completely if it is found in the environment. This is useful to try out specific backends to test their performance, to work around bugs, or to make libev threadsafe (accessing environment variables cannot be done in a threadsafe way, but usually it works if no other thread modifies them). .ie n .IP """EVFLAG_FORKCHECK""" 4 .el .IP "\f(CWEVFLAG_FORKCHECK\fR" 4 .IX Item "EVFLAG_FORKCHECK" Instead of calling \f(CW\*(C`ev_loop_fork\*(C'\fR manually after a fork, you can also make libev check for a fork in each iteration by enabling this flag. .Sp This works by calling \f(CW\*(C`getpid ()\*(C'\fR on every iteration of the loop, and thus this might slow down your event loop if you do a lot of loop iterations and little real work, but is usually not noticeable (on my GNU/Linux system for example, \f(CW\*(C`getpid\*(C'\fR is actually a simple 5\-insn sequence without a system call and thus \fIvery\fR fast, but my GNU/Linux system also has \f(CW\*(C`pthread_atfork\*(C'\fR which is even faster). (Update: glibc versions 2.25 apparently removed the \f(CW\*(C`getpid\*(C'\fR optimisation again). .Sp The big advantage of this flag is that you can forget about fork (and forget about forgetting to tell libev about forking, although you still have to ignore \f(CW\*(C`SIGPIPE\*(C'\fR) when you use this flag. .Sp This flag setting cannot be overridden or specified in the \f(CW\*(C`LIBEV_FLAGS\*(C'\fR environment variable. .ie n .IP """EVFLAG_NOINOTIFY""" 4 .el .IP "\f(CWEVFLAG_NOINOTIFY\fR" 4 .IX Item "EVFLAG_NOINOTIFY" When this flag is specified, then libev will not attempt to use the \&\fIinotify\fR \s-1API\s0 for its \f(CW\*(C`ev_stat\*(C'\fR watchers. Apart from debugging and testing, this flag can be useful to conserve inotify file descriptors, as otherwise each loop using \f(CW\*(C`ev_stat\*(C'\fR watchers consumes one inotify handle. .ie n .IP """EVFLAG_SIGNALFD""" 4 .el .IP "\f(CWEVFLAG_SIGNALFD\fR" 4 .IX Item "EVFLAG_SIGNALFD" When this flag is specified, then libev will attempt to use the \&\fIsignalfd\fR \s-1API\s0 for its \f(CW\*(C`ev_signal\*(C'\fR (and \f(CW\*(C`ev_child\*(C'\fR) watchers. This \s-1API\s0 delivers signals synchronously, which makes it both faster and might make it possible to get the queued signal data. It can also simplify signal handling with threads, as long as you properly block signals in your threads that are not interested in handling them. .Sp Signalfd will not be used by default as this changes your signal mask, and there are a lot of shoddy libraries and programs (glib's threadpool for example) that can't properly initialise their signal masks. .ie n .IP """EVFLAG_NOSIGMASK""" 4 .el .IP "\f(CWEVFLAG_NOSIGMASK\fR" 4 .IX Item "EVFLAG_NOSIGMASK" When this flag is specified, then libev will avoid to modify the signal mask. Specifically, this means you have to make sure signals are unblocked when you want to receive them. 
.Sp This behaviour is useful when you want to do your own signal handling, or want to handle signals only in specific threads and want to avoid libev unblocking the signals. .Sp It's also required by \s-1POSIX\s0 in a threaded program, as libev calls \&\f(CW\*(C`sigprocmask\*(C'\fR, whose behaviour is officially unspecified. .ie n .IP """EVFLAG_NOTIMERFD""" 4 .el .IP "\f(CWEVFLAG_NOTIMERFD\fR" 4 .IX Item "EVFLAG_NOTIMERFD" When this flag is specified, libev will avoid using a \f(CW\*(C`timerfd\*(C'\fR to detect time jumps. It will still be able to detect time jumps, but takes longer and has a lower accuracy in doing so, but saves a file descriptor per loop. .Sp The current implementation only tries to use a \f(CW\*(C`timerfd\*(C'\fR when the first \&\f(CW\*(C`ev_periodic\*(C'\fR watcher is started and falls back on other methods if it cannot be created, but this behaviour might change in the future. .ie n .IP """EVBACKEND_SELECT"" (value 1, portable select backend)" 4 .el .IP "\f(CWEVBACKEND_SELECT\fR (value 1, portable select backend)" 4 .IX Item "EVBACKEND_SELECT (value 1, portable select backend)" This is your standard \fBselect\fR\|(2) backend. Not \fIcompletely\fR standard, as libev tries to roll its own fd_set with no limits on the number of fds, but if that fails, expect a fairly low limit on the number of fds when using this backend. It doesn't scale too well (O(highest_fd)), but it's usually the fastest backend for a low number of (low-numbered :) fds. .Sp To get good performance out of this backend you need a high amount of parallelism (most of the file descriptors should be busy). If you are writing a server, you should \f(CW\*(C`accept ()\*(C'\fR in a loop to accept as many connections as possible during one iteration. You might also want to have a look at \f(CW\*(C`ev_set_io_collect_interval ()\*(C'\fR to increase the amount of readiness notifications you get per iteration. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR to the \f(CW\*(C`readfds\*(C'\fR set and \f(CW\*(C`EV_WRITE\*(C'\fR to the \&\f(CW\*(C`writefds\*(C'\fR set (and to work around Microsoft Windows bugs, also onto the \&\f(CW\*(C`exceptfds\*(C'\fR set on that platform). .ie n .IP """EVBACKEND_POLL"" (value 2, poll backend, available everywhere except on windows)" 4 .el .IP "\f(CWEVBACKEND_POLL\fR (value 2, poll backend, available everywhere except on windows)" 4 .IX Item "EVBACKEND_POLL (value 2, poll backend, available everywhere except on windows)" And this is your standard \fBpoll\fR\|(2) backend. It's more complicated than select, but handles sparse fds better and has no artificial limit on the number of fds you can use (except it will slow down considerably with a lot of inactive fds). It scales similarly to select, i.e. O(total_fds). See the entry for \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR, above, for performance tips. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR to \f(CW\*(C`POLLIN | POLLERR | POLLHUP\*(C'\fR, and \&\f(CW\*(C`EV_WRITE\*(C'\fR to \f(CW\*(C`POLLOUT | POLLERR | POLLHUP\*(C'\fR. .ie n .IP """EVBACKEND_EPOLL"" (value 4, Linux)" 4 .el .IP "\f(CWEVBACKEND_EPOLL\fR (value 4, Linux)" 4 .IX Item "EVBACKEND_EPOLL (value 4, Linux)" Use the Linux-specific \fBepoll\fR\|(7) interface (for both pre\- and post\-2.6.9 kernels). .Sp For few fds, this backend is a bit slower than poll and select, but it scales phenomenally better. While poll and select usually scale like O(total_fds) where total_fds is the total number of fds (or the highest fd), epoll scales either O(1) or O(active_fds).
.Sp The epoll mechanism deserves honorable mention as the most misdesigned of the more advanced event mechanisms: mere annoyances include silently dropping file descriptors, requiring a system call per change per file descriptor (and unnecessary guessing of parameters), problems with dup, returning before the timeout value, resulting in additional iterations (and only giving 5ms accuracy while select on the same platform gives 0.1ms) and so on. The biggest issue is fork races, however \- if a program forks then \fIboth\fR parent and child process have to recreate the epoll set, which can take considerable time (one syscall per file descriptor) and is of course hard to detect. .Sp Epoll is also notoriously buggy \- embedding epoll fds \fIshould\fR work, but of course \fIdoesn't\fR, and epoll just loves to report events for totally \fIdifferent\fR file descriptors (even already closed ones, so one cannot even remove them from the set) than registered in the set (especially on \s-1SMP\s0 systems). Libev tries to counter these spurious notifications by employing an additional generation counter and comparing that against the events to filter out spurious ones, recreating the set when required. Epoll also erroneously rounds down timeouts, but gives you no way to know when and by how much, so sometimes you have to busy-wait because epoll returns immediately despite a nonzero timeout. And last not least, it also refuses to work with some file descriptors which work perfectly fine with \f(CW\*(C`select\*(C'\fR (files, many character devices...). .Sp Epoll is truly the train wreck among event poll mechanisms, a frankenpoll, cobbled together in a hurry, no thought to design or interaction with others. Oh, the pain, will it ever stop... .Sp While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident (because the same \fIfile descriptor\fR could point to a different \&\fIfile description\fR now), so its best to avoid that. Also, \f(CW\*(C`dup ()\*(C'\fR'ed file descriptors might not work very well if you register events for both file descriptors. .Sp Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn't cause extra overhead. A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object, which can take considerable time and thus should be avoided. .Sp All this means that, in practice, \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage. So sad. .Sp While nominally embeddable in other event loops, this feature is broken in a lot of kernel revisions, but probably(!) works in current versions. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as \&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR. .ie n .IP """EVBACKEND_LINUXAIO"" (value 64, Linux)" 4 .el .IP "\f(CWEVBACKEND_LINUXAIO\fR (value 64, Linux)" 4 .IX Item "EVBACKEND_LINUXAIO (value 64, Linux)" Use the Linux-specific Linux \s-1AIO\s0 (\fInot\fR \f(CWaio(7)\fR but \f(CWio_submit(2)\fR) event interface available in post\-4.18 kernels (but libev only tries to use it in 4.19+). .Sp This is another Linux train wreck of an event interface. 
.Sp If this backend works for you (as of this writing, it was very experimental), it is the best event interface available on Linux and might be well worth enabling it \- if it isn't available in your kernel this will be detected and this backend will be skipped. .Sp This backend can batch oneshot requests and supports a user-space ring buffer to receive events. It also doesn't suffer from most of the design problems of epoll (such as not being able to remove event sources from the epoll set), and generally sounds too good to be true. Because, this being the Linux kernel, of course it suffers from a whole new set of limitations, forcing you to fall back to epoll, inheriting all its design issues. .Sp For one, it is not easily embeddable (but probably could be done using an event fd at some extra overhead). It also is subject to a system wide limit that can be configured in \fI/proc/sys/fs/aio\-max\-nr\fR. If no \s-1AIO\s0 requests are left, this backend will be skipped during initialisation, and will switch to epoll when the loop is active. .Sp Most problematic in practice, however, is that not all file descriptors work with it. For example, in Linux 5.1, \s-1TCP\s0 sockets, pipes, event fds, files, \fI/dev/null\fR and many others are supported, but ttys do not work properly (a known bug that the kernel developers don't care about, see ), so this is not (yet?) a generic event polling interface. .Sp Overall, it seems the Linux developers just don't want it to have a generic event handling mechanism other than \f(CW\*(C`select\*(C'\fR or \f(CW\*(C`poll\*(C'\fR. .Sp To work around all these problems, the current version of libev uses its epoll backend as a fallback for file descriptor types that do not work. Or falls back completely to epoll if the kernel acts up. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as \&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR. .ie n .IP """EVBACKEND_KQUEUE"" (value 8, most \s-1BSD\s0 clones)" 4 .el .IP "\f(CWEVBACKEND_KQUEUE\fR (value 8, most \s-1BSD\s0 clones)" 4 .IX Item "EVBACKEND_KQUEUE (value 8, most BSD clones)" Kqueue deserves special mention, as at the time this backend was implemented, it was broken on all BSDs except NetBSD (usually it doesn't work reliably with anything but sockets and pipes, except on Darwin, where of course it's completely useless). Unlike epoll, however, whose brokenness is by design, these kqueue bugs can be (and mostly have been) fixed without \s-1API\s0 changes to existing programs. For this reason it's not being \*(L"auto-detected\*(R" on all platforms unless you explicitly specify it in the flags (i.e. using \f(CW\*(C`EVBACKEND_KQUEUE\*(C'\fR) or libev was compiled on a known-to-be-good (\-enough) system like NetBSD. .Sp You still can embed kqueue into a normal poll or select backend and use it only for sockets (after having made sure that sockets work with kqueue on the target platform). See \f(CW\*(C`ev_embed\*(C'\fR watchers for more info. .Sp It scales in the same way as the epoll backend, but the interface to the kernel is more efficient (which says nothing about its actual speed, of course). While stopping, setting and starting an I/O watcher never causes an extra system call as with \f(CW\*(C`EVBACKEND_EPOLL\*(C'\fR, it still adds up to two event changes per incident. Support for \f(CW\*(C`fork ()\*(C'\fR is very bad (you might have to leak fds on fork, but it's more sane than epoll) and it drops fds silently in similarly hard-to-detect cases.
.Sp This backend usually performs well under most conditions. .Sp While nominally embeddable in other event loops, this doesn't work everywhere, so you might need to test for this. And since it is broken almost everywhere, you should only use it when you have a lot of sockets (for which it usually works), by embedding it into another event loop (e.g. \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR (but \f(CW\*(C`poll\*(C'\fR is of course also broken on \s-1OS X\s0)) and, did I mention it, using it only for sockets. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR into an \f(CW\*(C`EVFILT_READ\*(C'\fR kevent with \&\f(CW\*(C`NOTE_EOF\*(C'\fR, and \f(CW\*(C`EV_WRITE\*(C'\fR into an \f(CW\*(C`EVFILT_WRITE\*(C'\fR kevent with \&\f(CW\*(C`NOTE_EOF\*(C'\fR. .ie n .IP """EVBACKEND_DEVPOLL"" (value 16, Solaris 8)" 4 .el .IP "\f(CWEVBACKEND_DEVPOLL\fR (value 16, Solaris 8)" 4 .IX Item "EVBACKEND_DEVPOLL (value 16, Solaris 8)" This is not implemented yet (and might never be, unless you send me an implementation). According to reports, \f(CW\*(C`/dev/poll\*(C'\fR only supports sockets and is not embeddable, which would limit the usefulness of this backend immensely. .ie n .IP """EVBACKEND_PORT"" (value 32, Solaris 10)" 4 .el .IP "\f(CWEVBACKEND_PORT\fR (value 32, Solaris 10)" 4 .IX Item "EVBACKEND_PORT (value 32, Solaris 10)" This uses the Solaris 10 event port mechanism. As with everything on Solaris, it's really slow, but it still scales very well (O(active_fds)). .Sp While this backend scales well, it requires one system call per active file descriptor per loop iteration. For small and medium numbers of file descriptors a \*(L"slow\*(R" \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR backend might perform better. .Sp On the positive side, this backend actually performed fully to specification in all tests and is fully embeddable, which is a rare feat among the OS-specific backends (I vastly prefer correctness over speed hacks). .Sp On the negative side, the interface is \fIbizarre\fR \- so bizarre that even sun itself gets it wrong in their code examples: The event polling function sometimes returns events to the caller even though an error occurred, but with no indication whether it has done so or not (yes, it's even documented that way) \- deadly for edge-triggered interfaces where you absolutely have to know whether an event occurred or not because you have to re-arm the watcher. .Sp Fortunately libev seems to be able to work around these idiocies. .Sp This backend maps \f(CW\*(C`EV_READ\*(C'\fR and \f(CW\*(C`EV_WRITE\*(C'\fR in the same way as \&\f(CW\*(C`EVBACKEND_POLL\*(C'\fR. .ie n .IP """EVBACKEND_ALL""" 4 .el .IP "\f(CWEVBACKEND_ALL\fR" 4 .IX Item "EVBACKEND_ALL" Try all backends (even potentially broken ones that wouldn't be tried with \f(CW\*(C`EVFLAG_AUTO\*(C'\fR). Since this is a mask, you can do stuff such as \&\f(CW\*(C`EVBACKEND_ALL & ~EVBACKEND_KQUEUE\*(C'\fR. .Sp It is definitely not recommended to use this flag, use whatever \&\f(CW\*(C`ev_recommended_backends ()\*(C'\fR returns, or simply do not specify a backend at all. .ie n .IP """EVBACKEND_MASK""" 4 .el .IP "\f(CWEVBACKEND_MASK\fR" 4 .IX Item "EVBACKEND_MASK" Not a backend at all, but a mask to select all backend bits from a \&\f(CW\*(C`flags\*(C'\fR value, in case you want to mask out any backends from a flags value (e.g. when modifying the \f(CW\*(C`LIBEV_FLAGS\*(C'\fR environment variable). 
.RE .RS 4 .Sp If one or more of the backend flags are or'ed into the flags value, then only these backends will be tried (in the reverse order as listed here). If none are specified, all backends in \f(CW\*(C`ev_recommended_backends ()\*(C'\fR will be tried. .Sp Example: Try to create an event loop that uses epoll and nothing else. .Sp .Vb 3 \& struct ev_loop *epoller = ev_loop_new (EVBACKEND_EPOLL | EVFLAG_NOENV); \& if (!epoller) \& fatal ("no epoll found here, maybe it hides under your chair"); .Ve .Sp Example: Use whatever libev has to offer, but make sure that kqueue is used if available. .Sp .Vb 1 \& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_KQUEUE); .Ve .Sp Example: Similarly, on Linux, you might want to take advantage of the Linux aio backend if possible, but fall back to something else if that isn't available. .Sp .Vb 1 \& struct ev_loop *loop = ev_loop_new (ev_recommended_backends () | EVBACKEND_LINUXAIO); .Ve .RE .IP "ev_loop_destroy (loop)" 4 .IX Item "ev_loop_destroy (loop)" Destroys an event loop object (frees all memory and kernel state etc.). None of the active event watchers will be stopped in the normal sense, so e.g. \f(CW\*(C`ev_is_active\*(C'\fR might still return true. It is your responsibility to either stop all watchers cleanly yourself \fIbefore\fR calling this function, or cope with the fact afterwards (which is usually the easiest thing, you can just ignore the watchers and/or \f(CW\*(C`free ()\*(C'\fR them for example). .Sp Note that certain global state, such as signal state (and installed signal handlers), will not be freed by this function, and related watchers (such as signal and child watchers) would need to be stopped manually. .Sp This function is normally used on loop objects allocated by \&\f(CW\*(C`ev_loop_new\*(C'\fR, but it can also be used on the default loop returned by \&\f(CW\*(C`ev_default_loop\*(C'\fR, in which case it is not thread-safe. .Sp Note that it is not advisable to call this function on the default loop except in the rare occasion where you really need to free its resources. If you need dynamically allocated loops it is better to use \f(CW\*(C`ev_loop_new\*(C'\fR and \f(CW\*(C`ev_loop_destroy\*(C'\fR. .IP "ev_loop_fork (loop)" 4 .IX Item "ev_loop_fork (loop)" This function sets a flag that causes subsequent \f(CW\*(C`ev_run\*(C'\fR iterations to reinitialise the kernel state for backends that have one. Despite the name, you can call it anytime you are allowed to start or stop watchers (except inside an \f(CW\*(C`ev_prepare\*(C'\fR callback), but it makes most sense after forking, in the child process. You \fImust\fR call it (or use \&\f(CW\*(C`EVFLAG_FORKCHECK\*(C'\fR) in the child before resuming or calling \f(CW\*(C`ev_run\*(C'\fR. .Sp In addition, if you want to reuse a loop (via this function or \&\f(CW\*(C`EVFLAG_FORKCHECK\*(C'\fR), you \fIalso\fR have to ignore \f(CW\*(C`SIGPIPE\*(C'\fR. .Sp Again, you \fIhave\fR to call it on \fIany\fR loop that you want to re-use after a fork, \fIeven if you do not plan to use the loop in the parent\fR. This is because some kernel interfaces *cough* \fIkqueue\fR *cough* do funny things during fork. .Sp On the other hand, you only need to call this function in the child process if and only if you want to use the event loop in the child.
If you just fork+exec or create a new loop in the child, you don't have to call it at all (in fact, \f(CW\*(C`epoll\*(C'\fR is so badly broken that it makes a difference, but libev will usually detect this case on its own and do a costly reset of the backend). .Sp The function itself is quite fast and it's usually not a problem to call it just in case after a fork. .Sp Example: Automate calling \f(CW\*(C`ev_loop_fork\*(C'\fR on the default loop when using pthreads. .Sp .Vb 5 \& static void \& post_fork_child (void) \& { \& ev_loop_fork (EV_DEFAULT); \& } \& \& ... \& pthread_atfork (0, 0, post_fork_child); .Ve .IP "int ev_is_default_loop (loop)" 4 .IX Item "int ev_is_default_loop (loop)" Returns true when the given loop is, in fact, the default loop, and false otherwise. .IP "unsigned int ev_iteration (loop)" 4 .IX Item "unsigned int ev_iteration (loop)" Returns the current iteration count for the event loop, which is identical to the number of times libev did poll for new events. It starts at \f(CW0\fR and happily wraps around with enough iterations. .Sp This value can sometimes be useful as a generation counter of sorts (it \&\*(L"ticks\*(R" the number of loop iterations), as it roughly corresponds with \&\f(CW\*(C`ev_prepare\*(C'\fR and \f(CW\*(C`ev_check\*(C'\fR calls \- and is incremented between the prepare and check phases. .IP "unsigned int ev_depth (loop)" 4 .IX Item "unsigned int ev_depth (loop)" Returns the number of times \f(CW\*(C`ev_run\*(C'\fR was entered minus the number of times \f(CW\*(C`ev_run\*(C'\fR was exited normally, in other words, the recursion depth. .Sp Outside \f(CW\*(C`ev_run\*(C'\fR, this number is zero. In a callback, this number is \&\f(CW1\fR, unless \f(CW\*(C`ev_run\*(C'\fR was invoked recursively (or from another thread), in which case it is higher. .Sp Leaving \f(CW\*(C`ev_run\*(C'\fR abnormally (setjmp/longjmp, cancelling the thread, throwing an exception etc.), doesn't count as \*(L"exit\*(R" \- consider this as a hint to avoid such ungentleman-like behaviour unless it's really convenient, in which case it is fully supported. .IP "unsigned int ev_backend (loop)" 4 .IX Item "unsigned int ev_backend (loop)" Returns one of the \f(CW\*(C`EVBACKEND_*\*(C'\fR flags indicating the event backend in use. .IP "ev_tstamp ev_now (loop)" 4 .IX Item "ev_tstamp ev_now (loop)" Returns the current \*(L"event loop time\*(R", which is the time the event loop received events and started processing them. This timestamp does not change as long as callbacks are being processed, and this is also the base time used for relative timers. You can treat it as the timestamp of the event occurring (or more correctly, libev finding out about it). .IP "ev_now_update (loop)" 4 .IX Item "ev_now_update (loop)" Establishes the current time by querying the kernel, updating the time returned by \f(CW\*(C`ev_now ()\*(C'\fR in the process. This is a costly operation and is usually done automatically within \f(CW\*(C`ev_run ()\*(C'\fR. .Sp This function is rarely useful, but when some event callback runs for a very long time without entering the event loop, updating libev's idea of the current time is a good idea. .Sp See also \*(L"The special problem of time updates\*(R" in the \f(CW\*(C`ev_timer\*(C'\fR section. .IP "ev_suspend (loop)" 4 .IX Item "ev_suspend (loop)" .PD 0 .IP "ev_resume (loop)" 4 .IX Item "ev_resume (loop)" .PD These two functions suspend and resume an event loop, for use when the loop is not used for a while and timeouts should not be processed.
.Sp A typical use case would be an interactive program such as a game: When the user presses \f(CW\*(C`^Z\*(C'\fR to suspend the game and resumes it an hour later it would be best to handle timeouts as if no time had actually passed while the program was suspended. This can be achieved by calling \f(CW\*(C`ev_suspend\*(C'\fR in your \f(CW\*(C`SIGTSTP\*(C'\fR handler, sending yourself a \f(CW\*(C`SIGSTOP\*(C'\fR and calling \&\f(CW\*(C`ev_resume\*(C'\fR directly afterwards to resume timer processing. .Sp Effectively, all \f(CW\*(C`ev_timer\*(C'\fR watchers will be delayed by the time spent between \f(CW\*(C`ev_suspend\*(C'\fR and \f(CW\*(C`ev_resume\*(C'\fR, and all \f(CW\*(C`ev_periodic\*(C'\fR watchers will be rescheduled (that is, they will lose any events that would have occurred while suspended). .Sp After calling \f(CW\*(C`ev_suspend\*(C'\fR you \fBmust not\fR call \fIany\fR function on the given loop other than \f(CW\*(C`ev_resume\*(C'\fR, and you \fBmust not\fR call \f(CW\*(C`ev_resume\*(C'\fR without a previous call to \f(CW\*(C`ev_suspend\*(C'\fR. .Sp Calling \f(CW\*(C`ev_suspend\*(C'\fR/\f(CW\*(C`ev_resume\*(C'\fR has the side effect of updating the event loop time (see \f(CW\*(C`ev_now_update\*(C'\fR). .IP "bool ev_run (loop, int flags)" 4 .IX Item "bool ev_run (loop, int flags)" Finally, this is it, the event handler. This function is usually called after you have initialised all your watchers and you want to start handling events. It will ask the operating system for any new events, call the watcher callbacks, and then repeat the whole process indefinitely: This is why event loops are called \fIloops\fR. .Sp If the flags argument is specified as \f(CW0\fR, it will keep handling events until either no event watchers are active anymore or \f(CW\*(C`ev_break\*(C'\fR was called. .Sp The return value is false if there are no more active watchers (which usually means \*(L"all jobs done\*(R" or \*(L"deadlock\*(R"), and true in all other cases (which usually means "you should call \f(CW\*(C`ev_run\*(C'\fR again"). .Sp Please note that an explicit \f(CW\*(C`ev_break\*(C'\fR is usually better than relying on all watchers to be stopped when deciding when a program has finished (especially in interactive programs), but having a program that automatically loops as long as it has to and no longer by virtue of relying on its watchers stopping correctly, that is truly a thing of beauty. .Sp This function is \fImostly\fR exception-safe \- you can break out of a \&\f(CW\*(C`ev_run\*(C'\fR call by calling \f(CW\*(C`longjmp\*(C'\fR in a callback, throwing a \*(C+ exception and so on. This does not decrement the \f(CW\*(C`ev_depth\*(C'\fR value, nor will it clear any outstanding \f(CW\*(C`EVBREAK_ONE\*(C'\fR breaks. .Sp A flags value of \f(CW\*(C`EVRUN_NOWAIT\*(C'\fR will look for new events, will handle those events and any already outstanding ones, but will not wait and block your process in case there are no events and will return after one iteration of the loop. This is sometimes useful to poll and handle new events while doing lengthy calculations, to keep the program responsive. .Sp A flags value of \f(CW\*(C`EVRUN_ONCE\*(C'\fR will look for new events (waiting if necessary) and will handle those and any already outstanding ones. It will block your process until at least one new event arrives (which could be an event internal to libev itself, so there is no guarantee that a user-registered callback will be called), and will return after one iteration of the loop.
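.Sp Example: drive the loop one iteration at a time from an outer loop of your own (a sketch only; \f(CW\*(C`program_alive\*(C'\fR and \f(CW\*(C`do_other_work\*(C'\fR are placeholders for whatever your program actually does, not libev functions). .Sp .Vb 5 \& while (program_alive) \& { \& do_other_work (); \& \& // handle at least one libev event, sleeping if none is pending \& ev_run (loop, EVRUN_ONCE); \& } .Ve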
.Sp This is useful if you are waiting for some external event in conjunction with something not expressible using other libev watchers (i.e. "roll your own \f(CW\*(C`ev_run\*(C'\fR"). However, a pair of \f(CW\*(C`ev_prepare\*(C'\fR/\f(CW\*(C`ev_check\*(C'\fR watchers is usually a better approach for this kind of thing. .Sp Here are the gory details of what \f(CW\*(C`ev_run\*(C'\fR does (this is for your understanding, not a guarantee that things will work exactly like this in future versions): .Sp .Vb 10 \& \- Increment loop depth. \& \- Reset the ev_break status. \& \- Before the first iteration, call any pending watchers. \& LOOP: \& \- If EVFLAG_FORKCHECK was used, check for a fork. \& \- If a fork was detected (by any means), queue and call all fork watchers. \& \- Queue and call all prepare watchers. \& \- If ev_break was called, goto FINISH. \& \- If we have been forked, detach and recreate the kernel state \& as to not disturb the other process. \& \- Update the kernel state with all outstanding changes. \& \- Update the "event loop time" (ev_now ()). \& \- Calculate for how long to sleep or block, if at all \& (active idle watchers, EVRUN_NOWAIT or not having \& any active watchers at all will result in not sleeping). \& \- Sleep if the I/O and timer collect interval say so. \& \- Increment loop iteration counter. \& \- Block the process, waiting for any events. \& \- Queue all outstanding I/O (fd) events. \& \- Update the "event loop time" (ev_now ()), and do time jump adjustments. \& \- Queue all expired timers. \& \- Queue all expired periodics. \& \- Queue all idle watchers with priority higher than that of pending events. \& \- Queue all check watchers. \& \- Call all queued watchers in reverse order (i.e. check watchers first). \& Signals and child watchers are implemented as I/O watchers, and will \& be handled here by queueing them when their watcher gets executed. \& \- If ev_break has been called, or EVRUN_ONCE or EVRUN_NOWAIT \& were used, or there are no active watchers, goto FINISH, otherwise \& continue with step LOOP. \& FINISH: \& \- Reset the ev_break status iff it was EVBREAK_ONE. \& \- Decrement the loop depth. \& \- Return. .Ve .Sp Example: Queue some jobs and then loop until no events are outstanding anymore. .Sp .Vb 4 \& ... queue jobs here, make sure they register event watchers as long \& ... as they still have work to do (even an idle watcher will do..) \& ev_run (my_loop, 0); \& ... jobs done or somebody called break. yeah! .Ve .IP "ev_break (loop, how)" 4 .IX Item "ev_break (loop, how)" Can be used to make a call to \f(CW\*(C`ev_run\*(C'\fR return early (but only after it has processed all outstanding events). The \f(CW\*(C`how\*(C'\fR argument must be either \&\f(CW\*(C`EVBREAK_ONE\*(C'\fR, which will make the innermost \f(CW\*(C`ev_run\*(C'\fR call return, or \&\f(CW\*(C`EVBREAK_ALL\*(C'\fR, which will make all nested \f(CW\*(C`ev_run\*(C'\fR calls return. .Sp This \*(L"break state\*(R" will be cleared on the next call to \f(CW\*(C`ev_run\*(C'\fR. .Sp It is safe to call \f(CW\*(C`ev_break\*(C'\fR from outside any \f(CW\*(C`ev_run\*(C'\fR calls, too, in which case it will have no effect. .IP "ev_ref (loop)" 4 .IX Item "ev_ref (loop)" .PD 0 .IP "ev_unref (loop)" 4 .IX Item "ev_unref (loop)" .PD Ref/unref can be used to add or remove a reference count on the event loop: Every watcher keeps one reference, and as long as the reference count is nonzero, \f(CW\*(C`ev_run\*(C'\fR will not return on its own. 
.Sp This is useful when you have a watcher that you never intend to unregister, but that nevertheless should not keep \f(CW\*(C`ev_run\*(C'\fR from returning. In such a case, call \f(CW\*(C`ev_unref\*(C'\fR after starting, and \f(CW\*(C`ev_ref\*(C'\fR before stopping it. .Sp As an example, libev itself uses this for its internal signal pipe: It is not visible to the libev user and should not keep \f(CW\*(C`ev_run\*(C'\fR from exiting if no event watchers registered by it are active. It is also an excellent way to do this for generic recurring timers or from within third-party libraries. Just remember to \fIunref after start\fR and \fIref before stop\fR (but only if the watcher wasn't active before, or was active before, respectively. Note also that libev might stop watchers itself (e.g. non-repeating timers) in which case you have to \f(CW\*(C`ev_ref\*(C'\fR in the callback). .Sp Example: Create a signal watcher, but keep it from keeping \f(CW\*(C`ev_run\*(C'\fR running when nothing else is active. .Sp .Vb 4 \& ev_signal exitsig; \& ev_signal_init (&exitsig, sig_cb, SIGINT); \& ev_signal_start (loop, &exitsig); \& ev_unref (loop); .Ve .Sp Example: For some weird reason, unregister the above signal handler again. .Sp .Vb 2 \& ev_ref (loop); \& ev_signal_stop (loop, &exitsig); .Ve .IP "ev_set_io_collect_interval (loop, ev_tstamp interval)" 4 .IX Item "ev_set_io_collect_interval (loop, ev_tstamp interval)" .PD 0 .IP "ev_set_timeout_collect_interval (loop, ev_tstamp interval)" 4 .IX Item "ev_set_timeout_collect_interval (loop, ev_tstamp interval)" .PD These advanced functions influence the time that libev will spend waiting for events. Both time intervals are by default \f(CW0\fR, meaning that libev will try to invoke timer/periodic callbacks and I/O callbacks with minimum latency. .Sp Setting these to a higher value (the \f(CW\*(C`interval\*(C'\fR \fImust\fR be >= \f(CW0\fR) allows libev to delay invocation of I/O and timer/periodic callbacks to increase efficiency of loop iterations (or to increase power-saving opportunities). .Sp The idea is that sometimes your program runs just fast enough to handle one (or very few) event(s) per loop iteration. While this makes the program responsive, it also wastes a lot of \s-1CPU\s0 time to poll for new events, especially with backends like \f(CW\*(C`select ()\*(C'\fR which have a high overhead for the actual polling but can deliver many events at once. .Sp By setting a higher \fIio collect interval\fR you allow libev to spend more time collecting I/O events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both \f(CW\*(C`ev_periodic\*(C'\fR and \&\f(CW\*(C`ev_timer\*(C'\fR) will not be affected. Setting this to a non-null value will introduce an additional \f(CW\*(C`ev_sleep ()\*(C'\fR call into most loop iterations. The sleep time ensures that libev will not poll for I/O events more often than once per this interval, on average (as long as the host time resolution is good enough). .Sp Likewise, by setting a higher \fItimeout collect interval\fR you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). \f(CW\*(C`ev_io\*(C'\fR watchers will not be affected. Setting this to a non-null value will not introduce any overhead in libev.
.Sp Many (busy) programs can usually benefit by setting the I/O collect interval to a value near \f(CW0.1\fR or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn't make much sense to set it to a lower value than \f(CW0.01\fR, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can't increase the parallelity, then this setting will limit your transaction rate (if you need to poll once per transaction and the I/O collect interval is 0.01, then you can't do more than 100 transactions per second). .Sp Setting the \fItimeout collect interval\fR can improve the opportunity for saving power, as the program will \*(L"bundle\*(R" timer callback invocations that are \*(L"near\*(R" in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake\-ups is to use \f(CW\*(C`ev_periodic\*(C'\fR watchers and make sure they fire on, say, one-second boundaries only. .Sp Example: we only need 0.1s timeout granularity, and we wish not to poll more often than 100 times per second: .Sp .Vb 2 \& ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1); \& ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01); .Ve .IP "ev_invoke_pending (loop)" 4 .IX Item "ev_invoke_pending (loop)" This call will simply invoke all pending watchers while resetting their pending state. Normally, \f(CW\*(C`ev_run\*(C'\fR does this automatically when required, but when overriding the invoke callback this call comes handy. This function can be invoked from a watcher \- this can be useful for example when you want to do some lengthy calculation and want to pass further event handling to another thread (you still have to make sure only one thread executes within \f(CW\*(C`ev_invoke_pending\*(C'\fR or \f(CW\*(C`ev_run\*(C'\fR of course). .IP "int ev_pending_count (loop)" 4 .IX Item "int ev_pending_count (loop)" Returns the number of pending watchers \- zero indicates that no watchers are pending. .IP "ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(\s-1EV_P\s0))" 4 .IX Item "ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))" This overrides the invoke pending functionality of the loop: Instead of invoking all pending watchers when there are any, \f(CW\*(C`ev_run\*(C'\fR will call this callback instead. This is useful, for example, when you want to invoke the actual watchers inside another context (another thread etc.). .Sp If you want to reset the callback, use \f(CW\*(C`ev_invoke_pending\*(C'\fR as new callback. .IP "ev_set_loop_release_cb (loop, void (*release)(\s-1EV_P\s0) throw (), void (*acquire)(\s-1EV_P\s0) throw ())" 4 .IX Item "ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ())" Sometimes you want to share the same loop between multiple threads. This can be done relatively simply by putting mutex_lock/unlock calls around each call to a libev function. .Sp However, \f(CW\*(C`ev_run\*(C'\fR can run an indefinite time, so it is not feasible to wait for it to return. One way around this is to wake up the event loop via \f(CW\*(C`ev_break\*(C'\fR and \f(CW\*(C`ev_async_send\*(C'\fR, another way is to set these \&\fIrelease\fR and \fIacquire\fR callbacks on the loop. .Sp When set, then \f(CW\*(C`release\*(C'\fR will be called just before the thread is suspended waiting for new events, and \f(CW\*(C`acquire\*(C'\fR is called just afterwards. 
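.Sp Example: protect a shared loop with a single global pthread mutex (a minimal sketch; the mutex, the callback names and the locking discipline are assumptions of this example, not part of libev). .Sp .Vb 10 \& static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER; \& \& static void \& l_release (EV_P) \& { \& pthread_mutex_unlock (&loop_lock); \& } \& \& static void \& l_acquire (EV_P) \& { \& pthread_mutex_lock (&loop_lock); \& } \& \& ... \& // every thread, including the one that calls ev_run, takes the \& // mutex around any libev call on this loop \& pthread_mutex_lock (&loop_lock); \& ev_set_loop_release_cb (loop, l_release, l_acquire); \& ev_run (loop, 0); \& pthread_mutex_unlock (&loop_lock); .Ve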
.Sp Ideally, \f(CW\*(C`release\*(C'\fR will just call your mutex_unlock function, and \&\f(CW\*(C`acquire\*(C'\fR will just call the mutex_lock function again. .Sp While event loop modifications are allowed between invocations of \&\f(CW\*(C`release\*(C'\fR and \f(CW\*(C`acquire\*(C'\fR (that's their only purpose after all), no modifications done will affect the event loop, i.e. adding watchers will have no effect on the set of file descriptors being watched, or the time waited. Use an \f(CW\*(C`ev_async\*(C'\fR watcher to wake up \f(CW\*(C`ev_run\*(C'\fR when you want it to take note of any changes you made. .Sp In theory, threads executing \f(CW\*(C`ev_run\*(C'\fR will be async-cancel safe between invocations of \f(CW\*(C`release\*(C'\fR and \f(CW\*(C`acquire\*(C'\fR. .Sp See also the locking example in the \f(CW\*(C`THREADS\*(C'\fR section later in this document. .IP "ev_set_userdata (loop, void *data)" 4 .IX Item "ev_set_userdata (loop, void *data)" .PD 0 .IP "void *ev_userdata (loop)" 4 .IX Item "void *ev_userdata (loop)" .PD Set and retrieve a single \f(CW\*(C`void *\*(C'\fR associated with a loop. When \&\f(CW\*(C`ev_set_userdata\*(C'\fR has never been called, then \f(CW\*(C`ev_userdata\*(C'\fR returns \&\f(CW0\fR. .Sp These two functions can be used to associate arbitrary data with a loop, and are intended solely for the \f(CW\*(C`invoke_pending_cb\*(C'\fR, \f(CW\*(C`release\*(C'\fR and \&\f(CW\*(C`acquire\*(C'\fR callbacks described above, but of course can be (ab\-)used for any other purpose as well. .IP "ev_verify (loop)" 4 .IX Item "ev_verify (loop)" This function only does something when \f(CW\*(C`EV_VERIFY\*(C'\fR support has been compiled in, which is the default for non-minimal builds. It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call \f(CW\*(C`abort ()\*(C'\fR. .Sp This can be used to catch bugs inside libev itself: under normal circumstances, this function will never abort as of course libev keeps its data structures consistent. .SH "ANATOMY OF A WATCHER" .IX Header "ANATOMY OF A WATCHER" In the following description, uppercase \f(CW\*(C`TYPE\*(C'\fR in names stands for the watcher type, e.g. \f(CW\*(C`ev_TYPE_start\*(C'\fR can mean \f(CW\*(C`ev_timer_start\*(C'\fR for timer watchers and \f(CW\*(C`ev_io_start\*(C'\fR for I/O watchers. .PP A watcher is an opaque structure that you allocate and register to record your interest in some event. To make a concrete example, imagine you want to wait for \s-1STDIN\s0 to become readable, you would create an \f(CW\*(C`ev_io\*(C'\fR watcher for that: .PP .Vb 5 \& static void my_cb (struct ev_loop *loop, ev_io *w, int revents) \& { \& ev_io_stop (loop, w); \& ev_break (loop, EVBREAK_ALL); \& } \& \& struct ev_loop *loop = ev_default_loop (0); \& \& ev_io stdin_watcher; \& \& ev_init (&stdin_watcher, my_cb); \& ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ); \& ev_io_start (loop, &stdin_watcher); \& \& ev_run (loop, 0); .Ve .PP As you can see, you are responsible for allocating the memory for your watcher structures (and it is \fIusually\fR a bad idea to do this on the stack). .PP Each watcher has an associated watcher structure (called \f(CW\*(C`struct ev_TYPE\*(C'\fR or simply \f(CW\*(C`ev_TYPE\*(C'\fR, as typedefs are provided for all watcher structs). .PP Each watcher structure must be initialised by a call to \f(CW\*(C`ev_init (watcher *, callback)\*(C'\fR, which expects a callback to be provided.
This callback is invoked each time the event occurs (or, in the case of I/O watchers, each time the event loop detects that the file descriptor given is readable and/or writable). .PP Each watcher type further has its own \f(CW\*(C`ev_TYPE_set (watcher *, ...)\*(C'\fR macro to configure it, with arguments specific to the watcher type. There is also a macro to combine initialisation and setting in one call: \f(CW\*(C`ev_TYPE_init (watcher *, callback, ...)\*(C'\fR. .PP To make the watcher actually watch out for events, you have to start it with a watcher-specific start function (\f(CW\*(C`ev_TYPE_start (loop, watcher *)\*(C'\fR), and you can stop watching for events at any time by calling the corresponding stop function (\f(CW\*(C`ev_TYPE_stop (loop, watcher *)\*(C'\fR. .PP As long as your watcher is active (has been started but not stopped) you must not touch the values stored in it except when explicitly documented otherwise. Most specifically you must never reinitialise it or call its \&\f(CW\*(C`ev_TYPE_set\*(C'\fR macro. .PP Each and every callback receives the event loop pointer as first, the registered watcher structure as second, and a bitset of received events as third argument. .PP The received events usually include a single bit per event type received (you can receive multiple events at the same time). The possible bit masks are: .ie n .IP """EV_READ""" 4 .el .IP "\f(CWEV_READ\fR" 4 .IX Item "EV_READ" .PD 0 .ie n .IP """EV_WRITE""" 4 .el .IP "\f(CWEV_WRITE\fR" 4 .IX Item "EV_WRITE" .PD The file descriptor in the \f(CW\*(C`ev_io\*(C'\fR watcher has become readable and/or writable. .ie n .IP """EV_TIMER""" 4 .el .IP "\f(CWEV_TIMER\fR" 4 .IX Item "EV_TIMER" The \f(CW\*(C`ev_timer\*(C'\fR watcher has timed out. .ie n .IP """EV_PERIODIC""" 4 .el .IP "\f(CWEV_PERIODIC\fR" 4 .IX Item "EV_PERIODIC" The \f(CW\*(C`ev_periodic\*(C'\fR watcher has timed out. .ie n .IP """EV_SIGNAL""" 4 .el .IP "\f(CWEV_SIGNAL\fR" 4 .IX Item "EV_SIGNAL" The signal specified in the \f(CW\*(C`ev_signal\*(C'\fR watcher has been received by a thread. .ie n .IP """EV_CHILD""" 4 .el .IP "\f(CWEV_CHILD\fR" 4 .IX Item "EV_CHILD" The pid specified in the \f(CW\*(C`ev_child\*(C'\fR watcher has received a status change. .ie n .IP """EV_STAT""" 4 .el .IP "\f(CWEV_STAT\fR" 4 .IX Item "EV_STAT" The path specified in the \f(CW\*(C`ev_stat\*(C'\fR watcher changed its attributes somehow. .ie n .IP """EV_IDLE""" 4 .el .IP "\f(CWEV_IDLE\fR" 4 .IX Item "EV_IDLE" The \f(CW\*(C`ev_idle\*(C'\fR watcher has determined that you have nothing better to do. .ie n .IP """EV_PREPARE""" 4 .el .IP "\f(CWEV_PREPARE\fR" 4 .IX Item "EV_PREPARE" .PD 0 .ie n .IP """EV_CHECK""" 4 .el .IP "\f(CWEV_CHECK\fR" 4 .IX Item "EV_CHECK" .PD All \f(CW\*(C`ev_prepare\*(C'\fR watchers are invoked just \fIbefore\fR \f(CW\*(C`ev_run\*(C'\fR starts to gather new events, and all \f(CW\*(C`ev_check\*(C'\fR watchers are queued (not invoked) just after \f(CW\*(C`ev_run\*(C'\fR has gathered them, but before it queues any callbacks for any received events. That means \f(CW\*(C`ev_prepare\*(C'\fR watchers are the last watchers invoked before the event loop sleeps or polls for new events, and \&\f(CW\*(C`ev_check\*(C'\fR watchers will be invoked before any other watchers of the same or lower priority within an event loop iteration. 
.Sp Callbacks of both watcher types can start and stop as many watchers as they want, and all of them will be taken into account (for example, a \&\f(CW\*(C`ev_prepare\*(C'\fR watcher might start an idle watcher to keep \f(CW\*(C`ev_run\*(C'\fR from blocking). .ie n .IP """EV_EMBED""" 4 .el .IP "\f(CWEV_EMBED\fR" 4 .IX Item "EV_EMBED" The embedded event loop specified in the \f(CW\*(C`ev_embed\*(C'\fR watcher needs attention. .ie n .IP """EV_FORK""" 4 .el .IP "\f(CWEV_FORK\fR" 4 .IX Item "EV_FORK" The event loop has been resumed in the child process after fork (see \&\f(CW\*(C`ev_fork\*(C'\fR). .ie n .IP """EV_CLEANUP""" 4 .el .IP "\f(CWEV_CLEANUP\fR" 4 .IX Item "EV_CLEANUP" The event loop is about to be destroyed (see \f(CW\*(C`ev_cleanup\*(C'\fR). .ie n .IP """EV_ASYNC""" 4 .el .IP "\f(CWEV_ASYNC\fR" 4 .IX Item "EV_ASYNC" The given async watcher has been asynchronously notified (see \f(CW\*(C`ev_async\*(C'\fR). .ie n .IP """EV_CUSTOM""" 4 .el .IP "\f(CWEV_CUSTOM\fR" 4 .IX Item "EV_CUSTOM" Not ever sent (or otherwise used) by libev itself, but can be freely used by libev users to signal watchers (e.g. via \f(CW\*(C`ev_feed_event\*(C'\fR). .ie n .IP """EV_ERROR""" 4 .el .IP "\f(CWEV_ERROR\fR" 4 .IX Item "EV_ERROR" An unspecified error has occurred, the watcher has been stopped. This might happen because the watcher could not be properly started because libev ran out of memory, a file descriptor was found to be closed or any other problem. Libev considers these application bugs. .Sp You best act on it by reporting the problem and somehow coping with the watcher being stopped. Note that well-written programs should not receive an error ever, so when your watcher receives it, this usually indicates a bug in your program. .Sp Libev will usually signal a few \*(L"dummy\*(R" events together with an error, for example it might indicate that a fd is readable or writable, and if your callbacks is well-written it can just attempt the operation and cope with the error from \fBread()\fR or \fBwrite()\fR. This will not work in multi-threaded programs, though, as the fd could already be closed and reused for another thing, so beware. .SS "\s-1GENERIC WATCHER FUNCTIONS\s0" .IX Subsection "GENERIC WATCHER FUNCTIONS" .ie n .IP """ev_init"" (ev_TYPE *watcher, callback)" 4 .el .IP "\f(CWev_init\fR (ev_TYPE *watcher, callback)" 4 .IX Item "ev_init (ev_TYPE *watcher, callback)" This macro initialises the generic portion of a watcher. The contents of the watcher object can be arbitrary (so \f(CW\*(C`malloc\*(C'\fR will do). Only the generic parts of the watcher are initialised, you \fIneed\fR to call the type-specific \f(CW\*(C`ev_TYPE_set\*(C'\fR macro afterwards to initialise the type-specific parts. For each type there is also a \f(CW\*(C`ev_TYPE_init\*(C'\fR macro which rolls both calls into one. .Sp You can reinitialise a watcher at any time as long as it has been stopped (or never started) and there are no pending events outstanding. .Sp The callback is always of type \f(CW\*(C`void (*)(struct ev_loop *loop, ev_TYPE *watcher, int revents)\*(C'\fR. .Sp Example: Initialise an \f(CW\*(C`ev_io\*(C'\fR watcher in two steps. .Sp .Vb 3 \& ev_io w; \& ev_init (&w, my_cb); \& ev_io_set (&w, STDIN_FILENO, EV_READ); .Ve .ie n .IP """ev_TYPE_set"" (ev_TYPE *watcher, [args])" 4 .el .IP "\f(CWev_TYPE_set\fR (ev_TYPE *watcher, [args])" 4 .IX Item "ev_TYPE_set (ev_TYPE *watcher, [args])" This macro initialises the type-specific parts of a watcher. 
You need to call \f(CW\*(C`ev_init\*(C'\fR at least once before you call this macro, but you can call \f(CW\*(C`ev_TYPE_set\*(C'\fR any number of times. You must not, however, call this macro on a watcher that is active (it can be pending, however, which is a difference to the \f(CW\*(C`ev_init\*(C'\fR macro). .Sp Although some watcher types do not have type-specific arguments (e.g. \f(CW\*(C`ev_prepare\*(C'\fR) you still need to call its \f(CW\*(C`set\*(C'\fR macro. .Sp See \f(CW\*(C`ev_init\*(C'\fR, above, for an example. .ie n .IP """ev_TYPE_init"" (ev_TYPE *watcher, callback, [args])" 4 .el .IP "\f(CWev_TYPE_init\fR (ev_TYPE *watcher, callback, [args])" 4 .IX Item "ev_TYPE_init (ev_TYPE *watcher, callback, [args])" This convenience macro rolls both \f(CW\*(C`ev_init\*(C'\fR and \f(CW\*(C`ev_TYPE_set\*(C'\fR macro calls into a single call. This is the most convenient method to initialise a watcher. The same limitations apply, of course. .Sp Example: Initialise and set an \f(CW\*(C`ev_io\*(C'\fR watcher in one step. .Sp .Vb 1 \& ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ); .Ve .ie n .IP """ev_TYPE_start"" (loop, ev_TYPE *watcher)" 4 .el .IP "\f(CWev_TYPE_start\fR (loop, ev_TYPE *watcher)" 4 .IX Item "ev_TYPE_start (loop, ev_TYPE *watcher)" Starts (activates) the given watcher. Only active watchers will receive events. If the watcher is already active nothing will happen. .Sp Example: Start the \f(CW\*(C`ev_io\*(C'\fR watcher that is being abused as example in this whole section. .Sp .Vb 1 \& ev_io_start (EV_DEFAULT_UC, &w); .Ve .ie n .IP """ev_TYPE_stop"" (loop, ev_TYPE *watcher)" 4 .el .IP "\f(CWev_TYPE_stop\fR (loop, ev_TYPE *watcher)" 4 .IX Item "ev_TYPE_stop (loop, ev_TYPE *watcher)" Stops the given watcher if active, and clears the pending status (whether the watcher was active or not). .Sp It is possible that stopped watchers are pending \- for example, non-repeating timers are being stopped when they become pending \- but calling \f(CW\*(C`ev_TYPE_stop\*(C'\fR ensures that the watcher is neither active nor pending. If you want to free or reuse the memory used by the watcher it is therefore a good idea to always call its \f(CW\*(C`ev_TYPE_stop\*(C'\fR function. .IP "bool ev_is_active (ev_TYPE *watcher)" 4 .IX Item "bool ev_is_active (ev_TYPE *watcher)" Returns a true value iff the watcher is active (i.e. it has been started and not yet been stopped). As long as a watcher is active you must not modify it. .IP "bool ev_is_pending (ev_TYPE *watcher)" 4 .IX Item "bool ev_is_pending (ev_TYPE *watcher)" Returns a true value iff the watcher is pending, (i.e. it has outstanding events but its callback has not yet been invoked). As long as a watcher is pending (but not active) you must not call an init function on it (but \&\f(CW\*(C`ev_TYPE_set\*(C'\fR is safe), you must not change its priority, and you must make sure the watcher is available to libev (e.g. you cannot \f(CW\*(C`free ()\*(C'\fR it). .IP "callback ev_cb (ev_TYPE *watcher)" 4 .IX Item "callback ev_cb (ev_TYPE *watcher)" Returns the callback currently set on the watcher. .IP "ev_set_cb (ev_TYPE *watcher, callback)" 4 .IX Item "ev_set_cb (ev_TYPE *watcher, callback)" Change the callback. You can change the callback at virtually any time (modulo threads). .IP "ev_set_priority (ev_TYPE *watcher, int priority)" 4 .IX Item "ev_set_priority (ev_TYPE *watcher, int priority)" .PD 0 .IP "int ev_priority (ev_TYPE *watcher)" 4 .IX Item "int ev_priority (ev_TYPE *watcher)" .PD Set and query the priority of the watcher. 
The priority is a small integer between \f(CW\*(C`EV_MAXPRI\*(C'\fR (default: \f(CW2\fR) and \f(CW\*(C`EV_MINPRI\*(C'\fR (default: \f(CW\*(C`\-2\*(C'\fR). Pending watchers with higher priority will be invoked before watchers with lower priority, but priority will not keep watchers from being executed (except for \f(CW\*(C`ev_idle\*(C'\fR watchers). .Sp If you need to suppress invocation when higher priority events are pending you need to look at \f(CW\*(C`ev_idle\*(C'\fR watchers, which provide this functionality. .Sp You \fImust not\fR change the priority of a watcher as long as it is active or pending. .Sp Setting a priority outside the range of \f(CW\*(C`EV_MINPRI\*(C'\fR to \f(CW\*(C`EV_MAXPRI\*(C'\fR is fine, as long as you do not mind that the priority value you query might or might not have been clamped to the valid range. .Sp The default priority used by watchers when no priority has been set is always \f(CW0\fR, which is supposed to not be too high and not be too low :). .Sp See \*(L"\s-1WATCHER PRIORITY MODELS\*(R"\s0, below, for a more thorough treatment of priorities. .IP "ev_invoke (loop, ev_TYPE *watcher, int revents)" 4 .IX Item "ev_invoke (loop, ev_TYPE *watcher, int revents)" Invoke the \f(CW\*(C`watcher\*(C'\fR with the given \f(CW\*(C`loop\*(C'\fR and \f(CW\*(C`revents\*(C'\fR. Neither \&\f(CW\*(C`loop\*(C'\fR nor \f(CW\*(C`revents\*(C'\fR need to be valid as long as the watcher callback can deal with that fact, as both are simply passed through to the callback. .IP "int ev_clear_pending (loop, ev_TYPE *watcher)" 4 .IX Item "int ev_clear_pending (loop, ev_TYPE *watcher)" If the watcher is pending, this function clears its pending status and returns its \f(CW\*(C`revents\*(C'\fR bitset (as if its callback was invoked). If the watcher isn't pending it does nothing and returns \f(CW0\fR. .Sp Sometimes it can be useful to \*(L"poll\*(R" a watcher instead of waiting for its callback to be invoked, which can be accomplished with this function. .IP "ev_feed_event (loop, ev_TYPE *watcher, int revents)" 4 .IX Item "ev_feed_event (loop, ev_TYPE *watcher, int revents)" Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher (which must be a pointer to an initialised but not necessarily started event watcher). Obviously you must not free the watcher as long as it has pending events. .Sp Stopping the watcher, letting libev invoke it, or calling \&\f(CW\*(C`ev_clear_pending\*(C'\fR will clear the pending event, even if the watcher was not started in the first place. .Sp See also \f(CW\*(C`ev_feed_fd_event\*(C'\fR and \f(CW\*(C`ev_feed_signal_event\*(C'\fR for related functions that do not need a watcher. .PP See also the \*(L"\s-1ASSOCIATING CUSTOM DATA WITH A WATCHER\*(R"\s0 and \*(L"\s-1BUILDING YOUR OWN COMPOSITE WATCHERS\*(R"\s0 idioms. .SS "\s-1WATCHER STATES\s0" .IX Subsection "WATCHER STATES" There are various watcher states mentioned throughout this manual \- active, pending and so on. In this section these states and the rules to transition between them will be described in more detail \- and while these rules might look complicated, they usually do \*(L"the right thing\*(R". .IP "initialised" 4 .IX Item "initialised" Before a watcher can be registered with the event loop it has to be initialised. This can be done with a call to \f(CW\*(C`ev_TYPE_init\*(C'\fR, or calls to \&\f(CW\*(C`ev_init\*(C'\fR followed by the watcher-specific \f(CW\*(C`ev_TYPE_set\*(C'\fR function. 
.Sp In this state it is simply some block of memory that is suitable for use in an event loop. It can be moved around, freed, reused etc. at will \- as long as you either keep the memory contents intact, or call \&\f(CW\*(C`ev_TYPE_init\*(C'\fR again. .IP "started/running/active" 4 .IX Item "started/running/active" Once a watcher has been started with a call to \f(CW\*(C`ev_TYPE_start\*(C'\fR it becomes property of the event loop, and is actively waiting for events. While in this state it cannot be accessed (except in a few documented ways), moved, freed or anything else \- the only legal thing is to keep a pointer to it, and call libev functions on it that are documented to work on active watchers. .IP "pending" 4 .IX Item "pending" If a watcher is active and libev determines that an event it is interested in has occurred (such as a timer expiring), it will become pending. It will stay in this pending state until either it is stopped or its callback is about to be invoked, so it is not normally pending inside the watcher callback. .Sp The watcher might or might not be active while it is pending (for example, an expired non-repeating timer can be pending but no longer active). If it is stopped, it can be freely accessed (e.g. by calling \f(CW\*(C`ev_TYPE_set\*(C'\fR), but it is still property of the event loop at this time, so cannot be moved, freed or reused. And if it is active the rules described in the previous item still apply. .Sp It is also possible to feed an event on a watcher that is not active (e.g. via \f(CW\*(C`ev_feed_event\*(C'\fR), in which case it becomes pending without being active. .IP "stopped" 4 .IX Item "stopped" A watcher can be stopped implicitly by libev (in which case it might still be pending), or explicitly by calling its \f(CW\*(C`ev_TYPE_stop\*(C'\fR function. The latter will clear any pending state the watcher might be in, regardless of whether it was active or not, so stopping a watcher explicitly before freeing it is often a good idea. .Sp While stopped (and not pending) the watcher is essentially in the initialised state, that is, it can be reused, moved, modified in any way you wish (but when you trash the memory block, you need to \f(CW\*(C`ev_TYPE_init\*(C'\fR it again). .SS "\s-1WATCHER PRIORITY MODELS\s0" .IX Subsection "WATCHER PRIORITY MODELS" Many event loops support \fIwatcher priorities\fR, which are usually small integers that influence the ordering of event callback invocation between watchers in some way, all else being equal. .PP In libev, watcher priorities can be set using \f(CW\*(C`ev_set_priority\*(C'\fR. See its description for the more technical details such as the actual priority range. .PP There are two common ways in which these priorities are interpreted by event loops: .PP In the more common lock-out model, higher priorities \*(L"lock out\*(R" invocation of lower priority watchers, which means as long as higher priority watchers receive events, lower priority watchers are not being invoked. .PP The less common only-for-ordering model uses priorities solely to order callback invocation within a single event loop iteration: Higher priority watchers are invoked before lower priority ones, but they all get invoked before polling for new events. .PP Libev uses the second (only-for-ordering) model for all its watchers except for idle watchers (which use the lock-out model).
.PP The rationale behind this is that implementing the lock-out model for watchers is not well supported by most kernel interfaces, and most event libraries will just poll for the same events again and again as long as their callbacks have not been executed, which is very inefficient in the common case of one high-priority watcher locking out a mass of lower priority ones. .PP Static (ordering) priorities are most useful when you have two or more watchers handling the same resource: a typical usage example is having an \&\f(CW\*(C`ev_io\*(C'\fR watcher to receive data, and an associated \f(CW\*(C`ev_timer\*(C'\fR to handle timeouts. Under load, data might be received while the program handles other jobs, but since timers normally get invoked first, the timeout handler will be executed before checking for data. In that case, giving the timer a lower priority than the I/O watcher ensures that I/O will be handled first even under adverse conditions (which is usually, but not always, what you want). .PP Since idle watchers use the \*(L"lock-out\*(R" model, meaning that idle watchers will only be executed when no same or higher priority watchers have received events, they can be used to implement the \*(L"lock-out\*(R" model when required. .PP For example, to emulate how many other event libraries handle priorities, you can associate an \f(CW\*(C`ev_idle\*(C'\fR watcher to each such watcher, and in the normal watcher callback, you just start the idle watcher. The real processing is done in the idle watcher callback. This causes libev to continuously poll and process kernel event data for the watcher, but when the lock-out case is known to be rare (which in turn is rare :), this is workable. .PP Usually, however, the lock-out model implemented that way will perform miserably under the type of load it was designed to handle. In that case, it might be preferable to stop the real watcher before starting the idle watcher, so the kernel will not have to process the event in case the actual processing will be delayed for considerable time. .PP Here is an example of an I/O watcher that should run at a strictly lower priority than the default, and which should only process data when no other events are pending: .PP .Vb 2 \& ev_idle idle; // actual processing watcher \& ev_io io; // actual event watcher \& \& static void \& io_cb (EV_P_ ev_io *w, int revents) \& { \& // stop the I/O watcher, we received the event, but \& // are not yet ready to handle it. \& ev_io_stop (EV_A_ w); \& \& // start the idle watcher to handle the actual event. \& // it will not be executed as long as other watchers \& // with the default priority are receiving events. \& ev_idle_start (EV_A_ &idle); \& } \& \& static void \& idle_cb (EV_P_ ev_idle *w, int revents) \& { \& // actual processing \& read (STDIN_FILENO, ...); \& \& // have to start the I/O watcher again, as \& // we have handled the event \& ev_io_start (EV_A_ &io); \& } \& \& // initialisation \& ev_idle_init (&idle, idle_cb); \& ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ); \& ev_io_start (EV_DEFAULT_ &io); .Ve .PP In the \*(L"real\*(R" world, it might also be beneficial to start a timer, so that low-priority connections can not be locked out forever under load. This enables your program to keep a lower latency for important connections during short periods of high load, while not completely locking out less important ones.
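.PP As a small, untested sketch of the static (ordering) priority case described above (the \f(CW\*(C`io_cb\*(C'\fR, \f(CW\*(C`timeout_cb\*(C'\fR, \f(CW\*(C`fd\*(C'\fR and \f(CW\*(C`loop\*(C'\fR names are placeholders, not something libev provides), one could give the timeout timer a lower priority than the I/O watcher, so that within one loop iteration pending I/O is handled before the timeout: .PP .Vb 12 \& ev_io io_watcher; \& ev_timer timeout_watcher; \& \& ev_io_init (&io_watcher, io_cb, fd, EV_READ); \& ev_timer_init (&timeout_watcher, timeout_cb, 60., 0.); \& \& // priorities only order callback invocation within one iteration, \& // and must be set while the watchers are neither active nor pending \& ev_set_priority (&io_watcher, 0); // default priority \& ev_set_priority (&timeout_watcher, \-1); // invoked after the I/O watcher \& \& ev_io_start (loop, &io_watcher); \& ev_timer_start (loop, &timeout_watcher); .Ve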
.SH "WATCHER TYPES" .IX Header "WATCHER TYPES" This section describes each watcher in detail, but will not repeat information given in the last section. Any initialisation/set macros, functions and members specific to the watcher type are explained. .PP Most members are additionally marked with either \fI[read\-only]\fR, meaning that, while the watcher is active, you can look at the member and expect some sensible content, but you must not modify it (you can modify it while the watcher is stopped to your hearts content), or \fI[read\-write]\fR, which means you can expect it to have some sensible content while the watcher is active, but you can also modify it (within the same thread as the event loop, i.e. without creating data races). Modifying it may not do something sensible or take immediate effect (or do anything at all), but libev will not crash or malfunction in any way. .PP In any case, the documentation for each member will explain what the effects are, and if there are any additional access restrictions. .ie n .SS """ev_io"" \- is this file descriptor readable or writable?" .el .SS "\f(CWev_io\fP \- is this file descriptor readable or writable?" .IX Subsection "ev_io - is this file descriptor readable or writable?" I/O watchers check whether a file descriptor is readable or writable in each iteration of the event loop, or, more precisely, when reading would not block the process and writing would at least be able to write some data. This behaviour is called level-triggering because you keep receiving events as long as the condition persists. Remember you can stop the watcher if you don't want to act on the event and neither want to receive future events. .PP In general you can register as many read and/or write event watchers per fd as you want (as long as you don't confuse yourself). Setting all file descriptors to non-blocking mode is also usually a good idea (but not required if you know what you are doing). .PP Another thing you have to watch out for is that it is quite easy to receive \*(L"spurious\*(R" readiness notifications, that is, your callback might be called with \f(CW\*(C`EV_READ\*(C'\fR but a subsequent \f(CW\*(C`read\*(C'\fR(2) will actually block because there is no data. It is very easy to get into this situation even with a relatively standard program structure. Thus it is best to always use non-blocking I/O: An extra \f(CW\*(C`read\*(C'\fR(2) returning \f(CW\*(C`EAGAIN\*(C'\fR is far preferable to a program hanging until some data arrives. .PP If you cannot run the fd in non-blocking mode (for example you should not play around with an Xlib connection), then you have to separately re-test whether a file descriptor is really ready with a known-to-be good interface such as poll (fortunately in the case of Xlib, it already does this on its own, so its quite safe to use). Some people additionally use \f(CW\*(C`SIGALRM\*(C'\fR and an interval timer, just to be sure you won't block indefinitely. .PP But really, best use non-blocking mode. .PP \fIThe special problem of disappearing file descriptors\fR .IX Subsection "The special problem of disappearing file descriptors" .PP Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing a file descriptor (either due to calling \f(CW\*(C`close\*(C'\fR explicitly or any other means, such as \f(CW\*(C`dup2\*(C'\fR). The reason is that you register interest in some file descriptor, but when it goes away, the operating system will silently drop this interest. 
If another file descriptor with the same number then is registered with libev, there is no efficient way to see that this is, in fact, a different file descriptor. .PP To avoid having to explicitly tell libev about such cases, libev follows the following policy: Each time \f(CW\*(C`ev_io_set\*(C'\fR is being called, libev will assume that this is potentially a new file descriptor, otherwise it is assumed that the file descriptor stays the same. That means that you \fIhave\fR to call \f(CW\*(C`ev_io_set\*(C'\fR (or \f(CW\*(C`ev_io_init\*(C'\fR) when you change the descriptor even if the file descriptor number itself did not change. .PP This is how one would do it normally anyway, the important point is that the libev application should not optimise around libev but should leave optimisations to libev. .PP \fIThe special problem of dup'ed file descriptors\fR .IX Subsection "The special problem of dup'ed file descriptors" .PP Some backends (e.g. epoll), cannot register events for file descriptors, but only events for the underlying file descriptions. That means when you have \f(CW\*(C`dup ()\*(C'\fR'ed file descriptors or weirder constellations, and register events for them, only one file descriptor might actually receive events. .PP There is no workaround possible except not registering events for potentially \f(CW\*(C`dup ()\*(C'\fR'ed file descriptors, or to resort to \&\f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR. .PP \fIThe special problem of files\fR .IX Subsection "The special problem of files" .PP Many people try to use \f(CW\*(C`select\*(C'\fR (or libev) on file descriptors representing files, and expect it to become ready when their program doesn't block on disk accesses (which can take a long time on their own). .PP However, this cannot ever work in the \*(L"expected\*(R" way \- you get a readiness notification as soon as the kernel knows whether and how much data is there, and in the case of open files, that's always the case, so you always get a readiness notification instantly, and your read (or possibly write) will still block on the disk I/O. .PP Another way to view it is that in the case of sockets, pipes, character devices and so on, there is another party (the sender) that delivers data on its own, but in the case of files, there is no such thing: the disk will not send data on its own, simply because it doesn't know what you wish to read \- you would first have to request some data. .PP Since files are typically not-so-well supported by advanced notification mechanism, libev tries hard to emulate \s-1POSIX\s0 behaviour with respect to files, even though you should not use it. The reason for this is convenience: sometimes you want to watch \s-1STDIN\s0 or \s-1STDOUT,\s0 which is usually a tty, often a pipe, but also sometimes files or special devices (for example, \f(CW\*(C`epoll\*(C'\fR on Linux works with \fI/dev/random\fR but not with \&\fI/dev/urandom\fR), and even though the file might better be served with asynchronous I/O instead of with non-blocking I/O, it is still useful when it \*(L"just works\*(R" instead of freezing. .PP So avoid file descriptors pointing to files when you know it (e.g. use libeio), but use them when it is convenient, e.g. for \s-1STDIN/STDOUT,\s0 or when you rarely read from a file instead of from a socket, and want to reuse the same code path. 
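.PP To connect this back to the descriptor-change policy described earlier in this section, the following untested sketch (\f(CW\*(C`loop\*(C'\fR, \f(CW\*(C`watcher\*(C'\fR and \f(CW\*(C`new_fd\*(C'\fR are placeholder names) shows the usual sequence after the application has closed and reopened the descriptor it was watching, even if the new descriptor happens to have the same number as the old one: .PP .Vb 4 \& // the old descriptor was closed and new_fd was obtained, e.g. via open or dup2 \& ev_io_stop (loop, &watcher); \& ev_io_set (&watcher, new_fd, EV_READ); // tell libev this might be a new fd \& ev_io_start (loop, &watcher); .Ve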
.PP \fIThe special problem of fork\fR .IX Subsection "The special problem of fork" .PP Some backends (epoll, kqueue, linuxaio, iouring) do not support \f(CW\*(C`fork ()\*(C'\fR at all or exhibit useless behaviour. Libev fully supports fork, but needs to be told about it in the child if you want to continue to use it in the child. .PP To support fork in your child processes, you have to call \f(CW\*(C`ev_loop_fork ()\*(C'\fR after a fork in the child, enable \f(CW\*(C`EVFLAG_FORKCHECK\*(C'\fR, or resort to \&\f(CW\*(C`EVBACKEND_SELECT\*(C'\fR or \f(CW\*(C`EVBACKEND_POLL\*(C'\fR. .PP \fIThe special problem of \s-1SIGPIPE\s0\fR .IX Subsection "The special problem of SIGPIPE" .PP While not really specific to libev, it is easy to forget about \f(CW\*(C`SIGPIPE\*(C'\fR: when writing to a pipe whose other end has been closed, your program gets sent a \s-1SIGPIPE,\s0 which, by default, aborts your program. For most programs this is sensible behaviour, for daemons, this is usually undesirable. .PP So when you encounter spurious, unexplained daemon exits, make sure you ignore \s-1SIGPIPE\s0 (and maybe make sure you log the exit status of your daemon somewhere, as that would have given you a big clue). .PP \fIThe special problem of \f(BIaccept()\fIing when you can't\fR .IX Subsection "The special problem of accept()ing when you can't" .PP Many implementations of the \s-1POSIX\s0 \f(CW\*(C`accept\*(C'\fR function (for example, found in post\-2004 Linux) have the peculiar behaviour of not removing a connection from the pending queue in all error cases. .PP For example, larger servers often run out of file descriptors (because of resource limits), causing \f(CW\*(C`accept\*(C'\fR to fail with \f(CW\*(C`ENFILE\*(C'\fR but not rejecting the connection, leading to libev signalling readiness on the next iteration again (the connection still exists after all), and typically causing the program to loop at 100% \s-1CPU\s0 usage. .PP Unfortunately, the set of errors that cause this issue differs between operating systems, there is usually little the app can do to remedy the situation, and no known thread-safe method of removing the connection to cope with overload is known (to me). .PP One of the easiest ways to handle this situation is to just ignore it \&\- when the program encounters an overload, it will just loop until the situation is over. While this is a form of busy waiting, no \s-1OS\s0 offers an event-based way to handle this situation, so it's the best one can do. .PP A better way to handle the situation is to log any errors other than \&\f(CW\*(C`EAGAIN\*(C'\fR and \f(CW\*(C`EWOULDBLOCK\*(C'\fR, making sure not to flood the log with such messages, and continue as usual, which at least gives the user an idea of what could be wrong (\*(L"raise the ulimit!\*(R"). For extra points one could stop the \f(CW\*(C`ev_io\*(C'\fR watcher on the listening fd \*(L"for a while\*(R", which reduces \s-1CPU\s0 usage. .PP If your program is single-threaded, then you could also keep a dummy file descriptor for overload situations (e.g. by opening \fI/dev/null\fR), and when you run into \f(CW\*(C`ENFILE\*(C'\fR or \f(CW\*(C`EMFILE\*(C'\fR, close it, run \f(CW\*(C`accept\*(C'\fR, close that fd, and create a new dummy fd. This will gracefully refuse clients under typical overload conditions. .PP The last way to handle it is to simply log the error and \f(CW\*(C`exit\*(C'\fR, as is often done with \f(CW\*(C`malloc\*(C'\fR failures, but this results in an easy opportunity for a DoS attack. 
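.PP As a concrete, untested sketch of the logging approach described above (\f(CW\*(C`handle_connection\*(C'\fR and \f(CW\*(C`log_error\*(C'\fR are placeholder names, and a real program would additionally rate-limit the log messages), an accept callback might look like this: .PP .Vb 10 \& static void \& accept_cb (EV_P_ ev_io *w, int revents) \& { \& int fd = accept (w\->fd, 0, 0); \& \& if (fd >= 0) \& handle_connection (fd); // placeholder for real connection setup \& else if (errno != EAGAIN && errno != EWOULDBLOCK) \& log_error ("accept: %s", strerror (errno)); // placeholder; continue as usual \& } .Ve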
.PP \fIWatcher-Specific Functions\fR .IX Subsection "Watcher-Specific Functions" .IP "ev_io_init (ev_io *, callback, int fd, int events)" 4 .IX Item "ev_io_init (ev_io *, callback, int fd, int events)" .PD 0 .IP "ev_io_set (ev_io *, int fd, int events)" 4 .IX Item "ev_io_set (ev_io *, int fd, int events)" .PD Configures an \f(CW\*(C`ev_io\*(C'\fR watcher. The \f(CW\*(C`fd\*(C'\fR is the file descriptor to receive events for and \f(CW\*(C`events\*(C'\fR is either \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR, both \&\f(CW\*(C`EV_READ | EV_WRITE\*(C'\fR or \f(CW0\fR, to express the desire to receive the given events. .Sp Note that setting the \f(CW\*(C`events\*(C'\fR to \f(CW0\fR and starting the watcher is supported, but not specially optimized \- if your program sometimes happens to generate this combination this is fine, but if it is easy to avoid starting an io watcher watching for no events you should do so. .IP "ev_io_modify (ev_io *, int events)" 4 .IX Item "ev_io_modify (ev_io *, int events)" Similar to \f(CW\*(C`ev_io_set\*(C'\fR, but only changes the requested events. Using this might be faster with some backends, as libev can assume that the \f(CW\*(C`fd\*(C'\fR still refers to the same underlying file description, something it cannot do when using \f(CW\*(C`ev_io_set\*(C'\fR. .IP "int fd [no\-modify]" 4 .IX Item "int fd [no-modify]" The file descriptor being watched. While it can be read at any time, you must not modify this member even when the watcher is stopped \- always use \&\f(CW\*(C`ev_io_set\*(C'\fR for that. .IP "int events [no\-modify]" 4 .IX Item "int events [no-modify]" The set of events the fd is being watched for, among other flags. Remember that this is a bit set \- to test for \f(CW\*(C`EV_READ\*(C'\fR, use \f(CW\*(C`w\->events & EV_READ\*(C'\fR, and similarly for \f(CW\*(C`EV_WRITE\*(C'\fR. .Sp As with \f(CW\*(C`fd\*(C'\fR, you must not modify this member even when the watcher is stopped, always use \f(CW\*(C`ev_io_set\*(C'\fR or \f(CW\*(C`ev_io_modify\*(C'\fR for that. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Call \f(CW\*(C`stdin_readable_cb\*(C'\fR when \s-1STDIN_FILENO\s0 has become, well readable, but only once. Since it is likely line-buffered, you could attempt to read a whole line in the callback. .PP .Vb 6 \& static void \& stdin_readable_cb (struct ev_loop *loop, ev_io *w, int revents) \& { \& ev_io_stop (loop, w); \& .. read from stdin here (or from w\->fd) and handle any I/O errors \& } \& \& ... \& struct ev_loop *loop = ev_default_init (0); \& ev_io stdin_readable; \& ev_io_init (&stdin_readable, stdin_readable_cb, STDIN_FILENO, EV_READ); \& ev_io_start (loop, &stdin_readable); \& ev_run (loop, 0); .Ve .ie n .SS """ev_timer"" \- relative and optionally repeating timeouts" .el .SS "\f(CWev_timer\fP \- relative and optionally repeating timeouts" .IX Subsection "ev_timer - relative and optionally repeating timeouts" Timer watchers are simple relative timers that generate an event after a given time, and optionally repeating in regular intervals after that. .PP The timers are based on real time, that is, if you register an event that times out after an hour and you reset your system clock to January last year, it will still time out after (roughly) one hour. \*(L"Roughly\*(R" because detecting time jumps is hard, and some inaccuracies are unavoidable (the monotonic clock option helps a lot here). 
.PP The callback is guaranteed to be invoked only \fIafter\fR its timeout has passed (not \fIat\fR, so on systems with very low-resolution clocks this might introduce a small delay, see \*(L"the special problem of being too early\*(R", below). If multiple timers become ready during the same loop iteration then the ones with earlier time-out values are invoked before ones of the same priority with later time-out values (but this is no longer true when a callback calls \f(CW\*(C`ev_run\*(C'\fR recursively). .PP \fIBe smart about timeouts\fR .IX Subsection "Be smart about timeouts" .PP Many real-world problems involve some kind of timeout, usually for error recovery. A typical example is an \s-1HTTP\s0 request \- if the other side hangs, you want to raise some error after a while. .PP What follows are some ways to handle this problem, from obvious and inefficient to smart and efficient. .PP In the following, a 60 second activity timeout is assumed \- a timeout that gets reset to 60 seconds each time there is activity (e.g. each time some data or other life sign was received). .IP "1. Use a timer and stop, reinitialise and start it on activity." 4 .IX Item "1. Use a timer and stop, reinitialise and start it on activity." This is the most obvious, but not the most simple way: In the beginning, start the watcher: .Sp .Vb 2 \& ev_timer_init (timer, callback, 60., 0.); \& ev_timer_start (loop, timer); .Ve .Sp Then, each time there is some activity, \f(CW\*(C`ev_timer_stop\*(C'\fR it, initialise it and start it again: .Sp .Vb 3 \& ev_timer_stop (loop, timer); \& ev_timer_set (timer, 60., 0.); \& ev_timer_start (loop, timer); .Ve .Sp This is relatively simple to implement, but means that each time there is some activity, libev will first have to remove the timer from its internal data structure and then add it again. Libev tries to be fast, but it's still not a constant-time operation. .ie n .IP "2. Use a timer and re-start it with ""ev_timer_again"" inactivity." 4 .el .IP "2. Use a timer and re-start it with \f(CWev_timer_again\fR inactivity." 4 .IX Item "2. Use a timer and re-start it with ev_timer_again inactivity." This is the easiest way, and involves using \f(CW\*(C`ev_timer_again\*(C'\fR instead of \&\f(CW\*(C`ev_timer_start\*(C'\fR. .Sp To implement this, configure an \f(CW\*(C`ev_timer\*(C'\fR with a \f(CW\*(C`repeat\*(C'\fR value of \f(CW60\fR and then call \f(CW\*(C`ev_timer_again\*(C'\fR at start and each time you successfully read or write some data. If you go into an idle state where you do not expect data to travel on the socket, you can \f(CW\*(C`ev_timer_stop\*(C'\fR the timer, and \f(CW\*(C`ev_timer_again\*(C'\fR will automatically restart it if need be. .Sp That means you can ignore both the \f(CW\*(C`ev_timer_start\*(C'\fR function and the \&\f(CW\*(C`after\*(C'\fR argument to \f(CW\*(C`ev_timer_set\*(C'\fR, and only ever use the \f(CW\*(C`repeat\*(C'\fR member and \f(CW\*(C`ev_timer_again\*(C'\fR. 
.Sp At start: .Sp .Vb 3 \& ev_init (timer, callback); \& timer\->repeat = 60.; \& ev_timer_again (loop, timer); .Ve .Sp Each time there is some activity: .Sp .Vb 1 \& ev_timer_again (loop, timer); .Ve .Sp It is even possible to change the time-out on the fly, regardless of whether the watcher is active or not: .Sp .Vb 2 \& timer\->repeat = 30.; \& ev_timer_again (loop, timer); .Ve .Sp This is slightly more efficient than stopping/starting the timer each time you want to modify its timeout value, as libev does not have to completely remove and re-insert the timer from/into its internal data structure. .Sp It is, however, even simpler than the \*(L"obvious\*(R" way to do it. .IP "3. Let the timer time out, but then re-arm it as required." 4 .IX Item "3. Let the timer time out, but then re-arm it as required." This method is more tricky, but usually most efficient: Most timeouts are relatively long compared to the intervals between other activity \- in our example, within 60 seconds, there are usually many I/O events with associated activity resets. .Sp In this case, it would be more efficient to leave the \f(CW\*(C`ev_timer\*(C'\fR alone, but remember the time of last activity, and check for a real timeout only within the callback: .Sp .Vb 3 \& ev_tstamp timeout = 60.; \& ev_tstamp last_activity; // time of last activity \& ev_timer timer; \& \& static void \& callback (EV_P_ ev_timer *w, int revents) \& { \& // calculate when the timeout would happen \& ev_tstamp after = last_activity \- ev_now (EV_A) + timeout; \& \& // if negative, it means the timeout already occurred \& if (after < 0.) \& { \& // timeout occurred, take action \& } \& else \& { \& // callback was invoked, but there was some recent \& // activity. simply restart the timer to time out \& // after "after" seconds, which is the earliest time \& // the timeout can occur. \& ev_timer_set (w, after, 0.); \& ev_timer_start (EV_A_ w); \& } \& } .Ve .Sp To summarise the callback: first calculate in how many seconds the timeout will occur (by calculating the absolute time when it would occur, \&\f(CW\*(C`last_activity + timeout\*(C'\fR, and subtracting the current time, \f(CW\*(C`ev_now (EV_A)\*(C'\fR from that). .Sp If this value is negative, then we are already past the timeout, i.e. we timed out, and need to do whatever is needed in this case. .Sp Otherwise, we now know the earliest time at which the timeout would trigger, and simply start the timer with this timeout value. .Sp In other words, each time the callback is invoked it will check whether the timeout occurred. If not, it will simply reschedule itself to check again at the earliest time it could time out. Rinse. Repeat. .Sp This scheme causes more callback invocations (about one every 60 seconds minus half the average time between activity), but virtually no calls to libev to change the timeout.
.Sp To start the machinery, simply initialise the watcher and set \&\f(CW\*(C`last_activity\*(C'\fR to the current time (meaning there was some activity just now), then call the callback, which will \*(L"do the right thing\*(R" and start the timer: .Sp .Vb 3 \& last_activity = ev_now (EV_A); \& ev_init (&timer, callback); \& callback (EV_A_ &timer, 0); .Ve .Sp When there is some activity, simply store the current time in \&\f(CW\*(C`last_activity\*(C'\fR, no libev calls at all: .Sp .Vb 2 \& if (activity detected) \& last_activity = ev_now (EV_A); .Ve .Sp When your timeout value changes, then the timeout can be changed by simply providing a new value, stopping the timer and calling the callback, which will again do the right thing (for example, time out immediately :). .Sp .Vb 3 \& timeout = new_value; \& ev_timer_stop (EV_A_ &timer); \& callback (EV_A_ &timer, 0); .Ve .Sp This technique is slightly more complex, but in most cases where the time-out is unlikely to be triggered, much more efficient. .IP "4. Wee, just use a double-linked list for your timeouts." 4 .IX Item "4. Wee, just use a double-linked list for your timeouts." If there is not one request, but many thousands (millions...), all employing some kind of timeout with the same timeout value, then one can do even better: .Sp When starting the timeout, calculate the timeout value and put the timeout at the \fIend\fR of the list. .Sp Then use an \f(CW\*(C`ev_timer\*(C'\fR to fire when the timeout at the \fIbeginning\fR of the list is expected to fire (for example, using technique #3). .Sp When there is some activity, remove the timer from the list, recalculate the timeout, append it to the end of the list again, and make sure to update the \f(CW\*(C`ev_timer\*(C'\fR if it was taken from the beginning of the list. .Sp This way, one can manage an unlimited number of timeouts in O(1) time for starting, stopping and updating the timers, at the expense of a major complication, and having to use a constant timeout. The constant timeout ensures that the list stays sorted. .PP So which method is the best? .PP Method #2 is a simple no-brain-required solution that is adequate in most situations. Method #3 requires a bit more thinking, but handles many cases better, and isn't very complicated either. In most cases, choosing either one is fine, with #3 being better in typical situations. .PP Method #1 is almost always a bad idea, and buys you nothing. Method #4 is rather complicated, but extremely efficient, something that really pays off after the first million or so of active timers, i.e. it's usually overkill :) .PP \fIThe special problem of being too early\fR .IX Subsection "The special problem of being too early" .PP If you ask a timer to call your callback after three seconds, then you expect it to be invoked after three seconds \- but of course, this cannot be guaranteed to infinite precision. Less obviously, it cannot be guaranteed to any precision by libev \- imagine somebody suspending the process with a \s-1STOP\s0 signal for a few hours for example. .PP So, libev tries to invoke your callback as soon as possible \fIafter\fR the delay has occurred, but cannot guarantee this. .PP A less obvious failure mode is calling your callback too early: many event loops compare timestamps with an \*(L"elapsed delay >= requested delay\*(R", but this can cause your callback to be invoked much earlier than you would expect.
.PP To see why, imagine a system with a clock that only offers full second resolution (think windows if you can't come up with a broken enough \s-1OS\s0 yourself). If you schedule a one-second timer at the time 500.9, then the event loop will schedule your timeout to elapse at a system time of 500 (500.9 truncated to the resolution) + 1, or 501. .PP If an event library looks at the timeout 0.1s later, it will see \*(L"501 >= 501\*(R" and invoke the callback 0.1s after it was started, even though a one-second delay was requested \- this is being \*(L"too early\*(R", despite best intentions. .PP This is the reason why libev will never invoke the callback if the elapsed delay equals the requested delay, but only when the elapsed delay is larger than the requested delay. In the example above, libev would only invoke the callback at system time 502, or 1.1s after the timer was started. .PP So, while libev cannot guarantee that your callback will be invoked exactly when requested, it \fIcan\fR and \fIdoes\fR guarantee that the requested delay has actually elapsed, or in other words, it always errs on the \*(L"too late\*(R" side of things. .PP \fIThe special problem of time updates\fR .IX Subsection "The special problem of time updates" .PP Establishing the current time is a costly operation (it usually takes at least one system call): \s-1EV\s0 therefore updates its idea of the current time only before and after \f(CW\*(C`ev_run\*(C'\fR collects new events, which causes a growing difference between \f(CW\*(C`ev_now ()\*(C'\fR and \f(CW\*(C`ev_time ()\*(C'\fR when handling lots of events in one iteration. .PP The relative timeouts are calculated relative to the \f(CW\*(C`ev_now ()\*(C'\fR time. This is usually the right thing as this timestamp refers to the time of the event triggering whatever timeout you are modifying/starting. If you suspect event processing to be delayed and you \fIneed\fR to base the timeout on the current time, use something like the following to adjust for it: .PP .Vb 1 \& ev_timer_set (&timer, after + (ev_time () \- ev_now ()), 0.); .Ve .PP If the event loop is suspended for a long time, you can also force an update of the time returned by \f(CW\*(C`ev_now ()\*(C'\fR by calling \f(CW\*(C`ev_now_update ()\*(C'\fR, although that will push the event time of all outstanding events further into the future. .PP \fIThe special problem of unsynchronised clocks\fR .IX Subsection "The special problem of unsynchronised clocks" .PP Modern systems have a variety of clocks \- libev itself uses the normal \&\*(L"wall clock\*(R" clock and, if available, the monotonic clock (to avoid time jumps). .PP Neither of these clocks is synchronised with each other or any other clock on the system, so \f(CW\*(C`ev_time ()\*(C'\fR might return a considerably different time than \f(CW\*(C`gettimeofday ()\*(C'\fR or \f(CW\*(C`time ()\*(C'\fR. On a GNU/Linux system, for example, a call to \f(CW\*(C`gettimeofday\*(C'\fR might return a second count that is one higher than a directly following call to \f(CW\*(C`time\*(C'\fR. .PP The moral of this is to only compare libev-related timestamps with \&\f(CW\*(C`ev_time ()\*(C'\fR and \f(CW\*(C`ev_now ()\*(C'\fR, at least if you want better precision than a second or so. 
.PP One more problem arises due to this lack of synchronisation: if libev uses the system monotonic clock and you compare timestamps from \f(CW\*(C`ev_time\*(C'\fR or \f(CW\*(C`ev_now\*(C'\fR from when you started your timer and when your callback is invoked, you will find that sometimes the callback is a bit \*(L"early\*(R". .PP This is because \f(CW\*(C`ev_timer\*(C'\fRs work in real time, not wall clock time, so libev makes sure your callback is not invoked before the delay happened, \&\fImeasured according to the real time\fR, not the system clock. .PP If your timeouts are based on a physical timescale (e.g. \*(L"time out this connection after 100 seconds\*(R") then this shouldn't bother you as it is exactly the right behaviour. .PP If you want to compare wall clock/system timestamps to your timers, then you need to use \f(CW\*(C`ev_periodic\*(C'\fRs, as these are based on the wall clock time, where your comparisons will always generate correct results. .PP \fIThe special problems of suspended animation\fR .IX Subsection "The special problems of suspended animation" .PP When you leave the server world it is quite customary to hit machines that can suspend/hibernate \- what happens to the clocks during such a suspend? .PP Some quick tests made with a Linux 2.6.28 indicate that a suspend freezes all processes, while the clocks (\f(CW\*(C`times\*(C'\fR, \f(CW\*(C`CLOCK_MONOTONIC\*(C'\fR) continue to run until the system is suspended, but they will not advance while the system is suspended. That means, on resume, it will be as if the program was frozen for a few seconds, but the suspend time will not be counted towards \f(CW\*(C`ev_timer\*(C'\fR when a monotonic clock source is used. The real time clock advanced as expected, but if it is used as sole clocksource, then a long suspend would be detected as a time jump by libev, and timers would be adjusted accordingly. .PP I would not be surprised to see different behaviour between operating systems, \s-1OS\s0 versions or even different hardware. .PP The other form of suspend (job control, or sending a \s-1SIGSTOP\s0) will see a time jump in the monotonic clocks and the realtime clock. If the program is suspended for a very long time, and monotonic clock sources are in use, then you can expect \f(CW\*(C`ev_timer\*(C'\fRs to expire as the full suspension time will be counted towards the timers. When no monotonic clock source is in use, then libev will again assume a timejump and adjust accordingly. .PP It might be beneficial for this latter case to call \f(CW\*(C`ev_suspend\*(C'\fR and \f(CW\*(C`ev_resume\*(C'\fR in code that handles \f(CW\*(C`SIGTSTP\*(C'\fR, to at least get deterministic behaviour in this case (you can do nothing against \&\f(CW\*(C`SIGSTOP\*(C'\fR). .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)" 4 .IX Item "ev_timer_init (ev_timer *, callback, ev_tstamp after, ev_tstamp repeat)" .PD 0 .IP "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)" 4 .IX Item "ev_timer_set (ev_timer *, ev_tstamp after, ev_tstamp repeat)" .PD Configure the timer to trigger after \f(CW\*(C`after\*(C'\fR seconds (fractional and negative values are supported). If \f(CW\*(C`repeat\*(C'\fR is \f(CW0.\fR, then it will automatically be stopped once the timeout is reached.
If it is positive, then the timer will automatically be configured to trigger again \f(CW\*(C`repeat\*(C'\fR seconds later, again, and again, until stopped manually. .Sp The timer itself will do a best-effort at avoiding drift, that is, if you configure a timer to trigger every 10 seconds, then it will normally trigger at exactly 10 second intervals. If, however, your program cannot keep up with the timer (because it takes longer than those 10 seconds to do stuff) the timer will not fire more than once per event loop iteration. .IP "ev_timer_again (loop, ev_timer *)" 4 .IX Item "ev_timer_again (loop, ev_timer *)" This will act as if the timer timed out, and restarts it again if it is repeating. It basically works like calling \f(CW\*(C`ev_timer_stop\*(C'\fR, updating the timeout to the \f(CW\*(C`repeat\*(C'\fR value and calling \f(CW\*(C`ev_timer_start\*(C'\fR. .Sp The exact semantics are as in the following rules, all of which will be applied to the watcher: .RS 4 .IP "If the timer is pending, the pending status is always cleared." 4 .IX Item "If the timer is pending, the pending status is always cleared." .PD 0 .IP "If the timer is started but non-repeating, stop it (as if it timed out, without invoking it)." 4 .IX Item "If the timer is started but non-repeating, stop it (as if it timed out, without invoking it)." .ie n .IP "If the timer is repeating, make the ""repeat"" value the new timeout and start the timer, if necessary." 4 .el .IP "If the timer is repeating, make the \f(CWrepeat\fR value the new timeout and start the timer, if necessary." 4 .IX Item "If the timer is repeating, make the repeat value the new timeout and start the timer, if necessary." .RE .RS 4 .PD .Sp This sounds a bit complicated, see \*(L"Be smart about timeouts\*(R", above, for a usage example. .RE .IP "ev_tstamp ev_timer_remaining (loop, ev_timer *)" 4 .IX Item "ev_tstamp ev_timer_remaining (loop, ev_timer *)" Returns the remaining time until a timer fires. If the timer is active, then this time is relative to the current event loop time, otherwise it's the timeout value currently configured. .Sp That is, after an \f(CW\*(C`ev_timer_set (w, 5, 7)\*(C'\fR, \f(CW\*(C`ev_timer_remaining\*(C'\fR returns \&\f(CW5\fR. When the timer is started and one second passes, \f(CW\*(C`ev_timer_remaining\*(C'\fR will return \f(CW4\fR. When the timer expires and is restarted, it will return roughly \f(CW7\fR (likely slightly less as callback invocation takes some time, too), and so on. .IP "ev_tstamp repeat [read\-write]" 4 .IX Item "ev_tstamp repeat [read-write]" The current \f(CW\*(C`repeat\*(C'\fR value. Will be used each time the watcher times out or \f(CW\*(C`ev_timer_again\*(C'\fR is called, and determines the next timeout (if any), which is also when any modifications are taken into account. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Create a timer that fires after 60 seconds. .PP .Vb 5 \& static void \& one_minute_cb (struct ev_loop *loop, ev_timer *w, int revents) \& { \& .. one minute over, w is actually stopped right here \& } \& \& ev_timer mytimer; \& ev_timer_init (&mytimer, one_minute_cb, 60., 0.); \& ev_timer_start (loop, &mytimer); .Ve .PP Example: Create a timeout timer that times out after 10 seconds of inactivity. .PP .Vb 5 \& static void \& timeout_cb (struct ev_loop *loop, ev_timer *w, int revents) \& { \& .. 
ten seconds without any activity \& } \& \& ev_timer mytimer; \& ev_timer_init (&mytimer, timeout_cb, 0., 10.); /* note, only repeat used */ \& ev_timer_again (loop, &mytimer); /* start timer */ \& ev_run (loop, 0); \& \& // and in some piece of code that gets executed on any "activity": \& // reset the timeout to start ticking again at 10 seconds \& ev_timer_again (loop, &mytimer); .Ve .ie n .SS """ev_periodic"" \- to cron or not to cron?" .el .SS "\f(CWev_periodic\fP \- to cron or not to cron?" .IX Subsection "ev_periodic - to cron or not to cron?" Periodic watchers are also timers of a kind, but they are very versatile (and unfortunately a bit complex). .PP Unlike \f(CW\*(C`ev_timer\*(C'\fR, periodic watchers are not based on real time (or relative time, the physical time that passes) but on wall clock time (absolute time, the thing you can read on your calendar or clock). The difference is that wall clock time can run faster or slower than real time, and time jumps are not uncommon (e.g. when you adjust your wrist-watch). .PP You can tell a periodic watcher to trigger after some specific point in time: for example, if you tell a periodic watcher to trigger \*(L"in 10 seconds\*(R" (by specifying e.g. \f(CW\*(C`ev_now () + 10.\*(C'\fR, that is, an absolute time not a delay) and then reset your system clock to January of the previous year, then it will take a year or more to trigger the event (unlike an \&\f(CW\*(C`ev_timer\*(C'\fR, which would still trigger roughly 10 seconds after starting it, as it uses a relative timeout). .PP \&\f(CW\*(C`ev_periodic\*(C'\fR watchers can also be used to implement vastly more complex timers, such as triggering an event on each \*(L"midnight, local time\*(R", or other complicated rules. This cannot easily be done with \f(CW\*(C`ev_timer\*(C'\fR watchers, as those cannot react to time jumps. .PP As with timers, the callback is guaranteed to be invoked only when the point in time where it is supposed to trigger has passed. If multiple timers become ready during the same loop iteration then the ones with earlier time-out values are invoked before ones with later time-out values (but this is no longer true when a callback calls \f(CW\*(C`ev_run\*(C'\fR recursively). .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_periodic_init (ev_periodic *, callback, ev_tstamp offset, ev_tstamp interval, reschedule_cb)" 4 .IX Item "ev_periodic_init (ev_periodic *, callback, ev_tstamp offset, ev_tstamp interval, reschedule_cb)" .PD 0 .IP "ev_periodic_set (ev_periodic *, ev_tstamp offset, ev_tstamp interval, reschedule_cb)" 4 .IX Item "ev_periodic_set (ev_periodic *, ev_tstamp offset, ev_tstamp interval, reschedule_cb)" .PD Lots of arguments, let's sort it out... There are basically three modes of operation, and we will explain them from simplest to most complex: .RS 4 .IP "\(bu" 4 absolute timer (offset = absolute time, interval = 0, reschedule_cb = 0) .Sp In this configuration the watcher triggers an event after the wall clock time \f(CW\*(C`offset\*(C'\fR has passed. It will not repeat and will not adjust when a time jump occurs, that is, if it is to be run at January 1st 2011 then it will be stopped and invoked when the system clock reaches or surpasses this point in time.
.IP "\(bu" 4 repeating interval timer (offset = offset within interval, interval > 0, reschedule_cb = 0) .Sp In this mode the watcher will always be scheduled to time out at the next \&\f(CW\*(C`offset + N * interval\*(C'\fR time (for some integer N, which can also be negative) and then repeat, regardless of any time jumps. The \f(CW\*(C`offset\*(C'\fR argument is merely an offset into the \f(CW\*(C`interval\*(C'\fR periods. .Sp This can be used to create timers that do not drift with respect to the system clock, for example, here is an \f(CW\*(C`ev_periodic\*(C'\fR that triggers each hour, on the hour (with respect to \s-1UTC\s0): .Sp .Vb 1 \& ev_periodic_set (&periodic, 0., 3600., 0); .Ve .Sp This doesn't mean there will always be 3600 seconds in between triggers, but only that the callback will be called when the system time shows a full hour (\s-1UTC\s0), or more correctly, when the system time is evenly divisible by 3600. .Sp Another way to think about it (for the mathematically inclined) is that \&\f(CW\*(C`ev_periodic\*(C'\fR will try to run the callback in this mode at the next possible time where \f(CW\*(C`time = offset (mod interval)\*(C'\fR, regardless of any time jumps. .Sp The \f(CW\*(C`interval\*(C'\fR \fI\s-1MUST\s0\fR be positive, and for numerical stability, the interval value should be higher than \f(CW\*(C`1/8192\*(C'\fR (which is around 100 microseconds) and \f(CW\*(C`offset\*(C'\fR should be higher than \f(CW0\fR and should have at most a similar magnitude as the current time (say, within a factor of ten). Typical values for offset are, in fact, \f(CW0\fR or something between \&\f(CW0\fR and \f(CW\*(C`interval\*(C'\fR, which is also the recommended range. .Sp Note also that there is an upper limit to how often a timer can fire (\s-1CPU\s0 speed for example), so if \f(CW\*(C`interval\*(C'\fR is very small then timing stability will of course deteriorate. Libev itself tries to be exact to be about one millisecond (if the \s-1OS\s0 supports it and the machine is fast enough). .IP "\(bu" 4 manual reschedule mode (offset ignored, interval ignored, reschedule_cb = callback) .Sp In this mode the values for \f(CW\*(C`interval\*(C'\fR and \f(CW\*(C`offset\*(C'\fR are both being ignored. Instead, each time the periodic watcher gets scheduled, the reschedule callback will be called with the watcher as first, and the current time as second argument. .Sp \&\s-1NOTE:\s0 \fIThis callback \s-1MUST NOT\s0 stop or destroy any periodic watcher, ever, or make \s-1ANY\s0 other event loop modifications whatsoever, unless explicitly allowed by documentation here\fR. .Sp If you need to stop it, return \f(CW\*(C`now + 1e30\*(C'\fR (or so, fudge fudge) and stop it afterwards (e.g. by starting an \f(CW\*(C`ev_prepare\*(C'\fR watcher, which is the only event loop modification you are allowed to do). .Sp The callback prototype is \f(CW\*(C`ev_tstamp (*reschedule_cb)(ev_periodic *w, ev_tstamp now)\*(C'\fR, e.g.: .Sp .Vb 5 \& static ev_tstamp \& my_rescheduler (ev_periodic *w, ev_tstamp now) \& { \& return now + 60.; \& } .Ve .Sp It must return the next time to trigger, based on the passed time value (that is, the lowest time value larger than to the second argument). It will usually be called just before the callback will be triggered, but might be called at other times, too. .Sp \&\s-1NOTE:\s0 \fIThis callback must always return a time that is higher than or equal to the passed \f(CI\*(C`now\*(C'\fI value\fR. 
.Sp This can be used to create very complex timers, such as a timer that triggers on \*(L"next midnight, local time\*(R". To do this, you would calculate the next midnight after \f(CW\*(C`now\*(C'\fR and return the timestamp value for this. Here is a (completely untested, no error checking) example on how to do this: .Sp .Vb 1 \& #include <time.h> \& \& static ev_tstamp \& my_rescheduler (ev_periodic *w, ev_tstamp now) \& { \& time_t tnow = (time_t)now; \& struct tm tm; \& localtime_r (&tnow, &tm); \& \& tm.tm_sec = tm.tm_min = tm.tm_hour = 0; // midnight current day \& ++tm.tm_mday; // midnight next day \& \& return mktime (&tm); \& } .Ve .Sp Note: this code might run into trouble on days that have more than two midnights (beginning and end). .RE .RS 4 .RE .IP "ev_periodic_again (loop, ev_periodic *)" 4 .IX Item "ev_periodic_again (loop, ev_periodic *)" Simply stops and restarts the periodic watcher again. This is only useful when you changed some parameters or the reschedule callback would return a different time than the last time it was called (e.g. in a crond like program when the crontabs have changed). .IP "ev_tstamp ev_periodic_at (ev_periodic *)" 4 .IX Item "ev_tstamp ev_periodic_at (ev_periodic *)" When active, returns the absolute time that the watcher is supposed to trigger next. This is not the same as the \f(CW\*(C`offset\*(C'\fR argument to \&\f(CW\*(C`ev_periodic_set\*(C'\fR, but indeed works even in interval and manual rescheduling modes. .IP "ev_tstamp offset [read\-write]" 4 .IX Item "ev_tstamp offset [read-write]" When repeating, this contains the offset value, otherwise this is the absolute point in time (the \f(CW\*(C`offset\*(C'\fR value passed to \f(CW\*(C`ev_periodic_set\*(C'\fR, although libev might modify this value for better numerical stability). .Sp Can be modified any time, but changes only take effect when the periodic timer fires or \f(CW\*(C`ev_periodic_again\*(C'\fR is being called. .IP "ev_tstamp interval [read\-write]" 4 .IX Item "ev_tstamp interval [read-write]" The current interval value. Can be modified any time, but changes only take effect when the periodic timer fires or \f(CW\*(C`ev_periodic_again\*(C'\fR is being called. .IP "ev_tstamp (*reschedule_cb)(ev_periodic *w, ev_tstamp now) [read\-write]" 4 .IX Item "ev_tstamp (*reschedule_cb)(ev_periodic *w, ev_tstamp now) [read-write]" The current reschedule callback, or \f(CW0\fR, if this functionality is switched off. Can be changed any time, but changes only take effect when the periodic timer fires or \f(CW\*(C`ev_periodic_again\*(C'\fR is being called. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Call a callback every hour, or, more precisely, whenever the system time is divisible by 3600. The callback invocation times have potentially a lot of jitter, but good long-term stability. .PP .Vb 5 \& static void \& clock_cb (struct ev_loop *loop, ev_periodic *w, int revents) \& { \& ... it's now a full hour (UTC, or TAI or whatever your clock follows) \& } \& \& ev_periodic hourly_tick; \& ev_periodic_init (&hourly_tick, clock_cb, 0., 3600., 0); \& ev_periodic_start (loop, &hourly_tick); .Ve .PP Example: The same as above, but use a reschedule callback to do it: .PP .Vb 1 \& #include <math.h> \& \& static ev_tstamp \& my_scheduler_cb (ev_periodic *w, ev_tstamp now) \& { \& return now + (3600.
\- fmod (now, 3600.)); \& } \& \& ev_periodic_init (&hourly_tick, clock_cb, 0., 0., my_scheduler_cb); .Ve .PP Example: Call a callback every hour, starting now: .PP .Vb 4 \& ev_periodic hourly_tick; \& ev_periodic_init (&hourly_tick, clock_cb, \& fmod (ev_now (loop), 3600.), 3600., 0); \& ev_periodic_start (loop, &hourly_tick); .Ve .ie n .SS """ev_signal"" \- signal me when a signal gets signalled!" .el .SS "\f(CWev_signal\fP \- signal me when a signal gets signalled!" .IX Subsection "ev_signal - signal me when a signal gets signalled!" Signal watchers will trigger an event when the process receives a specific signal one or more times. Even though signals are very asynchronous, libev will try its best to deliver signals synchronously, i.e. as part of the normal event processing, like any other event. .PP If you want signals to be delivered truly asynchronously, just use \&\f(CW\*(C`sigaction\*(C'\fR as you would do without libev and forget about sharing the signal. You can even use \f(CW\*(C`ev_async\*(C'\fR from a signal handler to synchronously wake up an event loop. .PP You can configure as many watchers as you like for the same signal, but only within the same loop, i.e. you can watch for \f(CW\*(C`SIGINT\*(C'\fR in your default loop and for \f(CW\*(C`SIGIO\*(C'\fR in another loop, but you cannot watch for \&\f(CW\*(C`SIGINT\*(C'\fR in both the default loop and another loop at the same time. At the moment, \f(CW\*(C`SIGCHLD\*(C'\fR is permanently tied to the default loop. .PP Only after the first watcher for a signal is started will libev actually register something with the kernel. It thus coexists with your own signal handlers as long as you don't register any with libev for the same signal. .PP If possible and supported, libev will install its handlers with \&\f(CW\*(C`SA_RESTART\*(C'\fR (or equivalent) behaviour enabled, so system calls should not be unduly interrupted. If you have a problem with system calls getting interrupted by signals you can block all signals in an \f(CW\*(C`ev_check\*(C'\fR watcher and unblock them in an \f(CW\*(C`ev_prepare\*(C'\fR watcher. .PP \fIThe special problem of inheritance over fork/execve/pthread_create\fR .IX Subsection "The special problem of inheritance over fork/execve/pthread_create" .PP Both the signal mask (\f(CW\*(C`sigprocmask\*(C'\fR) and the signal disposition (\f(CW\*(C`sigaction\*(C'\fR) are unspecified after starting a signal watcher (and after stopping it again), that is, libev might or might not block the signal, and might or might not set or restore the installed signal handler (but see \f(CW\*(C`EVFLAG_NOSIGMASK\*(C'\fR). .PP While this does not matter for the signal disposition (libev never sets signals to \f(CW\*(C`SIG_IGN\*(C'\fR, so handlers will be reset to \f(CW\*(C`SIG_DFL\*(C'\fR on \&\f(CW\*(C`execve\*(C'\fR), this matters for the signal mask: many programs do not expect certain signals to be blocked. .PP This means that before calling \f(CW\*(C`exec\*(C'\fR (from the child) you should reset the signal mask to whatever \*(L"default\*(R" you expect (all clear is a good choice usually). .PP The simplest way to ensure that the signal mask is reset in the child is to install a fork handler with \f(CW\*(C`pthread_atfork\*(C'\fR that resets it. That will catch fork calls done by libraries (such as the libc) as well. .PP In current versions of libev, the signal will not be blocked indefinitely unless you use the \f(CW\*(C`signalfd\*(C'\fR \s-1API\s0 (\f(CW\*(C`EV_SIGNALFD\*(C'\fR). 
While this reduces the window of opportunity for problems, it will not go away, as libev \&\fIhas\fR to modify the signal mask, at least temporarily. .PP So I can't stress this enough: \fIIf you do not reset your signal mask when you expect it to be empty, you have a race condition in your code\fR. This is not a libev-specific thing, this is true for most event libraries. .PP \fIThe special problem of threads signal handling\fR .IX Subsection "The special problem of threads signal handling" .PP \&\s-1POSIX\s0 threads has problematic signal handling semantics, specifically, a lot of functionality (sigfd, sigwait etc.) only really works if all threads in a process block signals, which is hard to achieve. .PP When you want to use sigwait (or mix libev signal handling with your own for the same signals), you can tackle this problem by globally blocking all signals before creating any threads (or creating them with a fully set sigprocmask) and also specifying the \f(CW\*(C`EVFLAG_NOSIGMASK\*(C'\fR when creating loops. Then designate one thread as \*(L"signal receiver thread\*(R" which handles these signals. You can pass on any signals that libev might be interested in by calling \f(CW\*(C`ev_feed_signal\*(C'\fR. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_signal_init (ev_signal *, callback, int signum)" 4 .IX Item "ev_signal_init (ev_signal *, callback, int signum)" .PD 0 .IP "ev_signal_set (ev_signal *, int signum)" 4 .IX Item "ev_signal_set (ev_signal *, int signum)" .PD Configures the watcher to trigger on the given signal number (usually one of the \f(CW\*(C`SIGxxx\*(C'\fR constants). .IP "int signum [read\-only]" 4 .IX Item "int signum [read-only]" The signal the watcher watches out for. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Try to exit cleanly on \s-1SIGINT.\s0 .PP .Vb 5 \& static void \& sigint_cb (struct ev_loop *loop, ev_signal *w, int revents) \& { \& ev_break (loop, EVBREAK_ALL); \& } \& \& ev_signal signal_watcher; \& ev_signal_init (&signal_watcher, sigint_cb, SIGINT); \& ev_signal_start (loop, &signal_watcher); .Ve .ie n .SS """ev_child"" \- watch out for process status changes" .el .SS "\f(CWev_child\fP \- watch out for process status changes" .IX Subsection "ev_child - watch out for process status changes" Child watchers trigger when your process receives a \s-1SIGCHLD\s0 in response to some child status changes (most typically when a child of yours dies or exits). It is permissible to install a child watcher \fIafter\fR the child has been forked (which implies it might have already exited), as long as the event loop isn't entered (or is continued from a watcher), i.e., forking and then immediately registering a watcher for the child is fine, but forking and registering a watcher a few event loop iterations later or in the next callback invocation is not. .PP Only the default event loop is capable of handling signals, and therefore you can only register child watchers in the default event loop. .PP Due to some design glitches inside libev, child watchers will always be handled at maximum priority (their priority is set to \f(CW\*(C`EV_MAXPRI\*(C'\fR by libev) .PP \fIProcess Interaction\fR .IX Subsection "Process Interaction" .PP Libev grabs \f(CW\*(C`SIGCHLD\*(C'\fR as soon as the default event loop is initialised. This is necessary to guarantee proper behaviour even if the first child watcher is started after the child exits. 
The occurrence of \f(CW\*(C`SIGCHLD\*(C'\fR is recorded asynchronously, but child reaping is done synchronously as part of the event loop processing. Libev always reaps all children, even ones not watched. .PP \fIOverriding the Built-In Processing\fR .IX Subsection "Overriding the Built-In Processing" .PP Libev offers no special support for overriding the built-in child processing, but if your application collides with libev's default child handler, you can override it easily by installing your own handler for \&\f(CW\*(C`SIGCHLD\*(C'\fR after initialising the default loop, and making sure the default loop never gets destroyed. You are encouraged, however, to use an event-based approach to child reaping and thus use libev's support for that, so other libev users can use \f(CW\*(C`ev_child\*(C'\fR watchers freely. .PP \fIStopping the Child Watcher\fR .IX Subsection "Stopping the Child Watcher" .PP Currently, the child watcher never gets stopped, even when the child terminates, so normally one needs to stop the watcher in the callback. Future versions of libev might stop the watcher automatically when a child exit is detected (calling \f(CW\*(C`ev_child_stop\*(C'\fR twice is not a problem). .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_child_init (ev_child *, callback, int pid, int trace)" 4 .IX Item "ev_child_init (ev_child *, callback, int pid, int trace)" .PD 0 .IP "ev_child_set (ev_child *, int pid, int trace)" 4 .IX Item "ev_child_set (ev_child *, int pid, int trace)" .PD Configures the watcher to wait for status changes of process \f(CW\*(C`pid\*(C'\fR (or \&\fIany\fR process if \f(CW\*(C`pid\*(C'\fR is specified as \f(CW0\fR). The callback can look at the \f(CW\*(C`rstatus\*(C'\fR member of the \f(CW\*(C`ev_child\*(C'\fR watcher structure to see the status word (use the macros from \f(CW\*(C`sys/wait.h\*(C'\fR and see your systems \&\f(CW\*(C`waitpid\*(C'\fR documentation). The \f(CW\*(C`rpid\*(C'\fR member contains the pid of the process causing the status change. \f(CW\*(C`trace\*(C'\fR must be either \f(CW0\fR (only activate the watcher when the process terminates) or \f(CW1\fR (additionally activate the watcher when the process is stopped or continued). .IP "int pid [read\-only]" 4 .IX Item "int pid [read-only]" The process id this watcher watches out for, or \f(CW0\fR, meaning any process id. .IP "int rpid [read\-write]" 4 .IX Item "int rpid [read-write]" The process id that detected a status change. .IP "int rstatus [read\-write]" 4 .IX Item "int rstatus [read-write]" The process exit/trace status caused by \f(CW\*(C`rpid\*(C'\fR (see your systems \&\f(CW\*(C`waitpid\*(C'\fR and \f(CW\*(C`sys/wait.h\*(C'\fR documentation for details). .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: \f(CW\*(C`fork()\*(C'\fR a new process and install a child handler to wait for its completion. .PP .Vb 1 \& ev_child cw; \& \& static void \& child_cb (EV_P_ ev_child *w, int revents) \& { \& ev_child_stop (EV_A_ w); \& printf ("process %d exited with status %x\en", w\->rpid, w\->rstatus); \& } \& \& pid_t pid = fork (); \& \& if (pid < 0) \& // error \& else if (pid == 0) \& { \& // the forked child executes here \& exit (1); \& } \& else \& { \& ev_child_init (&cw, child_cb, pid, 0); \& ev_child_start (EV_DEFAULT_ &cw); \& } .Ve .ie n .SS """ev_stat"" \- did the file attributes just change?" .el .SS "\f(CWev_stat\fP \- did the file attributes just change?" 
.IX Subsection "ev_stat - did the file attributes just change?" This watches a file system path for attribute changes. That is, it calls \&\f(CW\*(C`stat\*(C'\fR on that path in regular intervals (or when the \s-1OS\s0 says it changed) and sees if it changed compared to the last time, invoking the callback if it did. Starting the watcher \f(CW\*(C`stat\*(C'\fR's the file, so only changes that happen after the watcher has been started will be reported. .PP The path does not need to exist: changing from \*(L"path exists\*(R" to \*(L"path does not exist\*(R" is a status change like any other. The condition \*(L"path does not exist\*(R" (or more correctly \*(L"path cannot be stat'ed\*(R") is signified by the \&\f(CW\*(C`st_nlink\*(C'\fR field being zero (which is otherwise always forced to be at least one) and all the other fields of the stat buffer having unspecified contents. .PP The path \fImust not\fR end in a slash or contain special components such as \&\f(CW\*(C`.\*(C'\fR or \f(CW\*(C`..\*(C'\fR. The path \fIshould\fR be absolute: If it is relative and your working directory changes, then the behaviour is undefined. .PP Since there is no portable change notification interface available, the portable implementation simply calls \f(CWstat(2)\fR regularly on the path to see if it changed somehow. You can specify a recommended polling interval for this case. If you specify a polling interval of \f(CW0\fR (highly recommended!) then a \fIsuitable, unspecified default\fR value will be used (which you can expect to be around five seconds, although this might change dynamically). Libev will also impose a minimum interval which is currently around \f(CW0.1\fR, but that's usually overkill. .PP This watcher type is not meant for massive numbers of stat watchers, as even with OS-supported change notifications, this can be resource-intensive. .PP At the time of this writing, the only OS-specific interface implemented is the Linux inotify interface (implementing kqueue support is left as an exercise for the reader. Note, however, that the author sees no way of implementing \f(CW\*(C`ev_stat\*(C'\fR semantics with kqueue, except as a hint). .PP \fI\s-1ABI\s0 Issues (Largefile Support)\fR .IX Subsection "ABI Issues (Largefile Support)" .PP Libev by default (unless the user overrides this) uses the default compilation environment, which means that on systems with large file support disabled by default, you get the 32 bit version of the stat structure. When using the library from programs that change the \s-1ABI\s0 to use 64 bit file offsets the programs will fail. In that case you have to compile libev with the same flags to get binary compatibility. This is obviously the case with any flags that change the \s-1ABI,\s0 but the problem is most noticeably displayed with ev_stat and large file support. .PP The solution for this is to lobby your distribution maker to make large file interfaces available by default (as e.g. FreeBSD does) and not optional. Libev cannot simply switch on large file support because it has to exchange stat structures with application programs compiled using the default compilation environment. .PP \fIInotify and Kqueue\fR .IX Subsection "Inotify and Kqueue" .PP When \f(CW\*(C`inotify (7)\*(C'\fR support has been compiled into libev and present at runtime, it will be used to speed up change detection where possible. The inotify descriptor will be created lazily when the first \f(CW\*(C`ev_stat\*(C'\fR watcher is being started. 
.PP Inotify presence does not change the semantics of \f(CW\*(C`ev_stat\*(C'\fR watchers except that changes might be detected earlier, and in some cases, to avoid making regular \f(CW\*(C`stat\*(C'\fR calls. Even in the presence of inotify support there are many cases where libev has to resort to regular \f(CW\*(C`stat\*(C'\fR polling, but as long as kernel 2.6.25 or newer is used (2.6.24 and older have too many bugs), the path exists (i.e. stat succeeds), and the path resides on a local filesystem (libev currently assumes only ext2/3, jfs, reiserfs and xfs are fully working) libev usually gets away without polling. .PP There is no support for kqueue, as apparently it cannot be used to implement this functionality, due to the requirement of having a file descriptor open on the object at all times, and detecting renames, unlinks etc. is difficult. .PP \fI\f(CI\*(C`stat ()\*(C'\fI is a synchronous operation\fR .IX Subsection "stat () is a synchronous operation" .PP Libev doesn't normally do any kind of I/O itself, and so is not blocking the process. The exception is \f(CW\*(C`ev_stat\*(C'\fR watchers \- those call \f(CW\*(C`stat ()\*(C'\fR, which is a synchronous operation. .PP For local paths, this usually doesn't matter: unless the system is very busy or the intervals between stat's are large, a stat call will be fast, as the path data is usually in memory already (except when starting the watcher). .PP For networked file systems, calling \f(CW\*(C`stat ()\*(C'\fR can block an indefinite time due to network issues, and even under good conditions, a stat call often takes multiple milliseconds. .PP Therefore, it is best to avoid using \f(CW\*(C`ev_stat\*(C'\fR watchers on networked paths, although this is fully supported by libev. .PP \fIThe special problem of stat time resolution\fR .IX Subsection "The special problem of stat time resolution" .PP The \f(CW\*(C`stat ()\*(C'\fR system call only supports full-second resolution portably, and even on systems where the resolution is higher, most file systems still only support whole seconds. .PP That means that, if the time is the only thing that changes, you can easily miss updates: on the first update, \f(CW\*(C`ev_stat\*(C'\fR detects a change and calls your callback, which does something. When there is another update within the same second, \f(CW\*(C`ev_stat\*(C'\fR will be unable to detect it unless the stat data does change in other ways (e.g. file size). .PP The solution to this is to delay acting on a change for slightly more than a second (or till slightly after the next full second boundary), using a roughly one-second-delay \f(CW\*(C`ev_timer\*(C'\fR (e.g. \f(CW\*(C`ev_timer_set (w, 0., 1.02); ev_timer_again (loop, w)\*(C'\fR). .PP The \f(CW.02\fR offset is added to work around small timing inconsistencies of some operating systems (where the second counter of the current time might be delayed. One such system is the Linux kernel, where a call to \&\f(CW\*(C`gettimeofday\*(C'\fR might return a timestamp a full second later than a subsequent \f(CW\*(C`time\*(C'\fR call \- if the equivalent of \f(CW\*(C`time ()\*(C'\fR is used to update file times then there will be a small window where the kernel uses the previous second to update file times but libev might already execute the timer callback).
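.PP
As noted under \*(L"stat () is a synchronous operation\*(R" above, calling
\&\f(CW\*(C`stat ()\*(C'\fR on a networked path can stall the whole loop. One possible
workaround \- this is only a sketch, not something libev provides, and it
assumes a \s-1POSIX\s0 threads environment with made-up names and a made-up
path \- is to avoid \f(CW\*(C`ev_stat\*(C'\fR for such paths entirely and instead call
\&\f(CW\*(C`stat ()\*(C'\fR from a worker thread, waking the loop up with the
\&\f(CW\*(C`ev_async\*(C'\fR watcher type described later in this document:
.PP
.Vb 10
\&   static ev_async remote_stat_done;
\&   static int remote_path_exists; /* written by the worker only, no locking in this sketch */
\&
\&   static void *
\&   remote_stat_thread (void *arg)
\&   {
\&     const char *path = arg; /* the networked path to watch */
\&     struct stat sb;
\&
\&     for (;;)
\&       {
\&         remote_path_exists = stat (path, &sb) == 0;
\&         ev_async_send (EV_DEFAULT_ &remote_stat_done);
\&         sleep (5); /* crude polling interval, entirely up to you */
\&       }
\&   }
\&
\&   static void
\&   remote_stat_cb (EV_P_ ev_async *w, int revents)
\&   {
\&     printf ("remote path %s\en", remote_path_exists ? "exists" : "is gone");
\&   }
\&
\&   ...
\&   static pthread_t tid;
\&
\&   ev_async_init (&remote_stat_done, remote_stat_cb);
\&   ev_async_start (EV_DEFAULT_ &remote_stat_done);
\&   pthread_create (&tid, 0, remote_stat_thread, "/net/share/some/file");
.Ve
.PP
Since \f(CW\*(C`ev_async_send\*(C'\fR is explicitly thread safe, no further
synchronisation is needed for the wake-up itself; sharing more data than a
single flag would of course require a lock, as shown in the threading
examples further below.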
.PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)" 4 .IX Item "ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)" .PD 0 .IP "ev_stat_set (ev_stat *, const char *path, ev_tstamp interval)" 4 .IX Item "ev_stat_set (ev_stat *, const char *path, ev_tstamp interval)" .PD Configures the watcher to wait for status changes of the given \&\f(CW\*(C`path\*(C'\fR. The \f(CW\*(C`interval\*(C'\fR is a hint on how quickly a change is expected to be detected and should normally be specified as \f(CW0\fR to let libev choose a suitable value. The memory pointed to by \f(CW\*(C`path\*(C'\fR must point to the same path for as long as the watcher is active. .Sp The callback will receive an \f(CW\*(C`EV_STAT\*(C'\fR event when a change was detected, relative to the attributes at the time the watcher was started (or the last change was detected). .IP "ev_stat_stat (loop, ev_stat *)" 4 .IX Item "ev_stat_stat (loop, ev_stat *)" Updates the stat buffer immediately with new values. If you change the watched path in your callback, you could call this function to avoid detecting this change (while introducing a race condition if you are not the only one changing the path). Can also be useful simply to find out the new values. .IP "ev_statdata attr [read\-only]" 4 .IX Item "ev_statdata attr [read-only]" The most-recently detected attributes of the file. Although the type is \&\f(CW\*(C`ev_statdata\*(C'\fR, this is usually the (or one of the) \f(CW\*(C`struct stat\*(C'\fR types suitable for your system, but you can only rely on the POSIX-standardised members to be present. If the \f(CW\*(C`st_nlink\*(C'\fR member is \f(CW0\fR, then there was some error while \f(CW\*(C`stat\*(C'\fRing the file. .IP "ev_statdata prev [read\-only]" 4 .IX Item "ev_statdata prev [read-only]" The previous attributes of the file. The callback gets invoked whenever \&\f(CW\*(C`prev\*(C'\fR != \f(CW\*(C`attr\*(C'\fR, or, more precisely, one or more of these members differ: \f(CW\*(C`st_dev\*(C'\fR, \f(CW\*(C`st_ino\*(C'\fR, \f(CW\*(C`st_mode\*(C'\fR, \f(CW\*(C`st_nlink\*(C'\fR, \f(CW\*(C`st_uid\*(C'\fR, \&\f(CW\*(C`st_gid\*(C'\fR, \f(CW\*(C`st_rdev\*(C'\fR, \f(CW\*(C`st_size\*(C'\fR, \f(CW\*(C`st_atime\*(C'\fR, \f(CW\*(C`st_mtime\*(C'\fR, \f(CW\*(C`st_ctime\*(C'\fR. .IP "ev_tstamp interval [read\-only]" 4 .IX Item "ev_tstamp interval [read-only]" The specified interval. .IP "const char *path [read\-only]" 4 .IX Item "const char *path [read-only]" The file system path that is being watched. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Watch \f(CW\*(C`/etc/passwd\*(C'\fR for attribute changes. .PP .Vb 10 \& static void \& passwd_cb (struct ev_loop *loop, ev_stat *w, int revents) \& { \& /* /etc/passwd changed in some way */ \& if (w\->attr.st_nlink) \& { \& printf ("passwd current size %ld\en", (long)w\->attr.st_size); \& printf ("passwd current atime %ld\en", (long)w\->attr.st_mtime); \& printf ("passwd current mtime %ld\en", (long)w\->attr.st_mtime); \& } \& else \& /* you shalt not abuse printf for puts */ \& puts ("wow, /etc/passwd is not there, expect problems. " \& "if this is windows, they already arrived\en"); \& } \& \& ... 
\& ev_stat passwd; \& \& ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.); \& ev_stat_start (loop, &passwd); .Ve .PP Example: Like above, but additionally use a one-second delay so we do not miss updates (however, frequent updates will delay processing, too, so one might do the work both on \f(CW\*(C`ev_stat\*(C'\fR callback invocation \fIand\fR on \&\f(CW\*(C`ev_timer\*(C'\fR callback invocation). .PP .Vb 2 \& static ev_stat passwd; \& static ev_timer timer; \& \& static void \& timer_cb (EV_P_ ev_timer *w, int revents) \& { \& ev_timer_stop (EV_A_ w); \& \& /* now it\*(Aqs one second after the most recent passwd change */ \& } \& \& static void \& stat_cb (EV_P_ ev_stat *w, int revents) \& { \& /* reset the one\-second timer */ \& ev_timer_again (EV_A_ &timer); \& } \& \& ... \& ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.); \& ev_stat_start (loop, &passwd); \& ev_timer_init (&timer, timer_cb, 0., 1.02); .Ve .ie n .SS """ev_idle"" \- when you've got nothing better to do..." .el .SS "\f(CWev_idle\fP \- when you've got nothing better to do..." .IX Subsection "ev_idle - when you've got nothing better to do..." Idle watchers trigger events when no other events of the same or higher priority are pending (prepare, check and other idle watchers do not count as receiving \*(L"events\*(R"). .PP That is, as long as your process is busy handling sockets or timeouts (or even signals, imagine) of the same or higher priority it will not be triggered. But when your process is idle (or only lower-priority watchers are pending), the idle watchers are being called once per event loop iteration \- until stopped, that is, or your process receives more events and becomes busy again with higher priority stuff. .PP The most noteworthy effect is that as long as any idle watchers are active, the process will not block when waiting for new events. .PP Apart from keeping your process non-blocking (which is a useful effect on its own sometimes), idle watchers are a good place to do \&\*(L"pseudo-background processing\*(R", or delay processing stuff to after the event loop has handled all outstanding events. .PP \fIAbusing an \f(CI\*(C`ev_idle\*(C'\fI watcher for its side-effect\fR .IX Subsection "Abusing an ev_idle watcher for its side-effect" .PP As long as there is at least one active idle watcher, libev will never sleep unnecessarily. Or in other words, it will loop as fast as possible. For this to work, the idle watcher doesn't need to be invoked at all \- the lowest priority will do. .PP This mode of operation can be useful together with an \f(CW\*(C`ev_check\*(C'\fR watcher, to do something on each event loop iteration \- for example to balance load between different connections. .PP See \*(L"Abusing an ev_check watcher for its side-effect\*(R" for a longer example. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_idle_init (ev_idle *, callback)" 4 .IX Item "ev_idle_init (ev_idle *, callback)" Initialises and configures the idle watcher \- it has no parameters of any kind. There is a \f(CW\*(C`ev_idle_set\*(C'\fR macro, but using it is utterly pointless, believe me. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Dynamically allocate an \f(CW\*(C`ev_idle\*(C'\fR watcher, start it, and in the callback, free it. Also, use no error checking, as usual. 
.PP .Vb 5 \& static void \& idle_cb (struct ev_loop *loop, ev_idle *w, int revents) \& { \& // stop the watcher \& ev_idle_stop (loop, w); \& \& // now we can free it \& free (w); \& \& // now do something you wanted to do when the program has \& // no longer anything immediate to do. \& } \& \& ev_idle *idle_watcher = malloc (sizeof (ev_idle)); \& ev_idle_init (idle_watcher, idle_cb); \& ev_idle_start (loop, idle_watcher); .Ve .ie n .SS """ev_prepare"" and ""ev_check"" \- customise your event loop!" .el .SS "\f(CWev_prepare\fP and \f(CWev_check\fP \- customise your event loop!" .IX Subsection "ev_prepare and ev_check - customise your event loop!" Prepare and check watchers are often (but not always) used in pairs: prepare watchers get invoked before the process blocks and check watchers afterwards. .PP You \fImust not\fR call \f(CW\*(C`ev_run\*(C'\fR (or similar functions that enter the current event loop) or \f(CW\*(C`ev_loop_fork\*(C'\fR from either \f(CW\*(C`ev_prepare\*(C'\fR or \&\f(CW\*(C`ev_check\*(C'\fR watchers. Other loops than the current one are fine, however. The rationale behind this is that you do not need to check for recursion in those watchers, i.e. the sequence will always be \&\f(CW\*(C`ev_prepare\*(C'\fR, blocking, \f(CW\*(C`ev_check\*(C'\fR so if you have one watcher of each kind they will always be called in pairs bracketing the blocking call. .PP Their main purpose is to integrate other event mechanisms into libev and their use is somewhat advanced. They could be used, for example, to track variable changes, implement your own watchers, integrate net-snmp or a coroutine library and lots more. They are also occasionally useful if you cache some data and want to flush it before blocking (for example, in X programs you might want to do an \f(CW\*(C`XFlush ()\*(C'\fR in an \f(CW\*(C`ev_prepare\*(C'\fR watcher). .PP This is done by examining in each prepare call which file descriptors need to be watched by the other library, registering \f(CW\*(C`ev_io\*(C'\fR watchers for them and starting an \f(CW\*(C`ev_timer\*(C'\fR watcher for any timeouts (many libraries provide exactly this functionality). Then, in the check watcher, you check for any events that occurred (by checking the pending status of all watchers and stopping them) and call back into the library. The I/O and timer callbacks will never actually be called (but must be valid nevertheless, because you never know, you know?). .PP As another example, the Perl Coro module uses these hooks to integrate coroutines into libev programs, by yielding to other active coroutines during each prepare and only letting the process block if no coroutines are ready to run (it's actually more complicated: it only runs coroutines with priority higher than or equal to the event loop and one coroutine of lower priority, but only once, using idle watchers to keep the event loop from blocking if lower-priority coroutines are active, thus mapping low-priority coroutines to idle/background tasks). .PP When used for this purpose, it is recommended to give \f(CW\*(C`ev_check\*(C'\fR watchers highest (\f(CW\*(C`EV_MAXPRI\*(C'\fR) priority, to ensure that they are being run before any other watchers after the poll (this doesn't matter for \f(CW\*(C`ev_prepare\*(C'\fR watchers). .PP Also, \f(CW\*(C`ev_check\*(C'\fR watchers (and \f(CW\*(C`ev_prepare\*(C'\fR watchers, too) should not activate (\*(L"feed\*(R") events into libev. 
While libev fully supports this, they might get executed before other \f(CW\*(C`ev_check\*(C'\fR watchers did their job. As \&\f(CW\*(C`ev_check\*(C'\fR watchers are often used to embed other (non-libev) event loops those other event loops might be in an unusable state until their \&\f(CW\*(C`ev_check\*(C'\fR watcher ran (always remind yourself to coexist peacefully with others). .PP \fIAbusing an \f(CI\*(C`ev_check\*(C'\fI watcher for its side-effect\fR .IX Subsection "Abusing an ev_check watcher for its side-effect" .PP \&\f(CW\*(C`ev_check\*(C'\fR (and less often also \f(CW\*(C`ev_prepare\*(C'\fR) watchers can also be useful because they are called once per event loop iteration. For example, if you want to handle a large number of connections fairly, you normally only do a bit of work for each active connection, and if there is more work to do, you wait for the next event loop iteration, so other connections have a chance of making progress. .PP Using an \f(CW\*(C`ev_check\*(C'\fR watcher is almost enough: it will be called on the next event loop iteration. However, that isn't as soon as possible \- without external events, your \f(CW\*(C`ev_check\*(C'\fR watcher will not be invoked. .PP This is where \f(CW\*(C`ev_idle\*(C'\fR watchers come in handy \- all you need is a single global idle watcher that is active as long as you have one active \&\f(CW\*(C`ev_check\*(C'\fR watcher. The \f(CW\*(C`ev_idle\*(C'\fR watcher makes sure the event loop will not sleep, and the \f(CW\*(C`ev_check\*(C'\fR watcher makes sure a callback gets invoked. Neither watcher alone can do that. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_prepare_init (ev_prepare *, callback)" 4 .IX Item "ev_prepare_init (ev_prepare *, callback)" .PD 0 .IP "ev_check_init (ev_check *, callback)" 4 .IX Item "ev_check_init (ev_check *, callback)" .PD Initialises and configures the prepare or check watcher \- they have no parameters of any kind. There are \f(CW\*(C`ev_prepare_set\*(C'\fR and \f(CW\*(C`ev_check_set\*(C'\fR macros, but using them is utterly, utterly, utterly and completely pointless. .PP \fIExamples\fR .IX Subsection "Examples" .PP There are a number of principal ways to embed other event loops or modules into libev. Here are some ideas on how to include libadns into libev (there is a Perl module named \f(CW\*(C`EV::ADNS\*(C'\fR that does this, which you could use as a working example. Another Perl module named \f(CW\*(C`EV::Glib\*(C'\fR embeds a Glib main context into libev, and finally, \f(CW\*(C`Glib::EV\*(C'\fR embeds \s-1EV\s0 into the Glib event loop). .PP Method 1: Add \s-1IO\s0 watchers and a timeout watcher in a prepare handler, and in a check watcher, destroy them and call into libadns. What follows is pseudo-code only of course. This requires you to either use a low priority for the check watcher or use \f(CW\*(C`ev_clear_pending\*(C'\fR explicitly, as the callbacks for the IO/timeout watchers might not have been called yet. .PP .Vb 2 \& static ev_io iow [nfd]; \& static ev_timer tw; \& \& static void \& io_cb (struct ev_loop *loop, ev_io *w, int revents) \& { \& } \& \& // create io watchers for each fd and a timer before blocking \& static void \& adns_prepare_cb (struct ev_loop *loop, ev_prepare *w, int revents) \& { \& int timeout = 3600000; \& struct pollfd fds [nfd]; \& // actual code will need to loop here and realloc etc. 
\& adns_beforepoll (ads, fds, &nfd, &timeout, timeval_from (ev_time ())); \& \& /* the callback is illegal, but won\*(Aqt be called as we stop during check */ \& ev_timer_init (&tw, 0, timeout * 1e\-3, 0.); \& ev_timer_start (loop, &tw); \& \& // create one ev_io per pollfd \& for (int i = 0; i < nfd; ++i) \& { \& ev_io_init (iow + i, io_cb, fds [i].fd, \& ((fds [i].events & POLLIN ? EV_READ : 0) \& | (fds [i].events & POLLOUT ? EV_WRITE : 0))); \& \& fds [i].revents = 0; \& ev_io_start (loop, iow + i); \& } \& } \& \& // stop all watchers after blocking \& static void \& adns_check_cb (struct ev_loop *loop, ev_check *w, int revents) \& { \& ev_timer_stop (loop, &tw); \& \& for (int i = 0; i < nfd; ++i) \& { \& // set the relevant poll flags \& // could also call adns_processreadable etc. here \& struct pollfd *fd = fds + i; \& int revents = ev_clear_pending (iow + i); \& if (revents & EV_READ ) fd\->revents |= fd\->events & POLLIN; \& if (revents & EV_WRITE) fd\->revents |= fd\->events & POLLOUT; \& \& // now stop the watcher \& ev_io_stop (loop, iow + i); \& } \& \& adns_afterpoll (adns, fds, nfd, timeval_from (ev_now (loop)); \& } .Ve .PP Method 2: This would be just like method 1, but you run \f(CW\*(C`adns_afterpoll\*(C'\fR in the prepare watcher and would dispose of the check watcher. .PP Method 3: If the module to be embedded supports explicit event notification (libadns does), you can also make use of the actual watcher callbacks, and only destroy/create the watchers in the prepare watcher. .PP .Vb 5 \& static void \& timer_cb (EV_P_ ev_timer *w, int revents) \& { \& adns_state ads = (adns_state)w\->data; \& update_now (EV_A); \& \& adns_processtimeouts (ads, &tv_now); \& } \& \& static void \& io_cb (EV_P_ ev_io *w, int revents) \& { \& adns_state ads = (adns_state)w\->data; \& update_now (EV_A); \& \& if (revents & EV_READ ) adns_processreadable (ads, w\->fd, &tv_now); \& if (revents & EV_WRITE) adns_processwriteable (ads, w\->fd, &tv_now); \& } \& \& // do not ever call adns_afterpoll .Ve .PP Method 4: Do not use a prepare or check watcher because the module you want to embed is not flexible enough to support it. Instead, you can override their poll function. The drawback with this solution is that the main loop is now no longer controllable by \s-1EV.\s0 The \f(CW\*(C`Glib::EV\*(C'\fR module uses this approach, effectively embedding \s-1EV\s0 as a client into the horrible libglib event loop. .PP .Vb 4 \& static gint \& event_poll_func (GPollFD *fds, guint nfds, gint timeout) \& { \& int got_events = 0; \& \& for (n = 0; n < nfds; ++n) \& // create/start io watcher that sets the relevant bits in fds[n] and increment got_events \& \& if (timeout >= 0) \& // create/start timer \& \& // poll \& ev_run (EV_A_ 0); \& \& // stop timer again \& if (timeout >= 0) \& ev_timer_stop (EV_A_ &to); \& \& // stop io watchers again \- their callbacks should have set \& for (n = 0; n < nfds; ++n) \& ev_io_stop (EV_A_ iow [n]); \& \& return got_events; \& } .Ve .ie n .SS """ev_embed"" \- when one backend isn't enough..." .el .SS "\f(CWev_embed\fP \- when one backend isn't enough..." .IX Subsection "ev_embed - when one backend isn't enough..." This is a rather advanced watcher type that lets you embed one event loop into another (currently only \f(CW\*(C`ev_io\*(C'\fR events are supported in the embedded loop, other types of watchers might be handled in a delayed or incorrect fashion and must not be used). 
.PP There are primarily two reasons you would want that: work around bugs and prioritise I/O. .PP As an example for a bug workaround, the kqueue backend might only support sockets on some platform, so it is unusable as generic backend, but you still want to make use of it because you have many sockets and it scales so nicely. In this case, you would create a kqueue-based loop and embed it into your default loop (which might use e.g. poll). Overall operation will be a bit slower because first libev has to call \f(CW\*(C`poll\*(C'\fR and then \&\f(CW\*(C`kevent\*(C'\fR, but at least you can use both mechanisms for what they are best: \f(CW\*(C`kqueue\*(C'\fR for scalable sockets and \f(CW\*(C`poll\*(C'\fR if you want it to work :) .PP As for prioritising I/O: under rare circumstances you have the case where some fds have to be watched and handled very quickly (with low latency), and even priorities and idle watchers might have too much overhead. In this case you would put all the high priority stuff in one loop and all the rest in a second one, and embed the second one in the first. .PP As long as the watcher is active, the callback will be invoked every time there might be events pending in the embedded loop. The callback must then call \f(CW\*(C`ev_embed_sweep (mainloop, watcher)\*(C'\fR to make a single sweep and invoke their callbacks (the callback doesn't need to invoke the \&\f(CW\*(C`ev_embed_sweep\*(C'\fR function directly, it could also start an idle watcher to give the embedded loop strictly lower priority for example). .PP You can also set the callback to \f(CW0\fR, in which case the embed watcher will automatically execute the embedded loop sweep whenever necessary. .PP Fork detection will be handled transparently while the \f(CW\*(C`ev_embed\*(C'\fR watcher is active, i.e., the embedded loop will automatically be forked when the embedding loop forks. In other cases, the user is responsible for calling \&\f(CW\*(C`ev_loop_fork\*(C'\fR on the embedded loop. .PP Unfortunately, not all backends are embeddable: only the ones returned by \&\f(CW\*(C`ev_embeddable_backends\*(C'\fR are, which, unfortunately, does not include any portable one. .PP So when you want to use this feature you will always have to be prepared that you cannot get an embeddable loop. The recommended way to get around this is to have a separate variables for your embeddable loop, try to create it, and if that fails, use the normal loop for everything. .PP \fI\f(CI\*(C`ev_embed\*(C'\fI and fork\fR .IX Subsection "ev_embed and fork" .PP While the \f(CW\*(C`ev_embed\*(C'\fR watcher is running, forks in the embedding loop will automatically be applied to the embedded loop as well, so no special fork handling is required in that case. When the watcher is not running, however, it is still the task of the libev user to call \f(CW\*(C`ev_loop_fork ()\*(C'\fR as applicable. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)" 4 .IX Item "ev_embed_init (ev_embed *, callback, struct ev_loop *embedded_loop)" .PD 0 .IP "ev_embed_set (ev_embed *, struct ev_loop *embedded_loop)" 4 .IX Item "ev_embed_set (ev_embed *, struct ev_loop *embedded_loop)" .PD Configures the watcher to embed the given loop, which must be embeddable. 
If the callback is \f(CW0\fR, then \f(CW\*(C`ev_embed_sweep\*(C'\fR will be invoked automatically, otherwise it is the responsibility of the callback to invoke it (it will continue to be called until the sweep has been done, if you do not want that, you need to temporarily stop the embed watcher). .IP "ev_embed_sweep (loop, ev_embed *)" 4 .IX Item "ev_embed_sweep (loop, ev_embed *)" Make a single, non-blocking sweep over the embedded loop. This works similarly to \f(CW\*(C`ev_run (embedded_loop, EVRUN_NOWAIT)\*(C'\fR, but in the most appropriate way for embedded loops. .IP "struct ev_loop *other [read\-only]" 4 .IX Item "struct ev_loop *other [read-only]" The embedded event loop. .PP \fIExamples\fR .IX Subsection "Examples" .PP Example: Try to get an embeddable event loop and embed it into the default event loop. If that is not possible, use the default loop. The default loop is stored in \f(CW\*(C`loop_hi\*(C'\fR, while the embeddable loop is stored in \&\f(CW\*(C`loop_lo\*(C'\fR (which is \f(CW\*(C`loop_hi\*(C'\fR in the case no embeddable loop can be used). .PP .Vb 3 \& struct ev_loop *loop_hi = ev_default_init (0); \& struct ev_loop *loop_lo = 0; \& ev_embed embed; \& \& // see if there is a chance of getting one that works \& // (remember that a flags value of 0 means autodetection) \& loop_lo = ev_embeddable_backends () & ev_recommended_backends () \& ? ev_loop_new (ev_embeddable_backends () & ev_recommended_backends ()) \& : 0; \& \& // if we got one, then embed it, otherwise default to loop_hi \& if (loop_lo) \& { \& ev_embed_init (&embed, 0, loop_lo); \& ev_embed_start (loop_hi, &embed); \& } \& else \& loop_lo = loop_hi; .Ve .PP Example: Check if kqueue is available but not recommended and create a kqueue backend for use with sockets (which usually work with any kqueue implementation). Store the kqueue/socket\-only event loop in \&\f(CW\*(C`loop_socket\*(C'\fR. (One might optionally use \f(CW\*(C`EVFLAG_NOENV\*(C'\fR, too). .PP .Vb 3 \& struct ev_loop *loop = ev_default_init (0); \& struct ev_loop *loop_socket = 0; \& ev_embed embed; \& \& if (ev_supported_backends () & ~ev_recommended_backends () & EVBACKEND_KQUEUE) \& if ((loop_socket = ev_loop_new (EVBACKEND_KQUEUE)) \& { \& ev_embed_init (&embed, 0, loop_socket); \& ev_embed_start (loop, &embed); \& } \& \& if (!loop_socket) \& loop_socket = loop; \& \& // now use loop_socket for all sockets, and loop for everything else .Ve .ie n .SS """ev_fork"" \- the audacity to resume the event loop after a fork" .el .SS "\f(CWev_fork\fP \- the audacity to resume the event loop after a fork" .IX Subsection "ev_fork - the audacity to resume the event loop after a fork" Fork watchers are called when a \f(CW\*(C`fork ()\*(C'\fR was detected (usually because whoever is a good citizen cared to tell libev about it by calling \&\f(CW\*(C`ev_loop_fork\*(C'\fR). The invocation is done before the event loop blocks next and before \f(CW\*(C`ev_check\*(C'\fR watchers are being called, and only in the child after the fork. If whoever good citizen calling \f(CW\*(C`ev_default_fork\*(C'\fR cheats and calls it in the wrong process, the fork handlers will be invoked, too, of course. .PP \fIThe special problem of life after fork \- how is it possible?\fR .IX Subsection "The special problem of life after fork - how is it possible?" .PP Most uses of \f(CW\*(C`fork ()\*(C'\fR consist of forking, then some simple calls to set up/change the process environment, followed by a call to \f(CW\*(C`exec()\*(C'\fR. 
This sequence should be handled by libev without any problems. .PP This changes when the application actually wants to do event handling in the child, or both parent in child, in effect \*(L"continuing\*(R" after the fork. .PP The default mode of operation (for libev, with application help to detect forks) is to duplicate all the state in the child, as would be expected when \fIeither\fR the parent \fIor\fR the child process continues. .PP When both processes want to continue using libev, then this is usually the wrong result. In that case, usually one process (typically the parent) is supposed to continue with all watchers in place as before, while the other process typically wants to start fresh, i.e. without any active watchers. .PP The cleanest and most efficient way to achieve that with libev is to simply create a new event loop, which of course will be \*(L"empty\*(R", and use that for new watchers. This has the advantage of not touching more memory than necessary, and thus avoiding the copy-on-write, and the disadvantage of having to use multiple event loops (which do not support signal watchers). .PP When this is not possible, or you want to use the default loop for other reasons, then in the process that wants to start \*(L"fresh\*(R", call \&\f(CW\*(C`ev_loop_destroy (EV_DEFAULT)\*(C'\fR followed by \f(CW\*(C`ev_default_loop (...)\*(C'\fR. Destroying the default loop will \*(L"orphan\*(R" (not stop) all registered watchers, so you have to be careful not to execute code that modifies those watchers. Note also that in that case, you have to re-register any signal watchers. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_fork_init (ev_fork *, callback)" 4 .IX Item "ev_fork_init (ev_fork *, callback)" Initialises and configures the fork watcher \- it has no parameters of any kind. There is a \f(CW\*(C`ev_fork_set\*(C'\fR macro, but using it is utterly pointless, really. .ie n .SS """ev_cleanup"" \- even the best things end" .el .SS "\f(CWev_cleanup\fP \- even the best things end" .IX Subsection "ev_cleanup - even the best things end" Cleanup watchers are called just before the event loop is being destroyed by a call to \f(CW\*(C`ev_loop_destroy\*(C'\fR. .PP While there is no guarantee that the event loop gets destroyed, cleanup watchers provide a convenient method to install cleanup hooks for your program, worker threads and so on \- you just to make sure to destroy the loop when you want them to be invoked. .PP Cleanup watchers are invoked in the same way as any other watcher. Unlike all other watchers, they do not keep a reference to the event loop (which makes a lot of sense if you think about it). Like all other watchers, you can call libev functions in the callback, except \f(CW\*(C`ev_cleanup_start\*(C'\fR. .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_cleanup_init (ev_cleanup *, callback)" 4 .IX Item "ev_cleanup_init (ev_cleanup *, callback)" Initialises and configures the cleanup watcher \- it has no parameters of any kind. There is a \f(CW\*(C`ev_cleanup_set\*(C'\fR macro, but using it is utterly pointless, I assure you. .PP Example: Register an atexit handler to destroy the default loop, so any cleanup functions are called. .PP .Vb 5 \& static void \& program_exits (void) \& { \& ev_loop_destroy (EV_DEFAULT_UC); \& } \& \& ... 
\& atexit (program_exits); .Ve .ie n .SS """ev_async"" \- how to wake up an event loop" .el .SS "\f(CWev_async\fP \- how to wake up an event loop" .IX Subsection "ev_async - how to wake up an event loop" In general, you cannot use an \f(CW\*(C`ev_loop\*(C'\fR from multiple threads or other asynchronous sources such as signal handlers (as opposed to multiple event loops \- those are of course safe to use in different threads). .PP Sometimes, however, you need to wake up an event loop you do not control, for example because it belongs to another thread. This is what \f(CW\*(C`ev_async\*(C'\fR watchers do: as long as the \f(CW\*(C`ev_async\*(C'\fR watcher is active, you can signal it by calling \f(CW\*(C`ev_async_send\*(C'\fR, which is thread\- and signal safe. .PP This functionality is very similar to \f(CW\*(C`ev_signal\*(C'\fR watchers, as signals, too, are asynchronous in nature, and signals, too, will be compressed (i.e. the number of callback invocations may be less than the number of \&\f(CW\*(C`ev_async_send\*(C'\fR calls). In fact, you could use signal watchers as a kind of \*(L"global async watchers\*(R" by using a watcher on an otherwise unused signal, and \f(CW\*(C`ev_feed_signal\*(C'\fR to signal this watcher from another thread, even without knowing which loop owns the signal. .PP \fIQueueing\fR .IX Subsection "Queueing" .PP \&\f(CW\*(C`ev_async\*(C'\fR does not support queueing of data in any way. The reason is that the author does not know of a simple (or any) algorithm for a multiple-writer-single-reader queue that works in all cases and doesn't need elaborate support such as pthreads or unportable memory access semantics. .PP That means that if you want to queue data, you have to provide your own queue. But at least I can tell you how to implement locking around your queue: .IP "queueing from a signal handler context" 4 .IX Item "queueing from a signal handler context" To implement race-free queueing, you simply add to the queue in the signal handler but you block the signal handler in the watcher callback. Here is an example that does that for some fictitious \s-1SIGUSR1\s0 handler: .Sp .Vb 1 \& static ev_async mysig; \& \& static void \& sigusr1_handler (void) \& { \& sometype data; \& \& // no locking etc. \& queue_put (data); \& ev_async_send (EV_DEFAULT_ &mysig); \& } \& \& static void \& mysig_cb (EV_P_ ev_async *w, int revents) \& { \& sometype data; \& sigset_t block, prev; \& \& sigemptyset (&block); \& sigaddset (&block, SIGUSR1); \& sigprocmask (SIG_BLOCK, &block, &prev); \& \& while (queue_get (&data)) \& process (data); \& \& if (sigismember (&prev, SIGUSR1) \& sigprocmask (SIG_UNBLOCK, &block, 0); \& } .Ve .Sp (Note: pthreads in theory requires you to use \f(CW\*(C`pthread_setmask\*(C'\fR instead of \f(CW\*(C`sigprocmask\*(C'\fR when you use threads, but libev doesn't do it either...). 
.IP "queueing from a thread context" 4 .IX Item "queueing from a thread context" The strategy for threads is different, as you cannot (easily) block threads but you can easily preempt them, so to queue safely you need to employ a traditional mutex lock, such as in this pthread example: .Sp .Vb 2 \& static ev_async mysig; \& static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER; \& \& static void \& otherthread (void) \& { \& // only need to lock the actual queueing operation \& pthread_mutex_lock (&mymutex); \& queue_put (data); \& pthread_mutex_unlock (&mymutex); \& \& ev_async_send (EV_DEFAULT_ &mysig); \& } \& \& static void \& mysig_cb (EV_P_ ev_async *w, int revents) \& { \& pthread_mutex_lock (&mymutex); \& \& while (queue_get (&data)) \& process (data); \& \& pthread_mutex_unlock (&mymutex); \& } .Ve .PP \fIWatcher-Specific Functions and Data Members\fR .IX Subsection "Watcher-Specific Functions and Data Members" .IP "ev_async_init (ev_async *, callback)" 4 .IX Item "ev_async_init (ev_async *, callback)" Initialises and configures the async watcher \- it has no parameters of any kind. There is a \f(CW\*(C`ev_async_set\*(C'\fR macro, but using it is utterly pointless, trust me. .IP "ev_async_send (loop, ev_async *)" 4 .IX Item "ev_async_send (loop, ev_async *)" Sends/signals/activates the given \f(CW\*(C`ev_async\*(C'\fR watcher, that is, feeds an \f(CW\*(C`EV_ASYNC\*(C'\fR event on the watcher into the event loop, and instantly returns. .Sp Unlike \f(CW\*(C`ev_feed_event\*(C'\fR, this call is safe to do from other threads, signal or similar contexts (see the discussion of \f(CW\*(C`EV_ATOMIC_T\*(C'\fR in the embedding section below on what exactly this means). .Sp Note that, as with other watchers in libev, multiple events might get compressed into a single callback invocation (another way to look at this is that \f(CW\*(C`ev_async\*(C'\fR watchers are level-triggered: they are set on \&\f(CW\*(C`ev_async_send\*(C'\fR, reset when the event loop detects that). .Sp This call incurs the overhead of at most one extra system call per event loop iteration, if the event loop is blocked, and no syscall at all if the event loop (or your program) is processing events. That means that repeated calls are basically free (there is no need to avoid calls for performance reasons) and that the overhead becomes smaller (typically zero) under load. .IP "bool = ev_async_pending (ev_async *)" 4 .IX Item "bool = ev_async_pending (ev_async *)" Returns a non-zero value when \f(CW\*(C`ev_async_send\*(C'\fR has been called on the watcher but the event has not yet been processed (or even noted) by the event loop. .Sp \&\f(CW\*(C`ev_async_send\*(C'\fR sets a flag in the watcher and wakes up the loop. When the loop iterates next and checks for the watcher to have become active, it will reset the flag again. \f(CW\*(C`ev_async_pending\*(C'\fR can be used to very quickly check whether invoking the loop might be a good idea. .Sp Not that this does \fInot\fR check whether the watcher itself is pending, only whether it has been requested to make this watcher pending: there is a time window between the event loop checking and resetting the async notification, and the callback being invoked. .SH "OTHER FUNCTIONS" .IX Header "OTHER FUNCTIONS" There are some other functions of possible interest. Described. Here. Now. 
.IP "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)" 4 .IX Item "ev_once (loop, int fd, int events, ev_tstamp timeout, callback, arg)" This function combines a simple timer and an I/O watcher, calls your callback on whichever event happens first and automatically stops both watchers. This is useful if you want to wait for a single event on an fd or timeout without having to allocate/configure/start/stop/free one or more watchers yourself. .Sp If \f(CW\*(C`fd\*(C'\fR is less than 0, then no I/O watcher will be started and the \&\f(CW\*(C`events\*(C'\fR argument is being ignored. Otherwise, an \f(CW\*(C`ev_io\*(C'\fR watcher for the given \f(CW\*(C`fd\*(C'\fR and \f(CW\*(C`events\*(C'\fR set will be created and started. .Sp If \f(CW\*(C`timeout\*(C'\fR is less than 0, then no timeout watcher will be started. Otherwise an \f(CW\*(C`ev_timer\*(C'\fR watcher with after = \f(CW\*(C`timeout\*(C'\fR (and repeat = 0) will be started. \f(CW0\fR is a valid timeout. .Sp The callback has the type \f(CW\*(C`void (*cb)(int revents, void *arg)\*(C'\fR and is passed an \f(CW\*(C`revents\*(C'\fR set like normal event callbacks (a combination of \&\f(CW\*(C`EV_ERROR\*(C'\fR, \f(CW\*(C`EV_READ\*(C'\fR, \f(CW\*(C`EV_WRITE\*(C'\fR or \f(CW\*(C`EV_TIMER\*(C'\fR) and the \f(CW\*(C`arg\*(C'\fR value passed to \f(CW\*(C`ev_once\*(C'\fR. Note that it is possible to receive \fIboth\fR a timeout and an io event at the same time \- you probably should give io events precedence. .Sp Example: wait up to ten seconds for data to appear on \s-1STDIN_FILENO.\s0 .Sp .Vb 7 \& static void stdin_ready (int revents, void *arg) \& { \& if (revents & EV_READ) \& /* stdin might have data for us, joy! */; \& else if (revents & EV_TIMER) \& /* doh, nothing entered */; \& } \& \& ev_once (STDIN_FILENO, EV_READ, 10., stdin_ready, 0); .Ve .IP "ev_feed_fd_event (loop, int fd, int revents)" 4 .IX Item "ev_feed_fd_event (loop, int fd, int revents)" Feed an event on the given fd, as if a file descriptor backend detected the given events. .IP "ev_feed_signal_event (loop, int signum)" 4 .IX Item "ev_feed_signal_event (loop, int signum)" Feed an event as if the given signal occurred. See also \f(CW\*(C`ev_feed_signal\*(C'\fR, which is async-safe. .SH "COMMON OR USEFUL IDIOMS (OR BOTH)" .IX Header "COMMON OR USEFUL IDIOMS (OR BOTH)" This section explains some common idioms that are not immediately obvious. Note that examples are sprinkled over the whole manual, and this section only contains stuff that wouldn't fit anywhere else. .SS "\s-1ASSOCIATING CUSTOM DATA WITH A WATCHER\s0" .IX Subsection "ASSOCIATING CUSTOM DATA WITH A WATCHER" Each watcher has, by default, a \f(CW\*(C`void *data\*(C'\fR member that you can read or modify at any time: libev will completely ignore it. This can be used to associate arbitrary data with your watcher. If you need more data and don't want to allocate memory separately and store a pointer to it in that data member, you can also \*(L"subclass\*(R" the watcher type and provide your own data: .PP .Vb 7 \& struct my_io \& { \& ev_io io; \& int otherfd; \& void *somedata; \& struct whatever *mostinteresting; \& }; \& \& ... \& struct my_io w; \& ev_io_init (&w.io, my_cb, fd, EV_READ); .Ve .PP And since your callback will be called with a pointer to the watcher, you can cast it back to your own type: .PP .Vb 5 \& static void my_cb (struct ev_loop *loop, ev_io *w_, int revents) \& { \& struct my_io *w = (struct my_io *)w_; \& ... 
\& } .Ve .PP More interesting and less C\-conformant ways of casting your callback function type instead have been omitted. .SS "\s-1BUILDING YOUR OWN COMPOSITE WATCHERS\s0" .IX Subsection "BUILDING YOUR OWN COMPOSITE WATCHERS" Another common scenario is to use some data structure with multiple embedded watchers, in effect creating your own watcher that combines multiple libev event sources into one \*(L"super-watcher\*(R": .PP .Vb 6 \& struct my_biggy \& { \& int some_data; \& ev_timer t1; \& ev_timer t2; \& } .Ve .PP In this case getting the pointer to \f(CW\*(C`my_biggy\*(C'\fR is a bit more complicated: Either you store the address of your \f(CW\*(C`my_biggy\*(C'\fR struct in the \f(CW\*(C`data\*(C'\fR member of the watcher (for woozies or \*(C+ coders), or you need to use some pointer arithmetic using \f(CW\*(C`offsetof\*(C'\fR inside your watchers (for real programmers): .PP .Vb 1 \& #include \& \& static void \& t1_cb (EV_P_ ev_timer *w, int revents) \& { \& struct my_biggy big = (struct my_biggy *) \& (((char *)w) \- offsetof (struct my_biggy, t1)); \& } \& \& static void \& t2_cb (EV_P_ ev_timer *w, int revents) \& { \& struct my_biggy big = (struct my_biggy *) \& (((char *)w) \- offsetof (struct my_biggy, t2)); \& } .Ve .SS "\s-1AVOIDING FINISHING BEFORE RETURNING\s0" .IX Subsection "AVOIDING FINISHING BEFORE RETURNING" Often you have structures like this in event-based programs: .PP .Vb 4 \& callback () \& { \& free (request); \& } \& \& request = start_new_request (..., callback); .Ve .PP The intent is to start some \*(L"lengthy\*(R" operation. The \f(CW\*(C`request\*(C'\fR could be used to cancel the operation, or do other things with it. .PP It's not uncommon to have code paths in \f(CW\*(C`start_new_request\*(C'\fR that immediately invoke the callback, for example, to report errors. Or you add some caching layer that finds that it can skip the lengthy aspects of the operation and simply invoke the callback with the result. .PP The problem here is that this will happen \fIbefore\fR \f(CW\*(C`start_new_request\*(C'\fR has returned, so \f(CW\*(C`request\*(C'\fR is not set. .PP Even if you pass the request by some safer means to the callback, you might want to do something to the request after starting it, such as canceling it, which probably isn't working so well when the callback has already been invoked. .PP A common way around all these issues is to make sure that \&\f(CW\*(C`start_new_request\*(C'\fR \fIalways\fR returns before the callback is invoked. If \&\f(CW\*(C`start_new_request\*(C'\fR immediately knows the result, it can artificially delay invoking the callback by using a \f(CW\*(C`prepare\*(C'\fR or \f(CW\*(C`idle\*(C'\fR watcher for example, or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue: .PP .Vb 2 \& ev_set_cb (watcher, callback); \& ev_feed_event (EV_A_ watcher, 0); .Ve .PP This way, \f(CW\*(C`start_new_request\*(C'\fR can safely return before the callback is invoked, while not delaying callback invocation too much. .SS "\s-1MODEL/NESTED EVENT LOOP INVOCATIONS AND EXIT CONDITIONS\s0" .IX Subsection "MODEL/NESTED EVENT LOOP INVOCATIONS AND EXIT CONDITIONS" Often (especially in \s-1GUI\s0 toolkits) there are places where you have \&\fImodal\fR interaction, which is most easily implemented by recursively invoking \f(CW\*(C`ev_run\*(C'\fR. .PP This brings the problem of exiting \- a callback might want to finish the main \f(CW\*(C`ev_run\*(C'\fR call, but not the nested one (e.g. 
user clicked \*(L"Quit\*(R", but a modal \*(L"Are you sure?\*(R" dialog is still waiting), or just the nested one and not the main one (e.g. user clocked \*(L"Ok\*(R" in a modal dialog), or some other combination: In these cases, a simple \f(CW\*(C`ev_break\*(C'\fR will not work. .PP The solution is to maintain \*(L"break this loop\*(R" variable for each \f(CW\*(C`ev_run\*(C'\fR invocation, and use a loop around \f(CW\*(C`ev_run\*(C'\fR until the condition is triggered, using \f(CW\*(C`EVRUN_ONCE\*(C'\fR: .PP .Vb 2 \& // main loop \& int exit_main_loop = 0; \& \& while (!exit_main_loop) \& ev_run (EV_DEFAULT_ EVRUN_ONCE); \& \& // in a modal watcher \& int exit_nested_loop = 0; \& \& while (!exit_nested_loop) \& ev_run (EV_A_ EVRUN_ONCE); .Ve .PP To exit from any of these loops, just set the corresponding exit variable: .PP .Vb 2 \& // exit modal loop \& exit_nested_loop = 1; \& \& // exit main program, after modal loop is finished \& exit_main_loop = 1; \& \& // exit both \& exit_main_loop = exit_nested_loop = 1; .Ve .SS "\s-1THREAD LOCKING EXAMPLE\s0" .IX Subsection "THREAD LOCKING EXAMPLE" Here is a fictitious example of how to run an event loop in a different thread from where callbacks are being invoked and watchers are created/added/removed. .PP For a real-world example, see the \f(CW\*(C`EV::Loop::Async\*(C'\fR perl module, which uses exactly this technique (which is suited for many high-level languages). .PP The example uses a pthread mutex to protect the loop data, a condition variable to wait for callback invocations, an async watcher to notify the event loop thread and an unspecified mechanism to wake up the main thread. .PP First, you need to associate some data with the event loop: .PP .Vb 6 \& typedef struct { \& mutex_t lock; /* global loop lock */ \& ev_async async_w; \& thread_t tid; \& cond_t invoke_cv; \& } userdata; \& \& void prepare_loop (EV_P) \& { \& // for simplicity, we use a static userdata struct. \& static userdata u; \& \& ev_async_init (&u\->async_w, async_cb); \& ev_async_start (EV_A_ &u\->async_w); \& \& pthread_mutex_init (&u\->lock, 0); \& pthread_cond_init (&u\->invoke_cv, 0); \& \& // now associate this with the loop \& ev_set_userdata (EV_A_ u); \& ev_set_invoke_pending_cb (EV_A_ l_invoke); \& ev_set_loop_release_cb (EV_A_ l_release, l_acquire); \& \& // then create the thread running ev_run \& pthread_create (&u\->tid, 0, l_run, EV_A); \& } .Ve .PP The callback for the \f(CW\*(C`ev_async\*(C'\fR watcher does nothing: the watcher is used solely to wake up the event loop so it takes notice of any new watchers that might have been added: .PP .Vb 5 \& static void \& async_cb (EV_P_ ev_async *w, int revents) \& { \& // just used for the side effects \& } .Ve .PP The \f(CW\*(C`l_release\*(C'\fR and \f(CW\*(C`l_acquire\*(C'\fR callbacks simply unlock/lock the mutex protecting the loop data, respectively. 
.PP .Vb 6 \& static void \& l_release (EV_P) \& { \& userdata *u = ev_userdata (EV_A); \& pthread_mutex_unlock (&u\->lock); \& } \& \& static void \& l_acquire (EV_P) \& { \& userdata *u = ev_userdata (EV_A); \& pthread_mutex_lock (&u\->lock); \& } .Ve .PP The event loop thread first acquires the mutex, and then jumps straight into \f(CW\*(C`ev_run\*(C'\fR: .PP .Vb 4 \& void * \& l_run (void *thr_arg) \& { \& struct ev_loop *loop = (struct ev_loop *)thr_arg; \& \& l_acquire (EV_A); \& pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0); \& ev_run (EV_A_ 0); \& l_release (EV_A); \& \& return 0; \& } .Ve .PP Instead of invoking all pending watchers, the \f(CW\*(C`l_invoke\*(C'\fR callback will signal the main thread via some unspecified mechanism (signals? pipe writes? \f(CW\*(C`Async::Interrupt\*(C'\fR?) and then waits until all pending watchers have been called (in a while loop because a) spurious wakeups are possible and b) skipping inter-thread-communication when there are no pending watchers is very beneficial): .PP .Vb 4 \& static void \& l_invoke (EV_P) \& { \& userdata *u = ev_userdata (EV_A); \& \& while (ev_pending_count (EV_A)) \& { \& wake_up_other_thread_in_some_magic_or_not_so_magic_way (); \& pthread_cond_wait (&u\->invoke_cv, &u\->lock); \& } \& } .Ve .PP Now, whenever the main thread gets told to invoke pending watchers, it will grab the lock, call \f(CW\*(C`ev_invoke_pending\*(C'\fR and then signal the loop thread to continue: .PP .Vb 4 \& static void \& real_invoke_pending (EV_P) \& { \& userdata *u = ev_userdata (EV_A); \& \& pthread_mutex_lock (&u\->lock); \& ev_invoke_pending (EV_A); \& pthread_cond_signal (&u\->invoke_cv); \& pthread_mutex_unlock (&u\->lock); \& } .Ve .PP Whenever you want to start/stop a watcher or do other modifications to an event loop, you will now have to lock: .PP .Vb 2 \& ev_timer timeout_watcher; \& userdata *u = ev_userdata (EV_A); \& \& ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.); \& \& pthread_mutex_lock (&u\->lock); \& ev_timer_start (EV_A_ &timeout_watcher); \& ev_async_send (EV_A_ &u\->async_w); \& pthread_mutex_unlock (&u\->lock); .Ve .PP Note that sending the \f(CW\*(C`ev_async\*(C'\fR watcher is required because otherwise an event loop currently blocking in the kernel will have no knowledge about the newly added timer. By waking up the loop it will pick up any new watchers in the next event loop iteration. .SS "\s-1THREADS, COROUTINES, CONTINUATIONS, QUEUES... INSTEAD OF CALLBACKS\s0" .IX Subsection "THREADS, COROUTINES, CONTINUATIONS, QUEUES... INSTEAD OF CALLBACKS" While the overhead of a callback that e.g. schedules a thread is small, it is still an overhead. If you embed libev, and your main usage is with some kind of threads or coroutines, you might want to customise libev so that doesn't need callbacks anymore. .PP Imagine you have coroutines that you can switch to using a function \&\f(CW\*(C`switch_to (coro)\*(C'\fR, that libev runs in a coroutine called \f(CW\*(C`libev_coro\*(C'\fR and that due to some magic, the currently active coroutine is stored in a global called \f(CW\*(C`current_coro\*(C'\fR. 
Then you can build your own \*(L"wait for libev event\*(R" primitive by changing \f(CW\*(C`EV_CB_DECLARE\*(C'\fR and \f(CW\*(C`EV_CB_INVOKE\*(C'\fR (note the differing \f(CW\*(C`;\*(C'\fR conventions): .PP .Vb 2 \& #define EV_CB_DECLARE(type) struct my_coro *cb; \& #define EV_CB_INVOKE(watcher) switch_to ((watcher)\->cb) .Ve .PP That means instead of having a C callback function, you store the coroutine to switch to in each watcher, and instead of having libev call your callback, you instead have it switch to that coroutine. .PP A coroutine might now wait for an event with a function called \&\f(CW\*(C`wait_for_event\*(C'\fR (the watcher needs to be started, as always, but it doesn't matter when, or whether the watcher is active or not when this function is called): .PP .Vb 6 \& void \& wait_for_event (ev_watcher *w) \& { \& ev_set_cb (w, current_coro); \& switch_to (libev_coro); \& } .Ve .PP That basically suspends the coroutine inside \f(CW\*(C`wait_for_event\*(C'\fR and continues the libev coroutine, which, when appropriate, switches back to this or any other coroutine. .PP You can do similar tricks if you have, say, threads with an event queue \- instead of storing a coroutine, you store the queue object and instead of switching to a coroutine, you push the watcher onto the queue and notify any waiters. .PP To embed libev, see \*(L"\s-1EMBEDDING\*(R"\s0, but in short, it's easiest to create two files, \fImy_ev.h\fR and \fImy_ev.c\fR that include the respective libev files: .PP .Vb 4 \& // my_ev.h \& #define EV_CB_DECLARE(type) struct my_coro *cb; \& #define EV_CB_INVOKE(watcher) switch_to ((watcher)\->cb) \& #include "../libev/ev.h" \& \& // my_ev.c \& #define EV_H "my_ev.h" \& #include "../libev/ev.c" .Ve .PP And then use \fImy_ev.h\fR when you would normally use \fIev.h\fR, and compile \&\fImy_ev.c\fR into your project. When properly specifying include paths, you can even use \fIev.h\fR as header file name directly. .SH "LIBEVENT EMULATION" .IX Header "LIBEVENT EMULATION" Libev offers a compatibility emulation layer for libevent. It cannot emulate the internals of libevent, so here are some usage hints: .IP "\(bu" 4 Only the libevent\-1.4.1\-beta \s-1API\s0 is being emulated. .Sp This was the newest libevent version available when libev was implemented, and is still mostly unchanged in 2010. .IP "\(bu" 4 Use it by including \fIevent.h\fR, as usual. .IP "\(bu" 4 The following members are fully supported: ev_base, ev_callback, ev_arg, ev_fd, ev_res, ev_events. .IP "\(bu" 4 Avoid using ev_flags and the EVLIST_*\-macros: while ev_flags is maintained by libev, it does not work exactly the same way as in libevent (consider it a private \s-1API\s0). .IP "\(bu" 4 Priorities are not currently supported. Initialising priorities will fail and all watchers will have the same priority, even though there is an ev_pri field. .IP "\(bu" 4 In libevent, the last base created gets the signals, in libev, the base that registered the signal gets the signals. .IP "\(bu" 4 Other members are not supported. .IP "\(bu" 4 The libev emulation is \fInot\fR \s-1ABI\s0 compatible to libevent, you need to use the libev header file and library. .SH "\*(C+ SUPPORT" .IX Header " SUPPORT" .SS "C \s-1API\s0" .IX Subsection "C API" The normal C \s-1API\s0 should work fine when used from \*(C+: both ev.h and the libev sources can be compiled as \*(C+. Therefore, code that uses the C \s-1API\s0 will work fine.
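.PP For illustration only (this snippet is not part of the libev distribution, and the watcher and callback names are made up for the example), a minimal sketch of a \*(C+ translation unit that drives libev purely through the C \s-1API\s0 described in this document might look like this: .PP .Vb 4 \& // sketch: the plain C API used from a C++ file \& #include "ev.h" \& \& static void \& stdin_cb (struct ev_loop *loop, ev_io *w, int revents) \& { \& ev_io_stop (loop, w); \& ev_break (loop, EVBREAK_ALL); \& } \& \& int \& main (void) \& { \& struct ev_loop *loop = EV_DEFAULT; \& ev_io stdin_watcher; \& \& ev_io_init (&stdin_watcher, stdin_cb, 0 /* stdin */, EV_READ); \& ev_io_start (loop, &stdin_watcher); \& ev_run (loop, 0); \& } .Ve .PP This compiles the same way as C code would; the wrapper classes described below are purely optional.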
.PP Proper exception specifications might have to be added to callbacks passed to libev: exceptions may be thrown only from watcher callbacks, all other callbacks (allocator, syserr, loop acquire/release and periodic reschedule callbacks) must not throw exceptions, and might need a \f(CW\*(C`noexcept\*(C'\fR specification. If you have code that needs to be compiled as both C and \&\*(C+ you can use the \f(CW\*(C`EV_NOEXCEPT\*(C'\fR macro for this: .PP .Vb 6 \& static void \& fatal_error (const char *msg) EV_NOEXCEPT \& { \& perror (msg); \& abort (); \& } \& \& ... \& ev_set_syserr_cb (fatal_error); .Ve .PP The only \s-1API\s0 functions that can currently throw exceptions are \f(CW\*(C`ev_run\*(C'\fR, \&\f(CW\*(C`ev_invoke\*(C'\fR, \f(CW\*(C`ev_invoke_pending\*(C'\fR and \f(CW\*(C`ev_loop_destroy\*(C'\fR (the latter because it runs cleanup watchers). .PP Throwing exceptions in watcher callbacks is only supported if libev itself is compiled with a \*(C+ compiler or your C and \*(C+ environments allow throwing exceptions through C libraries (most do). .SS "\*(C+ \s-1API\s0" .IX Subsection " API" Libev comes with some simplistic wrapper classes for \*(C+ that mainly allow you to use some convenience methods to start/stop watchers and also change the callback model to a model using method callbacks on objects. .PP To use it, .PP .Vb 1 \& #include <ev++.h> .Ve .PP This automatically includes \fIev.h\fR and puts all of its definitions (many of them macros) into the global namespace. All \*(C+ specific things are put into the \f(CW\*(C`ev\*(C'\fR namespace. It should support all the same embedding options as \fIev.h\fR, most notably \f(CW\*(C`EV_MULTIPLICITY\*(C'\fR. .PP Care has been taken to keep the overhead low. The only data member the \*(C+ classes add (compared to plain C\-style watchers) is the event loop pointer that the watcher is associated with (or no additional members at all if you disable \f(CW\*(C`EV_MULTIPLICITY\*(C'\fR when embedding libev). .PP Currently, functions, static and non-static member functions and classes with \f(CW\*(C`operator ()\*(C'\fR can be used as callbacks. Other types should be easy to add as long as they only need one additional pointer for context. If you need support for other types of functors please contact the author (preferably after implementing it). .PP For all this to work, your \*(C+ compiler either has to use the same calling conventions as your C compiler (for static member functions), or you have to embed libev and compile libev itself as \*(C+. .PP Here is a list of things available in the \f(CW\*(C`ev\*(C'\fR namespace: .ie n .IP """ev::READ"", ""ev::WRITE"" etc." 4 .el .IP "\f(CWev::READ\fR, \f(CWev::WRITE\fR etc." 4 .IX Item "ev::READ, ev::WRITE etc." These are just enum values with the same values as the \f(CW\*(C`EV_READ\*(C'\fR etc. macros from \fIev.h\fR. .ie n .IP """ev::tstamp"", ""ev::now""" 4 .el .IP "\f(CWev::tstamp\fR, \f(CWev::now\fR" 4 .IX Item "ev::tstamp, ev::now" Aliases to the same types/functions as with the \f(CW\*(C`ev_\*(C'\fR prefix. .ie n .IP """ev::io"", ""ev::timer"", ""ev::periodic"", ""ev::idle"", ""ev::sig"" etc." 4 .el .IP "\f(CWev::io\fR, \f(CWev::timer\fR, \f(CWev::periodic\fR, \f(CWev::idle\fR, \f(CWev::sig\fR etc." 4 .IX Item "ev::io, ev::timer, ev::periodic, ev::idle, ev::sig etc."
For each \f(CW\*(C`ev_TYPE\*(C'\fR watcher in \fIev.h\fR there is a corresponding class of the same name in the \f(CW\*(C`ev\*(C'\fR namespace, with the exception of \f(CW\*(C`ev_signal\*(C'\fR which is called \f(CW\*(C`ev::sig\*(C'\fR to avoid clashes with the \f(CW\*(C`signal\*(C'\fR macro defined by many implementations. .Sp All of those classes have these methods: .RS 4 .IP "ev::TYPE::TYPE ()" 4 .IX Item "ev::TYPE::TYPE ()" .PD 0 .IP "ev::TYPE::TYPE (loop)" 4 .IX Item "ev::TYPE::TYPE (loop)" .IP "ev::TYPE::~TYPE" 4 .IX Item "ev::TYPE::~TYPE" .PD The constructor (optionally) takes an event loop to associate the watcher with. If it is omitted, it will use \f(CW\*(C`EV_DEFAULT\*(C'\fR. .Sp The constructor calls \f(CW\*(C`ev_init\*(C'\fR for you, which means you have to call the \&\f(CW\*(C`set\*(C'\fR method before starting it. .Sp It will not set a callback, however: You have to call the templated \f(CW\*(C`set\*(C'\fR method to set a callback before you can start the watcher. .Sp (The reason why you have to use a method is a limitation in \*(C+ which does not allow explicit template arguments for constructors). .Sp The destructor automatically stops the watcher if it is active. .IP "w\->set<class, &class::method> (object *)" 4 .IX Item "w->set<class, &class::method> (object *)" This method sets the callback method to call. The method has to have a signature of \f(CW\*(C`void (*)(ev_TYPE &, int)\*(C'\fR, it receives the watcher as first argument and the \f(CW\*(C`revents\*(C'\fR as second. The object must be given as parameter and is stored in the \f(CW\*(C`data\*(C'\fR member of the watcher. .Sp This method synthesizes efficient thunking code to call your method from the C callback that libev requires. If your compiler can inline your callback (i.e. it is visible to it at the place of the \f(CW\*(C`set\*(C'\fR call and your compiler is good :), then the method will be fully inlined into the thunking function, making it as fast as a direct C callback. .Sp Example: simple class declaration and watcher initialisation .Sp .Vb 4 \& struct myclass \& { \& void io_cb (ev::io &w, int revents) { } \& }; \& \& myclass obj; \& ev::io iow; \& iow.set <myclass, &myclass::io_cb> (&obj); .Ve .IP "w\->set<class> (object *)" 4 .IX Item "w->set<class> (object *)" This is a variation of a method callback \- leaving out the method to call will default the method to \f(CW\*(C`operator ()\*(C'\fR, which makes it possible to use functor objects without having to manually specify the \f(CW\*(C`operator ()\*(C'\fR all the time. Incidentally, you can then also leave out the template argument list. .Sp The \f(CW\*(C`operator ()\*(C'\fR method prototype must be \f(CW\*(C`void operator ()(watcher &w, int revents)\*(C'\fR. .Sp See the method\-\f(CW\*(C`set\*(C'\fR above for more details. .Sp Example: use a functor object as callback. .Sp .Vb 7 \& struct myfunctor \& { \& void operator() (ev::io &w, int revents) \& { \& ... \& } \& }; \& \& myfunctor f; \& \& ev::io w; \& w.set (&f); .Ve .IP "w\->set<function> (void *data = 0)" 4 .IX Item "w->set<function> (void *data = 0)" Also sets a callback, but uses a static method or plain function as callback. The optional \f(CW\*(C`data\*(C'\fR argument will be stored in the watcher's \&\f(CW\*(C`data\*(C'\fR member and is free for you to use. .Sp The prototype of the \f(CW\*(C`function\*(C'\fR must be \f(CW\*(C`void (*)(ev::TYPE &w, int)\*(C'\fR. .Sp See the method\-\f(CW\*(C`set\*(C'\fR above for more details. .Sp Example: Use a plain function as callback.
.Sp .Vb 2 \& static void io_cb (ev::io &w, int revents) { } \& iow.set <io_cb> (); .Ve .IP "w\->set (loop)" 4 .IX Item "w->set (loop)" Associates a different \f(CW\*(C`struct ev_loop\*(C'\fR with this watcher. You can only do this when the watcher is inactive (and not pending either). .IP "w\->set ([arguments])" 4 .IX Item "w->set ([arguments])" Basically the same as \f(CW\*(C`ev_TYPE_set\*(C'\fR (except for \f(CW\*(C`ev::embed\*(C'\fR watchers), with the same arguments. Either this method or a suitable start method must be called at least once. Unlike the C counterpart, an active watcher gets automatically stopped and restarted when reconfiguring it with this method. .Sp For \f(CW\*(C`ev::embed\*(C'\fR watchers this method is called \f(CW\*(C`set_embed\*(C'\fR, to avoid clashing with the \f(CW\*(C`set (loop)\*(C'\fR method. .Sp For \f(CW\*(C`ev::io\*(C'\fR watchers there is an additional \f(CW\*(C`set\*(C'\fR method that accepts a new event mask only, and internally calls \f(CW\*(C`ev_io_modify\*(C'\fR. .IP "w\->start ()" 4 .IX Item "w->start ()" Starts the watcher. Note that there is no \f(CW\*(C`loop\*(C'\fR argument, as the constructor already stores the event loop. .IP "w\->start ([arguments])" 4 .IX Item "w->start ([arguments])" Instead of calling \f(CW\*(C`set\*(C'\fR and \f(CW\*(C`start\*(C'\fR methods separately, it is often convenient to wrap them in one call. Uses the same type of arguments as the configure \f(CW\*(C`set\*(C'\fR method of the watcher. .IP "w\->stop ()" 4 .IX Item "w->stop ()" Stops the watcher if it is active. Again, no \f(CW\*(C`loop\*(C'\fR argument. .ie n .IP "w\->again () (""ev::timer"", ""ev::periodic"" only)" 4 .el .IP "w\->again () (\f(CWev::timer\fR, \f(CWev::periodic\fR only)" 4 .IX Item "w->again () (ev::timer, ev::periodic only)" For \f(CW\*(C`ev::timer\*(C'\fR and \f(CW\*(C`ev::periodic\*(C'\fR, this invokes the corresponding \&\f(CW\*(C`ev_TYPE_again\*(C'\fR function. .ie n .IP "w\->sweep () (""ev::embed"" only)" 4 .el .IP "w\->sweep () (\f(CWev::embed\fR only)" 4 .IX Item "w->sweep () (ev::embed only)" Invokes \f(CW\*(C`ev_embed_sweep\*(C'\fR. .ie n .IP "w\->update () (""ev::stat"" only)" 4 .el .IP "w\->update () (\f(CWev::stat\fR only)" 4 .IX Item "w->update () (ev::stat only)" Invokes \f(CW\*(C`ev_stat_stat\*(C'\fR. .RE .RS 4 .RE .PP Example: Define a class with two I/O and idle watchers, start the I/O watchers in the constructor. .PP .Vb 5 \& class myclass \& { \& ev::io io ; void io_cb (ev::io &w, int revents); \& ev::io io2 ; void io2_cb (ev::io &w, int revents); \& ev::idle idle; void idle_cb (ev::idle &w, int revents); \& \& myclass (int fd) \& { \& io .set <myclass, &myclass::io_cb> (this); \& io2 .set <myclass, &myclass::io2_cb> (this); \& idle.set <myclass, &myclass::idle_cb> (this); \& \& io.set (fd, ev::WRITE); // configure the watcher \& io.start (); // start it whenever convenient \& \& io2.start (fd, ev::READ); // set + start in one call \& } \& }; .Ve .SH "OTHER LANGUAGE BINDINGS" .IX Header "OTHER LANGUAGE BINDINGS" Libev does not offer other language bindings itself, but bindings for a number of languages exist in the form of third-party packages. If you know any interesting language binding in addition to the ones listed here, drop me a note. .IP "Perl" 4 .IX Item "Perl" The \s-1EV\s0 module implements the full libev \s-1API\s0 and is actually used to test libev. \s-1EV\s0 is developed together with libev.
Apart from the \s-1EV\s0 core module, there are additional modules that implement libev-compatible interfaces to \f(CW\*(C`libadns\*(C'\fR (\f(CW\*(C`EV::ADNS\*(C'\fR, but \f(CW\*(C`AnyEvent::DNS\*(C'\fR is preferred nowadays), \&\f(CW\*(C`Net::SNMP\*(C'\fR (\f(CW\*(C`Net::SNMP::EV\*(C'\fR) and the \f(CW\*(C`libglib\*(C'\fR event core (\f(CW\*(C`Glib::EV\*(C'\fR and \f(CW\*(C`EV::Glib\*(C'\fR). .Sp It can be found and installed via \s-1CPAN,\s0 its homepage is at . .IP "Python" 4 .IX Item "Python" Python bindings can be found at . It seems to be quite complete and well-documented. .IP "Ruby" 4 .IX Item "Ruby" Tony Arcieri has written a ruby extension that offers access to a subset of the libev \s-1API\s0 and adds file handle abstractions, asynchronous \s-1DNS\s0 and more on top of it. It can be found via gem servers. Its homepage is at . .Sp Roger Pack reports that using the link order \f(CW\*(C`\-lws2_32 \-lmsvcrt\-ruby\-190\*(C'\fR makes rev work even on mingw. .IP "Haskell" 4 .IX Item "Haskell" A haskell binding to libev is available at . .IP "D" 4 .IX Item "D" Leandro Lucarella has written a D language binding (\fIev.d\fR) for libev, to be found at . .IP "Ocaml" 4 .IX Item "Ocaml" Erkki Seppala has written Ocaml bindings for libev, to be found at . .IP "Lua" 4 .IX Item "Lua" Brian Maher has written a partial interface to libev for lua (at the time of this writing, only \f(CW\*(C`ev_io\*(C'\fR and \f(CW\*(C`ev_timer\*(C'\fR), to be found at . .IP "Javascript" 4 .IX Item "Javascript" Node.js () uses libev as the underlying event library. .IP "Others" 4 .IX Item "Others" There are others, and I stopped counting. .SH "MACRO MAGIC" .IX Header "MACRO MAGIC" Libev can be compiled with a variety of options, the most fundamental of which is \f(CW\*(C`EV_MULTIPLICITY\*(C'\fR. This option determines whether (most) functions and callbacks have an initial \f(CW\*(C`struct ev_loop *\*(C'\fR argument. .PP To make it easier to write programs that cope with either variant, the following macros are defined: .ie n .IP """EV_A"", ""EV_A_""" 4 .el .IP "\f(CWEV_A\fR, \f(CWEV_A_\fR" 4 .IX Item "EV_A, EV_A_" This provides the loop \fIargument\fR for functions, if one is required (\*(L"ev loop argument\*(R"). The \f(CW\*(C`EV_A\*(C'\fR form is used when this is the sole argument, \&\f(CW\*(C`EV_A_\*(C'\fR is used when other arguments are following. Example: .Sp .Vb 3 \& ev_unref (EV_A); \& ev_timer_add (EV_A_ watcher); \& ev_run (EV_A_ 0); .Ve .Sp It assumes the variable \f(CW\*(C`loop\*(C'\fR of type \f(CW\*(C`struct ev_loop *\*(C'\fR is in scope, which is often provided by the following macro. .ie n .IP """EV_P"", ""EV_P_""" 4 .el .IP "\f(CWEV_P\fR, \f(CWEV_P_\fR" 4 .IX Item "EV_P, EV_P_" This provides the loop \fIparameter\fR for functions, if one is required (\*(L"ev loop parameter\*(R"). The \f(CW\*(C`EV_P\*(C'\fR form is used when this is the sole parameter, \&\f(CW\*(C`EV_P_\*(C'\fR is used when other parameters are following. Example: .Sp .Vb 2 \& // this is how ev_unref is being declared \& static void ev_unref (EV_P); \& \& // this is how you can declare your typical callback \& static void cb (EV_P_ ev_timer *w, int revents) .Ve .Sp It declares a parameter \f(CW\*(C`loop\*(C'\fR of type \f(CW\*(C`struct ev_loop *\*(C'\fR, quite suitable for use with \f(CW\*(C`EV_A\*(C'\fR. 
.ie n .IP """EV_DEFAULT"", ""EV_DEFAULT_""" 4 .el .IP "\f(CWEV_DEFAULT\fR, \f(CWEV_DEFAULT_\fR" 4 .IX Item "EV_DEFAULT, EV_DEFAULT_" Similar to the other two macros, this gives you the value of the default loop, if multiple loops are supported (\*(L"ev loop default\*(R"). The default loop will be initialised if it isn't already initialised. .Sp For non-multiplicity builds, these macros do nothing, so you always have to initialise the loop somewhere. .ie n .IP """EV_DEFAULT_UC"", ""EV_DEFAULT_UC_""" 4 .el .IP "\f(CWEV_DEFAULT_UC\fR, \f(CWEV_DEFAULT_UC_\fR" 4 .IX Item "EV_DEFAULT_UC, EV_DEFAULT_UC_" Usage identical to \f(CW\*(C`EV_DEFAULT\*(C'\fR and \f(CW\*(C`EV_DEFAULT_\*(C'\fR, but requires that the default loop has been initialised (\f(CW\*(C`UC\*(C'\fR == unchecked). Their behaviour is undefined when the default loop has not been initialised by a previous execution of \f(CW\*(C`EV_DEFAULT\*(C'\fR, \f(CW\*(C`EV_DEFAULT_\*(C'\fR or \f(CW\*(C`ev_default_init (...)\*(C'\fR. .Sp It is often prudent to use \f(CW\*(C`EV_DEFAULT\*(C'\fR when initialising the first watcher in a function but use \f(CW\*(C`EV_DEFAULT_UC\*(C'\fR afterwards. .PP Example: Declare and initialise a check watcher, utilising the above macros so it will work regardless of whether multiple loops are supported or not. .PP .Vb 5 \& static void \& check_cb (EV_P_ ev_timer *w, int revents) \& { \& ev_check_stop (EV_A_ w); \& } \& \& ev_check check; \& ev_check_init (&check, check_cb); \& ev_check_start (EV_DEFAULT_ &check); \& ev_run (EV_DEFAULT_ 0); .Ve .SH "EMBEDDING" .IX Header "EMBEDDING" Libev can (and often is) directly embedded into host applications. Examples of applications that embed it include the Deliantra Game Server, the \s-1EV\s0 perl module, the \s-1GNU\s0 Virtual Private Ethernet (gvpe) and rxvt-unicode. .PP The goal is to enable you to just copy the necessary files into your source directory without having to change even a single line in them, so you can easily upgrade by simply copying (or having a checked-out copy of libev somewhere in your source tree). .SS "\s-1FILESETS\s0" .IX Subsection "FILESETS" Depending on what features you need you need to include one or more sets of files in your application. .PP \fI\s-1CORE EVENT LOOP\s0\fR .IX Subsection "CORE EVENT LOOP" .PP To include only the libev core (all the \f(CW\*(C`ev_*\*(C'\fR functions), with manual configuration (no autoconf): .PP .Vb 2 \& #define EV_STANDALONE 1 \& #include "ev.c" .Ve .PP This will automatically include \fIev.h\fR, too, and should be done in a single C source file only to provide the function implementations. To use it, do the same for \fIev.h\fR in all files wishing to use this \s-1API\s0 (best done by writing a wrapper around \fIev.h\fR that you can include instead and where you can put other configuration options): .PP .Vb 2 \& #define EV_STANDALONE 1 \& #include "ev.h" .Ve .PP Both header files and implementation files can be compiled with a \*(C+ compiler (at least, that's a stated goal, and breakage will be treated as a bug). .PP You need the following files in your source tree, or in a directory in your include path (e.g. 
in libev/ when using \-Ilibev): .PP .Vb 4 \& ev.h \& ev.c \& ev_vars.h \& ev_wrap.h \& \& ev_win32.c required on win32 platforms only \& \& ev_select.c only when select backend is enabled \& ev_poll.c only when poll backend is enabled \& ev_epoll.c only when the epoll backend is enabled \& ev_linuxaio.c only when the linux aio backend is enabled \& ev_iouring.c only when the linux io_uring backend is enabled \& ev_kqueue.c only when the kqueue backend is enabled \& ev_port.c only when the solaris port backend is enabled .Ve .PP \&\fIev.c\fR includes the backend files directly when enabled, so you only need to compile this single file. .PP \fI\s-1LIBEVENT COMPATIBILITY API\s0\fR .IX Subsection "LIBEVENT COMPATIBILITY API" .PP To include the libevent compatibility \s-1API,\s0 also include: .PP .Vb 1 \& #include "event.c" .Ve .PP in the file including \fIev.c\fR, and: .PP .Vb 1 \& #include "event.h" .Ve .PP in the files that want to use the libevent \s-1API.\s0 This also includes \fIev.h\fR. .PP You need the following additional files for this: .PP .Vb 2 \& event.h \& event.c .Ve .PP \fI\s-1AUTOCONF SUPPORT\s0\fR .IX Subsection "AUTOCONF SUPPORT" .PP Instead of using \f(CW\*(C`EV_STANDALONE=1\*(C'\fR and providing your configuration in whatever way you want, you can also \f(CW\*(C`m4_include([libev.m4])\*(C'\fR in your \&\fIconfigure.ac\fR and leave \f(CW\*(C`EV_STANDALONE\*(C'\fR undefined. \fIev.c\fR will then include \fIconfig.h\fR and configure itself accordingly. .PP For this of course you need the m4 file: .PP .Vb 1 \& libev.m4 .Ve .SS "\s-1PREPROCESSOR SYMBOLS/MACROS\s0" .IX Subsection "PREPROCESSOR SYMBOLS/MACROS" Libev can be configured via a variety of preprocessor symbols you have to define before including (or compiling) any of its files. The default in the absence of autoconf is documented for every option. .PP Symbols marked with \*(L"(h)\*(R" do not change the \s-1ABI,\s0 and can have different values when compiling libev vs. including \fIev.h\fR, so it is permissible to redefine them before including \fIev.h\fR without breaking compatibility to a compiled library. All other symbols change the \s-1ABI,\s0 which means all users of libev and the libev code itself must be compiled with compatible settings. .IP "\s-1EV_COMPAT3\s0 (h)" 4 .IX Item "EV_COMPAT3 (h)" Backwards compatibility is a major concern for libev. This is why this release of libev comes with wrappers for the functions and symbols that have been renamed between libev version 3 and 4. .Sp You can disable these wrappers (to test compatibility with future versions) by defining \f(CW\*(C`EV_COMPAT3\*(C'\fR to \f(CW0\fR when compiling your sources. This has the additional advantage that you can drop the \f(CW\*(C`struct\*(C'\fR from \f(CW\*(C`struct ev_loop\*(C'\fR declarations, as libev will provide an \f(CW\*(C`ev_loop\*(C'\fR typedef in that case. .Sp In some future version, the default for \f(CW\*(C`EV_COMPAT3\*(C'\fR will become \f(CW0\fR, and in some even more future version the compatibility code will be removed completely. .IP "\s-1EV_STANDALONE\s0 (h)" 4 .IX Item "EV_STANDALONE (h)" Must always be \f(CW1\fR if you do not use autoconf configuration, which keeps libev from including \fIconfig.h\fR, and it also defines dummy implementations for some libevent functions (such as logging, which is not supported). It will also not define any of the structs usually found in \&\fIevent.h\fR that are not directly supported by the libev core alone. 
.Sp In standalone mode, libev will still try to automatically deduce the configuration, but has to be more conservative. .IP "\s-1EV_USE_FLOOR\s0" 4 .IX Item "EV_USE_FLOOR" If defined to be \f(CW1\fR, libev will use the \f(CW\*(C`floor ()\*(C'\fR function for its periodic reschedule calculations, otherwise libev will fall back on a portable (slower) implementation. If you enable this, you usually have to link against libm or something equivalent. Enabling this when the \f(CW\*(C`floor\*(C'\fR function is not available will fail, so the safe default is to not enable this. .IP "\s-1EV_USE_MONOTONIC\s0" 4 .IX Item "EV_USE_MONOTONIC" If defined to be \f(CW1\fR, libev will try to detect the availability of the monotonic clock option at both compile time and runtime. Otherwise no use of the monotonic clock option will be attempted. If you enable this, you usually have to link against librt or something similar. Enabling it when the functionality isn't available is safe, though, although you have to make sure you link against any libraries where the \f(CW\*(C`clock_gettime\*(C'\fR function is hiding in (often \fI\-lrt\fR). See also \f(CW\*(C`EV_USE_CLOCK_SYSCALL\*(C'\fR. .IP "\s-1EV_USE_REALTIME\s0" 4 .IX Item "EV_USE_REALTIME" If defined to be \f(CW1\fR, libev will try to detect the availability of the real-time clock option at compile time (and assume its availability at runtime if successful). Otherwise no use of the real-time clock option will be attempted. This effectively replaces \f(CW\*(C`gettimeofday\*(C'\fR by \f(CW\*(C`clock_get (CLOCK_REALTIME, ...)\*(C'\fR and will not normally affect correctness. See the note about libraries in the description of \&\f(CW\*(C`EV_USE_MONOTONIC\*(C'\fR, though. Defaults to the opposite value of \&\f(CW\*(C`EV_USE_CLOCK_SYSCALL\*(C'\fR. .IP "\s-1EV_USE_CLOCK_SYSCALL\s0" 4 .IX Item "EV_USE_CLOCK_SYSCALL" If defined to be \f(CW1\fR, libev will try to use a direct syscall instead of calling the system-provided \f(CW\*(C`clock_gettime\*(C'\fR function. This option exists because on GNU/Linux, \f(CW\*(C`clock_gettime\*(C'\fR is in \f(CW\*(C`librt\*(C'\fR, but \f(CW\*(C`librt\*(C'\fR unconditionally pulls in \f(CW\*(C`libpthread\*(C'\fR, slowing down single-threaded programs needlessly. Using a direct syscall is slightly slower (in theory), because no optimised vdso implementation can be used, but avoids the pthread dependency. Defaults to \f(CW1\fR on GNU/Linux with glibc 2.x or higher, as it simplifies linking (no need for \f(CW\*(C`\-lrt\*(C'\fR). .IP "\s-1EV_USE_NANOSLEEP\s0" 4 .IX Item "EV_USE_NANOSLEEP" If defined to be \f(CW1\fR, libev will assume that \f(CW\*(C`nanosleep ()\*(C'\fR is available and will use it for delays. Otherwise it will use \f(CW\*(C`select ()\*(C'\fR. .IP "\s-1EV_USE_EVENTFD\s0" 4 .IX Item "EV_USE_EVENTFD" If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`eventfd ()\*(C'\fR is available and will probe for kernel support at runtime. This will improve \&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. .IP "\s-1EV_USE_SIGNALFD\s0" 4 .IX Item "EV_USE_SIGNALFD" If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`signalfd ()\*(C'\fR is available and will probe for kernel support at runtime. This enables the use of \s-1EVFLAG_SIGNALFD\s0 for faster and simpler signal handling. 
If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. .IP "\s-1EV_USE_TIMERFD\s0" 4 .IX Item "EV_USE_TIMERFD" If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`timerfd ()\*(C'\fR is available and will probe for kernel support at runtime. This allows libev to detect time jumps accurately. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.8 or newer and define \&\f(CW\*(C`TFD_TIMER_CANCEL_ON_SET\*(C'\fR, otherwise disabled. .IP "\s-1EV_USE_EVENTFD\s0" 4 .IX Item "EV_USE_EVENTFD" If defined to be \f(CW1\fR, then libev will assume that \f(CW\*(C`eventfd ()\*(C'\fR is available and will probe for kernel support at runtime. This will improve \&\f(CW\*(C`ev_signal\*(C'\fR and \f(CW\*(C`ev_async\*(C'\fR performance and reduce resource consumption. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. .IP "\s-1EV_USE_SELECT\s0" 4 .IX Item "EV_USE_SELECT" If undefined or defined to be \f(CW1\fR, libev will compile in support for the \&\f(CW\*(C`select\*(C'\fR(2) backend. No attempt at auto-detection will be done: if no other method takes over, select will be it. Otherwise the select backend will not be compiled in. .IP "\s-1EV_SELECT_USE_FD_SET\s0" 4 .IX Item "EV_SELECT_USE_FD_SET" If defined to \f(CW1\fR, then the select backend will use the system \f(CW\*(C`fd_set\*(C'\fR structure. This is useful if libev doesn't compile due to a missing \&\f(CW\*(C`NFDBITS\*(C'\fR or \f(CW\*(C`fd_mask\*(C'\fR definition or it mis-guesses the bitset layout on exotic systems. This usually limits the range of file descriptors to some low limit such as 1024 or might have other limitations (winsocket only allows 64 sockets). The \f(CW\*(C`FD_SETSIZE\*(C'\fR macro, set before compilation, configures the maximum size of the \f(CW\*(C`fd_set\*(C'\fR. .IP "\s-1EV_SELECT_IS_WINSOCKET\s0" 4 .IX Item "EV_SELECT_IS_WINSOCKET" When defined to \f(CW1\fR, the select backend will assume that select/socket/connect etc. don't understand file descriptors but wants osf handles on win32 (this is the case when the select to be used is the winsock select). This means that it will call \&\f(CW\*(C`_get_osfhandle\*(C'\fR on the fd to convert it to an \s-1OS\s0 handle. Otherwise, it is assumed that all these functions actually work on fds, even on win32. Should not be defined on non\-win32 platforms. .IP "\s-1EV_FD_TO_WIN32_HANDLE\s0(fd)" 4 .IX Item "EV_FD_TO_WIN32_HANDLE(fd)" If \f(CW\*(C`EV_SELECT_IS_WINSOCKET\*(C'\fR is enabled, then libev needs a way to map file descriptors to socket handles. When not defining this symbol (the default), then libev will call \f(CW\*(C`_get_osfhandle\*(C'\fR, which is usually correct. In some cases, programs use their own file descriptor management, in which case they can provide this function to map fds to socket handles. .IP "\s-1EV_WIN32_HANDLE_TO_FD\s0(handle)" 4 .IX Item "EV_WIN32_HANDLE_TO_FD(handle)" If \f(CW\*(C`EV_SELECT_IS_WINSOCKET\*(C'\fR then libev maps handles to file descriptors using the standard \f(CW\*(C`_open_osfhandle\*(C'\fR function. For programs implementing their own fd to handle mapping, overwriting this function makes it easier to do so. This can be done by defining this macro to an appropriate value. 
.IP "\s-1EV_WIN32_CLOSE_FD\s0(fd)" 4 .IX Item "EV_WIN32_CLOSE_FD(fd)" If programs implement their own fd to handle mapping on win32, then this macro can be used to override the \f(CW\*(C`close\*(C'\fR function, useful to unregister file descriptors again. Note that the replacement function has to close the underlying \s-1OS\s0 handle. .IP "\s-1EV_USE_WSASOCKET\s0" 4 .IX Item "EV_USE_WSASOCKET" If defined to be \f(CW1\fR, libev will use \f(CW\*(C`WSASocket\*(C'\fR to create its internal communication socket, which works better in some environments. Otherwise, the normal \f(CW\*(C`socket\*(C'\fR function will be used, which works better in other environments. .IP "\s-1EV_USE_POLL\s0" 4 .IX Item "EV_USE_POLL" If defined to be \f(CW1\fR, libev will compile in support for the \f(CW\*(C`poll\*(C'\fR(2) backend. Otherwise it will be enabled on non\-win32 platforms. It takes precedence over select. .IP "\s-1EV_USE_EPOLL\s0" 4 .IX Item "EV_USE_EPOLL" If defined to be \f(CW1\fR, libev will compile in support for the Linux \&\f(CW\*(C`epoll\*(C'\fR(7) backend. Its availability will be detected at runtime, otherwise another method will be used as fallback. This is the preferred backend for GNU/Linux systems. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled. .IP "\s-1EV_USE_LINUXAIO\s0" 4 .IX Item "EV_USE_LINUXAIO" If defined to be \f(CW1\fR, libev will compile in support for the Linux aio backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). If undefined, it will be enabled on linux, otherwise disabled. .IP "\s-1EV_USE_IOURING\s0" 4 .IX Item "EV_USE_IOURING" If defined to be \f(CW1\fR, libev will compile in support for the Linux io_uring backend (\f(CW\*(C`EV_USE_EPOLL\*(C'\fR must also be enabled). Due to it's current limitations it has to be requested explicitly. If undefined, it will be enabled on linux, otherwise disabled. .IP "\s-1EV_USE_KQUEUE\s0" 4 .IX Item "EV_USE_KQUEUE" If defined to be \f(CW1\fR, libev will compile in support for the \s-1BSD\s0 style \&\f(CW\*(C`kqueue\*(C'\fR(2) backend. Its actual availability will be detected at runtime, otherwise another method will be used as fallback. This is the preferred backend for \s-1BSD\s0 and BSD-like systems, although on most BSDs kqueue only supports some types of fds correctly (the only platform we found that supports ptys for example was NetBSD), so kqueue might be compiled in, but not be used unless explicitly requested. The best way to use it is to find out whether kqueue supports your type of fd properly and use an embedded kqueue loop. .IP "\s-1EV_USE_PORT\s0" 4 .IX Item "EV_USE_PORT" If defined to be \f(CW1\fR, libev will compile in support for the Solaris 10 port style backend. Its availability will be detected at runtime, otherwise another method will be used as fallback. This is the preferred backend for Solaris 10 systems. .IP "\s-1EV_USE_DEVPOLL\s0" 4 .IX Item "EV_USE_DEVPOLL" Reserved for future expansion, works like the \s-1USE\s0 symbols above. .IP "\s-1EV_USE_INOTIFY\s0" 4 .IX Item "EV_USE_INOTIFY" If defined to be \f(CW1\fR, libev will compile in support for the Linux inotify interface to speed up \f(CW\*(C`ev_stat\*(C'\fR watchers. Its actual availability will be detected at runtime. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled. 
.IP "\s-1EV_NO_SMP\s0" 4 .IX Item "EV_NO_SMP" If defined to be \f(CW1\fR, libev will assume that memory is always coherent between threads, that is, threads can be used, but threads never run on different cpus (or different cpu cores). This reduces dependencies and makes libev faster. .IP "\s-1EV_NO_THREADS\s0" 4 .IX Item "EV_NO_THREADS" If defined to be \f(CW1\fR, libev will assume that it will never be called from different threads (that includes signal handlers), which is a stronger assumption than \f(CW\*(C`EV_NO_SMP\*(C'\fR, above. This reduces dependencies and makes libev faster. .IP "\s-1EV_ATOMIC_T\s0" 4 .IX Item "EV_ATOMIC_T" Libev requires an integer type (suitable for storing \f(CW0\fR or \f(CW1\fR) whose access is atomic with respect to other threads or signal contexts. No such type is easily found in the C language, so you can provide your own type that you know is safe for your purposes. It is used both for signal handler \*(L"locking\*(R" as well as for signal and thread safety in \f(CW\*(C`ev_async\*(C'\fR watchers. .Sp In the absence of this define, libev will use \f(CW\*(C`sig_atomic_t volatile\*(C'\fR (from \fIsignal.h\fR), which is usually good enough on most platforms. .IP "\s-1EV_H\s0 (h)" 4 .IX Item "EV_H (h)" The name of the \fIev.h\fR header file used to include it. The default if undefined is \f(CW"ev.h"\fR in \fIevent.h\fR, \fIev.c\fR and \fIev++.h\fR. This can be used to virtually rename the \fIev.h\fR header file in case of conflicts. .IP "\s-1EV_CONFIG_H\s0 (h)" 4 .IX Item "EV_CONFIG_H (h)" If \f(CW\*(C`EV_STANDALONE\*(C'\fR isn't \f(CW1\fR, this variable can be used to override \&\fIev.c\fR's idea of where to find the \fIconfig.h\fR file, similarly to \&\f(CW\*(C`EV_H\*(C'\fR, above. .IP "\s-1EV_EVENT_H\s0 (h)" 4 .IX Item "EV_EVENT_H (h)" Similarly to \f(CW\*(C`EV_H\*(C'\fR, this macro can be used to override \fIevent.c\fR's idea of how the \fIevent.h\fR header can be found, the default is \f(CW"event.h"\fR. .IP "\s-1EV_PROTOTYPES\s0 (h)" 4 .IX Item "EV_PROTOTYPES (h)" If defined to be \f(CW0\fR, then \fIev.h\fR will not define any function prototypes, but still define all the structs and other symbols. This is occasionally useful if you want to provide your own wrapper functions around libev functions. .IP "\s-1EV_MULTIPLICITY\s0" 4 .IX Item "EV_MULTIPLICITY" If undefined or defined to \f(CW1\fR, then all event-loop-specific functions will have the \f(CW\*(C`struct ev_loop *\*(C'\fR as first argument, and you can create additional independent event loops. Otherwise there will be no support for multiple event loops and there is no first event loop pointer argument. Instead, all functions act on the single default loop. .Sp Note that \f(CW\*(C`EV_DEFAULT\*(C'\fR and \f(CW\*(C`EV_DEFAULT_\*(C'\fR will no longer provide a default loop when multiplicity is switched off \- you always have to initialise the loop manually in this case. .IP "\s-1EV_MINPRI\s0" 4 .IX Item "EV_MINPRI" .PD 0 .IP "\s-1EV_MAXPRI\s0" 4 .IX Item "EV_MAXPRI" .PD The range of allowed priorities. \f(CW\*(C`EV_MINPRI\*(C'\fR must be smaller or equal to \&\f(CW\*(C`EV_MAXPRI\*(C'\fR, but otherwise there are no non-obvious limitations. You can provide for more priorities by overriding those symbols (usually defined to be \f(CW\*(C`\-2\*(C'\fR and \f(CW2\fR, respectively). .Sp When doing priority-based operations, libev usually has to linearly search all the priorities, so having many of them (hundreds) uses a lot of space and time, so using the defaults of five priorities (\-2 .. 
+2) is usually fine. .Sp If your embedding application does not need any priorities, defining these both to \f(CW0\fR will save some memory and \s-1CPU.\s0 .IP "\s-1EV_PERIODIC_ENABLE, EV_IDLE_ENABLE, EV_EMBED_ENABLE, EV_STAT_ENABLE, EV_PREPARE_ENABLE, EV_CHECK_ENABLE, EV_FORK_ENABLE, EV_SIGNAL_ENABLE, EV_ASYNC_ENABLE, EV_CHILD_ENABLE.\s0" 4 .IX Item "EV_PERIODIC_ENABLE, EV_IDLE_ENABLE, EV_EMBED_ENABLE, EV_STAT_ENABLE, EV_PREPARE_ENABLE, EV_CHECK_ENABLE, EV_FORK_ENABLE, EV_SIGNAL_ENABLE, EV_ASYNC_ENABLE, EV_CHILD_ENABLE." If undefined or defined to be \f(CW1\fR (and the platform supports it), then the respective watcher type is supported. If defined to be \f(CW0\fR, then it is not. Disabling watcher types mainly saves code size. .IP "\s-1EV_FEATURES\s0" 4 .IX Item "EV_FEATURES" If you need to shave off some kilobytes of code at the expense of some speed (but with the full \s-1API\s0), you can define this symbol to request certain subsets of functionality. The default is to enable all features that can be enabled on the platform. .Sp A typical way to use this symbol is to define it to \f(CW0\fR (or to a bitset with some broad features you want) and then selectively re-enable additional parts you want, for example if you want everything minimal, but multiple event loop support, async and child watchers and the poll backend, use this: .Sp .Vb 5 \& #define EV_FEATURES 0 \& #define EV_MULTIPLICITY 1 \& #define EV_USE_POLL 1 \& #define EV_CHILD_ENABLE 1 \& #define EV_ASYNC_ENABLE 1 .Ve .Sp The actual value is a bitset, it can be a combination of the following values (by default, all of these are enabled): .RS 4 .ie n .IP "1 \- faster/larger code" 4 .el .IP "\f(CW1\fR \- faster/larger code" 4 .IX Item "1 - faster/larger code" Use larger code to speed up some operations. .Sp Currently this is used to override some inlining decisions (enlarging the code size by roughly 30% on amd64). .Sp When optimising for size, use of compiler flags such as \f(CW\*(C`\-Os\*(C'\fR with gcc is recommended, as well as \f(CW\*(C`\-DNDEBUG\*(C'\fR, as libev contains a number of assertions. .Sp The default is off when \f(CW\*(C`_\|_OPTIMIZE_SIZE_\|_\*(C'\fR is defined by your compiler (e.g. gcc with \f(CW\*(C`\-Os\*(C'\fR). .ie n .IP "2 \- faster/larger data structures" 4 .el .IP "\f(CW2\fR \- faster/larger data structures" 4 .IX Item "2 - faster/larger data structures" Replaces the small 2\-heap for timer management by a faster 4\-heap, larger hash table sizes and so on. This will usually further increase code size and can additionally have an effect on the size of data structures at runtime. .Sp The default is off when \f(CW\*(C`_\|_OPTIMIZE_SIZE_\|_\*(C'\fR is defined by your compiler (e.g. gcc with \f(CW\*(C`\-Os\*(C'\fR). .ie n .IP "4 \- full \s-1API\s0 configuration" 4 .el .IP "\f(CW4\fR \- full \s-1API\s0 configuration" 4 .IX Item "4 - full API configuration" This enables priorities (sets \f(CW\*(C`EV_MAXPRI\*(C'\fR=2 and \f(CW\*(C`EV_MINPRI\*(C'\fR=\-2), and enables multiplicity (\f(CW\*(C`EV_MULTIPLICITY\*(C'\fR=1). .ie n .IP "8 \- full \s-1API\s0" 4 .el .IP "\f(CW8\fR \- full \s-1API\s0" 4 .IX Item "8 - full API" This enables a lot of the \*(L"lesser used\*(R" \s-1API\s0 functions. See \f(CW\*(C`ev.h\*(C'\fR for details on which parts of the \s-1API\s0 are still available without this feature, and do not complain if this subset changes over time. 
.ie n .IP "16 \- enable all optional watcher types" 4 .el .IP "\f(CW16\fR \- enable all optional watcher types" 4 .IX Item "16 - enable all optional watcher types" Enables all optional watcher types. If you want to selectively enable only some watcher types other than I/O and timers (e.g. prepare, embed, async, child...) you can enable them manually by defining \&\f(CW\*(C`EV_watchertype_ENABLE\*(C'\fR to \f(CW1\fR instead. .ie n .IP "32 \- enable all backends" 4 .el .IP "\f(CW32\fR \- enable all backends" 4 .IX Item "32 - enable all backends" This enables all backends \- without this feature, you need to enable at least one backend manually (\f(CW\*(C`EV_USE_SELECT\*(C'\fR is a good choice). .ie n .IP "64 \- enable OS-specific ""helper"" APIs" 4 .el .IP "\f(CW64\fR \- enable OS-specific ``helper'' APIs" 4 .IX Item "64 - enable OS-specific helper APIs" Enable inotify, eventfd, signalfd and similar OS-specific helper APIs by default. .RE .RS 4 .Sp Compiling with \f(CW\*(C`gcc \-Os \-DEV_STANDALONE \-DEV_USE_EPOLL=1 \-DEV_FEATURES=0\*(C'\fR reduces the compiled size of libev from 24.7Kb code/2.8Kb data to 6.5Kb code/0.3Kb data on my GNU/Linux amd64 system, while still giving you I/O watchers, timers and monotonic clock support. .Sp With an intelligent-enough linker (gcc+binutils are intelligent enough when you use \f(CW\*(C`\-Wl,\-\-gc\-sections \-ffunction\-sections\*(C'\fR) functions unused by your program might be left out as well \- a binary starting a timer and an I/O watcher then might come out at only 5Kb. .RE .IP "\s-1EV_API_STATIC\s0" 4 .IX Item "EV_API_STATIC" If this symbol is defined (by default it is not), then all identifiers will have static linkage. This means that libev will not export any identifiers, and you cannot link against libev anymore. This can be useful when you embed libev, only want to use libev functions in a single file, and do not want its identifiers to be visible. .Sp To use this, define \f(CW\*(C`EV_API_STATIC\*(C'\fR and include \fIev.c\fR in the file that wants to use libev. .Sp This option only works when libev is compiled with a C compiler, as \*(C+ doesn't support the required declaration syntax. .IP "\s-1EV_AVOID_STDIO\s0" 4 .IX Item "EV_AVOID_STDIO" If this is set to \f(CW1\fR at compiletime, then libev will avoid using stdio functions (printf, scanf, perror etc.). This will increase the code size somewhat, but if your program doesn't otherwise depend on stdio and your libc allows it, this avoids linking in the stdio library which is quite big. .Sp Note that error messages might become less precise when this option is enabled. .IP "\s-1EV_NSIG\s0" 4 .IX Item "EV_NSIG" The highest supported signal number, +1 (or, the number of signals): Normally, libev tries to deduce the maximum number of signals automatically, but sometimes this fails, in which case it can be specified. Also, using a lower number than detected (\f(CW32\fR should be good for about any system in existence) can save some memory, as libev statically allocates some 12\-24 bytes per signal number. .IP "\s-1EV_PID_HASHSIZE\s0" 4 .IX Item "EV_PID_HASHSIZE" \&\f(CW\*(C`ev_child\*(C'\fR watchers use a small hash table to distribute workload by pid. The default size is \f(CW16\fR (or \f(CW1\fR with \f(CW\*(C`EV_FEATURES\*(C'\fR disabled), usually more than enough. If you need to manage thousands of children you might want to increase this value (\fImust\fR be a power of two). 
.IP "\s-1EV_INOTIFY_HASHSIZE\s0" 4 .IX Item "EV_INOTIFY_HASHSIZE" \&\f(CW\*(C`ev_stat\*(C'\fR watchers use a small hash table to distribute workload by inotify watch id. The default size is \f(CW16\fR (or \f(CW1\fR with \f(CW\*(C`EV_FEATURES\*(C'\fR disabled), usually more than enough. If you need to manage thousands of \&\f(CW\*(C`ev_stat\*(C'\fR watchers you might want to increase this value (\fImust\fR be a power of two). .IP "\s-1EV_USE_4HEAP\s0" 4 .IX Item "EV_USE_4HEAP" Heaps are not very cache-efficient. To improve the cache-efficiency of the timer and periodics heaps, libev uses a 4\-heap when this symbol is defined to \f(CW1\fR. The 4\-heap uses more complicated (longer) code but has noticeably faster performance with many (thousands) of watchers. .Sp The default is \f(CW1\fR, unless \f(CW\*(C`EV_FEATURES\*(C'\fR overrides it, in which case it will be \f(CW0\fR. .IP "\s-1EV_HEAP_CACHE_AT\s0" 4 .IX Item "EV_HEAP_CACHE_AT" Heaps are not very cache-efficient. To improve the cache-efficiency of the timer and periodics heaps, libev can cache the timestamp (\fIat\fR) within the heap structure (selected by defining \f(CW\*(C`EV_HEAP_CACHE_AT\*(C'\fR to \f(CW1\fR), which uses 8\-12 bytes more per watcher and a few hundred bytes more code, but avoids random read accesses on heap changes. This improves performance noticeably with many (hundreds) of watchers. .Sp The default is \f(CW1\fR, unless \f(CW\*(C`EV_FEATURES\*(C'\fR overrides it, in which case it will be \f(CW0\fR. .IP "\s-1EV_VERIFY\s0" 4 .IX Item "EV_VERIFY" Controls how much internal verification (see \f(CW\*(C`ev_verify ()\*(C'\fR) will be done: If set to \f(CW0\fR, no internal verification code will be compiled in. If set to \f(CW1\fR, then verification code will be compiled in, but not called. If set to \f(CW2\fR, then the internal verification code will be called once per loop, which can slow down libev. If set to \f(CW3\fR, then the verification code will be called very frequently, which will slow down libev considerably. .Sp Verification errors are reported via C's \f(CW\*(C`assert\*(C'\fR mechanism, so if you disable that (e.g. by defining \f(CW\*(C`NDEBUG\*(C'\fR) then no errors will be reported. .Sp The default is \f(CW1\fR, unless \f(CW\*(C`EV_FEATURES\*(C'\fR overrides it, in which case it will be \f(CW0\fR. .IP "\s-1EV_COMMON\s0" 4 .IX Item "EV_COMMON" By default, all watchers have a \f(CW\*(C`void *data\*(C'\fR member. By redefining this macro to something else you can include more and other types of members. You have to define it each time you include one of the files, though, and it must be identical each time. .Sp For example, the perl \s-1EV\s0 module uses something like this: .Sp .Vb 3 \& #define EV_COMMON \e \& SV *self; /* contains this struct */ \e \& SV *cb_sv, *fh /* note no trailing ";" */ .Ve .IP "\s-1EV_CB_DECLARE\s0 (type)" 4 .IX Item "EV_CB_DECLARE (type)" .PD 0 .IP "\s-1EV_CB_INVOKE\s0 (watcher, revents)" 4 .IX Item "EV_CB_INVOKE (watcher, revents)" .IP "ev_set_cb (ev, cb)" 4 .IX Item "ev_set_cb (ev, cb)" .PD Can be used to change the callback member declaration in each watcher, and the way callbacks are invoked and set. Must expand to a struct member definition and a statement, respectively. See the \fIev.h\fR header file for their default definitions. One possible use for overriding these is to avoid the \f(CW\*(C`struct ev_loop *\*(C'\fR as first argument in all cases, or to use method calls instead of plain function calls in \*(C+. 
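.PP As an illustration of how these symbols combine in practice (this example is not from the libev distribution, and the file name is made up), a small wrapper header for an embedded build might collect a handful of the options described above and then include \fIev.h\fR: .PP .Vb 7 \& /* myapp_ev.h \- hypothetical configuration wrapper */ \& #define EV_STANDALONE 1 /* no config.h */ \& #define EV_FEATURES 0 /* start minimal */ \& #define EV_MULTIPLICITY 1 /* but keep multiple loops */ \& #define EV_USE_POLL 1 /* and one backend */ \& #define EV_ASYNC_ENABLE 1 /* plus ev_async watchers */ \& \& #include "ev.h" .Ve .PP Every file that uses libev then includes this wrapper instead of \fIev.h\fR, so all translation units see identical settings; remember that most of these symbols change the \s-1ABI\s0 and therefore must be consistent everywhere.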
.SS "\s-1EXPORTED API SYMBOLS\s0" .IX Subsection "EXPORTED API SYMBOLS" If you need to re-export the \s-1API\s0 (e.g. via a \s-1DLL\s0) and you need a list of exported symbols, you can use the provided \fISymbol.*\fR files which list all public symbols, one per line: .PP .Vb 2 \& Symbols.ev for libev proper \& Symbols.event for the libevent emulation .Ve .PP This can also be used to rename all public symbols to avoid clashes with multiple versions of libev linked together (which is obviously bad in itself, but sometimes it is inconvenient to avoid this). .PP A sed command like this will create wrapper \f(CW\*(C`#define\*(C'\fR's that you need to include before including \fIev.h\fR: .PP .Vb 1 \& wrap.h .Ve .PP This would create a file \fIwrap.h\fR which essentially looks like this: .PP .Vb 4 \& #define ev_backend myprefix_ev_backend \& #define ev_check_start myprefix_ev_check_start \& #define ev_check_stop myprefix_ev_check_stop \& ... .Ve .SS "\s-1EXAMPLES\s0" .IX Subsection "EXAMPLES" For a real-world example of a program the includes libev verbatim, you can have a look at the \s-1EV\s0 perl module (). It has the libev files in the \fIlibev/\fR subdirectory and includes them in the \fI\s-1EV/EVAPI\s0.h\fR (public interface) and \fI\s-1EV\s0.xs\fR (implementation) files. Only the \fI\s-1EV\s0.xs\fR file will be compiled. It is pretty complex because it provides its own header file. .PP The usage in rxvt-unicode is simpler. It has a \fIev_cpp.h\fR header file that everybody includes and which overrides some configure choices: .PP .Vb 8 \& #define EV_FEATURES 8 \& #define EV_USE_SELECT 1 \& #define EV_PREPARE_ENABLE 1 \& #define EV_IDLE_ENABLE 1 \& #define EV_SIGNAL_ENABLE 1 \& #define EV_CHILD_ENABLE 1 \& #define EV_USE_STDEXCEPT 0 \& #define EV_CONFIG_H \& \& #include "ev++.h" .Ve .PP And a \fIev_cpp.C\fR implementation file that contains libev proper and is compiled: .PP .Vb 2 \& #include "ev_cpp.h" \& #include "ev.c" .Ve .SH "INTERACTION WITH OTHER PROGRAMS, LIBRARIES OR THE ENVIRONMENT" .IX Header "INTERACTION WITH OTHER PROGRAMS, LIBRARIES OR THE ENVIRONMENT" .SS "\s-1THREADS AND COROUTINES\s0" .IX Subsection "THREADS AND COROUTINES" \fI\s-1THREADS\s0\fR .IX Subsection "THREADS" .PP All libev functions are reentrant and thread-safe unless explicitly documented otherwise, but libev implements no locking itself. This means that you can use as many loops as you want in parallel, as long as there are no concurrent calls into any libev function with the same loop parameter (\f(CW\*(C`ev_default_*\*(C'\fR calls have an implicit default loop parameter, of course): libev guarantees that different event loops share no data structures that need any locking. .PP Or to put it differently: calls with different loop parameters can be done concurrently from multiple threads, calls with the same loop parameter must be done serially (but can be done from different threads, as long as only one thread ever is inside a call at any point in time, e.g. by using a mutex per loop). .PP Specifically to support threads (and signal handlers), libev implements so-called \f(CW\*(C`ev_async\*(C'\fR watchers, which allow some limited form of concurrency on the same event loop, namely waking it up \*(L"from the outside\*(R". 
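.PP For reference, the basic pattern looks like this. This is a sketch only: the watcher and callback names are invented for the example, and \*(L"\s-1THREAD LOCKING EXAMPLE\*(R"\s0 above shows a complete setup including the required locking: .PP .Vb 6 \& static ev_async wakeup_w; \& \& static void \& wakeup_cb (EV_P_ ev_async *w, int revents) \& { \& /* runs in the loop thread: drain a queue, check a flag, ... */ \& } \& \& /* in the loop thread, before ev_run */ \& ev_async_init (&wakeup_w, wakeup_cb); \& ev_async_start (EV_DEFAULT_ &wakeup_w); \& \& /* in any other thread (or a signal handler) */ \& ev_async_send (EV_DEFAULT_ &wakeup_w); .Ve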
.PP If you want to know which design (one loop, locking, or multiple loops without or something else still) is best for your problem, then I cannot help you, but here is some generic advice: .IP "\(bu" 4 most applications have a main thread: use the default libev loop in that thread, or create a separate thread running only the default loop. .Sp This helps integrating other libraries or software modules that use libev themselves and don't care/know about threading. .IP "\(bu" 4 one loop per thread is usually a good model. .Sp Doing this is almost never wrong, sometimes a better-performance model exists, but it is always a good start. .IP "\(bu" 4 other models exist, such as the leader/follower pattern, where one loop is handed through multiple threads in a kind of round-robin fashion. .Sp Choosing a model is hard \- look around, learn, know that usually you can do better than you currently do :\-) .IP "\(bu" 4 often you need to talk to some other thread which blocks in the event loop. .Sp \&\f(CW\*(C`ev_async\*(C'\fR watchers can be used to wake them up from other threads safely (or from signal contexts...). .Sp An example use would be to communicate signals or other events that only work in the default loop by registering the signal watcher with the default loop and triggering an \f(CW\*(C`ev_async\*(C'\fR watcher from the default loop watcher callback into the event loop interested in the signal. .PP See also \*(L"\s-1THREAD LOCKING EXAMPLE\*(R"\s0. .PP \fI\s-1COROUTINES\s0\fR .IX Subsection "COROUTINES" .PP Libev is very accommodating to coroutines (\*(L"cooperative threads\*(R"): libev fully supports nesting calls to its functions from different coroutines (e.g. you can call \f(CW\*(C`ev_run\*(C'\fR on the same loop from two different coroutines, and switch freely between both coroutines running the loop, as long as you don't confuse yourself). The only exception is that you must not do this from \f(CW\*(C`ev_periodic\*(C'\fR reschedule callbacks. .PP Care has been taken to ensure that libev does not keep local state inside \&\f(CW\*(C`ev_run\*(C'\fR, and other calls do not usually allow for coroutine switches as they do not call any callbacks. .SS "\s-1COMPILER WARNINGS\s0" .IX Subsection "COMPILER WARNINGS" Depending on your compiler and compiler settings, you might get no or a lot of warnings when compiling libev code. Some people are apparently scared by this. .PP However, these are unavoidable for many reasons. For one, each compiler has different warnings, and each user has different tastes regarding warning options. \*(L"Warn-free\*(R" code therefore cannot be a goal except when targeting a specific compiler and compiler-version. .PP Another reason is that some compiler warnings require elaborate workarounds, or other changes to the code that make it less clear and less maintainable. .PP And of course, some compiler warnings are just plain stupid, or simply wrong (because they don't actually warn about the condition their message seems to warn about). For example, certain older gcc versions had some warnings that resulted in an extreme number of false positives. These have been fixed, but some people still insist on making code warn-free with such buggy versions. .PP While libev is written to generate as few warnings as possible, \&\*(L"warn-free\*(R" code is not a goal, and it is recommended not to build libev with any compiler warnings enabled unless you are prepared to cope with them (e.g. by ignoring them). 
Remember that warnings are just that: warnings, not errors, or proof of bugs. .SS "\s-1VALGRIND\s0" .IX Subsection "VALGRIND" Valgrind has a special section here because it is a popular tool that is highly useful. Unfortunately, valgrind reports are very hard to interpret. .PP If you think you found a bug (memory leak, uninitialised data access etc.) in libev, then check twice: If valgrind reports something like: .PP .Vb 3 \& ==2274== definitely lost: 0 bytes in 0 blocks. \& ==2274== possibly lost: 0 bytes in 0 blocks. \& ==2274== still reachable: 256 bytes in 1 blocks. .Ve .PP Then there is no memory leak, just as memory accounted to global variables is not a memleak \- the memory is still being referenced, and didn't leak. .PP Similarly, under some circumstances, valgrind might report kernel bugs as if it were a bug in libev (e.g. in realloc or in the poll backend, although an acceptable workaround has been found here), or it might be confused. .PP Keep in mind that valgrind is a very good tool, but only a tool. Don't make it into some kind of religion. .PP If you are unsure about something, feel free to contact the mailing list with the full valgrind report and an explanation on why you think this is a bug in libev (best check the archives, too :). However, don't be annoyed when you get a brisk \*(L"this is no bug\*(R" answer and take the chance of learning how to interpret valgrind properly. .PP If you need, for some reason, empty reports from valgrind for your project I suggest using suppression lists. .SH "PORTABILITY NOTES" .IX Header "PORTABILITY NOTES" .SS "\s-1GNU/LINUX 32 BIT LIMITATIONS\s0" .IX Subsection "GNU/LINUX 32 BIT LIMITATIONS" GNU/Linux is the only common platform that supports 64 bit file/large file interfaces but \fIdisables\fR them by default. .PP That means that libev compiled in the default environment doesn't support files larger than 2GiB or so, which mainly affects \f(CW\*(C`ev_stat\*(C'\fR watchers. .PP Unfortunately, many programs try to work around this GNU/Linux issue by enabling the large file \s-1API,\s0 which makes them incompatible with the standard libev compiled for their system. .PP Likewise, libev cannot enable the large file \s-1API\s0 itself as this would suddenly make it incompatible to the default compile time environment, i.e. all programs not using special compile switches. .SS "\s-1OS/X AND DARWIN BUGS\s0" .IX Subsection "OS/X AND DARWIN BUGS" The whole thing is a bug if you ask me \- basically any system interface you touch is broken, whether it is locales, poll, kqueue or even the OpenGL drivers. .PP \fI\f(CI\*(C`kqueue\*(C'\fI is buggy\fR .IX Subsection "kqueue is buggy" .PP The kqueue syscall is broken in all known versions \- most versions support only sockets, many support pipes. .PP Libev tries to work around this by not using \f(CW\*(C`kqueue\*(C'\fR by default on this rotten platform, but of course you can still ask for it when creating a loop \- embedding a socket-only kqueue loop into a select-based one is probably going to work well. .PP \fI\f(CI\*(C`poll\*(C'\fI is buggy\fR .IX Subsection "poll is buggy" .PP Instead of fixing \f(CW\*(C`kqueue\*(C'\fR, Apple replaced their (working) \f(CW\*(C`poll\*(C'\fR implementation by something calling \f(CW\*(C`kqueue\*(C'\fR internally around the 10.5.6 release, so now \f(CW\*(C`kqueue\*(C'\fR \fIand\fR \f(CW\*(C`poll\*(C'\fR are broken. 
.PP Libev tries to work around this by not using \f(CW\*(C`poll\*(C'\fR by default on this rotten platform, but of course you can still ask for it when creating a loop. .PP \fI\f(CI\*(C`select\*(C'\fI is buggy\fR .IX Subsection "select is buggy" .PP All that's left is \f(CW\*(C`select\*(C'\fR, and of course Apple found a way to fuck this one up as well: On \s-1OS/X,\s0 \f(CW\*(C`select\*(C'\fR actively limits the number of file descriptors you can pass in to 1024 \- your program suddenly crashes when you use more. .PP There is an undocumented \*(L"workaround\*(R" for this \- defining \&\f(CW\*(C`_DARWIN_UNLIMITED_SELECT\*(C'\fR, which libev tries to use, so select \fIshould\fR work on \s-1OS/X.\s0 .SS "\s-1SOLARIS PROBLEMS AND WORKAROUNDS\s0" .IX Subsection "SOLARIS PROBLEMS AND WORKAROUNDS" \fI\f(CI\*(C`errno\*(C'\fI reentrancy\fR .IX Subsection "errno reentrancy" .PP The default compile environment on Solaris is unfortunately so thread-unsafe that you can't even use components/libraries compiled without \f(CW\*(C`\-D_REENTRANT\*(C'\fR in a threaded program, which, of course, isn't defined by default. A valid, if stupid, implementation choice. .PP If you want to use libev in threaded environments you have to make sure it's compiled with \f(CW\*(C`_REENTRANT\*(C'\fR defined. .PP \fIEvent port backend\fR .IX Subsection "Event port backend" .PP The scalable event interface for Solaris is called \*(L"event ports\*(R". Unfortunately, this mechanism is very buggy in all major releases. If you run into high \s-1CPU\s0 usage, your program freezes or you get a large number of spurious wakeups, make sure you have all the relevant and latest kernel patches applied. No, I don't know which ones, but there are multiple ones to apply, and afterwards, event ports actually work great. .PP If you can't get it to work, you can try running the program by setting the environment variable \f(CW\*(C`LIBEV_FLAGS=3\*(C'\fR to only allow \f(CW\*(C`poll\*(C'\fR and \&\f(CW\*(C`select\*(C'\fR backends. .SS "\s-1AIX POLL BUG\s0" .IX Subsection "AIX POLL BUG" \&\s-1AIX\s0 unfortunately has a broken \f(CW\*(C`poll.h\*(C'\fR header. Libev works around this by trying to avoid the poll backend altogether (i.e. it's not even compiled in), which normally isn't a big problem as \f(CW\*(C`select\*(C'\fR works fine with large bitsets on \s-1AIX,\s0 and \s-1AIX\s0 is dead anyway. .SS "\s-1WIN32 PLATFORM LIMITATIONS AND WORKAROUNDS\s0" .IX Subsection "WIN32 PLATFORM LIMITATIONS AND WORKAROUNDS" \fIGeneral issues\fR .IX Subsection "General issues" .PP Win32 doesn't support any of the standards (e.g. \s-1POSIX\s0) that libev requires, and its I/O model is fundamentally incompatible with the \s-1POSIX\s0 model. Libev still offers limited functionality on this platform in the form of the \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR backend, and only supports socket descriptors. This only applies when using Win32 natively, not when using e.g. cygwin. Actually, it only applies to the microsofts own compilers, as every compiler comes with a slightly differently broken/incompatible environment. .PP Lifting these limitations would basically require the full re-implementation of the I/O system. If you are into this kind of thing, then note that glib does exactly that for you in a very portable way (note also that glib is the slowest event library known to man). .PP There is no supported compilation method available on windows except embedding it into other applications. 
.PP Sensible signal handling is officially unsupported by Microsoft \- libev tries its best, but under most conditions, signals will simply not work. .PP Not a libev limitation but worth mentioning: windows apparently doesn't accept large writes: instead of resulting in a partial write, windows will either accept everything or return \f(CW\*(C`ENOBUFS\*(C'\fR if the buffer is too large, so make sure you only write small amounts into your sockets (less than a megabyte seems safe, but this apparently depends on the amount of memory available). .PP Due to the many, low, and arbitrary limits on the win32 platform and the abysmal performance of winsockets, using a large number of sockets is not recommended (and not reasonable). If your program needs to use more than a hundred or so sockets, then likely it needs to use a totally different implementation for windows, as libev offers the \s-1POSIX\s0 readiness notification model, which cannot be implemented efficiently on windows (due to Microsoft monopoly games). .PP A typical way to use libev under windows is to embed it (see the embedding section for details) and use the following \fIevwrap.h\fR header file instead of \fIev.h\fR: .PP .Vb 2 \& #define EV_STANDALONE /* keeps ev from requiring config.h */ \& #define EV_SELECT_IS_WINSOCKET 1 /* configure libev for windows select */ \& \& #include "ev.h" .Ve .PP And compile the following \fIevwrap.c\fR file into your project (make sure you do \fInot\fR compile the \fIev.c\fR or any other embedded source files!): .PP .Vb 2 \& #include "evwrap.h" \& #include "ev.c" .Ve .PP \fIThe winsocket \f(CI\*(C`select\*(C'\fI function\fR .IX Subsection "The winsocket select function" .PP The winsocket \f(CW\*(C`select\*(C'\fR function doesn't follow \s-1POSIX\s0 in that it requires socket \fIhandles\fR and not socket \fIfile descriptors\fR (it is also extremely buggy). This makes select very inefficient, and also requires a mapping from file descriptors to socket handles (the Microsoft C runtime provides the function \f(CW\*(C`_open_osfhandle\*(C'\fR for this). See the discussion of the \f(CW\*(C`EV_SELECT_USE_FD_SET\*(C'\fR, \f(CW\*(C`EV_SELECT_IS_WINSOCKET\*(C'\fR and \&\f(CW\*(C`EV_FD_TO_WIN32_HANDLE\*(C'\fR preprocessor symbols for more info. .PP The configuration for a \*(L"naked\*(R" win32 using the Microsoft runtime libraries and raw winsocket select is: .PP .Vb 2 \& #define EV_USE_SELECT 1 \& #define EV_SELECT_IS_WINSOCKET 1 /* forces EV_SELECT_USE_FD_SET, too */ .Ve .PP Note that winsocket's handling of fd sets is O(n), so you can easily get a complexity in the O(n^2) range when using win32. .PP \fILimited number of file descriptors\fR .IX Subsection "Limited number of file descriptors" .PP Windows has numerous arbitrary (and low) limits on things. .PP Early versions of winsocket's select only supported waiting for a maximum of \f(CW64\fR handles (probably owing to the fact that all windows kernels can only wait for \f(CW64\fR things at the same time internally; Microsoft recommends spawning a chain of threads, each waiting for 63 handles and the previous thread. Sounds great!). .PP Newer versions support more handles, but you need to define \f(CW\*(C`FD_SETSIZE\*(C'\fR to some high number (e.g. \f(CW2048\fR) before compiling the winsocket select call (which might be in libev or elsewhere, for example, perl and many other interpreters do their own select emulation on windows).
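.PP As an illustration only (this sketch is not part of the libev documentation; the placement and the value \f(CW2048\fR are merely examples taken from the advice above), such a define could sit at the very top of the \fIevwrap.h\fR file shown earlier, so that it is seen before \fIwinsock2.h\fR is pulled in: .PP .Vb 2 \& /* sketch: must be defined before any windows/winsock headers are included */ \& #define FD_SETSIZE 2048 .Ve .PP With that in place, the \f(CW\*(C`fd_set\*(C'\fR used by the winsocket \f(CW\*(C`select\*(C'\fR compiled into libev can hold up to 2048 handles instead of the default 64.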
.PP Another limit is the number of file descriptors in the Microsoft runtime libraries, which by default is \f(CW64\fR (there must be a hidden \fI64\fR fetish or something like this inside Microsoft). You can increase this by calling \f(CW\*(C`_setmaxstdio\*(C'\fR, which can increase this limit to \f(CW2048\fR (another arbitrary limit), but is broken in many versions of the Microsoft runtime libraries. This might get you to about \f(CW512\fR or \f(CW2048\fR sockets (depending on windows version and/or the phase of the moon). To get more, you need to wrap all I/O functions and provide your own fd management, but the cost of calling select (O(n^2)) will likely make this unworkable. .SS "\s-1PORTABILITY REQUIREMENTS\s0" .IX Subsection "PORTABILITY REQUIREMENTS" In addition to a working ISO-C implementation and of course the backend-specific APIs, libev relies on a few additional extensions: .ie n .IP """void (*)(ev_watcher_type *, int revents)"" must have compatible calling conventions regardless of ""ev_watcher_type *""." 4 .el .IP "\f(CWvoid (*)(ev_watcher_type *, int revents)\fR must have compatible calling conventions regardless of \f(CWev_watcher_type *\fR." 4 .IX Item "void (*)(ev_watcher_type *, int revents) must have compatible calling conventions regardless of ev_watcher_type *." Libev assumes not only that all watcher pointers have the same internal structure (guaranteed by \s-1POSIX\s0 but not by \s-1ISO C\s0 for example), but it also assumes that the same (machine) code can be used to call any watcher callback: The watcher callbacks have different type signatures, but libev calls them using an \f(CW\*(C`ev_watcher *\*(C'\fR internally. .IP "null pointers and integer zero are represented by 0 bytes" 4 .IX Item "null pointers and integer zero are represented by 0 bytes" Libev uses \f(CW\*(C`memset\*(C'\fR to initialise structs and arrays to \f(CW0\fR bytes, and relies on this setting pointers and integers to null. .IP "pointer accesses must be thread-atomic" 4 .IX Item "pointer accesses must be thread-atomic" Accessing a pointer value must be atomic: it must both be readable and writable in one piece \- this is the case on all current architectures. .ie n .IP """sig_atomic_t volatile"" must be thread-atomic as well" 4 .el .IP "\f(CWsig_atomic_t volatile\fR must be thread-atomic as well" 4 .IX Item "sig_atomic_t volatile must be thread-atomic as well" The type \f(CW\*(C`sig_atomic_t volatile\*(C'\fR (or whatever is defined as \&\f(CW\*(C`EV_ATOMIC_T\*(C'\fR) must be atomic with respect to accesses from different threads. This is not part of the specification for \f(CW\*(C`sig_atomic_t\*(C'\fR, but is believed to be sufficiently portable. .ie n .IP """sigprocmask"" must work in a threaded environment" 4 .el .IP "\f(CWsigprocmask\fR must work in a threaded environment" 4 .IX Item "sigprocmask must work in a threaded environment" Libev uses \f(CW\*(C`sigprocmask\*(C'\fR to temporarily block signals. This is not allowed in a threaded program (\f(CW\*(C`pthread_sigmask\*(C'\fR has to be used). Typical pthread implementations will either allow \f(CW\*(C`sigprocmask\*(C'\fR in the \*(L"main thread\*(R" or will block signals process-wide; both behaviours would be compatible with libev. Interaction between \f(CW\*(C`sigprocmask\*(C'\fR and \&\f(CW\*(C`pthread_sigmask\*(C'\fR could complicate things, however. .Sp The most portable way to handle signals is to block signals in all threads except the initial one, and run the signal handling loop in the initial thread as well.
.ie n .IP """long"" must be large enough for common memory allocation sizes" 4 .el .IP "\f(CWlong\fR must be large enough for common memory allocation sizes" 4 .IX Item "long must be large enough for common memory allocation sizes" To improve portability and simplify its \s-1API,\s0 libev uses \f(CW\*(C`long\*(C'\fR internally instead of \f(CW\*(C`size_t\*(C'\fR when allocating its data structures. On non-POSIX systems (Microsoft...) this might be unexpectedly low, but is still at least 31 bits everywhere, which is enough for hundreds of millions of watchers. .ie n .IP """double"" must hold a time value in seconds with enough accuracy" 4 .el .IP "\f(CWdouble\fR must hold a time value in seconds with enough accuracy" 4 .IX Item "double must hold a time value in seconds with enough accuracy" The type \f(CW\*(C`double\*(C'\fR is used to represent timestamps. It is required to have at least 51 bits of mantissa (and 9 bits of exponent), which is good enough for at least into the year 4000 with millisecond accuracy (the design goal for libev). This requirement is overfulfilled by implementations using \s-1IEEE 754,\s0 which is basically all existing ones. .Sp With \s-1IEEE 754\s0 doubles, you get microsecond accuracy until at least the year 2255 (and millisecond accuracy till the year 287396 \- by then, libev is either obsolete or somebody patched it to use \f(CW\*(C`long double\*(C'\fR or something like that, just kidding). .PP If you know of other additional requirements drop me a note. .SH "ALGORITHMIC COMPLEXITIES" .IX Header "ALGORITHMIC COMPLEXITIES" In this section the complexities of (many of) the algorithms used inside libev will be documented. For complexity discussions about backends see the documentation for \f(CW\*(C`ev_default_init\*(C'\fR. .PP All of the following are about amortised time: If an array needs to be extended, libev needs to realloc and move the whole array, but this happens asymptotically rarer with higher number of elements, so O(1) might mean that libev does a lengthy realloc operation in rare cases, but on average it is much faster and asymptotically approaches constant time. .IP "Starting and stopping timer/periodic watchers: O(log skipped_other_timers)" 4 .IX Item "Starting and stopping timer/periodic watchers: O(log skipped_other_timers)" This means that, when you have a watcher that triggers in one hour and there are 100 watchers that would trigger before that, then inserting will have to skip roughly seven (\f(CW\*(C`ld 100\*(C'\fR) of these watchers. .IP "Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)" 4 .IX Item "Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers)" That means that changing a timer costs less than removing/adding them, as only the relative motion in the event queue has to be paid for. .IP "Starting io/check/prepare/idle/signal/child/fork/async watchers: O(1)" 4 .IX Item "Starting io/check/prepare/idle/signal/child/fork/async watchers: O(1)" These just add the watcher into an array or at the head of a list. 
.IP "Stopping check/prepare/idle/fork/async watchers: O(1)" 4 .IX Item "Stopping check/prepare/idle/fork/async watchers: O(1)" .PD 0 .IP "Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % \s-1EV_PID_HASHSIZE\s0))" 4 .IX Item "Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE))" .PD These watchers are stored in lists, so they need to be walked to find the correct watcher to remove. The lists are usually short (you don't usually have many watchers waiting for the same fd or signal: one is typical, two is rare). .IP "Finding the next timer in each loop iteration: O(1)" 4 .IX Item "Finding the next timer in each loop iteration: O(1)" By virtue of using a binary or 4\-heap, the next timer is always found at a fixed position in the storage array. .IP "Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)" 4 .IX Item "Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd)" A change means an I/O watcher gets started or stopped, which requires libev to recalculate its status (and possibly tell the kernel, depending on backend and whether \f(CW\*(C`ev_io_set\*(C'\fR was used). .IP "Activating one watcher (putting it into the pending state): O(1)" 4 .IX Item "Activating one watcher (putting it into the pending state): O(1)" .PD 0 .IP "Priority handling: O(number_of_priorities)" 4 .IX Item "Priority handling: O(number_of_priorities)" .PD Priorities are implemented by allocating some space for each priority. When doing priority-based operations, libev usually has to linearly search all the priorities, but starting/stopping and activating watchers becomes O(1) with respect to priority handling. .IP "Sending an ev_async: O(1)" 4 .IX Item "Sending an ev_async: O(1)" .PD 0 .IP "Processing ev_async_send: O(number_of_async_watchers)" 4 .IX Item "Processing ev_async_send: O(number_of_async_watchers)" .IP "Processing signals: O(max_signal_number)" 4 .IX Item "Processing signals: O(max_signal_number)" .PD Sending involves a system call \fIiff\fR there were no other \f(CW\*(C`ev_async_send\*(C'\fR calls in the current loop iteration and the loop is currently blocked. Checking for async and signal events involves iterating over all running async watchers or all signal numbers. .SH "PORTING FROM LIBEV 3.X TO 4.X" .IX Header "PORTING FROM LIBEV 3.X TO 4.X" The major version 4 introduced some incompatible changes to the \s-1API.\s0 .PP At the moment, the \f(CW\*(C`ev.h\*(C'\fR header file provides compatibility definitions for all changes, so most programs should still compile. The compatibility layer might be removed in later versions of libev, so better update to the new \s-1API\s0 early than late. .ie n .IP """EV_COMPAT3"" backwards compatibility mechanism" 4 .el .IP "\f(CWEV_COMPAT3\fR backwards compatibility mechanism" 4 .IX Item "EV_COMPAT3 backwards compatibility mechanism" The backward compatibility mechanism can be controlled by \&\f(CW\*(C`EV_COMPAT3\*(C'\fR. See \*(L"\s-1PREPROCESSOR SYMBOLS/MACROS\*(R"\s0 in the \*(L"\s-1EMBEDDING\*(R"\s0 section. 
.ie n .IP """ev_default_destroy"" and ""ev_default_fork"" have been removed" 4 .el .IP "\f(CWev_default_destroy\fR and \f(CWev_default_fork\fR have been removed" 4 .IX Item "ev_default_destroy and ev_default_fork have been removed" These calls can be replaced easily by their \f(CW\*(C`ev_loop_xxx\*(C'\fR counterparts: .Sp .Vb 2 \& ev_loop_destroy (EV_DEFAULT_UC); \& ev_loop_fork (EV_DEFAULT); .Ve .IP "function/symbol renames" 4 .IX Item "function/symbol renames" A number of functions and symbols have been renamed: .Sp .Vb 3 \& ev_loop => ev_run \& EVLOOP_NONBLOCK => EVRUN_NOWAIT \& EVLOOP_ONESHOT => EVRUN_ONCE \& \& ev_unloop => ev_break \& EVUNLOOP_CANCEL => EVBREAK_CANCEL \& EVUNLOOP_ONE => EVBREAK_ONE \& EVUNLOOP_ALL => EVBREAK_ALL \& \& EV_TIMEOUT => EV_TIMER \& \& ev_loop_count => ev_iteration \& ev_loop_depth => ev_depth \& ev_loop_verify => ev_verify .Ve .Sp Most functions working on \f(CW\*(C`struct ev_loop\*(C'\fR objects don't have an \&\f(CW\*(C`ev_loop_\*(C'\fR prefix, so it was removed; \f(CW\*(C`ev_loop\*(C'\fR, \f(CW\*(C`ev_unloop\*(C'\fR and associated constants have been renamed to not collide with the \f(CW\*(C`struct ev_loop\*(C'\fR anymore and \f(CW\*(C`EV_TIMER\*(C'\fR now follows the same naming scheme as all other watcher types. Note that \f(CW\*(C`ev_loop_fork\*(C'\fR is still called \&\f(CW\*(C`ev_loop_fork\*(C'\fR because it would otherwise clash with the \f(CW\*(C`ev_fork\*(C'\fR typedef. .ie n .IP """EV_MINIMAL"" mechanism replaced by ""EV_FEATURES""" 4 .el .IP "\f(CWEV_MINIMAL\fR mechanism replaced by \f(CWEV_FEATURES\fR" 4 .IX Item "EV_MINIMAL mechanism replaced by EV_FEATURES" The preprocessor symbol \f(CW\*(C`EV_MINIMAL\*(C'\fR has been replaced by a different mechanism, \f(CW\*(C`EV_FEATURES\*(C'\fR. Programs using \f(CW\*(C`EV_MINIMAL\*(C'\fR usually compile and work, but the library code will of course be larger. .SH "GLOSSARY" .IX Header "GLOSSARY" .IP "active" 4 .IX Item "active" A watcher is active as long as it has been started and not yet stopped. See \*(L"\s-1WATCHER STATES\*(R"\s0 for details. .IP "application" 4 .IX Item "application" In this document, an application is whatever is using libev. .IP "backend" 4 .IX Item "backend" The part of the code dealing with the operating system interfaces. .IP "callback" 4 .IX Item "callback" The address of a function that is called when some event has been detected. Callbacks are being passed the event loop, the watcher that received the event, and the actual event bitset. .IP "callback/watcher invocation" 4 .IX Item "callback/watcher invocation" The act of calling the callback associated with a watcher. .IP "event" 4 .IX Item "event" A change of state of some external event, such as data now being available for reading on a file descriptor, time having passed or simply not having any other events happening anymore. .Sp In libev, events are represented as single bits (such as \f(CW\*(C`EV_READ\*(C'\fR or \&\f(CW\*(C`EV_TIMER\*(C'\fR). .IP "event library" 4 .IX Item "event library" A software package implementing an event model and loop. .IP "event loop" 4 .IX Item "event loop" An entity that handles and processes external events and converts them into callback invocations. .IP "event model" 4 .IX Item "event model" The model used to describe how an event loop handles and processes watchers and events. .IP "pending" 4 .IX Item "pending" A watcher is pending as soon as the corresponding event has been detected. See \*(L"\s-1WATCHER STATES\*(R"\s0 for details. 
.IP "real time" 4 .IX Item "real time" The physical time that is observed. It is apparently strictly monotonic :) .IP "wall-clock time" 4 .IX Item "wall-clock time" The time and date as shown on clocks. Unlike real time, it can actually be wrong and jump forwards and backwards, e.g. when you adjust your clock. .IP "watcher" 4 .IX Item "watcher" A data structure that describes interest in certain events. Watchers need to be started (attached to an event loop) before they can receive events. .SH "AUTHOR" .IX Header "AUTHOR" Marc Lehmann , with repeated corrections by Mikael Magnusson and Emanuele Giaquinta, and minor corrections by many others. gevent-24.11.1/deps/libev/ev.c000066400000000000000000004502551471441230600157770ustar00rootroot00000000000000/* * libev event processing core, watcher management * * Copyright (c) 2007-2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. 
*/ /* this big block deduces configuration from config.h */ #ifndef EV_STANDALONE # ifdef EV_CONFIG_H # include EV_CONFIG_H # else # include "config.h" # endif # if HAVE_FLOOR # ifndef EV_USE_FLOOR # define EV_USE_FLOOR 1 # endif # endif # if HAVE_CLOCK_SYSCALL # ifndef EV_USE_CLOCK_SYSCALL # define EV_USE_CLOCK_SYSCALL 1 # ifndef EV_USE_REALTIME # define EV_USE_REALTIME 0 # endif # ifndef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 1 # endif # endif # elif !defined EV_USE_CLOCK_SYSCALL # define EV_USE_CLOCK_SYSCALL 0 # endif # if HAVE_CLOCK_GETTIME # ifndef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 1 # endif # ifndef EV_USE_REALTIME # define EV_USE_REALTIME 0 # endif # else # ifndef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 0 # endif # ifndef EV_USE_REALTIME # define EV_USE_REALTIME 0 # endif # endif # if HAVE_NANOSLEEP # ifndef EV_USE_NANOSLEEP # define EV_USE_NANOSLEEP EV_FEATURE_OS # endif # else # undef EV_USE_NANOSLEEP # define EV_USE_NANOSLEEP 0 # endif # if HAVE_SELECT && HAVE_SYS_SELECT_H # ifndef EV_USE_SELECT # define EV_USE_SELECT EV_FEATURE_BACKENDS # endif # else # undef EV_USE_SELECT # define EV_USE_SELECT 0 # endif # if HAVE_POLL && HAVE_POLL_H # ifndef EV_USE_POLL # define EV_USE_POLL EV_FEATURE_BACKENDS # endif # else # undef EV_USE_POLL # define EV_USE_POLL 0 # endif # if HAVE_EPOLL_CTL && HAVE_SYS_EPOLL_H # ifndef EV_USE_EPOLL # define EV_USE_EPOLL EV_FEATURE_BACKENDS # endif # else # undef EV_USE_EPOLL # define EV_USE_EPOLL 0 # endif # if HAVE_LINUX_AIO_ABI_H # ifndef EV_USE_LINUXAIO # define EV_USE_LINUXAIO 0 /* was: EV_FEATURE_BACKENDS, always off by default */ # endif # else # undef EV_USE_LINUXAIO # define EV_USE_LINUXAIO 0 # endif # if HAVE_LINUX_FS_H && HAVE_SYS_TIMERFD_H && HAVE_KERNEL_RWF_T # ifndef EV_USE_IOURING # define EV_USE_IOURING EV_FEATURE_BACKENDS # endif # else # undef EV_USE_IOURING # define EV_USE_IOURING 0 # endif # if HAVE_KQUEUE && HAVE_SYS_EVENT_H # ifndef EV_USE_KQUEUE # define EV_USE_KQUEUE EV_FEATURE_BACKENDS # endif # else # undef EV_USE_KQUEUE # define EV_USE_KQUEUE 0 # endif # if HAVE_PORT_H && HAVE_PORT_CREATE # ifndef EV_USE_PORT # define EV_USE_PORT EV_FEATURE_BACKENDS # endif # else # undef EV_USE_PORT # define EV_USE_PORT 0 # endif # if HAVE_INOTIFY_INIT && HAVE_SYS_INOTIFY_H # ifndef EV_USE_INOTIFY # define EV_USE_INOTIFY EV_FEATURE_OS # endif # else # undef EV_USE_INOTIFY # define EV_USE_INOTIFY 0 # endif # if HAVE_SIGNALFD && HAVE_SYS_SIGNALFD_H # ifndef EV_USE_SIGNALFD # define EV_USE_SIGNALFD EV_FEATURE_OS # endif # else # undef EV_USE_SIGNALFD # define EV_USE_SIGNALFD 0 # endif # if HAVE_EVENTFD # ifndef EV_USE_EVENTFD # define EV_USE_EVENTFD EV_FEATURE_OS # endif # else # undef EV_USE_EVENTFD # define EV_USE_EVENTFD 0 # endif # if HAVE_SYS_TIMERFD_H # ifndef EV_USE_TIMERFD # define EV_USE_TIMERFD EV_FEATURE_OS # endif # else # undef EV_USE_TIMERFD # define EV_USE_TIMERFD 0 # endif #endif /* OS X, in its infinite idiocy, actually HARDCODES * a limit of 1024 into their select. Where people have brains, * OS X engineers apparently have a vacuum. Or maybe they were * ordered to have a vacuum, or they do anything for money. * This might help. Or not. * Note that this must be defined early, as other include files * will rely on this define as well. 
*/ #define _DARWIN_UNLIMITED_SELECT 1 #include #include #include #include #include #include #include #include #include #include #include #ifdef EV_H # include EV_H #else # include "ev.h" #endif #if EV_NO_THREADS # undef EV_NO_SMP # define EV_NO_SMP 1 # undef ECB_NO_THREADS # define ECB_NO_THREADS 1 #endif #if EV_NO_SMP # undef EV_NO_SMP # define ECB_NO_SMP 1 #endif #ifndef _WIN32 # include # include # include #else # include # define WIN32_LEAN_AND_MEAN # include # include # ifndef EV_SELECT_IS_WINSOCKET # define EV_SELECT_IS_WINSOCKET 1 # endif # undef EV_AVOID_STDIO #endif /* this block tries to deduce configuration from header-defined symbols and defaults */ /* try to deduce the maximum number of signals on this platform */ #if defined EV_NSIG /* use what's provided */ #elif defined NSIG # define EV_NSIG (NSIG) #elif defined _NSIG # define EV_NSIG (_NSIG) #elif defined SIGMAX # define EV_NSIG (SIGMAX+1) #elif defined SIG_MAX # define EV_NSIG (SIG_MAX+1) #elif defined _SIG_MAX # define EV_NSIG (_SIG_MAX+1) #elif defined MAXSIG # define EV_NSIG (MAXSIG+1) #elif defined MAX_SIG # define EV_NSIG (MAX_SIG+1) #elif defined SIGARRAYSIZE # define EV_NSIG (SIGARRAYSIZE) /* Assume ary[SIGARRAYSIZE] */ #elif defined _sys_nsig # define EV_NSIG (_sys_nsig) /* Solaris 2.5 */ #else # define EV_NSIG (8 * sizeof (sigset_t) + 1) #endif #ifndef EV_USE_FLOOR # define EV_USE_FLOOR 0 #endif #ifndef EV_USE_CLOCK_SYSCALL # if __linux && __GLIBC__ == 2 && __GLIBC_MINOR__ < 17 # define EV_USE_CLOCK_SYSCALL EV_FEATURE_OS # else # define EV_USE_CLOCK_SYSCALL 0 # endif #endif #if !(_POSIX_TIMERS > 0) # ifndef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 0 # endif # ifndef EV_USE_REALTIME # define EV_USE_REALTIME 0 # endif #endif #ifndef EV_USE_MONOTONIC # if defined _POSIX_MONOTONIC_CLOCK && _POSIX_MONOTONIC_CLOCK >= 0 # define EV_USE_MONOTONIC EV_FEATURE_OS # else # define EV_USE_MONOTONIC 0 # endif #endif #ifndef EV_USE_REALTIME # define EV_USE_REALTIME !EV_USE_CLOCK_SYSCALL #endif #ifndef EV_USE_NANOSLEEP # if _POSIX_C_SOURCE >= 199309L # define EV_USE_NANOSLEEP EV_FEATURE_OS # else # define EV_USE_NANOSLEEP 0 # endif #endif #ifndef EV_USE_SELECT # define EV_USE_SELECT EV_FEATURE_BACKENDS #endif #ifndef EV_USE_POLL # ifdef _WIN32 # define EV_USE_POLL 0 # else # define EV_USE_POLL EV_FEATURE_BACKENDS # endif #endif #ifndef EV_USE_EPOLL # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 4)) # define EV_USE_EPOLL EV_FEATURE_BACKENDS # else # define EV_USE_EPOLL 0 # endif #endif #ifndef EV_USE_KQUEUE # define EV_USE_KQUEUE 0 #endif #ifndef EV_USE_PORT # define EV_USE_PORT 0 #endif #ifndef EV_USE_LINUXAIO # if __linux /* libev currently assumes linux/aio_abi.h is always available on linux */ # define EV_USE_LINUXAIO 0 /* was: 1, always off by default */ # else # define EV_USE_LINUXAIO 0 # endif #endif #ifndef EV_USE_IOURING # if __linux /* later checks might disable again */ # define EV_USE_IOURING 1 # else # define EV_USE_IOURING 0 # endif #endif #ifndef EV_USE_INOTIFY # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 4)) # define EV_USE_INOTIFY EV_FEATURE_OS # else # define EV_USE_INOTIFY 0 # endif #endif #ifndef EV_PID_HASHSIZE # define EV_PID_HASHSIZE EV_FEATURE_DATA ? 16 : 1 #endif #ifndef EV_INOTIFY_HASHSIZE # define EV_INOTIFY_HASHSIZE EV_FEATURE_DATA ? 
16 : 1 #endif #ifndef EV_USE_EVENTFD # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 7)) # define EV_USE_EVENTFD EV_FEATURE_OS # else # define EV_USE_EVENTFD 0 # endif #endif #ifndef EV_USE_SIGNALFD # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 7)) # define EV_USE_SIGNALFD EV_FEATURE_OS # else # define EV_USE_SIGNALFD 0 # endif #endif #ifndef EV_USE_TIMERFD # if __linux && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 8)) # define EV_USE_TIMERFD EV_FEATURE_OS # else # define EV_USE_TIMERFD 0 # endif #endif #if 0 /* debugging */ # define EV_VERIFY 3 # define EV_USE_4HEAP 1 # define EV_HEAP_CACHE_AT 1 #endif #ifndef EV_VERIFY # define EV_VERIFY (EV_FEATURE_API ? 1 : 0) #endif #ifndef EV_USE_4HEAP # define EV_USE_4HEAP EV_FEATURE_DATA #endif #ifndef EV_HEAP_CACHE_AT # define EV_HEAP_CACHE_AT EV_FEATURE_DATA #endif #ifdef __ANDROID__ /* supposedly, android doesn't typedef fd_mask */ # undef EV_USE_SELECT # define EV_USE_SELECT 0 /* supposedly, we need to include syscall.h, not sys/syscall.h, so just disable */ # undef EV_USE_CLOCK_SYSCALL # define EV_USE_CLOCK_SYSCALL 0 #endif /* aix's poll.h seems to cause lots of trouble */ #ifdef _AIX /* AIX has a completely broken poll.h header */ # undef EV_USE_POLL # define EV_USE_POLL 0 #endif /* on linux, we can use a (slow) syscall to avoid a dependency on pthread, */ /* which makes programs even slower. might work on other unices, too. */ #if EV_USE_CLOCK_SYSCALL # include # ifdef SYS_clock_gettime # define clock_gettime(id, ts) syscall (SYS_clock_gettime, (id), (ts)) # undef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 1 # define EV_NEED_SYSCALL 1 # else # undef EV_USE_CLOCK_SYSCALL # define EV_USE_CLOCK_SYSCALL 0 # endif #endif /* this block fixes any misconfiguration where we know we run into trouble otherwise */ #ifndef CLOCK_MONOTONIC # undef EV_USE_MONOTONIC # define EV_USE_MONOTONIC 0 #endif #ifndef CLOCK_REALTIME # undef EV_USE_REALTIME # define EV_USE_REALTIME 0 #endif #if !EV_STAT_ENABLE # undef EV_USE_INOTIFY # define EV_USE_INOTIFY 0 #endif #if __linux && EV_USE_IOURING # include # if LINUX_VERSION_CODE < KERNEL_VERSION(4,14,0) # undef EV_USE_IOURING # define EV_USE_IOURING 0 # endif #endif #if !EV_USE_NANOSLEEP /* hp-ux has it in sys/time.h, which we unconditionally include above */ # if !defined _WIN32 && !defined __hpux # include # endif #endif #if EV_USE_LINUXAIO # include # if SYS_io_getevents && EV_USE_EPOLL /* linuxaio backend requires epoll backend */ # define EV_NEED_SYSCALL 1 # else # undef EV_USE_LINUXAIO # define EV_USE_LINUXAIO 0 # endif #endif #if EV_USE_IOURING # include # if !SYS_io_uring_setup && __linux && !__alpha # define SYS_io_uring_setup 425 # define SYS_io_uring_enter 426 # define SYS_io_uring_wregister 427 # endif # if SYS_io_uring_setup && EV_USE_EPOLL /* iouring backend requires epoll backend */ # define EV_NEED_SYSCALL 1 # else # undef EV_USE_IOURING # define EV_USE_IOURING 0 # endif #endif #if EV_USE_INOTIFY # include # include /* some very old inotify.h headers don't have IN_DONT_FOLLOW */ # ifndef IN_DONT_FOLLOW # undef EV_USE_INOTIFY # define EV_USE_INOTIFY 0 # endif #endif #if EV_USE_EVENTFD /* our minimum requirement is glibc 2.7 which has the stub, but not the full header */ # include # ifndef EFD_NONBLOCK # define EFD_NONBLOCK O_NONBLOCK # endif # ifndef EFD_CLOEXEC # ifdef O_CLOEXEC # define EFD_CLOEXEC O_CLOEXEC # else # define EFD_CLOEXEC 02000000 # endif # endif EV_CPP(extern "C") int (eventfd) (unsigned int initval, int flags); #endif #if 
EV_USE_SIGNALFD /* our minimum requirement is glibc 2.7 which has the stub, but not the full header */ # include # ifndef SFD_NONBLOCK # define SFD_NONBLOCK O_NONBLOCK # endif # ifndef SFD_CLOEXEC # ifdef O_CLOEXEC # define SFD_CLOEXEC O_CLOEXEC # else # define SFD_CLOEXEC 02000000 # endif # endif EV_CPP (extern "C") int (signalfd) (int fd, const sigset_t *mask, int flags); struct signalfd_siginfo { uint32_t ssi_signo; char pad[128 - sizeof (uint32_t)]; }; #endif /* for timerfd, libev core requires TFD_TIMER_CANCEL_ON_SET &c */ #if EV_USE_TIMERFD # include /* timerfd is only used for periodics */ # if !(defined (TFD_TIMER_CANCEL_ON_SET) && defined (TFD_CLOEXEC) && defined (TFD_NONBLOCK)) || !EV_PERIODIC_ENABLE # undef EV_USE_TIMERFD # define EV_USE_TIMERFD 0 # endif #endif /*****************************************************************************/ #if EV_VERIFY >= 3 # define EV_FREQUENT_CHECK ev_verify (EV_A) #else # define EV_FREQUENT_CHECK do { } while (0) #endif /* * This is used to work around floating point rounding problems. * This value is good at least till the year 4000. */ #define MIN_INTERVAL 0.0001220703125 /* 1/2**13, good till 4000 */ /*#define MIN_INTERVAL 0.00000095367431640625 /* 1/2**20, good till 2200 */ #define MIN_TIMEJUMP 1. /* minimum timejump that gets detected (if monotonic clock available) */ #define MAX_BLOCKTIME 59.743 /* never wait longer than this time (to detect time jumps) */ #define MAX_BLOCKTIME2 1500001.07 /* same, but when timerfd is used to detect jumps, also safe delay to not overflow */ /* find a portable timestamp that is "always" in the future but fits into time_t. * this is quite hard, and we are mostly guessing - we handle 32 bit signed/unsigned time_t, * and sizes larger than 32 bit, and maybe the unlikely floating point time_t */ #define EV_TSTAMP_HUGE \ (sizeof (time_t) >= 8 ? 10000000000000. \ : 0 < (time_t)4294967295 ? 4294967295. \ : 2147483647.) \ #ifndef EV_TS_CONST # define EV_TS_CONST(nv) nv # define EV_TS_TO_MSEC(a) a * 1e3 + 0.9999 # define EV_TS_FROM_USEC(us) us * 1e-6 # define EV_TV_SET(tv,t) do { tv.tv_sec = (long)t; tv.tv_usec = (long)((t - tv.tv_sec) * 1e6); } while (0) # define EV_TS_SET(ts,t) do { ts.tv_sec = (long)t; ts.tv_nsec = (long)((t - ts.tv_sec) * 1e9); } while (0) # define EV_TV_GET(tv) ((tv).tv_sec + (tv).tv_usec * 1e-6) # define EV_TS_GET(ts) ((ts).tv_sec + (ts).tv_nsec * 1e-9) #endif /* the following is ecb.h embedded into libev - use update_ev_c to update from an external copy */ /* ECB.H BEGIN */ /* * libecb - http://software.schmorp.de/pkg/libecb * * Copyright (©) 2009-2015,2018-2020 Marc Alexander Lehmann * Copyright (©) 2011 Emanuele Giaquinta * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #ifndef ECB_H #define ECB_H /* 16 bits major, 16 bits minor */ #define ECB_VERSION 0x00010008 #include /* for memcpy */ #if defined (_WIN32) && !defined (__MINGW32__) typedef signed char int8_t; typedef unsigned char uint8_t; typedef signed char int_fast8_t; typedef unsigned char uint_fast8_t; typedef signed short int16_t; typedef unsigned short uint16_t; typedef signed int int_fast16_t; typedef unsigned int uint_fast16_t; typedef signed int int32_t; typedef unsigned int uint32_t; typedef signed int int_fast32_t; typedef unsigned int uint_fast32_t; #if __GNUC__ typedef signed long long int64_t; typedef unsigned long long uint64_t; #else /* _MSC_VER || __BORLANDC__ */ typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; #endif typedef int64_t int_fast64_t; typedef uint64_t uint_fast64_t; #ifdef _WIN64 #define ECB_PTRSIZE 8 typedef uint64_t uintptr_t; typedef int64_t intptr_t; #else #define ECB_PTRSIZE 4 typedef uint32_t uintptr_t; typedef int32_t intptr_t; #endif #else #include #if (defined INTPTR_MAX ? INTPTR_MAX : ULONG_MAX) > 0xffffffffU #define ECB_PTRSIZE 8 #else #define ECB_PTRSIZE 4 #endif #endif #define ECB_GCC_AMD64 (__amd64 || __amd64__ || __x86_64 || __x86_64__) #define ECB_MSVC_AMD64 (_M_AMD64 || _M_X64) #ifndef ECB_OPTIMIZE_SIZE #if __OPTIMIZE_SIZE__ #define ECB_OPTIMIZE_SIZE 1 #else #define ECB_OPTIMIZE_SIZE 0 #endif #endif /* work around x32 idiocy by defining proper macros */ #if ECB_GCC_AMD64 || ECB_MSVC_AMD64 #if _ILP32 #define ECB_AMD64_X32 1 #else #define ECB_AMD64 1 #endif #endif /* many compilers define _GNUC_ to some versions but then only implement * what their idiot authors think are the "more important" extensions, * causing enormous grief in return for some better fake benchmark numbers. * or so. * we try to detect these and simply assume they are not gcc - if they have * an issue with that they should have done it right in the first place. 
*/ #if !defined __GNUC_MINOR__ || defined __INTEL_COMPILER || defined __SUNPRO_C || defined __SUNPRO_CC || defined __llvm__ || defined __clang__ #define ECB_GCC_VERSION(major,minor) 0 #else #define ECB_GCC_VERSION(major,minor) (__GNUC__ > (major) || (__GNUC__ == (major) && __GNUC_MINOR__ >= (minor))) #endif #define ECB_CLANG_VERSION(major,minor) (__clang_major__ > (major) || (__clang_major__ == (major) && __clang_minor__ >= (minor))) #if __clang__ && defined __has_builtin #define ECB_CLANG_BUILTIN(x) __has_builtin (x) #else #define ECB_CLANG_BUILTIN(x) 0 #endif #if __clang__ && defined __has_extension #define ECB_CLANG_EXTENSION(x) __has_extension (x) #else #define ECB_CLANG_EXTENSION(x) 0 #endif #define ECB_CPP (__cplusplus+0) #define ECB_CPP11 (__cplusplus >= 201103L) #define ECB_CPP14 (__cplusplus >= 201402L) #define ECB_CPP17 (__cplusplus >= 201703L) #if ECB_CPP #define ECB_C 0 #define ECB_STDC_VERSION 0 #else #define ECB_C 1 #define ECB_STDC_VERSION __STDC_VERSION__ #endif #define ECB_C99 (ECB_STDC_VERSION >= 199901L) #define ECB_C11 (ECB_STDC_VERSION >= 201112L) #define ECB_C17 (ECB_STDC_VERSION >= 201710L) #if ECB_CPP #define ECB_EXTERN_C extern "C" #define ECB_EXTERN_C_BEG ECB_EXTERN_C { #define ECB_EXTERN_C_END } #else #define ECB_EXTERN_C extern #define ECB_EXTERN_C_BEG #define ECB_EXTERN_C_END #endif /*****************************************************************************/ /* ECB_NO_THREADS - ecb is not used by multiple threads, ever */ /* ECB_NO_SMP - ecb might be used in multiple threads, but only on a single cpu */ #if ECB_NO_THREADS #define ECB_NO_SMP 1 #endif #if ECB_NO_SMP #define ECB_MEMORY_FENCE do { } while (0) #endif /* http://www-01.ibm.com/support/knowledgecenter/SSGH3R_13.1.0/com.ibm.xlcpp131.aix.doc/compiler_ref/compiler_builtins.html */ #if __xlC__ && ECB_CPP #include #endif #if 1400 <= _MSC_VER #include /* fence functions _ReadBarrier, also bit search functions _BitScanReverse */ #endif #ifndef ECB_MEMORY_FENCE #if ECB_GCC_VERSION(2,5) || defined __INTEL_COMPILER || (__llvm__ && __GNUC__) || __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110 #define ECB_MEMORY_FENCE_RELAXED __asm__ __volatile__ ("" : : : "memory") #if __i386 || __i386__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("lock; orb $0, -1(%%esp)" : : : "memory") #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory") #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory") #elif ECB_GCC_AMD64 #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mfence" : : : "memory") #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("" : : : "memory") #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("" : : : "memory") #elif __powerpc__ || __ppc__ || __powerpc64__ || __ppc64__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("sync" : : : "memory") #elif defined __ARM_ARCH_2__ \ || defined __ARM_ARCH_3__ || defined __ARM_ARCH_3M__ \ || defined __ARM_ARCH_4__ || defined __ARM_ARCH_4T__ \ || defined __ARM_ARCH_5__ || defined __ARM_ARCH_5E__ \ || defined __ARM_ARCH_5T__ || defined __ARM_ARCH_5TE__ \ || defined __ARM_ARCH_5TEJ__ /* should not need any, unless running old code on newer cpu - arm doesn't support that */ #elif defined __ARM_ARCH_6__ || defined __ARM_ARCH_6J__ \ || defined __ARM_ARCH_6K__ || defined __ARM_ARCH_6ZK__ \ || defined __ARM_ARCH_6T2__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mcr p15,0,%0,c7,c10,5" : : "r" (0) : "memory") #elif defined __ARM_ARCH_7__ || defined __ARM_ARCH_7A__ \ || defined __ARM_ARCH_7R__ || defined __ARM_ARCH_7M__ #define 
ECB_MEMORY_FENCE __asm__ __volatile__ ("dmb" : : : "memory") #elif __aarch64__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("dmb ish" : : : "memory") #elif (__sparc || __sparc__) && !(__sparc_v8__ || defined __sparcv8) #define ECB_MEMORY_FENCE __asm__ __volatile__ ("membar #LoadStore | #LoadLoad | #StoreStore | #StoreLoad" : : : "memory") #define ECB_MEMORY_FENCE_ACQUIRE __asm__ __volatile__ ("membar #LoadStore | #LoadLoad" : : : "memory") #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("membar #LoadStore | #StoreStore") #elif defined __s390__ || defined __s390x__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("bcr 15,0" : : : "memory") #elif defined __mips__ /* GNU/Linux emulates sync on mips1 architectures, so we force its use */ /* anybody else who still uses mips1 is supposed to send in their version, with detection code. */ #define ECB_MEMORY_FENCE __asm__ __volatile__ (".set mips2; sync; .set mips0" : : : "memory") #elif defined __alpha__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mb" : : : "memory") #elif defined __hppa__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("" : : : "memory") #define ECB_MEMORY_FENCE_RELEASE __asm__ __volatile__ ("") #elif defined __ia64__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("mf" : : : "memory") #elif defined __m68k__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("" : : : "memory") #elif defined __m88k__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("tb1 0,%%r0,128" : : : "memory") #elif defined __sh__ #define ECB_MEMORY_FENCE __asm__ __volatile__ ("" : : : "memory") #endif #endif #endif #ifndef ECB_MEMORY_FENCE #if ECB_GCC_VERSION(4,7) /* see comment below (stdatomic.h) about the C11 memory model. */ #define ECB_MEMORY_FENCE __atomic_thread_fence (__ATOMIC_SEQ_CST) #define ECB_MEMORY_FENCE_ACQUIRE __atomic_thread_fence (__ATOMIC_ACQUIRE) #define ECB_MEMORY_FENCE_RELEASE __atomic_thread_fence (__ATOMIC_RELEASE) #define ECB_MEMORY_FENCE_RELAXED __atomic_thread_fence (__ATOMIC_RELAXED) #elif ECB_CLANG_EXTENSION(c_atomic) /* see comment below (stdatomic.h) about the C11 memory model. */ #define ECB_MEMORY_FENCE __c11_atomic_thread_fence (__ATOMIC_SEQ_CST) #define ECB_MEMORY_FENCE_ACQUIRE __c11_atomic_thread_fence (__ATOMIC_ACQUIRE) #define ECB_MEMORY_FENCE_RELEASE __c11_atomic_thread_fence (__ATOMIC_RELEASE) #define ECB_MEMORY_FENCE_RELAXED __c11_atomic_thread_fence (__ATOMIC_RELAXED) #elif ECB_GCC_VERSION(4,4) || defined __INTEL_COMPILER || defined __clang__ #define ECB_MEMORY_FENCE __sync_synchronize () #elif _MSC_VER >= 1500 /* VC++ 2008 */ /* apparently, microsoft broke all the memory barrier stuff in Visual Studio 2008... */ #pragma intrinsic(_ReadBarrier,_WriteBarrier,_ReadWriteBarrier) #define ECB_MEMORY_FENCE _ReadWriteBarrier (); MemoryBarrier() #define ECB_MEMORY_FENCE_ACQUIRE _ReadWriteBarrier (); MemoryBarrier() /* according to msdn, _ReadBarrier is not a load fence */ #define ECB_MEMORY_FENCE_RELEASE _WriteBarrier (); MemoryBarrier() #elif _MSC_VER >= 1400 /* VC++ 2005 */ #pragma intrinsic(_ReadBarrier,_WriteBarrier,_ReadWriteBarrier) #define ECB_MEMORY_FENCE _ReadWriteBarrier () #define ECB_MEMORY_FENCE_ACQUIRE _ReadWriteBarrier () /* according to msdn, _ReadBarrier is not a load fence */ #define ECB_MEMORY_FENCE_RELEASE _WriteBarrier () #elif defined _WIN32 #include #define ECB_MEMORY_FENCE MemoryBarrier () /* actually just xchg on x86... 
scary */ #elif __SUNPRO_C >= 0x5110 || __SUNPRO_CC >= 0x5110 #include #define ECB_MEMORY_FENCE __machine_rw_barrier () #define ECB_MEMORY_FENCE_ACQUIRE __machine_acq_barrier () #define ECB_MEMORY_FENCE_RELEASE __machine_rel_barrier () #define ECB_MEMORY_FENCE_RELAXED __compiler_barrier () #elif __xlC__ #define ECB_MEMORY_FENCE __sync () #endif #endif #ifndef ECB_MEMORY_FENCE #if ECB_C11 && !defined __STDC_NO_ATOMICS__ /* we assume that these memory fences work on all variables/all memory accesses, */ /* not just C11 atomics and atomic accesses */ #include #define ECB_MEMORY_FENCE atomic_thread_fence (memory_order_seq_cst) #define ECB_MEMORY_FENCE_ACQUIRE atomic_thread_fence (memory_order_acquire) #define ECB_MEMORY_FENCE_RELEASE atomic_thread_fence (memory_order_release) #endif #endif #ifndef ECB_MEMORY_FENCE #if !ECB_AVOID_PTHREADS /* * if you get undefined symbol references to pthread_mutex_lock, * or failure to find pthread.h, then you should implement * the ECB_MEMORY_FENCE operations for your cpu/compiler * OR provide pthread.h and link against the posix thread library * of your system. */ #include #define ECB_NEEDS_PTHREADS 1 #define ECB_MEMORY_FENCE_NEEDS_PTHREADS 1 static pthread_mutex_t ecb_mf_lock = PTHREAD_MUTEX_INITIALIZER; #define ECB_MEMORY_FENCE do { pthread_mutex_lock (&ecb_mf_lock); pthread_mutex_unlock (&ecb_mf_lock); } while (0) #endif #endif #if !defined ECB_MEMORY_FENCE_ACQUIRE && defined ECB_MEMORY_FENCE #define ECB_MEMORY_FENCE_ACQUIRE ECB_MEMORY_FENCE #endif #if !defined ECB_MEMORY_FENCE_RELEASE && defined ECB_MEMORY_FENCE #define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE #endif #if !defined ECB_MEMORY_FENCE_RELAXED && defined ECB_MEMORY_FENCE #define ECB_MEMORY_FENCE_RELAXED ECB_MEMORY_FENCE /* very heavy-handed */ #endif /*****************************************************************************/ #if ECB_CPP #define ecb_inline static inline #elif ECB_GCC_VERSION(2,5) #define ecb_inline static __inline__ #elif ECB_C99 #define ecb_inline static inline #else #define ecb_inline static #endif #if ECB_GCC_VERSION(3,3) #define ecb_restrict __restrict__ #elif ECB_C99 #define ecb_restrict restrict #else #define ecb_restrict #endif typedef int ecb_bool; #define ECB_CONCAT_(a, b) a ## b #define ECB_CONCAT(a, b) ECB_CONCAT_(a, b) #define ECB_STRINGIFY_(a) # a #define ECB_STRINGIFY(a) ECB_STRINGIFY_(a) #define ECB_STRINGIFY_EXPR(expr) ((expr), ECB_STRINGIFY_ (expr)) #define ecb_function_ ecb_inline #if ECB_GCC_VERSION(3,1) || ECB_CLANG_VERSION(2,8) #define ecb_attribute(attrlist) __attribute__ (attrlist) #else #define ecb_attribute(attrlist) #endif #if ECB_GCC_VERSION(3,1) || ECB_CLANG_BUILTIN(__builtin_constant_p) #define ecb_is_constant(expr) __builtin_constant_p (expr) #else /* possible C11 impl for integral types typedef struct ecb_is_constant_struct ecb_is_constant_struct; #define ecb_is_constant(expr) _Generic ((1 ? 
(struct ecb_is_constant_struct *)0 : (void *)((expr) - (expr)), ecb_is_constant_struct *: 0, default: 1)) */ #define ecb_is_constant(expr) 0 #endif #if ECB_GCC_VERSION(3,1) || ECB_CLANG_BUILTIN(__builtin_expect) #define ecb_expect(expr,value) __builtin_expect ((expr),(value)) #else #define ecb_expect(expr,value) (expr) #endif #if ECB_GCC_VERSION(3,1) || ECB_CLANG_BUILTIN(__builtin_prefetch) #define ecb_prefetch(addr,rw,locality) __builtin_prefetch (addr, rw, locality) #else #define ecb_prefetch(addr,rw,locality) #endif /* no emulation for ecb_decltype */ #if ECB_CPP11 // older implementations might have problems with decltype(x)::type, work around it template struct ecb_decltype_t { typedef T type; }; #define ecb_decltype(x) ecb_decltype_t::type #elif ECB_GCC_VERSION(3,0) || ECB_CLANG_VERSION(2,8) #define ecb_decltype(x) __typeof__ (x) #endif #if _MSC_VER >= 1300 #define ecb_deprecated __declspec (deprecated) #else #define ecb_deprecated ecb_attribute ((__deprecated__)) #endif #if _MSC_VER >= 1500 #define ecb_deprecated_message(msg) __declspec (deprecated (msg)) #elif ECB_GCC_VERSION(4,5) #define ecb_deprecated_message(msg) ecb_attribute ((__deprecated__ (msg)) #else #define ecb_deprecated_message(msg) ecb_deprecated #endif #if _MSC_VER >= 1400 #define ecb_noinline __declspec (noinline) #else #define ecb_noinline ecb_attribute ((__noinline__)) #endif #define ecb_unused ecb_attribute ((__unused__)) #define ecb_const ecb_attribute ((__const__)) #define ecb_pure ecb_attribute ((__pure__)) #if ECB_C11 || __IBMC_NORETURN /* http://www-01.ibm.com/support/knowledgecenter/SSGH3R_13.1.0/com.ibm.xlcpp131.aix.doc/language_ref/noreturn.html */ #define ecb_noreturn _Noreturn #elif ECB_CPP11 #define ecb_noreturn [[noreturn]] #elif _MSC_VER >= 1200 /* http://msdn.microsoft.com/en-us/library/k6ktzx3s.aspx */ #define ecb_noreturn __declspec (noreturn) #else #define ecb_noreturn ecb_attribute ((__noreturn__)) #endif #if ECB_GCC_VERSION(4,3) #define ecb_artificial ecb_attribute ((__artificial__)) #define ecb_hot ecb_attribute ((__hot__)) #define ecb_cold ecb_attribute ((__cold__)) #else #define ecb_artificial #define ecb_hot #define ecb_cold #endif /* put around conditional expressions if you are very sure that the */ /* expression is mostly true or mostly false. note that these return */ /* booleans, not the expression. 
*/ #define ecb_expect_false(expr) ecb_expect (!!(expr), 0) #define ecb_expect_true(expr) ecb_expect (!!(expr), 1) /* for compatibility to the rest of the world */ #define ecb_likely(expr) ecb_expect_true (expr) #define ecb_unlikely(expr) ecb_expect_false (expr) /* count trailing zero bits and count # of one bits */ #if ECB_GCC_VERSION(3,4) \ || (ECB_CLANG_BUILTIN(__builtin_clz) && ECB_CLANG_BUILTIN(__builtin_clzll) \ && ECB_CLANG_BUILTIN(__builtin_ctz) && ECB_CLANG_BUILTIN(__builtin_ctzll) \ && ECB_CLANG_BUILTIN(__builtin_popcount)) /* we assume int == 32 bit, long == 32 or 64 bit and long long == 64 bit */ #define ecb_ld32(x) (__builtin_clz (x) ^ 31) #define ecb_ld64(x) (__builtin_clzll (x) ^ 63) #define ecb_ctz32(x) __builtin_ctz (x) #define ecb_ctz64(x) __builtin_ctzll (x) #define ecb_popcount32(x) __builtin_popcount (x) /* no popcountll */ #else ecb_function_ ecb_const int ecb_ctz32 (uint32_t x); ecb_function_ ecb_const int ecb_ctz32 (uint32_t x) { #if 1400 <= _MSC_VER && (_M_IX86 || _M_X64 || _M_IA64 || _M_ARM) unsigned long r; _BitScanForward (&r, x); return (int)r; #else int r = 0; x &= ~x + 1; /* this isolates the lowest bit */ #if ECB_branchless_on_i386 r += !!(x & 0xaaaaaaaa) << 0; r += !!(x & 0xcccccccc) << 1; r += !!(x & 0xf0f0f0f0) << 2; r += !!(x & 0xff00ff00) << 3; r += !!(x & 0xffff0000) << 4; #else if (x & 0xaaaaaaaa) r += 1; if (x & 0xcccccccc) r += 2; if (x & 0xf0f0f0f0) r += 4; if (x & 0xff00ff00) r += 8; if (x & 0xffff0000) r += 16; #endif return r; #endif } ecb_function_ ecb_const int ecb_ctz64 (uint64_t x); ecb_function_ ecb_const int ecb_ctz64 (uint64_t x) { #if 1400 <= _MSC_VER && (_M_X64 || _M_IA64 || _M_ARM) unsigned long r; _BitScanForward64 (&r, x); return (int)r; #else int shift = x & 0xffffffff ? 0 : 32; return ecb_ctz32 (x >> shift) + shift; #endif } ecb_function_ ecb_const int ecb_popcount32 (uint32_t x); ecb_function_ ecb_const int ecb_popcount32 (uint32_t x) { x -= (x >> 1) & 0x55555555; x = ((x >> 2) & 0x33333333) + (x & 0x33333333); x = ((x >> 4) + x) & 0x0f0f0f0f; x *= 0x01010101; return x >> 24; } ecb_function_ ecb_const int ecb_ld32 (uint32_t x); ecb_function_ ecb_const int ecb_ld32 (uint32_t x) { #if 1400 <= _MSC_VER && (_M_IX86 || _M_X64 || _M_IA64 || _M_ARM) unsigned long r; _BitScanReverse (&r, x); return (int)r; #else int r = 0; if (x >> 16) { x >>= 16; r += 16; } if (x >> 8) { x >>= 8; r += 8; } if (x >> 4) { x >>= 4; r += 4; } if (x >> 2) { x >>= 2; r += 2; } if (x >> 1) { r += 1; } return r; #endif } ecb_function_ ecb_const int ecb_ld64 (uint64_t x); ecb_function_ ecb_const int ecb_ld64 (uint64_t x) { #if 1400 <= _MSC_VER && (_M_X64 || _M_IA64 || _M_ARM) unsigned long r; _BitScanReverse64 (&r, x); return (int)r; #else int r = 0; if (x >> 32) { x >>= 32; r += 32; } return r + ecb_ld32 (x); #endif } #endif ecb_function_ ecb_const ecb_bool ecb_is_pot32 (uint32_t x); ecb_function_ ecb_const ecb_bool ecb_is_pot32 (uint32_t x) { return !(x & (x - 1)); } ecb_function_ ecb_const ecb_bool ecb_is_pot64 (uint64_t x); ecb_function_ ecb_const ecb_bool ecb_is_pot64 (uint64_t x) { return !(x & (x - 1)); } ecb_function_ ecb_const uint8_t ecb_bitrev8 (uint8_t x); ecb_function_ ecb_const uint8_t ecb_bitrev8 (uint8_t x) { return ( (x * 0x0802U & 0x22110U) | (x * 0x8020U & 0x88440U)) * 0x10101U >> 16; } ecb_function_ ecb_const uint16_t ecb_bitrev16 (uint16_t x); ecb_function_ ecb_const uint16_t ecb_bitrev16 (uint16_t x) { x = ((x >> 1) & 0x5555) | ((x & 0x5555) << 1); x = ((x >> 2) & 0x3333) | ((x & 0x3333) << 2); x = ((x >> 4) & 0x0f0f) | ((x & 0x0f0f) << 4); x 
= ( x >> 8 ) | ( x << 8); return x; } ecb_function_ ecb_const uint32_t ecb_bitrev32 (uint32_t x); ecb_function_ ecb_const uint32_t ecb_bitrev32 (uint32_t x) { x = ((x >> 1) & 0x55555555) | ((x & 0x55555555) << 1); x = ((x >> 2) & 0x33333333) | ((x & 0x33333333) << 2); x = ((x >> 4) & 0x0f0f0f0f) | ((x & 0x0f0f0f0f) << 4); x = ((x >> 8) & 0x00ff00ff) | ((x & 0x00ff00ff) << 8); x = ( x >> 16 ) | ( x << 16); return x; } /* popcount64 is only available on 64 bit cpus as gcc builtin */ /* so for this version we are lazy */ ecb_function_ ecb_const int ecb_popcount64 (uint64_t x); ecb_function_ ecb_const int ecb_popcount64 (uint64_t x) { return ecb_popcount32 (x) + ecb_popcount32 (x >> 32); } ecb_inline ecb_const uint8_t ecb_rotl8 (uint8_t x, unsigned int count); ecb_inline ecb_const uint8_t ecb_rotr8 (uint8_t x, unsigned int count); ecb_inline ecb_const uint16_t ecb_rotl16 (uint16_t x, unsigned int count); ecb_inline ecb_const uint16_t ecb_rotr16 (uint16_t x, unsigned int count); ecb_inline ecb_const uint32_t ecb_rotl32 (uint32_t x, unsigned int count); ecb_inline ecb_const uint32_t ecb_rotr32 (uint32_t x, unsigned int count); ecb_inline ecb_const uint64_t ecb_rotl64 (uint64_t x, unsigned int count); ecb_inline ecb_const uint64_t ecb_rotr64 (uint64_t x, unsigned int count); ecb_inline ecb_const uint8_t ecb_rotl8 (uint8_t x, unsigned int count) { return (x >> ( 8 - count)) | (x << count); } ecb_inline ecb_const uint8_t ecb_rotr8 (uint8_t x, unsigned int count) { return (x << ( 8 - count)) | (x >> count); } ecb_inline ecb_const uint16_t ecb_rotl16 (uint16_t x, unsigned int count) { return (x >> (16 - count)) | (x << count); } ecb_inline ecb_const uint16_t ecb_rotr16 (uint16_t x, unsigned int count) { return (x << (16 - count)) | (x >> count); } ecb_inline ecb_const uint32_t ecb_rotl32 (uint32_t x, unsigned int count) { return (x >> (32 - count)) | (x << count); } ecb_inline ecb_const uint32_t ecb_rotr32 (uint32_t x, unsigned int count) { return (x << (32 - count)) | (x >> count); } ecb_inline ecb_const uint64_t ecb_rotl64 (uint64_t x, unsigned int count) { return (x >> (64 - count)) | (x << count); } ecb_inline ecb_const uint64_t ecb_rotr64 (uint64_t x, unsigned int count) { return (x << (64 - count)) | (x >> count); } #if ECB_CPP inline uint8_t ecb_ctz (uint8_t v) { return ecb_ctz32 (v); } inline uint16_t ecb_ctz (uint16_t v) { return ecb_ctz32 (v); } inline uint32_t ecb_ctz (uint32_t v) { return ecb_ctz32 (v); } inline uint64_t ecb_ctz (uint64_t v) { return ecb_ctz64 (v); } inline bool ecb_is_pot (uint8_t v) { return ecb_is_pot32 (v); } inline bool ecb_is_pot (uint16_t v) { return ecb_is_pot32 (v); } inline bool ecb_is_pot (uint32_t v) { return ecb_is_pot32 (v); } inline bool ecb_is_pot (uint64_t v) { return ecb_is_pot64 (v); } inline int ecb_ld (uint8_t v) { return ecb_ld32 (v); } inline int ecb_ld (uint16_t v) { return ecb_ld32 (v); } inline int ecb_ld (uint32_t v) { return ecb_ld32 (v); } inline int ecb_ld (uint64_t v) { return ecb_ld64 (v); } inline int ecb_popcount (uint8_t v) { return ecb_popcount32 (v); } inline int ecb_popcount (uint16_t v) { return ecb_popcount32 (v); } inline int ecb_popcount (uint32_t v) { return ecb_popcount32 (v); } inline int ecb_popcount (uint64_t v) { return ecb_popcount64 (v); } inline uint8_t ecb_bitrev (uint8_t v) { return ecb_bitrev8 (v); } inline uint16_t ecb_bitrev (uint16_t v) { return ecb_bitrev16 (v); } inline uint32_t ecb_bitrev (uint32_t v) { return ecb_bitrev32 (v); } inline uint8_t ecb_rotl (uint8_t v, unsigned int count) { return ecb_rotl8 (v, 
count); } inline uint16_t ecb_rotl (uint16_t v, unsigned int count) { return ecb_rotl16 (v, count); } inline uint32_t ecb_rotl (uint32_t v, unsigned int count) { return ecb_rotl32 (v, count); } inline uint64_t ecb_rotl (uint64_t v, unsigned int count) { return ecb_rotl64 (v, count); } inline uint8_t ecb_rotr (uint8_t v, unsigned int count) { return ecb_rotr8 (v, count); } inline uint16_t ecb_rotr (uint16_t v, unsigned int count) { return ecb_rotr16 (v, count); } inline uint32_t ecb_rotr (uint32_t v, unsigned int count) { return ecb_rotr32 (v, count); } inline uint64_t ecb_rotr (uint64_t v, unsigned int count) { return ecb_rotr64 (v, count); } #endif #if ECB_GCC_VERSION(4,3) || (ECB_CLANG_BUILTIN(__builtin_bswap32) && ECB_CLANG_BUILTIN(__builtin_bswap64)) #if ECB_GCC_VERSION(4,8) || ECB_CLANG_BUILTIN(__builtin_bswap16) #define ecb_bswap16(x) __builtin_bswap16 (x) #else #define ecb_bswap16(x) (__builtin_bswap32 (x) >> 16) #endif #define ecb_bswap32(x) __builtin_bswap32 (x) #define ecb_bswap64(x) __builtin_bswap64 (x) #elif _MSC_VER #include #define ecb_bswap16(x) ((uint16_t)_byteswap_ushort ((uint16_t)(x))) #define ecb_bswap32(x) ((uint32_t)_byteswap_ulong ((uint32_t)(x))) #define ecb_bswap64(x) ((uint64_t)_byteswap_uint64 ((uint64_t)(x))) #else ecb_function_ ecb_const uint16_t ecb_bswap16 (uint16_t x); ecb_function_ ecb_const uint16_t ecb_bswap16 (uint16_t x) { return ecb_rotl16 (x, 8); } ecb_function_ ecb_const uint32_t ecb_bswap32 (uint32_t x); ecb_function_ ecb_const uint32_t ecb_bswap32 (uint32_t x) { return (((uint32_t)ecb_bswap16 (x)) << 16) | ecb_bswap16 (x >> 16); } ecb_function_ ecb_const uint64_t ecb_bswap64 (uint64_t x); ecb_function_ ecb_const uint64_t ecb_bswap64 (uint64_t x) { return (((uint64_t)ecb_bswap32 (x)) << 32) | ecb_bswap32 (x >> 32); } #endif #if ECB_GCC_VERSION(4,5) || ECB_CLANG_BUILTIN(__builtin_unreachable) #define ecb_unreachable() __builtin_unreachable () #else /* this seems to work fine, but gcc always emits a warning for it :/ */ ecb_inline ecb_noreturn void ecb_unreachable (void); ecb_inline ecb_noreturn void ecb_unreachable (void) { } #endif /* try to tell the compiler that some condition is definitely true */ #define ecb_assume(cond) if (!(cond)) ecb_unreachable (); else 0 ecb_inline ecb_const uint32_t ecb_byteorder_helper (void); ecb_inline ecb_const uint32_t ecb_byteorder_helper (void) { /* the union code still generates code under pressure in gcc, */ /* but less than using pointers, and always seems to */ /* successfully return a constant. 
*/ /* the reason why we have this horrible preprocessor mess */ /* is to avoid it in all cases, at least on common architectures */ /* or when using a recent enough gcc version (>= 4.6) */ #if (defined __BYTE_ORDER__ && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) \ || ((__i386 || __i386__ || _M_IX86 || ECB_GCC_AMD64 || ECB_MSVC_AMD64) && !__VOS__) #define ECB_LITTLE_ENDIAN 1 return 0x44332211; #elif (defined __BYTE_ORDER__ && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__) \ || ((__AARCH64EB__ || __MIPSEB__ || __ARMEB__) && !__VOS__) #define ECB_BIG_ENDIAN 1 return 0x11223344; #else union { uint8_t c[4]; uint32_t u; } u = { 0x11, 0x22, 0x33, 0x44 }; return u.u; #endif } ecb_inline ecb_const ecb_bool ecb_big_endian (void); ecb_inline ecb_const ecb_bool ecb_big_endian (void) { return ecb_byteorder_helper () == 0x11223344; } ecb_inline ecb_const ecb_bool ecb_little_endian (void); ecb_inline ecb_const ecb_bool ecb_little_endian (void) { return ecb_byteorder_helper () == 0x44332211; } /*****************************************************************************/ /* unaligned load/store */ ecb_inline uint_fast16_t ecb_be_u16_to_host (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; } ecb_inline uint_fast32_t ecb_be_u32_to_host (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; } ecb_inline uint_fast64_t ecb_be_u64_to_host (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; } ecb_inline uint_fast16_t ecb_le_u16_to_host (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; } ecb_inline uint_fast32_t ecb_le_u32_to_host (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; } ecb_inline uint_fast64_t ecb_le_u64_to_host (uint_fast64_t v) { return ecb_big_endian () ? ecb_bswap64 (v) : v; } ecb_inline uint_fast16_t ecb_peek_u16_u (const void *ptr) { uint16_t v; memcpy (&v, ptr, sizeof (v)); return v; } ecb_inline uint_fast32_t ecb_peek_u32_u (const void *ptr) { uint32_t v; memcpy (&v, ptr, sizeof (v)); return v; } ecb_inline uint_fast64_t ecb_peek_u64_u (const void *ptr) { uint64_t v; memcpy (&v, ptr, sizeof (v)); return v; } ecb_inline uint_fast16_t ecb_peek_be_u16_u (const void *ptr) { return ecb_be_u16_to_host (ecb_peek_u16_u (ptr)); } ecb_inline uint_fast32_t ecb_peek_be_u32_u (const void *ptr) { return ecb_be_u32_to_host (ecb_peek_u32_u (ptr)); } ecb_inline uint_fast64_t ecb_peek_be_u64_u (const void *ptr) { return ecb_be_u64_to_host (ecb_peek_u64_u (ptr)); } ecb_inline uint_fast16_t ecb_peek_le_u16_u (const void *ptr) { return ecb_le_u16_to_host (ecb_peek_u16_u (ptr)); } ecb_inline uint_fast32_t ecb_peek_le_u32_u (const void *ptr) { return ecb_le_u32_to_host (ecb_peek_u32_u (ptr)); } ecb_inline uint_fast64_t ecb_peek_le_u64_u (const void *ptr) { return ecb_le_u64_to_host (ecb_peek_u64_u (ptr)); } ecb_inline uint_fast16_t ecb_host_to_be_u16 (uint_fast16_t v) { return ecb_little_endian () ? ecb_bswap16 (v) : v; } ecb_inline uint_fast32_t ecb_host_to_be_u32 (uint_fast32_t v) { return ecb_little_endian () ? ecb_bswap32 (v) : v; } ecb_inline uint_fast64_t ecb_host_to_be_u64 (uint_fast64_t v) { return ecb_little_endian () ? ecb_bswap64 (v) : v; } ecb_inline uint_fast16_t ecb_host_to_le_u16 (uint_fast16_t v) { return ecb_big_endian () ? ecb_bswap16 (v) : v; } ecb_inline uint_fast32_t ecb_host_to_le_u32 (uint_fast32_t v) { return ecb_big_endian () ? ecb_bswap32 (v) : v; } ecb_inline uint_fast64_t ecb_host_to_le_u64 (uint_fast64_t v) { return ecb_big_endian () ? 
ecb_bswap64 (v) : v; } ecb_inline void ecb_poke_u16_u (void *ptr, uint16_t v) { memcpy (ptr, &v, sizeof (v)); } ecb_inline void ecb_poke_u32_u (void *ptr, uint32_t v) { memcpy (ptr, &v, sizeof (v)); } ecb_inline void ecb_poke_u64_u (void *ptr, uint64_t v) { memcpy (ptr, &v, sizeof (v)); } ecb_inline void ecb_poke_be_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_be_u16 (v)); } ecb_inline void ecb_poke_be_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_be_u32 (v)); } ecb_inline void ecb_poke_be_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_be_u64 (v)); } ecb_inline void ecb_poke_le_u16_u (void *ptr, uint_fast16_t v) { ecb_poke_u16_u (ptr, ecb_host_to_le_u16 (v)); } ecb_inline void ecb_poke_le_u32_u (void *ptr, uint_fast32_t v) { ecb_poke_u32_u (ptr, ecb_host_to_le_u32 (v)); } ecb_inline void ecb_poke_le_u64_u (void *ptr, uint_fast64_t v) { ecb_poke_u64_u (ptr, ecb_host_to_le_u64 (v)); } #if ECB_CPP inline uint8_t ecb_bswap (uint8_t v) { return v; } inline uint16_t ecb_bswap (uint16_t v) { return ecb_bswap16 (v); } inline uint32_t ecb_bswap (uint32_t v) { return ecb_bswap32 (v); } inline uint64_t ecb_bswap (uint64_t v) { return ecb_bswap64 (v); } template inline T ecb_be_to_host (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; } template inline T ecb_le_to_host (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; } template inline T ecb_peek (const void *ptr) { return *(const T *)ptr; } template inline T ecb_peek_be (const void *ptr) { return ecb_be_to_host (ecb_peek (ptr)); } template inline T ecb_peek_le (const void *ptr) { return ecb_le_to_host (ecb_peek (ptr)); } template inline T ecb_peek_u (const void *ptr) { T v; memcpy (&v, ptr, sizeof (v)); return v; } template inline T ecb_peek_be_u (const void *ptr) { return ecb_be_to_host (ecb_peek_u (ptr)); } template inline T ecb_peek_le_u (const void *ptr) { return ecb_le_to_host (ecb_peek_u (ptr)); } template inline T ecb_host_to_be (T v) { return ecb_little_endian () ? ecb_bswap (v) : v; } template inline T ecb_host_to_le (T v) { return ecb_big_endian () ? ecb_bswap (v) : v; } template inline void ecb_poke (void *ptr, T v) { *(T *)ptr = v; } template inline void ecb_poke_be (void *ptr, T v) { return ecb_poke (ptr, ecb_host_to_be (v)); } template inline void ecb_poke_le (void *ptr, T v) { return ecb_poke (ptr, ecb_host_to_le (v)); } template inline void ecb_poke_u (void *ptr, T v) { memcpy (ptr, &v, sizeof (v)); } template inline void ecb_poke_be_u (void *ptr, T v) { return ecb_poke_u (ptr, ecb_host_to_be (v)); } template inline void ecb_poke_le_u (void *ptr, T v) { return ecb_poke_u (ptr, ecb_host_to_le (v)); } #endif /*****************************************************************************/ #if ECB_GCC_VERSION(3,0) || ECB_C99 #define ecb_mod(m,n) ((m) % (n) + ((m) % (n) < 0 ? (n) : 0)) #else #define ecb_mod(m,n) ((m) < 0 ? ((n) - 1 - ((-1 - (m)) % (n))) : ((m) % (n))) #endif #if ECB_CPP template static inline T ecb_div_rd (T val, T div) { return val < 0 ? - ((-val + div - 1) / div) : (val ) / div; } template static inline T ecb_div_ru (T val, T div) { return val < 0 ? - ((-val ) / div) : (val + div - 1) / div; } #else #define ecb_div_rd(val,div) ((val) < 0 ? - ((-(val) + (div) - 1) / (div)) : ((val) ) / (div)) #define ecb_div_ru(val,div) ((val) < 0 ? 
- ((-(val) ) / (div)) : ((val) + (div) - 1) / (div)) #endif #if ecb_cplusplus_does_not_suck /* does not work for local types (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm) */ template static inline int ecb_array_length (const T (&arr)[N]) { return N; } #else #define ecb_array_length(name) (sizeof (name) / sizeof (name [0])) #endif /*****************************************************************************/ ecb_function_ ecb_const uint32_t ecb_binary16_to_binary32 (uint32_t x); ecb_function_ ecb_const uint32_t ecb_binary16_to_binary32 (uint32_t x) { unsigned int s = (x & 0x8000) << (31 - 15); int e = (x >> 10) & 0x001f; unsigned int m = x & 0x03ff; if (ecb_expect_false (e == 31)) /* infinity or NaN */ e = 255 - (127 - 15); else if (ecb_expect_false (!e)) { if (ecb_expect_true (!m)) /* zero, handled by code below by forcing e to 0 */ e = 0 - (127 - 15); else { /* subnormal, renormalise */ unsigned int s = 10 - ecb_ld32 (m); m = (m << s) & 0x3ff; /* mask implicit bit */ e -= s - 1; } } /* e and m now are normalised, or zero, (or inf or nan) */ e += 127 - 15; return s | (e << 23) | (m << (23 - 10)); } ecb_function_ ecb_const uint16_t ecb_binary32_to_binary16 (uint32_t x); ecb_function_ ecb_const uint16_t ecb_binary32_to_binary16 (uint32_t x) { unsigned int s = (x >> 16) & 0x00008000; /* sign bit, the easy part */ unsigned int e = ((x >> 23) & 0x000000ff) - (127 - 15); /* the desired exponent */ unsigned int m = x & 0x007fffff; x &= 0x7fffffff; /* if it's within range of binary16 normals, use fast path */ if (ecb_expect_true (0x38800000 <= x && x <= 0x477fefff)) { /* mantissa round-to-even */ m += 0x00000fff + ((m >> (23 - 10)) & 1); /* handle overflow */ if (ecb_expect_false (m >= 0x00800000)) { m >>= 1; e += 1; } return s | (e << 10) | (m >> (23 - 10)); } /* handle large numbers and infinity */ if (ecb_expect_true (0x477fefff < x && x <= 0x7f800000)) return s | 0x7c00; /* handle zero, subnormals and small numbers */ if (ecb_expect_true (x < 0x38800000)) { /* zero */ if (ecb_expect_true (!x)) return s; /* handle subnormals */ /* too small, will be zero */ if (e < (14 - 24)) /* might not be sharp, but is good enough */ return s; m |= 0x00800000; /* make implicit bit explicit */ /* very tricky - we need to round to the nearest e (+10) bit value */ { unsigned int bits = 14 - e; unsigned int half = (1 << (bits - 1)) - 1; unsigned int even = (m >> bits) & 1; /* if this overflows, we will end up with a normalised number */ m = (m + half + even) >> bits; } return s | m; } /* handle NaNs, preserve leftmost nan bits, but make sure we don't turn them into infinities */ m >>= 13; return s | 0x7c00 | m | !m; } /*******************************************************************************/ /* floating point stuff, can be disabled by defining ECB_NO_LIBM */ /* basically, everything uses "ieee pure-endian" floating point numbers */ /* the only noteworthy exception is ancient armle, which uses order 43218765 */ #if 0 \ || __i386 || __i386__ \ || ECB_GCC_AMD64 \ || __powerpc__ || __ppc__ || __powerpc64__ || __ppc64__ \ || defined __s390__ || defined __s390x__ \ || defined __mips__ \ || defined __alpha__ \ || defined __hppa__ \ || defined __ia64__ \ || defined __m68k__ \ || defined __m88k__ \ || defined __sh__ \ || defined _M_IX86 || defined ECB_MSVC_AMD64 || defined _M_IA64 \ || (defined __arm__ && (defined __ARM_EABI__ || defined __EABI__ || defined __VFP_FP__ || defined _WIN32_WCE || defined __ANDROID__)) \ || defined __aarch64__ #define ECB_STDFP 1 #else #define ECB_STDFP 0 #endif 
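/* A worked example of the binary16 helpers above (illustration only; nothing
 * here is used by the surrounding code). For the IEEE half-float value 0x3c00
 * (+1.0), ecb_binary16_to_binary32 computes s = 0, e = 15 + (127 - 15) = 127
 * and m = 0, i.e. the binary32 bit pattern 0x3f800000 (+1.0f); feeding that
 * back through ecb_binary32_to_binary16 takes the fast path for normal values
 * and returns 0x3c00 again. A minimal sketch of such a round trip:
 *
 *   uint32_t f32 = ecb_binary16_to_binary32 (0x3c00);   -> 0x3f800000
 *   uint16_t f16 = ecb_binary32_to_binary16 (f32);      -> 0x3c00 again
 *
 * Values too large for binary16 come back as infinity (0x7c00, with 0x8000
 * set for negatives), and NaNs keep their leftmost payload bits so they stay
 * NaNs after the conversion.
 */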
#ifndef ECB_NO_LIBM #include /* for frexp*, ldexp*, INFINITY, NAN */ /* only the oldest of old doesn't have this one. solaris. */ #ifdef INFINITY #define ECB_INFINITY INFINITY #else #define ECB_INFINITY HUGE_VAL #endif #ifdef NAN #define ECB_NAN NAN #else #define ECB_NAN ECB_INFINITY #endif #if ECB_C99 || _XOPEN_VERSION >= 600 || _POSIX_VERSION >= 200112L #define ecb_ldexpf(x,e) ldexpf ((x), (e)) #define ecb_frexpf(x,e) frexpf ((x), (e)) #else #define ecb_ldexpf(x,e) (float) ldexp ((double) (x), (e)) #define ecb_frexpf(x,e) (float) frexp ((double) (x), (e)) #endif /* convert a float to ieee single/binary32 */ ecb_function_ ecb_const uint32_t ecb_float_to_binary32 (float x); ecb_function_ ecb_const uint32_t ecb_float_to_binary32 (float x) { uint32_t r; #if ECB_STDFP memcpy (&r, &x, 4); #else /* slow emulation, works for anything but -0 */ uint32_t m; int e; if (x == 0e0f ) return 0x00000000U; if (x > +3.40282346638528860e+38f) return 0x7f800000U; if (x < -3.40282346638528860e+38f) return 0xff800000U; if (x != x ) return 0x7fbfffffU; m = ecb_frexpf (x, &e) * 0x1000000U; r = m & 0x80000000U; if (r) m = -m; if (e <= -126) { m &= 0xffffffU; m >>= (-125 - e); e = -126; } r |= (e + 126) << 23; r |= m & 0x7fffffU; #endif return r; } /* converts an ieee single/binary32 to a float */ ecb_function_ ecb_const float ecb_binary32_to_float (uint32_t x); ecb_function_ ecb_const float ecb_binary32_to_float (uint32_t x) { float r; #if ECB_STDFP memcpy (&r, &x, 4); #else /* emulation, only works for normals and subnormals and +0 */ int neg = x >> 31; int e = (x >> 23) & 0xffU; x &= 0x7fffffU; if (e) x |= 0x800000U; else e = 1; /* we distrust ldexpf a bit and do the 2**-24 scaling by an extra multiply */ r = ecb_ldexpf (x * (0.5f / 0x800000U), e - 126); r = neg ? -r : r; #endif return r; } /* convert a double to ieee double/binary64 */ ecb_function_ ecb_const uint64_t ecb_double_to_binary64 (double x); ecb_function_ ecb_const uint64_t ecb_double_to_binary64 (double x) { uint64_t r; #if ECB_STDFP memcpy (&r, &x, 8); #else /* slow emulation, works for anything but -0 */ uint64_t m; int e; if (x == 0e0 ) return 0x0000000000000000U; if (x > +1.79769313486231470e+308) return 0x7ff0000000000000U; if (x < -1.79769313486231470e+308) return 0xfff0000000000000U; if (x != x ) return 0X7ff7ffffffffffffU; m = frexp (x, &e) * 0x20000000000000U; r = m & 0x8000000000000000;; if (r) m = -m; if (e <= -1022) { m &= 0x1fffffffffffffU; m >>= (-1021 - e); e = -1022; } r |= ((uint64_t)(e + 1022)) << 52; r |= m & 0xfffffffffffffU; #endif return r; } /* converts an ieee double/binary64 to a double */ ecb_function_ ecb_const double ecb_binary64_to_double (uint64_t x); ecb_function_ ecb_const double ecb_binary64_to_double (uint64_t x) { double r; #if ECB_STDFP memcpy (&r, &x, 8); #else /* emulation, only works for normals and subnormals and +0 */ int neg = x >> 63; int e = (x >> 52) & 0x7ffU; x &= 0xfffffffffffffU; if (e) x |= 0x10000000000000U; else e = 1; /* we distrust ldexp a bit and do the 2**-53 scaling by an extra multiply */ r = ldexp (x * (0.5 / 0x10000000000000U), e - 1022); r = neg ? 
-r : r; #endif return r; } /* convert a float to ieee half/binary16 */ ecb_function_ ecb_const uint16_t ecb_float_to_binary16 (float x); ecb_function_ ecb_const uint16_t ecb_float_to_binary16 (float x) { return ecb_binary32_to_binary16 (ecb_float_to_binary32 (x)); } /* convert an ieee half/binary16 to float */ ecb_function_ ecb_const float ecb_binary16_to_float (uint16_t x); ecb_function_ ecb_const float ecb_binary16_to_float (uint16_t x) { return ecb_binary32_to_float (ecb_binary16_to_binary32 (x)); } #endif #endif /* ECB.H END */ #if ECB_MEMORY_FENCE_NEEDS_PTHREADS /* if your architecture doesn't need memory fences, e.g. because it is * single-cpu/core, or if you use libev in a project that doesn't use libev * from multiple threads, then you can define ECB_NO_THREADS when compiling * libev, in which cases the memory fences become nops. * alternatively, you can remove this #error and link against libpthread, * which will then provide the memory fences. */ # error "memory fences not defined for your architecture, please report" #endif #ifndef ECB_MEMORY_FENCE # define ECB_MEMORY_FENCE do { } while (0) # define ECB_MEMORY_FENCE_ACQUIRE ECB_MEMORY_FENCE # define ECB_MEMORY_FENCE_RELEASE ECB_MEMORY_FENCE #endif #define inline_size ecb_inline #if EV_FEATURE_CODE # define inline_speed ecb_inline #else # define inline_speed ecb_noinline static #endif /*****************************************************************************/ /* raw syscall wrappers */ #if EV_NEED_SYSCALL #include /* * define some syscall wrappers for common architectures * this is mostly for nice looks during debugging, not performance. * our syscalls return < 0, not == -1, on error. which is good * enough for linux aio. * TODO: arm is also common nowadays, maybe even mips and x86 * TODO: after implementing this, it suddenly looks like overkill, but its hard to remove... 
*/ #if __GNUC__ && __linux && ECB_AMD64 && !EV_FEATURE_CODE /* the costly errno access probably kills this for size optimisation */ #define ev_syscall(nr,narg,arg1,arg2,arg3,arg4,arg5,arg6) \ ({ \ long res; \ register unsigned long r6 __asm__ ("r9" ); \ register unsigned long r5 __asm__ ("r8" ); \ register unsigned long r4 __asm__ ("r10"); \ register unsigned long r3 __asm__ ("rdx"); \ register unsigned long r2 __asm__ ("rsi"); \ register unsigned long r1 __asm__ ("rdi"); \ if (narg >= 6) r6 = (unsigned long)(arg6); \ if (narg >= 5) r5 = (unsigned long)(arg5); \ if (narg >= 4) r4 = (unsigned long)(arg4); \ if (narg >= 3) r3 = (unsigned long)(arg3); \ if (narg >= 2) r2 = (unsigned long)(arg2); \ if (narg >= 1) r1 = (unsigned long)(arg1); \ __asm__ __volatile__ ( \ "syscall\n\t" \ : "=a" (res) \ : "0" (nr), "r" (r1), "r" (r2), "r" (r3), "r" (r4), "r" (r5) \ : "cc", "r11", "cx", "memory"); \ errno = -res; \ res; \ }) #endif #ifdef ev_syscall #define ev_syscall0(nr) ev_syscall (nr, 0, 0, 0, 0, 0, 0, 0) #define ev_syscall1(nr,arg1) ev_syscall (nr, 1, arg1, 0, 0, 0, 0, 0) #define ev_syscall2(nr,arg1,arg2) ev_syscall (nr, 2, arg1, arg2, 0, 0, 0, 0) #define ev_syscall3(nr,arg1,arg2,arg3) ev_syscall (nr, 3, arg1, arg2, arg3, 0, 0, 0) #define ev_syscall4(nr,arg1,arg2,arg3,arg4) ev_syscall (nr, 3, arg1, arg2, arg3, arg4, 0, 0) #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) ev_syscall (nr, 5, arg1, arg2, arg3, arg4, arg5, 0) #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) ev_syscall (nr, 6, arg1, arg2, arg3, arg4, arg5,arg6) #else #define ev_syscall0(nr) syscall (nr) #define ev_syscall1(nr,arg1) syscall (nr, arg1) #define ev_syscall2(nr,arg1,arg2) syscall (nr, arg1, arg2) #define ev_syscall3(nr,arg1,arg2,arg3) syscall (nr, arg1, arg2, arg3) #define ev_syscall4(nr,arg1,arg2,arg3,arg4) syscall (nr, arg1, arg2, arg3, arg4) #define ev_syscall5(nr,arg1,arg2,arg3,arg4,arg5) syscall (nr, arg1, arg2, arg3, arg4, arg5) #define ev_syscall6(nr,arg1,arg2,arg3,arg4,arg5,arg6) syscall (nr, arg1, arg2, arg3, arg4, arg5,arg6) #endif #endif /*****************************************************************************/ #define NUMPRI (EV_MAXPRI - EV_MINPRI + 1) #if EV_MINPRI == EV_MAXPRI # define ABSPRI(w) (((W)w), 0) #else # define ABSPRI(w) (((W)w)->priority - EV_MINPRI) #endif #define EMPTY /* required for microsofts broken pseudo-c compiler */ typedef ev_watcher *W; typedef ev_watcher_list *WL; typedef ev_watcher_time *WT; #define ev_active(w) ((W)(w))->active #define ev_at(w) ((WT)(w))->at #if EV_USE_REALTIME /* sig_atomic_t is used to avoid per-thread variables or locking but still */ /* giving it a reasonably high chance of working on typical architectures */ static EV_ATOMIC_T have_realtime; /* did clock_gettime (CLOCK_REALTIME) work? */ #endif #if EV_USE_MONOTONIC static EV_ATOMIC_T have_monotonic; /* did clock_gettime (CLOCK_MONOTONIC) work? 
*/ #endif #ifndef EV_FD_TO_WIN32_HANDLE # define EV_FD_TO_WIN32_HANDLE(fd) _get_osfhandle (fd) #endif #ifndef EV_WIN32_HANDLE_TO_FD # define EV_WIN32_HANDLE_TO_FD(handle) _open_osfhandle (handle, 0) #endif #ifndef EV_WIN32_CLOSE_FD # define EV_WIN32_CLOSE_FD(fd) close (fd) #endif #ifdef _WIN32 # include "ev_win32.c" #endif /*****************************************************************************/ #if EV_USE_LINUXAIO # include /* probably only needed for aio_context_t */ #endif /* define a suitable floor function (only used by periodics atm) */ #if EV_USE_FLOOR # include # define ev_floor(v) floor (v) #else #include /* a floor() replacement function, should be independent of ev_tstamp type */ ecb_noinline static ev_tstamp ev_floor (ev_tstamp v) { /* the choice of shift factor is not terribly important */ #if FLT_RADIX != 2 /* assume FLT_RADIX == 10 */ const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 10000000000000000000. : 1000000000.; #else const ev_tstamp shift = sizeof (unsigned long) >= 8 ? 18446744073709551616. : 4294967296.; #endif /* special treatment for negative arguments */ if (ecb_expect_false (v < 0.)) { ev_tstamp f = -ev_floor (-v); return f - (f == v ? 0 : 1); } /* argument too large for an unsigned long? then reduce it */ if (ecb_expect_false (v >= shift)) { ev_tstamp f; if (v == v - 1.) return v; /* very large numbers are assumed to be integer */ f = shift * ev_floor (v * (1. / shift)); return f + ev_floor (v - f); } /* fits into an unsigned long */ return (unsigned long)v; } #endif /*****************************************************************************/ #ifdef __linux # include #endif ecb_noinline ecb_cold static unsigned int ev_linux_version (void) { #ifdef __linux unsigned int v = 0; struct utsname buf; int i; char *p = buf.release; if (uname (&buf)) return 0; for (i = 3+1; --i; ) { unsigned int c = 0; for (;;) { if (*p >= '0' && *p <= '9') c = c * 10 + *p++ - '0'; else { p += *p == '.'; break; } } v = (v << 8) | c; } return v; #else return 0; #endif } /*****************************************************************************/ #if EV_AVOID_STDIO ecb_noinline ecb_cold static void ev_printerr (const char *msg) { write (STDERR_FILENO, msg, strlen (msg)); } #endif static void (*syserr_cb)(const char *msg) EV_NOEXCEPT; ecb_cold void ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT { syserr_cb = cb; } ecb_noinline ecb_cold static void ev_syserr (const char *msg) { if (!msg) msg = "(libev) system error"; if (syserr_cb) syserr_cb (msg); else { #if EV_AVOID_STDIO ev_printerr (msg); ev_printerr (": "); ev_printerr (strerror (errno)); ev_printerr ("\n"); #else perror (msg); #endif abort (); } } static void * ev_realloc_emul (void *ptr, long size) EV_NOEXCEPT { /* some systems, notably openbsd and darwin, fail to properly * implement realloc (x, 0) (as required by both ansi c-89 and * the single unix specification, so work around them here. * recently, also (at least) fedora and debian started breaking it, * despite documenting it otherwise. 
*/ if (size) return realloc (ptr, size); free (ptr); return 0; } static void *(*alloc)(void *ptr, long size) EV_NOEXCEPT = ev_realloc_emul; ecb_cold void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT { alloc = cb; } inline_speed void * ev_realloc (void *ptr, long size) { ptr = alloc (ptr, size); if (!ptr && size) { #if EV_AVOID_STDIO ev_printerr ("(libev) memory allocation failed, aborting.\n"); #else fprintf (stderr, "(libev) cannot allocate %ld bytes, aborting.", size); #endif abort (); } return ptr; } #define ev_malloc(size) ev_realloc (0, (size)) #define ev_free(ptr) ev_realloc ((ptr), 0) /*****************************************************************************/ /* set in reify when reification needed */ #define EV_ANFD_REIFY 1 /* file descriptor info structure */ typedef struct { WL head; unsigned char events; /* the events watched for */ unsigned char reify; /* flag set when this ANFD needs reification (EV_ANFD_REIFY, EV__IOFDSET) */ unsigned char emask; /* some backends store the actual kernel mask in here */ unsigned char eflags; /* flags field for use by backends */ #if EV_USE_EPOLL unsigned int egen; /* generation counter to counter epoll bugs */ #endif #if EV_SELECT_IS_WINSOCKET || EV_USE_IOCP SOCKET handle; #endif #if EV_USE_IOCP OVERLAPPED or, ow; #endif } ANFD; /* stores the pending event set for a given watcher */ typedef struct { W w; int events; /* the pending event set for the given watcher */ } ANPENDING; #if EV_USE_INOTIFY /* hash table entry per inotify-id */ typedef struct { WL head; } ANFS; #endif /* Heap Entry */ #if EV_HEAP_CACHE_AT /* a heap element */ typedef struct { ev_tstamp at; WT w; } ANHE; #define ANHE_w(he) (he).w /* access watcher, read-write */ #define ANHE_at(he) (he).at /* access cached at, read-only */ #define ANHE_at_cache(he) (he).at = (he).w->at /* update at from watcher */ #else /* a heap element */ typedef WT ANHE; #define ANHE_w(he) (he) #define ANHE_at(he) (he)->at #define ANHE_at_cache(he) #endif #if EV_MULTIPLICITY struct ev_loop { ev_tstamp ev_rt_now; #define ev_rt_now ((loop)->ev_rt_now) #define VAR(name,decl) decl; #include "ev_vars.h" #undef VAR }; #include "ev_wrap.h" static struct ev_loop default_loop_struct; EV_API_DECL struct ev_loop *ev_default_loop_ptr = 0; /* needs to be initialised to make it a definition despite extern */ #else EV_API_DECL ev_tstamp ev_rt_now = EV_TS_CONST (0.); /* needs to be initialised to make it a definition despite extern */ #define VAR(name,decl) static decl; #include "ev_vars.h" #undef VAR static int ev_default_loop_ptr; #endif #if EV_FEATURE_API # define EV_RELEASE_CB if (ecb_expect_false (release_cb)) release_cb (EV_A) # define EV_ACQUIRE_CB if (ecb_expect_false (acquire_cb)) acquire_cb (EV_A) # define EV_INVOKE_PENDING invoke_cb (EV_A) #else # define EV_RELEASE_CB (void)0 # define EV_ACQUIRE_CB (void)0 # define EV_INVOKE_PENDING ev_invoke_pending (EV_A) #endif #define EVBREAK_RECURSE 0x80 /*****************************************************************************/ #ifndef EV_HAVE_EV_TIME ev_tstamp ev_time (void) EV_NOEXCEPT { #if EV_USE_REALTIME if (ecb_expect_true (have_realtime)) { struct timespec ts; clock_gettime (CLOCK_REALTIME, &ts); return EV_TS_GET (ts); } #endif { struct timeval tv; gettimeofday (&tv, 0); return EV_TV_GET (tv); } } #endif inline_size ev_tstamp get_clock (void) { #if EV_USE_MONOTONIC if (ecb_expect_true (have_monotonic)) { struct timespec ts; clock_gettime (CLOCK_MONOTONIC, &ts); return EV_TS_GET (ts); } #endif return ev_time (); } #if 
EV_MULTIPLICITY ev_tstamp ev_now (EV_P) EV_NOEXCEPT { return ev_rt_now; } #endif void ev_sleep (ev_tstamp delay) EV_NOEXCEPT { if (delay > EV_TS_CONST (0.)) { #if EV_USE_NANOSLEEP struct timespec ts; EV_TS_SET (ts, delay); nanosleep (&ts, 0); #elif defined _WIN32 /* maybe this should round up, as ms is very low resolution */ /* compared to select (µs) or nanosleep (ns) */ Sleep ((unsigned long)(EV_TS_TO_MSEC (delay))); #else struct timeval tv; /* here we rely on sys/time.h + sys/types.h + unistd.h providing select */ /* something not guaranteed by newer posix versions, but guaranteed */ /* by older ones */ EV_TV_SET (tv, delay); select (0, 0, 0, 0, &tv); #endif } } /*****************************************************************************/ #define MALLOC_ROUND 4096 /* prefer to allocate in chunks of this size, must be 2**n and >> 4 longs */ /* find a suitable new size for the given array, */ /* hopefully by rounding to a nice-to-malloc size */ inline_size int array_nextsize (int elem, int cur, int cnt) { int ncur = cur + 1; do ncur <<= 1; while (cnt > ncur); /* if size is large, round to MALLOC_ROUND - 4 * longs to accommodate malloc overhead */ if (elem * ncur > MALLOC_ROUND - sizeof (void *) * 4) { ncur *= elem; ncur = (ncur + elem + (MALLOC_ROUND - 1) + sizeof (void *) * 4) & ~(MALLOC_ROUND - 1); ncur = ncur - sizeof (void *) * 4; ncur /= elem; } return ncur; } ecb_noinline ecb_cold static void * array_realloc (int elem, void *base, int *cur, int cnt) { *cur = array_nextsize (elem, *cur, cnt); return ev_realloc (base, elem * *cur); } #define array_needsize_noinit(base,offset,count) #define array_needsize_zerofill(base,offset,count) \ memset ((void *)(base + offset), 0, sizeof (*(base)) * (count)) #define array_needsize(type,base,cur,cnt,init) \ if (ecb_expect_false ((cnt) > (cur))) \ { \ ecb_unused int ocur_ = (cur); \ (base) = (type *)array_realloc \ (sizeof (type), (base), &(cur), (cnt)); \ init ((base), ocur_, ((cur) - ocur_)); \ } #if 0 #define array_slim(type,stem) \ if (stem ## max < array_roundsize (stem ## cnt >> 2)) \ { \ stem ## max = array_roundsize (stem ## cnt >> 1); \ base = (type *)ev_realloc (base, sizeof (type) * (stem ## max));\ fprintf (stderr, "slimmed down " # stem " to %d\n", stem ## max);/*D*/\ } #endif #define array_free(stem, idx) \ ev_free (stem ## s idx); stem ## cnt idx = stem ## max idx = 0; stem ## s idx = 0 /*****************************************************************************/ /* dummy callback for pending events */ ecb_noinline static void pendingcb (EV_P_ ev_prepare *w, int revents) { } ecb_noinline void ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT { W w_ = (W)w; int pri = ABSPRI (w_); if (ecb_expect_false (w_->pending)) pendings [pri][w_->pending - 1].events |= revents; else { w_->pending = ++pendingcnt [pri]; array_needsize (ANPENDING, pendings [pri], pendingmax [pri], w_->pending, array_needsize_noinit); pendings [pri][w_->pending - 1].w = w_; pendings [pri][w_->pending - 1].events = revents; } pendingpri = NUMPRI - 1; } inline_speed void feed_reverse (EV_P_ W w) { array_needsize (W, rfeeds, rfeedmax, rfeedcnt + 1, array_needsize_noinit); rfeeds [rfeedcnt++] = w; } inline_size void feed_reverse_done (EV_P_ int revents) { do ev_feed_event (EV_A_ rfeeds [--rfeedcnt], revents); while (rfeedcnt); } inline_speed void queue_events (EV_P_ W *events, int eventcnt, int type) { int i; for (i = 0; i < eventcnt; ++i) ev_feed_event (EV_A_ events [i], type); } /*****************************************************************************/ 
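/* A worked example of the array growth above (illustration only; nothing
 * below is called by libev itself). array_nextsize keeps doubling the
 * capacity until it covers the requested count; for large arrays it then
 * rounds the byte size up to a multiple of MALLOC_ROUND (4096) minus four
 * pointer-sized words of headroom for malloc bookkeeping. With 16-byte
 * elements, growing from cur = 0 to hold cnt = 3 elements yields a capacity
 * of 4 (64 bytes, too small to be rounded), while growing from cur = 512 to
 * hold 513 elements first doubles to 1026 and is then rounded to 1278
 * elements = 20448 bytes (5 * 4096 minus the 32-byte headroom, assuming
 * 8-byte pointers). The usual entry point is the array_needsize macro,
 * roughly like this sketch (iows/iowmax/iowcnt/remember_watcher are made-up
 * names, not real libev state):
 *
 *   static ev_io **iows;
 *   static int iowmax, iowcnt;
 *
 *   static void
 *   remember_watcher (ev_io *w)
 *   {
 *     array_needsize (ev_io *, iows, iowmax, iowcnt + 1, array_needsize_noinit);
 *     iows [iowcnt++] = w;
 *   }
 */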
inline_speed void fd_event_nocheck (EV_P_ int fd, int revents) { ANFD *anfd = anfds + fd; ev_io *w; for (w = (ev_io *)anfd->head; w; w = (ev_io *)((WL)w)->next) { int ev = w->events & revents; if (ev) ev_feed_event (EV_A_ (W)w, ev); } } /* do not submit kernel events for fds that have reify set */ /* because that means they changed while we were polling for new events */ inline_speed void fd_event (EV_P_ int fd, int revents) { ANFD *anfd = anfds + fd; if (ecb_expect_true (!anfd->reify)) fd_event_nocheck (EV_A_ fd, revents); } void ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT { if (fd >= 0 && fd < anfdmax) fd_event_nocheck (EV_A_ fd, revents); } /* make sure the external fd watch events are in-sync */ /* with the kernel/libev internal state */ inline_size void fd_reify (EV_P) { int i; /* most backends do not modify the fdchanges list in backend_modfiy. * except io_uring, which has fixed-size buffers which might force us * to handle events in backend_modify, causing fdchanges to be amended, * which could result in an endless loop. * to avoid this, we do not dynamically handle fds that were added * during fd_reify. that means that for those backends, fdchangecnt * might be non-zero during poll, which must cause them to not block. * to not put too much of a burden on other backends, this detail * needs to be handled in the backend. */ int changecnt = fdchangecnt; #if EV_SELECT_IS_WINSOCKET || EV_USE_IOCP for (i = 0; i < changecnt; ++i) { int fd = fdchanges [i]; ANFD *anfd = anfds + fd; if (anfd->reify & EV__IOFDSET && anfd->head) { SOCKET handle = EV_FD_TO_WIN32_HANDLE (fd); if (handle != anfd->handle) { unsigned long arg; assert (("libev: only socket fds supported in this configuration", ioctlsocket (handle, FIONREAD, &arg) == 0)); /* handle changed, but fd didn't - we need to do it in two steps */ backend_modify (EV_A_ fd, anfd->events, 0); anfd->events = 0; anfd->handle = handle; } } } #endif for (i = 0; i < changecnt; ++i) { int fd = fdchanges [i]; ANFD *anfd = anfds + fd; ev_io *w; unsigned char o_events = anfd->events; unsigned char o_reify = anfd->reify; anfd->reify = 0; /*if (ecb_expect_true (o_reify & EV_ANFD_REIFY)) probably a deoptimisation */ { anfd->events = 0; for (w = (ev_io *)anfd->head; w; w = (ev_io *)((WL)w)->next) anfd->events |= (unsigned char)w->events; if (o_events != anfd->events) o_reify = EV__IOFDSET; /* actually |= */ } if (o_reify & EV__IOFDSET) backend_modify (EV_A_ fd, o_events, anfd->events); } /* normally, fdchangecnt hasn't changed. if it has, then new fds have been added. * this is a rare case (see beginning comment in this function), so we copy them to the * front and hope the backend handles this case. 
*/ if (ecb_expect_false (fdchangecnt != changecnt)) memmove (fdchanges, fdchanges + changecnt, (fdchangecnt - changecnt) * sizeof (*fdchanges)); fdchangecnt -= changecnt; } /* something about the given fd changed */ inline_size void fd_change (EV_P_ int fd, int flags) { unsigned char reify = anfds [fd].reify; anfds [fd].reify = reify | flags; if (ecb_expect_true (!reify)) { ++fdchangecnt; array_needsize (int, fdchanges, fdchangemax, fdchangecnt, array_needsize_noinit); fdchanges [fdchangecnt - 1] = fd; } } /* the given fd is invalid/unusable, so make sure it doesn't hurt us anymore */ inline_speed ecb_cold void fd_kill (EV_P_ int fd) { ev_io *w; while ((w = (ev_io *)anfds [fd].head)) { ev_io_stop (EV_A_ w); ev_feed_event (EV_A_ (W)w, EV_ERROR | EV_READ | EV_WRITE); } } /* check whether the given fd is actually valid, for error recovery */ inline_size ecb_cold int fd_valid (int fd) { #ifdef _WIN32 return EV_FD_TO_WIN32_HANDLE (fd) != -1; #else return fcntl (fd, F_GETFD) != -1; #endif } /* called on EBADF to verify fds */ ecb_noinline ecb_cold static void fd_ebadf (EV_P) { int fd; for (fd = 0; fd < anfdmax; ++fd) if (anfds [fd].events) if (!fd_valid (fd) && errno == EBADF) fd_kill (EV_A_ fd); } /* called on ENOMEM in select/poll to kill some fds and retry */ ecb_noinline ecb_cold static void fd_enomem (EV_P) { int fd; for (fd = anfdmax; fd--; ) if (anfds [fd].events) { fd_kill (EV_A_ fd); break; } } /* usually called after fork if backend needs to re-arm all fds from scratch */ ecb_noinline static void fd_rearm_all (EV_P) { int fd; for (fd = 0; fd < anfdmax; ++fd) if (anfds [fd].events) { anfds [fd].events = 0; anfds [fd].emask = 0; fd_change (EV_A_ fd, EV__IOFDSET | EV_ANFD_REIFY); } } /* used to prepare libev internal fd's */ /* this is not fork-safe */ inline_speed void fd_intern (int fd) { #ifdef _WIN32 unsigned long arg = 1; ioctlsocket (EV_FD_TO_WIN32_HANDLE (fd), FIONBIO, &arg); #else fcntl (fd, F_SETFD, FD_CLOEXEC); fcntl (fd, F_SETFL, O_NONBLOCK); #endif } /*****************************************************************************/ /* * the heap functions want a real array index. array index 0 is guaranteed to not * be in-use at any time. the first heap entry is at array [HEAP0]. DHEAP gives * the branching factor of the d-tree. */ /* * at the moment we allow libev the luxury of two heaps, * a small-code-size 2-heap one and a ~1.5kb larger 4-heap * which is more cache-efficient. * the difference is about 5% with 50000+ watchers. 
*/ #if EV_USE_4HEAP #define DHEAP 4 #define HEAP0 (DHEAP - 1) /* index of first element in heap */ #define HPARENT(k) ((((k) - HEAP0 - 1) / DHEAP) + HEAP0) #define UPHEAP_DONE(p,k) ((p) == (k)) /* away from the root */ inline_speed void downheap (ANHE *heap, int N, int k) { ANHE he = heap [k]; ANHE *E = heap + N + HEAP0; for (;;) { ev_tstamp minat; ANHE *minpos; ANHE *pos = heap + DHEAP * (k - HEAP0) + HEAP0 + 1; /* find minimum child */ if (ecb_expect_true (pos + DHEAP - 1 < E)) { /* fast path */ (minpos = pos + 0), (minat = ANHE_at (*minpos)); if ( minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos)); if ( minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos)); if ( minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos)); } else if (pos < E) { /* slow path */ (minpos = pos + 0), (minat = ANHE_at (*minpos)); if (pos + 1 < E && minat > ANHE_at (pos [1])) (minpos = pos + 1), (minat = ANHE_at (*minpos)); if (pos + 2 < E && minat > ANHE_at (pos [2])) (minpos = pos + 2), (minat = ANHE_at (*minpos)); if (pos + 3 < E && minat > ANHE_at (pos [3])) (minpos = pos + 3), (minat = ANHE_at (*minpos)); } else break; if (ANHE_at (he) <= minat) break; heap [k] = *minpos; ev_active (ANHE_w (*minpos)) = k; k = minpos - heap; } heap [k] = he; ev_active (ANHE_w (he)) = k; } #else /* not 4HEAP */ #define HEAP0 1 #define HPARENT(k) ((k) >> 1) #define UPHEAP_DONE(p,k) (!(p)) /* away from the root */ inline_speed void downheap (ANHE *heap, int N, int k) { ANHE he = heap [k]; for (;;) { int c = k << 1; if (c >= N + HEAP0) break; c += c + 1 < N + HEAP0 && ANHE_at (heap [c]) > ANHE_at (heap [c + 1]) ? 1 : 0; if (ANHE_at (he) <= ANHE_at (heap [c])) break; heap [k] = heap [c]; ev_active (ANHE_w (heap [k])) = k; k = c; } heap [k] = he; ev_active (ANHE_w (he)) = k; } #endif /* towards the root */ inline_speed void upheap (ANHE *heap, int k) { ANHE he = heap [k]; for (;;) { int p = HPARENT (k); if (UPHEAP_DONE (p, k) || ANHE_at (heap [p]) <= ANHE_at (he)) break; heap [k] = heap [p]; ev_active (ANHE_w (heap [k])) = k; k = p; } heap [k] = he; ev_active (ANHE_w (he)) = k; } /* move an element suitably so it is in a correct place */ inline_size void adjustheap (ANHE *heap, int N, int k) { if (k > HEAP0 && ANHE_at (heap [k]) <= ANHE_at (heap [HPARENT (k)])) upheap (heap, k); else downheap (heap, N, k); } /* rebuild the heap: this function is used only once and executed rarely */ inline_size void reheap (ANHE *heap, int N) { int i; /* we don't use floyds algorithm, upheap is simpler and is more cache-efficient */ /* also, this is easy to implement and correct for both 2-heaps and 4-heaps */ for (i = 0; i < N; ++i) upheap (heap, i + HEAP0); } /*****************************************************************************/ /* associate signal watchers to a signal */ typedef struct { EV_ATOMIC_T pending; #if EV_MULTIPLICITY EV_P; #endif WL head; } ANSIG; static ANSIG signals [EV_NSIG - 1]; /*****************************************************************************/ #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE ecb_noinline ecb_cold static void evpipe_init (EV_P) { if (!ev_is_active (&pipe_w)) { int fds [2]; # if EV_USE_EVENTFD fds [0] = -1; fds [1] = eventfd (0, EFD_NONBLOCK | EFD_CLOEXEC); if (fds [1] < 0 && errno == EINVAL) fds [1] = eventfd (0, 0); if (fds [1] < 0) # endif { while (pipe (fds)) ev_syserr ("(libev) error creating signal/async pipe"); fd_intern (fds [0]); } evpipe [0] = fds [0]; if (evpipe [1] < 0) evpipe [1] = fds [1]; /* first call, set write fd */ else { 
/* on subsequent calls, do not change evpipe [1] */ /* so that evpipe_write can always rely on its value. */ /* this branch does not do anything sensible on windows, */ /* so must not be executed on windows */ dup2 (fds [1], evpipe [1]); close (fds [1]); } fd_intern (evpipe [1]); ev_io_set (&pipe_w, evpipe [0] < 0 ? evpipe [1] : evpipe [0], EV_READ); ev_io_start (EV_A_ &pipe_w); ev_unref (EV_A); /* watcher should not keep loop alive */ } } inline_speed void evpipe_write (EV_P_ EV_ATOMIC_T *flag) { ECB_MEMORY_FENCE; /* push out the write before this function was called, acquire flag */ if (ecb_expect_true (*flag)) return; *flag = 1; ECB_MEMORY_FENCE_RELEASE; /* make sure flag is visible before the wakeup */ pipe_write_skipped = 1; ECB_MEMORY_FENCE; /* make sure pipe_write_skipped is visible before we check pipe_write_wanted */ if (pipe_write_wanted) { int old_errno; pipe_write_skipped = 0; ECB_MEMORY_FENCE_RELEASE; old_errno = errno; /* save errno because write will clobber it */ #if EV_USE_EVENTFD if (evpipe [0] < 0) { uint64_t counter = 1; write (evpipe [1], &counter, sizeof (uint64_t)); } else #endif { #ifdef _WIN32 WSABUF buf; DWORD sent; buf.buf = (char *)&buf; buf.len = 1; WSASend (EV_FD_TO_WIN32_HANDLE (evpipe [1]), &buf, 1, &sent, 0, 0, 0); #else write (evpipe [1], &(evpipe [1]), 1); #endif } errno = old_errno; } } /* called whenever the libev signal pipe */ /* got some events (signal, async) */ static void pipecb (EV_P_ ev_io *iow, int revents) { int i; if (revents & EV_READ) { #if EV_USE_EVENTFD if (evpipe [0] < 0) { uint64_t counter; read (evpipe [1], &counter, sizeof (uint64_t)); } else #endif { char dummy[4]; #ifdef _WIN32 WSABUF buf; DWORD recvd; DWORD flags = 0; buf.buf = dummy; buf.len = sizeof (dummy); WSARecv (EV_FD_TO_WIN32_HANDLE (evpipe [0]), &buf, 1, &recvd, &flags, 0, 0); #else read (evpipe [0], &dummy, sizeof (dummy)); #endif } } pipe_write_skipped = 0; ECB_MEMORY_FENCE; /* push out skipped, acquire flags */ #if EV_SIGNAL_ENABLE if (sig_pending) { sig_pending = 0; ECB_MEMORY_FENCE; for (i = EV_NSIG - 1; i--; ) if (ecb_expect_false (signals [i].pending)) ev_feed_signal_event (EV_A_ i + 1); } #endif #if EV_ASYNC_ENABLE if (async_pending) { async_pending = 0; ECB_MEMORY_FENCE; for (i = asynccnt; i--; ) if (asyncs [i]->sent) { asyncs [i]->sent = 0; ECB_MEMORY_FENCE_RELEASE; ev_feed_event (EV_A_ asyncs [i], EV_ASYNC); } } #endif } /*****************************************************************************/ void ev_feed_signal (int signum) EV_NOEXCEPT { #if EV_MULTIPLICITY EV_P; ECB_MEMORY_FENCE_ACQUIRE; EV_A = signals [signum - 1].loop; if (!EV_A) return; #endif signals [signum - 1].pending = 1; evpipe_write (EV_A_ &sig_pending); } static void ev_sighandler (int signum) { #ifdef _WIN32 signal (signum, ev_sighandler); #endif ev_feed_signal (signum); } ecb_noinline void ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT { WL w; if (ecb_expect_false (signum <= 0 || signum >= EV_NSIG)) return; --signum; #if EV_MULTIPLICITY /* it is permissible to try to feed a signal to the wrong loop */ /* or, likely more useful, feeding a signal nobody is waiting for */ if (ecb_expect_false (signals [signum].loop != EV_A)) return; #endif signals [signum].pending = 0; ECB_MEMORY_FENCE_RELEASE; for (w = signals [signum].head; w; w = w->next) ev_feed_event (EV_A_ (W)w, EV_SIGNAL); } #if EV_USE_SIGNALFD static void sigfdcb (EV_P_ ev_io *iow, int revents) { struct signalfd_siginfo si[2], *sip; /* these structs are big */ for (;;) { ssize_t res = read (sigfd, si, sizeof (si)); /* not ISO-C, 
as res might be -1, but works with SuS */ for (sip = si; (char *)sip < (char *)si + res; ++sip) ev_feed_signal_event (EV_A_ sip->ssi_signo); if (res < (ssize_t)sizeof (si)) break; } } #endif #endif /*****************************************************************************/ #if EV_CHILD_ENABLE static WL childs [EV_PID_HASHSIZE]; static ev_signal childev; #ifndef WIFCONTINUED # define WIFCONTINUED(status) 0 #endif /* handle a single child status event */ inline_speed void child_reap (EV_P_ int chain, int pid, int status) { ev_child *w; int traced = WIFSTOPPED (status) || WIFCONTINUED (status); for (w = (ev_child *)childs [chain & ((EV_PID_HASHSIZE) - 1)]; w; w = (ev_child *)((WL)w)->next) { if ((w->pid == pid || !w->pid) && (!traced || (w->flags & 1))) { ev_set_priority (w, EV_MAXPRI); /* need to do it *now*, this *must* be the same prio as the signal watcher itself */ w->rpid = pid; w->rstatus = status; ev_feed_event (EV_A_ (W)w, EV_CHILD); } } } #ifndef WCONTINUED # define WCONTINUED 0 #endif /* called on sigchld etc., calls waitpid */ static void childcb (EV_P_ ev_signal *sw, int revents) { int pid, status; /* some systems define WCONTINUED but then fail to support it (linux 2.4) */ if (0 >= (pid = waitpid (-1, &status, WNOHANG | WUNTRACED | WCONTINUED))) if (!WCONTINUED || errno != EINVAL || 0 >= (pid = waitpid (-1, &status, WNOHANG | WUNTRACED))) return; /* make sure we are called again until all children have been reaped */ /* we need to do it this way so that the callback gets called before we continue */ ev_feed_event (EV_A_ (W)sw, EV_SIGNAL); child_reap (EV_A_ pid, pid, status); if ((EV_PID_HASHSIZE) > 1) child_reap (EV_A_ 0, pid, status); /* this might trigger a watcher twice, but feed_event catches that */ } #endif /*****************************************************************************/ #if EV_USE_TIMERFD static void periodics_reschedule (EV_P); static void timerfdcb (EV_P_ ev_io *iow, int revents) { struct itimerspec its = { 0 }; its.it_value.tv_sec = ev_rt_now + (int)MAX_BLOCKTIME2; timerfd_settime (timerfd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, 0); ev_rt_now = ev_time (); /* periodics_reschedule only needs ev_rt_now */ /* but maybe in the future we want the full treatment. 
*/ /* now_floor = EV_TS_CONST (0.); time_update (EV_A_ EV_TSTAMP_HUGE); */ #if EV_PERIODIC_ENABLE periodics_reschedule (EV_A); #endif } ecb_noinline ecb_cold static void evtimerfd_init (EV_P) { if (!ev_is_active (&timerfd_w)) { timerfd = timerfd_create (CLOCK_REALTIME, TFD_NONBLOCK | TFD_CLOEXEC); if (timerfd >= 0) { fd_intern (timerfd); /* just to be sure */ ev_io_init (&timerfd_w, timerfdcb, timerfd, EV_READ); ev_set_priority (&timerfd_w, EV_MINPRI); ev_io_start (EV_A_ &timerfd_w); ev_unref (EV_A); /* watcher should not keep loop alive */ /* (re-) arm timer */ timerfdcb (EV_A_ 0, 0); } } } #endif /*****************************************************************************/ #if EV_USE_IOCP # include "ev_iocp.c" #endif #if EV_USE_PORT # include "ev_port.c" #endif #if EV_USE_KQUEUE # include "ev_kqueue.c" #endif #if EV_USE_EPOLL # include "ev_epoll.c" #endif #if EV_USE_LINUXAIO # include "ev_linuxaio.c" #endif #if EV_USE_IOURING # include "ev_iouring.c" #endif #if EV_USE_POLL # include "ev_poll.c" #endif #if EV_USE_SELECT # include "ev_select.c" #endif ecb_cold int ev_version_major (void) EV_NOEXCEPT { return EV_VERSION_MAJOR; } ecb_cold int ev_version_minor (void) EV_NOEXCEPT { return EV_VERSION_MINOR; } /* return true if we are running with elevated privileges and should ignore env variables */ inline_size ecb_cold int enable_secure (void) { #ifdef _WIN32 return 0; #else return getuid () != geteuid () || getgid () != getegid (); #endif } ecb_cold unsigned int ev_supported_backends (void) EV_NOEXCEPT { unsigned int flags = 0; if (EV_USE_PORT ) flags |= EVBACKEND_PORT; if (EV_USE_KQUEUE ) flags |= EVBACKEND_KQUEUE; if (EV_USE_EPOLL ) flags |= EVBACKEND_EPOLL; if (EV_USE_LINUXAIO ) flags |= EVBACKEND_LINUXAIO; if (EV_USE_IOURING && ev_linux_version () >= 0x050601) flags |= EVBACKEND_IOURING; /* 5.6.1+ */ if (EV_USE_POLL ) flags |= EVBACKEND_POLL; if (EV_USE_SELECT ) flags |= EVBACKEND_SELECT; return flags; } ecb_cold unsigned int ev_recommended_backends (void) EV_NOEXCEPT { unsigned int flags = ev_supported_backends (); #ifndef __NetBSD__ /* kqueue is borked on everything but netbsd apparently */ /* it usually doesn't work correctly on anything but sockets and pipes */ flags &= ~EVBACKEND_KQUEUE; #endif #ifdef __APPLE__ /* only select works correctly on that "unix-certified" platform */ flags &= ~EVBACKEND_KQUEUE; /* horribly broken, even for sockets */ flags &= ~EVBACKEND_POLL; /* poll is based on kqueue from 10.5 onwards */ #endif #ifdef __FreeBSD__ flags &= ~EVBACKEND_POLL; /* poll return value is unusable (http://forums.freebsd.org/archive/index.php/t-10270.html) */ #endif /* TODO: linuxaio is very experimental */ #if !EV_RECOMMEND_LINUXAIO flags &= ~EVBACKEND_LINUXAIO; #endif /* TODO: linuxaio is super experimental */ #if !EV_RECOMMEND_IOURING flags &= ~EVBACKEND_IOURING; #endif return flags; } ecb_cold unsigned int ev_embeddable_backends (void) EV_NOEXCEPT { int flags = EVBACKEND_EPOLL | EVBACKEND_KQUEUE | EVBACKEND_PORT | EVBACKEND_IOURING; /* epoll embeddability broken on all linux versions up to at least 2.6.23 */ if (ev_linux_version () < 0x020620) /* disable it on linux < 2.6.32 */ flags &= ~EVBACKEND_EPOLL; /* EVBACKEND_LINUXAIO is theoretically embeddable, but suffers from a performance overhead */ return flags; } unsigned int ev_backend (EV_P) EV_NOEXCEPT { return backend; } #if EV_FEATURE_API unsigned int ev_iteration (EV_P) EV_NOEXCEPT { return loop_count; } unsigned int ev_depth (EV_P) EV_NOEXCEPT { return loop_depth; } void ev_set_io_collect_interval (EV_P_ ev_tstamp 
interval) EV_NOEXCEPT { io_blocktime = interval; } void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT { timeout_blocktime = interval; } void ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT { userdata = data; } void * ev_userdata (EV_P) EV_NOEXCEPT { return userdata; } void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT { invoke_cb = invoke_pending_cb; } void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT { release_cb = release; acquire_cb = acquire; } #endif /* initialise a loop structure, must be zero-initialised */ ecb_noinline ecb_cold static void loop_init (EV_P_ unsigned int flags) EV_NOEXCEPT { if (!backend) { origflags = flags; #if EV_USE_REALTIME if (!have_realtime) { struct timespec ts; if (!clock_gettime (CLOCK_REALTIME, &ts)) have_realtime = 1; } #endif #if EV_USE_MONOTONIC if (!have_monotonic) { struct timespec ts; if (!clock_gettime (CLOCK_MONOTONIC, &ts)) have_monotonic = 1; } #endif /* pid check not overridable via env */ #ifndef _WIN32 if (flags & EVFLAG_FORKCHECK) curpid = getpid (); #endif if (!(flags & EVFLAG_NOENV) && !enable_secure () && getenv ("LIBEV_FLAGS")) flags = atoi (getenv ("LIBEV_FLAGS")); ev_rt_now = ev_time (); mn_now = get_clock (); now_floor = mn_now; rtmn_diff = ev_rt_now - mn_now; #if EV_FEATURE_API invoke_cb = ev_invoke_pending; #endif io_blocktime = 0.; timeout_blocktime = 0.; backend = 0; backend_fd = -1; sig_pending = 0; #if EV_ASYNC_ENABLE async_pending = 0; #endif pipe_write_skipped = 0; pipe_write_wanted = 0; evpipe [0] = -1; evpipe [1] = -1; #if EV_USE_INOTIFY fs_fd = flags & EVFLAG_NOINOTIFY ? -1 : -2; #endif #if EV_USE_SIGNALFD sigfd = flags & EVFLAG_SIGNALFD ? -2 : -1; #endif #if EV_USE_TIMERFD timerfd = flags & EVFLAG_NOTIMERFD ? 
-1 : -2; #endif if (!(flags & EVBACKEND_MASK)) flags |= ev_recommended_backends (); #if EV_USE_IOCP if (!backend && (flags & EVBACKEND_IOCP )) backend = iocp_init (EV_A_ flags); #endif #if EV_USE_PORT if (!backend && (flags & EVBACKEND_PORT )) backend = port_init (EV_A_ flags); #endif #if EV_USE_KQUEUE if (!backend && (flags & EVBACKEND_KQUEUE )) backend = kqueue_init (EV_A_ flags); #endif #if EV_USE_IOURING if (!backend && (flags & EVBACKEND_IOURING )) backend = iouring_init (EV_A_ flags); #endif #if EV_USE_LINUXAIO if (!backend && (flags & EVBACKEND_LINUXAIO)) backend = linuxaio_init (EV_A_ flags); #endif #if EV_USE_EPOLL if (!backend && (flags & EVBACKEND_EPOLL )) backend = epoll_init (EV_A_ flags); #endif #if EV_USE_POLL if (!backend && (flags & EVBACKEND_POLL )) backend = poll_init (EV_A_ flags); #endif #if EV_USE_SELECT if (!backend && (flags & EVBACKEND_SELECT )) backend = select_init (EV_A_ flags); #endif ev_prepare_init (&pending_w, pendingcb); #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE ev_init (&pipe_w, pipecb); ev_set_priority (&pipe_w, EV_MAXPRI); #endif } } /* free up a loop structure */ ecb_cold void ev_loop_destroy (EV_P) { int i; #if EV_MULTIPLICITY /* mimic free (0) */ if (!EV_A) return; #endif #if EV_CLEANUP_ENABLE /* queue cleanup watchers (and execute them) */ if (ecb_expect_false (cleanupcnt)) { queue_events (EV_A_ (W *)cleanups, cleanupcnt, EV_CLEANUP); EV_INVOKE_PENDING; } #endif #if EV_CHILD_ENABLE if (ev_is_default_loop (EV_A) && ev_is_active (&childev)) { ev_ref (EV_A); /* child watcher */ ev_signal_stop (EV_A_ &childev); } #endif if (ev_is_active (&pipe_w)) { /*ev_ref (EV_A);*/ /*ev_io_stop (EV_A_ &pipe_w);*/ if (evpipe [0] >= 0) EV_WIN32_CLOSE_FD (evpipe [0]); if (evpipe [1] >= 0) EV_WIN32_CLOSE_FD (evpipe [1]); } #if EV_USE_SIGNALFD if (ev_is_active (&sigfd_w)) close (sigfd); #endif #if EV_USE_TIMERFD if (ev_is_active (&timerfd_w)) close (timerfd); #endif #if EV_USE_INOTIFY if (fs_fd >= 0) close (fs_fd); #endif if (backend_fd >= 0) close (backend_fd); #if EV_USE_IOCP if (backend == EVBACKEND_IOCP ) iocp_destroy (EV_A); #endif #if EV_USE_PORT if (backend == EVBACKEND_PORT ) port_destroy (EV_A); #endif #if EV_USE_KQUEUE if (backend == EVBACKEND_KQUEUE ) kqueue_destroy (EV_A); #endif #if EV_USE_IOURING if (backend == EVBACKEND_IOURING ) iouring_destroy (EV_A); #endif #if EV_USE_LINUXAIO if (backend == EVBACKEND_LINUXAIO) linuxaio_destroy (EV_A); #endif #if EV_USE_EPOLL if (backend == EVBACKEND_EPOLL ) epoll_destroy (EV_A); #endif #if EV_USE_POLL if (backend == EVBACKEND_POLL ) poll_destroy (EV_A); #endif #if EV_USE_SELECT if (backend == EVBACKEND_SELECT ) select_destroy (EV_A); #endif for (i = NUMPRI; i--; ) { array_free (pending, [i]); #if EV_IDLE_ENABLE array_free (idle, [i]); #endif } ev_free (anfds); anfds = 0; anfdmax = 0; /* have to use the microsoft-never-gets-it-right macro */ array_free (rfeed, EMPTY); array_free (fdchange, EMPTY); array_free (timer, EMPTY); #if EV_PERIODIC_ENABLE array_free (periodic, EMPTY); #endif #if EV_FORK_ENABLE array_free (fork, EMPTY); #endif #if EV_CLEANUP_ENABLE array_free (cleanup, EMPTY); #endif array_free (prepare, EMPTY); array_free (check, EMPTY); #if EV_ASYNC_ENABLE array_free (async, EMPTY); #endif backend = 0; #if EV_MULTIPLICITY if (ev_is_default_loop (EV_A)) #endif ev_default_loop_ptr = 0; #if EV_MULTIPLICITY else ev_free (EV_A); #endif } #if EV_USE_INOTIFY inline_size void infy_fork (EV_P); #endif inline_size void loop_fork (EV_P) { #if EV_USE_PORT if (backend == EVBACKEND_PORT ) port_fork (EV_A); #endif #if 
EV_USE_KQUEUE if (backend == EVBACKEND_KQUEUE ) kqueue_fork (EV_A); #endif #if EV_USE_IOURING if (backend == EVBACKEND_IOURING ) iouring_fork (EV_A); #endif #if EV_USE_LINUXAIO if (backend == EVBACKEND_LINUXAIO) linuxaio_fork (EV_A); #endif #if EV_USE_EPOLL if (backend == EVBACKEND_EPOLL ) epoll_fork (EV_A); #endif #if EV_USE_INOTIFY infy_fork (EV_A); #endif if (postfork != 2) { #if EV_USE_SIGNALFD /* surprisingly, nothing needs to be done for signalfd, accoridng to docs, it does the right thing on fork */ #endif #if EV_USE_TIMERFD if (ev_is_active (&timerfd_w)) { ev_ref (EV_A); ev_io_stop (EV_A_ &timerfd_w); close (timerfd); timerfd = -2; evtimerfd_init (EV_A); /* reschedule periodics, in case we missed something */ ev_feed_event (EV_A_ &timerfd_w, EV_CUSTOM); } #endif #if EV_SIGNAL_ENABLE || EV_ASYNC_ENABLE if (ev_is_active (&pipe_w)) { /* pipe_write_wanted must be false now, so modifying fd vars should be safe */ ev_ref (EV_A); ev_io_stop (EV_A_ &pipe_w); if (evpipe [0] >= 0) EV_WIN32_CLOSE_FD (evpipe [0]); evpipe_init (EV_A); /* iterate over everything, in case we missed something before */ ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM); } #endif } postfork = 0; } #if EV_MULTIPLICITY ecb_cold struct ev_loop * ev_loop_new (unsigned int flags) EV_NOEXCEPT { EV_P = (struct ev_loop *)ev_malloc (sizeof (struct ev_loop)); memset (EV_A, 0, sizeof (struct ev_loop)); loop_init (EV_A_ flags); if (ev_backend (EV_A)) return EV_A; ev_free (EV_A); return 0; } #endif /* multiplicity */ #if EV_VERIFY ecb_noinline ecb_cold static void verify_watcher (EV_P_ W w) { assert (("libev: watcher has invalid priority", ABSPRI (w) >= 0 && ABSPRI (w) < NUMPRI)); if (w->pending) assert (("libev: pending watcher not on pending queue", pendings [ABSPRI (w)][w->pending - 1].w == w)); } ecb_noinline ecb_cold static void verify_heap (EV_P_ ANHE *heap, int N) { int i; for (i = HEAP0; i < N + HEAP0; ++i) { assert (("libev: active index mismatch in heap", ev_active (ANHE_w (heap [i])) == i)); assert (("libev: heap condition violated", i == HEAP0 || ANHE_at (heap [HPARENT (i)]) <= ANHE_at (heap [i]))); assert (("libev: heap at cache mismatch", ANHE_at (heap [i]) == ev_at (ANHE_w (heap [i])))); verify_watcher (EV_A_ (W)ANHE_w (heap [i])); } } ecb_noinline ecb_cold static void array_verify (EV_P_ W *ws, int cnt) { while (cnt--) { assert (("libev: active index mismatch", ev_active (ws [cnt]) == cnt + 1)); verify_watcher (EV_A_ ws [cnt]); } } #endif #if EV_FEATURE_API void ecb_cold ev_verify (EV_P) EV_NOEXCEPT { #if EV_VERIFY int i; WL w, w2; assert (activecnt >= -1); assert (fdchangemax >= fdchangecnt); for (i = 0; i < fdchangecnt; ++i) assert (("libev: negative fd in fdchanges", fdchanges [i] >= 0)); assert (anfdmax >= 0); for (i = 0; i < anfdmax; ++i) { int j = 0; for (w = w2 = anfds [i].head; w; w = w->next) { verify_watcher (EV_A_ (W)w); if (j++ & 1) { assert (("libev: io watcher list contains a loop", w != w2)); w2 = w2->next; } assert (("libev: inactive fd watcher on anfd list", ev_active (w) == 1)); assert (("libev: fd mismatch between watcher and anfd", ((ev_io *)w)->fd == i)); } } assert (timermax >= timercnt); verify_heap (EV_A_ timers, timercnt); #if EV_PERIODIC_ENABLE assert (periodicmax >= periodiccnt); verify_heap (EV_A_ periodics, periodiccnt); #endif for (i = NUMPRI; i--; ) { assert (pendingmax [i] >= pendingcnt [i]); #if EV_IDLE_ENABLE assert (idleall >= 0); assert (idlemax [i] >= idlecnt [i]); array_verify (EV_A_ (W *)idles [i], idlecnt [i]); #endif } #if EV_FORK_ENABLE assert (forkmax >= forkcnt); array_verify 
(EV_A_ (W *)forks, forkcnt); #endif #if EV_CLEANUP_ENABLE assert (cleanupmax >= cleanupcnt); array_verify (EV_A_ (W *)cleanups, cleanupcnt); #endif #if EV_ASYNC_ENABLE assert (asyncmax >= asynccnt); array_verify (EV_A_ (W *)asyncs, asynccnt); #endif #if EV_PREPARE_ENABLE assert (preparemax >= preparecnt); array_verify (EV_A_ (W *)prepares, preparecnt); #endif #if EV_CHECK_ENABLE assert (checkmax >= checkcnt); array_verify (EV_A_ (W *)checks, checkcnt); #endif # if 0 #if EV_CHILD_ENABLE for (w = (ev_child *)childs [chain & ((EV_PID_HASHSIZE) - 1)]; w; w = (ev_child *)((WL)w)->next) for (signum = EV_NSIG; signum--; ) if (signals [signum].pending) #endif # endif #endif } #endif #if EV_MULTIPLICITY ecb_cold struct ev_loop * #else int #endif ev_default_loop (unsigned int flags) EV_NOEXCEPT { if (!ev_default_loop_ptr) { #if EV_MULTIPLICITY EV_P = ev_default_loop_ptr = &default_loop_struct; #else ev_default_loop_ptr = 1; #endif loop_init (EV_A_ flags); if (ev_backend (EV_A)) { #if EV_CHILD_ENABLE ev_signal_init (&childev, childcb, SIGCHLD); ev_set_priority (&childev, EV_MAXPRI); ev_signal_start (EV_A_ &childev); ev_unref (EV_A); /* child watcher should not keep loop alive */ #endif } else ev_default_loop_ptr = 0; } return ev_default_loop_ptr; } void ev_loop_fork (EV_P) EV_NOEXCEPT { postfork = 1; } /*****************************************************************************/ void ev_invoke (EV_P_ void *w, int revents) { EV_CB_INVOKE ((W)w, revents); } unsigned int ev_pending_count (EV_P) EV_NOEXCEPT { int pri; unsigned int count = 0; for (pri = NUMPRI; pri--; ) count += pendingcnt [pri]; return count; } ecb_noinline void ev_invoke_pending (EV_P) { pendingpri = NUMPRI; do { --pendingpri; /* pendingpri possibly gets modified in the inner loop */ while (pendingcnt [pendingpri]) { ANPENDING *p = pendings [pendingpri] + --pendingcnt [pendingpri]; p->w->pending = 0; EV_CB_INVOKE (p->w, p->events); EV_FREQUENT_CHECK; } } while (pendingpri); } #if EV_IDLE_ENABLE /* make idle watchers pending. this handles the "call-idle */ /* only when higher priorities are idle" logic */ inline_size void idle_reify (EV_P) { if (ecb_expect_false (idleall)) { int pri; for (pri = NUMPRI; pri--; ) { if (pendingcnt [pri]) break; if (idlecnt [pri]) { queue_events (EV_A_ (W *)idles [pri], idlecnt [pri], EV_IDLE); break; } } } } #endif /* make timers pending */ inline_size void timers_reify (EV_P) { EV_FREQUENT_CHECK; if (timercnt && ANHE_at (timers [HEAP0]) < mn_now) { do { ev_timer *w = (ev_timer *)ANHE_w (timers [HEAP0]); /*assert (("libev: inactive timer on timer heap detected", ev_is_active (w)));*/ /* first reschedule or stop timer */ if (w->repeat) { ev_at (w) += w->repeat; if (ev_at (w) < mn_now) ev_at (w) = mn_now; assert (("libev: negative ev_timer repeat value found while processing timers", w->repeat > EV_TS_CONST (0.))); ANHE_at_cache (timers [HEAP0]); downheap (timers, timercnt, HEAP0); } else ev_timer_stop (EV_A_ w); /* nonrepeating: stop timer */ EV_FREQUENT_CHECK; feed_reverse (EV_A_ (W)w); } while (timercnt && ANHE_at (timers [HEAP0]) < mn_now); feed_reverse_done (EV_A_ EV_TIMER); } } #if EV_PERIODIC_ENABLE ecb_noinline static void periodic_recalc (EV_P_ ev_periodic *w) { ev_tstamp interval = w->interval > MIN_INTERVAL ? 
w->interval : MIN_INTERVAL; ev_tstamp at = w->offset + interval * ev_floor ((ev_rt_now - w->offset) / interval); /* the above almost always errs on the low side */ while (at <= ev_rt_now) { ev_tstamp nat = at + w->interval; /* when resolution fails us, we use ev_rt_now */ if (ecb_expect_false (nat == at)) { at = ev_rt_now; break; } at = nat; } ev_at (w) = at; } /* make periodics pending */ inline_size void periodics_reify (EV_P) { EV_FREQUENT_CHECK; while (periodiccnt && ANHE_at (periodics [HEAP0]) < ev_rt_now) { do { ev_periodic *w = (ev_periodic *)ANHE_w (periodics [HEAP0]); /*assert (("libev: inactive timer on periodic heap detected", ev_is_active (w)));*/ /* first reschedule or stop timer */ if (w->reschedule_cb) { ev_at (w) = w->reschedule_cb (w, ev_rt_now); assert (("libev: ev_periodic reschedule callback returned time in the past", ev_at (w) >= ev_rt_now)); ANHE_at_cache (periodics [HEAP0]); downheap (periodics, periodiccnt, HEAP0); } else if (w->interval) { periodic_recalc (EV_A_ w); ANHE_at_cache (periodics [HEAP0]); downheap (periodics, periodiccnt, HEAP0); } else ev_periodic_stop (EV_A_ w); /* nonrepeating: stop timer */ EV_FREQUENT_CHECK; feed_reverse (EV_A_ (W)w); } while (periodiccnt && ANHE_at (periodics [HEAP0]) < ev_rt_now); feed_reverse_done (EV_A_ EV_PERIODIC); } } /* simply recalculate all periodics */ /* TODO: maybe ensure that at least one event happens when jumping forward? */ ecb_noinline ecb_cold static void periodics_reschedule (EV_P) { int i; /* adjust periodics after time jump */ for (i = HEAP0; i < periodiccnt + HEAP0; ++i) { ev_periodic *w = (ev_periodic *)ANHE_w (periodics [i]); if (w->reschedule_cb) ev_at (w) = w->reschedule_cb (w, ev_rt_now); else if (w->interval) periodic_recalc (EV_A_ w); ANHE_at_cache (periodics [i]); } reheap (periodics, periodiccnt); } #endif /* adjust all timers by a given offset */ ecb_noinline ecb_cold static void timers_reschedule (EV_P_ ev_tstamp adjust) { int i; for (i = 0; i < timercnt; ++i) { ANHE *he = timers + i + HEAP0; ANHE_w (*he)->at += adjust; ANHE_at_cache (*he); } } /* fetch new monotonic and realtime times from the kernel */ /* also detect if there was a timejump, and act accordingly */ inline_speed void time_update (EV_P_ ev_tstamp max_block) { #if EV_USE_MONOTONIC if (ecb_expect_true (have_monotonic)) { int i; ev_tstamp odiff = rtmn_diff; mn_now = get_clock (); /* only fetch the realtime clock every 0.5*MIN_TIMEJUMP seconds */ /* interpolate in the meantime */ if (ecb_expect_true (mn_now - now_floor < EV_TS_CONST (MIN_TIMEJUMP * .5))) { ev_rt_now = rtmn_diff + mn_now; return; } now_floor = mn_now; ev_rt_now = ev_time (); /* loop a few times, before making important decisions. * on the choice of "4": one iteration isn't enough, * in case we get preempted during the calls to * ev_time and get_clock. a second call is almost guaranteed * to succeed in that case, though. and looping a few more times * doesn't hurt either as we only do this on time-jumps or * in the unlikely event of having been preempted here. */ for (i = 4; --i; ) { ev_tstamp diff; rtmn_diff = ev_rt_now - mn_now; diff = odiff - rtmn_diff; if (ecb_expect_true ((diff < EV_TS_CONST (0.) ? 
-diff : diff) < EV_TS_CONST (MIN_TIMEJUMP))) return; /* all is well */ ev_rt_now = ev_time (); mn_now = get_clock (); now_floor = mn_now; } /* no timer adjustment, as the monotonic clock doesn't jump */ /* timers_reschedule (EV_A_ rtmn_diff - odiff) */ # if EV_PERIODIC_ENABLE periodics_reschedule (EV_A); # endif } else #endif { ev_rt_now = ev_time (); if (ecb_expect_false (mn_now > ev_rt_now || ev_rt_now > mn_now + max_block + EV_TS_CONST (MIN_TIMEJUMP))) { /* adjust timers. this is easy, as the offset is the same for all of them */ timers_reschedule (EV_A_ ev_rt_now - mn_now); #if EV_PERIODIC_ENABLE periodics_reschedule (EV_A); #endif } mn_now = ev_rt_now; } } int ev_run (EV_P_ int flags) { #if EV_FEATURE_API ++loop_depth; #endif assert (("libev: ev_loop recursion during release detected", loop_done != EVBREAK_RECURSE)); loop_done = EVBREAK_CANCEL; EV_INVOKE_PENDING; /* in case we recurse, ensure ordering stays nice and clean */ do { #if EV_VERIFY >= 2 ev_verify (EV_A); #endif #ifndef _WIN32 if (ecb_expect_false (curpid)) /* penalise the forking check even more */ if (ecb_expect_false (getpid () != curpid)) { curpid = getpid (); postfork = 1; } #endif #if EV_FORK_ENABLE /* we might have forked, so queue fork handlers */ if (ecb_expect_false (postfork)) if (forkcnt) { queue_events (EV_A_ (W *)forks, forkcnt, EV_FORK); EV_INVOKE_PENDING; } #endif #if EV_PREPARE_ENABLE /* queue prepare watchers (and execute them) */ if (ecb_expect_false (preparecnt)) { queue_events (EV_A_ (W *)prepares, preparecnt, EV_PREPARE); EV_INVOKE_PENDING; } #endif if (ecb_expect_false (loop_done)) break; /* we might have forked, so reify kernel state if necessary */ if (ecb_expect_false (postfork)) loop_fork (EV_A); /* update fd-related kernel structures */ fd_reify (EV_A); /* calculate blocking time */ { ev_tstamp waittime = 0.; ev_tstamp sleeptime = 0.; /* remember old timestamp for io_blocktime calculation */ ev_tstamp prev_mn_now = mn_now; /* update time to cancel out callback processing overhead */ time_update (EV_A_ EV_TS_CONST (EV_TSTAMP_HUGE)); /* from now on, we want a pipe-wake-up */ pipe_write_wanted = 1; ECB_MEMORY_FENCE; /* make sure pipe_write_wanted is visible before we check for potential skips */ if (ecb_expect_true (!(flags & EVRUN_NOWAIT || idleall || !activecnt || pipe_write_skipped))) { waittime = EV_TS_CONST (MAX_BLOCKTIME); #if EV_USE_TIMERFD /* sleep a lot longer when we can reliably detect timejumps */ if (ecb_expect_true (timerfd >= 0)) waittime = EV_TS_CONST (MAX_BLOCKTIME2); #endif #if !EV_PERIODIC_ENABLE /* without periodics but with monotonic clock there is no need */ /* for any time jump detection, so sleep longer */ if (ecb_expect_true (have_monotonic)) waittime = EV_TS_CONST (MAX_BLOCKTIME2); #endif if (timercnt) { ev_tstamp to = ANHE_at (timers [HEAP0]) - mn_now; if (waittime > to) waittime = to; } #if EV_PERIODIC_ENABLE if (periodiccnt) { ev_tstamp to = ANHE_at (periodics [HEAP0]) - ev_rt_now; if (waittime > to) waittime = to; } #endif /* don't let timeouts decrease the waittime below timeout_blocktime */ if (ecb_expect_false (waittime < timeout_blocktime)) waittime = timeout_blocktime; /* now there are two more special cases left, either we have * already-expired timers, so we should not sleep, or we have timers * that expire very soon, in which case we need to wait for a minimum * amount of time for some event loop backends. */ if (ecb_expect_false (waittime < backend_mintime)) waittime = waittime <= EV_TS_CONST (0.) ? EV_TS_CONST (0.) 
: backend_mintime; /* extra check because io_blocktime is commonly 0 */ if (ecb_expect_false (io_blocktime)) { sleeptime = io_blocktime - (mn_now - prev_mn_now); if (sleeptime > waittime - backend_mintime) sleeptime = waittime - backend_mintime; if (ecb_expect_true (sleeptime > EV_TS_CONST (0.))) { ev_sleep (sleeptime); waittime -= sleeptime; } } } #if EV_FEATURE_API ++loop_count; #endif assert ((loop_done = EVBREAK_RECURSE, 1)); /* assert for side effect */ backend_poll (EV_A_ waittime); assert ((loop_done = EVBREAK_CANCEL, 1)); /* assert for side effect */ pipe_write_wanted = 0; /* just an optimisation, no fence needed */ ECB_MEMORY_FENCE_ACQUIRE; if (pipe_write_skipped) { assert (("libev: pipe_w not active, but pipe not written", ev_is_active (&pipe_w))); ev_feed_event (EV_A_ &pipe_w, EV_CUSTOM); } /* update ev_rt_now, do magic */ time_update (EV_A_ waittime + sleeptime); } /* queue pending timers and reschedule them */ timers_reify (EV_A); /* relative timers called last */ #if EV_PERIODIC_ENABLE periodics_reify (EV_A); /* absolute timers called first */ #endif #if EV_IDLE_ENABLE /* queue idle watchers unless other events are pending */ idle_reify (EV_A); #endif #if EV_CHECK_ENABLE /* queue check watchers, to be executed first */ if (ecb_expect_false (checkcnt)) queue_events (EV_A_ (W *)checks, checkcnt, EV_CHECK); #endif EV_INVOKE_PENDING; } while (ecb_expect_true ( activecnt && !loop_done && !(flags & (EVRUN_ONCE | EVRUN_NOWAIT)) )); if (loop_done == EVBREAK_ONE) loop_done = EVBREAK_CANCEL; #if EV_FEATURE_API --loop_depth; #endif return activecnt; } void ev_break (EV_P_ int how) EV_NOEXCEPT { loop_done = how; } void ev_ref (EV_P) EV_NOEXCEPT { ++activecnt; } void ev_unref (EV_P) EV_NOEXCEPT { --activecnt; } void ev_now_update (EV_P) EV_NOEXCEPT { time_update (EV_A_ EV_TSTAMP_HUGE); } void ev_suspend (EV_P) EV_NOEXCEPT { ev_now_update (EV_A); } void ev_resume (EV_P) EV_NOEXCEPT { ev_tstamp mn_prev = mn_now; ev_now_update (EV_A); timers_reschedule (EV_A_ mn_now - mn_prev); #if EV_PERIODIC_ENABLE /* TODO: really do this? */ periodics_reschedule (EV_A); #endif } /*****************************************************************************/ /* singly-linked list management, used when the expected list length is short */ inline_size void wlist_add (WL *head, WL elem) { elem->next = *head; *head = elem; } inline_size void wlist_del (WL *head, WL elem) { while (*head) { if (ecb_expect_true (*head == elem)) { *head = elem->next; break; } head = &(*head)->next; } } /* internal, faster, version of ev_clear_pending */ inline_speed void clear_pending (EV_P_ W w) { if (w->pending) { pendings [ABSPRI (w)][w->pending - 1].w = (W)&pending_w; w->pending = 0; } } int ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT { W w_ = (W)w; int pending = w_->pending; if (ecb_expect_true (pending)) { ANPENDING *p = pendings [ABSPRI (w_)] + pending - 1; p->w = (W)&pending_w; w_->pending = 0; return p->events; } else return 0; } inline_size void pri_adjust (EV_P_ W w) { int pri = ev_priority (w); pri = pri < EV_MINPRI ? EV_MINPRI : pri; pri = pri > EV_MAXPRI ? 
EV_MAXPRI : pri; ev_set_priority (w, pri); } inline_speed void ev_start (EV_P_ W w, int active) { pri_adjust (EV_A_ w); w->active = active; ev_ref (EV_A); } inline_size void ev_stop (EV_P_ W w) { ev_unref (EV_A); w->active = 0; } /*****************************************************************************/ ecb_noinline void ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT { int fd = w->fd; if (ecb_expect_false (ev_is_active (w))) return; assert (("libev: ev_io_start called with negative fd", fd >= 0)); assert (("libev: ev_io_start called with illegal event mask", !(w->events & ~(EV__IOFDSET | EV_READ | EV_WRITE)))); #if EV_VERIFY >= 2 assert (("libev: ev_io_start called on watcher with invalid fd", fd_valid (fd))); #endif EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, 1); array_needsize (ANFD, anfds, anfdmax, fd + 1, array_needsize_zerofill); wlist_add (&anfds[fd].head, (WL)w); /* common bug, apparently */ assert (("libev: ev_io_start called with corrupted watcher", ((WL)w)->next != (WL)w)); fd_change (EV_A_ fd, w->events & EV__IOFDSET | EV_ANFD_REIFY); w->events &= ~EV__IOFDSET; EV_FREQUENT_CHECK; } ecb_noinline void ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; assert (("libev: ev_io_stop called with illegal fd (must stay constant after start!)", w->fd >= 0 && w->fd < anfdmax)); #if EV_VERIFY >= 2 assert (("libev: ev_io_stop called on watcher with invalid fd", fd_valid (w->fd))); #endif EV_FREQUENT_CHECK; wlist_del (&anfds[w->fd].head, (WL)w); ev_stop (EV_A_ (W)w); fd_change (EV_A_ w->fd, EV_ANFD_REIFY); EV_FREQUENT_CHECK; } ecb_noinline void ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; ev_at (w) += mn_now; assert (("libev: ev_timer_start called with negative timer repeat value", w->repeat >= 0.)); EV_FREQUENT_CHECK; ++timercnt; ev_start (EV_A_ (W)w, timercnt + HEAP0 - 1); array_needsize (ANHE, timers, timermax, ev_active (w) + 1, array_needsize_noinit); ANHE_w (timers [ev_active (w)]) = (WT)w; ANHE_at_cache (timers [ev_active (w)]); upheap (timers, ev_active (w)); EV_FREQUENT_CHECK; /*assert (("libev: internal timer heap corruption", timers [ev_active (w)] == (WT)w));*/ } ecb_noinline void ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); assert (("libev: internal timer heap corruption", ANHE_w (timers [active]) == (WT)w)); --timercnt; if (ecb_expect_true (active < timercnt + HEAP0)) { timers [active] = timers [timercnt + HEAP0]; adjustheap (timers, timercnt, active); } } ev_at (w) -= mn_now; ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } ecb_noinline void ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT { EV_FREQUENT_CHECK; clear_pending (EV_A_ (W)w); if (ev_is_active (w)) { if (w->repeat) { ev_at (w) = mn_now + w->repeat; ANHE_at_cache (timers [ev_active (w)]); adjustheap (timers, timercnt, ev_active (w)); } else ev_timer_stop (EV_A_ w); } else if (w->repeat) { ev_at (w) = w->repeat; ev_timer_start (EV_A_ w); } EV_FREQUENT_CHECK; } ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT { return ev_at (w) - (ev_is_active (w) ? 
mn_now : EV_TS_CONST (0.)); } #if EV_PERIODIC_ENABLE ecb_noinline void ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; #if EV_USE_TIMERFD if (timerfd == -2) evtimerfd_init (EV_A); #endif if (w->reschedule_cb) ev_at (w) = w->reschedule_cb (w, ev_rt_now); else if (w->interval) { assert (("libev: ev_periodic_start called with negative interval value", w->interval >= 0.)); periodic_recalc (EV_A_ w); } else ev_at (w) = w->offset; EV_FREQUENT_CHECK; ++periodiccnt; ev_start (EV_A_ (W)w, periodiccnt + HEAP0 - 1); array_needsize (ANHE, periodics, periodicmax, ev_active (w) + 1, array_needsize_noinit); ANHE_w (periodics [ev_active (w)]) = (WT)w; ANHE_at_cache (periodics [ev_active (w)]); upheap (periodics, ev_active (w)); EV_FREQUENT_CHECK; /*assert (("libev: internal periodic heap corruption", ANHE_w (periodics [ev_active (w)]) == (WT)w));*/ } ecb_noinline void ev_periodic_stop (EV_P_ ev_periodic *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); assert (("libev: internal periodic heap corruption", ANHE_w (periodics [active]) == (WT)w)); --periodiccnt; if (ecb_expect_true (active < periodiccnt + HEAP0)) { periodics [active] = periodics [periodiccnt + HEAP0]; adjustheap (periodics, periodiccnt, active); } } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } ecb_noinline void ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT { /* TODO: use adjustheap and recalculation */ ev_periodic_stop (EV_A_ w); ev_periodic_start (EV_A_ w); } #endif #ifndef SA_RESTART # define SA_RESTART 0 #endif #if EV_SIGNAL_ENABLE ecb_noinline void ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; assert (("libev: ev_signal_start called with illegal signal number", w->signum > 0 && w->signum < EV_NSIG)); #if EV_MULTIPLICITY assert (("libev: a signal must not be attached to two different loops", !signals [w->signum - 1].loop || signals [w->signum - 1].loop == loop)); signals [w->signum - 1].loop = EV_A; ECB_MEMORY_FENCE_RELEASE; #endif EV_FREQUENT_CHECK; #if EV_USE_SIGNALFD if (sigfd == -2) { sigfd = signalfd (-1, &sigfd_set, SFD_NONBLOCK | SFD_CLOEXEC); if (sigfd < 0 && errno == EINVAL) sigfd = signalfd (-1, &sigfd_set, 0); /* retry without flags */ if (sigfd >= 0) { fd_intern (sigfd); /* doing it twice will not hurt */ sigemptyset (&sigfd_set); ev_io_init (&sigfd_w, sigfdcb, sigfd, EV_READ); ev_set_priority (&sigfd_w, EV_MAXPRI); ev_io_start (EV_A_ &sigfd_w); ev_unref (EV_A); /* signalfd watcher should not keep loop alive */ } } if (sigfd >= 0) { /* TODO: check .head */ sigaddset (&sigfd_set, w->signum); sigprocmask (SIG_BLOCK, &sigfd_set, 0); signalfd (sigfd, &sigfd_set, 0); } #endif ev_start (EV_A_ (W)w, 1); wlist_add (&signals [w->signum - 1].head, (WL)w); if (!((WL)w)->next) # if EV_USE_SIGNALFD if (sigfd < 0) /*TODO*/ # endif { # ifdef _WIN32 evpipe_init (EV_A); signal (w->signum, ev_sighandler); # else struct sigaction sa; evpipe_init (EV_A); sa.sa_handler = ev_sighandler; sigfillset (&sa.sa_mask); sa.sa_flags = SA_RESTART; /* if restarting works we save one iteration */ sigaction (w->signum, &sa, 0); if (origflags & EVFLAG_NOSIGMASK) { sigemptyset (&sa.sa_mask); sigaddset (&sa.sa_mask, w->signum); sigprocmask (SIG_UNBLOCK, &sa.sa_mask, 0); } #endif } EV_FREQUENT_CHECK; } ecb_noinline void ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; 
EV_FREQUENT_CHECK; wlist_del (&signals [w->signum - 1].head, (WL)w); ev_stop (EV_A_ (W)w); if (!signals [w->signum - 1].head) { #if EV_MULTIPLICITY signals [w->signum - 1].loop = 0; /* unattach from signal */ #endif #if EV_USE_SIGNALFD if (sigfd >= 0) { sigset_t ss; sigemptyset (&ss); sigaddset (&ss, w->signum); sigdelset (&sigfd_set, w->signum); signalfd (sigfd, &sigfd_set, 0); sigprocmask (SIG_UNBLOCK, &ss, 0); } else #endif signal (w->signum, SIG_DFL); } EV_FREQUENT_CHECK; } #endif #if EV_CHILD_ENABLE void ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT { #if EV_MULTIPLICITY assert (("libev: child watchers are only supported in the default loop", loop == ev_default_loop_ptr)); #endif if (ecb_expect_false (ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, 1); wlist_add (&childs [w->pid & ((EV_PID_HASHSIZE) - 1)], (WL)w); EV_FREQUENT_CHECK; } void ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; wlist_del (&childs [w->pid & ((EV_PID_HASHSIZE) - 1)], (WL)w); ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_STAT_ENABLE # ifdef _WIN32 # undef lstat # define lstat(a,b) _stati64 (a,b) # endif #define DEF_STAT_INTERVAL 5.0074891 #define NFS_STAT_INTERVAL 30.1074891 /* for filesystems potentially failing inotify */ #define MIN_STAT_INTERVAL 0.1074891 ecb_noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents); #if EV_USE_INOTIFY /* the * 2 is to allow for alignment padding, which for some reason is >> 8 */ # define EV_INOTIFY_BUFSIZE (sizeof (struct inotify_event) * 2 + NAME_MAX) ecb_noinline static void infy_add (EV_P_ ev_stat *w) { w->wd = inotify_add_watch (fs_fd, w->path, IN_ATTRIB | IN_DELETE_SELF | IN_MOVE_SELF | IN_MODIFY | IN_CREATE | IN_DELETE | IN_MOVED_FROM | IN_MOVED_TO | IN_DONT_FOLLOW | IN_MASK_ADD); if (w->wd >= 0) { struct statfs sfs; /* now local changes will be tracked by inotify, but remote changes won't */ /* unless the filesystem is known to be local, we therefore still poll */ /* also do poll on <2.6.25, but with normal frequency */ if (!fs_2625) w->timer.repeat = w->interval ? w->interval : DEF_STAT_INTERVAL; else if (!statfs (w->path, &sfs) && (sfs.f_type == 0x1373 /* devfs */ || sfs.f_type == 0x4006 /* fat */ || sfs.f_type == 0x4d44 /* msdos */ || sfs.f_type == 0xEF53 /* ext2/3 */ || sfs.f_type == 0x72b6 /* jffs2 */ || sfs.f_type == 0x858458f6 /* ramfs */ || sfs.f_type == 0x5346544e /* ntfs */ || sfs.f_type == 0x3153464a /* jfs */ || sfs.f_type == 0x9123683e /* btrfs */ || sfs.f_type == 0x52654973 /* reiser3 */ || sfs.f_type == 0x01021994 /* tmpfs */ || sfs.f_type == 0x58465342 /* xfs */)) w->timer.repeat = 0.; /* filesystem is local, kernel new enough */ else w->timer.repeat = w->interval ? w->interval : NFS_STAT_INTERVAL; /* remote, use reduced frequency */ } else { /* can't use inotify, continue to stat */ w->timer.repeat = w->interval ? w->interval : DEF_STAT_INTERVAL; /* if path is not there, monitor some parent directory for speedup hints */ /* note that exceeding the hardcoded path limit is not a correctness issue, */ /* but an efficiency issue only */ if ((errno == ENOENT || errno == EACCES) && strlen (w->path) < 4096) { char path [4096]; strcpy (path, w->path); do { int mask = IN_MASK_ADD | IN_DELETE_SELF | IN_MOVE_SELF | (errno == EACCES ? 
IN_ATTRIB : IN_CREATE | IN_MOVED_TO); char *pend = strrchr (path, '/'); if (!pend || pend == path) break; *pend = 0; w->wd = inotify_add_watch (fs_fd, path, mask); } while (w->wd < 0 && (errno == ENOENT || errno == EACCES)); } } if (w->wd >= 0) wlist_add (&fs_hash [w->wd & ((EV_INOTIFY_HASHSIZE) - 1)].head, (WL)w); /* now re-arm timer, if required */ if (ev_is_active (&w->timer)) ev_ref (EV_A); ev_timer_again (EV_A_ &w->timer); if (ev_is_active (&w->timer)) ev_unref (EV_A); } ecb_noinline static void infy_del (EV_P_ ev_stat *w) { int slot; int wd = w->wd; if (wd < 0) return; w->wd = -2; slot = wd & ((EV_INOTIFY_HASHSIZE) - 1); wlist_del (&fs_hash [slot].head, (WL)w); /* remove this watcher, if others are watching it, they will rearm */ inotify_rm_watch (fs_fd, wd); } ecb_noinline static void infy_wd (EV_P_ int slot, int wd, struct inotify_event *ev) { if (slot < 0) /* overflow, need to check for all hash slots */ for (slot = 0; slot < (EV_INOTIFY_HASHSIZE); ++slot) infy_wd (EV_A_ slot, wd, ev); else { WL w_; for (w_ = fs_hash [slot & ((EV_INOTIFY_HASHSIZE) - 1)].head; w_; ) { ev_stat *w = (ev_stat *)w_; w_ = w_->next; /* lets us remove this watcher and all before it */ if (w->wd == wd || wd == -1) { if (ev->mask & (IN_IGNORED | IN_UNMOUNT | IN_DELETE_SELF)) { wlist_del (&fs_hash [slot & ((EV_INOTIFY_HASHSIZE) - 1)].head, (WL)w); w->wd = -1; infy_add (EV_A_ w); /* re-add, no matter what */ } stat_timer_cb (EV_A_ &w->timer, 0); } } } } static void infy_cb (EV_P_ ev_io *w, int revents) { char buf [EV_INOTIFY_BUFSIZE]; int ofs; int len = read (fs_fd, buf, sizeof (buf)); for (ofs = 0; ofs < len; ) { struct inotify_event *ev = (struct inotify_event *)(buf + ofs); infy_wd (EV_A_ ev->wd, ev->wd, ev); ofs += sizeof (struct inotify_event) + ev->len; } } inline_size ecb_cold void ev_check_2625 (EV_P) { /* kernels < 2.6.25 are borked * http://www.ussg.indiana.edu/hypermail/linux/kernel/0711.3/1208.html */ if (ev_linux_version () < 0x020619) return; fs_2625 = 1; } inline_size int infy_newfd (void) { #if defined IN_CLOEXEC && defined IN_NONBLOCK int fd = inotify_init1 (IN_CLOEXEC | IN_NONBLOCK); if (fd >= 0) return fd; #endif return inotify_init (); } inline_size void infy_init (EV_P) { if (fs_fd != -2) return; fs_fd = -1; ev_check_2625 (EV_A); fs_fd = infy_newfd (); if (fs_fd >= 0) { fd_intern (fs_fd); ev_io_init (&fs_w, infy_cb, fs_fd, EV_READ); ev_set_priority (&fs_w, EV_MAXPRI); ev_io_start (EV_A_ &fs_w); ev_unref (EV_A); } } inline_size void infy_fork (EV_P) { int slot; if (fs_fd < 0) return; ev_ref (EV_A); ev_io_stop (EV_A_ &fs_w); close (fs_fd); fs_fd = infy_newfd (); if (fs_fd >= 0) { fd_intern (fs_fd); ev_io_set (&fs_w, fs_fd, EV_READ); ev_io_start (EV_A_ &fs_w); ev_unref (EV_A); } for (slot = 0; slot < (EV_INOTIFY_HASHSIZE); ++slot) { WL w_ = fs_hash [slot].head; fs_hash [slot].head = 0; while (w_) { ev_stat *w = (ev_stat *)w_; w_ = w_->next; /* lets us add this watcher */ w->wd = -1; if (fs_fd >= 0) infy_add (EV_A_ w); /* re-add, no matter what */ else { w->timer.repeat = w->interval ? 
w->interval : DEF_STAT_INTERVAL; if (ev_is_active (&w->timer)) ev_ref (EV_A); ev_timer_again (EV_A_ &w->timer); if (ev_is_active (&w->timer)) ev_unref (EV_A); } } } } #endif #ifdef _WIN32 # define EV_LSTAT(p,b) _stati64 (p, b) #else # define EV_LSTAT(p,b) lstat (p, b) #endif void ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT { if (lstat (w->path, &w->attr) < 0) w->attr.st_nlink = 0; else if (!w->attr.st_nlink) w->attr.st_nlink = 1; } ecb_noinline static void stat_timer_cb (EV_P_ ev_timer *w_, int revents) { ev_stat *w = (ev_stat *)(((char *)w_) - offsetof (ev_stat, timer)); ev_statdata prev = w->attr; ev_stat_stat (EV_A_ w); /* memcmp doesn't work on netbsd, they.... do stuff to their struct stat */ if ( prev.st_dev != w->attr.st_dev || prev.st_ino != w->attr.st_ino || prev.st_mode != w->attr.st_mode || prev.st_nlink != w->attr.st_nlink || prev.st_uid != w->attr.st_uid || prev.st_gid != w->attr.st_gid || prev.st_rdev != w->attr.st_rdev || prev.st_size != w->attr.st_size || prev.st_atime != w->attr.st_atime || prev.st_mtime != w->attr.st_mtime || prev.st_ctime != w->attr.st_ctime ) { /* we only update w->prev on actual differences */ /* in case we test more often than invoke the callback, */ /* to ensure that prev is always different to attr */ w->prev = prev; #if EV_USE_INOTIFY if (fs_fd >= 0) { infy_del (EV_A_ w); infy_add (EV_A_ w); ev_stat_stat (EV_A_ w); /* avoid race... */ } #endif ev_feed_event (EV_A_ w, EV_STAT); } } void ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; ev_stat_stat (EV_A_ w); if (w->interval < MIN_STAT_INTERVAL && w->interval) w->interval = MIN_STAT_INTERVAL; ev_timer_init (&w->timer, stat_timer_cb, 0., w->interval ? w->interval : DEF_STAT_INTERVAL); ev_set_priority (&w->timer, ev_priority (w)); #if EV_USE_INOTIFY infy_init (EV_A); if (fs_fd >= 0) infy_add (EV_A_ w); else #endif { ev_timer_again (EV_A_ &w->timer); ev_unref (EV_A); } ev_start (EV_A_ (W)w, 1); EV_FREQUENT_CHECK; } void ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; #if EV_USE_INOTIFY infy_del (EV_A_ w); #endif if (ev_is_active (&w->timer)) { ev_ref (EV_A); ev_timer_stop (EV_A_ &w->timer); } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_IDLE_ENABLE void ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; pri_adjust (EV_A_ (W)w); EV_FREQUENT_CHECK; { int active = ++idlecnt [ABSPRI (w)]; ++idleall; ev_start (EV_A_ (W)w, active); array_needsize (ev_idle *, idles [ABSPRI (w)], idlemax [ABSPRI (w)], active, array_needsize_noinit); idles [ABSPRI (w)][active - 1] = w; } EV_FREQUENT_CHECK; } void ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); idles [ABSPRI (w)][active - 1] = idles [ABSPRI (w)][--idlecnt [ABSPRI (w)]]; ev_active (idles [ABSPRI (w)][active - 1]) = active; ev_stop (EV_A_ (W)w); --idleall; } EV_FREQUENT_CHECK; } #endif #if EV_PREPARE_ENABLE void ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, ++preparecnt); array_needsize (ev_prepare *, prepares, preparemax, preparecnt, array_needsize_noinit); prepares [preparecnt - 1] = w; EV_FREQUENT_CHECK; } void ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; 
EV_FREQUENT_CHECK; { int active = ev_active (w); prepares [active - 1] = prepares [--preparecnt]; ev_active (prepares [active - 1]) = active; } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_CHECK_ENABLE void ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, ++checkcnt); array_needsize (ev_check *, checks, checkmax, checkcnt, array_needsize_noinit); checks [checkcnt - 1] = w; EV_FREQUENT_CHECK; } void ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); checks [active - 1] = checks [--checkcnt]; ev_active (checks [active - 1]) = active; } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_EMBED_ENABLE ecb_noinline void ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT { ev_run (w->other, EVRUN_NOWAIT); } static void embed_io_cb (EV_P_ ev_io *io, int revents) { ev_embed *w = (ev_embed *)(((char *)io) - offsetof (ev_embed, io)); if (ev_cb (w)) ev_feed_event (EV_A_ (W)w, EV_EMBED); else ev_run (w->other, EVRUN_NOWAIT); } static void embed_prepare_cb (EV_P_ ev_prepare *prepare, int revents) { ev_embed *w = (ev_embed *)(((char *)prepare) - offsetof (ev_embed, prepare)); { EV_P = w->other; while (fdchangecnt) { fd_reify (EV_A); ev_run (EV_A_ EVRUN_NOWAIT); } } } #if EV_FORK_ENABLE static void embed_fork_cb (EV_P_ ev_fork *fork_w, int revents) { ev_embed *w = (ev_embed *)(((char *)fork_w) - offsetof (ev_embed, fork)); ev_embed_stop (EV_A_ w); { EV_P = w->other; ev_loop_fork (EV_A); ev_run (EV_A_ EVRUN_NOWAIT); } ev_embed_start (EV_A_ w); } #endif #if 0 static void embed_idle_cb (EV_P_ ev_idle *idle, int revents) { ev_idle_stop (EV_A_ idle); } #endif void ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; { EV_P = w->other; assert (("libev: loop to be embedded is not embeddable", backend & ev_embeddable_backends ())); ev_io_init (&w->io, embed_io_cb, backend_fd, EV_READ); } EV_FREQUENT_CHECK; ev_set_priority (&w->io, ev_priority (w)); ev_io_start (EV_A_ &w->io); ev_prepare_init (&w->prepare, embed_prepare_cb); ev_set_priority (&w->prepare, EV_MINPRI); ev_prepare_start (EV_A_ &w->prepare); #if EV_FORK_ENABLE ev_fork_init (&w->fork, embed_fork_cb); ev_fork_start (EV_A_ &w->fork); #endif /*ev_idle_init (&w->idle, e,bed_idle_cb);*/ ev_start (EV_A_ (W)w, 1); EV_FREQUENT_CHECK; } void ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_io_stop (EV_A_ &w->io); ev_prepare_stop (EV_A_ &w->prepare); #if EV_FORK_ENABLE ev_fork_stop (EV_A_ &w->fork); #endif ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_FORK_ENABLE void ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, ++forkcnt); array_needsize (ev_fork *, forks, forkmax, forkcnt, array_needsize_noinit); forks [forkcnt - 1] = w; EV_FREQUENT_CHECK; } void ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); forks [active - 1] = forks [--forkcnt]; ev_active (forks [active - 1]) = active; } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_CLEANUP_ENABLE void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; EV_FREQUENT_CHECK; 
ev_start (EV_A_ (W)w, ++cleanupcnt); array_needsize (ev_cleanup *, cleanups, cleanupmax, cleanupcnt, array_needsize_noinit); cleanups [cleanupcnt - 1] = w; /* cleanup watchers should never keep a refcount on the loop */ ev_unref (EV_A); EV_FREQUENT_CHECK; } void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; ev_ref (EV_A); { int active = ev_active (w); cleanups [active - 1] = cleanups [--cleanupcnt]; ev_active (cleanups [active - 1]) = active; } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } #endif #if EV_ASYNC_ENABLE void ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT { if (ecb_expect_false (ev_is_active (w))) return; w->sent = 0; evpipe_init (EV_A); EV_FREQUENT_CHECK; ev_start (EV_A_ (W)w, ++asynccnt); array_needsize (ev_async *, asyncs, asyncmax, asynccnt, array_needsize_noinit); asyncs [asynccnt - 1] = w; EV_FREQUENT_CHECK; } void ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT { clear_pending (EV_A_ (W)w); if (ecb_expect_false (!ev_is_active (w))) return; EV_FREQUENT_CHECK; { int active = ev_active (w); asyncs [active - 1] = asyncs [--asynccnt]; ev_active (asyncs [active - 1]) = active; } ev_stop (EV_A_ (W)w); EV_FREQUENT_CHECK; } void ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT { w->sent = 1; evpipe_write (EV_A_ &async_pending); } #endif /*****************************************************************************/ struct ev_once { ev_io io; ev_timer to; void (*cb)(int revents, void *arg); void *arg; }; static void once_cb (EV_P_ struct ev_once *once, int revents) { void (*cb)(int revents, void *arg) = once->cb; void *arg = once->arg; ev_io_stop (EV_A_ &once->io); ev_timer_stop (EV_A_ &once->to); ev_free (once); cb (revents, arg); } static void once_cb_io (EV_P_ ev_io *w, int revents) { struct ev_once *once = (struct ev_once *)(((char *)w) - offsetof (struct ev_once, io)); once_cb (EV_A_ once, revents | ev_clear_pending (EV_A_ &once->to)); } static void once_cb_to (EV_P_ ev_timer *w, int revents) { struct ev_once *once = (struct ev_once *)(((char *)w) - offsetof (struct ev_once, to)); once_cb (EV_A_ once, revents | ev_clear_pending (EV_A_ &once->io)); } void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT { struct ev_once *once = (struct ev_once *)ev_malloc (sizeof (struct ev_once)); once->cb = cb; once->arg = arg; ev_init (&once->io, once_cb_io); if (fd >= 0) { ev_io_set (&once->io, fd, events); ev_io_start (EV_A_ &once->io); } ev_init (&once->to, once_cb_to); if (timeout >= 0.) 
{ ev_timer_set (&once->to, timeout, 0.); ev_timer_start (EV_A_ &once->to); } } /*****************************************************************************/ #if EV_WALK_ENABLE ecb_cold void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT { int i, j; ev_watcher_list *wl, *wn; if (types & (EV_IO | EV_EMBED)) for (i = 0; i < anfdmax; ++i) for (wl = anfds [i].head; wl; ) { wn = wl->next; #if EV_EMBED_ENABLE if (ev_cb ((ev_io *)wl) == embed_io_cb) { if (types & EV_EMBED) cb (EV_A_ EV_EMBED, ((char *)wl) - offsetof (struct ev_embed, io)); } else #endif #if EV_USE_INOTIFY if (ev_cb ((ev_io *)wl) == infy_cb) ; else #endif if ((ev_io *)wl != &pipe_w) if (types & EV_IO) cb (EV_A_ EV_IO, wl); wl = wn; } if (types & (EV_TIMER | EV_STAT)) for (i = timercnt + HEAP0; i-- > HEAP0; ) #if EV_STAT_ENABLE /*TODO: timer is not always active*/ if (ev_cb ((ev_timer *)ANHE_w (timers [i])) == stat_timer_cb) { if (types & EV_STAT) cb (EV_A_ EV_STAT, ((char *)ANHE_w (timers [i])) - offsetof (struct ev_stat, timer)); } else #endif if (types & EV_TIMER) cb (EV_A_ EV_TIMER, ANHE_w (timers [i])); #if EV_PERIODIC_ENABLE if (types & EV_PERIODIC) for (i = periodiccnt + HEAP0; i-- > HEAP0; ) cb (EV_A_ EV_PERIODIC, ANHE_w (periodics [i])); #endif #if EV_IDLE_ENABLE if (types & EV_IDLE) for (j = NUMPRI; j--; ) for (i = idlecnt [j]; i--; ) cb (EV_A_ EV_IDLE, idles [j][i]); #endif #if EV_FORK_ENABLE if (types & EV_FORK) for (i = forkcnt; i--; ) if (ev_cb (forks [i]) != embed_fork_cb) cb (EV_A_ EV_FORK, forks [i]); #endif #if EV_ASYNC_ENABLE if (types & EV_ASYNC) for (i = asynccnt; i--; ) cb (EV_A_ EV_ASYNC, asyncs [i]); #endif #if EV_PREPARE_ENABLE if (types & EV_PREPARE) for (i = preparecnt; i--; ) # if EV_EMBED_ENABLE if (ev_cb (prepares [i]) != embed_prepare_cb) # endif cb (EV_A_ EV_PREPARE, prepares [i]); #endif #if EV_CHECK_ENABLE if (types & EV_CHECK) for (i = checkcnt; i--; ) cb (EV_A_ EV_CHECK, checks [i]); #endif #if EV_SIGNAL_ENABLE if (types & EV_SIGNAL) for (i = 0; i < EV_NSIG - 1; ++i) for (wl = signals [i].head; wl; ) { wn = wl->next; cb (EV_A_ EV_SIGNAL, wl); wl = wn; } #endif #if EV_CHILD_ENABLE if (types & EV_CHILD) for (i = (EV_PID_HASHSIZE); i--; ) for (wl = childs [i]; wl; ) { wn = wl->next; cb (EV_A_ EV_CHILD, wl); wl = wn; } #endif /* EV_STAT 0x00001000 /* stat data changed */ /* EV_EMBED 0x00010000 /* embedded event loop needs sweep */ } #endif #if EV_MULTIPLICITY #include "ev_wrap.h" #endif gevent-24.11.1/deps/libev/ev.h000066400000000000000000000730611471441230600160000ustar00rootroot00000000000000/* * libev native API header * * Copyright (c) 2007-2020 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #ifndef EV_H_ #define EV_H_ #ifdef __cplusplus # define EV_CPP(x) x # if __cplusplus >= 201103L # define EV_NOEXCEPT noexcept # else # define EV_NOEXCEPT # endif #else # define EV_CPP(x) # define EV_NOEXCEPT #endif #define EV_THROW EV_NOEXCEPT /* pre-4.25, do not use in new code */ EV_CPP(extern "C" {) /*****************************************************************************/ /* pre-4.0 compatibility */ #ifndef EV_COMPAT3 # define EV_COMPAT3 1 #endif #ifndef EV_FEATURES # if defined __OPTIMIZE_SIZE__ # define EV_FEATURES 0x7c # else # define EV_FEATURES 0x7f # endif #endif #define EV_FEATURE_CODE ((EV_FEATURES) & 1) #define EV_FEATURE_DATA ((EV_FEATURES) & 2) #define EV_FEATURE_CONFIG ((EV_FEATURES) & 4) #define EV_FEATURE_API ((EV_FEATURES) & 8) #define EV_FEATURE_WATCHERS ((EV_FEATURES) & 16) #define EV_FEATURE_BACKENDS ((EV_FEATURES) & 32) #define EV_FEATURE_OS ((EV_FEATURES) & 64) /* these priorities are inclusive, higher priorities will be invoked earlier */ #ifndef EV_MINPRI # define EV_MINPRI (EV_FEATURE_CONFIG ? -2 : 0) #endif #ifndef EV_MAXPRI # define EV_MAXPRI (EV_FEATURE_CONFIG ? 
+2 : 0) #endif #ifndef EV_MULTIPLICITY # define EV_MULTIPLICITY EV_FEATURE_CONFIG #endif #ifndef EV_PERIODIC_ENABLE # define EV_PERIODIC_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_STAT_ENABLE # define EV_STAT_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_PREPARE_ENABLE # define EV_PREPARE_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_CHECK_ENABLE # define EV_CHECK_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_IDLE_ENABLE # define EV_IDLE_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_FORK_ENABLE # define EV_FORK_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_CLEANUP_ENABLE # define EV_CLEANUP_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_SIGNAL_ENABLE # define EV_SIGNAL_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_CHILD_ENABLE # ifdef _WIN32 # define EV_CHILD_ENABLE 0 # else # define EV_CHILD_ENABLE EV_FEATURE_WATCHERS #endif #endif #ifndef EV_ASYNC_ENABLE # define EV_ASYNC_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_EMBED_ENABLE # define EV_EMBED_ENABLE EV_FEATURE_WATCHERS #endif #ifndef EV_WALK_ENABLE # define EV_WALK_ENABLE 0 /* not yet */ #endif /*****************************************************************************/ #if EV_CHILD_ENABLE && !EV_SIGNAL_ENABLE # undef EV_SIGNAL_ENABLE # define EV_SIGNAL_ENABLE 1 #endif /*****************************************************************************/ #ifndef EV_TSTAMP_T # define EV_TSTAMP_T double #endif typedef EV_TSTAMP_T ev_tstamp; #include <string.h> /* for memmove */ #ifndef EV_ATOMIC_T # include <signal.h> # define EV_ATOMIC_T sig_atomic_t volatile #endif #if EV_STAT_ENABLE # ifdef _WIN32 # include <time.h> # include <sys/types.h> # endif # include <sys/stat.h> #endif /* support multiple event loops? */ #if EV_MULTIPLICITY struct ev_loop; # define EV_P struct ev_loop *loop /* a loop as sole parameter in a declaration */ # define EV_P_ EV_P, /* a loop as first of multiple parameters */ # define EV_A loop /* a loop as sole argument to a function call */ # define EV_A_ EV_A, /* a loop as first of multiple arguments */ # define EV_DEFAULT_UC ev_default_loop_uc_ () /* the default loop, if initialised, as sole arg */ # define EV_DEFAULT_UC_ EV_DEFAULT_UC, /* the default loop as first of multiple arguments */ # define EV_DEFAULT ev_default_loop (0) /* the default loop as sole arg */ # define EV_DEFAULT_ EV_DEFAULT, /* the default loop as first of multiple arguments */ #else # define EV_P void # define EV_P_ # define EV_A # define EV_A_ # define EV_DEFAULT # define EV_DEFAULT_ # define EV_DEFAULT_UC # define EV_DEFAULT_UC_ # undef EV_EMBED_ENABLE #endif /* EV_INLINE is used for functions in header files */ #if __STDC_VERSION__ >= 199901L || __GNUC__ >= 3 # define EV_INLINE static inline #else # define EV_INLINE static #endif #ifdef EV_API_STATIC # define EV_API_DECL static #else # define EV_API_DECL extern #endif /* EV_PROTOTYPES can be used to switch of prototype declarations */ #ifndef EV_PROTOTYPES # define EV_PROTOTYPES 1 #endif /*****************************************************************************/ #define EV_VERSION_MAJOR 4 #define EV_VERSION_MINOR 33 /* eventmask, revents, events... 
*/ enum { EV_UNDEF = (int)0xFFFFFFFF, /* guaranteed to be invalid */ EV_NONE = 0x00, /* no events */ EV_READ = 0x01, /* ev_io detected read will not block */ EV_WRITE = 0x02, /* ev_io detected write will not block */ EV__IOFDSET = 0x80, /* internal use only */ EV_IO = EV_READ, /* alias for type-detection */ EV_TIMER = 0x00000100, /* timer timed out */ #if EV_COMPAT3 EV_TIMEOUT = EV_TIMER, /* pre 4.0 API compatibility */ #endif EV_PERIODIC = 0x00000200, /* periodic timer timed out */ EV_SIGNAL = 0x00000400, /* signal was received */ EV_CHILD = 0x00000800, /* child/pid had status change */ EV_STAT = 0x00001000, /* stat data changed */ EV_IDLE = 0x00002000, /* event loop is idling */ EV_PREPARE = 0x00004000, /* event loop about to poll */ EV_CHECK = 0x00008000, /* event loop finished poll */ EV_EMBED = 0x00010000, /* embedded event loop needs sweep */ EV_FORK = 0x00020000, /* event loop resumed in child */ EV_CLEANUP = 0x00040000, /* event loop resumed in child */ EV_ASYNC = 0x00080000, /* async intra-loop signal */ EV_CUSTOM = 0x01000000, /* for use by user code */ EV_ERROR = (int)0x80000000 /* sent when an error occurs */ }; /* can be used to add custom fields to all watchers, while losing binary compatibility */ #ifndef EV_COMMON # define EV_COMMON void *data; #endif #ifndef EV_CB_DECLARE # define EV_CB_DECLARE(type) void (*cb)(EV_P_ struct type *w, int revents); #endif #ifndef EV_CB_INVOKE # define EV_CB_INVOKE(watcher,revents) (watcher)->cb (EV_A_ (watcher), (revents)) #endif /* not official, do not use */ #define EV_CB(type,name) void name (EV_P_ struct ev_ ## type *w, int revents) /* * struct member types: * private: you may look at them, but not change them, * and they might not mean anything to you. * ro: can be read anytime, but only changed when the watcher isn't active. * rw: can be read and modified anytime, even when the watcher is active. * * some internal details that might be helpful for debugging: * * active is either 0, which means the watcher is not active, * or the array index of the watcher (periodics, timers) * or the array index + 1 (most other watchers) * or simply 1 for watchers that aren't in some array. * pending is either 0, in which case the watcher isn't, * or the array index + 1 in the pendings array. 
*/ #if EV_MINPRI == EV_MAXPRI # define EV_DECL_PRIORITY #elif !defined (EV_DECL_PRIORITY) # define EV_DECL_PRIORITY int priority; #endif /* shared by all watchers */ #define EV_WATCHER(type) \ int active; /* private */ \ int pending; /* private */ \ EV_DECL_PRIORITY /* private */ \ EV_COMMON /* rw */ \ EV_CB_DECLARE (type) /* private */ #define EV_WATCHER_LIST(type) \ EV_WATCHER (type) \ struct ev_watcher_list *next; /* private */ #define EV_WATCHER_TIME(type) \ EV_WATCHER (type) \ ev_tstamp at; /* private */ /* base class, nothing to see here unless you subclass */ typedef struct ev_watcher { EV_WATCHER (ev_watcher) } ev_watcher; /* base class, nothing to see here unless you subclass */ typedef struct ev_watcher_list { EV_WATCHER_LIST (ev_watcher_list) } ev_watcher_list; /* base class, nothing to see here unless you subclass */ typedef struct ev_watcher_time { EV_WATCHER_TIME (ev_watcher_time) } ev_watcher_time; /* invoked when fd is either EV_READable or EV_WRITEable */ /* revent EV_READ, EV_WRITE */ typedef struct ev_io { EV_WATCHER_LIST (ev_io) int fd; /* ro */ int events; /* ro */ } ev_io; /* invoked after a specific time, repeatable (based on monotonic clock) */ /* revent EV_TIMEOUT */ typedef struct ev_timer { EV_WATCHER_TIME (ev_timer) ev_tstamp repeat; /* rw */ } ev_timer; /* invoked at some specific time, possibly repeating at regular intervals (based on UTC) */ /* revent EV_PERIODIC */ typedef struct ev_periodic { EV_WATCHER_TIME (ev_periodic) ev_tstamp offset; /* rw */ ev_tstamp interval; /* rw */ ev_tstamp (*reschedule_cb)(struct ev_periodic *w, ev_tstamp now) EV_NOEXCEPT; /* rw */ } ev_periodic; /* invoked when the given signal has been received */ /* revent EV_SIGNAL */ typedef struct ev_signal { EV_WATCHER_LIST (ev_signal) int signum; /* ro */ } ev_signal; /* invoked when sigchld is received and waitpid indicates the given pid */ /* revent EV_CHILD */ /* does not support priorities */ typedef struct ev_child { EV_WATCHER_LIST (ev_child) int flags; /* private */ int pid; /* ro */ int rpid; /* rw, holds the received pid */ int rstatus; /* rw, holds the exit status, use the macros from sys/wait.h */ } ev_child; #if EV_STAT_ENABLE /* st_nlink = 0 means missing file or other error */ # ifdef _WIN32 typedef struct _stati64 ev_statdata; # else typedef struct stat ev_statdata; # endif /* invoked each time the stat data changes for a given path */ /* revent EV_STAT */ typedef struct ev_stat { EV_WATCHER_LIST (ev_stat) ev_timer timer; /* private */ ev_tstamp interval; /* ro */ const char *path; /* ro */ ev_statdata prev; /* ro */ ev_statdata attr; /* ro */ int wd; /* wd for inotify, fd for kqueue */ } ev_stat; #endif /* invoked when the nothing else needs to be done, keeps the process from blocking */ /* revent EV_IDLE */ typedef struct ev_idle { EV_WATCHER (ev_idle) } ev_idle; /* invoked for each run of the mainloop, just before the blocking call */ /* you can still change events in any way you like */ /* revent EV_PREPARE */ typedef struct ev_prepare { EV_WATCHER (ev_prepare) } ev_prepare; /* invoked for each run of the mainloop, just after the blocking call */ /* revent EV_CHECK */ typedef struct ev_check { EV_WATCHER (ev_check) } ev_check; /* the callback gets invoked before check in the child process when a fork was detected */ /* revent EV_FORK */ typedef struct ev_fork { EV_WATCHER (ev_fork) } ev_fork; /* is invoked just before the loop gets destroyed */ /* revent EV_CLEANUP */ typedef struct ev_cleanup { EV_WATCHER (ev_cleanup) } ev_cleanup; #if EV_EMBED_ENABLE /* used to 
embed an event loop inside another */ /* the callback gets invoked when the event loop has handled events, and can be 0 */ typedef struct ev_embed { EV_WATCHER (ev_embed) struct ev_loop *other; /* ro */ #undef EV_IO_ENABLE #define EV_IO_ENABLE 1 ev_io io; /* private */ #undef EV_PREPARE_ENABLE #define EV_PREPARE_ENABLE 1 ev_prepare prepare; /* private */ ev_check check; /* unused */ ev_timer timer; /* unused */ ev_periodic periodic; /* unused */ ev_idle idle; /* unused */ ev_fork fork; /* private */ ev_cleanup cleanup; /* unused */ } ev_embed; #endif #if EV_ASYNC_ENABLE /* invoked when somebody calls ev_async_send on the watcher */ /* revent EV_ASYNC */ typedef struct ev_async { EV_WATCHER (ev_async) EV_ATOMIC_T sent; /* private */ } ev_async; # define ev_async_pending(w) (+(w)->sent) #endif /* the presence of this union forces similar struct layout */ union ev_any_watcher { struct ev_watcher w; struct ev_watcher_list wl; struct ev_io io; struct ev_timer timer; struct ev_periodic periodic; struct ev_signal signal; struct ev_child child; #if EV_STAT_ENABLE struct ev_stat stat; #endif #if EV_IDLE_ENABLE struct ev_idle idle; #endif struct ev_prepare prepare; struct ev_check check; #if EV_FORK_ENABLE struct ev_fork fork; #endif #if EV_CLEANUP_ENABLE struct ev_cleanup cleanup; #endif #if EV_EMBED_ENABLE struct ev_embed embed; #endif #if EV_ASYNC_ENABLE struct ev_async async; #endif }; /* flag bits for ev_default_loop and ev_loop_new */ enum { /* the default */ EVFLAG_AUTO = 0x00000000U, /* not quite a mask */ /* flag bits */ EVFLAG_NOENV = 0x01000000U, /* do NOT consult environment */ EVFLAG_FORKCHECK = 0x02000000U, /* check for a fork in each iteration */ /* debugging/feature disable */ EVFLAG_NOINOTIFY = 0x00100000U, /* do not attempt to use inotify */ #if EV_COMPAT3 EVFLAG_NOSIGFD = 0, /* compatibility to pre-3.9 */ #endif EVFLAG_SIGNALFD = 0x00200000U, /* attempt to use signalfd */ EVFLAG_NOSIGMASK = 0x00400000U, /* avoid modifying the signal mask */ EVFLAG_NOTIMERFD = 0x00800000U /* avoid creating a timerfd */ }; /* method bits to be ored together */ enum { EVBACKEND_SELECT = 0x00000001U, /* available just about anywhere */ EVBACKEND_POLL = 0x00000002U, /* !win, !aix, broken on osx */ EVBACKEND_EPOLL = 0x00000004U, /* linux */ EVBACKEND_KQUEUE = 0x00000008U, /* bsd, broken on osx */ EVBACKEND_DEVPOLL = 0x00000010U, /* solaris 8 */ /* NYI */ EVBACKEND_PORT = 0x00000020U, /* solaris 10 */ EVBACKEND_LINUXAIO = 0x00000040U, /* linux AIO, 4.19+ */ EVBACKEND_IOURING = 0x00000080U, /* linux io_uring, 5.1+ */ EVBACKEND_ALL = 0x000000FFU, /* all known backends */ EVBACKEND_MASK = 0x0000FFFFU /* all future backends */ }; #if EV_PROTOTYPES EV_API_DECL int ev_version_major (void) EV_NOEXCEPT; EV_API_DECL int ev_version_minor (void) EV_NOEXCEPT; EV_API_DECL unsigned int ev_supported_backends (void) EV_NOEXCEPT; EV_API_DECL unsigned int ev_recommended_backends (void) EV_NOEXCEPT; EV_API_DECL unsigned int ev_embeddable_backends (void) EV_NOEXCEPT; EV_API_DECL ev_tstamp ev_time (void) EV_NOEXCEPT; EV_API_DECL void ev_sleep (ev_tstamp delay) EV_NOEXCEPT; /* sleep for a while */ /* Sets the allocation function to use, works like realloc. * It is used to allocate and free memory. * If it returns zero when memory needs to be allocated, the library might abort * or take some potentially destructive action. * The default is your system realloc function. 
*/ EV_API_DECL void ev_set_allocator (void *(*cb)(void *ptr, long size) EV_NOEXCEPT) EV_NOEXCEPT; /* set the callback function to call on a * retryable syscall error * (such as failed select, poll, epoll_wait) */ EV_API_DECL void ev_set_syserr_cb (void (*cb)(const char *msg) EV_NOEXCEPT) EV_NOEXCEPT; #if EV_MULTIPLICITY /* the default loop is the only one that handles signals and child watchers */ /* you can call this as often as you like */ EV_API_DECL struct ev_loop *ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT; #ifdef EV_API_STATIC EV_API_DECL struct ev_loop *ev_default_loop_ptr; #endif EV_INLINE struct ev_loop * ev_default_loop_uc_ (void) EV_NOEXCEPT { extern struct ev_loop *ev_default_loop_ptr; return ev_default_loop_ptr; } EV_INLINE int ev_is_default_loop (EV_P) EV_NOEXCEPT { return EV_A == EV_DEFAULT_UC; } /* create and destroy alternative loops that don't handle signals */ EV_API_DECL struct ev_loop *ev_loop_new (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT; EV_API_DECL ev_tstamp ev_now (EV_P) EV_NOEXCEPT; /* time w.r.t. timers and the eventloop, updated after each poll */ #else EV_API_DECL int ev_default_loop (unsigned int flags EV_CPP (= 0)) EV_NOEXCEPT; /* returns true when successful */ EV_API_DECL ev_tstamp ev_rt_now; EV_INLINE ev_tstamp ev_now (void) EV_NOEXCEPT { return ev_rt_now; } /* looks weird, but ev_is_default_loop (EV_A) still works if this exists */ EV_INLINE int ev_is_default_loop (void) EV_NOEXCEPT { return 1; } #endif /* multiplicity */ /* destroy event loops, also works for the default loop */ EV_API_DECL void ev_loop_destroy (EV_P); /* this needs to be called after fork, to duplicate the loop */ /* when you want to re-use it in the child */ /* you can call it in either the parent or the child */ /* you can actually call it at any time, anywhere :) */ EV_API_DECL void ev_loop_fork (EV_P) EV_NOEXCEPT; EV_API_DECL unsigned int ev_backend (EV_P) EV_NOEXCEPT; /* backend in use by loop */ EV_API_DECL void ev_now_update (EV_P) EV_NOEXCEPT; /* update event loop time */ #if EV_WALK_ENABLE /* walk (almost) all watchers in the loop of a given type, invoking the */ /* callback on every such watcher. The callback might stop the watcher, */ /* but do nothing else with the loop */ EV_API_DECL void ev_walk (EV_P_ int types, void (*cb)(EV_P_ int type, void *w)) EV_NOEXCEPT; #endif #endif /* prototypes */ /* ev_run flags values */ enum { EVRUN_NOWAIT = 1, /* do not block/wait */ EVRUN_ONCE = 2 /* block *once* only */ }; /* ev_break how values */ enum { EVBREAK_CANCEL = 0, /* undo unloop */ EVBREAK_ONE = 1, /* unloop once */ EVBREAK_ALL = 2 /* unloop all loops */ }; #if EV_PROTOTYPES EV_API_DECL int ev_run (EV_P_ int flags EV_CPP (= 0)); EV_API_DECL void ev_break (EV_P_ int how EV_CPP (= EVBREAK_ONE)) EV_NOEXCEPT; /* break out of the loop */ /* * ref/unref can be used to add or remove a refcount on the mainloop. every watcher * keeps one reference. if you have a long-running watcher you never unregister that * should not keep ev_loop from running, unref() after starting, and ref() before stopping. 
*/ EV_API_DECL void ev_ref (EV_P) EV_NOEXCEPT; EV_API_DECL void ev_unref (EV_P) EV_NOEXCEPT; /* * convenience function, wait for a single event, without registering an event watcher * if timeout is < 0, do wait indefinitely */ EV_API_DECL void ev_once (EV_P_ int fd, int events, ev_tstamp timeout, void (*cb)(int revents, void *arg), void *arg) EV_NOEXCEPT; EV_API_DECL void ev_invoke_pending (EV_P); /* invoke all pending watchers */ # if EV_FEATURE_API EV_API_DECL unsigned int ev_iteration (EV_P) EV_NOEXCEPT; /* number of loop iterations */ EV_API_DECL unsigned int ev_depth (EV_P) EV_NOEXCEPT; /* #ev_loop enters - #ev_loop leaves */ EV_API_DECL void ev_verify (EV_P) EV_NOEXCEPT; /* abort if loop data corrupted */ EV_API_DECL void ev_set_io_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */ EV_API_DECL void ev_set_timeout_collect_interval (EV_P_ ev_tstamp interval) EV_NOEXCEPT; /* sleep at least this time, default 0 */ /* advanced stuff for threading etc. support, see docs */ EV_API_DECL void ev_set_userdata (EV_P_ void *data) EV_NOEXCEPT; EV_API_DECL void *ev_userdata (EV_P) EV_NOEXCEPT; typedef void (*ev_loop_callback)(EV_P); EV_API_DECL void ev_set_invoke_pending_cb (EV_P_ ev_loop_callback invoke_pending_cb) EV_NOEXCEPT; /* C++ doesn't allow the use of the ev_loop_callback typedef here, so we need to spell it out */ EV_API_DECL void ev_set_loop_release_cb (EV_P_ void (*release)(EV_P) EV_NOEXCEPT, void (*acquire)(EV_P) EV_NOEXCEPT) EV_NOEXCEPT; EV_API_DECL unsigned int ev_pending_count (EV_P) EV_NOEXCEPT; /* number of pending events, if any */ /* * stop/start the timer handling. */ EV_API_DECL void ev_suspend (EV_P) EV_NOEXCEPT; EV_API_DECL void ev_resume (EV_P) EV_NOEXCEPT; #endif #endif /* these may evaluate ev multiple times, and the other arguments at most once */ /* either use ev_init + ev_TYPE_set, or the ev_TYPE_init macro, below, to first initialise a watcher */ #define ev_init(ev,cb_) do { \ ((ev_watcher *)(void *)(ev))->active = \ ((ev_watcher *)(void *)(ev))->pending = 0; \ ev_set_priority ((ev), 0); \ ev_set_cb ((ev), cb_); \ } while (0) #define ev_io_modify(ev,events_) do { (ev)->events = (ev)->events & EV__IOFDSET | (events_); } while (0) #define ev_io_set(ev,fd_,events_) do { (ev)->fd = (fd_); (ev)->events = (events_) | EV__IOFDSET; } while (0) #define ev_timer_set(ev,after_,repeat_) do { ((ev_watcher_time *)(ev))->at = (after_); (ev)->repeat = (repeat_); } while (0) #define ev_periodic_set(ev,ofs_,ival_,rcb_) do { (ev)->offset = (ofs_); (ev)->interval = (ival_); (ev)->reschedule_cb = (rcb_); } while (0) #define ev_signal_set(ev,signum_) do { (ev)->signum = (signum_); } while (0) #define ev_child_set(ev,pid_,trace_) do { (ev)->pid = (pid_); (ev)->flags = !!(trace_); } while (0) #define ev_stat_set(ev,path_,interval_) do { (ev)->path = (path_); (ev)->interval = (interval_); (ev)->wd = -2; } while (0) #define ev_idle_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_prepare_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_check_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_embed_set(ev,other_) do { (ev)->other = (other_); } while (0) #define ev_fork_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_cleanup_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_async_set(ev) /* nop, yes, this is a serious in-joke */ #define ev_io_init(ev,cb,fd,events) do { ev_init ((ev), (cb)); ev_io_set ((ev),(fd),(events)); } while (0) #define ev_timer_init(ev,cb,after,repeat) do { ev_init 
((ev), (cb)); ev_timer_set ((ev),(after),(repeat)); } while (0) #define ev_periodic_init(ev,cb,ofs,ival,rcb) do { ev_init ((ev), (cb)); ev_periodic_set ((ev),(ofs),(ival),(rcb)); } while (0) #define ev_signal_init(ev,cb,signum) do { ev_init ((ev), (cb)); ev_signal_set ((ev), (signum)); } while (0) #define ev_child_init(ev,cb,pid,trace) do { ev_init ((ev), (cb)); ev_child_set ((ev),(pid),(trace)); } while (0) #define ev_stat_init(ev,cb,path,interval) do { ev_init ((ev), (cb)); ev_stat_set ((ev),(path),(interval)); } while (0) #define ev_idle_init(ev,cb) do { ev_init ((ev), (cb)); ev_idle_set ((ev)); } while (0) #define ev_prepare_init(ev,cb) do { ev_init ((ev), (cb)); ev_prepare_set ((ev)); } while (0) #define ev_check_init(ev,cb) do { ev_init ((ev), (cb)); ev_check_set ((ev)); } while (0) #define ev_embed_init(ev,cb,other) do { ev_init ((ev), (cb)); ev_embed_set ((ev),(other)); } while (0) #define ev_fork_init(ev,cb) do { ev_init ((ev), (cb)); ev_fork_set ((ev)); } while (0) #define ev_cleanup_init(ev,cb) do { ev_init ((ev), (cb)); ev_cleanup_set ((ev)); } while (0) #define ev_async_init(ev,cb) do { ev_init ((ev), (cb)); ev_async_set ((ev)); } while (0) #define ev_is_pending(ev) (0 + ((ev_watcher *)(void *)(ev))->pending) /* ro, true when watcher is waiting for callback invocation */ #define ev_is_active(ev) (0 + ((ev_watcher *)(void *)(ev))->active) /* ro, true when the watcher has been started */ #define ev_cb_(ev) (ev)->cb /* rw */ #define ev_cb(ev) (memmove (&ev_cb_ (ev), &((ev_watcher *)(ev))->cb, sizeof (ev_cb_ (ev))), (ev)->cb) #if EV_MINPRI == EV_MAXPRI # define ev_priority(ev) ((ev), EV_MINPRI) # define ev_set_priority(ev,pri) ((ev), (pri)) #else # define ev_priority(ev) (+(((ev_watcher *)(void *)(ev))->priority)) # define ev_set_priority(ev,pri) ( (ev_watcher *)(void *)(ev))->priority = (pri) #endif #define ev_periodic_at(ev) (+((ev_watcher_time *)(ev))->at) #ifndef ev_set_cb /* memmove is used here to avoid strict aliasing violations, and hopefully is optimized out by any reasonable compiler */ # define ev_set_cb(ev,cb_) (ev_cb_ (ev) = (cb_), memmove (&((ev_watcher *)(ev))->cb, &ev_cb_ (ev), sizeof (ev_cb_ (ev)))) #endif /* stopping (enabling, adding) a watcher does nothing if it is already running */ /* stopping (disabling, deleting) a watcher does nothing unless it's already running */ #if EV_PROTOTYPES /* feeds an event into a watcher as if the event actually occurred */ /* accepts any ev_watcher type */ EV_API_DECL void ev_feed_event (EV_P_ void *w, int revents) EV_NOEXCEPT; EV_API_DECL void ev_feed_fd_event (EV_P_ int fd, int revents) EV_NOEXCEPT; #if EV_SIGNAL_ENABLE EV_API_DECL void ev_feed_signal (int signum) EV_NOEXCEPT; EV_API_DECL void ev_feed_signal_event (EV_P_ int signum) EV_NOEXCEPT; #endif EV_API_DECL void ev_invoke (EV_P_ void *w, int revents); EV_API_DECL int ev_clear_pending (EV_P_ void *w) EV_NOEXCEPT; EV_API_DECL void ev_io_start (EV_P_ ev_io *w) EV_NOEXCEPT; EV_API_DECL void ev_io_stop (EV_P_ ev_io *w) EV_NOEXCEPT; EV_API_DECL void ev_timer_start (EV_P_ ev_timer *w) EV_NOEXCEPT; EV_API_DECL void ev_timer_stop (EV_P_ ev_timer *w) EV_NOEXCEPT; /* stops if active and no repeat, restarts if active and repeating, starts if inactive and repeating */ EV_API_DECL void ev_timer_again (EV_P_ ev_timer *w) EV_NOEXCEPT; /* return remaining time */ EV_API_DECL ev_tstamp ev_timer_remaining (EV_P_ ev_timer *w) EV_NOEXCEPT; #if EV_PERIODIC_ENABLE EV_API_DECL void ev_periodic_start (EV_P_ ev_periodic *w) EV_NOEXCEPT; EV_API_DECL void ev_periodic_stop (EV_P_ ev_periodic *w) 
EV_NOEXCEPT; EV_API_DECL void ev_periodic_again (EV_P_ ev_periodic *w) EV_NOEXCEPT; #endif /* only supported in the default loop */ #if EV_SIGNAL_ENABLE EV_API_DECL void ev_signal_start (EV_P_ ev_signal *w) EV_NOEXCEPT; EV_API_DECL void ev_signal_stop (EV_P_ ev_signal *w) EV_NOEXCEPT; #endif /* only supported in the default loop */ # if EV_CHILD_ENABLE EV_API_DECL void ev_child_start (EV_P_ ev_child *w) EV_NOEXCEPT; EV_API_DECL void ev_child_stop (EV_P_ ev_child *w) EV_NOEXCEPT; # endif # if EV_STAT_ENABLE EV_API_DECL void ev_stat_start (EV_P_ ev_stat *w) EV_NOEXCEPT; EV_API_DECL void ev_stat_stop (EV_P_ ev_stat *w) EV_NOEXCEPT; EV_API_DECL void ev_stat_stat (EV_P_ ev_stat *w) EV_NOEXCEPT; # endif # if EV_IDLE_ENABLE EV_API_DECL void ev_idle_start (EV_P_ ev_idle *w) EV_NOEXCEPT; EV_API_DECL void ev_idle_stop (EV_P_ ev_idle *w) EV_NOEXCEPT; # endif #if EV_PREPARE_ENABLE EV_API_DECL void ev_prepare_start (EV_P_ ev_prepare *w) EV_NOEXCEPT; EV_API_DECL void ev_prepare_stop (EV_P_ ev_prepare *w) EV_NOEXCEPT; #endif #if EV_CHECK_ENABLE EV_API_DECL void ev_check_start (EV_P_ ev_check *w) EV_NOEXCEPT; EV_API_DECL void ev_check_stop (EV_P_ ev_check *w) EV_NOEXCEPT; #endif # if EV_FORK_ENABLE EV_API_DECL void ev_fork_start (EV_P_ ev_fork *w) EV_NOEXCEPT; EV_API_DECL void ev_fork_stop (EV_P_ ev_fork *w) EV_NOEXCEPT; # endif # if EV_CLEANUP_ENABLE EV_API_DECL void ev_cleanup_start (EV_P_ ev_cleanup *w) EV_NOEXCEPT; EV_API_DECL void ev_cleanup_stop (EV_P_ ev_cleanup *w) EV_NOEXCEPT; # endif # if EV_EMBED_ENABLE /* only supported when loop to be embedded is in fact embeddable */ EV_API_DECL void ev_embed_start (EV_P_ ev_embed *w) EV_NOEXCEPT; EV_API_DECL void ev_embed_stop (EV_P_ ev_embed *w) EV_NOEXCEPT; EV_API_DECL void ev_embed_sweep (EV_P_ ev_embed *w) EV_NOEXCEPT; # endif # if EV_ASYNC_ENABLE EV_API_DECL void ev_async_start (EV_P_ ev_async *w) EV_NOEXCEPT; EV_API_DECL void ev_async_stop (EV_P_ ev_async *w) EV_NOEXCEPT; EV_API_DECL void ev_async_send (EV_P_ ev_async *w) EV_NOEXCEPT; # endif #if EV_COMPAT3 #define EVLOOP_NONBLOCK EVRUN_NOWAIT #define EVLOOP_ONESHOT EVRUN_ONCE #define EVUNLOOP_CANCEL EVBREAK_CANCEL #define EVUNLOOP_ONE EVBREAK_ONE #define EVUNLOOP_ALL EVBREAK_ALL #if EV_PROTOTYPES EV_INLINE void ev_loop (EV_P_ int flags) { ev_run (EV_A_ flags); } EV_INLINE void ev_unloop (EV_P_ int how ) { ev_break (EV_A_ how ); } EV_INLINE void ev_default_destroy (void) { ev_loop_destroy (EV_DEFAULT); } EV_INLINE void ev_default_fork (void) { ev_loop_fork (EV_DEFAULT); } #if EV_FEATURE_API EV_INLINE unsigned int ev_loop_count (EV_P) { return ev_iteration (EV_A); } EV_INLINE unsigned int ev_loop_depth (EV_P) { return ev_depth (EV_A); } EV_INLINE void ev_loop_verify (EV_P) { ev_verify (EV_A); } #endif #endif #else typedef struct ev_loop ev_loop; #endif #endif EV_CPP(}) #endif gevent-24.11.1/deps/libev/ev.pod000066400000000000000000006676031471441230600163460ustar00rootroot00000000000000=encoding utf-8 =head1 NAME libev - a high performance full-featured event loop written in C =head1 SYNOPSIS #include =head2 EXAMPLE PROGRAM // a single header file is required #include #include // for puts // every watcher type has its own typedef'd struct // with the name ev_TYPE ev_io stdin_watcher; ev_timer timeout_watcher; // all watcher callbacks have a similar signature // this callback is called when data is readable on stdin static void stdin_cb (EV_P_ ev_io *w, int revents) { puts ("stdin ready"); // for one-shot events, one must manually stop the watcher // with its corresponding stop function. 
ev_io_stop (EV_A_ w); // this causes all nested ev_run's to stop iterating ev_break (EV_A_ EVBREAK_ALL); } // another callback, this time for a time-out static void timeout_cb (EV_P_ ev_timer *w, int revents) { puts ("timeout"); // this causes the innermost ev_run to stop iterating ev_break (EV_A_ EVBREAK_ONE); } int main (void) { // use the default event loop unless you have special needs struct ev_loop *loop = EV_DEFAULT; // initialise an io watcher, then start it // this one will watch for stdin to become readable ev_io_init (&stdin_watcher, stdin_cb, /*STDIN_FILENO*/ 0, EV_READ); ev_io_start (loop, &stdin_watcher); // initialise a timer watcher, then start it // simple non-repeating 5.5 second timeout ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.); ev_timer_start (loop, &timeout_watcher); // now wait for events to arrive ev_run (loop, 0); // break was called, so exit return 0; } =head1 ABOUT THIS DOCUMENT This document documents the libev software package. The newest version of this document is also available as an html-formatted web page you might find easier to navigate when reading it for the first time: L. While this document tries to be as complete as possible in documenting libev, its usage and the rationale behind its design, it is not a tutorial on event-based programming, nor will it introduce event-based programming with libev. Familiarity with event based programming techniques in general is assumed throughout this document. =head1 WHAT TO READ WHEN IN A HURRY This manual tries to be very detailed, but unfortunately, this also makes it very long. If you just want to know the basics of libev, I suggest reading L, then the L above and look up the missing functions in L and the C and C sections in L. =head1 ABOUT LIBEV Libev is an event loop: you register interest in certain events (such as a file descriptor being readable or a timeout occurring), and it will manage these event sources and provide your program with events. To do this, it must take more or less complete control over your process (or thread) by executing the I handler, and will then communicate events via a callback mechanism. You register interest in certain events by registering so-called I, which are relatively small C structures you initialise with the details of the event, and then hand it over to libev by I the watcher. =head2 FEATURES Libev supports C (files, many character devices...). Epoll is truly the train wreck among event poll mechanisms, a frankenpoll, cobbled together in a hurry, no thought to design or interaction with others. Oh, the pain, will it ever stop... While stopping, setting and starting an I/O watcher in the same iteration will result in some caching, there is still a system call per such incident (because the same I could point to a different I now), so its best to avoid that. Also, C'ed file descriptors might not work very well if you register events for both file descriptors. Best performance from this backend is achieved by not unregistering all watchers for a file descriptor until it has been closed, if possible, i.e. keep at least one watcher active per fd at all times. Stopping and starting a watcher (without re-setting it) also usually doesn't cause extra overhead. A fork can both result in spurious notifications as well as in libev having to destroy and recreate the epoll object, which can take considerable time and thus should be avoided. 
All this means that, in practice, C can be as fast or faster than epoll for maybe up to a hundred file descriptors, depending on the usage. So sad. While nominally embeddable in other event loops, this feature is broken in a lot of kernel revisions, but probably(!) works in current versions. This backend maps C and C in the same way as C. =item C (value 64, Linux) Use the Linux-specific Linux AIO (I C<< aio(7) >> but C<< io_submit(2) >>) event interface available in post-4.18 kernels (but libev only tries to use it in 4.19+). This is another Linux train wreck of an event interface. If this backend works for you (as of this writing, it was very experimental), it is the best event interface available on Linux and might be well worth enabling it - if it isn't available in your kernel this will be detected and this backend will be skipped. This backend can batch oneshot requests and supports a user-space ring buffer to receive events. It also doesn't suffer from most of the design problems of epoll (such as not being able to remove event sources from the epoll set), and generally sounds too good to be true. Because, this being the Linux kernel, of course it suffers from a whole new set of limitations, forcing you to fall back to epoll, inheriting all its design issues. For one, it is not easily embeddable (but probably could be done using an event fd at some extra overhead). It also is subject to a system wide limit that can be configured in F. If no AIO requests are left, this backend will be skipped during initialisation, and will switch to epoll when the loop is active. Most problematic in practice, however, is that not all file descriptors work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds, files, F and many others are supported, but ttys do not work properly (a known bug that the kernel developers don't care about, see L), so this is not (yet?) a generic event polling interface. Overall, it seems the Linux developers just don't want it to have a generic event handling mechanism other than C which have a high overhead for the actual polling but can deliver many events at once. By setting a higher I you allow libev to spend more time collecting I/O events, so you can handle more events per iteration, at the cost of increasing latency. Timeouts (both C and C) will not be affected. Setting this to a non-null value will introduce an additional C call into most loop iterations. The sleep time ensures that libev will not poll for I/O events more often then once per this interval, on average (as long as the host time resolution is good enough). Likewise, by setting a higher I you allow libev to spend more time collecting timeouts, at the expense of increased latency/jitter/inexactness (the watcher callback will be called later). C watchers will not be affected. Setting this to a non-null value will not introduce any overhead in libev. Many (busy) programs can usually benefit by setting the I/O collect interval to a value near C<0.1> or so, which is often enough for interactive servers (of course not for games), likewise for timeouts. It usually doesn't make much sense to set it to a lower value than C<0.01>, as this approaches the timing granularity of most systems. Note that if you do transactions with the outside world and you can't increase the parallelity, then this setting will limit your transaction rate (if you need to poll once per transaction and the I/O collect interval is 0.01, then you can't do more than 100 transactions per second). 
Setting the I can improve the opportunity for saving power, as the program will "bundle" timer callback invocations that are "near" in time together, by delaying some, thus reducing the number of times the process sleeps and wakes up again. Another useful technique to reduce iterations/wake-ups is to use C watchers and make sure they fire on, say, one-second boundaries only. Example: we only need 0.1s timeout granularity, and we wish not to poll more often than 100 times per second: ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1); ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01); =item ev_invoke_pending (loop) This call will simply invoke all pending watchers while resetting their pending state. Normally, C does this automatically when required, but when overriding the invoke callback this call comes handy. This function can be invoked from a watcher - this can be useful for example when you want to do some lengthy calculation and want to pass further event handling to another thread (you still have to make sure only one thread executes within C or C of course). =item int ev_pending_count (loop) Returns the number of pending watchers - zero indicates that no watchers are pending. =item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P)) This overrides the invoke pending functionality of the loop: Instead of invoking all pending watchers when there are any, C will call this callback instead. This is useful, for example, when you want to invoke the actual watchers inside another context (another thread etc.). If you want to reset the callback, use C as new callback. =item ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ()) Sometimes you want to share the same loop between multiple threads. This can be done relatively simply by putting mutex_lock/unlock calls around each call to a libev function. However, C can run an indefinite time, so it is not feasible to wait for it to return. One way around this is to wake up the event loop via C and C, another way is to set these I and I callbacks on the loop. When set, then C will be called just before the thread is suspended waiting for new events, and C is called just afterwards. Ideally, C will just call your mutex_unlock function, and C will just call the mutex_lock function again. While event loop modifications are allowed between invocations of C and C (that's their only purpose after all), no modifications done will affect the event loop, i.e. adding watchers will have no effect on the set of file descriptors being watched, or the time waited. Use an C watcher to wake up C when you want it to take note of any changes you made. In theory, threads executing C will be async-cancel safe between invocations of C and C. See also the locking example in the C section later in this document. =item ev_set_userdata (loop, void *data) =item void *ev_userdata (loop) Set and retrieve a single C associated with a loop. When C has never been called, then C returns C<0>. These two functions can be used to associate arbitrary data with a loop, and are intended solely for the C, C and C callbacks described above, but of course can be (ab-)used for any other purpose as well. =item ev_verify (loop) This function only does something when C support has been compiled in, which is the default for non-minimal builds. It tries to go through all internal structures and checks them for validity. If anything is found to be inconsistent, it will print an error message to standard error and call C. 
This can be used to catch bugs inside libev itself: under normal circumstances, this function will never abort as of course libev keeps its data structures consistent. =back =head1 ANATOMY OF A WATCHER In the following description, uppercase C in names stands for the watcher type, e.g. C can mean C for timer watchers and C for I/O watchers. A watcher is an opaque structure that you allocate and register to record your interest in some event. To make a concrete example, imagine you want to wait for STDIN to become readable, you would create an C watcher for that: static void my_cb (struct ev_loop *loop, ev_io *w, int revents) { ev_io_stop (w); ev_break (loop, EVBREAK_ALL); } struct ev_loop *loop = ev_default_loop (0); ev_io stdin_watcher; ev_init (&stdin_watcher, my_cb); ev_io_set (&stdin_watcher, STDIN_FILENO, EV_READ); ev_io_start (loop, &stdin_watcher); ev_run (loop, 0); As you can see, you are responsible for allocating the memory for your watcher structures (and it is I a bad idea to do this on the stack). Each watcher has an associated watcher structure (called C or simply C, as typedefs are provided for all watcher structs). Each watcher structure must be initialised by a call to C, which expects a callback to be provided. This callback is invoked each time the event occurs (or, in the case of I/O watchers, each time the event loop detects that the file descriptor given is readable and/or writable). Each watcher type further has its own C<< ev_TYPE_set (watcher *, ...) >> macro to configure it, with arguments specific to the watcher type. There is also a macro to combine initialisation and setting in one call: C<< ev_TYPE_init (watcher *, callback, ...) >>. To make the watcher actually watch out for events, you have to start it with a watcher-specific start function (C<< ev_TYPE_start (loop, watcher *) >>), and you can stop watching for events at any time by calling the corresponding stop function (C<< ev_TYPE_stop (loop, watcher *) >>. As long as your watcher is active (has been started but not stopped) you must not touch the values stored in it except when explicitly documented otherwise. Most specifically you must never reinitialise it or call its C macro. Each and every callback receives the event loop pointer as first, the registered watcher structure as second, and a bitset of received events as third argument. The received events usually include a single bit per event type received (you can receive multiple events at the same time). The possible bit masks are: =over 4 =item C =item C The file descriptor in the C watcher has become readable and/or writable. =item C The C watcher has timed out. =item C The C watcher has timed out. =item C The signal specified in the C watcher has been received by a thread. =item C The pid specified in the C watcher has received a status change. =item C The path specified in the C watcher changed its attributes somehow. =item C The C watcher has determined that you have nothing better to do. =item C =item C All C watchers are invoked just I C starts to gather new events, and all C watchers are queued (not invoked) just after C has gathered them, but before it queues any callbacks for any received events. That means C watchers are the last watchers invoked before the event loop sleeps or polls for new events, and C watchers will be invoked before any other watchers of the same or lower priority within an event loop iteration. 
Callbacks of both watcher types can start and stop as many watchers as they want, and all of them will be taken into account (for example, a C watcher might start an idle watcher to keep C from blocking). =item C The embedded event loop specified in the C watcher needs attention. =item C The event loop has been resumed in the child process after fork (see C). =item C The event loop is about to be destroyed (see C). =item C The given async watcher has been asynchronously notified (see C). =item C Not ever sent (or otherwise used) by libev itself, but can be freely used by libev users to signal watchers (e.g. via C). =item C An unspecified error has occurred, the watcher has been stopped. This might happen because the watcher could not be properly started because libev ran out of memory, a file descriptor was found to be closed or any other problem. Libev considers these application bugs. You best act on it by reporting the problem and somehow coping with the watcher being stopped. Note that well-written programs should not receive an error ever, so when your watcher receives it, this usually indicates a bug in your program. Libev will usually signal a few "dummy" events together with an error, for example it might indicate that a fd is readable or writable, and if your callbacks is well-written it can just attempt the operation and cope with the error from read() or write(). This will not work in multi-threaded programs, though, as the fd could already be closed and reused for another thing, so beware. =back =head2 GENERIC WATCHER FUNCTIONS =over 4 =item C (ev_TYPE *watcher, callback) This macro initialises the generic portion of a watcher. The contents of the watcher object can be arbitrary (so C will do). Only the generic parts of the watcher are initialised, you I to call the type-specific C macro afterwards to initialise the type-specific parts. For each type there is also a C macro which rolls both calls into one. You can reinitialise a watcher at any time as long as it has been stopped (or never started) and there are no pending events outstanding. The callback is always of type C. Example: Initialise an C watcher in two steps. ev_io w; ev_init (&w, my_cb); ev_io_set (&w, STDIN_FILENO, EV_READ); =item C (ev_TYPE *watcher, [args]) This macro initialises the type-specific parts of a watcher. You need to call C at least once before you call this macro, but you can call C any number of times. You must not, however, call this macro on a watcher that is active (it can be pending, however, which is a difference to the C macro). Although some watcher types do not have type-specific arguments (e.g. C) you still need to call its C macro. See C, above, for an example. =item C (ev_TYPE *watcher, callback, [args]) This convenience macro rolls both C and C macro calls into a single call. This is the most convenient method to initialise a watcher. The same limitations apply, of course. Example: Initialise and set an C watcher in one step. ev_io_init (&w, my_cb, STDIN_FILENO, EV_READ); =item C (loop, ev_TYPE *watcher) Starts (activates) the given watcher. Only active watchers will receive events. If the watcher is already active nothing will happen. Example: Start the C watcher that is being abused as example in this whole section. ev_io_start (EV_DEFAULT_UC, &w); =item C (loop, ev_TYPE *watcher) Stops the given watcher if active, and clears the pending status (whether the watcher was active or not). 
It is possible that stopped watchers are pending - for example, non-repeating timers are being stopped when they become pending - but calling C ensures that the watcher is neither active nor pending. If you want to free or reuse the memory used by the watcher it is therefore a good idea to always call its C function. =item bool ev_is_active (ev_TYPE *watcher) Returns a true value iff the watcher is active (i.e. it has been started and not yet been stopped). As long as a watcher is active you must not modify it. =item bool ev_is_pending (ev_TYPE *watcher) Returns a true value iff the watcher is pending, (i.e. it has outstanding events but its callback has not yet been invoked). As long as a watcher is pending (but not active) you must not call an init function on it (but C is safe), you must not change its priority, and you must make sure the watcher is available to libev (e.g. you cannot C it). =item callback ev_cb (ev_TYPE *watcher) Returns the callback currently set on the watcher. =item ev_set_cb (ev_TYPE *watcher, callback) Change the callback. You can change the callback at virtually any time (modulo threads). =item ev_set_priority (ev_TYPE *watcher, int priority) =item int ev_priority (ev_TYPE *watcher) Set and query the priority of the watcher. The priority is a small integer between C (default: C<2>) and C (default: C<-2>). Pending watchers with higher priority will be invoked before watchers with lower priority, but priority will not keep watchers from being executed (except for C watchers). If you need to suppress invocation when higher priority events are pending you need to look at C watchers, which provide this functionality. You I change the priority of a watcher as long as it is active or pending. Setting a priority outside the range of C to C is fine, as long as you do not mind that the priority value you query might or might not have been clamped to the valid range. The default priority used by watchers when no priority has been set is always C<0>, which is supposed to not be too high and not be too low :). See L, below, for a more thorough treatment of priorities. =item ev_invoke (loop, ev_TYPE *watcher, int revents) Invoke the C with the given C and C. Neither C nor C need to be valid as long as the watcher callback can deal with that fact, as both are simply passed through to the callback. =item int ev_clear_pending (loop, ev_TYPE *watcher) If the watcher is pending, this function clears its pending status and returns its C bitset (as if its callback was invoked). If the watcher isn't pending it does nothing and returns C<0>. Sometimes it can be useful to "poll" a watcher instead of waiting for its callback to be invoked, which can be accomplished with this function. =item ev_feed_event (loop, ev_TYPE *watcher, int revents) Feeds the given event set into the event loop, as if the specified event had happened for the specified watcher (which must be a pointer to an initialised but not necessarily started event watcher). Obviously you must not free the watcher as long as it has pending events. Stopping the watcher, letting libev invoke it, or calling C will clear the pending event, even if the watcher was not started in the first place. See also C and C for related functions that do not need a watcher. =back See also the L and L idioms. =head2 WATCHER STATES There are various watcher states mentioned throughout this manual - active, pending and so on. 
In this section these states and the rules to transition between them will be described in more detail - and while these rules might look complicated, they usually do "the right thing". =over 4 =item initialised Before a watcher can be registered with the event loop it has to be initialised. This can be done with a call to C, or calls to C followed by the watcher-specific C function. In this state it is simply some block of memory that is suitable for use in an event loop. It can be moved around, freed, reused etc. at will - as long as you either keep the memory contents intact, or call C again. =item started/running/active Once a watcher has been started with a call to C it becomes property of the event loop, and is actively waiting for events. While in this state it cannot be accessed (except in a few documented ways), moved, freed or anything else - the only legal thing is to keep a pointer to it, and call libev functions on it that are documented to work on active watchers. =item pending If a watcher is active and libev determines that an event it is interested in has occurred (such as a timer expiring), it will become pending. It will stay in this pending state until either it is stopped or its callback is about to be invoked, so it is not normally pending inside the watcher callback. The watcher might or might not be active while it is pending (for example, an expired non-repeating timer can be pending but no longer active). If it is stopped, it can be freely accessed (e.g. by calling C), but it is still property of the event loop at this time, so cannot be moved, freed or reused. And if it is active the rules described in the previous item still apply. It is also possible to feed an event on a watcher that is not active (e.g. via C), in which case it becomes pending without being active. =item stopped A watcher can be stopped implicitly by libev (in which case it might still be pending), or explicitly by calling its C function. The latter will clear any pending state the watcher might be in, regardless of whether it was active or not, so stopping a watcher explicitly before freeing it is often a good idea. While stopped (and not pending) the watcher is essentially in the initialised state, that is, it can be reused, moved, modified in any way you wish (but when you trash the memory block, you need to C it again). =back =head2 WATCHER PRIORITY MODELS Many event loops support I, which are usually small integers that influence the ordering of event callback invocation between watchers in some way, all else being equal. In libev, watcher priorities can be set using C. See its description for the more technical details such as the actual priority range. There are two common ways how these these priorities are being interpreted by event loops: In the more common lock-out model, higher priorities "lock out" invocation of lower priority watchers, which means as long as higher priority watchers receive events, lower priority watchers are not being invoked. The less common only-for-ordering model uses priorities solely to order callback invocation within a single event loop iteration: Higher priority watchers are invoked before lower priority ones, but they all get invoked before polling for new events. Libev uses the second (only-for-ordering) model for all its watchers except for idle watchers (which use the lock-out model). 
The rationale behind this is that implementing the lock-out model for watchers is not well supported by most kernel interfaces, and most event libraries will just poll for the same events again and again as long as their callbacks have not been executed, which is very inefficient in the common case of one high-priority watcher locking out a mass of lower priority ones. Static (ordering) priorities are most useful when you have two or more watchers handling the same resource: a typical usage example is having an C watcher to receive data, and an associated C to handle timeouts. Under load, data might be received while the program handles other jobs, but since timers normally get invoked first, the timeout handler will be executed before checking for data. In that case, giving the timer a lower priority than the I/O watcher ensures that I/O will be handled first even under adverse conditions (which is usually, but not always, what you want). Since idle watchers use the "lock-out" model, meaning that idle watchers will only be executed when no same or higher priority watchers have received events, they can be used to implement the "lock-out" model when required. For example, to emulate how many other event libraries handle priorities, you can associate an C watcher to each such watcher, and in the normal watcher callback, you just start the idle watcher. The real processing is done in the idle watcher callback. This causes libev to continuously poll and process kernel event data for the watcher, but when the lock-out case is known to be rare (which in turn is rare :), this is workable. Usually, however, the lock-out model implemented that way will perform miserably under the type of load it was designed to handle. In that case, it might be preferable to stop the real watcher before starting the idle watcher, so the kernel will not have to process the event in case the actual processing will be delayed for considerable time. Here is an example of an I/O watcher that should run at a strictly lower priority than the default, and which should only process data when no other events are pending: ev_idle idle; // actual processing watcher ev_io io; // actual event watcher static void io_cb (EV_P_ ev_io *w, int revents) { // stop the I/O watcher, we received the event, but // are not yet ready to handle it. ev_io_stop (EV_A_ w); // start the idle watcher to handle the actual event. // it will not be executed as long as other watchers // with the default priority are receiving events. ev_idle_start (EV_A_ &idle); } static void idle_cb (EV_P_ ev_idle *w, int revents) { // actual processing read (STDIN_FILENO, ...); // have to start the I/O watcher again, as // we have handled the event ev_io_start (EV_P_ &io); } // initialisation ev_idle_init (&idle, idle_cb); ev_io_init (&io, io_cb, STDIN_FILENO, EV_READ); ev_io_start (EV_DEFAULT_ &io); In the "real" world, it might also be beneficial to start a timer, so that low-priority connections can not be locked out forever under load. This enables your program to keep a lower latency for important connections during short periods of high load, while not completely locking out less important ones. =head1 WATCHER TYPES This section describes each watcher in detail, but will not repeat information given in the last section. Any initialisation/set macros, functions and members specific to the watcher type are explained. 
Most members are additionally marked with either I<[read-only]>, meaning that, while the watcher is active, you can look at the member and expect some sensible content, but you must not modify it (you can modify it while the watcher is stopped to your hearts content), or I<[read-write]>, which means you can expect it to have some sensible content while the watcher is active, but you can also modify it (within the same thread as the event loop, i.e. without creating data races). Modifying it may not do something sensible or take immediate effect (or do anything at all), but libev will not crash or malfunction in any way. In any case, the documentation for each member will explain what the effects are, and if there are any additional access restrictions. =head2 C - is this file descriptor readable or writable? I/O watchers check whether a file descriptor is readable or writable in each iteration of the event loop, or, more precisely, when reading would not block the process and writing would at least be able to write some data. This behaviour is called level-triggering because you keep receiving events as long as the condition persists. Remember you can stop the watcher if you don't want to act on the event and neither want to receive future events. In general you can register as many read and/or write event watchers per fd as you want (as long as you don't confuse yourself). Setting all file descriptors to non-blocking mode is also usually a good idea (but not required if you know what you are doing). Another thing you have to watch out for is that it is quite easy to receive "spurious" readiness notifications, that is, your callback might be called with C but a subsequent C(2) will actually block because there is no data. It is very easy to get into this situation even with a relatively standard program structure. Thus it is best to always use non-blocking I/O: An extra C(2) returning C is far preferable to a program hanging until some data arrives. If you cannot run the fd in non-blocking mode (for example you should not play around with an Xlib connection), then you have to separately re-test whether a file descriptor is really ready with a known-to-be good interface such as poll (fortunately in the case of Xlib, it already does this on its own, so its quite safe to use). Some people additionally use C and an interval timer, just to be sure you won't block indefinitely. But really, best use non-blocking mode. =head3 The special problem of disappearing file descriptors Some backends (e.g. kqueue, epoll, linuxaio) need to be told about closing a file descriptor (either due to calling C explicitly or any other means, such as C). The reason is that you register interest in some file descriptor, but when it goes away, the operating system will silently drop this interest. If another file descriptor with the same number then is registered with libev, there is no efficient way to see that this is, in fact, a different file descriptor. To avoid having to explicitly tell libev about such cases, libev follows the following policy: Each time C is being called, libev will assume that this is potentially a new file descriptor, otherwise it is assumed that the file descriptor stays the same. That means that you I to call C (or C) when you change the descriptor even if the file descriptor number itself did not change. This is how one would do it normally anyway, the important point is that the libev application should not optimise around libev but should leave optimisations to libev. 
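A sketch of the pattern this implies (the reconnect helper is hypothetical; w is assumed to be an ev_io watcher, inside a callback where the loop is in scope):

   /* the connection was re-established - the new fd may well have the old number */
   ev_io_stop (EV_A_ &w);
   close (fd);
   fd = reconnect_to_server (); /* hypothetical helper returning the new descriptor */
   ev_io_set (&w, fd, EV_READ);
   ev_io_start (EV_A_ &w);

Stopping, re-setting and restarting the watcher tells libev that the descriptor is potentially a new one, so the backend state is rebuilt as needed.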
=head3 The special problem of dup'ed file descriptors Some backends (e.g. epoll), cannot register events for file descriptors, but only events for the underlying file descriptions. That means when you have C'ed file descriptors or weirder constellations, and register events for them, only one file descriptor might actually receive events. There is no workaround possible except not registering events for potentially C'ed file descriptors, or to resort to C or C. =head3 The special problem of files Many people try to use C. =item EV_USE_EVENTFD If defined to be C<1>, then libev will assume that C is available and will probe for kernel support at runtime. This will improve C and C performance and reduce resource consumption. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. =item EV_USE_SIGNALFD If defined to be C<1>, then libev will assume that C is available and will probe for kernel support at runtime. This enables the use of EVFLAG_SIGNALFD for faster and simpler signal handling. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. =item EV_USE_TIMERFD If defined to be C<1>, then libev will assume that C is available and will probe for kernel support at runtime. This allows libev to detect time jumps accurately. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.8 or newer and define C, otherwise disabled. =item EV_USE_EVENTFD If defined to be C<1>, then libev will assume that C is available and will probe for kernel support at runtime. This will improve C and C performance and reduce resource consumption. If undefined, it will be enabled if the headers indicate GNU/Linux + Glibc 2.7 or newer, otherwise disabled. =item EV_USE_SELECT If undefined or defined to be C<1>, libev will compile in support for the C is buggy All that's left is C actively limits the number of file descriptors you can pass in to 1024 - your program suddenly crashes when you use more. There is an undocumented "workaround" for this - defining C<_DARWIN_UNLIMITED_SELECT>, which libev tries to use, so select I work on OS/X. =head2 SOLARIS PROBLEMS AND WORKAROUNDS =head3 C reentrancy The default compile environment on Solaris is unfortunately so thread-unsafe that you can't even use components/libraries compiled without C<-D_REENTRANT> in a threaded program, which, of course, isn't defined by default. A valid, if stupid, implementation choice. If you want to use libev in threaded environments you have to make sure it's compiled with C<_REENTRANT> defined. =head3 Event port backend The scalable event interface for Solaris is called "event ports". Unfortunately, this mechanism is very buggy in all major releases. If you run into high CPU usage, your program freezes or you get a large number of spurious wakeups, make sure you have all the relevant and latest kernel patches applied. No, I don't know which ones, but there are multiple ones to apply, and afterwards, event ports actually work great. If you can't get it to work, you can try running the program by setting the environment variable C to only allow C and C works fine with large bitsets on AIX, and AIX is dead anyway. =head2 WIN32 PLATFORM LIMITATIONS AND WORKAROUNDS =head3 General issues Win32 doesn't support any of the standards (e.g. POSIX) that libev requires, and its I/O model is fundamentally incompatible with the POSIX model. 
Libev still offers limited functionality on this platform in the form of the C backend, and only supports socket descriptors. This only applies when using Win32 natively, not when using e.g. cygwin. Actually, it only applies to the microsofts own compilers, as every compiler comes with a slightly differently broken/incompatible environment. Lifting these limitations would basically require the full re-implementation of the I/O system. If you are into this kind of thing, then note that glib does exactly that for you in a very portable way (note also that glib is the slowest event library known to man). There is no supported compilation method available on windows except embedding it into other applications. Sensible signal handling is officially unsupported by Microsoft - libev tries its best, but under most conditions, signals will simply not work. Not a libev limitation but worth mentioning: windows apparently doesn't accept large writes: instead of resulting in a partial write, windows will either accept everything or return C if the buffer is too large, so make sure you only write small amounts into your sockets (less than a megabyte seems safe, but this apparently depends on the amount of memory available). Due to the many, low, and arbitrary limits on the win32 platform and the abysmal performance of winsockets, using a large number of sockets is not recommended (and not reasonable). If your program needs to use more than a hundred or so sockets, then likely it needs to use a totally different implementation for windows, as libev offers the POSIX readiness notification model, which cannot be implemented efficiently on windows (due to Microsoft monopoly games). A typical way to use libev under windows is to embed it (see the embedding section for details) and use the following F header file instead of F: #define EV_STANDALONE /* keeps ev from requiring config.h */ #define EV_SELECT_IS_WINSOCKET 1 /* configure libev for windows select */ #include "ev.h" And compile the following F file into your project (make sure you do I compile the F or any other embedded source files!): #include "evwrap.h" #include "ev.c" =head3 The winsocket C function doesn't follow POSIX in that it requires socket I and not socket I (it is also extremely buggy). This makes select very inefficient, and also requires a mapping from file descriptors to socket handles (the Microsoft C runtime provides the function C<_open_osfhandle> for this). See the discussion of the C, C and C preprocessor symbols for more info. The configuration for a "naked" win32 using the Microsoft runtime libraries and raw winsocket select is: #define EV_USE_SELECT 1 #define EV_SELECT_IS_WINSOCKET 1 /* forces EV_SELECT_USE_FD_SET, too */ Note that winsockets handling of fd sets is O(n), so you can easily get a complexity in the O(n²) range when using win32. =head3 Limited number of file descriptors Windows has numerous arbitrary (and low) limits on things. Early versions of winsocket's select only supported waiting for a maximum of C<64> handles (probably owning to the fact that all windows kernels can only wait for C<64> things at the same time internally; Microsoft recommends spawning a chain of threads and wait for 63 handles and the previous thread in each. Sounds great!). Newer versions support more handles, but you need to define C to some high number (e.g. C<2048>) before compiling the winsocket select call (which might be in libev or elsewhere, for example, perl and many other interpreters do their own select emulation on windows). 
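Concretely - a minimal sketch based on the evwrap.h wrapper shown above; the value 2048 is only an example - the define has to appear before any winsock header is pulled in:

   /* evwrap.h */
   #define FD_SETSIZE 2048 /* example value: raise the winsock select limit */
   #define EV_STANDALONE /* keeps ev from requiring config.h */
   #define EV_SELECT_IS_WINSOCKET 1 /* configure libev for windows select */
   #include "ev.h"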
Another limit is the number of file descriptors in the Microsoft runtime libraries, which by default is C<64> (there must be a hidden I<64> fetish or something like this inside Microsoft). You can increase this by calling C<_setmaxstdio>, which can increase this limit to C<2048> (another arbitrary limit), but is broken in many versions of the Microsoft runtime libraries. This might get you to about C<512> or C<2048> sockets (depending on windows version and/or the phase of the moon). To get more, you need to wrap all I/O functions and provide your own fd management, but the cost of calling select (O(n²)) will likely make this unworkable. =head2 PORTABILITY REQUIREMENTS In addition to a working ISO-C implementation and of course the backend-specific APIs, libev relies on a few additional extensions: =over 4 =item C must have compatible calling conventions regardless of C. Libev assumes not only that all watcher pointers have the same internal structure (guaranteed by POSIX but not by ISO C for example), but it also assumes that the same (machine) code can be used to call any watcher callback: The watcher callbacks have different type signatures, but libev calls them using an C internally. =item null pointers and integer zero are represented by 0 bytes Libev uses C to initialise structs and arrays to C<0> bytes, and relies on this setting pointers and integers to null. =item pointer accesses must be thread-atomic Accessing a pointer value must be atomic, it must both be readable and writable in one piece - this is the case on all current architectures. =item C must be thread-atomic as well The type C (or whatever is defined as C) must be atomic with respect to accesses from different threads. This is not part of the specification for C, but is believed to be sufficiently portable. =item C must work in a threaded environment Libev uses C to temporarily block signals. This is not allowed in a threaded program (C has to be used). Typical pthread implementations will either allow C in the "main thread" or will block signals process-wide, both behaviours would be compatible with libev. Interaction between C and C could complicate things, however. The most portable way to handle signals is to block signals in all threads except the initial one, and run the signal handling loop in the initial thread as well. =item C must be large enough for common memory allocation sizes To improve portability and simplify its API, libev uses C internally instead of C when allocating its data structures. On non-POSIX systems (Microsoft...) this might be unexpectedly low, but is still at least 31 bits everywhere, which is enough for hundreds of millions of watchers. =item C must hold a time value in seconds with enough accuracy The type C is used to represent timestamps. It is required to have at least 51 bits of mantissa (and 9 bits of exponent), which is good enough for at least into the year 4000 with millisecond accuracy (the design goal for libev). This requirement is overfulfilled by implementations using IEEE 754, which is basically all existing ones. With IEEE 754 doubles, you get microsecond accuracy until at least the year 2255 (and millisecond accuracy till the year 287396 - by then, libev is either obsolete or somebody patched it to use C or something like that, just kidding). =back If you know of other additional requirements drop me a note. =head1 ALGORITHMIC COMPLEXITIES In this section the complexities of (many of) the algorithms used inside libev will be documented. 
For complexity discussions about backends see the documentation for C. All of the following are about amortised time: If an array needs to be extended, libev needs to realloc and move the whole array, but this happens asymptotically rarer with higher number of elements, so O(1) might mean that libev does a lengthy realloc operation in rare cases, but on average it is much faster and asymptotically approaches constant time. =over 4 =item Starting and stopping timer/periodic watchers: O(log skipped_other_timers) This means that, when you have a watcher that triggers in one hour and there are 100 watchers that would trigger before that, then inserting will have to skip roughly seven (C) of these watchers. =item Changing timer/periodic watchers (by autorepeat or calling again): O(log skipped_other_timers) That means that changing a timer costs less than removing/adding them, as only the relative motion in the event queue has to be paid for. =item Starting io/check/prepare/idle/signal/child/fork/async watchers: O(1) These just add the watcher into an array or at the head of a list. =item Stopping check/prepare/idle/fork/async watchers: O(1) =item Stopping an io/signal/child watcher: O(number_of_watchers_for_this_(fd/signal/pid % EV_PID_HASHSIZE)) These watchers are stored in lists, so they need to be walked to find the correct watcher to remove. The lists are usually short (you don't usually have many watchers waiting for the same fd or signal: one is typical, two is rare). =item Finding the next timer in each loop iteration: O(1) By virtue of using a binary or 4-heap, the next timer is always found at a fixed position in the storage array. =item Each change on a file descriptor per loop iteration: O(number_of_watchers_for_this_fd) A change means an I/O watcher gets started or stopped, which requires libev to recalculate its status (and possibly tell the kernel, depending on backend and whether C was used). =item Activating one watcher (putting it into the pending state): O(1) =item Priority handling: O(number_of_priorities) Priorities are implemented by allocating some space for each priority. When doing priority-based operations, libev usually has to linearly search all the priorities, but starting/stopping and activating watchers becomes O(1) with respect to priority handling. =item Sending an ev_async: O(1) =item Processing ev_async_send: O(number_of_async_watchers) =item Processing signals: O(max_signal_number) Sending involves a system call I there were no other C calls in the current loop iteration and the loop is currently blocked. Checking for async and signal events involves iterating over all running async watchers or all signal numbers. =back =head1 PORTING FROM LIBEV 3.X TO 4.X The major version 4 introduced some incompatible changes to the API. At the moment, the C header file provides compatibility definitions for all changes, so most programs should still compile. The compatibility layer might be removed in later versions of libev, so better update to the new API early than late. =over 4 =item C backwards compatibility mechanism The backward compatibility mechanism can be controlled by C. See L in the L section. 
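For instance - a minimal sketch, assuming libev is embedded and compiled as part of your own project - building with the compatibility layer disabled is a quick way to find any remaining pre-4.0 names in your sources:

   /* before including ev.h (and ev.c, when embedding) */
   #define EV_COMPAT3 0 /* do not provide the pre-4.0 compatibility aliases */
   #include "ev.h"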
=item C and C have been removed These calls can be replaced easily by their C counterparts: ev_loop_destroy (EV_DEFAULT_UC); ev_loop_fork (EV_DEFAULT); =item function/symbol renames A number of functions and symbols have been renamed: ev_loop => ev_run EVLOOP_NONBLOCK => EVRUN_NOWAIT EVLOOP_ONESHOT => EVRUN_ONCE ev_unloop => ev_break EVUNLOOP_CANCEL => EVBREAK_CANCEL EVUNLOOP_ONE => EVBREAK_ONE EVUNLOOP_ALL => EVBREAK_ALL EV_TIMEOUT => EV_TIMER ev_loop_count => ev_iteration ev_loop_depth => ev_depth ev_loop_verify => ev_verify Most functions working on C objects don't have an C prefix, so it was removed; C, C and associated constants have been renamed to not collide with the C anymore and C now follows the same naming scheme as all other watcher types. Note that C is still called C because it would otherwise clash with the C typedef. =item C mechanism replaced by C The preprocessor symbol C has been replaced by a different mechanism, C. Programs using C usually compile and work, but the library code will of course be larger. =back =head1 GLOSSARY =over 4 =item active A watcher is active as long as it has been started and not yet stopped. See L for details. =item application In this document, an application is whatever is using libev. =item backend The part of the code dealing with the operating system interfaces. =item callback The address of a function that is called when some event has been detected. Callbacks are being passed the event loop, the watcher that received the event, and the actual event bitset. =item callback/watcher invocation The act of calling the callback associated with a watcher. =item event A change of state of some external event, such as data now being available for reading on a file descriptor, time having passed or simply not having any other events happening anymore. In libev, events are represented as single bits (such as C or C). =item event library A software package implementing an event model and loop. =item event loop An entity that handles and processes external events and converts them into callback invocations. =item event model The model used to describe how an event loop handles and processes watchers and events. =item pending A watcher is pending as soon as the corresponding event has been detected. See L for details. =item real time The physical time that is observed. It is apparently strictly monotonic :) =item wall-clock time The time and date as shown on clocks. Unlike real time, it can actually be wrong and jump forwards and backwards, e.g. when you adjust your clock. =item watcher A data structure that describes interest in certain events. Watchers need to be started (attached to an event loop) before they can receive events. =back =head1 AUTHOR Marc Lehmann , with repeated corrections by Mikael Magnusson and Emanuele Giaquinta, and minor corrections by many others. gevent-24.11.1/deps/libev/ev_epoll.c000066400000000000000000000241751471441230600171700ustar00rootroot00000000000000/* * libev epoll fd activity backend * * Copyright (c) 2007,2008,2009,2010,2011,2016,2017,2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ /* * general notes about epoll: * * a) epoll silently removes fds from the fd set. as nothing tells us * that an fd has been removed otherwise, we have to continually * "rearm" fds that we suspect *might* have changed (same * problem with kqueue, but much less costly there). * b) the fact that ADD != MOD creates a lot of extra syscalls due to a) * and seems not to have any advantage. * c) the inability to handle fork or file descriptors (think dup) * limits the applicability over poll, so this is not a generic * poll replacement. * d) epoll doesn't work the same as select with many file descriptors * (such as files). while not critical, no other advanced interface * seems to share this (rather non-unixy) limitation. * e) epoll claims to be embeddable, but in practise you never get * a ready event for the epoll fd (broken: <=2.6.26, working: >=2.6.32). * f) epoll_ctl returning EPERM means the fd is always ready. * * lots of "weird code" and complication handling in this file is due * to these design problems with epoll, as we try very hard to avoid * epoll_ctl syscalls for common usage patterns and handle the breakage * ensuing from receiving events for closed and otherwise long gone * file descriptors. */ #include #define EV_EMASK_EPERM 0x80 static void epoll_modify (EV_P_ int fd, int oev, int nev) { struct epoll_event ev; unsigned char oldmask; /* * we handle EPOLL_CTL_DEL by ignoring it here * on the assumption that the fd is gone anyways * if that is wrong, we have to handle the spurious * event in epoll_poll. * if the fd is added again, we try to ADD it, and, if that * fails, we assume it still has the same eventmask. */ if (!nev) return; oldmask = anfds [fd].emask; anfds [fd].emask = nev; /* store the generation counter in the upper 32 bits, the fd in the lower 32 bits */ ev.data.u64 = (uint64_t)(uint32_t)fd | ((uint64_t)(uint32_t)++anfds [fd].egen << 32); ev.events = (nev & EV_READ ? 
EPOLLIN : 0) | (nev & EV_WRITE ? EPOLLOUT : 0); if (ecb_expect_true (!epoll_ctl (backend_fd, oev && oldmask != nev ? EPOLL_CTL_MOD : EPOLL_CTL_ADD, fd, &ev))) return; if (ecb_expect_true (errno == ENOENT)) { /* if ENOENT then the fd went away, so try to do the right thing */ if (!nev) goto dec_egen; if (!epoll_ctl (backend_fd, EPOLL_CTL_ADD, fd, &ev)) return; } else if (ecb_expect_true (errno == EEXIST)) { /* EEXIST means we ignored a previous DEL, but the fd is still active */ /* if the kernel mask is the same as the new mask, we assume it hasn't changed */ if (oldmask == nev) goto dec_egen; if (!epoll_ctl (backend_fd, EPOLL_CTL_MOD, fd, &ev)) return; } else if (ecb_expect_true (errno == EPERM)) { /* EPERM means the fd is always ready, but epoll is too snobbish */ /* to handle it, unlike select or poll. */ anfds [fd].emask = EV_EMASK_EPERM; /* add fd to epoll_eperms, if not already inside */ if (!(oldmask & EV_EMASK_EPERM)) { array_needsize (int, epoll_eperms, epoll_epermmax, epoll_epermcnt + 1, array_needsize_noinit); epoll_eperms [epoll_epermcnt++] = fd; } return; } else assert (("libev: I/O watcher with invalid fd found in epoll_ctl", errno != EBADF && errno != ELOOP && errno != EINVAL)); fd_kill (EV_A_ fd); dec_egen: /* we didn't successfully call epoll_ctl, so decrement the generation counter again */ --anfds [fd].egen; } static void epoll_poll (EV_P_ ev_tstamp timeout) { int i; int eventcnt; if (ecb_expect_false (epoll_epermcnt)) timeout = EV_TS_CONST (0.); /* epoll wait times cannot be larger than (LONG_MAX - 999UL) / HZ msecs, which is below */ /* the default libev max wait time, however. */ EV_RELEASE_CB; eventcnt = epoll_wait (backend_fd, epoll_events, epoll_eventmax, EV_TS_TO_MSEC (timeout)); EV_ACQUIRE_CB; if (ecb_expect_false (eventcnt < 0)) { if (errno != EINTR) ev_syserr ("(libev) epoll_wait"); return; } for (i = 0; i < eventcnt; ++i) { struct epoll_event *ev = epoll_events + i; int fd = (uint32_t)ev->data.u64; /* mask out the lower 32 bits */ int want = anfds [fd].events; int got = (ev->events & (EPOLLOUT | EPOLLERR | EPOLLHUP) ? EV_WRITE : 0) | (ev->events & (EPOLLIN | EPOLLERR | EPOLLHUP) ? EV_READ : 0); /* * check for spurious notification. * this only finds spurious notifications on egen updates * other spurious notifications will be found by epoll_ctl, below * we assume that fd is always in range, as we never shrink the anfds array */ if (ecb_expect_false ((uint32_t)anfds [fd].egen != (uint32_t)(ev->data.u64 >> 32))) { /* recreate kernel state */ postfork |= 2; continue; } if (ecb_expect_false (got & ~want)) { anfds [fd].emask = want; /* * we received an event but are not interested in it, try mod or del * this often happens because we optimistically do not unregister fds * when we are no longer interested in them, but also when we get spurious * notifications for fds from another process. this is partially handled * above with the gencounter check (== our fd is not the event fd), and * partially here, when epoll_ctl returns an error (== a child has the fd * but we closed it). * note: for events such as POLLHUP, where we can't know whether it refers * to EV_READ or EV_WRITE, we might issue redundant EPOLL_CTL_MOD calls. */ ev->events = (want & EV_READ ? EPOLLIN : 0) | (want & EV_WRITE ? EPOLLOUT : 0); /* pre-2.6.9 kernels require a non-null pointer with EPOLL_CTL_DEL, */ /* which is fortunately easy to do for us. */ if (epoll_ctl (backend_fd, want ? 
EPOLL_CTL_MOD : EPOLL_CTL_DEL, fd, ev)) { postfork |= 2; /* an error occurred, recreate kernel state */ continue; } } fd_event (EV_A_ fd, got); } /* if the receive array was full, increase its size */ if (ecb_expect_false (eventcnt == epoll_eventmax)) { ev_free (epoll_events); epoll_eventmax = array_nextsize (sizeof (struct epoll_event), epoll_eventmax, epoll_eventmax + 1); epoll_events = (struct epoll_event *)ev_malloc (sizeof (struct epoll_event) * epoll_eventmax); } /* now synthesize events for all fds where epoll fails, while select works... */ for (i = epoll_epermcnt; i--; ) { int fd = epoll_eperms [i]; unsigned char events = anfds [fd].events & (EV_READ | EV_WRITE); if (anfds [fd].emask & EV_EMASK_EPERM && events) fd_event (EV_A_ fd, events); else { epoll_eperms [i] = epoll_eperms [--epoll_epermcnt]; anfds [fd].emask = 0; } } } static int epoll_epoll_create (void) { int fd; #if defined EPOLL_CLOEXEC && !defined __ANDROID__ fd = epoll_create1 (EPOLL_CLOEXEC); if (fd < 0 && (errno == EINVAL || errno == ENOSYS)) #endif { fd = epoll_create (256); if (fd >= 0) fcntl (fd, F_SETFD, FD_CLOEXEC); } return fd; } inline_size int epoll_init (EV_P_ int flags) { if ((backend_fd = epoll_epoll_create ()) < 0) return 0; backend_mintime = EV_TS_CONST (1e-3); /* epoll does sometimes return early, this is just to avoid the worst */ backend_modify = epoll_modify; backend_poll = epoll_poll; epoll_eventmax = 64; /* initial number of events receivable per poll */ epoll_events = (struct epoll_event *)ev_malloc (sizeof (struct epoll_event) * epoll_eventmax); return EVBACKEND_EPOLL; } inline_size void epoll_destroy (EV_P) { ev_free (epoll_events); array_free (epoll_eperm, EMPTY); } ecb_cold static void epoll_fork (EV_P) { close (backend_fd); while ((backend_fd = epoll_epoll_create ()) < 0) ev_syserr ("(libev) epoll_create"); fd_rearm_all (EV_A); } gevent-24.11.1/deps/libev/ev_iouring.c000066400000000000000000000513341471441230600175260ustar00rootroot00000000000000/* * libev linux io_uring fd activity backend * * Copyright (c) 2019-2020 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. 
* * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ /* * general notes about linux io_uring: * * a) it's the best interface I have seen so far. on linux. * b) best is not necessarily very good. * c) it's better than the aio mess, doesn't suffer from the fork problems * of linux aio or epoll and so on and so on. and you could do event stuff * without any syscalls. what's not to like? * d) ok, it's vastly more complex, but that's ok, really. * e) why two mmaps instead of one? one would be more space-efficient, * and I can't see what benefit two would have (other than being * somehow resizable/relocatable, but that's apparently not possible). * f) hmm, it's practically undebuggable (gdb can't access the memory, and * the bizarre way structure offsets are communicated makes it hard to * just print the ring buffer heads, even *iff* the memory were visible * in gdb. but then, that's also ok, really. * g) well, you cannot specify a timeout when waiting for events. no, * seriously, the interface doesn't support a timeout. never seen _that_ * before. sure, you can use a timerfd, but that's another syscall * you could have avoided. overall, this bizarre omission smells * like a µ-optimisation by the io_uring author for his personal * applications, to the detriment of everybody else who just wants * an event loop. but, umm, ok, if that's all, it could be worse. * (from what I gather from the author Jens Axboe, it simply didn't * occur to him, and he made good on it by adding an unlimited nuber * of timeouts later :). * h) initially there was a hardcoded limit of 4096 outstanding events. * later versions not only bump this to 32k, but also can handle * an unlimited amount of events, so this only affects the batch size. * i) unlike linux aio, you *can* register more then the limit * of fd events. while early verisons of io_uring signalled an overflow * and you ended up getting wet. 5.5+ does not do this anymore. * j) but, oh my! it had exactly the same bugs as the linux aio backend, * where some undocumented poll combinations just fail. fortunately, * after finally reaching the author, he was more than willing to fix * this probably in 5.6+. * k) overall, the *API* itself is, I dare to say, not a total trainwreck. * once the bugs ae fixed (probably in 5.6+), it will be without * competition. */ /* TODO: use internal TIMEOUT */ /* TODO: take advantage of single mmap, NODROP etc. 
*/ /* TODO: resize cq/sq size independently */ #include #include #include #include #define IOURING_INIT_ENTRIES 32 /*****************************************************************************/ /* syscall wrapdadoop - this section has the raw api/abi definitions */ #include #include /* mostly directly taken from the kernel or documentation */ struct io_uring_sqe { __u8 opcode; __u8 flags; __u16 ioprio; __s32 fd; union { __u64 off; __u64 addr2; }; __u64 addr; __u32 len; union { __kernel_rwf_t rw_flags; __u32 fsync_flags; __u16 poll_events; __u32 sync_range_flags; __u32 msg_flags; __u32 timeout_flags; __u32 accept_flags; __u32 cancel_flags; __u32 open_flags; __u32 statx_flags; }; __u64 user_data; union { __u16 buf_index; __u64 __pad2[3]; }; }; struct io_uring_cqe { __u64 user_data; __s32 res; __u32 flags; }; struct io_sqring_offsets { __u32 head; __u32 tail; __u32 ring_mask; __u32 ring_entries; __u32 flags; __u32 dropped; __u32 array; __u32 resv1; __u64 resv2; }; struct io_cqring_offsets { __u32 head; __u32 tail; __u32 ring_mask; __u32 ring_entries; __u32 overflow; __u32 cqes; __u64 resv[2]; }; struct io_uring_params { __u32 sq_entries; __u32 cq_entries; __u32 flags; __u32 sq_thread_cpu; __u32 sq_thread_idle; __u32 features; __u32 resv[4]; struct io_sqring_offsets sq_off; struct io_cqring_offsets cq_off; }; #define IORING_SETUP_CQSIZE 0x00000008 #define IORING_OP_POLL_ADD 6 #define IORING_OP_POLL_REMOVE 7 #define IORING_OP_TIMEOUT 11 #define IORING_OP_TIMEOUT_REMOVE 12 /* relative or absolute, reference clock is CLOCK_MONOTONIC */ struct iouring_kernel_timespec { int64_t tv_sec; long long tv_nsec; }; #define IORING_TIMEOUT_ABS 0x00000001 #define IORING_ENTER_GETEVENTS 0x01 #define IORING_OFF_SQ_RING 0x00000000ULL #define IORING_OFF_CQ_RING 0x08000000ULL #define IORING_OFF_SQES 0x10000000ULL #define IORING_FEAT_SINGLE_MMAP 0x00000001 #define IORING_FEAT_NODROP 0x00000002 #define IORING_FEAT_SUBMIT_STABLE 0x00000004 inline_size int evsys_io_uring_setup (unsigned entries, struct io_uring_params *params) { return ev_syscall2 (SYS_io_uring_setup, entries, params); } inline_size int evsys_io_uring_enter (int fd, unsigned to_submit, unsigned min_complete, unsigned flags, const sigset_t *sig, size_t sigsz) { return ev_syscall6 (SYS_io_uring_enter, fd, to_submit, min_complete, flags, sig, sigsz); } /*****************************************************************************/ /* actual backed implementation */ /* we hope that volatile will make the compiler access this variables only once */ #define EV_SQ_VAR(name) *(volatile unsigned *)((char *)iouring_sq_ring + iouring_sq_ ## name) #define EV_CQ_VAR(name) *(volatile unsigned *)((char *)iouring_cq_ring + iouring_cq_ ## name) /* the index array */ #define EV_SQ_ARRAY ((unsigned *)((char *)iouring_sq_ring + iouring_sq_array)) /* the submit/completion queue entries */ #define EV_SQES ((struct io_uring_sqe *) iouring_sqes) #define EV_CQES ((struct io_uring_cqe *)((char *)iouring_cq_ring + iouring_cq_cqes)) inline_speed int iouring_enter (EV_P_ ev_tstamp timeout) { int res; EV_RELEASE_CB; res = evsys_io_uring_enter (iouring_fd, iouring_to_submit, 1, timeout > EV_TS_CONST (0.) ? IORING_ENTER_GETEVENTS : 0, 0, 0); assert (("libev: io_uring_enter did not consume all sqes", (res < 0 || res == iouring_to_submit))); iouring_to_submit = 0; EV_ACQUIRE_CB; return res; } /* TODO: can we move things around so we don't need this forward-reference? 
*/ static void iouring_poll (EV_P_ ev_tstamp timeout); static struct io_uring_sqe * iouring_sqe_get (EV_P) { unsigned tail; for (;;) { tail = EV_SQ_VAR (tail); if (ecb_expect_true (tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries))) break; /* whats the problem, we have free sqes */ /* queue full, need to flush and possibly handle some events */ #if EV_FEATURE_CODE /* first we ask the kernel nicely, most often this frees up some sqes */ int res = iouring_enter (EV_A_ EV_TS_CONST (0.)); ECB_MEMORY_FENCE_ACQUIRE; /* better safe than sorry */ if (res >= 0) continue; /* yes, it worked, try again */ #endif /* some problem, possibly EBUSY - do the full poll and let it handle any issues */ iouring_poll (EV_A_ EV_TS_CONST (0.)); /* iouring_poll should have done ECB_MEMORY_FENCE_ACQUIRE for us */ } /*assert (("libev: io_uring queue full after flush", tail + 1 - EV_SQ_VAR (head) <= EV_SQ_VAR (ring_entries)));*/ return EV_SQES + (tail & EV_SQ_VAR (ring_mask)); } inline_size struct io_uring_sqe * iouring_sqe_submit (EV_P_ struct io_uring_sqe *sqe) { unsigned idx = sqe - EV_SQES; EV_SQ_ARRAY [idx] = idx; ECB_MEMORY_FENCE_RELEASE; ++EV_SQ_VAR (tail); /*ECB_MEMORY_FENCE_RELEASE; /* for the time being we assume this is not needed */ ++iouring_to_submit; } /*****************************************************************************/ /* when the timerfd expires we simply note the fact, * as the purpose of the timerfd is to wake us up, nothing else. * the next iteration should re-set it. */ static void iouring_tfd_cb (EV_P_ struct ev_io *w, int revents) { iouring_tfd_to = EV_TSTAMP_HUGE; } /* called for full and partial cleanup */ ecb_cold static int iouring_internal_destroy (EV_P) { close (iouring_tfd); close (iouring_fd); if (iouring_sq_ring != MAP_FAILED) munmap (iouring_sq_ring, iouring_sq_ring_size); if (iouring_cq_ring != MAP_FAILED) munmap (iouring_cq_ring, iouring_cq_ring_size); if (iouring_sqes != MAP_FAILED) munmap (iouring_sqes , iouring_sqes_size ); if (ev_is_active (&iouring_tfd_w)) { ev_ref (EV_A); ev_io_stop (EV_A_ &iouring_tfd_w); } } ecb_cold static int iouring_internal_init (EV_P) { struct io_uring_params params = { 0 }; iouring_to_submit = 0; iouring_tfd = -1; iouring_sq_ring = MAP_FAILED; iouring_cq_ring = MAP_FAILED; iouring_sqes = MAP_FAILED; if (!have_monotonic) /* cannot really happen, but what if11 */ return -1; for (;;) { iouring_fd = evsys_io_uring_setup (iouring_entries, ¶ms); if (iouring_fd >= 0) break; /* yippie */ if (errno != EINVAL) return -1; /* we failed */ #if TODO if ((~params.features) & (IORING_FEAT_NODROP | IORING_FEATURE_SINGLE_MMAP | IORING_FEAT_SUBMIT_STABLE)) return -1; /* we require the above features */ #endif /* EINVAL: lots of possible reasons, but maybe * it is because we hit the unqueryable hardcoded size limit */ /* we hit the limit already, give up */ if (iouring_max_entries) return -1; /* first time we hit EINVAL? 
assume we hit the limit, so go back and retry */ iouring_entries >>= 1; iouring_max_entries = iouring_entries; } iouring_sq_ring_size = params.sq_off.array + params.sq_entries * sizeof (unsigned); iouring_cq_ring_size = params.cq_off.cqes + params.cq_entries * sizeof (struct io_uring_cqe); iouring_sqes_size = params.sq_entries * sizeof (struct io_uring_sqe); iouring_sq_ring = mmap (0, iouring_sq_ring_size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQ_RING); iouring_cq_ring = mmap (0, iouring_cq_ring_size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_CQ_RING); iouring_sqes = mmap (0, iouring_sqes_size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, iouring_fd, IORING_OFF_SQES); if (iouring_sq_ring == MAP_FAILED || iouring_cq_ring == MAP_FAILED || iouring_sqes == MAP_FAILED) return -1; iouring_sq_head = params.sq_off.head; iouring_sq_tail = params.sq_off.tail; iouring_sq_ring_mask = params.sq_off.ring_mask; iouring_sq_ring_entries = params.sq_off.ring_entries; iouring_sq_flags = params.sq_off.flags; iouring_sq_dropped = params.sq_off.dropped; iouring_sq_array = params.sq_off.array; iouring_cq_head = params.cq_off.head; iouring_cq_tail = params.cq_off.tail; iouring_cq_ring_mask = params.cq_off.ring_mask; iouring_cq_ring_entries = params.cq_off.ring_entries; iouring_cq_overflow = params.cq_off.overflow; iouring_cq_cqes = params.cq_off.cqes; iouring_tfd = timerfd_create (CLOCK_MONOTONIC, TFD_CLOEXEC); if (iouring_tfd < 0) return iouring_tfd; iouring_tfd_to = EV_TSTAMP_HUGE; return 0; } ecb_cold static void iouring_fork (EV_P) { iouring_internal_destroy (EV_A); while (iouring_internal_init (EV_A) < 0) ev_syserr ("(libev) io_uring_setup"); fd_rearm_all (EV_A); ev_io_stop (EV_A_ &iouring_tfd_w); ev_io_set (EV_A_ &iouring_tfd_w, iouring_tfd, EV_READ); ev_io_start (EV_A_ &iouring_tfd_w); } /*****************************************************************************/ static void iouring_modify (EV_P_ int fd, int oev, int nev) { if (oev) { /* we assume the sqe's are all "properly" initialised */ struct io_uring_sqe *sqe = iouring_sqe_get (EV_A); sqe->opcode = IORING_OP_POLL_REMOVE; sqe->fd = fd; /* Jens Axboe notified me that user_data is not what is documented, but is * some kind of unique ID that has to match, otherwise the request cannot * be removed. Since we don't *really* have that, we pass in the old * generation counter - if that fails, too bad, it will hopefully be removed * at close time and then be ignored. */ sqe->addr = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32); sqe->user_data = (uint64_t)-1; iouring_sqe_submit (EV_A_ sqe); /* increment generation counter to avoid handling old events */ ++anfds [fd].egen; } if (nev) { struct io_uring_sqe *sqe = iouring_sqe_get (EV_A); sqe->opcode = IORING_OP_POLL_ADD; sqe->fd = fd; sqe->addr = 0; sqe->user_data = (uint32_t)fd | ((__u64)(uint32_t)anfds [fd].egen << 32); sqe->poll_events = (nev & EV_READ ? POLLIN : 0) | (nev & EV_WRITE ? POLLOUT : 0); iouring_sqe_submit (EV_A_ sqe); } } inline_size void iouring_tfd_update (EV_P_ ev_tstamp timeout) { ev_tstamp tfd_to = mn_now + timeout; /* we assume there will be many iterations per timer change, so * we only re-set the timerfd when we have to because its expiry * is too late. 
*/ if (ecb_expect_false (tfd_to < iouring_tfd_to)) { struct itimerspec its; iouring_tfd_to = tfd_to; EV_TS_SET (its.it_interval, 0.); EV_TS_SET (its.it_value, tfd_to); if (timerfd_settime (iouring_tfd, TFD_TIMER_ABSTIME, &its, 0) < 0) assert (("libev: iouring timerfd_settime failed", 0)); } } inline_size void iouring_process_cqe (EV_P_ struct io_uring_cqe *cqe) { int fd = cqe->user_data & 0xffffffffU; uint32_t gen = cqe->user_data >> 32; int res = cqe->res; /* user_data -1 is a remove that we are not atm. interested in */ if (cqe->user_data == (uint64_t)-1) return; assert (("libev: io_uring fd must be in-bounds", fd >= 0 && fd < anfdmax)); /* documentation lies, of course. the result value is NOT like * normal syscalls, but like linux raw syscalls, i.e. negative * error numbers. fortunate, as otherwise there would be no way * to get error codes at all. still, why not document this? */ /* ignore event if generation doesn't match */ /* other than skipping removal events, */ /* this should actually be very rare */ if (ecb_expect_false (gen != (uint32_t)anfds [fd].egen)) return; if (ecb_expect_false (res < 0)) { /*TODO: EINVAL handling (was something failed with this fd)*/ if (res == -EBADF) { assert (("libev: event loop rejected bad fd", res != -EBADF)); fd_kill (EV_A_ fd); } else { errno = -res; ev_syserr ("(libev) IORING_OP_POLL_ADD"); } return; } /* feed events, we do not expect or handle POLLNVAL */ fd_event ( EV_A_ fd, (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0) | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0) ); /* io_uring is oneshot, so we need to re-arm the fd next iteration */ /* this also means we usually have to do at least one syscall per iteration */ anfds [fd].events = 0; fd_change (EV_A_ fd, EV_ANFD_REIFY); } /* called when the event queue overflows */ ecb_cold static void iouring_overflow (EV_P) { /* we have two options, resize the queue (by tearing down * everything and recreating it, or living with it * and polling. * we implement this by resizing the queue, and, if that fails, * we just recreate the state on every failure, which * kind of is a very inefficient poll. * one danger is, due to the bios toward lower fds, * we will only really get events for those, so * maybe we need a poll() fallback, after all. */ /*EV_CQ_VAR (overflow) = 0;*/ /* need to do this if we keep the state and poll manually */ fd_rearm_all (EV_A); /* we double the size until we hit the hard-to-probe maximum */ if (!iouring_max_entries) { iouring_entries <<= 1; iouring_fork (EV_A); } else { /* we hit the kernel limit, we should fall back to something else. * we can either poll() a few times and hope for the best, * poll always, or switch to epoll. * TODO: is this necessary with newer kernels? */ iouring_internal_destroy (EV_A); /* this should make it so that on return, we don't call any uring functions */ iouring_to_submit = 0; for (;;) { backend = epoll_init (EV_A_ 0); if (backend) break; ev_syserr ("(libev) iouring switch to epoll"); } } } /* handle any events in the completion queue, return true if there were any */ static int iouring_handle_cq (EV_P) { unsigned head, tail, mask; head = EV_CQ_VAR (head); ECB_MEMORY_FENCE_ACQUIRE; tail = EV_CQ_VAR (tail); if (head == tail) return 0; /* it can only overflow if we have events, yes, yes? 
*/ if (ecb_expect_false (EV_CQ_VAR (overflow))) { iouring_overflow (EV_A); return 1; } mask = EV_CQ_VAR (ring_mask); do iouring_process_cqe (EV_A_ &EV_CQES [head++ & mask]); while (head != tail); EV_CQ_VAR (head) = head; ECB_MEMORY_FENCE_RELEASE; return 1; } static void iouring_poll (EV_P_ ev_tstamp timeout) { /* if we have events, no need for extra syscalls, but we might have to queue events */ /* we also clar the timeout if there are outstanding fdchanges */ /* the latter should only happen if both the sq and cq are full, most likely */ /* because we have a lot of event sources that immediately complete */ /* TODO: fdchacngecnt is always 0 because fd_reify does not have two buffers yet */ if (iouring_handle_cq (EV_A) || fdchangecnt) timeout = EV_TS_CONST (0.); else /* no events, so maybe wait for some */ iouring_tfd_update (EV_A_ timeout); /* only enter the kernel if we have something to submit, or we need to wait */ if (timeout || iouring_to_submit) { int res = iouring_enter (EV_A_ timeout); if (ecb_expect_false (res < 0)) if (errno == EINTR) /* ignore */; else if (errno == EBUSY) /* cq full, cannot submit - should be rare because we flush the cq first, so simply ignore */; else ev_syserr ("(libev) iouring setup"); else iouring_handle_cq (EV_A); } } inline_size int iouring_init (EV_P_ int flags) { iouring_entries = IOURING_INIT_ENTRIES; iouring_max_entries = 0; if (iouring_internal_init (EV_A) < 0) { iouring_internal_destroy (EV_A); return 0; } ev_io_init (&iouring_tfd_w, iouring_tfd_cb, iouring_tfd, EV_READ); ev_set_priority (&iouring_tfd_w, EV_MINPRI); ev_io_start (EV_A_ &iouring_tfd_w); ev_unref (EV_A); /* watcher should not keep loop alive */ backend_modify = iouring_modify; backend_poll = iouring_poll; return EVBACKEND_IOURING; } inline_size void iouring_destroy (EV_P) { iouring_internal_destroy (EV_A); } gevent-24.11.1/deps/libev/ev_kqueue.c000066400000000000000000000156601471441230600173530ustar00rootroot00000000000000/* * libev kqueue backend * * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2016,2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. 
* * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #include #include #include #include #include inline_speed void kqueue_change (EV_P_ int fd, int filter, int flags, int fflags) { ++kqueue_changecnt; array_needsize (struct kevent, kqueue_changes, kqueue_changemax, kqueue_changecnt, array_needsize_noinit); EV_SET (&kqueue_changes [kqueue_changecnt - 1], fd, filter, flags, fflags, 0, 0); } /* OS X at least needs this */ #ifndef EV_ENABLE # define EV_ENABLE 0 #endif #ifndef NOTE_EOF # define NOTE_EOF 0 #endif static void kqueue_modify (EV_P_ int fd, int oev, int nev) { if (oev != nev) { if (oev & EV_READ) kqueue_change (EV_A_ fd, EVFILT_READ , EV_DELETE, 0); if (oev & EV_WRITE) kqueue_change (EV_A_ fd, EVFILT_WRITE, EV_DELETE, 0); } /* to detect close/reopen reliably, we have to re-add */ /* event requests even when oev == nev */ if (nev & EV_READ) kqueue_change (EV_A_ fd, EVFILT_READ , EV_ADD | EV_ENABLE, NOTE_EOF); if (nev & EV_WRITE) kqueue_change (EV_A_ fd, EVFILT_WRITE, EV_ADD | EV_ENABLE, NOTE_EOF); } static void kqueue_poll (EV_P_ ev_tstamp timeout) { int res, i; struct timespec ts; /* need to resize so there is enough space for errors */ if (kqueue_changecnt > kqueue_eventmax) { ev_free (kqueue_events); kqueue_eventmax = array_nextsize (sizeof (struct kevent), kqueue_eventmax, kqueue_changecnt); kqueue_events = (struct kevent *)ev_malloc (sizeof (struct kevent) * kqueue_eventmax); } EV_RELEASE_CB; EV_TS_SET (ts, timeout); res = kevent (backend_fd, kqueue_changes, kqueue_changecnt, kqueue_events, kqueue_eventmax, &ts); EV_ACQUIRE_CB; kqueue_changecnt = 0; if (ecb_expect_false (res < 0)) { if (errno != EINTR) ev_syserr ("(libev) kqueue kevent"); return; } for (i = 0; i < res; ++i) { int fd = kqueue_events [i].ident; if (ecb_expect_false (kqueue_events [i].flags & EV_ERROR)) { int err = kqueue_events [i].data; /* we are only interested in errors for fds that we are interested in :) */ if (anfds [fd].events) { if (err == ENOENT) /* resubmit changes on ENOENT */ kqueue_modify (EV_A_ fd, 0, anfds [fd].events); else if (err == EBADF) /* on EBADF, we re-check the fd */ { if (fd_valid (fd)) kqueue_modify (EV_A_ fd, 0, anfds [fd].events); else { assert (("libev: kqueue found invalid fd", 0)); fd_kill (EV_A_ fd); } } else /* on all other errors, we error out on the fd */ { assert (("libev: kqueue found invalid fd", 0)); fd_kill (EV_A_ fd); } } } else fd_event ( EV_A_ fd, kqueue_events [i].filter == EVFILT_READ ? EV_READ : kqueue_events [i].filter == EVFILT_WRITE ? 
EV_WRITE : 0 ); } if (ecb_expect_false (res == kqueue_eventmax)) { ev_free (kqueue_events); kqueue_eventmax = array_nextsize (sizeof (struct kevent), kqueue_eventmax, kqueue_eventmax + 1); kqueue_events = (struct kevent *)ev_malloc (sizeof (struct kevent) * kqueue_eventmax); } } inline_size int kqueue_init (EV_P_ int flags) { /* initialize the kernel queue */ kqueue_fd_pid = getpid (); if ((backend_fd = kqueue ()) < 0) return 0; fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* not sure if necessary, hopefully doesn't hurt */ backend_mintime = EV_TS_CONST (1e-9); /* apparently, they did the right thing in freebsd */ backend_modify = kqueue_modify; backend_poll = kqueue_poll; kqueue_eventmax = 64; /* initial number of events receivable per poll */ kqueue_events = (struct kevent *)ev_malloc (sizeof (struct kevent) * kqueue_eventmax); kqueue_changes = 0; kqueue_changemax = 0; kqueue_changecnt = 0; return EVBACKEND_KQUEUE; } inline_size void kqueue_destroy (EV_P) { ev_free (kqueue_events); ev_free (kqueue_changes); } inline_size void kqueue_fork (EV_P) { /* some BSD kernels don't just destroy the kqueue itself, * but also close the fd, which isn't documented, and * impossible to support properly. * we remember the pid of the kqueue call and only close * the fd if the pid is still the same. * this leaks fds on sane kernels, but BSD interfaces are * notoriously buggy and rarely get fixed. */ pid_t newpid = getpid (); if (newpid == kqueue_fd_pid) close (backend_fd); kqueue_fd_pid = newpid; while ((backend_fd = kqueue ()) < 0) ev_syserr ("(libev) kqueue"); fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* re-register interest in fds */ fd_rearm_all (EV_A); } /* sys/event.h defines EV_ERROR */ #undef EV_ERROR gevent-24.11.1/deps/libev/ev_linuxaio.c000066400000000000000000000517661471441230600177130ustar00rootroot00000000000000/* * libev linux aio fd activity backend * * Copyright (c) 2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. 
If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ /* * general notes about linux aio: * * a) at first, the linux aio IOCB_CMD_POLL functionality introduced in * 4.18 looks too good to be true: both watchers and events can be * batched, and events can even be handled in userspace using * a ring buffer shared with the kernel. watchers can be canceled * regardless of whether the fd has been closed. no problems with fork. * ok, the ring buffer is 200% undocumented (there isn't even a * header file), but otherwise, it's pure bliss! * b) ok, watchers are one-shot, so you have to re-arm active ones * on every iteration. so much for syscall-less event handling, * but at least these re-arms can be batched, no big deal, right? * c) well, linux as usual: the documentation lies to you: io_submit * sometimes returns EINVAL because the kernel doesn't feel like * handling your poll mask - ttys can be polled for POLLOUT, * POLLOUT|POLLIN, but polling for POLLIN fails. just great, * so we have to fall back to something else (hello, epoll), * but at least the fallback can be slow, because these are * exceptional cases, right? * d) hmm, you have to tell the kernel the maximum number of watchers * you want to queue when initialising the aio context. but of * course the real limit is magically calculated in the kernel, and * is often higher then we asked for. so we just have to destroy * the aio context and re-create it a bit larger if we hit the limit. * (starts to remind you of epoll? well, it's a bit more deterministic * and less gambling, but still ugly as hell). * e) that's when you find out you can also hit an arbitrary system-wide * limit. or the kernel simply doesn't want to handle your watchers. * what the fuck do we do then? you guessed it, in the middle * of event handling we have to switch to 100% epoll polling. and * that better is as fast as normal epoll polling, so you practically * have to use the normal epoll backend with all its quirks. * f) end result of this train wreck: it inherits all the disadvantages * from epoll, while adding a number on its own. why even bother to use * it? because if conditions are right and your fds are supported and you * don't hit a limit, this backend is actually faster, doesn't gamble with * your fds, batches watchers and events and doesn't require costly state * recreates. well, until it does. * g) all of this makes this backend use almost twice as much code as epoll. * which in turn uses twice as much code as poll. and that#s not counting * the fact that this backend also depends on the epoll backend, making * it three times as much code as poll, or kqueue. * h) bleah. why can't linux just do kqueue. sure kqueue is ugly, but by now * it's clear that whatever linux comes up with is far, far, far worse. 
*/ #include /* actually linux/time.h, but we must assume they are compatible */ #include #include /*****************************************************************************/ /* syscall wrapdadoop - this section has the raw api/abi definitions */ #include /* no glibc wrappers */ /* aio_abi.h is not versioned in any way, so we cannot test for its existance */ #define IOCB_CMD_POLL 5 /* taken from linux/fs/aio.c. yup, that's a .c file. * not only is this totally undocumented, not even the source code * can tell you what the future semantics of compat_features and * incompat_features are, or what header_length actually is for. */ #define AIO_RING_MAGIC 0xa10a10a1 #define EV_AIO_RING_INCOMPAT_FEATURES 0 struct aio_ring { unsigned id; /* kernel internal index number */ unsigned nr; /* number of io_events */ unsigned head; /* Written to by userland or by kernel. */ unsigned tail; unsigned magic; unsigned compat_features; unsigned incompat_features; unsigned header_length; /* size of aio_ring */ struct io_event io_events[0]; }; inline_size int evsys_io_setup (unsigned nr_events, aio_context_t *ctx_idp) { return ev_syscall2 (SYS_io_setup, nr_events, ctx_idp); } inline_size int evsys_io_destroy (aio_context_t ctx_id) { return ev_syscall1 (SYS_io_destroy, ctx_id); } inline_size int evsys_io_submit (aio_context_t ctx_id, long nr, struct iocb *cbp[]) { return ev_syscall3 (SYS_io_submit, ctx_id, nr, cbp); } inline_size int evsys_io_cancel (aio_context_t ctx_id, struct iocb *cbp, struct io_event *result) { return ev_syscall3 (SYS_io_cancel, ctx_id, cbp, result); } inline_size int evsys_io_getevents (aio_context_t ctx_id, long min_nr, long nr, struct io_event *events, struct timespec *timeout) { return ev_syscall5 (SYS_io_getevents, ctx_id, min_nr, nr, events, timeout); } /*****************************************************************************/ /* actual backed implementation */ ecb_cold static int linuxaio_nr_events (EV_P) { /* we start with 16 iocbs and incraese from there * that's tiny, but the kernel has a rather low system-wide * limit that can be reached quickly, so let's be parsimonious * with this resource. * Rest assured, the kernel generously rounds up small and big numbers * in different ways (but doesn't seem to charge you for it). * The 15 here is because the kernel usually has a power of two as aio-max-nr, * and this helps to take advantage of that limit. */ /* we try to fill 4kB pages exactly. * the ring buffer header is 32 bytes, every io event is 32 bytes. * the kernel takes the io requests number, doubles it, adds 2 * and adds the ring buffer. * the way we use this is by starting low, and then roughly doubling the * size each time we hit a limit. */ int requests = 15 << linuxaio_iteration; int one_page = (4096 / sizeof (struct io_event) ) / 2; /* how many fit into one page */ int first_page = ((4096 - sizeof (struct aio_ring)) / sizeof (struct io_event) - 2) / 2; /* how many fit into the first page */ /* if everything fits into one page, use count exactly */ if (requests > first_page) /* otherwise, round down to full pages and add the first page */ requests = requests / one_page * one_page + first_page; return requests; } /* we use out own wrapper structure in case we ever want to do something "clever" */ typedef struct aniocb { struct iocb io; /*int inuse;*/ } *ANIOCBP; inline_size void linuxaio_array_needsize_iocbp (ANIOCBP *base, int offset, int count) { while (count--) { /* TODO: quite the overhead to allocate every iocb separately, maybe use our own allocator? 
*/ ANIOCBP iocb = (ANIOCBP)ev_malloc (sizeof (*iocb)); /* full zero initialise is probably not required at the moment, but * this is not well documented, so we better do it. */ memset (iocb, 0, sizeof (*iocb)); iocb->io.aio_lio_opcode = IOCB_CMD_POLL; iocb->io.aio_fildes = offset; base [offset++] = iocb; } } ecb_cold static void linuxaio_free_iocbp (EV_P) { while (linuxaio_iocbpmax--) ev_free (linuxaio_iocbps [linuxaio_iocbpmax]); linuxaio_iocbpmax = 0; /* next resize will completely reallocate the array, at some overhead */ } static void linuxaio_modify (EV_P_ int fd, int oev, int nev) { array_needsize (ANIOCBP, linuxaio_iocbps, linuxaio_iocbpmax, fd + 1, linuxaio_array_needsize_iocbp); ANIOCBP iocb = linuxaio_iocbps [fd]; ANFD *anfd = &anfds [fd]; if (ecb_expect_false (iocb->io.aio_reqprio < 0)) { /* we handed this fd over to epoll, so undo this first */ /* we do it manually because the optimisations on epoll_modify won't do us any good */ epoll_ctl (backend_fd, EPOLL_CTL_DEL, fd, 0); anfd->emask = 0; iocb->io.aio_reqprio = 0; } else if (ecb_expect_false (iocb->io.aio_buf)) { /* iocb active, so cancel it first before resubmit */ /* this assumes we only ever get one call per fd per loop iteration */ for (;;) { /* on all relevant kernels, io_cancel fails with EINPROGRESS on "success" */ if (ecb_expect_false (evsys_io_cancel (linuxaio_ctx, &iocb->io, (struct io_event *)0) == 0)) break; if (ecb_expect_true (errno == EINPROGRESS)) break; /* the EINPROGRESS test is for nicer error message. clumsy. */ if (errno != EINTR) { assert (("libev: linuxaio unexpected io_cancel failed", errno != EINTR && errno != EINPROGRESS)); break; } } /* increment generation counter to avoid handling old events */ ++anfd->egen; } iocb->io.aio_buf = (nev & EV_READ ? POLLIN : 0) | (nev & EV_WRITE ? POLLOUT : 0); if (nev) { iocb->io.aio_data = (uint32_t)fd | ((__u64)(uint32_t)anfd->egen << 32); /* queue iocb up for io_submit */ /* this assumes we only ever get one call per fd per loop iteration */ ++linuxaio_submitcnt; array_needsize (struct iocb *, linuxaio_submits, linuxaio_submitmax, linuxaio_submitcnt, array_needsize_noinit); linuxaio_submits [linuxaio_submitcnt - 1] = &iocb->io; } } static void linuxaio_epoll_cb (EV_P_ struct ev_io *w, int revents) { epoll_poll (EV_A_ 0); } inline_speed void linuxaio_fd_rearm (EV_P_ int fd) { anfds [fd].events = 0; linuxaio_iocbps [fd]->io.aio_buf = 0; fd_change (EV_A_ fd, EV_ANFD_REIFY); } static void linuxaio_parse_events (EV_P_ struct io_event *ev, int nr) { while (nr) { int fd = ev->data & 0xffffffff; uint32_t gen = ev->data >> 32; int res = ev->res; assert (("libev: iocb fd must be in-bounds", fd >= 0 && fd < anfdmax)); /* only accept events if generation counter matches */ if (ecb_expect_true (gen == (uint32_t)anfds [fd].egen)) { /* feed events, we do not expect or handle POLLNVAL */ fd_event ( EV_A_ fd, (res & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0) | (res & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0) ); /* linux aio is oneshot: rearm fd. 
TODO: this does more work than strictly needed */ linuxaio_fd_rearm (EV_A_ fd); } --nr; ++ev; } } /* get any events from ring buffer, return true if any were handled */ static int linuxaio_get_events_from_ring (EV_P) { struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx; unsigned head, tail; /* the kernel reads and writes both of these variables, */ /* as a C extension, we assume that volatile use here */ /* both makes reads atomic and once-only */ head = *(volatile unsigned *)&ring->head; ECB_MEMORY_FENCE_ACQUIRE; tail = *(volatile unsigned *)&ring->tail; if (head == tail) return 0; /* parse all available events, but only once, to avoid starvation */ if (ecb_expect_true (tail > head)) /* normal case around */ linuxaio_parse_events (EV_A_ ring->io_events + head, tail - head); else /* wrapped around */ { linuxaio_parse_events (EV_A_ ring->io_events + head, ring->nr - head); linuxaio_parse_events (EV_A_ ring->io_events, tail); } ECB_MEMORY_FENCE_RELEASE; /* as an extension to C, we hope that the volatile will make this atomic and once-only */ *(volatile unsigned *)&ring->head = tail; return 1; } inline_size int linuxaio_ringbuf_valid (EV_P) { struct aio_ring *ring = (struct aio_ring *)linuxaio_ctx; return ecb_expect_true (ring->magic == AIO_RING_MAGIC) && ring->incompat_features == EV_AIO_RING_INCOMPAT_FEATURES && ring->header_length == sizeof (struct aio_ring); /* TODO: or use it to find io_event[0]? */ } /* read at least one event from kernel, or timeout */ inline_size void linuxaio_get_events (EV_P_ ev_tstamp timeout) { struct timespec ts; struct io_event ioev[8]; /* 256 octet stack space */ int want = 1; /* how many events to request */ int ringbuf_valid = linuxaio_ringbuf_valid (EV_A); if (ecb_expect_true (ringbuf_valid)) { /* if the ring buffer has any events, we don't wait or call the kernel at all */ if (linuxaio_get_events_from_ring (EV_A)) return; /* if the ring buffer is empty, and we don't have a timeout, then don't call the kernel */ if (!timeout) return; } else /* no ringbuffer, request slightly larger batch */ want = sizeof (ioev) / sizeof (ioev [0]); /* no events, so wait for some * for fairness reasons, we do this in a loop, to fetch all events */ for (;;) { int res; EV_RELEASE_CB; EV_TS_SET (ts, timeout); res = evsys_io_getevents (linuxaio_ctx, 1, want, ioev, &ts); EV_ACQUIRE_CB; if (res < 0) if (errno == EINTR) /* ignored, retry */; else ev_syserr ("(libev) linuxaio io_getevents"); else if (res) { /* at least one event available, handle them */ linuxaio_parse_events (EV_A_ ioev, res); if (ecb_expect_true (ringbuf_valid)) { /* if we have a ring buffer, handle any remaining events in it */ linuxaio_get_events_from_ring (EV_A); /* at this point, we should have handled all outstanding events */ break; } else if (res < want) /* otherwise, if there were fewere events than we wanted, we assume there are no more */ break; } else break; /* no events from the kernel, we are done */ timeout = EV_TS_CONST (0.); /* only wait in the first iteration */ } } inline_size int linuxaio_io_setup (EV_P) { linuxaio_ctx = 0; return evsys_io_setup (linuxaio_nr_events (EV_A), &linuxaio_ctx); } static void linuxaio_poll (EV_P_ ev_tstamp timeout) { int submitted; /* first phase: submit new iocbs */ /* io_submit might return less than the requested number of iocbs */ /* this is, afaics, only because of errors, but we go by the book and use a loop, */ /* which allows us to pinpoint the erroneous iocb */ for (submitted = 0; submitted < linuxaio_submitcnt; ) { int res = evsys_io_submit (linuxaio_ctx, 
linuxaio_submitcnt - submitted, linuxaio_submits + submitted); if (ecb_expect_false (res < 0)) if (errno == EINVAL) { /* This happens for unsupported fds, officially, but in my testing, * also randomly happens for supported fds. We fall back to good old * poll() here, under the assumption that this is a very rare case. * See https://lore.kernel.org/patchwork/patch/1047453/ to see * discussion about such a case (ttys) where polling for POLLIN * fails but POLLIN|POLLOUT works. */ struct iocb *iocb = linuxaio_submits [submitted]; epoll_modify (EV_A_ iocb->aio_fildes, 0, anfds [iocb->aio_fildes].events); iocb->aio_reqprio = -1; /* mark iocb as epoll */ res = 1; /* skip this iocb - another iocb, another chance */ } else if (errno == EAGAIN) { /* This happens when the ring buffer is full, or some other shit we * don't know and isn't documented. Most likely because we have too * many requests and linux aio can't be assed to handle them. * In this case, we try to allocate a larger ring buffer, freeing * ours first. This might fail, in which case we have to fall back to 100% * epoll. * God, how I hate linux not getting its act together. Ever. */ evsys_io_destroy (linuxaio_ctx); linuxaio_submitcnt = 0; /* rearm all fds with active iocbs */ { int fd; for (fd = 0; fd < linuxaio_iocbpmax; ++fd) if (linuxaio_iocbps [fd]->io.aio_buf) linuxaio_fd_rearm (EV_A_ fd); } ++linuxaio_iteration; if (linuxaio_io_setup (EV_A) < 0) { /* TODO: rearm all and recreate epoll backend from scratch */ /* TODO: might be more prudent? */ /* to bad, we can't get a new aio context, go 100% epoll */ linuxaio_free_iocbp (EV_A); ev_io_stop (EV_A_ &linuxaio_epoll_w); ev_ref (EV_A); linuxaio_ctx = 0; backend = EVBACKEND_EPOLL; backend_modify = epoll_modify; backend_poll = epoll_poll; } timeout = EV_TS_CONST (0.); /* it's easiest to handle this mess in another iteration */ return; } else if (errno == EBADF) { assert (("libev: event loop rejected bad fd", errno != EBADF)); fd_kill (EV_A_ linuxaio_submits [submitted]->aio_fildes); res = 1; /* skip this iocb */ } else if (errno == EINTR) /* not seen in reality, not documented */ res = 0; /* silently ignore and retry */ else { ev_syserr ("(libev) linuxaio io_submit"); res = 0; } submitted += res; } linuxaio_submitcnt = 0; /* second phase: fetch and parse events */ linuxaio_get_events (EV_A_ timeout); } inline_size int linuxaio_init (EV_P_ int flags) { /* would be great to have a nice test for IOCB_CMD_POLL instead */ /* also: test some semi-common fd types, such as files and ttys in recommended_backends */ /* 4.18 introduced IOCB_CMD_POLL, 4.19 made epoll work, and we need that */ if (ev_linux_version () < 0x041300) return 0; if (!epoll_init (EV_A_ 0)) return 0; linuxaio_iteration = 0; if (linuxaio_io_setup (EV_A) < 0) { epoll_destroy (EV_A); return 0; } ev_io_init (&linuxaio_epoll_w, linuxaio_epoll_cb, backend_fd, EV_READ); ev_set_priority (&linuxaio_epoll_w, EV_MAXPRI); ev_io_start (EV_A_ &linuxaio_epoll_w); ev_unref (EV_A); /* watcher should not keep loop alive */ backend_modify = linuxaio_modify; backend_poll = linuxaio_poll; linuxaio_iocbpmax = 0; linuxaio_iocbps = 0; linuxaio_submits = 0; linuxaio_submitmax = 0; linuxaio_submitcnt = 0; return EVBACKEND_LINUXAIO; } inline_size void linuxaio_destroy (EV_P) { epoll_destroy (EV_A); linuxaio_free_iocbp (EV_A); evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */ } ecb_cold static void linuxaio_fork (EV_P) { linuxaio_submitcnt = 0; /* all pointers were invalidated */ linuxaio_free_iocbp (EV_A); /* this frees 
all iocbs, which is very heavy-handed */ evsys_io_destroy (linuxaio_ctx); /* fails in child, aio context is destroyed */ linuxaio_iteration = 0; /* we start over in the child */ while (linuxaio_io_setup (EV_A) < 0) ev_syserr ("(libev) linuxaio io_setup"); /* forking epoll should also effectively unregister all fds from the backend */ epoll_fork (EV_A); /* epoll_fork already did this. hopefully */ /*fd_rearm_all (EV_A);*/ ev_io_stop (EV_A_ &linuxaio_epoll_w); ev_io_set (EV_A_ &linuxaio_epoll_w, backend_fd, EV_READ); ev_io_start (EV_A_ &linuxaio_epoll_w); } gevent-24.11.1/deps/libev/ev_poll.c000066400000000000000000000110231471441230600170070ustar00rootroot00000000000000/* * libev poll fd activity backend * * Copyright (c) 2007,2008,2009,2010,2011,2016,2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #include inline_size void array_needsize_pollidx (int *base, int offset, int count) { /* using memset (.., -1, ...) is tempting, we we try * to be ultraportable */ base += offset; while (count--) *base++ = -1; } static void poll_modify (EV_P_ int fd, int oev, int nev) { int idx; if (oev == nev) return; array_needsize (int, pollidxs, pollidxmax, fd + 1, array_needsize_pollidx); idx = pollidxs [fd]; if (idx < 0) /* need to allocate a new pollfd */ { pollidxs [fd] = idx = pollcnt++; array_needsize (struct pollfd, polls, pollmax, pollcnt, array_needsize_noinit); polls [idx].fd = fd; } assert (polls [idx].fd == fd); if (nev) polls [idx].events = (nev & EV_READ ? POLLIN : 0) | (nev & EV_WRITE ? 
POLLOUT : 0); else /* remove pollfd */ { pollidxs [fd] = -1; if (ecb_expect_true (idx < --pollcnt)) { polls [idx] = polls [pollcnt]; pollidxs [polls [idx].fd] = idx; } } } static void poll_poll (EV_P_ ev_tstamp timeout) { struct pollfd *p; int res; EV_RELEASE_CB; res = poll (polls, pollcnt, EV_TS_TO_MSEC (timeout)); EV_ACQUIRE_CB; if (ecb_expect_false (res < 0)) { if (errno == EBADF) fd_ebadf (EV_A); else if (errno == ENOMEM && !syserr_cb) fd_enomem (EV_A); else if (errno != EINTR) ev_syserr ("(libev) poll"); } else for (p = polls; res; ++p) { assert (("libev: poll returned illegal result, broken BSD kernel?", p < polls + pollcnt)); if (ecb_expect_false (p->revents)) /* this expect is debatable */ { --res; if (ecb_expect_false (p->revents & POLLNVAL)) { assert (("libev: poll found invalid fd in poll set", 0)); fd_kill (EV_A_ p->fd); } else fd_event ( EV_A_ p->fd, (p->revents & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0) | (p->revents & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0) ); } } } inline_size int poll_init (EV_P_ int flags) { backend_mintime = EV_TS_CONST (1e-3); backend_modify = poll_modify; backend_poll = poll_poll; pollidxs = 0; pollidxmax = 0; polls = 0; pollmax = 0; pollcnt = 0; return EVBACKEND_POLL; } inline_size void poll_destroy (EV_P) { ev_free (pollidxs); ev_free (polls); } gevent-24.11.1/deps/libev/ev_port.c000066400000000000000000000146001471441230600170310ustar00rootroot00000000000000/* * libev solaris event port backend * * Copyright (c) 2007,2008,2009,2010,2011,2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. 
*/ /* useful reading: * * http://bugs.opensolaris.org/view_bug.do?bug_id=6268715 (random results) * http://bugs.opensolaris.org/view_bug.do?bug_id=6455223 (just totally broken) * http://bugs.opensolaris.org/view_bug.do?bug_id=6873782 (manpage ETIME) * http://bugs.opensolaris.org/view_bug.do?bug_id=6874410 (implementation ETIME) * http://www.mail-archive.com/networking-discuss@opensolaris.org/msg11898.html ETIME vs. nget * http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/event_port.c (libc) * http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/portfs/port.c#1325 (kernel) */ #include #include #include #include #include #include inline_speed void port_associate_and_check (EV_P_ int fd, int ev) { if (0 > port_associate ( backend_fd, PORT_SOURCE_FD, fd, (ev & EV_READ ? POLLIN : 0) | (ev & EV_WRITE ? POLLOUT : 0), 0 ) ) { if (errno == EBADFD) { assert (("libev: port_associate found invalid fd", errno != EBADFD)); fd_kill (EV_A_ fd); } else ev_syserr ("(libev) port_associate"); } } static void port_modify (EV_P_ int fd, int oev, int nev) { /* we need to reassociate no matter what, as closes are * once more silently being discarded. */ if (!nev) { if (oev) port_dissociate (backend_fd, PORT_SOURCE_FD, fd); } else port_associate_and_check (EV_A_ fd, nev); } static void port_poll (EV_P_ ev_tstamp timeout) { int res, i; struct timespec ts; uint_t nget = 1; /* we initialise this to something we will skip in the loop, as */ /* port_getn can return with nget unchanged, but no indication */ /* whether it was the original value or has been updated :/ */ port_events [0].portev_source = 0; EV_RELEASE_CB; EV_TS_SET (ts, timeout); res = port_getn (backend_fd, port_events, port_eventmax, &nget, &ts); EV_ACQUIRE_CB; /* port_getn may or may not set nget on error */ /* so we rely on port_events [0].portev_source not being updated */ if (res == -1 && errno != ETIME && errno != EINTR) ev_syserr ("(libev) port_getn (see http://bugs.opensolaris.org/view_bug.do?bug_id=6268715, try LIBEV_FLAGS=3 env variable)"); for (i = 0; i < nget; ++i) { if (port_events [i].portev_source == PORT_SOURCE_FD) { int fd = port_events [i].portev_object; fd_event ( EV_A_ fd, (port_events [i].portev_events & (POLLOUT | POLLERR | POLLHUP) ? EV_WRITE : 0) | (port_events [i].portev_events & (POLLIN | POLLERR | POLLHUP) ? EV_READ : 0) ); fd_change (EV_A_ fd, EV__IOFDSET); } } if (ecb_expect_false (nget == port_eventmax)) { ev_free (port_events); port_eventmax = array_nextsize (sizeof (port_event_t), port_eventmax, port_eventmax + 1); port_events = (port_event_t *)ev_malloc (sizeof (port_event_t) * port_eventmax); } } inline_size int port_init (EV_P_ int flags) { /* Initialize the kernel queue */ if ((backend_fd = port_create ()) < 0) return 0; assert (("libev: PORT_SOURCE_FD must not be zero", PORT_SOURCE_FD)); fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* not sure if necessary, hopefully doesn't hurt */ /* if my reading of the opensolaris kernel sources are correct, then * opensolaris does something very stupid: it checks if the time has already * elapsed and doesn't round up if that is the case, otherwise it DOES round * up. Since we can't know what the case is, we need to guess by using a * "large enough" timeout. Normally, 1e-9 would be correct. 
*/ backend_mintime = EV_TS_CONST (1e-3); /* needed to compensate for port_getn returning early */ backend_modify = port_modify; backend_poll = port_poll; port_eventmax = 64; /* initial number of events receivable per poll */ port_events = (port_event_t *)ev_malloc (sizeof (port_event_t) * port_eventmax); return EVBACKEND_PORT; } inline_size void port_destroy (EV_P) { ev_free (port_events); } inline_size void port_fork (EV_P) { close (backend_fd); while ((backend_fd = port_create ()) < 0) ev_syserr ("(libev) port"); fcntl (backend_fd, F_SETFD, FD_CLOEXEC); /* re-register interest in fds */ fd_rearm_all (EV_A); } gevent-24.11.1/deps/libev/ev_select.c000066400000000000000000000212251471441230600173250ustar00rootroot00000000000000/* * libev select fd activity backend * * Copyright (c) 2007,2008,2009,2010,2011 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. 
*/ #ifndef _WIN32 /* for unix systems */ # include # ifndef __hpux /* for REAL unix systems */ # include # endif #endif #ifndef EV_SELECT_USE_FD_SET # ifdef NFDBITS # define EV_SELECT_USE_FD_SET 0 # else # define EV_SELECT_USE_FD_SET 1 # endif #endif #if EV_SELECT_IS_WINSOCKET # undef EV_SELECT_USE_FD_SET # define EV_SELECT_USE_FD_SET 1 # undef NFDBITS # define NFDBITS 0 #endif #if !EV_SELECT_USE_FD_SET # define NFDBYTES (NFDBITS / 8) #endif #include static void select_modify (EV_P_ int fd, int oev, int nev) { if (oev == nev) return; { #if EV_SELECT_USE_FD_SET #if EV_SELECT_IS_WINSOCKET SOCKET handle = anfds [fd].handle; #else int handle = fd; #endif assert (("libev: fd >= FD_SETSIZE passed to fd_set-based select backend", fd < FD_SETSIZE)); /* FD_SET is broken on windows (it adds the fd to a set twice or more, * which eventually leads to overflows). Need to call it only on changes. */ #if EV_SELECT_IS_WINSOCKET if ((oev ^ nev) & EV_READ) #endif if (nev & EV_READ) FD_SET (handle, (fd_set *)vec_ri); else FD_CLR (handle, (fd_set *)vec_ri); #if EV_SELECT_IS_WINSOCKET if ((oev ^ nev) & EV_WRITE) #endif if (nev & EV_WRITE) FD_SET (handle, (fd_set *)vec_wi); else FD_CLR (handle, (fd_set *)vec_wi); #else int word = fd / NFDBITS; fd_mask mask = 1UL << (fd % NFDBITS); if (ecb_expect_false (vec_max <= word)) { int new_max = word + 1; vec_ri = ev_realloc (vec_ri, new_max * NFDBYTES); vec_ro = ev_realloc (vec_ro, new_max * NFDBYTES); /* could free/malloc */ vec_wi = ev_realloc (vec_wi, new_max * NFDBYTES); vec_wo = ev_realloc (vec_wo, new_max * NFDBYTES); /* could free/malloc */ #ifdef _WIN32 vec_eo = ev_realloc (vec_eo, new_max * NFDBYTES); /* could free/malloc */ #endif for (; vec_max < new_max; ++vec_max) ((fd_mask *)vec_ri) [vec_max] = ((fd_mask *)vec_wi) [vec_max] = 0; } ((fd_mask *)vec_ri) [word] |= mask; if (!(nev & EV_READ)) ((fd_mask *)vec_ri) [word] &= ~mask; ((fd_mask *)vec_wi) [word] |= mask; if (!(nev & EV_WRITE)) ((fd_mask *)vec_wi) [word] &= ~mask; #endif } } static void select_poll (EV_P_ ev_tstamp timeout) { struct timeval tv; int res; int fd_setsize; EV_RELEASE_CB; EV_TV_SET (tv, timeout); #if EV_SELECT_USE_FD_SET fd_setsize = sizeof (fd_set); #else fd_setsize = vec_max * NFDBYTES; #endif memcpy (vec_ro, vec_ri, fd_setsize); memcpy (vec_wo, vec_wi, fd_setsize); #ifdef _WIN32 /* pass in the write set as except set. * the idea behind this is to work around a windows bug that causes * errors to be reported as an exception and not by setting * the writable bit. this is so uncontrollably lame. */ memcpy (vec_eo, vec_wi, fd_setsize); res = select (vec_max * NFDBITS, (fd_set *)vec_ro, (fd_set *)vec_wo, (fd_set *)vec_eo, &tv); #elif EV_SELECT_USE_FD_SET fd_setsize = anfdmax < FD_SETSIZE ? anfdmax : FD_SETSIZE; res = select (fd_setsize, (fd_set *)vec_ro, (fd_set *)vec_wo, 0, &tv); #else res = select (vec_max * NFDBITS, (fd_set *)vec_ro, (fd_set *)vec_wo, 0, &tv); #endif EV_ACQUIRE_CB; if (ecb_expect_false (res < 0)) { #if EV_SELECT_IS_WINSOCKET errno = WSAGetLastError (); #endif #ifdef WSABASEERR /* on windows, select returns incompatible error codes, fix this */ if (errno >= WSABASEERR && errno < WSABASEERR + 1000) if (errno == WSAENOTSOCK) errno = EBADF; else errno -= WSABASEERR; #endif #ifdef _WIN32 /* select on windows erroneously returns EINVAL when no fd sets have been * provided (this is documented). what microsoft doesn't tell you that this bug * exists even when the fd sets _are_ provided, so we have to check for this bug * here and emulate by sleeping manually. 
* we also get EINVAL when the timeout is invalid, but we ignore this case here * and assume that EINVAL always means: you have to wait manually. */ if (errno == EINVAL) { if (timeout) { unsigned long ms = EV_TS_TO_MSEC (timeout); Sleep (ms ? ms : 1); } return; } #endif if (errno == EBADF) fd_ebadf (EV_A); else if (errno == ENOMEM && !syserr_cb) fd_enomem (EV_A); else if (errno != EINTR) ev_syserr ("(libev) select"); return; } #if EV_SELECT_USE_FD_SET { int fd; for (fd = 0; fd < anfdmax; ++fd) if (anfds [fd].events) { int events = 0; #if EV_SELECT_IS_WINSOCKET SOCKET handle = anfds [fd].handle; #else int handle = fd; #endif if (FD_ISSET (handle, (fd_set *)vec_ro)) events |= EV_READ; if (FD_ISSET (handle, (fd_set *)vec_wo)) events |= EV_WRITE; #ifdef _WIN32 if (FD_ISSET (handle, (fd_set *)vec_eo)) events |= EV_WRITE; #endif if (ecb_expect_true (events)) fd_event (EV_A_ fd, events); } } #else { int word, bit; for (word = vec_max; word--; ) { fd_mask word_r = ((fd_mask *)vec_ro) [word]; fd_mask word_w = ((fd_mask *)vec_wo) [word]; #ifdef _WIN32 word_w |= ((fd_mask *)vec_eo) [word]; #endif if (word_r || word_w) for (bit = NFDBITS; bit--; ) { fd_mask mask = 1UL << bit; int events = 0; events |= word_r & mask ? EV_READ : 0; events |= word_w & mask ? EV_WRITE : 0; if (ecb_expect_true (events)) fd_event (EV_A_ word * NFDBITS + bit, events); } } } #endif } inline_size int select_init (EV_P_ int flags) { backend_mintime = EV_TS_CONST (1e-6); backend_modify = select_modify; backend_poll = select_poll; #if EV_SELECT_USE_FD_SET vec_ri = ev_malloc (sizeof (fd_set)); FD_ZERO ((fd_set *)vec_ri); vec_ro = ev_malloc (sizeof (fd_set)); vec_wi = ev_malloc (sizeof (fd_set)); FD_ZERO ((fd_set *)vec_wi); vec_wo = ev_malloc (sizeof (fd_set)); #ifdef _WIN32 vec_eo = ev_malloc (sizeof (fd_set)); #endif #else vec_max = 0; vec_ri = 0; vec_ro = 0; vec_wi = 0; vec_wo = 0; #ifdef _WIN32 vec_eo = 0; #endif #endif return EVBACKEND_SELECT; } inline_size void select_destroy (EV_P) { ev_free (vec_ri); ev_free (vec_ro); ev_free (vec_wi); ev_free (vec_wo); #ifdef _WIN32 ev_free (vec_eo); #endif } gevent-24.11.1/deps/libev/ev_vars.h000066400000000000000000000165751471441230600170420ustar00rootroot00000000000000/* * loop member variable declarations * * Copyright (c) 2007,2008,2009,2010,2011,2012,2013,2019 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #define VARx(type,name) VAR(name, type name) VARx(ev_tstamp, now_floor) /* last time we refreshed rt_time */ VARx(ev_tstamp, mn_now) /* monotonic clock "now" */ VARx(ev_tstamp, rtmn_diff) /* difference realtime - monotonic time */ /* for reverse feeding of events */ VARx(W *, rfeeds) VARx(int, rfeedmax) VARx(int, rfeedcnt) VAR (pendings, ANPENDING *pendings [NUMPRI]) VAR (pendingmax, int pendingmax [NUMPRI]) VAR (pendingcnt, int pendingcnt [NUMPRI]) VARx(int, pendingpri) /* highest priority currently pending */ VARx(ev_prepare, pending_w) /* dummy pending watcher */ VARx(ev_tstamp, io_blocktime) VARx(ev_tstamp, timeout_blocktime) VARx(int, backend) VARx(int, activecnt) /* total number of active events ("refcount") */ VARx(EV_ATOMIC_T, loop_done) /* signal by ev_break */ VARx(int, backend_fd) VARx(ev_tstamp, backend_mintime) /* assumed typical timer resolution */ VAR (backend_modify, void (*backend_modify)(EV_P_ int fd, int oev, int nev)) VAR (backend_poll , void (*backend_poll)(EV_P_ ev_tstamp timeout)) VARx(ANFD *, anfds) VARx(int, anfdmax) VAR (evpipe, int evpipe [2]) VARx(ev_io, pipe_w) VARx(EV_ATOMIC_T, pipe_write_wanted) VARx(EV_ATOMIC_T, pipe_write_skipped) #if !defined(_WIN32) || EV_GENWRAP VARx(pid_t, curpid) #endif VARx(char, postfork) /* true if we need to recreate kernel state after fork */ #if EV_USE_SELECT || EV_GENWRAP VARx(void *, vec_ri) VARx(void *, vec_ro) VARx(void *, vec_wi) VARx(void *, vec_wo) #if defined(_WIN32) || EV_GENWRAP VARx(void *, vec_eo) #endif VARx(int, vec_max) #endif #if EV_USE_POLL || EV_GENWRAP VARx(struct pollfd *, polls) VARx(int, pollmax) VARx(int, pollcnt) VARx(int *, pollidxs) /* maps fds into structure indices */ VARx(int, pollidxmax) #endif #if EV_USE_EPOLL || EV_GENWRAP VARx(struct epoll_event *, epoll_events) VARx(int, epoll_eventmax) VARx(int *, epoll_eperms) VARx(int, epoll_epermcnt) VARx(int, epoll_epermmax) #endif #if EV_USE_LINUXAIO || EV_GENWRAP VARx(aio_context_t, linuxaio_ctx) VARx(int, linuxaio_iteration) VARx(struct aniocb **, linuxaio_iocbps) VARx(int, linuxaio_iocbpmax) VARx(struct iocb **, linuxaio_submits) VARx(int, linuxaio_submitcnt) VARx(int, linuxaio_submitmax) VARx(ev_io, linuxaio_epoll_w) #endif #if EV_USE_IOURING || EV_GENWRAP VARx(int, iouring_fd) VARx(unsigned, iouring_to_submit); VARx(int, iouring_entries) VARx(int, iouring_max_entries) VARx(void *, iouring_sq_ring) 
VARx(void *, iouring_cq_ring) VARx(void *, iouring_sqes) VARx(uint32_t, iouring_sq_ring_size) VARx(uint32_t, iouring_cq_ring_size) VARx(uint32_t, iouring_sqes_size) VARx(uint32_t, iouring_sq_head) VARx(uint32_t, iouring_sq_tail) VARx(uint32_t, iouring_sq_ring_mask) VARx(uint32_t, iouring_sq_ring_entries) VARx(uint32_t, iouring_sq_flags) VARx(uint32_t, iouring_sq_dropped) VARx(uint32_t, iouring_sq_array) VARx(uint32_t, iouring_cq_head) VARx(uint32_t, iouring_cq_tail) VARx(uint32_t, iouring_cq_ring_mask) VARx(uint32_t, iouring_cq_ring_entries) VARx(uint32_t, iouring_cq_overflow) VARx(uint32_t, iouring_cq_cqes) VARx(ev_tstamp, iouring_tfd_to) VARx(int, iouring_tfd) VARx(ev_io, iouring_tfd_w) #endif #if EV_USE_KQUEUE || EV_GENWRAP VARx(pid_t, kqueue_fd_pid) VARx(struct kevent *, kqueue_changes) VARx(int, kqueue_changemax) VARx(int, kqueue_changecnt) VARx(struct kevent *, kqueue_events) VARx(int, kqueue_eventmax) #endif #if EV_USE_PORT || EV_GENWRAP VARx(struct port_event *, port_events) VARx(int, port_eventmax) #endif #if EV_USE_IOCP || EV_GENWRAP VARx(HANDLE, iocp) #endif VARx(int *, fdchanges) VARx(int, fdchangemax) VARx(int, fdchangecnt) VARx(ANHE *, timers) VARx(int, timermax) VARx(int, timercnt) #if EV_PERIODIC_ENABLE || EV_GENWRAP VARx(ANHE *, periodics) VARx(int, periodicmax) VARx(int, periodiccnt) #endif #if EV_IDLE_ENABLE || EV_GENWRAP VAR (idles, ev_idle **idles [NUMPRI]) VAR (idlemax, int idlemax [NUMPRI]) VAR (idlecnt, int idlecnt [NUMPRI]) #endif VARx(int, idleall) /* total number */ VARx(struct ev_prepare **, prepares) VARx(int, preparemax) VARx(int, preparecnt) VARx(struct ev_check **, checks) VARx(int, checkmax) VARx(int, checkcnt) #if EV_FORK_ENABLE || EV_GENWRAP VARx(struct ev_fork **, forks) VARx(int, forkmax) VARx(int, forkcnt) #endif #if EV_CLEANUP_ENABLE || EV_GENWRAP VARx(struct ev_cleanup **, cleanups) VARx(int, cleanupmax) VARx(int, cleanupcnt) #endif #if EV_ASYNC_ENABLE || EV_GENWRAP VARx(EV_ATOMIC_T, async_pending) VARx(struct ev_async **, asyncs) VARx(int, asyncmax) VARx(int, asynccnt) #endif #if EV_USE_INOTIFY || EV_GENWRAP VARx(int, fs_fd) VARx(ev_io, fs_w) VARx(char, fs_2625) /* whether we are running in linux 2.6.25 or newer */ VAR (fs_hash, ANFS fs_hash [EV_INOTIFY_HASHSIZE]) #endif VARx(EV_ATOMIC_T, sig_pending) #if EV_USE_SIGNALFD || EV_GENWRAP VARx(int, sigfd) VARx(ev_io, sigfd_w) VARx(sigset_t, sigfd_set) #endif #if EV_USE_TIMERFD || EV_GENWRAP VARx(int, timerfd) /* timerfd for time jump detection */ VARx(ev_io, timerfd_w) #endif VARx(unsigned int, origflags) /* original loop flags */ #if EV_FEATURE_API || EV_GENWRAP VARx(unsigned int, loop_count) /* total number of loop iterations/blocks */ VARx(unsigned int, loop_depth) /* #ev_run enters - #ev_run leaves */ VARx(void *, userdata) /* C++ doesn't support the ev_loop_callback typedef here. stinks. */ VAR (release_cb, void (*release_cb)(EV_P) EV_NOEXCEPT) VAR (acquire_cb, void (*acquire_cb)(EV_P) EV_NOEXCEPT) VAR (invoke_cb , ev_loop_callback invoke_cb) #endif #undef VARx gevent-24.11.1/deps/libev/ev_win32.c000066400000000000000000000123421471441230600170100ustar00rootroot00000000000000/* * libev win32 compatibility cruft (_not_ a backend) * * Copyright (c) 2007,2008,2009 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #ifdef _WIN32 /* note: the comment below could not be substantiated, but what would I care */ /* MSDN says this is required to handle SIGFPE */ /* my wild guess would be that using something floating-pointy is required */ /* for the crt to do something about it */ volatile double SIGFPE_REQ = 0.0f; static SOCKET ev_tcp_socket (void) { #if EV_USE_WSASOCKET return WSASocket (AF_INET, SOCK_STREAM, 0, 0, 0, 0); #else return socket (AF_INET, SOCK_STREAM, 0); #endif } /* oh, the humanity! */ static int ev_pipe (int filedes [2]) { struct sockaddr_in addr = { 0 }; int addr_size = sizeof (addr); struct sockaddr_in adr2; int adr2_size = sizeof (adr2); SOCKET listener; SOCKET sock [2] = { -1, -1 }; if ((listener = ev_tcp_socket ()) == INVALID_SOCKET) return -1; addr.sin_family = AF_INET; addr.sin_addr.s_addr = htonl (INADDR_LOOPBACK); addr.sin_port = 0; if (bind (listener, (struct sockaddr *)&addr, addr_size)) goto fail; if (getsockname (listener, (struct sockaddr *)&addr, &addr_size)) goto fail; if (listen (listener, 1)) goto fail; if ((sock [0] = ev_tcp_socket ()) == INVALID_SOCKET) goto fail; if (connect (sock [0], (struct sockaddr *)&addr, addr_size)) goto fail; /* TODO: returns INVALID_SOCKET on winsock accept, not < 0. fix it */ /* when convenient, probably by just removing error checking altogether? */ if ((sock [1] = accept (listener, 0, 0)) < 0) goto fail; /* windows vista returns fantasy port numbers for sockets: * example for two interconnected tcp sockets: * * (Socket::unpack_sockaddr_in getsockname $sock0)[0] == 53364 * (Socket::unpack_sockaddr_in getpeername $sock0)[0] == 53363 * (Socket::unpack_sockaddr_in getsockname $sock1)[0] == 53363 * (Socket::unpack_sockaddr_in getpeername $sock1)[0] == 53365 * * wow! tridirectional sockets! 
* * this way of checking ports seems to work: */ if (getpeername (sock [0], (struct sockaddr *)&addr, &addr_size)) goto fail; if (getsockname (sock [1], (struct sockaddr *)&adr2, &adr2_size)) goto fail; errno = WSAEINVAL; if (addr_size != adr2_size || addr.sin_addr.s_addr != adr2.sin_addr.s_addr /* just to be sure, I mean, it's windows */ || addr.sin_port != adr2.sin_port) goto fail; closesocket (listener); #if EV_SELECT_IS_WINSOCKET filedes [0] = EV_WIN32_HANDLE_TO_FD (sock [0]); filedes [1] = EV_WIN32_HANDLE_TO_FD (sock [1]); #else /* when select isn't winsocket, we also expect socket, connect, accept etc. * to work on fds */ filedes [0] = sock [0]; filedes [1] = sock [1]; #endif return 0; fail: closesocket (listener); if (sock [0] != INVALID_SOCKET) closesocket (sock [0]); if (sock [1] != INVALID_SOCKET) closesocket (sock [1]); return -1; } #undef pipe #define pipe(filedes) ev_pipe (filedes) #define EV_HAVE_EV_TIME 1 ev_tstamp ev_time (void) { FILETIME ft; ULARGE_INTEGER ui; GetSystemTimeAsFileTime (&ft); ui.u.LowPart = ft.dwLowDateTime; ui.u.HighPart = ft.dwHighDateTime; /* also, msvc cannot convert ulonglong to double... yes, it is that sucky */ return EV_TS_FROM_USEC (((LONGLONG)(ui.QuadPart - 116444736000000000) * 1e-1)); } #endif gevent-24.11.1/deps/libev/ev_wrap.h000066400000000000000000000200601471441230600170200ustar00rootroot00000000000000/* DO NOT EDIT, automatically generated by update_ev_wrap */ #ifndef EV_WRAP_H #define EV_WRAP_H #define acquire_cb ((loop)->acquire_cb) #define activecnt ((loop)->activecnt) #define anfdmax ((loop)->anfdmax) #define anfds ((loop)->anfds) #define async_pending ((loop)->async_pending) #define asynccnt ((loop)->asynccnt) #define asyncmax ((loop)->asyncmax) #define asyncs ((loop)->asyncs) #define backend ((loop)->backend) #define backend_fd ((loop)->backend_fd) #define backend_mintime ((loop)->backend_mintime) #define backend_modify ((loop)->backend_modify) #define backend_poll ((loop)->backend_poll) #define checkcnt ((loop)->checkcnt) #define checkmax ((loop)->checkmax) #define checks ((loop)->checks) #define cleanupcnt ((loop)->cleanupcnt) #define cleanupmax ((loop)->cleanupmax) #define cleanups ((loop)->cleanups) #define curpid ((loop)->curpid) #define epoll_epermcnt ((loop)->epoll_epermcnt) #define epoll_epermmax ((loop)->epoll_epermmax) #define epoll_eperms ((loop)->epoll_eperms) #define epoll_eventmax ((loop)->epoll_eventmax) #define epoll_events ((loop)->epoll_events) #define evpipe ((loop)->evpipe) #define fdchangecnt ((loop)->fdchangecnt) #define fdchangemax ((loop)->fdchangemax) #define fdchanges ((loop)->fdchanges) #define forkcnt ((loop)->forkcnt) #define forkmax ((loop)->forkmax) #define forks ((loop)->forks) #define fs_2625 ((loop)->fs_2625) #define fs_fd ((loop)->fs_fd) #define fs_hash ((loop)->fs_hash) #define fs_w ((loop)->fs_w) #define idleall ((loop)->idleall) #define idlecnt ((loop)->idlecnt) #define idlemax ((loop)->idlemax) #define idles ((loop)->idles) #define invoke_cb ((loop)->invoke_cb) #define io_blocktime ((loop)->io_blocktime) #define iocp ((loop)->iocp) #define iouring_cq_cqes ((loop)->iouring_cq_cqes) #define iouring_cq_head ((loop)->iouring_cq_head) #define iouring_cq_overflow ((loop)->iouring_cq_overflow) #define iouring_cq_ring ((loop)->iouring_cq_ring) #define iouring_cq_ring_entries ((loop)->iouring_cq_ring_entries) #define iouring_cq_ring_mask ((loop)->iouring_cq_ring_mask) #define iouring_cq_ring_size ((loop)->iouring_cq_ring_size) #define iouring_cq_tail ((loop)->iouring_cq_tail) #define iouring_entries 
((loop)->iouring_entries) #define iouring_fd ((loop)->iouring_fd) #define iouring_max_entries ((loop)->iouring_max_entries) #define iouring_sq_array ((loop)->iouring_sq_array) #define iouring_sq_dropped ((loop)->iouring_sq_dropped) #define iouring_sq_flags ((loop)->iouring_sq_flags) #define iouring_sq_head ((loop)->iouring_sq_head) #define iouring_sq_ring ((loop)->iouring_sq_ring) #define iouring_sq_ring_entries ((loop)->iouring_sq_ring_entries) #define iouring_sq_ring_mask ((loop)->iouring_sq_ring_mask) #define iouring_sq_ring_size ((loop)->iouring_sq_ring_size) #define iouring_sq_tail ((loop)->iouring_sq_tail) #define iouring_sqes ((loop)->iouring_sqes) #define iouring_sqes_size ((loop)->iouring_sqes_size) #define iouring_tfd ((loop)->iouring_tfd) #define iouring_tfd_to ((loop)->iouring_tfd_to) #define iouring_tfd_w ((loop)->iouring_tfd_w) #define iouring_to_submit ((loop)->iouring_to_submit) #define kqueue_changecnt ((loop)->kqueue_changecnt) #define kqueue_changemax ((loop)->kqueue_changemax) #define kqueue_changes ((loop)->kqueue_changes) #define kqueue_eventmax ((loop)->kqueue_eventmax) #define kqueue_events ((loop)->kqueue_events) #define kqueue_fd_pid ((loop)->kqueue_fd_pid) #define linuxaio_ctx ((loop)->linuxaio_ctx) #define linuxaio_epoll_w ((loop)->linuxaio_epoll_w) #define linuxaio_iocbpmax ((loop)->linuxaio_iocbpmax) #define linuxaio_iocbps ((loop)->linuxaio_iocbps) #define linuxaio_iteration ((loop)->linuxaio_iteration) #define linuxaio_submitcnt ((loop)->linuxaio_submitcnt) #define linuxaio_submitmax ((loop)->linuxaio_submitmax) #define linuxaio_submits ((loop)->linuxaio_submits) #define loop_count ((loop)->loop_count) #define loop_depth ((loop)->loop_depth) #define loop_done ((loop)->loop_done) #define mn_now ((loop)->mn_now) #define now_floor ((loop)->now_floor) #define origflags ((loop)->origflags) #define pending_w ((loop)->pending_w) #define pendingcnt ((loop)->pendingcnt) #define pendingmax ((loop)->pendingmax) #define pendingpri ((loop)->pendingpri) #define pendings ((loop)->pendings) #define periodiccnt ((loop)->periodiccnt) #define periodicmax ((loop)->periodicmax) #define periodics ((loop)->periodics) #define pipe_w ((loop)->pipe_w) #define pipe_write_skipped ((loop)->pipe_write_skipped) #define pipe_write_wanted ((loop)->pipe_write_wanted) #define pollcnt ((loop)->pollcnt) #define pollidxmax ((loop)->pollidxmax) #define pollidxs ((loop)->pollidxs) #define pollmax ((loop)->pollmax) #define polls ((loop)->polls) #define port_eventmax ((loop)->port_eventmax) #define port_events ((loop)->port_events) #define postfork ((loop)->postfork) #define preparecnt ((loop)->preparecnt) #define preparemax ((loop)->preparemax) #define prepares ((loop)->prepares) #define release_cb ((loop)->release_cb) #define rfeedcnt ((loop)->rfeedcnt) #define rfeedmax ((loop)->rfeedmax) #define rfeeds ((loop)->rfeeds) #define rtmn_diff ((loop)->rtmn_diff) #define sig_pending ((loop)->sig_pending) #define sigfd ((loop)->sigfd) #define sigfd_set ((loop)->sigfd_set) #define sigfd_w ((loop)->sigfd_w) #define timeout_blocktime ((loop)->timeout_blocktime) #define timercnt ((loop)->timercnt) #define timerfd ((loop)->timerfd) #define timerfd_w ((loop)->timerfd_w) #define timermax ((loop)->timermax) #define timers ((loop)->timers) #define userdata ((loop)->userdata) #define vec_eo ((loop)->vec_eo) #define vec_max ((loop)->vec_max) #define vec_ri ((loop)->vec_ri) #define vec_ro ((loop)->vec_ro) #define vec_wi ((loop)->vec_wi) #define vec_wo ((loop)->vec_wo) #else #undef EV_WRAP_H #undef acquire_cb #undef 
activecnt #undef anfdmax #undef anfds #undef async_pending #undef asynccnt #undef asyncmax #undef asyncs #undef backend #undef backend_fd #undef backend_mintime #undef backend_modify #undef backend_poll #undef checkcnt #undef checkmax #undef checks #undef cleanupcnt #undef cleanupmax #undef cleanups #undef curpid #undef epoll_epermcnt #undef epoll_epermmax #undef epoll_eperms #undef epoll_eventmax #undef epoll_events #undef evpipe #undef fdchangecnt #undef fdchangemax #undef fdchanges #undef forkcnt #undef forkmax #undef forks #undef fs_2625 #undef fs_fd #undef fs_hash #undef fs_w #undef idleall #undef idlecnt #undef idlemax #undef idles #undef invoke_cb #undef io_blocktime #undef iocp #undef iouring_cq_cqes #undef iouring_cq_head #undef iouring_cq_overflow #undef iouring_cq_ring #undef iouring_cq_ring_entries #undef iouring_cq_ring_mask #undef iouring_cq_ring_size #undef iouring_cq_tail #undef iouring_entries #undef iouring_fd #undef iouring_max_entries #undef iouring_sq_array #undef iouring_sq_dropped #undef iouring_sq_flags #undef iouring_sq_head #undef iouring_sq_ring #undef iouring_sq_ring_entries #undef iouring_sq_ring_mask #undef iouring_sq_ring_size #undef iouring_sq_tail #undef iouring_sqes #undef iouring_sqes_size #undef iouring_tfd #undef iouring_tfd_to #undef iouring_tfd_w #undef iouring_to_submit #undef kqueue_changecnt #undef kqueue_changemax #undef kqueue_changes #undef kqueue_eventmax #undef kqueue_events #undef kqueue_fd_pid #undef linuxaio_ctx #undef linuxaio_epoll_w #undef linuxaio_iocbpmax #undef linuxaio_iocbps #undef linuxaio_iteration #undef linuxaio_submitcnt #undef linuxaio_submitmax #undef linuxaio_submits #undef loop_count #undef loop_depth #undef loop_done #undef mn_now #undef now_floor #undef origflags #undef pending_w #undef pendingcnt #undef pendingmax #undef pendingpri #undef pendings #undef periodiccnt #undef periodicmax #undef periodics #undef pipe_w #undef pipe_write_skipped #undef pipe_write_wanted #undef pollcnt #undef pollidxmax #undef pollidxs #undef pollmax #undef polls #undef port_eventmax #undef port_events #undef postfork #undef preparecnt #undef preparemax #undef prepares #undef release_cb #undef rfeedcnt #undef rfeedmax #undef rfeeds #undef rtmn_diff #undef sig_pending #undef sigfd #undef sigfd_set #undef sigfd_w #undef timeout_blocktime #undef timercnt #undef timerfd #undef timerfd_w #undef timermax #undef timers #undef userdata #undef vec_eo #undef vec_max #undef vec_ri #undef vec_ro #undef vec_wi #undef vec_wo #endif gevent-24.11.1/deps/libev/event.c000066400000000000000000000233361471441230600165020ustar00rootroot00000000000000/* * libevent compatibility layer * * Copyright (c) 2007,2008,2009,2010,2012 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #include #include #include #ifdef EV_EVENT_H # include EV_EVENT_H #else # include "event.h" #endif #if EV_MULTIPLICITY # define dLOOPev struct ev_loop *loop = (struct ev_loop *)ev->ev_base # define dLOOPbase struct ev_loop *loop = (struct ev_loop *)base #else # define dLOOPev # define dLOOPbase #endif /* never accessed, will always be cast from/to ev_loop */ struct event_base { int dummy; }; static struct event_base *ev_x_cur; static ev_tstamp ev_tv_get (struct timeval *tv) { if (tv) { ev_tstamp after = tv->tv_sec + tv->tv_usec * 1e-6; return after ? after : 1e-6; } else return -1.; } #define EVENT_STRINGIFY(s) # s #define EVENT_VERSION(a,b) EVENT_STRINGIFY (a) "." 
EVENT_STRINGIFY (b) const char * event_get_version (void) { /* returns ABI, not API or library, version */ return EVENT_VERSION (EV_VERSION_MAJOR, EV_VERSION_MINOR); } const char * event_get_method (void) { return "libev"; } void *event_init (void) { #if EV_MULTIPLICITY if (ev_x_cur) ev_x_cur = (struct event_base *)ev_loop_new (EVFLAG_AUTO); else ev_x_cur = (struct event_base *)ev_default_loop (EVFLAG_AUTO); #else assert (("libev: multiple event bases not supported when not compiled with EV_MULTIPLICITY", !ev_x_cur)); ev_x_cur = (struct event_base *)(long)ev_default_loop (EVFLAG_AUTO); #endif return ev_x_cur; } const char * event_base_get_method (const struct event_base *base) { return "libev"; } struct event_base * event_base_new (void) { #if EV_MULTIPLICITY return (struct event_base *)ev_loop_new (EVFLAG_AUTO); #else assert (("libev: multiple event bases not supported when not compiled with EV_MULTIPLICITY")); return NULL; #endif } void event_base_free (struct event_base *base) { dLOOPbase; #if EV_MULTIPLICITY if (!ev_is_default_loop (loop)) ev_loop_destroy (loop); #endif } int event_dispatch (void) { return event_base_dispatch (ev_x_cur); } #ifdef EV_STANDALONE void event_set_log_callback (event_log_cb cb) { /* nop */ } #endif int event_loop (int flags) { return event_base_loop (ev_x_cur, flags); } int event_loopexit (struct timeval *tv) { return event_base_loopexit (ev_x_cur, tv); } event_callback_fn event_get_callback (const struct event *ev) { return ev->ev_callback; } static void ev_x_cb (struct event *ev, int revents) { revents &= EV_READ | EV_WRITE | EV_TIMER | EV_SIGNAL; ev->ev_res = revents; ev->ev_callback (ev->ev_fd, (short)revents, ev->ev_arg); } static void ev_x_cb_sig (EV_P_ struct ev_signal *w, int revents) { struct event *ev = (struct event *)(((char *)w) - offsetof (struct event, iosig.sig)); if (revents & EV_ERROR) event_del (ev); ev_x_cb (ev, revents); } static void ev_x_cb_io (EV_P_ struct ev_io *w, int revents) { struct event *ev = (struct event *)(((char *)w) - offsetof (struct event, iosig.io)); if ((revents & EV_ERROR) || !(ev->ev_events & EV_PERSIST)) event_del (ev); ev_x_cb (ev, revents); } static void ev_x_cb_to (EV_P_ struct ev_timer *w, int revents) { struct event *ev = (struct event *)(((char *)w) - offsetof (struct event, to)); event_del (ev); ev_x_cb (ev, revents); } void event_set (struct event *ev, int fd, short events, void (*cb)(int, short, void *), void *arg) { if (events & EV_SIGNAL) ev_init (&ev->iosig.sig, ev_x_cb_sig); else ev_init (&ev->iosig.io, ev_x_cb_io); ev_init (&ev->to, ev_x_cb_to); ev->ev_base = ev_x_cur; /* not threadsafe, but it's how libevent works */ ev->ev_fd = fd; ev->ev_events = events; ev->ev_pri = 0; ev->ev_callback = cb; ev->ev_arg = arg; ev->ev_res = 0; ev->ev_flags = EVLIST_INIT; } int event_once (int fd, short events, void (*cb)(int, short, void *), void *arg, struct timeval *tv) { return event_base_once (ev_x_cur, fd, events, cb, arg, tv); } int event_add (struct event *ev, struct timeval *tv) { dLOOPev; if (ev->ev_events & EV_SIGNAL) { if (!ev_is_active (&ev->iosig.sig)) { ev_signal_set (&ev->iosig.sig, ev->ev_fd); ev_signal_start (EV_A_ &ev->iosig.sig); ev->ev_flags |= EVLIST_SIGNAL; } } else if (ev->ev_events & (EV_READ | EV_WRITE)) { if (!ev_is_active (&ev->iosig.io)) { ev_io_set (&ev->iosig.io, ev->ev_fd, ev->ev_events & (EV_READ | EV_WRITE)); ev_io_start (EV_A_ &ev->iosig.io); ev->ev_flags |= EVLIST_INSERTED; } } if (tv) { ev->to.repeat = ev_tv_get (tv); ev_timer_again (EV_A_ &ev->to); ev->ev_flags |= EVLIST_TIMEOUT; } 
else { ev_timer_stop (EV_A_ &ev->to); ev->ev_flags &= ~EVLIST_TIMEOUT; } ev->ev_flags |= EVLIST_ACTIVE; return 0; } int event_del (struct event *ev) { dLOOPev; if (ev->ev_events & EV_SIGNAL) ev_signal_stop (EV_A_ &ev->iosig.sig); else if (ev->ev_events & (EV_READ | EV_WRITE)) ev_io_stop (EV_A_ &ev->iosig.io); if (ev_is_active (&ev->to)) ev_timer_stop (EV_A_ &ev->to); ev->ev_flags = EVLIST_INIT; return 0; } void event_active (struct event *ev, int res, short ncalls) { dLOOPev; if (res & EV_TIMEOUT) ev_feed_event (EV_A_ &ev->to, res & EV_TIMEOUT); if (res & EV_SIGNAL) ev_feed_event (EV_A_ &ev->iosig.sig, res & EV_SIGNAL); if (res & (EV_READ | EV_WRITE)) ev_feed_event (EV_A_ &ev->iosig.io, res & (EV_READ | EV_WRITE)); } int event_pending (struct event *ev, short events, struct timeval *tv) { short revents = 0; dLOOPev; if (ev->ev_events & EV_SIGNAL) { /* sig */ if (ev_is_active (&ev->iosig.sig) || ev_is_pending (&ev->iosig.sig)) revents |= EV_SIGNAL; } else if (ev->ev_events & (EV_READ | EV_WRITE)) { /* io */ if (ev_is_active (&ev->iosig.io) || ev_is_pending (&ev->iosig.io)) revents |= ev->ev_events & (EV_READ | EV_WRITE); } if (ev->ev_events & EV_TIMEOUT || ev_is_active (&ev->to) || ev_is_pending (&ev->to)) { revents |= EV_TIMEOUT; if (tv) { ev_tstamp at = ev_now (EV_A); tv->tv_sec = (long)at; tv->tv_usec = (long)((at - (ev_tstamp)tv->tv_sec) * 1e6); } } return events & revents; } int event_priority_init (int npri) { return event_base_priority_init (ev_x_cur, npri); } int event_priority_set (struct event *ev, int pri) { ev->ev_pri = pri; return 0; } int event_base_set (struct event_base *base, struct event *ev) { ev->ev_base = base; return 0; } int event_base_loop (struct event_base *base, int flags) { dLOOPbase; return !ev_run (EV_A_ flags); } int event_base_dispatch (struct event_base *base) { return event_base_loop (base, 0); } static void ev_x_loopexit_cb (int revents, void *base) { dLOOPbase; ev_break (EV_A_ EVBREAK_ONE); } int event_base_loopexit (struct event_base *base, struct timeval *tv) { ev_tstamp after = ev_tv_get (tv); dLOOPbase; ev_once (EV_A_ -1, 0, after >= 0. ? after : 0., ev_x_loopexit_cb, (void *)base); return 0; } struct ev_x_once { int fd; void (*cb)(int, short, void *); void *arg; }; static void ev_x_once_cb (int revents, void *arg) { struct ev_x_once *once = (struct ev_x_once *)arg; once->cb (once->fd, (short)revents, once->arg); free (once); } int event_base_once (struct event_base *base, int fd, short events, void (*cb)(int, short, void *), void *arg, struct timeval *tv) { struct ev_x_once *once = (struct ev_x_once *)malloc (sizeof (struct ev_x_once)); dLOOPbase; if (!once) return -1; once->fd = fd; once->cb = cb; once->arg = arg; ev_once (EV_A_ fd, events & (EV_READ | EV_WRITE), ev_tv_get (tv), ev_x_once_cb, (void *)once); return 0; } int event_base_priority_init (struct event_base *base, int npri) { /*dLOOPbase;*/ return 0; } gevent-24.11.1/deps/libev/event.h000066400000000000000000000141541471441230600165050ustar00rootroot00000000000000/* * libevent compatibility header, only core events supported * * Copyright (c) 2007,2008,2010,2012 Marc Alexander Lehmann * All rights reserved. * * Redistribution and use in source and binary forms, with or without modifica- * tion, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, * this list of conditions and the following disclaimer. * * 2. 
Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER- * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE- * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH- * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * * Alternatively, the contents of this file may be used under the terms of * the GNU General Public License ("GPL") version 2 or any later version, * in which case the provisions of the GPL are applicable instead of * the above. If you wish to allow the use of your version of this file * only under the terms of the GPL and not to allow others to use your * version of this file under the BSD license, indicate your decision * by deleting the provisions above and replace them with the notice * and other provisions required by the GPL. If you do not delete the * provisions above, a recipient may use your version of this file under * either the BSD or the GPL. */ #ifndef EVENT_H_ #define EVENT_H_ #ifdef EV_H # include EV_H #else # include "ev.h" #endif #ifndef EVLOOP_NONBLOCK # define EVLOOP_NONBLOCK EVRUN_NOWAIT #endif #ifndef EVLOOP_ONESHOT # define EVLOOP_ONESHOT EVRUN_ONCE #endif #ifndef EV_TIMEOUT # define EV_TIMEOUT EV_TIMER #endif #ifdef __cplusplus extern "C" { #endif /* we need sys/time.h for struct timeval only */ #if !defined (WIN32) || defined (__MINGW32__) # include /* mingw seems to need this, for whatever reason */ # include #endif struct event_base; #define EVLIST_TIMEOUT 0x01 #define EVLIST_INSERTED 0x02 #define EVLIST_SIGNAL 0x04 #define EVLIST_ACTIVE 0x08 #define EVLIST_INTERNAL 0x10 #define EVLIST_INIT 0x80 typedef void (*event_callback_fn)(int, short, void *); struct event { /* libev watchers we map onto */ union { struct ev_io io; struct ev_signal sig; } iosig; struct ev_timer to; /* compatibility slots */ struct event_base *ev_base; event_callback_fn ev_callback; void *ev_arg; int ev_fd; int ev_pri; int ev_res; int ev_flags; short ev_events; }; event_callback_fn event_get_callback (const struct event *ev); #define EV_READ EV_READ #define EV_WRITE EV_WRITE #define EV_PERSIST 0x10 #define EV_ET 0x20 /* nop */ #define EVENT_SIGNAL(ev) ((int) (ev)->ev_fd) #define EVENT_FD(ev) ((int) (ev)->ev_fd) #define event_initialized(ev) ((ev)->ev_flags & EVLIST_INIT) #define evtimer_add(ev,tv) event_add (ev, tv) #define evtimer_set(ev,cb,data) event_set (ev, -1, 0, cb, data) #define evtimer_del(ev) event_del (ev) #define evtimer_pending(ev,tv) event_pending (ev, EV_TIMEOUT, tv) #define evtimer_initialized(ev) event_initialized (ev) #define timeout_add(ev,tv) evtimer_add (ev, tv) #define timeout_set(ev,cb,data) evtimer_set (ev, cb, data) #define timeout_del(ev) evtimer_del (ev) #define timeout_pending(ev,tv) evtimer_pending (ev, tv) #define timeout_initialized(ev) evtimer_initialized (ev) #define signal_add(ev,tv) event_add (ev, tv) #define signal_set(ev,sig,cb,data) 
event_set (ev, sig, EV_SIGNAL | EV_PERSIST, cb, data) #define signal_del(ev) event_del (ev) #define signal_pending(ev,tv) event_pending (ev, EV_SIGNAL, tv) #define signal_initialized(ev) event_initialized (ev) const char *event_get_version (void); const char *event_get_method (void); void *event_init (void); void event_base_free (struct event_base *base); #define EVLOOP_ONCE EVLOOP_ONESHOT int event_loop (int); int event_loopexit (struct timeval *tv); int event_dispatch (void); #define _EVENT_LOG_DEBUG 0 #define _EVENT_LOG_MSG 1 #define _EVENT_LOG_WARN 2 #define _EVENT_LOG_ERR 3 typedef void (*event_log_cb)(int severity, const char *msg); void event_set_log_callback(event_log_cb cb); void event_set (struct event *ev, int fd, short events, void (*cb)(int, short, void *), void *arg); int event_once (int fd, short events, void (*cb)(int, short, void *), void *arg, struct timeval *tv); int event_add (struct event *ev, struct timeval *tv); int event_del (struct event *ev); void event_active (struct event *ev, int res, short ncalls); /* ncalls is being ignored */ int event_pending (struct event *ev, short, struct timeval *tv); int event_priority_init (int npri); int event_priority_set (struct event *ev, int pri); struct event_base *event_base_new (void); const char *event_base_get_method (const struct event_base *); int event_base_set (struct event_base *base, struct event *ev); int event_base_loop (struct event_base *base, int); int event_base_loopexit (struct event_base *base, struct timeval *tv); int event_base_dispatch (struct event_base *base); int event_base_once (struct event_base *base, int fd, short events, void (*cb)(int, short, void *), void *arg, struct timeval *tv); int event_base_priority_init (struct event_base *base, int fd); /* next line is different in the libevent+libev version */ /*libevent-include*/ #ifdef __cplusplus } #endif #endif gevent-24.11.1/deps/libev/install-sh000077500000000000000000000360101471441230600172120ustar00rootroot00000000000000#!/bin/sh # install - install a program, script, or datafile scriptversion=2018-03-11.20; # UTC # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. # # Copyright (C) 1994 X Consortium # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN # AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC- # TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# # Except as contained in this notice, the name of the X Consortium shall not # be used in advertising or otherwise to promote the sale, use or other deal- # ings in this Software without prior written authorization from the X Consor- # tium. # # # FSF changes to this file are in the public domain. # # Calling this script install-sh is preferred over install.sh, to prevent # 'make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. tab=' ' nl=' ' IFS=" $tab$nl" # Set DOITPROG to "echo" to test this script. doit=${DOITPROG-} doit_exec=${doit:-exec} # Put in absolute file names if you don't have them in your path; # or use environment vars. chgrpprog=${CHGRPPROG-chgrp} chmodprog=${CHMODPROG-chmod} chownprog=${CHOWNPROG-chown} cmpprog=${CMPPROG-cmp} cpprog=${CPPROG-cp} mkdirprog=${MKDIRPROG-mkdir} mvprog=${MVPROG-mv} rmprog=${RMPROG-rm} stripprog=${STRIPPROG-strip} posix_mkdir= # Desired mode of installed file. mode=0755 chgrpcmd= chmodcmd=$chmodprog chowncmd= mvcmd=$mvprog rmcmd="$rmprog -f" stripcmd= src= dst= dir_arg= dst_arg= copy_on_change=false is_target_a_directory=possibly usage="\ Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE or: $0 [OPTION]... SRCFILES... DIRECTORY or: $0 [OPTION]... -t DIRECTORY SRCFILES... or: $0 [OPTION]... -d DIRECTORIES... In the 1st form, copy SRCFILE to DSTFILE. In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. In the 4th, create DIRECTORIES. Options: --help display this help and exit. --version display version info and exit. -c (ignored) -C install only if different (preserve the last data modification time) -d create directories instead of installing files. -g GROUP $chgrpprog installed files to GROUP. -m MODE $chmodprog installed files to MODE. -o USER $chownprog installed files to USER. -s $stripprog installed files. -t DIRECTORY install into DIRECTORY. -T report an error if DSTFILE is a directory. Environment variables override the default commands: CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG " while test $# -ne 0; do case $1 in -c) ;; -C) copy_on_change=true;; -d) dir_arg=true;; -g) chgrpcmd="$chgrpprog $2" shift;; --help) echo "$usage"; exit $?;; -m) mode=$2 case $mode in *' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*) echo "$0: invalid mode: $mode" >&2 exit 1;; esac shift;; -o) chowncmd="$chownprog $2" shift;; -s) stripcmd=$stripprog;; -t) is_target_a_directory=always dst_arg=$2 # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac shift;; -T) is_target_a_directory=never;; --version) echo "$0 $scriptversion"; exit $?;; --) shift break;; -*) echo "$0: invalid option: $1" >&2 exit 1;; *) break;; esac shift done # We allow the use of options -d and -T together, by making -d # take the precedence; this is for compatibility with GNU install. if test -n "$dir_arg"; then if test -n "$dst_arg"; then echo "$0: target directory not allowed when installing a directory." >&2 exit 1 fi fi if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then # When -d is used, all remaining arguments are directories to create. # When -t is used, the destination is already specified. # Otherwise, the last argument is the destination. Remove it from $@. for arg do if test -n "$dst_arg"; then # $@ is not empty: it contains at least $arg. 
set fnord "$@" "$dst_arg" shift # fnord fi shift # arg dst_arg=$arg # Protect names problematic for 'test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac done fi if test $# -eq 0; then if test -z "$dir_arg"; then echo "$0: no input file specified." >&2 exit 1 fi # It's OK to call 'install-sh -d' without argument. # This can happen when creating conditional directories. exit 0 fi if test -z "$dir_arg"; then if test $# -gt 1 || test "$is_target_a_directory" = always; then if test ! -d "$dst_arg"; then echo "$0: $dst_arg: Is not a directory." >&2 exit 1 fi fi fi if test -z "$dir_arg"; then do_exit='(exit $ret); exit $ret' trap "ret=129; $do_exit" 1 trap "ret=130; $do_exit" 2 trap "ret=141; $do_exit" 13 trap "ret=143; $do_exit" 15 # Set umask so as not to create temps with too-generous modes. # However, 'strip' requires both read and write access to temps. case $mode in # Optimize common cases. *644) cp_umask=133;; *755) cp_umask=22;; *[0-7]) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw='% 200' fi cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;; *) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw=,u+rw fi cp_umask=$mode$u_plus_rw;; esac fi for src do # Protect names problematic for 'test' and other utilities. case $src in -* | [=\(\)!]) src=./$src;; esac if test -n "$dir_arg"; then dst=$src dstdir=$dst test -d "$dstdir" dstdir_status=$? else # Waiting for this to be detected by the "$cpprog $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if test ! -f "$src" && test ! -d "$src"; then echo "$0: $src does not exist." >&2 exit 1 fi if test -z "$dst_arg"; then echo "$0: no destination specified." >&2 exit 1 fi dst=$dst_arg # If destination is a directory, append the input filename. if test -d "$dst"; then if test "$is_target_a_directory" = never; then echo "$0: $dst_arg: Is a directory" >&2 exit 1 fi dstdir=$dst dstbase=`basename "$src"` case $dst in */) dst=$dst$dstbase;; *) dst=$dst/$dstbase;; esac dstdir_status=0 else dstdir=`dirname "$dst"` test -d "$dstdir" dstdir_status=$? fi fi case $dstdir in */) dstdirslash=$dstdir;; *) dstdirslash=$dstdir/;; esac obsolete_mkdir_used=false if test $dstdir_status != 0; then case $posix_mkdir in '') # Create intermediate dirs using mode 755 as modified by the umask. # This is like FreeBSD 'install' as of 1997-10-28. umask=`umask` case $stripcmd.$umask in # Optimize common cases. *[2367][2367]) mkdir_umask=$umask;; .*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;; *[0-7]) mkdir_umask=`expr $umask + 22 \ - $umask % 100 % 40 + $umask % 20 \ - $umask % 10 % 4 + $umask % 2 `;; *) mkdir_umask=$umask,go-w;; esac # With -d, create the new directory with the user-specified mode. # Otherwise, rely on $mkdir_umask. if test -n "$dir_arg"; then mkdir_mode=-m$mode else mkdir_mode= fi posix_mkdir=false case $umask in *[123567][0-7][0-7]) # POSIX mkdir -p sets u+wx bits regardless of umask, which # is incompatible with FreeBSD 'install' when (umask & 300) != 0. ;; *) # Note that $RANDOM variable is not portable (e.g. dash); Use it # here however when possible just to lower collision chance. tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$ trap 'ret=$?; rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null; exit $ret' 0 # Because "mkdir -p" follows existing symlinks and we likely work # directly in world-writeable /tmp, make sure that the '$tmpdir' # directory is successfully created first before we actually test # 'mkdir -p' feature. 
if (umask $mkdir_umask && $mkdirprog $mkdir_mode "$tmpdir" && exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1 then if test -z "$dir_arg" || { # Check for POSIX incompatibilities with -m. # HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or # other-writable bit of parent directory when it shouldn't. # FreeBSD 6.1 mkdir -m -p sets mode of existing directory. test_tmpdir="$tmpdir/a" ls_ld_tmpdir=`ls -ld "$test_tmpdir"` case $ls_ld_tmpdir in d????-?r-*) different_mode=700;; d????-?--*) different_mode=755;; *) false;; esac && $mkdirprog -m$different_mode -p -- "$test_tmpdir" && { ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"` test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1" } } then posix_mkdir=: fi rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" else # Remove any dirs left behind by ancient mkdir implementations. rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null fi trap '' 0;; esac;; esac if $posix_mkdir && ( umask $mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir" ) then : else # The umask is ridiculous, or mkdir does not conform to POSIX, # or it failed possibly due to a race condition. Create the # directory the slow way, step by step, checking for races as we go. case $dstdir in /*) prefix='/';; [-=\(\)!]*) prefix='./';; *) prefix='';; esac oIFS=$IFS IFS=/ set -f set fnord $dstdir shift set +f IFS=$oIFS prefixes= for d do test X"$d" = X && continue prefix=$prefix$d if test -d "$prefix"; then prefixes= else if $posix_mkdir; then (umask=$mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break # Don't fail if two instances are running concurrently. test -d "$prefix" || exit 1 else case $prefix in *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;; *) qprefix=$prefix;; esac prefixes="$prefixes '$qprefix'" fi fi prefix=$prefix/ done if test -n "$prefixes"; then # Don't fail if two instances are running concurrently. (umask $mkdir_umask && eval "\$doit_exec \$mkdirprog $prefixes") || test -d "$dstdir" || exit 1 obsolete_mkdir_used=true fi fi fi if test -n "$dir_arg"; then { test -z "$chowncmd" || $doit $chowncmd "$dst"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } && { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false || test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1 else # Make a couple of temp file names in the proper directory. dsttmp=${dstdirslash}_inst.$$_ rmtmp=${dstdirslash}_rm.$$_ # Trap to clean up those temp files at exit. trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 # Copy the file name to the temp name. (umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") && # and set any options; do chmod last to preserve setuid bits. # # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $cpprog $src $dsttmp" command. # { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } && { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } && # If -C, don't bother to copy if it wouldn't change the file. if $copy_on_change && old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` && new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` && set -f && set X $old && old=:$2:$4:$5:$6 && set X $new && new=:$2:$4:$5:$6 && set +f && test "$old" = "$new" && $cmpprog "$dst" "$dsttmp" >/dev/null 2>&1 then rm -f "$dsttmp" else # Rename the file to the real destination. 
$doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null || # The rename failed, perhaps because mv can't rename something else # to itself, or perhaps because mv is so ancient that it does not # support -f. { # Now remove or move aside any old file at destination location. # We try this two ways since rm can't unlink itself on some # systems and the destination file might be busy for other # reasons. In this case, the final cleanup might fail but the new # file should still install successfully. { test ! -f "$dst" || $doit $rmcmd -f "$dst" 2>/dev/null || { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null && { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; } } || { echo "$0: cannot unlink or rename $dst" >&2 (exit 1); exit 1 } } && # Now rename the file to the real destination. $doit $mvcmd "$dsttmp" "$dst" } fi || exit 1 trap '' 0 fi done # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/libev/ltmain.sh000066400000000000000000011767121471441230600170450ustar00rootroot00000000000000#! /bin/sh ## DO NOT EDIT - This file generated from ./build-aux/ltmain.in ## by inline-source v2014-01-03.01 # libtool (GNU libtool) 2.4.6 # Provide generalized library-building support services. # Written by Gordon Matzigkeit , 1996 # Copyright (C) 1996-2015 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . PROGRAM=libtool PACKAGE=libtool VERSION="2.4.6 Debian-2.4.6-9" package_revision=2.4.6 ## ------ ## ## Usage. ## ## ------ ## # Run './libtool --help' for help with using this script from the # command line. ## ------------------------------- ## ## User overridable command paths. ## ## ------------------------------- ## # After configure completes, it has a better idea of some of the # shell tools we need than the defaults used by the functions shared # with bootstrap, so set those here where they can still be over- # ridden by the user, but otherwise take precedence. : ${AUTOCONF="autoconf"} : ${AUTOMAKE="automake"} ## -------------------------- ## ## Source external libraries. ## ## -------------------------- ## # Much of our low-level functionality needs to be sourced from external # libraries, which are installed to $pkgauxdir. # Set a version string for this script. scriptversion=2015-01-20.17; # UTC # General shell script boiler plate, and helper functions. # Written by Gary V. Vaughan, 2004 # Copyright (C) 2004-2015 Free Software Foundation, Inc. 
# This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 3 of the License, or # (at your option) any later version. # As a special exception to the GNU General Public License, if you distribute # this file as part of a program or library that is built using GNU Libtool, # you may include this file under the same distribution terms that you use # for the rest of that program. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNES FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # Please report bugs or propose patches to gary@gnu.org. ## ------ ## ## Usage. ## ## ------ ## # Evaluate this file near the top of your script to gain access to # the functions and variables defined here: # # . `echo "$0" | ${SED-sed} 's|[^/]*$||'`/build-aux/funclib.sh # # If you need to override any of the default environment variable # settings, do that before evaluating this file. ## -------------------- ## ## Shell normalisation. ## ## -------------------- ## # Some shells need a little help to be as Bourne compatible as possible. # Before doing anything else, make sure all that help has been provided! DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in *posix*) set -o posix ;; esac fi # NLS nuisances: We save the old values in case they are required later. _G_user_locale= _G_safe_locale= for _G_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test set = \"\${$_G_var+set}\"; then save_$_G_var=\$$_G_var $_G_var=C export $_G_var _G_user_locale=\"$_G_var=\\\$save_\$_G_var; \$_G_user_locale\" _G_safe_locale=\"$_G_var=C; \$_G_safe_locale\" fi" done # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # Make sure IFS has a sensible default sp=' ' nl=' ' IFS="$sp $nl" # There are apparently some retarded systems that use ';' as a PATH separator! if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi ## ------------------------- ## ## Locate command utilities. ## ## ------------------------- ## # func_executable_p FILE # ---------------------- # Check that FILE is an executable regular file. func_executable_p () { test -f "$1" && test -x "$1" } # func_path_progs PROGS_LIST CHECK_FUNC [PATH] # -------------------------------------------- # Search for either a program that responds to --version with output # containing "GNU", or else returned by CHECK_FUNC otherwise, by # trying all the directories in PATH with each of the elements of # PROGS_LIST. # # CHECK_FUNC should accept the path to a candidate program, and # set $func_check_prog_result if it truncates its output less than # $_G_path_prog_max characters. 
func_path_progs ()
{
    _G_progs_list=$1
    _G_check_func=$2
    _G_PATH=${3-"$PATH"}

    _G_path_prog_max=0
    _G_path_prog_found=false
    _G_save_IFS=$IFS; IFS=${PATH_SEPARATOR-:}
    for _G_dir in $_G_PATH; do
      IFS=$_G_save_IFS
      test -z "$_G_dir" && _G_dir=.
      for _G_prog_name in $_G_progs_list; do
        for _exeext in '' .EXE; do
          _G_path_prog=$_G_dir/$_G_prog_name$_exeext
          func_executable_p "$_G_path_prog" || continue
          case `"$_G_path_prog" --version 2>&1` in
            *GNU*) func_path_progs_result=$_G_path_prog _G_path_prog_found=: ;;
            *)     $_G_check_func $_G_path_prog
                   func_path_progs_result=$func_check_prog_result ;;
          esac
          $_G_path_prog_found && break 3
        done
      done
    done
    IFS=$_G_save_IFS
    test -z "$func_path_progs_result" && {
      echo "no acceptable sed could be found in \$PATH" >&2
      exit 1
    }
}


# We want to be able to use the functions in this file before configure
# has figured out where the best binaries are kept, which means we have
# to search for them ourselves - except when the results are already set
# where we skip the searches.

# Unless the user overrides by setting SED, search the path for either GNU
# sed, or the sed that truncates its output the least.
test -z "$SED" && {
  _G_sed_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/
  for _G_i in 1 2 3 4 5 6 7; do
    _G_sed_script=$_G_sed_script$nl$_G_sed_script
  done
  echo "$_G_sed_script" 2>/dev/null | sed 99q >conftest.sed
  _G_sed_script=

  func_check_prog_sed ()
  {
    _G_path_prog=$1

    _G_count=0
    printf 0123456789 >conftest.in
    while :
    do
      cat conftest.in conftest.in >conftest.tmp
      mv conftest.tmp conftest.in
      cp conftest.in conftest.nl
      echo '' >> conftest.nl
      "$_G_path_prog" -f conftest.sed <conftest.in >conftest.out 2>/dev/null || break
      diff conftest.out conftest.nl >/dev/null 2>&1 || break
      _G_count=`expr $_G_count + 1`
      if test "$_G_count" -gt "$_G_path_prog_max"; then
        # Best one so far, save it but keep looking for a better one
        func_check_prog_result=$_G_path_prog
        _G_path_prog_max=$_G_count
      fi
      # 10*(2^10) chars as input seems more than enough
      test 10 -lt "$_G_count" && break
    done
    rm -f conftest.in conftest.tmp conftest.nl conftest.out
  }

  func_path_progs "sed gsed" func_check_prog_sed $PATH:/usr/xpg4/bin
  rm -f conftest.sed
  SED=$func_path_progs_result
}

# Unless the user overrides by setting GREP, search the path for either GNU
# grep, or the grep that truncates its output the least.
test -z "$GREP" && {
  func_check_prog_grep ()
  {
    _G_path_prog=$1

    _G_count=0
    _G_path_prog_max=0
    printf 0123456789 >conftest.in
    while :
    do
      cat conftest.in conftest.in >conftest.tmp
      mv conftest.tmp conftest.in
      cp conftest.in conftest.nl
      echo 'GREP' >> conftest.nl
      "$_G_path_prog" -e 'GREP$' -e '-(cannot match)-' <conftest.nl >conftest.out 2>/dev/null || break
      diff conftest.out conftest.nl >/dev/null 2>&1 || break
      _G_count=`expr $_G_count + 1`
      if test "$_G_count" -gt "$_G_path_prog_max"; then
        # Best one so far, save it but keep looking for a better one
        func_check_prog_result=$_G_path_prog
        _G_path_prog_max=$_G_count
      fi
      # 10*(2^10) chars as input seems more than enough
      test 10 -lt "$_G_count" && break
    done
    rm -f conftest.in conftest.tmp conftest.nl conftest.out
  }

  func_path_progs "grep ggrep" func_check_prog_grep $PATH:/usr/xpg4/bin
  GREP=$func_path_progs_result
}


## ------------------------------- ##
## User overridable command paths. ##
## ------------------------------- ##

# All uppercase variable names are used for environment variables.  These
# variables can be overridden by the user before calling a script that
# uses them if a suitable command of that name is not already available
# in the command search PATH.
: ${CP="cp -f"} : ${ECHO="printf %s\n"} : ${EGREP="$GREP -E"} : ${FGREP="$GREP -F"} : ${LN_S="ln -s"} : ${MAKE="make"} : ${MKDIR="mkdir"} : ${MV="mv -f"} : ${RM="rm -f"} : ${SHELL="${CONFIG_SHELL-/bin/sh}"} ## -------------------- ## ## Useful sed snippets. ## ## -------------------- ## sed_dirname='s|/[^/]*$||' sed_basename='s|^.*/||' # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. sed_quote_subst='s|\([`"$\\]\)|\\\1|g' # Same as above, but do not quote variable references. sed_double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution that turns a string into a regex matching for the # string literally. sed_make_literal_regex='s|[].[^$\\*\/]|\\&|g' # Sed substitution that converts a w32 file name or path # that contains forward slashes, into one that contains # (escaped) backslashes. A very naive implementation. sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' # Re-'\' parameter expansions in output of sed_double_quote_subst that # were '\'-ed in input to the same. If an odd number of '\' preceded a # '$' in input to sed_double_quote_subst, that '$' was protected from # expansion. Since each input '\' is now two '\'s, look for any number # of runs of four '\'s followed by two '\'s and then a '$'. '\' that '$'. _G_bs='\\' _G_bs2='\\\\' _G_bs4='\\\\\\\\' _G_dollar='\$' sed_double_backslash="\ s/$_G_bs4/&\\ /g s/^$_G_bs2$_G_dollar/$_G_bs&/ s/\\([^$_G_bs]\\)$_G_bs2$_G_dollar/\\1$_G_bs2$_G_bs$_G_dollar/g s/\n//g" ## ----------------- ## ## Global variables. ## ## ----------------- ## # Except for the global variables explicitly listed below, the following # functions in the '^func_' namespace, and the '^require_' namespace # variables initialised in the 'Resource management' section, sourcing # this file will not pollute your global namespace with anything # else. There's no portable way to scope variables in Bourne shell # though, so actually running these functions will sometimes place # results into a variable named after the function, and often use # temporary variables in the '^_G_' namespace. If you are careful to # avoid using those namespaces casually in your sourcing script, things # should continue to work as you expect. And, of course, you can freely # overwrite any of the functions or variables defined here before # calling anything to customize them. EXIT_SUCCESS=0 EXIT_FAILURE=1 EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing. EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake. # Allow overriding, eg assuming that you follow the convention of # putting '$debug_cmd' at the start of all your functions, you can get # bash to show function call trace with: # # debug_cmd='eval echo "${FUNCNAME[0]} $*" >&2' bash your-script-name debug_cmd=${debug_cmd-":"} exit_cmd=: # By convention, finish your script with: # # exit $exit_status # # so that you can set exit_status to non-zero if you want to indicate # something went wrong during execution without actually bailing out at # the point of failure. exit_status=$EXIT_SUCCESS # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh # is ksh but when the shell is invoked as "sh" and the current value of # the _XPG environment variable is not equal to 1 (one), the special # positional parameter $0, within a function call, is the name of the # function. progpath=$0 # The name of this program. 
progname=`$ECHO "$progpath" |$SED "$sed_basename"` # Make sure we have an absolute progpath for reexecution: case $progpath in [\\/]*|[A-Za-z]:\\*) ;; *[\\/]*) progdir=`$ECHO "$progpath" |$SED "$sed_dirname"` progdir=`cd "$progdir" && pwd` progpath=$progdir/$progname ;; *) _G_IFS=$IFS IFS=${PATH_SEPARATOR-:} for progdir in $PATH; do IFS=$_G_IFS test -x "$progdir/$progname" && break done IFS=$_G_IFS test -n "$progdir" || progdir=`pwd` progpath=$progdir/$progname ;; esac ## ----------------- ## ## Standard options. ## ## ----------------- ## # The following options affect the operation of the functions defined # below, and should be set appropriately depending on run-time para- # meters passed on the command line. opt_dry_run=false opt_quiet=false opt_verbose=false # Categories 'all' and 'none' are always available. Append any others # you will pass as the first argument to func_warning from your own # code. warning_categories= # By default, display warnings according to 'opt_warning_types'. Set # 'warning_func' to ':' to elide all warnings, or func_fatal_error to # treat the next displayed warning as a fatal error. warning_func=func_warn_and_continue # Set to 'all' to display all warnings, 'none' to suppress all # warnings, or a space delimited list of some subset of # 'warning_categories' to display only the listed warnings. opt_warning_types=all ## -------------------- ## ## Resource management. ## ## -------------------- ## # This section contains definitions for functions that each ensure a # particular resource (a file, or a non-empty configuration variable for # example) is available, and if appropriate to extract default values # from pertinent package files. Call them using their associated # 'require_*' variable to ensure that they are executed, at most, once. # # It's entirely deliberate that calling these functions can set # variables that don't obey the namespace limitations obeyed by the rest # of this file, in order that that they be as useful as possible to # callers. # require_term_colors # ------------------- # Allow display of bold text on terminals that support it. require_term_colors=func_require_term_colors func_require_term_colors () { $debug_cmd test -t 1 && { # COLORTERM and USE_ANSI_COLORS environment variables take # precedence, because most terminfo databases neglect to describe # whether color sequences are supported. test -n "${COLORTERM+set}" && : ${USE_ANSI_COLORS="1"} if test 1 = "$USE_ANSI_COLORS"; then # Standard ANSI escape sequences tc_reset='' tc_bold=''; tc_standout='' tc_red=''; tc_green='' tc_blue=''; tc_cyan='' else # Otherwise trust the terminfo database after all. test -n "`tput sgr0 2>/dev/null`" && { tc_reset=`tput sgr0` test -n "`tput bold 2>/dev/null`" && tc_bold=`tput bold` tc_standout=$tc_bold test -n "`tput smso 2>/dev/null`" && tc_standout=`tput smso` test -n "`tput setaf 1 2>/dev/null`" && tc_red=`tput setaf 1` test -n "`tput setaf 2 2>/dev/null`" && tc_green=`tput setaf 2` test -n "`tput setaf 4 2>/dev/null`" && tc_blue=`tput setaf 4` test -n "`tput setaf 5 2>/dev/null`" && tc_cyan=`tput setaf 5` } fi } require_term_colors=: } ## ----------------- ## ## Function library. ## ## ----------------- ## # This section contains a variety of useful functions to call in your # scripts. Take note of the portable wrappers for features provided by # some modern shells, which will fall back to slower equivalents on # less featureful shells. # func_append VAR VALUE # --------------------- # Append VALUE onto the existing contents of VAR. 
# We should try to minimise forks, especially on Windows where they are # unreasonably slow, so skip the feature probes when bash or zsh are # being used: if test set = "${BASH_VERSION+set}${ZSH_VERSION+set}"; then : ${_G_HAVE_ARITH_OP="yes"} : ${_G_HAVE_XSI_OPS="yes"} # The += operator was introduced in bash 3.1 case $BASH_VERSION in [12].* | 3.0 | 3.0*) ;; *) : ${_G_HAVE_PLUSEQ_OP="yes"} ;; esac fi # _G_HAVE_PLUSEQ_OP # Can be empty, in which case the shell is probed, "yes" if += is # useable or anything else if it does not work. test -z "$_G_HAVE_PLUSEQ_OP" \ && (eval 'x=a; x+=" b"; test "a b" = "$x"') 2>/dev/null \ && _G_HAVE_PLUSEQ_OP=yes if test yes = "$_G_HAVE_PLUSEQ_OP" then # This is an XSI compatible shell, allowing a faster implementation... eval 'func_append () { $debug_cmd eval "$1+=\$2" }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_append () { $debug_cmd eval "$1=\$$1\$2" } fi # func_append_quoted VAR VALUE # ---------------------------- # Quote VALUE and append to the end of shell variable VAR, separated # by a space. if test yes = "$_G_HAVE_PLUSEQ_OP"; then eval 'func_append_quoted () { $debug_cmd func_quote_for_eval "$2" eval "$1+=\\ \$func_quote_for_eval_result" }' else func_append_quoted () { $debug_cmd func_quote_for_eval "$2" eval "$1=\$$1\\ \$func_quote_for_eval_result" } fi # func_append_uniq VAR VALUE # -------------------------- # Append unique VALUE onto the existing contents of VAR, assuming # entries are delimited by the first character of VALUE. For example: # # func_append_uniq options " --another-option option-argument" # # will only append to $options if " --another-option option-argument " # is not already present somewhere in $options already (note spaces at # each end implied by leading space in second argument). func_append_uniq () { $debug_cmd eval _G_current_value='`$ECHO $'$1'`' _G_delim=`expr "$2" : '\(.\)'` case $_G_delim$_G_current_value$_G_delim in *"$2$_G_delim"*) ;; *) func_append "$@" ;; esac } # func_arith TERM... # ------------------ # Set func_arith_result to the result of evaluating TERMs. test -z "$_G_HAVE_ARITH_OP" \ && (eval 'test 2 = $(( 1 + 1 ))') 2>/dev/null \ && _G_HAVE_ARITH_OP=yes if test yes = "$_G_HAVE_ARITH_OP"; then eval 'func_arith () { $debug_cmd func_arith_result=$(( $* )) }' else func_arith () { $debug_cmd func_arith_result=`expr "$@"` } fi # func_basename FILE # ------------------ # Set func_basename_result to FILE with everything up to and including # the last / stripped. if test yes = "$_G_HAVE_XSI_OPS"; then # If this shell supports suffix pattern removal, then use it to avoid # forking. Hide the definitions single quotes in case the shell chokes # on unsupported syntax... _b='func_basename_result=${1##*/}' _d='case $1 in */*) func_dirname_result=${1%/*}$2 ;; * ) func_dirname_result=$3 ;; esac' else # ...otherwise fall back to using sed. _b='func_basename_result=`$ECHO "$1" |$SED "$sed_basename"`' _d='func_dirname_result=`$ECHO "$1" |$SED "$sed_dirname"` if test "X$func_dirname_result" = "X$1"; then func_dirname_result=$3 else func_append func_dirname_result "$2" fi' fi eval 'func_basename () { $debug_cmd '"$_b"' }' # func_dirname FILE APPEND NONDIR_REPLACEMENT # ------------------------------------------- # Compute the dirname of FILE. If nonempty, add APPEND to the result, # otherwise set result to NONDIR_REPLACEMENT. 
eval 'func_dirname () { $debug_cmd '"$_d"' }' # func_dirname_and_basename FILE APPEND NONDIR_REPLACEMENT # -------------------------------------------------------- # Perform func_basename and func_dirname in a single function # call: # dirname: Compute the dirname of FILE. If nonempty, # add APPEND to the result, otherwise set result # to NONDIR_REPLACEMENT. # value returned in "$func_dirname_result" # basename: Compute filename of FILE. # value retuned in "$func_basename_result" # For efficiency, we do not delegate to the functions above but instead # duplicate the functionality here. eval 'func_dirname_and_basename () { $debug_cmd '"$_b"' '"$_d"' }' # func_echo ARG... # ---------------- # Echo program name prefixed message. func_echo () { $debug_cmd _G_message=$* func_echo_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_IFS $ECHO "$progname: $_G_line" done IFS=$func_echo_IFS } # func_echo_all ARG... # -------------------- # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "$*" } # func_echo_infix_1 INFIX ARG... # ------------------------------ # Echo program name, followed by INFIX on the first line, with any # additional lines not showing INFIX. func_echo_infix_1 () { $debug_cmd $require_term_colors _G_infix=$1; shift _G_indent=$_G_infix _G_prefix="$progname: $_G_infix: " _G_message=$* # Strip color escape sequences before counting printable length for _G_tc in "$tc_reset" "$tc_bold" "$tc_standout" "$tc_red" "$tc_green" "$tc_blue" "$tc_cyan" do test -n "$_G_tc" && { _G_esc_tc=`$ECHO "$_G_tc" | $SED "$sed_make_literal_regex"` _G_indent=`$ECHO "$_G_indent" | $SED "s|$_G_esc_tc||g"` } done _G_indent="$progname: "`echo "$_G_indent" | $SED 's|.| |g'`" " ## exclude from sc_prohibit_nested_quotes func_echo_infix_1_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_infix_1_IFS $ECHO "$_G_prefix$tc_bold$_G_line$tc_reset" >&2 _G_prefix=$_G_indent done IFS=$func_echo_infix_1_IFS } # func_error ARG... # ----------------- # Echo program name prefixed message to standard error. func_error () { $debug_cmd $require_term_colors func_echo_infix_1 " $tc_standout${tc_red}error$tc_reset" "$*" >&2 } # func_fatal_error ARG... # ----------------------- # Echo program name prefixed message to standard error, and exit. func_fatal_error () { $debug_cmd func_error "$*" exit $EXIT_FAILURE } # func_grep EXPRESSION FILENAME # ----------------------------- # Check whether EXPRESSION matches any line of FILENAME, without output. func_grep () { $debug_cmd $GREP "$1" "$2" >/dev/null 2>&1 } # func_len STRING # --------------- # Set func_len_result to the length of STRING. STRING may not # start with a hyphen. test -z "$_G_HAVE_XSI_OPS" \ && (eval 'x=a/b/c; test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \ && _G_HAVE_XSI_OPS=yes if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_len () { $debug_cmd func_len_result=${#1} }' else func_len () { $debug_cmd func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len` } fi # func_mkdir_p DIRECTORY-PATH # --------------------------- # Make sure the entire path to DIRECTORY-PATH is available. func_mkdir_p () { $debug_cmd _G_directory_path=$1 _G_dir_list= if test -n "$_G_directory_path" && test : != "$opt_dry_run"; then # Protect directory names starting with '-' case $_G_directory_path in -*) _G_directory_path=./$_G_directory_path ;; esac # While some portion of DIR does not yet exist... while test ! -d "$_G_directory_path"; do # ...make a list in topmost first order. 
Use a colon delimited # list incase some portion of path contains whitespace. _G_dir_list=$_G_directory_path:$_G_dir_list # If the last portion added has no slash in it, the list is done case $_G_directory_path in */*) ;; *) break ;; esac # ...otherwise throw away the child directory and loop _G_directory_path=`$ECHO "$_G_directory_path" | $SED -e "$sed_dirname"` done _G_dir_list=`$ECHO "$_G_dir_list" | $SED 's|:*$||'` func_mkdir_p_IFS=$IFS; IFS=: for _G_dir in $_G_dir_list; do IFS=$func_mkdir_p_IFS # mkdir can fail with a 'File exist' error if two processes # try to create one of the directories concurrently. Don't # stop in that case! $MKDIR "$_G_dir" 2>/dev/null || : done IFS=$func_mkdir_p_IFS # Bail out if we (or some other process) failed to create a directory. test -d "$_G_directory_path" || \ func_fatal_error "Failed to create '$1'" fi } # func_mktempdir [BASENAME] # ------------------------- # Make a temporary directory that won't clash with other running # libtool processes, and avoids race conditions if possible. If # given, BASENAME is the basename for that directory. func_mktempdir () { $debug_cmd _G_template=${TMPDIR-/tmp}/${1-$progname} if test : = "$opt_dry_run"; then # Return a directory name, but don't create it in dry-run mode _G_tmpdir=$_G_template-$$ else # If mktemp works, use that first and foremost _G_tmpdir=`mktemp -d "$_G_template-XXXXXXXX" 2>/dev/null` if test ! -d "$_G_tmpdir"; then # Failing that, at least try and use $RANDOM to avoid a race _G_tmpdir=$_G_template-${RANDOM-0}$$ func_mktempdir_umask=`umask` umask 0077 $MKDIR "$_G_tmpdir" umask $func_mktempdir_umask fi # If we're not in dry-run mode, bomb out on failure test -d "$_G_tmpdir" || \ func_fatal_error "cannot create temporary directory '$_G_tmpdir'" fi $ECHO "$_G_tmpdir" } # func_normal_abspath PATH # ------------------------ # Remove doubled-up and trailing slashes, "." path components, # and cancel out any ".." path components in PATH after making # it an absolute path. func_normal_abspath () { $debug_cmd # These SED scripts presuppose an absolute path with a trailing slash. _G_pathcar='s|^/\([^/]*\).*$|\1|' _G_pathcdr='s|^/[^/]*||' _G_removedotparts=':dotsl s|/\./|/|g t dotsl s|/\.$|/|' _G_collapseslashes='s|/\{1,\}|/|g' _G_finalslash='s|/*$|/|' # Start from root dir and reassemble the path. func_normal_abspath_result= func_normal_abspath_tpath=$1 func_normal_abspath_altnamespace= case $func_normal_abspath_tpath in "") # Empty path, that just means $cwd. func_stripname '' '/' "`pwd`" func_normal_abspath_result=$func_stripname_result return ;; # The next three entries are used to spot a run of precisely # two leading slashes without using negated character classes; # we take advantage of case's first-match behaviour. ///*) # Unusual form of absolute path, do nothing. ;; //*) # Not necessarily an ordinary path; POSIX reserves leading '//' # and for example Cygwin uses it to access remote file shares # over CIFS/SMB, so we conserve a leading double slash if found. func_normal_abspath_altnamespace=/ ;; /*) # Absolute path, do nothing. ;; *) # Relative path, prepend $cwd. func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath ;; esac # Cancel out all the simple stuff to save iterations. We also want # the path to end with a slash for ease of parsing, so make sure # there is one (and only one) here. func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_removedotparts" -e "$_G_collapseslashes" -e "$_G_finalslash"` while :; do # Processed it all yet? 
if test / = "$func_normal_abspath_tpath"; then # If we ascended to the root using ".." the result may be empty now. if test -z "$func_normal_abspath_result"; then func_normal_abspath_result=/ fi break fi func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_pathcar"` func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$_G_pathcdr"` # Figure out what to do with it case $func_normal_abspath_tcomponent in "") # Trailing empty path component, ignore it. ;; ..) # Parent dir; strip last assembled component from result. func_dirname "$func_normal_abspath_result" func_normal_abspath_result=$func_dirname_result ;; *) # Actual path component, append it. func_append func_normal_abspath_result "/$func_normal_abspath_tcomponent" ;; esac done # Restore leading double-slash if one was found on entry. func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result } # func_notquiet ARG... # -------------------- # Echo program name prefixed message only when not in quiet mode. func_notquiet () { $debug_cmd $opt_quiet || func_echo ${1+"$@"} # A bug in bash halts the script if the last line of a function # fails when set -e is in force, so we need another command to # work around that: : } # func_relative_path SRCDIR DSTDIR # -------------------------------- # Set func_relative_path_result to the relative path from SRCDIR to DSTDIR. func_relative_path () { $debug_cmd func_relative_path_result= func_normal_abspath "$1" func_relative_path_tlibdir=$func_normal_abspath_result func_normal_abspath "$2" func_relative_path_tbindir=$func_normal_abspath_result # Ascend the tree starting from libdir while :; do # check if we have found a prefix of bindir case $func_relative_path_tbindir in $func_relative_path_tlibdir) # found an exact match func_relative_path_tcancelled= break ;; $func_relative_path_tlibdir*) # found a matching prefix func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir" func_relative_path_tcancelled=$func_stripname_result if test -z "$func_relative_path_result"; then func_relative_path_result=. fi break ;; *) func_dirname $func_relative_path_tlibdir func_relative_path_tlibdir=$func_dirname_result if test -z "$func_relative_path_tlibdir"; then # Have to descend all the way to the root! func_relative_path_result=../$func_relative_path_result func_relative_path_tcancelled=$func_relative_path_tbindir break fi func_relative_path_result=../$func_relative_path_result ;; esac done # Now calculate path; take care to avoid doubling-up slashes. func_stripname '' '/' "$func_relative_path_result" func_relative_path_result=$func_stripname_result func_stripname '/' '/' "$func_relative_path_tcancelled" if test -n "$func_stripname_result"; then func_append func_relative_path_result "/$func_stripname_result" fi # Normalisation. If bindir is libdir, return '.' else relative path. if test -n "$func_relative_path_result"; then func_stripname './' '' "$func_relative_path_result" func_relative_path_result=$func_stripname_result fi test -n "$func_relative_path_result" || func_relative_path_result=. : } # func_quote_for_eval ARG... # -------------------------- # Aesthetically quote ARGs to be evaled later. # This function returns two values: # i) func_quote_for_eval_result # double-quoted, suitable for a subsequent eval # ii) func_quote_for_eval_unquoted_result # has all characters that are still active within double # quotes backslashified. 
func_quote_for_eval () { $debug_cmd func_quote_for_eval_unquoted_result= func_quote_for_eval_result= while test 0 -lt $#; do case $1 in *[\\\`\"\$]*) _G_unquoted_arg=`printf '%s\n' "$1" |$SED "$sed_quote_subst"` ;; *) _G_unquoted_arg=$1 ;; esac if test -n "$func_quote_for_eval_unquoted_result"; then func_append func_quote_for_eval_unquoted_result " $_G_unquoted_arg" else func_append func_quote_for_eval_unquoted_result "$_G_unquoted_arg" fi case $_G_unquoted_arg in # Double-quote args containing shell metacharacters to delay # word splitting, command substitution and variable expansion # for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") _G_quoted_arg=\"$_G_unquoted_arg\" ;; *) _G_quoted_arg=$_G_unquoted_arg ;; esac if test -n "$func_quote_for_eval_result"; then func_append func_quote_for_eval_result " $_G_quoted_arg" else func_append func_quote_for_eval_result "$_G_quoted_arg" fi shift done } # func_quote_for_expand ARG # ------------------------- # Aesthetically quote ARG to be evaled later; same as above, # but do not quote variable references. func_quote_for_expand () { $debug_cmd case $1 in *[\\\`\"]*) _G_arg=`$ECHO "$1" | $SED \ -e "$sed_double_quote_subst" -e "$sed_double_backslash"` ;; *) _G_arg=$1 ;; esac case $_G_arg in # Double-quote args containing shell metacharacters to delay # word splitting and command substitution for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") _G_arg=\"$_G_arg\" ;; esac func_quote_for_expand_result=$_G_arg } # func_stripname PREFIX SUFFIX NAME # --------------------------------- # strip PREFIX and SUFFIX from NAME, and store in func_stripname_result. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_stripname () { $debug_cmd # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are # positional parameters, so assign one to ordinary variable first. func_stripname_result=$3 func_stripname_result=${func_stripname_result#"$1"} func_stripname_result=${func_stripname_result%"$2"} }' else func_stripname () { $debug_cmd case $2 in .*) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%\\\\$2\$%%"`;; *) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%$2\$%%"`;; esac } fi # func_show_eval CMD [FAIL_EXP] # ----------------------------- # Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. func_show_eval () { $debug_cmd _G_cmd=$1 _G_fail_exp=${2-':'} func_quote_for_expand "$_G_cmd" eval "func_notquiet $func_quote_for_expand_result" $opt_dry_run || { eval "$_G_cmd" _G_status=$? if test 0 -ne "$_G_status"; then eval "(exit $_G_status); $_G_fail_exp" fi } } # func_show_eval_locale CMD [FAIL_EXP] # ------------------------------------ # Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. Use the saved locale for evaluation. 
func_show_eval_locale () { $debug_cmd _G_cmd=$1 _G_fail_exp=${2-':'} $opt_quiet || { func_quote_for_expand "$_G_cmd" eval "func_echo $func_quote_for_expand_result" } $opt_dry_run || { eval "$_G_user_locale $_G_cmd" _G_status=$? eval "$_G_safe_locale" if test 0 -ne "$_G_status"; then eval "(exit $_G_status); $_G_fail_exp" fi } } # func_tr_sh # ---------- # Turn $1 into a string suitable for a shell variable name. # Result is stored in $func_tr_sh_result. All characters # not in the set a-zA-Z0-9_ are replaced with '_'. Further, # if $1 begins with a digit, a '_' is prepended as well. func_tr_sh () { $debug_cmd case $1 in [0-9]* | *[!a-zA-Z0-9_]*) func_tr_sh_result=`$ECHO "$1" | $SED -e 's/^\([0-9]\)/_\1/' -e 's/[^a-zA-Z0-9_]/_/g'` ;; * ) func_tr_sh_result=$1 ;; esac } # func_verbose ARG... # ------------------- # Echo program name prefixed message in verbose mode only. func_verbose () { $debug_cmd $opt_verbose && func_echo "$*" : } # func_warn_and_continue ARG... # ----------------------------- # Echo program name prefixed warning message to standard error. func_warn_and_continue () { $debug_cmd $require_term_colors func_echo_infix_1 "${tc_red}warning$tc_reset" "$*" >&2 } # func_warning CATEGORY ARG... # ---------------------------- # Echo program name prefixed warning message to standard error. Warning # messages can be filtered according to CATEGORY, where this function # elides messages where CATEGORY is not listed in the global variable # 'opt_warning_types'. func_warning () { $debug_cmd # CATEGORY must be in the warning_categories list! case " $warning_categories " in *" $1 "*) ;; *) func_internal_error "invalid warning category '$1'" ;; esac _G_category=$1 shift case " $opt_warning_types " in *" $_G_category "*) $warning_func ${1+"$@"} ;; esac } # func_sort_ver VER1 VER2 # ----------------------- # 'sort -V' is not generally available. # Note this deviates from the version comparison in automake # in that it treats 1.5 < 1.5.0, and treats 1.4.4a < 1.4-p3a # but this should suffice as we won't be specifying old # version formats or redundant trailing .0 in bootstrap.conf. # If we did want full compatibility then we should probably # use m4_version_compare from autoconf. func_sort_ver () { $debug_cmd printf '%s\n%s\n' "$1" "$2" \ | sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n -k 5,5n -k 6,6n -k 7,7n -k 8,8n -k 9,9n } # func_lt_ver PREV CURR # --------------------- # Return true if PREV and CURR are in the correct order according to # func_sort_ver, otherwise false. Use it like this: # # func_lt_ver "$prev_ver" "$proposed_ver" || func_fatal_error "..." func_lt_ver () { $debug_cmd test "x$1" = x`func_sort_ver "$1" "$2" | $SED 1q` } # Local variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-pattern: "10/scriptversion=%:y-%02m-%02d.%02H; # UTC" # time-stamp-time-zone: "UTC" # End: #! /bin/sh # Set a version string for this script. scriptversion=2015-10-07.11; # UTC # A portable, pluggable option parser for Bourne shell. # Written by Gary V. Vaughan, 2010 # Copyright (C) 2010-2015 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # Please report bugs or propose patches to gary@gnu.org. ## ------ ## ## Usage. ## ## ------ ## # This file is a library for parsing options in your shell scripts along # with assorted other useful supporting features that you can make use # of too. # # For the simplest scripts you might need only: # # #!/bin/sh # . relative/path/to/funclib.sh # . relative/path/to/options-parser # scriptversion=1.0 # func_options ${1+"$@"} # eval set dummy "$func_options_result"; shift # ...rest of your script... # # In order for the '--version' option to work, you will need to have a # suitably formatted comment like the one at the top of this file # starting with '# Written by ' and ending with '# warranty; '. # # For '-h' and '--help' to work, you will also need a one line # description of your script's purpose in a comment directly above the # '# Written by ' line, like the one at the top of this file. # # The default options also support '--debug', which will turn on shell # execution tracing (see the comment above debug_cmd below for another # use), and '--verbose' and the func_verbose function to allow your script # to display verbose messages only when your user has specified # '--verbose'. # # After sourcing this file, you can plug processing for additional # options by amending the variables from the 'Configuration' section # below, and following the instructions in the 'Option parsing' # section further down. ## -------------- ## ## Configuration. ## ## -------------- ## # You should override these variables in your script after sourcing this # file so that they reflect the customisations you have added to the # option parser. # The usage line for option parsing errors and the start of '-h' and # '--help' output messages. You can embed shell variables for delayed # expansion at the time the message is displayed, but you will need to # quote other shell meta-characters carefully to prevent them being # expanded when the contents are evaled. usage='$progpath [OPTION]...' # Short help message in response to '-h' and '--help'. Add to this or # override it after sourcing this library to reflect the full set of # options your script accepts. usage_message="\ --debug enable verbose shell tracing -W, --warnings=CATEGORY report the warnings falling in CATEGORY [all] -v, --verbose verbosely report processing --version print version information and exit -h, --help print short or long help message and exit " # Additional text appended to 'usage_message' in response to '--help'. long_help_message=" Warning categories include: 'all' show all warnings 'none' turn off all the warnings 'error' warnings are treated as fatal errors" # Help message printed before fatal option parsing errors. fatal_help="Try '\$progname --help' for more information." ## ------------------------- ## ## Hook function management. ## ## ------------------------- ## # This section contains functions for adding, removing, and running hooks # to the main code. A hook is just a named list of of function, that can # be run in order later on. # func_hookable FUNC_NAME # ----------------------- # Declare that FUNC_NAME will run hooks added with # 'func_add_hook FUNC_NAME ...'. 
func_hookable () { $debug_cmd func_append hookable_fns " $1" } # func_add_hook FUNC_NAME HOOK_FUNC # --------------------------------- # Request that FUNC_NAME call HOOK_FUNC before it returns. FUNC_NAME must # first have been declared "hookable" by a call to 'func_hookable'. func_add_hook () { $debug_cmd case " $hookable_fns " in *" $1 "*) ;; *) func_fatal_error "'$1' does not accept hook functions." ;; esac eval func_append ${1}_hooks '" $2"' } # func_remove_hook FUNC_NAME HOOK_FUNC # ------------------------------------ # Remove HOOK_FUNC from the list of functions called by FUNC_NAME. func_remove_hook () { $debug_cmd eval ${1}_hooks='`$ECHO "\$'$1'_hooks" |$SED "s| '$2'||"`' } # func_run_hooks FUNC_NAME [ARG]... # --------------------------------- # Run all hook functions registered to FUNC_NAME. # It is assumed that the list of hook functions contains nothing more # than a whitespace-delimited list of legal shell function names, and # no effort is wasted trying to catch shell meta-characters or preserve # whitespace. func_run_hooks () { $debug_cmd _G_rc_run_hooks=false case " $hookable_fns " in *" $1 "*) ;; *) func_fatal_error "'$1' does not support hook funcions.n" ;; esac eval _G_hook_fns=\$$1_hooks; shift for _G_hook in $_G_hook_fns; do if eval $_G_hook '"$@"'; then # store returned options list back into positional # parameters for next 'cmd' execution. eval _G_hook_result=\$${_G_hook}_result eval set dummy "$_G_hook_result"; shift _G_rc_run_hooks=: fi done $_G_rc_run_hooks && func_run_hooks_result=$_G_hook_result } ## --------------- ## ## Option parsing. ## ## --------------- ## # In order to add your own option parsing hooks, you must accept the # full positional parameter list in your hook function, you may remove/edit # any options that you action, and then pass back the remaining unprocessed # options in '_result', escaped suitably for # 'eval'. In this case you also must return $EXIT_SUCCESS to let the # hook's caller know that it should pay attention to # '_result'. Returning $EXIT_FAILURE signalizes that # arguments are left untouched by the hook and therefore caller will ignore the # result variable. # # Like this: # # my_options_prep () # { # $debug_cmd # # # Extend the existing usage message. # usage_message=$usage_message' # -s, --silent don'\''t print informational messages # ' # # No change in '$@' (ignored completely by this hook). There is # # no need to do the equivalent (but slower) action: # # func_quote_for_eval ${1+"$@"} # # my_options_prep_result=$func_quote_for_eval_result # false # } # func_add_hook func_options_prep my_options_prep # # # my_silent_option () # { # $debug_cmd # # args_changed=false # # # Note that for efficiency, we parse as many options as we can # # recognise in a loop before passing the remainder back to the # # caller on the first unrecognised argument we encounter. # while test $# -gt 0; do # opt=$1; shift # case $opt in # --silent|-s) opt_silent=: # args_changed=: # ;; # # Separate non-argument short options: # -s*) func_split_short_opt "$_G_opt" # set dummy "$func_split_short_opt_name" \ # "-$func_split_short_opt_arg" ${1+"$@"} # shift # args_changed=: # ;; # *) # Make sure the first unrecognised option "$_G_opt" # # is added back to "$@", we could need that later # # if $args_changed is true. 
# set dummy "$_G_opt" ${1+"$@"}; shift; break ;; # esac # done # # if $args_changed; then # func_quote_for_eval ${1+"$@"} # my_silent_option_result=$func_quote_for_eval_result # fi # # $args_changed # } # func_add_hook func_parse_options my_silent_option # # # my_option_validation () # { # $debug_cmd # # $opt_silent && $opt_verbose && func_fatal_help "\ # '--silent' and '--verbose' options are mutually exclusive." # # false # } # func_add_hook func_validate_options my_option_validation # # You'll also need to manually amend $usage_message to reflect the extra # options you parse. It's preferable to append if you can, so that # multiple option parsing hooks can be added safely. # func_options_finish [ARG]... # ---------------------------- # Finishing the option parse loop (call 'func_options' hooks ATM). func_options_finish () { $debug_cmd _G_func_options_finish_exit=false if func_run_hooks func_options ${1+"$@"}; then func_options_finish_result=$func_run_hooks_result _G_func_options_finish_exit=: fi $_G_func_options_finish_exit } # func_options [ARG]... # --------------------- # All the functions called inside func_options are hookable. See the # individual implementations for details. func_hookable func_options func_options () { $debug_cmd _G_rc_options=false for my_func in options_prep parse_options validate_options options_finish do if eval func_$my_func '${1+"$@"}'; then eval _G_res_var='$'"func_${my_func}_result" eval set dummy "$_G_res_var" ; shift _G_rc_options=: fi done # Save modified positional parameters for caller. As a top-level # options-parser function we always need to set the 'func_options_result' # variable (regardless the $_G_rc_options value). if $_G_rc_options; then func_options_result=$_G_res_var else func_quote_for_eval ${1+"$@"} func_options_result=$func_quote_for_eval_result fi $_G_rc_options } # func_options_prep [ARG]... # -------------------------- # All initialisations required before starting the option parse loop. # Note that when calling hook functions, we pass through the list of # positional parameters. If a hook function modifies that list, and # needs to propagate that back to rest of this script, then the complete # modified list must be put in 'func_run_hooks_result' before # returning $EXIT_SUCCESS (otherwise $EXIT_FAILURE is returned). func_hookable func_options_prep func_options_prep () { $debug_cmd # Option defaults: opt_verbose=false opt_warning_types= _G_rc_options_prep=false if func_run_hooks func_options_prep ${1+"$@"}; then _G_rc_options_prep=: # save modified positional parameters for caller func_options_prep_result=$func_run_hooks_result fi $_G_rc_options_prep } # func_parse_options [ARG]... # --------------------------- # The main option parsing loop. func_hookable func_parse_options func_parse_options () { $debug_cmd func_parse_options_result= _G_rc_parse_options=false # this just eases exit handling while test $# -gt 0; do # Defer to hook functions for initial option parsing, so they # get priority in the event of reusing an option name. if func_run_hooks func_parse_options ${1+"$@"}; then eval set dummy "$func_run_hooks_result"; shift _G_rc_parse_options=: fi # Break out of the loop if we already parsed every option. 
test $# -gt 0 || break _G_match_parse_options=: _G_opt=$1 shift case $_G_opt in --debug|-x) debug_cmd='set -x' func_echo "enabling shell trace mode" $debug_cmd ;; --no-warnings|--no-warning|--no-warn) set dummy --warnings none ${1+"$@"} shift ;; --warnings|--warning|-W) if test $# = 0 && func_missing_arg $_G_opt; then _G_rc_parse_options=: break fi case " $warning_categories $1" in *" $1 "*) # trailing space prevents matching last $1 above func_append_uniq opt_warning_types " $1" ;; *all) opt_warning_types=$warning_categories ;; *none) opt_warning_types=none warning_func=: ;; *error) opt_warning_types=$warning_categories warning_func=func_fatal_error ;; *) func_fatal_error \ "unsupported warning category: '$1'" ;; esac shift ;; --verbose|-v) opt_verbose=: ;; --version) func_version ;; -\?|-h) func_usage ;; --help) func_help ;; # Separate optargs to long options (plugins may need this): --*=*) func_split_equals "$_G_opt" set dummy "$func_split_equals_lhs" \ "$func_split_equals_rhs" ${1+"$@"} shift ;; # Separate optargs to short options: -W*) func_split_short_opt "$_G_opt" set dummy "$func_split_short_opt_name" \ "$func_split_short_opt_arg" ${1+"$@"} shift ;; # Separate non-argument short options: -\?*|-h*|-v*|-x*) func_split_short_opt "$_G_opt" set dummy "$func_split_short_opt_name" \ "-$func_split_short_opt_arg" ${1+"$@"} shift ;; --) _G_rc_parse_options=: ; break ;; -*) func_fatal_help "unrecognised option: '$_G_opt'" ;; *) set dummy "$_G_opt" ${1+"$@"}; shift _G_match_parse_options=false break ;; esac $_G_match_parse_options && _G_rc_parse_options=: done if $_G_rc_parse_options; then # save modified positional parameters for caller func_quote_for_eval ${1+"$@"} func_parse_options_result=$func_quote_for_eval_result fi $_G_rc_parse_options } # func_validate_options [ARG]... # ------------------------------ # Perform any sanity checks on option settings and/or unconsumed # arguments. func_hookable func_validate_options func_validate_options () { $debug_cmd _G_rc_validate_options=false # Display all warnings if -W was not given. test -n "$opt_warning_types" || opt_warning_types=" $warning_categories" if func_run_hooks func_validate_options ${1+"$@"}; then # save modified positional parameters for caller func_validate_options_result=$func_run_hooks_result _G_rc_validate_options=: fi # Bail if the options were screwed! $exit_cmd $EXIT_FAILURE $_G_rc_validate_options } ## ----------------- ## ## Helper functions. ## ## ----------------- ## # This section contains the helper functions used by the rest of the # hookable option parser framework in ascii-betical order. # func_fatal_help ARG... # ---------------------- # Echo program name prefixed message to standard error, followed by # a help hint, and exit. func_fatal_help () { $debug_cmd eval \$ECHO \""Usage: $usage"\" eval \$ECHO \""$fatal_help"\" func_error ${1+"$@"} exit $EXIT_FAILURE } # func_help # --------- # Echo long help message to standard output and exit. func_help () { $debug_cmd func_usage_message $ECHO "$long_help_message" exit 0 } # func_missing_arg ARGNAME # ------------------------ # Echo program name prefixed message to standard error and set global # exit_cmd. func_missing_arg () { $debug_cmd func_error "Missing argument for '$1'." exit_cmd=exit } # func_split_equals STRING # ------------------------ # Set func_split_equals_lhs and func_split_equals_rhs shell variables after # splitting STRING at the '=' sign. 
test -z "$_G_HAVE_XSI_OPS" \ && (eval 'x=a/b/c; test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \ && _G_HAVE_XSI_OPS=yes if test yes = "$_G_HAVE_XSI_OPS" then # This is an XSI compatible shell, allowing a faster implementation... eval 'func_split_equals () { $debug_cmd func_split_equals_lhs=${1%%=*} func_split_equals_rhs=${1#*=} test "x$func_split_equals_lhs" = "x$1" \ && func_split_equals_rhs= }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_split_equals () { $debug_cmd func_split_equals_lhs=`expr "x$1" : 'x\([^=]*\)'` func_split_equals_rhs= test "x$func_split_equals_lhs" = "x$1" \ || func_split_equals_rhs=`expr "x$1" : 'x[^=]*=\(.*\)$'` } fi #func_split_equals # func_split_short_opt SHORTOPT # ----------------------------- # Set func_split_short_opt_name and func_split_short_opt_arg shell # variables after splitting SHORTOPT after the 2nd character. if test yes = "$_G_HAVE_XSI_OPS" then # This is an XSI compatible shell, allowing a faster implementation... eval 'func_split_short_opt () { $debug_cmd func_split_short_opt_arg=${1#??} func_split_short_opt_name=${1%"$func_split_short_opt_arg"} }' else # ...otherwise fall back to using expr, which is often a shell builtin. func_split_short_opt () { $debug_cmd func_split_short_opt_name=`expr "x$1" : 'x-\(.\)'` func_split_short_opt_arg=`expr "x$1" : 'x-.\(.*\)$'` } fi #func_split_short_opt # func_usage # ---------- # Echo short help message to standard output and exit. func_usage () { $debug_cmd func_usage_message $ECHO "Run '$progname --help |${PAGER-more}' for full usage" exit 0 } # func_usage_message # ------------------ # Echo short help message to standard output. func_usage_message () { $debug_cmd eval \$ECHO \""Usage: $usage"\" echo $SED -n 's|^# || /^Written by/{ x;p;x } h /^Written by/q' < "$progpath" echo eval \$ECHO \""$usage_message"\" } # func_version # ------------ # Echo version message to standard output and exit. func_version () { $debug_cmd printf '%s\n' "$progname $scriptversion" $SED -n ' /(C)/!b go :more /\./!{ N s|\n# | | b more } :go /^# Written by /,/# warranty; / { s|^# || s|^# *$|| s|\((C)\)[ 0-9,-]*[ ,-]\([1-9][0-9]* \)|\1 \2| p } /^# Written by / { s|^# || p } /^warranty; /q' < "$progpath" exit $? } # Local variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-pattern: "10/scriptversion=%:y-%02m-%02d.%02H; # UTC" # time-stamp-time-zone: "UTC" # End: # Set a version string. scriptversion='(GNU libtool) 2.4.6' # func_echo ARG... # ---------------- # Libtool also displays the current mode in messages, so override # funclib.sh func_echo with this custom definition. func_echo () { $debug_cmd _G_message=$* func_echo_IFS=$IFS IFS=$nl for _G_line in $_G_message; do IFS=$func_echo_IFS $ECHO "$progname${opt_mode+: $opt_mode}: $_G_line" done IFS=$func_echo_IFS } # func_warning ARG... # ------------------- # Libtool warnings are not categorized, so override funclib.sh # func_warning with this simpler definition. func_warning () { $debug_cmd $warning_func ${1+"$@"} } ## ---------------- ## ## Options parsing. ## ## ---------------- ## # Hook in the functions to make sure our own options are parsed during # the option parsing loop. usage='$progpath [OPTION]... [MODE-ARG]...' # Short help message in response to '-h'. 
usage_message="Options: --config show all configuration variables --debug enable verbose shell tracing -n, --dry-run display commands without modifying any files --features display basic configuration information and exit --mode=MODE use operation mode MODE --no-warnings equivalent to '-Wnone' --preserve-dup-deps don't remove duplicate dependency libraries --quiet, --silent don't print informational messages --tag=TAG use configuration variables from tag TAG -v, --verbose print more informational messages than default --version print version information -W, --warnings=CATEGORY report the warnings falling in CATEGORY [all] -h, --help, --help-all print short, long, or detailed help message " # Additional text appended to 'usage_message' in response to '--help'. func_help () { $debug_cmd func_usage_message $ECHO "$long_help_message MODE must be one of the following: clean remove files from the build directory compile compile a source file into a libtool object execute automatically set library path, then run a program finish complete the installation of libtool libraries install install libraries or executables link create a library or an executable uninstall remove libraries from an installed directory MODE-ARGS vary depending on the MODE. When passed as first option, '--mode=MODE' may be abbreviated as 'MODE' or a unique abbreviation of that. Try '$progname --help --mode=MODE' for a more detailed description of MODE. When reporting a bug, please describe a test case to reproduce it and include the following information: host-triplet: $host shell: $SHELL compiler: $LTCC compiler flags: $LTCFLAGS linker: $LD (gnu? $with_gnu_ld) version: $progname $scriptversion Debian-2.4.6-9 automake: `($AUTOMAKE --version) 2>/dev/null |$SED 1q` autoconf: `($AUTOCONF --version) 2>/dev/null |$SED 1q` Report bugs to . GNU libtool home page: . General help using GNU software: ." exit 0 } # func_lo2o OBJECT-NAME # --------------------- # Transform OBJECT-NAME from a '.lo' suffix to the platform specific # object suffix. lo2o=s/\\.lo\$/.$objext/ o2lo=s/\\.$objext\$/.lo/ if test yes = "$_G_HAVE_XSI_OPS"; then eval 'func_lo2o () { case $1 in *.lo) func_lo2o_result=${1%.lo}.$objext ;; * ) func_lo2o_result=$1 ;; esac }' # func_xform LIBOBJ-OR-SOURCE # --------------------------- # Transform LIBOBJ-OR-SOURCE from a '.o' or '.c' (or otherwise) # suffix to a '.lo' libtool-object suffix. eval 'func_xform () { func_xform_result=${1%.*}.lo }' else # ...otherwise fall back to using sed. func_lo2o () { func_lo2o_result=`$ECHO "$1" | $SED "$lo2o"` } func_xform () { func_xform_result=`$ECHO "$1" | $SED 's|\.[^.]*$|.lo|'` } fi # func_fatal_configuration ARG... # ------------------------------- # Echo program name prefixed message to standard error, followed by # a configuration failure hint, and exit. func_fatal_configuration () { func__fatal_error ${1+"$@"} \ "See the $PACKAGE documentation for more information." \ "Fatal configuration error." } # func_config # ----------- # Display the configuration for all the tags in this script. func_config () { re_begincf='^# ### BEGIN LIBTOOL' re_endcf='^# ### END LIBTOOL' # Default configuration. $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" # Now print the configurations for the tags. for tagname in $taglist; do $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" done exit $? } # func_features # ------------- # Display the features supported by this script. 
func_features () { echo "host: $host" if test yes = "$build_libtool_libs"; then echo "enable shared libraries" else echo "disable shared libraries" fi if test yes = "$build_old_libs"; then echo "enable static libraries" else echo "disable static libraries" fi exit $? } # func_enable_tag TAGNAME # ----------------------- # Verify that TAGNAME is valid, and either flag an error and exit, or # enable the TAGNAME tag. We also add TAGNAME to the global $taglist # variable here. func_enable_tag () { # Global variable: tagname=$1 re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" sed_extractcf=/$re_begincf/,/$re_endcf/p # Validate tagname. case $tagname in *[!-_A-Za-z0-9,/]*) func_fatal_error "invalid tag name: $tagname" ;; esac # Don't test for the "default" C tag, as we know it's # there but not specially marked. case $tagname in CC) ;; *) if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then taglist="$taglist $tagname" # Evaluate the configuration. Be careful to quote the path # and the sed script, to avoid splitting on whitespace, but # also don't use non-portable quotes within backquotes within # quotes we have to do it in 2 steps: extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` eval "$extractedcf" else func_error "ignoring unknown tag $tagname" fi ;; esac } # func_check_version_match # ------------------------ # Ensure that we are using m4 macros, and libtool script from the same # release of libtool. func_check_version_match () { if test "$package_revision" != "$macro_revision"; then if test "$VERSION" != "$macro_version"; then if test -z "$macro_version"; then cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from an older release. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from $PACKAGE $macro_version. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF fi else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, $progname: but the definition of this LT_INIT comes from revision $macro_revision. $progname: You should recreate aclocal.m4 with macros from revision $package_revision $progname: of $PACKAGE $VERSION and run autoconf again. _LT_EOF fi exit $EXIT_MISMATCH fi } # libtool_options_prep [ARG]... # ----------------------------- # Preparation for options parsed by libtool. 
libtool_options_prep () { $debug_mode # Option defaults: opt_config=false opt_dlopen= opt_dry_run=false opt_help=false opt_mode= opt_preserve_dup_deps=false opt_quiet=false nonopt= preserve_args= _G_rc_lt_options_prep=: # Shorthand for --mode=foo, only valid as the first argument case $1 in clean|clea|cle|cl) shift; set dummy --mode clean ${1+"$@"}; shift ;; compile|compil|compi|comp|com|co|c) shift; set dummy --mode compile ${1+"$@"}; shift ;; execute|execut|execu|exec|exe|ex|e) shift; set dummy --mode execute ${1+"$@"}; shift ;; finish|finis|fini|fin|fi|f) shift; set dummy --mode finish ${1+"$@"}; shift ;; install|instal|insta|inst|ins|in|i) shift; set dummy --mode install ${1+"$@"}; shift ;; link|lin|li|l) shift; set dummy --mode link ${1+"$@"}; shift ;; uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) shift; set dummy --mode uninstall ${1+"$@"}; shift ;; *) _G_rc_lt_options_prep=false ;; esac if $_G_rc_lt_options_prep; then # Pass back the list of options. func_quote_for_eval ${1+"$@"} libtool_options_prep_result=$func_quote_for_eval_result fi $_G_rc_lt_options_prep } func_add_hook func_options_prep libtool_options_prep # libtool_parse_options [ARG]... # --------------------------------- # Provide handling for libtool specific options. libtool_parse_options () { $debug_cmd _G_rc_lt_parse_options=false # Perform our own loop to consume as many options as possible in # each iteration. while test $# -gt 0; do _G_match_lt_parse_options=: _G_opt=$1 shift case $_G_opt in --dry-run|--dryrun|-n) opt_dry_run=: ;; --config) func_config ;; --dlopen|-dlopen) opt_dlopen="${opt_dlopen+$opt_dlopen }$1" shift ;; --preserve-dup-deps) opt_preserve_dup_deps=: ;; --features) func_features ;; --finish) set dummy --mode finish ${1+"$@"}; shift ;; --help) opt_help=: ;; --help-all) opt_help=': help-all' ;; --mode) test $# = 0 && func_missing_arg $_G_opt && break opt_mode=$1 case $1 in # Valid mode arguments: clean|compile|execute|finish|install|link|relink|uninstall) ;; # Catch anything else as an error *) func_error "invalid argument for $_G_opt" exit_cmd=exit break ;; esac shift ;; --no-silent|--no-quiet) opt_quiet=false func_append preserve_args " $_G_opt" ;; --no-warnings|--no-warning|--no-warn) opt_warning=false func_append preserve_args " $_G_opt" ;; --no-verbose) opt_verbose=false func_append preserve_args " $_G_opt" ;; --silent|--quiet) opt_quiet=: opt_verbose=false func_append preserve_args " $_G_opt" ;; --tag) test $# = 0 && func_missing_arg $_G_opt && break opt_tag=$1 func_append preserve_args " $_G_opt $1" func_enable_tag "$1" shift ;; --verbose|-v) opt_quiet=false opt_verbose=: func_append preserve_args " $_G_opt" ;; # An option not handled by this hook function: *) set dummy "$_G_opt" ${1+"$@"} ; shift _G_match_lt_parse_options=false break ;; esac $_G_match_lt_parse_options && _G_rc_lt_parse_options=: done if $_G_rc_lt_parse_options; then # save modified positional parameters for caller func_quote_for_eval ${1+"$@"} libtool_parse_options_result=$func_quote_for_eval_result fi $_G_rc_lt_parse_options } func_add_hook func_parse_options libtool_parse_options # libtool_validate_options [ARG]... # --------------------------------- # Perform any sanity checks on option settings and/or unconsumed # arguments. 
libtool_validate_options () { # save first non-option argument if test 0 -lt $#; then nonopt=$1 shift fi # preserve --debug test : = "$debug_cmd" || func_append preserve_args " --debug" case $host in # Solaris2 added to fix http://debbugs.gnu.org/cgi/bugreport.cgi?bug=16452 # see also: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=59788 *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* | *os2*) # don't eliminate duplications in $postdeps and $predeps opt_duplicate_compiler_generated_deps=: ;; *) opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps ;; esac $opt_help || { # Sanity checks first: func_check_version_match test yes != "$build_libtool_libs" \ && test yes != "$build_old_libs" \ && func_fatal_configuration "not configured to build any kind of library" # Darwin sucks eval std_shrext=\"$shrext_cmds\" # Only execute mode is allowed to have -dlopen flags. if test -n "$opt_dlopen" && test execute != "$opt_mode"; then func_error "unrecognized option '-dlopen'" $ECHO "$help" 1>&2 exit $EXIT_FAILURE fi # Change the help message to a mode-specific one. generic_help=$help help="Try '$progname --help --mode=$opt_mode' for more information." } # Pass back the unparsed argument list func_quote_for_eval ${1+"$@"} libtool_validate_options_result=$func_quote_for_eval_result } func_add_hook func_validate_options libtool_validate_options # Process options as early as possible so that --help and --version # can return quickly. func_options ${1+"$@"} eval set dummy "$func_options_result"; shift ## ----------- ## ## Main. ## ## ----------- ## magic='%%%MAGIC variable%%%' magic_exe='%%%MAGIC EXE variable%%%' # Global variables. extracted_archives= extracted_serial=0 # If this variable is set in any of the actions, the command in it # will be execed at the end. This prevents here-documents from being # left over by shells. exec_cmd= # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } # func_generated_by_libtool # True iff stdin has been generated by Libtool. This function is only # a basic sanity check; it will hardly flush out determined imposters. func_generated_by_libtool_p () { $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 } # func_lalib_p file # True iff FILE is a libtool '.la' library or '.lo' object file. # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_lalib_p () { test -f "$1" && $SED -e 4q "$1" 2>/dev/null | func_generated_by_libtool_p } # func_lalib_unsafe_p file # True iff FILE is a libtool '.la' library or '.lo' object file. # This function implements the same check as func_lalib_p without # resorting to external programs. To this end, it redirects stdin and # closes it afterwards, without saving the original file descriptor. # As a safety measure, use it only where a negative result would be # fatal anyway. Works if 'file' does not exist. func_lalib_unsafe_p () { lalib_p=no if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then for lalib_p_l in 1 2 3 4 do read lalib_p_line case $lalib_p_line in \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; esac done exec 0<&5 5<&- fi test yes = "$lalib_p" } # func_ltwrapper_script_p file # True iff FILE is a libtool wrapper script # This function is only a basic sanity check; it will hardly flush out # determined imposters. 
func_ltwrapper_script_p () { test -f "$1" && $lt_truncate_bin < "$1" 2>/dev/null | func_generated_by_libtool_p } # func_ltwrapper_executable_p file # True iff FILE is a libtool wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_executable_p () { func_ltwrapper_exec_suffix= case $1 in *.exe) ;; *) func_ltwrapper_exec_suffix=.exe ;; esac $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 } # func_ltwrapper_scriptname file # Assumes file is an ltwrapper_executable # uses $file to determine the appropriate filename for a # temporary ltwrapper_script. func_ltwrapper_scriptname () { func_dirname_and_basename "$1" "" "." func_stripname '' '.exe' "$func_basename_result" func_ltwrapper_scriptname_result=$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper } # func_ltwrapper_p file # True iff FILE is a libtool wrapper script or wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_p () { func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" } # func_execute_cmds commands fail_cmd # Execute tilde-delimited COMMANDS. # If FAIL_CMD is given, eval that upon failure. # FAIL_CMD may read-access the current command in variable CMD! func_execute_cmds () { $debug_cmd save_ifs=$IFS; IFS='~' for cmd in $1; do IFS=$sp$nl eval cmd=\"$cmd\" IFS=$save_ifs func_show_eval "$cmd" "${2-:}" done IFS=$save_ifs } # func_source file # Source FILE, adding directory component if necessary. # Note that it is not necessary on cygwin/mingw to append a dot to # FILE even if both FILE and FILE.exe exist: automatic-append-.exe # behavior happens only for exec(3), not for open(2)! Also, sourcing # 'FILE.' does not work on cygwin managed mounts. func_source () { $debug_cmd case $1 in */* | *\\*) . "$1" ;; *) . "./$1" ;; esac } # func_resolve_sysroot PATH # Replace a leading = in PATH with a sysroot. Store the result into # func_resolve_sysroot_result func_resolve_sysroot () { func_resolve_sysroot_result=$1 case $func_resolve_sysroot_result in =*) func_stripname '=' '' "$func_resolve_sysroot_result" func_resolve_sysroot_result=$lt_sysroot$func_stripname_result ;; esac } # func_replace_sysroot PATH # If PATH begins with the sysroot, replace it with = and # store the result into func_replace_sysroot_result. func_replace_sysroot () { case $lt_sysroot:$1 in ?*:"$lt_sysroot"*) func_stripname "$lt_sysroot" '' "$1" func_replace_sysroot_result='='$func_stripname_result ;; *) # Including no sysroot. func_replace_sysroot_result=$1 ;; esac } # func_infer_tag arg # Infer tagged configuration to use if any are available and # if one wasn't chosen via the "--tag" command line option. # Only attempt this if the compiler in the base compile # command doesn't match the default compiler. # arg is usually of the form 'gcc ...' func_infer_tag () { $debug_cmd if test -n "$available_tags" && test -z "$tagname"; then CC_quoted= for arg in $CC; do func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case $@ in # Blanks in the command may have been stripped by the calling shell, # but not from the CC environment variable when configure was run. " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;; # Blanks at the start of $base_compile will cause this to fail # if we don't check for them as well. 
*) for z in $available_tags; do if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then # Evaluate the configuration. eval "`$SED -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`" CC_quoted= for arg in $CC; do # Double-quote args containing other shell metacharacters. func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case "$@ " in " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) # The compiler in the base compile command matches # the one in the tagged configuration. # Assume this is the tagged configuration we want. tagname=$z break ;; esac fi done # If $tagname still isn't set, then no tagged configuration # was found and let the user know that the "--tag" command # line option must be used. if test -z "$tagname"; then func_echo "unable to infer tagged configuration" func_fatal_error "specify a tag with '--tag'" # else # func_verbose "using $tagname tagged configuration" fi ;; esac fi } # func_write_libtool_object output_name pic_name nonpic_name # Create a libtool object file (analogous to a ".la" file), # but don't create it if we're doing a dry run. func_write_libtool_object () { write_libobj=$1 if test yes = "$build_libtool_libs"; then write_lobj=\'$2\' else write_lobj=none fi if test yes = "$build_old_libs"; then write_oldobj=\'$3\' else write_oldobj=none fi $opt_dry_run || { cat >${write_libobj}T </dev/null` if test "$?" -eq 0 && test -n "$func_convert_core_file_wine_to_w32_tmp"; then func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" | $SED -e "$sed_naive_backslashify"` else func_convert_core_file_wine_to_w32_result= fi fi } # end: func_convert_core_file_wine_to_w32 # func_convert_core_path_wine_to_w32 ARG # Helper function used by path conversion functions when $build is *nix, and # $host is mingw, cygwin, or some other w32 environment. Relies on a correctly # configured wine environment available, with the winepath program in $build's # $PATH. Assumes ARG has no leading or trailing path separator characters. # # ARG is path to be converted from $build format to win32. # Result is available in $func_convert_core_path_wine_to_w32_result. # Unconvertible file (directory) names in ARG are skipped; if no directory names # are convertible, then the result may be empty. func_convert_core_path_wine_to_w32 () { $debug_cmd # unfortunately, winepath doesn't convert paths, only file names func_convert_core_path_wine_to_w32_result= if test -n "$1"; then oldIFS=$IFS IFS=: for func_convert_core_path_wine_to_w32_f in $1; do IFS=$oldIFS func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f" if test -n "$func_convert_core_file_wine_to_w32_result"; then if test -z "$func_convert_core_path_wine_to_w32_result"; then func_convert_core_path_wine_to_w32_result=$func_convert_core_file_wine_to_w32_result else func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result" fi fi done IFS=$oldIFS fi } # end: func_convert_core_path_wine_to_w32 # func_cygpath ARGS... # Wrapper around calling the cygpath program via LT_CYGPATH. This is used when # when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2) # $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. 
In case (1) or # (2), returns the Cygwin file name or path in func_cygpath_result (input # file name or path is assumed to be in w32 format, as previously converted # from $build's *nix or MSYS format). In case (3), returns the w32 file name # or path in func_cygpath_result (input file name or path is assumed to be in # Cygwin format). Returns an empty string on error. # # ARGS are passed to cygpath, with the last one being the file name or path to # be converted. # # Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH # environment variable; do not put it in $PATH. func_cygpath () { $debug_cmd if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null` if test "$?" -ne 0; then # on failure, ensure result is empty func_cygpath_result= fi else func_cygpath_result= func_error "LT_CYGPATH is empty or specifies non-existent file: '$LT_CYGPATH'" fi } #end: func_cygpath # func_convert_core_msys_to_w32 ARG # Convert file name or path ARG from MSYS format to w32 format. Return # result in func_convert_core_msys_to_w32_result. func_convert_core_msys_to_w32 () { $debug_cmd # awkward: cmd appends spaces to result func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null | $SED -e 's/[ ]*$//' -e "$sed_naive_backslashify"` } #end: func_convert_core_msys_to_w32 # func_convert_file_check ARG1 ARG2 # Verify that ARG1 (a file name in $build format) was converted to $host # format in ARG2. Otherwise, emit an error message, but continue (resetting # func_to_host_file_result to ARG1). func_convert_file_check () { $debug_cmd if test -z "$2" && test -n "$1"; then func_error "Could not determine host file name corresponding to" func_error " '$1'" func_error "Continuing, but uninstalled executables may not work." # Fallback: func_to_host_file_result=$1 fi } # end func_convert_file_check # func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH # Verify that FROM_PATH (a path in $build format) was converted to $host # format in TO_PATH. Otherwise, emit an error message, but continue, resetting # func_to_host_file_result to a simplistic fallback value (see below). func_convert_path_check () { $debug_cmd if test -z "$4" && test -n "$3"; then func_error "Could not determine the host path corresponding to" func_error " '$3'" func_error "Continuing, but uninstalled executables may not work." # Fallback. This is a deliberately simplistic "conversion" and # should not be "improved". See libtool.info. if test "x$1" != "x$2"; then lt_replace_pathsep_chars="s|$1|$2|g" func_to_host_path_result=`echo "$3" | $SED -e "$lt_replace_pathsep_chars"` else func_to_host_path_result=$3 fi fi } # end func_convert_path_check # func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG # Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT # and appending REPL if ORIG matches BACKPAT. func_convert_path_front_back_pathsep () { $debug_cmd case $4 in $1 ) func_to_host_path_result=$3$func_to_host_path_result ;; esac case $4 in $2 ) func_append func_to_host_path_result "$3" ;; esac } # end func_convert_path_front_back_pathsep ################################################## # $build to $host FILE NAME CONVERSION FUNCTIONS # ################################################## # invoked via '$to_host_file_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # Result will be available in $func_to_host_file_result. 
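#
# A minimal usage sketch (illustrative only; in practice to_host_file_cmd
# is filled in from the libtool configuration rather than set by hand).
# For a Cygwin-to-w32 conversion the dispatch would look roughly like:
#
#   to_host_file_cmd=func_convert_file_cygwin_to_w32
#   func_to_host_file "$srcfile"
#   $ECHO "$func_to_host_file_result"   # converted ($host-format) file name
#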
# func_to_host_file ARG # Converts the file name ARG from $build format to $host format. Return result # in func_to_host_file_result. func_to_host_file () { $debug_cmd $to_host_file_cmd "$1" } # end func_to_host_file # func_to_tool_file ARG LAZY # converts the file name ARG from $build format to toolchain format. Return # result in func_to_tool_file_result. If the conversion in use is listed # in (the comma separated) LAZY, no conversion takes place. func_to_tool_file () { $debug_cmd case ,$2, in *,"$to_tool_file_cmd",*) func_to_tool_file_result=$1 ;; *) $to_tool_file_cmd "$1" func_to_tool_file_result=$func_to_host_file_result ;; esac } # end func_to_tool_file # func_convert_file_noop ARG # Copy ARG to func_to_host_file_result. func_convert_file_noop () { func_to_host_file_result=$1 } # end func_convert_file_noop # func_convert_file_msys_to_w32 ARG # Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_file_result. func_convert_file_msys_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_to_host_file_result=$func_convert_core_msys_to_w32_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_w32 # func_convert_file_cygwin_to_w32 ARG # Convert file name ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_file_cygwin_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then # because $build is cygwin, we call "the" cygpath in $PATH; no need to use # LT_CYGPATH in this case. func_to_host_file_result=`cygpath -m "$1"` fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_cygwin_to_w32 # func_convert_file_nix_to_w32 ARG # Convert file name ARG from *nix to w32 format. Requires a wine environment # and a working winepath. Returns result in func_to_host_file_result. func_convert_file_nix_to_w32 () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_file_wine_to_w32 "$1" func_to_host_file_result=$func_convert_core_file_wine_to_w32_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_w32 # func_convert_file_msys_to_cygwin ARG # Convert file name ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. func_convert_file_msys_to_cygwin () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_cygpath -u "$func_convert_core_msys_to_w32_result" func_to_host_file_result=$func_cygpath_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_cygwin # func_convert_file_nix_to_cygwin ARG # Convert file name ARG from *nix to Cygwin format. Requires Cygwin installed # in a wine environment, working winepath, and LT_CYGPATH set. Returns result # in func_to_host_file_result. func_convert_file_nix_to_cygwin () { $debug_cmd func_to_host_file_result=$1 if test -n "$1"; then # convert from *nix to w32, then use cygpath to convert from w32 to cygwin. 
func_convert_core_file_wine_to_w32 "$1" func_cygpath -u "$func_convert_core_file_wine_to_w32_result" func_to_host_file_result=$func_cygpath_result fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_cygwin ############################################# # $build to $host PATH CONVERSION FUNCTIONS # ############################################# # invoked via '$to_host_path_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # The result will be available in $func_to_host_path_result. # # Path separators are also converted from $build format to $host format. If # ARG begins or ends with a path separator character, it is preserved (but # converted to $host format) on output. # # All path conversion functions are named using the following convention: # file name conversion function : func_convert_file_X_to_Y () # path conversion function : func_convert_path_X_to_Y () # where, for any given $build/$host combination the 'X_to_Y' value is the # same. If conversion functions are added for new $build/$host combinations, # the two new functions must follow this pattern, or func_init_to_host_path_cmd # will break. # func_init_to_host_path_cmd # Ensures that function "pointer" variable $to_host_path_cmd is set to the # appropriate value, based on the value of $to_host_file_cmd. to_host_path_cmd= func_init_to_host_path_cmd () { $debug_cmd if test -z "$to_host_path_cmd"; then func_stripname 'func_convert_file_' '' "$to_host_file_cmd" to_host_path_cmd=func_convert_path_$func_stripname_result fi } # func_to_host_path ARG # Converts the path ARG from $build format to $host format. Return result # in func_to_host_path_result. func_to_host_path () { $debug_cmd func_init_to_host_path_cmd $to_host_path_cmd "$1" } # end func_to_host_path # func_convert_path_noop ARG # Copy ARG to func_to_host_path_result. func_convert_path_noop () { func_to_host_path_result=$1 } # end func_convert_path_noop # func_convert_path_msys_to_w32 ARG # Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_path_result. func_convert_path_msys_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # Remove leading and trailing path separator characters from ARG. MSYS # behavior is inconsistent here; cygpath turns them into '.;' and ';.'; # and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result=$func_convert_core_msys_to_w32_result func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_msys_to_w32 # func_convert_path_cygwin_to_w32 ARG # Convert path ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_path_cygwin_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"` func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_cygwin_to_w32 # func_convert_path_nix_to_w32 ARG # Convert path ARG from *nix to w32 format. 
Requires a wine environment and # a working winepath. Returns result in func_to_host_file_result. func_convert_path_nix_to_w32 () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result=$func_convert_core_path_wine_to_w32_result func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_nix_to_w32 # func_convert_path_msys_to_cygwin ARG # Convert path ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. func_convert_path_msys_to_cygwin () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_msys_to_w32_result" func_to_host_path_result=$func_cygpath_result func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_msys_to_cygwin # func_convert_path_nix_to_cygwin ARG # Convert path ARG from *nix to Cygwin format. Requires Cygwin installed in a # a wine environment, working winepath, and LT_CYGPATH set. Returns result in # func_to_host_file_result. func_convert_path_nix_to_cygwin () { $debug_cmd func_to_host_path_result=$1 if test -n "$1"; then # Remove leading and trailing path separator characters from # ARG. msys behavior is inconsistent here, cygpath turns them # into '.;' and ';.', and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result" func_to_host_path_result=$func_cygpath_result func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_nix_to_cygwin # func_dll_def_p FILE # True iff FILE is a Windows DLL '.def' file. # Keep in sync with _LT_DLL_DEF_P in libtool.m4 func_dll_def_p () { $debug_cmd func_dll_def_p_tmp=`$SED -n \ -e 's/^[ ]*//' \ -e '/^\(;.*\)*$/d' \ -e 's/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p' \ -e q \ "$1"` test DEF = "$func_dll_def_p_tmp" } # func_mode_compile arg... func_mode_compile () { $debug_cmd # Get the compilation command and the source file. base_compile= srcfile=$nonopt # always keep a non-empty value in "srcfile" suppress_opt=yes suppress_output= arg_mode=normal libobj= later= pie_flag= for arg do case $arg_mode in arg ) # do not "continue". Instead, add this to base_compile lastarg=$arg arg_mode=normal ;; target ) libobj=$arg arg_mode=normal continue ;; normal ) # Accept any command-line options. case $arg in -o) test -n "$libobj" && \ func_fatal_error "you cannot specify '-o' more than once" arg_mode=target continue ;; -pie | -fpie | -fPIE) func_append pie_flag " $arg" continue ;; -shared | -static | -prefer-pic | -prefer-non-pic) func_append later " $arg" continue ;; -no-suppress) suppress_opt=no continue ;; -Xcompiler) arg_mode=arg # the next one goes into the "base_compile" arg list continue # The current "srcfile" will either be retained or ;; # replaced later. I would guess that would be a bug. 
-Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result lastarg= save_ifs=$IFS; IFS=, for arg in $args; do IFS=$save_ifs func_append_quoted lastarg "$arg" done IFS=$save_ifs func_stripname ' ' '' "$lastarg" lastarg=$func_stripname_result # Add the arguments to base_compile. func_append base_compile " $lastarg" continue ;; *) # Accept the current argument as the source file. # The previous "srcfile" becomes the current argument. # lastarg=$srcfile srcfile=$arg ;; esac # case $arg ;; esac # case $arg_mode # Aesthetically quote the previous argument. func_append_quoted base_compile "$lastarg" done # for arg case $arg_mode in arg) func_fatal_error "you must specify an argument for -Xcompile" ;; target) func_fatal_error "you must specify a target with '-o'" ;; *) # Get the name of the library object. test -z "$libobj" && { func_basename "$srcfile" libobj=$func_basename_result } ;; esac # Recognize several different file suffixes. # If the user specifies -o file.o, it is replaced with file.lo case $libobj in *.[cCFSifmso] | \ *.ada | *.adb | *.ads | *.asm | \ *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \ *.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup) func_xform "$libobj" libobj=$func_xform_result ;; esac case $libobj in *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;; *) func_fatal_error "cannot determine name of library object from '$libobj'" ;; esac func_infer_tag $base_compile for arg in $later; do case $arg in -shared) test yes = "$build_libtool_libs" \ || func_fatal_configuration "cannot build a shared library" build_old_libs=no continue ;; -static) build_libtool_libs=no build_old_libs=yes continue ;; -prefer-pic) pic_mode=yes continue ;; -prefer-non-pic) pic_mode=no continue ;; esac done func_quote_for_eval "$libobj" test "X$libobj" != "X$func_quote_for_eval_result" \ && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \ && func_warning "libobj name '$libobj' may not contain shell special characters." func_dirname_and_basename "$obj" "/" "" objname=$func_basename_result xdir=$func_dirname_result lobj=$xdir$objdir/$objname test -z "$base_compile" && \ func_fatal_help "you must specify a compilation command" # Delete any leftover library objects. if test yes = "$build_old_libs"; then removelist="$obj $lobj $libobj ${libobj}T" else removelist="$lobj $libobj ${libobj}T" fi # On Cygwin there's no "real" PIC flag so we must build both object types case $host_os in cygwin* | mingw* | pw32* | os2* | cegcc*) pic_mode=default ;; esac if test no = "$pic_mode" && test pass_all != "$deplibs_check_method"; then # non-PIC code in shared libraries is not supported pic_mode=default fi # Calculate the filename of the output object if compiler does # not support -o with -c if test no = "$compiler_c_o"; then output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.$objext lockfile=$output_obj.lock else output_obj= need_locks=no lockfile= fi # Lock this critical section if it is needed # We use this script file to make the link, it avoids creating a new file if test yes = "$need_locks"; then until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done elif test warn = "$need_locks"; then if test -f "$lockfile"; then $ECHO "\ *** ERROR, $lockfile exists and contains: `cat $lockfile 2>/dev/null` This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. 
If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi func_append removelist " $output_obj" $ECHO "$srcfile" > "$lockfile" fi $opt_dry_run || $RM $removelist func_append removelist " $lockfile" trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15 func_to_tool_file "$srcfile" func_convert_file_msys_to_w32 srcfile=$func_to_tool_file_result func_quote_for_eval "$srcfile" qsrcfile=$func_quote_for_eval_result # Only build a PIC object if we are building libtool libraries. if test yes = "$build_libtool_libs"; then # Without this assignment, base_compile gets emptied. fbsd_hideous_sh_bug=$base_compile if test no != "$pic_mode"; then command="$base_compile $qsrcfile $pic_flag" else # Don't build PIC code command="$base_compile $qsrcfile" fi func_mkdir_p "$xdir$objdir" if test -z "$output_obj"; then # Place PIC objects in $objdir func_append command " -o $lobj" fi func_show_eval_locale "$command" \ 'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE' if test warn = "$need_locks" && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed, then go on to compile the next one if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then func_show_eval '$MV "$output_obj" "$lobj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi # Allow error messages only from the first compilation. if test yes = "$suppress_opt"; then suppress_output=' >/dev/null 2>&1' fi fi # Only build a position-dependent object if we build old libraries. if test yes = "$build_old_libs"; then if test yes != "$pic_mode"; then # Don't build PIC code command="$base_compile $qsrcfile$pie_flag" else command="$base_compile $qsrcfile $pic_flag" fi if test yes = "$compiler_c_o"; then func_append command " -o $obj" fi # Suppress compiler output if we already did a PIC compilation. func_append command "$suppress_output" func_show_eval_locale "$command" \ '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' if test warn = "$need_locks" && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support '-c' and '-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." 
$opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then func_show_eval '$MV "$output_obj" "$obj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi fi $opt_dry_run || { func_write_libtool_object "$libobj" "$objdir/$objname" "$objname" # Unlock the critical section if it was locked if test no != "$need_locks"; then removelist=$lockfile $RM "$lockfile" fi } exit $EXIT_SUCCESS } $opt_help || { test compile = "$opt_mode" && func_mode_compile ${1+"$@"} } func_mode_help () { # We need to display help for each of the modes. case $opt_mode in "") # Generic help is extracted from the usage comments # at the start of this file. func_help ;; clean) $ECHO \ "Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE... Remove files from the build directory. RM is the name of the program to use to delete files associated with each FILE (typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed to RM. If FILE is a libtool library, object or program, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." ;; compile) $ECHO \ "Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE Compile a source file into a libtool library object. This mode accepts the following additional options: -o OUTPUT-FILE set the output file name to OUTPUT-FILE -no-suppress do not suppress compiler output for multiple passes -prefer-pic try to build PIC objects only -prefer-non-pic try to build non-PIC objects only -shared do not build a '.o' file suitable for static linking -static only build a '.o' file suitable for static linking -Wc,FLAG pass FLAG directly to the compiler COMPILE-COMMAND is a command to be used in creating a 'standard' object file from the given SOURCEFILE. The output file name is determined by removing the directory component from SOURCEFILE, then substituting the C source code suffix '.c' with the library object suffix, '.lo'." ;; execute) $ECHO \ "Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]... Automatically set library path, then run a program. This mode accepts the following additional options: -dlopen FILE add the directory containing FILE to the library path This mode sets the library path environment variable according to '-dlopen' flags. If any of the ARGS are libtool executable wrappers, then they are translated into their corresponding uninstalled binary, and any of their required library directories are added to the library path. Then, COMMAND is executed, with ARGS as arguments." ;; finish) $ECHO \ "Usage: $progname [OPTION]... --mode=finish [LIBDIR]... Complete the installation of libtool libraries. Each LIBDIR is a directory that contains libtool libraries. The commands that this mode executes may require superuser privileges. Use the '--dry-run' option if you just want to see what would be executed." ;; install) $ECHO \ "Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND... Install executables or libraries. INSTALL-COMMAND is the installation command. The first component should be either the 'install' or 'cp' program. The following components of INSTALL-COMMAND are treated specially: -inst-prefix-dir PREFIX-DIR Use PREFIX-DIR as a staging area for installation The rest of the components are interpreted as arguments to that command (only BSD-compatible install options are recognized)." ;; link) $ECHO \ "Usage: $progname [OPTION]... --mode=link LINK-COMMAND... 
Link object files or libraries together to form another library, or to create an executable program. LINK-COMMAND is a command using the C compiler that you would use to create a program from several object files. The following components of LINK-COMMAND are treated specially: -all-static do not do any dynamic linking at all -avoid-version do not add a version suffix if possible -bindir BINDIR specify path to binaries directory (for systems where libraries must be found in the PATH setting at runtime) -dlopen FILE '-dlpreopen' FILE if it cannot be dlopened at runtime -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3) -export-symbols SYMFILE try to export only the symbols listed in SYMFILE -export-symbols-regex REGEX try to export only the symbols matching REGEX -LLIBDIR search LIBDIR for required installed libraries -lNAME OUTPUT-FILE requires the installed library libNAME -module build a library that can dlopened -no-fast-install disable the fast-install mode -no-install link a not-installable executable -no-undefined declare that a library does not refer to external symbols -o OUTPUT-FILE create OUTPUT-FILE from the specified objects -objectlist FILE use a list of object files found in FILE to specify objects -os2dllname NAME force a short DLL name on OS/2 (no effect on other OSes) -precious-files-regex REGEX don't remove output files matching REGEX -release RELEASE specify package release information -rpath LIBDIR the created library will eventually be installed in LIBDIR -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries -shared only do dynamic linking of libtool libraries -shrext SUFFIX override the standard shared library file extension -static do not do any dynamic linking of uninstalled libtool libraries -static-libtool-libs do not do any dynamic linking of libtool libraries -version-info CURRENT[:REVISION[:AGE]] specify library version info [each variable defaults to 0] -weak LIBNAME declare that the target provides the LIBNAME interface -Wc,FLAG -Xcompiler FLAG pass linker-specific FLAG directly to the compiler -Wl,FLAG -Xlinker FLAG pass linker-specific FLAG directly to the linker -XCClinker FLAG pass link-specific FLAG to the compiler driver (CC) All other options (arguments beginning with '-') are ignored. Every other argument is treated as a filename. Files ending in '.la' are treated as uninstalled libtool libraries, other files are standard or library object files. If the OUTPUT-FILE ends in '.la', then a libtool library is created, only library objects ('.lo' files) may be specified, and '-rpath' is required, except when creating a convenience library. If OUTPUT-FILE ends in '.a' or '.lib', then a standard library is created using 'ar' and 'ranlib', or on Windows using 'lib'. If OUTPUT-FILE ends in '.lo' or '.$objext', then a reloadable object file is created, otherwise an executable program is created." ;; uninstall) $ECHO \ "Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE... Remove libraries from an installation directory. RM is the name of the program to use to delete files associated with each FILE (typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed to RM. If FILE is a libtool library, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." 
;; *) func_fatal_help "invalid operation mode '$opt_mode'" ;; esac echo $ECHO "Try '$progname --help' for more information about other modes." } # Now that we've collected a possible --mode arg, show help if necessary if $opt_help; then if test : = "$opt_help"; then func_mode_help else { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do func_mode_help done } | $SED -n '1p; 2,$s/^Usage:/ or: /p' { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do echo func_mode_help done } | $SED '1d /^When reporting/,/^Report/{ H d } $x /information about other modes/d /more detailed .*MODE/d s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/' fi exit $? fi # func_mode_execute arg... func_mode_execute () { $debug_cmd # The first argument is the command name. cmd=$nonopt test -z "$cmd" && \ func_fatal_help "you must specify a COMMAND" # Handle -dlopen flags immediately. for file in $opt_dlopen; do test -f "$file" \ || func_fatal_help "'$file' is not a file" dir= case $file in *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "'$lib' is not a valid libtool archive" # Read the libtool library. dlname= library_names= func_source "$file" # Skip this library if it cannot be dlopened. if test -z "$dlname"; then # Warn if it was a shared library. test -n "$library_names" && \ func_warning "'$file' was not linked with '-export-dynamic'" continue fi func_dirname "$file" "" "." dir=$func_dirname_result if test -f "$dir/$objdir/$dlname"; then func_append dir "/$objdir" else if test ! -f "$dir/$dlname"; then func_fatal_error "cannot find '$dlname' in '$dir' or '$dir/$objdir'" fi fi ;; *.lo) # Just add the directory containing the .lo file. func_dirname "$file" "" "." dir=$func_dirname_result ;; *) func_warning "'-dlopen' is ignored for non-libtool libraries and objects" continue ;; esac # Get the absolute pathname. absdir=`cd "$dir" && pwd` test -n "$absdir" && dir=$absdir # Now add the directory to shlibpath_var. if eval "test -z \"\$$shlibpath_var\""; then eval "$shlibpath_var=\"\$dir\"" else eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" fi done # This variable tells wrapper scripts just to set shlibpath_var # rather than running their programs. libtool_execute_magic=$magic # Check if any of the arguments is a wrapper script. args= for file do case $file in -* | *.la | *.lo ) ;; *) # Do a test to see if this is really a libtool program. if func_ltwrapper_script_p "$file"; then func_source "$file" # Transform arg to wrapped name. file=$progdir/$program elif func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" func_source "$func_ltwrapper_scriptname_result" # Transform arg to wrapped name. file=$progdir/$program fi ;; esac # Quote arguments (to preserve shell metacharacters). func_append_quoted args "$file" done if $opt_dry_run; then # Display what would be done. if test -n "$shlibpath_var"; then eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" echo "export $shlibpath_var" fi $ECHO "$cmd$args" exit $EXIT_SUCCESS else if test -n "$shlibpath_var"; then # Export the shlibpath_var. eval "export $shlibpath_var" fi # Restore saved environment variables for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${save_$lt_var+set}\" = set; then $lt_var=\$save_$lt_var; export $lt_var else $lt_unset $lt_var fi" done # Now prepare to actually exec the command. 
exec_cmd=\$cmd$args fi } test execute = "$opt_mode" && func_mode_execute ${1+"$@"} # func_mode_finish arg... func_mode_finish () { $debug_cmd libs= libdirs= admincmds= for opt in "$nonopt" ${1+"$@"} do if test -d "$opt"; then func_append libdirs " $opt" elif test -f "$opt"; then if func_lalib_unsafe_p "$opt"; then func_append libs " $opt" else func_warning "'$opt' is not a valid libtool archive" fi else func_fatal_error "invalid argument '$opt'" fi done if test -n "$libs"; then if test -n "$lt_sysroot"; then sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"` sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;" else sysroot_cmd= fi # Remove sysroot references if $opt_dry_run; then for lib in $libs; do echo "removing references to $lt_sysroot and '=' prefixes from $lib" done else tmpdir=`func_mktempdir` for lib in $libs; do $SED -e "$sysroot_cmd s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \ > $tmpdir/tmp-la mv -f $tmpdir/tmp-la $lib done ${RM}r "$tmpdir" fi fi if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then for libdir in $libdirs; do if test -n "$finish_cmds"; then # Do each command in the finish commands. func_execute_cmds "$finish_cmds" 'admincmds="$admincmds '"$cmd"'"' fi if test -n "$finish_eval"; then # Do the single finish_eval. eval cmds=\"$finish_eval\" $opt_dry_run || eval "$cmds" || func_append admincmds " $cmds" fi done fi # Exit here if they wanted silent mode. $opt_quiet && exit $EXIT_SUCCESS if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then echo "----------------------------------------------------------------------" echo "Libraries have been installed in:" for libdir in $libdirs; do $ECHO " $libdir" done echo echo "If you ever happen to want to link against installed libraries" echo "in a given directory, LIBDIR, you must either use libtool, and" echo "specify the full pathname of the library, or use the '-LLIBDIR'" echo "flag during linking and do at least one of the following:" if test -n "$shlibpath_var"; then echo " - add LIBDIR to the '$shlibpath_var' environment variable" echo " during execution" fi if test -n "$runpath_var"; then echo " - add LIBDIR to the '$runpath_var' environment variable" echo " during linking" fi if test -n "$hardcode_libdir_flag_spec"; then libdir=LIBDIR eval flag=\"$hardcode_libdir_flag_spec\" $ECHO " - use the '$flag' linker flag" fi if test -n "$admincmds"; then $ECHO " - have your system administrator run these commands:$admincmds" fi if test -f /etc/ld.so.conf; then echo " - have your system administrator add LIBDIR to '/etc/ld.so.conf'" fi echo echo "See any operating system documentation about shared libraries for" case $host in solaris2.[6789]|solaris2.1[0-9]) echo "more information, such as the ld(1), crle(1) and ld.so(8) manual" echo "pages." ;; *) echo "more information, such as the ld(1) and ld.so(8) manual pages." ;; esac echo "----------------------------------------------------------------------" fi exit $EXIT_SUCCESS } test finish = "$opt_mode" && func_mode_finish ${1+"$@"} # func_mode_install arg... func_mode_install () { $debug_cmd # There may be an optional sh(1) argument at the beginning of # install_prog (especially on Windows NT). if test "$SHELL" = "$nonopt" || test /bin/sh = "$nonopt" || # Allow the use of GNU shtool's install command. case $nonopt in *shtool*) :;; *) false;; esac then # Aesthetically quote it. 
func_quote_for_eval "$nonopt" install_prog="$func_quote_for_eval_result " arg=$1 shift else install_prog= arg=$nonopt fi # The real first argument should be the name of the installation program. # Aesthetically quote it. func_quote_for_eval "$arg" func_append install_prog "$func_quote_for_eval_result" install_shared_prog=$install_prog case " $install_prog " in *[\\\ /]cp\ *) install_cp=: ;; *) install_cp=false ;; esac # We need to accept at least all the BSD install flags. dest= files= opts= prev= install_type= isdir=false stripme= no_mode=: for arg do arg2= if test -n "$dest"; then func_append files " $dest" dest=$arg continue fi case $arg in -d) isdir=: ;; -f) if $install_cp; then :; else prev=$arg fi ;; -g | -m | -o) prev=$arg ;; -s) stripme=" -s" continue ;; -*) ;; *) # If the previous option needed an argument, then skip it. if test -n "$prev"; then if test X-m = "X$prev" && test -n "$install_override_mode"; then arg2=$install_override_mode no_mode=false fi prev= else dest=$arg continue fi ;; esac # Aesthetically quote the argument. func_quote_for_eval "$arg" func_append install_prog " $func_quote_for_eval_result" if test -n "$arg2"; then func_quote_for_eval "$arg2" fi func_append install_shared_prog " $func_quote_for_eval_result" done test -z "$install_prog" && \ func_fatal_help "you must specify an install program" test -n "$prev" && \ func_fatal_help "the '$prev' option requires an argument" if test -n "$install_override_mode" && $no_mode; then if $install_cp; then :; else func_quote_for_eval "$install_override_mode" func_append install_shared_prog " -m $func_quote_for_eval_result" fi fi if test -z "$files"; then if test -z "$dest"; then func_fatal_help "no file or destination specified" else func_fatal_help "you must specify a destination" fi fi # Strip any trailing slash from the destination. func_stripname '' '/' "$dest" dest=$func_stripname_result # Check to see that the destination is a directory. test -d "$dest" && isdir=: if $isdir; then destdir=$dest destname= else func_dirname_and_basename "$dest" "" "." destdir=$func_dirname_result destname=$func_basename_result # Not a directory, so check to see that there is only one file specified. set dummy $files; shift test "$#" -gt 1 && \ func_fatal_help "'$dest' is not a directory" fi case $destdir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) for file in $files; do case $file in *.lo) ;; *) func_fatal_help "'$destdir' must be an absolute directory name" ;; esac done ;; esac # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic=$magic staticlibs= future_libdirs= current_libdirs= for file in $files; do # Do each installation. case $file in *.$libext) # Do the static libraries later. func_append staticlibs " $file" ;; *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "'$file' is not a valid libtool archive" library_names= old_library= relink_command= func_source "$file" # Add the libdir to current_libdirs if it is the destination. if test "X$destdir" = "X$libdir"; then case "$current_libdirs " in *" $libdir "*) ;; *) func_append current_libdirs " $libdir" ;; esac else # Note the libdir as a future libdir. 
case "$future_libdirs " in *" $libdir "*) ;; *) func_append future_libdirs " $libdir" ;; esac fi func_dirname "$file" "/" "" dir=$func_dirname_result func_append dir "$objdir" if test -n "$relink_command"; then # Determine the prefix the user has applied to our future dir. inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"` # Don't allow the user to place us outside of our expected # location b/c this prevents finding dependent libraries that # are installed to the same prefix. # At present, this check doesn't affect windows .dll's that # are installed into $libdir/../bin (currently, that works fine) # but it's something to keep an eye on. test "$inst_prefix_dir" = "$destdir" && \ func_fatal_error "error: cannot install '$file' to a directory not ending in $libdir" if test -n "$inst_prefix_dir"; then # Stick the inst_prefix_dir data into the link command. relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` else relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"` fi func_warning "relinking '$file'" func_show_eval "$relink_command" \ 'func_fatal_error "error: relink '\''$file'\'' with the above command before installing it"' fi # See the names of the shared library. set dummy $library_names; shift if test -n "$1"; then realname=$1 shift srcname=$realname test -n "$relink_command" && srcname=${realname}T # Install the shared library and build the symlinks. func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \ 'exit $?' tstripme=$stripme case $host_os in cygwin* | mingw* | pw32* | cegcc*) case $realname in *.dll.a) tstripme= ;; esac ;; os2*) case $realname in *_dll.a) tstripme= ;; esac ;; esac if test -n "$tstripme" && test -n "$striplib"; then func_show_eval "$striplib $destdir/$realname" 'exit $?' fi if test "$#" -gt 0; then # Delete the old symlinks, and create new ones. # Try 'ln -sf' first, because the 'ln' binary might depend on # the symlink we replace! Solaris /bin/ln does not understand -f, # so we also need to try rm && ln -s. for linkname do test "$linkname" != "$realname" \ && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" done fi # Do each command in the postinstall commands. lib=$destdir/$realname func_execute_cmds "$postinstall_cmds" 'exit $?' fi # Install the pseudo-library for information purposes. func_basename "$file" name=$func_basename_result instname=$dir/${name}i func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' # Maybe install the static library, too. test -n "$old_library" && func_append staticlibs " $dir/$old_library" ;; *.lo) # Install (i.e. copy) a libtool object. # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile=$destdir/$destname else func_basename "$file" destfile=$func_basename_result destfile=$destdir/$destfile fi # Deduce the name of the destination old-style object file. case $destfile in *.lo) func_lo2o "$destfile" staticdest=$func_lo2o_result ;; *.$objext) staticdest=$destfile destfile= ;; *) func_fatal_help "cannot copy a libtool object to '$destfile'" ;; esac # Install the libtool object if requested. test -n "$destfile" && \ func_show_eval "$install_prog $file $destfile" 'exit $?' # Install the old object if enabled. if test yes = "$build_old_libs"; then # Deduce the name of the old-style object file. func_lo2o "$file" staticobj=$func_lo2o_result func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' 
fi exit $EXIT_SUCCESS ;; *) # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile=$destdir/$destname else func_basename "$file" destfile=$func_basename_result destfile=$destdir/$destfile fi # If the file is missing, and there is a .exe on the end, strip it # because it is most likely a libtool script we actually want to # install stripped_ext= case $file in *.exe) if test ! -f "$file"; then func_stripname '' '.exe' "$file" file=$func_stripname_result stripped_ext=.exe fi ;; esac # Do a test to see if this is really a libtool program. case $host in *cygwin* | *mingw*) if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" wrapper=$func_ltwrapper_scriptname_result else func_stripname '' '.exe' "$file" wrapper=$func_stripname_result fi ;; *) wrapper=$file ;; esac if func_ltwrapper_script_p "$wrapper"; then notinst_deplibs= relink_command= func_source "$wrapper" # Check the variables that should have been set. test -z "$generated_by_libtool_version" && \ func_fatal_error "invalid libtool wrapper script '$wrapper'" finalize=: for lib in $notinst_deplibs; do # Check to see that each library is installed. libdir= if test -f "$lib"; then func_source "$lib" fi libfile=$libdir/`$ECHO "$lib" | $SED 's%^.*/%%g'` if test -n "$libdir" && test ! -f "$libfile"; then func_warning "'$lib' has not been installed in '$libdir'" finalize=false fi done relink_command= func_source "$wrapper" outputname= if test no = "$fast_install" && test -n "$relink_command"; then $opt_dry_run || { if $finalize; then tmpdir=`func_mktempdir` func_basename "$file$stripped_ext" file=$func_basename_result outputname=$tmpdir/$file # Replace the output file specification. relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'` $opt_quiet || { func_quote_for_expand "$relink_command" eval "func_echo $func_quote_for_expand_result" } if eval "$relink_command"; then : else func_error "error: relink '$file' with the above command before installing it" $opt_dry_run || ${RM}r "$tmpdir" continue fi file=$outputname else func_warning "cannot relink '$file'" fi } else # Install the binary that we compiled earlier. file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"` fi fi # remove .exe since cygwin /usr/bin/install will append another # one anyway case $install_prog,$host in */usr/bin/install*,*cygwin*) case $file:$destfile in *.exe:*.exe) # this is ok ;; *.exe:*) destfile=$destfile.exe ;; *:*.exe) func_stripname '' '.exe' "$destfile" destfile=$func_stripname_result ;; esac ;; esac func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' $opt_dry_run || if test -n "$outputname"; then ${RM}r "$tmpdir" fi ;; esac done for file in $staticlibs; do func_basename "$file" name=$func_basename_result # Set up the ranlib parameters. oldlib=$destdir/$name func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result func_show_eval "$install_prog \$file \$oldlib" 'exit $?' if test -n "$stripme" && test -n "$old_striplib"; then func_show_eval "$old_striplib $tool_oldlib" 'exit $?' fi # Do each command in the postinstall commands. func_execute_cmds "$old_postinstall_cmds" 'exit $?' done test -n "$future_libdirs" && \ func_warning "remember to run '$progname --finish$future_libdirs'" if test -n "$current_libdirs"; then # Maybe just do a dry run. 
$opt_dry_run && current_libdirs=" -n$current_libdirs" exec_cmd='$SHELL "$progpath" $preserve_args --finish$current_libdirs' else exit $EXIT_SUCCESS fi } test install = "$opt_mode" && func_mode_install ${1+"$@"} # func_generate_dlsyms outputname originator pic_p # Extract symbols from dlprefiles and create ${outputname}S.o with # a dlpreopen symbol table. func_generate_dlsyms () { $debug_cmd my_outputname=$1 my_originator=$2 my_pic_p=${3-false} my_prefix=`$ECHO "$my_originator" | $SED 's%[^a-zA-Z0-9]%_%g'` my_dlsyms= if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then if test -n "$NM" && test -n "$global_symbol_pipe"; then my_dlsyms=${my_outputname}S.c else func_error "not configured to extract global symbols from dlpreopened files" fi fi if test -n "$my_dlsyms"; then case $my_dlsyms in "") ;; *.c) # Discover the nlist of each of the dlfiles. nlist=$output_objdir/$my_outputname.nm func_show_eval "$RM $nlist ${nlist}S ${nlist}T" # Parse the name list into a source file. func_verbose "creating $output_objdir/$my_dlsyms" $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\ /* $my_dlsyms - symbol resolution table for '$my_outputname' dlsym emulation. */ /* Generated by $PROGRAM (GNU $PACKAGE) $VERSION */ #ifdef __cplusplus extern \"C\" { #endif #if defined __GNUC__ && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4)) #pragma GCC diagnostic ignored \"-Wstrict-prototypes\" #endif /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE /* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0) /* External symbol declarations for the compiler. */\ " if test yes = "$dlself"; then func_verbose "generating symbol list for '$output'" $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist" # Add our own program objects to the symbol list. 
progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP` for progfile in $progfiles; do func_to_tool_file "$progfile" func_convert_file_msys_to_w32 func_verbose "extracting global C symbols from '$func_to_tool_file_result'" $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'" done if test -n "$exclude_expsyms"; then $opt_dry_run || { eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi if test -n "$export_symbols_regex"; then $opt_dry_run || { eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi # Prepare the list of exported symbols if test -z "$export_symbols"; then export_symbols=$output_objdir/$outputname.exp $opt_dry_run || { $RM $export_symbols eval "$SED -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"' ;; esac } else $opt_dry_run || { eval "$SED -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"' eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$nlist" >> "$output_objdir/$outputname.def"' ;; esac } fi fi for dlprefile in $dlprefiles; do func_verbose "extracting global C symbols from '$dlprefile'" func_basename "$dlprefile" name=$func_basename_result case $host in *cygwin* | *mingw* | *cegcc* ) # if an import library, we need to obtain dlname if func_win32_import_lib_p "$dlprefile"; then func_tr_sh "$dlprefile" eval "curr_lafile=\$libfile_$func_tr_sh_result" dlprefile_dlbasename= if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then # Use subshell, to avoid clobbering current variable values dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"` if test -n "$dlprefile_dlname"; then func_basename "$dlprefile_dlname" dlprefile_dlbasename=$func_basename_result else # no lafile. user explicitly requested -dlpreopen . $sharedlib_from_linklib_cmd "$dlprefile" dlprefile_dlbasename=$sharedlib_from_linklib_result fi fi $opt_dry_run || { if test -n "$dlprefile_dlbasename"; then eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"' else func_warning "Could not compute DLL name from $name" eval '$ECHO ": $name " >> "$nlist"' fi func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe | $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'" } else # not an import lib $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } fi ;; *) $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } ;; esac done $opt_dry_run || { # Make sure we have at least an empty file. test -f "$nlist" || : > "$nlist" if test -n "$exclude_expsyms"; then $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T $MV "$nlist"T "$nlist" fi # Try sorting and uniquifying the output. 
if $GREP -v "^: " < "$nlist" | if sort -k 3 /dev/null 2>&1; then sort -k 3 else sort +2 fi | uniq > "$nlist"S; then : else $GREP -v "^: " < "$nlist" > "$nlist"S fi if test -f "$nlist"S; then eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"' else echo '/* NONE */' >> "$output_objdir/$my_dlsyms" fi func_show_eval '$RM "${nlist}I"' if test -n "$global_symbol_to_import"; then eval "$global_symbol_to_import"' < "$nlist"S > "$nlist"I' fi echo >> "$output_objdir/$my_dlsyms" "\ /* The mapping between symbol names and symbols. */ typedef struct { const char *name; void *address; } lt_dlsymlist; extern LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[];\ " if test -s "$nlist"I; then echo >> "$output_objdir/$my_dlsyms" "\ static void lt_syminit(void) { LT_DLSYM_CONST lt_dlsymlist *symbol = lt_${my_prefix}_LTX_preloaded_symbols; for (; symbol->name; ++symbol) {" $SED 's/.*/ if (STREQ (symbol->name, \"&\")) symbol->address = (void *) \&&;/' < "$nlist"I >> "$output_objdir/$my_dlsyms" echo >> "$output_objdir/$my_dlsyms" "\ } }" fi echo >> "$output_objdir/$my_dlsyms" "\ LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[] = { {\"$my_originator\", (void *) 0}," if test -s "$nlist"I; then echo >> "$output_objdir/$my_dlsyms" "\ {\"@INIT@\", (void *) <_syminit}," fi case $need_lib_prefix in no) eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; *) eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; esac echo >> "$output_objdir/$my_dlsyms" "\ {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt_${my_prefix}_LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif\ " } # !$opt_dry_run pic_flag_for_symtable= case "$compile_command " in *" -static "*) ;; *) case $host in # compiling the symbol table file with pic_flag works around # a FreeBSD bug that causes programs to crash when -lm is # linked before any other PIC object. But we must not use # pic_flag when linking with -static. The problem exists in # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1. *-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*) pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;; *-*-hpux*) pic_flag_for_symtable=" $pic_flag" ;; *) $my_pic_p && pic_flag_for_symtable=" $pic_flag" ;; esac ;; esac symtab_cflags= for arg in $LTCFLAGS; do case $arg in -pie | -fpie | -fPIE) ;; *) func_append symtab_cflags " $arg" ;; esac done # Now compile the dynamic symbol file. func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?' # Clean up the generated files. func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T" "${nlist}I"' # Transform the symbol file into the correct name. 
symfileobj=$output_objdir/${my_outputname}S.$objext case $host in *cygwin* | *mingw* | *cegcc* ) if test -f "$output_objdir/$my_outputname.def"; then compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` else compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` fi ;; *) compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` ;; esac ;; *) func_fatal_error "unknown suffix for '$my_dlsyms'" ;; esac else # We keep going just in case the user didn't refer to # lt_preloaded_symbols. The linker will fail if global_symbol_pipe # really was required. # Nullify the symbol file. compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"` finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"` fi } # func_cygming_gnu_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is a GNU/binutils-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_gnu_implib_p () { $debug_cmd func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'` test -n "$func_cygming_gnu_implib_tmp" } # func_cygming_ms_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is an MS-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_ms_implib_p () { $debug_cmd func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'` test -n "$func_cygming_ms_implib_tmp" } # func_win32_libid arg # return the library type of file 'arg' # # Need a lot of goo to handle *both* DLLs and import libs # Has to be a shell function in order to 'eat' the argument # that is supplied when $file_magic_command is called. # Despite the name, also deal with 64 bit binaries. func_win32_libid () { $debug_cmd win32_libid_type=unknown win32_fileres=`file -L $1 2>/dev/null` case $win32_fileres in *ar\ archive\ import\ library*) # definitely import win32_libid_type="x86 archive import" ;; *ar\ archive*) # could be an import, or static # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD. if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then case $nm_interface in "MS dumpbin") if func_cygming_ms_implib_p "$1" || func_cygming_gnu_implib_p "$1" then win32_nmres=import else win32_nmres= fi ;; *) func_to_tool_file "$1" func_convert_file_msys_to_w32 win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" | $SED -n -e ' 1,100{ / I /{ s|.*|import| p q } }'` ;; esac case $win32_nmres in import*) win32_libid_type="x86 archive import";; *) win32_libid_type="x86 archive static";; esac fi ;; *DLL*) win32_libid_type="x86 DLL" ;; *executable*) # but shell scripts are "executable" too... 
case $win32_fileres in *MS\ Windows\ PE\ Intel*) win32_libid_type="x86 DLL" ;; esac ;; esac $ECHO "$win32_libid_type" } # func_cygming_dll_for_implib ARG # # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib () { $debug_cmd sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"` } # func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs # # The is the core of a fallback implementation of a # platform-specific function to extract the name of the # DLL associated with the specified import library LIBNAME. # # SECTION_NAME is either .idata$6 or .idata$7, depending # on the platform and compiler that created the implib. # # Echos the name of the DLL associated with the # specified import library. func_cygming_dll_for_implib_fallback_core () { $debug_cmd match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"` $OBJDUMP -s --section "$1" "$2" 2>/dev/null | $SED '/^Contents of section '"$match_literal"':/{ # Place marker at beginning of archive member dllname section s/.*/====MARK====/ p d } # These lines can sometimes be longer than 43 characters, but # are always uninteresting /:[ ]*file format pe[i]\{,1\}-/d /^In archive [^:]*:/d # Ensure marker is printed /^====MARK====/p # Remove all lines with less than 43 characters /^.\{43\}/!d # From remaining lines, remove first 43 characters s/^.\{43\}//' | $SED -n ' # Join marker and all lines until next marker into a single line /^====MARK====/ b para H $ b para b :para x s/\n//g # Remove the marker s/^====MARK====// # Remove trailing dots and whitespace s/[\. \t]*$// # Print /./p' | # we now have a list, one entry per line, of the stringified # contents of the appropriate section of all members of the # archive that possess that section. Heuristic: eliminate # all those that have a first or second character that is # a '.' (that is, objdump's representation of an unprintable # character.) This should work for all archives with less than # 0x302f exports -- but will fail for DLLs whose name actually # begins with a literal '.' or a single character followed by # a '.'. # # Of those that remain, print the first one. $SED -e '/^\./d;/^.\./d;q' } # func_cygming_dll_for_implib_fallback ARG # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # # This fallback implementation is for use when $DLLTOOL # does not support the --identify-strict option. 
# Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib_fallback () { $debug_cmd if func_cygming_gnu_implib_p "$1"; then # binutils import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"` elif func_cygming_ms_implib_p "$1"; then # ms-generated import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"` else # unknown sharedlib_from_linklib_result= fi } # func_extract_an_archive dir oldlib func_extract_an_archive () { $debug_cmd f_ex_an_ar_dir=$1; shift f_ex_an_ar_oldlib=$1 if test yes = "$lock_old_archive_extraction"; then lockfile=$f_ex_an_ar_oldlib.lock until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done fi func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \ 'stat=$?; rm -f "$lockfile"; exit $stat' if test yes = "$lock_old_archive_extraction"; then $opt_dry_run || rm -f "$lockfile" fi if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then : else func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" fi } # func_extract_archives gentop oldlib ... func_extract_archives () { $debug_cmd my_gentop=$1; shift my_oldlibs=${1+"$@"} my_oldobjs= my_xlib= my_xabs= my_xdir= for my_xlib in $my_oldlibs; do # Extract the objects. case $my_xlib in [\\/]* | [A-Za-z]:[\\/]*) my_xabs=$my_xlib ;; *) my_xabs=`pwd`"/$my_xlib" ;; esac func_basename "$my_xlib" my_xlib=$func_basename_result my_xlib_u=$my_xlib while :; do case " $extracted_archives " in *" $my_xlib_u "*) func_arith $extracted_serial + 1 extracted_serial=$func_arith_result my_xlib_u=lt$extracted_serial-$my_xlib ;; *) break ;; esac done extracted_archives="$extracted_archives $my_xlib_u" my_xdir=$my_gentop/$my_xlib_u func_mkdir_p "$my_xdir" case $host in *-darwin*) func_verbose "Extracting $my_xabs" # Do not bother doing anything if just a dry run $opt_dry_run || { darwin_orig_dir=`pwd` cd $my_xdir || exit $? 
darwin_archive=$my_xabs darwin_curdir=`pwd` func_basename "$darwin_archive" darwin_base_archive=$func_basename_result darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` if test -n "$darwin_arches"; then darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` darwin_arch= func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" for darwin_arch in $darwin_arches; do func_mkdir_p "unfat-$$/$darwin_base_archive-$darwin_arch" $LIPO -thin $darwin_arch -output "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive" "$darwin_archive" cd "unfat-$$/$darwin_base_archive-$darwin_arch" func_extract_an_archive "`pwd`" "$darwin_base_archive" cd "$darwin_curdir" $RM "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive" done # $darwin_arches ## Okay now we've a bunch of thin objects, gotta fatten them up :) darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$sed_basename" | sort -u` darwin_file= darwin_files= for darwin_file in $darwin_filelist; do darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP` $LIPO -create -output "$darwin_file" $darwin_files done # $darwin_filelist $RM -rf unfat-$$ cd "$darwin_orig_dir" else cd $darwin_orig_dir func_extract_an_archive "$my_xdir" "$my_xabs" fi # $darwin_arches } # !$opt_dry_run ;; *) func_extract_an_archive "$my_xdir" "$my_xabs" ;; esac my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP` done func_extract_archives_result=$my_oldobjs } # func_emit_wrapper [arg=no] # # Emit a libtool wrapper script on stdout. # Don't directly open a file because we may want to # incorporate the script contents within a cygwin/mingw # wrapper executable. Must ONLY be called from within # func_mode_link because it depends on a number of variables # set therein. # # ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR # variable will take. If 'yes', then the emitted script # will assume that the directory where it is stored is # the $objdir directory. This is a cygwin/mingw-specific # behavior. func_emit_wrapper () { func_emit_wrapper_arg1=${1-no} $ECHO "\ #! $SHELL # $output - temporary wrapper script for $objdir/$outputname # Generated by $PROGRAM (GNU $PACKAGE) $VERSION # # The $output program cannot be directly executed until all the libtool # libraries that it depends on are installed. # # This wrapper script should never be moved out of the build directory. # If it is, it will not operate correctly. # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. sed_quote_subst='$sed_quote_subst' # Be Bourne compatible if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH relink_command=\"$relink_command\" # This environment variable determines our operation mode. 
if test \"\$libtool_install_magic\" = \"$magic\"; then # install mode needs the following variables: generated_by_libtool_version='$macro_version' notinst_deplibs='$notinst_deplibs' else # When we are sourced in execute mode, \$file and \$ECHO are already set. if test \"\$libtool_execute_magic\" != \"$magic\"; then file=\"\$0\"" qECHO=`$ECHO "$ECHO" | $SED "$sed_quote_subst"` $ECHO "\ # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } ECHO=\"$qECHO\" fi # Very basic option parsing. These options are (a) specific to # the libtool wrapper, (b) are identical between the wrapper # /script/ and the wrapper /executable/ that is used only on # windows platforms, and (c) all begin with the string "--lt-" # (application programs are unlikely to have options that match # this pattern). # # There are only two supported options: --lt-debug and # --lt-dump-script. There is, deliberately, no --lt-help. # # The first argument to this parsing function should be the # script's $0 value, followed by "$@". lt_option_debug= func_parse_lt_options () { lt_script_arg0=\$0 shift for lt_opt do case \"\$lt_opt\" in --lt-debug) lt_option_debug=1 ;; --lt-dump-script) lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\` test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=. lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\` cat \"\$lt_dump_D/\$lt_dump_F\" exit 0 ;; --lt-*) \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2 exit 1 ;; esac done # Print the debug banner immediately: if test -n \"\$lt_option_debug\"; then echo \"$outputname:$output:\$LINENO: libtool wrapper (GNU $PACKAGE) $VERSION\" 1>&2 fi } # Used when --lt-debug. Prints its arguments to stdout # (redirection is the responsibility of the caller) func_lt_dump_args () { lt_dump_args_N=1; for lt_arg do \$ECHO \"$outputname:$output:\$LINENO: newargv[\$lt_dump_args_N]: \$lt_arg\" lt_dump_args_N=\`expr \$lt_dump_args_N + 1\` done } # Core function for launching the target application func_exec_program_core () { " case $host in # Backslashes separate directories on plain windows *-*-mingw | *-*-os2* | *-cegcc*) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir\\\\\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} " ;; *) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir/\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir/\$program\" \${1+\"\$@\"} " ;; esac $ECHO "\ \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 exit 1 } # A function to encapsulate launching the target application # Strips options in the --lt-* namespace from \$@ and # launches target application with the remaining arguments. func_exec_program () { case \" \$* \" in *\\ --lt-*) for lt_wr_arg do case \$lt_wr_arg in --lt-*) ;; *) set x \"\$@\" \"\$lt_wr_arg\"; shift;; esac shift done ;; esac func_exec_program_core \${1+\"\$@\"} } # Parse options func_parse_lt_options \"\$0\" \${1+\"\$@\"} # Find the directory that this script lives in. thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\` test \"x\$thisdir\" = \"x\$file\" && thisdir=. # Follow symbolic links until we get to the real thisdir. 
file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\` while test -n \"\$file\"; do destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\` # If there was a directory component, then change thisdir. if test \"x\$destdir\" != \"x\$file\"; then case \"\$destdir\" in [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; *) thisdir=\"\$thisdir/\$destdir\" ;; esac fi file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\` file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\` done # Usually 'no', except on cygwin/mingw when embedded into # the cwrapper. WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1 if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then # special case for '.' if test \"\$thisdir\" = \".\"; then thisdir=\`pwd\` fi # remove .libs from thisdir case \"\$thisdir\" in *[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;; $objdir ) thisdir=. ;; esac fi # Try to get the absolute directory name. absdir=\`cd \"\$thisdir\" && pwd\` test -n \"\$absdir\" && thisdir=\"\$absdir\" " if test yes = "$fast_install"; then $ECHO "\ program=lt-'$outputname'$exeext progdir=\"\$thisdir/$objdir\" if test ! -f \"\$progdir/\$program\" || { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | $SED 1q\`; \\ test \"X\$file\" != \"X\$progdir/\$program\"; }; then file=\"\$\$-\$program\" if test ! -d \"\$progdir\"; then $MKDIR \"\$progdir\" else $RM \"\$progdir/\$file\" fi" $ECHO "\ # relink executable if necessary if test -n \"\$relink_command\"; then if relink_command_output=\`eval \$relink_command 2>&1\`; then : else \$ECHO \"\$relink_command_output\" >&2 $RM \"\$progdir/\$file\" exit 1 fi fi $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || { $RM \"\$progdir/\$program\"; $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } $RM \"\$progdir/\$file\" fi" else $ECHO "\ program='$outputname' progdir=\"\$thisdir/$objdir\" " fi $ECHO "\ if test -f \"\$progdir/\$program\"; then" # fixup the dll searchpath if we need to. # # Fix the DLL searchpath if we need to. Do this before prepending # to shlibpath, because on Windows, both are PATH and uninstalled # libraries must come first. if test -n "$dllsearchpath"; then $ECHO "\ # Add the dll search path components to the executable PATH PATH=$dllsearchpath:\$PATH " fi # Export our shlibpath_var if we have one. if test yes = "$shlibpath_overrides_runpath" && test -n "$shlibpath_var" && test -n "$temp_rpath"; then $ECHO "\ # Add our own library path to $shlibpath_var $shlibpath_var=\"$temp_rpath\$$shlibpath_var\" # Some systems cannot cope with colon-terminated $shlibpath_var # The second colon is a workaround for a bug in BeOS R4 sed $shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\` export $shlibpath_var " fi $ECHO "\ if test \"\$libtool_execute_magic\" != \"$magic\"; then # Run the actual program with our arguments. func_exec_program \${1+\"\$@\"} fi else # The program doesn't exist. \$ECHO \"\$0: error: '\$progdir/\$program' does not exist\" 1>&2 \$ECHO \"This script is just a wrapper for \$program.\" 1>&2 \$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2 exit 1 fi fi\ " } # func_emit_cwrapperexe_src # emit the source code for a wrapper executable on stdout # Must ONLY be called from within func_mode_link because # it depends on a number of variable set therein. 
func_emit_cwrapperexe_src () { cat < #include #ifdef _MSC_VER # include # include # include #else # include # include # ifdef __CYGWIN__ # include # endif #endif #include #include #include #include #include #include #include #include #define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0) /* declarations of non-ANSI functions */ #if defined __MINGW32__ # ifdef __STRICT_ANSI__ int _putenv (const char *); # endif #elif defined __CYGWIN__ # ifdef __STRICT_ANSI__ char *realpath (const char *, char *); int putenv (char *); int setenv (const char *, const char *, int); # endif /* #elif defined other_platform || defined ... */ #endif /* portability defines, excluding path handling macros */ #if defined _MSC_VER # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv # define S_IXUSR _S_IEXEC #elif defined __MINGW32__ # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv #elif defined __CYGWIN__ # define HAVE_SETENV # define FOPEN_WB "wb" /* #elif defined other platforms ... */ #endif #if defined PATH_MAX # define LT_PATHMAX PATH_MAX #elif defined MAXPATHLEN # define LT_PATHMAX MAXPATHLEN #else # define LT_PATHMAX 1024 #endif #ifndef S_IXOTH # define S_IXOTH 0 #endif #ifndef S_IXGRP # define S_IXGRP 0 #endif /* path handling portability macros */ #ifndef DIR_SEPARATOR # define DIR_SEPARATOR '/' # define PATH_SEPARATOR ':' #endif #if defined _WIN32 || defined __MSDOS__ || defined __DJGPP__ || \ defined __OS2__ # define HAVE_DOS_BASED_FILE_SYSTEM # define FOPEN_WB "wb" # ifndef DIR_SEPARATOR_2 # define DIR_SEPARATOR_2 '\\' # endif # ifndef PATH_SEPARATOR_2 # define PATH_SEPARATOR_2 ';' # endif #endif #ifndef DIR_SEPARATOR_2 # define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) #else /* DIR_SEPARATOR_2 */ # define IS_DIR_SEPARATOR(ch) \ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) #endif /* DIR_SEPARATOR_2 */ #ifndef PATH_SEPARATOR_2 # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) #else /* PATH_SEPARATOR_2 */ # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) #endif /* PATH_SEPARATOR_2 */ #ifndef FOPEN_WB # define FOPEN_WB "w" #endif #ifndef _O_BINARY # define _O_BINARY 0 #endif #define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) #define XFREE(stale) do { \ if (stale) { free (stale); stale = 0; } \ } while (0) #if defined LT_DEBUGWRAPPER static int lt_debug = 1; #else static int lt_debug = 0; #endif const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */ void *xmalloc (size_t num); char *xstrdup (const char *string); const char *base_name (const char *name); char *find_executable (const char *wrapper); char *chase_symlinks (const char *pathspec); int make_executable (const char *path); int check_executable (const char *path); char *strendzap (char *str, const char *pat); void lt_debugprintf (const char *file, int line, const char *fmt, ...); void lt_fatal (const char *file, int line, const char *message, ...); static const char *nonnull (const char *s); static const char *nonempty (const char *s); void lt_setenv (const char *name, const char *value); char *lt_extend_str (const char *orig_value, const char *add, int to_end); void lt_update_exe_path (const char *name, const char *value); void lt_update_lib_path (const char *name, const char *value); char **prepare_spawn (char **argv); void lt_dump_script (FILE *f); EOF cat <= 0) && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) return 1; else return 0; } int make_executable (const 
char *path) { int rval = 0; struct stat st; lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n", nonempty (path)); if ((!path) || (!*path)) return 0; if (stat (path, &st) >= 0) { rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR); } return rval; } /* Searches for the full path of the wrapper. Returns newly allocated full path name if found, NULL otherwise Does not chase symlinks, even on platforms that support them. */ char * find_executable (const char *wrapper) { int has_slash = 0; const char *p; const char *p_next; /* static buffer for getcwd */ char tmp[LT_PATHMAX + 1]; size_t tmp_len; char *concat_name; lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n", nonempty (wrapper)); if ((wrapper == NULL) || (*wrapper == '\0')) return NULL; /* Absolute path? */ #if defined HAVE_DOS_BASED_FILE_SYSTEM if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } else { #endif if (IS_DIR_SEPARATOR (wrapper[0])) { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } #if defined HAVE_DOS_BASED_FILE_SYSTEM } #endif for (p = wrapper; *p; p++) if (*p == '/') { has_slash = 1; break; } if (!has_slash) { /* no slashes; search PATH */ const char *path = getenv ("PATH"); if (path != NULL) { for (p = path; *p; p = p_next) { const char *q; size_t p_len; for (q = p; *q; q++) if (IS_PATH_SEPARATOR (*q)) break; p_len = (size_t) (q - p); p_next = (*q == '\0' ? q : q + 1); if (p_len == 0) { /* empty path: current directory */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); } else { concat_name = XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, p, p_len); concat_name[p_len] = '/'; strcpy (concat_name + p_len + 1, wrapper); } if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } } /* not found in PATH; assume curdir */ } /* Relative path | not found in path: prepend cwd */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); return NULL; } char * chase_symlinks (const char *pathspec) { #ifndef S_ISLNK return xstrdup (pathspec); #else char buf[LT_PATHMAX]; struct stat s; char *tmp_pathspec = xstrdup (pathspec); char *p; int has_symlinks = 0; while (strlen (tmp_pathspec) && !has_symlinks) { lt_debugprintf (__FILE__, __LINE__, "checking path component for symlinks: %s\n", tmp_pathspec); if (lstat (tmp_pathspec, &s) == 0) { if (S_ISLNK (s.st_mode) != 0) { has_symlinks = 1; break; } /* search backwards for last DIR_SEPARATOR */ p = tmp_pathspec + strlen (tmp_pathspec) - 1; while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) p--; if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) { /* no more DIR_SEPARATORS left */ break; } *p = '\0'; } else { lt_fatal (__FILE__, __LINE__, "error accessing file \"%s\": %s", tmp_pathspec, nonnull (strerror (errno))); } } XFREE 
(tmp_pathspec); if (!has_symlinks) { return xstrdup (pathspec); } tmp_pathspec = realpath (pathspec, buf); if (tmp_pathspec == 0) { lt_fatal (__FILE__, __LINE__, "could not follow symlinks for %s", pathspec); } return xstrdup (tmp_pathspec); #endif } char * strendzap (char *str, const char *pat) { size_t len, patlen; assert (str != NULL); assert (pat != NULL); len = strlen (str); patlen = strlen (pat); if (patlen <= len) { str += len - patlen; if (STREQ (str, pat)) *str = '\0'; } return str; } void lt_debugprintf (const char *file, int line, const char *fmt, ...) { va_list args; if (lt_debug) { (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line); va_start (args, fmt); (void) vfprintf (stderr, fmt, args); va_end (args); } } static void lt_error_core (int exit_status, const char *file, int line, const char *mode, const char *message, va_list ap) { fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode); vfprintf (stderr, message, ap); fprintf (stderr, ".\n"); if (exit_status >= 0) exit (exit_status); } void lt_fatal (const char *file, int line, const char *message, ...) { va_list ap; va_start (ap, message); lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap); va_end (ap); } static const char * nonnull (const char *s) { return s ? s : "(null)"; } static const char * nonempty (const char *s) { return (s && !*s) ? "(empty)" : nonnull (s); } void lt_setenv (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_setenv) setting '%s' to '%s'\n", nonnull (name), nonnull (value)); { #ifdef HAVE_SETENV /* always make a copy, for consistency with !HAVE_SETENV */ char *str = xstrdup (value); setenv (name, str, 1); #else size_t len = strlen (name) + 1 + strlen (value) + 1; char *str = XMALLOC (char, len); sprintf (str, "%s=%s", name, value); if (putenv (str) != EXIT_SUCCESS) { XFREE (str); } #endif } } char * lt_extend_str (const char *orig_value, const char *add, int to_end) { char *new_value; if (orig_value && *orig_value) { size_t orig_value_len = strlen (orig_value); size_t add_len = strlen (add); new_value = XMALLOC (char, add_len + orig_value_len + 1); if (to_end) { strcpy (new_value, orig_value); strcpy (new_value + orig_value_len, add); } else { strcpy (new_value, add); strcpy (new_value + add_len, orig_value); } } else { new_value = xstrdup (add); } return new_value; } void lt_update_exe_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_exe_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); /* some systems can't cope with a ':'-terminated path #' */ size_t len = strlen (new_value); while ((len > 0) && IS_PATH_SEPARATOR (new_value[len-1])) { new_value[--len] = '\0'; } lt_setenv (name, new_value); XFREE (new_value); } } void lt_update_lib_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_lib_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); } } EOF case $host_os in mingw*) cat <<"EOF" /* Prepares an argument vector before calling spawn(). Note that spawn() does not by itself call the command interpreter (getenv ("COMSPEC") != NULL ? 
getenv ("COMSPEC") : ({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&v); v.dwPlatformId == VER_PLATFORM_WIN32_NT; }) ? "cmd.exe" : "command.com"). Instead it simply concatenates the arguments, separated by ' ', and calls CreateProcess(). We must quote the arguments since Win32 CreateProcess() interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a special way: - Space and tab are interpreted as delimiters. They are not treated as delimiters if they are surrounded by double quotes: "...". - Unescaped double quotes are removed from the input. Their only effect is that within double quotes, space and tab are treated like normal characters. - Backslashes not followed by double quotes are not special. - But 2*n+1 backslashes followed by a double quote become n backslashes followed by a double quote (n >= 0): \" -> " \\\" -> \" \\\\\" -> \\" */ #define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" #define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" char ** prepare_spawn (char **argv) { size_t argc; char **new_argv; size_t i; /* Count number of arguments. */ for (argc = 0; argv[argc] != NULL; argc++) ; /* Allocate new argument vector. */ new_argv = XMALLOC (char *, argc + 1); /* Put quoted arguments into the new argument vector. */ for (i = 0; i < argc; i++) { const char *string = argv[i]; if (string[0] == '\0') new_argv[i] = xstrdup ("\"\""); else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL) { int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL); size_t length; unsigned int backslashes; const char *s; char *quoted_string; char *p; length = 0; backslashes = 0; if (quote_around) length++; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') length += backslashes + 1; length++; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) length += backslashes + 1; quoted_string = XMALLOC (char, length + 1); p = quoted_string; backslashes = 0; if (quote_around) *p++ = '"'; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') { unsigned int j; for (j = backslashes + 1; j > 0; j--) *p++ = '\\'; } *p++ = c; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) { unsigned int j; for (j = backslashes; j > 0; j--) *p++ = '\\'; *p++ = '"'; } *p = '\0'; new_argv[i] = quoted_string; } else new_argv[i] = (char *) string; } new_argv[argc] = NULL; return new_argv; } EOF ;; esac cat <<"EOF" void lt_dump_script (FILE* f) { EOF func_emit_wrapper yes | $SED -n -e ' s/^\(.\{79\}\)\(..*\)/\1\ \2/ h s/\([\\"]\)/\\\1/g s/$/\\n/ s/\([^\n]*\).*/ fputs ("\1", f);/p g D' cat <<"EOF" } EOF } # end: func_emit_cwrapperexe_src # func_win32_import_lib_p ARG # True if ARG is an import lib, as indicated by $file_magic_cmd func_win32_import_lib_p () { $debug_cmd case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in *import*) : ;; *) false ;; esac } # func_suncc_cstd_abi # !!ONLY CALL THIS FOR SUN CC AFTER $compile_command IS FULLY EXPANDED!! # Several compiler flags select an ABI that is incompatible with the # Cstd library. Avoid specifying it if any are in CXXFLAGS. 
func_suncc_cstd_abi () { $debug_cmd case " $compile_command " in *" -compat=g "*|*\ -std=c++[0-9][0-9]\ *|*" -library=stdcxx4 "*|*" -library=stlport4 "*) suncc_use_cstd_abi=no ;; *) suncc_use_cstd_abi=yes ;; esac } # func_mode_link arg... func_mode_link () { $debug_cmd case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) # It is impossible to link a dll without this setting, and # we shouldn't force the makefile maintainer to figure out # what system we are compiling for in order to pass an extra # flag for every libtool invocation. # allow_undefined=no # FIXME: Unfortunately, there are problems with the above when trying # to make a dll that has undefined symbols, in which case not # even a static library is built. For now, we need to specify # -no-undefined on the libtool link line when we can be certain # that all symbols are satisfied, otherwise we get a static library. allow_undefined=yes ;; *) allow_undefined=yes ;; esac libtool_args=$nonopt base_compile="$nonopt $@" compile_command=$nonopt finalize_command=$nonopt compile_rpath= finalize_rpath= compile_shlibpath= finalize_shlibpath= convenience= old_convenience= deplibs= old_deplibs= compiler_flags= linker_flags= dllsearchpath= lib_search_path=`pwd` inst_prefix_dir= new_inherited_linker_flags= avoid_version=no bindir= dlfiles= dlprefiles= dlself=no export_dynamic=no export_symbols= export_symbols_regex= generated= libobjs= ltlibs= module=no no_install=no objs= os2dllname= non_pic_objects= precious_files_regex= prefer_static_libs=no preload=false prev= prevarg= release= rpath= xrpath= perm_rpath= temp_rpath= thread_safe=no vinfo= vinfo_number=no weak_libs= single_module=$wl-single_module func_infer_tag $base_compile # We need to know -static, to get the right output filenames. for arg do case $arg in -shared) test yes != "$build_libtool_libs" \ && func_fatal_configuration "cannot build a shared library" build_old_libs=no break ;; -all-static | -static | -static-libtool-libs) case $arg in -all-static) if test yes = "$build_libtool_libs" && test -z "$link_static_flag"; then func_warning "complete static linking is impossible in this configuration" fi if test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; -static) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=built ;; -static-libtool-libs) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; esac build_libtool_libs=no build_old_libs=yes break ;; esac done # See if our shared archives depend on static archives. test -n "$old_archive_from_new_cmds" && build_old_libs=yes # Go through the arguments, transforming them on the way. while test "$#" -gt 0; do arg=$1 shift func_quote_for_eval "$arg" qarg=$func_quote_for_eval_unquoted_result func_append libtool_args " $func_quote_for_eval_result" # If the previous option needs an argument, assign it. if test -n "$prev"; then case $prev in output) func_append compile_command " @OUTPUT@" func_append finalize_command " @OUTPUT@" ;; esac case $prev in bindir) bindir=$arg prev= continue ;; dlfiles|dlprefiles) $preload || { # Add the symbol object into the linking commands. func_append compile_command " @SYMFILE@" func_append finalize_command " @SYMFILE@" preload=: } case $arg in *.la | *.lo) ;; # We handle these cases below. 
force) if test no = "$dlself"; then dlself=needless export_dynamic=yes fi prev= continue ;; self) if test dlprefiles = "$prev"; then dlself=yes elif test dlfiles = "$prev" && test yes != "$dlopen_self"; then dlself=yes else dlself=needless export_dynamic=yes fi prev= continue ;; *) if test dlfiles = "$prev"; then func_append dlfiles " $arg" else func_append dlprefiles " $arg" fi prev= continue ;; esac ;; expsyms) export_symbols=$arg test -f "$arg" \ || func_fatal_error "symbol file '$arg' does not exist" prev= continue ;; expsyms_regex) export_symbols_regex=$arg prev= continue ;; framework) case $host in *-*-darwin*) case "$deplibs " in *" $qarg.ltframework "*) ;; *) func_append deplibs " $qarg.ltframework" # this is fixed later ;; esac ;; esac prev= continue ;; inst_prefix) inst_prefix_dir=$arg prev= continue ;; mllvm) # Clang does not use LLVM to link, so we can simply discard any # '-mllvm $arg' options when doing the link step. prev= continue ;; objectlist) if test -f "$arg"; then save_arg=$arg moreargs= for fil in `cat "$save_arg"` do # func_append moreargs " $fil" arg=$fil # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test none = "$pic_object" && test none = "$non_pic_object"; then func_fatal_error "cannot find name of object for '$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result if test none != "$pic_object"; then # Prepend the subdirectory the object is found in. pic_object=$xdir$pic_object if test dlfiles = "$prev"; then if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test dlprefiles = "$prev"; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg=$pic_object fi # Non-PIC object. if test none != "$non_pic_object"; then # Prepend the subdirectory the object is found in. non_pic_object=$xdir$non_pic_object # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test none = "$pic_object"; then arg=$non_pic_object fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object=$pic_object func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "'$arg' is not a valid libtool object" fi fi done else func_fatal_error "link input file '$arg' does not exist" fi arg=$save_arg prev= continue ;; os2dllname) os2dllname=$arg prev= continue ;; precious_regex) precious_files_regex=$arg prev= continue ;; release) release=-$arg prev= continue ;; rpath | xrpath) # We need an absolute path. 
case $arg in [\\/]* | [A-Za-z]:[\\/]*) ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac if test rpath = "$prev"; then case "$rpath " in *" $arg "*) ;; *) func_append rpath " $arg" ;; esac else case "$xrpath " in *" $arg "*) ;; *) func_append xrpath " $arg" ;; esac fi prev= continue ;; shrext) shrext_cmds=$arg prev= continue ;; weak) func_append weak_libs " $arg" prev= continue ;; xcclinker) func_append linker_flags " $qarg" func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xcompiler) func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xlinker) func_append linker_flags " $qarg" func_append compiler_flags " $wl$qarg" prev= func_append compile_command " $wl$qarg" func_append finalize_command " $wl$qarg" continue ;; *) eval "$prev=\"\$arg\"" prev= continue ;; esac fi # test -n "$prev" prevarg=$arg case $arg in -all-static) if test -n "$link_static_flag"; then # See comment for -static flag below, for more details. func_append compile_command " $link_static_flag" func_append finalize_command " $link_static_flag" fi continue ;; -allow-undefined) # FIXME: remove this flag sometime in the future. func_fatal_error "'-allow-undefined' must not be used because it is the default" ;; -avoid-version) avoid_version=yes continue ;; -bindir) prev=bindir continue ;; -dlopen) prev=dlfiles continue ;; -dlpreopen) prev=dlprefiles continue ;; -export-dynamic) export_dynamic=yes continue ;; -export-symbols | -export-symbols-regex) if test -n "$export_symbols" || test -n "$export_symbols_regex"; then func_fatal_error "more than one -exported-symbols argument is not allowed" fi if test X-export-symbols = "X$arg"; then prev=expsyms else prev=expsyms_regex fi continue ;; -framework) prev=framework continue ;; -inst-prefix-dir) prev=inst_prefix continue ;; # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:* # so, if we see these flags be careful not to treat them like -L -L[A-Z][A-Z]*:*) case $with_gcc/$host in no/*-*-irix* | /*-*-irix*) func_append compile_command " $arg" func_append finalize_command " $arg" ;; esac continue ;; -L*) func_stripname "-L" '' "$arg" if test -z "$func_stripname_result"; then if test "$#" -gt 0; then func_fatal_error "require no space between '-L' and '$1'" else func_fatal_error "need path for '-L' option" fi fi func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # We need an absolute path. 
case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) absdir=`cd "$dir" && pwd` test -z "$absdir" && \ func_fatal_error "cannot determine absolute directory name of '$dir'" dir=$absdir ;; esac case "$deplibs " in *" -L$dir "* | *" $arg "*) # Will only happen for absolute or sysroot arguments ;; *) # Preserve sysroot, but never include relative directories case $dir in [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;; *) func_append deplibs " -L$dir" ;; esac func_append lib_search_path " $dir" ;; esac case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'` case :$dllsearchpath: in *":$dir:"*) ;; ::) dllsearchpath=$dir;; *) func_append dllsearchpath ":$dir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac continue ;; -l*) if test X-lc = "X$arg" || test X-lm = "X$arg"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*) # These systems don't actually have a C or math library (as such) continue ;; *-*-os2*) # These systems don't actually have a C library (as such) test X-lc = "X$arg" && continue ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig*) # Do not include libc due to us having libc/libc_r. test X-lc = "X$arg" && continue ;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C and math libraries are in the System framework func_append deplibs " System.ltframework" continue ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype test X-lc = "X$arg" && continue ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work test X-lc = "X$arg" && continue ;; esac elif test X-lc_r = "X$arg"; then case $host in *-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig*) # Do not include libc_r directly, use -pthread flag. continue ;; esac fi func_append deplibs " $arg" continue ;; -mllvm) prev=mllvm continue ;; -module) module=yes continue ;; # Tru64 UNIX uses -model [arg] to determine the layout of C++ # classes, name mangling, and exception handling. # Darwin uses the -arch flag to determine output architecture. -model|-arch|-isysroot|--sysroot) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" prev=xcompiler continue ;; -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case "$new_inherited_linker_flags " in *" $arg "*) ;; * ) func_append new_inherited_linker_flags " $arg" ;; esac continue ;; -multi_module) single_module=$wl-multi_module continue ;; -no-fast-install) fast_install=no continue ;; -no-install) case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*) # The PATH hackery in wrapper scripts is required on Windows # and Darwin in order for the loader to find any dlls it needs. 
func_warning "'-no-install' is ignored for $host" func_warning "assuming '-no-fast-install' instead" fast_install=no ;; *) no_install=yes ;; esac continue ;; -no-undefined) allow_undefined=no continue ;; -objectlist) prev=objectlist continue ;; -os2dllname) prev=os2dllname continue ;; -o) prev=output ;; -precious-files-regex) prev=precious_regex continue ;; -release) prev=release continue ;; -rpath) prev=rpath continue ;; -R) prev=xrpath continue ;; -R*) func_stripname '-R' '' "$arg" dir=$func_stripname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; =*) func_stripname '=' '' "$dir" dir=$lt_sysroot$func_stripname_result ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac continue ;; -shared) # The effects of -shared are defined in a previous loop. continue ;; -shrext) prev=shrext continue ;; -static | -static-libtool-libs) # The effects of -static are defined in a previous loop. # We used to do the same as -all-static on platforms that # didn't have a PIC flag, but the assumption that the effects # would be equivalent was wrong. It would break on at least # Digital Unix and AIX. continue ;; -thread-safe) thread_safe=yes continue ;; -version-info) prev=vinfo continue ;; -version-number) prev=vinfo vinfo_number=yes continue ;; -weak) prev=weak continue ;; -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result arg= save_ifs=$IFS; IFS=, for flag in $args; do IFS=$save_ifs func_quote_for_eval "$flag" func_append arg " $func_quote_for_eval_result" func_append compiler_flags " $func_quote_for_eval_result" done IFS=$save_ifs func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Wl,*) func_stripname '-Wl,' '' "$arg" args=$func_stripname_result arg= save_ifs=$IFS; IFS=, for flag in $args; do IFS=$save_ifs func_quote_for_eval "$flag" func_append arg " $wl$func_quote_for_eval_result" func_append compiler_flags " $wl$func_quote_for_eval_result" func_append linker_flags " $func_quote_for_eval_result" done IFS=$save_ifs func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Xcompiler) prev=xcompiler continue ;; -Xlinker) prev=xlinker continue ;; -XCClinker) prev=xcclinker continue ;; # -msg_* for osf cc -msg_*) func_quote_for_eval "$arg" arg=$func_quote_for_eval_result ;; # Flags to be passed through unchanged, with rationale: # -64, -mips[0-9] enable 64-bit mode for the SGI compiler # -r[0-9][0-9]* specify processor for the SGI compiler # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler # +DA*, +DD* enable 64-bit mode for the HP compiler # -q* compiler args for the IBM compiler # -m*, -t[45]*, -txscale* architecture-specific flags for GCC # -F/path path to uninstalled frameworks, gcc on darwin # -p, -pg, --coverage, -fprofile-* profiling flags for GCC # -fstack-protector* stack protector flags for GCC # @file GCC response files # -tp=* Portland pgcc target processor selection # --sysroot=* for sysroot support # -O*, -g*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization # -specs=* GCC specs files # -stdlib=* select c++ std lib with clang # -fsanitize=* Clang/GCC memory and address sanitizer # -fuse-ld=* Linker select flags for GCC -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \ -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \ -O*|-g*|-flto*|-fwhopr*|-fuse-linker-plugin|-fstack-protector*|-stdlib=*| \ -specs=*|-fsanitize=*|-fuse-ld=*) func_quote_for_eval "$arg" 
arg=$func_quote_for_eval_result func_append compile_command " $arg" func_append finalize_command " $arg" func_append compiler_flags " $arg" continue ;; -Z*) if test os2 = "`expr $host : '.*\(os2\)'`"; then # OS/2 uses -Zxxx to specify OS/2-specific options compiler_flags="$compiler_flags $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case $arg in -Zlinker | -Zstack) prev=xcompiler ;; esac continue else # Otherwise treat like 'Some other compiler flag' below func_quote_for_eval "$arg" arg=$func_quote_for_eval_result fi ;; # Some other compiler flag. -* | +*) func_quote_for_eval "$arg" arg=$func_quote_for_eval_result ;; *.$objext) # A standard object. func_append objs " $arg" ;; *.lo) # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test none = "$pic_object" && test none = "$non_pic_object"; then func_fatal_error "cannot find name of object for '$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result test none = "$pic_object" || { # Prepend the subdirectory the object is found in. pic_object=$xdir$pic_object if test dlfiles = "$prev"; then if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test dlprefiles = "$prev"; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg=$pic_object } # Non-PIC object. if test none != "$non_pic_object"; then # Prepend the subdirectory the object is found in. non_pic_object=$xdir$non_pic_object # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test none = "$pic_object"; then arg=$non_pic_object fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object=$pic_object func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir=$func_dirname_result func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "'$arg' is not a valid libtool object" fi fi ;; *.$libext) # An archive. func_append deplibs " $arg" func_append old_deplibs " $arg" continue ;; *.la) # A libtool-controlled library. func_resolve_sysroot "$arg" if test dlfiles = "$prev"; then # This library was specified with -dlopen. func_append dlfiles " $func_resolve_sysroot_result" prev= elif test dlprefiles = "$prev"; then # The library was specified with -dlpreopen. func_append dlprefiles " $func_resolve_sysroot_result" prev= else func_append deplibs " $func_resolve_sysroot_result" fi continue ;; # Some other compiler argument. *) # Unknown arguments in both finalize_command and compile_command need # to be aesthetically quoted because they are evaled later. func_quote_for_eval "$arg" arg=$func_quote_for_eval_result ;; esac # arg # Now actually substitute the argument into the commands. 
if test -n "$arg"; then func_append compile_command " $arg" func_append finalize_command " $arg" fi done # argument parsing loop test -n "$prev" && \ func_fatal_help "the '$prevarg' option requires an argument" if test yes = "$export_dynamic" && test -n "$export_dynamic_flag_spec"; then eval arg=\"$export_dynamic_flag_spec\" func_append compile_command " $arg" func_append finalize_command " $arg" fi oldlibs= # calculate the name of the file, without its directory func_basename "$output" outputname=$func_basename_result libobjs_save=$libobjs if test -n "$shlibpath_var"; then # get the directories listed in $shlibpath_var eval shlib_search_path=\`\$ECHO \"\$$shlibpath_var\" \| \$SED \'s/:/ /g\'\` else shlib_search_path= fi eval sys_lib_search_path=\"$sys_lib_search_path_spec\" eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\" # Definition is injected by LT_CONFIG during libtool generation. func_munge_path_list sys_lib_dlsearch_path "$LT_SYS_LIBRARY_PATH" func_dirname "$output" "/" "" output_objdir=$func_dirname_result$objdir func_to_tool_file "$output_objdir/" tool_output_objdir=$func_to_tool_file_result # Create the object directory. func_mkdir_p "$output_objdir" # Determine the type of output case $output in "") func_fatal_help "you must specify an output file" ;; *.$libext) linkmode=oldlib ;; *.lo | *.$objext) linkmode=obj ;; *.la) linkmode=lib ;; *) linkmode=prog ;; # Anything else should be a program. esac specialdeplibs= libs= # Find all interdependent deplibs by searching for libraries # that are linked more than once (e.g. -la -lb -la) for deplib in $deplibs; do if $opt_preserve_dup_deps; then case "$libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append libs " $deplib" done if test lib = "$linkmode"; then libs="$predeps $libs $compiler_lib_search_path $postdeps" # Compute libraries that are listed more than once in $predeps # $postdeps and mark them as special (i.e., whose duplicates are # not to be eliminated). pre_post_deps= if $opt_duplicate_compiler_generated_deps; then for pre_post_dep in $predeps $postdeps; do case "$pre_post_deps " in *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;; esac func_append pre_post_deps " $pre_post_dep" done fi pre_post_deps= fi deplibs= newdependency_libs= newlib_search_path= need_relink=no # whether we're linking any uninstalled libtool libraries notinst_deplibs= # not-installed libtool libraries notinst_path= # paths that contain not-installed libtool libraries case $linkmode in lib) passes="conv dlpreopen link" for file in $dlfiles $dlprefiles; do case $file in *.la) ;; *) func_fatal_help "libraries can '-dlopen' only libtool libraries: $file" ;; esac done ;; prog) compile_deplibs= finalize_deplibs= alldeplibs=false newdlfiles= newdlprefiles= passes="conv scan dlopen dlpreopen link" ;; *) passes="conv" ;; esac for pass in $passes; do # The preopen pass in lib mode reverses $deplibs; put it back here # so that -L comes before libs that need it for instance... 
if test lib,link = "$linkmode,$pass"; then ## FIXME: Find the place where the list is rebuilt in the wrong ## order, and fix it there properly tmp_deplibs= for deplib in $deplibs; do tmp_deplibs="$deplib $tmp_deplibs" done deplibs=$tmp_deplibs fi if test lib,link = "$linkmode,$pass" || test prog,scan = "$linkmode,$pass"; then libs=$deplibs deplibs= fi if test prog = "$linkmode"; then case $pass in dlopen) libs=$dlfiles ;; dlpreopen) libs=$dlprefiles ;; link) libs="$deplibs %DEPLIBS%" test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs" ;; esac fi if test lib,dlpreopen = "$linkmode,$pass"; then # Collect and forward deplibs of preopened libtool libs for lib in $dlprefiles; do # Ignore non-libtool-libs dependency_libs= func_resolve_sysroot "$lib" case $lib in *.la) func_source "$func_resolve_sysroot_result" ;; esac # Collect preopened libtool deplibs, except any this library # has declared as weak libs for deplib in $dependency_libs; do func_basename "$deplib" deplib_base=$func_basename_result case " $weak_libs " in *" $deplib_base "*) ;; *) func_append deplibs " $deplib" ;; esac done done libs=$dlprefiles fi if test dlopen = "$pass"; then # Collect dlpreopened libraries save_deplibs=$deplibs deplibs= fi for deplib in $libs; do lib= found=false case $deplib in -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append compiler_flags " $deplib" if test lib = "$linkmode"; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -l*) if test lib != "$linkmode" && test prog != "$linkmode"; then func_warning "'-l' is ignored for archives/objects" continue fi func_stripname '-l' '' "$deplib" name=$func_stripname_result if test lib = "$linkmode"; then searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path" else searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path" fi for searchdir in $searchdirs; do for search_ext in .la $std_shrext .so .a; do # Search the libtool library lib=$searchdir/lib$name$search_ext if test -f "$lib"; then if test .la = "$search_ext"; then found=: else found=false fi break 2 fi done done if $found; then # deplib is a libtool library # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib, # We need to do some special things here, and not later. if test yes = "$allow_libtool_libs_with_static_runtimes"; then case " $predeps $postdeps " in *" $deplib "*) if func_lalib_p "$lib"; then library_names= old_library= func_source "$lib" for l in $old_library $library_names; do ll=$l done if test "X$ll" = "X$old_library"; then # only static version available found=false func_dirname "$lib" "" "." 
ladir=$func_dirname_result lib=$ladir/$old_library if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs" fi continue fi fi ;; *) ;; esac fi else # deplib doesn't seem to be a libtool library if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs" fi continue fi ;; # -l *.ltframework) if test prog,link = "$linkmode,$pass"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" if test lib = "$linkmode"; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -L*) case $linkmode in lib) deplibs="$deplib $deplibs" test conv = "$pass" && continue newdependency_libs="$deplib $newdependency_libs" func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; prog) if test conv = "$pass"; then deplibs="$deplib $deplibs" continue fi if test scan = "$pass"; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; *) func_warning "'-L' is ignored for archives/objects" ;; esac # linkmode continue ;; # -L -R*) if test link = "$pass"; then func_stripname '-R' '' "$deplib" func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # Make sure the xrpath contains only unique directories. case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac fi deplibs="$deplib $deplibs" continue ;; *.la) func_resolve_sysroot "$deplib" lib=$func_resolve_sysroot_result ;; *.$libext) if test conv = "$pass"; then deplibs="$deplib $deplibs" continue fi case $linkmode in lib) # Linking convenience modules into shared libraries is allowed, # but linking other static libraries is non-portable. case " $dlpreconveniencelibs " in *" $deplib "*) ;; *) valid_a_lib=false case $deplibs_check_method in match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \ | $EGREP "$match_pattern_regex" > /dev/null; then valid_a_lib=: fi ;; pass_all) valid_a_lib=: ;; esac if $valid_a_lib; then echo $ECHO "*** Warning: Linking the shared library $output against the" $ECHO "*** static library $deplib is not portable!" deplibs="$deplib $deplibs" else echo $ECHO "*** Warning: Trying to link with static lib archive $deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because the file extensions .$libext of this argument makes me believe" echo "*** that it is just a static archive that I should not use here." 
fi ;; esac continue ;; prog) if test link != "$pass"; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi continue ;; esac # linkmode ;; # *.$libext *.lo | *.$objext) if test conv = "$pass"; then deplibs="$deplib $deplibs" elif test prog = "$linkmode"; then if test dlpreopen = "$pass" || test yes != "$dlopen_support" || test no = "$build_libtool_libs"; then # If there is no dlopen support or we're linking statically, # we need to preload. func_append newdlprefiles " $deplib" compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append newdlfiles " $deplib" fi fi continue ;; %DEPLIBS%) alldeplibs=: continue ;; esac # case $deplib $found || test -f "$lib" \ || func_fatal_error "cannot find the library '$lib' or unhandled argument '$deplib'" # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$lib" \ || func_fatal_error "'$lib' is not a valid libtool archive" func_dirname "$lib" "" "." ladir=$func_dirname_result dlname= dlopen= dlpreopen= libdir= library_names= old_library= inherited_linker_flags= # If the library was installed with an old release of libtool, # it will not redefine variables installed, or shouldnotlink installed=yes shouldnotlink=no avoidtemprpath= # Read the .la file func_source "$lib" # Convert "-framework foo" to "foo.ltframework" if test -n "$inherited_linker_flags"; then tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'` for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do case " $new_inherited_linker_flags " in *" $tmp_inherited_linker_flag "*) ;; *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";; esac done fi dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` if test lib,link = "$linkmode,$pass" || test prog,scan = "$linkmode,$pass" || { test prog != "$linkmode" && test lib != "$linkmode"; }; then test -n "$dlopen" && func_append dlfiles " $dlopen" test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen" fi if test conv = "$pass"; then # Only check for convenience libraries deplibs="$lib $deplibs" if test -z "$libdir"; then if test -z "$old_library"; then func_fatal_error "cannot find name of link library for '$lib'" fi # It is a libtool convenience library, so add in its objects. func_append convenience " $ladir/$objdir/$old_library" func_append old_convenience " $ladir/$objdir/$old_library" tmp_libs= for deplib in $dependency_libs; do deplibs="$deplib $deplibs" if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done elif test prog != "$linkmode" && test lib != "$linkmode"; then func_fatal_error "'$lib' is not a convenience library" fi continue fi # $pass = conv # Get the name of the library we link against. linklib= if test -n "$old_library" && { test yes = "$prefer_static_libs" || test built,no = "$prefer_static_libs,$installed"; }; then linklib=$old_library else for l in $old_library $library_names; do linklib=$l done fi if test -z "$linklib"; then func_fatal_error "cannot find name of link library for '$lib'" fi # This library was specified with -dlopen. 
if test dlopen = "$pass"; then test -z "$libdir" \ && func_fatal_error "cannot -dlopen a convenience library: '$lib'" if test -z "$dlname" || test yes != "$dlopen_support" || test no = "$build_libtool_libs" then # If there is no dlname, no dlopen support or we're linking # statically, we need to preload. We also need to preload any # dependent libraries so libltdl's deplib preloader doesn't # bomb out in the load deplibs phase. func_append dlprefiles " $lib $dependency_libs" else func_append newdlfiles " $lib" fi continue fi # $pass = dlopen # We need an absolute path. case $ladir in [\\/]* | [A-Za-z]:[\\/]*) abs_ladir=$ladir ;; *) abs_ladir=`cd "$ladir" && pwd` if test -z "$abs_ladir"; then func_warning "cannot determine absolute directory name of '$ladir'" func_warning "passing it literally to the linker, although it might fail" abs_ladir=$ladir fi ;; esac func_basename "$lib" laname=$func_basename_result # Find the relevant object directory and library name. if test yes = "$installed"; then if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then func_warning "library '$lib' was moved." dir=$ladir absdir=$abs_ladir libdir=$abs_ladir else dir=$lt_sysroot$libdir absdir=$lt_sysroot$libdir fi test yes = "$hardcode_automatic" && avoidtemprpath=yes else if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then dir=$ladir absdir=$abs_ladir # Remove this search path later func_append notinst_path " $abs_ladir" else dir=$ladir/$objdir absdir=$abs_ladir/$objdir # Remove this search path later func_append notinst_path " $abs_ladir" fi fi # $installed = yes func_stripname 'lib' '.la' "$laname" name=$func_stripname_result # This library was specified with -dlpreopen. if test dlpreopen = "$pass"; then if test -z "$libdir" && test prog = "$linkmode"; then func_fatal_error "only libraries may -dlpreopen a convenience library: '$lib'" fi case $host in # special handling for platforms with PE-DLLs. *cygwin* | *mingw* | *cegcc* ) # Linker will automatically link against shared library if both # static and shared are present. Therefore, ensure we extract # symbols from the import library if a shared library is present # (otherwise, the dlopen module name will be incorrect). We do # this by putting the import library name into $newdlprefiles. # We recover the dlopen module name by 'saving' the la file # name in a special purpose variable, and (later) extracting the # dlname from the la file. if test -n "$dlname"; then func_tr_sh "$dir/$linklib" eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname" func_append newdlprefiles " $dir/$linklib" else func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" fi ;; * ) # Prefer using a static library (so that no silly _DYNAMIC symbols # are required to link). if test -n "$old_library"; then func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" # Otherwise, use the dlname, so that lt_dlopen finds it. 
elif test -n "$dlname"; then func_append newdlprefiles " $dir/$dlname" else func_append newdlprefiles " $dir/$linklib" fi ;; esac fi # $pass = dlpreopen if test -z "$libdir"; then # Link the convenience library if test lib = "$linkmode"; then deplibs="$dir/$old_library $deplibs" elif test prog,link = "$linkmode,$pass"; then compile_deplibs="$dir/$old_library $compile_deplibs" finalize_deplibs="$dir/$old_library $finalize_deplibs" else deplibs="$lib $deplibs" # used for prog,scan pass fi continue fi if test prog = "$linkmode" && test link != "$pass"; then func_append newlib_search_path " $ladir" deplibs="$lib $deplibs" linkalldeplibs=false if test no != "$link_all_deplibs" || test -z "$library_names" || test no = "$build_libtool_libs"; then linkalldeplibs=: fi tmp_libs= for deplib in $dependency_libs; do case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; esac # Need to link against all dependency_libs? if $linkalldeplibs; then deplibs="$deplib $deplibs" else # Need to hardcode shared library paths # or/and link against static libraries newdependency_libs="$deplib $newdependency_libs" fi if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done # for deplib continue fi # $linkmode = prog... if test prog,link = "$linkmode,$pass"; then if test -n "$library_names" && { { test no = "$prefer_static_libs" || test built,yes = "$prefer_static_libs,$installed"; } || test -z "$old_library"; }; then # We need to hardcode the library path if test -n "$shlibpath_var" && test -z "$avoidtemprpath"; then # Make sure the rpath contains only unique directories. case $temp_rpath: in *"$absdir:"*) ;; *) func_append temp_rpath "$absdir:" ;; esac fi # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi # $linkmode,$pass = prog,link... if $alldeplibs && { test pass_all = "$deplibs_check_method" || { test yes = "$build_libtool_libs" && test -n "$library_names"; }; }; then # We only need to search for static libraries continue fi fi link_static=no # Whether the deplib will be linked statically use_static_libs=$prefer_static_libs if test built = "$use_static_libs" && test yes = "$installed"; then use_static_libs=no fi if test -n "$library_names" && { test no = "$use_static_libs" || test -z "$old_library"; }; then case $host in *cygwin* | *mingw* | *cegcc* | *os2*) # No point in relinking DLLs because paths are not encoded func_append notinst_deplibs " $lib" need_relink=no ;; *) if test no = "$installed"; then func_append notinst_deplibs " $lib" need_relink=yes fi ;; esac # This is a shared library # Warn about portability, can't link against -module's on some # systems (darwin). Don't bleat about dlopened modules though! 
dlopenmodule= for dlpremoduletest in $dlprefiles; do if test "X$dlpremoduletest" = "X$lib"; then dlopenmodule=$dlpremoduletest break fi done if test -z "$dlopenmodule" && test yes = "$shouldnotlink" && test link = "$pass"; then echo if test prog = "$linkmode"; then $ECHO "*** Warning: Linking the executable $output against the loadable module" else $ECHO "*** Warning: Linking the shared library $output against the loadable module" fi $ECHO "*** $linklib is not portable!" fi if test lib = "$linkmode" && test yes = "$hardcode_into_libs"; then # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi if test -n "$old_archive_from_expsyms_cmds"; then # figure out the soname set dummy $library_names shift realname=$1 shift libname=`eval "\\$ECHO \"$libname_spec\""` # use dlname if we got it. it's perfectly good, no? if test -n "$dlname"; then soname=$dlname elif test -n "$soname_spec"; then # bleh windows case $host in *cygwin* | mingw* | *cegcc* | *os2*) func_arith $current - $age major=$func_arith_result versuffix=-$major ;; esac eval soname=\"$soname_spec\" else soname=$realname fi # Make a new name for the extract_expsyms_cmds to use soroot=$soname func_basename "$soroot" soname=$func_basename_result func_stripname 'lib' '.dll' "$soname" newlib=libimp-$func_stripname_result.a # If the library has no export list, then create one now if test -f "$output_objdir/$soname-def"; then : else func_verbose "extracting exported symbol list from '$soname'" func_execute_cmds "$extract_expsyms_cmds" 'exit $?' fi # Create $newlib if test -f "$output_objdir/$newlib"; then :; else func_verbose "generating import library for '$soname'" func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?' 
fi # make sure the library variables are pointing to the new library dir=$output_objdir linklib=$newlib fi # test -n "$old_archive_from_expsyms_cmds" if test prog = "$linkmode" || test relink != "$opt_mode"; then add_shlibpath= add_dir= add= lib_linked=yes case $hardcode_action in immediate | unsupported) if test no = "$hardcode_direct"; then add=$dir/$linklib case $host in *-*-sco3.2v5.0.[024]*) add_dir=-L$dir ;; *-*-sysv4*uw2*) add_dir=-L$dir ;; *-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \ *-*-unixware7*) add_dir=-L$dir ;; *-*-darwin* ) # if the lib is a (non-dlopened) module then we cannot # link against it, someone is ignoring the earlier warnings if /usr/bin/file -L $add 2> /dev/null | $GREP ": [^:]* bundle" >/dev/null; then if test "X$dlopenmodule" != "X$lib"; then $ECHO "*** Warning: lib $linklib is a module, not a shared library" if test -z "$old_library"; then echo echo "*** And there doesn't seem to be a static archive available" echo "*** The link will probably fail, sorry" else add=$dir/$old_library fi elif test -n "$old_library"; then add=$dir/$old_library fi fi esac elif test no = "$hardcode_minus_L"; then case $host in *-*-sunos*) add_shlibpath=$dir ;; esac add_dir=-L$dir add=-l$name elif test no = "$hardcode_shlibpath_var"; then add_shlibpath=$dir add=-l$name else lib_linked=no fi ;; relink) if test yes = "$hardcode_direct" && test no = "$hardcode_direct_absolute"; then add=$dir/$linklib elif test yes = "$hardcode_minus_L"; then add_dir=-L$absdir # Try looking first in the location we're being installed to. if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add=-l$name elif test yes = "$hardcode_shlibpath_var"; then add_shlibpath=$dir add=-l$name else lib_linked=no fi ;; *) lib_linked=no ;; esac if test yes != "$lib_linked"; then func_fatal_configuration "unsupported hardcode properties" fi if test -n "$add_shlibpath"; then case :$compile_shlibpath: in *":$add_shlibpath:"*) ;; *) func_append compile_shlibpath "$add_shlibpath:" ;; esac fi if test prog = "$linkmode"; then test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs" test -n "$add" && compile_deplibs="$add $compile_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" if test yes != "$hardcode_direct" && test yes != "$hardcode_minus_L" && test yes = "$hardcode_shlibpath_var"; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac fi fi fi if test prog = "$linkmode" || test relink = "$opt_mode"; then add_shlibpath= add_dir= add= # Finalize command for both is simple: just hardcode it. if test yes = "$hardcode_direct" && test no = "$hardcode_direct_absolute"; then add=$libdir/$linklib elif test yes = "$hardcode_minus_L"; then add_dir=-L$libdir add=-l$name elif test yes = "$hardcode_shlibpath_var"; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac add=-l$name elif test yes = "$hardcode_automatic"; then if test -n "$inst_prefix_dir" && test -f "$inst_prefix_dir$libdir/$linklib"; then add=$inst_prefix_dir$libdir/$linklib else add=$libdir/$linklib fi else # We cannot seem to hardcode it, guess we'll fake it. add_dir=-L$libdir # Try looking first in the location we're being installed to. 
if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add=-l$name fi if test prog = "$linkmode"; then test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs" test -n "$add" && finalize_deplibs="$add $finalize_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" fi fi elif test prog = "$linkmode"; then # Here we assume that one of hardcode_direct or hardcode_minus_L # is not unsupported. This is valid on all known static and # shared platforms. if test unsupported != "$hardcode_direct"; then test -n "$old_library" && linklib=$old_library compile_deplibs="$dir/$linklib $compile_deplibs" finalize_deplibs="$dir/$linklib $finalize_deplibs" else compile_deplibs="-l$name -L$dir $compile_deplibs" finalize_deplibs="-l$name -L$dir $finalize_deplibs" fi elif test yes = "$build_libtool_libs"; then # Not a shared library if test pass_all != "$deplibs_check_method"; then # We're trying link a shared library against a static one # but the system doesn't support it. # Just print a warning and add the library to dependency_libs so # that the program can be linked against the static library. echo $ECHO "*** Warning: This system cannot link to static lib archive $lib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have." if test yes = "$module"; then echo "*** But as you try to build a module library, libtool will still create " echo "*** a static module, that should work as long as the dlopening application" echo "*** is linked with the -dlopen flag to resolve symbols at runtime." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using 'nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** 'nm' from GNU binutils and a full rebuild may help." fi if test no = "$build_old_libs"; then build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi else deplibs="$dir/$old_library $deplibs" link_static=yes fi fi # link shared/static library? if test lib = "$linkmode"; then if test -n "$dependency_libs" && { test yes != "$hardcode_into_libs" || test yes = "$build_old_libs" || test yes = "$link_static"; }; then # Extract -R from dependency_libs temp_deplibs= for libdir in $dependency_libs; do case $libdir in -R*) func_stripname '-R' '' "$libdir" temp_xrpath=$func_stripname_result case " $xrpath " in *" $temp_xrpath "*) ;; *) func_append xrpath " $temp_xrpath";; esac;; *) func_append temp_deplibs " $libdir";; esac done dependency_libs=$temp_deplibs fi func_append newlib_search_path " $absdir" # Link against this library test no = "$link_static" && newdependency_libs="$abs_ladir/$laname $newdependency_libs" # ... 
and its dependency_libs tmp_libs= for deplib in $dependency_libs; do newdependency_libs="$deplib $newdependency_libs" case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result";; *) func_resolve_sysroot "$deplib" ;; esac if $opt_preserve_dup_deps; then case "$tmp_libs " in *" $func_resolve_sysroot_result "*) func_append specialdeplibs " $func_resolve_sysroot_result" ;; esac fi func_append tmp_libs " $func_resolve_sysroot_result" done if test no != "$link_all_deplibs"; then # Add the search paths of all dependency libraries for deplib in $dependency_libs; do path= case $deplib in -L*) path=$deplib ;; *.la) func_resolve_sysroot "$deplib" deplib=$func_resolve_sysroot_result func_dirname "$deplib" "" "." dir=$func_dirname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) absdir=$dir ;; *) absdir=`cd "$dir" && pwd` if test -z "$absdir"; then func_warning "cannot determine absolute directory name of '$dir'" absdir=$dir fi ;; esac if $GREP "^installed=no" $deplib > /dev/null; then case $host in *-*-darwin*) depdepl= eval deplibrary_names=`$SED -n -e 's/^library_names=\(.*\)$/\1/p' $deplib` if test -n "$deplibrary_names"; then for tmp in $deplibrary_names; do depdepl=$tmp done if test -f "$absdir/$objdir/$depdepl"; then depdepl=$absdir/$objdir/$depdepl darwin_install_name=`$OTOOL -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` if test -z "$darwin_install_name"; then darwin_install_name=`$OTOOL64 -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` fi func_append compiler_flags " $wl-dylib_file $wl$darwin_install_name:$depdepl" func_append linker_flags " -dylib_file $darwin_install_name:$depdepl" path= fi fi ;; *) path=-L$absdir/$objdir ;; esac else eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $deplib` test -z "$libdir" && \ func_fatal_error "'$deplib' is not a valid libtool archive" test "$absdir" != "$libdir" && \ func_warning "'$deplib' seems to be moved" path=-L$absdir fi ;; esac case " $deplibs " in *" $path "*) ;; *) deplibs="$path $deplibs" ;; esac done fi # link_all_deplibs != no fi # linkmode = lib done # for deplib in $libs if test link = "$pass"; then if test prog = "$linkmode"; then compile_deplibs="$new_inherited_linker_flags $compile_deplibs" finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs" else compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` fi fi dependency_libs=$newdependency_libs if test dlpreopen = "$pass"; then # Link the dlpreopened libraries before other libraries for deplib in $save_deplibs; do deplibs="$deplib $deplibs" done fi if test dlopen != "$pass"; then test conv = "$pass" || { # Make sure lib_search_path contains only unique directories. 
lib_search_path= for dir in $newlib_search_path; do case "$lib_search_path " in *" $dir "*) ;; *) func_append lib_search_path " $dir" ;; esac done newlib_search_path= } if test prog,link = "$linkmode,$pass"; then vars="compile_deplibs finalize_deplibs" else vars=deplibs fi for var in $vars dependency_libs; do # Add libraries to $var in reverse order eval tmp_libs=\"\$$var\" new_libs= for deplib in $tmp_libs; do # FIXME: Pedantically, this is the right thing to do, so # that some nasty dependency loop isn't accidentally # broken: #new_libs="$deplib $new_libs" # Pragmatically, this seems to cause very few problems in # practice: case $deplib in -L*) new_libs="$deplib $new_libs" ;; -R*) ;; *) # And here is the reason: when a library appears more # than once as an explicit dependence of a library, or # is implicitly linked in more than once by the # compiler, it is considered special, and multiple # occurrences thereof are not removed. Compare this # with having the same library being listed as a # dependency of multiple other libraries: in this case, # we know (pedantically, we assume) the library does not # need to be listed more than once, so we keep only the # last copy. This is not always right, but it is rare # enough that we require users that really mean to play # such unportable linking tricks to link the library # using -Wl,-lname, so that libtool does not consider it # for duplicate removal. case " $specialdeplibs " in *" $deplib "*) new_libs="$deplib $new_libs" ;; *) case " $new_libs " in *" $deplib "*) ;; *) new_libs="$deplib $new_libs" ;; esac ;; esac ;; esac done tmp_libs= for deplib in $new_libs; do case $deplib in -L*) case " $tmp_libs " in *" $deplib "*) ;; *) func_append tmp_libs " $deplib" ;; esac ;; *) func_append tmp_libs " $deplib" ;; esac done eval $var=\"$tmp_libs\" done # for var fi # Add Sun CC postdeps if required: test CXX = "$tagname" && { case $host_os in linux*) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 func_suncc_cstd_abi if test no != "$suncc_use_cstd_abi"; then func_append postdeps ' -library=Cstd -library=Crun' fi ;; esac ;; solaris*) func_cc_basename "$CC" case $func_cc_basename_result in CC* | sunCC*) func_suncc_cstd_abi if test no != "$suncc_use_cstd_abi"; then func_append postdeps ' -library=Cstd -library=Crun' fi ;; esac ;; esac } # Last step: remove runtime libs from dependency_libs # (they stay in deplibs) tmp_libs= for i in $dependency_libs; do case " $predeps $postdeps $compiler_lib_search_path " in *" $i "*) i= ;; esac if test -n "$i"; then func_append tmp_libs " $i" fi done dependency_libs=$tmp_libs done # for pass if test prog = "$linkmode"; then dlfiles=$newdlfiles fi if test prog = "$linkmode" || test lib = "$linkmode"; then dlprefiles=$newdlprefiles fi case $linkmode in oldlib) if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then func_warning "'-dlopen' is ignored for archives" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "'-l' and '-L' are ignored for archives" ;; esac test -n "$rpath" && \ func_warning "'-rpath' is ignored for archives" test -n "$xrpath" && \ func_warning "'-R' is ignored for archives" test -n "$vinfo" && \ func_warning "'-version-info/-version-number' is ignored for archives" test -n "$release" && \ func_warning "'-release' is ignored for archives" test -n "$export_symbols$export_symbols_regex" && \ func_warning "'-export-symbols' is ignored for archives" # Now set the variables for building old libraries. 
build_libtool_libs=no oldlibs=$output func_append objs "$old_deplibs" ;; lib) # Make sure we only generate libraries of the form 'libNAME.la'. case $outputname in lib*) func_stripname 'lib' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" ;; *) test no = "$module" \ && func_fatal_help "libtool library '$output' must begin with 'lib'" if test no != "$need_lib_prefix"; then # Add the "lib" prefix for modules if required func_stripname '' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" else func_stripname '' '.la' "$outputname" libname=$func_stripname_result fi ;; esac if test -n "$objs"; then if test pass_all != "$deplibs_check_method"; then func_fatal_error "cannot build libtool library '$output' from non-libtool objects on this host:$objs" else echo $ECHO "*** Warning: Linking the shared library $output against the non-libtool" $ECHO "*** objects $objs is not portable!" func_append libobjs " $objs" fi fi test no = "$dlself" \ || func_warning "'-dlopen self' is ignored for libtool libraries" set dummy $rpath shift test 1 -lt "$#" \ && func_warning "ignoring multiple '-rpath's for a libtool library" install_libdir=$1 oldlibs= if test -z "$rpath"; then if test yes = "$build_libtool_libs"; then # Building a libtool convenience library. # Some compilers have problems with a '.al' extension so # convenience libraries should have the same extension an # archive normally would. oldlibs="$output_objdir/$libname.$libext $oldlibs" build_libtool_libs=convenience build_old_libs=yes fi test -n "$vinfo" && \ func_warning "'-version-info/-version-number' is ignored for convenience libraries" test -n "$release" && \ func_warning "'-release' is ignored for convenience libraries" else # Parse the version information argument. save_ifs=$IFS; IFS=: set dummy $vinfo 0 0 0 shift IFS=$save_ifs test -n "$7" && \ func_fatal_help "too many parameters to '-version-info'" # convert absolute version numbers to libtool ages # this retains compatibility with .la files and attempts # to make the code below a bit more comprehensible case $vinfo_number in yes) number_major=$1 number_minor=$2 number_revision=$3 # # There are really only two kinds -- those that # use the current revision as the major version # and those that subtract age and use age as # a minor version. But, then there is irix # that has an extra 1 added just for fun # case $version_type in # correct linux to gnu/linux during the next big refactor darwin|freebsd-elf|linux|osf|windows|none) func_arith $number_major + $number_minor current=$func_arith_result age=$number_minor revision=$number_revision ;; freebsd-aout|qnx|sunos) current=$number_major revision=$number_minor age=0 ;; irix|nonstopux) func_arith $number_major + $number_minor current=$func_arith_result age=$number_minor revision=$number_minor lt_irix_increment=no ;; *) func_fatal_configuration "$modename: unknown library version type '$version_type'" ;; esac ;; no) current=$1 revision=$2 age=$3 ;; esac # Check that each of the things are valid numbers. 
case $current in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "CURRENT '$current' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac case $revision in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "REVISION '$revision' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac case $age in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "AGE '$age' must be a nonnegative integer" func_fatal_error "'$vinfo' is not valid version information" ;; esac if test "$age" -gt "$current"; then func_error "AGE '$age' is greater than the current interface number '$current'" func_fatal_error "'$vinfo' is not valid version information" fi # Calculate the version variables. major= versuffix= verstring= case $version_type in none) ;; darwin) # Like Linux, but with the current version available in # verstring for coding it into the library header func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision # Darwin ld doesn't like 0 for these options... func_arith $current + 1 minor_current=$func_arith_result xlcverstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision" verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" # On Darwin other compilers case $CC in nagfor*) verstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision" ;; *) verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" ;; esac ;; freebsd-aout) major=.$current versuffix=.$current.$revision ;; freebsd-elf) func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision ;; irix | nonstopux) if test no = "$lt_irix_increment"; then func_arith $current - $age else func_arith $current - $age + 1 fi major=$func_arith_result case $version_type in nonstopux) verstring_prefix=nonstopux ;; *) verstring_prefix=sgi ;; esac verstring=$verstring_prefix$major.$revision # Add in all the interfaces that we are compatible with. loop=$revision while test 0 -ne "$loop"; do func_arith $revision - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring=$verstring_prefix$major.$iface:$verstring done # Before this point, $major must not contain '.'. major=.$major versuffix=$major.$revision ;; linux) # correct to gnu/linux during the next big refactor func_arith $current - $age major=.$func_arith_result versuffix=$major.$age.$revision ;; osf) func_arith $current - $age major=.$func_arith_result versuffix=.$current.$age.$revision verstring=$current.$age.$revision # Add in all the interfaces that we are compatible with. loop=$age while test 0 -ne "$loop"; do func_arith $current - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring=$verstring:$iface.0 done # Make executables depend on our current version. func_append verstring ":$current.0" ;; qnx) major=.$current versuffix=.$current ;; sco) major=.$current versuffix=.$current ;; sunos) major=.$current versuffix=.$current.$revision ;; windows) # Use '-' rather than '.', since we only want one # extension on DOS 8.3 file systems. 
func_arith $current - $age major=$func_arith_result versuffix=-$major ;; *) func_fatal_configuration "unknown library version type '$version_type'" ;; esac # Clear the version info if we defaulted, and they specified a release. if test -z "$vinfo" && test -n "$release"; then major= case $version_type in darwin) # we can't check for "0.0" in archive_cmds due to quoting # problems, so we reset it completely verstring= ;; *) verstring=0.0 ;; esac if test no = "$need_version"; then versuffix= else versuffix=.0.0 fi fi # Remove version info from name if versioning should be avoided if test yes,no = "$avoid_version,$need_version"; then major= versuffix= verstring= fi # Check to see if the archive will have undefined symbols. if test yes = "$allow_undefined"; then if test unsupported = "$allow_undefined_flag"; then if test yes = "$build_old_libs"; then func_warning "undefined symbols not allowed in $host shared libraries; building static only" build_libtool_libs=no else func_fatal_error "can't build $host shared library unless -no-undefined is specified" fi fi else # Don't allow undefined symbols. allow_undefined_flag=$no_undefined_flag fi fi func_generate_dlsyms "$libname" "$libname" : func_append libobjs " $symfileobj" test " " = "$libobjs" && libobjs= if test relink != "$opt_mode"; then # Remove our outputs, but don't remove object files since they # may have been created when compiling PIC objects. removelist= tempremovelist=`$ECHO "$output_objdir/*"` for p in $tempremovelist; do case $p in *.$objext | *.gcno) ;; $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/$libname$release.*) if test -n "$precious_files_regex"; then if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1 then continue fi fi func_append removelist " $p" ;; *) ;; esac done test -n "$removelist" && \ func_show_eval "${RM}r \$removelist" fi # Now set the variables for building old libraries. if test yes = "$build_old_libs" && test convenience != "$build_libtool_libs"; then func_append oldlibs " $output_objdir/$libname.$libext" # Transform .lo files to .o files. oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; $lo2o" | $NL2SP` fi # Eliminate all temporary directories. #for path in $notinst_path; do # lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"` # deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"` # dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"` #done if test -n "$xrpath"; then # If the user specified any rpath flags, then add them. temp_xrpath= for libdir in $xrpath; do func_replace_sysroot "$libdir" func_append temp_xrpath " -R$func_replace_sysroot_result" case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done if test yes != "$hardcode_into_libs" || test yes = "$build_old_libs"; then dependency_libs="$temp_xrpath $dependency_libs" fi fi # Make sure dlfiles contains only unique files that won't be dlpreopened old_dlfiles=$dlfiles dlfiles= for lib in $old_dlfiles; do case " $dlprefiles $dlfiles " in *" $lib "*) ;; *) func_append dlfiles " $lib" ;; esac done # Make sure dlprefiles contains only unique files old_dlprefiles=$dlprefiles dlprefiles= for lib in $old_dlprefiles; do case "$dlprefiles " in *" $lib "*) ;; *) func_append dlprefiles " $lib" ;; esac done if test yes = "$build_libtool_libs"; then if test -n "$rpath"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*) # these systems don't actually have a c library (as such)! 
;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C library is in the System framework func_append deplibs " System.ltframework" ;; *-*-netbsd*) # Don't link with libc until the a.out ld.so is fixed. ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc due to us having libc/libc_r. ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work ;; *) # Add libc to deplibs on all other systems if necessary. if test yes = "$build_libtool_need_lc"; then func_append deplibs " -lc" fi ;; esac fi # Transform deplibs into only deplibs that can be linked in shared. name_save=$name libname_save=$libname release_save=$release versuffix_save=$versuffix major_save=$major # I'm not sure if I'm treating the release correctly. I think # release should show up in the -l (ie -lgmp5) so we don't want to # add it in twice. Is that correct? release= versuffix= major= newdeplibs= droppeddeps=no case $deplibs_check_method in pass_all) # Don't check for shared/static. Everything works. # This might be a little naive. We might want to check # whether the library exists or not. But this is on # osf3 & osf4 and I'm not really sure... Just # implementing what was already the behavior. newdeplibs=$deplibs ;; test_compile) # This code stresses the "libraries are programs" paradigm to its # limits. Maybe even breaks it. We compile a program, linking it # against the deplibs as a proxy for the library. Then we can check # whether they linked in statically or dynamically with ldd. $opt_dry_run || $RM conftest.c cat > conftest.c </dev/null` $nocaseglob else potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null` fi for potent_lib in $potential_libs; do # Follow soft links. if ls -lLd "$potent_lib" 2>/dev/null | $GREP " -> " >/dev/null; then continue fi # The statement above tries to avoid entering an # endless loop below, in case of cyclic links. # We might still enter an endless loop, since a link # loop can be closed while we follow links, # but so what? potlib=$potent_lib while test -h "$potlib" 2>/dev/null; do potliblink=`ls -ld $potlib | $SED 's/.* -> //'` case $potliblink in [\\/]* | [A-Za-z]:[\\/]*) potlib=$potliblink;; *) potlib=`$ECHO "$potlib" | $SED 's|[^/]*$||'`"$potliblink";; esac done if eval $file_magic_cmd \"\$potlib\" 2>/dev/null | $SED -e 10q | $EGREP "$file_magic_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib= break 2 fi done done fi if test -n "$a_deplib"; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib"; then $ECHO "*** with $libname but no candidates were found. (...for file magic test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a file magic. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. 
;; match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` for a_deplib in $deplibs; do case $a_deplib in -l*) func_stripname -l '' "$a_deplib" name=$func_stripname_result if test yes = "$allow_libtool_libs_with_static_runtimes"; then case " $predeps $postdeps " in *" $a_deplib "*) func_append newdeplibs " $a_deplib" a_deplib= ;; esac fi if test -n "$a_deplib"; then libname=`eval "\\$ECHO \"$libname_spec\""` for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do potential_libs=`ls $i/$libname[.-]* 2>/dev/null` for potent_lib in $potential_libs; do potlib=$potent_lib # see symlink-check above in file_magic test if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \ $EGREP "$match_pattern_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib= break 2 fi done done fi if test -n "$a_deplib"; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib"; then $ECHO "*** with $libname but no candidates were found. (...for regex pattern test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a regex pattern. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. ;; none | unknown | *) newdeplibs= tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'` if test yes = "$allow_libtool_libs_with_static_runtimes"; then for i in $predeps $postdeps; do # can't use Xsed below, because $i might contain '/' tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s|$i||"` done fi case $tmp_deplibs in *[!\ \ ]*) echo if test none = "$deplibs_check_method"; then echo "*** Warning: inter-library dependencies are not supported in this platform." else echo "*** Warning: inter-library dependencies are not known to be supported." fi echo "*** All declared inter-library dependencies are being dropped." droppeddeps=yes ;; esac ;; esac versuffix=$versuffix_save major=$major_save release=$release_save libname=$libname_save name=$name_save case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library with the System framework newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac if test yes = "$droppeddeps"; then if test yes = "$module"; then echo echo "*** Warning: libtool could not satisfy all declared inter-library" $ECHO "*** dependencies of module $libname. Therefore, libtool will create" echo "*** a static module, that should work as long as the dlopening" echo "*** application is linked with the -dlopen flag." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using 'nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** 'nm' from GNU binutils and a full rebuild may help." 
fi if test no = "$build_old_libs"; then oldlibs=$output_objdir/$libname.$libext build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi else echo "*** The inter-library dependencies that have been dropped here will be" echo "*** automatically added whenever a program is linked with this library" echo "*** or is declared to -dlopen it." if test no = "$allow_undefined"; then echo echo "*** Since this library must not contain undefined symbols," echo "*** because either the platform does not support them or" echo "*** it was explicitly requested with -no-undefined," echo "*** libtool will only create a static version of it." if test no = "$build_old_libs"; then oldlibs=$output_objdir/$libname.$libext build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi fi fi # Done checking deplibs! deplibs=$newdeplibs fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" case $host in *-*-darwin*) newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done deplibs=$new_libs # All the library-specific variables (install_libdir is set above). library_names= old_library= dlname= # Test again, we may have decided not to build it any more if test yes = "$build_libtool_libs"; then # Remove $wl instances when linking with ld. # FIXME: should test the right _cmds variable. case $archive_cmds in *\$LD\ *) wl= ;; esac if test yes = "$hardcode_into_libs"; then # Hardcode the library paths hardcode_libdirs= dep_rpath= rpath=$finalize_rpath test relink = "$opt_mode" || rpath=$compile_rpath$rpath for libdir in $rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then func_replace_sysroot "$libdir" libdir=$func_replace_sysroot_result if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append dep_rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval "dep_rpath=\"$hardcode_libdir_flag_spec\"" fi if test -n "$runpath_var" && test -n "$perm_rpath"; then # We should set the runpath_var. 
rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var" fi test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs" fi shlibpath=$finalize_shlibpath test relink = "$opt_mode" || shlibpath=$compile_shlibpath$shlibpath if test -n "$shlibpath"; then eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var" fi # Get the real and link names of the library. eval shared_ext=\"$shrext_cmds\" eval library_names=\"$library_names_spec\" set dummy $library_names shift realname=$1 shift if test -n "$soname_spec"; then eval soname=\"$soname_spec\" else soname=$realname fi if test -z "$dlname"; then dlname=$soname fi lib=$output_objdir/$realname linknames= for link do func_append linknames " $link" done # Use standard objects if they are pic test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP` test "X$libobjs" = "X " && libobjs= delfiles= if test -n "$export_symbols" && test -n "$include_expsyms"; then $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp" export_symbols=$output_objdir/$libname.uexp func_append delfiles " $export_symbols" fi orig_export_symbols= case $host_os in cygwin* | mingw* | cegcc*) if test -n "$export_symbols" && test -z "$export_symbols_regex"; then # exporting using user supplied symfile func_dll_def_p "$export_symbols" || { # and it's NOT already a .def file. Must figure out # which of the given symbols are data symbols and tag # them as such. So, trigger use of export_symbols_cmds. # export_symbols gets reassigned inside the "prepare # the list of exported symbols" if statement, so the # include_expsyms logic still works. orig_export_symbols=$export_symbols export_symbols= always_export_symbols=yes } fi ;; esac # Prepare the list of exported symbols if test -z "$export_symbols"; then if test yes = "$always_export_symbols" || test -n "$export_symbols_regex"; then func_verbose "generating symbol list for '$libname.la'" export_symbols=$output_objdir/$libname.exp $opt_dry_run || $RM $export_symbols cmds=$export_symbols_cmds save_ifs=$IFS; IFS='~' for cmd1 in $cmds; do IFS=$save_ifs # Take the normal branch if the nm_file_list_spec branch # doesn't work or if tool conversion is not needed. case $nm_file_list_spec~$to_tool_file_cmd in *~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*) try_normal_branch=yes eval cmd=\"$cmd1\" func_len " $cmd" len=$func_len_result ;; *) try_normal_branch=no ;; esac if test yes = "$try_normal_branch" \ && { test "$len" -lt "$max_cmd_len" \ || test "$max_cmd_len" -le -1; } then func_show_eval "$cmd" 'exit $?' skipped_export=false elif test -n "$nm_file_list_spec"; then func_basename "$output" output_la=$func_basename_result save_libobjs=$libobjs save_output=$output output=$output_objdir/$output_la.nm func_to_tool_file "$output" libobjs=$nm_file_list_spec$func_to_tool_file_result func_append delfiles " $output" func_verbose "creating $NM input file list: $output" for obj in $save_libobjs; do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > "$output" eval cmd=\"$cmd1\" func_show_eval "$cmd" 'exit $?' output=$save_output libobjs=$save_libobjs skipped_export=false else # The command line is too long to execute in one step. func_verbose "using reloadable object file for export list..." skipped_export=: # Break out early, otherwise skipped_export may be # set to false by a later but shorter cmd. 
break fi done IFS=$save_ifs if test -n "$export_symbols_regex" && test : != "$skipped_export"; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi fi if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols=$export_symbols test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test : != "$skipped_export" && test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for '$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands, which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi tmp_deplibs= for test_deplib in $deplibs; do case " $convenience " in *" $test_deplib "*) ;; *) func_append tmp_deplibs " $test_deplib" ;; esac done deplibs=$tmp_deplibs if test -n "$convenience"; then if test -n "$whole_archive_flag_spec" && test yes = "$compiler_needs_object" && test -z "$libobjs"; then # extract the archives, so we have objects to list. # TODO: could optimize this to just extract one archive. whole_archive_flag_spec= fi if test -n "$whole_archive_flag_spec"; then save_libobjs=$libobjs eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= else gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $convenience func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi fi if test yes = "$thread_safe" && test -n "$thread_safe_flag_spec"; then eval flag=\"$thread_safe_flag_spec\" func_append linker_flags " $flag" fi # Make a backup of the uninstalled library when relinking if test relink = "$opt_mode"; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $? fi # Do each of the archive commands. if test yes = "$module" && test -n "$module_cmds"; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then eval test_cmds=\"$module_expsym_cmds\" cmds=$module_expsym_cmds else eval test_cmds=\"$module_cmds\" cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then eval test_cmds=\"$archive_expsym_cmds\" cmds=$archive_expsym_cmds else eval test_cmds=\"$archive_cmds\" cmds=$archive_cmds fi fi if test : != "$skipped_export" && func_len " $test_cmds" && len=$func_len_result && test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then : else # The command line is too long to link in one step, link piecewise # or, if using GNU ld and skipped_export is not :, use a linker # script. # Save the value of $output and $libobjs because we want to # use them later. 
If we have whole_archive_flag_spec, we # want to use save_libobjs as it was before # whole_archive_flag_spec was expanded, because we can't # assume the linker understands whole_archive_flag_spec. # This may have to be revisited, in case too many # convenience libraries get linked in and end up exceeding # the spec. if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then save_libobjs=$libobjs fi save_output=$output func_basename "$output" output_la=$func_basename_result # Clear the reloadable object creation command queue and # initialize k to one. test_cmds= concat_cmds= objlist= last_robj= k=1 if test -n "$save_libobjs" && test : != "$skipped_export" && test yes = "$with_gnu_ld"; then output=$output_objdir/$output_la.lnkscript func_verbose "creating GNU ld script: $output" echo 'INPUT (' > $output for obj in $save_libobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done echo ')' >> $output func_append delfiles " $output" func_to_tool_file "$output" output=$func_to_tool_file_result elif test -n "$save_libobjs" && test : != "$skipped_export" && test -n "$file_list_spec"; then output=$output_objdir/$output_la.lnk func_verbose "creating linker input file list: $output" : > $output set x $save_libobjs shift firstobj= if test yes = "$compiler_needs_object"; then firstobj="$1 " shift fi for obj do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done func_append delfiles " $output" func_to_tool_file "$output" output=$firstobj\"$file_list_spec$func_to_tool_file_result\" else if test -n "$save_libobjs"; then func_verbose "creating reloadable object files..." output=$output_objdir/$output_la-$k.$objext eval test_cmds=\"$reload_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 # Loop over the list of objects to be linked. for obj in $save_libobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result if test -z "$objlist" || test "$len" -lt "$max_cmd_len"; then func_append objlist " $obj" else # The command $test_cmds is almost too long, add a # command to the queue. if test 1 -eq "$k"; then # The first file doesn't have a previous command to add. reload_objs=$objlist eval concat_cmds=\"$reload_cmds\" else # All subsequent reloadable object files will link in # the last one created. reload_objs="$objlist $last_robj" eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\" fi last_robj=$output_objdir/$output_la-$k.$objext func_arith $k + 1 k=$func_arith_result output=$output_objdir/$output_la-$k.$objext objlist=" $obj" func_len " $last_robj" func_arith $len0 + $func_len_result len=$func_arith_result fi done # Handle the remaining objects by creating one last # reloadable object file. All subsequent reloadable object # files will link in the last one created. test -z "$concat_cmds" || concat_cmds=$concat_cmds~ reload_objs="$objlist $last_robj" eval concat_cmds=\"\$concat_cmds$reload_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" fi func_append delfiles " $output" else output= fi ${skipped_export-false} && { func_verbose "generating symbol list for '$libname.la'" export_symbols=$output_objdir/$libname.exp $opt_dry_run || $RM $export_symbols libobjs=$output # Append the command to create the export file. 
test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" fi } test -n "$save_libobjs" && func_verbose "creating a temporary reloadable object file: $output" # Loop through the commands generated above and execute them. save_ifs=$IFS; IFS='~' for cmd in $concat_cmds; do IFS=$save_ifs $opt_quiet || { func_quote_for_expand "$cmd" eval "func_echo $func_quote_for_expand_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? # Restore the uninstalled library and exit if test relink = "$opt_mode"; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS=$save_ifs if test -n "$export_symbols_regex" && ${skipped_export-false}; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi ${skipped_export-false} && { if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols=$export_symbols test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for '$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands, which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi } libobjs=$output # Restore the value of output. output=$save_output if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= fi # Expand the library linking commands again to reset the # value of $libobjs for piecewise linking. # Do each of the archive commands. if test yes = "$module" && test -n "$module_cmds"; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then cmds=$module_expsym_cmds else cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then cmds=$archive_expsym_cmds else cmds=$archive_cmds fi fi fi if test -n "$delfiles"; then # Append the command to remove temporary files to $cmds. eval cmds=\"\$cmds~\$RM $delfiles\" fi # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi save_ifs=$IFS; IFS='~' for cmd in $cmds; do IFS=$sp$nl eval cmd=\"$cmd\" IFS=$save_ifs $opt_quiet || { func_quote_for_expand "$cmd" eval "func_echo $func_quote_for_expand_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? 
# Restore the uninstalled library and exit if test relink = "$opt_mode"; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS=$save_ifs # Restore the uninstalled library and exit if test relink = "$opt_mode"; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $? if test -n "$convenience"; then if test -z "$whole_archive_flag_spec"; then func_show_eval '${RM}r "$gentop"' fi fi exit $EXIT_SUCCESS fi # Create links to the real library. for linkname in $linknames; do if test "$realname" != "$linkname"; then func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?' fi done # If -module or -export-dynamic was specified, set the dlname. if test yes = "$module" || test yes = "$export_dynamic"; then # On all known operating systems, these are identical. dlname=$soname fi fi ;; obj) if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then func_warning "'-dlopen' is ignored for objects" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "'-l' and '-L' are ignored for objects" ;; esac test -n "$rpath" && \ func_warning "'-rpath' is ignored for objects" test -n "$xrpath" && \ func_warning "'-R' is ignored for objects" test -n "$vinfo" && \ func_warning "'-version-info' is ignored for objects" test -n "$release" && \ func_warning "'-release' is ignored for objects" case $output in *.lo) test -n "$objs$old_deplibs" && \ func_fatal_error "cannot build library object '$output' from non-libtool objects" libobj=$output func_lo2o "$libobj" obj=$func_lo2o_result ;; *) libobj= obj=$output ;; esac # Delete the old objects. $opt_dry_run || $RM $obj $libobj # Objects from convenience libraries. This assumes # single-version convenience libraries. Whenever we create # different ones for PIC/non-PIC, this we'll have to duplicate # the extraction. reload_conv_objs= gentop= # if reload_cmds runs $LD directly, get rid of -Wl from # whole_archive_flag_spec and hope we can get by with turning comma # into space. case $reload_cmds in *\$LD[\ \$]*) wl= ;; esac if test -n "$convenience"; then if test -n "$whole_archive_flag_spec"; then eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\" test -n "$wl" || tmp_whole_archive_flags=`$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'` reload_conv_objs=$reload_objs\ $tmp_whole_archive_flags else gentop=$output_objdir/${obj}x func_append generated " $gentop" func_extract_archives $gentop $convenience reload_conv_objs="$reload_objs $func_extract_archives_result" fi fi # If we're not building shared, we need to use non_pic_objs test yes = "$build_libtool_libs" || libobjs=$non_pic_objects # Create the old-style object. reload_objs=$objs$old_deplibs' '`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; /\.lib$/d; $lo2o" | $NL2SP`' '$reload_conv_objs output=$obj func_execute_cmds "$reload_cmds" 'exit $?' # Exit if we aren't doing a library object file. if test -z "$libobj"; then if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS fi test yes = "$build_libtool_libs" || { if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi # Create an invalid libtool object if no PIC, so that we don't # accidentally link it into a program. # $show "echo timestamp > $libobj" # $opt_dry_run || eval "echo timestamp > $libobj" || exit $? 
exit $EXIT_SUCCESS } if test -n "$pic_flag" || test default != "$pic_mode"; then # Only do commands if we really have different PIC objects. reload_objs="$libobjs $reload_conv_objs" output=$libobj func_execute_cmds "$reload_cmds" 'exit $?' fi if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS ;; prog) case $host in *cygwin*) func_stripname '' '.exe' "$output" output=$func_stripname_result.exe;; esac test -n "$vinfo" && \ func_warning "'-version-info' is ignored for programs" test -n "$release" && \ func_warning "'-release' is ignored for programs" $preload \ && test unknown,unknown,unknown = "$dlopen_support,$dlopen_self,$dlopen_self_static" \ && func_warning "'LT_INIT([dlopen])' not used. Assuming no dlopen support." case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library is the System framework compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac case $host in *-*-darwin*) # Don't allow lazy linking, it breaks C++ global constructors # But is supposedly fixed on 10.4 or later (yay!). if test CXX = "$tagname"; then case ${MACOSX_DEPLOYMENT_TARGET-10.0} in 10.[0123]) func_append compile_command " $wl-bind_at_load" func_append finalize_command " $wl-bind_at_load" ;; esac fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $compile_deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $compile_deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done compile_deplibs=$new_libs func_append compile_command " $compile_deplibs" func_append finalize_command " $finalize_deplibs" if test -n "$rpath$xrpath"; then # If the user specified any rpath flags, then add them. for libdir in $rpath $xrpath; do # This is the magic to use -rpath. case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done fi # Now hardcode the library paths rpath= hardcode_libdirs= for libdir in $compile_rpath $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. 
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$libdir" | $SED -e 's*/lib$*/bin*'` case :$dllsearchpath: in *":$libdir:"*) ;; ::) dllsearchpath=$libdir;; *) func_append dllsearchpath ":$libdir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval rpath=\" $hardcode_libdir_flag_spec\" fi compile_rpath=$rpath rpath= hardcode_libdirs= for libdir in $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs=$libdir else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$finalize_perm_rpath " in *" $libdir "*) ;; *) func_append finalize_perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir=$hardcode_libdirs eval rpath=\" $hardcode_libdir_flag_spec\" fi finalize_rpath=$rpath if test -n "$libobjs" && test yes = "$build_old_libs"; then # Transform all the library objects into standard objects. compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP` finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP` fi func_generate_dlsyms "$outputname" "@PROGRAM@" false # template prelinking step if test -n "$prelink_cmds"; then func_execute_cmds "$prelink_cmds" 'exit $?' fi wrappers_required=: case $host in *cegcc* | *mingw32ce*) # Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway. wrappers_required=false ;; *cygwin* | *mingw* ) test yes = "$build_libtool_libs" || wrappers_required=false ;; *) if test no = "$need_relink" || test yes != "$build_libtool_libs"; then wrappers_required=false fi ;; esac $wrappers_required || { # Replace the output file specification. compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'` link_command=$compile_command$compile_rpath # We have no uninstalled library dependencies, so finalize right now. exit_status=0 func_show_eval "$link_command" 'exit_status=$?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Delete the generated files. 
if test -f "$output_objdir/${outputname}S.$objext"; then func_show_eval '$RM "$output_objdir/${outputname}S.$objext"' fi exit $exit_status } if test -n "$compile_shlibpath$finalize_shlibpath"; then compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command" fi if test -n "$finalize_shlibpath"; then finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command" fi compile_var= finalize_var= if test -n "$runpath_var"; then if test -n "$perm_rpath"; then # We should set the runpath_var. rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done compile_var="$runpath_var=\"$rpath\$$runpath_var\" " fi if test -n "$finalize_perm_rpath"; then # We should set the runpath_var. rpath= for dir in $finalize_perm_rpath; do func_append rpath "$dir:" done finalize_var="$runpath_var=\"$rpath\$$runpath_var\" " fi fi if test yes = "$no_install"; then # We don't need to create a wrapper script. link_command=$compile_var$compile_command$compile_rpath # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'` # Delete the old output file. $opt_dry_run || $RM $output # Link the executable and exit func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi exit $EXIT_SUCCESS fi case $hardcode_action,$fast_install in relink,*) # Fast installation is not supported link_command=$compile_var$compile_command$compile_rpath relink_command=$finalize_var$finalize_command$finalize_rpath func_warning "this platform does not like uninstalled shared libraries" func_warning "'$output' will be relinked during installation" ;; *,yes) link_command=$finalize_var$compile_command$finalize_rpath relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'` ;; *,no) link_command=$compile_var$compile_command$compile_rpath relink_command=$finalize_var$finalize_command$finalize_rpath ;; *,needless) link_command=$finalize_var$compile_command$finalize_rpath relink_command= ;; esac # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'` # Delete the old output files. $opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output_objdir/$outputname" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Now create the wrapper script. func_verbose "creating $output" # Quote the relink command for shipping. 
if test -n "$relink_command"; then # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_for_eval "$var_value" relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" fi done relink_command="(cd `pwd`; $relink_command)" relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` fi # Only actually do things if not in dry run mode. $opt_dry_run || { # win32 will think the script is a binary if it has # a .exe suffix, so we strip it off here. case $output in *.exe) func_stripname '' '.exe' "$output" output=$func_stripname_result ;; esac # test for cygwin because mv fails w/o .exe extensions case $host in *cygwin*) exeext=.exe func_stripname '' '.exe' "$outputname" outputname=$func_stripname_result ;; *) exeext= ;; esac case $host in *cygwin* | *mingw* ) func_dirname_and_basename "$output" "" "." output_name=$func_basename_result output_path=$func_dirname_result cwrappersource=$output_path/$objdir/lt-$output_name.c cwrapper=$output_path/$output_name.exe $RM $cwrappersource $cwrapper trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15 func_emit_cwrapperexe_src > $cwrappersource # The wrapper executable is built using the $host compiler, # because it contains $host paths and files. If cross- # compiling, it, like the target executable, must be # executed on the $host or under an emulation environment. $opt_dry_run || { $LTCC $LTCFLAGS -o $cwrapper $cwrappersource $STRIP $cwrapper } # Now, create the wrapper script for func_source use: func_ltwrapper_scriptname $cwrapper $RM $func_ltwrapper_scriptname_result trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15 $opt_dry_run || { # note: this script will not be executed, so do not chmod. if test "x$build" = "x$host"; then $cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result else func_emit_wrapper no > $func_ltwrapper_scriptname_result fi } ;; * ) $RM $output trap "$RM $output; exit $EXIT_FAILURE" 1 2 15 func_emit_wrapper no > $output chmod +x $output ;; esac } exit $EXIT_SUCCESS ;; esac # See if we need to build an old-fashioned archive. for oldlib in $oldlibs; do case $build_libtool_libs in convenience) oldobjs="$libobjs_save $symfileobj" addlibs=$convenience build_libtool_libs=no ;; module) oldobjs=$libobjs_save addlibs=$old_convenience build_libtool_libs=no ;; *) oldobjs="$old_deplibs $non_pic_objects" $preload && test -f "$symfileobj" \ && func_append oldobjs " $symfileobj" addlibs=$old_convenience ;; esac if test -n "$addlibs"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $addlibs func_append oldobjs " $func_extract_archives_result" fi # Do each command in the archive commands. if test -n "$old_archive_from_new_cmds" && test yes = "$build_libtool_libs"; then cmds=$old_archive_from_new_cmds else # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append oldobjs " $func_extract_archives_result" fi # POSIX demands no paths to be encoded in archives. 
We have # to avoid creating archives with duplicate basenames if we # might have to extract them afterwards, e.g., when creating a # static archive out of a convenience library, or when linking # the entirety of a libtool archive into another (currently # not supported by libtool). if (for obj in $oldobjs do func_basename "$obj" $ECHO "$func_basename_result" done | sort | sort -uc >/dev/null 2>&1); then : else echo "copying selected object files to avoid basename conflicts..." gentop=$output_objdir/${outputname}x func_append generated " $gentop" func_mkdir_p "$gentop" save_oldobjs=$oldobjs oldobjs= counter=1 for obj in $save_oldobjs do func_basename "$obj" objbase=$func_basename_result case " $oldobjs " in " ") oldobjs=$obj ;; *[\ /]"$objbase "*) while :; do # Make sure we don't pick an alternate name that also # overlaps. newobj=lt$counter-$objbase func_arith $counter + 1 counter=$func_arith_result case " $oldobjs " in *[\ /]"$newobj "*) ;; *) if test ! -f "$gentop/$newobj"; then break; fi ;; esac done func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj" func_append oldobjs " $gentop/$newobj" ;; *) func_append oldobjs " $obj" ;; esac done fi func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result eval cmds=\"$old_archive_cmds\" func_len " $cmds" len=$func_len_result if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then cmds=$old_archive_cmds elif test -n "$archiver_list_spec"; then func_verbose "using command file archive linking..." for obj in $oldobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > $output_objdir/$libname.libcmd func_to_tool_file "$output_objdir/$libname.libcmd" oldobjs=" $archiver_list_spec$func_to_tool_file_result" cmds=$old_archive_cmds else # the command line is too long to link in one step, link in parts func_verbose "using piecewise archive linking..." save_RANLIB=$RANLIB RANLIB=: objlist= concat_cmds= save_oldobjs=$oldobjs oldobjs= # Is there a better way of finding the last object in the list? for obj in $save_oldobjs do last_oldobj=$obj done eval test_cmds=\"$old_archive_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 for obj in $save_oldobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result func_append objlist " $obj" if test "$len" -lt "$max_cmd_len"; then : else # the above command should be used before it gets too long oldobjs=$objlist if test "$obj" = "$last_oldobj"; then RANLIB=$save_RANLIB fi test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\$concat_cmds$old_archive_cmds\" objlist= len=$len0 fi done RANLIB=$save_RANLIB oldobjs=$objlist if test -z "$oldobjs"; then eval cmds=\"\$concat_cmds\" else eval cmds=\"\$concat_cmds~\$old_archive_cmds\" fi fi fi func_execute_cmds "$cmds" 'exit $?' done test -n "$generated" && \ func_show_eval "${RM}r$generated" # Now create the libtool archive. 
case $output in *.la) old_library= test yes = "$build_old_libs" && old_library=$libname.$libext func_verbose "creating $output" # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_for_eval "$var_value" relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" fi done # Quote the link command for shipping. relink_command="(cd `pwd`; $SHELL \"$progpath\" $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)" relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` if test yes = "$hardcode_automatic"; then relink_command= fi # Only create the output if not a dry run. $opt_dry_run || { for installed in no yes; do if test yes = "$installed"; then if test -z "$install_libdir"; then break fi output=$output_objdir/${outputname}i # Replace all uninstalled libtool libraries with the installed ones newdependency_libs= for deplib in $dependency_libs; do case $deplib in *.la) func_basename "$deplib" name=$func_basename_result func_resolve_sysroot "$deplib" eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result` test -z "$libdir" && \ func_fatal_error "'$deplib' is not a valid libtool archive" func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name" ;; -L*) func_stripname -L '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -L$func_replace_sysroot_result" ;; -R*) func_stripname -R '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -R$func_replace_sysroot_result" ;; *) func_append newdependency_libs " $deplib" ;; esac done dependency_libs=$newdependency_libs newdlfiles= for lib in $dlfiles; do case $lib in *.la) func_basename "$lib" name=$func_basename_result eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "'$lib' is not a valid libtool archive" func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name" ;; *) func_append newdlfiles " $lib" ;; esac done dlfiles=$newdlfiles newdlprefiles= for lib in $dlprefiles; do case $lib in *.la) # Only pass preopened files to the pseudo-archive (for # eventual linking with the app. 
that links it) if we # didn't already link the preopened objects directly into # the library: func_basename "$lib" name=$func_basename_result eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "'$lib' is not a valid libtool archive" func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name" ;; esac done dlprefiles=$newdlprefiles else newdlfiles= for lib in $dlfiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlfiles " $abs" done dlfiles=$newdlfiles newdlprefiles= for lib in $dlprefiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlprefiles " $abs" done dlprefiles=$newdlprefiles fi $RM $output # place dlname in correct position for cygwin # In fact, it would be nice if we could use this code for all target # systems that can't hard-code library paths into their executables # and that have no shared library path variable independent of PATH, # but it turns out we can't easily determine that from inspecting # libtool variables, so we have to hard-code the OSs to which it # applies here; at the moment, that means platforms that use the PE # object format with DLL files. See the long comment at the top of # tests/bindir.at for full details. tdlname=$dlname case $host,$output,$installed,$module,$dlname in *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll) # If a -bindir argument was supplied, place the dll there. if test -n "$bindir"; then func_relative_path "$install_libdir" "$bindir" tdlname=$func_relative_path_result/$dlname else # Otherwise fall back on heuristic. tdlname=../bin/$dlname fi ;; esac $ECHO > $output "\ # $outputname - a libtool library file # Generated by $PROGRAM (GNU $PACKAGE) $VERSION # # Please DO NOT delete this file! # It is necessary for linking the library. # The name that we can dlopen(3). dlname='$tdlname' # Names of this library. library_names='$library_names' # The name of the static archive. old_library='$old_library' # Linker flags that cannot go in dependency_libs. inherited_linker_flags='$new_inherited_linker_flags' # Libraries that this one depends upon. dependency_libs='$dependency_libs' # Names of additional weak libraries provided by this library weak_library_names='$weak_libs' # Version information for $libname. current=$current age=$age revision=$revision # Is this an already installed library? installed=$installed # Should we warn about portability when linking against -modules? shouldnotlink=$module # Files to dlopen/dlpreopen dlopen='$dlfiles' dlpreopen='$dlprefiles' # Directory that this library needs to be installed in: libdir='$install_libdir'" if test no,yes = "$installed,$need_relink"; then $ECHO >> $output "\ relink_command=\"$relink_command\"" fi done } # Do a symbolic link so that the libtool archive can be found in # LD_LIBRARY_PATH before the program is installed. func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?' ;; esac exit $EXIT_SUCCESS } if test link = "$opt_mode" || test relink = "$opt_mode"; then func_mode_link ${1+"$@"} fi # func_mode_uninstall arg... func_mode_uninstall () { $debug_cmd RM=$nonopt files= rmforce=false exit_status=0 # This variable tells wrapper scripts just to set variables rather # than running their programs. 
libtool_install_magic=$magic for arg do case $arg in -f) func_append RM " $arg"; rmforce=: ;; -*) func_append RM " $arg" ;; *) func_append files " $arg" ;; esac done test -z "$RM" && \ func_fatal_help "you must specify an RM program" rmdirs= for file in $files; do func_dirname "$file" "" "." dir=$func_dirname_result if test . = "$dir"; then odir=$objdir else odir=$dir/$objdir fi func_basename "$file" name=$func_basename_result test uninstall = "$opt_mode" && odir=$dir # Remember odir for removal later, being careful to avoid duplicates if test clean = "$opt_mode"; then case " $rmdirs " in *" $odir "*) ;; *) func_append rmdirs " $odir" ;; esac fi # Don't error if the file doesn't exist and rm -f was used. if { test -L "$file"; } >/dev/null 2>&1 || { test -h "$file"; } >/dev/null 2>&1 || test -f "$file"; then : elif test -d "$file"; then exit_status=1 continue elif $rmforce; then continue fi rmfiles=$file case $name in *.la) # Possibly a libtool archive, so verify it. if func_lalib_p "$file"; then func_source $dir/$name # Delete the libtool libraries and symlinks. for n in $library_names; do func_append rmfiles " $odir/$n" done test -n "$old_library" && func_append rmfiles " $odir/$old_library" case $opt_mode in clean) case " $library_names " in *" $dlname "*) ;; *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;; esac test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i" ;; uninstall) if test -n "$library_names"; then # Do each command in the postuninstall commands. func_execute_cmds "$postuninstall_cmds" '$rmforce || exit_status=1' fi if test -n "$old_library"; then # Do each command in the old_postuninstall commands. func_execute_cmds "$old_postuninstall_cmds" '$rmforce || exit_status=1' fi # FIXME: should reinstall the best remaining shared library. ;; esac fi ;; *.lo) # Possibly a libtool object, so verify it. if func_lalib_p "$file"; then # Read the .lo file func_source $dir/$name # Add PIC object to the list of files to remove. if test -n "$pic_object" && test none != "$pic_object"; then func_append rmfiles " $dir/$pic_object" fi # Add non-PIC object to the list of files to remove. if test -n "$non_pic_object" && test none != "$non_pic_object"; then func_append rmfiles " $dir/$non_pic_object" fi fi ;; *) if test clean = "$opt_mode"; then noexename=$name case $file in *.exe) func_stripname '' '.exe' "$file" file=$func_stripname_result func_stripname '' '.exe' "$name" noexename=$func_stripname_result # $file with .exe has already been added to rmfiles, # add $file without .exe func_append rmfiles " $file" ;; esac # Do a test to see if this is a libtool program. 
if func_ltwrapper_p "$file"; then if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" relink_command= func_source $func_ltwrapper_scriptname_result func_append rmfiles " $func_ltwrapper_scriptname_result" else relink_command= func_source $dir/$noexename fi # note $name still contains .exe if it was in $file originally # as does the version of $file that was added into $rmfiles func_append rmfiles " $odir/$name $odir/${name}S.$objext" if test yes = "$fast_install" && test -n "$relink_command"; then func_append rmfiles " $odir/lt-$name" fi if test "X$noexename" != "X$name"; then func_append rmfiles " $odir/lt-$noexename.c" fi fi fi ;; esac func_show_eval "$RM $rmfiles" 'exit_status=1' done # Try to remove the $objdir's in the directories where we deleted files for dir in $rmdirs; do if test -d "$dir"; then func_show_eval "rmdir $dir >/dev/null 2>&1" fi done exit $exit_status } if test uninstall = "$opt_mode" || test clean = "$opt_mode"; then func_mode_uninstall ${1+"$@"} fi test -z "$opt_mode" && { help=$generic_help func_fatal_help "you must specify a MODE" } test -z "$exec_cmd" && \ func_fatal_help "invalid operation mode '$opt_mode'" if test -n "$exec_cmd"; then eval exec "$exec_cmd" exit $EXIT_FAILURE fi exit $exit_status # The TAGs below are defined such that we never get into a situation # where we disable both kinds of libraries. Given conflicting # choices, we go for a static library, that is the most portable, # since we can't tell whether shared libraries were disabled because # the user asked for that or because the platform doesn't support # them. This is particularly important on AIX, because we don't # support having both static and shared libraries enabled at the same # time on that platform, so we default to a shared-only configuration. # If a disable-shared tag is given, we'll fallback to a static-only # configuration. But we'll never go from static-only to shared-only. # ### BEGIN LIBTOOL TAG CONFIG: disable-shared build_libtool_libs=no build_old_libs=yes # ### END LIBTOOL TAG CONFIG: disable-shared # ### BEGIN LIBTOOL TAG CONFIG: disable-static build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac` # ### END LIBTOOL TAG CONFIG: disable-static # Local Variables: # mode:shell-script # sh-indentation:2 # End: gevent-24.11.1/deps/libev/missing000077500000000000000000000153361471441230600166150ustar00rootroot00000000000000#! /bin/sh # Common wrapper for a few potentially missing GNU programs. scriptversion=2018-03-07.03; # UTC # Copyright (C) 1996-2018 Free Software Foundation, Inc. # Originally written by Fran,cois Pinard , 1996. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. 
if test $# -eq 0; then echo 1>&2 "Try '$0 --help' for more information" exit 1 fi case $1 in --is-lightweight) # Used by our autoconf macros to check whether the available missing # script is modern enough. exit 0 ;; --run) # Back-compat with the calling convention used by older automake. shift ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Run 'PROGRAM [ARGUMENT]...', returning a proper advice when this fails due to PROGRAM being missing or too old. Options: -h, --help display this help and exit -v, --version output version information and exit Supported PROGRAM values: aclocal autoconf autoheader autom4te automake makeinfo bison yacc flex lex help2man Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and 'g' are ignored when checking the name. Send bug reports to ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing $scriptversion (GNU Automake)" exit $? ;; -*) echo 1>&2 "$0: unknown '$1' option" echo 1>&2 "Try '$0 --help' for more information" exit 1 ;; esac # Run the given program, remember its exit status. "$@"; st=$? # If it succeeded, we are done. test $st -eq 0 && exit 0 # Also exit now if we it failed (or wasn't found), and '--version' was # passed; such an option is passed most likely to detect whether the # program is present and works. case $2 in --version|--help) exit $st;; esac # Exit code 63 means version mismatch. This often happens when the user # tries to use an ancient version of a tool on a file that requires a # minimum version. if test $st -eq 63; then msg="probably too old" elif test $st -eq 127; then # Program was missing. msg="missing on your system" else # Program was found and executed, but failed. Give up. exit $st fi perl_URL=https://www.perl.org/ flex_URL=https://github.com/westes/flex gnu_software_URL=https://www.gnu.org/software program_details () { case $1 in aclocal|automake) echo "The '$1' program is part of the GNU Automake package:" echo "<$gnu_software_URL/automake>" echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/autoconf>" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; autoconf|autom4te|autoheader) echo "The '$1' program is part of the GNU Autoconf package:" echo "<$gnu_software_URL/autoconf/>" echo "It also requires GNU m4 and Perl in order to run:" echo "<$gnu_software_URL/m4/>" echo "<$perl_URL>" ;; esac } give_advice () { # Normalize program name to check for. normalized_program=`echo "$1" | sed ' s/^gnu-//; t s/^gnu//; t s/^g//; t'` printf '%s\n' "'$1' is $msg." configure_deps="'configure.ac' or m4 files included by 'configure.ac'" case $normalized_program in autoconf*) echo "You should only need it if you modified 'configure.ac'," echo "or m4 files included by it." program_details 'autoconf' ;; autoheader*) echo "You should only need it if you modified 'acconfig.h' or" echo "$configure_deps." program_details 'autoheader' ;; automake*) echo "You should only need it if you modified 'Makefile.am' or" echo "$configure_deps." program_details 'automake' ;; aclocal*) echo "You should only need it if you modified 'acinclude.m4' or" echo "$configure_deps." program_details 'aclocal' ;; autom4te*) echo "You might have modified some maintainer files that require" echo "the 'autom4te' program to be rebuilt." program_details 'autom4te' ;; bison*|yacc*) echo "You should only need it if you modified a '.y' file." 
echo "You may want to install the GNU Bison package:" echo "<$gnu_software_URL/bison/>" ;; lex*|flex*) echo "You should only need it if you modified a '.l' file." echo "You may want to install the Fast Lexical Analyzer package:" echo "<$flex_URL>" ;; help2man*) echo "You should only need it if you modified a dependency" \ "of a man page." echo "You may want to install the GNU Help2man package:" echo "<$gnu_software_URL/help2man/>" ;; makeinfo*) echo "You should only need it if you modified a '.texi' file, or" echo "any other file indirectly affecting the aspect of the manual." echo "You might want to install the Texinfo package:" echo "<$gnu_software_URL/texinfo/>" echo "The spurious makeinfo call might also be the consequence of" echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might" echo "want to install GNU make:" echo "<$gnu_software_URL/make/>" ;; *) echo "You might have modified some files without having the proper" echo "tools for further handling them. Check the 'README' file, it" echo "often tells you about the needed prerequisites for installing" echo "this package. You may also peek at any GNU archive site, in" echo "case some other package contains this missing '$1' program." ;; esac } give_advice "$1" | sed -e '1s/^/WARNING: /' \ -e '2,$s/^/ /' >&2 # Propagate the correct exit status (expected to be 127 for a program # not found, 63 for a program that failed due to version mismatch). exit $st # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC0" # time-stamp-end: "; # UTC" # End: gevent-24.11.1/deps/libuv-1.42-py27-win.patch000066400000000000000000000013211471441230600202200ustar00rootroot00000000000000diff --git a/deps/libuv/src/win/util.c b/deps/libuv/src/win/util.c index 88602c7e..d6009ce3 100644 --- a/deps/libuv/src/win/util.c +++ b/deps/libuv/src/win/util.c @@ -1662,7 +1662,13 @@ int uv_os_unsetenv(const char* name) { return 0; } - +/** + * gevent: disable this function for Python 2.7 on Windows. + * + * It fails to link on anything older than Windows 8/Windows Server 2012 + * because of GetHostNameW. 
+ */ +#if 0 int uv_os_gethostname(char* buffer, size_t* size) { WCHAR buf[UV_MAXHOSTNAMESIZE]; size_t len; @@ -1694,6 +1700,7 @@ int uv_os_gethostname(char* buffer, size_t* size) { *size = len; return 0; } +#endif static int uv__get_handle(uv_pid_t pid, int access, HANDLE* handle) { gevent-24.11.1/deps/libuv/000077500000000000000000000000001471441230600152265ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/.gitattributes000066400000000000000000000000521471441230600201160ustar00rootroot00000000000000test/fixtures/lorem_ipsum.txt text eol=lf gevent-24.11.1/deps/libuv/.gitignore000066400000000000000000000015121471441230600172150ustar00rootroot00000000000000*.swp *.[oa] *.l[oa] *.opensdf *.orig *.pyc *.sdf *.suo .vs/ *.VC.db *.VC.opendb core vgcore.* .buildstamp .dirstamp .deps/ /.libs/ /aclocal.m4 /ar-lib /autom4te.cache/ /compile /config.guess /config.log /config.status /config.sub /configure /depcomp /install-sh /libtool /libuv.a /libuv.dylib /libuv.pc /libuv.so /ltmain.sh /missing /test-driver Makefile Makefile.in /build/ /test/.libs/ /test/run-tests /test/run-tests.exe /test/run-tests.dSYM /test/run-benchmarks /test/run-benchmarks.exe /test/run-benchmarks.dSYM test_file_* *.sln *.sln.cache *.ncb *.vcproj *.vcproj*.user *.vcxproj *.vcxproj.filters *.vcxproj.user _UpgradeReport_Files/ UpgradeLog*.XML Debug Release ipch # sphinx generated files /docs/build/ # Clion / IntelliJ project files /.idea/ cmake-build-debug/ *.xcodeproj *.xcworkspace # make dist output libuv-*.tar.* gevent-24.11.1/deps/libuv/.mailmap000066400000000000000000000062361471441230600166560ustar00rootroot00000000000000A. Hauptmann AJ Heller Aaron Bieber Alan Gutierrez Andrius Bentkus Andy Fiddaman Bert Belder Bert Belder Bert Belder Brandon Philips Brian White Brian White Caleb James DeLisle Christoph Iserlohn Darshan Sen Darshan Sen David Carlier Devchandra Meetei Leishangthem Fedor Indutny Frank Denis Imran Iqbal Isaac Z. Schlueter Jason Williams Jesse Gorzinski Jesse Gorzinski Juan José Arboleda Justin Venus Keno Fischer Keno Fischer Leith Bade Leonard Hecker Maciej Małecki Marc Schlaich Michael Michael Neumann Michael Penick Nicholas Vavilov Nick Logan Rasmus Christian Pedersen Rasmus Christian Pedersen Richard Lau Robert Mustacchi Ryan Dahl Ryan Emery Sakthipriyan Vairamani Sam Roberts San-Tai Hsu Santiago Gimeno Saúl Ibarra Corretgé Saúl Ibarra Corretgé Shigeki Ohtsu Shuowang (Wayne) Zhang TK-one Timothy J. Fontaine Yasuhiro Matsumoto Yazhong Liu Yuki Okumura cjihrig gengjiawen jBarz jBarz ptlomholt tjarlama <59913901+tjarlama@users.noreply.github.com> zlargon gevent-24.11.1/deps/libuv/AUTHORS000066400000000000000000000456361471441230600163140ustar00rootroot00000000000000# Authors ordered by first contribution. Ryan Dahl Bert Belder Josh Roesslein Alan Gutierrez Joshua Peek Igor Zinkovsky San-Tai Hsu Ben Noordhuis Henry Rawas Robert Mustacchi Matt Stevens Paul Querna Shigeki Ohtsu Tom Hughes Peter Bright Jeroen Janssen Andrea Lattuada Augusto Henrique Hentz Clifford Heath Jorge Chamorro Bieling Luis Lavena Matthew Sporleder Erick Tryzelaar Isaac Z. 
Schlueter Pieter Noordhuis Marek Jelen Fedor Indutny Saúl Ibarra Corretgé Felix Geisendörfer Yuki Okumura Roman Shtylman Frank Denis Carter Allen Tj Holowaychuk Shimon Doodkin Ryan Emery Bruce Mitchener Maciej Małecki Yasuhiro Matsumoto Daisuke Murase Paddy Byers Dan VerWeire Brandon Benvie Brandon Philips Nathan Rajlich Charlie McConnell Vladimir Dronnikov Aaron Bieber Bulat Shakirzyanov Brian White Erik Dubbelboer Keno Fischer Ira Cooper Andrius Bentkus Iñaki Baz Castillo Mark Cavage George Yohng Xidorn Quan Roman Neuhauser Shuhei Tanuma Bryan Cantrill Trond Norbye Tim Holy Prancesco Pertugio Leonard Hecker Andrew Paprocki Luigi Grilli Shannen Saez Artur Adib Hiroaki Nakamura Ting-Yu Lin Stephen Gallagher Shane Holloway Andrew Shaffer Vlad Tudose Ben Leslie Tim Bradshaw Timothy J. Fontaine Marc Schlaich Brian Mazza Elliot Saba Ben Kelly Nils Maier Nicholas Vavilov Miroslav Bajtoš Sean Silva Wynn Wilkes Andrei Sedoi Alex Crichton Brent Cook Brian Kaisner Luca Bruno Reini Urban Maks Naumov Sean Farrell Chris Bank Geert Jansen Christoph Iserlohn Steven Kabbes Alex Gaynor huxingyi Tenor Biel Andrej Manduch Joshua Neuheisel Alexis Campailla Yazhong Liu Sam Roberts River Tarnell Nathan Sweet Trevor Norris Oguz Bastemur Dylan Cali Austin Foxley Benjamin Saunders Geoffry Song William Light Oleg Efimov Lars Gierth Rasmus Christian Pedersen Justin Venus Kristian Evensen Linus Mårtensson Navaneeth Kedaram Nambiathan StarWing thierry-FreeBSD Isaiah Norton Raul Martins David Capello Paul Tan Javier Hernández Tonis Tiigi Norio Kobota 李港平 Chernyshev Viacheslav Stephen von Takach JD Ballard Luka Perkov Ryan Cole HungMingWu Jay Satiro Leith Bade Peter Atashian Tim Cooper Caleb James DeLisle Jameson Nash Graham Lee Andrew Low Pavel Platto Tony Kelman John Firebaugh lilohuang Paul Goldsmith Julien Gilli Michael Hudson-Doyle Recep ASLANTAS Rob Adams Zachary Newman Robin Hahling Jeff Widman cjihrig Tomasz Kołodziejski Unknown W. Brackets Emmanuel Odeke Mikhail Mukovnikov Thorsten Lorenz Yuri D'Elia Manos Nikolaidis Elijah Andrews Michael Ira Krufky Helge Deller Joey Geralnik Tim Caswell Logan Rosen Kenneth Perry John Marino Alexey Melnichuk Johan Bergström Alex Mo Luis Martinez de Bartolome Michael Penick Michael Massimiliano Torromeo TomCrypto Brett Vickers Ole André Vadla Ravnås Kazuho Oku Ryan Phillips Brian Green Devchandra Meetei Leishangthem Corey Farrell Per Nilsson Alan Rogers Daryl Haresign Rui Abreu Ferreira João Reis farblue68 Jason Williams Igor Soarez Miodrag Milanovic Cheng Zhao Michael Neumann Stefano Cristiano heshamsafi A. Hauptmann John McNamee Yosuke Furukawa Santiago Gimeno guworks RossBencina Roger A. 
Light chenttuuvv Richard Lau ronkorving Corbin Simpson Zachary Hamm Karl Skomski Jeremy Whitlock Willem Thiart Ben Trask Jianghua Yang Colin Snover Sakthipriyan Vairamani Eli Skeggs nmushell Gireesh Punathil Ryan Johnston Adam Stylinski Nathan Corvino Wink Saville Angel Leon Louis DeJardin Imran Iqbal Petka Antonov Ian Kronquist kkdaemon Yuval Brik Joran Dirk Greef Andrey Mazo sztomi Martin Bark Dave Alexis Murzeau Didiet Nan Xiang <514580344@qq.com> Samuel Lorétan Nándor István Krácser Katsutoshi Horie Lukasz Jagiello Robert Chiras Kári Tristan Helgason Krishnaraj Bhat Enno Boland Michael Fero Robert Jefe Lindstaedt Myles Borins Tony Theodore Jason Ginchereau Nicolas Cavallari Pierre-Marie de Rodat Brian Maher neevek John Barboza liuxiaobo Michele Caini Bartosz Sosnowski Matej Knopp sunjin.lee Matt Clarkson Jeffrey Clark Bart Robinson Vit Gottwald Vladimír Čunát Alex Hultman Brad King Philippe Laferriere Will Speak Hitesh Kanwathirtha Eric Sciple jBarz muflub Daniel Bevenius Howard Hellyer Chris Araman Vladimir Matveev Jason Madden Jamie Davis Daniel Kahn Gillmor Keane James McCoy Bernardo Ramos Juan Cruz Viotti Gemini Wen Sebastian Wiedenroth Sai Ke WANG Barnabas Gema Romain Caire Robert Ayrapetyan Refael Ackermann André Klitzing Matthew Taylor CurlyMoo XadillaX Anticrisis Jacob Segal Maciej Szeptuch (Neverous) Joel Winarske Gergely Nagy Kamil Rytarowski tux.uudiin <77389867@qq.com> Nick Logan darobs Zheng, Lei Carlo Marcelo Arenas Belón Scott Parker Wade Brainerd rayrase Pekka Nikander Ed Schouten Xu Meng Matt Harrison Anna Henningsen Jérémy Lal Ben Wijen elephantp Felix Yan Mason X Jesse Gorzinski Ryuichi KAWAMATA Joyee Cheung Michael Kilburn Ruslan Bekenev Bob Burger Thomas Versteeg zzzjim Alex Arslan Kyle Farnung ssrlive <30760636+ssrlive@users.noreply.github.com> Tobias Nießen Björn Linse zyxwvu Shi Peter Johnson Paolo Greppi Shelley Vohr Ujjwal Sharma Michał Kozakiewicz Emil Bay Jeremiah Senkpiel Andy Zhang dmabupt Ryan Liptak Ali Ijaz Sheikh hitesh Svante Signell Samuel Thibault Jeremy Studer damon-kwok <563066990@qq.com> Damon Kwok Ashe Connor Rick Ivan Krylov Michael Meier ptlomholt Victor Costan sid Kevin Adler Stephen Belanger yeyuanfeng erw7 Thomas Karl Pietrowski evgley Andreas Rohner Rich Trott Milad Farazmand zlargon Yury Selivanov Oscar Waddell FX Coudert George Zhao Kyle Edwards ken-cunningham-webuse Kelvin Jin Leorize Vlad A Niels Lohmann Jenil Christo Evgeny Ermakov gengjiawen Leo Chung Javier Blazquez Mustafa M Zach Bjornson Nan Xiao Ben Davies Nhan Khong Crunkle Tomas Krizek Konstantin Podsvirov seny Vladimir Karnushin MaYuming Eneas U de Queiroz Daniel Hahler Yang Yu David Carlier Calvin Hill Isabella Muerte <63051+slurps-mad-rips@users.noreply.github.com> Ouyang Yadong ZYSzys Carl Lei Stefan Bender nia virtualyw Witold Kręcicki Dominique Dumont Manuel BACHMANN Marek Vavrusa TK-one Irek Fakhrutdinov Lin Zhang 毛毛 Sk Sajidul Kadir twosee Rikard Falkeborn Yash Ladha James Ross Colin Finck Shohei YOSHIDA Philip Chimento Michal Artazov Jeroen Roovers MasterDuke17 Alexander Tokmakov Arenoros lander0s Turbinya OleksandrKvl Carter Li Juan Sebastian velez Posada escherstair Evan Lucas tjarlama <59913901+tjarlama@users.noreply.github.com> 司徒玟琅 YuMeiJie Aleksej Lebedev Nikolay Mitev Ulrik Strid Elad Lahav Elad Nachmias Darshan Sen Simon Kadisch Momtchil Momtchev Ethel Weston <66453757+ethelweston@users.noreply.github.com> Drew DeVault Mark Klein schamberg97 <50446906+schamberg97@users.noreply.github.com> Bob Weinand Issam E. 
Maghni Juan Pablo Canepa Shuowang (Wayne) Zhang Ondřej Surý Juan José Arboleda Zhao Zhili Brandon Cheng Matvii Hodovaniuk Hayden yiyuaner bbara SeverinLeonhardt Andy Fiddaman Romain Roffé Eagle Liang Ricky Zhou Simon Kissane James M Snell Ali Mohammad Pur Erkhes N <71805796+rexes-ND@users.noreply.github.com> Joshua M. Clulow Guilherme Íscaro Martin Storsjö Claes Nästén Mohamed Edrah <43171151+MSE99@users.noreply.github.com> Supragya Raj Ikko Ashimine Sylvain Corlay earnal YAKSH BARIYA Ofek Lev ~locpyl-tidnyd <81016946+locpyl-tidnyd@users.noreply.github.com> Evan Miller Petr Menšík Nicolas Noble AJ Heller Stacey Marshall Jesper Storm Bache Campbell He Andrey Hohutkin deal David Machaj <46852402+dmachaj@users.noreply.github.com> Jessica Clarke Jeremy Rose woclass Luca Adrian L WenTao Ou jonilaitinen UMU Paul Evans wyckster Vittore F. Scolari roflcopter4 <15476346+roflcopter4@users.noreply.github.com> V-for-Vasili Denny C. Dai Hannah Shi tuftedocelot blogdaren chucksilvers Sergey Fedorov theanarkh <2923878201@qq.com> Samuel Cabrero gevent-24.11.1/deps/libuv/CMakeLists.txt000066400000000000000000000570421471441230600177760ustar00rootroot00000000000000cmake_minimum_required(VERSION 3.4) project(libuv LANGUAGES C) cmake_policy(SET CMP0057 NEW) # Enable IN_LIST operator cmake_policy(SET CMP0064 NEW) # Support if (TEST) operator list(APPEND CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake") include(CMakePackageConfigHelpers) include(CMakeDependentOption) include(CheckCCompilerFlag) include(GNUInstallDirs) include(CTest) set(CMAKE_C_VISIBILITY_PRESET hidden) set(CMAKE_C_STANDARD_REQUIRED ON) set(CMAKE_C_EXTENSIONS ON) set(CMAKE_C_STANDARD 90) cmake_dependent_option(LIBUV_BUILD_TESTS "Build the unit tests when BUILD_TESTING is enabled and we are the root project" ON "BUILD_TESTING;CMAKE_SOURCE_DIR STREQUAL PROJECT_SOURCE_DIR" OFF) cmake_dependent_option(LIBUV_BUILD_BENCH "Build the benchmarks when building unit tests and we are the root project" ON "LIBUV_BUILD_TESTS" OFF) # Qemu Build option(QEMU "build for qemu" OFF) if(QEMU) add_definitions(-D__QEMU__=1) endif() option(ASAN "Enable AddressSanitizer (ASan)" OFF) option(TSAN "Enable ThreadSanitizer (TSan)" OFF) if((ASAN OR TSAN) AND NOT (CMAKE_C_COMPILER_ID MATCHES "AppleClang|GNU|Clang")) message(SEND_ERROR "Sanitizer support requires clang or gcc. 
Try again with -DCMAKE_C_COMPILER.") endif() if(ASAN) add_definitions(-D__ASAN__=1) set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fno-omit-frame-pointer -fsanitize=address") set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fno-omit-frame-pointer -fsanitize=address") set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fno-omit-frame-pointer -fsanitize=address") endif() if(TSAN) add_definitions(-D__TSAN__=1) set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fno-omit-frame-pointer -fsanitize=thread") set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fno-omit-frame-pointer -fsanitize=thread") set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fno-omit-frame-pointer -fsanitize=thread") endif() # Compiler check string(CONCAT is-msvc $, $ >) check_c_compiler_flag(/W4 UV_LINT_W4) check_c_compiler_flag(/wd4100 UV_LINT_NO_UNUSED_PARAMETER_MSVC) check_c_compiler_flag(/wd4127 UV_LINT_NO_CONDITIONAL_CONSTANT_MSVC) check_c_compiler_flag(/wd4201 UV_LINT_NO_NONSTANDARD_MSVC) check_c_compiler_flag(/wd4206 UV_LINT_NO_NONSTANDARD_EMPTY_TU_MSVC) check_c_compiler_flag(/wd4210 UV_LINT_NO_NONSTANDARD_FILE_SCOPE_MSVC) check_c_compiler_flag(/wd4232 UV_LINT_NO_NONSTANDARD_NONSTATIC_DLIMPORT_MSVC) check_c_compiler_flag(/wd4456 UV_LINT_NO_HIDES_LOCAL) check_c_compiler_flag(/wd4457 UV_LINT_NO_HIDES_PARAM) check_c_compiler_flag(/wd4459 UV_LINT_NO_HIDES_GLOBAL) check_c_compiler_flag(/wd4706 UV_LINT_NO_CONDITIONAL_ASSIGNMENT_MSVC) check_c_compiler_flag(/wd4996 UV_LINT_NO_UNSAFE_MSVC) check_c_compiler_flag(-Wall UV_LINT_WALL) # DO NOT use this under MSVC # TODO: Place these into its own function check_c_compiler_flag(-Wno-unused-parameter UV_LINT_NO_UNUSED_PARAMETER) check_c_compiler_flag(-Wstrict-prototypes UV_LINT_STRICT_PROTOTYPES) check_c_compiler_flag(-Wextra UV_LINT_EXTRA) check_c_compiler_flag(/utf-8 UV_LINT_UTF8_MSVC) set(lint-no-unused-parameter $<$:-Wno-unused-parameter>) set(lint-strict-prototypes $<$:-Wstrict-prototypes>) set(lint-extra $<$:-Wextra>) set(lint-w4 $<$:/W4>) set(lint-no-unused-parameter-msvc $<$:/wd4100>) set(lint-no-conditional-constant-msvc $<$:/wd4127>) set(lint-no-nonstandard-msvc $<$:/wd4201>) set(lint-no-nonstandard-empty-tu-msvc $<$:/wd4206>) set(lint-no-nonstandard-file-scope-msvc $<$:/wd4210>) set(lint-no-nonstandard-nonstatic-dlimport-msvc $<$:/wd4232>) set(lint-no-hides-local-msvc $<$:/wd4456>) set(lint-no-hides-param-msvc $<$:/wd4457>) set(lint-no-hides-global-msvc $<$:/wd4459>) set(lint-no-conditional-assignment-msvc $<$:/wd4706>) set(lint-no-unsafe-msvc $<$:/wd4996>) # Unfortunately, this one is complicated because MSVC and clang-cl support -Wall # but using it is like calling -Weverything string(CONCAT lint-default $< $,$>:-Wall >) set(lint-utf8-msvc $<$:/utf-8>) list(APPEND uv_cflags ${lint-strict-prototypes} ${lint-extra} ${lint-default} ${lint-w4}) list(APPEND uv_cflags ${lint-no-unused-parameter}) list(APPEND uv_cflags ${lint-no-unused-parameter-msvc}) list(APPEND uv_cflags ${lint-no-conditional-constant-msvc}) list(APPEND uv_cflags ${lint-no-nonstandard-msvc}) list(APPEND uv_cflags ${lint-no-nonstandard-empty-tu-msvc}) list(APPEND uv_cflags ${lint-no-nonstandard-file-scope-msvc}) list(APPEND uv_cflags ${lint-no-nonstandard-nonstatic-dlimport-msvc}) list(APPEND uv_cflags ${lint-no-hides-local-msvc}) list(APPEND uv_cflags ${lint-no-hides-param-msvc}) list(APPEND uv_cflags ${lint-no-hides-global-msvc}) list(APPEND uv_cflags ${lint-no-conditional-assignment-msvc}) list(APPEND uv_cflags ${lint-no-unsafe-msvc}) list(APPEND uv_cflags ${lint-utf8-msvc} ) 
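# Note: each lint-* variable above is intended to expand to its warning flag
# only when the matching check_c_compiler_flag() probe succeeded (for example
# UV_LINT_EXTRA gating -Wextra, UV_LINT_W4 gating /W4), so flags the compiler
# does not understand simply drop out of uv_cflags.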
check_c_compiler_flag(-fno-strict-aliasing UV_F_STRICT_ALIASING) list(APPEND uv_cflags $<$:-fno-strict-aliasing>) set(uv_sources src/fs-poll.c src/idna.c src/inet.c src/random.c src/strscpy.c src/strtok.c src/threadpool.c src/timer.c src/uv-common.c src/uv-data-getter-setters.c src/version.c) if(WIN32) list(APPEND uv_defines WIN32_LEAN_AND_MEAN _WIN32_WINNT=0x0602) list(APPEND uv_libraries psapi user32 advapi32 iphlpapi userenv ws2_32) list(APPEND uv_sources src/win/async.c src/win/core.c src/win/detect-wakeup.c src/win/dl.c src/win/error.c src/win/fs.c src/win/fs-event.c src/win/getaddrinfo.c src/win/getnameinfo.c src/win/handle.c src/win/loop-watcher.c src/win/pipe.c src/win/thread.c src/win/poll.c src/win/process.c src/win/process-stdio.c src/win/signal.c src/win/snprintf.c src/win/stream.c src/win/tcp.c src/win/tty.c src/win/udp.c src/win/util.c src/win/winapi.c src/win/winsock.c) list(APPEND uv_test_libraries ws2_32) list(APPEND uv_test_sources src/win/snprintf.c test/runner-win.c) else() list(APPEND uv_defines _FILE_OFFSET_BITS=64 _LARGEFILE_SOURCE) if(NOT CMAKE_SYSTEM_NAME MATCHES "Android|OS390|QNX") # TODO: This should be replaced with find_package(Threads) if possible # Android has pthread as part of its c library, not as a separate # libpthread.so. list(APPEND uv_libraries pthread) endif() list(APPEND uv_sources src/unix/async.c src/unix/core.c src/unix/dl.c src/unix/fs.c src/unix/getaddrinfo.c src/unix/getnameinfo.c src/unix/loop-watcher.c src/unix/loop.c src/unix/pipe.c src/unix/poll.c src/unix/process.c src/unix/random-devurandom.c src/unix/signal.c src/unix/stream.c src/unix/tcp.c src/unix/thread.c src/unix/tty.c src/unix/udp.c) list(APPEND uv_test_sources test/runner-unix.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "AIX") list(APPEND uv_defines _ALL_SOURCE _LINUX_SOURCE_COMPAT _THREAD_SAFE _XOPEN_SOURCE=500 HAVE_SYS_AHAFS_EVPRODS_H) list(APPEND uv_libraries perfstat) list(APPEND uv_sources src/unix/aix.c src/unix/aix-common.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "Android") list(APPEND uv_defines _GNU_SOURCE) list(APPEND uv_libraries dl) list(APPEND uv_sources src/unix/linux-core.c src/unix/linux-inotify.c src/unix/linux-syscalls.c src/unix/procfs-exepath.c src/unix/pthread-fixes.c src/unix/random-getentropy.c src/unix/random-getrandom.c src/unix/random-sysctl-linux.c src/unix/epoll.c) endif() if(APPLE OR CMAKE_SYSTEM_NAME MATCHES "Android|Linux") list(APPEND uv_sources src/unix/proctitle.c) endif() if(CMAKE_SYSTEM_NAME MATCHES "DragonFly|FreeBSD") list(APPEND uv_sources src/unix/freebsd.c) endif() if(CMAKE_SYSTEM_NAME MATCHES "DragonFly|FreeBSD|NetBSD|OpenBSD") list(APPEND uv_sources src/unix/posix-hrtime.c src/unix/bsd-proctitle.c) endif() if(APPLE OR CMAKE_SYSTEM_NAME MATCHES "DragonFly|FreeBSD|NetBSD|OpenBSD") list(APPEND uv_sources src/unix/bsd-ifaddrs.c src/unix/kqueue.c) endif() if(CMAKE_SYSTEM_NAME MATCHES "FreeBSD") list(APPEND uv_sources src/unix/random-getrandom.c) endif() if(APPLE OR CMAKE_SYSTEM_NAME STREQUAL "OpenBSD") list(APPEND uv_sources src/unix/random-getentropy.c) endif() if(APPLE) list(APPEND uv_defines _DARWIN_UNLIMITED_SELECT=1 _DARWIN_USE_64_BIT_INODE=1) list(APPEND uv_sources src/unix/darwin-proctitle.c src/unix/darwin.c src/unix/fsevents.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "GNU") list(APPEND uv_libraries dl) list(APPEND uv_sources src/unix/bsd-ifaddrs.c src/unix/no-fsevents.c src/unix/no-proctitle.c src/unix/posix-hrtime.c src/unix/posix-poll.c src/unix/hurd.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "kFreeBSD") list(APPEND uv_defines 
_GNU_SOURCE) list(APPEND uv_libraries dl freebsd-glue) endif() if(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND uv_defines _GNU_SOURCE _POSIX_C_SOURCE=200112) list(APPEND uv_libraries dl rt) list(APPEND uv_sources src/unix/linux-core.c src/unix/linux-inotify.c src/unix/linux-syscalls.c src/unix/procfs-exepath.c src/unix/random-getrandom.c src/unix/random-sysctl-linux.c src/unix/epoll.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "NetBSD") list(APPEND uv_sources src/unix/netbsd.c) list(APPEND uv_libraries kvm) endif() if(CMAKE_SYSTEM_NAME STREQUAL "OpenBSD") list(APPEND uv_sources src/unix/openbsd.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "OS390") enable_language(CXX) list(APPEND uv_defines PATH_MAX=1024) list(APPEND uv_defines _AE_BIMODAL) list(APPEND uv_defines _ALL_SOURCE) list(APPEND uv_defines _ENHANCED_ASCII_EXT=0xFFFFFFFF) list(APPEND uv_defines _ISOC99_SOURCE) list(APPEND uv_defines _LARGE_TIME_API) list(APPEND uv_defines _OPEN_MSGQ_EXT) list(APPEND uv_defines _OPEN_SYS_FILE_EXT) list(APPEND uv_defines _OPEN_SYS_IF_EXT) list(APPEND uv_defines _OPEN_SYS_SOCK_EXT3) list(APPEND uv_defines _OPEN_SYS_SOCK_IPV6) list(APPEND uv_defines _UNIX03_SOURCE) list(APPEND uv_defines _UNIX03_THREADS) list(APPEND uv_defines _UNIX03_WITHDRAWN) list(APPEND uv_defines _XOPEN_SOURCE=600) list(APPEND uv_defines _XOPEN_SOURCE_EXTENDED) list(APPEND uv_sources src/unix/pthread-fixes.c src/unix/os390.c src/unix/os390-syscalls.c src/unix/os390-proctitle.c) list(APPEND uv_cflags -q64 -qascii -qexportall -qgonumber -qlongname -qlibansi -qfloat=IEEE -qtune=10 -qarch=10 -qasm -qasmlib=sys1.maclib:sys1.modgen) find_library(ZOSLIB NAMES zoslib PATHS ${ZOSLIB_DIR} PATH_SUFFIXES lib ) list(APPEND uv_libraries ${ZOSLIB}) endif() if(CMAKE_SYSTEM_NAME STREQUAL "OS400") list(APPEND uv_defines _ALL_SOURCE _LINUX_SOURCE_COMPAT _THREAD_SAFE _XOPEN_SOURCE=500) list(APPEND uv_sources src/unix/aix-common.c src/unix/ibmi.c src/unix/no-fsevents.c src/unix/posix-poll.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "SunOS") list(APPEND uv_defines __EXTENSIONS__ _XOPEN_SOURCE=500 _REENTRANT) list(APPEND uv_libraries kstat nsl sendfile socket) list(APPEND uv_sources src/unix/no-proctitle.c src/unix/sunos.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "Haiku") list(APPEND uv_defines _BSD_SOURCE) list(APPEND uv_libraries bsd network) list(APPEND uv_sources src/unix/haiku.c src/unix/bsd-ifaddrs.c src/unix/no-fsevents.c src/unix/no-proctitle.c src/unix/posix-hrtime.c src/unix/posix-poll.c) endif() if(CMAKE_SYSTEM_NAME STREQUAL "QNX") list(APPEND uv_sources src/unix/posix-hrtime.c src/unix/posix-poll.c src/unix/qnx.c src/unix/bsd-ifaddrs.c src/unix/no-proctitle.c src/unix/no-fsevents.c) list(APPEND uv_libraries socket) endif() if(APPLE OR CMAKE_SYSTEM_NAME MATCHES "DragonFly|FreeBSD|Linux|NetBSD|OpenBSD") list(APPEND uv_test_libraries util) endif() add_library(uv SHARED ${uv_sources}) target_compile_definitions(uv INTERFACE USING_UV_SHARED=1 PRIVATE BUILDING_UV_SHARED=1 ${uv_defines}) target_compile_options(uv PRIVATE ${uv_cflags}) target_include_directories(uv PUBLIC $ $ PRIVATE $) if(CMAKE_SYSTEM_NAME STREQUAL "OS390") target_include_directories(uv PUBLIC $) set_target_properties(uv PROPERTIES LINKER_LANGUAGE CXX) endif() target_link_libraries(uv ${uv_libraries}) add_library(uv_a STATIC ${uv_sources}) target_compile_definitions(uv_a PRIVATE ${uv_defines}) target_compile_options(uv_a PRIVATE ${uv_cflags}) target_include_directories(uv_a PUBLIC $ $ PRIVATE $) if(CMAKE_SYSTEM_NAME STREQUAL "OS390") target_include_directories(uv_a PUBLIC $) 
set_target_properties(uv_a PROPERTIES LINKER_LANGUAGE CXX) endif() target_link_libraries(uv_a ${uv_libraries}) if(LIBUV_BUILD_TESTS) # Small hack: use ${uv_test_sources} now to get the runner skeleton, # before the actual tests are added. add_executable( uv_run_benchmarks_a ${uv_test_sources} test/benchmark-async-pummel.c test/benchmark-async.c test/benchmark-fs-stat.c test/benchmark-getaddrinfo.c test/benchmark-loop-count.c test/benchmark-queue-work.c test/benchmark-million-async.c test/benchmark-million-timers.c test/benchmark-multi-accept.c test/benchmark-ping-pongs.c test/benchmark-ping-udp.c test/benchmark-pound.c test/benchmark-pump.c test/benchmark-sizes.c test/benchmark-spawn.c test/benchmark-tcp-write-batch.c test/benchmark-thread.c test/benchmark-udp-pummel.c test/blackhole-server.c test/echo-server.c test/run-benchmarks.c test/runner.c) target_compile_definitions(uv_run_benchmarks_a PRIVATE ${uv_defines}) target_compile_options(uv_run_benchmarks_a PRIVATE ${uv_cflags}) target_link_libraries(uv_run_benchmarks_a uv_a ${uv_test_libraries}) list(APPEND uv_test_sources test/blackhole-server.c test/echo-server.c test/run-tests.c test/runner.c test/test-active.c test/test-async-null-cb.c test/test-async.c test/test-barrier.c test/test-callback-stack.c test/test-close-fd.c test/test-close-order.c test/test-condvar.c test/test-connect-unspecified.c test/test-connection-fail.c test/test-cwd-and-chdir.c test/test-default-loop-close.c test/test-delayed-accept.c test/test-dlerror.c test/test-eintr-handling.c test/test-embed.c test/test-emfile.c test/test-env-vars.c test/test-error.c test/test-fail-always.c test/test-fork.c test/test-fs-copyfile.c test/test-fs-event.c test/test-fs-poll.c test/test-fs.c test/test-fs-readdir.c test/test-fs-fd-hash.c test/test-fs-open-flags.c test/test-get-currentexe.c test/test-get-loadavg.c test/test-get-memory.c test/test-get-passwd.c test/test-getaddrinfo.c test/test-gethostname.c test/test-getnameinfo.c test/test-getsockname.c test/test-getters-setters.c test/test-gettimeofday.c test/test-handle-fileno.c test/test-homedir.c test/test-hrtime.c test/test-idle.c test/test-idna.c test/test-ip4-addr.c test/test-ip6-addr.c test/test-ip-name.c test/test-ipc-heavy-traffic-deadlock-bug.c test/test-ipc-send-recv.c test/test-ipc.c test/test-loop-alive.c test/test-loop-close.c test/test-loop-configure.c test/test-loop-handles.c test/test-loop-stop.c test/test-loop-time.c test/test-metrics.c test/test-multiple-listen.c test/test-mutexes.c test/test-not-readable-nor-writable-on-read-error.c test/test-not-writable-after-shutdown.c test/test-osx-select.c test/test-pass-always.c test/test-ping-pong.c test/test-pipe-bind-error.c test/test-pipe-close-stdout-read-stdin.c test/test-pipe-connect-error.c test/test-pipe-connect-multiple.c test/test-pipe-connect-prepare.c test/test-pipe-getsockname.c test/test-pipe-pending-instances.c test/test-pipe-sendmsg.c test/test-pipe-server-close.c test/test-pipe-set-fchmod.c test/test-pipe-set-non-blocking.c test/test-platform-output.c test/test-poll-close-doesnt-corrupt-stack.c test/test-poll-close.c test/test-poll-closesocket.c test/test-poll-multiple-handles.c test/test-poll-oob.c test/test-poll.c test/test-process-priority.c test/test-process-title-threadsafe.c test/test-process-title.c test/test-queue-foreach-delete.c test/test-random.c test/test-readable-on-eof.c test/test-ref.c test/test-run-nowait.c test/test-run-once.c test/test-semaphore.c test/test-shutdown-close.c test/test-shutdown-eof.c test/test-shutdown-simultaneous.c 
test/test-shutdown-twice.c test/test-signal-multiple-loops.c test/test-signal-pending-on-close.c test/test-signal.c test/test-socket-buffer-size.c test/test-spawn.c test/test-stdio-over-pipes.c test/test-strscpy.c test/test-strtok.c test/test-tcp-alloc-cb-fail.c test/test-tcp-bind-error.c test/test-tcp-bind6-error.c test/test-tcp-close-accept.c test/test-tcp-close-after-read-timeout.c test/test-tcp-close-while-connecting.c test/test-tcp-close.c test/test-tcp-close-reset.c test/test-tcp-connect-error-after-write.c test/test-tcp-connect-error.c test/test-tcp-connect-timeout.c test/test-tcp-connect6-error.c test/test-tcp-create-socket-early.c test/test-tcp-flags.c test/test-tcp-oob.c test/test-tcp-open.c test/test-tcp-read-stop.c test/test-tcp-read-stop-start.c test/test-tcp-rst.c test/test-tcp-shutdown-after-write.c test/test-tcp-try-write.c test/test-tcp-try-write-error.c test/test-tcp-unexpected-read.c test/test-tcp-write-after-connect.c test/test-tcp-write-fail.c test/test-tcp-write-queue-order.c test/test-tcp-write-to-half-open-connection.c test/test-tcp-writealot.c test/test-test-macros.c test/test-thread-equal.c test/test-thread.c test/test-threadpool-cancel.c test/test-threadpool.c test/test-timer-again.c test/test-timer-from-check.c test/test-timer.c test/test-tmpdir.c test/test-tty-duplicate-key.c test/test-tty-escape-sequence-processing.c test/test-tty.c test/test-udp-alloc-cb-fail.c test/test-udp-bind.c test/test-udp-connect.c test/test-udp-connect6.c test/test-udp-create-socket-early.c test/test-udp-dgram-too-big.c test/test-udp-ipv6.c test/test-udp-mmsg.c test/test-udp-multicast-interface.c test/test-udp-multicast-interface6.c test/test-udp-multicast-join.c test/test-udp-multicast-join6.c test/test-udp-multicast-ttl.c test/test-udp-open.c test/test-udp-options.c test/test-udp-send-and-recv.c test/test-udp-send-hang-loop.c test/test-udp-send-immediate.c test/test-udp-sendmmsg-error.c test/test-udp-send-unreachable.c test/test-udp-try-send.c test/test-uname.c test/test-walk-handles.c test/test-watcher-cross-stop.c) add_executable(uv_run_tests ${uv_test_sources} uv_win_longpath.manifest) target_compile_definitions(uv_run_tests PRIVATE ${uv_defines} USING_UV_SHARED=1) target_compile_options(uv_run_tests PRIVATE ${uv_cflags}) target_link_libraries(uv_run_tests uv ${uv_test_libraries}) add_test(NAME uv_test COMMAND uv_run_tests WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}) if(CMAKE_SYSTEM_NAME STREQUAL "OS390") set_tests_properties(uv_test PROPERTIES ENVIRONMENT "LIBPATH=${CMAKE_BINARY_DIR}:$ENV{LIBPATH}") endif() add_executable(uv_run_tests_a ${uv_test_sources} uv_win_longpath.manifest) target_compile_definitions(uv_run_tests_a PRIVATE ${uv_defines}) target_compile_options(uv_run_tests_a PRIVATE ${uv_cflags}) if(QEMU) target_link_libraries(uv_run_tests_a uv_a ${uv_test_libraries} -static) else() target_link_libraries(uv_run_tests_a uv_a ${uv_test_libraries}) endif() add_test(NAME uv_test_a COMMAND uv_run_tests_a WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}) if(CMAKE_SYSTEM_NAME STREQUAL "OS390") set_target_properties(uv_run_benchmarks_a PROPERTIES LINKER_LANGUAGE CXX) set_target_properties(uv_run_tests PROPERTIES LINKER_LANGUAGE CXX) set_target_properties(uv_run_tests_a PROPERTIES LINKER_LANGUAGE CXX) endif() endif() # Now for some gibbering horrors from beyond the stars... foreach(lib IN LISTS uv_libraries) list(APPEND LIBS "-l${lib}") endforeach() string(REPLACE ";" " " LIBS "${LIBS}") # Consider setting project version via project() call? 
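# Note: the commands below read the AC_INIT line from configure.ac (for
# example, a line such as
#   AC_INIT([libuv], [1.44.2], [https://github.com/libuv/libuv/issues])
# ); PACKAGE_VERSION then holds the matched version string and
# UV_VERSION_MAJOR its leading component, which drives the VERSION and
# SOVERSION properties set on the shared library below.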
file(STRINGS configure.ac configure_ac REGEX ^AC_INIT) string(REGEX MATCH "([0-9]+)[.][0-9]+[.][0-9]+" PACKAGE_VERSION "${configure_ac}") set(UV_VERSION_MAJOR "${CMAKE_MATCH_1}") # The version in the filename is mirroring the behaviour of autotools. set_target_properties(uv PROPERTIES VERSION ${UV_VERSION_MAJOR}.0.0 SOVERSION ${UV_VERSION_MAJOR}) set(includedir ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}) set(libdir ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}) set(prefix ${CMAKE_INSTALL_PREFIX}) configure_file(libuv.pc.in libuv.pc @ONLY) configure_file(libuv-static.pc.in libuv-static.pc @ONLY) install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}) install(FILES LICENSE DESTINATION ${CMAKE_INSTALL_DOCDIR}) install(FILES ${PROJECT_BINARY_DIR}/libuv.pc ${PROJECT_BINARY_DIR}/libuv-static.pc DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig) install(TARGETS uv EXPORT libuvConfig RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}) install(TARGETS uv_a EXPORT libuvConfig ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}) install(EXPORT libuvConfig DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/libuv) if(MSVC) set(CMAKE_DEBUG_POSTFIX d) endif() message(STATUS "summary of build options: Install prefix: ${CMAKE_INSTALL_PREFIX} Target system: ${CMAKE_SYSTEM_NAME} Compiler: C compiler: ${CMAKE_C_COMPILER} (${CMAKE_C_COMPILER_ID}) CFLAGS: ${CMAKE_C_FLAGS_${_build_type}} ${CMAKE_C_FLAGS} ") gevent-24.11.1/deps/libuv/CONTRIBUTING.md000066400000000000000000000131541471441230600174630ustar00rootroot00000000000000
# CONTRIBUTING

The libuv project welcomes new contributors. This document will guide you
through the process.

### FORK

Fork the project [on GitHub](https://github.com/libuv/libuv) and check out
your copy.

```
$ git clone https://github.com/username/libuv.git
$ cd libuv
$ git remote add upstream https://github.com/libuv/libuv.git
```

Now decide if you want your feature or bug fix to go into the master branch
or the stable branch. As a rule of thumb, bug fixes go into the stable branch
while new features go into the master branch.

The stable branch is effectively frozen; patches that change the libuv
API/ABI or affect the run-time behavior of applications get rejected.

In case of doubt, open an issue in the [issue tracker][], post your question
to the [libuv discussions forum], or message the [libuv mailing list].
Especially do so if you plan to work on something big. Nothing is more
frustrating than seeing your hard work go to waste because your vision does
not align with that of the [project maintainers].

### BRANCH

Okay, so you have decided on the proper branch. Create a feature branch and
start hacking:

```
$ git checkout -b my-feature-branch -t origin/v1.x
```

(Where v1.x is the latest stable branch as of this writing.)

### CODE

Please adhere to libuv's code style. In general it follows the conventions
from the [Google C/C++ style guide]. Some of the key points, as well as some
additional guidelines, are enumerated below.

* Code that is specific to unix-y platforms should be placed in `src/unix`,
  and declarations go into `include/uv/unix.h`.
* Source code that is Windows-specific goes into `src/win`, and related
  publicly exported types, functions and macro declarations should generally
  be declared in `include/uv/win.h`.
* Names should be descriptive and concise.
* All the symbols and types that libuv makes available publicly should be
  prefixed with `uv_` (or `UV_` in case of macros).
* Internal, non-static functions should be prefixed with `uv__`.
* Use two spaces and no tabs.
* Lines should be wrapped at 80 characters.
* Ensure that lines have no trailing whitespace, and use unix-style (LF)
  line endings.
* Use C89-compliant syntax. In other words, variables can only be declared
  at the top of a scope (function, if/for/while-block).
* When writing comments, use properly constructed sentences, including
  punctuation.
* When documenting APIs and/or source code, don't make assumptions or make
  implications about race, gender, religion, political orientation or
  anything else that isn't relevant to the project.
* Remember that source code usually gets written once and read often: ensure
  the reader doesn't have to make guesses. Make sure that the purpose and
  inner logic are either obvious to a reasonably skilled professional, or add
  a comment that explains it.

(A short sketch illustrating these naming and layout conventions follows the
PUSH section below.)

### COMMIT

Make sure git knows your name and email address:

```
$ git config --global user.name "J. Random User"
$ git config --global user.email "j.random.user@example.com"
```

Writing good commit logs is important. A commit log should describe what
changed and why. Follow these guidelines when writing one:

1. The first line should be 50 characters or less and contain a short
   description of the change prefixed with the name of the changed subsystem
   (e.g. "net: add localAddress and localPort to Socket").
2. Keep the second line blank.
3. Wrap all other lines at 72 columns.

A good commit log looks like this:

```
subsystem: explaining the commit in one line

Body of commit message is a few lines of text, explaining things
in more detail, possibly giving some background about the issue
being fixed, etc etc.

The body of the commit message can be several paragraphs, and
please do proper word-wrap and keep columns shorter than about
72 characters or so. That way `git log` will show things
nicely even when it is indented.
```

The header line should be meaningful; it is what other people see when they
run `git shortlog` or `git log --oneline`.

Check the output of `git log --oneline files_that_you_changed` to find out
what subsystem (or subsystems) your changes touch.

### REBASE

Use `git rebase` (not `git merge`) to sync your work from time to time.

```
$ git fetch upstream
$ git rebase upstream/v1.x  # or upstream/master
```

### TEST

Bug fixes and features should come with tests. Add your tests in the `test/`
directory. Each new test needs to be registered in `test/test-list.h`.

If you add a new test file, it needs to be registered in two places:

- `CMakeLists.txt`: add the file's name to the `uv_test_sources` list.
- `Makefile.am`: add the file's name to the `test_run_tests_SOURCES` list.

Look at other tests to see how they should be structured (license
boilerplate, the way entry points are declared, etc.).

Check the README.md file to find out how to run the test suite and make sure
that there are no test regressions.

### PUSH

```
$ git push origin my-feature-branch
```

Go to https://github.com/username/libuv and select your feature branch. Click
the 'Pull Request' button and fill out the form.

Pull requests are usually reviewed within a few days. If there are comments
to address, apply your changes in a separate commit and push that to your
feature branch. Post a comment in the pull request afterwards; GitHub does
not send out notifications when you add commits.
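As an illustration of the CODE guidelines above, here is a minimal sketch.
The names (`uv_example_has_nonzero`, `uv__example_count_nonzero`) are made up
for this example and are not part of the libuv API; only the naming,
indentation and declaration style are the point:

```
/* Hypothetical example, not real libuv code: shows uv_ vs. uv__ naming,
 * two-space indentation, C89 declarations at the top of the scope, and
 * lines kept within 80 columns.
 */
#include <stddef.h>

/* Internal helper: static and prefixed with uv__. */
static size_t uv__example_count_nonzero(const int* values, size_t n) {
  size_t count;
  size_t i;

  count = 0;
  for (i = 0; i < n; i++)
    if (values[i] != 0)
      count++;

  return count;
}

/* Public entry point: prefixed with uv_ and declared in a public header. */
int uv_example_has_nonzero(const int* values, size_t n) {
  return uv__example_count_nonzero(values, n) > 0;
}
```

Keeping the helper `static` means only the `uv_`-prefixed symbol becomes part
of the library's public surface, which matches the naming rules above.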
[issue tracker]: https://github.com/libuv/libuv/issues [libuv mailing list]: http://groups.google.com/group/libuv [libuv discussions forum]: https://github.com/libuv/libuv/discussions [Google C/C++ style guide]: https://google.github.io/styleguide/cppguide.html [project maintainers]: https://github.com/libuv/libuv/blob/master/MAINTAINERS.md gevent-24.11.1/deps/libuv/ChangeLog000066400000000000000000005033311471441230600170050ustar00rootroot000000000000002022.07.12, Version 1.44.2 (Stable) Changes since version 1.44.1: * Add SHA to ChangeLog (Jameson Nash) * aix, ibmi: handle server hang when remote sends TCP RST (V-for-Vasili) * build: make CI a bit noisier (Jameson Nash) * process: reset the signal mask if the fork fails (Jameson Nash) * zos: implement cmpxchgi() using assembly (Shuowang (Wayne) Zhang) * build: AC_SUBST for AM_CFLAGS (Claes Nästén) * ibmi: Implement UDP disconnect (V-for-Vasili) * doc: update active maintainers list (Ben Noordhuis) * build: fix kFreeBSD build (James McCoy) * build: remove Windows 2016 workflows (Darshan Sen) * Revert "win,errors: remap ERROR_ACCESS_DENIED to UV_EACCES" (Darshan Sen) * unix: simplify getpwuid call (Jameson Nash) * build: filter CI by paths and branches (Jameson Nash) * build: add iOS to macos CI (Jameson Nash) * build: re-enable CI for windows changes (Jameson Nash) * process,iOS: fix build breakage in process.c (Denny C. Dai) * test: remove unused declarations in tcp_rst test (V-for-Vasili) * core: add thread-safe strtok implementation (Guilherme Íscaro) * win: fix incompatible-types warning (twosee) * test: fix flaky file watcher test (Ben Noordhuis) * build: fix AIX xlc autotools build (V-for-Vasili) * unix,win: fix UV_RUN_ONCE + uv_idle_stop loop hang (Ben Noordhuis) * win: fix unexpected ECONNRESET error on TCP socket (twosee) * doc: make sample cross-platform build (gengjiawen) * test: separate some static variables by test cases (Hannah Shi) * sunos: fs-event callback can be called after uv_close() (Andy Fiddaman) * uv: re-register interest in a file after change (Shuowang (Wayne) Zhang) * uv: register UV_RENAME event for _RFIM_UNLINK (Shuowang (Wayne) Zhang) * uv: register __rfim_event 156 as UV_RENAME (Shuowang (Wayne) Zhang) * doc: remove smartos from supported platforms (Ben Noordhuis) * macos: avoid posix_spawnp() cwd bug (Jameson Nash) * release: check versions of autogen scripts are newer (Jameson Nash) * test: rewrite embed test (Ben Noordhuis) * openbsd: use utimensat instead of lutimes (tuftedocelot) * doc: fix link to uvwget example main() function (blogdaren) * unix: use MSG_CMSG_CLOEXEC where supported (Ben Noordhuis) * test: remove disabled callback_order test (Ben Noordhuis) * win,pipe: fix bugs with pipe resource lifetime management (Jameson Nash) * loop: better align order-of-events behavior between platforms (Jameson Nash) * aix,test: uv_backend_fd is not supported by poll (V-for-Vasili) * kqueue: skip EVFILT_PROC when invalidating fds (chucksilvers) * darwin: fix atomic-ops.h ppc64 build (Sergey Fedorov) * zos: don't err when killing a zombie process (Shuowang (Wayne) Zhang) * zos: avoid fs event callbacks after uv_close() (Shuowang (Wayne) Zhang) * zos: correctly format interface addresses names (Shuowang (Wayne) Zhang) * zos: add uv_interface_addresses() netmask support (Shuowang (Wayne) Zhang) * zos: improve memory management of ip addresses (Shuowang (Wayne) Zhang) * tcp,pipe: fail `bind` or `listen` after `close` (theanarkh) * zos: implement uv_available_parallelism() (Shuowang (Wayne) Zhang) * udp,win: fix 
UDP compiler warning (Jameson Nash) * zos: fix early exit of epoll_wait() (Shuowang (Wayne) Zhang) * unix,tcp: fix errno handling in uv__tcp_bind() (Samuel Cabrero) * shutdown,unix: reduce code duplication (Jameson Nash) * unix: fix c99 comments (Ben Noordhuis) * unix: retry tcgetattr/tcsetattr() on EINTR (Ben Noordhuis) * docs: update introduction.rst (Ikko Ashimine) * unix,stream: optimize uv_shutdown() codepath (Jameson Nash) * zos: delay signal handling until after normal i/o (Shuowang (Wayne) Zhang) * stream: uv__drain() always needs to stop POLLOUT (Jameson Nash) * unix,tcp: allow EINVAL errno from setsockopt in uv_tcp_close_reset() (Stacey Marshall) * win,shutdown: improve how shutdown is dispatched (Jameson Nash) 2022.03.09, Version 1.44.1 (Stable), e8b7eb6908a847ffbe6ab2eec7428e43a0aa53a2 Changes since version 1.44.0: * process: simplify uv__write_int calls (Jameson Nash) * macos: don't use thread-unsafe strtok() (Ben Noordhuis) * process: fix hang after NOTE_EXIT (Jameson Nash) 2022.03.07, Version 1.44.0 (Stable), d2bff508457336d808ba7148b33088f6acbfe0a6 Changes since version 1.43.0: * darwin: remove EPROTOTYPE error workaround (Ben Noordhuis) * doc: fix v1.43.0 changelog entries (cjihrig) * win: replace CRITICAL_SECTION+Semaphore with SRWLock (David Machaj) * darwin: translate EPROTOTYPE to ECONNRESET (Ben Noordhuis) * android: use libc getifaddrs() (Ben Noordhuis) * unix: fix STATIC_ASSERT to check what it means to check (Jessica Clarke) * unix: ensure struct msghdr is zeroed in recvmmsg (Ondřej Surý) * test: test with maximum recvmmsg buffer (Ondřej Surý) * unix: don't allow too small thread stack size (Ben Noordhuis) * bsd: ensure mutex is initialized (Ben Noordhuis) * doc: add gengjiawen as maintainer (gengjiawen) * process: monitor for exit with kqueue on BSDs (Jeremy Rose) * test: fix flaky uv_fs_lutime test (Momtchil Momtchev) * build: fix cmake install locations (Jameson Nash) * thread,win: fix C90 style nit (ssrlive) * build: rename CFLAGS to AM_CFLAGS (Jameson Nash) * doc/guide: update content and sample code (woclass) * process,bsd: handle kevent NOTE_EXIT failure (Jameson Nash) * test: remove flaky test ipc_closed_handle (Ben Noordhuis) * darwin: bump minimum supported version to 10.15 (Ben Noordhuis) * win: return fractional seconds in uv_uptime() (Luca Adrian L) * build: export uv_a for cmake (WenTao Ou) * loop: add pending work to loop-alive check (Jameson Nash) * win: use GetTickCount64 for uptime again (Jameson Nash) * win: restrict system DLL load paths (jonilaitinen) * win,errors: remap ERROR_ACCESS_DENIED to UV_EACCES (Darshan Sen) * bench: add `uv_queue_work` ping-pong measurement (Momtchil Momtchev) * build: fix error C4146 on MSVC (UMU) * test: fix benchmark-ping-udp (Ryan Liptak) * win,fs: consider broken pipe error a normal EOF (Momtchil Momtchev) * document the values of enum uv_stdio_flags (Paul Evans) * win,loop: add missing uv_update_time (twosee) * win,fs: avoid closing an invalid handle (Jameson Nash) * fix oopsie from * doc: clarify android api level (Ben Noordhuis) * win: fix style nits [NFC] (Jameson Nash) * test: fix flaky udp_mmsg test (Santiago Gimeno) * test: fix ipc_send_recv_pipe flakiness (Ben Noordhuis) * doc: checkout -> check out (wyckster) * core: change uv_get_password uid/gid to unsigned (Jameson Nash) * hurd: unbreak build on GNU/Hurd (Vittore F. 
Scolari) * freebsd: use copy_file_range() in uv_fs_sendfile() (David Carlier) * test: use closefd in runner-unix.c (Guilherme Íscaro) * Reland "macos: use posix_spawn instead of fork" (Jameson Nash) * android: fix build error when no ifaddrs.h (ssrlive) * unix,win: add uv_available_parallelism() (Ben Noordhuis) * process: remove OpenBSD from kevent list (Jameson Nash) * zos: fix build breakage (Ben Noordhuis) * process: only use F_DUPFD_CLOEXEC if it is defined (Jameson Nash) * win,poll: add the MSAFD GUID for AF_UNIX (roflcopter4) * unix: simplify uv__cloexec_fcntl() (Ben Noordhuis) * doc: add secondary GPG ID for vtjnash (Jameson Nash) * unix: remove uv__cloexec_ioctl() (Jameson Nash) 2022.01.05, Version 1.43.0 (Stable), 988f2bfc4defb9a85a536a3e645834c161143ee0 Changes since version 1.42.0: * run test named ip6_sin6_len (Jameson Nash) * docs: fix wrong information about scheduling (Mohamed Edrah) * unix: protect fork in uv_spawn from signals (Jameson Nash) * drop only successfully sent packets post sendmmsg (Supragya Raj) * test: fix typo in test-tty-escape-sequence-processing.c (Ikko Ashimine) * cmake: use standard installation layout always (Sylvain Corlay) * win,spawn: allow UNC path with forward slash (earnal) * win,fsevent: fix uv_fs_event_stop() assert (Ben Noordhuis) * unix: remove redundant include in unix.h (Juan José Arboleda) * doc: mark SmartOS as Tier 3 support (Ben Noordhuis) * doc: fix broken links for netbsd's sysctl manpage (YAKSH BARIYA) * misc: adjust stalebot deadline (Ben Noordhuis) * test: remove `dns-server.c` as it is not used anywhere (Darshan Sen) * build: fix non-cmake android builds (YAKSH BARIYA) * doc: replace pyuv with uvloop (Ofek Lev) * asan: fix some tests (Jameson Nash) * build: add experimental TSAN configuration (Jameson Nash) * pipe: remove useless assertion (~locpyl-tidnyd) * bsd: destroy mutex in uv__process_title_cleanup() (Darshan Sen) * build: add windows build to CI (Darshan Sen) * win,fs: fix error code in uv_fs_read() and uv_fs_write() (Darshan Sen) * build: add macos-latest to ci matrix (Ben Noordhuis) * udp: fix &/&& typo in macro condition (Evan Miller) * build: install cmake package module (Petr Menšík) * win: fix build for mingw32 (Nicolas Noble) * build: fix build failures with MinGW new headers (erw7) * build: fix win build with cmake versions before v3.14 (AJ Heller) * unix: support aarch64 in uv_cpu_info() (Juan José Arboleda) * linux: work around CIFS EPERM bug (Ben Noordhuis) * sunos: Oracle Developer Studio support (Stacey Marshall) * Revert "sunos: Oracle Developer Studio support (cjihrig) * sunos: Oracle Developer Studio support (Stacey Marshall) * stream: permit read after seeing EOF (Jameson Nash) * thread: initialize uv_thread_self for all threads (Jameson Nash) * kqueue: ignore write-end closed notifications (Jameson Nash) * macos: fix the cfdata length in uv__get_cpu_speed (Jesper Storm Bache) * unix,win: add uv_ip_name to get name from sockaddr (Campbell He) * win,test: fix a few typos (AJ Heller) * zos: use destructor for uv__threadpool_cleanup() (Wayne Zhang) * linux: use MemAvailable instead of MemFree (Andrey Hohutkin) * freebsd: call dlerror() only if necessary (Jameson Nash) * bsd,windows,zos: fix udp disconnect EINVAL (deal) 2021.07.21, Version 1.42.0 (Stable), 6ce14710da7079eb248868171f6343bc409ea3a4 Changes since version 1.41.0: * doc: fix code highlighting (Darshan Sen) * test: move to ASSERT_NULL and ASSERT_NOT_NULL test macros (tjarlama) * zos: build in ascii code page (Shuowang (Wayne) Zhang) * zos: don't use 
nanosecond timestamp fields (Shuowang (Wayne) Zhang) * zos: introduce zoslib (Shuowang (Wayne) Zhang) * zos: use strnlen() from zoslib (Shuowang (Wayne) Zhang) * zos: use nanosleep() from zoslib (Shuowang (Wayne) Zhang) * zos: use __getargv() from zoslib to get exe path (Shuowang (Wayne) Zhang) * zos: treat __rfim_utok as binary (Shuowang (Wayne) Zhang) * zos: use execvpe() to set environ explictly (Shuowang (Wayne) Zhang) * zos: use custom proctitle implementation (Shuowang (Wayne) Zhang) * doc: add instructions for building on z/OS (Shuowang (Wayne) Zhang) * linux,udp: enable full ICMP error reporting (Ondřej Surý) * test: fix test-udp-send-unreachable (Ondřej Surý) * include: fix typo in documentation (Tobias Nießen) * chore: use for(;;) instead of while (Yash Ladha) * test: remove string + int warning on udp-pummel (Juan José Arboleda) * cmake: fix linker flags (Zhao Zhili) * test: fix stack-use-after-scope (Zhao Zhili) * unix: expose thread_stack_size() internally (Brandon Cheng) * darwin: use RLIMIT_STACK for fsevents pthread (Brandon Cheng) * darwin: abort on pthread_attr_init fail (Brandon Cheng) * benchmark: remove unreachable code (Matvii Hodovaniuk) * macos: fix memleaks in uv__get_cpu_speed (George Zhao) * Make Thread Sanitizer aware of file descriptor close in uv__close() (Ondřej Surý) * darwin: fix iOS compilation and functionality (Hayden) * linux: work around copy_file_range() cephfs bug (Ben Noordhuis) * zos: implement uv_get_constrained_memory() (Shuowang (Wayne) Zhang) * zos: fix uv_get_free_memory() (Shuowang (Wayne) Zhang) * zos: use CVTRLSTG to get total memory accurately (Shuowang (Wayne) Zhang) * ibmi: Handle interface names longer than 10 chars (Kevin Adler) * docs: update read-the-docs version of sphinx (Jameson Nash) * unix: refactor uv_try_write (twosee) * linux-core: add proper divide by zero assert (yiyuaner) * misc: remove unnecessary _GNU_SOURCE macros (Darshan Sen) * test: log to stdout to conform TAP spec (bbara) * win,fs: fix C4090 warning with MSVC (SeverinLeonhardt) * build: some systems provide dlopen() in libc (Andy Fiddaman) * include: add EOVERFLOW status code mapping (Darshan Sen) * unix,fs: use uv__load_relaxed and uv__store_relaxed (Darshan Sen) * win: fix string encoding issue of uv_os_gethostname (Eagle Liang) * unix,process: add uv__write_errno helper function (Ricky Zhou) * Re-merge "unix,stream: clear read/write states on close/eof" (Jameson Nash) * unix,core: fix errno handling in uv__getpwuid_r (Darshan Sen) * errors: map ESOCKTNOSUPPORT errno (Ryan Liptak) * doc: uv_read_stop always succeeds (Simon Kissane) * inet: fix inconsistent return value of inet_ntop6 (twosee) * darwin: fix -Wsometimes-uninitialized warning (twosee) * stream: introduce uv_try_write2 function (twosee) * poll,win: UV_PRIORITIZED option should not assert (twosee) * src: DragonFlyBSD has mmsghdr struct too (David Carlier) * cleanup,win: Remove _WIN32 guards on threadpool (James M Snell) * freebsd: fix an incompatible pointer type warning (Darshan Sen) * core: Correct the conditionals for {cloexec,nonblock}_ioctl (Ali Mohammad Pur) * win,tcp: make uv_close work more like unix (Jameson Nash) * doc: more accurate list of valid send_handle's (twosee) * win,tcp: translate system errors correctly (twosee) * unix: implement cpu_relax() on ppc64 (Ben Noordhuis) * docs: move list of project links under PR control (Jameson Nash) * test: wrong pointer arithmetic multiplier (Erkhes N) * doc: switch discussion forum to github (Jameson Nash) * idna: fix OOB read in punycode decoder 
(Ben Noordhuis) * build: make sure -fvisibility=hidden is set (Santiago Gimeno) * illumos: event ports to epoll (tjarlama) * illumos,tty: UV_TTY_MODE_IO waits for 4 bytes (Joshua M. Clulow) * doc: add vtjnash GPG ID (Jameson Nash) * linux: read CPU model information on ppc (Richard Lau) * darwin: fix uv_barrier race condition (Guilherme Íscaro) * unix,stream: fix loop hang after uv_shutdown (Jameson Nash) * doc,udp: note that suggested_size is 1 max-sized dgram (Ryan Liptak) * mingw: fix building for ARM/AArch64 (Martin Storsjö) * unix: strnlen is not available on Solaris 10 (Claes Nästén) * sunos: restore use of event ports (Andy Fiddaman) * sunos,cmake: use thread-safe errno (Andy Fiddaman) 2021.02.14, Version 1.41.0 (Stable), 1dff88e5161cba5c59276d2070d2e304e4dcb242 Changes since version 1.40.0: * mailmap: update contact information for richardlau (Richard Lau) * build: add asan checks (gengjiawen) * unix: report bind error in uv_tcp_connect() (Ben Noordhuis) * doc: uv_tcp_bind() never returns UV_EADDRINUSE (Ben Noordhuis) * test: fix pump and tcp_write_batch benchmarks (Santiago Gimeno) * doc: mark IBM i as Tier 2 support (Jesse Gorzinski) * doc,poll: add notes (repeated cb & cancel pending cb) (Elad Nachmias) * linux: fix -Wincompatible-pointer-types warning (Ben Noordhuis) * linux: fix -Wsign-compare warning (Ben Noordhuis) * android: add system call api guards (Ben Noordhuis) * unix,win: harmonize uv_read_start() error handling (Ben Noordhuis) * unix,win: more uv_read_start() argument validation (Ben Noordhuis) * build: turn on -fno-strict-aliasing (Ben Noordhuis) * stream: add uv_pipe and uv_socketpair to the API (Jameson Nash) * unix,win: initialize timer `timeout` field (Ben Noordhuis) * bsd-ifaddrs: improve comments (Darshan Sen) * test: remove unnecessary uv_fs_stat() calls (Ben Noordhuis) * fs: fix utime/futime timestamp rounding errors (Ben Noordhuis) * test: ensure reliable floating point comparison (Jameson Nash) * unix,fs: fix uv_fs_sendfile() (Santiago Gimeno) * unix: fix uv_fs_stat when using statx (Simon Kadisch) * linux,macos: fix uv_set_process_title regression (Momtchil Momtchev) * doc: clarify UDP errors and recvmmsg (Ethel Weston) * test-getaddrinfo: use example.invalid (Drew DeVault) * Revert "build: fix android autotools build" (Bernardo Ramos) * unix,fs: on DVS fs, statx returns EOPNOTSUPP (Mark Klein) * win, fs: mkdir really return UV_EINVAL for invalid names (Nicholas Vavilov) * tools: migrate tools/make_dist_html.py to python3 (Dominique Dumont) * unix: fix uv_uptime() on linux (schamberg97) * unix: check for partial copy_file_range support (Momtchil Momtchev) * win: bump minimum supported version to windows 8 (Ben Noordhuis) * poll,unix: ensure safety of rapid fd reuse (Bob Weinand) * test: fix some warnings (Issam E. 
Maghni) * unix: fix uv_uptime() regression (Santiago Gimeno) * doc: fix versionadded metadata (cjihrig) * test: fix 'incompatible pointer types' warnings (cjihrig) * unix: check for EXDEV in uv__fs_sendfile() (Darshan Sen) 2020.09.26, Version 1.40.0 (Stable), 4e69e333252693bd82d6338d6124f0416538dbfc Changes since version 1.39.0: * udp: add UV_UDP_MMSG_FREE recv_cb flag (Ryan Liptak) * include: re-map UV__EPROTO from 4046 to -4046 (YuMeiJie) * doc: correct UV_UDP_MMSG_FREE version added (cjihrig) * doc: add uv_metrics_idle_time() version metadata (Ryan Liptak) * win,tty: pass through utf-16 surrogate pairs (Mustafa M) * unix: fix DragonFly BSD build (Aleksej Lebedev) * win,udp: fix error code returned by connect() (Santiago Gimeno) * src: suppress user_timeout maybe-uninitialized (Daniel Bevenius) * test: fix compiler warning (Vladimír Čunát) * build: fix the Haiku cmake build (David Carlier) * linux: fix i386 sendmmsg/recvmmsg support (Ben Noordhuis) * build: add libuv-static pkg-config file (Nikolay Mitev) * unix,win: add uv_timer_get_due_in() (Ulrik Strid) * build,unix: add QNX support (Elad Lahav) * include: remove incorrect UV__ERR() for EPROTO (cjihrig) 2020.08.26, Version 1.39.0 (Stable), 25f4b8b8a3c0f934158cd37a37b0525d75ca488e Changes since version 1.38.1: * unix: use relaxed loads/stores for clock id (Ben Noordhuis) * build,win: link to user32.lib and advapi32.lib (George Zhao) * unix: squelch harmless valgrind warning (ssrlive) * include: fx c++ style comments warnings (Turbinya) * build,cmake: Change installation location on MinGW (erw7) * linux: use copy_file_range for uv_fs_copyfile when possible (Carter Li) * win,tcp: avoid reinserting a pending request ( * docs: improve the descriptions for get memory info (Juan Sebastian velez Posada) * test: add udp-mmsg test (Ryan Liptak) * udp: add uv_udp_using_recvmmsg query (Ryan Liptak) * doc: add more error constants (TK-one) * zos: fix potential event loop stall (Trevor Norris) * include: add internal fields struct to uv_loop_t (Trevor Norris) * core: add API to measure event loop idle time (Trevor Norris) * win,fs: use CreateDirectoryW instead of _wmkdir (Mustafa M) * win,nfc: fix integer comparison signedness (escherstair) * win,nfc: use * win,nfc: removed some unused variables (escherstair) * win,nfc: add missing return statement (escherstair) * win,nfc: disable clang-format for * darwin: use IOKit for uv_cpu_info (Evan Lucas) * test: fix thread race in process_title_threadsafe (Ben Noordhuis) * win,fs: avoid implicit access to _doserrno (Jameson Nash) * test: give hrtime test a custom 20s timeout (Jameson Nash) * build: add more failed test, for qemu version bump (gengjiawen) * unix: handle src, dest same in uv_fs_copyfile() (cjihrig) * unix: error when uv_setup_args() is not called (Ryan Liptak) * aix: protect uv_exepath() from uv_set_process_title() (Richard Lau) * fs: clobber req->path on uv_fs_mkstemp() error (tjarlama) * cmake: fix compile error C2001 on Chinese Windows (司徒玟琅) * test: avoid double evaluation in ASSERT_BASE macro (tjarlama) * tcp: fail instantly if local port is unbound (Bartosz Sosnowski) * doc: fix most sphinx warnings (Jameson Nash) * nfci: address some style nits (Jameson Nash) * unix: don't use _POSIX_PATH_MAX (Ben Noordhuis) 2020.07.04, Version 1.38.1 (Stable), e8b989ea1f7f9d4083511a2caec7791e9abd1871 Changes since version 1.38.0: * test: use last matching qemu version (cjihrig) * win, util: rearrange uv_hrtime (Bartosz Sosnowski) * test: skip signal_multiple_loops test on QEMU (gengjiawen) * build: 
add android build to CI (gengjiawen) * test: extend fs_event_error_reporting timeout (cjihrig) * build: link libkvm on netbsd only (Alexander Tokmakov) * linux: refactor /proc file reader logic (Ben Noordhuis) * linux: read load average from /proc/loadavg (Ben Noordhuis) * android: remove patch code for below 21 (gengjiawen) * win: fix visual studio 2008 build (Arenoros) * win,tty: fix deadlock caused by inconsistent state (lander0s) * unix: use relaxed loads/stores for feature checks (Ben Noordhuis) * build: don't .gitignore m4/ax_pthread.m4 (Ben Noordhuis) * unix: fix gcc atomics feature check (Ben Noordhuis) * darwin: work around clock jumping back in time (Ben Noordhuis) * udp: fix write_queue cleanup on sendmmsg error (Santiago Gimeno) * src: build fix for Android (David Carlier) 2020.05.18, Version 1.38.0 (Stable), 1ab9ea3790378f9f25c4e78e9e2b511c75f9c9ed Changes since version 1.37.0: * test: skip poll_duplex and poll_unidirectional on PASE (Xu Meng) * linux: make cpu_times consistently be milliseconds (James Ross) * win: DRY uv_poll_start() and uv_poll_stop() (Ben Noordhuis) * win: DRY uv_poll_close() (Ben Noordhuis) * unix,win: add uv_library_shutdown() (Ben Noordhuis) * unix: yield cpu when spinlocking on async handle (Ben Noordhuis) * win: remove dep on GetQueuedCompletionStatusEx (Colin Finck) * doc: correct source lines (Shohei YOSHIDA) * build,android: fix typo (twosee) * doc: uv_cancel() handles uv_random_t requests (Philip Chimento) * doc: fix unescaped character (Philip Chimento) * build,cmake: fix compilation on old MinGW (erw7) * build: remove unnessesary MSVC warnings (Bartosz Sosnowski) * win: make uv_udp_init_ex() accept UV_UDP_RECVMMSG (Ben Noordhuis) * unix: simplify uv__udp_init_ex() (Ben Noordhuis) * win: remove MAX_PATH limitations (Bartosz Sosnowski) * build, win: add long path aware manifest (Bartosz Sosnowski) * doc: check/idle/prepare functions always succeed (Ben Noordhuis) * darwin: fix build with non-apple compilers (Ben Noordhuis) * win: support environment variables > 32767 chars (Ben Noordhuis) * unix: fully initialize struct msghdr (Ben Noordhuis) * doc: add uv_replace_allocator thread safety warning (twosee) * unix: fix int overflow when copying large files (Michal Artazov) * fs: report original error (Bartosz Sosnowski) * win, fs: add IO_REPARSE_TAG_APPEXECLINK support (Bartosz Sosnowski) * doc: fix formatting (Ben Noordhuis) * unix: fix memory leak when uv_loop_init() fails (Anna Henningsen) * unix: shrink uv_udp_set_source_membership() stack (Ben Noordhuis) * unix,win: fix wrong sizeof argument to memcpy() (Ben Noordhuis) * build: check for libraries not provided by libc (Jeroen Roovers) * doc: fix the order of arguments to calloc() (MasterDuke17) * unix: don't abort when getrlimit() fails (Ben Noordhuis) * test: support common user profile on IBMi (Xu Meng) * build: test on more platforms via QEMU in CI (gengjiawen) 2020.04.20, Version 1.37.0 (Stable), 02a9e1be252b623ee032a3137c0b0c94afbe6809 Changes since version 1.36.0: * timer: remove redundant check in heap compare (Yash Ladha) * udp: add flag to enable recvmmsg(2) explicitly (Saúl Ibarra Corretgé) 2020.04.16, Version 1.36.0 (Stable), 533b738838ad8407032e14b6772b29ef9af63cfa Changes since version 1.35.0: * build: add aix-common.c for AIX cmake build (Jesse Gorzinski) * zos: explicitly mark message queue events (Irek Fakhrutdinov) * zos: move mq check out of loop to save cpu cycles (Irek Fakhrutdinov) * zos: add checks to ensure behavior of epoll_wait (Irek Fakhrutdinov) * src: add 
uv__reallocf() (Ben Noordhuis) * build: ibmi support for cmake (Jesse Gorzinski) * build: fix gyp build for Android API >= 28 (Lin Zhang) * udp: return recvmmsg-ed datagrams in order (Saúl Ibarra Corretgé) * zos,test: fix spawn_empty_env for shared library build (Richard Lau) * zos: fix non-Release builds (Richard Lau) * zos: fix return value on expired nanosleep() call (Richard Lau) * build: fix z/OS cmake build (Richard Lau) * test: add a bunch of ASSERT macros (Santiago Gimeno) * test: remove unused extern declaration (Ben Noordhuis) * test: canonicalize argv[0] in exepath test (Ben Noordhuis) * test: simplify platform_init() (Ben Noordhuis) * ibmi: Fix isatty EBADF handling and refactor (Kevin Adler) * test: Test EBADF tty handling (Kevin Adler) * build: make cmake build benchmarks (Ben Noordhuis) * win: use RtlGenRandom from advapi32.dll directly (Ben Noordhuis) * android: fix OOB write in uv_interface_addresses() (Lin Zhang) * test: pass test when hostname is single character (毛毛) * ibmi: set the highest process priority to -10 (Xu Meng) * build: remove support for gyp (Ben Noordhuis) * doc: add note to README on cross-compiling (Ben Noordhuis) * fs: add uv_fs_lutime() (Sk Sajidul Kadir) * unix: implement cpu_relax() for arm (David Carlier) * linux: fix uv__accept4() (twosee) * win: handle file paths in uv_fs_statfs() (erw7) * unix: fix uv_os_environ() null pointer check (Rikard Falkeborn) * win: fix uv_os_environ() null pointer check (Rikard Falkeborn) * unix: fix compilation on macOS 32-bit architectures (Brad King) * win: replace alloca() with stack-based array (Ben Noordhuis) 2020.03.12, Version 1.35.0 (Stable), e45f1ec38db882f8dc17b51f51a6684027034609 Changes since version 1.34.2: * src: android build fix (David Carlier) * build: make code compilable for iOS on Xcode (ssrlive) * ibmi: skip unsupported fs test cases (Xu Meng) * ibmi: ensure that pipe backlog is not zero (Xu Meng) * test,udp6: fix udp_ipv6 test flakiness (Jameson Nash) * test: fix fs_event_watch_dir_recursive flakiness (Santiago Gimeno) * pipe: disallow listening on an IPC pipe (Witold Kręcicki) * build,cmake: improve buil experience (Isabella Muerte) * unix: remove support for FreeBSD < 10 (Saúl Ibarra Corretgé) * linux: simplify uv__accept() (Ben Noordhuis) * linux: assume presence of SOCK_CLOEXEC flag (Ben Noordhuis) * linux: simplify uv__dup2_cloexec() (Ben Noordhuis) * freebsd,linux: simplify uv__make_socketpair() (Ben Noordhuis) * unix: fix error handling in uv__make_socketpair() (Ben Noordhuis) * freebsd,linux: simplify uv__make_pipe() (Ben Noordhuis) * unix: fix error handling in uv__make_pipe() (Ben Noordhuis) * linux: simplify uv__async_eventfd() (Ben Noordhuis) * linux: assume the presence of inotify system calls (Ben Noordhuis) * doc: strip ICC profile from 2 jpg files (Dominique Dumont) * unix: make uv_tcp_keepalive predictable (Manuel BACHMANN) * docs: uv_setup_args() may take ownership of argv (Ben Noordhuis) * unix: fix error path in uv_setup_args() (Ben Noordhuis) * unix: fix size check in uv_get_process_title() (Ben Noordhuis) * doc: add erw7 to maintainers (erw7) * test: fixed udp4_echo_server implementation (Marek Vavrusa) * test: added udp ping benchmark (1,10,100 pingers) (Marek Vavrusa) * freebsd,linux: add recvmmsg() + sendmmsg() udp implementation (Marek Vavrusa) * win,pipe: DRY/simplify some code paths (Jameson Nash) * win: address some style nits (Jameson Nash) * win,pipe: ensure `req->event_handle` is defined (Elliot Saba) * win,pipe: consolidate overlapped initialization (Elliot Saba) 
* win,pipe: erase event_handle after deleting pointer (Jameson Nash) * build: fix android cmake build, build missing file (Ben Noordhuis) * test: skip some UDP tests on IBMi (Xu Meng) * test: skip some spawn test cases on IBMi (Xu Meng) * src: fix wrong method name in comment (TK-one) * test: add UV_TIMEOUT_MULTIPLIER environment var (Ben Noordhuis) * unix: fix uv_cpu_info always returning UV_ENOTDIR on OpenBSD (Ben Davies) * test: skip the pwd_shell test on IBMi (Xu Meng) * win,tty: Change to restore cursor shape with uv_tty_reset() (erw7) * win,tty: Added set cursor style to CSI sequences (erw7) * test: handle EINTR, fix EOF check in poll test (Ben Noordhuis) * unix: use socklen_t instead of size_t (Ben Noordhuis) * doc: fix header file location (TK-one) * unix: fix signal handle closing deferral (Ben Noordhuis) * ibmi: set the amount of memory in use to zero (Xu Meng) * zos: return on realloc failure in scandir() (Milad Farazmand) * zos: fix scandir() error path NULL pointer deref (Ben Noordhuis) 2020.01.24, Version 1.34.2 (Stable), f868c9ab0c307525a16fff99fd21e32a6ebc3837 Changes since version 1.34.1: * misc: adjust stalebot deadlines (Jameson Nash) * test: fix env-vars flakiness (cjihrig) * test: avoid truncating output lines (Jameson Nash) * darwin: stop calling SetApplicationIsDaemon() (Ben Noordhuis) * ibmi: implement uv_interface_addresses() (Xu Meng) * osx,fsevent: fix race during uv_loop_close (Jameson Nash) * osx,fsevent: clear pointer when deleting it [NFCI] (Jameson Nash) * Revert "aix: replace ECONNRESET with EOF if already closed" (Jameson Nash) * unix: handle uv__open_cloexec return value correctly (Anna Henningsen) 2020.01.13, Version 1.34.1 (Stable), 8aa5636ec72990bb2856f81e14c95813024a5c2b Changes since version 1.34.0: * unix: fix -Wstrict-aliasing compiler warning (Ben Noordhuis) * unix: cache address of dlsym("mkostemp") (Ben Noordhuis) * build: remove -pedantic from compiler flags (Ben Noordhuis) * Revert "darwin: assume pthread_setname_np() is available" (Ben Noordhuis) * Revert "darwin: speed up uv_set_process_title()" (Ben Noordhuis) * darwin: assume pthread_setname_np() is available (Ben Noordhuis) * ibmi: fix the false isatty() issue on IBMi (Xu Meng) * test: fix test failure under NetBSD and OpenBSD (David Carlier) * test: skip some test cases on IBMi (Xu Meng) * test: skip uv_(get|set)_process_title on IBMi (Xu Meng) * doc: remove binaries for Windows from README (Richard Lau) * unix: fix -Wunused-but-set-variable warning (George Zhao) * unix: pass sysctl size arg using ARRAY_SIZE macro (David Carlier) * test: disallow running the test suite as root (cjihrig) * unix: suppress -Waddress-of-packed-member warning (Ben Noordhuis) * misc: make more tags "not-stale" (Jameson Nash) * test: fix pthread memory leak (Trevor Norris) * docs: delete socks5-proxy sample (Jameson Nash) * ibmi: fix the CMSG length issue (Xu Meng) * docs: fix formatting (Jameson Nash) * unix: squelch fchmod() EPERM on CIFS share (Ben Noordhuis) * docs: fix linkcheck (Jameson Nash) * docs: switch from linux.die.net to man7.org (Jameson Nash) * win: remove abort when non-IFS LSP detection fails (virtualyw) * docs: clarify that uv_pipe_t is a pipe (Jameson Nash) * win,tty: avoid regressions in utf-8 handling (Jameson Nash) * win: remove bad assert in uv_loop_close (Jameson Nash) * test: fix -fno-common build errors (Ben Noordhuis) * build: turn on -fno-common to catch regressions (Ben Noordhuis) * test: fix fs birth time test failure (Ben Noordhuis) * tty,unix: avoid affecting controlling TTY 
(Jameson Nash) 2019.12.05, Version 1.34.0 (Stable), 15ae750151ac9341e5945eb38f8982d59fb99201 Changes since version 1.33.1: * unix: move random-sysctl to random-sysctl-linux (nia) * netbsd: use KERN_ARND sysctl to get entropy (nia) * unix: refactor uv__fs_copyfile() logic (cjihrig) * build: fix android build, add missing sources (Ben Noordhuis) * build: fix android build, fix symbol redefinition (Ben Noordhuis) * build: fix android autotools build (Ben Noordhuis) * fs: handle non-functional statx system call (Milad Farazmand) * unix,win: add uv_sleep() (cjihrig) * doc: add richardlau to maintainers (Richard Lau) * aix: fix netmask for IPv6 (Richard Lau) * aix: clean up after errors in uv_interface_addresses() (Richard Lau) * aix: fix setting of physical addresses (Richard Lau) * fs: add uv_fs_mkstemp (Saúl Ibarra Corretgé) * unix: switch uv_sleep() to nanosleep() (Ben Noordhuis) * unix: retry on EINTR in uv_sleep() (Ben Noordhuis) * zos: fix nanosleep() emulation (Ben Noordhuis) 2019.10.20, Version 1.33.1 (Stable), 07ad32138f4d2285ba2226b5e20462b27b091a59 Changes since version 1.33.0: * linux: fix arm64 SYS__sysctl build breakage (Ben Noordhuis) 2019.10.17, Version 1.33.0 (Stable), e56e42e9310e4437e1886dbd6771792c14c0a5f3 Changes since version 1.32.0: * Revert "linux: drop code path for epoll_pwait-less kernels" (Yang Yu) * build: fix build error with __ANDROID_API__ < 21 (Yang Yu) * win: fix reading hidden env vars (Anna Henningsen) * unix,win: add uv_random() (Ben Noordhuis) * win: simplify mkdtemp (Saúl Ibarra Corretgé) * docs: fix literal-includes in User Guide (Nhan Khong) * win, tty: fix problem of receiving unexpected SIGWINCH (erw7) * unix: fix {Net,Open}BSD build (David Carlier) * win,mingw: Fix undefined MCAST_* constants (Crunkle) * build: Add link for test/fixtures/lorem_ipsum.txt (Andrew Paprocki) * fs: use statvfs in uv__fs_statfs() for Haiku (Calvin Hill) * fsevents: stop using fsevents to watch files (Jameson Nash) * fsevents: regression in watching / (Jameson Nash) * build,cmake: don't try to detect a C++ compiler (Isabella Muerte) * build: fix build warning on cygwin (MaYuming) * unix: set sin_len and sin6_len (Ouyang Yadong) * test: fix order of operations in test (cjihrig) * doc: improve uv_fs_readdir() cleanup docs (cjihrig) * build: remove duplicated test in build files (ZYSzys) * android: enable getentropy on Android >= 28 (David Carlier) * android: fix build (David Carlier) * darwin: speed up uv_set_process_title() (Ben Noordhuis) * darwin: assume pthread_setname_np() is available (Ben Noordhuis) * unix,udp: ensure addr is non-null (Jameson Nash) * win,tty: add uv_tty_{get,set}_vterm_state (erw7) * win: fix uv_statfs_t leak in uv_fs_statfs() (Ryan Liptak) * build: install files on windows via cmake (Carl Lei) * darwin,test: include AvailabilityMacros.h (Saúl Ibarra Corretgé) * darwin,test: update loop time after sleeping (Saúl Ibarra Corretgé) * doc: remove old FreeBSD 9 related note (Saúl Ibarra Corretgé) * doc: improve uv_{send,recv}_buffer_size() docs (Ryan Liptak) * build: move -Wno-long-long check to configure time (Ben Noordhuis) * unix: update uv_fs_copyfile() fallback logic (Stefan Bender) * win: cast setsockopt struct to const char* (Shelley Vohr) 2019.09.10, Version 1.32.0 (Stable), 697bea87b3a0b0e9b4e5ff86b39d1dedb70ee46d Changes since version 1.31.0: * misc: enable stalebot (Saúl Ibarra Corretgé) * win: map ERROR_ENVVAR_NOT_FOUND to UV_ENOENT (cjihrig) * win: use L'\0' as UTF-16 null terminator (cjihrig) * win: support retrieving empty env variables 
(cjihrig) * unix,stream: fix returned error codes (Santiago Gimeno) * test: fix typo in DYLD_LIBRARY_PATH (Ben Noordhuis) * unix,signal: keep handle active if pending signal (Santiago Gimeno) * openbsd: fix uv_cpu_info (Santiago Gimeno) * src: move uv_free_cpu_info to uv-common.c (Santiago Gimeno) * tcp: add uv_tcp_close_reset method (Santiago Gimeno) * test: fix udp-multicast-join tests (Santiago Gimeno) * test: remove assertion in fs_statfs test (cjihrig) * doc: clarify uv_buf_t usage in uv_alloc_cb (Tomas Krizek) * win: fix typo in preprocessor expression (Konstantin Podsvirov) * timer: fix uv_timer_start on closing timer (seny) * udp: add source-specific multicast support (Vladimir Karnushin) * udp: fix error return values (Santiago Gimeno) * udp: drop IPV6_SSM_SUPPORT macro (Santiago Gimeno) * udp: fix uv__udp_set_source_membership6 (Santiago Gimeno) * udp: use sockaddr_storage instead of union (Santiago Gimeno) * build,zos: add _OPEN_SYS_SOCK_EXT3 flag (Santiago Gimeno) * test: add specific source multicast tests (Santiago Gimeno) * include: map EILSEQ error code (cjihrig) * win, tty: improve SIGWINCH performance (Bartosz Sosnowski) * build: fix ios build error (MaYuming) * aix: replace ECONNRESET with EOF if already closed (Milad Farazmand) * build: add cmake library VERSION, SOVERSION (Eneas U de Queiroz) * build: make include/ public in CMakeLists.txt (Ben Noordhuis) * build: export USING_UV_SHARED=1 to cmake deps (Ben Noordhuis) * build: cmake_minimum_required(VERSION 2.8.12) (Daniel Hahler) * aix: Fix broken cmpxchgi() XL C++ specialization. (Andrew Paprocki) * test: fix -Wsign-compare warning (Ben Noordhuis) * unix: simplify open(O_CLOEXEC) feature detection (Ben Noordhuis) * unix: fix UV_FS_O_DIRECT definition on Linux (Joran Dirk Greef) * doc: uv_handle_t documentation suggestion (Daniel Bevenius) 2019.08.10, Version 1.31.0 (Stable), 0a6771cee4c15184c924bfe9d397bdd0c3b206ba Changes since version 1.30.1: * win,fs: don't modify global file translation mode (Javier Blazquez) * win: fix uv_os_tmpdir when env var is 260 chars (Mustafa M) * win: prevent tty event explosion machine hang (Javier Blazquez) * win: add UV_FS_O_FILEMAP (João Reis) * win, fs: mkdir return UV_EINVAL for invalid names (Bartosz Sosnowski) * github: add root warning to template (cjihrig) * win: misc fs cleanup (cjihrig) * unix,win: add uv_fs_statfs() (cjihrig) * test: avoid AF_LOCAL (Carlo Marcelo Arenas Belón) * unix,win: add ability to retrieve all env variables (Saúl Ibarra Corretgé) * Revert "darwin: speed up uv_set_process_title()" (Ben Noordhuis) * doc: add %p to valgrind log-file arg (Zach Bjornson) * doc: fix typo in basics.rst (Nan Xiao) * ibmi: support Makefile build for IBM i (Xu Meng) * OpenBSD: only get active CPU core count (Ben Davies) * test: fix gcc 8 warnings for tests (Nhan Khong) * ibmi: use correct header files (Xu Meng) * unix: clear UV_HANDLE_READING flag before callback (zyxwvu Shi) * unix: fix unused-function warning on BSD (Nhan Khong) * test: fix test runner on MinGW (Crunkle) * win: remove try-except outside MSVC (Crunkle) * win: fix uv_spawn() ENOMEM on empty env (Ben Noordhuis) 2019.07.03, Version 1.30.1 (Stable), 1551969c84c2f546a429dac169c7fdac3e38115e Changes since version 1.30.0: * doc: fix incorrect versionchanged (cjihrig) * test: allow UV_ECONNRESET in tcp_try_write_error (cjihrig) * unix: add uv_get_constrained_memory() cygwin stub (cjihrig) * build: fix android cmake build (Ben Noordhuis) * unix: squelch -Wcast-function-type warning (Ben Noordhuis) * build: fix compile 
error with uClibc (zlargon) 2019.06.28, Version 1.30.0 (Stable), 365b6f2a0eacda1ff52be8e57ab9381cfddc5dbb Changes since version 1.29.1: * darwin: fall back to F_BARRIERFSYNC (Ben Noordhuis) * darwin: add 32 bit close$NOCANCEL implementation (ken-cunningham-webuse) * build, core, unix: add support for Haiku (Leorize) * darwin,linux: more conservative minimum stack size (Ben Noordhuis) * threadpool: increase UV_THREADPOOL_SIZE limit (Vlad A) * unix: return actual error from `uv_try_write()` (Anna Henningsen) * darwin: fix build error with macos 10.10 (Ben Noordhuis) * unix: make uv_cwd() report UV_ENOBUFS (Ben Noordhuis) * unix: make uv_fs_read() fill all buffers (Ben Noordhuis) * test: give hrtime test a custom 10s timeout (Ben Noordhuis) * fs: fix uv_fs_copyfile if same src and dst (Santiago Gimeno) * build: add cmake option to skip building tests (Niels Lohmann) * doc: add link to nodejs.org (Jenil Christo) * unix: fix a comment typo in signal.c (Evgeny Ermakov) * unix: remove redundant cast in process.c (gengjiawen) * doc: fix wrong mutex function prototypes (Leo Chung) 2019.05.22, Version 1.29.1 (Stable), d16e6094e1eb3b0b5981ef1dd7e03ec4d466944d Changes since version 1.29.0: * unix: simplify uv/posix.h include logic (cjihrig) * test: increase test timeout (cjihrig) * linux: fix sscanf() overflows reading from /proc (Ben Noordhuis) 2019.05.16, Version 1.29.0 (Stable), 43957efd92c167b352b4c948b617ca7afbee0ed1 Changes since version 1.28.0: * ibmi: read memory and CPU usage info (Xu Meng) * doc: update the cmake testing instruction (zlargon) * unix: fix race condition in uv_async_send() (Ben Noordhuis) * linux: use O_CLOEXEC instead of EPOLL_CLOEXEC (Ben Noordhuis) * doc: mark uv_async_send() as async-signal-safe (Ben Noordhuis) * linux: init st_flags and st_gen when using statx (Oscar Waddell) * linux: read free/total memory from /proc/meminfo (Ben Noordhuis) * test: test zero-sized uv_fs_sendfile() writes (Ben Noordhuis) * unix: don't assert on UV_PROCESS_WINDOWS_* flags (Ben Noordhuis) * linux: set correct mac address for IP-aliases (Santiago Gimeno) * win,util: fix null pointer dereferencing (Tobias Nießen) * unix,win: fix `uv_fs_poll_stop()` when active (Anna Henningsen) * doc: add missing uv_fs_type entries (Michele Caini) * doc: fix build with sphinx 2.x (FX Coudert) * unix: don't make statx system call on Android (George Zhao) * unix: fix clang scan-build warning (Kyle Edwards) * unix: fall back to kqueue on older macOS systems (ken-cunningham-webuse) * unix,win: add uv_get_constrained_memory() (Kelvin Jin) * darwin: fix thread cancellation fd leak (Ben Noordhuis) * linux: fix thread cancellation fd leak (Ben Noordhuis) 2019.04.16, Version 1.28.0 (Stable), 7bf8fabfa934660ee0fe889f78e151198a1165fc Changes since version 1.27.0: * unix,win: add uv_gettimeofday() (cjihrig) * unix,win: add uv_fs_{open,read,close}dir() (cjihrig) * unix: fix uv_interface_addresses() (Andreas Rohner) * fs: remove macOS-specific copyfile(3) (Rich Trott) * fs: add test for copyfile() respecting permissions (Rich Trott) * build: partially revert 5234b1c43a (Ben Noordhuis) * zos: fix setsockopt error when using AF_UNIX (Milad Farazmand) * unix: suppress EINTR/EINPROGRESS in uv_fs_close() (Ben Noordhuis) * build: use cmake APPLE variable to detect platform (zlargon) * distcheck: remove duplicate test/ entry (Jameson Nash) * unix: remove unused cmpxchgl() function (Ben Noordhuis) * unix: support sockaddr_un in uv_udp_send() (Yury Selivanov) * unix: guard use of PTHREAD_STACK_MIN (Kamil Rytarowski) * unix,win: 
introduce uv_timeval64_t (cjihrig) * doc: document uv_timeval_t and uv_timeval64_t (cjihrig) 2019.03.17, Version 1.27.0 (Stable), a4fc9a66cc35256dbc4dcd67c910174f05b6daa6 Changes since version 1.26.0: * doc: describe unix signal handling better (Vladimír Čunát) * linux: use statx() to obtain file birth time (Ben Noordhuis) * src: fill sockaddr_in6.sin6_len when it's defined (Santiago Gimeno) * test: relax uv_hrtime() test assumptions (Ben Noordhuis) * build: make cmake install LICENSE only once (Thomas Karl Pietrowski) * bsd: plug uv_fs_event_start() error path fd leak (Ben Noordhuis) * unix: fix __FreeBSD_kernel__ typo (cjihrig) * doc: add note about uv_run() not being reentrant (Ben Noordhuis) * unix, win: make fs-poll close wait for resource cleanup (Anna Henningsen) * doc: fix typo in uv_thread_options_t definition (Ryan Liptak) * win: skip winsock initialization in safe mode (evgley) * unix: refactor getsockname/getpeername methods (Santiago Gimeno) * win,udp: allow to use uv_udp_open on bound sockets (Santiago Gimeno) * udp: add support for UDP connected sockets (Santiago Gimeno) * build: fix uv_test shared uv Windows cmake build (ptlomholt) * build: add android-configure scripts to EXTRA_DIST (Ben Noordhuis) * build: add missing header (cjihrig) * sunos: add perror() output prior to abort() (Andrew Paprocki) * test,sunos: disable UV_DISCONNECT handling (Andrew Paprocki) * sunos: disable __attribute__((unused)) (Andrew Paprocki) * test,sunos: use unistd.h code branch (Andrew Paprocki) * build,sunos: better handling of non-GCC compiler (Andrew Paprocki) * test,sunos: fix statement not reached warnings (Andrew Paprocki) * sunos: fix argument/prototype mismatch in atomics (Andrew Paprocki) * test,sunos: test-ipc.c lacks newline at EOF (Andrew Paprocki) * test: change spawn_stdin_stdout return to void (Andrew Paprocki) * test: remove call to floor() in test driver (Andrew Paprocki) 2019.02.11, Version 1.26.0 (Stable), 8669d8d3e93cddb62611b267ef62a3ddb5ba3ca0 Changes since version 1.25.0: * doc: fix uv_get_free_memory doc (Stephen Belanger) * unix: fix epoll cpu 100% issue (yeyuanfeng) * openbsd,tcp: special handling of EINVAL on connect (ptlomholt) * win: simplify registry closing in uv_cpu_info() (cjihrig) * src,include: define UV_MAXHOSTNAMESIZE (cjihrig) * win: return product name in uv_os_uname() version (cjihrig) * thread: allow specifying stack size for new thread (Anna Henningsen) * win: fix duplicate tty vt100 fn key (erw7) * unix: don't attempt to invalidate invalid fd (Ben Noordhuis) 2019.01.19, Version 1.25.0 (Stable), 4a10a9d425863330af199e4b74bd688e62d945f1 Changes since version 1.24.1: * Revert "win,fs: retry if uv_fs_rename fails" (Ben Noordhuis) * aix: manually trigger fs event monitoring (Gireesh Punathil) * unix: rename WRITE_RETRY_ON_ERROR macro (Ben Noordhuis) * darwin: DRY platform-specific error check (Ben Noordhuis) * unix: refactor uv__write() (Ben Noordhuis) * unix: don't send handle twice on partial write (Ben Noordhuis) * tty,win: fix Alt+key under WSL (Bartosz Sosnowski) * build: support running tests in out-of-tree builds (Jameson Nash) * fsevents: really watch files with fsevents on macos 10.7+ (Jameson Nash) * thread,mingw64: need intrin.h header for SSE2 MemoryBarrier (Jameson Nash) * win: fix sizeof-pointer-div warning (cjihrig) * unix,win: add uv_os_uname() (cjihrig) * win, tty: fix CreateFileW() return value check (Bartosz Sosnowski) * unix: enable IPv6 tests on OpenBSD (ptlomholt) * test: fix test-ipc spawn_helper exit_cb (Santiago Gimeno) * test: fix 
test-ipc tests (Santiago Gimeno) * unix: better handling of unsupported F_FULLFSYNC (Victor Costan) * win,test: de-flake fs_event_watch_dir_short_path (Refael Ackermann) * win: fix msvc warning (sid) * openbsd: switch to libuv's barrier implementation (ptlomholt) * unix,stream: fix zero byte writes (Santiago Gimeno) * ibmi: return EISDIR on read from directory fd (Kevin Adler) * build: wrap long lines in Makefile.am (cjihrig) 2018.12.17, Version 1.24.1 (Stable), 274f2bd3b70847cadd9a3965577a87e666ab9ac3 Changes since version 1.24.0: * test: fix platform_output test on cygwin (damon-kwok) * gitignore: ignore build/ directory (Damon Kwok) * unix: zero epoll_event before use (Ashe Connor) * darwin: use runtime check for file cloning (Ben Noordhuis) * doc: replace deprecated build command on macOS (Rick) * warnings: fix code that emits compiler warnings (Jameson Nash) * doc: clarify expected memory management strategy (Ivan Krylov) * test: add uv_inet_ntop(AF_INET) coverage (Ben Noordhuis) * unix: harden string copying, introduce strscpy() (Ben Noordhuis) * linux: get rid of strncpy() call (Ben Noordhuis) * aix: get rid of strcat() calls (Ben Noordhuis) * aix: fix data race in uv_fs_event_start() (Ben Noordhuis) * win: fs: fix `FILE_FLAG_NO_BUFFERING` for writes (Joran Dirk Greef) * build: don't link against -lpthread on Android (Michael Meier) 2018.11.14, Version 1.24.0 (Stable), 2d427ee0083d1baf995df4ebf79a3f8890e9a3e1 Changes since version 1.23.2: * unix: do not require PATH_MAX to be defined (Brad King) * win,doc: path encoding in uv_fs_XX is UTF-8 (hitesh) * unix: add missing link dependency on kFreeBSD (Svante Signell) * unix: add support for GNU/Hurd (Samuel Thibault) * test: avoid memory leak for test_output (Carlo Marcelo Arenas Belón) * zos: avoid UB with NULL pointer arithmetic (Carlo Marcelo Arenas Belón) * doc: add vtjnash to maintainers (Jameson Nash) * unix: restore skipping of phys_addr copy (cjihrig) * unix,win: make uv_interface_addresses() consistent (cjihrig) * unix: remove unnecessary linebreaks (cjihrig) * unix,win: handle zero-sized allocations uniformly (Ben Noordhuis) * unix: remove unused uv__dup() function (Ben Noordhuis) * core,bsd: refactor process_title functions (Santiago Gimeno) * win: Redefine NSIG to consider SIGWINCH (Jeremy Studer) * test: make sure that reading a directory fails (Sakthipriyan Vairamani) * win, tty: remove zero-size read callbacks (Bartosz Sosnowski) * test: fix test runner getenv async-signal-safety (Ben Noordhuis) * test: fix test runner execvp async-signal-safety (Ben Noordhuis) * test,unix: fix race in test runner (Ben Noordhuis) * unix,win: support IDNA 2008 in uv_getaddrinfo() (Ben Noordhuis) * win, tcp: avoid starving the loop (Bartosz Sosnowski) * win, dl: proper error messages on some systems (Bartosz Sosnowski) * win,fs: retry if uv_fs_rename fails (Bartosz Sosnowski) * darwin: speed up uv_set_process_title() (Ben Noordhuis) * aix: fix race in uv_get_process_title() (Gireesh Punathil) * win: support more fine-grained windows hiding (Bartosz Sosnowski) 2018.10.09, Version 1.23.2 (Stable), 34c12788d2e7308f3ac506c0abcbf74c0d6abd20 Changes since version 1.23.1: * unix: return 0 retrieving rss on cygwin (cjihrig) * unix: initialize uv_interface_address_t.phys_addr (cjihrig) * test: handle uv_os_setpriority() windows edge case (cjihrig) * tty, win: fix read stop for raw mode (Bartosz Sosnowski) * Revert "Revert "unix,fs: fix for potential partial reads/writes"" (Jameson Nash) * unix,readv: always permit partial reads to return (Jameson 
Nash) * win,tty: fix uv_tty_close() (Bartosz Sosnowski) * doc: remove extraneous "on" (Ben Noordhuis) * unix,win: fix threadpool race condition (Anna Henningsen) * unix: rework thread barrier implementation (Ben Noordhuis) * aix: switch to libuv's own thread barrier impl (Ben Noordhuis) * unix: signal done to last thread barrier waiter (Ben Noordhuis) * test: add uv_barrier_wait serial thread test (Ali Ijaz Sheikh) * unix: optimize uv_fs_readlink() memory allocation (Ben Noordhuis) * win: remove req.c and other cleanup (Carlo Marcelo Arenas Belón) * aix: don't EISDIR on read from directory fd (Ben Noordhuis) 2018.09.22, Version 1.23.1 (Stable), d2282b3d67821dc53c907c2155fa8c5c6ce25180 Changes since version 1.23.0: * unix,win: limit concurrent DNS calls to nthreads/2 (Anna Henningsen) * doc: add addaleax to maintainers (Anna Henningsen) * doc: add missing slash in stream.rst (Emil Bay) * unix,fs: use utimes & friends for uv_fs_utime (Jeremiah Senkpiel) * unix,fs: remove linux fallback from utimesat() (Jeremiah Senkpiel) * unix,fs: remove uv__utimesat() syscall fallback (Jeremiah Senkpiel) * doc: fix argument name in tcp.rts (Emil Bay) * doc: notes on running tests, benchmarks, tools (Jamie Davis) * linux: remove epoll syscall wrappers (Ben Noordhuis) * linux: drop code path for epoll_pwait-less kernels (Ben Noordhuis) * Partially revert "win,code: remove GetQueuedCompletionStatus-based poller" (Jameson Nash) * build: add compile for android arm64/x86/x86-64 (Andy Zhang) * doc: clarify that some remarks apply to windows (Bert Belder) * test: fix compiler warnings (Jamie Davis) * ibmi: return 0 from uv_resident_set_memory() (dmabupt) * win: fix uv_udp_recv_start() error translation (Ryan Liptak) * win,doc: improve uv_os_setpriority() documentation (Bartosz Sosnowski) * test: increase upper bound in condvar_5 (Jamie Davis) * win,tty: remove deadcode (Jameson Nash) * stream: autodetect direction (Jameson Nash) 2018.08.18, Version 1.23.0 (Stable), 7ebb26225f2eaae6db22f4ef34ce76fa16ff89ec Changes since version 1.22.0: * win,pipe: restore compatibility with the old IPC framing protocol (Bert Belder) * fs: add uv_open_osfhandle (Bartosz Sosnowski) * doc: update Visual C++ Build Tools URL (Michał Kozakiewicz) * unix: loop starvation on successful write complete (jBarz) * win: add uv__getnameinfo_work() error handling (A. 
Hauptmann) * win: return UV_ENOMEM from uv_loop_init() (cjihrig) * unix,win: add uv_os_{get,set}priority() (cjihrig) * test: fix warning in test-tcp-open (Santiago Gimeno) 2018.07.11, Version 1.22.0 (Stable), 8568f78a777d79d35eb7d6994617267b9fb33967 Changes since version 1.21.0: * unix: remove checksparse.sh (Ben Noordhuis) * win: fix mingw build error (Ben Noordhuis) * win: fix -Wunused-function warnings in thread.c (Ben Noordhuis) * unix,win: merge timers implementation (Ben Noordhuis) * win: fix pointer type in pipe.c (Ben Noordhuis) * win: fixing build for older MSVC compilers (Michael Fero) * zos: clear poll events on every iteration (jBarz) * zos: write-protect message queue (jBarz) * zos: use correct pointer type in strnlen (jBarz) * unix,win: merge handle flags (Ben Noordhuis) * doc: update Imran Iqbal's GitHub handle (cjihrig) * src: add new error apis to prevent memory leaks (Shelley Vohr) * test: make test-condvar call uv_cond_wait (Jamie Davis) * fs: change position of uv_fs_lchown (Ujjwal Sharma) 2018.06.23, Version 1.21.0 (Stable), e4983a9b0c152932f7553ff4a9ff189d2314cdcb Changes since version 1.20.3: * unix,windows: map EFTYPE errno (cjihrig) * win: perform case insensitive PATH= comparison (cjihrig) * win, fs: uv_fs_fchmod support for -A files (Bartosz Sosnowski) * src,lib: fix comments (Tobias Nießen) * win,process: allow child pipe handles to be opened in overlapped mode (Björn Linse) * src,test: fix idiosyncratic comment style (Bert Belder) * test: fs_fchmod_archive_readonly must return a value (Bert Belder) * win,pipe: fix incorrect error code returned from uv_pipe_write_impl() (Bert Belder) * win,pipe: properly set uv_write_t.send_handle in uv_write2() (Bert Belder) * test: add vectored uv_write() ping-pong tests (Bert Belder) * win,pipe: support vectored uv_write() calls (Bert Belder) * win,pipe: refactor pipe read cancellation logic (Bert Belder) * test: improve output from IPC test helpers (Bert Belder) * test: add test for IPC deadlock on Windows ( * win,pipe: fix IPC pipe deadlock (Bert Belder) * unix: catch some cases of watching fd twice (Ben Noordhuis) * test: use custom timeout for getaddrinfo_fail_sync (Ben Noordhuis) * Revert "win: add Windows XP support to uv_if_indextoname()" (Bert Belder) * win,thread: remove fallback uv_cond implementation (Bert Belder) * src,test: s/olny/only (cjihrig) * unix: close signal pipe fds on unload (Ben Noordhuis) * win: allow setting udp socket options before bind (cjihrig) * unix: return UV_ENOTSUP on FICLONE_FORCE failure (cjihrig) * win,pipe: remove unreferenced local variable (Bert Belder) * win,code: remove GetQueuedCompletionStatus-based poller (Bert Belder) * win: remove the remaining dynamic kernel32 imports (Bert Belder) * test: speedup process-title-threadsafe on macOS (cjihrig) * core: move all include files except uv.h to uv/ (Saúl Ibarra Corretgé) * win: move stdint-msvc2008.h to include/uv/ (Ben Noordhuis) * build: fix cygwin install (Ben Noordhuis) * build,win: remove MinGW Makefile (Saúl Ibarra Corretgé) * build: add a cmake build file (Ben Noordhuis) * build: add test suite option to cmake build (Ben Noordhuis) * unix: set errno in uv_fs_copyfile() (cjihrig) * samples: fix inconsistency in parse_opts vs usage (zyxwvu Shi) * linux: handle exclusive POLLHUP with UV_DISCONNECT (Brad King) * include: declare uv_cpu_times_s in higher scope (Peter Johnson) * doc: add uv_fs_fsync() AIX limitations (jBarz) * unix,win: add uv_fs_lchown() (Paolo Greppi) * unix: disable clang variable length array warning (Peter 
Johnson) * doc: document uv_pipe_t::ipc (Ed Schouten) * doc: undocument uv_req_type's UV_REQ_TYPE_PRIVATE (Ed Schouten) * doc: document UV_*_MAP() macros (Ed Schouten) * win: remove use of min() macro in pipe.c (Peter Johnson) * doc: add jbarz as maintainer ( 2018.05.08, Version 1.20.3 (Stable), 8cfd67e59195251dff793ee47c185c9d6a8f3818 Changes since version 1.20.2: * win: add Windows XP support to uv_if_indextoname() (ssrlive) * win: fix `'floor' undefined` compiler warning (ssrlive) * win, pipe: stop read for overlapped pipe (Bartosz Sosnowski) * build: fix utf-8 name of copyright holder (Jérémy Lal) * zos: initialize pollfd revents (jBarz) * zos,doc: add system V message queue note (jBarz) * linux: don't use uv__nonblock_ioctl() on sparc (Ben Noordhuis) 2018.04.23, Version 1.20.2 (Stable), c51fd3f66bbb386a1efdeba6812789f35a372d1e Changes since version 1.20.1: * zos: use custom semaphore (jBarz) * win: fix registry API error handling (Kyle Farnung) * build: add support for 64-bit AIX (Richard Lau) * aix: guard STATIC_ASSERT for glibc work around (Richard Lau) 2018.04.19, Version 1.20.1 (Stable), 36ac2fc8edfd5ff3e9be529be1d4a3f0d5364e94 Changes since version 1.20.0: * doc,fs: improve documentation (Bob Burger) * win: return a floored double from uv_uptime() (Refael Ackermann) * doc: clarify platform specific pipe naming (Thomas Versteeg) * unix: fix uv_pipe_chmod() on macOS (zzzjim) * unix: work around glibc semaphore race condition (Anna Henningsen) * tcp,openbsd: disable Unix TCP check for IPV6_ONLY (Alex Arslan) * test,openbsd: use RETURN_SKIP in UDP IPv6 tests (Alex Arslan) * test,openbsd: fix multicast test (Alex Arslan) * Revert "win, fs: use FILE_WRITE_ATTRIBUTES when opening files" (cjihrig) 2018.04.03, Version 1.20.0 (Stable), 0012178ee2b04d9e4a2c66c27cf8891ad8325ceb Changes since version 1.19.2: * unix,spawn: respect user stdio flags for new pipe (Jameson Nash) * Revert "Revert "unix,tcp: avoid marking server sockets connected"" (Jameson Nash) * req: revisions to uv_req_t handling (Jameson Nash) * win: remove unnecessary initialization (cjihrig) * win: update uv_os_homedir() to use uv_os_getenv() (cjihrig) * test: fix tcp_oob test flakiness (Santiago Gimeno) * posix: fix uv__pollfds_del() for invalidated fd's (Jesse Gorzinski) * doc: README: add note on installing gyp (Jamie Davis) * unix: refactor uv_os_homedir to use uv_os_getenv (Santiago Gimeno) * unix: fix several instances of lost errno (Michael Kilburn) * win,tty: update several TODO comments (Ruslan Bekenev) * unix: add UV_FS_COPYFILE_FICLONE support (cjihrig) * test: fix connect_unspecified (Santiago Gimeno) * unix,win: add UV_FS_COPYFILE_FICLONE_FORCE support (cjihrig) * win: use long directory name for handle->dirw (Nicholas Vavilov) * build: build with -D_FILE_OFFSET_BITS=64 again (Ben Noordhuis) * win, fs: fix uv_fs_unlink for +R -A files (Bartosz Sosnowski) * win, fs: use FILE_WRITE_ATTRIBUTES when opening files (Bartosz Sosnowski) * unix: use __PASE__ on IBM i platforms (Jesse Gorzinski) * test,freebsd: fix flaky poll tests (Santiago Gimeno) * test: increase connection timeout to 1 second (jBarz) * win,tcp: handle canceled connect with ECANCELED (Jameson Nash) 2018.02.22, Version 1.19.2 (Stable), c5afc37e2a8a70d8ab0da8dac10b77ba78c0488c Changes since version 1.19.1: * test: fix incorrect asserts (cjihrig) * test: fix a typo in test-fork.c (Felix Yan) * build: remove long-obsolete gyp workarounds (Ben Noordhuis) * build: split off tests into separate gyp file (Ben Noordhuis) * test: check uv_cond_timedwait more 
carefully (Jamie Davis) * include,src: introduce UV__ERR() macro (Mason X) * build: add url field to libuv.pc (Ben Noordhuis) * doc: mark IBM i as Tier 3 support (Jesse Gorzinski) * win,build: correct C2059 errors (Michael Fero) * zos: fix timeout for condition variable (jBarz) * win: CREATE_NO_WINDOW when stdio is not inherited (Nick Logan) * build: fix commmon.gypi comment (Ryuichi KAWAMATA) * doc: document uv_timer_start() on an active timer (Vladimír Čunát) * doc: add note about handle movability (Bartosz Sosnowski) * doc: fix syntax error in loop documentation (Bartosz Sosnowski) * osx,stream: retry sending handle on EMSGSIZE error (Santiago Gimeno) * unix: delay fs req register until after validation (cjihrig) * test: add tests for bad inputs (Joyee Cheung) * unix,win: ensure req->bufs is freed (cjihrig) * test: add additional fs memory management checks (cjihrig) 2018.01.20, Version 1.19.1 (Stable), 8202d1751196c2374ad370f7f3779daef89befae Changes since version 1.19.0: * Revert "unix,tcp: avoid marking server sockets connected" (Ben Noordhuis) * Revert "unix,fs: fix for potential partial reads/writes" (Ben Noordhuis) * Revert "win: use RemoveDirectoryW() instead of _wmrmdir()" (Ben Noordhuis) * cygwin: fix compilation of ifaddrs impl (Brad King) 2018.01.18, Version 1.19.0 (Stable), effbb7c9d29090b2e085a40867f8cdfa916a66df Changes since version 1.18.0: * core: add getter/setter functions for easier ABI compat (Anna Henningsen) * unix: make get(set)_process_title MT-safe (Matt Harrison) * unix,win: wait for threads to start (Ben Noordhuis) * test: add threadpool init/teardown test (Bartosz Sosnowski) * win, process: uv_kill improvements (Bartosz Sosnowski) * win: set _WIN32_WINNT to 0x0600 (cjihrig) * zos: implement uv_fs_event* functions (jBarz) * unix,tcp: avoid marking server sockets connected (Jameson Nash) * doc: mark Windows 7 as Tier 1 support (Bartosz Sosnowski) * win: map 0.0.0.0 and :: addresses to localhost (Bartosz Sosnowski) * build: install libuv.pc unconditionally (Ben Noordhuis) * test: remove custom timeout for thread test on ppc (Ben Noordhuis) * test: allow multicast not permitted status (Jérémy Lal) * test: allow net unreachable status in udp test (Ben Noordhuis) * unix: use SA_RESTART when setting our sighandler (Brad King) * unix,fs: fix for potential partial reads/writes (Ben Wijen) * win,build: do not build executable installer for dll (Bert Belder) * win: allow directory symlinks to be created in a non-elevated context (Bert Belder) * zos,test: accept SIGKILL for flaky test (jBarz) * win: use RemoveDirectoryW() instead of _wmrmdir() (Ben Noordhuis) * unix: fix uv_cpu_info() error on FreeBSD (elephantp) * zos,test: decrease pings to avoid timeout (jBarz) 2017.12.02, Version 1.18.0 (Stable), 1489c98b7fc17f1702821a269eb0c5e730c5c813 Changes since version 1.17.0: * aix: fix -Wmaybe-uninitialized warning (cjihrig) * doc: remove note about SIGWINCH on Windows (Bartosz Sosnowski) * Revert "unix,win: wait for threads to start" (Ben Noordhuis) * unix,win: add uv_os_getpid() (Bartosz Sosnowski) * unix: remove incorrect assertion in uv_shutdown() (Jameson Nash) * doc: fix IRC URL in CONTRIBUTING.md (Matt Harrison) 2017.11.25, Version 1.17.0 (Stable), 1344d2bb82e195d0eafc0b40ba103f18dfd04cc5 Changes since version 1.16.1: * unix: avoid malloc() call in uv_spawn() (Ben Noordhuis) * doc: clarify the description of uv_loop_alive() (Ed Schouten) * win: map UV_FS_O_EXLOCK to a share mode of 0 (Joran Dirk Greef) * win: fix build on case-sensitive file systems (Ben Noordhuis) * 
win: fix test runner build with mingw64 (Ben Noordhuis) * win: remove unused variable in test/test-fs.c (Ben Noordhuis) * zos: add strnlen() implementation (jBarz) * unix: keep track of bound sockets sent via spawn (jBarz) * unix,win: wait for threads to start (Ben Noordhuis) * test: add threadpool init/teardown test (Bartosz Sosnowski) * test: avoid malloc() in threadpool test (Ben Noordhuis) * test: lower number of tasks in threadpool test (Ben Noordhuis) * win: issue memory barrier in uv_thread_join() (Ben Noordhuis) * ibmi: add support for new platform (Xu Meng) * test: fix test-spawn compilation (Bartosz Sosnowski) 2017.11.11, Version 1.16.1 (Stable), 4056fbe46493ef87237e307e0025e551db875e13 Changes since version 1.16.0: * unix: move net/if.h include (cjihrig) * win: fix undeclared NDIS_IF_MAX_STRING_SIZE (Nick Logan) 2017.11.07, Version 1.16.0 (Stable), d68779f0ea742918f653b9c20237460271c39aeb Changes since version 1.15.0: * win: change st_blksize from `2048` to `4096` (Joran Dirk Greef) * unix,win: add fs open flags, map O_DIRECT|O_DSYNC (Joran Dirk Greef) * win, fs: fix non-symlink reparse points (Wade Brainerd) * test: fix -Wstrict-prototypes warnings (Ben Noordhuis) * unix, windows: map ENOTTY errno (Ben Noordhuis) * unix: fall back to fsync() if F_FULLFSYNC fails (Joran Dirk Greef) * unix: do not close invalid kqueue fd after fork (jBarz) * zos: reset epoll data after fork (jBarz) * zos: skip fork_threadpool_queue_work_simple (jBarz) * test: keep platform_output as first test (Bartosz Sosnowski) * win: fix non-English dlopen error message (Bartosz Sosnowski) * unix,win: add uv_os_getppid() (cjihrig) * test: fix const qualification compiler warning (Ben Noordhuis) * doc: mark uv_default_loop() as not thread safe (rayrase) * win, pipe: null-initialize stream->shutdown_req (Jameson Nash) * tty, win: get SetWinEventHook pointer at startup (Bartosz Sosnowski) * test: no extra new line in skipped test output (Bartosz Sosnowski) * pipe: allow access from other users (Bartosz Sosnowski) * unix,win: add uv_if_{indextoname,indextoiid} (Pekka Nikander) 2017.10.03, Version 1.15.0 (Stable), 8b69ce1419d2958011d415a636810705c36c2cc2 Changes since version 1.14.1: * unix: limit uv__has_forked_with_cfrunloop to macOS (Kamil Rytarowski) * win: fix buffer size in uv__getpwuid_r() (tux.uudiin) * win,tty: improve SIGWINCH support (Bartosz Sosnowski) * unix: use fchmod() in uv_fs_copyfile() (cjihrig) * unix: support copying empty files (cjihrig) * unix: truncate destination in uv_fs_copyfile() (Nick Logan) * win,build: keep cwd when setting build environment (darobs) * test: add NetBSD support to test-udp-ipv6.c (Kamil Rytarowski) * unix: add NetBSD support in core.c (Kamil Rytarowski) * linux: increase thread stack size with musl libc (Ben Noordhuis) * netbsd: correct uv_exepath() on NetBSD (Kamil Rytarowski) * test: clean up semaphore after use (jBarz) * win,build: bump vswhere_usability_wrapper to 2.0.0 (Refael Ackermann) * win: let UV_PROCESS_WINDOWS_HIDE hide consoles (cjihrig) * zos: lock protect global epoll list in epoll_ctl (jBarz) * zos: change platform name to match python (jBarz) * android: fix getifaddrs() (Zheng, Lei) * netbsd: implement uv__tty_is_slave() (Kamil Rytarowski) * zos: fix readlink for mounts with system variables (jBarz) * test: sort the tests alphabetically (Sakthipriyan Vairamani) * windows: fix compilation warnings (Carlo Marcelo Arenas Belón) * build: avoid -fstrict-aliasing compile option (jBarz) * win: remove unused variables (Carlo Marcelo Arenas Belón) * unix: 
remove unused variables (Sakthipriyan Vairamani) * netbsd: disable poll_bad_fdtype on NetBSD (Kamil Rytarowski) * netbsd: use uv__cloexec and uv__nonblock (Kamil Rytarowski) * test: fix udp_multicast_join6 on NetBSD (Kamil Rytarowski) * unix,win: add uv_mutex_init_recursive() (Scott Parker) * netbsd: do not exclude IPv6 functionality (Kamil Rytarowski) * fsevents: watch files with fsevents on macos 10.7+ (Ben Noordhuis) * unix: retry on ENOBUFS in sendmsg(2) (Kamil Rytarowski) 2017.09.07, Version 1.14.1 (Stable), b0f9fb2a07a5e638b1580fe9a42a356c3ab35f37 Changes since version 1.14.0: * fs, win: add support for user symlinks (Bartosz Sosnowski) * cygwin: include uv-posix.h header (Joel Winarske) * zos: fix semaphore initialization (jBarz) * zos: improve loop_count benchmark performance (jBarz) * zos, test: flush out the oob data in callback (jBarz) * unix,win: check for bad flags in uv_fs_copyfile() (cjihrig) * unix: modify argv[0] when process title is set (Matthew Taylor) * unix: don't use req->loop in uv__fs_copyfile() (cjihrig) * doc: fix a trivial typo (Vladimír Čunát) * android: fix uv_cond_timedwait on API level < 21 (Gergely Nagy) * win: add uv__once_init() calls (Bartosz Sosnowski) * unix,windows: init all requests in fs calls (cjihrig) * unix,windows: return UV_EINVAL on NULL fs reqs (cjihrig) * windows: add POST macro to fs functions (cjihrig) * unix: handle partial sends in uv_fs_copyfile() (A. Hauptmann) * Revert "win, test: fix double close in test runner" (Bartosz Sosnowski) * win, test: remove surplus CloseHandle (Bartosz Sosnowski) 2017.08.17, Version 1.14.0 (Stable), e0d31e9e21870f88277746b6d59cf07b977cdfea Changes since version 1.13.1: * unix: check for NULL in uv_os_unsetenv for parameter name (André Klitzing) * doc: add thread safety warning for process title (Matthew Taylor) * unix: always copy process title into local buffer (Matthew Taylor) * poll: add support for OOB TCP and GPIO interrupts (CurlyMoo) * win,build: fix appveyor properly (Refael Ackermann) * win: include filename in dlopen error message (Ben Noordhuis) * aix: add netmask, mac address into net interfaces (Gireesh Punathil) * unix, windows: map EREMOTEIO errno (Ben Noordhuis) * unix: fix wrong MAC of uv_interface_address (XadillaX) * win,build: fix building from Windows SDK or VS console (Saúl Ibarra Corretgé) * github: fix link to help repo in issue template (Ben Noordhuis) * zos: remove nonexistent include from autotools build (Saúl Ibarra Corretgé) * misc: remove reference to pthread-fixes.h from LICENSE (Saúl Ibarra Corretgé) * docs: fix guide source code example paths (Anticrisis) * android: fix compilation with new NDK versions (Saúl Ibarra Corretgé) * misc: add android-toolchain to .gitignore (Saúl Ibarra Corretgé) * win, fs: support unusual reparse points (Bartosz Sosnowski) * android: fix detection of pthread_condattr_setclock (Saúl Ibarra Corretgé) * android: remove no longer needed check (Saúl Ibarra Corretgé) * doc: update instructions for building on Android (Saúl Ibarra Corretgé) * win, process: support semicolons in PATH variable (Bartosz Sosnowski) * doc: document uv_async_(init|send) return values (Ben Noordhuis) * doc: add Android as a tier 3 supported platform (Saúl Ibarra Corretgé) * unix: add missing semicolon (jBarz) * win, test: fix double close in test runner (Bartosz Sosnowski) * doc: update supported windows version baseline (Ben Noordhuis) * test,zos: skip chown root test (jBarz) * test,zos: use gid=-1 to test spawn_setgid_fails (jBarz) * zos: fix hr timer resolution (jBarz) * 
android: fix blocking recvmsg due to netlink bug (Jacob Segal) * zos: read more accurate rss info from RSM (jBarz) * win: allow bound/connected socket in uv_tcp_open() (Maciej Szeptuch (Neverous)) * doc: differentiate SmartOS and SunOS support (cjihrig) * unix: make uv_poll_stop() remove fd from pollset (Ben Noordhuis) * unix, windows: add basic uv_fs_copyfile() (cjihrig) 2017.07.07, Version 1.13.1 (Stable), 2bb4b68758f07cd8617838e68c44c125bc567ba6 Changes since version 1.13.0: * Now working on version 1.13.1 (cjihrig) * build: workaround AppVeyor quirk (Refael Ackermann) 2017.07.06, Version 1.13.0 (Stable), 8342fcaab815f33b988c1910ea988f28dfe27edb Changes since version 1.12.0: * Now working on version 1.12.1 (cjihrig) * unix: avoid segfault in uv_get_process_title (Michele Caini) * build: add a comma to uv.gyp (Gemini Wen) * win: restore file pos after positional read/write (Bartosz Sosnowski) * unix,stream: return error on closed handle passing (Santiago Gimeno) * unix,benchmark: use fd instead of FILE* after fork (jBarz) * zos: avoid compiler warnings (jBarz) * win,pipe: race condition canceling readfile thread (Jameson Nash) * sunos: filter out non-IPv4/IPv6 interfaces (Sebastian Wiedenroth) * sunos: fix cmpxchgi and cmpxchgl type error (Sai Ke WANG) * unix: reset signal disposition before execve() (Ben Noordhuis) * unix: reset signal mask before execve() (Ben Noordhuis) * unix: fix POLLIN assertion on server read (jBarz) * zos: use stckf builtin for high-res timer (jBarz) * win,udp: implements uv_udp_try_send (Barnabas Gema) * win,udp: return UV_EINVAL instead of aborting (Romain Caire) * freebsd: replace kvm with sysctl (Robert Ayrapetyan) * aix: fix un-initialized pointer field in fs handle (Gireesh Punathil) * win,build: support building with VS2017 (Refael Ackermann) * doc: add instructions for building on Windows (Refael Ackermann) * doc: format README (Refael Ackermann) 2017.05.31, Version 1.12.0 (Stable), d6ac141ac674657049598c36604f26e031fae917 Changes since version 1.11.0: * Now working on version 1.11.1 (cjihrig) * test: fix tests on OpenBSD (Santiago Gimeno) * test: fix -Wformat warning (Santiago Gimeno) * win,fs: avoid double freeing uv_fs_event_t.dirw (Vladimir Matveev) * unix: remove unused code in `uv__io_start` (Fedor Indutny) * signal: add uv_signal_start_oneshot method (Santiago Gimeno) * unix: factor out reusable POSIX hrtime impl (Brad King) * unix,win: add uv_os_{get,set,unset}env() (cjihrig) * win: add uv__convert_utf8_to_utf16() (cjihrig) * docs: improve UV_ENOBUFS scenario documentation (cjihrig) * unix: return UV_EINVAL for NULL env name (jBarz) * unix: filter getifaddrs results consistently (Brad King) * unix: factor out getifaddrs result filter (Brad King) * unix: factor out reusable BSD ifaddrs impl (Brad King) * unix: use union to follow strict aliasing rules (jBarz) * unix: simplify async watcher dispatch logic (Ben Noordhuis) * samples: update timer callback prototype (Ben Noordhuis) * unix: make loops and watchers usable after fork() (Jason Madden) * win: free uv__loops once empty (cjihrig) * tools: add make_dist_html.py script (Ben Noordhuis) * win,sunos: stop handle on uv_fs_event_start() err (cjihrig) * unix,windows: refactor request init logic (Ben Noordhuis) * win: fix memory leak inside uv__pipe_getname (A. 
Hauptmann) * fsevent: support for files without short name (Bartosz Sosnowski) * doc: fix multiple doc typos (Jamie Davis) * test,osx: fix flaky kill test (Santiago Gimeno) * unix: inline uv_pipe_bind() err_bind goto target (cjihrig) * unix,test: deadstore fixes (Rasmus Christian Pedersen) * win: fix memory leak inside uv_fs_access() (A. Hauptmann) * doc: fix docs/src/fs.rst build warning (Daniel Bevenius) * doc: minor grammar fix in Installation section (Daniel Bevenius) * doc: suggestions for design page (Daniel Bevenius) * doc: libuv does not touch uv_loop_t.data (Ben Noordhuis) * github: add ISSUE_TEMPLATE.md (Ben Noordhuis) * doc: add link to libuv/help to README (Ben Noordhuis) * udp: fix fast path in uv_udp_send() on unix (Fedor Indutny) * test: add test for uv_udp_send() fix (Trevor Norris) * doc: fix documentation for uv_handle_t.type (Daniel Kahn Gillmor) * zos: use proper prototype for epoll_init() (Ben Noordhuis) * doc: rename docs to "libuv documentation" (Saúl Ibarra Corretgé) * doc: update copyright years (Saúl Ibarra Corretgé) * doc: move TOC to a dedicated document (Saúl Ibarra Corretgé) * doc: move documentation section up (Saúl Ibarra Corretgé) * doc: move "upgrading" to a standalone document (Saúl Ibarra Corretgé) * doc: add initial version of the User Guide (Saúl Ibarra Corretgé) * doc: removed unused file (Saúl Ibarra Corretgé) * doc: update guide/about and mention new maintainership (Saúl Ibarra Corretgé) * doc: remove licensing note from guide/about (Saúl Ibarra Corretgé) * doc: add warning note to user guide (Saúl Ibarra Corretgé) * doc: change license to CC BY 4.0 (Saúl Ibarra Corretgé) * doc: remove ubvook reference from README (Saúl Ibarra Corretgé) * doc: add code samples from uvbook (unadapted) (Saúl Ibarra Corretgé) * doc: update supported linux/glibc baseline (Ben Noordhuis) * win: avoid leaking pipe handles to child processes (Jameson Nash) * win,test: support stdout output larger than 1kb (Bartosz Sosnowski) * win: remove __declspec(inline) from atomic op (Keane) * test: fix VC++ compilation warning (Rasmus Christian Pedersen) * build: add -Wstrict-prototypes (Jameson Nash) * zos: implement uv__io_fork, skip fs event tests (jBarz) * unix: do not close udp sockets on bind error (Marc Schlaich) * unix: remove FSEventStreamFlushSync() call (cjihrig) * build,openbsd: remove kvm-related code (James McCoy) * test: fix flaky tcp-write-queue-order (Santiago Gimeno) * unix,win: add uv_os_gethostname() (cjihrig) * zos: increase timeout for tcp_writealot (jBarz) * zos: do not inline OOB data by default (jBarz) * test: fix -Wstrict-prototypes compiler warnings (Ben Noordhuis) * unix: factor out reusable no-proctitle impl (Brad King) * test: factor out fsevents skip explanation (Brad King) * test: skip fork fsevent cases when lacking support (Brad King) * unix: factor out reusable no-fsevents impl (Brad King) * unix: factor out reusable sysinfo memory lookup (Brad King) * unix: factor out reusable sysinfo loadavg impl (Brad King) * unix: factor out reusable procfs exepath impl (Brad King) * unix: add a uv__io_poll impl using POSIX poll(2) (Brad King) * cygwin: implement support for cygwin and msys2 (Brad King) * cygwin: recognize EOF on named pipe closure (Brad King) * cygwin: fix uv_pipe_connect report of ENOTSOCK (Brad King) * cygwin: disable non-functional ipc handle send (Brad King) * test: skip self-connecting tests on cygwin (Brad King) * doc: mark uv_loop_fork() as experimental (cjihrig) * doc: add bzoz to maintainers (Bartosz Sosnowski) * doc: fix memory leak 
in tcp-echo-server example (Bernardo Ramos) * win: make uv__get_osfhandle() public (Juan Cruz Viotti) * doc: use valid pipe name in pipe-echo-server (Bernardo Ramos) 2017.02.02, Version 1.11.0 (Stable), 7452ef4e06a4f99ee26b694c65476401534f2725 Changes since version 1.10.2: * Now working on version 1.10.3 (cjihrig) * win: added fcntl.h to uv-win.h (Michele Caini) * unix: move function call out of assert (jBarz) * fs: cleanup uv__fs_scandir (Santiago Gimeno) * fs: fix crash in uv_fs_scandir_next (muflub) * win,signal: fix potential deadlock (Bartosz Sosnowski) * unix: use async-signal safe functions between fork and exec (jBarz) * sunos: fix SUNOS_NO_IFADDRS build (Ben Noordhuis) * zos: make platform functional (John Barboza) * doc: add repitition qualifier to version regexs (Daniel Bevenius) * zos: use gyp OS label "os390" on z/OS (John Barboza) * aix: enable uv_get/set_process_title (Howard Hellyer) * zos: use built-in proctitle implementation (John Barboza) * Revert "darwin: use clock_gettime in macOS 10.12" (Chris Araman) * win,test: don't write uninitialized buffer to tty (Bert Belder) * win: define ERROR_ELEVATION_REQUIRED for MinGW (Richard Lau) * aix: re-enable fs watch facility (Gireesh Punathil) 2017.01.10, Version 1.10.2 (Stable), cb9f579a454b8db592030ffa274ae58df78dbe20 Changes since version 1.10.1: * Now working on version 1.10.2 (cjihrig) * darwin: fix fsync and fdatasync (Joran Dirk Greef) * Revert "Revert "win,tty: add support for ANSI codes in win10 v1511"" (Santiago Gimeno) * win,tty: fix MultiByteToWideChar output buffer (Santiago Gimeno) * win: remove dead code related to BACKUP_SEMANTICS (Sam Roberts) * win: fix comment in quote_cmd_arg (Eric Sciple) * darwin: use clock_gettime in macOS 10.12 (Saúl Ibarra Corretgé) * win, tty: fix crash on restarting with pending data (Nicholas Vavilov) * fs: fix uv__to_stat on BSD platforms (Santiago Gimeno) * win: map ERROR_ELEVATION_REQUIRED to UV_EACCES (Richard Lau) * win: fix free() on bad input in uv_getaddrinfo() (Ben Noordhuis) 2016.11.17, Version 1.10.1 (Stable), 2e49e332bdede6db7cf17fa784a902e8386d5d86 Changes since version 1.10.0: * Now working on version 1.10.1 (cjihrig) * win: fix anonymous union syntax (Brad King) * unix: use uv__is_closing everywhere (Santiago Gimeno) * win: add missing break statement (cjihrig) * doc: fix wrong man page link for uv_fs_lstat() (Michele Caini) * win, tty: handle empty buffer in uv_tty_write_bufs (Hitesh Kanwathirtha) * doc: add cjihrig alternative GPG ID (cjihrig) * Revert "win,tty: add support for ANSI codes in win10 v1511" (Ben Noordhuis) 2016.10.25, Version 1.10.0 (Stable), c8a373c729b4c9392e0e14fc53cd6b67b3051ab9 Changes since version 1.9.1: * Now working on version 1.9.2 (Saúl Ibarra Corretgé) * doc: add cjihrig GPG ID (cjihrig) * win,build: fix compilation on old Windows / MSVC (Saúl Ibarra Corretgé) * darwin: fix setting fd to non-blocking in select(() trick (Saúl Ibarra Corretgé) * unix: allow nesting of kqueue fds in uv_poll_start (Ben Noordhuis) * doc: fix generation the first time livehtml runs (Saúl Ibarra Corretgé) * test: fix test_close_accept flakiness on Centos5 (Santiago Gimeno) * license: libuv is no longer a Node project (Saúl Ibarra Corretgé) * license: add license text we've been using for a while (Saúl Ibarra Corretgé) * doc: add licensing information to README (Saúl Ibarra Corretgé) * win,pipe: fixed formatting, DWORD is long unsigned (Miodrag Milanovic) * win: support sub-second precision in uv_fs_futimes() (Jason Ginchereau) * unix: ignore EINPROGRESS in uv__close 
(Saúl Ibarra Corretgé) * doc: add Imran Iqbal (iWuzHere) to maintainers (Imran Iqbal) * doc: update docs with AIX related information (Imran Iqbal) * test: silence build warnings (Kári Tristan Helgason) * doc: add iWuzHere GPG ID (Imran Iqbal) * linux-core: fix uv_get_total/free_memory on uclibc (Nicolas Cavallari) * build: fix build on DragonFly (Michael Neumann) * unix: correctly detect named pipes on DragonFly (Michael Neumann) * test: make tap output the default (Ben Noordhuis) * test: don't dump output for skipped tests (Ben Noordhuis) * test: improve formatting of diagnostic messages (Ben Noordhuis) * test: remove unused RETURN_TODO macro (Ben Noordhuis) * doc: fix stream typos (Pierre-Marie de Rodat) * doc: update coding style link (Imran Iqbal) * unix,fs: use uint64_t instead of unsigned long (Imran Iqbal) * build: check for warnings for -fvisibility=hidden (Imran Iqbal) * unix: remove unneeded TODO note (Saúl Ibarra Corretgé) * test: skip tty_pty test if pty is not available (Luca Bruno) * sunos: set phys_addr of interface_address using ARP (Brian Maher) * doc: clarify callbacks won't be called in error case (Saúl Ibarra Corretgé) * unix: don't convert stat buffer when syscall fails (Ben Noordhuis) * win: compare entire filename in watch events (cjihrig) * doc: add a note on safe reuse of uv_write_t (neevek) * linux: fix potential event loop stall (Ben Noordhuis) * unix,win: make uv_get_process_title() stricter (cjihrig) * test: close server before initiating new connection (John Barboza) * test: account for multiple handles in one ipc read (John Barboza) * unix: fix errno and retval conflict (liuxiaobo) * doc: add missing entry in uv_fs_type enum (Michele Caini) * unix: preserve loop->data across loop init/done (Ben Noordhuis) * win: return UV_EINVAL on bad uv_tty_mode mode arg (Ben Noordhuis) * win: simplify memory copy logic in fs.c (Ben Noordhuis) * win: fix compilation on mingw (Bartosz Sosnowski) * win: ensure 32-bit printf precision (Matej Knopp) * darwin: handle EINTR in /dev/tty workaround (Ben Noordhuis) * test: fix OOB buffer access (Saúl Ibarra Corretgé) * test: don't close CRT fd handed off to uv_pipe_t (Saúl Ibarra Corretgé) * test: fix android build error. 
(sunjin.lee) * win: evaluate timers when system wakes up (Bartosz Sosnowski) * doc: add supported platforms description (Saúl Ibarra Corretgé) * win: fix lstat reparse point without link data (Jason Ginchereau) * unix,win: make on_alloc_cb failures more resilient (Saúl Ibarra Corretgé) * zos: add support for new platform (John Barboza) * test: make tcp_close_while_connecting more resilient (Saúl Ibarra Corretgé) * build: use '${prefix}' for pkg-config 'exec_prefix' (Matt Clarkson) * build: GNU/kFreeBSD support (Jeffrey Clark) * zos: use PLO instruction for atomic operations (John Barboza) * zos: use pthread helper functions (John Barboza) * zos: implement uv__fs_futime (John Barboza) * unix: expand range of values for usleep (John Barboza) * zos: track unbound handles and bind before listen (John Barboza) * test: improve tap output on test failures (Santiago Gimeno) * test: refactor fs_event_close_in_callback (Julien Gilli) * zos: implement uv__io_check_fd (John Barboza) * unix: unneccessary use const qualifier in container_of (John Barboza) * win,tty: add support for ANSI codes in win10 v1511 (Imran Iqbal) * doc: add santigimeno to maintainers (Santiago Gimeno) * win: fix typo in type name (Saúl Ibarra Corretgé) * unix: always define pthread barrier fallback pad (Saúl Ibarra Corretgé) * test: use RETURN_SKIP in spawn_setuid_setgid test (Santiago Gimeno) * win: add disk read/write count to uv_getrusage (Imran Iqbal) * doc: document uv_fs_realpath caveats (Saúl Ibarra Corretgé) * test: improve spawn_setuid_setgid test (Santiago Gimeno) * test: fix building pty test on Android (Saúl Ibarra Corretgé) * doc: uv_buf_t members are not readonly (Saúl Ibarra Corretgé) * doc: improve documentation on uv_alloc_cb (Saúl Ibarra Corretgé) * fs: fix uv_fs_fstat on platforms using musl libc (Santiago Gimeno) * doc: update supported fields for uv_rusage_t (Imran Iqbal) * test: fix test-tcp-writealot flakiness on arm (Santiago Gimeno) * test: fix fs_event_watch_dir flakiness on arm (Santiago Gimeno) * unix: don't use alphasort in uv_fs_scandir() (Ben Noordhuis) * doc: fix confusing doc of uv_tcp_nodelay (Bart Robinson) * build,osx: fix warnings on tests compilation with gyp (Santiago Gimeno) * doc: add ABI tracker link to README (Saúl Ibarra Corretgé) * win,tty: fix uv_tty_set_mode race conditions (Bartosz Sosnowski) * test: fix fs_fstat on Android (Vit Gottwald) * win, test: fix fs_event_watch_dir_recursive (Bartosz Sosnowski) * doc: add description of uv_handle_type (Vit Gottwald) * build: use -pthreads for tests with autotools (Julien Gilli) * win: fix leaky fs request buffer (Jason Ginchereau) * doc: note buffer lifetime requirements in uv_write (Vladimír Čunát) * doc: add reference to uv_update_time on uv_timer_start (Alex Hultman) * win: fix winapi function pointer typedef syntax (Brad King) * test: fix tcp_close_while_connecting CI failures (Ben Noordhuis) * test: make threadpool_cancel_single deterministic (Ben Noordhuis) * test: make threadpool saturation reliable (Ben Noordhuis) * unix: don't malloc in uv_thread_create() (Ben Noordhuis) * unix: don't include CoreServices globally on macOS (Brad King) * unix,win: add uv_translate_sys_error() public API (Philippe Laferriere) * win: remove unused static variables (Ben Noordhuis) * win: silence -Wmaybe-uninitialized warning (Ben Noordhuis) * signal: replace pthread_once with uv_once (Santiago Gimeno) * test: fix sign-compare warning (Will Speak) * common: fix unused variable warning (Brad King) 2016.05.17, Version 1.9.1 (Stable), 
d989902ac658b4323a4f4020446e6f4dc449e25c Changes since version 1.9.0: * test: handle root home directories (cjihrig) * unix: implement uv__fs_futime for AIX 7.1 (Imran Iqbal) * test: skip early bind tests if no IPv6 is supported (Saúl Ibarra Corretgé) * win: fix var declaration to be C89 compliant (Michael Fero) * unix: use POLL{IN,OUT,etc} constants directly (Ben Noordhuis) * doc: add ability to live reload and regenerate HTML (Saúl Ibarra Corretgé) * Revert "win,build: remove unused build defines" (cjihrig) * linux: fix fd leaks in uv_cpu_info() error paths (Ben Noordhuis) * linux: don't abort on malformed /proc/stat (Ben Noordhuis) * linux: fix long lines in linux-core.c (Ben Noordhuis) * test: fix fs_event_watch_file_current_dir for AIX (Imran Iqbal) * unix,fs: code cleanup of uv_fs_event_start for AIX (Imran Iqbal) * unix: delay signal handling until after normal i/o (Ben Noordhuis) * android: pthread_sigmask() does not set errno (Oguz Bastemur) * win: work around sharepoint scandir bug (Ben Noordhuis) * unix: guard against clobbering errno in uv__free() (Ben Noordhuis) * unix: remove unneeded SAVE_ERRNO wrappers (Ben Noordhuis) * test: skip fs_event_close_in_callback on AIX (Imran Iqbal) * win: add maxrss, pagefaults to uv_getrusage() (Robert Jefe Lindstaedt) * test: set a big send buffer size for tcp_write_queue_order (Andrius Bentkus) * unix: error on realpath if PATH_MAX is undefined (Myles Borins) * unix: fix bug in barrier fallback implementation (Kári Tristan Helgason) * build: bump android ndk version (Kári Tristan Helgason) * build: always compile with -fvisibility=hidden (Ben Noordhuis) * test: fix -Wformat warnings in platform test (Ben Noordhuis) * win: clarify fsevents handling code (Saúl Ibarra Corretgé) * test: fix POLLHDRUP related failures for AIX (Imran Iqbal) * build, mingw: set LIBS in configure.ac (Tony Theodore) * win: improve uv__convert_utf16_to_utf8 (Saúl Ibarra Corretgé) * win: simplified UTF16 -> UTF8 conversions (Saúl Ibarra Corretgé) * win: remove unneeded condition (Saúl Ibarra Corretgé) * darwin: work around condition variable kernel bug (Ben Noordhuis) * darwin: make thread stack multiple of page size (Ben Noordhuis) * build,win: rename platform to msbuild_platform (João Reis) * gitignore: ignore VS temporary database files (João Reis) * test: skip emfile on AIX (Imran Iqbal) * unix: use system allocator for scandir() (cjihrig) * common: release uv_fs_scandir() array (cjihrig) * win: call uv__fs_scandir_cleanup() (cjihrig) * win,tty: fix read stop in line mode (João Reis) * win,tty: don't duplicate handle for line reads (João Reis) * win,tty: restore cursor after canceling line read (Alexis Campailla) 2016.04.08, Version 1.9.0 (Stable), 229b3a4cc150aebd6561e6bd43076eafa7a03756 Changes since version 1.8.0: * win: wait for full timeout duration (João Reis) * unix: fix support for uClibc-ng (Martin Bark) * doc: indicate where new test files need to be added (Dave) * test,unix: fix logic error in test runner (Ben Noordhuis) * fs: don't nullify req->bufs on EINTR (Dave) * osx: set the default thread stack size to RLIMIT_STACK (Saúl Ibarra Corretgé) * build: invoke libtoolize with --copy (Ben Noordhuis) * test: fixup eintr_handling (Saúl Ibarra Corretgé) * osx: avoid compilation warning with Clang (Saúl Ibarra Corretgé) * test,win: fix compilation with shared lib (Alexis Murzeau) * test: fix race condition in pipe-close-stdout (Imran Iqbal) * unix,win: add uv_os_tmpdir() (cjihrig) * ios: fix undefined PTHREAD_STACK_MIN (Didiet) * test: fix 
threadpool_multiple_event_loops for AIX (Imran Iqbal) * unix: report errors for unpollable fds (Ben Noordhuis) * win: fix watching root files (Nicholas Vavilov) * build,win: print the Visual Studio version in use (Saúl Ibarra Corretgé) * build,win: remove unneeded condition from GYP file (Saúl Ibarra Corretgé) * test,win: fix compilation warning (Saúl Ibarra Corretgé) * test: use uv_loop_close and assert its result (Nan Xiang) * build: map 'AMD64' host arch to 'x64' (Ben Noordhuis) * osx: protected use of potentially undefined macro (Samuel Lorétan) * linux: fix compilation with musl (Saúl Ibarra Corretgé) * doc: describe how to make release builds on Unix (Saúl Ibarra Corretgé) * doc: add missing link in README (Saúl Ibarra Corretgé) * build: python 2.x/3.x consistent print usage (Rasmus Christian Pedersen) * test: assume no IPv6 if interfaces cannot be listed (Nan Xiang) * darwin: replace F_FULLFSYNC with fdatasync syscall (Saúl Ibarra Corretgé) * doc: add missing write callback to example (Nándor István Krácser) * build: compile with -D_THREAD_SAFE on AIX (Imran Iqbal) * test: fix threadpool_multiple_event_loops on PPC (Imran Iqbal) * test: reduce timeout in tcp_close_while_connecting (Imran Iqbal) * unix, win: consistently null-terminate buffers (Saúl Ibarra Corretgé) * unix, win: count null byte on UV_ENOBUFS (Saúl Ibarra Corretgé) * test: fix deadlocks in uv_cond_wait (Katsutoshi Horie) * linux: fix cpu count (Lukasz Jagiello) * unix: fix uv__handle_type for AIX (Imran Iqbal) * linux: call fclose(), fix fdopen() memory leak (Ben Noordhuis) * win: remove unneeded condition (Saúl Ibarra Corretgé) * unix: fix compile error in Android using bionic (Robert Chiras) * linux: add braces to multi-statement if (Kári Tristan Helgason) * doc: add @cjihrig as a maintainer (Saúl Ibarra Corretgé) * unix: add fork-safe open file function (Kári Tristan Helgason) * linux: replace calls to fopen with uv__open_file (Kári Tristan Helgason) * linux: remove redundant call to rewind() (Krishnaraj Bhat) * win: remove duplicated code when processing fsevents (Saúl Ibarra Corretgé) * test: fix poll_bad_fdtype for AIX (Imran Iqbal) * linux: fix error checking in uv__open_file (Saúl Ibarra Corretgé) * poll: add UV_DISCONNECT event (Santiago Gimeno) * fs: realpath: fix string size before converting (Yuval Brik) * win: use native APIs for UTF conversions (cjihrig) * doc: clarify uv_loop_close() (Ben Noordhuis) * unix: retry ioctl(TIOCGWINSZ) on EINTR (Ben Noordhuis) * win,build: remove unused build defines (Saúl Ibarra Corretgé) * win: fix buffer overflow in fs events (Joran Dirk Greef) * win: fix uv_relative_path and remove dead branch (Joran Dirk Greef) * unix: use open(2) with O_CLOEXEC on OS X (Kári Tristan Helgason) * test: add missing copyright header (cjihrig) * aix: fix 'POLLRDHUP undeclared' build error (Ben Noordhuis) * unix,win: add uv_get_passwd() (cjihrig) * process: fix uv_spawn edge-case (Santiago Gimeno) * test: use %ld for printing uid/gid (Ben Noordhuis) * aix: fix ahafs implementation (Imran Iqbal) * aix: do not store absolute path to ahafs (Imran Iqbal) * process: close process pipes safely (Santiago Gimeno) * unix: open ttyname instead of /dev/tty (Enno Boland) * unix: remove outdated comment (Kári Tristan Helgason) 2015.12.15, Version 1.8.0 (Stable), 5467299450ecf61635657557b6e01aaaf6c3fdf4 Changes since version 1.7.5: * unix: fix memory leak in uv_interface_addresses (Jianghua Yang) * unix: make uv_guess_handle work properly for AIX (Gireesh Punathil) * fs: undo uv__req_init when uv__malloc 
failed (Jianghua Yang) * build: remove unused 'component' GYP option (Saúl Ibarra Corretgé) * include: remove duplicate extern declaration (Jianghua Yang) * win: use the MSVC provided snprintf where possible (Jason Williams) * win, test: fix compilation warning (Saúl Ibarra Corretgé) * win: fix compilation with VS < 2012 (Ryan Johnston) * stream: support empty uv_try_write on unix (Fedor Indutny) * unix: fix request handle leak in uv__udp_send (Jianghua Yang) * src: replace QUEUE_SPLIT with QUEUE_MOVE (Ben Noordhuis) * unix: use QUEUE_MOVE when iterating over lists (Ben Noordhuis) * unix: squelch harmless valgrind warning (Ben Noordhuis) * test: don't abort on setrlimit() failure (Ben Noordhuis) * unix: only undo fs req registration in async mode (Ben Noordhuis) * unix: fix uv__getiovmax return value (HungMingWu) * unix: make work with Solaris Studio. (Adam Stylinski) * test: fix fs_event_watch_file_currentdir flakiness (Santiago Gimeno) * unix: skip prohibited syscalls on tvOS and watchOS (Nathan Corvino) * test: use FQDN in getaddrinfo_fail test (Wink Saville) * docs: clarify documentation of uv_tcp_init_ex (Andrius Bentkus) * win: fix comment (Miodrag Milanovic) * doc: fix typo in README (Angel Leon) * darwin: abort() if (un)locking fs mutex fails (Ben Noordhuis) * pipe: enable inprocess uv_write2 on Windows (Louis DeJardin) * win: properly return UV_EBADF when _close() fails (Nicholas Vavilov) * test: skip process_title for AIX (Imran Iqbal) * misc: expose handle print APIs (Petka Antonov) * include: add stdio.h to uv.h (Saúl Ibarra Corretgé) * misc: remove unnecessary null pointer checks (Ian Kronquist) * test,freebsd: skip udp_dual_stack if not supported (Santiago Gimeno) * linux: don't retry dup2/dup3 on EINTR (Ben Noordhuis) * unix: don't retry dup2/dup3 on EINTR (Ben Noordhuis) * test: fix -Wtautological-pointer-compare warnings (Saúl Ibarra Corretgé) * win: map ERROR_BAD_PATHNAME to UV_ENOENT (Tony Kelman) * test: fix test/test-tty.c for AIX (Imran Iqbal) * android: support api level less than 21 (kkdaemon) * fsevents: fix race on simultaneous init+close (Fedor Indutny) * linux,fs: fix p{read,write}v with a 64bit offset (Saúl Ibarra Corretgé) * fs: add uv_fs_realpath() (Yuval Brik) * win: fix path for removed and renamed fs events (Joran Dirk Greef) * win: do not read more from stream than available (Jeremy Whitlock) * test: test that uv_close() doesn't corrupt QUEUE (Andrey Mazo) * unix: fix uv_fs_event_stop() from fs_event_cb (Andrey Mazo) * test: fix self-deadlocks in thread_rwlock_trylock (Ben Noordhuis) * src: remove non ascii character (sztomi) * test: fix test udp_multicast_join6 for AIX (Imran Iqbal) 2015.09.23, Version 1.7.5 (Stable), a8c1136de2cabf25b143021488cbaab05834daa8 Changes since version 1.7.4: * unix: Support atomic compare & swap xlC on AIX (nmushell) * unix: Fix including uv-aix.h on AIX (nmushell) * unix: consolidate rwlock tryrdlock trywrlock errors (Saúl Ibarra Corretgé) * unix, win: consolidate mutex trylock errors (Saúl Ibarra Corretgé) * darwin: fix memory leak in uv_cpu_info (Jianghua Yang) * test: add tests for the uv_rwlock implementation (Bert Belder) * win: redo/fix the uv_rwlock APIs (Bert Belder) * win: don't fetch function pointers to SRWLock APIs (Bert Belder) 2015.09.12, Version 1.7.4 (Stable), a7ad4f52189d89cfcba35f78bfc5ff3b1f4105c4 Changes since version 1.7.3: * doc: uv_read_start and uv_read_cb clarifications (Ben Trask) * freebsd: obtain true uptime through clock_gettime() (Jianghua Yang) * win, tty: do not convert \r to \r\n (Colin 
Snover) * build,gyp: add DragonFly to the list of OSes (Michael Neumann) * fs: fix bug in sendfile for DragonFly (Michael Neumann) * doc: add uv_dlsym() return type (Brian White) * tests: fix fs tests run w/o full getdents support (Jeremy Whitlock) * doc: fix typo (Devchandra Meetei Leishangthem) * doc: fix uv-unix.h location (Sakthipriyan Vairamani) * unix: fix error check when closing process pipe fd (Ben Noordhuis) * test,freebsd: fix ipc_listen_xx_write tests (Santiago Gimeno) * win: fix unsavory rwlock fallback implementation (Bert Belder) * doc: clarify repeat timer behavior (Eli Skeggs) 2015.08.28, Version 1.7.3 (Stable), 93877b11c8b86e0a6befcda83a54555c1e36e4f0 Changes since version 1.7.2: * threadpool: fix thread starvation bug (Ben Noordhuis) 2015.08.25, Version 1.7.2 (Stable), 4d13a013fcfa72311f0102751fdc7951873f466c Changes since version 1.7.1: * unix, win: make uv_loop_init return on error (Willem Thiart) * win: reset pipe handle for pipe servers (Saúl Ibarra Corretgé) * win: fix replacing pipe handle for pipe servers (Saúl Ibarra Corretgé) * win: fix setting pipe pending instances after bind (Saúl Ibarra Corretgé) 2015.08.20, Version 1.7.1 (Stable), 44f4b6bd82d8ae4583ccc4768a83af778ef69f85 Changes since version 1.7.0: * doc: document the procedure for verifying releases (Saúl Ibarra Corretgé) * doc: add note about Windows binaries to the README (Saúl Ibarra Corretgé) * doc: use long GPG IDs in MAINTAINERS.md (Saúl Ibarra Corretgé) * Revert "stream: squelch ECONNRESET error if already closed" (Saúl Ibarra Corretgé) * doc: clarify uv_read_stop() is idempotent (Corbin Simpson) * unix: OpenBSD's setsockopt needs an unsigned char for multicast (Zachary Hamm) * test: Fix two memory leaks (Karl Skomski) * unix,win: return EINVAL on nullptr args in uv_fs_{read,write} (Karl Skomski) * win: set accepted TCP sockets as non-inheritable (Saúl Ibarra Corretgé) * unix: remove superfluous parentheses in fs macros (Ben Noordhuis) * unix: don't copy arguments for sync fs requests (Ben Noordhuis) * test: plug small memory leak in unix test runner (Ben Noordhuis) * unix,windows: allow NULL loop for sync fs requests (Ben Noordhuis) * unix,windows: don't assert on unknown error code (Ben Noordhuis) * stream: retry write on EPROTOTYPE on OSX (Brian White) * common: fix use of snprintf on Windows (Saúl Ibarra Corretgé) * tests: refactored fs watch_dir tests for stability (Jeremy Whitlock) 2015.08.06, Version 1.7.0 (Stable), 415a865d6365ba58d02b92b89d46ba5d7744ec8b Changes since version 1.6.1: * win,stream: add slot to remember CRT fd (Bert Belder) * win,pipe: properly close when created from CRT fd (Bert Belder) * win,pipe: don't close fd 0-2 (Bert Belder) * win,tty: convert fd -> handle safely (Bert Belder) * win,tty: properly close when created from CRT fd (Bert Belder) * win,tty: don't close fd 0-2 (Bert Belder) * win,fs: don't close fd 0-2 (Bert Belder) * win: include "malloc.h" (Cheng Zhao) * windows: MSVC 2015 has C99 inline (Jason Williams) * dragonflybsd: fixes for nonblocking and cloexec (Michael Neumann) * dragonflybsd: use sendfile(2) for uv_fs_sendfile (Michael Neumann) * dragonflybsd: fix uv_exepath (Michael Neumann) * win,fs: Fixes align(8) directive on mingw (Stefano Cristiano) * unix, win: prevent replacing fd in uv_{udp,tcp,pipe}_t (Saúl Ibarra Corretgé) * win: move logic to set socket non-inheritable to uv_tcp_set_socket (Saúl Ibarra Corretgé) * unix, win: add ability to create tcp/udp sockets early (Saúl Ibarra Corretgé) * test: retry select() on EINTR, honor milliseconds (Ben 
Noordhuis) * unix: consolidate tcp and udp bind error (Saúl Ibarra Corretgé) * test: conditionally skip udp_ipv6_multicast_join6 (heshamsafi) * core: add UV_VERSION_HEX macro (Saúl Ibarra Corretgé) * doc: add section with version-checking macros and functions (Saúl Ibarra Corretgé) * tty: cleanup handle if uv_tty_init fails (Saúl Ibarra Corretgé) * darwin: save a fd when FSEvents is used (Saúl Ibarra Corretgé) * win: fix returning thread id in uv_thread_self (Saúl Ibarra Corretgé) * common: use offsetof for QUEUE_DATA (Saúl Ibarra Corretgé) * win: remove UV_HANDLE_CONNECTED (A. Hauptmann) * docs: add Windows specific note for uv_fs_open (Saúl Ibarra Corretgé) * doc: add note about uv_fs_scandir (Saúl Ibarra Corretgé) * test,unix: reduce stack size of watchdog threads (Ben Noordhuis) * win: add support for recursive file watching (Saúl Ibarra Corretgé) * win,tty: support consoles with non-default colors (John McNamee) * doc: add missing variable name (Yosuke Furukawa) * stream: squelch ECONNRESET error if already closed (Santiago Gimeno) * build: remove ancient condition from common.gypi (Saúl Ibarra Corretgé) * tests: skip some tests when network is unreachable (Luca Bruno) * build: proper support for android cross compilation (guworks) * android: add missing include to pthread-fixes.c (RossBencina) * test: fix compilation warning (Saúl Ibarra Corretgé) * doc: add a note about uv_dirent_t.type (Saúl Ibarra Corretgé) * win,test: fix shared library build (Saúl Ibarra Corretgé) * test: fix compilation warning (Santiago Gimeno) * build: add experimental Windows installer (Roger A. Light) * threadpool: send signal only when queue is empty (chenttuuvv) * aix: fix uv_exepath with relative paths (Richard Lau) * build: fix version syntax in AppVeyor file (Saúl Ibarra Corretgé) * unix: allow nbufs > IOV_MAX in uv_fs_{read,write} (ronkorving) 2015.06.06, Version 1.6.1 (Stable), 30c8be07bb78a66fdee5141626bf53a49a17094a Changes since version 1.6.0: * unix: handle invalid _SC_GETPW_R_SIZE_MAX values (cjihrig) 2015.06.04, Version 1.6.0 (Stable), adfccad76456061dfcf79b8df8e7dbfee51791d7 Changes since version 1.5.0: * aix: fix setsockopt for multicast options (Michael) * unix: don't block for io if any io handle is primed (Saúl Ibarra Corretgé) * windows: MSVC 2015 has snprintf() (Rui Abreu Ferreira) * windows: Add VS2015 support to vcbuild.bat (Jason Williams) * doc: fix typo in tcp.rst (Igor Soarez) * linux: work around epoll bug in kernels < 2.6.37 (Ben Noordhuis) * unix,win: add uv_os_homedir() (cjihrig) * stream: fix `select()` race condition (Fedor Indutny) * unix: prevent infinite loop in uv__run_pending (Saúl Ibarra Corretgé) * unix: make sure UDP send callbacks are asynchronous (Saúl Ibarra Corretgé) * test: fix `platform_output` netmask printing. 
(Andrew Paprocki) * aix: add ahafs autoconf detection and README notes (Andrew Paprocki) * core: add ability to customize memory allocator (Saúl Ibarra Corretgé) 2015.05.07, Version 1.5.0 (Stable), 4e77f74c7b95b639b3397095db1bc5bcc016c203 Changes since version 1.4.2: * doc: clarify that the thread pool primites are not thread safe (Andrius Bentkus) * aix: always deregister closing fds from epoll (Michael) * unix: fix glibc-2.20+ macro incompatibility (Massimiliano Torromeo) * doc: add Sphinx plugin for generating links to man pages (Saúl Ibarra Corretgé) * doc: link system and library calls to man pages (Saúl Ibarra Corretgé) * doc: document uv_getnameinfo_t.{host|service} (Saúl Ibarra Corretgé) * build: update the location of gyp (Stephen von Takach) * win: name all anonymous structs and unions (TomCrypto) * linux: work around epoll bug in kernels 3.10-3.19 (Ben Noordhuis) * darwin: fix size calculation in select() fallback (Ole André Vadla Ravnås) * solaris: fix setsockopt for multicast options (Julien Gilli) * test: fix race condition in multithreaded test (Ben Noordhuis) * doc: fix long lines in tty.rst (Ben Noordhuis) * test: use UV_TTY_MODE_* values in tty test (Ben Noordhuis) * unix: don't clobber errno in uv_tty_reset_mode() (Ben Noordhuis) * unix: reject non-tty fds in uv_tty_init() (Ben Noordhuis) * win: fix pipe blocking writes (Alexis Campailla) * build: fix cross-compiling for iOS (Steven Kabbes) * win: remove unnecessary malloc.h * include: use `extern "c++"` for defining C++ code (Kazuho Oku) * unix: reap child on execvp() failure (Ryan Phillips) * windows: fix handle leak on EMFILE (Brian Green) * test: fix tty_file, close handle if initialized (Saúl Ibarra Corretgé) * doc: clarify what uv_*_open accepts (Saúl Ibarra Corretgé) * doc: clarify that we don't maintain external doc resources (Saúl Ibarra Corretgé) * build: add documentation for ninja support (Devchandra Meetei Leishangthem) * doc: document uv_buf_t members (Corey Farrell) * linux: fix epoll_pwait() fallback on arm64 (Ben Noordhuis) * android: fix compilation warning (Saúl Ibarra Corretgé) * unix: don't close the fds we just setup (Sam Roberts) * test: spawn child replacing std{out,err} to stderr (Saúl Ibarra Corretgé) * unix: fix swapping fds order in uv_spawn (Saúl Ibarra Corretgé) * unix: fix potential bug if dup2 fails in uv_spawn (Saúl Ibarra Corretgé) * test: remove LOG and LOGF variadic macros (Saúl Ibarra Corretgé) * win: fix uv_fs_access on directories (Saúl Ibarra Corretgé) * win: fix of double free in uv_uptime (Per Nilsson) * unix: open "/dev/null" instead of "/" for emfile_fd (Alan Rogers) * docs: add some missing words (Daryl Haresign) * unix: clean up uv_fs_open() O_CLOEXEC logic (Ben Noordhuis) * build: set SONAME for shared library in uv.gyp (Rui Abreu Ferreira) * windows: define snprintf replacement as inline instead of static (Rui Abreu Ferreira) * win: fix unlink of readonly files (João Reis) * doc: fix uv_run(UV_RUN_DEFAULT) description (Ben Noordhuis) * linux: intercept syscall when running under memory sanitizer (Keno Fischer) * aix: fix uv_interface_addresses return value (farblue68) * windows: defer reporting TCP write failure until next tick (Saúl Ibarra Corretgé) * test: add test for deferred TCP write failure (Saúl Ibarra Corretgé) 2015.02.27, Version 1.4.2 (Stable), 1a7391348a11d5450c0f69c828d5302e2cb842eb Changes since version 1.4.1: * stream: ignore EINVAL for SO_OOBINLINE on OS X (Fedor Indutny) 2015.02.25, Version 1.4.1 (Stable), e8e3fc5789cc0f02937879d141cca0411274093c Changes 
since version 1.4.0: * win: don't use inline keyword in thread.c (Ben Noordhuis) * windows: fix setting dirent types on uv_fs_scandir_next (Saúl Ibarra Corretgé) * unix,windows: make uv_thread_create() return errno (Ben Noordhuis) * tty: fix build for SmartOS (Julien Gilli) * unix: fix for uv_async data race (Michael Penick) * unix, windows: map EHOSTDOWN errno (Ben Noordhuis) * stream: use SO_OOBINLINE on OS X (Fedor Indutny) 2015.02.10, Version 1.4.0 (Stable), 19fb8a90648f3763240db004b77ab984264409be Changes since version 1.3.0: * unix: check Android support for pthread_cond_timedwait_monotonic_np (Leith Bade) * test: use modified path in test (cjihrig) * unix: implement uv_stream_set_blocking() (Ben Noordhuis) 2015.01.29, Version 1.3.0 (Stable), 165685b2a9a42cf96501d79cd6d48a18aaa16e3b Changes since version 1.2.1: * unix, windows: set non-block mode in uv_poll_init (Saúl Ibarra Corretgé) * doc: clarify which flags are supported in uv_fs_event_start (Saúl Ibarra Corretgé) * win,unix: move loop functions which have identical implementations (Andrius Bentkus) * doc: explain how the threadpool is allocated (Alex Mo) * doc: clarify uv_default_loop (Saúl Ibarra Corretgé) * unix: fix implicit declaration compiler warning (Ben Noordhuis) * unix: fix long line introduced in commit 94e628fa (Ben Noordhuis) * unix, win: add synchronous uv_get{addr,name}info (Saúl Ibarra Corretgé) * linux: fix epoll_pwait() regression with < 2.6.19 (Ben Noordhuis) * build: compile -D_GNU_SOURCE on linux (Ben Noordhuis) * build: use -fvisibility=hidden in autotools build (Ben Noordhuis) * fs, pipe: no trailing terminator in exact sized buffers (Andrius Bentkus) * style: rename buf to buffer and len to size for consistency (Andrius Bentkus) * test: fix test-spawn on MinGW32 (Luis Martinez de Bartolome) * win, pipe: fix assertion when destroying timer (Andrius Bentkus) * win, unix: add pipe_peername implementation (Andrius Bentkus) 2015.01.29, Version 0.10.33 (Stable), 7a2253d33ad8215a26c1b34f1952aee7242dd687 Changes since version 0.10.32: * linux: fix epoll_pwait() regression with < 2.6.19 (Ben Noordhuis) * test: back-port uv_loop_configure() test (Ben Noordhuis) 2015.01.15, Version 1.2.1 (Stable), 4ca78e989062a1099dc4b9ad182a98e8374134b1 Changes since version 1.2.0: * unix: remove unused dtrace file (Saúl Ibarra Corretgé) * test: skip TTY select test if /dev/tty can't be opened (Saúl Ibarra Corretgé) * doc: clarify the behavior of uv_tty_init (Saúl Ibarra Corretgé) * doc: clarify how uv_async_send behaves (Saúl Ibarra Corretgé) * build: make dist now generates a full tarball (Johan Bergström) * freebsd: make uv_exepath more resilient (Saúl Ibarra Corretgé) * unix: make setting the tty mode to the same value a no-op (Saúl Ibarra Corretgé) * win,tcp: support uv_try_write (Bert Belder) * test: enable test-tcp-try-write on windows (Bert Belder) * win,tty: support uv_try_write (Bert Belder) * unix: set non-block mode in uv_{pipe,tcp,udp}_open (Ben Noordhuis) 2015.01.06, Version 1.2.0 (Stable), 09f25b13cd149c7981108fc1a75611daf1277f83 Changes since version 1.1.0: * linux: fix epoll_pwait() sigmask size calculation (Ben Noordhuis) * tty: implement binary I/O terminal mode (Yuri D'Elia) * test: fix spawn test with autotools build (Ben Noordhuis) * test: skip ipv6 tests when ipv6 is not supported (Ben Noordhuis) * common: move STATIC_ASSERT to uv-common.h (Alexey Melnichuk) * win/thread: store thread handle in a TLS slot (Alexey Melnichuk) * unix: fix ttl, multicast ttl and loop options on IPv6 (Saúl Ibarra Corretgé) * 
linux: fix support for preadv/pwritev-less kernels (Ben Noordhuis) * unix: make uv_exepath(size=0) return UV_EINVAL (Ben Noordhuis) * darwin: fix uv_exepath(smallbuf) UV_EPERM error (Ben Noordhuis) * openbsd: fix uv_exepath(smallbuf) UV_EINVAL error (Ben Noordhuis) * linux: fix uv_exepath(size=1) UV_EINVAL error (Ben Noordhuis) * sunos: preemptively fix uv_exepath(size=1) (Ben Noordhuis) * win: fix and clarify comments in winapi.h (Bert Belder) * win: make available NtQueryDirectoryFile (Bert Belder) * win: add definitions for directory information types (Bert Belder) * win: use NtQueryDirectoryFile to implement uv_fs_scandir (Bert Belder) * unix: don't unlink unix socket on bind error (Ben Noordhuis) * build: fix bad comment in autogen.sh (Ben Noordhuis) * build: add AC_PROG_LIBTOOL to configure.ac (Ben Noordhuis) * test: skip udp_options6 if there no IPv6 support (Saúl Ibarra Corretgé) * win: add definitions for MUI errors mingw lacks (Bert Belder) * build: enable warnings in autotools build (Ben Noordhuis) * build: remove -Wno-dollar-in-identifier-extension (Ben Noordhuis) * build: move flags from Makefile.am to configure.ac (Ben Noordhuis) 2015.01.06, Version 0.10.32 (Stable), 378de30c59aef5fdb6d130fa5cfcb0a68fce571c Changes since version 0.10.31: * linux: fix epoll_pwait() sigmask size calculation (Ben Noordhuis) 2014.12.25, Version 1.1.0 (Stable), 9572f3e74a167f59a8017e57ca3ebe91ffd88e18 Changes since version 1.0.2: * test: test that closing a poll handle doesn't corrupt the stack (Bert Belder) * win: fix compilation of tests (Marc Schlaich) * Revert "win: keep a reference to AFD_POLL_INFO in cancel poll" (Bert Belder) * win: avoid stack corruption when closing a poll handle (Bert Belder) * test: fix test-fs-file-loop on Windows (Bert Belder) * test: fix test-cwd-and-chdir (Bert Belder) * doc: indicate what version uv_loop_configure was added on (Saúl Ibarra Corretgé) * doc: fix sphinx warning (Saúl Ibarra Corretgé) * test: skip spawn_setuid_setgid if we get EACCES (Saúl Ibarra Corretgé) * test: silence some Clang warnings (Saúl Ibarra Corretgé) * test: relax osx_select_many_fds (Saúl Ibarra Corretgé) * test: fix compilation warnings when building with Clang (Saúl Ibarra Corretgé) * win: fix autotools build of tests (Luis Lavena) * gitignore: ignore Visual Studio files (Marc Schlaich) * win: set fallback message if FormatMessage fails (Marc Schlaich) * win: fall back to default language in uv_dlerror (Marc Schlaich) * test: improve compatibility for dlerror test (Marc Schlaich) * test: check dlerror is "no error" in no error case (Marc Schlaich) * unix: change uv_cwd not to return a trailing slash (Saúl Ibarra Corretgé) * test: fix cwd_and_chdir test on Unix (Saúl Ibarra Corretgé) * test: add uv_cwd output to platform_output test (Saúl Ibarra Corretgé) * build: fix dragonflybsd autotools build (John Marino) * win: scandir use 'ls' for formatting long strings (Kenneth Perry) * build: remove clang and gcc_version gyp defines (Ben Noordhuis) * unix, windows: don't treat uv_run_mode as a bitmask (Saúl Ibarra Corretgé) * unix, windows: fix UV_RUN_ONCE mode if progress was made (Saúl Ibarra Corretgé) 2014.12.25, Version 0.10.31 (Stable), 4dbd27e2219069a6daa769fb37f98673b77b4261 Changes since version 0.10.30: * test: test that closing a poll handle doesn't corrupt the stack (Bert Belder) * win: fix compilation of tests (Marc Schlaich) * Revert "win: keep a reference to AFD_POLL_INFO in cancel poll" (Bert Belder) * win: avoid stack corruption when closing a poll handle (Bert Belder) * 
gitignore: ignore Visual Studio files (Marc Schlaich) * win: set fallback message if FormatMessage fails (Marc Schlaich) * win: fall back to default language in uv_dlerror (Marc Schlaich) * test: improve compatibility for dlerror test (Marc Schlaich) * test: check dlerror is "no error" in no error case (Marc Schlaich) * build: link against -pthread (Logan Rosen) * win: scandir use 'ls' for formatting long strings (Kenneth Perry) 2014.12.10, Version 1.0.2 (Stable), eec671f0059953505f9a3c9aeb7f9f31466dd7cd Changes since version 1.0.1: * linux: fix sigmask size arg in epoll_pwait() call (Ben Noordhuis) * linux: handle O_NONBLOCK != SOCK_NONBLOCK case (Helge Deller) * doc: fix spelling (Joey Geralnik) * unix, windows: fix typos in comments (Joey Geralnik) * test: canonicalize test runner path (Ben Noordhuis) * test: fix compilation warnings (Saúl Ibarra Corretgé) * test: skip tty test if detected width and height are 0 (Saúl Ibarra Corretgé) * doc: update README with IRC channel (Saúl Ibarra Corretgé) * Revert "unix: use cfmakeraw() for setting raw TTY mode" (Ben Noordhuis) * doc: document how to get result of uv_fs_mkdtemp (Tim Caswell) * unix: add flag for blocking SIGPROF during poll (Ben Noordhuis) * unix, windows: add uv_loop_configure() function (Ben Noordhuis) * win: keep a reference to AFD_POLL_INFO in cancel poll (Marc Schlaich) * test: raise fd limit for OSX select test (Saúl Ibarra Corretgé) * unix: remove overzealous assert in uv_read_stop (Saúl Ibarra Corretgé) * unix: reset the reading flag when a stream gets EOF (Saúl Ibarra Corretgé) * unix: stop reading if an error is produced (Saúl Ibarra Corretgé) * cleanup: remove all dead assignments (Maciej Małecki) * linux: return early if we have no interfaces (Maciej Małecki) * cleanup: remove a dead increment (Maciej Małecki) 2014.12.10, Version 0.10.30 (Stable), 5a63f5e9546dca482eeebc3054139b21f509f21f Changes since version 0.10.29: * linux: fix sigmask size arg in epoll_pwait() call (Ben Noordhuis) * linux: handle O_NONBLOCK != SOCK_NONBLOCK case (Helge Deller) * doc: update project links (Ben Noordhuis) * windows: fix compilation of tests (Marc Schlaich) * unix: add flag for blocking SIGPROF during poll (Ben Noordhuis) * unix, windows: add uv_loop_configure() function (Ben Noordhuis) * win: keep a reference to AFD_POLL_INFO in cancel poll (Marc Schlaich) 2014.11.27, Version 1.0.1 (Stable), 0a8e81374e861d425b56c45c8599595d848911d2 Changes since version 1.0.0: * readme: remove Rust from users (Elijah Andrews) * doc,build,include: update project links (Ben Noordhuis) * doc: fix typo: Strcutures -> Structures (Michael Ira Krufky) * unix: fix processing process handles queue (Saúl Ibarra Corretgé) * win: replace non-ansi characters in source file (Bert Belder) 2014.11.21, Version 1.0.0 (Stable), feb2a9e6947d892f449b2770c4090f7d8c88381b Changes since version 1.0.0-rc2: * doc: fix git/svn url for gyp repo in README (Emmanuel Odeke) * windows: fix fs_read with nbufs > 1 and offset (Unknown W. 
Brackets) * win: add missing IP_ADAPTER_UNICAST_ADDRESS_LH definition for MinGW (huxingyi) * doc: mention homebrew in README (Mikhail Mukovnikov) * doc: add learnuv workshop to README (Thorsten Lorenz) * doc: fix parameter name in uv_fs_access (Saúl Ibarra Corretgé) * unix: use cfmakeraw() for setting raw TTY mode (Yuri D'Elia) * win: fix uv_thread_self() (Alexis Campailla) * build: add x32 support to gyp build (Ben Noordhuis) * build: remove dtrace probes (Ben Noordhuis) * doc: fix link in misc.rst (Manos Nikolaidis) * mailmap: remove duplicated entries (Saúl Ibarra Corretgé) * gyp: fix comment regarding version info location (Saúl Ibarra Corretgé) 2014.10.21, Version 1.0.0-rc2 (Pre-release) Changes since version 1.0.0-rc1: * build: add missing fixtures to distribution tarball (Rob Adams) * doc: update references to current stable branch (Zachary Newman) * fs: fix readdir on empty directory (Fedor Indutny) * fs: rename uv_fs_readdir to uv_fs_scandir (Saúl Ibarra Corretgé) * doc: document uv_alloc_cb (Saúl Ibarra Corretgé) * doc: add migration guide from version 0.10 (Saúl Ibarra Corretgé) * build: add DragonFly BSD support in autotools (Robin Hahling) * doc: document missing stream related structures (Saúl Ibarra Corretgé) * doc: clarify uv_loop_t.data field lifetime (Saúl Ibarra Corretgé) * doc: add documentation for missing functions and structures (Saúl Ibarra Corretgé) * doc: fix punctuation and grammar in README (Jeff Widman) * windows: return libuv error codes in uv_poll_init() (cjihrig) * unix, windows: add uv_fs_access() (cjihrig) * windows: fix netmask detection (Alexis Campailla) * unix, windows: don't include null byte in uv_cwd size (Saúl Ibarra Corretgé) * unix, windows: add uv_thread_equal (Tomasz Kołodziejski) * windows: fix fs_write with nbufs > 1 and offset (Unknown W. 
Brackets) 2014.10.21, Version 0.10.29 (Stable), 2d728542d3790183417f8f122a110693cd85db14 Changes since version 0.10.28: * darwin: allocate enough space for select() hack (Fedor Indutny) * linux: try epoll_pwait if epoll_wait is missing (Michael Hudson-Doyle) * windows: map ERROR_INVALID_DRIVE to UV_ENOENT (Saúl Ibarra Corretgé) 2014.09.18, Version 1.0.0-rc1 (Unstable), 0c28bbf7b42882853d1799ab96ff68b07f7f8d49 Changes since version 0.11.29: * windows: improve timer precision (Alexis Campailla) * build, gyp: set xcode flags (Recep ASLANTAS) * ignore: include m4 files which are created manually (Recep ASLANTAS) * build: add m4 for feature/flag-testing (Recep ASLANTAS) * ignore: ignore Xcode project and workspace files (Recep ASLANTAS) * unix: fix warnings about dollar symbol usage in identifiers (Recep ASLANTAS) * unix: fix warnings when loading functions with dlsym (Recep ASLANTAS) * linux: try epoll_pwait if epoll_wait is missing (Michael Hudson-Doyle) * test: add test for closing and recreating default loop (Saúl Ibarra Corretgé) * windows: properly close the default loop (Saúl Ibarra Corretgé) * version: add ability to specify a version suffix (Saúl Ibarra Corretgé) * doc: add API documentation (Saúl Ibarra Corretgé) * test: don't close connection on write error (Trevor Norris) * windows: further simplify the code for timers (Saúl Ibarra Corretgé) * gyp: remove UNLIMITED_SELECT from dependent define (Fedor Indutny) * darwin: allocate enough space for select() hack (Fedor Indutny) * unix, windows: don't allow a NULL callback on timers (Saúl Ibarra Corretgé) * windows: simplify code in uv_timer_again (Saúl Ibarra Corretgé) * test: use less requests on tcp-write-queue-order (Saúl Ibarra Corretgé) * unix: stop child process watcher after last one exits (Saúl Ibarra Corretgé) * unix: simplify how process handle queue is managed (Saúl Ibarra Corretgé) * windows: remove duplicated field (mattn) * core: add a reserved field to uv_handle_t and uv_req_t (Saúl Ibarra Corretgé) * windows: fix buffer leak after failed udp send (Bert Belder) * windows: make sure sockets and handles are reset on close (Saúl Ibarra Corretgé) * unix, windows: add uv_fileno (Saúl Ibarra Corretgé) * build: use same CFLAGS in autotools build as in gyp (Saúl Ibarra Corretgé) * build: remove unneeded define in uv.gyp (Saúl Ibarra Corretgé) * test: fix watcher_cross_stop on Windows (Saúl Ibarra Corretgé) * unix, windows: move includes for EAI constants (Saúl Ibarra Corretgé) * unix: fix exposing EAI_* glibc-isms (Saúl Ibarra Corretgé) * unix: fix tcp write after bad connect freezing (Andrius Bentkus) 2014.08.20, Version 0.11.29 (Unstable), 35451fed830807095bbae8ef981af004a4b9259e Changes since version 0.11.28: * windows: make uv_read_stop immediately stop reading (Jameson Nash) * windows: fix uv__getaddrinfo_translate_error (Alexis Campailla) * netbsd: fix build (Saúl Ibarra Corretgé) * unix, windows: add uv_recv_buffer_size and uv_send_buffer_size (Andrius Bentkus) * windows: add support for UNC paths on uv_spawn (Paul Goldsmith) * windows: replace use of inet_addr with uv_inet_pton (Saúl Ibarra Corretgé) * unix: replace some asserts with returning errors (Andrius Bentkus) * windows: use OpenBSD implementation for uv_fs_mkdtemp (Pavel Platto) * windows: fix GetNameInfoW error handling (Alexis Campailla) * fs: introduce uv_readdir_next() and report types (Fedor Indutny) * fs: extend reported types in uv_fs_readdir_next (Saúl Ibarra Corretgé) * unix: read on stream even when UV__POLLHUP set. 
(Julien Gilli) 2014.08.08, Version 0.11.28 (Unstable), fc9e2a0bc487b299c0cd3b2c9a23aeb554b5d8d1 Changes since version 0.11.27: * unix, windows: const-ify handle in uv_udp_getsockname (Rasmus Pedersen) * windows: use UV_ECANCELED for aborted TCP writes (Saúl Ibarra Corretgé) * windows: add more required environment variables (Jameson Nash) * windows: sort environment variables before calling CreateProcess (Jameson Nash) * unix, windows: move uv_loop_close out of assert (John Firebaugh) * windows: fix buffer overflow on uv__getnameinfo_work() (lilohuang) * windows: add uv_backend_timeout (Jameson Nash) * test: disable tcp_close_accept on Windows (Saúl Ibarra Corretgé) * windows: read the PATH env var of the child (Alex Crichton) * include: avoid using C++ 'template' reserved word (Iñaki Baz Castillo) * include: fix version number (Saúl Ibarra Corretgé) 2014.07.32, Version 0.11.27 (Unstable), ffe24f955032d060968ea0289af365006afed55e Changes since version 0.11.26: * unix, windows: use the same threadpool implementation (Saúl Ibarra Corretgé) * unix: use struct sockaddr_storage for target UDP addr (Saúl Ibarra Corretgé) * doc: add documentation to uv_udp_start_recv (Andrius Bentkus) * common: use common uv__count_bufs code (Andrius Bentkus) * unix, win: add send_queue_size and send_queue_count to uv_udp_t (Andrius Bentkus) * unix, win: add uv_udp_try_send (Andrius Bentkus) * unix: return UV_EAGAIN if uv_try_write cannot write any data (Saúl Ibarra Corretgé) * windows: fix compatibility with cygwin pipes (Jameson Nash) * windows: count queued bytes even if request completed immediately (Saúl Ibarra Corretgé) * windows: disable CRT debug handler on MinGW32 (Saúl Ibarra Corretgé) * windows: map ERROR_INVALID_DRIVE to UV_ENOENT (Saúl Ibarra Corretgé) * unix: try to write immediately in uv_udp_send (Saúl Ibarra Corretgé) * unix: remove incorrect assert (Saúl Ibarra Corretgé) * openbsd: avoid requiring privileges for uv_resident_set_memory (Aaron Bieber) * unix: guarantee write queue cb execution order in streams (Andrius Bentkus) * img: add logo files (Saúl Ibarra Corretgé) * aix: improve AIX compatibility (Andrew Low) * windows: return bind error immediately when implicitly binding (Saúl Ibarra Corretgé) * windows: don't use atexit for cleaning up the threadpool (Saúl Ibarra Corretgé) * windows: destroy work queue elements when closing a loop (Saúl Ibarra Corretgé) * unix, windows: add uv_fs_mkdtemp (Pavel Platto) * build: handle platforms without multiprocessing.synchronize (Saúl Ibarra Corretgé) * windows: change GENERIC_ALL to GENERIC_WRITE in fs__create_junction (Tony Kelman) * windows: relay TCP bind errors via ipc (Alexis Campailla) 2014.07.32, Version 0.10.28 (Stable), 9c14b616f5fb84bfd7d45707bab4bbb85894443e Changes since version 0.10.27: * windows: fix handling closed socket while poll handle is closing (Saúl Ibarra Corretgé) * unix: return system error on EAI_SYSTEM (Saúl Ibarra Corretgé) * unix: fix bogus structure field name (Saúl Ibarra Corretgé) * darwin: invoke `mach_timebase_info` only once (Fedor Indutny) 2014.06.28, Version 0.11.26 (Unstable), 115281a1058c4034d5c5ccedacb667fe3f6327ea Changes since version 0.11.25: * windows: add VT100 codes ?25l and ?25h (JD Ballard) * windows: add invert ANSI (7 / 27) emulation (JD Ballard) * unix: fix handling error on UDP socket creation (Saúl Ibarra Corretgé) * unix, windows: getnameinfo implementation (Rasmus Pedersen) * heap: fix `heap_remove()` (Fedor Indutny) * unix, windows: fix parsing scoped IPv6 addresses (Saúl Ibarra Corretgé) *
windows: fix handling closed socket while poll handle is closing (Saúl Ibarra Corretgé) * thread: barrier functions (Ben Noordhuis) * windows: fix PYTHON environment variable usage (Jay Satiro) * unix, windows: return system error on EAI_SYSTEM (Saúl Ibarra Corretgé) * windows: fix handling closed socket while poll handle is closing (Saúl Ibarra Corretgé) * unix: don't run i/o callbacks after prepare callbacks (Saúl Ibarra Corretgé) * windows: add tty unicode support for input (Peter Atashian) * header: introduce `uv_loop_size()` (Andrius Bentkus) * darwin: invoke `mach_timebase_info` only once (Fedor Indutny) 2014.05.02, Version 0.11.25 (Unstable), 2acd544cff7142e06aa3b09ec64b4a33dd9ab996 Changes since version 0.11.24: * osx: pass const handle pointer to uv___stream_fd (Chernyshev Viacheslav) * unix, windows: pass const handle ptr to uv_tcp_get*name (Chernyshev Viacheslav) * common: pass const sockaddr ptr to uv_ip*_name (Chernyshev Viacheslav) * unix, windows: validate flags on uv_udp|tcp_bind (Saúl Ibarra Corretgé) * unix: handle case when addr is not initialized after recvmsg (Saúl Ibarra Corretgé) * unix, windows: uv_now constness (Rasmus Pedersen) 2014.04.15, Version 0.11.24 (Unstable), ed948c29f6e8c290f79325a6f0bc9ef35bcde644 Changes since version 0.11.23: * linux: reduce file descriptor count of async pipe (Ben Noordhuis) * sunos: support IPv6 qualified link-local addresses (Saúl Ibarra Corretgé) * windows: fix opening of read-only stdin pipes (Alexis Campailla) * windows: Fix an infinite loop in uv_spawn (Alex Crichton) * windows: fix console signal handler refcount (李港平) * inet: allow scopeid in uv_inet_pton (Fedor Indutny) 2014.04.07, Version 0.11.23 (Unstable), e54de537efcacd593f36fcaaf8b4cb9e64313275 Changes since version 0.11.22: * fs: avoid using readv/writev where possible (Fedor Indutny) * mingw: fix build with autotools (Saúl Ibarra Corretgé) * bsd: support IPv6 qualified link-local addresses (Saúl Ibarra Corretgé) * unix: add UV_HANDLE_IPV6 flag to tcp and udp handles (Saúl Ibarra Corretgé) * unix, windows: do not set SO_REUSEADDR by default on udp (Saúl Ibarra Corretgé) * windows: fix check in uv_tty_endgame() (Maks Naumov) * unix, windows: add IPv6 support for uv_udp_multicast_interface (Saúl Ibarra Corretgé) * unix: fallback to blocking writes if reopening a tty fails (Saúl Ibarra Corretgé) * unix: fix handling uv__open_cloexec failure (Saúl Ibarra Corretgé) * unix, windows: add IPv6 support to uv_udp_set_membership (Saúl Ibarra Corretgé) * unix, windows: removed unused status parameter (Saúl Ibarra Corretgé) * android: add support of ifaddrs in android (Javier Hernández) * build: fix SunOS and AIX build with autotools (Saúl Ibarra Corretgé) * build: freebsd link with libelf if dtrace enabled (Saúl Ibarra Corretgé) * stream: do not leak `alloc_cb` buffers on error (Fedor Indutny) * unix: fix setting written size on uv_wd (Saúl Ibarra Corretgé) 2014.03.11, Version 0.11.22 (Unstable), cd0c19b1d3c56acf0ade7687006e12e75fbda36d Changes since version 0.11.21: * unix, windows: map ERANGE errno (Saúl Ibarra Corretgé) * unix, windows: make uv_cwd be consistent with uv_exepath (Saúl Ibarra Corretgé) * process: remove debug perror() prints (Fedor Indutny) * windows: fall back for volume info query (Isaiah Norton) * pipe: allow queueing pending handles (Fedor Indutny) * windows: fix winsock status codes for address errors (Raul Martins) * windows: Remove unused variable from uv__pipe_insert_pending_socket (David Capello) * unix: workaround broken pthread_sigmask on Android (Paul 
Tan) * error: add ENXIO for O_NONBLOCK FIFO open() (Fedor Indutny) * freebsd: use accept4, introduced in version 10 (Saúl Ibarra Corretgé) * windows: fix warnings of MinGW -Wall -O3 (StarWing) * openbsd, osx: fix compilation warning on scandir (Saúl Ibarra Corretgé) * linux: always deregister closing fds from epoll (Geoffry Song) * unix: reopen tty as /dev/tty (Saúl Ibarra Corretgé) * kqueue: invalidate fd in uv_fs_event_t (Fedor Indutny) 2014.02.28, Version 0.11.21 (Unstable), 3ef958158ae1019e027ebaa93114160099db5206 Changes since version 0.11.20: * unix: fix uv_fs_write when using an empty buffer (Saúl Ibarra Corretgé) * unix, windows: add assertion in uv_loop_delete (Saúl Ibarra Corretgé) 2014.02.27, Version 0.11.20 (Unstable), 88355e081b51c69ee1e2b6b0015a4e3d38bd0579 Changes since version 0.11.19: * stream: start thread after assignments (Oguz Bastemur) * fs: `uv__cloexec()` opened fd (Fedor Indutny) * gyp: qualify `library` variable (Fedor Indutny) * unix, win: add uv_udp_set_multicast_interface() (Austin Foxley) * unix: fix uv_tcp_nodelay return value in case of error (Saúl Ibarra Corretgé) * unix: call setgoups before calling setuid/setgid (Saúl Ibarra Corretgé) * include: mark close_cb field as private (Saúl Ibarra Corretgé) * unix, windows: map EFBIG errno (Saúl Ibarra Corretgé) * unix: correct error when calling uv_shutdown twice (Keno Fischer) * windows: fix building on MinGW (Alex Crichton) * windows: always initialize uv_process_t (Alex Crichton) * include: expose libuv version in header files (Saúl Ibarra Corretgé) * fs: vectored IO API for filesystem read/write (Benjamin Saunders) * windows: freeze in uv_tcp_endgame (Alexis Campailla) * sunos: handle rearm errors (Fedor Indutny) * unix: use a heap for timers (Ben Noordhuis) * linux: always deregister closing fds from epoll (Geoffry Song) * linux: include grp.h for setgroups() (William Light) * unix, windows: add uv_loop_init and uv_loop_close (Saúl Ibarra Corretgé) * unix, windows: add uv_getrusage() function (Oleg Efimov) * win: minor error handle fix to uv_pipe_write_impl (Rasmus Pedersen) * heap: fix node removal (Keno Fischer) * win: fix C99/C++ comment (Rasmus Pedersen) * fs: vectored IO API for filesystem read/write (Benjamin Saunders) * unix, windows: add uv_pipe_getsockname (Saúl Ibarra Corretgé) * unix, windows: map ENOPROTOOPT errno (Saúl Ibarra Corretgé) * errno: add ETXTBSY (Fedor Indutny) * fsevent: rename filename field to path (Saúl Ibarra Corretgé) * unix, windows: add uv_fs_event_getpath (Saúl Ibarra Corretgé) * unix, windows: add uv_fs_poll_getpath (Saúl Ibarra Corretgé) * unix, windows: map ERANGE errno (Saúl Ibarra Corretgé) * unix, windows: set required size on UV_ENOBUFS (Saúl Ibarra Corretgé) * unix, windows: clarify what uv_stream_set_blocking does (Saúl Ibarra Corretgé) * fs: use preadv on Linux if available (Brian White) 2014.01.30, Version 0.11.19 (Unstable), 336a1825309744f920230ec3e427e78571772347 Changes since version 0.11.18: * linux: move sscanf() out of the assert() (Trevor Norris) * linux: fix C99/C++ comment (Fedor Indutny) 2014.05.02, Version 0.10.27 (Stable), 6e24ce23b1e7576059f85a608eca13b766458a01 Changes since version 0.10.26: * windows: fix console signal handler refcount (Saúl Ibarra Corretgé) * win: always leave crit section in get_proc_title (Fedor Indutny) 2014.04.07, Version 0.10.26 (Stable), d864907611c25ec986c5e77d4d6d6dee88f26926 Changes since version 0.10.25: * process: don't close stdio fds during spawn (Tonis Tiigi) * build, windows: do not fail on Windows SDK Prompt (Marc 
Schlaich) * build, windows: fix x64 configuration issue (Marc Schlaich) * win: fix buffer leak on error in pipe.c (Fedor Indutny) * kqueue: invalidate fd in uv_fs_event_t (Fedor Indutny) * linux: always deregister closing fds from epoll (Geoffry Song) * error: add ENXIO for O_NONBLOCK FIFO open() (Fedor Indutny) 2014.02.19, Version 0.10.25 (Stable), d778dc588507588b12b9f9d2905078db542ed751 Changes since version 0.10.24: * stream: start thread after assignments (Oguz Bastemur) * unix: correct error when calling uv_shutdown twice (Saúl Ibarra Corretgé) 2014.01.30, Version 0.10.24 (Stable), aecd296b6bce9b40f06a61c5c94e43d45ac7308a Changes since version 0.10.23: * linux: move sscanf() out of the assert() (Trevor Norris) * linux: fix C99/C++ comment (Fedor Indutny) 2014.01.23, Version 0.11.18 (Unstable), d47962e9d93d4a55a9984623feaf546406c9cdbb Changes since version 0.11.17: * osx: Fix a possible segfault in uv__io_poll (Alex Crichton) * windows: improved handling of invalid FDs (Alexis Campailla) * doc: adding ARCHS flag to OS X build command (Nathan Sweet) * tcp: reveal bind-time errors before listen (Alexis Campailla) * tcp: uv_tcp_dualstack() (Fedor Indutny) * linux: relax assumption on /proc/stat parsing (Luca Bruno) * openbsd: fix obvious bug in uv_cpu_info (Fedor Indutny) * process: close stdio after dup2'ing it (Fedor Indutny) * linux: move sscanf() out of the assert() (Trevor Norris) 2014.01.23, Version 0.10.23 (Stable), dbd218e699fec8be311d85e4788be9e28ae884f8 Changes since version 0.10.22: * linux: relax assumption on /proc/stat parsing (Luca Bruno) * openbsd: fix obvious bug in uv_cpu_info (Fedor Indutny) * process: close stdio after dup2'ing it (Fedor Indutny) 2014.01.08, Version 0.10.22 (Stable), f526c90eeff271d9323a9107b9a64a4671fd3103 Changes since version 0.10.21: * windows: avoid assertion failure when pipe server is closed (Bert Belder) 2013.12.32, Version 0.11.17 (Unstable), 589c224d4c2e79fec65db01d361948f1e4976858 Changes since version 0.11.16: * stream: allow multiple buffers for uv_try_write (Fedor Indutny) * unix: fix a possible memory leak in uv_fs_readdir (Alex Crichton) * unix, windows: add uv_loop_alive() function (Sam Roberts) * windows: avoid assertion failure when pipe server is closed (Bert Belder) * osx: Fix a possible segfault in uv__io_poll (Alex Crichton) * stream: fix uv__stream_osx_select (Fedor Indutny) 2013.12.14, Version 0.11.16 (Unstable), ae0ed8c49d0d313c935c22077511148b6e8408a4 Changes since version 0.11.15: * fsevents: remove kFSEventStreamCreateFlagNoDefer polyfill (ci-innoq) * libuv: add more getaddrinfo errors (Steven Kabbes) * unix: fix accept() EMFILE error handling (Ben Noordhuis) * linux: fix up SO_REUSEPORT back-port (Ben Noordhuis) * fsevents: fix subfolder check (Fedor Indutny) * fsevents: fix invalid memory access (huxingyi) * windows/timer: fix uv_hrtime discontinuity (Bert Belder) * unix: fix various memory leaks and undef behavior (Fedor Indutny) * unix, windows: always update loop time (Saúl Ibarra Corretgé) * windows: translate system errors in uv_spawn (Alexis Campailla) * windows: uv_spawn code refactor (Alexis Campailla) * unix, windows: detect errors in uv_ip4/6_addr (Yorkie) * stream: introduce uv_try_write(...) 
(Fedor Indutny) 2013.12.13, Version 0.10.20 (Stable), 04141464dd0fba90ace9aa6f7003ce139b888a40 Changes since version 0.10.19: * linux: fix up SO_REUSEPORT back-port (Ben Noordhuis) * fs-event: fix invalid memory access (huxingyi) 2013.11.21, Version 0.11.15 (Unstable), bfe645ed7e99ca5670d9279ad472b604c129d2e5 Changes since version 0.11.14: * fsevents: report errors to user (Fedor Indutny) * include: UV_FS_EVENT_RECURSIVE is a flag (Fedor Indutny) * linux: use CLOCK_MONOTONIC_COARSE if available (Ben Noordhuis) * build: make systemtap probes work with gyp build (Ben Noordhuis) * unix: update events from pevents between polls (Fedor Indutny) * fsevents: support Japanese characters in path (Chris Bank) * linux: don't turn on SO_REUSEPORT socket option (Ben Noordhuis) * queue: strengthen type checks (Ben Noordhuis) * include: remove uv_strlcat() and uv_strlcpy() (Ben Noordhuis) * build: fix windows smp build with gyp (Geert Jansen) * unix: return exec errors from uv_spawn, not async (Alex Crichton) * fsevents: use native character encoding file paths (Ben Noordhuis) * linux: handle EPOLLHUP without EPOLLIN/EPOLLOUT (Ben Noordhuis) * windows: use _snwprintf(), not swprintf() (Ben Noordhuis) * fsevents: use FlagNoDefer for FSEventStreamCreate (Fedor Indutny) * unix: fix reopened fd bug (Fedor Indutny) * core: fix fake watcher list and count preservation (Fedor Indutny) * unix: set close-on-exec flag on received fds (Ben Noordhuis) * netbsd, openbsd: enable futimes() wrapper (Ben Noordhuis) * unix: nicer error message when kqueue() fails (Ben Noordhuis) * samples: add socks5 proxy sample application (Ben Noordhuis) 2013.11.13, Version 0.10.19 (Stable), 33959f7524090b8d2c6c41e2400ca77e31755059 Changes since version 0.10.18: * darwin: avoid calling GetCurrentProcess (Fedor Indutny) * unix: update events from pevents between polls (Fedor Indutny) * fsevents: support Japanese characters in path (Chris Bank) * linux: don't turn on SO_REUSEPORT socket option (Ben Noordhuis) * build: fix windows smp build with gyp (Geert Jansen) * linux: handle EPOLLHUP without EPOLLIN/EPOLLOUT (Ben Noordhuis) * unix: fix reopened fd bug (Fedor Indutny) * core: fix fake watcher list and count preservation (Fedor Indutny) 2013.10.30, Version 0.11.14 (Unstable), d7a6482f45c1b4eb4a853dbe1a9ce8090a35633a Changes since version 0.11.13: * darwin: create fsevents thread on demand (Ben Noordhuis) * fsevents: FSEvents is most likely not thread-safe (Fedor Indutny) * fsevents: use shared FSEventStream (Fedor Indutny) * windows: make uv_fs_chmod() report errors correctly (Bert Belder) * windows: make uv_shutdown() for write-only pipes work (Bert Belder) * windows/fs: wrap multi-statement macros in do..while block (Bert Belder) * windows/fs: make uv_fs_open() report EINVAL correctly (Bert Belder) * windows/fs: handle _open_osfhandle() failure correctly (Bert Belder) * windows/fs: wrap multi-statement macros in do..while block (Bert Belder) * windows/fs: make uv_fs_open() report EINVAL correctly (Bert Belder) * windows/fs: handle _open_osfhandle() failure correctly (Bert Belder) * build: clarify instructions for Windows (Brian Kaisner) * build: remove GCC_WARN_ABOUT_MISSING_NEWLINE (Ben Noordhuis) * darwin: fix 10.6 build error in fsevents.c (Ben Noordhuis) * windows: run close callbacks after polling for i/o (Saúl Ibarra Corretgé) * include: clarify uv_tcp_bind() behavior (Ben Noordhuis) * include: clean up includes in uv.h (Ben Noordhuis) * include: remove UV_IO_PRIVATE_FIELDS macro (Ben Noordhuis) * include: fix typo in comment
in uv.h (Ben Noordhuis) * include: update uv_is_active() documentation (Ben Noordhuis) * include: make uv_process_options_t.cwd const (Ben Noordhuis) * unix: wrap long lines at 80 columns (Ben Noordhuis) * unix, windows: make uv_is_*() always return 0 or 1 (Ben Noordhuis) * bench: measure total/init/dispatch/cleanup times (Ben Noordhuis) * build: use -pthread on sunos (Timothy J. Fontaine) * windows: remove duplicate check in stream.c (Ben Noordhuis) * unix: sanity-check fds before closing (Ben Noordhuis) * unix: remove uv__pipe_accept() (Ben Noordhuis) * unix: fix uv_spawn() NULL pointer deref on ENOMEM (Ben Noordhuis) * unix: don't close inherited fds on uv_spawn() fail (Ben Noordhuis) * unix: revert recent FSEvent changes (Ben Noordhuis) * fsevents: fix clever rescheduling (Fedor Indutny) * linux: ignore fractional time in uv_uptime() (Ben Noordhuis) * unix: fix SIGCHLD waitpid() race in process.c (Ben Noordhuis) * unix, windows: add uv_fs_event_start/stop functions (Saúl Ibarra Corretgé) * unix: fix non-synchronized access in signal.c (Ben Noordhuis) * unix: add atomic-ops.h (Ben Noordhuis) * unix: add spinlock.h (Ben Noordhuis) * unix: clean up uv_tty_set_mode() a little (Ben Noordhuis) * unix: make uv_tty_reset_mode() async signal-safe (Ben Noordhuis) * include: add E2BIG status code mapping (Ben Noordhuis) * windows: fix duplicate case build error (Ben Noordhuis) * windows: remove unneeded check (Saúl Ibarra Corretgé) * include: document pipe path truncation behavior (Ben Noordhuis) * fsevents: increase stack size for OSX 10.9 (Fedor Indutny) * windows: _snprintf expected wrong parameter type in string (Maks Naumov) * windows: "else" keyword is missing (Maks Naumov) * windows: incorrect check for SOCKET_ERROR (Maks Naumov) * windows: add stdlib.h to satisfy reference to abort (Sean Farrell) * build: fix check target for mingw (Sean Farrell) * unix: move uv_shutdown() assertion (Keno Fischer) * darwin: avoid calling GetCurrentProcess (Fedor Indutny) 2013.10.19, Version 0.10.18 (Stable), 9ec52963b585e822e87bdc5de28d6143aff0d2e5 Changes since version 0.10.17: * unix: fix uv_spawn() NULL pointer deref on ENOMEM (Ben Noordhuis) * unix: don't close inherited fds on uv_spawn() fail (Ben Noordhuis) * unix: revert recent FSEvent changes (Ben Noordhuis) * unix: fix non-synchronized access in signal.c (Ben Noordhuis) 2013.09.25, Version 0.10.17 (Stable), 9670e0a93540c2f0d86c84a375f2303383c11e7e Changes since version 0.10.16: * build: remove GCC_WARN_ABOUT_MISSING_NEWLINE (Ben Noordhuis) * darwin: fix 10.6 build error in fsevents.c (Ben Noordhuis) 2013.09.06, Version 0.10.16 (Stable), 2bce230d81f4853a23662cbeb26fe98010b1084b Changes since version 0.10.15: * windows: make uv_shutdown() for write-only pipes work (Bert Belder) * windows: make uv_fs_open() report EINVAL when invalid arguments are passed (Bert Belder) * windows: make uv_fs_open() report _open_osfhandle() failure correctly (Bert Belder) * windows: make uv_fs_chmod() report errors correctly (Bert Belder) * windows: wrap multi-statement macros in do..while block (Bert Belder) 2013.09.05, Version 0.11.13 (Unstable), f5b6db6c1d7f93d28281207fd47c3841c9a9792e Changes since version 0.11.12: * unix: define _GNU_SOURCE, exposes glibc-isms (Ben Noordhuis) * windows: check for nonconforming swprintf arguments (Brent Cook) * build: include internal headers in source list (Brent Cook) * include: merge uv_tcp_bind and uv_tcp_bind6 (Ben Noordhuis) * include: merge uv_tcp_connect and uv_tcp_connect6 (Ben Noordhuis) * include: merge uv_udp_bind and 
uv_udp_bind6 (Ben Noordhuis) * include: merge uv_udp_send and uv_udp_send6 (Ben Noordhuis) 2013.09.03, Version 0.11.12 (Unstable), 82d01d5f6780d178f5176a01425ec297583c0811 Changes since version 0.11.11: * test: fix epoll_wait() usage in test-embed.c (Ben Noordhuis) * include: uv_alloc_cb now takes uv_buf_t* (Ben Noordhuis) * include: uv_read{2}_cb now takes const uv_buf_t* (Ben Noordhuis) * include: uv_ip[46]_addr now takes sockaddr_in* (Ben Noordhuis) * include: uv_tcp_bind{6} now takes sockaddr_in* (Ben Noordhuis) * include: uv_tcp_connect{6} now takes sockaddr_in* (Ben Noordhuis) * include: uv_udp_recv_cb now takes const uv_buf_t* (Ben Noordhuis) * include: uv_udp_bind{6} now takes sockaddr_in* (Ben Noordhuis) * include: uv_udp_send{6} now takes sockaddr_in* (Ben Noordhuis) * include: uv_spawn takes const uv_process_options_t* (Ben Noordhuis) * include: make uv_write{2} const correct (Ben Noordhuis) * windows: fix flags assignment in uv_fs_readdir() (Ben Noordhuis) * windows: fix stray comments (Ben Noordhuis) * windows: remove unused is_path_dir() function (Ben Noordhuis) 2013.08.30, Version 0.11.11 (Unstable), ba876d53539ed0427c52039012419cd9374c6f0d Changes since version 0.11.10: * unix, windows: add thread-local storage API (Ben Noordhuis) * linux: don't turn on SO_REUSEPORT socket option (Ben Noordhuis) * darwin: fix 10.6 build error in fsevents.c (Ben Noordhuis) * windows: make uv_shutdown() for write-only pipes work (Bert Belder) * include: update uv_udp_open() / uv_udp_bind() docs (Ben Noordhuis) * unix: req queue must be empty when destroying loop (Ben Noordhuis) * unix: move loop functions from core.c to loop.c (Ben Noordhuis) * darwin: remove CoreFoundation dependency (Ben Noordhuis) * windows: make autotools build system work with mingw (Keno Fischer) * windows: fix mingw build (Alex Crichton) * windows: tweak Makefile.mingw for easier usage (Alex Crichton) * build: remove _GNU_SOURCE macro definition (Ben Noordhuis) 2013.08.25, Version 0.11.10 (Unstable), 742dadcb7154cc7bb89c0c228a223b767a36cf0d * windows: Re-implement uv_fs_stat. The st_ctime field now contains the change time, not the creation time, like on unix systems. st_dev, st_ino, st_blocks and st_blksize are now also filled out. (Bert Belder) * linux: fix setsockopt(SO_REUSEPORT) error handling (Ben Noordhuis) * windows: report uv_process_t exit code correctly (Bert Belder) * windows: make uv_fs_chmod() report errors correctly (Bert Belder) * windows: make some more NT apis available for libuv's internal use (Bert Belder) * windows: squelch some compiler warnings (Bert Belder) 2013.08.24, Version 0.11.9 (Unstable), a2d29b5b068cbac93dc16138fb30a74e2669daad Changes since version 0.11.8: * fsevents: share FSEventStream between multiple FS watchers, which removes a limit on the maximum number of file watchers that can be created on OS X. (Fedor Indutny) * process: the `exit_status` parameter for a uv_process_t's exit callback now is an int64_t, and no longer an int. (Bert Belder) * process: make uv_spawn() return some types of errors immediately on windows, instead of passing the error code the the exit callback. This brings it on par with libuv's behavior on unix. (Bert Belder) 2013.08.24, Version 0.10.15 (Stable), 221078a8fdd9b853c6b557b3d9a5dd744b4fdd6b Changes since version 0.10.14: * fsevents: create FSEvents thread on demand (Ben Noordhuis) * fsevents: use a single thread for interacting with FSEvents, because it's not thread-safe. 
(Fedor Indutny) * fsevents: share FSEventStream between multiple FS watchers, which removes a limit on the maximum number of file watchers that can be created on OS X. (Fedor Indutny) 2013.08.22, Version 0.11.8 (Unstable), a5260462db80ab0deab6b9e6a8991dd8f5a9a2f8 Changes since version 0.11.7: * unix: fix missing return value warning in stream.c (Ben Noordhuis) * build: serial-tests was added in automake v1.12 (Ben Noordhuis) * windows: fix uninitialized local variable warning (Ben Noordhuis) * windows: fix missing return value warning (Ben Noordhuis) * build: fix string comparisons in autogen.sh (Ben Noordhuis) * windows: move INLINE macro, remove UNUSED (Ben Noordhuis) * unix: clean up __attribute__((quux)) usage (Ben Noordhuis) * sunos: remove futimes() macro (Ben Noordhuis) * unix: fix uv__signal_unlock() prototype (Ben Noordhuis) * unix, windows: allow NULL async callback (Ben Noordhuis) * build: apply dtrace -G to all object files (Timothy J. Fontaine) * darwin: fix indentation in uv__hrtime() (Ben Noordhuis) * darwin: create fsevents thread on demand (Ben Noordhuis) * darwin: reduce fsevents thread stack size (Ben Noordhuis) * darwin: call pthread_setname_np() if available (Ben Noordhuis) * build: fix automake serial-tests check again (Ben Noordhuis) * unix: retry waitpid() on EINTR (Ben Noordhuis) * darwin: fix ios build error (Ben Noordhuis) * darwin: fix ios compiler warning (Ben Noordhuis) * test: simplify test-ip6-addr.c (Ben Noordhuis) * unix, windows: fix ipv6 link-local address parsing (Ben Noordhuis) * fsevents: FSEvents is most likely not thread-safe (Fedor Indutny) * windows: omit stdint.h, fix msvc 2008 build error (Ben Noordhuis) 2013.08.22, Version 0.10.14 (Stable), 15d64132151c18b26346afa892444b95e2addad0 Changes since version 0.10.13: * unix: retry waitpid() on EINTR (Ben Noordhuis) 2013.08.07, Version 0.11.7 (Unstable), 3cad361f8776f70941b39d65bd9426bcb1aa817b Changes since version 0.11.6: * unix, windows: fix uv_fs_chown() function prototype (Ben Noordhuis) * unix, windows: remove unused variables (Brian White) * test: fix signed/unsigned comparison warnings (Ben Noordhuis) * build: dtrace shouldn't break out of tree builds (Timothy J. 
Fontaine) * unix, windows: don't read/recv if buf.len==0 (Ben Noordhuis) * build: add mingw makefile (Ben Noordhuis) * unix, windows: add MAC to uv_interface_addresses() (Brian White) * build: enable AM_INIT_AUTOMAKE([subdir-objects]) (Ben Noordhuis) * unix, windows: make buf arg to uv_fs_write const (Ben Noordhuis) * sunos: fix build breakage introduced in e3a657c (Ben Noordhuis) * aix: fix build breakage introduced in 3ee4d3f (Ben Noordhuis) * windows: fix mingw32 build, define JOB_OBJECT_XXX (Yasuhiro Matsumoto) * windows: fix mingw32 build, include limits.h (Yasuhiro Matsumoto) * test: replace sprintf() with snprintf() (Ben Noordhuis) * test: replace strcpy() with strncpy() (Ben Noordhuis) * openbsd: fix uv_ip6_addr() unused variable warnings (Ben Noordhuis) * openbsd: fix dlerror() const correctness warning (Ben Noordhuis) * openbsd: fix uv_fs_sendfile() unused variable warnings (Ben Noordhuis) * build: disable parallel automake tests (Ben Noordhuis) * test: add windows-only snprintf() function (Ben Noordhuis) * build: add automake serial-tests version check (Ben Noordhuis) 2013.07.26, Version 0.10.13 (Stable), 381312e1fe6fecbabc943ccd56f0e7d114b3d064 Changes since version 0.10.12: * unix, windows: fix uv_fs_chown() function prototype (Ben Noordhuis) 2013.07.21, Version 0.11.6 (Unstable), 6645b93273e0553d23823c576573b82b129bf28c Changes since version 0.11.5: * test: open stdout fd in write-only mode (Ben Noordhuis) * windows: uv_spawn shouldn't reject reparse points (Bert Belder) * windows: use WSAGetLastError(), not errno (Ben Noordhuis) * build: darwin: disable -fstrict-aliasing warnings (Ben Noordhuis) * test: fix signed/unsigned compiler warning (Ben Noordhuis) * test: add 'start timer from check handle' test (Ben Noordhuis) * build: `all` now builds static and dynamic lib (Ben Noordhuis) * unix, windows: add extra fields to uv_stat_t (Saúl Ibarra Corretgé) * build: add install target to the makefile (Navaneeth Kedaram Nambiathan) * build: switch to autotools (Ben Noordhuis) * build: use AM_PROG_AR conditionally (Ben Noordhuis) * test: fix fs_fstat test on sunos (Ben Noordhuis) * test: fix fs_chown when running as root (Ben Noordhuis) * test: fix spawn_setgid_fails and spawn_setuid_fails (Ben Noordhuis) * build: use AM_SILENT_RULES conditionally (Ben Noordhuis) * build: add DTrace detection for autotools (Timothy J. 
Fontaine) * linux,darwin,win: link-local IPv6 addresses (Miroslav Bajtoš) * unix: fix build when !defined(PTHREAD_MUTEX_ERRORCHECK) (Ben Noordhuis) * unix, windows: return error codes directly (Ben Noordhuis) 2013.07.10, Version 0.10.12 (Stable), 58a46221bba726746887a661a9f36fe9ff204209 Changes since version 0.10.11: * linux: add support for MIPS (Andrei Sedoi) * windows: uv_spawn shouldn't reject reparse points (Bert Belder) * windows: use WSAGetLastError(), not errno (Ben Noordhuis) * build: darwin: disable -fstrict-aliasing warnings (Ben Noordhuis) * build: `all` now builds static and dynamic lib (Ben Noordhuis) * unix: fix build when !defined(PTHREAD_MUTEX_ERRORCHECK) (Ben Noordhuis) 2013.06.27, Version 0.11.5 (Unstable), e3c63ff1627a14e96f54c1c62b0d68b446d8425b Changes since version 0.11.4: * build: remove CSTDFLAG, use only CFLAGS (Ben Noordhuis) * unix: support for android builds (Linus Mårtensson) * unix: avoid extra read, short-circuit on POLLHUP (Ben Noordhuis) * uv: support android libuv standalone build (Linus Mårtensson) * src: make queue.h c++ compatible (Ben Noordhuis) * unix: s/ngx-queue.h/queue.h/ in checksparse.sh (Ben Noordhuis) * unix: unconditionally stop handle on close (Ben Noordhuis) * freebsd: don't enable dtrace if it's not available (Brian White) * build: make HAVE_DTRACE=0 should disable dtrace (Timothy J. Fontaine) * unix: remove overzealous assert (Ben Noordhuis) * unix: remove unused function uv_fatal_error() (Ben Noordhuis) * unix, windows: clean up uv_thread_create() (Ben Noordhuis) * queue: fix pointer truncation on LLP64 platforms (Bert Belder) * build: set OS=="android" for android builds (Linus Mårtensson) * windows: don't use uppercase in include filename (Ben Noordhuis) * stream: add an API to make streams do blocking writes (Henry Rawas) * windows: use WSAGetLastError(), not errno (Ben Noordhuis) 2013.06.13, Version 0.10.11 (Stable), c3b75406a66a10222a589cb173e8f469e9665c7e Changes since version 0.10.10: * unix: unconditionally stop handle on close (Ben Noordhuis) * freebsd: don't enable dtrace if it's not available (Brian White) * build: make HAVE_DTRACE=0 should disable dtrace (Timothy J. 
Fontaine) * unix: remove overzealous assert (Ben Noordhuis) * unix: clear UV_STREAM_SHUTTING after shutdown() (Ben Noordhuis) * unix: fix busy loop, write if POLLERR or POLLHUP (Ben Noordhuis) 2013.06.05, Version 0.10.10 (Stable), 0d95a88bd35fce93863c57a460be613aea34d2c5 Changes since version 0.10.9: * include: document uv_update_time() and uv_now() (Ben Noordhuis) * linux: fix cpu model parsing on newer arm kernels (Ben Noordhuis) * linux: fix a memory leak in uv_cpu_info() error path (Ben Noordhuis) * linux: don't ignore out-of-memory errors in uv_cpu_info() (Ben Noordhuis) * unix, windows: move uv_now() to uv-common.c (Ben Noordhuis) * test: fix a compilation problem in test-osx-select.c that was caused by the use of c-style comments (Bert Belder) * darwin: use uv_fs_sendfile() use the sendfile api correctly (Wynn Wilkes) 2013.05.30, Version 0.11.4 (Unstable), e43e5b3d954a0989db5588aa110e1fe4fe6e0219 Changes since version 0.11.3: * windows: make uv_spawn not fail when the libuv embedding application is run under external job control (Bert Belder) * darwin: assume CFRunLoopStop() isn't thread-safe, fixing a race condition when stopping the 'stdin select hack' thread (Fedor Indutny) * win: fix UV_EALREADY not being reported correctly to the libuv user in some cases (Bert Belder) * darwin: make the uv__cf_loop_runner and uv__cf_loop_cb functions static (Ben Noordhuis) * darwin: task_info() cannot fail (Ben Noordhuis) * unix: add error mapping for ENETDOWN (Ben Noordhuis) * unix: implicitly signal write errors to the libuv user (Ben Noordhuis) * unix: fix assertion error on signal pipe overflow (Bert Belder) * unix: turn off POLLOUT after stream connect (Ben Noordhuis) * unix: fix stream refcounting buglet (Ben Noordhuis) * unix: remove assert statements that are no longer correct (Ben Noordhuis) * unix: appease warning about non-standard `inline` (Sean Silva) * unix: add uv__is_closing() macro (Ben Noordhuis) * unix: stop stream POLLOUT watcher on write error (Ben Noordhuis) * include: document uv_update_time() and uv_now() (Ben Noordhuis) * linux: fix cpu model parsing on newer arm kernels (Ben Noordhuis) * linux: fix a memory leak in uv_cpu_info() error path (Ben Noordhuis) * linux: don't ignore out-of-memory errors in uv_cpu_info() (Ben Noordhuis) * unix, windows: move uv_now() to uv-common.c (Ben Noordhuis) * test: fix a compilation problem in test-osx-select.c that was caused by the use of c-style comments (Bert Belder) * darwin: use uv_fs_sendfile() use the sendfile api correctly (Wynn Wilkes) * windows: call idle handles on every loop iteration, something the unix implementation already did (Bert Belder) * test: update the idle-starvation test to verify that idle handles are called in every loop iteration (Bert Belder) * unix, windows: ensure that uv_run() in RUN_ONCE mode calls timers that expire after blocking (Ben Noordhuis) 2013.05.29, Version 0.10.9 (Stable), a195f9ace23d92345baf57582678bfc3017e6632 Changes since version 0.10.8: * unix: fix stream refcounting buglet (Ben Noordhuis) * unix: remove erroneous asserts (Ben Noordhuis) * unix: add uv__is_closing() macro (Ben Noordhuis) * unix: stop stream POLLOUT watcher on write error (Ben Noordhuis) 2013.05.25, Version 0.10.8 (Stable), 0f39be12926fe2d8766a9f025797a473003e6504 Changes since version 0.10.7: * windows: make uv_spawn not fail under job control (Bert Belder) * darwin: assume CFRunLoopStop() isn't thread-safe (Fedor Indutny) * win: fix UV_EALREADY incorrectly set (Bert Belder) * darwin: make two uv__cf_*() functions 
static (Ben Noordhuis) * darwin: task_info() cannot fail (Ben Noordhuis) * unix: add mapping for ENETDOWN (Ben Noordhuis) * unix: implicitly signal write errors to libuv user (Ben Noordhuis) * unix: fix assert on signal pipe overflow (Bert Belder) * unix: turn off POLLOUT after stream connect (Ben Noordhuis) 2013.05.16, Version 0.11.3 (Unstable), 0a48c05b5988aea84c605751900926fa25443b34 Changes since version 0.11.2: * unix: clean up uv_accept() (Ben Noordhuis) * unix: remove errno preserving code (Ben Noordhuis) * darwin: fix ios build, don't require ApplicationServices (Ben Noordhuis) * windows: kill child processes when the parent dies (Bert Belder) * build: set soname in shared library (Ben Noordhuis) * build: make `make test` link against .a again (Ben Noordhuis) * build: only set soname on shared object builds (Timothy J. Fontaine) * build: convert predefined $PLATFORM to lower case (Elliot Saba) * test: fix process_title failing on linux (Miroslav Bajtoš) * test, sunos: disable process_title test (Miroslav Bajtoš) * test: add error logging to tty unit test (Miroslav Bajtoš) 2013.05.15, Version 0.10.7 (Stable), 028baaf0846b686a81e992cb2f2f5a9b8e841fcf Changes since version 0.10.6: * windows: kill child processes when the parent dies (Bert Belder) 2013.05.15, Version 0.10.6 (Stable), 11e6613e6260d95c8cf11bf89a2759c24649319a Changes since version 0.10.5: * stream: fix osx select hack (Fedor Indutny) * stream: fix small nit in select hack, add test (Fedor Indutny) * build: link with libkvm on openbsd (Ben Noordhuis) * stream: use harder sync restrictions for osx-hack (Fedor Indutny) * unix: fix EMFILE error handling (Ben Noordhuis) * darwin: fix unnecessary include headers (Daisuke Murase) * darwin: rename darwin-getproctitle.m (Ben Noordhuis) * build: convert predefined $PLATFORM to lower case (Elliot Saba) * build: set soname in shared library (Ben Noordhuis) * build: make `make test` link against .a again (Ben Noordhuis) * darwin: fix ios build, don't require ApplicationServices (Ben Noordhuis) * build: only set soname on shared object builds (Timothy J. Fontaine) 2013.05.11, Version 0.11.2 (Unstable), 3fba0bf65f091b91a9760530c05c6339c658d88b Changes since version 0.11.1: * darwin: look up file path with F_GETPATH (Ben Noordhuis) * unix, windows: add uv_has_ref() function (Saúl Ibarra Corretgé) * build: avoid double / in paths for dtrace (Timothy J. Fontaine) * unix: remove src/unix/cygwin.c (Ben Noordhuis) * windows: deal with the fact that GetTickCount might lag (Bert Belder) * unix: silence STATIC_ASSERT compiler warnings (Ben Noordhuis) * linux: don't use fopen() in uv_resident_set_memory() (Ben Noordhuis) 2013.04.24, Version 0.10.5 (Stable), 6595a7732c52eb4f8e57c88655f72997a8567a67 Changes since version 0.10.4: * unix: silence STATIC_ASSERT compiler warnings (Ben Noordhuis) * windows: make timers handle large timeouts (Miroslav Bajtoš) * windows: remove superfluous assert statement (Bert Belder) * unix: silence STATIC_ASSERT compiler warnings (Ben Noordhuis) * linux: don't use fopen() in uv_resident_set_memory() (Ben Noordhuis) 2013.04.12, Version 0.10.4 (Stable), 85827e26403ac6dfa331af8ec9916ea7e27bd833 Changes since version 0.10.3: * include: update uv_backend_fd() documentation (Ben Noordhuis) * unix: include uv.h in src/version.c (Ben Noordhuis) * unix: don't write more than IOV_MAX iovecs (Fedor Indutny) * mingw-w64: don't call _set_invalid_parameter_handler (Nils Maier) * build: gyp disable thin archives (Timothy J. 
Fontaine) * sunos: re-export entire library when static (Timothy J. Fontaine) * unix: dtrace probes for tick-start and tick-stop (Timothy J. Fontaine) * windows: fix memory leak in fs__sendfile (Shannen Saez) * windows: remove double initialization in uv_tty_init (Shannen Saez) * build: fix dtrace-enabled out of tree build (Ben Noordhuis) * build: squelch -Wdollar-in-identifier-extension warnings (Ben Noordhuis) * inet: snprintf returns int, not size_t (Brian White) * win: refactor uv_cpu_info (Bert Belder) * build: add support for Visual Studio 2012 (Nicholas Vavilov) * build: -Wno-dollar-in-identifier-extension is clang only (Ben Noordhuis) 2013.04.11, Version 0.11.1 (Unstable), 5c10e82ae0bc99eff86d4b9baff1f1aa0bf84c0a This is the first versioned release from the current unstable libuv branch. Changes since Node.js v0.11.0: * all platforms: nanosecond resolution support for uv_fs_[fl]stat (Timothy J. Fontaine) * all platforms: add netmask to uv_interface_address (Ben Kelly) * unix: make sure the `status` parameter passed to the `uv_getaddrinfo` is 0 or -1 (Ben Noordhuis) * unix: limit the number of iovecs written in a single `writev` syscall to IOV_MAX (Fedor Indutny) * unix: add dtrace probes for tick-start and tick-stop (Timothy J. Fontaine) * mingw-w64: don't call _set_invalid_parameter_handler (Nils Maier) * windows: fix memory leak in fs__sendfile (Shannen Saez) * windows: fix edge case bugs in uv_cpu_info (Bert Belder) * include: no longer ship with / include ngx-queue.h (Ben Noordhuis) * include: remove UV_VERSION_* macros from uv.h (Ben Noordhuis) * documentation updates (Kristian Evensen, Ben Kelly, Ben Noordhuis) * build: fix dtrace-enabled builds (Ben Noordhuis, Timothy J. Fontaine) * build: gyp disable thin archives (Timothy J. Fontaine) * build: add support for Visual Studio 2012 (Nicholas Vavilov) 2013.03.28, Version 0.10.3 (Stable), 31ebe23973dd98fd8a24c042b606f37a794e99d0 Changes since version 0.10.2: * include: remove extraneous const from uv_version() (Ben Noordhuis) * doc: update README, replace `OS` by `PLATFORM` (Ben Noordhuis) * build: simplify .buildstamp rule (Ben Noordhuis) * build: disable -Wstrict-aliasing on darwin (Ben Noordhuis) * darwin: don't select(&exceptfds) in fallback path (Ben Noordhuis) * unix: don't clear flags after closing UDP handle (Saúl Ibarra Corretgé) 2013.03.25, Version 0.10.2 (Stable), 0f36a00568f3e7608f97f6c6cdb081f4800a50c9 This is the first officially versioned release of libuv. Starting now libuv will make releases independently of Node.js. Changes since Node.js v0.10.0: * test: add tap output for windows (Timothy J. Fontaine) * unix: fix uv_tcp_simultaneous_accepts() logic (Ben Noordhuis) * include: bump UV_VERSION_MINOR (Ben Noordhuis) * unix: improve uv_guess_handle() implementation (Ben Noordhuis) * stream: run try_select only for pipes and ttys (Fedor Indutny) Changes since Node.js v0.10.1: * build: rename OS to PLATFORM (Ben Noordhuis) * unix: make uv_timer_init() initialize repeat (Brian Mazza) * unix: make timers handle large timeouts (Ben Noordhuis) * build: add OBJC makefile var (Ben Noordhuis) * Add `uv_version()` and `uv_version_string()` APIs (Bert Belder) gevent-24.11.1/deps/libuv/LICENSE000066400000000000000000000055771471441230600162510ustar00rootroot00000000000000libuv is licensed for use as follows: ==== Copyright (c) 2015-present libuv project contributors. 
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ==== This license applies to parts of libuv originating from the https://github.com/joyent/libuv repository: ==== Copyright Joyent, Inc. and other Node contributors. All rights reserved. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ==== This license applies to all parts of libuv that are not externally maintained libraries. The externally maintained libraries used by libuv are: - tree.h (from FreeBSD), copyright Niels Provos. Two clause BSD license. - inet_pton and inet_ntop implementations, contained in src/inet.c, are copyright the Internet Systems Consortium, Inc., and licensed under the ISC license. - stdint-msvc2008.h (from msinttypes), copyright Alexander Chemeris. Three clause BSD license. - pthread-fixes.c, copyright Google Inc. and Sony Mobile Communications AB. Three clause BSD license. gevent-24.11.1/deps/libuv/LICENSE-docs000066400000000000000000000443341471441230600171710ustar00rootroot00000000000000Attribution 4.0 International ======================================================================= Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. 
Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible. Using Creative Commons Public Licenses Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses. Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC- licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More_considerations for the public: wiki.creativecommons.org/Considerations_for_licensees ======================================================================= Creative Commons Attribution 4.0 International Public License By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. Section 1 -- Definitions. a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. b. 
Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License. c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. h. Licensor means the individual(s) or entity(ies) granting rights under this Public License. i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. Section 2 -- Scope. a. License grant. 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: a. reproduce and Share the Licensed Material, in whole or in part; and b. produce, reproduce, and Share Adapted Material. 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 3. Term. The term of this Public License is specified in Section 6(a). 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. 
For purposes of this Public License, simply making modifications authorized by this Section 2(a) (4) never produces Adapted Material. 5. Downstream recipients. a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. b. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). b. Other rights. 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 2. Patent and trademark rights are not licensed under this Public License. 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. Section 3 -- License Conditions. Your exercise of the Licensed Rights is expressly made subject to the following conditions. a. Attribution. 1. If You Share the Licensed Material (including in modified form), You must: a. retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable; b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License. Section 4 -- Sui Generis Database Rights. 
Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database; b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. Section 5 -- Disclaimer of Warranties and Limitation of Liability. a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU. b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU. c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. Section 6 -- Term and Termination. a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 2. upon express reinstatement by the Licensor. For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License. Section 7 -- Other Terms and Conditions. a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. b. 
Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. Section 8 -- Interpretation. a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions. c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor. d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority. ======================================================================= Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses. Creative Commons may be contacted at creativecommons.org. 
gevent-24.11.1/deps/libuv/MAINTAINERS.md000066400000000000000000000046551471441230600173340ustar00rootroot00000000000000# Project Maintainers libuv is currently managed by the following individuals: * **Ben Noordhuis** ([@bnoordhuis](https://github.com/bnoordhuis)) - GPG key: D77B 1E34 243F BAF0 5F8E 9CC3 4F55 C8C8 46AB 89B9 (pubkey-bnoordhuis) * **Bert Belder** ([@piscisaureus](https://github.com/piscisaureus)) * **Colin Ihrig** ([@cjihrig](https://github.com/cjihrig)) - GPG key: 94AE 3667 5C46 4D64 BAFA 68DD 7434 390B DBE9 B9C5 (pubkey-cjihrig) - GPG key: 5735 3E0D BDAA A7E8 39B6 6A1A FF47 D5E4 AD8B 4FDC (pubkey-cjihrig-kb) * **Fedor Indutny** ([@indutny](https://github.com/indutny)) - GPG key: AF2E EA41 EC34 47BF DD86 FED9 D706 3CCE 19B7 E890 (pubkey-indutny) * **Jameson Nash** ([@vtjnash](https://github.com/vtjnash)) - GPG key: AEAD 0A4B 6867 6775 1A0E 4AEF 34A2 5FB1 2824 6514 (pubkey-vtjnash) - GPG key: CFBB 9CA9 A5BE AFD7 0E2B 3C5A 79A6 7C55 A367 9C8B (pubkey2022-vtjnash) * **Jiawen Geng** ([@gengjiawen](https://github.com/gengjiawen)) * **Kaoru Takanashi** ([@erw7](https://github.com/erw7)) - GPG Key: 5804 F999 8A92 2AFB A398 47A0 7183 5090 6134 887F (pubkey-erw7) * **Richard Lau** ([@richardlau](https://github.com/richardlau)) - GPG key: C82F A3AE 1CBE DC6B E46B 9360 C43C EC45 C17A B93C (pubkey-richardlau) * **Santiago Gimeno** ([@santigimeno](https://github.com/santigimeno)) - GPG key: 612F 0EAD 9401 6223 79DF 4402 F28C 3C8D A33C 03BE (pubkey-santigimeno) * **Saúl Ibarra Corretgé** ([@saghul](https://github.com/saghul)) - GPG key: FDF5 1936 4458 319F A823 3DC9 410E 5553 AE9B C059 (pubkey-saghul) ## Project Maintainers emeriti * **Anna Henningsen** ([@addaleax](https://github.com/addaleax)) * **Bartosz Sosnowski** ([@bzoz](https://github.com/bzoz)) * **Imran Iqbal** ([@imran-iq](https://github.com/imran-iq)) * **John Barboza** ([@jbarz](https://github.com/jbarz)) ## Storing a maintainer key in Git It's quite handy to store a maintainer's signature as a git blob, and have that object tagged and signed with such key. Export your public key: $ gpg --armor --export saghul@gmail.com > saghul.asc Store it as a blob on the repo: $ git hash-object -w saghul.asc The previous command returns a hash, copy it. For the sake of this explanation, we'll assume it's 'abcd1234'. Storing the blob in git is not enough, it could be garbage collected since nothing references it, so we'll create a tag for it: $ git tag -s pubkey-saghul abcd1234 Commit the changes and push: $ git push origin pubkey-saghul gevent-24.11.1/deps/libuv/Makefile.am000066400000000000000000000522211471441230600172640ustar00rootroot00000000000000# Copyright (c) 2013, Ben Noordhuis # # Permission to use, copy, modify, and/or distribute this software for any # purpose with or without fee is hereby granted, provided that the above # copyright notice and this permission notice appear in all copies. # # THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
ACLOCAL_AMFLAGS = -I m4 AM_CPPFLAGS = -I$(top_srcdir)/include \ -I$(top_srcdir)/src include_HEADERS=include/uv.h uvincludedir = $(includedir)/uv uvinclude_HEADERS = include/uv/errno.h \ include/uv/threadpool.h \ include/uv/version.h CLEANFILES = lib_LTLIBRARIES = libuv.la libuv_la_CFLAGS = $(AM_CFLAGS) libuv_la_LDFLAGS = $(AM_LDFLAGS) -no-undefined -version-info 1:0:0 libuv_la_SOURCES = src/fs-poll.c \ src/heap-inl.h \ src/idna.c \ src/idna.h \ src/inet.c \ src/queue.h \ src/random.c \ src/strscpy.c \ src/strscpy.h \ src/threadpool.c \ src/timer.c \ src/uv-data-getter-setters.c \ src/uv-common.c \ src/uv-common.h \ src/version.c \ src/strtok.c \ src/strtok.h if SUNOS # Can't be turned into a CC_CHECK_CFLAGS in configure.ac, it makes compilers # on other platforms complain that the argument is unused during compilation. libuv_la_CFLAGS += -pthreads endif if WINNT uvinclude_HEADERS += include/uv/win.h include/uv/tree.h AM_CPPFLAGS += -I$(top_srcdir)/src/win \ -DWIN32_LEAN_AND_MEAN \ -D_WIN32_WINNT=0x0602 libuv_la_SOURCES += src/win/async.c \ src/win/atomicops-inl.h \ src/win/core.c \ src/win/detect-wakeup.c \ src/win/dl.c \ src/win/error.c \ src/win/fs-event.c \ src/win/fs.c \ src/win/getaddrinfo.c \ src/win/getnameinfo.c \ src/win/handle.c \ src/win/handle-inl.h \ src/win/internal.h \ src/win/loop-watcher.c \ src/win/pipe.c \ src/win/poll.c \ src/win/process-stdio.c \ src/win/process.c \ src/win/req-inl.h \ src/win/signal.c \ src/win/stream.c \ src/win/stream-inl.h \ src/win/tcp.c \ src/win/thread.c \ src/win/tty.c \ src/win/udp.c \ src/win/util.c \ src/win/winapi.c \ src/win/winapi.h \ src/win/winsock.c \ src/win/winsock.h else # WINNT uvinclude_HEADERS += include/uv/unix.h AM_CPPFLAGS += -I$(top_srcdir)/src/unix libuv_la_SOURCES += src/unix/async.c \ src/unix/atomic-ops.h \ src/unix/core.c \ src/unix/dl.c \ src/unix/fs.c \ src/unix/getaddrinfo.c \ src/unix/getnameinfo.c \ src/unix/internal.h \ src/unix/loop-watcher.c \ src/unix/loop.c \ src/unix/pipe.c \ src/unix/poll.c \ src/unix/process.c \ src/unix/random-devurandom.c \ src/unix/signal.c \ src/unix/spinlock.h \ src/unix/stream.c \ src/unix/tcp.c \ src/unix/thread.c \ src/unix/tty.c \ src/unix/udp.c endif # WINNT EXTRA_DIST = test/fixtures/empty_file \ test/fixtures/load_error.node \ test/fixtures/lorem_ipsum.txt \ include \ docs \ img \ CONTRIBUTING.md \ LICENSE \ README.md TESTS = test/run-tests check_PROGRAMS = test/run-tests test_run_tests_CFLAGS = $(AM_CFLAGS) if SUNOS # Can't be turned into a CC_CHECK_CFLAGS in configure.ac, it makes compilers # on other platforms complain that the argument is unused during compilation. 
test_run_tests_CFLAGS += -pthreads endif test_run_tests_LDFLAGS = $(AM_LDFLAGS) test_run_tests_SOURCES = test/blackhole-server.c \ test/echo-server.c \ test/run-tests.c \ test/runner.c \ test/runner.h \ test/task.h \ test/test-active.c \ test/test-async.c \ test/test-async-null-cb.c \ test/test-barrier.c \ test/test-callback-stack.c \ test/test-close-fd.c \ test/test-close-order.c \ test/test-condvar.c \ test/test-connect-unspecified.c \ test/test-connection-fail.c \ test/test-cwd-and-chdir.c \ test/test-default-loop-close.c \ test/test-delayed-accept.c \ test/test-dlerror.c \ test/test-eintr-handling.c \ test/test-embed.c \ test/test-emfile.c \ test/test-env-vars.c \ test/test-error.c \ test/test-fail-always.c \ test/test-fs-copyfile.c \ test/test-fs-event.c \ test/test-fs-poll.c \ test/test-fs.c \ test/test-fs-readdir.c \ test/test-fs-fd-hash.c \ test/test-fs-open-flags.c \ test/test-fork.c \ test/test-getters-setters.c \ test/test-get-currentexe.c \ test/test-get-loadavg.c \ test/test-get-memory.c \ test/test-get-passwd.c \ test/test-getaddrinfo.c \ test/test-gethostname.c \ test/test-getnameinfo.c \ test/test-getsockname.c \ test/test-gettimeofday.c \ test/test-handle-fileno.c \ test/test-homedir.c \ test/test-hrtime.c \ test/test-idle.c \ test/test-idna.c \ test/test-ip4-addr.c \ test/test-ip6-addr.c \ test/test-ip-name.c \ test/test-ipc-heavy-traffic-deadlock-bug.c \ test/test-ipc-send-recv.c \ test/test-ipc.c \ test/test-list.h \ test/test-loop-handles.c \ test/test-loop-alive.c \ test/test-loop-close.c \ test/test-loop-stop.c \ test/test-loop-time.c \ test/test-loop-configure.c \ test/test-metrics.c \ test/test-multiple-listen.c \ test/test-mutexes.c \ test/test-not-readable-nor-writable-on-read-error.c \ test/test-not-writable-after-shutdown.c \ test/test-osx-select.c \ test/test-pass-always.c \ test/test-ping-pong.c \ test/test-pipe-bind-error.c \ test/test-pipe-connect-error.c \ test/test-pipe-connect-multiple.c \ test/test-pipe-connect-prepare.c \ test/test-pipe-getsockname.c \ test/test-pipe-pending-instances.c \ test/test-pipe-sendmsg.c \ test/test-pipe-server-close.c \ test/test-pipe-close-stdout-read-stdin.c \ test/test-pipe-set-non-blocking.c \ test/test-pipe-set-fchmod.c \ test/test-platform-output.c \ test/test-poll.c \ test/test-poll-close.c \ test/test-poll-close-doesnt-corrupt-stack.c \ test/test-poll-closesocket.c \ test/test-poll-multiple-handles.c \ test/test-poll-oob.c \ test/test-process-priority.c \ test/test-process-title.c \ test/test-process-title-threadsafe.c \ test/test-queue-foreach-delete.c \ test/test-random.c \ test/test-readable-on-eof.c \ test/test-ref.c \ test/test-run-nowait.c \ test/test-run-once.c \ test/test-semaphore.c \ test/test-shutdown-close.c \ test/test-shutdown-eof.c \ test/test-shutdown-simultaneous.c \ test/test-shutdown-twice.c \ test/test-signal-multiple-loops.c \ test/test-signal-pending-on-close.c \ test/test-signal.c \ test/test-socket-buffer-size.c \ test/test-spawn.c \ test/test-stdio-over-pipes.c \ test/test-strscpy.c \ test/test-strtok.c \ test/test-tcp-alloc-cb-fail.c \ test/test-tcp-bind-error.c \ test/test-tcp-bind6-error.c \ test/test-tcp-close-accept.c \ test/test-tcp-close-while-connecting.c \ test/test-tcp-close-after-read-timeout.c \ test/test-tcp-close.c \ test/test-tcp-close-reset.c \ test/test-tcp-create-socket-early.c \ test/test-tcp-connect-error-after-write.c \ test/test-tcp-connect-error.c \ test/test-tcp-connect-timeout.c \ test/test-tcp-connect6-error.c \ test/test-tcp-flags.c \ test/test-tcp-open.c \ 
test/test-tcp-read-stop.c \ test/test-tcp-read-stop-start.c \ test/test-tcp-rst.c \ test/test-tcp-shutdown-after-write.c \ test/test-tcp-unexpected-read.c \ test/test-tcp-oob.c \ test/test-tcp-write-to-half-open-connection.c \ test/test-tcp-write-after-connect.c \ test/test-tcp-writealot.c \ test/test-tcp-write-fail.c \ test/test-tcp-try-write.c \ test/test-tcp-try-write-error.c \ test/test-tcp-write-queue-order.c \ test/test-test-macros.c \ test/test-thread-equal.c \ test/test-thread.c \ test/test-threadpool-cancel.c \ test/test-threadpool.c \ test/test-timer-again.c \ test/test-timer-from-check.c \ test/test-timer.c \ test/test-tmpdir.c \ test/test-tty-duplicate-key.c \ test/test-tty-escape-sequence-processing.c \ test/test-tty.c \ test/test-udp-alloc-cb-fail.c \ test/test-udp-bind.c \ test/test-udp-connect.c \ test/test-udp-connect6.c \ test/test-udp-create-socket-early.c \ test/test-udp-dgram-too-big.c \ test/test-udp-ipv6.c \ test/test-udp-mmsg.c \ test/test-udp-multicast-interface.c \ test/test-udp-multicast-interface6.c \ test/test-udp-multicast-join.c \ test/test-udp-multicast-join6.c \ test/test-udp-multicast-ttl.c \ test/test-udp-open.c \ test/test-udp-options.c \ test/test-udp-send-and-recv.c \ test/test-udp-send-hang-loop.c \ test/test-udp-send-immediate.c \ test/test-udp-sendmmsg-error.c \ test/test-udp-send-unreachable.c \ test/test-udp-try-send.c \ test/test-uname.c \ test/test-walk-handles.c \ test/test-watcher-cross-stop.c test_run_tests_LDADD = libuv.la if WINNT test_run_tests_SOURCES += test/runner-win.c \ test/runner-win.h else test_run_tests_SOURCES += test/runner-unix.c \ test/runner-unix.h endif if AIX test_run_tests_CFLAGS += -D_ALL_SOURCE \ -D_XOPEN_SOURCE=500 \ -D_LINUX_SOURCE_COMPAT endif if OS400 test_run_tests_CFLAGS += -D_ALL_SOURCE \ -D_XOPEN_SOURCE=500 \ -D_LINUX_SOURCE_COMPAT endif if HAIKU test_run_tests_CFLAGS += -D_BSD_SOURCE endif if LINUX test_run_tests_CFLAGS += -D_GNU_SOURCE endif if SUNOS test_run_tests_CFLAGS += -D__EXTENSIONS__ \ -D_XOPEN_SOURCE=500 \ -D_REENTRANT endif if OS390 test_run_tests_CFLAGS += -D_ISOC99_SOURCE \ -D_UNIX03_THREADS \ -D_UNIX03_SOURCE \ -D_OPEN_SYS_IF_EXT=1 \ -D_OPEN_SYS_SOCK_IPV6 \ -D_OPEN_MSGQ_EXT \ -D_XOPEN_SOURCE_EXTENDED \ -D_ALL_SOURCE \ -D_LARGE_TIME_API \ -D_OPEN_SYS_FILE_EXT \ -DPATH_MAX=255 \ -qCHARS=signed \ -qXPLINK \ -qFLOAT=IEEE endif if AIX libuv_la_CFLAGS += -D_ALL_SOURCE \ -D_XOPEN_SOURCE=500 \ -D_LINUX_SOURCE_COMPAT \ -D_THREAD_SAFE \ -DHAVE_SYS_AHAFS_EVPRODS_H uvinclude_HEADERS += include/uv/aix.h libuv_la_SOURCES += src/unix/aix.c src/unix/aix-common.c endif if OS400 libuv_la_CFLAGS += -D_ALL_SOURCE \ -D_XOPEN_SOURCE=500 \ -D_LINUX_SOURCE_COMPAT \ -D_THREAD_SAFE uvinclude_HEADERS += include/uv/posix.h libuv_la_SOURCES += src/unix/aix-common.c \ src/unix/ibmi.c \ src/unix/posix-poll.c \ src/unix/no-fsevents.c endif if ANDROID libuv_la_CFLAGS += -D_GNU_SOURCE libuv_la_SOURCES += src/unix/pthread-fixes.c endif if CYGWIN uvinclude_HEADERS += include/uv/posix.h libuv_la_CFLAGS += -D_GNU_SOURCE libuv_la_SOURCES += src/unix/cygwin.c \ src/unix/bsd-ifaddrs.c \ src/unix/no-fsevents.c \ src/unix/no-proctitle.c \ src/unix/posix-hrtime.c \ src/unix/posix-poll.c \ src/unix/procfs-exepath.c \ src/unix/sysinfo-loadavg.c \ src/unix/sysinfo-memory.c endif if DARWIN uvinclude_HEADERS += include/uv/darwin.h libuv_la_CFLAGS += -D_DARWIN_USE_64_BIT_INODE=1 libuv_la_CFLAGS += -D_DARWIN_UNLIMITED_SELECT=1 libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/darwin-proctitle.c \ src/unix/darwin-stub.h \ src/unix/darwin.c \ 
src/unix/fsevents.c \ src/unix/kqueue.c \ src/unix/proctitle.c \ src/unix/random-getentropy.c test_run_tests_LDFLAGS += -lutil endif if DRAGONFLY uvinclude_HEADERS += include/uv/bsd.h libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/bsd-proctitle.c \ src/unix/freebsd.c \ src/unix/kqueue.c \ src/unix/posix-hrtime.c test_run_tests_LDFLAGS += -lutil endif if FREEBSD uvinclude_HEADERS += include/uv/bsd.h libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/bsd-proctitle.c \ src/unix/freebsd.c \ src/unix/kqueue.c \ src/unix/posix-hrtime.c \ src/unix/random-getrandom.c test_run_tests_LDFLAGS += -lutil endif if HAIKU uvinclude_HEADERS += include/uv/posix.h libuv_la_CFLAGS += -D_BSD_SOURCE libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/haiku.c \ src/unix/no-fsevents.c \ src/unix/no-proctitle.c \ src/unix/posix-hrtime.c \ src/unix/posix-poll.c endif if HURD uvinclude_HEADERS += include/uv/posix.h libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/no-fsevents.c \ src/unix/no-proctitle.c \ src/unix/posix-hrtime.c \ src/unix/posix-poll.c \ src/unix/hurd.c endif if KFREEBSD libuv_la_CFLAGS += -D_GNU_SOURCE endif if LINUX uvinclude_HEADERS += include/uv/linux.h libuv_la_CFLAGS += -D_GNU_SOURCE libuv_la_SOURCES += src/unix/linux-core.c \ src/unix/linux-inotify.c \ src/unix/linux-syscalls.c \ src/unix/linux-syscalls.h \ src/unix/procfs-exepath.c \ src/unix/proctitle.c \ src/unix/random-getrandom.c \ src/unix/random-sysctl-linux.c \ src/unix/epoll.c test_run_tests_LDFLAGS += -lutil endif if MSYS libuv_la_CFLAGS += -D_GNU_SOURCE libuv_la_SOURCES += src/unix/cygwin.c \ src/unix/bsd-ifaddrs.c \ src/unix/no-fsevents.c \ src/unix/no-proctitle.c \ src/unix/posix-hrtime.c \ src/unix/posix-poll.c \ src/unix/procfs-exepath.c \ src/unix/sysinfo-loadavg.c \ src/unix/sysinfo-memory.c endif if NETBSD uvinclude_HEADERS += include/uv/bsd.h libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/bsd-proctitle.c \ src/unix/kqueue.c \ src/unix/netbsd.c \ src/unix/posix-hrtime.c test_run_tests_LDFLAGS += -lutil endif if OPENBSD uvinclude_HEADERS += include/uv/bsd.h libuv_la_SOURCES += src/unix/bsd-ifaddrs.c \ src/unix/bsd-proctitle.c \ src/unix/kqueue.c \ src/unix/openbsd.c \ src/unix/posix-hrtime.c \ src/unix/random-getentropy.c test_run_tests_LDFLAGS += -lutil endif if SUNOS uvinclude_HEADERS += include/uv/sunos.h libuv_la_CFLAGS += -D__EXTENSIONS__ \ -D_XOPEN_SOURCE=500 \ -D_REENTRANT libuv_la_SOURCES += src/unix/no-proctitle.c \ src/unix/sunos.c endif if OS390 libuv_la_CFLAGS += -D_UNIX03_THREADS \ -D_UNIX03_SOURCE \ -D_OPEN_SYS_IF_EXT=1 \ -D_OPEN_MSGQ_EXT \ -D_XOPEN_SOURCE_EXTENDED \ -D_ALL_SOURCE \ -D_LARGE_TIME_API \ -D_OPEN_SYS_SOCK_EXT3 \ -D_OPEN_SYS_SOCK_IPV6 \ -D_OPEN_SYS_FILE_EXT \ -DUV_PLATFORM_SEM_T=int \ -DPATH_MAX=255 \ -qCHARS=signed \ -qXPLINK \ -qFLOAT=IEEE libuv_la_LDFLAGS += -qXPLINK libuv_la_SOURCES += src/unix/pthread-fixes.c \ src/unix/os390.c \ src/unix/os390-syscalls.c \ src/unix/proctitle.c endif pkgconfigdir = $(libdir)/pkgconfig pkgconfig_DATA = @PACKAGE_NAME@.pc gevent-24.11.1/deps/libuv/README.md000066400000000000000000000221171471441230600165100ustar00rootroot00000000000000![libuv][libuv_banner] ## Overview libuv is a multi-platform support library with a focus on asynchronous I/O. It was primarily developed for use by [Node.js][], but it's also used by [Luvit](http://luvit.io/), [Julia](http://julialang.org/), [uvloop](https://github.com/MagicStack/uvloop), and [others](https://github.com/libuv/libuv/blob/v1.x/LINKS.md). 
## Feature highlights * Full-featured event loop backed by epoll, kqueue, IOCP, event ports. * Asynchronous TCP and UDP sockets * Asynchronous DNS resolution * Asynchronous file and file system operations * File system events * ANSI escape code controlled TTY * IPC with socket sharing, using Unix domain sockets or named pipes (Windows) * Child processes * Thread pool * Signal handling * High resolution clock * Threading and synchronization primitives ## Versioning Starting with version 1.0.0 libuv follows the [semantic versioning](http://semver.org/) scheme. The API change and backwards compatibility rules are those indicated by SemVer. libuv will keep a stable ABI across major releases. The ABI/API changes can be tracked [here](http://abi-laboratory.pro/tracker/timeline/libuv/). ## Licensing libuv is licensed under the MIT license. Check the [LICENSE file](LICENSE). The documentation is licensed under the CC BY 4.0 license. Check the [LICENSE-docs file](LICENSE-docs). ## Community * [Support](https://github.com/libuv/libuv/discussions) * [Mailing list](http://groups.google.com/group/libuv) ## Documentation ### Official documentation Located in the docs/ subdirectory. It uses the [Sphinx](http://sphinx-doc.org/) framework, which makes it possible to build the documentation in multiple formats. Show different supported building options: ```bash $ make help ``` Build documentation as HTML: ```bash $ make html ``` Build documentation as HTML and live reload it when it changes (this requires sphinx-autobuild to be installed and is only supported on Unix): ```bash $ make livehtml ``` Build documentation as man pages: ```bash $ make man ``` Build documentation as ePub: ```bash $ make epub ``` NOTE: Windows users need to use make.bat instead of plain 'make'. Documentation can be browsed online [here](http://docs.libuv.org). The [tests and benchmarks](https://github.com/libuv/libuv/tree/master/test) also serve as API specification and usage examples. ### Other resources * [LXJS 2012 talk](http://www.youtube.com/watch?v=nGn60vDSxQ4) — High-level introductory talk about libuv. * [libuv-dox](https://github.com/thlorenz/libuv-dox) — Documenting types and methods of libuv, mostly by reading uv.h. * [learnuv](https://github.com/thlorenz/learnuv) — Learn uv for fun and profit, a self-guided workshop to libuv. These resources are not handled by libuv maintainers and might be out of date. Please verify them before opening new issues. ## Downloading libuv can be downloaded either from the [GitHub repository](https://github.com/libuv/libuv) or from the [downloads site](http://dist.libuv.org/dist/). Before verifying the git tags or signature files, importing the relevant keys is necessary. Key IDs are listed in the [MAINTAINERS](https://github.com/libuv/libuv/blob/master/MAINTAINERS.md) file, but are also available as git blob objects for easier use. Importing a key the usual way: ```bash $ gpg --keyserver pool.sks-keyservers.net --recv-keys AE9BC059 ``` Importing a key from a git blob object: ```bash $ git show pubkey-saghul | gpg --import ``` ### Verifying releases Git tags are signed with the developer's key; they can be verified as follows: ```bash $ git verify-tag v1.6.1 ``` Starting with libuv 1.7.0, the tarballs stored in the [downloads site](http://dist.libuv.org/dist/) are signed and an accompanying signature file sits alongside each.
Once both the release tarball and the signature file are downloaded, the file can be verified as follows:

```bash
$ gpg --verify libuv-1.7.0.tar.gz.sign
```

## Build Instructions

For UNIX-like platforms, including macOS, there are two build methods: autotools or [CMake][].

For Windows, [CMake][] is the only supported build method and has the following prerequisites:
* One of:
  * [Visual C++ Build Tools][]
  * [Visual Studio 2015 Update 3][], all editions including the Community edition (remember to select "Common Tools for Visual C++ 2015" feature during installation).
  * [Visual Studio 2017][], any edition (including the Build Tools SKU). **Required Components:** "MSbuild", "VC++ 2017 v141 toolset" and one of the Windows SDKs (10 or 8.1).
* Basic Unix tools required for some tests, [Git for Windows][] includes Git Bash and tools which can be included in the global `PATH`.

To build with autotools:

```bash
$ sh autogen.sh
$ ./configure
$ make
$ make check
$ make install
```

To build with [CMake][]:

```bash
$ mkdir -p build

$ (cd build && cmake .. -DBUILD_TESTING=ON)  # generate project with tests
$ cmake --build build                        # add `-j <n>` with cmake >= 3.12

# Run tests:
$ (cd build && ctest -C Debug --output-on-failure)

# Or manually run tests:
$ build/uv_run_tests    # shared library build
$ build/uv_run_tests_a  # static library build
```

To cross-compile with [CMake][] (unsupported but generally works):

```bash
$ cmake ../.. \
  -DCMAKE_SYSTEM_NAME=Windows \
  -DCMAKE_SYSTEM_VERSION=6.1 \
  -DCMAKE_C_COMPILER=i686-w64-mingw32-gcc
```

### Install with Homebrew

```bash
$ brew install --HEAD libuv
```

Note to OS X users: Make sure that you specify the architecture you wish to build for in the "ARCHS" flag. You can specify more than one by delimiting with a space (e.g. "x86_64 i386").

### Running tests

Some tests are timing sensitive. Relaxing test timeouts may be necessary on slow or overloaded machines:

```bash
$ env UV_TEST_TIMEOUT_MULTIPLIER=2 build/uv_run_tests # 10s instead of 5s
```

#### Run one test

The list of all tests is in `test/test-list.h`. This invocation will cause the test driver to fork and execute `TEST_NAME` in a child process:

```bash
$ build/uv_run_tests_a TEST_NAME
```

This invocation will cause the test driver to execute the test in the same process:

```bash
$ build/uv_run_tests_a TEST_NAME TEST_NAME
```

#### Debugging tools

When running the test from within the test driver process (`build/uv_run_tests_a TEST_NAME TEST_NAME`), tools like gdb and valgrind work normally.

When running the test from a child of the test driver process (`build/uv_run_tests_a TEST_NAME`), use these tools in a fork-aware manner.

##### Fork-aware gdb

Use the [follow-fork-mode](https://sourceware.org/gdb/onlinedocs/gdb/Forks.html) setting:

```
$ gdb --args build/uv_run_tests_a TEST_NAME

(gdb) set follow-fork-mode child
...
```

##### Fork-aware valgrind

Use the `--trace-children=yes` parameter:

```bash
$ valgrind --trace-children=yes -v --tool=memcheck --leak-check=full --track-origins=yes --leak-resolution=high --show-reachable=yes --log-file=memcheck-%p.log build/uv_run_tests_a TEST_NAME
```

### Running benchmarks

See the section on running tests. The benchmark driver is `./uv_run_benchmarks_a` and the benchmarks are listed in `test/benchmark-list.h`.

## Supported Platforms

Check the [SUPPORTED_PLATFORMS file](SUPPORTED_PLATFORMS.md).

### `-fno-strict-aliasing`

It is recommended to turn on the `-fno-strict-aliasing` compiler flag in projects that use libuv. The use of ad hoc "inheritance" in the libuv API may not be safe in the presence of compiler optimizations that depend on strict aliasing. MSVC does not have an equivalent flag but it also does not appear to need it at the time of writing (December 2019.)

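As a rough illustration of the "inheritance" pattern this refers to (a sketch, not taken from the libuv docs): specialized handles such as `uv_tcp_t` are passed to generic functions by casting the pointer to the base `uv_stream_t` or `uv_handle_t` type, and it is this kind of type punning that strict-aliasing based optimizations may interfere with. In practice, adding the flag usually just means putting `-fno-strict-aliasing` in the project's `CFLAGS`.

```c
#include <uv.h>

int main(void) {
    uv_loop_t loop;
    uv_tcp_t tcp;

    uv_loop_init(&loop);
    uv_tcp_init(&loop, &tcp);

    /* "Ad hoc inheritance": the same object is viewed as a uv_tcp_t, a
     * uv_stream_t and a uv_handle_t purely through pointer casts. */
    uv_stream_t* stream = (uv_stream_t*) &tcp;  /* TCP handle used as a stream */
    uv_close((uv_handle_t*) stream, NULL);      /* ...and as a generic handle  */

    uv_run(&loop, UV_RUN_DEFAULT);  /* process the pending close so uv_loop_close() succeeds */
    uv_loop_close(&loop);
    return 0;
}
```
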
### AIX Notes

AIX compilation using IBM XL C/C++ requires version 12.1 or greater.

AIX support for filesystem events requires the non-default IBM `bos.ahafs` package to be installed. This package provides the AIX Event Infrastructure that is detected by `autoconf`. [IBM documentation](http://www.ibm.com/developerworks/aix/library/au-aix_event_infrastructure/) describes the package in more detail.

### z/OS Notes

z/OS compilation requires [ZOSLIB](https://github.com/ibmruntimes/zoslib) to be installed. When building with [CMake][], use the flag `-DZOSLIB_DIR` to specify the path to [ZOSLIB](https://github.com/ibmruntimes/zoslib):

```bash
$ (cd build && cmake .. -DBUILD_TESTING=ON -DZOSLIB_DIR=/path/to/zoslib)
$ cmake --build build
```

z/OS creates System V semaphores and message queues. These persist on the system after the process terminates unless the event loop is closed. Use the `ipcrm` command to manually clear up System V resources.

## Patches

See the [guidelines for contributing][].

[CMake]: https://cmake.org/
[node.js]: http://nodejs.org/
[guidelines for contributing]: https://github.com/libuv/libuv/blob/master/CONTRIBUTING.md
[libuv_banner]: https://raw.githubusercontent.com/libuv/libuv/master/img/banner.png
[Visual C++ Build Tools]: https://visualstudio.microsoft.com/visual-cpp-build-tools/
[Visual Studio 2015 Update 3]: https://www.visualstudio.com/vs/older-downloads/
[Visual Studio 2017]: https://www.visualstudio.com/downloads/
[Git for Windows]: http://git-scm.com/download/win
gevent-24.11.1/deps/libuv/SUPPORTED_PLATFORMS.md000066400000000000000000000052601471441230600205670ustar00rootroot00000000000000# Supported platforms

| System | Support type | Supported versions | Notes |
|---|---|---|---|
| GNU/Linux | Tier 1 | Linux >= 2.6.32 with glibc >= 2.12 | |
| macOS | Tier 1 | macOS >= 10.15 | Current and previous macOS release |
| Windows | Tier 1 | >= Windows 8 | VS 2015 and later are supported |
| FreeBSD | Tier 1 | >= 10 | |
| AIX | Tier 2 | >= 6 | Maintainers: @libuv/aix |
| IBM i | Tier 2 | >= IBM i 7.2 | Maintainers: @libuv/ibmi |
| z/OS | Tier 2 | >= V2R2 | Maintainers: @libuv/zos |
| Linux with musl | Tier 2 | musl >= 1.0 | |
| Android | Tier 3 | NDK >= r15b | Android 7.0, `-DANDROID_PLATFORM=android-24` |
| MinGW | Tier 3 | MinGW32 and MinGW-w64 | |
| SunOS | Tier 3 | Solaris 121 and later | |
| Other | Tier 3 | N/A | |

## Support types

* **Tier 1**: Officially supported and tested with CI. Any contributed patch MUST NOT break such systems. These are supported by @libuv/collaborators.
* **Tier 2**: Officially supported, but not necessarily tested with CI. These systems are maintained to the best of @libuv/collaborators ability, without being a top priority.
* **Tier 3**: Community maintained. These systems may inadvertently break and the community and interested parties are expected to help with the maintenance.

## Adding support for a new platform

**IMPORTANT**: Before attempting to add support for a new platform please open an issue about it for discussion.

### Unix

I/O handling is abstracted by an internal `uv__io_t` handle. The new platform will need to implement some of the functions, the prototypes are in ``src/unix/internal.h``.

If the new platform requires extra fields for any handle structure, create a new include file in ``include/`` with the name ``uv-theplatform.h`` and add the appropriate defines there.

All functionality related to the new platform must be implemented in its own file inside ``src/unix/`` unless it's already done in a common file, in which case adding an `ifdef` is fine.

Two build systems are supported: autotools and cmake. Ideally both need to be supported, but if one of the two does not support the new platform it can be left out.

### Windows

Windows is treated as a single platform, so adding support for a new platform would mean adding support for a new version.

Compilation and runtime must succeed for the minimum supported version. If a new API is to be used, it must be done optionally, only in supported versions.

### Common

Some common notes when adding support for new platforms:

* Generally libuv tries to avoid compile time checks.
Do not add any to the autotools based build system or use version checking macros. Dynamically load functions and symbols if they are not supported by the minimum supported version. gevent-24.11.1/deps/libuv/autogen.sh000077500000000000000000000046721471441230600172400ustar00rootroot00000000000000#!/bin/sh # Copyright (c) 2013, Ben Noordhuis # # Permission to use, copy, modify, and/or distribute this software for any # purpose with or without fee is hereby granted, provided that the above # copyright notice and this permission notice appear in all copies. # # THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. set -eu cd `dirname "$0"` if [ "${1:-dev}" == "release" ]; then export LIBUV_RELEASE=true else export LIBUV_RELEASE=false fi if [ "${LIBTOOLIZE:-}" = "" ] && [ "`uname`" = "Darwin" ]; then LIBTOOLIZE=glibtoolize fi ACLOCAL=${ACLOCAL:-aclocal} AUTOCONF=${AUTOCONF:-autoconf} AUTOMAKE=${AUTOMAKE:-automake} LIBTOOLIZE=${LIBTOOLIZE:-libtoolize} aclocal_version=`"$ACLOCAL" --version | head -n 1 | sed 's/[^.0-9]//g'` autoconf_version=`"$AUTOCONF" --version | head -n 1 | sed 's/[^.0-9]//g'` automake_version=`"$AUTOMAKE" --version | head -n 1 | sed 's/[^.0-9]//g'` automake_version_major=`echo "$automake_version" | cut -d. -f1` automake_version_minor=`echo "$automake_version" | cut -d. -f2` libtoolize_version=`"$LIBTOOLIZE" --version | head -n 1 | sed 's/[^.0-9]//g'` if [ $aclocal_version != $automake_version ]; then echo "FATAL: aclocal version appears not to be from the same as automake" exit 1 fi UV_EXTRA_AUTOMAKE_FLAGS= if test "$automake_version_major" -gt 1 || \ test "$automake_version_major" -eq 1 && \ test "$automake_version_minor" -gt 11; then # serial-tests is available in v1.12 and newer. UV_EXTRA_AUTOMAKE_FLAGS="$UV_EXTRA_AUTOMAKE_FLAGS serial-tests" fi echo "m4_define([UV_EXTRA_AUTOMAKE_FLAGS], [$UV_EXTRA_AUTOMAKE_FLAGS])" \ > m4/libuv-extra-automake-flags.m4 (set -x "$LIBTOOLIZE" --copy --force "$ACLOCAL" -I m4 ) if $LIBUV_RELEASE; then "$AUTOCONF" -o /dev/null m4/libuv-check-versions.m4 echo " AC_PREREQ($autoconf_version) AC_INIT([libuv-release-check], [0.0]) AM_INIT_AUTOMAKE([$automake_version]) LT_PREREQ($libtoolize_version) AC_OUTPUT " > m4/libuv-check-versions.m4 fi ( set -x "$AUTOCONF" "$AUTOMAKE" --add-missing --copy ) gevent-24.11.1/deps/libuv/config.guess000066400000000000000000001414201471441230600175450ustar00rootroot00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright 1992-2023 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2023-06-23' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # # Originally written by Per Bothner; maintained since 2000 by Ben Elliston. # # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.guess # # Please send patches to . # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system '$me' is run on. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright 1992-2023 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi # Just in case it came from the environment. GUESS= # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, 'CC_FOR_BUILD' used to be named 'HOST_CC'. We still # use 'HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. 
tmp= # shellcheck disable=SC2172 trap 'test -z "$tmp" || rm -fr "$tmp"' 0 1 2 13 15 set_cc_for_build() { # prevent multiple calls if $tmp is already set test "$tmp" && return 0 : "${TMPDIR=/tmp}" # shellcheck disable=SC2039,SC3028 { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir "$tmp" 2>/dev/null) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir "$tmp" 2>/dev/null) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } dummy=$tmp/dummy case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in ,,) echo "int x;" > "$dummy.c" for driver in cc gcc c89 c99 ; do if ($driver -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then CC_FOR_BUILD=$driver break fi done if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac } # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 1994-08-24) if test -f /.attbin/uname ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown case $UNAME_SYSTEM in Linux|GNU|GNU/*) LIBC=unknown set_cc_for_build cat <<-EOF > "$dummy.c" #include #if defined(__UCLIBC__) LIBC=uclibc #elif defined(__dietlibc__) LIBC=dietlibc #elif defined(__GLIBC__) LIBC=gnu #else #include /* First heuristic to detect musl libc. */ #ifdef __DEFINED_va_list LIBC=musl #endif #endif EOF cc_set_libc=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'` eval "$cc_set_libc" # Second heuristic to detect musl libc. if [ "$LIBC" = unknown ] && command -v ldd >/dev/null && ldd --version 2>&1 | grep -q ^musl; then LIBC=musl fi # If the system lacks a compiler, then just pick glibc. # We could probably try harder. if [ "$LIBC" = unknown ]; then LIBC=gnu fi ;; esac # Note: order is significant - the case branches are not exclusive. case $UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \ /sbin/sysctl -n hw.machine_arch 2>/dev/null || \ /usr/sbin/sysctl -n hw.machine_arch 2>/dev/null || \ echo unknown)` case $UNAME_MACHINE_ARCH in aarch64eb) machine=aarch64_be-unknown ;; armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; earmv*) arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'` endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'` machine=${arch}${endian}-unknown ;; *) machine=$UNAME_MACHINE_ARCH-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently (or will in the future) and ABI. 
case $UNAME_MACHINE_ARCH in earm*) os=netbsdelf ;; arm*|i386|m68k|ns32k|sh3*|sparc|vax) set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # Determine ABI tags. case $UNAME_MACHINE_ARCH in earm*) expr='s/^earmv[0-9]/-eabi/;s/eb$//' abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"` ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case $UNAME_VERSION in Debian*) release='-gnu' ;; *) release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. GUESS=$machine-${os}${release}${abi-} ;; *:Bitrig:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-bitrig$UNAME_RELEASE ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-openbsd$UNAME_RELEASE ;; *:SecBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/SecBSD.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-secbsd$UNAME_RELEASE ;; *:LibertyBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'` GUESS=$UNAME_MACHINE_ARCH-unknown-libertybsd$UNAME_RELEASE ;; *:MidnightBSD:*:*) GUESS=$UNAME_MACHINE-unknown-midnightbsd$UNAME_RELEASE ;; *:ekkoBSD:*:*) GUESS=$UNAME_MACHINE-unknown-ekkobsd$UNAME_RELEASE ;; *:SolidBSD:*:*) GUESS=$UNAME_MACHINE-unknown-solidbsd$UNAME_RELEASE ;; *:OS108:*:*) GUESS=$UNAME_MACHINE-unknown-os108_$UNAME_RELEASE ;; macppc:MirBSD:*:*) GUESS=powerpc-unknown-mirbsd$UNAME_RELEASE ;; *:MirBSD:*:*) GUESS=$UNAME_MACHINE-unknown-mirbsd$UNAME_RELEASE ;; *:Sortix:*:*) GUESS=$UNAME_MACHINE-unknown-sortix ;; *:Twizzler:*:*) GUESS=$UNAME_MACHINE-unknown-twizzler ;; *:Redox:*:*) GUESS=$UNAME_MACHINE-unknown-redox ;; mips:OSF1:*.*) GUESS=mips-dec-osf1 ;; alpha:OSF1:*:*) # Reset EXIT trap before exiting to avoid spurious non-zero exit code. trap '' 0 case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case $ALPHA_CPU_TYPE in "EV4 (21064)") UNAME_MACHINE=alpha ;; "EV4.5 (21064)") UNAME_MACHINE=alpha ;; "LCA4 (21066/21068)") UNAME_MACHINE=alpha ;; "EV5 (21164)") UNAME_MACHINE=alphaev5 ;; "EV5.6 (21164A)") UNAME_MACHINE=alphaev56 ;; "EV5.6 (21164PC)") UNAME_MACHINE=alphapca56 ;; "EV5.7 (21164PC)") UNAME_MACHINE=alphapca57 ;; "EV6 (21264)") UNAME_MACHINE=alphaev6 ;; "EV6.7 (21264A)") UNAME_MACHINE=alphaev67 ;; "EV6.8CB (21264C)") UNAME_MACHINE=alphaev68 ;; "EV6.8AL (21264B)") UNAME_MACHINE=alphaev68 ;; "EV6.8CX (21264D)") UNAME_MACHINE=alphaev68 ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE=alphaev69 ;; "EV7 (21364)") UNAME_MACHINE=alphaev7 ;; "EV7.9 (21364A)") UNAME_MACHINE=alphaev79 ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. 
# A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. OSF_REL=`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` GUESS=$UNAME_MACHINE-dec-osf$OSF_REL ;; Amiga*:UNIX_System_V:4.0:*) GUESS=m68k-unknown-sysv4 ;; *:[Aa]miga[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-amigaos ;; *:[Mm]orph[Oo][Ss]:*:*) GUESS=$UNAME_MACHINE-unknown-morphos ;; *:OS/390:*:*) GUESS=i370-ibm-openedition ;; *:z/VM:*:*) GUESS=s390-ibm-zvmoe ;; *:OS400:*:*) GUESS=powerpc-ibm-os400 ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) GUESS=arm-acorn-riscix$UNAME_RELEASE ;; arm*:riscos:*:*|arm*:RISCOS:*:*) GUESS=arm-unknown-riscos ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) GUESS=hppa1.1-hitachi-hiuxmpp ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. case `(/bin/universe) 2>/dev/null` in att) GUESS=pyramid-pyramid-sysv3 ;; *) GUESS=pyramid-pyramid-bsd ;; esac ;; NILE*:*:*:dcosx) GUESS=pyramid-pyramid-svr4 ;; DRS?6000:unix:4.0:6*) GUESS=sparc-icl-nx6 ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) GUESS=sparc-icl-nx7 ;; esac ;; s390x:SunOS:*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$UNAME_MACHINE-ibm-solaris2$SUN_REL ;; sun4H:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-hal-solaris2$SUN_REL ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris2$SUN_REL ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) GUESS=i386-pc-auroraux$UNAME_RELEASE ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) set_cc_for_build SUN_ARCH=i386 # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -m64 -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH=x86_64 fi fi SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=$SUN_ARCH-pc-solaris2$SUN_REL ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=sparc-sun-solaris3$SUN_REL ;; sun4*:SunOS:*:*) case `/usr/bin/arch -k` in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like '4.1.3-JL'. SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/'` GUESS=sparc-sun-sunos$SUN_REL ;; sun3*:SunOS:*:*) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3 case `/bin/arch` in sun3) GUESS=m68k-sun-sunos$UNAME_RELEASE ;; sun4) GUESS=sparc-sun-sunos$UNAME_RELEASE ;; esac ;; aushp:SunOS:*:*) GUESS=sparc-auspex-sunos$UNAME_RELEASE ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. 
But MiNT is downward compatible to TOS, so this should # be no problem. atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) GUESS=m68k-atari-mint$UNAME_RELEASE ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) GUESS=m68k-milan-mint$UNAME_RELEASE ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) GUESS=m68k-hades-mint$UNAME_RELEASE ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) GUESS=m68k-unknown-mint$UNAME_RELEASE ;; m68k:machten:*:*) GUESS=m68k-apple-machten$UNAME_RELEASE ;; powerpc:machten:*:*) GUESS=powerpc-apple-machten$UNAME_RELEASE ;; RISC*:Mach:*:*) GUESS=mips-dec-mach_bsd4.3 ;; RISC*:ULTRIX:*:*) GUESS=mips-dec-ultrix$UNAME_RELEASE ;; VAX*:ULTRIX*:*:*) GUESS=vax-dec-ultrix$UNAME_RELEASE ;; 2020:CLIX:*:* | 2430:CLIX:*:*) GUESS=clipper-intergraph-clix$UNAME_RELEASE ;; mips:*:*:UMIPS | mips:*:*:RISCos) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`"$dummy" "$dummyarg"` && { echo "$SYSTEM_NAME"; exit; } GUESS=mips-mips-riscos$UNAME_RELEASE ;; Motorola:PowerMAX_OS:*:*) GUESS=powerpc-motorola-powermax ;; Motorola:*:4.3:PL8-*) GUESS=powerpc-harris-powermax ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) GUESS=powerpc-harris-powermax ;; Night_Hawk:Power_UNIX:*:*) GUESS=powerpc-harris-powerunix ;; m88k:CX/UX:7*:*) GUESS=m88k-harris-cxux7 ;; m88k:*:4*:R4*) GUESS=m88k-motorola-sysv4 ;; m88k:*:3*:R3*) GUESS=m88k-motorola-sysv3 ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if test "$UNAME_PROCESSOR" = mc88100 || test "$UNAME_PROCESSOR" = mc88110 then if test "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx || \ test "$TARGET_BINARY_INTERFACE"x = x then GUESS=m88k-dg-dgux$UNAME_RELEASE else GUESS=m88k-dg-dguxbcs$UNAME_RELEASE fi else GUESS=i586-dg-dgux$UNAME_RELEASE fi ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) GUESS=m88k-dolphin-sysv3 ;; M88*:*:R3*:*) # Delta 88k system running SVR3 GUESS=m88k-motorola-sysv3 ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) GUESS=m88k-tektronix-sysv3 ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) GUESS=m68k-tektronix-bsd ;; *:IRIX*:*:*) IRIX_REL=`echo "$UNAME_RELEASE" | sed -e 's/-/_/g'` GUESS=mips-sgi-irix$IRIX_REL ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
GUESS=romp-ibm-aix # uname -m gives an 8 hex-code CPU id ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) GUESS=i386-ibm-aix ;; ia64:AIX:*:*) if test -x /usr/bin/oslevel ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$UNAME_MACHINE-ibm-aix$IBM_REV ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include main() { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` then GUESS=$SYSTEM_NAME else GUESS=rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then GUESS=rs6000-ibm-aix3.2.4 else GUESS=rs6000-ibm-aix3.2 fi ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if test -x /usr/bin/lslpp ; then IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc | \ awk -F: '{ print $3 }' | sed s/[0-9]*$/0/` else IBM_REV=$UNAME_VERSION.$UNAME_RELEASE fi GUESS=$IBM_ARCH-ibm-aix$IBM_REV ;; *:AIX:*:*) GUESS=rs6000-ibm-aix ;; ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*) GUESS=romp-ibm-bsd4.4 ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and GUESS=romp-ibm-bsd$UNAME_RELEASE # 4.3 with uname added to ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) GUESS=rs6000-bull-bosx ;; DPX/2?00:B.O.S.:*:*) GUESS=m68k-bull-sysv3 ;; 9000/[34]??:4.3bsd:1.*:*) GUESS=m68k-hp-bsd ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) GUESS=m68k-hp-bsd4.4 ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` case $UNAME_MACHINE in 9000/31?) HP_ARCH=m68000 ;; 9000/[34]??) HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if test -x /usr/bin/getconf; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case $sc_cpu_version in 523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0 528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case $sc_kernel_bits in 32) HP_ARCH=hppa2.0n ;; 64) HP_ARCH=hppa2.0w ;; '') HP_ARCH=hppa2.0 ;; # HP-UX 10.20 esac ;; esac fi if test "$HP_ARCH" = ""; then set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #define _HPUX_SOURCE #include #include int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if test "$HP_ARCH" = hppa2.0w then set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code. 
GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH=hppa2.0w else HP_ARCH=hppa64 fi fi GUESS=$HP_ARCH-hp-hpux$HPUX_REV ;; ia64:HP-UX:*:*) HPUX_REV=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*.[0B]*//'` GUESS=ia64-hp-hpux$HPUX_REV ;; 3050*:HI-UX:*:*) set_cc_for_build sed 's/^ //' << EOF > "$dummy.c" #include int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. */ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } GUESS=unknown-hitachi-hiuxwe2 ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*) GUESS=hppa1.1-hp-bsd ;; 9000/8??:4.3bsd:*:*) GUESS=hppa1.0-hp-bsd ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) GUESS=hppa1.0-hp-mpeix ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*) GUESS=hppa1.1-hp-osf ;; hp8??:OSF1:*:*) GUESS=hppa1.0-hp-osf ;; i*86:OSF1:*:*) if test -x /usr/sbin/sysversion ; then GUESS=$UNAME_MACHINE-unknown-osf1mk else GUESS=$UNAME_MACHINE-unknown-osf1 fi ;; parisc*:Lites*:*:*) GUESS=hppa1.1-hp-lites ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) GUESS=c1-convex-bsd ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) GUESS=c34-convex-bsd ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) GUESS=c38-convex-bsd ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) GUESS=c4-convex-bsd ;; CRAY*Y-MP:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=ymp-cray-unicos$CRAY_REL ;; CRAY*[A-Z]90:*:*:*) echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=t90-cray-unicos$CRAY_REL ;; CRAY*T3E:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=alphaev5-cray-unicosmk$CRAY_REL ;; CRAY*SV1:*:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=sv1-cray-unicos$CRAY_REL ;; *:UNICOS/mp:*:*) CRAY_REL=`echo "$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'` GUESS=craynv-cray-unicosmp$CRAY_REL ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz` FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'` GUESS=${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'` FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'` GUESS=sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL} ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ 
Embedded/OS:*:*) GUESS=$UNAME_MACHINE-pc-bsdi$UNAME_RELEASE ;; sparc*:BSD/OS:*:*) GUESS=sparc-unknown-bsdi$UNAME_RELEASE ;; *:BSD/OS:*:*) GUESS=$UNAME_MACHINE-unknown-bsdi$UNAME_RELEASE ;; arm:FreeBSD:*:*) UNAME_PROCESSOR=`uname -p` set_cc_for_build if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabi else FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL-gnueabihf fi ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`/usr/bin/uname -p` case $UNAME_PROCESSOR in amd64) UNAME_PROCESSOR=x86_64 ;; i386) UNAME_PROCESSOR=i586 ;; esac FREEBSD_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_PROCESSOR-unknown-freebsd$FREEBSD_REL ;; i*:CYGWIN*:*) GUESS=$UNAME_MACHINE-pc-cygwin ;; *:MINGW64*:*) GUESS=$UNAME_MACHINE-pc-mingw64 ;; *:MINGW*:*) GUESS=$UNAME_MACHINE-pc-mingw32 ;; *:MSYS*:*) GUESS=$UNAME_MACHINE-pc-msys ;; i*:PW*:*) GUESS=$UNAME_MACHINE-pc-pw32 ;; *:SerenityOS:*:*) GUESS=$UNAME_MACHINE-pc-serenity ;; *:Interix*:*) case $UNAME_MACHINE in x86) GUESS=i586-pc-interix$UNAME_RELEASE ;; authenticamd | genuineintel | EM64T) GUESS=x86_64-unknown-interix$UNAME_RELEASE ;; IA64) GUESS=ia64-unknown-interix$UNAME_RELEASE ;; esac ;; i*:UWIN*:*) GUESS=$UNAME_MACHINE-pc-uwin ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) GUESS=x86_64-pc-cygwin ;; prep*:SunOS:5.*:*) SUN_REL=`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'` GUESS=powerpcle-unknown-solaris2$SUN_REL ;; *:GNU:*:*) # the GNU system GNU_ARCH=`echo "$UNAME_MACHINE" | sed -e 's,[-/].*$,,'` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's,/.*$,,'` GUESS=$GNU_ARCH-unknown-$LIBC$GNU_REL ;; *:GNU/*:*:*) # other systems with GNU libc and userland GNU_SYS=`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"` GNU_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-$GNU_SYS$GNU_REL-$LIBC ;; x86_64:[Mm]anagarm:*:*|i?86:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-pc-managarm-mlibc" ;; *:[Mm]anagarm:*:*) GUESS="$UNAME_MACHINE-unknown-managarm-mlibc" ;; *:Minix:*:*) GUESS=$UNAME_MACHINE-unknown-minix ;; aarch64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' /proc/cpuinfo 2>/dev/null` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" 
= 0 ; then LIBC=gnulibc1 ; fi GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arc:Linux:*:* | arceb:Linux:*:* | arc32:Linux:*:* | arc64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; arm*:Linux:*:*) set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then GUESS=$UNAME_MACHINE-unknown-linux-$LIBC else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabi else GUESS=$UNAME_MACHINE-unknown-linux-${LIBC}eabihf fi fi ;; avr32*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; cris:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; crisv32:Linux:*:*) GUESS=$UNAME_MACHINE-axis-linux-$LIBC ;; e2k:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; frv:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; hexagon:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:Linux:*:*) GUESS=$UNAME_MACHINE-pc-linux-$LIBC ;; ia64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; k1om:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; loongarch32:Linux:*:* | loongarch64:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m32r*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; m68*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; mips:Linux:*:* | mips64:Linux:*:*) set_cc_for_build IS_GLIBC=0 test x"${LIBC}" = xgnu && IS_GLIBC=1 sed 's/^ //' << EOF > "$dummy.c" #undef CPU #undef mips #undef mipsel #undef mips64 #undef mips64el #if ${IS_GLIBC} && defined(_ABI64) LIBCABI=gnuabi64 #else #if ${IS_GLIBC} && defined(_ABIN32) LIBCABI=gnuabin32 #else LIBCABI=${LIBC} #endif #endif #if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa64r6 #else #if ${IS_GLIBC} && !defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6 CPU=mipsisa32r6 #else #if defined(__mips64) CPU=mips64 #else CPU=mips #endif #endif #endif #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) MIPS_ENDIAN=el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) MIPS_ENDIAN= #else MIPS_ENDIAN= #endif #endif EOF cc_set_vars=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU\|^MIPS_ENDIAN\|^LIBCABI'` eval "$cc_set_vars" test "x$CPU" != x && { echo "$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI"; exit; } ;; mips64el:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; openrisc*:Linux:*:*) GUESS=or1k-unknown-linux-$LIBC ;; or32:Linux:*:* | or1k*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; padre:Linux:*:*) GUESS=sparc-unknown-linux-$LIBC ;; parisc64:Linux:*:* | hppa64:Linux:*:*) GUESS=hppa64-unknown-linux-$LIBC ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) GUESS=hppa1.1-unknown-linux-$LIBC ;; PA8*) GUESS=hppa2.0-unknown-linux-$LIBC ;; *) GUESS=hppa-unknown-linux-$LIBC ;; esac ;; ppc64:Linux:*:*) GUESS=powerpc64-unknown-linux-$LIBC ;; ppc:Linux:*:*) GUESS=powerpc-unknown-linux-$LIBC ;; ppc64le:Linux:*:*) GUESS=powerpc64le-unknown-linux-$LIBC ;; ppcle:Linux:*:*) GUESS=powerpcle-unknown-linux-$LIBC ;; riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; s390:Linux:*:* | s390x:Linux:*:*) GUESS=$UNAME_MACHINE-ibm-linux-$LIBC ;; sh64*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sh*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; sparc:Linux:*:* | sparc64:Linux:*:*) 
GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; tile*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; vax:Linux:*:*) GUESS=$UNAME_MACHINE-dec-linux-$LIBC ;; x86_64:Linux:*:*) set_cc_for_build CPU=$UNAME_MACHINE LIBCABI=$LIBC if test "$CC_FOR_BUILD" != no_compiler_found; then ABI=64 sed 's/^ //' << EOF > "$dummy.c" #ifdef __i386__ ABI=x86 #else #ifdef __ILP32__ ABI=x32 #endif #endif EOF cc_set_abi=`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^ABI' | sed 's, ,,g'` eval "$cc_set_abi" case $ABI in x86) CPU=i686 ;; x32) LIBCABI=${LIBC}x32 ;; esac fi GUESS=$CPU-pc-linux-$LIBCABI ;; xtensa*:Linux:*:*) GUESS=$UNAME_MACHINE-unknown-linux-$LIBC ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. GUESS=i386-sequent-sysv4 ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. GUESS=$UNAME_MACHINE-pc-sysv4.2uw$UNAME_VERSION ;; i*86:OS/2:*:*) # If we were able to find 'uname', then EMX Unix compatibility # is probably installed. GUESS=$UNAME_MACHINE-pc-os2-emx ;; i*86:XTS-300:*:STOP) GUESS=$UNAME_MACHINE-unknown-stop ;; i*86:atheos:*:*) GUESS=$UNAME_MACHINE-unknown-atheos ;; i*86:syllable:*:*) GUESS=$UNAME_MACHINE-pc-syllable ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) GUESS=i386-unknown-lynxos$UNAME_RELEASE ;; i*86:*DOS:*:*) GUESS=$UNAME_MACHINE-pc-msdosdjgpp ;; i*86:*:4.*:*) UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then GUESS=$UNAME_MACHINE-univel-sysv$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv$UNAME_REL fi ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac GUESS=$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 GUESS=$UNAME_MACHINE-pc-sco$UNAME_REL else GUESS=$UNAME_MACHINE-pc-sysv32 fi ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configure will decide that # this is a cross-build. GUESS=i586-pc-msdosdjgpp ;; Intel:Mach:3*:*) GUESS=i386-pc-mach3 ;; paragon:*:*:*) GUESS=i860-intel-osf1 ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then GUESS=i860-stardent-sysv$UNAME_RELEASE # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. 
GUESS=i860-unknown-sysv$UNAME_RELEASE # Unknown i860-SVR4 fi ;; mini*:CTIX:SYS*5:*) # "miniframe" GUESS=m68010-convergent-sysv ;; mc68k:UNIX:SYSTEM5:3.51m) GUESS=m68k-convergent-sysv ;; M680?0:D-NIX:5.3:*) GUESS=m68k-diab-dnix ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) GUESS=m68k-unknown-lynxos$UNAME_RELEASE ;; mc68030:UNIX_System_V:4.*:*) GUESS=m68k-atari-sysv4 ;; TSUNAMI:LynxOS:2.*:*) GUESS=sparc-unknown-lynxos$UNAME_RELEASE ;; rs6000:LynxOS:2.*:*) GUESS=rs6000-unknown-lynxos$UNAME_RELEASE ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) GUESS=powerpc-unknown-lynxos$UNAME_RELEASE ;; SM[BE]S:UNIX_SV:*:*) GUESS=mips-dde-sysv$UNAME_RELEASE ;; RM*:ReliantUNIX-*:*:*) GUESS=mips-sni-sysv4 ;; RM*:SINIX-*:*:*) GUESS=mips-sni-sysv4 ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` GUESS=$UNAME_MACHINE-sni-sysv4 else GUESS=ns32k-sni-sysv fi ;; PENTIUM:*:4.0*:*) # Unisys 'ClearPath HMP IX 4000' SVR4/MP effort # says GUESS=i586-unisys-sysv4 ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes . # How about differentiating between stratus architectures? -djm GUESS=hppa1.1-stratus-sysv4 ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. GUESS=i860-stratus-sysv4 ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. GUESS=$UNAME_MACHINE-stratus-vos ;; *:VOS:*:*) # From Paul.Green@stratus.com. GUESS=hppa1.1-stratus-vos ;; mc68*:A/UX:*:*) GUESS=m68k-apple-aux$UNAME_RELEASE ;; news*:NEWS-OS:6*:*) GUESS=mips-sony-newsos6 ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if test -d /usr/nec; then GUESS=mips-nec-sysv$UNAME_RELEASE else GUESS=mips-unknown-sysv$UNAME_RELEASE fi ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. GUESS=powerpc-be-beos ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. GUESS=powerpc-apple-beos ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. GUESS=i586-pc-beos ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
GUESS=i586-pc-haiku ;; ppc:Haiku:*:*) # Haiku running on Apple PowerPC GUESS=powerpc-apple-haiku ;; *:Haiku:*:*) # Haiku modern gcc (not bound by BeOS compat) GUESS=$UNAME_MACHINE-unknown-haiku ;; SX-4:SUPER-UX:*:*) GUESS=sx4-nec-superux$UNAME_RELEASE ;; SX-5:SUPER-UX:*:*) GUESS=sx5-nec-superux$UNAME_RELEASE ;; SX-6:SUPER-UX:*:*) GUESS=sx6-nec-superux$UNAME_RELEASE ;; SX-7:SUPER-UX:*:*) GUESS=sx7-nec-superux$UNAME_RELEASE ;; SX-8:SUPER-UX:*:*) GUESS=sx8-nec-superux$UNAME_RELEASE ;; SX-8R:SUPER-UX:*:*) GUESS=sx8r-nec-superux$UNAME_RELEASE ;; SX-ACE:SUPER-UX:*:*) GUESS=sxace-nec-superux$UNAME_RELEASE ;; Power*:Rhapsody:*:*) GUESS=powerpc-apple-rhapsody$UNAME_RELEASE ;; *:Rhapsody:*:*) GUESS=$UNAME_MACHINE-apple-rhapsody$UNAME_RELEASE ;; arm64:Darwin:*:*) GUESS=aarch64-apple-darwin$UNAME_RELEASE ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` case $UNAME_PROCESSOR in unknown) UNAME_PROCESSOR=powerpc ;; esac if command -v xcode-select > /dev/null 2> /dev/null && \ ! xcode-select --print-path > /dev/null 2> /dev/null ; then # Avoid executing cc if there is no toolchain installed as # cc will be a stub that puts up a graphical alert # prompting the user to install developer tools. CC_FOR_BUILD=no_compiler_found else set_cc_for_build fi if test "$CC_FOR_BUILD" != no_compiler_found; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then case $UNAME_PROCESSOR in i386) UNAME_PROCESSOR=x86_64 ;; powerpc) UNAME_PROCESSOR=powerpc64 ;; esac fi # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \ (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_PPC >/dev/null then UNAME_PROCESSOR=powerpc fi elif test "$UNAME_PROCESSOR" = i386 ; then # uname -m returns i386 or x86_64 UNAME_PROCESSOR=$UNAME_MACHINE fi GUESS=$UNAME_PROCESSOR-apple-darwin$UNAME_RELEASE ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = x86; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi GUESS=$UNAME_PROCESSOR-$UNAME_MACHINE-nto-qnx$UNAME_RELEASE ;; *:QNX:*:4*) GUESS=i386-pc-qnx ;; NEO-*:NONSTOP_KERNEL:*:*) GUESS=neo-tandem-nsk$UNAME_RELEASE ;; NSE-*:NONSTOP_KERNEL:*:*) GUESS=nse-tandem-nsk$UNAME_RELEASE ;; NSR-*:NONSTOP_KERNEL:*:*) GUESS=nsr-tandem-nsk$UNAME_RELEASE ;; NSV-*:NONSTOP_KERNEL:*:*) GUESS=nsv-tandem-nsk$UNAME_RELEASE ;; NSX-*:NONSTOP_KERNEL:*:*) GUESS=nsx-tandem-nsk$UNAME_RELEASE ;; *:NonStop-UX:*:*) GUESS=mips-compaq-nonstopux ;; BS2000:POSIX*:*:*) GUESS=bs2000-siemens-sysv ;; DS/*:UNIX_System_V:*:*) GUESS=$UNAME_MACHINE-$UNAME_SYSTEM-$UNAME_RELEASE ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. 
if test "${cputype-}" = 386; then UNAME_MACHINE=i386 elif test "x${cputype-}" != x; then UNAME_MACHINE=$cputype fi GUESS=$UNAME_MACHINE-unknown-plan9 ;; *:TOPS-10:*:*) GUESS=pdp10-unknown-tops10 ;; *:TENEX:*:*) GUESS=pdp10-unknown-tenex ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) GUESS=pdp10-dec-tops20 ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) GUESS=pdp10-xkl-tops20 ;; *:TOPS-20:*:*) GUESS=pdp10-unknown-tops20 ;; *:ITS:*:*) GUESS=pdp10-unknown-its ;; SEI:*:*:SEIUX) GUESS=mips-sei-seiux$UNAME_RELEASE ;; *:DragonFly:*:*) DRAGONFLY_REL=`echo "$UNAME_RELEASE" | sed -e 's/[-(].*//'` GUESS=$UNAME_MACHINE-unknown-dragonfly$DRAGONFLY_REL ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case $UNAME_MACHINE in A*) GUESS=alpha-dec-vms ;; I*) GUESS=ia64-dec-vms ;; V*) GUESS=vax-dec-vms ;; esac ;; *:XENIX:*:SysV) GUESS=i386-pc-xenix ;; i*86:skyos:*:*) SKYOS_REL=`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'` GUESS=$UNAME_MACHINE-pc-skyos$SKYOS_REL ;; i*86:rdos:*:*) GUESS=$UNAME_MACHINE-pc-rdos ;; i*86:Fiwix:*:*) GUESS=$UNAME_MACHINE-pc-fiwix ;; *:AROS:*:*) GUESS=$UNAME_MACHINE-unknown-aros ;; x86_64:VMkernel:*:*) GUESS=$UNAME_MACHINE-unknown-esx ;; amd64:Isilon\ OneFS:*:*) GUESS=x86_64-unknown-onefs ;; *:Unleashed:*:*) GUESS=$UNAME_MACHINE-unknown-unleashed$UNAME_RELEASE ;; esac # Do we have a guess based on uname results? if test "x$GUESS" != x; then echo "$GUESS" exit fi # No uname command or uname output not recognized. set_cc_for_build cat > "$dummy.c" < #include #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #include #if defined(_SIZE_T_) || defined(SIGLOST) #include #endif #endif #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know.... */ printf ("mips-sony-bsd\n"); exit (0); #else #include printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? 
*/ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) #if !defined (ultrix) #include #if defined (BSD) #if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); #else #if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); #else printf ("vax-dec-bsd\n"); exit (0); #endif #endif #else printf ("vax-dec-bsd\n"); exit (0); #endif #else #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname un; uname (&un); printf ("vax-dec-ultrix%s\n", un.release); exit (0); #else printf ("vax-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__) #if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__) #if defined(_SIZE_T_) || defined(SIGLOST) struct utsname *un; uname (&un); printf ("mips-dec-ultrix%s\n", un.release); exit (0); #else printf ("mips-dec-ultrix\n"); exit (0); #endif #endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null && SYSTEM_NAME=`"$dummy"` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo "$ISP-apollo-$SYSTYPE"; exit; } echo "$0: unable to guess system type" >&2 case $UNAME_MACHINE:$UNAME_SYSTEM in mips:Linux | mips64:Linux) # If we got here on MIPS GNU/Linux, output extra information. cat >&2 <&2 <&2 </dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = "$UNAME_MACHINE" UNAME_RELEASE = "$UNAME_RELEASE" UNAME_SYSTEM = "$UNAME_SYSTEM" UNAME_VERSION = "$UNAME_VERSION" EOF fi exit 1 # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/libuv/config.sub000077500000000000000000001061531471441230600172170ustar00rootroot00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright 1992-2023 Free Software Foundation, Inc. # shellcheck disable=SC2006,SC2268 # see below for rationale timestamp='2023-06-26' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that # program. 
This Exception is an additional permission under section 7 # of the GNU General Public License, version 3 ("GPLv3"). # Please send patches to . # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # https://git.savannah.gnu.org/cgit/config.git/plain/config.sub # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. # The "shellcheck disable" line above the timestamp inhibits complaints # about features and limitations of the classic Bourne shell that were # superseded or lifted in POSIX. However, this script identifies a wide # variety of pre-POSIX systems that do not have POSIX shells at all, and # even some reasonably current systems (Solaris 10 as case-in-point) still # have a pre-POSIX /bin/sh. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS Canonicalize a configuration name. Options: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright 1992-2023 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try '$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; *local*) # First pass through any local machine types. 
echo "$1" exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Split fields of configuration type # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read field1 field2 field3 field4 <&2 exit 1 ;; *-*-*-*) basic_machine=$field1-$field2 basic_os=$field3-$field4 ;; *-*-*) # Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two # parts maybe_os=$field2-$field3 case $maybe_os in nto-qnx* | linux-* | uclinux-uclibc* \ | uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* \ | netbsd*-eabi* | kopensolaris*-gnu* | cloudabi*-eabi* \ | storm-chaos* | os2-emx* | rtmk-nova* | managarm-* \ | windows-* ) basic_machine=$field1 basic_os=$maybe_os ;; android-linux) basic_machine=$field1-unknown basic_os=linux-android ;; *) basic_machine=$field1-$field2 basic_os=$field3 ;; esac ;; *-*) # A lone config we happen to match not fitting any pattern case $field1-$field2 in decstation-3100) basic_machine=mips-dec basic_os= ;; *-*) # Second component is usually, but not always the OS case $field2 in # Prevent following clause from handling this valid os sun*os*) basic_machine=$field1 basic_os=$field2 ;; zephyr*) basic_machine=$field1-unknown basic_os=$field2 ;; # Manufacturers dec* | mips* | sequent* | encore* | pc533* | sgi* | sony* \ | att* | 7300* | 3300* | delta* | motorola* | sun[234]* \ | unicom* | ibm* | next | hp | isi* | apollo | altos* \ | convergent* | ncr* | news | 32* | 3600* | 3100* \ | hitachi* | c[123]* | convex* | sun | crds | omron* | dg \ | ultra | tti* | harris | dolphin | highlevel | gould \ | cbm | ns | masscomp | apple | axis | knuth | cray \ | microblaze* | sim | cisco \ | oki | wec | wrs | winbond) basic_machine=$field1-$field2 basic_os= ;; *) basic_machine=$field1 basic_os=$field2 ;; esac ;; esac ;; *) # Convert single-component short-hands not valid as part of # multi-component configurations. 
case $field1 in 386bsd) basic_machine=i386-pc basic_os=bsd ;; a29khif) basic_machine=a29k-amd basic_os=udi ;; adobe68k) basic_machine=m68010-adobe basic_os=scout ;; alliant) basic_machine=fx80-alliant basic_os= ;; altos | altos3068) basic_machine=m68k-altos basic_os= ;; am29k) basic_machine=a29k-none basic_os=bsd ;; amdahl) basic_machine=580-amdahl basic_os=sysv ;; amiga) basic_machine=m68k-unknown basic_os= ;; amigaos | amigados) basic_machine=m68k-unknown basic_os=amigaos ;; amigaunix | amix) basic_machine=m68k-unknown basic_os=sysv4 ;; apollo68) basic_machine=m68k-apollo basic_os=sysv ;; apollo68bsd) basic_machine=m68k-apollo basic_os=bsd ;; aros) basic_machine=i386-pc basic_os=aros ;; aux) basic_machine=m68k-apple basic_os=aux ;; balance) basic_machine=ns32k-sequent basic_os=dynix ;; blackfin) basic_machine=bfin-unknown basic_os=linux ;; cegcc) basic_machine=arm-unknown basic_os=cegcc ;; convex-c1) basic_machine=c1-convex basic_os=bsd ;; convex-c2) basic_machine=c2-convex basic_os=bsd ;; convex-c32) basic_machine=c32-convex basic_os=bsd ;; convex-c34) basic_machine=c34-convex basic_os=bsd ;; convex-c38) basic_machine=c38-convex basic_os=bsd ;; cray) basic_machine=j90-cray basic_os=unicos ;; crds | unos) basic_machine=m68k-crds basic_os= ;; da30) basic_machine=m68k-da30 basic_os= ;; decstation | pmax | pmin | dec3100 | decstatn) basic_machine=mips-dec basic_os= ;; delta88) basic_machine=m88k-motorola basic_os=sysv3 ;; dicos) basic_machine=i686-pc basic_os=dicos ;; djgpp) basic_machine=i586-pc basic_os=msdosdjgpp ;; ebmon29k) basic_machine=a29k-amd basic_os=ebmon ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson basic_os=ose ;; gmicro) basic_machine=tron-gmicro basic_os=sysv ;; go32) basic_machine=i386-pc basic_os=go32 ;; h8300hms) basic_machine=h8300-hitachi basic_os=hms ;; h8300xray) basic_machine=h8300-hitachi basic_os=xray ;; h8500hms) basic_machine=h8500-hitachi basic_os=hms ;; harris) basic_machine=m88k-harris basic_os=sysv3 ;; hp300 | hp300hpux) basic_machine=m68k-hp basic_os=hpux ;; hp300bsd) basic_machine=m68k-hp basic_os=bsd ;; hppaosf) basic_machine=hppa1.1-hp basic_os=osf ;; hppro) basic_machine=hppa1.1-hp basic_os=proelf ;; i386mach) basic_machine=i386-mach basic_os=mach ;; isi68 | isi) basic_machine=m68k-isi basic_os=sysv ;; m68knommu) basic_machine=m68k-unknown basic_os=linux ;; magnum | m3230) basic_machine=mips-mips basic_os=sysv ;; merlin) basic_machine=ns32k-utek basic_os=sysv ;; mingw64) basic_machine=x86_64-pc basic_os=mingw64 ;; mingw32) basic_machine=i686-pc basic_os=mingw32 ;; mingw32ce) basic_machine=arm-unknown basic_os=mingw32ce ;; monitor) basic_machine=m68k-rom68k basic_os=coff ;; morphos) basic_machine=powerpc-unknown basic_os=morphos ;; moxiebox) basic_machine=moxie-unknown basic_os=moxiebox ;; msdos) basic_machine=i386-pc basic_os=msdos ;; msys) basic_machine=i686-pc basic_os=msys ;; mvs) basic_machine=i370-ibm basic_os=mvs ;; nacl) basic_machine=le32-unknown basic_os=nacl ;; ncr3000) basic_machine=i486-ncr basic_os=sysv4 ;; netbsd386) basic_machine=i386-pc basic_os=netbsd ;; netwinder) basic_machine=armv4l-rebel basic_os=linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony basic_os=newsos ;; news1000) basic_machine=m68030-sony basic_os=newsos ;; necv70) basic_machine=v70-nec basic_os=sysv ;; nh3000) basic_machine=m68k-harris basic_os=cxux ;; nh[45]000) basic_machine=m88k-harris basic_os=cxux ;; nindy960) basic_machine=i960-intel basic_os=nindy ;; mon960) basic_machine=i960-intel basic_os=mon960 ;; nonstopux) 
basic_machine=mips-compaq basic_os=nonstopux ;; os400) basic_machine=powerpc-ibm basic_os=os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson basic_os=ose ;; os68k) basic_machine=m68k-none basic_os=os68k ;; paragon) basic_machine=i860-intel basic_os=osf ;; parisc) basic_machine=hppa-unknown basic_os=linux ;; psp) basic_machine=mipsallegrexel-sony basic_os=psp ;; pw32) basic_machine=i586-unknown basic_os=pw32 ;; rdos | rdos64) basic_machine=x86_64-pc basic_os=rdos ;; rdos32) basic_machine=i386-pc basic_os=rdos ;; rom68k) basic_machine=m68k-rom68k basic_os=coff ;; sa29200) basic_machine=a29k-amd basic_os=udi ;; sei) basic_machine=mips-sei basic_os=seiux ;; sequent) basic_machine=i386-sequent basic_os= ;; sps7) basic_machine=m68k-bull basic_os=sysv2 ;; st2000) basic_machine=m68k-tandem basic_os= ;; stratus) basic_machine=i860-stratus basic_os=sysv4 ;; sun2) basic_machine=m68000-sun basic_os= ;; sun2os3) basic_machine=m68000-sun basic_os=sunos3 ;; sun2os4) basic_machine=m68000-sun basic_os=sunos4 ;; sun3) basic_machine=m68k-sun basic_os= ;; sun3os3) basic_machine=m68k-sun basic_os=sunos3 ;; sun3os4) basic_machine=m68k-sun basic_os=sunos4 ;; sun4) basic_machine=sparc-sun basic_os= ;; sun4os3) basic_machine=sparc-sun basic_os=sunos3 ;; sun4os4) basic_machine=sparc-sun basic_os=sunos4 ;; sun4sol2) basic_machine=sparc-sun basic_os=solaris2 ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun basic_os= ;; sv1) basic_machine=sv1-cray basic_os=unicos ;; symmetry) basic_machine=i386-sequent basic_os=dynix ;; t3e) basic_machine=alphaev5-cray basic_os=unicos ;; t90) basic_machine=t90-cray basic_os=unicos ;; toad1) basic_machine=pdp10-xkl basic_os=tops20 ;; tpf) basic_machine=s390x-ibm basic_os=tpf ;; udi29k) basic_machine=a29k-amd basic_os=udi ;; ultra3) basic_machine=a29k-nyu basic_os=sym1 ;; v810 | necv810) basic_machine=v810-nec basic_os=none ;; vaxv) basic_machine=vax-dec basic_os=sysv ;; vms) basic_machine=vax-dec basic_os=vms ;; vsta) basic_machine=i386-pc basic_os=vsta ;; vxworks960) basic_machine=i960-wrs basic_os=vxworks ;; vxworks68) basic_machine=m68k-wrs basic_os=vxworks ;; vxworks29k) basic_machine=a29k-wrs basic_os=vxworks ;; xbox) basic_machine=i686-pc basic_os=mingw32 ;; ymp) basic_machine=ymp-cray basic_os=unicos ;; *) basic_machine=$1 basic_os= ;; esac ;; esac # Decode 1-component or ad-hoc basic machines case $basic_machine in # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) cpu=hppa1.1 vendor=winbond ;; op50n) cpu=hppa1.1 vendor=oki ;; op60c) cpu=hppa1.1 vendor=oki ;; ibm*) cpu=i370 vendor=ibm ;; orion105) cpu=clipper vendor=highlevel ;; mac | mpw | mac-mpw) cpu=m68k vendor=apple ;; pmac | pmac-mpw) cpu=powerpc vendor=apple ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) cpu=m68000 vendor=att ;; 3b*) cpu=we32k vendor=att ;; bluegene*) cpu=powerpc vendor=ibm basic_os=cnk ;; decsystem10* | dec10*) cpu=pdp10 vendor=dec basic_os=tops10 ;; decsystem20* | dec20*) cpu=pdp10 vendor=dec basic_os=tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) cpu=m68k vendor=motorola ;; dpx2*) cpu=m68k vendor=bull basic_os=sysv3 ;; encore | umax | mmax) cpu=ns32k vendor=encore ;; elxsi) cpu=elxsi vendor=elxsi basic_os=${basic_os:-bsd} ;; fx2800) cpu=i860 vendor=alliant ;; genix) cpu=ns32k vendor=ns ;; h3050r* | hiux*) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) cpu=m68000 vendor=hp ;; hp9k3[2-9][0-9]) cpu=m68k vendor=hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) cpu=hppa1.1 vendor=hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) cpu=hppa1.1 vendor=hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) cpu=hppa1.0 vendor=hp ;; i*86v32) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv32 ;; i*86v4*) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv4 ;; i*86v) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=sysv ;; i*86sol2) cpu=`echo "$1" | sed -e 's/86.*/86/'` vendor=pc basic_os=solaris2 ;; j90 | j90-cray) cpu=j90 vendor=cray basic_os=${basic_os:-unicos} ;; iris | iris4d) cpu=mips vendor=sgi case $basic_os in irix*) ;; *) basic_os=irix4 ;; esac ;; miniframe) cpu=m68000 vendor=convergent ;; *mint | mint[0-9]* | *MiNT | *MiNT[0-9]*) cpu=m68k vendor=atari basic_os=mint ;; news-3600 | risc-news) cpu=mips vendor=sony basic_os=newsos ;; next | m*-next) cpu=m68k vendor=next case $basic_os in openstep*) ;; nextstep*) ;; ns2*) basic_os=nextstep2 ;; *) basic_os=nextstep3 ;; esac ;; np1) cpu=np1 vendor=gould ;; op50n-* | op60c-*) cpu=hppa1.1 vendor=oki basic_os=proelf ;; pa-hitachi) cpu=hppa1.1 vendor=hitachi basic_os=hiuxwe2 ;; pbd) cpu=sparc vendor=tti ;; pbb) cpu=m68k vendor=tti ;; pc532) cpu=ns32k vendor=pc532 ;; pn) cpu=pn vendor=gould ;; power) cpu=power vendor=ibm ;; ps2) cpu=i386 vendor=ibm ;; rm[46]00) cpu=mips vendor=siemens ;; rtpc | rtpc-*) cpu=romp vendor=ibm ;; sde) cpu=mipsisa32 vendor=sde basic_os=${basic_os:-elf} ;; simso-wrs) cpu=sparclite vendor=wrs basic_os=vxworks ;; tower | tower-32) cpu=m68k vendor=ncr ;; vpp*|vx|vx-*) cpu=f301 vendor=fujitsu ;; w65) cpu=w65 vendor=wdc ;; w89k-*) cpu=hppa1.1 vendor=winbond basic_os=proelf ;; none) cpu=none vendor=none ;; leon|leon[3-9]) cpu=sparc vendor=$basic_machine ;; leon-*|leon[3-9]-*) cpu=sparc vendor=`echo "$basic_machine" | sed 's/-.*//'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read cpu vendor <&2 exit 1 ;; esac ;; esac # Here we canonicalize certain aliases for manufacturers. case $vendor in digital*) vendor=dec ;; commodore*) vendor=cbm ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if test x$basic_os != x then # First recognize some ad-hoc cases, or perhaps split kernel-os, or else just # set os. 
case $basic_os in gnu/linux*) kernel=linux os=`echo "$basic_os" | sed -e 's|gnu/linux|gnu|'` ;; os2-emx) kernel=os2 os=`echo "$basic_os" | sed -e 's|os2-emx|emx|'` ;; nto-qnx*) kernel=nto os=`echo "$basic_os" | sed -e 's|nto-qnx|qnx|'` ;; *-*) # shellcheck disable=SC2162 saved_IFS=$IFS IFS="-" read kernel os <&2 exit 1 ;; esac # As a final step for OS-related things, validate the OS-kernel combination # (given a valid OS), if there is a kernel. case $kernel-$os in linux-gnu* | linux-dietlibc* | linux-android* | linux-newlib* \ | linux-musl* | linux-relibc* | linux-uclibc* | linux-mlibc* ) ;; uclinux-uclibc* ) ;; managarm-mlibc* | managarm-kernel* ) ;; windows*-gnu* | windows*-msvc*) ;; -dietlibc* | -newlib* | -musl* | -relibc* | -uclibc* | -mlibc* ) # These are just libc implementations, not actual OSes, and thus # require a kernel. echo "Invalid configuration '$1': libc '$os' needs explicit kernel." 1>&2 exit 1 ;; -kernel* ) echo "Invalid configuration '$1': '$os' needs explicit kernel." 1>&2 exit 1 ;; *-kernel* ) echo "Invalid configuration '$1': '$kernel' does not support '$os'." 1>&2 exit 1 ;; *-msvc* ) echo "Invalid configuration '$1': '$os' needs 'windows'." 1>&2 exit 1 ;; kfreebsd*-gnu* | kopensolaris*-gnu*) ;; vxworks-simlinux | vxworks-simwindows | vxworks-spe) ;; nto-qnx*) ;; os2-emx) ;; *-eabi* | *-gnueabi*) ;; -*) # Blank kernel with real OS is always fine. ;; *-*) echo "Invalid configuration '$1': Kernel '$kernel' not known to work with OS '$os'." 1>&2 exit 1 ;; esac # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. case $vendor in unknown) case $cpu-$os in *-riscix*) vendor=acorn ;; *-sunos*) vendor=sun ;; *-cnk* | *-aix*) vendor=ibm ;; *-beos*) vendor=be ;; *-hpux*) vendor=hp ;; *-mpeix*) vendor=hp ;; *-hiux*) vendor=hitachi ;; *-unos*) vendor=crds ;; *-dgux*) vendor=dg ;; *-luna*) vendor=omron ;; *-genix*) vendor=ns ;; *-clix*) vendor=intergraph ;; *-mvs* | *-opened*) vendor=ibm ;; *-os400*) vendor=ibm ;; s390-* | s390x-*) vendor=ibm ;; *-ptx*) vendor=sequent ;; *-tpf*) vendor=ibm ;; *-vxsim* | *-vxworks* | *-windiss*) vendor=wrs ;; *-aux*) vendor=apple ;; *-hms*) vendor=hitachi ;; *-mpw* | *-macos*) vendor=apple ;; *-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*) vendor=atari ;; *-vos*) vendor=stratus ;; esac ;; esac echo "$cpu-$vendor-${kernel:+$kernel-}$os" exit # Local variables: # eval: (add-hook 'before-save-hook 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: gevent-24.11.1/deps/libuv/configure.ac000066400000000000000000000104041471441230600175130ustar00rootroot00000000000000# Copyright (c) 2013, Ben Noordhuis # # Permission to use, copy, modify, and/or distribute this software for any # purpose with or without fee is hereby granted, provided that the above # copyright notice and this permission notice appear in all copies. # # THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF # OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
AC_PREREQ(2.57) AC_INIT([libuv], [1.44.2], [https://github.com/libuv/libuv/issues]) AC_CONFIG_MACRO_DIR([m4]) m4_include([m4/libuv-extra-automake-flags.m4]) m4_include([m4/as_case.m4]) m4_include([m4/libuv-check-flags.m4]) AM_INIT_AUTOMAKE([-Wall -Werror foreign subdir-objects] UV_EXTRA_AUTOMAKE_FLAGS) AC_CANONICAL_HOST AC_ENABLE_SHARED AC_ENABLE_STATIC AC_PROG_CC AM_PROG_CC_C_O CC_ATTRIBUTE_VISIBILITY([default], [ CC_FLAG_VISIBILITY([CFLAGS="${CFLAGS} -fvisibility=hidden"]) ]) # Xlc has a flag "-f". Need to use CC_CHECK_FLAG_SUPPORTED_APPEND so # we exclude -fno-strict-aliasing for xlc CC_CHECK_FLAG_SUPPORTED_APPEND([-fno-strict-aliasing]) CC_CHECK_CFLAGS_APPEND([-g]) CC_CHECK_CFLAGS_APPEND([-std=gnu89]) CC_CHECK_CFLAGS_APPEND([-Wall]) CC_CHECK_CFLAGS_APPEND([-Wextra]) CC_CHECK_CFLAGS_APPEND([-Wno-long-long]) CC_CHECK_CFLAGS_APPEND([-Wno-unused-parameter]) CC_CHECK_CFLAGS_APPEND([-Wstrict-prototypes]) # AM_PROG_AR is not available in automake v0.11 but it's essential in v0.12. m4_ifdef([AM_PROG_AR], [AM_PROG_AR]) # autoconf complains if AC_PROG_LIBTOOL precedes AM_PROG_AR. AC_PROG_LIBTOOL m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])]) LT_INIT AX_PTHREAD([ LIBS="$LIBS $PTHREAD_LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" ]) AC_SEARCH_LIBS([dlopen], [dl]) AC_SEARCH_LIBS([kstat_lookup], [kstat]) AC_SEARCH_LIBS([gethostbyname], [nsl]) AC_SEARCH_LIBS([perfstat_cpu], [perfstat]) AC_SEARCH_LIBS([clock_gettime], [rt]) AC_SEARCH_LIBS([sendfile], [sendfile]) AC_SEARCH_LIBS([socket], [socket]) AC_SYS_LARGEFILE AM_CONDITIONAL([AIX], [AS_CASE([$host_os],[aix*], [true], [false])]) AM_CONDITIONAL([ANDROID], [AS_CASE([$host_os],[linux-android*],[true], [false])]) AM_CONDITIONAL([CYGWIN], [AS_CASE([$host_os],[cygwin*], [true], [false])]) AM_CONDITIONAL([DARWIN], [AS_CASE([$host_os],[darwin*], [true], [false])]) AM_CONDITIONAL([DRAGONFLY],[AS_CASE([$host_os],[dragonfly*], [true], [false])]) AM_CONDITIONAL([FREEBSD], [AS_CASE([$host_os],[*freebsd*], [true], [false])]) AM_CONDITIONAL([KFREEBSD], [AS_CASE([$host_os],[kfreebsd*], [true], [false])]) AM_CONDITIONAL([HAIKU], [AS_CASE([$host_os],[haiku], [true], [false])]) AM_CONDITIONAL([HURD], [AS_CASE([$host_os],[gnu*], [true], [false])]) AM_CONDITIONAL([LINUX], [AS_CASE([$host_os],[linux*], [true], [false])]) AM_CONDITIONAL([MSYS], [AS_CASE([$host_os],[msys*], [true], [false])]) AM_CONDITIONAL([NETBSD], [AS_CASE([$host_os],[netbsd*], [true], [false])]) AM_CONDITIONAL([OPENBSD], [AS_CASE([$host_os],[openbsd*], [true], [false])]) AM_CONDITIONAL([OS390], [AS_CASE([$host_os],[openedition*], [true], [false])]) AM_CONDITIONAL([OS400], [AS_CASE([$host_os],[os400], [true], [false])]) AM_CONDITIONAL([SUNOS], [AS_CASE([$host_os],[solaris*], [true], [false])]) AM_CONDITIONAL([WINNT], [AS_CASE([$host_os],[mingw*], [true], [false])]) AS_CASE([$host_os],[mingw*], [ LIBS="$LIBS -lws2_32 -lpsapi -liphlpapi -lshell32 -luserenv -luser32" ]) AS_CASE([$host_os], [netbsd*], [AC_CHECK_LIB([kvm], [kvm_open])]) AS_CASE([$host_os], [kfreebsd*], [ LIBS="$LIBS -lfreebsd-glue" ]) AS_CASE([$host_os], [haiku], [ LIBS="$LIBS -lnetwork" ]) AC_CHECK_HEADERS([sys/ahafs_evProds.h]) AC_CONFIG_FILES([Makefile libuv.pc]) AC_CONFIG_LINKS([test/fixtures/empty_file:test/fixtures/empty_file]) AC_CONFIG_LINKS([test/fixtures/load_error.node:test/fixtures/load_error.node]) AC_CONFIG_LINKS([test/fixtures/lorem_ipsum.txt:test/fixtures/lorem_ipsum.txt]) AC_OUTPUT 
gevent-24.11.1/deps/libuv/img/000077500000000000000000000000001471441230600160025ustar00rootroot00000000000000
gevent-24.11.1/deps/libuv/img/banner.png000066400000000000000000001261061471441230600177630ustar00rootroot00000000000000[binary PNG image data omitted; not representable in this text extraction]
gevent-24.11.1/deps/libuv/img/logos.svg000066400000000000000000001772241471441230600176600ustar00rootroot00000000000000[SVG markup stripped by the text extraction; only the MIME type "image/svg+xml" survives]
gevent-24.11.1/deps/libuv/include/000077500000000000000000000000001471441230600166515ustar00rootroot00000000000000
gevent-24.11.1/deps/libuv/include/uv.h000066400000000000000000002032471471441230600174640ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* See https://github.com/libuv/libuv#documentation for documentation. */ #ifndef UV_H #define UV_H #ifdef __cplusplus extern "C" { #endif #if defined(BUILDING_UV_SHARED) && defined(USING_UV_SHARED) #error "Define either BUILDING_UV_SHARED or USING_UV_SHARED, not both." #endif #ifdef _WIN32 /* Windows - set up dll import/export decorators. */ # if defined(BUILDING_UV_SHARED) /* Building shared library. */ # define UV_EXTERN __declspec(dllexport) # elif defined(USING_UV_SHARED) /* Using shared library. */ # define UV_EXTERN __declspec(dllimport) # else /* Building static library. */ # define UV_EXTERN /* nothing */ # endif #elif __GNUC__ >= 4 # define UV_EXTERN __attribute__((visibility("default"))) #elif defined(__SUNPRO_C) && (__SUNPRO_C >= 0x550) /* Sun Studio >= 8 */ # define UV_EXTERN __global #else # define UV_EXTERN /* nothing */ #endif #include "uv/errno.h" #include "uv/version.h" #include <stddef.h> #include <stdio.h> #if defined(_MSC_VER) && _MSC_VER < 1600 # include "uv/stdint-msvc2008.h" #else # include <stdint.h> #endif #if defined(_WIN32) # include "uv/win.h" #else # include "uv/unix.h" #endif /* Expand this list if necessary.
*/ #define UV_ERRNO_MAP(XX) \ XX(E2BIG, "argument list too long") \ XX(EACCES, "permission denied") \ XX(EADDRINUSE, "address already in use") \ XX(EADDRNOTAVAIL, "address not available") \ XX(EAFNOSUPPORT, "address family not supported") \ XX(EAGAIN, "resource temporarily unavailable") \ XX(EAI_ADDRFAMILY, "address family not supported") \ XX(EAI_AGAIN, "temporary failure") \ XX(EAI_BADFLAGS, "bad ai_flags value") \ XX(EAI_BADHINTS, "invalid value for hints") \ XX(EAI_CANCELED, "request canceled") \ XX(EAI_FAIL, "permanent failure") \ XX(EAI_FAMILY, "ai_family not supported") \ XX(EAI_MEMORY, "out of memory") \ XX(EAI_NODATA, "no address") \ XX(EAI_NONAME, "unknown node or service") \ XX(EAI_OVERFLOW, "argument buffer overflow") \ XX(EAI_PROTOCOL, "resolved protocol is unknown") \ XX(EAI_SERVICE, "service not available for socket type") \ XX(EAI_SOCKTYPE, "socket type not supported") \ XX(EALREADY, "connection already in progress") \ XX(EBADF, "bad file descriptor") \ XX(EBUSY, "resource busy or locked") \ XX(ECANCELED, "operation canceled") \ XX(ECHARSET, "invalid Unicode character") \ XX(ECONNABORTED, "software caused connection abort") \ XX(ECONNREFUSED, "connection refused") \ XX(ECONNRESET, "connection reset by peer") \ XX(EDESTADDRREQ, "destination address required") \ XX(EEXIST, "file already exists") \ XX(EFAULT, "bad address in system call argument") \ XX(EFBIG, "file too large") \ XX(EHOSTUNREACH, "host is unreachable") \ XX(EINTR, "interrupted system call") \ XX(EINVAL, "invalid argument") \ XX(EIO, "i/o error") \ XX(EISCONN, "socket is already connected") \ XX(EISDIR, "illegal operation on a directory") \ XX(ELOOP, "too many symbolic links encountered") \ XX(EMFILE, "too many open files") \ XX(EMSGSIZE, "message too long") \ XX(ENAMETOOLONG, "name too long") \ XX(ENETDOWN, "network is down") \ XX(ENETUNREACH, "network is unreachable") \ XX(ENFILE, "file table overflow") \ XX(ENOBUFS, "no buffer space available") \ XX(ENODEV, "no such device") \ XX(ENOENT, "no such file or directory") \ XX(ENOMEM, "not enough memory") \ XX(ENONET, "machine is not on the network") \ XX(ENOPROTOOPT, "protocol not available") \ XX(ENOSPC, "no space left on device") \ XX(ENOSYS, "function not implemented") \ XX(ENOTCONN, "socket is not connected") \ XX(ENOTDIR, "not a directory") \ XX(ENOTEMPTY, "directory not empty") \ XX(ENOTSOCK, "socket operation on non-socket") \ XX(ENOTSUP, "operation not supported on socket") \ XX(EOVERFLOW, "value too large for defined data type") \ XX(EPERM, "operation not permitted") \ XX(EPIPE, "broken pipe") \ XX(EPROTO, "protocol error") \ XX(EPROTONOSUPPORT, "protocol not supported") \ XX(EPROTOTYPE, "protocol wrong type for socket") \ XX(ERANGE, "result too large") \ XX(EROFS, "read-only file system") \ XX(ESHUTDOWN, "cannot send after transport endpoint shutdown") \ XX(ESPIPE, "invalid seek") \ XX(ESRCH, "no such process") \ XX(ETIMEDOUT, "connection timed out") \ XX(ETXTBSY, "text file is busy") \ XX(EXDEV, "cross-device link not permitted") \ XX(UNKNOWN, "unknown error") \ XX(EOF, "end of file") \ XX(ENXIO, "no such device or address") \ XX(EMLINK, "too many links") \ XX(EHOSTDOWN, "host is down") \ XX(EREMOTEIO, "remote I/O error") \ XX(ENOTTY, "inappropriate ioctl for device") \ XX(EFTYPE, "inappropriate file type or format") \ XX(EILSEQ, "illegal byte sequence") \ XX(ESOCKTNOSUPPORT, "socket type not supported") \ #define UV_HANDLE_TYPE_MAP(XX) \ XX(ASYNC, async) \ XX(CHECK, check) \ XX(FS_EVENT, fs_event) \ XX(FS_POLL, fs_poll) \ XX(HANDLE, handle) \ XX(IDLE, 
idle) \ XX(NAMED_PIPE, pipe) \ XX(POLL, poll) \ XX(PREPARE, prepare) \ XX(PROCESS, process) \ XX(STREAM, stream) \ XX(TCP, tcp) \ XX(TIMER, timer) \ XX(TTY, tty) \ XX(UDP, udp) \ XX(SIGNAL, signal) \ #define UV_REQ_TYPE_MAP(XX) \ XX(REQ, req) \ XX(CONNECT, connect) \ XX(WRITE, write) \ XX(SHUTDOWN, shutdown) \ XX(UDP_SEND, udp_send) \ XX(FS, fs) \ XX(WORK, work) \ XX(GETADDRINFO, getaddrinfo) \ XX(GETNAMEINFO, getnameinfo) \ XX(RANDOM, random) \ typedef enum { #define XX(code, _) UV_ ## code = UV__ ## code, UV_ERRNO_MAP(XX) #undef XX UV_ERRNO_MAX = UV__EOF - 1 } uv_errno_t; typedef enum { UV_UNKNOWN_HANDLE = 0, #define XX(uc, lc) UV_##uc, UV_HANDLE_TYPE_MAP(XX) #undef XX UV_FILE, UV_HANDLE_TYPE_MAX } uv_handle_type; typedef enum { UV_UNKNOWN_REQ = 0, #define XX(uc, lc) UV_##uc, UV_REQ_TYPE_MAP(XX) #undef XX UV_REQ_TYPE_PRIVATE UV_REQ_TYPE_MAX } uv_req_type; /* Handle types. */ typedef struct uv_loop_s uv_loop_t; typedef struct uv_handle_s uv_handle_t; typedef struct uv_dir_s uv_dir_t; typedef struct uv_stream_s uv_stream_t; typedef struct uv_tcp_s uv_tcp_t; typedef struct uv_udp_s uv_udp_t; typedef struct uv_pipe_s uv_pipe_t; typedef struct uv_tty_s uv_tty_t; typedef struct uv_poll_s uv_poll_t; typedef struct uv_timer_s uv_timer_t; typedef struct uv_prepare_s uv_prepare_t; typedef struct uv_check_s uv_check_t; typedef struct uv_idle_s uv_idle_t; typedef struct uv_async_s uv_async_t; typedef struct uv_process_s uv_process_t; typedef struct uv_fs_event_s uv_fs_event_t; typedef struct uv_fs_poll_s uv_fs_poll_t; typedef struct uv_signal_s uv_signal_t; /* Request types. */ typedef struct uv_req_s uv_req_t; typedef struct uv_getaddrinfo_s uv_getaddrinfo_t; typedef struct uv_getnameinfo_s uv_getnameinfo_t; typedef struct uv_shutdown_s uv_shutdown_t; typedef struct uv_write_s uv_write_t; typedef struct uv_connect_s uv_connect_t; typedef struct uv_udp_send_s uv_udp_send_t; typedef struct uv_fs_s uv_fs_t; typedef struct uv_work_s uv_work_t; typedef struct uv_random_s uv_random_t; /* None of the above. */ typedef struct uv_env_item_s uv_env_item_t; typedef struct uv_cpu_info_s uv_cpu_info_t; typedef struct uv_interface_address_s uv_interface_address_t; typedef struct uv_dirent_s uv_dirent_t; typedef struct uv_passwd_s uv_passwd_t; typedef struct uv_utsname_s uv_utsname_t; typedef struct uv_statfs_s uv_statfs_t; typedef enum { UV_LOOP_BLOCK_SIGNAL = 0, UV_METRICS_IDLE_TIME } uv_loop_option; typedef enum { UV_RUN_DEFAULT = 0, UV_RUN_ONCE, UV_RUN_NOWAIT } uv_run_mode; UV_EXTERN unsigned int uv_version(void); UV_EXTERN const char* uv_version_string(void); typedef void* (*uv_malloc_func)(size_t size); typedef void* (*uv_realloc_func)(void* ptr, size_t size); typedef void* (*uv_calloc_func)(size_t count, size_t size); typedef void (*uv_free_func)(void* ptr); UV_EXTERN void uv_library_shutdown(void); UV_EXTERN int uv_replace_allocator(uv_malloc_func malloc_func, uv_realloc_func realloc_func, uv_calloc_func calloc_func, uv_free_func free_func); UV_EXTERN uv_loop_t* uv_default_loop(void); UV_EXTERN int uv_loop_init(uv_loop_t* loop); UV_EXTERN int uv_loop_close(uv_loop_t* loop); /* * NOTE: * This function is DEPRECATED (to be removed after 0.12), users should * allocate the loop manually and use uv_loop_init instead. */ UV_EXTERN uv_loop_t* uv_loop_new(void); /* * NOTE: * This function is DEPRECATED (to be removed after 0.12). Users should use * uv_loop_close and free the memory manually instead. 
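Illustrative sketch of that replacement pattern (a hedged example, not upstream libuv documentation; the caller owns the loop memory and return codes are ignored for brevity):

#include <uv.h>
#include <stdlib.h>

int main(void) {
  uv_loop_t* loop = malloc(sizeof(*loop));  // caller allocates, instead of uv_loop_new()
  if (loop == NULL)
    return 1;
  uv_loop_init(loop);                       // prepares the loop in place
  uv_run(loop, UV_RUN_DEFAULT);             // returns once no active handles or requests remain
  uv_loop_close(loop);                      // releases loop resources, instead of uv_loop_delete()
  free(loop);                               // caller frees the memory
  return 0;
}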
*/ UV_EXTERN void uv_loop_delete(uv_loop_t*); UV_EXTERN size_t uv_loop_size(void); UV_EXTERN int uv_loop_alive(const uv_loop_t* loop); UV_EXTERN int uv_loop_configure(uv_loop_t* loop, uv_loop_option option, ...); UV_EXTERN int uv_loop_fork(uv_loop_t* loop); UV_EXTERN int uv_run(uv_loop_t*, uv_run_mode mode); UV_EXTERN void uv_stop(uv_loop_t*); UV_EXTERN void uv_ref(uv_handle_t*); UV_EXTERN void uv_unref(uv_handle_t*); UV_EXTERN int uv_has_ref(const uv_handle_t*); UV_EXTERN void uv_update_time(uv_loop_t*); UV_EXTERN uint64_t uv_now(const uv_loop_t*); UV_EXTERN int uv_backend_fd(const uv_loop_t*); UV_EXTERN int uv_backend_timeout(const uv_loop_t*); typedef void (*uv_alloc_cb)(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf); typedef void (*uv_read_cb)(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf); typedef void (*uv_write_cb)(uv_write_t* req, int status); typedef void (*uv_connect_cb)(uv_connect_t* req, int status); typedef void (*uv_shutdown_cb)(uv_shutdown_t* req, int status); typedef void (*uv_connection_cb)(uv_stream_t* server, int status); typedef void (*uv_close_cb)(uv_handle_t* handle); typedef void (*uv_poll_cb)(uv_poll_t* handle, int status, int events); typedef void (*uv_timer_cb)(uv_timer_t* handle); typedef void (*uv_async_cb)(uv_async_t* handle); typedef void (*uv_prepare_cb)(uv_prepare_t* handle); typedef void (*uv_check_cb)(uv_check_t* handle); typedef void (*uv_idle_cb)(uv_idle_t* handle); typedef void (*uv_exit_cb)(uv_process_t*, int64_t exit_status, int term_signal); typedef void (*uv_walk_cb)(uv_handle_t* handle, void* arg); typedef void (*uv_fs_cb)(uv_fs_t* req); typedef void (*uv_work_cb)(uv_work_t* req); typedef void (*uv_after_work_cb)(uv_work_t* req, int status); typedef void (*uv_getaddrinfo_cb)(uv_getaddrinfo_t* req, int status, struct addrinfo* res); typedef void (*uv_getnameinfo_cb)(uv_getnameinfo_t* req, int status, const char* hostname, const char* service); typedef void (*uv_random_cb)(uv_random_t* req, int status, void* buf, size_t buflen); typedef struct { long tv_sec; long tv_nsec; } uv_timespec_t; typedef struct { uint64_t st_dev; uint64_t st_mode; uint64_t st_nlink; uint64_t st_uid; uint64_t st_gid; uint64_t st_rdev; uint64_t st_ino; uint64_t st_size; uint64_t st_blksize; uint64_t st_blocks; uint64_t st_flags; uint64_t st_gen; uv_timespec_t st_atim; uv_timespec_t st_mtim; uv_timespec_t st_ctim; uv_timespec_t st_birthtim; } uv_stat_t; typedef void (*uv_fs_event_cb)(uv_fs_event_t* handle, const char* filename, int events, int status); typedef void (*uv_fs_poll_cb)(uv_fs_poll_t* handle, int status, const uv_stat_t* prev, const uv_stat_t* curr); typedef void (*uv_signal_cb)(uv_signal_t* handle, int signum); typedef enum { UV_LEAVE_GROUP = 0, UV_JOIN_GROUP } uv_membership; UV_EXTERN int uv_translate_sys_error(int sys_errno); UV_EXTERN const char* uv_strerror(int err); UV_EXTERN char* uv_strerror_r(int err, char* buf, size_t buflen); UV_EXTERN const char* uv_err_name(int err); UV_EXTERN char* uv_err_name_r(int err, char* buf, size_t buflen); #define UV_REQ_FIELDS \ /* public */ \ void* data; \ /* read-only */ \ uv_req_type type; \ /* private */ \ void* reserved[6]; \ UV_REQ_PRIVATE_FIELDS \ /* Abstract base class of all requests. */ struct uv_req_s { UV_REQ_FIELDS }; /* Platform-specific request types. 
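Illustrative aside, a hedged sketch rather than upstream documentation: whatever the concrete request type, calls that take a request return 0 or a negative UV_E* code immediately, and the callback later receives the final status; uv_strerror() and uv_err_name() above format either one. For example, with the shutdown request declared just below (helper name is hypothetical):

#include <uv.h>
#include <stdio.h>
#include <stdlib.h>

static void on_shutdown(uv_shutdown_t* req, int status) {
  if (status < 0)   // final result delivered to the callback
    fprintf(stderr, "shutdown: %s (%s)\n", uv_err_name(status), uv_strerror(status));
  free(req);
}

static void close_writing_side(uv_stream_t* stream) {   // hypothetical helper
  uv_shutdown_t* req = malloc(sizeof(*req));
  int rc = uv_shutdown(req, stream, on_shutdown);        // immediate result
  if (rc < 0) {
    fprintf(stderr, "uv_shutdown: %s\n", uv_strerror(rc));
    free(req);
  }
}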
*/ UV_PRIVATE_REQ_TYPES UV_EXTERN int uv_shutdown(uv_shutdown_t* req, uv_stream_t* handle, uv_shutdown_cb cb); struct uv_shutdown_s { UV_REQ_FIELDS uv_stream_t* handle; uv_shutdown_cb cb; UV_SHUTDOWN_PRIVATE_FIELDS }; #define UV_HANDLE_FIELDS \ /* public */ \ void* data; \ /* read-only */ \ uv_loop_t* loop; \ uv_handle_type type; \ /* private */ \ uv_close_cb close_cb; \ void* handle_queue[2]; \ union { \ int fd; \ void* reserved[4]; \ } u; \ UV_HANDLE_PRIVATE_FIELDS \ /* The abstract base class of all handles. */ struct uv_handle_s { UV_HANDLE_FIELDS }; UV_EXTERN size_t uv_handle_size(uv_handle_type type); UV_EXTERN uv_handle_type uv_handle_get_type(const uv_handle_t* handle); UV_EXTERN const char* uv_handle_type_name(uv_handle_type type); UV_EXTERN void* uv_handle_get_data(const uv_handle_t* handle); UV_EXTERN uv_loop_t* uv_handle_get_loop(const uv_handle_t* handle); UV_EXTERN void uv_handle_set_data(uv_handle_t* handle, void* data); UV_EXTERN size_t uv_req_size(uv_req_type type); UV_EXTERN void* uv_req_get_data(const uv_req_t* req); UV_EXTERN void uv_req_set_data(uv_req_t* req, void* data); UV_EXTERN uv_req_type uv_req_get_type(const uv_req_t* req); UV_EXTERN const char* uv_req_type_name(uv_req_type type); UV_EXTERN int uv_is_active(const uv_handle_t* handle); UV_EXTERN void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg); /* Helpers for ad hoc debugging, no API/ABI stability guaranteed. */ UV_EXTERN void uv_print_all_handles(uv_loop_t* loop, FILE* stream); UV_EXTERN void uv_print_active_handles(uv_loop_t* loop, FILE* stream); UV_EXTERN void uv_close(uv_handle_t* handle, uv_close_cb close_cb); UV_EXTERN int uv_send_buffer_size(uv_handle_t* handle, int* value); UV_EXTERN int uv_recv_buffer_size(uv_handle_t* handle, int* value); UV_EXTERN int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd); UV_EXTERN uv_buf_t uv_buf_init(char* base, unsigned int len); UV_EXTERN int uv_pipe(uv_file fds[2], int read_flags, int write_flags); UV_EXTERN int uv_socketpair(int type, int protocol, uv_os_sock_t socket_vector[2], int flags0, int flags1); #define UV_STREAM_FIELDS \ /* number of bytes queued for writing */ \ size_t write_queue_size; \ uv_alloc_cb alloc_cb; \ uv_read_cb read_cb; \ /* private */ \ UV_STREAM_PRIVATE_FIELDS /* * uv_stream_t is a subclass of uv_handle_t. * * uv_stream is an abstract class. * * uv_stream_t is the parent class of uv_tcp_t, uv_pipe_t and uv_tty_t. */ struct uv_stream_s { UV_HANDLE_FIELDS UV_STREAM_FIELDS }; UV_EXTERN size_t uv_stream_get_write_queue_size(const uv_stream_t* stream); UV_EXTERN int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb); UV_EXTERN int uv_accept(uv_stream_t* server, uv_stream_t* client); UV_EXTERN int uv_read_start(uv_stream_t*, uv_alloc_cb alloc_cb, uv_read_cb read_cb); UV_EXTERN int uv_read_stop(uv_stream_t*); UV_EXTERN int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb); UV_EXTERN int uv_write2(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb); UV_EXTERN int uv_try_write(uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs); UV_EXTERN int uv_try_write2(uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle); /* uv_write_t is a subclass of uv_req_t. */ struct uv_write_s { UV_REQ_FIELDS uv_write_cb cb; uv_stream_t* send_handle; /* TODO: make private and unix-only in v2.x. 
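Illustrative sketch (a hedged example, not upstream libuv documentation): a typical single-buffer write through this request type, assuming an already-connected stream; the helper name is hypothetical:

#include <uv.h>
#include <stdlib.h>

static void on_write(uv_write_t* req, int status) {
  // status is 0 on success or a negative UV_E* code
  free(req);
}

static void send_greeting(uv_stream_t* stream) {   // hypothetical helper
  static char message[] = "hello\n";               // buffer must stay valid until on_write runs
  uv_buf_t buf = uv_buf_init(message, sizeof(message) - 1);
  uv_write_t* req = malloc(sizeof(*req));
  uv_write(req, stream, &buf, 1, on_write);        // one buffer, callback fires on completion
}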
*/ uv_stream_t* handle; UV_WRITE_PRIVATE_FIELDS }; UV_EXTERN int uv_is_readable(const uv_stream_t* handle); UV_EXTERN int uv_is_writable(const uv_stream_t* handle); UV_EXTERN int uv_stream_set_blocking(uv_stream_t* handle, int blocking); UV_EXTERN int uv_is_closing(const uv_handle_t* handle); /* * uv_tcp_t is a subclass of uv_stream_t. * * Represents a TCP stream or TCP server. */ struct uv_tcp_s { UV_HANDLE_FIELDS UV_STREAM_FIELDS UV_TCP_PRIVATE_FIELDS }; UV_EXTERN int uv_tcp_init(uv_loop_t*, uv_tcp_t* handle); UV_EXTERN int uv_tcp_init_ex(uv_loop_t*, uv_tcp_t* handle, unsigned int flags); UV_EXTERN int uv_tcp_open(uv_tcp_t* handle, uv_os_sock_t sock); UV_EXTERN int uv_tcp_nodelay(uv_tcp_t* handle, int enable); UV_EXTERN int uv_tcp_keepalive(uv_tcp_t* handle, int enable, unsigned int delay); UV_EXTERN int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable); enum uv_tcp_flags { /* Used with uv_tcp_bind, when an IPv6 address is used. */ UV_TCP_IPV6ONLY = 1 }; UV_EXTERN int uv_tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int flags); UV_EXTERN int uv_tcp_getsockname(const uv_tcp_t* handle, struct sockaddr* name, int* namelen); UV_EXTERN int uv_tcp_getpeername(const uv_tcp_t* handle, struct sockaddr* name, int* namelen); UV_EXTERN int uv_tcp_close_reset(uv_tcp_t* handle, uv_close_cb close_cb); UV_EXTERN int uv_tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, uv_connect_cb cb); /* uv_connect_t is a subclass of uv_req_t. */ struct uv_connect_s { UV_REQ_FIELDS uv_connect_cb cb; uv_stream_t* handle; UV_CONNECT_PRIVATE_FIELDS }; /* * UDP support. */ enum uv_udp_flags { /* Disables dual stack mode. */ UV_UDP_IPV6ONLY = 1, /* * Indicates message was truncated because read buffer was too small. The * remainder was discarded by the OS. Used in uv_udp_recv_cb. */ UV_UDP_PARTIAL = 2, /* * Indicates if SO_REUSEADDR will be set when binding the handle. * This sets the SO_REUSEPORT socket flag on the BSDs and OS X. On other * Unix platforms, it sets the SO_REUSEADDR flag. What that means is that * multiple threads or processes can bind to the same address without error * (provided they all set the flag) but only the last one to bind will receive * any traffic, in effect "stealing" the port from the previous listener. */ UV_UDP_REUSEADDR = 4, /* * Indicates that the message was received by recvmmsg, so the buffer provided * must not be freed by the recv_cb callback. */ UV_UDP_MMSG_CHUNK = 8, /* * Indicates that the buffer provided has been fully utilized by recvmmsg and * that it should now be freed by the recv_cb callback. When this flag is set * in uv_udp_recv_cb, nread will always be 0 and addr will always be NULL. */ UV_UDP_MMSG_FREE = 16, /* * Indicates if IP_RECVERR/IPV6_RECVERR will be set when binding the handle. * This sets IP_RECVERR for IPv4 and IPV6_RECVERR for IPv6 UDP sockets on * Linux. This stops the Linux kernel from suppressing some ICMP error * messages and enables full ICMP error reporting for faster failover. * This flag is no-op on platforms other than Linux. */ UV_UDP_LINUX_RECVERR = 32, /* * Indicates that recvmmsg should be used, if available. */ UV_UDP_RECVMMSG = 256 }; typedef void (*uv_udp_send_cb)(uv_udp_send_t* req, int status); typedef void (*uv_udp_recv_cb)(uv_udp_t* handle, ssize_t nread, const uv_buf_t* buf, const struct sockaddr* addr, unsigned flags); /* uv_udp_t is a subclass of uv_handle_t. */ struct uv_udp_s { UV_HANDLE_FIELDS /* read-only */ /* * Number of bytes queued for sending. 
This field strictly shows how much * information is currently queued. */ size_t send_queue_size; /* * Number of send requests currently in the queue awaiting to be processed. */ size_t send_queue_count; UV_UDP_PRIVATE_FIELDS }; /* uv_udp_send_t is a subclass of uv_req_t. */ struct uv_udp_send_s { UV_REQ_FIELDS uv_udp_t* handle; uv_udp_send_cb cb; UV_UDP_SEND_PRIVATE_FIELDS }; UV_EXTERN int uv_udp_init(uv_loop_t*, uv_udp_t* handle); UV_EXTERN int uv_udp_init_ex(uv_loop_t*, uv_udp_t* handle, unsigned int flags); UV_EXTERN int uv_udp_open(uv_udp_t* handle, uv_os_sock_t sock); UV_EXTERN int uv_udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int flags); UV_EXTERN int uv_udp_connect(uv_udp_t* handle, const struct sockaddr* addr); UV_EXTERN int uv_udp_getpeername(const uv_udp_t* handle, struct sockaddr* name, int* namelen); UV_EXTERN int uv_udp_getsockname(const uv_udp_t* handle, struct sockaddr* name, int* namelen); UV_EXTERN int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership); UV_EXTERN int uv_udp_set_source_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, const char* source_addr, uv_membership membership); UV_EXTERN int uv_udp_set_multicast_loop(uv_udp_t* handle, int on); UV_EXTERN int uv_udp_set_multicast_ttl(uv_udp_t* handle, int ttl); UV_EXTERN int uv_udp_set_multicast_interface(uv_udp_t* handle, const char* interface_addr); UV_EXTERN int uv_udp_set_broadcast(uv_udp_t* handle, int on); UV_EXTERN int uv_udp_set_ttl(uv_udp_t* handle, int ttl); UV_EXTERN int uv_udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, uv_udp_send_cb send_cb); UV_EXTERN int uv_udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr); UV_EXTERN int uv_udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb); UV_EXTERN int uv_udp_using_recvmmsg(const uv_udp_t* handle); UV_EXTERN int uv_udp_recv_stop(uv_udp_t* handle); UV_EXTERN size_t uv_udp_get_send_queue_size(const uv_udp_t* handle); UV_EXTERN size_t uv_udp_get_send_queue_count(const uv_udp_t* handle); /* * uv_tty_t is a subclass of uv_stream_t. * * Representing a stream for the console. */ struct uv_tty_s { UV_HANDLE_FIELDS UV_STREAM_FIELDS UV_TTY_PRIVATE_FIELDS }; typedef enum { /* Initial/normal terminal mode */ UV_TTY_MODE_NORMAL, /* Raw input mode (On Windows, ENABLE_WINDOW_INPUT is also enabled) */ UV_TTY_MODE_RAW, /* Binary-safe I/O mode for IPC (Unix-only) */ UV_TTY_MODE_IO } uv_tty_mode_t; typedef enum { /* * The console supports handling of virtual terminal sequences * (Windows10 new console, ConEmu) */ UV_TTY_SUPPORTED, /* The console cannot process the virtual terminal sequence. (Legacy * console) */ UV_TTY_UNSUPPORTED } uv_tty_vtermstate_t; UV_EXTERN int uv_tty_init(uv_loop_t*, uv_tty_t*, uv_file fd, int readable); UV_EXTERN int uv_tty_set_mode(uv_tty_t*, uv_tty_mode_t mode); UV_EXTERN int uv_tty_reset_mode(void); UV_EXTERN int uv_tty_get_winsize(uv_tty_t*, int* width, int* height); UV_EXTERN void uv_tty_set_vterm_state(uv_tty_vtermstate_t state); UV_EXTERN int uv_tty_get_vterm_state(uv_tty_vtermstate_t* state); #ifdef __cplusplus extern "C++" { inline int uv_tty_set_mode(uv_tty_t* handle, int mode) { return uv_tty_set_mode(handle, static_cast(mode)); } } #endif UV_EXTERN uv_handle_type uv_guess_handle(uv_file file); /* * uv_pipe_t is a subclass of uv_stream_t. 
* * Representing a pipe stream or pipe server. On Windows this is a Named * Pipe. On Unix this is a Unix domain socket. */ struct uv_pipe_s { UV_HANDLE_FIELDS UV_STREAM_FIELDS int ipc; /* non-zero if this pipe is used for passing handles */ UV_PIPE_PRIVATE_FIELDS }; UV_EXTERN int uv_pipe_init(uv_loop_t*, uv_pipe_t* handle, int ipc); UV_EXTERN int uv_pipe_open(uv_pipe_t*, uv_file file); UV_EXTERN int uv_pipe_bind(uv_pipe_t* handle, const char* name); UV_EXTERN void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, const char* name, uv_connect_cb cb); UV_EXTERN int uv_pipe_getsockname(const uv_pipe_t* handle, char* buffer, size_t* size); UV_EXTERN int uv_pipe_getpeername(const uv_pipe_t* handle, char* buffer, size_t* size); UV_EXTERN void uv_pipe_pending_instances(uv_pipe_t* handle, int count); UV_EXTERN int uv_pipe_pending_count(uv_pipe_t* handle); UV_EXTERN uv_handle_type uv_pipe_pending_type(uv_pipe_t* handle); UV_EXTERN int uv_pipe_chmod(uv_pipe_t* handle, int flags); struct uv_poll_s { UV_HANDLE_FIELDS uv_poll_cb poll_cb; UV_POLL_PRIVATE_FIELDS }; enum uv_poll_event { UV_READABLE = 1, UV_WRITABLE = 2, UV_DISCONNECT = 4, UV_PRIORITIZED = 8 }; UV_EXTERN int uv_poll_init(uv_loop_t* loop, uv_poll_t* handle, int fd); UV_EXTERN int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, uv_os_sock_t socket); UV_EXTERN int uv_poll_start(uv_poll_t* handle, int events, uv_poll_cb cb); UV_EXTERN int uv_poll_stop(uv_poll_t* handle); struct uv_prepare_s { UV_HANDLE_FIELDS UV_PREPARE_PRIVATE_FIELDS }; UV_EXTERN int uv_prepare_init(uv_loop_t*, uv_prepare_t* prepare); UV_EXTERN int uv_prepare_start(uv_prepare_t* prepare, uv_prepare_cb cb); UV_EXTERN int uv_prepare_stop(uv_prepare_t* prepare); struct uv_check_s { UV_HANDLE_FIELDS UV_CHECK_PRIVATE_FIELDS }; UV_EXTERN int uv_check_init(uv_loop_t*, uv_check_t* check); UV_EXTERN int uv_check_start(uv_check_t* check, uv_check_cb cb); UV_EXTERN int uv_check_stop(uv_check_t* check); struct uv_idle_s { UV_HANDLE_FIELDS UV_IDLE_PRIVATE_FIELDS }; UV_EXTERN int uv_idle_init(uv_loop_t*, uv_idle_t* idle); UV_EXTERN int uv_idle_start(uv_idle_t* idle, uv_idle_cb cb); UV_EXTERN int uv_idle_stop(uv_idle_t* idle); struct uv_async_s { UV_HANDLE_FIELDS UV_ASYNC_PRIVATE_FIELDS }; UV_EXTERN int uv_async_init(uv_loop_t*, uv_async_t* async, uv_async_cb async_cb); UV_EXTERN int uv_async_send(uv_async_t* async); /* * uv_timer_t is a subclass of uv_handle_t. * * Used to get woken up at a specified time in the future. */ struct uv_timer_s { UV_HANDLE_FIELDS UV_TIMER_PRIVATE_FIELDS }; UV_EXTERN int uv_timer_init(uv_loop_t*, uv_timer_t* handle); UV_EXTERN int uv_timer_start(uv_timer_t* handle, uv_timer_cb cb, uint64_t timeout, uint64_t repeat); UV_EXTERN int uv_timer_stop(uv_timer_t* handle); UV_EXTERN int uv_timer_again(uv_timer_t* handle); UV_EXTERN void uv_timer_set_repeat(uv_timer_t* handle, uint64_t repeat); UV_EXTERN uint64_t uv_timer_get_repeat(const uv_timer_t* handle); UV_EXTERN uint64_t uv_timer_get_due_in(const uv_timer_t* handle); /* * uv_getaddrinfo_t is a subclass of uv_req_t. * * Request object for uv_getaddrinfo. */ struct uv_getaddrinfo_s { UV_REQ_FIELDS /* read-only */ uv_loop_t* loop; /* struct addrinfo* addrinfo is marked as private, but it really isn't. 
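Illustrative sketch (a hedged example, not upstream libuv documentation): a typical asynchronous lookup drives this request and frees the result list from the callback; the host, port, and call site below are hypothetical:

#include <uv.h>
#include <stdio.h>

static void on_resolved(uv_getaddrinfo_t* req, int status, struct addrinfo* res) {
  if (status < 0) {
    fprintf(stderr, "getaddrinfo: %s\n", uv_strerror(status));   // negative UV_E* code
    return;
  }
  // res->ai_addr now holds the first resolved address; bind or connect with it as needed.
  uv_freeaddrinfo(res);   // the callback owns the result list and must free it
}

// Hypothetical call site, assuming `loop` is an initialized uv_loop_t*:
//   static uv_getaddrinfo_t resolver;
//   uv_getaddrinfo(loop, &resolver, on_resolved, "example.org", "80", NULL);
//   uv_run(loop, UV_RUN_DEFAULT);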
*/ UV_GETADDRINFO_PRIVATE_FIELDS }; UV_EXTERN int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb getaddrinfo_cb, const char* node, const char* service, const struct addrinfo* hints); UV_EXTERN void uv_freeaddrinfo(struct addrinfo* ai); /* * uv_getnameinfo_t is a subclass of uv_req_t. * * Request object for uv_getnameinfo. */ struct uv_getnameinfo_s { UV_REQ_FIELDS /* read-only */ uv_loop_t* loop; /* host and service are marked as private, but they really aren't. */ UV_GETNAMEINFO_PRIVATE_FIELDS }; UV_EXTERN int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, const struct sockaddr* addr, int flags); /* uv_spawn() options. */ typedef enum { UV_IGNORE = 0x00, UV_CREATE_PIPE = 0x01, UV_INHERIT_FD = 0x02, UV_INHERIT_STREAM = 0x04, /* * When UV_CREATE_PIPE is specified, UV_READABLE_PIPE and UV_WRITABLE_PIPE * determine the direction of flow, from the child process' perspective. Both * flags may be specified to create a duplex data stream. */ UV_READABLE_PIPE = 0x10, UV_WRITABLE_PIPE = 0x20, /* * When UV_CREATE_PIPE is specified, specifying UV_NONBLOCK_PIPE opens the * handle in non-blocking mode in the child. This may cause loss of data, * if the child is not designed to handle to encounter this mode, * but can also be significantly more efficient. */ UV_NONBLOCK_PIPE = 0x40, UV_OVERLAPPED_PIPE = 0x40 /* old name, for compatibility */ } uv_stdio_flags; typedef struct uv_stdio_container_s { uv_stdio_flags flags; union { uv_stream_t* stream; int fd; } data; } uv_stdio_container_t; typedef struct uv_process_options_s { uv_exit_cb exit_cb; /* Called after the process exits. */ const char* file; /* Path to program to execute. */ /* * Command line arguments. args[0] should be the path to the program. On * Windows this uses CreateProcess which concatenates the arguments into a * string this can cause some strange errors. See the note at * windows_verbatim_arguments. */ char** args; /* * This will be set as the environ variable in the subprocess. If this is * NULL then the parents environ will be used. */ char** env; /* * If non-null this represents a directory the subprocess should execute * in. Stands for current working directory. */ const char* cwd; /* * Various flags that control how uv_spawn() behaves. See the definition of * `enum uv_process_flags` below. */ unsigned int flags; /* * The `stdio` field points to an array of uv_stdio_container_t structs that * describe the file descriptors that will be made available to the child * process. The convention is that stdio[0] points to stdin, fd 1 is used for * stdout, and fd 2 is stderr. * * Note that on windows file descriptors greater than 2 are available to the * child process only if the child processes uses the MSVCRT runtime. */ int stdio_count; uv_stdio_container_t* stdio; /* * Libuv can change the child process' user/group id. This happens only when * the appropriate bits are set in the flags fields. This is not supported on * windows; uv_spawn() will fail and set the error to UV_ENOTSUP. */ uv_uid_t uid; uv_gid_t gid; } uv_process_options_t; /* * These are the flags that can be used for the uv_process_options.flags field. */ enum uv_process_flags { /* * Set the child process' user id. The user id is supplied in the `uid` field * of the options struct. This does not work on windows; setting this flag * will cause uv_spawn() to fail. */ UV_PROCESS_SETUID = (1 << 0), /* * Set the child process' group id. The user id is supplied in the `gid` * field of the options struct. 
This does not work on windows; setting this * flag will cause uv_spawn() to fail. */ UV_PROCESS_SETGID = (1 << 1), /* * Do not wrap any arguments in quotes, or perform any other escaping, when * converting the argument list into a command line string. This option is * only meaningful on Windows systems. On Unix it is silently ignored. */ UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS = (1 << 2), /* * Spawn the child process in a detached state - this will make it a process * group leader, and will effectively enable the child to keep running after * the parent exits. Note that the child process will still keep the * parent's event loop alive unless the parent process calls uv_unref() on * the child's process handle. */ UV_PROCESS_DETACHED = (1 << 3), /* * Hide the subprocess window that would normally be created. This option is * only meaningful on Windows systems. On Unix it is silently ignored. */ UV_PROCESS_WINDOWS_HIDE = (1 << 4), /* * Hide the subprocess console window that would normally be created. This * option is only meaningful on Windows systems. On Unix it is silently * ignored. */ UV_PROCESS_WINDOWS_HIDE_CONSOLE = (1 << 5), /* * Hide the subprocess GUI window that would normally be created. This * option is only meaningful on Windows systems. On Unix it is silently * ignored. */ UV_PROCESS_WINDOWS_HIDE_GUI = (1 << 6) }; /* * uv_process_t is a subclass of uv_handle_t. */ struct uv_process_s { UV_HANDLE_FIELDS uv_exit_cb exit_cb; int pid; UV_PROCESS_PRIVATE_FIELDS }; UV_EXTERN int uv_spawn(uv_loop_t* loop, uv_process_t* handle, const uv_process_options_t* options); UV_EXTERN int uv_process_kill(uv_process_t*, int signum); UV_EXTERN int uv_kill(int pid, int signum); UV_EXTERN uv_pid_t uv_process_get_pid(const uv_process_t*); /* * uv_work_t is a subclass of uv_req_t. */ struct uv_work_s { UV_REQ_FIELDS uv_loop_t* loop; uv_work_cb work_cb; uv_after_work_cb after_work_cb; UV_WORK_PRIVATE_FIELDS }; UV_EXTERN int uv_queue_work(uv_loop_t* loop, uv_work_t* req, uv_work_cb work_cb, uv_after_work_cb after_work_cb); UV_EXTERN int uv_cancel(uv_req_t* req); struct uv_cpu_times_s { uint64_t user; /* milliseconds */ uint64_t nice; /* milliseconds */ uint64_t sys; /* milliseconds */ uint64_t idle; /* milliseconds */ uint64_t irq; /* milliseconds */ }; struct uv_cpu_info_s { char* model; int speed; struct uv_cpu_times_s cpu_times; }; struct uv_interface_address_s { char* name; char phys_addr[6]; int is_internal; union { struct sockaddr_in address4; struct sockaddr_in6 address6; } address; union { struct sockaddr_in netmask4; struct sockaddr_in6 netmask6; } netmask; }; struct uv_passwd_s { char* username; unsigned long uid; unsigned long gid; char* shell; char* homedir; }; struct uv_utsname_s { char sysname[256]; char release[256]; char version[256]; char machine[256]; /* This struct does not contain the nodename and domainname fields present in the utsname type. domainname is a GNU extension. Both fields are referred to as meaningless in the docs. 
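 *
 * Editor's note (illustrative sketch, not part of upstream libuv): this
 * struct is filled in by uv_os_uname(), declared further below; printing
 * the fields assumes <stdio.h>.
 *
 *     uv_utsname_t info;
 *     if (uv_os_uname(&info) == 0)
 *       printf("%s %s (%s)\n", info.sysname, info.release, info.machine);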
*/ }; struct uv_statfs_s { uint64_t f_type; uint64_t f_bsize; uint64_t f_blocks; uint64_t f_bfree; uint64_t f_bavail; uint64_t f_files; uint64_t f_ffree; uint64_t f_spare[4]; }; typedef enum { UV_DIRENT_UNKNOWN, UV_DIRENT_FILE, UV_DIRENT_DIR, UV_DIRENT_LINK, UV_DIRENT_FIFO, UV_DIRENT_SOCKET, UV_DIRENT_CHAR, UV_DIRENT_BLOCK } uv_dirent_type_t; struct uv_dirent_s { const char* name; uv_dirent_type_t type; }; UV_EXTERN char** uv_setup_args(int argc, char** argv); UV_EXTERN int uv_get_process_title(char* buffer, size_t size); UV_EXTERN int uv_set_process_title(const char* title); UV_EXTERN int uv_resident_set_memory(size_t* rss); UV_EXTERN int uv_uptime(double* uptime); UV_EXTERN uv_os_fd_t uv_get_osfhandle(int fd); UV_EXTERN int uv_open_osfhandle(uv_os_fd_t os_fd); typedef struct { long tv_sec; long tv_usec; } uv_timeval_t; typedef struct { int64_t tv_sec; int32_t tv_usec; } uv_timeval64_t; typedef struct { uv_timeval_t ru_utime; /* user CPU time used */ uv_timeval_t ru_stime; /* system CPU time used */ uint64_t ru_maxrss; /* maximum resident set size */ uint64_t ru_ixrss; /* integral shared memory size */ uint64_t ru_idrss; /* integral unshared data size */ uint64_t ru_isrss; /* integral unshared stack size */ uint64_t ru_minflt; /* page reclaims (soft page faults) */ uint64_t ru_majflt; /* page faults (hard page faults) */ uint64_t ru_nswap; /* swaps */ uint64_t ru_inblock; /* block input operations */ uint64_t ru_oublock; /* block output operations */ uint64_t ru_msgsnd; /* IPC messages sent */ uint64_t ru_msgrcv; /* IPC messages received */ uint64_t ru_nsignals; /* signals received */ uint64_t ru_nvcsw; /* voluntary context switches */ uint64_t ru_nivcsw; /* involuntary context switches */ } uv_rusage_t; UV_EXTERN int uv_getrusage(uv_rusage_t* rusage); UV_EXTERN int uv_os_homedir(char* buffer, size_t* size); UV_EXTERN int uv_os_tmpdir(char* buffer, size_t* size); UV_EXTERN int uv_os_get_passwd(uv_passwd_t* pwd); UV_EXTERN void uv_os_free_passwd(uv_passwd_t* pwd); UV_EXTERN uv_pid_t uv_os_getpid(void); UV_EXTERN uv_pid_t uv_os_getppid(void); #if defined(__PASE__) /* On IBM i PASE, the highest process priority is -10 */ # define UV_PRIORITY_LOW 39 /* RUNPTY(99) */ # define UV_PRIORITY_BELOW_NORMAL 15 /* RUNPTY(50) */ # define UV_PRIORITY_NORMAL 0 /* RUNPTY(20) */ # define UV_PRIORITY_ABOVE_NORMAL -4 /* RUNTY(12) */ # define UV_PRIORITY_HIGH -7 /* RUNPTY(6) */ # define UV_PRIORITY_HIGHEST -10 /* RUNPTY(1) */ #else # define UV_PRIORITY_LOW 19 # define UV_PRIORITY_BELOW_NORMAL 10 # define UV_PRIORITY_NORMAL 0 # define UV_PRIORITY_ABOVE_NORMAL -7 # define UV_PRIORITY_HIGH -14 # define UV_PRIORITY_HIGHEST -20 #endif UV_EXTERN int uv_os_getpriority(uv_pid_t pid, int* priority); UV_EXTERN int uv_os_setpriority(uv_pid_t pid, int priority); UV_EXTERN unsigned int uv_available_parallelism(void); UV_EXTERN int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count); UV_EXTERN void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count); UV_EXTERN int uv_interface_addresses(uv_interface_address_t** addresses, int* count); UV_EXTERN void uv_free_interface_addresses(uv_interface_address_t* addresses, int count); struct uv_env_item_s { char* name; char* value; }; UV_EXTERN int uv_os_environ(uv_env_item_t** envitems, int* count); UV_EXTERN void uv_os_free_environ(uv_env_item_t* envitems, int count); UV_EXTERN int uv_os_getenv(const char* name, char* buffer, size_t* size); UV_EXTERN int uv_os_setenv(const char* name, const char* value); UV_EXTERN int uv_os_unsetenv(const char* name); #ifdef MAXHOSTNAMELEN # define 
UV_MAXHOSTNAMESIZE (MAXHOSTNAMELEN + 1) #else /* Fallback for the maximum hostname size, including the null terminator. The Windows gethostname() documentation states that 256 bytes will always be large enough to hold the null-terminated hostname. */ # define UV_MAXHOSTNAMESIZE 256 #endif UV_EXTERN int uv_os_gethostname(char* buffer, size_t* size); UV_EXTERN int uv_os_uname(uv_utsname_t* buffer); UV_EXTERN uint64_t uv_metrics_idle_time(uv_loop_t* loop); typedef enum { UV_FS_UNKNOWN = -1, UV_FS_CUSTOM, UV_FS_OPEN, UV_FS_CLOSE, UV_FS_READ, UV_FS_WRITE, UV_FS_SENDFILE, UV_FS_STAT, UV_FS_LSTAT, UV_FS_FSTAT, UV_FS_FTRUNCATE, UV_FS_UTIME, UV_FS_FUTIME, UV_FS_ACCESS, UV_FS_CHMOD, UV_FS_FCHMOD, UV_FS_FSYNC, UV_FS_FDATASYNC, UV_FS_UNLINK, UV_FS_RMDIR, UV_FS_MKDIR, UV_FS_MKDTEMP, UV_FS_RENAME, UV_FS_SCANDIR, UV_FS_LINK, UV_FS_SYMLINK, UV_FS_READLINK, UV_FS_CHOWN, UV_FS_FCHOWN, UV_FS_REALPATH, UV_FS_COPYFILE, UV_FS_LCHOWN, UV_FS_OPENDIR, UV_FS_READDIR, UV_FS_CLOSEDIR, UV_FS_STATFS, UV_FS_MKSTEMP, UV_FS_LUTIME } uv_fs_type; struct uv_dir_s { uv_dirent_t* dirents; size_t nentries; void* reserved[4]; UV_DIR_PRIVATE_FIELDS }; /* uv_fs_t is a subclass of uv_req_t. */ struct uv_fs_s { UV_REQ_FIELDS uv_fs_type fs_type; uv_loop_t* loop; uv_fs_cb cb; ssize_t result; void* ptr; const char* path; uv_stat_t statbuf; /* Stores the result of uv_fs_stat() and uv_fs_fstat(). */ UV_FS_PRIVATE_FIELDS }; UV_EXTERN uv_fs_type uv_fs_get_type(const uv_fs_t*); UV_EXTERN ssize_t uv_fs_get_result(const uv_fs_t*); UV_EXTERN int uv_fs_get_system_error(const uv_fs_t*); UV_EXTERN void* uv_fs_get_ptr(const uv_fs_t*); UV_EXTERN const char* uv_fs_get_path(const uv_fs_t*); UV_EXTERN uv_stat_t* uv_fs_get_statbuf(uv_fs_t*); UV_EXTERN void uv_fs_req_cleanup(uv_fs_t* req); UV_EXTERN int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb); UV_EXTERN int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb); UV_EXTERN int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb); UV_EXTERN int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb); /* * This flag can be used with uv_fs_copyfile() to return an error if the * destination already exists. */ #define UV_FS_COPYFILE_EXCL 0x0001 /* * This flag can be used with uv_fs_copyfile() to attempt to create a reflink. * If copy-on-write is not supported, a fallback copy mechanism is used. */ #define UV_FS_COPYFILE_FICLONE 0x0002 /* * This flag can be used with uv_fs_copyfile() to attempt to create a reflink. * If copy-on-write is not supported, an error is returned. 
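 *
 * Editor's note (illustrative sketch, not part of upstream libuv): the
 * copyfile flags are OR-ed into the flags argument of uv_fs_copyfile();
 * with a NULL callback the request runs synchronously. The file names
 * below are placeholders.
 *
 *     uv_fs_t req;
 *     int r = uv_fs_copyfile(uv_default_loop(), &req, "src.txt", "dst.txt",
 *                            UV_FS_COPYFILE_EXCL | UV_FS_COPYFILE_FICLONE,
 *                            NULL);          // r < 0 is a UV_E* error code
 *     uv_fs_req_cleanup(&req);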
*/ #define UV_FS_COPYFILE_FICLONE_FORCE 0x0004 UV_EXTERN int uv_fs_copyfile(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb); UV_EXTERN int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb); UV_EXTERN int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb); UV_EXTERN int uv_fs_mkstemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb); UV_EXTERN int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb); UV_EXTERN int uv_fs_scandir_next(uv_fs_t* req, uv_dirent_t* ent); UV_EXTERN int uv_fs_opendir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb); UV_EXTERN int uv_fs_closedir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb); UV_EXTERN int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb); UV_EXTERN int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb); UV_EXTERN int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb); UV_EXTERN int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb); UV_EXTERN int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file, int64_t offset, uv_fs_cb cb); UV_EXTERN int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd, uv_file in_fd, int64_t in_offset, size_t length, uv_fs_cb cb); UV_EXTERN int uv_fs_access(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb); UV_EXTERN int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb); UV_EXTERN int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb); UV_EXTERN int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file file, double atime, double mtime, uv_fs_cb cb); UV_EXTERN int uv_fs_lutime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb); UV_EXTERN int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb); /* * This flag can be used with uv_fs_symlink() on Windows to specify whether * path argument points to a directory. */ #define UV_FS_SYMLINK_DIR 0x0001 /* * This flag can be used with uv_fs_symlink() on Windows to specify whether * the symlink is to be created using junction points. 
*/ #define UV_FS_SYMLINK_JUNCTION 0x0002 UV_EXTERN int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb); UV_EXTERN int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_realpath(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); UV_EXTERN int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, uv_file file, int mode, uv_fs_cb cb); UV_EXTERN int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb); UV_EXTERN int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb); UV_EXTERN int uv_fs_lchown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb); UV_EXTERN int uv_fs_statfs(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb); enum uv_fs_event { UV_RENAME = 1, UV_CHANGE = 2 }; struct uv_fs_event_s { UV_HANDLE_FIELDS /* private */ char* path; UV_FS_EVENT_PRIVATE_FIELDS }; /* * uv_fs_stat() based polling file watcher. */ struct uv_fs_poll_s { UV_HANDLE_FIELDS /* Private, don't touch. */ void* poll_ctx; }; UV_EXTERN int uv_fs_poll_init(uv_loop_t* loop, uv_fs_poll_t* handle); UV_EXTERN int uv_fs_poll_start(uv_fs_poll_t* handle, uv_fs_poll_cb poll_cb, const char* path, unsigned int interval); UV_EXTERN int uv_fs_poll_stop(uv_fs_poll_t* handle); UV_EXTERN int uv_fs_poll_getpath(uv_fs_poll_t* handle, char* buffer, size_t* size); struct uv_signal_s { UV_HANDLE_FIELDS uv_signal_cb signal_cb; int signum; UV_SIGNAL_PRIVATE_FIELDS }; UV_EXTERN int uv_signal_init(uv_loop_t* loop, uv_signal_t* handle); UV_EXTERN int uv_signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum); UV_EXTERN int uv_signal_start_oneshot(uv_signal_t* handle, uv_signal_cb signal_cb, int signum); UV_EXTERN int uv_signal_stop(uv_signal_t* handle); UV_EXTERN void uv_loadavg(double avg[3]); /* * Flags to be passed to uv_fs_event_start(). */ enum uv_fs_event_flags { /* * By default, if the fs event watcher is given a directory name, we will * watch for all events in that directory. This flags overrides this behavior * and makes fs_event report only changes to the directory entry itself. This * flag does not affect individual files watched. * This flag is currently not implemented yet on any backend. */ UV_FS_EVENT_WATCH_ENTRY = 1, /* * By default uv_fs_event will try to use a kernel interface such as inotify * or kqueue to detect events. This may not work on remote filesystems such * as NFS mounts. This flag makes fs_event fall back to calling stat() on a * regular interval. * This flag is currently not implemented yet on any backend. */ UV_FS_EVENT_STAT = 2, /* * By default, event watcher, when watching directory, is not registering * (is ignoring) changes in it's subdirectories. * This flag will override this behaviour on platforms that support it. 
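 *
 * Editor's note (illustrative sketch, not part of upstream libuv): a
 * watcher using these flags is typically set up as below; the callback
 * name and watched path are assumptions, and printf assumes <stdio.h>.
 *
 *     static void on_change(uv_fs_event_t* handle, const char* filename,
 *                           int events, int status) {
 *       if (status == 0 && (events & UV_CHANGE))
 *         printf("%s changed\n", filename ? filename : "?");
 *     }
 *
 *     uv_fs_event_t watcher;
 *     uv_fs_event_init(uv_default_loop(), &watcher);
 *     uv_fs_event_start(&watcher, on_change, "/some/dir",
 *                       UV_FS_EVENT_RECURSIVE);  // honoured only where supported
 *     uv_run(uv_default_loop(), UV_RUN_DEFAULT);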
*/ UV_FS_EVENT_RECURSIVE = 4 }; UV_EXTERN int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle); UV_EXTERN int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags); UV_EXTERN int uv_fs_event_stop(uv_fs_event_t* handle); UV_EXTERN int uv_fs_event_getpath(uv_fs_event_t* handle, char* buffer, size_t* size); UV_EXTERN int uv_ip4_addr(const char* ip, int port, struct sockaddr_in* addr); UV_EXTERN int uv_ip6_addr(const char* ip, int port, struct sockaddr_in6* addr); UV_EXTERN int uv_ip4_name(const struct sockaddr_in* src, char* dst, size_t size); UV_EXTERN int uv_ip6_name(const struct sockaddr_in6* src, char* dst, size_t size); UV_EXTERN int uv_ip_name(const struct sockaddr* src, char* dst, size_t size); UV_EXTERN int uv_inet_ntop(int af, const void* src, char* dst, size_t size); UV_EXTERN int uv_inet_pton(int af, const char* src, void* dst); struct uv_random_s { UV_REQ_FIELDS /* read-only */ uv_loop_t* loop; /* private */ int status; void* buf; size_t buflen; uv_random_cb cb; struct uv__work work_req; }; UV_EXTERN int uv_random(uv_loop_t* loop, uv_random_t* req, void *buf, size_t buflen, unsigned flags, /* For future extension; must be 0. */ uv_random_cb cb); #if defined(IF_NAMESIZE) # define UV_IF_NAMESIZE (IF_NAMESIZE + 1) #elif defined(IFNAMSIZ) # define UV_IF_NAMESIZE (IFNAMSIZ + 1) #else # define UV_IF_NAMESIZE (16 + 1) #endif UV_EXTERN int uv_if_indextoname(unsigned int ifindex, char* buffer, size_t* size); UV_EXTERN int uv_if_indextoiid(unsigned int ifindex, char* buffer, size_t* size); UV_EXTERN int uv_exepath(char* buffer, size_t* size); UV_EXTERN int uv_cwd(char* buffer, size_t* size); UV_EXTERN int uv_chdir(const char* dir); UV_EXTERN uint64_t uv_get_free_memory(void); UV_EXTERN uint64_t uv_get_total_memory(void); UV_EXTERN uint64_t uv_get_constrained_memory(void); UV_EXTERN uint64_t uv_hrtime(void); UV_EXTERN void uv_sleep(unsigned int msec); UV_EXTERN void uv_disable_stdio_inheritance(void); UV_EXTERN int uv_dlopen(const char* filename, uv_lib_t* lib); UV_EXTERN void uv_dlclose(uv_lib_t* lib); UV_EXTERN int uv_dlsym(uv_lib_t* lib, const char* name, void** ptr); UV_EXTERN const char* uv_dlerror(const uv_lib_t* lib); UV_EXTERN int uv_mutex_init(uv_mutex_t* handle); UV_EXTERN int uv_mutex_init_recursive(uv_mutex_t* handle); UV_EXTERN void uv_mutex_destroy(uv_mutex_t* handle); UV_EXTERN void uv_mutex_lock(uv_mutex_t* handle); UV_EXTERN int uv_mutex_trylock(uv_mutex_t* handle); UV_EXTERN void uv_mutex_unlock(uv_mutex_t* handle); UV_EXTERN int uv_rwlock_init(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_destroy(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_rdlock(uv_rwlock_t* rwlock); UV_EXTERN int uv_rwlock_tryrdlock(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_rdunlock(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_wrlock(uv_rwlock_t* rwlock); UV_EXTERN int uv_rwlock_trywrlock(uv_rwlock_t* rwlock); UV_EXTERN void uv_rwlock_wrunlock(uv_rwlock_t* rwlock); UV_EXTERN int uv_sem_init(uv_sem_t* sem, unsigned int value); UV_EXTERN void uv_sem_destroy(uv_sem_t* sem); UV_EXTERN void uv_sem_post(uv_sem_t* sem); UV_EXTERN void uv_sem_wait(uv_sem_t* sem); UV_EXTERN int uv_sem_trywait(uv_sem_t* sem); UV_EXTERN int uv_cond_init(uv_cond_t* cond); UV_EXTERN void uv_cond_destroy(uv_cond_t* cond); UV_EXTERN void uv_cond_signal(uv_cond_t* cond); UV_EXTERN void uv_cond_broadcast(uv_cond_t* cond); UV_EXTERN int uv_barrier_init(uv_barrier_t* barrier, unsigned int count); UV_EXTERN void uv_barrier_destroy(uv_barrier_t* barrier); UV_EXTERN int 
uv_barrier_wait(uv_barrier_t* barrier); UV_EXTERN void uv_cond_wait(uv_cond_t* cond, uv_mutex_t* mutex); UV_EXTERN int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, uint64_t timeout); UV_EXTERN void uv_once(uv_once_t* guard, void (*callback)(void)); UV_EXTERN int uv_key_create(uv_key_t* key); UV_EXTERN void uv_key_delete(uv_key_t* key); UV_EXTERN void* uv_key_get(uv_key_t* key); UV_EXTERN void uv_key_set(uv_key_t* key, void* value); UV_EXTERN int uv_gettimeofday(uv_timeval64_t* tv); typedef void (*uv_thread_cb)(void* arg); UV_EXTERN int uv_thread_create(uv_thread_t* tid, uv_thread_cb entry, void* arg); typedef enum { UV_THREAD_NO_FLAGS = 0x00, UV_THREAD_HAS_STACK_SIZE = 0x01 } uv_thread_create_flags; struct uv_thread_options_s { unsigned int flags; size_t stack_size; /* More fields may be added at any time. */ }; typedef struct uv_thread_options_s uv_thread_options_t; UV_EXTERN int uv_thread_create_ex(uv_thread_t* tid, const uv_thread_options_t* params, uv_thread_cb entry, void* arg); UV_EXTERN uv_thread_t uv_thread_self(void); UV_EXTERN int uv_thread_join(uv_thread_t *tid); UV_EXTERN int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2); /* The presence of these unions force similar struct layout. */ #define XX(_, name) uv_ ## name ## _t name; union uv_any_handle { UV_HANDLE_TYPE_MAP(XX) }; union uv_any_req { UV_REQ_TYPE_MAP(XX) }; #undef XX struct uv_loop_s { /* User data - use this for whatever. */ void* data; /* Loop reference counting. */ unsigned int active_handles; void* handle_queue[2]; union { void* unused; unsigned int count; } active_reqs; /* Internal storage for future extensions. */ void* internal_fields; /* Internal flag to signal loop stop. */ unsigned int stop_flag; UV_LOOP_PRIVATE_FIELDS }; UV_EXTERN void* uv_loop_get_data(const uv_loop_t*); UV_EXTERN void uv_loop_set_data(uv_loop_t*, void* data); /* Don't export the private CPP symbols. */ #undef UV_HANDLE_TYPE_PRIVATE #undef UV_REQ_TYPE_PRIVATE #undef UV_REQ_PRIVATE_FIELDS #undef UV_STREAM_PRIVATE_FIELDS #undef UV_TCP_PRIVATE_FIELDS #undef UV_PREPARE_PRIVATE_FIELDS #undef UV_CHECK_PRIVATE_FIELDS #undef UV_IDLE_PRIVATE_FIELDS #undef UV_ASYNC_PRIVATE_FIELDS #undef UV_TIMER_PRIVATE_FIELDS #undef UV_GETADDRINFO_PRIVATE_FIELDS #undef UV_GETNAMEINFO_PRIVATE_FIELDS #undef UV_FS_REQ_PRIVATE_FIELDS #undef UV_WORK_PRIVATE_FIELDS #undef UV_FS_EVENT_PRIVATE_FIELDS #undef UV_SIGNAL_PRIVATE_FIELDS #undef UV_LOOP_PRIVATE_FIELDS #undef UV_LOOP_PRIVATE_PLATFORM_FIELDS #undef UV__ERR #ifdef __cplusplus } #endif #endif /* UV_H */ gevent-24.11.1/deps/libuv/include/uv/000077500000000000000000000000001471441230600173035ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/include/uv/aix.h000066400000000000000000000031171471441230600202370ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_AIX_H #define UV_AIX_H #define UV_PLATFORM_LOOP_FIELDS \ int fs_fd; \ #define UV_PLATFORM_FS_EVENT_FIELDS \ uv__io_t event_watcher; \ char *dir_filename; \ #endif /* UV_AIX_H */ gevent-24.11.1/deps/libuv/include/uv/bsd.h000066400000000000000000000031511471441230600202240ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_BSD_H #define UV_BSD_H #define UV_PLATFORM_FS_EVENT_FIELDS \ uv__io_t event_watcher; \ #define UV_IO_PRIVATE_PLATFORM_FIELDS \ int rcount; \ int wcount; \ #define UV_HAVE_KQUEUE 1 #endif /* UV_BSD_H */ gevent-24.11.1/deps/libuv/include/uv/darwin.h000066400000000000000000000062151471441230600207440ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef UV_DARWIN_H #define UV_DARWIN_H #if defined(__APPLE__) && defined(__MACH__) # include # include # include # include # define UV_PLATFORM_SEM_T semaphore_t #endif #define UV_IO_PRIVATE_PLATFORM_FIELDS \ int rcount; \ int wcount; \ #define UV_PLATFORM_LOOP_FIELDS \ uv_thread_t cf_thread; \ void* _cf_reserved; \ void* cf_state; \ uv_mutex_t cf_mutex; \ uv_sem_t cf_sem; \ void* cf_signals[2]; \ #define UV_PLATFORM_FS_EVENT_FIELDS \ uv__io_t event_watcher; \ char* realpath; \ int realpath_len; \ int cf_flags; \ uv_async_t* cf_cb; \ void* cf_events[2]; \ void* cf_member[2]; \ int cf_error; \ uv_mutex_t cf_mutex; \ #define UV_STREAM_PRIVATE_PLATFORM_FIELDS \ void* select; \ #define UV_HAVE_KQUEUE 1 #endif /* UV_DARWIN_H */ gevent-24.11.1/deps/libuv/include/uv/errno.h000066400000000000000000000246231471441230600206100ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_ERRNO_H_ #define UV_ERRNO_H_ #include #if EDOM > 0 # define UV__ERR(x) (-(x)) #else # define UV__ERR(x) (x) #endif #define UV__EOF (-4095) #define UV__UNKNOWN (-4094) #define UV__EAI_ADDRFAMILY (-3000) #define UV__EAI_AGAIN (-3001) #define UV__EAI_BADFLAGS (-3002) #define UV__EAI_CANCELED (-3003) #define UV__EAI_FAIL (-3004) #define UV__EAI_FAMILY (-3005) #define UV__EAI_MEMORY (-3006) #define UV__EAI_NODATA (-3007) #define UV__EAI_NONAME (-3008) #define UV__EAI_OVERFLOW (-3009) #define UV__EAI_SERVICE (-3010) #define UV__EAI_SOCKTYPE (-3011) #define UV__EAI_BADHINTS (-3013) #define UV__EAI_PROTOCOL (-3014) /* Only map to the system errno on non-Windows platforms. It's apparently * a fairly common practice for Windows programmers to redefine errno codes. 
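 *
 * Editor's note (illustrative sketch, not part of upstream libuv): these
 * UV__E* constants back the negative error codes returned throughout the
 * public API; callers usually test for a negative return and format it
 * with uv_strerror()/uv_err_name() from uv.h (fprintf assumes <stdio.h>).
 *
 *     uv_fs_t req;
 *     int r = uv_fs_stat(uv_default_loop(), &req, "/no/such/file", NULL);
 *     if (r < 0)
 *       fprintf(stderr, "stat failed: %s (%s)\n",
 *               uv_strerror(r), uv_err_name(r));
 *     uv_fs_req_cleanup(&req);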
*/ #if defined(E2BIG) && !defined(_WIN32) # define UV__E2BIG UV__ERR(E2BIG) #else # define UV__E2BIG (-4093) #endif #if defined(EACCES) && !defined(_WIN32) # define UV__EACCES UV__ERR(EACCES) #else # define UV__EACCES (-4092) #endif #if defined(EADDRINUSE) && !defined(_WIN32) # define UV__EADDRINUSE UV__ERR(EADDRINUSE) #else # define UV__EADDRINUSE (-4091) #endif #if defined(EADDRNOTAVAIL) && !defined(_WIN32) # define UV__EADDRNOTAVAIL UV__ERR(EADDRNOTAVAIL) #else # define UV__EADDRNOTAVAIL (-4090) #endif #if defined(EAFNOSUPPORT) && !defined(_WIN32) # define UV__EAFNOSUPPORT UV__ERR(EAFNOSUPPORT) #else # define UV__EAFNOSUPPORT (-4089) #endif #if defined(EAGAIN) && !defined(_WIN32) # define UV__EAGAIN UV__ERR(EAGAIN) #else # define UV__EAGAIN (-4088) #endif #if defined(EALREADY) && !defined(_WIN32) # define UV__EALREADY UV__ERR(EALREADY) #else # define UV__EALREADY (-4084) #endif #if defined(EBADF) && !defined(_WIN32) # define UV__EBADF UV__ERR(EBADF) #else # define UV__EBADF (-4083) #endif #if defined(EBUSY) && !defined(_WIN32) # define UV__EBUSY UV__ERR(EBUSY) #else # define UV__EBUSY (-4082) #endif #if defined(ECANCELED) && !defined(_WIN32) # define UV__ECANCELED UV__ERR(ECANCELED) #else # define UV__ECANCELED (-4081) #endif #if defined(ECHARSET) && !defined(_WIN32) # define UV__ECHARSET UV__ERR(ECHARSET) #else # define UV__ECHARSET (-4080) #endif #if defined(ECONNABORTED) && !defined(_WIN32) # define UV__ECONNABORTED UV__ERR(ECONNABORTED) #else # define UV__ECONNABORTED (-4079) #endif #if defined(ECONNREFUSED) && !defined(_WIN32) # define UV__ECONNREFUSED UV__ERR(ECONNREFUSED) #else # define UV__ECONNREFUSED (-4078) #endif #if defined(ECONNRESET) && !defined(_WIN32) # define UV__ECONNRESET UV__ERR(ECONNRESET) #else # define UV__ECONNRESET (-4077) #endif #if defined(EDESTADDRREQ) && !defined(_WIN32) # define UV__EDESTADDRREQ UV__ERR(EDESTADDRREQ) #else # define UV__EDESTADDRREQ (-4076) #endif #if defined(EEXIST) && !defined(_WIN32) # define UV__EEXIST UV__ERR(EEXIST) #else # define UV__EEXIST (-4075) #endif #if defined(EFAULT) && !defined(_WIN32) # define UV__EFAULT UV__ERR(EFAULT) #else # define UV__EFAULT (-4074) #endif #if defined(EHOSTUNREACH) && !defined(_WIN32) # define UV__EHOSTUNREACH UV__ERR(EHOSTUNREACH) #else # define UV__EHOSTUNREACH (-4073) #endif #if defined(EINTR) && !defined(_WIN32) # define UV__EINTR UV__ERR(EINTR) #else # define UV__EINTR (-4072) #endif #if defined(EINVAL) && !defined(_WIN32) # define UV__EINVAL UV__ERR(EINVAL) #else # define UV__EINVAL (-4071) #endif #if defined(EIO) && !defined(_WIN32) # define UV__EIO UV__ERR(EIO) #else # define UV__EIO (-4070) #endif #if defined(EISCONN) && !defined(_WIN32) # define UV__EISCONN UV__ERR(EISCONN) #else # define UV__EISCONN (-4069) #endif #if defined(EISDIR) && !defined(_WIN32) # define UV__EISDIR UV__ERR(EISDIR) #else # define UV__EISDIR (-4068) #endif #if defined(ELOOP) && !defined(_WIN32) # define UV__ELOOP UV__ERR(ELOOP) #else # define UV__ELOOP (-4067) #endif #if defined(EMFILE) && !defined(_WIN32) # define UV__EMFILE UV__ERR(EMFILE) #else # define UV__EMFILE (-4066) #endif #if defined(EMSGSIZE) && !defined(_WIN32) # define UV__EMSGSIZE UV__ERR(EMSGSIZE) #else # define UV__EMSGSIZE (-4065) #endif #if defined(ENAMETOOLONG) && !defined(_WIN32) # define UV__ENAMETOOLONG UV__ERR(ENAMETOOLONG) #else # define UV__ENAMETOOLONG (-4064) #endif #if defined(ENETDOWN) && !defined(_WIN32) # define UV__ENETDOWN UV__ERR(ENETDOWN) #else # define UV__ENETDOWN (-4063) #endif #if defined(ENETUNREACH) && !defined(_WIN32) # define 
UV__ENETUNREACH UV__ERR(ENETUNREACH) #else # define UV__ENETUNREACH (-4062) #endif #if defined(ENFILE) && !defined(_WIN32) # define UV__ENFILE UV__ERR(ENFILE) #else # define UV__ENFILE (-4061) #endif #if defined(ENOBUFS) && !defined(_WIN32) # define UV__ENOBUFS UV__ERR(ENOBUFS) #else # define UV__ENOBUFS (-4060) #endif #if defined(ENODEV) && !defined(_WIN32) # define UV__ENODEV UV__ERR(ENODEV) #else # define UV__ENODEV (-4059) #endif #if defined(ENOENT) && !defined(_WIN32) # define UV__ENOENT UV__ERR(ENOENT) #else # define UV__ENOENT (-4058) #endif #if defined(ENOMEM) && !defined(_WIN32) # define UV__ENOMEM UV__ERR(ENOMEM) #else # define UV__ENOMEM (-4057) #endif #if defined(ENONET) && !defined(_WIN32) # define UV__ENONET UV__ERR(ENONET) #else # define UV__ENONET (-4056) #endif #if defined(ENOSPC) && !defined(_WIN32) # define UV__ENOSPC UV__ERR(ENOSPC) #else # define UV__ENOSPC (-4055) #endif #if defined(ENOSYS) && !defined(_WIN32) # define UV__ENOSYS UV__ERR(ENOSYS) #else # define UV__ENOSYS (-4054) #endif #if defined(ENOTCONN) && !defined(_WIN32) # define UV__ENOTCONN UV__ERR(ENOTCONN) #else # define UV__ENOTCONN (-4053) #endif #if defined(ENOTDIR) && !defined(_WIN32) # define UV__ENOTDIR UV__ERR(ENOTDIR) #else # define UV__ENOTDIR (-4052) #endif #if defined(ENOTEMPTY) && !defined(_WIN32) # define UV__ENOTEMPTY UV__ERR(ENOTEMPTY) #else # define UV__ENOTEMPTY (-4051) #endif #if defined(ENOTSOCK) && !defined(_WIN32) # define UV__ENOTSOCK UV__ERR(ENOTSOCK) #else # define UV__ENOTSOCK (-4050) #endif #if defined(ENOTSUP) && !defined(_WIN32) # define UV__ENOTSUP UV__ERR(ENOTSUP) #else # define UV__ENOTSUP (-4049) #endif #if defined(EPERM) && !defined(_WIN32) # define UV__EPERM UV__ERR(EPERM) #else # define UV__EPERM (-4048) #endif #if defined(EPIPE) && !defined(_WIN32) # define UV__EPIPE UV__ERR(EPIPE) #else # define UV__EPIPE (-4047) #endif #if defined(EPROTO) && !defined(_WIN32) # define UV__EPROTO UV__ERR(EPROTO) #else # define UV__EPROTO (-4046) #endif #if defined(EPROTONOSUPPORT) && !defined(_WIN32) # define UV__EPROTONOSUPPORT UV__ERR(EPROTONOSUPPORT) #else # define UV__EPROTONOSUPPORT (-4045) #endif #if defined(EPROTOTYPE) && !defined(_WIN32) # define UV__EPROTOTYPE UV__ERR(EPROTOTYPE) #else # define UV__EPROTOTYPE (-4044) #endif #if defined(EROFS) && !defined(_WIN32) # define UV__EROFS UV__ERR(EROFS) #else # define UV__EROFS (-4043) #endif #if defined(ESHUTDOWN) && !defined(_WIN32) # define UV__ESHUTDOWN UV__ERR(ESHUTDOWN) #else # define UV__ESHUTDOWN (-4042) #endif #if defined(ESPIPE) && !defined(_WIN32) # define UV__ESPIPE UV__ERR(ESPIPE) #else # define UV__ESPIPE (-4041) #endif #if defined(ESRCH) && !defined(_WIN32) # define UV__ESRCH UV__ERR(ESRCH) #else # define UV__ESRCH (-4040) #endif #if defined(ETIMEDOUT) && !defined(_WIN32) # define UV__ETIMEDOUT UV__ERR(ETIMEDOUT) #else # define UV__ETIMEDOUT (-4039) #endif #if defined(ETXTBSY) && !defined(_WIN32) # define UV__ETXTBSY UV__ERR(ETXTBSY) #else # define UV__ETXTBSY (-4038) #endif #if defined(EXDEV) && !defined(_WIN32) # define UV__EXDEV UV__ERR(EXDEV) #else # define UV__EXDEV (-4037) #endif #if defined(EFBIG) && !defined(_WIN32) # define UV__EFBIG UV__ERR(EFBIG) #else # define UV__EFBIG (-4036) #endif #if defined(ENOPROTOOPT) && !defined(_WIN32) # define UV__ENOPROTOOPT UV__ERR(ENOPROTOOPT) #else # define UV__ENOPROTOOPT (-4035) #endif #if defined(ERANGE) && !defined(_WIN32) # define UV__ERANGE UV__ERR(ERANGE) #else # define UV__ERANGE (-4034) #endif #if defined(ENXIO) && !defined(_WIN32) # define UV__ENXIO UV__ERR(ENXIO) #else 
# define UV__ENXIO (-4033) #endif #if defined(EMLINK) && !defined(_WIN32) # define UV__EMLINK UV__ERR(EMLINK) #else # define UV__EMLINK (-4032) #endif /* EHOSTDOWN is not visible on BSD-like systems when _POSIX_C_SOURCE is * defined. Fortunately, its value is always 64 so it's possible albeit * icky to hard-code it. */ #if defined(EHOSTDOWN) && !defined(_WIN32) # define UV__EHOSTDOWN UV__ERR(EHOSTDOWN) #elif defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__NetBSD__) || \ defined(__OpenBSD__) # define UV__EHOSTDOWN (-64) #else # define UV__EHOSTDOWN (-4031) #endif #if defined(EREMOTEIO) && !defined(_WIN32) # define UV__EREMOTEIO UV__ERR(EREMOTEIO) #else # define UV__EREMOTEIO (-4030) #endif #if defined(ENOTTY) && !defined(_WIN32) # define UV__ENOTTY UV__ERR(ENOTTY) #else # define UV__ENOTTY (-4029) #endif #if defined(EFTYPE) && !defined(_WIN32) # define UV__EFTYPE UV__ERR(EFTYPE) #else # define UV__EFTYPE (-4028) #endif #if defined(EILSEQ) && !defined(_WIN32) # define UV__EILSEQ UV__ERR(EILSEQ) #else # define UV__EILSEQ (-4027) #endif #if defined(EOVERFLOW) && !defined(_WIN32) # define UV__EOVERFLOW UV__ERR(EOVERFLOW) #else # define UV__EOVERFLOW (-4026) #endif #if defined(ESOCKTNOSUPPORT) && !defined(_WIN32) # define UV__ESOCKTNOSUPPORT UV__ERR(ESOCKTNOSUPPORT) #else # define UV__ESOCKTNOSUPPORT (-4025) #endif #endif /* UV_ERRNO_H_ */ gevent-24.11.1/deps/libuv/include/uv/linux.h000066400000000000000000000033651471441230600206220ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_LINUX_H #define UV_LINUX_H #define UV_PLATFORM_LOOP_FIELDS \ uv__io_t inotify_read_watcher; \ void* inotify_watchers; \ int inotify_fd; \ #define UV_PLATFORM_FS_EVENT_FIELDS \ void* watchers[2]; \ int wd; \ #endif /* UV_LINUX_H */ gevent-24.11.1/deps/libuv/include/uv/os390.h000066400000000000000000000030211471441230600203250ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_MVS_H #define UV_MVS_H #define UV_PLATFORM_SEM_T long #define UV_PLATFORM_LOOP_FIELDS \ void* ep; \ #define UV_PLATFORM_FS_EVENT_FIELDS \ char rfis_rftok[8]; \ #endif /* UV_MVS_H */ gevent-24.11.1/deps/libuv/include/uv/posix.h000066400000000000000000000031061471441230600206160ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_POSIX_H #define UV_POSIX_H #define UV_PLATFORM_LOOP_FIELDS \ struct pollfd* poll_fds; \ size_t poll_fds_used; \ size_t poll_fds_size; \ unsigned char poll_fds_iterating; \ #endif /* UV_POSIX_H */ gevent-24.11.1/deps/libuv/include/uv/stdint-msvc2008.h000066400000000000000000000170601471441230600222450ustar00rootroot00000000000000// ISO C9x compliant stdint.h for Microsoft Visual Studio // Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 // // Copyright (c) 2006-2008 Alexander Chemeris // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. 
The name of the author may be used to endorse or promote products // derived from this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED // WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF // MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO // EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // /////////////////////////////////////////////////////////////////////////////// #ifndef _MSC_VER // [ #error "Use this header only with Microsoft Visual C++ compilers!" #endif // _MSC_VER ] #ifndef _MSC_STDINT_H_ // [ #define _MSC_STDINT_H_ #if _MSC_VER > 1000 #pragma once #endif #include // For Visual Studio 6 in C++ mode and for many Visual Studio versions when // compiling for ARM we should wrap include with 'extern "C++" {}' // or compiler give many errors like this: // error C2733: second C linkage of overloaded function 'wmemchr' not allowed #ifdef __cplusplus extern "C" { #endif # include #ifdef __cplusplus } #endif // Define _W64 macros to mark types changing their size, like intptr_t. #ifndef _W64 # if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300 # define _W64 __w64 # else # define _W64 # endif #endif // 7.18.1 Integer types // 7.18.1.1 Exact-width integer types // Visual Studio 6 and Embedded Visual C++ 4 doesn't // realize that, e.g. char has the same size as __int8 // so we give up on __intX for them. 
#if (_MSC_VER < 1300) typedef signed char int8_t; typedef signed short int16_t; typedef signed int int32_t; typedef unsigned char uint8_t; typedef unsigned short uint16_t; typedef unsigned int uint32_t; #else typedef signed __int8 int8_t; typedef signed __int16 int16_t; typedef signed __int32 int32_t; typedef unsigned __int8 uint8_t; typedef unsigned __int16 uint16_t; typedef unsigned __int32 uint32_t; #endif typedef signed __int64 int64_t; typedef unsigned __int64 uint64_t; // 7.18.1.2 Minimum-width integer types typedef int8_t int_least8_t; typedef int16_t int_least16_t; typedef int32_t int_least32_t; typedef int64_t int_least64_t; typedef uint8_t uint_least8_t; typedef uint16_t uint_least16_t; typedef uint32_t uint_least32_t; typedef uint64_t uint_least64_t; // 7.18.1.3 Fastest minimum-width integer types typedef int8_t int_fast8_t; typedef int16_t int_fast16_t; typedef int32_t int_fast32_t; typedef int64_t int_fast64_t; typedef uint8_t uint_fast8_t; typedef uint16_t uint_fast16_t; typedef uint32_t uint_fast32_t; typedef uint64_t uint_fast64_t; // 7.18.1.4 Integer types capable of holding object pointers #ifdef _WIN64 // [ typedef signed __int64 intptr_t; typedef unsigned __int64 uintptr_t; #else // _WIN64 ][ typedef _W64 signed int intptr_t; typedef _W64 unsigned int uintptr_t; #endif // _WIN64 ] // 7.18.1.5 Greatest-width integer types typedef int64_t intmax_t; typedef uint64_t uintmax_t; // 7.18.2 Limits of specified-width integer types #if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [ See footnote 220 at page 257 and footnote 221 at page 259 // 7.18.2.1 Limits of exact-width integer types #define INT8_MIN ((int8_t)_I8_MIN) #define INT8_MAX _I8_MAX #define INT16_MIN ((int16_t)_I16_MIN) #define INT16_MAX _I16_MAX #define INT32_MIN ((int32_t)_I32_MIN) #define INT32_MAX _I32_MAX #define INT64_MIN ((int64_t)_I64_MIN) #define INT64_MAX _I64_MAX #define UINT8_MAX _UI8_MAX #define UINT16_MAX _UI16_MAX #define UINT32_MAX _UI32_MAX #define UINT64_MAX _UI64_MAX // 7.18.2.2 Limits of minimum-width integer types #define INT_LEAST8_MIN INT8_MIN #define INT_LEAST8_MAX INT8_MAX #define INT_LEAST16_MIN INT16_MIN #define INT_LEAST16_MAX INT16_MAX #define INT_LEAST32_MIN INT32_MIN #define INT_LEAST32_MAX INT32_MAX #define INT_LEAST64_MIN INT64_MIN #define INT_LEAST64_MAX INT64_MAX #define UINT_LEAST8_MAX UINT8_MAX #define UINT_LEAST16_MAX UINT16_MAX #define UINT_LEAST32_MAX UINT32_MAX #define UINT_LEAST64_MAX UINT64_MAX // 7.18.2.3 Limits of fastest minimum-width integer types #define INT_FAST8_MIN INT8_MIN #define INT_FAST8_MAX INT8_MAX #define INT_FAST16_MIN INT16_MIN #define INT_FAST16_MAX INT16_MAX #define INT_FAST32_MIN INT32_MIN #define INT_FAST32_MAX INT32_MAX #define INT_FAST64_MIN INT64_MIN #define INT_FAST64_MAX INT64_MAX #define UINT_FAST8_MAX UINT8_MAX #define UINT_FAST16_MAX UINT16_MAX #define UINT_FAST32_MAX UINT32_MAX #define UINT_FAST64_MAX UINT64_MAX // 7.18.2.4 Limits of integer types capable of holding object pointers #ifdef _WIN64 // [ # define INTPTR_MIN INT64_MIN # define INTPTR_MAX INT64_MAX # define UINTPTR_MAX UINT64_MAX #else // _WIN64 ][ # define INTPTR_MIN INT32_MIN # define INTPTR_MAX INT32_MAX # define UINTPTR_MAX UINT32_MAX #endif // _WIN64 ] // 7.18.2.5 Limits of greatest-width integer types #define INTMAX_MIN INT64_MIN #define INTMAX_MAX INT64_MAX #define UINTMAX_MAX UINT64_MAX // 7.18.3 Limits of other integer types #ifdef _WIN64 // [ # define PTRDIFF_MIN _I64_MIN # define PTRDIFF_MAX _I64_MAX #else // _WIN64 ][ # define PTRDIFF_MIN _I32_MIN # define 
PTRDIFF_MAX _I32_MAX #endif // _WIN64 ] #define SIG_ATOMIC_MIN INT_MIN #define SIG_ATOMIC_MAX INT_MAX #ifndef SIZE_MAX // [ # ifdef _WIN64 // [ # define SIZE_MAX _UI64_MAX # else // _WIN64 ][ # define SIZE_MAX _UI32_MAX # endif // _WIN64 ] #endif // SIZE_MAX ] // WCHAR_MIN and WCHAR_MAX are also defined in #ifndef WCHAR_MIN // [ # define WCHAR_MIN 0 #endif // WCHAR_MIN ] #ifndef WCHAR_MAX // [ # define WCHAR_MAX _UI16_MAX #endif // WCHAR_MAX ] #define WINT_MIN 0 #define WINT_MAX _UI16_MAX #endif // __STDC_LIMIT_MACROS ] // 7.18.4 Limits of other integer types #if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [ See footnote 224 at page 260 // 7.18.4.1 Macros for minimum-width integer constants #define INT8_C(val) val##i8 #define INT16_C(val) val##i16 #define INT32_C(val) val##i32 #define INT64_C(val) val##i64 #define UINT8_C(val) val##ui8 #define UINT16_C(val) val##ui16 #define UINT32_C(val) val##ui32 #define UINT64_C(val) val##ui64 // 7.18.4.2 Macros for greatest-width integer constants #define INTMAX_C INT64_C #define UINTMAX_C UINT64_C #endif // __STDC_CONSTANT_MACROS ] #endif // _MSC_STDINT_H_ ] gevent-24.11.1/deps/libuv/include/uv/sunos.h000066400000000000000000000037011471441230600206240ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_SUNOS_H #define UV_SUNOS_H #include #include /* For the sake of convenience and reduced #ifdef-ery in src/unix/sunos.c, * add the fs_event fields even when this version of SunOS doesn't support * file watching. */ #define UV_PLATFORM_LOOP_FIELDS \ uv__io_t fs_event_watcher; \ int fs_fd; \ #if defined(PORT_SOURCE_FILE) # define UV_PLATFORM_FS_EVENT_FIELDS \ file_obj_t fo; \ int fd; \ #endif /* defined(PORT_SOURCE_FILE) */ #endif /* UV_SUNOS_H */ gevent-24.11.1/deps/libuv/include/uv/threadpool.h000066400000000000000000000027311471441230600216200ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* * This file is private to libuv. It provides common functionality to both * Windows and Unix backends. */ #ifndef UV_THREADPOOL_H_ #define UV_THREADPOOL_H_ struct uv__work { void (*work)(struct uv__work *w); void (*done)(struct uv__work *w, int status); struct uv_loop_s* loop; void* wq[2]; }; #endif /* UV_THREADPOOL_H_ */ gevent-24.11.1/deps/libuv/include/uv/tree.h000066400000000000000000001472311471441230600204230ustar00rootroot00000000000000/*- * Copyright 2002 Niels Provos * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef UV_TREE_H_ #define UV_TREE_H_ #ifndef UV__UNUSED # if __GNUC__ # define UV__UNUSED __attribute__((unused)) # else # define UV__UNUSED # endif #endif /* * This file defines data structures for different types of trees: * splay trees and red-black trees. * * A splay tree is a self-organizing data structure. Every operation * on the tree causes a splay to happen. The splay moves the requested * node to the root of the tree and partly rebalances it. * * This has the benefit that request locality causes faster lookups as * the requested nodes move to the top of the tree. On the other hand, * every lookup causes memory writes. 
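 *
 * Editor's note (illustrative sketch, not part of the original tree.h): a
 * splay tree keyed on an int is declared and used roughly as follows; the
 * type, field and function names are made up for the example, and
 * SPLAY_GENERATE appears later in this header.
 *
 *     struct node {
 *       int key;
 *       SPLAY_ENTRY(node) link;
 *     };
 *
 *     static int node_cmp(struct node* a, struct node* b) {
 *       return (a->key < b->key) ? -1 : (a->key > b->key);
 *     }
 *
 *     SPLAY_HEAD(node_tree, node);
 *     SPLAY_PROTOTYPE(node_tree, node, link, node_cmp)
 *     SPLAY_GENERATE(node_tree, node, link, node_cmp)
 *
 *     struct node_tree tree = SPLAY_INITIALIZER(&tree);
 *     struct node item;
 *     item.key = 42;
 *     SPLAY_INSERT(node_tree, &tree, &item);
 *     struct node* hit = SPLAY_FIND(node_tree, &tree, &item);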
* * The Balance Theorem bounds the total access time for m operations * and n inserts on an initially empty tree as O((m + n)lg n). The * amortized cost for a sequence of m accesses to a splay tree is O(lg n); * * A red-black tree is a binary search tree with the node color as an * extra attribute. It fulfills a set of conditions: * - every search path from the root to a leaf consists of the * same number of black nodes, * - each red node (except for the root) has a black parent, * - each leaf node is black. * * Every operation on a red-black tree is bounded as O(lg n). * The maximum height of a red-black tree is 2lg (n+1). */ #define SPLAY_HEAD(name, type) \ struct name { \ struct type *sph_root; /* root of the tree */ \ } #define SPLAY_INITIALIZER(root) \ { NULL } #define SPLAY_INIT(root) do { \ (root)->sph_root = NULL; \ } while (/*CONSTCOND*/ 0) #define SPLAY_ENTRY(type) \ struct { \ struct type *spe_left; /* left element */ \ struct type *spe_right; /* right element */ \ } #define SPLAY_LEFT(elm, field) (elm)->field.spe_left #define SPLAY_RIGHT(elm, field) (elm)->field.spe_right #define SPLAY_ROOT(head) (head)->sph_root #define SPLAY_EMPTY(head) (SPLAY_ROOT(head) == NULL) /* SPLAY_ROTATE_{LEFT,RIGHT} expect that tmp hold SPLAY_{RIGHT,LEFT} */ #define SPLAY_ROTATE_RIGHT(head, tmp, field) do { \ SPLAY_LEFT((head)->sph_root, field) = SPLAY_RIGHT(tmp, field); \ SPLAY_RIGHT(tmp, field) = (head)->sph_root; \ (head)->sph_root = tmp; \ } while (/*CONSTCOND*/ 0) #define SPLAY_ROTATE_LEFT(head, tmp, field) do { \ SPLAY_RIGHT((head)->sph_root, field) = SPLAY_LEFT(tmp, field); \ SPLAY_LEFT(tmp, field) = (head)->sph_root; \ (head)->sph_root = tmp; \ } while (/*CONSTCOND*/ 0) #define SPLAY_LINKLEFT(head, tmp, field) do { \ SPLAY_LEFT(tmp, field) = (head)->sph_root; \ tmp = (head)->sph_root; \ (head)->sph_root = SPLAY_LEFT((head)->sph_root, field); \ } while (/*CONSTCOND*/ 0) #define SPLAY_LINKRIGHT(head, tmp, field) do { \ SPLAY_RIGHT(tmp, field) = (head)->sph_root; \ tmp = (head)->sph_root; \ (head)->sph_root = SPLAY_RIGHT((head)->sph_root, field); \ } while (/*CONSTCOND*/ 0) #define SPLAY_ASSEMBLE(head, node, left, right, field) do { \ SPLAY_RIGHT(left, field) = SPLAY_LEFT((head)->sph_root, field); \ SPLAY_LEFT(right, field) = SPLAY_RIGHT((head)->sph_root, field); \ SPLAY_LEFT((head)->sph_root, field) = SPLAY_RIGHT(node, field); \ SPLAY_RIGHT((head)->sph_root, field) = SPLAY_LEFT(node, field); \ } while (/*CONSTCOND*/ 0) /* Generates prototypes and inline functions */ #define SPLAY_PROTOTYPE(name, type, field, cmp) \ void name##_SPLAY(struct name *, struct type *); \ void name##_SPLAY_MINMAX(struct name *, int); \ struct type *name##_SPLAY_INSERT(struct name *, struct type *); \ struct type *name##_SPLAY_REMOVE(struct name *, struct type *); \ \ /* Finds the node with the same key as elm */ \ static __inline struct type * \ name##_SPLAY_FIND(struct name *head, struct type *elm) \ { \ if (SPLAY_EMPTY(head)) \ return(NULL); \ name##_SPLAY(head, elm); \ if ((cmp)(elm, (head)->sph_root) == 0) \ return (head->sph_root); \ return (NULL); \ } \ \ static __inline struct type * \ name##_SPLAY_NEXT(struct name *head, struct type *elm) \ { \ name##_SPLAY(head, elm); \ if (SPLAY_RIGHT(elm, field) != NULL) { \ elm = SPLAY_RIGHT(elm, field); \ while (SPLAY_LEFT(elm, field) != NULL) { \ elm = SPLAY_LEFT(elm, field); \ } \ } else \ elm = NULL; \ return (elm); \ } \ \ static __inline struct type * \ name##_SPLAY_MIN_MAX(struct name *head, int val) \ { \ name##_SPLAY_MINMAX(head, val); \ return (SPLAY_ROOT(head)); 
\ } /* Main splay operation. * Moves node close to the key of elm to top */ #define SPLAY_GENERATE(name, type, field, cmp) \ struct type * \ name##_SPLAY_INSERT(struct name *head, struct type *elm) \ { \ if (SPLAY_EMPTY(head)) { \ SPLAY_LEFT(elm, field) = SPLAY_RIGHT(elm, field) = NULL; \ } else { \ int __comp; \ name##_SPLAY(head, elm); \ __comp = (cmp)(elm, (head)->sph_root); \ if(__comp < 0) { \ SPLAY_LEFT(elm, field) = SPLAY_LEFT((head)->sph_root, field); \ SPLAY_RIGHT(elm, field) = (head)->sph_root; \ SPLAY_LEFT((head)->sph_root, field) = NULL; \ } else if (__comp > 0) { \ SPLAY_RIGHT(elm, field) = SPLAY_RIGHT((head)->sph_root, field); \ SPLAY_LEFT(elm, field) = (head)->sph_root; \ SPLAY_RIGHT((head)->sph_root, field) = NULL; \ } else \ return ((head)->sph_root); \ } \ (head)->sph_root = (elm); \ return (NULL); \ } \ \ struct type * \ name##_SPLAY_REMOVE(struct name *head, struct type *elm) \ { \ struct type *__tmp; \ if (SPLAY_EMPTY(head)) \ return (NULL); \ name##_SPLAY(head, elm); \ if ((cmp)(elm, (head)->sph_root) == 0) { \ if (SPLAY_LEFT((head)->sph_root, field) == NULL) { \ (head)->sph_root = SPLAY_RIGHT((head)->sph_root, field); \ } else { \ __tmp = SPLAY_RIGHT((head)->sph_root, field); \ (head)->sph_root = SPLAY_LEFT((head)->sph_root, field); \ name##_SPLAY(head, elm); \ SPLAY_RIGHT((head)->sph_root, field) = __tmp; \ } \ return (elm); \ } \ return (NULL); \ } \ \ void \ name##_SPLAY(struct name *head, struct type *elm) \ { \ struct type __node, *__left, *__right, *__tmp; \ int __comp; \ \ SPLAY_LEFT(&__node, field) = SPLAY_RIGHT(&__node, field) = NULL; \ __left = __right = &__node; \ \ while ((__comp = (cmp)(elm, (head)->sph_root)) != 0) { \ if (__comp < 0) { \ __tmp = SPLAY_LEFT((head)->sph_root, field); \ if (__tmp == NULL) \ break; \ if ((cmp)(elm, __tmp) < 0){ \ SPLAY_ROTATE_RIGHT(head, __tmp, field); \ if (SPLAY_LEFT((head)->sph_root, field) == NULL) \ break; \ } \ SPLAY_LINKLEFT(head, __right, field); \ } else if (__comp > 0) { \ __tmp = SPLAY_RIGHT((head)->sph_root, field); \ if (__tmp == NULL) \ break; \ if ((cmp)(elm, __tmp) > 0){ \ SPLAY_ROTATE_LEFT(head, __tmp, field); \ if (SPLAY_RIGHT((head)->sph_root, field) == NULL) \ break; \ } \ SPLAY_LINKRIGHT(head, __left, field); \ } \ } \ SPLAY_ASSEMBLE(head, &__node, __left, __right, field); \ } \ \ /* Splay with either the minimum or the maximum element \ * Used to find minimum or maximum element in tree. 
\ */ \ void name##_SPLAY_MINMAX(struct name *head, int __comp) \ { \ struct type __node, *__left, *__right, *__tmp; \ \ SPLAY_LEFT(&__node, field) = SPLAY_RIGHT(&__node, field) = NULL; \ __left = __right = &__node; \ \ for (;;) { \ if (__comp < 0) { \ __tmp = SPLAY_LEFT((head)->sph_root, field); \ if (__tmp == NULL) \ break; \ if (__comp < 0){ \ SPLAY_ROTATE_RIGHT(head, __tmp, field); \ if (SPLAY_LEFT((head)->sph_root, field) == NULL) \ break; \ } \ SPLAY_LINKLEFT(head, __right, field); \ } else if (__comp > 0) { \ __tmp = SPLAY_RIGHT((head)->sph_root, field); \ if (__tmp == NULL) \ break; \ if (__comp > 0) { \ SPLAY_ROTATE_LEFT(head, __tmp, field); \ if (SPLAY_RIGHT((head)->sph_root, field) == NULL) \ break; \ } \ SPLAY_LINKRIGHT(head, __left, field); \ } \ } \ SPLAY_ASSEMBLE(head, &__node, __left, __right, field); \ } #define SPLAY_NEGINF -1 #define SPLAY_INF 1 #define SPLAY_INSERT(name, x, y) name##_SPLAY_INSERT(x, y) #define SPLAY_REMOVE(name, x, y) name##_SPLAY_REMOVE(x, y) #define SPLAY_FIND(name, x, y) name##_SPLAY_FIND(x, y) #define SPLAY_NEXT(name, x, y) name##_SPLAY_NEXT(x, y) #define SPLAY_MIN(name, x) (SPLAY_EMPTY(x) ? NULL \ : name##_SPLAY_MIN_MAX(x, SPLAY_NEGINF)) #define SPLAY_MAX(name, x) (SPLAY_EMPTY(x) ? NULL \ : name##_SPLAY_MIN_MAX(x, SPLAY_INF)) #define SPLAY_FOREACH(x, name, head) \ for ((x) = SPLAY_MIN(name, head); \ (x) != NULL; \ (x) = SPLAY_NEXT(name, head, x)) /* Macros that define a red-black tree */ #define RB_HEAD(name, type) \ struct name { \ struct type *rbh_root; /* root of the tree */ \ } #define RB_INITIALIZER(root) \ { NULL } #define RB_INIT(root) do { \ (root)->rbh_root = NULL; \ } while (/*CONSTCOND*/ 0) #define RB_BLACK 0 #define RB_RED 1 #define RB_ENTRY(type) \ struct { \ struct type *rbe_left; /* left element */ \ struct type *rbe_right; /* right element */ \ struct type *rbe_parent; /* parent element */ \ int rbe_color; /* node color */ \ } #define RB_LEFT(elm, field) (elm)->field.rbe_left #define RB_RIGHT(elm, field) (elm)->field.rbe_right #define RB_PARENT(elm, field) (elm)->field.rbe_parent #define RB_COLOR(elm, field) (elm)->field.rbe_color #define RB_ROOT(head) (head)->rbh_root #define RB_EMPTY(head) (RB_ROOT(head) == NULL) #define RB_SET(elm, parent, field) do { \ RB_PARENT(elm, field) = parent; \ RB_LEFT(elm, field) = RB_RIGHT(elm, field) = NULL; \ RB_COLOR(elm, field) = RB_RED; \ } while (/*CONSTCOND*/ 0) #define RB_SET_BLACKRED(black, red, field) do { \ RB_COLOR(black, field) = RB_BLACK; \ RB_COLOR(red, field) = RB_RED; \ } while (/*CONSTCOND*/ 0) #ifndef RB_AUGMENT #define RB_AUGMENT(x) do {} while (0) #endif #define RB_ROTATE_LEFT(head, elm, tmp, field) do { \ (tmp) = RB_RIGHT(elm, field); \ if ((RB_RIGHT(elm, field) = RB_LEFT(tmp, field)) != NULL) { \ RB_PARENT(RB_LEFT(tmp, field), field) = (elm); \ } \ RB_AUGMENT(elm); \ if ((RB_PARENT(tmp, field) = RB_PARENT(elm, field)) != NULL) { \ if ((elm) == RB_LEFT(RB_PARENT(elm, field), field)) \ RB_LEFT(RB_PARENT(elm, field), field) = (tmp); \ else \ RB_RIGHT(RB_PARENT(elm, field), field) = (tmp); \ } else \ (head)->rbh_root = (tmp); \ RB_LEFT(tmp, field) = (elm); \ RB_PARENT(elm, field) = (tmp); \ RB_AUGMENT(tmp); \ if ((RB_PARENT(tmp, field))) \ RB_AUGMENT(RB_PARENT(tmp, field)); \ } while (/*CONSTCOND*/ 0) #define RB_ROTATE_RIGHT(head, elm, tmp, field) do { \ (tmp) = RB_LEFT(elm, field); \ if ((RB_LEFT(elm, field) = RB_RIGHT(tmp, field)) != NULL) { \ RB_PARENT(RB_RIGHT(tmp, field), field) = (elm); \ } \ RB_AUGMENT(elm); \ if ((RB_PARENT(tmp, field) = RB_PARENT(elm, field)) != NULL) { \ if 
((elm) == RB_LEFT(RB_PARENT(elm, field), field)) \ RB_LEFT(RB_PARENT(elm, field), field) = (tmp); \ else \ RB_RIGHT(RB_PARENT(elm, field), field) = (tmp); \ } else \ (head)->rbh_root = (tmp); \ RB_RIGHT(tmp, field) = (elm); \ RB_PARENT(elm, field) = (tmp); \ RB_AUGMENT(tmp); \ if ((RB_PARENT(tmp, field))) \ RB_AUGMENT(RB_PARENT(tmp, field)); \ } while (/*CONSTCOND*/ 0) /* Generates prototypes and inline functions */ #define RB_PROTOTYPE(name, type, field, cmp) \ RB_PROTOTYPE_INTERNAL(name, type, field, cmp,) #define RB_PROTOTYPE_STATIC(name, type, field, cmp) \ RB_PROTOTYPE_INTERNAL(name, type, field, cmp, UV__UNUSED static) #define RB_PROTOTYPE_INTERNAL(name, type, field, cmp, attr) \ attr void name##_RB_INSERT_COLOR(struct name *, struct type *); \ attr void name##_RB_REMOVE_COLOR(struct name *, struct type *, struct type *);\ attr struct type *name##_RB_REMOVE(struct name *, struct type *); \ attr struct type *name##_RB_INSERT(struct name *, struct type *); \ attr struct type *name##_RB_FIND(struct name *, struct type *); \ attr struct type *name##_RB_NFIND(struct name *, struct type *); \ attr struct type *name##_RB_NEXT(struct type *); \ attr struct type *name##_RB_PREV(struct type *); \ attr struct type *name##_RB_MINMAX(struct name *, int); \ \ /* Main rb operation. * Moves node close to the key of elm to top */ #define RB_GENERATE(name, type, field, cmp) \ RB_GENERATE_INTERNAL(name, type, field, cmp,) #define RB_GENERATE_STATIC(name, type, field, cmp) \ RB_GENERATE_INTERNAL(name, type, field, cmp, UV__UNUSED static) #define RB_GENERATE_INTERNAL(name, type, field, cmp, attr) \ attr void \ name##_RB_INSERT_COLOR(struct name *head, struct type *elm) \ { \ struct type *parent, *gparent, *tmp; \ while ((parent = RB_PARENT(elm, field)) != NULL && \ RB_COLOR(parent, field) == RB_RED) { \ gparent = RB_PARENT(parent, field); \ if (parent == RB_LEFT(gparent, field)) { \ tmp = RB_RIGHT(gparent, field); \ if (tmp && RB_COLOR(tmp, field) == RB_RED) { \ RB_COLOR(tmp, field) = RB_BLACK; \ RB_SET_BLACKRED(parent, gparent, field); \ elm = gparent; \ continue; \ } \ if (RB_RIGHT(parent, field) == elm) { \ RB_ROTATE_LEFT(head, parent, tmp, field); \ tmp = parent; \ parent = elm; \ elm = tmp; \ } \ RB_SET_BLACKRED(parent, gparent, field); \ RB_ROTATE_RIGHT(head, gparent, tmp, field); \ } else { \ tmp = RB_LEFT(gparent, field); \ if (tmp && RB_COLOR(tmp, field) == RB_RED) { \ RB_COLOR(tmp, field) = RB_BLACK; \ RB_SET_BLACKRED(parent, gparent, field); \ elm = gparent; \ continue; \ } \ if (RB_LEFT(parent, field) == elm) { \ RB_ROTATE_RIGHT(head, parent, tmp, field); \ tmp = parent; \ parent = elm; \ elm = tmp; \ } \ RB_SET_BLACKRED(parent, gparent, field); \ RB_ROTATE_LEFT(head, gparent, tmp, field); \ } \ } \ RB_COLOR(head->rbh_root, field) = RB_BLACK; \ } \ \ attr void \ name##_RB_REMOVE_COLOR(struct name *head, struct type *parent, \ struct type *elm) \ { \ struct type *tmp; \ while ((elm == NULL || RB_COLOR(elm, field) == RB_BLACK) && \ elm != RB_ROOT(head)) { \ if (RB_LEFT(parent, field) == elm) { \ tmp = RB_RIGHT(parent, field); \ if (RB_COLOR(tmp, field) == RB_RED) { \ RB_SET_BLACKRED(tmp, parent, field); \ RB_ROTATE_LEFT(head, parent, tmp, field); \ tmp = RB_RIGHT(parent, field); \ } \ if ((RB_LEFT(tmp, field) == NULL || \ RB_COLOR(RB_LEFT(tmp, field), field) == RB_BLACK) && \ (RB_RIGHT(tmp, field) == NULL || \ RB_COLOR(RB_RIGHT(tmp, field), field) == RB_BLACK)) { \ RB_COLOR(tmp, field) = RB_RED; \ elm = parent; \ parent = RB_PARENT(elm, field); \ } else { \ if (RB_RIGHT(tmp, field) == NULL || \ 
RB_COLOR(RB_RIGHT(tmp, field), field) == RB_BLACK) { \ struct type *oleft; \ if ((oleft = RB_LEFT(tmp, field)) \ != NULL) \ RB_COLOR(oleft, field) = RB_BLACK; \ RB_COLOR(tmp, field) = RB_RED; \ RB_ROTATE_RIGHT(head, tmp, oleft, field); \ tmp = RB_RIGHT(parent, field); \ } \ RB_COLOR(tmp, field) = RB_COLOR(parent, field); \ RB_COLOR(parent, field) = RB_BLACK; \ if (RB_RIGHT(tmp, field)) \ RB_COLOR(RB_RIGHT(tmp, field), field) = RB_BLACK; \ RB_ROTATE_LEFT(head, parent, tmp, field); \ elm = RB_ROOT(head); \ break; \ } \ } else { \ tmp = RB_LEFT(parent, field); \ if (RB_COLOR(tmp, field) == RB_RED) { \ RB_SET_BLACKRED(tmp, parent, field); \ RB_ROTATE_RIGHT(head, parent, tmp, field); \ tmp = RB_LEFT(parent, field); \ } \ if ((RB_LEFT(tmp, field) == NULL || \ RB_COLOR(RB_LEFT(tmp, field), field) == RB_BLACK) && \ (RB_RIGHT(tmp, field) == NULL || \ RB_COLOR(RB_RIGHT(tmp, field), field) == RB_BLACK)) { \ RB_COLOR(tmp, field) = RB_RED; \ elm = parent; \ parent = RB_PARENT(elm, field); \ } else { \ if (RB_LEFT(tmp, field) == NULL || \ RB_COLOR(RB_LEFT(tmp, field), field) == RB_BLACK) { \ struct type *oright; \ if ((oright = RB_RIGHT(tmp, field)) \ != NULL) \ RB_COLOR(oright, field) = RB_BLACK; \ RB_COLOR(tmp, field) = RB_RED; \ RB_ROTATE_LEFT(head, tmp, oright, field); \ tmp = RB_LEFT(parent, field); \ } \ RB_COLOR(tmp, field) = RB_COLOR(parent, field); \ RB_COLOR(parent, field) = RB_BLACK; \ if (RB_LEFT(tmp, field)) \ RB_COLOR(RB_LEFT(tmp, field), field) = RB_BLACK; \ RB_ROTATE_RIGHT(head, parent, tmp, field); \ elm = RB_ROOT(head); \ break; \ } \ } \ } \ if (elm) \ RB_COLOR(elm, field) = RB_BLACK; \ } \ \ attr struct type * \ name##_RB_REMOVE(struct name *head, struct type *elm) \ { \ struct type *child, *parent, *old = elm; \ int color; \ if (RB_LEFT(elm, field) == NULL) \ child = RB_RIGHT(elm, field); \ else if (RB_RIGHT(elm, field) == NULL) \ child = RB_LEFT(elm, field); \ else { \ struct type *left; \ elm = RB_RIGHT(elm, field); \ while ((left = RB_LEFT(elm, field)) != NULL) \ elm = left; \ child = RB_RIGHT(elm, field); \ parent = RB_PARENT(elm, field); \ color = RB_COLOR(elm, field); \ if (child) \ RB_PARENT(child, field) = parent; \ if (parent) { \ if (RB_LEFT(parent, field) == elm) \ RB_LEFT(parent, field) = child; \ else \ RB_RIGHT(parent, field) = child; \ RB_AUGMENT(parent); \ } else \ RB_ROOT(head) = child; \ if (RB_PARENT(elm, field) == old) \ parent = elm; \ (elm)->field = (old)->field; \ if (RB_PARENT(old, field)) { \ if (RB_LEFT(RB_PARENT(old, field), field) == old) \ RB_LEFT(RB_PARENT(old, field), field) = elm; \ else \ RB_RIGHT(RB_PARENT(old, field), field) = elm; \ RB_AUGMENT(RB_PARENT(old, field)); \ } else \ RB_ROOT(head) = elm; \ RB_PARENT(RB_LEFT(old, field), field) = elm; \ if (RB_RIGHT(old, field)) \ RB_PARENT(RB_RIGHT(old, field), field) = elm; \ if (parent) { \ left = parent; \ do { \ RB_AUGMENT(left); \ } while ((left = RB_PARENT(left, field)) != NULL); \ } \ goto color; \ } \ parent = RB_PARENT(elm, field); \ color = RB_COLOR(elm, field); \ if (child) \ RB_PARENT(child, field) = parent; \ if (parent) { \ if (RB_LEFT(parent, field) == elm) \ RB_LEFT(parent, field) = child; \ else \ RB_RIGHT(parent, field) = child; \ RB_AUGMENT(parent); \ } else \ RB_ROOT(head) = child; \ color: \ if (color == RB_BLACK) \ name##_RB_REMOVE_COLOR(head, parent, child); \ return (old); \ } \ \ /* Inserts a node into the RB tree */ \ attr struct type * \ name##_RB_INSERT(struct name *head, struct type *elm) \ { \ struct type *tmp; \ struct type *parent = NULL; \ int comp = 0; \ tmp = 
RB_ROOT(head); \ while (tmp) { \ parent = tmp; \ comp = (cmp)(elm, parent); \ if (comp < 0) \ tmp = RB_LEFT(tmp, field); \ else if (comp > 0) \ tmp = RB_RIGHT(tmp, field); \ else \ return (tmp); \ } \ RB_SET(elm, parent, field); \ if (parent != NULL) { \ if (comp < 0) \ RB_LEFT(parent, field) = elm; \ else \ RB_RIGHT(parent, field) = elm; \ RB_AUGMENT(parent); \ } else \ RB_ROOT(head) = elm; \ name##_RB_INSERT_COLOR(head, elm); \ return (NULL); \ } \ \ /* Finds the node with the same key as elm */ \ attr struct type * \ name##_RB_FIND(struct name *head, struct type *elm) \ { \ struct type *tmp = RB_ROOT(head); \ int comp; \ while (tmp) { \ comp = cmp(elm, tmp); \ if (comp < 0) \ tmp = RB_LEFT(tmp, field); \ else if (comp > 0) \ tmp = RB_RIGHT(tmp, field); \ else \ return (tmp); \ } \ return (NULL); \ } \ \ /* Finds the first node greater than or equal to the search key */ \ attr struct type * \ name##_RB_NFIND(struct name *head, struct type *elm) \ { \ struct type *tmp = RB_ROOT(head); \ struct type *res = NULL; \ int comp; \ while (tmp) { \ comp = cmp(elm, tmp); \ if (comp < 0) { \ res = tmp; \ tmp = RB_LEFT(tmp, field); \ } \ else if (comp > 0) \ tmp = RB_RIGHT(tmp, field); \ else \ return (tmp); \ } \ return (res); \ } \ \ /* ARGSUSED */ \ attr struct type * \ name##_RB_NEXT(struct type *elm) \ { \ if (RB_RIGHT(elm, field)) { \ elm = RB_RIGHT(elm, field); \ while (RB_LEFT(elm, field)) \ elm = RB_LEFT(elm, field); \ } else { \ if (RB_PARENT(elm, field) && \ (elm == RB_LEFT(RB_PARENT(elm, field), field))) \ elm = RB_PARENT(elm, field); \ else { \ while (RB_PARENT(elm, field) && \ (elm == RB_RIGHT(RB_PARENT(elm, field), field))) \ elm = RB_PARENT(elm, field); \ elm = RB_PARENT(elm, field); \ } \ } \ return (elm); \ } \ \ /* ARGSUSED */ \ attr struct type * \ name##_RB_PREV(struct type *elm) \ { \ if (RB_LEFT(elm, field)) { \ elm = RB_LEFT(elm, field); \ while (RB_RIGHT(elm, field)) \ elm = RB_RIGHT(elm, field); \ } else { \ if (RB_PARENT(elm, field) && \ (elm == RB_RIGHT(RB_PARENT(elm, field), field))) \ elm = RB_PARENT(elm, field); \ else { \ while (RB_PARENT(elm, field) && \ (elm == RB_LEFT(RB_PARENT(elm, field), field))) \ elm = RB_PARENT(elm, field); \ elm = RB_PARENT(elm, field); \ } \ } \ return (elm); \ } \ \ attr struct type * \ name##_RB_MINMAX(struct name *head, int val) \ { \ struct type *tmp = RB_ROOT(head); \ struct type *parent = NULL; \ while (tmp) { \ parent = tmp; \ if (val < 0) \ tmp = RB_LEFT(tmp, field); \ else \ tmp = RB_RIGHT(tmp, field); \ } \ return (parent); \ } #define RB_NEGINF -1 #define RB_INF 1 #define RB_INSERT(name, x, y) name##_RB_INSERT(x, y) #define RB_REMOVE(name, x, y) name##_RB_REMOVE(x, y) #define RB_FIND(name, x, y) name##_RB_FIND(x, y) #define RB_NFIND(name, x, y) name##_RB_NFIND(x, y) #define RB_NEXT(name, x, y) name##_RB_NEXT(y) #define RB_PREV(name, x, y) name##_RB_PREV(y) #define RB_MIN(name, x) name##_RB_MINMAX(x, RB_NEGINF) #define RB_MAX(name, x) name##_RB_MINMAX(x, RB_INF) #define RB_FOREACH(x, name, head) \ for ((x) = RB_MIN(name, head); \ (x) != NULL; \ (x) = name##_RB_NEXT(x)) #define RB_FOREACH_FROM(x, name, y) \ for ((x) = (y); \ ((x) != NULL) && ((y) = name##_RB_NEXT(x), (x) != NULL); \ (x) = (y)) #define RB_FOREACH_SAFE(x, name, head, y) \ for ((x) = RB_MIN(name, head); \ ((x) != NULL) && ((y) = name##_RB_NEXT(x), (x) != NULL); \ (x) = (y)) #define RB_FOREACH_REVERSE(x, name, head) \ for ((x) = RB_MAX(name, head); \ (x) != NULL; \ (x) = name##_RB_PREV(x)) #define RB_FOREACH_REVERSE_FROM(x, name, y) \ for ((x) = (y); \ ((x) != NULL) && 
((y) = name##_RB_PREV(x), (x) != NULL); \ (x) = (y)) #define RB_FOREACH_REVERSE_SAFE(x, name, head, y) \ for ((x) = RB_MAX(name, head); \ ((x) != NULL) && ((y) = name##_RB_PREV(x), (x) != NULL); \ (x) = (y)) #endif /* UV_TREE_H_ */ gevent-24.11.1/deps/libuv/include/uv/unix.h000066400000000000000000000460601471441230600204450ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_UNIX_H #define UV_UNIX_H #include #include #include #include #include #include #include #include #include /* MAXHOSTNAMELEN on Solaris */ #include #include #if !defined(__MVS__) #include #include /* MAXHOSTNAMELEN on Linux and the BSDs */ #endif #include #include #include "uv/threadpool.h" #if defined(__linux__) # include "uv/linux.h" #elif defined (__MVS__) # include "uv/os390.h" #elif defined(__PASE__) /* __PASE__ and _AIX are both defined on IBM i */ # include "uv/posix.h" /* IBM i needs uv/posix.h, not uv/aix.h */ #elif defined(_AIX) # include "uv/aix.h" #elif defined(__sun) # include "uv/sunos.h" #elif defined(__APPLE__) # include "uv/darwin.h" #elif defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) # include "uv/bsd.h" #elif defined(__CYGWIN__) || \ defined(__MSYS__) || \ defined(__HAIKU__) || \ defined(__QNX__) || \ defined(__GNU__) # include "uv/posix.h" #endif #ifndef NI_MAXHOST # define NI_MAXHOST 1025 #endif #ifndef NI_MAXSERV # define NI_MAXSERV 32 #endif #ifndef UV_IO_PRIVATE_PLATFORM_FIELDS # define UV_IO_PRIVATE_PLATFORM_FIELDS /* empty */ #endif struct uv__io_s; struct uv_loop_s; typedef void (*uv__io_cb)(struct uv_loop_s* loop, struct uv__io_s* w, unsigned int events); typedef struct uv__io_s uv__io_t; struct uv__io_s { uv__io_cb cb; void* pending_queue[2]; void* watcher_queue[2]; unsigned int pevents; /* Pending event mask i.e. mask at next tick. */ unsigned int events; /* Current event mask. */ int fd; UV_IO_PRIVATE_PLATFORM_FIELDS }; #ifndef UV_PLATFORM_SEM_T # define UV_PLATFORM_SEM_T sem_t #endif #ifndef UV_PLATFORM_LOOP_FIELDS # define UV_PLATFORM_LOOP_FIELDS /* empty */ #endif #ifndef UV_PLATFORM_FS_EVENT_FIELDS # define UV_PLATFORM_FS_EVENT_FIELDS /* empty */ #endif #ifndef UV_STREAM_PRIVATE_PLATFORM_FIELDS # define UV_STREAM_PRIVATE_PLATFORM_FIELDS /* empty */ #endif /* Note: May be cast to struct iovec. See writev(2). 
*/ typedef struct uv_buf_t { char* base; size_t len; } uv_buf_t; typedef int uv_file; typedef int uv_os_sock_t; typedef int uv_os_fd_t; typedef pid_t uv_pid_t; #define UV_ONCE_INIT PTHREAD_ONCE_INIT typedef pthread_once_t uv_once_t; typedef pthread_t uv_thread_t; typedef pthread_mutex_t uv_mutex_t; typedef pthread_rwlock_t uv_rwlock_t; typedef UV_PLATFORM_SEM_T uv_sem_t; typedef pthread_cond_t uv_cond_t; typedef pthread_key_t uv_key_t; /* Note: guard clauses should match uv_barrier_init's in src/unix/thread.c. */ #if defined(_AIX) || \ defined(__OpenBSD__) || \ !defined(PTHREAD_BARRIER_SERIAL_THREAD) /* TODO(bnoordhuis) Merge into uv_barrier_t in v2. */ struct _uv_barrier { uv_mutex_t mutex; uv_cond_t cond; unsigned threshold; unsigned in; unsigned out; }; typedef struct { struct _uv_barrier* b; # if defined(PTHREAD_BARRIER_SERIAL_THREAD) /* TODO(bnoordhuis) Remove padding in v2. */ char pad[sizeof(pthread_barrier_t) - sizeof(struct _uv_barrier*)]; # endif } uv_barrier_t; #else typedef pthread_barrier_t uv_barrier_t; #endif /* Platform-specific definitions for uv_spawn support. */ typedef gid_t uv_gid_t; typedef uid_t uv_uid_t; typedef struct dirent uv__dirent_t; #define UV_DIR_PRIVATE_FIELDS \ DIR* dir; #if defined(DT_UNKNOWN) # define HAVE_DIRENT_TYPES # if defined(DT_REG) # define UV__DT_FILE DT_REG # else # define UV__DT_FILE -1 # endif # if defined(DT_DIR) # define UV__DT_DIR DT_DIR # else # define UV__DT_DIR -2 # endif # if defined(DT_LNK) # define UV__DT_LINK DT_LNK # else # define UV__DT_LINK -3 # endif # if defined(DT_FIFO) # define UV__DT_FIFO DT_FIFO # else # define UV__DT_FIFO -4 # endif # if defined(DT_SOCK) # define UV__DT_SOCKET DT_SOCK # else # define UV__DT_SOCKET -5 # endif # if defined(DT_CHR) # define UV__DT_CHAR DT_CHR # else # define UV__DT_CHAR -6 # endif # if defined(DT_BLK) # define UV__DT_BLOCK DT_BLK # else # define UV__DT_BLOCK -7 # endif #endif /* Platform-specific definitions for uv_dlopen support. */ #define UV_DYNAMIC /* empty */ typedef struct { void* handle; char* errmsg; } uv_lib_t; #define UV_LOOP_PRIVATE_FIELDS \ unsigned long flags; \ int backend_fd; \ void* pending_queue[2]; \ void* watcher_queue[2]; \ uv__io_t** watchers; \ unsigned int nwatchers; \ unsigned int nfds; \ void* wq[2]; \ uv_mutex_t wq_mutex; \ uv_async_t wq_async; \ uv_rwlock_t cloexec_lock; \ uv_handle_t* closing_handles; \ void* process_handles[2]; \ void* prepare_handles[2]; \ void* check_handles[2]; \ void* idle_handles[2]; \ void* async_handles[2]; \ void (*async_unused)(void); /* TODO(bnoordhuis) Remove in libuv v2. 
*/ \ uv__io_t async_io_watcher; \ int async_wfd; \ struct { \ void* min; \ unsigned int nelts; \ } timer_heap; \ uint64_t timer_counter; \ uint64_t time; \ int signal_pipefd[2]; \ uv__io_t signal_io_watcher; \ uv_signal_t child_watcher; \ int emfile_fd; \ UV_PLATFORM_LOOP_FIELDS \ #define UV_REQ_TYPE_PRIVATE /* empty */ #define UV_REQ_PRIVATE_FIELDS /* empty */ #define UV_PRIVATE_REQ_TYPES /* empty */ #define UV_WRITE_PRIVATE_FIELDS \ void* queue[2]; \ unsigned int write_index; \ uv_buf_t* bufs; \ unsigned int nbufs; \ int error; \ uv_buf_t bufsml[4]; \ #define UV_CONNECT_PRIVATE_FIELDS \ void* queue[2]; \ #define UV_SHUTDOWN_PRIVATE_FIELDS /* empty */ #define UV_UDP_SEND_PRIVATE_FIELDS \ void* queue[2]; \ struct sockaddr_storage addr; \ unsigned int nbufs; \ uv_buf_t* bufs; \ ssize_t status; \ uv_udp_send_cb send_cb; \ uv_buf_t bufsml[4]; \ #define UV_HANDLE_PRIVATE_FIELDS \ uv_handle_t* next_closing; \ unsigned int flags; \ #define UV_STREAM_PRIVATE_FIELDS \ uv_connect_t *connect_req; \ uv_shutdown_t *shutdown_req; \ uv__io_t io_watcher; \ void* write_queue[2]; \ void* write_completed_queue[2]; \ uv_connection_cb connection_cb; \ int delayed_error; \ int accepted_fd; \ void* queued_fds; \ UV_STREAM_PRIVATE_PLATFORM_FIELDS \ #define UV_TCP_PRIVATE_FIELDS /* empty */ #define UV_UDP_PRIVATE_FIELDS \ uv_alloc_cb alloc_cb; \ uv_udp_recv_cb recv_cb; \ uv__io_t io_watcher; \ void* write_queue[2]; \ void* write_completed_queue[2]; \ #define UV_PIPE_PRIVATE_FIELDS \ const char* pipe_fname; /* strdup'ed */ #define UV_POLL_PRIVATE_FIELDS \ uv__io_t io_watcher; #define UV_PREPARE_PRIVATE_FIELDS \ uv_prepare_cb prepare_cb; \ void* queue[2]; \ #define UV_CHECK_PRIVATE_FIELDS \ uv_check_cb check_cb; \ void* queue[2]; \ #define UV_IDLE_PRIVATE_FIELDS \ uv_idle_cb idle_cb; \ void* queue[2]; \ #define UV_ASYNC_PRIVATE_FIELDS \ uv_async_cb async_cb; \ void* queue[2]; \ int pending; \ #define UV_TIMER_PRIVATE_FIELDS \ uv_timer_cb timer_cb; \ void* heap_node[3]; \ uint64_t timeout; \ uint64_t repeat; \ uint64_t start_id; #define UV_GETADDRINFO_PRIVATE_FIELDS \ struct uv__work work_req; \ uv_getaddrinfo_cb cb; \ struct addrinfo* hints; \ char* hostname; \ char* service; \ struct addrinfo* addrinfo; \ int retcode; #define UV_GETNAMEINFO_PRIVATE_FIELDS \ struct uv__work work_req; \ uv_getnameinfo_cb getnameinfo_cb; \ struct sockaddr_storage storage; \ int flags; \ char host[NI_MAXHOST]; \ char service[NI_MAXSERV]; \ int retcode; #define UV_PROCESS_PRIVATE_FIELDS \ void* queue[2]; \ int status; \ #define UV_FS_PRIVATE_FIELDS \ const char *new_path; \ uv_file file; \ int flags; \ mode_t mode; \ unsigned int nbufs; \ uv_buf_t* bufs; \ off_t off; \ uv_uid_t uid; \ uv_gid_t gid; \ double atime; \ double mtime; \ struct uv__work work_req; \ uv_buf_t bufsml[4]; \ #define UV_WORK_PRIVATE_FIELDS \ struct uv__work work_req; #define UV_TTY_PRIVATE_FIELDS \ struct termios orig_termios; \ int mode; #define UV_SIGNAL_PRIVATE_FIELDS \ /* RB_ENTRY(uv_signal_s) tree_entry; */ \ struct { \ struct uv_signal_s* rbe_left; \ struct uv_signal_s* rbe_right; \ struct uv_signal_s* rbe_parent; \ int rbe_color; \ } tree_entry; \ /* Use two counters here so we don have to fiddle with atomics. 
*/ \ unsigned int caught_signals; \ unsigned int dispatched_signals; #define UV_FS_EVENT_PRIVATE_FIELDS \ uv_fs_event_cb cb; \ UV_PLATFORM_FS_EVENT_FIELDS \ /* fs open() flags supported on this platform: */ #if defined(O_APPEND) # define UV_FS_O_APPEND O_APPEND #else # define UV_FS_O_APPEND 0 #endif #if defined(O_CREAT) # define UV_FS_O_CREAT O_CREAT #else # define UV_FS_O_CREAT 0 #endif #if defined(__linux__) && defined(__arm__) # define UV_FS_O_DIRECT 0x10000 #elif defined(__linux__) && defined(__m68k__) # define UV_FS_O_DIRECT 0x10000 #elif defined(__linux__) && defined(__mips__) # define UV_FS_O_DIRECT 0x08000 #elif defined(__linux__) && defined(__powerpc__) # define UV_FS_O_DIRECT 0x20000 #elif defined(__linux__) && defined(__s390x__) # define UV_FS_O_DIRECT 0x04000 #elif defined(__linux__) && defined(__x86_64__) # define UV_FS_O_DIRECT 0x04000 #elif defined(O_DIRECT) # define UV_FS_O_DIRECT O_DIRECT #else # define UV_FS_O_DIRECT 0 #endif #if defined(O_DIRECTORY) # define UV_FS_O_DIRECTORY O_DIRECTORY #else # define UV_FS_O_DIRECTORY 0 #endif #if defined(O_DSYNC) # define UV_FS_O_DSYNC O_DSYNC #else # define UV_FS_O_DSYNC 0 #endif #if defined(O_EXCL) # define UV_FS_O_EXCL O_EXCL #else # define UV_FS_O_EXCL 0 #endif #if defined(O_EXLOCK) # define UV_FS_O_EXLOCK O_EXLOCK #else # define UV_FS_O_EXLOCK 0 #endif #if defined(O_NOATIME) # define UV_FS_O_NOATIME O_NOATIME #else # define UV_FS_O_NOATIME 0 #endif #if defined(O_NOCTTY) # define UV_FS_O_NOCTTY O_NOCTTY #else # define UV_FS_O_NOCTTY 0 #endif #if defined(O_NOFOLLOW) # define UV_FS_O_NOFOLLOW O_NOFOLLOW #else # define UV_FS_O_NOFOLLOW 0 #endif #if defined(O_NONBLOCK) # define UV_FS_O_NONBLOCK O_NONBLOCK #else # define UV_FS_O_NONBLOCK 0 #endif #if defined(O_RDONLY) # define UV_FS_O_RDONLY O_RDONLY #else # define UV_FS_O_RDONLY 0 #endif #if defined(O_RDWR) # define UV_FS_O_RDWR O_RDWR #else # define UV_FS_O_RDWR 0 #endif #if defined(O_SYMLINK) # define UV_FS_O_SYMLINK O_SYMLINK #else # define UV_FS_O_SYMLINK 0 #endif #if defined(O_SYNC) # define UV_FS_O_SYNC O_SYNC #else # define UV_FS_O_SYNC 0 #endif #if defined(O_TRUNC) # define UV_FS_O_TRUNC O_TRUNC #else # define UV_FS_O_TRUNC 0 #endif #if defined(O_WRONLY) # define UV_FS_O_WRONLY O_WRONLY #else # define UV_FS_O_WRONLY 0 #endif /* fs open() flags supported on other platforms: */ #define UV_FS_O_FILEMAP 0 #define UV_FS_O_RANDOM 0 #define UV_FS_O_SHORT_LIVED 0 #define UV_FS_O_SEQUENTIAL 0 #define UV_FS_O_TEMPORARY 0 #endif /* UV_UNIX_H */ gevent-24.11.1/deps/libuv/include/uv/version.h000066400000000000000000000034471471441230600211510ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_VERSION_H #define UV_VERSION_H /* * Versions with the same major number are ABI stable. API is allowed to * evolve between minor releases, but only in a backwards compatible way. * Make sure you update the -soname directives in configure.ac * whenever you bump UV_VERSION_MAJOR or UV_VERSION_MINOR (but * not UV_VERSION_PATCH.) */ #define UV_VERSION_MAJOR 1 #define UV_VERSION_MINOR 44 #define UV_VERSION_PATCH 2 #define UV_VERSION_IS_RELEASE 1 #define UV_VERSION_SUFFIX "" #define UV_VERSION_HEX ((UV_VERSION_MAJOR << 16) | \ (UV_VERSION_MINOR << 8) | \ (UV_VERSION_PATCH)) #endif /* UV_VERSION_H */ gevent-24.11.1/deps/libuv/include/uv/win.h000066400000000000000000001007441471441230600202570ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef _WIN32_WINNT # define _WIN32_WINNT 0x0600 #endif #if !defined(_SSIZE_T_) && !defined(_SSIZE_T_DEFINED) typedef intptr_t ssize_t; # define SSIZE_MAX INTPTR_MAX # define _SSIZE_T_ # define _SSIZE_T_DEFINED #endif #include #if defined(__MINGW32__) && !defined(__MINGW64_VERSION_MAJOR) typedef struct pollfd { SOCKET fd; short events; short revents; } WSAPOLLFD, *PWSAPOLLFD, *LPWSAPOLLFD; #endif #ifndef LOCALE_INVARIANT # define LOCALE_INVARIANT 0x007f #endif #include // Disable the typedef in mstcpip.h of MinGW. #define _TCP_INITIAL_RTO_PARAMETERS _TCP_INITIAL_RTO_PARAMETERS__AVOID #define TCP_INITIAL_RTO_PARAMETERS TCP_INITIAL_RTO_PARAMETERS__AVOID #define PTCP_INITIAL_RTO_PARAMETERS PTCP_INITIAL_RTO_PARAMETERS__AVOID #include #undef _TCP_INITIAL_RTO_PARAMETERS #undef TCP_INITIAL_RTO_PARAMETERS #undef PTCP_INITIAL_RTO_PARAMETERS #include #include #include #include #include #if defined(_MSC_VER) && _MSC_VER < 1600 # include "uv/stdint-msvc2008.h" #else # include #endif #include "uv/tree.h" #include "uv/threadpool.h" #define MAX_PIPENAME_LEN 256 #ifndef S_IFLNK # define S_IFLNK 0xA000 #endif /* Additional signals supported by uv_signal and or uv_kill. 
The CRT defines * the following signals already: * * #define SIGINT 2 * #define SIGILL 4 * #define SIGABRT_COMPAT 6 * #define SIGFPE 8 * #define SIGSEGV 11 * #define SIGTERM 15 * #define SIGBREAK 21 * #define SIGABRT 22 * * The additional signals have values that are common on other Unix * variants (Linux and Darwin) */ #define SIGHUP 1 #define SIGKILL 9 #define SIGWINCH 28 /* Redefine NSIG to take SIGWINCH into consideration */ #if defined(NSIG) && NSIG <= SIGWINCH # undef NSIG #endif #ifndef NSIG # define NSIG SIGWINCH + 1 #endif /* The CRT defines SIGABRT_COMPAT as 6, which equals SIGABRT on many unix-like * platforms. However MinGW doesn't define it, so we do. */ #ifndef SIGABRT_COMPAT # define SIGABRT_COMPAT 6 #endif /* * Guids and typedefs for winsock extension functions * Mingw32 doesn't have these :-( */ #ifndef WSAID_ACCEPTEX # define WSAID_ACCEPTEX \ {0xb5367df1, 0xcbac, 0x11cf, \ {0x95, 0xca, 0x00, 0x80, 0x5f, 0x48, 0xa1, 0x92}} # define WSAID_CONNECTEX \ {0x25a207b9, 0xddf3, 0x4660, \ {0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e}} # define WSAID_GETACCEPTEXSOCKADDRS \ {0xb5367df2, 0xcbac, 0x11cf, \ {0x95, 0xca, 0x00, 0x80, 0x5f, 0x48, 0xa1, 0x92}} # define WSAID_DISCONNECTEX \ {0x7fda2e11, 0x8630, 0x436f, \ {0xa0, 0x31, 0xf5, 0x36, 0xa6, 0xee, 0xc1, 0x57}} # define WSAID_TRANSMITFILE \ {0xb5367df0, 0xcbac, 0x11cf, \ {0x95, 0xca, 0x00, 0x80, 0x5f, 0x48, 0xa1, 0x92}} typedef BOOL (PASCAL *LPFN_ACCEPTEX) (SOCKET sListenSocket, SOCKET sAcceptSocket, PVOID lpOutputBuffer, DWORD dwReceiveDataLength, DWORD dwLocalAddressLength, DWORD dwRemoteAddressLength, LPDWORD lpdwBytesReceived, LPOVERLAPPED lpOverlapped); typedef BOOL (PASCAL *LPFN_CONNECTEX) (SOCKET s, const struct sockaddr* name, int namelen, PVOID lpSendBuffer, DWORD dwSendDataLength, LPDWORD lpdwBytesSent, LPOVERLAPPED lpOverlapped); typedef void (PASCAL *LPFN_GETACCEPTEXSOCKADDRS) (PVOID lpOutputBuffer, DWORD dwReceiveDataLength, DWORD dwLocalAddressLength, DWORD dwRemoteAddressLength, LPSOCKADDR* LocalSockaddr, LPINT LocalSockaddrLength, LPSOCKADDR* RemoteSockaddr, LPINT RemoteSockaddrLength); typedef BOOL (PASCAL *LPFN_DISCONNECTEX) (SOCKET hSocket, LPOVERLAPPED lpOverlapped, DWORD dwFlags, DWORD reserved); typedef BOOL (PASCAL *LPFN_TRANSMITFILE) (SOCKET hSocket, HANDLE hFile, DWORD nNumberOfBytesToWrite, DWORD nNumberOfBytesPerSend, LPOVERLAPPED lpOverlapped, LPTRANSMIT_FILE_BUFFERS lpTransmitBuffers, DWORD dwFlags); typedef PVOID RTL_SRWLOCK; typedef RTL_SRWLOCK SRWLOCK, *PSRWLOCK; #endif typedef int (WSAAPI* LPFN_WSARECV) (SOCKET socket, LPWSABUF buffers, DWORD buffer_count, LPDWORD bytes, LPDWORD flags, LPWSAOVERLAPPED overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine); typedef int (WSAAPI* LPFN_WSARECVFROM) (SOCKET socket, LPWSABUF buffers, DWORD buffer_count, LPDWORD bytes, LPDWORD flags, struct sockaddr* addr, LPINT addr_len, LPWSAOVERLAPPED overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine); #ifndef _NTDEF_ typedef LONG NTSTATUS; typedef NTSTATUS *PNTSTATUS; #endif #ifndef RTL_CONDITION_VARIABLE_INIT typedef PVOID CONDITION_VARIABLE, *PCONDITION_VARIABLE; #endif typedef struct _AFD_POLL_HANDLE_INFO { HANDLE Handle; ULONG Events; NTSTATUS Status; } AFD_POLL_HANDLE_INFO, *PAFD_POLL_HANDLE_INFO; typedef struct _AFD_POLL_INFO { LARGE_INTEGER Timeout; ULONG NumberOfHandles; ULONG Exclusive; AFD_POLL_HANDLE_INFO Handles[1]; } AFD_POLL_INFO, *PAFD_POLL_INFO; #define UV_MSAFD_PROVIDER_COUNT 4 /** * It should be possible to cast uv_buf_t[] to WSABUF[] * see 
http://msdn.microsoft.com/en-us/library/ms741542(v=vs.85).aspx */ typedef struct uv_buf_t { ULONG len; char* base; } uv_buf_t; typedef int uv_file; typedef SOCKET uv_os_sock_t; typedef HANDLE uv_os_fd_t; typedef int uv_pid_t; typedef HANDLE uv_thread_t; typedef HANDLE uv_sem_t; typedef CRITICAL_SECTION uv_mutex_t; /* This condition variable implementation is based on the SetEvent solution * (section 3.2) at http://www.cs.wustl.edu/~schmidt/win32-cv-1.html * We could not use the SignalObjectAndWait solution (section 3.4) because * it want the 2nd argument (type uv_mutex_t) of uv_cond_wait() and * uv_cond_timedwait() to be HANDLEs, but we use CRITICAL_SECTIONs. */ typedef union { CONDITION_VARIABLE cond_var; struct { unsigned int waiters_count; CRITICAL_SECTION waiters_count_lock; HANDLE signal_event; HANDLE broadcast_event; } unused_; /* TODO: retained for ABI compatibility; remove me in v2.x. */ } uv_cond_t; typedef struct { SRWLOCK read_write_lock_; /* TODO: retained for ABI compatibility; remove me in v2.x */ #ifdef _WIN64 unsigned char padding_[72]; #else unsigned char padding_[44]; #endif } uv_rwlock_t; typedef struct { unsigned int n; unsigned int count; uv_mutex_t mutex; uv_sem_t turnstile1; uv_sem_t turnstile2; } uv_barrier_t; typedef struct { DWORD tls_index; } uv_key_t; #define UV_ONCE_INIT { 0, NULL } typedef struct uv_once_s { unsigned char ran; HANDLE event; } uv_once_t; /* Platform-specific definitions for uv_spawn support. */ typedef unsigned char uv_uid_t; typedef unsigned char uv_gid_t; typedef struct uv__dirent_s { int d_type; char d_name[1]; } uv__dirent_t; #define UV_DIR_PRIVATE_FIELDS \ HANDLE dir_handle; \ WIN32_FIND_DATAW find_data; \ BOOL need_find_call; #define HAVE_DIRENT_TYPES #define UV__DT_DIR UV_DIRENT_DIR #define UV__DT_FILE UV_DIRENT_FILE #define UV__DT_LINK UV_DIRENT_LINK #define UV__DT_FIFO UV_DIRENT_FIFO #define UV__DT_SOCKET UV_DIRENT_SOCKET #define UV__DT_CHAR UV_DIRENT_CHAR #define UV__DT_BLOCK UV_DIRENT_BLOCK /* Platform-specific definitions for uv_dlopen support. */ #define UV_DYNAMIC FAR WINAPI typedef struct { HMODULE handle; char* errmsg; } uv_lib_t; #define UV_LOOP_PRIVATE_FIELDS \ /* The loop's I/O completion port */ \ HANDLE iocp; \ /* The current time according to the event loop. in msecs. */ \ uint64_t time; \ /* Tail of a single-linked circular queue of pending reqs. If the queue */ \ /* is empty, tail_ is NULL. If there is only one item, */ \ /* tail_->next_req == tail_ */ \ uv_req_t* pending_reqs_tail; \ /* Head of a single-linked list of closed handles */ \ uv_handle_t* endgame_handles; \ /* TODO(bnoordhuis) Stop heap-allocating |timer_heap| in libuv v2.x. */ \ void* timer_heap; \ /* Lists of active loop (prepare / check / idle) watchers */ \ uv_prepare_t* prepare_handles; \ uv_check_t* check_handles; \ uv_idle_t* idle_handles; \ /* This pointer will refer to the prepare/check/idle handle whose */ \ /* callback is scheduled to be called next. This is needed to allow */ \ /* safe removal from one of the lists above while that list being */ \ /* iterated over. 
*/ \ uv_prepare_t* next_prepare_handle; \ uv_check_t* next_check_handle; \ uv_idle_t* next_idle_handle; \ /* This handle holds the peer sockets for the fast variant of uv_poll_t */ \ SOCKET poll_peer_sockets[UV_MSAFD_PROVIDER_COUNT]; \ /* Counter to keep track of active tcp streams */ \ unsigned int active_tcp_streams; \ /* Counter to keep track of active udp streams */ \ unsigned int active_udp_streams; \ /* Counter to started timer */ \ uint64_t timer_counter; \ /* Threadpool */ \ void* wq[2]; \ uv_mutex_t wq_mutex; \ uv_async_t wq_async; #define UV_REQ_TYPE_PRIVATE \ /* TODO: remove the req suffix */ \ UV_ACCEPT, \ UV_FS_EVENT_REQ, \ UV_POLL_REQ, \ UV_PROCESS_EXIT, \ UV_READ, \ UV_UDP_RECV, \ UV_WAKEUP, \ UV_SIGNAL_REQ, #define UV_REQ_PRIVATE_FIELDS \ union { \ /* Used by I/O operations */ \ struct { \ OVERLAPPED overlapped; \ size_t queued_bytes; \ } io; \ /* in v2, we can move these to the UV_CONNECT_PRIVATE_FIELDS */ \ struct { \ ULONG_PTR result; /* overlapped.Internal is reused to hold the result */\ HANDLE pipeHandle; \ DWORD duplex_flags; \ } connect; \ } u; \ struct uv_req_s* next_req; #define UV_WRITE_PRIVATE_FIELDS \ int coalesced; \ uv_buf_t write_buffer; \ HANDLE event_handle; \ HANDLE wait_handle; #define UV_CONNECT_PRIVATE_FIELDS \ /* empty */ #define UV_SHUTDOWN_PRIVATE_FIELDS \ /* empty */ #define UV_UDP_SEND_PRIVATE_FIELDS \ /* empty */ #define UV_PRIVATE_REQ_TYPES \ typedef struct uv_pipe_accept_s { \ UV_REQ_FIELDS \ HANDLE pipeHandle; \ struct uv_pipe_accept_s* next_pending; \ } uv_pipe_accept_t; \ \ typedef struct uv_tcp_accept_s { \ UV_REQ_FIELDS \ SOCKET accept_socket; \ char accept_buffer[sizeof(struct sockaddr_storage) * 2 + 32]; \ HANDLE event_handle; \ HANDLE wait_handle; \ struct uv_tcp_accept_s* next_pending; \ } uv_tcp_accept_t; \ \ typedef struct uv_read_s { \ UV_REQ_FIELDS \ HANDLE event_handle; \ HANDLE wait_handle; \ } uv_read_t; #define uv_stream_connection_fields \ unsigned int write_reqs_pending; \ uv_shutdown_t* shutdown_req; #define uv_stream_server_fields \ uv_connection_cb connection_cb; #define UV_STREAM_PRIVATE_FIELDS \ unsigned int reqs_pending; \ int activecnt; \ uv_read_t read_req; \ union { \ struct { uv_stream_connection_fields } conn; \ struct { uv_stream_server_fields } serv; \ } stream; #define uv_tcp_server_fields \ uv_tcp_accept_t* accept_reqs; \ unsigned int processed_accepts; \ uv_tcp_accept_t* pending_accepts; \ LPFN_ACCEPTEX func_acceptex; #define uv_tcp_connection_fields \ uv_buf_t read_buffer; \ LPFN_CONNECTEX func_connectex; #define UV_TCP_PRIVATE_FIELDS \ SOCKET socket; \ int delayed_error; \ union { \ struct { uv_tcp_server_fields } serv; \ struct { uv_tcp_connection_fields } conn; \ } tcp; #define UV_UDP_PRIVATE_FIELDS \ SOCKET socket; \ unsigned int reqs_pending; \ int activecnt; \ uv_req_t recv_req; \ uv_buf_t recv_buffer; \ struct sockaddr_storage recv_from; \ int recv_from_len; \ uv_udp_recv_cb recv_cb; \ uv_alloc_cb alloc_cb; \ LPFN_WSARECV func_wsarecv; \ LPFN_WSARECVFROM func_wsarecvfrom; #define uv_pipe_server_fields \ int pending_instances; \ uv_pipe_accept_t* accept_reqs; \ uv_pipe_accept_t* pending_accepts; #define uv_pipe_connection_fields \ uv_timer_t* eof_timer; \ uv_write_t dummy; /* TODO: retained for ABI compat; remove this in v2.x. */ \ DWORD ipc_remote_pid; \ union { \ uint32_t payload_remaining; \ uint64_t dummy; /* TODO: retained for ABI compat; remove this in v2.x. 
*/ \ } ipc_data_frame; \ void* ipc_xfer_queue[2]; \ int ipc_xfer_queue_length; \ uv_write_t* non_overlapped_writes_tail; \ CRITICAL_SECTION readfile_thread_lock; \ volatile HANDLE readfile_thread_handle; #define UV_PIPE_PRIVATE_FIELDS \ HANDLE handle; \ WCHAR* name; \ union { \ struct { uv_pipe_server_fields } serv; \ struct { uv_pipe_connection_fields } conn; \ } pipe; /* TODO: put the parser states in an union - TTY handles are always half-duplex * so read-state can safely overlap write-state. */ #define UV_TTY_PRIVATE_FIELDS \ HANDLE handle; \ union { \ struct { \ /* Used for readable TTY handles */ \ /* TODO: remove me in v2.x. */ \ HANDLE unused_; \ uv_buf_t read_line_buffer; \ HANDLE read_raw_wait; \ /* Fields used for translating win keystrokes into vt100 characters */ \ char last_key[8]; \ unsigned char last_key_offset; \ unsigned char last_key_len; \ WCHAR last_utf16_high_surrogate; \ INPUT_RECORD last_input_record; \ } rd; \ struct { \ /* Used for writable TTY handles */ \ /* utf8-to-utf16 conversion state */ \ unsigned int utf8_codepoint; \ unsigned char utf8_bytes_left; \ /* eol conversion state */ \ unsigned char previous_eol; \ /* ansi parser state */ \ unsigned short ansi_parser_state; \ unsigned char ansi_csi_argc; \ unsigned short ansi_csi_argv[4]; \ COORD saved_position; \ WORD saved_attributes; \ } wr; \ } tty; #define UV_POLL_PRIVATE_FIELDS \ SOCKET socket; \ /* Used in fast mode */ \ SOCKET peer_socket; \ AFD_POLL_INFO afd_poll_info_1; \ AFD_POLL_INFO afd_poll_info_2; \ /* Used in fast and slow mode. */ \ uv_req_t poll_req_1; \ uv_req_t poll_req_2; \ unsigned char submitted_events_1; \ unsigned char submitted_events_2; \ unsigned char mask_events_1; \ unsigned char mask_events_2; \ unsigned char events; #define UV_TIMER_PRIVATE_FIELDS \ void* heap_node[3]; \ int unused; \ uint64_t timeout; \ uint64_t repeat; \ uint64_t start_id; \ uv_timer_cb timer_cb; #define UV_ASYNC_PRIVATE_FIELDS \ struct uv_req_s async_req; \ uv_async_cb async_cb; \ /* char to avoid alignment issues */ \ char volatile async_sent; #define UV_PREPARE_PRIVATE_FIELDS \ uv_prepare_t* prepare_prev; \ uv_prepare_t* prepare_next; \ uv_prepare_cb prepare_cb; #define UV_CHECK_PRIVATE_FIELDS \ uv_check_t* check_prev; \ uv_check_t* check_next; \ uv_check_cb check_cb; #define UV_IDLE_PRIVATE_FIELDS \ uv_idle_t* idle_prev; \ uv_idle_t* idle_next; \ uv_idle_cb idle_cb; #define UV_HANDLE_PRIVATE_FIELDS \ uv_handle_t* endgame_next; \ unsigned int flags; #define UV_GETADDRINFO_PRIVATE_FIELDS \ struct uv__work work_req; \ uv_getaddrinfo_cb getaddrinfo_cb; \ void* alloc; \ WCHAR* node; \ WCHAR* service; \ /* The addrinfoW field is used to store a pointer to the hints, and */ \ /* later on to store the result of GetAddrInfoW. The final result will */ \ /* be converted to struct addrinfo* and stored in the addrinfo field. */ \ struct addrinfoW* addrinfow; \ struct addrinfo* addrinfo; \ int retcode; #define UV_GETNAMEINFO_PRIVATE_FIELDS \ struct uv__work work_req; \ uv_getnameinfo_cb getnameinfo_cb; \ struct sockaddr_storage storage; \ int flags; \ char host[NI_MAXHOST]; \ char service[NI_MAXSERV]; \ int retcode; #define UV_PROCESS_PRIVATE_FIELDS \ struct uv_process_exit_s { \ UV_REQ_FIELDS \ } exit_req; \ BYTE* child_stdio_buffer; \ int exit_signal; \ HANDLE wait_handle; \ HANDLE process_handle; \ volatile char exit_cb_pending; #define UV_FS_PRIVATE_FIELDS \ struct uv__work work_req; \ int flags; \ DWORD sys_errno_; \ union { \ /* TODO: remove me in 0.9. 
*/ \ WCHAR* pathw; \ int fd; \ } file; \ union { \ struct { \ int mode; \ WCHAR* new_pathw; \ int file_flags; \ int fd_out; \ unsigned int nbufs; \ uv_buf_t* bufs; \ int64_t offset; \ uv_buf_t bufsml[4]; \ } info; \ struct { \ double atime; \ double mtime; \ } time; \ } fs; #define UV_WORK_PRIVATE_FIELDS \ struct uv__work work_req; #define UV_FS_EVENT_PRIVATE_FIELDS \ struct uv_fs_event_req_s { \ UV_REQ_FIELDS \ } req; \ HANDLE dir_handle; \ int req_pending; \ uv_fs_event_cb cb; \ WCHAR* filew; \ WCHAR* short_filew; \ WCHAR* dirw; \ char* buffer; #define UV_SIGNAL_PRIVATE_FIELDS \ RB_ENTRY(uv_signal_s) tree_entry; \ struct uv_req_s signal_req; \ unsigned long pending_signum; #ifndef F_OK #define F_OK 0 #endif #ifndef R_OK #define R_OK 4 #endif #ifndef W_OK #define W_OK 2 #endif #ifndef X_OK #define X_OK 1 #endif /* fs open() flags supported on this platform: */ #define UV_FS_O_APPEND _O_APPEND #define UV_FS_O_CREAT _O_CREAT #define UV_FS_O_EXCL _O_EXCL #define UV_FS_O_FILEMAP 0x20000000 #define UV_FS_O_RANDOM _O_RANDOM #define UV_FS_O_RDONLY _O_RDONLY #define UV_FS_O_RDWR _O_RDWR #define UV_FS_O_SEQUENTIAL _O_SEQUENTIAL #define UV_FS_O_SHORT_LIVED _O_SHORT_LIVED #define UV_FS_O_TEMPORARY _O_TEMPORARY #define UV_FS_O_TRUNC _O_TRUNC #define UV_FS_O_WRONLY _O_WRONLY /* fs open() flags supported on other platforms (or mapped on this platform): */ #define UV_FS_O_DIRECT 0x02000000 /* FILE_FLAG_NO_BUFFERING */ #define UV_FS_O_DIRECTORY 0 #define UV_FS_O_DSYNC 0x04000000 /* FILE_FLAG_WRITE_THROUGH */ #define UV_FS_O_EXLOCK 0x10000000 /* EXCLUSIVE SHARING MODE */ #define UV_FS_O_NOATIME 0 #define UV_FS_O_NOCTTY 0 #define UV_FS_O_NOFOLLOW 0 #define UV_FS_O_NONBLOCK 0 #define UV_FS_O_SYMLINK 0 #define UV_FS_O_SYNC 0x08000000 /* FILE_FLAG_WRITE_THROUGH */ gevent-24.11.1/deps/libuv/libuv-static.pc.in000066400000000000000000000004331471441230600205650ustar00rootroot00000000000000prefix=@prefix@ exec_prefix=${prefix} libdir=@libdir@ includedir=@includedir@ Name: libuv-static Version: @PACKAGE_VERSION@ Description: multi-platform support library with a focus on asynchronous I/O. URL: http://libuv.org/ Libs: -L${libdir} -luv_a @LIBS@ Cflags: -I${includedir} gevent-24.11.1/deps/libuv/libuv.pc.in000066400000000000000000000004221471441230600172760ustar00rootroot00000000000000prefix=@prefix@ exec_prefix=${prefix} libdir=@libdir@ includedir=@includedir@ Name: libuv Version: @PACKAGE_VERSION@ Description: multi-platform support library with a focus on asynchronous I/O. URL: http://libuv.org/ Libs: -L${libdir} -luv @LIBS@ Cflags: -I${includedir} gevent-24.11.1/deps/libuv/m4/000077500000000000000000000000001471441230600155465ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/m4/.gitignore000066400000000000000000000001331471441230600175330ustar00rootroot00000000000000# Ignore libtoolize-generated files. *.m4 !as_case.m4 !ax_pthread.m4 !libuv-check-flags.m4 gevent-24.11.1/deps/libuv/m4/as_case.m4000066400000000000000000000010031471441230600174000ustar00rootroot00000000000000# AS_CASE(WORD, [PATTERN1], [IF-MATCHED1]...[DEFAULT]) # ---------------------------------------------------- # Expand into # | case WORD in # | PATTERN1) IF-MATCHED1 ;; # | ... 
# | *) DEFAULT ;; # | esac m4_define([_AS_CASE], [m4_if([$#], 0, [m4_fatal([$0: too few arguments: $#])], [$#], 1, [ *) $1 ;;], [$#], 2, [ $1) m4_default([$2], [:]) ;;], [ $1) m4_default([$2], [:]) ;; $0(m4_shiftn(2, $@))])dnl ]) m4_defun([AS_CASE], [m4_ifval([$2$3], [case $1 in _AS_CASE(m4_shift($@)) esac])]) gevent-24.11.1/deps/libuv/m4/ax_pthread.m4000066400000000000000000000505221471441230600201330ustar00rootroot00000000000000# =========================================================================== # https://www.gnu.org/software/autoconf-archive/ax_pthread.html # =========================================================================== # # SYNOPSIS # # AX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) # # DESCRIPTION # # This macro figures out how to build C programs using POSIX threads. It # sets the PTHREAD_LIBS output variable to the threads library and linker # flags, and the PTHREAD_CFLAGS output variable to any special C compiler # flags that are needed. (The user can also force certain compiler # flags/libs to be tested by setting these environment variables.) # # Also sets PTHREAD_CC to any special C compiler that is needed for # multi-threaded programs (defaults to the value of CC otherwise). (This # is necessary on AIX to use the special cc_r compiler alias.) # # NOTE: You are assumed to not only compile your program with these flags, # but also to link with them as well. For example, you might link with # $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS $LIBS # # If you are only building threaded programs, you may wish to use these # variables in your default LIBS, CFLAGS, and CC: # # LIBS="$PTHREAD_LIBS $LIBS" # CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # CC="$PTHREAD_CC" # # In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute constant # has a nonstandard name, this macro defines PTHREAD_CREATE_JOINABLE to # that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). # # Also HAVE_PTHREAD_PRIO_INHERIT is defined if pthread is found and the # PTHREAD_PRIO_INHERIT symbol is defined when compiling with # PTHREAD_CFLAGS. # # ACTION-IF-FOUND is a list of shell commands to run if a threads library # is found, and ACTION-IF-NOT-FOUND is a list of commands to run it if it # is not found. If ACTION-IF-FOUND is not specified, the default action # will define HAVE_PTHREAD. # # Please let the authors know if this macro fails on any platform, or if # you have any other suggestions or comments. This macro was based on work # by SGJ on autoconf scripts for FFTW (http://www.fftw.org/) (with help # from M. Frigo), as well as ac_pthread and hb_pthread macros posted by # Alejandro Forero Cuervo to the autoconf macro repository. We are also # grateful for the helpful feedback of numerous users. # # Updated for Autoconf 2.68 by Daniel Richard G. # # LICENSE # # Copyright (c) 2008 Steven G. Johnson # Copyright (c) 2011 Daniel Richard G. # # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General # Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see . 
# # As a special exception, the respective Autoconf Macro's copyright owner # gives unlimited permission to copy, distribute and modify the configure # scripts that are the output of Autoconf when processing the Macro. You # need not follow the terms of the GNU General Public License when using # or distributing such scripts, even though portions of the text of the # Macro appear in them. The GNU General Public License (GPL) does govern # all other use of the material that constitutes the Autoconf Macro. # # This special exception to the GPL applies to versions of the Autoconf # Macro released by the Autoconf Archive. When you make and distribute a # modified version of the Autoconf Macro, you may extend this special # exception to the GPL to apply to your modified version as well. #serial 24 AU_ALIAS([ACX_PTHREAD], [AX_PTHREAD]) AC_DEFUN([AX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_REQUIRE([AC_PROG_CC]) AC_REQUIRE([AC_PROG_SED]) AC_LANG_PUSH([C]) ax_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on Tru64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test "x$PTHREAD_CFLAGS$PTHREAD_LIBS" != "x"; then ax_pthread_save_CC="$CC" ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" AS_IF([test "x$PTHREAD_CC" != "x"], [CC="$PTHREAD_CC"]) CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join using $CC $PTHREAD_CFLAGS $PTHREAD_LIBS]) AC_LINK_IFELSE([AC_LANG_CALL([], [pthread_join])], [ax_pthread_ok=yes]) AC_MSG_RESULT([$ax_pthread_ok]) if test "x$ax_pthread_ok" = "xno"; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi CC="$ax_pthread_save_CC" CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. ax_pthread_flags="pthreads none -Kthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads), Tru64 # (Note: HP C rejects this with "bad form for `-t' option") # -pthreads: Solaris/gcc (Note: HP C also rejects) # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads and # -D_REENTRANT too), HP C (must be checked before -lpthread, which # is present but should not be used directly; and before -mthreads, # because the compiler interprets this as "-mt" + "-hreads") # -mthreads: Mingw32/gcc, Lynx/gcc # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case $host_os in freebsd*) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) ax_pthread_flags="-kthread lthread $ax_pthread_flags" ;; hpux*) # From the cc(1) man page: "[-mt] Sets various -D flags to enable # multi-threading and also sets -lpthread." ax_pthread_flags="-mt -pthread pthread $ax_pthread_flags" ;; openedition*) # IBM z/OS requires a feature-test macro to be defined in order to # enable POSIX threads at all, so give the user a hint if this is # not set. (We don't define these ourselves, as they can affect # other portions of the system API in unpredictable ways.) AC_EGREP_CPP([AX_PTHREAD_ZOS_MISSING], [ # if !defined(_OPEN_THREADS) && !defined(_UNIX03_THREADS) AX_PTHREAD_ZOS_MISSING # endif ], [AC_MSG_WARN([IBM z/OS requires -D_OPEN_THREADS or -D_UNIX03_THREADS to enable pthreads support.])]) ;; solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (N.B.: The stubs are missing # pthread_cleanup_push, or rather a function called by this macro, # so we could check for that, but who knows whether they'll stub # that too in a future libc.) So we'll check first for the # standard Solaris way of linking pthreads (-mt -lpthread). ax_pthread_flags="-mt,pthread pthread $ax_pthread_flags" ;; esac # GCC generally uses -pthread, or -pthreads on some platforms (e.g. SPARC) AS_IF([test "x$GCC" = "xyes"], [ax_pthread_flags="-pthread -pthreads $ax_pthread_flags"]) # The presence of a feature test macro requesting re-entrant function # definitions is, on some systems, a strong hint that pthreads support is # correctly enabled case $host_os in darwin* | hpux* | linux* | osf* | solaris*) ax_pthread_check_macro="_REENTRANT" ;; aix*) ax_pthread_check_macro="_THREAD_SAFE" ;; *) ax_pthread_check_macro="--" ;; esac AS_IF([test "x$ax_pthread_check_macro" = "x--"], [ax_pthread_check_cond=0], [ax_pthread_check_cond="!defined($ax_pthread_check_macro)"]) # Are we compiling with Clang? 
AC_CACHE_CHECK([whether $CC is Clang], [ax_cv_PTHREAD_CLANG], [ax_cv_PTHREAD_CLANG=no # Note that Autoconf sets GCC=yes for Clang as well as GCC if test "x$GCC" = "xyes"; then AC_EGREP_CPP([AX_PTHREAD_CC_IS_CLANG], [/* Note: Clang 2.7 lacks __clang_[a-z]+__ */ # if defined(__clang__) && defined(__llvm__) AX_PTHREAD_CC_IS_CLANG # endif ], [ax_cv_PTHREAD_CLANG=yes]) fi ]) ax_pthread_clang="$ax_cv_PTHREAD_CLANG" ax_pthread_clang_warning=no # Clang needs special handling, because older versions handle the -pthread # option in a rather... idiosyncratic way if test "x$ax_pthread_clang" = "xyes"; then # Clang takes -pthread; it has never supported any other flag # (Note 1: This will need to be revisited if a system that Clang # supports has POSIX threads in a separate library. This tends not # to be the way of modern systems, but it's conceivable.) # (Note 2: On some systems, notably Darwin, -pthread is not needed # to get POSIX threads support; the API is always present and # active. We could reasonably leave PTHREAD_CFLAGS empty. But # -pthread does define _REENTRANT, and while the Darwin headers # ignore this macro, third-party headers might not.) PTHREAD_CFLAGS="-pthread" PTHREAD_LIBS= ax_pthread_ok=yes # However, older versions of Clang make a point of warning the user # that, in an invocation where only linking and no compilation is # taking place, the -pthread option has no effect ("argument unused # during compilation"). They expect -pthread to be passed in only # when source code is being compiled. # # Problem is, this is at odds with the way Automake and most other # C build frameworks function, which is that the same flags used in # compilation (CFLAGS) are also used in linking. Many systems # supported by AX_PTHREAD require exactly this for POSIX threads # support, and in fact it is often not straightforward to specify a # flag that is used only in the compilation phase and not in # linking. Such a scenario is extremely rare in practice. # # Even though use of the -pthread flag in linking would only print # a warning, this can be a nuisance for well-run software projects # that build with -Werror. So if the active version of Clang has # this misfeature, we search for an option to squash it. 
AC_CACHE_CHECK([whether Clang needs flag to prevent "argument unused" warning when linking with -pthread], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG], [ax_cv_PTHREAD_CLANG_NO_WARN_FLAG=unknown # Create an alternate version of $ac_link that compiles and # links in two steps (.c -> .o, .o -> exe) instead of one # (.c -> exe), because the warning occurs only in the second # step ax_pthread_save_ac_link="$ac_link" ax_pthread_sed='s/conftest\.\$ac_ext/conftest.$ac_objext/g' ax_pthread_link_step=`$as_echo "$ac_link" | sed "$ax_pthread_sed"` ax_pthread_2step_ac_link="($ac_compile) && (echo ==== >&5) && ($ax_pthread_link_step)" ax_pthread_save_CFLAGS="$CFLAGS" for ax_pthread_try in '' -Qunused-arguments -Wno-unused-command-line-argument unknown; do AS_IF([test "x$ax_pthread_try" = "xunknown"], [break]) CFLAGS="-Werror -Wunknown-warning-option $ax_pthread_try -pthread $ax_pthread_save_CFLAGS" ac_link="$ax_pthread_save_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [ac_link="$ax_pthread_2step_ac_link" AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void){return 0;}]])], [break]) ]) done ac_link="$ax_pthread_save_ac_link" CFLAGS="$ax_pthread_save_CFLAGS" AS_IF([test "x$ax_pthread_try" = "x"], [ax_pthread_try=no]) ax_cv_PTHREAD_CLANG_NO_WARN_FLAG="$ax_pthread_try" ]) case "$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG" in no | unknown) ;; *) PTHREAD_CFLAGS="$ax_cv_PTHREAD_CLANG_NO_WARN_FLAG $PTHREAD_CFLAGS" ;; esac fi # $ax_pthread_clang = yes if test "x$ax_pthread_ok" = "xno"; then for ax_pthread_try_flag in $ax_pthread_flags; do case $ax_pthread_try_flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; -mt,pthread) AC_MSG_CHECKING([whether pthreads work with -mt -lpthread]) PTHREAD_CFLAGS="-mt" PTHREAD_LIBS="-lpthread" ;; -*) AC_MSG_CHECKING([whether pthreads work with $ax_pthread_try_flag]) PTHREAD_CFLAGS="$ax_pthread_try_flag" ;; pthread-config) AC_CHECK_PROG([ax_pthread_config], [pthread-config], [yes], [no]) AS_IF([test "x$ax_pthread_config" = "xno"], [continue]) PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$ax_pthread_try_flag]) PTHREAD_LIBS="-l$ax_pthread_try_flag" ;; esac ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. 
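# ----------------------------------------------------------------------
# Aside (not part of the original macro): a minimal configure.ac consumer
# of AX_PTHREAD, restating the usage sketched in the header comment above.
# The project name "demo" and the error message are illustrative only.
#
#   AC_INIT([demo], [1.0])
#   AC_PROG_CC
#   AX_PTHREAD([], [AC_MSG_ERROR([POSIX threads are required])])
#   LIBS="$PTHREAD_LIBS $LIBS"
#   CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
#   CC="$PTHREAD_CC"
#   AC_OUTPUT
# ----------------------------------------------------------------------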
AC_LINK_IFELSE([AC_LANG_PROGRAM([#include # if $ax_pthread_check_cond # error "$ax_pthread_check_macro must be defined" # endif static void routine(void *a) { a = 0; } static void *start_routine(void *a) { return a; }], [pthread_t th; pthread_attr_t attr; pthread_create(&th, 0, start_routine, 0); pthread_join(th, 0); pthread_attr_init(&attr); pthread_cleanup_push(routine, 0); pthread_cleanup_pop(0) /* ; */])], [ax_pthread_ok=yes], []) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" AC_MSG_RESULT([$ax_pthread_ok]) AS_IF([test "x$ax_pthread_ok" = "xyes"], [break]) PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$ax_pthread_ok" = "xyes"; then ax_pthread_save_CFLAGS="$CFLAGS" ax_pthread_save_LIBS="$LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. AC_CACHE_CHECK([for joinable pthread attribute], [ax_cv_PTHREAD_JOINABLE_ATTR], [ax_cv_PTHREAD_JOINABLE_ATTR=unknown for ax_pthread_attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_LINK_IFELSE([AC_LANG_PROGRAM([#include ], [int attr = $ax_pthread_attr; return attr /* ; */])], [ax_cv_PTHREAD_JOINABLE_ATTR=$ax_pthread_attr; break], []) done ]) AS_IF([test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xunknown" && \ test "x$ax_cv_PTHREAD_JOINABLE_ATTR" != "xPTHREAD_CREATE_JOINABLE" && \ test "x$ax_pthread_joinable_attr_defined" != "xyes"], [AC_DEFINE_UNQUOTED([PTHREAD_CREATE_JOINABLE], [$ax_cv_PTHREAD_JOINABLE_ATTR], [Define to necessary symbol if this constant uses a non-standard name on your system.]) ax_pthread_joinable_attr_defined=yes ]) AC_CACHE_CHECK([whether more special flags are required for pthreads], [ax_cv_PTHREAD_SPECIAL_FLAGS], [ax_cv_PTHREAD_SPECIAL_FLAGS=no case $host_os in solaris*) ax_cv_PTHREAD_SPECIAL_FLAGS="-D_POSIX_PTHREAD_SEMANTICS" ;; esac ]) AS_IF([test "x$ax_cv_PTHREAD_SPECIAL_FLAGS" != "xno" && \ test "x$ax_pthread_special_flags_added" != "xyes"], [PTHREAD_CFLAGS="$ax_cv_PTHREAD_SPECIAL_FLAGS $PTHREAD_CFLAGS" ax_pthread_special_flags_added=yes]) AC_CACHE_CHECK([for PTHREAD_PRIO_INHERIT], [ax_cv_PTHREAD_PRIO_INHERIT], [AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include ]], [[int i = PTHREAD_PRIO_INHERIT;]])], [ax_cv_PTHREAD_PRIO_INHERIT=yes], [ax_cv_PTHREAD_PRIO_INHERIT=no]) ]) AS_IF([test "x$ax_cv_PTHREAD_PRIO_INHERIT" = "xyes" && \ test "x$ax_pthread_prio_inherit_defined" != "xyes"], [AC_DEFINE([HAVE_PTHREAD_PRIO_INHERIT], [1], [Have PTHREAD_PRIO_INHERIT.]) ax_pthread_prio_inherit_defined=yes ]) CFLAGS="$ax_pthread_save_CFLAGS" LIBS="$ax_pthread_save_LIBS" # More AIX lossage: compile with *_r variant if test "x$GCC" != "xyes"; then case $host_os in aix*) AS_CASE(["x/$CC"], [x*/c89|x*/c89_128|x*/c99|x*/c99_128|x*/cc|x*/cc128|x*/xlc|x*/xlc_v6|x*/xlc128|x*/xlc128_v6], [#handle absolute path differently from PATH based program lookup AS_CASE(["x$CC"], [x/*], [AS_IF([AS_EXECUTABLE_P([${CC}_r])],[PTHREAD_CC="${CC}_r"])], [AC_CHECK_PROGS([PTHREAD_CC],[${CC}_r],[$CC])])]) ;; esac fi fi test -n "$PTHREAD_CC" || PTHREAD_CC="$CC" AC_SUBST([PTHREAD_LIBS]) AC_SUBST([PTHREAD_CFLAGS]) AC_SUBST([PTHREAD_CC]) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test "x$ax_pthread_ok" = "xyes"; then ifelse([$1],,[AC_DEFINE([HAVE_PTHREAD],[1],[Define if you have POSIX threads libraries and header files.])],[$1]) : else ax_pthread_ok=no $2 fi AC_LANG_POP ])dnl AX_PTHREAD gevent-24.11.1/deps/libuv/m4/libuv-check-flags.m4000066400000000000000000000254361471441230600213100ustar00rootroot00000000000000dnl Macros to check the 
presence of generic (non-typed) symbols. dnl Copyright (c) 2006-2008 Diego Pettenò dnl Copyright (c) 2006-2008 xine project dnl Copyright (c) 2021 libuv project dnl dnl This program is free software; you can redistribute it and/or modify dnl it under the terms of the GNU General Public License as published by dnl the Free Software Foundation; either version 3, or (at your option) dnl any later version. dnl dnl This program is distributed in the hope that it will be useful, dnl but WITHOUT ANY WARRANTY; without even the implied warranty of dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the dnl GNU General Public License for more details. dnl dnl You should have received a copy of the GNU General Public License dnl along with this program; if not, write to the Free Software dnl Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA dnl 02110-1301, USA. dnl dnl As a special exception, the copyright owners of the dnl macro gives unlimited permission to copy, distribute and modify the dnl configure scripts that are the output of Autoconf when processing the dnl Macro. You need not follow the terms of the GNU General Public dnl License when using or distributing such scripts, even though portions dnl of the text of the Macro appear in them. The GNU General Public dnl License (GPL) does govern all other use of the material that dnl constitutes the Autoconf Macro. dnl dnl This special exception to the GPL applies to versions of the dnl Autoconf Macro released by this project. When you make and dnl distribute a modified version of the Autoconf Macro, you may extend dnl this special exception to the GPL to apply to your modified version as dnl well. dnl Check if the flag is supported by compiler dnl CC_CHECK_CFLAGS_SILENT([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) AC_DEFUN([CC_CHECK_CFLAGS_SILENT], [ AC_CACHE_VAL(AS_TR_SH([cc_cv_cflags_$1]), [ac_save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $1" AC_COMPILE_IFELSE([AC_LANG_SOURCE([int a;])], [eval "AS_TR_SH([cc_cv_cflags_$1])='yes'"], [eval "AS_TR_SH([cc_cv_cflags_$1])='no'"]) CFLAGS="$ac_save_CFLAGS" ]) AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], [$2], [$3]) ]) dnl Check if the flag is supported by compiler (cacheable) dnl CC_CHECK_CFLAGS([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) AC_DEFUN([CC_CHECK_CFLAGS], [ AC_CACHE_CHECK([if $CC supports $1 flag], AS_TR_SH([cc_cv_cflags_$1]), CC_CHECK_CFLAGS_SILENT([$1]) dnl Don't execute actions here! ) AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], [$2], [$3]) ]) dnl CC_CHECK_CFLAG_APPEND(FLAG, [action-if-found], [action-if-not-found]) dnl Check for CFLAG and appends them to AM_CFLAGS if supported AC_DEFUN([CC_CHECK_CFLAG_APPEND], [ AC_CACHE_CHECK([if $CC supports $1 flag], AS_TR_SH([cc_cv_cflags_$1]), CC_CHECK_CFLAGS_SILENT([$1]) dnl Don't execute actions here! 
) AS_IF([eval test x$]AS_TR_SH([cc_cv_cflags_$1])[ = xyes], [AM_CFLAGS="$AM_CFLAGS $1"; DEBUG_CFLAGS="$DEBUG_CFLAGS $1"; $2], [$3]) AC_SUBST([AM_CFLAGS]) ]) dnl CC_CHECK_CFLAGS_APPEND([FLAG1 FLAG2], [action-if-found], [action-if-not]) AC_DEFUN([CC_CHECK_CFLAGS_APPEND], [ for flag in $1; do CC_CHECK_CFLAG_APPEND($flag, [$2], [$3]) done ]) dnl Check if the flag is supported by linker (cacheable) dnl CC_CHECK_LDFLAGS([FLAG], [ACTION-IF-FOUND],[ACTION-IF-NOT-FOUND]) AC_DEFUN([CC_CHECK_LDFLAGS], [ AC_CACHE_CHECK([if $CC supports $1 flag], AS_TR_SH([cc_cv_ldflags_$1]), [ac_save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $1" AC_LANG_PUSH([C]) AC_LINK_IFELSE([AC_LANG_SOURCE([int main() { return 1; }])], [eval "AS_TR_SH([cc_cv_ldflags_$1])='yes'"], [eval "AS_TR_SH([cc_cv_ldflags_$1])="]) AC_LANG_POP([C]) LDFLAGS="$ac_save_LDFLAGS" ]) AS_IF([eval test x$]AS_TR_SH([cc_cv_ldflags_$1])[ = xyes], [$2], [$3]) ]) dnl Check if flag is supported by both compiler and linker dnl If so, append it to AM_CFLAGS dnl CC_CHECK_FLAG_SUPPORTED_APPEND([FLAG]) AC_DEFUN([CC_CHECK_FLAG_SUPPORTED_APPEND], [ CC_CHECK_CFLAGS([$1], [CC_CHECK_LDFLAGS([$1], [AM_CFLAGS="$AM_CFLAGS $1"; DEBUG_CFLAGS="$DEBUG_CFLAGS $1"; AC_SUBST([AM_CFLAGS]) ]) ]) ]) dnl define the LDFLAGS_NOUNDEFINED variable with the correct value for dnl the current linker to avoid undefined references in a shared object. AC_DEFUN([CC_NOUNDEFINED], [ dnl We check $host for which systems to enable this for. AC_REQUIRE([AC_CANONICAL_HOST]) case $host in dnl FreeBSD (et al.) does not complete linking for shared objects when pthreads dnl are requested, as different implementations are present; to avoid problems dnl use -Wl,-z,defs only for those platform not behaving this way. *-freebsd* | *-openbsd*) ;; *) dnl First of all check for the --no-undefined variant of GNU ld. This allows dnl for a much more readable commandline, so that people can understand what dnl it does without going to look for what the heck -z defs does. for possible_flags in "-Wl,--no-undefined" "-Wl,-z,defs"; do CC_CHECK_LDFLAGS([$possible_flags], [LDFLAGS_NOUNDEFINED="$possible_flags"]) break done ;; esac AC_SUBST([LDFLAGS_NOUNDEFINED]) ]) dnl Check for a -Werror flag or equivalent. -Werror is the GCC dnl and ICC flag that tells the compiler to treat all the warnings dnl as fatal. We usually need this option to make sure that some dnl constructs (like attributes) are not simply ignored. 
dnl dnl Other compilers don't support -Werror per se, but they support dnl an equivalent flag: dnl - Sun Studio compiler supports -errwarn=%all AC_DEFUN([CC_CHECK_WERROR], [ AC_CACHE_CHECK( [for $CC way to treat warnings as errors], [cc_cv_werror], [CC_CHECK_CFLAGS_SILENT([-Werror], [cc_cv_werror=-Werror], [CC_CHECK_CFLAGS_SILENT([-errwarn=%all], [cc_cv_werror=-errwarn=%all])]) ]) ]) AC_DEFUN([CC_CHECK_ATTRIBUTE], [ AC_REQUIRE([CC_CHECK_WERROR]) AC_CACHE_CHECK([if $CC supports __attribute__(( ifelse([$2], , [$1], [$2]) ))], AS_TR_SH([cc_cv_attribute_$1]), [ac_save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $cc_cv_werror" AC_LANG_PUSH([C]) AC_COMPILE_IFELSE([AC_LANG_SOURCE([$3])], [eval "AS_TR_SH([cc_cv_attribute_$1])='yes'"], [eval "AS_TR_SH([cc_cv_attribute_$1])='no'"]) AC_LANG_POP([C]) CFLAGS="$ac_save_CFLAGS" ]) AS_IF([eval test x$]AS_TR_SH([cc_cv_attribute_$1])[ = xyes], [AC_DEFINE( AS_TR_CPP([SUPPORT_ATTRIBUTE_$1]), 1, [Define this if the compiler supports __attribute__(( ifelse([$2], , [$1], [$2]) ))] ) $4], [$5]) ]) AC_DEFUN([CC_ATTRIBUTE_CONSTRUCTOR], [ CC_CHECK_ATTRIBUTE( [constructor],, [void __attribute__((constructor)) ctor() { int a; }], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_FORMAT], [ CC_CHECK_ATTRIBUTE( [format], [format(printf, n, n)], [void __attribute__((format(printf, 1, 2))) printflike(const char *fmt, ...) { fmt = (void *)0; }], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_FORMAT_ARG], [ CC_CHECK_ATTRIBUTE( [format_arg], [format_arg(printf)], [char *__attribute__((format_arg(1))) gettextlike(const char *fmt) { fmt = (void *)0; }], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_VISIBILITY], [ CC_CHECK_ATTRIBUTE( [visibility_$1], [visibility("$1")], [void __attribute__((visibility("$1"))) $1_function() { }], [$2], [$3]) ]) AC_DEFUN([CC_ATTRIBUTE_NONNULL], [ CC_CHECK_ATTRIBUTE( [nonnull], [nonnull()], [void __attribute__((nonnull())) some_function(void *foo, void *bar) { foo = (void*)0; bar = (void*)0; }], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_UNUSED], [ CC_CHECK_ATTRIBUTE( [unused], , [void some_function(void *foo, __attribute__((unused)) void *bar);], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_SENTINEL], [ CC_CHECK_ATTRIBUTE( [sentinel], , [void some_function(void *foo, ...) __attribute__((sentinel));], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_DEPRECATED], [ CC_CHECK_ATTRIBUTE( [deprecated], , [void some_function(void *foo, ...) 
__attribute__((deprecated));], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_ALIAS], [ CC_CHECK_ATTRIBUTE( [alias], [weak, alias], [void other_function(void *foo) { } void some_function(void *foo) __attribute__((weak, alias("other_function")));], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_MALLOC], [ CC_CHECK_ATTRIBUTE( [malloc], , [void * __attribute__((malloc)) my_alloc(int n);], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_PACKED], [ CC_CHECK_ATTRIBUTE( [packed], , [struct astructure { char a; int b; long c; void *d; } __attribute__((packed));], [$1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_CONST], [ CC_CHECK_ATTRIBUTE( [const], , [int __attribute__((const)) twopow(int n) { return 1 << n; } ], [$1], [$2]) ]) AC_DEFUN([CC_FLAG_VISIBILITY], [ AC_REQUIRE([CC_CHECK_WERROR]) AC_CACHE_CHECK([if $CC supports -fvisibility=hidden], [cc_cv_flag_visibility], [cc_flag_visibility_save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $cc_cv_werror" CC_CHECK_CFLAGS_SILENT([-fvisibility=hidden], cc_cv_flag_visibility='yes', cc_cv_flag_visibility='no') CFLAGS="$cc_flag_visibility_save_CFLAGS"]) AS_IF([test "x$cc_cv_flag_visibility" = "xyes"], [AC_DEFINE([SUPPORT_FLAG_VISIBILITY], 1, [Define this if the compiler supports the -fvisibility flag]) $1], [$2]) ]) AC_DEFUN([CC_FUNC_EXPECT], [ AC_REQUIRE([CC_CHECK_WERROR]) AC_CACHE_CHECK([if compiler has __builtin_expect function], [cc_cv_func_expect], [ac_save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $cc_cv_werror" AC_LANG_PUSH([C]) AC_COMPILE_IFELSE([AC_LANG_SOURCE( [int some_function() { int a = 3; return (int)__builtin_expect(a, 3); }])], [cc_cv_func_expect=yes], [cc_cv_func_expect=no]) AC_LANG_POP([C]) CFLAGS="$ac_save_CFLAGS" ]) AS_IF([test "x$cc_cv_func_expect" = "xyes"], [AC_DEFINE([SUPPORT__BUILTIN_EXPECT], 1, [Define this if the compiler supports __builtin_expect() function]) $1], [$2]) ]) AC_DEFUN([CC_ATTRIBUTE_ALIGNED], [ AC_REQUIRE([CC_CHECK_WERROR]) AC_CACHE_CHECK([highest __attribute__ ((aligned ())) supported], [cc_cv_attribute_aligned], [ac_save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $cc_cv_werror" AC_LANG_PUSH([C]) for cc_attribute_align_try in 64 32 16 8 4 2; do AC_COMPILE_IFELSE([AC_LANG_SOURCE([ int main() { static char c __attribute__ ((aligned($cc_attribute_align_try))) = 0; return c; }])], [cc_cv_attribute_aligned=$cc_attribute_align_try; break]) done AC_LANG_POP([C]) CFLAGS="$ac_save_CFLAGS" ]) if test "x$cc_cv_attribute_aligned" != "x"; then AC_DEFINE_UNQUOTED([ATTRIBUTE_ALIGNED_MAX], [$cc_cv_attribute_aligned], [Define the highest alignment supported]) fi ]) gevent-24.11.1/deps/libuv/m4/libuv-check-versions.m4000066400000000000000000000001561471441230600220540ustar00rootroot00000000000000 AC_PREREQ(2.71) AC_INIT([libuv-release-check], [0.0]) AM_INIT_AUTOMAKE([1.16.5]) LT_PREREQ(2.4.7) AC_OUTPUT gevent-24.11.1/deps/libuv/src/000077500000000000000000000000001471441230600160155ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/src/fs-poll.c000066400000000000000000000166671471441230600175550ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "uv-common.h" #ifdef _WIN32 #include "win/internal.h" #include "win/handle-inl.h" #define uv__make_close_pending(h) uv__want_endgame((h)->loop, (h)) #else #include "unix/internal.h" #endif #include #include #include struct poll_ctx { uv_fs_poll_t* parent_handle; int busy_polling; unsigned int interval; uint64_t start_time; uv_loop_t* loop; uv_fs_poll_cb poll_cb; uv_timer_t timer_handle; uv_fs_t fs_req; /* TODO(bnoordhuis) mark fs_req internal */ uv_stat_t statbuf; struct poll_ctx* previous; /* context from previous start()..stop() period */ char path[1]; /* variable length */ }; static int statbuf_eq(const uv_stat_t* a, const uv_stat_t* b); static void poll_cb(uv_fs_t* req); static void timer_cb(uv_timer_t* timer); static void timer_close_cb(uv_handle_t* handle); static uv_stat_t zero_statbuf; int uv_fs_poll_init(uv_loop_t* loop, uv_fs_poll_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_POLL); handle->poll_ctx = NULL; return 0; } int uv_fs_poll_start(uv_fs_poll_t* handle, uv_fs_poll_cb cb, const char* path, unsigned int interval) { struct poll_ctx* ctx; uv_loop_t* loop; size_t len; int err; if (uv_is_active((uv_handle_t*)handle)) return 0; loop = handle->loop; len = strlen(path); ctx = uv__calloc(1, sizeof(*ctx) + len); if (ctx == NULL) return UV_ENOMEM; ctx->loop = loop; ctx->poll_cb = cb; ctx->interval = interval ? interval : 1; ctx->start_time = uv_now(loop); ctx->parent_handle = handle; memcpy(ctx->path, path, len + 1); err = uv_timer_init(loop, &ctx->timer_handle); if (err < 0) goto error; ctx->timer_handle.flags |= UV_HANDLE_INTERNAL; uv__handle_unref(&ctx->timer_handle); err = uv_fs_stat(loop, &ctx->fs_req, ctx->path, poll_cb); if (err < 0) goto error; if (handle->poll_ctx != NULL) ctx->previous = handle->poll_ctx; handle->poll_ctx = ctx; uv__handle_start(handle); return 0; error: uv__free(ctx); return err; } int uv_fs_poll_stop(uv_fs_poll_t* handle) { struct poll_ctx* ctx; if (!uv_is_active((uv_handle_t*)handle)) return 0; ctx = handle->poll_ctx; assert(ctx != NULL); assert(ctx->parent_handle == handle); /* Close the timer if it's active. If it's inactive, there's a stat request * in progress and poll_cb will take care of the cleanup. 
*/ if (uv_is_active((uv_handle_t*)&ctx->timer_handle)) uv_close((uv_handle_t*)&ctx->timer_handle, timer_close_cb); uv__handle_stop(handle); return 0; } int uv_fs_poll_getpath(uv_fs_poll_t* handle, char* buffer, size_t* size) { struct poll_ctx* ctx; size_t required_len; if (!uv_is_active((uv_handle_t*)handle)) { *size = 0; return UV_EINVAL; } ctx = handle->poll_ctx; assert(ctx != NULL); required_len = strlen(ctx->path); if (required_len >= *size) { *size = required_len + 1; return UV_ENOBUFS; } memcpy(buffer, ctx->path, required_len); *size = required_len; buffer[required_len] = '\0'; return 0; } void uv__fs_poll_close(uv_fs_poll_t* handle) { uv_fs_poll_stop(handle); if (handle->poll_ctx == NULL) uv__make_close_pending((uv_handle_t*)handle); } static void timer_cb(uv_timer_t* timer) { struct poll_ctx* ctx; ctx = container_of(timer, struct poll_ctx, timer_handle); assert(ctx->parent_handle != NULL); assert(ctx->parent_handle->poll_ctx == ctx); ctx->start_time = uv_now(ctx->loop); if (uv_fs_stat(ctx->loop, &ctx->fs_req, ctx->path, poll_cb)) abort(); } static void poll_cb(uv_fs_t* req) { uv_stat_t* statbuf; struct poll_ctx* ctx; uint64_t interval; uv_fs_poll_t* handle; ctx = container_of(req, struct poll_ctx, fs_req); handle = ctx->parent_handle; if (!uv_is_active((uv_handle_t*)handle) || uv__is_closing(handle)) goto out; if (req->result != 0) { if (ctx->busy_polling != req->result) { ctx->poll_cb(ctx->parent_handle, req->result, &ctx->statbuf, &zero_statbuf); ctx->busy_polling = req->result; } goto out; } statbuf = &req->statbuf; if (ctx->busy_polling != 0) if (ctx->busy_polling < 0 || !statbuf_eq(&ctx->statbuf, statbuf)) ctx->poll_cb(ctx->parent_handle, 0, &ctx->statbuf, statbuf); ctx->statbuf = *statbuf; ctx->busy_polling = 1; out: uv_fs_req_cleanup(req); if (!uv_is_active((uv_handle_t*)handle) || uv__is_closing(handle)) { uv_close((uv_handle_t*)&ctx->timer_handle, timer_close_cb); return; } /* Reschedule timer, subtract the delay from doing the stat(). 
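 * (Worked example with illustrative numbers, not from the original source:
 * with interval == 1000 ms and a stat() that took 30 ms, the timer is
 * re-armed for 1000 - (30 % 1000) = 970 ms; if the stat() overran to
 * 1030 ms, the modulo keeps the next delay at 970 ms instead of letting
 * the subtraction go negative.)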
*/ interval = ctx->interval; interval -= (uv_now(ctx->loop) - ctx->start_time) % interval; if (uv_timer_start(&ctx->timer_handle, timer_cb, interval, 0)) abort(); } static void timer_close_cb(uv_handle_t* timer) { struct poll_ctx* ctx; struct poll_ctx* it; struct poll_ctx* last; uv_fs_poll_t* handle; ctx = container_of(timer, struct poll_ctx, timer_handle); handle = ctx->parent_handle; if (ctx == handle->poll_ctx) { handle->poll_ctx = ctx->previous; if (handle->poll_ctx == NULL && uv__is_closing(handle)) uv__make_close_pending((uv_handle_t*)handle); } else { for (last = handle->poll_ctx, it = last->previous; it != ctx; last = it, it = it->previous) { assert(last->previous != NULL); } last->previous = ctx->previous; } uv__free(ctx); } static int statbuf_eq(const uv_stat_t* a, const uv_stat_t* b) { return a->st_ctim.tv_nsec == b->st_ctim.tv_nsec && a->st_mtim.tv_nsec == b->st_mtim.tv_nsec && a->st_birthtim.tv_nsec == b->st_birthtim.tv_nsec && a->st_ctim.tv_sec == b->st_ctim.tv_sec && a->st_mtim.tv_sec == b->st_mtim.tv_sec && a->st_birthtim.tv_sec == b->st_birthtim.tv_sec && a->st_size == b->st_size && a->st_mode == b->st_mode && a->st_uid == b->st_uid && a->st_gid == b->st_gid && a->st_ino == b->st_ino && a->st_dev == b->st_dev && a->st_flags == b->st_flags && a->st_gen == b->st_gen; } #if defined(_WIN32) #include "win/internal.h" #include "win/handle-inl.h" void uv__fs_poll_endgame(uv_loop_t* loop, uv_fs_poll_t* handle) { assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); uv__handle_close(handle); } #endif /* _WIN32 */ gevent-24.11.1/deps/libuv/src/heap-inl.h000066400000000000000000000156151471441230600176730ustar00rootroot00000000000000/* Copyright (c) 2013, Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #ifndef UV_SRC_HEAP_H_ #define UV_SRC_HEAP_H_ #include /* NULL */ #if defined(__GNUC__) # define HEAP_EXPORT(declaration) __attribute__((unused)) static declaration #else # define HEAP_EXPORT(declaration) static declaration #endif struct heap_node { struct heap_node* left; struct heap_node* right; struct heap_node* parent; }; /* A binary min heap. The usual properties hold: the root is the lowest * element in the set, the height of the tree is at most log2(nodes) and * it's always a complete binary tree. * * The heap function try hard to detect corrupted tree nodes at the cost * of a minor reduction in performance. Compile with -DNDEBUG to disable. */ struct heap { struct heap_node* min; unsigned int nelts; }; /* Return non-zero if a < b. */ typedef int (*heap_compare_fn)(const struct heap_node* a, const struct heap_node* b); /* Public functions. 
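 *
 * Usage sketch (illustrative only; struct item, item_less and the key
 * values are assumptions, not part of this header). Callers embed a
 * struct heap_node in their own struct and pass a comparison callback:
 *
 *   struct item { unsigned key; struct heap_node node; };
 *
 *   static int item_less(const struct heap_node* a, const struct heap_node* b) {
 *     const struct item* x = (const struct item*) ((const char*) a - offsetof(struct item, node));
 *     const struct item* y = (const struct item*) ((const char*) b - offsetof(struct item, node));
 *     return x->key < y->key;
 *   }
 *
 *   struct heap h;
 *   struct item i1 = { 3 };
 *   struct item i2 = { 1 };
 *   heap_init(&h);
 *   heap_insert(&h, &i1.node, item_less);
 *   heap_insert(&h, &i2.node, item_less);
 *   heap_min(&h) now returns &i2.node, the node with the smallest key;
 *   heap_dequeue(&h, item_less) removes it.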
*/ HEAP_EXPORT(void heap_init(struct heap* heap)); HEAP_EXPORT(struct heap_node* heap_min(const struct heap* heap)); HEAP_EXPORT(void heap_insert(struct heap* heap, struct heap_node* newnode, heap_compare_fn less_than)); HEAP_EXPORT(void heap_remove(struct heap* heap, struct heap_node* node, heap_compare_fn less_than)); HEAP_EXPORT(void heap_dequeue(struct heap* heap, heap_compare_fn less_than)); /* Implementation follows. */ HEAP_EXPORT(void heap_init(struct heap* heap)) { heap->min = NULL; heap->nelts = 0; } HEAP_EXPORT(struct heap_node* heap_min(const struct heap* heap)) { return heap->min; } /* Swap parent with child. Child moves closer to the root, parent moves away. */ static void heap_node_swap(struct heap* heap, struct heap_node* parent, struct heap_node* child) { struct heap_node* sibling; struct heap_node t; t = *parent; *parent = *child; *child = t; parent->parent = child; if (child->left == child) { child->left = parent; sibling = child->right; } else { child->right = parent; sibling = child->left; } if (sibling != NULL) sibling->parent = child; if (parent->left != NULL) parent->left->parent = parent; if (parent->right != NULL) parent->right->parent = parent; if (child->parent == NULL) heap->min = child; else if (child->parent->left == parent) child->parent->left = child; else child->parent->right = child; } HEAP_EXPORT(void heap_insert(struct heap* heap, struct heap_node* newnode, heap_compare_fn less_than)) { struct heap_node** parent; struct heap_node** child; unsigned int path; unsigned int n; unsigned int k; newnode->left = NULL; newnode->right = NULL; newnode->parent = NULL; /* Calculate the path from the root to the insertion point. This is a min * heap so we always insert at the left-most free node of the bottom row. */ path = 0; for (k = 0, n = 1 + heap->nelts; n >= 2; k += 1, n /= 2) path = (path << 1) | (n & 1); /* Now traverse the heap using the path we calculated in the previous step. */ parent = child = &heap->min; while (k > 0) { parent = child; if (path & 1) child = &(*child)->right; else child = &(*child)->left; path >>= 1; k -= 1; } /* Insert the new node. */ newnode->parent = *parent; *child = newnode; heap->nelts += 1; /* Walk up the tree and check at each node if the heap property holds. * It's a min heap so parent < child must be true. */ while (newnode->parent != NULL && less_than(newnode, newnode->parent)) heap_node_swap(heap, newnode->parent, newnode); } HEAP_EXPORT(void heap_remove(struct heap* heap, struct heap_node* node, heap_compare_fn less_than)) { struct heap_node* smallest; struct heap_node** max; struct heap_node* child; unsigned int path; unsigned int k; unsigned int n; if (heap->nelts == 0) return; /* Calculate the path from the min (the root) to the max, the left-most node * of the bottom row. */ path = 0; for (k = 0, n = heap->nelts; n >= 2; k += 1, n /= 2) path = (path << 1) | (n & 1); /* Now traverse the heap using the path we calculated in the previous step. */ max = &heap->min; while (k > 0) { if (path & 1) max = &(*max)->right; else max = &(*max)->left; path >>= 1; k -= 1; } heap->nelts -= 1; /* Unlink the max node. */ child = *max; *max = NULL; if (child == node) { /* We're removing either the max or the last node in the tree. */ if (child == heap->min) { heap->min = NULL; } return; } /* Replace the to be deleted node with the max node. 
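 * ("max" is the left-most node of the bottom row located above, i.e. the
 * last occupied slot in level order; moving it into the vacated position
 * keeps the tree complete, and the sift-down / sift-up passes below then
 * restore the heap ordering.)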
*/ child->left = node->left; child->right = node->right; child->parent = node->parent; if (child->left != NULL) { child->left->parent = child; } if (child->right != NULL) { child->right->parent = child; } if (node->parent == NULL) { heap->min = child; } else if (node->parent->left == node) { node->parent->left = child; } else { node->parent->right = child; } /* Walk down the subtree and check at each node if the heap property holds. * It's a min heap so parent < child must be true. If the parent is bigger, * swap it with the smallest child. */ for (;;) { smallest = child; if (child->left != NULL && less_than(child->left, smallest)) smallest = child->left; if (child->right != NULL && less_than(child->right, smallest)) smallest = child->right; if (smallest == child) break; heap_node_swap(heap, child, smallest); } /* Walk up the subtree and check that each parent is less than the node * this is required, because `max` node is not guaranteed to be the * actual maximum in tree */ while (child->parent != NULL && less_than(child, child->parent)) heap_node_swap(heap, child->parent, child); } HEAP_EXPORT(void heap_dequeue(struct heap* heap, heap_compare_fn less_than)) { heap_remove(heap, heap->min, less_than); } #undef HEAP_EXPORT #endif /* UV_SRC_HEAP_H_ */ gevent-24.11.1/deps/libuv/src/idna.c000066400000000000000000000143711471441230600171020ustar00rootroot00000000000000/* Copyright (c) 2011, 2018 Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ /* Derived from https://github.com/bnoordhuis/punycode * but updated to support IDNA 2008. */ #include "uv.h" #include "idna.h" #include #include #include /* UINT_MAX */ static unsigned uv__utf8_decode1_slow(const char** p, const char* pe, unsigned a) { unsigned b; unsigned c; unsigned d; unsigned min; if (a > 0xF7) return -1; switch (pe - *p) { default: if (a > 0xEF) { min = 0x10000; a = a & 7; b = (unsigned char) *(*p)++; c = (unsigned char) *(*p)++; d = (unsigned char) *(*p)++; break; } /* Fall through. */ case 2: if (a > 0xDF) { min = 0x800; b = 0x80 | (a & 15); c = (unsigned char) *(*p)++; d = (unsigned char) *(*p)++; a = 0; break; } /* Fall through. */ case 1: if (a > 0xBF) { min = 0x80; b = 0x80; c = 0x80 | (a & 31); d = (unsigned char) *(*p)++; a = 0; break; } /* Fall through. */ case 0: return -1; /* Invalid continuation byte. */ } if (0x80 != (0xC0 & (b ^ c ^ d))) return -1; /* Invalid sequence. */ b &= 63; c &= 63; d &= 63; a = (a << 18) | (b << 12) | (c << 6) | d; if (a < min) return -1; /* Overlong sequence. */ if (a > 0x10FFFF) return -1; /* Four-byte sequence > U+10FFFF. */ if (a >= 0xD800 && a <= 0xDFFF) return -1; /* Surrogate pair. */ return a; } unsigned uv__utf8_decode1(const char** p, const char* pe) { unsigned a; assert(*p < pe); a = (unsigned char) *(*p)++; if (a < 128) return a; /* ASCII, common case. 
*/ return uv__utf8_decode1_slow(p, pe, a); } static int uv__idna_toascii_label(const char* s, const char* se, char** d, char* de) { static const char alphabet[] = "abcdefghijklmnopqrstuvwxyz0123456789"; const char* ss; unsigned c; unsigned h; unsigned k; unsigned n; unsigned m; unsigned q; unsigned t; unsigned x; unsigned y; unsigned bias; unsigned delta; unsigned todo; int first; h = 0; ss = s; todo = 0; /* Note: after this loop we've visited all UTF-8 characters and know * they're legal so we no longer need to check for decode errors. */ while (s < se) { c = uv__utf8_decode1(&s, se); if (c == UINT_MAX) return UV_EINVAL; if (c < 128) h++; else todo++; } /* Only write "xn--" when there are non-ASCII characters. */ if (todo > 0) { if (*d < de) *(*d)++ = 'x'; if (*d < de) *(*d)++ = 'n'; if (*d < de) *(*d)++ = '-'; if (*d < de) *(*d)++ = '-'; } /* Write ASCII characters. */ x = 0; s = ss; while (s < se) { c = uv__utf8_decode1(&s, se); assert(c != UINT_MAX); if (c > 127) continue; if (*d < de) *(*d)++ = c; if (++x == h) break; /* Visited all ASCII characters. */ } if (todo == 0) return h; /* Only write separator when we've written ASCII characters first. */ if (h > 0) if (*d < de) *(*d)++ = '-'; n = 128; bias = 72; delta = 0; first = 1; while (todo > 0) { m = -1; s = ss; while (s < se) { c = uv__utf8_decode1(&s, se); assert(c != UINT_MAX); if (c >= n) if (c < m) m = c; } x = m - n; y = h + 1; if (x > ~delta / y) return UV_E2BIG; /* Overflow. */ delta += x * y; n = m; s = ss; while (s < se) { c = uv__utf8_decode1(&s, se); assert(c != UINT_MAX); if (c < n) if (++delta == 0) return UV_E2BIG; /* Overflow. */ if (c != n) continue; for (k = 36, q = delta; /* empty */; k += 36) { t = 1; if (k > bias) t = k - bias; if (t > 26) t = 26; if (q < t) break; /* TODO(bnoordhuis) Since 1 <= t <= 26 and therefore * 10 <= y <= 35, we can optimize the long division * into a table-based reciprocal multiplication. */ x = q - t; y = 36 - t; /* 10 <= y <= 35 since 1 <= t <= 26. */ q = x / y; t = t + x % y; /* 1 <= t <= 35 because of y. */ if (*d < de) *(*d)++ = alphabet[t]; } if (*d < de) *(*d)++ = alphabet[q]; delta /= 2; if (first) { delta /= 350; first = 0; } /* No overflow check is needed because |delta| was just * divided by 2 and |delta+delta >= delta + delta/h|. */ h++; delta += delta / h; for (bias = 0; delta > 35 * 26 / 2; bias += 36) delta /= 35; bias += 36 * delta / (delta + 38); delta = 0; todo--; } delta++; n++; } return 0; } long uv__idna_toascii(const char* s, const char* se, char* d, char* de) { const char* si; const char* st; unsigned c; char* ds; int rc; ds = d; si = s; while (si < se) { st = si; c = uv__utf8_decode1(&si, se); if (c == UINT_MAX) return UV_EINVAL; if (c != '.') if (c != 0x3002) /* 。 */ if (c != 0xFF0E) /* . */ if (c != 0xFF61) /* 。 */ continue; rc = uv__idna_toascii_label(s, st, &d, de); if (rc < 0) return rc; if (d < de) *d++ = '.'; s = si; } if (s < se) { rc = uv__idna_toascii_label(s, se, &d, de); if (rc < 0) return rc; } if (d < de) *d++ = '\0'; return d - ds; /* Number of bytes written. */ } gevent-24.11.1/deps/libuv/src/idna.h000066400000000000000000000026571471441230600171130ustar00rootroot00000000000000/* Copyright (c) 2011, 2018 Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
* * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #ifndef UV_SRC_IDNA_H_ #define UV_SRC_IDNA_H_ /* Decode a single codepoint. Returns the codepoint or UINT32_MAX on error. * |p| is updated on success _and_ error, i.e., bad multi-byte sequences are * skipped in their entirety, not just the first bad byte. */ unsigned uv__utf8_decode1(const char** p, const char* pe); /* Convert a UTF-8 domain name to IDNA 2008 / Punycode. A return value >= 0 * is the number of bytes written to |d|, including the trailing nul byte. * A return value < 0 is a libuv error code. |s| and |d| can not overlap. */ long uv__idna_toascii(const char* s, const char* se, char* d, char* de); #endif /* UV_SRC_IDNA_H_ */ gevent-24.11.1/deps/libuv/src/inet.c000066400000000000000000000177031471441230600171300ustar00rootroot00000000000000/* * Copyright (c) 2004 by Internet Systems Consortium, Inc. ("ISC") * Copyright (c) 1996-1999 by Internet Software Consortium. * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include #include #if defined(_MSC_VER) && _MSC_VER < 1600 # include "uv/stdint-msvc2008.h" #else # include #endif #include "uv.h" #include "uv-common.h" #define UV__INET_ADDRSTRLEN 16 #define UV__INET6_ADDRSTRLEN 46 static int inet_ntop4(const unsigned char *src, char *dst, size_t size); static int inet_ntop6(const unsigned char *src, char *dst, size_t size); static int inet_pton4(const char *src, unsigned char *dst); static int inet_pton6(const char *src, unsigned char *dst); int uv_inet_ntop(int af, const void* src, char* dst, size_t size) { switch (af) { case AF_INET: return (inet_ntop4(src, dst, size)); case AF_INET6: return (inet_ntop6(src, dst, size)); default: return UV_EAFNOSUPPORT; } /* NOTREACHED */ } static int inet_ntop4(const unsigned char *src, char *dst, size_t size) { static const char fmt[] = "%u.%u.%u.%u"; char tmp[UV__INET_ADDRSTRLEN]; int l; l = snprintf(tmp, sizeof(tmp), fmt, src[0], src[1], src[2], src[3]); if (l <= 0 || (size_t) l >= size) { return UV_ENOSPC; } uv__strscpy(dst, tmp, size); return 0; } static int inet_ntop6(const unsigned char *src, char *dst, size_t size) { /* * Note that int32_t and int16_t need only be "at least" large enough * to contain a value of the specified size. On some systems, like * Crays, there is no such thing as an integer variable with 16 bits. * Keep this in mind if you think this function should have been coded * to use pointer overlays. All the world's not a VAX. 
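 *
 * (Illustrative example, not from the original source: the 16-byte
 * address 20 01 0d b8 00 00 00 00 00 00 00 00 00 00 00 01 renders as
 * "2001:db8::1"; the longest run of zero words is collapsed to "::" by
 * the code below.)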
*/ char tmp[UV__INET6_ADDRSTRLEN], *tp; struct { int base, len; } best, cur; unsigned int words[sizeof(struct in6_addr) / sizeof(uint16_t)]; int i; /* * Preprocess: * Copy the input (bytewise) array into a wordwise array. * Find the longest run of 0x00's in src[] for :: shorthanding. */ memset(words, '\0', sizeof words); for (i = 0; i < (int) sizeof(struct in6_addr); i++) words[i / 2] |= (src[i] << ((1 - (i % 2)) << 3)); best.base = -1; best.len = 0; cur.base = -1; cur.len = 0; for (i = 0; i < (int) ARRAY_SIZE(words); i++) { if (words[i] == 0) { if (cur.base == -1) cur.base = i, cur.len = 1; else cur.len++; } else { if (cur.base != -1) { if (best.base == -1 || cur.len > best.len) best = cur; cur.base = -1; } } } if (cur.base != -1) { if (best.base == -1 || cur.len > best.len) best = cur; } if (best.base != -1 && best.len < 2) best.base = -1; /* * Format the result. */ tp = tmp; for (i = 0; i < (int) ARRAY_SIZE(words); i++) { /* Are we inside the best run of 0x00's? */ if (best.base != -1 && i >= best.base && i < (best.base + best.len)) { if (i == best.base) *tp++ = ':'; continue; } /* Are we following an initial run of 0x00s or any real hex? */ if (i != 0) *tp++ = ':'; /* Is this address an encapsulated IPv4? */ if (i == 6 && best.base == 0 && (best.len == 6 || (best.len == 7 && words[7] != 0x0001) || (best.len == 5 && words[5] == 0xffff))) { int err = inet_ntop4(src+12, tp, sizeof tmp - (tp - tmp)); if (err) return err; tp += strlen(tp); break; } tp += sprintf(tp, "%x", words[i]); } /* Was it a trailing run of 0x00's? */ if (best.base != -1 && (best.base + best.len) == ARRAY_SIZE(words)) *tp++ = ':'; *tp++ = '\0'; if ((size_t) (tp - tmp) > size) return UV_ENOSPC; uv__strscpy(dst, tmp, size); return 0; } int uv_inet_pton(int af, const char* src, void* dst) { if (src == NULL || dst == NULL) return UV_EINVAL; switch (af) { case AF_INET: return (inet_pton4(src, dst)); case AF_INET6: { int len; char tmp[UV__INET6_ADDRSTRLEN], *s, *p; s = (char*) src; p = strchr(src, '%'); if (p != NULL) { s = tmp; len = p - src; if (len > UV__INET6_ADDRSTRLEN-1) return UV_EINVAL; memcpy(s, src, len); s[len] = '\0'; } return inet_pton6(s, dst); } default: return UV_EAFNOSUPPORT; } /* NOTREACHED */ } static int inet_pton4(const char *src, unsigned char *dst) { static const char digits[] = "0123456789"; int saw_digit, octets, ch; unsigned char tmp[sizeof(struct in_addr)], *tp; saw_digit = 0; octets = 0; *(tp = tmp) = 0; while ((ch = *src++) != '\0') { const char *pch; if ((pch = strchr(digits, ch)) != NULL) { unsigned int nw = *tp * 10 + (pch - digits); if (saw_digit && *tp == 0) return UV_EINVAL; if (nw > 255) return UV_EINVAL; *tp = nw; if (!saw_digit) { if (++octets > 4) return UV_EINVAL; saw_digit = 1; } } else if (ch == '.' && saw_digit) { if (octets == 4) return UV_EINVAL; *++tp = 0; saw_digit = 0; } else return UV_EINVAL; } if (octets < 4) return UV_EINVAL; memcpy(dst, tmp, sizeof(struct in_addr)); return 0; } static int inet_pton6(const char *src, unsigned char *dst) { static const char xdigits_l[] = "0123456789abcdef", xdigits_u[] = "0123456789ABCDEF"; unsigned char tmp[sizeof(struct in6_addr)], *tp, *endp, *colonp; const char *xdigits, *curtok; int ch, seen_xdigits; unsigned int val; memset((tp = tmp), '\0', sizeof tmp); endp = tp + sizeof tmp; colonp = NULL; /* Leading :: requires some special handling. 
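 * (For example, with the illustrative input "::1": the check below
 * consumes the leading pair of colons, colonp records where the elided
 * zero words begin, and the shift at the end of this function expands
 * them, leaving dst as fifteen 0x00 bytes followed by 0x01.)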
*/ if (*src == ':') if (*++src != ':') return UV_EINVAL; curtok = src; seen_xdigits = 0; val = 0; while ((ch = *src++) != '\0') { const char *pch; if ((pch = strchr((xdigits = xdigits_l), ch)) == NULL) pch = strchr((xdigits = xdigits_u), ch); if (pch != NULL) { val <<= 4; val |= (pch - xdigits); if (++seen_xdigits > 4) return UV_EINVAL; continue; } if (ch == ':') { curtok = src; if (!seen_xdigits) { if (colonp) return UV_EINVAL; colonp = tp; continue; } else if (*src == '\0') { return UV_EINVAL; } if (tp + sizeof(uint16_t) > endp) return UV_EINVAL; *tp++ = (unsigned char) (val >> 8) & 0xff; *tp++ = (unsigned char) val & 0xff; seen_xdigits = 0; val = 0; continue; } if (ch == '.' && ((tp + sizeof(struct in_addr)) <= endp)) { int err = inet_pton4(curtok, tp); if (err == 0) { tp += sizeof(struct in_addr); seen_xdigits = 0; break; /*%< '\\0' was seen by inet_pton4(). */ } } return UV_EINVAL; } if (seen_xdigits) { if (tp + sizeof(uint16_t) > endp) return UV_EINVAL; *tp++ = (unsigned char) (val >> 8) & 0xff; *tp++ = (unsigned char) val & 0xff; } if (colonp != NULL) { /* * Since some memmove()'s erroneously fail to handle * overlapping regions, we'll do the shift by hand. */ const int n = tp - colonp; int i; if (tp == endp) return UV_EINVAL; for (i = 1; i <= n; i++) { endp[- i] = colonp[n - i]; colonp[n - i] = 0; } tp = endp; } if (tp != endp) return UV_EINVAL; memcpy(dst, tmp, sizeof tmp); return 0; } gevent-24.11.1/deps/libuv/src/queue.h000066400000000000000000000132671471441230600173230ustar00rootroot00000000000000/* Copyright (c) 2013, Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #ifndef QUEUE_H_ #define QUEUE_H_ #include typedef void *QUEUE[2]; /* Private macros. */ #define QUEUE_NEXT(q) (*(QUEUE **) &((*(q))[0])) #define QUEUE_PREV(q) (*(QUEUE **) &((*(q))[1])) #define QUEUE_PREV_NEXT(q) (QUEUE_NEXT(QUEUE_PREV(q))) #define QUEUE_NEXT_PREV(q) (QUEUE_PREV(QUEUE_NEXT(q))) /* Public macros. */ #define QUEUE_DATA(ptr, type, field) \ ((type *) ((char *) (ptr) - offsetof(type, field))) /* Important note: mutating the list while QUEUE_FOREACH is * iterating over its elements results in undefined behavior. 
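 *
 * Usage sketch (illustrative only; struct waiter and the field name
 * "member" are assumptions, not part of this header):
 *
 *   struct waiter { int id; QUEUE member; };
 *
 *   QUEUE wq;
 *   QUEUE* q;
 *   struct waiter w1, w2;
 *   QUEUE_INIT(&wq);
 *   QUEUE_INSERT_TAIL(&wq, &w1.member);
 *   QUEUE_INSERT_TAIL(&wq, &w2.member);
 *   QUEUE_FOREACH(q, &wq) {
 *     struct waiter* w = QUEUE_DATA(q, struct waiter, member);
 *     use w->id here; per the note above, do not QUEUE_REMOVE(q)
 *     while this loop is running
 *   }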
*/ #define QUEUE_FOREACH(q, h) \ for ((q) = QUEUE_NEXT(h); (q) != (h); (q) = QUEUE_NEXT(q)) #define QUEUE_EMPTY(q) \ ((const QUEUE *) (q) == (const QUEUE *) QUEUE_NEXT(q)) #define QUEUE_HEAD(q) \ (QUEUE_NEXT(q)) #define QUEUE_INIT(q) \ do { \ QUEUE_NEXT(q) = (q); \ QUEUE_PREV(q) = (q); \ } \ while (0) #define QUEUE_ADD(h, n) \ do { \ QUEUE_PREV_NEXT(h) = QUEUE_NEXT(n); \ QUEUE_NEXT_PREV(n) = QUEUE_PREV(h); \ QUEUE_PREV(h) = QUEUE_PREV(n); \ QUEUE_PREV_NEXT(h) = (h); \ } \ while (0) #define QUEUE_SPLIT(h, q, n) \ do { \ QUEUE_PREV(n) = QUEUE_PREV(h); \ QUEUE_PREV_NEXT(n) = (n); \ QUEUE_NEXT(n) = (q); \ QUEUE_PREV(h) = QUEUE_PREV(q); \ QUEUE_PREV_NEXT(h) = (h); \ QUEUE_PREV(q) = (n); \ } \ while (0) #define QUEUE_MOVE(h, n) \ do { \ if (QUEUE_EMPTY(h)) \ QUEUE_INIT(n); \ else { \ QUEUE* q = QUEUE_HEAD(h); \ QUEUE_SPLIT(h, q, n); \ } \ } \ while (0) #define QUEUE_INSERT_HEAD(h, q) \ do { \ QUEUE_NEXT(q) = QUEUE_NEXT(h); \ QUEUE_PREV(q) = (h); \ QUEUE_NEXT_PREV(q) = (q); \ QUEUE_NEXT(h) = (q); \ } \ while (0) #define QUEUE_INSERT_TAIL(h, q) \ do { \ QUEUE_NEXT(q) = (h); \ QUEUE_PREV(q) = QUEUE_PREV(h); \ QUEUE_PREV_NEXT(q) = (q); \ QUEUE_PREV(h) = (q); \ } \ while (0) #define QUEUE_REMOVE(q) \ do { \ QUEUE_PREV_NEXT(q) = QUEUE_NEXT(q); \ QUEUE_NEXT_PREV(q) = QUEUE_PREV(q); \ } \ while (0) #endif /* QUEUE_H_ */ gevent-24.11.1/deps/libuv/src/random.c000066400000000000000000000065371471441230600174540ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "uv-common.h" #ifdef _WIN32 # include "win/internal.h" #else # include "unix/internal.h" #endif static int uv__random(void* buf, size_t buflen) { int rc; #if defined(__PASE__) rc = uv__random_readpath("/dev/urandom", buf, buflen); #elif defined(_AIX) || defined(__QNX__) rc = uv__random_readpath("/dev/random", buf, buflen); #elif defined(__APPLE__) || defined(__OpenBSD__) || \ (defined(__ANDROID_API__) && __ANDROID_API__ >= 28) rc = uv__random_getentropy(buf, buflen); if (rc == UV_ENOSYS) rc = uv__random_devurandom(buf, buflen); #elif defined(__NetBSD__) rc = uv__random_sysctl(buf, buflen); #elif defined(__FreeBSD__) || defined(__linux__) rc = uv__random_getrandom(buf, buflen); if (rc == UV_ENOSYS) rc = uv__random_devurandom(buf, buflen); # if defined(__linux__) switch (rc) { case UV_EACCES: case UV_EIO: case UV_ELOOP: case UV_EMFILE: case UV_ENFILE: case UV_ENOENT: case UV_EPERM: rc = uv__random_sysctl(buf, buflen); break; } # endif #elif defined(_WIN32) uv__once_init(); rc = uv__random_rtlgenrandom(buf, buflen); #else rc = uv__random_devurandom(buf, buflen); #endif return rc; } static void uv__random_work(struct uv__work* w) { uv_random_t* req; req = container_of(w, uv_random_t, work_req); req->status = uv__random(req->buf, req->buflen); } static void uv__random_done(struct uv__work* w, int status) { uv_random_t* req; req = container_of(w, uv_random_t, work_req); uv__req_unregister(req->loop, req); if (status == 0) status = req->status; req->cb(req, status, req->buf, req->buflen); } int uv_random(uv_loop_t* loop, uv_random_t* req, void *buf, size_t buflen, unsigned flags, uv_random_cb cb) { if (buflen > 0x7FFFFFFFu) return UV_E2BIG; if (flags != 0) return UV_EINVAL; if (cb == NULL) return uv__random(buf, buflen); uv__req_init(loop, req, UV_RANDOM); req->loop = loop; req->status = 0; req->cb = cb; req->buf = buf; req->buflen = buflen; uv__work_submit(loop, &req->work_req, UV__WORK_CPU, uv__random_work, uv__random_done); return 0; } gevent-24.11.1/deps/libuv/src/strscpy.c000066400000000000000000000026511471441230600176740ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "strscpy.h" #include /* SSIZE_MAX */ ssize_t uv__strscpy(char* d, const char* s, size_t n) { size_t i; for (i = 0; i < n; i++) if ('\0' == (d[i] = s[i])) return i > SSIZE_MAX ? 
UV_E2BIG : (ssize_t) i; if (i == 0) return 0; d[--i] = '\0'; return UV_E2BIG; } gevent-24.11.1/deps/libuv/src/strscpy.h000066400000000000000000000033241471441230600176770ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_STRSCPY_H_ #define UV_STRSCPY_H_ /* Include uv.h for its definitions of size_t and ssize_t. * size_t can be obtained directly from but ssize_t requires * some hoop jumping on Windows that I didn't want to duplicate here. */ #include "uv.h" /* Copies up to |n-1| bytes from |s| to |d| and always zero-terminates * the result, except when |n==0|. Returns the number of bytes copied * or UV_E2BIG if |d| is too small. * * See https://www.kernel.org/doc/htmldocs/kernel-api/API-strscpy.html */ ssize_t uv__strscpy(char* d, const char* s, size_t n); #endif /* UV_STRSCPY_H_ */ gevent-24.11.1/deps/libuv/src/strtok.c000066400000000000000000000031701471441230600175100ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
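 *
 * A minimal sketch of the public uv_random() API implemented in random.c
 * above (not part of the sources): with a NULL callback and zero flags it
 * runs synchronously on the calling thread, so no loop or request object
 * is needed.
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     int main(void) {
 *       unsigned char buf[32];
 *       int rc;
 *
 *       rc = uv_random(NULL, NULL, buf, sizeof(buf), 0, NULL);
 *       if (rc != 0) {
 *         fprintf(stderr, "uv_random: %s\n", uv_strerror(rc));
 *         return 1;
 *       }
 *       printf("got %zu random bytes\n", sizeof(buf));
 *       return 0;
 *     }
 *
 * Passing a callback instead queues the request on the threadpool and the
 * result is delivered on the loop thread, as uv__random_done() shows.
 *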
*/ #include #include "strtok.h" char* uv__strtok(char* str, const char* sep, char** itr) { const char* sep_itr; char* tmp; char* start; if (str == NULL) start = tmp = *itr; else start = tmp = str; if (tmp == NULL) return NULL; while (*tmp != '\0') { sep_itr = sep; while (*sep_itr != '\0') { if (*tmp == *sep_itr) { *itr = tmp + 1; *tmp = '\0'; return start; } sep_itr++; } tmp++; } *itr = NULL; return start; } gevent-24.11.1/deps/libuv/src/strtok.h000066400000000000000000000023671471441230600175240ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_STRTOK_H_ #define UV_STRTOK_H_ char* uv__strtok(char* str, const char* sep, char** itr); #endif /* UV_STRTOK_H_ */ gevent-24.11.1/deps/libuv/src/threadpool.c000066400000000000000000000233651471441230600203330ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
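 *
 * An illustrative sketch (not part of libuv) of the two internal string
 * helpers defined above; the demo() function and its inputs are made up.
 * uv__strscpy() truncates safely and reports UV_E2BIG, while uv__strtok()
 * is a reentrant strtok replacement that writes NUL bytes into its input.
 *
 *     #include <stdio.h>
 *     #include "strscpy.h"
 *     #include "strtok.h"
 *
 *     static void demo(void) {
 *       char buf[8];
 *       char path[] = "usr/local/lib";
 *       char* state;
 *       char* part;
 *
 *       if (uv__strscpy(buf, "a-rather-long-string", sizeof(buf)) == UV_E2BIG)
 *         printf("truncated to: %s\n", buf);
 *
 *       for (part = uv__strtok(path, "/", &state);
 *            part != NULL;
 *            part = uv__strtok(NULL, "/", &state))
 *         printf("component: %s\n", part);
 *     }
 *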
*/ #include "uv-common.h" #if !defined(_WIN32) # include "unix/internal.h" #endif #include #define MAX_THREADPOOL_SIZE 1024 static uv_once_t once = UV_ONCE_INIT; static uv_cond_t cond; static uv_mutex_t mutex; static unsigned int idle_threads; static unsigned int slow_io_work_running; static unsigned int nthreads; static uv_thread_t* threads; static uv_thread_t default_threads[4]; static QUEUE exit_message; static QUEUE wq; static QUEUE run_slow_work_message; static QUEUE slow_io_pending_wq; static unsigned int slow_work_thread_threshold(void) { return (nthreads + 1) / 2; } static void uv__cancelled(struct uv__work* w) { abort(); } /* To avoid deadlock with uv_cancel() it's crucial that the worker * never holds the global mutex and the loop-local mutex at the same time. */ static void worker(void* arg) { struct uv__work* w; QUEUE* q; int is_slow_work; uv_sem_post((uv_sem_t*) arg); arg = NULL; uv_mutex_lock(&mutex); for (;;) { /* `mutex` should always be locked at this point. */ /* Keep waiting while either no work is present or only slow I/O and we're at the threshold for that. */ while (QUEUE_EMPTY(&wq) || (QUEUE_HEAD(&wq) == &run_slow_work_message && QUEUE_NEXT(&run_slow_work_message) == &wq && slow_io_work_running >= slow_work_thread_threshold())) { idle_threads += 1; uv_cond_wait(&cond, &mutex); idle_threads -= 1; } q = QUEUE_HEAD(&wq); if (q == &exit_message) { uv_cond_signal(&cond); uv_mutex_unlock(&mutex); break; } QUEUE_REMOVE(q); QUEUE_INIT(q); /* Signal uv_cancel() that the work req is executing. */ is_slow_work = 0; if (q == &run_slow_work_message) { /* If we're at the slow I/O threshold, re-schedule until after all other work in the queue is done. */ if (slow_io_work_running >= slow_work_thread_threshold()) { QUEUE_INSERT_TAIL(&wq, q); continue; } /* If we encountered a request to run slow I/O work but there is none to run, that means it's cancelled => Start over. */ if (QUEUE_EMPTY(&slow_io_pending_wq)) continue; is_slow_work = 1; slow_io_work_running++; q = QUEUE_HEAD(&slow_io_pending_wq); QUEUE_REMOVE(q); QUEUE_INIT(q); /* If there is more slow I/O work, schedule it to be run as well. */ if (!QUEUE_EMPTY(&slow_io_pending_wq)) { QUEUE_INSERT_TAIL(&wq, &run_slow_work_message); if (idle_threads > 0) uv_cond_signal(&cond); } } uv_mutex_unlock(&mutex); w = QUEUE_DATA(q, struct uv__work, wq); w->work(w); uv_mutex_lock(&w->loop->wq_mutex); w->work = NULL; /* Signal uv_cancel() that the work req is done executing. */ QUEUE_INSERT_TAIL(&w->loop->wq, &w->wq); uv_async_send(&w->loop->wq_async); uv_mutex_unlock(&w->loop->wq_mutex); /* Lock `mutex` since that is expected at the start of the next * iteration. */ uv_mutex_lock(&mutex); if (is_slow_work) { /* `slow_io_work_running` is protected by `mutex`. */ slow_io_work_running--; } } } static void post(QUEUE* q, enum uv__work_kind kind) { uv_mutex_lock(&mutex); if (kind == UV__WORK_SLOW_IO) { /* Insert into a separate queue. */ QUEUE_INSERT_TAIL(&slow_io_pending_wq, q); if (!QUEUE_EMPTY(&run_slow_work_message)) { /* Running slow I/O tasks is already scheduled => Nothing to do here. The worker that runs said other task will schedule this one as well. */ uv_mutex_unlock(&mutex); return; } q = &run_slow_work_message; } QUEUE_INSERT_TAIL(&wq, q); if (idle_threads > 0) uv_cond_signal(&cond); uv_mutex_unlock(&mutex); } #ifdef __MVS__ /* TODO(itodorov) - zos: revisit when Woz compiler is available. 
*/ __attribute__((destructor)) #endif void uv__threadpool_cleanup(void) { unsigned int i; if (nthreads == 0) return; #ifndef __MVS__ /* TODO(gabylb) - zos: revisit when Woz compiler is available. */ post(&exit_message, UV__WORK_CPU); #endif for (i = 0; i < nthreads; i++) if (uv_thread_join(threads + i)) abort(); if (threads != default_threads) uv__free(threads); uv_mutex_destroy(&mutex); uv_cond_destroy(&cond); threads = NULL; nthreads = 0; } static void init_threads(void) { unsigned int i; const char* val; uv_sem_t sem; nthreads = ARRAY_SIZE(default_threads); val = getenv("UV_THREADPOOL_SIZE"); if (val != NULL) nthreads = atoi(val); if (nthreads == 0) nthreads = 1; if (nthreads > MAX_THREADPOOL_SIZE) nthreads = MAX_THREADPOOL_SIZE; threads = default_threads; if (nthreads > ARRAY_SIZE(default_threads)) { threads = uv__malloc(nthreads * sizeof(threads[0])); if (threads == NULL) { nthreads = ARRAY_SIZE(default_threads); threads = default_threads; } } if (uv_cond_init(&cond)) abort(); if (uv_mutex_init(&mutex)) abort(); QUEUE_INIT(&wq); QUEUE_INIT(&slow_io_pending_wq); QUEUE_INIT(&run_slow_work_message); if (uv_sem_init(&sem, 0)) abort(); for (i = 0; i < nthreads; i++) if (uv_thread_create(threads + i, worker, &sem)) abort(); for (i = 0; i < nthreads; i++) uv_sem_wait(&sem); uv_sem_destroy(&sem); } #ifndef _WIN32 static void reset_once(void) { uv_once_t child_once = UV_ONCE_INIT; memcpy(&once, &child_once, sizeof(child_once)); } #endif static void init_once(void) { #ifndef _WIN32 /* Re-initialize the threadpool after fork. * Note that this discards the global mutex and condition as well * as the work queue. */ if (pthread_atfork(NULL, NULL, &reset_once)) abort(); #endif init_threads(); } void uv__work_submit(uv_loop_t* loop, struct uv__work* w, enum uv__work_kind kind, void (*work)(struct uv__work* w), void (*done)(struct uv__work* w, int status)) { uv_once(&once, init_once); w->loop = loop; w->work = work; w->done = done; post(&w->wq, kind); } static int uv__work_cancel(uv_loop_t* loop, uv_req_t* req, struct uv__work* w) { int cancelled; uv_mutex_lock(&mutex); uv_mutex_lock(&w->loop->wq_mutex); cancelled = !QUEUE_EMPTY(&w->wq) && w->work != NULL; if (cancelled) QUEUE_REMOVE(&w->wq); uv_mutex_unlock(&w->loop->wq_mutex); uv_mutex_unlock(&mutex); if (!cancelled) return UV_EBUSY; w->work = uv__cancelled; uv_mutex_lock(&loop->wq_mutex); QUEUE_INSERT_TAIL(&loop->wq, &w->wq); uv_async_send(&loop->wq_async); uv_mutex_unlock(&loop->wq_mutex); return 0; } void uv__work_done(uv_async_t* handle) { struct uv__work* w; uv_loop_t* loop; QUEUE* q; QUEUE wq; int err; loop = container_of(handle, uv_loop_t, wq_async); uv_mutex_lock(&loop->wq_mutex); QUEUE_MOVE(&loop->wq, &wq); uv_mutex_unlock(&loop->wq_mutex); while (!QUEUE_EMPTY(&wq)) { q = QUEUE_HEAD(&wq); QUEUE_REMOVE(q); w = container_of(q, struct uv__work, wq); err = (w->work == uv__cancelled) ? 
UV_ECANCELED : 0; w->done(w, err); } } static void uv__queue_work(struct uv__work* w) { uv_work_t* req = container_of(w, uv_work_t, work_req); req->work_cb(req); } static void uv__queue_done(struct uv__work* w, int err) { uv_work_t* req; req = container_of(w, uv_work_t, work_req); uv__req_unregister(req->loop, req); if (req->after_work_cb == NULL) return; req->after_work_cb(req, err); } int uv_queue_work(uv_loop_t* loop, uv_work_t* req, uv_work_cb work_cb, uv_after_work_cb after_work_cb) { if (work_cb == NULL) return UV_EINVAL; uv__req_init(loop, req, UV_WORK); req->loop = loop; req->work_cb = work_cb; req->after_work_cb = after_work_cb; uv__work_submit(loop, &req->work_req, UV__WORK_CPU, uv__queue_work, uv__queue_done); return 0; } int uv_cancel(uv_req_t* req) { struct uv__work* wreq; uv_loop_t* loop; switch (req->type) { case UV_FS: loop = ((uv_fs_t*) req)->loop; wreq = &((uv_fs_t*) req)->work_req; break; case UV_GETADDRINFO: loop = ((uv_getaddrinfo_t*) req)->loop; wreq = &((uv_getaddrinfo_t*) req)->work_req; break; case UV_GETNAMEINFO: loop = ((uv_getnameinfo_t*) req)->loop; wreq = &((uv_getnameinfo_t*) req)->work_req; break; case UV_RANDOM: loop = ((uv_random_t*) req)->loop; wreq = &((uv_random_t*) req)->work_req; break; case UV_WORK: loop = ((uv_work_t*) req)->loop; wreq = &((uv_work_t*) req)->work_req; break; default: return UV_EINVAL; } return uv__work_cancel(loop, req, wreq); } gevent-24.11.1/deps/libuv/src/timer.c000066400000000000000000000111731471441230600173040ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "uv-common.h" #include "heap-inl.h" #include #include static struct heap *timer_heap(const uv_loop_t* loop) { #ifdef _WIN32 return (struct heap*) loop->timer_heap; #else return (struct heap*) &loop->timer_heap; #endif } static int timer_less_than(const struct heap_node* ha, const struct heap_node* hb) { const uv_timer_t* a; const uv_timer_t* b; a = container_of(ha, uv_timer_t, heap_node); b = container_of(hb, uv_timer_t, heap_node); if (a->timeout < b->timeout) return 1; if (b->timeout < a->timeout) return 0; /* Compare start_id when both have the same timeout. start_id is * allocated with loop->timer_counter in uv_timer_start(). 
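 *
 * A minimal sketch of the public threadpool API implemented above (the
 * callback names are made up): work_cb runs on a worker thread, after_cb
 * runs back on the loop thread.
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     static void work_cb(uv_work_t* req) {
 *       puts("running on a threadpool worker");
 *     }
 *
 *     static void after_cb(uv_work_t* req, int status) {
 *       if (status == UV_ECANCELED)
 *         puts("cancelled before it ran");
 *       else
 *         puts("back on the loop thread");
 *     }
 *
 *     int main(void) {
 *       uv_loop_t* loop = uv_default_loop();
 *       uv_work_t req;
 *
 *       uv_queue_work(loop, &req, work_cb, after_cb);
 *       uv_run(loop, UV_RUN_DEFAULT);
 *       return uv_loop_close(loop);
 *     }
 *
 * uv_cancel((uv_req_t*) &req) only succeeds while the request is still
 * waiting in the queue; otherwise uv__work_cancel() above returns UV_EBUSY,
 * and a cancelled request reaches after_cb with status UV_ECANCELED.
 *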
*/ return a->start_id < b->start_id; } int uv_timer_init(uv_loop_t* loop, uv_timer_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_TIMER); handle->timer_cb = NULL; handle->timeout = 0; handle->repeat = 0; return 0; } int uv_timer_start(uv_timer_t* handle, uv_timer_cb cb, uint64_t timeout, uint64_t repeat) { uint64_t clamped_timeout; if (uv__is_closing(handle) || cb == NULL) return UV_EINVAL; if (uv__is_active(handle)) uv_timer_stop(handle); clamped_timeout = handle->loop->time + timeout; if (clamped_timeout < timeout) clamped_timeout = (uint64_t) -1; handle->timer_cb = cb; handle->timeout = clamped_timeout; handle->repeat = repeat; /* start_id is the second index to be compared in timer_less_than() */ handle->start_id = handle->loop->timer_counter++; heap_insert(timer_heap(handle->loop), (struct heap_node*) &handle->heap_node, timer_less_than); uv__handle_start(handle); return 0; } int uv_timer_stop(uv_timer_t* handle) { if (!uv__is_active(handle)) return 0; heap_remove(timer_heap(handle->loop), (struct heap_node*) &handle->heap_node, timer_less_than); uv__handle_stop(handle); return 0; } int uv_timer_again(uv_timer_t* handle) { if (handle->timer_cb == NULL) return UV_EINVAL; if (handle->repeat) { uv_timer_stop(handle); uv_timer_start(handle, handle->timer_cb, handle->repeat, handle->repeat); } return 0; } void uv_timer_set_repeat(uv_timer_t* handle, uint64_t repeat) { handle->repeat = repeat; } uint64_t uv_timer_get_repeat(const uv_timer_t* handle) { return handle->repeat; } uint64_t uv_timer_get_due_in(const uv_timer_t* handle) { if (handle->loop->time >= handle->timeout) return 0; return handle->timeout - handle->loop->time; } int uv__next_timeout(const uv_loop_t* loop) { const struct heap_node* heap_node; const uv_timer_t* handle; uint64_t diff; heap_node = heap_min(timer_heap(loop)); if (heap_node == NULL) return -1; /* block indefinitely */ handle = container_of(heap_node, uv_timer_t, heap_node); if (handle->timeout <= loop->time) return 0; diff = handle->timeout - loop->time; if (diff > INT_MAX) diff = INT_MAX; return (int) diff; } void uv__run_timers(uv_loop_t* loop) { struct heap_node* heap_node; uv_timer_t* handle; for (;;) { heap_node = heap_min(timer_heap(loop)); if (heap_node == NULL) break; handle = container_of(heap_node, uv_timer_t, heap_node); if (handle->timeout > loop->time) break; uv_timer_stop(handle); uv_timer_again(handle); handle->timer_cb(handle); } } void uv__timer_close(uv_timer_t* handle) { uv_timer_stop(handle); } gevent-24.11.1/deps/libuv/src/unix/000077500000000000000000000000001471441230600170005ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/src/unix/aix-common.c000066400000000000000000000057041471441230600212210ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
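 *
 * A minimal sketch of the timer API from timer.c above (handle and
 * callback names are made up); timeout and repeat are in milliseconds,
 * and a repeat of 0 makes the timer one-shot:
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     static void on_timeout(uv_timer_t* handle) {
 *       puts("timer fired");
 *       uv_close((uv_handle_t*) handle, NULL);
 *     }
 *
 *     int main(void) {
 *       uv_loop_t* loop = uv_default_loop();
 *       uv_timer_t timer;
 *
 *       uv_timer_init(loop, &timer);
 *       uv_timer_start(&timer, on_timeout, 100, 0);
 *       uv_run(loop, UV_RUN_DEFAULT);
 *       return uv_loop_close(loop);
 *     }
 *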
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include extern char* original_exepath; extern uv_mutex_t process_title_mutex; extern uv_once_t process_title_mutex_once; extern void init_process_title_mutex_once(void); uint64_t uv__hrtime(uv_clocktype_t type) { uint64_t G = 1000000000; timebasestruct_t t; read_wall_time(&t, TIMEBASE_SZ); time_base_to_time(&t, TIMEBASE_SZ); return (uint64_t) t.tb_high * G + t.tb_low; } /* * We could use a static buffer for the path manipulations that we need outside * of the function, but this function could be called by multiple consumers and * we don't want to potentially create a race condition in the use of snprintf. * There is no direct way of getting the exe path in AIX - either through /procfs * or through some libc APIs. The below approach is to parse the argv[0]'s pattern * and use it in conjunction with PATH environment variable to craft one. */ int uv_exepath(char* buffer, size_t* size) { int res; char args[UV__PATH_MAX]; size_t cached_len; struct procsinfo pi; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); if (original_exepath != NULL) { cached_len = strlen(original_exepath); *size -= 1; if (*size > cached_len) *size = cached_len; memcpy(buffer, original_exepath, *size); buffer[*size] = '\0'; uv_mutex_unlock(&process_title_mutex); return 0; } uv_mutex_unlock(&process_title_mutex); pi.pi_pid = getpid(); res = getargs(&pi, sizeof(pi), args, sizeof(args)); if (res < 0) return UV_EINVAL; return uv__search_path(args, buffer, size); } gevent-24.11.1/deps/libuv/src/unix/aix.c000066400000000000000000001006011471441230600177230ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
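 *
 * A minimal sketch of the uv_exepath() contract implemented above for AIX
 * (other platforms implement it elsewhere); size is an in/out parameter
 * that starts as the buffer size and is updated to the length copied:
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     int main(void) {
 *       char path[4096];
 *       size_t size = sizeof(path);
 *
 *       if (uv_exepath(path, &size) == 0)
 *         printf("executable: %.*s\n", (int) size, path);
 *       return 0;
 *     }
 *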
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef HAVE_SYS_AHAFS_EVPRODS_H #include #endif #include #include #include #include #include #define RDWR_BUF_SIZE 4096 #define EQ(a,b) (strcmp(a,b) == 0) char* original_exepath = NULL; uv_mutex_t process_title_mutex; uv_once_t process_title_mutex_once = UV_ONCE_INIT; static void* args_mem = NULL; static char** process_argv = NULL; static int process_argc = 0; static char* process_title_ptr = NULL; void init_process_title_mutex_once(void) { uv_mutex_init(&process_title_mutex); } int uv__platform_loop_init(uv_loop_t* loop) { loop->fs_fd = -1; /* Passing maxfd of -1 should mean the limit is determined * by the user's ulimit or the global limit as per the doc */ loop->backend_fd = pollset_create(-1); if (loop->backend_fd == -1) return -1; return 0; } void uv__platform_loop_delete(uv_loop_t* loop) { if (loop->fs_fd != -1) { uv__close(loop->fs_fd); loop->fs_fd = -1; } if (loop->backend_fd != -1) { pollset_destroy(loop->backend_fd); loop->backend_fd = -1; } } int uv__io_fork(uv_loop_t* loop) { uv__platform_loop_delete(loop); return uv__platform_loop_init(loop); } int uv__io_check_fd(uv_loop_t* loop, int fd) { struct poll_ctl pc; pc.events = POLLIN; pc.cmd = PS_MOD; /* Equivalent to PS_ADD if the fd is not in the pollset. */ pc.fd = fd; if (pollset_ctl(loop->backend_fd, &pc, 1)) return UV__ERR(errno); pc.cmd = PS_DELETE; if (pollset_ctl(loop->backend_fd, &pc, 1)) abort(); return 0; } void uv__io_poll(uv_loop_t* loop, int timeout) { struct pollfd events[1024]; struct pollfd pqry; struct pollfd* pe; struct poll_ctl pc; QUEUE* q; uv__io_t* w; uint64_t base; uint64_t diff; int have_signals; int nevents; int count; int nfds; int i; int rc; int add_failed; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } while (!QUEUE_EMPTY(&loop->watcher_queue)) { q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); assert(w->fd < (int) loop->nwatchers); pc.events = w->pevents; pc.fd = w->fd; add_failed = 0; if (w->events == 0) { pc.cmd = PS_ADD; if (pollset_ctl(loop->backend_fd, &pc, 1)) { if (errno != EINVAL) { assert(0 && "Failed to add file descriptor (pc.fd) to pollset"); abort(); } /* Check if the fd is already in the pollset */ pqry.fd = pc.fd; rc = pollset_query(loop->backend_fd, &pqry); switch (rc) { case -1: assert(0 && "Failed to query pollset for file descriptor"); abort(); case 0: assert(0 && "Pollset does not contain file descriptor"); abort(); } /* If we got here then the pollset already contained the file descriptor even though * we didn't think it should. This probably shouldn't happen, but we can continue. */ add_failed = 1; } } if (w->events != 0 || add_failed) { /* Modify, potentially removing events -- need to delete then add. * Could maybe mod if we knew for sure no events are removed, but * content of w->events is handled above as not reliable (falls back) * so may require a pollset_query() which would have to be pretty cheap * compared to a PS_DELETE to be worth optimizing. Alternatively, could * lazily remove events, squelching them in the mean time. 
*/ pc.cmd = PS_DELETE; if (pollset_ctl(loop->backend_fd, &pc, 1)) { assert(0 && "Failed to delete file descriptor (pc.fd) from pollset"); abort(); } pc.cmd = PS_ADD; if (pollset_ctl(loop->backend_fd, &pc, 1)) { assert(0 && "Failed to add file descriptor (pc.fd) to pollset"); abort(); } } w->events = w->pevents; } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } for (;;) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); nfds = pollset_poll(loop->backend_fd, events, ARRAY_SIZE(events), timeout); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; if (timeout == -1) continue; if (timeout > 0) goto update_timeout; } assert(timeout != -1); return; } if (nfds == -1) { if (errno != EINTR) { abort(); } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } have_signals = 0; nevents = 0; assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = (void*) events; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; for (i = 0; i < nfds; i++) { pe = events + i; pc.cmd = PS_DELETE; pc.fd = pe->fd; /* Skip invalidated events, see uv__platform_invalidate_fd */ if (pc.fd == -1) continue; assert(pc.fd >= 0); assert((unsigned) pc.fd < loop->nwatchers); w = loop->watchers[pc.fd]; if (w == NULL) { /* File descriptor that we've stopped watching, disarm it. * * Ignore all errors because we may be racing with another thread * when the file descriptor is closed. */ pollset_ctl(loop->backend_fd, &pc, 1); continue; } /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, pe->revents); } nevents++; } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. 
*/ timeout = 0; continue; } return; } if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); diff = loop->time - base; if (diff >= (uint64_t) timeout) return; timeout -= diff; } } uint64_t uv_get_free_memory(void) { perfstat_memory_total_t mem_total; int result = perfstat_memory_total(NULL, &mem_total, sizeof(mem_total), 1); if (result == -1) { return 0; } return mem_total.real_free * 4096; } uint64_t uv_get_total_memory(void) { perfstat_memory_total_t mem_total; int result = perfstat_memory_total(NULL, &mem_total, sizeof(mem_total), 1); if (result == -1) { return 0; } return mem_total.real_total * 4096; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } void uv_loadavg(double avg[3]) { perfstat_cpu_total_t ps_total; int result = perfstat_cpu_total(NULL, &ps_total, sizeof(ps_total), 1); if (result == -1) { avg[0] = 0.; avg[1] = 0.; avg[2] = 0.; return; } avg[0] = ps_total.loadavg[0] / (double)(1 << SBITS); avg[1] = ps_total.loadavg[1] / (double)(1 << SBITS); avg[2] = ps_total.loadavg[2] / (double)(1 << SBITS); } #ifdef HAVE_SYS_AHAFS_EVPRODS_H static char* uv__rawname(const char* cp, char (*dst)[FILENAME_MAX+1]) { char* dp; dp = rindex(cp, '/'); if (dp == 0) return 0; snprintf(*dst, sizeof(*dst), "%.*s/r%s", (int) (dp - cp), cp, dp + 1); return *dst; } /* * Determine whether given pathname is a directory * Returns 0 if the path is a directory, -1 if not * * Note: Opportunity here for more detailed error information but * that requires changing callers of this function as well */ static int uv__path_is_a_directory(char* filename) { struct stat statbuf; if (stat(filename, &statbuf) < 0) return -1; /* failed: not a directory, assume it is a file */ if (statbuf.st_type == VDIR) return 0; return -1; } /* * Check whether AHAFS is mounted. * Returns 0 if AHAFS is mounted, or an error code < 0 on failure */ static int uv__is_ahafs_mounted(void){ char rawbuf[FILENAME_MAX+1]; int rv, i = 2; struct vmount *p; int size_multiplier = 10; size_t siz = sizeof(struct vmount)*size_multiplier; struct vmount *vmt; const char *dev = "/aha"; char *obj, *stub; p = uv__malloc(siz); if (p == NULL) return UV__ERR(errno); /* Retrieve all mounted filesystems */ rv = mntctl(MCTL_QUERY, siz, (char*)p); if (rv < 0) return UV__ERR(errno); if (rv == 0) { /* buffer was not large enough, reallocate to correct size */ siz = *(int*)p; uv__free(p); p = uv__malloc(siz); if (p == NULL) return UV__ERR(errno); rv = mntctl(MCTL_QUERY, siz, (char*)p); if (rv < 0) return UV__ERR(errno); } /* Look for dev in filesystems mount info */ for(vmt = p, i = 0; i < rv; i++) { obj = vmt2dataptr(vmt, VMT_OBJECT); /* device */ stub = vmt2dataptr(vmt, VMT_STUB); /* mount point */ if (EQ(obj, dev) || EQ(uv__rawname(obj, &rawbuf), dev) || EQ(stub, dev)) { uv__free(p); /* Found a match */ return 0; } vmt = (struct vmount *) ((char *) vmt + vmt->vmt_length); } /* /aha is required for monitoring filesystem changes */ return -1; } /* * Recursive call to mkdir() to create intermediate folders, if any * Returns code from mkdir call */ static int uv__makedir_p(const char *dir) { char tmp[256]; char *p = NULL; size_t len; int err; /* TODO(bnoordhuis) Check uv__strscpy() return value. 
*/ uv__strscpy(tmp, dir, sizeof(tmp)); len = strlen(tmp); if (tmp[len - 1] == '/') tmp[len - 1] = 0; for (p = tmp + 1; *p; p++) { if (*p == '/') { *p = 0; err = mkdir(tmp, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH); if (err != 0 && errno != EEXIST) return err; *p = '/'; } } return mkdir(tmp, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH); } /* * Creates necessary subdirectories in the AIX Event Infrastructure * file system for monitoring the object specified. * Returns code from mkdir call */ static int uv__make_subdirs_p(const char *filename) { char cmd[2048]; char *p; int rc = 0; /* Strip off the monitor file name */ p = strrchr(filename, '/'); if (p == NULL) return 0; if (uv__path_is_a_directory((char*)filename) == 0) { sprintf(cmd, "/aha/fs/modDir.monFactory"); } else { sprintf(cmd, "/aha/fs/modFile.monFactory"); } strncat(cmd, filename, (p - filename)); rc = uv__makedir_p(cmd); if (rc == -1 && errno != EEXIST){ return UV__ERR(errno); } return rc; } /* * Checks if /aha is mounted, then proceeds to set up the monitoring * objects for the specified file. * Returns 0 on success, or an error code < 0 on failure */ static int uv__setup_ahafs(const char* filename, int *fd) { int rc = 0; char mon_file_write_string[RDWR_BUF_SIZE]; char mon_file[PATH_MAX]; int file_is_directory = 0; /* -1 == NO, 0 == YES */ /* Create monitor file name for object */ file_is_directory = uv__path_is_a_directory((char*)filename); if (file_is_directory == 0) sprintf(mon_file, "/aha/fs/modDir.monFactory"); else sprintf(mon_file, "/aha/fs/modFile.monFactory"); if ((strlen(mon_file) + strlen(filename) + 5) > PATH_MAX) return UV_ENAMETOOLONG; /* Make the necessary subdirectories for the monitor file */ rc = uv__make_subdirs_p(filename); if (rc == -1 && errno != EEXIST) return rc; strcat(mon_file, filename); strcat(mon_file, ".mon"); *fd = 0; errno = 0; /* Open the monitor file, creating it if necessary */ *fd = open(mon_file, O_CREAT|O_RDWR); if (*fd < 0) return UV__ERR(errno); /* Write out the monitoring specifications. * In this case, we are monitoring for a state change event type * CHANGED=YES * We will be waiting in select call, rather than a read: * WAIT_TYPE=WAIT_IN_SELECT * We only want minimal information for files: * INFO_LVL=1 * For directories, we want more information to track what file * caused the change * INFO_LVL=2 */ if (file_is_directory == 0) sprintf(mon_file_write_string, "CHANGED=YES;WAIT_TYPE=WAIT_IN_SELECT;INFO_LVL=2"); else sprintf(mon_file_write_string, "CHANGED=YES;WAIT_TYPE=WAIT_IN_SELECT;INFO_LVL=1"); rc = write(*fd, mon_file_write_string, strlen(mon_file_write_string)+1); if (rc < 0 && errno != EBUSY) return UV__ERR(errno); return 0; } /* * Skips a specified number of lines in the buffer passed in. * Walks the buffer pointed to by p and attempts to skip n lines. * Returns the total number of lines skipped */ static int uv__skip_lines(char **p, int n) { int lines = 0; while(n > 0) { *p = strchr(*p, '\n'); if (!p) return lines; (*p)++; n--; lines++; } return lines; } /* * Parse the event occurrence data to figure out what event just occurred * and take proper action. 
* * The buf is a pointer to the buffer containing the event occurrence data * Returns 0 on success, -1 if unrecoverable error in parsing * */ static int uv__parse_data(char *buf, int *events, uv_fs_event_t* handle) { int evp_rc, i; char *p; char filename[PATH_MAX]; /* To be used when handling directories */ p = buf; *events = 0; /* Clean the filename buffer*/ for(i = 0; i < PATH_MAX; i++) { filename[i] = 0; } i = 0; /* Check for BUF_WRAP */ if (strncmp(buf, "BUF_WRAP", strlen("BUF_WRAP")) == 0) { assert(0 && "Buffer wrap detected, Some event occurrences lost!"); return 0; } /* Since we are using the default buffer size (4K), and have specified * INFO_LVL=1, we won't see any EVENT_OVERFLOW conditions. Applications * should check for this keyword if they are using an INFO_LVL of 2 or * higher, and have a buffer size of <= 4K */ /* Skip to RC_FROM_EVPROD */ if (uv__skip_lines(&p, 9) != 9) return -1; if (sscanf(p, "RC_FROM_EVPROD=%d\nEND_EVENT_DATA", &evp_rc) == 1) { if (uv__path_is_a_directory(handle->path) == 0) { /* Directory */ if (evp_rc == AHAFS_MODDIR_UNMOUNT || evp_rc == AHAFS_MODDIR_REMOVE_SELF) { /* The directory is no longer available for monitoring */ *events = UV_RENAME; handle->dir_filename = NULL; } else { /* A file was added/removed inside the directory */ *events = UV_CHANGE; /* Get the EVPROD_INFO */ if (uv__skip_lines(&p, 1) != 1) return -1; /* Scan out the name of the file that triggered the event*/ if (sscanf(p, "BEGIN_EVPROD_INFO\n%sEND_EVPROD_INFO", filename) == 1) { handle->dir_filename = uv__strdup((const char*)&filename); } else return -1; } } else { /* Regular File */ if (evp_rc == AHAFS_MODFILE_RENAME) *events = UV_RENAME; else *events = UV_CHANGE; } } else return -1; return 0; } /* This is the internal callback */ static void uv__ahafs_event(uv_loop_t* loop, uv__io_t* event_watch, unsigned int fflags) { char result_data[RDWR_BUF_SIZE]; int bytes, rc = 0; uv_fs_event_t* handle; int events = 0; char fname[PATH_MAX]; char *p; handle = container_of(event_watch, uv_fs_event_t, event_watcher); /* At this point, we assume that polling has been done on the * file descriptor, so we can just read the AHAFS event occurrence * data and parse its results without having to block anything */ bytes = pread(event_watch->fd, result_data, RDWR_BUF_SIZE, 0); assert((bytes >= 0) && "uv__ahafs_event - Error reading monitor file"); /* In file / directory move cases, AIX Event infrastructure * produces a second event with no data. * Ignore it and return gracefully. */ if(bytes == 0) return; /* Parse the data */ if(bytes > 0) rc = uv__parse_data(result_data, &events, handle); /* Unrecoverable error */ if (rc == -1) return; /* For directory changes, the name of the files that triggered the change * are never absolute pathnames */ if (uv__path_is_a_directory(handle->path) == 0) { p = handle->dir_filename; } else { p = strrchr(handle->path, '/'); if (p == NULL) p = handle->path; else p++; } /* TODO(bnoordhuis) Check uv__strscpy() return value. 
*/ uv__strscpy(fname, p, sizeof(fname)); handle->cb(handle, fname, events, 0); } #endif int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { #ifdef HAVE_SYS_AHAFS_EVPRODS_H uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_EVENT); return 0; #else return UV_ENOSYS; #endif } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* filename, unsigned int flags) { #ifdef HAVE_SYS_AHAFS_EVPRODS_H int fd, rc, str_offset = 0; char cwd[PATH_MAX]; char absolute_path[PATH_MAX]; char readlink_cwd[PATH_MAX]; struct timeval zt; fd_set pollfd; /* Figure out whether filename is absolute or not */ if (filename[0] == '\0') { /* Missing a pathname */ return UV_ENOENT; } else if (filename[0] == '/') { /* We have absolute pathname */ /* TODO(bnoordhuis) Check uv__strscpy() return value. */ uv__strscpy(absolute_path, filename, sizeof(absolute_path)); } else { /* We have a relative pathname, compose the absolute pathname */ snprintf(cwd, sizeof(cwd), "/proc/%lu/cwd", (unsigned long) getpid()); rc = readlink(cwd, readlink_cwd, sizeof(readlink_cwd) - 1); if (rc < 0) return rc; /* readlink does not null terminate our string */ readlink_cwd[rc] = '\0'; if (filename[0] == '.' && filename[1] == '/') str_offset = 2; snprintf(absolute_path, sizeof(absolute_path), "%s%s", readlink_cwd, filename + str_offset); } if (uv__is_ahafs_mounted() < 0) /* /aha checks failed */ return UV_ENOSYS; /* Setup ahafs */ rc = uv__setup_ahafs((const char *)absolute_path, &fd); if (rc != 0) return rc; /* Setup/Initialize all the libuv routines */ uv__handle_start(handle); uv__io_init(&handle->event_watcher, uv__ahafs_event, fd); handle->path = uv__strdup(filename); handle->cb = cb; handle->dir_filename = NULL; uv__io_start(handle->loop, &handle->event_watcher, POLLIN); /* AHAFS wants someone to poll for it to start mointoring. * so kick-start it so that we don't miss an event in the * eventuality of an event that occurs in the current loop. */ do { memset(&zt, 0, sizeof(zt)); FD_ZERO(&pollfd); FD_SET(fd, &pollfd); rc = select(fd + 1, &pollfd, NULL, NULL, &zt); } while (rc == -1 && errno == EINTR); return 0; #else return UV_ENOSYS; #endif } int uv_fs_event_stop(uv_fs_event_t* handle) { #ifdef HAVE_SYS_AHAFS_EVPRODS_H if (!uv__is_active(handle)) return 0; uv__io_close(handle->loop, &handle->event_watcher); uv__handle_stop(handle); if (uv__path_is_a_directory(handle->path) == 0) { uv__free(handle->dir_filename); handle->dir_filename = NULL; } uv__free(handle->path); handle->path = NULL; uv__close(handle->event_watcher.fd); handle->event_watcher.fd = -1; return 0; #else return UV_ENOSYS; #endif } void uv__fs_event_close(uv_fs_event_t* handle) { #ifdef HAVE_SYS_AHAFS_EVPRODS_H uv_fs_event_stop(handle); #else UNREACHABLE(); #endif } char** uv_setup_args(int argc, char** argv) { char exepath[UV__PATH_MAX]; char** new_argv; size_t size; char* s; int i; if (argc <= 0) return argv; /* Save the original pointer to argv. * AIX uses argv to read the process name. * (Not the memory pointed to by argv[0..n] as on Linux.) */ process_argv = argv; process_argc = argc; /* Use argv[0] to determine value for uv_exepath(). */ size = sizeof(exepath); if (uv__search_path(argv[0], exepath, &size) == 0) { uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); original_exepath = uv__strdup(exepath); uv_mutex_unlock(&process_title_mutex); } /* Calculate how much memory we need for the argv strings. 
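 *
 * A minimal sketch of the uv_fs_event API whose AIX implementation is
 * completed above (the watched path and callback name are arbitrary). On
 * AIX this needs the AHAFS event infrastructure mounted at /aha, otherwise
 * uv_fs_event_start() returns UV_ENOSYS; the loop below runs until
 * interrupted:
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     static void on_change(uv_fs_event_t* handle,
 *                           const char* filename,
 *                           int events,
 *                           int status) {
 *       if (events & UV_RENAME)
 *         printf("renamed: %s\n", filename ? filename : "");
 *       if (events & UV_CHANGE)
 *         printf("changed: %s\n", filename ? filename : "");
 *     }
 *
 *     int main(void) {
 *       uv_loop_t* loop = uv_default_loop();
 *       uv_fs_event_t handle;
 *
 *       uv_fs_event_init(loop, &handle);
 *       uv_fs_event_start(&handle, on_change, "/tmp", 0);
 *       return uv_run(loop, UV_RUN_DEFAULT);
 *     }
 *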
*/ size = 0; for (i = 0; i < argc; i++) size += strlen(argv[i]) + 1; /* Add space for the argv pointers. */ size += (argc + 1) * sizeof(char*); new_argv = uv__malloc(size); if (new_argv == NULL) return argv; args_mem = new_argv; /* Copy over the strings and set up the pointer table. */ s = (char*) &new_argv[argc + 1]; for (i = 0; i < argc; i++) { size = strlen(argv[i]) + 1; memcpy(s, argv[i], size); new_argv[i] = s; s += size; } new_argv[i] = NULL; return new_argv; } int uv_set_process_title(const char* title) { char* new_title; /* If uv_setup_args wasn't called or failed, we can't continue. */ if (process_argv == NULL || args_mem == NULL) return UV_ENOBUFS; /* We cannot free this pointer when libuv shuts down, * the process may still be using it. */ new_title = uv__strdup(title); if (new_title == NULL) return UV_ENOMEM; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); /* If this is the first time this is set, * don't free and set argv[1] to NULL. */ if (process_title_ptr != NULL) uv__free(process_title_ptr); process_title_ptr = new_title; process_argv[0] = process_title_ptr; if (process_argc > 1) process_argv[1] = NULL; uv_mutex_unlock(&process_title_mutex); return 0; } int uv_get_process_title(char* buffer, size_t size) { size_t len; if (buffer == NULL || size == 0) return UV_EINVAL; /* If uv_setup_args wasn't called, we can't continue. */ if (process_argv == NULL) return UV_ENOBUFS; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); len = strlen(process_argv[0]); if (size <= len) { uv_mutex_unlock(&process_title_mutex); return UV_ENOBUFS; } memcpy(buffer, process_argv[0], len); buffer[len] = '\0'; uv_mutex_unlock(&process_title_mutex); return 0; } void uv__process_title_cleanup(void) { uv__free(args_mem); /* Keep valgrind happy. */ args_mem = NULL; } int uv_resident_set_memory(size_t* rss) { char pp[64]; psinfo_t psinfo; int err; int fd; snprintf(pp, sizeof(pp), "/proc/%lu/psinfo", (unsigned long) getpid()); fd = open(pp, O_RDONLY); if (fd == -1) return UV__ERR(errno); /* FIXME(bnoordhuis) Handle EINTR. */ err = UV_EINVAL; if (read(fd, &psinfo, sizeof(psinfo)) == sizeof(psinfo)) { *rss = (size_t)psinfo.pr_rssize * 1024; err = 0; } uv__close(fd); return err; } int uv_uptime(double* uptime) { struct utmp *utmp_buf; size_t entries = 0; time_t boot_time; boot_time = 0; utmpname(UTMP_FILE); setutent(); while ((utmp_buf = getutent()) != NULL) { if (utmp_buf->ut_user[0] && utmp_buf->ut_type == USER_PROCESS) ++entries; if (utmp_buf->ut_type == BOOT_TIME) boot_time = utmp_buf->ut_time; } endutent(); if (boot_time == 0) return UV_ENOSYS; *uptime = time(NULL) - boot_time; return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { uv_cpu_info_t* cpu_info; perfstat_cpu_total_t ps_total; perfstat_cpu_t* ps_cpus; perfstat_id_t cpu_id; int result, ncpus, idx = 0; result = perfstat_cpu_total(NULL, &ps_total, sizeof(ps_total), 1); if (result == -1) { return UV_ENOSYS; } ncpus = result = perfstat_cpu(NULL, NULL, sizeof(perfstat_cpu_t), 0); if (result == -1) { return UV_ENOSYS; } ps_cpus = (perfstat_cpu_t*) uv__malloc(ncpus * sizeof(perfstat_cpu_t)); if (!ps_cpus) { return UV_ENOMEM; } /* TODO(bnoordhuis) Check uv__strscpy() return value. 
*/ uv__strscpy(cpu_id.name, FIRST_CPU, sizeof(cpu_id.name)); result = perfstat_cpu(&cpu_id, ps_cpus, sizeof(perfstat_cpu_t), ncpus); if (result == -1) { uv__free(ps_cpus); return UV_ENOSYS; } *cpu_infos = (uv_cpu_info_t*) uv__malloc(ncpus * sizeof(uv_cpu_info_t)); if (!*cpu_infos) { uv__free(ps_cpus); return UV_ENOMEM; } *count = ncpus; cpu_info = *cpu_infos; while (idx < ncpus) { cpu_info->speed = (int)(ps_total.processorHZ / 1000000); cpu_info->model = uv__strdup(ps_total.description); cpu_info->cpu_times.user = ps_cpus[idx].user; cpu_info->cpu_times.sys = ps_cpus[idx].sys; cpu_info->cpu_times.idle = ps_cpus[idx].idle; cpu_info->cpu_times.irq = ps_cpus[idx].wait; cpu_info->cpu_times.nice = 0; cpu_info++; idx++; } uv__free(ps_cpus); return 0; } int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { uv_interface_address_t* address; int sockfd, sock6fd, inet6, i, r, size = 1; struct ifconf ifc; struct ifreq *ifr, *p, flg; struct in6_ifreq if6; struct sockaddr_dl* sa_addr; ifc.ifc_req = NULL; sock6fd = -1; r = 0; *count = 0; *addresses = NULL; if (0 > (sockfd = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP))) { r = UV__ERR(errno); goto cleanup; } if (0 > (sock6fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_IP))) { r = UV__ERR(errno); goto cleanup; } if (ioctl(sockfd, SIOCGSIZIFCONF, &size) == -1) { r = UV__ERR(errno); goto cleanup; } ifc.ifc_req = (struct ifreq*)uv__malloc(size); if (ifc.ifc_req == NULL) { r = UV_ENOMEM; goto cleanup; } ifc.ifc_len = size; if (ioctl(sockfd, SIOCGIFCONF, &ifc) == -1) { r = UV__ERR(errno); goto cleanup; } #define ADDR_SIZE(p) MAX((p).sa_len, sizeof(p)) /* Count all up and running ipv4/ipv6 addresses */ ifr = ifc.ifc_req; while ((char*)ifr < (char*)ifc.ifc_req + ifc.ifc_len) { p = ifr; ifr = (struct ifreq*) ((char*)ifr + sizeof(ifr->ifr_name) + ADDR_SIZE(ifr->ifr_addr)); if (!(p->ifr_addr.sa_family == AF_INET6 || p->ifr_addr.sa_family == AF_INET)) continue; memcpy(flg.ifr_name, p->ifr_name, sizeof(flg.ifr_name)); if (ioctl(sockfd, SIOCGIFFLAGS, &flg) == -1) { r = UV__ERR(errno); goto cleanup; } if (!(flg.ifr_flags & IFF_UP && flg.ifr_flags & IFF_RUNNING)) continue; (*count)++; } if (*count == 0) goto cleanup; /* Alloc the return interface structs */ *addresses = uv__calloc(*count, sizeof(**addresses)); if (!(*addresses)) { r = UV_ENOMEM; goto cleanup; } address = *addresses; ifr = ifc.ifc_req; while ((char*)ifr < (char*)ifc.ifc_req + ifc.ifc_len) { p = ifr; ifr = (struct ifreq*) ((char*)ifr + sizeof(ifr->ifr_name) + ADDR_SIZE(ifr->ifr_addr)); if (!(p->ifr_addr.sa_family == AF_INET6 || p->ifr_addr.sa_family == AF_INET)) continue; inet6 = (p->ifr_addr.sa_family == AF_INET6); memcpy(flg.ifr_name, p->ifr_name, sizeof(flg.ifr_name)); if (ioctl(sockfd, SIOCGIFFLAGS, &flg) == -1) goto syserror; if (!(flg.ifr_flags & IFF_UP && flg.ifr_flags & IFF_RUNNING)) continue; /* All conditions above must match count loop */ address->name = uv__strdup(p->ifr_name); if (inet6) address->address.address6 = *((struct sockaddr_in6*) &p->ifr_addr); else address->address.address4 = *((struct sockaddr_in*) &p->ifr_addr); if (inet6) { memset(&if6, 0, sizeof(if6)); r = uv__strscpy(if6.ifr_name, p->ifr_name, sizeof(if6.ifr_name)); if (r == UV_E2BIG) goto cleanup; r = 0; memcpy(&if6.ifr_Addr, &p->ifr_addr, sizeof(if6.ifr_Addr)); if (ioctl(sock6fd, SIOCGIFNETMASK6, &if6) == -1) goto syserror; address->netmask.netmask6 = *((struct sockaddr_in6*) &if6.ifr_Addr); /* Explicitly set family as the ioctl call appears to return it as 0. 
*/ address->netmask.netmask6.sin6_family = AF_INET6; } else { if (ioctl(sockfd, SIOCGIFNETMASK, p) == -1) goto syserror; address->netmask.netmask4 = *((struct sockaddr_in*) &p->ifr_addr); /* Explicitly set family as the ioctl call appears to return it as 0. */ address->netmask.netmask4.sin_family = AF_INET; } address->is_internal = flg.ifr_flags & IFF_LOOPBACK ? 1 : 0; address++; } /* Fill in physical addresses. */ ifr = ifc.ifc_req; while ((char*)ifr < (char*)ifc.ifc_req + ifc.ifc_len) { p = ifr; ifr = (struct ifreq*) ((char*)ifr + sizeof(ifr->ifr_name) + ADDR_SIZE(ifr->ifr_addr)); if (p->ifr_addr.sa_family != AF_LINK) continue; address = *addresses; for (i = 0; i < *count; i++) { if (strcmp(address->name, p->ifr_name) == 0) { sa_addr = (struct sockaddr_dl*) &p->ifr_addr; memcpy(address->phys_addr, LLADDR(sa_addr), sizeof(address->phys_addr)); } address++; } } #undef ADDR_SIZE goto cleanup; syserror: uv_free_interface_addresses(*addresses, *count); *addresses = NULL; *count = 0; r = UV_ENOSYS; cleanup: if (sockfd != -1) uv__close(sockfd); if (sock6fd != -1) uv__close(sock6fd); uv__free(ifc.ifc_req); return r; } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; ++i) { uv__free(addresses[i].name); } uv__free(addresses); } void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { struct pollfd* events; uintptr_t i; uintptr_t nfds; struct poll_ctl pc; assert(loop->watchers != NULL); assert(fd >= 0); events = (struct pollfd*) loop->watchers[loop->nwatchers]; nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1]; if (events != NULL) /* Invalidate events with same file descriptor */ for (i = 0; i < nfds; i++) if ((int) events[i].fd == fd) events[i].fd = -1; /* Remove the file descriptor from the poll set */ pc.events = 0; pc.cmd = PS_DELETE; pc.fd = fd; if(loop->backend_fd >= 0) pollset_ctl(loop->backend_fd, &pc, 1); } gevent-24.11.1/deps/libuv/src/unix/async.c000066400000000000000000000134731471441230600202710ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* This file contains both the uv__async internal infrastructure and the * user-facing uv_async_t functions. 
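 *
 * A minimal sketch of the uv_interface_addresses() and
 * uv_free_interface_addresses() pair implemented above for AIX (other
 * platforms have their own implementations, e.g. bsd-ifaddrs.c below);
 * uv_ip4_name() and uv_ip6_name() are libuv helpers declared in uv.h:
 *
 *     #include <stdio.h>
 *     #include <sys/socket.h>
 *     #include "uv.h"
 *
 *     int main(void) {
 *       uv_interface_address_t* info;
 *       char ip[64];
 *       int count;
 *       int i;
 *
 *       if (uv_interface_addresses(&info, &count) != 0)
 *         return 1;
 *
 *       for (i = 0; i < count; i++) {
 *         if (info[i].address.address4.sin_family == AF_INET)
 *           uv_ip4_name(&info[i].address.address4, ip, sizeof(ip));
 *         else
 *           uv_ip6_name(&info[i].address.address6, ip, sizeof(ip));
 *         printf("%s%s %s\n", info[i].name,
 *                info[i].is_internal ? " (internal)" : "", ip);
 *       }
 *       uv_free_interface_addresses(info, count);
 *       return 0;
 *     }
 *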
*/ #include "uv.h" #include "internal.h" #include "atomic-ops.h" #include #include /* snprintf() */ #include #include #include #include #include /* sched_yield() */ #ifdef __linux__ #include #endif static void uv__async_send(uv_loop_t* loop); static int uv__async_start(uv_loop_t* loop); int uv_async_init(uv_loop_t* loop, uv_async_t* handle, uv_async_cb async_cb) { int err; err = uv__async_start(loop); if (err) return err; uv__handle_init(loop, (uv_handle_t*)handle, UV_ASYNC); handle->async_cb = async_cb; handle->pending = 0; QUEUE_INSERT_TAIL(&loop->async_handles, &handle->queue); uv__handle_start(handle); return 0; } int uv_async_send(uv_async_t* handle) { /* Do a cheap read first. */ if (ACCESS_ONCE(int, handle->pending) != 0) return 0; /* Tell the other thread we're busy with the handle. */ if (cmpxchgi(&handle->pending, 0, 1) != 0) return 0; /* Wake up the other thread's event loop. */ uv__async_send(handle->loop); /* Tell the other thread we're done. */ if (cmpxchgi(&handle->pending, 1, 2) != 1) abort(); return 0; } /* Only call this from the event loop thread. */ static int uv__async_spin(uv_async_t* handle) { int i; int rc; for (;;) { /* 997 is not completely chosen at random. It's a prime number, acyclical * by nature, and should therefore hopefully dampen sympathetic resonance. */ for (i = 0; i < 997; i++) { /* rc=0 -- handle is not pending. * rc=1 -- handle is pending, other thread is still working with it. * rc=2 -- handle is pending, other thread is done. */ rc = cmpxchgi(&handle->pending, 2, 0); if (rc != 1) return rc; /* Other thread is busy with this handle, spin until it's done. */ cpu_relax(); } /* Yield the CPU. We may have preempted the other thread while it's * inside the critical section and if it's running on the same CPU * as us, we'll just burn CPU cycles until the end of our time slice. */ sched_yield(); } } void uv__async_close(uv_async_t* handle) { uv__async_spin(handle); QUEUE_REMOVE(&handle->queue); uv__handle_stop(handle); } static void uv__async_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { char buf[1024]; ssize_t r; QUEUE queue; QUEUE* q; uv_async_t* h; assert(w == &loop->async_io_watcher); for (;;) { r = read(w->fd, buf, sizeof(buf)); if (r == sizeof(buf)) continue; if (r != -1) break; if (errno == EAGAIN || errno == EWOULDBLOCK) break; if (errno == EINTR) continue; abort(); } QUEUE_MOVE(&loop->async_handles, &queue); while (!QUEUE_EMPTY(&queue)) { q = QUEUE_HEAD(&queue); h = QUEUE_DATA(q, uv_async_t, queue); QUEUE_REMOVE(q); QUEUE_INSERT_TAIL(&loop->async_handles, q); if (0 == uv__async_spin(h)) continue; /* Not pending. 
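 *
 * A minimal sketch of the uv_async API defined above (thread_main is a
 * made-up helper): uv_async_send() is safe to call from any thread, and
 * multiple sends may be coalesced into a single callback, which is why
 * handle->pending is only a three-state flag.
 *
 *     #include <stdio.h>
 *     #include "uv.h"
 *
 *     static void async_cb(uv_async_t* handle) {
 *       puts("woken by another thread");
 *       uv_close((uv_handle_t*) handle, NULL);
 *     }
 *
 *     static void thread_main(void* arg) {
 *       uv_async_send((uv_async_t*) arg);
 *     }
 *
 *     int main(void) {
 *       uv_loop_t* loop = uv_default_loop();
 *       uv_async_t async;
 *       uv_thread_t thread;
 *
 *       uv_async_init(loop, &async, async_cb);
 *       uv_thread_create(&thread, thread_main, &async);
 *       uv_run(loop, UV_RUN_DEFAULT);
 *       uv_thread_join(&thread);
 *       return uv_loop_close(loop);
 *     }
 *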
*/ if (h->async_cb == NULL) continue; h->async_cb(h); } } static void uv__async_send(uv_loop_t* loop) { const void* buf; ssize_t len; int fd; int r; buf = ""; len = 1; fd = loop->async_wfd; #if defined(__linux__) if (fd == -1) { static const uint64_t val = 1; buf = &val; len = sizeof(val); fd = loop->async_io_watcher.fd; /* eventfd */ } #endif do r = write(fd, buf, len); while (r == -1 && errno == EINTR); if (r == len) return; if (r == -1) if (errno == EAGAIN || errno == EWOULDBLOCK) return; abort(); } static int uv__async_start(uv_loop_t* loop) { int pipefd[2]; int err; if (loop->async_io_watcher.fd != -1) return 0; #ifdef __linux__ err = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK); if (err < 0) return UV__ERR(errno); pipefd[0] = err; pipefd[1] = -1; #else err = uv__make_pipe(pipefd, UV_NONBLOCK_PIPE); if (err < 0) return err; #endif uv__io_init(&loop->async_io_watcher, uv__async_io, pipefd[0]); uv__io_start(loop, &loop->async_io_watcher, POLLIN); loop->async_wfd = pipefd[1]; return 0; } int uv__async_fork(uv_loop_t* loop) { if (loop->async_io_watcher.fd == -1) /* never started */ return 0; uv__async_stop(loop); return uv__async_start(loop); } void uv__async_stop(uv_loop_t* loop) { if (loop->async_io_watcher.fd == -1) return; if (loop->async_wfd != -1) { if (loop->async_wfd != loop->async_io_watcher.fd) uv__close(loop->async_wfd); loop->async_wfd = -1; } uv__io_stop(loop, &loop->async_io_watcher, POLLIN); uv__close(loop->async_io_watcher.fd); loop->async_io_watcher.fd = -1; } gevent-24.11.1/deps/libuv/src/unix/atomic-ops.h000066400000000000000000000047241471441230600212330ustar00rootroot00000000000000/* Copyright (c) 2013, Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #ifndef UV_ATOMIC_OPS_H_ #define UV_ATOMIC_OPS_H_ #include "internal.h" /* UV_UNUSED */ #if defined(__SUNPRO_C) || defined(__SUNPRO_CC) #include #endif UV_UNUSED(static int cmpxchgi(int* ptr, int oldval, int newval)); UV_UNUSED(static void cpu_relax(void)); /* Prefer hand-rolled assembly over the gcc builtins because the latter also * issue full memory barriers. */ UV_UNUSED(static int cmpxchgi(int* ptr, int oldval, int newval)) { #if defined(__i386__) || defined(__x86_64__) int out; __asm__ __volatile__ ("lock; cmpxchg %2, %1;" : "=a" (out), "+m" (*(volatile int*) ptr) : "r" (newval), "0" (oldval) : "memory"); return out; #elif defined(__MVS__) /* Use hand-rolled assembly because codegen from builtin __plo_CSST results in * a runtime bug. 
*/ __asm(" cs %0,%2,%1 \n " : "+r"(oldval), "+m"(*ptr) : "r"(newval) :); return oldval; #elif defined(__SUNPRO_C) || defined(__SUNPRO_CC) return atomic_cas_uint((uint_t *)ptr, (uint_t)oldval, (uint_t)newval); #else return __sync_val_compare_and_swap(ptr, oldval, newval); #endif } UV_UNUSED(static void cpu_relax(void)) { #if defined(__i386__) || defined(__x86_64__) __asm__ __volatile__ ("rep; nop" ::: "memory"); /* a.k.a. PAUSE */ #elif (defined(__arm__) && __ARM_ARCH >= 7) || defined(__aarch64__) __asm__ __volatile__ ("yield" ::: "memory"); #elif (defined(__ppc__) || defined(__ppc64__)) && defined(__APPLE__) __asm volatile ("" : : : "memory"); #elif !defined(__APPLE__) && (defined(__powerpc64__) || defined(__ppc64__) || defined(__PPC64__)) __asm__ __volatile__ ("or 1,1,1; or 2,2,2" ::: "memory"); #endif } #endif /* UV_ATOMIC_OPS_H_ */ gevent-24.11.1/deps/libuv/src/unix/bsd-ifaddrs.c000066400000000000000000000114711471441230600213320ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #if !defined(__CYGWIN__) && !defined(__MSYS__) && !defined(__GNU__) #include #endif #if defined(__HAIKU__) #define IFF_RUNNING IFF_LINK #endif static int uv__ifaddr_exclude(struct ifaddrs *ent, int exclude_type) { if (!((ent->ifa_flags & IFF_UP) && (ent->ifa_flags & IFF_RUNNING))) return 1; if (ent->ifa_addr == NULL) return 1; #if !defined(__CYGWIN__) && !defined(__MSYS__) && !defined(__GNU__) /* * If `exclude_type` is `UV__EXCLUDE_IFPHYS`, return whether `sa_family` * equals `AF_LINK`. Otherwise, the result depends on the operating * system with `AF_LINK` or `PF_INET`. */ if (exclude_type == UV__EXCLUDE_IFPHYS) return (ent->ifa_addr->sa_family != AF_LINK); #endif #if defined(__APPLE__) || defined(__FreeBSD__) || defined(__DragonFly__) || \ defined(__HAIKU__) /* * On BSD getifaddrs returns information related to the raw underlying * devices. We're not interested in this information. 
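*
* Illustrative note (not part of upstream libuv): this filtering feeds the
* public uv_interface_addresses() API implemented below. A minimal caller
* looks like this:
*
*   #include <uv.h>
*   #include <stdio.h>
*   static int list_interfaces(void) {
*     uv_interface_address_t* info;
*     int count, i, err;
*     err = uv_interface_addresses(&info, &count);
*     if (err != 0)
*       return err;
*     for (i = 0; i < count; i++)
*       printf("%s%s\n", info[i].name,
*              info[i].is_internal ? " (loopback)" : "");
*     uv_free_interface_addresses(info, count);
*     return 0;
*   }
*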
*/ if (ent->ifa_addr->sa_family == AF_LINK) return 1; #elif defined(__NetBSD__) || defined(__OpenBSD__) if (ent->ifa_addr->sa_family != PF_INET && ent->ifa_addr->sa_family != PF_INET6) return 1; #endif return 0; } int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { struct ifaddrs* addrs; struct ifaddrs* ent; uv_interface_address_t* address; #if !(defined(__CYGWIN__) || defined(__MSYS__)) && !defined(__GNU__) int i; #endif *count = 0; *addresses = NULL; if (getifaddrs(&addrs) != 0) return UV__ERR(errno); /* Count the number of interfaces */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFADDR)) continue; (*count)++; } if (*count == 0) { freeifaddrs(addrs); return 0; } /* Make sure the memory is initiallized to zero using calloc() */ *addresses = uv__calloc(*count, sizeof(**addresses)); if (*addresses == NULL) { freeifaddrs(addrs); return UV_ENOMEM; } address = *addresses; for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFADDR)) continue; address->name = uv__strdup(ent->ifa_name); if (ent->ifa_addr->sa_family == AF_INET6) { address->address.address6 = *((struct sockaddr_in6*) ent->ifa_addr); } else { address->address.address4 = *((struct sockaddr_in*) ent->ifa_addr); } if (ent->ifa_netmask == NULL) { memset(&address->netmask, 0, sizeof(address->netmask)); } else if (ent->ifa_netmask->sa_family == AF_INET6) { address->netmask.netmask6 = *((struct sockaddr_in6*) ent->ifa_netmask); } else { address->netmask.netmask4 = *((struct sockaddr_in*) ent->ifa_netmask); } address->is_internal = !!(ent->ifa_flags & IFF_LOOPBACK); address++; } #if !(defined(__CYGWIN__) || defined(__MSYS__)) && !defined(__GNU__) /* Fill in physical addresses for each interface */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFPHYS)) continue; address = *addresses; for (i = 0; i < *count; i++) { if (strcmp(address->name, ent->ifa_name) == 0) { struct sockaddr_dl* sa_addr; sa_addr = (struct sockaddr_dl*)(ent->ifa_addr); memcpy(address->phys_addr, LLADDR(sa_addr), sizeof(address->phys_addr)); } address++; } } #endif freeifaddrs(addrs); return 0; } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; i++) { uv__free(addresses[i].name); } uv__free(addresses); } gevent-24.11.1/deps/libuv/src/unix/bsd-proctitle.c000066400000000000000000000052031471441230600217170ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include static uv_mutex_t process_title_mutex; static uv_once_t process_title_mutex_once = UV_ONCE_INIT; static char* process_title; static void init_process_title_mutex_once(void) { if (uv_mutex_init(&process_title_mutex)) abort(); } void uv__process_title_cleanup(void) { uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_destroy(&process_title_mutex); } char** uv_setup_args(int argc, char** argv) { process_title = argc > 0 ? uv__strdup(argv[0]) : NULL; return argv; } int uv_set_process_title(const char* title) { char* new_title; new_title = uv__strdup(title); if (new_title == NULL) return UV_ENOMEM; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); uv__free(process_title); process_title = new_title; setproctitle("%s", title); uv_mutex_unlock(&process_title_mutex); return 0; } int uv_get_process_title(char* buffer, size_t size) { size_t len; if (buffer == NULL || size == 0) return UV_EINVAL; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); if (process_title != NULL) { len = strlen(process_title) + 1; if (size < len) { uv_mutex_unlock(&process_title_mutex); return UV_ENOBUFS; } memcpy(buffer, process_title, len); } else { len = 0; } uv_mutex_unlock(&process_title_mutex); buffer[len] = '\0'; return 0; } gevent-24.11.1/deps/libuv/src/unix/core.c000066400000000000000000001103701471441230600200760ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include "strtok.h" #include /* NULL */ #include /* printf */ #include #include /* strerror */ #include #include #include #include #include #include /* O_CLOEXEC */ #include #include #include #include #include #include /* INT_MAX, PATH_MAX, IOV_MAX */ #include /* writev */ #include /* getrusage */ #include #include #include #ifdef __sun # include # include # include #endif #if defined(__APPLE__) # include # endif /* defined(__APPLE__) */ #if defined(__APPLE__) && !TARGET_OS_IPHONE # include # include /* _NSGetExecutablePath */ # define environ (*_NSGetEnviron()) #else /* defined(__APPLE__) && !TARGET_OS_IPHONE */ extern char** environ; #endif /* !(defined(__APPLE__) && !TARGET_OS_IPHONE) */ #if defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__NetBSD__) || \ defined(__OpenBSD__) # include # include # include # if defined(__FreeBSD__) # define uv__accept4 accept4 # endif # if defined(__NetBSD__) # define uv__accept4(a, b, c, d) paccept((a), (b), (c), NULL, (d)) # endif #endif #if defined(__MVS__) # include # include "zos-sys-info.h" #endif #if defined(__linux__) # include # include # define uv__accept4 accept4 #endif #if defined(__linux__) && defined(__SANITIZE_THREAD__) && defined(__clang__) # include #endif static void uv__run_pending(uv_loop_t* loop); /* Verify that uv_buf_t is ABI-compatible with struct iovec. */ STATIC_ASSERT(sizeof(uv_buf_t) == sizeof(struct iovec)); STATIC_ASSERT(sizeof(((uv_buf_t*) 0)->base) == sizeof(((struct iovec*) 0)->iov_base)); STATIC_ASSERT(sizeof(((uv_buf_t*) 0)->len) == sizeof(((struct iovec*) 0)->iov_len)); STATIC_ASSERT(offsetof(uv_buf_t, base) == offsetof(struct iovec, iov_base)); STATIC_ASSERT(offsetof(uv_buf_t, len) == offsetof(struct iovec, iov_len)); uint64_t uv_hrtime(void) { return uv__hrtime(UV_CLOCK_PRECISE); } void uv_close(uv_handle_t* handle, uv_close_cb close_cb) { assert(!uv__is_closing(handle)); handle->flags |= UV_HANDLE_CLOSING; handle->close_cb = close_cb; switch (handle->type) { case UV_NAMED_PIPE: uv__pipe_close((uv_pipe_t*)handle); break; case UV_TTY: uv__stream_close((uv_stream_t*)handle); break; case UV_TCP: uv__tcp_close((uv_tcp_t*)handle); break; case UV_UDP: uv__udp_close((uv_udp_t*)handle); break; case UV_PREPARE: uv__prepare_close((uv_prepare_t*)handle); break; case UV_CHECK: uv__check_close((uv_check_t*)handle); break; case UV_IDLE: uv__idle_close((uv_idle_t*)handle); break; case UV_ASYNC: uv__async_close((uv_async_t*)handle); break; case UV_TIMER: uv__timer_close((uv_timer_t*)handle); break; case UV_PROCESS: uv__process_close((uv_process_t*)handle); break; case UV_FS_EVENT: uv__fs_event_close((uv_fs_event_t*)handle); #if defined(__sun) || defined(__MVS__) /* * On Solaris, illumos, and z/OS we will not be able to dissociate the * watcher for an event which is pending delivery, so we cannot always call * uv__make_close_pending() straight away. The backend will call the * function once the event has cleared. */ return; #endif break; case UV_POLL: uv__poll_close((uv_poll_t*)handle); break; case UV_FS_POLL: uv__fs_poll_close((uv_fs_poll_t*)handle); /* Poll handles use file system requests, and one of them may still be * running. The poll code will call uv__make_close_pending() for us. 
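*
* Illustrative note (not part of upstream libuv): uv_close() is asynchronous
* for every handle type; the close callback runs later from the loop (see
* uv__run_closing_handles() below), never from inside uv_close() itself, so a
* heap-allocated handle may only be reclaimed in that callback:
*
*   #include <uv.h>
*   #include <stdlib.h>
*   static void on_close(uv_handle_t* h) {
*     free(h);  // only safe once the close callback has fired
*   }
*   static void discard_tcp(uv_loop_t* loop) {
*     uv_tcp_t* tcp = malloc(sizeof(*tcp));
*     if (tcp == NULL || uv_tcp_init(loop, tcp) != 0) {
*       free(tcp);
*       return;
*     }
*     uv_close((uv_handle_t*) tcp, on_close);
*   }
*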
*/ return; case UV_SIGNAL: uv__signal_close((uv_signal_t*) handle); break; default: assert(0); } uv__make_close_pending(handle); } int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value) { int r; int fd; socklen_t len; if (handle == NULL || value == NULL) return UV_EINVAL; if (handle->type == UV_TCP || handle->type == UV_NAMED_PIPE) fd = uv__stream_fd((uv_stream_t*) handle); else if (handle->type == UV_UDP) fd = ((uv_udp_t *) handle)->io_watcher.fd; else return UV_ENOTSUP; len = sizeof(*value); if (*value == 0) r = getsockopt(fd, SOL_SOCKET, optname, value, &len); else r = setsockopt(fd, SOL_SOCKET, optname, (const void*) value, len); if (r < 0) return UV__ERR(errno); return 0; } void uv__make_close_pending(uv_handle_t* handle) { assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); handle->next_closing = handle->loop->closing_handles; handle->loop->closing_handles = handle; } int uv__getiovmax(void) { #if defined(IOV_MAX) return IOV_MAX; #elif defined(_SC_IOV_MAX) static int iovmax_cached = -1; int iovmax; iovmax = uv__load_relaxed(&iovmax_cached); if (iovmax != -1) return iovmax; /* On some embedded devices (arm-linux-uclibc based ip camera), * sysconf(_SC_IOV_MAX) can not get the correct value. The return * value is -1 and the errno is EINPROGRESS. Degrade the value to 1. */ iovmax = sysconf(_SC_IOV_MAX); if (iovmax == -1) iovmax = 1; uv__store_relaxed(&iovmax_cached, iovmax); return iovmax; #else return 1024; #endif } static void uv__finish_close(uv_handle_t* handle) { uv_signal_t* sh; /* Note: while the handle is in the UV_HANDLE_CLOSING state now, it's still * possible for it to be active in the sense that uv__is_active() returns * true. * * A good example is when the user calls uv_shutdown(), immediately followed * by uv_close(). The handle is considered active at this point because the * completion of the shutdown req is still pending. */ assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); handle->flags |= UV_HANDLE_CLOSED; switch (handle->type) { case UV_PREPARE: case UV_CHECK: case UV_IDLE: case UV_ASYNC: case UV_TIMER: case UV_PROCESS: case UV_FS_EVENT: case UV_FS_POLL: case UV_POLL: break; case UV_SIGNAL: /* If there are any caught signals "trapped" in the signal pipe, * we can't call the close callback yet. Reinserting the handle * into the closing queue makes the event loop spin but that's * okay because we only need to deliver the pending events. */ sh = (uv_signal_t*) handle; if (sh->caught_signals > sh->dispatched_signals) { handle->flags ^= UV_HANDLE_CLOSED; uv__make_close_pending(handle); /* Back into the queue. 
*/ return; } break; case UV_NAMED_PIPE: case UV_TCP: case UV_TTY: uv__stream_destroy((uv_stream_t*)handle); break; case UV_UDP: uv__udp_finish_close((uv_udp_t*)handle); break; default: assert(0); break; } uv__handle_unref(handle); QUEUE_REMOVE(&handle->handle_queue); if (handle->close_cb) { handle->close_cb(handle); } } static void uv__run_closing_handles(uv_loop_t* loop) { uv_handle_t* p; uv_handle_t* q; p = loop->closing_handles; loop->closing_handles = NULL; while (p) { q = p->next_closing; uv__finish_close(p); p = q; } } int uv_is_closing(const uv_handle_t* handle) { return uv__is_closing(handle); } int uv_backend_fd(const uv_loop_t* loop) { return loop->backend_fd; } static int uv__loop_alive(const uv_loop_t* loop) { return uv__has_active_handles(loop) || uv__has_active_reqs(loop) || !QUEUE_EMPTY(&loop->pending_queue) || loop->closing_handles != NULL; } static int uv__backend_timeout(const uv_loop_t* loop) { if (loop->stop_flag == 0 && /* uv__loop_alive(loop) && */ (uv__has_active_handles(loop) || uv__has_active_reqs(loop)) && QUEUE_EMPTY(&loop->pending_queue) && QUEUE_EMPTY(&loop->idle_handles) && loop->closing_handles == NULL) return uv__next_timeout(loop); return 0; } int uv_backend_timeout(const uv_loop_t* loop) { if (QUEUE_EMPTY(&loop->watcher_queue)) return uv__backend_timeout(loop); /* Need to call uv_run to update the backend fd state. */ return 0; } int uv_loop_alive(const uv_loop_t* loop) { return uv__loop_alive(loop); } int uv_run(uv_loop_t* loop, uv_run_mode mode) { int timeout; int r; int can_sleep; r = uv__loop_alive(loop); if (!r) uv__update_time(loop); while (r != 0 && loop->stop_flag == 0) { uv__update_time(loop); uv__run_timers(loop); can_sleep = QUEUE_EMPTY(&loop->pending_queue) && QUEUE_EMPTY(&loop->idle_handles); uv__run_pending(loop); uv__run_idle(loop); uv__run_prepare(loop); timeout = 0; if ((mode == UV_RUN_ONCE && can_sleep) || mode == UV_RUN_DEFAULT) timeout = uv__backend_timeout(loop); uv__io_poll(loop, timeout); /* Process immediate callbacks (e.g. write_cb) a small fixed number of * times to avoid loop starvation.*/ for (r = 0; r < 8 && !QUEUE_EMPTY(&loop->pending_queue); r++) uv__run_pending(loop); /* Run one final update on the provider_idle_time in case uv__io_poll * returned because the timeout expired, but no events were received. This * call will be ignored if the provider_entry_time was either never set (if * the timeout == 0) or was already updated b/c an event was received. */ uv__metrics_update_idle_time(loop); uv__run_check(loop); uv__run_closing_handles(loop); if (mode == UV_RUN_ONCE) { /* UV_RUN_ONCE implies forward progress: at least one callback must have * been invoked when it returns. uv__io_poll() can return without doing * I/O (meaning: no callbacks) when its timeout expires - which means we * have pending timers that satisfy the forward progress constraint. * * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from * the check. */ uv__update_time(loop); uv__run_timers(loop); } r = uv__loop_alive(loop); if (mode == UV_RUN_ONCE || mode == UV_RUN_NOWAIT) break; } /* The if statement lets gcc compile it to a conditional store. Avoids * dirtying a cache line. */ if (loop->stop_flag != 0) loop->stop_flag = 0; return r; } void uv_update_time(uv_loop_t* loop) { uv__update_time(loop); } int uv_is_active(const uv_handle_t* handle) { return uv__is_active(handle); } /* Open a socket in non-blocking close-on-exec mode, atomically if possible. 
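*
* Illustrative note (not part of upstream libuv): where the kernel supports it,
* passing SOCK_NONBLOCK | SOCK_CLOEXEC to socket(2) sets both flags atomically,
* leaving no window between socket() and fcntl() in which a concurrently forked
* child could inherit the descriptor; older kernels reject the flags with
* EINVAL, so the code below falls back to socket() plus fcntl(). A stripped-down
* sketch of the same idea (error handling for fcntl() elided):
*
*   #include <sys/socket.h>
*   #include <fcntl.h>
*   #include <errno.h>
*   static int make_socket(int domain, int type, int protocol) {
*     int fd;
*   #if defined(SOCK_NONBLOCK) && defined(SOCK_CLOEXEC)
*     fd = socket(domain, type | SOCK_NONBLOCK | SOCK_CLOEXEC, protocol);
*     if (fd != -1 || errno != EINVAL)
*       return fd;
*   #endif
*     fd = socket(domain, type, protocol);
*     if (fd == -1)
*       return -1;
*     fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
*     fcntl(fd, F_SETFD, FD_CLOEXEC);
*     return fd;
*   }
*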
*/ int uv__socket(int domain, int type, int protocol) { int sockfd; int err; #if defined(SOCK_NONBLOCK) && defined(SOCK_CLOEXEC) sockfd = socket(domain, type | SOCK_NONBLOCK | SOCK_CLOEXEC, protocol); if (sockfd != -1) return sockfd; if (errno != EINVAL) return UV__ERR(errno); #endif sockfd = socket(domain, type, protocol); if (sockfd == -1) return UV__ERR(errno); err = uv__nonblock(sockfd, 1); if (err == 0) err = uv__cloexec(sockfd, 1); if (err) { uv__close(sockfd); return err; } #if defined(SO_NOSIGPIPE) { int on = 1; setsockopt(sockfd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on)); } #endif return sockfd; } /* get a file pointer to a file in read-only and close-on-exec mode */ FILE* uv__open_file(const char* path) { int fd; FILE* fp; fd = uv__open_cloexec(path, O_RDONLY); if (fd < 0) return NULL; fp = fdopen(fd, "r"); if (fp == NULL) uv__close(fd); return fp; } int uv__accept(int sockfd) { int peerfd; int err; (void) &err; assert(sockfd >= 0); do #ifdef uv__accept4 peerfd = uv__accept4(sockfd, NULL, NULL, SOCK_NONBLOCK|SOCK_CLOEXEC); #else peerfd = accept(sockfd, NULL, NULL); #endif while (peerfd == -1 && errno == EINTR); if (peerfd == -1) return UV__ERR(errno); #ifndef uv__accept4 err = uv__cloexec(peerfd, 1); if (err == 0) err = uv__nonblock(peerfd, 1); if (err != 0) { uv__close(peerfd); return err; } #endif return peerfd; } /* close() on macos has the "interesting" quirk that it fails with EINTR * without closing the file descriptor when a thread is in the cancel state. * That's why libuv calls close$NOCANCEL() instead. * * glibc on linux has a similar issue: close() is a cancellation point and * will unwind the thread when it's in the cancel state. Work around that * by making the system call directly. Musl libc is unaffected. */ int uv__close_nocancel(int fd) { #if defined(__APPLE__) #pragma GCC diagnostic push #pragma GCC diagnostic ignored "-Wdollar-in-identifier-extension" #if defined(__LP64__) || TARGET_OS_IPHONE extern int close$NOCANCEL(int); return close$NOCANCEL(fd); #else extern int close$NOCANCEL$UNIX2003(int); return close$NOCANCEL$UNIX2003(fd); #endif #pragma GCC diagnostic pop #elif defined(__linux__) && defined(__SANITIZE_THREAD__) && defined(__clang__) long rc; __sanitizer_syscall_pre_close(fd); rc = syscall(SYS_close, fd); __sanitizer_syscall_post_close(rc, fd); return rc; #elif defined(__linux__) && !defined(__SANITIZE_THREAD__) return syscall(SYS_close, fd); #else return close(fd); #endif } int uv__close_nocheckstdio(int fd) { int saved_errno; int rc; assert(fd > -1); /* Catch uninitialized io_watcher.fd bugs. */ saved_errno = errno; rc = uv__close_nocancel(fd); if (rc == -1) { rc = UV__ERR(errno); if (rc == UV_EINTR || rc == UV__ERR(EINPROGRESS)) rc = 0; /* The close is in progress, not an error. */ errno = saved_errno; } return rc; } int uv__close(int fd) { assert(fd > STDERR_FILENO); /* Catch stdio close bugs. */ #if defined(__MVS__) SAVE_ERRNO(epoll_file_close(fd)); #endif return uv__close_nocheckstdio(fd); } #if UV__NONBLOCK_IS_IOCTL int uv__nonblock_ioctl(int fd, int set) { int r; do r = ioctl(fd, FIONBIO, &set); while (r == -1 && errno == EINTR); if (r) return UV__ERR(errno); return 0; } #endif int uv__nonblock_fcntl(int fd, int set) { int flags; int r; do r = fcntl(fd, F_GETFL); while (r == -1 && errno == EINTR); if (r == -1) return UV__ERR(errno); /* Bail out now if already set/clear. 
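*
* Illustrative note (not part of upstream libuv): the F_GETFL read above is a
* small optimization; when O_NONBLOCK already has the requested state the
* second fcntl() syscall is skipped. Note also the retry idiom used for every
* interruptible syscall in this file:
*
*   do
*     r = fcntl(fd, F_GETFL);
*   while (r == -1 && errno == EINTR);
*
* that is, repeat the call while it is interrupted by a signal and treat any
* other -1 result as a real error.
*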
*/ if (!!(r & O_NONBLOCK) == !!set) return 0; if (set) flags = r | O_NONBLOCK; else flags = r & ~O_NONBLOCK; do r = fcntl(fd, F_SETFL, flags); while (r == -1 && errno == EINTR); if (r) return UV__ERR(errno); return 0; } int uv__cloexec(int fd, int set) { int flags; int r; flags = 0; if (set) flags = FD_CLOEXEC; do r = fcntl(fd, F_SETFD, flags); while (r == -1 && errno == EINTR); if (r) return UV__ERR(errno); return 0; } ssize_t uv__recvmsg(int fd, struct msghdr* msg, int flags) { #if defined(__ANDROID__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__NetBSD__) || \ defined(__OpenBSD__) || \ defined(__linux__) ssize_t rc; rc = recvmsg(fd, msg, flags | MSG_CMSG_CLOEXEC); if (rc == -1) return UV__ERR(errno); return rc; #else struct cmsghdr* cmsg; int* pfd; int* end; ssize_t rc; rc = recvmsg(fd, msg, flags); if (rc == -1) return UV__ERR(errno); if (msg->msg_controllen == 0) return rc; for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL; cmsg = CMSG_NXTHDR(msg, cmsg)) if (cmsg->cmsg_type == SCM_RIGHTS) for (pfd = (int*) CMSG_DATA(cmsg), end = (int*) ((char*) cmsg + cmsg->cmsg_len); pfd < end; pfd += 1) uv__cloexec(*pfd, 1); return rc; #endif } int uv_cwd(char* buffer, size_t* size) { char scratch[1 + UV__PATH_MAX]; if (buffer == NULL || size == NULL) return UV_EINVAL; /* Try to read directly into the user's buffer first... */ if (getcwd(buffer, *size) != NULL) goto fixup; if (errno != ERANGE) return UV__ERR(errno); /* ...or into scratch space if the user's buffer is too small * so we can report how much space to provide on the next try. */ if (getcwd(scratch, sizeof(scratch)) == NULL) return UV__ERR(errno); buffer = scratch; fixup: *size = strlen(buffer); if (*size > 1 && buffer[*size - 1] == '/') { *size -= 1; buffer[*size] = '\0'; } if (buffer == scratch) { *size += 1; return UV_ENOBUFS; } return 0; } int uv_chdir(const char* dir) { if (chdir(dir)) return UV__ERR(errno); return 0; } void uv_disable_stdio_inheritance(void) { int fd; /* Set the CLOEXEC flag on all open descriptors. Unconditionally try the * first 16 file descriptors. After that, bail out after the first error. 
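*
* Illustrative note (not part of upstream libuv): this is a best-effort sweep
* and is intended to be called once, early in main(), before the process opens
* file descriptors of its own or starts spawning children, e.g.:
*
*   #include <uv.h>
*   int main(int argc, char** argv) {
*     argv = uv_setup_args(argc, argv);
*     uv_disable_stdio_inheritance();
*     // ... create handles, then: uv_run(uv_default_loop(), UV_RUN_DEFAULT);
*     return 0;
*   }
*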
*/ for (fd = 0; ; fd++) if (uv__cloexec(fd, 1) && fd > 15) break; } int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd) { int fd_out; switch (handle->type) { case UV_TCP: case UV_NAMED_PIPE: case UV_TTY: fd_out = uv__stream_fd((uv_stream_t*) handle); break; case UV_UDP: fd_out = ((uv_udp_t *) handle)->io_watcher.fd; break; case UV_POLL: fd_out = ((uv_poll_t *) handle)->io_watcher.fd; break; default: return UV_EINVAL; } if (uv__is_closing(handle) || fd_out == -1) return UV_EBADF; *fd = fd_out; return 0; } static void uv__run_pending(uv_loop_t* loop) { QUEUE* q; QUEUE pq; uv__io_t* w; QUEUE_MOVE(&loop->pending_queue, &pq); while (!QUEUE_EMPTY(&pq)) { q = QUEUE_HEAD(&pq); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, pending_queue); w->cb(loop, w, POLLOUT); } } static unsigned int next_power_of_two(unsigned int val) { val -= 1; val |= val >> 1; val |= val >> 2; val |= val >> 4; val |= val >> 8; val |= val >> 16; val += 1; return val; } static void maybe_resize(uv_loop_t* loop, unsigned int len) { uv__io_t** watchers; void* fake_watcher_list; void* fake_watcher_count; unsigned int nwatchers; unsigned int i; if (len <= loop->nwatchers) return; /* Preserve fake watcher list and count at the end of the watchers */ if (loop->watchers != NULL) { fake_watcher_list = loop->watchers[loop->nwatchers]; fake_watcher_count = loop->watchers[loop->nwatchers + 1]; } else { fake_watcher_list = NULL; fake_watcher_count = NULL; } nwatchers = next_power_of_two(len + 2) - 2; watchers = uv__reallocf(loop->watchers, (nwatchers + 2) * sizeof(loop->watchers[0])); if (watchers == NULL) abort(); for (i = loop->nwatchers; i < nwatchers; i++) watchers[i] = NULL; watchers[nwatchers] = fake_watcher_list; watchers[nwatchers + 1] = fake_watcher_count; loop->watchers = watchers; loop->nwatchers = nwatchers; } void uv__io_init(uv__io_t* w, uv__io_cb cb, int fd) { assert(cb != NULL); assert(fd >= -1); QUEUE_INIT(&w->pending_queue); QUEUE_INIT(&w->watcher_queue); w->cb = cb; w->fd = fd; w->events = 0; w->pevents = 0; #if defined(UV_HAVE_KQUEUE) w->rcount = 0; w->wcount = 0; #endif /* defined(UV_HAVE_KQUEUE) */ } void uv__io_start(uv_loop_t* loop, uv__io_t* w, unsigned int events) { assert(0 == (events & ~(POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI))); assert(0 != events); assert(w->fd >= 0); assert(w->fd < INT_MAX); w->pevents |= events; maybe_resize(loop, w->fd + 1); #if !defined(__sun) /* The event ports backend needs to rearm all file descriptors on each and * every tick of the event loop but the other backends allow us to * short-circuit here if the event mask is unchanged. */ if (w->events == w->pevents) return; #endif if (QUEUE_EMPTY(&w->watcher_queue)) QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue); if (loop->watchers[w->fd] == NULL) { loop->watchers[w->fd] = w; loop->nfds++; } } void uv__io_stop(uv_loop_t* loop, uv__io_t* w, unsigned int events) { assert(0 == (events & ~(POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI))); assert(0 != events); if (w->fd == -1) return; assert(w->fd >= 0); /* Happens when uv__io_stop() is called on a handle that was never started. 
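*
* Illustrative note (not part of upstream libuv): loop->watchers is an array
* indexed by file descriptor and grown in power-of-two steps by maybe_resize()
* above. Two extra trailing slots (indices nwatchers and nwatchers + 1) are
* reserved so the poll backends can stash their current event array and its
* length; epoll.c, for instance, reads them back as
*
*   events = (struct epoll_event*) loop->watchers[loop->nwatchers];
*   nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1];
*
* which is why maybe_resize() preserves those two entries when it reallocates.
*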
*/ if ((unsigned) w->fd >= loop->nwatchers) return; w->pevents &= ~events; if (w->pevents == 0) { QUEUE_REMOVE(&w->watcher_queue); QUEUE_INIT(&w->watcher_queue); w->events = 0; if (w == loop->watchers[w->fd]) { assert(loop->nfds > 0); loop->watchers[w->fd] = NULL; loop->nfds--; } } else if (QUEUE_EMPTY(&w->watcher_queue)) QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue); } void uv__io_close(uv_loop_t* loop, uv__io_t* w) { uv__io_stop(loop, w, POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI); QUEUE_REMOVE(&w->pending_queue); /* Remove stale events for this file descriptor */ if (w->fd != -1) uv__platform_invalidate_fd(loop, w->fd); } void uv__io_feed(uv_loop_t* loop, uv__io_t* w) { if (QUEUE_EMPTY(&w->pending_queue)) QUEUE_INSERT_TAIL(&loop->pending_queue, &w->pending_queue); } int uv__io_active(const uv__io_t* w, unsigned int events) { assert(0 == (events & ~(POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI))); assert(0 != events); return 0 != (w->pevents & events); } int uv__fd_exists(uv_loop_t* loop, int fd) { return (unsigned) fd < loop->nwatchers && loop->watchers[fd] != NULL; } int uv_getrusage(uv_rusage_t* rusage) { struct rusage usage; if (getrusage(RUSAGE_SELF, &usage)) return UV__ERR(errno); rusage->ru_utime.tv_sec = usage.ru_utime.tv_sec; rusage->ru_utime.tv_usec = usage.ru_utime.tv_usec; rusage->ru_stime.tv_sec = usage.ru_stime.tv_sec; rusage->ru_stime.tv_usec = usage.ru_stime.tv_usec; #if !defined(__MVS__) && !defined(__HAIKU__) rusage->ru_maxrss = usage.ru_maxrss; rusage->ru_ixrss = usage.ru_ixrss; rusage->ru_idrss = usage.ru_idrss; rusage->ru_isrss = usage.ru_isrss; rusage->ru_minflt = usage.ru_minflt; rusage->ru_majflt = usage.ru_majflt; rusage->ru_nswap = usage.ru_nswap; rusage->ru_inblock = usage.ru_inblock; rusage->ru_oublock = usage.ru_oublock; rusage->ru_msgsnd = usage.ru_msgsnd; rusage->ru_msgrcv = usage.ru_msgrcv; rusage->ru_nsignals = usage.ru_nsignals; rusage->ru_nvcsw = usage.ru_nvcsw; rusage->ru_nivcsw = usage.ru_nivcsw; #endif return 0; } int uv__open_cloexec(const char* path, int flags) { #if defined(O_CLOEXEC) int fd; fd = open(path, flags | O_CLOEXEC); if (fd == -1) return UV__ERR(errno); return fd; #else /* O_CLOEXEC */ int err; int fd; fd = open(path, flags); if (fd == -1) return UV__ERR(errno); err = uv__cloexec(fd, 1); if (err) { uv__close(fd); return err; } return fd; #endif /* O_CLOEXEC */ } int uv__slurp(const char* filename, char* buf, size_t len) { ssize_t n; int fd; assert(len > 0); fd = uv__open_cloexec(filename, O_RDONLY); if (fd < 0) return fd; do n = read(fd, buf, len - 1); while (n == -1 && errno == EINTR); if (uv__close_nocheckstdio(fd)) abort(); if (n < 0) return UV__ERR(errno); buf[n] = '\0'; return 0; } int uv__dup2_cloexec(int oldfd, int newfd) { #if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__linux__) int r; r = dup3(oldfd, newfd, O_CLOEXEC); if (r == -1) return UV__ERR(errno); return r; #else int err; int r; r = dup2(oldfd, newfd); /* Never retry. */ if (r == -1) return UV__ERR(errno); err = uv__cloexec(newfd, 1); if (err != 0) { uv__close(newfd); return err; } return r; #endif } int uv_os_homedir(char* buffer, size_t* size) { uv_passwd_t pwd; size_t len; int r; /* Check if the HOME environment variable is set first. The task of performing input validation on buffer and size is taken care of by uv_os_getenv(). 
*/ r = uv_os_getenv("HOME", buffer, size); if (r != UV_ENOENT) return r; /* HOME is not set, so call uv__getpwuid_r() */ r = uv__getpwuid_r(&pwd); if (r != 0) { return r; } len = strlen(pwd.homedir); if (len >= *size) { *size = len + 1; uv_os_free_passwd(&pwd); return UV_ENOBUFS; } memcpy(buffer, pwd.homedir, len + 1); *size = len; uv_os_free_passwd(&pwd); return 0; } int uv_os_tmpdir(char* buffer, size_t* size) { const char* buf; size_t len; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; #define CHECK_ENV_VAR(name) \ do { \ buf = getenv(name); \ if (buf != NULL) \ goto return_buffer; \ } \ while (0) /* Check the TMPDIR, TMP, TEMP, and TEMPDIR environment variables in order */ CHECK_ENV_VAR("TMPDIR"); CHECK_ENV_VAR("TMP"); CHECK_ENV_VAR("TEMP"); CHECK_ENV_VAR("TEMPDIR"); #undef CHECK_ENV_VAR /* No temp environment variables defined */ #if defined(__ANDROID__) buf = "/data/local/tmp"; #else buf = "/tmp"; #endif return_buffer: len = strlen(buf); if (len >= *size) { *size = len + 1; return UV_ENOBUFS; } /* The returned directory should not have a trailing slash. */ if (len > 1 && buf[len - 1] == '/') { len--; } memcpy(buffer, buf, len + 1); buffer[len] = '\0'; *size = len; return 0; } int uv__getpwuid_r(uv_passwd_t* pwd) { struct passwd pw; struct passwd* result; char* buf; uid_t uid; size_t bufsize; size_t name_size; size_t homedir_size; size_t shell_size; int r; if (pwd == NULL) return UV_EINVAL; uid = geteuid(); /* Calling sysconf(_SC_GETPW_R_SIZE_MAX) would get the suggested size, but it * is frequently 1024 or 4096, so we can just use that directly. The pwent * will not usually be large. */ for (bufsize = 2000;; bufsize *= 2) { buf = uv__malloc(bufsize); if (buf == NULL) return UV_ENOMEM; do r = getpwuid_r(uid, &pw, buf, bufsize, &result); while (r == EINTR); if (r != 0 || result == NULL) uv__free(buf); if (r != ERANGE) break; } if (r != 0) return UV__ERR(r); if (result == NULL) return UV_ENOENT; /* Allocate memory for the username, shell, and home directory */ name_size = strlen(pw.pw_name) + 1; homedir_size = strlen(pw.pw_dir) + 1; shell_size = strlen(pw.pw_shell) + 1; pwd->username = uv__malloc(name_size + homedir_size + shell_size); if (pwd->username == NULL) { uv__free(buf); return UV_ENOMEM; } /* Copy the username */ memcpy(pwd->username, pw.pw_name, name_size); /* Copy the home directory */ pwd->homedir = pwd->username + name_size; memcpy(pwd->homedir, pw.pw_dir, homedir_size); /* Copy the shell */ pwd->shell = pwd->homedir + homedir_size; memcpy(pwd->shell, pw.pw_shell, shell_size); /* Copy the uid and gid */ pwd->uid = pw.pw_uid; pwd->gid = pw.pw_gid; uv__free(buf); return 0; } void uv_os_free_passwd(uv_passwd_t* pwd) { if (pwd == NULL) return; /* The memory for name, shell, and homedir are allocated in a single uv__malloc() call. The base of the pointer is stored in pwd->username, so that is the field that needs to be freed. */ uv__free(pwd->username); pwd->username = NULL; pwd->shell = NULL; pwd->homedir = NULL; } int uv_os_get_passwd(uv_passwd_t* pwd) { return uv__getpwuid_r(pwd); } int uv_translate_sys_error(int sys_errno) { /* If < 0 then it's already a libuv error. */ return sys_errno <= 0 ? 
sys_errno : -sys_errno; } int uv_os_environ(uv_env_item_t** envitems, int* count) { int i, j, cnt; uv_env_item_t* envitem; *envitems = NULL; *count = 0; for (i = 0; environ[i] != NULL; i++); *envitems = uv__calloc(i, sizeof(**envitems)); if (*envitems == NULL) return UV_ENOMEM; for (j = 0, cnt = 0; j < i; j++) { char* buf; char* ptr; if (environ[j] == NULL) break; buf = uv__strdup(environ[j]); if (buf == NULL) goto fail; ptr = strchr(buf, '='); if (ptr == NULL) { uv__free(buf); continue; } *ptr = '\0'; envitem = &(*envitems)[cnt]; envitem->name = buf; envitem->value = ptr + 1; cnt++; } *count = cnt; return 0; fail: for (i = 0; i < cnt; i++) { envitem = &(*envitems)[cnt]; uv__free(envitem->name); } uv__free(*envitems); *envitems = NULL; *count = 0; return UV_ENOMEM; } int uv_os_getenv(const char* name, char* buffer, size_t* size) { char* var; size_t len; if (name == NULL || buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; var = getenv(name); if (var == NULL) return UV_ENOENT; len = strlen(var); if (len >= *size) { *size = len + 1; return UV_ENOBUFS; } memcpy(buffer, var, len + 1); *size = len; return 0; } int uv_os_setenv(const char* name, const char* value) { if (name == NULL || value == NULL) return UV_EINVAL; if (setenv(name, value, 1) != 0) return UV__ERR(errno); return 0; } int uv_os_unsetenv(const char* name) { if (name == NULL) return UV_EINVAL; if (unsetenv(name) != 0) return UV__ERR(errno); return 0; } int uv_os_gethostname(char* buffer, size_t* size) { /* On some platforms, if the input buffer is not large enough, gethostname() succeeds, but truncates the result. libuv can detect this and return ENOBUFS instead by creating a large enough buffer and comparing the hostname length to the size input. */ char buf[UV_MAXHOSTNAMESIZE]; size_t len; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; if (gethostname(buf, sizeof(buf)) != 0) return UV__ERR(errno); buf[sizeof(buf) - 1] = '\0'; /* Null terminate, just to be safe. 
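*
* Illustrative note (not part of upstream libuv): uv_os_gethostname() follows
* the same convention as uv_cwd(), uv_os_getenv() and uv_os_homedir() above:
* the caller passes a buffer and an in/out size, and when the buffer is too
* small the call returns UV_ENOBUFS with *size set to the required length
* (including the terminating NUL). A typical caller therefore retries once
* (cleanup elided):
*
*   char small[16];
*   char* buf = small;
*   size_t len = sizeof(small);
*   int err = uv_os_gethostname(buf, &len);
*   if (err == UV_ENOBUFS) {
*     buf = malloc(len);        // len now holds the required size
*     if (buf != NULL)
*       err = uv_os_gethostname(buf, &len);
*   }
*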
*/ len = strlen(buf); if (len >= *size) { *size = len + 1; return UV_ENOBUFS; } memcpy(buffer, buf, len + 1); *size = len; return 0; } uv_os_fd_t uv_get_osfhandle(int fd) { return fd; } int uv_open_osfhandle(uv_os_fd_t os_fd) { return os_fd; } uv_pid_t uv_os_getpid(void) { return getpid(); } uv_pid_t uv_os_getppid(void) { return getppid(); } int uv_os_getpriority(uv_pid_t pid, int* priority) { int r; if (priority == NULL) return UV_EINVAL; errno = 0; r = getpriority(PRIO_PROCESS, (int) pid); if (r == -1 && errno != 0) return UV__ERR(errno); *priority = r; return 0; } int uv_os_setpriority(uv_pid_t pid, int priority) { if (priority < UV_PRIORITY_HIGHEST || priority > UV_PRIORITY_LOW) return UV_EINVAL; if (setpriority(PRIO_PROCESS, (int) pid, priority) != 0) return UV__ERR(errno); return 0; } int uv_os_uname(uv_utsname_t* buffer) { struct utsname buf; int r; if (buffer == NULL) return UV_EINVAL; if (uname(&buf) == -1) { r = UV__ERR(errno); goto error; } r = uv__strscpy(buffer->sysname, buf.sysname, sizeof(buffer->sysname)); if (r == UV_E2BIG) goto error; #ifdef _AIX r = snprintf(buffer->release, sizeof(buffer->release), "%s.%s", buf.version, buf.release); if (r >= sizeof(buffer->release)) { r = UV_E2BIG; goto error; } #else r = uv__strscpy(buffer->release, buf.release, sizeof(buffer->release)); if (r == UV_E2BIG) goto error; #endif r = uv__strscpy(buffer->version, buf.version, sizeof(buffer->version)); if (r == UV_E2BIG) goto error; #if defined(_AIX) || defined(__PASE__) r = uv__strscpy(buffer->machine, "ppc64", sizeof(buffer->machine)); #else r = uv__strscpy(buffer->machine, buf.machine, sizeof(buffer->machine)); #endif if (r == UV_E2BIG) goto error; return 0; error: buffer->sysname[0] = '\0'; buffer->release[0] = '\0'; buffer->version[0] = '\0'; buffer->machine[0] = '\0'; return r; } int uv__getsockpeername(const uv_handle_t* handle, uv__peersockfunc func, struct sockaddr* name, int* namelen) { socklen_t socklen; uv_os_fd_t fd; int r; r = uv_fileno(handle, &fd); if (r < 0) return r; /* sizeof(socklen_t) != sizeof(int) on some systems. */ socklen = (socklen_t) *namelen; if (func(fd, name, &socklen)) return UV__ERR(errno); *namelen = (int) socklen; return 0; } int uv_gettimeofday(uv_timeval64_t* tv) { struct timeval time; if (tv == NULL) return UV_EINVAL; if (gettimeofday(&time, NULL) != 0) return UV__ERR(errno); tv->tv_sec = (int64_t) time.tv_sec; tv->tv_usec = (int32_t) time.tv_usec; return 0; } void uv_sleep(unsigned int msec) { struct timespec timeout; int rc; timeout.tv_sec = msec / 1000; timeout.tv_nsec = (msec % 1000) * 1000 * 1000; do rc = nanosleep(&timeout, &timeout); while (rc == -1 && errno == EINTR); assert(rc == 0); } int uv__search_path(const char* prog, char* buf, size_t* buflen) { char abspath[UV__PATH_MAX]; size_t abspath_size; char trypath[UV__PATH_MAX]; char* cloned_path; char* path_env; char* token; char* itr; if (buf == NULL || buflen == NULL || *buflen == 0) return UV_EINVAL; /* * Possibilities for prog: * i) an absolute path such as: /home/user/myprojects/nodejs/node * ii) a relative path such as: ./node or ../myprojects/nodejs/node * iii) a bare filename such as "node", after exporting PATH variable * to its location. */ /* Case i) and ii) absolute or relative paths */ if (strchr(prog, '/') != NULL) { if (realpath(prog, abspath) != abspath) return UV__ERR(errno); abspath_size = strlen(abspath); *buflen -= 1; if (*buflen > abspath_size) *buflen = abspath_size; memcpy(buf, abspath, *buflen); buf[*buflen] = '\0'; return 0; } /* Case iii). 
Search PATH environment variable */ cloned_path = NULL; token = NULL; path_env = getenv("PATH"); if (path_env == NULL) return UV_EINVAL; cloned_path = uv__strdup(path_env); if (cloned_path == NULL) return UV_ENOMEM; token = uv__strtok(cloned_path, ":", &itr); while (token != NULL) { snprintf(trypath, sizeof(trypath) - 1, "%s/%s", token, prog); if (realpath(trypath, abspath) == abspath) { /* Check the match is executable */ if (access(abspath, X_OK) == 0) { abspath_size = strlen(abspath); *buflen -= 1; if (*buflen > abspath_size) *buflen = abspath_size; memcpy(buf, abspath, *buflen); buf[*buflen] = '\0'; uv__free(cloned_path); return 0; } } token = uv__strtok(NULL, ":", &itr); } uv__free(cloned_path); /* Out of tokens (path entries), and no match found */ return UV_EINVAL; } unsigned int uv_available_parallelism(void) { #ifdef __linux__ cpu_set_t set; long rc; memset(&set, 0, sizeof(set)); /* sysconf(_SC_NPROCESSORS_ONLN) in musl calls sched_getaffinity() but in * glibc it's... complicated... so for consistency try sched_getaffinity() * before falling back to sysconf(_SC_NPROCESSORS_ONLN). */ if (0 == sched_getaffinity(0, sizeof(set), &set)) rc = CPU_COUNT(&set); else rc = sysconf(_SC_NPROCESSORS_ONLN); if (rc < 1) rc = 1; return (unsigned) rc; #elif defined(__MVS__) int rc; rc = __get_num_online_cpus(); if (rc < 1) rc = 1; return (unsigned) rc; #else /* __linux__ */ long rc; rc = sysconf(_SC_NPROCESSORS_ONLN); if (rc < 1) rc = 1; return (unsigned) rc; #endif /* __linux__ */ } gevent-24.11.1/deps/libuv/src/unix/cygwin.c000066400000000000000000000032741471441230600204520ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include int uv_uptime(double* uptime) { struct sysinfo info; if (sysinfo(&info) < 0) return UV__ERR(errno); *uptime = info.uptime; return 0; } int uv_resident_set_memory(size_t* rss) { /* FIXME: read /proc/meminfo? */ *rss = 0; return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { /* FIXME: read /proc/stat? */ *cpu_infos = NULL; *count = 0; return UV_ENOSYS; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } gevent-24.11.1/deps/libuv/src/unix/darwin-proctitle.c000066400000000000000000000156211471441230600224400ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #if !TARGET_OS_IPHONE #include "darwin-stub.h" #endif static int uv__pthread_setname_np(const char* name) { char namebuf[64]; /* MAXTHREADNAMESIZE */ int err; strncpy(namebuf, name, sizeof(namebuf) - 1); namebuf[sizeof(namebuf) - 1] = '\0'; err = pthread_setname_np(namebuf); if (err) return UV__ERR(err); return 0; } int uv__set_process_title(const char* title) { #if TARGET_OS_IPHONE return uv__pthread_setname_np(title); #else CFStringRef (*pCFStringCreateWithCString)(CFAllocatorRef, const char*, CFStringEncoding); CFBundleRef (*pCFBundleGetBundleWithIdentifier)(CFStringRef); void *(*pCFBundleGetDataPointerForName)(CFBundleRef, CFStringRef); void *(*pCFBundleGetFunctionPointerForName)(CFBundleRef, CFStringRef); CFTypeRef (*pLSGetCurrentApplicationASN)(void); OSStatus (*pLSSetApplicationInformationItem)(int, CFTypeRef, CFStringRef, CFStringRef, CFDictionaryRef*); void* application_services_handle; void* core_foundation_handle; CFBundleRef launch_services_bundle; CFStringRef* display_name_key; CFDictionaryRef (*pCFBundleGetInfoDictionary)(CFBundleRef); CFBundleRef (*pCFBundleGetMainBundle)(void); CFDictionaryRef (*pLSApplicationCheckIn)(int, CFDictionaryRef); void (*pLSSetApplicationLaunchServicesServerConnectionStatus)(uint64_t, void*); CFTypeRef asn; int err; err = UV_ENOENT; application_services_handle = dlopen("/System/Library/Frameworks/" "ApplicationServices.framework/" "Versions/A/ApplicationServices", RTLD_LAZY | RTLD_LOCAL); core_foundation_handle = dlopen("/System/Library/Frameworks/" "CoreFoundation.framework/" "Versions/A/CoreFoundation", RTLD_LAZY | RTLD_LOCAL); if (application_services_handle == NULL || core_foundation_handle == NULL) goto out; *(void **)(&pCFStringCreateWithCString) = dlsym(core_foundation_handle, "CFStringCreateWithCString"); *(void **)(&pCFBundleGetBundleWithIdentifier) = dlsym(core_foundation_handle, "CFBundleGetBundleWithIdentifier"); *(void **)(&pCFBundleGetDataPointerForName) = dlsym(core_foundation_handle, "CFBundleGetDataPointerForName"); *(void **)(&pCFBundleGetFunctionPointerForName) = dlsym(core_foundation_handle, "CFBundleGetFunctionPointerForName"); if (pCFStringCreateWithCString == NULL || pCFBundleGetBundleWithIdentifier == NULL || pCFBundleGetDataPointerForName == NULL || pCFBundleGetFunctionPointerForName == NULL) { goto out; } #define S(s) pCFStringCreateWithCString(NULL, (s), kCFStringEncodingUTF8) 
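/*
 * Illustrative note (not part of upstream libuv): the block above shows the
 * pattern used throughout this file: frameworks are loaded with
 * dlopen(RTLD_LAZY | RTLD_LOCAL) and each symbol is resolved through dlsym()
 * into a function pointer, so libuv never links against CoreFoundation or
 * ApplicationServices directly and simply returns an error when a symbol is
 * missing. The "*(void **)(&fp) = dlsym(...)" spelling is a common way to
 * sidestep the object-to-function-pointer conversion that ISO C leaves
 * undefined. A stripped-down sketch of the technique, with a hypothetical
 * library and symbol name:
 *
 *   #include <dlfcn.h>
 *   typedef int (*add_fn)(int, int);
 *   static add_fn load_add(void) {
 *     add_fn fp = NULL;
 *     void* h = dlopen("libexample.so", RTLD_LAZY | RTLD_LOCAL);  // hypothetical
 *     if (h != NULL)
 *       *(void **)(&fp) = dlsym(h, "example_add");                // hypothetical
 *     return fp;  // NULL means the library or the symbol is unavailable
 *   }
 */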
launch_services_bundle = pCFBundleGetBundleWithIdentifier(S("com.apple.LaunchServices")); if (launch_services_bundle == NULL) goto out; *(void **)(&pLSGetCurrentApplicationASN) = pCFBundleGetFunctionPointerForName(launch_services_bundle, S("_LSGetCurrentApplicationASN")); if (pLSGetCurrentApplicationASN == NULL) goto out; *(void **)(&pLSSetApplicationInformationItem) = pCFBundleGetFunctionPointerForName(launch_services_bundle, S("_LSSetApplicationInformationItem")); if (pLSSetApplicationInformationItem == NULL) goto out; display_name_key = pCFBundleGetDataPointerForName(launch_services_bundle, S("_kLSDisplayNameKey")); if (display_name_key == NULL || *display_name_key == NULL) goto out; *(void **)(&pCFBundleGetInfoDictionary) = dlsym(core_foundation_handle, "CFBundleGetInfoDictionary"); *(void **)(&pCFBundleGetMainBundle) = dlsym(core_foundation_handle, "CFBundleGetMainBundle"); if (pCFBundleGetInfoDictionary == NULL || pCFBundleGetMainBundle == NULL) goto out; *(void **)(&pLSApplicationCheckIn) = pCFBundleGetFunctionPointerForName( launch_services_bundle, S("_LSApplicationCheckIn")); if (pLSApplicationCheckIn == NULL) goto out; *(void **)(&pLSSetApplicationLaunchServicesServerConnectionStatus) = pCFBundleGetFunctionPointerForName( launch_services_bundle, S("_LSSetApplicationLaunchServicesServerConnectionStatus")); if (pLSSetApplicationLaunchServicesServerConnectionStatus == NULL) goto out; pLSSetApplicationLaunchServicesServerConnectionStatus(0, NULL); /* Check into process manager?! */ pLSApplicationCheckIn(-2, pCFBundleGetInfoDictionary(pCFBundleGetMainBundle())); asn = pLSGetCurrentApplicationASN(); err = UV_EBUSY; if (asn == NULL) goto out; err = UV_EINVAL; if (pLSSetApplicationInformationItem(-2, /* Magic value. */ asn, *display_name_key, S(title), NULL) != noErr) { goto out; } uv__pthread_setname_np(title); /* Don't care if it fails. */ err = 0; out: if (core_foundation_handle != NULL) dlclose(core_foundation_handle); if (application_services_handle != NULL) dlclose(application_services_handle); return err; #endif /* !TARGET_OS_IPHONE */ } gevent-24.11.1/deps/libuv/src/unix/darwin-stub.h000066400000000000000000000101001471441230600214000ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef UV_DARWIN_STUB_H_ #define UV_DARWIN_STUB_H_ #include struct CFArrayCallBacks; struct CFRunLoopSourceContext; struct FSEventStreamContext; struct CFRange; typedef double CFAbsoluteTime; typedef double CFTimeInterval; typedef int FSEventStreamEventFlags; typedef int OSStatus; typedef long CFIndex; typedef struct CFArrayCallBacks CFArrayCallBacks; typedef struct CFRunLoopSourceContext CFRunLoopSourceContext; typedef struct FSEventStreamContext FSEventStreamContext; typedef uint32_t FSEventStreamCreateFlags; typedef uint64_t FSEventStreamEventId; typedef unsigned CFStringEncoding; typedef void* CFAllocatorRef; typedef void* CFArrayRef; typedef void* CFBundleRef; typedef void* CFDataRef; typedef void* CFDictionaryRef; typedef void* CFMutableDictionaryRef; typedef struct CFRange CFRange; typedef void* CFRunLoopRef; typedef void* CFRunLoopSourceRef; typedef void* CFStringRef; typedef void* CFTypeRef; typedef void* FSEventStreamRef; typedef uint32_t IOOptionBits; typedef unsigned int io_iterator_t; typedef unsigned int io_object_t; typedef unsigned int io_service_t; typedef unsigned int io_registry_entry_t; typedef void (*FSEventStreamCallback)(const FSEventStreamRef, void*, size_t, void*, const FSEventStreamEventFlags*, const FSEventStreamEventId*); struct CFRunLoopSourceContext { CFIndex version; void* info; void* pad[7]; void (*perform)(void*); }; struct FSEventStreamContext { CFIndex version; void* info; void* pad[3]; }; struct CFRange { CFIndex location; CFIndex length; }; static const CFStringEncoding kCFStringEncodingUTF8 = 0x8000100; static const OSStatus noErr = 0; static const FSEventStreamEventId kFSEventStreamEventIdSinceNow = -1; static const int kFSEventStreamCreateFlagNoDefer = 2; static const int kFSEventStreamCreateFlagFileEvents = 16; static const int kFSEventStreamEventFlagEventIdsWrapped = 8; static const int kFSEventStreamEventFlagHistoryDone = 16; static const int kFSEventStreamEventFlagItemChangeOwner = 0x4000; static const int kFSEventStreamEventFlagItemCreated = 0x100; static const int kFSEventStreamEventFlagItemFinderInfoMod = 0x2000; static const int kFSEventStreamEventFlagItemInodeMetaMod = 0x400; static const int kFSEventStreamEventFlagItemIsDir = 0x20000; static const int kFSEventStreamEventFlagItemModified = 0x1000; static const int kFSEventStreamEventFlagItemRemoved = 0x200; static const int kFSEventStreamEventFlagItemRenamed = 0x800; static const int kFSEventStreamEventFlagItemXattrMod = 0x8000; static const int kFSEventStreamEventFlagKernelDropped = 4; static const int kFSEventStreamEventFlagMount = 64; static const int kFSEventStreamEventFlagRootChanged = 32; static const int kFSEventStreamEventFlagUnmount = 128; static const int kFSEventStreamEventFlagUserDropped = 2; #endif /* UV_DARWIN_STUB_H_ */ gevent-24.11.1/deps/libuv/src/unix/darwin.c000066400000000000000000000264171471441230600204420ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include /* _NSGetExecutablePath */ #include #include #include /* sysconf */ #include "darwin-stub.h" static uv_once_t once = UV_ONCE_INIT; static uint64_t (*time_func)(void); static mach_timebase_info_data_t timebase; typedef unsigned char UInt8; int uv__platform_loop_init(uv_loop_t* loop) { loop->cf_state = NULL; if (uv__kqueue_init(loop)) return UV__ERR(errno); return 0; } void uv__platform_loop_delete(uv_loop_t* loop) { uv__fsevents_loop_delete(loop); } static void uv__hrtime_init_once(void) { if (KERN_SUCCESS != mach_timebase_info(&timebase)) abort(); time_func = (uint64_t (*)(void)) dlsym(RTLD_DEFAULT, "mach_continuous_time"); if (time_func == NULL) time_func = mach_absolute_time; } uint64_t uv__hrtime(uv_clocktype_t type) { uv_once(&once, uv__hrtime_init_once); return time_func() * timebase.numer / timebase.denom; } int uv_exepath(char* buffer, size_t* size) { /* realpath(exepath) may be > PATH_MAX so double it to be on the safe side. */ char abspath[PATH_MAX * 2 + 1]; char exepath[PATH_MAX + 1]; uint32_t exepath_size; size_t abspath_size; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; exepath_size = sizeof(exepath); if (_NSGetExecutablePath(exepath, &exepath_size)) return UV_EIO; if (realpath(exepath, abspath) != abspath) return UV__ERR(errno); abspath_size = strlen(abspath); if (abspath_size == 0) return UV_EIO; *size -= 1; if (*size > abspath_size) *size = abspath_size; memcpy(buffer, abspath, *size); buffer[*size] = '\0'; return 0; } uint64_t uv_get_free_memory(void) { vm_statistics_data_t info; mach_msg_type_number_t count = sizeof(info) / sizeof(integer_t); if (host_statistics(mach_host_self(), HOST_VM_INFO, (host_info_t)&info, &count) != KERN_SUCCESS) { return UV_EINVAL; /* FIXME(bnoordhuis) Translate error. */ } return (uint64_t) info.free_count * sysconf(_SC_PAGESIZE); } uint64_t uv_get_total_memory(void) { uint64_t info; int which[] = {CTL_HW, HW_MEMSIZE}; size_t size = sizeof(info); if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. 
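*
* Illustrative note (not part of upstream libuv): returning 0 here simply means
* "no known limit" on this platform. Elsewhere in this file, uv__hrtime() shows
* the standard Mach recipe for monotonic time: mach_absolute_time() (or
* mach_continuous_time() when dlsym() can find it) yields ticks that must be
* scaled by the timebase ratio to obtain nanoseconds:
*
*   #include <stdint.h>
*   #include <mach/mach_time.h>
*   static uint64_t monotonic_ns(void) {
*     static mach_timebase_info_data_t tb;  // zero denom means "not queried yet"
*     if (tb.denom == 0)
*       mach_timebase_info(&tb);  // upstream guards this with uv_once() for thread safety
*     return mach_absolute_time() * tb.numer / tb.denom;
*   }
*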
*/ } void uv_loadavg(double avg[3]) { struct loadavg info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_LOADAVG}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0) < 0) return; avg[0] = (double) info.ldavg[0] / info.fscale; avg[1] = (double) info.ldavg[1] / info.fscale; avg[2] = (double) info.ldavg[2] / info.fscale; } int uv_resident_set_memory(size_t* rss) { mach_msg_type_number_t count; task_basic_info_data_t info; kern_return_t err; count = TASK_BASIC_INFO_COUNT; err = task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t) &info, &count); (void) &err; /* task_info(TASK_BASIC_INFO) cannot really fail. Anything other than * KERN_SUCCESS implies a libuv bug. */ assert(err == KERN_SUCCESS); *rss = info.resident_size; return 0; } int uv_uptime(double* uptime) { time_t now; struct timeval info; size_t size = sizeof(info); static int which[] = {CTL_KERN, KERN_BOOTTIME}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); now = time(NULL); *uptime = now - info.tv_sec; return 0; } static int uv__get_cpu_speed(uint64_t* speed) { /* IOKit */ void (*pIOObjectRelease)(io_object_t); kern_return_t (*pIOMasterPort)(mach_port_t, mach_port_t*); CFMutableDictionaryRef (*pIOServiceMatching)(const char*); kern_return_t (*pIOServiceGetMatchingServices)(mach_port_t, CFMutableDictionaryRef, io_iterator_t*); io_service_t (*pIOIteratorNext)(io_iterator_t); CFTypeRef (*pIORegistryEntryCreateCFProperty)(io_registry_entry_t, CFStringRef, CFAllocatorRef, IOOptionBits); /* CoreFoundation */ CFStringRef (*pCFStringCreateWithCString)(CFAllocatorRef, const char*, CFStringEncoding); CFStringEncoding (*pCFStringGetSystemEncoding)(void); UInt8 *(*pCFDataGetBytePtr)(CFDataRef); CFIndex (*pCFDataGetLength)(CFDataRef); void (*pCFDataGetBytes)(CFDataRef, CFRange, UInt8*); void (*pCFRelease)(CFTypeRef); void* core_foundation_handle; void* iokit_handle; int err; kern_return_t kr; mach_port_t mach_port; io_iterator_t it; io_object_t service; mach_port = 0; err = UV_ENOENT; core_foundation_handle = dlopen("/System/Library/Frameworks/" "CoreFoundation.framework/" "CoreFoundation", RTLD_LAZY | RTLD_LOCAL); iokit_handle = dlopen("/System/Library/Frameworks/IOKit.framework/" "IOKit", RTLD_LAZY | RTLD_LOCAL); if (core_foundation_handle == NULL || iokit_handle == NULL) goto out; #define V(handle, symbol) \ do { \ *(void **)(&p ## symbol) = dlsym((handle), #symbol); \ if (p ## symbol == NULL) \ goto out; \ } \ while (0) V(iokit_handle, IOMasterPort); V(iokit_handle, IOServiceMatching); V(iokit_handle, IOServiceGetMatchingServices); V(iokit_handle, IOIteratorNext); V(iokit_handle, IOObjectRelease); V(iokit_handle, IORegistryEntryCreateCFProperty); V(core_foundation_handle, CFStringCreateWithCString); V(core_foundation_handle, CFStringGetSystemEncoding); V(core_foundation_handle, CFDataGetBytePtr); V(core_foundation_handle, CFDataGetLength); V(core_foundation_handle, CFDataGetBytes); V(core_foundation_handle, CFRelease); #undef V #define S(s) pCFStringCreateWithCString(NULL, (s), kCFStringEncodingUTF8) kr = pIOMasterPort(MACH_PORT_NULL, &mach_port); assert(kr == KERN_SUCCESS); CFMutableDictionaryRef classes_to_match = pIOServiceMatching("IOPlatformDevice"); kr = pIOServiceGetMatchingServices(mach_port, classes_to_match, &it); assert(kr == KERN_SUCCESS); service = pIOIteratorNext(it); CFStringRef device_type_str = S("device_type"); CFStringRef clock_frequency_str = S("clock-frequency"); while (service != 0) { CFDataRef data; data = pIORegistryEntryCreateCFProperty(service, 
device_type_str, NULL, 0); if (data) { const UInt8* raw = pCFDataGetBytePtr(data); if (strncmp((char*)raw, "cpu", 3) == 0 || strncmp((char*)raw, "processor", 9) == 0) { CFDataRef freq_ref; freq_ref = pIORegistryEntryCreateCFProperty(service, clock_frequency_str, NULL, 0); if (freq_ref) { const UInt8* freq_ref_ptr = pCFDataGetBytePtr(freq_ref); CFIndex len = pCFDataGetLength(freq_ref); if (len == 8) memcpy(speed, freq_ref_ptr, 8); else if (len == 4) { uint32_t v; memcpy(&v, freq_ref_ptr, 4); *speed = v; } else { *speed = 0; } pCFRelease(freq_ref); pCFRelease(data); break; } } pCFRelease(data); } service = pIOIteratorNext(it); } pIOObjectRelease(it); err = 0; if (device_type_str != NULL) pCFRelease(device_type_str); if (clock_frequency_str != NULL) pCFRelease(clock_frequency_str); out: if (core_foundation_handle != NULL) dlclose(core_foundation_handle); if (iokit_handle != NULL) dlclose(iokit_handle); mach_port_deallocate(mach_task_self(), mach_port); return err; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int ticks = (unsigned int)sysconf(_SC_CLK_TCK), multiplier = ((uint64_t)1000L / ticks); char model[512]; size_t size; unsigned int i; natural_t numcpus; mach_msg_type_number_t msg_type; processor_cpu_load_info_data_t *info; uv_cpu_info_t* cpu_info; uint64_t cpuspeed; int err; size = sizeof(model); if (sysctlbyname("machdep.cpu.brand_string", &model, &size, NULL, 0) && sysctlbyname("hw.model", &model, &size, NULL, 0)) { return UV__ERR(errno); } err = uv__get_cpu_speed(&cpuspeed); if (err < 0) return err; if (host_processor_info(mach_host_self(), PROCESSOR_CPU_LOAD_INFO, &numcpus, (processor_info_array_t*)&info, &msg_type) != KERN_SUCCESS) { return UV_EINVAL; /* FIXME(bnoordhuis) Translate error. */ } *cpu_infos = uv__malloc(numcpus * sizeof(**cpu_infos)); if (!(*cpu_infos)) { vm_deallocate(mach_task_self(), (vm_address_t)info, msg_type); return UV_ENOMEM; } *count = numcpus; for (i = 0; i < numcpus; i++) { cpu_info = &(*cpu_infos)[i]; cpu_info->cpu_times.user = (uint64_t)(info[i].cpu_ticks[0]) * multiplier; cpu_info->cpu_times.nice = (uint64_t)(info[i].cpu_ticks[3]) * multiplier; cpu_info->cpu_times.sys = (uint64_t)(info[i].cpu_ticks[1]) * multiplier; cpu_info->cpu_times.idle = (uint64_t)(info[i].cpu_ticks[2]) * multiplier; cpu_info->cpu_times.irq = 0; cpu_info->model = uv__strdup(model); cpu_info->speed = cpuspeed/1000000; } vm_deallocate(mach_task_self(), (vm_address_t)info, msg_type); return 0; } gevent-24.11.1/deps/libuv/src/unix/dl.c000066400000000000000000000043401471441230600175440ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include static int uv__dlerror(uv_lib_t* lib); int uv_dlopen(const char* filename, uv_lib_t* lib) { dlerror(); /* Reset error status. */ lib->errmsg = NULL; lib->handle = dlopen(filename, RTLD_LAZY); return lib->handle ? 0 : uv__dlerror(lib); } void uv_dlclose(uv_lib_t* lib) { uv__free(lib->errmsg); lib->errmsg = NULL; if (lib->handle) { /* Ignore errors. No good way to signal them without leaking memory. */ dlclose(lib->handle); lib->handle = NULL; } } int uv_dlsym(uv_lib_t* lib, const char* name, void** ptr) { dlerror(); /* Reset error status. */ *ptr = dlsym(lib->handle, name); return *ptr ? 0 : uv__dlerror(lib); } const char* uv_dlerror(const uv_lib_t* lib) { return lib->errmsg ? lib->errmsg : "no error"; } static int uv__dlerror(uv_lib_t* lib) { const char* errmsg; uv__free(lib->errmsg); errmsg = dlerror(); if (errmsg) { lib->errmsg = uv__strdup(errmsg); return -1; } else { lib->errmsg = NULL; return 0; } } gevent-24.11.1/deps/libuv/src/unix/epoll.c000066400000000000000000000302601471441230600202600ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include int uv__epoll_init(uv_loop_t* loop) { int fd; fd = epoll_create1(O_CLOEXEC); /* epoll_create1() can fail either because it's not implemented (old kernel) * or because it doesn't understand the O_CLOEXEC flag. */ if (fd == -1 && (errno == ENOSYS || errno == EINVAL)) { fd = epoll_create(256); if (fd != -1) uv__cloexec(fd, 1); } loop->backend_fd = fd; if (fd == -1) return UV__ERR(errno); return 0; } void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { struct epoll_event* events; struct epoll_event dummy; uintptr_t i; uintptr_t nfds; assert(loop->watchers != NULL); assert(fd >= 0); events = (struct epoll_event*) loop->watchers[loop->nwatchers]; nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1]; if (events != NULL) /* Invalidate events with same file descriptor */ for (i = 0; i < nfds; i++) if (events[i].data.fd == fd) events[i].data.fd = -1; /* Remove the file descriptor from the epoll. 
* This avoids a problem where the same file description remains open * in another process, causing repeated junk epoll events. * * We pass in a dummy epoll_event, to work around a bug in old kernels. */ if (loop->backend_fd >= 0) { /* Work around a bug in kernels 3.10 to 3.19 where passing a struct that * has the EPOLLWAKEUP flag set generates spurious audit syslog warnings. */ memset(&dummy, 0, sizeof(dummy)); epoll_ctl(loop->backend_fd, EPOLL_CTL_DEL, fd, &dummy); } } int uv__io_check_fd(uv_loop_t* loop, int fd) { struct epoll_event e; int rc; memset(&e, 0, sizeof(e)); e.events = POLLIN; e.data.fd = -1; rc = 0; if (epoll_ctl(loop->backend_fd, EPOLL_CTL_ADD, fd, &e)) if (errno != EEXIST) rc = UV__ERR(errno); if (rc == 0) if (epoll_ctl(loop->backend_fd, EPOLL_CTL_DEL, fd, &e)) abort(); return rc; } void uv__io_poll(uv_loop_t* loop, int timeout) { /* A bug in kernels < 2.6.37 makes timeouts larger than ~30 minutes * effectively infinite on 32 bits architectures. To avoid blocking * indefinitely, we cap the timeout and poll again if necessary. * * Note that "30 minutes" is a simplification because it depends on * the value of CONFIG_HZ. The magic constant assumes CONFIG_HZ=1200, * that being the largest value I have seen in the wild (and only once.) */ static const int max_safe_timeout = 1789569; static int no_epoll_pwait_cached; static int no_epoll_wait_cached; int no_epoll_pwait; int no_epoll_wait; struct epoll_event events[1024]; struct epoll_event* pe; struct epoll_event e; int real_timeout; QUEUE* q; uv__io_t* w; sigset_t sigset; uint64_t sigmask; uint64_t base; int have_signals; int nevents; int count; int nfds; int fd; int op; int i; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } memset(&e, 0, sizeof(e)); while (!QUEUE_EMPTY(&loop->watcher_queue)) { q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); assert(w->fd < (int) loop->nwatchers); e.events = w->pevents; e.data.fd = w->fd; if (w->events == 0) op = EPOLL_CTL_ADD; else op = EPOLL_CTL_MOD; /* XXX Future optimization: do EPOLL_CTL_MOD lazily if we stop watching * events, skip the syscall and squelch the events after epoll_wait(). */ if (epoll_ctl(loop->backend_fd, op, w->fd, &e)) { if (errno != EEXIST) abort(); assert(op == EPOLL_CTL_ADD); /* We've reactivated a file descriptor that's been watched before. */ if (epoll_ctl(loop->backend_fd, EPOLL_CTL_MOD, w->fd, &e)) abort(); } w->events = w->pevents; } sigmask = 0; if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { sigemptyset(&sigset); sigaddset(&sigset, SIGPROF); sigmask |= 1 << (SIGPROF - 1); } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ real_timeout = timeout; if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; user_timeout = 0; } /* You could argue there is a dependency between these two but * ultimately we don't care about their ordering with respect * to one another. Worst case, we make a few system calls that * could have been avoided because another thread already knows * they fail with ENOSYS. Hardly the end of the world. */ no_epoll_pwait = uv__load_relaxed(&no_epoll_pwait_cached); no_epoll_wait = uv__load_relaxed(&no_epoll_wait_cached); for (;;) { /* Only need to set the provider_entry_time if timeout != 0. 
The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); /* See the comment for max_safe_timeout for an explanation of why * this is necessary. Executive summary: kernel bug workaround. */ if (sizeof(int32_t) == sizeof(long) && timeout >= max_safe_timeout) timeout = max_safe_timeout; if (sigmask != 0 && no_epoll_pwait != 0) if (pthread_sigmask(SIG_BLOCK, &sigset, NULL)) abort(); if (no_epoll_wait != 0 || (sigmask != 0 && no_epoll_pwait == 0)) { nfds = epoll_pwait(loop->backend_fd, events, ARRAY_SIZE(events), timeout, &sigset); if (nfds == -1 && errno == ENOSYS) { uv__store_relaxed(&no_epoll_pwait_cached, 1); no_epoll_pwait = 1; } } else { nfds = epoll_wait(loop->backend_fd, events, ARRAY_SIZE(events), timeout); if (nfds == -1 && errno == ENOSYS) { uv__store_relaxed(&no_epoll_wait_cached, 1); no_epoll_wait = 1; } } if (sigmask != 0 && no_epoll_pwait != 0) if (pthread_sigmask(SIG_UNBLOCK, &sigset, NULL)) abort(); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { assert(timeout != -1); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* We may have been inside the system call for longer than |timeout| * milliseconds so we need to update the timestamp to avoid drift. */ goto update_timeout; } if (nfds == -1) { if (errno == ENOSYS) { /* epoll_wait() or epoll_pwait() failed, try the other system call. */ assert(no_epoll_wait == 0 || no_epoll_pwait == 0); continue; } if (errno != EINTR) abort(); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } have_signals = 0; nevents = 0; { /* Squelch a -Waddress-of-packed-member warning with gcc >= 9. */ union { struct epoll_event* events; uv__io_t* watchers; } x; x.events = events; assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = x.watchers; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; } for (i = 0; i < nfds; i++) { pe = events + i; fd = pe->data.fd; /* Skip invalidated events, see uv__platform_invalidate_fd */ if (fd == -1) continue; assert(fd >= 0); assert((unsigned) fd < loop->nwatchers); w = loop->watchers[fd]; if (w == NULL) { /* File descriptor that we've stopped watching, disarm it. * * Ignore all errors because we may be racing with another thread * when the file descriptor is closed. */ epoll_ctl(loop->backend_fd, EPOLL_CTL_DEL, fd, pe); continue; } /* Give users only events they're interested in. Prevents spurious * callbacks when previous callback invocation in this loop has stopped * the current watcher. Also, filters out events that users has not * requested us to watch. */ pe->events &= w->pevents | POLLERR | POLLHUP; /* Work around an epoll quirk where it sometimes reports just the * EPOLLERR or EPOLLHUP event. In order to force the event loop to * move forward, we merge in the read/write events that the watcher * is interested in; uv__read() and uv__write() will then deal with * the error or hangup in the usual fashion. 
* * Note to self: happens when epoll reports EPOLLIN|EPOLLHUP, the user * reads the available data, calls uv_read_stop(), then sometime later * calls uv_read_start() again. By then, libuv has forgotten about the * hangup and the kernel won't report EPOLLIN again because there's * nothing left to read. If anything, libuv is to blame here. The * current hack is just a quick bandaid; to properly fix it, libuv * needs to remember the error/hangup event. We should get that for * free when we switch over to edge-triggered I/O. */ if (pe->events == POLLERR || pe->events == POLLHUP) pe->events |= w->pevents & (POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI); if (pe->events != 0) { /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, pe->events); } nevents++; } } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. */ timeout = 0; continue; } return; } if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); real_timeout -= (loop->time - base); if (real_timeout <= 0) return; timeout = real_timeout; } } gevent-24.11.1/deps/libuv/src/unix/freebsd.c000066400000000000000000000167401471441230600205660ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include /* VM_LOADAVG */ #include #include #include /* sysconf */ #include #ifndef CPUSTATES # define CPUSTATES 5U #endif #ifndef CP_USER # define CP_USER 0 # define CP_NICE 1 # define CP_SYS 2 # define CP_IDLE 3 # define CP_INTR 4 #endif int uv__platform_loop_init(uv_loop_t* loop) { return uv__kqueue_init(loop); } void uv__platform_loop_delete(uv_loop_t* loop) { } int uv_exepath(char* buffer, size_t* size) { char abspath[PATH_MAX * 2 + 1]; int mib[4]; size_t abspath_size; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; mib[0] = CTL_KERN; mib[1] = KERN_PROC; mib[2] = KERN_PROC_PATHNAME; mib[3] = -1; abspath_size = sizeof abspath; if (sysctl(mib, ARRAY_SIZE(mib), abspath, &abspath_size, NULL, 0)) return UV__ERR(errno); assert(abspath_size > 0); abspath_size -= 1; *size -= 1; if (*size > abspath_size) *size = abspath_size; memcpy(buffer, abspath, *size); buffer[*size] = '\0'; return 0; } uint64_t uv_get_free_memory(void) { int freecount; size_t size = sizeof(freecount); if (sysctlbyname("vm.stats.vm.v_free_count", &freecount, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) freecount * sysconf(_SC_PAGESIZE); } uint64_t uv_get_total_memory(void) { unsigned long info; int which[] = {CTL_HW, HW_PHYSMEM}; size_t size = sizeof(info); if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } void uv_loadavg(double avg[3]) { struct loadavg info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_LOADAVG}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0) < 0) return; avg[0] = (double) info.ldavg[0] / info.fscale; avg[1] = (double) info.ldavg[1] / info.fscale; avg[2] = (double) info.ldavg[2] / info.fscale; } int uv_resident_set_memory(size_t* rss) { struct kinfo_proc kinfo; size_t page_size; size_t kinfo_size; int mib[4]; mib[0] = CTL_KERN; mib[1] = KERN_PROC; mib[2] = KERN_PROC_PID; mib[3] = getpid(); kinfo_size = sizeof(kinfo); if (sysctl(mib, ARRAY_SIZE(mib), &kinfo, &kinfo_size, NULL, 0)) return UV__ERR(errno); page_size = getpagesize(); #ifdef __DragonFly__ *rss = kinfo.kp_vm_rssize * page_size; #else *rss = kinfo.ki_rssize * page_size; #endif return 0; } int uv_uptime(double* uptime) { int r; struct timespec sp; r = clock_gettime(CLOCK_MONOTONIC, &sp); if (r) return UV__ERR(errno); *uptime = sp.tv_sec; return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int ticks = (unsigned int)sysconf(_SC_CLK_TCK), multiplier = ((uint64_t)1000L / ticks), cpuspeed, maxcpus, cur = 0; uv_cpu_info_t* cpu_info; const char* maxcpus_key; const char* cptimes_key; const char* model_key; char model[512]; long* cp_times; int numcpus; size_t size; int i; #if defined(__DragonFly__) /* This is not quite correct but DragonFlyBSD doesn't seem to have anything * comparable to kern.smp.maxcpus or kern.cp_times (kern.cp_time is a total, * not per CPU). At least this stops uv_cpu_info() from failing completely. */ maxcpus_key = "hw.ncpu"; cptimes_key = "kern.cp_time"; #else maxcpus_key = "kern.smp.maxcpus"; cptimes_key = "kern.cp_times"; #endif #if defined(__arm__) || defined(__aarch64__) /* The key hw.model and hw.clockrate are not available on FreeBSD ARM. 
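 * Fall back to hw.machine for the model string and report the speed as 0.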
*/ model_key = "hw.machine"; cpuspeed = 0; #else model_key = "hw.model"; size = sizeof(cpuspeed); if (sysctlbyname("hw.clockrate", &cpuspeed, &size, NULL, 0)) return -errno; #endif size = sizeof(model); if (sysctlbyname(model_key, &model, &size, NULL, 0)) return UV__ERR(errno); size = sizeof(numcpus); if (sysctlbyname("hw.ncpu", &numcpus, &size, NULL, 0)) return UV__ERR(errno); *cpu_infos = uv__malloc(numcpus * sizeof(**cpu_infos)); if (!(*cpu_infos)) return UV_ENOMEM; *count = numcpus; /* kern.cp_times on FreeBSD i386 gives an array up to maxcpus instead of * ncpu. */ size = sizeof(maxcpus); if (sysctlbyname(maxcpus_key, &maxcpus, &size, NULL, 0)) { uv__free(*cpu_infos); return UV__ERR(errno); } size = maxcpus * CPUSTATES * sizeof(long); cp_times = uv__malloc(size); if (cp_times == NULL) { uv__free(*cpu_infos); return UV_ENOMEM; } if (sysctlbyname(cptimes_key, cp_times, &size, NULL, 0)) { uv__free(cp_times); uv__free(*cpu_infos); return UV__ERR(errno); } for (i = 0; i < numcpus; i++) { cpu_info = &(*cpu_infos)[i]; cpu_info->cpu_times.user = (uint64_t)(cp_times[CP_USER+cur]) * multiplier; cpu_info->cpu_times.nice = (uint64_t)(cp_times[CP_NICE+cur]) * multiplier; cpu_info->cpu_times.sys = (uint64_t)(cp_times[CP_SYS+cur]) * multiplier; cpu_info->cpu_times.idle = (uint64_t)(cp_times[CP_IDLE+cur]) * multiplier; cpu_info->cpu_times.irq = (uint64_t)(cp_times[CP_INTR+cur]) * multiplier; cpu_info->model = uv__strdup(model); cpu_info->speed = cpuspeed; cur+=CPUSTATES; } uv__free(cp_times); return 0; } int uv__sendmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen) { #if __FreeBSD__ >= 11 && !defined(__DragonFly__) return sendmmsg(fd, (struct mmsghdr*) mmsg, vlen, 0 /* flags */); #else return errno = ENOSYS, -1; #endif } int uv__recvmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen) { #if __FreeBSD__ >= 11 && !defined(__DragonFly__) return recvmmsg(fd, (struct mmsghdr*) mmsg, vlen, 0 /* flags */, NULL /* timeout */); #else return errno = ENOSYS, -1; #endif } ssize_t uv__fs_copy_file_range(int fd_in, off_t* off_in, int fd_out, off_t* off_out, size_t len, unsigned int flags) { #if __FreeBSD__ >= 13 && !defined(__DragonFly__) return copy_file_range(fd_in, off_in, fd_out, off_out, len, flags); #else return errno = ENOSYS, -1; #endif } gevent-24.11.1/deps/libuv/src/unix/fs.c000066400000000000000000001613211471441230600175600ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ /* Caveat emptor: this file deviates from the libuv convention of returning * negated errno codes. Most uv_fs_*() functions map directly to the system * call of the same name. For more complex wrappers, it's easier to just * return -1 with errno set. The dispatcher in uv__fs_work() takes care of * getting the errno to the right place (req->result or as the return value.) */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include /* PATH_MAX */ #include #include #include #include #include #include #include #include #include #if defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) # define HAVE_PREADV 1 #else # define HAVE_PREADV 0 #endif #if defined(__linux__) # include "sys/utsname.h" #endif #if defined(__linux__) || defined(__sun) # include # include #endif #if defined(__APPLE__) # include #elif defined(__linux__) && !defined(FICLONE) # include # define FICLONE _IOW(0x94, 9, int) #endif #if defined(_AIX) && !defined(_AIX71) # include #endif #if defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) # include # include #elif defined(__sun) || \ defined(__MVS__) || \ defined(__NetBSD__) || \ defined(__HAIKU__) || \ defined(__QNX__) # include #else # include #endif #if defined(_AIX) && _XOPEN_SOURCE <= 600 extern char *mkdtemp(char *template); /* See issue #740 on AIX < 7 */ #endif #define INIT(subtype) \ do { \ if (req == NULL) \ return UV_EINVAL; \ UV_REQ_INIT(req, UV_FS); \ req->fs_type = UV_FS_ ## subtype; \ req->result = 0; \ req->ptr = NULL; \ req->loop = loop; \ req->path = NULL; \ req->new_path = NULL; \ req->bufs = NULL; \ req->cb = cb; \ } \ while (0) #define PATH \ do { \ assert(path != NULL); \ if (cb == NULL) { \ req->path = path; \ } else { \ req->path = uv__strdup(path); \ if (req->path == NULL) \ return UV_ENOMEM; \ } \ } \ while (0) #define PATH2 \ do { \ if (cb == NULL) { \ req->path = path; \ req->new_path = new_path; \ } else { \ size_t path_len; \ size_t new_path_len; \ path_len = strlen(path) + 1; \ new_path_len = strlen(new_path) + 1; \ req->path = uv__malloc(path_len + new_path_len); \ if (req->path == NULL) \ return UV_ENOMEM; \ req->new_path = req->path + path_len; \ memcpy((void*) req->path, path, path_len); \ memcpy((void*) req->new_path, new_path, new_path_len); \ } \ } \ while (0) #define POST \ do { \ if (cb != NULL) { \ uv__req_register(loop, req); \ uv__work_submit(loop, \ &req->work_req, \ UV__WORK_FAST_IO, \ uv__fs_work, \ uv__fs_done); \ return 0; \ } \ else { \ uv__fs_work(&req->work_req); \ return req->result; \ } \ } \ while (0) static int uv__fs_close(int fd) { int rc; rc = uv__close_nocancel(fd); if (rc == -1) if (errno == EINTR || errno == EINPROGRESS) rc = 0; /* The close is in progress, not an error. */ return rc; } static ssize_t uv__fs_fsync(uv_fs_t* req) { #if defined(__APPLE__) /* Apple's fdatasync and fsync explicitly do NOT flush the drive write cache * to the drive platters. This is in contrast to Linux's fdatasync and fsync * which do, according to recent man pages. F_FULLFSYNC is Apple's equivalent * for flushing buffered data to permanent storage. If F_FULLFSYNC is not * supported by the file system we fall back to F_BARRIERFSYNC or fsync(). * This is the same approach taken by sqlite, except sqlite does not issue * an F_BARRIERFSYNC call. 
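 * The code below therefore tries F_FULLFSYNC first, then falls back to
 * F_BARRIERFSYNC, and finally to plain fsync() if both fcntl() calls fail.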
*/ int r; r = fcntl(req->file, F_FULLFSYNC); if (r != 0) r = fcntl(req->file, 85 /* F_BARRIERFSYNC */); /* fsync + barrier */ if (r != 0) r = fsync(req->file); return r; #else return fsync(req->file); #endif } static ssize_t uv__fs_fdatasync(uv_fs_t* req) { #if defined(__linux__) || defined(__sun) || defined(__NetBSD__) return fdatasync(req->file); #elif defined(__APPLE__) /* See the comment in uv__fs_fsync. */ return uv__fs_fsync(req); #else return fsync(req->file); #endif } UV_UNUSED(static struct timespec uv__fs_to_timespec(double time)) { struct timespec ts; ts.tv_sec = time; ts.tv_nsec = (time - ts.tv_sec) * 1e9; /* TODO(bnoordhuis) Remove this. utimesat() has nanosecond resolution but we * stick to microsecond resolution for the sake of consistency with other * platforms. I'm the original author of this compatibility hack but I'm * less convinced it's useful nowadays. */ ts.tv_nsec -= ts.tv_nsec % 1000; if (ts.tv_nsec < 0) { ts.tv_nsec += 1e9; ts.tv_sec -= 1; } return ts; } UV_UNUSED(static struct timeval uv__fs_to_timeval(double time)) { struct timeval tv; tv.tv_sec = time; tv.tv_usec = (time - tv.tv_sec) * 1e6; if (tv.tv_usec < 0) { tv.tv_usec += 1e6; tv.tv_sec -= 1; } return tv; } static ssize_t uv__fs_futime(uv_fs_t* req) { #if defined(__linux__) \ || defined(_AIX71) \ || defined(__HAIKU__) \ || defined(__GNU__) struct timespec ts[2]; ts[0] = uv__fs_to_timespec(req->atime); ts[1] = uv__fs_to_timespec(req->mtime); return futimens(req->file, ts); #elif defined(__APPLE__) \ || defined(__DragonFly__) \ || defined(__FreeBSD__) \ || defined(__FreeBSD_kernel__) \ || defined(__NetBSD__) \ || defined(__OpenBSD__) \ || defined(__sun) struct timeval tv[2]; tv[0] = uv__fs_to_timeval(req->atime); tv[1] = uv__fs_to_timeval(req->mtime); # if defined(__sun) return futimesat(req->file, NULL, tv); # else return futimes(req->file, tv); # endif #elif defined(__MVS__) attrib_t atr; memset(&atr, 0, sizeof(atr)); atr.att_mtimechg = 1; atr.att_atimechg = 1; atr.att_mtime = req->mtime; atr.att_atime = req->atime; return __fchattr(req->file, &atr, sizeof(atr)); #else errno = ENOSYS; return -1; #endif } static ssize_t uv__fs_mkdtemp(uv_fs_t* req) { return mkdtemp((char*) req->path) ? 0 : -1; } static int (*uv__mkostemp)(char*, int); static void uv__mkostemp_initonce(void) { /* z/os doesn't have RTLD_DEFAULT but that's okay * because it doesn't have mkostemp(O_CLOEXEC) either. */ #ifdef RTLD_DEFAULT uv__mkostemp = (int (*)(char*, int)) dlsym(RTLD_DEFAULT, "mkostemp"); /* We don't care about errors, but we do want to clean them up. * If there has been no error, then dlerror() will just return * NULL. */ dlerror(); #endif /* RTLD_DEFAULT */ } static int uv__fs_mkstemp(uv_fs_t* req) { static uv_once_t once = UV_ONCE_INIT; int r; #ifdef O_CLOEXEC static int no_cloexec_support; #endif static const char pattern[] = "XXXXXX"; static const size_t pattern_size = sizeof(pattern) - 1; char* path; size_t path_length; path = (char*) req->path; path_length = strlen(path); /* EINVAL can be returned for 2 reasons: 1. The template's last 6 characters were not XXXXXX 2. open() didn't support O_CLOEXEC We want to avoid going to the fallback path in case of 1, so it's manually checked before. 
*/ if (path_length < pattern_size || strcmp(path + path_length - pattern_size, pattern)) { errno = EINVAL; r = -1; goto clobber; } uv_once(&once, uv__mkostemp_initonce); #ifdef O_CLOEXEC if (uv__load_relaxed(&no_cloexec_support) == 0 && uv__mkostemp != NULL) { r = uv__mkostemp(path, O_CLOEXEC); if (r >= 0) return r; /* If mkostemp() returns EINVAL, it means the kernel doesn't support O_CLOEXEC, so we just fallback to mkstemp() below. */ if (errno != EINVAL) goto clobber; /* We set the static variable so that next calls don't even try to use mkostemp. */ uv__store_relaxed(&no_cloexec_support, 1); } #endif /* O_CLOEXEC */ if (req->cb != NULL) uv_rwlock_rdlock(&req->loop->cloexec_lock); r = mkstemp(path); /* In case of failure `uv__cloexec` will leave error in `errno`, * so it is enough to just set `r` to `-1`. */ if (r >= 0 && uv__cloexec(r, 1) != 0) { r = uv__close(r); if (r != 0) abort(); r = -1; } if (req->cb != NULL) uv_rwlock_rdunlock(&req->loop->cloexec_lock); clobber: if (r < 0) path[0] = '\0'; return r; } static ssize_t uv__fs_open(uv_fs_t* req) { #ifdef O_CLOEXEC return open(req->path, req->flags | O_CLOEXEC, req->mode); #else /* O_CLOEXEC */ int r; if (req->cb != NULL) uv_rwlock_rdlock(&req->loop->cloexec_lock); r = open(req->path, req->flags, req->mode); /* In case of failure `uv__cloexec` will leave error in `errno`, * so it is enough to just set `r` to `-1`. */ if (r >= 0 && uv__cloexec(r, 1) != 0) { r = uv__close(r); if (r != 0) abort(); r = -1; } if (req->cb != NULL) uv_rwlock_rdunlock(&req->loop->cloexec_lock); return r; #endif /* O_CLOEXEC */ } #if !HAVE_PREADV static ssize_t uv__fs_preadv(uv_file fd, uv_buf_t* bufs, unsigned int nbufs, off_t off) { uv_buf_t* buf; uv_buf_t* end; ssize_t result; ssize_t rc; size_t pos; assert(nbufs > 0); result = 0; pos = 0; buf = bufs + 0; end = bufs + nbufs; for (;;) { do rc = pread(fd, buf->base + pos, buf->len - pos, off + result); while (rc == -1 && errno == EINTR); if (rc == 0) break; if (rc == -1 && result == 0) return UV__ERR(errno); if (rc == -1) break; /* We read some data so return that, ignore the error. */ pos += rc; result += rc; if (pos < buf->len) continue; pos = 0; buf += 1; if (buf == end) break; } return result; } #endif static ssize_t uv__fs_read(uv_fs_t* req) { #if defined(__linux__) static int no_preadv; #endif unsigned int iovmax; ssize_t result; iovmax = uv__getiovmax(); if (req->nbufs > iovmax) req->nbufs = iovmax; if (req->off < 0) { if (req->nbufs == 1) result = read(req->file, req->bufs[0].base, req->bufs[0].len); else result = readv(req->file, (struct iovec*) req->bufs, req->nbufs); } else { if (req->nbufs == 1) { result = pread(req->file, req->bufs[0].base, req->bufs[0].len, req->off); goto done; } #if HAVE_PREADV result = preadv(req->file, (struct iovec*) req->bufs, req->nbufs, req->off); #else # if defined(__linux__) if (uv__load_relaxed(&no_preadv)) retry: # endif { result = uv__fs_preadv(req->file, req->bufs, req->nbufs, req->off); } # if defined(__linux__) else { result = uv__preadv(req->file, (struct iovec*)req->bufs, req->nbufs, req->off); if (result == -1 && errno == ENOSYS) { uv__store_relaxed(&no_preadv, 1); goto retry; } } # endif #endif } done: /* Early cleanup of bufs allocation, since we're done with it. 
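 * req->bufsml is the request's small inline buffer array, so req->bufs only
 * needs freeing when it points at a separately allocated heap array.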
*/ if (req->bufs != req->bufsml) uv__free(req->bufs); req->bufs = NULL; req->nbufs = 0; #ifdef __PASE__ /* PASE returns EOPNOTSUPP when reading a directory, convert to EISDIR */ if (result == -1 && errno == EOPNOTSUPP) { struct stat buf; ssize_t rc; rc = fstat(req->file, &buf); if (rc == 0 && S_ISDIR(buf.st_mode)) { errno = EISDIR; } } #endif return result; } #if defined(__APPLE__) && !defined(MAC_OS_X_VERSION_10_8) #define UV_CONST_DIRENT uv__dirent_t #else #define UV_CONST_DIRENT const uv__dirent_t #endif static int uv__fs_scandir_filter(UV_CONST_DIRENT* dent) { return strcmp(dent->d_name, ".") != 0 && strcmp(dent->d_name, "..") != 0; } static int uv__fs_scandir_sort(UV_CONST_DIRENT** a, UV_CONST_DIRENT** b) { return strcmp((*a)->d_name, (*b)->d_name); } static ssize_t uv__fs_scandir(uv_fs_t* req) { uv__dirent_t** dents; int n; dents = NULL; n = scandir(req->path, &dents, uv__fs_scandir_filter, uv__fs_scandir_sort); /* NOTE: We will use nbufs as an index field */ req->nbufs = 0; if (n == 0) { /* OS X still needs to deallocate some memory. * Memory was allocated using the system allocator, so use free() here. */ free(dents); dents = NULL; } else if (n == -1) { return n; } req->ptr = dents; return n; } static int uv__fs_opendir(uv_fs_t* req) { uv_dir_t* dir; dir = uv__malloc(sizeof(*dir)); if (dir == NULL) goto error; dir->dir = opendir(req->path); if (dir->dir == NULL) goto error; req->ptr = dir; return 0; error: uv__free(dir); req->ptr = NULL; return -1; } static int uv__fs_readdir(uv_fs_t* req) { uv_dir_t* dir; uv_dirent_t* dirent; struct dirent* res; unsigned int dirent_idx; unsigned int i; dir = req->ptr; dirent_idx = 0; while (dirent_idx < dir->nentries) { /* readdir() returns NULL on end of directory, as well as on error. errno is used to differentiate between the two conditions. */ errno = 0; res = readdir(dir->dir); if (res == NULL) { if (errno != 0) goto error; break; } if (strcmp(res->d_name, ".") == 0 || strcmp(res->d_name, "..") == 0) continue; dirent = &dir->dirents[dirent_idx]; dirent->name = uv__strdup(res->d_name); if (dirent->name == NULL) goto error; dirent->type = uv__fs_get_dirent_type(res); ++dirent_idx; } return dirent_idx; error: for (i = 0; i < dirent_idx; ++i) { uv__free((char*) dir->dirents[i].name); dir->dirents[i].name = NULL; } return -1; } static int uv__fs_closedir(uv_fs_t* req) { uv_dir_t* dir; dir = req->ptr; if (dir->dir != NULL) { closedir(dir->dir); dir->dir = NULL; } uv__free(req->ptr); req->ptr = NULL; return 0; } static int uv__fs_statfs(uv_fs_t* req) { uv_statfs_t* stat_fs; #if defined(__sun) || \ defined(__MVS__) || \ defined(__NetBSD__) || \ defined(__HAIKU__) || \ defined(__QNX__) struct statvfs buf; if (0 != statvfs(req->path, &buf)) #else struct statfs buf; if (0 != statfs(req->path, &buf)) #endif /* defined(__sun) */ return -1; stat_fs = uv__malloc(sizeof(*stat_fs)); if (stat_fs == NULL) { errno = ENOMEM; return -1; } #if defined(__sun) || \ defined(__MVS__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) || \ defined(__HAIKU__) || \ defined(__QNX__) stat_fs->f_type = 0; /* f_type is not supported. 
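 * These platforms expose no numeric filesystem type in their statfs/statvfs
 * structures, so zero is reported here.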
*/ #else stat_fs->f_type = buf.f_type; #endif stat_fs->f_bsize = buf.f_bsize; stat_fs->f_blocks = buf.f_blocks; stat_fs->f_bfree = buf.f_bfree; stat_fs->f_bavail = buf.f_bavail; stat_fs->f_files = buf.f_files; stat_fs->f_ffree = buf.f_ffree; req->ptr = stat_fs; return 0; } static ssize_t uv__fs_pathmax_size(const char* path) { ssize_t pathmax; pathmax = pathconf(path, _PC_PATH_MAX); if (pathmax == -1) pathmax = UV__PATH_MAX; return pathmax; } static ssize_t uv__fs_readlink(uv_fs_t* req) { ssize_t maxlen; ssize_t len; char* buf; #if defined(_POSIX_PATH_MAX) || defined(PATH_MAX) maxlen = uv__fs_pathmax_size(req->path); #else /* We may not have a real PATH_MAX. Read size of link. */ struct stat st; int ret; ret = lstat(req->path, &st); if (ret != 0) return -1; if (!S_ISLNK(st.st_mode)) { errno = EINVAL; return -1; } maxlen = st.st_size; /* According to readlink(2) lstat can report st_size == 0 for some symlinks, such as those in /proc or /sys. */ if (maxlen == 0) maxlen = uv__fs_pathmax_size(req->path); #endif buf = uv__malloc(maxlen); if (buf == NULL) { errno = ENOMEM; return -1; } #if defined(__MVS__) len = os390_readlink(req->path, buf, maxlen); #else len = readlink(req->path, buf, maxlen); #endif if (len == -1) { uv__free(buf); return -1; } /* Uncommon case: resize to make room for the trailing nul byte. */ if (len == maxlen) { buf = uv__reallocf(buf, len + 1); if (buf == NULL) return -1; } buf[len] = '\0'; req->ptr = buf; return 0; } static ssize_t uv__fs_realpath(uv_fs_t* req) { char* buf; #if defined(_POSIX_VERSION) && _POSIX_VERSION >= 200809L buf = realpath(req->path, NULL); if (buf == NULL) return -1; #else ssize_t len; len = uv__fs_pathmax_size(req->path); buf = uv__malloc(len + 1); if (buf == NULL) { errno = ENOMEM; return -1; } if (realpath(req->path, buf) == NULL) { uv__free(buf); return -1; } #endif req->ptr = buf; return 0; } static ssize_t uv__fs_sendfile_emul(uv_fs_t* req) { struct pollfd pfd; int use_pread; off_t offset; ssize_t nsent; ssize_t nread; ssize_t nwritten; size_t buflen; size_t len; ssize_t n; int in_fd; int out_fd; char buf[8192]; len = req->bufsml[0].len; in_fd = req->flags; out_fd = req->file; offset = req->off; use_pread = 1; /* Here are the rules regarding errors: * * 1. Read errors are reported only if nsent==0, otherwise we return nsent. * The user needs to know that some data has already been sent, to stop * them from sending it twice. * * 2. Write errors are always reported. Write errors are bad because they * mean data loss: we've read data but now we can't write it out. * * We try to use pread() and fall back to regular read() if the source fd * doesn't support positional reads, for example when it's a pipe fd. * * If we get EAGAIN when writing to the target fd, we poll() on it until * it becomes writable again. * * FIXME: If we get a write error when use_pread==1, it should be safe to * return the number of sent bytes instead of an error because pread() * is, in theory, idempotent. However, special files in /dev or /proc * may support pread() but not necessarily return the same data on * successive reads. * * FIXME: There is no way now to signal that we managed to send *some* data * before a write error. 
*/ for (nsent = 0; (size_t) nsent < len; ) { buflen = len - nsent; if (buflen > sizeof(buf)) buflen = sizeof(buf); do if (use_pread) nread = pread(in_fd, buf, buflen, offset); else nread = read(in_fd, buf, buflen); while (nread == -1 && errno == EINTR); if (nread == 0) goto out; if (nread == -1) { if (use_pread && nsent == 0 && (errno == EIO || errno == ESPIPE)) { use_pread = 0; continue; } if (nsent == 0) nsent = -1; goto out; } for (nwritten = 0; nwritten < nread; ) { do n = write(out_fd, buf + nwritten, nread - nwritten); while (n == -1 && errno == EINTR); if (n != -1) { nwritten += n; continue; } if (errno != EAGAIN && errno != EWOULDBLOCK) { nsent = -1; goto out; } pfd.fd = out_fd; pfd.events = POLLOUT; pfd.revents = 0; do n = poll(&pfd, 1, -1); while (n == -1 && errno == EINTR); if (n == -1 || (pfd.revents & ~POLLOUT) != 0) { errno = EIO; nsent = -1; goto out; } } offset += nread; nsent += nread; } out: if (nsent != -1) req->off = offset; return nsent; } #ifdef __linux__ static unsigned uv__kernel_version(void) { static unsigned cached_version; struct utsname u; unsigned version; unsigned major; unsigned minor; unsigned patch; version = uv__load_relaxed(&cached_version); if (version != 0) return version; if (-1 == uname(&u)) return 0; if (3 != sscanf(u.release, "%u.%u.%u", &major, &minor, &patch)) return 0; version = major * 65536 + minor * 256 + patch; uv__store_relaxed(&cached_version, version); return version; } /* Pre-4.20 kernels have a bug where CephFS uses the RADOS copy-from command * in copy_file_range() when it shouldn't. There is no workaround except to * fall back to a regular copy. */ static int uv__is_buggy_cephfs(int fd) { struct statfs s; if (-1 == fstatfs(fd, &s)) return 0; if (s.f_type != /* CephFS */ 0xC36400) return 0; return uv__kernel_version() < /* 4.20.0 */ 0x041400; } static int uv__is_cifs_or_smb(int fd) { struct statfs s; if (-1 == fstatfs(fd, &s)) return 0; switch ((unsigned) s.f_type) { case 0x0000517Bu: /* SMB */ case 0xFE534D42u: /* SMB2 */ case 0xFF534D42u: /* CIFS */ return 1; } return 0; } static ssize_t uv__fs_try_copy_file_range(int in_fd, off_t* off, int out_fd, size_t len) { static int no_copy_file_range_support; ssize_t r; if (uv__load_relaxed(&no_copy_file_range_support)) { errno = ENOSYS; return -1; } r = uv__fs_copy_file_range(in_fd, off, out_fd, NULL, len, 0); if (r != -1) return r; switch (errno) { case EACCES: /* Pre-4.20 kernels have a bug where CephFS uses the RADOS * copy-from command when it shouldn't. */ if (uv__is_buggy_cephfs(in_fd)) errno = ENOSYS; /* Use fallback. */ break; case ENOSYS: uv__store_relaxed(&no_copy_file_range_support, 1); break; case EPERM: /* It's been reported that CIFS spuriously fails. * Consider it a transient error. */ if (uv__is_cifs_or_smb(out_fd)) errno = ENOSYS; /* Use fallback. */ break; case ENOTSUP: case EXDEV: /* ENOTSUP - it could work on another file system type. * EXDEV - it will not work when in_fd and out_fd are not on the same * mounted filesystem (pre Linux 5.3) */ errno = ENOSYS; /* Use fallback. 
*/ break; } return -1; } #endif /* __linux__ */ static ssize_t uv__fs_sendfile(uv_fs_t* req) { int in_fd; int out_fd; in_fd = req->flags; out_fd = req->file; #if defined(__linux__) || defined(__sun) { off_t off; ssize_t r; size_t len; int try_sendfile; off = req->off; len = req->bufsml[0].len; try_sendfile = 1; #ifdef __linux__ r = uv__fs_try_copy_file_range(in_fd, &off, out_fd, len); try_sendfile = (r == -1 && errno == ENOSYS); #endif if (try_sendfile) r = sendfile(out_fd, in_fd, &off, len); /* sendfile() on SunOS returns EINVAL if the target fd is not a socket but * it still writes out data. Fortunately, we can detect it by checking if * the offset has been updated. */ if (r != -1 || off > req->off) { r = off - req->off; req->off = off; return r; } if (errno == EINVAL || errno == EIO || errno == ENOTSOCK || errno == EXDEV) { errno = 0; return uv__fs_sendfile_emul(req); } return -1; } #elif defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) { off_t len; ssize_t r; /* sendfile() on FreeBSD and Darwin returns EAGAIN if the target fd is in * non-blocking mode and not all data could be written. If a non-zero * number of bytes have been sent, we don't consider it an error. */ #if defined(__FreeBSD__) || defined(__DragonFly__) #if defined(__FreeBSD__) off_t off; off = req->off; r = uv__fs_copy_file_range(in_fd, &off, out_fd, NULL, req->bufsml[0].len, 0); if (r >= 0) { r = off - req->off; req->off = off; return r; } #endif len = 0; r = sendfile(in_fd, out_fd, req->off, req->bufsml[0].len, NULL, &len, 0); #elif defined(__FreeBSD_kernel__) len = 0; r = bsd_sendfile(in_fd, out_fd, req->off, req->bufsml[0].len, NULL, &len, 0); #else /* The darwin sendfile takes len as an input for the length to send, * so make sure to initialize it with the caller's value. */ len = req->bufsml[0].len; r = sendfile(in_fd, out_fd, req->off, &len, NULL, 0); #endif /* * The man page for sendfile(2) on DragonFly states that `len` contains * a meaningful value ONLY in case of EAGAIN and EINTR. * Nothing is said about it's value in case of other errors, so better * not depend on the potential wrong assumption that is was not modified * by the syscall. */ if (r == 0 || ((errno == EAGAIN || errno == EINTR) && len != 0)) { req->off += len; return (ssize_t) len; } if (errno == EINVAL || errno == EIO || errno == ENOTSOCK || errno == EXDEV) { errno = 0; return uv__fs_sendfile_emul(req); } return -1; } #else /* Squelch compiler warnings. 
*/ (void) &in_fd; (void) &out_fd; return uv__fs_sendfile_emul(req); #endif } static ssize_t uv__fs_utime(uv_fs_t* req) { #if defined(__linux__) \ || defined(_AIX71) \ || defined(__sun) \ || defined(__HAIKU__) struct timespec ts[2]; ts[0] = uv__fs_to_timespec(req->atime); ts[1] = uv__fs_to_timespec(req->mtime); return utimensat(AT_FDCWD, req->path, ts, 0); #elif defined(__APPLE__) \ || defined(__DragonFly__) \ || defined(__FreeBSD__) \ || defined(__FreeBSD_kernel__) \ || defined(__NetBSD__) \ || defined(__OpenBSD__) struct timeval tv[2]; tv[0] = uv__fs_to_timeval(req->atime); tv[1] = uv__fs_to_timeval(req->mtime); return utimes(req->path, tv); #elif defined(_AIX) \ && !defined(_AIX71) struct utimbuf buf; buf.actime = req->atime; buf.modtime = req->mtime; return utime(req->path, &buf); #elif defined(__MVS__) attrib_t atr; memset(&atr, 0, sizeof(atr)); atr.att_mtimechg = 1; atr.att_atimechg = 1; atr.att_mtime = req->mtime; atr.att_atime = req->atime; return __lchattr((char*) req->path, &atr, sizeof(atr)); #else errno = ENOSYS; return -1; #endif } static ssize_t uv__fs_lutime(uv_fs_t* req) { #if defined(__linux__) || \ defined(_AIX71) || \ defined(__sun) || \ defined(__HAIKU__) || \ defined(__GNU__) || \ defined(__OpenBSD__) struct timespec ts[2]; ts[0] = uv__fs_to_timespec(req->atime); ts[1] = uv__fs_to_timespec(req->mtime); return utimensat(AT_FDCWD, req->path, ts, AT_SYMLINK_NOFOLLOW); #elif defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__NetBSD__) struct timeval tv[2]; tv[0] = uv__fs_to_timeval(req->atime); tv[1] = uv__fs_to_timeval(req->mtime); return lutimes(req->path, tv); #else errno = ENOSYS; return -1; #endif } static ssize_t uv__fs_write(uv_fs_t* req) { #if defined(__linux__) static int no_pwritev; #endif ssize_t r; /* Serialize writes on OS X, concurrent write() and pwrite() calls result in * data loss. We can't use a per-file descriptor lock, the descriptor may be * a dup(). */ #if defined(__APPLE__) static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; if (pthread_mutex_lock(&lock)) abort(); #endif if (req->off < 0) { if (req->nbufs == 1) r = write(req->file, req->bufs[0].base, req->bufs[0].len); else r = writev(req->file, (struct iovec*) req->bufs, req->nbufs); } else { if (req->nbufs == 1) { r = pwrite(req->file, req->bufs[0].base, req->bufs[0].len, req->off); goto done; } #if HAVE_PREADV r = pwritev(req->file, (struct iovec*) req->bufs, req->nbufs, req->off); #else # if defined(__linux__) if (no_pwritev) retry: # endif { r = pwrite(req->file, req->bufs[0].base, req->bufs[0].len, req->off); } # if defined(__linux__) else { r = uv__pwritev(req->file, (struct iovec*) req->bufs, req->nbufs, req->off); if (r == -1 && errno == ENOSYS) { no_pwritev = 1; goto retry; } } # endif #endif } done: #if defined(__APPLE__) if (pthread_mutex_unlock(&lock)) abort(); #endif return r; } static ssize_t uv__fs_copyfile(uv_fs_t* req) { uv_fs_t fs_req; uv_file srcfd; uv_file dstfd; struct stat src_statsbuf; struct stat dst_statsbuf; int dst_flags; int result; int err; off_t bytes_to_send; off_t in_offset; off_t bytes_written; size_t bytes_chunk; dstfd = -1; err = 0; /* Open the source file. */ srcfd = uv_fs_open(NULL, &fs_req, req->path, O_RDONLY, 0, NULL); uv_fs_req_cleanup(&fs_req); if (srcfd < 0) return srcfd; /* Get the source file's mode. 
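 * The stat results are reused below: st_mode when creating and fchmod()ing
 * the destination, st_dev/st_ino for the same-file check, and st_size to
 * drive the sendfile() copy loop.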
*/ if (fstat(srcfd, &src_statsbuf)) { err = UV__ERR(errno); goto out; } dst_flags = O_WRONLY | O_CREAT; if (req->flags & UV_FS_COPYFILE_EXCL) dst_flags |= O_EXCL; /* Open the destination file. */ dstfd = uv_fs_open(NULL, &fs_req, req->new_path, dst_flags, src_statsbuf.st_mode, NULL); uv_fs_req_cleanup(&fs_req); if (dstfd < 0) { err = dstfd; goto out; } /* If the file is not being opened exclusively, verify that the source and destination are not the same file. If they are the same, bail out early. */ if ((req->flags & UV_FS_COPYFILE_EXCL) == 0) { /* Get the destination file's mode. */ if (fstat(dstfd, &dst_statsbuf)) { err = UV__ERR(errno); goto out; } /* Check if srcfd and dstfd refer to the same file */ if (src_statsbuf.st_dev == dst_statsbuf.st_dev && src_statsbuf.st_ino == dst_statsbuf.st_ino) { goto out; } /* Truncate the file in case the destination already existed. */ if (ftruncate(dstfd, 0) != 0) { err = UV__ERR(errno); goto out; } } if (fchmod(dstfd, src_statsbuf.st_mode) == -1) { err = UV__ERR(errno); #ifdef __linux__ /* fchmod() on CIFS shares always fails with EPERM unless the share is * mounted with "noperm". As fchmod() is a meaningless operation on such * shares anyway, detect that condition and squelch the error. */ if (err != UV_EPERM) goto out; if (!uv__is_cifs_or_smb(dstfd)) goto out; err = 0; #else /* !__linux__ */ goto out; #endif /* !__linux__ */ } #ifdef FICLONE if (req->flags & UV_FS_COPYFILE_FICLONE || req->flags & UV_FS_COPYFILE_FICLONE_FORCE) { if (ioctl(dstfd, FICLONE, srcfd) == 0) { /* ioctl() with FICLONE succeeded. */ goto out; } /* If an error occurred and force was set, return the error to the caller; * fall back to sendfile() when force was not set. */ if (req->flags & UV_FS_COPYFILE_FICLONE_FORCE) { err = UV__ERR(errno); goto out; } } #else if (req->flags & UV_FS_COPYFILE_FICLONE_FORCE) { err = UV_ENOSYS; goto out; } #endif bytes_to_send = src_statsbuf.st_size; in_offset = 0; while (bytes_to_send != 0) { bytes_chunk = SSIZE_MAX; if (bytes_to_send < (off_t) bytes_chunk) bytes_chunk = bytes_to_send; uv_fs_sendfile(NULL, &fs_req, dstfd, srcfd, in_offset, bytes_chunk, NULL); bytes_written = fs_req.result; uv_fs_req_cleanup(&fs_req); if (bytes_written < 0) { err = bytes_written; break; } bytes_to_send -= bytes_written; in_offset += bytes_written; } out: if (err < 0) result = err; else result = 0; /* Close the source file. */ err = uv__close_nocheckstdio(srcfd); /* Don't overwrite any existing errors. */ if (err != 0 && result == 0) result = err; /* Close the destination file if it is open. */ if (dstfd >= 0) { err = uv__close_nocheckstdio(dstfd); /* Don't overwrite any existing errors. */ if (err != 0 && result == 0) result = err; /* Remove the destination file if something went wrong. */ if (result != 0) { uv_fs_unlink(NULL, &fs_req, req->new_path, NULL); /* Ignore the unlink return value, as an error already happened. 
*/ uv_fs_req_cleanup(&fs_req); } } if (result == 0) return 0; errno = UV__ERR(result); return -1; } static void uv__to_stat(struct stat* src, uv_stat_t* dst) { dst->st_dev = src->st_dev; dst->st_mode = src->st_mode; dst->st_nlink = src->st_nlink; dst->st_uid = src->st_uid; dst->st_gid = src->st_gid; dst->st_rdev = src->st_rdev; dst->st_ino = src->st_ino; dst->st_size = src->st_size; dst->st_blksize = src->st_blksize; dst->st_blocks = src->st_blocks; #if defined(__APPLE__) dst->st_atim.tv_sec = src->st_atimespec.tv_sec; dst->st_atim.tv_nsec = src->st_atimespec.tv_nsec; dst->st_mtim.tv_sec = src->st_mtimespec.tv_sec; dst->st_mtim.tv_nsec = src->st_mtimespec.tv_nsec; dst->st_ctim.tv_sec = src->st_ctimespec.tv_sec; dst->st_ctim.tv_nsec = src->st_ctimespec.tv_nsec; dst->st_birthtim.tv_sec = src->st_birthtimespec.tv_sec; dst->st_birthtim.tv_nsec = src->st_birthtimespec.tv_nsec; dst->st_flags = src->st_flags; dst->st_gen = src->st_gen; #elif defined(__ANDROID__) dst->st_atim.tv_sec = src->st_atime; dst->st_atim.tv_nsec = src->st_atimensec; dst->st_mtim.tv_sec = src->st_mtime; dst->st_mtim.tv_nsec = src->st_mtimensec; dst->st_ctim.tv_sec = src->st_ctime; dst->st_ctim.tv_nsec = src->st_ctimensec; dst->st_birthtim.tv_sec = src->st_ctime; dst->st_birthtim.tv_nsec = src->st_ctimensec; dst->st_flags = 0; dst->st_gen = 0; #elif !defined(_AIX) && \ !defined(__MVS__) && ( \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) || \ defined(_GNU_SOURCE) || \ defined(_BSD_SOURCE) || \ defined(_SVID_SOURCE) || \ defined(_XOPEN_SOURCE) || \ defined(_DEFAULT_SOURCE)) dst->st_atim.tv_sec = src->st_atim.tv_sec; dst->st_atim.tv_nsec = src->st_atim.tv_nsec; dst->st_mtim.tv_sec = src->st_mtim.tv_sec; dst->st_mtim.tv_nsec = src->st_mtim.tv_nsec; dst->st_ctim.tv_sec = src->st_ctim.tv_sec; dst->st_ctim.tv_nsec = src->st_ctim.tv_nsec; # if defined(__FreeBSD__) || \ defined(__NetBSD__) dst->st_birthtim.tv_sec = src->st_birthtim.tv_sec; dst->st_birthtim.tv_nsec = src->st_birthtim.tv_nsec; dst->st_flags = src->st_flags; dst->st_gen = src->st_gen; # else dst->st_birthtim.tv_sec = src->st_ctim.tv_sec; dst->st_birthtim.tv_nsec = src->st_ctim.tv_nsec; dst->st_flags = 0; dst->st_gen = 0; # endif #else dst->st_atim.tv_sec = src->st_atime; dst->st_atim.tv_nsec = 0; dst->st_mtim.tv_sec = src->st_mtime; dst->st_mtim.tv_nsec = 0; dst->st_ctim.tv_sec = src->st_ctime; dst->st_ctim.tv_nsec = 0; dst->st_birthtim.tv_sec = src->st_ctime; dst->st_birthtim.tv_nsec = 0; dst->st_flags = 0; dst->st_gen = 0; #endif } static int uv__fs_statx(int fd, const char* path, int is_fstat, int is_lstat, uv_stat_t* buf) { STATIC_ASSERT(UV_ENOSYS != -1); #ifdef __linux__ static int no_statx; struct uv__statx statxbuf; int dirfd; int flags; int mode; int rc; if (uv__load_relaxed(&no_statx)) return UV_ENOSYS; dirfd = AT_FDCWD; flags = 0; /* AT_STATX_SYNC_AS_STAT */ mode = 0xFFF; /* STATX_BASIC_STATS + STATX_BTIME */ if (is_fstat) { dirfd = fd; flags |= 0x1000; /* AT_EMPTY_PATH */ } if (is_lstat) flags |= AT_SYMLINK_NOFOLLOW; rc = uv__statx(dirfd, path, flags, mode, &statxbuf); switch (rc) { case 0: break; case -1: /* EPERM happens when a seccomp filter rejects the system call. * Has been observed with libseccomp < 2.3.3 and docker < 18.04. * EOPNOTSUPP is used on DVS exported filesystems */ if (errno != EINVAL && errno != EPERM && errno != ENOSYS && errno != EOPNOTSUPP) return -1; /* Fall through. */ default: /* Normally on success, zero is returned and On error, -1 is returned. 
* Observed on S390 RHEL running in a docker container with statx not * implemented, rc might return 1 with 0 set as the error code in which * case we return ENOSYS. */ uv__store_relaxed(&no_statx, 1); return UV_ENOSYS; } buf->st_dev = makedev(statxbuf.stx_dev_major, statxbuf.stx_dev_minor); buf->st_mode = statxbuf.stx_mode; buf->st_nlink = statxbuf.stx_nlink; buf->st_uid = statxbuf.stx_uid; buf->st_gid = statxbuf.stx_gid; buf->st_rdev = makedev(statxbuf.stx_rdev_major, statxbuf.stx_rdev_minor); buf->st_ino = statxbuf.stx_ino; buf->st_size = statxbuf.stx_size; buf->st_blksize = statxbuf.stx_blksize; buf->st_blocks = statxbuf.stx_blocks; buf->st_atim.tv_sec = statxbuf.stx_atime.tv_sec; buf->st_atim.tv_nsec = statxbuf.stx_atime.tv_nsec; buf->st_mtim.tv_sec = statxbuf.stx_mtime.tv_sec; buf->st_mtim.tv_nsec = statxbuf.stx_mtime.tv_nsec; buf->st_ctim.tv_sec = statxbuf.stx_ctime.tv_sec; buf->st_ctim.tv_nsec = statxbuf.stx_ctime.tv_nsec; buf->st_birthtim.tv_sec = statxbuf.stx_btime.tv_sec; buf->st_birthtim.tv_nsec = statxbuf.stx_btime.tv_nsec; buf->st_flags = 0; buf->st_gen = 0; return 0; #else return UV_ENOSYS; #endif /* __linux__ */ } static int uv__fs_stat(const char *path, uv_stat_t *buf) { struct stat pbuf; int ret; ret = uv__fs_statx(-1, path, /* is_fstat */ 0, /* is_lstat */ 0, buf); if (ret != UV_ENOSYS) return ret; ret = stat(path, &pbuf); if (ret == 0) uv__to_stat(&pbuf, buf); return ret; } static int uv__fs_lstat(const char *path, uv_stat_t *buf) { struct stat pbuf; int ret; ret = uv__fs_statx(-1, path, /* is_fstat */ 0, /* is_lstat */ 1, buf); if (ret != UV_ENOSYS) return ret; ret = lstat(path, &pbuf); if (ret == 0) uv__to_stat(&pbuf, buf); return ret; } static int uv__fs_fstat(int fd, uv_stat_t *buf) { struct stat pbuf; int ret; ret = uv__fs_statx(fd, "", /* is_fstat */ 1, /* is_lstat */ 0, buf); if (ret != UV_ENOSYS) return ret; ret = fstat(fd, &pbuf); if (ret == 0) uv__to_stat(&pbuf, buf); return ret; } static size_t uv__fs_buf_offset(uv_buf_t* bufs, size_t size) { size_t offset; /* Figure out which bufs are done */ for (offset = 0; size > 0 && bufs[offset].len <= size; ++offset) size -= bufs[offset].len; /* Fix a partial read/write */ if (size > 0) { bufs[offset].base += size; bufs[offset].len -= size; } return offset; } static ssize_t uv__fs_write_all(uv_fs_t* req) { unsigned int iovmax; unsigned int nbufs; uv_buf_t* bufs; ssize_t total; ssize_t result; iovmax = uv__getiovmax(); nbufs = req->nbufs; bufs = req->bufs; total = 0; while (nbufs > 0) { req->nbufs = nbufs; if (req->nbufs > iovmax) req->nbufs = iovmax; do result = uv__fs_write(req); while (result < 0 && errno == EINTR); if (result <= 0) { if (total == 0) total = result; break; } if (req->off >= 0) req->off += result; req->nbufs = uv__fs_buf_offset(req->bufs, result); req->bufs += req->nbufs; nbufs -= req->nbufs; total += result; } if (bufs != req->bufsml) uv__free(bufs); req->bufs = NULL; req->nbufs = 0; return total; } static void uv__fs_work(struct uv__work* w) { int retry_on_eintr; uv_fs_t* req; ssize_t r; req = container_of(w, uv_fs_t, work_req); retry_on_eintr = !(req->fs_type == UV_FS_CLOSE || req->fs_type == UV_FS_READ); do { errno = 0; #define X(type, action) \ case UV_FS_ ## type: \ r = action; \ break; switch (req->fs_type) { X(ACCESS, access(req->path, req->flags)); X(CHMOD, chmod(req->path, req->mode)); X(CHOWN, chown(req->path, req->uid, req->gid)); X(CLOSE, uv__fs_close(req->file)); X(COPYFILE, uv__fs_copyfile(req)); X(FCHMOD, fchmod(req->file, req->mode)); X(FCHOWN, fchown(req->file, req->uid, req->gid)); 
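/* Each X(type, action) entry expands, via the X macro defined just above this
 * switch, to "case UV_FS_<type>: r = action; break;", so the blocking call for
 * the request type runs right here: on the threadpool for asynchronous
 * requests (cb != NULL) or inline for synchronous ones, with the result
 * (or -1 plus errno in case of error) captured in r. */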
X(LCHOWN, lchown(req->path, req->uid, req->gid)); X(FDATASYNC, uv__fs_fdatasync(req)); X(FSTAT, uv__fs_fstat(req->file, &req->statbuf)); X(FSYNC, uv__fs_fsync(req)); X(FTRUNCATE, ftruncate(req->file, req->off)); X(FUTIME, uv__fs_futime(req)); X(LUTIME, uv__fs_lutime(req)); X(LSTAT, uv__fs_lstat(req->path, &req->statbuf)); X(LINK, link(req->path, req->new_path)); X(MKDIR, mkdir(req->path, req->mode)); X(MKDTEMP, uv__fs_mkdtemp(req)); X(MKSTEMP, uv__fs_mkstemp(req)); X(OPEN, uv__fs_open(req)); X(READ, uv__fs_read(req)); X(SCANDIR, uv__fs_scandir(req)); X(OPENDIR, uv__fs_opendir(req)); X(READDIR, uv__fs_readdir(req)); X(CLOSEDIR, uv__fs_closedir(req)); X(READLINK, uv__fs_readlink(req)); X(REALPATH, uv__fs_realpath(req)); X(RENAME, rename(req->path, req->new_path)); X(RMDIR, rmdir(req->path)); X(SENDFILE, uv__fs_sendfile(req)); X(STAT, uv__fs_stat(req->path, &req->statbuf)); X(STATFS, uv__fs_statfs(req)); X(SYMLINK, symlink(req->path, req->new_path)); X(UNLINK, unlink(req->path)); X(UTIME, uv__fs_utime(req)); X(WRITE, uv__fs_write_all(req)); default: abort(); } #undef X } while (r == -1 && errno == EINTR && retry_on_eintr); if (r == -1) req->result = UV__ERR(errno); else req->result = r; if (r == 0 && (req->fs_type == UV_FS_STAT || req->fs_type == UV_FS_FSTAT || req->fs_type == UV_FS_LSTAT)) { req->ptr = &req->statbuf; } } static void uv__fs_done(struct uv__work* w, int status) { uv_fs_t* req; req = container_of(w, uv_fs_t, work_req); uv__req_unregister(req->loop, req); if (status == UV_ECANCELED) { assert(req->result == 0); req->result = UV_ECANCELED; } req->cb(req); } int uv_fs_access(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { INIT(ACCESS); PATH; req->flags = flags; POST; } int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) { INIT(CHMOD); PATH; req->mode = mode; POST; } int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { INIT(CHOWN); PATH; req->uid = uid; req->gid = gid; POST; } int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) { INIT(CLOSE); req->file = file; POST; } int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, uv_file file, int mode, uv_fs_cb cb) { INIT(FCHMOD); req->file = file; req->mode = mode; POST; } int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { INIT(FCHOWN); req->file = file; req->uid = uid; req->gid = gid; POST; } int uv_fs_lchown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { INIT(LCHOWN); PATH; req->uid = uid; req->gid = gid; POST; } int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) { INIT(FDATASYNC); req->file = file; POST; } int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) { INIT(FSTAT); req->file = file; POST; } int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb) { INIT(FSYNC); req->file = file; POST; } int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file file, int64_t off, uv_fs_cb cb) { INIT(FTRUNCATE); req->file = file; req->off = off; POST; } int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file file, double atime, double mtime, uv_fs_cb cb) { INIT(FUTIME); req->file = file; req->atime = atime; req->mtime = mtime; POST; } int uv_fs_lutime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb) { INIT(LUTIME); PATH; req->atime = atime; req->mtime = mtime; POST; } int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, 
const char* path, uv_fs_cb cb) { INIT(LSTAT); PATH; POST; } int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) { INIT(LINK); PATH2; POST; } int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) { INIT(MKDIR); PATH; req->mode = mode; POST; } int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb) { INIT(MKDTEMP); req->path = uv__strdup(tpl); if (req->path == NULL) return UV_ENOMEM; POST; } int uv_fs_mkstemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb) { INIT(MKSTEMP); req->path = uv__strdup(tpl); if (req->path == NULL) return UV_ENOMEM; POST; } int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb) { INIT(OPEN); PATH; req->flags = flags; req->mode = mode; POST; } int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t off, uv_fs_cb cb) { INIT(READ); if (bufs == NULL || nbufs == 0) return UV_EINVAL; req->file = file; req->nbufs = nbufs; req->bufs = req->bufsml; if (nbufs > ARRAY_SIZE(req->bufsml)) req->bufs = uv__malloc(nbufs * sizeof(*bufs)); if (req->bufs == NULL) return UV_ENOMEM; memcpy(req->bufs, bufs, nbufs * sizeof(*bufs)); req->off = off; POST; } int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { INIT(SCANDIR); PATH; req->flags = flags; POST; } int uv_fs_opendir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(OPENDIR); PATH; POST; } int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb) { INIT(READDIR); if (dir == NULL || dir->dir == NULL || dir->dirents == NULL) return UV_EINVAL; req->ptr = dir; POST; } int uv_fs_closedir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb) { INIT(CLOSEDIR); if (dir == NULL) return UV_EINVAL; req->ptr = dir; POST; } int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(READLINK); PATH; POST; } int uv_fs_realpath(uv_loop_t* loop, uv_fs_t* req, const char * path, uv_fs_cb cb) { INIT(REALPATH); PATH; POST; } int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) { INIT(RENAME); PATH2; POST; } int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(RMDIR); PATH; POST; } int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file out_fd, uv_file in_fd, int64_t off, size_t len, uv_fs_cb cb) { INIT(SENDFILE); req->flags = in_fd; /* hack */ req->file = out_fd; req->off = off; req->bufsml[0].len = len; POST; } int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(STAT); PATH; POST; } int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb) { INIT(SYMLINK); PATH2; req->flags = flags; POST; } int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(UNLINK); PATH; POST; } int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb) { INIT(UTIME); PATH; req->atime = atime; req->mtime = mtime; POST; } int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file file, const uv_buf_t bufs[], unsigned int nbufs, int64_t off, uv_fs_cb cb) { INIT(WRITE); if (bufs == NULL || nbufs == 0) return UV_EINVAL; req->file = file; req->nbufs = nbufs; req->bufs = req->bufsml; if (nbufs > ARRAY_SIZE(req->bufsml)) req->bufs = uv__malloc(nbufs * sizeof(*bufs)); if (req->bufs == NULL) return UV_ENOMEM; memcpy(req->bufs, 
bufs, nbufs * sizeof(*bufs)); req->off = off; POST; } void uv_fs_req_cleanup(uv_fs_t* req) { if (req == NULL) return; /* Only necessary for asychronous requests, i.e., requests with a callback. * Synchronous ones don't copy their arguments and have req->path and * req->new_path pointing to user-owned memory. UV_FS_MKDTEMP and * UV_FS_MKSTEMP are the exception to the rule, they always allocate memory. */ if (req->path != NULL && (req->cb != NULL || req->fs_type == UV_FS_MKDTEMP || req->fs_type == UV_FS_MKSTEMP)) uv__free((void*) req->path); /* Memory is shared with req->new_path. */ req->path = NULL; req->new_path = NULL; if (req->fs_type == UV_FS_READDIR && req->ptr != NULL) uv__fs_readdir_cleanup(req); if (req->fs_type == UV_FS_SCANDIR && req->ptr != NULL) uv__fs_scandir_cleanup(req); if (req->bufs != req->bufsml) uv__free(req->bufs); req->bufs = NULL; if (req->fs_type != UV_FS_OPENDIR && req->ptr != &req->statbuf) uv__free(req->ptr); req->ptr = NULL; } int uv_fs_copyfile(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb) { INIT(COPYFILE); if (flags & ~(UV_FS_COPYFILE_EXCL | UV_FS_COPYFILE_FICLONE | UV_FS_COPYFILE_FICLONE_FORCE)) { return UV_EINVAL; } PATH2; req->flags = flags; POST; } int uv_fs_statfs(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { INIT(STATFS); PATH; POST; } int uv_fs_get_system_error(const uv_fs_t* req) { return -req->result; } gevent-24.11.1/deps/libuv/src/unix/fsevents.c000066400000000000000000000672441471441230600210160ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #if TARGET_OS_IPHONE || MAC_OS_X_VERSION_MAX_ALLOWED < 1070 /* iOS (currently) doesn't provide the FSEvents-API (nor CoreServices) */ /* macOS prior to 10.7 doesn't provide the full FSEvents API so use kqueue */ int uv__fsevents_init(uv_fs_event_t* handle) { return 0; } int uv__fsevents_close(uv_fs_event_t* handle) { return 0; } void uv__fsevents_loop_delete(uv_loop_t* loop) { } #else /* TARGET_OS_IPHONE */ #include "darwin-stub.h" #include #include #include #include static const int kFSEventsModified = kFSEventStreamEventFlagItemChangeOwner | kFSEventStreamEventFlagItemFinderInfoMod | kFSEventStreamEventFlagItemInodeMetaMod | kFSEventStreamEventFlagItemModified | kFSEventStreamEventFlagItemXattrMod; static const int kFSEventsRenamed = kFSEventStreamEventFlagItemCreated | kFSEventStreamEventFlagItemRemoved | kFSEventStreamEventFlagItemRenamed; static const int kFSEventsSystem = kFSEventStreamEventFlagUserDropped | kFSEventStreamEventFlagKernelDropped | kFSEventStreamEventFlagEventIdsWrapped | kFSEventStreamEventFlagHistoryDone | kFSEventStreamEventFlagMount | kFSEventStreamEventFlagUnmount | kFSEventStreamEventFlagRootChanged; typedef struct uv__fsevents_event_s uv__fsevents_event_t; typedef struct uv__cf_loop_signal_s uv__cf_loop_signal_t; typedef struct uv__cf_loop_state_s uv__cf_loop_state_t; enum uv__cf_loop_signal_type_e { kUVCFLoopSignalRegular, kUVCFLoopSignalClosing }; typedef enum uv__cf_loop_signal_type_e uv__cf_loop_signal_type_t; struct uv__cf_loop_signal_s { QUEUE member; uv_fs_event_t* handle; uv__cf_loop_signal_type_t type; }; struct uv__fsevents_event_s { QUEUE member; int events; char path[1]; }; struct uv__cf_loop_state_s { CFRunLoopRef loop; CFRunLoopSourceRef signal_source; int fsevent_need_reschedule; FSEventStreamRef fsevent_stream; uv_sem_t fsevent_sem; uv_mutex_t fsevent_mutex; void* fsevent_handles[2]; unsigned int fsevent_handle_count; }; /* Forward declarations */ static void uv__cf_loop_cb(void* arg); static void* uv__cf_loop_runner(void* arg); static int uv__cf_loop_signal(uv_loop_t* loop, uv_fs_event_t* handle, uv__cf_loop_signal_type_t type); /* Lazy-loaded by uv__fsevents_global_init(). 
*/ static CFArrayRef (*pCFArrayCreate)(CFAllocatorRef, const void**, CFIndex, const CFArrayCallBacks*); static void (*pCFRelease)(CFTypeRef); static void (*pCFRunLoopAddSource)(CFRunLoopRef, CFRunLoopSourceRef, CFStringRef); static CFRunLoopRef (*pCFRunLoopGetCurrent)(void); static void (*pCFRunLoopRemoveSource)(CFRunLoopRef, CFRunLoopSourceRef, CFStringRef); static void (*pCFRunLoopRun)(void); static CFRunLoopSourceRef (*pCFRunLoopSourceCreate)(CFAllocatorRef, CFIndex, CFRunLoopSourceContext*); static void (*pCFRunLoopSourceSignal)(CFRunLoopSourceRef); static void (*pCFRunLoopStop)(CFRunLoopRef); static void (*pCFRunLoopWakeUp)(CFRunLoopRef); static CFStringRef (*pCFStringCreateWithFileSystemRepresentation)( CFAllocatorRef, const char*); static CFStringEncoding (*pCFStringGetSystemEncoding)(void); static CFStringRef (*pkCFRunLoopDefaultMode); static FSEventStreamRef (*pFSEventStreamCreate)(CFAllocatorRef, FSEventStreamCallback, FSEventStreamContext*, CFArrayRef, FSEventStreamEventId, CFTimeInterval, FSEventStreamCreateFlags); static void (*pFSEventStreamFlushSync)(FSEventStreamRef); static void (*pFSEventStreamInvalidate)(FSEventStreamRef); static void (*pFSEventStreamRelease)(FSEventStreamRef); static void (*pFSEventStreamScheduleWithRunLoop)(FSEventStreamRef, CFRunLoopRef, CFStringRef); static int (*pFSEventStreamStart)(FSEventStreamRef); static void (*pFSEventStreamStop)(FSEventStreamRef); #define UV__FSEVENTS_PROCESS(handle, block) \ do { \ QUEUE events; \ QUEUE* q; \ uv__fsevents_event_t* event; \ int err; \ uv_mutex_lock(&(handle)->cf_mutex); \ /* Split-off all events and empty original queue */ \ QUEUE_MOVE(&(handle)->cf_events, &events); \ /* Get error (if any) and zero original one */ \ err = (handle)->cf_error; \ (handle)->cf_error = 0; \ uv_mutex_unlock(&(handle)->cf_mutex); \ /* Loop through events, deallocating each after processing */ \ while (!QUEUE_EMPTY(&events)) { \ q = QUEUE_HEAD(&events); \ event = QUEUE_DATA(q, uv__fsevents_event_t, member); \ QUEUE_REMOVE(q); \ /* NOTE: Checking uv__is_active() is required here, because handle \ * callback may close handle and invoking it after it will lead to \ * incorrect behaviour */ \ if (!uv__is_closing((handle)) && uv__is_active((handle))) \ block \ /* Free allocated data */ \ uv__free(event); \ } \ if (err != 0 && !uv__is_closing((handle)) && uv__is_active((handle))) \ (handle)->cb((handle), NULL, 0, err); \ } while (0) /* Runs in UV loop's thread, when there're events to report to handle */ static void uv__fsevents_cb(uv_async_t* cb) { uv_fs_event_t* handle; handle = cb->data; UV__FSEVENTS_PROCESS(handle, { handle->cb(handle, event->path[0] ? 
event->path : NULL, event->events, 0); }); } /* Runs in CF thread, pushed event into handle's event list */ static void uv__fsevents_push_event(uv_fs_event_t* handle, QUEUE* events, int err) { assert(events != NULL || err != 0); uv_mutex_lock(&handle->cf_mutex); /* Concatenate two queues */ if (events != NULL) QUEUE_ADD(&handle->cf_events, events); /* Propagate error */ if (err != 0) handle->cf_error = err; uv_mutex_unlock(&handle->cf_mutex); uv_async_send(handle->cf_cb); } /* Runs in CF thread, when there're events in FSEventStream */ static void uv__fsevents_event_cb(const FSEventStreamRef streamRef, void* info, size_t numEvents, void* eventPaths, const FSEventStreamEventFlags eventFlags[], const FSEventStreamEventId eventIds[]) { size_t i; int len; char** paths; char* path; char* pos; uv_fs_event_t* handle; QUEUE* q; uv_loop_t* loop; uv__cf_loop_state_t* state; uv__fsevents_event_t* event; FSEventStreamEventFlags flags; QUEUE head; loop = info; state = loop->cf_state; assert(state != NULL); paths = eventPaths; /* For each handle */ uv_mutex_lock(&state->fsevent_mutex); QUEUE_FOREACH(q, &state->fsevent_handles) { handle = QUEUE_DATA(q, uv_fs_event_t, cf_member); QUEUE_INIT(&head); /* Process and filter out events */ for (i = 0; i < numEvents; i++) { flags = eventFlags[i]; /* Ignore system events */ if (flags & kFSEventsSystem) continue; path = paths[i]; len = strlen(path); if (handle->realpath_len == 0) continue; /* This should be unreachable */ /* Filter out paths that are outside handle's request */ if (len < handle->realpath_len) continue; /* Make sure that realpath actually named a directory, * (unless watching root, which alone keeps a trailing slash on the realpath) * or that we matched the whole string */ if (handle->realpath_len != len && handle->realpath_len > 1 && path[handle->realpath_len] != '/') continue; if (memcmp(path, handle->realpath, handle->realpath_len) != 0) continue; if (!(handle->realpath_len == 1 && handle->realpath[0] == '/')) { /* Remove common prefix, unless the watched folder is "/" */ path += handle->realpath_len; len -= handle->realpath_len; /* Ignore events with path equal to directory itself */ if (len <= 1 && (flags & kFSEventStreamEventFlagItemIsDir)) continue; if (len == 0) { /* Since we're using fsevents to watch the file itself, * realpath == path, and we now need to get the basename of the file back * (for commonality with other codepaths and platforms). 
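* The loop that follows walks `path` backwards until it passes the final '/',
* restoring the file's basename so the callback receives a name relative to
* the watched path instead of an empty string.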
*/ while (len < handle->realpath_len && path[-1] != '/') { path--; len++; } /* Created and Removed seem to be always set, but don't make sense */ flags &= ~kFSEventsRenamed; } else { /* Skip forward slash */ path++; len--; } } /* Do not emit events from subdirectories (without option set) */ if ((handle->cf_flags & UV_FS_EVENT_RECURSIVE) == 0 && *path != '\0') { pos = strchr(path + 1, '/'); if (pos != NULL) continue; } event = uv__malloc(sizeof(*event) + len); if (event == NULL) break; memset(event, 0, sizeof(*event)); memcpy(event->path, path, len + 1); event->events = UV_RENAME; if (0 == (flags & kFSEventsRenamed)) { if (0 != (flags & kFSEventsModified) || 0 == (flags & kFSEventStreamEventFlagItemIsDir)) event->events = UV_CHANGE; } QUEUE_INSERT_TAIL(&head, &event->member); } if (!QUEUE_EMPTY(&head)) uv__fsevents_push_event(handle, &head, 0); } uv_mutex_unlock(&state->fsevent_mutex); } /* Runs in CF thread */ static int uv__fsevents_create_stream(uv_loop_t* loop, CFArrayRef paths) { uv__cf_loop_state_t* state; FSEventStreamContext ctx; FSEventStreamRef ref; CFAbsoluteTime latency; FSEventStreamCreateFlags flags; /* Initialize context */ memset(&ctx, 0, sizeof(ctx)); ctx.info = loop; latency = 0.05; /* Explanation of selected flags: * 1. NoDefer - without this flag, events that are happening continuously * (i.e. each event is happening after time interval less than `latency`, * counted from previous event), will be deferred and passed to callback * once they'll either fill whole OS buffer, or when this continuous stream * will stop (i.e. there'll be delay between events, bigger than * `latency`). * Specifying this flag will invoke callback after `latency` time passed * since event. * 2. FileEvents - fire callback for file changes too (by default it is firing * it only for directory changes). */ flags = kFSEventStreamCreateFlagNoDefer | kFSEventStreamCreateFlagFileEvents; /* * NOTE: It might sound like a good idea to remember last seen StreamEventId, * but in reality one dir might have last StreamEventId less than, the other, * that is being watched now. Which will cause FSEventStream API to report * changes to files from the past. 
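* For that reason the stream below is always created with
* kFSEventStreamEventIdSinceNow rather than a remembered event id.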
*/ ref = pFSEventStreamCreate(NULL, &uv__fsevents_event_cb, &ctx, paths, kFSEventStreamEventIdSinceNow, latency, flags); assert(ref != NULL); state = loop->cf_state; pFSEventStreamScheduleWithRunLoop(ref, state->loop, *pkCFRunLoopDefaultMode); if (!pFSEventStreamStart(ref)) { pFSEventStreamInvalidate(ref); pFSEventStreamRelease(ref); return UV_EMFILE; } state->fsevent_stream = ref; return 0; } /* Runs in CF thread */ static void uv__fsevents_destroy_stream(uv_loop_t* loop) { uv__cf_loop_state_t* state; state = loop->cf_state; if (state->fsevent_stream == NULL) return; /* Stop emitting events */ pFSEventStreamStop(state->fsevent_stream); /* Release stream */ pFSEventStreamInvalidate(state->fsevent_stream); pFSEventStreamRelease(state->fsevent_stream); state->fsevent_stream = NULL; } /* Runs in CF thread, when there're new fsevent handles to add to stream */ static void uv__fsevents_reschedule(uv_fs_event_t* handle, uv__cf_loop_signal_type_t type) { uv__cf_loop_state_t* state; QUEUE* q; uv_fs_event_t* curr; CFArrayRef cf_paths; CFStringRef* paths; unsigned int i; int err; unsigned int path_count; state = handle->loop->cf_state; paths = NULL; cf_paths = NULL; err = 0; /* NOTE: `i` is used in deallocation loop below */ i = 0; /* Optimization to prevent O(n^2) time spent when starting to watch * many files simultaneously */ uv_mutex_lock(&state->fsevent_mutex); if (state->fsevent_need_reschedule == 0) { uv_mutex_unlock(&state->fsevent_mutex); goto final; } state->fsevent_need_reschedule = 0; uv_mutex_unlock(&state->fsevent_mutex); /* Destroy previous FSEventStream */ uv__fsevents_destroy_stream(handle->loop); /* Any failure below will be a memory failure */ err = UV_ENOMEM; /* Create list of all watched paths */ uv_mutex_lock(&state->fsevent_mutex); path_count = state->fsevent_handle_count; if (path_count != 0) { paths = uv__malloc(sizeof(*paths) * path_count); if (paths == NULL) { uv_mutex_unlock(&state->fsevent_mutex); goto final; } q = &state->fsevent_handles; for (; i < path_count; i++) { q = QUEUE_NEXT(q); assert(q != &state->fsevent_handles); curr = QUEUE_DATA(q, uv_fs_event_t, cf_member); assert(curr->realpath != NULL); paths[i] = pCFStringCreateWithFileSystemRepresentation(NULL, curr->realpath); if (paths[i] == NULL) { uv_mutex_unlock(&state->fsevent_mutex); goto final; } } } uv_mutex_unlock(&state->fsevent_mutex); err = 0; if (path_count != 0) { /* Create new FSEventStream */ cf_paths = pCFArrayCreate(NULL, (const void**) paths, path_count, NULL); if (cf_paths == NULL) { err = UV_ENOMEM; goto final; } err = uv__fsevents_create_stream(handle->loop, cf_paths); } final: /* Deallocate all paths in case of failure */ if (err != 0) { if (cf_paths == NULL) { while (i != 0) pCFRelease(paths[--i]); uv__free(paths); } else { /* CFArray takes ownership of both strings and original C-array */ pCFRelease(cf_paths); } /* Broadcast error to all handles */ uv_mutex_lock(&state->fsevent_mutex); QUEUE_FOREACH(q, &state->fsevent_handles) { curr = QUEUE_DATA(q, uv_fs_event_t, cf_member); uv__fsevents_push_event(curr, NULL, err); } uv_mutex_unlock(&state->fsevent_mutex); } /* * Main thread will block until the removal of handle from the list, * we must tell it when we're ready. 
* * NOTE: This is coupled with `uv_sem_wait()` in `uv__fsevents_close` */ if (type == kUVCFLoopSignalClosing) uv_sem_post(&state->fsevent_sem); } static int uv__fsevents_global_init(void) { static pthread_mutex_t global_init_mutex = PTHREAD_MUTEX_INITIALIZER; static void* core_foundation_handle; static void* core_services_handle; int err; err = 0; pthread_mutex_lock(&global_init_mutex); if (core_foundation_handle != NULL) goto out; /* The libraries are never unloaded because we currently don't have a good * mechanism for keeping a reference count. It's unlikely to be an issue * but if it ever becomes one, we can turn the dynamic library handles into * per-event loop properties and have the dynamic linker keep track for us. */ err = UV_ENOSYS; core_foundation_handle = dlopen("/System/Library/Frameworks/" "CoreFoundation.framework/" "Versions/A/CoreFoundation", RTLD_LAZY | RTLD_LOCAL); if (core_foundation_handle == NULL) goto out; core_services_handle = dlopen("/System/Library/Frameworks/" "CoreServices.framework/" "Versions/A/CoreServices", RTLD_LAZY | RTLD_LOCAL); if (core_services_handle == NULL) goto out; err = UV_ENOENT; #define V(handle, symbol) \ do { \ *(void **)(&p ## symbol) = dlsym((handle), #symbol); \ if (p ## symbol == NULL) \ goto out; \ } \ while (0) V(core_foundation_handle, CFArrayCreate); V(core_foundation_handle, CFRelease); V(core_foundation_handle, CFRunLoopAddSource); V(core_foundation_handle, CFRunLoopGetCurrent); V(core_foundation_handle, CFRunLoopRemoveSource); V(core_foundation_handle, CFRunLoopRun); V(core_foundation_handle, CFRunLoopSourceCreate); V(core_foundation_handle, CFRunLoopSourceSignal); V(core_foundation_handle, CFRunLoopStop); V(core_foundation_handle, CFRunLoopWakeUp); V(core_foundation_handle, CFStringCreateWithFileSystemRepresentation); V(core_foundation_handle, CFStringGetSystemEncoding); V(core_foundation_handle, kCFRunLoopDefaultMode); V(core_services_handle, FSEventStreamCreate); V(core_services_handle, FSEventStreamFlushSync); V(core_services_handle, FSEventStreamInvalidate); V(core_services_handle, FSEventStreamRelease); V(core_services_handle, FSEventStreamScheduleWithRunLoop); V(core_services_handle, FSEventStreamStart); V(core_services_handle, FSEventStreamStop); #undef V err = 0; out: if (err && core_services_handle != NULL) { dlclose(core_services_handle); core_services_handle = NULL; } if (err && core_foundation_handle != NULL) { dlclose(core_foundation_handle); core_foundation_handle = NULL; } pthread_mutex_unlock(&global_init_mutex); return err; } /* Runs in UV loop */ static int uv__fsevents_loop_init(uv_loop_t* loop) { CFRunLoopSourceContext ctx; uv__cf_loop_state_t* state; pthread_attr_t attr; int err; if (loop->cf_state != NULL) return 0; err = uv__fsevents_global_init(); if (err) return err; state = uv__calloc(1, sizeof(*state)); if (state == NULL) return UV_ENOMEM; err = uv_mutex_init(&loop->cf_mutex); if (err) goto fail_mutex_init; err = uv_sem_init(&loop->cf_sem, 0); if (err) goto fail_sem_init; QUEUE_INIT(&loop->cf_signals); err = uv_sem_init(&state->fsevent_sem, 0); if (err) goto fail_fsevent_sem_init; err = uv_mutex_init(&state->fsevent_mutex); if (err) goto fail_fsevent_mutex_init; QUEUE_INIT(&state->fsevent_handles); state->fsevent_need_reschedule = 0; state->fsevent_handle_count = 0; memset(&ctx, 0, sizeof(ctx)); ctx.info = loop; ctx.perform = uv__cf_loop_cb; state->signal_source = pCFRunLoopSourceCreate(NULL, 0, &ctx); if (state->signal_source == NULL) { err = UV_ENOMEM; goto fail_signal_source_create; } if 
(pthread_attr_init(&attr)) abort(); if (pthread_attr_setstacksize(&attr, uv__thread_stack_size())) abort(); loop->cf_state = state; /* uv_thread_t is an alias for pthread_t. */ err = UV__ERR(pthread_create(&loop->cf_thread, &attr, uv__cf_loop_runner, loop)); if (pthread_attr_destroy(&attr)) abort(); if (err) goto fail_thread_create; /* Synchronize threads */ uv_sem_wait(&loop->cf_sem); return 0; fail_thread_create: loop->cf_state = NULL; fail_signal_source_create: uv_mutex_destroy(&state->fsevent_mutex); fail_fsevent_mutex_init: uv_sem_destroy(&state->fsevent_sem); fail_fsevent_sem_init: uv_sem_destroy(&loop->cf_sem); fail_sem_init: uv_mutex_destroy(&loop->cf_mutex); fail_mutex_init: uv__free(state); return err; } /* Runs in UV loop */ void uv__fsevents_loop_delete(uv_loop_t* loop) { uv__cf_loop_signal_t* s; uv__cf_loop_state_t* state; QUEUE* q; if (loop->cf_state == NULL) return; if (uv__cf_loop_signal(loop, NULL, kUVCFLoopSignalRegular) != 0) abort(); uv_thread_join(&loop->cf_thread); uv_sem_destroy(&loop->cf_sem); uv_mutex_destroy(&loop->cf_mutex); /* Free any remaining data */ while (!QUEUE_EMPTY(&loop->cf_signals)) { q = QUEUE_HEAD(&loop->cf_signals); s = QUEUE_DATA(q, uv__cf_loop_signal_t, member); QUEUE_REMOVE(q); uv__free(s); } /* Destroy state */ state = loop->cf_state; uv_sem_destroy(&state->fsevent_sem); uv_mutex_destroy(&state->fsevent_mutex); pCFRelease(state->signal_source); uv__free(state); loop->cf_state = NULL; } /* Runs in CF thread. This is the CF loop's body */ static void* uv__cf_loop_runner(void* arg) { uv_loop_t* loop; uv__cf_loop_state_t* state; loop = arg; state = loop->cf_state; state->loop = pCFRunLoopGetCurrent(); pCFRunLoopAddSource(state->loop, state->signal_source, *pkCFRunLoopDefaultMode); uv_sem_post(&loop->cf_sem); pCFRunLoopRun(); pCFRunLoopRemoveSource(state->loop, state->signal_source, *pkCFRunLoopDefaultMode); state->loop = NULL; return NULL; } /* Runs in CF thread, executed after `uv__cf_loop_signal()` */ static void uv__cf_loop_cb(void* arg) { uv_loop_t* loop; uv__cf_loop_state_t* state; QUEUE* item; QUEUE split_head; uv__cf_loop_signal_t* s; loop = arg; state = loop->cf_state; uv_mutex_lock(&loop->cf_mutex); QUEUE_MOVE(&loop->cf_signals, &split_head); uv_mutex_unlock(&loop->cf_mutex); while (!QUEUE_EMPTY(&split_head)) { item = QUEUE_HEAD(&split_head); QUEUE_REMOVE(item); s = QUEUE_DATA(item, uv__cf_loop_signal_t, member); /* This was a termination signal */ if (s->handle == NULL) pCFRunLoopStop(state->loop); else uv__fsevents_reschedule(s->handle, s->type); uv__free(s); } } /* Runs in UV loop to notify CF thread */ int uv__cf_loop_signal(uv_loop_t* loop, uv_fs_event_t* handle, uv__cf_loop_signal_type_t type) { uv__cf_loop_signal_t* item; uv__cf_loop_state_t* state; item = uv__malloc(sizeof(*item)); if (item == NULL) return UV_ENOMEM; item->handle = handle; item->type = type; uv_mutex_lock(&loop->cf_mutex); QUEUE_INSERT_TAIL(&loop->cf_signals, &item->member); state = loop->cf_state; assert(state != NULL); pCFRunLoopSourceSignal(state->signal_source); pCFRunLoopWakeUp(state->loop); uv_mutex_unlock(&loop->cf_mutex); return 0; } /* Runs in UV loop to initialize handle */ int uv__fsevents_init(uv_fs_event_t* handle) { int err; uv__cf_loop_state_t* state; err = uv__fsevents_loop_init(handle->loop); if (err) return err; /* Get absolute path to file */ handle->realpath = realpath(handle->path, NULL); if (handle->realpath == NULL) return UV__ERR(errno); handle->realpath_len = strlen(handle->realpath); /* Initialize event queue */ 
QUEUE_INIT(&handle->cf_events); handle->cf_error = 0; /* * Events will occur in other thread. * Initialize callback for getting them back into event loop's thread */ handle->cf_cb = uv__malloc(sizeof(*handle->cf_cb)); if (handle->cf_cb == NULL) { err = UV_ENOMEM; goto fail_cf_cb_malloc; } handle->cf_cb->data = handle; uv_async_init(handle->loop, handle->cf_cb, uv__fsevents_cb); handle->cf_cb->flags |= UV_HANDLE_INTERNAL; uv_unref((uv_handle_t*) handle->cf_cb); err = uv_mutex_init(&handle->cf_mutex); if (err) goto fail_cf_mutex_init; /* Insert handle into the list */ state = handle->loop->cf_state; uv_mutex_lock(&state->fsevent_mutex); QUEUE_INSERT_TAIL(&state->fsevent_handles, &handle->cf_member); state->fsevent_handle_count++; state->fsevent_need_reschedule = 1; uv_mutex_unlock(&state->fsevent_mutex); /* Reschedule FSEventStream */ assert(handle != NULL); err = uv__cf_loop_signal(handle->loop, handle, kUVCFLoopSignalRegular); if (err) goto fail_loop_signal; return 0; fail_loop_signal: uv_mutex_destroy(&handle->cf_mutex); fail_cf_mutex_init: uv__free(handle->cf_cb); handle->cf_cb = NULL; fail_cf_cb_malloc: uv__free(handle->realpath); handle->realpath = NULL; handle->realpath_len = 0; return err; } /* Runs in UV loop to de-initialize handle */ int uv__fsevents_close(uv_fs_event_t* handle) { int err; uv__cf_loop_state_t* state; if (handle->cf_cb == NULL) return UV_EINVAL; /* Remove handle from the list */ state = handle->loop->cf_state; uv_mutex_lock(&state->fsevent_mutex); QUEUE_REMOVE(&handle->cf_member); state->fsevent_handle_count--; state->fsevent_need_reschedule = 1; uv_mutex_unlock(&state->fsevent_mutex); /* Reschedule FSEventStream */ assert(handle != NULL); err = uv__cf_loop_signal(handle->loop, handle, kUVCFLoopSignalClosing); if (err) return UV__ERR(err); /* Wait for deinitialization */ uv_sem_wait(&state->fsevent_sem); uv_close((uv_handle_t*) handle->cf_cb, (uv_close_cb) uv__free); handle->cf_cb = NULL; /* Free data in queue */ UV__FSEVENTS_PROCESS(handle, { /* NOP */ }); uv_mutex_destroy(&handle->cf_mutex); uv__free(handle->realpath); handle->realpath = NULL; handle->realpath_len = 0; return 0; } #endif /* TARGET_OS_IPHONE */ gevent-24.11.1/deps/libuv/src/unix/getaddrinfo.c000066400000000000000000000146651471441230600214460ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* Expose glibc-specific EAI_* error codes. Needs to be defined before we * include any headers. 
*/ #include "uv.h" #include "internal.h" #include "idna.h" #include #include /* NULL */ #include #include #include /* if_indextoname() */ /* EAI_* constants. */ #include int uv__getaddrinfo_translate_error(int sys_err) { switch (sys_err) { case 0: return 0; #if defined(EAI_ADDRFAMILY) case EAI_ADDRFAMILY: return UV_EAI_ADDRFAMILY; #endif #if defined(EAI_AGAIN) case EAI_AGAIN: return UV_EAI_AGAIN; #endif #if defined(EAI_BADFLAGS) case EAI_BADFLAGS: return UV_EAI_BADFLAGS; #endif #if defined(EAI_BADHINTS) case EAI_BADHINTS: return UV_EAI_BADHINTS; #endif #if defined(EAI_CANCELED) case EAI_CANCELED: return UV_EAI_CANCELED; #endif #if defined(EAI_FAIL) case EAI_FAIL: return UV_EAI_FAIL; #endif #if defined(EAI_FAMILY) case EAI_FAMILY: return UV_EAI_FAMILY; #endif #if defined(EAI_MEMORY) case EAI_MEMORY: return UV_EAI_MEMORY; #endif #if defined(EAI_NODATA) case EAI_NODATA: return UV_EAI_NODATA; #endif #if defined(EAI_NONAME) # if !defined(EAI_NODATA) || EAI_NODATA != EAI_NONAME case EAI_NONAME: return UV_EAI_NONAME; # endif #endif #if defined(EAI_OVERFLOW) case EAI_OVERFLOW: return UV_EAI_OVERFLOW; #endif #if defined(EAI_PROTOCOL) case EAI_PROTOCOL: return UV_EAI_PROTOCOL; #endif #if defined(EAI_SERVICE) case EAI_SERVICE: return UV_EAI_SERVICE; #endif #if defined(EAI_SOCKTYPE) case EAI_SOCKTYPE: return UV_EAI_SOCKTYPE; #endif #if defined(EAI_SYSTEM) case EAI_SYSTEM: return UV__ERR(errno); #endif } assert(!"unknown EAI_* error code"); abort(); #ifndef __SUNPRO_C return 0; /* Pacify compiler. */ #endif } static void uv__getaddrinfo_work(struct uv__work* w) { uv_getaddrinfo_t* req; int err; req = container_of(w, uv_getaddrinfo_t, work_req); err = getaddrinfo(req->hostname, req->service, req->hints, &req->addrinfo); req->retcode = uv__getaddrinfo_translate_error(err); } static void uv__getaddrinfo_done(struct uv__work* w, int status) { uv_getaddrinfo_t* req; req = container_of(w, uv_getaddrinfo_t, work_req); uv__req_unregister(req->loop, req); /* See initialization in uv_getaddrinfo(). */ if (req->hints) uv__free(req->hints); else if (req->service) uv__free(req->service); else if (req->hostname) uv__free(req->hostname); else assert(0); req->hints = NULL; req->service = NULL; req->hostname = NULL; if (status == UV_ECANCELED) { assert(req->retcode == 0); req->retcode = UV_EAI_CANCELED; } if (req->cb) req->cb(req, req->retcode, req->addrinfo); } int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb cb, const char* hostname, const char* service, const struct addrinfo* hints) { char hostname_ascii[256]; size_t hostname_len; size_t service_len; size_t hints_len; size_t len; char* buf; long rc; if (req == NULL || (hostname == NULL && service == NULL)) return UV_EINVAL; /* FIXME(bnoordhuis) IDNA does not seem to work z/OS, * probably because it uses EBCDIC rather than ASCII. */ #ifdef __MVS__ (void) &hostname_ascii; #else if (hostname != NULL) { rc = uv__idna_toascii(hostname, hostname + strlen(hostname), hostname_ascii, hostname_ascii + sizeof(hostname_ascii)); if (rc < 0) return rc; hostname = hostname_ascii; } #endif hostname_len = hostname ? strlen(hostname) + 1 : 0; service_len = service ? strlen(service) + 1 : 0; hints_len = hints ? 
sizeof(*hints) : 0; buf = uv__malloc(hostname_len + service_len + hints_len); if (buf == NULL) return UV_ENOMEM; uv__req_init(loop, req, UV_GETADDRINFO); req->loop = loop; req->cb = cb; req->addrinfo = NULL; req->hints = NULL; req->service = NULL; req->hostname = NULL; req->retcode = 0; /* order matters, see uv_getaddrinfo_done() */ len = 0; if (hints) { req->hints = memcpy(buf + len, hints, sizeof(*hints)); len += sizeof(*hints); } if (service) { req->service = memcpy(buf + len, service, service_len); len += service_len; } if (hostname) req->hostname = memcpy(buf + len, hostname, hostname_len); if (cb) { uv__work_submit(loop, &req->work_req, UV__WORK_SLOW_IO, uv__getaddrinfo_work, uv__getaddrinfo_done); return 0; } else { uv__getaddrinfo_work(&req->work_req); uv__getaddrinfo_done(&req->work_req, 0); return req->retcode; } } void uv_freeaddrinfo(struct addrinfo* ai) { if (ai) freeaddrinfo(ai); } int uv_if_indextoname(unsigned int ifindex, char* buffer, size_t* size) { char ifname_buf[UV_IF_NAMESIZE]; size_t len; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; if (if_indextoname(ifindex, ifname_buf) == NULL) return UV__ERR(errno); len = strnlen(ifname_buf, sizeof(ifname_buf)); if (*size <= len) { *size = len + 1; return UV_ENOBUFS; } memcpy(buffer, ifname_buf, len); buffer[len] = '\0'; *size = len; return 0; } int uv_if_indextoiid(unsigned int ifindex, char* buffer, size_t* size) { return uv_if_indextoname(ifindex, buffer, size); } gevent-24.11.1/deps/libuv/src/unix/getnameinfo.c000066400000000000000000000071101471441230600214370ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include #include #include #include #include "uv.h" #include "internal.h" static void uv__getnameinfo_work(struct uv__work* w) { uv_getnameinfo_t* req; int err; socklen_t salen; req = container_of(w, uv_getnameinfo_t, work_req); if (req->storage.ss_family == AF_INET) salen = sizeof(struct sockaddr_in); else if (req->storage.ss_family == AF_INET6) salen = sizeof(struct sockaddr_in6); else abort(); err = getnameinfo((struct sockaddr*) &req->storage, salen, req->host, sizeof(req->host), req->service, sizeof(req->service), req->flags); req->retcode = uv__getaddrinfo_translate_error(err); } static void uv__getnameinfo_done(struct uv__work* w, int status) { uv_getnameinfo_t* req; char* host; char* service; req = container_of(w, uv_getnameinfo_t, work_req); uv__req_unregister(req->loop, req); host = service = NULL; if (status == UV_ECANCELED) { assert(req->retcode == 0); req->retcode = UV_EAI_CANCELED; } else if (req->retcode == 0) { host = req->host; service = req->service; } if (req->getnameinfo_cb) req->getnameinfo_cb(req, req->retcode, host, service); } /* * Entry point for getnameinfo * return 0 if a callback will be made * return error code if validation fails */ int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, const struct sockaddr* addr, int flags) { if (req == NULL || addr == NULL) return UV_EINVAL; if (addr->sa_family == AF_INET) { memcpy(&req->storage, addr, sizeof(struct sockaddr_in)); } else if (addr->sa_family == AF_INET6) { memcpy(&req->storage, addr, sizeof(struct sockaddr_in6)); } else { return UV_EINVAL; } uv__req_init(loop, (uv_req_t*)req, UV_GETNAMEINFO); req->getnameinfo_cb = getnameinfo_cb; req->flags = flags; req->type = UV_GETNAMEINFO; req->loop = loop; req->retcode = 0; if (getnameinfo_cb) { uv__work_submit(loop, &req->work_req, UV__WORK_SLOW_IO, uv__getnameinfo_work, uv__getnameinfo_done); return 0; } else { uv__getnameinfo_work(&req->work_req); uv__getnameinfo_done(&req->work_req, 0); return req->retcode; } } gevent-24.11.1/deps/libuv/src/unix/haiku.c000066400000000000000000000103021471441230600202410ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include /* find_path() */ #include void uv_loadavg(double avg[3]) { avg[0] = 0; avg[1] = 0; avg[2] = 0; } int uv_exepath(char* buffer, size_t* size) { char abspath[B_PATH_NAME_LENGTH]; status_t status; ssize_t abspath_len; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; status = find_path(B_APP_IMAGE_SYMBOL, B_FIND_PATH_IMAGE_PATH, NULL, abspath, sizeof(abspath)); if (status != B_OK) return UV__ERR(status); abspath_len = uv__strscpy(buffer, abspath, *size); *size -= 1; if (abspath_len >= 0 && *size > (size_t)abspath_len) *size = (size_t)abspath_len; return 0; } uint64_t uv_get_free_memory(void) { status_t status; system_info sinfo; status = get_system_info(&sinfo); if (status != B_OK) return 0; return (sinfo.max_pages - sinfo.used_pages) * B_PAGE_SIZE; } uint64_t uv_get_total_memory(void) { status_t status; system_info sinfo; status = get_system_info(&sinfo); if (status != B_OK) return 0; return sinfo.max_pages * B_PAGE_SIZE; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } int uv_resident_set_memory(size_t* rss) { area_info area; ssize_t cookie; status_t status; thread_info thread; status = get_thread_info(find_thread(NULL), &thread); if (status != B_OK) return UV__ERR(status); cookie = 0; *rss = 0; while (get_next_area_info(thread.team, &cookie, &area) == B_OK) *rss += area.ram_size; return 0; } int uv_uptime(double* uptime) { /* system_time() returns time since booting in microseconds */ *uptime = (double)system_time() / 1000000; return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { cpu_topology_node_info* topology_infos; int i; status_t status; system_info system; uint32_t topology_count; uint64_t cpuspeed; uv_cpu_info_t* cpu_info; if (cpu_infos == NULL || count == NULL) return UV_EINVAL; status = get_cpu_topology_info(NULL, &topology_count); if (status != B_OK) return UV__ERR(status); topology_infos = uv__malloc(topology_count * sizeof(*topology_infos)); if (topology_infos == NULL) return UV_ENOMEM; status = get_cpu_topology_info(topology_infos, &topology_count); if (status != B_OK) { uv__free(topology_infos); return UV__ERR(status); } cpuspeed = 0; for (i = 0; i < (int)topology_count; i++) { if (topology_infos[i].type == B_TOPOLOGY_CORE) { cpuspeed = topology_infos[i].data.core.default_frequency; break; } } uv__free(topology_infos); status = get_system_info(&system); if (status != B_OK) return UV__ERR(status); *cpu_infos = uv__calloc(system.cpu_count, sizeof(**cpu_infos)); if (*cpu_infos == NULL) return UV_ENOMEM; /* CPU time and model are not exposed by Haiku. */ cpu_info = *cpu_infos; for (i = 0; i < (int)system.cpu_count; i++) { cpu_info->model = uv__strdup("unknown"); cpu_info->speed = (int)(cpuspeed / 1000000); cpu_info++; } *count = system.cpu_count; return 0; } gevent-24.11.1/deps/libuv/src/unix/hurd.c000066400000000000000000000102431471441230600201060ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #define _GNU_SOURCE 1 #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include int uv_exepath(char* buffer, size_t* size) { kern_return_t err; /* XXX in current Hurd, strings are char arrays of 1024 elements */ string_t exepath; ssize_t copied; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; if (*size - 1 > 0) { /* XXX limited length of buffer in current Hurd, this API will probably * evolve in the future */ err = proc_get_exe(getproc(), getpid(), exepath); if (err) return UV__ERR(err); } copied = uv__strscpy(buffer, exepath, *size); /* do not return error on UV_E2BIG failure */ *size = copied < 0 ? strlen(buffer) : (size_t) copied; return 0; } int uv_resident_set_memory(size_t* rss) { kern_return_t err; struct task_basic_info bi; mach_msg_type_number_t count; count = TASK_BASIC_INFO_COUNT; err = task_info(mach_task_self(), TASK_BASIC_INFO, (task_info_t) &bi, &count); if (err) return UV__ERR(err); *rss = bi.resident_size; return 0; } uint64_t uv_get_free_memory(void) { kern_return_t err; struct vm_statistics vmstats; err = vm_statistics(mach_task_self(), &vmstats); if (err) return 0; return vmstats.free_count * vm_page_size; } uint64_t uv_get_total_memory(void) { kern_return_t err; host_basic_info_data_t hbi; mach_msg_type_number_t cnt; cnt = HOST_BASIC_INFO_COUNT; err = host_info(mach_host_self(), HOST_BASIC_INFO, (host_info_t) &hbi, &cnt); if (err) return 0; return hbi.memory_size; } int uv_uptime(double* uptime) { char buf[128]; /* Try /proc/uptime first */ if (0 == uv__slurp("/proc/uptime", buf, sizeof(buf))) if (1 == sscanf(buf, "%lf", uptime)) return 0; /* Reimplement here code from procfs to calculate uptime if not mounted? */ return UV__ERR(EIO); } void uv_loadavg(double avg[3]) { char buf[128]; /* Large enough to hold all of /proc/loadavg. */ if (0 == uv__slurp("/proc/loadavg", buf, sizeof(buf))) if (3 == sscanf(buf, "%lf %lf %lf", &avg[0], &avg[1], &avg[2])) return; /* Reimplement here code from procfs to calculate loadavg if not mounted? 
*/ } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { kern_return_t err; host_basic_info_data_t hbi; mach_msg_type_number_t cnt; /* Get count of cpus */ cnt = HOST_BASIC_INFO_COUNT; err = host_info(mach_host_self(), HOST_BASIC_INFO, (host_info_t) &hbi, &cnt); if (err) { err = UV__ERR(err); goto abort; } /* XXX not implemented on the Hurd */ *cpu_infos = uv__calloc(hbi.avail_cpus, sizeof(**cpu_infos)); if (*cpu_infos == NULL) { err = UV_ENOMEM; goto abort; } *count = hbi.avail_cpus; return 0; abort: *cpu_infos = NULL; *count = 0; return err; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } gevent-24.11.1/deps/libuv/src/unix/ibmi.c000066400000000000000000000376051471441230600200770ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include char* original_exepath = NULL; uv_mutex_t process_title_mutex; uv_once_t process_title_mutex_once = UV_ONCE_INIT; typedef struct { int bytes_available; int bytes_returned; char current_date_and_time[8]; char system_name[8]; char elapsed_time[6]; char restricted_state_flag; char reserved; int percent_processing_unit_used; int jobs_in_system; int percent_permanent_addresses; int percent_temporary_addresses; int system_asp; int percent_system_asp_used; int total_auxiliary_storage; int current_unprotected_storage_used; int maximum_unprotected_storage_used; int percent_db_capability; int main_storage_size; int number_of_partitions; int partition_identifier; int reserved1; int current_processing_capacity; char processor_sharing_attribute; char reserved2[3]; int number_of_processors; int active_jobs_in_system; int active_threads_in_system; int maximum_jobs_in_system; int percent_temporary_256mb_segments_used; int percent_temporary_4gb_segments_used; int percent_permanent_256mb_segments_used; int percent_permanent_4gb_segments_used; int percent_current_interactive_performance; int percent_uncapped_cpu_capacity_used; int percent_shared_processor_pool_used; long main_storage_size_long; } SSTS0200; typedef struct { char header[208]; unsigned char loca_adapter_address[12]; } LIND0500; typedef struct { int bytes_provided; int bytes_available; char msgid[7]; } errcode_s; static const unsigned char e2a[256] = { 0, 1, 2, 3, 156, 9, 134, 127, 151, 141, 142, 11, 12, 13, 14, 15, 16, 17, 18, 19, 157, 133, 8, 135, 24, 25, 146, 143, 28, 29, 30, 31, 128, 129, 130, 131, 132, 10, 23, 27, 136, 137, 138, 139, 140, 5, 6, 7, 144, 145, 22, 147, 148, 149, 150, 4, 152, 153, 154, 155, 20, 21, 158, 26, 32, 160, 161, 162, 163, 164, 165, 166, 167, 168, 91, 46, 60, 40, 43, 33, 38, 169, 170, 171, 172, 173, 174, 175, 176, 177, 93, 36, 42, 41, 59, 94, 45, 47, 178, 179, 180, 181, 182, 183, 184, 185, 124, 44, 37, 95, 62, 63, 186, 187, 188, 189, 190, 191, 192, 193, 194, 96, 58, 35, 64, 39, 61, 34, 195, 97, 98, 99, 100, 101, 102, 103, 104, 105, 196, 197, 198, 199, 200, 201, 202, 106, 107, 108, 109, 110, 111, 112, 113, 114, 203, 204, 205, 206, 207, 208, 209, 126, 115, 116, 117, 118, 119, 120, 121, 122, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 123, 65, 66, 67, 68, 69, 70, 71, 72, 73, 232, 233, 234, 235, 236, 237, 125, 74, 75, 76, 77, 78, 79, 80, 81, 82, 238, 239, 240, 241, 242, 243, 92, 159, 83, 84, 85, 86, 87, 88, 89, 90, 244, 245, 246, 247, 248, 249, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 250, 251, 252, 253, 254, 255}; static const unsigned char a2e[256] = { 0, 1, 2, 3, 55, 45, 46, 47, 22, 5, 37, 11, 12, 13, 14, 15, 16, 17, 18, 19, 60, 61, 50, 38, 24, 25, 63, 39, 28, 29, 30, 31, 64, 79, 127, 123, 91, 108, 80, 125, 77, 93, 92, 78, 107, 96, 75, 97, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 122, 94, 76, 126, 110, 111, 124, 193, 194, 195, 196, 197, 198, 199, 200, 201, 209, 210, 211, 212, 213, 214, 215, 216, 217, 226, 227, 228, 229, 230, 231, 232, 233, 74, 224, 90, 95, 109, 121, 129, 130, 131, 132, 133, 134, 135, 136, 137, 145, 146, 147, 148, 149, 150, 151, 152, 153, 162, 163, 164, 165, 166, 167, 168, 169, 192, 106, 208, 161, 7, 32, 33, 34, 35, 36, 21, 6, 23, 40, 41, 42, 
43, 44, 9, 10, 27, 48, 49, 26, 51, 52, 53, 54, 8, 56, 57, 58, 59, 4, 20, 62, 225, 65, 66, 67, 68, 69, 70, 71, 72, 73, 81, 82, 83, 84, 85, 86, 87, 88, 89, 98, 99, 100, 101, 102, 103, 104, 105, 112, 113, 114, 115, 116, 117, 118, 119, 120, 128, 138, 139, 140, 141, 142, 143, 144, 154, 155, 156, 157, 158, 159, 160, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 202, 203, 204, 205, 206, 207, 218, 219, 220, 221, 222, 223, 234, 235, 236, 237, 238, 239, 250, 251, 252, 253, 254, 255}; static void iconv_e2a(unsigned char src[], unsigned char dst[], size_t length) { size_t i; for (i = 0; i < length; i++) dst[i] = e2a[src[i]]; } static void iconv_a2e(const char* src, unsigned char dst[], size_t length) { size_t srclen; size_t i; srclen = strlen(src); if (srclen > length) srclen = length; for (i = 0; i < srclen; i++) dst[i] = a2e[src[i]]; /* padding the remaining part with spaces */ for (; i < length; i++) dst[i] = a2e[' ']; } void init_process_title_mutex_once(void) { uv_mutex_init(&process_title_mutex); } static int get_ibmi_system_status(SSTS0200* rcvr) { /* rcvrlen is input parameter 2 to QWCRSSTS */ unsigned int rcvrlen = sizeof(*rcvr); unsigned char format[8], reset_status[10]; /* format is input parameter 3 to QWCRSSTS */ iconv_a2e("SSTS0200", format, sizeof(format)); /* reset_status is input parameter 4 */ iconv_a2e("*NO", reset_status, sizeof(reset_status)); /* errcode is input parameter 5 to QWCRSSTS */ errcode_s errcode; /* qwcrssts_pointer is the 16-byte tagged system pointer to QWCRSSTS */ ILEpointer __attribute__((aligned(16))) qwcrssts_pointer; /* qwcrssts_argv is the array of argument pointers to QWCRSSTS */ void* qwcrssts_argv[6]; /* Set the IBM i pointer to the QSYS/QWCRSSTS *PGM object */ int rc = _RSLOBJ2(&qwcrssts_pointer, RSLOBJ_TS_PGM, "QWCRSSTS", "QSYS"); if (rc != 0) return rc; /* initialize the QWCRSSTS returned info structure */ memset(rcvr, 0, sizeof(*rcvr)); /* initialize the QWCRSSTS error code structure */ memset(&errcode, 0, sizeof(errcode)); errcode.bytes_provided = sizeof(errcode); /* initialize the array of argument pointers for the QWCRSSTS API */ qwcrssts_argv[0] = rcvr; qwcrssts_argv[1] = &rcvrlen; qwcrssts_argv[2] = &format; qwcrssts_argv[3] = &reset_status; qwcrssts_argv[4] = &errcode; qwcrssts_argv[5] = NULL; /* Call the IBM i QWCRSSTS API from PASE */ rc = _PGMCALL(&qwcrssts_pointer, qwcrssts_argv, 0); return rc; } uint64_t uv_get_free_memory(void) { SSTS0200 rcvr; if (get_ibmi_system_status(&rcvr)) return 0; return (uint64_t)rcvr.main_storage_size * 1024ULL; } uint64_t uv_get_total_memory(void) { SSTS0200 rcvr; if (get_ibmi_system_status(&rcvr)) return 0; return (uint64_t)rcvr.main_storage_size * 1024ULL; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } void uv_loadavg(double avg[3]) { SSTS0200 rcvr; if (get_ibmi_system_status(&rcvr)) { avg[0] = avg[1] = avg[2] = 0; return; } /* The average (in tenths) of the elapsed time during which the processing * units were in use. For example, a value of 411 in binary would be 41.1%. * This percentage could be greater than 100% for an uncapped partition. 
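* Dividing by 1000.0 below converts that tenths-of-a-percent value into a
* 0.0-1.0+ fraction (e.g. 411 -> 0.411), and the same figure is reported for
* all three load-average slots.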
*/ double processing_unit_used_percent = rcvr.percent_processing_unit_used / 1000.0; avg[0] = avg[1] = avg[2] = processing_unit_used_percent; } int uv_resident_set_memory(size_t* rss) { *rss = 0; return 0; } int uv_uptime(double* uptime) { return UV_ENOSYS; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int numcpus, idx = 0; uv_cpu_info_t* cpu_info; *cpu_infos = NULL; *count = 0; numcpus = sysconf(_SC_NPROCESSORS_ONLN); *cpu_infos = uv__malloc(numcpus * sizeof(uv_cpu_info_t)); if (!*cpu_infos) { return UV_ENOMEM; } cpu_info = *cpu_infos; for (idx = 0; idx < numcpus; idx++) { cpu_info->speed = 0; cpu_info->model = uv__strdup("unknown"); cpu_info->cpu_times.user = 0; cpu_info->cpu_times.sys = 0; cpu_info->cpu_times.idle = 0; cpu_info->cpu_times.irq = 0; cpu_info->cpu_times.nice = 0; cpu_info++; } *count = numcpus; return 0; } static int get_ibmi_physical_address(const char* line, char (*phys_addr)[6]) { LIND0500 rcvr; /* rcvrlen is input parameter 2 to QDCRLIND */ unsigned int rcvrlen = sizeof(rcvr); unsigned char format[8], line_name[10]; unsigned char mac_addr[sizeof(rcvr.loca_adapter_address)]; int c[6]; /* format is input parameter 3 to QDCRLIND */ iconv_a2e("LIND0500", format, sizeof(format)); /* line_name is input parameter 4 to QDCRLIND */ iconv_a2e(line, line_name, sizeof(line_name)); /* err is input parameter 5 to QDCRLIND */ errcode_s err; /* qwcrssts_pointer is the 16-byte tagged system pointer to QDCRLIND */ ILEpointer __attribute__((aligned(16))) qdcrlind_pointer; /* qwcrssts_argv is the array of argument pointers to QDCRLIND */ void* qdcrlind_argv[6]; /* Set the IBM i pointer to the QSYS/QDCRLIND *PGM object */ int rc = _RSLOBJ2(&qdcrlind_pointer, RSLOBJ_TS_PGM, "QDCRLIND", "QSYS"); if (rc != 0) return rc; /* initialize the QDCRLIND returned info structure */ memset(&rcvr, 0, sizeof(rcvr)); /* initialize the QDCRLIND error code structure */ memset(&err, 0, sizeof(err)); err.bytes_provided = sizeof(err); /* initialize the array of argument pointers for the QDCRLIND API */ qdcrlind_argv[0] = &rcvr; qdcrlind_argv[1] = &rcvrlen; qdcrlind_argv[2] = &format; qdcrlind_argv[3] = &line_name; qdcrlind_argv[4] = &err; qdcrlind_argv[5] = NULL; /* Call the IBM i QDCRLIND API from PASE */ rc = _PGMCALL(&qdcrlind_pointer, qdcrlind_argv, 0); if (rc != 0) return rc; if (err.bytes_available > 0) { return -1; } /* convert ebcdic loca_adapter_address to ascii first */ iconv_e2a(rcvr.loca_adapter_address, mac_addr, sizeof(rcvr.loca_adapter_address)); /* convert loca_adapter_address(char[12]) to phys_addr(char[6]) */ int r = sscanf(mac_addr, "%02x%02x%02x%02x%02x%02x", &c[0], &c[1], &c[2], &c[3], &c[4], &c[5]); if (r == ARRAY_SIZE(c)) { (*phys_addr)[0] = c[0]; (*phys_addr)[1] = c[1]; (*phys_addr)[2] = c[2]; (*phys_addr)[3] = c[3]; (*phys_addr)[4] = c[4]; (*phys_addr)[5] = c[5]; } else { memset(*phys_addr, 0, sizeof(*phys_addr)); rc = -1; } return rc; } int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { uv_interface_address_t* address; struct ifaddrs_pase *ifap = NULL, *cur; int inet6, r = 0; *count = 0; *addresses = NULL; if (Qp2getifaddrs(&ifap)) return UV_ENOSYS; /* The first loop to get the size of the array to be allocated */ for (cur = ifap; cur; cur = cur->ifa_next) { if (!(cur->ifa_addr->sa_family == AF_INET6 || cur->ifa_addr->sa_family == AF_INET)) continue; if (!(cur->ifa_flags & IFF_UP && cur->ifa_flags & IFF_RUNNING)) continue; (*count)++; } if (*count == 0) { Qp2freeifaddrs(ifap); return 0; } /* Alloc the return interface structs */ 
*addresses = uv__calloc(*count, sizeof(**addresses)); if (*addresses == NULL) { Qp2freeifaddrs(ifap); return UV_ENOMEM; } address = *addresses; /* The second loop to fill in the array */ for (cur = ifap; cur; cur = cur->ifa_next) { if (!(cur->ifa_addr->sa_family == AF_INET6 || cur->ifa_addr->sa_family == AF_INET)) continue; if (!(cur->ifa_flags & IFF_UP && cur->ifa_flags & IFF_RUNNING)) continue; address->name = uv__strdup(cur->ifa_name); inet6 = (cur->ifa_addr->sa_family == AF_INET6); if (inet6) { address->address.address6 = *((struct sockaddr_in6*)cur->ifa_addr); address->netmask.netmask6 = *((struct sockaddr_in6*)cur->ifa_netmask); address->netmask.netmask6.sin6_family = AF_INET6; } else { address->address.address4 = *((struct sockaddr_in*)cur->ifa_addr); address->netmask.netmask4 = *((struct sockaddr_in*)cur->ifa_netmask); address->netmask.netmask4.sin_family = AF_INET; } address->is_internal = cur->ifa_flags & IFF_LOOPBACK ? 1 : 0; if (!address->is_internal) { int rc = -1; size_t name_len = strlen(address->name); /* To get the associated MAC address, we must convert the address to a * line description. Normally, the name field contains the line * description name, but for VLANs it has the VLAN appended with a * period. Since object names can also contain periods and numbers, there * is no way to know if a returned name is for a VLAN or not. eg. * *LIND ETH1.1 and *LIND ETH1, VLAN 1 both have the same name: ETH1.1 * * Instead, we apply the same heuristic used by some of the XPF ioctls: * - names > 10 *must* contain a VLAN * - assume names <= 10 do not contain a VLAN and try directly * - if >10 or QDCRLIND returned an error, try to strip off a VLAN * and try again * - if we still get an error or couldn't find a period, leave the MAC as * 00:00:00:00:00:00 */ if (name_len <= 10) { /* Assume name does not contain a VLAN ID */ rc = get_ibmi_physical_address(address->name, &address->phys_addr); } if (name_len > 10 || rc != 0) { /* The interface name must contain a VLAN ID suffix. Attempt to strip * it off so we can get the line description to pass to QDCRLIND. */ char* temp_name = uv__strdup(address->name); char* dot = strrchr(temp_name, '.'); if (dot != NULL) { *dot = '\0'; if (strlen(temp_name) <= 10) { rc = get_ibmi_physical_address(temp_name, &address->phys_addr); } } uv__free(temp_name); } } address++; } Qp2freeifaddrs(ifap); return r; } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; ++i) { uv__free(addresses[i].name); } uv__free(addresses); } char** uv_setup_args(int argc, char** argv) { char exepath[UV__PATH_MAX]; char* s; size_t size; if (argc > 0) { /* Use argv[0] to determine value for uv_exepath(). */ size = sizeof(exepath); if (uv__search_path(argv[0], exepath, &size) == 0) { uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); original_exepath = uv__strdup(exepath); uv_mutex_unlock(&process_title_mutex); } } return argv; } int uv_set_process_title(const char* title) { return 0; } int uv_get_process_title(char* buffer, size_t size) { if (buffer == NULL || size == 0) return UV_EINVAL; buffer[0] = '\0'; return 0; } void uv__process_title_cleanup(void) { } gevent-24.11.1/deps/libuv/src/unix/internal.h000066400000000000000000000270201471441230600207660ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_UNIX_INTERNAL_H_ #define UV_UNIX_INTERNAL_H_ #include "uv-common.h" #include #include /* _POSIX_PATH_MAX, PATH_MAX */ #include /* abort */ #include /* strrchr */ #include /* O_CLOEXEC and O_NONBLOCK, if supported. */ #include #include #include #if defined(__STRICT_ANSI__) # define inline __inline #endif #if defined(__linux__) # include "linux-syscalls.h" #endif /* __linux__ */ #if defined(__MVS__) # include "os390-syscalls.h" #endif /* __MVS__ */ #if defined(__sun) # include # include #endif /* __sun */ #if defined(_AIX) # define reqevents events # define rtnevents revents # include #else # include #endif /* _AIX */ #if defined(__APPLE__) && !TARGET_OS_IPHONE # include #endif /* * Define common detection for active Thread Sanitizer * - clang uses __has_feature(thread_sanitizer) * - gcc-7+ uses __SANITIZE_THREAD__ */ #if defined(__has_feature) # if __has_feature(thread_sanitizer) # define __SANITIZE_THREAD__ 1 # endif #endif #if defined(PATH_MAX) # define UV__PATH_MAX PATH_MAX #else # define UV__PATH_MAX 8192 #endif #if defined(__ANDROID__) int uv__pthread_sigmask(int how, const sigset_t* set, sigset_t* oset); # ifdef pthread_sigmask # undef pthread_sigmask # endif # define pthread_sigmask(how, set, oldset) uv__pthread_sigmask(how, set, oldset) #endif #define ACCESS_ONCE(type, var) \ (*(volatile type*) &(var)) #define ROUND_UP(a, b) \ ((a) % (b) ? ((a) + (b)) - ((a) % (b)) : (a)) #define UNREACHABLE() \ do { \ assert(0 && "unreachable code"); \ abort(); \ } \ while (0) #define SAVE_ERRNO(block) \ do { \ int _saved_errno = errno; \ do { block; } while (0); \ errno = _saved_errno; \ } \ while (0) /* The __clang__ and __INTEL_COMPILER checks are superfluous because they * define __GNUC__. They are here to convey to you, dear reader, that these * macros are enabled when compiling with clang or icc. */ #if defined(__clang__) || \ defined(__GNUC__) || \ defined(__INTEL_COMPILER) # define UV_UNUSED(declaration) __attribute__((unused)) declaration #else # define UV_UNUSED(declaration) declaration #endif /* Leans on the fact that, on Linux, POLLRDHUP == EPOLLRDHUP. */ #ifdef POLLRDHUP # define UV__POLLRDHUP POLLRDHUP #else # define UV__POLLRDHUP 0x2000 #endif #ifdef POLLPRI # define UV__POLLPRI POLLPRI #else # define UV__POLLPRI 0 #endif #if !defined(O_CLOEXEC) && defined(__FreeBSD__) /* * It may be that we are just missing `__POSIX_VISIBLE >= 200809`. 
* Try using fixed value const and give up, if it doesn't work */ # define O_CLOEXEC 0x00100000 #endif typedef struct uv__stream_queued_fds_s uv__stream_queued_fds_t; /* loop flags */ enum { UV_LOOP_BLOCK_SIGPROF = 0x1, UV_LOOP_REAP_CHILDREN = 0x2 }; /* flags of excluding ifaddr */ enum { UV__EXCLUDE_IFPHYS, UV__EXCLUDE_IFADDR }; typedef enum { UV_CLOCK_PRECISE = 0, /* Use the highest resolution clock available. */ UV_CLOCK_FAST = 1 /* Use the fastest clock with <= 1ms granularity. */ } uv_clocktype_t; struct uv__stream_queued_fds_s { unsigned int size; unsigned int offset; int fds[1]; }; #if defined(_AIX) || \ defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__linux__) || \ defined(__OpenBSD__) || \ defined(__NetBSD__) #define uv__nonblock uv__nonblock_ioctl #define UV__NONBLOCK_IS_IOCTL 1 #else #define uv__nonblock uv__nonblock_fcntl #define UV__NONBLOCK_IS_IOCTL 0 #endif /* On Linux, uv__nonblock_fcntl() and uv__nonblock_ioctl() do not commute * when O_NDELAY is not equal to O_NONBLOCK. Case in point: linux/sparc32 * and linux/sparc64, where O_NDELAY is O_NONBLOCK + another bit. * * Libuv uses uv__nonblock_fcntl() directly sometimes so ensure that it * commutes with uv__nonblock(). */ #if defined(__linux__) && O_NDELAY != O_NONBLOCK #undef uv__nonblock #define uv__nonblock uv__nonblock_fcntl #endif /* core */ int uv__cloexec(int fd, int set); int uv__nonblock_ioctl(int fd, int set); int uv__nonblock_fcntl(int fd, int set); int uv__close(int fd); /* preserves errno */ int uv__close_nocheckstdio(int fd); int uv__close_nocancel(int fd); int uv__socket(int domain, int type, int protocol); ssize_t uv__recvmsg(int fd, struct msghdr *msg, int flags); void uv__make_close_pending(uv_handle_t* handle); int uv__getiovmax(void); void uv__io_init(uv__io_t* w, uv__io_cb cb, int fd); void uv__io_start(uv_loop_t* loop, uv__io_t* w, unsigned int events); void uv__io_stop(uv_loop_t* loop, uv__io_t* w, unsigned int events); void uv__io_close(uv_loop_t* loop, uv__io_t* w); void uv__io_feed(uv_loop_t* loop, uv__io_t* w); int uv__io_active(const uv__io_t* w, unsigned int events); int uv__io_check_fd(uv_loop_t* loop, int fd); void uv__io_poll(uv_loop_t* loop, int timeout); /* in milliseconds or -1 */ int uv__io_fork(uv_loop_t* loop); int uv__fd_exists(uv_loop_t* loop, int fd); /* async */ void uv__async_stop(uv_loop_t* loop); int uv__async_fork(uv_loop_t* loop); /* loop */ void uv__run_idle(uv_loop_t* loop); void uv__run_check(uv_loop_t* loop); void uv__run_prepare(uv_loop_t* loop); /* stream */ void uv__stream_init(uv_loop_t* loop, uv_stream_t* stream, uv_handle_type type); int uv__stream_open(uv_stream_t*, int fd, int flags); void uv__stream_destroy(uv_stream_t* stream); #if defined(__APPLE__) int uv__stream_try_select(uv_stream_t* stream, int* fd); #endif /* defined(__APPLE__) */ void uv__server_io(uv_loop_t* loop, uv__io_t* w, unsigned int events); int uv__accept(int sockfd); int uv__dup2_cloexec(int oldfd, int newfd); int uv__open_cloexec(const char* path, int flags); int uv__slurp(const char* filename, char* buf, size_t len); /* tcp */ int uv__tcp_listen(uv_tcp_t* tcp, int backlog, uv_connection_cb cb); int uv__tcp_nodelay(int fd, int on); int uv__tcp_keepalive(int fd, int on, unsigned int delay); /* pipe */ int uv__pipe_listen(uv_pipe_t* handle, int backlog, uv_connection_cb cb); /* signal */ void uv__signal_close(uv_signal_t* handle); void uv__signal_global_once_init(void); void uv__signal_loop_cleanup(uv_loop_t* loop); int 
uv__signal_loop_fork(uv_loop_t* loop); /* platform specific */ uint64_t uv__hrtime(uv_clocktype_t type); int uv__kqueue_init(uv_loop_t* loop); int uv__epoll_init(uv_loop_t* loop); int uv__platform_loop_init(uv_loop_t* loop); void uv__platform_loop_delete(uv_loop_t* loop); void uv__platform_invalidate_fd(uv_loop_t* loop, int fd); /* various */ void uv__async_close(uv_async_t* handle); void uv__check_close(uv_check_t* handle); void uv__fs_event_close(uv_fs_event_t* handle); void uv__idle_close(uv_idle_t* handle); void uv__pipe_close(uv_pipe_t* handle); void uv__poll_close(uv_poll_t* handle); void uv__prepare_close(uv_prepare_t* handle); void uv__process_close(uv_process_t* handle); void uv__stream_close(uv_stream_t* handle); void uv__tcp_close(uv_tcp_t* handle); size_t uv__thread_stack_size(void); void uv__udp_close(uv_udp_t* handle); void uv__udp_finish_close(uv_udp_t* handle); FILE* uv__open_file(const char* path); int uv__getpwuid_r(uv_passwd_t* pwd); int uv__search_path(const char* prog, char* buf, size_t* buflen); void uv__wait_children(uv_loop_t* loop); /* random */ int uv__random_devurandom(void* buf, size_t buflen); int uv__random_getrandom(void* buf, size_t buflen); int uv__random_getentropy(void* buf, size_t buflen); int uv__random_readpath(const char* path, void* buf, size_t buflen); int uv__random_sysctl(void* buf, size_t buflen); #if defined(__APPLE__) int uv___stream_fd(const uv_stream_t* handle); #define uv__stream_fd(handle) (uv___stream_fd((const uv_stream_t*) (handle))) #else #define uv__stream_fd(handle) ((handle)->io_watcher.fd) #endif /* defined(__APPLE__) */ int uv__make_pipe(int fds[2], int flags); #if defined(__APPLE__) int uv__fsevents_init(uv_fs_event_t* handle); int uv__fsevents_close(uv_fs_event_t* handle); void uv__fsevents_loop_delete(uv_loop_t* loop); #endif /* defined(__APPLE__) */ UV_UNUSED(static void uv__update_time(uv_loop_t* loop)) { /* Use a fast time source if available. We only need millisecond precision. */ loop->time = uv__hrtime(UV_CLOCK_FAST) / 1000000; } UV_UNUSED(static char* uv__basename_r(const char* path)) { char* s; s = strrchr(path, '/'); if (s == NULL) return (char*) path; return s + 1; } #if defined(__linux__) int uv__inotify_fork(uv_loop_t* loop, void* old_watchers); #endif typedef int (*uv__peersockfunc)(int, struct sockaddr*, socklen_t*); int uv__getsockpeername(const uv_handle_t* handle, uv__peersockfunc func, struct sockaddr* name, int* namelen); #if defined(__linux__) || \ defined(__FreeBSD__) || \ defined(__FreeBSD_kernel__) || \ defined(__DragonFly__) #define HAVE_MMSG 1 struct uv__mmsghdr { struct msghdr msg_hdr; unsigned int msg_len; }; int uv__recvmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen); int uv__sendmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen); #else #define HAVE_MMSG 0 #endif #if defined(__sun) #if !defined(_POSIX_VERSION) || _POSIX_VERSION < 200809L size_t strnlen(const char* s, size_t maxlen); #endif #endif #if defined(__FreeBSD__) ssize_t uv__fs_copy_file_range(int fd_in, off_t* off_in, int fd_out, off_t* off_out, size_t len, unsigned int flags); #endif #endif /* UV_UNIX_INTERNAL_H_ */ gevent-24.11.1/deps/libuv/src/unix/kqueue.c000066400000000000000000000372721471441230600204560ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include /* * Required on * - Until at least FreeBSD 11.0 * - Older versions of Mac OS X * * http://www.boost.org/doc/libs/1_61_0/boost/asio/detail/kqueue_reactor.hpp */ #ifndef EV_OOBAND #define EV_OOBAND EV_FLAG1 #endif static void uv__fs_event(uv_loop_t* loop, uv__io_t* w, unsigned int fflags); int uv__kqueue_init(uv_loop_t* loop) { loop->backend_fd = kqueue(); if (loop->backend_fd == -1) return UV__ERR(errno); uv__cloexec(loop->backend_fd, 1); return 0; } #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 static int uv__has_forked_with_cfrunloop; #endif int uv__io_fork(uv_loop_t* loop) { int err; loop->backend_fd = -1; err = uv__kqueue_init(loop); if (err) return err; #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 if (loop->cf_state != NULL) { /* We cannot start another CFRunloop and/or thread in the child process; CF aborts if you try or if you try to touch the thread at all to kill it. So the best we can do is ignore it from now on. This means we can't watch directories in the same way anymore (like other BSDs). It also means we cannot properly clean up the allocated resources; calling uv__fsevents_loop_delete from uv_loop_close will crash the process. So we sidestep the issue by pretending like we never started it in the first place. 
*/ uv__store_relaxed(&uv__has_forked_with_cfrunloop, 1); uv__free(loop->cf_state); loop->cf_state = NULL; } #endif /* #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 */ return err; } int uv__io_check_fd(uv_loop_t* loop, int fd) { struct kevent ev; int rc; rc = 0; EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, 0); if (kevent(loop->backend_fd, &ev, 1, NULL, 0, NULL)) rc = UV__ERR(errno); EV_SET(&ev, fd, EVFILT_READ, EV_DELETE, 0, 0, 0); if (rc == 0) if (kevent(loop->backend_fd, &ev, 1, NULL, 0, NULL)) abort(); return rc; } void uv__io_poll(uv_loop_t* loop, int timeout) { struct kevent events[1024]; struct kevent* ev; struct timespec spec; unsigned int nevents; unsigned int revents; QUEUE* q; uv__io_t* w; uv_process_t* process; sigset_t* pset; sigset_t set; uint64_t base; uint64_t diff; int have_signals; int filter; int fflags; int count; int nfds; int fd; int op; int i; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } nevents = 0; while (!QUEUE_EMPTY(&loop->watcher_queue)) { q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); assert(w->fd < (int) loop->nwatchers); if ((w->events & POLLIN) == 0 && (w->pevents & POLLIN) != 0) { filter = EVFILT_READ; fflags = 0; op = EV_ADD; if (w->cb == uv__fs_event) { filter = EVFILT_VNODE; fflags = NOTE_ATTRIB | NOTE_WRITE | NOTE_RENAME | NOTE_DELETE | NOTE_EXTEND | NOTE_REVOKE; op = EV_ADD | EV_ONESHOT; /* Stop the event from firing repeatedly. */ } EV_SET(events + nevents, w->fd, filter, op, fflags, 0, 0); if (++nevents == ARRAY_SIZE(events)) { if (kevent(loop->backend_fd, events, nevents, NULL, 0, NULL)) abort(); nevents = 0; } } if ((w->events & POLLOUT) == 0 && (w->pevents & POLLOUT) != 0) { EV_SET(events + nevents, w->fd, EVFILT_WRITE, EV_ADD, 0, 0, 0); if (++nevents == ARRAY_SIZE(events)) { if (kevent(loop->backend_fd, events, nevents, NULL, 0, NULL)) abort(); nevents = 0; } } if ((w->events & UV__POLLPRI) == 0 && (w->pevents & UV__POLLPRI) != 0) { EV_SET(events + nevents, w->fd, EV_OOBAND, EV_ADD, 0, 0, 0); if (++nevents == ARRAY_SIZE(events)) { if (kevent(loop->backend_fd, events, nevents, NULL, 0, NULL)) abort(); nevents = 0; } } w->events = w->pevents; } pset = NULL; if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { pset = &set; sigemptyset(pset); sigaddset(pset, SIGPROF); } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } for (;; nevents = 0) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); if (timeout != -1) { spec.tv_sec = timeout / 1000; spec.tv_nsec = (timeout % 1000) * 1000000; } if (pset != NULL) pthread_sigmask(SIG_BLOCK, pset, NULL); nfds = kevent(loop->backend_fd, events, nevents, events, ARRAY_SIZE(events), timeout == -1 ? NULL : &spec); if (pset != NULL) pthread_sigmask(SIG_UNBLOCK, pset, NULL); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. 
*/ SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; if (timeout == -1) continue; if (timeout > 0) goto update_timeout; } assert(timeout != -1); return; } if (nfds == -1) { if (errno != EINTR) abort(); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == 0) return; if (timeout == -1) continue; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } have_signals = 0; nevents = 0; assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = (void*) events; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; for (i = 0; i < nfds; i++) { ev = events + i; fd = ev->ident; /* Handle kevent NOTE_EXIT results */ if (ev->filter == EVFILT_PROC) { QUEUE_FOREACH(q, &loop->process_handles) { process = QUEUE_DATA(q, uv_process_t, queue); if (process->pid == fd) { process->flags |= UV_HANDLE_REAP; loop->flags |= UV_LOOP_REAP_CHILDREN; break; } } nevents++; continue; } /* Skip invalidated events, see uv__platform_invalidate_fd */ if (fd == -1) continue; w = loop->watchers[fd]; if (w == NULL) { /* File descriptor that we've stopped watching, disarm it. * TODO: batch up. */ struct kevent events[1]; EV_SET(events + 0, fd, ev->filter, EV_DELETE, 0, 0, 0); if (kevent(loop->backend_fd, events, 1, NULL, 0, NULL)) if (errno != EBADF && errno != ENOENT) abort(); continue; } if (ev->filter == EVFILT_VNODE) { assert(w->events == POLLIN); assert(w->pevents == POLLIN); uv__metrics_update_idle_time(loop); w->cb(loop, w, ev->fflags); /* XXX always uv__fs_event() */ nevents++; continue; } revents = 0; if (ev->filter == EVFILT_READ) { if (w->pevents & POLLIN) { revents |= POLLIN; w->rcount = ev->data; } else { /* TODO batch up */ struct kevent events[1]; EV_SET(events + 0, fd, ev->filter, EV_DELETE, 0, 0, 0); if (kevent(loop->backend_fd, events, 1, NULL, 0, NULL)) if (errno != ENOENT) abort(); } if ((ev->flags & EV_EOF) && (w->pevents & UV__POLLRDHUP)) revents |= UV__POLLRDHUP; } if (ev->filter == EV_OOBAND) { if (w->pevents & UV__POLLPRI) { revents |= UV__POLLPRI; w->rcount = ev->data; } else { /* TODO batch up */ struct kevent events[1]; EV_SET(events + 0, fd, ev->filter, EV_DELETE, 0, 0, 0); if (kevent(loop->backend_fd, events, 1, NULL, 0, NULL)) if (errno != ENOENT) abort(); } } if (ev->filter == EVFILT_WRITE) { if (w->pevents & POLLOUT) { revents |= POLLOUT; w->wcount = ev->data; } else { /* TODO batch up */ struct kevent events[1]; EV_SET(events + 0, fd, ev->filter, EV_DELETE, 0, 0, 0); if (kevent(loop->backend_fd, events, 1, NULL, 0, NULL)) if (errno != ENOENT) abort(); } } if (ev->flags & EV_ERROR) revents |= POLLERR; if (revents == 0) continue; /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, revents); } nevents++; } if (loop->flags & UV_LOOP_REAP_CHILDREN) { loop->flags &= ~UV_LOOP_REAP_CHILDREN; uv__wait_children(loop); } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. 
*/ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. */ timeout = 0; continue; } return; } if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); diff = loop->time - base; if (diff >= (uint64_t) timeout) return; timeout -= diff; } } void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { struct kevent* events; uintptr_t i; uintptr_t nfds; assert(loop->watchers != NULL); assert(fd >= 0); events = (struct kevent*) loop->watchers[loop->nwatchers]; nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1]; if (events == NULL) return; /* Invalidate events with same file descriptor */ for (i = 0; i < nfds; i++) if ((int) events[i].ident == fd && events[i].filter != EVFILT_PROC) events[i].ident = -1; } static void uv__fs_event(uv_loop_t* loop, uv__io_t* w, unsigned int fflags) { uv_fs_event_t* handle; struct kevent ev; int events; const char* path; #if defined(F_GETPATH) /* MAXPATHLEN == PATH_MAX but the former is what XNU calls it internally. */ char pathbuf[MAXPATHLEN]; #endif handle = container_of(w, uv_fs_event_t, event_watcher); if (fflags & (NOTE_ATTRIB | NOTE_EXTEND)) events = UV_CHANGE; else events = UV_RENAME; path = NULL; #if defined(F_GETPATH) /* Also works when the file has been unlinked from the file system. Passing * in the path when the file has been deleted is arguably a little strange * but it's consistent with what the inotify backend does. */ if (fcntl(handle->event_watcher.fd, F_GETPATH, pathbuf) == 0) path = uv__basename_r(pathbuf); #endif handle->cb(handle, path, events, 0); if (handle->event_watcher.fd == -1) return; /* Watcher operates in one-shot mode, re-arm it. */ fflags = NOTE_ATTRIB | NOTE_WRITE | NOTE_RENAME | NOTE_DELETE | NOTE_EXTEND | NOTE_REVOKE; EV_SET(&ev, w->fd, EVFILT_VNODE, EV_ADD | EV_ONESHOT, fflags, 0, 0); if (kevent(loop->backend_fd, &ev, 1, NULL, 0, NULL)) abort(); } int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_EVENT); return 0; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags) { int fd; #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 struct stat statbuf; #endif if (uv__is_active(handle)) return UV_EINVAL; handle->cb = cb; handle->path = uv__strdup(path); if (handle->path == NULL) return UV_ENOMEM; /* TODO open asynchronously - but how do we report back errors? 
*/ fd = open(handle->path, O_RDONLY); if (fd == -1) { uv__free(handle->path); handle->path = NULL; return UV__ERR(errno); } #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 /* Nullify field to perform checks later */ handle->cf_cb = NULL; handle->realpath = NULL; handle->realpath_len = 0; handle->cf_flags = flags; if (fstat(fd, &statbuf)) goto fallback; /* FSEvents works only with directories */ if (!(statbuf.st_mode & S_IFDIR)) goto fallback; if (0 == uv__load_relaxed(&uv__has_forked_with_cfrunloop)) { int r; /* The fallback fd is no longer needed */ uv__close_nocheckstdio(fd); handle->event_watcher.fd = -1; r = uv__fsevents_init(handle); if (r == 0) { uv__handle_start(handle); } else { uv__free(handle->path); handle->path = NULL; } return r; } fallback: #endif /* #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 */ uv__handle_start(handle); uv__io_init(&handle->event_watcher, uv__fs_event, fd); uv__io_start(handle->loop, &handle->event_watcher, POLLIN); return 0; } int uv_fs_event_stop(uv_fs_event_t* handle) { int r; r = 0; if (!uv__is_active(handle)) return 0; uv__handle_stop(handle); #if defined(__APPLE__) && MAC_OS_X_VERSION_MAX_ALLOWED >= 1070 if (0 == uv__load_relaxed(&uv__has_forked_with_cfrunloop)) if (handle->cf_cb != NULL) r = uv__fsevents_close(handle); #endif if (handle->event_watcher.fd != -1) { uv__io_close(handle->loop, &handle->event_watcher); uv__close(handle->event_watcher.fd); handle->event_watcher.fd = -1; } uv__free(handle->path); handle->path = NULL; return r; } void uv__fs_event_close(uv_fs_event_t* handle) { uv_fs_event_stop(handle); } gevent-24.11.1/deps/libuv/src/unix/linux-core.c000066400000000000000000000503161471441230600212360ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* We lean on the fact that POLL{IN,OUT,ERR,HUP} correspond with their * EPOLL* counterparts. We use the POLL* variants in this file because that * is what libuv uses elsewhere. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #define HAVE_IFADDRS_H 1 # if defined(__ANDROID_API__) && __ANDROID_API__ < 24 # undef HAVE_IFADDRS_H #endif #ifdef __UCLIBC__ # if __UCLIBC_MAJOR__ < 0 && __UCLIBC_MINOR__ < 9 && __UCLIBC_SUBLEVEL__ < 32 # undef HAVE_IFADDRS_H # endif #endif #ifdef HAVE_IFADDRS_H # include # include # include # include #endif /* HAVE_IFADDRS_H */ /* Available from 2.6.32 onwards. */ #ifndef CLOCK_MONOTONIC_COARSE # define CLOCK_MONOTONIC_COARSE 6 #endif /* This is rather annoying: CLOCK_BOOTTIME lives in but we can't * include that file because it conflicts with . We'll just have to * define it ourselves. */ #ifndef CLOCK_BOOTTIME # define CLOCK_BOOTTIME 7 #endif static int read_models(unsigned int numcpus, uv_cpu_info_t* ci); static int read_times(FILE* statfile_fp, unsigned int numcpus, uv_cpu_info_t* ci); static void read_speeds(unsigned int numcpus, uv_cpu_info_t* ci); static uint64_t read_cpufreq(unsigned int cpunum); int uv__platform_loop_init(uv_loop_t* loop) { loop->inotify_fd = -1; loop->inotify_watchers = NULL; return uv__epoll_init(loop); } int uv__io_fork(uv_loop_t* loop) { int err; void* old_watchers; old_watchers = loop->inotify_watchers; uv__close(loop->backend_fd); loop->backend_fd = -1; uv__platform_loop_delete(loop); err = uv__platform_loop_init(loop); if (err) return err; return uv__inotify_fork(loop, old_watchers); } void uv__platform_loop_delete(uv_loop_t* loop) { if (loop->inotify_fd == -1) return; uv__io_stop(loop, &loop->inotify_read_watcher, POLLIN); uv__close(loop->inotify_fd); loop->inotify_fd = -1; } uint64_t uv__hrtime(uv_clocktype_t type) { static clock_t fast_clock_id = -1; struct timespec t; clock_t clock_id; /* Prefer CLOCK_MONOTONIC_COARSE if available but only when it has * millisecond granularity or better. CLOCK_MONOTONIC_COARSE is * serviced entirely from the vDSO, whereas CLOCK_MONOTONIC may * decide to make a costly system call. */ /* TODO(bnoordhuis) Use CLOCK_MONOTONIC_COARSE for UV_CLOCK_PRECISE * when it has microsecond granularity or better (unlikely). */ clock_id = CLOCK_MONOTONIC; if (type != UV_CLOCK_FAST) goto done; clock_id = uv__load_relaxed(&fast_clock_id); if (clock_id != -1) goto done; clock_id = CLOCK_MONOTONIC; if (0 == clock_getres(CLOCK_MONOTONIC_COARSE, &t)) if (t.tv_nsec <= 1 * 1000 * 1000) clock_id = CLOCK_MONOTONIC_COARSE; uv__store_relaxed(&fast_clock_id, clock_id); done: if (clock_gettime(clock_id, &t)) return 0; /* Not really possible. */ return t.tv_sec * (uint64_t) 1e9 + t.tv_nsec; } int uv_resident_set_memory(size_t* rss) { char buf[1024]; const char* s; ssize_t n; long val; int fd; int i; do fd = open("/proc/self/stat", O_RDONLY); while (fd == -1 && errno == EINTR); if (fd == -1) return UV__ERR(errno); do n = read(fd, buf, sizeof(buf) - 1); while (n == -1 && errno == EINTR); uv__close(fd); if (n == -1) return UV__ERR(errno); buf[n] = '\0'; s = strchr(buf, ' '); if (s == NULL) goto err; s += 1; if (*s != '(') goto err; s = strchr(s, ')'); if (s == NULL) goto err; for (i = 1; i <= 22; i++) { s = strchr(s + 1, ' '); if (s == NULL) goto err; } errno = 0; val = strtol(s, NULL, 10); if (errno != 0) goto err; if (val < 0) goto err; *rss = val * getpagesize(); return 0; err: return UV_EINVAL; } int uv_uptime(double* uptime) { static volatile int no_clock_boottime; char buf[128]; struct timespec now; int r; /* Try /proc/uptime first, then fallback to clock_gettime(). 
*/ if (0 == uv__slurp("/proc/uptime", buf, sizeof(buf))) if (1 == sscanf(buf, "%lf", uptime)) return 0; /* Try CLOCK_BOOTTIME first, fall back to CLOCK_MONOTONIC if not available * (pre-2.6.39 kernels). CLOCK_MONOTONIC doesn't increase when the system * is suspended. */ if (no_clock_boottime) { retry_clock_gettime: r = clock_gettime(CLOCK_MONOTONIC, &now); } else if ((r = clock_gettime(CLOCK_BOOTTIME, &now)) && errno == EINVAL) { no_clock_boottime = 1; goto retry_clock_gettime; } if (r) return UV__ERR(errno); *uptime = now.tv_sec; return 0; } static int uv__cpu_num(FILE* statfile_fp, unsigned int* numcpus) { unsigned int num; char buf[1024]; if (!fgets(buf, sizeof(buf), statfile_fp)) return UV_EIO; num = 0; while (fgets(buf, sizeof(buf), statfile_fp)) { if (strncmp(buf, "cpu", 3)) break; num++; } if (num == 0) return UV_EIO; *numcpus = num; return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int numcpus; uv_cpu_info_t* ci; int err; FILE* statfile_fp; *cpu_infos = NULL; *count = 0; statfile_fp = uv__open_file("/proc/stat"); if (statfile_fp == NULL) return UV__ERR(errno); err = uv__cpu_num(statfile_fp, &numcpus); if (err < 0) goto out; err = UV_ENOMEM; ci = uv__calloc(numcpus, sizeof(*ci)); if (ci == NULL) goto out; err = read_models(numcpus, ci); if (err == 0) err = read_times(statfile_fp, numcpus, ci); if (err) { uv_free_cpu_info(ci, numcpus); goto out; } /* read_models() on x86 also reads the CPU speed from /proc/cpuinfo. * We don't check for errors here. Worst case, the field is left zero. */ if (ci[0].speed == 0) read_speeds(numcpus, ci); *cpu_infos = ci; *count = numcpus; err = 0; out: if (fclose(statfile_fp)) if (errno != EINTR && errno != EINPROGRESS) abort(); return err; } static void read_speeds(unsigned int numcpus, uv_cpu_info_t* ci) { unsigned int num; for (num = 0; num < numcpus; num++) ci[num].speed = read_cpufreq(num) / 1000; } /* Also reads the CPU frequency on ppc and x86. The other architectures only * have a BogoMIPS field, which may not be very accurate. * * Note: Simply returns on error, uv_cpu_info() takes care of the cleanup. */ static int read_models(unsigned int numcpus, uv_cpu_info_t* ci) { #if defined(__PPC__) static const char model_marker[] = "cpu\t\t: "; static const char speed_marker[] = "clock\t\t: "; #else static const char model_marker[] = "model name\t: "; static const char speed_marker[] = "cpu MHz\t\t: "; #endif const char* inferred_model; unsigned int model_idx; unsigned int speed_idx; unsigned int part_idx; char buf[1024]; char* model; FILE* fp; int model_id; /* Most are unused on non-ARM, non-MIPS and non-x86 architectures. */ (void) &model_marker; (void) &speed_marker; (void) &speed_idx; (void) &part_idx; (void) &model; (void) &buf; (void) &fp; (void) &model_id; model_idx = 0; speed_idx = 0; part_idx = 0; #if defined(__arm__) || \ defined(__i386__) || \ defined(__mips__) || \ defined(__aarch64__) || \ defined(__PPC__) || \ defined(__x86_64__) fp = uv__open_file("/proc/cpuinfo"); if (fp == NULL) return UV__ERR(errno); while (fgets(buf, sizeof(buf), fp)) { if (model_idx < numcpus) { if (strncmp(buf, model_marker, sizeof(model_marker) - 1) == 0) { model = buf + sizeof(model_marker) - 1; model = uv__strndup(model, strlen(model) - 1); /* Strip newline. */ if (model == NULL) { fclose(fp); return UV_ENOMEM; } ci[model_idx++].model = model; continue; } } #if defined(__arm__) || defined(__mips__) || defined(__aarch64__) if (model_idx < numcpus) { #if defined(__arm__) /* Fallback for pre-3.8 kernels. 
*/ static const char model_marker[] = "Processor\t: "; #elif defined(__aarch64__) static const char part_marker[] = "CPU part\t: "; /* Adapted from: https://github.com/karelzak/util-linux */ struct vendor_part { const int id; const char* name; }; static const struct vendor_part arm_chips[] = { { 0x811, "ARM810" }, { 0x920, "ARM920" }, { 0x922, "ARM922" }, { 0x926, "ARM926" }, { 0x940, "ARM940" }, { 0x946, "ARM946" }, { 0x966, "ARM966" }, { 0xa20, "ARM1020" }, { 0xa22, "ARM1022" }, { 0xa26, "ARM1026" }, { 0xb02, "ARM11 MPCore" }, { 0xb36, "ARM1136" }, { 0xb56, "ARM1156" }, { 0xb76, "ARM1176" }, { 0xc05, "Cortex-A5" }, { 0xc07, "Cortex-A7" }, { 0xc08, "Cortex-A8" }, { 0xc09, "Cortex-A9" }, { 0xc0d, "Cortex-A17" }, /* Originally A12 */ { 0xc0f, "Cortex-A15" }, { 0xc0e, "Cortex-A17" }, { 0xc14, "Cortex-R4" }, { 0xc15, "Cortex-R5" }, { 0xc17, "Cortex-R7" }, { 0xc18, "Cortex-R8" }, { 0xc20, "Cortex-M0" }, { 0xc21, "Cortex-M1" }, { 0xc23, "Cortex-M3" }, { 0xc24, "Cortex-M4" }, { 0xc27, "Cortex-M7" }, { 0xc60, "Cortex-M0+" }, { 0xd01, "Cortex-A32" }, { 0xd03, "Cortex-A53" }, { 0xd04, "Cortex-A35" }, { 0xd05, "Cortex-A55" }, { 0xd06, "Cortex-A65" }, { 0xd07, "Cortex-A57" }, { 0xd08, "Cortex-A72" }, { 0xd09, "Cortex-A73" }, { 0xd0a, "Cortex-A75" }, { 0xd0b, "Cortex-A76" }, { 0xd0c, "Neoverse-N1" }, { 0xd0d, "Cortex-A77" }, { 0xd0e, "Cortex-A76AE" }, { 0xd13, "Cortex-R52" }, { 0xd20, "Cortex-M23" }, { 0xd21, "Cortex-M33" }, { 0xd41, "Cortex-A78" }, { 0xd42, "Cortex-A78AE" }, { 0xd4a, "Neoverse-E1" }, { 0xd4b, "Cortex-A78C" }, }; if (strncmp(buf, part_marker, sizeof(part_marker) - 1) == 0) { model = buf + sizeof(part_marker) - 1; errno = 0; model_id = strtol(model, NULL, 16); if ((errno != 0) || model_id < 0) { fclose(fp); return UV_EINVAL; } for (part_idx = 0; part_idx < ARRAY_SIZE(arm_chips); part_idx++) { if (model_id == arm_chips[part_idx].id) { model = uv__strdup(arm_chips[part_idx].name); if (model == NULL) { fclose(fp); return UV_ENOMEM; } ci[model_idx++].model = model; break; } } } #else /* defined(__mips__) */ static const char model_marker[] = "cpu model\t\t: "; #endif if (strncmp(buf, model_marker, sizeof(model_marker) - 1) == 0) { model = buf + sizeof(model_marker) - 1; model = uv__strndup(model, strlen(model) - 1); /* Strip newline. */ if (model == NULL) { fclose(fp); return UV_ENOMEM; } ci[model_idx++].model = model; continue; } } #else /* !__arm__ && !__mips__ && !__aarch64__ */ if (speed_idx < numcpus) { if (strncmp(buf, speed_marker, sizeof(speed_marker) - 1) == 0) { ci[speed_idx++].speed = atoi(buf + sizeof(speed_marker) - 1); continue; } } #endif /* __arm__ || __mips__ || __aarch64__ */ } fclose(fp); #endif /* __arm__ || __i386__ || __mips__ || __PPC__ || __x86_64__ || __aarch__ */ /* Now we want to make sure that all the models contain *something* because * it's not safe to leave them as null. Copy the last entry unless there * isn't one, in that case we simply put "unknown" into everything. 
*/ inferred_model = "unknown"; if (model_idx > 0) inferred_model = ci[model_idx - 1].model; while (model_idx < numcpus) { model = uv__strndup(inferred_model, strlen(inferred_model)); if (model == NULL) return UV_ENOMEM; ci[model_idx++].model = model; } return 0; } static int read_times(FILE* statfile_fp, unsigned int numcpus, uv_cpu_info_t* ci) { struct uv_cpu_times_s ts; unsigned int ticks; unsigned int multiplier; uint64_t user; uint64_t nice; uint64_t sys; uint64_t idle; uint64_t dummy; uint64_t irq; uint64_t num; uint64_t len; char buf[1024]; ticks = (unsigned int)sysconf(_SC_CLK_TCK); assert(ticks != (unsigned int) -1); assert(ticks != 0); multiplier = ((uint64_t)1000L / ticks); rewind(statfile_fp); if (!fgets(buf, sizeof(buf), statfile_fp)) abort(); num = 0; while (fgets(buf, sizeof(buf), statfile_fp)) { if (num >= numcpus) break; if (strncmp(buf, "cpu", 3)) break; /* skip "cpu " marker */ { unsigned int n; int r = sscanf(buf, "cpu%u ", &n); assert(r == 1); (void) r; /* silence build warning */ for (len = sizeof("cpu0"); n /= 10; len++); } /* Line contains user, nice, system, idle, iowait, irq, softirq, steal, * guest, guest_nice but we're only interested in the first four + irq. * * Don't use %*s to skip fields or %ll to read straight into the uint64_t * fields, they're not allowed in C89 mode. */ if (6 != sscanf(buf + len, "%" PRIu64 " %" PRIu64 " %" PRIu64 "%" PRIu64 " %" PRIu64 " %" PRIu64, &user, &nice, &sys, &idle, &dummy, &irq)) abort(); ts.user = user * multiplier; ts.nice = nice * multiplier; ts.sys = sys * multiplier; ts.idle = idle * multiplier; ts.irq = irq * multiplier; ci[num++].cpu_times = ts; } assert(num == numcpus); return 0; } static uint64_t read_cpufreq(unsigned int cpunum) { uint64_t val; char buf[1024]; FILE* fp; snprintf(buf, sizeof(buf), "/sys/devices/system/cpu/cpu%u/cpufreq/scaling_cur_freq", cpunum); fp = uv__open_file(buf); if (fp == NULL) return 0; if (fscanf(fp, "%" PRIu64, &val) != 1) val = 0; fclose(fp); return val; } #ifdef HAVE_IFADDRS_H static int uv__ifaddr_exclude(struct ifaddrs *ent, int exclude_type) { if (!((ent->ifa_flags & IFF_UP) && (ent->ifa_flags & IFF_RUNNING))) return 1; if (ent->ifa_addr == NULL) return 1; /* * On Linux getifaddrs returns information related to the raw underlying * devices. We're not interested in this information yet. 
*/ if (ent->ifa_addr->sa_family == PF_PACKET) return exclude_type; return !exclude_type; } #endif int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { #ifndef HAVE_IFADDRS_H *count = 0; *addresses = NULL; return UV_ENOSYS; #else struct ifaddrs *addrs, *ent; uv_interface_address_t* address; int i; struct sockaddr_ll *sll; *count = 0; *addresses = NULL; if (getifaddrs(&addrs)) return UV__ERR(errno); /* Count the number of interfaces */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFADDR)) continue; (*count)++; } if (*count == 0) { freeifaddrs(addrs); return 0; } /* Make sure the memory is initiallized to zero using calloc() */ *addresses = uv__calloc(*count, sizeof(**addresses)); if (!(*addresses)) { freeifaddrs(addrs); return UV_ENOMEM; } address = *addresses; for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFADDR)) continue; address->name = uv__strdup(ent->ifa_name); if (ent->ifa_addr->sa_family == AF_INET6) { address->address.address6 = *((struct sockaddr_in6*) ent->ifa_addr); } else { address->address.address4 = *((struct sockaddr_in*) ent->ifa_addr); } if (ent->ifa_netmask->sa_family == AF_INET6) { address->netmask.netmask6 = *((struct sockaddr_in6*) ent->ifa_netmask); } else { address->netmask.netmask4 = *((struct sockaddr_in*) ent->ifa_netmask); } address->is_internal = !!(ent->ifa_flags & IFF_LOOPBACK); address++; } /* Fill in physical addresses for each interface */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent, UV__EXCLUDE_IFPHYS)) continue; address = *addresses; for (i = 0; i < (*count); i++) { size_t namelen = strlen(ent->ifa_name); /* Alias interface share the same physical address */ if (strncmp(address->name, ent->ifa_name, namelen) == 0 && (address->name[namelen] == 0 || address->name[namelen] == ':')) { sll = (struct sockaddr_ll*)ent->ifa_addr; memcpy(address->phys_addr, sll->sll_addr, sizeof(address->phys_addr)); } address++; } } freeifaddrs(addrs); return 0; #endif } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; i++) { uv__free(addresses[i].name); } uv__free(addresses); } void uv__set_process_title(const char* title) { #if defined(PR_SET_NAME) prctl(PR_SET_NAME, title); /* Only copies first 16 characters. */ #endif } static uint64_t uv__read_proc_meminfo(const char* what) { uint64_t rc; char* p; char buf[4096]; /* Large enough to hold all of /proc/meminfo. */ if (uv__slurp("/proc/meminfo", buf, sizeof(buf))) return 0; p = strstr(buf, what); if (p == NULL) return 0; p += strlen(what); rc = 0; sscanf(p, "%" PRIu64 " kB", &rc); return rc * 1024; } uint64_t uv_get_free_memory(void) { struct sysinfo info; uint64_t rc; rc = uv__read_proc_meminfo("MemAvailable:"); if (rc != 0) return rc; if (0 == sysinfo(&info)) return (uint64_t) info.freeram * info.mem_unit; return 0; } uint64_t uv_get_total_memory(void) { struct sysinfo info; uint64_t rc; rc = uv__read_proc_meminfo("MemTotal:"); if (rc != 0) return rc; if (0 == sysinfo(&info)) return (uint64_t) info.totalram * info.mem_unit; return 0; } static uint64_t uv__read_cgroups_uint64(const char* cgroup, const char* param) { char filename[256]; char buf[32]; /* Large enough to hold an encoded uint64_t. 
*/ uint64_t rc; rc = 0; snprintf(filename, sizeof(filename), "/sys/fs/cgroup/%s/%s", cgroup, param); if (0 == uv__slurp(filename, buf, sizeof(buf))) sscanf(buf, "%" PRIu64, &rc); return rc; } uint64_t uv_get_constrained_memory(void) { /* * This might return 0 if there was a problem getting the memory limit from * cgroups. This is OK because a return value of 0 signifies that the memory * limit is unknown. */ return uv__read_cgroups_uint64("memory", "memory.limit_in_bytes"); } void uv_loadavg(double avg[3]) { struct sysinfo info; char buf[128]; /* Large enough to hold all of /proc/loadavg. */ if (0 == uv__slurp("/proc/loadavg", buf, sizeof(buf))) if (3 == sscanf(buf, "%lf %lf %lf", &avg[0], &avg[1], &avg[2])) return; if (sysinfo(&info) < 0) return; avg[0] = (double) info.loads[0] / 65536.0; avg[1] = (double) info.loads[1] / 65536.0; avg[2] = (double) info.loads[2] / 65536.0; } gevent-24.11.1/deps/libuv/src/unix/linux-inotify.c000066400000000000000000000222201471441230600217600ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "uv/tree.h" #include "internal.h" #include #include #include #include #include #include #include #include #include struct watcher_list { RB_ENTRY(watcher_list) entry; QUEUE watchers; int iterating; char* path; int wd; }; struct watcher_root { struct watcher_list* rbh_root; }; #define CAST(p) ((struct watcher_root*)(p)) static int compare_watchers(const struct watcher_list* a, const struct watcher_list* b) { if (a->wd < b->wd) return -1; if (a->wd > b->wd) return 1; return 0; } RB_GENERATE_STATIC(watcher_root, watcher_list, entry, compare_watchers) static void uv__inotify_read(uv_loop_t* loop, uv__io_t* w, unsigned int revents); static void maybe_free_watcher_list(struct watcher_list* w, uv_loop_t* loop); static int init_inotify(uv_loop_t* loop) { int fd; if (loop->inotify_fd != -1) return 0; fd = inotify_init1(IN_NONBLOCK | IN_CLOEXEC); if (fd < 0) return UV__ERR(errno); loop->inotify_fd = fd; uv__io_init(&loop->inotify_read_watcher, uv__inotify_read, loop->inotify_fd); uv__io_start(loop, &loop->inotify_read_watcher, POLLIN); return 0; } int uv__inotify_fork(uv_loop_t* loop, void* old_watchers) { /* Open the inotify_fd, and re-arm all the inotify watchers. 
*/ int err; struct watcher_list* tmp_watcher_list_iter; struct watcher_list* watcher_list; struct watcher_list tmp_watcher_list; QUEUE queue; QUEUE* q; uv_fs_event_t* handle; char* tmp_path; if (old_watchers != NULL) { /* We must restore the old watcher list to be able to close items * out of it. */ loop->inotify_watchers = old_watchers; QUEUE_INIT(&tmp_watcher_list.watchers); /* Note that the queue we use is shared with the start and stop() * functions, making QUEUE_FOREACH unsafe to use. So we use the * QUEUE_MOVE trick to safely iterate. Also don't free the watcher * list until we're done iterating. c.f. uv__inotify_read. */ RB_FOREACH_SAFE(watcher_list, watcher_root, CAST(&old_watchers), tmp_watcher_list_iter) { watcher_list->iterating = 1; QUEUE_MOVE(&watcher_list->watchers, &queue); while (!QUEUE_EMPTY(&queue)) { q = QUEUE_HEAD(&queue); handle = QUEUE_DATA(q, uv_fs_event_t, watchers); /* It's critical to keep a copy of path here, because it * will be set to NULL by stop() and then deallocated by * maybe_free_watcher_list */ tmp_path = uv__strdup(handle->path); assert(tmp_path != NULL); QUEUE_REMOVE(q); QUEUE_INSERT_TAIL(&watcher_list->watchers, q); uv_fs_event_stop(handle); QUEUE_INSERT_TAIL(&tmp_watcher_list.watchers, &handle->watchers); handle->path = tmp_path; } watcher_list->iterating = 0; maybe_free_watcher_list(watcher_list, loop); } QUEUE_MOVE(&tmp_watcher_list.watchers, &queue); while (!QUEUE_EMPTY(&queue)) { q = QUEUE_HEAD(&queue); QUEUE_REMOVE(q); handle = QUEUE_DATA(q, uv_fs_event_t, watchers); tmp_path = handle->path; handle->path = NULL; err = uv_fs_event_start(handle, handle->cb, tmp_path, 0); uv__free(tmp_path); if (err) return err; } } return 0; } static struct watcher_list* find_watcher(uv_loop_t* loop, int wd) { struct watcher_list w; w.wd = wd; return RB_FIND(watcher_root, CAST(&loop->inotify_watchers), &w); } static void maybe_free_watcher_list(struct watcher_list* w, uv_loop_t* loop) { /* if the watcher_list->watchers is being iterated over, we can't free it. */ if ((!w->iterating) && QUEUE_EMPTY(&w->watchers)) { /* No watchers left for this path. Clean up. */ RB_REMOVE(watcher_root, CAST(&loop->inotify_watchers), w); inotify_rm_watch(loop->inotify_fd, w->wd); uv__free(w); } } static void uv__inotify_read(uv_loop_t* loop, uv__io_t* dummy, unsigned int events) { const struct inotify_event* e; struct watcher_list* w; uv_fs_event_t* h; QUEUE queue; QUEUE* q; const char* path; ssize_t size; const char *p; /* needs to be large enough for sizeof(inotify_event) + strlen(path) */ char buf[4096]; for (;;) { do size = read(loop->inotify_fd, buf, sizeof(buf)); while (size == -1 && errno == EINTR); if (size == -1) { assert(errno == EAGAIN || errno == EWOULDBLOCK); break; } assert(size > 0); /* pre-2.6.21 thing, size=0 == read buffer too small */ /* Now we have one or more inotify_event structs. */ for (p = buf; p < buf + size; p += sizeof(*e) + e->len) { e = (const struct inotify_event*) p; events = 0; if (e->mask & (IN_ATTRIB|IN_MODIFY)) events |= UV_CHANGE; if (e->mask & ~(IN_ATTRIB|IN_MODIFY)) events |= UV_RENAME; w = find_watcher(loop, e->wd); if (w == NULL) continue; /* Stale event, no watchers left. */ /* inotify does not return the filename when monitoring a single file * for modifications. Repurpose the filename for API compatibility. * I'm not convinced this is a good thing, maybe it should go. */ path = e->len ? (const char*) (e + 1) : uv__basename_r(w->path); /* We're about to iterate over the queue and call user's callbacks. * What can go wrong? 
* A callback could call uv_fs_event_stop() * and the queue can change under our feet. * So, we use QUEUE_MOVE() trick to safely iterate over the queue. * And we don't free the watcher_list until we're done iterating. * * First, * tell uv_fs_event_stop() (that could be called from a user's callback) * not to free watcher_list. */ w->iterating = 1; QUEUE_MOVE(&w->watchers, &queue); while (!QUEUE_EMPTY(&queue)) { q = QUEUE_HEAD(&queue); h = QUEUE_DATA(q, uv_fs_event_t, watchers); QUEUE_REMOVE(q); QUEUE_INSERT_TAIL(&w->watchers, q); h->cb(h, path, events, 0); } /* done iterating, time to (maybe) free empty watcher_list */ w->iterating = 0; maybe_free_watcher_list(w, loop); } } } int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_EVENT); return 0; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags) { struct watcher_list* w; size_t len; int events; int err; int wd; if (uv__is_active(handle)) return UV_EINVAL; err = init_inotify(handle->loop); if (err) return err; events = IN_ATTRIB | IN_CREATE | IN_MODIFY | IN_DELETE | IN_DELETE_SELF | IN_MOVE_SELF | IN_MOVED_FROM | IN_MOVED_TO; wd = inotify_add_watch(handle->loop->inotify_fd, path, events); if (wd == -1) return UV__ERR(errno); w = find_watcher(handle->loop, wd); if (w) goto no_insert; len = strlen(path) + 1; w = uv__malloc(sizeof(*w) + len); if (w == NULL) return UV_ENOMEM; w->wd = wd; w->path = memcpy(w + 1, path, len); QUEUE_INIT(&w->watchers); w->iterating = 0; RB_INSERT(watcher_root, CAST(&handle->loop->inotify_watchers), w); no_insert: uv__handle_start(handle); QUEUE_INSERT_TAIL(&w->watchers, &handle->watchers); handle->path = w->path; handle->cb = cb; handle->wd = wd; return 0; } int uv_fs_event_stop(uv_fs_event_t* handle) { struct watcher_list* w; if (!uv__is_active(handle)) return 0; w = find_watcher(handle->loop, handle->wd); assert(w != NULL); handle->wd = -1; handle->path = NULL; uv__handle_stop(handle); QUEUE_REMOVE(&handle->watchers); maybe_free_watcher_list(w, handle->loop); return 0; } void uv__fs_event_close(uv_fs_event_t* handle) { uv_fs_event_stop(handle); } gevent-24.11.1/deps/libuv/src/unix/linux-syscalls.c000066400000000000000000000160751471441230600221470ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "linux-syscalls.h" #include #include #include #include #include #if defined(__arm__) # if defined(__thumb__) || defined(__ARM_EABI__) # define UV_SYSCALL_BASE 0 # else # define UV_SYSCALL_BASE 0x900000 # endif #endif /* __arm__ */ #ifndef __NR_recvmmsg # if defined(__x86_64__) # define __NR_recvmmsg 299 # elif defined(__arm__) # define __NR_recvmmsg (UV_SYSCALL_BASE + 365) # endif #endif /* __NR_recvmsg */ #ifndef __NR_sendmmsg # if defined(__x86_64__) # define __NR_sendmmsg 307 # elif defined(__arm__) # define __NR_sendmmsg (UV_SYSCALL_BASE + 374) # endif #endif /* __NR_sendmmsg */ #ifndef __NR_utimensat # if defined(__x86_64__) # define __NR_utimensat 280 # elif defined(__i386__) # define __NR_utimensat 320 # elif defined(__arm__) # define __NR_utimensat (UV_SYSCALL_BASE + 348) # endif #endif /* __NR_utimensat */ #ifndef __NR_preadv # if defined(__x86_64__) # define __NR_preadv 295 # elif defined(__i386__) # define __NR_preadv 333 # elif defined(__arm__) # define __NR_preadv (UV_SYSCALL_BASE + 361) # endif #endif /* __NR_preadv */ #ifndef __NR_pwritev # if defined(__x86_64__) # define __NR_pwritev 296 # elif defined(__i386__) # define __NR_pwritev 334 # elif defined(__arm__) # define __NR_pwritev (UV_SYSCALL_BASE + 362) # endif #endif /* __NR_pwritev */ #ifndef __NR_dup3 # if defined(__x86_64__) # define __NR_dup3 292 # elif defined(__i386__) # define __NR_dup3 330 # elif defined(__arm__) # define __NR_dup3 (UV_SYSCALL_BASE + 358) # endif #endif /* __NR_pwritev */ #ifndef __NR_copy_file_range # if defined(__x86_64__) # define __NR_copy_file_range 326 # elif defined(__i386__) # define __NR_copy_file_range 377 # elif defined(__s390__) # define __NR_copy_file_range 375 # elif defined(__arm__) # define __NR_copy_file_range (UV_SYSCALL_BASE + 391) # elif defined(__aarch64__) # define __NR_copy_file_range 285 # elif defined(__powerpc__) # define __NR_copy_file_range 379 # elif defined(__arc__) # define __NR_copy_file_range 285 # endif #endif /* __NR_copy_file_range */ #ifndef __NR_statx # if defined(__x86_64__) # define __NR_statx 332 # elif defined(__i386__) # define __NR_statx 383 # elif defined(__aarch64__) # define __NR_statx 397 # elif defined(__arm__) # define __NR_statx (UV_SYSCALL_BASE + 397) # elif defined(__ppc__) # define __NR_statx 383 # elif defined(__s390__) # define __NR_statx 379 # endif #endif /* __NR_statx */ #ifndef __NR_getrandom # if defined(__x86_64__) # define __NR_getrandom 318 # elif defined(__i386__) # define __NR_getrandom 355 # elif defined(__aarch64__) # define __NR_getrandom 384 # elif defined(__arm__) # define __NR_getrandom (UV_SYSCALL_BASE + 384) # elif defined(__ppc__) # define __NR_getrandom 359 # elif defined(__s390__) # define __NR_getrandom 349 # endif #endif /* __NR_getrandom */ struct uv__mmsghdr; int uv__sendmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen) { #if defined(__i386__) unsigned long args[4]; int rc; args[0] = (unsigned long) fd; args[1] = (unsigned long) mmsg; args[2] = (unsigned long) vlen; args[3] = /* flags */ 0; /* socketcall() raises EINVAL when SYS_SENDMMSG is not supported. 
*/ rc = syscall(/* __NR_socketcall */ 102, 20 /* SYS_SENDMMSG */, args); if (rc == -1) if (errno == EINVAL) errno = ENOSYS; return rc; #elif defined(__NR_sendmmsg) return syscall(__NR_sendmmsg, fd, mmsg, vlen, /* flags */ 0); #else return errno = ENOSYS, -1; #endif } int uv__recvmmsg(int fd, struct uv__mmsghdr* mmsg, unsigned int vlen) { #if defined(__i386__) unsigned long args[5]; int rc; args[0] = (unsigned long) fd; args[1] = (unsigned long) mmsg; args[2] = (unsigned long) vlen; args[3] = /* flags */ 0; args[4] = /* timeout */ 0; /* socketcall() raises EINVAL when SYS_RECVMMSG is not supported. */ rc = syscall(/* __NR_socketcall */ 102, 19 /* SYS_RECVMMSG */, args); if (rc == -1) if (errno == EINVAL) errno = ENOSYS; return rc; #elif defined(__NR_recvmmsg) return syscall(__NR_recvmmsg, fd, mmsg, vlen, /* flags */ 0, /* timeout */ 0); #else return errno = ENOSYS, -1; #endif } ssize_t uv__preadv(int fd, const struct iovec *iov, int iovcnt, int64_t offset) { #if !defined(__NR_preadv) || defined(__ANDROID_API__) && __ANDROID_API__ < 24 return errno = ENOSYS, -1; #else return syscall(__NR_preadv, fd, iov, iovcnt, (long)offset, (long)(offset >> 32)); #endif } ssize_t uv__pwritev(int fd, const struct iovec *iov, int iovcnt, int64_t offset) { #if !defined(__NR_pwritev) || defined(__ANDROID_API__) && __ANDROID_API__ < 24 return errno = ENOSYS, -1; #else return syscall(__NR_pwritev, fd, iov, iovcnt, (long)offset, (long)(offset >> 32)); #endif } int uv__dup3(int oldfd, int newfd, int flags) { #if !defined(__NR_dup3) || defined(__ANDROID_API__) && __ANDROID_API__ < 21 return errno = ENOSYS, -1; #else return syscall(__NR_dup3, oldfd, newfd, flags); #endif } ssize_t uv__fs_copy_file_range(int fd_in, off_t* off_in, int fd_out, off_t* off_out, size_t len, unsigned int flags) { #ifdef __NR_copy_file_range return syscall(__NR_copy_file_range, fd_in, off_in, fd_out, off_out, len, flags); #else return errno = ENOSYS, -1; #endif } int uv__statx(int dirfd, const char* path, int flags, unsigned int mask, struct uv__statx* statxbuf) { #if !defined(__NR_statx) || defined(__ANDROID_API__) && __ANDROID_API__ < 30 return errno = ENOSYS, -1; #else return syscall(__NR_statx, dirfd, path, flags, mask, statxbuf); #endif } ssize_t uv__getrandom(void* buf, size_t buflen, unsigned flags) { #if !defined(__NR_getrandom) || defined(__ANDROID_API__) && __ANDROID_API__ < 28 return errno = ENOSYS, -1; #else return syscall(__NR_getrandom, buf, buflen, flags); #endif } gevent-24.11.1/deps/libuv/src/unix/linux-syscalls.h000066400000000000000000000052111471441230600221420ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_LINUX_SYSCALL_H_ #define UV_LINUX_SYSCALL_H_ #include #include #include #include #include struct uv__statx_timestamp { int64_t tv_sec; uint32_t tv_nsec; int32_t unused0; }; struct uv__statx { uint32_t stx_mask; uint32_t stx_blksize; uint64_t stx_attributes; uint32_t stx_nlink; uint32_t stx_uid; uint32_t stx_gid; uint16_t stx_mode; uint16_t unused0; uint64_t stx_ino; uint64_t stx_size; uint64_t stx_blocks; uint64_t stx_attributes_mask; struct uv__statx_timestamp stx_atime; struct uv__statx_timestamp stx_btime; struct uv__statx_timestamp stx_ctime; struct uv__statx_timestamp stx_mtime; uint32_t stx_rdev_major; uint32_t stx_rdev_minor; uint32_t stx_dev_major; uint32_t stx_dev_minor; uint64_t unused1[14]; }; ssize_t uv__preadv(int fd, const struct iovec *iov, int iovcnt, int64_t offset); ssize_t uv__pwritev(int fd, const struct iovec *iov, int iovcnt, int64_t offset); int uv__dup3(int oldfd, int newfd, int flags); ssize_t uv__fs_copy_file_range(int fd_in, off_t* off_in, int fd_out, off_t* off_out, size_t len, unsigned int flags); int uv__statx(int dirfd, const char* path, int flags, unsigned int mask, struct uv__statx* statxbuf); ssize_t uv__getrandom(void* buf, size_t buflen, unsigned flags); #endif /* UV_LINUX_SYSCALL_H_ */ gevent-24.11.1/deps/libuv/src/unix/loop-watcher.c000066400000000000000000000105211471441230600215470ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #define UV_LOOP_WATCHER_DEFINE(name, type) \ int uv_##name##_init(uv_loop_t* loop, uv_##name##_t* handle) { \ uv__handle_init(loop, (uv_handle_t*)handle, UV_##type); \ handle->name##_cb = NULL; \ return 0; \ } \ \ int uv_##name##_start(uv_##name##_t* handle, uv_##name##_cb cb) { \ if (uv__is_active(handle)) return 0; \ if (cb == NULL) return UV_EINVAL; \ QUEUE_INSERT_HEAD(&handle->loop->name##_handles, &handle->queue); \ handle->name##_cb = cb; \ uv__handle_start(handle); \ return 0; \ } \ \ int uv_##name##_stop(uv_##name##_t* handle) { \ if (!uv__is_active(handle)) return 0; \ QUEUE_REMOVE(&handle->queue); \ uv__handle_stop(handle); \ return 0; \ } \ \ void uv__run_##name(uv_loop_t* loop) { \ uv_##name##_t* h; \ QUEUE queue; \ QUEUE* q; \ QUEUE_MOVE(&loop->name##_handles, &queue); \ while (!QUEUE_EMPTY(&queue)) { \ q = QUEUE_HEAD(&queue); \ h = QUEUE_DATA(q, uv_##name##_t, queue); \ QUEUE_REMOVE(q); \ QUEUE_INSERT_TAIL(&loop->name##_handles, q); \ h->name##_cb(h); \ } \ } \ \ void uv__##name##_close(uv_##name##_t* handle) { \ uv_##name##_stop(handle); \ } UV_LOOP_WATCHER_DEFINE(prepare, PREPARE) UV_LOOP_WATCHER_DEFINE(check, CHECK) UV_LOOP_WATCHER_DEFINE(idle, IDLE) gevent-24.11.1/deps/libuv/src/unix/loop.c000066400000000000000000000132401471441230600201150ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "uv/tree.h" #include "internal.h" #include "heap-inl.h" #include #include #include int uv_loop_init(uv_loop_t* loop) { uv__loop_internal_fields_t* lfields; void* saved_data; int err; saved_data = loop->data; memset(loop, 0, sizeof(*loop)); loop->data = saved_data; lfields = (uv__loop_internal_fields_t*) uv__calloc(1, sizeof(*lfields)); if (lfields == NULL) return UV_ENOMEM; loop->internal_fields = lfields; err = uv_mutex_init(&lfields->loop_metrics.lock); if (err) goto fail_metrics_mutex_init; heap_init((struct heap*) &loop->timer_heap); QUEUE_INIT(&loop->wq); QUEUE_INIT(&loop->idle_handles); QUEUE_INIT(&loop->async_handles); QUEUE_INIT(&loop->check_handles); QUEUE_INIT(&loop->prepare_handles); QUEUE_INIT(&loop->handle_queue); loop->active_handles = 0; loop->active_reqs.count = 0; loop->nfds = 0; loop->watchers = NULL; loop->nwatchers = 0; QUEUE_INIT(&loop->pending_queue); QUEUE_INIT(&loop->watcher_queue); loop->closing_handles = NULL; uv__update_time(loop); loop->async_io_watcher.fd = -1; loop->async_wfd = -1; loop->signal_pipefd[0] = -1; loop->signal_pipefd[1] = -1; loop->backend_fd = -1; loop->emfile_fd = -1; loop->timer_counter = 0; loop->stop_flag = 0; err = uv__platform_loop_init(loop); if (err) goto fail_platform_init; uv__signal_global_once_init(); err = uv_signal_init(loop, &loop->child_watcher); if (err) goto fail_signal_init; uv__handle_unref(&loop->child_watcher); loop->child_watcher.flags |= UV_HANDLE_INTERNAL; QUEUE_INIT(&loop->process_handles); err = uv_rwlock_init(&loop->cloexec_lock); if (err) goto fail_rwlock_init; err = uv_mutex_init(&loop->wq_mutex); if (err) goto fail_mutex_init; err = uv_async_init(loop, &loop->wq_async, uv__work_done); if (err) goto fail_async_init; uv__handle_unref(&loop->wq_async); loop->wq_async.flags |= UV_HANDLE_INTERNAL; return 0; fail_async_init: uv_mutex_destroy(&loop->wq_mutex); fail_mutex_init: uv_rwlock_destroy(&loop->cloexec_lock); fail_rwlock_init: uv__signal_loop_cleanup(loop); fail_signal_init: uv__platform_loop_delete(loop); fail_platform_init: uv_mutex_destroy(&lfields->loop_metrics.lock); fail_metrics_mutex_init: uv__free(lfields); loop->internal_fields = NULL; uv__free(loop->watchers); loop->nwatchers = 0; return err; } int uv_loop_fork(uv_loop_t* loop) { int err; unsigned int i; uv__io_t* w; err = uv__io_fork(loop); if (err) return err; err = uv__async_fork(loop); if (err) return err; err = uv__signal_loop_fork(loop); if (err) return err; /* Rearm all the watchers that aren't re-queued by the above. */ for (i = 0; i < loop->nwatchers; i++) { w = loop->watchers[i]; if (w == NULL) continue; if (w->pevents != 0 && QUEUE_EMPTY(&w->watcher_queue)) { w->events = 0; /* Force re-registration in uv__io_poll. 
*/ QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue); } } return 0; } void uv__loop_close(uv_loop_t* loop) { uv__loop_internal_fields_t* lfields; uv__signal_loop_cleanup(loop); uv__platform_loop_delete(loop); uv__async_stop(loop); if (loop->emfile_fd != -1) { uv__close(loop->emfile_fd); loop->emfile_fd = -1; } if (loop->backend_fd != -1) { uv__close(loop->backend_fd); loop->backend_fd = -1; } uv_mutex_lock(&loop->wq_mutex); assert(QUEUE_EMPTY(&loop->wq) && "thread pool work queue not empty!"); assert(!uv__has_active_reqs(loop)); uv_mutex_unlock(&loop->wq_mutex); uv_mutex_destroy(&loop->wq_mutex); /* * Note that all thread pool stuff is finished at this point and * it is safe to just destroy rw lock */ uv_rwlock_destroy(&loop->cloexec_lock); #if 0 assert(QUEUE_EMPTY(&loop->pending_queue)); assert(QUEUE_EMPTY(&loop->watcher_queue)); assert(loop->nfds == 0); #endif uv__free(loop->watchers); loop->watchers = NULL; loop->nwatchers = 0; lfields = uv__get_internal_fields(loop); uv_mutex_destroy(&lfields->loop_metrics.lock); uv__free(lfields); loop->internal_fields = NULL; } int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap) { uv__loop_internal_fields_t* lfields; lfields = uv__get_internal_fields(loop); if (option == UV_METRICS_IDLE_TIME) { lfields->flags |= UV_METRICS_IDLE_TIME; return 0; } if (option != UV_LOOP_BLOCK_SIGNAL) return UV_ENOSYS; if (va_arg(ap, int) != SIGPROF) return UV_EINVAL; loop->flags |= UV_LOOP_BLOCK_SIGPROF; return 0; } gevent-24.11.1/deps/libuv/src/unix/netbsd.c000066400000000000000000000147671471441230600204420ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include int uv__platform_loop_init(uv_loop_t* loop) { return uv__kqueue_init(loop); } void uv__platform_loop_delete(uv_loop_t* loop) { } void uv_loadavg(double avg[3]) { struct loadavg info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_LOADAVG}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0) == -1) return; avg[0] = (double) info.ldavg[0] / info.fscale; avg[1] = (double) info.ldavg[1] / info.fscale; avg[2] = (double) info.ldavg[2] / info.fscale; } int uv_exepath(char* buffer, size_t* size) { /* Intermediate buffer, retrieving partial path name does not work * As of NetBSD-8(beta), vnode->path translator does not handle files * with longer names than 31 characters. */ char int_buf[PATH_MAX]; size_t int_size; int mib[4]; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; mib[0] = CTL_KERN; mib[1] = KERN_PROC_ARGS; mib[2] = -1; mib[3] = KERN_PROC_PATHNAME; int_size = ARRAY_SIZE(int_buf); if (sysctl(mib, 4, int_buf, &int_size, NULL, 0)) return UV__ERR(errno); /* Copy string from the intermediate buffer to outer one with appropriate * length. */ /* TODO(bnoordhuis) Check uv__strscpy() return value. */ uv__strscpy(buffer, int_buf, *size); /* Set new size. */ *size = strlen(buffer); return 0; } uint64_t uv_get_free_memory(void) { struct uvmexp info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_UVMEXP}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info.free * sysconf(_SC_PAGESIZE); } uint64_t uv_get_total_memory(void) { #if defined(HW_PHYSMEM64) uint64_t info; int which[] = {CTL_HW, HW_PHYSMEM64}; #else unsigned int info; int which[] = {CTL_HW, HW_PHYSMEM}; #endif size_t size = sizeof(info); if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. 
*/ } int uv_resident_set_memory(size_t* rss) { kvm_t *kd = NULL; struct kinfo_proc2 *kinfo = NULL; pid_t pid; int nprocs; int max_size = sizeof(struct kinfo_proc2); int page_size; page_size = getpagesize(); pid = getpid(); kd = kvm_open(NULL, NULL, NULL, KVM_NO_FILES, "kvm_open"); if (kd == NULL) goto error; kinfo = kvm_getproc2(kd, KERN_PROC_PID, pid, max_size, &nprocs); if (kinfo == NULL) goto error; *rss = kinfo->p_vm_rssize * page_size; kvm_close(kd); return 0; error: if (kd) kvm_close(kd); return UV_EPERM; } int uv_uptime(double* uptime) { time_t now; struct timeval info; size_t size = sizeof(info); static int which[] = {CTL_KERN, KERN_BOOTTIME}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); now = time(NULL); *uptime = (double)(now - info.tv_sec); return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int ticks = (unsigned int)sysconf(_SC_CLK_TCK); unsigned int multiplier = ((uint64_t)1000L / ticks); unsigned int cur = 0; uv_cpu_info_t* cpu_info; u_int64_t* cp_times; char model[512]; u_int64_t cpuspeed; int numcpus; size_t size; int i; size = sizeof(model); if (sysctlbyname("machdep.cpu_brand", &model, &size, NULL, 0) && sysctlbyname("hw.model", &model, &size, NULL, 0)) { return UV__ERR(errno); } size = sizeof(numcpus); if (sysctlbyname("hw.ncpu", &numcpus, &size, NULL, 0)) return UV__ERR(errno); *count = numcpus; /* Only i386 and amd64 have machdep.tsc_freq */ size = sizeof(cpuspeed); if (sysctlbyname("machdep.tsc_freq", &cpuspeed, &size, NULL, 0)) cpuspeed = 0; size = numcpus * CPUSTATES * sizeof(*cp_times); cp_times = uv__malloc(size); if (cp_times == NULL) return UV_ENOMEM; if (sysctlbyname("kern.cp_time", cp_times, &size, NULL, 0)) return UV__ERR(errno); *cpu_infos = uv__malloc(numcpus * sizeof(**cpu_infos)); if (!(*cpu_infos)) { uv__free(cp_times); uv__free(*cpu_infos); return UV_ENOMEM; } for (i = 0; i < numcpus; i++) { cpu_info = &(*cpu_infos)[i]; cpu_info->cpu_times.user = (uint64_t)(cp_times[CP_USER+cur]) * multiplier; cpu_info->cpu_times.nice = (uint64_t)(cp_times[CP_NICE+cur]) * multiplier; cpu_info->cpu_times.sys = (uint64_t)(cp_times[CP_SYS+cur]) * multiplier; cpu_info->cpu_times.idle = (uint64_t)(cp_times[CP_IDLE+cur]) * multiplier; cpu_info->cpu_times.irq = (uint64_t)(cp_times[CP_INTR+cur]) * multiplier; cpu_info->model = uv__strdup(model); cpu_info->speed = (int)(cpuspeed/(uint64_t) 1e6); cur += CPUSTATES; } uv__free(cp_times); return 0; } int uv__random_sysctl(void* buf, size_t len) { static int name[] = {CTL_KERN, KERN_ARND}; size_t count, req; unsigned char* p; p = buf; while (len) { req = len < 32 ? len : 32; count = req; if (sysctl(name, ARRAY_SIZE(name), p, &count, NULL, 0) == -1) return UV__ERR(errno); if (count != req) return UV_EIO; /* Can't happen. */ p += count; len -= count; } return 0; } gevent-24.11.1/deps/libuv/src/unix/no-fsevents.c000066400000000000000000000030511471441230600214120ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { return UV_ENOSYS; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* filename, unsigned int flags) { return UV_ENOSYS; } int uv_fs_event_stop(uv_fs_event_t* handle) { return UV_ENOSYS; } void uv__fs_event_close(uv_fs_event_t* handle) { UNREACHABLE(); } gevent-24.11.1/deps/libuv/src/unix/no-proctitle.c000066400000000000000000000027761471441230600215770ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include char** uv_setup_args(int argc, char** argv) { return argv; } void uv__process_title_cleanup(void) { } int uv_set_process_title(const char* title) { return 0; } int uv_get_process_title(char* buffer, size_t size) { if (buffer == NULL || size == 0) return UV_EINVAL; buffer[0] = '\0'; return 0; } gevent-24.11.1/deps/libuv/src/unix/openbsd.c000066400000000000000000000136721471441230600206070ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #include int uv__platform_loop_init(uv_loop_t* loop) { return uv__kqueue_init(loop); } void uv__platform_loop_delete(uv_loop_t* loop) { } void uv_loadavg(double avg[3]) { struct loadavg info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_LOADAVG}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0) < 0) return; avg[0] = (double) info.ldavg[0] / info.fscale; avg[1] = (double) info.ldavg[1] / info.fscale; avg[2] = (double) info.ldavg[2] / info.fscale; } int uv_exepath(char* buffer, size_t* size) { int mib[4]; char **argsbuf = NULL; size_t argsbuf_size = 100U; size_t exepath_size; pid_t mypid; int err; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; mypid = getpid(); for (;;) { err = UV_ENOMEM; argsbuf = uv__reallocf(argsbuf, argsbuf_size); if (argsbuf == NULL) goto out; mib[0] = CTL_KERN; mib[1] = KERN_PROC_ARGS; mib[2] = mypid; mib[3] = KERN_PROC_ARGV; if (sysctl(mib, ARRAY_SIZE(mib), argsbuf, &argsbuf_size, NULL, 0) == 0) { break; } if (errno != ENOMEM) { err = UV__ERR(errno); goto out; } argsbuf_size *= 2U; } if (argsbuf[0] == NULL) { err = UV_EINVAL; /* FIXME(bnoordhuis) More appropriate error. */ goto out; } *size -= 1; exepath_size = strlen(argsbuf[0]); if (*size > exepath_size) *size = exepath_size; memcpy(buffer, argsbuf[0], *size); buffer[*size] = '\0'; err = 0; out: uv__free(argsbuf); return err; } uint64_t uv_get_free_memory(void) { struct uvmexp info; size_t size = sizeof(info); int which[] = {CTL_VM, VM_UVMEXP}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info.free * sysconf(_SC_PAGESIZE); } uint64_t uv_get_total_memory(void) { uint64_t info; int which[] = {CTL_HW, HW_PHYSMEM64}; size_t size = sizeof(info); if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); return (uint64_t) info; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. 
*/ } int uv_resident_set_memory(size_t* rss) { struct kinfo_proc kinfo; size_t page_size = getpagesize(); size_t size = sizeof(struct kinfo_proc); int mib[6]; mib[0] = CTL_KERN; mib[1] = KERN_PROC; mib[2] = KERN_PROC_PID; mib[3] = getpid(); mib[4] = sizeof(struct kinfo_proc); mib[5] = 1; if (sysctl(mib, ARRAY_SIZE(mib), &kinfo, &size, NULL, 0) < 0) return UV__ERR(errno); *rss = kinfo.p_vm_rssize * page_size; return 0; } int uv_uptime(double* uptime) { time_t now; struct timeval info; size_t size = sizeof(info); static int which[] = {CTL_KERN, KERN_BOOTTIME}; if (sysctl(which, ARRAY_SIZE(which), &info, &size, NULL, 0)) return UV__ERR(errno); now = time(NULL); *uptime = (double)(now - info.tv_sec); return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { unsigned int ticks = (unsigned int)sysconf(_SC_CLK_TCK), multiplier = ((uint64_t)1000L / ticks), cpuspeed; uint64_t info[CPUSTATES]; char model[512]; int numcpus = 1; int which[] = {CTL_HW,HW_MODEL}; int percpu[] = {CTL_KERN,KERN_CPTIME2,0}; size_t size; int i, j; uv_cpu_info_t* cpu_info; size = sizeof(model); if (sysctl(which, ARRAY_SIZE(which), &model, &size, NULL, 0)) return UV__ERR(errno); which[1] = HW_NCPUONLINE; size = sizeof(numcpus); if (sysctl(which, ARRAY_SIZE(which), &numcpus, &size, NULL, 0)) return UV__ERR(errno); *cpu_infos = uv__malloc(numcpus * sizeof(**cpu_infos)); if (!(*cpu_infos)) return UV_ENOMEM; i = 0; *count = numcpus; which[1] = HW_CPUSPEED; size = sizeof(cpuspeed); if (sysctl(which, ARRAY_SIZE(which), &cpuspeed, &size, NULL, 0)) goto error; size = sizeof(info); for (i = 0; i < numcpus; i++) { percpu[2] = i; if (sysctl(percpu, ARRAY_SIZE(percpu), &info, &size, NULL, 0)) goto error; cpu_info = &(*cpu_infos)[i]; cpu_info->cpu_times.user = (uint64_t)(info[CP_USER]) * multiplier; cpu_info->cpu_times.nice = (uint64_t)(info[CP_NICE]) * multiplier; cpu_info->cpu_times.sys = (uint64_t)(info[CP_SYS]) * multiplier; cpu_info->cpu_times.idle = (uint64_t)(info[CP_IDLE]) * multiplier; cpu_info->cpu_times.irq = (uint64_t)(info[CP_INTR]) * multiplier; cpu_info->model = uv__strdup(model); cpu_info->speed = cpuspeed; } return 0; error: *count = 0; for (j = 0; j < i; j++) uv__free((*cpu_infos)[j].model); uv__free(*cpu_infos); *cpu_infos = NULL; return UV__ERR(errno); } gevent-24.11.1/deps/libuv/src/unix/os390-proctitle.c000066400000000000000000000067621471441230600220370ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include static uv_mutex_t process_title_mutex; static uv_once_t process_title_mutex_once = UV_ONCE_INIT; static char* process_title = NULL; static void* args_mem = NULL; static void init_process_title_mutex_once(void) { uv_mutex_init(&process_title_mutex); } char** uv_setup_args(int argc, char** argv) { char** new_argv; size_t size; char* s; int i; if (argc <= 0) return argv; /* Calculate how much memory we need for the argv strings. */ size = 0; for (i = 0; i < argc; i++) size += strlen(argv[i]) + 1; /* Add space for the argv pointers. */ size += (argc + 1) * sizeof(char*); new_argv = uv__malloc(size); if (new_argv == NULL) return argv; /* Copy over the strings and set up the pointer table. */ s = (char*) &new_argv[argc + 1]; for (i = 0; i < argc; i++) { size = strlen(argv[i]) + 1; memcpy(s, argv[i], size); new_argv[i] = s; s += size; } new_argv[i] = NULL; args_mem = new_argv; process_title = uv__strdup(argv[0]); return new_argv; } int uv_set_process_title(const char* title) { char* new_title; /* If uv_setup_args wasn't called or failed, we can't continue. */ if (args_mem == NULL) return UV_ENOBUFS; /* We cannot free this pointer when libuv shuts down, * the process may still be using it. */ new_title = uv__strdup(title); if (new_title == NULL) return UV_ENOMEM; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); if (process_title != NULL) uv__free(process_title); process_title = new_title; uv_mutex_unlock(&process_title_mutex); return 0; } int uv_get_process_title(char* buffer, size_t size) { size_t len; if (buffer == NULL || size == 0) return UV_EINVAL; /* If uv_setup_args wasn't called or failed, we can't continue. */ if (args_mem == NULL || process_title == NULL) return UV_ENOBUFS; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); len = strlen(process_title); if (size <= len) { uv_mutex_unlock(&process_title_mutex); return UV_ENOBUFS; } strcpy(buffer, process_title); uv_mutex_unlock(&process_title_mutex); return 0; } void uv__process_title_cleanup(void) { uv__free(args_mem); /* Keep valgrind happy. */ args_mem = NULL; } gevent-24.11.1/deps/libuv/src/unix/os390-syscalls.c000066400000000000000000000276471471441230600216740ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "os390-syscalls.h" #include #include #include #include #include static QUEUE global_epoll_queue; static uv_mutex_t global_epoll_lock; static uv_once_t once = UV_ONCE_INIT; int scandir(const char* maindir, struct dirent*** namelist, int (*filter)(const struct dirent*), int (*compar)(const struct dirent**, const struct dirent **)) { struct dirent** nl; struct dirent** nl_copy; struct dirent* dirent; unsigned count; size_t allocated; DIR* mdir; nl = NULL; count = 0; allocated = 0; mdir = opendir(maindir); if (!mdir) return -1; for (;;) { dirent = readdir(mdir); if (!dirent) break; if (!filter || filter(dirent)) { struct dirent* copy; copy = uv__malloc(sizeof(*copy)); if (!copy) goto error; memcpy(copy, dirent, sizeof(*copy)); nl_copy = uv__realloc(nl, sizeof(*copy) * (count + 1)); if (nl_copy == NULL) { uv__free(copy); goto error; } nl = nl_copy; nl[count++] = copy; } } qsort(nl, count, sizeof(struct dirent *), (int (*)(const void *, const void *)) compar); closedir(mdir); *namelist = nl; return count; error: while (count > 0) { dirent = nl[--count]; uv__free(dirent); } uv__free(nl); closedir(mdir); errno = ENOMEM; return -1; } static unsigned int next_power_of_two(unsigned int val) { val -= 1; val |= val >> 1; val |= val >> 2; val |= val >> 4; val |= val >> 8; val |= val >> 16; val += 1; return val; } static void maybe_resize(uv__os390_epoll* lst, unsigned int len) { unsigned int newsize; unsigned int i; struct pollfd* newlst; struct pollfd event; if (len <= lst->size) return; if (lst->size == 0) event.fd = -1; else { /* Extract the message queue at the end. */ event = lst->items[lst->size - 1]; lst->items[lst->size - 1].fd = -1; } newsize = next_power_of_two(len); newlst = uv__reallocf(lst->items, newsize * sizeof(lst->items[0])); if (newlst == NULL) abort(); for (i = lst->size; i < newsize; ++i) newlst[i].fd = -1; /* Restore the message queue at the end */ newlst[newsize - 1] = event; lst->items = newlst; lst->size = newsize; } void uv__os390_cleanup(void) { msgctl(uv_backend_fd(uv_default_loop()), IPC_RMID, NULL); } static void init_message_queue(uv__os390_epoll* lst) { struct { long int header; char body; } msg; /* initialize message queue */ lst->msg_queue = msgget(IPC_PRIVATE, 0600 | IPC_CREAT); if (lst->msg_queue == -1) abort(); /* On z/OS, the message queue will be affiliated with the process only when a send is performed on it. Once this is done, the system can be queried for all message queues belonging to our process id. 
*/ msg.header = 1; if (msgsnd(lst->msg_queue, &msg, sizeof(msg.body), 0) != 0) abort(); /* Clean up the dummy message sent above */ if (msgrcv(lst->msg_queue, &msg, sizeof(msg.body), 0, 0) != sizeof(msg.body)) abort(); } static void before_fork(void) { uv_mutex_lock(&global_epoll_lock); } static void after_fork(void) { uv_mutex_unlock(&global_epoll_lock); } static void child_fork(void) { QUEUE* q; uv_once_t child_once = UV_ONCE_INIT; /* reset once */ memcpy(&once, &child_once, sizeof(child_once)); /* reset epoll list */ while (!QUEUE_EMPTY(&global_epoll_queue)) { uv__os390_epoll* lst; q = QUEUE_HEAD(&global_epoll_queue); QUEUE_REMOVE(q); lst = QUEUE_DATA(q, uv__os390_epoll, member); uv__free(lst->items); lst->items = NULL; lst->size = 0; } uv_mutex_unlock(&global_epoll_lock); uv_mutex_destroy(&global_epoll_lock); } static void epoll_init(void) { QUEUE_INIT(&global_epoll_queue); if (uv_mutex_init(&global_epoll_lock)) abort(); if (pthread_atfork(&before_fork, &after_fork, &child_fork)) abort(); } uv__os390_epoll* epoll_create1(int flags) { uv__os390_epoll* lst; lst = uv__malloc(sizeof(*lst)); if (lst != NULL) { /* initialize list */ lst->size = 0; lst->items = NULL; init_message_queue(lst); maybe_resize(lst, 1); lst->items[lst->size - 1].fd = lst->msg_queue; lst->items[lst->size - 1].events = POLLIN; lst->items[lst->size - 1].revents = 0; uv_once(&once, epoll_init); uv_mutex_lock(&global_epoll_lock); QUEUE_INSERT_TAIL(&global_epoll_queue, &lst->member); uv_mutex_unlock(&global_epoll_lock); } return lst; } int epoll_ctl(uv__os390_epoll* lst, int op, int fd, struct epoll_event *event) { uv_mutex_lock(&global_epoll_lock); if (op == EPOLL_CTL_DEL) { if (fd >= lst->size || lst->items[fd].fd == -1) { uv_mutex_unlock(&global_epoll_lock); errno = ENOENT; return -1; } lst->items[fd].fd = -1; } else if (op == EPOLL_CTL_ADD) { /* Resizing to 'fd + 1' would expand the list to contain at least * 'fd'. But we need to guarantee that the last index on the list * is reserved for the message queue. So specify 'fd + 2' instead. 
*/ maybe_resize(lst, fd + 2); if (lst->items[fd].fd != -1) { uv_mutex_unlock(&global_epoll_lock); errno = EEXIST; return -1; } lst->items[fd].fd = fd; lst->items[fd].events = event->events; lst->items[fd].revents = 0; } else if (op == EPOLL_CTL_MOD) { if (fd >= lst->size - 1 || lst->items[fd].fd == -1) { uv_mutex_unlock(&global_epoll_lock); errno = ENOENT; return -1; } lst->items[fd].events = event->events; lst->items[fd].revents = 0; } else abort(); uv_mutex_unlock(&global_epoll_lock); return 0; } #define EP_MAX_PFDS (ULONG_MAX / sizeof(struct pollfd)) #define EP_MAX_EVENTS (INT_MAX / sizeof(struct epoll_event)) int epoll_wait(uv__os390_epoll* lst, struct epoll_event* events, int maxevents, int timeout) { nmsgsfds_t size; struct pollfd* pfds; int pollret; int pollfdret; int pollmsgret; int reventcount; int nevents; struct pollfd msg_fd; int i; if (!lst || !lst->items || !events) { errno = EFAULT; return -1; } if (lst->size > EP_MAX_PFDS) { errno = EINVAL; return -1; } if (maxevents <= 0 || maxevents > EP_MAX_EVENTS) { errno = EINVAL; return -1; } assert(lst->size > 0); _SET_FDS_MSGS(size, 1, lst->size - 1); pfds = lst->items; pollret = poll(pfds, size, timeout); if (pollret <= 0) return pollret; pollfdret = _NFDS(pollret); pollmsgret = _NMSGS(pollret); reventcount = 0; nevents = 0; msg_fd = pfds[lst->size - 1]; /* message queue is always last entry */ maxevents = maxevents - pollmsgret; /* allow spot for message queue */ for (i = 0; i < lst->size - 1 && nevents < maxevents && reventcount < pollfdret; ++i) { struct epoll_event ev; struct pollfd* pfd; pfd = &pfds[i]; if (pfd->fd == -1 || pfd->revents == 0) continue; ev.fd = pfd->fd; ev.events = pfd->revents; ev.is_msg = 0; reventcount++; events[nevents++] = ev; } if (pollmsgret > 0 && msg_fd.revents != 0 && msg_fd.fd != -1) { struct epoll_event ev; ev.fd = msg_fd.fd; ev.events = msg_fd.revents; ev.is_msg = 1; events[nevents++] = ev; } return nevents; } int epoll_file_close(int fd) { QUEUE* q; uv_once(&once, epoll_init); uv_mutex_lock(&global_epoll_lock); QUEUE_FOREACH(q, &global_epoll_queue) { uv__os390_epoll* lst; lst = QUEUE_DATA(q, uv__os390_epoll, member); if (fd < lst->size && lst->items != NULL && lst->items[fd].fd != -1) lst->items[fd].fd = -1; } uv_mutex_unlock(&global_epoll_lock); return 0; } void epoll_queue_close(uv__os390_epoll* lst) { /* Remove epoll instance from global queue */ uv_mutex_lock(&global_epoll_lock); QUEUE_REMOVE(&lst->member); uv_mutex_unlock(&global_epoll_lock); /* Free resources */ msgctl(lst->msg_queue, IPC_RMID, NULL); lst->msg_queue = -1; uv__free(lst->items); lst->items = NULL; } char* mkdtemp(char* path) { static const char* tempchars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; static const size_t num_chars = 62; static const size_t num_x = 6; char *ep, *cp; unsigned int tries, i; size_t len; uint64_t v; int fd; int retval; int saved_errno; len = strlen(path); ep = path + len; if (len < num_x || strncmp(ep - num_x, "XXXXXX", num_x)) { errno = EINVAL; return NULL; } fd = open("/dev/urandom", O_RDONLY); if (fd == -1) return NULL; tries = TMP_MAX; retval = -1; do { if (read(fd, &v, sizeof(v)) != sizeof(v)) break; cp = ep - num_x; for (i = 0; i < num_x; i++) { *cp++ = tempchars[v % num_chars]; v /= num_chars; } if (mkdir(path, S_IRWXU) == 0) { retval = 0; break; } else if (errno != EEXIST) break; } while (--tries); saved_errno = errno; uv__close(fd); if (tries == 0) { errno = EEXIST; return NULL; } if (retval == -1) { errno = saved_errno; return NULL; } return path; } ssize_t 
os390_readlink(const char* path, char* buf, size_t len) { ssize_t rlen; ssize_t vlen; ssize_t plen; char* delimiter; char old_delim; char* tmpbuf; char realpathstr[PATH_MAX + 1]; tmpbuf = uv__malloc(len + 1); if (tmpbuf == NULL) { errno = ENOMEM; return -1; } rlen = readlink(path, tmpbuf, len); if (rlen < 0) { uv__free(tmpbuf); return rlen; } if (rlen < 3 || strncmp("/$", tmpbuf, 2) != 0) { /* Straightforward readlink. */ memcpy(buf, tmpbuf, rlen); uv__free(tmpbuf); return rlen; } /* * There is a parmlib variable at the beginning * which needs interpretation. */ tmpbuf[rlen] = '\0'; delimiter = strchr(tmpbuf + 2, '/'); if (delimiter == NULL) /* No slash at the end */ delimiter = strchr(tmpbuf + 2, '\0'); /* Read real path of the variable. */ old_delim = *delimiter; *delimiter = '\0'; if (realpath(tmpbuf, realpathstr) == NULL) { uv__free(tmpbuf); return -1; } /* realpathstr is not guaranteed to end with null byte.*/ realpathstr[PATH_MAX] = '\0'; /* Reset the delimiter and fill up the buffer. */ *delimiter = old_delim; plen = strlen(delimiter); vlen = strlen(realpathstr); rlen = plen + vlen; if (rlen > len) { uv__free(tmpbuf); errno = ENAMETOOLONG; return -1; } memcpy(buf, realpathstr, vlen); memcpy(buf + vlen, delimiter, plen); /* Done using temporary buffer. */ uv__free(tmpbuf); return rlen; } int sem_init(UV_PLATFORM_SEM_T* semid, int pshared, unsigned int value) { UNREACHABLE(); } int sem_destroy(UV_PLATFORM_SEM_T* semid) { UNREACHABLE(); } int sem_post(UV_PLATFORM_SEM_T* semid) { UNREACHABLE(); } int sem_trywait(UV_PLATFORM_SEM_T* semid) { UNREACHABLE(); } int sem_wait(UV_PLATFORM_SEM_T* semid) { UNREACHABLE(); } gevent-24.11.1/deps/libuv/src/unix/os390-syscalls.h000066400000000000000000000051371471441230600216670ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef UV_OS390_SYSCALL_H_ #define UV_OS390_SYSCALL_H_ #include "uv.h" #include "internal.h" #include #include #include #include "zos-base.h" #define EPOLL_CTL_ADD 1 #define EPOLL_CTL_DEL 2 #define EPOLL_CTL_MOD 3 #define MAX_EPOLL_INSTANCES 256 #define MAX_ITEMS_PER_EPOLL 1024 #define UV__O_CLOEXEC 0x80000 struct epoll_event { int events; int fd; int is_msg; }; typedef struct { QUEUE member; struct pollfd* items; unsigned long size; int msg_queue; } uv__os390_epoll; /* epoll api */ uv__os390_epoll* epoll_create1(int flags); int epoll_ctl(uv__os390_epoll* ep, int op, int fd, struct epoll_event *event); int epoll_wait(uv__os390_epoll* ep, struct epoll_event *events, int maxevents, int timeout); int epoll_file_close(int fd); /* utility functions */ int scandir(const char* maindir, struct dirent*** namelist, int (*filter)(const struct dirent *), int (*compar)(const struct dirent **, const struct dirent **)); char *mkdtemp(char* path); ssize_t os390_readlink(const char* path, char* buf, size_t len); size_t strnlen(const char* str, size_t maxlen); int sem_init(UV_PLATFORM_SEM_T* semid, int pshared, unsigned int value); int sem_destroy(UV_PLATFORM_SEM_T* semid); int sem_post(UV_PLATFORM_SEM_T* semid); int sem_trywait(UV_PLATFORM_SEM_T* semid); int sem_wait(UV_PLATFORM_SEM_T* semid); void uv__os390_cleanup(void); #endif /* UV_OS390_SYSCALL_H_ */ gevent-24.11.1/deps/libuv/src/unix/os390.c000066400000000000000000000654171471441230600200360ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "internal.h" #include #include #include #include #include #include #include #include #include #include "zos-base.h" #if defined(__clang__) #include "csrsic.h" #else #include "//'SYS1.SAMPLIB(CSRSIC)'" #endif #define CVT_PTR 0x10 #define PSA_PTR 0x00 #define CSD_OFFSET 0x294 /* Long-term average CPU service used by this logical partition, in millions of service units per hour. If this value is above the partition's defined capacity, the partition will be capped. It is calculated using the physical CPU adjustment factor (RCTPCPUA) so it may not match other measures of service which are based on the logical CPU adjustment factor. It is available if the hardware supports LPAR cluster. */ #define RCTLACS_OFFSET 0xC4 /* 32-bit count of alive CPUs. 
This includes both CPs and IFAs */ #define CSD_NUMBER_ONLINE_CPUS 0xD4 /* Address of system resources manager (SRM) control table */ #define CVTOPCTP_OFFSET 0x25C /* Address of the RCT table */ #define RMCTRCT_OFFSET 0xE4 /* Address of the rsm control and enumeration area. */ #define CVTRCEP_OFFSET 0x490 /* Total number of frames currently on all available frame queues. */ #define RCEAFC_OFFSET 0x088 /* CPC model length from the CSRSI Service. */ #define CPCMODEL_LENGTH 16 /* Pointer to the home (current) ASCB. */ #define PSAAOLD 0x224 /* Pointer to rsm address space block extension. */ #define ASCBRSME 0x16C /* NUMBER OF FRAMES CURRENTLY IN USE BY THIS ADDRESS SPACE. It does not include 2G frames. */ #define RAXFMCT 0x2C /* Thread Entry constants */ #define PGTH_CURRENT 1 #define PGTH_LEN 26 #define PGTHAPATH 0x20 #pragma linkage(BPX4GTH, OS) #pragma linkage(BPX1GTH, OS) /* TOD Clock resolution in nanoseconds */ #define TOD_RES 4.096 typedef unsigned data_area_ptr_assign_type; typedef union { struct { #if defined(_LP64) data_area_ptr_assign_type lower; #endif data_area_ptr_assign_type assign; }; char* deref; } data_area_ptr; void uv_loadavg(double avg[3]) { /* TODO: implement the following */ avg[0] = 0; avg[1] = 0; avg[2] = 0; } int uv__platform_loop_init(uv_loop_t* loop) { uv__os390_epoll* ep; ep = epoll_create1(0); loop->ep = ep; if (ep == NULL) return UV__ERR(errno); return 0; } void uv__platform_loop_delete(uv_loop_t* loop) { if (loop->ep != NULL) { epoll_queue_close(loop->ep); loop->ep = NULL; } } uint64_t uv__hrtime(uv_clocktype_t type) { unsigned long long timestamp; __stckf(×tamp); /* Convert to nanoseconds */ return timestamp / TOD_RES; } static int getexe(char* buf, size_t len) { return uv__strscpy(buf, __getargv()[0], len); } /* * We could use a static buffer for the path manipulations that we need outside * of the function, but this function could be called by multiple consumers and * we don't want to potentially create a race condition in the use of snprintf. * There is no direct way of getting the exe path in zOS - either through /procfs * or through some libc APIs. The below approach is to parse the argv[0]'s pattern * and use it in conjunction with PATH environment variable to craft one. */ int uv_exepath(char* buffer, size_t* size) { int res; char args[PATH_MAX]; int pid; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; res = getexe(args, sizeof(args)); if (res < 0) return UV_EINVAL; return uv__search_path(args, buffer, size); } uint64_t uv_get_free_memory(void) { uint64_t freeram; data_area_ptr cvt = {0}; data_area_ptr rcep = {0}; cvt.assign = *(data_area_ptr_assign_type*)(CVT_PTR); rcep.assign = *(data_area_ptr_assign_type*)(cvt.deref + CVTRCEP_OFFSET); freeram = (uint64_t)*((uint32_t*)(rcep.deref + RCEAFC_OFFSET)) * 4096; return freeram; } uint64_t uv_get_total_memory(void) { /* Use CVTRLSTG to get the size of actual real storage online at IPL in K. */ return (uint64_t)((int)((char *__ptr32 *__ptr32 *)0)[4][214]) * 1024; } uint64_t uv_get_constrained_memory(void) { struct rlimit rl; /* RLIMIT_MEMLIMIT return value is in megabytes rather than bytes. */ if (getrlimit(RLIMIT_MEMLIMIT, &rl) == 0) return rl.rlim_cur * 1024 * 1024; return 0; /* There is no memory limit set. 
*/ } int uv_resident_set_memory(size_t* rss) { char* ascb; char* rax; size_t nframes; ascb = *(char* __ptr32 *)(PSA_PTR + PSAAOLD); rax = *(char* __ptr32 *)(ascb + ASCBRSME); nframes = *(unsigned int*)(rax + RAXFMCT); *rss = nframes * sysconf(_SC_PAGESIZE); return 0; } int uv_uptime(double* uptime) { struct utmpx u ; struct utmpx *v; time64_t t; u.ut_type = BOOT_TIME; v = getutxid(&u); if (v == NULL) return -1; *uptime = difftime64(time64(&t), v->ut_tv.tv_sec); return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { uv_cpu_info_t* cpu_info; int idx; siv1v2 info; data_area_ptr cvt = {0}; data_area_ptr csd = {0}; data_area_ptr rmctrct = {0}; data_area_ptr cvtopctp = {0}; int cpu_usage_avg; cvt.assign = *(data_area_ptr_assign_type*)(CVT_PTR); csd.assign = *((data_area_ptr_assign_type *) (cvt.deref + CSD_OFFSET)); cvtopctp.assign = *((data_area_ptr_assign_type *) (cvt.deref + CVTOPCTP_OFFSET)); rmctrct.assign = *((data_area_ptr_assign_type *) (cvtopctp.deref + RMCTRCT_OFFSET)); *count = *((int*) (csd.deref + CSD_NUMBER_ONLINE_CPUS)); cpu_usage_avg = *((unsigned short int*) (rmctrct.deref + RCTLACS_OFFSET)); *cpu_infos = uv__malloc(*count * sizeof(uv_cpu_info_t)); if (!*cpu_infos) return UV_ENOMEM; cpu_info = *cpu_infos; idx = 0; while (idx < *count) { cpu_info->speed = *(int*)(info.siv1v2si22v1.si22v1cpucapability); cpu_info->model = uv__malloc(CPCMODEL_LENGTH + 1); memset(cpu_info->model, '\0', CPCMODEL_LENGTH + 1); memcpy(cpu_info->model, info.siv1v2si11v1.si11v1cpcmodel, CPCMODEL_LENGTH); cpu_info->cpu_times.user = cpu_usage_avg; /* TODO: implement the following */ cpu_info->cpu_times.sys = 0; cpu_info->cpu_times.idle = 0; cpu_info->cpu_times.irq = 0; cpu_info->cpu_times.nice = 0; ++cpu_info; ++idx; } return 0; } static int uv__interface_addresses_v6(uv_interface_address_t** addresses, int* count) { uv_interface_address_t* address; int sockfd; int maxsize; __net_ifconf6header_t ifc; __net_ifconf6entry_t* ifr; __net_ifconf6entry_t* p; unsigned int i; int count_names; unsigned char netmask[16] = {0}; *count = 0; /* Assume maximum buffer size allowable */ maxsize = 16384; if (0 > (sockfd = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP))) return UV__ERR(errno); ifc.__nif6h_buffer = uv__calloc(1, maxsize); if (ifc.__nif6h_buffer == NULL) { uv__close(sockfd); return UV_ENOMEM; } ifc.__nif6h_version = 1; ifc.__nif6h_buflen = maxsize; if (ioctl(sockfd, SIOCGIFCONF6, &ifc) == -1) { /* This will error on a system that does not support IPv6. However, we want * to treat this as there being 0 interfaces so we can continue to get IPv4 * interfaces in uv_interface_addresses(). So return 0 instead of the error. 
*/ uv__free(ifc.__nif6h_buffer); uv__close(sockfd); errno = 0; return 0; } ifr = (__net_ifconf6entry_t*)(ifc.__nif6h_buffer); while ((char*)ifr < (char*)ifc.__nif6h_buffer + ifc.__nif6h_buflen) { p = ifr; ifr = (__net_ifconf6entry_t*)((char*)ifr + ifc.__nif6h_entrylen); if (!(p->__nif6e_addr.sin6_family == AF_INET6)) continue; if (!(p->__nif6e_flags & _NIF6E_FLAGS_ON_LINK_ACTIVE)) continue; ++(*count); } if ((*count) == 0) { uv__free(ifc.__nif6h_buffer); uv__close(sockfd); return 0; } /* Alloc the return interface structs */ *addresses = uv__calloc(1, *count * sizeof(uv_interface_address_t)); if (!(*addresses)) { uv__free(ifc.__nif6h_buffer); uv__close(sockfd); return UV_ENOMEM; } address = *addresses; count_names = 0; ifr = (__net_ifconf6entry_t*)(ifc.__nif6h_buffer); while ((char*)ifr < (char*)ifc.__nif6h_buffer + ifc.__nif6h_buflen) { p = ifr; ifr = (__net_ifconf6entry_t*)((char*)ifr + ifc.__nif6h_entrylen); if (!(p->__nif6e_addr.sin6_family == AF_INET6)) continue; if (!(p->__nif6e_flags & _NIF6E_FLAGS_ON_LINK_ACTIVE)) continue; /* All conditions above must match count loop */ i = 0; /* Ignore EBCDIC space (0x40) padding in name */ while (i < ARRAY_SIZE(p->__nif6e_name) && p->__nif6e_name[i] != 0x40 && p->__nif6e_name[i] != 0) ++i; address->name = uv__malloc(i + 1); if (address->name == NULL) { uv_free_interface_addresses(*addresses, count_names); uv__free(ifc.__nif6h_buffer); uv__close(sockfd); return UV_ENOMEM; } memcpy(address->name, p->__nif6e_name, i); address->name[i] = '\0'; __e2a_s(address->name); count_names++; address->address.address6 = *((struct sockaddr_in6*) &p->__nif6e_addr); for (i = 0; i < (p->__nif6e_prefixlen / 8); i++) netmask[i] = 0xFF; if (p->__nif6e_prefixlen % 8) netmask[i] = 0xFF << (8 - (p->__nif6e_prefixlen % 8)); address->netmask.netmask6.sin6_len = p->__nif6e_prefixlen; memcpy(&(address->netmask.netmask6.sin6_addr), netmask, 16); address->netmask.netmask6.sin6_family = AF_INET6; address->is_internal = p->__nif6e_flags & _NIF6E_FLAGS_LOOPBACK ? 
1 : 0; address++; } uv__free(ifc.__nif6h_buffer); uv__close(sockfd); return 0; } int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { uv_interface_address_t* address; int sockfd; int maxsize; struct ifconf ifc; struct ifreq flg; struct ifreq* ifr; struct ifreq* p; uv_interface_address_t* addresses_v6; int count_v6; unsigned int i; int rc; int count_names; *count = 0; *addresses = NULL; /* get the ipv6 addresses first */ if ((rc = uv__interface_addresses_v6(&addresses_v6, &count_v6)) != 0) return rc; /* now get the ipv4 addresses */ /* Assume maximum buffer size allowable */ maxsize = 16384; sockfd = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP); if (0 > sockfd) { if (count_v6) uv_free_interface_addresses(addresses_v6, count_v6); return UV__ERR(errno); } ifc.ifc_req = uv__calloc(1, maxsize); if (ifc.ifc_req == NULL) { if (count_v6) uv_free_interface_addresses(addresses_v6, count_v6); uv__close(sockfd); return UV_ENOMEM; } ifc.ifc_len = maxsize; if (ioctl(sockfd, SIOCGIFCONF, &ifc) == -1) { if (count_v6) uv_free_interface_addresses(addresses_v6, count_v6); uv__free(ifc.ifc_req); uv__close(sockfd); return UV__ERR(errno); } #define MAX(a,b) (((a)>(b))?(a):(b)) #define ADDR_SIZE(p) MAX((p).sa_len, sizeof(p)) /* Count all up and running ipv4/ipv6 addresses */ ifr = ifc.ifc_req; while ((char*)ifr < (char*)ifc.ifc_req + ifc.ifc_len) { p = ifr; ifr = (struct ifreq*) ((char*)ifr + sizeof(ifr->ifr_name) + ADDR_SIZE(ifr->ifr_addr)); if (!(p->ifr_addr.sa_family == AF_INET6 || p->ifr_addr.sa_family == AF_INET)) continue; memcpy(flg.ifr_name, p->ifr_name, sizeof(flg.ifr_name)); if (ioctl(sockfd, SIOCGIFFLAGS, &flg) == -1) { if (count_v6) uv_free_interface_addresses(addresses_v6, count_v6); uv__free(ifc.ifc_req); uv__close(sockfd); return UV__ERR(errno); } if (!(flg.ifr_flags & IFF_UP && flg.ifr_flags & IFF_RUNNING)) continue; (*count)++; } if (*count == 0 && count_v6 == 0) { uv__free(ifc.ifc_req); uv__close(sockfd); return 0; } /* Alloc the return interface structs */ *addresses = uv__calloc(1, (*count + count_v6) * sizeof(uv_interface_address_t)); if (!(*addresses)) { if (count_v6) uv_free_interface_addresses(addresses_v6, count_v6); uv__free(ifc.ifc_req); uv__close(sockfd); return UV_ENOMEM; } address = *addresses; /* copy over the ipv6 addresses if any are found */ if (count_v6) { memcpy(address, addresses_v6, count_v6 * sizeof(uv_interface_address_t)); address += count_v6; *count += count_v6; /* free ipv6 addresses, but keep address names */ uv__free(addresses_v6); } count_names = *count; ifr = ifc.ifc_req; while ((char*)ifr < (char*)ifc.ifc_req + ifc.ifc_len) { p = ifr; ifr = (struct ifreq*) ((char*)ifr + sizeof(ifr->ifr_name) + ADDR_SIZE(ifr->ifr_addr)); if (!(p->ifr_addr.sa_family == AF_INET6 || p->ifr_addr.sa_family == AF_INET)) continue; memcpy(flg.ifr_name, p->ifr_name, sizeof(flg.ifr_name)); if (ioctl(sockfd, SIOCGIFFLAGS, &flg) == -1) { uv_free_interface_addresses(*addresses, count_names); uv__free(ifc.ifc_req); uv__close(sockfd); return UV_ENOSYS; } if (!(flg.ifr_flags & IFF_UP && flg.ifr_flags & IFF_RUNNING)) continue; /* All conditions above must match count loop */ i = 0; /* Ignore EBCDIC space (0x40) padding in name */ while (i < ARRAY_SIZE(p->ifr_name) && p->ifr_name[i] != 0x40 && p->ifr_name[i] != 0) ++i; address->name = uv__malloc(i + 1); if (address->name == NULL) { uv_free_interface_addresses(*addresses, count_names); uv__free(ifc.ifc_req); uv__close(sockfd); return UV_ENOMEM; } memcpy(address->name, p->ifr_name, i); address->name[i] = '\0'; __e2a_s(address->name); 
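/* The interface name is now ASCII and NUL-terminated; next bump the count of
 * allocated names, record the IPv4 address itself, and ask the kernel for the
 * matching netmask via SIOCGIFNETMASK before moving on to the next entry. */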
count_names++; address->address.address4 = *((struct sockaddr_in*) &p->ifr_addr); if (ioctl(sockfd, SIOCGIFNETMASK, p) == -1) { uv_free_interface_addresses(*addresses, count_names); uv__free(ifc.ifc_req); uv__close(sockfd); return UV__ERR(errno); } address->netmask.netmask4 = *((struct sockaddr_in*) &p->ifr_addr); address->netmask.netmask4.sin_family = AF_INET; address->is_internal = flg.ifr_flags & IFF_LOOPBACK ? 1 : 0; address++; } #undef ADDR_SIZE #undef MAX uv__free(ifc.ifc_req); uv__close(sockfd); return 0; } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; ++i) uv__free(addresses[i].name); uv__free(addresses); } void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { struct epoll_event* events; struct epoll_event dummy; uintptr_t i; uintptr_t nfds; assert(loop->watchers != NULL); assert(fd >= 0); events = (struct epoll_event*) loop->watchers[loop->nwatchers]; nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1]; if (events != NULL) /* Invalidate events with same file descriptor */ for (i = 0; i < nfds; i++) if ((int) events[i].fd == fd) events[i].fd = -1; /* Remove the file descriptor from the epoll. */ if (loop->ep != NULL) epoll_ctl(loop->ep, EPOLL_CTL_DEL, fd, &dummy); } int uv__io_check_fd(uv_loop_t* loop, int fd) { struct pollfd p[1]; int rv; p[0].fd = fd; p[0].events = POLLIN; do rv = poll(p, 1, 0); while (rv == -1 && errno == EINTR); if (rv == -1) abort(); if (p[0].revents & POLLNVAL) return -1; return 0; } int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_EVENT); return 0; } static int os390_regfileint(uv_fs_event_t* handle, char* path) { uv__os390_epoll* ep; _RFIS reg_struct; int rc; ep = handle->loop->ep; assert(ep->msg_queue != -1); reg_struct.__rfis_cmd = _RFIS_REG; reg_struct.__rfis_qid = ep->msg_queue; reg_struct.__rfis_type = 1; memcpy(reg_struct.__rfis_utok, &handle, sizeof(handle)); rc = __w_pioctl(path, _IOCC_REGFILEINT, sizeof(reg_struct), ®_struct); if (rc != 0) return UV__ERR(errno); memcpy(handle->rfis_rftok, reg_struct.__rfis_rftok, sizeof(handle->rfis_rftok)); return 0; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* filename, unsigned int flags) { char* path; int rc; if (uv__is_active(handle)) return UV_EINVAL; path = uv__strdup(filename); if (path == NULL) return UV_ENOMEM; rc = os390_regfileint(handle, path); if (rc != 0) { uv__free(path); return rc; } uv__handle_start(handle); handle->path = path; handle->cb = cb; return 0; } int uv__fs_event_stop(uv_fs_event_t* handle) { uv__os390_epoll* ep; _RFIS reg_struct; int rc; if (!uv__is_active(handle)) return 0; ep = handle->loop->ep; assert(ep->msg_queue != -1); reg_struct.__rfis_cmd = _RFIS_UNREG; reg_struct.__rfis_qid = ep->msg_queue; reg_struct.__rfis_type = 1; memcpy(reg_struct.__rfis_rftok, handle->rfis_rftok, sizeof(handle->rfis_rftok)); /* * This call will take "/" as the path argument in case we * don't care to supply the correct path. The system will simply * ignore it. 
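 *
 * For reference, the registration machinery in this file backs the portable
 * file-watching API. A minimal, illustrative use of that public API (names
 * as declared in uv.h; error checks omitted) looks like:
 *
 *   static void on_change(uv_fs_event_t* h, const char* fname,
 *                         int events, int status) {
 *     if (events & UV_RENAME) { ... }
 *     if (events & UV_CHANGE) { ... }
 *   }
 *
 *   uv_fs_event_t watcher;
 *   uv_fs_event_init(loop, &watcher);
 *   uv_fs_event_start(&watcher, on_change, "/some/file", 0);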
*/ rc = __w_pioctl("/", _IOCC_REGFILEINT, sizeof(reg_struct), ®_struct); if (rc != 0 && errno != EALREADY && errno != ENOENT) abort(); if (handle->path != NULL) { uv__free(handle->path); handle->path = NULL; } if (rc != 0 && errno == EALREADY) return -1; uv__handle_stop(handle); return 0; } int uv_fs_event_stop(uv_fs_event_t* handle) { uv__fs_event_stop(handle); return 0; } void uv__fs_event_close(uv_fs_event_t* handle) { /* * If we were unable to unregister file interest here, then it is most likely * that there is a pending queued change notification. When this happens, we * don't want to complete the close as it will free the underlying memory for * the handle, causing a use-after-free problem when the event is processed. * We defer the final cleanup until after the event is consumed in * os390_message_queue_handler(). */ if (uv__fs_event_stop(handle) == 0) uv__make_close_pending((uv_handle_t*) handle); } static int os390_message_queue_handler(uv__os390_epoll* ep) { uv_fs_event_t* handle; int msglen; int events; _RFIM msg; if (ep->msg_queue == -1) return 0; msglen = msgrcv(ep->msg_queue, &msg, sizeof(msg), 0, IPC_NOWAIT); if (msglen == -1 && errno == ENOMSG) return 0; if (msglen == -1) abort(); events = 0; if (msg.__rfim_event == _RFIM_ATTR || msg.__rfim_event == _RFIM_WRITE) events = UV_CHANGE; else if (msg.__rfim_event == _RFIM_RENAME || msg.__rfim_event == _RFIM_UNLINK) events = UV_RENAME; else if (msg.__rfim_event == 156) /* TODO(gabylb): zos - this event should not happen, need to investigate. * * This event seems to occur when the watched file is [re]moved, or an * editor (like vim) renames then creates the file on save (for vim, that's * when backupcopy=no|auto). */ events = UV_RENAME; else /* Some event that we are not interested in. */ return 0; /* `__rfim_utok` is treated as text when it should be treated as binary while * running in ASCII mode, resulting in an unwanted autoconversion. */ __a2e_l(msg.__rfim_utok, sizeof(msg.__rfim_utok)); handle = *(uv_fs_event_t**)(msg.__rfim_utok); assert(handle != NULL); assert((handle->flags & UV_HANDLE_CLOSED) == 0); if (uv__is_closing(handle)) { uv__handle_stop(handle); uv__make_close_pending((uv_handle_t*) handle); return 0; } else if (handle->path == NULL) { /* _RFIS_UNREG returned EALREADY. */ uv__handle_stop(handle); return 0; } /* The file is implicitly unregistered when the change notification is * sent, only one notification is sent per registration. So we need to * re-register interest in a file after each change notification we * receive. 
*/ assert(handle->path != NULL); os390_regfileint(handle, handle->path); handle->cb(handle, uv__basename_r(handle->path), events, 0); return 1; } void uv__io_poll(uv_loop_t* loop, int timeout) { static const int max_safe_timeout = 1789569; struct epoll_event events[1024]; struct epoll_event* pe; struct epoll_event e; uv__os390_epoll* ep; int have_signals; int real_timeout; QUEUE* q; uv__io_t* w; uint64_t base; int count; int nfds; int fd; int op; int i; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } while (!QUEUE_EMPTY(&loop->watcher_queue)) { uv_stream_t* stream; q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); stream= container_of(w, uv_stream_t, io_watcher); assert(w->fd < (int) loop->nwatchers); e.events = w->pevents; e.fd = w->fd; if (w->events == 0) op = EPOLL_CTL_ADD; else op = EPOLL_CTL_MOD; /* XXX Future optimization: do EPOLL_CTL_MOD lazily if we stop watching * events, skip the syscall and squelch the events after epoll_wait(). */ if (epoll_ctl(loop->ep, op, w->fd, &e)) { if (errno != EEXIST) abort(); assert(op == EPOLL_CTL_ADD); /* We've reactivated a file descriptor that's been watched before. */ if (epoll_ctl(loop->ep, EPOLL_CTL_MOD, w->fd, &e)) abort(); } w->events = w->pevents; } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. */ real_timeout = timeout; int nevents = 0; have_signals = 0; if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } nfds = 0; for (;;) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); if (sizeof(int32_t) == sizeof(long) && timeout >= max_safe_timeout) timeout = max_safe_timeout; nfds = epoll_wait(loop->ep, events, ARRAY_SIZE(events), timeout); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ base = loop->time; SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { assert(timeout != -1); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* We may have been inside the system call for longer than |timeout| * milliseconds so we need to update the timestamp to avoid drift. */ goto update_timeout; } if (nfds == -1) { if (errno != EINTR) abort(); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = (void*) events; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; for (i = 0; i < nfds; i++) { pe = events + i; fd = pe->fd; /* Skip invalidated events, see uv__platform_invalidate_fd */ if (fd == -1) continue; ep = loop->ep; if (pe->is_msg) { os390_message_queue_handler(ep); nevents++; continue; } assert(fd >= 0); assert((unsigned) fd < loop->nwatchers); w = loop->watchers[fd]; if (w == NULL) { /* File descriptor that we've stopped watching, disarm it. 
* * Ignore all errors because we may be racing with another thread * when the file descriptor is closed. */ epoll_ctl(loop->ep, EPOLL_CTL_DEL, fd, pe); continue; } /* Give users only events they're interested in. Prevents spurious * callbacks when previous callback invocation in this loop has stopped * the current watcher. Also, filters out events that users has not * requested us to watch. */ pe->events &= w->pevents | POLLERR | POLLHUP; if (pe->events == POLLERR || pe->events == POLLHUP) pe->events |= w->pevents & (POLLIN | POLLOUT); if (pe->events != 0) { /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, pe->events); } nevents++; } } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. */ timeout = 0; continue; } return; } if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); real_timeout -= (loop->time - base); if (real_timeout <= 0) return; timeout = real_timeout; } } int uv__io_fork(uv_loop_t* loop) { /* Nullify the msg queue but don't close it because it is still being used by the parent. */ loop->ep = NULL; return uv__platform_loop_init(loop); } gevent-24.11.1/deps/libuv/src/unix/pipe.c000066400000000000000000000250051471441230600201030ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include int uv_pipe_init(uv_loop_t* loop, uv_pipe_t* handle, int ipc) { uv__stream_init(loop, (uv_stream_t*)handle, UV_NAMED_PIPE); handle->shutdown_req = NULL; handle->connect_req = NULL; handle->pipe_fname = NULL; handle->ipc = ipc; return 0; } int uv_pipe_bind(uv_pipe_t* handle, const char* name) { struct sockaddr_un saddr; const char* pipe_fname; int sockfd; int err; pipe_fname = NULL; /* Already bound? 
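 *
 * (uv_pipe_bind() is the server half of the named pipe API. A minimal,
 * illustrative server setup, with error handling omitted and the socket
 * path chosen only for the example, is:
 *
 *   uv_pipe_t server;
 *   uv_pipe_init(loop, &server, 0);
 *   uv_pipe_bind(&server, "/tmp/example.sock");
 *   uv_listen((uv_stream_t*) &server, 128, on_connection);
 *
 * where on_connection is a uv_connection_cb supplied by the caller.)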
*/ if (uv__stream_fd(handle) >= 0) return UV_EINVAL; if (uv__is_closing(handle)) { return UV_EINVAL; } /* Make a copy of the file name, it outlives this function's scope. */ pipe_fname = uv__strdup(name); if (pipe_fname == NULL) return UV_ENOMEM; /* We've got a copy, don't touch the original any more. */ name = NULL; err = uv__socket(AF_UNIX, SOCK_STREAM, 0); if (err < 0) goto err_socket; sockfd = err; memset(&saddr, 0, sizeof saddr); uv__strscpy(saddr.sun_path, pipe_fname, sizeof(saddr.sun_path)); saddr.sun_family = AF_UNIX; if (bind(sockfd, (struct sockaddr*)&saddr, sizeof saddr)) { err = UV__ERR(errno); /* Convert ENOENT to EACCES for compatibility with Windows. */ if (err == UV_ENOENT) err = UV_EACCES; uv__close(sockfd); goto err_socket; } /* Success. */ handle->flags |= UV_HANDLE_BOUND; handle->pipe_fname = pipe_fname; /* Is a strdup'ed copy. */ handle->io_watcher.fd = sockfd; return 0; err_socket: uv__free((void*)pipe_fname); return err; } int uv__pipe_listen(uv_pipe_t* handle, int backlog, uv_connection_cb cb) { if (uv__stream_fd(handle) == -1) return UV_EINVAL; if (handle->ipc) return UV_EINVAL; #if defined(__MVS__) || defined(__PASE__) /* On zOS, backlog=0 has undefined behaviour */ /* On IBMi PASE, backlog=0 leads to "Connection refused" error */ if (backlog == 0) backlog = 1; else if (backlog < 0) backlog = SOMAXCONN; #endif if (listen(uv__stream_fd(handle), backlog)) return UV__ERR(errno); handle->connection_cb = cb; handle->io_watcher.cb = uv__server_io; uv__io_start(handle->loop, &handle->io_watcher, POLLIN); return 0; } void uv__pipe_close(uv_pipe_t* handle) { if (handle->pipe_fname) { /* * Unlink the file system entity before closing the file descriptor. * Doing it the other way around introduces a race where our process * unlinks a socket with the same name that's just been created by * another thread or process. */ unlink(handle->pipe_fname); uv__free((void*)handle->pipe_fname); handle->pipe_fname = NULL; } uv__stream_close((uv_stream_t*)handle); } int uv_pipe_open(uv_pipe_t* handle, uv_file fd) { int flags; int mode; int err; flags = 0; if (uv__fd_exists(handle->loop, fd)) return UV_EEXIST; do mode = fcntl(fd, F_GETFL); while (mode == -1 && errno == EINTR); if (mode == -1) return UV__ERR(errno); /* according to docs, must be EBADF */ err = uv__nonblock(fd, 1); if (err) return err; #if defined(__APPLE__) err = uv__stream_try_select((uv_stream_t*) handle, &fd); if (err) return err; #endif /* defined(__APPLE__) */ mode &= O_ACCMODE; if (mode != O_WRONLY) flags |= UV_HANDLE_READABLE; if (mode != O_RDONLY) flags |= UV_HANDLE_WRITABLE; return uv__stream_open((uv_stream_t*)handle, fd, flags); } void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, const char* name, uv_connect_cb cb) { struct sockaddr_un saddr; int new_sock; int err; int r; new_sock = (uv__stream_fd(handle) == -1); if (new_sock) { err = uv__socket(AF_UNIX, SOCK_STREAM, 0); if (err < 0) goto out; handle->io_watcher.fd = err; } memset(&saddr, 0, sizeof saddr); uv__strscpy(saddr.sun_path, name, sizeof(saddr.sun_path)); saddr.sun_family = AF_UNIX; do { r = connect(uv__stream_fd(handle), (struct sockaddr*)&saddr, sizeof saddr); } while (r == -1 && errno == EINTR); if (r == -1 && errno != EINPROGRESS) { err = UV__ERR(errno); #if defined(__CYGWIN__) || defined(__MSYS__) /* EBADF is supposed to mean that the socket fd is bad, but Cygwin reports EBADF instead of ENOTSOCK when the file is not a socket. We do not expect to see a bad fd here (e.g. due to new_sock), so translate the error. 
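 *
 * (On the caller's side this path is reached through uv_pipe_connect(). An
 * illustrative client sketch, error handling omitted and the socket path
 * only for the example:
 *
 *   uv_connect_t req;
 *   uv_pipe_t client;
 *   uv_pipe_init(loop, &client, 0);
 *   uv_pipe_connect(&req, &client, "/tmp/example.sock", on_connected);
 *
 * where on_connected(uv_connect_t* req, int status) receives 0 on success
 * or a UV_* error such as the translated UV_ENOTSOCK described above.)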
*/ if (err == UV_EBADF) err = UV_ENOTSOCK; #endif goto out; } err = 0; if (new_sock) { err = uv__stream_open((uv_stream_t*)handle, uv__stream_fd(handle), UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); } if (err == 0) uv__io_start(handle->loop, &handle->io_watcher, POLLOUT); out: handle->delayed_error = err; handle->connect_req = req; uv__req_init(handle->loop, req, UV_CONNECT); req->handle = (uv_stream_t*)handle; req->cb = cb; QUEUE_INIT(&req->queue); /* Force callback to run on next tick in case of error. */ if (err) uv__io_feed(handle->loop, &handle->io_watcher); } static int uv__pipe_getsockpeername(const uv_pipe_t* handle, uv__peersockfunc func, char* buffer, size_t* size) { struct sockaddr_un sa; socklen_t addrlen; int err; addrlen = sizeof(sa); memset(&sa, 0, addrlen); err = uv__getsockpeername((const uv_handle_t*) handle, func, (struct sockaddr*) &sa, (int*) &addrlen); if (err < 0) { *size = 0; return err; } #if defined(__linux__) if (sa.sun_path[0] == 0) /* Linux abstract namespace */ addrlen -= offsetof(struct sockaddr_un, sun_path); else #endif addrlen = strlen(sa.sun_path); if ((size_t)addrlen >= *size) { *size = addrlen + 1; return UV_ENOBUFS; } memcpy(buffer, sa.sun_path, addrlen); *size = addrlen; /* only null-terminate if it's not an abstract socket */ if (buffer[0] != '\0') buffer[addrlen] = '\0'; return 0; } int uv_pipe_getsockname(const uv_pipe_t* handle, char* buffer, size_t* size) { return uv__pipe_getsockpeername(handle, getsockname, buffer, size); } int uv_pipe_getpeername(const uv_pipe_t* handle, char* buffer, size_t* size) { return uv__pipe_getsockpeername(handle, getpeername, buffer, size); } void uv_pipe_pending_instances(uv_pipe_t* handle, int count) { } int uv_pipe_pending_count(uv_pipe_t* handle) { uv__stream_queued_fds_t* queued_fds; if (!handle->ipc) return 0; if (handle->accepted_fd == -1) return 0; if (handle->queued_fds == NULL) return 1; queued_fds = handle->queued_fds; return queued_fds->offset + 1; } uv_handle_type uv_pipe_pending_type(uv_pipe_t* handle) { if (!handle->ipc) return UV_UNKNOWN_HANDLE; if (handle->accepted_fd == -1) return UV_UNKNOWN_HANDLE; else return uv_guess_handle(handle->accepted_fd); } int uv_pipe_chmod(uv_pipe_t* handle, int mode) { unsigned desired_mode; struct stat pipe_stat; char* name_buffer; size_t name_len; int r; if (handle == NULL || uv__stream_fd(handle) == -1) return UV_EBADF; if (mode != UV_READABLE && mode != UV_WRITABLE && mode != (UV_WRITABLE | UV_READABLE)) return UV_EINVAL; /* Unfortunately fchmod does not work on all platforms, we will use chmod. */ name_len = 0; r = uv_pipe_getsockname(handle, NULL, &name_len); if (r != UV_ENOBUFS) return r; name_buffer = uv__malloc(name_len); if (name_buffer == NULL) return UV_ENOMEM; r = uv_pipe_getsockname(handle, name_buffer, &name_len); if (r != 0) { uv__free(name_buffer); return r; } /* stat must be used as fstat has a bug on Darwin */ if (stat(name_buffer, &pipe_stat) == -1) { uv__free(name_buffer); return -errno; } desired_mode = 0; if (mode & UV_READABLE) desired_mode |= S_IRUSR | S_IRGRP | S_IROTH; if (mode & UV_WRITABLE) desired_mode |= S_IWUSR | S_IWGRP | S_IWOTH; /* Exit early if pipe already has desired mode. */ if ((pipe_stat.st_mode & desired_mode) == desired_mode) { uv__free(name_buffer); return 0; } pipe_stat.st_mode |= desired_mode; r = chmod(name_buffer, pipe_stat.st_mode); uv__free(name_buffer); return r != -1 ? 
0 : UV__ERR(errno); } int uv_pipe(uv_os_fd_t fds[2], int read_flags, int write_flags) { uv_os_fd_t temp[2]; int err; #if defined(__FreeBSD__) || defined(__linux__) int flags = O_CLOEXEC; if ((read_flags & UV_NONBLOCK_PIPE) && (write_flags & UV_NONBLOCK_PIPE)) flags |= UV_FS_O_NONBLOCK; if (pipe2(temp, flags)) return UV__ERR(errno); if (flags & UV_FS_O_NONBLOCK) { fds[0] = temp[0]; fds[1] = temp[1]; return 0; } #else if (pipe(temp)) return UV__ERR(errno); if ((err = uv__cloexec(temp[0], 1))) goto fail; if ((err = uv__cloexec(temp[1], 1))) goto fail; #endif if (read_flags & UV_NONBLOCK_PIPE) if ((err = uv__nonblock(temp[0], 1))) goto fail; if (write_flags & UV_NONBLOCK_PIPE) if ((err = uv__nonblock(temp[1], 1))) goto fail; fds[0] = temp[0]; fds[1] = temp[1]; return 0; fail: uv__close(temp[0]); uv__close(temp[1]); return err; } int uv__make_pipe(int fds[2], int flags) { return uv_pipe(fds, flags & UV_NONBLOCK_PIPE, flags & UV_NONBLOCK_PIPE); } gevent-24.11.1/deps/libuv/src/unix/poll.c000066400000000000000000000104241471441230600201130ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include static void uv__poll_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { uv_poll_t* handle; int pevents; handle = container_of(w, uv_poll_t, io_watcher); /* * As documented in the kernel source fs/kernfs/file.c #780 * poll will return POLLERR|POLLPRI in case of sysfs * polling. This does not happen in case of out-of-band * TCP messages. * * The above is the case on (at least) FreeBSD and Linux. * * So to properly determine a POLLPRI or a POLLERR we need * to check for both. */ if ((events & POLLERR) && !(events & UV__POLLPRI)) { uv__io_stop(loop, w, POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI); uv__handle_stop(handle); handle->poll_cb(handle, UV_EBADF, 0); return; } pevents = 0; if (events & POLLIN) pevents |= UV_READABLE; if (events & UV__POLLPRI) pevents |= UV_PRIORITIZED; if (events & POLLOUT) pevents |= UV_WRITABLE; if (events & UV__POLLRDHUP) pevents |= UV_DISCONNECT; handle->poll_cb(handle, 0, pevents); } int uv_poll_init(uv_loop_t* loop, uv_poll_t* handle, int fd) { int err; if (uv__fd_exists(loop, fd)) return UV_EEXIST; err = uv__io_check_fd(loop, fd); if (err) return err; /* If ioctl(FIONBIO) reports ENOTTY, try fcntl(F_GETFL) + fcntl(F_SETFL). * Workaround for e.g. kqueue fds not supporting ioctls. 
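 *
 * (uv_poll_init() is the entry point of the public fd-polling API. A
 * minimal, illustrative watcher, error handling omitted:
 *
 *   static void on_readable(uv_poll_t* h, int status, int events) {
 *     if (status == 0 && (events & UV_READABLE)) { ... }
 *   }
 *
 *   uv_poll_t watcher;
 *   uv_poll_init(loop, &watcher, fd);
 *   uv_poll_start(&watcher, UV_READABLE, on_readable);
 *
 * uv_poll_stop() disarms the watcher again without closing the fd.)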
*/ err = uv__nonblock(fd, 1); #if UV__NONBLOCK_IS_IOCTL if (err == UV_ENOTTY) err = uv__nonblock_fcntl(fd, 1); #endif if (err) return err; uv__handle_init(loop, (uv_handle_t*) handle, UV_POLL); uv__io_init(&handle->io_watcher, uv__poll_io, fd); handle->poll_cb = NULL; return 0; } int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, uv_os_sock_t socket) { return uv_poll_init(loop, handle, socket); } static void uv__poll_stop(uv_poll_t* handle) { uv__io_stop(handle->loop, &handle->io_watcher, POLLIN | POLLOUT | UV__POLLRDHUP | UV__POLLPRI); uv__handle_stop(handle); uv__platform_invalidate_fd(handle->loop, handle->io_watcher.fd); } int uv_poll_stop(uv_poll_t* handle) { assert(!uv__is_closing(handle)); uv__poll_stop(handle); return 0; } int uv_poll_start(uv_poll_t* handle, int pevents, uv_poll_cb poll_cb) { uv__io_t** watchers; uv__io_t* w; int events; assert((pevents & ~(UV_READABLE | UV_WRITABLE | UV_DISCONNECT | UV_PRIORITIZED)) == 0); assert(!uv__is_closing(handle)); watchers = handle->loop->watchers; w = &handle->io_watcher; if (uv__fd_exists(handle->loop, w->fd)) if (watchers[w->fd] != w) return UV_EEXIST; uv__poll_stop(handle); if (pevents == 0) return 0; events = 0; if (pevents & UV_READABLE) events |= POLLIN; if (pevents & UV_PRIORITIZED) events |= UV__POLLPRI; if (pevents & UV_WRITABLE) events |= POLLOUT; if (pevents & UV_DISCONNECT) events |= UV__POLLRDHUP; uv__io_start(handle->loop, &handle->io_watcher, events); uv__handle_start(handle); handle->poll_cb = poll_cb; return 0; } void uv__poll_close(uv_poll_t* handle) { uv__poll_stop(handle); } gevent-24.11.1/deps/libuv/src/unix/posix-hrtime.c000066400000000000000000000026311471441230600215760ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #undef NANOSEC #define NANOSEC ((uint64_t) 1e9) uint64_t uv__hrtime(uv_clocktype_t type) { struct timespec ts; clock_gettime(CLOCK_MONOTONIC, &ts); return (((uint64_t) ts.tv_sec) * NANOSEC + ts.tv_nsec); } gevent-24.11.1/deps/libuv/src/unix/posix-poll.c000066400000000000000000000236201471441230600212550ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" /* POSIX defines poll() as a portable way to wait on file descriptors. * Here we maintain a dynamically sized array of file descriptors and * events to pass as the first argument to poll(). */ #include #include #include #include #include int uv__platform_loop_init(uv_loop_t* loop) { loop->poll_fds = NULL; loop->poll_fds_used = 0; loop->poll_fds_size = 0; loop->poll_fds_iterating = 0; return 0; } void uv__platform_loop_delete(uv_loop_t* loop) { uv__free(loop->poll_fds); loop->poll_fds = NULL; } int uv__io_fork(uv_loop_t* loop) { uv__platform_loop_delete(loop); return uv__platform_loop_init(loop); } /* Allocate or dynamically resize our poll fds array. */ static void uv__pollfds_maybe_resize(uv_loop_t* loop) { size_t i; size_t n; struct pollfd* p; if (loop->poll_fds_used < loop->poll_fds_size) return; n = loop->poll_fds_size ? loop->poll_fds_size * 2 : 64; p = uv__reallocf(loop->poll_fds, n * sizeof(*loop->poll_fds)); if (p == NULL) abort(); loop->poll_fds = p; for (i = loop->poll_fds_size; i < n; i++) { loop->poll_fds[i].fd = -1; loop->poll_fds[i].events = 0; loop->poll_fds[i].revents = 0; } loop->poll_fds_size = n; } /* Primitive swap operation on poll fds array elements. */ static void uv__pollfds_swap(uv_loop_t* loop, size_t l, size_t r) { struct pollfd pfd; pfd = loop->poll_fds[l]; loop->poll_fds[l] = loop->poll_fds[r]; loop->poll_fds[r] = pfd; } /* Add a watcher's fd to our poll fds array with its pending events. */ static void uv__pollfds_add(uv_loop_t* loop, uv__io_t* w) { size_t i; struct pollfd* pe; /* If the fd is already in the set just update its events. */ assert(!loop->poll_fds_iterating); for (i = 0; i < loop->poll_fds_used; ++i) { if (loop->poll_fds[i].fd == w->fd) { loop->poll_fds[i].events = w->pevents; return; } } /* Otherwise, allocate a new slot in the set for the fd. */ uv__pollfds_maybe_resize(loop); pe = &loop->poll_fds[loop->poll_fds_used++]; pe->fd = w->fd; pe->events = w->pevents; } /* Remove a watcher's fd from our poll fds array. 
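 *
 * Removal keeps the array dense by swapping the victim with the last live
 * entry instead of shifting: with fds [7, 4, 9, 5] and poll_fds_used == 4,
 * deleting 4 swaps it with 5 to give [7, 5, 9] and poll_fds_used == 3.
 * That makes each removal O(1), which is why the add path above never
 * assumes any particular ordering of the entries.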
*/ static void uv__pollfds_del(uv_loop_t* loop, int fd) { size_t i; assert(!loop->poll_fds_iterating); for (i = 0; i < loop->poll_fds_used;) { if (loop->poll_fds[i].fd == fd) { /* swap to last position and remove */ --loop->poll_fds_used; uv__pollfds_swap(loop, i, loop->poll_fds_used); loop->poll_fds[loop->poll_fds_used].fd = -1; loop->poll_fds[loop->poll_fds_used].events = 0; loop->poll_fds[loop->poll_fds_used].revents = 0; /* This method is called with an fd of -1 to purge the invalidated fds, * so we may possibly have multiples to remove. */ if (-1 != fd) return; } else { /* We must only increment the loop counter when the fds do not match. * Otherwise, when we are purging an invalidated fd, the value just * swapped here from the previous end of the array will be skipped. */ ++i; } } } void uv__io_poll(uv_loop_t* loop, int timeout) { sigset_t* pset; sigset_t set; uint64_t time_base; uint64_t time_diff; QUEUE* q; uv__io_t* w; size_t i; unsigned int nevents; int nfds; int have_signals; struct pollfd* pe; int fd; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } /* Take queued watchers and add their fds to our poll fds array. */ while (!QUEUE_EMPTY(&loop->watcher_queue)) { q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); assert(w->fd >= 0); assert(w->fd < (int) loop->nwatchers); uv__pollfds_add(loop, w); w->events = w->pevents; } /* Prepare a set of signals to block around poll(), if any. */ pset = NULL; if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { pset = &set; sigemptyset(pset); sigaddset(pset, SIGPROF); } assert(timeout >= -1); time_base = loop->time; if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } /* Loop calls to poll() and processing of results. If we get some * results from poll() but they turn out not to be interesting to * our caller then we need to loop around and poll() again. */ for (;;) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); if (pset != NULL) if (pthread_sigmask(SIG_BLOCK, pset, NULL)) abort(); nfds = poll(loop->poll_fds, (nfds_t)loop->poll_fds_used, timeout); if (pset != NULL) if (pthread_sigmask(SIG_UNBLOCK, pset, NULL)) abort(); /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ SAVE_ERRNO(uv__update_time(loop)); if (nfds == 0) { if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; if (timeout == -1) continue; if (timeout > 0) goto update_timeout; } assert(timeout != -1); return; } if (nfds == -1) { if (errno != EINTR) abort(); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == -1) continue; if (timeout == 0) return; /* Interrupted by a signal. Update timeout and poll again. */ goto update_timeout; } /* Tell uv__platform_invalidate_fd not to manipulate our array * while we are iterating over it. */ loop->poll_fds_iterating = 1; /* Initialize a count of events that we care about. */ nevents = 0; have_signals = 0; /* Loop over the entire poll fds array looking for returned events. 
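 *
 * Each entry's revents is masked down to the events its watcher still has
 * requested (plus POLLERR and POLLHUP), so a watcher stopped by an earlier
 * callback in this same pass cannot receive a spurious wakeup, and the
 * signal watcher's callback is deferred until all other watchers have run.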
*/ for (i = 0; i < loop->poll_fds_used; i++) { pe = loop->poll_fds + i; fd = pe->fd; /* Skip invalidated events, see uv__platform_invalidate_fd. */ if (fd == -1) continue; assert(fd >= 0); assert((unsigned) fd < loop->nwatchers); w = loop->watchers[fd]; if (w == NULL) { /* File descriptor that we've stopped watching, ignore. */ uv__platform_invalidate_fd(loop, fd); continue; } /* Filter out events that user has not requested us to watch * (e.g. POLLNVAL). */ pe->revents &= w->pevents | POLLERR | POLLHUP; if (pe->revents != 0) { /* Run signal watchers last. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, pe->revents); } nevents++; } } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->poll_fds_iterating = 0; /* Purge invalidated fds from our poll fds array. */ uv__pollfds_del(loop, -1); if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) return; if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); time_diff = loop->time - time_base; if (time_diff >= (uint64_t) timeout) return; timeout -= time_diff; } } /* Remove the given fd from our poll fds array because no one * is interested in its events anymore. */ void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { size_t i; assert(fd >= 0); if (loop->poll_fds_iterating) { /* uv__io_poll is currently iterating. Just invalidate fd. */ for (i = 0; i < loop->poll_fds_used; i++) if (loop->poll_fds[i].fd == fd) { loop->poll_fds[i].fd = -1; loop->poll_fds[i].events = 0; loop->poll_fds[i].revents = 0; } } else { /* uv__io_poll is not iterating. Delete fd from the set. */ uv__pollfds_del(loop, fd); } } /* Check whether the given fd is supported by poll(). */ int uv__io_check_fd(uv_loop_t* loop, int fd) { struct pollfd p[1]; int rv; p[0].fd = fd; p[0].events = POLLIN; do rv = poll(p, 1, 0); while (rv == -1 && (errno == EINTR || errno == EAGAIN)); if (rv == -1) return UV__ERR(errno); if (p[0].revents & POLLNVAL) return UV_EINVAL; return 0; } gevent-24.11.1/deps/libuv/src/unix/process.c000066400000000000000000000731461471441230600206350ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include #if defined(__APPLE__) # include # include # include # include # include # include # include # include # define environ (*_NSGetEnviron()) /* macOS 10.14 back does not define this constant */ # ifndef POSIX_SPAWN_SETSID # define POSIX_SPAWN_SETSID 1024 # endif #else extern char **environ; #endif #if defined(__linux__) || defined(__GLIBC__) # include #endif #if defined(__MVS__) # include "zos-base.h" #endif #if defined(__APPLE__) || \ defined(__DragonFly__) || \ defined(__FreeBSD__) || \ defined(__NetBSD__) || \ defined(__OpenBSD__) #include #else #define UV_USE_SIGCHLD #endif #ifdef UV_USE_SIGCHLD static void uv__chld(uv_signal_t* handle, int signum) { assert(signum == SIGCHLD); uv__wait_children(handle->loop); } #endif void uv__wait_children(uv_loop_t* loop) { uv_process_t* process; int exit_status; int term_signal; int status; int options; pid_t pid; QUEUE pending; QUEUE* q; QUEUE* h; QUEUE_INIT(&pending); h = &loop->process_handles; q = QUEUE_HEAD(h); while (q != h) { process = QUEUE_DATA(q, uv_process_t, queue); q = QUEUE_NEXT(q); #ifndef UV_USE_SIGCHLD if ((process->flags & UV_HANDLE_REAP) == 0) continue; options = 0; process->flags &= ~UV_HANDLE_REAP; #else options = WNOHANG; #endif do pid = waitpid(process->pid, &status, options); while (pid == -1 && errno == EINTR); #ifdef UV_USE_SIGCHLD if (pid == 0) /* Not yet exited */ continue; #endif if (pid == -1) { if (errno != ECHILD) abort(); /* The child died, and we missed it. This probably means someone else * stole the waitpid from us. Handle this by not handling it at all. */ continue; } assert(pid == process->pid); process->status = status; QUEUE_REMOVE(&process->queue); QUEUE_INSERT_TAIL(&pending, &process->queue); } h = &pending; q = QUEUE_HEAD(h); while (q != h) { process = QUEUE_DATA(q, uv_process_t, queue); q = QUEUE_NEXT(q); QUEUE_REMOVE(&process->queue); QUEUE_INIT(&process->queue); uv__handle_stop(process); if (process->exit_cb == NULL) continue; exit_status = 0; if (WIFEXITED(process->status)) exit_status = WEXITSTATUS(process->status); term_signal = 0; if (WIFSIGNALED(process->status)) term_signal = WTERMSIG(process->status); process->exit_cb(process, exit_status, term_signal); } assert(QUEUE_EMPTY(&pending)); } /* * Used for initializing stdio streams like options.stdin_stream. Returns * zero on success. See also the cleanup section in uv_spawn(). 
*/ static int uv__process_init_stdio(uv_stdio_container_t* container, int fds[2]) { int mask; int fd; mask = UV_IGNORE | UV_CREATE_PIPE | UV_INHERIT_FD | UV_INHERIT_STREAM; switch (container->flags & mask) { case UV_IGNORE: return 0; case UV_CREATE_PIPE: assert(container->data.stream != NULL); if (container->data.stream->type != UV_NAMED_PIPE) return UV_EINVAL; else return uv_socketpair(SOCK_STREAM, 0, fds, 0, 0); case UV_INHERIT_FD: case UV_INHERIT_STREAM: if (container->flags & UV_INHERIT_FD) fd = container->data.fd; else fd = uv__stream_fd(container->data.stream); if (fd == -1) return UV_EINVAL; fds[1] = fd; return 0; default: assert(0 && "Unexpected flags"); return UV_EINVAL; } } static int uv__process_open_stream(uv_stdio_container_t* container, int pipefds[2]) { int flags; int err; if (!(container->flags & UV_CREATE_PIPE) || pipefds[0] < 0) return 0; err = uv__close(pipefds[1]); if (err != 0) abort(); pipefds[1] = -1; uv__nonblock(pipefds[0], 1); flags = 0; if (container->flags & UV_WRITABLE_PIPE) flags |= UV_HANDLE_READABLE; if (container->flags & UV_READABLE_PIPE) flags |= UV_HANDLE_WRITABLE; return uv__stream_open(container->data.stream, pipefds[0], flags); } static void uv__process_close_stream(uv_stdio_container_t* container) { if (!(container->flags & UV_CREATE_PIPE)) return; uv__stream_close(container->data.stream); } static void uv__write_int(int fd, int val) { ssize_t n; do n = write(fd, &val, sizeof(val)); while (n == -1 && errno == EINTR); /* The write might have failed (e.g. if the parent process has died), * but we have nothing left but to _exit ourself now too. */ _exit(127); } static void uv__write_errno(int error_fd) { uv__write_int(error_fd, UV__ERR(errno)); } #if !(defined(__APPLE__) && (TARGET_OS_TV || TARGET_OS_WATCH)) /* execvp is marked __WATCHOS_PROHIBITED __TVOS_PROHIBITED, so must be * avoided. Since this isn't called on those targets, the function * doesn't even need to be defined for them. */ static void uv__process_child_init(const uv_process_options_t* options, int stdio_count, int (*pipes)[2], int error_fd) { sigset_t signewset; int close_fd; int use_fd; int fd; int n; /* Reset signal disposition first. Use a hard-coded limit because NSIG is not * fixed on Linux: it's either 32, 34 or 64, depending on whether RT signals * are enabled. We are not allowed to touch RT signal handlers, glibc uses * them internally. */ for (n = 1; n < 32; n += 1) { if (n == SIGKILL || n == SIGSTOP) continue; /* Can't be changed. */ #if defined(__HAIKU__) if (n == SIGKILLTHR) continue; /* Can't be changed. */ #endif if (SIG_ERR != signal(n, SIG_DFL)) continue; uv__write_errno(error_fd); } if (options->flags & UV_PROCESS_DETACHED) setsid(); /* First duplicate low numbered fds, since it's not safe to duplicate them, * they could get replaced. Example: swapping stdout and stderr; without * this fd 2 (stderr) would be duplicated into fd 1, thus making both * stdout and stderr go to the same fd, which was not the intention. 
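 *
 * Concretely: to swap them the child wants dup2(old 2, 1) and
 * dup2(old 1, 2), but executing dup2(2, 1) first destroys the original
 * fd 1 before it can be copied to fd 2. Moving every low-numbered source
 * fd up to a descriptor >= stdio_count first (the loop below) makes the
 * later dup2() calls order-independent.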
*/ for (fd = 0; fd < stdio_count; fd++) { use_fd = pipes[fd][1]; if (use_fd < 0 || use_fd >= fd) continue; #ifdef F_DUPFD_CLOEXEC /* POSIX 2008 */ pipes[fd][1] = fcntl(use_fd, F_DUPFD_CLOEXEC, stdio_count); #else pipes[fd][1] = fcntl(use_fd, F_DUPFD, stdio_count); #endif if (pipes[fd][1] == -1) uv__write_errno(error_fd); #ifndef F_DUPFD_CLOEXEC /* POSIX 2008 */ n = uv__cloexec(pipes[fd][1], 1); if (n) uv__write_int(error_fd, n); #endif } for (fd = 0; fd < stdio_count; fd++) { close_fd = -1; use_fd = pipes[fd][1]; if (use_fd < 0) { if (fd >= 3) continue; else { /* Redirect stdin, stdout and stderr to /dev/null even if UV_IGNORE is * set. */ uv__close_nocheckstdio(fd); /* Free up fd, if it happens to be open. */ use_fd = open("/dev/null", fd == 0 ? O_RDONLY : O_RDWR); close_fd = use_fd; if (use_fd < 0) uv__write_errno(error_fd); } } if (fd == use_fd) { if (close_fd == -1) { n = uv__cloexec(use_fd, 0); if (n) uv__write_int(error_fd, n); } } else { fd = dup2(use_fd, fd); } if (fd == -1) uv__write_errno(error_fd); if (fd <= 2 && close_fd == -1) uv__nonblock_fcntl(fd, 0); if (close_fd >= stdio_count) uv__close(close_fd); } if (options->cwd != NULL && chdir(options->cwd)) uv__write_errno(error_fd); if (options->flags & (UV_PROCESS_SETUID | UV_PROCESS_SETGID)) { /* When dropping privileges from root, the `setgroups` call will * remove any extraneous groups. If we don't call this, then * even though our uid has dropped, we may still have groups * that enable us to do super-user things. This will fail if we * aren't root, so don't bother checking the return value, this * is just done as an optimistic privilege dropping function. */ SAVE_ERRNO(setgroups(0, NULL)); } if ((options->flags & UV_PROCESS_SETGID) && setgid(options->gid)) uv__write_errno(error_fd); if ((options->flags & UV_PROCESS_SETUID) && setuid(options->uid)) uv__write_errno(error_fd); if (options->env != NULL) environ = options->env; /* Reset signal mask just before exec. 
*/ sigemptyset(&signewset); if (sigprocmask(SIG_SETMASK, &signewset, NULL) != 0) abort(); #ifdef __MVS__ execvpe(options->file, options->args, environ); #else execvp(options->file, options->args); #endif uv__write_errno(error_fd); } #endif #if defined(__APPLE__) typedef struct uv__posix_spawn_fncs_tag { struct { int (*addchdir_np)(const posix_spawn_file_actions_t *, const char *); } file_actions; } uv__posix_spawn_fncs_t; static uv_once_t posix_spawn_init_once = UV_ONCE_INIT; static uv__posix_spawn_fncs_t posix_spawn_fncs; static int posix_spawn_can_use_setsid; static void uv__spawn_init_posix_spawn_fncs(void) { /* Try to locate all non-portable functions at runtime */ posix_spawn_fncs.file_actions.addchdir_np = dlsym(RTLD_DEFAULT, "posix_spawn_file_actions_addchdir_np"); } static void uv__spawn_init_can_use_setsid(void) { int which[] = {CTL_KERN, KERN_OSRELEASE}; unsigned major; unsigned minor; unsigned patch; char buf[256]; size_t len; len = sizeof(buf); if (sysctl(which, ARRAY_SIZE(which), buf, &len, NULL, 0)) return; /* NULL specifies to use LC_C_LOCALE */ if (3 != sscanf_l(buf, NULL, "%u.%u.%u", &major, &minor, &patch)) return; posix_spawn_can_use_setsid = (major >= 19); /* macOS Catalina */ } static void uv__spawn_init_posix_spawn(void) { /* Init handles to all potentially non-defined functions */ uv__spawn_init_posix_spawn_fncs(); /* Init feature detection for POSIX_SPAWN_SETSID flag */ uv__spawn_init_can_use_setsid(); } static int uv__spawn_set_posix_spawn_attrs( posix_spawnattr_t* attrs, const uv__posix_spawn_fncs_t* posix_spawn_fncs, const uv_process_options_t* options) { int err; unsigned int flags; sigset_t signal_set; err = posix_spawnattr_init(attrs); if (err != 0) { /* If initialization fails, no need to de-init, just return */ return err; } if (options->flags & (UV_PROCESS_SETUID | UV_PROCESS_SETGID)) { /* kauth_cred_issuser currently requires exactly uid == 0 for these * posixspawn_attrs (set_groups_np, setuid_np, setgid_np), which deviates * from the normal specification of setuid (which also uses euid), and they * are also undocumented syscalls, so we do not use them. */ err = ENOSYS; goto error; } /* Set flags for spawn behavior * 1) POSIX_SPAWN_CLOEXEC_DEFAULT: (Apple Extension) All descriptors in the * parent will be treated as if they had been created with O_CLOEXEC. The * only fds that will be passed on to the child are those manipulated by * the file actions * 2) POSIX_SPAWN_SETSIGDEF: Signals mentioned in spawn-sigdefault in the * spawn attributes will be reset to behave as their default * 3) POSIX_SPAWN_SETSIGMASK: Signal mask will be set to the value of * spawn-sigmask in attributes * 4) POSIX_SPAWN_SETSID: Make the process a new session leader if a detached * session was requested. */ flags = POSIX_SPAWN_CLOEXEC_DEFAULT | POSIX_SPAWN_SETSIGDEF | POSIX_SPAWN_SETSIGMASK; if (options->flags & UV_PROCESS_DETACHED) { /* If running on a version of macOS where this flag is not supported, * revert back to the fork/exec flow. Otherwise posix_spawn will * silently ignore the flag. 
*/ if (!posix_spawn_can_use_setsid) { err = ENOSYS; goto error; } flags |= POSIX_SPAWN_SETSID; } err = posix_spawnattr_setflags(attrs, flags); if (err != 0) goto error; /* Reset all signal the child to their default behavior */ sigfillset(&signal_set); err = posix_spawnattr_setsigdefault(attrs, &signal_set); if (err != 0) goto error; /* Reset the signal mask for all signals */ sigemptyset(&signal_set); err = posix_spawnattr_setsigmask(attrs, &signal_set); if (err != 0) goto error; return err; error: (void) posix_spawnattr_destroy(attrs); return err; } static int uv__spawn_set_posix_spawn_file_actions( posix_spawn_file_actions_t* actions, const uv__posix_spawn_fncs_t* posix_spawn_fncs, const uv_process_options_t* options, int stdio_count, int (*pipes)[2]) { int fd; int fd2; int use_fd; int err; err = posix_spawn_file_actions_init(actions); if (err != 0) { /* If initialization fails, no need to de-init, just return */ return err; } /* Set the current working directory if requested */ if (options->cwd != NULL) { if (posix_spawn_fncs->file_actions.addchdir_np == NULL) { err = ENOSYS; goto error; } err = posix_spawn_fncs->file_actions.addchdir_np(actions, options->cwd); if (err != 0) goto error; } /* Do not return ENOSYS after this point, as we may mutate pipes. */ /* First duplicate low numbered fds, since it's not safe to duplicate them, * they could get replaced. Example: swapping stdout and stderr; without * this fd 2 (stderr) would be duplicated into fd 1, thus making both * stdout and stderr go to the same fd, which was not the intention. */ for (fd = 0; fd < stdio_count; fd++) { use_fd = pipes[fd][1]; if (use_fd < 0 || use_fd >= fd) continue; use_fd = stdio_count; for (fd2 = 0; fd2 < stdio_count; fd2++) { /* If we were not setting POSIX_SPAWN_CLOEXEC_DEFAULT, we would need to * also consider whether fcntl(fd, F_GETFD) returned without the * FD_CLOEXEC flag set. */ if (pipes[fd2][1] == use_fd) { use_fd++; fd2 = 0; } } err = posix_spawn_file_actions_adddup2( actions, pipes[fd][1], use_fd); assert(err != ENOSYS); if (err != 0) goto error; pipes[fd][1] = use_fd; } /* Second, move the descriptors into their respective places */ for (fd = 0; fd < stdio_count; fd++) { use_fd = pipes[fd][1]; if (use_fd < 0) { if (fd >= 3) continue; else { /* If ignored, redirect to (or from) /dev/null, */ err = posix_spawn_file_actions_addopen( actions, fd, "/dev/null", fd == 0 ? O_RDONLY : O_RDWR, 0); assert(err != ENOSYS); if (err != 0) goto error; continue; } } if (fd == use_fd) err = posix_spawn_file_actions_addinherit_np(actions, fd); else err = posix_spawn_file_actions_adddup2(actions, use_fd, fd); assert(err != ENOSYS); if (err != 0) goto error; /* Make sure the fd is marked as non-blocking (state shared between child * and parent). */ uv__nonblock_fcntl(use_fd, 0); } /* Finally, close all the superfluous descriptors */ for (fd = 0; fd < stdio_count; fd++) { use_fd = pipes[fd][1]; if (use_fd < stdio_count) continue; /* Check if we already closed this. 
*/ for (fd2 = 0; fd2 < fd; fd2++) { if (pipes[fd2][1] == use_fd) break; } if (fd2 < fd) continue; err = posix_spawn_file_actions_addclose(actions, use_fd); assert(err != ENOSYS); if (err != 0) goto error; } return 0; error: (void) posix_spawn_file_actions_destroy(actions); return err; } char* uv__spawn_find_path_in_env(char** env) { char** env_iterator; const char path_var[] = "PATH="; /* Look for an environment variable called PATH in the * provided env array, and return its value if found */ for (env_iterator = env; *env_iterator != NULL; env_iterator++) { if (strncmp(*env_iterator, path_var, sizeof(path_var) - 1) == 0) { /* Found "PATH=" at the beginning of the string */ return *env_iterator + sizeof(path_var) - 1; } } return NULL; } static int uv__spawn_resolve_and_spawn(const uv_process_options_t* options, posix_spawnattr_t* attrs, posix_spawn_file_actions_t* actions, pid_t* pid) { const char *p; const char *z; const char *path; size_t l; size_t k; int err; int seen_eacces; path = NULL; err = -1; seen_eacces = 0; /* Short circuit for erroneous case */ if (options->file == NULL) return ENOENT; /* The environment for the child process is that of the parent unless overriden * by options->env */ char** env = environ; if (options->env != NULL) env = options->env; /* If options->file contains a slash, posix_spawn/posix_spawnp should behave * the same, and do not involve PATH resolution at all. The libc * `posix_spawnp` provided by Apple is buggy (since 10.15), so we now emulate it * here, per https://github.com/libuv/libuv/pull/3583. */ if (strchr(options->file, '/') != NULL) { do err = posix_spawn(pid, options->file, actions, attrs, options->args, env); while (err == EINTR); return err; } /* Look for the definition of PATH in the provided env */ path = uv__spawn_find_path_in_env(env); /* The following resolution logic (execvpe emulation) is copied from * https://git.musl-libc.org/cgit/musl/tree/src/process/execvp.c * and adapted to work for our specific usage */ /* If no path was provided in env, use the default value * to look for the executable */ if (path == NULL) path = _PATH_DEFPATH; k = strnlen(options->file, NAME_MAX + 1); if (k > NAME_MAX) return ENAMETOOLONG; l = strnlen(path, PATH_MAX - 1) + 1; for (p = path;; p = z) { /* Compose the new process file from the entry in the PATH * environment variable and the actual file name */ char b[PATH_MAX + NAME_MAX]; z = strchr(p, ':'); if (!z) z = p + strlen(p); if ((size_t)(z - p) >= l) { if (!*z++) break; continue; } memcpy(b, p, z - p); b[z - p] = '/'; memcpy(b + (z - p) + (z > p), options->file, k + 1); /* Try to spawn the new process file. If it fails with ENOENT, the * new process file is not in this PATH entry, continue with the next * PATH entry. */ do err = posix_spawn(pid, b, actions, attrs, options->args, env); while (err == EINTR); switch (err) { case EACCES: seen_eacces = 1; break; /* continue search */ case ENOENT: case ENOTDIR: break; /* continue search */ default: return err; } if (!*z++) break; } if (seen_eacces) return EACCES; return err; } static int uv__spawn_and_init_child_posix_spawn( const uv_process_options_t* options, int stdio_count, int (*pipes)[2], pid_t* pid, const uv__posix_spawn_fncs_t* posix_spawn_fncs) { int err; posix_spawnattr_t attrs; posix_spawn_file_actions_t actions; err = uv__spawn_set_posix_spawn_attrs(&attrs, posix_spawn_fncs, options); if (err != 0) goto error; /* This may mutate pipes. 
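 *
 * (The uv__spawn_resolve_and_spawn() call made a little further down
 * emulates execvp: with, for example, PATH=/usr/local/bin:/usr/bin and
 * options->file "cat", it tries posix_spawn() on "/usr/local/bin/cat" and
 * then "/usr/bin/cat", treating ENOENT/ENOTDIR as "keep searching",
 * remembering EACCES, and reporting EACCES only if no entry works at all.)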
*/ err = uv__spawn_set_posix_spawn_file_actions(&actions, posix_spawn_fncs, options, stdio_count, pipes); if (err != 0) { (void) posix_spawnattr_destroy(&attrs); goto error; } /* Try to spawn options->file resolving in the provided environment * if any */ err = uv__spawn_resolve_and_spawn(options, &attrs, &actions, pid); assert(err != ENOSYS); /* Destroy the actions/attributes */ (void) posix_spawn_file_actions_destroy(&actions); (void) posix_spawnattr_destroy(&attrs); error: /* In an error situation, the attributes and file actions are * already destroyed, only the happy path requires cleanup */ return UV__ERR(err); } #endif static int uv__spawn_and_init_child_fork(const uv_process_options_t* options, int stdio_count, int (*pipes)[2], int error_fd, pid_t* pid) { sigset_t signewset; sigset_t sigoldset; /* Start the child with most signals blocked, to avoid any issues before we * can reset them, but allow program failures to exit (and not hang). */ sigfillset(&signewset); sigdelset(&signewset, SIGKILL); sigdelset(&signewset, SIGSTOP); sigdelset(&signewset, SIGTRAP); sigdelset(&signewset, SIGSEGV); sigdelset(&signewset, SIGBUS); sigdelset(&signewset, SIGILL); sigdelset(&signewset, SIGSYS); sigdelset(&signewset, SIGABRT); if (pthread_sigmask(SIG_BLOCK, &signewset, &sigoldset) != 0) abort(); *pid = fork(); if (*pid == 0) { /* Fork succeeded, in the child process */ uv__process_child_init(options, stdio_count, pipes, error_fd); abort(); } if (pthread_sigmask(SIG_SETMASK, &sigoldset, NULL) != 0) abort(); if (*pid == -1) /* Failed to fork */ return UV__ERR(errno); /* Fork succeeded, in the parent process */ return 0; } static int uv__spawn_and_init_child( uv_loop_t* loop, const uv_process_options_t* options, int stdio_count, int (*pipes)[2], pid_t* pid) { int signal_pipe[2] = { -1, -1 }; int status; int err; int exec_errorno; ssize_t r; #if defined(__APPLE__) uv_once(&posix_spawn_init_once, uv__spawn_init_posix_spawn); /* Special child process spawn case for macOS Big Sur (11.0) onwards * * Big Sur introduced a significant performance degradation on a call to * fork/exec when the process has many pages mmaped in with MAP_JIT, like, say * a javascript interpreter. Electron-based applications, for example, * are impacted; though the magnitude of the impact depends on how much the * app relies on subprocesses. * * On macOS, though, posix_spawn is implemented in a way that does not * exhibit the problem. This block implements the forking and preparation * logic with posix_spawn and its related primitives. It also takes advantage of * the macOS extension POSIX_SPAWN_CLOEXEC_DEFAULT that makes impossible to * leak descriptors to the child process. */ err = uv__spawn_and_init_child_posix_spawn(options, stdio_count, pipes, pid, &posix_spawn_fncs); /* The posix_spawn flow will return UV_ENOSYS if any of the posix_spawn_x_np * non-standard functions is both _needed_ and _undefined_. In those cases, * default back to the fork/execve strategy. For all other errors, just fail. */ if (err != UV_ENOSYS) return err; #endif /* This pipe is used by the parent to wait until * the child has called `execve()`. We need this * to avoid the following race condition: * * if ((pid = fork()) > 0) { * kill(pid, SIGTERM); * } * else if (pid == 0) { * execve("/bin/cat", argp, envp); * } * * The parent sends a signal immediately after forking. * Since the child may not have called `execve()` yet, * there is no telling what process receives the signal, * our fork or /bin/cat. 
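 * (That is, the same pid may still be running our pre-exec child stub or may already have become /bin/cat; the parent has no way to know which image the signal lands in.)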
* * To avoid ambiguity, we create a pipe with both ends * marked close-on-exec. Then, after the call to `fork()`, * the parent polls the read end until it EOFs or errors with EPIPE. */ err = uv__make_pipe(signal_pipe, 0); if (err) return err; /* Acquire write lock to prevent opening new fds in worker threads */ uv_rwlock_wrlock(&loop->cloexec_lock); err = uv__spawn_and_init_child_fork(options, stdio_count, pipes, signal_pipe[1], pid); /* Release lock in parent process */ uv_rwlock_wrunlock(&loop->cloexec_lock); uv__close(signal_pipe[1]); if (err == 0) { do r = read(signal_pipe[0], &exec_errorno, sizeof(exec_errorno)); while (r == -1 && errno == EINTR); if (r == 0) ; /* okay, EOF */ else if (r == sizeof(exec_errorno)) { do err = waitpid(*pid, &status, 0); /* okay, read errorno */ while (err == -1 && errno == EINTR); assert(err == *pid); err = exec_errorno; } else if (r == -1 && errno == EPIPE) { /* Something unknown happened to our child before spawn */ do err = waitpid(*pid, &status, 0); /* okay, got EPIPE */ while (err == -1 && errno == EINTR); assert(err == *pid); err = UV_EPIPE; } else abort(); } uv__close_nocheckstdio(signal_pipe[0]); return err; } int uv_spawn(uv_loop_t* loop, uv_process_t* process, const uv_process_options_t* options) { #if defined(__APPLE__) && (TARGET_OS_TV || TARGET_OS_WATCH) /* fork is marked __WATCHOS_PROHIBITED __TVOS_PROHIBITED. */ return UV_ENOSYS; #else int pipes_storage[8][2]; int (*pipes)[2]; int stdio_count; pid_t pid; int err; int exec_errorno; int i; assert(options->file != NULL); assert(!(options->flags & ~(UV_PROCESS_DETACHED | UV_PROCESS_SETGID | UV_PROCESS_SETUID | UV_PROCESS_WINDOWS_HIDE | UV_PROCESS_WINDOWS_HIDE_CONSOLE | UV_PROCESS_WINDOWS_HIDE_GUI | UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS))); uv__handle_init(loop, (uv_handle_t*)process, UV_PROCESS); QUEUE_INIT(&process->queue); process->status = 0; stdio_count = options->stdio_count; if (stdio_count < 3) stdio_count = 3; err = UV_ENOMEM; pipes = pipes_storage; if (stdio_count > (int) ARRAY_SIZE(pipes_storage)) pipes = uv__malloc(stdio_count * sizeof(*pipes)); if (pipes == NULL) goto error; for (i = 0; i < stdio_count; i++) { pipes[i][0] = -1; pipes[i][1] = -1; } for (i = 0; i < options->stdio_count; i++) { err = uv__process_init_stdio(options->stdio + i, pipes[i]); if (err) goto error; } #ifdef UV_USE_SIGCHLD uv_signal_start(&loop->child_watcher, uv__chld, SIGCHLD); #endif /* Spawn the child */ exec_errorno = uv__spawn_and_init_child(loop, options, stdio_count, pipes, &pid); #if 0 /* This runs into a nodejs issue (it expects initialized streams, even if the * exec failed). * See https://github.com/libuv/libuv/pull/3107#issuecomment-782482608 */ if (exec_errorno != 0) goto error; #endif /* Activate this handle if exec() happened successfully, even if we later * fail to open a stdio handle. This ensures we can eventually reap the child * with waitpid. */ if (exec_errorno == 0) { #ifndef UV_USE_SIGCHLD struct kevent event; EV_SET(&event, pid, EVFILT_PROC, EV_ADD | EV_ONESHOT, NOTE_EXIT, 0, 0); if (kevent(loop->backend_fd, &event, 1, NULL, 0, NULL)) { if (errno != ESRCH) abort(); /* Process already exited. Call waitpid on the next loop iteration. 
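 * (ESRCH from kevent() here means the child could not be registered for EVFILT_PROC because it is already gone, so we flag the handle and the loop for an explicit waitpid() pass instead of waiting for an exit event.)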
*/ process->flags |= UV_HANDLE_REAP; loop->flags |= UV_LOOP_REAP_CHILDREN; } #endif process->pid = pid; process->exit_cb = options->exit_cb; QUEUE_INSERT_TAIL(&loop->process_handles, &process->queue); uv__handle_start(process); } for (i = 0; i < options->stdio_count; i++) { err = uv__process_open_stream(options->stdio + i, pipes[i]); if (err == 0) continue; while (i--) uv__process_close_stream(options->stdio + i); goto error; } if (pipes != pipes_storage) uv__free(pipes); return exec_errorno; error: if (pipes != NULL) { for (i = 0; i < stdio_count; i++) { if (i < options->stdio_count) if (options->stdio[i].flags & (UV_INHERIT_FD | UV_INHERIT_STREAM)) continue; if (pipes[i][0] != -1) uv__close_nocheckstdio(pipes[i][0]); if (pipes[i][1] != -1) uv__close_nocheckstdio(pipes[i][1]); } if (pipes != pipes_storage) uv__free(pipes); } return err; #endif } int uv_process_kill(uv_process_t* process, int signum) { return uv_kill(process->pid, signum); } int uv_kill(int pid, int signum) { if (kill(pid, signum)) { #if defined(__MVS__) /* EPERM is returned if the process is a zombie. */ siginfo_t infop; if (errno == EPERM && waitid(P_PID, pid, &infop, WNOHANG | WNOWAIT | WEXITED) == 0) return 0; #endif return UV__ERR(errno); } else return 0; } void uv__process_close(uv_process_t* handle) { QUEUE_REMOVE(&handle->queue); uv__handle_stop(handle); if (QUEUE_EMPTY(&handle->loop->process_handles)) uv_signal_stop(&handle->loop->child_watcher); } gevent-24.11.1/deps/libuv/src/unix/procfs-exepath.c000066400000000000000000000027651471441230600221060ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include int uv_exepath(char* buffer, size_t* size) { ssize_t n; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; n = *size - 1; if (n > 0) n = readlink("/proc/self/exe", buffer, n); if (n == -1) return UV__ERR(errno); buffer[n] = '\0'; *size = n; return 0; } gevent-24.11.1/deps/libuv/src/unix/proctitle.c000066400000000000000000000077341471441230600211640ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include struct uv__process_title { char* str; size_t len; /* Length of the current process title. */ size_t cap; /* Maximum capacity. Computed once in uv_setup_args(). */ }; extern void uv__set_process_title(const char* title); static uv_mutex_t process_title_mutex; static uv_once_t process_title_mutex_once = UV_ONCE_INIT; static struct uv__process_title process_title; static void* args_mem; static void init_process_title_mutex_once(void) { uv_mutex_init(&process_title_mutex); } char** uv_setup_args(int argc, char** argv) { struct uv__process_title pt; char** new_argv; size_t size; char* s; int i; if (argc <= 0) return argv; pt.str = argv[0]; pt.len = strlen(argv[0]); pt.cap = pt.len + 1; /* Calculate how much memory we need for the argv strings. */ size = pt.cap; for (i = 1; i < argc; i++) size += strlen(argv[i]) + 1; /* Add space for the argv pointers. */ size += (argc + 1) * sizeof(char*); new_argv = uv__malloc(size); if (new_argv == NULL) return argv; /* Copy over the strings and set up the pointer table. */ i = 0; s = (char*) &new_argv[argc + 1]; size = pt.cap; goto loop; for (/* empty */; i < argc; i++) { size = strlen(argv[i]) + 1; loop: memcpy(s, argv[i], size); new_argv[i] = s; s += size; } new_argv[i] = NULL; pt.cap = argv[i - 1] + size - argv[0]; args_mem = new_argv; process_title = pt; return new_argv; } int uv_set_process_title(const char* title) { struct uv__process_title* pt; size_t len; /* If uv_setup_args wasn't called or failed, we can't continue. */ if (args_mem == NULL) return UV_ENOBUFS; pt = &process_title; len = strlen(title); uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); if (len >= pt->cap) { len = 0; if (pt->cap > 0) len = pt->cap - 1; } memcpy(pt->str, title, len); memset(pt->str + len, '\0', pt->cap - len); pt->len = len; uv__set_process_title(pt->str); uv_mutex_unlock(&process_title_mutex); return 0; } int uv_get_process_title(char* buffer, size_t size) { if (buffer == NULL || size == 0) return UV_EINVAL; /* If uv_setup_args wasn't called or failed, we can't continue. 
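 * (args_mem is only assigned at the very end of uv_setup_args(), so a NULL value here is the sentinel that the process-title machinery was never set up.)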
*/ if (args_mem == NULL) return UV_ENOBUFS; uv_once(&process_title_mutex_once, init_process_title_mutex_once); uv_mutex_lock(&process_title_mutex); if (size <= process_title.len) { uv_mutex_unlock(&process_title_mutex); return UV_ENOBUFS; } if (process_title.len != 0) memcpy(buffer, process_title.str, process_title.len + 1); buffer[process_title.len] = '\0'; uv_mutex_unlock(&process_title_mutex); return 0; } void uv__process_title_cleanup(void) { uv__free(args_mem); /* Keep valgrind happy. */ args_mem = NULL; } gevent-24.11.1/deps/libuv/src/unix/pthread-fixes.c000066400000000000000000000042321471441230600217100ustar00rootroot00000000000000/* Copyright (c) 2013, Sony Mobile Communications AB * Copyright (c) 2012, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* Android versions < 4.1 have a broken pthread_sigmask. */ #include "uv-common.h" #include #include #include int uv__pthread_sigmask(int how, const sigset_t* set, sigset_t* oset) { static int workaround; int err; if (uv__load_relaxed(&workaround)) { return sigprocmask(how, set, oset); } else { err = pthread_sigmask(how, set, oset); if (err) { if (err == EINVAL && sigprocmask(how, set, oset) == 0) { uv__store_relaxed(&workaround, 1); return 0; } else { return -1; } } } return 0; } gevent-24.11.1/deps/libuv/src/unix/qnx.c000066400000000000000000000070431471441230600177560ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include static void get_mem_info(uint64_t* totalmem, uint64_t* freemem) { mem_info_t msg; memset(&msg, 0, sizeof(msg)); msg.i.type = _MEM_INFO; msg.i.fd = -1; if (MsgSend(MEMMGR_COID, &msg.i, sizeof(msg.i), &msg.o, sizeof(msg.o)) != -1) { *totalmem = msg.o.info.__posix_tmi_total; *freemem = msg.o.info.posix_tmi_length; } else { *totalmem = 0; *freemem = 0; } } void uv_loadavg(double avg[3]) { avg[0] = 0.0; avg[1] = 0.0; avg[2] = 0.0; } int uv_exepath(char* buffer, size_t* size) { char path[PATH_MAX]; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; realpath(_cmdname(NULL), path); strlcpy(buffer, path, *size); *size = strlen(buffer); return 0; } uint64_t uv_get_free_memory(void) { uint64_t totalmem; uint64_t freemem; get_mem_info(&totalmem, &freemem); return freemem; } uint64_t uv_get_total_memory(void) { uint64_t totalmem; uint64_t freemem; get_mem_info(&totalmem, &freemem); return totalmem; } uint64_t uv_get_constrained_memory(void) { return 0; } int uv_resident_set_memory(size_t* rss) { int fd; procfs_asinfo asinfo; fd = uv__open_cloexec("/proc/self/ctl", O_RDONLY); if (fd == -1) return UV__ERR(errno); if (devctl(fd, DCMD_PROC_ASINFO, &asinfo, sizeof(asinfo), 0) == -1) { uv__close(fd); return UV__ERR(errno); } uv__close(fd); *rss = asinfo.rss; return 0; } int uv_uptime(double* uptime) { struct qtime_entry* qtime = _SYSPAGE_ENTRY(_syspage_ptr, qtime); *uptime = (qtime->nsec / 1000000000.0); return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { struct cpuinfo_entry* cpuinfo = (struct cpuinfo_entry*)_SYSPAGE_ENTRY(_syspage_ptr, new_cpuinfo); size_t cpuinfo_size = _SYSPAGE_ELEMENT_SIZE(_syspage_ptr, cpuinfo); struct strings_entry* strings = _SYSPAGE_ENTRY(_syspage_ptr, strings); int num_cpus = _syspage_ptr->num_cpu; int i; *count = num_cpus; *cpu_infos = uv__malloc(num_cpus * sizeof(**cpu_infos)); if (*cpu_infos == NULL) return UV_ENOMEM; for (i = 0; i < num_cpus; i++) { (*cpu_infos)[i].model = strdup(&strings->data[cpuinfo->name]); (*cpu_infos)[i].speed = cpuinfo->speed; SYSPAGE_ARRAY_ADJ_OFFSET(cpuinfo, cpuinfo, cpuinfo_size); } return 0; } gevent-24.11.1/deps/libuv/src/unix/random-devurandom.c000066400000000000000000000047631471441230600226000ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include static uv_once_t once = UV_ONCE_INIT; static int status; int uv__random_readpath(const char* path, void* buf, size_t buflen) { struct stat s; size_t pos; ssize_t n; int fd; fd = uv__open_cloexec(path, O_RDONLY); if (fd < 0) return fd; if (fstat(fd, &s)) { uv__close(fd); return UV__ERR(errno); } if (!S_ISCHR(s.st_mode)) { uv__close(fd); return UV_EIO; } for (pos = 0; pos != buflen; pos += n) { do n = read(fd, (char*) buf + pos, buflen - pos); while (n == -1 && errno == EINTR); if (n == -1) { uv__close(fd); return UV__ERR(errno); } if (n == 0) { uv__close(fd); return UV_EIO; } } uv__close(fd); return 0; } static void uv__random_devurandom_init(void) { char c; /* Linux's random(4) man page suggests applications should read at least * once from /dev/random before switching to /dev/urandom in order to seed * the system RNG. Reads from /dev/random can of course block indefinitely * until entropy is available but that's the point. */ status = uv__random_readpath("/dev/random", &c, 1); } int uv__random_devurandom(void* buf, size_t buflen) { uv_once(&once, uv__random_devurandom_init); if (status != 0) return status; return uv__random_readpath("/dev/urandom", buf, buflen); } gevent-24.11.1/deps/libuv/src/unix/random-getentropy.c000066400000000000000000000036601471441230600226270ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include typedef int (*uv__getentropy_cb)(void *, size_t); static uv__getentropy_cb uv__getentropy; static uv_once_t once = UV_ONCE_INIT; static void uv__random_getentropy_init(void) { uv__getentropy = (uv__getentropy_cb) dlsym(RTLD_DEFAULT, "getentropy"); } int uv__random_getentropy(void* buf, size_t buflen) { size_t pos; size_t stride; uv_once(&once, uv__random_getentropy_init); if (uv__getentropy == NULL) return UV_ENOSYS; /* getentropy() returns an error for requests > 256 bytes. 
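 * The loop below therefore issues the request in 256-byte strides and finishes with a final call for the remainder; a 600-byte buffer, for example, becomes reads of 256, 256 and 88 bytes.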
*/ for (pos = 0, stride = 256; pos + stride < buflen; pos += stride) if (uv__getentropy((char *) buf + pos, stride)) return UV__ERR(errno); if (uv__getentropy((char *) buf + pos, buflen - pos)) return UV__ERR(errno); return 0; } gevent-24.11.1/deps/libuv/src/unix/random-getrandom.c000066400000000000000000000047511471441230600224110ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #ifdef __linux__ #include "linux-syscalls.h" #define uv__random_getrandom_init() 0 #else /* !__linux__ */ #include #include typedef ssize_t (*uv__getrandom_cb)(void *, size_t, unsigned); static uv__getrandom_cb uv__getrandom; static uv_once_t once = UV_ONCE_INIT; static void uv__random_getrandom_init_once(void) { uv__getrandom = (uv__getrandom_cb) dlsym(RTLD_DEFAULT, "getrandom"); } static int uv__random_getrandom_init(void) { uv_once(&once, uv__random_getrandom_init_once); if (uv__getrandom == NULL) return UV_ENOSYS; return 0; } #endif /* !__linux__ */ int uv__random_getrandom(void* buf, size_t buflen) { ssize_t n; size_t pos; int rc; rc = uv__random_getrandom_init(); if (rc != 0) return rc; for (pos = 0; pos != buflen; pos += n) { do { n = buflen - pos; /* Most getrandom() implementations promise that reads <= 256 bytes * will always succeed and won't be interrupted by signals. * It's therefore useful to split it up in smaller reads because * one big read may, in theory, continuously fail with EINTR. */ if (n > 256) n = 256; n = uv__getrandom((char *) buf + pos, n, 0); } while (n == -1 && errno == EINTR); if (n == -1) return UV__ERR(errno); if (n == 0) return UV_EIO; } return 0; } gevent-24.11.1/deps/libuv/src/unix/random-sysctl-linux.c000066400000000000000000000054371471441230600231110ustar00rootroot00000000000000/* Copyright libuv contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include struct uv__sysctl_args { int* name; int nlen; void* oldval; size_t* oldlenp; void* newval; size_t newlen; unsigned long unused[4]; }; int uv__random_sysctl(void* buf, size_t buflen) { static int name[] = {1 /*CTL_KERN*/, 40 /*KERN_RANDOM*/, 6 /*RANDOM_UUID*/}; struct uv__sysctl_args args; char uuid[16]; char* p; char* pe; size_t n; p = buf; pe = p + buflen; while (p < pe) { memset(&args, 0, sizeof(args)); args.name = name; args.nlen = ARRAY_SIZE(name); args.oldval = uuid; args.oldlenp = &n; n = sizeof(uuid); /* Emits a deprecation warning with some kernels but that seems like * an okay trade-off for the fallback of the fallback: this function is * only called when neither getrandom(2) nor /dev/urandom are available. * Fails with ENOSYS on kernels configured without CONFIG_SYSCTL_SYSCALL. * At least arm64 never had a _sysctl system call and therefore doesn't * have a SYS__sysctl define either. */ #ifdef SYS__sysctl if (syscall(SYS__sysctl, &args) == -1) return UV__ERR(errno); #else { (void) &args; return UV_ENOSYS; } #endif if (n != sizeof(uuid)) return UV_EIO; /* Can't happen. */ /* uuid[] is now a type 4 UUID. Bytes 6 and 8 (counting from zero) contain * 4 and 5 bits of entropy, respectively. For ease of use, we skip those * and only use 14 of the 16 bytes. */ uuid[6] = uuid[14]; uuid[8] = uuid[15]; n = pe - p; if (n > 14) n = 14; memcpy(p, uuid, n); p += n; } return 0; } gevent-24.11.1/deps/libuv/src/unix/signal.c000066400000000000000000000351521471441230600204270ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #ifndef SA_RESTART # define SA_RESTART 0 #endif typedef struct { uv_signal_t* handle; int signum; } uv__signal_msg_t; RB_HEAD(uv__signal_tree_s, uv_signal_s); static int uv__signal_unlock(void); static int uv__signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum, int oneshot); static void uv__signal_event(uv_loop_t* loop, uv__io_t* w, unsigned int events); static int uv__signal_compare(uv_signal_t* w1, uv_signal_t* w2); static void uv__signal_stop(uv_signal_t* handle); static void uv__signal_unregister_handler(int signum); static uv_once_t uv__signal_global_init_guard = UV_ONCE_INIT; static struct uv__signal_tree_s uv__signal_tree = RB_INITIALIZER(uv__signal_tree); static int uv__signal_lock_pipefd[2] = { -1, -1 }; RB_GENERATE_STATIC(uv__signal_tree_s, uv_signal_s, tree_entry, uv__signal_compare) static void uv__signal_global_reinit(void); static void uv__signal_global_init(void) { if (uv__signal_lock_pipefd[0] == -1) /* pthread_atfork can register before and after handlers, one * for each child. This only registers one for the child. That * state is both persistent and cumulative, so if we keep doing * it the handler functions will be called multiple times. Thus * we only want to do it once. */ if (pthread_atfork(NULL, NULL, &uv__signal_global_reinit)) abort(); uv__signal_global_reinit(); } void uv__signal_cleanup(void) { /* We can only use signal-safe functions here. * That includes read/write and close, fortunately. * We do all of this directly here instead of resetting * uv__signal_global_init_guard because * uv__signal_global_once_init is only called from uv_loop_init * and this needs to function in existing loops. */ if (uv__signal_lock_pipefd[0] != -1) { uv__close(uv__signal_lock_pipefd[0]); uv__signal_lock_pipefd[0] = -1; } if (uv__signal_lock_pipefd[1] != -1) { uv__close(uv__signal_lock_pipefd[1]); uv__signal_lock_pipefd[1] = -1; } } static void uv__signal_global_reinit(void) { uv__signal_cleanup(); if (uv__make_pipe(uv__signal_lock_pipefd, 0)) abort(); if (uv__signal_unlock()) abort(); } void uv__signal_global_once_init(void) { uv_once(&uv__signal_global_init_guard, uv__signal_global_init); } static int uv__signal_lock(void) { int r; char data; do { r = read(uv__signal_lock_pipefd[0], &data, sizeof data); } while (r < 0 && errno == EINTR); return (r < 0) ? -1 : 0; } static int uv__signal_unlock(void) { int r; char data = 42; do { r = write(uv__signal_lock_pipefd[1], &data, sizeof data); } while (r < 0 && errno == EINTR); return (r < 0) ? -1 : 0; } static void uv__signal_block_and_lock(sigset_t* saved_sigmask) { sigset_t new_mask; if (sigfillset(&new_mask)) abort(); /* to shut up valgrind */ sigemptyset(saved_sigmask); if (pthread_sigmask(SIG_SETMASK, &new_mask, saved_sigmask)) abort(); if (uv__signal_lock()) abort(); } static void uv__signal_unlock_and_unblock(sigset_t* saved_sigmask) { if (uv__signal_unlock()) abort(); if (pthread_sigmask(SIG_SETMASK, saved_sigmask, NULL)) abort(); } static uv_signal_t* uv__signal_first_handle(int signum) { /* This function must be called with the signal lock held. 
*/ uv_signal_t lookup; uv_signal_t* handle; lookup.signum = signum; lookup.flags = 0; lookup.loop = NULL; handle = RB_NFIND(uv__signal_tree_s, &uv__signal_tree, &lookup); if (handle != NULL && handle->signum == signum) return handle; return NULL; } static void uv__signal_handler(int signum) { uv__signal_msg_t msg; uv_signal_t* handle; int saved_errno; saved_errno = errno; memset(&msg, 0, sizeof msg); if (uv__signal_lock()) { errno = saved_errno; return; } for (handle = uv__signal_first_handle(signum); handle != NULL && handle->signum == signum; handle = RB_NEXT(uv__signal_tree_s, &uv__signal_tree, handle)) { int r; msg.signum = signum; msg.handle = handle; /* write() should be atomic for small data chunks, so the entire message * should be written at once. In theory the pipe could become full, in * which case the user is out of luck. */ do { r = write(handle->loop->signal_pipefd[1], &msg, sizeof msg); } while (r == -1 && errno == EINTR); assert(r == sizeof msg || (r == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))); if (r != -1) handle->caught_signals++; } uv__signal_unlock(); errno = saved_errno; } static int uv__signal_register_handler(int signum, int oneshot) { /* When this function is called, the signal lock must be held. */ struct sigaction sa; /* XXX use a separate signal stack? */ memset(&sa, 0, sizeof(sa)); if (sigfillset(&sa.sa_mask)) abort(); sa.sa_handler = uv__signal_handler; sa.sa_flags = SA_RESTART; if (oneshot) sa.sa_flags |= SA_RESETHAND; /* XXX save old action so we can restore it later on? */ if (sigaction(signum, &sa, NULL)) return UV__ERR(errno); return 0; } static void uv__signal_unregister_handler(int signum) { /* When this function is called, the signal lock must be held. */ struct sigaction sa; memset(&sa, 0, sizeof(sa)); sa.sa_handler = SIG_DFL; /* sigaction can only fail with EINVAL or EFAULT; an attempt to deregister a * signal implies that it was successfully registered earlier, so EINVAL * should never happen. */ if (sigaction(signum, &sa, NULL)) abort(); } static int uv__signal_loop_once_init(uv_loop_t* loop) { int err; /* Return if already initialized. */ if (loop->signal_pipefd[0] != -1) return 0; err = uv__make_pipe(loop->signal_pipefd, UV_NONBLOCK_PIPE); if (err) return err; uv__io_init(&loop->signal_io_watcher, uv__signal_event, loop->signal_pipefd[0]); uv__io_start(loop, &loop->signal_io_watcher, POLLIN); return 0; } int uv__signal_loop_fork(uv_loop_t* loop) { uv__io_stop(loop, &loop->signal_io_watcher, POLLIN); uv__close(loop->signal_pipefd[0]); uv__close(loop->signal_pipefd[1]); loop->signal_pipefd[0] = -1; loop->signal_pipefd[1] = -1; return uv__signal_loop_once_init(loop); } void uv__signal_loop_cleanup(uv_loop_t* loop) { QUEUE* q; /* Stop all the signal watchers that are still attached to this loop. This * ensures that the (shared) signal tree doesn't contain any invalid entries * entries, and that signal handlers are removed when appropriate. * It's safe to use QUEUE_FOREACH here because the handles and the handle * queue are not modified by uv__signal_stop(). 
*/ QUEUE_FOREACH(q, &loop->handle_queue) { uv_handle_t* handle = QUEUE_DATA(q, uv_handle_t, handle_queue); if (handle->type == UV_SIGNAL) uv__signal_stop((uv_signal_t*) handle); } if (loop->signal_pipefd[0] != -1) { uv__close(loop->signal_pipefd[0]); loop->signal_pipefd[0] = -1; } if (loop->signal_pipefd[1] != -1) { uv__close(loop->signal_pipefd[1]); loop->signal_pipefd[1] = -1; } } int uv_signal_init(uv_loop_t* loop, uv_signal_t* handle) { int err; err = uv__signal_loop_once_init(loop); if (err) return err; uv__handle_init(loop, (uv_handle_t*) handle, UV_SIGNAL); handle->signum = 0; handle->caught_signals = 0; handle->dispatched_signals = 0; return 0; } void uv__signal_close(uv_signal_t* handle) { uv__signal_stop(handle); } int uv_signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum) { return uv__signal_start(handle, signal_cb, signum, 0); } int uv_signal_start_oneshot(uv_signal_t* handle, uv_signal_cb signal_cb, int signum) { return uv__signal_start(handle, signal_cb, signum, 1); } static int uv__signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum, int oneshot) { sigset_t saved_sigmask; int err; uv_signal_t* first_handle; assert(!uv__is_closing(handle)); /* If the user supplies signum == 0, then return an error already. If the * signum is otherwise invalid then uv__signal_register will find out * eventually. */ if (signum == 0) return UV_EINVAL; /* Short circuit: if the signal watcher is already watching {signum} don't * go through the process of deregistering and registering the handler. * Additionally, this avoids pending signals getting lost in the small * time frame that handle->signum == 0. */ if (signum == handle->signum) { handle->signal_cb = signal_cb; return 0; } /* If the signal handler was already active, stop it first. */ if (handle->signum != 0) { uv__signal_stop(handle); } uv__signal_block_and_lock(&saved_sigmask); /* If at this point there are no active signal watchers for this signum (in * any of the loops), it's time to try and register a handler for it here. * Also in case there's only one-shot handlers and a regular handler comes in. */ first_handle = uv__signal_first_handle(signum); if (first_handle == NULL || (!oneshot && (first_handle->flags & UV_SIGNAL_ONE_SHOT))) { err = uv__signal_register_handler(signum, oneshot); if (err) { /* Registering the signal handler failed. Must be an invalid signal. */ uv__signal_unlock_and_unblock(&saved_sigmask); return err; } } handle->signum = signum; if (oneshot) handle->flags |= UV_SIGNAL_ONE_SHOT; RB_INSERT(uv__signal_tree_s, &uv__signal_tree, handle); uv__signal_unlock_and_unblock(&saved_sigmask); handle->signal_cb = signal_cb; uv__handle_start(handle); return 0; } static void uv__signal_event(uv_loop_t* loop, uv__io_t* w, unsigned int events) { uv__signal_msg_t* msg; uv_signal_t* handle; char buf[sizeof(uv__signal_msg_t) * 32]; size_t bytes, end, i; int r; bytes = 0; end = 0; do { r = read(loop->signal_pipefd[0], buf + bytes, sizeof(buf) - bytes); if (r == -1 && errno == EINTR) continue; if (r == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) { /* If there are bytes in the buffer already (which really is extremely * unlikely if possible at all) we can't exit the function here. We'll * spin until more bytes are read instead. */ if (bytes > 0) continue; /* Otherwise, there was nothing there. */ return; } /* Other errors really should never happen. */ if (r == -1) abort(); bytes += r; /* `end` is rounded down to a multiple of sizeof(uv__signal_msg_t). 
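 * For example, if the read left five and a half messages in buf, only the five complete ones are dispatched below; the partial tail is later moved back to the front of the buffer for the next pass.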
*/ end = (bytes / sizeof(uv__signal_msg_t)) * sizeof(uv__signal_msg_t); for (i = 0; i < end; i += sizeof(uv__signal_msg_t)) { msg = (uv__signal_msg_t*) (buf + i); handle = msg->handle; if (msg->signum == handle->signum) { assert(!(handle->flags & UV_HANDLE_CLOSING)); handle->signal_cb(handle, handle->signum); } handle->dispatched_signals++; if (handle->flags & UV_SIGNAL_ONE_SHOT) uv__signal_stop(handle); } bytes -= end; /* If there are any "partial" messages left, move them to the start of the * the buffer, and spin. This should not happen. */ if (bytes) { memmove(buf, buf + end, bytes); continue; } } while (end == sizeof buf); } static int uv__signal_compare(uv_signal_t* w1, uv_signal_t* w2) { int f1; int f2; /* Compare signums first so all watchers with the same signnum end up * adjacent. */ if (w1->signum < w2->signum) return -1; if (w1->signum > w2->signum) return 1; /* Handlers without UV_SIGNAL_ONE_SHOT set will come first, so if the first * handler returned is a one-shot handler, the rest will be too. */ f1 = w1->flags & UV_SIGNAL_ONE_SHOT; f2 = w2->flags & UV_SIGNAL_ONE_SHOT; if (f1 < f2) return -1; if (f1 > f2) return 1; /* Sort by loop pointer, so we can easily look up the first item after * { .signum = x, .loop = NULL }. */ if (w1->loop < w2->loop) return -1; if (w1->loop > w2->loop) return 1; if (w1 < w2) return -1; if (w1 > w2) return 1; return 0; } int uv_signal_stop(uv_signal_t* handle) { assert(!uv__is_closing(handle)); uv__signal_stop(handle); return 0; } static void uv__signal_stop(uv_signal_t* handle) { uv_signal_t* removed_handle; sigset_t saved_sigmask; uv_signal_t* first_handle; int rem_oneshot; int first_oneshot; int ret; /* If the watcher wasn't started, this is a no-op. */ if (handle->signum == 0) return; uv__signal_block_and_lock(&saved_sigmask); removed_handle = RB_REMOVE(uv__signal_tree_s, &uv__signal_tree, handle); assert(removed_handle == handle); (void) removed_handle; /* Check if there are other active signal watchers observing this signal. If * not, unregister the signal handler. */ first_handle = uv__signal_first_handle(handle->signum); if (first_handle == NULL) { uv__signal_unregister_handler(handle->signum); } else { rem_oneshot = handle->flags & UV_SIGNAL_ONE_SHOT; first_oneshot = first_handle->flags & UV_SIGNAL_ONE_SHOT; if (first_oneshot && !rem_oneshot) { ret = uv__signal_register_handler(handle->signum, 1); assert(ret == 0); (void)ret; } } uv__signal_unlock_and_unblock(&saved_sigmask); handle->signum = 0; uv__handle_stop(handle); } gevent-24.11.1/deps/libuv/src/unix/spinlock.h000066400000000000000000000036501471441230600207770ustar00rootroot00000000000000/* Copyright (c) 2013, Ben Noordhuis * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #ifndef UV_SPINLOCK_H_ #define UV_SPINLOCK_H_ #include "internal.h" /* ACCESS_ONCE, UV_UNUSED */ #include "atomic-ops.h" #define UV_SPINLOCK_INITIALIZER { 0 } typedef struct { int lock; } uv_spinlock_t; UV_UNUSED(static void uv_spinlock_init(uv_spinlock_t* spinlock)); UV_UNUSED(static void uv_spinlock_lock(uv_spinlock_t* spinlock)); UV_UNUSED(static void uv_spinlock_unlock(uv_spinlock_t* spinlock)); UV_UNUSED(static int uv_spinlock_trylock(uv_spinlock_t* spinlock)); UV_UNUSED(static void uv_spinlock_init(uv_spinlock_t* spinlock)) { ACCESS_ONCE(int, spinlock->lock) = 0; } UV_UNUSED(static void uv_spinlock_lock(uv_spinlock_t* spinlock)) { while (!uv_spinlock_trylock(spinlock)) cpu_relax(); } UV_UNUSED(static void uv_spinlock_unlock(uv_spinlock_t* spinlock)) { ACCESS_ONCE(int, spinlock->lock) = 0; } UV_UNUSED(static int uv_spinlock_trylock(uv_spinlock_t* spinlock)) { /* TODO(bnoordhuis) Maybe change to a ticket lock to guarantee fair queueing. * Not really critical until we have locks that are (frequently) contended * for by several threads. */ return 0 == cmpxchgi(&spinlock->lock, 0, 1); } #endif /* UV_SPINLOCK_H_ */ gevent-24.11.1/deps/libuv/src/unix/stream.c000066400000000000000000001234231471441230600204440ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #include #include #include #include #include /* IOV_MAX */ #if defined(__APPLE__) # include # include # include /* Forward declaration */ typedef struct uv__stream_select_s uv__stream_select_t; struct uv__stream_select_s { uv_stream_t* stream; uv_thread_t thread; uv_sem_t close_sem; uv_sem_t async_sem; uv_async_t async; int events; int fake_fd; int int_fd; int fd; fd_set* sread; size_t sread_sz; fd_set* swrite; size_t swrite_sz; }; #endif /* defined(__APPLE__) */ static void uv__stream_connect(uv_stream_t*); static void uv__write(uv_stream_t* stream); static void uv__read(uv_stream_t* stream); static void uv__stream_io(uv_loop_t* loop, uv__io_t* w, unsigned int events); static void uv__write_callbacks(uv_stream_t* stream); static size_t uv__write_req_size(uv_write_t* req); static void uv__drain(uv_stream_t* stream); void uv__stream_init(uv_loop_t* loop, uv_stream_t* stream, uv_handle_type type) { int err; uv__handle_init(loop, (uv_handle_t*)stream, type); stream->read_cb = NULL; stream->alloc_cb = NULL; stream->close_cb = NULL; stream->connection_cb = NULL; stream->connect_req = NULL; stream->shutdown_req = NULL; stream->accepted_fd = -1; stream->queued_fds = NULL; stream->delayed_error = 0; QUEUE_INIT(&stream->write_queue); QUEUE_INIT(&stream->write_completed_queue); stream->write_queue_size = 0; if (loop->emfile_fd == -1) { err = uv__open_cloexec("/dev/null", O_RDONLY); if (err < 0) /* In the rare case that "/dev/null" isn't mounted open "/" * instead. */ err = uv__open_cloexec("/", O_RDONLY); if (err >= 0) loop->emfile_fd = err; } #if defined(__APPLE__) stream->select = NULL; #endif /* defined(__APPLE_) */ uv__io_init(&stream->io_watcher, uv__stream_io, -1); } static void uv__stream_osx_interrupt_select(uv_stream_t* stream) { #if defined(__APPLE__) /* Notify select() thread about state change */ uv__stream_select_t* s; int r; s = stream->select; if (s == NULL) return; /* Interrupt select() loop * NOTE: fake_fd and int_fd are socketpair(), thus writing to one will * emit read event on other side */ do r = write(s->fake_fd, "x", 1); while (r == -1 && errno == EINTR); assert(r == 1); #else /* !defined(__APPLE__) */ /* No-op on any other platform */ #endif /* !defined(__APPLE__) */ } #if defined(__APPLE__) static void uv__stream_osx_select(void* arg) { uv_stream_t* stream; uv__stream_select_t* s; char buf[1024]; int events; int fd; int r; int max_fd; stream = arg; s = stream->select; fd = s->fd; if (fd > s->int_fd) max_fd = fd; else max_fd = s->int_fd; for (;;) { /* Terminate on semaphore */ if (uv_sem_trywait(&s->close_sem) == 0) break; /* Watch fd using select(2) */ memset(s->sread, 0, s->sread_sz); memset(s->swrite, 0, s->swrite_sz); if (uv__io_active(&stream->io_watcher, POLLIN)) FD_SET(fd, s->sread); if (uv__io_active(&stream->io_watcher, POLLOUT)) FD_SET(fd, s->swrite); FD_SET(s->int_fd, s->sread); /* Wait indefinitely for fd events */ r = select(max_fd + 1, s->sread, s->swrite, NULL, NULL); if (r == -1) { if (errno == EINTR) continue; /* XXX: Possible?! 
*/ abort(); } /* Ignore timeouts */ if (r == 0) continue; /* Empty socketpair's buffer in case of interruption */ if (FD_ISSET(s->int_fd, s->sread)) for (;;) { r = read(s->int_fd, buf, sizeof(buf)); if (r == sizeof(buf)) continue; if (r != -1) break; if (errno == EAGAIN || errno == EWOULDBLOCK) break; if (errno == EINTR) continue; abort(); } /* Handle events */ events = 0; if (FD_ISSET(fd, s->sread)) events |= POLLIN; if (FD_ISSET(fd, s->swrite)) events |= POLLOUT; assert(events != 0 || FD_ISSET(s->int_fd, s->sread)); if (events != 0) { ACCESS_ONCE(int, s->events) = events; uv_async_send(&s->async); uv_sem_wait(&s->async_sem); /* Should be processed at this stage */ assert((s->events == 0) || (stream->flags & UV_HANDLE_CLOSING)); } } } static void uv__stream_osx_select_cb(uv_async_t* handle) { uv__stream_select_t* s; uv_stream_t* stream; int events; s = container_of(handle, uv__stream_select_t, async); stream = s->stream; /* Get and reset stream's events */ events = s->events; ACCESS_ONCE(int, s->events) = 0; assert(events != 0); assert(events == (events & (POLLIN | POLLOUT))); /* Invoke callback on event-loop */ if ((events & POLLIN) && uv__io_active(&stream->io_watcher, POLLIN)) uv__stream_io(stream->loop, &stream->io_watcher, POLLIN); if ((events & POLLOUT) && uv__io_active(&stream->io_watcher, POLLOUT)) uv__stream_io(stream->loop, &stream->io_watcher, POLLOUT); if (stream->flags & UV_HANDLE_CLOSING) return; /* NOTE: It is important to do it here, otherwise `select()` might be called * before the actual `uv__read()`, leading to the blocking syscall */ uv_sem_post(&s->async_sem); } static void uv__stream_osx_cb_close(uv_handle_t* async) { uv__stream_select_t* s; s = container_of(async, uv__stream_select_t, async); uv__free(s); } int uv__stream_try_select(uv_stream_t* stream, int* fd) { /* * kqueue doesn't work with some files from /dev mount on osx. * select(2) in separate thread for those fds */ struct kevent filter[1]; struct kevent events[1]; struct timespec timeout; uv__stream_select_t* s; int fds[2]; int err; int ret; int kq; int old_fd; int max_fd; size_t sread_sz; size_t swrite_sz; kq = kqueue(); if (kq == -1) { perror("(libuv) kqueue()"); return UV__ERR(errno); } EV_SET(&filter[0], *fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, 0); /* Use small timeout, because we only want to capture EINVALs */ timeout.tv_sec = 0; timeout.tv_nsec = 1; do ret = kevent(kq, filter, 1, events, 1, &timeout); while (ret == -1 && errno == EINTR); uv__close(kq); if (ret == -1) return UV__ERR(errno); if (ret == 0 || (events[0].flags & EV_ERROR) == 0 || events[0].data != EINVAL) return 0; /* At this point we definitely know that this fd won't work with kqueue */ /* * Create fds for io watcher and to interrupt the select() loop. 
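 * (The pair comes from socketpair(); uv__stream_osx_interrupt_select() later writes a single byte to fake_fd, which shows up as a read event on int_fd and knocks the helper thread out of its blocking select().)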
* NOTE: do it ahead of malloc below to allocate enough space for fd_sets */ if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds)) return UV__ERR(errno); max_fd = *fd; if (fds[1] > max_fd) max_fd = fds[1]; sread_sz = ROUND_UP(max_fd + 1, sizeof(uint32_t) * NBBY) / NBBY; swrite_sz = sread_sz; s = uv__malloc(sizeof(*s) + sread_sz + swrite_sz); if (s == NULL) { err = UV_ENOMEM; goto failed_malloc; } s->events = 0; s->fd = *fd; s->sread = (fd_set*) ((char*) s + sizeof(*s)); s->sread_sz = sread_sz; s->swrite = (fd_set*) ((char*) s->sread + sread_sz); s->swrite_sz = swrite_sz; err = uv_async_init(stream->loop, &s->async, uv__stream_osx_select_cb); if (err) goto failed_async_init; s->async.flags |= UV_HANDLE_INTERNAL; uv__handle_unref(&s->async); err = uv_sem_init(&s->close_sem, 0); if (err != 0) goto failed_close_sem_init; err = uv_sem_init(&s->async_sem, 0); if (err != 0) goto failed_async_sem_init; s->fake_fd = fds[0]; s->int_fd = fds[1]; old_fd = *fd; s->stream = stream; stream->select = s; *fd = s->fake_fd; err = uv_thread_create(&s->thread, uv__stream_osx_select, stream); if (err != 0) goto failed_thread_create; return 0; failed_thread_create: s->stream = NULL; stream->select = NULL; *fd = old_fd; uv_sem_destroy(&s->async_sem); failed_async_sem_init: uv_sem_destroy(&s->close_sem); failed_close_sem_init: uv__close(fds[0]); uv__close(fds[1]); uv_close((uv_handle_t*) &s->async, uv__stream_osx_cb_close); return err; failed_async_init: uv__free(s); failed_malloc: uv__close(fds[0]); uv__close(fds[1]); return err; } #endif /* defined(__APPLE__) */ int uv__stream_open(uv_stream_t* stream, int fd, int flags) { #if defined(__APPLE__) int enable; #endif if (!(stream->io_watcher.fd == -1 || stream->io_watcher.fd == fd)) return UV_EBUSY; assert(fd >= 0); stream->flags |= flags; if (stream->type == UV_TCP) { if ((stream->flags & UV_HANDLE_TCP_NODELAY) && uv__tcp_nodelay(fd, 1)) return UV__ERR(errno); /* TODO Use delay the user passed in. */ if ((stream->flags & UV_HANDLE_TCP_KEEPALIVE) && uv__tcp_keepalive(fd, 1, 60)) { return UV__ERR(errno); } } #if defined(__APPLE__) enable = 1; if (setsockopt(fd, SOL_SOCKET, SO_OOBINLINE, &enable, sizeof(enable)) && errno != ENOTSOCK && errno != EINVAL) { return UV__ERR(errno); } #endif stream->io_watcher.fd = fd; return 0; } void uv__stream_flush_write_queue(uv_stream_t* stream, int error) { uv_write_t* req; QUEUE* q; while (!QUEUE_EMPTY(&stream->write_queue)) { q = QUEUE_HEAD(&stream->write_queue); QUEUE_REMOVE(q); req = QUEUE_DATA(q, uv_write_t, queue); req->error = error; QUEUE_INSERT_TAIL(&stream->write_completed_queue, &req->queue); } } void uv__stream_destroy(uv_stream_t* stream) { assert(!uv__io_active(&stream->io_watcher, POLLIN | POLLOUT)); assert(stream->flags & UV_HANDLE_CLOSED); if (stream->connect_req) { uv__req_unregister(stream->loop, stream->connect_req); stream->connect_req->cb(stream->connect_req, UV_ECANCELED); stream->connect_req = NULL; } uv__stream_flush_write_queue(stream, UV_ECANCELED); uv__write_callbacks(stream); uv__drain(stream); assert(stream->write_queue_size == 0); } /* Implements a best effort approach to mitigating accept() EMFILE errors. * We have a spare file descriptor stashed away that we close to get below * the EMFILE limit. Next, we accept all pending connections and close them * immediately to signal the clients that we're overloaded - and we are, but * we still keep on trucking. * * There is one caveat: it's not reliable in a multi-threaded environment. * The file descriptor limit is per process. 
Our party trick fails if another * thread opens a file or creates a socket in the time window between us * calling close() and accept(). */ static int uv__emfile_trick(uv_loop_t* loop, int accept_fd) { int err; int emfile_fd; if (loop->emfile_fd == -1) return UV_EMFILE; uv__close(loop->emfile_fd); loop->emfile_fd = -1; do { err = uv__accept(accept_fd); if (err >= 0) uv__close(err); } while (err >= 0 || err == UV_EINTR); emfile_fd = uv__open_cloexec("/", O_RDONLY); if (emfile_fd >= 0) loop->emfile_fd = emfile_fd; return err; } #if defined(UV_HAVE_KQUEUE) # define UV_DEC_BACKLOG(w) w->rcount--; #else # define UV_DEC_BACKLOG(w) /* no-op */ #endif /* defined(UV_HAVE_KQUEUE) */ void uv__server_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { uv_stream_t* stream; int err; stream = container_of(w, uv_stream_t, io_watcher); assert(events & POLLIN); assert(stream->accepted_fd == -1); assert(!(stream->flags & UV_HANDLE_CLOSING)); uv__io_start(stream->loop, &stream->io_watcher, POLLIN); /* connection_cb can close the server socket while we're * in the loop so check it on each iteration. */ while (uv__stream_fd(stream) != -1) { assert(stream->accepted_fd == -1); #if defined(UV_HAVE_KQUEUE) if (w->rcount <= 0) return; #endif /* defined(UV_HAVE_KQUEUE) */ err = uv__accept(uv__stream_fd(stream)); if (err < 0) { if (err == UV_EAGAIN || err == UV__ERR(EWOULDBLOCK)) return; /* Not an error. */ if (err == UV_ECONNABORTED) continue; /* Ignore. Nothing we can do about that. */ if (err == UV_EMFILE || err == UV_ENFILE) { err = uv__emfile_trick(loop, uv__stream_fd(stream)); if (err == UV_EAGAIN || err == UV__ERR(EWOULDBLOCK)) break; } stream->connection_cb(stream, err); continue; } UV_DEC_BACKLOG(w) stream->accepted_fd = err; stream->connection_cb(stream, 0); if (stream->accepted_fd != -1) { /* The user hasn't yet called uv_accept() */ uv__io_stop(loop, &stream->io_watcher, POLLIN); return; } if (stream->type == UV_TCP && (stream->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) { /* Give other processes a chance to accept connections.
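 * (In UV_HANDLE_TCP_SINGLE_ACCEPT mode the 1-nanosecond nanosleep below acts, in effect, as a scheduler yield between accepts, so other processes sharing the listen socket get a turn.)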
*/ struct timespec timeout = { 0, 1 }; nanosleep(&timeout, NULL); } } } #undef UV_DEC_BACKLOG int uv_accept(uv_stream_t* server, uv_stream_t* client) { int err; assert(server->loop == client->loop); if (server->accepted_fd == -1) return UV_EAGAIN; switch (client->type) { case UV_NAMED_PIPE: case UV_TCP: err = uv__stream_open(client, server->accepted_fd, UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); if (err) { /* TODO handle error */ uv__close(server->accepted_fd); goto done; } break; case UV_UDP: err = uv_udp_open((uv_udp_t*) client, server->accepted_fd); if (err) { uv__close(server->accepted_fd); goto done; } break; default: return UV_EINVAL; } client->flags |= UV_HANDLE_BOUND; done: /* Process queued fds */ if (server->queued_fds != NULL) { uv__stream_queued_fds_t* queued_fds; queued_fds = server->queued_fds; /* Read first */ server->accepted_fd = queued_fds->fds[0]; /* All read, free */ assert(queued_fds->offset > 0); if (--queued_fds->offset == 0) { uv__free(queued_fds); server->queued_fds = NULL; } else { /* Shift rest */ memmove(queued_fds->fds, queued_fds->fds + 1, queued_fds->offset * sizeof(*queued_fds->fds)); } } else { server->accepted_fd = -1; if (err == 0) uv__io_start(server->loop, &server->io_watcher, POLLIN); } return err; } int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) { int err; if (uv__is_closing(stream)) { return UV_EINVAL; } switch (stream->type) { case UV_TCP: err = uv__tcp_listen((uv_tcp_t*)stream, backlog, cb); break; case UV_NAMED_PIPE: err = uv__pipe_listen((uv_pipe_t*)stream, backlog, cb); break; default: err = UV_EINVAL; } if (err == 0) uv__handle_start(stream); return err; } static void uv__drain(uv_stream_t* stream) { uv_shutdown_t* req; int err; assert(QUEUE_EMPTY(&stream->write_queue)); if (!(stream->flags & UV_HANDLE_CLOSING)) { uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT); uv__stream_osx_interrupt_select(stream); } if (!(stream->flags & UV_HANDLE_SHUTTING)) return; req = stream->shutdown_req; assert(req); if ((stream->flags & UV_HANDLE_CLOSING) || !(stream->flags & UV_HANDLE_SHUT)) { stream->shutdown_req = NULL; stream->flags &= ~UV_HANDLE_SHUTTING; uv__req_unregister(stream->loop, req); err = 0; if (stream->flags & UV_HANDLE_CLOSING) /* The user destroyed the stream before we got to do the shutdown. */ err = UV_ECANCELED; else if (shutdown(uv__stream_fd(stream), SHUT_WR)) err = UV__ERR(errno); else /* Success. */ stream->flags |= UV_HANDLE_SHUT; if (req->cb != NULL) req->cb(req, err); } } static ssize_t uv__writev(int fd, struct iovec* vec, size_t n) { if (n == 1) return write(fd, vec->iov_base, vec->iov_len); else return writev(fd, vec, n); } static size_t uv__write_req_size(uv_write_t* req) { size_t size; assert(req->bufs != NULL); size = uv__count_bufs(req->bufs + req->write_index, req->nbufs - req->write_index); assert(req->handle->write_queue_size >= size); return size; } /* Returns 1 if all write request data has been written, or 0 if there is still * more data to write. * * Note: the return value only says something about the *current* request. * There may still be other write requests sitting in the queue. */ static int uv__write_req_update(uv_stream_t* stream, uv_write_t* req, size_t n) { uv_buf_t* buf; size_t len; assert(n <= stream->write_queue_size); stream->write_queue_size -= n; buf = req->bufs + req->write_index; do { len = n < buf->len ? n : buf->len; buf->base += len; buf->len -= len; buf += (buf->len == 0); /* Advance to next buffer if this one is empty. 
*/ n -= len; } while (n > 0); req->write_index = buf - req->bufs; return req->write_index == req->nbufs; } static void uv__write_req_finish(uv_write_t* req) { uv_stream_t* stream = req->handle; /* Pop the req off tcp->write_queue. */ QUEUE_REMOVE(&req->queue); /* Only free when there was no error. On error, we touch up write_queue_size * right before making the callback. The reason we don't do that right away * is that a write_queue_size > 0 is our only way to signal to the user that * they should stop writing - which they should if we got an error. Something * to revisit in future revisions of the libuv API. */ if (req->error == 0) { if (req->bufs != req->bufsml) uv__free(req->bufs); req->bufs = NULL; } /* Add it to the write_completed_queue where it will have its * callback called in the near future. */ QUEUE_INSERT_TAIL(&stream->write_completed_queue, &req->queue); uv__io_feed(stream->loop, &stream->io_watcher); } static int uv__handle_fd(uv_handle_t* handle) { switch (handle->type) { case UV_NAMED_PIPE: case UV_TCP: return ((uv_stream_t*) handle)->io_watcher.fd; case UV_UDP: return ((uv_udp_t*) handle)->io_watcher.fd; default: return -1; } } static int uv__try_write(uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle) { struct iovec* iov; int iovmax; int iovcnt; ssize_t n; /* * Cast to iovec. We had to have our own uv_buf_t instead of iovec * because Windows's WSABUF is not an iovec. */ iov = (struct iovec*) bufs; iovcnt = nbufs; iovmax = uv__getiovmax(); /* Limit iov count to avoid EINVALs from writev() */ if (iovcnt > iovmax) iovcnt = iovmax; /* * Now do the actual writev. Note that we've been updating the pointers * inside the iov each time we write. So there is no need to offset it. */ if (send_handle != NULL) { int fd_to_send; struct msghdr msg; struct cmsghdr *cmsg; union { char data[64]; struct cmsghdr alias; } scratch; if (uv__is_closing(send_handle)) return UV_EBADF; fd_to_send = uv__handle_fd((uv_handle_t*) send_handle); memset(&scratch, 0, sizeof(scratch)); assert(fd_to_send >= 0); msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = iov; msg.msg_iovlen = iovcnt; msg.msg_flags = 0; msg.msg_control = &scratch.alias; msg.msg_controllen = CMSG_SPACE(sizeof(fd_to_send)); cmsg = CMSG_FIRSTHDR(&msg); cmsg->cmsg_level = SOL_SOCKET; cmsg->cmsg_type = SCM_RIGHTS; cmsg->cmsg_len = CMSG_LEN(sizeof(fd_to_send)); /* silence aliasing warning */ { void* pv = CMSG_DATA(cmsg); int* pi = pv; *pi = fd_to_send; } do n = sendmsg(uv__stream_fd(stream), &msg, 0); while (n == -1 && errno == EINTR); } else { do n = uv__writev(uv__stream_fd(stream), iov, iovcnt); while (n == -1 && errno == EINTR); } if (n >= 0) return n; if (errno == EAGAIN || errno == EWOULDBLOCK || errno == ENOBUFS) return UV_EAGAIN; #ifdef __APPLE__ /* macOS versions 10.10 and 10.15 - and presumably 10.11 to 10.14, too - * have a bug where a race condition causes the kernel to return EPROTOTYPE * because the socket isn't fully constructed. It's probably the result of * the peer closing the connection and that is why libuv translates it to * ECONNRESET. Previously, libuv retried until the EPROTOTYPE error went * away but some VPN software causes the same behavior except the error is * permanent, not transient, turning the retry mechanism into an infinite * loop. See https://github.com/libuv/libuv/pull/482. 
*/ if (errno == EPROTOTYPE) return UV_ECONNRESET; #endif /* __APPLE__ */ return UV__ERR(errno); } static void uv__write(uv_stream_t* stream) { QUEUE* q; uv_write_t* req; ssize_t n; assert(uv__stream_fd(stream) >= 0); for (;;) { if (QUEUE_EMPTY(&stream->write_queue)) return; q = QUEUE_HEAD(&stream->write_queue); req = QUEUE_DATA(q, uv_write_t, queue); assert(req->handle == stream); n = uv__try_write(stream, &(req->bufs[req->write_index]), req->nbufs - req->write_index, req->send_handle); /* Ensure the handle isn't sent again in case this is a partial write. */ if (n >= 0) { req->send_handle = NULL; if (uv__write_req_update(stream, req, n)) { uv__write_req_finish(req); return; /* TODO(bnoordhuis) Start trying to write the next request. */ } } else if (n != UV_EAGAIN) break; /* If this is a blocking stream, try again. */ if (stream->flags & UV_HANDLE_BLOCKING_WRITES) continue; /* We're not done. */ uv__io_start(stream->loop, &stream->io_watcher, POLLOUT); /* Notify select() thread about state change */ uv__stream_osx_interrupt_select(stream); return; } req->error = n; uv__write_req_finish(req); uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT); uv__stream_osx_interrupt_select(stream); } static void uv__write_callbacks(uv_stream_t* stream) { uv_write_t* req; QUEUE* q; QUEUE pq; if (QUEUE_EMPTY(&stream->write_completed_queue)) return; QUEUE_MOVE(&stream->write_completed_queue, &pq); while (!QUEUE_EMPTY(&pq)) { /* Pop a req off write_completed_queue. */ q = QUEUE_HEAD(&pq); req = QUEUE_DATA(q, uv_write_t, queue); QUEUE_REMOVE(q); uv__req_unregister(stream->loop, req); if (req->bufs != NULL) { stream->write_queue_size -= uv__write_req_size(req); if (req->bufs != req->bufsml) uv__free(req->bufs); req->bufs = NULL; } /* NOTE: call callback AFTER freeing the request data. */ if (req->cb) req->cb(req, req->error); } } static void uv__stream_eof(uv_stream_t* stream, const uv_buf_t* buf) { stream->flags |= UV_HANDLE_READ_EOF; stream->flags &= ~UV_HANDLE_READING; uv__io_stop(stream->loop, &stream->io_watcher, POLLIN); uv__handle_stop(stream); uv__stream_osx_interrupt_select(stream); stream->read_cb(stream, UV_EOF, buf); } static int uv__stream_queue_fd(uv_stream_t* stream, int fd) { uv__stream_queued_fds_t* queued_fds; unsigned int queue_size; queued_fds = stream->queued_fds; if (queued_fds == NULL) { queue_size = 8; queued_fds = uv__malloc((queue_size - 1) * sizeof(*queued_fds->fds) + sizeof(*queued_fds)); if (queued_fds == NULL) return UV_ENOMEM; queued_fds->size = queue_size; queued_fds->offset = 0; stream->queued_fds = queued_fds; /* Grow */ } else if (queued_fds->size == queued_fds->offset) { queue_size = queued_fds->size + 8; queued_fds = uv__realloc(queued_fds, (queue_size - 1) * sizeof(*queued_fds->fds) + sizeof(*queued_fds)); /* * Allocation failure, report back. * NOTE: if it is fatal - sockets will be closed in uv__stream_close */ if (queued_fds == NULL) return UV_ENOMEM; queued_fds->size = queue_size; stream->queued_fds = queued_fds; } /* Put fd in a queue */ queued_fds->fds[queued_fds->offset++] = fd; return 0; } #if defined(__PASE__) /* on IBMi PASE the control message length can not exceed 256. 
*/ # define UV__CMSG_FD_COUNT 60 #else # define UV__CMSG_FD_COUNT 64 #endif #define UV__CMSG_FD_SIZE (UV__CMSG_FD_COUNT * sizeof(int)) static int uv__stream_recv_cmsg(uv_stream_t* stream, struct msghdr* msg) { struct cmsghdr* cmsg; for (cmsg = CMSG_FIRSTHDR(msg); cmsg != NULL; cmsg = CMSG_NXTHDR(msg, cmsg)) { char* start; char* end; int err; void* pv; int* pi; unsigned int i; unsigned int count; if (cmsg->cmsg_type != SCM_RIGHTS) { fprintf(stderr, "ignoring non-SCM_RIGHTS ancillary data: %d\n", cmsg->cmsg_type); continue; } /* silence aliasing warning */ pv = CMSG_DATA(cmsg); pi = pv; /* Count available fds */ start = (char*) cmsg; end = (char*) cmsg + cmsg->cmsg_len; count = 0; while (start + CMSG_LEN(count * sizeof(*pi)) < end) count++; assert(start + CMSG_LEN(count * sizeof(*pi)) == end); for (i = 0; i < count; i++) { /* Already has accepted fd, queue now */ if (stream->accepted_fd != -1) { err = uv__stream_queue_fd(stream, pi[i]); if (err != 0) { /* Close rest */ for (; i < count; i++) uv__close(pi[i]); return err; } } else { stream->accepted_fd = pi[i]; } } } return 0; } #ifdef __clang__ # pragma clang diagnostic push # pragma clang diagnostic ignored "-Wgnu-folding-constant" # pragma clang diagnostic ignored "-Wvla-extension" #endif static void uv__read(uv_stream_t* stream) { uv_buf_t buf; ssize_t nread; struct msghdr msg; char cmsg_space[CMSG_SPACE(UV__CMSG_FD_SIZE)]; int count; int err; int is_ipc; stream->flags &= ~UV_HANDLE_READ_PARTIAL; /* Prevent loop starvation when the data comes in as fast as (or faster than) * we can read it. XXX Need to rearm fd if we switch to edge-triggered I/O. */ count = 32; is_ipc = stream->type == UV_NAMED_PIPE && ((uv_pipe_t*) stream)->ipc; /* XXX: Maybe instead of having UV_HANDLE_READING we just test if * tcp->read_cb is NULL or not? */ while (stream->read_cb && (stream->flags & UV_HANDLE_READING) && (count-- > 0)) { assert(stream->alloc_cb != NULL); buf = uv_buf_init(NULL, 0); stream->alloc_cb((uv_handle_t*)stream, 64 * 1024, &buf); if (buf.base == NULL || buf.len == 0) { /* User indicates it can't or won't handle the read. */ stream->read_cb(stream, UV_ENOBUFS, &buf); return; } assert(buf.base != NULL); assert(uv__stream_fd(stream) >= 0); if (!is_ipc) { do { nread = read(uv__stream_fd(stream), buf.base, buf.len); } while (nread < 0 && errno == EINTR); } else { /* ipc uses recvmsg */ msg.msg_flags = 0; msg.msg_iov = (struct iovec*) &buf; msg.msg_iovlen = 1; msg.msg_name = NULL; msg.msg_namelen = 0; /* Set up to receive a descriptor even if one isn't in the message */ msg.msg_controllen = sizeof(cmsg_space); msg.msg_control = cmsg_space; do { nread = uv__recvmsg(uv__stream_fd(stream), &msg, 0); } while (nread < 0 && errno == EINTR); } if (nread < 0) { /* Error */ if (errno == EAGAIN || errno == EWOULDBLOCK) { /* Wait for the next one. */ if (stream->flags & UV_HANDLE_READING) { uv__io_start(stream->loop, &stream->io_watcher, POLLIN); uv__stream_osx_interrupt_select(stream); } stream->read_cb(stream, 0, &buf); #if defined(__CYGWIN__) || defined(__MSYS__) } else if (errno == ECONNRESET && stream->type == UV_NAMED_PIPE) { uv__stream_eof(stream, &buf); return; #endif } else { /* Error. User should call uv_close(). 
*/ stream->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); stream->read_cb(stream, UV__ERR(errno), &buf); if (stream->flags & UV_HANDLE_READING) { stream->flags &= ~UV_HANDLE_READING; uv__io_stop(stream->loop, &stream->io_watcher, POLLIN); uv__handle_stop(stream); uv__stream_osx_interrupt_select(stream); } } return; } else if (nread == 0) { uv__stream_eof(stream, &buf); return; } else { /* Successful read */ ssize_t buflen = buf.len; if (is_ipc) { err = uv__stream_recv_cmsg(stream, &msg); if (err != 0) { stream->read_cb(stream, err, &buf); return; } } #if defined(__MVS__) if (is_ipc && msg.msg_controllen > 0) { uv_buf_t blankbuf; int nread; struct iovec *old; blankbuf.base = 0; blankbuf.len = 0; old = msg.msg_iov; msg.msg_iov = (struct iovec*) &blankbuf; nread = 0; do { nread = uv__recvmsg(uv__stream_fd(stream), &msg, 0); err = uv__stream_recv_cmsg(stream, &msg); if (err != 0) { stream->read_cb(stream, err, &buf); msg.msg_iov = old; return; } } while (nread == 0 && msg.msg_controllen > 0); msg.msg_iov = old; } #endif stream->read_cb(stream, nread, &buf); /* Return if we didn't fill the buffer, there is no more data to read. */ if (nread < buflen) { stream->flags |= UV_HANDLE_READ_PARTIAL; return; } } } } #ifdef __clang__ # pragma clang diagnostic pop #endif #undef UV__CMSG_FD_COUNT #undef UV__CMSG_FD_SIZE int uv_shutdown(uv_shutdown_t* req, uv_stream_t* stream, uv_shutdown_cb cb) { assert(stream->type == UV_TCP || stream->type == UV_TTY || stream->type == UV_NAMED_PIPE); if (!(stream->flags & UV_HANDLE_WRITABLE) || stream->flags & UV_HANDLE_SHUT || stream->flags & UV_HANDLE_SHUTTING || uv__is_closing(stream)) { return UV_ENOTCONN; } assert(uv__stream_fd(stream) >= 0); /* Initialize request. The `shutdown(2)` call will always be deferred until * `uv__drain`, just before the callback is run. */ uv__req_init(stream->loop, req, UV_SHUTDOWN); req->handle = stream; req->cb = cb; stream->shutdown_req = req; stream->flags |= UV_HANDLE_SHUTTING; stream->flags &= ~UV_HANDLE_WRITABLE; if (QUEUE_EMPTY(&stream->write_queue)) uv__io_feed(stream->loop, &stream->io_watcher); return 0; } static void uv__stream_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) { uv_stream_t* stream; stream = container_of(w, uv_stream_t, io_watcher); assert(stream->type == UV_TCP || stream->type == UV_NAMED_PIPE || stream->type == UV_TTY); assert(!(stream->flags & UV_HANDLE_CLOSING)); if (stream->connect_req) { uv__stream_connect(stream); return; } assert(uv__stream_fd(stream) >= 0); /* Ignore POLLHUP here. Even if it's set, there may still be data to read. */ if (events & (POLLIN | POLLERR | POLLHUP)) uv__read(stream); if (uv__stream_fd(stream) == -1) return; /* read_cb closed stream. */ /* Short-circuit iff POLLHUP is set, the user is still interested in read * events and uv__read() reported a partial read but not EOF. If the EOF * flag is set, uv__read() called read_cb with err=UV_EOF and we don't * have to do anything. If the partial read flag is not set, we can't * report the EOF yet because there is still data to read. */ if ((events & POLLHUP) && (stream->flags & UV_HANDLE_READING) && (stream->flags & UV_HANDLE_READ_PARTIAL) && !(stream->flags & UV_HANDLE_READ_EOF)) { uv_buf_t buf = { NULL, 0 }; uv__stream_eof(stream, &buf); } if (uv__stream_fd(stream) == -1) return; /* read_cb closed stream. */ if (events & (POLLOUT | POLLERR | POLLHUP)) { uv__write(stream); uv__write_callbacks(stream); /* Write queue drained. 
*/ if (QUEUE_EMPTY(&stream->write_queue)) uv__drain(stream); } } /** * We get called here from directly following a call to connect(2). * In order to determine if we've errored out or succeeded must call * getsockopt. */ static void uv__stream_connect(uv_stream_t* stream) { int error; uv_connect_t* req = stream->connect_req; socklen_t errorsize = sizeof(int); assert(stream->type == UV_TCP || stream->type == UV_NAMED_PIPE); assert(req); if (stream->delayed_error) { /* To smooth over the differences between unixes errors that * were reported synchronously on the first connect can be delayed * until the next tick--which is now. */ error = stream->delayed_error; stream->delayed_error = 0; } else { /* Normal situation: we need to get the socket error from the kernel. */ assert(uv__stream_fd(stream) >= 0); getsockopt(uv__stream_fd(stream), SOL_SOCKET, SO_ERROR, &error, &errorsize); error = UV__ERR(error); } if (error == UV__ERR(EINPROGRESS)) return; stream->connect_req = NULL; uv__req_unregister(stream->loop, req); if (error < 0 || QUEUE_EMPTY(&stream->write_queue)) { uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT); } if (req->cb) req->cb(req, error); if (uv__stream_fd(stream) == -1) return; if (error < 0) { uv__stream_flush_write_queue(stream, UV_ECANCELED); uv__write_callbacks(stream); } } static int uv__check_before_write(uv_stream_t* stream, unsigned int nbufs, uv_stream_t* send_handle) { assert(nbufs > 0); assert((stream->type == UV_TCP || stream->type == UV_NAMED_PIPE || stream->type == UV_TTY) && "uv_write (unix) does not yet support other types of streams"); if (uv__stream_fd(stream) < 0) return UV_EBADF; if (!(stream->flags & UV_HANDLE_WRITABLE)) return UV_EPIPE; if (send_handle != NULL) { if (stream->type != UV_NAMED_PIPE || !((uv_pipe_t*)stream)->ipc) return UV_EINVAL; /* XXX We abuse uv_write2() to send over UDP handles to child processes. * Don't call uv__stream_fd() on those handles, it's a macro that on OS X * evaluates to a function that operates on a uv_stream_t with a couple of * OS X specific fields. On other Unices it does (handle)->io_watcher.fd, * which works but only by accident. */ if (uv__handle_fd((uv_handle_t*) send_handle) < 0) return UV_EBADF; #if defined(__CYGWIN__) || defined(__MSYS__) /* Cygwin recvmsg always sets msg_controllen to zero, so we cannot send it. See https://github.com/mirror/newlib-cygwin/blob/86fc4bf0/winsup/cygwin/fhandler_socket.cc#L1736-L1743 */ return UV_ENOSYS; #endif } return 0; } int uv_write2(uv_write_t* req, uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb) { int empty_queue; int err; err = uv__check_before_write(stream, nbufs, send_handle); if (err < 0) return err; /* It's legal for write_queue_size > 0 even when the write_queue is empty; * it means there are error-state requests in the write_completed_queue that * will touch up write_queue_size later, see also uv__write_req_finish(). * We could check that write_queue is empty instead but that implies making * a write() syscall when we know that the handle is in error mode. 
*/ empty_queue = (stream->write_queue_size == 0); /* Initialize the req */ uv__req_init(stream->loop, req, UV_WRITE); req->cb = cb; req->handle = stream; req->error = 0; req->send_handle = send_handle; QUEUE_INIT(&req->queue); req->bufs = req->bufsml; if (nbufs > ARRAY_SIZE(req->bufsml)) req->bufs = uv__malloc(nbufs * sizeof(bufs[0])); if (req->bufs == NULL) return UV_ENOMEM; memcpy(req->bufs, bufs, nbufs * sizeof(bufs[0])); req->nbufs = nbufs; req->write_index = 0; stream->write_queue_size += uv__count_bufs(bufs, nbufs); /* Append the request to write_queue. */ QUEUE_INSERT_TAIL(&stream->write_queue, &req->queue); /* If the queue was empty when this function began, we should attempt to * do the write immediately. Otherwise start the write_watcher and wait * for the fd to become writable. */ if (stream->connect_req) { /* Still connecting, do nothing. */ } else if (empty_queue) { uv__write(stream); } else { /* * blocking streams should never have anything in the queue. * if this assert fires then somehow the blocking stream isn't being * sufficiently flushed in uv__write. */ assert(!(stream->flags & UV_HANDLE_BLOCKING_WRITES)); uv__io_start(stream->loop, &stream->io_watcher, POLLOUT); uv__stream_osx_interrupt_select(stream); } return 0; } /* The buffers to be written must remain valid until the callback is called. * This is not required for the uv_buf_t array. */ int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb) { return uv_write2(req, handle, bufs, nbufs, NULL, cb); } int uv_try_write(uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs) { return uv_try_write2(stream, bufs, nbufs, NULL); } int uv_try_write2(uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle) { int err; /* Connecting or already writing some data */ if (stream->connect_req != NULL || stream->write_queue_size != 0) return UV_EAGAIN; err = uv__check_before_write(stream, nbufs, NULL); if (err < 0) return err; return uv__try_write(stream, bufs, nbufs, send_handle); } int uv__read_start(uv_stream_t* stream, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { assert(stream->type == UV_TCP || stream->type == UV_NAMED_PIPE || stream->type == UV_TTY); /* The UV_HANDLE_READING flag is irrelevant of the state of the stream - it * just expresses the desired state of the user. */ stream->flags |= UV_HANDLE_READING; stream->flags &= ~UV_HANDLE_READ_EOF; /* TODO: try to do the read inline? 
*/ assert(uv__stream_fd(stream) >= 0); assert(alloc_cb); stream->read_cb = read_cb; stream->alloc_cb = alloc_cb; uv__io_start(stream->loop, &stream->io_watcher, POLLIN); uv__handle_start(stream); uv__stream_osx_interrupt_select(stream); return 0; } int uv_read_stop(uv_stream_t* stream) { if (!(stream->flags & UV_HANDLE_READING)) return 0; stream->flags &= ~UV_HANDLE_READING; uv__io_stop(stream->loop, &stream->io_watcher, POLLIN); uv__handle_stop(stream); uv__stream_osx_interrupt_select(stream); stream->read_cb = NULL; stream->alloc_cb = NULL; return 0; } int uv_is_readable(const uv_stream_t* stream) { return !!(stream->flags & UV_HANDLE_READABLE); } int uv_is_writable(const uv_stream_t* stream) { return !!(stream->flags & UV_HANDLE_WRITABLE); } #if defined(__APPLE__) int uv___stream_fd(const uv_stream_t* handle) { const uv__stream_select_t* s; assert(handle->type == UV_TCP || handle->type == UV_TTY || handle->type == UV_NAMED_PIPE); s = handle->select; if (s != NULL) return s->fd; return handle->io_watcher.fd; } #endif /* defined(__APPLE__) */ void uv__stream_close(uv_stream_t* handle) { unsigned int i; uv__stream_queued_fds_t* queued_fds; #if defined(__APPLE__) /* Terminate select loop first */ if (handle->select != NULL) { uv__stream_select_t* s; s = handle->select; uv_sem_post(&s->close_sem); uv_sem_post(&s->async_sem); uv__stream_osx_interrupt_select(handle); uv_thread_join(&s->thread); uv_sem_destroy(&s->close_sem); uv_sem_destroy(&s->async_sem); uv__close(s->fake_fd); uv__close(s->int_fd); uv_close((uv_handle_t*) &s->async, uv__stream_osx_cb_close); handle->select = NULL; } #endif /* defined(__APPLE__) */ uv__io_close(handle->loop, &handle->io_watcher); uv_read_stop(handle); uv__handle_stop(handle); handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); if (handle->io_watcher.fd != -1) { /* Don't close stdio file descriptors. Nothing good comes from it. */ if (handle->io_watcher.fd > STDERR_FILENO) uv__close(handle->io_watcher.fd); handle->io_watcher.fd = -1; } if (handle->accepted_fd != -1) { uv__close(handle->accepted_fd); handle->accepted_fd = -1; } /* Close all queued fds */ if (handle->queued_fds != NULL) { queued_fds = handle->queued_fds; for (i = 0; i < queued_fds->offset; i++) uv__close(queued_fds->fds[i]); uv__free(handle->queued_fds); handle->queued_fds = NULL; } assert(!uv__io_active(&handle->io_watcher, POLLIN | POLLOUT)); } int uv_stream_set_blocking(uv_stream_t* handle, int blocking) { /* Don't need to check the file descriptor, uv__nonblock() * will fail with EBADF if it's not valid. */ return uv__nonblock(uv__stream_fd(handle), !blocking); } gevent-24.11.1/deps/libuv/src/unix/sunos.c000066400000000000000000000531361471441230600203230ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #include #ifndef SUNOS_NO_IFADDRS # include #endif #include #include #include #include #include #include #include #include #include #include #include #define PORT_FIRED 0x69 #define PORT_UNUSED 0x0 #define PORT_LOADED 0x99 #define PORT_DELETED -1 #if (!defined(_LP64)) && (_FILE_OFFSET_BITS - 0 == 64) #define PROCFS_FILE_OFFSET_BITS_HACK 1 #undef _FILE_OFFSET_BITS #else #define PROCFS_FILE_OFFSET_BITS_HACK 0 #endif #include #if (PROCFS_FILE_OFFSET_BITS_HACK - 0 == 1) #define _FILE_OFFSET_BITS 64 #endif int uv__platform_loop_init(uv_loop_t* loop) { int err; int fd; loop->fs_fd = -1; loop->backend_fd = -1; fd = port_create(); if (fd == -1) return UV__ERR(errno); err = uv__cloexec(fd, 1); if (err) { uv__close(fd); return err; } loop->backend_fd = fd; return 0; } void uv__platform_loop_delete(uv_loop_t* loop) { if (loop->fs_fd != -1) { uv__close(loop->fs_fd); loop->fs_fd = -1; } if (loop->backend_fd != -1) { uv__close(loop->backend_fd); loop->backend_fd = -1; } } int uv__io_fork(uv_loop_t* loop) { #if defined(PORT_SOURCE_FILE) if (loop->fs_fd != -1) { /* stop the watcher before we blow away its fileno */ uv__io_stop(loop, &loop->fs_event_watcher, POLLIN); } #endif uv__platform_loop_delete(loop); return uv__platform_loop_init(loop); } void uv__platform_invalidate_fd(uv_loop_t* loop, int fd) { struct port_event* events; uintptr_t i; uintptr_t nfds; assert(loop->watchers != NULL); assert(fd >= 0); events = (struct port_event*) loop->watchers[loop->nwatchers]; nfds = (uintptr_t) loop->watchers[loop->nwatchers + 1]; if (events == NULL) return; /* Invalidate events with same file descriptor */ for (i = 0; i < nfds; i++) if ((int) events[i].portev_object == fd) events[i].portev_object = -1; } int uv__io_check_fd(uv_loop_t* loop, int fd) { if (port_associate(loop->backend_fd, PORT_SOURCE_FD, fd, POLLIN, 0)) return UV__ERR(errno); if (port_dissociate(loop->backend_fd, PORT_SOURCE_FD, fd)) { perror("(libuv) port_dissociate()"); abort(); } return 0; } void uv__io_poll(uv_loop_t* loop, int timeout) { struct port_event events[1024]; struct port_event* pe; struct timespec spec; QUEUE* q; uv__io_t* w; sigset_t* pset; sigset_t set; uint64_t base; uint64_t diff; unsigned int nfds; unsigned int i; int saved_errno; int have_signals; int nevents; int count; int err; int fd; int user_timeout; int reset_timeout; if (loop->nfds == 0) { assert(QUEUE_EMPTY(&loop->watcher_queue)); return; } while (!QUEUE_EMPTY(&loop->watcher_queue)) { q = QUEUE_HEAD(&loop->watcher_queue); QUEUE_REMOVE(q); QUEUE_INIT(q); w = QUEUE_DATA(q, uv__io_t, watcher_queue); assert(w->pevents != 0); if (port_associate(loop->backend_fd, PORT_SOURCE_FD, w->fd, w->pevents, 0)) { perror("(libuv) port_associate()"); abort(); } w->events = w->pevents; } pset = NULL; if (loop->flags & UV_LOOP_BLOCK_SIGPROF) { pset = &set; sigemptyset(pset); sigaddset(pset, SIGPROF); } assert(timeout >= -1); base = loop->time; count = 48; /* Benchmarks suggest this gives the best throughput. 
*/ if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } for (;;) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); if (timeout != -1) { spec.tv_sec = timeout / 1000; spec.tv_nsec = (timeout % 1000) * 1000000; } /* Work around a kernel bug where nfds is not updated. */ events[0].portev_source = 0; nfds = 1; saved_errno = 0; if (pset != NULL) pthread_sigmask(SIG_BLOCK, pset, NULL); err = port_getn(loop->backend_fd, events, ARRAY_SIZE(events), &nfds, timeout == -1 ? NULL : &spec); if (pset != NULL) pthread_sigmask(SIG_UNBLOCK, pset, NULL); if (err) { /* Work around another kernel bug: port_getn() may return events even * on error. */ if (errno == EINTR || errno == ETIME) { saved_errno = errno; } else { perror("(libuv) port_getn()"); abort(); } } /* Update loop->time unconditionally. It's tempting to skip the update when * timeout == 0 (i.e. non-blocking poll) but there is no guarantee that the * operating system didn't reschedule our process while in the syscall. */ SAVE_ERRNO(uv__update_time(loop)); if (events[0].portev_source == 0) { if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (timeout == 0) return; if (timeout == -1) continue; goto update_timeout; } if (nfds == 0) { assert(timeout != -1); return; } have_signals = 0; nevents = 0; assert(loop->watchers != NULL); loop->watchers[loop->nwatchers] = (void*) events; loop->watchers[loop->nwatchers + 1] = (void*) (uintptr_t) nfds; for (i = 0; i < nfds; i++) { pe = events + i; fd = pe->portev_object; /* Skip invalidated events, see uv__platform_invalidate_fd */ if (fd == -1) continue; assert(fd >= 0); assert((unsigned) fd < loop->nwatchers); w = loop->watchers[fd]; /* File descriptor that we've stopped watching, ignore. */ if (w == NULL) continue; /* Run signal watchers last. This also affects child process watchers * because those are implemented in terms of signal watchers. */ if (w == &loop->signal_io_watcher) { have_signals = 1; } else { uv__metrics_update_idle_time(loop); w->cb(loop, w, pe->portev_events); } nevents++; if (w != loop->watchers[fd]) continue; /* Disabled by callback. */ /* Events Ports operates in oneshot mode, rearm timer on next run. */ if (w->pevents != 0 && QUEUE_EMPTY(&w->watcher_queue)) QUEUE_INSERT_TAIL(&loop->watcher_queue, &w->watcher_queue); } if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } if (have_signals != 0) { uv__metrics_update_idle_time(loop); loop->signal_io_watcher.cb(loop, &loop->signal_io_watcher, POLLIN); } loop->watchers[loop->nwatchers] = NULL; loop->watchers[loop->nwatchers + 1] = NULL; if (have_signals != 0) return; /* Event loop should cycle now so don't poll again. */ if (nevents != 0) { if (nfds == ARRAY_SIZE(events) && --count != 0) { /* Poll for more events but don't block this time. 
*/ timeout = 0; continue; } return; } if (saved_errno == ETIME) { assert(timeout != -1); return; } if (timeout == 0) return; if (timeout == -1) continue; update_timeout: assert(timeout > 0); diff = loop->time - base; if (diff >= (uint64_t) timeout) return; timeout -= diff; } } uint64_t uv__hrtime(uv_clocktype_t type) { return gethrtime(); } /* * We could use a static buffer for the path manipulations that we need outside * of the function, but this function could be called by multiple consumers and * we don't want to potentially create a race condition in the use of snprintf. */ int uv_exepath(char* buffer, size_t* size) { ssize_t res; char buf[128]; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; snprintf(buf, sizeof(buf), "/proc/%lu/path/a.out", (unsigned long) getpid()); res = *size - 1; if (res > 0) res = readlink(buf, buffer, res); if (res == -1) return UV__ERR(errno); buffer[res] = '\0'; *size = res; return 0; } uint64_t uv_get_free_memory(void) { return (uint64_t) sysconf(_SC_PAGESIZE) * sysconf(_SC_AVPHYS_PAGES); } uint64_t uv_get_total_memory(void) { return (uint64_t) sysconf(_SC_PAGESIZE) * sysconf(_SC_PHYS_PAGES); } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } void uv_loadavg(double avg[3]) { (void) getloadavg(avg, 3); } #if defined(PORT_SOURCE_FILE) static int uv__fs_event_rearm(uv_fs_event_t *handle) { if (handle->fd == PORT_DELETED) return UV_EBADF; if (port_associate(handle->loop->fs_fd, PORT_SOURCE_FILE, (uintptr_t) &handle->fo, FILE_ATTRIB | FILE_MODIFIED, handle) == -1) { return UV__ERR(errno); } handle->fd = PORT_LOADED; return 0; } static void uv__fs_event_read(uv_loop_t* loop, uv__io_t* w, unsigned int revents) { uv_fs_event_t *handle = NULL; timespec_t timeout; port_event_t pe; int events; int r; (void) w; (void) revents; do { uint_t n = 1; /* * Note that our use of port_getn() here (and not port_get()) is deliberate: * there is a bug in event ports (Sun bug 6456558) whereby a zeroed timeout * causes port_get() to return success instead of ETIME when there aren't * actually any events (!); by using port_getn() in lieu of port_get(), * we can at least workaround the bug by checking for zero returned events * and treating it as we would ETIME. 
*/ do { memset(&timeout, 0, sizeof timeout); r = port_getn(loop->fs_fd, &pe, 1, &n, &timeout); } while (r == -1 && errno == EINTR); if ((r == -1 && errno == ETIME) || n == 0) break; handle = (uv_fs_event_t*) pe.portev_user; assert((r == 0) && "unexpected port_get() error"); if (uv__is_closing(handle)) { uv__handle_stop(handle); uv__make_close_pending((uv_handle_t*) handle); break; } events = 0; if (pe.portev_events & (FILE_ATTRIB | FILE_MODIFIED)) events |= UV_CHANGE; if (pe.portev_events & ~(FILE_ATTRIB | FILE_MODIFIED)) events |= UV_RENAME; assert(events != 0); handle->fd = PORT_FIRED; handle->cb(handle, NULL, events, 0); if (handle->fd != PORT_DELETED) { r = uv__fs_event_rearm(handle); if (r != 0) handle->cb(handle, NULL, 0, r); } } while (handle->fd != PORT_DELETED); } int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { uv__handle_init(loop, (uv_handle_t*)handle, UV_FS_EVENT); return 0; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags) { int portfd; int first_run; int err; if (uv__is_active(handle)) return UV_EINVAL; first_run = 0; if (handle->loop->fs_fd == -1) { portfd = port_create(); if (portfd == -1) return UV__ERR(errno); handle->loop->fs_fd = portfd; first_run = 1; } uv__handle_start(handle); handle->path = uv__strdup(path); handle->fd = PORT_UNUSED; handle->cb = cb; memset(&handle->fo, 0, sizeof handle->fo); handle->fo.fo_name = handle->path; err = uv__fs_event_rearm(handle); if (err != 0) { uv_fs_event_stop(handle); return err; } if (first_run) { uv__io_init(&handle->loop->fs_event_watcher, uv__fs_event_read, portfd); uv__io_start(handle->loop, &handle->loop->fs_event_watcher, POLLIN); } return 0; } static int uv__fs_event_stop(uv_fs_event_t* handle) { int ret = 0; if (!uv__is_active(handle)) return 0; if (handle->fd == PORT_LOADED) { ret = port_dissociate(handle->loop->fs_fd, PORT_SOURCE_FILE, (uintptr_t) &handle->fo); } handle->fd = PORT_DELETED; uv__free(handle->path); handle->path = NULL; handle->fo.fo_name = NULL; if (ret == 0) uv__handle_stop(handle); return ret; } int uv_fs_event_stop(uv_fs_event_t* handle) { (void) uv__fs_event_stop(handle); return 0; } void uv__fs_event_close(uv_fs_event_t* handle) { /* * If we were unable to dissociate the port here, then it is most likely * that there is a pending queued event. When this happens, we don't want * to complete the close as it will free the underlying memory for the * handle, causing a use-after-free problem when the event is processed. * We defer the final cleanup until after the event is consumed in * uv__fs_event_read(). */ if (uv__fs_event_stop(handle) == 0) uv__make_close_pending((uv_handle_t*) handle); } #else /* !defined(PORT_SOURCE_FILE) */ int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { return UV_ENOSYS; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* filename, unsigned int flags) { return UV_ENOSYS; } int uv_fs_event_stop(uv_fs_event_t* handle) { return UV_ENOSYS; } void uv__fs_event_close(uv_fs_event_t* handle) { UNREACHABLE(); } #endif /* defined(PORT_SOURCE_FILE) */ int uv_resident_set_memory(size_t* rss) { psinfo_t psinfo; int err; int fd; fd = open("/proc/self/psinfo", O_RDONLY); if (fd == -1) return UV__ERR(errno); /* FIXME(bnoordhuis) Handle EINTR. 
*/ err = UV_EINVAL; if (read(fd, &psinfo, sizeof(psinfo)) == sizeof(psinfo)) { *rss = (size_t)psinfo.pr_rssize * 1024; err = 0; } uv__close(fd); return err; } int uv_uptime(double* uptime) { kstat_ctl_t *kc; kstat_t *ksp; kstat_named_t *knp; long hz = sysconf(_SC_CLK_TCK); kc = kstat_open(); if (kc == NULL) return UV_EPERM; ksp = kstat_lookup(kc, (char*) "unix", 0, (char*) "system_misc"); if (kstat_read(kc, ksp, NULL) == -1) { *uptime = -1; } else { knp = (kstat_named_t*) kstat_data_lookup(ksp, (char*) "clk_intr"); *uptime = knp->value.ul / hz; } kstat_close(kc); return 0; } int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count) { int lookup_instance; kstat_ctl_t *kc; kstat_t *ksp; kstat_named_t *knp; uv_cpu_info_t* cpu_info; kc = kstat_open(); if (kc == NULL) return UV_EPERM; /* Get count of cpus */ lookup_instance = 0; while ((ksp = kstat_lookup(kc, (char*) "cpu_info", lookup_instance, NULL))) { lookup_instance++; } *cpu_infos = uv__malloc(lookup_instance * sizeof(**cpu_infos)); if (!(*cpu_infos)) { kstat_close(kc); return UV_ENOMEM; } *count = lookup_instance; cpu_info = *cpu_infos; lookup_instance = 0; while ((ksp = kstat_lookup(kc, (char*) "cpu_info", lookup_instance, NULL))) { if (kstat_read(kc, ksp, NULL) == -1) { cpu_info->speed = 0; cpu_info->model = NULL; } else { knp = kstat_data_lookup(ksp, (char*) "clock_MHz"); assert(knp->data_type == KSTAT_DATA_INT32 || knp->data_type == KSTAT_DATA_INT64); cpu_info->speed = (knp->data_type == KSTAT_DATA_INT32) ? knp->value.i32 : knp->value.i64; knp = kstat_data_lookup(ksp, (char*) "brand"); assert(knp->data_type == KSTAT_DATA_STRING); cpu_info->model = uv__strdup(KSTAT_NAMED_STR_PTR(knp)); } lookup_instance++; cpu_info++; } cpu_info = *cpu_infos; lookup_instance = 0; for (;;) { ksp = kstat_lookup(kc, (char*) "cpu", lookup_instance, (char*) "sys"); if (ksp == NULL) break; if (kstat_read(kc, ksp, NULL) == -1) { cpu_info->cpu_times.user = 0; cpu_info->cpu_times.nice = 0; cpu_info->cpu_times.sys = 0; cpu_info->cpu_times.idle = 0; cpu_info->cpu_times.irq = 0; } else { knp = kstat_data_lookup(ksp, (char*) "cpu_ticks_user"); assert(knp->data_type == KSTAT_DATA_UINT64); cpu_info->cpu_times.user = knp->value.ui64; knp = kstat_data_lookup(ksp, (char*) "cpu_ticks_kernel"); assert(knp->data_type == KSTAT_DATA_UINT64); cpu_info->cpu_times.sys = knp->value.ui64; knp = kstat_data_lookup(ksp, (char*) "cpu_ticks_idle"); assert(knp->data_type == KSTAT_DATA_UINT64); cpu_info->cpu_times.idle = knp->value.ui64; knp = kstat_data_lookup(ksp, (char*) "intr"); assert(knp->data_type == KSTAT_DATA_UINT64); cpu_info->cpu_times.irq = knp->value.ui64; cpu_info->cpu_times.nice = 0; } lookup_instance++; cpu_info++; } kstat_close(kc); return 0; } #ifdef SUNOS_NO_IFADDRS int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { *count = 0; *addresses = NULL; return UV_ENOSYS; } #else /* SUNOS_NO_IFADDRS */ /* * Inspired By: * https://blogs.oracle.com/paulie/entry/retrieving_mac_address_in_solaris * http://www.pauliesworld.org/project/getmac.c */ static int uv__set_phys_addr(uv_interface_address_t* address, struct ifaddrs* ent) { struct sockaddr_dl* sa_addr; int sockfd; size_t i; struct arpreq arpreq; /* This appears to only work as root */ sa_addr = (struct sockaddr_dl*)(ent->ifa_addr); memcpy(address->phys_addr, LLADDR(sa_addr), sizeof(address->phys_addr)); for (i = 0; i < sizeof(address->phys_addr); i++) { /* Check that all bytes of phys_addr are zero. 
*/ if (address->phys_addr[i] != 0) return 0; } memset(&arpreq, 0, sizeof(arpreq)); if (address->address.address4.sin_family == AF_INET) { struct sockaddr_in* sin = ((struct sockaddr_in*)&arpreq.arp_pa); sin->sin_addr.s_addr = address->address.address4.sin_addr.s_addr; } else if (address->address.address4.sin_family == AF_INET6) { struct sockaddr_in6* sin = ((struct sockaddr_in6*)&arpreq.arp_pa); memcpy(sin->sin6_addr.s6_addr, address->address.address6.sin6_addr.s6_addr, sizeof(address->address.address6.sin6_addr.s6_addr)); } else { return 0; } sockfd = socket(AF_INET, SOCK_DGRAM, 0); if (sockfd < 0) return UV__ERR(errno); if (ioctl(sockfd, SIOCGARP, (char*)&arpreq) == -1) { uv__close(sockfd); return UV__ERR(errno); } memcpy(address->phys_addr, arpreq.arp_ha.sa_data, sizeof(address->phys_addr)); uv__close(sockfd); return 0; } static int uv__ifaddr_exclude(struct ifaddrs *ent) { if (!((ent->ifa_flags & IFF_UP) && (ent->ifa_flags & IFF_RUNNING))) return 1; if (ent->ifa_addr == NULL) return 1; if (ent->ifa_addr->sa_family != AF_INET && ent->ifa_addr->sa_family != AF_INET6) return 1; return 0; } int uv_interface_addresses(uv_interface_address_t** addresses, int* count) { uv_interface_address_t* address; struct ifaddrs* addrs; struct ifaddrs* ent; *count = 0; *addresses = NULL; if (getifaddrs(&addrs)) return UV__ERR(errno); /* Count the number of interfaces */ for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent)) continue; (*count)++; } if (*count == 0) { freeifaddrs(addrs); return 0; } *addresses = uv__malloc(*count * sizeof(**addresses)); if (!(*addresses)) { freeifaddrs(addrs); return UV_ENOMEM; } address = *addresses; for (ent = addrs; ent != NULL; ent = ent->ifa_next) { if (uv__ifaddr_exclude(ent)) continue; address->name = uv__strdup(ent->ifa_name); if (ent->ifa_addr->sa_family == AF_INET6) { address->address.address6 = *((struct sockaddr_in6*) ent->ifa_addr); } else { address->address.address4 = *((struct sockaddr_in*) ent->ifa_addr); } if (ent->ifa_netmask->sa_family == AF_INET6) { address->netmask.netmask6 = *((struct sockaddr_in6*) ent->ifa_netmask); } else { address->netmask.netmask4 = *((struct sockaddr_in*) ent->ifa_netmask); } address->is_internal = !!((ent->ifa_flags & IFF_PRIVATE) || (ent->ifa_flags & IFF_LOOPBACK)); uv__set_phys_addr(address, ent); address++; } freeifaddrs(addrs); return 0; } #endif /* SUNOS_NO_IFADDRS */ void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { int i; for (i = 0; i < count; i++) { uv__free(addresses[i].name); } uv__free(addresses); } #if !defined(_POSIX_VERSION) || _POSIX_VERSION < 200809L size_t strnlen(const char* s, size_t maxlen) { const char* end; end = memchr(s, '\0', maxlen); if (end == NULL) return maxlen; return end - s; } #endif gevent-24.11.1/deps/libuv/src/unix/sysinfo-loadavg.c000066400000000000000000000026601471441230600222550ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include void uv_loadavg(double avg[3]) { struct sysinfo info; if (sysinfo(&info) < 0) return; avg[0] = (double) info.loads[0] / 65536.0; avg[1] = (double) info.loads[1] / 65536.0; avg[2] = (double) info.loads[2] / 65536.0; } gevent-24.11.1/deps/libuv/src/unix/sysinfo-memory.c000066400000000000000000000030001471441230600221350ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include uint64_t uv_get_free_memory(void) { struct sysinfo info; if (sysinfo(&info) == 0) return (uint64_t) info.freeram * info.mem_unit; return 0; } uint64_t uv_get_total_memory(void) { struct sysinfo info; if (sysinfo(&info) == 0) return (uint64_t) info.totalram * info.mem_unit; return 0; } gevent-24.11.1/deps/libuv/src/unix/tcp.c000066400000000000000000000323301471441230600177330ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include static int new_socket(uv_tcp_t* handle, int domain, unsigned long flags) { struct sockaddr_storage saddr; socklen_t slen; int sockfd; int err; err = uv__socket(domain, SOCK_STREAM, 0); if (err < 0) return err; sockfd = err; err = uv__stream_open((uv_stream_t*) handle, sockfd, flags); if (err) { uv__close(sockfd); return err; } if (flags & UV_HANDLE_BOUND) { /* Bind this new socket to an arbitrary port */ slen = sizeof(saddr); memset(&saddr, 0, sizeof(saddr)); if (getsockname(uv__stream_fd(handle), (struct sockaddr*) &saddr, &slen)) { uv__close(sockfd); return UV__ERR(errno); } if (bind(uv__stream_fd(handle), (struct sockaddr*) &saddr, slen)) { uv__close(sockfd); return UV__ERR(errno); } } return 0; } static int maybe_new_socket(uv_tcp_t* handle, int domain, unsigned long flags) { struct sockaddr_storage saddr; socklen_t slen; if (domain == AF_UNSPEC) { handle->flags |= flags; return 0; } if (uv__stream_fd(handle) != -1) { if (flags & UV_HANDLE_BOUND) { if (handle->flags & UV_HANDLE_BOUND) { /* It is already bound to a port. */ handle->flags |= flags; return 0; } /* Query to see if tcp socket is bound. */ slen = sizeof(saddr); memset(&saddr, 0, sizeof(saddr)); if (getsockname(uv__stream_fd(handle), (struct sockaddr*) &saddr, &slen)) return UV__ERR(errno); if ((saddr.ss_family == AF_INET6 && ((struct sockaddr_in6*) &saddr)->sin6_port != 0) || (saddr.ss_family == AF_INET && ((struct sockaddr_in*) &saddr)->sin_port != 0)) { /* Handle is already bound to a port. */ handle->flags |= flags; return 0; } /* Bind to arbitrary port */ if (bind(uv__stream_fd(handle), (struct sockaddr*) &saddr, slen)) return UV__ERR(errno); } handle->flags |= flags; return 0; } return new_socket(handle, domain, flags); } int uv_tcp_init_ex(uv_loop_t* loop, uv_tcp_t* tcp, unsigned int flags) { int domain; /* Use the lower 8 bits for the domain */ domain = flags & 0xFF; if (domain != AF_INET && domain != AF_INET6 && domain != AF_UNSPEC) return UV_EINVAL; if (flags & ~0xFF) return UV_EINVAL; uv__stream_init(loop, (uv_stream_t*)tcp, UV_TCP); /* If anything fails beyond this point we need to remove the handle from * the handle queue, since it was added by uv__handle_init in uv_stream_init. 
*/ if (domain != AF_UNSPEC) { int err = maybe_new_socket(tcp, domain, 0); if (err) { QUEUE_REMOVE(&tcp->handle_queue); return err; } } return 0; } int uv_tcp_init(uv_loop_t* loop, uv_tcp_t* tcp) { return uv_tcp_init_ex(loop, tcp, AF_UNSPEC); } int uv__tcp_bind(uv_tcp_t* tcp, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { int err; int on; /* Cannot set IPv6-only mode on non-IPv6 socket. */ if ((flags & UV_TCP_IPV6ONLY) && addr->sa_family != AF_INET6) return UV_EINVAL; err = maybe_new_socket(tcp, addr->sa_family, 0); if (err) return err; on = 1; if (setsockopt(tcp->io_watcher.fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on))) return UV__ERR(errno); #ifndef __OpenBSD__ #ifdef IPV6_V6ONLY if (addr->sa_family == AF_INET6) { on = (flags & UV_TCP_IPV6ONLY) != 0; if (setsockopt(tcp->io_watcher.fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof on) == -1) { #if defined(__MVS__) if (errno == EOPNOTSUPP) return UV_EINVAL; #endif return UV__ERR(errno); } } #endif #endif errno = 0; err = bind(tcp->io_watcher.fd, addr, addrlen); if (err == -1 && errno != EADDRINUSE) { if (errno == EAFNOSUPPORT) /* OSX, other BSDs and SunoS fail with EAFNOSUPPORT when binding a * socket created with AF_INET to an AF_INET6 address or vice versa. */ return UV_EINVAL; return UV__ERR(errno); } tcp->delayed_error = (err == -1) ? UV__ERR(errno) : 0; tcp->flags |= UV_HANDLE_BOUND; if (addr->sa_family == AF_INET6) tcp->flags |= UV_HANDLE_IPV6; return 0; } int uv__tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, uv_connect_cb cb) { int err; int r; assert(handle->type == UV_TCP); if (handle->connect_req != NULL) return UV_EALREADY; /* FIXME(bnoordhuis) UV_EINVAL or maybe UV_EBUSY. */ if (handle->delayed_error != 0) goto out; err = maybe_new_socket(handle, addr->sa_family, UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); if (err) return err; do { errno = 0; r = connect(uv__stream_fd(handle), addr, addrlen); } while (r == -1 && errno == EINTR); /* We not only check the return value, but also check the errno != 0. * Because in rare cases connect() will return -1 but the errno * is 0 (for example, on Android 4.3, OnePlus phone A0001_12_150227) * and actually the tcp three-way handshake is completed. */ if (r == -1 && errno != 0) { if (errno == EINPROGRESS) ; /* not an error */ else if (errno == ECONNREFUSED #if defined(__OpenBSD__) || errno == EINVAL #endif ) /* If we get ECONNREFUSED (Solaris) or EINVAL (OpenBSD) wait until the * next tick to report the error. Solaris and OpenBSD wants to report * immediately -- other unixes want to wait. 
*/ handle->delayed_error = UV__ERR(ECONNREFUSED); else return UV__ERR(errno); } out: uv__req_init(handle->loop, req, UV_CONNECT); req->cb = cb; req->handle = (uv_stream_t*) handle; QUEUE_INIT(&req->queue); handle->connect_req = req; uv__io_start(handle->loop, &handle->io_watcher, POLLOUT); if (handle->delayed_error) uv__io_feed(handle->loop, &handle->io_watcher); return 0; } int uv_tcp_open(uv_tcp_t* handle, uv_os_sock_t sock) { int err; if (uv__fd_exists(handle->loop, sock)) return UV_EEXIST; err = uv__nonblock(sock, 1); if (err) return err; return uv__stream_open((uv_stream_t*)handle, sock, UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); } int uv_tcp_getsockname(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) { if (handle->delayed_error) return handle->delayed_error; return uv__getsockpeername((const uv_handle_t*) handle, getsockname, name, namelen); } int uv_tcp_getpeername(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) { if (handle->delayed_error) return handle->delayed_error; return uv__getsockpeername((const uv_handle_t*) handle, getpeername, name, namelen); } int uv_tcp_close_reset(uv_tcp_t* handle, uv_close_cb close_cb) { int fd; struct linger l = { 1, 0 }; /* Disallow setting SO_LINGER to zero due to some platform inconsistencies */ if (handle->flags & UV_HANDLE_SHUTTING) return UV_EINVAL; fd = uv__stream_fd(handle); if (0 != setsockopt(fd, SOL_SOCKET, SO_LINGER, &l, sizeof(l))) { if (errno == EINVAL) { /* Open Group Specifications Issue 7, 2018 edition states that * EINVAL may mean the socket has been shut down already. * Behavior observed on Solaris, illumos and macOS. */ errno = 0; } else { return UV__ERR(errno); } } uv_close((uv_handle_t*) handle, close_cb); return 0; } int uv__tcp_listen(uv_tcp_t* tcp, int backlog, uv_connection_cb cb) { static int single_accept_cached = -1; unsigned long flags; int single_accept; int err; if (tcp->delayed_error) return tcp->delayed_error; single_accept = uv__load_relaxed(&single_accept_cached); if (single_accept == -1) { const char* val = getenv("UV_TCP_SINGLE_ACCEPT"); single_accept = (val != NULL && atoi(val) != 0); /* Off by default. */ uv__store_relaxed(&single_accept_cached, single_accept); } if (single_accept) tcp->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT; flags = 0; #if defined(__MVS__) /* on zOS the listen call does not bind automatically if the socket is unbound. Hence binding to an arbitrary port must be done manually */ flags |= UV_HANDLE_BOUND; #endif err = maybe_new_socket(tcp, AF_INET, flags); if (err) return err; if (listen(tcp->io_watcher.fd, backlog)) return UV__ERR(errno); tcp->connection_cb = cb; tcp->flags |= UV_HANDLE_BOUND; /* Start listening for connections. 
*/ tcp->io_watcher.cb = uv__server_io; uv__io_start(tcp->loop, &tcp->io_watcher, POLLIN); return 0; } int uv__tcp_nodelay(int fd, int on) { if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on))) return UV__ERR(errno); return 0; } int uv__tcp_keepalive(int fd, int on, unsigned int delay) { if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on))) return UV__ERR(errno); #ifdef TCP_KEEPIDLE if (on) { int intvl = 1; /* 1 second; same as default on Win32 */ int cnt = 10; /* 10 retries; same as hardcoded on Win32 */ if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &delay, sizeof(delay))) return UV__ERR(errno); if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl))) return UV__ERR(errno); if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt))) return UV__ERR(errno); } #endif /* Solaris/SmartOS, if you don't support keep-alive, * then don't advertise it in your system headers... */ /* FIXME(bnoordhuis) That's possibly because sizeof(delay) should be 1. */ #if defined(TCP_KEEPALIVE) && !defined(__sun) if (on && setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &delay, sizeof(delay))) return UV__ERR(errno); #endif return 0; } int uv_tcp_nodelay(uv_tcp_t* handle, int on) { int err; if (uv__stream_fd(handle) != -1) { err = uv__tcp_nodelay(uv__stream_fd(handle), on); if (err) return err; } if (on) handle->flags |= UV_HANDLE_TCP_NODELAY; else handle->flags &= ~UV_HANDLE_TCP_NODELAY; return 0; } int uv_tcp_keepalive(uv_tcp_t* handle, int on, unsigned int delay) { int err; if (uv__stream_fd(handle) != -1) { err =uv__tcp_keepalive(uv__stream_fd(handle), on, delay); if (err) return err; } if (on) handle->flags |= UV_HANDLE_TCP_KEEPALIVE; else handle->flags &= ~UV_HANDLE_TCP_KEEPALIVE; /* TODO Store delay if uv__stream_fd(handle) == -1 but don't want to enlarge * uv_tcp_t with an int that's almost never used... */ return 0; } int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable) { if (enable) handle->flags &= ~UV_HANDLE_TCP_SINGLE_ACCEPT; else handle->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT; return 0; } void uv__tcp_close(uv_tcp_t* handle) { uv__stream_close((uv_stream_t*)handle); } int uv_socketpair(int type, int protocol, uv_os_sock_t fds[2], int flags0, int flags1) { uv_os_sock_t temp[2]; int err; #if defined(__FreeBSD__) || defined(__linux__) int flags; flags = type | SOCK_CLOEXEC; if ((flags0 & UV_NONBLOCK_PIPE) && (flags1 & UV_NONBLOCK_PIPE)) flags |= SOCK_NONBLOCK; if (socketpair(AF_UNIX, flags, protocol, temp)) return UV__ERR(errno); if (flags & UV_FS_O_NONBLOCK) { fds[0] = temp[0]; fds[1] = temp[1]; return 0; } #else if (socketpair(AF_UNIX, type, protocol, temp)) return UV__ERR(errno); if ((err = uv__cloexec(temp[0], 1))) goto fail; if ((err = uv__cloexec(temp[1], 1))) goto fail; #endif if (flags0 & UV_NONBLOCK_PIPE) if ((err = uv__nonblock(temp[0], 1))) goto fail; if (flags1 & UV_NONBLOCK_PIPE) if ((err = uv__nonblock(temp[1], 1))) goto fail; fds[0] = temp[0]; fds[1] = temp[1]; return 0; fail: uv__close(temp[0]); uv__close(temp[1]); return err; } gevent-24.11.1/deps/libuv/src/unix/thread.c000066400000000000000000000434741471441230600204270ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include /* getrlimit() */ #include /* getpagesize() */ #include #ifdef __MVS__ #include #include #endif #if defined(__GLIBC__) && !defined(__UCLIBC__) #include /* gnu_get_libc_version() */ #endif #undef NANOSEC #define NANOSEC ((uint64_t) 1e9) #if defined(PTHREAD_BARRIER_SERIAL_THREAD) STATIC_ASSERT(sizeof(uv_barrier_t) == sizeof(pthread_barrier_t)); #endif /* Note: guard clauses should match uv_barrier_t's in include/uv/unix.h. */ #if defined(_AIX) || \ defined(__OpenBSD__) || \ !defined(PTHREAD_BARRIER_SERIAL_THREAD) int uv_barrier_init(uv_barrier_t* barrier, unsigned int count) { struct _uv_barrier* b; int rc; if (barrier == NULL || count == 0) return UV_EINVAL; b = uv__malloc(sizeof(*b)); if (b == NULL) return UV_ENOMEM; b->in = 0; b->out = 0; b->threshold = count; rc = uv_mutex_init(&b->mutex); if (rc != 0) goto error2; rc = uv_cond_init(&b->cond); if (rc != 0) goto error; barrier->b = b; return 0; error: uv_mutex_destroy(&b->mutex); error2: uv__free(b); return rc; } int uv_barrier_wait(uv_barrier_t* barrier) { struct _uv_barrier* b; int last; if (barrier == NULL || barrier->b == NULL) return UV_EINVAL; b = barrier->b; uv_mutex_lock(&b->mutex); if (++b->in == b->threshold) { b->in = 0; b->out = b->threshold; uv_cond_signal(&b->cond); } else { do uv_cond_wait(&b->cond, &b->mutex); while (b->in != 0); } last = (--b->out == 0); uv_cond_signal(&b->cond); uv_mutex_unlock(&b->mutex); return last; } void uv_barrier_destroy(uv_barrier_t* barrier) { struct _uv_barrier* b; b = barrier->b; uv_mutex_lock(&b->mutex); assert(b->in == 0); while (b->out != 0) uv_cond_wait(&b->cond, &b->mutex); if (b->in != 0) abort(); uv_mutex_unlock(&b->mutex); uv_mutex_destroy(&b->mutex); uv_cond_destroy(&b->cond); uv__free(barrier->b); barrier->b = NULL; } #else int uv_barrier_init(uv_barrier_t* barrier, unsigned int count) { return UV__ERR(pthread_barrier_init(barrier, NULL, count)); } int uv_barrier_wait(uv_barrier_t* barrier) { int rc; rc = pthread_barrier_wait(barrier); if (rc != 0) if (rc != PTHREAD_BARRIER_SERIAL_THREAD) abort(); return rc == PTHREAD_BARRIER_SERIAL_THREAD; } void uv_barrier_destroy(uv_barrier_t* barrier) { if (pthread_barrier_destroy(barrier)) abort(); } #endif /* Musl's PTHREAD_STACK_MIN is 2 KB on all architectures, which is * too small to safely receive signals on. 
* * Musl's PTHREAD_STACK_MIN + MINSIGSTKSZ == 8192 on arm64 (which has * the largest MINSIGSTKSZ of the architectures that musl supports) so * let's use that as a lower bound. * * We use a hardcoded value because PTHREAD_STACK_MIN + MINSIGSTKSZ * is between 28 and 133 KB when compiling against glibc, depending * on the architecture. */ static size_t uv__min_stack_size(void) { static const size_t min = 8192; #ifdef PTHREAD_STACK_MIN /* Not defined on NetBSD. */ if (min < (size_t) PTHREAD_STACK_MIN) return PTHREAD_STACK_MIN; #endif /* PTHREAD_STACK_MIN */ return min; } /* On Linux, threads created by musl have a much smaller stack than threads * created by glibc (80 vs. 2048 or 4096 kB.) Follow glibc for consistency. */ static size_t uv__default_stack_size(void) { #if !defined(__linux__) return 0; #elif defined(__PPC__) || defined(__ppc__) || defined(__powerpc__) return 4 << 20; /* glibc default. */ #else return 2 << 20; /* glibc default. */ #endif } /* On MacOS, threads other than the main thread are created with a reduced * stack size by default. Adjust to RLIMIT_STACK aligned to the page size. */ size_t uv__thread_stack_size(void) { #if defined(__APPLE__) || defined(__linux__) struct rlimit lim; /* getrlimit() can fail on some aarch64 systems due to a glibc bug where * the system call wrapper invokes the wrong system call. Don't treat * that as fatal, just use the default stack size instead. */ if (getrlimit(RLIMIT_STACK, &lim)) return uv__default_stack_size(); if (lim.rlim_cur == RLIM_INFINITY) return uv__default_stack_size(); /* pthread_attr_setstacksize() expects page-aligned values. */ lim.rlim_cur -= lim.rlim_cur % (rlim_t) getpagesize(); if (lim.rlim_cur >= (rlim_t) uv__min_stack_size()) return lim.rlim_cur; #endif return uv__default_stack_size(); } int uv_thread_create(uv_thread_t *tid, void (*entry)(void *arg), void *arg) { uv_thread_options_t params; params.flags = UV_THREAD_NO_FLAGS; return uv_thread_create_ex(tid, ¶ms, entry, arg); } int uv_thread_create_ex(uv_thread_t* tid, const uv_thread_options_t* params, void (*entry)(void *arg), void *arg) { int err; pthread_attr_t* attr; pthread_attr_t attr_storage; size_t pagesize; size_t stack_size; size_t min_stack_size; /* Used to squelch a -Wcast-function-type warning. */ union { void (*in)(void*); void* (*out)(void*); } f; stack_size = params->flags & UV_THREAD_HAS_STACK_SIZE ? params->stack_size : 0; attr = NULL; if (stack_size == 0) { stack_size = uv__thread_stack_size(); } else { pagesize = (size_t)getpagesize(); /* Round up to the nearest page boundary. 
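 *
 * (The explicit-size path handled here backs the public
 *  uv_thread_create_ex() entry point.  A minimal, illustrative caller --
 *  worker() and the 4 MB figure are our own placeholders, not part of
 *  this file -- looks like:
 *
 *    static void worker(void* arg) { (void) arg; }
 *
 *    uv_thread_t tid;
 *    uv_thread_options_t opts;
 *    opts.flags = UV_THREAD_HAS_STACK_SIZE;
 *    opts.stack_size = 4 * 1024 * 1024;        // bytes, rounded up here
 *    uv_thread_create_ex(&tid, &opts, worker, NULL);
 *    uv_thread_join(&tid);
 *  )
 *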
*/ stack_size = (stack_size + pagesize - 1) &~ (pagesize - 1); min_stack_size = uv__min_stack_size(); if (stack_size < min_stack_size) stack_size = min_stack_size; } if (stack_size > 0) { attr = &attr_storage; if (pthread_attr_init(attr)) abort(); if (pthread_attr_setstacksize(attr, stack_size)) abort(); } f.in = entry; err = pthread_create(tid, attr, f.out, arg); if (attr != NULL) pthread_attr_destroy(attr); return UV__ERR(err); } uv_thread_t uv_thread_self(void) { return pthread_self(); } int uv_thread_join(uv_thread_t *tid) { return UV__ERR(pthread_join(*tid, NULL)); } int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2) { return pthread_equal(*t1, *t2); } int uv_mutex_init(uv_mutex_t* mutex) { #if defined(NDEBUG) || !defined(PTHREAD_MUTEX_ERRORCHECK) return UV__ERR(pthread_mutex_init(mutex, NULL)); #else pthread_mutexattr_t attr; int err; if (pthread_mutexattr_init(&attr)) abort(); if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK)) abort(); err = pthread_mutex_init(mutex, &attr); if (pthread_mutexattr_destroy(&attr)) abort(); return UV__ERR(err); #endif } int uv_mutex_init_recursive(uv_mutex_t* mutex) { pthread_mutexattr_t attr; int err; if (pthread_mutexattr_init(&attr)) abort(); if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) abort(); err = pthread_mutex_init(mutex, &attr); if (pthread_mutexattr_destroy(&attr)) abort(); return UV__ERR(err); } void uv_mutex_destroy(uv_mutex_t* mutex) { if (pthread_mutex_destroy(mutex)) abort(); } void uv_mutex_lock(uv_mutex_t* mutex) { if (pthread_mutex_lock(mutex)) abort(); } int uv_mutex_trylock(uv_mutex_t* mutex) { int err; err = pthread_mutex_trylock(mutex); if (err) { if (err != EBUSY && err != EAGAIN) abort(); return UV_EBUSY; } return 0; } void uv_mutex_unlock(uv_mutex_t* mutex) { if (pthread_mutex_unlock(mutex)) abort(); } int uv_rwlock_init(uv_rwlock_t* rwlock) { return UV__ERR(pthread_rwlock_init(rwlock, NULL)); } void uv_rwlock_destroy(uv_rwlock_t* rwlock) { if (pthread_rwlock_destroy(rwlock)) abort(); } void uv_rwlock_rdlock(uv_rwlock_t* rwlock) { if (pthread_rwlock_rdlock(rwlock)) abort(); } int uv_rwlock_tryrdlock(uv_rwlock_t* rwlock) { int err; err = pthread_rwlock_tryrdlock(rwlock); if (err) { if (err != EBUSY && err != EAGAIN) abort(); return UV_EBUSY; } return 0; } void uv_rwlock_rdunlock(uv_rwlock_t* rwlock) { if (pthread_rwlock_unlock(rwlock)) abort(); } void uv_rwlock_wrlock(uv_rwlock_t* rwlock) { if (pthread_rwlock_wrlock(rwlock)) abort(); } int uv_rwlock_trywrlock(uv_rwlock_t* rwlock) { int err; err = pthread_rwlock_trywrlock(rwlock); if (err) { if (err != EBUSY && err != EAGAIN) abort(); return UV_EBUSY; } return 0; } void uv_rwlock_wrunlock(uv_rwlock_t* rwlock) { if (pthread_rwlock_unlock(rwlock)) abort(); } void uv_once(uv_once_t* guard, void (*callback)(void)) { if (pthread_once(guard, callback)) abort(); } #if defined(__APPLE__) && defined(__MACH__) int uv_sem_init(uv_sem_t* sem, unsigned int value) { kern_return_t err; err = semaphore_create(mach_task_self(), sem, SYNC_POLICY_FIFO, value); if (err == KERN_SUCCESS) return 0; if (err == KERN_INVALID_ARGUMENT) return UV_EINVAL; if (err == KERN_RESOURCE_SHORTAGE) return UV_ENOMEM; abort(); return UV_EINVAL; /* Satisfy the compiler. 
*/ } void uv_sem_destroy(uv_sem_t* sem) { if (semaphore_destroy(mach_task_self(), *sem)) abort(); } void uv_sem_post(uv_sem_t* sem) { if (semaphore_signal(*sem)) abort(); } void uv_sem_wait(uv_sem_t* sem) { int r; do r = semaphore_wait(*sem); while (r == KERN_ABORTED); if (r != KERN_SUCCESS) abort(); } int uv_sem_trywait(uv_sem_t* sem) { mach_timespec_t interval; kern_return_t err; interval.tv_sec = 0; interval.tv_nsec = 0; err = semaphore_timedwait(*sem, interval); if (err == KERN_SUCCESS) return 0; if (err == KERN_OPERATION_TIMED_OUT) return UV_EAGAIN; abort(); return UV_EINVAL; /* Satisfy the compiler. */ } #else /* !(defined(__APPLE__) && defined(__MACH__)) */ #if defined(__GLIBC__) && !defined(__UCLIBC__) /* Hack around https://sourceware.org/bugzilla/show_bug.cgi?id=12674 * by providing a custom implementation for glibc < 2.21 in terms of other * concurrency primitives. * Refs: https://github.com/nodejs/node/issues/19903 */ /* To preserve ABI compatibility, we treat the uv_sem_t as storage for * a pointer to the actual struct we're using underneath. */ static uv_once_t glibc_version_check_once = UV_ONCE_INIT; static int platform_needs_custom_semaphore = 0; static void glibc_version_check(void) { const char* version = gnu_get_libc_version(); platform_needs_custom_semaphore = version[0] == '2' && version[1] == '.' && atoi(version + 2) < 21; } #elif defined(__MVS__) #define platform_needs_custom_semaphore 1 #else /* !defined(__GLIBC__) && !defined(__MVS__) */ #define platform_needs_custom_semaphore 0 #endif typedef struct uv_semaphore_s { uv_mutex_t mutex; uv_cond_t cond; unsigned int value; } uv_semaphore_t; #if (defined(__GLIBC__) && !defined(__UCLIBC__)) || \ platform_needs_custom_semaphore STATIC_ASSERT(sizeof(uv_sem_t) >= sizeof(uv_semaphore_t*)); #endif static int uv__custom_sem_init(uv_sem_t* sem_, unsigned int value) { int err; uv_semaphore_t* sem; sem = uv__malloc(sizeof(*sem)); if (sem == NULL) return UV_ENOMEM; if ((err = uv_mutex_init(&sem->mutex)) != 0) { uv__free(sem); return err; } if ((err = uv_cond_init(&sem->cond)) != 0) { uv_mutex_destroy(&sem->mutex); uv__free(sem); return err; } sem->value = value; *(uv_semaphore_t**)sem_ = sem; return 0; } static void uv__custom_sem_destroy(uv_sem_t* sem_) { uv_semaphore_t* sem; sem = *(uv_semaphore_t**)sem_; uv_cond_destroy(&sem->cond); uv_mutex_destroy(&sem->mutex); uv__free(sem); } static void uv__custom_sem_post(uv_sem_t* sem_) { uv_semaphore_t* sem; sem = *(uv_semaphore_t**)sem_; uv_mutex_lock(&sem->mutex); sem->value++; if (sem->value == 1) uv_cond_signal(&sem->cond); uv_mutex_unlock(&sem->mutex); } static void uv__custom_sem_wait(uv_sem_t* sem_) { uv_semaphore_t* sem; sem = *(uv_semaphore_t**)sem_; uv_mutex_lock(&sem->mutex); while (sem->value == 0) uv_cond_wait(&sem->cond, &sem->mutex); sem->value--; uv_mutex_unlock(&sem->mutex); } static int uv__custom_sem_trywait(uv_sem_t* sem_) { uv_semaphore_t* sem; sem = *(uv_semaphore_t**)sem_; if (uv_mutex_trylock(&sem->mutex) != 0) return UV_EAGAIN; if (sem->value == 0) { uv_mutex_unlock(&sem->mutex); return UV_EAGAIN; } sem->value--; uv_mutex_unlock(&sem->mutex); return 0; } static int uv__sem_init(uv_sem_t* sem, unsigned int value) { if (sem_init(sem, 0, value)) return UV__ERR(errno); return 0; } static void uv__sem_destroy(uv_sem_t* sem) { if (sem_destroy(sem)) abort(); } static void uv__sem_post(uv_sem_t* sem) { if (sem_post(sem)) abort(); } static void uv__sem_wait(uv_sem_t* sem) { int r; do r = sem_wait(sem); while (r == -1 && errno == EINTR); if (r) abort(); } static int 
uv__sem_trywait(uv_sem_t* sem) { int r; do r = sem_trywait(sem); while (r == -1 && errno == EINTR); if (r) { if (errno == EAGAIN) return UV_EAGAIN; abort(); } return 0; } int uv_sem_init(uv_sem_t* sem, unsigned int value) { #if defined(__GLIBC__) && !defined(__UCLIBC__) uv_once(&glibc_version_check_once, glibc_version_check); #endif if (platform_needs_custom_semaphore) return uv__custom_sem_init(sem, value); else return uv__sem_init(sem, value); } void uv_sem_destroy(uv_sem_t* sem) { if (platform_needs_custom_semaphore) uv__custom_sem_destroy(sem); else uv__sem_destroy(sem); } void uv_sem_post(uv_sem_t* sem) { if (platform_needs_custom_semaphore) uv__custom_sem_post(sem); else uv__sem_post(sem); } void uv_sem_wait(uv_sem_t* sem) { if (platform_needs_custom_semaphore) uv__custom_sem_wait(sem); else uv__sem_wait(sem); } int uv_sem_trywait(uv_sem_t* sem) { if (platform_needs_custom_semaphore) return uv__custom_sem_trywait(sem); else return uv__sem_trywait(sem); } #endif /* defined(__APPLE__) && defined(__MACH__) */ #if defined(__APPLE__) && defined(__MACH__) || defined(__MVS__) int uv_cond_init(uv_cond_t* cond) { return UV__ERR(pthread_cond_init(cond, NULL)); } #else /* !(defined(__APPLE__) && defined(__MACH__)) */ int uv_cond_init(uv_cond_t* cond) { pthread_condattr_t attr; int err; err = pthread_condattr_init(&attr); if (err) return UV__ERR(err); err = pthread_condattr_setclock(&attr, CLOCK_MONOTONIC); if (err) goto error2; err = pthread_cond_init(cond, &attr); if (err) goto error2; err = pthread_condattr_destroy(&attr); if (err) goto error; return 0; error: pthread_cond_destroy(cond); error2: pthread_condattr_destroy(&attr); return UV__ERR(err); } #endif /* defined(__APPLE__) && defined(__MACH__) */ void uv_cond_destroy(uv_cond_t* cond) { #if defined(__APPLE__) && defined(__MACH__) /* It has been reported that destroying condition variables that have been * signalled but not waited on can sometimes result in application crashes. * See https://codereview.chromium.org/1323293005. */ pthread_mutex_t mutex; struct timespec ts; int err; if (pthread_mutex_init(&mutex, NULL)) abort(); if (pthread_mutex_lock(&mutex)) abort(); ts.tv_sec = 0; ts.tv_nsec = 1; err = pthread_cond_timedwait_relative_np(cond, &mutex, &ts); if (err != 0 && err != ETIMEDOUT) abort(); if (pthread_mutex_unlock(&mutex)) abort(); if (pthread_mutex_destroy(&mutex)) abort(); #endif /* defined(__APPLE__) && defined(__MACH__) */ if (pthread_cond_destroy(cond)) abort(); } void uv_cond_signal(uv_cond_t* cond) { if (pthread_cond_signal(cond)) abort(); } void uv_cond_broadcast(uv_cond_t* cond) { if (pthread_cond_broadcast(cond)) abort(); } void uv_cond_wait(uv_cond_t* cond, uv_mutex_t* mutex) { if (pthread_cond_wait(cond, mutex)) abort(); } int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, uint64_t timeout) { int r; struct timespec ts; #if defined(__MVS__) struct timeval tv; #endif #if defined(__APPLE__) && defined(__MACH__) ts.tv_sec = timeout / NANOSEC; ts.tv_nsec = timeout % NANOSEC; r = pthread_cond_timedwait_relative_np(cond, mutex, &ts); #else #if defined(__MVS__) if (gettimeofday(&tv, NULL)) abort(); timeout += tv.tv_sec * NANOSEC + tv.tv_usec * 1e3; #else timeout += uv__hrtime(UV_CLOCK_PRECISE); #endif ts.tv_sec = timeout / NANOSEC; ts.tv_nsec = timeout % NANOSEC; r = pthread_cond_timedwait(cond, mutex, &ts); #endif if (r == 0) return 0; if (r == ETIMEDOUT) return UV_ETIMEDOUT; abort(); #ifndef __SUNPRO_C return UV_EINVAL; /* Satisfy the compiler. 
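 *
 * For reference, uv_cond_timedwait() takes a relative timeout in
 * nanoseconds and returns 0 or UV_ETIMEDOUT; callers normally re-check a
 * predicate in a loop.  Minimal, illustrative sketch (ready/mu/cv are our
 * own placeholders, not part of this file):
 *
 *   int ready = 0;
 *   uv_mutex_t mu;
 *   uv_cond_t cv;
 *   uv_mutex_init(&mu);
 *   uv_cond_init(&cv);
 *
 *   uv_mutex_lock(&mu);
 *   while (!ready)
 *     if (uv_cond_timedwait(&cv, &mu, (uint64_t) 1e9) == UV_ETIMEDOUT)
 *       break;                                // waited one second
 *   uv_mutex_unlock(&mu);
 *
 *   // elsewhere: uv_mutex_lock(&mu); ready = 1;
 *   //            uv_cond_signal(&cv); uv_mutex_unlock(&mu);
 *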
*/ #endif } int uv_key_create(uv_key_t* key) { return UV__ERR(pthread_key_create(key, NULL)); } void uv_key_delete(uv_key_t* key) { if (pthread_key_delete(*key)) abort(); } void* uv_key_get(uv_key_t* key) { return pthread_getspecific(*key); } void uv_key_set(uv_key_t* key, void* value) { if (pthread_setspecific(*key, value)) abort(); } gevent-24.11.1/deps/libuv/src/unix/tty.c000066400000000000000000000317361471441230600177760ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include "spinlock.h" #include #include #include #include #include #include #if defined(__MVS__) && !defined(IMAXBEL) #define IMAXBEL 0 #endif #if defined(__PASE__) /* On IBM i PASE, for better compatibility with running interactive programs in * a 5250 environment, isatty() will return true for the stdin/stdout/stderr * streams created by QSH/QP2TERM. * * For more, see docs on PASE_STDIO_ISATTY in * https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/apis/pase_environ.htm * * This behavior causes problems for Node as it expects that if isatty() returns * true that TTY ioctls will be supported by that fd (which is not an * unreasonable expectation) and when they don't it crashes with assertion * errors. * * Here, we create our own version of isatty() that uses ioctl() to identify * whether the fd is *really* a TTY or not. */ static int isreallyatty(int file) { int rc; rc = !ioctl(file, TXISATTY + 0x81, NULL); if (!rc && errno != EBADF) errno = ENOTTY; return rc; } #define isatty(fd) isreallyatty(fd) #endif static int orig_termios_fd = -1; static struct termios orig_termios; static uv_spinlock_t termios_spinlock = UV_SPINLOCK_INITIALIZER; int uv__tcsetattr(int fd, int how, const struct termios *term) { int rc; do rc = tcsetattr(fd, how, term); while (rc == -1 && errno == EINTR); if (rc == -1) return UV__ERR(errno); return 0; } static int uv__tty_is_slave(const int fd) { int result; #if defined(__linux__) || defined(__FreeBSD__) || defined(__FreeBSD_kernel__) int dummy; result = ioctl(fd, TIOCGPTN, &dummy) != 0; #elif defined(__APPLE__) char dummy[256]; result = ioctl(fd, TIOCPTYGNAME, &dummy) != 0; #elif defined(__NetBSD__) /* * NetBSD as an extension returns with ptsname(3) and ptsname_r(3) the slave * device name for both descriptors, the master one and slave one. * * Implement function to compare major device number with pts devices. 
* * The major numbers are machine-dependent, on NetBSD/amd64 they are * respectively: * - master tty: ptc - major 6 * - slave tty: pts - major 5 */ struct stat sb; /* Lookup device's major for the pts driver and cache it. */ static devmajor_t pts = NODEVMAJOR; if (pts == NODEVMAJOR) { pts = getdevmajor("pts", S_IFCHR); if (pts == NODEVMAJOR) abort(); } /* Lookup stat structure behind the file descriptor. */ if (fstat(fd, &sb) != 0) abort(); /* Assert character device. */ if (!S_ISCHR(sb.st_mode)) abort(); /* Assert valid major. */ if (major(sb.st_rdev) == NODEVMAJOR) abort(); result = (pts == major(sb.st_rdev)); #else /* Fallback to ptsname */ result = ptsname(fd) == NULL; #endif return result; } int uv_tty_init(uv_loop_t* loop, uv_tty_t* tty, int fd, int unused) { uv_handle_type type; int flags; int newfd; int r; int saved_flags; int mode; char path[256]; (void)unused; /* deprecated parameter is no longer needed */ /* File descriptors that refer to files cannot be monitored with epoll. * That restriction also applies to character devices like /dev/random * (but obviously not /dev/tty.) */ type = uv_guess_handle(fd); if (type == UV_FILE || type == UV_UNKNOWN_HANDLE) return UV_EINVAL; flags = 0; newfd = -1; /* Save the fd flags in case we need to restore them due to an error. */ do saved_flags = fcntl(fd, F_GETFL); while (saved_flags == -1 && errno == EINTR); if (saved_flags == -1) return UV__ERR(errno); mode = saved_flags & O_ACCMODE; /* Reopen the file descriptor when it refers to a tty. This lets us put the * tty in non-blocking mode without affecting other processes that share it * with us. * * Example: `node | cat` - if we put our fd 0 in non-blocking mode, it also * affects fd 1 of `cat` because both file descriptors refer to the same * struct file in the kernel. When we reopen our fd 0, it points to a * different struct file, hence changing its properties doesn't affect * other processes. */ if (type == UV_TTY) { /* Reopening a pty in master mode won't work either because the reopened * pty will be in slave mode (*BSD) or reopening will allocate a new * master/slave pair (Linux). Therefore check if the fd points to a * slave device. */ if (uv__tty_is_slave(fd) && ttyname_r(fd, path, sizeof(path)) == 0) r = uv__open_cloexec(path, mode | O_NOCTTY); else r = -1; if (r < 0) { /* fallback to using blocking writes */ if (mode != O_RDONLY) flags |= UV_HANDLE_BLOCKING_WRITES; goto skip; } newfd = r; r = uv__dup2_cloexec(newfd, fd); if (r < 0 && r != UV_EINVAL) { /* EINVAL means newfd == fd which could conceivably happen if another * thread called close(fd) between our calls to isatty() and open(). * That's a rather unlikely event but let's handle it anyway. */ uv__close(newfd); return r; } fd = newfd; } skip: uv__stream_init(loop, (uv_stream_t*) tty, UV_TTY); /* If anything fails beyond this point we need to remove the handle from * the handle queue, since it was added by uv__handle_init in uv_stream_init. 
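 *
 * For reference, the public-API sequence that reaches this function is
 * roughly the following (tty/width/height are our own placeholders, not
 * part of this file; error handling elided):
 *
 *   uv_tty_t tty;
 *   int width, height;
 *   uv_tty_init(uv_default_loop(), &tty, 0, 1);   // fd 0; last arg unused
 *   uv_tty_set_mode(&tty, UV_TTY_MODE_RAW);       // unbuffered, per-key input
 *   uv_tty_get_winsize(&tty, &width, &height);
 *   // ... read from the handle ...
 *   uv_tty_reset_mode();                          // restore the saved termios
 *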
*/ if (!(flags & UV_HANDLE_BLOCKING_WRITES)) uv__nonblock(fd, 1); #if defined(__APPLE__) r = uv__stream_try_select((uv_stream_t*) tty, &fd); if (r) { int rc = r; if (newfd != -1) uv__close(newfd); QUEUE_REMOVE(&tty->handle_queue); do r = fcntl(fd, F_SETFL, saved_flags); while (r == -1 && errno == EINTR); return rc; } #endif if (mode != O_WRONLY) flags |= UV_HANDLE_READABLE; if (mode != O_RDONLY) flags |= UV_HANDLE_WRITABLE; uv__stream_open((uv_stream_t*) tty, fd, flags); tty->mode = UV_TTY_MODE_NORMAL; return 0; } static void uv__tty_make_raw(struct termios* tio) { assert(tio != NULL); #if defined __sun || defined __MVS__ /* * This implementation of cfmakeraw for Solaris and derivatives is taken from * http://www.perkin.org.uk/posts/solaris-portability-cfmakeraw.html. */ tio->c_iflag &= ~(IMAXBEL | IGNBRK | BRKINT | PARMRK | ISTRIP | INLCR | IGNCR | ICRNL | IXON); tio->c_oflag &= ~OPOST; tio->c_lflag &= ~(ECHO | ECHONL | ICANON | ISIG | IEXTEN); tio->c_cflag &= ~(CSIZE | PARENB); tio->c_cflag |= CS8; /* * By default, most software expects a pending read to block until at * least one byte becomes available. As per termio(7I), this requires * setting the MIN and TIME parameters appropriately. * * As a somewhat unfortunate artifact of history, the MIN and TIME slots * in the control character array overlap with the EOF and EOL slots used * for canonical mode processing. Because the EOF character needs to be * the ASCII EOT value (aka Control-D), it has the byte value 4. When * switching to raw mode, this is interpreted as a MIN value of 4; i.e., * reads will block until at least four bytes have been input. * * Other platforms with a distinct MIN slot like Linux and FreeBSD appear * to default to a MIN value of 1, so we'll force that value here: */ tio->c_cc[VMIN] = 1; tio->c_cc[VTIME] = 0; #else cfmakeraw(tio); #endif /* #ifdef __sun */ } int uv_tty_set_mode(uv_tty_t* tty, uv_tty_mode_t mode) { struct termios tmp; int fd; int rc; if (tty->mode == (int) mode) return 0; fd = uv__stream_fd(tty); if (tty->mode == UV_TTY_MODE_NORMAL && mode != UV_TTY_MODE_NORMAL) { do rc = tcgetattr(fd, &tty->orig_termios); while (rc == -1 && errno == EINTR); if (rc == -1) return UV__ERR(errno); /* This is used for uv_tty_reset_mode() */ uv_spinlock_lock(&termios_spinlock); if (orig_termios_fd == -1) { orig_termios = tty->orig_termios; orig_termios_fd = fd; } uv_spinlock_unlock(&termios_spinlock); } tmp = tty->orig_termios; switch (mode) { case UV_TTY_MODE_NORMAL: break; case UV_TTY_MODE_RAW: tmp.c_iflag &= ~(BRKINT | ICRNL | INPCK | ISTRIP | IXON); tmp.c_oflag |= (ONLCR); tmp.c_cflag |= (CS8); tmp.c_lflag &= ~(ECHO | ICANON | IEXTEN | ISIG); tmp.c_cc[VMIN] = 1; tmp.c_cc[VTIME] = 0; break; case UV_TTY_MODE_IO: uv__tty_make_raw(&tmp); break; } /* Apply changes after draining */ rc = uv__tcsetattr(fd, TCSADRAIN, &tmp); if (rc == 0) tty->mode = mode; return rc; } int uv_tty_get_winsize(uv_tty_t* tty, int* width, int* height) { struct winsize ws; int err; do err = ioctl(uv__stream_fd(tty), TIOCGWINSZ, &ws); while (err == -1 && errno == EINTR); if (err == -1) return UV__ERR(errno); *width = ws.ws_col; *height = ws.ws_row; return 0; } uv_handle_type uv_guess_handle(uv_file file) { struct sockaddr_storage ss; struct stat s; socklen_t len; int type; if (file < 0) return UV_UNKNOWN_HANDLE; if (isatty(file)) return UV_TTY; if (fstat(file, &s)) { #if defined(__PASE__) /* On ibmi receiving RST from TCP instead of FIN immediately puts fd into * an error state. 
fstat will return EINVAL, getsockname will also return * EINVAL, even if sockaddr_storage is valid. (If file does not refer to a * socket, ENOTSOCK is returned instead.) * In such cases, we will permit the user to open the connection as uv_tcp * still, so that the user can get immediately notified of the error in * their read callback and close this fd. */ len = sizeof(ss); if (getsockname(file, (struct sockaddr*) &ss, &len)) { if (errno == EINVAL) return UV_TCP; } #endif return UV_UNKNOWN_HANDLE; } if (S_ISREG(s.st_mode)) return UV_FILE; if (S_ISCHR(s.st_mode)) return UV_FILE; /* XXX UV_NAMED_PIPE? */ if (S_ISFIFO(s.st_mode)) return UV_NAMED_PIPE; if (!S_ISSOCK(s.st_mode)) return UV_UNKNOWN_HANDLE; len = sizeof(ss); if (getsockname(file, (struct sockaddr*) &ss, &len)) { #if defined(_AIX) /* On aix receiving RST from TCP instead of FIN immediately puts fd into * an error state. In such case getsockname will return EINVAL, even if * sockaddr_storage is valid. * In such cases, we will permit the user to open the connection as uv_tcp * still, so that the user can get immediately notified of the error in * their read callback and close this fd. */ if (errno == EINVAL) { return UV_TCP; } #endif return UV_UNKNOWN_HANDLE; } len = sizeof(type); if (getsockopt(file, SOL_SOCKET, SO_TYPE, &type, &len)) return UV_UNKNOWN_HANDLE; if (type == SOCK_DGRAM) if (ss.ss_family == AF_INET || ss.ss_family == AF_INET6) return UV_UDP; if (type == SOCK_STREAM) { #if defined(_AIX) || defined(__DragonFly__) /* on AIX/DragonFly the getsockname call returns an empty sa structure * for sockets of type AF_UNIX. For all other types it will * return a properly filled in structure. */ if (len == 0) return UV_NAMED_PIPE; #endif /* defined(_AIX) || defined(__DragonFly__) */ if (ss.ss_family == AF_INET || ss.ss_family == AF_INET6) return UV_TCP; if (ss.ss_family == AF_UNIX) return UV_NAMED_PIPE; } return UV_UNKNOWN_HANDLE; } /* This function is async signal-safe, meaning that it's safe to call from * inside a signal handler _unless_ execution was inside uv_tty_set_mode()'s * critical section when the signal was raised. */ int uv_tty_reset_mode(void) { int saved_errno; int err; saved_errno = errno; if (!uv_spinlock_trylock(&termios_spinlock)) return UV_EBUSY; /* In uv_tty_set_mode(). */ err = 0; if (orig_termios_fd != -1) err = uv__tcsetattr(orig_termios_fd, TCSANOW, &orig_termios); uv_spinlock_unlock(&termios_spinlock); errno = saved_errno; return err; } void uv_tty_set_vterm_state(uv_tty_vtermstate_t state) { } int uv_tty_get_vterm_state(uv_tty_vtermstate_t* state) { return UV_ENOTSUP; } gevent-24.11.1/deps/libuv/src/unix/udp.c000066400000000000000000001160401471441230600177360ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include #include #include #include #include #if defined(__MVS__) #include #endif #include #if defined(IPV6_JOIN_GROUP) && !defined(IPV6_ADD_MEMBERSHIP) # define IPV6_ADD_MEMBERSHIP IPV6_JOIN_GROUP #endif #if defined(IPV6_LEAVE_GROUP) && !defined(IPV6_DROP_MEMBERSHIP) # define IPV6_DROP_MEMBERSHIP IPV6_LEAVE_GROUP #endif union uv__sockaddr { struct sockaddr_in6 in6; struct sockaddr_in in; struct sockaddr addr; }; static void uv__udp_run_completed(uv_udp_t* handle); static void uv__udp_io(uv_loop_t* loop, uv__io_t* w, unsigned int revents); static void uv__udp_recvmsg(uv_udp_t* handle); static void uv__udp_sendmsg(uv_udp_t* handle); static int uv__udp_maybe_deferred_bind(uv_udp_t* handle, int domain, unsigned int flags); #if HAVE_MMSG #define UV__MMSG_MAXWIDTH 20 static int uv__udp_recvmmsg(uv_udp_t* handle, uv_buf_t* buf); static void uv__udp_sendmmsg(uv_udp_t* handle); static int uv__recvmmsg_avail; static int uv__sendmmsg_avail; static uv_once_t once = UV_ONCE_INIT; static void uv__udp_mmsg_init(void) { int ret; int s; s = uv__socket(AF_INET, SOCK_DGRAM, 0); if (s < 0) return; ret = uv__sendmmsg(s, NULL, 0); if (ret == 0 || errno != ENOSYS) { uv__sendmmsg_avail = 1; uv__recvmmsg_avail = 1; } else { ret = uv__recvmmsg(s, NULL, 0); if (ret == 0 || errno != ENOSYS) uv__recvmmsg_avail = 1; } uv__close(s); } #endif void uv__udp_close(uv_udp_t* handle) { uv__io_close(handle->loop, &handle->io_watcher); uv__handle_stop(handle); if (handle->io_watcher.fd != -1) { uv__close(handle->io_watcher.fd); handle->io_watcher.fd = -1; } } void uv__udp_finish_close(uv_udp_t* handle) { uv_udp_send_t* req; QUEUE* q; assert(!uv__io_active(&handle->io_watcher, POLLIN | POLLOUT)); assert(handle->io_watcher.fd == -1); while (!QUEUE_EMPTY(&handle->write_queue)) { q = QUEUE_HEAD(&handle->write_queue); QUEUE_REMOVE(q); req = QUEUE_DATA(q, uv_udp_send_t, queue); req->status = UV_ECANCELED; QUEUE_INSERT_TAIL(&handle->write_completed_queue, &req->queue); } uv__udp_run_completed(handle); assert(handle->send_queue_size == 0); assert(handle->send_queue_count == 0); /* Now tear down the handle. */ handle->recv_cb = NULL; handle->alloc_cb = NULL; /* but _do not_ touch close_cb */ } static void uv__udp_run_completed(uv_udp_t* handle) { uv_udp_send_t* req; QUEUE* q; assert(!(handle->flags & UV_HANDLE_UDP_PROCESSING)); handle->flags |= UV_HANDLE_UDP_PROCESSING; while (!QUEUE_EMPTY(&handle->write_completed_queue)) { q = QUEUE_HEAD(&handle->write_completed_queue); QUEUE_REMOVE(q); req = QUEUE_DATA(q, uv_udp_send_t, queue); uv__req_unregister(handle->loop, req); handle->send_queue_size -= uv__count_bufs(req->bufs, req->nbufs); handle->send_queue_count--; if (req->bufs != req->bufsml) uv__free(req->bufs); req->bufs = NULL; if (req->send_cb == NULL) continue; /* req->status >= 0 == bytes written * req->status < 0 == errno */ if (req->status >= 0) req->send_cb(req, 0); else req->send_cb(req, req->status); } if (QUEUE_EMPTY(&handle->write_queue)) { /* Pending queue and completion queue empty, stop watcher. 
*/ uv__io_stop(handle->loop, &handle->io_watcher, POLLOUT); if (!uv__io_active(&handle->io_watcher, POLLIN)) uv__handle_stop(handle); } handle->flags &= ~UV_HANDLE_UDP_PROCESSING; } static void uv__udp_io(uv_loop_t* loop, uv__io_t* w, unsigned int revents) { uv_udp_t* handle; handle = container_of(w, uv_udp_t, io_watcher); assert(handle->type == UV_UDP); if (revents & POLLIN) uv__udp_recvmsg(handle); if (revents & POLLOUT) { uv__udp_sendmsg(handle); uv__udp_run_completed(handle); } } #if HAVE_MMSG static int uv__udp_recvmmsg(uv_udp_t* handle, uv_buf_t* buf) { struct sockaddr_in6 peers[UV__MMSG_MAXWIDTH]; struct iovec iov[UV__MMSG_MAXWIDTH]; struct uv__mmsghdr msgs[UV__MMSG_MAXWIDTH]; ssize_t nread; uv_buf_t chunk_buf; size_t chunks; int flags; size_t k; /* prepare structures for recvmmsg */ chunks = buf->len / UV__UDP_DGRAM_MAXSIZE; if (chunks > ARRAY_SIZE(iov)) chunks = ARRAY_SIZE(iov); for (k = 0; k < chunks; ++k) { iov[k].iov_base = buf->base + k * UV__UDP_DGRAM_MAXSIZE; iov[k].iov_len = UV__UDP_DGRAM_MAXSIZE; memset(&msgs[k].msg_hdr, 0, sizeof(msgs[k].msg_hdr)); msgs[k].msg_hdr.msg_iov = iov + k; msgs[k].msg_hdr.msg_iovlen = 1; msgs[k].msg_hdr.msg_name = peers + k; msgs[k].msg_hdr.msg_namelen = sizeof(peers[0]); msgs[k].msg_hdr.msg_control = NULL; msgs[k].msg_hdr.msg_controllen = 0; msgs[k].msg_hdr.msg_flags = 0; } do nread = uv__recvmmsg(handle->io_watcher.fd, msgs, chunks); while (nread == -1 && errno == EINTR); if (nread < 1) { if (nread == 0 || errno == EAGAIN || errno == EWOULDBLOCK) handle->recv_cb(handle, 0, buf, NULL, 0); else handle->recv_cb(handle, UV__ERR(errno), buf, NULL, 0); } else { /* pass each chunk to the application */ for (k = 0; k < (size_t) nread && handle->recv_cb != NULL; k++) { flags = UV_UDP_MMSG_CHUNK; if (msgs[k].msg_hdr.msg_flags & MSG_TRUNC) flags |= UV_UDP_PARTIAL; chunk_buf = uv_buf_init(iov[k].iov_base, iov[k].iov_len); handle->recv_cb(handle, msgs[k].msg_len, &chunk_buf, msgs[k].msg_hdr.msg_name, flags); } /* one last callback so the original buffer is freed */ if (handle->recv_cb != NULL) handle->recv_cb(handle, 0, buf, NULL, UV_UDP_MMSG_FREE); } return nread; } #endif static void uv__udp_recvmsg(uv_udp_t* handle) { struct sockaddr_storage peer; struct msghdr h; ssize_t nread; uv_buf_t buf; int flags; int count; assert(handle->recv_cb != NULL); assert(handle->alloc_cb != NULL); /* Prevent loop starvation when the data comes in as fast as (or faster than) * we can read it. XXX Need to rearm fd if we switch to edge-triggered I/O. 
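 *
 * The alloc_cb/recv_cb pair driven by this loop is installed through the
 * public uv_udp_recv_start().  Minimal, illustrative sketch (on_alloc,
 * on_recv, the slab and the port are our own placeholders, not part of
 * this file; error handling elided):
 *
 *   static void on_alloc(uv_handle_t* h, size_t suggested, uv_buf_t* buf) {
 *     static char slab[65536];                 // one datagram at a time
 *     buf->base = slab;
 *     buf->len = sizeof(slab);
 *   }
 *
 *   static void on_recv(uv_udp_t* h, ssize_t nread, const uv_buf_t* buf,
 *                       const struct sockaddr* peer, unsigned flags) {
 *     if (nread == 0 && peer == NULL)
 *       return;                                // nothing read this round
 *     // consume nread bytes starting at buf->base ...
 *   }
 *
 *   uv_udp_t h;
 *   struct sockaddr_in addr;
 *   uv_udp_init(uv_default_loop(), &h);
 *   uv_ip4_addr("0.0.0.0", 9000, &addr);
 *   uv_udp_bind(&h, (const struct sockaddr*) &addr, 0);
 *   uv_udp_recv_start(&h, on_alloc, on_recv);
 *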
*/ count = 32; do { buf = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, UV__UDP_DGRAM_MAXSIZE, &buf); if (buf.base == NULL || buf.len == 0) { handle->recv_cb(handle, UV_ENOBUFS, &buf, NULL, 0); return; } assert(buf.base != NULL); #if HAVE_MMSG if (uv_udp_using_recvmmsg(handle)) { nread = uv__udp_recvmmsg(handle, &buf); if (nread > 0) count -= nread; continue; } #endif memset(&h, 0, sizeof(h)); memset(&peer, 0, sizeof(peer)); h.msg_name = &peer; h.msg_namelen = sizeof(peer); h.msg_iov = (void*) &buf; h.msg_iovlen = 1; do { nread = recvmsg(handle->io_watcher.fd, &h, 0); } while (nread == -1 && errno == EINTR); if (nread == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) handle->recv_cb(handle, 0, &buf, NULL, 0); else handle->recv_cb(handle, UV__ERR(errno), &buf, NULL, 0); } else { flags = 0; if (h.msg_flags & MSG_TRUNC) flags |= UV_UDP_PARTIAL; handle->recv_cb(handle, nread, &buf, (const struct sockaddr*) &peer, flags); } count--; } /* recv_cb callback may decide to pause or close the handle */ while (nread != -1 && count > 0 && handle->io_watcher.fd != -1 && handle->recv_cb != NULL); } #if HAVE_MMSG static void uv__udp_sendmmsg(uv_udp_t* handle) { uv_udp_send_t* req; struct uv__mmsghdr h[UV__MMSG_MAXWIDTH]; struct uv__mmsghdr *p; QUEUE* q; ssize_t npkts; size_t pkts; size_t i; if (QUEUE_EMPTY(&handle->write_queue)) return; write_queue_drain: for (pkts = 0, q = QUEUE_HEAD(&handle->write_queue); pkts < UV__MMSG_MAXWIDTH && q != &handle->write_queue; ++pkts, q = QUEUE_HEAD(q)) { assert(q != NULL); req = QUEUE_DATA(q, uv_udp_send_t, queue); assert(req != NULL); p = &h[pkts]; memset(p, 0, sizeof(*p)); if (req->addr.ss_family == AF_UNSPEC) { p->msg_hdr.msg_name = NULL; p->msg_hdr.msg_namelen = 0; } else { p->msg_hdr.msg_name = &req->addr; if (req->addr.ss_family == AF_INET6) p->msg_hdr.msg_namelen = sizeof(struct sockaddr_in6); else if (req->addr.ss_family == AF_INET) p->msg_hdr.msg_namelen = sizeof(struct sockaddr_in); else if (req->addr.ss_family == AF_UNIX) p->msg_hdr.msg_namelen = sizeof(struct sockaddr_un); else { assert(0 && "unsupported address family"); abort(); } } h[pkts].msg_hdr.msg_iov = (struct iovec*) req->bufs; h[pkts].msg_hdr.msg_iovlen = req->nbufs; } do npkts = uv__sendmmsg(handle->io_watcher.fd, h, pkts); while (npkts == -1 && errno == EINTR); if (npkts < 1) { if (errno == EAGAIN || errno == EWOULDBLOCK || errno == ENOBUFS) return; for (i = 0, q = QUEUE_HEAD(&handle->write_queue); i < pkts && q != &handle->write_queue; ++i, q = QUEUE_HEAD(&handle->write_queue)) { assert(q != NULL); req = QUEUE_DATA(q, uv_udp_send_t, queue); assert(req != NULL); req->status = UV__ERR(errno); QUEUE_REMOVE(&req->queue); QUEUE_INSERT_TAIL(&handle->write_completed_queue, &req->queue); } uv__io_feed(handle->loop, &handle->io_watcher); return; } /* Safety: npkts known to be >0 below. Hence cast from ssize_t * to size_t safe. */ for (i = 0, q = QUEUE_HEAD(&handle->write_queue); i < (size_t)npkts && q != &handle->write_queue; ++i, q = QUEUE_HEAD(&handle->write_queue)) { assert(q != NULL); req = QUEUE_DATA(q, uv_udp_send_t, queue); assert(req != NULL); req->status = req->bufs[0].len; /* Sending a datagram is an atomic operation: either all data * is written or nothing is (and EMSGSIZE is raised). That is * why we don't handle partial writes. Just pop the request * off the write queue and onto the completed queue, done. 
*/ QUEUE_REMOVE(&req->queue); QUEUE_INSERT_TAIL(&handle->write_completed_queue, &req->queue); } /* couldn't batch everything, continue sending (jump to avoid stack growth) */ if (!QUEUE_EMPTY(&handle->write_queue)) goto write_queue_drain; uv__io_feed(handle->loop, &handle->io_watcher); return; } #endif static void uv__udp_sendmsg(uv_udp_t* handle) { uv_udp_send_t* req; struct msghdr h; QUEUE* q; ssize_t size; #if HAVE_MMSG uv_once(&once, uv__udp_mmsg_init); if (uv__sendmmsg_avail) { uv__udp_sendmmsg(handle); return; } #endif while (!QUEUE_EMPTY(&handle->write_queue)) { q = QUEUE_HEAD(&handle->write_queue); assert(q != NULL); req = QUEUE_DATA(q, uv_udp_send_t, queue); assert(req != NULL); memset(&h, 0, sizeof h); if (req->addr.ss_family == AF_UNSPEC) { h.msg_name = NULL; h.msg_namelen = 0; } else { h.msg_name = &req->addr; if (req->addr.ss_family == AF_INET6) h.msg_namelen = sizeof(struct sockaddr_in6); else if (req->addr.ss_family == AF_INET) h.msg_namelen = sizeof(struct sockaddr_in); else if (req->addr.ss_family == AF_UNIX) h.msg_namelen = sizeof(struct sockaddr_un); else { assert(0 && "unsupported address family"); abort(); } } h.msg_iov = (struct iovec*) req->bufs; h.msg_iovlen = req->nbufs; do { size = sendmsg(handle->io_watcher.fd, &h, 0); } while (size == -1 && errno == EINTR); if (size == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK || errno == ENOBUFS) break; } req->status = (size == -1 ? UV__ERR(errno) : size); /* Sending a datagram is an atomic operation: either all data * is written or nothing is (and EMSGSIZE is raised). That is * why we don't handle partial writes. Just pop the request * off the write queue and onto the completed queue, done. */ QUEUE_REMOVE(&req->queue); QUEUE_INSERT_TAIL(&handle->write_completed_queue, &req->queue); uv__io_feed(handle->loop, &handle->io_watcher); } } /* On the BSDs, SO_REUSEPORT implies SO_REUSEADDR but with some additional * refinements for programs that use multicast. * * Linux as of 3.9 has a SO_REUSEPORT socket option but with semantics that * are different from the BSDs: it _shares_ the port rather than steal it * from the current listener. While useful, it's not something we can emulate * on other platforms so we don't enable it. * * zOS does not support getsockname with SO_REUSEPORT option when using * AF_UNIX. */ static int uv__set_reuse(int fd) { int yes; yes = 1; #if defined(SO_REUSEPORT) && defined(__MVS__) struct sockaddr_in sockfd; unsigned int sockfd_len = sizeof(sockfd); if (getsockname(fd, (struct sockaddr*) &sockfd, &sockfd_len) == -1) return UV__ERR(errno); if (sockfd.sin_family == AF_UNIX) { if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes))) return UV__ERR(errno); } else { if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &yes, sizeof(yes))) return UV__ERR(errno); } #elif defined(SO_REUSEPORT) && !defined(__linux__) && !defined(__GNU__) if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &yes, sizeof(yes))) return UV__ERR(errno); #else if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes))) return UV__ERR(errno); #endif return 0; } /* * The Linux kernel suppresses some ICMP error messages by default for UDP * sockets. Setting IP_RECVERR/IPV6_RECVERR on the socket enables full ICMP * error reporting, hopefully resulting in faster failover to working name * servers. 
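 *
 * Callers opt in to this behaviour with the UV_UDP_LINUX_RECVERR bind
 * flag, which is accepted but has no effect on other platforms.
 * Illustrative call (handle/addr are our own placeholders, not part of
 * this file):
 *
 *   uv_udp_bind(&handle,
 *               (const struct sockaddr*) &addr,
 *               UV_UDP_REUSEADDR | UV_UDP_LINUX_RECVERR);
 *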
*/ static int uv__set_recverr(int fd, sa_family_t ss_family) { #if defined(__linux__) int yes; yes = 1; if (ss_family == AF_INET) { if (setsockopt(fd, IPPROTO_IP, IP_RECVERR, &yes, sizeof(yes))) return UV__ERR(errno); } else if (ss_family == AF_INET6) { if (setsockopt(fd, IPPROTO_IPV6, IPV6_RECVERR, &yes, sizeof(yes))) return UV__ERR(errno); } #endif return 0; } int uv__udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { int err; int yes; int fd; /* Check for bad flags. */ if (flags & ~(UV_UDP_IPV6ONLY | UV_UDP_REUSEADDR | UV_UDP_LINUX_RECVERR)) return UV_EINVAL; /* Cannot set IPv6-only mode on non-IPv6 socket. */ if ((flags & UV_UDP_IPV6ONLY) && addr->sa_family != AF_INET6) return UV_EINVAL; fd = handle->io_watcher.fd; if (fd == -1) { err = uv__socket(addr->sa_family, SOCK_DGRAM, 0); if (err < 0) return err; fd = err; handle->io_watcher.fd = fd; } if (flags & UV_UDP_LINUX_RECVERR) { err = uv__set_recverr(fd, addr->sa_family); if (err) return err; } if (flags & UV_UDP_REUSEADDR) { err = uv__set_reuse(fd); if (err) return err; } if (flags & UV_UDP_IPV6ONLY) { #ifdef IPV6_V6ONLY yes = 1; if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &yes, sizeof yes) == -1) { err = UV__ERR(errno); return err; } #else err = UV_ENOTSUP; return err; #endif } if (bind(fd, addr, addrlen)) { err = UV__ERR(errno); if (errno == EAFNOSUPPORT) /* OSX, other BSDs and SunoS fail with EAFNOSUPPORT when binding a * socket created with AF_INET to an AF_INET6 address or vice versa. */ err = UV_EINVAL; return err; } if (addr->sa_family == AF_INET6) handle->flags |= UV_HANDLE_IPV6; handle->flags |= UV_HANDLE_BOUND; return 0; } static int uv__udp_maybe_deferred_bind(uv_udp_t* handle, int domain, unsigned int flags) { union uv__sockaddr taddr; socklen_t addrlen; if (handle->io_watcher.fd != -1) return 0; switch (domain) { case AF_INET: { struct sockaddr_in* addr = &taddr.in; memset(addr, 0, sizeof *addr); addr->sin_family = AF_INET; addr->sin_addr.s_addr = INADDR_ANY; addrlen = sizeof *addr; break; } case AF_INET6: { struct sockaddr_in6* addr = &taddr.in6; memset(addr, 0, sizeof *addr); addr->sin6_family = AF_INET6; addr->sin6_addr = in6addr_any; addrlen = sizeof *addr; break; } default: assert(0 && "unsupported address family"); abort(); } return uv__udp_bind(handle, &taddr.addr, addrlen, flags); } int uv__udp_connect(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen) { int err; err = uv__udp_maybe_deferred_bind(handle, addr->sa_family, 0); if (err) return err; do { errno = 0; err = connect(handle->io_watcher.fd, addr, addrlen); } while (err == -1 && errno == EINTR); if (err) return UV__ERR(errno); handle->flags |= UV_HANDLE_UDP_CONNECTED; return 0; } /* From https://pubs.opengroup.org/onlinepubs/9699919799/functions/connect.html * Any of uv supported UNIXs kernel should be standardized, but the kernel * implementation logic not same, let's use pseudocode to explain the udp * disconnect behaviors: * * Predefined stubs for pseudocode: * 1. sodisconnect: The function to perform the real udp disconnect * 2. pru_connect: The function to perform the real udp connect * 3. so: The kernel object match with socket fd * 4. 
addr: The sockaddr parameter from user space * * BSDs: * if(sodisconnect(so) == 0) { // udp disconnect succeed * if (addr->sa_len != so->addr->sa_len) return EINVAL; * if (addr->sa_family != so->addr->sa_family) return EAFNOSUPPORT; * pru_connect(so); * } * else return EISCONN; * * z/OS (same with Windows): * if(addr->sa_len < so->addr->sa_len) return EINVAL; * if (addr->sa_family == AF_UNSPEC) sodisconnect(so); * * AIX: * if(addr->sa_len != sizeof(struct sockaddr)) return EINVAL; // ignore ip proto version * if (addr->sa_family == AF_UNSPEC) sodisconnect(so); * * Linux,Others: * if(addr->sa_len < sizeof(struct sockaddr)) return EINVAL; * if (addr->sa_family == AF_UNSPEC) sodisconnect(so); */ int uv__udp_disconnect(uv_udp_t* handle) { int r; #if defined(__MVS__) struct sockaddr_storage addr; #else struct sockaddr addr; #endif memset(&addr, 0, sizeof(addr)); #if defined(__MVS__) addr.ss_family = AF_UNSPEC; #else addr.sa_family = AF_UNSPEC; #endif do { errno = 0; #ifdef __PASE__ /* On IBMi a connectionless transport socket can be disconnected by * either setting the addr parameter to NULL or setting the * addr_length parameter to zero, and issuing another connect(). * https://www.ibm.com/docs/en/i/7.4?topic=ssw_ibm_i_74/apis/connec.htm */ r = connect(handle->io_watcher.fd, (struct sockaddr*) NULL, 0); #else r = connect(handle->io_watcher.fd, (struct sockaddr*) &addr, sizeof(addr)); #endif } while (r == -1 && errno == EINTR); if (r == -1) { #if defined(BSD) /* The macro BSD is from sys/param.h */ if (errno != EAFNOSUPPORT && errno != EINVAL) return UV__ERR(errno); #else return UV__ERR(errno); #endif } handle->flags &= ~UV_HANDLE_UDP_CONNECTED; return 0; } int uv__udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen, uv_udp_send_cb send_cb) { int err; int empty_queue; assert(nbufs > 0); if (addr) { err = uv__udp_maybe_deferred_bind(handle, addr->sa_family, 0); if (err) return err; } /* It's legal for send_queue_count > 0 even when the write_queue is empty; * it means there are error-state requests in the write_completed_queue that * will touch up send_queue_size/count later. */ empty_queue = (handle->send_queue_count == 0); uv__req_init(handle->loop, req, UV_UDP_SEND); assert(addrlen <= sizeof(req->addr)); if (addr == NULL) req->addr.ss_family = AF_UNSPEC; else memcpy(&req->addr, addr, addrlen); req->send_cb = send_cb; req->handle = handle; req->nbufs = nbufs; req->bufs = req->bufsml; if (nbufs > ARRAY_SIZE(req->bufsml)) req->bufs = uv__malloc(nbufs * sizeof(bufs[0])); if (req->bufs == NULL) { uv__req_unregister(handle->loop, req); return UV_ENOMEM; } memcpy(req->bufs, bufs, nbufs * sizeof(bufs[0])); handle->send_queue_size += uv__count_bufs(req->bufs, req->nbufs); handle->send_queue_count++; QUEUE_INSERT_TAIL(&handle->write_queue, &req->queue); uv__handle_start(handle); if (empty_queue && !(handle->flags & UV_HANDLE_UDP_PROCESSING)) { uv__udp_sendmsg(handle); /* `uv__udp_sendmsg` may not be able to do non-blocking write straight * away. In such cases the `io_watcher` has to be queued for asynchronous * write. 
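 *
 * From the public side this path corresponds to uv_udp_send() (queued,
 * completion reported through a callback); uv_udp_try_send() is the
 * synchronous variant that may return UV_EAGAIN.  Minimal, illustrative
 * sketch (on_send/handle/addr are our own placeholders, not part of this
 * file):
 *
 *   static void on_send(uv_udp_send_t* req, int status) {
 *     // status is 0 on success, negative (e.g. UV_ECANCELED) on failure
 *   }
 *
 *   uv_udp_send_t req;
 *   uv_buf_t buf = uv_buf_init((char*) "ping", 4);
 *   uv_udp_send(&req, &handle, &buf, 1,
 *               (const struct sockaddr*) &addr, on_send);
 *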
*/ if (!QUEUE_EMPTY(&handle->write_queue)) uv__io_start(handle->loop, &handle->io_watcher, POLLOUT); } else { uv__io_start(handle->loop, &handle->io_watcher, POLLOUT); } return 0; } int uv__udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen) { int err; struct msghdr h; ssize_t size; assert(nbufs > 0); /* already sending a message */ if (handle->send_queue_count != 0) return UV_EAGAIN; if (addr) { err = uv__udp_maybe_deferred_bind(handle, addr->sa_family, 0); if (err) return err; } else { assert(handle->flags & UV_HANDLE_UDP_CONNECTED); } memset(&h, 0, sizeof h); h.msg_name = (struct sockaddr*) addr; h.msg_namelen = addrlen; h.msg_iov = (struct iovec*) bufs; h.msg_iovlen = nbufs; do { size = sendmsg(handle->io_watcher.fd, &h, 0); } while (size == -1 && errno == EINTR); if (size == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK || errno == ENOBUFS) return UV_EAGAIN; else return UV__ERR(errno); } return size; } static int uv__udp_set_membership4(uv_udp_t* handle, const struct sockaddr_in* multicast_addr, const char* interface_addr, uv_membership membership) { struct ip_mreq mreq; int optname; int err; memset(&mreq, 0, sizeof mreq); if (interface_addr) { err = uv_inet_pton(AF_INET, interface_addr, &mreq.imr_interface.s_addr); if (err) return err; } else { mreq.imr_interface.s_addr = htonl(INADDR_ANY); } mreq.imr_multiaddr.s_addr = multicast_addr->sin_addr.s_addr; switch (membership) { case UV_JOIN_GROUP: optname = IP_ADD_MEMBERSHIP; break; case UV_LEAVE_GROUP: optname = IP_DROP_MEMBERSHIP; break; default: return UV_EINVAL; } if (setsockopt(handle->io_watcher.fd, IPPROTO_IP, optname, &mreq, sizeof(mreq))) { #if defined(__MVS__) if (errno == ENXIO) return UV_ENODEV; #endif return UV__ERR(errno); } return 0; } static int uv__udp_set_membership6(uv_udp_t* handle, const struct sockaddr_in6* multicast_addr, const char* interface_addr, uv_membership membership) { int optname; struct ipv6_mreq mreq; struct sockaddr_in6 addr6; memset(&mreq, 0, sizeof mreq); if (interface_addr) { if (uv_ip6_addr(interface_addr, 0, &addr6)) return UV_EINVAL; mreq.ipv6mr_interface = addr6.sin6_scope_id; } else { mreq.ipv6mr_interface = 0; } mreq.ipv6mr_multiaddr = multicast_addr->sin6_addr; switch (membership) { case UV_JOIN_GROUP: optname = IPV6_ADD_MEMBERSHIP; break; case UV_LEAVE_GROUP: optname = IPV6_DROP_MEMBERSHIP; break; default: return UV_EINVAL; } if (setsockopt(handle->io_watcher.fd, IPPROTO_IPV6, optname, &mreq, sizeof(mreq))) { #if defined(__MVS__) if (errno == ENXIO) return UV_ENODEV; #endif return UV__ERR(errno); } return 0; } #if !defined(__OpenBSD__) && \ !defined(__NetBSD__) && \ !defined(__ANDROID__) && \ !defined(__DragonFly__) && \ !defined(__QNX__) && \ !defined(__GNU__) static int uv__udp_set_source_membership4(uv_udp_t* handle, const struct sockaddr_in* multicast_addr, const char* interface_addr, const struct sockaddr_in* source_addr, uv_membership membership) { struct ip_mreq_source mreq; int optname; int err; err = uv__udp_maybe_deferred_bind(handle, AF_INET, UV_UDP_REUSEADDR); if (err) return err; memset(&mreq, 0, sizeof(mreq)); if (interface_addr != NULL) { err = uv_inet_pton(AF_INET, interface_addr, &mreq.imr_interface.s_addr); if (err) return err; } else { mreq.imr_interface.s_addr = htonl(INADDR_ANY); } mreq.imr_multiaddr.s_addr = multicast_addr->sin_addr.s_addr; mreq.imr_sourceaddr.s_addr = source_addr->sin_addr.s_addr; if (membership == UV_JOIN_GROUP) optname = IP_ADD_SOURCE_MEMBERSHIP; else if (membership == 
UV_LEAVE_GROUP) optname = IP_DROP_SOURCE_MEMBERSHIP; else return UV_EINVAL; if (setsockopt(handle->io_watcher.fd, IPPROTO_IP, optname, &mreq, sizeof(mreq))) { return UV__ERR(errno); } return 0; } static int uv__udp_set_source_membership6(uv_udp_t* handle, const struct sockaddr_in6* multicast_addr, const char* interface_addr, const struct sockaddr_in6* source_addr, uv_membership membership) { struct group_source_req mreq; struct sockaddr_in6 addr6; int optname; int err; err = uv__udp_maybe_deferred_bind(handle, AF_INET6, UV_UDP_REUSEADDR); if (err) return err; memset(&mreq, 0, sizeof(mreq)); if (interface_addr != NULL) { err = uv_ip6_addr(interface_addr, 0, &addr6); if (err) return err; mreq.gsr_interface = addr6.sin6_scope_id; } else { mreq.gsr_interface = 0; } STATIC_ASSERT(sizeof(mreq.gsr_group) >= sizeof(*multicast_addr)); STATIC_ASSERT(sizeof(mreq.gsr_source) >= sizeof(*source_addr)); memcpy(&mreq.gsr_group, multicast_addr, sizeof(*multicast_addr)); memcpy(&mreq.gsr_source, source_addr, sizeof(*source_addr)); if (membership == UV_JOIN_GROUP) optname = MCAST_JOIN_SOURCE_GROUP; else if (membership == UV_LEAVE_GROUP) optname = MCAST_LEAVE_SOURCE_GROUP; else return UV_EINVAL; if (setsockopt(handle->io_watcher.fd, IPPROTO_IPV6, optname, &mreq, sizeof(mreq))) { return UV__ERR(errno); } return 0; } #endif int uv__udp_init_ex(uv_loop_t* loop, uv_udp_t* handle, unsigned flags, int domain) { int fd; fd = -1; if (domain != AF_UNSPEC) { fd = uv__socket(domain, SOCK_DGRAM, 0); if (fd < 0) return fd; } uv__handle_init(loop, (uv_handle_t*)handle, UV_UDP); handle->alloc_cb = NULL; handle->recv_cb = NULL; handle->send_queue_size = 0; handle->send_queue_count = 0; uv__io_init(&handle->io_watcher, uv__udp_io, fd); QUEUE_INIT(&handle->write_queue); QUEUE_INIT(&handle->write_completed_queue); return 0; } int uv_udp_using_recvmmsg(const uv_udp_t* handle) { #if HAVE_MMSG if (handle->flags & UV_HANDLE_UDP_RECVMMSG) { uv_once(&once, uv__udp_mmsg_init); return uv__recvmmsg_avail; } #endif return 0; } int uv_udp_open(uv_udp_t* handle, uv_os_sock_t sock) { int err; /* Check for already active socket. 
*/ if (handle->io_watcher.fd != -1) return UV_EBUSY; if (uv__fd_exists(handle->loop, sock)) return UV_EEXIST; err = uv__nonblock(sock, 1); if (err) return err; err = uv__set_reuse(sock); if (err) return err; handle->io_watcher.fd = sock; if (uv__udp_is_connected(handle)) handle->flags |= UV_HANDLE_UDP_CONNECTED; return 0; } int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership) { int err; struct sockaddr_in addr4; struct sockaddr_in6 addr6; if (uv_ip4_addr(multicast_addr, 0, &addr4) == 0) { err = uv__udp_maybe_deferred_bind(handle, AF_INET, UV_UDP_REUSEADDR); if (err) return err; return uv__udp_set_membership4(handle, &addr4, interface_addr, membership); } else if (uv_ip6_addr(multicast_addr, 0, &addr6) == 0) { err = uv__udp_maybe_deferred_bind(handle, AF_INET6, UV_UDP_REUSEADDR); if (err) return err; return uv__udp_set_membership6(handle, &addr6, interface_addr, membership); } else { return UV_EINVAL; } } int uv_udp_set_source_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, const char* source_addr, uv_membership membership) { #if !defined(__OpenBSD__) && \ !defined(__NetBSD__) && \ !defined(__ANDROID__) && \ !defined(__DragonFly__) && \ !defined(__QNX__) && \ !defined(__GNU__) int err; union uv__sockaddr mcast_addr; union uv__sockaddr src_addr; err = uv_ip4_addr(multicast_addr, 0, &mcast_addr.in); if (err) { err = uv_ip6_addr(multicast_addr, 0, &mcast_addr.in6); if (err) return err; err = uv_ip6_addr(source_addr, 0, &src_addr.in6); if (err) return err; return uv__udp_set_source_membership6(handle, &mcast_addr.in6, interface_addr, &src_addr.in6, membership); } err = uv_ip4_addr(source_addr, 0, &src_addr.in); if (err) return err; return uv__udp_set_source_membership4(handle, &mcast_addr.in, interface_addr, &src_addr.in, membership); #else return UV_ENOSYS; #endif } static int uv__setsockopt(uv_udp_t* handle, int option4, int option6, const void* val, socklen_t size) { int r; if (handle->flags & UV_HANDLE_IPV6) r = setsockopt(handle->io_watcher.fd, IPPROTO_IPV6, option6, val, size); else r = setsockopt(handle->io_watcher.fd, IPPROTO_IP, option4, val, size); if (r) return UV__ERR(errno); return 0; } static int uv__setsockopt_maybe_char(uv_udp_t* handle, int option4, int option6, int val) { #if defined(__sun) || defined(_AIX) || defined(__MVS__) char arg = val; #elif defined(__OpenBSD__) unsigned char arg = val; #else int arg = val; #endif if (val < 0 || val > 255) return UV_EINVAL; return uv__setsockopt(handle, option4, option6, &arg, sizeof(arg)); } int uv_udp_set_broadcast(uv_udp_t* handle, int on) { if (setsockopt(handle->io_watcher.fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on))) { return UV__ERR(errno); } return 0; } int uv_udp_set_ttl(uv_udp_t* handle, int ttl) { if (ttl < 1 || ttl > 255) return UV_EINVAL; #if defined(__MVS__) if (!(handle->flags & UV_HANDLE_IPV6)) return UV_ENOTSUP; /* zOS does not support setting ttl for IPv4 */ #endif /* * On Solaris and derivatives such as SmartOS, the length of socket options * is sizeof(int) for IP_TTL and IPV6_UNICAST_HOPS, * so hardcode the size of these options on this platform, * and use the general uv__setsockopt_maybe_char call on other platforms. 
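 *
 * Seen from the caller, this setter is one of several per-handle options
 * applied after binding; an illustrative multicast receiver setup
 * (handle and the 239.x group address are our own placeholders, not part
 * of this file) might be:
 *
 *   uv_udp_set_membership(&handle, "239.255.0.1", NULL, UV_JOIN_GROUP);
 *   uv_udp_set_multicast_loop(&handle, 1);    // deliver our own packets
 *   uv_udp_set_multicast_ttl(&handle, 2);     // limit forwarding scope
 *   uv_udp_set_ttl(&handle, 64);              // unicast hop limit
 *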
*/ #if defined(__sun) || defined(_AIX) || defined(__OpenBSD__) || \ defined(__MVS__) || defined(__QNX__) return uv__setsockopt(handle, IP_TTL, IPV6_UNICAST_HOPS, &ttl, sizeof(ttl)); #else /* !(defined(__sun) || defined(_AIX) || defined (__OpenBSD__) || defined(__MVS__) || defined(__QNX__)) */ return uv__setsockopt_maybe_char(handle, IP_TTL, IPV6_UNICAST_HOPS, ttl); #endif /* defined(__sun) || defined(_AIX) || defined (__OpenBSD__) || defined(__MVS__) || defined(__QNX__) */ } int uv_udp_set_multicast_ttl(uv_udp_t* handle, int ttl) { /* * On Solaris and derivatives such as SmartOS, the length of socket options * is sizeof(int) for IPV6_MULTICAST_HOPS and sizeof(char) for * IP_MULTICAST_TTL, so hardcode the size of the option in the IPv6 case, * and use the general uv__setsockopt_maybe_char call otherwise. */ #if defined(__sun) || defined(_AIX) || defined(__OpenBSD__) || \ defined(__MVS__) || defined(__QNX__) if (handle->flags & UV_HANDLE_IPV6) return uv__setsockopt(handle, IP_MULTICAST_TTL, IPV6_MULTICAST_HOPS, &ttl, sizeof(ttl)); #endif /* defined(__sun) || defined(_AIX) || defined(__OpenBSD__) || \ defined(__MVS__) || defined(__QNX__) */ return uv__setsockopt_maybe_char(handle, IP_MULTICAST_TTL, IPV6_MULTICAST_HOPS, ttl); } int uv_udp_set_multicast_loop(uv_udp_t* handle, int on) { /* * On Solaris and derivatives such as SmartOS, the length of socket options * is sizeof(int) for IPV6_MULTICAST_LOOP and sizeof(char) for * IP_MULTICAST_LOOP, so hardcode the size of the option in the IPv6 case, * and use the general uv__setsockopt_maybe_char call otherwise. */ #if defined(__sun) || defined(_AIX) || defined(__OpenBSD__) || \ defined(__MVS__) || defined(__QNX__) if (handle->flags & UV_HANDLE_IPV6) return uv__setsockopt(handle, IP_MULTICAST_LOOP, IPV6_MULTICAST_LOOP, &on, sizeof(on)); #endif /* defined(__sun) || defined(_AIX) ||defined(__OpenBSD__) || defined(__MVS__) || defined(__QNX__) */ return uv__setsockopt_maybe_char(handle, IP_MULTICAST_LOOP, IPV6_MULTICAST_LOOP, on); } int uv_udp_set_multicast_interface(uv_udp_t* handle, const char* interface_addr) { struct sockaddr_storage addr_st; struct sockaddr_in* addr4; struct sockaddr_in6* addr6; addr4 = (struct sockaddr_in*) &addr_st; addr6 = (struct sockaddr_in6*) &addr_st; if (!interface_addr) { memset(&addr_st, 0, sizeof addr_st); if (handle->flags & UV_HANDLE_IPV6) { addr_st.ss_family = AF_INET6; addr6->sin6_scope_id = 0; } else { addr_st.ss_family = AF_INET; addr4->sin_addr.s_addr = htonl(INADDR_ANY); } } else if (uv_ip4_addr(interface_addr, 0, addr4) == 0) { /* nothing, address was parsed */ } else if (uv_ip6_addr(interface_addr, 0, addr6) == 0) { /* nothing, address was parsed */ } else { return UV_EINVAL; } if (addr_st.ss_family == AF_INET) { if (setsockopt(handle->io_watcher.fd, IPPROTO_IP, IP_MULTICAST_IF, (void*) &addr4->sin_addr, sizeof(addr4->sin_addr)) == -1) { return UV__ERR(errno); } } else if (addr_st.ss_family == AF_INET6) { if (setsockopt(handle->io_watcher.fd, IPPROTO_IPV6, IPV6_MULTICAST_IF, &addr6->sin6_scope_id, sizeof(addr6->sin6_scope_id)) == -1) { return UV__ERR(errno); } } else { assert(0 && "unexpected address family"); abort(); } return 0; } int uv_udp_getpeername(const uv_udp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getpeername, name, namelen); } int uv_udp_getsockname(const uv_udp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getsockname, name, namelen); } int uv__udp_recv_start(uv_udp_t* 
handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb) { int err; if (alloc_cb == NULL || recv_cb == NULL) return UV_EINVAL; if (uv__io_active(&handle->io_watcher, POLLIN)) return UV_EALREADY; /* FIXME(bnoordhuis) Should be UV_EBUSY. */ err = uv__udp_maybe_deferred_bind(handle, AF_INET, 0); if (err) return err; handle->alloc_cb = alloc_cb; handle->recv_cb = recv_cb; uv__io_start(handle->loop, &handle->io_watcher, POLLIN); uv__handle_start(handle); return 0; } int uv__udp_recv_stop(uv_udp_t* handle) { uv__io_stop(handle->loop, &handle->io_watcher, POLLIN); if (!uv__io_active(&handle->io_watcher, POLLOUT)) uv__handle_stop(handle); handle->alloc_cb = NULL; handle->recv_cb = NULL; return 0; } gevent-24.11.1/deps/libuv/src/uv-common.c000066400000000000000000000532511471441230600201070ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "uv-common.h" #include #include #include #include /* NULL */ #include #include /* malloc */ #include /* memset */ #if defined(_WIN32) # include /* malloc */ #else # include /* if_nametoindex */ # include /* AF_UNIX, sockaddr_un */ #endif typedef struct { uv_malloc_func local_malloc; uv_realloc_func local_realloc; uv_calloc_func local_calloc; uv_free_func local_free; } uv__allocator_t; static uv__allocator_t uv__allocator = { malloc, realloc, calloc, free, }; char* uv__strdup(const char* s) { size_t len = strlen(s) + 1; char* m = uv__malloc(len); if (m == NULL) return NULL; return memcpy(m, s, len); } char* uv__strndup(const char* s, size_t n) { char* m; size_t len = strlen(s); if (n < len) len = n; m = uv__malloc(len + 1); if (m == NULL) return NULL; m[len] = '\0'; return memcpy(m, s, len); } void* uv__malloc(size_t size) { if (size > 0) return uv__allocator.local_malloc(size); return NULL; } void uv__free(void* ptr) { int saved_errno; /* Libuv expects that free() does not clobber errno. The system allocator * honors that assumption but custom allocators may not be so careful. 
*/ saved_errno = errno; uv__allocator.local_free(ptr); errno = saved_errno; } void* uv__calloc(size_t count, size_t size) { return uv__allocator.local_calloc(count, size); } void* uv__realloc(void* ptr, size_t size) { if (size > 0) return uv__allocator.local_realloc(ptr, size); uv__free(ptr); return NULL; } void* uv__reallocf(void* ptr, size_t size) { void* newptr; newptr = uv__realloc(ptr, size); if (newptr == NULL) if (size > 0) uv__free(ptr); return newptr; } int uv_replace_allocator(uv_malloc_func malloc_func, uv_realloc_func realloc_func, uv_calloc_func calloc_func, uv_free_func free_func) { if (malloc_func == NULL || realloc_func == NULL || calloc_func == NULL || free_func == NULL) { return UV_EINVAL; } uv__allocator.local_malloc = malloc_func; uv__allocator.local_realloc = realloc_func; uv__allocator.local_calloc = calloc_func; uv__allocator.local_free = free_func; return 0; } #define XX(uc, lc) case UV_##uc: return sizeof(uv_##lc##_t); size_t uv_handle_size(uv_handle_type type) { switch (type) { UV_HANDLE_TYPE_MAP(XX) default: return -1; } } size_t uv_req_size(uv_req_type type) { switch(type) { UV_REQ_TYPE_MAP(XX) default: return -1; } } #undef XX size_t uv_loop_size(void) { return sizeof(uv_loop_t); } uv_buf_t uv_buf_init(char* base, unsigned int len) { uv_buf_t buf; buf.base = base; buf.len = len; return buf; } static const char* uv__unknown_err_code(int err) { char buf[32]; char* copy; snprintf(buf, sizeof(buf), "Unknown system error %d", err); copy = uv__strdup(buf); return copy != NULL ? copy : "Unknown system error"; } #define UV_ERR_NAME_GEN_R(name, _) \ case UV_## name: \ uv__strscpy(buf, #name, buflen); break; char* uv_err_name_r(int err, char* buf, size_t buflen) { switch (err) { UV_ERRNO_MAP(UV_ERR_NAME_GEN_R) default: snprintf(buf, buflen, "Unknown system error %d", err); } return buf; } #undef UV_ERR_NAME_GEN_R #define UV_ERR_NAME_GEN(name, _) case UV_ ## name: return #name; const char* uv_err_name(int err) { switch (err) { UV_ERRNO_MAP(UV_ERR_NAME_GEN) } return uv__unknown_err_code(err); } #undef UV_ERR_NAME_GEN #define UV_STRERROR_GEN_R(name, msg) \ case UV_ ## name: \ snprintf(buf, buflen, "%s", msg); break; char* uv_strerror_r(int err, char* buf, size_t buflen) { switch (err) { UV_ERRNO_MAP(UV_STRERROR_GEN_R) default: snprintf(buf, buflen, "Unknown system error %d", err); } return buf; } #undef UV_STRERROR_GEN_R #define UV_STRERROR_GEN(name, msg) case UV_ ## name: return msg; const char* uv_strerror(int err) { switch (err) { UV_ERRNO_MAP(UV_STRERROR_GEN) } return uv__unknown_err_code(err); } #undef UV_STRERROR_GEN int uv_ip4_addr(const char* ip, int port, struct sockaddr_in* addr) { memset(addr, 0, sizeof(*addr)); addr->sin_family = AF_INET; addr->sin_port = htons(port); #ifdef SIN6_LEN addr->sin_len = sizeof(*addr); #endif return uv_inet_pton(AF_INET, ip, &(addr->sin_addr.s_addr)); } int uv_ip6_addr(const char* ip, int port, struct sockaddr_in6* addr) { char address_part[40]; size_t address_part_size; const char* zone_index; memset(addr, 0, sizeof(*addr)); addr->sin6_family = AF_INET6; addr->sin6_port = htons(port); #ifdef SIN6_LEN addr->sin6_len = sizeof(*addr); #endif zone_index = strchr(ip, '%'); if (zone_index != NULL) { address_part_size = zone_index - ip; if (address_part_size >= sizeof(address_part)) address_part_size = sizeof(address_part) - 1; memcpy(address_part, ip, address_part_size); address_part[address_part_size] = '\0'; ip = address_part; zone_index++; /* skip '%' */ /* NOTE: unknown interface (id=0) is silently ignored */ #ifdef _WIN32 
addr->sin6_scope_id = atoi(zone_index); #else addr->sin6_scope_id = if_nametoindex(zone_index); #endif } return uv_inet_pton(AF_INET6, ip, &addr->sin6_addr); } int uv_ip4_name(const struct sockaddr_in* src, char* dst, size_t size) { return uv_inet_ntop(AF_INET, &src->sin_addr, dst, size); } int uv_ip6_name(const struct sockaddr_in6* src, char* dst, size_t size) { return uv_inet_ntop(AF_INET6, &src->sin6_addr, dst, size); } int uv_ip_name(const struct sockaddr *src, char *dst, size_t size) { switch (src->sa_family) { case AF_INET: return uv_inet_ntop(AF_INET, &((struct sockaddr_in *)src)->sin_addr, dst, size); case AF_INET6: return uv_inet_ntop(AF_INET6, &((struct sockaddr_in6 *)src)->sin6_addr, dst, size); default: return UV_EAFNOSUPPORT; } } int uv_tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int flags) { unsigned int addrlen; if (handle->type != UV_TCP) return UV_EINVAL; if (uv__is_closing(handle)) { return UV_EINVAL; } if (addr->sa_family == AF_INET) addrlen = sizeof(struct sockaddr_in); else if (addr->sa_family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); else return UV_EINVAL; return uv__tcp_bind(handle, addr, addrlen, flags); } int uv_udp_init_ex(uv_loop_t* loop, uv_udp_t* handle, unsigned flags) { unsigned extra_flags; int domain; int rc; /* Use the lower 8 bits for the domain. */ domain = flags & 0xFF; if (domain != AF_INET && domain != AF_INET6 && domain != AF_UNSPEC) return UV_EINVAL; /* Use the higher bits for extra flags. */ extra_flags = flags & ~0xFF; if (extra_flags & ~UV_UDP_RECVMMSG) return UV_EINVAL; rc = uv__udp_init_ex(loop, handle, flags, domain); if (rc == 0) if (extra_flags & UV_UDP_RECVMMSG) handle->flags |= UV_HANDLE_UDP_RECVMMSG; return rc; } int uv_udp_init(uv_loop_t* loop, uv_udp_t* handle) { return uv_udp_init_ex(loop, handle, AF_UNSPEC); } int uv_udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int flags) { unsigned int addrlen; if (handle->type != UV_UDP) return UV_EINVAL; if (addr->sa_family == AF_INET) addrlen = sizeof(struct sockaddr_in); else if (addr->sa_family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); else return UV_EINVAL; return uv__udp_bind(handle, addr, addrlen, flags); } int uv_tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, uv_connect_cb cb) { unsigned int addrlen; if (handle->type != UV_TCP) return UV_EINVAL; if (addr->sa_family == AF_INET) addrlen = sizeof(struct sockaddr_in); else if (addr->sa_family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); else return UV_EINVAL; return uv__tcp_connect(req, handle, addr, addrlen, cb); } int uv_udp_connect(uv_udp_t* handle, const struct sockaddr* addr) { unsigned int addrlen; if (handle->type != UV_UDP) return UV_EINVAL; /* Disconnect the handle */ if (addr == NULL) { if (!(handle->flags & UV_HANDLE_UDP_CONNECTED)) return UV_ENOTCONN; return uv__udp_disconnect(handle); } if (addr->sa_family == AF_INET) addrlen = sizeof(struct sockaddr_in); else if (addr->sa_family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); else return UV_EINVAL; if (handle->flags & UV_HANDLE_UDP_CONNECTED) return UV_EISCONN; return uv__udp_connect(handle, addr, addrlen); } int uv__udp_is_connected(uv_udp_t* handle) { struct sockaddr_storage addr; int addrlen; if (handle->type != UV_UDP) return 0; addrlen = sizeof(addr); if (uv_udp_getpeername(handle, (struct sockaddr*) &addr, &addrlen) != 0) return 0; return addrlen > 0; } int uv__udp_check_before_send(uv_udp_t* handle, const struct sockaddr* addr) { unsigned int addrlen; if (handle->type != 
UV_UDP) return UV_EINVAL; if (addr != NULL && (handle->flags & UV_HANDLE_UDP_CONNECTED)) return UV_EISCONN; if (addr == NULL && !(handle->flags & UV_HANDLE_UDP_CONNECTED)) return UV_EDESTADDRREQ; if (addr != NULL) { if (addr->sa_family == AF_INET) addrlen = sizeof(struct sockaddr_in); else if (addr->sa_family == AF_INET6) addrlen = sizeof(struct sockaddr_in6); #if defined(AF_UNIX) && !defined(_WIN32) else if (addr->sa_family == AF_UNIX) addrlen = sizeof(struct sockaddr_un); #endif else return UV_EINVAL; } else { addrlen = 0; } return addrlen; } int uv_udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, uv_udp_send_cb send_cb) { int addrlen; addrlen = uv__udp_check_before_send(handle, addr); if (addrlen < 0) return addrlen; return uv__udp_send(req, handle, bufs, nbufs, addr, addrlen, send_cb); } int uv_udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr) { int addrlen; addrlen = uv__udp_check_before_send(handle, addr); if (addrlen < 0) return addrlen; return uv__udp_try_send(handle, bufs, nbufs, addr, addrlen); } int uv_udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb) { if (handle->type != UV_UDP || alloc_cb == NULL || recv_cb == NULL) return UV_EINVAL; else return uv__udp_recv_start(handle, alloc_cb, recv_cb); } int uv_udp_recv_stop(uv_udp_t* handle) { if (handle->type != UV_UDP) return UV_EINVAL; else return uv__udp_recv_stop(handle); } void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg) { QUEUE queue; QUEUE* q; uv_handle_t* h; QUEUE_MOVE(&loop->handle_queue, &queue); while (!QUEUE_EMPTY(&queue)) { q = QUEUE_HEAD(&queue); h = QUEUE_DATA(q, uv_handle_t, handle_queue); QUEUE_REMOVE(q); QUEUE_INSERT_TAIL(&loop->handle_queue, q); if (h->flags & UV_HANDLE_INTERNAL) continue; walk_cb(h, arg); } } static void uv__print_handles(uv_loop_t* loop, int only_active, FILE* stream) { const char* type; QUEUE* q; uv_handle_t* h; if (loop == NULL) loop = uv_default_loop(); QUEUE_FOREACH(q, &loop->handle_queue) { h = QUEUE_DATA(q, uv_handle_t, handle_queue); if (only_active && !uv__is_active(h)) continue; switch (h->type) { #define X(uc, lc) case UV_##uc: type = #lc; break; UV_HANDLE_TYPE_MAP(X) #undef X default: type = ""; } fprintf(stream, "[%c%c%c] %-8s %p\n", "R-"[!(h->flags & UV_HANDLE_REF)], "A-"[!(h->flags & UV_HANDLE_ACTIVE)], "I-"[!(h->flags & UV_HANDLE_INTERNAL)], type, (void*)h); } } void uv_print_all_handles(uv_loop_t* loop, FILE* stream) { uv__print_handles(loop, 0, stream); } void uv_print_active_handles(uv_loop_t* loop, FILE* stream) { uv__print_handles(loop, 1, stream); } void uv_ref(uv_handle_t* handle) { uv__handle_ref(handle); } void uv_unref(uv_handle_t* handle) { uv__handle_unref(handle); } int uv_has_ref(const uv_handle_t* handle) { return uv__has_ref(handle); } void uv_stop(uv_loop_t* loop) { loop->stop_flag = 1; } uint64_t uv_now(const uv_loop_t* loop) { return loop->time; } size_t uv__count_bufs(const uv_buf_t bufs[], unsigned int nbufs) { unsigned int i; size_t bytes; bytes = 0; for (i = 0; i < nbufs; i++) bytes += (size_t) bufs[i].len; return bytes; } int uv_recv_buffer_size(uv_handle_t* handle, int* value) { return uv__socket_sockopt(handle, SO_RCVBUF, value); } int uv_send_buffer_size(uv_handle_t* handle, int *value) { return uv__socket_sockopt(handle, SO_SNDBUF, value); } int uv_fs_event_getpath(uv_fs_event_t* handle, char* buffer, size_t* size) { size_t required_len; if (!uv__is_active(handle)) { *size = 0; return 
UV_EINVAL; } required_len = strlen(handle->path); if (required_len >= *size) { *size = required_len + 1; return UV_ENOBUFS; } memcpy(buffer, handle->path, required_len); *size = required_len; buffer[required_len] = '\0'; return 0; } /* The windows implementation does not have the same structure layout as * the unix implementation (nbufs is not directly inside req but is * contained in a nested union/struct) so this function locates it. */ static unsigned int* uv__get_nbufs(uv_fs_t* req) { #ifdef _WIN32 return &req->fs.info.nbufs; #else return &req->nbufs; #endif } /* uv_fs_scandir() uses the system allocator to allocate memory on non-Windows * systems. So, the memory should be released using free(). On Windows, * uv__malloc() is used, so use uv__free() to free memory. */ #ifdef _WIN32 # define uv__fs_scandir_free uv__free #else # define uv__fs_scandir_free free #endif void uv__fs_scandir_cleanup(uv_fs_t* req) { uv__dirent_t** dents; unsigned int* nbufs = uv__get_nbufs(req); dents = req->ptr; if (*nbufs > 0 && *nbufs != (unsigned int) req->result) (*nbufs)--; for (; *nbufs < (unsigned int) req->result; (*nbufs)++) uv__fs_scandir_free(dents[*nbufs]); uv__fs_scandir_free(req->ptr); req->ptr = NULL; } int uv_fs_scandir_next(uv_fs_t* req, uv_dirent_t* ent) { uv__dirent_t** dents; uv__dirent_t* dent; unsigned int* nbufs; /* Check to see if req passed */ if (req->result < 0) return req->result; /* Ptr will be null if req was canceled or no files found */ if (!req->ptr) return UV_EOF; nbufs = uv__get_nbufs(req); assert(nbufs); dents = req->ptr; /* Free previous entity */ if (*nbufs > 0) uv__fs_scandir_free(dents[*nbufs - 1]); /* End was already reached */ if (*nbufs == (unsigned int) req->result) { uv__fs_scandir_free(dents); req->ptr = NULL; return UV_EOF; } dent = dents[(*nbufs)++]; ent->name = dent->d_name; ent->type = uv__fs_get_dirent_type(dent); return 0; } uv_dirent_type_t uv__fs_get_dirent_type(uv__dirent_t* dent) { uv_dirent_type_t type; #ifdef HAVE_DIRENT_TYPES switch (dent->d_type) { case UV__DT_DIR: type = UV_DIRENT_DIR; break; case UV__DT_FILE: type = UV_DIRENT_FILE; break; case UV__DT_LINK: type = UV_DIRENT_LINK; break; case UV__DT_FIFO: type = UV_DIRENT_FIFO; break; case UV__DT_SOCKET: type = UV_DIRENT_SOCKET; break; case UV__DT_CHAR: type = UV_DIRENT_CHAR; break; case UV__DT_BLOCK: type = UV_DIRENT_BLOCK; break; default: type = UV_DIRENT_UNKNOWN; } #else type = UV_DIRENT_UNKNOWN; #endif return type; } void uv__fs_readdir_cleanup(uv_fs_t* req) { uv_dir_t* dir; uv_dirent_t* dirents; int i; if (req->ptr == NULL) return; dir = req->ptr; dirents = dir->dirents; req->ptr = NULL; if (dirents == NULL) return; for (i = 0; i < req->result; ++i) { uv__free((char*) dirents[i].name); dirents[i].name = NULL; } } int uv_loop_configure(uv_loop_t* loop, uv_loop_option option, ...) { va_list ap; int err; va_start(ap, option); /* Any platform-agnostic options should be handled here. 
*/ err = uv__loop_configure(loop, option, ap); va_end(ap); return err; } static uv_loop_t default_loop_struct; static uv_loop_t* default_loop_ptr; uv_loop_t* uv_default_loop(void) { if (default_loop_ptr != NULL) return default_loop_ptr; if (uv_loop_init(&default_loop_struct)) return NULL; default_loop_ptr = &default_loop_struct; return default_loop_ptr; } uv_loop_t* uv_loop_new(void) { uv_loop_t* loop; loop = uv__malloc(sizeof(*loop)); if (loop == NULL) return NULL; if (uv_loop_init(loop)) { uv__free(loop); return NULL; } return loop; } int uv_loop_close(uv_loop_t* loop) { QUEUE* q; uv_handle_t* h; #ifndef NDEBUG void* saved_data; #endif if (uv__has_active_reqs(loop)) return UV_EBUSY; QUEUE_FOREACH(q, &loop->handle_queue) { h = QUEUE_DATA(q, uv_handle_t, handle_queue); if (!(h->flags & UV_HANDLE_INTERNAL)) return UV_EBUSY; } uv__loop_close(loop); #ifndef NDEBUG saved_data = loop->data; memset(loop, -1, sizeof(*loop)); loop->data = saved_data; #endif if (loop == default_loop_ptr) default_loop_ptr = NULL; return 0; } void uv_loop_delete(uv_loop_t* loop) { uv_loop_t* default_loop; int err; default_loop = default_loop_ptr; err = uv_loop_close(loop); (void) err; /* Squelch compiler warnings. */ assert(err == 0); if (loop != default_loop) uv__free(loop); } int uv_read_start(uv_stream_t* stream, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { if (stream == NULL || alloc_cb == NULL || read_cb == NULL) return UV_EINVAL; if (stream->flags & UV_HANDLE_CLOSING) return UV_EINVAL; if (stream->flags & UV_HANDLE_READING) return UV_EALREADY; if (!(stream->flags & UV_HANDLE_READABLE)) return UV_ENOTCONN; return uv__read_start(stream, alloc_cb, read_cb); } void uv_os_free_environ(uv_env_item_t* envitems, int count) { int i; for (i = 0; i < count; i++) { uv__free(envitems[i].name); } uv__free(envitems); } void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count) { int i; for (i = 0; i < count; i++) uv__free(cpu_infos[i].model); uv__free(cpu_infos); } /* Also covers __clang__ and __INTEL_COMPILER. Disabled on Windows because * threads have already been forcibly terminated by the operating system * by the time destructors run, ergo, it's not safe to try to clean them up. */ #if defined(__GNUC__) && !defined(_WIN32) __attribute__((destructor)) #endif void uv_library_shutdown(void) { static int was_shutdown; if (uv__load_relaxed(&was_shutdown)) return; uv__process_title_cleanup(); uv__signal_cleanup(); #ifdef __MVS__ /* TODO(itodorov) - zos: revisit when Woz compiler is available. */ uv__os390_cleanup(); #else uv__threadpool_cleanup(); #endif uv__store_relaxed(&was_shutdown, 1); } void uv__metrics_update_idle_time(uv_loop_t* loop) { uv__loop_metrics_t* loop_metrics; uint64_t entry_time; uint64_t exit_time; if (!(uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME)) return; loop_metrics = uv__get_loop_metrics(loop); /* The thread running uv__metrics_update_idle_time() is always the same * thread that sets provider_entry_time. So it's unnecessary to lock before * retrieving this value. 
*/ if (loop_metrics->provider_entry_time == 0) return; exit_time = uv_hrtime(); uv_mutex_lock(&loop_metrics->lock); entry_time = loop_metrics->provider_entry_time; loop_metrics->provider_entry_time = 0; loop_metrics->provider_idle_time += exit_time - entry_time; uv_mutex_unlock(&loop_metrics->lock); } void uv__metrics_set_provider_entry_time(uv_loop_t* loop) { uv__loop_metrics_t* loop_metrics; uint64_t now; if (!(uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME)) return; now = uv_hrtime(); loop_metrics = uv__get_loop_metrics(loop); uv_mutex_lock(&loop_metrics->lock); loop_metrics->provider_entry_time = now; uv_mutex_unlock(&loop_metrics->lock); } uint64_t uv_metrics_idle_time(uv_loop_t* loop) { uv__loop_metrics_t* loop_metrics; uint64_t entry_time; uint64_t idle_time; loop_metrics = uv__get_loop_metrics(loop); uv_mutex_lock(&loop_metrics->lock); idle_time = loop_metrics->provider_idle_time; entry_time = loop_metrics->provider_entry_time; uv_mutex_unlock(&loop_metrics->lock); if (entry_time > 0) idle_time += uv_hrtime() - entry_time; return idle_time; } gevent-24.11.1/deps/libuv/src/uv-common.h000066400000000000000000000356261471441230600201220ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ /* * This file is private to libuv. It provides common functionality to both * Windows and Unix backends. */ #ifndef UV_COMMON_H_ #define UV_COMMON_H_ #include #include #include #if defined(_MSC_VER) && _MSC_VER < 1600 # include "uv/stdint-msvc2008.h" #else # include #endif #include "uv.h" #include "uv/tree.h" #include "queue.h" #include "strscpy.h" #if EDOM > 0 # define UV__ERR(x) (-(x)) #else # define UV__ERR(x) (x) #endif #if !defined(snprintf) && defined(_MSC_VER) && _MSC_VER < 1900 extern int snprintf(char*, size_t, const char*, ...); #endif #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0])) #define container_of(ptr, type, member) \ ((type *) ((char *) (ptr) - offsetof(type, member))) #define STATIC_ASSERT(expr) \ void uv__static_assert(int static_assert_failed[1 - 2 * !(expr)]) #if defined(__GNUC__) && (__GNUC__ > 4 || __GNUC__ == 4 && __GNUC_MINOR__ >= 7) #define uv__load_relaxed(p) __atomic_load_n(p, __ATOMIC_RELAXED) #define uv__store_relaxed(p, v) __atomic_store_n(p, v, __ATOMIC_RELAXED) #else #define uv__load_relaxed(p) (*p) #define uv__store_relaxed(p, v) do *p = v; while (0) #endif #define UV__UDP_DGRAM_MAXSIZE (64 * 1024) /* Handle flags. Some flags are specific to Windows or UNIX. 
*/ enum { /* Used by all handles. */ UV_HANDLE_CLOSING = 0x00000001, UV_HANDLE_CLOSED = 0x00000002, UV_HANDLE_ACTIVE = 0x00000004, UV_HANDLE_REF = 0x00000008, UV_HANDLE_INTERNAL = 0x00000010, UV_HANDLE_ENDGAME_QUEUED = 0x00000020, /* Used by streams. */ UV_HANDLE_LISTENING = 0x00000040, UV_HANDLE_CONNECTION = 0x00000080, UV_HANDLE_SHUTTING = 0x00000100, UV_HANDLE_SHUT = 0x00000200, UV_HANDLE_READ_PARTIAL = 0x00000400, UV_HANDLE_READ_EOF = 0x00000800, /* Used by streams and UDP handles. */ UV_HANDLE_READING = 0x00001000, UV_HANDLE_BOUND = 0x00002000, UV_HANDLE_READABLE = 0x00004000, UV_HANDLE_WRITABLE = 0x00008000, UV_HANDLE_READ_PENDING = 0x00010000, UV_HANDLE_SYNC_BYPASS_IOCP = 0x00020000, UV_HANDLE_ZERO_READ = 0x00040000, UV_HANDLE_EMULATE_IOCP = 0x00080000, UV_HANDLE_BLOCKING_WRITES = 0x00100000, UV_HANDLE_CANCELLATION_PENDING = 0x00200000, /* Used by uv_tcp_t and uv_udp_t handles */ UV_HANDLE_IPV6 = 0x00400000, /* Only used by uv_tcp_t handles. */ UV_HANDLE_TCP_NODELAY = 0x01000000, UV_HANDLE_TCP_KEEPALIVE = 0x02000000, UV_HANDLE_TCP_SINGLE_ACCEPT = 0x04000000, UV_HANDLE_TCP_ACCEPT_STATE_CHANGING = 0x08000000, UV_HANDLE_SHARED_TCP_SOCKET = 0x10000000, /* Only used by uv_udp_t handles. */ UV_HANDLE_UDP_PROCESSING = 0x01000000, UV_HANDLE_UDP_CONNECTED = 0x02000000, UV_HANDLE_UDP_RECVMMSG = 0x04000000, /* Only used by uv_pipe_t handles. */ UV_HANDLE_NON_OVERLAPPED_PIPE = 0x01000000, UV_HANDLE_PIPESERVER = 0x02000000, /* Only used by uv_tty_t handles. */ UV_HANDLE_TTY_READABLE = 0x01000000, UV_HANDLE_TTY_RAW = 0x02000000, UV_HANDLE_TTY_SAVED_POSITION = 0x04000000, UV_HANDLE_TTY_SAVED_ATTRIBUTES = 0x08000000, /* Only used by uv_signal_t handles. */ UV_SIGNAL_ONE_SHOT_DISPATCHED = 0x01000000, UV_SIGNAL_ONE_SHOT = 0x02000000, /* Only used by uv_poll_t handles. */ UV_HANDLE_POLL_SLOW = 0x01000000, /* Only used by uv_process_t handles. */ UV_HANDLE_REAP = 0x10000000 }; int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap); void uv__loop_close(uv_loop_t* loop); int uv__read_start(uv_stream_t* stream, uv_alloc_cb alloc_cb, uv_read_cb read_cb); int uv__tcp_bind(uv_tcp_t* tcp, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags); int uv__tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, uv_connect_cb cb); int uv__udp_init_ex(uv_loop_t* loop, uv_udp_t* handle, unsigned flags, int domain); int uv__udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags); int uv__udp_connect(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen); int uv__udp_disconnect(uv_udp_t* handle); int uv__udp_is_connected(uv_udp_t* handle); int uv__udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen, uv_udp_send_cb send_cb); int uv__udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen); int uv__udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloccb, uv_udp_recv_cb recv_cb); int uv__udp_recv_stop(uv_udp_t* handle); void uv__fs_poll_close(uv_fs_poll_t* handle); int uv__getaddrinfo_translate_error(int sys_err); /* EAI_* error. 
*/ enum uv__work_kind { UV__WORK_CPU, UV__WORK_FAST_IO, UV__WORK_SLOW_IO }; void uv__work_submit(uv_loop_t* loop, struct uv__work *w, enum uv__work_kind kind, void (*work)(struct uv__work *w), void (*done)(struct uv__work *w, int status)); void uv__work_done(uv_async_t* handle); size_t uv__count_bufs(const uv_buf_t bufs[], unsigned int nbufs); int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value); void uv__fs_scandir_cleanup(uv_fs_t* req); void uv__fs_readdir_cleanup(uv_fs_t* req); uv_dirent_type_t uv__fs_get_dirent_type(uv__dirent_t* dent); int uv__next_timeout(const uv_loop_t* loop); void uv__run_timers(uv_loop_t* loop); void uv__timer_close(uv_timer_t* handle); void uv__process_title_cleanup(void); void uv__signal_cleanup(void); void uv__threadpool_cleanup(void); #define uv__has_active_reqs(loop) \ ((loop)->active_reqs.count > 0) #define uv__req_register(loop, req) \ do { \ (loop)->active_reqs.count++; \ } \ while (0) #define uv__req_unregister(loop, req) \ do { \ assert(uv__has_active_reqs(loop)); \ (loop)->active_reqs.count--; \ } \ while (0) #define uv__has_active_handles(loop) \ ((loop)->active_handles > 0) #define uv__active_handle_add(h) \ do { \ (h)->loop->active_handles++; \ } \ while (0) #define uv__active_handle_rm(h) \ do { \ (h)->loop->active_handles--; \ } \ while (0) #define uv__is_active(h) \ (((h)->flags & UV_HANDLE_ACTIVE) != 0) #define uv__is_closing(h) \ (((h)->flags & (UV_HANDLE_CLOSING | UV_HANDLE_CLOSED)) != 0) #define uv__handle_start(h) \ do { \ if (((h)->flags & UV_HANDLE_ACTIVE) != 0) break; \ (h)->flags |= UV_HANDLE_ACTIVE; \ if (((h)->flags & UV_HANDLE_REF) != 0) uv__active_handle_add(h); \ } \ while (0) #define uv__handle_stop(h) \ do { \ if (((h)->flags & UV_HANDLE_ACTIVE) == 0) break; \ (h)->flags &= ~UV_HANDLE_ACTIVE; \ if (((h)->flags & UV_HANDLE_REF) != 0) uv__active_handle_rm(h); \ } \ while (0) #define uv__handle_ref(h) \ do { \ if (((h)->flags & UV_HANDLE_REF) != 0) break; \ (h)->flags |= UV_HANDLE_REF; \ if (((h)->flags & UV_HANDLE_CLOSING) != 0) break; \ if (((h)->flags & UV_HANDLE_ACTIVE) != 0) uv__active_handle_add(h); \ } \ while (0) #define uv__handle_unref(h) \ do { \ if (((h)->flags & UV_HANDLE_REF) == 0) break; \ (h)->flags &= ~UV_HANDLE_REF; \ if (((h)->flags & UV_HANDLE_CLOSING) != 0) break; \ if (((h)->flags & UV_HANDLE_ACTIVE) != 0) uv__active_handle_rm(h); \ } \ while (0) #define uv__has_ref(h) \ (((h)->flags & UV_HANDLE_REF) != 0) #if defined(_WIN32) # define uv__handle_platform_init(h) ((h)->u.fd = -1) #else # define uv__handle_platform_init(h) ((h)->next_closing = NULL) #endif #define uv__handle_init(loop_, h, type_) \ do { \ (h)->loop = (loop_); \ (h)->type = (type_); \ (h)->flags = UV_HANDLE_REF; /* Ref the loop when active. */ \ QUEUE_INSERT_TAIL(&(loop_)->handle_queue, &(h)->handle_queue); \ uv__handle_platform_init(h); \ } \ while (0) /* Note: uses an open-coded version of SET_REQ_SUCCESS() because of * a circular dependency between src/uv-common.h and src/win/internal.h. 
*/ #if defined(_WIN32) # define UV_REQ_INIT(req, typ) \ do { \ (req)->type = (typ); \ (req)->u.io.overlapped.Internal = 0; /* SET_REQ_SUCCESS() */ \ } \ while (0) #else # define UV_REQ_INIT(req, typ) \ do { \ (req)->type = (typ); \ } \ while (0) #endif #define uv__req_init(loop, req, typ) \ do { \ UV_REQ_INIT(req, typ); \ uv__req_register(loop, req); \ } \ while (0) #define uv__get_internal_fields(loop) \ ((uv__loop_internal_fields_t*) loop->internal_fields) #define uv__get_loop_metrics(loop) \ (&uv__get_internal_fields(loop)->loop_metrics) /* Allocator prototypes */ void *uv__calloc(size_t count, size_t size); char *uv__strdup(const char* s); char *uv__strndup(const char* s, size_t n); void* uv__malloc(size_t size); void uv__free(void* ptr); void* uv__realloc(void* ptr, size_t size); void* uv__reallocf(void* ptr, size_t size); typedef struct uv__loop_metrics_s uv__loop_metrics_t; typedef struct uv__loop_internal_fields_s uv__loop_internal_fields_t; struct uv__loop_metrics_s { uint64_t provider_entry_time; uint64_t provider_idle_time; uv_mutex_t lock; }; void uv__metrics_update_idle_time(uv_loop_t* loop); void uv__metrics_set_provider_entry_time(uv_loop_t* loop); struct uv__loop_internal_fields_s { unsigned int flags; uv__loop_metrics_t loop_metrics; }; #endif /* UV_COMMON_H_ */ gevent-24.11.1/deps/libuv/src/uv-data-getter-setters.c000066400000000000000000000060661471441230600225110ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include "uv.h" const char* uv_handle_type_name(uv_handle_type type) { switch (type) { #define XX(uc,lc) case UV_##uc: return #lc; UV_HANDLE_TYPE_MAP(XX) #undef XX case UV_FILE: return "file"; case UV_HANDLE_TYPE_MAX: case UV_UNKNOWN_HANDLE: return NULL; } return NULL; } uv_handle_type uv_handle_get_type(const uv_handle_t* handle) { return handle->type; } void* uv_handle_get_data(const uv_handle_t* handle) { return handle->data; } uv_loop_t* uv_handle_get_loop(const uv_handle_t* handle) { return handle->loop; } void uv_handle_set_data(uv_handle_t* handle, void* data) { handle->data = data; } const char* uv_req_type_name(uv_req_type type) { switch (type) { #define XX(uc,lc) case UV_##uc: return #lc; UV_REQ_TYPE_MAP(XX) #undef XX case UV_REQ_TYPE_MAX: case UV_UNKNOWN_REQ: default: /* UV_REQ_TYPE_PRIVATE */ break; } return NULL; } uv_req_type uv_req_get_type(const uv_req_t* req) { return req->type; } void* uv_req_get_data(const uv_req_t* req) { return req->data; } void uv_req_set_data(uv_req_t* req, void* data) { req->data = data; } size_t uv_stream_get_write_queue_size(const uv_stream_t* stream) { return stream->write_queue_size; } size_t uv_udp_get_send_queue_size(const uv_udp_t* handle) { return handle->send_queue_size; } size_t uv_udp_get_send_queue_count(const uv_udp_t* handle) { return handle->send_queue_count; } uv_pid_t uv_process_get_pid(const uv_process_t* proc) { return proc->pid; } uv_fs_type uv_fs_get_type(const uv_fs_t* req) { return req->fs_type; } ssize_t uv_fs_get_result(const uv_fs_t* req) { return req->result; } void* uv_fs_get_ptr(const uv_fs_t* req) { return req->ptr; } const char* uv_fs_get_path(const uv_fs_t* req) { return req->path; } uv_stat_t* uv_fs_get_statbuf(uv_fs_t* req) { return &req->statbuf; } void* uv_loop_get_data(const uv_loop_t* loop) { return loop->data; } void uv_loop_set_data(uv_loop_t* loop, void* data) { loop->data = data; } gevent-24.11.1/deps/libuv/src/version.c000066400000000000000000000033271471441230600176530ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #define UV_STRINGIFY(v) UV_STRINGIFY_HELPER(v) #define UV_STRINGIFY_HELPER(v) #v #define UV_VERSION_STRING_BASE UV_STRINGIFY(UV_VERSION_MAJOR) "." \ UV_STRINGIFY(UV_VERSION_MINOR) "." 
\ UV_STRINGIFY(UV_VERSION_PATCH) #if UV_VERSION_IS_RELEASE # define UV_VERSION_STRING UV_VERSION_STRING_BASE #else # define UV_VERSION_STRING UV_VERSION_STRING_BASE "-" UV_VERSION_SUFFIX #endif unsigned int uv_version(void) { return UV_VERSION_HEX; } const char* uv_version_string(void) { return UV_VERSION_STRING; } gevent-24.11.1/deps/libuv/src/win/000077500000000000000000000000001471441230600166125ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/src/win/async.c000066400000000000000000000054651471441230600201050ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include "uv.h" #include "internal.h" #include "atomicops-inl.h" #include "handle-inl.h" #include "req-inl.h" void uv__async_endgame(uv_loop_t* loop, uv_async_t* handle) { if (handle->flags & UV_HANDLE_CLOSING && !handle->async_sent) { assert(!(handle->flags & UV_HANDLE_CLOSED)); uv__handle_close(handle); } } int uv_async_init(uv_loop_t* loop, uv_async_t* handle, uv_async_cb async_cb) { uv_req_t* req; uv__handle_init(loop, (uv_handle_t*) handle, UV_ASYNC); handle->async_sent = 0; handle->async_cb = async_cb; req = &handle->async_req; UV_REQ_INIT(req, UV_WAKEUP); req->data = handle; uv__handle_start(handle); return 0; } void uv__async_close(uv_loop_t* loop, uv_async_t* handle) { if (!((uv_async_t*)handle)->async_sent) { uv__want_endgame(loop, (uv_handle_t*) handle); } uv__handle_closing(handle); } int uv_async_send(uv_async_t* handle) { uv_loop_t* loop = handle->loop; if (handle->type != UV_ASYNC) { /* Can't set errno because that's not thread-safe. */ return -1; } /* The user should make sure never to call uv_async_send to a closing or * closed handle. */ assert(!(handle->flags & UV_HANDLE_CLOSING)); if (!uv__atomic_exchange_set(&handle->async_sent)) { POST_COMPLETION_FOR_REQ(loop, &handle->async_req); } return 0; } void uv__process_async_wakeup_req(uv_loop_t* loop, uv_async_t* handle, uv_req_t* req) { assert(handle->type == UV_ASYNC); assert(req->type == UV_WAKEUP); handle->async_sent = 0; if (handle->flags & UV_HANDLE_CLOSING) { uv__want_endgame(loop, (uv_handle_t*)handle); } else if (handle->async_cb != NULL) { handle->async_cb(handle); } } gevent-24.11.1/deps/libuv/src/win/atomicops-inl.h000066400000000000000000000043651471441230600215510ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_ATOMICOPS_INL_H_ #define UV_WIN_ATOMICOPS_INL_H_ #include "uv.h" #include "internal.h" /* Atomic set operation on char */ #ifdef _MSC_VER /* MSVC */ /* _InterlockedOr8 is supported by MSVC on x32 and x64. It is slightly less * efficient than InterlockedExchange, but InterlockedExchange8 does not exist, * and interlocked operations on larger targets might require the target to be * aligned. */ #pragma intrinsic(_InterlockedOr8) static char INLINE uv__atomic_exchange_set(char volatile* target) { return _InterlockedOr8(target, 1); } #else /* GCC, Clang in mingw mode */ static inline char uv__atomic_exchange_set(char volatile* target) { #if defined(__i386__) || defined(__x86_64__) /* Mingw-32 version, hopefully this works for 64-bit gcc as well. */ const char one = 1; char old_value; __asm__ __volatile__ ("lock xchgb %0, %1\n\t" : "=r"(old_value), "=m"(*target) : "0"(one), "m"(*target) : "memory"); return old_value; #else return __sync_fetch_and_or(target, 1); #endif } #endif #endif /* UV_WIN_ATOMICOPS_INL_H_ */ gevent-24.11.1/deps/libuv/src/win/core.c000066400000000000000000000501651471441230600177150ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include #include #include #include #include #include #if defined(_MSC_VER) || defined(__MINGW64_VERSION_MAJOR) #include #endif #include "uv.h" #include "internal.h" #include "queue.h" #include "handle-inl.h" #include "heap-inl.h" #include "req-inl.h" /* uv_once initialization guards */ static uv_once_t uv_init_guard_ = UV_ONCE_INIT; #if defined(_DEBUG) && (defined(_MSC_VER) || defined(__MINGW64_VERSION_MAJOR)) /* Our crt debug report handler allows us to temporarily disable asserts * just for the current thread. */ UV_THREAD_LOCAL int uv__crt_assert_enabled = TRUE; static int uv__crt_dbg_report_handler(int report_type, char *message, int *ret_val) { if (uv__crt_assert_enabled || report_type != _CRT_ASSERT) return FALSE; if (ret_val) { /* Set ret_val to 0 to continue with normal execution. * Set ret_val to 1 to trigger a breakpoint. */ if(IsDebuggerPresent()) *ret_val = 1; else *ret_val = 0; } /* Don't call _CrtDbgReport. */ return TRUE; } #else UV_THREAD_LOCAL int uv__crt_assert_enabled = FALSE; #endif #if !defined(__MINGW32__) || __MSVCRT_VERSION__ >= 0x800 static void uv__crt_invalid_parameter_handler(const wchar_t* expression, const wchar_t* function, const wchar_t * file, unsigned int line, uintptr_t reserved) { /* No-op. */ } #endif static uv_loop_t** uv__loops; static int uv__loops_size; static int uv__loops_capacity; #define UV__LOOPS_CHUNK_SIZE 8 static uv_mutex_t uv__loops_lock; static void uv__loops_init(void) { uv_mutex_init(&uv__loops_lock); } static int uv__loops_add(uv_loop_t* loop) { uv_loop_t** new_loops; int new_capacity, i; uv_mutex_lock(&uv__loops_lock); if (uv__loops_size == uv__loops_capacity) { new_capacity = uv__loops_capacity + UV__LOOPS_CHUNK_SIZE; new_loops = uv__realloc(uv__loops, sizeof(uv_loop_t*) * new_capacity); if (!new_loops) goto failed_loops_realloc; uv__loops = new_loops; for (i = uv__loops_capacity; i < new_capacity; ++i) uv__loops[i] = NULL; uv__loops_capacity = new_capacity; } uv__loops[uv__loops_size] = loop; ++uv__loops_size; uv_mutex_unlock(&uv__loops_lock); return 0; failed_loops_realloc: uv_mutex_unlock(&uv__loops_lock); return ERROR_OUTOFMEMORY; } static void uv__loops_remove(uv_loop_t* loop) { int loop_index; int smaller_capacity; uv_loop_t** new_loops; uv_mutex_lock(&uv__loops_lock); for (loop_index = 0; loop_index < uv__loops_size; ++loop_index) { if (uv__loops[loop_index] == loop) break; } /* If loop was not found, ignore */ if (loop_index == uv__loops_size) goto loop_removed; uv__loops[loop_index] = uv__loops[uv__loops_size - 1]; uv__loops[uv__loops_size - 1] = NULL; --uv__loops_size; if (uv__loops_size == 0) { uv__loops_capacity = 0; uv__free(uv__loops); uv__loops = NULL; goto loop_removed; } /* If we didn't grow to big skip downsizing */ if (uv__loops_capacity < 4 * UV__LOOPS_CHUNK_SIZE) goto loop_removed; /* Downsize only if more than half of buffer is free */ smaller_capacity = uv__loops_capacity / 2; if (uv__loops_size >= smaller_capacity) goto loop_removed; new_loops = uv__realloc(uv__loops, sizeof(uv_loop_t*) * smaller_capacity); if (!new_loops) goto loop_removed; uv__loops = new_loops; uv__loops_capacity = smaller_capacity; loop_removed: uv_mutex_unlock(&uv__loops_lock); } void uv__wake_all_loops(void) { int i; uv_loop_t* loop; uv_mutex_lock(&uv__loops_lock); for (i = 0; i < uv__loops_size; ++i) { loop = uv__loops[i]; assert(loop); if (loop->iocp != INVALID_HANDLE_VALUE) PostQueuedCompletionStatus(loop->iocp, 0, 0, NULL); } uv_mutex_unlock(&uv__loops_lock); } static void uv__init(void) { /* Tell Windows that we will handle 
critical errors. */ SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX); /* Tell the CRT to not exit the application when an invalid parameter is * passed. The main issue is that invalid FDs will trigger this behavior. */ #if !defined(__MINGW32__) || __MSVCRT_VERSION__ >= 0x800 _set_invalid_parameter_handler(uv__crt_invalid_parameter_handler); #endif /* We also need to setup our debug report handler because some CRT * functions (eg _get_osfhandle) raise an assert when called with invalid * FDs even though they return the proper error code in the release build. */ #if defined(_DEBUG) && (defined(_MSC_VER) || defined(__MINGW64_VERSION_MAJOR)) _CrtSetReportHook(uv__crt_dbg_report_handler); #endif /* Initialize tracking of all uv loops */ uv__loops_init(); /* Fetch winapi function pointers. This must be done first because other * initialization code might need these function pointers to be loaded. */ uv__winapi_init(); /* Initialize winsock */ uv__winsock_init(); /* Initialize FS */ uv__fs_init(); /* Initialize signal stuff */ uv__signals_init(); /* Initialize console */ uv__console_init(); /* Initialize utilities */ uv__util_init(); /* Initialize system wakeup detection */ uv__init_detect_system_wakeup(); } int uv_loop_init(uv_loop_t* loop) { uv__loop_internal_fields_t* lfields; struct heap* timer_heap; int err; /* Initialize libuv itself first */ uv__once_init(); /* Create an I/O completion port */ loop->iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1); if (loop->iocp == NULL) return uv_translate_sys_error(GetLastError()); lfields = (uv__loop_internal_fields_t*) uv__calloc(1, sizeof(*lfields)); if (lfields == NULL) return UV_ENOMEM; loop->internal_fields = lfields; err = uv_mutex_init(&lfields->loop_metrics.lock); if (err) goto fail_metrics_mutex_init; /* To prevent uninitialized memory access, loop->time must be initialized * to zero before calling uv_update_time for the first time. 
*/ loop->time = 0; uv_update_time(loop); QUEUE_INIT(&loop->wq); QUEUE_INIT(&loop->handle_queue); loop->active_reqs.count = 0; loop->active_handles = 0; loop->pending_reqs_tail = NULL; loop->endgame_handles = NULL; loop->timer_heap = timer_heap = uv__malloc(sizeof(*timer_heap)); if (timer_heap == NULL) { err = UV_ENOMEM; goto fail_timers_alloc; } heap_init(timer_heap); loop->check_handles = NULL; loop->prepare_handles = NULL; loop->idle_handles = NULL; loop->next_prepare_handle = NULL; loop->next_check_handle = NULL; loop->next_idle_handle = NULL; memset(&loop->poll_peer_sockets, 0, sizeof loop->poll_peer_sockets); loop->active_tcp_streams = 0; loop->active_udp_streams = 0; loop->timer_counter = 0; loop->stop_flag = 0; err = uv_mutex_init(&loop->wq_mutex); if (err) goto fail_mutex_init; err = uv_async_init(loop, &loop->wq_async, uv__work_done); if (err) goto fail_async_init; uv__handle_unref(&loop->wq_async); loop->wq_async.flags |= UV_HANDLE_INTERNAL; err = uv__loops_add(loop); if (err) goto fail_async_init; return 0; fail_async_init: uv_mutex_destroy(&loop->wq_mutex); fail_mutex_init: uv__free(timer_heap); loop->timer_heap = NULL; fail_timers_alloc: uv_mutex_destroy(&lfields->loop_metrics.lock); fail_metrics_mutex_init: uv__free(lfields); loop->internal_fields = NULL; CloseHandle(loop->iocp); loop->iocp = INVALID_HANDLE_VALUE; return err; } void uv_update_time(uv_loop_t* loop) { uint64_t new_time = uv__hrtime(1000); assert(new_time >= loop->time); loop->time = new_time; } void uv__once_init(void) { uv_once(&uv_init_guard_, uv__init); } void uv__loop_close(uv_loop_t* loop) { uv__loop_internal_fields_t* lfields; size_t i; uv__loops_remove(loop); /* Close the async handle without needing an extra loop iteration. * We might have a pending message, but we're just going to destroy the IOCP * soon, so we can just discard it now without the usual risk of a getting * another notification from GetQueuedCompletionStatusEx after calling the * close_cb (which we also skip defining). We'll assert later that queue was * actually empty and all reqs handled. 
*/ loop->wq_async.async_sent = 0; loop->wq_async.close_cb = NULL; uv__handle_closing(&loop->wq_async); uv__handle_close(&loop->wq_async); for (i = 0; i < ARRAY_SIZE(loop->poll_peer_sockets); i++) { SOCKET sock = loop->poll_peer_sockets[i]; if (sock != 0 && sock != INVALID_SOCKET) closesocket(sock); } uv_mutex_lock(&loop->wq_mutex); assert(QUEUE_EMPTY(&loop->wq) && "thread pool work queue not empty!"); assert(!uv__has_active_reqs(loop)); uv_mutex_unlock(&loop->wq_mutex); uv_mutex_destroy(&loop->wq_mutex); uv__free(loop->timer_heap); loop->timer_heap = NULL; lfields = uv__get_internal_fields(loop); uv_mutex_destroy(&lfields->loop_metrics.lock); uv__free(lfields); loop->internal_fields = NULL; CloseHandle(loop->iocp); } int uv__loop_configure(uv_loop_t* loop, uv_loop_option option, va_list ap) { uv__loop_internal_fields_t* lfields; lfields = uv__get_internal_fields(loop); if (option == UV_METRICS_IDLE_TIME) { lfields->flags |= UV_METRICS_IDLE_TIME; return 0; } return UV_ENOSYS; } int uv_backend_fd(const uv_loop_t* loop) { return -1; } int uv_loop_fork(uv_loop_t* loop) { return UV_ENOSYS; } static int uv__loop_alive(const uv_loop_t* loop) { return uv__has_active_handles(loop) || uv__has_active_reqs(loop) || loop->pending_reqs_tail != NULL || loop->endgame_handles != NULL; } int uv_loop_alive(const uv_loop_t* loop) { return uv__loop_alive(loop); } int uv_backend_timeout(const uv_loop_t* loop) { if (loop->stop_flag == 0 && /* uv__loop_alive(loop) && */ (uv__has_active_handles(loop) || uv__has_active_reqs(loop)) && loop->pending_reqs_tail == NULL && loop->idle_handles == NULL && loop->endgame_handles == NULL) return uv__next_timeout(loop); return 0; } static void uv__poll_wine(uv_loop_t* loop, DWORD timeout) { DWORD bytes; ULONG_PTR key; OVERLAPPED* overlapped; uv_req_t* req; int repeat; uint64_t timeout_time; uint64_t user_timeout; int reset_timeout; timeout_time = loop->time + timeout; if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } for (repeat = 0; ; repeat++) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); GetQueuedCompletionStatus(loop->iocp, &bytes, &key, &overlapped, timeout); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } /* Placed here because on success the loop will break whether there is an * empty package or not, or if GetQueuedCompletionStatus returned early then * the timeout will be updated and the loop will run again. In either case * the idle time will need to be updated. */ uv__metrics_update_idle_time(loop); if (overlapped) { /* Package was dequeued */ req = uv__overlapped_to_req(overlapped); uv__insert_pending_req(loop, req); /* Some time might have passed waiting for I/O, * so update the loop time here. */ uv_update_time(loop); } else if (GetLastError() != WAIT_TIMEOUT) { /* Serious error */ uv_fatal_error(GetLastError(), "GetQueuedCompletionStatus"); } else if (timeout > 0) { /* GetQueuedCompletionStatus can occasionally return a little early. * Make sure that the desired timeout target time is reached. */ uv_update_time(loop); if (timeout_time > loop->time) { timeout = (DWORD)(timeout_time - loop->time); /* The first call to GetQueuedCompletionStatus should return very * close to the target time and the second should reach it, but * this is not stated in the documentation. 
To make sure a busy * loop cannot happen, the timeout is increased exponentially * starting on the third round. */ timeout += repeat ? (1 << (repeat - 1)) : 0; continue; } } break; } } static void uv__poll(uv_loop_t* loop, DWORD timeout) { BOOL success; uv_req_t* req; OVERLAPPED_ENTRY overlappeds[128]; ULONG count; ULONG i; int repeat; uint64_t timeout_time; uint64_t user_timeout; int reset_timeout; timeout_time = loop->time + timeout; if (uv__get_internal_fields(loop)->flags & UV_METRICS_IDLE_TIME) { reset_timeout = 1; user_timeout = timeout; timeout = 0; } else { reset_timeout = 0; } for (repeat = 0; ; repeat++) { /* Only need to set the provider_entry_time if timeout != 0. The function * will return early if the loop isn't configured with UV_METRICS_IDLE_TIME. */ if (timeout != 0) uv__metrics_set_provider_entry_time(loop); success = pGetQueuedCompletionStatusEx(loop->iocp, overlappeds, ARRAY_SIZE(overlappeds), &count, timeout, FALSE); if (reset_timeout != 0) { timeout = user_timeout; reset_timeout = 0; } /* Placed here because on success the loop will break whether there is an * empty package or not, or if GetQueuedCompletionStatus returned early then * the timeout will be updated and the loop will run again. In either case * the idle time will need to be updated. */ uv__metrics_update_idle_time(loop); if (success) { for (i = 0; i < count; i++) { /* Package was dequeued, but see if it is not a empty package * meant only to wake us up. */ if (overlappeds[i].lpOverlapped) { req = uv__overlapped_to_req(overlappeds[i].lpOverlapped); uv__insert_pending_req(loop, req); } } /* Some time might have passed waiting for I/O, * so update the loop time here. */ uv_update_time(loop); } else if (GetLastError() != WAIT_TIMEOUT) { /* Serious error */ uv_fatal_error(GetLastError(), "GetQueuedCompletionStatusEx"); } else if (timeout > 0) { /* GetQueuedCompletionStatus can occasionally return a little early. * Make sure that the desired timeout target time is reached. */ uv_update_time(loop); if (timeout_time > loop->time) { timeout = (DWORD)(timeout_time - loop->time); /* The first call to GetQueuedCompletionStatus should return very * close to the target time and the second should reach it, but * this is not stated in the documentation. To make sure a busy * loop cannot happen, the timeout is increased exponentially * starting on the third round. */ timeout += repeat ? (1 << (repeat - 1)) : 0; continue; } } break; } } int uv_run(uv_loop_t *loop, uv_run_mode mode) { DWORD timeout; int r; int can_sleep; r = uv__loop_alive(loop); if (!r) uv_update_time(loop); while (r != 0 && loop->stop_flag == 0) { uv_update_time(loop); uv__run_timers(loop); can_sleep = loop->pending_reqs_tail == NULL && loop->idle_handles == NULL; uv__process_reqs(loop); uv__idle_invoke(loop); uv__prepare_invoke(loop); timeout = 0; if ((mode == UV_RUN_ONCE && can_sleep) || mode == UV_RUN_DEFAULT) timeout = uv_backend_timeout(loop); if (pGetQueuedCompletionStatusEx) uv__poll(loop, timeout); else uv__poll_wine(loop, timeout); /* Process immediate callbacks (e.g. write_cb) a small fixed number of * times to avoid loop starvation.*/ for (r = 0; r < 8 && loop->pending_reqs_tail != NULL; r++) uv__process_reqs(loop); /* Run one final update on the provider_idle_time in case uv__poll* * returned because the timeout expired, but no events were received. This * call will be ignored if the provider_entry_time was either never set (if * the timeout == 0) or was already updated b/c an event was received. 
*/ uv__metrics_update_idle_time(loop); uv__check_invoke(loop); uv__process_endgames(loop); if (mode == UV_RUN_ONCE) { /* UV_RUN_ONCE implies forward progress: at least one callback must have * been invoked when it returns. uv__io_poll() can return without doing * I/O (meaning: no callbacks) when its timeout expires - which means we * have pending timers that satisfy the forward progress constraint. * * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from * the check. */ uv_update_time(loop); uv__run_timers(loop); } r = uv__loop_alive(loop); if (mode == UV_RUN_ONCE || mode == UV_RUN_NOWAIT) break; } /* The if statement lets the compiler compile it to a conditional store. * Avoids dirtying a cache line. */ if (loop->stop_flag != 0) loop->stop_flag = 0; return r; } int uv_fileno(const uv_handle_t* handle, uv_os_fd_t* fd) { uv_os_fd_t fd_out; switch (handle->type) { case UV_TCP: fd_out = (uv_os_fd_t)((uv_tcp_t*) handle)->socket; break; case UV_NAMED_PIPE: fd_out = ((uv_pipe_t*) handle)->handle; break; case UV_TTY: fd_out = ((uv_tty_t*) handle)->handle; break; case UV_UDP: fd_out = (uv_os_fd_t)((uv_udp_t*) handle)->socket; break; case UV_POLL: fd_out = (uv_os_fd_t)((uv_poll_t*) handle)->socket; break; default: return UV_EINVAL; } if (uv_is_closing(handle) || fd_out == INVALID_HANDLE_VALUE) return UV_EBADF; *fd = fd_out; return 0; } int uv__socket_sockopt(uv_handle_t* handle, int optname, int* value) { int r; int len; SOCKET socket; if (handle == NULL || value == NULL) return UV_EINVAL; if (handle->type == UV_TCP) socket = ((uv_tcp_t*) handle)->socket; else if (handle->type == UV_UDP) socket = ((uv_udp_t*) handle)->socket; else return UV_ENOTSUP; len = sizeof(*value); if (*value == 0) r = getsockopt(socket, SOL_SOCKET, optname, (char*) value, &len); else r = setsockopt(socket, SOL_SOCKET, optname, (const char*) value, len); if (r == SOCKET_ERROR) return uv_translate_sys_error(WSAGetLastError()); return 0; } int uv_cpumask_size(void) { return (int)(sizeof(DWORD_PTR) * 8); } int uv__getsockpeername(const uv_handle_t* handle, uv__peersockfunc func, struct sockaddr* name, int* namelen, int delayed_error) { int result; uv_os_fd_t fd; result = uv_fileno(handle, &fd); if (result != 0) return result; if (delayed_error) return uv_translate_sys_error(delayed_error); result = func((SOCKET) fd, name, namelen); if (result != 0) return uv_translate_sys_error(WSAGetLastError()); return 0; } gevent-24.11.1/deps/libuv/src/win/detect-wakeup.c000066400000000000000000000043301471441230600215200ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" #include "winapi.h" static void uv__register_system_resume_callback(void); void uv__init_detect_system_wakeup(void) { /* Try registering system power event callback. This is the cleanest * method, but it will only work on Win8 and above. */ uv__register_system_resume_callback(); } static ULONG CALLBACK uv__system_resume_callback(PVOID Context, ULONG Type, PVOID Setting) { if (Type == PBT_APMRESUMESUSPEND || Type == PBT_APMRESUMEAUTOMATIC) uv__wake_all_loops(); return 0; } static void uv__register_system_resume_callback(void) { _DEVICE_NOTIFY_SUBSCRIBE_PARAMETERS recipient; _HPOWERNOTIFY registration_handle; if (pPowerRegisterSuspendResumeNotification == NULL) return; recipient.Callback = uv__system_resume_callback; recipient.Context = NULL; (*pPowerRegisterSuspendResumeNotification)(DEVICE_NOTIFY_CALLBACK, &recipient, ®istration_handle); } gevent-24.11.1/deps/libuv/src/win/dl.c000066400000000000000000000104501471441230600173550ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include "uv.h" #include "internal.h" static int uv__dlerror(uv_lib_t* lib, const char* filename, DWORD errorno); int uv_dlopen(const char* filename, uv_lib_t* lib) { WCHAR filename_w[32768]; lib->handle = NULL; lib->errmsg = NULL; if (!MultiByteToWideChar(CP_UTF8, 0, filename, -1, filename_w, ARRAY_SIZE(filename_w))) { return uv__dlerror(lib, filename, GetLastError()); } lib->handle = LoadLibraryExW(filename_w, NULL, LOAD_WITH_ALTERED_SEARCH_PATH); if (lib->handle == NULL) { return uv__dlerror(lib, filename, GetLastError()); } return 0; } void uv_dlclose(uv_lib_t* lib) { if (lib->errmsg) { LocalFree((void*)lib->errmsg); lib->errmsg = NULL; } if (lib->handle) { /* Ignore errors. No good way to signal them without leaking memory. */ FreeLibrary(lib->handle); lib->handle = NULL; } } int uv_dlsym(uv_lib_t* lib, const char* name, void** ptr) { /* Cast though integer to suppress pedantic warning about forbidden cast. */ *ptr = (void*)(uintptr_t) GetProcAddress(lib->handle, name); return uv__dlerror(lib, "", *ptr ? 0 : GetLastError()); } const char* uv_dlerror(const uv_lib_t* lib) { return lib->errmsg ? 
lib->errmsg : "no error"; } static void uv__format_fallback_error(uv_lib_t* lib, int errorno){ static const CHAR fallback_error[] = "error: %1!d!"; DWORD_PTR args[1]; args[0] = (DWORD_PTR) errorno; FormatMessageA(FORMAT_MESSAGE_FROM_STRING | FORMAT_MESSAGE_ARGUMENT_ARRAY | FORMAT_MESSAGE_ALLOCATE_BUFFER, fallback_error, 0, 0, (LPSTR) &lib->errmsg, 0, (va_list*) args); } static int uv__dlerror(uv_lib_t* lib, const char* filename, DWORD errorno) { DWORD_PTR arg; DWORD res; char* msg; if (lib->errmsg) { LocalFree(lib->errmsg); lib->errmsg = NULL; } if (errorno == 0) return 0; res = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, errorno, MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US), (LPSTR) &lib->errmsg, 0, NULL); if (!res && (GetLastError() == ERROR_MUI_FILE_NOT_FOUND || GetLastError() == ERROR_RESOURCE_TYPE_NOT_FOUND)) { res = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, errorno, 0, (LPSTR) &lib->errmsg, 0, NULL); } if (res && errorno == ERROR_BAD_EXE_FORMAT && strstr(lib->errmsg, "%1")) { msg = lib->errmsg; lib->errmsg = NULL; arg = (DWORD_PTR) filename; res = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_ARGUMENT_ARRAY | FORMAT_MESSAGE_FROM_STRING, msg, 0, 0, (LPSTR) &lib->errmsg, 0, (va_list*) &arg); LocalFree(msg); } if (!res) uv__format_fallback_error(lib, errorno); return -1; } gevent-24.11.1/deps/libuv/src/win/error.c000066400000000000000000000204451471441230600201140ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #include #include "uv.h" #include "internal.h" /* * Display an error message and abort the event loop. */ void uv_fatal_error(const int errorno, const char* syscall) { char* buf = NULL; const char* errmsg; FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, errorno, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPSTR)&buf, 0, NULL); if (buf) { errmsg = buf; } else { errmsg = "Unknown error"; } /* FormatMessage messages include a newline character already, so don't add * another. 
*/ if (syscall) { fprintf(stderr, "%s: (%d) %s", syscall, errorno, errmsg); } else { fprintf(stderr, "(%d) %s", errorno, errmsg); } if (buf) { LocalFree(buf); } DebugBreak(); abort(); } int uv_translate_sys_error(int sys_errno) { if (sys_errno <= 0) { return sys_errno; /* If < 0 then it's already a libuv error. */ } switch (sys_errno) { case ERROR_NOACCESS: return UV_EACCES; case WSAEACCES: return UV_EACCES; case ERROR_ELEVATION_REQUIRED: return UV_EACCES; case ERROR_CANT_ACCESS_FILE: return UV_EACCES; case ERROR_ADDRESS_ALREADY_ASSOCIATED: return UV_EADDRINUSE; case WSAEADDRINUSE: return UV_EADDRINUSE; case WSAEADDRNOTAVAIL: return UV_EADDRNOTAVAIL; case WSAEAFNOSUPPORT: return UV_EAFNOSUPPORT; case WSAEWOULDBLOCK: return UV_EAGAIN; case WSAEALREADY: return UV_EALREADY; case ERROR_INVALID_FLAGS: return UV_EBADF; case ERROR_INVALID_HANDLE: return UV_EBADF; case ERROR_LOCK_VIOLATION: return UV_EBUSY; case ERROR_PIPE_BUSY: return UV_EBUSY; case ERROR_SHARING_VIOLATION: return UV_EBUSY; case ERROR_OPERATION_ABORTED: return UV_ECANCELED; case WSAEINTR: return UV_ECANCELED; case ERROR_NO_UNICODE_TRANSLATION: return UV_ECHARSET; case ERROR_CONNECTION_ABORTED: return UV_ECONNABORTED; case WSAECONNABORTED: return UV_ECONNABORTED; case ERROR_CONNECTION_REFUSED: return UV_ECONNREFUSED; case WSAECONNREFUSED: return UV_ECONNREFUSED; case ERROR_NETNAME_DELETED: return UV_ECONNRESET; case WSAECONNRESET: return UV_ECONNRESET; case ERROR_ALREADY_EXISTS: return UV_EEXIST; case ERROR_FILE_EXISTS: return UV_EEXIST; case ERROR_BUFFER_OVERFLOW: return UV_EFAULT; case WSAEFAULT: return UV_EFAULT; case ERROR_HOST_UNREACHABLE: return UV_EHOSTUNREACH; case WSAEHOSTUNREACH: return UV_EHOSTUNREACH; case ERROR_INSUFFICIENT_BUFFER: return UV_EINVAL; case ERROR_INVALID_DATA: return UV_EINVAL; case ERROR_INVALID_PARAMETER: return UV_EINVAL; case ERROR_SYMLINK_NOT_SUPPORTED: return UV_EINVAL; case WSAEINVAL: return UV_EINVAL; case WSAEPFNOSUPPORT: return UV_EINVAL; case ERROR_BEGINNING_OF_MEDIA: return UV_EIO; case ERROR_BUS_RESET: return UV_EIO; case ERROR_CRC: return UV_EIO; case ERROR_DEVICE_DOOR_OPEN: return UV_EIO; case ERROR_DEVICE_REQUIRES_CLEANING: return UV_EIO; case ERROR_DISK_CORRUPT: return UV_EIO; case ERROR_EOM_OVERFLOW: return UV_EIO; case ERROR_FILEMARK_DETECTED: return UV_EIO; case ERROR_GEN_FAILURE: return UV_EIO; case ERROR_INVALID_BLOCK_LENGTH: return UV_EIO; case ERROR_IO_DEVICE: return UV_EIO; case ERROR_NO_DATA_DETECTED: return UV_EIO; case ERROR_NO_SIGNAL_SENT: return UV_EIO; case ERROR_OPEN_FAILED: return UV_EIO; case ERROR_SETMARK_DETECTED: return UV_EIO; case ERROR_SIGNAL_REFUSED: return UV_EIO; case WSAEISCONN: return UV_EISCONN; case ERROR_CANT_RESOLVE_FILENAME: return UV_ELOOP; case ERROR_TOO_MANY_OPEN_FILES: return UV_EMFILE; case WSAEMFILE: return UV_EMFILE; case WSAEMSGSIZE: return UV_EMSGSIZE; case ERROR_FILENAME_EXCED_RANGE: return UV_ENAMETOOLONG; case ERROR_NETWORK_UNREACHABLE: return UV_ENETUNREACH; case WSAENETUNREACH: return UV_ENETUNREACH; case WSAENOBUFS: return UV_ENOBUFS; case ERROR_BAD_PATHNAME: return UV_ENOENT; case ERROR_DIRECTORY: return UV_ENOENT; case ERROR_ENVVAR_NOT_FOUND: return UV_ENOENT; case ERROR_FILE_NOT_FOUND: return UV_ENOENT; case ERROR_INVALID_NAME: return UV_ENOENT; case ERROR_INVALID_DRIVE: return UV_ENOENT; case ERROR_INVALID_REPARSE_DATA: return UV_ENOENT; case ERROR_MOD_NOT_FOUND: return UV_ENOENT; case ERROR_PATH_NOT_FOUND: return UV_ENOENT; case WSAHOST_NOT_FOUND: return UV_ENOENT; case WSANO_DATA: return UV_ENOENT; case ERROR_NOT_ENOUGH_MEMORY: return 
UV_ENOMEM; case ERROR_OUTOFMEMORY: return UV_ENOMEM; case ERROR_CANNOT_MAKE: return UV_ENOSPC; case ERROR_DISK_FULL: return UV_ENOSPC; case ERROR_EA_TABLE_FULL: return UV_ENOSPC; case ERROR_END_OF_MEDIA: return UV_ENOSPC; case ERROR_HANDLE_DISK_FULL: return UV_ENOSPC; case ERROR_NOT_CONNECTED: return UV_ENOTCONN; case WSAENOTCONN: return UV_ENOTCONN; case ERROR_DIR_NOT_EMPTY: return UV_ENOTEMPTY; case WSAENOTSOCK: return UV_ENOTSOCK; case ERROR_NOT_SUPPORTED: return UV_ENOTSUP; case ERROR_BROKEN_PIPE: return UV_EOF; case ERROR_ACCESS_DENIED: return UV_EPERM; case ERROR_PRIVILEGE_NOT_HELD: return UV_EPERM; case ERROR_BAD_PIPE: return UV_EPIPE; case ERROR_NO_DATA: return UV_EPIPE; case ERROR_PIPE_NOT_CONNECTED: return UV_EPIPE; case WSAESHUTDOWN: return UV_EPIPE; case WSAEPROTONOSUPPORT: return UV_EPROTONOSUPPORT; case ERROR_WRITE_PROTECT: return UV_EROFS; case ERROR_SEM_TIMEOUT: return UV_ETIMEDOUT; case WSAETIMEDOUT: return UV_ETIMEDOUT; case ERROR_NOT_SAME_DEVICE: return UV_EXDEV; case ERROR_INVALID_FUNCTION: return UV_EISDIR; case ERROR_META_EXPANSION_TOO_LONG: return UV_E2BIG; case WSAESOCKTNOSUPPORT: return UV_ESOCKTNOSUPPORT; default: return UV_UNKNOWN; } } gevent-24.11.1/deps/libuv/src/win/fs-event.c000066400000000000000000000431061471441230600205110ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" const unsigned int uv_directory_watcher_buffer_size = 4096; static void uv__fs_event_queue_readdirchanges(uv_loop_t* loop, uv_fs_event_t* handle) { assert(handle->dir_handle != INVALID_HANDLE_VALUE); assert(!handle->req_pending); memset(&(handle->req.u.io.overlapped), 0, sizeof(handle->req.u.io.overlapped)); if (!ReadDirectoryChangesW(handle->dir_handle, handle->buffer, uv_directory_watcher_buffer_size, (handle->flags & UV_FS_EVENT_RECURSIVE) ? TRUE : FALSE, FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SIZE | FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_LAST_ACCESS | FILE_NOTIFY_CHANGE_CREATION | FILE_NOTIFY_CHANGE_SECURITY, NULL, &handle->req.u.io.overlapped, NULL)) { /* Make this req pending reporting an error. 
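 * SET_REQ_ERROR() records GetLastError() in the request and
 * uv__insert_pending_req() queues it, so the failure is delivered through the
 * handle's callback on the next loop iteration rather than synchronously here.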
*/ SET_REQ_ERROR(&handle->req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)&handle->req); } handle->req_pending = 1; } static void uv__relative_path(const WCHAR* filename, const WCHAR* dir, WCHAR** relpath) { size_t relpathlen; size_t filenamelen = wcslen(filename); size_t dirlen = wcslen(dir); assert(!_wcsnicmp(filename, dir, dirlen)); if (dirlen > 0 && dir[dirlen - 1] == '\\') dirlen--; relpathlen = filenamelen - dirlen - 1; *relpath = uv__malloc((relpathlen + 1) * sizeof(WCHAR)); if (!*relpath) uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); wcsncpy(*relpath, filename + dirlen + 1, relpathlen); (*relpath)[relpathlen] = L'\0'; } static int uv__split_path(const WCHAR* filename, WCHAR** dir, WCHAR** file) { size_t len, i; DWORD dir_len; if (filename == NULL) { if (dir != NULL) *dir = NULL; *file = NULL; return 0; } len = wcslen(filename); i = len; while (i > 0 && filename[--i] != '\\' && filename[i] != '/'); if (i == 0) { if (dir) { dir_len = GetCurrentDirectoryW(0, NULL); if (dir_len == 0) { return -1; } *dir = (WCHAR*)uv__malloc(dir_len * sizeof(WCHAR)); if (!*dir) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } if (!GetCurrentDirectoryW(dir_len, *dir)) { uv__free(*dir); *dir = NULL; return -1; } } *file = wcsdup(filename); } else { if (dir) { *dir = (WCHAR*)uv__malloc((i + 2) * sizeof(WCHAR)); if (!*dir) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } wcsncpy(*dir, filename, i + 1); (*dir)[i + 1] = L'\0'; } *file = (WCHAR*)uv__malloc((len - i) * sizeof(WCHAR)); if (!*file) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } wcsncpy(*file, filename + i + 1, len - i - 1); (*file)[len - i - 1] = L'\0'; } return 0; } int uv_fs_event_init(uv_loop_t* loop, uv_fs_event_t* handle) { uv__handle_init(loop, (uv_handle_t*) handle, UV_FS_EVENT); handle->dir_handle = INVALID_HANDLE_VALUE; handle->buffer = NULL; handle->req_pending = 0; handle->filew = NULL; handle->short_filew = NULL; handle->dirw = NULL; UV_REQ_INIT(&handle->req, UV_FS_EVENT_REQ); handle->req.data = handle; return 0; } int uv_fs_event_start(uv_fs_event_t* handle, uv_fs_event_cb cb, const char* path, unsigned int flags) { int name_size, is_path_dir, size; DWORD attr, last_error; WCHAR* dir = NULL, *dir_to_watch, *pathw = NULL; DWORD short_path_buffer_len; WCHAR *short_path_buffer; WCHAR* short_path, *long_path; short_path = NULL; if (uv__is_active(handle)) return UV_EINVAL; handle->cb = cb; handle->path = uv__strdup(path); if (!handle->path) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } uv__handle_start(handle); /* Convert name to UTF16. */ name_size = MultiByteToWideChar(CP_UTF8, 0, path, -1, NULL, 0) * sizeof(WCHAR); pathw = (WCHAR*)uv__malloc(name_size); if (!pathw) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } if (!MultiByteToWideChar(CP_UTF8, 0, path, -1, pathw, name_size / sizeof(WCHAR))) { return uv_translate_sys_error(GetLastError()); } /* Determine whether path is a file or a directory. */ attr = GetFileAttributesW(pathw); if (attr == INVALID_FILE_ATTRIBUTES) { last_error = GetLastError(); goto error; } is_path_dir = (attr & FILE_ATTRIBUTE_DIRECTORY) ? 1 : 0; if (is_path_dir) { /* path is a directory, so that's the directory that we will watch. */ /* Convert to long path. 
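 * GetLongPathNameW is used twice below: once with a NULL buffer to learn the
 * required length, then again to fill the fresh allocation. If either call
 * fails, the original pathw is simply kept as the directory to watch.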
*/ size = GetLongPathNameW(pathw, NULL, 0); if (size) { long_path = (WCHAR*)uv__malloc(size * sizeof(WCHAR)); if (!long_path) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } size = GetLongPathNameW(pathw, long_path, size); if (size) { long_path[size] = '\0'; } else { uv__free(long_path); long_path = NULL; } if (long_path) { uv__free(pathw); pathw = long_path; } } dir_to_watch = pathw; } else { /* * path is a file. So we split path into dir & file parts, and * watch the dir directory. */ /* Convert to short path. */ short_path_buffer = NULL; short_path_buffer_len = GetShortPathNameW(pathw, NULL, 0); if (short_path_buffer_len == 0) { goto short_path_done; } short_path_buffer = uv__malloc(short_path_buffer_len * sizeof(WCHAR)); if (short_path_buffer == NULL) { goto short_path_done; } if (GetShortPathNameW(pathw, short_path_buffer, short_path_buffer_len) == 0) { uv__free(short_path_buffer); short_path_buffer = NULL; } short_path_done: short_path = short_path_buffer; if (uv__split_path(pathw, &dir, &handle->filew) != 0) { last_error = GetLastError(); goto error; } if (uv__split_path(short_path, NULL, &handle->short_filew) != 0) { last_error = GetLastError(); goto error; } dir_to_watch = dir; uv__free(pathw); pathw = NULL; } handle->dir_handle = CreateFileW(dir_to_watch, FILE_LIST_DIRECTORY, FILE_SHARE_READ | FILE_SHARE_DELETE | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OVERLAPPED, NULL); if (dir) { uv__free(dir); dir = NULL; } if (handle->dir_handle == INVALID_HANDLE_VALUE) { last_error = GetLastError(); goto error; } if (CreateIoCompletionPort(handle->dir_handle, handle->loop->iocp, (ULONG_PTR)handle, 0) == NULL) { last_error = GetLastError(); goto error; } if (!handle->buffer) { handle->buffer = (char*)uv__malloc(uv_directory_watcher_buffer_size); } if (!handle->buffer) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } memset(&(handle->req.u.io.overlapped), 0, sizeof(handle->req.u.io.overlapped)); if (!ReadDirectoryChangesW(handle->dir_handle, handle->buffer, uv_directory_watcher_buffer_size, (flags & UV_FS_EVENT_RECURSIVE) ? TRUE : FALSE, FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME | FILE_NOTIFY_CHANGE_ATTRIBUTES | FILE_NOTIFY_CHANGE_SIZE | FILE_NOTIFY_CHANGE_LAST_WRITE | FILE_NOTIFY_CHANGE_LAST_ACCESS | FILE_NOTIFY_CHANGE_CREATION | FILE_NOTIFY_CHANGE_SECURITY, NULL, &handle->req.u.io.overlapped, NULL)) { last_error = GetLastError(); goto error; } assert(is_path_dir ? 
pathw != NULL : pathw == NULL); handle->dirw = pathw; handle->req_pending = 1; return 0; error: if (handle->path) { uv__free(handle->path); handle->path = NULL; } if (handle->filew) { uv__free(handle->filew); handle->filew = NULL; } if (handle->short_filew) { uv__free(handle->short_filew); handle->short_filew = NULL; } uv__free(pathw); if (handle->dir_handle != INVALID_HANDLE_VALUE) { CloseHandle(handle->dir_handle); handle->dir_handle = INVALID_HANDLE_VALUE; } if (handle->buffer) { uv__free(handle->buffer); handle->buffer = NULL; } if (uv__is_active(handle)) uv__handle_stop(handle); uv__free(short_path); return uv_translate_sys_error(last_error); } int uv_fs_event_stop(uv_fs_event_t* handle) { if (!uv__is_active(handle)) return 0; if (handle->dir_handle != INVALID_HANDLE_VALUE) { CloseHandle(handle->dir_handle); handle->dir_handle = INVALID_HANDLE_VALUE; } uv__handle_stop(handle); if (handle->filew) { uv__free(handle->filew); handle->filew = NULL; } if (handle->short_filew) { uv__free(handle->short_filew); handle->short_filew = NULL; } if (handle->path) { uv__free(handle->path); handle->path = NULL; } if (handle->dirw) { uv__free(handle->dirw); handle->dirw = NULL; } return 0; } static int file_info_cmp(WCHAR* str, WCHAR* file_name, size_t file_name_len) { size_t str_len; if (str == NULL) return -1; str_len = wcslen(str); /* Since we only care about equality, return early if the strings aren't the same length */ if (str_len != (file_name_len / sizeof(WCHAR))) return -1; return _wcsnicmp(str, file_name, str_len); } void uv__process_fs_event_req(uv_loop_t* loop, uv_req_t* req, uv_fs_event_t* handle) { FILE_NOTIFY_INFORMATION* file_info; int err, sizew, size; char* filename = NULL; WCHAR* filenamew = NULL; WCHAR* long_filenamew = NULL; DWORD offset = 0; assert(req->type == UV_FS_EVENT_REQ); assert(handle->req_pending); handle->req_pending = 0; /* Don't report any callbacks if: * - We're closing, just push the handle onto the endgame queue * - We are not active, just ignore the callback */ if (!uv__is_active(handle)) { if (handle->flags & UV_HANDLE_CLOSING) { uv__want_endgame(loop, (uv_handle_t*) handle); } return; } file_info = (FILE_NOTIFY_INFORMATION*)(handle->buffer + offset); if (REQ_SUCCESS(req)) { if (req->u.io.overlapped.InternalHigh > 0) { do { file_info = (FILE_NOTIFY_INFORMATION*)((char*)file_info + offset); assert(!filename); assert(!filenamew); assert(!long_filenamew); /* * Fire the event only if we were asked to watch a directory, * or if the filename filter matches. */ if (handle->dirw || file_info_cmp(handle->filew, file_info->FileName, file_info->FileNameLength) == 0 || file_info_cmp(handle->short_filew, file_info->FileName, file_info->FileNameLength) == 0) { if (handle->dirw) { /* * We attempt to resolve the long form of the file name explicitly. * We only do this for file names that might still exist on disk. * If this fails, we use the name given by ReadDirectoryChangesW. * This may be the long form or the 8.3 short name in some cases. */ if (file_info->Action != FILE_ACTION_REMOVED && file_info->Action != FILE_ACTION_RENAMED_OLD_NAME) { /* Construct a full path to the file. */ size = wcslen(handle->dirw) + file_info->FileNameLength / sizeof(WCHAR) + 2; filenamew = (WCHAR*)uv__malloc(size * sizeof(WCHAR)); if (!filenamew) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } _snwprintf(filenamew, size, L"%s\\%.*s", handle->dirw, file_info->FileNameLength / (DWORD)sizeof(WCHAR), file_info->FileName); filenamew[size - 1] = L'\0'; /* Convert to long name. 
*/ size = GetLongPathNameW(filenamew, NULL, 0); if (size) { long_filenamew = (WCHAR*)uv__malloc(size * sizeof(WCHAR)); if (!long_filenamew) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } size = GetLongPathNameW(filenamew, long_filenamew, size); if (size) { long_filenamew[size] = '\0'; } else { uv__free(long_filenamew); long_filenamew = NULL; } } uv__free(filenamew); if (long_filenamew) { /* Get the file name out of the long path. */ uv__relative_path(long_filenamew, handle->dirw, &filenamew); uv__free(long_filenamew); long_filenamew = filenamew; sizew = -1; } else { /* We couldn't get the long filename, use the one reported. */ filenamew = file_info->FileName; sizew = file_info->FileNameLength / sizeof(WCHAR); } } else { /* * Removed or renamed events cannot be resolved to the long form. * We therefore use the name given by ReadDirectoryChangesW. * This may be the long form or the 8.3 short name in some cases. */ filenamew = file_info->FileName; sizew = file_info->FileNameLength / sizeof(WCHAR); } } else { /* We already have the long name of the file, so just use it. */ filenamew = handle->filew; sizew = -1; } /* Convert the filename to utf8. */ uv__convert_utf16_to_utf8(filenamew, sizew, &filename); switch (file_info->Action) { case FILE_ACTION_ADDED: case FILE_ACTION_REMOVED: case FILE_ACTION_RENAMED_OLD_NAME: case FILE_ACTION_RENAMED_NEW_NAME: handle->cb(handle, filename, UV_RENAME, 0); break; case FILE_ACTION_MODIFIED: handle->cb(handle, filename, UV_CHANGE, 0); break; } uv__free(filename); filename = NULL; uv__free(long_filenamew); long_filenamew = NULL; filenamew = NULL; } offset = file_info->NextEntryOffset; } while (offset && !(handle->flags & UV_HANDLE_CLOSING)); } else { handle->cb(handle, NULL, UV_CHANGE, 0); } } else { err = GET_REQ_ERROR(req); handle->cb(handle, NULL, 0, uv_translate_sys_error(err)); } if (handle->flags & UV_HANDLE_CLOSING) { uv__want_endgame(loop, (uv_handle_t*)handle); } else if (uv__is_active(handle)) { uv__fs_event_queue_readdirchanges(loop, handle); } } void uv__fs_event_close(uv_loop_t* loop, uv_fs_event_t* handle) { uv_fs_event_stop(handle); uv__handle_closing(handle); if (!handle->req_pending) { uv__want_endgame(loop, (uv_handle_t*)handle); } } void uv__fs_event_endgame(uv_loop_t* loop, uv_fs_event_t* handle) { if ((handle->flags & UV_HANDLE_CLOSING) && !handle->req_pending) { assert(!(handle->flags & UV_HANDLE_CLOSED)); if (handle->buffer) { uv__free(handle->buffer); handle->buffer = NULL; } uv__handle_close(handle); } } gevent-24.11.1/deps/libuv/src/win/fs-fd-hash-inl.h000066400000000000000000000153461471441230600214740ustar00rootroot00000000000000/* Copyright libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_FS_FD_HASH_INL_H_ #define UV_WIN_FS_FD_HASH_INL_H_ #include "uv.h" #include "internal.h" /* Files are only inserted in uv__fd_hash when the UV_FS_O_FILEMAP flag is * specified. Thus, when uv__fd_hash_get returns true, the file mapping in the * info structure should be used for read/write operations. * * If the file is empty, the mapping field will be set to * INVALID_HANDLE_VALUE. This is not an issue since the file mapping needs to * be created anyway when the file size changes. * * Since file descriptors are sequential integers, the modulo operator is used * as hashing function. For each bucket, a single linked list of arrays is * kept to minimize allocations. A statically allocated memory buffer is kept * for the first array in each bucket. */ #define UV__FD_HASH_SIZE 256 #define UV__FD_HASH_GROUP_SIZE 16 struct uv__fd_info_s { int flags; BOOLEAN is_directory; HANDLE mapping; LARGE_INTEGER size; LARGE_INTEGER current_pos; }; struct uv__fd_hash_entry_s { uv_file fd; struct uv__fd_info_s info; }; struct uv__fd_hash_entry_group_s { struct uv__fd_hash_entry_s entries[UV__FD_HASH_GROUP_SIZE]; struct uv__fd_hash_entry_group_s* next; }; struct uv__fd_hash_bucket_s { size_t size; struct uv__fd_hash_entry_group_s* data; }; static uv_mutex_t uv__fd_hash_mutex; static struct uv__fd_hash_entry_group_s uv__fd_hash_entry_initial[UV__FD_HASH_SIZE * UV__FD_HASH_GROUP_SIZE]; static struct uv__fd_hash_bucket_s uv__fd_hash[UV__FD_HASH_SIZE]; INLINE static void uv__fd_hash_init(void) { size_t i; int err; err = uv_mutex_init(&uv__fd_hash_mutex); if (err) { uv_fatal_error(err, "uv_mutex_init"); } for (i = 0; i < ARRAY_SIZE(uv__fd_hash); ++i) { uv__fd_hash[i].size = 0; uv__fd_hash[i].data = uv__fd_hash_entry_initial + i * UV__FD_HASH_GROUP_SIZE; } } #define FIND_COMMON_VARIABLES \ unsigned i; \ unsigned bucket = fd % ARRAY_SIZE(uv__fd_hash); \ struct uv__fd_hash_entry_s* entry_ptr = NULL; \ struct uv__fd_hash_entry_group_s* group_ptr; \ struct uv__fd_hash_bucket_s* bucket_ptr = &uv__fd_hash[bucket]; #define FIND_IN_GROUP_PTR(group_size) \ do { \ for (i = 0; i < group_size; ++i) { \ if (group_ptr->entries[i].fd == fd) { \ entry_ptr = &group_ptr->entries[i]; \ break; \ } \ } \ } while (0) #define FIND_IN_BUCKET_PTR() \ do { \ size_t first_group_size = bucket_ptr->size % UV__FD_HASH_GROUP_SIZE; \ if (bucket_ptr->size != 0 && first_group_size == 0) \ first_group_size = UV__FD_HASH_GROUP_SIZE; \ group_ptr = bucket_ptr->data; \ FIND_IN_GROUP_PTR(first_group_size); \ for (group_ptr = group_ptr->next; \ group_ptr != NULL && entry_ptr == NULL; \ group_ptr = group_ptr->next) \ FIND_IN_GROUP_PTR(UV__FD_HASH_GROUP_SIZE); \ } while (0) INLINE static int uv__fd_hash_get(int fd, struct uv__fd_info_s* info) { FIND_COMMON_VARIABLES uv_mutex_lock(&uv__fd_hash_mutex); FIND_IN_BUCKET_PTR(); if (entry_ptr != NULL) { *info = entry_ptr->info; } uv_mutex_unlock(&uv__fd_hash_mutex); return entry_ptr != NULL; } INLINE static void uv__fd_hash_add(int fd, struct uv__fd_info_s* info) { FIND_COMMON_VARIABLES uv_mutex_lock(&uv__fd_hash_mutex); FIND_IN_BUCKET_PTR(); if (entry_ptr == NULL) { i = bucket_ptr->size % UV__FD_HASH_GROUP_SIZE; if (bucket_ptr->size != 0 && i == 0) { struct uv__fd_hash_entry_group_s* new_group_ptr = uv__malloc(sizeof(*new_group_ptr)); 
if (new_group_ptr == NULL) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } new_group_ptr->next = bucket_ptr->data; bucket_ptr->data = new_group_ptr; } bucket_ptr->size += 1; entry_ptr = &bucket_ptr->data->entries[i]; entry_ptr->fd = fd; } entry_ptr->info = *info; uv_mutex_unlock(&uv__fd_hash_mutex); } INLINE static int uv__fd_hash_remove(int fd, struct uv__fd_info_s* info) { FIND_COMMON_VARIABLES uv_mutex_lock(&uv__fd_hash_mutex); FIND_IN_BUCKET_PTR(); if (entry_ptr != NULL) { *info = entry_ptr->info; bucket_ptr->size -= 1; i = bucket_ptr->size % UV__FD_HASH_GROUP_SIZE; if (entry_ptr != &bucket_ptr->data->entries[i]) { *entry_ptr = bucket_ptr->data->entries[i]; } if (bucket_ptr->size != 0 && bucket_ptr->size % UV__FD_HASH_GROUP_SIZE == 0) { struct uv__fd_hash_entry_group_s* old_group_ptr = bucket_ptr->data; bucket_ptr->data = old_group_ptr->next; uv__free(old_group_ptr); } } uv_mutex_unlock(&uv__fd_hash_mutex); return entry_ptr != NULL; } #undef FIND_COMMON_VARIABLES #undef FIND_IN_GROUP_PTR #undef FIND_IN_BUCKET_PTR #endif /* UV_WIN_FS_FD_HASH_INL_H_ */ gevent-24.11.1/deps/libuv/src/win/fs.c000066400000000000000000002655231471441230600174030ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include #include #include #include #include #include #include #include #include #include #include "uv.h" #include "internal.h" #include "req-inl.h" #include "handle-inl.h" #include "fs-fd-hash-inl.h" #define UV_FS_FREE_PATHS 0x0002 #define UV_FS_FREE_PTR 0x0008 #define UV_FS_CLEANEDUP 0x0010 #define INIT(subtype) \ do { \ if (req == NULL) \ return UV_EINVAL; \ uv__fs_req_init(loop, req, subtype, cb); \ } \ while (0) #define POST \ do { \ if (cb != NULL) { \ uv__req_register(loop, req); \ uv__work_submit(loop, \ &req->work_req, \ UV__WORK_FAST_IO, \ uv__fs_work, \ uv__fs_done); \ return 0; \ } else { \ uv__fs_work(&req->work_req); \ return req->result; \ } \ } \ while (0) #define SET_REQ_RESULT(req, result_value) \ do { \ req->result = (result_value); \ assert(req->result != -1); \ } while (0) #define SET_REQ_WIN32_ERROR(req, sys_errno) \ do { \ req->sys_errno_ = (sys_errno); \ req->result = uv_translate_sys_error(req->sys_errno_); \ } while (0) #define SET_REQ_UV_ERROR(req, uv_errno, sys_errno) \ do { \ req->result = (uv_errno); \ req->sys_errno_ = (sys_errno); \ } while (0) #define VERIFY_FD(fd, req) \ if (fd == -1) { \ req->result = UV_EBADF; \ req->sys_errno_ = ERROR_INVALID_HANDLE; \ return; \ } #define MILLION ((int64_t) 1000 * 1000) #define BILLION ((int64_t) 1000 * 1000 * 1000) static void uv__filetime_to_timespec(uv_timespec_t *ts, int64_t filetime) { filetime -= 116444736 * BILLION; ts->tv_sec = (long) (filetime / (10 * MILLION)); ts->tv_nsec = (long) ((filetime - ts->tv_sec * 10 * MILLION) * 100U); if (ts->tv_nsec < 0) { ts->tv_sec -= 1; ts->tv_nsec += 1e9; } } #define TIME_T_TO_FILETIME(time, filetime_ptr) \ do { \ int64_t bigtime = ((time) * 10 * MILLION + 116444736 * BILLION); \ (filetime_ptr)->dwLowDateTime = (uint64_t) bigtime & 0xFFFFFFFF; \ (filetime_ptr)->dwHighDateTime = (uint64_t) bigtime >> 32; \ } while(0) #define IS_SLASH(c) ((c) == L'\\' || (c) == L'/') #define IS_LETTER(c) (((c) >= L'a' && (c) <= L'z') || \ ((c) >= L'A' && (c) <= L'Z')) #define MIN(a,b) (((a) < (b)) ? (a) : (b)) const WCHAR JUNCTION_PREFIX[] = L"\\??\\"; const WCHAR JUNCTION_PREFIX_LEN = 4; const WCHAR LONG_PATH_PREFIX[] = L"\\\\?\\"; const WCHAR LONG_PATH_PREFIX_LEN = 4; const WCHAR UNC_PATH_PREFIX[] = L"\\\\?\\UNC\\"; const WCHAR UNC_PATH_PREFIX_LEN = 8; static int uv__file_symlink_usermode_flag = SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE; static DWORD uv__allocation_granularity; void uv__fs_init(void) { SYSTEM_INFO system_info; GetSystemInfo(&system_info); uv__allocation_granularity = system_info.dwAllocationGranularity; uv__fd_hash_init(); } INLINE static int fs__capture_path(uv_fs_t* req, const char* path, const char* new_path, const int copy_path) { char* buf; char* pos; ssize_t buf_sz = 0, path_len = 0, pathw_len = 0, new_pathw_len = 0; /* new_path can only be set if path is also set. 
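 * The single buffer allocated below packs, in order, the UTF-16 copy of path,
 * the UTF-16 copy of new_path (when given), and optionally a narrow copy of
 * path, so later cleanup only needs to release one allocation (tracked by the
 * UV_FS_FREE_PATHS flag).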
*/ assert(new_path == NULL || path != NULL); if (path != NULL) { pathw_len = MultiByteToWideChar(CP_UTF8, 0, path, -1, NULL, 0); if (pathw_len == 0) { return GetLastError(); } buf_sz += pathw_len * sizeof(WCHAR); } if (path != NULL && copy_path) { path_len = 1 + strlen(path); buf_sz += path_len; } if (new_path != NULL) { new_pathw_len = MultiByteToWideChar(CP_UTF8, 0, new_path, -1, NULL, 0); if (new_pathw_len == 0) { return GetLastError(); } buf_sz += new_pathw_len * sizeof(WCHAR); } if (buf_sz == 0) { req->file.pathw = NULL; req->fs.info.new_pathw = NULL; req->path = NULL; return 0; } buf = (char*) uv__malloc(buf_sz); if (buf == NULL) { return ERROR_OUTOFMEMORY; } pos = buf; if (path != NULL) { DWORD r = MultiByteToWideChar(CP_UTF8, 0, path, -1, (WCHAR*) pos, pathw_len); assert(r == (DWORD) pathw_len); req->file.pathw = (WCHAR*) pos; pos += r * sizeof(WCHAR); } else { req->file.pathw = NULL; } if (new_path != NULL) { DWORD r = MultiByteToWideChar(CP_UTF8, 0, new_path, -1, (WCHAR*) pos, new_pathw_len); assert(r == (DWORD) new_pathw_len); req->fs.info.new_pathw = (WCHAR*) pos; pos += r * sizeof(WCHAR); } else { req->fs.info.new_pathw = NULL; } req->path = path; if (path != NULL && copy_path) { memcpy(pos, path, path_len); assert(path_len == buf_sz - (pos - buf)); req->path = pos; } req->flags |= UV_FS_FREE_PATHS; return 0; } INLINE static void uv__fs_req_init(uv_loop_t* loop, uv_fs_t* req, uv_fs_type fs_type, const uv_fs_cb cb) { uv__once_init(); UV_REQ_INIT(req, UV_FS); req->loop = loop; req->flags = 0; req->fs_type = fs_type; req->sys_errno_ = 0; req->result = 0; req->ptr = NULL; req->path = NULL; req->cb = cb; memset(&req->fs, 0, sizeof(req->fs)); } static int fs__wide_to_utf8(WCHAR* w_source_ptr, DWORD w_source_len, char** target_ptr, uint64_t* target_len_ptr) { int r; int target_len; char* target; target_len = WideCharToMultiByte(CP_UTF8, 0, w_source_ptr, w_source_len, NULL, 0, NULL, NULL); if (target_len == 0) { return -1; } if (target_len_ptr != NULL) { *target_len_ptr = target_len; } if (target_ptr == NULL) { return 0; } target = uv__malloc(target_len + 1); if (target == NULL) { SetLastError(ERROR_OUTOFMEMORY); return -1; } r = WideCharToMultiByte(CP_UTF8, 0, w_source_ptr, w_source_len, target, target_len, NULL, NULL); assert(r == target_len); target[target_len] = '\0'; *target_ptr = target; return 0; } INLINE static int fs__readlink_handle(HANDLE handle, char** target_ptr, uint64_t* target_len_ptr) { char buffer[MAXIMUM_REPARSE_DATA_BUFFER_SIZE]; REPARSE_DATA_BUFFER* reparse_data = (REPARSE_DATA_BUFFER*) buffer; WCHAR* w_target; DWORD w_target_len; DWORD bytes; size_t i; size_t len; if (!DeviceIoControl(handle, FSCTL_GET_REPARSE_POINT, NULL, 0, buffer, sizeof buffer, &bytes, NULL)) { return -1; } if (reparse_data->ReparseTag == IO_REPARSE_TAG_SYMLINK) { /* Real symlink */ w_target = reparse_data->SymbolicLinkReparseBuffer.PathBuffer + (reparse_data->SymbolicLinkReparseBuffer.SubstituteNameOffset / sizeof(WCHAR)); w_target_len = reparse_data->SymbolicLinkReparseBuffer.SubstituteNameLength / sizeof(WCHAR); /* Real symlinks can contain pretty much everything, but the only thing we * really care about is undoing the implicit conversion to an NT namespaced * path that CreateSymbolicLink will perform on absolute paths. If the path * is win32-namespaced then the user must have explicitly made it so, and * we better just return the unmodified reparse data. */ if (w_target_len >= 4 && w_target[0] == L'\\' && w_target[1] == L'?' && w_target[2] == L'?' 
&& w_target[3] == L'\\') { /* Starts with \??\ */ if (w_target_len >= 6 && ((w_target[4] >= L'A' && w_target[4] <= L'Z') || (w_target[4] >= L'a' && w_target[4] <= L'z')) && w_target[5] == L':' && (w_target_len == 6 || w_target[6] == L'\\')) { /* \??\:\ */ w_target += 4; w_target_len -= 4; } else if (w_target_len >= 8 && (w_target[4] == L'U' || w_target[4] == L'u') && (w_target[5] == L'N' || w_target[5] == L'n') && (w_target[6] == L'C' || w_target[6] == L'c') && w_target[7] == L'\\') { /* \??\UNC\\\ - make sure the final path looks like * \\\\ */ w_target += 6; w_target[0] = L'\\'; w_target_len -= 6; } } } else if (reparse_data->ReparseTag == IO_REPARSE_TAG_MOUNT_POINT) { /* Junction. */ w_target = reparse_data->MountPointReparseBuffer.PathBuffer + (reparse_data->MountPointReparseBuffer.SubstituteNameOffset / sizeof(WCHAR)); w_target_len = reparse_data->MountPointReparseBuffer.SubstituteNameLength / sizeof(WCHAR); /* Only treat junctions that look like \??\:\ as symlink. Junctions * can also be used as mount points, like \??\Volume{}, but that's * confusing for programs since they wouldn't be able to actually * understand such a path when returned by uv_readlink(). UNC paths are * never valid for junctions so we don't care about them. */ if (!(w_target_len >= 6 && w_target[0] == L'\\' && w_target[1] == L'?' && w_target[2] == L'?' && w_target[3] == L'\\' && ((w_target[4] >= L'A' && w_target[4] <= L'Z') || (w_target[4] >= L'a' && w_target[4] <= L'z')) && w_target[5] == L':' && (w_target_len == 6 || w_target[6] == L'\\'))) { SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } /* Remove leading \??\ */ w_target += 4; w_target_len -= 4; } else if (reparse_data->ReparseTag == IO_REPARSE_TAG_APPEXECLINK) { /* String #3 in the list has the target filename. */ if (reparse_data->AppExecLinkReparseBuffer.StringCount < 3) { SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } w_target = reparse_data->AppExecLinkReparseBuffer.StringList; /* The StringList buffer contains a list of strings separated by "\0", */ /* with "\0\0" terminating the list. Move to the 3rd string in the list: */ for (i = 0; i < 2; ++i) { len = wcslen(w_target); if (len == 0) { SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } w_target += len + 1; } w_target_len = wcslen(w_target); if (w_target_len == 0) { SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } /* Make sure it is an absolute path. */ if (!(w_target_len >= 3 && ((w_target[0] >= L'a' && w_target[0] <= L'z') || (w_target[0] >= L'A' && w_target[0] <= L'Z')) && w_target[1] == L':' && w_target[2] == L'\\')) { SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } } else { /* Reparse tag does not indicate a symlink. */ SetLastError(ERROR_SYMLINK_NOT_SUPPORTED); return -1; } return fs__wide_to_utf8(w_target, w_target_len, target_ptr, target_len_ptr); } void fs__open(uv_fs_t* req) { DWORD access; DWORD share; DWORD disposition; DWORD attributes = 0; HANDLE file; int fd, current_umask; int flags = req->fs.info.file_flags; struct uv__fd_info_s fd_info; /* Adjust flags to be compatible with the memory file mapping. Save the * original flags to emulate the correct behavior. 
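 * For example, a UV_FS_O_WRONLY | UV_FS_O_FILEMAP open is widened to
 * UV_FS_O_RDWR because CreateFileMapping always needs read access, and
 * UV_FS_O_APPEND is cleared here and emulated later by fs__write_filemap,
 * which writes at the remembered end-of-file position.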
*/ if (flags & UV_FS_O_FILEMAP) { fd_info.flags = flags; fd_info.current_pos.QuadPart = 0; if ((flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR)) == UV_FS_O_WRONLY) { /* CreateFileMapping always needs read access */ flags = (flags & ~UV_FS_O_WRONLY) | UV_FS_O_RDWR; } if (flags & UV_FS_O_APPEND) { /* Clear the append flag and ensure RDRW mode */ flags &= ~UV_FS_O_APPEND; flags &= ~(UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR); flags |= UV_FS_O_RDWR; } } /* Obtain the active umask. umask() never fails and returns the previous * umask. */ current_umask = umask(0); umask(current_umask); /* convert flags and mode to CreateFile parameters */ switch (flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR)) { case UV_FS_O_RDONLY: access = FILE_GENERIC_READ; break; case UV_FS_O_WRONLY: access = FILE_GENERIC_WRITE; break; case UV_FS_O_RDWR: access = FILE_GENERIC_READ | FILE_GENERIC_WRITE; break; default: goto einval; } if (flags & UV_FS_O_APPEND) { access &= ~FILE_WRITE_DATA; access |= FILE_APPEND_DATA; } /* * Here is where we deviate significantly from what CRT's _open() * does. We indiscriminately use all the sharing modes, to match * UNIX semantics. In particular, this ensures that the file can * be deleted even whilst it's open, fixing issue * https://github.com/nodejs/node-v0.x-archive/issues/1449. * We still support exclusive sharing mode, since it is necessary * for opening raw block devices, otherwise Windows will prevent * any attempt to write past the master boot record. */ if (flags & UV_FS_O_EXLOCK) { share = 0; } else { share = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE; } switch (flags & (UV_FS_O_CREAT | UV_FS_O_EXCL | UV_FS_O_TRUNC)) { case 0: case UV_FS_O_EXCL: disposition = OPEN_EXISTING; break; case UV_FS_O_CREAT: disposition = OPEN_ALWAYS; break; case UV_FS_O_CREAT | UV_FS_O_EXCL: case UV_FS_O_CREAT | UV_FS_O_TRUNC | UV_FS_O_EXCL: disposition = CREATE_NEW; break; case UV_FS_O_TRUNC: case UV_FS_O_TRUNC | UV_FS_O_EXCL: disposition = TRUNCATE_EXISTING; break; case UV_FS_O_CREAT | UV_FS_O_TRUNC: disposition = CREATE_ALWAYS; break; default: goto einval; } attributes |= FILE_ATTRIBUTE_NORMAL; if (flags & UV_FS_O_CREAT) { if (!((req->fs.info.mode & ~current_umask) & _S_IWRITE)) { attributes |= FILE_ATTRIBUTE_READONLY; } } if (flags & UV_FS_O_TEMPORARY ) { attributes |= FILE_FLAG_DELETE_ON_CLOSE | FILE_ATTRIBUTE_TEMPORARY; access |= DELETE; } if (flags & UV_FS_O_SHORT_LIVED) { attributes |= FILE_ATTRIBUTE_TEMPORARY; } switch (flags & (UV_FS_O_SEQUENTIAL | UV_FS_O_RANDOM)) { case 0: break; case UV_FS_O_SEQUENTIAL: attributes |= FILE_FLAG_SEQUENTIAL_SCAN; break; case UV_FS_O_RANDOM: attributes |= FILE_FLAG_RANDOM_ACCESS; break; default: goto einval; } if (flags & UV_FS_O_DIRECT) { /* * FILE_APPEND_DATA and FILE_FLAG_NO_BUFFERING are mutually exclusive. * Windows returns 87, ERROR_INVALID_PARAMETER if these are combined. * * FILE_APPEND_DATA is included in FILE_GENERIC_WRITE: * * FILE_GENERIC_WRITE = STANDARD_RIGHTS_WRITE | * FILE_WRITE_DATA | * FILE_WRITE_ATTRIBUTES | * FILE_WRITE_EA | * FILE_APPEND_DATA | * SYNCHRONIZE * * Note: Appends are also permitted by FILE_WRITE_DATA. * * In order for direct writes and direct appends to succeed, we therefore * exclude FILE_APPEND_DATA if FILE_WRITE_DATA is specified, and otherwise * fail if the user's sole permission is a direct append, since this * particular combination is invalid. 
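 * Concretely: a plain UV_FS_O_WRONLY | UV_FS_O_DIRECT open still carries
 * FILE_WRITE_DATA, so FILE_APPEND_DATA is silently dropped below, while an
 * append-only UV_FS_O_DIRECT open ends up with only FILE_APPEND_DATA and is
 * rejected with EINVAL.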
*/ if (access & FILE_APPEND_DATA) { if (access & FILE_WRITE_DATA) { access &= ~FILE_APPEND_DATA; } else { goto einval; } } attributes |= FILE_FLAG_NO_BUFFERING; } switch (flags & (UV_FS_O_DSYNC | UV_FS_O_SYNC)) { case 0: break; case UV_FS_O_DSYNC: case UV_FS_O_SYNC: attributes |= FILE_FLAG_WRITE_THROUGH; break; default: goto einval; } /* Setting this flag makes it possible to open a directory. */ attributes |= FILE_FLAG_BACKUP_SEMANTICS; file = CreateFileW(req->file.pathw, access, share, NULL, disposition, attributes, NULL); if (file == INVALID_HANDLE_VALUE) { DWORD error = GetLastError(); if (error == ERROR_FILE_EXISTS && (flags & UV_FS_O_CREAT) && !(flags & UV_FS_O_EXCL)) { /* Special case: when ERROR_FILE_EXISTS happens and UV_FS_O_CREAT was * specified, it means the path referred to a directory. */ SET_REQ_UV_ERROR(req, UV_EISDIR, error); } else { SET_REQ_WIN32_ERROR(req, GetLastError()); } return; } fd = _open_osfhandle((intptr_t) file, flags); if (fd < 0) { /* The only known failure mode for _open_osfhandle() is EMFILE, in which * case GetLastError() will return zero. However we'll try to handle other * errors as well, should they ever occur. */ if (errno == EMFILE) SET_REQ_UV_ERROR(req, UV_EMFILE, ERROR_TOO_MANY_OPEN_FILES); else if (GetLastError() != ERROR_SUCCESS) SET_REQ_WIN32_ERROR(req, GetLastError()); else SET_REQ_WIN32_ERROR(req, (DWORD) UV_UNKNOWN); CloseHandle(file); return; } if (flags & UV_FS_O_FILEMAP) { FILE_STANDARD_INFO file_info; if (!GetFileInformationByHandleEx(file, FileStandardInfo, &file_info, sizeof file_info)) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(file); return; } fd_info.is_directory = file_info.Directory; if (fd_info.is_directory) { fd_info.size.QuadPart = 0; fd_info.mapping = INVALID_HANDLE_VALUE; } else { if (!GetFileSizeEx(file, &fd_info.size)) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(file); return; } if (fd_info.size.QuadPart == 0) { fd_info.mapping = INVALID_HANDLE_VALUE; } else { DWORD flProtect = (fd_info.flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR)) == UV_FS_O_RDONLY ? PAGE_READONLY : PAGE_READWRITE; fd_info.mapping = CreateFileMapping(file, NULL, flProtect, fd_info.size.HighPart, fd_info.size.LowPart, NULL); if (fd_info.mapping == NULL) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(file); return; } } } uv__fd_hash_add(fd, &fd_info); } SET_REQ_RESULT(req, fd); return; einval: SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); } void fs__close(uv_fs_t* req) { int fd = req->file.fd; int result; struct uv__fd_info_s fd_info; VERIFY_FD(fd, req); if (uv__fd_hash_remove(fd, &fd_info)) { if (fd_info.mapping != INVALID_HANDLE_VALUE) { CloseHandle(fd_info.mapping); } } if (fd > 2) result = _close(fd); else result = 0; /* _close doesn't set _doserrno on failure, but it does always set errno * to EBADF on failure. 
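 * That is why the error branch below only asserts errno == EBADF and maps the
 * failure to UV_EBADF itself instead of consulting _doserrno.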
*/ if (result == -1) { assert(errno == EBADF); SET_REQ_UV_ERROR(req, UV_EBADF, ERROR_INVALID_HANDLE); } else { SET_REQ_RESULT(req, 0); } } LONG fs__filemap_ex_filter(LONG excode, PEXCEPTION_POINTERS pep, int* perror) { if (excode != (LONG)EXCEPTION_IN_PAGE_ERROR) { return EXCEPTION_CONTINUE_SEARCH; } assert(perror != NULL); if (pep != NULL && pep->ExceptionRecord != NULL && pep->ExceptionRecord->NumberParameters >= 3) { NTSTATUS status = (NTSTATUS)pep->ExceptionRecord->ExceptionInformation[3]; *perror = pRtlNtStatusToDosError(status); if (*perror != ERROR_SUCCESS) { return EXCEPTION_EXECUTE_HANDLER; } } *perror = UV_UNKNOWN; return EXCEPTION_EXECUTE_HANDLER; } void fs__read_filemap(uv_fs_t* req, struct uv__fd_info_s* fd_info) { int fd = req->file.fd; /* VERIFY_FD done in fs__read */ int rw_flags = fd_info->flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR); size_t read_size, done_read; unsigned int index; LARGE_INTEGER pos, end_pos; size_t view_offset; LARGE_INTEGER view_base; void* view; if (rw_flags == UV_FS_O_WRONLY) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_FLAGS); return; } if (fd_info->is_directory) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_FUNCTION); return; } if (req->fs.info.offset == -1) { pos = fd_info->current_pos; } else { pos.QuadPart = req->fs.info.offset; } /* Make sure we wont read past EOF. */ if (pos.QuadPart >= fd_info->size.QuadPart) { SET_REQ_RESULT(req, 0); return; } read_size = 0; for (index = 0; index < req->fs.info.nbufs; ++index) { read_size += req->fs.info.bufs[index].len; } read_size = (size_t) MIN((LONGLONG) read_size, fd_info->size.QuadPart - pos.QuadPart); if (read_size == 0) { SET_REQ_RESULT(req, 0); return; } end_pos.QuadPart = pos.QuadPart + read_size; view_offset = pos.QuadPart % uv__allocation_granularity; view_base.QuadPart = pos.QuadPart - view_offset; view = MapViewOfFile(fd_info->mapping, FILE_MAP_READ, view_base.HighPart, view_base.LowPart, view_offset + read_size); if (view == NULL) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } done_read = 0; for (index = 0; index < req->fs.info.nbufs && done_read < read_size; ++index) { size_t this_read_size = MIN(req->fs.info.bufs[index].len, read_size - done_read); #ifdef _MSC_VER int err = 0; __try { #endif memcpy(req->fs.info.bufs[index].base, (char*)view + view_offset + done_read, this_read_size); #ifdef _MSC_VER } __except (fs__filemap_ex_filter(GetExceptionCode(), GetExceptionInformation(), &err)) { SET_REQ_WIN32_ERROR(req, err); UnmapViewOfFile(view); return; } #endif done_read += this_read_size; } assert(done_read == read_size); if (!UnmapViewOfFile(view)) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } if (req->fs.info.offset == -1) { fd_info->current_pos = end_pos; uv__fd_hash_add(fd, fd_info); } SET_REQ_RESULT(req, read_size); return; } void fs__read(uv_fs_t* req) { int fd = req->file.fd; int64_t offset = req->fs.info.offset; HANDLE handle; OVERLAPPED overlapped, *overlapped_ptr; LARGE_INTEGER offset_; DWORD bytes; DWORD error; int result; unsigned int index; LARGE_INTEGER original_position; LARGE_INTEGER zero_offset; int restore_position; struct uv__fd_info_s fd_info; VERIFY_FD(fd, req); if (uv__fd_hash_get(fd, &fd_info)) { fs__read_filemap(req, &fd_info); return; } zero_offset.QuadPart = 0; restore_position = 0; handle = uv__get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_HANDLE); return; } if (offset != -1) { memset(&overlapped, 0, sizeof overlapped); overlapped_ptr = &overlapped; if (SetFilePointerEx(handle, zero_offset, 
&original_position, FILE_CURRENT)) { restore_position = 1; } } else { overlapped_ptr = NULL; } index = 0; bytes = 0; do { DWORD incremental_bytes; if (offset != -1) { offset_.QuadPart = offset + bytes; overlapped.Offset = offset_.LowPart; overlapped.OffsetHigh = offset_.HighPart; } result = ReadFile(handle, req->fs.info.bufs[index].base, req->fs.info.bufs[index].len, &incremental_bytes, overlapped_ptr); bytes += incremental_bytes; ++index; } while (result && index < req->fs.info.nbufs); if (restore_position) SetFilePointerEx(handle, original_position, NULL, FILE_BEGIN); if (result || bytes > 0) { SET_REQ_RESULT(req, bytes); } else { error = GetLastError(); if (error == ERROR_ACCESS_DENIED) { error = ERROR_INVALID_FLAGS; } if (error == ERROR_HANDLE_EOF || error == ERROR_BROKEN_PIPE) { SET_REQ_RESULT(req, bytes); } else { SET_REQ_WIN32_ERROR(req, error); } } } void fs__write_filemap(uv_fs_t* req, HANDLE file, struct uv__fd_info_s* fd_info) { int fd = req->file.fd; /* VERIFY_FD done in fs__write */ int force_append = fd_info->flags & UV_FS_O_APPEND; int rw_flags = fd_info->flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR); size_t write_size, done_write; unsigned int index; LARGE_INTEGER pos, end_pos; size_t view_offset; LARGE_INTEGER view_base; void* view; FILETIME ft; if (rw_flags == UV_FS_O_RDONLY) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_FLAGS); return; } if (fd_info->is_directory) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_FUNCTION); return; } write_size = 0; for (index = 0; index < req->fs.info.nbufs; ++index) { write_size += req->fs.info.bufs[index].len; } if (write_size == 0) { SET_REQ_RESULT(req, 0); return; } if (force_append) { pos = fd_info->size; } else if (req->fs.info.offset == -1) { pos = fd_info->current_pos; } else { pos.QuadPart = req->fs.info.offset; } end_pos.QuadPart = pos.QuadPart + write_size; /* Recreate the mapping to enlarge the file if needed */ if (end_pos.QuadPart > fd_info->size.QuadPart) { if (fd_info->mapping != INVALID_HANDLE_VALUE) { CloseHandle(fd_info->mapping); } fd_info->mapping = CreateFileMapping(file, NULL, PAGE_READWRITE, end_pos.HighPart, end_pos.LowPart, NULL); if (fd_info->mapping == NULL) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(file); fd_info->mapping = INVALID_HANDLE_VALUE; fd_info->size.QuadPart = 0; fd_info->current_pos.QuadPart = 0; uv__fd_hash_add(fd, fd_info); return; } fd_info->size = end_pos; uv__fd_hash_add(fd, fd_info); } view_offset = pos.QuadPart % uv__allocation_granularity; view_base.QuadPart = pos.QuadPart - view_offset; view = MapViewOfFile(fd_info->mapping, FILE_MAP_WRITE, view_base.HighPart, view_base.LowPart, view_offset + write_size); if (view == NULL) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } done_write = 0; for (index = 0; index < req->fs.info.nbufs; ++index) { #ifdef _MSC_VER int err = 0; __try { #endif memcpy((char*)view + view_offset + done_write, req->fs.info.bufs[index].base, req->fs.info.bufs[index].len); #ifdef _MSC_VER } __except (fs__filemap_ex_filter(GetExceptionCode(), GetExceptionInformation(), &err)) { SET_REQ_WIN32_ERROR(req, err); UnmapViewOfFile(view); return; } #endif done_write += req->fs.info.bufs[index].len; } assert(done_write == write_size); if (!FlushViewOfFile(view, 0)) { SET_REQ_WIN32_ERROR(req, GetLastError()); UnmapViewOfFile(view); return; } if (!UnmapViewOfFile(view)) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } if (req->fs.info.offset == -1) { fd_info->current_pos = end_pos; uv__fd_hash_add(fd, fd_info); } GetSystemTimeAsFileTime(&ft); SetFileTime(file, 
NULL, NULL, &ft); SET_REQ_RESULT(req, done_write); } void fs__write(uv_fs_t* req) { int fd = req->file.fd; int64_t offset = req->fs.info.offset; HANDLE handle; OVERLAPPED overlapped, *overlapped_ptr; LARGE_INTEGER offset_; DWORD bytes; DWORD error; int result; unsigned int index; LARGE_INTEGER original_position; LARGE_INTEGER zero_offset; int restore_position; struct uv__fd_info_s fd_info; VERIFY_FD(fd, req); zero_offset.QuadPart = 0; restore_position = 0; handle = uv__get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_HANDLE); return; } if (uv__fd_hash_get(fd, &fd_info)) { fs__write_filemap(req, handle, &fd_info); return; } if (offset != -1) { memset(&overlapped, 0, sizeof overlapped); overlapped_ptr = &overlapped; if (SetFilePointerEx(handle, zero_offset, &original_position, FILE_CURRENT)) { restore_position = 1; } } else { overlapped_ptr = NULL; } index = 0; bytes = 0; do { DWORD incremental_bytes; if (offset != -1) { offset_.QuadPart = offset + bytes; overlapped.Offset = offset_.LowPart; overlapped.OffsetHigh = offset_.HighPart; } result = WriteFile(handle, req->fs.info.bufs[index].base, req->fs.info.bufs[index].len, &incremental_bytes, overlapped_ptr); bytes += incremental_bytes; ++index; } while (result && index < req->fs.info.nbufs); if (restore_position) SetFilePointerEx(handle, original_position, NULL, FILE_BEGIN); if (result || bytes > 0) { SET_REQ_RESULT(req, bytes); } else { error = GetLastError(); if (error == ERROR_ACCESS_DENIED) { error = ERROR_INVALID_FLAGS; } SET_REQ_WIN32_ERROR(req, error); } } void fs__rmdir(uv_fs_t* req) { int result = _wrmdir(req->file.pathw); if (result == -1) SET_REQ_WIN32_ERROR(req, _doserrno); else SET_REQ_RESULT(req, 0); } void fs__unlink(uv_fs_t* req) { const WCHAR* pathw = req->file.pathw; HANDLE handle; BY_HANDLE_FILE_INFORMATION info; FILE_DISPOSITION_INFORMATION disposition; IO_STATUS_BLOCK iosb; NTSTATUS status; handle = CreateFileW(pathw, FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES | DELETE, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS, NULL); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } if (!GetFileInformationByHandle(handle, &info)) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(handle); return; } if (info.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) { /* Do not allow deletion of directories, unless it is a symlink. When the * path refers to a non-symlink directory, report EPERM as mandated by * POSIX.1. */ /* Check if it is a reparse point. If it's not, it's a normal directory. */ if (!(info.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT)) { SET_REQ_WIN32_ERROR(req, ERROR_ACCESS_DENIED); CloseHandle(handle); return; } /* Read the reparse point and check if it is a valid symlink. If not, don't * unlink. 
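* Both out-parameters of fs__readlink_handle() are NULL below because only the success or failure of the probe matters here, not the link target itself.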
*/ if (fs__readlink_handle(handle, NULL, NULL) < 0) { DWORD error = GetLastError(); if (error == ERROR_SYMLINK_NOT_SUPPORTED) error = ERROR_ACCESS_DENIED; SET_REQ_WIN32_ERROR(req, error); CloseHandle(handle); return; } } if (info.dwFileAttributes & FILE_ATTRIBUTE_READONLY) { /* Remove read-only attribute */ FILE_BASIC_INFORMATION basic = { 0 }; basic.FileAttributes = (info.dwFileAttributes & ~FILE_ATTRIBUTE_READONLY) | FILE_ATTRIBUTE_ARCHIVE; status = pNtSetInformationFile(handle, &iosb, &basic, sizeof basic, FileBasicInformation); if (!NT_SUCCESS(status)) { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(status)); CloseHandle(handle); return; } } /* Try to set the delete flag. */ disposition.DeleteFile = TRUE; status = pNtSetInformationFile(handle, &iosb, &disposition, sizeof disposition, FileDispositionInformation); if (NT_SUCCESS(status)) { SET_REQ_SUCCESS(req); } else { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(status)); } CloseHandle(handle); } void fs__mkdir(uv_fs_t* req) { /* TODO: use req->mode. */ if (CreateDirectoryW(req->file.pathw, NULL)) { SET_REQ_RESULT(req, 0); } else { SET_REQ_WIN32_ERROR(req, GetLastError()); if (req->sys_errno_ == ERROR_INVALID_NAME || req->sys_errno_ == ERROR_DIRECTORY) req->result = UV_EINVAL; } } typedef int (*uv__fs_mktemp_func)(uv_fs_t* req); /* OpenBSD original: lib/libc/stdio/mktemp.c */ void fs__mktemp(uv_fs_t* req, uv__fs_mktemp_func func) { static const WCHAR *tempchars = L"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; static const size_t num_chars = 62; static const size_t num_x = 6; WCHAR *cp, *ep; unsigned int tries, i; size_t len; uint64_t v; char* path; path = (char*)req->path; len = wcslen(req->file.pathw); ep = req->file.pathw + len; if (len < num_x || wcsncmp(ep - num_x, L"XXXXXX", num_x)) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); goto clobber; } tries = TMP_MAX; do { if (uv__random_rtlgenrandom((void *)&v, sizeof(v)) < 0) { SET_REQ_UV_ERROR(req, UV_EIO, ERROR_IO_DEVICE); goto clobber; } cp = ep - num_x; for (i = 0; i < num_x; i++) { *cp++ = tempchars[v % num_chars]; v /= num_chars; } if (func(req)) { if (req->result >= 0) { len = strlen(path); wcstombs(path + len - num_x, ep - num_x, num_x); } return; } } while (--tries); SET_REQ_WIN32_ERROR(req, GetLastError()); clobber: path[0] = '\0'; } static int fs__mkdtemp_func(uv_fs_t* req) { DWORD error; if (CreateDirectoryW(req->file.pathw, NULL)) { SET_REQ_RESULT(req, 0); return 1; } error = GetLastError(); if (error != ERROR_ALREADY_EXISTS) { SET_REQ_WIN32_ERROR(req, error); return 1; } return 0; } void fs__mkdtemp(uv_fs_t* req) { fs__mktemp(req, fs__mkdtemp_func); } static int fs__mkstemp_func(uv_fs_t* req) { HANDLE file; int fd; file = CreateFileW(req->file.pathw, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL); if (file == INVALID_HANDLE_VALUE) { DWORD error; error = GetLastError(); /* If the file exists, the main fs__mktemp() function will retry. If it's another error, we want to stop. */ if (error != ERROR_FILE_EXISTS) { SET_REQ_WIN32_ERROR(req, error); return 1; } return 0; } fd = _open_osfhandle((intptr_t) file, 0); if (fd < 0) { /* The only known failure mode for _open_osfhandle() is EMFILE, in which * case GetLastError() will return zero. However we'll try to handle other * errors as well, should they ever occur. 
*/ if (errno == EMFILE) SET_REQ_UV_ERROR(req, UV_EMFILE, ERROR_TOO_MANY_OPEN_FILES); else if (GetLastError() != ERROR_SUCCESS) SET_REQ_WIN32_ERROR(req, GetLastError()); else SET_REQ_WIN32_ERROR(req, UV_UNKNOWN); CloseHandle(file); return 1; } SET_REQ_RESULT(req, fd); return 1; } void fs__mkstemp(uv_fs_t* req) { fs__mktemp(req, fs__mkstemp_func); } void fs__scandir(uv_fs_t* req) { static const size_t dirents_initial_size = 32; HANDLE dir_handle = INVALID_HANDLE_VALUE; uv__dirent_t** dirents = NULL; size_t dirents_size = 0; size_t dirents_used = 0; IO_STATUS_BLOCK iosb; NTSTATUS status; /* Buffer to hold directory entries returned by NtQueryDirectoryFile. * It's important that this buffer can hold at least one entry, regardless * of the length of the file names present in the enumerated directory. * A file name is at most 256 WCHARs long. * According to MSDN, the buffer must be aligned at an 8-byte boundary. */ #if _MSC_VER __declspec(align(8)) char buffer[8192]; #else __attribute__ ((aligned (8))) char buffer[8192]; #endif STATIC_ASSERT(sizeof buffer >= sizeof(FILE_DIRECTORY_INFORMATION) + 256 * sizeof(WCHAR)); /* Open the directory. */ dir_handle = CreateFileW(req->file.pathw, FILE_LIST_DIRECTORY | SYNCHRONIZE, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); if (dir_handle == INVALID_HANDLE_VALUE) goto win32_error; /* Read the first chunk. */ status = pNtQueryDirectoryFile(dir_handle, NULL, NULL, NULL, &iosb, &buffer, sizeof buffer, FileDirectoryInformation, FALSE, NULL, TRUE); /* If the handle is not a directory, we'll get STATUS_INVALID_PARAMETER. * This should be reported back as UV_ENOTDIR. */ if (status == (NTSTATUS)STATUS_INVALID_PARAMETER) goto not_a_directory_error; while (NT_SUCCESS(status)) { char* position = buffer; size_t next_entry_offset = 0; do { FILE_DIRECTORY_INFORMATION* info; uv__dirent_t* dirent; size_t wchar_len; size_t utf8_len; /* Obtain a pointer to the current directory entry. */ position += next_entry_offset; info = (FILE_DIRECTORY_INFORMATION*) position; /* Fetch the offset to the next directory entry. */ next_entry_offset = info->NextEntryOffset; /* Compute the length of the filename in WCHARs. */ wchar_len = info->FileNameLength / sizeof info->FileName[0]; /* Skip over '.' and '..' entries. It has been reported that * the SharePoint driver includes the terminating zero byte in * the filename length. Strip those first. */ while (wchar_len > 0 && info->FileName[wchar_len - 1] == L'\0') wchar_len -= 1; if (wchar_len == 0) continue; if (wchar_len == 1 && info->FileName[0] == L'.') continue; if (wchar_len == 2 && info->FileName[0] == L'.' && info->FileName[1] == L'.') continue; /* Compute the space required to store the filename as UTF-8. */ utf8_len = WideCharToMultiByte( CP_UTF8, 0, &info->FileName[0], wchar_len, NULL, 0, NULL, NULL); if (utf8_len == 0) goto win32_error; /* Resize the dirent array if needed. */ if (dirents_used >= dirents_size) { size_t new_dirents_size = dirents_size == 0 ? dirents_initial_size : dirents_size << 1; uv__dirent_t** new_dirents = uv__realloc(dirents, new_dirents_size * sizeof *dirents); if (new_dirents == NULL) goto out_of_memory_error; dirents_size = new_dirents_size; dirents = new_dirents; } /* Allocate space for the uv dirent structure. The dirent structure * includes room for the first character of the filename, but `utf8_len` * doesn't count the NULL terminator at this point. 
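* Since the struct already reserves space for one character of the name, allocating sizeof *dirent + utf8_len leaves exactly one spare byte for the terminating '\0' written after the conversion below.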
*/ dirent = uv__malloc(sizeof *dirent + utf8_len); if (dirent == NULL) goto out_of_memory_error; dirents[dirents_used++] = dirent; /* Convert file name to UTF-8. */ if (WideCharToMultiByte(CP_UTF8, 0, &info->FileName[0], wchar_len, &dirent->d_name[0], utf8_len, NULL, NULL) == 0) goto win32_error; /* Add a null terminator to the filename. */ dirent->d_name[utf8_len] = '\0'; /* Fill out the type field. */ if (info->FileAttributes & FILE_ATTRIBUTE_DEVICE) dirent->d_type = UV__DT_CHAR; else if (info->FileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) dirent->d_type = UV__DT_LINK; else if (info->FileAttributes & FILE_ATTRIBUTE_DIRECTORY) dirent->d_type = UV__DT_DIR; else dirent->d_type = UV__DT_FILE; } while (next_entry_offset != 0); /* Read the next chunk. */ status = pNtQueryDirectoryFile(dir_handle, NULL, NULL, NULL, &iosb, &buffer, sizeof buffer, FileDirectoryInformation, FALSE, NULL, FALSE); /* After the first pNtQueryDirectoryFile call, the function may return * STATUS_SUCCESS even if the buffer was too small to hold at least one * directory entry. */ if (status == STATUS_SUCCESS && iosb.Information == 0) status = STATUS_BUFFER_OVERFLOW; } if (status != STATUS_NO_MORE_FILES) goto nt_error; CloseHandle(dir_handle); /* Store the result in the request object. */ req->ptr = dirents; if (dirents != NULL) req->flags |= UV_FS_FREE_PTR; SET_REQ_RESULT(req, dirents_used); /* `nbufs` will be used as index by uv_fs_scandir_next. */ req->fs.info.nbufs = 0; return; nt_error: SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(status)); goto cleanup; win32_error: SET_REQ_WIN32_ERROR(req, GetLastError()); goto cleanup; not_a_directory_error: SET_REQ_UV_ERROR(req, UV_ENOTDIR, ERROR_DIRECTORY); goto cleanup; out_of_memory_error: SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); goto cleanup; cleanup: if (dir_handle != INVALID_HANDLE_VALUE) CloseHandle(dir_handle); while (dirents_used > 0) uv__free(dirents[--dirents_used]); if (dirents != NULL) uv__free(dirents); } void fs__opendir(uv_fs_t* req) { WCHAR* pathw; size_t len; const WCHAR* fmt; WCHAR* find_path; uv_dir_t* dir; pathw = req->file.pathw; dir = NULL; find_path = NULL; /* Figure out whether path is a file or a directory. 
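* Note that GetFileAttributesW() returns INVALID_FILE_ATTRIBUTES (all bits set) on failure, so a nonexistent path slips past this directory test and the error is reported by the calls further down instead.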
*/ if (!(GetFileAttributesW(pathw) & FILE_ATTRIBUTE_DIRECTORY)) { SET_REQ_UV_ERROR(req, UV_ENOTDIR, ERROR_DIRECTORY); goto error; } dir = uv__malloc(sizeof(*dir)); if (dir == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); goto error; } len = wcslen(pathw); if (len == 0) fmt = L"./*"; else if (IS_SLASH(pathw[len - 1])) fmt = L"%s*"; else fmt = L"%s\\*"; find_path = uv__malloc(sizeof(WCHAR) * (len + 4)); if (find_path == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); goto error; } _snwprintf(find_path, len + 3, fmt, pathw); dir->dir_handle = FindFirstFileW(find_path, &dir->find_data); uv__free(find_path); find_path = NULL; if (dir->dir_handle == INVALID_HANDLE_VALUE && GetLastError() != ERROR_FILE_NOT_FOUND) { SET_REQ_WIN32_ERROR(req, GetLastError()); goto error; } dir->need_find_call = FALSE; req->ptr = dir; SET_REQ_RESULT(req, 0); return; error: uv__free(dir); uv__free(find_path); req->ptr = NULL; } void fs__readdir(uv_fs_t* req) { uv_dir_t* dir; uv_dirent_t* dirents; uv__dirent_t dent; unsigned int dirent_idx; PWIN32_FIND_DATAW find_data; unsigned int i; int r; req->flags |= UV_FS_FREE_PTR; dir = req->ptr; dirents = dir->dirents; memset(dirents, 0, dir->nentries * sizeof(*dir->dirents)); find_data = &dir->find_data; dirent_idx = 0; while (dirent_idx < dir->nentries) { if (dir->need_find_call && FindNextFileW(dir->dir_handle, find_data) == 0) { if (GetLastError() == ERROR_NO_MORE_FILES) break; goto error; } /* Skip "." and ".." entries. */ if (find_data->cFileName[0] == L'.' && (find_data->cFileName[1] == L'\0' || (find_data->cFileName[1] == L'.' && find_data->cFileName[2] == L'\0'))) { dir->need_find_call = TRUE; continue; } r = uv__convert_utf16_to_utf8((const WCHAR*) &find_data->cFileName, -1, (char**) &dirents[dirent_idx].name); if (r != 0) goto error; /* Copy file type. */ if ((find_data->dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0) dent.d_type = UV__DT_DIR; else if ((find_data->dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0) dent.d_type = UV__DT_LINK; else if ((find_data->dwFileAttributes & FILE_ATTRIBUTE_DEVICE) != 0) dent.d_type = UV__DT_CHAR; else dent.d_type = UV__DT_FILE; dirents[dirent_idx].type = uv__fs_get_dirent_type(&dent); dir->need_find_call = TRUE; ++dirent_idx; } SET_REQ_RESULT(req, dirent_idx); return; error: SET_REQ_WIN32_ERROR(req, GetLastError()); for (i = 0; i < dirent_idx; ++i) { uv__free((char*) dirents[i].name); dirents[i].name = NULL; } } void fs__closedir(uv_fs_t* req) { uv_dir_t* dir; dir = req->ptr; FindClose(dir->dir_handle); uv__free(req->ptr); SET_REQ_RESULT(req, 0); } INLINE static int fs__stat_handle(HANDLE handle, uv_stat_t* statbuf, int do_lstat) { FILE_ALL_INFORMATION file_info; FILE_FS_VOLUME_INFORMATION volume_info; NTSTATUS nt_status; IO_STATUS_BLOCK io_status; nt_status = pNtQueryInformationFile(handle, &io_status, &file_info, sizeof file_info, FileAllInformation); /* Buffer overflow (a warning status code) is expected here. */ if (NT_ERROR(nt_status)) { SetLastError(pRtlNtStatusToDosError(nt_status)); return -1; } nt_status = pNtQueryVolumeInformationFile(handle, &io_status, &volume_info, sizeof volume_info, FileFsVolumeInformation); /* Buffer overflow (a warning status code) is expected here. */ if (io_status.Status == STATUS_NOT_IMPLEMENTED) { statbuf->st_dev = 0; } else if (NT_ERROR(nt_status)) { SetLastError(pRtlNtStatusToDosError(nt_status)); return -1; } else { statbuf->st_dev = volume_info.VolumeSerialNumber; } /* Todo: st_mode should probably always be 0666 for everyone. 
We might also * want to report 0777 if the file is a .exe or a directory. * * Currently it's based on whether the 'readonly' attribute is set, which * makes little sense because the semantics are so different: the 'read-only' * flag is just a way for a user to protect against accidental deletion, and * serves no security purpose. Windows uses ACLs for that. * * Also people now use uv_fs_chmod() to take away the writable bit for good * reasons. Windows however just makes the file read-only, which makes it * impossible to delete the file afterwards, since read-only files can't be * deleted. * * IOW it's all just a clusterfuck and we should think of something that * makes slightly more sense. * * And uv_fs_chmod should probably just fail on windows or be a total no-op. * There's nothing sensible it can do anyway. */ statbuf->st_mode = 0; /* * On Windows, FILE_ATTRIBUTE_REPARSE_POINT is a general purpose mechanism * by which filesystem drivers can intercept and alter file system requests. * * The only reparse points we care about are symlinks and mount points, both * of which are treated as POSIX symlinks. Further, we only care when * invoked via lstat, which seeks information about the link instead of its * target. Otherwise, reparse points must be treated as regular files. */ if (do_lstat && (file_info.BasicInformation.FileAttributes & FILE_ATTRIBUTE_REPARSE_POINT)) { /* * If reading the link fails, the reparse point is not a symlink and needs * to be treated as a regular file. The higher level lstat function will * detect this failure and retry without do_lstat if appropriate. */ if (fs__readlink_handle(handle, NULL, &statbuf->st_size) != 0) return -1; statbuf->st_mode |= S_IFLNK; } if (statbuf->st_mode == 0) { if (file_info.BasicInformation.FileAttributes & FILE_ATTRIBUTE_DIRECTORY) { statbuf->st_mode |= _S_IFDIR; statbuf->st_size = 0; } else { statbuf->st_mode |= _S_IFREG; statbuf->st_size = file_info.StandardInformation.EndOfFile.QuadPart; } } if (file_info.BasicInformation.FileAttributes & FILE_ATTRIBUTE_READONLY) statbuf->st_mode |= _S_IREAD | (_S_IREAD >> 3) | (_S_IREAD >> 6); else statbuf->st_mode |= (_S_IREAD | _S_IWRITE) | ((_S_IREAD | _S_IWRITE) >> 3) | ((_S_IREAD | _S_IWRITE) >> 6); uv__filetime_to_timespec(&statbuf->st_atim, file_info.BasicInformation.LastAccessTime.QuadPart); uv__filetime_to_timespec(&statbuf->st_ctim, file_info.BasicInformation.ChangeTime.QuadPart); uv__filetime_to_timespec(&statbuf->st_mtim, file_info.BasicInformation.LastWriteTime.QuadPart); uv__filetime_to_timespec(&statbuf->st_birthtim, file_info.BasicInformation.CreationTime.QuadPart); statbuf->st_ino = file_info.InternalInformation.IndexNumber.QuadPart; /* st_blocks contains the on-disk allocation size in 512-byte units. */ statbuf->st_blocks = (uint64_t) file_info.StandardInformation.AllocationSize.QuadPart >> 9; statbuf->st_nlink = file_info.StandardInformation.NumberOfLinks; /* The st_blksize is supposed to be the 'optimal' number of bytes for reading * and writing to the disk. That is, for any definition of 'optimal' - it's * supposed to at least avoid read-update-write behavior when writing to the * disk. * * However nobody knows this and even fewer people actually use this value, * and in order to fill it out we'd have to make another syscall to query the * volume for FILE_FS_SECTOR_SIZE_INFORMATION. * * Therefore we'll just report a sensible value that's quite commonly okay * on modern hardware. 
* * 4096 is the minimum required to be compatible with newer Advanced Format * drives (which have 4096 bytes per physical sector), and to be backwards * compatible with older drives (which have 512 bytes per physical sector). */ statbuf->st_blksize = 4096; /* Todo: set st_flags to something meaningful. Also provide a wrapper for * chattr(2). */ statbuf->st_flags = 0; /* Windows has nothing sensible to say about these values, so they'll just * remain empty. */ statbuf->st_gid = 0; statbuf->st_uid = 0; statbuf->st_rdev = 0; statbuf->st_gen = 0; return 0; } INLINE static void fs__stat_prepare_path(WCHAR* pathw) { size_t len = wcslen(pathw); /* TODO: ignore namespaced paths. */ if (len > 1 && pathw[len - 2] != L':' && (pathw[len - 1] == L'\\' || pathw[len - 1] == L'/')) { pathw[len - 1] = '\0'; } } INLINE static DWORD fs__stat_impl_from_path(WCHAR* path, int do_lstat, uv_stat_t* statbuf) { HANDLE handle; DWORD flags; DWORD ret; flags = FILE_FLAG_BACKUP_SEMANTICS; if (do_lstat) flags |= FILE_FLAG_OPEN_REPARSE_POINT; handle = CreateFileW(path, FILE_READ_ATTRIBUTES, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, OPEN_EXISTING, flags, NULL); if (handle == INVALID_HANDLE_VALUE) return GetLastError(); if (fs__stat_handle(handle, statbuf, do_lstat) != 0) ret = GetLastError(); else ret = 0; CloseHandle(handle); return ret; } INLINE static void fs__stat_impl(uv_fs_t* req, int do_lstat) { DWORD error; error = fs__stat_impl_from_path(req->file.pathw, do_lstat, &req->statbuf); if (error != 0) { if (do_lstat && (error == ERROR_SYMLINK_NOT_SUPPORTED || error == ERROR_NOT_A_REPARSE_POINT)) { /* We opened a reparse point but it was not a symlink. Try again. */ fs__stat_impl(req, 0); } else { /* Stat failed. */ SET_REQ_WIN32_ERROR(req, error); } return; } req->ptr = &req->statbuf; SET_REQ_RESULT(req, 0); } static void fs__stat(uv_fs_t* req) { fs__stat_prepare_path(req->file.pathw); fs__stat_impl(req, 0); } static void fs__lstat(uv_fs_t* req) { fs__stat_prepare_path(req->file.pathw); fs__stat_impl(req, 1); } static void fs__fstat(uv_fs_t* req) { int fd = req->file.fd; HANDLE handle; VERIFY_FD(fd, req); handle = uv__get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_HANDLE); return; } if (fs__stat_handle(handle, &req->statbuf, 0) != 0) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } req->ptr = &req->statbuf; SET_REQ_RESULT(req, 0); } static void fs__rename(uv_fs_t* req) { if (!MoveFileExW(req->file.pathw, req->fs.info.new_pathw, MOVEFILE_REPLACE_EXISTING)) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } SET_REQ_RESULT(req, 0); } INLINE static void fs__sync_impl(uv_fs_t* req) { int fd = req->file.fd; int result; VERIFY_FD(fd, req); result = FlushFileBuffers(uv__get_osfhandle(fd)) ? 
0 : -1; if (result == -1) { SET_REQ_WIN32_ERROR(req, GetLastError()); } else { SET_REQ_RESULT(req, result); } } static void fs__fsync(uv_fs_t* req) { fs__sync_impl(req); } static void fs__fdatasync(uv_fs_t* req) { fs__sync_impl(req); } static void fs__ftruncate(uv_fs_t* req) { int fd = req->file.fd; HANDLE handle; struct uv__fd_info_s fd_info = { 0 }; NTSTATUS status; IO_STATUS_BLOCK io_status; FILE_END_OF_FILE_INFORMATION eof_info; VERIFY_FD(fd, req); handle = uv__get_osfhandle(fd); if (uv__fd_hash_get(fd, &fd_info)) { if (fd_info.is_directory) { SET_REQ_WIN32_ERROR(req, ERROR_ACCESS_DENIED); return; } if (fd_info.mapping != INVALID_HANDLE_VALUE) { CloseHandle(fd_info.mapping); } } eof_info.EndOfFile.QuadPart = req->fs.info.offset; status = pNtSetInformationFile(handle, &io_status, &eof_info, sizeof eof_info, FileEndOfFileInformation); if (NT_SUCCESS(status)) { SET_REQ_RESULT(req, 0); } else { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(status)); if (fd_info.flags) { CloseHandle(handle); fd_info.mapping = INVALID_HANDLE_VALUE; fd_info.size.QuadPart = 0; fd_info.current_pos.QuadPart = 0; uv__fd_hash_add(fd, &fd_info); return; } } if (fd_info.flags) { fd_info.size = eof_info.EndOfFile; if (fd_info.size.QuadPart == 0) { fd_info.mapping = INVALID_HANDLE_VALUE; } else { DWORD flProtect = (fd_info.flags & (UV_FS_O_RDONLY | UV_FS_O_WRONLY | UV_FS_O_RDWR)) == UV_FS_O_RDONLY ? PAGE_READONLY : PAGE_READWRITE; fd_info.mapping = CreateFileMapping(handle, NULL, flProtect, fd_info.size.HighPart, fd_info.size.LowPart, NULL); if (fd_info.mapping == NULL) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(handle); fd_info.mapping = INVALID_HANDLE_VALUE; fd_info.size.QuadPart = 0; fd_info.current_pos.QuadPart = 0; uv__fd_hash_add(fd, &fd_info); return; } } uv__fd_hash_add(fd, &fd_info); } } static void fs__copyfile(uv_fs_t* req) { int flags; int overwrite; uv_stat_t statbuf; uv_stat_t new_statbuf; flags = req->fs.info.file_flags; if (flags & UV_FS_COPYFILE_FICLONE_FORCE) { SET_REQ_UV_ERROR(req, UV_ENOSYS, ERROR_NOT_SUPPORTED); return; } overwrite = flags & UV_FS_COPYFILE_EXCL; if (CopyFileW(req->file.pathw, req->fs.info.new_pathw, overwrite) != 0) { SET_REQ_RESULT(req, 0); return; } SET_REQ_WIN32_ERROR(req, GetLastError()); if (req->result != UV_EBUSY) return; /* if error UV_EBUSY check if src and dst file are the same */ if (fs__stat_impl_from_path(req->file.pathw, 0, &statbuf) != 0 || fs__stat_impl_from_path(req->fs.info.new_pathw, 0, &new_statbuf) != 0) { return; } if (statbuf.st_dev == new_statbuf.st_dev && statbuf.st_ino == new_statbuf.st_ino) { SET_REQ_RESULT(req, 0); } } static void fs__sendfile(uv_fs_t* req) { int fd_in = req->file.fd, fd_out = req->fs.info.fd_out; size_t length = req->fs.info.bufsml[0].len; int64_t offset = req->fs.info.offset; const size_t max_buf_size = 65536; size_t buf_size = length < max_buf_size ? length : max_buf_size; int n, result = 0; int64_t result_offset = 0; char* buf = (char*) uv__malloc(buf_size); if (!buf) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } if (offset != -1) { result_offset = _lseeki64(fd_in, offset, SEEK_SET); } if (result_offset == -1) { result = -1; } else { while (length > 0) { n = _read(fd_in, buf, length < buf_size ? 
length : buf_size); if (n == 0) { break; } else if (n == -1) { result = -1; break; } length -= n; n = _write(fd_out, buf, n); if (n == -1) { result = -1; break; } result += n; } } uv__free(buf); SET_REQ_RESULT(req, result); } static void fs__access(uv_fs_t* req) { DWORD attr = GetFileAttributesW(req->file.pathw); if (attr == INVALID_FILE_ATTRIBUTES) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } /* * Access is possible if * - write access wasn't requested, * - or the file isn't read-only, * - or it's a directory. * (Directories cannot be read-only on Windows.) */ if (!(req->fs.info.mode & W_OK) || !(attr & FILE_ATTRIBUTE_READONLY) || (attr & FILE_ATTRIBUTE_DIRECTORY)) { SET_REQ_RESULT(req, 0); } else { SET_REQ_WIN32_ERROR(req, UV_EPERM); } } static void fs__chmod(uv_fs_t* req) { int result = _wchmod(req->file.pathw, req->fs.info.mode); if (result == -1) SET_REQ_WIN32_ERROR(req, _doserrno); else SET_REQ_RESULT(req, 0); } static void fs__fchmod(uv_fs_t* req) { int fd = req->file.fd; int clear_archive_flag; HANDLE handle; NTSTATUS nt_status; IO_STATUS_BLOCK io_status; FILE_BASIC_INFORMATION file_info; VERIFY_FD(fd, req); handle = ReOpenFile(uv__get_osfhandle(fd), FILE_WRITE_ATTRIBUTES, 0, 0); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } nt_status = pNtQueryInformationFile(handle, &io_status, &file_info, sizeof file_info, FileBasicInformation); if (!NT_SUCCESS(nt_status)) { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(nt_status)); goto fchmod_cleanup; } /* Test if the Archive attribute is cleared */ if ((file_info.FileAttributes & FILE_ATTRIBUTE_ARCHIVE) == 0) { /* Set Archive flag, otherwise setting or clearing the read-only flag will not work */ file_info.FileAttributes |= FILE_ATTRIBUTE_ARCHIVE; nt_status = pNtSetInformationFile(handle, &io_status, &file_info, sizeof file_info, FileBasicInformation); if (!NT_SUCCESS(nt_status)) { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(nt_status)); goto fchmod_cleanup; } /* Remeber to clear the flag later on */ clear_archive_flag = 1; } else { clear_archive_flag = 0; } if (req->fs.info.mode & _S_IWRITE) { file_info.FileAttributes &= ~FILE_ATTRIBUTE_READONLY; } else { file_info.FileAttributes |= FILE_ATTRIBUTE_READONLY; } nt_status = pNtSetInformationFile(handle, &io_status, &file_info, sizeof file_info, FileBasicInformation); if (!NT_SUCCESS(nt_status)) { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(nt_status)); goto fchmod_cleanup; } if (clear_archive_flag) { file_info.FileAttributes &= ~FILE_ATTRIBUTE_ARCHIVE; if (file_info.FileAttributes == 0) { file_info.FileAttributes = FILE_ATTRIBUTE_NORMAL; } nt_status = pNtSetInformationFile(handle, &io_status, &file_info, sizeof file_info, FileBasicInformation); if (!NT_SUCCESS(nt_status)) { SET_REQ_WIN32_ERROR(req, pRtlNtStatusToDosError(nt_status)); goto fchmod_cleanup; } } SET_REQ_SUCCESS(req); fchmod_cleanup: CloseHandle(handle); } INLINE static int fs__utime_handle(HANDLE handle, double atime, double mtime) { FILETIME filetime_a, filetime_m; TIME_T_TO_FILETIME(atime, &filetime_a); TIME_T_TO_FILETIME(mtime, &filetime_m); if (!SetFileTime(handle, NULL, &filetime_a, &filetime_m)) { return -1; } return 0; } INLINE static DWORD fs__utime_impl_from_path(WCHAR* path, double atime, double mtime, int do_lutime) { HANDLE handle; DWORD flags; DWORD ret; flags = FILE_FLAG_BACKUP_SEMANTICS; if (do_lutime) { flags |= FILE_FLAG_OPEN_REPARSE_POINT; } handle = CreateFileW(path, FILE_WRITE_ATTRIBUTES, FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, NULL, 
OPEN_EXISTING, flags, NULL); if (handle == INVALID_HANDLE_VALUE) return GetLastError(); if (fs__utime_handle(handle, atime, mtime) != 0) ret = GetLastError(); else ret = 0; CloseHandle(handle); return ret; } INLINE static void fs__utime_impl(uv_fs_t* req, int do_lutime) { DWORD error; error = fs__utime_impl_from_path(req->file.pathw, req->fs.time.atime, req->fs.time.mtime, do_lutime); if (error != 0) { if (do_lutime && (error == ERROR_SYMLINK_NOT_SUPPORTED || error == ERROR_NOT_A_REPARSE_POINT)) { /* Opened file is a reparse point but not a symlink. Try again. */ fs__utime_impl(req, 0); } else { /* utime failed. */ SET_REQ_WIN32_ERROR(req, error); } return; } SET_REQ_RESULT(req, 0); } static void fs__utime(uv_fs_t* req) { fs__utime_impl(req, /* do_lutime */ 0); } static void fs__futime(uv_fs_t* req) { int fd = req->file.fd; HANDLE handle; VERIFY_FD(fd, req); handle = uv__get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, ERROR_INVALID_HANDLE); return; } if (fs__utime_handle(handle, req->fs.time.atime, req->fs.time.mtime) != 0) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } SET_REQ_RESULT(req, 0); } static void fs__lutime(uv_fs_t* req) { fs__utime_impl(req, /* do_lutime */ 1); } static void fs__link(uv_fs_t* req) { DWORD r = CreateHardLinkW(req->fs.info.new_pathw, req->file.pathw, NULL); if (r == 0) SET_REQ_WIN32_ERROR(req, GetLastError()); else SET_REQ_RESULT(req, 0); } static void fs__create_junction(uv_fs_t* req, const WCHAR* path, const WCHAR* new_path) { HANDLE handle = INVALID_HANDLE_VALUE; REPARSE_DATA_BUFFER *buffer = NULL; int created = 0; int target_len; int is_absolute, is_long_path; int needed_buf_size, used_buf_size, used_data_size, path_buf_len; int start, len, i; int add_slash; DWORD bytes; WCHAR* path_buf; target_len = wcslen(path); is_long_path = wcsncmp(path, LONG_PATH_PREFIX, LONG_PATH_PREFIX_LEN) == 0; if (is_long_path) { is_absolute = 1; } else { is_absolute = target_len >= 3 && IS_LETTER(path[0]) && path[1] == L':' && IS_SLASH(path[2]); } if (!is_absolute) { /* Not supporting relative paths */ SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_NOT_SUPPORTED); return; } /* Do a pessimistic calculation of the required buffer size */ needed_buf_size = FIELD_OFFSET(REPARSE_DATA_BUFFER, MountPointReparseBuffer.PathBuffer) + JUNCTION_PREFIX_LEN * sizeof(WCHAR) + 2 * (target_len + 2) * sizeof(WCHAR); /* Allocate the buffer */ buffer = (REPARSE_DATA_BUFFER*)uv__malloc(needed_buf_size); if (!buffer) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } /* Grab a pointer to the part of the buffer where filenames go */ path_buf = (WCHAR*)&(buffer->MountPointReparseBuffer.PathBuffer); path_buf_len = 0; /* Copy the substitute (internal) target path */ start = path_buf_len; wcsncpy((WCHAR*)&path_buf[path_buf_len], JUNCTION_PREFIX, JUNCTION_PREFIX_LEN); path_buf_len += JUNCTION_PREFIX_LEN; add_slash = 0; for (i = is_long_path ? LONG_PATH_PREFIX_LEN : 0; path[i] != L'\0'; i++) { if (IS_SLASH(path[i])) { add_slash = 1; continue; } if (add_slash) { path_buf[path_buf_len++] = L'\\'; add_slash = 0; } path_buf[path_buf_len++] = path[i]; } path_buf[path_buf_len++] = L'\\'; len = path_buf_len - start; /* Set the info about the substitute name */ buffer->MountPointReparseBuffer.SubstituteNameOffset = start * sizeof(WCHAR); buffer->MountPointReparseBuffer.SubstituteNameLength = len * sizeof(WCHAR); /* Insert null terminator */ path_buf[path_buf_len++] = L'\0'; /* Copy the print name of the target path */ start = path_buf_len; add_slash = 0; for (i = is_long_path ? 
LONG_PATH_PREFIX_LEN : 0; path[i] != L'\0'; i++) { if (IS_SLASH(path[i])) { add_slash = 1; continue; } if (add_slash) { path_buf[path_buf_len++] = L'\\'; add_slash = 0; } path_buf[path_buf_len++] = path[i]; } len = path_buf_len - start; if (len == 2) { path_buf[path_buf_len++] = L'\\'; len++; } /* Set the info about the print name */ buffer->MountPointReparseBuffer.PrintNameOffset = start * sizeof(WCHAR); buffer->MountPointReparseBuffer.PrintNameLength = len * sizeof(WCHAR); /* Insert another null terminator */ path_buf[path_buf_len++] = L'\0'; /* Calculate how much buffer space was actually used */ used_buf_size = FIELD_OFFSET(REPARSE_DATA_BUFFER, MountPointReparseBuffer.PathBuffer) + path_buf_len * sizeof(WCHAR); used_data_size = used_buf_size - FIELD_OFFSET(REPARSE_DATA_BUFFER, MountPointReparseBuffer); /* Put general info in the data buffer */ buffer->ReparseTag = IO_REPARSE_TAG_MOUNT_POINT; buffer->ReparseDataLength = used_data_size; buffer->Reserved = 0; /* Create a new directory */ if (!CreateDirectoryW(new_path, NULL)) { SET_REQ_WIN32_ERROR(req, GetLastError()); goto error; } created = 1; /* Open the directory */ handle = CreateFileW(new_path, GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT, NULL); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, GetLastError()); goto error; } /* Create the actual reparse point */ if (!DeviceIoControl(handle, FSCTL_SET_REPARSE_POINT, buffer, used_buf_size, NULL, 0, &bytes, NULL)) { SET_REQ_WIN32_ERROR(req, GetLastError()); goto error; } /* Clean up */ CloseHandle(handle); uv__free(buffer); SET_REQ_RESULT(req, 0); return; error: uv__free(buffer); if (handle != INVALID_HANDLE_VALUE) { CloseHandle(handle); } if (created) { RemoveDirectoryW(new_path); } } static void fs__symlink(uv_fs_t* req) { WCHAR* pathw; WCHAR* new_pathw; int flags; int err; pathw = req->file.pathw; new_pathw = req->fs.info.new_pathw; if (req->fs.info.file_flags & UV_FS_SYMLINK_JUNCTION) { fs__create_junction(req, pathw, new_pathw); return; } if (req->fs.info.file_flags & UV_FS_SYMLINK_DIR) flags = SYMBOLIC_LINK_FLAG_DIRECTORY | uv__file_symlink_usermode_flag; else flags = uv__file_symlink_usermode_flag; if (CreateSymbolicLinkW(new_pathw, pathw, flags)) { SET_REQ_RESULT(req, 0); return; } /* Something went wrong. We will test if it is because of user-mode * symlinks. */ err = GetLastError(); if (err == ERROR_INVALID_PARAMETER && flags & SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE) { /* This system does not support user-mode symlinks. We will clear the * unsupported flag and retry. 
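* Clearing the file-scope flag below affects every subsequent uv_fs_symlink() call in the process, so this probe only happens once; the single recursive retry then either succeeds with a privileged symlink or fails for good.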
*/ uv__file_symlink_usermode_flag = 0; fs__symlink(req); } else { SET_REQ_WIN32_ERROR(req, err); } } static void fs__readlink(uv_fs_t* req) { HANDLE handle; handle = CreateFileW(req->file.pathw, 0, 0, NULL, OPEN_EXISTING, FILE_FLAG_OPEN_REPARSE_POINT | FILE_FLAG_BACKUP_SEMANTICS, NULL); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } if (fs__readlink_handle(handle, (char**) &req->ptr, NULL) != 0) { SET_REQ_WIN32_ERROR(req, GetLastError()); CloseHandle(handle); return; } req->flags |= UV_FS_FREE_PTR; SET_REQ_RESULT(req, 0); CloseHandle(handle); } static ssize_t fs__realpath_handle(HANDLE handle, char** realpath_ptr) { int r; DWORD w_realpath_len; WCHAR* w_realpath_ptr = NULL; WCHAR* w_realpath_buf; w_realpath_len = GetFinalPathNameByHandleW(handle, NULL, 0, VOLUME_NAME_DOS); if (w_realpath_len == 0) { return -1; } w_realpath_buf = uv__malloc((w_realpath_len + 1) * sizeof(WCHAR)); if (w_realpath_buf == NULL) { SetLastError(ERROR_OUTOFMEMORY); return -1; } w_realpath_ptr = w_realpath_buf; if (GetFinalPathNameByHandleW( handle, w_realpath_ptr, w_realpath_len, VOLUME_NAME_DOS) == 0) { uv__free(w_realpath_buf); SetLastError(ERROR_INVALID_HANDLE); return -1; } /* convert UNC path to long path */ if (wcsncmp(w_realpath_ptr, UNC_PATH_PREFIX, UNC_PATH_PREFIX_LEN) == 0) { w_realpath_ptr += 6; *w_realpath_ptr = L'\\'; w_realpath_len -= 6; } else if (wcsncmp(w_realpath_ptr, LONG_PATH_PREFIX, LONG_PATH_PREFIX_LEN) == 0) { w_realpath_ptr += 4; w_realpath_len -= 4; } else { uv__free(w_realpath_buf); SetLastError(ERROR_INVALID_HANDLE); return -1; } r = fs__wide_to_utf8(w_realpath_ptr, w_realpath_len, realpath_ptr, NULL); uv__free(w_realpath_buf); return r; } static void fs__realpath(uv_fs_t* req) { HANDLE handle; handle = CreateFileW(req->file.pathw, 0, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_BACKUP_SEMANTICS, NULL); if (handle == INVALID_HANDLE_VALUE) { SET_REQ_WIN32_ERROR(req, GetLastError()); return; } if (fs__realpath_handle(handle, (char**) &req->ptr) == -1) { CloseHandle(handle); SET_REQ_WIN32_ERROR(req, GetLastError()); return; } CloseHandle(handle); req->flags |= UV_FS_FREE_PTR; SET_REQ_RESULT(req, 0); } static void fs__chown(uv_fs_t* req) { SET_REQ_RESULT(req, 0); } static void fs__fchown(uv_fs_t* req) { SET_REQ_RESULT(req, 0); } static void fs__lchown(uv_fs_t* req) { SET_REQ_RESULT(req, 0); } static void fs__statfs(uv_fs_t* req) { uv_statfs_t* stat_fs; DWORD sectors_per_cluster; DWORD bytes_per_sector; DWORD free_clusters; DWORD total_clusters; WCHAR* pathw; pathw = req->file.pathw; retry_get_disk_free_space: if (0 == GetDiskFreeSpaceW(pathw, &sectors_per_cluster, &bytes_per_sector, &free_clusters, &total_clusters)) { DWORD err; WCHAR* fpart; size_t len; DWORD ret; BOOL is_second; err = GetLastError(); is_second = pathw != req->file.pathw; if (err != ERROR_DIRECTORY || is_second) { if (is_second) uv__free(pathw); SET_REQ_WIN32_ERROR(req, err); return; } len = MAX_PATH + 1; pathw = uv__malloc(len * sizeof(*pathw)); if (pathw == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); return; } retry_get_full_path_name: ret = GetFullPathNameW(req->file.pathw, len, pathw, &fpart); if (ret == 0) { uv__free(pathw); SET_REQ_WIN32_ERROR(req, err); return; } else if (ret > len) { len = ret; pathw = uv__reallocf(pathw, len * sizeof(*pathw)); if (pathw == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); return; } goto retry_get_full_path_name; } if (fpart != 0) *fpart = L'\0'; goto retry_get_disk_free_space; } if (pathw !=
req->file.pathw) { uv__free(pathw); } stat_fs = uv__malloc(sizeof(*stat_fs)); if (stat_fs == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); return; } stat_fs->f_type = 0; stat_fs->f_bsize = bytes_per_sector * sectors_per_cluster; stat_fs->f_blocks = total_clusters; stat_fs->f_bfree = free_clusters; stat_fs->f_bavail = free_clusters; stat_fs->f_files = 0; stat_fs->f_ffree = 0; req->ptr = stat_fs; req->flags |= UV_FS_FREE_PTR; SET_REQ_RESULT(req, 0); } static void uv__fs_work(struct uv__work* w) { uv_fs_t* req; req = container_of(w, uv_fs_t, work_req); assert(req->type == UV_FS); #define XX(uc, lc) case UV_FS_##uc: fs__##lc(req); break; switch (req->fs_type) { XX(OPEN, open) XX(CLOSE, close) XX(READ, read) XX(WRITE, write) XX(COPYFILE, copyfile) XX(SENDFILE, sendfile) XX(STAT, stat) XX(LSTAT, lstat) XX(FSTAT, fstat) XX(FTRUNCATE, ftruncate) XX(UTIME, utime) XX(FUTIME, futime) XX(LUTIME, lutime) XX(ACCESS, access) XX(CHMOD, chmod) XX(FCHMOD, fchmod) XX(FSYNC, fsync) XX(FDATASYNC, fdatasync) XX(UNLINK, unlink) XX(RMDIR, rmdir) XX(MKDIR, mkdir) XX(MKDTEMP, mkdtemp) XX(MKSTEMP, mkstemp) XX(RENAME, rename) XX(SCANDIR, scandir) XX(READDIR, readdir) XX(OPENDIR, opendir) XX(CLOSEDIR, closedir) XX(LINK, link) XX(SYMLINK, symlink) XX(READLINK, readlink) XX(REALPATH, realpath) XX(CHOWN, chown) XX(FCHOWN, fchown) XX(LCHOWN, lchown) XX(STATFS, statfs) default: assert(!"bad uv_fs_type"); } } static void uv__fs_done(struct uv__work* w, int status) { uv_fs_t* req; req = container_of(w, uv_fs_t, work_req); uv__req_unregister(req->loop, req); if (status == UV_ECANCELED) { assert(req->result == 0); SET_REQ_UV_ERROR(req, UV_ECANCELED, 0); } req->cb(req); } void uv_fs_req_cleanup(uv_fs_t* req) { if (req == NULL) return; if (req->flags & UV_FS_CLEANEDUP) return; if (req->flags & UV_FS_FREE_PATHS) uv__free(req->file.pathw); if (req->flags & UV_FS_FREE_PTR) { if (req->fs_type == UV_FS_SCANDIR && req->ptr != NULL) uv__fs_scandir_cleanup(req); else if (req->fs_type == UV_FS_READDIR) uv__fs_readdir_cleanup(req); else uv__free(req->ptr); } if (req->fs.info.bufs != req->fs.info.bufsml) uv__free(req->fs.info.bufs); req->path = NULL; req->file.pathw = NULL; req->fs.info.new_pathw = NULL; req->fs.info.bufs = NULL; req->ptr = NULL; req->flags |= UV_FS_CLEANEDUP; } int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb) { int err; INIT(UV_FS_OPEN); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.file_flags = flags; req->fs.info.mode = mode; POST; } int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file fd, uv_fs_cb cb) { INIT(UV_FS_CLOSE); req->file.fd = fd; POST; } int uv_fs_read(uv_loop_t* loop, uv_fs_t* req, uv_file fd, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb) { INIT(UV_FS_READ); if (bufs == NULL || nbufs == 0) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } req->file.fd = fd; req->fs.info.nbufs = nbufs; req->fs.info.bufs = req->fs.info.bufsml; if (nbufs > ARRAY_SIZE(req->fs.info.bufsml)) req->fs.info.bufs = uv__malloc(nbufs * sizeof(*bufs)); if (req->fs.info.bufs == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); return UV_ENOMEM; } memcpy(req->fs.info.bufs, bufs, nbufs * sizeof(*bufs)); req->fs.info.offset = offset; POST; } int uv_fs_write(uv_loop_t* loop, uv_fs_t* req, uv_file fd, const uv_buf_t bufs[], unsigned int nbufs, int64_t offset, uv_fs_cb cb) { INIT(UV_FS_WRITE); if (bufs == NULL || nbufs == 0) { 
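/* An empty buffer list is rejected up front with UV_EINVAL, mirroring the check in uv_fs_read() above. */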
SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } req->file.fd = fd; req->fs.info.nbufs = nbufs; req->fs.info.bufs = req->fs.info.bufsml; if (nbufs > ARRAY_SIZE(req->fs.info.bufsml)) req->fs.info.bufs = uv__malloc(nbufs * sizeof(*bufs)); if (req->fs.info.bufs == NULL) { SET_REQ_UV_ERROR(req, UV_ENOMEM, ERROR_OUTOFMEMORY); return UV_ENOMEM; } memcpy(req->fs.info.bufs, bufs, nbufs * sizeof(*bufs)); req->fs.info.offset = offset; POST; } int uv_fs_unlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_UNLINK); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_mkdir(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) { int err; INIT(UV_FS_MKDIR); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.mode = mode; POST; } int uv_fs_mkdtemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb) { int err; INIT(UV_FS_MKDTEMP); err = fs__capture_path(req, tpl, NULL, TRUE); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_mkstemp(uv_loop_t* loop, uv_fs_t* req, const char* tpl, uv_fs_cb cb) { int err; INIT(UV_FS_MKSTEMP); err = fs__capture_path(req, tpl, NULL, TRUE); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_rmdir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_RMDIR); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_scandir(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { int err; INIT(UV_FS_SCANDIR); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.file_flags = flags; POST; } int uv_fs_opendir(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_OPENDIR); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_readdir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb) { INIT(UV_FS_READDIR); if (dir == NULL || dir->dirents == NULL || dir->dir_handle == INVALID_HANDLE_VALUE) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } req->ptr = dir; POST; } int uv_fs_closedir(uv_loop_t* loop, uv_fs_t* req, uv_dir_t* dir, uv_fs_cb cb) { INIT(UV_FS_CLOSEDIR); if (dir == NULL) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } req->ptr = dir; POST; } int uv_fs_link(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) { int err; INIT(UV_FS_LINK); err = fs__capture_path(req, path, new_path, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_symlink(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb) { int err; INIT(UV_FS_SYMLINK); err = fs__capture_path(req, path, new_path, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.file_flags = flags; POST; } int uv_fs_readlink(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_READLINK); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_realpath(uv_loop_t* loop, uv_fs_t* req, const 
char* path, uv_fs_cb cb) { int err; INIT(UV_FS_REALPATH); if (!path) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_chown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { int err; INIT(UV_FS_CHOWN); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_fchown(uv_loop_t* loop, uv_fs_t* req, uv_file fd, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { INIT(UV_FS_FCHOWN); POST; } int uv_fs_lchown(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_uid_t uid, uv_gid_t gid, uv_fs_cb cb) { int err; INIT(UV_FS_LCHOWN); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_stat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_STAT); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_lstat(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_LSTAT); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_fstat(uv_loop_t* loop, uv_fs_t* req, uv_file fd, uv_fs_cb cb) { INIT(UV_FS_FSTAT); req->file.fd = fd; POST; } int uv_fs_rename(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, uv_fs_cb cb) { int err; INIT(UV_FS_RENAME); err = fs__capture_path(req, path, new_path, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_fsync(uv_loop_t* loop, uv_fs_t* req, uv_file fd, uv_fs_cb cb) { INIT(UV_FS_FSYNC); req->file.fd = fd; POST; } int uv_fs_fdatasync(uv_loop_t* loop, uv_fs_t* req, uv_file fd, uv_fs_cb cb) { INIT(UV_FS_FDATASYNC); req->file.fd = fd; POST; } int uv_fs_ftruncate(uv_loop_t* loop, uv_fs_t* req, uv_file fd, int64_t offset, uv_fs_cb cb) { INIT(UV_FS_FTRUNCATE); req->file.fd = fd; req->fs.info.offset = offset; POST; } int uv_fs_copyfile(uv_loop_t* loop, uv_fs_t* req, const char* path, const char* new_path, int flags, uv_fs_cb cb) { int err; INIT(UV_FS_COPYFILE); if (flags & ~(UV_FS_COPYFILE_EXCL | UV_FS_COPYFILE_FICLONE | UV_FS_COPYFILE_FICLONE_FORCE)) { SET_REQ_UV_ERROR(req, UV_EINVAL, ERROR_INVALID_PARAMETER); return UV_EINVAL; } err = fs__capture_path(req, path, new_path, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.file_flags = flags; POST; } int uv_fs_sendfile(uv_loop_t* loop, uv_fs_t* req, uv_file fd_out, uv_file fd_in, int64_t in_offset, size_t length, uv_fs_cb cb) { INIT(UV_FS_SENDFILE); req->file.fd = fd_in; req->fs.info.fd_out = fd_out; req->fs.info.offset = in_offset; req->fs.info.bufsml[0].len = length; POST; } int uv_fs_access(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, uv_fs_cb cb) { int err; INIT(UV_FS_ACCESS); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.mode = flags; POST; } int uv_fs_chmod(uv_loop_t* loop, uv_fs_t* req, const char* path, int mode, uv_fs_cb cb) { int err; INIT(UV_FS_CHMOD); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.info.mode = mode; POST; } int uv_fs_fchmod(uv_loop_t* loop, uv_fs_t* req, 
uv_file fd, int mode, uv_fs_cb cb) { INIT(UV_FS_FCHMOD); req->file.fd = fd; req->fs.info.mode = mode; POST; } int uv_fs_utime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb) { int err; INIT(UV_FS_UTIME); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.time.atime = atime; req->fs.time.mtime = mtime; POST; } int uv_fs_futime(uv_loop_t* loop, uv_fs_t* req, uv_file fd, double atime, double mtime, uv_fs_cb cb) { INIT(UV_FS_FUTIME); req->file.fd = fd; req->fs.time.atime = atime; req->fs.time.mtime = mtime; POST; } int uv_fs_lutime(uv_loop_t* loop, uv_fs_t* req, const char* path, double atime, double mtime, uv_fs_cb cb) { int err; INIT(UV_FS_LUTIME); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } req->fs.time.atime = atime; req->fs.time.mtime = mtime; POST; } int uv_fs_statfs(uv_loop_t* loop, uv_fs_t* req, const char* path, uv_fs_cb cb) { int err; INIT(UV_FS_STATFS); err = fs__capture_path(req, path, NULL, cb != NULL); if (err) { SET_REQ_WIN32_ERROR(req, err); return req->result; } POST; } int uv_fs_get_system_error(const uv_fs_t* req) { return req->sys_errno_; } gevent-24.11.1/deps/libuv/src/win/getaddrinfo.c000066400000000000000000000345071471441230600212550ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include "uv.h" #include "internal.h" #include "req-inl.h" #include "idna.h" /* EAI_* constants. 
*/ #include <winsock2.h> /* Needed for ConvertInterfaceIndexToLuid and ConvertInterfaceLuidToNameA */ #include <iphlpapi.h> int uv__getaddrinfo_translate_error(int sys_err) { switch (sys_err) { case 0: return 0; case WSATRY_AGAIN: return UV_EAI_AGAIN; case WSAEINVAL: return UV_EAI_BADFLAGS; case WSANO_RECOVERY: return UV_EAI_FAIL; case WSAEAFNOSUPPORT: return UV_EAI_FAMILY; case WSA_NOT_ENOUGH_MEMORY: return UV_EAI_MEMORY; case WSAHOST_NOT_FOUND: return UV_EAI_NONAME; case WSATYPE_NOT_FOUND: return UV_EAI_SERVICE; case WSAESOCKTNOSUPPORT: return UV_EAI_SOCKTYPE; default: return uv_translate_sys_error(sys_err); } } /* * MinGW is missing this */ #if !defined(_MSC_VER) && !defined(__MINGW64_VERSION_MAJOR) typedef struct addrinfoW { int ai_flags; int ai_family; int ai_socktype; int ai_protocol; size_t ai_addrlen; WCHAR* ai_canonname; struct sockaddr* ai_addr; struct addrinfoW* ai_next; } ADDRINFOW, *PADDRINFOW; DECLSPEC_IMPORT int WSAAPI GetAddrInfoW(const WCHAR* node, const WCHAR* service, const ADDRINFOW* hints, PADDRINFOW* result); DECLSPEC_IMPORT void WSAAPI FreeAddrInfoW(PADDRINFOW pAddrInfo); #endif /* Adjust size value to be multiple of 4. Use to keep pointer aligned. * Do we need different versions of this for different architectures? */ #define ALIGNED_SIZE(X) ((((X) + 3) >> 2) << 2) #ifndef NDIS_IF_MAX_STRING_SIZE #define NDIS_IF_MAX_STRING_SIZE IF_MAX_STRING_SIZE #endif static void uv__getaddrinfo_work(struct uv__work* w) { uv_getaddrinfo_t* req; struct addrinfoW* hints; int err; req = container_of(w, uv_getaddrinfo_t, work_req); hints = req->addrinfow; req->addrinfow = NULL; err = GetAddrInfoW(req->node, req->service, hints, &req->addrinfow); req->retcode = uv__getaddrinfo_translate_error(err); } /* * Called from uv_run when complete. Call user specified callback * then free returned addrinfo * Returned addrinfo strings are converted from UTF-16 to UTF-8. * * To minimize allocation we calculate total size required, * and copy all structs and referenced strings into the one block. * Each size calculation is adjusted to avoid unaligned pointers. */ static void uv__getaddrinfo_done(struct uv__work* w, int status) { uv_getaddrinfo_t* req; int addrinfo_len = 0; int name_len = 0; size_t addrinfo_struct_len = ALIGNED_SIZE(sizeof(struct addrinfo)); struct addrinfoW* addrinfow_ptr; struct addrinfo* addrinfo_ptr; char* alloc_ptr = NULL; char* cur_ptr = NULL; req = container_of(w, uv_getaddrinfo_t, work_req); /* release input parameter memory */ uv__free(req->alloc); req->alloc = NULL; if (status == UV_ECANCELED) { assert(req->retcode == 0); req->retcode = UV_EAI_CANCELED; goto complete; } if (req->retcode == 0) { /* Convert addrinfoW to addrinfo. First calculate required length.
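* The addrinfoW list is walked twice: a first pass sums the aligned sizes of every struct addrinfo, sockaddr and UTF-8 canonical name, and a second pass copies them all into the single uv__malloc()'d block.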
*/ addrinfow_ptr = req->addrinfow; while (addrinfow_ptr != NULL) { addrinfo_len += addrinfo_struct_len + ALIGNED_SIZE(addrinfow_ptr->ai_addrlen); if (addrinfow_ptr->ai_canonname != NULL) { name_len = WideCharToMultiByte(CP_UTF8, 0, addrinfow_ptr->ai_canonname, -1, NULL, 0, NULL, NULL); if (name_len == 0) { req->retcode = uv_translate_sys_error(GetLastError()); goto complete; } addrinfo_len += ALIGNED_SIZE(name_len); } addrinfow_ptr = addrinfow_ptr->ai_next; } /* allocate memory for addrinfo results */ alloc_ptr = (char*)uv__malloc(addrinfo_len); /* do conversions */ if (alloc_ptr != NULL) { cur_ptr = alloc_ptr; addrinfow_ptr = req->addrinfow; while (addrinfow_ptr != NULL) { /* copy addrinfo struct data */ assert(cur_ptr + addrinfo_struct_len <= alloc_ptr + addrinfo_len); addrinfo_ptr = (struct addrinfo*)cur_ptr; addrinfo_ptr->ai_family = addrinfow_ptr->ai_family; addrinfo_ptr->ai_socktype = addrinfow_ptr->ai_socktype; addrinfo_ptr->ai_protocol = addrinfow_ptr->ai_protocol; addrinfo_ptr->ai_flags = addrinfow_ptr->ai_flags; addrinfo_ptr->ai_addrlen = addrinfow_ptr->ai_addrlen; addrinfo_ptr->ai_canonname = NULL; addrinfo_ptr->ai_addr = NULL; addrinfo_ptr->ai_next = NULL; cur_ptr += addrinfo_struct_len; /* copy sockaddr */ if (addrinfo_ptr->ai_addrlen > 0) { assert(cur_ptr + addrinfo_ptr->ai_addrlen <= alloc_ptr + addrinfo_len); memcpy(cur_ptr, addrinfow_ptr->ai_addr, addrinfo_ptr->ai_addrlen); addrinfo_ptr->ai_addr = (struct sockaddr*)cur_ptr; cur_ptr += ALIGNED_SIZE(addrinfo_ptr->ai_addrlen); } /* convert canonical name to UTF-8 */ if (addrinfow_ptr->ai_canonname != NULL) { name_len = WideCharToMultiByte(CP_UTF8, 0, addrinfow_ptr->ai_canonname, -1, NULL, 0, NULL, NULL); assert(name_len > 0); assert(cur_ptr + name_len <= alloc_ptr + addrinfo_len); name_len = WideCharToMultiByte(CP_UTF8, 0, addrinfow_ptr->ai_canonname, -1, cur_ptr, name_len, NULL, NULL); assert(name_len > 0); addrinfo_ptr->ai_canonname = cur_ptr; cur_ptr += ALIGNED_SIZE(name_len); } assert(cur_ptr <= alloc_ptr + addrinfo_len); /* set next ptr */ addrinfow_ptr = addrinfow_ptr->ai_next; if (addrinfow_ptr != NULL) { addrinfo_ptr->ai_next = (struct addrinfo*)cur_ptr; } } req->addrinfo = (struct addrinfo*)alloc_ptr; } else { req->retcode = UV_EAI_MEMORY; } } /* return memory to system */ if (req->addrinfow != NULL) { FreeAddrInfoW(req->addrinfow); req->addrinfow = NULL; } complete: uv__req_unregister(req->loop, req); /* finally do callback with converted result */ if (req->getaddrinfo_cb) req->getaddrinfo_cb(req, req->retcode, req->addrinfo); } void uv_freeaddrinfo(struct addrinfo* ai) { char* alloc_ptr = (char*)ai; /* release copied result memory */ uv__free(alloc_ptr); } /* * Entry point for getaddrinfo * we convert the UTF-8 strings to UNICODE * and save the UNICODE string pointers in the req * We also copy hints so that caller does not need to keep memory until the * callback. * return 0 if a callback will be made * return error code if validation fails * * To minimize allocation we calculate total size required, * and copy all structs and referenced strings into the one block. * Each size calculation is adjusted to avoid unaligned pointers. 
*/ int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb getaddrinfo_cb, const char* node, const char* service, const struct addrinfo* hints) { char hostname_ascii[256]; int nodesize = 0; int servicesize = 0; int hintssize = 0; char* alloc_ptr = NULL; int err; long rc; if (req == NULL || (node == NULL && service == NULL)) { return UV_EINVAL; } UV_REQ_INIT(req, UV_GETADDRINFO); req->getaddrinfo_cb = getaddrinfo_cb; req->addrinfo = NULL; req->loop = loop; req->retcode = 0; /* calculate required memory size for all input values */ if (node != NULL) { rc = uv__idna_toascii(node, node + strlen(node), hostname_ascii, hostname_ascii + sizeof(hostname_ascii)); if (rc < 0) return rc; nodesize = ALIGNED_SIZE(MultiByteToWideChar(CP_UTF8, 0, hostname_ascii, -1, NULL, 0) * sizeof(WCHAR)); if (nodesize == 0) { err = GetLastError(); goto error; } node = hostname_ascii; } if (service != NULL) { servicesize = ALIGNED_SIZE(MultiByteToWideChar(CP_UTF8, 0, service, -1, NULL, 0) * sizeof(WCHAR)); if (servicesize == 0) { err = GetLastError(); goto error; } } if (hints != NULL) { hintssize = ALIGNED_SIZE(sizeof(struct addrinfoW)); } /* allocate memory for inputs, and partition it as needed */ alloc_ptr = (char*)uv__malloc(nodesize + servicesize + hintssize); if (!alloc_ptr) { err = WSAENOBUFS; goto error; } /* save alloc_ptr now so we can free if error */ req->alloc = (void*)alloc_ptr; /* Convert node string to UTF16 into allocated memory and save pointer in the * request. */ if (node != NULL) { req->node = (WCHAR*)alloc_ptr; if (MultiByteToWideChar(CP_UTF8, 0, node, -1, (WCHAR*) alloc_ptr, nodesize / sizeof(WCHAR)) == 0) { err = GetLastError(); goto error; } alloc_ptr += nodesize; } else { req->node = NULL; } /* Convert service string to UTF16 into allocated memory and save pointer in * the req. */ if (service != NULL) { req->service = (WCHAR*)alloc_ptr; if (MultiByteToWideChar(CP_UTF8, 0, service, -1, (WCHAR*) alloc_ptr, servicesize / sizeof(WCHAR)) == 0) { err = GetLastError(); goto error; } alloc_ptr += servicesize; } else { req->service = NULL; } /* copy hints to allocated memory and save pointer in req */ if (hints != NULL) { req->addrinfow = (struct addrinfoW*)alloc_ptr; req->addrinfow->ai_family = hints->ai_family; req->addrinfow->ai_socktype = hints->ai_socktype; req->addrinfow->ai_protocol = hints->ai_protocol; req->addrinfow->ai_flags = hints->ai_flags; req->addrinfow->ai_addrlen = 0; req->addrinfow->ai_canonname = NULL; req->addrinfow->ai_addr = NULL; req->addrinfow->ai_next = NULL; } else { req->addrinfow = NULL; } uv__req_register(loop, req); if (getaddrinfo_cb) { uv__work_submit(loop, &req->work_req, UV__WORK_SLOW_IO, uv__getaddrinfo_work, uv__getaddrinfo_done); return 0; } else { uv__getaddrinfo_work(&req->work_req); uv__getaddrinfo_done(&req->work_req, 0); return req->retcode; } error: if (req != NULL) { uv__free(req->alloc); req->alloc = NULL; } return uv_translate_sys_error(err); } int uv_if_indextoname(unsigned int ifindex, char* buffer, size_t* size) { NET_LUID luid; wchar_t wname[NDIS_IF_MAX_STRING_SIZE + 1]; /* Add one for the NUL. 
*/ DWORD bufsize; int r; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; r = ConvertInterfaceIndexToLuid(ifindex, &luid); if (r != 0) return uv_translate_sys_error(r); r = ConvertInterfaceLuidToNameW(&luid, wname, ARRAY_SIZE(wname)); if (r != 0) return uv_translate_sys_error(r); /* Check how much space we need */ bufsize = WideCharToMultiByte(CP_UTF8, 0, wname, -1, NULL, 0, NULL, NULL); if (bufsize == 0) { return uv_translate_sys_error(GetLastError()); } else if (bufsize > *size) { *size = bufsize; return UV_ENOBUFS; } /* Convert to UTF-8 */ bufsize = WideCharToMultiByte(CP_UTF8, 0, wname, -1, buffer, *size, NULL, NULL); if (bufsize == 0) return uv_translate_sys_error(GetLastError()); *size = bufsize - 1; return 0; } int uv_if_indextoiid(unsigned int ifindex, char* buffer, size_t* size) { int r; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; r = snprintf(buffer, *size, "%d", ifindex); if (r < 0) return uv_translate_sys_error(r); if (r >= (int) *size) { *size = r + 1; return UV_ENOBUFS; } *size = r; return 0; } gevent-24.11.1/deps/libuv/src/win/getnameinfo.c000066400000000000000000000106721471441230600212600ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" #include "req-inl.h" #ifndef GetNameInfo int WSAAPI GetNameInfoW( const SOCKADDR *pSockaddr, socklen_t SockaddrLength, PWCHAR pNodeBuffer, DWORD NodeBufferSize, PWCHAR pServiceBuffer, DWORD ServiceBufferSize, INT Flags ); #endif static void uv__getnameinfo_work(struct uv__work* w) { uv_getnameinfo_t* req; WCHAR host[NI_MAXHOST]; WCHAR service[NI_MAXSERV]; int ret; req = container_of(w, uv_getnameinfo_t, work_req); if (GetNameInfoW((struct sockaddr*)&req->storage, sizeof(req->storage), host, ARRAY_SIZE(host), service, ARRAY_SIZE(service), req->flags)) { ret = WSAGetLastError(); req->retcode = uv__getaddrinfo_translate_error(ret); return; } ret = WideCharToMultiByte(CP_UTF8, 0, host, -1, req->host, sizeof(req->host), NULL, NULL); if (ret == 0) { req->retcode = uv_translate_sys_error(GetLastError()); return; } ret = WideCharToMultiByte(CP_UTF8, 0, service, -1, req->service, sizeof(req->service), NULL, NULL); if (ret == 0) { req->retcode = uv_translate_sys_error(GetLastError()); } } /* * Called from uv_run when complete. 
*/ static void uv__getnameinfo_done(struct uv__work* w, int status) { uv_getnameinfo_t* req; char* host; char* service; req = container_of(w, uv_getnameinfo_t, work_req); uv__req_unregister(req->loop, req); host = service = NULL; if (status == UV_ECANCELED) { assert(req->retcode == 0); req->retcode = UV_EAI_CANCELED; } else if (req->retcode == 0) { host = req->host; service = req->service; } if (req->getnameinfo_cb) req->getnameinfo_cb(req, req->retcode, host, service); } /* * Entry point for getnameinfo * return 0 if a callback will be made * return error code if validation fails */ int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, const struct sockaddr* addr, int flags) { if (req == NULL || addr == NULL) return UV_EINVAL; if (addr->sa_family == AF_INET) { memcpy(&req->storage, addr, sizeof(struct sockaddr_in)); } else if (addr->sa_family == AF_INET6) { memcpy(&req->storage, addr, sizeof(struct sockaddr_in6)); } else { return UV_EINVAL; } UV_REQ_INIT(req, UV_GETNAMEINFO); uv__req_register(loop, req); req->getnameinfo_cb = getnameinfo_cb; req->flags = flags; req->loop = loop; req->retcode = 0; if (getnameinfo_cb) { uv__work_submit(loop, &req->work_req, UV__WORK_SLOW_IO, uv__getnameinfo_work, uv__getnameinfo_done); return 0; } else { uv__getnameinfo_work(&req->work_req); uv__getnameinfo_done(&req->work_req, 0); return req->retcode; } } gevent-24.11.1/deps/libuv/src/win/handle-inl.h000066400000000000000000000151031471441230600207760ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef UV_WIN_HANDLE_INL_H_ #define UV_WIN_HANDLE_INL_H_ #include #include #include "uv.h" #include "internal.h" #define DECREASE_ACTIVE_COUNT(loop, handle) \ do { \ if (--(handle)->activecnt == 0 && \ !((handle)->flags & UV_HANDLE_CLOSING)) { \ uv__handle_stop((handle)); \ } \ assert((handle)->activecnt >= 0); \ } while (0) #define INCREASE_ACTIVE_COUNT(loop, handle) \ do { \ if ((handle)->activecnt++ == 0) { \ uv__handle_start((handle)); \ } \ assert((handle)->activecnt > 0); \ } while (0) #define DECREASE_PENDING_REQ_COUNT(handle) \ do { \ assert(handle->reqs_pending > 0); \ handle->reqs_pending--; \ \ if (handle->flags & UV_HANDLE_CLOSING && \ handle->reqs_pending == 0) { \ uv__want_endgame(loop, (uv_handle_t*)handle); \ } \ } while (0) #define uv__handle_closing(handle) \ do { \ assert(!((handle)->flags & UV_HANDLE_CLOSING)); \ \ if (!(((handle)->flags & UV_HANDLE_ACTIVE) && \ ((handle)->flags & UV_HANDLE_REF))) \ uv__active_handle_add((uv_handle_t*) (handle)); \ \ (handle)->flags |= UV_HANDLE_CLOSING; \ (handle)->flags &= ~UV_HANDLE_ACTIVE; \ } while (0) #define uv__handle_close(handle) \ do { \ QUEUE_REMOVE(&(handle)->handle_queue); \ uv__active_handle_rm((uv_handle_t*) (handle)); \ \ (handle)->flags |= UV_HANDLE_CLOSED; \ \ if ((handle)->close_cb) \ (handle)->close_cb((uv_handle_t*) (handle)); \ } while (0) INLINE static void uv__want_endgame(uv_loop_t* loop, uv_handle_t* handle) { if (!(handle->flags & UV_HANDLE_ENDGAME_QUEUED)) { handle->flags |= UV_HANDLE_ENDGAME_QUEUED; handle->endgame_next = loop->endgame_handles; loop->endgame_handles = handle; } } INLINE static void uv__process_endgames(uv_loop_t* loop) { uv_handle_t* handle; while (loop->endgame_handles) { handle = loop->endgame_handles; loop->endgame_handles = handle->endgame_next; handle->flags &= ~UV_HANDLE_ENDGAME_QUEUED; switch (handle->type) { case UV_TCP: uv__tcp_endgame(loop, (uv_tcp_t*) handle); break; case UV_NAMED_PIPE: uv__pipe_endgame(loop, (uv_pipe_t*) handle); break; case UV_TTY: uv__tty_endgame(loop, (uv_tty_t*) handle); break; case UV_UDP: uv__udp_endgame(loop, (uv_udp_t*) handle); break; case UV_POLL: uv__poll_endgame(loop, (uv_poll_t*) handle); break; case UV_TIMER: uv__timer_close((uv_timer_t*) handle); uv__handle_close(handle); break; case UV_PREPARE: case UV_CHECK: case UV_IDLE: uv__loop_watcher_endgame(loop, handle); break; case UV_ASYNC: uv__async_endgame(loop, (uv_async_t*) handle); break; case UV_SIGNAL: uv__signal_endgame(loop, (uv_signal_t*) handle); break; case UV_PROCESS: uv__process_endgame(loop, (uv_process_t*) handle); break; case UV_FS_EVENT: uv__fs_event_endgame(loop, (uv_fs_event_t*) handle); break; case UV_FS_POLL: uv__fs_poll_endgame(loop, (uv_fs_poll_t*) handle); break; default: assert(0); break; } } } INLINE static HANDLE uv__get_osfhandle(int fd) { /* _get_osfhandle() raises an assert in debug builds if the FD is invalid. * But it also correctly checks the FD and returns INVALID_HANDLE_VALUE for * invalid FDs in release builds (or if you let the assert continue). So this * wrapper function disables asserts when calling _get_osfhandle. */ HANDLE handle; UV_BEGIN_DISABLE_CRT_ASSERT(); handle = (HANDLE) _get_osfhandle(fd); UV_END_DISABLE_CRT_ASSERT(); return handle; } #endif /* UV_WIN_HANDLE_INL_H_ */ gevent-24.11.1/deps/libuv/src/win/handle.c000066400000000000000000000076231471441230600202210ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" uv_handle_type uv_guess_handle(uv_file file) { HANDLE handle; DWORD mode; if (file < 0) { return UV_UNKNOWN_HANDLE; } handle = uv__get_osfhandle(file); switch (GetFileType(handle)) { case FILE_TYPE_CHAR: if (GetConsoleMode(handle, &mode)) { return UV_TTY; } else { return UV_FILE; } case FILE_TYPE_PIPE: return UV_NAMED_PIPE; case FILE_TYPE_DISK: return UV_FILE; default: return UV_UNKNOWN_HANDLE; } } int uv_is_active(const uv_handle_t* handle) { return (handle->flags & UV_HANDLE_ACTIVE) && !(handle->flags & UV_HANDLE_CLOSING); } void uv_close(uv_handle_t* handle, uv_close_cb cb) { uv_loop_t* loop = handle->loop; if (handle->flags & UV_HANDLE_CLOSING) { assert(0); return; } handle->close_cb = cb; /* Handle-specific close actions */ switch (handle->type) { case UV_TCP: uv__tcp_close(loop, (uv_tcp_t*)handle); return; case UV_NAMED_PIPE: uv__pipe_close(loop, (uv_pipe_t*) handle); return; case UV_TTY: uv__tty_close((uv_tty_t*) handle); return; case UV_UDP: uv__udp_close(loop, (uv_udp_t*) handle); return; case UV_POLL: uv__poll_close(loop, (uv_poll_t*) handle); return; case UV_TIMER: uv_timer_stop((uv_timer_t*)handle); uv__handle_closing(handle); uv__want_endgame(loop, handle); return; case UV_PREPARE: uv_prepare_stop((uv_prepare_t*)handle); uv__handle_closing(handle); uv__want_endgame(loop, handle); return; case UV_CHECK: uv_check_stop((uv_check_t*)handle); uv__handle_closing(handle); uv__want_endgame(loop, handle); return; case UV_IDLE: uv_idle_stop((uv_idle_t*)handle); uv__handle_closing(handle); uv__want_endgame(loop, handle); return; case UV_ASYNC: uv__async_close(loop, (uv_async_t*) handle); return; case UV_SIGNAL: uv__signal_close(loop, (uv_signal_t*) handle); return; case UV_PROCESS: uv__process_close(loop, (uv_process_t*) handle); return; case UV_FS_EVENT: uv__fs_event_close(loop, (uv_fs_event_t*) handle); return; case UV_FS_POLL: uv__fs_poll_close((uv_fs_poll_t*) handle); uv__handle_closing(handle); return; default: /* Not supported */ abort(); } } int uv_is_closing(const uv_handle_t* handle) { return !!(handle->flags & (UV_HANDLE_CLOSING | UV_HANDLE_CLOSED)); } uv_os_fd_t uv_get_osfhandle(int fd) { return uv__get_osfhandle(fd); } int uv_open_osfhandle(uv_os_fd_t os_fd) { return _open_osfhandle((intptr_t) os_fd, 0); } gevent-24.11.1/deps/libuv/src/win/internal.h000066400000000000000000000246411471441230600206060ustar00rootroot00000000000000/* Copyright Joyent, 
Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_INTERNAL_H_ #define UV_WIN_INTERNAL_H_ #include "uv.h" #include "../uv-common.h" #include "uv/tree.h" #include "winapi.h" #include "winsock.h" #ifdef _MSC_VER # define INLINE __inline # define UV_THREAD_LOCAL __declspec( thread ) #else # define INLINE inline # define UV_THREAD_LOCAL __thread #endif #ifdef _DEBUG extern UV_THREAD_LOCAL int uv__crt_assert_enabled; #define UV_BEGIN_DISABLE_CRT_ASSERT() \ { \ int uv__saved_crt_assert_enabled = uv__crt_assert_enabled; \ uv__crt_assert_enabled = FALSE; #define UV_END_DISABLE_CRT_ASSERT() \ uv__crt_assert_enabled = uv__saved_crt_assert_enabled; \ } #else #define UV_BEGIN_DISABLE_CRT_ASSERT() #define UV_END_DISABLE_CRT_ASSERT() #endif /* * TCP */ typedef enum { UV__IPC_SOCKET_XFER_NONE = 0, UV__IPC_SOCKET_XFER_TCP_CONNECTION, UV__IPC_SOCKET_XFER_TCP_SERVER } uv__ipc_socket_xfer_type_t; typedef struct { WSAPROTOCOL_INFOW socket_info; uint32_t delayed_error; } uv__ipc_socket_xfer_info_t; int uv__tcp_listen(uv_tcp_t* handle, int backlog, uv_connection_cb cb); int uv__tcp_accept(uv_tcp_t* server, uv_tcp_t* client); int uv__tcp_read_start(uv_tcp_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb); int uv__tcp_write(uv_loop_t* loop, uv_write_t* req, uv_tcp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb); int uv__tcp_try_write(uv_tcp_t* handle, const uv_buf_t bufs[], unsigned int nbufs); void uv__process_tcp_read_req(uv_loop_t* loop, uv_tcp_t* handle, uv_req_t* req); void uv__process_tcp_write_req(uv_loop_t* loop, uv_tcp_t* handle, uv_write_t* req); void uv__process_tcp_accept_req(uv_loop_t* loop, uv_tcp_t* handle, uv_req_t* req); void uv__process_tcp_connect_req(uv_loop_t* loop, uv_tcp_t* handle, uv_connect_t* req); void uv__process_tcp_shutdown_req(uv_loop_t* loop, uv_tcp_t* stream, uv_shutdown_t* req); void uv__tcp_close(uv_loop_t* loop, uv_tcp_t* tcp); void uv__tcp_endgame(uv_loop_t* loop, uv_tcp_t* handle); int uv__tcp_xfer_export(uv_tcp_t* handle, int pid, uv__ipc_socket_xfer_type_t* xfer_type, uv__ipc_socket_xfer_info_t* xfer_info); int uv__tcp_xfer_import(uv_tcp_t* tcp, uv__ipc_socket_xfer_type_t xfer_type, uv__ipc_socket_xfer_info_t* xfer_info); /* * UDP */ void uv__process_udp_recv_req(uv_loop_t* loop, uv_udp_t* handle, uv_req_t* req); void uv__process_udp_send_req(uv_loop_t* loop, uv_udp_t* handle, uv_udp_send_t* req); void uv__udp_close(uv_loop_t* loop, uv_udp_t* handle); void uv__udp_endgame(uv_loop_t* loop, 
uv_udp_t* handle); /* * Pipes */ int uv__create_stdio_pipe_pair(uv_loop_t* loop, uv_pipe_t* parent_pipe, HANDLE* child_pipe_ptr, unsigned int flags); int uv__pipe_listen(uv_pipe_t* handle, int backlog, uv_connection_cb cb); int uv__pipe_accept(uv_pipe_t* server, uv_stream_t* client); int uv__pipe_read_start(uv_pipe_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb); void uv__pipe_read_stop(uv_pipe_t* handle); int uv__pipe_write(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, const uv_buf_t bufs[], size_t nbufs, uv_stream_t* send_handle, uv_write_cb cb); void uv__pipe_shutdown(uv_loop_t* loop, uv_pipe_t* handle, uv_shutdown_t* req); void uv__process_pipe_read_req(uv_loop_t* loop, uv_pipe_t* handle, uv_req_t* req); void uv__process_pipe_write_req(uv_loop_t* loop, uv_pipe_t* handle, uv_write_t* req); void uv__process_pipe_accept_req(uv_loop_t* loop, uv_pipe_t* handle, uv_req_t* raw_req); void uv__process_pipe_connect_req(uv_loop_t* loop, uv_pipe_t* handle, uv_connect_t* req); void uv__process_pipe_shutdown_req(uv_loop_t* loop, uv_pipe_t* handle, uv_shutdown_t* req); void uv__pipe_close(uv_loop_t* loop, uv_pipe_t* handle); void uv__pipe_endgame(uv_loop_t* loop, uv_pipe_t* handle); /* * TTY */ void uv__console_init(void); int uv__tty_read_start(uv_tty_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb); int uv__tty_read_stop(uv_tty_t* handle); int uv__tty_write(uv_loop_t* loop, uv_write_t* req, uv_tty_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb); int uv__tty_try_write(uv_tty_t* handle, const uv_buf_t bufs[], unsigned int nbufs); void uv__tty_close(uv_tty_t* handle); void uv__process_tty_read_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* req); void uv__process_tty_write_req(uv_loop_t* loop, uv_tty_t* handle, uv_write_t* req); /* * uv__process_tty_accept_req() is a stub to keep DELEGATE_STREAM_REQ working * TODO: find a way to remove it */ void uv__process_tty_accept_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* raw_req); /* * uv__process_tty_connect_req() is a stub to keep DELEGATE_STREAM_REQ working * TODO: find a way to remove it */ void uv__process_tty_connect_req(uv_loop_t* loop, uv_tty_t* handle, uv_connect_t* req); void uv__process_tty_shutdown_req(uv_loop_t* loop, uv_tty_t* stream, uv_shutdown_t* req); void uv__tty_endgame(uv_loop_t* loop, uv_tty_t* handle); /* * Poll watchers */ void uv__process_poll_req(uv_loop_t* loop, uv_poll_t* handle, uv_req_t* req); int uv__poll_close(uv_loop_t* loop, uv_poll_t* handle); void uv__poll_endgame(uv_loop_t* loop, uv_poll_t* handle); /* * Loop watchers */ void uv__loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle); void uv__prepare_invoke(uv_loop_t* loop); void uv__check_invoke(uv_loop_t* loop); void uv__idle_invoke(uv_loop_t* loop); void uv__once_init(void); /* * Async watcher */ void uv__async_close(uv_loop_t* loop, uv_async_t* handle); void uv__async_endgame(uv_loop_t* loop, uv_async_t* handle); void uv__process_async_wakeup_req(uv_loop_t* loop, uv_async_t* handle, uv_req_t* req); /* * Signal watcher */ void uv__signals_init(void); int uv__signal_dispatch(int signum); void uv__signal_close(uv_loop_t* loop, uv_signal_t* handle); void uv__signal_endgame(uv_loop_t* loop, uv_signal_t* handle); void uv__process_signal_req(uv_loop_t* loop, uv_signal_t* handle, uv_req_t* req); /* * Spawn */ void uv__process_proc_exit(uv_loop_t* loop, uv_process_t* handle); void uv__process_close(uv_loop_t* loop, uv_process_t* handle); void uv__process_endgame(uv_loop_t* loop, uv_process_t* handle); /* * FS */ void 
uv__fs_init(void); /* * FS Event */ void uv__process_fs_event_req(uv_loop_t* loop, uv_req_t* req, uv_fs_event_t* handle); void uv__fs_event_close(uv_loop_t* loop, uv_fs_event_t* handle); void uv__fs_event_endgame(uv_loop_t* loop, uv_fs_event_t* handle); /* * Stat poller. */ void uv__fs_poll_endgame(uv_loop_t* loop, uv_fs_poll_t* handle); /* * Utilities. */ void uv__util_init(void); uint64_t uv__hrtime(unsigned int scale); __declspec(noreturn) void uv_fatal_error(const int errorno, const char* syscall); int uv__getpwuid_r(uv_passwd_t* pwd); int uv__convert_utf16_to_utf8(const WCHAR* utf16, int utf16len, char** utf8); int uv__convert_utf8_to_utf16(const char* utf8, int utf8len, WCHAR** utf16); typedef int (WINAPI *uv__peersockfunc)(SOCKET, struct sockaddr*, int*); int uv__getsockpeername(const uv_handle_t* handle, uv__peersockfunc func, struct sockaddr* name, int* namelen, int delayed_error); int uv__random_rtlgenrandom(void* buf, size_t buflen); /* * Process stdio handles. */ int uv__stdio_create(uv_loop_t* loop, const uv_process_options_t* options, BYTE** buffer_ptr); void uv__stdio_destroy(BYTE* buffer); void uv__stdio_noinherit(BYTE* buffer); int uv__stdio_verify(BYTE* buffer, WORD size); WORD uv__stdio_size(BYTE* buffer); HANDLE uv__stdio_handle(BYTE* buffer, int fd); /* * Winapi and ntapi utility functions */ void uv__winapi_init(void); /* * Winsock utility functions */ void uv__winsock_init(void); int uv__ntstatus_to_winsock_error(NTSTATUS status); BOOL uv__get_acceptex_function(SOCKET socket, LPFN_ACCEPTEX* target); BOOL uv__get_connectex_function(SOCKET socket, LPFN_CONNECTEX* target); int WSAAPI uv__wsarecv_workaround(SOCKET socket, WSABUF* buffers, DWORD buffer_count, DWORD* bytes, DWORD* flags, WSAOVERLAPPED *overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine); int WSAAPI uv__wsarecvfrom_workaround(SOCKET socket, WSABUF* buffers, DWORD buffer_count, DWORD* bytes, DWORD* flags, struct sockaddr* addr, int* addr_len, WSAOVERLAPPED *overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine); int WSAAPI uv__msafd_poll(SOCKET socket, AFD_POLL_INFO* info_in, AFD_POLL_INFO* info_out, OVERLAPPED* overlapped); /* Whether there are any non-IFS LSPs stacked on TCP */ extern int uv_tcp_non_ifs_lsp_ipv4; extern int uv_tcp_non_ifs_lsp_ipv6; /* Ip address used to bind to any port at any interface */ extern struct sockaddr_in uv_addr_ip4_any_; extern struct sockaddr_in6 uv_addr_ip6_any_; /* * Wake all loops with fake message */ void uv__wake_all_loops(void); /* * Init system wake-up detection */ void uv__init_detect_system_wakeup(void); #endif /* UV_WIN_INTERNAL_H_ */ gevent-24.11.1/deps/libuv/src/win/loop-watcher.c000066400000000000000000000174761471441230600214010ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include "uv.h" #include "internal.h" #include "handle-inl.h" void uv__loop_watcher_endgame(uv_loop_t* loop, uv_handle_t* handle) { if (handle->flags & UV_HANDLE_CLOSING) { assert(!(handle->flags & UV_HANDLE_CLOSED)); handle->flags |= UV_HANDLE_CLOSED; uv__handle_close(handle); } } #define UV_LOOP_WATCHER_DEFINE(name, NAME) \ int uv_##name##_init(uv_loop_t* loop, uv_##name##_t* handle) { \ uv__handle_init(loop, (uv_handle_t*) handle, UV_##NAME); \ \ return 0; \ } \ \ \ int uv_##name##_start(uv_##name##_t* handle, uv_##name##_cb cb) { \ uv_loop_t* loop = handle->loop; \ uv_##name##_t* old_head; \ \ assert(handle->type == UV_##NAME); \ \ if (uv__is_active(handle)) \ return 0; \ \ if (cb == NULL) \ return UV_EINVAL; \ \ old_head = loop->name##_handles; \ \ handle->name##_next = old_head; \ handle->name##_prev = NULL; \ \ if (old_head) { \ old_head->name##_prev = handle; \ } \ \ loop->name##_handles = handle; \ \ handle->name##_cb = cb; \ uv__handle_start(handle); \ \ return 0; \ } \ \ \ int uv_##name##_stop(uv_##name##_t* handle) { \ uv_loop_t* loop = handle->loop; \ \ assert(handle->type == UV_##NAME); \ \ if (!uv__is_active(handle)) \ return 0; \ \ /* Update loop head if needed */ \ if (loop->name##_handles == handle) { \ loop->name##_handles = handle->name##_next; \ } \ \ /* Update the iterator-next pointer of needed */ \ if (loop->next_##name##_handle == handle) { \ loop->next_##name##_handle = handle->name##_next; \ } \ \ if (handle->name##_prev) { \ handle->name##_prev->name##_next = handle->name##_next; \ } \ if (handle->name##_next) { \ handle->name##_next->name##_prev = handle->name##_prev; \ } \ \ uv__handle_stop(handle); \ \ return 0; \ } \ \ \ void uv__##name##_invoke(uv_loop_t* loop) { \ uv_##name##_t* handle; \ \ (loop)->next_##name##_handle = (loop)->name##_handles; \ \ while ((loop)->next_##name##_handle != NULL) { \ handle = (loop)->next_##name##_handle; \ (loop)->next_##name##_handle = handle->name##_next; \ \ handle->name##_cb(handle); \ } \ } UV_LOOP_WATCHER_DEFINE(prepare, PREPARE) UV_LOOP_WATCHER_DEFINE(check, CHECK) UV_LOOP_WATCHER_DEFINE(idle, IDLE) gevent-24.11.1/deps/libuv/src/win/pipe.c000066400000000000000000002316461471441230600177270ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #include #include "handle-inl.h" #include "internal.h" #include "req-inl.h" #include "stream-inl.h" #include "uv-common.h" #include "uv.h" #include #include /* A zero-size buffer for use by uv_pipe_read */ static char uv_zero_[] = ""; /* Null uv_buf_t */ static const uv_buf_t uv_null_buf_ = { 0, NULL }; /* The timeout that the pipe will wait for the remote end to write data when * the local ends wants to shut it down. */ static const int64_t eof_timeout = 50; /* ms */ static const int default_pending_pipe_instances = 4; /* Pipe prefix */ static char pipe_prefix[] = "\\\\?\\pipe"; static const int pipe_prefix_len = sizeof(pipe_prefix) - 1; /* IPC incoming xfer queue item. */ typedef struct { uv__ipc_socket_xfer_type_t xfer_type; uv__ipc_socket_xfer_info_t xfer_info; QUEUE member; } uv__ipc_xfer_queue_item_t; /* IPC frame header flags. */ /* clang-format off */ enum { UV__IPC_FRAME_HAS_DATA = 0x01, UV__IPC_FRAME_HAS_SOCKET_XFER = 0x02, UV__IPC_FRAME_XFER_IS_TCP_CONNECTION = 0x04, /* These are combinations of the flags above. */ UV__IPC_FRAME_XFER_FLAGS = 0x06, UV__IPC_FRAME_VALID_FLAGS = 0x07 }; /* clang-format on */ /* IPC frame header. */ typedef struct { uint32_t flags; uint32_t reserved1; /* Ignored. */ uint32_t data_length; /* Must be zero if there is no data. */ uint32_t reserved2; /* Must be zero. */ } uv__ipc_frame_header_t; /* To implement the IPC protocol correctly, these structures must have exactly * the right size. */ STATIC_ASSERT(sizeof(uv__ipc_frame_header_t) == 16); STATIC_ASSERT(sizeof(uv__ipc_socket_xfer_info_t) == 632); /* Coalesced write request. */ typedef struct { uv_write_t req; /* Internal heap-allocated write request. */ uv_write_t* user_req; /* Pointer to user-specified uv_write_t. 
*/ } uv__coalesced_write_t; static void eof_timer_init(uv_pipe_t* pipe); static void eof_timer_start(uv_pipe_t* pipe); static void eof_timer_stop(uv_pipe_t* pipe); static void eof_timer_cb(uv_timer_t* timer); static void eof_timer_destroy(uv_pipe_t* pipe); static void eof_timer_close_cb(uv_handle_t* handle); static void uv__unique_pipe_name(char* ptr, char* name, size_t size) { snprintf(name, size, "\\\\?\\pipe\\uv\\%p-%lu", ptr, GetCurrentProcessId()); } int uv_pipe_init(uv_loop_t* loop, uv_pipe_t* handle, int ipc) { uv__stream_init(loop, (uv_stream_t*)handle, UV_NAMED_PIPE); handle->reqs_pending = 0; handle->handle = INVALID_HANDLE_VALUE; handle->name = NULL; handle->pipe.conn.ipc_remote_pid = 0; handle->pipe.conn.ipc_data_frame.payload_remaining = 0; QUEUE_INIT(&handle->pipe.conn.ipc_xfer_queue); handle->pipe.conn.ipc_xfer_queue_length = 0; handle->ipc = ipc; handle->pipe.conn.non_overlapped_writes_tail = NULL; return 0; } static void uv__pipe_connection_init(uv_pipe_t* handle) { assert(!(handle->flags & UV_HANDLE_PIPESERVER)); uv__connection_init((uv_stream_t*) handle); handle->read_req.data = handle; handle->pipe.conn.eof_timer = NULL; } static HANDLE open_named_pipe(const WCHAR* name, DWORD* duplex_flags) { HANDLE pipeHandle; /* * Assume that we have a duplex pipe first, so attempt to * connect with GENERIC_READ | GENERIC_WRITE. */ pipeHandle = CreateFileW(name, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if (pipeHandle != INVALID_HANDLE_VALUE) { *duplex_flags = UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; return pipeHandle; } /* * If the pipe is not duplex CreateFileW fails with * ERROR_ACCESS_DENIED. In that case try to connect * as a read-only or write-only. */ if (GetLastError() == ERROR_ACCESS_DENIED) { pipeHandle = CreateFileW(name, GENERIC_READ | FILE_WRITE_ATTRIBUTES, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if (pipeHandle != INVALID_HANDLE_VALUE) { *duplex_flags = UV_HANDLE_READABLE; return pipeHandle; } } if (GetLastError() == ERROR_ACCESS_DENIED) { pipeHandle = CreateFileW(name, GENERIC_WRITE | FILE_READ_ATTRIBUTES, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if (pipeHandle != INVALID_HANDLE_VALUE) { *duplex_flags = UV_HANDLE_WRITABLE; return pipeHandle; } } return INVALID_HANDLE_VALUE; } static void close_pipe(uv_pipe_t* pipe) { assert(pipe->u.fd == -1 || pipe->u.fd > 2); if (pipe->u.fd == -1) CloseHandle(pipe->handle); else close(pipe->u.fd); pipe->u.fd = -1; pipe->handle = INVALID_HANDLE_VALUE; } static int uv__pipe_server( HANDLE* pipeHandle_ptr, DWORD access, char* name, size_t nameSize, char* random) { HANDLE pipeHandle; int err; for (;;) { uv__unique_pipe_name(random, name, nameSize); pipeHandle = CreateNamedPipeA(name, access | FILE_FLAG_FIRST_PIPE_INSTANCE, PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT, 1, 65536, 65536, 0, NULL); if (pipeHandle != INVALID_HANDLE_VALUE) { /* No name collisions. We're done. */ break; } err = GetLastError(); if (err != ERROR_PIPE_BUSY && err != ERROR_ACCESS_DENIED) { goto error; } /* Pipe name collision. Increment the random number and try again. 
*/ random++; } *pipeHandle_ptr = pipeHandle; return 0; error: if (pipeHandle != INVALID_HANDLE_VALUE) CloseHandle(pipeHandle); return err; } static int uv__create_pipe_pair( HANDLE* server_pipe_ptr, HANDLE* client_pipe_ptr, unsigned int server_flags, unsigned int client_flags, int inherit_client, char* random) { /* allowed flags are: UV_READABLE_PIPE | UV_WRITABLE_PIPE | UV_NONBLOCK_PIPE */ char pipe_name[64]; SECURITY_ATTRIBUTES sa; DWORD server_access; DWORD client_access; HANDLE server_pipe; HANDLE client_pipe; int err; server_pipe = INVALID_HANDLE_VALUE; client_pipe = INVALID_HANDLE_VALUE; server_access = 0; if (server_flags & UV_READABLE_PIPE) server_access |= PIPE_ACCESS_INBOUND; if (server_flags & UV_WRITABLE_PIPE) server_access |= PIPE_ACCESS_OUTBOUND; if (server_flags & UV_NONBLOCK_PIPE) server_access |= FILE_FLAG_OVERLAPPED; server_access |= WRITE_DAC; client_access = 0; if (client_flags & UV_READABLE_PIPE) client_access |= GENERIC_READ; else client_access |= FILE_READ_ATTRIBUTES; if (client_flags & UV_WRITABLE_PIPE) client_access |= GENERIC_WRITE; else client_access |= FILE_WRITE_ATTRIBUTES; client_access |= WRITE_DAC; /* Create server pipe handle. */ err = uv__pipe_server(&server_pipe, server_access, pipe_name, sizeof(pipe_name), random); if (err) goto error; /* Create client pipe handle. */ sa.nLength = sizeof sa; sa.lpSecurityDescriptor = NULL; sa.bInheritHandle = inherit_client; client_pipe = CreateFileA(pipe_name, client_access, 0, &sa, OPEN_EXISTING, (client_flags & UV_NONBLOCK_PIPE) ? FILE_FLAG_OVERLAPPED : 0, NULL); if (client_pipe == INVALID_HANDLE_VALUE) { err = GetLastError(); goto error; } #ifndef NDEBUG /* Validate that the pipe was opened in the right mode. */ { DWORD mode; BOOL r; r = GetNamedPipeHandleState(client_pipe, &mode, NULL, NULL, NULL, NULL, 0); if (r == TRUE) { assert(mode == (PIPE_READMODE_BYTE | PIPE_WAIT)); } else { fprintf(stderr, "libuv assertion failure: GetNamedPipeHandleState failed\n"); } } #endif /* Do a blocking ConnectNamedPipe. This should not block because we have * both ends of the pipe created. */ if (!ConnectNamedPipe(server_pipe, NULL)) { if (GetLastError() != ERROR_PIPE_CONNECTED) { err = GetLastError(); goto error; } } *client_pipe_ptr = client_pipe; *server_pipe_ptr = server_pipe; return 0; error: if (server_pipe != INVALID_HANDLE_VALUE) CloseHandle(server_pipe); if (client_pipe != INVALID_HANDLE_VALUE) CloseHandle(client_pipe); return err; } int uv_pipe(uv_file fds[2], int read_flags, int write_flags) { uv_file temp[2]; int err; HANDLE readh; HANDLE writeh; /* Make the server side the inbound (read) end, */ /* so that both ends will have FILE_READ_ATTRIBUTES permission. */ /* TODO: better source of local randomness than &fds? */ read_flags |= UV_READABLE_PIPE; write_flags |= UV_WRITABLE_PIPE; err = uv__create_pipe_pair(&readh, &writeh, read_flags, write_flags, 0, (char*) &fds[0]); if (err != 0) return err; temp[0] = _open_osfhandle((intptr_t) readh, 0); if (temp[0] == -1) { if (errno == UV_EMFILE) err = UV_EMFILE; else err = UV_UNKNOWN; CloseHandle(readh); CloseHandle(writeh); return err; } temp[1] = _open_osfhandle((intptr_t) writeh, 0); if (temp[1] == -1) { if (errno == UV_EMFILE) err = UV_EMFILE; else err = UV_UNKNOWN; _close(temp[0]); CloseHandle(writeh); return err; } fds[0] = temp[0]; fds[1] = temp[1]; return 0; } int uv__create_stdio_pipe_pair(uv_loop_t* loop, uv_pipe_t* parent_pipe, HANDLE* child_pipe_ptr, unsigned int flags) { /* The parent_pipe is always the server_pipe and kept by libuv. 
* The child_pipe is always the client_pipe and is passed to the child. * The flags are specified with respect to their usage in the child. */ HANDLE server_pipe; HANDLE client_pipe; unsigned int server_flags; unsigned int client_flags; int err; uv__pipe_connection_init(parent_pipe); server_pipe = INVALID_HANDLE_VALUE; client_pipe = INVALID_HANDLE_VALUE; server_flags = 0; client_flags = 0; if (flags & UV_READABLE_PIPE) { /* The server needs inbound (read) access too, otherwise CreateNamedPipe() * won't give us the FILE_READ_ATTRIBUTES permission. We need that to probe * the state of the write buffer when we're trying to shutdown the pipe. */ server_flags |= UV_READABLE_PIPE | UV_WRITABLE_PIPE; client_flags |= UV_READABLE_PIPE; } if (flags & UV_WRITABLE_PIPE) { server_flags |= UV_READABLE_PIPE; client_flags |= UV_WRITABLE_PIPE; } server_flags |= UV_NONBLOCK_PIPE; if (flags & UV_NONBLOCK_PIPE || parent_pipe->ipc) { client_flags |= UV_NONBLOCK_PIPE; } err = uv__create_pipe_pair(&server_pipe, &client_pipe, server_flags, client_flags, 1, (char*) server_pipe); if (err) goto error; if (CreateIoCompletionPort(server_pipe, loop->iocp, (ULONG_PTR) parent_pipe, 0) == NULL) { err = GetLastError(); goto error; } parent_pipe->handle = server_pipe; *child_pipe_ptr = client_pipe; /* The server end is now readable and/or writable. */ if (flags & UV_READABLE_PIPE) parent_pipe->flags |= UV_HANDLE_WRITABLE; if (flags & UV_WRITABLE_PIPE) parent_pipe->flags |= UV_HANDLE_READABLE; return 0; error: if (server_pipe != INVALID_HANDLE_VALUE) CloseHandle(server_pipe); if (client_pipe != INVALID_HANDLE_VALUE) CloseHandle(client_pipe); return err; } static int uv__set_pipe_handle(uv_loop_t* loop, uv_pipe_t* handle, HANDLE pipeHandle, int fd, DWORD duplex_flags) { NTSTATUS nt_status; IO_STATUS_BLOCK io_status; FILE_MODE_INFORMATION mode_info; DWORD mode = PIPE_READMODE_BYTE | PIPE_WAIT; DWORD current_mode = 0; DWORD err = 0; assert(handle->flags & UV_HANDLE_CONNECTION); assert(!(handle->flags & UV_HANDLE_PIPESERVER)); if (handle->flags & UV_HANDLE_CLOSING) return UV_EINVAL; if (handle->handle != INVALID_HANDLE_VALUE) return UV_EBUSY; if (!SetNamedPipeHandleState(pipeHandle, &mode, NULL, NULL)) { err = GetLastError(); if (err == ERROR_ACCESS_DENIED) { /* * SetNamedPipeHandleState can fail if the handle doesn't have either * GENERIC_WRITE or FILE_WRITE_ATTRIBUTES. * But if the handle already has the desired wait and blocking modes * we can continue. */ if (!GetNamedPipeHandleState(pipeHandle, ¤t_mode, NULL, NULL, NULL, NULL, 0)) { return uv_translate_sys_error(GetLastError()); } else if (current_mode & PIPE_NOWAIT) { return UV_EACCES; } } else { /* If this returns ERROR_INVALID_PARAMETER we probably opened * something that is not a pipe. */ if (err == ERROR_INVALID_PARAMETER) { return UV_ENOTSOCK; } return uv_translate_sys_error(err); } } /* Check if the pipe was created with FILE_FLAG_OVERLAPPED. */ nt_status = pNtQueryInformationFile(pipeHandle, &io_status, &mode_info, sizeof(mode_info), FileModeInformation); if (nt_status != STATUS_SUCCESS) { return uv_translate_sys_error(err); } if (mode_info.Mode & FILE_SYNCHRONOUS_IO_ALERT || mode_info.Mode & FILE_SYNCHRONOUS_IO_NONALERT) { /* Non-overlapped pipe. */ handle->flags |= UV_HANDLE_NON_OVERLAPPED_PIPE; handle->pipe.conn.readfile_thread_handle = NULL; InitializeCriticalSection(&handle->pipe.conn.readfile_thread_lock); } else { /* Overlapped pipe. Try to associate with IOCP. 
*/ if (CreateIoCompletionPort(pipeHandle, loop->iocp, (ULONG_PTR) handle, 0) == NULL) { handle->flags |= UV_HANDLE_EMULATE_IOCP; } } handle->handle = pipeHandle; handle->u.fd = fd; handle->flags |= duplex_flags; return 0; } static int pipe_alloc_accept(uv_loop_t* loop, uv_pipe_t* handle, uv_pipe_accept_t* req, BOOL firstInstance) { assert(req->pipeHandle == INVALID_HANDLE_VALUE); req->pipeHandle = CreateNamedPipeW(handle->name, PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED | WRITE_DAC | (firstInstance ? FILE_FLAG_FIRST_PIPE_INSTANCE : 0), PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT, PIPE_UNLIMITED_INSTANCES, 65536, 65536, 0, NULL); if (req->pipeHandle == INVALID_HANDLE_VALUE) { return 0; } /* Associate it with IOCP so we can get events. */ if (CreateIoCompletionPort(req->pipeHandle, loop->iocp, (ULONG_PTR) handle, 0) == NULL) { uv_fatal_error(GetLastError(), "CreateIoCompletionPort"); } /* Stash a handle in the server object for use from places such as * getsockname and chmod. As we transfer ownership of these to client * objects, we'll allocate new ones here. */ handle->handle = req->pipeHandle; return 1; } static DWORD WINAPI pipe_shutdown_thread_proc(void* parameter) { uv_loop_t* loop; uv_pipe_t* handle; uv_shutdown_t* req; req = (uv_shutdown_t*) parameter; assert(req); handle = (uv_pipe_t*) req->handle; assert(handle); loop = handle->loop; assert(loop); FlushFileBuffers(handle->handle); /* Post completed */ POST_COMPLETION_FOR_REQ(loop, req); return 0; } void uv__pipe_shutdown(uv_loop_t* loop, uv_pipe_t* handle, uv_shutdown_t *req) { DWORD result; NTSTATUS nt_status; IO_STATUS_BLOCK io_status; FILE_PIPE_LOCAL_INFORMATION pipe_info; assert(handle->flags & UV_HANDLE_CONNECTION); assert(req != NULL); assert(handle->stream.conn.write_reqs_pending == 0); SET_REQ_SUCCESS(req); if (handle->flags & UV_HANDLE_CLOSING) { uv__insert_pending_req(loop, (uv_req_t*) req); return; } /* Try to avoid flushing the pipe buffer in the thread pool. */ nt_status = pNtQueryInformationFile(handle->handle, &io_status, &pipe_info, sizeof pipe_info, FilePipeLocalInformation); if (nt_status != STATUS_SUCCESS) { SET_REQ_ERROR(req, pRtlNtStatusToDosError(nt_status)); handle->flags |= UV_HANDLE_WRITABLE; /* Questionable. */ uv__insert_pending_req(loop, (uv_req_t*) req); return; } if (pipe_info.OutboundQuota == pipe_info.WriteQuotaAvailable) { /* Short-circuit, no need to call FlushFileBuffers: * all writes have been read. */ uv__insert_pending_req(loop, (uv_req_t*) req); return; } /* Run FlushFileBuffers in the thread pool. */ result = QueueUserWorkItem(pipe_shutdown_thread_proc, req, WT_EXECUTELONGFUNCTION); if (!result) { SET_REQ_ERROR(req, GetLastError()); handle->flags |= UV_HANDLE_WRITABLE; /* Questionable. 
*/ uv__insert_pending_req(loop, (uv_req_t*) req); return; } } void uv__pipe_endgame(uv_loop_t* loop, uv_pipe_t* handle) { uv__ipc_xfer_queue_item_t* xfer_queue_item; assert(handle->reqs_pending == 0); assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); if (handle->flags & UV_HANDLE_CONNECTION) { /* Free pending sockets */ while (!QUEUE_EMPTY(&handle->pipe.conn.ipc_xfer_queue)) { QUEUE* q; SOCKET socket; q = QUEUE_HEAD(&handle->pipe.conn.ipc_xfer_queue); QUEUE_REMOVE(q); xfer_queue_item = QUEUE_DATA(q, uv__ipc_xfer_queue_item_t, member); /* Materialize socket and close it */ socket = WSASocketW(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO, &xfer_queue_item->xfer_info.socket_info, 0, WSA_FLAG_OVERLAPPED); uv__free(xfer_queue_item); if (socket != INVALID_SOCKET) closesocket(socket); } handle->pipe.conn.ipc_xfer_queue_length = 0; if (handle->flags & UV_HANDLE_EMULATE_IOCP) { if (handle->read_req.wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(handle->read_req.wait_handle); handle->read_req.wait_handle = INVALID_HANDLE_VALUE; } if (handle->read_req.event_handle != NULL) { CloseHandle(handle->read_req.event_handle); handle->read_req.event_handle = NULL; } } if (handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) DeleteCriticalSection(&handle->pipe.conn.readfile_thread_lock); } if (handle->flags & UV_HANDLE_PIPESERVER) { assert(handle->pipe.serv.accept_reqs); uv__free(handle->pipe.serv.accept_reqs); handle->pipe.serv.accept_reqs = NULL; } uv__handle_close(handle); } void uv_pipe_pending_instances(uv_pipe_t* handle, int count) { if (handle->flags & UV_HANDLE_BOUND) return; handle->pipe.serv.pending_instances = count; handle->flags |= UV_HANDLE_PIPESERVER; } /* Creates a pipe server. */ int uv_pipe_bind(uv_pipe_t* handle, const char* name) { uv_loop_t* loop = handle->loop; int i, err, nameSize; uv_pipe_accept_t* req; if (handle->flags & UV_HANDLE_BOUND) { return UV_EINVAL; } if (!name) { return UV_EINVAL; } if (uv__is_closing(handle)) { return UV_EINVAL; } if (!(handle->flags & UV_HANDLE_PIPESERVER)) { handle->pipe.serv.pending_instances = default_pending_pipe_instances; } handle->pipe.serv.accept_reqs = (uv_pipe_accept_t*) uv__malloc(sizeof(uv_pipe_accept_t) * handle->pipe.serv.pending_instances); if (!handle->pipe.serv.accept_reqs) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } for (i = 0; i < handle->pipe.serv.pending_instances; i++) { req = &handle->pipe.serv.accept_reqs[i]; UV_REQ_INIT(req, UV_ACCEPT); req->data = handle; req->pipeHandle = INVALID_HANDLE_VALUE; req->next_pending = NULL; } /* Convert name to UTF16. */ nameSize = MultiByteToWideChar(CP_UTF8, 0, name, -1, NULL, 0) * sizeof(WCHAR); handle->name = uv__malloc(nameSize); if (!handle->name) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } if (!MultiByteToWideChar(CP_UTF8, 0, name, -1, handle->name, nameSize / sizeof(WCHAR))) { err = GetLastError(); goto error; } /* * Attempt to create the first pipe with FILE_FLAG_FIRST_PIPE_INSTANCE. * If this fails then there's already a pipe server for the given pipe name. */ if (!pipe_alloc_accept(loop, handle, &handle->pipe.serv.accept_reqs[0], TRUE)) { err = GetLastError(); if (err == ERROR_ACCESS_DENIED) { err = WSAEADDRINUSE; /* Translates to UV_EADDRINUSE. */ } else if (err == ERROR_PATH_NOT_FOUND || err == ERROR_INVALID_NAME) { err = WSAEACCES; /* Translates to UV_EACCES. 
*/ } goto error; } handle->pipe.serv.pending_accepts = NULL; handle->flags |= UV_HANDLE_PIPESERVER; handle->flags |= UV_HANDLE_BOUND; return 0; error: if (handle->name) { uv__free(handle->name); handle->name = NULL; } return uv_translate_sys_error(err); } static DWORD WINAPI pipe_connect_thread_proc(void* parameter) { uv_loop_t* loop; uv_pipe_t* handle; uv_connect_t* req; HANDLE pipeHandle = INVALID_HANDLE_VALUE; DWORD duplex_flags; req = (uv_connect_t*) parameter; assert(req); handle = (uv_pipe_t*) req->handle; assert(handle); loop = handle->loop; assert(loop); /* We're here because CreateFile on a pipe returned ERROR_PIPE_BUSY. We wait * up to 30 seconds for the pipe to become available with WaitNamedPipe. */ while (WaitNamedPipeW(handle->name, 30000)) { /* The pipe is now available, try to connect. */ pipeHandle = open_named_pipe(handle->name, &duplex_flags); if (pipeHandle != INVALID_HANDLE_VALUE) break; SwitchToThread(); } if (pipeHandle != INVALID_HANDLE_VALUE) { SET_REQ_SUCCESS(req); req->u.connect.pipeHandle = pipeHandle; req->u.connect.duplex_flags = duplex_flags; } else { SET_REQ_ERROR(req, GetLastError()); } /* Post completed */ POST_COMPLETION_FOR_REQ(loop, req); return 0; } void uv_pipe_connect(uv_connect_t* req, uv_pipe_t* handle, const char* name, uv_connect_cb cb) { uv_loop_t* loop = handle->loop; int err, nameSize; HANDLE pipeHandle = INVALID_HANDLE_VALUE; DWORD duplex_flags; UV_REQ_INIT(req, UV_CONNECT); req->handle = (uv_stream_t*) handle; req->cb = cb; req->u.connect.pipeHandle = INVALID_HANDLE_VALUE; req->u.connect.duplex_flags = 0; if (handle->flags & UV_HANDLE_PIPESERVER) { err = ERROR_INVALID_PARAMETER; goto error; } if (handle->flags & UV_HANDLE_CONNECTION) { err = ERROR_PIPE_BUSY; goto error; } uv__pipe_connection_init(handle); /* Convert name to UTF16. */ nameSize = MultiByteToWideChar(CP_UTF8, 0, name, -1, NULL, 0) * sizeof(WCHAR); handle->name = uv__malloc(nameSize); if (!handle->name) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } if (!MultiByteToWideChar(CP_UTF8, 0, name, -1, handle->name, nameSize / sizeof(WCHAR))) { err = GetLastError(); goto error; } pipeHandle = open_named_pipe(handle->name, &duplex_flags); if (pipeHandle == INVALID_HANDLE_VALUE) { if (GetLastError() == ERROR_PIPE_BUSY) { /* Wait for the server to make a pipe instance available. */ if (!QueueUserWorkItem(&pipe_connect_thread_proc, req, WT_EXECUTELONGFUNCTION)) { err = GetLastError(); goto error; } REGISTER_HANDLE_REQ(loop, handle, req); handle->reqs_pending++; return; } err = GetLastError(); goto error; } req->u.connect.pipeHandle = pipeHandle; req->u.connect.duplex_flags = duplex_flags; SET_REQ_SUCCESS(req); uv__insert_pending_req(loop, (uv_req_t*) req); handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); return; error: if (handle->name) { uv__free(handle->name); handle->name = NULL; } if (pipeHandle != INVALID_HANDLE_VALUE) CloseHandle(pipeHandle); /* Make this req pending reporting an error. */ SET_REQ_ERROR(req, err); uv__insert_pending_req(loop, (uv_req_t*) req); handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); return; } void uv__pipe_interrupt_read(uv_pipe_t* handle) { BOOL r; if (!(handle->flags & UV_HANDLE_READ_PENDING)) return; /* No pending reads. */ if (handle->flags & UV_HANDLE_CANCELLATION_PENDING) return; /* Already cancelled. */ if (handle->handle == INVALID_HANDLE_VALUE) return; /* Pipe handle closed. */ if (!(handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE)) { /* Cancel asynchronous read. 
*/ r = CancelIoEx(handle->handle, &handle->read_req.u.io.overlapped); assert(r || GetLastError() == ERROR_NOT_FOUND); (void) r; } else { /* Cancel synchronous read (which is happening in the thread pool). */ HANDLE thread; volatile HANDLE* thread_ptr = &handle->pipe.conn.readfile_thread_handle; EnterCriticalSection(&handle->pipe.conn.readfile_thread_lock); thread = *thread_ptr; if (thread == NULL) { /* The thread pool thread has not yet reached the point of blocking, we * can pre-empt it by setting thread_handle to INVALID_HANDLE_VALUE. */ *thread_ptr = INVALID_HANDLE_VALUE; } else { /* Spin until the thread has acknowledged (by setting the thread to * INVALID_HANDLE_VALUE) that it is past the point of blocking. */ while (thread != INVALID_HANDLE_VALUE) { r = CancelSynchronousIo(thread); assert(r || GetLastError() == ERROR_NOT_FOUND); SwitchToThread(); /* Yield thread. */ thread = *thread_ptr; } } LeaveCriticalSection(&handle->pipe.conn.readfile_thread_lock); } /* Set flag to indicate that read has been cancelled. */ handle->flags |= UV_HANDLE_CANCELLATION_PENDING; } void uv__pipe_read_stop(uv_pipe_t* handle) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(handle->loop, handle); uv__pipe_interrupt_read(handle); } /* Cleans up uv_pipe_t (server or connection) and all resources associated with * it. */ void uv__pipe_close(uv_loop_t* loop, uv_pipe_t* handle) { int i; HANDLE pipeHandle; if (handle->flags & UV_HANDLE_READING) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); } if (handle->flags & UV_HANDLE_LISTENING) { handle->flags &= ~UV_HANDLE_LISTENING; DECREASE_ACTIVE_COUNT(loop, handle); } handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); uv__handle_closing(handle); uv__pipe_interrupt_read(handle); if (handle->name) { uv__free(handle->name); handle->name = NULL; } if (handle->flags & UV_HANDLE_PIPESERVER) { for (i = 0; i < handle->pipe.serv.pending_instances; i++) { pipeHandle = handle->pipe.serv.accept_reqs[i].pipeHandle; if (pipeHandle != INVALID_HANDLE_VALUE) { CloseHandle(pipeHandle); handle->pipe.serv.accept_reqs[i].pipeHandle = INVALID_HANDLE_VALUE; } } handle->handle = INVALID_HANDLE_VALUE; } if (handle->flags & UV_HANDLE_CONNECTION) { eof_timer_destroy(handle); } if ((handle->flags & UV_HANDLE_CONNECTION) && handle->handle != INVALID_HANDLE_VALUE) { /* This will eventually destroy the write queue for us too. */ close_pipe(handle); } if (handle->reqs_pending == 0) uv__want_endgame(loop, (uv_handle_t*) handle); } static void uv__pipe_queue_accept(uv_loop_t* loop, uv_pipe_t* handle, uv_pipe_accept_t* req, BOOL firstInstance) { assert(handle->flags & UV_HANDLE_LISTENING); if (!firstInstance && !pipe_alloc_accept(loop, handle, req, FALSE)) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*) req); handle->reqs_pending++; return; } assert(req->pipeHandle != INVALID_HANDLE_VALUE); /* Prepare the overlapped structure. */ memset(&(req->u.io.overlapped), 0, sizeof(req->u.io.overlapped)); if (!ConnectNamedPipe(req->pipeHandle, &req->u.io.overlapped) && GetLastError() != ERROR_IO_PENDING) { if (GetLastError() == ERROR_PIPE_CONNECTED) { SET_REQ_SUCCESS(req); } else { CloseHandle(req->pipeHandle); req->pipeHandle = INVALID_HANDLE_VALUE; /* Make this req pending reporting an error. 
*/ SET_REQ_ERROR(req, GetLastError()); } uv__insert_pending_req(loop, (uv_req_t*) req); handle->reqs_pending++; return; } /* Wait for completion via IOCP */ handle->reqs_pending++; } int uv__pipe_accept(uv_pipe_t* server, uv_stream_t* client) { uv_loop_t* loop = server->loop; uv_pipe_t* pipe_client; uv_pipe_accept_t* req; QUEUE* q; uv__ipc_xfer_queue_item_t* item; int err; if (server->ipc) { if (QUEUE_EMPTY(&server->pipe.conn.ipc_xfer_queue)) { /* No valid pending sockets. */ return WSAEWOULDBLOCK; } q = QUEUE_HEAD(&server->pipe.conn.ipc_xfer_queue); QUEUE_REMOVE(q); server->pipe.conn.ipc_xfer_queue_length--; item = QUEUE_DATA(q, uv__ipc_xfer_queue_item_t, member); err = uv__tcp_xfer_import( (uv_tcp_t*) client, item->xfer_type, &item->xfer_info); if (err != 0) return err; uv__free(item); } else { pipe_client = (uv_pipe_t*) client; uv__pipe_connection_init(pipe_client); /* Find a connection instance that has been connected, but not yet * accepted. */ req = server->pipe.serv.pending_accepts; if (!req) { /* No valid connections found, so we error out. */ return WSAEWOULDBLOCK; } /* Initialize the client handle and copy the pipeHandle to the client */ pipe_client->handle = req->pipeHandle; pipe_client->flags |= UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; /* Prepare the req to pick up a new connection */ server->pipe.serv.pending_accepts = req->next_pending; req->next_pending = NULL; req->pipeHandle = INVALID_HANDLE_VALUE; server->handle = INVALID_HANDLE_VALUE; if (!(server->flags & UV_HANDLE_CLOSING)) { uv__pipe_queue_accept(loop, server, req, FALSE); } } return 0; } /* Starts listening for connections for the given pipe. */ int uv__pipe_listen(uv_pipe_t* handle, int backlog, uv_connection_cb cb) { uv_loop_t* loop = handle->loop; int i; if (handle->flags & UV_HANDLE_LISTENING) { handle->stream.serv.connection_cb = cb; } if (!(handle->flags & UV_HANDLE_BOUND)) { return WSAEINVAL; } if (handle->flags & UV_HANDLE_READING) { return WSAEISCONN; } if (!(handle->flags & UV_HANDLE_PIPESERVER)) { return ERROR_NOT_SUPPORTED; } if (handle->ipc) { return WSAEINVAL; } handle->flags |= UV_HANDLE_LISTENING; INCREASE_ACTIVE_COUNT(loop, handle); handle->stream.serv.connection_cb = cb; /* First pipe handle should have already been created in uv_pipe_bind */ assert(handle->pipe.serv.accept_reqs[0].pipeHandle != INVALID_HANDLE_VALUE); for (i = 0; i < handle->pipe.serv.pending_instances; i++) { uv__pipe_queue_accept(loop, handle, &handle->pipe.serv.accept_reqs[i], i == 0); } return 0; } static DWORD WINAPI uv_pipe_zero_readfile_thread_proc(void* arg) { uv_read_t* req = (uv_read_t*) arg; uv_pipe_t* handle = (uv_pipe_t*) req->data; uv_loop_t* loop = handle->loop; volatile HANDLE* thread_ptr = &handle->pipe.conn.readfile_thread_handle; CRITICAL_SECTION* lock = &handle->pipe.conn.readfile_thread_lock; HANDLE thread; DWORD bytes; DWORD err; assert(req->type == UV_READ); assert(handle->type == UV_NAMED_PIPE); err = 0; /* Create a handle to the current thread. */ if (!DuplicateHandle(GetCurrentProcess(), GetCurrentThread(), GetCurrentProcess(), &thread, 0, FALSE, DUPLICATE_SAME_ACCESS)) { err = GetLastError(); goto out1; } /* The lock needs to be held when thread handle is modified. */ EnterCriticalSection(lock); if (*thread_ptr == INVALID_HANDLE_VALUE) { /* uv__pipe_interrupt_read() cancelled reading before we got here. */ err = ERROR_OPERATION_ABORTED; } else { /* Let main thread know which worker thread is doing the blocking read. 
*/ assert(*thread_ptr == NULL); *thread_ptr = thread; } LeaveCriticalSection(lock); if (err) goto out2; /* Block the thread until data is available on the pipe, or the read is * cancelled. */ if (!ReadFile(handle->handle, &uv_zero_, 0, &bytes, NULL)) err = GetLastError(); /* Let the main thread know the worker is past the point of blocking. */ assert(thread == *thread_ptr); *thread_ptr = INVALID_HANDLE_VALUE; /* Briefly acquire the mutex. Since the main thread holds the lock while it * is spinning trying to cancel this thread's I/O, we will block here until * it stops doing that. */ EnterCriticalSection(lock); LeaveCriticalSection(lock); out2: /* Close the handle to the current thread. */ CloseHandle(thread); out1: /* Set request status and post a completion record to the IOCP. */ if (err) SET_REQ_ERROR(req, err); else SET_REQ_SUCCESS(req); POST_COMPLETION_FOR_REQ(loop, req); return 0; } static DWORD WINAPI uv_pipe_writefile_thread_proc(void* parameter) { int result; DWORD bytes; uv_write_t* req = (uv_write_t*) parameter; uv_pipe_t* handle = (uv_pipe_t*) req->handle; uv_loop_t* loop = handle->loop; assert(req != NULL); assert(req->type == UV_WRITE); assert(handle->type == UV_NAMED_PIPE); result = WriteFile(handle->handle, req->write_buffer.base, req->write_buffer.len, &bytes, NULL); if (!result) { SET_REQ_ERROR(req, GetLastError()); } POST_COMPLETION_FOR_REQ(loop, req); return 0; } static void CALLBACK post_completion_read_wait(void* context, BOOLEAN timed_out) { uv_read_t* req; uv_tcp_t* handle; req = (uv_read_t*) context; assert(req != NULL); handle = (uv_tcp_t*)req->data; assert(handle != NULL); assert(!timed_out); if (!PostQueuedCompletionStatus(handle->loop->iocp, req->u.io.overlapped.InternalHigh, 0, &req->u.io.overlapped)) { uv_fatal_error(GetLastError(), "PostQueuedCompletionStatus"); } } static void CALLBACK post_completion_write_wait(void* context, BOOLEAN timed_out) { uv_write_t* req; uv_tcp_t* handle; req = (uv_write_t*) context; assert(req != NULL); handle = (uv_tcp_t*)req->handle; assert(handle != NULL); assert(!timed_out); if (!PostQueuedCompletionStatus(handle->loop->iocp, req->u.io.overlapped.InternalHigh, 0, &req->u.io.overlapped)) { uv_fatal_error(GetLastError(), "PostQueuedCompletionStatus"); } } static void uv__pipe_queue_read(uv_loop_t* loop, uv_pipe_t* handle) { uv_read_t* req; int result; assert(handle->flags & UV_HANDLE_READING); assert(!(handle->flags & UV_HANDLE_READ_PENDING)); assert(handle->handle != INVALID_HANDLE_VALUE); req = &handle->read_req; if (handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) { handle->pipe.conn.readfile_thread_handle = NULL; /* Reset cancellation. */ if (!QueueUserWorkItem(&uv_pipe_zero_readfile_thread_proc, req, WT_EXECUTELONGFUNCTION)) { /* Make this req pending reporting an error. */ SET_REQ_ERROR(req, GetLastError()); goto error; } } else { memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { assert(req->event_handle != NULL); req->u.io.overlapped.hEvent = (HANDLE) ((uintptr_t) req->event_handle | 1); } /* Do 0-read */ result = ReadFile(handle->handle, &uv_zero_, 0, NULL, &req->u.io.overlapped); if (!result && GetLastError() != ERROR_IO_PENDING) { /* Make this req pending reporting an error. 
*/ SET_REQ_ERROR(req, GetLastError()); goto error; } if (handle->flags & UV_HANDLE_EMULATE_IOCP) { if (req->wait_handle == INVALID_HANDLE_VALUE) { if (!RegisterWaitForSingleObject(&req->wait_handle, req->event_handle, post_completion_read_wait, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD)) { SET_REQ_ERROR(req, GetLastError()); goto error; } } } } /* Start the eof timer if there is one */ eof_timer_start(handle); handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; return; error: uv__insert_pending_req(loop, (uv_req_t*)req); handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; } int uv__pipe_read_start(uv_pipe_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { uv_loop_t* loop = handle->loop; handle->flags |= UV_HANDLE_READING; INCREASE_ACTIVE_COUNT(loop, handle); handle->read_cb = read_cb; handle->alloc_cb = alloc_cb; /* If reading was stopped and then started again, there could still be a read * request pending. */ if (!(handle->flags & UV_HANDLE_READ_PENDING)) { if (handle->flags & UV_HANDLE_EMULATE_IOCP && handle->read_req.event_handle == NULL) { handle->read_req.event_handle = CreateEvent(NULL, 0, 0, NULL); if (handle->read_req.event_handle == NULL) { uv_fatal_error(GetLastError(), "CreateEvent"); } } uv__pipe_queue_read(loop, handle); } return 0; } static void uv__insert_non_overlapped_write_req(uv_pipe_t* handle, uv_write_t* req) { req->next_req = NULL; if (handle->pipe.conn.non_overlapped_writes_tail) { req->next_req = handle->pipe.conn.non_overlapped_writes_tail->next_req; handle->pipe.conn.non_overlapped_writes_tail->next_req = (uv_req_t*)req; handle->pipe.conn.non_overlapped_writes_tail = req; } else { req->next_req = (uv_req_t*)req; handle->pipe.conn.non_overlapped_writes_tail = req; } } static uv_write_t* uv_remove_non_overlapped_write_req(uv_pipe_t* handle) { uv_write_t* req; if (handle->pipe.conn.non_overlapped_writes_tail) { req = (uv_write_t*)handle->pipe.conn.non_overlapped_writes_tail->next_req; if (req == handle->pipe.conn.non_overlapped_writes_tail) { handle->pipe.conn.non_overlapped_writes_tail = NULL; } else { handle->pipe.conn.non_overlapped_writes_tail->next_req = req->next_req; } return req; } else { /* queue empty */ return NULL; } } static void uv__queue_non_overlapped_write(uv_pipe_t* handle) { uv_write_t* req = uv_remove_non_overlapped_write_req(handle); if (req) { if (!QueueUserWorkItem(&uv_pipe_writefile_thread_proc, req, WT_EXECUTELONGFUNCTION)) { uv_fatal_error(GetLastError(), "QueueUserWorkItem"); } } } static int uv__build_coalesced_write_req(uv_write_t* user_req, const uv_buf_t bufs[], size_t nbufs, uv_write_t** req_out, uv_buf_t* write_buf_out) { /* Pack into a single heap-allocated buffer: * (a) a uv_write_t structure where libuv stores the actual state. * (b) a pointer to the original uv_write_t. * (c) data from all `bufs` entries. */ char* heap_buffer; size_t heap_buffer_length, heap_buffer_offset; uv__coalesced_write_t* coalesced_write_req; /* (a) + (b) */ char* data_start; /* (c) */ size_t data_length; unsigned int i; /* Compute combined size of all combined buffers from `bufs`. */ data_length = 0; for (i = 0; i < nbufs; i++) data_length += bufs[i].len; /* The total combined size of data buffers should not exceed UINT32_MAX, * because WriteFile() won't accept buffers larger than that. */ if (data_length > UINT32_MAX) return WSAENOBUFS; /* Maps to UV_ENOBUFS. */ /* Compute heap buffer size. */ heap_buffer_length = sizeof *coalesced_write_req + /* (a) + (b) */ data_length; /* (c) */ /* Allocate buffer. 
*/ heap_buffer = uv__malloc(heap_buffer_length); if (heap_buffer == NULL) return ERROR_NOT_ENOUGH_MEMORY; /* Maps to UV_ENOMEM. */ /* Copy uv_write_t information to the buffer. */ coalesced_write_req = (uv__coalesced_write_t*) heap_buffer; coalesced_write_req->req = *user_req; /* copy (a) */ coalesced_write_req->req.coalesced = 1; coalesced_write_req->user_req = user_req; /* copy (b) */ heap_buffer_offset = sizeof *coalesced_write_req; /* offset (a) + (b) */ /* Copy data buffers to the heap buffer. */ data_start = &heap_buffer[heap_buffer_offset]; for (i = 0; i < nbufs; i++) { memcpy(&heap_buffer[heap_buffer_offset], bufs[i].base, bufs[i].len); /* copy (c) */ heap_buffer_offset += bufs[i].len; /* offset (c) */ } assert(heap_buffer_offset == heap_buffer_length); /* Set out arguments and return. */ *req_out = &coalesced_write_req->req; *write_buf_out = uv_buf_init(data_start, (unsigned int) data_length); return 0; } static int uv__pipe_write_data(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, const uv_buf_t bufs[], size_t nbufs, uv_write_cb cb, int copy_always) { int err; int result; uv_buf_t write_buf; assert(handle->handle != INVALID_HANDLE_VALUE); UV_REQ_INIT(req, UV_WRITE); req->handle = (uv_stream_t*) handle; req->send_handle = NULL; req->cb = cb; /* Private fields. */ req->coalesced = 0; req->event_handle = NULL; req->wait_handle = INVALID_HANDLE_VALUE; /* Prepare the overlapped structure. */ memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); if (handle->flags & (UV_HANDLE_EMULATE_IOCP | UV_HANDLE_BLOCKING_WRITES)) { req->event_handle = CreateEvent(NULL, 0, 0, NULL); if (req->event_handle == NULL) { uv_fatal_error(GetLastError(), "CreateEvent"); } req->u.io.overlapped.hEvent = (HANDLE) ((uintptr_t) req->event_handle | 1); } req->write_buffer = uv_null_buf_; if (nbufs == 0) { /* Write empty buffer. */ write_buf = uv_null_buf_; } else if (nbufs == 1 && !copy_always) { /* Write directly from bufs[0]. */ write_buf = bufs[0]; } else { /* Coalesce all `bufs` into one big buffer. This also creates a new * write-request structure that replaces the old one. */ err = uv__build_coalesced_write_req(req, bufs, nbufs, &req, &write_buf); if (err != 0) return err; } if ((handle->flags & (UV_HANDLE_BLOCKING_WRITES | UV_HANDLE_NON_OVERLAPPED_PIPE)) == (UV_HANDLE_BLOCKING_WRITES | UV_HANDLE_NON_OVERLAPPED_PIPE)) { DWORD bytes; result = WriteFile(handle->handle, write_buf.base, write_buf.len, &bytes, NULL); if (!result) { err = GetLastError(); return err; } else { /* Request completed immediately. */ req->u.io.queued_bytes = 0; } REGISTER_HANDLE_REQ(loop, handle, req); handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; POST_COMPLETION_FOR_REQ(loop, req); return 0; } else if (handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) { req->write_buffer = write_buf; uv__insert_non_overlapped_write_req(handle, req); if (handle->stream.conn.write_reqs_pending == 0) { uv__queue_non_overlapped_write(handle); } /* Request queued by the kernel. */ req->u.io.queued_bytes = write_buf.len; handle->write_queue_size += req->u.io.queued_bytes; } else if (handle->flags & UV_HANDLE_BLOCKING_WRITES) { /* Using overlapped IO, but wait for completion before returning */ result = WriteFile(handle->handle, write_buf.base, write_buf.len, NULL, &req->u.io.overlapped); if (!result && GetLastError() != ERROR_IO_PENDING) { err = GetLastError(); CloseHandle(req->event_handle); req->event_handle = NULL; return err; } if (result) { /* Request completed immediately. 
*/ req->u.io.queued_bytes = 0; } else { /* Request queued by the kernel. */ req->u.io.queued_bytes = write_buf.len; handle->write_queue_size += req->u.io.queued_bytes; if (WaitForSingleObject(req->event_handle, INFINITE) != WAIT_OBJECT_0) { err = GetLastError(); CloseHandle(req->event_handle); req->event_handle = NULL; return err; } } CloseHandle(req->event_handle); req->event_handle = NULL; REGISTER_HANDLE_REQ(loop, handle, req); handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; return 0; } else { result = WriteFile(handle->handle, write_buf.base, write_buf.len, NULL, &req->u.io.overlapped); if (!result && GetLastError() != ERROR_IO_PENDING) { return GetLastError(); } if (result) { /* Request completed immediately. */ req->u.io.queued_bytes = 0; } else { /* Request queued by the kernel. */ req->u.io.queued_bytes = write_buf.len; handle->write_queue_size += req->u.io.queued_bytes; } if (handle->flags & UV_HANDLE_EMULATE_IOCP) { if (!RegisterWaitForSingleObject(&req->wait_handle, req->event_handle, post_completion_write_wait, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD)) { return GetLastError(); } } } REGISTER_HANDLE_REQ(loop, handle, req); handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; return 0; } static DWORD uv__pipe_get_ipc_remote_pid(uv_pipe_t* handle) { DWORD* pid = &handle->pipe.conn.ipc_remote_pid; /* If the both ends of the IPC pipe are owned by the same process, * the remote end pid may not yet be set. If so, do it here. * TODO: this is weird; it'd probably better to use a handshake. */ if (*pid == 0) *pid = GetCurrentProcessId(); return *pid; } int uv__pipe_write_ipc(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, const uv_buf_t data_bufs[], size_t data_buf_count, uv_stream_t* send_handle, uv_write_cb cb) { uv_buf_t stack_bufs[6]; uv_buf_t* bufs; size_t buf_count, buf_index; uv__ipc_frame_header_t frame_header; uv__ipc_socket_xfer_type_t xfer_type = UV__IPC_SOCKET_XFER_NONE; uv__ipc_socket_xfer_info_t xfer_info; uint64_t data_length; size_t i; int err; /* Compute the combined size of data buffers. */ data_length = 0; for (i = 0; i < data_buf_count; i++) data_length += data_bufs[i].len; if (data_length > UINT32_MAX) return WSAENOBUFS; /* Maps to UV_ENOBUFS. */ /* Prepare the frame's socket xfer payload. */ if (send_handle != NULL) { uv_tcp_t* send_tcp_handle = (uv_tcp_t*) send_handle; /* Verify that `send_handle` it is indeed a tcp handle. */ if (send_tcp_handle->type != UV_TCP) return ERROR_NOT_SUPPORTED; /* Export the tcp handle. */ err = uv__tcp_xfer_export(send_tcp_handle, uv__pipe_get_ipc_remote_pid(handle), &xfer_type, &xfer_info); if (err != 0) return err; } /* Compute the number of uv_buf_t's required. */ buf_count = 1 + data_buf_count; /* Frame header and data buffers. */ if (send_handle != NULL) buf_count += 1; /* One extra for the socket xfer information. */ /* Use the on-stack buffer array if it is big enough; otherwise allocate * space for it on the heap. */ if (buf_count < ARRAY_SIZE(stack_bufs)) { /* Use on-stack buffer array. */ bufs = stack_bufs; } else { /* Use heap-allocated buffer array. */ bufs = uv__calloc(buf_count, sizeof(uv_buf_t)); if (bufs == NULL) return ERROR_NOT_ENOUGH_MEMORY; /* Maps to UV_ENOMEM. */ } buf_index = 0; /* Initialize frame header and add it to the buffers list. */ memset(&frame_header, 0, sizeof frame_header); bufs[buf_index++] = uv_buf_init((char*) &frame_header, sizeof frame_header); if (send_handle != NULL) { /* Add frame header flags. 
*/ switch (xfer_type) { case UV__IPC_SOCKET_XFER_TCP_CONNECTION: frame_header.flags |= UV__IPC_FRAME_HAS_SOCKET_XFER | UV__IPC_FRAME_XFER_IS_TCP_CONNECTION; break; case UV__IPC_SOCKET_XFER_TCP_SERVER: frame_header.flags |= UV__IPC_FRAME_HAS_SOCKET_XFER; break; default: assert(0); /* Unreachable. */ } /* Add xfer info buffer. */ bufs[buf_index++] = uv_buf_init((char*) &xfer_info, sizeof xfer_info); } if (data_length > 0) { /* Update frame header. */ frame_header.flags |= UV__IPC_FRAME_HAS_DATA; frame_header.data_length = (uint32_t) data_length; /* Add data buffers to buffers list. */ for (i = 0; i < data_buf_count; i++) bufs[buf_index++] = data_bufs[i]; } /* Write buffers. We set the `always_copy` flag, so it is not a problem that * some of the written data lives on the stack. */ err = uv__pipe_write_data(loop, req, handle, bufs, buf_count, cb, 1); /* If we had to heap-allocate the bufs array, free it now. */ if (bufs != stack_bufs) { uv__free(bufs); } return err; } int uv__pipe_write(uv_loop_t* loop, uv_write_t* req, uv_pipe_t* handle, const uv_buf_t bufs[], size_t nbufs, uv_stream_t* send_handle, uv_write_cb cb) { if (handle->ipc) { /* IPC pipe write: use framing protocol. */ return uv__pipe_write_ipc(loop, req, handle, bufs, nbufs, send_handle, cb); } else { /* Non-IPC pipe write: put data on the wire directly. */ assert(send_handle == NULL); return uv__pipe_write_data(loop, req, handle, bufs, nbufs, cb, 0); } } static void uv__pipe_read_eof(uv_loop_t* loop, uv_pipe_t* handle, uv_buf_t buf) { /* If there is an eof timer running, we don't need it any more, so discard * it. */ eof_timer_destroy(handle); uv_read_stop((uv_stream_t*) handle); handle->read_cb((uv_stream_t*) handle, UV_EOF, &buf); } static void uv__pipe_read_error(uv_loop_t* loop, uv_pipe_t* handle, int error, uv_buf_t buf) { /* If there is an eof timer running, we don't need it any more, so discard * it. */ eof_timer_destroy(handle); uv_read_stop((uv_stream_t*) handle); handle->read_cb((uv_stream_t*)handle, uv_translate_sys_error(error), &buf); } static void uv__pipe_read_error_or_eof(uv_loop_t* loop, uv_pipe_t* handle, int error, uv_buf_t buf) { if (error == ERROR_BROKEN_PIPE) { uv__pipe_read_eof(loop, handle, buf); } else { uv__pipe_read_error(loop, handle, error, buf); } } static void uv__pipe_queue_ipc_xfer_info( uv_pipe_t* handle, uv__ipc_socket_xfer_type_t xfer_type, uv__ipc_socket_xfer_info_t* xfer_info) { uv__ipc_xfer_queue_item_t* item; item = (uv__ipc_xfer_queue_item_t*) uv__malloc(sizeof(*item)); if (item == NULL) uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); item->xfer_type = xfer_type; item->xfer_info = *xfer_info; QUEUE_INSERT_TAIL(&handle->pipe.conn.ipc_xfer_queue, &item->member); handle->pipe.conn.ipc_xfer_queue_length++; } /* Read an exact number of bytes from a pipe. If an error or end-of-file is * encountered before the requested number of bytes are read, an error is * returned. */ static int uv__pipe_read_exactly(HANDLE h, void* buffer, DWORD count) { DWORD bytes_read, bytes_read_now; bytes_read = 0; while (bytes_read < count) { if (!ReadFile(h, (char*) buffer + bytes_read, count - bytes_read, &bytes_read_now, NULL)) { return GetLastError(); } bytes_read += bytes_read_now; } assert(bytes_read == count); return 0; } static DWORD uv__pipe_read_data(uv_loop_t* loop, uv_pipe_t* handle, DWORD suggested_bytes, DWORD max_bytes) { DWORD bytes_read; uv_buf_t buf; /* Ask the user for a buffer to read data into. 
*/ buf = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, suggested_bytes, &buf); if (buf.base == NULL || buf.len == 0) { handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); return 0; /* Break out of read loop. */ } /* Ensure we read at most the smaller of: * (a) the length of the user-allocated buffer. * (b) the maximum data length as specified by the `max_bytes` argument. */ if (max_bytes > buf.len) max_bytes = buf.len; /* Read into the user buffer. */ if (!ReadFile(handle->handle, buf.base, max_bytes, &bytes_read, NULL)) { uv__pipe_read_error_or_eof(loop, handle, GetLastError(), buf); return 0; /* Break out of read loop. */ } /* Call the read callback. */ handle->read_cb((uv_stream_t*) handle, bytes_read, &buf); return bytes_read; } static DWORD uv__pipe_read_ipc(uv_loop_t* loop, uv_pipe_t* handle) { uint32_t* data_remaining = &handle->pipe.conn.ipc_data_frame.payload_remaining; int err; if (*data_remaining > 0) { /* Read frame data payload. */ DWORD bytes_read = uv__pipe_read_data(loop, handle, *data_remaining, *data_remaining); *data_remaining -= bytes_read; return bytes_read; } else { /* Start of a new IPC frame. */ uv__ipc_frame_header_t frame_header; uint32_t xfer_flags; uv__ipc_socket_xfer_type_t xfer_type; uv__ipc_socket_xfer_info_t xfer_info; /* Read the IPC frame header. */ err = uv__pipe_read_exactly( handle->handle, &frame_header, sizeof frame_header); if (err) goto error; /* Validate that flags are valid. */ if ((frame_header.flags & ~UV__IPC_FRAME_VALID_FLAGS) != 0) goto invalid; /* Validate that reserved2 is zero. */ if (frame_header.reserved2 != 0) goto invalid; /* Parse xfer flags. */ xfer_flags = frame_header.flags & UV__IPC_FRAME_XFER_FLAGS; if (xfer_flags & UV__IPC_FRAME_HAS_SOCKET_XFER) { /* Socket coming -- determine the type. */ xfer_type = xfer_flags & UV__IPC_FRAME_XFER_IS_TCP_CONNECTION ? UV__IPC_SOCKET_XFER_TCP_CONNECTION : UV__IPC_SOCKET_XFER_TCP_SERVER; } else if (xfer_flags == 0) { /* No socket. */ xfer_type = UV__IPC_SOCKET_XFER_NONE; } else { /* Invalid flags. */ goto invalid; } /* Parse data frame information. */ if (frame_header.flags & UV__IPC_FRAME_HAS_DATA) { *data_remaining = frame_header.data_length; } else if (frame_header.data_length != 0) { /* Data length greater than zero but data flag not set -- invalid. */ goto invalid; } /* If no socket xfer info follows, return here. Data will be read in a * subsequent invocation of uv__pipe_read_ipc(). */ if (xfer_type == UV__IPC_SOCKET_XFER_NONE) return sizeof frame_header; /* Number of bytes read. */ /* Read transferred socket information. */ err = uv__pipe_read_exactly(handle->handle, &xfer_info, sizeof xfer_info); if (err) goto error; /* Store the pending socket info. */ uv__pipe_queue_ipc_xfer_info(handle, xfer_type, &xfer_info); /* Return number of bytes read. */ return sizeof frame_header + sizeof xfer_info; } invalid: /* Invalid frame. */ err = WSAECONNABORTED; /* Maps to UV_ECONNABORTED. */ error: uv__pipe_read_error_or_eof(loop, handle, err, uv_null_buf_); return 0; /* Break out of read loop. */ } void uv__process_pipe_read_req(uv_loop_t* loop, uv_pipe_t* handle, uv_req_t* req) { assert(handle->type == UV_NAMED_PIPE); handle->flags &= ~(UV_HANDLE_READ_PENDING | UV_HANDLE_CANCELLATION_PENDING); DECREASE_PENDING_REQ_COUNT(handle); eof_timer_stop(handle); /* At this point, we're done with bookkeeping. If the user has stopped * reading the pipe in the meantime, there is nothing left to do, since there * is no callback that we can call. 
*/ if (!(handle->flags & UV_HANDLE_READING)) return; if (!REQ_SUCCESS(req)) { /* An error occurred doing the zero-read. */ DWORD err = GET_REQ_ERROR(req); /* If the read was cancelled by uv__pipe_interrupt_read(), the request may * indicate an ERROR_OPERATION_ABORTED error. This error isn't relevant to * the user; we'll start a new zero-read at the end of this function. */ if (err != ERROR_OPERATION_ABORTED) uv__pipe_read_error_or_eof(loop, handle, err, uv_null_buf_); } else { /* The zero-read completed without error, indicating there is data * available in the kernel buffer. */ DWORD avail; /* Get the number of bytes available. */ avail = 0; if (!PeekNamedPipe(handle->handle, NULL, 0, NULL, &avail, NULL)) uv__pipe_read_error_or_eof(loop, handle, GetLastError(), uv_null_buf_); /* Read until we've either read all the bytes available, or the 'reading' * flag is cleared. */ while (avail > 0 && handle->flags & UV_HANDLE_READING) { /* Depending on the type of pipe, read either IPC frames or raw data. */ DWORD bytes_read = handle->ipc ? uv__pipe_read_ipc(loop, handle) : uv__pipe_read_data(loop, handle, avail, (DWORD) -1); /* If no bytes were read, treat this as an indication that an error * occurred, and break out of the read loop. */ if (bytes_read == 0) break; /* It is possible that more bytes were read than we thought were * available. To prevent `avail` from underflowing, break out of the loop * if this is the case. */ if (bytes_read > avail) break; /* Recompute the number of bytes available. */ avail -= bytes_read; } } /* Start another zero-read request if necessary. */ if ((handle->flags & UV_HANDLE_READING) && !(handle->flags & UV_HANDLE_READ_PENDING)) { uv__pipe_queue_read(loop, handle); } } void uv__process_pipe_write_req(uv_loop_t* loop, uv_pipe_t* handle, uv_write_t* req) { int err; assert(handle->type == UV_NAMED_PIPE); assert(handle->write_queue_size >= req->u.io.queued_bytes); handle->write_queue_size -= req->u.io.queued_bytes; UNREGISTER_HANDLE_REQ(loop, handle, req); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { if (req->wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(req->wait_handle); req->wait_handle = INVALID_HANDLE_VALUE; } if (req->event_handle) { CloseHandle(req->event_handle); req->event_handle = NULL; } } err = GET_REQ_ERROR(req); /* If this was a coalesced write, extract pointer to the user_provided * uv_write_t structure so we can pass the expected pointer to the callback, * then free the heap-allocated write req. */ if (req->coalesced) { uv__coalesced_write_t* coalesced_write = container_of(req, uv__coalesced_write_t, req); req = coalesced_write->user_req; uv__free(coalesced_write); } if (req->cb) { req->cb(req, uv_translate_sys_error(err)); } handle->stream.conn.write_reqs_pending--; if (handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE && handle->pipe.conn.non_overlapped_writes_tail) { assert(handle->stream.conn.write_reqs_pending > 0); uv__queue_non_overlapped_write(handle); } if (handle->stream.conn.write_reqs_pending == 0) if (handle->flags & UV_HANDLE_SHUTTING) uv__pipe_shutdown(loop, handle, handle->stream.conn.shutdown_req); DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_pipe_accept_req(uv_loop_t* loop, uv_pipe_t* handle, uv_req_t* raw_req) { uv_pipe_accept_t* req = (uv_pipe_accept_t*) raw_req; assert(handle->type == UV_NAMED_PIPE); if (handle->flags & UV_HANDLE_CLOSING) { /* The req->pipeHandle should be freed already in uv__pipe_close(). 
*/ assert(req->pipeHandle == INVALID_HANDLE_VALUE); DECREASE_PENDING_REQ_COUNT(handle); return; } if (REQ_SUCCESS(req)) { assert(req->pipeHandle != INVALID_HANDLE_VALUE); req->next_pending = handle->pipe.serv.pending_accepts; handle->pipe.serv.pending_accepts = req; if (handle->stream.serv.connection_cb) { handle->stream.serv.connection_cb((uv_stream_t*)handle, 0); } } else { if (req->pipeHandle != INVALID_HANDLE_VALUE) { CloseHandle(req->pipeHandle); req->pipeHandle = INVALID_HANDLE_VALUE; } if (!(handle->flags & UV_HANDLE_CLOSING)) { uv__pipe_queue_accept(loop, handle, req, FALSE); } } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_pipe_connect_req(uv_loop_t* loop, uv_pipe_t* handle, uv_connect_t* req) { HANDLE pipeHandle; DWORD duplex_flags; int err; assert(handle->type == UV_NAMED_PIPE); UNREGISTER_HANDLE_REQ(loop, handle, req); err = 0; if (REQ_SUCCESS(req)) { pipeHandle = req->u.connect.pipeHandle; duplex_flags = req->u.connect.duplex_flags; err = uv__set_pipe_handle(loop, handle, pipeHandle, -1, duplex_flags); if (err) CloseHandle(pipeHandle); } else { err = uv_translate_sys_error(GET_REQ_ERROR(req)); } if (req->cb) req->cb(req, err); DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_pipe_shutdown_req(uv_loop_t* loop, uv_pipe_t* handle, uv_shutdown_t* req) { int err; assert(handle->type == UV_NAMED_PIPE); /* Clear the shutdown_req field so we don't go here again. */ handle->stream.conn.shutdown_req = NULL; handle->flags &= ~UV_HANDLE_SHUTTING; UNREGISTER_HANDLE_REQ(loop, handle, req); if (handle->flags & UV_HANDLE_CLOSING) { /* Already closing. Cancel the shutdown. */ err = UV_ECANCELED; } else if (!REQ_SUCCESS(req)) { /* An error occurred in trying to shutdown gracefully. */ err = uv_translate_sys_error(GET_REQ_ERROR(req)); } else { if (handle->flags & UV_HANDLE_READABLE) { /* Initialize and optionally start the eof timer. Only do this if the pipe * is readable and we haven't seen EOF come in ourselves. */ eof_timer_init(handle); /* If reading start the timer right now. Otherwise uv__pipe_queue_read will * start it. */ if (handle->flags & UV_HANDLE_READ_PENDING) { eof_timer_start(handle); } } else { /* This pipe is not readable. We can just close it to let the other end * know that we're done writing. 
*/ close_pipe(handle); } err = 0; } if (req->cb) req->cb(req, err); DECREASE_PENDING_REQ_COUNT(handle); } static void eof_timer_init(uv_pipe_t* pipe) { int r; assert(pipe->pipe.conn.eof_timer == NULL); assert(pipe->flags & UV_HANDLE_CONNECTION); pipe->pipe.conn.eof_timer = (uv_timer_t*) uv__malloc(sizeof *pipe->pipe.conn.eof_timer); r = uv_timer_init(pipe->loop, pipe->pipe.conn.eof_timer); assert(r == 0); /* timers can't fail */ (void) r; pipe->pipe.conn.eof_timer->data = pipe; uv_unref((uv_handle_t*) pipe->pipe.conn.eof_timer); } static void eof_timer_start(uv_pipe_t* pipe) { assert(pipe->flags & UV_HANDLE_CONNECTION); if (pipe->pipe.conn.eof_timer != NULL) { uv_timer_start(pipe->pipe.conn.eof_timer, eof_timer_cb, eof_timeout, 0); } } static void eof_timer_stop(uv_pipe_t* pipe) { assert(pipe->flags & UV_HANDLE_CONNECTION); if (pipe->pipe.conn.eof_timer != NULL) { uv_timer_stop(pipe->pipe.conn.eof_timer); } } static void eof_timer_cb(uv_timer_t* timer) { uv_pipe_t* pipe = (uv_pipe_t*) timer->data; uv_loop_t* loop = timer->loop; assert(pipe->type == UV_NAMED_PIPE); /* This should always be true, since we start the timer only in * uv__pipe_queue_read after successfully calling ReadFile, or in * uv__process_pipe_shutdown_req if a read is pending, and we always * immediately stop the timer in uv__process_pipe_read_req. */ assert(pipe->flags & UV_HANDLE_READ_PENDING); /* If there are many packets coming off the iocp then the timer callback may * be called before the read request is coming off the queue. Therefore we * check here if the read request has completed but will be processed later. */ if ((pipe->flags & UV_HANDLE_READ_PENDING) && HasOverlappedIoCompleted(&pipe->read_req.u.io.overlapped)) { return; } /* Force both ends off the pipe. */ close_pipe(pipe); /* Stop reading, so the pending read that is going to fail will not be * reported to the user. */ uv_read_stop((uv_stream_t*) pipe); /* Report the eof and update flags. This will get reported even if the user * stopped reading in the meantime. TODO: is that okay? */ uv__pipe_read_eof(loop, pipe, uv_null_buf_); } static void eof_timer_destroy(uv_pipe_t* pipe) { assert(pipe->flags & UV_HANDLE_CONNECTION); if (pipe->pipe.conn.eof_timer) { uv_close((uv_handle_t*) pipe->pipe.conn.eof_timer, eof_timer_close_cb); pipe->pipe.conn.eof_timer = NULL; } } static void eof_timer_close_cb(uv_handle_t* handle) { assert(handle->type == UV_TIMER); uv__free(handle); } int uv_pipe_open(uv_pipe_t* pipe, uv_file file) { HANDLE os_handle = uv__get_osfhandle(file); NTSTATUS nt_status; IO_STATUS_BLOCK io_status; FILE_ACCESS_INFORMATION access; DWORD duplex_flags = 0; int err; if (os_handle == INVALID_HANDLE_VALUE) return UV_EBADF; if (pipe->flags & UV_HANDLE_PIPESERVER) return UV_EINVAL; if (pipe->flags & UV_HANDLE_CONNECTION) return UV_EBUSY; uv__pipe_connection_init(pipe); uv__once_init(); /* In order to avoid closing a stdio file descriptor 0-2, duplicate the * underlying OS handle and forget about the original fd. * We could also opt to use the original OS handle and just never close it, * but then there would be no reliable way to cancel pending read operations * upon close. */ if (file <= 2) { if (!DuplicateHandle(INVALID_HANDLE_VALUE, os_handle, INVALID_HANDLE_VALUE, &os_handle, 0, FALSE, DUPLICATE_SAME_ACCESS)) return uv_translate_sys_error(GetLastError()); assert(os_handle != INVALID_HANDLE_VALUE); file = -1; } /* Determine what kind of permissions we have on this handle. 
* Cygwin opens the pipe in message mode, but we can support it, * just query the access flags and set the stream flags accordingly. */ nt_status = pNtQueryInformationFile(os_handle, &io_status, &access, sizeof(access), FileAccessInformation); if (nt_status != STATUS_SUCCESS) return UV_EINVAL; if (pipe->ipc) { if (!(access.AccessFlags & FILE_WRITE_DATA) || !(access.AccessFlags & FILE_READ_DATA)) { return UV_EINVAL; } } if (access.AccessFlags & FILE_WRITE_DATA) duplex_flags |= UV_HANDLE_WRITABLE; if (access.AccessFlags & FILE_READ_DATA) duplex_flags |= UV_HANDLE_READABLE; err = uv__set_pipe_handle(pipe->loop, pipe, os_handle, file, duplex_flags); if (err) { if (file == -1) CloseHandle(os_handle); return err; } if (pipe->ipc) { assert(!(pipe->flags & UV_HANDLE_NON_OVERLAPPED_PIPE)); pipe->pipe.conn.ipc_remote_pid = uv_os_getppid(); assert(pipe->pipe.conn.ipc_remote_pid != (DWORD)(uv_pid_t) -1); } return 0; } static int uv__pipe_getname(const uv_pipe_t* handle, char* buffer, size_t* size) { NTSTATUS nt_status; IO_STATUS_BLOCK io_status; FILE_NAME_INFORMATION tmp_name_info; FILE_NAME_INFORMATION* name_info; WCHAR* name_buf; unsigned int addrlen; unsigned int name_size; unsigned int name_len; int err; uv__once_init(); name_info = NULL; if (handle->name != NULL) { /* The user might try to query the name before we are connected, * and this is just easier to return the cached value if we have it. */ name_buf = handle->name; name_len = wcslen(name_buf); /* check how much space we need */ addrlen = WideCharToMultiByte(CP_UTF8, 0, name_buf, name_len, NULL, 0, NULL, NULL); if (!addrlen) { *size = 0; err = uv_translate_sys_error(GetLastError()); return err; } else if (addrlen >= *size) { *size = addrlen + 1; err = UV_ENOBUFS; goto error; } addrlen = WideCharToMultiByte(CP_UTF8, 0, name_buf, name_len, buffer, addrlen, NULL, NULL); if (!addrlen) { *size = 0; err = uv_translate_sys_error(GetLastError()); return err; } *size = addrlen; buffer[addrlen] = '\0'; return 0; } if (handle->handle == INVALID_HANDLE_VALUE) { *size = 0; return UV_EINVAL; } /* NtQueryInformationFile will block if another thread is performing a * blocking operation on the queried handle. If the pipe handle is * synchronous, there may be a worker thread currently calling ReadFile() on * the pipe handle, which could cause a deadlock. To avoid this, interrupt * the read. 
*/ if (handle->flags & UV_HANDLE_CONNECTION && handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) { uv__pipe_interrupt_read((uv_pipe_t*) handle); /* cast away const warning */ } nt_status = pNtQueryInformationFile(handle->handle, &io_status, &tmp_name_info, sizeof tmp_name_info, FileNameInformation); if (nt_status == STATUS_BUFFER_OVERFLOW) { name_size = sizeof(*name_info) + tmp_name_info.FileNameLength; name_info = uv__malloc(name_size); if (!name_info) { *size = 0; err = UV_ENOMEM; goto cleanup; } nt_status = pNtQueryInformationFile(handle->handle, &io_status, name_info, name_size, FileNameInformation); } if (nt_status != STATUS_SUCCESS) { *size = 0; err = uv_translate_sys_error(pRtlNtStatusToDosError(nt_status)); goto error; } if (!name_info) { /* the struct on stack was used */ name_buf = tmp_name_info.FileName; name_len = tmp_name_info.FileNameLength; } else { name_buf = name_info->FileName; name_len = name_info->FileNameLength; } if (name_len == 0) { *size = 0; err = 0; goto error; } name_len /= sizeof(WCHAR); /* check how much space we need */ addrlen = WideCharToMultiByte(CP_UTF8, 0, name_buf, name_len, NULL, 0, NULL, NULL); if (!addrlen) { *size = 0; err = uv_translate_sys_error(GetLastError()); goto error; } else if (pipe_prefix_len + addrlen >= *size) { /* "\\\\.\\pipe" + name */ *size = pipe_prefix_len + addrlen + 1; err = UV_ENOBUFS; goto error; } memcpy(buffer, pipe_prefix, pipe_prefix_len); addrlen = WideCharToMultiByte(CP_UTF8, 0, name_buf, name_len, buffer+pipe_prefix_len, *size-pipe_prefix_len, NULL, NULL); if (!addrlen) { *size = 0; err = uv_translate_sys_error(GetLastError()); goto error; } addrlen += pipe_prefix_len; *size = addrlen; buffer[addrlen] = '\0'; err = 0; error: uv__free(name_info); cleanup: return err; } int uv_pipe_pending_count(uv_pipe_t* handle) { if (!handle->ipc) return 0; return handle->pipe.conn.ipc_xfer_queue_length; } int uv_pipe_getsockname(const uv_pipe_t* handle, char* buffer, size_t* size) { if (handle->flags & UV_HANDLE_BOUND) return uv__pipe_getname(handle, buffer, size); if (handle->flags & UV_HANDLE_CONNECTION || handle->handle != INVALID_HANDLE_VALUE) { *size = 0; return 0; } return UV_EBADF; } int uv_pipe_getpeername(const uv_pipe_t* handle, char* buffer, size_t* size) { /* emulate unix behaviour */ if (handle->flags & UV_HANDLE_BOUND) return UV_ENOTCONN; if (handle->handle != INVALID_HANDLE_VALUE) return uv__pipe_getname(handle, buffer, size); if (handle->flags & UV_HANDLE_CONNECTION) { if (handle->name != NULL) return uv__pipe_getname(handle, buffer, size); } return UV_EBADF; } uv_handle_type uv_pipe_pending_type(uv_pipe_t* handle) { if (!handle->ipc) return UV_UNKNOWN_HANDLE; if (handle->pipe.conn.ipc_xfer_queue_length == 0) return UV_UNKNOWN_HANDLE; else return UV_TCP; } int uv_pipe_chmod(uv_pipe_t* handle, int mode) { SID_IDENTIFIER_AUTHORITY sid_world = { SECURITY_WORLD_SID_AUTHORITY }; PACL old_dacl, new_dacl; PSECURITY_DESCRIPTOR sd; EXPLICIT_ACCESS ea; PSID everyone; int error; if (handle == NULL || handle->handle == INVALID_HANDLE_VALUE) return UV_EBADF; if (mode != UV_READABLE && mode != UV_WRITABLE && mode != (UV_WRITABLE | UV_READABLE)) return UV_EINVAL; if (!AllocateAndInitializeSid(&sid_world, 1, SECURITY_WORLD_RID, 0, 0, 0, 0, 0, 0, 0, &everyone)) { error = GetLastError(); goto done; } if (GetSecurityInfo(handle->handle, SE_KERNEL_OBJECT, DACL_SECURITY_INFORMATION, NULL, NULL, &old_dacl, NULL, &sd)) { error = GetLastError(); goto clean_sid; } memset(&ea, 0, sizeof(EXPLICIT_ACCESS)); if (mode & UV_READABLE) 
ea.grfAccessPermissions |= GENERIC_READ | FILE_WRITE_ATTRIBUTES; if (mode & UV_WRITABLE) ea.grfAccessPermissions |= GENERIC_WRITE | FILE_READ_ATTRIBUTES; ea.grfAccessPermissions |= SYNCHRONIZE; ea.grfAccessMode = SET_ACCESS; ea.grfInheritance = NO_INHERITANCE; ea.Trustee.TrusteeForm = TRUSTEE_IS_SID; ea.Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP; ea.Trustee.ptstrName = (LPTSTR)everyone; if (SetEntriesInAcl(1, &ea, old_dacl, &new_dacl)) { error = GetLastError(); goto clean_sd; } if (SetSecurityInfo(handle->handle, SE_KERNEL_OBJECT, DACL_SECURITY_INFORMATION, NULL, NULL, new_dacl, NULL)) { error = GetLastError(); goto clean_dacl; } error = 0; clean_dacl: LocalFree((HLOCAL) new_dacl); clean_sd: LocalFree((HLOCAL) sd); clean_sid: FreeSid(everyone); done: return uv_translate_sys_error(error); } gevent-24.11.1/deps/libuv/src/win/poll.c000066400000000000000000000416361471441230600177360ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" static const GUID uv_msafd_provider_ids[UV_MSAFD_PROVIDER_COUNT] = { {0xe70f1aa0, 0xab8b, 0x11cf, {0x8c, 0xa3, 0x00, 0x80, 0x5f, 0x48, 0xa1, 0x92}}, {0xf9eab0c0, 0x26d4, 0x11d0, {0xbb, 0xbf, 0x00, 0xaa, 0x00, 0x6c, 0x34, 0xe4}}, {0x9fc48064, 0x7298, 0x43e4, {0xb7, 0xbd, 0x18, 0x1f, 0x20, 0x89, 0x79, 0x2a}}, {0xa00943d9, 0x9c2e, 0x4633, {0x9b, 0x59, 0x00, 0x57, 0xa3, 0x16, 0x09, 0x94}} }; typedef struct uv_single_fd_set_s { unsigned int fd_count; SOCKET fd_array[1]; } uv_single_fd_set_t; static OVERLAPPED overlapped_dummy_; static uv_once_t overlapped_dummy_init_guard_ = UV_ONCE_INIT; static AFD_POLL_INFO afd_poll_info_dummy_; static void uv__init_overlapped_dummy(void) { HANDLE event; event = CreateEvent(NULL, TRUE, TRUE, NULL); if (event == NULL) uv_fatal_error(GetLastError(), "CreateEvent"); memset(&overlapped_dummy_, 0, sizeof overlapped_dummy_); overlapped_dummy_.hEvent = (HANDLE) ((uintptr_t) event | 1); } static OVERLAPPED* uv__get_overlapped_dummy(void) { uv_once(&overlapped_dummy_init_guard_, uv__init_overlapped_dummy); return &overlapped_dummy_; } static AFD_POLL_INFO* uv__get_afd_poll_info_dummy(void) { return &afd_poll_info_dummy_; } static void uv__fast_poll_submit_poll_req(uv_loop_t* loop, uv_poll_t* handle) { uv_req_t* req; AFD_POLL_INFO* afd_poll_info; int result; /* Find a yet unsubmitted req to submit. 
*/ if (handle->submitted_events_1 == 0) { req = &handle->poll_req_1; afd_poll_info = &handle->afd_poll_info_1; handle->submitted_events_1 = handle->events; handle->mask_events_1 = 0; handle->mask_events_2 = handle->events; } else if (handle->submitted_events_2 == 0) { req = &handle->poll_req_2; afd_poll_info = &handle->afd_poll_info_2; handle->submitted_events_2 = handle->events; handle->mask_events_1 = handle->events; handle->mask_events_2 = 0; } else { /* Just wait until there's an unsubmitted req. This will happen almost * immediately as one of the 2 outstanding requests is about to return. * When this happens, uv__fast_poll_process_poll_req will be called, and * the pending events, if needed, will be processed in a subsequent * request. */ return; } /* Setting Exclusive to TRUE makes the other poll request return if there is * any. */ afd_poll_info->Exclusive = TRUE; afd_poll_info->NumberOfHandles = 1; afd_poll_info->Timeout.QuadPart = INT64_MAX; afd_poll_info->Handles[0].Handle = (HANDLE) handle->socket; afd_poll_info->Handles[0].Status = 0; afd_poll_info->Handles[0].Events = 0; if (handle->events & UV_READABLE) { afd_poll_info->Handles[0].Events |= AFD_POLL_RECEIVE | AFD_POLL_DISCONNECT | AFD_POLL_ACCEPT | AFD_POLL_ABORT; } else { if (handle->events & UV_DISCONNECT) { afd_poll_info->Handles[0].Events |= AFD_POLL_DISCONNECT; } } if (handle->events & UV_WRITABLE) { afd_poll_info->Handles[0].Events |= AFD_POLL_SEND | AFD_POLL_CONNECT_FAIL; } memset(&req->u.io.overlapped, 0, sizeof req->u.io.overlapped); result = uv__msafd_poll((SOCKET) handle->peer_socket, afd_poll_info, afd_poll_info, &req->u.io.overlapped); if (result != 0 && WSAGetLastError() != WSA_IO_PENDING) { /* Queue this req, reporting an error. */ SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, req); } } static void uv__fast_poll_process_poll_req(uv_loop_t* loop, uv_poll_t* handle, uv_req_t* req) { unsigned char mask_events; AFD_POLL_INFO* afd_poll_info; if (req == &handle->poll_req_1) { afd_poll_info = &handle->afd_poll_info_1; handle->submitted_events_1 = 0; mask_events = handle->mask_events_1; } else if (req == &handle->poll_req_2) { afd_poll_info = &handle->afd_poll_info_2; handle->submitted_events_2 = 0; mask_events = handle->mask_events_2; } else { assert(0); return; } /* Report an error unless the select was just interrupted. */ if (!REQ_SUCCESS(req)) { DWORD error = GET_REQ_SOCK_ERROR(req); if (error != WSAEINTR && handle->events != 0) { handle->events = 0; /* Stop the watcher */ handle->poll_cb(handle, uv_translate_sys_error(error), 0); } } else if (afd_poll_info->NumberOfHandles >= 1) { unsigned char events = 0; if ((afd_poll_info->Handles[0].Events & (AFD_POLL_RECEIVE | AFD_POLL_DISCONNECT | AFD_POLL_ACCEPT | AFD_POLL_ABORT)) != 0) { events |= UV_READABLE; if ((afd_poll_info->Handles[0].Events & AFD_POLL_DISCONNECT) != 0) { events |= UV_DISCONNECT; } } if ((afd_poll_info->Handles[0].Events & (AFD_POLL_SEND | AFD_POLL_CONNECT_FAIL)) != 0) { events |= UV_WRITABLE; } events &= handle->events & ~mask_events; if (afd_poll_info->Handles[0].Events & AFD_POLL_LOCAL_CLOSE) { /* Stop polling. 
*/ handle->events = 0; if (uv__is_active(handle)) uv__handle_stop(handle); } if (events != 0) { handle->poll_cb(handle, 0, events); } } if ((handle->events & ~(handle->submitted_events_1 | handle->submitted_events_2)) != 0) { uv__fast_poll_submit_poll_req(loop, handle); } else if ((handle->flags & UV_HANDLE_CLOSING) && handle->submitted_events_1 == 0 && handle->submitted_events_2 == 0) { uv__want_endgame(loop, (uv_handle_t*) handle); } } static SOCKET uv__fast_poll_create_peer_socket(HANDLE iocp, WSAPROTOCOL_INFOW* protocol_info) { SOCKET sock = 0; sock = WSASocketW(protocol_info->iAddressFamily, protocol_info->iSocketType, protocol_info->iProtocol, protocol_info, 0, WSA_FLAG_OVERLAPPED); if (sock == INVALID_SOCKET) { return INVALID_SOCKET; } if (!SetHandleInformation((HANDLE) sock, HANDLE_FLAG_INHERIT, 0)) { goto error; }; if (CreateIoCompletionPort((HANDLE) sock, iocp, (ULONG_PTR) sock, 0) == NULL) { goto error; } return sock; error: closesocket(sock); return INVALID_SOCKET; } static SOCKET uv__fast_poll_get_peer_socket(uv_loop_t* loop, WSAPROTOCOL_INFOW* protocol_info) { int index, i; SOCKET peer_socket; index = -1; for (i = 0; (size_t) i < ARRAY_SIZE(uv_msafd_provider_ids); i++) { if (memcmp((void*) &protocol_info->ProviderId, (void*) &uv_msafd_provider_ids[i], sizeof protocol_info->ProviderId) == 0) { index = i; } } /* Check if the protocol uses an msafd socket. */ if (index < 0) { return INVALID_SOCKET; } /* If we didn't (try) to create a peer socket yet, try to make one. Don't try * again if the peer socket creation failed earlier for the same protocol. */ peer_socket = loop->poll_peer_sockets[index]; if (peer_socket == 0) { peer_socket = uv__fast_poll_create_peer_socket(loop->iocp, protocol_info); loop->poll_peer_sockets[index] = peer_socket; } return peer_socket; } static DWORD WINAPI uv__slow_poll_thread_proc(void* arg) { uv_req_t* req = (uv_req_t*) arg; uv_poll_t* handle = (uv_poll_t*) req->data; unsigned char reported_events; int r; uv_single_fd_set_t rfds, wfds, efds; struct timeval timeout; assert(handle->type == UV_POLL); assert(req->type == UV_POLL_REQ); if (handle->events & UV_READABLE) { rfds.fd_count = 1; rfds.fd_array[0] = handle->socket; } else { rfds.fd_count = 0; } if (handle->events & UV_WRITABLE) { wfds.fd_count = 1; wfds.fd_array[0] = handle->socket; efds.fd_count = 1; efds.fd_array[0] = handle->socket; } else { wfds.fd_count = 0; efds.fd_count = 0; } /* Make the select() time out after 3 minutes. If select() hangs because the * user closed the socket, we will at least not hang indefinitely. */ timeout.tv_sec = 3 * 60; timeout.tv_usec = 0; r = select(1, (fd_set*) &rfds, (fd_set*) &wfds, (fd_set*) &efds, &timeout); if (r == SOCKET_ERROR) { /* Queue this req, reporting an error. 
*/ SET_REQ_ERROR(&handle->poll_req_1, WSAGetLastError()); POST_COMPLETION_FOR_REQ(handle->loop, req); return 0; } reported_events = 0; if (r > 0) { if (rfds.fd_count > 0) { assert(rfds.fd_count == 1); assert(rfds.fd_array[0] == handle->socket); reported_events |= UV_READABLE; } if (wfds.fd_count > 0) { assert(wfds.fd_count == 1); assert(wfds.fd_array[0] == handle->socket); reported_events |= UV_WRITABLE; } else if (efds.fd_count > 0) { assert(efds.fd_count == 1); assert(efds.fd_array[0] == handle->socket); reported_events |= UV_WRITABLE; } } SET_REQ_SUCCESS(req); req->u.io.overlapped.InternalHigh = (DWORD) reported_events; POST_COMPLETION_FOR_REQ(handle->loop, req); return 0; } static void uv__slow_poll_submit_poll_req(uv_loop_t* loop, uv_poll_t* handle) { uv_req_t* req; /* Find a yet unsubmitted req to submit. */ if (handle->submitted_events_1 == 0) { req = &handle->poll_req_1; handle->submitted_events_1 = handle->events; handle->mask_events_1 = 0; handle->mask_events_2 = handle->events; } else if (handle->submitted_events_2 == 0) { req = &handle->poll_req_2; handle->submitted_events_2 = handle->events; handle->mask_events_1 = handle->events; handle->mask_events_2 = 0; } else { assert(0); return; } if (!QueueUserWorkItem(uv__slow_poll_thread_proc, (void*) req, WT_EXECUTELONGFUNCTION)) { /* Make this req pending, reporting an error. */ SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, req); } } static void uv__slow_poll_process_poll_req(uv_loop_t* loop, uv_poll_t* handle, uv_req_t* req) { unsigned char mask_events; int err; if (req == &handle->poll_req_1) { handle->submitted_events_1 = 0; mask_events = handle->mask_events_1; } else if (req == &handle->poll_req_2) { handle->submitted_events_2 = 0; mask_events = handle->mask_events_2; } else { assert(0); return; } if (!REQ_SUCCESS(req)) { /* Error. */ if (handle->events != 0) { err = GET_REQ_ERROR(req); handle->events = 0; /* Stop the watcher */ handle->poll_cb(handle, uv_translate_sys_error(err), 0); } } else { /* Got some events. */ int events = req->u.io.overlapped.InternalHigh & handle->events & ~mask_events; if (events != 0) { handle->poll_cb(handle, 0, events); } } if ((handle->events & ~(handle->submitted_events_1 | handle->submitted_events_2)) != 0) { uv__slow_poll_submit_poll_req(loop, handle); } else if ((handle->flags & UV_HANDLE_CLOSING) && handle->submitted_events_1 == 0 && handle->submitted_events_2 == 0) { uv__want_endgame(loop, (uv_handle_t*) handle); } } int uv_poll_init(uv_loop_t* loop, uv_poll_t* handle, int fd) { return uv_poll_init_socket(loop, handle, (SOCKET) uv__get_osfhandle(fd)); } int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, uv_os_sock_t socket) { WSAPROTOCOL_INFOW protocol_info; int len; SOCKET peer_socket, base_socket; DWORD bytes; DWORD yes = 1; /* Set the socket to nonblocking mode */ if (ioctlsocket(socket, FIONBIO, &yes) == SOCKET_ERROR) return uv_translate_sys_error(WSAGetLastError()); /* Try to obtain a base handle for the socket. This increases this chances that * we find an AFD handle and are able to use the fast poll mechanism. This will * always fail on windows XP/2k3, since they don't support the. SIO_BASE_HANDLE * ioctl. 
*/ #ifndef NDEBUG base_socket = INVALID_SOCKET; #endif if (WSAIoctl(socket, SIO_BASE_HANDLE, NULL, 0, &base_socket, sizeof base_socket, &bytes, NULL, NULL) == 0) { assert(base_socket != 0 && base_socket != INVALID_SOCKET); socket = base_socket; } uv__handle_init(loop, (uv_handle_t*) handle, UV_POLL); handle->socket = socket; handle->events = 0; /* Obtain protocol information about the socket. */ len = sizeof protocol_info; if (getsockopt(socket, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &protocol_info, &len) != 0) { return uv_translate_sys_error(WSAGetLastError()); } /* Get the peer socket that is needed to enable fast poll. If the returned * value is NULL, the protocol is not implemented by MSAFD and we'll have to * use slow mode. */ peer_socket = uv__fast_poll_get_peer_socket(loop, &protocol_info); if (peer_socket != INVALID_SOCKET) { /* Initialize fast poll specific fields. */ handle->peer_socket = peer_socket; } else { /* Initialize slow poll specific fields. */ handle->flags |= UV_HANDLE_POLL_SLOW; } /* Initialize 2 poll reqs. */ handle->submitted_events_1 = 0; UV_REQ_INIT(&handle->poll_req_1, UV_POLL_REQ); handle->poll_req_1.data = handle; handle->submitted_events_2 = 0; UV_REQ_INIT(&handle->poll_req_2, UV_POLL_REQ); handle->poll_req_2.data = handle; return 0; } static int uv__poll_set(uv_poll_t* handle, int events, uv_poll_cb cb) { int submitted_events; assert(handle->type == UV_POLL); assert(!(handle->flags & UV_HANDLE_CLOSING)); assert((events & ~(UV_READABLE | UV_WRITABLE | UV_DISCONNECT | UV_PRIORITIZED)) == 0); handle->events = events; handle->poll_cb = cb; if (handle->events == 0) { uv__handle_stop(handle); return 0; } uv__handle_start(handle); submitted_events = handle->submitted_events_1 | handle->submitted_events_2; if (handle->events & ~submitted_events) { if (handle->flags & UV_HANDLE_POLL_SLOW) { uv__slow_poll_submit_poll_req(handle->loop, handle); } else { uv__fast_poll_submit_poll_req(handle->loop, handle); } } return 0; } int uv_poll_start(uv_poll_t* handle, int events, uv_poll_cb cb) { return uv__poll_set(handle, events, cb); } int uv_poll_stop(uv_poll_t* handle) { return uv__poll_set(handle, 0, handle->poll_cb); } void uv__process_poll_req(uv_loop_t* loop, uv_poll_t* handle, uv_req_t* req) { if (!(handle->flags & UV_HANDLE_POLL_SLOW)) { uv__fast_poll_process_poll_req(loop, handle, req); } else { uv__slow_poll_process_poll_req(loop, handle, req); } } int uv__poll_close(uv_loop_t* loop, uv_poll_t* handle) { AFD_POLL_INFO afd_poll_info; DWORD error; int result; handle->events = 0; uv__handle_closing(handle); if (handle->submitted_events_1 == 0 && handle->submitted_events_2 == 0) { uv__want_endgame(loop, (uv_handle_t*) handle); return 0; } if (handle->flags & UV_HANDLE_POLL_SLOW) return 0; /* Cancel outstanding poll requests by executing another, unique poll * request that forces the outstanding ones to return. 
*/ afd_poll_info.Exclusive = TRUE; afd_poll_info.NumberOfHandles = 1; afd_poll_info.Timeout.QuadPart = INT64_MAX; afd_poll_info.Handles[0].Handle = (HANDLE) handle->socket; afd_poll_info.Handles[0].Status = 0; afd_poll_info.Handles[0].Events = AFD_POLL_ALL; result = uv__msafd_poll(handle->socket, &afd_poll_info, uv__get_afd_poll_info_dummy(), uv__get_overlapped_dummy()); if (result == SOCKET_ERROR) { error = WSAGetLastError(); if (error != WSA_IO_PENDING) return uv_translate_sys_error(error); } return 0; } void uv__poll_endgame(uv_loop_t* loop, uv_poll_t* handle) { assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); assert(handle->submitted_events_1 == 0); assert(handle->submitted_events_2 == 0); uv__handle_close(handle); } gevent-24.11.1/deps/libuv/src/win/process-stdio.c000066400000000000000000000303451471441230600215610ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" /* * The `child_stdio_buffer` buffer has the following layout: * int number_of_fds * unsigned char crt_flags[number_of_fds] * HANDLE os_handle[number_of_fds] */ #define CHILD_STDIO_SIZE(count) \ (sizeof(int) + \ sizeof(unsigned char) * (count) + \ sizeof(uintptr_t) * (count)) #define CHILD_STDIO_COUNT(buffer) \ *((unsigned int*) (buffer)) #define CHILD_STDIO_CRT_FLAGS(buffer, fd) \ *((unsigned char*) (buffer) + sizeof(int) + fd) #define CHILD_STDIO_HANDLE(buffer, fd) \ *((HANDLE*) ((unsigned char*) (buffer) + \ sizeof(int) + \ sizeof(unsigned char) * \ CHILD_STDIO_COUNT((buffer)) + \ sizeof(HANDLE) * (fd))) /* CRT file descriptor mode flags */ #define FOPEN 0x01 #define FEOFLAG 0x02 #define FCRLF 0x04 #define FPIPE 0x08 #define FNOINHERIT 0x10 #define FAPPEND 0x20 #define FDEV 0x40 #define FTEXT 0x80 /* * Clear the HANDLE_FLAG_INHERIT flag from all HANDLEs that were inherited * the parent process. Don't check for errors - the stdio handles may not be * valid, or may be closed already. There is no guarantee that this function * does a perfect job. */ void uv_disable_stdio_inheritance(void) { HANDLE handle; STARTUPINFOW si; /* Make the windows stdio handles non-inheritable. 
*/ handle = GetStdHandle(STD_INPUT_HANDLE); if (handle != NULL && handle != INVALID_HANDLE_VALUE) SetHandleInformation(handle, HANDLE_FLAG_INHERIT, 0); handle = GetStdHandle(STD_OUTPUT_HANDLE); if (handle != NULL && handle != INVALID_HANDLE_VALUE) SetHandleInformation(handle, HANDLE_FLAG_INHERIT, 0); handle = GetStdHandle(STD_ERROR_HANDLE); if (handle != NULL && handle != INVALID_HANDLE_VALUE) SetHandleInformation(handle, HANDLE_FLAG_INHERIT, 0); /* Make inherited CRT FDs non-inheritable. */ GetStartupInfoW(&si); if (uv__stdio_verify(si.lpReserved2, si.cbReserved2)) uv__stdio_noinherit(si.lpReserved2); } static int uv__duplicate_handle(uv_loop_t* loop, HANDLE handle, HANDLE* dup) { HANDLE current_process; /* _get_osfhandle will sometimes return -2 in case of an error. This seems to * happen when fd <= 2 and the process' corresponding stdio handle is set to * NULL. Unfortunately DuplicateHandle will happily duplicate (HANDLE) -2, so * this situation goes unnoticed until someone tries to use the duplicate. * Therefore we filter out known-invalid handles here. */ if (handle == INVALID_HANDLE_VALUE || handle == NULL || handle == (HANDLE) -2) { *dup = INVALID_HANDLE_VALUE; return ERROR_INVALID_HANDLE; } current_process = GetCurrentProcess(); if (!DuplicateHandle(current_process, handle, current_process, dup, 0, TRUE, DUPLICATE_SAME_ACCESS)) { *dup = INVALID_HANDLE_VALUE; return GetLastError(); } return 0; } static int uv__duplicate_fd(uv_loop_t* loop, int fd, HANDLE* dup) { HANDLE handle; if (fd == -1) { *dup = INVALID_HANDLE_VALUE; return ERROR_INVALID_HANDLE; } handle = uv__get_osfhandle(fd); return uv__duplicate_handle(loop, handle, dup); } int uv__create_nul_handle(HANDLE* handle_ptr, DWORD access) { HANDLE handle; SECURITY_ATTRIBUTES sa; sa.nLength = sizeof sa; sa.lpSecurityDescriptor = NULL; sa.bInheritHandle = TRUE; handle = CreateFileW(L"NUL", access, FILE_SHARE_READ | FILE_SHARE_WRITE, &sa, OPEN_EXISTING, 0, NULL); if (handle == INVALID_HANDLE_VALUE) { return GetLastError(); } *handle_ptr = handle; return 0; } int uv__stdio_create(uv_loop_t* loop, const uv_process_options_t* options, BYTE** buffer_ptr) { BYTE* buffer; int count, i; int err; count = options->stdio_count; if (count < 0 || count > 255) { /* Only support FDs 0-255 */ return ERROR_NOT_SUPPORTED; } else if (count < 3) { /* There should always be at least 3 stdio handles. */ count = 3; } /* Allocate the child stdio buffer */ buffer = (BYTE*) uv__malloc(CHILD_STDIO_SIZE(count)); if (buffer == NULL) { return ERROR_OUTOFMEMORY; } /* Prepopulate the buffer with INVALID_HANDLE_VALUE handles so we can clean * up on failure. */ CHILD_STDIO_COUNT(buffer) = count; for (i = 0; i < count; i++) { CHILD_STDIO_CRT_FLAGS(buffer, i) = 0; CHILD_STDIO_HANDLE(buffer, i) = INVALID_HANDLE_VALUE; } for (i = 0; i < count; i++) { uv_stdio_container_t fdopt; if (i < options->stdio_count) { fdopt = options->stdio[i]; } else { fdopt.flags = UV_IGNORE; } switch (fdopt.flags & (UV_IGNORE | UV_CREATE_PIPE | UV_INHERIT_FD | UV_INHERIT_STREAM)) { case UV_IGNORE: /* Starting a process with no stdin/stout/stderr can confuse it. So no * matter what the user specified, we make sure the first three FDs are * always open in their typical modes, e. g. stdin be readable and * stdout/err should be writable. For FDs > 2, don't do anything - all * handles in the stdio buffer are initialized with. * INVALID_HANDLE_VALUE, which should be okay. */ if (i <= 2) { DWORD access = (i == 0) ? 
FILE_GENERIC_READ : FILE_GENERIC_WRITE | FILE_READ_ATTRIBUTES; err = uv__create_nul_handle(&CHILD_STDIO_HANDLE(buffer, i), access); if (err) goto error; CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FDEV; } break; case UV_CREATE_PIPE: { /* Create a pair of two connected pipe ends; one end is turned into an * uv_pipe_t for use by the parent. The other one is given to the * child. */ uv_pipe_t* parent_pipe = (uv_pipe_t*) fdopt.data.stream; HANDLE child_pipe = INVALID_HANDLE_VALUE; /* Create a new, connected pipe pair. stdio[i]. stream should point to * an uninitialized, but not connected pipe handle. */ assert(fdopt.data.stream->type == UV_NAMED_PIPE); assert(!(fdopt.data.stream->flags & UV_HANDLE_CONNECTION)); assert(!(fdopt.data.stream->flags & UV_HANDLE_PIPESERVER)); err = uv__create_stdio_pipe_pair(loop, parent_pipe, &child_pipe, fdopt.flags); if (err) goto error; CHILD_STDIO_HANDLE(buffer, i) = child_pipe; CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FPIPE; break; } case UV_INHERIT_FD: { /* Inherit a raw FD. */ HANDLE child_handle; /* Make an inheritable duplicate of the handle. */ err = uv__duplicate_fd(loop, fdopt.data.fd, &child_handle); if (err) { /* If fdopt. data. fd is not valid and fd <= 2, then ignore the * error. */ if (fdopt.data.fd <= 2 && err == ERROR_INVALID_HANDLE) { CHILD_STDIO_CRT_FLAGS(buffer, i) = 0; CHILD_STDIO_HANDLE(buffer, i) = INVALID_HANDLE_VALUE; break; } goto error; } /* Figure out what the type is. */ switch (GetFileType(child_handle)) { case FILE_TYPE_DISK: CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN; break; case FILE_TYPE_PIPE: CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FPIPE; break; case FILE_TYPE_CHAR: case FILE_TYPE_REMOTE: CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FDEV; break; case FILE_TYPE_UNKNOWN: if (GetLastError() != 0) { err = GetLastError(); CloseHandle(child_handle); goto error; } CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FDEV; break; default: assert(0); return -1; } CHILD_STDIO_HANDLE(buffer, i) = child_handle; break; } case UV_INHERIT_STREAM: { /* Use an existing stream as the stdio handle for the child. */ HANDLE stream_handle, child_handle; unsigned char crt_flags; uv_stream_t* stream = fdopt.data.stream; /* Leech the handle out of the stream. */ if (stream->type == UV_TTY) { stream_handle = ((uv_tty_t*) stream)->handle; crt_flags = FOPEN | FDEV; } else if (stream->type == UV_NAMED_PIPE && stream->flags & UV_HANDLE_CONNECTION) { stream_handle = ((uv_pipe_t*) stream)->handle; crt_flags = FOPEN | FPIPE; } else { stream_handle = INVALID_HANDLE_VALUE; crt_flags = 0; } if (stream_handle == NULL || stream_handle == INVALID_HANDLE_VALUE) { /* The handle is already closed, or not yet created, or the stream * type is not supported. */ err = ERROR_NOT_SUPPORTED; goto error; } /* Make an inheritable copy of the handle. 
*/ err = uv__duplicate_handle(loop, stream_handle, &child_handle); if (err) goto error; CHILD_STDIO_HANDLE(buffer, i) = child_handle; CHILD_STDIO_CRT_FLAGS(buffer, i) = crt_flags; break; } default: assert(0); return -1; } } *buffer_ptr = buffer; return 0; error: uv__stdio_destroy(buffer); return err; } void uv__stdio_destroy(BYTE* buffer) { int i, count; count = CHILD_STDIO_COUNT(buffer); for (i = 0; i < count; i++) { HANDLE handle = CHILD_STDIO_HANDLE(buffer, i); if (handle != INVALID_HANDLE_VALUE) { CloseHandle(handle); } } uv__free(buffer); } void uv__stdio_noinherit(BYTE* buffer) { int i, count; count = CHILD_STDIO_COUNT(buffer); for (i = 0; i < count; i++) { HANDLE handle = CHILD_STDIO_HANDLE(buffer, i); if (handle != INVALID_HANDLE_VALUE) { SetHandleInformation(handle, HANDLE_FLAG_INHERIT, 0); } } } int uv__stdio_verify(BYTE* buffer, WORD size) { unsigned int count; /* Check the buffer pointer. */ if (buffer == NULL) return 0; /* Verify that the buffer is at least big enough to hold the count. */ if (size < CHILD_STDIO_SIZE(0)) return 0; /* Verify if the count is within range. */ count = CHILD_STDIO_COUNT(buffer); if (count > 256) return 0; /* Verify that the buffer size is big enough to hold info for N FDs. */ if (size < CHILD_STDIO_SIZE(count)) return 0; return 1; } WORD uv__stdio_size(BYTE* buffer) { return (WORD) CHILD_STDIO_SIZE(CHILD_STDIO_COUNT((buffer))); } HANDLE uv__stdio_handle(BYTE* buffer, int fd) { return CHILD_STDIO_HANDLE(buffer, fd); } gevent-24.11.1/deps/libuv/src/win/process.c000066400000000000000000001065331471441230600204440ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include #include #include #include #include #include #include #include /* alloca */ #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" #define SIGKILL 9 typedef struct env_var { const WCHAR* const wide; const WCHAR* const wide_eq; const size_t len; /* including null or '=' */ } env_var_t; #define E_V(str) { L##str, L##str L"=", sizeof(str) } static const env_var_t required_vars[] = { /* keep me sorted */ E_V("HOMEDRIVE"), E_V("HOMEPATH"), E_V("LOGONSERVER"), E_V("PATH"), E_V("SYSTEMDRIVE"), E_V("SYSTEMROOT"), E_V("TEMP"), E_V("USERDOMAIN"), E_V("USERNAME"), E_V("USERPROFILE"), E_V("WINDIR"), }; static HANDLE uv_global_job_handle_; static uv_once_t uv_global_job_handle_init_guard_ = UV_ONCE_INIT; static void uv__init_global_job_handle(void) { /* Create a job object and set it up to kill all contained processes when * it's closed. Since this handle is made non-inheritable and we're not * giving it to anyone, we're the only process holding a reference to it. * That means that if this process exits it is closed and all the processes * it contains are killed. All processes created with uv_spawn that are not * spawned with the UV_PROCESS_DETACHED flag are assigned to this job. * * We're setting the JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK flag so only the * processes that we explicitly add are affected, and *their* subprocesses * are not. This ensures that our child processes are not limited in their * ability to use job control on Windows versions that don't deal with * nested jobs (prior to Windows 8 / Server 2012). It also lets our child * processes created detached processes without explicitly breaking away * from job control (which uv_spawn doesn't, either). */ SECURITY_ATTRIBUTES attr; JOBOBJECT_EXTENDED_LIMIT_INFORMATION info; memset(&attr, 0, sizeof attr); attr.bInheritHandle = FALSE; memset(&info, 0, sizeof info); info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_BREAKAWAY_OK | JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK | JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION | JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE; uv_global_job_handle_ = CreateJobObjectW(&attr, NULL); if (uv_global_job_handle_ == NULL) uv_fatal_error(GetLastError(), "CreateJobObjectW"); if (!SetInformationJobObject(uv_global_job_handle_, JobObjectExtendedLimitInformation, &info, sizeof info)) uv_fatal_error(GetLastError(), "SetInformationJobObject"); } static int uv__utf8_to_utf16_alloc(const char* s, WCHAR** ws_ptr) { int ws_len, r; WCHAR* ws; ws_len = MultiByteToWideChar(CP_UTF8, 0, s, -1, NULL, 0); if (ws_len <= 0) { return GetLastError(); } ws = (WCHAR*) uv__malloc(ws_len * sizeof(WCHAR)); if (ws == NULL) { return ERROR_OUTOFMEMORY; } r = MultiByteToWideChar(CP_UTF8, 0, s, -1, ws, ws_len); assert(r == ws_len); *ws_ptr = ws; return 0; } static void uv__process_init(uv_loop_t* loop, uv_process_t* handle) { uv__handle_init(loop, (uv_handle_t*) handle, UV_PROCESS); handle->exit_cb = NULL; handle->pid = 0; handle->exit_signal = 0; handle->wait_handle = INVALID_HANDLE_VALUE; handle->process_handle = INVALID_HANDLE_VALUE; handle->child_stdio_buffer = NULL; handle->exit_cb_pending = 0; UV_REQ_INIT(&handle->exit_req, UV_PROCESS_EXIT); handle->exit_req.data = handle; } /* * Path search functions */ /* * Helper function for search_path */ static WCHAR* search_path_join_test(const WCHAR* dir, size_t dir_len, const WCHAR* name, size_t name_len, const WCHAR* ext, size_t ext_len, const WCHAR* cwd, size_t cwd_len) { WCHAR *result, *result_pos; DWORD attrs; if (dir_len > 2 && ((dir[0] == L'\\' || dir[0] == L'/') 
&& (dir[1] == L'\\' || dir[1] == L'/'))) { /* It's a UNC path so ignore cwd */ cwd_len = 0; } else if (dir_len >= 1 && (dir[0] == L'/' || dir[0] == L'\\')) { /* It's a full path without drive letter, use cwd's drive letter only */ cwd_len = 2; } else if (dir_len >= 2 && dir[1] == L':' && (dir_len < 3 || (dir[2] != L'/' && dir[2] != L'\\'))) { /* It's a relative path with drive letter (ext.g. D:../some/file) * Replace drive letter in dir by full cwd if it points to the same drive, * otherwise use the dir only. */ if (cwd_len < 2 || _wcsnicmp(cwd, dir, 2) != 0) { cwd_len = 0; } else { dir += 2; dir_len -= 2; } } else if (dir_len > 2 && dir[1] == L':') { /* It's an absolute path with drive letter * Don't use the cwd at all */ cwd_len = 0; } /* Allocate buffer for output */ result = result_pos = (WCHAR*)uv__malloc(sizeof(WCHAR) * (cwd_len + 1 + dir_len + 1 + name_len + 1 + ext_len + 1)); /* Copy cwd */ wcsncpy(result_pos, cwd, cwd_len); result_pos += cwd_len; /* Add a path separator if cwd didn't end with one */ if (cwd_len && wcsrchr(L"\\/:", result_pos[-1]) == NULL) { result_pos[0] = L'\\'; result_pos++; } /* Copy dir */ wcsncpy(result_pos, dir, dir_len); result_pos += dir_len; /* Add a separator if the dir didn't end with one */ if (dir_len && wcsrchr(L"\\/:", result_pos[-1]) == NULL) { result_pos[0] = L'\\'; result_pos++; } /* Copy filename */ wcsncpy(result_pos, name, name_len); result_pos += name_len; if (ext_len) { /* Add a dot if the filename didn't end with one */ if (name_len && result_pos[-1] != '.') { result_pos[0] = L'.'; result_pos++; } /* Copy extension */ wcsncpy(result_pos, ext, ext_len); result_pos += ext_len; } /* Null terminator */ result_pos[0] = L'\0'; attrs = GetFileAttributesW(result); if (attrs != INVALID_FILE_ATTRIBUTES && !(attrs & FILE_ATTRIBUTE_DIRECTORY)) { return result; } uv__free(result); return NULL; } /* * Helper function for search_path */ static WCHAR* path_search_walk_ext(const WCHAR *dir, size_t dir_len, const WCHAR *name, size_t name_len, WCHAR *cwd, size_t cwd_len, int name_has_ext) { WCHAR* result; /* If the name itself has a nonempty extension, try this extension first */ if (name_has_ext) { result = search_path_join_test(dir, dir_len, name, name_len, L"", 0, cwd, cwd_len); if (result != NULL) { return result; } } /* Try .com extension */ result = search_path_join_test(dir, dir_len, name, name_len, L"com", 3, cwd, cwd_len); if (result != NULL) { return result; } /* Try .exe extension */ result = search_path_join_test(dir, dir_len, name, name_len, L"exe", 3, cwd, cwd_len); if (result != NULL) { return result; } return NULL; } /* * search_path searches the system path for an executable filename - * the windows API doesn't provide this as a standalone function nor as an * option to CreateProcess. * * It tries to return an absolute filename. * * Furthermore, it tries to follow the semantics that cmd.exe, with this * exception that PATHEXT environment variable isn't used. Since CreateProcess * can start only .com and .exe files, only those extensions are tried. This * behavior equals that of msvcrt's spawn functions. * * - Do not search the path if the filename already contains a path (either * relative or absolute). * * - If there's really only a filename, check the current directory for file, * then search all path directories. * * - If filename specified has *any* extension, search for the file with the * specified extension first. * * - If the literal filename is not found in a directory, try *appending* * (not replacing) .com first and then .exe. 
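 *   For example, a lookup for "foo.bat" probes "foo.bat", "foo.bat.com" and
 *   "foo.bat.exe" in that order, whereas a bare "foo" probes only "foo.com"
 *   and then "foo.exe" (see path_search_walk_ext above).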
* * - The path variable may contain relative paths; relative paths are relative * to the cwd. * * - Directories in path may or may not end with a trailing backslash. * * - CMD does not trim leading/trailing whitespace from path/pathex entries * nor from the environment variables as a whole. * * - When cmd.exe cannot read a directory, it will just skip it and go on * searching. However, unlike posix-y systems, it will happily try to run a * file that is not readable/executable; if the spawn fails it will not * continue searching. * * UNC path support: we are dealing with UNC paths in both the path and the * filename. This is a deviation from what cmd.exe does (it does not let you * start a program by specifying an UNC path on the command line) but this is * really a pointless restriction. * */ static WCHAR* search_path(const WCHAR *file, WCHAR *cwd, const WCHAR *path) { int file_has_dir; WCHAR* result = NULL; WCHAR *file_name_start; WCHAR *dot; const WCHAR *dir_start, *dir_end, *dir_path; size_t dir_len; int name_has_ext; size_t file_len = wcslen(file); size_t cwd_len = wcslen(cwd); /* If the caller supplies an empty filename, * we're not gonna return c:\windows\.exe -- GFY! */ if (file_len == 0 || (file_len == 1 && file[0] == L'.')) { return NULL; } /* Find the start of the filename so we can split the directory from the * name. */ for (file_name_start = (WCHAR*)file + file_len; file_name_start > file && file_name_start[-1] != L'\\' && file_name_start[-1] != L'/' && file_name_start[-1] != L':'; file_name_start--); file_has_dir = file_name_start != file; /* Check if the filename includes an extension */ dot = wcschr(file_name_start, L'.'); name_has_ext = (dot != NULL && dot[1] != L'\0'); if (file_has_dir) { /* The file has a path inside, don't use path */ result = path_search_walk_ext( file, file_name_start - file, file_name_start, file_len - (file_name_start - file), cwd, cwd_len, name_has_ext); } else { dir_end = path; /* The file is really only a name; look in cwd first, then scan path */ result = path_search_walk_ext(L"", 0, file, file_len, cwd, cwd_len, name_has_ext); while (result == NULL) { if (*dir_end == L'\0') { break; } /* Skip the separator that dir_end now points to */ if (dir_end != path || *path == L';') { dir_end++; } /* Next slice starts just after where the previous one ended */ dir_start = dir_end; /* If path is quoted, find quote end */ if (*dir_start == L'"' || *dir_start == L'\'') { dir_end = wcschr(dir_start + 1, *dir_start); if (dir_end == NULL) { dir_end = wcschr(dir_start, L'\0'); } } /* Slice until the next ; or \0 is found */ dir_end = wcschr(dir_end, L';'); if (dir_end == NULL) { dir_end = wcschr(dir_start, L'\0'); } /* If the slice is zero-length, don't bother */ if (dir_end - dir_start == 0) { continue; } dir_path = dir_start; dir_len = dir_end - dir_start; /* Adjust if the path is quoted. 
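 * The leading quote and, when present, a trailing quote are stripped from
 * the slice before it is handed to path_search_walk_ext.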
*/ if (dir_path[0] == '"' || dir_path[0] == '\'') { ++dir_path; --dir_len; } if (dir_path[dir_len - 1] == '"' || dir_path[dir_len - 1] == '\'') { --dir_len; } result = path_search_walk_ext(dir_path, dir_len, file, file_len, cwd, cwd_len, name_has_ext); } } return result; } /* * Quotes command line arguments * Returns a pointer to the end (next char to be written) of the buffer */ WCHAR* quote_cmd_arg(const WCHAR *source, WCHAR *target) { size_t len = wcslen(source); size_t i; int quote_hit; WCHAR* start; if (len == 0) { /* Need double quotation for empty argument */ *(target++) = L'"'; *(target++) = L'"'; return target; } if (NULL == wcspbrk(source, L" \t\"")) { /* No quotation needed */ wcsncpy(target, source, len); target += len; return target; } if (NULL == wcspbrk(source, L"\"\\")) { /* * No embedded double quotes or backlashes, so I can just wrap * quote marks around the whole thing. */ *(target++) = L'"'; wcsncpy(target, source, len); target += len; *(target++) = L'"'; return target; } /* * Expected input/output: * input : hello"world * output: "hello\"world" * input : hello""world * output: "hello\"\"world" * input : hello\world * output: hello\world * input : hello\\world * output: hello\\world * input : hello\"world * output: "hello\\\"world" * input : hello\\"world * output: "hello\\\\\"world" * input : hello world\ * output: "hello world\\" */ *(target++) = L'"'; start = target; quote_hit = 1; for (i = len; i > 0; --i) { *(target++) = source[i - 1]; if (quote_hit && source[i - 1] == L'\\') { *(target++) = L'\\'; } else if(source[i - 1] == L'"') { quote_hit = 1; *(target++) = L'\\'; } else { quote_hit = 0; } } target[0] = L'\0'; wcsrev(start); *(target++) = L'"'; return target; } int make_program_args(char** args, int verbatim_arguments, WCHAR** dst_ptr) { char** arg; WCHAR* dst = NULL; WCHAR* temp_buffer = NULL; size_t dst_len = 0; size_t temp_buffer_len = 0; WCHAR* pos; int arg_count = 0; int err = 0; /* Count the required size. */ for (arg = args; *arg; arg++) { DWORD arg_len; arg_len = MultiByteToWideChar(CP_UTF8, 0, *arg, -1, NULL, 0); if (arg_len == 0) { return GetLastError(); } dst_len += arg_len; if (arg_len > temp_buffer_len) temp_buffer_len = arg_len; arg_count++; } /* Adjust for potential quotes. Also assume the worst-case scenario that * every character needs escaping, so we need twice as much space. */ dst_len = dst_len * 2 + arg_count * 2; /* Allocate buffer for the final command line. */ dst = (WCHAR*) uv__malloc(dst_len * sizeof(WCHAR)); if (dst == NULL) { err = ERROR_OUTOFMEMORY; goto error; } /* Allocate temporary working buffer. */ temp_buffer = (WCHAR*) uv__malloc(temp_buffer_len * sizeof(WCHAR)); if (temp_buffer == NULL) { err = ERROR_OUTOFMEMORY; goto error; } pos = dst; for (arg = args; *arg; arg++) { DWORD arg_len; /* Convert argument to wide char. */ arg_len = MultiByteToWideChar(CP_UTF8, 0, *arg, -1, temp_buffer, (int) (dst + dst_len - pos)); if (arg_len == 0) { err = GetLastError(); goto error; } if (verbatim_arguments) { /* Copy verbatim. */ wcscpy(pos, temp_buffer); pos += arg_len - 1; } else { /* Quote/escape, if needed. */ pos = quote_cmd_arg(temp_buffer, pos); } *pos++ = *(arg + 1) ? 
L' ' : L'\0'; } uv__free(temp_buffer); *dst_ptr = dst; return 0; error: uv__free(dst); uv__free(temp_buffer); return err; } int env_strncmp(const wchar_t* a, int na, const wchar_t* b) { wchar_t* a_eq; wchar_t* b_eq; wchar_t* A; wchar_t* B; int nb; int r; if (na < 0) { a_eq = wcschr(a, L'='); assert(a_eq); na = (int)(long)(a_eq - a); } else { na--; } b_eq = wcschr(b, L'='); assert(b_eq); nb = b_eq - b; A = alloca((na+1) * sizeof(wchar_t)); B = alloca((nb+1) * sizeof(wchar_t)); r = LCMapStringW(LOCALE_INVARIANT, LCMAP_UPPERCASE, a, na, A, na); assert(r==na); A[na] = L'\0'; r = LCMapStringW(LOCALE_INVARIANT, LCMAP_UPPERCASE, b, nb, B, nb); assert(r==nb); B[nb] = L'\0'; for (;;) { wchar_t AA = *A++; wchar_t BB = *B++; if (AA < BB) { return -1; } else if (AA > BB) { return 1; } else if (!AA && !BB) { return 0; } } } static int qsort_wcscmp(const void *a, const void *b) { wchar_t* astr = *(wchar_t* const*)a; wchar_t* bstr = *(wchar_t* const*)b; return env_strncmp(astr, -1, bstr); } /* * The way windows takes environment variables is different than what C does; * Windows wants a contiguous block of null-terminated strings, terminated * with an additional null. * * Windows has a few "essential" environment variables. winsock will fail * to initialize if SYSTEMROOT is not defined; some APIs make reference to * TEMP. SYSTEMDRIVE is probably also important. We therefore ensure that * these get defined if the input environment block does not contain any * values for them. * * Also add variables known to Cygwin to be required for correct * subprocess operation in many cases: * https://github.com/Alexpux/Cygwin/blob/b266b04fbbd3a595f02ea149e4306d3ab9b1fe3d/winsup/cygwin/environ.cc#L955 * */ int make_program_env(char* env_block[], WCHAR** dst_ptr) { WCHAR* dst; WCHAR* ptr; char** env; size_t env_len = 0; int len; size_t i; DWORD var_size; size_t env_block_count = 1; /* 1 for null-terminator */ WCHAR* dst_copy; WCHAR** ptr_copy; WCHAR** env_copy; DWORD required_vars_value_len[ARRAY_SIZE(required_vars)]; /* first pass: determine size in UTF-16 */ for (env = env_block; *env; env++) { int len; if (strchr(*env, '=')) { len = MultiByteToWideChar(CP_UTF8, 0, *env, -1, NULL, 0); if (len <= 0) { return GetLastError(); } env_len += len; env_block_count++; } } /* second pass: copy to UTF-16 environment block */ dst_copy = (WCHAR*)uv__malloc(env_len * sizeof(WCHAR)); if (dst_copy == NULL && env_len > 0) { return ERROR_OUTOFMEMORY; } env_copy = alloca(env_block_count * sizeof(WCHAR*)); ptr = dst_copy; ptr_copy = env_copy; for (env = env_block; *env; env++) { if (strchr(*env, '=')) { len = MultiByteToWideChar(CP_UTF8, 0, *env, -1, ptr, (int) (env_len - (ptr - dst_copy))); if (len <= 0) { DWORD err = GetLastError(); uv__free(dst_copy); return err; } *ptr_copy++ = ptr; ptr += len; } } *ptr_copy = NULL; assert(env_len == 0 || env_len == (size_t) (ptr - dst_copy)); /* sort our (UTF-16) copy */ qsort(env_copy, env_block_count-1, sizeof(wchar_t*), qsort_wcscmp); /* third pass: check for required variables */ for (ptr_copy = env_copy, i = 0; i < ARRAY_SIZE(required_vars); ) { int cmp; if (!*ptr_copy) { cmp = -1; } else { cmp = env_strncmp(required_vars[i].wide_eq, required_vars[i].len, *ptr_copy); } if (cmp < 0) { /* missing required var */ var_size = GetEnvironmentVariableW(required_vars[i].wide, NULL, 0); required_vars_value_len[i] = var_size; if (var_size != 0) { env_len += required_vars[i].len; env_len += var_size; } i++; } else { ptr_copy++; if (cmp == 0) i++; } } /* final pass: copy, in sort order, and inserting 
required variables */ dst = uv__malloc((1+env_len) * sizeof(WCHAR)); if (!dst) { uv__free(dst_copy); return ERROR_OUTOFMEMORY; } for (ptr = dst, ptr_copy = env_copy, i = 0; *ptr_copy || i < ARRAY_SIZE(required_vars); ptr += len) { int cmp; if (i >= ARRAY_SIZE(required_vars)) { cmp = 1; } else if (!*ptr_copy) { cmp = -1; } else { cmp = env_strncmp(required_vars[i].wide_eq, required_vars[i].len, *ptr_copy); } if (cmp < 0) { /* missing required var */ len = required_vars_value_len[i]; if (len) { wcscpy(ptr, required_vars[i].wide_eq); ptr += required_vars[i].len; var_size = GetEnvironmentVariableW(required_vars[i].wide, ptr, (int) (env_len - (ptr - dst))); if (var_size != (DWORD) (len - 1)) { /* TODO: handle race condition? */ uv_fatal_error(GetLastError(), "GetEnvironmentVariableW"); } } i++; } else { /* copy var from env_block */ len = wcslen(*ptr_copy) + 1; wmemcpy(ptr, *ptr_copy, len); ptr_copy++; if (cmp == 0) i++; } } /* Terminate with an extra NULL. */ assert(env_len == (size_t) (ptr - dst)); *ptr = L'\0'; uv__free(dst_copy); *dst_ptr = dst; return 0; } /* * Attempt to find the value of the PATH environment variable in the child's * preprocessed environment. * * If found, a pointer into `env` is returned. If not found, NULL is returned. */ static WCHAR* find_path(WCHAR *env) { for (; env != NULL && *env != 0; env += wcslen(env) + 1) { if ((env[0] == L'P' || env[0] == L'p') && (env[1] == L'A' || env[1] == L'a') && (env[2] == L'T' || env[2] == L't') && (env[3] == L'H' || env[3] == L'h') && (env[4] == L'=')) { return &env[5]; } } return NULL; } /* * Called on Windows thread-pool thread to indicate that * a child process has exited. */ static void CALLBACK exit_wait_callback(void* data, BOOLEAN didTimeout) { uv_process_t* process = (uv_process_t*) data; uv_loop_t* loop = process->loop; assert(didTimeout == FALSE); assert(process); assert(!process->exit_cb_pending); process->exit_cb_pending = 1; /* Post completed */ POST_COMPLETION_FOR_REQ(loop, &process->exit_req); } /* Called on main thread after a child process has exited. */ void uv__process_proc_exit(uv_loop_t* loop, uv_process_t* handle) { int64_t exit_code; DWORD status; assert(handle->exit_cb_pending); handle->exit_cb_pending = 0; /* If we're closing, don't call the exit callback. Just schedule a close * callback now. */ if (handle->flags & UV_HANDLE_CLOSING) { uv__want_endgame(loop, (uv_handle_t*) handle); return; } /* Unregister from process notification. */ if (handle->wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(handle->wait_handle); handle->wait_handle = INVALID_HANDLE_VALUE; } /* Set the handle to inactive: no callbacks will be made after the exit * callback. */ uv__handle_stop(handle); if (GetExitCodeProcess(handle->process_handle, &status)) { exit_code = status; } else { /* Unable to obtain the exit code. This should never happen. */ exit_code = uv_translate_sys_error(GetLastError()); } /* Fire the exit callback. */ if (handle->exit_cb) { handle->exit_cb(handle, exit_code, handle->exit_signal); } } void uv__process_close(uv_loop_t* loop, uv_process_t* handle) { uv__handle_closing(handle); if (handle->wait_handle != INVALID_HANDLE_VALUE) { /* This blocks until either the wait was cancelled, or the callback has * completed. */ BOOL r = UnregisterWaitEx(handle->wait_handle, INVALID_HANDLE_VALUE); if (!r) { /* This should never happen, and if it happens, we can't recover... 
*/ uv_fatal_error(GetLastError(), "UnregisterWaitEx"); } handle->wait_handle = INVALID_HANDLE_VALUE; } if (!handle->exit_cb_pending) { uv__want_endgame(loop, (uv_handle_t*)handle); } } void uv__process_endgame(uv_loop_t* loop, uv_process_t* handle) { assert(!handle->exit_cb_pending); assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); /* Clean-up the process handle. */ CloseHandle(handle->process_handle); uv__handle_close(handle); } int uv_spawn(uv_loop_t* loop, uv_process_t* process, const uv_process_options_t* options) { int i; int err = 0; WCHAR* path = NULL, *alloc_path = NULL; BOOL result; WCHAR* application_path = NULL, *application = NULL, *arguments = NULL, *env = NULL, *cwd = NULL; STARTUPINFOW startup; PROCESS_INFORMATION info; DWORD process_flags; uv__process_init(loop, process); process->exit_cb = options->exit_cb; if (options->flags & (UV_PROCESS_SETGID | UV_PROCESS_SETUID)) { return UV_ENOTSUP; } if (options->file == NULL || options->args == NULL) { return UV_EINVAL; } assert(options->file != NULL); assert(!(options->flags & ~(UV_PROCESS_DETACHED | UV_PROCESS_SETGID | UV_PROCESS_SETUID | UV_PROCESS_WINDOWS_HIDE | UV_PROCESS_WINDOWS_HIDE_CONSOLE | UV_PROCESS_WINDOWS_HIDE_GUI | UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS))); err = uv__utf8_to_utf16_alloc(options->file, &application); if (err) goto done; err = make_program_args( options->args, options->flags & UV_PROCESS_WINDOWS_VERBATIM_ARGUMENTS, &arguments); if (err) goto done; if (options->env) { err = make_program_env(options->env, &env); if (err) goto done; } if (options->cwd) { /* Explicit cwd */ err = uv__utf8_to_utf16_alloc(options->cwd, &cwd); if (err) goto done; } else { /* Inherit cwd */ DWORD cwd_len, r; cwd_len = GetCurrentDirectoryW(0, NULL); if (!cwd_len) { err = GetLastError(); goto done; } cwd = (WCHAR*) uv__malloc(cwd_len * sizeof(WCHAR)); if (cwd == NULL) { err = ERROR_OUTOFMEMORY; goto done; } r = GetCurrentDirectoryW(cwd_len, cwd); if (r == 0 || r >= cwd_len) { err = GetLastError(); goto done; } } /* Get PATH environment variable. */ path = find_path(env); if (path == NULL) { DWORD path_len, r; path_len = GetEnvironmentVariableW(L"PATH", NULL, 0); if (path_len == 0) { err = GetLastError(); goto done; } alloc_path = (WCHAR*) uv__malloc(path_len * sizeof(WCHAR)); if (alloc_path == NULL) { err = ERROR_OUTOFMEMORY; goto done; } path = alloc_path; r = GetEnvironmentVariableW(L"PATH", path, path_len); if (r == 0 || r >= path_len) { err = GetLastError(); goto done; } } err = uv__stdio_create(loop, options, &process->child_stdio_buffer); if (err) goto done; application_path = search_path(application, cwd, path); if (application_path == NULL) { /* Not found. */ err = ERROR_FILE_NOT_FOUND; goto done; } startup.cb = sizeof(startup); startup.lpReserved = NULL; startup.lpDesktop = NULL; startup.lpTitle = NULL; startup.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW; startup.cbReserved2 = uv__stdio_size(process->child_stdio_buffer); startup.lpReserved2 = (BYTE*) process->child_stdio_buffer; startup.hStdInput = uv__stdio_handle(process->child_stdio_buffer, 0); startup.hStdOutput = uv__stdio_handle(process->child_stdio_buffer, 1); startup.hStdError = uv__stdio_handle(process->child_stdio_buffer, 2); process_flags = CREATE_UNICODE_ENVIRONMENT; if ((options->flags & UV_PROCESS_WINDOWS_HIDE_CONSOLE) || (options->flags & UV_PROCESS_WINDOWS_HIDE)) { /* Avoid creating console window if stdio is not inherited. 
*/ for (i = 0; i < options->stdio_count; i++) { if (options->stdio[i].flags & UV_INHERIT_FD) break; if (i == options->stdio_count - 1) process_flags |= CREATE_NO_WINDOW; } } if ((options->flags & UV_PROCESS_WINDOWS_HIDE_GUI) || (options->flags & UV_PROCESS_WINDOWS_HIDE)) { /* Use SW_HIDE to avoid any potential process window. */ startup.wShowWindow = SW_HIDE; } else { startup.wShowWindow = SW_SHOWDEFAULT; } if (options->flags & UV_PROCESS_DETACHED) { /* Note that we're not setting the CREATE_BREAKAWAY_FROM_JOB flag. That * means that libuv might not let you create a fully daemonized process * when run under job control. However the type of job control that libuv * itself creates doesn't trickle down to subprocesses so they can still * daemonize. * * A reason to not do this is that CREATE_BREAKAWAY_FROM_JOB makes the * CreateProcess call fail if we're under job control that doesn't allow * breakaway. */ process_flags |= DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP; } if (!CreateProcessW(application_path, arguments, NULL, NULL, 1, process_flags, env, cwd, &startup, &info)) { /* CreateProcessW failed. */ err = GetLastError(); goto done; } /* Spawn succeeded. Beyond this point, failure is reported asynchronously. */ process->process_handle = info.hProcess; process->pid = info.dwProcessId; /* If the process isn't spawned as detached, assign to the global job object * so windows will kill it when the parent process dies. */ if (!(options->flags & UV_PROCESS_DETACHED)) { uv_once(&uv_global_job_handle_init_guard_, uv__init_global_job_handle); if (!AssignProcessToJobObject(uv_global_job_handle_, info.hProcess)) { /* AssignProcessToJobObject might fail if this process is under job * control and the job doesn't have the * JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK flag set, on a Windows version * that doesn't support nested jobs. * * When that happens we just swallow the error and continue without * establishing a kill-child-on-parent-exit relationship, otherwise * there would be no way for libuv applications run under job control * to spawn processes at all. */ DWORD err = GetLastError(); if (err != ERROR_ACCESS_DENIED) uv_fatal_error(err, "AssignProcessToJobObject"); } } /* Set IPC pid to all IPC pipes. */ for (i = 0; i < options->stdio_count; i++) { const uv_stdio_container_t* fdopt = &options->stdio[i]; if (fdopt->flags & UV_CREATE_PIPE && fdopt->data.stream->type == UV_NAMED_PIPE && ((uv_pipe_t*) fdopt->data.stream)->ipc) { ((uv_pipe_t*) fdopt->data.stream)->pipe.conn.ipc_remote_pid = info.dwProcessId; } } /* Setup notifications for when the child process exits. */ result = RegisterWaitForSingleObject(&process->wait_handle, process->process_handle, exit_wait_callback, (void*)process, INFINITE, WT_EXECUTEINWAITTHREAD | WT_EXECUTEONLYONCE); if (!result) { uv_fatal_error(GetLastError(), "RegisterWaitForSingleObject"); } CloseHandle(info.hThread); assert(!err); /* Make the handle active. It will remain active until the exit callback is * made or the handle is closed, whichever happens first. */ uv__handle_start(process); /* Cleanup, whether we succeeded or failed. */ done: uv__free(application); uv__free(application_path); uv__free(arguments); uv__free(cwd); uv__free(env); uv__free(alloc_path); if (process->child_stdio_buffer != NULL) { /* Clean up child stdio handles. 
*/ uv__stdio_destroy(process->child_stdio_buffer); process->child_stdio_buffer = NULL; } return uv_translate_sys_error(err); } static int uv__kill(HANDLE process_handle, int signum) { if (signum < 0 || signum >= NSIG) { return UV_EINVAL; } switch (signum) { case SIGTERM: case SIGKILL: case SIGINT: { /* Unconditionally terminate the process. On Windows, killed processes * normally return 1. */ DWORD status; int err; if (TerminateProcess(process_handle, 1)) return 0; /* If the process already exited before TerminateProcess was called,. * TerminateProcess will fail with ERROR_ACCESS_DENIED. */ err = GetLastError(); if (err == ERROR_ACCESS_DENIED && GetExitCodeProcess(process_handle, &status) && status != STILL_ACTIVE) { return UV_ESRCH; } return uv_translate_sys_error(err); } case 0: { /* Health check: is the process still alive? */ DWORD status; if (!GetExitCodeProcess(process_handle, &status)) return uv_translate_sys_error(GetLastError()); if (status != STILL_ACTIVE) return UV_ESRCH; return 0; } default: /* Unsupported signal. */ return UV_ENOSYS; } } int uv_process_kill(uv_process_t* process, int signum) { int err; if (process->process_handle == INVALID_HANDLE_VALUE) { return UV_EINVAL; } err = uv__kill(process->process_handle, signum); if (err) { return err; /* err is already translated. */ } process->exit_signal = signum; return 0; } int uv_kill(int pid, int signum) { int err; HANDLE process_handle; if (pid == 0) { process_handle = GetCurrentProcess(); } else { process_handle = OpenProcess(PROCESS_TERMINATE | PROCESS_QUERY_INFORMATION, FALSE, pid); } if (process_handle == NULL) { err = GetLastError(); if (err == ERROR_INVALID_PARAMETER) { return UV_ESRCH; } else { return uv_translate_sys_error(err); } } err = uv__kill(process_handle, signum); CloseHandle(process_handle); return err; /* err is already translated. */ } gevent-24.11.1/deps/libuv/src/win/req-inl.h000066400000000000000000000177461471441230600203510ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_REQ_INL_H_ #define UV_WIN_REQ_INL_H_ #include #include "uv.h" #include "internal.h" #define SET_REQ_STATUS(req, status) \ (req)->u.io.overlapped.Internal = (ULONG_PTR) (status) #define SET_REQ_ERROR(req, error) \ SET_REQ_STATUS((req), NTSTATUS_FROM_WIN32((error))) /* Note: used open-coded in UV_REQ_INIT() because of a circular dependency * between src/uv-common.h and src/win/internal.h. 
*/ #define SET_REQ_SUCCESS(req) \ SET_REQ_STATUS((req), STATUS_SUCCESS) #define GET_REQ_STATUS(req) \ ((NTSTATUS) (req)->u.io.overlapped.Internal) #define REQ_SUCCESS(req) \ (NT_SUCCESS(GET_REQ_STATUS((req)))) #define GET_REQ_ERROR(req) \ (pRtlNtStatusToDosError(GET_REQ_STATUS((req)))) #define GET_REQ_SOCK_ERROR(req) \ (uv__ntstatus_to_winsock_error(GET_REQ_STATUS((req)))) #define REGISTER_HANDLE_REQ(loop, handle, req) \ do { \ INCREASE_ACTIVE_COUNT((loop), (handle)); \ uv__req_register((loop), (req)); \ } while (0) #define UNREGISTER_HANDLE_REQ(loop, handle, req) \ do { \ DECREASE_ACTIVE_COUNT((loop), (handle)); \ uv__req_unregister((loop), (req)); \ } while (0) #define UV_SUCCEEDED_WITHOUT_IOCP(result) \ ((result) && (handle->flags & UV_HANDLE_SYNC_BYPASS_IOCP)) #define UV_SUCCEEDED_WITH_IOCP(result) \ ((result) || (GetLastError() == ERROR_IO_PENDING)) #define POST_COMPLETION_FOR_REQ(loop, req) \ if (!PostQueuedCompletionStatus((loop)->iocp, \ 0, \ 0, \ &((req)->u.io.overlapped))) { \ uv_fatal_error(GetLastError(), "PostQueuedCompletionStatus"); \ } INLINE static uv_req_t* uv__overlapped_to_req(OVERLAPPED* overlapped) { return CONTAINING_RECORD(overlapped, uv_req_t, u.io.overlapped); } INLINE static void uv__insert_pending_req(uv_loop_t* loop, uv_req_t* req) { req->next_req = NULL; if (loop->pending_reqs_tail) { #ifdef _DEBUG /* Ensure the request is not already in the queue, or the queue * will get corrupted. */ uv_req_t* current = loop->pending_reqs_tail; do { assert(req != current); current = current->next_req; } while(current != loop->pending_reqs_tail); #endif req->next_req = loop->pending_reqs_tail->next_req; loop->pending_reqs_tail->next_req = req; loop->pending_reqs_tail = req; } else { req->next_req = req; loop->pending_reqs_tail = req; } } #define DELEGATE_STREAM_REQ(loop, req, method, handle_at) \ do { \ switch (((uv_handle_t*) (req)->handle_at)->type) { \ case UV_TCP: \ uv__process_tcp_##method##_req(loop, \ (uv_tcp_t*) ((req)->handle_at), \ req); \ break; \ \ case UV_NAMED_PIPE: \ uv__process_pipe_##method##_req(loop, \ (uv_pipe_t*) ((req)->handle_at), \ req); \ break; \ \ case UV_TTY: \ uv__process_tty_##method##_req(loop, \ (uv_tty_t*) ((req)->handle_at), \ req); \ break; \ \ default: \ assert(0); \ } \ } while (0) INLINE static void uv__process_reqs(uv_loop_t* loop) { uv_req_t* req; uv_req_t* first; uv_req_t* next; if (loop->pending_reqs_tail == NULL) return; first = loop->pending_reqs_tail->next_req; next = first; loop->pending_reqs_tail = NULL; while (next != NULL) { req = next; next = req->next_req != first ? 
req->next_req : NULL; switch (req->type) { case UV_READ: DELEGATE_STREAM_REQ(loop, req, read, data); break; case UV_WRITE: DELEGATE_STREAM_REQ(loop, (uv_write_t*) req, write, handle); break; case UV_ACCEPT: DELEGATE_STREAM_REQ(loop, req, accept, data); break; case UV_CONNECT: DELEGATE_STREAM_REQ(loop, (uv_connect_t*) req, connect, handle); break; case UV_SHUTDOWN: DELEGATE_STREAM_REQ(loop, (uv_shutdown_t*) req, shutdown, handle); break; case UV_UDP_RECV: uv__process_udp_recv_req(loop, (uv_udp_t*) req->data, req); break; case UV_UDP_SEND: uv__process_udp_send_req(loop, ((uv_udp_send_t*) req)->handle, (uv_udp_send_t*) req); break; case UV_WAKEUP: uv__process_async_wakeup_req(loop, (uv_async_t*) req->data, req); break; case UV_SIGNAL_REQ: uv__process_signal_req(loop, (uv_signal_t*) req->data, req); break; case UV_POLL_REQ: uv__process_poll_req(loop, (uv_poll_t*) req->data, req); break; case UV_PROCESS_EXIT: uv__process_proc_exit(loop, (uv_process_t*) req->data); break; case UV_FS_EVENT_REQ: uv__process_fs_event_req(loop, req, (uv_fs_event_t*) req->data); break; default: assert(0); } } } #endif /* UV_WIN_REQ_INL_H_ */ gevent-24.11.1/deps/libuv/src/win/signal.c000066400000000000000000000200351471441230600202330ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" RB_HEAD(uv_signal_tree_s, uv_signal_s); static struct uv_signal_tree_s uv__signal_tree = RB_INITIALIZER(uv__signal_tree); static CRITICAL_SECTION uv__signal_lock; static BOOL WINAPI uv__signal_control_handler(DWORD type); int uv__signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum, int oneshot); void uv__signals_init(void) { InitializeCriticalSection(&uv__signal_lock); if (!SetConsoleCtrlHandler(uv__signal_control_handler, TRUE)) abort(); } void uv__signal_cleanup(void) { /* TODO(bnoordhuis) Undo effects of uv_signal_init()? */ } static int uv__signal_compare(uv_signal_t* w1, uv_signal_t* w2) { /* Compare signums first so all watchers with the same signnum end up * adjacent. */ if (w1->signum < w2->signum) return -1; if (w1->signum > w2->signum) return 1; /* Sort by loop pointer, so we can easily look up the first item after * { .signum = x, .loop = NULL }. 
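 * uv__signal_dispatch below relies on this ordering: it seeds RB_NFIND with
 * { signum, loop = NULL } and walks forward through the tree, flagging every
 * watcher whose signum still matches (and posting a completion for it unless
 * one is already pending).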
*/ if ((uintptr_t) w1->loop < (uintptr_t) w2->loop) return -1; if ((uintptr_t) w1->loop > (uintptr_t) w2->loop) return 1; if ((uintptr_t) w1 < (uintptr_t) w2) return -1; if ((uintptr_t) w1 > (uintptr_t) w2) return 1; return 0; } RB_GENERATE_STATIC(uv_signal_tree_s, uv_signal_s, tree_entry, uv__signal_compare) /* * Dispatches signal {signum} to all active uv_signal_t watchers in all loops. * Returns 1 if the signal was dispatched to any watcher, or 0 if there were * no active signal watchers observing this signal. */ int uv__signal_dispatch(int signum) { uv_signal_t lookup; uv_signal_t* handle; int dispatched; dispatched = 0; EnterCriticalSection(&uv__signal_lock); lookup.signum = signum; lookup.loop = NULL; for (handle = RB_NFIND(uv_signal_tree_s, &uv__signal_tree, &lookup); handle != NULL && handle->signum == signum; handle = RB_NEXT(uv_signal_tree_s, &uv__signal_tree, handle)) { unsigned long previous = InterlockedExchange( (volatile LONG*) &handle->pending_signum, signum); if (handle->flags & UV_SIGNAL_ONE_SHOT_DISPATCHED) continue; if (!previous) { POST_COMPLETION_FOR_REQ(handle->loop, &handle->signal_req); } dispatched = 1; if (handle->flags & UV_SIGNAL_ONE_SHOT) handle->flags |= UV_SIGNAL_ONE_SHOT_DISPATCHED; } LeaveCriticalSection(&uv__signal_lock); return dispatched; } static BOOL WINAPI uv__signal_control_handler(DWORD type) { switch (type) { case CTRL_C_EVENT: return uv__signal_dispatch(SIGINT); case CTRL_BREAK_EVENT: return uv__signal_dispatch(SIGBREAK); case CTRL_CLOSE_EVENT: if (uv__signal_dispatch(SIGHUP)) { /* Windows will terminate the process after the control handler * returns. After that it will just terminate our process. Therefore * block the signal handler so the main loop has some time to pick up * the signal and do something for a few seconds. */ Sleep(INFINITE); return TRUE; } return FALSE; case CTRL_LOGOFF_EVENT: case CTRL_SHUTDOWN_EVENT: /* These signals are only sent to services. Services have their own * notification mechanism, so there's no point in handling these. */ default: /* We don't handle these. */ return FALSE; } } int uv_signal_init(uv_loop_t* loop, uv_signal_t* handle) { uv__handle_init(loop, (uv_handle_t*) handle, UV_SIGNAL); handle->pending_signum = 0; handle->signum = 0; handle->signal_cb = NULL; UV_REQ_INIT(&handle->signal_req, UV_SIGNAL_REQ); handle->signal_req.data = handle; return 0; } int uv_signal_stop(uv_signal_t* handle) { uv_signal_t* removed_handle; /* If the watcher wasn't started, this is a no-op. */ if (handle->signum == 0) return 0; EnterCriticalSection(&uv__signal_lock); removed_handle = RB_REMOVE(uv_signal_tree_s, &uv__signal_tree, handle); assert(removed_handle == handle); LeaveCriticalSection(&uv__signal_lock); handle->signum = 0; uv__handle_stop(handle); return 0; } int uv_signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum) { return uv__signal_start(handle, signal_cb, signum, 0); } int uv_signal_start_oneshot(uv_signal_t* handle, uv_signal_cb signal_cb, int signum) { return uv__signal_start(handle, signal_cb, signum, 1); } int uv__signal_start(uv_signal_t* handle, uv_signal_cb signal_cb, int signum, int oneshot) { /* Test for invalid signal values. */ if (signum <= 0 || signum >= NSIG) return UV_EINVAL; /* Short circuit: if the signal watcher is already watching {signum} don't go * through the process of deregistering and registering the handler. * Additionally, this avoids pending signals getting lost in the (small) time * frame that handle->signum == 0. 
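 * In that case only the callback pointer is refreshed; the watcher keeps its
 * place in the signal tree and stays started.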
*/ if (signum == handle->signum) { handle->signal_cb = signal_cb; return 0; } /* If the signal handler was already active, stop it first. */ if (handle->signum != 0) { int r = uv_signal_stop(handle); /* uv_signal_stop is infallible. */ assert(r == 0); } EnterCriticalSection(&uv__signal_lock); handle->signum = signum; if (oneshot) handle->flags |= UV_SIGNAL_ONE_SHOT; RB_INSERT(uv_signal_tree_s, &uv__signal_tree, handle); LeaveCriticalSection(&uv__signal_lock); handle->signal_cb = signal_cb; uv__handle_start(handle); return 0; } void uv__process_signal_req(uv_loop_t* loop, uv_signal_t* handle, uv_req_t* req) { long dispatched_signum; assert(handle->type == UV_SIGNAL); assert(req->type == UV_SIGNAL_REQ); dispatched_signum = InterlockedExchange( (volatile LONG*) &handle->pending_signum, 0); assert(dispatched_signum != 0); /* Check if the pending signal equals the signum that we are watching for. * These can get out of sync when the handler is stopped and restarted while * the signal_req is pending. */ if (dispatched_signum == handle->signum) handle->signal_cb(handle, dispatched_signum); if (handle->flags & UV_SIGNAL_ONE_SHOT) uv_signal_stop(handle); if (handle->flags & UV_HANDLE_CLOSING) { /* When it is closing, it must be stopped at this point. */ assert(handle->signum == 0); uv__want_endgame(loop, (uv_handle_t*) handle); } } void uv__signal_close(uv_loop_t* loop, uv_signal_t* handle) { uv_signal_stop(handle); uv__handle_closing(handle); if (handle->pending_signum == 0) { uv__want_endgame(loop, (uv_handle_t*) handle); } } void uv__signal_endgame(uv_loop_t* loop, uv_signal_t* handle) { assert(handle->flags & UV_HANDLE_CLOSING); assert(!(handle->flags & UV_HANDLE_CLOSED)); assert(handle->signum == 0); assert(handle->pending_signum == 0); handle->flags |= UV_HANDLE_CLOSED; uv__handle_close(handle); } gevent-24.11.1/deps/libuv/src/win/snprintf.c000066400000000000000000000030121471441230600206150ustar00rootroot00000000000000/* Copyright the libuv project contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #if defined(_MSC_VER) && _MSC_VER < 1900 #include #include /* Emulate snprintf() on MSVC<2015, _snprintf() doesn't zero-terminate the buffer * on overflow... */ int snprintf(char* buf, size_t len, const char* fmt, ...) 
{ int n; va_list ap; va_start(ap, fmt); n = _vscprintf(fmt, ap); vsnprintf_s(buf, len, _TRUNCATE, fmt, ap); va_end(ap); return n; } #endif gevent-24.11.1/deps/libuv/src/win/stream-inl.h000066400000000000000000000037251471441230600210450ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_STREAM_INL_H_ #define UV_WIN_STREAM_INL_H_ #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" INLINE static void uv__stream_init(uv_loop_t* loop, uv_stream_t* handle, uv_handle_type type) { uv__handle_init(loop, (uv_handle_t*) handle, type); handle->write_queue_size = 0; handle->activecnt = 0; handle->stream.conn.shutdown_req = NULL; handle->stream.conn.write_reqs_pending = 0; UV_REQ_INIT(&handle->read_req, UV_READ); handle->read_req.event_handle = NULL; handle->read_req.wait_handle = INVALID_HANDLE_VALUE; handle->read_req.data = handle; } INLINE static void uv__connection_init(uv_stream_t* handle) { handle->flags |= UV_HANDLE_CONNECTION; } #endif /* UV_WIN_STREAM_INL_H_ */ gevent-24.11.1/deps/libuv/src/win/stream.c000066400000000000000000000146721471441230600202630ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "req-inl.h" int uv_listen(uv_stream_t* stream, int backlog, uv_connection_cb cb) { int err; if (uv__is_closing(stream)) { return UV_EINVAL; } err = ERROR_INVALID_PARAMETER; switch (stream->type) { case UV_TCP: err = uv__tcp_listen((uv_tcp_t*)stream, backlog, cb); break; case UV_NAMED_PIPE: err = uv__pipe_listen((uv_pipe_t*)stream, backlog, cb); break; default: assert(0); } return uv_translate_sys_error(err); } int uv_accept(uv_stream_t* server, uv_stream_t* client) { int err; err = ERROR_INVALID_PARAMETER; switch (server->type) { case UV_TCP: err = uv__tcp_accept((uv_tcp_t*)server, (uv_tcp_t*)client); break; case UV_NAMED_PIPE: err = uv__pipe_accept((uv_pipe_t*)server, client); break; default: assert(0); } return uv_translate_sys_error(err); } int uv__read_start(uv_stream_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { int err; err = ERROR_INVALID_PARAMETER; switch (handle->type) { case UV_TCP: err = uv__tcp_read_start((uv_tcp_t*)handle, alloc_cb, read_cb); break; case UV_NAMED_PIPE: err = uv__pipe_read_start((uv_pipe_t*)handle, alloc_cb, read_cb); break; case UV_TTY: err = uv__tty_read_start((uv_tty_t*) handle, alloc_cb, read_cb); break; default: assert(0); } return uv_translate_sys_error(err); } int uv_read_stop(uv_stream_t* handle) { int err; if (!(handle->flags & UV_HANDLE_READING)) return 0; err = 0; if (handle->type == UV_TTY) { err = uv__tty_read_stop((uv_tty_t*) handle); } else if (handle->type == UV_NAMED_PIPE) { uv__pipe_read_stop((uv_pipe_t*) handle); } else { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(handle->loop, handle); } return uv_translate_sys_error(err); } int uv_write(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb) { uv_loop_t* loop = handle->loop; int err; if (!(handle->flags & UV_HANDLE_WRITABLE)) { return UV_EPIPE; } err = ERROR_INVALID_PARAMETER; switch (handle->type) { case UV_TCP: err = uv__tcp_write(loop, req, (uv_tcp_t*) handle, bufs, nbufs, cb); break; case UV_NAMED_PIPE: err = uv__pipe_write( loop, req, (uv_pipe_t*) handle, bufs, nbufs, NULL, cb); break; case UV_TTY: err = uv__tty_write(loop, req, (uv_tty_t*) handle, bufs, nbufs, cb); break; default: assert(0); } return uv_translate_sys_error(err); } int uv_write2(uv_write_t* req, uv_stream_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle, uv_write_cb cb) { uv_loop_t* loop = handle->loop; int err; if (send_handle == NULL) { return uv_write(req, handle, bufs, nbufs, cb); } if (handle->type != UV_NAMED_PIPE || !((uv_pipe_t*) handle)->ipc) { return UV_EINVAL; } else if (!(handle->flags & UV_HANDLE_WRITABLE)) { return UV_EPIPE; } err = uv__pipe_write( loop, req, (uv_pipe_t*) handle, bufs, nbufs, send_handle, cb); return uv_translate_sys_error(err); } int uv_try_write(uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs) { if (stream->flags & UV_HANDLE_CLOSING) return UV_EBADF; if (!(stream->flags & UV_HANDLE_WRITABLE)) return UV_EPIPE; switch (stream->type) { case UV_TCP: return uv__tcp_try_write((uv_tcp_t*) stream, bufs, nbufs); case UV_TTY: return uv__tty_try_write((uv_tty_t*) stream, bufs, nbufs); case UV_NAMED_PIPE: return UV_EAGAIN; default: assert(0); return UV_ENOSYS; } } int uv_try_write2(uv_stream_t* stream, const uv_buf_t bufs[], unsigned int nbufs, uv_stream_t* send_handle) { if (send_handle != NULL) return UV_EAGAIN; return uv_try_write(stream, bufs, nbufs); } int uv_shutdown(uv_shutdown_t* req, uv_stream_t* 
handle, uv_shutdown_cb cb) { uv_loop_t* loop = handle->loop; if (!(handle->flags & UV_HANDLE_WRITABLE) || handle->flags & UV_HANDLE_SHUTTING || uv__is_closing(handle)) { return UV_ENOTCONN; } UV_REQ_INIT(req, UV_SHUTDOWN); req->handle = handle; req->cb = cb; handle->flags &= ~UV_HANDLE_WRITABLE; handle->flags |= UV_HANDLE_SHUTTING; handle->stream.conn.shutdown_req = req; handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); if (handle->stream.conn.write_reqs_pending == 0) { if (handle->type == UV_NAMED_PIPE) uv__pipe_shutdown(loop, (uv_pipe_t*) handle, req); else uv__insert_pending_req(loop, (uv_req_t*) req); } return 0; } int uv_is_readable(const uv_stream_t* handle) { return !!(handle->flags & UV_HANDLE_READABLE); } int uv_is_writable(const uv_stream_t* handle) { return !!(handle->flags & UV_HANDLE_WRITABLE); } int uv_stream_set_blocking(uv_stream_t* handle, int blocking) { if (handle->type != UV_NAMED_PIPE) return UV_EINVAL; if (blocking != 0) handle->flags |= UV_HANDLE_BLOCKING_WRITES; else handle->flags &= ~UV_HANDLE_BLOCKING_WRITES; return 0; } gevent-24.11.1/deps/libuv/src/win/tcp.c000066400000000000000000001443371471441230600175600ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "stream-inl.h" #include "req-inl.h" /* * Threshold of active tcp streams for which to preallocate tcp read buffers. * (Due to node slab allocator performing poorly under this pattern, * the optimization is temporarily disabled (threshold=0). This will be * revisited once node allocator is improved.) */ const unsigned int uv_active_tcp_streams_threshold = 0; /* * Number of simultaneous pending AcceptEx calls. 
*/ const unsigned int uv_simultaneous_server_accepts = 32; /* A zero-size buffer for use by uv_tcp_read */ static char uv_zero_[] = ""; static int uv__tcp_nodelay(uv_tcp_t* handle, SOCKET socket, int enable) { if (setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (const char*)&enable, sizeof enable) == -1) { return WSAGetLastError(); } return 0; } static int uv__tcp_keepalive(uv_tcp_t* handle, SOCKET socket, int enable, unsigned int delay) { if (setsockopt(socket, SOL_SOCKET, SO_KEEPALIVE, (const char*)&enable, sizeof enable) == -1) { return WSAGetLastError(); } if (enable && setsockopt(socket, IPPROTO_TCP, TCP_KEEPALIVE, (const char*)&delay, sizeof delay) == -1) { return WSAGetLastError(); } return 0; } static int uv__tcp_set_socket(uv_loop_t* loop, uv_tcp_t* handle, SOCKET socket, int family, int imported) { DWORD yes = 1; int non_ifs_lsp; int err; if (handle->socket != INVALID_SOCKET) return UV_EBUSY; /* Set the socket to nonblocking mode */ if (ioctlsocket(socket, FIONBIO, &yes) == SOCKET_ERROR) { return WSAGetLastError(); } /* Make the socket non-inheritable */ if (!SetHandleInformation((HANDLE) socket, HANDLE_FLAG_INHERIT, 0)) return GetLastError(); /* Associate it with the I/O completion port. Use uv_handle_t pointer as * completion key. */ if (CreateIoCompletionPort((HANDLE)socket, loop->iocp, (ULONG_PTR)socket, 0) == NULL) { if (imported) { handle->flags |= UV_HANDLE_EMULATE_IOCP; } else { return GetLastError(); } } if (family == AF_INET6) { non_ifs_lsp = uv_tcp_non_ifs_lsp_ipv6; } else { non_ifs_lsp = uv_tcp_non_ifs_lsp_ipv4; } if (!(handle->flags & UV_HANDLE_EMULATE_IOCP) && !non_ifs_lsp) { UCHAR sfcnm_flags = FILE_SKIP_SET_EVENT_ON_HANDLE | FILE_SKIP_COMPLETION_PORT_ON_SUCCESS; if (!SetFileCompletionNotificationModes((HANDLE) socket, sfcnm_flags)) return GetLastError(); handle->flags |= UV_HANDLE_SYNC_BYPASS_IOCP; } if (handle->flags & UV_HANDLE_TCP_NODELAY) { err = uv__tcp_nodelay(handle, socket, 1); if (err) return err; } /* TODO: Use stored delay. */ if (handle->flags & UV_HANDLE_TCP_KEEPALIVE) { err = uv__tcp_keepalive(handle, socket, 1, 60); if (err) return err; } handle->socket = socket; if (family == AF_INET6) { handle->flags |= UV_HANDLE_IPV6; } else { assert(!(handle->flags & UV_HANDLE_IPV6)); } return 0; } int uv_tcp_init_ex(uv_loop_t* loop, uv_tcp_t* handle, unsigned int flags) { int domain; /* Use the lower 8 bits for the domain */ domain = flags & 0xFF; if (domain != AF_INET && domain != AF_INET6 && domain != AF_UNSPEC) return UV_EINVAL; if (flags & ~0xFF) return UV_EINVAL; uv__stream_init(loop, (uv_stream_t*) handle, UV_TCP); handle->tcp.serv.accept_reqs = NULL; handle->tcp.serv.pending_accepts = NULL; handle->socket = INVALID_SOCKET; handle->reqs_pending = 0; handle->tcp.serv.func_acceptex = NULL; handle->tcp.conn.func_connectex = NULL; handle->tcp.serv.processed_accepts = 0; handle->delayed_error = 0; /* If anything fails beyond this point we need to remove the handle from * the handle queue, since it was added by uv__handle_init in uv__stream_init. 
*/ if (domain != AF_UNSPEC) { SOCKET sock; DWORD err; sock = socket(domain, SOCK_STREAM, 0); if (sock == INVALID_SOCKET) { err = WSAGetLastError(); QUEUE_REMOVE(&handle->handle_queue); return uv_translate_sys_error(err); } err = uv__tcp_set_socket(handle->loop, handle, sock, domain, 0); if (err) { closesocket(sock); QUEUE_REMOVE(&handle->handle_queue); return uv_translate_sys_error(err); } } return 0; } int uv_tcp_init(uv_loop_t* loop, uv_tcp_t* handle) { return uv_tcp_init_ex(loop, handle, AF_UNSPEC); } void uv__process_tcp_shutdown_req(uv_loop_t* loop, uv_tcp_t* stream, uv_shutdown_t *req) { int err; assert(req); assert(stream->stream.conn.write_reqs_pending == 0); assert(!(stream->flags & UV_HANDLE_SHUT)); assert(stream->flags & UV_HANDLE_CONNECTION); stream->stream.conn.shutdown_req = NULL; stream->flags &= ~UV_HANDLE_SHUTTING; UNREGISTER_HANDLE_REQ(loop, stream, req); err = 0; if (stream->flags & UV_HANDLE_CLOSING) /* The user destroyed the stream before we got to do the shutdown. */ err = UV_ECANCELED; else if (shutdown(stream->socket, SD_SEND) == SOCKET_ERROR) err = uv_translate_sys_error(WSAGetLastError()); else /* Success. */ stream->flags |= UV_HANDLE_SHUT; if (req->cb) req->cb(req, err); DECREASE_PENDING_REQ_COUNT(stream); } void uv__tcp_endgame(uv_loop_t* loop, uv_tcp_t* handle) { unsigned int i; uv_tcp_accept_t* req; assert(handle->flags & UV_HANDLE_CLOSING); assert(handle->reqs_pending == 0); assert(!(handle->flags & UV_HANDLE_CLOSED)); assert(handle->socket == INVALID_SOCKET); if (!(handle->flags & UV_HANDLE_CONNECTION) && handle->tcp.serv.accept_reqs) { if (handle->flags & UV_HANDLE_EMULATE_IOCP) { for (i = 0; i < uv_simultaneous_server_accepts; i++) { req = &handle->tcp.serv.accept_reqs[i]; if (req->wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(req->wait_handle); req->wait_handle = INVALID_HANDLE_VALUE; } if (req->event_handle != NULL) { CloseHandle(req->event_handle); req->event_handle = NULL; } } } uv__free(handle->tcp.serv.accept_reqs); handle->tcp.serv.accept_reqs = NULL; } if (handle->flags & UV_HANDLE_CONNECTION && handle->flags & UV_HANDLE_EMULATE_IOCP) { if (handle->read_req.wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(handle->read_req.wait_handle); handle->read_req.wait_handle = INVALID_HANDLE_VALUE; } if (handle->read_req.event_handle != NULL) { CloseHandle(handle->read_req.event_handle); handle->read_req.event_handle = NULL; } } uv__handle_close(handle); loop->active_tcp_streams--; } /* Unlike on Unix, here we don't set SO_REUSEADDR, because it doesn't just * allow binding to addresses that are in use by sockets in TIME_WAIT, it * effectively allows 'stealing' a port which is in use by another application. * * SO_EXCLUSIVEADDRUSE is also not good here because it does check all sockets, * regardless of state, so we'd get an error even if the port is in use by a * socket in TIME_WAIT state. * * See issue #1360. * */ static int uv__tcp_try_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { DWORD err; int r; if (handle->socket == INVALID_SOCKET) { SOCKET sock; /* Cannot set IPv6-only mode on non-IPv6 socket. 
*/ if ((flags & UV_TCP_IPV6ONLY) && addr->sa_family != AF_INET6) return ERROR_INVALID_PARAMETER; sock = socket(addr->sa_family, SOCK_STREAM, 0); if (sock == INVALID_SOCKET) { return WSAGetLastError(); } err = uv__tcp_set_socket(handle->loop, handle, sock, addr->sa_family, 0); if (err) { closesocket(sock); return err; } } #ifdef IPV6_V6ONLY if (addr->sa_family == AF_INET6) { int on; on = (flags & UV_TCP_IPV6ONLY) != 0; /* TODO: how to handle errors? This may fail if there is no ipv4 stack * available, or when run on XP/2003 which have no support for dualstack * sockets. For now we're silently ignoring the error. */ setsockopt(handle->socket, IPPROTO_IPV6, IPV6_V6ONLY, (const char*)&on, sizeof on); } #endif r = bind(handle->socket, addr, addrlen); if (r == SOCKET_ERROR) { err = WSAGetLastError(); if (err == WSAEADDRINUSE) { /* Some errors are not to be reported until connect() or listen() */ handle->delayed_error = err; } else { return err; } } handle->flags |= UV_HANDLE_BOUND; return 0; } static void CALLBACK post_completion(void* context, BOOLEAN timed_out) { uv_req_t* req; uv_tcp_t* handle; req = (uv_req_t*) context; assert(req != NULL); handle = (uv_tcp_t*)req->data; assert(handle != NULL); assert(!timed_out); if (!PostQueuedCompletionStatus(handle->loop->iocp, req->u.io.overlapped.InternalHigh, 0, &req->u.io.overlapped)) { uv_fatal_error(GetLastError(), "PostQueuedCompletionStatus"); } } static void CALLBACK post_write_completion(void* context, BOOLEAN timed_out) { uv_write_t* req; uv_tcp_t* handle; req = (uv_write_t*) context; assert(req != NULL); handle = (uv_tcp_t*)req->handle; assert(handle != NULL); assert(!timed_out); if (!PostQueuedCompletionStatus(handle->loop->iocp, req->u.io.overlapped.InternalHigh, 0, &req->u.io.overlapped)) { uv_fatal_error(GetLastError(), "PostQueuedCompletionStatus"); } } static void uv__tcp_queue_accept(uv_tcp_t* handle, uv_tcp_accept_t* req) { uv_loop_t* loop = handle->loop; BOOL success; DWORD bytes; SOCKET accept_socket; short family; assert(handle->flags & UV_HANDLE_LISTENING); assert(req->accept_socket == INVALID_SOCKET); /* choose family and extension function */ if (handle->flags & UV_HANDLE_IPV6) { family = AF_INET6; } else { family = AF_INET; } /* Open a socket for the accepted connection. */ accept_socket = socket(family, SOCK_STREAM, 0); if (accept_socket == INVALID_SOCKET) { SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); handle->reqs_pending++; return; } /* Make the socket non-inheritable */ if (!SetHandleInformation((HANDLE) accept_socket, HANDLE_FLAG_INHERIT, 0)) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); handle->reqs_pending++; closesocket(accept_socket); return; } /* Prepare the overlapped structure. */ memset(&(req->u.io.overlapped), 0, sizeof(req->u.io.overlapped)); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { assert(req->event_handle != NULL); req->u.io.overlapped.hEvent = (HANDLE) ((ULONG_PTR) req->event_handle | 1); } success = handle->tcp.serv.func_acceptex(handle->socket, accept_socket, (void*)req->accept_buffer, 0, sizeof(struct sockaddr_storage), sizeof(struct sockaddr_storage), &bytes, &req->u.io.overlapped); if (UV_SUCCEEDED_WITHOUT_IOCP(success)) { /* Process the req without IOCP. */ req->accept_socket = accept_socket; handle->reqs_pending++; uv__insert_pending_req(loop, (uv_req_t*)req); } else if (UV_SUCCEEDED_WITH_IOCP(success)) { /* The req will be processed with IOCP. 
*/ req->accept_socket = accept_socket; handle->reqs_pending++; if (handle->flags & UV_HANDLE_EMULATE_IOCP && req->wait_handle == INVALID_HANDLE_VALUE && !RegisterWaitForSingleObject(&req->wait_handle, req->event_handle, post_completion, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD)) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } } else { /* Make this req pending reporting an error. */ SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); handle->reqs_pending++; /* Destroy the preallocated client socket. */ closesocket(accept_socket); /* Destroy the event handle */ if (handle->flags & UV_HANDLE_EMULATE_IOCP) { CloseHandle(req->event_handle); req->event_handle = NULL; } } } static void uv__tcp_queue_read(uv_loop_t* loop, uv_tcp_t* handle) { uv_read_t* req; uv_buf_t buf; int result; DWORD bytes, flags; assert(handle->flags & UV_HANDLE_READING); assert(!(handle->flags & UV_HANDLE_READ_PENDING)); req = &handle->read_req; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); /* * Preallocate a read buffer if the number of active streams is below * the threshold. */ if (loop->active_tcp_streams < uv_active_tcp_streams_threshold) { handle->flags &= ~UV_HANDLE_ZERO_READ; handle->tcp.conn.read_buffer = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, 65536, &handle->tcp.conn.read_buffer); if (handle->tcp.conn.read_buffer.base == NULL || handle->tcp.conn.read_buffer.len == 0) { handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &handle->tcp.conn.read_buffer); return; } assert(handle->tcp.conn.read_buffer.base != NULL); buf = handle->tcp.conn.read_buffer; } else { handle->flags |= UV_HANDLE_ZERO_READ; buf.base = (char*) &uv_zero_; buf.len = 0; } /* Prepare the overlapped structure. */ memset(&(req->u.io.overlapped), 0, sizeof(req->u.io.overlapped)); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { assert(req->event_handle != NULL); req->u.io.overlapped.hEvent = (HANDLE) ((ULONG_PTR) req->event_handle | 1); } flags = 0; result = WSARecv(handle->socket, (WSABUF*)&buf, 1, &bytes, &flags, &req->u.io.overlapped, NULL); handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; if (UV_SUCCEEDED_WITHOUT_IOCP(result == 0)) { /* Process the req without IOCP. */ req->u.io.overlapped.InternalHigh = bytes; uv__insert_pending_req(loop, (uv_req_t*)req); } else if (UV_SUCCEEDED_WITH_IOCP(result == 0)) { /* The req will be processed with IOCP. */ if (handle->flags & UV_HANDLE_EMULATE_IOCP && req->wait_handle == INVALID_HANDLE_VALUE && !RegisterWaitForSingleObject(&req->wait_handle, req->event_handle, post_completion, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD)) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } } else { /* Make this req pending reporting an error. 
*/ SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } } int uv_tcp_close_reset(uv_tcp_t* handle, uv_close_cb close_cb) { struct linger l = { 1, 0 }; /* Disallow setting SO_LINGER to zero due to some platform inconsistencies */ if (handle->flags & UV_HANDLE_SHUTTING) return UV_EINVAL; if (0 != setsockopt(handle->socket, SOL_SOCKET, SO_LINGER, (const char*)&l, sizeof(l))) return uv_translate_sys_error(WSAGetLastError()); uv_close((uv_handle_t*) handle, close_cb); return 0; } int uv__tcp_listen(uv_tcp_t* handle, int backlog, uv_connection_cb cb) { unsigned int i, simultaneous_accepts; uv_tcp_accept_t* req; int err; assert(backlog > 0); if (handle->flags & UV_HANDLE_LISTENING) { handle->stream.serv.connection_cb = cb; } if (handle->flags & UV_HANDLE_READING) { return WSAEISCONN; } if (handle->delayed_error) { return handle->delayed_error; } if (!(handle->flags & UV_HANDLE_BOUND)) { err = uv__tcp_try_bind(handle, (const struct sockaddr*) &uv_addr_ip4_any_, sizeof(uv_addr_ip4_any_), 0); if (err) return err; if (handle->delayed_error) return handle->delayed_error; } if (!handle->tcp.serv.func_acceptex) { if (!uv__get_acceptex_function(handle->socket, &handle->tcp.serv.func_acceptex)) { return WSAEAFNOSUPPORT; } } /* If this flag is set, we already made this listen call in xfer. */ if (!(handle->flags & UV_HANDLE_SHARED_TCP_SOCKET) && listen(handle->socket, backlog) == SOCKET_ERROR) { return WSAGetLastError(); } handle->flags |= UV_HANDLE_LISTENING; handle->stream.serv.connection_cb = cb; INCREASE_ACTIVE_COUNT(loop, handle); simultaneous_accepts = handle->flags & UV_HANDLE_TCP_SINGLE_ACCEPT ? 1 : uv_simultaneous_server_accepts; if (handle->tcp.serv.accept_reqs == NULL) { handle->tcp.serv.accept_reqs = uv__malloc(uv_simultaneous_server_accepts * sizeof(uv_tcp_accept_t)); if (!handle->tcp.serv.accept_reqs) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } for (i = 0; i < simultaneous_accepts; i++) { req = &handle->tcp.serv.accept_reqs[i]; UV_REQ_INIT(req, UV_ACCEPT); req->accept_socket = INVALID_SOCKET; req->data = handle; req->wait_handle = INVALID_HANDLE_VALUE; if (handle->flags & UV_HANDLE_EMULATE_IOCP) { req->event_handle = CreateEvent(NULL, 0, 0, NULL); if (req->event_handle == NULL) { uv_fatal_error(GetLastError(), "CreateEvent"); } } else { req->event_handle = NULL; } uv__tcp_queue_accept(handle, req); } /* Initialize other unused requests too, because uv_tcp_endgame doesn't * know how many requests were initialized, so it will try to clean up * {uv_simultaneous_server_accepts} requests. */ for (i = simultaneous_accepts; i < uv_simultaneous_server_accepts; i++) { req = &handle->tcp.serv.accept_reqs[i]; UV_REQ_INIT(req, UV_ACCEPT); req->accept_socket = INVALID_SOCKET; req->data = handle; req->wait_handle = INVALID_HANDLE_VALUE; req->event_handle = NULL; } } return 0; } int uv__tcp_accept(uv_tcp_t* server, uv_tcp_t* client) { uv_loop_t* loop = server->loop; int err = 0; int family; uv_tcp_accept_t* req = server->tcp.serv.pending_accepts; if (!req) { /* No valid connections found, so we error out. */ return WSAEWOULDBLOCK; } if (req->accept_socket == INVALID_SOCKET) { return WSAENOTCONN; } if (server->flags & UV_HANDLE_IPV6) { family = AF_INET6; } else { family = AF_INET; } err = uv__tcp_set_socket(client->loop, client, req->accept_socket, family, 0); if (err) { closesocket(req->accept_socket); } else { uv__connection_init((uv_stream_t*) client); /* AcceptEx() implicitly binds the accepted socket. 
*/ client->flags |= UV_HANDLE_BOUND | UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; } /* Prepare the req to pick up a new connection */ server->tcp.serv.pending_accepts = req->next_pending; req->next_pending = NULL; req->accept_socket = INVALID_SOCKET; if (!(server->flags & UV_HANDLE_CLOSING)) { /* Check if we're in a middle of changing the number of pending accepts. */ if (!(server->flags & UV_HANDLE_TCP_ACCEPT_STATE_CHANGING)) { uv__tcp_queue_accept(server, req); } else { /* We better be switching to a single pending accept. */ assert(server->flags & UV_HANDLE_TCP_SINGLE_ACCEPT); server->tcp.serv.processed_accepts++; if (server->tcp.serv.processed_accepts >= uv_simultaneous_server_accepts) { server->tcp.serv.processed_accepts = 0; /* * All previously queued accept requests are now processed. * We now switch to queueing just a single accept. */ uv__tcp_queue_accept(server, &server->tcp.serv.accept_reqs[0]); server->flags &= ~UV_HANDLE_TCP_ACCEPT_STATE_CHANGING; server->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT; } } } loop->active_tcp_streams++; return err; } int uv__tcp_read_start(uv_tcp_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { uv_loop_t* loop = handle->loop; handle->flags |= UV_HANDLE_READING; handle->read_cb = read_cb; handle->alloc_cb = alloc_cb; INCREASE_ACTIVE_COUNT(loop, handle); /* If reading was stopped and then started again, there could still be a read * request pending. */ if (!(handle->flags & UV_HANDLE_READ_PENDING)) { if (handle->flags & UV_HANDLE_EMULATE_IOCP && handle->read_req.event_handle == NULL) { handle->read_req.event_handle = CreateEvent(NULL, 0, 0, NULL); if (handle->read_req.event_handle == NULL) { uv_fatal_error(GetLastError(), "CreateEvent"); } } uv__tcp_queue_read(loop, handle); } return 0; } static int uv__is_loopback(const struct sockaddr_storage* storage) { const struct sockaddr_in* in4; const struct sockaddr_in6* in6; int i; if (storage->ss_family == AF_INET) { in4 = (const struct sockaddr_in*) storage; return in4->sin_addr.S_un.S_un_b.s_b1 == 127; } if (storage->ss_family == AF_INET6) { in6 = (const struct sockaddr_in6*) storage; for (i = 0; i < 7; ++i) { if (in6->sin6_addr.u.Word[i] != 0) return 0; } return in6->sin6_addr.u.Word[7] == htons(1); } return 0; } // Check if Windows version is 10.0.16299 or later static int uv__is_fast_loopback_fail_supported(void) { OSVERSIONINFOW os_info; if (!pRtlGetVersion) return 0; pRtlGetVersion(&os_info); if (os_info.dwMajorVersion < 10) return 0; if (os_info.dwMajorVersion > 10) return 1; if (os_info.dwMinorVersion > 0) return 1; return os_info.dwBuildNumber >= 16299; } static int uv__tcp_try_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, uv_connect_cb cb) { uv_loop_t* loop = handle->loop; TCP_INITIAL_RTO_PARAMETERS retransmit_ioctl; const struct sockaddr* bind_addr; struct sockaddr_storage converted; BOOL success; DWORD bytes; int err; err = uv__convert_to_localhost_if_unspecified(addr, &converted); if (err) return err; if (handle->delayed_error != 0) goto out; if (!(handle->flags & UV_HANDLE_BOUND)) { if (addrlen == sizeof(uv_addr_ip4_any_)) { bind_addr = (const struct sockaddr*) &uv_addr_ip4_any_; } else if (addrlen == sizeof(uv_addr_ip6_any_)) { bind_addr = (const struct sockaddr*) &uv_addr_ip6_any_; } else { abort(); } err = uv__tcp_try_bind(handle, bind_addr, addrlen, 0); if (err) return err; if (handle->delayed_error != 0) goto out; } if (!handle->tcp.conn.func_connectex) { if (!uv__get_connectex_function(handle->socket, &handle->tcp.conn.func_connectex)) { 
return WSAEAFNOSUPPORT; } } /* This makes connect() fail instantly if the target port on the localhost * is not reachable, instead of waiting for 2s. We do not care if this fails. * This only works on Windows version 10.0.16299 and later. */ if (uv__is_fast_loopback_fail_supported() && uv__is_loopback(&converted)) { memset(&retransmit_ioctl, 0, sizeof(retransmit_ioctl)); retransmit_ioctl.Rtt = TCP_INITIAL_RTO_NO_SYN_RETRANSMISSIONS; retransmit_ioctl.MaxSynRetransmissions = TCP_INITIAL_RTO_NO_SYN_RETRANSMISSIONS; WSAIoctl(handle->socket, SIO_TCP_INITIAL_RTO, &retransmit_ioctl, sizeof(retransmit_ioctl), NULL, 0, &bytes, NULL, NULL); } out: UV_REQ_INIT(req, UV_CONNECT); req->handle = (uv_stream_t*) handle; req->cb = cb; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); if (handle->delayed_error != 0) { /* Process the req without IOCP. */ handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); uv__insert_pending_req(loop, (uv_req_t*)req); return 0; } success = handle->tcp.conn.func_connectex(handle->socket, (const struct sockaddr*) &converted, addrlen, NULL, 0, &bytes, &req->u.io.overlapped); if (UV_SUCCEEDED_WITHOUT_IOCP(success)) { /* Process the req without IOCP. */ handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); uv__insert_pending_req(loop, (uv_req_t*)req); } else if (UV_SUCCEEDED_WITH_IOCP(success)) { /* The req will be processed with IOCP. */ handle->reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); } else { return WSAGetLastError(); } return 0; } int uv_tcp_getsockname(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getsockname, name, namelen, handle->delayed_error); } int uv_tcp_getpeername(const uv_tcp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getpeername, name, namelen, handle->delayed_error); } int uv__tcp_write(uv_loop_t* loop, uv_write_t* req, uv_tcp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb) { int result; DWORD bytes; UV_REQ_INIT(req, UV_WRITE); req->handle = (uv_stream_t*) handle; req->cb = cb; /* Prepare the overlapped structure. */ memset(&(req->u.io.overlapped), 0, sizeof(req->u.io.overlapped)); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { req->event_handle = CreateEvent(NULL, 0, 0, NULL); if (req->event_handle == NULL) { uv_fatal_error(GetLastError(), "CreateEvent"); } req->u.io.overlapped.hEvent = (HANDLE) ((ULONG_PTR) req->event_handle | 1); req->wait_handle = INVALID_HANDLE_VALUE; } result = WSASend(handle->socket, (WSABUF*) bufs, nbufs, &bytes, 0, &req->u.io.overlapped, NULL); if (UV_SUCCEEDED_WITHOUT_IOCP(result == 0)) { /* Request completed immediately. */ req->u.io.queued_bytes = 0; handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); uv__insert_pending_req(loop, (uv_req_t*) req); } else if (UV_SUCCEEDED_WITH_IOCP(result == 0)) { /* Request queued by the kernel. 
*/ req->u.io.queued_bytes = uv__count_bufs(bufs, nbufs); handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); handle->write_queue_size += req->u.io.queued_bytes; if (handle->flags & UV_HANDLE_EMULATE_IOCP && !RegisterWaitForSingleObject(&req->wait_handle, req->event_handle, post_write_completion, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD | WT_EXECUTEONLYONCE)) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } } else { /* Send failed due to an error, report it later */ req->u.io.queued_bytes = 0; handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, (uv_req_t*) req); } return 0; } int uv__tcp_try_write(uv_tcp_t* handle, const uv_buf_t bufs[], unsigned int nbufs) { int result; DWORD bytes; if (handle->stream.conn.write_reqs_pending > 0) return UV_EAGAIN; result = WSASend(handle->socket, (WSABUF*) bufs, nbufs, &bytes, 0, NULL, NULL); if (result == SOCKET_ERROR) return uv_translate_sys_error(WSAGetLastError()); else return bytes; } void uv__process_tcp_read_req(uv_loop_t* loop, uv_tcp_t* handle, uv_req_t* req) { DWORD bytes, flags, err; uv_buf_t buf; int count; assert(handle->type == UV_TCP); handle->flags &= ~UV_HANDLE_READ_PENDING; if (!REQ_SUCCESS(req)) { /* An error occurred doing the read. */ if ((handle->flags & UV_HANDLE_READING) || !(handle->flags & UV_HANDLE_ZERO_READ)) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); buf = (handle->flags & UV_HANDLE_ZERO_READ) ? uv_buf_init(NULL, 0) : handle->tcp.conn.read_buffer; err = GET_REQ_SOCK_ERROR(req); if (err == WSAECONNABORTED) { /* Turn WSAECONNABORTED into UV_ECONNRESET to be consistent with Unix. */ err = WSAECONNRESET; } handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); handle->read_cb((uv_stream_t*)handle, uv_translate_sys_error(err), &buf); } } else { if (!(handle->flags & UV_HANDLE_ZERO_READ)) { /* The read was done with a non-zero buffer length. 
*/ if (req->u.io.overlapped.InternalHigh > 0) { /* Successful read */ handle->read_cb((uv_stream_t*)handle, req->u.io.overlapped.InternalHigh, &handle->tcp.conn.read_buffer); /* Read again only if bytes == buf.len */ if (req->u.io.overlapped.InternalHigh < handle->tcp.conn.read_buffer.len) { goto done; } } else { /* Connection closed */ if (handle->flags & UV_HANDLE_READING) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); } buf.base = 0; buf.len = 0; handle->read_cb((uv_stream_t*)handle, UV_EOF, &handle->tcp.conn.read_buffer); goto done; } } /* Do nonblocking reads until the buffer is empty */ count = 32; while ((handle->flags & UV_HANDLE_READING) && (count-- > 0)) { buf = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, 65536, &buf); if (buf.base == NULL || buf.len == 0) { handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); break; } assert(buf.base != NULL); flags = 0; if (WSARecv(handle->socket, (WSABUF*)&buf, 1, &bytes, &flags, NULL, NULL) != SOCKET_ERROR) { if (bytes > 0) { /* Successful read */ handle->read_cb((uv_stream_t*)handle, bytes, &buf); /* Read again only if bytes == buf.len */ if (bytes < buf.len) { break; } } else { /* Connection closed */ handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); handle->read_cb((uv_stream_t*)handle, UV_EOF, &buf); break; } } else { err = WSAGetLastError(); if (err == WSAEWOULDBLOCK) { /* Read buffer was completely empty, report a 0-byte read. */ handle->read_cb((uv_stream_t*)handle, 0, &buf); } else { /* Ouch! serious error. */ handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); if (err == WSAECONNABORTED) { /* Turn WSAECONNABORTED into UV_ECONNRESET to be consistent with * Unix. */ err = WSAECONNRESET; } handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); handle->read_cb((uv_stream_t*)handle, uv_translate_sys_error(err), &buf); } break; } } done: /* Post another read if still reading and not closing. */ if ((handle->flags & UV_HANDLE_READING) && !(handle->flags & UV_HANDLE_READ_PENDING)) { uv__tcp_queue_read(loop, handle); } } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_tcp_write_req(uv_loop_t* loop, uv_tcp_t* handle, uv_write_t* req) { int err; assert(handle->type == UV_TCP); assert(handle->write_queue_size >= req->u.io.queued_bytes); handle->write_queue_size -= req->u.io.queued_bytes; UNREGISTER_HANDLE_REQ(loop, handle, req); if (handle->flags & UV_HANDLE_EMULATE_IOCP) { if (req->wait_handle != INVALID_HANDLE_VALUE) { UnregisterWait(req->wait_handle); req->wait_handle = INVALID_HANDLE_VALUE; } if (req->event_handle != NULL) { CloseHandle(req->event_handle); req->event_handle = NULL; } } if (req->cb) { err = uv_translate_sys_error(GET_REQ_SOCK_ERROR(req)); if (err == UV_ECONNABORTED) { /* use UV_ECANCELED for consistency with Unix */ err = UV_ECANCELED; } req->cb(req, err); } handle->stream.conn.write_reqs_pending--; if (handle->stream.conn.write_reqs_pending == 0) { if (handle->flags & UV_HANDLE_CLOSING) { closesocket(handle->socket); handle->socket = INVALID_SOCKET; } if (handle->flags & UV_HANDLE_SHUTTING) uv__process_tcp_shutdown_req(loop, handle, handle->stream.conn.shutdown_req); } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_tcp_accept_req(uv_loop_t* loop, uv_tcp_t* handle, uv_req_t* raw_req) { uv_tcp_accept_t* req = (uv_tcp_accept_t*) raw_req; int err; assert(handle->type == UV_TCP); /* If handle->accepted_socket is not a valid socket, then uv_queue_accept * must have failed. This is a serious error. 
We stop accepting connections * and report this error to the connection callback. */ if (req->accept_socket == INVALID_SOCKET) { if (handle->flags & UV_HANDLE_LISTENING) { handle->flags &= ~UV_HANDLE_LISTENING; DECREASE_ACTIVE_COUNT(loop, handle); if (handle->stream.serv.connection_cb) { err = GET_REQ_SOCK_ERROR(req); handle->stream.serv.connection_cb((uv_stream_t*)handle, uv_translate_sys_error(err)); } } } else if (REQ_SUCCESS(req) && setsockopt(req->accept_socket, SOL_SOCKET, SO_UPDATE_ACCEPT_CONTEXT, (char*)&handle->socket, sizeof(handle->socket)) == 0) { req->next_pending = handle->tcp.serv.pending_accepts; handle->tcp.serv.pending_accepts = req; /* Accept and SO_UPDATE_ACCEPT_CONTEXT were successful. */ if (handle->stream.serv.connection_cb) { handle->stream.serv.connection_cb((uv_stream_t*)handle, 0); } } else { /* Error related to accepted socket is ignored because the server socket * may still be healthy. If the server socket is broken uv_queue_accept * will detect it. */ closesocket(req->accept_socket); req->accept_socket = INVALID_SOCKET; if (handle->flags & UV_HANDLE_LISTENING) { uv__tcp_queue_accept(handle, req); } } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_tcp_connect_req(uv_loop_t* loop, uv_tcp_t* handle, uv_connect_t* req) { int err; assert(handle->type == UV_TCP); UNREGISTER_HANDLE_REQ(loop, handle, req); err = 0; if (handle->delayed_error) { /* To smooth over the differences between unixes errors that * were reported synchronously on the first connect can be delayed * until the next tick--which is now. */ err = handle->delayed_error; handle->delayed_error = 0; } else if (REQ_SUCCESS(req)) { if (handle->flags & UV_HANDLE_CLOSING) { /* use UV_ECANCELED for consistency with Unix */ err = ERROR_OPERATION_ABORTED; } else if (setsockopt(handle->socket, SOL_SOCKET, SO_UPDATE_CONNECT_CONTEXT, NULL, 0) == 0) { uv__connection_init((uv_stream_t*)handle); handle->flags |= UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; loop->active_tcp_streams++; } else { err = WSAGetLastError(); } } else { err = GET_REQ_SOCK_ERROR(req); } req->cb(req, uv_translate_sys_error(err)); DECREASE_PENDING_REQ_COUNT(handle); } int uv__tcp_xfer_export(uv_tcp_t* handle, int target_pid, uv__ipc_socket_xfer_type_t* xfer_type, uv__ipc_socket_xfer_info_t* xfer_info) { if (handle->flags & UV_HANDLE_CONNECTION) { *xfer_type = UV__IPC_SOCKET_XFER_TCP_CONNECTION; } else { *xfer_type = UV__IPC_SOCKET_XFER_TCP_SERVER; /* We're about to share the socket with another process. Because this is a * listening socket, we assume that the other process will be accepting * connections on it. Thus, before sharing the socket with another process, * we call listen here in the parent process. */ if (!(handle->flags & UV_HANDLE_LISTENING)) { if (!(handle->flags & UV_HANDLE_BOUND)) { return ERROR_NOT_SUPPORTED; } if (handle->delayed_error == 0 && listen(handle->socket, SOMAXCONN) == SOCKET_ERROR) { handle->delayed_error = WSAGetLastError(); } } } if (WSADuplicateSocketW(handle->socket, target_pid, &xfer_info->socket_info)) return WSAGetLastError(); xfer_info->delayed_error = handle->delayed_error; /* Mark the local copy of the handle as 'shared' so we behave in a way that's * friendly to the process(es) that we share the socket with. 
*/ handle->flags |= UV_HANDLE_SHARED_TCP_SOCKET; return 0; } int uv__tcp_xfer_import(uv_tcp_t* tcp, uv__ipc_socket_xfer_type_t xfer_type, uv__ipc_socket_xfer_info_t* xfer_info) { int err; SOCKET socket; assert(xfer_type == UV__IPC_SOCKET_XFER_TCP_SERVER || xfer_type == UV__IPC_SOCKET_XFER_TCP_CONNECTION); socket = WSASocketW(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO, &xfer_info->socket_info, 0, WSA_FLAG_OVERLAPPED); if (socket == INVALID_SOCKET) { return WSAGetLastError(); } err = uv__tcp_set_socket( tcp->loop, tcp, socket, xfer_info->socket_info.iAddressFamily, 1); if (err) { closesocket(socket); return err; } tcp->delayed_error = xfer_info->delayed_error; tcp->flags |= UV_HANDLE_BOUND | UV_HANDLE_SHARED_TCP_SOCKET; if (xfer_type == UV__IPC_SOCKET_XFER_TCP_CONNECTION) { uv__connection_init((uv_stream_t*)tcp); tcp->flags |= UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; } tcp->loop->active_tcp_streams++; return 0; } int uv_tcp_nodelay(uv_tcp_t* handle, int enable) { int err; if (handle->socket != INVALID_SOCKET) { err = uv__tcp_nodelay(handle, handle->socket, enable); if (err) return uv_translate_sys_error(err); } if (enable) { handle->flags |= UV_HANDLE_TCP_NODELAY; } else { handle->flags &= ~UV_HANDLE_TCP_NODELAY; } return 0; } int uv_tcp_keepalive(uv_tcp_t* handle, int enable, unsigned int delay) { int err; if (handle->socket != INVALID_SOCKET) { err = uv__tcp_keepalive(handle, handle->socket, enable, delay); if (err) return uv_translate_sys_error(err); } if (enable) { handle->flags |= UV_HANDLE_TCP_KEEPALIVE; } else { handle->flags &= ~UV_HANDLE_TCP_KEEPALIVE; } /* TODO: Store delay if handle->socket isn't created yet. */ return 0; } int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable) { if (handle->flags & UV_HANDLE_CONNECTION) { return UV_EINVAL; } /* Check if we're already in the desired mode. */ if ((enable && !(handle->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) || (!enable && handle->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) { return 0; } /* Don't allow switching from single pending accept to many. */ if (enable) { return UV_ENOTSUP; } /* Check if we're in a middle of changing the number of pending accepts. */ if (handle->flags & UV_HANDLE_TCP_ACCEPT_STATE_CHANGING) { return 0; } handle->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT; /* Flip the changing flag if we have already queued multiple accepts. */ if (handle->flags & UV_HANDLE_LISTENING) { handle->flags |= UV_HANDLE_TCP_ACCEPT_STATE_CHANGING; } return 0; } static void uv__tcp_try_cancel_reqs(uv_tcp_t* tcp) { SOCKET socket; int non_ifs_lsp; int reading; int writing; socket = tcp->socket; reading = tcp->flags & UV_HANDLE_READ_PENDING; writing = tcp->stream.conn.write_reqs_pending > 0; if (!reading && !writing) return; /* TODO: in libuv v2, keep explicit track of write_reqs, so we can cancel * them each explicitly with CancelIoEx (like unix). */ if (reading) CancelIoEx((HANDLE) socket, &tcp->read_req.u.io.overlapped); if (writing) CancelIo((HANDLE) socket); /* Check if we have any non-IFS LSPs stacked on top of TCP */ non_ifs_lsp = (tcp->flags & UV_HANDLE_IPV6) ? uv_tcp_non_ifs_lsp_ipv6 : uv_tcp_non_ifs_lsp_ipv4; /* If there are non-ifs LSPs then try to obtain a base handle for the socket. * This will always fail on Windows XP/3k. */ if (non_ifs_lsp) { DWORD bytes; if (WSAIoctl(socket, SIO_BASE_HANDLE, NULL, 0, &socket, sizeof socket, &bytes, NULL, NULL) != 0) { /* Failed. We can't do CancelIo. 
*/ return; } } assert(socket != 0 && socket != INVALID_SOCKET); if (socket != tcp->socket) { if (reading) CancelIoEx((HANDLE) socket, &tcp->read_req.u.io.overlapped); if (writing) CancelIo((HANDLE) socket); } } void uv__tcp_close(uv_loop_t* loop, uv_tcp_t* tcp) { if (tcp->flags & UV_HANDLE_CONNECTION) { if (tcp->flags & UV_HANDLE_READING) { uv_read_stop((uv_stream_t*) tcp); } uv__tcp_try_cancel_reqs(tcp); } else { if (tcp->tcp.serv.accept_reqs != NULL) { /* First close the incoming sockets to cancel the accept operations before * we free their resources. */ unsigned int i; for (i = 0; i < uv_simultaneous_server_accepts; i++) { uv_tcp_accept_t* req = &tcp->tcp.serv.accept_reqs[i]; if (req->accept_socket != INVALID_SOCKET) { closesocket(req->accept_socket); req->accept_socket = INVALID_SOCKET; } } } assert(!(tcp->flags & UV_HANDLE_READING)); } if (tcp->flags & UV_HANDLE_LISTENING) { tcp->flags &= ~UV_HANDLE_LISTENING; DECREASE_ACTIVE_COUNT(loop, tcp); } tcp->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); uv__handle_closing(tcp); /* If any overlapped req failed to cancel, calling `closesocket` now would * cause Win32 to send an RST packet. Try to avoid that for writes, if * possibly applicable, by waiting to process the completion notifications * first (which typically should be cancellations). There's not much we can * do about canceled reads, which also will generate an RST packet. */ if (!(tcp->flags & UV_HANDLE_CONNECTION) || tcp->stream.conn.write_reqs_pending == 0) { closesocket(tcp->socket); tcp->socket = INVALID_SOCKET; } if (tcp->reqs_pending == 0) uv__want_endgame(loop, (uv_handle_t*) tcp); } int uv_tcp_open(uv_tcp_t* handle, uv_os_sock_t sock) { WSAPROTOCOL_INFOW protocol_info; int opt_len; int err; struct sockaddr_storage saddr; int saddr_len; /* Detect the address family of the socket. */ opt_len = (int) sizeof protocol_info; if (getsockopt(sock, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &protocol_info, &opt_len) == SOCKET_ERROR) { return uv_translate_sys_error(GetLastError()); } err = uv__tcp_set_socket(handle->loop, handle, sock, protocol_info.iAddressFamily, 1); if (err) { return uv_translate_sys_error(err); } /* Support already active socket. */ saddr_len = sizeof(saddr); if (!uv_tcp_getsockname(handle, (struct sockaddr*) &saddr, &saddr_len)) { /* Socket is already bound. */ handle->flags |= UV_HANDLE_BOUND; saddr_len = sizeof(saddr); if (!uv_tcp_getpeername(handle, (struct sockaddr*) &saddr, &saddr_len)) { /* Socket is already connected. */ uv__connection_init((uv_stream_t*) handle); handle->flags |= UV_HANDLE_READABLE | UV_HANDLE_WRITABLE; } } return 0; } /* This function is an egress point, i.e. it returns libuv errors rather than * system errors. */ int uv__tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { int err; err = uv__tcp_try_bind(handle, addr, addrlen, flags); if (err) return uv_translate_sys_error(err); return 0; } /* This function is an egress point, i.e. it returns libuv errors rather than * system errors. */ int uv__tcp_connect(uv_connect_t* req, uv_tcp_t* handle, const struct sockaddr* addr, unsigned int addrlen, uv_connect_cb cb) { int err; err = uv__tcp_try_connect(req, handle, addr, addrlen, cb); if (err) return uv_translate_sys_error(err); return 0; } #ifndef WSA_FLAG_NO_HANDLE_INHERIT /* Added in Windows 7 SP1. Specify this to avoid race conditions, */ /* but also manually clear the inherit flag in case this failed. 
*/ #define WSA_FLAG_NO_HANDLE_INHERIT 0x80 #endif int uv_socketpair(int type, int protocol, uv_os_sock_t fds[2], int flags0, int flags1) { SOCKET server = INVALID_SOCKET; SOCKET client0 = INVALID_SOCKET; SOCKET client1 = INVALID_SOCKET; SOCKADDR_IN name; LPFN_ACCEPTEX func_acceptex; WSAOVERLAPPED overlap; char accept_buffer[sizeof(struct sockaddr_storage) * 2 + 32]; int namelen; int err; DWORD bytes; DWORD flags; DWORD client0_flags = WSA_FLAG_NO_HANDLE_INHERIT; DWORD client1_flags = WSA_FLAG_NO_HANDLE_INHERIT; if (flags0 & UV_NONBLOCK_PIPE) client0_flags |= WSA_FLAG_OVERLAPPED; if (flags1 & UV_NONBLOCK_PIPE) client1_flags |= WSA_FLAG_OVERLAPPED; server = WSASocketW(AF_INET, type, protocol, NULL, 0, WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT); if (server == INVALID_SOCKET) goto wsaerror; if (!SetHandleInformation((HANDLE) server, HANDLE_FLAG_INHERIT, 0)) goto error; name.sin_family = AF_INET; name.sin_addr.s_addr = htonl(INADDR_LOOPBACK); name.sin_port = 0; if (bind(server, (SOCKADDR*) &name, sizeof(name)) != 0) goto wsaerror; if (listen(server, 1) != 0) goto wsaerror; namelen = sizeof(name); if (getsockname(server, (SOCKADDR*) &name, &namelen) != 0) goto wsaerror; client0 = WSASocketW(AF_INET, type, protocol, NULL, 0, client0_flags); if (client0 == INVALID_SOCKET) goto wsaerror; if (!SetHandleInformation((HANDLE) client0, HANDLE_FLAG_INHERIT, 0)) goto error; if (connect(client0, (SOCKADDR*) &name, sizeof(name)) != 0) goto wsaerror; client1 = WSASocketW(AF_INET, type, protocol, NULL, 0, client1_flags); if (client1 == INVALID_SOCKET) goto wsaerror; if (!SetHandleInformation((HANDLE) client1, HANDLE_FLAG_INHERIT, 0)) goto error; if (!uv__get_acceptex_function(server, &func_acceptex)) { err = WSAEAFNOSUPPORT; goto cleanup; } memset(&overlap, 0, sizeof(overlap)); if (!func_acceptex(server, client1, accept_buffer, 0, sizeof(struct sockaddr_storage), sizeof(struct sockaddr_storage), &bytes, &overlap)) { err = WSAGetLastError(); if (err == ERROR_IO_PENDING) { /* Result should complete immediately, since we already called connect, * but empirically, we sometimes have to poll the kernel a couple times * until it notices that. */ while (!WSAGetOverlappedResult(client1, &overlap, &bytes, FALSE, &flags)) { err = WSAGetLastError(); if (err != WSA_IO_INCOMPLETE) goto cleanup; SwitchToThread(); } } else { goto cleanup; } } if (setsockopt(client1, SOL_SOCKET, SO_UPDATE_ACCEPT_CONTEXT, (char*) &server, sizeof(server)) != 0) { goto wsaerror; } closesocket(server); fds[0] = client0; fds[1] = client1; return 0; wsaerror: err = WSAGetLastError(); goto cleanup; error: err = GetLastError(); goto cleanup; cleanup: if (server != INVALID_SOCKET) closesocket(server); if (client0 != INVALID_SOCKET) closesocket(client0); if (client1 != INVALID_SOCKET) closesocket(client1); assert(err); return uv_translate_sys_error(err); } gevent-24.11.1/deps/libuv/src/win/thread.c000066400000000000000000000256311471441230600202340ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #if defined(__MINGW64_VERSION_MAJOR) /* MemoryBarrier expands to __mm_mfence in some cases (x86+sse2), which may * require this header in some versions of mingw64. */ #include #endif #include "uv.h" #include "internal.h" static void uv__once_inner(uv_once_t* guard, void (*callback)(void)) { DWORD result; HANDLE existing_event, created_event; created_event = CreateEvent(NULL, 1, 0, NULL); if (created_event == 0) { /* Could fail in a low-memory situation? */ uv_fatal_error(GetLastError(), "CreateEvent"); } existing_event = InterlockedCompareExchangePointer(&guard->event, created_event, NULL); if (existing_event == NULL) { /* We won the race */ callback(); result = SetEvent(created_event); assert(result); guard->ran = 1; } else { /* We lost the race. Destroy the event we created and wait for the existing * one to become signaled. */ CloseHandle(created_event); result = WaitForSingleObject(existing_event, INFINITE); assert(result == WAIT_OBJECT_0); } } void uv_once(uv_once_t* guard, void (*callback)(void)) { /* Fast case - avoid WaitForSingleObject. */ if (guard->ran) { return; } uv__once_inner(guard, callback); } /* Verify that uv_thread_t can be stored in a TLS slot. */ STATIC_ASSERT(sizeof(uv_thread_t) <= sizeof(void*)); static uv_key_t uv__current_thread_key; static uv_once_t uv__current_thread_init_guard = UV_ONCE_INIT; static void uv__init_current_thread_key(void) { if (uv_key_create(&uv__current_thread_key)) abort(); } struct thread_ctx { void (*entry)(void* arg); void* arg; uv_thread_t self; }; static UINT __stdcall uv__thread_start(void* arg) { struct thread_ctx *ctx_p; struct thread_ctx ctx; ctx_p = arg; ctx = *ctx_p; uv__free(ctx_p); uv_once(&uv__current_thread_init_guard, uv__init_current_thread_key); uv_key_set(&uv__current_thread_key, ctx.self); ctx.entry(ctx.arg); return 0; } int uv_thread_create(uv_thread_t *tid, void (*entry)(void *arg), void *arg) { uv_thread_options_t params; params.flags = UV_THREAD_NO_FLAGS; return uv_thread_create_ex(tid, ¶ms, entry, arg); } int uv_thread_create_ex(uv_thread_t* tid, const uv_thread_options_t* params, void (*entry)(void *arg), void *arg) { struct thread_ctx* ctx; int err; HANDLE thread; SYSTEM_INFO sysinfo; size_t stack_size; size_t pagesize; stack_size = params->flags & UV_THREAD_HAS_STACK_SIZE ? params->stack_size : 0; if (stack_size != 0) { GetNativeSystemInfo(&sysinfo); pagesize = (size_t)sysinfo.dwPageSize; /* Round up to the nearest page boundary. 
*/ stack_size = (stack_size + pagesize - 1) &~ (pagesize - 1); if ((unsigned)stack_size != stack_size) return UV_EINVAL; } ctx = uv__malloc(sizeof(*ctx)); if (ctx == NULL) return UV_ENOMEM; ctx->entry = entry; ctx->arg = arg; /* Create the thread in suspended state so we have a chance to pass * its own creation handle to it */ thread = (HANDLE) _beginthreadex(NULL, (unsigned)stack_size, uv__thread_start, ctx, CREATE_SUSPENDED, NULL); if (thread == NULL) { err = errno; uv__free(ctx); } else { err = 0; *tid = thread; ctx->self = thread; ResumeThread(thread); } switch (err) { case 0: return 0; case EACCES: return UV_EACCES; case EAGAIN: return UV_EAGAIN; case EINVAL: return UV_EINVAL; } return UV_EIO; } uv_thread_t uv_thread_self(void) { uv_thread_t key; uv_once(&uv__current_thread_init_guard, uv__init_current_thread_key); key = uv_key_get(&uv__current_thread_key); if (key == NULL) { /* If the thread wasn't started by uv_thread_create (such as the main * thread), we assign an id to it now. */ if (!DuplicateHandle(GetCurrentProcess(), GetCurrentThread(), GetCurrentProcess(), &key, 0, FALSE, DUPLICATE_SAME_ACCESS)) { uv_fatal_error(GetLastError(), "DuplicateHandle"); } uv_key_set(&uv__current_thread_key, key); } return key; } int uv_thread_join(uv_thread_t *tid) { if (WaitForSingleObject(*tid, INFINITE)) return uv_translate_sys_error(GetLastError()); else { CloseHandle(*tid); *tid = 0; MemoryBarrier(); /* For feature parity with pthread_join(). */ return 0; } } int uv_thread_equal(const uv_thread_t* t1, const uv_thread_t* t2) { return *t1 == *t2; } int uv_mutex_init(uv_mutex_t* mutex) { InitializeCriticalSection(mutex); return 0; } int uv_mutex_init_recursive(uv_mutex_t* mutex) { return uv_mutex_init(mutex); } void uv_mutex_destroy(uv_mutex_t* mutex) { DeleteCriticalSection(mutex); } void uv_mutex_lock(uv_mutex_t* mutex) { EnterCriticalSection(mutex); } int uv_mutex_trylock(uv_mutex_t* mutex) { if (TryEnterCriticalSection(mutex)) return 0; else return UV_EBUSY; } void uv_mutex_unlock(uv_mutex_t* mutex) { LeaveCriticalSection(mutex); } /* Ensure that the ABI for this type remains stable in v1.x */ #ifdef _WIN64 STATIC_ASSERT(sizeof(uv_rwlock_t) == 80); #else STATIC_ASSERT(sizeof(uv_rwlock_t) == 48); #endif int uv_rwlock_init(uv_rwlock_t* rwlock) { memset(rwlock, 0, sizeof(*rwlock)); InitializeSRWLock(&rwlock->read_write_lock_); return 0; } void uv_rwlock_destroy(uv_rwlock_t* rwlock) { /* SRWLock does not need explicit destruction so long as there are no waiting threads See: https://docs.microsoft.com/windows/win32/api/synchapi/nf-synchapi-initializesrwlock#remarks */ } void uv_rwlock_rdlock(uv_rwlock_t* rwlock) { AcquireSRWLockShared(&rwlock->read_write_lock_); } int uv_rwlock_tryrdlock(uv_rwlock_t* rwlock) { if (!TryAcquireSRWLockShared(&rwlock->read_write_lock_)) return UV_EBUSY; return 0; } void uv_rwlock_rdunlock(uv_rwlock_t* rwlock) { ReleaseSRWLockShared(&rwlock->read_write_lock_); } void uv_rwlock_wrlock(uv_rwlock_t* rwlock) { AcquireSRWLockExclusive(&rwlock->read_write_lock_); } int uv_rwlock_trywrlock(uv_rwlock_t* rwlock) { if (!TryAcquireSRWLockExclusive(&rwlock->read_write_lock_)) return UV_EBUSY; return 0; } void uv_rwlock_wrunlock(uv_rwlock_t* rwlock) { ReleaseSRWLockExclusive(&rwlock->read_write_lock_); } int uv_sem_init(uv_sem_t* sem, unsigned int value) { *sem = CreateSemaphore(NULL, value, INT_MAX, NULL); if (*sem == NULL) return uv_translate_sys_error(GetLastError()); else return 0; } void uv_sem_destroy(uv_sem_t* sem) { if (!CloseHandle(*sem)) abort(); } void 
uv_sem_post(uv_sem_t* sem) { if (!ReleaseSemaphore(*sem, 1, NULL)) abort(); } void uv_sem_wait(uv_sem_t* sem) { if (WaitForSingleObject(*sem, INFINITE) != WAIT_OBJECT_0) abort(); } int uv_sem_trywait(uv_sem_t* sem) { DWORD r = WaitForSingleObject(*sem, 0); if (r == WAIT_OBJECT_0) return 0; if (r == WAIT_TIMEOUT) return UV_EAGAIN; abort(); return -1; /* Satisfy the compiler. */ } int uv_cond_init(uv_cond_t* cond) { InitializeConditionVariable(&cond->cond_var); return 0; } void uv_cond_destroy(uv_cond_t* cond) { /* nothing to do */ (void) &cond; } void uv_cond_signal(uv_cond_t* cond) { WakeConditionVariable(&cond->cond_var); } void uv_cond_broadcast(uv_cond_t* cond) { WakeAllConditionVariable(&cond->cond_var); } void uv_cond_wait(uv_cond_t* cond, uv_mutex_t* mutex) { if (!SleepConditionVariableCS(&cond->cond_var, mutex, INFINITE)) abort(); } int uv_cond_timedwait(uv_cond_t* cond, uv_mutex_t* mutex, uint64_t timeout) { if (SleepConditionVariableCS(&cond->cond_var, mutex, (DWORD)(timeout / 1e6))) return 0; if (GetLastError() != ERROR_TIMEOUT) abort(); return UV_ETIMEDOUT; } int uv_barrier_init(uv_barrier_t* barrier, unsigned int count) { int err; barrier->n = count; barrier->count = 0; err = uv_mutex_init(&barrier->mutex); if (err) return err; err = uv_sem_init(&barrier->turnstile1, 0); if (err) goto error2; err = uv_sem_init(&barrier->turnstile2, 1); if (err) goto error; return 0; error: uv_sem_destroy(&barrier->turnstile1); error2: uv_mutex_destroy(&barrier->mutex); return err; } void uv_barrier_destroy(uv_barrier_t* barrier) { uv_sem_destroy(&barrier->turnstile2); uv_sem_destroy(&barrier->turnstile1); uv_mutex_destroy(&barrier->mutex); } int uv_barrier_wait(uv_barrier_t* barrier) { int serial_thread; uv_mutex_lock(&barrier->mutex); if (++barrier->count == barrier->n) { uv_sem_wait(&barrier->turnstile2); uv_sem_post(&barrier->turnstile1); } uv_mutex_unlock(&barrier->mutex); uv_sem_wait(&barrier->turnstile1); uv_sem_post(&barrier->turnstile1); uv_mutex_lock(&barrier->mutex); serial_thread = (--barrier->count == 0); if (serial_thread) { uv_sem_wait(&barrier->turnstile1); uv_sem_post(&barrier->turnstile2); } uv_mutex_unlock(&barrier->mutex); uv_sem_wait(&barrier->turnstile2); uv_sem_post(&barrier->turnstile2); return serial_thread; } int uv_key_create(uv_key_t* key) { key->tls_index = TlsAlloc(); if (key->tls_index == TLS_OUT_OF_INDEXES) return UV_ENOMEM; return 0; } void uv_key_delete(uv_key_t* key) { if (TlsFree(key->tls_index) == FALSE) abort(); key->tls_index = TLS_OUT_OF_INDEXES; } void* uv_key_get(uv_key_t* key) { void* value; value = TlsGetValue(key->tls_index); if (value == NULL) if (GetLastError() != ERROR_SUCCESS) abort(); return value; } void uv_key_set(uv_key_t* key, void* value) { if (TlsSetValue(key->tls_index, value) == FALSE) abort(); } gevent-24.11.1/deps/libuv/src/win/tty.c000066400000000000000000002316211471441230600176030ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #if defined(_MSC_VER) && _MSC_VER < 1600 # include "uv/stdint-msvc2008.h" #else # include #endif #ifndef COMMON_LVB_REVERSE_VIDEO # define COMMON_LVB_REVERSE_VIDEO 0x4000 #endif #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "stream-inl.h" #include "req-inl.h" #ifndef InterlockedOr # define InterlockedOr _InterlockedOr #endif #define UNICODE_REPLACEMENT_CHARACTER (0xfffd) #define ANSI_NORMAL 0x0000 #define ANSI_ESCAPE_SEEN 0x0002 #define ANSI_CSI 0x0004 #define ANSI_ST_CONTROL 0x0008 #define ANSI_IGNORE 0x0010 #define ANSI_IN_ARG 0x0020 #define ANSI_IN_STRING 0x0040 #define ANSI_BACKSLASH_SEEN 0x0080 #define ANSI_EXTENSION 0x0100 #define ANSI_DECSCUSR 0x0200 #define MAX_INPUT_BUFFER_LENGTH 8192 #define MAX_CONSOLE_CHAR 8192 #ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING #define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004 #endif #define CURSOR_SIZE_SMALL 25 #define CURSOR_SIZE_LARGE 100 static void uv__tty_capture_initial_style( CONSOLE_SCREEN_BUFFER_INFO* screen_buffer_info, CONSOLE_CURSOR_INFO* cursor_info); static void uv__tty_update_virtual_window(CONSOLE_SCREEN_BUFFER_INFO* info); static int uv__cancel_read_console(uv_tty_t* handle); /* Null uv_buf_t */ static const uv_buf_t uv_null_buf_ = { 0, NULL }; enum uv__read_console_status_e { NOT_STARTED, IN_PROGRESS, TRAP_REQUESTED, COMPLETED }; static volatile LONG uv__read_console_status = NOT_STARTED; static volatile LONG uv__restore_screen_state; static CONSOLE_SCREEN_BUFFER_INFO uv__saved_screen_state; /* * The console virtual window. * * Normally cursor movement in windows is relative to the console screen buffer, * e.g. the application is allowed to overwrite the 'history'. This is very * inconvenient, it makes absolute cursor movement pretty useless. There is * also the concept of 'client rect' which is defined by the actual size of * the console window and the scroll position of the screen buffer, but it's * very volatile because it changes when the user scrolls. * * To make cursor movement behave sensibly we define a virtual window to which * cursor movement is confined. The virtual window is always as wide as the * console screen buffer, but it's height is defined by the size of the * console window. 
The top of the virtual window aligns with the position * of the caret when the first stdout/err handle is created, unless that would * mean that it would extend beyond the bottom of the screen buffer - in that * that case it's located as far down as possible. * * When the user writes a long text or many newlines, such that the output * reaches beyond the bottom of the virtual window, the virtual window is * shifted downwards, but not resized. * * Since all tty i/o happens on the same console, this window is shared * between all stdout/stderr handles. */ static int uv_tty_virtual_offset = -1; static int uv_tty_virtual_height = -1; static int uv_tty_virtual_width = -1; /* The console window size * We keep this separate from uv_tty_virtual_*. We use those values to only * handle signalling SIGWINCH */ static HANDLE uv__tty_console_handle = INVALID_HANDLE_VALUE; static int uv__tty_console_height = -1; static int uv__tty_console_width = -1; static HANDLE uv__tty_console_resized = INVALID_HANDLE_VALUE; static uv_mutex_t uv__tty_console_resize_mutex; static DWORD WINAPI uv__tty_console_resize_message_loop_thread(void* param); static void CALLBACK uv__tty_console_resize_event(HWINEVENTHOOK hWinEventHook, DWORD event, HWND hwnd, LONG idObject, LONG idChild, DWORD dwEventThread, DWORD dwmsEventTime); static DWORD WINAPI uv__tty_console_resize_watcher_thread(void* param); static void uv__tty_console_signal_resize(void); /* We use a semaphore rather than a mutex or critical section because in some cases (uv__cancel_read_console) we need take the lock in the main thread and release it in another thread. Using a semaphore ensures that in such scenario the main thread will still block when trying to acquire the lock. */ static uv_sem_t uv_tty_output_lock; static WORD uv_tty_default_text_attributes = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE; static char uv_tty_default_fg_color = 7; static char uv_tty_default_bg_color = 0; static char uv_tty_default_fg_bright = 0; static char uv_tty_default_bg_bright = 0; static char uv_tty_default_inverse = 0; static CONSOLE_CURSOR_INFO uv_tty_default_cursor_info; /* Determine whether or not ANSI support is enabled. */ static BOOL uv__need_check_vterm_state = TRUE; static uv_tty_vtermstate_t uv__vterm_state = UV_TTY_UNSUPPORTED; static void uv__determine_vterm_state(HANDLE handle); void uv__console_init(void) { if (uv_sem_init(&uv_tty_output_lock, 1)) abort(); uv__tty_console_handle = CreateFileW(L"CONOUT$", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_WRITE, 0, OPEN_EXISTING, 0, 0); if (uv__tty_console_handle != INVALID_HANDLE_VALUE) { CONSOLE_SCREEN_BUFFER_INFO sb_info; QueueUserWorkItem(uv__tty_console_resize_message_loop_thread, NULL, WT_EXECUTELONGFUNCTION); uv_mutex_init(&uv__tty_console_resize_mutex); if (GetConsoleScreenBufferInfo(uv__tty_console_handle, &sb_info)) { uv__tty_console_width = sb_info.dwSize.X; uv__tty_console_height = sb_info.srWindow.Bottom - sb_info.srWindow.Top + 1; } } } int uv_tty_init(uv_loop_t* loop, uv_tty_t* tty, uv_file fd, int unused) { BOOL readable; DWORD NumberOfEvents; HANDLE handle; CONSOLE_SCREEN_BUFFER_INFO screen_buffer_info; CONSOLE_CURSOR_INFO cursor_info; (void)unused; uv__once_init(); handle = (HANDLE) uv__get_osfhandle(fd); if (handle == INVALID_HANDLE_VALUE) return UV_EBADF; if (fd <= 2) { /* In order to avoid closing a stdio file descriptor 0-2, duplicate the * underlying OS handle and forget about the original fd. 
* We could also opt to use the original OS handle and just never close it, * but then there would be no reliable way to cancel pending read operations * upon close. */ if (!DuplicateHandle(INVALID_HANDLE_VALUE, handle, INVALID_HANDLE_VALUE, &handle, 0, FALSE, DUPLICATE_SAME_ACCESS)) return uv_translate_sys_error(GetLastError()); fd = -1; } readable = GetNumberOfConsoleInputEvents(handle, &NumberOfEvents); if (!readable) { /* Obtain the screen buffer info with the output handle. */ if (!GetConsoleScreenBufferInfo(handle, &screen_buffer_info)) { return uv_translate_sys_error(GetLastError()); } /* Obtain the cursor info with the output handle. */ if (!GetConsoleCursorInfo(handle, &cursor_info)) { return uv_translate_sys_error(GetLastError()); } /* Obtain the tty_output_lock because the virtual window state is shared * between all uv_tty_t handles. */ uv_sem_wait(&uv_tty_output_lock); if (uv__need_check_vterm_state) uv__determine_vterm_state(handle); /* Remember the original console text attributes and cursor info. */ uv__tty_capture_initial_style(&screen_buffer_info, &cursor_info); uv__tty_update_virtual_window(&screen_buffer_info); uv_sem_post(&uv_tty_output_lock); } uv__stream_init(loop, (uv_stream_t*) tty, UV_TTY); uv__connection_init((uv_stream_t*) tty); tty->handle = handle; tty->u.fd = fd; tty->reqs_pending = 0; tty->flags |= UV_HANDLE_BOUND; if (readable) { /* Initialize TTY input specific fields. */ tty->flags |= UV_HANDLE_TTY_READABLE | UV_HANDLE_READABLE; /* TODO: remove me in v2.x. */ tty->tty.rd.unused_ = NULL; tty->tty.rd.read_line_buffer = uv_null_buf_; tty->tty.rd.read_raw_wait = NULL; /* Init keycode-to-vt100 mapper state. */ tty->tty.rd.last_key_len = 0; tty->tty.rd.last_key_offset = 0; tty->tty.rd.last_utf16_high_surrogate = 0; memset(&tty->tty.rd.last_input_record, 0, sizeof tty->tty.rd.last_input_record); } else { /* TTY output specific fields. */ tty->flags |= UV_HANDLE_WRITABLE; /* Init utf8-to-utf16 conversion state. */ tty->tty.wr.utf8_bytes_left = 0; tty->tty.wr.utf8_codepoint = 0; /* Initialize eol conversion state */ tty->tty.wr.previous_eol = 0; /* Init ANSI parser state. */ tty->tty.wr.ansi_parser_state = ANSI_NORMAL; } return 0; } /* Set the default console text attributes based on how the console was * configured when libuv started. */ static void uv__tty_capture_initial_style( CONSOLE_SCREEN_BUFFER_INFO* screen_buffer_info, CONSOLE_CURSOR_INFO* cursor_info) { static int style_captured = 0; /* Only do this once. Assumption: Caller has acquired uv_tty_output_lock. */ if (style_captured) return; /* Save raw win32 attributes. */ uv_tty_default_text_attributes = screen_buffer_info->wAttributes; /* Convert black text on black background to use white text. */ if (uv_tty_default_text_attributes == 0) uv_tty_default_text_attributes = 7; /* Convert Win32 attributes to ANSI colors. 
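 * The assignments below decode those attribute bits into the 3-bit ANSI
 * color indices (1 = red, 2 = green, 4 = blue) plus bright/inverse flags.
 * For reference, a hedged sketch of the opposite direction; ansi_fg_to_win32
 * is a hypothetical helper, not a libuv function, and it simply mirrors what
 * uv__tty_set_style() does further down:
 *
 *     static WORD ansi_fg_to_win32(char fg_color, char fg_bright) {
 *       WORD attr = 0;
 *       if (fg_color & 1) attr |= FOREGROUND_RED;
 *       if (fg_color & 2) attr |= FOREGROUND_GREEN;
 *       if (fg_color & 4) attr |= FOREGROUND_BLUE;
 *       if (fg_bright)    attr |= FOREGROUND_INTENSITY;
 *       return attr;
 *     }
 *
 * e.g. ansi_fg_to_win32(7, 0) recreates the plain white-on-black default.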
*/ uv_tty_default_fg_color = 0; uv_tty_default_bg_color = 0; uv_tty_default_fg_bright = 0; uv_tty_default_bg_bright = 0; uv_tty_default_inverse = 0; if (uv_tty_default_text_attributes & FOREGROUND_RED) uv_tty_default_fg_color |= 1; if (uv_tty_default_text_attributes & FOREGROUND_GREEN) uv_tty_default_fg_color |= 2; if (uv_tty_default_text_attributes & FOREGROUND_BLUE) uv_tty_default_fg_color |= 4; if (uv_tty_default_text_attributes & BACKGROUND_RED) uv_tty_default_bg_color |= 1; if (uv_tty_default_text_attributes & BACKGROUND_GREEN) uv_tty_default_bg_color |= 2; if (uv_tty_default_text_attributes & BACKGROUND_BLUE) uv_tty_default_bg_color |= 4; if (uv_tty_default_text_attributes & FOREGROUND_INTENSITY) uv_tty_default_fg_bright = 1; if (uv_tty_default_text_attributes & BACKGROUND_INTENSITY) uv_tty_default_bg_bright = 1; if (uv_tty_default_text_attributes & COMMON_LVB_REVERSE_VIDEO) uv_tty_default_inverse = 1; /* Save the cursor size and the cursor state. */ uv_tty_default_cursor_info = *cursor_info; style_captured = 1; } int uv_tty_set_mode(uv_tty_t* tty, uv_tty_mode_t mode) { DWORD flags; unsigned char was_reading; uv_alloc_cb alloc_cb; uv_read_cb read_cb; int err; if (!(tty->flags & UV_HANDLE_TTY_READABLE)) { return UV_EINVAL; } if (!!mode == !!(tty->flags & UV_HANDLE_TTY_RAW)) { return 0; } switch (mode) { case UV_TTY_MODE_NORMAL: flags = ENABLE_ECHO_INPUT | ENABLE_LINE_INPUT | ENABLE_PROCESSED_INPUT; break; case UV_TTY_MODE_RAW: flags = ENABLE_WINDOW_INPUT; break; case UV_TTY_MODE_IO: return UV_ENOTSUP; default: return UV_EINVAL; } /* If currently reading, stop, and restart reading. */ if (tty->flags & UV_HANDLE_READING) { was_reading = 1; alloc_cb = tty->alloc_cb; read_cb = tty->read_cb; err = uv__tty_read_stop(tty); if (err) { return uv_translate_sys_error(err); } } else { was_reading = 0; alloc_cb = NULL; read_cb = NULL; } uv_sem_wait(&uv_tty_output_lock); if (!SetConsoleMode(tty->handle, flags)) { err = uv_translate_sys_error(GetLastError()); uv_sem_post(&uv_tty_output_lock); return err; } uv_sem_post(&uv_tty_output_lock); /* Update flag. */ tty->flags &= ~UV_HANDLE_TTY_RAW; tty->flags |= mode ? UV_HANDLE_TTY_RAW : 0; /* If we just stopped reading, restart. 
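 * From the application's point of view, the stop / SetConsoleMode / restart
 * dance in this function is hidden behind a single call.  A hedged usage
 * sketch (public libuv API; the `tty` variable is hypothetical and assumed
 * to have been set up with uv_tty_init(loop, &tty, 0, 1)):
 *
 *     int rc = uv_tty_set_mode(&tty, UV_TTY_MODE_RAW);
 *     if (rc != 0)
 *       fprintf(stderr, "raw mode failed: %s\n", uv_strerror(rc));
 *
 * Switching back with UV_TTY_MODE_NORMAL restores line-buffered, echoed
 * input.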
*/ if (was_reading) { err = uv__tty_read_start(tty, alloc_cb, read_cb); if (err) { return uv_translate_sys_error(err); } } return 0; } int uv_tty_get_winsize(uv_tty_t* tty, int* width, int* height) { CONSOLE_SCREEN_BUFFER_INFO info; if (!GetConsoleScreenBufferInfo(tty->handle, &info)) { return uv_translate_sys_error(GetLastError()); } uv_sem_wait(&uv_tty_output_lock); uv__tty_update_virtual_window(&info); uv_sem_post(&uv_tty_output_lock); *width = uv_tty_virtual_width; *height = uv_tty_virtual_height; return 0; } static void CALLBACK uv_tty_post_raw_read(void* data, BOOLEAN didTimeout) { uv_loop_t* loop; uv_tty_t* handle; uv_req_t* req; assert(data); assert(!didTimeout); req = (uv_req_t*) data; handle = (uv_tty_t*) req->data; loop = handle->loop; UnregisterWait(handle->tty.rd.read_raw_wait); handle->tty.rd.read_raw_wait = NULL; SET_REQ_SUCCESS(req); POST_COMPLETION_FOR_REQ(loop, req); } static void uv__tty_queue_read_raw(uv_loop_t* loop, uv_tty_t* handle) { uv_read_t* req; BOOL r; assert(handle->flags & UV_HANDLE_READING); assert(!(handle->flags & UV_HANDLE_READ_PENDING)); assert(handle->handle && handle->handle != INVALID_HANDLE_VALUE); handle->tty.rd.read_line_buffer = uv_null_buf_; req = &handle->read_req; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); r = RegisterWaitForSingleObject(&handle->tty.rd.read_raw_wait, handle->handle, uv_tty_post_raw_read, (void*) req, INFINITE, WT_EXECUTEINWAITTHREAD | WT_EXECUTEONLYONCE); if (!r) { handle->tty.rd.read_raw_wait = NULL; SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; } static DWORD CALLBACK uv_tty_line_read_thread(void* data) { uv_loop_t* loop; uv_tty_t* handle; uv_req_t* req; DWORD bytes, read_bytes; WCHAR utf16[MAX_INPUT_BUFFER_LENGTH / 3]; DWORD chars, read_chars; LONG status; COORD pos; BOOL read_console_success; assert(data); req = (uv_req_t*) data; handle = (uv_tty_t*) req->data; loop = handle->loop; assert(handle->tty.rd.read_line_buffer.base != NULL); assert(handle->tty.rd.read_line_buffer.len > 0); /* ReadConsole can't handle big buffers. */ if (handle->tty.rd.read_line_buffer.len < MAX_INPUT_BUFFER_LENGTH) { bytes = handle->tty.rd.read_line_buffer.len; } else { bytes = MAX_INPUT_BUFFER_LENGTH; } /* At last, unicode! One utf-16 codeunit never takes more than 3 utf-8 * codeunits to encode. 
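 * A worked example of that sizing rule, assuming (hypothetically) an
 * 8192-byte destination buffer supplied by the allocation callback:
 *
 *     DWORD bytes = 8192;
 *     DWORD chars = bytes / 3;      // 2730 UTF-16 code units
 *     // worst case: every unit becomes a 3-byte BMP character,
 *     //   2730 * 3 == 8190 <= 8192, so the conversion always fits;
 *     // surrogate pairs are cheaper still (4 UTF-8 bytes per 2 units).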
*/ chars = bytes / 3; status = InterlockedExchange(&uv__read_console_status, IN_PROGRESS); if (status == TRAP_REQUESTED) { SET_REQ_SUCCESS(req); InterlockedExchange(&uv__read_console_status, COMPLETED); req->u.io.overlapped.InternalHigh = 0; POST_COMPLETION_FOR_REQ(loop, req); return 0; } read_console_success = ReadConsoleW(handle->handle, (void*) utf16, chars, &read_chars, NULL); if (read_console_success) { read_bytes = WideCharToMultiByte(CP_UTF8, 0, utf16, read_chars, handle->tty.rd.read_line_buffer.base, bytes, NULL, NULL); SET_REQ_SUCCESS(req); req->u.io.overlapped.InternalHigh = read_bytes; } else { SET_REQ_ERROR(req, GetLastError()); } status = InterlockedExchange(&uv__read_console_status, COMPLETED); if (status == TRAP_REQUESTED) { /* If we canceled the read by sending a VK_RETURN event, restore the screen state to undo the visual effect of the VK_RETURN */ if (read_console_success && InterlockedOr(&uv__restore_screen_state, 0)) { HANDLE active_screen_buffer; active_screen_buffer = CreateFileA("conout$", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (active_screen_buffer != INVALID_HANDLE_VALUE) { pos = uv__saved_screen_state.dwCursorPosition; /* If the cursor was at the bottom line of the screen buffer, the VK_RETURN would have caused the buffer contents to scroll up by one line. The right position to reset the cursor to is therefore one line higher */ if (pos.Y == uv__saved_screen_state.dwSize.Y - 1) pos.Y--; SetConsoleCursorPosition(active_screen_buffer, pos); CloseHandle(active_screen_buffer); } } uv_sem_post(&uv_tty_output_lock); } POST_COMPLETION_FOR_REQ(loop, req); return 0; } static void uv__tty_queue_read_line(uv_loop_t* loop, uv_tty_t* handle) { uv_read_t* req; BOOL r; assert(handle->flags & UV_HANDLE_READING); assert(!(handle->flags & UV_HANDLE_READ_PENDING)); assert(handle->handle && handle->handle != INVALID_HANDLE_VALUE); req = &handle->read_req; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); handle->tty.rd.read_line_buffer = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, 8192, &handle->tty.rd.read_line_buffer); if (handle->tty.rd.read_line_buffer.base == NULL || handle->tty.rd.read_line_buffer.len == 0) { handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &handle->tty.rd.read_line_buffer); return; } assert(handle->tty.rd.read_line_buffer.base != NULL); /* Reset flags No locking is required since there cannot be a line read in progress. 
We are also relying on the memory barrier provided by QueueUserWorkItem*/ uv__restore_screen_state = FALSE; uv__read_console_status = NOT_STARTED; r = QueueUserWorkItem(uv_tty_line_read_thread, (void*) req, WT_EXECUTELONGFUNCTION); if (!r) { SET_REQ_ERROR(req, GetLastError()); uv__insert_pending_req(loop, (uv_req_t*)req); } handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; } static void uv__tty_queue_read(uv_loop_t* loop, uv_tty_t* handle) { if (handle->flags & UV_HANDLE_TTY_RAW) { uv__tty_queue_read_raw(loop, handle); } else { uv__tty_queue_read_line(loop, handle); } } static const char* get_vt100_fn_key(DWORD code, char shift, char ctrl, size_t* len) { #define VK_CASE(vk, normal_str, shift_str, ctrl_str, shift_ctrl_str) \ case (vk): \ if (shift && ctrl) { \ *len = sizeof shift_ctrl_str; \ return "\033" shift_ctrl_str; \ } else if (shift) { \ *len = sizeof shift_str ; \ return "\033" shift_str; \ } else if (ctrl) { \ *len = sizeof ctrl_str; \ return "\033" ctrl_str; \ } else { \ *len = sizeof normal_str; \ return "\033" normal_str; \ } switch (code) { /* These mappings are the same as Cygwin's. Unmodified and alt-modified * keypad keys comply with linux console, modifiers comply with xterm * modifier usage. F1. f12 and shift-f1. f10 comply with linux console, f6. * f12 with and without modifiers comply with rxvt. */ VK_CASE(VK_INSERT, "[2~", "[2;2~", "[2;5~", "[2;6~") VK_CASE(VK_END, "[4~", "[4;2~", "[4;5~", "[4;6~") VK_CASE(VK_DOWN, "[B", "[1;2B", "[1;5B", "[1;6B") VK_CASE(VK_NEXT, "[6~", "[6;2~", "[6;5~", "[6;6~") VK_CASE(VK_LEFT, "[D", "[1;2D", "[1;5D", "[1;6D") VK_CASE(VK_CLEAR, "[G", "[1;2G", "[1;5G", "[1;6G") VK_CASE(VK_RIGHT, "[C", "[1;2C", "[1;5C", "[1;6C") VK_CASE(VK_UP, "[A", "[1;2A", "[1;5A", "[1;6A") VK_CASE(VK_HOME, "[1~", "[1;2~", "[1;5~", "[1;6~") VK_CASE(VK_PRIOR, "[5~", "[5;2~", "[5;5~", "[5;6~") VK_CASE(VK_DELETE, "[3~", "[3;2~", "[3;5~", "[3;6~") VK_CASE(VK_NUMPAD0, "[2~", "[2;2~", "[2;5~", "[2;6~") VK_CASE(VK_NUMPAD1, "[4~", "[4;2~", "[4;5~", "[4;6~") VK_CASE(VK_NUMPAD2, "[B", "[1;2B", "[1;5B", "[1;6B") VK_CASE(VK_NUMPAD3, "[6~", "[6;2~", "[6;5~", "[6;6~") VK_CASE(VK_NUMPAD4, "[D", "[1;2D", "[1;5D", "[1;6D") VK_CASE(VK_NUMPAD5, "[G", "[1;2G", "[1;5G", "[1;6G") VK_CASE(VK_NUMPAD6, "[C", "[1;2C", "[1;5C", "[1;6C") VK_CASE(VK_NUMPAD7, "[A", "[1;2A", "[1;5A", "[1;6A") VK_CASE(VK_NUMPAD8, "[1~", "[1;2~", "[1;5~", "[1;6~") VK_CASE(VK_NUMPAD9, "[5~", "[5;2~", "[5;5~", "[5;6~") VK_CASE(VK_DECIMAL, "[3~", "[3;2~", "[3;5~", "[3;6~") VK_CASE(VK_F1, "[[A", "[23~", "[11^", "[23^" ) VK_CASE(VK_F2, "[[B", "[24~", "[12^", "[24^" ) VK_CASE(VK_F3, "[[C", "[25~", "[13^", "[25^" ) VK_CASE(VK_F4, "[[D", "[26~", "[14^", "[26^" ) VK_CASE(VK_F5, "[[E", "[28~", "[15^", "[28^" ) VK_CASE(VK_F6, "[17~", "[29~", "[17^", "[29^" ) VK_CASE(VK_F7, "[18~", "[31~", "[18^", "[31^" ) VK_CASE(VK_F8, "[19~", "[32~", "[19^", "[32^" ) VK_CASE(VK_F9, "[20~", "[33~", "[20^", "[33^" ) VK_CASE(VK_F10, "[21~", "[34~", "[21^", "[34^" ) VK_CASE(VK_F11, "[23~", "[23$", "[23^", "[23@" ) VK_CASE(VK_F12, "[24~", "[24$", "[24^", "[24@" ) default: *len = 0; return NULL; } #undef VK_CASE } void uv_process_tty_read_raw_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* req) { /* Shortcut for handle->tty.rd.last_input_record.Event.KeyEvent. 
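 * The wVirtualKeyCode and dwControlKeyState fields read through this
 * shortcut are what get_vt100_fn_key() above turns into escape sequences.
 * A hedged sketch of that translation (variable names are hypothetical):
 *
 *     size_t len;
 *     const char* seq = get_vt100_fn_key(VK_LEFT, 0, 1, &len);  // shift=0, ctrl=1
 *     // seq points at "\033[1;5D"; len is the sizeof of the "[1;5D"
 *     // literal, which equals strlen(seq) because the literal's trailing
 *     // NUL slot stands in for the leading ESC byte.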
*/ #define KEV handle->tty.rd.last_input_record.Event.KeyEvent DWORD records_left, records_read; uv_buf_t buf; off_t buf_used; assert(handle->type == UV_TTY); assert(handle->flags & UV_HANDLE_TTY_READABLE); handle->flags &= ~UV_HANDLE_READ_PENDING; if (!(handle->flags & UV_HANDLE_READING) || !(handle->flags & UV_HANDLE_TTY_RAW)) { goto out; } if (!REQ_SUCCESS(req)) { /* An error occurred while waiting for the event. */ if ((handle->flags & UV_HANDLE_READING)) { handle->flags &= ~UV_HANDLE_READING; handle->read_cb((uv_stream_t*)handle, uv_translate_sys_error(GET_REQ_ERROR(req)), &uv_null_buf_); } goto out; } /* Fetch the number of events */ if (!GetNumberOfConsoleInputEvents(handle->handle, &records_left)) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); handle->read_cb((uv_stream_t*)handle, uv_translate_sys_error(GetLastError()), &uv_null_buf_); goto out; } /* Windows sends a lot of events that we're not interested in, so buf will be * allocated on demand, when there's actually something to emit. */ buf = uv_null_buf_; buf_used = 0; while ((records_left > 0 || handle->tty.rd.last_key_len > 0) && (handle->flags & UV_HANDLE_READING)) { if (handle->tty.rd.last_key_len == 0) { /* Read the next input record */ if (!ReadConsoleInputW(handle->handle, &handle->tty.rd.last_input_record, 1, &records_read)) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); handle->read_cb((uv_stream_t*) handle, uv_translate_sys_error(GetLastError()), &buf); goto out; } records_left--; /* We might be not subscribed to EVENT_CONSOLE_LAYOUT or we might be * running under some TTY emulator that does not send those events. */ if (handle->tty.rd.last_input_record.EventType == WINDOW_BUFFER_SIZE_EVENT) { uv__tty_console_signal_resize(); } /* Ignore other events that are not key events. */ if (handle->tty.rd.last_input_record.EventType != KEY_EVENT) { continue; } /* Ignore keyup events, unless the left alt key was held and a valid * unicode character was emitted. */ if (!KEV.bKeyDown && (KEV.wVirtualKeyCode != VK_MENU || KEV.uChar.UnicodeChar == 0)) { continue; } /* Ignore keypresses to numpad number keys if the left alt is held * because the user is composing a character, or windows simulating this. */ if ((KEV.dwControlKeyState & LEFT_ALT_PRESSED) && !(KEV.dwControlKeyState & ENHANCED_KEY) && (KEV.wVirtualKeyCode == VK_INSERT || KEV.wVirtualKeyCode == VK_END || KEV.wVirtualKeyCode == VK_DOWN || KEV.wVirtualKeyCode == VK_NEXT || KEV.wVirtualKeyCode == VK_LEFT || KEV.wVirtualKeyCode == VK_CLEAR || KEV.wVirtualKeyCode == VK_RIGHT || KEV.wVirtualKeyCode == VK_HOME || KEV.wVirtualKeyCode == VK_UP || KEV.wVirtualKeyCode == VK_PRIOR || KEV.wVirtualKeyCode == VK_NUMPAD0 || KEV.wVirtualKeyCode == VK_NUMPAD1 || KEV.wVirtualKeyCode == VK_NUMPAD2 || KEV.wVirtualKeyCode == VK_NUMPAD3 || KEV.wVirtualKeyCode == VK_NUMPAD4 || KEV.wVirtualKeyCode == VK_NUMPAD5 || KEV.wVirtualKeyCode == VK_NUMPAD6 || KEV.wVirtualKeyCode == VK_NUMPAD7 || KEV.wVirtualKeyCode == VK_NUMPAD8 || KEV.wVirtualKeyCode == VK_NUMPAD9)) { continue; } if (KEV.uChar.UnicodeChar != 0) { int prefix_len, char_len; /* Character key pressed */ if (KEV.uChar.UnicodeChar >= 0xD800 && KEV.uChar.UnicodeChar < 0xDC00) { /* UTF-16 high surrogate */ handle->tty.rd.last_utf16_high_surrogate = KEV.uChar.UnicodeChar; continue; } /* Prefix with \u033 if alt was held, but alt was not used as part a * compose sequence. 
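 * For example, Alt+a arrives as UnicodeChar == L'a' with LEFT_ALT_PRESSED
 * set, and the code below stores the two bytes "\033a" in last_key.  A
 * hedged, self-contained sketch of that prefix-plus-conversion step (the
 * buffer and variable names are hypothetical):
 *
 *     char last_key[8];
 *     WCHAR ch = L'a';
 *     int prefix_len = 1;
 *     last_key[0] = '\033';
 *     int char_len = WideCharToMultiByte(CP_UTF8, 0, &ch, 1,
 *                                        &last_key[prefix_len],
 *                                        (int) sizeof last_key - prefix_len,
 *                                        NULL, NULL);
 *     // char_len == 1; the reader later emits { 0x1b, 'a' }.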
*/ if ((KEV.dwControlKeyState & (LEFT_ALT_PRESSED | RIGHT_ALT_PRESSED)) && !(KEV.dwControlKeyState & (LEFT_CTRL_PRESSED | RIGHT_CTRL_PRESSED)) && KEV.bKeyDown) { handle->tty.rd.last_key[0] = '\033'; prefix_len = 1; } else { prefix_len = 0; } if (KEV.uChar.UnicodeChar >= 0xDC00 && KEV.uChar.UnicodeChar < 0xE000) { /* UTF-16 surrogate pair */ WCHAR utf16_buffer[2]; utf16_buffer[0] = handle->tty.rd.last_utf16_high_surrogate; utf16_buffer[1] = KEV.uChar.UnicodeChar; char_len = WideCharToMultiByte(CP_UTF8, 0, utf16_buffer, 2, &handle->tty.rd.last_key[prefix_len], sizeof handle->tty.rd.last_key, NULL, NULL); } else { /* Single UTF-16 character */ char_len = WideCharToMultiByte(CP_UTF8, 0, &KEV.uChar.UnicodeChar, 1, &handle->tty.rd.last_key[prefix_len], sizeof handle->tty.rd.last_key, NULL, NULL); } /* Whatever happened, the last character wasn't a high surrogate. */ handle->tty.rd.last_utf16_high_surrogate = 0; /* If the utf16 character(s) couldn't be converted something must be * wrong. */ if (!char_len) { handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); handle->read_cb((uv_stream_t*) handle, uv_translate_sys_error(GetLastError()), &buf); goto out; } handle->tty.rd.last_key_len = (unsigned char) (prefix_len + char_len); handle->tty.rd.last_key_offset = 0; continue; } else { /* Function key pressed */ const char* vt100; size_t prefix_len, vt100_len; vt100 = get_vt100_fn_key(KEV.wVirtualKeyCode, !!(KEV.dwControlKeyState & SHIFT_PRESSED), !!(KEV.dwControlKeyState & ( LEFT_CTRL_PRESSED | RIGHT_CTRL_PRESSED)), &vt100_len); /* If we were unable to map to a vt100 sequence, just ignore. */ if (!vt100) { continue; } /* Prefix with \x033 when the alt key was held. */ if (KEV.dwControlKeyState & (LEFT_ALT_PRESSED | RIGHT_ALT_PRESSED)) { handle->tty.rd.last_key[0] = '\033'; prefix_len = 1; } else { prefix_len = 0; } /* Copy the vt100 sequence to the handle buffer. */ assert(prefix_len + vt100_len < sizeof handle->tty.rd.last_key); memcpy(&handle->tty.rd.last_key[prefix_len], vt100, vt100_len); handle->tty.rd.last_key_len = (unsigned char) (prefix_len + vt100_len); handle->tty.rd.last_key_offset = 0; continue; } } else { /* Copy any bytes left from the last keypress to the user buffer. */ if (handle->tty.rd.last_key_offset < handle->tty.rd.last_key_len) { /* Allocate a buffer if needed */ if (buf_used == 0) { buf = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, 1024, &buf); if (buf.base == NULL || buf.len == 0) { handle->read_cb((uv_stream_t*) handle, UV_ENOBUFS, &buf); goto out; } assert(buf.base != NULL); } buf.base[buf_used++] = handle->tty.rd.last_key[handle->tty.rd.last_key_offset++]; /* If the buffer is full, emit it */ if ((size_t) buf_used == buf.len) { handle->read_cb((uv_stream_t*) handle, buf_used, &buf); buf = uv_null_buf_; buf_used = 0; } continue; } /* Apply dwRepeat from the last input record. */ if (--KEV.wRepeatCount > 0) { handle->tty.rd.last_key_offset = 0; continue; } handle->tty.rd.last_key_len = 0; continue; } } /* Send the buffer back to the user */ if (buf_used > 0) { handle->read_cb((uv_stream_t*) handle, buf_used, &buf); } out: /* Wait for more input events. 
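 * The bytes assembled above reach the application through the ordinary
 * stream read API.  A hedged application-side sketch (the callback names
 * are hypothetical; the calls themselves are public libuv API):
 *
 *     static void on_alloc(uv_handle_t* h, size_t hint, uv_buf_t* buf) {
 *       static char slab[1024];
 *       (void) h; (void) hint;
 *       *buf = uv_buf_init(slab, sizeof slab);
 *     }
 *     static void on_read(uv_stream_t* s, ssize_t n, const uv_buf_t* buf) {
 *       if (n > 0)
 *         fwrite(buf->base, 1, (size_t) n, stdout);   // raw VT100-style bytes
 *       else if (n < 0)
 *         uv_read_stop(s);
 *     }
 *     // uv_read_start((uv_stream_t*) &tty, on_alloc, on_read);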
*/ if ((handle->flags & UV_HANDLE_READING) && !(handle->flags & UV_HANDLE_READ_PENDING)) { uv__tty_queue_read(loop, handle); } DECREASE_PENDING_REQ_COUNT(handle); #undef KEV } void uv_process_tty_read_line_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* req) { uv_buf_t buf; assert(handle->type == UV_TTY); assert(handle->flags & UV_HANDLE_TTY_READABLE); buf = handle->tty.rd.read_line_buffer; handle->flags &= ~UV_HANDLE_READ_PENDING; handle->tty.rd.read_line_buffer = uv_null_buf_; if (!REQ_SUCCESS(req)) { /* Read was not successful */ if (handle->flags & UV_HANDLE_READING) { /* Real error */ handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(loop, handle); handle->read_cb((uv_stream_t*) handle, uv_translate_sys_error(GET_REQ_ERROR(req)), &buf); } } else { if (!(handle->flags & UV_HANDLE_CANCELLATION_PENDING) && req->u.io.overlapped.InternalHigh != 0) { /* Read successful. TODO: read unicode, convert to utf-8 */ DWORD bytes = req->u.io.overlapped.InternalHigh; handle->read_cb((uv_stream_t*) handle, bytes, &buf); } handle->flags &= ~UV_HANDLE_CANCELLATION_PENDING; } /* Wait for more input events. */ if ((handle->flags & UV_HANDLE_READING) && !(handle->flags & UV_HANDLE_READ_PENDING)) { uv__tty_queue_read(loop, handle); } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_tty_read_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* req) { assert(handle->type == UV_TTY); assert(handle->flags & UV_HANDLE_TTY_READABLE); /* If the read_line_buffer member is zero, it must have been an raw read. * Otherwise it was a line-buffered read. FIXME: This is quite obscure. Use a * flag or something. */ if (handle->tty.rd.read_line_buffer.len == 0) { uv_process_tty_read_raw_req(loop, handle, req); } else { uv_process_tty_read_line_req(loop, handle, req); } } int uv__tty_read_start(uv_tty_t* handle, uv_alloc_cb alloc_cb, uv_read_cb read_cb) { uv_loop_t* loop = handle->loop; if (!(handle->flags & UV_HANDLE_TTY_READABLE)) { return ERROR_INVALID_PARAMETER; } handle->flags |= UV_HANDLE_READING; INCREASE_ACTIVE_COUNT(loop, handle); handle->read_cb = read_cb; handle->alloc_cb = alloc_cb; /* If reading was stopped and then started again, there could still be a read * request pending. */ if (handle->flags & UV_HANDLE_READ_PENDING) { return 0; } /* Maybe the user stopped reading half-way while processing key events. * Short-circuit if this could be the case. */ if (handle->tty.rd.last_key_len > 0) { SET_REQ_SUCCESS(&handle->read_req); uv__insert_pending_req(handle->loop, (uv_req_t*) &handle->read_req); /* Make sure no attempt is made to insert it again until it's handled. */ handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; return 0; } uv__tty_queue_read(loop, handle); return 0; } int uv__tty_read_stop(uv_tty_t* handle) { INPUT_RECORD record; DWORD written, err; handle->flags &= ~UV_HANDLE_READING; DECREASE_ACTIVE_COUNT(handle->loop, handle); if (!(handle->flags & UV_HANDLE_READ_PENDING)) return 0; if (handle->flags & UV_HANDLE_TTY_RAW) { /* Cancel raw read. Write some bullshit event to force the console wait to * return. 
*/ memset(&record, 0, sizeof record); record.EventType = FOCUS_EVENT; if (!WriteConsoleInputW(handle->handle, &record, 1, &written)) { return GetLastError(); } } else if (!(handle->flags & UV_HANDLE_CANCELLATION_PENDING)) { /* Cancel line-buffered read if not already pending */ err = uv__cancel_read_console(handle); if (err) return err; handle->flags |= UV_HANDLE_CANCELLATION_PENDING; } return 0; } static int uv__cancel_read_console(uv_tty_t* handle) { HANDLE active_screen_buffer = INVALID_HANDLE_VALUE; INPUT_RECORD record; DWORD written; DWORD err = 0; LONG status; assert(!(handle->flags & UV_HANDLE_CANCELLATION_PENDING)); /* Hold the output lock during the cancellation, to ensure that further writes don't interfere with the screen state. It will be the ReadConsole thread's responsibility to release the lock. */ uv_sem_wait(&uv_tty_output_lock); status = InterlockedExchange(&uv__read_console_status, TRAP_REQUESTED); if (status != IN_PROGRESS) { /* Either we have managed to set a trap for the other thread before ReadConsole is called, or ReadConsole has returned because the user has pressed ENTER. In either case, there is nothing else to do. */ uv_sem_post(&uv_tty_output_lock); return 0; } /* Save screen state before sending the VK_RETURN event */ active_screen_buffer = CreateFileA("conout$", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); if (active_screen_buffer != INVALID_HANDLE_VALUE && GetConsoleScreenBufferInfo(active_screen_buffer, &uv__saved_screen_state)) { InterlockedOr(&uv__restore_screen_state, 1); } /* Write enter key event to force the console wait to return. */ record.EventType = KEY_EVENT; record.Event.KeyEvent.bKeyDown = TRUE; record.Event.KeyEvent.wRepeatCount = 1; record.Event.KeyEvent.wVirtualKeyCode = VK_RETURN; record.Event.KeyEvent.wVirtualScanCode = MapVirtualKeyW(VK_RETURN, MAPVK_VK_TO_VSC); record.Event.KeyEvent.uChar.UnicodeChar = L'\r'; record.Event.KeyEvent.dwControlKeyState = 0; if (!WriteConsoleInputW(handle->handle, &record, 1, &written)) err = GetLastError(); if (active_screen_buffer != INVALID_HANDLE_VALUE) CloseHandle(active_screen_buffer); return err; } static void uv__tty_update_virtual_window(CONSOLE_SCREEN_BUFFER_INFO* info) { uv_tty_virtual_width = info->dwSize.X; uv_tty_virtual_height = info->srWindow.Bottom - info->srWindow.Top + 1; /* Recompute virtual window offset row. */ if (uv_tty_virtual_offset == -1) { uv_tty_virtual_offset = info->dwCursorPosition.Y; } else if (uv_tty_virtual_offset < info->dwCursorPosition.Y - uv_tty_virtual_height + 1) { /* If suddenly find the cursor outside of the virtual window, it must have * somehow scrolled. Update the virtual window offset. 
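 * A hedged condensation of this recomputation and the clamping that follows
 * (demo_window_offset is a hypothetical helper, not part of libuv):
 *
 *     static int demo_window_offset(int cursor_y, int height, int buf_rows,
 *                                   int old_offset) {
 *       int off = old_offset;
 *       if (off == -1)
 *         off = cursor_y;
 *       else if (off < cursor_y - height + 1)
 *         off = cursor_y - height + 1;                 // follow the cursor
 *       if (off + height > buf_rows) off = buf_rows - height;
 *       if (off < 0) off = 0;
 *       return off;
 *     }
 *
 * e.g. demo_window_offset(8990, 30, 9001, 0) == 8961: the window slides
 * down just far enough that the cursor row stays visible.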
*/ uv_tty_virtual_offset = info->dwCursorPosition.Y - uv_tty_virtual_height + 1; } if (uv_tty_virtual_offset + uv_tty_virtual_height > info->dwSize.Y) { uv_tty_virtual_offset = info->dwSize.Y - uv_tty_virtual_height; } if (uv_tty_virtual_offset < 0) { uv_tty_virtual_offset = 0; } } static COORD uv__tty_make_real_coord(uv_tty_t* handle, CONSOLE_SCREEN_BUFFER_INFO* info, int x, unsigned char x_relative, int y, unsigned char y_relative) { COORD result; uv__tty_update_virtual_window(info); /* Adjust y position */ if (y_relative) { y = info->dwCursorPosition.Y + y; } else { y = uv_tty_virtual_offset + y; } /* Clip y to virtual client rectangle */ if (y < uv_tty_virtual_offset) { y = uv_tty_virtual_offset; } else if (y >= uv_tty_virtual_offset + uv_tty_virtual_height) { y = uv_tty_virtual_offset + uv_tty_virtual_height - 1; } /* Adjust x */ if (x_relative) { x = info->dwCursorPosition.X + x; } /* Clip x */ if (x < 0) { x = 0; } else if (x >= uv_tty_virtual_width) { x = uv_tty_virtual_width - 1; } result.X = (unsigned short) x; result.Y = (unsigned short) y; return result; } static int uv__tty_emit_text(uv_tty_t* handle, WCHAR buffer[], DWORD length, DWORD* error) { DWORD written; if (*error != ERROR_SUCCESS) { return -1; } if (!WriteConsoleW(handle->handle, (void*) buffer, length, &written, NULL)) { *error = GetLastError(); return -1; } return 0; } static int uv__tty_move_caret(uv_tty_t* handle, int x, unsigned char x_relative, int y, unsigned char y_relative, DWORD* error) { CONSOLE_SCREEN_BUFFER_INFO info; COORD pos; if (*error != ERROR_SUCCESS) { return -1; } retry: if (!GetConsoleScreenBufferInfo(handle->handle, &info)) { *error = GetLastError(); } pos = uv__tty_make_real_coord(handle, &info, x, x_relative, y, y_relative); if (!SetConsoleCursorPosition(handle->handle, pos)) { if (GetLastError() == ERROR_INVALID_PARAMETER) { /* The console may be resized - retry */ goto retry; } else { *error = GetLastError(); return -1; } } return 0; } static int uv__tty_reset(uv_tty_t* handle, DWORD* error) { const COORD origin = {0, 0}; const WORD char_attrs = uv_tty_default_text_attributes; CONSOLE_SCREEN_BUFFER_INFO screen_buffer_info; DWORD count, written; if (*error != ERROR_SUCCESS) { return -1; } /* Reset original text attributes. */ if (!SetConsoleTextAttribute(handle->handle, char_attrs)) { *error = GetLastError(); return -1; } /* Move the cursor position to (0, 0). */ if (!SetConsoleCursorPosition(handle->handle, origin)) { *error = GetLastError(); return -1; } /* Clear the screen buffer. */ retry: if (!GetConsoleScreenBufferInfo(handle->handle, &screen_buffer_info)) { *error = GetLastError(); return -1; } count = screen_buffer_info.dwSize.X * screen_buffer_info.dwSize.Y; if (!(FillConsoleOutputCharacterW(handle->handle, L'\x20', count, origin, &written) && FillConsoleOutputAttribute(handle->handle, char_attrs, written, origin, &written))) { if (GetLastError() == ERROR_INVALID_PARAMETER) { /* The console may be resized - retry */ goto retry; } else { *error = GetLastError(); return -1; } } /* Move the virtual window up to the top. */ uv_tty_virtual_offset = 0; uv__tty_update_virtual_window(&screen_buffer_info); /* Reset the cursor size and the cursor state. 
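 * An application reaches this whole uv__tty_reset() path by writing the
 * VT100 full-reset escape while libuv itself is interpreting escape codes
 * (i.e. when conhost's virtual terminal processing is not in use).  A
 * hedged sketch using the public API (`tty` and `req` are hypothetical):
 *
 *     static char full_reset[] = "\033c";
 *     uv_write_t req;
 *     uv_buf_t buf = uv_buf_init(full_reset, sizeof full_reset - 1);
 *     uv_write(&req, (uv_stream_t*) &tty, &buf, 1, NULL);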
*/ if (!SetConsoleCursorInfo(handle->handle, &uv_tty_default_cursor_info)) { *error = GetLastError(); return -1; } return 0; } static int uv__tty_clear(uv_tty_t* handle, int dir, char entire_screen, DWORD* error) { CONSOLE_SCREEN_BUFFER_INFO info; COORD start, end; DWORD count, written; int x1, x2, y1, y2; int x1r, x2r, y1r, y2r; if (*error != ERROR_SUCCESS) { return -1; } if (dir == 0) { /* Clear from current position */ x1 = 0; x1r = 1; } else { /* Clear from column 0 */ x1 = 0; x1r = 0; } if (dir == 1) { /* Clear to current position */ x2 = 0; x2r = 1; } else { /* Clear to end of row. We pretend the console is 65536 characters wide, * uv__tty_make_real_coord will clip it to the actual console width. */ x2 = 0xffff; x2r = 0; } if (!entire_screen) { /* Stay on our own row */ y1 = y2 = 0; y1r = y2r = 1; } else { /* Apply columns direction to row */ y1 = x1; y1r = x1r; y2 = x2; y2r = x2r; } retry: if (!GetConsoleScreenBufferInfo(handle->handle, &info)) { *error = GetLastError(); return -1; } start = uv__tty_make_real_coord(handle, &info, x1, x1r, y1, y1r); end = uv__tty_make_real_coord(handle, &info, x2, x2r, y2, y2r); count = (end.Y * info.dwSize.X + end.X) - (start.Y * info.dwSize.X + start.X) + 1; if (!(FillConsoleOutputCharacterW(handle->handle, L'\x20', count, start, &written) && FillConsoleOutputAttribute(handle->handle, info.wAttributes, written, start, &written))) { if (GetLastError() == ERROR_INVALID_PARAMETER) { /* The console may be resized - retry */ goto retry; } else { *error = GetLastError(); return -1; } } return 0; } #define FLIP_FGBG \ do { \ WORD fg = info.wAttributes & 0xF; \ WORD bg = info.wAttributes & 0xF0; \ info.wAttributes &= 0xFF00; \ info.wAttributes |= fg << 4; \ info.wAttributes |= bg >> 4; \ } while (0) static int uv__tty_set_style(uv_tty_t* handle, DWORD* error) { unsigned short argc = handle->tty.wr.ansi_csi_argc; unsigned short* argv = handle->tty.wr.ansi_csi_argv; int i; CONSOLE_SCREEN_BUFFER_INFO info; char fg_color = -1, bg_color = -1; char fg_bright = -1, bg_bright = -1; char inverse = -1; if (argc == 0) { /* Reset mode */ fg_color = uv_tty_default_fg_color; bg_color = uv_tty_default_bg_color; fg_bright = uv_tty_default_fg_bright; bg_bright = uv_tty_default_bg_bright; inverse = uv_tty_default_inverse; } for (i = 0; i < argc; i++) { short arg = argv[i]; if (arg == 0) { /* Reset mode */ fg_color = uv_tty_default_fg_color; bg_color = uv_tty_default_bg_color; fg_bright = uv_tty_default_fg_bright; bg_bright = uv_tty_default_bg_bright; inverse = uv_tty_default_inverse; } else if (arg == 1) { /* Foreground bright on */ fg_bright = 1; } else if (arg == 2) { /* Both bright off */ fg_bright = 0; bg_bright = 0; } else if (arg == 5) { /* Background bright on */ bg_bright = 1; } else if (arg == 7) { /* Inverse: on */ inverse = 1; } else if (arg == 21 || arg == 22) { /* Foreground bright off */ fg_bright = 0; } else if (arg == 25) { /* Background bright off */ bg_bright = 0; } else if (arg == 27) { /* Inverse: off */ inverse = 0; } else if (arg >= 30 && arg <= 37) { /* Set foreground color */ fg_color = arg - 30; } else if (arg == 39) { /* Default text color */ fg_color = uv_tty_default_fg_color; fg_bright = uv_tty_default_fg_bright; } else if (arg >= 40 && arg <= 47) { /* Set background color */ bg_color = arg - 40; } else if (arg == 49) { /* Default background color */ bg_color = uv_tty_default_bg_color; bg_bright = uv_tty_default_bg_bright; } else if (arg >= 90 && arg <= 97) { /* Set bold foreground color */ fg_bright = 1; fg_color = arg - 90; } else if (arg >= 
100 && arg <= 107) { /* Set bold background color */ bg_bright = 1; bg_color = arg - 100; } } if (fg_color == -1 && bg_color == -1 && fg_bright == -1 && bg_bright == -1 && inverse == -1) { /* Nothing changed */ return 0; } if (!GetConsoleScreenBufferInfo(handle->handle, &info)) { *error = GetLastError(); return -1; } if ((info.wAttributes & COMMON_LVB_REVERSE_VIDEO) > 0) { FLIP_FGBG; } if (fg_color != -1) { info.wAttributes &= ~(FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE); if (fg_color & 1) info.wAttributes |= FOREGROUND_RED; if (fg_color & 2) info.wAttributes |= FOREGROUND_GREEN; if (fg_color & 4) info.wAttributes |= FOREGROUND_BLUE; } if (fg_bright != -1) { if (fg_bright) { info.wAttributes |= FOREGROUND_INTENSITY; } else { info.wAttributes &= ~FOREGROUND_INTENSITY; } } if (bg_color != -1) { info.wAttributes &= ~(BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE); if (bg_color & 1) info.wAttributes |= BACKGROUND_RED; if (bg_color & 2) info.wAttributes |= BACKGROUND_GREEN; if (bg_color & 4) info.wAttributes |= BACKGROUND_BLUE; } if (bg_bright != -1) { if (bg_bright) { info.wAttributes |= BACKGROUND_INTENSITY; } else { info.wAttributes &= ~BACKGROUND_INTENSITY; } } if (inverse != -1) { if (inverse) { info.wAttributes |= COMMON_LVB_REVERSE_VIDEO; } else { info.wAttributes &= ~COMMON_LVB_REVERSE_VIDEO; } } if ((info.wAttributes & COMMON_LVB_REVERSE_VIDEO) > 0) { FLIP_FGBG; } if (!SetConsoleTextAttribute(handle->handle, info.wAttributes)) { *error = GetLastError(); return -1; } return 0; } static int uv__tty_save_state(uv_tty_t* handle, unsigned char save_attributes, DWORD* error) { CONSOLE_SCREEN_BUFFER_INFO info; if (*error != ERROR_SUCCESS) { return -1; } if (!GetConsoleScreenBufferInfo(handle->handle, &info)) { *error = GetLastError(); return -1; } uv__tty_update_virtual_window(&info); handle->tty.wr.saved_position.X = info.dwCursorPosition.X; handle->tty.wr.saved_position.Y = info.dwCursorPosition.Y - uv_tty_virtual_offset; handle->flags |= UV_HANDLE_TTY_SAVED_POSITION; if (save_attributes) { handle->tty.wr.saved_attributes = info.wAttributes & (FOREGROUND_INTENSITY | BACKGROUND_INTENSITY); handle->flags |= UV_HANDLE_TTY_SAVED_ATTRIBUTES; } return 0; } static int uv__tty_restore_state(uv_tty_t* handle, unsigned char restore_attributes, DWORD* error) { CONSOLE_SCREEN_BUFFER_INFO info; WORD new_attributes; if (*error != ERROR_SUCCESS) { return -1; } if (handle->flags & UV_HANDLE_TTY_SAVED_POSITION) { if (uv__tty_move_caret(handle, handle->tty.wr.saved_position.X, 0, handle->tty.wr.saved_position.Y, 0, error) != 0) { return -1; } } if (restore_attributes && (handle->flags & UV_HANDLE_TTY_SAVED_ATTRIBUTES)) { if (!GetConsoleScreenBufferInfo(handle->handle, &info)) { *error = GetLastError(); return -1; } new_attributes = info.wAttributes; new_attributes &= ~(FOREGROUND_INTENSITY | BACKGROUND_INTENSITY); new_attributes |= handle->tty.wr.saved_attributes; if (!SetConsoleTextAttribute(handle->handle, new_attributes)) { *error = GetLastError(); return -1; } } return 0; } static int uv__tty_set_cursor_visibility(uv_tty_t* handle, BOOL visible, DWORD* error) { CONSOLE_CURSOR_INFO cursor_info; if (!GetConsoleCursorInfo(handle->handle, &cursor_info)) { *error = GetLastError(); return -1; } cursor_info.bVisible = visible; if (!SetConsoleCursorInfo(handle->handle, &cursor_info)) { *error = GetLastError(); return -1; } return 0; } static int uv__tty_set_cursor_shape(uv_tty_t* handle, int style, DWORD* error) { CONSOLE_CURSOR_INFO cursor_info; if (!GetConsoleCursorInfo(handle->handle, 
&cursor_info)) { *error = GetLastError(); return -1; } if (style == 0) { cursor_info.dwSize = uv_tty_default_cursor_info.dwSize; } else if (style <= 2) { cursor_info.dwSize = CURSOR_SIZE_LARGE; } else { cursor_info.dwSize = CURSOR_SIZE_SMALL; } if (!SetConsoleCursorInfo(handle->handle, &cursor_info)) { *error = GetLastError(); return -1; } return 0; } static int uv__tty_write_bufs(uv_tty_t* handle, const uv_buf_t bufs[], unsigned int nbufs, DWORD* error) { /* We can only write 8k characters at a time. Windows can't handle much more * characters in a single console write anyway. */ WCHAR utf16_buf[MAX_CONSOLE_CHAR]; DWORD utf16_buf_used = 0; unsigned int i; #define FLUSH_TEXT() \ do { \ if (utf16_buf_used > 0) { \ uv__tty_emit_text(handle, utf16_buf, utf16_buf_used, error); \ utf16_buf_used = 0; \ } \ } while (0) #define ENSURE_BUFFER_SPACE(wchars_needed) \ if (wchars_needed > ARRAY_SIZE(utf16_buf) - utf16_buf_used) { \ FLUSH_TEXT(); \ } /* Cache for fast access */ unsigned char utf8_bytes_left = handle->tty.wr.utf8_bytes_left; unsigned int utf8_codepoint = handle->tty.wr.utf8_codepoint; unsigned char previous_eol = handle->tty.wr.previous_eol; unsigned short ansi_parser_state = handle->tty.wr.ansi_parser_state; /* Store the error here. If we encounter an error, stop trying to do i/o but * keep parsing the buffer so we leave the parser in a consistent state. */ *error = ERROR_SUCCESS; uv_sem_wait(&uv_tty_output_lock); for (i = 0; i < nbufs; i++) { uv_buf_t buf = bufs[i]; unsigned int j; for (j = 0; j < buf.len; j++) { unsigned char c = buf.base[j]; /* Run the character through the utf8 decoder We happily accept non * shortest form encodings and invalid code points - there's no real harm * that can be done. */ if (utf8_bytes_left == 0) { /* Read utf-8 start byte */ DWORD first_zero_bit; unsigned char not_c = ~c; #ifdef _MSC_VER /* msvc */ if (_BitScanReverse(&first_zero_bit, not_c)) { #else /* assume gcc */ if (c != 0) { first_zero_bit = (sizeof(int) * 8) - 1 - __builtin_clz(not_c); #endif if (first_zero_bit == 7) { /* Ascii - pass right through */ utf8_codepoint = (unsigned int) c; } else if (first_zero_bit <= 5) { /* Multibyte sequence */ utf8_codepoint = (0xff >> (8 - first_zero_bit)) & c; utf8_bytes_left = (char) (6 - first_zero_bit); } else { /* Invalid continuation */ utf8_codepoint = UNICODE_REPLACEMENT_CHARACTER; } } else { /* 0xff -- invalid */ utf8_codepoint = UNICODE_REPLACEMENT_CHARACTER; } } else if ((c & 0xc0) == 0x80) { /* Valid continuation of utf-8 multibyte sequence */ utf8_bytes_left--; utf8_codepoint <<= 6; utf8_codepoint |= ((unsigned int) c & 0x3f); } else { /* Start byte where continuation was expected. */ utf8_bytes_left = 0; utf8_codepoint = UNICODE_REPLACEMENT_CHARACTER; /* Patch buf offset so this character will be parsed again as a start * byte. */ j--; } /* Maybe we need to parse more bytes to find a character. */ if (utf8_bytes_left != 0) { continue; } /* Parse vt100/ansi escape codes */ if (uv__vterm_state == UV_TTY_SUPPORTED) { /* Pass through escape codes if conhost supports them. 
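 * Otherwise every decoded codepoint falls through to the state machine
 * below.  Either way the codepoint has already been assembled by the UTF-8
 * decoder a few lines up; a worked example of that start-byte decoding, for
 * the 3-byte sequence E2 82 AC encoding U+20AC (the euro sign):
 *
 *     ~0xE2 & 0xff = 0x1D                         ->  first_zero_bit = 4
 *     utf8_codepoint  = (0xff >> (8 - 4)) & 0xE2  =   0x02
 *     utf8_bytes_left = 6 - 4                     =   2
 *     after 0x82:  codepoint = (0x02  << 6) | (0x82 & 0x3f) = 0x082
 *     after 0xAC:  codepoint = (0x082 << 6) | (0xAC & 0x3f) = 0x20AC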
*/ } else if (ansi_parser_state == ANSI_NORMAL) { switch (utf8_codepoint) { case '\033': ansi_parser_state = ANSI_ESCAPE_SEEN; continue; case 0233: ansi_parser_state = ANSI_CSI; handle->tty.wr.ansi_csi_argc = 0; continue; } } else if (ansi_parser_state == ANSI_ESCAPE_SEEN) { switch (utf8_codepoint) { case '[': ansi_parser_state = ANSI_CSI; handle->tty.wr.ansi_csi_argc = 0; continue; case '^': case '_': case 'P': case ']': /* Not supported, but we'll have to parse until we see a stop code, * e. g. ESC \ or BEL. */ ansi_parser_state = ANSI_ST_CONTROL; continue; case '\033': /* Ignore double escape. */ continue; case 'c': /* Full console reset. */ FLUSH_TEXT(); uv__tty_reset(handle, error); ansi_parser_state = ANSI_NORMAL; continue; case '7': /* Save the cursor position and text attributes. */ FLUSH_TEXT(); uv__tty_save_state(handle, 1, error); ansi_parser_state = ANSI_NORMAL; continue; case '8': /* Restore the cursor position and text attributes */ FLUSH_TEXT(); uv__tty_restore_state(handle, 1, error); ansi_parser_state = ANSI_NORMAL; continue; default: if (utf8_codepoint >= '@' && utf8_codepoint <= '_') { /* Single-char control. */ ansi_parser_state = ANSI_NORMAL; continue; } else { /* Invalid - proceed as normal, */ ansi_parser_state = ANSI_NORMAL; } } } else if (ansi_parser_state == ANSI_IGNORE) { /* We're ignoring this command. Stop only on command character. */ if (utf8_codepoint >= '@' && utf8_codepoint <= '~') { ansi_parser_state = ANSI_NORMAL; } continue; } else if (ansi_parser_state == ANSI_DECSCUSR) { /* So far we've the sequence `ESC [ arg space`, and we're waiting for * the final command byte. */ if (utf8_codepoint >= '@' && utf8_codepoint <= '~') { /* Command byte */ if (utf8_codepoint == 'q') { /* Change the cursor shape */ int style = handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1; if (style >= 0 && style <= 6) { FLUSH_TEXT(); uv__tty_set_cursor_shape(handle, style, error); } } /* Sequence ended - go back to normal state. */ ansi_parser_state = ANSI_NORMAL; continue; } /* Unexpected character, but sequence hasn't ended yet. Ignore the rest * of the sequence. */ ansi_parser_state = ANSI_IGNORE; } else if (ansi_parser_state & ANSI_CSI) { /* So far we've seen `ESC [`, and we may or may not have already parsed * some of the arguments that follow. */ if (utf8_codepoint >= '0' && utf8_codepoint <= '9') { /* Parse a numerical argument. */ if (!(ansi_parser_state & ANSI_IN_ARG)) { /* We were not currently parsing a number, add a new one. */ /* Check for that there are too many arguments. */ if (handle->tty.wr.ansi_csi_argc >= ARRAY_SIZE(handle->tty.wr.ansi_csi_argv)) { ansi_parser_state = ANSI_IGNORE; continue; } ansi_parser_state |= ANSI_IN_ARG; handle->tty.wr.ansi_csi_argc++; handle->tty.wr.ansi_csi_argv[handle->tty.wr.ansi_csi_argc - 1] = (unsigned short) utf8_codepoint - '0'; continue; } else { /* We were already parsing a number. Parse next digit. */ uint32_t value = 10 * handle->tty.wr.ansi_csi_argv[handle->tty.wr.ansi_csi_argc - 1]; /* Check for overflow. */ if (value > UINT16_MAX) { ansi_parser_state = ANSI_IGNORE; continue; } handle->tty.wr.ansi_csi_argv[handle->tty.wr.ansi_csi_argc - 1] = (unsigned short) value + (utf8_codepoint - '0'); continue; } } else if (utf8_codepoint == ';') { /* Denotes the end of an argument. */ if (ansi_parser_state & ANSI_IN_ARG) { ansi_parser_state &= ~ANSI_IN_ARG; continue; } else { /* If ANSI_IN_ARG is not set, add another argument and default * it to 0. 
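 * As a concrete trace of this argument handling, the (hypothetical) input
 * sequence "\033[1;34m" (bold blue foreground) is parsed as:
 *
 *     '1'  ->  argc = 1, argv[0] = 1
 *     ';'  ->  closes argv[0]
 *     '3'  ->  argc = 2, argv[1] = 3
 *     '4'  ->  argv[1] = 10 * 3 + 4 = 34
 *     'm'  ->  command byte: uv__tty_set_style() applies bright + blue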
*/ /* Check for too many arguments */ if (handle->tty.wr.ansi_csi_argc >= ARRAY_SIZE(handle->tty.wr.ansi_csi_argv)) { ansi_parser_state = ANSI_IGNORE; continue; } handle->tty.wr.ansi_csi_argc++; handle->tty.wr.ansi_csi_argv[handle->tty.wr.ansi_csi_argc - 1] = 0; continue; } } else if (utf8_codepoint == '?' && !(ansi_parser_state & ANSI_IN_ARG) && !(ansi_parser_state & ANSI_EXTENSION) && handle->tty.wr.ansi_csi_argc == 0) { /* Pass through '?' if it is the first character after CSI */ /* This is an extension character from the VT100 codeset */ /* that is supported and used by most ANSI terminals today. */ ansi_parser_state |= ANSI_EXTENSION; continue; } else if (utf8_codepoint == ' ' && !(ansi_parser_state & ANSI_EXTENSION)) { /* We expect a command byte to follow after this space. The only * command that we current support is 'set cursor style'. */ ansi_parser_state = ANSI_DECSCUSR; continue; } else if (utf8_codepoint >= '@' && utf8_codepoint <= '~') { /* Command byte */ if (ansi_parser_state & ANSI_EXTENSION) { /* Sequence is `ESC [ ? args command`. */ switch (utf8_codepoint) { case 'l': /* Hide the cursor */ if (handle->tty.wr.ansi_csi_argc == 1 && handle->tty.wr.ansi_csi_argv[0] == 25) { FLUSH_TEXT(); uv__tty_set_cursor_visibility(handle, 0, error); } break; case 'h': /* Show the cursor */ if (handle->tty.wr.ansi_csi_argc == 1 && handle->tty.wr.ansi_csi_argv[0] == 25) { FLUSH_TEXT(); uv__tty_set_cursor_visibility(handle, 1, error); } break; } } else { /* Sequence is `ESC [ args command`. */ int x, y, d; switch (utf8_codepoint) { case 'A': /* cursor up */ FLUSH_TEXT(); y = -(handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1); uv__tty_move_caret(handle, 0, 1, y, 1, error); break; case 'B': /* cursor down */ FLUSH_TEXT(); y = handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1; uv__tty_move_caret(handle, 0, 1, y, 1, error); break; case 'C': /* cursor forward */ FLUSH_TEXT(); x = handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1; uv__tty_move_caret(handle, x, 1, 0, 1, error); break; case 'D': /* cursor back */ FLUSH_TEXT(); x = -(handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1); uv__tty_move_caret(handle, x, 1, 0, 1, error); break; case 'E': /* cursor next line */ FLUSH_TEXT(); y = handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1; uv__tty_move_caret(handle, 0, 0, y, 1, error); break; case 'F': /* cursor previous line */ FLUSH_TEXT(); y = -(handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 1); uv__tty_move_caret(handle, 0, 0, y, 1, error); break; case 'G': /* cursor horizontal move absolute */ FLUSH_TEXT(); x = (handle->tty.wr.ansi_csi_argc >= 1 && handle->tty.wr.ansi_csi_argv[0]) ? handle->tty.wr.ansi_csi_argv[0] - 1 : 0; uv__tty_move_caret(handle, x, 0, 0, 1, error); break; case 'H': case 'f': /* cursor move absolute */ FLUSH_TEXT(); y = (handle->tty.wr.ansi_csi_argc >= 1 && handle->tty.wr.ansi_csi_argv[0]) ? handle->tty.wr.ansi_csi_argv[0] - 1 : 0; x = (handle->tty.wr.ansi_csi_argc >= 2 && handle->tty.wr.ansi_csi_argv[1]) ? handle->tty.wr.ansi_csi_argv[1] - 1 : 0; uv__tty_move_caret(handle, x, 0, y, 0, error); break; case 'J': /* Erase screen */ FLUSH_TEXT(); d = handle->tty.wr.ansi_csi_argc ? handle->tty.wr.ansi_csi_argv[0] : 0; if (d >= 0 && d <= 2) { uv__tty_clear(handle, d, 1, error); } break; case 'K': /* Erase line */ FLUSH_TEXT(); d = handle->tty.wr.ansi_csi_argc ? 
handle->tty.wr.ansi_csi_argv[0] : 0; if (d >= 0 && d <= 2) { uv__tty_clear(handle, d, 0, error); } break; case 'm': /* Set style */ FLUSH_TEXT(); uv__tty_set_style(handle, error); break; case 's': /* Save the cursor position. */ FLUSH_TEXT(); uv__tty_save_state(handle, 0, error); break; case 'u': /* Restore the cursor position */ FLUSH_TEXT(); uv__tty_restore_state(handle, 0, error); break; } } /* Sequence ended - go back to normal state. */ ansi_parser_state = ANSI_NORMAL; continue; } else { /* We don't support commands that use private mode characters or * intermediaries. Ignore the rest of the sequence. */ ansi_parser_state = ANSI_IGNORE; continue; } } else if (ansi_parser_state & ANSI_ST_CONTROL) { /* Unsupported control code. * Ignore everything until we see `BEL` or `ESC \`. */ if (ansi_parser_state & ANSI_IN_STRING) { if (!(ansi_parser_state & ANSI_BACKSLASH_SEEN)) { if (utf8_codepoint == '"') { ansi_parser_state &= ~ANSI_IN_STRING; } else if (utf8_codepoint == '\\') { ansi_parser_state |= ANSI_BACKSLASH_SEEN; } } else { ansi_parser_state &= ~ANSI_BACKSLASH_SEEN; } } else { if (utf8_codepoint == '\007' || (utf8_codepoint == '\\' && (ansi_parser_state & ANSI_ESCAPE_SEEN))) { /* End of sequence */ ansi_parser_state = ANSI_NORMAL; } else if (utf8_codepoint == '\033') { /* Escape character */ ansi_parser_state |= ANSI_ESCAPE_SEEN; } else if (utf8_codepoint == '"') { /* String starting */ ansi_parser_state |= ANSI_IN_STRING; ansi_parser_state &= ~ANSI_ESCAPE_SEEN; ansi_parser_state &= ~ANSI_BACKSLASH_SEEN; } else { ansi_parser_state &= ~ANSI_ESCAPE_SEEN; } } continue; } else { /* Inconsistent state */ abort(); } if (utf8_codepoint == 0x0a || utf8_codepoint == 0x0d) { /* EOL conversion - emit \r\n when we see \n. */ if (utf8_codepoint == 0x0a && previous_eol != 0x0d) { /* \n was not preceded by \r; print \r\n. */ ENSURE_BUFFER_SPACE(2); utf16_buf[utf16_buf_used++] = L'\r'; utf16_buf[utf16_buf_used++] = L'\n'; } else if (utf8_codepoint == 0x0d && previous_eol == 0x0a) { /* \n was followed by \r; do not print the \r, since the source was * either \r\n\r (so the second \r is redundant) or was \n\r (so the * \n was processed by the last case and an \r automatically * inserted). */ } else { /* \r without \n; print \r as-is. */ ENSURE_BUFFER_SPACE(1); utf16_buf[utf16_buf_used++] = (WCHAR) utf8_codepoint; } previous_eol = (char) utf8_codepoint; } else if (utf8_codepoint <= 0xffff) { /* Encode character into utf-16 buffer. */ ENSURE_BUFFER_SPACE(1); utf16_buf[utf16_buf_used++] = (WCHAR) utf8_codepoint; previous_eol = 0; } else { ENSURE_BUFFER_SPACE(2); utf8_codepoint -= 0x10000; utf16_buf[utf16_buf_used++] = (WCHAR) (utf8_codepoint / 0x400 + 0xD800); utf16_buf[utf16_buf_used++] = (WCHAR) (utf8_codepoint % 0x400 + 0xDC00); previous_eol = 0; } } } /* Flush remaining characters */ FLUSH_TEXT(); /* Copy cached values back to struct. 
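 * The surrogate emission just above can be checked by hand; for
 * utf8_codepoint = 0x10348 (GOTHIC LETTER HWAIR):
 *
 *     utf8_codepoint -= 0x10000                ->  0x00348
 *     high surrogate = 0xD800 + 0x348 / 0x400  =   0xD800
 *     low surrogate  = 0xDC00 + 0x348 % 0x400  =   0xDF48
 *
 * so the pair D800 DF48 is what lands in utf16_buf for that character.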
*/ handle->tty.wr.utf8_bytes_left = utf8_bytes_left; handle->tty.wr.utf8_codepoint = utf8_codepoint; handle->tty.wr.previous_eol = previous_eol; handle->tty.wr.ansi_parser_state = ansi_parser_state; uv_sem_post(&uv_tty_output_lock); if (*error == STATUS_SUCCESS) { return 0; } else { return -1; } #undef FLUSH_TEXT } int uv__tty_write(uv_loop_t* loop, uv_write_t* req, uv_tty_t* handle, const uv_buf_t bufs[], unsigned int nbufs, uv_write_cb cb) { DWORD error; UV_REQ_INIT(req, UV_WRITE); req->handle = (uv_stream_t*) handle; req->cb = cb; handle->reqs_pending++; handle->stream.conn.write_reqs_pending++; REGISTER_HANDLE_REQ(loop, handle, req); req->u.io.queued_bytes = 0; if (!uv__tty_write_bufs(handle, bufs, nbufs, &error)) { SET_REQ_SUCCESS(req); } else { SET_REQ_ERROR(req, error); } uv__insert_pending_req(loop, (uv_req_t*) req); return 0; } int uv__tty_try_write(uv_tty_t* handle, const uv_buf_t bufs[], unsigned int nbufs) { DWORD error; if (handle->stream.conn.write_reqs_pending > 0) return UV_EAGAIN; if (uv__tty_write_bufs(handle, bufs, nbufs, &error)) return uv_translate_sys_error(error); return uv__count_bufs(bufs, nbufs); } void uv__process_tty_write_req(uv_loop_t* loop, uv_tty_t* handle, uv_write_t* req) { int err; handle->write_queue_size -= req->u.io.queued_bytes; UNREGISTER_HANDLE_REQ(loop, handle, req); if (req->cb) { err = GET_REQ_ERROR(req); req->cb(req, uv_translate_sys_error(err)); } handle->stream.conn.write_reqs_pending--; if (handle->stream.conn.write_reqs_pending == 0) if (handle->flags & UV_HANDLE_SHUTTING) uv__process_tty_shutdown_req(loop, handle, handle->stream.conn.shutdown_req); DECREASE_PENDING_REQ_COUNT(handle); } void uv__tty_close(uv_tty_t* handle) { assert(handle->u.fd == -1 || handle->u.fd > 2); if (handle->flags & UV_HANDLE_READING) uv__tty_read_stop(handle); if (handle->u.fd == -1) CloseHandle(handle->handle); else close(handle->u.fd); handle->u.fd = -1; handle->handle = INVALID_HANDLE_VALUE; handle->flags &= ~(UV_HANDLE_READABLE | UV_HANDLE_WRITABLE); uv__handle_closing(handle); if (handle->reqs_pending == 0) uv__want_endgame(handle->loop, (uv_handle_t*) handle); } void uv__process_tty_shutdown_req(uv_loop_t* loop, uv_tty_t* stream, uv_shutdown_t* req) { assert(stream->stream.conn.write_reqs_pending == 0); assert(req); stream->stream.conn.shutdown_req = NULL; stream->flags &= ~UV_HANDLE_SHUTTING; UNREGISTER_HANDLE_REQ(loop, stream, req); /* TTY shutdown is really just a no-op */ if (req->cb) { if (stream->flags & UV_HANDLE_CLOSING) { req->cb(req, UV_ECANCELED); } else { req->cb(req, 0); } } DECREASE_PENDING_REQ_COUNT(stream); } void uv__tty_endgame(uv_loop_t* loop, uv_tty_t* handle) { assert(handle->flags & UV_HANDLE_CLOSING); assert(handle->reqs_pending == 0); /* The wait handle used for raw reading should be unregistered when the * wait callback runs. */ assert(!(handle->flags & UV_HANDLE_TTY_READABLE) || handle->tty.rd.read_raw_wait == NULL); assert(!(handle->flags & UV_HANDLE_CLOSED)); uv__handle_close(handle); } /* * uv__process_tty_accept_req() is a stub to keep DELEGATE_STREAM_REQ working * TODO: find a way to remove it */ void uv__process_tty_accept_req(uv_loop_t* loop, uv_tty_t* handle, uv_req_t* raw_req) { abort(); } /* * uv__process_tty_connect_req() is a stub to keep DELEGATE_STREAM_REQ working * TODO: find a way to remove it */ void uv__process_tty_connect_req(uv_loop_t* loop, uv_tty_t* handle, uv_connect_t* req) { abort(); } int uv_tty_reset_mode(void) { /* Not necessary to do anything. 
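 * For the output direction, the write paths implemented above also remain
 * reachable synchronously through the public try-write API.  A hedged
 * sketch (`tty` is hypothetical):
 *
 *     static char msg[] = "hello\r\n";
 *     uv_buf_t b = uv_buf_init(msg, sizeof msg - 1);
 *     int n = uv_try_write((uv_stream_t*) &tty, &b, 1);
 *     if (n == UV_EAGAIN) {
 *       // writes already queued; fall back to uv_write() with a callback
 *     }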
*/ return 0; } /* Determine whether or not this version of windows supports * proper ANSI color codes. Should be supported as of windows * 10 version 1511, build number 10.0.10586. */ static void uv__determine_vterm_state(HANDLE handle) { DWORD dwMode = 0; uv__need_check_vterm_state = FALSE; if (!GetConsoleMode(handle, &dwMode)) { return; } dwMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING; if (!SetConsoleMode(handle, dwMode)) { return; } uv__vterm_state = UV_TTY_SUPPORTED; } static DWORD WINAPI uv__tty_console_resize_message_loop_thread(void* param) { NTSTATUS status; ULONG_PTR conhost_pid; MSG msg; if (pSetWinEventHook == NULL || pNtQueryInformationProcess == NULL) return 0; status = pNtQueryInformationProcess(GetCurrentProcess(), ProcessConsoleHostProcess, &conhost_pid, sizeof(conhost_pid), NULL); if (!NT_SUCCESS(status)) { /* We couldn't retrieve our console host process, probably because this * is a 32-bit process running on 64-bit Windows. Fall back to receiving * console events from the input stream only. */ return 0; } /* Ensure the PID is a multiple of 4, which is required by SetWinEventHook */ conhost_pid &= ~(ULONG_PTR)0x3; uv__tty_console_resized = CreateEvent(NULL, TRUE, FALSE, NULL); if (uv__tty_console_resized == NULL) return 0; if (QueueUserWorkItem(uv__tty_console_resize_watcher_thread, NULL, WT_EXECUTELONGFUNCTION) == 0) return 0; if (!pSetWinEventHook(EVENT_CONSOLE_LAYOUT, EVENT_CONSOLE_LAYOUT, NULL, uv__tty_console_resize_event, (DWORD)conhost_pid, 0, WINEVENT_OUTOFCONTEXT)) return 0; while (GetMessage(&msg, NULL, 0, 0)) { TranslateMessage(&msg); DispatchMessage(&msg); } return 0; } static void CALLBACK uv__tty_console_resize_event(HWINEVENTHOOK hWinEventHook, DWORD event, HWND hwnd, LONG idObject, LONG idChild, DWORD dwEventThread, DWORD dwmsEventTime) { SetEvent(uv__tty_console_resized); } static DWORD WINAPI uv__tty_console_resize_watcher_thread(void* param) { for (;;) { /* Make sure to not overwhelm the system with resize events */ Sleep(33); WaitForSingleObject(uv__tty_console_resized, INFINITE); uv__tty_console_signal_resize(); ResetEvent(uv__tty_console_resized); } return 0; } static void uv__tty_console_signal_resize(void) { CONSOLE_SCREEN_BUFFER_INFO sb_info; int width, height; if (!GetConsoleScreenBufferInfo(uv__tty_console_handle, &sb_info)) return; width = sb_info.dwSize.X; height = sb_info.srWindow.Bottom - sb_info.srWindow.Top + 1; uv_mutex_lock(&uv__tty_console_resize_mutex); assert(uv__tty_console_width != -1 && uv__tty_console_height != -1); if (width != uv__tty_console_width || height != uv__tty_console_height) { uv__tty_console_width = width; uv__tty_console_height = height; uv_mutex_unlock(&uv__tty_console_resize_mutex); uv__signal_dispatch(SIGWINCH); } else { uv_mutex_unlock(&uv__tty_console_resize_mutex); } } void uv_tty_set_vterm_state(uv_tty_vtermstate_t state) { uv_sem_wait(&uv_tty_output_lock); uv__need_check_vterm_state = FALSE; uv__vterm_state = state; uv_sem_post(&uv_tty_output_lock); } int uv_tty_get_vterm_state(uv_tty_vtermstate_t* state) { uv_sem_wait(&uv_tty_output_lock); *state = uv__vterm_state; uv_sem_post(&uv_tty_output_lock); return 0; } gevent-24.11.1/deps/libuv/src/win/udp.c000066400000000000000000001065111471441230600175520ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" #include "handle-inl.h" #include "stream-inl.h" #include "req-inl.h" /* * Threshold of active udp streams for which to preallocate udp read buffers. */ const unsigned int uv_active_udp_streams_threshold = 0; /* A zero-size buffer for use by uv_udp_read */ static char uv_zero_[] = ""; int uv_udp_getpeername(const uv_udp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getpeername, name, namelen, 0); } int uv_udp_getsockname(const uv_udp_t* handle, struct sockaddr* name, int* namelen) { return uv__getsockpeername((const uv_handle_t*) handle, getsockname, name, namelen, 0); } static int uv__udp_set_socket(uv_loop_t* loop, uv_udp_t* handle, SOCKET socket, int family) { DWORD yes = 1; WSAPROTOCOL_INFOW info; int opt_len; if (handle->socket != INVALID_SOCKET) return UV_EBUSY; /* Set the socket to nonblocking mode */ if (ioctlsocket(socket, FIONBIO, &yes) == SOCKET_ERROR) { return WSAGetLastError(); } /* Make the socket non-inheritable */ if (!SetHandleInformation((HANDLE)socket, HANDLE_FLAG_INHERIT, 0)) { return GetLastError(); } /* Associate it with the I/O completion port. Use uv_handle_t pointer as * completion key. */ if (CreateIoCompletionPort((HANDLE)socket, loop->iocp, (ULONG_PTR)socket, 0) == NULL) { return GetLastError(); } /* All known Windows that support SetFileCompletionNotificationModes have a * bug that makes it impossible to use this function in conjunction with * datagram sockets. We can work around that but only if the user is using * the default UDP driver (AFD) and has no other. LSPs stacked on top. Here * we check whether that is the case. 
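 * (Editorial note on the check that follows: SO_PROTOCOL_INFOW is queried and a
 *  ProtocolChain.ChainLen of 1 means only the base provider is installed; in that
 *  case the skip-completion-port optimization is enabled together with the
 *  uv__wsarecv_workaround / uv__wsarecvfrom_workaround wrappers, while with
 *  layered providers the plain WSARecv / WSARecvFrom path and normal IOCP
 *  completions are kept.)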
*/ opt_len = (int) sizeof info; if (getsockopt( socket, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &info, &opt_len) == SOCKET_ERROR) { return GetLastError(); } if (info.ProtocolChain.ChainLen == 1) { if (SetFileCompletionNotificationModes( (HANDLE) socket, FILE_SKIP_SET_EVENT_ON_HANDLE | FILE_SKIP_COMPLETION_PORT_ON_SUCCESS)) { handle->flags |= UV_HANDLE_SYNC_BYPASS_IOCP; handle->func_wsarecv = uv__wsarecv_workaround; handle->func_wsarecvfrom = uv__wsarecvfrom_workaround; } else if (GetLastError() != ERROR_INVALID_FUNCTION) { return GetLastError(); } } handle->socket = socket; if (family == AF_INET6) { handle->flags |= UV_HANDLE_IPV6; } else { assert(!(handle->flags & UV_HANDLE_IPV6)); } return 0; } int uv__udp_init_ex(uv_loop_t* loop, uv_udp_t* handle, unsigned flags, int domain) { uv__handle_init(loop, (uv_handle_t*) handle, UV_UDP); handle->socket = INVALID_SOCKET; handle->reqs_pending = 0; handle->activecnt = 0; handle->func_wsarecv = WSARecv; handle->func_wsarecvfrom = WSARecvFrom; handle->send_queue_size = 0; handle->send_queue_count = 0; UV_REQ_INIT(&handle->recv_req, UV_UDP_RECV); handle->recv_req.data = handle; /* If anything fails beyond this point we need to remove the handle from * the handle queue, since it was added by uv__handle_init. */ if (domain != AF_UNSPEC) { SOCKET sock; DWORD err; sock = socket(domain, SOCK_DGRAM, 0); if (sock == INVALID_SOCKET) { err = WSAGetLastError(); QUEUE_REMOVE(&handle->handle_queue); return uv_translate_sys_error(err); } err = uv__udp_set_socket(handle->loop, handle, sock, domain); if (err) { closesocket(sock); QUEUE_REMOVE(&handle->handle_queue); return uv_translate_sys_error(err); } } return 0; } void uv__udp_close(uv_loop_t* loop, uv_udp_t* handle) { uv_udp_recv_stop(handle); closesocket(handle->socket); handle->socket = INVALID_SOCKET; uv__handle_closing(handle); if (handle->reqs_pending == 0) { uv__want_endgame(loop, (uv_handle_t*) handle); } } void uv__udp_endgame(uv_loop_t* loop, uv_udp_t* handle) { if (handle->flags & UV_HANDLE_CLOSING && handle->reqs_pending == 0) { assert(!(handle->flags & UV_HANDLE_CLOSED)); uv__handle_close(handle); } } int uv_udp_using_recvmmsg(const uv_udp_t* handle) { return 0; } static int uv__udp_maybe_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { int r; int err; DWORD no = 0; if (handle->flags & UV_HANDLE_BOUND) return 0; if ((flags & UV_UDP_IPV6ONLY) && addr->sa_family != AF_INET6) { /* UV_UDP_IPV6ONLY is supported only for IPV6 sockets */ return ERROR_INVALID_PARAMETER; } if (handle->socket == INVALID_SOCKET) { SOCKET sock = socket(addr->sa_family, SOCK_DGRAM, 0); if (sock == INVALID_SOCKET) { return WSAGetLastError(); } err = uv__udp_set_socket(handle->loop, handle, sock, addr->sa_family); if (err) { closesocket(sock); return err; } } if (flags & UV_UDP_REUSEADDR) { DWORD yes = 1; /* Set SO_REUSEADDR on the socket. */ if (setsockopt(handle->socket, SOL_SOCKET, SO_REUSEADDR, (char*) &yes, sizeof yes) == SOCKET_ERROR) { err = WSAGetLastError(); return err; } } if (addr->sa_family == AF_INET6) handle->flags |= UV_HANDLE_IPV6; if (addr->sa_family == AF_INET6 && !(flags & UV_UDP_IPV6ONLY)) { /* On windows IPV6ONLY is on by default. If the user doesn't specify it * libuv turns it off. */ /* TODO: how to handle errors? This may fail if there is no ipv4 stack * available, or when run on XP/2003 which have no support for dualstack * sockets. For now we're silently ignoring the error. 
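 * (Illustrative only: a caller that wants an IPv6-only socket can pass the
 *  UV_UDP_IPV6ONLY flag when binding, e.g.
 *      uv_udp_bind(&handle, (const struct sockaddr*) &addr6, UV_UDP_IPV6ONLY);
 *  in that case this branch is skipped and IPV6_V6ONLY stays at the Windows
 *  default of "on". `handle` and `addr6` are placeholders, not variables from
 *  this file.)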
*/ setsockopt(handle->socket, IPPROTO_IPV6, IPV6_V6ONLY, (char*) &no, sizeof no); } r = bind(handle->socket, addr, addrlen); if (r == SOCKET_ERROR) { return WSAGetLastError(); } handle->flags |= UV_HANDLE_BOUND; return 0; } static void uv__udp_queue_recv(uv_loop_t* loop, uv_udp_t* handle) { uv_req_t* req; uv_buf_t buf; DWORD bytes, flags; int result; assert(handle->flags & UV_HANDLE_READING); assert(!(handle->flags & UV_HANDLE_READ_PENDING)); req = &handle->recv_req; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); /* * Preallocate a read buffer if the number of active streams is below * the threshold. */ if (loop->active_udp_streams < uv_active_udp_streams_threshold) { handle->flags &= ~UV_HANDLE_ZERO_READ; handle->recv_buffer = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, UV__UDP_DGRAM_MAXSIZE, &handle->recv_buffer); if (handle->recv_buffer.base == NULL || handle->recv_buffer.len == 0) { handle->recv_cb(handle, UV_ENOBUFS, &handle->recv_buffer, NULL, 0); return; } assert(handle->recv_buffer.base != NULL); buf = handle->recv_buffer; memset(&handle->recv_from, 0, sizeof handle->recv_from); handle->recv_from_len = sizeof handle->recv_from; flags = 0; result = handle->func_wsarecvfrom(handle->socket, (WSABUF*) &buf, 1, &bytes, &flags, (struct sockaddr*) &handle->recv_from, &handle->recv_from_len, &req->u.io.overlapped, NULL); if (UV_SUCCEEDED_WITHOUT_IOCP(result == 0)) { /* Process the req without IOCP. */ handle->flags |= UV_HANDLE_READ_PENDING; req->u.io.overlapped.InternalHigh = bytes; handle->reqs_pending++; uv__insert_pending_req(loop, req); } else if (UV_SUCCEEDED_WITH_IOCP(result == 0)) { /* The req will be processed with IOCP. */ handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; } else { /* Make this req pending reporting an error. */ SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, req); handle->reqs_pending++; } } else { handle->flags |= UV_HANDLE_ZERO_READ; buf.base = (char*) uv_zero_; buf.len = 0; flags = MSG_PEEK; result = handle->func_wsarecv(handle->socket, (WSABUF*) &buf, 1, &bytes, &flags, &req->u.io.overlapped, NULL); if (UV_SUCCEEDED_WITHOUT_IOCP(result == 0)) { /* Process the req without IOCP. */ handle->flags |= UV_HANDLE_READ_PENDING; req->u.io.overlapped.InternalHigh = bytes; handle->reqs_pending++; uv__insert_pending_req(loop, req); } else if (UV_SUCCEEDED_WITH_IOCP(result == 0)) { /* The req will be processed with IOCP. */ handle->flags |= UV_HANDLE_READ_PENDING; handle->reqs_pending++; } else { /* Make this req pending reporting an error. */ SET_REQ_ERROR(req, WSAGetLastError()); uv__insert_pending_req(loop, req); handle->reqs_pending++; } } } int uv__udp_recv_start(uv_udp_t* handle, uv_alloc_cb alloc_cb, uv_udp_recv_cb recv_cb) { uv_loop_t* loop = handle->loop; int err; if (handle->flags & UV_HANDLE_READING) { return UV_EALREADY; } err = uv__udp_maybe_bind(handle, (const struct sockaddr*) &uv_addr_ip4_any_, sizeof(uv_addr_ip4_any_), 0); if (err) return uv_translate_sys_error(err); handle->flags |= UV_HANDLE_READING; INCREASE_ACTIVE_COUNT(loop, handle); loop->active_udp_streams++; handle->recv_cb = recv_cb; handle->alloc_cb = alloc_cb; /* If reading was stopped and then started again, there could still be a recv * request pending. 
*/ if (!(handle->flags & UV_HANDLE_READ_PENDING)) uv__udp_queue_recv(loop, handle); return 0; } int uv__udp_recv_stop(uv_udp_t* handle) { if (handle->flags & UV_HANDLE_READING) { handle->flags &= ~UV_HANDLE_READING; handle->loop->active_udp_streams--; DECREASE_ACTIVE_COUNT(loop, handle); } return 0; } static int uv__send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen, uv_udp_send_cb cb) { uv_loop_t* loop = handle->loop; DWORD result, bytes; UV_REQ_INIT(req, UV_UDP_SEND); req->handle = handle; req->cb = cb; memset(&req->u.io.overlapped, 0, sizeof(req->u.io.overlapped)); result = WSASendTo(handle->socket, (WSABUF*)bufs, nbufs, &bytes, 0, addr, addrlen, &req->u.io.overlapped, NULL); if (UV_SUCCEEDED_WITHOUT_IOCP(result == 0)) { /* Request completed immediately. */ req->u.io.queued_bytes = 0; handle->reqs_pending++; handle->send_queue_size += req->u.io.queued_bytes; handle->send_queue_count++; REGISTER_HANDLE_REQ(loop, handle, req); uv__insert_pending_req(loop, (uv_req_t*)req); } else if (UV_SUCCEEDED_WITH_IOCP(result == 0)) { /* Request queued by the kernel. */ req->u.io.queued_bytes = uv__count_bufs(bufs, nbufs); handle->reqs_pending++; handle->send_queue_size += req->u.io.queued_bytes; handle->send_queue_count++; REGISTER_HANDLE_REQ(loop, handle, req); } else { /* Send failed due to an error. */ return WSAGetLastError(); } return 0; } void uv__process_udp_recv_req(uv_loop_t* loop, uv_udp_t* handle, uv_req_t* req) { uv_buf_t buf; int partial; assert(handle->type == UV_UDP); handle->flags &= ~UV_HANDLE_READ_PENDING; if (!REQ_SUCCESS(req)) { DWORD err = GET_REQ_SOCK_ERROR(req); if (err == WSAEMSGSIZE) { /* Not a real error, it just indicates that the received packet was * bigger than the receive buffer. */ } else if (err == WSAECONNRESET || err == WSAENETRESET) { /* A previous sendto operation failed; ignore this error. If zero-reading * we need to call WSARecv/WSARecvFrom _without_ the. MSG_PEEK flag to * clear out the error queue. For nonzero reads, immediately queue a new * receive. */ if (!(handle->flags & UV_HANDLE_ZERO_READ)) { goto done; } } else { /* A real error occurred. Report the error to the user only if we're * currently reading. */ if (handle->flags & UV_HANDLE_READING) { uv_udp_recv_stop(handle); buf = (handle->flags & UV_HANDLE_ZERO_READ) ? uv_buf_init(NULL, 0) : handle->recv_buffer; handle->recv_cb(handle, uv_translate_sys_error(err), &buf, NULL, 0); } goto done; } } if (!(handle->flags & UV_HANDLE_ZERO_READ)) { /* Successful read */ partial = !REQ_SUCCESS(req); handle->recv_cb(handle, req->u.io.overlapped.InternalHigh, &handle->recv_buffer, (const struct sockaddr*) &handle->recv_from, partial ? UV_UDP_PARTIAL : 0); } else if (handle->flags & UV_HANDLE_READING) { DWORD bytes, err, flags; struct sockaddr_storage from; int from_len; /* Do a nonblocking receive. * TODO: try to read multiple datagrams at once. FIONREAD maybe? 
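 * (This is the second half of the zero-read strategy set up in
 *  uv__udp_queue_recv(): when active_udp_streams is at or above the
 *  preallocation threshold, the pending read was a zero-length MSG_PEEK, so no
 *  user buffer was pinned while waiting; now that data is known to be
 *  available, a buffer is allocated and one datagram is drained synchronously
 *  below.)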
*/ buf = uv_buf_init(NULL, 0); handle->alloc_cb((uv_handle_t*) handle, UV__UDP_DGRAM_MAXSIZE, &buf); if (buf.base == NULL || buf.len == 0) { handle->recv_cb(handle, UV_ENOBUFS, &buf, NULL, 0); goto done; } assert(buf.base != NULL); memset(&from, 0, sizeof from); from_len = sizeof from; flags = 0; if (WSARecvFrom(handle->socket, (WSABUF*)&buf, 1, &bytes, &flags, (struct sockaddr*) &from, &from_len, NULL, NULL) != SOCKET_ERROR) { /* Message received */ handle->recv_cb(handle, bytes, &buf, (const struct sockaddr*) &from, 0); } else { err = WSAGetLastError(); if (err == WSAEMSGSIZE) { /* Message truncated */ handle->recv_cb(handle, bytes, &buf, (const struct sockaddr*) &from, UV_UDP_PARTIAL); } else if (err == WSAEWOULDBLOCK) { /* Kernel buffer empty */ handle->recv_cb(handle, 0, &buf, NULL, 0); } else if (err == WSAECONNRESET || err == WSAENETRESET) { /* WSAECONNRESET/WSANETRESET is ignored because this just indicates * that a previous sendto operation failed. */ handle->recv_cb(handle, 0, &buf, NULL, 0); } else { /* Any other error that we want to report back to the user. */ uv_udp_recv_stop(handle); handle->recv_cb(handle, uv_translate_sys_error(err), &buf, NULL, 0); } } } done: /* Post another read if still reading and not closing. */ if ((handle->flags & UV_HANDLE_READING) && !(handle->flags & UV_HANDLE_READ_PENDING)) { uv__udp_queue_recv(loop, handle); } DECREASE_PENDING_REQ_COUNT(handle); } void uv__process_udp_send_req(uv_loop_t* loop, uv_udp_t* handle, uv_udp_send_t* req) { int err; assert(handle->type == UV_UDP); assert(handle->send_queue_size >= req->u.io.queued_bytes); assert(handle->send_queue_count >= 1); handle->send_queue_size -= req->u.io.queued_bytes; handle->send_queue_count--; UNREGISTER_HANDLE_REQ(loop, handle, req); if (req->cb) { err = 0; if (!REQ_SUCCESS(req)) { err = GET_REQ_SOCK_ERROR(req); } req->cb(req, uv_translate_sys_error(err)); } DECREASE_PENDING_REQ_COUNT(handle); } static int uv__udp_set_membership4(uv_udp_t* handle, const struct sockaddr_in* multicast_addr, const char* interface_addr, uv_membership membership) { int err; int optname; struct ip_mreq mreq; if (handle->flags & UV_HANDLE_IPV6) return UV_EINVAL; /* If the socket is unbound, bind to inaddr_any. 
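 * (Joining or leaving a multicast group requires a bound socket, so an unbound
 *  handle is implicitly bound to 0.0.0.0 with UV_UDP_REUSEADDR before the
 *  IP_ADD_MEMBERSHIP / IP_DROP_MEMBERSHIP setsockopt below.)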
*/ err = uv__udp_maybe_bind(handle, (const struct sockaddr*) &uv_addr_ip4_any_, sizeof(uv_addr_ip4_any_), UV_UDP_REUSEADDR); if (err) return uv_translate_sys_error(err); memset(&mreq, 0, sizeof mreq); if (interface_addr) { err = uv_inet_pton(AF_INET, interface_addr, &mreq.imr_interface.s_addr); if (err) return err; } else { mreq.imr_interface.s_addr = htonl(INADDR_ANY); } mreq.imr_multiaddr.s_addr = multicast_addr->sin_addr.s_addr; switch (membership) { case UV_JOIN_GROUP: optname = IP_ADD_MEMBERSHIP; break; case UV_LEAVE_GROUP: optname = IP_DROP_MEMBERSHIP; break; default: return UV_EINVAL; } if (setsockopt(handle->socket, IPPROTO_IP, optname, (char*) &mreq, sizeof mreq) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } return 0; } int uv__udp_set_membership6(uv_udp_t* handle, const struct sockaddr_in6* multicast_addr, const char* interface_addr, uv_membership membership) { int optname; int err; struct ipv6_mreq mreq; struct sockaddr_in6 addr6; if ((handle->flags & UV_HANDLE_BOUND) && !(handle->flags & UV_HANDLE_IPV6)) return UV_EINVAL; err = uv__udp_maybe_bind(handle, (const struct sockaddr*) &uv_addr_ip6_any_, sizeof(uv_addr_ip6_any_), UV_UDP_REUSEADDR); if (err) return uv_translate_sys_error(err); memset(&mreq, 0, sizeof(mreq)); if (interface_addr) { if (uv_ip6_addr(interface_addr, 0, &addr6)) return UV_EINVAL; mreq.ipv6mr_interface = addr6.sin6_scope_id; } else { mreq.ipv6mr_interface = 0; } mreq.ipv6mr_multiaddr = multicast_addr->sin6_addr; switch (membership) { case UV_JOIN_GROUP: optname = IPV6_ADD_MEMBERSHIP; break; case UV_LEAVE_GROUP: optname = IPV6_DROP_MEMBERSHIP; break; default: return UV_EINVAL; } if (setsockopt(handle->socket, IPPROTO_IPV6, optname, (char*) &mreq, sizeof mreq) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } return 0; } static int uv__udp_set_source_membership4(uv_udp_t* handle, const struct sockaddr_in* multicast_addr, const char* interface_addr, const struct sockaddr_in* source_addr, uv_membership membership) { struct ip_mreq_source mreq; int optname; int err; if (handle->flags & UV_HANDLE_IPV6) return UV_EINVAL; /* If the socket is unbound, bind to inaddr_any. 
*/ err = uv__udp_maybe_bind(handle, (const struct sockaddr*) &uv_addr_ip4_any_, sizeof(uv_addr_ip4_any_), UV_UDP_REUSEADDR); if (err) return uv_translate_sys_error(err); memset(&mreq, 0, sizeof(mreq)); if (interface_addr != NULL) { err = uv_inet_pton(AF_INET, interface_addr, &mreq.imr_interface.s_addr); if (err) return err; } else { mreq.imr_interface.s_addr = htonl(INADDR_ANY); } mreq.imr_multiaddr.s_addr = multicast_addr->sin_addr.s_addr; mreq.imr_sourceaddr.s_addr = source_addr->sin_addr.s_addr; if (membership == UV_JOIN_GROUP) optname = IP_ADD_SOURCE_MEMBERSHIP; else if (membership == UV_LEAVE_GROUP) optname = IP_DROP_SOURCE_MEMBERSHIP; else return UV_EINVAL; if (setsockopt(handle->socket, IPPROTO_IP, optname, (char*) &mreq, sizeof(mreq)) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } return 0; } int uv__udp_set_source_membership6(uv_udp_t* handle, const struct sockaddr_in6* multicast_addr, const char* interface_addr, const struct sockaddr_in6* source_addr, uv_membership membership) { struct group_source_req mreq; struct sockaddr_in6 addr6; int optname; int err; STATIC_ASSERT(sizeof(mreq.gsr_group) >= sizeof(*multicast_addr)); STATIC_ASSERT(sizeof(mreq.gsr_source) >= sizeof(*source_addr)); if ((handle->flags & UV_HANDLE_BOUND) && !(handle->flags & UV_HANDLE_IPV6)) return UV_EINVAL; err = uv__udp_maybe_bind(handle, (const struct sockaddr*) &uv_addr_ip6_any_, sizeof(uv_addr_ip6_any_), UV_UDP_REUSEADDR); if (err) return uv_translate_sys_error(err); memset(&mreq, 0, sizeof(mreq)); if (interface_addr != NULL) { err = uv_ip6_addr(interface_addr, 0, &addr6); if (err) return err; mreq.gsr_interface = addr6.sin6_scope_id; } else { mreq.gsr_interface = 0; } memcpy(&mreq.gsr_group, multicast_addr, sizeof(*multicast_addr)); memcpy(&mreq.gsr_source, source_addr, sizeof(*source_addr)); if (membership == UV_JOIN_GROUP) optname = MCAST_JOIN_SOURCE_GROUP; else if (membership == UV_LEAVE_GROUP) optname = MCAST_LEAVE_SOURCE_GROUP; else return UV_EINVAL; if (setsockopt(handle->socket, IPPROTO_IPV6, optname, (char*) &mreq, sizeof(mreq)) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } return 0; } int uv_udp_set_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, uv_membership membership) { struct sockaddr_in addr4; struct sockaddr_in6 addr6; if (uv_ip4_addr(multicast_addr, 0, &addr4) == 0) return uv__udp_set_membership4(handle, &addr4, interface_addr, membership); else if (uv_ip6_addr(multicast_addr, 0, &addr6) == 0) return uv__udp_set_membership6(handle, &addr6, interface_addr, membership); else return UV_EINVAL; } int uv_udp_set_source_membership(uv_udp_t* handle, const char* multicast_addr, const char* interface_addr, const char* source_addr, uv_membership membership) { int err; struct sockaddr_storage mcast_addr; struct sockaddr_in* mcast_addr4; struct sockaddr_in6* mcast_addr6; struct sockaddr_storage src_addr; struct sockaddr_in* src_addr4; struct sockaddr_in6* src_addr6; mcast_addr4 = (struct sockaddr_in*)&mcast_addr; mcast_addr6 = (struct sockaddr_in6*)&mcast_addr; src_addr4 = (struct sockaddr_in*)&src_addr; src_addr6 = (struct sockaddr_in6*)&src_addr; err = uv_ip4_addr(multicast_addr, 0, mcast_addr4); if (err) { err = uv_ip6_addr(multicast_addr, 0, mcast_addr6); if (err) return err; err = uv_ip6_addr(source_addr, 0, src_addr6); if (err) return err; return uv__udp_set_source_membership6(handle, mcast_addr6, interface_addr, src_addr6, membership); } err = uv_ip4_addr(source_addr, 0, src_addr4); if (err) return err; return 
uv__udp_set_source_membership4(handle, mcast_addr4, interface_addr, src_addr4, membership); } int uv_udp_set_multicast_interface(uv_udp_t* handle, const char* interface_addr) { struct sockaddr_storage addr_st; struct sockaddr_in* addr4; struct sockaddr_in6* addr6; addr4 = (struct sockaddr_in*) &addr_st; addr6 = (struct sockaddr_in6*) &addr_st; if (!interface_addr) { memset(&addr_st, 0, sizeof addr_st); if (handle->flags & UV_HANDLE_IPV6) { addr_st.ss_family = AF_INET6; addr6->sin6_scope_id = 0; } else { addr_st.ss_family = AF_INET; addr4->sin_addr.s_addr = htonl(INADDR_ANY); } } else if (uv_ip4_addr(interface_addr, 0, addr4) == 0) { /* nothing, address was parsed */ } else if (uv_ip6_addr(interface_addr, 0, addr6) == 0) { /* nothing, address was parsed */ } else { return UV_EINVAL; } if (handle->socket == INVALID_SOCKET) return UV_EBADF; if (addr_st.ss_family == AF_INET) { if (setsockopt(handle->socket, IPPROTO_IP, IP_MULTICAST_IF, (char*) &addr4->sin_addr, sizeof(addr4->sin_addr)) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } } else if (addr_st.ss_family == AF_INET6) { if (setsockopt(handle->socket, IPPROTO_IPV6, IPV6_MULTICAST_IF, (char*) &addr6->sin6_scope_id, sizeof(addr6->sin6_scope_id)) == SOCKET_ERROR) { return uv_translate_sys_error(WSAGetLastError()); } } else { assert(0 && "unexpected address family"); abort(); } return 0; } int uv_udp_set_broadcast(uv_udp_t* handle, int value) { BOOL optval = (BOOL) value; if (handle->socket == INVALID_SOCKET) return UV_EBADF; if (setsockopt(handle->socket, SOL_SOCKET, SO_BROADCAST, (char*) &optval, sizeof optval)) { return uv_translate_sys_error(WSAGetLastError()); } return 0; } int uv__udp_is_bound(uv_udp_t* handle) { struct sockaddr_storage addr; int addrlen; addrlen = sizeof(addr); if (uv_udp_getsockname(handle, (struct sockaddr*) &addr, &addrlen) != 0) return 0; return addrlen > 0; } int uv_udp_open(uv_udp_t* handle, uv_os_sock_t sock) { WSAPROTOCOL_INFOW protocol_info; int opt_len; int err; /* Detect the address family of the socket. 
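 * (uv_udp_open() adopts a SOCKET created outside libuv, so nothing is known
 *  about it yet: SO_PROTOCOL_INFOW reports its address family, and the bound
 *  and connected states are probed afterwards via uv__udp_is_bound() and
 *  uv__udp_is_connected().)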
*/ opt_len = (int) sizeof protocol_info; if (getsockopt(sock, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &protocol_info, &opt_len) == SOCKET_ERROR) { return uv_translate_sys_error(GetLastError()); } err = uv__udp_set_socket(handle->loop, handle, sock, protocol_info.iAddressFamily); if (err) return uv_translate_sys_error(err); if (uv__udp_is_bound(handle)) handle->flags |= UV_HANDLE_BOUND; if (uv__udp_is_connected(handle)) handle->flags |= UV_HANDLE_UDP_CONNECTED; return 0; } #define SOCKOPT_SETTER(name, option4, option6, validate) \ int uv_udp_set_##name(uv_udp_t* handle, int value) { \ DWORD optval = (DWORD) value; \ \ if (!(validate(value))) { \ return UV_EINVAL; \ } \ \ if (handle->socket == INVALID_SOCKET) \ return UV_EBADF; \ \ if (!(handle->flags & UV_HANDLE_IPV6)) { \ /* Set IPv4 socket option */ \ if (setsockopt(handle->socket, \ IPPROTO_IP, \ option4, \ (char*) &optval, \ sizeof optval)) { \ return uv_translate_sys_error(WSAGetLastError()); \ } \ } else { \ /* Set IPv6 socket option */ \ if (setsockopt(handle->socket, \ IPPROTO_IPV6, \ option6, \ (char*) &optval, \ sizeof optval)) { \ return uv_translate_sys_error(WSAGetLastError()); \ } \ } \ return 0; \ } #define VALIDATE_TTL(value) ((value) >= 1 && (value) <= 255) #define VALIDATE_MULTICAST_TTL(value) ((value) >= -1 && (value) <= 255) #define VALIDATE_MULTICAST_LOOP(value) (1) SOCKOPT_SETTER(ttl, IP_TTL, IPV6_HOPLIMIT, VALIDATE_TTL) SOCKOPT_SETTER(multicast_ttl, IP_MULTICAST_TTL, IPV6_MULTICAST_HOPS, VALIDATE_MULTICAST_TTL) SOCKOPT_SETTER(multicast_loop, IP_MULTICAST_LOOP, IPV6_MULTICAST_LOOP, VALIDATE_MULTICAST_LOOP) #undef SOCKOPT_SETTER #undef VALIDATE_TTL #undef VALIDATE_MULTICAST_TTL #undef VALIDATE_MULTICAST_LOOP /* This function is an egress point, i.e. it returns libuv errors rather than * system errors. */ int uv__udp_bind(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen, unsigned int flags) { int err; err = uv__udp_maybe_bind(handle, addr, addrlen, flags); if (err) return uv_translate_sys_error(err); return 0; } int uv__udp_connect(uv_udp_t* handle, const struct sockaddr* addr, unsigned int addrlen) { const struct sockaddr* bind_addr; int err; if (!(handle->flags & UV_HANDLE_BOUND)) { if (addrlen == sizeof(uv_addr_ip4_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip4_any_; else if (addrlen == sizeof(uv_addr_ip6_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip6_any_; else return UV_EINVAL; err = uv__udp_maybe_bind(handle, bind_addr, addrlen, 0); if (err) return uv_translate_sys_error(err); } err = connect(handle->socket, addr, addrlen); if (err) return uv_translate_sys_error(WSAGetLastError()); handle->flags |= UV_HANDLE_UDP_CONNECTED; return 0; } int uv__udp_disconnect(uv_udp_t* handle) { int err; struct sockaddr_storage addr; memset(&addr, 0, sizeof(addr)); err = connect(handle->socket, (struct sockaddr*) &addr, sizeof(addr)); if (err) return uv_translate_sys_error(WSAGetLastError()); handle->flags &= ~UV_HANDLE_UDP_CONNECTED; return 0; } /* This function is an egress point, i.e. it returns libuv errors rather than * system errors. 
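 * (Convention used throughout this file: internal helpers such as uv__send()
 *  and uv__udp_maybe_bind() return raw Win32/WSA error codes, and the egress
 *  functions translate them once with uv_translate_sys_error() before handing
 *  them back to the caller as UV_* errors.)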
*/ int uv__udp_send(uv_udp_send_t* req, uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen, uv_udp_send_cb send_cb) { const struct sockaddr* bind_addr; int err; if (!(handle->flags & UV_HANDLE_BOUND)) { if (addrlen == sizeof(uv_addr_ip4_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip4_any_; else if (addrlen == sizeof(uv_addr_ip6_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip6_any_; else return UV_EINVAL; err = uv__udp_maybe_bind(handle, bind_addr, addrlen, 0); if (err) return uv_translate_sys_error(err); } err = uv__send(req, handle, bufs, nbufs, addr, addrlen, send_cb); if (err) return uv_translate_sys_error(err); return 0; } int uv__udp_try_send(uv_udp_t* handle, const uv_buf_t bufs[], unsigned int nbufs, const struct sockaddr* addr, unsigned int addrlen) { DWORD bytes; const struct sockaddr* bind_addr; struct sockaddr_storage converted; int err; assert(nbufs > 0); if (addr != NULL) { err = uv__convert_to_localhost_if_unspecified(addr, &converted); if (err) return err; addr = (const struct sockaddr*) &converted; } /* Already sending a message.*/ if (handle->send_queue_count != 0) return UV_EAGAIN; if (!(handle->flags & UV_HANDLE_BOUND)) { if (addrlen == sizeof(uv_addr_ip4_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip4_any_; else if (addrlen == sizeof(uv_addr_ip6_any_)) bind_addr = (const struct sockaddr*) &uv_addr_ip6_any_; else return UV_EINVAL; err = uv__udp_maybe_bind(handle, bind_addr, addrlen, 0); if (err) return uv_translate_sys_error(err); } err = WSASendTo(handle->socket, (WSABUF*)bufs, nbufs, &bytes, 0, addr, addrlen, NULL, NULL); if (err) return uv_translate_sys_error(WSAGetLastError()); return bytes; } gevent-24.11.1/deps/libuv/src/win/util.c000066400000000000000000001426071471441230600177450ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include #include #include #include #include #include "uv.h" #include "internal.h" /* clang-format off */ #include #include #include #include #include #include /* clang-format on */ #include #include /* * Max title length; the only thing MSDN tells us about the maximum length * of the console title is that it is smaller than 64K. However in practice * it is much smaller, and there is no way to figure out what the exact length * of the title is or can be, at least not on XP. 
To make it even more * annoying, GetConsoleTitle fails when the buffer to be read into is bigger * than the actual maximum length. So we make a conservative guess here; * just don't put the novel you're writing in the title, unless the plot * survives truncation. */ #define MAX_TITLE_LENGTH 8192 /* The number of nanoseconds in one second. */ #define UV__NANOSEC 1000000000 /* Max user name length, from iphlpapi.h */ #ifndef UNLEN # define UNLEN 256 #endif /* A RtlGenRandom() by any other name... */ extern BOOLEAN NTAPI SystemFunction036(PVOID Buffer, ULONG BufferLength); /* Cached copy of the process title, plus a mutex guarding it. */ static char *process_title; static CRITICAL_SECTION process_title_lock; /* Frequency of the high-resolution clock. */ static uint64_t hrtime_frequency_ = 0; /* * One-time initialization code for functionality defined in util.c. */ void uv__util_init(void) { LARGE_INTEGER perf_frequency; /* Initialize process title access mutex. */ InitializeCriticalSection(&process_title_lock); /* Retrieve high-resolution timer frequency * and precompute its reciprocal. */ if (QueryPerformanceFrequency(&perf_frequency)) { hrtime_frequency_ = perf_frequency.QuadPart; } else { uv_fatal_error(GetLastError(), "QueryPerformanceFrequency"); } } int uv_exepath(char* buffer, size_t* size_ptr) { int utf8_len, utf16_buffer_len, utf16_len; WCHAR* utf16_buffer; int err; if (buffer == NULL || size_ptr == NULL || *size_ptr == 0) { return UV_EINVAL; } if (*size_ptr > 32768) { /* Windows paths can never be longer than this. */ utf16_buffer_len = 32768; } else { utf16_buffer_len = (int) *size_ptr; } utf16_buffer = (WCHAR*) uv__malloc(sizeof(WCHAR) * utf16_buffer_len); if (!utf16_buffer) { return UV_ENOMEM; } /* Get the path as UTF-16. */ utf16_len = GetModuleFileNameW(NULL, utf16_buffer, utf16_buffer_len); if (utf16_len <= 0) { err = GetLastError(); goto error; } /* utf16_len contains the length, *not* including the terminating null. */ utf16_buffer[utf16_len] = L'\0'; /* Convert to UTF-8 */ utf8_len = WideCharToMultiByte(CP_UTF8, 0, utf16_buffer, -1, buffer, (int) *size_ptr, NULL, NULL); if (utf8_len == 0) { err = GetLastError(); goto error; } uv__free(utf16_buffer); /* utf8_len *does* include the terminating null at this point, but the * returned size shouldn't. */ *size_ptr = utf8_len - 1; return 0; error: uv__free(utf16_buffer); return uv_translate_sys_error(err); } int uv_cwd(char* buffer, size_t* size) { DWORD utf16_len; WCHAR *utf16_buffer; int r; if (buffer == NULL || size == NULL) { return UV_EINVAL; } utf16_len = GetCurrentDirectoryW(0, NULL); if (utf16_len == 0) { return uv_translate_sys_error(GetLastError()); } utf16_buffer = uv__malloc(utf16_len * sizeof(WCHAR)); if (utf16_buffer == NULL) { return UV_ENOMEM; } utf16_len = GetCurrentDirectoryW(utf16_len, utf16_buffer); if (utf16_len == 0) { uv__free(utf16_buffer); return uv_translate_sys_error(GetLastError()); } /* utf16_len contains the length, *not* including the terminating null. */ utf16_buffer[utf16_len] = L'\0'; /* The returned directory should not have a trailing slash, unless it points * at a drive root, like c:\. Remove it if needed. 
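 * (For example "C:\Users\foo\" becomes "C:\Users\foo", while a bare drive root
 *  such as "C:\" is left untouched because utf16_len == 3 and the second
 *  character is the drive colon.)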
*/ if (utf16_buffer[utf16_len - 1] == L'\\' && !(utf16_len == 3 && utf16_buffer[1] == L':')) { utf16_len--; utf16_buffer[utf16_len] = L'\0'; } /* Check how much space we need */ r = WideCharToMultiByte(CP_UTF8, 0, utf16_buffer, -1, NULL, 0, NULL, NULL); if (r == 0) { uv__free(utf16_buffer); return uv_translate_sys_error(GetLastError()); } else if (r > (int) *size) { uv__free(utf16_buffer); *size = r; return UV_ENOBUFS; } /* Convert to UTF-8 */ r = WideCharToMultiByte(CP_UTF8, 0, utf16_buffer, -1, buffer, *size > INT_MAX ? INT_MAX : (int) *size, NULL, NULL); uv__free(utf16_buffer); if (r == 0) { return uv_translate_sys_error(GetLastError()); } *size = r - 1; return 0; } int uv_chdir(const char* dir) { WCHAR *utf16_buffer; size_t utf16_len, new_utf16_len; WCHAR drive_letter, env_var[4]; if (dir == NULL) { return UV_EINVAL; } utf16_len = MultiByteToWideChar(CP_UTF8, 0, dir, -1, NULL, 0); if (utf16_len == 0) { return uv_translate_sys_error(GetLastError()); } utf16_buffer = uv__malloc(utf16_len * sizeof(WCHAR)); if (utf16_buffer == NULL) { return UV_ENOMEM; } if (MultiByteToWideChar(CP_UTF8, 0, dir, -1, utf16_buffer, utf16_len) == 0) { uv__free(utf16_buffer); return uv_translate_sys_error(GetLastError()); } if (!SetCurrentDirectoryW(utf16_buffer)) { uv__free(utf16_buffer); return uv_translate_sys_error(GetLastError()); } /* Windows stores the drive-local path in an "hidden" environment variable, * which has the form "=C:=C:\Windows". SetCurrentDirectory does not update * this, so we'll have to do it. */ new_utf16_len = GetCurrentDirectoryW(utf16_len, utf16_buffer); if (new_utf16_len > utf16_len ) { uv__free(utf16_buffer); utf16_buffer = uv__malloc(new_utf16_len * sizeof(WCHAR)); if (utf16_buffer == NULL) { /* When updating the environment variable fails, return UV_OK anyway. * We did successfully change current working directory, only updating * hidden env variable failed. */ return 0; } new_utf16_len = GetCurrentDirectoryW(new_utf16_len, utf16_buffer); } if (utf16_len == 0) { uv__free(utf16_buffer); return 0; } /* The returned directory should not have a trailing slash, unless it points * at a drive root, like c:\. Remove it if needed. */ if (utf16_buffer[utf16_len - 1] == L'\\' && !(utf16_len == 3 && utf16_buffer[1] == L':')) { utf16_len--; utf16_buffer[utf16_len] = L'\0'; } if (utf16_len < 2 || utf16_buffer[1] != L':') { /* Doesn't look like a drive letter could be there - probably an UNC path. * TODO: Need to handle win32 namespaces like \\?\C:\ ? */ drive_letter = 0; } else if (utf16_buffer[0] >= L'A' && utf16_buffer[0] <= L'Z') { drive_letter = utf16_buffer[0]; } else if (utf16_buffer[0] >= L'a' && utf16_buffer[0] <= L'z') { /* Convert to uppercase. */ drive_letter = utf16_buffer[0] - L'a' + L'A'; } else { /* Not valid. */ drive_letter = 0; } if (drive_letter != 0) { /* Construct the environment variable name and set it. 
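 * (For example, after a successful chdir to "D:\work" this stores the hidden
 *  per-drive variable "=D:" with value "D:\work", matching the "=C:=C:\Windows"
 *  form described above, so drive-relative paths such as "D:file.txt" continue
 *  to resolve against the current directory of that drive.)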
*/ env_var[0] = L'='; env_var[1] = drive_letter; env_var[2] = L':'; env_var[3] = L'\0'; SetEnvironmentVariableW(env_var, utf16_buffer); } uv__free(utf16_buffer); return 0; } void uv_loadavg(double avg[3]) { /* Can't be implemented */ avg[0] = avg[1] = avg[2] = 0; } uint64_t uv_get_free_memory(void) { MEMORYSTATUSEX memory_status; memory_status.dwLength = sizeof(memory_status); if (!GlobalMemoryStatusEx(&memory_status)) { return -1; } return (uint64_t)memory_status.ullAvailPhys; } uint64_t uv_get_total_memory(void) { MEMORYSTATUSEX memory_status; memory_status.dwLength = sizeof(memory_status); if (!GlobalMemoryStatusEx(&memory_status)) { return -1; } return (uint64_t)memory_status.ullTotalPhys; } uint64_t uv_get_constrained_memory(void) { return 0; /* Memory constraints are unknown. */ } uv_pid_t uv_os_getpid(void) { return GetCurrentProcessId(); } uv_pid_t uv_os_getppid(void) { int parent_pid = -1; HANDLE handle; PROCESSENTRY32 pe; DWORD current_pid = GetCurrentProcessId(); pe.dwSize = sizeof(PROCESSENTRY32); handle = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0); if (Process32First(handle, &pe)) { do { if (pe.th32ProcessID == current_pid) { parent_pid = pe.th32ParentProcessID; break; } } while( Process32Next(handle, &pe)); } CloseHandle(handle); return parent_pid; } char** uv_setup_args(int argc, char** argv) { return argv; } void uv__process_title_cleanup(void) { } int uv_set_process_title(const char* title) { int err; int length; WCHAR* title_w = NULL; uv__once_init(); /* Find out how big the buffer for the wide-char title must be */ length = MultiByteToWideChar(CP_UTF8, 0, title, -1, NULL, 0); if (!length) { err = GetLastError(); goto done; } /* Convert to wide-char string */ title_w = (WCHAR*)uv__malloc(sizeof(WCHAR) * length); if (!title_w) { uv_fatal_error(ERROR_OUTOFMEMORY, "uv__malloc"); } length = MultiByteToWideChar(CP_UTF8, 0, title, -1, title_w, length); if (!length) { err = GetLastError(); goto done; } /* If the title must be truncated insert a \0 terminator there */ if (length > MAX_TITLE_LENGTH) { title_w[MAX_TITLE_LENGTH - 1] = L'\0'; } if (!SetConsoleTitleW(title_w)) { err = GetLastError(); goto done; } EnterCriticalSection(&process_title_lock); uv__free(process_title); process_title = uv__strdup(title); LeaveCriticalSection(&process_title_lock); err = 0; done: uv__free(title_w); return uv_translate_sys_error(err); } static int uv__get_process_title(void) { WCHAR title_w[MAX_TITLE_LENGTH]; if (!GetConsoleTitleW(title_w, sizeof(title_w) / sizeof(WCHAR))) { return -1; } if (uv__convert_utf16_to_utf8(title_w, -1, &process_title) != 0) return -1; return 0; } int uv_get_process_title(char* buffer, size_t size) { size_t len; if (buffer == NULL || size == 0) return UV_EINVAL; uv__once_init(); EnterCriticalSection(&process_title_lock); /* * If the process_title was never read before nor explicitly set, * we must query it with getConsoleTitleW */ if (!process_title && uv__get_process_title() == -1) { LeaveCriticalSection(&process_title_lock); return uv_translate_sys_error(GetLastError()); } assert(process_title); len = strlen(process_title) + 1; if (size < len) { LeaveCriticalSection(&process_title_lock); return UV_ENOBUFS; } memcpy(buffer, process_title, len); LeaveCriticalSection(&process_title_lock); return 0; } uint64_t uv_hrtime(void) { uv__once_init(); return uv__hrtime(UV__NANOSEC); } uint64_t uv__hrtime(unsigned int scale) { LARGE_INTEGER counter; double scaled_freq; double result; assert(hrtime_frequency_ != 0); assert(scale != 0); if 
(!QueryPerformanceCounter(&counter)) { uv_fatal_error(GetLastError(), "QueryPerformanceCounter"); } assert(counter.QuadPart != 0); /* Because we have no guarantee about the order of magnitude of the * performance counter interval, integer math could cause this computation * to overflow. Therefore we resort to floating point math. */ scaled_freq = (double) hrtime_frequency_ / scale; result = (double) counter.QuadPart / scaled_freq; return (uint64_t) result; } int uv_resident_set_memory(size_t* rss) { HANDLE current_process; PROCESS_MEMORY_COUNTERS pmc; current_process = GetCurrentProcess(); if (!GetProcessMemoryInfo(current_process, &pmc, sizeof(pmc))) { return uv_translate_sys_error(GetLastError()); } *rss = pmc.WorkingSetSize; return 0; } int uv_uptime(double* uptime) { *uptime = GetTickCount64() / 1000.0; return 0; } unsigned int uv_available_parallelism(void) { SYSTEM_INFO info; unsigned rc; /* TODO(bnoordhuis) Use GetLogicalProcessorInformationEx() to support systems * with > 64 CPUs? See https://github.com/libuv/libuv/pull/3458 */ GetSystemInfo(&info); rc = info.dwNumberOfProcessors; if (rc < 1) rc = 1; return rc; } int uv_cpu_info(uv_cpu_info_t** cpu_infos_ptr, int* cpu_count_ptr) { uv_cpu_info_t* cpu_infos; SYSTEM_PROCESSOR_PERFORMANCE_INFORMATION* sppi; DWORD sppi_size; SYSTEM_INFO system_info; DWORD cpu_count, i; NTSTATUS status; ULONG result_size; int err; uv_cpu_info_t* cpu_info; cpu_infos = NULL; cpu_count = 0; sppi = NULL; uv__once_init(); GetSystemInfo(&system_info); cpu_count = system_info.dwNumberOfProcessors; cpu_infos = uv__calloc(cpu_count, sizeof *cpu_infos); if (cpu_infos == NULL) { err = ERROR_OUTOFMEMORY; goto error; } sppi_size = cpu_count * sizeof(*sppi); sppi = uv__malloc(sppi_size); if (sppi == NULL) { err = ERROR_OUTOFMEMORY; goto error; } status = pNtQuerySystemInformation(SystemProcessorPerformanceInformation, sppi, sppi_size, &result_size); if (!NT_SUCCESS(status)) { err = pRtlNtStatusToDosError(status); goto error; } assert(result_size == sppi_size); for (i = 0; i < cpu_count; i++) { WCHAR key_name[128]; HKEY processor_key; DWORD cpu_speed; DWORD cpu_speed_size = sizeof(cpu_speed); WCHAR cpu_brand[256]; DWORD cpu_brand_size = sizeof(cpu_brand); size_t len; len = _snwprintf(key_name, ARRAY_SIZE(key_name), L"HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\%d", i); assert(len > 0 && len < ARRAY_SIZE(key_name)); err = RegOpenKeyExW(HKEY_LOCAL_MACHINE, key_name, 0, KEY_QUERY_VALUE, &processor_key); if (err != ERROR_SUCCESS) { goto error; } err = RegQueryValueExW(processor_key, L"~MHz", NULL, NULL, (BYTE*)&cpu_speed, &cpu_speed_size); if (err != ERROR_SUCCESS) { RegCloseKey(processor_key); goto error; } err = RegQueryValueExW(processor_key, L"ProcessorNameString", NULL, NULL, (BYTE*)&cpu_brand, &cpu_brand_size); RegCloseKey(processor_key); if (err != ERROR_SUCCESS) goto error; cpu_info = &cpu_infos[i]; cpu_info->speed = cpu_speed; cpu_info->cpu_times.user = sppi[i].UserTime.QuadPart / 10000; cpu_info->cpu_times.sys = (sppi[i].KernelTime.QuadPart - sppi[i].IdleTime.QuadPart) / 10000; cpu_info->cpu_times.idle = sppi[i].IdleTime.QuadPart / 10000; cpu_info->cpu_times.irq = sppi[i].InterruptTime.QuadPart / 10000; cpu_info->cpu_times.nice = 0; uv__convert_utf16_to_utf8(cpu_brand, cpu_brand_size / sizeof(WCHAR), &(cpu_info->model)); } uv__free(sppi); *cpu_count_ptr = cpu_count; *cpu_infos_ptr = cpu_infos; return 0; error: if (cpu_infos != NULL) { /* This is safe because the cpu_infos array is zeroed on allocation. 
*/ for (i = 0; i < cpu_count; i++) uv__free(cpu_infos[i].model); } uv__free(cpu_infos); uv__free(sppi); return uv_translate_sys_error(err); } static int is_windows_version_or_greater(DWORD os_major, DWORD os_minor, WORD service_pack_major, WORD service_pack_minor) { OSVERSIONINFOEX osvi; DWORDLONG condition_mask = 0; int op = VER_GREATER_EQUAL; /* Initialize the OSVERSIONINFOEX structure. */ ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX)); osvi.dwOSVersionInfoSize = sizeof(OSVERSIONINFOEX); osvi.dwMajorVersion = os_major; osvi.dwMinorVersion = os_minor; osvi.wServicePackMajor = service_pack_major; osvi.wServicePackMinor = service_pack_minor; /* Initialize the condition mask. */ VER_SET_CONDITION(condition_mask, VER_MAJORVERSION, op); VER_SET_CONDITION(condition_mask, VER_MINORVERSION, op); VER_SET_CONDITION(condition_mask, VER_SERVICEPACKMAJOR, op); VER_SET_CONDITION(condition_mask, VER_SERVICEPACKMINOR, op); /* Perform the test. */ return (int) VerifyVersionInfo( &osvi, VER_MAJORVERSION | VER_MINORVERSION | VER_SERVICEPACKMAJOR | VER_SERVICEPACKMINOR, condition_mask); } static int address_prefix_match(int family, struct sockaddr* address, struct sockaddr* prefix_address, int prefix_len) { uint8_t* address_data; uint8_t* prefix_address_data; int i; assert(address->sa_family == family); assert(prefix_address->sa_family == family); if (family == AF_INET6) { address_data = (uint8_t*) &(((struct sockaddr_in6 *) address)->sin6_addr); prefix_address_data = (uint8_t*) &(((struct sockaddr_in6 *) prefix_address)->sin6_addr); } else { address_data = (uint8_t*) &(((struct sockaddr_in *) address)->sin_addr); prefix_address_data = (uint8_t*) &(((struct sockaddr_in *) prefix_address)->sin_addr); } for (i = 0; i < prefix_len >> 3; i++) { if (address_data[i] != prefix_address_data[i]) return 0; } if (prefix_len % 8) return prefix_address_data[i] == (address_data[i] & (0xff << (8 - prefix_len % 8))); return 1; } int uv_interface_addresses(uv_interface_address_t** addresses_ptr, int* count_ptr) { IP_ADAPTER_ADDRESSES* win_address_buf; ULONG win_address_buf_size; IP_ADAPTER_ADDRESSES* adapter; uv_interface_address_t* uv_address_buf; char* name_buf; size_t uv_address_buf_size; uv_interface_address_t* uv_address; int count; int is_vista_or_greater; ULONG flags; *addresses_ptr = NULL; *count_ptr = 0; is_vista_or_greater = is_windows_version_or_greater(6, 0, 0, 0); if (is_vista_or_greater) { flags = GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER; } else { /* We need at least XP SP1. */ if (!is_windows_version_or_greater(5, 1, 1, 0)) return UV_ENOTSUP; flags = GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_INCLUDE_PREFIX; } /* Fetch the size of the adapters reported by windows, and then get the list * itself. */ win_address_buf_size = 0; win_address_buf = NULL; for (;;) { ULONG r; /* If win_address_buf is 0, then GetAdaptersAddresses will fail with. * ERROR_BUFFER_OVERFLOW, and the required buffer size will be stored in * win_address_buf_size. */ r = GetAdaptersAddresses(AF_UNSPEC, flags, NULL, win_address_buf, &win_address_buf_size); if (r == ERROR_SUCCESS) break; uv__free(win_address_buf); switch (r) { case ERROR_BUFFER_OVERFLOW: /* This happens when win_address_buf is NULL or too small to hold all * adapters. */ win_address_buf = uv__malloc(win_address_buf_size); if (win_address_buf == NULL) return UV_ENOMEM; continue; case ERROR_NO_DATA: { /* No adapters were found. 
*/ uv_address_buf = uv__malloc(1); if (uv_address_buf == NULL) return UV_ENOMEM; *count_ptr = 0; *addresses_ptr = uv_address_buf; return 0; } case ERROR_ADDRESS_NOT_ASSOCIATED: return UV_EAGAIN; case ERROR_INVALID_PARAMETER: /* MSDN says: * "This error is returned for any of the following conditions: the * SizePointer parameter is NULL, the Address parameter is not * AF_INET, AF_INET6, or AF_UNSPEC, or the address information for * the parameters requested is greater than ULONG_MAX." * Since the first two conditions are not met, it must be that the * adapter data is too big. */ return UV_ENOBUFS; default: /* Other (unspecified) errors can happen, but we don't have any special * meaning for them. */ assert(r != ERROR_SUCCESS); return uv_translate_sys_error(r); } } /* Count the number of enabled interfaces and compute how much space is * needed to store their info. */ count = 0; uv_address_buf_size = 0; for (adapter = win_address_buf; adapter != NULL; adapter = adapter->Next) { IP_ADAPTER_UNICAST_ADDRESS* unicast_address; int name_size; /* Interfaces that are not 'up' should not be reported. Also skip * interfaces that have no associated unicast address, as to avoid * allocating space for the name for this interface. */ if (adapter->OperStatus != IfOperStatusUp || adapter->FirstUnicastAddress == NULL) continue; /* Compute the size of the interface name. */ name_size = WideCharToMultiByte(CP_UTF8, 0, adapter->FriendlyName, -1, NULL, 0, NULL, FALSE); if (name_size <= 0) { uv__free(win_address_buf); return uv_translate_sys_error(GetLastError()); } uv_address_buf_size += name_size; /* Count the number of addresses associated with this interface, and * compute the size. */ for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS*) adapter->FirstUnicastAddress; unicast_address != NULL; unicast_address = unicast_address->Next) { count++; uv_address_buf_size += sizeof(uv_interface_address_t); } } /* Allocate space to store interface data plus adapter names. */ uv_address_buf = uv__malloc(uv_address_buf_size); if (uv_address_buf == NULL) { uv__free(win_address_buf); return UV_ENOMEM; } /* Compute the start of the uv_interface_address_t array, and the place in * the buffer where the interface names will be stored. */ uv_address = uv_address_buf; name_buf = (char*) (uv_address_buf + count); /* Fill out the output buffer. */ for (adapter = win_address_buf; adapter != NULL; adapter = adapter->Next) { IP_ADAPTER_UNICAST_ADDRESS* unicast_address; int name_size; size_t max_name_size; if (adapter->OperStatus != IfOperStatusUp || adapter->FirstUnicastAddress == NULL) continue; /* Convert the interface name to UTF8. */ max_name_size = (char*) uv_address_buf + uv_address_buf_size - name_buf; if (max_name_size > (size_t) INT_MAX) max_name_size = INT_MAX; name_size = WideCharToMultiByte(CP_UTF8, 0, adapter->FriendlyName, -1, name_buf, (int) max_name_size, NULL, FALSE); if (name_size <= 0) { uv__free(win_address_buf); uv__free(uv_address_buf); return uv_translate_sys_error(GetLastError()); } /* Add an uv_interface_address_t element for every unicast address. */ for (unicast_address = (IP_ADAPTER_UNICAST_ADDRESS*) adapter->FirstUnicastAddress; unicast_address != NULL; unicast_address = unicast_address->Next) { struct sockaddr* sa; ULONG prefix_len; sa = unicast_address->Address.lpSockaddr; /* XP has no OnLinkPrefixLength field. 
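 * (On Vista and later the prefix length is read directly from
 *  OnLinkPrefixLength; on older systems it is recovered by finding the longest
 *  FirstPrefix entry matching the address. The prefix length is then expanded
 *  into a netmask further down, e.g. 24 -> 255.255.255.0 for IPv4.)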
*/ if (is_vista_or_greater) { prefix_len = ((IP_ADAPTER_UNICAST_ADDRESS_LH*) unicast_address)->OnLinkPrefixLength; } else { /* Prior to Windows Vista the FirstPrefix pointed to the list with * single prefix for each IP address assigned to the adapter. * Order of FirstPrefix does not match order of FirstUnicastAddress, * so we need to find corresponding prefix. */ IP_ADAPTER_PREFIX* prefix; prefix_len = 0; for (prefix = adapter->FirstPrefix; prefix; prefix = prefix->Next) { /* We want the longest matching prefix. */ if (prefix->Address.lpSockaddr->sa_family != sa->sa_family || prefix->PrefixLength <= prefix_len) continue; if (address_prefix_match(sa->sa_family, sa, prefix->Address.lpSockaddr, prefix->PrefixLength)) { prefix_len = prefix->PrefixLength; } } /* If there is no matching prefix information, return a single-host * subnet mask (e.g. 255.255.255.255 for IPv4). */ if (!prefix_len) prefix_len = (sa->sa_family == AF_INET6) ? 128 : 32; } memset(uv_address, 0, sizeof *uv_address); uv_address->name = name_buf; if (adapter->PhysicalAddressLength == sizeof(uv_address->phys_addr)) { memcpy(uv_address->phys_addr, adapter->PhysicalAddress, sizeof(uv_address->phys_addr)); } uv_address->is_internal = (adapter->IfType == IF_TYPE_SOFTWARE_LOOPBACK); if (sa->sa_family == AF_INET6) { uv_address->address.address6 = *((struct sockaddr_in6 *) sa); uv_address->netmask.netmask6.sin6_family = AF_INET6; memset(uv_address->netmask.netmask6.sin6_addr.s6_addr, 0xff, prefix_len >> 3); /* This check ensures that we don't write past the size of the data. */ if (prefix_len % 8) { uv_address->netmask.netmask6.sin6_addr.s6_addr[prefix_len >> 3] = 0xff << (8 - prefix_len % 8); } } else { uv_address->address.address4 = *((struct sockaddr_in *) sa); uv_address->netmask.netmask4.sin_family = AF_INET; uv_address->netmask.netmask4.sin_addr.s_addr = (prefix_len > 0) ? 
htonl(0xffffffff << (32 - prefix_len)) : 0; } uv_address++; } name_buf += name_size; } uv__free(win_address_buf); *addresses_ptr = uv_address_buf; *count_ptr = count; return 0; } void uv_free_interface_addresses(uv_interface_address_t* addresses, int count) { uv__free(addresses); } int uv_getrusage(uv_rusage_t *uv_rusage) { FILETIME createTime, exitTime, kernelTime, userTime; SYSTEMTIME kernelSystemTime, userSystemTime; PROCESS_MEMORY_COUNTERS memCounters; IO_COUNTERS ioCounters; int ret; ret = GetProcessTimes(GetCurrentProcess(), &createTime, &exitTime, &kernelTime, &userTime); if (ret == 0) { return uv_translate_sys_error(GetLastError()); } ret = FileTimeToSystemTime(&kernelTime, &kernelSystemTime); if (ret == 0) { return uv_translate_sys_error(GetLastError()); } ret = FileTimeToSystemTime(&userTime, &userSystemTime); if (ret == 0) { return uv_translate_sys_error(GetLastError()); } ret = GetProcessMemoryInfo(GetCurrentProcess(), &memCounters, sizeof(memCounters)); if (ret == 0) { return uv_translate_sys_error(GetLastError()); } ret = GetProcessIoCounters(GetCurrentProcess(), &ioCounters); if (ret == 0) { return uv_translate_sys_error(GetLastError()); } memset(uv_rusage, 0, sizeof(*uv_rusage)); uv_rusage->ru_utime.tv_sec = userSystemTime.wHour * 3600 + userSystemTime.wMinute * 60 + userSystemTime.wSecond; uv_rusage->ru_utime.tv_usec = userSystemTime.wMilliseconds * 1000; uv_rusage->ru_stime.tv_sec = kernelSystemTime.wHour * 3600 + kernelSystemTime.wMinute * 60 + kernelSystemTime.wSecond; uv_rusage->ru_stime.tv_usec = kernelSystemTime.wMilliseconds * 1000; uv_rusage->ru_majflt = (uint64_t) memCounters.PageFaultCount; uv_rusage->ru_maxrss = (uint64_t) memCounters.PeakWorkingSetSize / 1024; uv_rusage->ru_oublock = (uint64_t) ioCounters.WriteOperationCount; uv_rusage->ru_inblock = (uint64_t) ioCounters.ReadOperationCount; return 0; } int uv_os_homedir(char* buffer, size_t* size) { uv_passwd_t pwd; size_t len; int r; /* Check if the USERPROFILE environment variable is set first. The task of performing input validation on buffer and size is taken care of by uv_os_getenv(). */ r = uv_os_getenv("USERPROFILE", buffer, size); /* Don't return an error if USERPROFILE was not found. */ if (r != UV_ENOENT) return r; /* USERPROFILE is not set, so call uv__getpwuid_r() */ r = uv__getpwuid_r(&pwd); if (r != 0) { return r; } len = strlen(pwd.homedir); if (len >= *size) { *size = len + 1; uv_os_free_passwd(&pwd); return UV_ENOBUFS; } memcpy(buffer, pwd.homedir, len + 1); *size = len; uv_os_free_passwd(&pwd); return 0; } int uv_os_tmpdir(char* buffer, size_t* size) { wchar_t *path; DWORD bufsize; size_t len; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; len = 0; len = GetTempPathW(0, NULL); if (len == 0) { return uv_translate_sys_error(GetLastError()); } /* Include space for terminating null char. */ len += 1; path = uv__malloc(len * sizeof(wchar_t)); if (path == NULL) { return UV_ENOMEM; } len = GetTempPathW(len, path); if (len == 0) { uv__free(path); return uv_translate_sys_error(GetLastError()); } /* The returned directory should not have a trailing slash, unless it points * at a drive root, like c:\. Remove it if needed. 
*/ if (path[len - 1] == L'\\' && !(len == 3 && path[1] == L':')) { len--; path[len] = L'\0'; } /* Check how much space we need */ bufsize = WideCharToMultiByte(CP_UTF8, 0, path, -1, NULL, 0, NULL, NULL); if (bufsize == 0) { uv__free(path); return uv_translate_sys_error(GetLastError()); } else if (bufsize > *size) { uv__free(path); *size = bufsize; return UV_ENOBUFS; } /* Convert to UTF-8 */ bufsize = WideCharToMultiByte(CP_UTF8, 0, path, -1, buffer, *size, NULL, NULL); uv__free(path); if (bufsize == 0) return uv_translate_sys_error(GetLastError()); *size = bufsize - 1; return 0; } void uv_os_free_passwd(uv_passwd_t* pwd) { if (pwd == NULL) return; uv__free(pwd->username); uv__free(pwd->homedir); pwd->username = NULL; pwd->homedir = NULL; } /* * Converts a UTF-16 string into a UTF-8 one. The resulting string is * null-terminated. * * If utf16 is null terminated, utf16len can be set to -1, otherwise it must * be specified. */ int uv__convert_utf16_to_utf8(const WCHAR* utf16, int utf16len, char** utf8) { DWORD bufsize; if (utf16 == NULL) return UV_EINVAL; /* Check how much space we need */ bufsize = WideCharToMultiByte(CP_UTF8, 0, utf16, utf16len, NULL, 0, NULL, NULL); if (bufsize == 0) return uv_translate_sys_error(GetLastError()); /* Allocate the destination buffer adding an extra byte for the terminating * NULL. If utf16len is not -1 WideCharToMultiByte will not add it, so * we do it ourselves always, just in case. */ *utf8 = uv__malloc(bufsize + 1); if (*utf8 == NULL) return UV_ENOMEM; /* Convert to UTF-8 */ bufsize = WideCharToMultiByte(CP_UTF8, 0, utf16, utf16len, *utf8, bufsize, NULL, NULL); if (bufsize == 0) { uv__free(*utf8); *utf8 = NULL; return uv_translate_sys_error(GetLastError()); } (*utf8)[bufsize] = '\0'; return 0; } /* * Converts a UTF-8 string into a UTF-16 one. The resulting string is * null-terminated. * * If utf8 is null terminated, utf8len can be set to -1, otherwise it must * be specified. */ int uv__convert_utf8_to_utf16(const char* utf8, int utf8len, WCHAR** utf16) { int bufsize; if (utf8 == NULL) return UV_EINVAL; /* Check how much space we need */ bufsize = MultiByteToWideChar(CP_UTF8, 0, utf8, utf8len, NULL, 0); if (bufsize == 0) return uv_translate_sys_error(GetLastError()); /* Allocate the destination buffer adding an extra byte for the terminating * NULL. If utf8len is not -1 MultiByteToWideChar will not add it, so * we do it ourselves always, just in case. 
*/ *utf16 = uv__malloc(sizeof(WCHAR) * (bufsize + 1)); if (*utf16 == NULL) return UV_ENOMEM; /* Convert to UTF-16 */ bufsize = MultiByteToWideChar(CP_UTF8, 0, utf8, utf8len, *utf16, bufsize); if (bufsize == 0) { uv__free(*utf16); *utf16 = NULL; return uv_translate_sys_error(GetLastError()); } (*utf16)[bufsize] = L'\0'; return 0; } int uv__getpwuid_r(uv_passwd_t* pwd) { HANDLE token; wchar_t username[UNLEN + 1]; wchar_t *path; DWORD bufsize; int r; if (pwd == NULL) return UV_EINVAL; /* Get the home directory using GetUserProfileDirectoryW() */ if (OpenProcessToken(GetCurrentProcess(), TOKEN_READ, &token) == 0) return uv_translate_sys_error(GetLastError()); bufsize = 0; GetUserProfileDirectoryW(token, NULL, &bufsize); if (GetLastError() != ERROR_INSUFFICIENT_BUFFER) { r = GetLastError(); CloseHandle(token); return uv_translate_sys_error(r); } path = uv__malloc(bufsize * sizeof(wchar_t)); if (path == NULL) { CloseHandle(token); return UV_ENOMEM; } if (!GetUserProfileDirectoryW(token, path, &bufsize)) { r = GetLastError(); CloseHandle(token); uv__free(path); return uv_translate_sys_error(r); } CloseHandle(token); /* Get the username using GetUserNameW() */ bufsize = ARRAY_SIZE(username); if (!GetUserNameW(username, &bufsize)) { r = GetLastError(); uv__free(path); /* This should not be possible */ if (r == ERROR_INSUFFICIENT_BUFFER) return UV_ENOMEM; return uv_translate_sys_error(r); } pwd->homedir = NULL; r = uv__convert_utf16_to_utf8(path, -1, &pwd->homedir); uv__free(path); if (r != 0) return r; pwd->username = NULL; r = uv__convert_utf16_to_utf8(username, -1, &pwd->username); if (r != 0) { uv__free(pwd->homedir); return r; } pwd->shell = NULL; pwd->uid = -1; pwd->gid = -1; return 0; } int uv_os_get_passwd(uv_passwd_t* pwd) { return uv__getpwuid_r(pwd); } int uv_os_environ(uv_env_item_t** envitems, int* count) { wchar_t* env; wchar_t* penv; int i, cnt; uv_env_item_t* envitem; *envitems = NULL; *count = 0; env = GetEnvironmentStringsW(); if (env == NULL) return 0; for (penv = env, i = 0; *penv != L'\0'; penv += wcslen(penv) + 1, i++); *envitems = uv__calloc(i, sizeof(**envitems)); if (*envitems == NULL) { FreeEnvironmentStringsW(env); return UV_ENOMEM; } penv = env; cnt = 0; while (*penv != L'\0' && cnt < i) { char* buf; char* ptr; if (uv__convert_utf16_to_utf8(penv, -1, &buf) != 0) goto fail; /* Using buf + 1 here because we know that `buf` has length at least 1, * and some special environment variables on Windows start with a = sign. 
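 * (The hidden drive variables set by uv_chdir() earlier in this file are
 *  exactly this shape, e.g. "=C:=C:\Windows": searching from buf + 1 splits
 *  that entry into name "=C:" and value "C:\Windows" instead of producing an
 *  empty name.)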
*/ ptr = strchr(buf + 1, '='); if (ptr == NULL) { uv__free(buf); goto do_continue; } *ptr = '\0'; envitem = &(*envitems)[cnt]; envitem->name = buf; envitem->value = ptr + 1; cnt++; do_continue: penv += wcslen(penv) + 1; } FreeEnvironmentStringsW(env); *count = cnt; return 0; fail: FreeEnvironmentStringsW(env); for (i = 0; i < cnt; i++) { envitem = &(*envitems)[cnt]; uv__free(envitem->name); } uv__free(*envitems); *envitems = NULL; *count = 0; return UV_ENOMEM; } int uv_os_getenv(const char* name, char* buffer, size_t* size) { wchar_t fastvar[512]; wchar_t* var; DWORD varlen; wchar_t* name_w; DWORD bufsize; size_t len; int r; if (name == NULL || buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; r = uv__convert_utf8_to_utf16(name, -1, &name_w); if (r != 0) return r; var = fastvar; varlen = ARRAY_SIZE(fastvar); for (;;) { SetLastError(ERROR_SUCCESS); len = GetEnvironmentVariableW(name_w, var, varlen); if (len < varlen) break; /* Try repeatedly because we might have been preempted by another thread * modifying the environment variable just as we're trying to read it. */ if (var != fastvar) uv__free(var); varlen = 1 + len; var = uv__malloc(varlen * sizeof(*var)); if (var == NULL) { r = UV_ENOMEM; goto fail; } } uv__free(name_w); name_w = NULL; if (len == 0) { r = GetLastError(); if (r != ERROR_SUCCESS) { r = uv_translate_sys_error(r); goto fail; } } /* Check how much space we need */ bufsize = WideCharToMultiByte(CP_UTF8, 0, var, -1, NULL, 0, NULL, NULL); if (bufsize == 0) { r = uv_translate_sys_error(GetLastError()); goto fail; } else if (bufsize > *size) { *size = bufsize; r = UV_ENOBUFS; goto fail; } /* Convert to UTF-8 */ bufsize = WideCharToMultiByte(CP_UTF8, 0, var, -1, buffer, *size, NULL, NULL); if (bufsize == 0) { r = uv_translate_sys_error(GetLastError()); goto fail; } *size = bufsize - 1; r = 0; fail: if (name_w != NULL) uv__free(name_w); if (var != fastvar) uv__free(var); return r; } int uv_os_setenv(const char* name, const char* value) { wchar_t* name_w; wchar_t* value_w; int r; if (name == NULL || value == NULL) return UV_EINVAL; r = uv__convert_utf8_to_utf16(name, -1, &name_w); if (r != 0) return r; r = uv__convert_utf8_to_utf16(value, -1, &value_w); if (r != 0) { uv__free(name_w); return r; } r = SetEnvironmentVariableW(name_w, value_w); uv__free(name_w); uv__free(value_w); if (r == 0) return uv_translate_sys_error(GetLastError()); return 0; } int uv_os_unsetenv(const char* name) { wchar_t* name_w; int r; if (name == NULL) return UV_EINVAL; r = uv__convert_utf8_to_utf16(name, -1, &name_w); if (r != 0) return r; r = SetEnvironmentVariableW(name_w, NULL); uv__free(name_w); if (r == 0) return uv_translate_sys_error(GetLastError()); return 0; } int uv_os_gethostname(char* buffer, size_t* size) { WCHAR buf[UV_MAXHOSTNAMESIZE]; size_t len; char* utf8_str; int convert_result; if (buffer == NULL || size == NULL || *size == 0) return UV_EINVAL; uv__once_init(); /* Initialize winsock */ if (pGetHostNameW == NULL) return UV_ENOSYS; if (pGetHostNameW(buf, UV_MAXHOSTNAMESIZE) != 0) return uv_translate_sys_error(WSAGetLastError()); convert_result = uv__convert_utf16_to_utf8(buf, -1, &utf8_str); if (convert_result != 0) return convert_result; len = strlen(utf8_str); if (len >= *size) { *size = len + 1; uv__free(utf8_str); return UV_ENOBUFS; } memcpy(buffer, utf8_str, len + 1); uv__free(utf8_str); *size = len; return 0; } static int uv__get_handle(uv_pid_t pid, int access, HANDLE* handle) { int r; if (pid == 0) *handle = GetCurrentProcess(); else *handle = OpenProcess(access, 
FALSE, pid); if (*handle == NULL) { r = GetLastError(); if (r == ERROR_INVALID_PARAMETER) return UV_ESRCH; else return uv_translate_sys_error(r); } return 0; } int uv_os_getpriority(uv_pid_t pid, int* priority) { HANDLE handle; int r; if (priority == NULL) return UV_EINVAL; r = uv__get_handle(pid, PROCESS_QUERY_LIMITED_INFORMATION, &handle); if (r != 0) return r; r = GetPriorityClass(handle); if (r == 0) { r = uv_translate_sys_error(GetLastError()); } else { /* Map Windows priority classes to Unix nice values. */ if (r == REALTIME_PRIORITY_CLASS) *priority = UV_PRIORITY_HIGHEST; else if (r == HIGH_PRIORITY_CLASS) *priority = UV_PRIORITY_HIGH; else if (r == ABOVE_NORMAL_PRIORITY_CLASS) *priority = UV_PRIORITY_ABOVE_NORMAL; else if (r == NORMAL_PRIORITY_CLASS) *priority = UV_PRIORITY_NORMAL; else if (r == BELOW_NORMAL_PRIORITY_CLASS) *priority = UV_PRIORITY_BELOW_NORMAL; else /* IDLE_PRIORITY_CLASS */ *priority = UV_PRIORITY_LOW; r = 0; } CloseHandle(handle); return r; } int uv_os_setpriority(uv_pid_t pid, int priority) { HANDLE handle; int priority_class; int r; /* Map Unix nice values to Windows priority classes. */ if (priority < UV_PRIORITY_HIGHEST || priority > UV_PRIORITY_LOW) return UV_EINVAL; else if (priority < UV_PRIORITY_HIGH) priority_class = REALTIME_PRIORITY_CLASS; else if (priority < UV_PRIORITY_ABOVE_NORMAL) priority_class = HIGH_PRIORITY_CLASS; else if (priority < UV_PRIORITY_NORMAL) priority_class = ABOVE_NORMAL_PRIORITY_CLASS; else if (priority < UV_PRIORITY_BELOW_NORMAL) priority_class = NORMAL_PRIORITY_CLASS; else if (priority < UV_PRIORITY_LOW) priority_class = BELOW_NORMAL_PRIORITY_CLASS; else priority_class = IDLE_PRIORITY_CLASS; r = uv__get_handle(pid, PROCESS_SET_INFORMATION, &handle); if (r != 0) return r; if (SetPriorityClass(handle, priority_class) == 0) r = uv_translate_sys_error(GetLastError()); CloseHandle(handle); return r; } int uv_os_uname(uv_utsname_t* buffer) { /* Implementation loosely based on https://github.com/gagern/gnulib/blob/master/lib/uname.c */ OSVERSIONINFOW os_info; SYSTEM_INFO system_info; HKEY registry_key; WCHAR product_name_w[256]; DWORD product_name_w_size; int version_size; int processor_level; int r; if (buffer == NULL) return UV_EINVAL; uv__once_init(); os_info.dwOSVersionInfoSize = sizeof(os_info); os_info.szCSDVersion[0] = L'\0'; /* Try calling RtlGetVersion(), and fall back to the deprecated GetVersionEx() if RtlGetVersion() is not available. */ if (pRtlGetVersion) { pRtlGetVersion(&os_info); } else { /* Silence GetVersionEx() deprecation warning. */ #ifdef _MSC_VER #pragma warning(suppress : 4996) #endif if (GetVersionExW(&os_info) == 0) { r = uv_translate_sys_error(GetLastError()); goto error; } } /* Populate the version field. */ version_size = 0; r = RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", 0, KEY_QUERY_VALUE, ®istry_key); if (r == ERROR_SUCCESS) { product_name_w_size = sizeof(product_name_w); r = RegGetValueW(registry_key, NULL, L"ProductName", RRF_RT_REG_SZ, NULL, (PVOID) product_name_w, &product_name_w_size); RegCloseKey(registry_key); if (r == ERROR_SUCCESS) { version_size = WideCharToMultiByte(CP_UTF8, 0, product_name_w, -1, buffer->version, sizeof(buffer->version), NULL, NULL); if (version_size == 0) { r = uv_translate_sys_error(GetLastError()); goto error; } } } /* Append service pack information to the version if present. 
*/ if (os_info.szCSDVersion[0] != L'\0') { if (version_size > 0) buffer->version[version_size - 1] = ' '; if (WideCharToMultiByte(CP_UTF8, 0, os_info.szCSDVersion, -1, buffer->version + version_size, sizeof(buffer->version) - version_size, NULL, NULL) == 0) { r = uv_translate_sys_error(GetLastError()); goto error; } } /* Populate the sysname field. */ #ifdef __MINGW32__ r = snprintf(buffer->sysname, sizeof(buffer->sysname), "MINGW32_NT-%u.%u", (unsigned int) os_info.dwMajorVersion, (unsigned int) os_info.dwMinorVersion); assert((size_t)r < sizeof(buffer->sysname)); #else uv__strscpy(buffer->sysname, "Windows_NT", sizeof(buffer->sysname)); #endif /* Populate the release field. */ r = snprintf(buffer->release, sizeof(buffer->release), "%d.%d.%d", (unsigned int) os_info.dwMajorVersion, (unsigned int) os_info.dwMinorVersion, (unsigned int) os_info.dwBuildNumber); assert((size_t)r < sizeof(buffer->release)); /* Populate the machine field. */ GetSystemInfo(&system_info); switch (system_info.wProcessorArchitecture) { case PROCESSOR_ARCHITECTURE_AMD64: uv__strscpy(buffer->machine, "x86_64", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_IA64: uv__strscpy(buffer->machine, "ia64", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_INTEL: uv__strscpy(buffer->machine, "i386", sizeof(buffer->machine)); if (system_info.wProcessorLevel > 3) { processor_level = system_info.wProcessorLevel < 6 ? system_info.wProcessorLevel : 6; buffer->machine[1] = '0' + processor_level; } break; case PROCESSOR_ARCHITECTURE_IA32_ON_WIN64: uv__strscpy(buffer->machine, "i686", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_MIPS: uv__strscpy(buffer->machine, "mips", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_ALPHA: case PROCESSOR_ARCHITECTURE_ALPHA64: uv__strscpy(buffer->machine, "alpha", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_PPC: uv__strscpy(buffer->machine, "powerpc", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_SHX: uv__strscpy(buffer->machine, "sh", sizeof(buffer->machine)); break; case PROCESSOR_ARCHITECTURE_ARM: uv__strscpy(buffer->machine, "arm", sizeof(buffer->machine)); break; default: uv__strscpy(buffer->machine, "unknown", sizeof(buffer->machine)); break; } return 0; error: buffer->sysname[0] = '\0'; buffer->release[0] = '\0'; buffer->version[0] = '\0'; buffer->machine[0] = '\0'; return r; } int uv_gettimeofday(uv_timeval64_t* tv) { /* Based on https://doxygen.postgresql.org/gettimeofday_8c_source.html */ const uint64_t epoch = (uint64_t) 116444736000000000ULL; FILETIME file_time; ULARGE_INTEGER ularge; if (tv == NULL) return UV_EINVAL; GetSystemTimeAsFileTime(&file_time); ularge.LowPart = file_time.dwLowDateTime; ularge.HighPart = file_time.dwHighDateTime; tv->tv_sec = (int64_t) ((ularge.QuadPart - epoch) / 10000000L); tv->tv_usec = (int32_t) (((ularge.QuadPart - epoch) % 10000000L) / 10); return 0; } int uv__random_rtlgenrandom(void* buf, size_t buflen) { if (buflen == 0) return 0; if (SystemFunction036(buf, buflen) == FALSE) return UV_EIO; return 0; } void uv_sleep(unsigned int msec) { Sleep(msec); } gevent-24.11.1/deps/libuv/src/win/winapi.c000066400000000000000000000121421471441230600202450ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include "uv.h" #include "internal.h" /* Ntdll function pointers */ sRtlGetVersion pRtlGetVersion; sRtlNtStatusToDosError pRtlNtStatusToDosError; sNtDeviceIoControlFile pNtDeviceIoControlFile; sNtQueryInformationFile pNtQueryInformationFile; sNtSetInformationFile pNtSetInformationFile; sNtQueryVolumeInformationFile pNtQueryVolumeInformationFile; sNtQueryDirectoryFile pNtQueryDirectoryFile; sNtQuerySystemInformation pNtQuerySystemInformation; sNtQueryInformationProcess pNtQueryInformationProcess; /* Kernel32 function pointers */ sGetQueuedCompletionStatusEx pGetQueuedCompletionStatusEx; /* Powrprof.dll function pointer */ sPowerRegisterSuspendResumeNotification pPowerRegisterSuspendResumeNotification; /* User32.dll function pointer */ sSetWinEventHook pSetWinEventHook; /* ws2_32.dll function pointer */ uv_sGetHostNameW pGetHostNameW; void uv__winapi_init(void) { HMODULE ntdll_module; HMODULE powrprof_module; HMODULE user32_module; HMODULE kernel32_module; HMODULE ws2_32_module; ntdll_module = GetModuleHandleA("ntdll.dll"); if (ntdll_module == NULL) { uv_fatal_error(GetLastError(), "GetModuleHandleA"); } pRtlGetVersion = (sRtlGetVersion) GetProcAddress(ntdll_module, "RtlGetVersion"); pRtlNtStatusToDosError = (sRtlNtStatusToDosError) GetProcAddress( ntdll_module, "RtlNtStatusToDosError"); if (pRtlNtStatusToDosError == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtDeviceIoControlFile = (sNtDeviceIoControlFile) GetProcAddress( ntdll_module, "NtDeviceIoControlFile"); if (pNtDeviceIoControlFile == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtQueryInformationFile = (sNtQueryInformationFile) GetProcAddress( ntdll_module, "NtQueryInformationFile"); if (pNtQueryInformationFile == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtSetInformationFile = (sNtSetInformationFile) GetProcAddress( ntdll_module, "NtSetInformationFile"); if (pNtSetInformationFile == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtQueryVolumeInformationFile = (sNtQueryVolumeInformationFile) GetProcAddress(ntdll_module, "NtQueryVolumeInformationFile"); if (pNtQueryVolumeInformationFile == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtQueryDirectoryFile = (sNtQueryDirectoryFile) GetProcAddress(ntdll_module, "NtQueryDirectoryFile"); if (pNtQueryVolumeInformationFile == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtQuerySystemInformation = (sNtQuerySystemInformation) GetProcAddress( 
ntdll_module, "NtQuerySystemInformation"); if (pNtQuerySystemInformation == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } pNtQueryInformationProcess = (sNtQueryInformationProcess) GetProcAddress( ntdll_module, "NtQueryInformationProcess"); if (pNtQueryInformationProcess == NULL) { uv_fatal_error(GetLastError(), "GetProcAddress"); } kernel32_module = GetModuleHandleA("kernel32.dll"); if (kernel32_module == NULL) { uv_fatal_error(GetLastError(), "GetModuleHandleA"); } pGetQueuedCompletionStatusEx = (sGetQueuedCompletionStatusEx) GetProcAddress( kernel32_module, "GetQueuedCompletionStatusEx"); powrprof_module = LoadLibraryExA("powrprof.dll", NULL, LOAD_LIBRARY_SEARCH_SYSTEM32); if (powrprof_module != NULL) { pPowerRegisterSuspendResumeNotification = (sPowerRegisterSuspendResumeNotification) GetProcAddress(powrprof_module, "PowerRegisterSuspendResumeNotification"); } user32_module = GetModuleHandleA("user32.dll"); if (user32_module != NULL) { pSetWinEventHook = (sSetWinEventHook) GetProcAddress(user32_module, "SetWinEventHook"); } ws2_32_module = GetModuleHandleA("ws2_32.dll"); if (ws2_32_module != NULL) { pGetHostNameW = (uv_sGetHostNameW) GetProcAddress( ws2_32_module, "GetHostNameW"); } } gevent-24.11.1/deps/libuv/src/win/winapi.h000066400000000000000000003725361471441230600202720ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef UV_WIN_WINAPI_H_ #define UV_WIN_WINAPI_H_ #include /* * Ntdll headers */ #ifndef STATUS_SEVERITY_SUCCESS # define STATUS_SEVERITY_SUCCESS 0x0 #endif #ifndef STATUS_SEVERITY_INFORMATIONAL # define STATUS_SEVERITY_INFORMATIONAL 0x1 #endif #ifndef STATUS_SEVERITY_WARNING # define STATUS_SEVERITY_WARNING 0x2 #endif #ifndef STATUS_SEVERITY_ERROR # define STATUS_SEVERITY_ERROR 0x3 #endif #ifndef FACILITY_NTWIN32 # define FACILITY_NTWIN32 0x7 #endif #ifndef NT_SUCCESS # define NT_SUCCESS(status) (((NTSTATUS) (status)) >= 0) #endif #ifndef NT_INFORMATION # define NT_INFORMATION(status) ((((ULONG) (status)) >> 30) == 1) #endif #ifndef NT_WARNING # define NT_WARNING(status) ((((ULONG) (status)) >> 30) == 2) #endif #ifndef NT_ERROR # define NT_ERROR(status) ((((ULONG) (status)) >> 30) == 3) #endif #ifndef STATUS_SUCCESS # define STATUS_SUCCESS ((NTSTATUS) 0x00000000L) #endif #ifndef STATUS_WAIT_0 # define STATUS_WAIT_0 ((NTSTATUS) 0x00000000L) #endif #ifndef STATUS_WAIT_1 # define STATUS_WAIT_1 ((NTSTATUS) 0x00000001L) #endif #ifndef STATUS_WAIT_2 # define STATUS_WAIT_2 ((NTSTATUS) 0x00000002L) #endif #ifndef STATUS_WAIT_3 # define STATUS_WAIT_3 ((NTSTATUS) 0x00000003L) #endif #ifndef STATUS_WAIT_63 # define STATUS_WAIT_63 ((NTSTATUS) 0x0000003FL) #endif #ifndef STATUS_ABANDONED # define STATUS_ABANDONED ((NTSTATUS) 0x00000080L) #endif #ifndef STATUS_ABANDONED_WAIT_0 # define STATUS_ABANDONED_WAIT_0 ((NTSTATUS) 0x00000080L) #endif #ifndef STATUS_ABANDONED_WAIT_63 # define STATUS_ABANDONED_WAIT_63 ((NTSTATUS) 0x000000BFL) #endif #ifndef STATUS_USER_APC # define STATUS_USER_APC ((NTSTATUS) 0x000000C0L) #endif #ifndef STATUS_KERNEL_APC # define STATUS_KERNEL_APC ((NTSTATUS) 0x00000100L) #endif #ifndef STATUS_ALERTED # define STATUS_ALERTED ((NTSTATUS) 0x00000101L) #endif #ifndef STATUS_TIMEOUT # define STATUS_TIMEOUT ((NTSTATUS) 0x00000102L) #endif #ifndef STATUS_PENDING # define STATUS_PENDING ((NTSTATUS) 0x00000103L) #endif #ifndef STATUS_REPARSE # define STATUS_REPARSE ((NTSTATUS) 0x00000104L) #endif #ifndef STATUS_MORE_ENTRIES # define STATUS_MORE_ENTRIES ((NTSTATUS) 0x00000105L) #endif #ifndef STATUS_NOT_ALL_ASSIGNED # define STATUS_NOT_ALL_ASSIGNED ((NTSTATUS) 0x00000106L) #endif #ifndef STATUS_SOME_NOT_MAPPED # define STATUS_SOME_NOT_MAPPED ((NTSTATUS) 0x00000107L) #endif #ifndef STATUS_OPLOCK_BREAK_IN_PROGRESS # define STATUS_OPLOCK_BREAK_IN_PROGRESS ((NTSTATUS) 0x00000108L) #endif #ifndef STATUS_VOLUME_MOUNTED # define STATUS_VOLUME_MOUNTED ((NTSTATUS) 0x00000109L) #endif #ifndef STATUS_RXACT_COMMITTED # define STATUS_RXACT_COMMITTED ((NTSTATUS) 0x0000010AL) #endif #ifndef STATUS_NOTIFY_CLEANUP # define STATUS_NOTIFY_CLEANUP ((NTSTATUS) 0x0000010BL) #endif #ifndef STATUS_NOTIFY_ENUM_DIR # define STATUS_NOTIFY_ENUM_DIR ((NTSTATUS) 0x0000010CL) #endif #ifndef STATUS_NO_QUOTAS_FOR_ACCOUNT # define STATUS_NO_QUOTAS_FOR_ACCOUNT ((NTSTATUS) 0x0000010DL) #endif #ifndef STATUS_PRIMARY_TRANSPORT_CONNECT_FAILED # define STATUS_PRIMARY_TRANSPORT_CONNECT_FAILED ((NTSTATUS) 0x0000010EL) #endif #ifndef STATUS_PAGE_FAULT_TRANSITION # define STATUS_PAGE_FAULT_TRANSITION ((NTSTATUS) 0x00000110L) #endif #ifndef STATUS_PAGE_FAULT_DEMAND_ZERO # define STATUS_PAGE_FAULT_DEMAND_ZERO ((NTSTATUS) 0x00000111L) #endif #ifndef STATUS_PAGE_FAULT_COPY_ON_WRITE # define STATUS_PAGE_FAULT_COPY_ON_WRITE ((NTSTATUS) 0x00000112L) #endif #ifndef STATUS_PAGE_FAULT_GUARD_PAGE # define STATUS_PAGE_FAULT_GUARD_PAGE ((NTSTATUS) 0x00000113L) #endif #ifndef STATUS_PAGE_FAULT_PAGING_FILE # define STATUS_PAGE_FAULT_PAGING_FILE 
((NTSTATUS) 0x00000114L) #endif #ifndef STATUS_CACHE_PAGE_LOCKED # define STATUS_CACHE_PAGE_LOCKED ((NTSTATUS) 0x00000115L) #endif #ifndef STATUS_CRASH_DUMP # define STATUS_CRASH_DUMP ((NTSTATUS) 0x00000116L) #endif #ifndef STATUS_BUFFER_ALL_ZEROS # define STATUS_BUFFER_ALL_ZEROS ((NTSTATUS) 0x00000117L) #endif #ifndef STATUS_REPARSE_OBJECT # define STATUS_REPARSE_OBJECT ((NTSTATUS) 0x00000118L) #endif #ifndef STATUS_RESOURCE_REQUIREMENTS_CHANGED # define STATUS_RESOURCE_REQUIREMENTS_CHANGED ((NTSTATUS) 0x00000119L) #endif #ifndef STATUS_TRANSLATION_COMPLETE # define STATUS_TRANSLATION_COMPLETE ((NTSTATUS) 0x00000120L) #endif #ifndef STATUS_DS_MEMBERSHIP_EVALUATED_LOCALLY # define STATUS_DS_MEMBERSHIP_EVALUATED_LOCALLY ((NTSTATUS) 0x00000121L) #endif #ifndef STATUS_NOTHING_TO_TERMINATE # define STATUS_NOTHING_TO_TERMINATE ((NTSTATUS) 0x00000122L) #endif #ifndef STATUS_PROCESS_NOT_IN_JOB # define STATUS_PROCESS_NOT_IN_JOB ((NTSTATUS) 0x00000123L) #endif #ifndef STATUS_PROCESS_IN_JOB # define STATUS_PROCESS_IN_JOB ((NTSTATUS) 0x00000124L) #endif #ifndef STATUS_VOLSNAP_HIBERNATE_READY # define STATUS_VOLSNAP_HIBERNATE_READY ((NTSTATUS) 0x00000125L) #endif #ifndef STATUS_FSFILTER_OP_COMPLETED_SUCCESSFULLY # define STATUS_FSFILTER_OP_COMPLETED_SUCCESSFULLY ((NTSTATUS) 0x00000126L) #endif #ifndef STATUS_INTERRUPT_VECTOR_ALREADY_CONNECTED # define STATUS_INTERRUPT_VECTOR_ALREADY_CONNECTED ((NTSTATUS) 0x00000127L) #endif #ifndef STATUS_INTERRUPT_STILL_CONNECTED # define STATUS_INTERRUPT_STILL_CONNECTED ((NTSTATUS) 0x00000128L) #endif #ifndef STATUS_PROCESS_CLONED # define STATUS_PROCESS_CLONED ((NTSTATUS) 0x00000129L) #endif #ifndef STATUS_FILE_LOCKED_WITH_ONLY_READERS # define STATUS_FILE_LOCKED_WITH_ONLY_READERS ((NTSTATUS) 0x0000012AL) #endif #ifndef STATUS_FILE_LOCKED_WITH_WRITERS # define STATUS_FILE_LOCKED_WITH_WRITERS ((NTSTATUS) 0x0000012BL) #endif #ifndef STATUS_RESOURCEMANAGER_READ_ONLY # define STATUS_RESOURCEMANAGER_READ_ONLY ((NTSTATUS) 0x00000202L) #endif #ifndef STATUS_RING_PREVIOUSLY_EMPTY # define STATUS_RING_PREVIOUSLY_EMPTY ((NTSTATUS) 0x00000210L) #endif #ifndef STATUS_RING_PREVIOUSLY_FULL # define STATUS_RING_PREVIOUSLY_FULL ((NTSTATUS) 0x00000211L) #endif #ifndef STATUS_RING_PREVIOUSLY_ABOVE_QUOTA # define STATUS_RING_PREVIOUSLY_ABOVE_QUOTA ((NTSTATUS) 0x00000212L) #endif #ifndef STATUS_RING_NEWLY_EMPTY # define STATUS_RING_NEWLY_EMPTY ((NTSTATUS) 0x00000213L) #endif #ifndef STATUS_RING_SIGNAL_OPPOSITE_ENDPOINT # define STATUS_RING_SIGNAL_OPPOSITE_ENDPOINT ((NTSTATUS) 0x00000214L) #endif #ifndef STATUS_OPLOCK_SWITCHED_TO_NEW_HANDLE # define STATUS_OPLOCK_SWITCHED_TO_NEW_HANDLE ((NTSTATUS) 0x00000215L) #endif #ifndef STATUS_OPLOCK_HANDLE_CLOSED # define STATUS_OPLOCK_HANDLE_CLOSED ((NTSTATUS) 0x00000216L) #endif #ifndef STATUS_WAIT_FOR_OPLOCK # define STATUS_WAIT_FOR_OPLOCK ((NTSTATUS) 0x00000367L) #endif #ifndef STATUS_OBJECT_NAME_EXISTS # define STATUS_OBJECT_NAME_EXISTS ((NTSTATUS) 0x40000000L) #endif #ifndef STATUS_THREAD_WAS_SUSPENDED # define STATUS_THREAD_WAS_SUSPENDED ((NTSTATUS) 0x40000001L) #endif #ifndef STATUS_WORKING_SET_LIMIT_RANGE # define STATUS_WORKING_SET_LIMIT_RANGE ((NTSTATUS) 0x40000002L) #endif #ifndef STATUS_IMAGE_NOT_AT_BASE # define STATUS_IMAGE_NOT_AT_BASE ((NTSTATUS) 0x40000003L) #endif #ifndef STATUS_RXACT_STATE_CREATED # define STATUS_RXACT_STATE_CREATED ((NTSTATUS) 0x40000004L) #endif #ifndef STATUS_SEGMENT_NOTIFICATION # define STATUS_SEGMENT_NOTIFICATION ((NTSTATUS) 0x40000005L) #endif #ifndef STATUS_LOCAL_USER_SESSION_KEY # define 
STATUS_LOCAL_USER_SESSION_KEY ((NTSTATUS) 0x40000006L) #endif #ifndef STATUS_BAD_CURRENT_DIRECTORY # define STATUS_BAD_CURRENT_DIRECTORY ((NTSTATUS) 0x40000007L) #endif #ifndef STATUS_SERIAL_MORE_WRITES # define STATUS_SERIAL_MORE_WRITES ((NTSTATUS) 0x40000008L) #endif #ifndef STATUS_REGISTRY_RECOVERED # define STATUS_REGISTRY_RECOVERED ((NTSTATUS) 0x40000009L) #endif #ifndef STATUS_FT_READ_RECOVERY_FROM_BACKUP # define STATUS_FT_READ_RECOVERY_FROM_BACKUP ((NTSTATUS) 0x4000000AL) #endif #ifndef STATUS_FT_WRITE_RECOVERY # define STATUS_FT_WRITE_RECOVERY ((NTSTATUS) 0x4000000BL) #endif #ifndef STATUS_SERIAL_COUNTER_TIMEOUT # define STATUS_SERIAL_COUNTER_TIMEOUT ((NTSTATUS) 0x4000000CL) #endif #ifndef STATUS_NULL_LM_PASSWORD # define STATUS_NULL_LM_PASSWORD ((NTSTATUS) 0x4000000DL) #endif #ifndef STATUS_IMAGE_MACHINE_TYPE_MISMATCH # define STATUS_IMAGE_MACHINE_TYPE_MISMATCH ((NTSTATUS) 0x4000000EL) #endif #ifndef STATUS_RECEIVE_PARTIAL # define STATUS_RECEIVE_PARTIAL ((NTSTATUS) 0x4000000FL) #endif #ifndef STATUS_RECEIVE_EXPEDITED # define STATUS_RECEIVE_EXPEDITED ((NTSTATUS) 0x40000010L) #endif #ifndef STATUS_RECEIVE_PARTIAL_EXPEDITED # define STATUS_RECEIVE_PARTIAL_EXPEDITED ((NTSTATUS) 0x40000011L) #endif #ifndef STATUS_EVENT_DONE # define STATUS_EVENT_DONE ((NTSTATUS) 0x40000012L) #endif #ifndef STATUS_EVENT_PENDING # define STATUS_EVENT_PENDING ((NTSTATUS) 0x40000013L) #endif #ifndef STATUS_CHECKING_FILE_SYSTEM # define STATUS_CHECKING_FILE_SYSTEM ((NTSTATUS) 0x40000014L) #endif #ifndef STATUS_FATAL_APP_EXIT # define STATUS_FATAL_APP_EXIT ((NTSTATUS) 0x40000015L) #endif #ifndef STATUS_PREDEFINED_HANDLE # define STATUS_PREDEFINED_HANDLE ((NTSTATUS) 0x40000016L) #endif #ifndef STATUS_WAS_UNLOCKED # define STATUS_WAS_UNLOCKED ((NTSTATUS) 0x40000017L) #endif #ifndef STATUS_SERVICE_NOTIFICATION # define STATUS_SERVICE_NOTIFICATION ((NTSTATUS) 0x40000018L) #endif #ifndef STATUS_WAS_LOCKED # define STATUS_WAS_LOCKED ((NTSTATUS) 0x40000019L) #endif #ifndef STATUS_LOG_HARD_ERROR # define STATUS_LOG_HARD_ERROR ((NTSTATUS) 0x4000001AL) #endif #ifndef STATUS_ALREADY_WIN32 # define STATUS_ALREADY_WIN32 ((NTSTATUS) 0x4000001BL) #endif #ifndef STATUS_WX86_UNSIMULATE # define STATUS_WX86_UNSIMULATE ((NTSTATUS) 0x4000001CL) #endif #ifndef STATUS_WX86_CONTINUE # define STATUS_WX86_CONTINUE ((NTSTATUS) 0x4000001DL) #endif #ifndef STATUS_WX86_SINGLE_STEP # define STATUS_WX86_SINGLE_STEP ((NTSTATUS) 0x4000001EL) #endif #ifndef STATUS_WX86_BREAKPOINT # define STATUS_WX86_BREAKPOINT ((NTSTATUS) 0x4000001FL) #endif #ifndef STATUS_WX86_EXCEPTION_CONTINUE # define STATUS_WX86_EXCEPTION_CONTINUE ((NTSTATUS) 0x40000020L) #endif #ifndef STATUS_WX86_EXCEPTION_LASTCHANCE # define STATUS_WX86_EXCEPTION_LASTCHANCE ((NTSTATUS) 0x40000021L) #endif #ifndef STATUS_WX86_EXCEPTION_CHAIN # define STATUS_WX86_EXCEPTION_CHAIN ((NTSTATUS) 0x40000022L) #endif #ifndef STATUS_IMAGE_MACHINE_TYPE_MISMATCH_EXE # define STATUS_IMAGE_MACHINE_TYPE_MISMATCH_EXE ((NTSTATUS) 0x40000023L) #endif #ifndef STATUS_NO_YIELD_PERFORMED # define STATUS_NO_YIELD_PERFORMED ((NTSTATUS) 0x40000024L) #endif #ifndef STATUS_TIMER_RESUME_IGNORED # define STATUS_TIMER_RESUME_IGNORED ((NTSTATUS) 0x40000025L) #endif #ifndef STATUS_ARBITRATION_UNHANDLED # define STATUS_ARBITRATION_UNHANDLED ((NTSTATUS) 0x40000026L) #endif #ifndef STATUS_CARDBUS_NOT_SUPPORTED # define STATUS_CARDBUS_NOT_SUPPORTED ((NTSTATUS) 0x40000027L) #endif #ifndef STATUS_WX86_CREATEWX86TIB # define STATUS_WX86_CREATEWX86TIB ((NTSTATUS) 0x40000028L) #endif #ifndef STATUS_MP_PROCESSOR_MISMATCH 
# define STATUS_MP_PROCESSOR_MISMATCH ((NTSTATUS) 0x40000029L) #endif #ifndef STATUS_HIBERNATED # define STATUS_HIBERNATED ((NTSTATUS) 0x4000002AL) #endif #ifndef STATUS_RESUME_HIBERNATION # define STATUS_RESUME_HIBERNATION ((NTSTATUS) 0x4000002BL) #endif #ifndef STATUS_FIRMWARE_UPDATED # define STATUS_FIRMWARE_UPDATED ((NTSTATUS) 0x4000002CL) #endif #ifndef STATUS_DRIVERS_LEAKING_LOCKED_PAGES # define STATUS_DRIVERS_LEAKING_LOCKED_PAGES ((NTSTATUS) 0x4000002DL) #endif #ifndef STATUS_MESSAGE_RETRIEVED # define STATUS_MESSAGE_RETRIEVED ((NTSTATUS) 0x4000002EL) #endif #ifndef STATUS_SYSTEM_POWERSTATE_TRANSITION # define STATUS_SYSTEM_POWERSTATE_TRANSITION ((NTSTATUS) 0x4000002FL) #endif #ifndef STATUS_ALPC_CHECK_COMPLETION_LIST # define STATUS_ALPC_CHECK_COMPLETION_LIST ((NTSTATUS) 0x40000030L) #endif #ifndef STATUS_SYSTEM_POWERSTATE_COMPLEX_TRANSITION # define STATUS_SYSTEM_POWERSTATE_COMPLEX_TRANSITION ((NTSTATUS) 0x40000031L) #endif #ifndef STATUS_ACCESS_AUDIT_BY_POLICY # define STATUS_ACCESS_AUDIT_BY_POLICY ((NTSTATUS) 0x40000032L) #endif #ifndef STATUS_ABANDON_HIBERFILE # define STATUS_ABANDON_HIBERFILE ((NTSTATUS) 0x40000033L) #endif #ifndef STATUS_BIZRULES_NOT_ENABLED # define STATUS_BIZRULES_NOT_ENABLED ((NTSTATUS) 0x40000034L) #endif #ifndef STATUS_GUARD_PAGE_VIOLATION # define STATUS_GUARD_PAGE_VIOLATION ((NTSTATUS) 0x80000001L) #endif #ifndef STATUS_DATATYPE_MISALIGNMENT # define STATUS_DATATYPE_MISALIGNMENT ((NTSTATUS) 0x80000002L) #endif #ifndef STATUS_BREAKPOINT # define STATUS_BREAKPOINT ((NTSTATUS) 0x80000003L) #endif #ifndef STATUS_SINGLE_STEP # define STATUS_SINGLE_STEP ((NTSTATUS) 0x80000004L) #endif #ifndef STATUS_BUFFER_OVERFLOW # define STATUS_BUFFER_OVERFLOW ((NTSTATUS) 0x80000005L) #endif #ifndef STATUS_NO_MORE_FILES # define STATUS_NO_MORE_FILES ((NTSTATUS) 0x80000006L) #endif #ifndef STATUS_WAKE_SYSTEM_DEBUGGER # define STATUS_WAKE_SYSTEM_DEBUGGER ((NTSTATUS) 0x80000007L) #endif #ifndef STATUS_HANDLES_CLOSED # define STATUS_HANDLES_CLOSED ((NTSTATUS) 0x8000000AL) #endif #ifndef STATUS_NO_INHERITANCE # define STATUS_NO_INHERITANCE ((NTSTATUS) 0x8000000BL) #endif #ifndef STATUS_GUID_SUBSTITUTION_MADE # define STATUS_GUID_SUBSTITUTION_MADE ((NTSTATUS) 0x8000000CL) #endif #ifndef STATUS_PARTIAL_COPY # define STATUS_PARTIAL_COPY ((NTSTATUS) 0x8000000DL) #endif #ifndef STATUS_DEVICE_PAPER_EMPTY # define STATUS_DEVICE_PAPER_EMPTY ((NTSTATUS) 0x8000000EL) #endif #ifndef STATUS_DEVICE_POWERED_OFF # define STATUS_DEVICE_POWERED_OFF ((NTSTATUS) 0x8000000FL) #endif #ifndef STATUS_DEVICE_OFF_LINE # define STATUS_DEVICE_OFF_LINE ((NTSTATUS) 0x80000010L) #endif #ifndef STATUS_DEVICE_BUSY # define STATUS_DEVICE_BUSY ((NTSTATUS) 0x80000011L) #endif #ifndef STATUS_NO_MORE_EAS # define STATUS_NO_MORE_EAS ((NTSTATUS) 0x80000012L) #endif #ifndef STATUS_INVALID_EA_NAME # define STATUS_INVALID_EA_NAME ((NTSTATUS) 0x80000013L) #endif #ifndef STATUS_EA_LIST_INCONSISTENT # define STATUS_EA_LIST_INCONSISTENT ((NTSTATUS) 0x80000014L) #endif #ifndef STATUS_INVALID_EA_FLAG # define STATUS_INVALID_EA_FLAG ((NTSTATUS) 0x80000015L) #endif #ifndef STATUS_VERIFY_REQUIRED # define STATUS_VERIFY_REQUIRED ((NTSTATUS) 0x80000016L) #endif #ifndef STATUS_EXTRANEOUS_INFORMATION # define STATUS_EXTRANEOUS_INFORMATION ((NTSTATUS) 0x80000017L) #endif #ifndef STATUS_RXACT_COMMIT_NECESSARY # define STATUS_RXACT_COMMIT_NECESSARY ((NTSTATUS) 0x80000018L) #endif #ifndef STATUS_NO_MORE_ENTRIES # define STATUS_NO_MORE_ENTRIES ((NTSTATUS) 0x8000001AL) #endif #ifndef STATUS_FILEMARK_DETECTED # define 
STATUS_FILEMARK_DETECTED ((NTSTATUS) 0x8000001BL) #endif #ifndef STATUS_MEDIA_CHANGED # define STATUS_MEDIA_CHANGED ((NTSTATUS) 0x8000001CL) #endif #ifndef STATUS_BUS_RESET # define STATUS_BUS_RESET ((NTSTATUS) 0x8000001DL) #endif #ifndef STATUS_END_OF_MEDIA # define STATUS_END_OF_MEDIA ((NTSTATUS) 0x8000001EL) #endif #ifndef STATUS_BEGINNING_OF_MEDIA # define STATUS_BEGINNING_OF_MEDIA ((NTSTATUS) 0x8000001FL) #endif #ifndef STATUS_MEDIA_CHECK # define STATUS_MEDIA_CHECK ((NTSTATUS) 0x80000020L) #endif #ifndef STATUS_SETMARK_DETECTED # define STATUS_SETMARK_DETECTED ((NTSTATUS) 0x80000021L) #endif #ifndef STATUS_NO_DATA_DETECTED # define STATUS_NO_DATA_DETECTED ((NTSTATUS) 0x80000022L) #endif #ifndef STATUS_REDIRECTOR_HAS_OPEN_HANDLES # define STATUS_REDIRECTOR_HAS_OPEN_HANDLES ((NTSTATUS) 0x80000023L) #endif #ifndef STATUS_SERVER_HAS_OPEN_HANDLES # define STATUS_SERVER_HAS_OPEN_HANDLES ((NTSTATUS) 0x80000024L) #endif #ifndef STATUS_ALREADY_DISCONNECTED # define STATUS_ALREADY_DISCONNECTED ((NTSTATUS) 0x80000025L) #endif #ifndef STATUS_LONGJUMP # define STATUS_LONGJUMP ((NTSTATUS) 0x80000026L) #endif #ifndef STATUS_CLEANER_CARTRIDGE_INSTALLED # define STATUS_CLEANER_CARTRIDGE_INSTALLED ((NTSTATUS) 0x80000027L) #endif #ifndef STATUS_PLUGPLAY_QUERY_VETOED # define STATUS_PLUGPLAY_QUERY_VETOED ((NTSTATUS) 0x80000028L) #endif #ifndef STATUS_UNWIND_CONSOLIDATE # define STATUS_UNWIND_CONSOLIDATE ((NTSTATUS) 0x80000029L) #endif #ifndef STATUS_REGISTRY_HIVE_RECOVERED # define STATUS_REGISTRY_HIVE_RECOVERED ((NTSTATUS) 0x8000002AL) #endif #ifndef STATUS_DLL_MIGHT_BE_INSECURE # define STATUS_DLL_MIGHT_BE_INSECURE ((NTSTATUS) 0x8000002BL) #endif #ifndef STATUS_DLL_MIGHT_BE_INCOMPATIBLE # define STATUS_DLL_MIGHT_BE_INCOMPATIBLE ((NTSTATUS) 0x8000002CL) #endif #ifndef STATUS_STOPPED_ON_SYMLINK # define STATUS_STOPPED_ON_SYMLINK ((NTSTATUS) 0x8000002DL) #endif #ifndef STATUS_CANNOT_GRANT_REQUESTED_OPLOCK # define STATUS_CANNOT_GRANT_REQUESTED_OPLOCK ((NTSTATUS) 0x8000002EL) #endif #ifndef STATUS_NO_ACE_CONDITION # define STATUS_NO_ACE_CONDITION ((NTSTATUS) 0x8000002FL) #endif #ifndef STATUS_UNSUCCESSFUL # define STATUS_UNSUCCESSFUL ((NTSTATUS) 0xC0000001L) #endif #ifndef STATUS_NOT_IMPLEMENTED # define STATUS_NOT_IMPLEMENTED ((NTSTATUS) 0xC0000002L) #endif #ifndef STATUS_INVALID_INFO_CLASS # define STATUS_INVALID_INFO_CLASS ((NTSTATUS) 0xC0000003L) #endif #ifndef STATUS_INFO_LENGTH_MISMATCH # define STATUS_INFO_LENGTH_MISMATCH ((NTSTATUS) 0xC0000004L) #endif #ifndef STATUS_ACCESS_VIOLATION # define STATUS_ACCESS_VIOLATION ((NTSTATUS) 0xC0000005L) #endif #ifndef STATUS_IN_PAGE_ERROR # define STATUS_IN_PAGE_ERROR ((NTSTATUS) 0xC0000006L) #endif #ifndef STATUS_PAGEFILE_QUOTA # define STATUS_PAGEFILE_QUOTA ((NTSTATUS) 0xC0000007L) #endif #ifndef STATUS_INVALID_HANDLE # define STATUS_INVALID_HANDLE ((NTSTATUS) 0xC0000008L) #endif #ifndef STATUS_BAD_INITIAL_STACK # define STATUS_BAD_INITIAL_STACK ((NTSTATUS) 0xC0000009L) #endif #ifndef STATUS_BAD_INITIAL_PC # define STATUS_BAD_INITIAL_PC ((NTSTATUS) 0xC000000AL) #endif #ifndef STATUS_INVALID_CID # define STATUS_INVALID_CID ((NTSTATUS) 0xC000000BL) #endif #ifndef STATUS_TIMER_NOT_CANCELED # define STATUS_TIMER_NOT_CANCELED ((NTSTATUS) 0xC000000CL) #endif #ifndef STATUS_INVALID_PARAMETER # define STATUS_INVALID_PARAMETER ((NTSTATUS) 0xC000000DL) #endif #ifndef STATUS_NO_SUCH_DEVICE # define STATUS_NO_SUCH_DEVICE ((NTSTATUS) 0xC000000EL) #endif #ifndef STATUS_NO_SUCH_FILE # define STATUS_NO_SUCH_FILE ((NTSTATUS) 0xC000000FL) #endif #ifndef 
STATUS_INVALID_DEVICE_REQUEST # define STATUS_INVALID_DEVICE_REQUEST ((NTSTATUS) 0xC0000010L) #endif #ifndef STATUS_END_OF_FILE # define STATUS_END_OF_FILE ((NTSTATUS) 0xC0000011L) #endif #ifndef STATUS_WRONG_VOLUME # define STATUS_WRONG_VOLUME ((NTSTATUS) 0xC0000012L) #endif #ifndef STATUS_NO_MEDIA_IN_DEVICE # define STATUS_NO_MEDIA_IN_DEVICE ((NTSTATUS) 0xC0000013L) #endif #ifndef STATUS_UNRECOGNIZED_MEDIA # define STATUS_UNRECOGNIZED_MEDIA ((NTSTATUS) 0xC0000014L) #endif #ifndef STATUS_NONEXISTENT_SECTOR # define STATUS_NONEXISTENT_SECTOR ((NTSTATUS) 0xC0000015L) #endif #ifndef STATUS_MORE_PROCESSING_REQUIRED # define STATUS_MORE_PROCESSING_REQUIRED ((NTSTATUS) 0xC0000016L) #endif #ifndef STATUS_NO_MEMORY # define STATUS_NO_MEMORY ((NTSTATUS) 0xC0000017L) #endif #ifndef STATUS_CONFLICTING_ADDRESSES # define STATUS_CONFLICTING_ADDRESSES ((NTSTATUS) 0xC0000018L) #endif #ifndef STATUS_NOT_MAPPED_VIEW # define STATUS_NOT_MAPPED_VIEW ((NTSTATUS) 0xC0000019L) #endif #ifndef STATUS_UNABLE_TO_FREE_VM # define STATUS_UNABLE_TO_FREE_VM ((NTSTATUS) 0xC000001AL) #endif #ifndef STATUS_UNABLE_TO_DELETE_SECTION # define STATUS_UNABLE_TO_DELETE_SECTION ((NTSTATUS) 0xC000001BL) #endif #ifndef STATUS_INVALID_SYSTEM_SERVICE # define STATUS_INVALID_SYSTEM_SERVICE ((NTSTATUS) 0xC000001CL) #endif #ifndef STATUS_ILLEGAL_INSTRUCTION # define STATUS_ILLEGAL_INSTRUCTION ((NTSTATUS) 0xC000001DL) #endif #ifndef STATUS_INVALID_LOCK_SEQUENCE # define STATUS_INVALID_LOCK_SEQUENCE ((NTSTATUS) 0xC000001EL) #endif #ifndef STATUS_INVALID_VIEW_SIZE # define STATUS_INVALID_VIEW_SIZE ((NTSTATUS) 0xC000001FL) #endif #ifndef STATUS_INVALID_FILE_FOR_SECTION # define STATUS_INVALID_FILE_FOR_SECTION ((NTSTATUS) 0xC0000020L) #endif #ifndef STATUS_ALREADY_COMMITTED # define STATUS_ALREADY_COMMITTED ((NTSTATUS) 0xC0000021L) #endif #ifndef STATUS_ACCESS_DENIED # define STATUS_ACCESS_DENIED ((NTSTATUS) 0xC0000022L) #endif #ifndef STATUS_BUFFER_TOO_SMALL # define STATUS_BUFFER_TOO_SMALL ((NTSTATUS) 0xC0000023L) #endif #ifndef STATUS_OBJECT_TYPE_MISMATCH # define STATUS_OBJECT_TYPE_MISMATCH ((NTSTATUS) 0xC0000024L) #endif #ifndef STATUS_NONCONTINUABLE_EXCEPTION # define STATUS_NONCONTINUABLE_EXCEPTION ((NTSTATUS) 0xC0000025L) #endif #ifndef STATUS_INVALID_DISPOSITION # define STATUS_INVALID_DISPOSITION ((NTSTATUS) 0xC0000026L) #endif #ifndef STATUS_UNWIND # define STATUS_UNWIND ((NTSTATUS) 0xC0000027L) #endif #ifndef STATUS_BAD_STACK # define STATUS_BAD_STACK ((NTSTATUS) 0xC0000028L) #endif #ifndef STATUS_INVALID_UNWIND_TARGET # define STATUS_INVALID_UNWIND_TARGET ((NTSTATUS) 0xC0000029L) #endif #ifndef STATUS_NOT_LOCKED # define STATUS_NOT_LOCKED ((NTSTATUS) 0xC000002AL) #endif #ifndef STATUS_PARITY_ERROR # define STATUS_PARITY_ERROR ((NTSTATUS) 0xC000002BL) #endif #ifndef STATUS_UNABLE_TO_DECOMMIT_VM # define STATUS_UNABLE_TO_DECOMMIT_VM ((NTSTATUS) 0xC000002CL) #endif #ifndef STATUS_NOT_COMMITTED # define STATUS_NOT_COMMITTED ((NTSTATUS) 0xC000002DL) #endif #ifndef STATUS_INVALID_PORT_ATTRIBUTES # define STATUS_INVALID_PORT_ATTRIBUTES ((NTSTATUS) 0xC000002EL) #endif #ifndef STATUS_PORT_MESSAGE_TOO_LONG # define STATUS_PORT_MESSAGE_TOO_LONG ((NTSTATUS) 0xC000002FL) #endif #ifndef STATUS_INVALID_PARAMETER_MIX # define STATUS_INVALID_PARAMETER_MIX ((NTSTATUS) 0xC0000030L) #endif #ifndef STATUS_INVALID_QUOTA_LOWER # define STATUS_INVALID_QUOTA_LOWER ((NTSTATUS) 0xC0000031L) #endif #ifndef STATUS_DISK_CORRUPT_ERROR # define STATUS_DISK_CORRUPT_ERROR ((NTSTATUS) 0xC0000032L) #endif #ifndef STATUS_OBJECT_NAME_INVALID # define 
STATUS_OBJECT_NAME_INVALID ((NTSTATUS) 0xC0000033L) #endif #ifndef STATUS_OBJECT_NAME_NOT_FOUND # define STATUS_OBJECT_NAME_NOT_FOUND ((NTSTATUS) 0xC0000034L) #endif #ifndef STATUS_OBJECT_NAME_COLLISION # define STATUS_OBJECT_NAME_COLLISION ((NTSTATUS) 0xC0000035L) #endif #ifndef STATUS_PORT_DISCONNECTED # define STATUS_PORT_DISCONNECTED ((NTSTATUS) 0xC0000037L) #endif #ifndef STATUS_DEVICE_ALREADY_ATTACHED # define STATUS_DEVICE_ALREADY_ATTACHED ((NTSTATUS) 0xC0000038L) #endif #ifndef STATUS_OBJECT_PATH_INVALID # define STATUS_OBJECT_PATH_INVALID ((NTSTATUS) 0xC0000039L) #endif #ifndef STATUS_OBJECT_PATH_NOT_FOUND # define STATUS_OBJECT_PATH_NOT_FOUND ((NTSTATUS) 0xC000003AL) #endif #ifndef STATUS_OBJECT_PATH_SYNTAX_BAD # define STATUS_OBJECT_PATH_SYNTAX_BAD ((NTSTATUS) 0xC000003BL) #endif #ifndef STATUS_DATA_OVERRUN # define STATUS_DATA_OVERRUN ((NTSTATUS) 0xC000003CL) #endif #ifndef STATUS_DATA_LATE_ERROR # define STATUS_DATA_LATE_ERROR ((NTSTATUS) 0xC000003DL) #endif #ifndef STATUS_DATA_ERROR # define STATUS_DATA_ERROR ((NTSTATUS) 0xC000003EL) #endif #ifndef STATUS_CRC_ERROR # define STATUS_CRC_ERROR ((NTSTATUS) 0xC000003FL) #endif #ifndef STATUS_SECTION_TOO_BIG # define STATUS_SECTION_TOO_BIG ((NTSTATUS) 0xC0000040L) #endif #ifndef STATUS_PORT_CONNECTION_REFUSED # define STATUS_PORT_CONNECTION_REFUSED ((NTSTATUS) 0xC0000041L) #endif #ifndef STATUS_INVALID_PORT_HANDLE # define STATUS_INVALID_PORT_HANDLE ((NTSTATUS) 0xC0000042L) #endif #ifndef STATUS_SHARING_VIOLATION # define STATUS_SHARING_VIOLATION ((NTSTATUS) 0xC0000043L) #endif #ifndef STATUS_QUOTA_EXCEEDED # define STATUS_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000044L) #endif #ifndef STATUS_INVALID_PAGE_PROTECTION # define STATUS_INVALID_PAGE_PROTECTION ((NTSTATUS) 0xC0000045L) #endif #ifndef STATUS_MUTANT_NOT_OWNED # define STATUS_MUTANT_NOT_OWNED ((NTSTATUS) 0xC0000046L) #endif #ifndef STATUS_SEMAPHORE_LIMIT_EXCEEDED # define STATUS_SEMAPHORE_LIMIT_EXCEEDED ((NTSTATUS) 0xC0000047L) #endif #ifndef STATUS_PORT_ALREADY_SET # define STATUS_PORT_ALREADY_SET ((NTSTATUS) 0xC0000048L) #endif #ifndef STATUS_SECTION_NOT_IMAGE # define STATUS_SECTION_NOT_IMAGE ((NTSTATUS) 0xC0000049L) #endif #ifndef STATUS_SUSPEND_COUNT_EXCEEDED # define STATUS_SUSPEND_COUNT_EXCEEDED ((NTSTATUS) 0xC000004AL) #endif #ifndef STATUS_THREAD_IS_TERMINATING # define STATUS_THREAD_IS_TERMINATING ((NTSTATUS) 0xC000004BL) #endif #ifndef STATUS_BAD_WORKING_SET_LIMIT # define STATUS_BAD_WORKING_SET_LIMIT ((NTSTATUS) 0xC000004CL) #endif #ifndef STATUS_INCOMPATIBLE_FILE_MAP # define STATUS_INCOMPATIBLE_FILE_MAP ((NTSTATUS) 0xC000004DL) #endif #ifndef STATUS_SECTION_PROTECTION # define STATUS_SECTION_PROTECTION ((NTSTATUS) 0xC000004EL) #endif #ifndef STATUS_EAS_NOT_SUPPORTED # define STATUS_EAS_NOT_SUPPORTED ((NTSTATUS) 0xC000004FL) #endif #ifndef STATUS_EA_TOO_LARGE # define STATUS_EA_TOO_LARGE ((NTSTATUS) 0xC0000050L) #endif #ifndef STATUS_NONEXISTENT_EA_ENTRY # define STATUS_NONEXISTENT_EA_ENTRY ((NTSTATUS) 0xC0000051L) #endif #ifndef STATUS_NO_EAS_ON_FILE # define STATUS_NO_EAS_ON_FILE ((NTSTATUS) 0xC0000052L) #endif #ifndef STATUS_EA_CORRUPT_ERROR # define STATUS_EA_CORRUPT_ERROR ((NTSTATUS) 0xC0000053L) #endif #ifndef STATUS_FILE_LOCK_CONFLICT # define STATUS_FILE_LOCK_CONFLICT ((NTSTATUS) 0xC0000054L) #endif #ifndef STATUS_LOCK_NOT_GRANTED # define STATUS_LOCK_NOT_GRANTED ((NTSTATUS) 0xC0000055L) #endif #ifndef STATUS_DELETE_PENDING # define STATUS_DELETE_PENDING ((NTSTATUS) 0xC0000056L) #endif #ifndef STATUS_CTL_FILE_NOT_SUPPORTED # define STATUS_CTL_FILE_NOT_SUPPORTED 
((NTSTATUS) 0xC0000057L) #endif #ifndef STATUS_UNKNOWN_REVISION # define STATUS_UNKNOWN_REVISION ((NTSTATUS) 0xC0000058L) #endif #ifndef STATUS_REVISION_MISMATCH # define STATUS_REVISION_MISMATCH ((NTSTATUS) 0xC0000059L) #endif #ifndef STATUS_INVALID_OWNER # define STATUS_INVALID_OWNER ((NTSTATUS) 0xC000005AL) #endif #ifndef STATUS_INVALID_PRIMARY_GROUP # define STATUS_INVALID_PRIMARY_GROUP ((NTSTATUS) 0xC000005BL) #endif #ifndef STATUS_NO_IMPERSONATION_TOKEN # define STATUS_NO_IMPERSONATION_TOKEN ((NTSTATUS) 0xC000005CL) #endif #ifndef STATUS_CANT_DISABLE_MANDATORY # define STATUS_CANT_DISABLE_MANDATORY ((NTSTATUS) 0xC000005DL) #endif #ifndef STATUS_NO_LOGON_SERVERS # define STATUS_NO_LOGON_SERVERS ((NTSTATUS) 0xC000005EL) #endif #ifndef STATUS_NO_SUCH_LOGON_SESSION # define STATUS_NO_SUCH_LOGON_SESSION ((NTSTATUS) 0xC000005FL) #endif #ifndef STATUS_NO_SUCH_PRIVILEGE # define STATUS_NO_SUCH_PRIVILEGE ((NTSTATUS) 0xC0000060L) #endif #ifndef STATUS_PRIVILEGE_NOT_HELD # define STATUS_PRIVILEGE_NOT_HELD ((NTSTATUS) 0xC0000061L) #endif #ifndef STATUS_INVALID_ACCOUNT_NAME # define STATUS_INVALID_ACCOUNT_NAME ((NTSTATUS) 0xC0000062L) #endif #ifndef STATUS_USER_EXISTS # define STATUS_USER_EXISTS ((NTSTATUS) 0xC0000063L) #endif #ifndef STATUS_NO_SUCH_USER # define STATUS_NO_SUCH_USER ((NTSTATUS) 0xC0000064L) #endif #ifndef STATUS_GROUP_EXISTS # define STATUS_GROUP_EXISTS ((NTSTATUS) 0xC0000065L) #endif #ifndef STATUS_NO_SUCH_GROUP # define STATUS_NO_SUCH_GROUP ((NTSTATUS) 0xC0000066L) #endif #ifndef STATUS_MEMBER_IN_GROUP # define STATUS_MEMBER_IN_GROUP ((NTSTATUS) 0xC0000067L) #endif #ifndef STATUS_MEMBER_NOT_IN_GROUP # define STATUS_MEMBER_NOT_IN_GROUP ((NTSTATUS) 0xC0000068L) #endif #ifndef STATUS_LAST_ADMIN # define STATUS_LAST_ADMIN ((NTSTATUS) 0xC0000069L) #endif #ifndef STATUS_WRONG_PASSWORD # define STATUS_WRONG_PASSWORD ((NTSTATUS) 0xC000006AL) #endif #ifndef STATUS_ILL_FORMED_PASSWORD # define STATUS_ILL_FORMED_PASSWORD ((NTSTATUS) 0xC000006BL) #endif #ifndef STATUS_PASSWORD_RESTRICTION # define STATUS_PASSWORD_RESTRICTION ((NTSTATUS) 0xC000006CL) #endif #ifndef STATUS_LOGON_FAILURE # define STATUS_LOGON_FAILURE ((NTSTATUS) 0xC000006DL) #endif #ifndef STATUS_ACCOUNT_RESTRICTION # define STATUS_ACCOUNT_RESTRICTION ((NTSTATUS) 0xC000006EL) #endif #ifndef STATUS_INVALID_LOGON_HOURS # define STATUS_INVALID_LOGON_HOURS ((NTSTATUS) 0xC000006FL) #endif #ifndef STATUS_INVALID_WORKSTATION # define STATUS_INVALID_WORKSTATION ((NTSTATUS) 0xC0000070L) #endif #ifndef STATUS_PASSWORD_EXPIRED # define STATUS_PASSWORD_EXPIRED ((NTSTATUS) 0xC0000071L) #endif #ifndef STATUS_ACCOUNT_DISABLED # define STATUS_ACCOUNT_DISABLED ((NTSTATUS) 0xC0000072L) #endif #ifndef STATUS_NONE_MAPPED # define STATUS_NONE_MAPPED ((NTSTATUS) 0xC0000073L) #endif #ifndef STATUS_TOO_MANY_LUIDS_REQUESTED # define STATUS_TOO_MANY_LUIDS_REQUESTED ((NTSTATUS) 0xC0000074L) #endif #ifndef STATUS_LUIDS_EXHAUSTED # define STATUS_LUIDS_EXHAUSTED ((NTSTATUS) 0xC0000075L) #endif #ifndef STATUS_INVALID_SUB_AUTHORITY # define STATUS_INVALID_SUB_AUTHORITY ((NTSTATUS) 0xC0000076L) #endif #ifndef STATUS_INVALID_ACL # define STATUS_INVALID_ACL ((NTSTATUS) 0xC0000077L) #endif #ifndef STATUS_INVALID_SID # define STATUS_INVALID_SID ((NTSTATUS) 0xC0000078L) #endif #ifndef STATUS_INVALID_SECURITY_DESCR # define STATUS_INVALID_SECURITY_DESCR ((NTSTATUS) 0xC0000079L) #endif #ifndef STATUS_PROCEDURE_NOT_FOUND # define STATUS_PROCEDURE_NOT_FOUND ((NTSTATUS) 0xC000007AL) #endif #ifndef STATUS_INVALID_IMAGE_FORMAT # define STATUS_INVALID_IMAGE_FORMAT 
((NTSTATUS) 0xC000007BL) #endif #ifndef STATUS_NO_TOKEN # define STATUS_NO_TOKEN ((NTSTATUS) 0xC000007CL) #endif #ifndef STATUS_BAD_INHERITANCE_ACL # define STATUS_BAD_INHERITANCE_ACL ((NTSTATUS) 0xC000007DL) #endif #ifndef STATUS_RANGE_NOT_LOCKED # define STATUS_RANGE_NOT_LOCKED ((NTSTATUS) 0xC000007EL) #endif #ifndef STATUS_DISK_FULL # define STATUS_DISK_FULL ((NTSTATUS) 0xC000007FL) #endif #ifndef STATUS_SERVER_DISABLED # define STATUS_SERVER_DISABLED ((NTSTATUS) 0xC0000080L) #endif #ifndef STATUS_SERVER_NOT_DISABLED # define STATUS_SERVER_NOT_DISABLED ((NTSTATUS) 0xC0000081L) #endif #ifndef STATUS_TOO_MANY_GUIDS_REQUESTED # define STATUS_TOO_MANY_GUIDS_REQUESTED ((NTSTATUS) 0xC0000082L) #endif #ifndef STATUS_GUIDS_EXHAUSTED # define STATUS_GUIDS_EXHAUSTED ((NTSTATUS) 0xC0000083L) #endif #ifndef STATUS_INVALID_ID_AUTHORITY # define STATUS_INVALID_ID_AUTHORITY ((NTSTATUS) 0xC0000084L) #endif #ifndef STATUS_AGENTS_EXHAUSTED # define STATUS_AGENTS_EXHAUSTED ((NTSTATUS) 0xC0000085L) #endif #ifndef STATUS_INVALID_VOLUME_LABEL # define STATUS_INVALID_VOLUME_LABEL ((NTSTATUS) 0xC0000086L) #endif #ifndef STATUS_SECTION_NOT_EXTENDED # define STATUS_SECTION_NOT_EXTENDED ((NTSTATUS) 0xC0000087L) #endif #ifndef STATUS_NOT_MAPPED_DATA # define STATUS_NOT_MAPPED_DATA ((NTSTATUS) 0xC0000088L) #endif #ifndef STATUS_RESOURCE_DATA_NOT_FOUND # define STATUS_RESOURCE_DATA_NOT_FOUND ((NTSTATUS) 0xC0000089L) #endif #ifndef STATUS_RESOURCE_TYPE_NOT_FOUND # define STATUS_RESOURCE_TYPE_NOT_FOUND ((NTSTATUS) 0xC000008AL) #endif #ifndef STATUS_RESOURCE_NAME_NOT_FOUND # define STATUS_RESOURCE_NAME_NOT_FOUND ((NTSTATUS) 0xC000008BL) #endif #ifndef STATUS_ARRAY_BOUNDS_EXCEEDED # define STATUS_ARRAY_BOUNDS_EXCEEDED ((NTSTATUS) 0xC000008CL) #endif #ifndef STATUS_FLOAT_DENORMAL_OPERAND # define STATUS_FLOAT_DENORMAL_OPERAND ((NTSTATUS) 0xC000008DL) #endif #ifndef STATUS_FLOAT_DIVIDE_BY_ZERO # define STATUS_FLOAT_DIVIDE_BY_ZERO ((NTSTATUS) 0xC000008EL) #endif #ifndef STATUS_FLOAT_INEXACT_RESULT # define STATUS_FLOAT_INEXACT_RESULT ((NTSTATUS) 0xC000008FL) #endif #ifndef STATUS_FLOAT_INVALID_OPERATION # define STATUS_FLOAT_INVALID_OPERATION ((NTSTATUS) 0xC0000090L) #endif #ifndef STATUS_FLOAT_OVERFLOW # define STATUS_FLOAT_OVERFLOW ((NTSTATUS) 0xC0000091L) #endif #ifndef STATUS_FLOAT_STACK_CHECK # define STATUS_FLOAT_STACK_CHECK ((NTSTATUS) 0xC0000092L) #endif #ifndef STATUS_FLOAT_UNDERFLOW # define STATUS_FLOAT_UNDERFLOW ((NTSTATUS) 0xC0000093L) #endif #ifndef STATUS_INTEGER_DIVIDE_BY_ZERO # define STATUS_INTEGER_DIVIDE_BY_ZERO ((NTSTATUS) 0xC0000094L) #endif #ifndef STATUS_INTEGER_OVERFLOW # define STATUS_INTEGER_OVERFLOW ((NTSTATUS) 0xC0000095L) #endif #ifndef STATUS_PRIVILEGED_INSTRUCTION # define STATUS_PRIVILEGED_INSTRUCTION ((NTSTATUS) 0xC0000096L) #endif #ifndef STATUS_TOO_MANY_PAGING_FILES # define STATUS_TOO_MANY_PAGING_FILES ((NTSTATUS) 0xC0000097L) #endif #ifndef STATUS_FILE_INVALID # define STATUS_FILE_INVALID ((NTSTATUS) 0xC0000098L) #endif #ifndef STATUS_ALLOTTED_SPACE_EXCEEDED # define STATUS_ALLOTTED_SPACE_EXCEEDED ((NTSTATUS) 0xC0000099L) #endif #ifndef STATUS_INSUFFICIENT_RESOURCES # define STATUS_INSUFFICIENT_RESOURCES ((NTSTATUS) 0xC000009AL) #endif #ifndef STATUS_DFS_EXIT_PATH_FOUND # define STATUS_DFS_EXIT_PATH_FOUND ((NTSTATUS) 0xC000009BL) #endif #ifndef STATUS_DEVICE_DATA_ERROR # define STATUS_DEVICE_DATA_ERROR ((NTSTATUS) 0xC000009CL) #endif #ifndef STATUS_DEVICE_NOT_CONNECTED # define STATUS_DEVICE_NOT_CONNECTED ((NTSTATUS) 0xC000009DL) #endif #ifndef STATUS_DEVICE_POWER_FAILURE # define 
STATUS_DEVICE_POWER_FAILURE ((NTSTATUS) 0xC000009EL) #endif #ifndef STATUS_FREE_VM_NOT_AT_BASE # define STATUS_FREE_VM_NOT_AT_BASE ((NTSTATUS) 0xC000009FL) #endif #ifndef STATUS_MEMORY_NOT_ALLOCATED # define STATUS_MEMORY_NOT_ALLOCATED ((NTSTATUS) 0xC00000A0L) #endif #ifndef STATUS_WORKING_SET_QUOTA # define STATUS_WORKING_SET_QUOTA ((NTSTATUS) 0xC00000A1L) #endif #ifndef STATUS_MEDIA_WRITE_PROTECTED # define STATUS_MEDIA_WRITE_PROTECTED ((NTSTATUS) 0xC00000A2L) #endif #ifndef STATUS_DEVICE_NOT_READY # define STATUS_DEVICE_NOT_READY ((NTSTATUS) 0xC00000A3L) #endif #ifndef STATUS_INVALID_GROUP_ATTRIBUTES # define STATUS_INVALID_GROUP_ATTRIBUTES ((NTSTATUS) 0xC00000A4L) #endif #ifndef STATUS_BAD_IMPERSONATION_LEVEL # define STATUS_BAD_IMPERSONATION_LEVEL ((NTSTATUS) 0xC00000A5L) #endif #ifndef STATUS_CANT_OPEN_ANONYMOUS # define STATUS_CANT_OPEN_ANONYMOUS ((NTSTATUS) 0xC00000A6L) #endif #ifndef STATUS_BAD_VALIDATION_CLASS # define STATUS_BAD_VALIDATION_CLASS ((NTSTATUS) 0xC00000A7L) #endif #ifndef STATUS_BAD_TOKEN_TYPE # define STATUS_BAD_TOKEN_TYPE ((NTSTATUS) 0xC00000A8L) #endif #ifndef STATUS_BAD_MASTER_BOOT_RECORD # define STATUS_BAD_MASTER_BOOT_RECORD ((NTSTATUS) 0xC00000A9L) #endif #ifndef STATUS_INSTRUCTION_MISALIGNMENT # define STATUS_INSTRUCTION_MISALIGNMENT ((NTSTATUS) 0xC00000AAL) #endif #ifndef STATUS_INSTANCE_NOT_AVAILABLE # define STATUS_INSTANCE_NOT_AVAILABLE ((NTSTATUS) 0xC00000ABL) #endif #ifndef STATUS_PIPE_NOT_AVAILABLE # define STATUS_PIPE_NOT_AVAILABLE ((NTSTATUS) 0xC00000ACL) #endif #ifndef STATUS_INVALID_PIPE_STATE # define STATUS_INVALID_PIPE_STATE ((NTSTATUS) 0xC00000ADL) #endif #ifndef STATUS_PIPE_BUSY # define STATUS_PIPE_BUSY ((NTSTATUS) 0xC00000AEL) #endif #ifndef STATUS_ILLEGAL_FUNCTION # define STATUS_ILLEGAL_FUNCTION ((NTSTATUS) 0xC00000AFL) #endif #ifndef STATUS_PIPE_DISCONNECTED # define STATUS_PIPE_DISCONNECTED ((NTSTATUS) 0xC00000B0L) #endif #ifndef STATUS_PIPE_CLOSING # define STATUS_PIPE_CLOSING ((NTSTATUS) 0xC00000B1L) #endif #ifndef STATUS_PIPE_CONNECTED # define STATUS_PIPE_CONNECTED ((NTSTATUS) 0xC00000B2L) #endif #ifndef STATUS_PIPE_LISTENING # define STATUS_PIPE_LISTENING ((NTSTATUS) 0xC00000B3L) #endif #ifndef STATUS_INVALID_READ_MODE # define STATUS_INVALID_READ_MODE ((NTSTATUS) 0xC00000B4L) #endif #ifndef STATUS_IO_TIMEOUT # define STATUS_IO_TIMEOUT ((NTSTATUS) 0xC00000B5L) #endif #ifndef STATUS_FILE_FORCED_CLOSED # define STATUS_FILE_FORCED_CLOSED ((NTSTATUS) 0xC00000B6L) #endif #ifndef STATUS_PROFILING_NOT_STARTED # define STATUS_PROFILING_NOT_STARTED ((NTSTATUS) 0xC00000B7L) #endif #ifndef STATUS_PROFILING_NOT_STOPPED # define STATUS_PROFILING_NOT_STOPPED ((NTSTATUS) 0xC00000B8L) #endif #ifndef STATUS_COULD_NOT_INTERPRET # define STATUS_COULD_NOT_INTERPRET ((NTSTATUS) 0xC00000B9L) #endif #ifndef STATUS_FILE_IS_A_DIRECTORY # define STATUS_FILE_IS_A_DIRECTORY ((NTSTATUS) 0xC00000BAL) #endif #ifndef STATUS_NOT_SUPPORTED # define STATUS_NOT_SUPPORTED ((NTSTATUS) 0xC00000BBL) #endif #ifndef STATUS_REMOTE_NOT_LISTENING # define STATUS_REMOTE_NOT_LISTENING ((NTSTATUS) 0xC00000BCL) #endif #ifndef STATUS_DUPLICATE_NAME # define STATUS_DUPLICATE_NAME ((NTSTATUS) 0xC00000BDL) #endif #ifndef STATUS_BAD_NETWORK_PATH # define STATUS_BAD_NETWORK_PATH ((NTSTATUS) 0xC00000BEL) #endif #ifndef STATUS_NETWORK_BUSY # define STATUS_NETWORK_BUSY ((NTSTATUS) 0xC00000BFL) #endif #ifndef STATUS_DEVICE_DOES_NOT_EXIST # define STATUS_DEVICE_DOES_NOT_EXIST ((NTSTATUS) 0xC00000C0L) #endif #ifndef STATUS_TOO_MANY_COMMANDS # define STATUS_TOO_MANY_COMMANDS ((NTSTATUS) 
0xC00000C1L) #endif #ifndef STATUS_ADAPTER_HARDWARE_ERROR # define STATUS_ADAPTER_HARDWARE_ERROR ((NTSTATUS) 0xC00000C2L) #endif #ifndef STATUS_INVALID_NETWORK_RESPONSE # define STATUS_INVALID_NETWORK_RESPONSE ((NTSTATUS) 0xC00000C3L) #endif #ifndef STATUS_UNEXPECTED_NETWORK_ERROR # define STATUS_UNEXPECTED_NETWORK_ERROR ((NTSTATUS) 0xC00000C4L) #endif #ifndef STATUS_BAD_REMOTE_ADAPTER # define STATUS_BAD_REMOTE_ADAPTER ((NTSTATUS) 0xC00000C5L) #endif #ifndef STATUS_PRINT_QUEUE_FULL # define STATUS_PRINT_QUEUE_FULL ((NTSTATUS) 0xC00000C6L) #endif #ifndef STATUS_NO_SPOOL_SPACE # define STATUS_NO_SPOOL_SPACE ((NTSTATUS) 0xC00000C7L) #endif #ifndef STATUS_PRINT_CANCELLED # define STATUS_PRINT_CANCELLED ((NTSTATUS) 0xC00000C8L) #endif #ifndef STATUS_NETWORK_NAME_DELETED # define STATUS_NETWORK_NAME_DELETED ((NTSTATUS) 0xC00000C9L) #endif #ifndef STATUS_NETWORK_ACCESS_DENIED # define STATUS_NETWORK_ACCESS_DENIED ((NTSTATUS) 0xC00000CAL) #endif #ifndef STATUS_BAD_DEVICE_TYPE # define STATUS_BAD_DEVICE_TYPE ((NTSTATUS) 0xC00000CBL) #endif #ifndef STATUS_BAD_NETWORK_NAME # define STATUS_BAD_NETWORK_NAME ((NTSTATUS) 0xC00000CCL) #endif #ifndef STATUS_TOO_MANY_NAMES # define STATUS_TOO_MANY_NAMES ((NTSTATUS) 0xC00000CDL) #endif #ifndef STATUS_TOO_MANY_SESSIONS # define STATUS_TOO_MANY_SESSIONS ((NTSTATUS) 0xC00000CEL) #endif #ifndef STATUS_SHARING_PAUSED # define STATUS_SHARING_PAUSED ((NTSTATUS) 0xC00000CFL) #endif #ifndef STATUS_REQUEST_NOT_ACCEPTED # define STATUS_REQUEST_NOT_ACCEPTED ((NTSTATUS) 0xC00000D0L) #endif #ifndef STATUS_REDIRECTOR_PAUSED # define STATUS_REDIRECTOR_PAUSED ((NTSTATUS) 0xC00000D1L) #endif #ifndef STATUS_NET_WRITE_FAULT # define STATUS_NET_WRITE_FAULT ((NTSTATUS) 0xC00000D2L) #endif #ifndef STATUS_PROFILING_AT_LIMIT # define STATUS_PROFILING_AT_LIMIT ((NTSTATUS) 0xC00000D3L) #endif #ifndef STATUS_NOT_SAME_DEVICE # define STATUS_NOT_SAME_DEVICE ((NTSTATUS) 0xC00000D4L) #endif #ifndef STATUS_FILE_RENAMED # define STATUS_FILE_RENAMED ((NTSTATUS) 0xC00000D5L) #endif #ifndef STATUS_VIRTUAL_CIRCUIT_CLOSED # define STATUS_VIRTUAL_CIRCUIT_CLOSED ((NTSTATUS) 0xC00000D6L) #endif #ifndef STATUS_NO_SECURITY_ON_OBJECT # define STATUS_NO_SECURITY_ON_OBJECT ((NTSTATUS) 0xC00000D7L) #endif #ifndef STATUS_CANT_WAIT # define STATUS_CANT_WAIT ((NTSTATUS) 0xC00000D8L) #endif #ifndef STATUS_PIPE_EMPTY # define STATUS_PIPE_EMPTY ((NTSTATUS) 0xC00000D9L) #endif #ifndef STATUS_CANT_ACCESS_DOMAIN_INFO # define STATUS_CANT_ACCESS_DOMAIN_INFO ((NTSTATUS) 0xC00000DAL) #endif #ifndef STATUS_CANT_TERMINATE_SELF # define STATUS_CANT_TERMINATE_SELF ((NTSTATUS) 0xC00000DBL) #endif #ifndef STATUS_INVALID_SERVER_STATE # define STATUS_INVALID_SERVER_STATE ((NTSTATUS) 0xC00000DCL) #endif #ifndef STATUS_INVALID_DOMAIN_STATE # define STATUS_INVALID_DOMAIN_STATE ((NTSTATUS) 0xC00000DDL) #endif #ifndef STATUS_INVALID_DOMAIN_ROLE # define STATUS_INVALID_DOMAIN_ROLE ((NTSTATUS) 0xC00000DEL) #endif #ifndef STATUS_NO_SUCH_DOMAIN # define STATUS_NO_SUCH_DOMAIN ((NTSTATUS) 0xC00000DFL) #endif #ifndef STATUS_DOMAIN_EXISTS # define STATUS_DOMAIN_EXISTS ((NTSTATUS) 0xC00000E0L) #endif #ifndef STATUS_DOMAIN_LIMIT_EXCEEDED # define STATUS_DOMAIN_LIMIT_EXCEEDED ((NTSTATUS) 0xC00000E1L) #endif #ifndef STATUS_OPLOCK_NOT_GRANTED # define STATUS_OPLOCK_NOT_GRANTED ((NTSTATUS) 0xC00000E2L) #endif #ifndef STATUS_INVALID_OPLOCK_PROTOCOL # define STATUS_INVALID_OPLOCK_PROTOCOL ((NTSTATUS) 0xC00000E3L) #endif #ifndef STATUS_INTERNAL_DB_CORRUPTION # define STATUS_INTERNAL_DB_CORRUPTION ((NTSTATUS) 0xC00000E4L) #endif #ifndef 
STATUS_INTERNAL_ERROR # define STATUS_INTERNAL_ERROR ((NTSTATUS) 0xC00000E5L) #endif #ifndef STATUS_GENERIC_NOT_MAPPED # define STATUS_GENERIC_NOT_MAPPED ((NTSTATUS) 0xC00000E6L) #endif #ifndef STATUS_BAD_DESCRIPTOR_FORMAT # define STATUS_BAD_DESCRIPTOR_FORMAT ((NTSTATUS) 0xC00000E7L) #endif #ifndef STATUS_INVALID_USER_BUFFER # define STATUS_INVALID_USER_BUFFER ((NTSTATUS) 0xC00000E8L) #endif #ifndef STATUS_UNEXPECTED_IO_ERROR # define STATUS_UNEXPECTED_IO_ERROR ((NTSTATUS) 0xC00000E9L) #endif #ifndef STATUS_UNEXPECTED_MM_CREATE_ERR # define STATUS_UNEXPECTED_MM_CREATE_ERR ((NTSTATUS) 0xC00000EAL) #endif #ifndef STATUS_UNEXPECTED_MM_MAP_ERROR # define STATUS_UNEXPECTED_MM_MAP_ERROR ((NTSTATUS) 0xC00000EBL) #endif #ifndef STATUS_UNEXPECTED_MM_EXTEND_ERR # define STATUS_UNEXPECTED_MM_EXTEND_ERR ((NTSTATUS) 0xC00000ECL) #endif #ifndef STATUS_NOT_LOGON_PROCESS # define STATUS_NOT_LOGON_PROCESS ((NTSTATUS) 0xC00000EDL) #endif #ifndef STATUS_LOGON_SESSION_EXISTS # define STATUS_LOGON_SESSION_EXISTS ((NTSTATUS) 0xC00000EEL) #endif #ifndef STATUS_INVALID_PARAMETER_1 # define STATUS_INVALID_PARAMETER_1 ((NTSTATUS) 0xC00000EFL) #endif #ifndef STATUS_INVALID_PARAMETER_2 # define STATUS_INVALID_PARAMETER_2 ((NTSTATUS) 0xC00000F0L) #endif #ifndef STATUS_INVALID_PARAMETER_3 # define STATUS_INVALID_PARAMETER_3 ((NTSTATUS) 0xC00000F1L) #endif #ifndef STATUS_INVALID_PARAMETER_4 # define STATUS_INVALID_PARAMETER_4 ((NTSTATUS) 0xC00000F2L) #endif #ifndef STATUS_INVALID_PARAMETER_5 # define STATUS_INVALID_PARAMETER_5 ((NTSTATUS) 0xC00000F3L) #endif #ifndef STATUS_INVALID_PARAMETER_6 # define STATUS_INVALID_PARAMETER_6 ((NTSTATUS) 0xC00000F4L) #endif #ifndef STATUS_INVALID_PARAMETER_7 # define STATUS_INVALID_PARAMETER_7 ((NTSTATUS) 0xC00000F5L) #endif #ifndef STATUS_INVALID_PARAMETER_8 # define STATUS_INVALID_PARAMETER_8 ((NTSTATUS) 0xC00000F6L) #endif #ifndef STATUS_INVALID_PARAMETER_9 # define STATUS_INVALID_PARAMETER_9 ((NTSTATUS) 0xC00000F7L) #endif #ifndef STATUS_INVALID_PARAMETER_10 # define STATUS_INVALID_PARAMETER_10 ((NTSTATUS) 0xC00000F8L) #endif #ifndef STATUS_INVALID_PARAMETER_11 # define STATUS_INVALID_PARAMETER_11 ((NTSTATUS) 0xC00000F9L) #endif #ifndef STATUS_INVALID_PARAMETER_12 # define STATUS_INVALID_PARAMETER_12 ((NTSTATUS) 0xC00000FAL) #endif #ifndef STATUS_REDIRECTOR_NOT_STARTED # define STATUS_REDIRECTOR_NOT_STARTED ((NTSTATUS) 0xC00000FBL) #endif #ifndef STATUS_REDIRECTOR_STARTED # define STATUS_REDIRECTOR_STARTED ((NTSTATUS) 0xC00000FCL) #endif #ifndef STATUS_STACK_OVERFLOW # define STATUS_STACK_OVERFLOW ((NTSTATUS) 0xC00000FDL) #endif #ifndef STATUS_NO_SUCH_PACKAGE # define STATUS_NO_SUCH_PACKAGE ((NTSTATUS) 0xC00000FEL) #endif #ifndef STATUS_BAD_FUNCTION_TABLE # define STATUS_BAD_FUNCTION_TABLE ((NTSTATUS) 0xC00000FFL) #endif #ifndef STATUS_VARIABLE_NOT_FOUND # define STATUS_VARIABLE_NOT_FOUND ((NTSTATUS) 0xC0000100L) #endif #ifndef STATUS_DIRECTORY_NOT_EMPTY # define STATUS_DIRECTORY_NOT_EMPTY ((NTSTATUS) 0xC0000101L) #endif #ifndef STATUS_FILE_CORRUPT_ERROR # define STATUS_FILE_CORRUPT_ERROR ((NTSTATUS) 0xC0000102L) #endif #ifndef STATUS_NOT_A_DIRECTORY # define STATUS_NOT_A_DIRECTORY ((NTSTATUS) 0xC0000103L) #endif #ifndef STATUS_BAD_LOGON_SESSION_STATE # define STATUS_BAD_LOGON_SESSION_STATE ((NTSTATUS) 0xC0000104L) #endif #ifndef STATUS_LOGON_SESSION_COLLISION # define STATUS_LOGON_SESSION_COLLISION ((NTSTATUS) 0xC0000105L) #endif #ifndef STATUS_NAME_TOO_LONG # define STATUS_NAME_TOO_LONG ((NTSTATUS) 0xC0000106L) #endif #ifndef STATUS_FILES_OPEN # define STATUS_FILES_OPEN 
((NTSTATUS) 0xC0000107L) #endif #ifndef STATUS_CONNECTION_IN_USE # define STATUS_CONNECTION_IN_USE ((NTSTATUS) 0xC0000108L) #endif #ifndef STATUS_MESSAGE_NOT_FOUND # define STATUS_MESSAGE_NOT_FOUND ((NTSTATUS) 0xC0000109L) #endif #ifndef STATUS_PROCESS_IS_TERMINATING # define STATUS_PROCESS_IS_TERMINATING ((NTSTATUS) 0xC000010AL) #endif #ifndef STATUS_INVALID_LOGON_TYPE # define STATUS_INVALID_LOGON_TYPE ((NTSTATUS) 0xC000010BL) #endif #ifndef STATUS_NO_GUID_TRANSLATION # define STATUS_NO_GUID_TRANSLATION ((NTSTATUS) 0xC000010CL) #endif #ifndef STATUS_CANNOT_IMPERSONATE # define STATUS_CANNOT_IMPERSONATE ((NTSTATUS) 0xC000010DL) #endif #ifndef STATUS_IMAGE_ALREADY_LOADED # define STATUS_IMAGE_ALREADY_LOADED ((NTSTATUS) 0xC000010EL) #endif #ifndef STATUS_ABIOS_NOT_PRESENT # define STATUS_ABIOS_NOT_PRESENT ((NTSTATUS) 0xC000010FL) #endif #ifndef STATUS_ABIOS_LID_NOT_EXIST # define STATUS_ABIOS_LID_NOT_EXIST ((NTSTATUS) 0xC0000110L) #endif #ifndef STATUS_ABIOS_LID_ALREADY_OWNED # define STATUS_ABIOS_LID_ALREADY_OWNED ((NTSTATUS) 0xC0000111L) #endif #ifndef STATUS_ABIOS_NOT_LID_OWNER # define STATUS_ABIOS_NOT_LID_OWNER ((NTSTATUS) 0xC0000112L) #endif #ifndef STATUS_ABIOS_INVALID_COMMAND # define STATUS_ABIOS_INVALID_COMMAND ((NTSTATUS) 0xC0000113L) #endif #ifndef STATUS_ABIOS_INVALID_LID # define STATUS_ABIOS_INVALID_LID ((NTSTATUS) 0xC0000114L) #endif #ifndef STATUS_ABIOS_SELECTOR_NOT_AVAILABLE # define STATUS_ABIOS_SELECTOR_NOT_AVAILABLE ((NTSTATUS) 0xC0000115L) #endif #ifndef STATUS_ABIOS_INVALID_SELECTOR # define STATUS_ABIOS_INVALID_SELECTOR ((NTSTATUS) 0xC0000116L) #endif #ifndef STATUS_NO_LDT # define STATUS_NO_LDT ((NTSTATUS) 0xC0000117L) #endif #ifndef STATUS_INVALID_LDT_SIZE # define STATUS_INVALID_LDT_SIZE ((NTSTATUS) 0xC0000118L) #endif #ifndef STATUS_INVALID_LDT_OFFSET # define STATUS_INVALID_LDT_OFFSET ((NTSTATUS) 0xC0000119L) #endif #ifndef STATUS_INVALID_LDT_DESCRIPTOR # define STATUS_INVALID_LDT_DESCRIPTOR ((NTSTATUS) 0xC000011AL) #endif #ifndef STATUS_INVALID_IMAGE_NE_FORMAT # define STATUS_INVALID_IMAGE_NE_FORMAT ((NTSTATUS) 0xC000011BL) #endif #ifndef STATUS_RXACT_INVALID_STATE # define STATUS_RXACT_INVALID_STATE ((NTSTATUS) 0xC000011CL) #endif #ifndef STATUS_RXACT_COMMIT_FAILURE # define STATUS_RXACT_COMMIT_FAILURE ((NTSTATUS) 0xC000011DL) #endif #ifndef STATUS_MAPPED_FILE_SIZE_ZERO # define STATUS_MAPPED_FILE_SIZE_ZERO ((NTSTATUS) 0xC000011EL) #endif #ifndef STATUS_TOO_MANY_OPENED_FILES # define STATUS_TOO_MANY_OPENED_FILES ((NTSTATUS) 0xC000011FL) #endif #ifndef STATUS_CANCELLED # define STATUS_CANCELLED ((NTSTATUS) 0xC0000120L) #endif #ifndef STATUS_CANNOT_DELETE # define STATUS_CANNOT_DELETE ((NTSTATUS) 0xC0000121L) #endif #ifndef STATUS_INVALID_COMPUTER_NAME # define STATUS_INVALID_COMPUTER_NAME ((NTSTATUS) 0xC0000122L) #endif #ifndef STATUS_FILE_DELETED # define STATUS_FILE_DELETED ((NTSTATUS) 0xC0000123L) #endif #ifndef STATUS_SPECIAL_ACCOUNT # define STATUS_SPECIAL_ACCOUNT ((NTSTATUS) 0xC0000124L) #endif #ifndef STATUS_SPECIAL_GROUP # define STATUS_SPECIAL_GROUP ((NTSTATUS) 0xC0000125L) #endif #ifndef STATUS_SPECIAL_USER # define STATUS_SPECIAL_USER ((NTSTATUS) 0xC0000126L) #endif #ifndef STATUS_MEMBERS_PRIMARY_GROUP # define STATUS_MEMBERS_PRIMARY_GROUP ((NTSTATUS) 0xC0000127L) #endif #ifndef STATUS_FILE_CLOSED # define STATUS_FILE_CLOSED ((NTSTATUS) 0xC0000128L) #endif #ifndef STATUS_TOO_MANY_THREADS # define STATUS_TOO_MANY_THREADS ((NTSTATUS) 0xC0000129L) #endif #ifndef STATUS_THREAD_NOT_IN_PROCESS # define STATUS_THREAD_NOT_IN_PROCESS ((NTSTATUS) 0xC000012AL) 
#endif #ifndef STATUS_TOKEN_ALREADY_IN_USE # define STATUS_TOKEN_ALREADY_IN_USE ((NTSTATUS) 0xC000012BL) #endif #ifndef STATUS_PAGEFILE_QUOTA_EXCEEDED # define STATUS_PAGEFILE_QUOTA_EXCEEDED ((NTSTATUS) 0xC000012CL) #endif #ifndef STATUS_COMMITMENT_LIMIT # define STATUS_COMMITMENT_LIMIT ((NTSTATUS) 0xC000012DL) #endif #ifndef STATUS_INVALID_IMAGE_LE_FORMAT # define STATUS_INVALID_IMAGE_LE_FORMAT ((NTSTATUS) 0xC000012EL) #endif #ifndef STATUS_INVALID_IMAGE_NOT_MZ # define STATUS_INVALID_IMAGE_NOT_MZ ((NTSTATUS) 0xC000012FL) #endif #ifndef STATUS_INVALID_IMAGE_PROTECT # define STATUS_INVALID_IMAGE_PROTECT ((NTSTATUS) 0xC0000130L) #endif #ifndef STATUS_INVALID_IMAGE_WIN_16 # define STATUS_INVALID_IMAGE_WIN_16 ((NTSTATUS) 0xC0000131L) #endif #ifndef STATUS_LOGON_SERVER_CONFLICT # define STATUS_LOGON_SERVER_CONFLICT ((NTSTATUS) 0xC0000132L) #endif #ifndef STATUS_TIME_DIFFERENCE_AT_DC # define STATUS_TIME_DIFFERENCE_AT_DC ((NTSTATUS) 0xC0000133L) #endif #ifndef STATUS_SYNCHRONIZATION_REQUIRED # define STATUS_SYNCHRONIZATION_REQUIRED ((NTSTATUS) 0xC0000134L) #endif #ifndef STATUS_DLL_NOT_FOUND # define STATUS_DLL_NOT_FOUND ((NTSTATUS) 0xC0000135L) #endif #ifndef STATUS_OPEN_FAILED # define STATUS_OPEN_FAILED ((NTSTATUS) 0xC0000136L) #endif #ifndef STATUS_IO_PRIVILEGE_FAILED # define STATUS_IO_PRIVILEGE_FAILED ((NTSTATUS) 0xC0000137L) #endif #ifndef STATUS_ORDINAL_NOT_FOUND # define STATUS_ORDINAL_NOT_FOUND ((NTSTATUS) 0xC0000138L) #endif #ifndef STATUS_ENTRYPOINT_NOT_FOUND # define STATUS_ENTRYPOINT_NOT_FOUND ((NTSTATUS) 0xC0000139L) #endif #ifndef STATUS_CONTROL_C_EXIT # define STATUS_CONTROL_C_EXIT ((NTSTATUS) 0xC000013AL) #endif #ifndef STATUS_LOCAL_DISCONNECT # define STATUS_LOCAL_DISCONNECT ((NTSTATUS) 0xC000013BL) #endif #ifndef STATUS_REMOTE_DISCONNECT # define STATUS_REMOTE_DISCONNECT ((NTSTATUS) 0xC000013CL) #endif #ifndef STATUS_REMOTE_RESOURCES # define STATUS_REMOTE_RESOURCES ((NTSTATUS) 0xC000013DL) #endif #ifndef STATUS_LINK_FAILED # define STATUS_LINK_FAILED ((NTSTATUS) 0xC000013EL) #endif #ifndef STATUS_LINK_TIMEOUT # define STATUS_LINK_TIMEOUT ((NTSTATUS) 0xC000013FL) #endif #ifndef STATUS_INVALID_CONNECTION # define STATUS_INVALID_CONNECTION ((NTSTATUS) 0xC0000140L) #endif #ifndef STATUS_INVALID_ADDRESS # define STATUS_INVALID_ADDRESS ((NTSTATUS) 0xC0000141L) #endif #ifndef STATUS_DLL_INIT_FAILED # define STATUS_DLL_INIT_FAILED ((NTSTATUS) 0xC0000142L) #endif #ifndef STATUS_MISSING_SYSTEMFILE # define STATUS_MISSING_SYSTEMFILE ((NTSTATUS) 0xC0000143L) #endif #ifndef STATUS_UNHANDLED_EXCEPTION # define STATUS_UNHANDLED_EXCEPTION ((NTSTATUS) 0xC0000144L) #endif #ifndef STATUS_APP_INIT_FAILURE # define STATUS_APP_INIT_FAILURE ((NTSTATUS) 0xC0000145L) #endif #ifndef STATUS_PAGEFILE_CREATE_FAILED # define STATUS_PAGEFILE_CREATE_FAILED ((NTSTATUS) 0xC0000146L) #endif #ifndef STATUS_NO_PAGEFILE # define STATUS_NO_PAGEFILE ((NTSTATUS) 0xC0000147L) #endif #ifndef STATUS_INVALID_LEVEL # define STATUS_INVALID_LEVEL ((NTSTATUS) 0xC0000148L) #endif #ifndef STATUS_WRONG_PASSWORD_CORE # define STATUS_WRONG_PASSWORD_CORE ((NTSTATUS) 0xC0000149L) #endif #ifndef STATUS_ILLEGAL_FLOAT_CONTEXT # define STATUS_ILLEGAL_FLOAT_CONTEXT ((NTSTATUS) 0xC000014AL) #endif #ifndef STATUS_PIPE_BROKEN # define STATUS_PIPE_BROKEN ((NTSTATUS) 0xC000014BL) #endif #ifndef STATUS_REGISTRY_CORRUPT # define STATUS_REGISTRY_CORRUPT ((NTSTATUS) 0xC000014CL) #endif #ifndef STATUS_REGISTRY_IO_FAILED # define STATUS_REGISTRY_IO_FAILED ((NTSTATUS) 0xC000014DL) #endif #ifndef STATUS_NO_EVENT_PAIR # define STATUS_NO_EVENT_PAIR 
((NTSTATUS) 0xC000014EL) #endif #ifndef STATUS_UNRECOGNIZED_VOLUME # define STATUS_UNRECOGNIZED_VOLUME ((NTSTATUS) 0xC000014FL) #endif #ifndef STATUS_SERIAL_NO_DEVICE_INITED # define STATUS_SERIAL_NO_DEVICE_INITED ((NTSTATUS) 0xC0000150L) #endif #ifndef STATUS_NO_SUCH_ALIAS # define STATUS_NO_SUCH_ALIAS ((NTSTATUS) 0xC0000151L) #endif #ifndef STATUS_MEMBER_NOT_IN_ALIAS # define STATUS_MEMBER_NOT_IN_ALIAS ((NTSTATUS) 0xC0000152L) #endif #ifndef STATUS_MEMBER_IN_ALIAS # define STATUS_MEMBER_IN_ALIAS ((NTSTATUS) 0xC0000153L) #endif #ifndef STATUS_ALIAS_EXISTS # define STATUS_ALIAS_EXISTS ((NTSTATUS) 0xC0000154L) #endif #ifndef STATUS_LOGON_NOT_GRANTED # define STATUS_LOGON_NOT_GRANTED ((NTSTATUS) 0xC0000155L) #endif #ifndef STATUS_TOO_MANY_SECRETS # define STATUS_TOO_MANY_SECRETS ((NTSTATUS) 0xC0000156L) #endif #ifndef STATUS_SECRET_TOO_LONG # define STATUS_SECRET_TOO_LONG ((NTSTATUS) 0xC0000157L) #endif #ifndef STATUS_INTERNAL_DB_ERROR # define STATUS_INTERNAL_DB_ERROR ((NTSTATUS) 0xC0000158L) #endif #ifndef STATUS_FULLSCREEN_MODE # define STATUS_FULLSCREEN_MODE ((NTSTATUS) 0xC0000159L) #endif #ifndef STATUS_TOO_MANY_CONTEXT_IDS # define STATUS_TOO_MANY_CONTEXT_IDS ((NTSTATUS) 0xC000015AL) #endif #ifndef STATUS_LOGON_TYPE_NOT_GRANTED # define STATUS_LOGON_TYPE_NOT_GRANTED ((NTSTATUS) 0xC000015BL) #endif #ifndef STATUS_NOT_REGISTRY_FILE # define STATUS_NOT_REGISTRY_FILE ((NTSTATUS) 0xC000015CL) #endif #ifndef STATUS_NT_CROSS_ENCRYPTION_REQUIRED # define STATUS_NT_CROSS_ENCRYPTION_REQUIRED ((NTSTATUS) 0xC000015DL) #endif #ifndef STATUS_DOMAIN_CTRLR_CONFIG_ERROR # define STATUS_DOMAIN_CTRLR_CONFIG_ERROR ((NTSTATUS) 0xC000015EL) #endif #ifndef STATUS_FT_MISSING_MEMBER # define STATUS_FT_MISSING_MEMBER ((NTSTATUS) 0xC000015FL) #endif #ifndef STATUS_ILL_FORMED_SERVICE_ENTRY # define STATUS_ILL_FORMED_SERVICE_ENTRY ((NTSTATUS) 0xC0000160L) #endif #ifndef STATUS_ILLEGAL_CHARACTER # define STATUS_ILLEGAL_CHARACTER ((NTSTATUS) 0xC0000161L) #endif #ifndef STATUS_UNMAPPABLE_CHARACTER # define STATUS_UNMAPPABLE_CHARACTER ((NTSTATUS) 0xC0000162L) #endif #ifndef STATUS_UNDEFINED_CHARACTER # define STATUS_UNDEFINED_CHARACTER ((NTSTATUS) 0xC0000163L) #endif #ifndef STATUS_FLOPPY_VOLUME # define STATUS_FLOPPY_VOLUME ((NTSTATUS) 0xC0000164L) #endif #ifndef STATUS_FLOPPY_ID_MARK_NOT_FOUND # define STATUS_FLOPPY_ID_MARK_NOT_FOUND ((NTSTATUS) 0xC0000165L) #endif #ifndef STATUS_FLOPPY_WRONG_CYLINDER # define STATUS_FLOPPY_WRONG_CYLINDER ((NTSTATUS) 0xC0000166L) #endif #ifndef STATUS_FLOPPY_UNKNOWN_ERROR # define STATUS_FLOPPY_UNKNOWN_ERROR ((NTSTATUS) 0xC0000167L) #endif #ifndef STATUS_FLOPPY_BAD_REGISTERS # define STATUS_FLOPPY_BAD_REGISTERS ((NTSTATUS) 0xC0000168L) #endif #ifndef STATUS_DISK_RECALIBRATE_FAILED # define STATUS_DISK_RECALIBRATE_FAILED ((NTSTATUS) 0xC0000169L) #endif #ifndef STATUS_DISK_OPERATION_FAILED # define STATUS_DISK_OPERATION_FAILED ((NTSTATUS) 0xC000016AL) #endif #ifndef STATUS_DISK_RESET_FAILED # define STATUS_DISK_RESET_FAILED ((NTSTATUS) 0xC000016BL) #endif #ifndef STATUS_SHARED_IRQ_BUSY # define STATUS_SHARED_IRQ_BUSY ((NTSTATUS) 0xC000016CL) #endif #ifndef STATUS_FT_ORPHANING # define STATUS_FT_ORPHANING ((NTSTATUS) 0xC000016DL) #endif #ifndef STATUS_BIOS_FAILED_TO_CONNECT_INTERRUPT # define STATUS_BIOS_FAILED_TO_CONNECT_INTERRUPT ((NTSTATUS) 0xC000016EL) #endif #ifndef STATUS_PARTITION_FAILURE # define STATUS_PARTITION_FAILURE ((NTSTATUS) 0xC0000172L) #endif #ifndef STATUS_INVALID_BLOCK_LENGTH # define STATUS_INVALID_BLOCK_LENGTH ((NTSTATUS) 0xC0000173L) #endif #ifndef 
STATUS_DEVICE_NOT_PARTITIONED # define STATUS_DEVICE_NOT_PARTITIONED ((NTSTATUS) 0xC0000174L) #endif #ifndef STATUS_UNABLE_TO_LOCK_MEDIA # define STATUS_UNABLE_TO_LOCK_MEDIA ((NTSTATUS) 0xC0000175L) #endif #ifndef STATUS_UNABLE_TO_UNLOAD_MEDIA # define STATUS_UNABLE_TO_UNLOAD_MEDIA ((NTSTATUS) 0xC0000176L) #endif #ifndef STATUS_EOM_OVERFLOW # define STATUS_EOM_OVERFLOW ((NTSTATUS) 0xC0000177L) #endif #ifndef STATUS_NO_MEDIA # define STATUS_NO_MEDIA ((NTSTATUS) 0xC0000178L) #endif #ifndef STATUS_NO_SUCH_MEMBER # define STATUS_NO_SUCH_MEMBER ((NTSTATUS) 0xC000017AL) #endif #ifndef STATUS_INVALID_MEMBER # define STATUS_INVALID_MEMBER ((NTSTATUS) 0xC000017BL) #endif #ifndef STATUS_KEY_DELETED # define STATUS_KEY_DELETED ((NTSTATUS) 0xC000017CL) #endif #ifndef STATUS_NO_LOG_SPACE # define STATUS_NO_LOG_SPACE ((NTSTATUS) 0xC000017DL) #endif #ifndef STATUS_TOO_MANY_SIDS # define STATUS_TOO_MANY_SIDS ((NTSTATUS) 0xC000017EL) #endif #ifndef STATUS_LM_CROSS_ENCRYPTION_REQUIRED # define STATUS_LM_CROSS_ENCRYPTION_REQUIRED ((NTSTATUS) 0xC000017FL) #endif #ifndef STATUS_KEY_HAS_CHILDREN # define STATUS_KEY_HAS_CHILDREN ((NTSTATUS) 0xC0000180L) #endif #ifndef STATUS_CHILD_MUST_BE_VOLATILE # define STATUS_CHILD_MUST_BE_VOLATILE ((NTSTATUS) 0xC0000181L) #endif #ifndef STATUS_DEVICE_CONFIGURATION_ERROR # define STATUS_DEVICE_CONFIGURATION_ERROR ((NTSTATUS) 0xC0000182L) #endif #ifndef STATUS_DRIVER_INTERNAL_ERROR # define STATUS_DRIVER_INTERNAL_ERROR ((NTSTATUS) 0xC0000183L) #endif #ifndef STATUS_INVALID_DEVICE_STATE # define STATUS_INVALID_DEVICE_STATE ((NTSTATUS) 0xC0000184L) #endif #ifndef STATUS_IO_DEVICE_ERROR # define STATUS_IO_DEVICE_ERROR ((NTSTATUS) 0xC0000185L) #endif #ifndef STATUS_DEVICE_PROTOCOL_ERROR # define STATUS_DEVICE_PROTOCOL_ERROR ((NTSTATUS) 0xC0000186L) #endif #ifndef STATUS_BACKUP_CONTROLLER # define STATUS_BACKUP_CONTROLLER ((NTSTATUS) 0xC0000187L) #endif #ifndef STATUS_LOG_FILE_FULL # define STATUS_LOG_FILE_FULL ((NTSTATUS) 0xC0000188L) #endif #ifndef STATUS_TOO_LATE # define STATUS_TOO_LATE ((NTSTATUS) 0xC0000189L) #endif #ifndef STATUS_NO_TRUST_LSA_SECRET # define STATUS_NO_TRUST_LSA_SECRET ((NTSTATUS) 0xC000018AL) #endif #ifndef STATUS_NO_TRUST_SAM_ACCOUNT # define STATUS_NO_TRUST_SAM_ACCOUNT ((NTSTATUS) 0xC000018BL) #endif #ifndef STATUS_TRUSTED_DOMAIN_FAILURE # define STATUS_TRUSTED_DOMAIN_FAILURE ((NTSTATUS) 0xC000018CL) #endif #ifndef STATUS_TRUSTED_RELATIONSHIP_FAILURE # define STATUS_TRUSTED_RELATIONSHIP_FAILURE ((NTSTATUS) 0xC000018DL) #endif #ifndef STATUS_EVENTLOG_FILE_CORRUPT # define STATUS_EVENTLOG_FILE_CORRUPT ((NTSTATUS) 0xC000018EL) #endif #ifndef STATUS_EVENTLOG_CANT_START # define STATUS_EVENTLOG_CANT_START ((NTSTATUS) 0xC000018FL) #endif #ifndef STATUS_TRUST_FAILURE # define STATUS_TRUST_FAILURE ((NTSTATUS) 0xC0000190L) #endif #ifndef STATUS_MUTANT_LIMIT_EXCEEDED # define STATUS_MUTANT_LIMIT_EXCEEDED ((NTSTATUS) 0xC0000191L) #endif #ifndef STATUS_NETLOGON_NOT_STARTED # define STATUS_NETLOGON_NOT_STARTED ((NTSTATUS) 0xC0000192L) #endif #ifndef STATUS_ACCOUNT_EXPIRED # define STATUS_ACCOUNT_EXPIRED ((NTSTATUS) 0xC0000193L) #endif #ifndef STATUS_POSSIBLE_DEADLOCK # define STATUS_POSSIBLE_DEADLOCK ((NTSTATUS) 0xC0000194L) #endif #ifndef STATUS_NETWORK_CREDENTIAL_CONFLICT # define STATUS_NETWORK_CREDENTIAL_CONFLICT ((NTSTATUS) 0xC0000195L) #endif #ifndef STATUS_REMOTE_SESSION_LIMIT # define STATUS_REMOTE_SESSION_LIMIT ((NTSTATUS) 0xC0000196L) #endif #ifndef STATUS_EVENTLOG_FILE_CHANGED # define STATUS_EVENTLOG_FILE_CHANGED ((NTSTATUS) 0xC0000197L) #endif #ifndef 
STATUS_NOLOGON_INTERDOMAIN_TRUST_ACCOUNT # define STATUS_NOLOGON_INTERDOMAIN_TRUST_ACCOUNT ((NTSTATUS) 0xC0000198L) #endif #ifndef STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT # define STATUS_NOLOGON_WORKSTATION_TRUST_ACCOUNT ((NTSTATUS) 0xC0000199L) #endif #ifndef STATUS_NOLOGON_SERVER_TRUST_ACCOUNT # define STATUS_NOLOGON_SERVER_TRUST_ACCOUNT ((NTSTATUS) 0xC000019AL) #endif #ifndef STATUS_DOMAIN_TRUST_INCONSISTENT # define STATUS_DOMAIN_TRUST_INCONSISTENT ((NTSTATUS) 0xC000019BL) #endif #ifndef STATUS_FS_DRIVER_REQUIRED # define STATUS_FS_DRIVER_REQUIRED ((NTSTATUS) 0xC000019CL) #endif #ifndef STATUS_IMAGE_ALREADY_LOADED_AS_DLL # define STATUS_IMAGE_ALREADY_LOADED_AS_DLL ((NTSTATUS) 0xC000019DL) #endif #ifndef STATUS_INCOMPATIBLE_WITH_GLOBAL_SHORT_NAME_REGISTRY_SETTING # define STATUS_INCOMPATIBLE_WITH_GLOBAL_SHORT_NAME_REGISTRY_SETTING ((NTSTATUS) 0xC000019EL) #endif #ifndef STATUS_SHORT_NAMES_NOT_ENABLED_ON_VOLUME # define STATUS_SHORT_NAMES_NOT_ENABLED_ON_VOLUME ((NTSTATUS) 0xC000019FL) #endif #ifndef STATUS_SECURITY_STREAM_IS_INCONSISTENT # define STATUS_SECURITY_STREAM_IS_INCONSISTENT ((NTSTATUS) 0xC00001A0L) #endif #ifndef STATUS_INVALID_LOCK_RANGE # define STATUS_INVALID_LOCK_RANGE ((NTSTATUS) 0xC00001A1L) #endif #ifndef STATUS_INVALID_ACE_CONDITION # define STATUS_INVALID_ACE_CONDITION ((NTSTATUS) 0xC00001A2L) #endif #ifndef STATUS_IMAGE_SUBSYSTEM_NOT_PRESENT # define STATUS_IMAGE_SUBSYSTEM_NOT_PRESENT ((NTSTATUS) 0xC00001A3L) #endif #ifndef STATUS_NOTIFICATION_GUID_ALREADY_DEFINED # define STATUS_NOTIFICATION_GUID_ALREADY_DEFINED ((NTSTATUS) 0xC00001A4L) #endif #ifndef STATUS_NETWORK_OPEN_RESTRICTION # define STATUS_NETWORK_OPEN_RESTRICTION ((NTSTATUS) 0xC0000201L) #endif #ifndef STATUS_NO_USER_SESSION_KEY # define STATUS_NO_USER_SESSION_KEY ((NTSTATUS) 0xC0000202L) #endif #ifndef STATUS_USER_SESSION_DELETED # define STATUS_USER_SESSION_DELETED ((NTSTATUS) 0xC0000203L) #endif #ifndef STATUS_RESOURCE_LANG_NOT_FOUND # define STATUS_RESOURCE_LANG_NOT_FOUND ((NTSTATUS) 0xC0000204L) #endif #ifndef STATUS_INSUFF_SERVER_RESOURCES # define STATUS_INSUFF_SERVER_RESOURCES ((NTSTATUS) 0xC0000205L) #endif #ifndef STATUS_INVALID_BUFFER_SIZE # define STATUS_INVALID_BUFFER_SIZE ((NTSTATUS) 0xC0000206L) #endif #ifndef STATUS_INVALID_ADDRESS_COMPONENT # define STATUS_INVALID_ADDRESS_COMPONENT ((NTSTATUS) 0xC0000207L) #endif #ifndef STATUS_INVALID_ADDRESS_WILDCARD # define STATUS_INVALID_ADDRESS_WILDCARD ((NTSTATUS) 0xC0000208L) #endif #ifndef STATUS_TOO_MANY_ADDRESSES # define STATUS_TOO_MANY_ADDRESSES ((NTSTATUS) 0xC0000209L) #endif #ifndef STATUS_ADDRESS_ALREADY_EXISTS # define STATUS_ADDRESS_ALREADY_EXISTS ((NTSTATUS) 0xC000020AL) #endif #ifndef STATUS_ADDRESS_CLOSED # define STATUS_ADDRESS_CLOSED ((NTSTATUS) 0xC000020BL) #endif #ifndef STATUS_CONNECTION_DISCONNECTED # define STATUS_CONNECTION_DISCONNECTED ((NTSTATUS) 0xC000020CL) #endif #ifndef STATUS_CONNECTION_RESET # define STATUS_CONNECTION_RESET ((NTSTATUS) 0xC000020DL) #endif #ifndef STATUS_TOO_MANY_NODES # define STATUS_TOO_MANY_NODES ((NTSTATUS) 0xC000020EL) #endif #ifndef STATUS_TRANSACTION_ABORTED # define STATUS_TRANSACTION_ABORTED ((NTSTATUS) 0xC000020FL) #endif #ifndef STATUS_TRANSACTION_TIMED_OUT # define STATUS_TRANSACTION_TIMED_OUT ((NTSTATUS) 0xC0000210L) #endif #ifndef STATUS_TRANSACTION_NO_RELEASE # define STATUS_TRANSACTION_NO_RELEASE ((NTSTATUS) 0xC0000211L) #endif #ifndef STATUS_TRANSACTION_NO_MATCH # define STATUS_TRANSACTION_NO_MATCH ((NTSTATUS) 0xC0000212L) #endif #ifndef STATUS_TRANSACTION_RESPONDED # define 
STATUS_TRANSACTION_RESPONDED ((NTSTATUS) 0xC0000213L) #endif #ifndef STATUS_TRANSACTION_INVALID_ID # define STATUS_TRANSACTION_INVALID_ID ((NTSTATUS) 0xC0000214L) #endif #ifndef STATUS_TRANSACTION_INVALID_TYPE # define STATUS_TRANSACTION_INVALID_TYPE ((NTSTATUS) 0xC0000215L) #endif #ifndef STATUS_NOT_SERVER_SESSION # define STATUS_NOT_SERVER_SESSION ((NTSTATUS) 0xC0000216L) #endif #ifndef STATUS_NOT_CLIENT_SESSION # define STATUS_NOT_CLIENT_SESSION ((NTSTATUS) 0xC0000217L) #endif #ifndef STATUS_CANNOT_LOAD_REGISTRY_FILE # define STATUS_CANNOT_LOAD_REGISTRY_FILE ((NTSTATUS) 0xC0000218L) #endif #ifndef STATUS_DEBUG_ATTACH_FAILED # define STATUS_DEBUG_ATTACH_FAILED ((NTSTATUS) 0xC0000219L) #endif #ifndef STATUS_SYSTEM_PROCESS_TERMINATED # define STATUS_SYSTEM_PROCESS_TERMINATED ((NTSTATUS) 0xC000021AL) #endif #ifndef STATUS_DATA_NOT_ACCEPTED # define STATUS_DATA_NOT_ACCEPTED ((NTSTATUS) 0xC000021BL) #endif #ifndef STATUS_NO_BROWSER_SERVERS_FOUND # define STATUS_NO_BROWSER_SERVERS_FOUND ((NTSTATUS) 0xC000021CL) #endif #ifndef STATUS_VDM_HARD_ERROR # define STATUS_VDM_HARD_ERROR ((NTSTATUS) 0xC000021DL) #endif #ifndef STATUS_DRIVER_CANCEL_TIMEOUT # define STATUS_DRIVER_CANCEL_TIMEOUT ((NTSTATUS) 0xC000021EL) #endif #ifndef STATUS_REPLY_MESSAGE_MISMATCH # define STATUS_REPLY_MESSAGE_MISMATCH ((NTSTATUS) 0xC000021FL) #endif #ifndef STATUS_MAPPED_ALIGNMENT # define STATUS_MAPPED_ALIGNMENT ((NTSTATUS) 0xC0000220L) #endif #ifndef STATUS_IMAGE_CHECKSUM_MISMATCH # define STATUS_IMAGE_CHECKSUM_MISMATCH ((NTSTATUS) 0xC0000221L) #endif #ifndef STATUS_LOST_WRITEBEHIND_DATA # define STATUS_LOST_WRITEBEHIND_DATA ((NTSTATUS) 0xC0000222L) #endif #ifndef STATUS_CLIENT_SERVER_PARAMETERS_INVALID # define STATUS_CLIENT_SERVER_PARAMETERS_INVALID ((NTSTATUS) 0xC0000223L) #endif #ifndef STATUS_PASSWORD_MUST_CHANGE # define STATUS_PASSWORD_MUST_CHANGE ((NTSTATUS) 0xC0000224L) #endif #ifndef STATUS_NOT_FOUND # define STATUS_NOT_FOUND ((NTSTATUS) 0xC0000225L) #endif #ifndef STATUS_NOT_TINY_STREAM # define STATUS_NOT_TINY_STREAM ((NTSTATUS) 0xC0000226L) #endif #ifndef STATUS_RECOVERY_FAILURE # define STATUS_RECOVERY_FAILURE ((NTSTATUS) 0xC0000227L) #endif #ifndef STATUS_STACK_OVERFLOW_READ # define STATUS_STACK_OVERFLOW_READ ((NTSTATUS) 0xC0000228L) #endif #ifndef STATUS_FAIL_CHECK # define STATUS_FAIL_CHECK ((NTSTATUS) 0xC0000229L) #endif #ifndef STATUS_DUPLICATE_OBJECTID # define STATUS_DUPLICATE_OBJECTID ((NTSTATUS) 0xC000022AL) #endif #ifndef STATUS_OBJECTID_EXISTS # define STATUS_OBJECTID_EXISTS ((NTSTATUS) 0xC000022BL) #endif #ifndef STATUS_CONVERT_TO_LARGE # define STATUS_CONVERT_TO_LARGE ((NTSTATUS) 0xC000022CL) #endif #ifndef STATUS_RETRY # define STATUS_RETRY ((NTSTATUS) 0xC000022DL) #endif #ifndef STATUS_FOUND_OUT_OF_SCOPE # define STATUS_FOUND_OUT_OF_SCOPE ((NTSTATUS) 0xC000022EL) #endif #ifndef STATUS_ALLOCATE_BUCKET # define STATUS_ALLOCATE_BUCKET ((NTSTATUS) 0xC000022FL) #endif #ifndef STATUS_PROPSET_NOT_FOUND # define STATUS_PROPSET_NOT_FOUND ((NTSTATUS) 0xC0000230L) #endif #ifndef STATUS_MARSHALL_OVERFLOW # define STATUS_MARSHALL_OVERFLOW ((NTSTATUS) 0xC0000231L) #endif #ifndef STATUS_INVALID_VARIANT # define STATUS_INVALID_VARIANT ((NTSTATUS) 0xC0000232L) #endif #ifndef STATUS_DOMAIN_CONTROLLER_NOT_FOUND # define STATUS_DOMAIN_CONTROLLER_NOT_FOUND ((NTSTATUS) 0xC0000233L) #endif #ifndef STATUS_ACCOUNT_LOCKED_OUT # define STATUS_ACCOUNT_LOCKED_OUT ((NTSTATUS) 0xC0000234L) #endif #ifndef STATUS_HANDLE_NOT_CLOSABLE # define STATUS_HANDLE_NOT_CLOSABLE ((NTSTATUS) 0xC0000235L) #endif #ifndef 
STATUS_CONNECTION_REFUSED # define STATUS_CONNECTION_REFUSED ((NTSTATUS) 0xC0000236L) #endif #ifndef STATUS_GRACEFUL_DISCONNECT # define STATUS_GRACEFUL_DISCONNECT ((NTSTATUS) 0xC0000237L) #endif #ifndef STATUS_ADDRESS_ALREADY_ASSOCIATED # define STATUS_ADDRESS_ALREADY_ASSOCIATED ((NTSTATUS) 0xC0000238L) #endif #ifndef STATUS_ADDRESS_NOT_ASSOCIATED # define STATUS_ADDRESS_NOT_ASSOCIATED ((NTSTATUS) 0xC0000239L) #endif #ifndef STATUS_CONNECTION_INVALID # define STATUS_CONNECTION_INVALID ((NTSTATUS) 0xC000023AL) #endif #ifndef STATUS_CONNECTION_ACTIVE # define STATUS_CONNECTION_ACTIVE ((NTSTATUS) 0xC000023BL) #endif #ifndef STATUS_NETWORK_UNREACHABLE # define STATUS_NETWORK_UNREACHABLE ((NTSTATUS) 0xC000023CL) #endif #ifndef STATUS_HOST_UNREACHABLE # define STATUS_HOST_UNREACHABLE ((NTSTATUS) 0xC000023DL) #endif #ifndef STATUS_PROTOCOL_UNREACHABLE # define STATUS_PROTOCOL_UNREACHABLE ((NTSTATUS) 0xC000023EL) #endif #ifndef STATUS_PORT_UNREACHABLE # define STATUS_PORT_UNREACHABLE ((NTSTATUS) 0xC000023FL) #endif #ifndef STATUS_REQUEST_ABORTED # define STATUS_REQUEST_ABORTED ((NTSTATUS) 0xC0000240L) #endif #ifndef STATUS_CONNECTION_ABORTED # define STATUS_CONNECTION_ABORTED ((NTSTATUS) 0xC0000241L) #endif #ifndef STATUS_BAD_COMPRESSION_BUFFER # define STATUS_BAD_COMPRESSION_BUFFER ((NTSTATUS) 0xC0000242L) #endif #ifndef STATUS_USER_MAPPED_FILE # define STATUS_USER_MAPPED_FILE ((NTSTATUS) 0xC0000243L) #endif #ifndef STATUS_AUDIT_FAILED # define STATUS_AUDIT_FAILED ((NTSTATUS) 0xC0000244L) #endif #ifndef STATUS_TIMER_RESOLUTION_NOT_SET # define STATUS_TIMER_RESOLUTION_NOT_SET ((NTSTATUS) 0xC0000245L) #endif #ifndef STATUS_CONNECTION_COUNT_LIMIT # define STATUS_CONNECTION_COUNT_LIMIT ((NTSTATUS) 0xC0000246L) #endif #ifndef STATUS_LOGIN_TIME_RESTRICTION # define STATUS_LOGIN_TIME_RESTRICTION ((NTSTATUS) 0xC0000247L) #endif #ifndef STATUS_LOGIN_WKSTA_RESTRICTION # define STATUS_LOGIN_WKSTA_RESTRICTION ((NTSTATUS) 0xC0000248L) #endif #ifndef STATUS_IMAGE_MP_UP_MISMATCH # define STATUS_IMAGE_MP_UP_MISMATCH ((NTSTATUS) 0xC0000249L) #endif #ifndef STATUS_INSUFFICIENT_LOGON_INFO # define STATUS_INSUFFICIENT_LOGON_INFO ((NTSTATUS) 0xC0000250L) #endif #ifndef STATUS_BAD_DLL_ENTRYPOINT # define STATUS_BAD_DLL_ENTRYPOINT ((NTSTATUS) 0xC0000251L) #endif #ifndef STATUS_BAD_SERVICE_ENTRYPOINT # define STATUS_BAD_SERVICE_ENTRYPOINT ((NTSTATUS) 0xC0000252L) #endif #ifndef STATUS_LPC_REPLY_LOST # define STATUS_LPC_REPLY_LOST ((NTSTATUS) 0xC0000253L) #endif #ifndef STATUS_IP_ADDRESS_CONFLICT1 # define STATUS_IP_ADDRESS_CONFLICT1 ((NTSTATUS) 0xC0000254L) #endif #ifndef STATUS_IP_ADDRESS_CONFLICT2 # define STATUS_IP_ADDRESS_CONFLICT2 ((NTSTATUS) 0xC0000255L) #endif #ifndef STATUS_REGISTRY_QUOTA_LIMIT # define STATUS_REGISTRY_QUOTA_LIMIT ((NTSTATUS) 0xC0000256L) #endif #ifndef STATUS_PATH_NOT_COVERED # define STATUS_PATH_NOT_COVERED ((NTSTATUS) 0xC0000257L) #endif #ifndef STATUS_NO_CALLBACK_ACTIVE # define STATUS_NO_CALLBACK_ACTIVE ((NTSTATUS) 0xC0000258L) #endif #ifndef STATUS_LICENSE_QUOTA_EXCEEDED # define STATUS_LICENSE_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000259L) #endif #ifndef STATUS_PWD_TOO_SHORT # define STATUS_PWD_TOO_SHORT ((NTSTATUS) 0xC000025AL) #endif #ifndef STATUS_PWD_TOO_RECENT # define STATUS_PWD_TOO_RECENT ((NTSTATUS) 0xC000025BL) #endif #ifndef STATUS_PWD_HISTORY_CONFLICT # define STATUS_PWD_HISTORY_CONFLICT ((NTSTATUS) 0xC000025CL) #endif #ifndef STATUS_PLUGPLAY_NO_DEVICE # define STATUS_PLUGPLAY_NO_DEVICE ((NTSTATUS) 0xC000025EL) #endif #ifndef STATUS_UNSUPPORTED_COMPRESSION # define 
STATUS_UNSUPPORTED_COMPRESSION ((NTSTATUS) 0xC000025FL) #endif #ifndef STATUS_INVALID_HW_PROFILE # define STATUS_INVALID_HW_PROFILE ((NTSTATUS) 0xC0000260L) #endif #ifndef STATUS_INVALID_PLUGPLAY_DEVICE_PATH # define STATUS_INVALID_PLUGPLAY_DEVICE_PATH ((NTSTATUS) 0xC0000261L) #endif #ifndef STATUS_DRIVER_ORDINAL_NOT_FOUND # define STATUS_DRIVER_ORDINAL_NOT_FOUND ((NTSTATUS) 0xC0000262L) #endif #ifndef STATUS_DRIVER_ENTRYPOINT_NOT_FOUND # define STATUS_DRIVER_ENTRYPOINT_NOT_FOUND ((NTSTATUS) 0xC0000263L) #endif #ifndef STATUS_RESOURCE_NOT_OWNED # define STATUS_RESOURCE_NOT_OWNED ((NTSTATUS) 0xC0000264L) #endif #ifndef STATUS_TOO_MANY_LINKS # define STATUS_TOO_MANY_LINKS ((NTSTATUS) 0xC0000265L) #endif #ifndef STATUS_QUOTA_LIST_INCONSISTENT # define STATUS_QUOTA_LIST_INCONSISTENT ((NTSTATUS) 0xC0000266L) #endif #ifndef STATUS_FILE_IS_OFFLINE # define STATUS_FILE_IS_OFFLINE ((NTSTATUS) 0xC0000267L) #endif #ifndef STATUS_EVALUATION_EXPIRATION # define STATUS_EVALUATION_EXPIRATION ((NTSTATUS) 0xC0000268L) #endif #ifndef STATUS_ILLEGAL_DLL_RELOCATION # define STATUS_ILLEGAL_DLL_RELOCATION ((NTSTATUS) 0xC0000269L) #endif #ifndef STATUS_LICENSE_VIOLATION # define STATUS_LICENSE_VIOLATION ((NTSTATUS) 0xC000026AL) #endif #ifndef STATUS_DLL_INIT_FAILED_LOGOFF # define STATUS_DLL_INIT_FAILED_LOGOFF ((NTSTATUS) 0xC000026BL) #endif #ifndef STATUS_DRIVER_UNABLE_TO_LOAD # define STATUS_DRIVER_UNABLE_TO_LOAD ((NTSTATUS) 0xC000026CL) #endif #ifndef STATUS_DFS_UNAVAILABLE # define STATUS_DFS_UNAVAILABLE ((NTSTATUS) 0xC000026DL) #endif #ifndef STATUS_VOLUME_DISMOUNTED # define STATUS_VOLUME_DISMOUNTED ((NTSTATUS) 0xC000026EL) #endif #ifndef STATUS_WX86_INTERNAL_ERROR # define STATUS_WX86_INTERNAL_ERROR ((NTSTATUS) 0xC000026FL) #endif #ifndef STATUS_WX86_FLOAT_STACK_CHECK # define STATUS_WX86_FLOAT_STACK_CHECK ((NTSTATUS) 0xC0000270L) #endif #ifndef STATUS_VALIDATE_CONTINUE # define STATUS_VALIDATE_CONTINUE ((NTSTATUS) 0xC0000271L) #endif #ifndef STATUS_NO_MATCH # define STATUS_NO_MATCH ((NTSTATUS) 0xC0000272L) #endif #ifndef STATUS_NO_MORE_MATCHES # define STATUS_NO_MORE_MATCHES ((NTSTATUS) 0xC0000273L) #endif #ifndef STATUS_NOT_A_REPARSE_POINT # define STATUS_NOT_A_REPARSE_POINT ((NTSTATUS) 0xC0000275L) #endif #ifndef STATUS_IO_REPARSE_TAG_INVALID # define STATUS_IO_REPARSE_TAG_INVALID ((NTSTATUS) 0xC0000276L) #endif #ifndef STATUS_IO_REPARSE_TAG_MISMATCH # define STATUS_IO_REPARSE_TAG_MISMATCH ((NTSTATUS) 0xC0000277L) #endif #ifndef STATUS_IO_REPARSE_DATA_INVALID # define STATUS_IO_REPARSE_DATA_INVALID ((NTSTATUS) 0xC0000278L) #endif #ifndef STATUS_IO_REPARSE_TAG_NOT_HANDLED # define STATUS_IO_REPARSE_TAG_NOT_HANDLED ((NTSTATUS) 0xC0000279L) #endif #ifndef STATUS_REPARSE_POINT_NOT_RESOLVED # define STATUS_REPARSE_POINT_NOT_RESOLVED ((NTSTATUS) 0xC0000280L) #endif #ifndef STATUS_DIRECTORY_IS_A_REPARSE_POINT # define STATUS_DIRECTORY_IS_A_REPARSE_POINT ((NTSTATUS) 0xC0000281L) #endif #ifndef STATUS_RANGE_LIST_CONFLICT # define STATUS_RANGE_LIST_CONFLICT ((NTSTATUS) 0xC0000282L) #endif #ifndef STATUS_SOURCE_ELEMENT_EMPTY # define STATUS_SOURCE_ELEMENT_EMPTY ((NTSTATUS) 0xC0000283L) #endif #ifndef STATUS_DESTINATION_ELEMENT_FULL # define STATUS_DESTINATION_ELEMENT_FULL ((NTSTATUS) 0xC0000284L) #endif #ifndef STATUS_ILLEGAL_ELEMENT_ADDRESS # define STATUS_ILLEGAL_ELEMENT_ADDRESS ((NTSTATUS) 0xC0000285L) #endif #ifndef STATUS_MAGAZINE_NOT_PRESENT # define STATUS_MAGAZINE_NOT_PRESENT ((NTSTATUS) 0xC0000286L) #endif #ifndef STATUS_REINITIALIZATION_NEEDED # define STATUS_REINITIALIZATION_NEEDED ((NTSTATUS) 
0xC0000287L) #endif #ifndef STATUS_DEVICE_REQUIRES_CLEANING # define STATUS_DEVICE_REQUIRES_CLEANING ((NTSTATUS) 0x80000288L) #endif #ifndef STATUS_DEVICE_DOOR_OPEN # define STATUS_DEVICE_DOOR_OPEN ((NTSTATUS) 0x80000289L) #endif #ifndef STATUS_ENCRYPTION_FAILED # define STATUS_ENCRYPTION_FAILED ((NTSTATUS) 0xC000028AL) #endif #ifndef STATUS_DECRYPTION_FAILED # define STATUS_DECRYPTION_FAILED ((NTSTATUS) 0xC000028BL) #endif #ifndef STATUS_RANGE_NOT_FOUND # define STATUS_RANGE_NOT_FOUND ((NTSTATUS) 0xC000028CL) #endif #ifndef STATUS_NO_RECOVERY_POLICY # define STATUS_NO_RECOVERY_POLICY ((NTSTATUS) 0xC000028DL) #endif #ifndef STATUS_NO_EFS # define STATUS_NO_EFS ((NTSTATUS) 0xC000028EL) #endif #ifndef STATUS_WRONG_EFS # define STATUS_WRONG_EFS ((NTSTATUS) 0xC000028FL) #endif #ifndef STATUS_NO_USER_KEYS # define STATUS_NO_USER_KEYS ((NTSTATUS) 0xC0000290L) #endif #ifndef STATUS_FILE_NOT_ENCRYPTED # define STATUS_FILE_NOT_ENCRYPTED ((NTSTATUS) 0xC0000291L) #endif #ifndef STATUS_NOT_EXPORT_FORMAT # define STATUS_NOT_EXPORT_FORMAT ((NTSTATUS) 0xC0000292L) #endif #ifndef STATUS_FILE_ENCRYPTED # define STATUS_FILE_ENCRYPTED ((NTSTATUS) 0xC0000293L) #endif #ifndef STATUS_WAKE_SYSTEM # define STATUS_WAKE_SYSTEM ((NTSTATUS) 0x40000294L) #endif #ifndef STATUS_WMI_GUID_NOT_FOUND # define STATUS_WMI_GUID_NOT_FOUND ((NTSTATUS) 0xC0000295L) #endif #ifndef STATUS_WMI_INSTANCE_NOT_FOUND # define STATUS_WMI_INSTANCE_NOT_FOUND ((NTSTATUS) 0xC0000296L) #endif #ifndef STATUS_WMI_ITEMID_NOT_FOUND # define STATUS_WMI_ITEMID_NOT_FOUND ((NTSTATUS) 0xC0000297L) #endif #ifndef STATUS_WMI_TRY_AGAIN # define STATUS_WMI_TRY_AGAIN ((NTSTATUS) 0xC0000298L) #endif #ifndef STATUS_SHARED_POLICY # define STATUS_SHARED_POLICY ((NTSTATUS) 0xC0000299L) #endif #ifndef STATUS_POLICY_OBJECT_NOT_FOUND # define STATUS_POLICY_OBJECT_NOT_FOUND ((NTSTATUS) 0xC000029AL) #endif #ifndef STATUS_POLICY_ONLY_IN_DS # define STATUS_POLICY_ONLY_IN_DS ((NTSTATUS) 0xC000029BL) #endif #ifndef STATUS_VOLUME_NOT_UPGRADED # define STATUS_VOLUME_NOT_UPGRADED ((NTSTATUS) 0xC000029CL) #endif #ifndef STATUS_REMOTE_STORAGE_NOT_ACTIVE # define STATUS_REMOTE_STORAGE_NOT_ACTIVE ((NTSTATUS) 0xC000029DL) #endif #ifndef STATUS_REMOTE_STORAGE_MEDIA_ERROR # define STATUS_REMOTE_STORAGE_MEDIA_ERROR ((NTSTATUS) 0xC000029EL) #endif #ifndef STATUS_NO_TRACKING_SERVICE # define STATUS_NO_TRACKING_SERVICE ((NTSTATUS) 0xC000029FL) #endif #ifndef STATUS_SERVER_SID_MISMATCH # define STATUS_SERVER_SID_MISMATCH ((NTSTATUS) 0xC00002A0L) #endif #ifndef STATUS_DS_NO_ATTRIBUTE_OR_VALUE # define STATUS_DS_NO_ATTRIBUTE_OR_VALUE ((NTSTATUS) 0xC00002A1L) #endif #ifndef STATUS_DS_INVALID_ATTRIBUTE_SYNTAX # define STATUS_DS_INVALID_ATTRIBUTE_SYNTAX ((NTSTATUS) 0xC00002A2L) #endif #ifndef STATUS_DS_ATTRIBUTE_TYPE_UNDEFINED # define STATUS_DS_ATTRIBUTE_TYPE_UNDEFINED ((NTSTATUS) 0xC00002A3L) #endif #ifndef STATUS_DS_ATTRIBUTE_OR_VALUE_EXISTS # define STATUS_DS_ATTRIBUTE_OR_VALUE_EXISTS ((NTSTATUS) 0xC00002A4L) #endif #ifndef STATUS_DS_BUSY # define STATUS_DS_BUSY ((NTSTATUS) 0xC00002A5L) #endif #ifndef STATUS_DS_UNAVAILABLE # define STATUS_DS_UNAVAILABLE ((NTSTATUS) 0xC00002A6L) #endif #ifndef STATUS_DS_NO_RIDS_ALLOCATED # define STATUS_DS_NO_RIDS_ALLOCATED ((NTSTATUS) 0xC00002A7L) #endif #ifndef STATUS_DS_NO_MORE_RIDS # define STATUS_DS_NO_MORE_RIDS ((NTSTATUS) 0xC00002A8L) #endif #ifndef STATUS_DS_INCORRECT_ROLE_OWNER # define STATUS_DS_INCORRECT_ROLE_OWNER ((NTSTATUS) 0xC00002A9L) #endif #ifndef STATUS_DS_RIDMGR_INIT_ERROR # define STATUS_DS_RIDMGR_INIT_ERROR ((NTSTATUS) 0xC00002AAL) 
#endif #ifndef STATUS_DS_OBJ_CLASS_VIOLATION # define STATUS_DS_OBJ_CLASS_VIOLATION ((NTSTATUS) 0xC00002ABL) #endif #ifndef STATUS_DS_CANT_ON_NON_LEAF # define STATUS_DS_CANT_ON_NON_LEAF ((NTSTATUS) 0xC00002ACL) #endif #ifndef STATUS_DS_CANT_ON_RDN # define STATUS_DS_CANT_ON_RDN ((NTSTATUS) 0xC00002ADL) #endif #ifndef STATUS_DS_CANT_MOD_OBJ_CLASS # define STATUS_DS_CANT_MOD_OBJ_CLASS ((NTSTATUS) 0xC00002AEL) #endif #ifndef STATUS_DS_CROSS_DOM_MOVE_FAILED # define STATUS_DS_CROSS_DOM_MOVE_FAILED ((NTSTATUS) 0xC00002AFL) #endif #ifndef STATUS_DS_GC_NOT_AVAILABLE # define STATUS_DS_GC_NOT_AVAILABLE ((NTSTATUS) 0xC00002B0L) #endif #ifndef STATUS_DIRECTORY_SERVICE_REQUIRED # define STATUS_DIRECTORY_SERVICE_REQUIRED ((NTSTATUS) 0xC00002B1L) #endif #ifndef STATUS_REPARSE_ATTRIBUTE_CONFLICT # define STATUS_REPARSE_ATTRIBUTE_CONFLICT ((NTSTATUS) 0xC00002B2L) #endif #ifndef STATUS_CANT_ENABLE_DENY_ONLY # define STATUS_CANT_ENABLE_DENY_ONLY ((NTSTATUS) 0xC00002B3L) #endif #ifndef STATUS_FLOAT_MULTIPLE_FAULTS # define STATUS_FLOAT_MULTIPLE_FAULTS ((NTSTATUS) 0xC00002B4L) #endif #ifndef STATUS_FLOAT_MULTIPLE_TRAPS # define STATUS_FLOAT_MULTIPLE_TRAPS ((NTSTATUS) 0xC00002B5L) #endif #ifndef STATUS_DEVICE_REMOVED # define STATUS_DEVICE_REMOVED ((NTSTATUS) 0xC00002B6L) #endif #ifndef STATUS_JOURNAL_DELETE_IN_PROGRESS # define STATUS_JOURNAL_DELETE_IN_PROGRESS ((NTSTATUS) 0xC00002B7L) #endif #ifndef STATUS_JOURNAL_NOT_ACTIVE # define STATUS_JOURNAL_NOT_ACTIVE ((NTSTATUS) 0xC00002B8L) #endif #ifndef STATUS_NOINTERFACE # define STATUS_NOINTERFACE ((NTSTATUS) 0xC00002B9L) #endif #ifndef STATUS_DS_ADMIN_LIMIT_EXCEEDED # define STATUS_DS_ADMIN_LIMIT_EXCEEDED ((NTSTATUS) 0xC00002C1L) #endif #ifndef STATUS_DRIVER_FAILED_SLEEP # define STATUS_DRIVER_FAILED_SLEEP ((NTSTATUS) 0xC00002C2L) #endif #ifndef STATUS_MUTUAL_AUTHENTICATION_FAILED # define STATUS_MUTUAL_AUTHENTICATION_FAILED ((NTSTATUS) 0xC00002C3L) #endif #ifndef STATUS_CORRUPT_SYSTEM_FILE # define STATUS_CORRUPT_SYSTEM_FILE ((NTSTATUS) 0xC00002C4L) #endif #ifndef STATUS_DATATYPE_MISALIGNMENT_ERROR # define STATUS_DATATYPE_MISALIGNMENT_ERROR ((NTSTATUS) 0xC00002C5L) #endif #ifndef STATUS_WMI_READ_ONLY # define STATUS_WMI_READ_ONLY ((NTSTATUS) 0xC00002C6L) #endif #ifndef STATUS_WMI_SET_FAILURE # define STATUS_WMI_SET_FAILURE ((NTSTATUS) 0xC00002C7L) #endif #ifndef STATUS_COMMITMENT_MINIMUM # define STATUS_COMMITMENT_MINIMUM ((NTSTATUS) 0xC00002C8L) #endif #ifndef STATUS_REG_NAT_CONSUMPTION # define STATUS_REG_NAT_CONSUMPTION ((NTSTATUS) 0xC00002C9L) #endif #ifndef STATUS_TRANSPORT_FULL # define STATUS_TRANSPORT_FULL ((NTSTATUS) 0xC00002CAL) #endif #ifndef STATUS_DS_SAM_INIT_FAILURE # define STATUS_DS_SAM_INIT_FAILURE ((NTSTATUS) 0xC00002CBL) #endif #ifndef STATUS_ONLY_IF_CONNECTED # define STATUS_ONLY_IF_CONNECTED ((NTSTATUS) 0xC00002CCL) #endif #ifndef STATUS_DS_SENSITIVE_GROUP_VIOLATION # define STATUS_DS_SENSITIVE_GROUP_VIOLATION ((NTSTATUS) 0xC00002CDL) #endif #ifndef STATUS_PNP_RESTART_ENUMERATION # define STATUS_PNP_RESTART_ENUMERATION ((NTSTATUS) 0xC00002CEL) #endif #ifndef STATUS_JOURNAL_ENTRY_DELETED # define STATUS_JOURNAL_ENTRY_DELETED ((NTSTATUS) 0xC00002CFL) #endif #ifndef STATUS_DS_CANT_MOD_PRIMARYGROUPID # define STATUS_DS_CANT_MOD_PRIMARYGROUPID ((NTSTATUS) 0xC00002D0L) #endif #ifndef STATUS_SYSTEM_IMAGE_BAD_SIGNATURE # define STATUS_SYSTEM_IMAGE_BAD_SIGNATURE ((NTSTATUS) 0xC00002D1L) #endif #ifndef STATUS_PNP_REBOOT_REQUIRED # define STATUS_PNP_REBOOT_REQUIRED ((NTSTATUS) 0xC00002D2L) #endif #ifndef STATUS_POWER_STATE_INVALID # define 
STATUS_POWER_STATE_INVALID ((NTSTATUS) 0xC00002D3L) #endif #ifndef STATUS_DS_INVALID_GROUP_TYPE # define STATUS_DS_INVALID_GROUP_TYPE ((NTSTATUS) 0xC00002D4L) #endif #ifndef STATUS_DS_NO_NEST_GLOBALGROUP_IN_MIXEDDOMAIN # define STATUS_DS_NO_NEST_GLOBALGROUP_IN_MIXEDDOMAIN ((NTSTATUS) 0xC00002D5L) #endif #ifndef STATUS_DS_NO_NEST_LOCALGROUP_IN_MIXEDDOMAIN # define STATUS_DS_NO_NEST_LOCALGROUP_IN_MIXEDDOMAIN ((NTSTATUS) 0xC00002D6L) #endif #ifndef STATUS_DS_GLOBAL_CANT_HAVE_LOCAL_MEMBER # define STATUS_DS_GLOBAL_CANT_HAVE_LOCAL_MEMBER ((NTSTATUS) 0xC00002D7L) #endif #ifndef STATUS_DS_GLOBAL_CANT_HAVE_UNIVERSAL_MEMBER # define STATUS_DS_GLOBAL_CANT_HAVE_UNIVERSAL_MEMBER ((NTSTATUS) 0xC00002D8L) #endif #ifndef STATUS_DS_UNIVERSAL_CANT_HAVE_LOCAL_MEMBER # define STATUS_DS_UNIVERSAL_CANT_HAVE_LOCAL_MEMBER ((NTSTATUS) 0xC00002D9L) #endif #ifndef STATUS_DS_GLOBAL_CANT_HAVE_CROSSDOMAIN_MEMBER # define STATUS_DS_GLOBAL_CANT_HAVE_CROSSDOMAIN_MEMBER ((NTSTATUS) 0xC00002DAL) #endif #ifndef STATUS_DS_LOCAL_CANT_HAVE_CROSSDOMAIN_LOCAL_MEMBER # define STATUS_DS_LOCAL_CANT_HAVE_CROSSDOMAIN_LOCAL_MEMBER ((NTSTATUS) 0xC00002DBL) #endif #ifndef STATUS_DS_HAVE_PRIMARY_MEMBERS # define STATUS_DS_HAVE_PRIMARY_MEMBERS ((NTSTATUS) 0xC00002DCL) #endif #ifndef STATUS_WMI_NOT_SUPPORTED # define STATUS_WMI_NOT_SUPPORTED ((NTSTATUS) 0xC00002DDL) #endif #ifndef STATUS_INSUFFICIENT_POWER # define STATUS_INSUFFICIENT_POWER ((NTSTATUS) 0xC00002DEL) #endif #ifndef STATUS_SAM_NEED_BOOTKEY_PASSWORD # define STATUS_SAM_NEED_BOOTKEY_PASSWORD ((NTSTATUS) 0xC00002DFL) #endif #ifndef STATUS_SAM_NEED_BOOTKEY_FLOPPY # define STATUS_SAM_NEED_BOOTKEY_FLOPPY ((NTSTATUS) 0xC00002E0L) #endif #ifndef STATUS_DS_CANT_START # define STATUS_DS_CANT_START ((NTSTATUS) 0xC00002E1L) #endif #ifndef STATUS_DS_INIT_FAILURE # define STATUS_DS_INIT_FAILURE ((NTSTATUS) 0xC00002E2L) #endif #ifndef STATUS_SAM_INIT_FAILURE # define STATUS_SAM_INIT_FAILURE ((NTSTATUS) 0xC00002E3L) #endif #ifndef STATUS_DS_GC_REQUIRED # define STATUS_DS_GC_REQUIRED ((NTSTATUS) 0xC00002E4L) #endif #ifndef STATUS_DS_LOCAL_MEMBER_OF_LOCAL_ONLY # define STATUS_DS_LOCAL_MEMBER_OF_LOCAL_ONLY ((NTSTATUS) 0xC00002E5L) #endif #ifndef STATUS_DS_NO_FPO_IN_UNIVERSAL_GROUPS # define STATUS_DS_NO_FPO_IN_UNIVERSAL_GROUPS ((NTSTATUS) 0xC00002E6L) #endif #ifndef STATUS_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED # define STATUS_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED ((NTSTATUS) 0xC00002E7L) #endif #ifndef STATUS_MULTIPLE_FAULT_VIOLATION # define STATUS_MULTIPLE_FAULT_VIOLATION ((NTSTATUS) 0xC00002E8L) #endif #ifndef STATUS_CURRENT_DOMAIN_NOT_ALLOWED # define STATUS_CURRENT_DOMAIN_NOT_ALLOWED ((NTSTATUS) 0xC00002E9L) #endif #ifndef STATUS_CANNOT_MAKE # define STATUS_CANNOT_MAKE ((NTSTATUS) 0xC00002EAL) #endif #ifndef STATUS_SYSTEM_SHUTDOWN # define STATUS_SYSTEM_SHUTDOWN ((NTSTATUS) 0xC00002EBL) #endif #ifndef STATUS_DS_INIT_FAILURE_CONSOLE # define STATUS_DS_INIT_FAILURE_CONSOLE ((NTSTATUS) 0xC00002ECL) #endif #ifndef STATUS_DS_SAM_INIT_FAILURE_CONSOLE # define STATUS_DS_SAM_INIT_FAILURE_CONSOLE ((NTSTATUS) 0xC00002EDL) #endif #ifndef STATUS_UNFINISHED_CONTEXT_DELETED # define STATUS_UNFINISHED_CONTEXT_DELETED ((NTSTATUS) 0xC00002EEL) #endif #ifndef STATUS_NO_TGT_REPLY # define STATUS_NO_TGT_REPLY ((NTSTATUS) 0xC00002EFL) #endif #ifndef STATUS_OBJECTID_NOT_FOUND # define STATUS_OBJECTID_NOT_FOUND ((NTSTATUS) 0xC00002F0L) #endif #ifndef STATUS_NO_IP_ADDRESSES # define STATUS_NO_IP_ADDRESSES ((NTSTATUS) 0xC00002F1L) #endif #ifndef STATUS_WRONG_CREDENTIAL_HANDLE # define STATUS_WRONG_CREDENTIAL_HANDLE 
((NTSTATUS) 0xC00002F2L) #endif #ifndef STATUS_CRYPTO_SYSTEM_INVALID # define STATUS_CRYPTO_SYSTEM_INVALID ((NTSTATUS) 0xC00002F3L) #endif #ifndef STATUS_MAX_REFERRALS_EXCEEDED # define STATUS_MAX_REFERRALS_EXCEEDED ((NTSTATUS) 0xC00002F4L) #endif #ifndef STATUS_MUST_BE_KDC # define STATUS_MUST_BE_KDC ((NTSTATUS) 0xC00002F5L) #endif #ifndef STATUS_STRONG_CRYPTO_NOT_SUPPORTED # define STATUS_STRONG_CRYPTO_NOT_SUPPORTED ((NTSTATUS) 0xC00002F6L) #endif #ifndef STATUS_TOO_MANY_PRINCIPALS # define STATUS_TOO_MANY_PRINCIPALS ((NTSTATUS) 0xC00002F7L) #endif #ifndef STATUS_NO_PA_DATA # define STATUS_NO_PA_DATA ((NTSTATUS) 0xC00002F8L) #endif #ifndef STATUS_PKINIT_NAME_MISMATCH # define STATUS_PKINIT_NAME_MISMATCH ((NTSTATUS) 0xC00002F9L) #endif #ifndef STATUS_SMARTCARD_LOGON_REQUIRED # define STATUS_SMARTCARD_LOGON_REQUIRED ((NTSTATUS) 0xC00002FAL) #endif #ifndef STATUS_KDC_INVALID_REQUEST # define STATUS_KDC_INVALID_REQUEST ((NTSTATUS) 0xC00002FBL) #endif #ifndef STATUS_KDC_UNABLE_TO_REFER # define STATUS_KDC_UNABLE_TO_REFER ((NTSTATUS) 0xC00002FCL) #endif #ifndef STATUS_KDC_UNKNOWN_ETYPE # define STATUS_KDC_UNKNOWN_ETYPE ((NTSTATUS) 0xC00002FDL) #endif #ifndef STATUS_SHUTDOWN_IN_PROGRESS # define STATUS_SHUTDOWN_IN_PROGRESS ((NTSTATUS) 0xC00002FEL) #endif #ifndef STATUS_SERVER_SHUTDOWN_IN_PROGRESS # define STATUS_SERVER_SHUTDOWN_IN_PROGRESS ((NTSTATUS) 0xC00002FFL) #endif #ifndef STATUS_NOT_SUPPORTED_ON_SBS # define STATUS_NOT_SUPPORTED_ON_SBS ((NTSTATUS) 0xC0000300L) #endif #ifndef STATUS_WMI_GUID_DISCONNECTED # define STATUS_WMI_GUID_DISCONNECTED ((NTSTATUS) 0xC0000301L) #endif #ifndef STATUS_WMI_ALREADY_DISABLED # define STATUS_WMI_ALREADY_DISABLED ((NTSTATUS) 0xC0000302L) #endif #ifndef STATUS_WMI_ALREADY_ENABLED # define STATUS_WMI_ALREADY_ENABLED ((NTSTATUS) 0xC0000303L) #endif #ifndef STATUS_MFT_TOO_FRAGMENTED # define STATUS_MFT_TOO_FRAGMENTED ((NTSTATUS) 0xC0000304L) #endif #ifndef STATUS_COPY_PROTECTION_FAILURE # define STATUS_COPY_PROTECTION_FAILURE ((NTSTATUS) 0xC0000305L) #endif #ifndef STATUS_CSS_AUTHENTICATION_FAILURE # define STATUS_CSS_AUTHENTICATION_FAILURE ((NTSTATUS) 0xC0000306L) #endif #ifndef STATUS_CSS_KEY_NOT_PRESENT # define STATUS_CSS_KEY_NOT_PRESENT ((NTSTATUS) 0xC0000307L) #endif #ifndef STATUS_CSS_KEY_NOT_ESTABLISHED # define STATUS_CSS_KEY_NOT_ESTABLISHED ((NTSTATUS) 0xC0000308L) #endif #ifndef STATUS_CSS_SCRAMBLED_SECTOR # define STATUS_CSS_SCRAMBLED_SECTOR ((NTSTATUS) 0xC0000309L) #endif #ifndef STATUS_CSS_REGION_MISMATCH # define STATUS_CSS_REGION_MISMATCH ((NTSTATUS) 0xC000030AL) #endif #ifndef STATUS_CSS_RESETS_EXHAUSTED # define STATUS_CSS_RESETS_EXHAUSTED ((NTSTATUS) 0xC000030BL) #endif #ifndef STATUS_PKINIT_FAILURE # define STATUS_PKINIT_FAILURE ((NTSTATUS) 0xC0000320L) #endif #ifndef STATUS_SMARTCARD_SUBSYSTEM_FAILURE # define STATUS_SMARTCARD_SUBSYSTEM_FAILURE ((NTSTATUS) 0xC0000321L) #endif #ifndef STATUS_NO_KERB_KEY # define STATUS_NO_KERB_KEY ((NTSTATUS) 0xC0000322L) #endif #ifndef STATUS_HOST_DOWN # define STATUS_HOST_DOWN ((NTSTATUS) 0xC0000350L) #endif #ifndef STATUS_UNSUPPORTED_PREAUTH # define STATUS_UNSUPPORTED_PREAUTH ((NTSTATUS) 0xC0000351L) #endif #ifndef STATUS_EFS_ALG_BLOB_TOO_BIG # define STATUS_EFS_ALG_BLOB_TOO_BIG ((NTSTATUS) 0xC0000352L) #endif #ifndef STATUS_PORT_NOT_SET # define STATUS_PORT_NOT_SET ((NTSTATUS) 0xC0000353L) #endif #ifndef STATUS_DEBUGGER_INACTIVE # define STATUS_DEBUGGER_INACTIVE ((NTSTATUS) 0xC0000354L) #endif #ifndef STATUS_DS_VERSION_CHECK_FAILURE # define STATUS_DS_VERSION_CHECK_FAILURE ((NTSTATUS) 0xC0000355L) 
#endif #ifndef STATUS_AUDITING_DISABLED # define STATUS_AUDITING_DISABLED ((NTSTATUS) 0xC0000356L) #endif #ifndef STATUS_PRENT4_MACHINE_ACCOUNT # define STATUS_PRENT4_MACHINE_ACCOUNT ((NTSTATUS) 0xC0000357L) #endif #ifndef STATUS_DS_AG_CANT_HAVE_UNIVERSAL_MEMBER # define STATUS_DS_AG_CANT_HAVE_UNIVERSAL_MEMBER ((NTSTATUS) 0xC0000358L) #endif #ifndef STATUS_INVALID_IMAGE_WIN_32 # define STATUS_INVALID_IMAGE_WIN_32 ((NTSTATUS) 0xC0000359L) #endif #ifndef STATUS_INVALID_IMAGE_WIN_64 # define STATUS_INVALID_IMAGE_WIN_64 ((NTSTATUS) 0xC000035AL) #endif #ifndef STATUS_BAD_BINDINGS # define STATUS_BAD_BINDINGS ((NTSTATUS) 0xC000035BL) #endif #ifndef STATUS_NETWORK_SESSION_EXPIRED # define STATUS_NETWORK_SESSION_EXPIRED ((NTSTATUS) 0xC000035CL) #endif #ifndef STATUS_APPHELP_BLOCK # define STATUS_APPHELP_BLOCK ((NTSTATUS) 0xC000035DL) #endif #ifndef STATUS_ALL_SIDS_FILTERED # define STATUS_ALL_SIDS_FILTERED ((NTSTATUS) 0xC000035EL) #endif #ifndef STATUS_NOT_SAFE_MODE_DRIVER # define STATUS_NOT_SAFE_MODE_DRIVER ((NTSTATUS) 0xC000035FL) #endif #ifndef STATUS_ACCESS_DISABLED_BY_POLICY_DEFAULT # define STATUS_ACCESS_DISABLED_BY_POLICY_DEFAULT ((NTSTATUS) 0xC0000361L) #endif #ifndef STATUS_ACCESS_DISABLED_BY_POLICY_PATH # define STATUS_ACCESS_DISABLED_BY_POLICY_PATH ((NTSTATUS) 0xC0000362L) #endif #ifndef STATUS_ACCESS_DISABLED_BY_POLICY_PUBLISHER # define STATUS_ACCESS_DISABLED_BY_POLICY_PUBLISHER ((NTSTATUS) 0xC0000363L) #endif #ifndef STATUS_ACCESS_DISABLED_BY_POLICY_OTHER # define STATUS_ACCESS_DISABLED_BY_POLICY_OTHER ((NTSTATUS) 0xC0000364L) #endif #ifndef STATUS_FAILED_DRIVER_ENTRY # define STATUS_FAILED_DRIVER_ENTRY ((NTSTATUS) 0xC0000365L) #endif #ifndef STATUS_DEVICE_ENUMERATION_ERROR # define STATUS_DEVICE_ENUMERATION_ERROR ((NTSTATUS) 0xC0000366L) #endif #ifndef STATUS_MOUNT_POINT_NOT_RESOLVED # define STATUS_MOUNT_POINT_NOT_RESOLVED ((NTSTATUS) 0xC0000368L) #endif #ifndef STATUS_INVALID_DEVICE_OBJECT_PARAMETER # define STATUS_INVALID_DEVICE_OBJECT_PARAMETER ((NTSTATUS) 0xC0000369L) #endif #ifndef STATUS_MCA_OCCURED # define STATUS_MCA_OCCURED ((NTSTATUS) 0xC000036AL) #endif #ifndef STATUS_DRIVER_BLOCKED_CRITICAL # define STATUS_DRIVER_BLOCKED_CRITICAL ((NTSTATUS) 0xC000036BL) #endif #ifndef STATUS_DRIVER_BLOCKED # define STATUS_DRIVER_BLOCKED ((NTSTATUS) 0xC000036CL) #endif #ifndef STATUS_DRIVER_DATABASE_ERROR # define STATUS_DRIVER_DATABASE_ERROR ((NTSTATUS) 0xC000036DL) #endif #ifndef STATUS_SYSTEM_HIVE_TOO_LARGE # define STATUS_SYSTEM_HIVE_TOO_LARGE ((NTSTATUS) 0xC000036EL) #endif #ifndef STATUS_INVALID_IMPORT_OF_NON_DLL # define STATUS_INVALID_IMPORT_OF_NON_DLL ((NTSTATUS) 0xC000036FL) #endif #ifndef STATUS_DS_SHUTTING_DOWN # define STATUS_DS_SHUTTING_DOWN ((NTSTATUS) 0x40000370L) #endif #ifndef STATUS_NO_SECRETS # define STATUS_NO_SECRETS ((NTSTATUS) 0xC0000371L) #endif #ifndef STATUS_ACCESS_DISABLED_NO_SAFER_UI_BY_POLICY # define STATUS_ACCESS_DISABLED_NO_SAFER_UI_BY_POLICY ((NTSTATUS) 0xC0000372L) #endif #ifndef STATUS_FAILED_STACK_SWITCH # define STATUS_FAILED_STACK_SWITCH ((NTSTATUS) 0xC0000373L) #endif #ifndef STATUS_HEAP_CORRUPTION # define STATUS_HEAP_CORRUPTION ((NTSTATUS) 0xC0000374L) #endif #ifndef STATUS_SMARTCARD_WRONG_PIN # define STATUS_SMARTCARD_WRONG_PIN ((NTSTATUS) 0xC0000380L) #endif #ifndef STATUS_SMARTCARD_CARD_BLOCKED # define STATUS_SMARTCARD_CARD_BLOCKED ((NTSTATUS) 0xC0000381L) #endif #ifndef STATUS_SMARTCARD_CARD_NOT_AUTHENTICATED # define STATUS_SMARTCARD_CARD_NOT_AUTHENTICATED ((NTSTATUS) 0xC0000382L) #endif #ifndef STATUS_SMARTCARD_NO_CARD # define 
STATUS_SMARTCARD_NO_CARD ((NTSTATUS) 0xC0000383L) #endif #ifndef STATUS_SMARTCARD_NO_KEY_CONTAINER # define STATUS_SMARTCARD_NO_KEY_CONTAINER ((NTSTATUS) 0xC0000384L) #endif #ifndef STATUS_SMARTCARD_NO_CERTIFICATE # define STATUS_SMARTCARD_NO_CERTIFICATE ((NTSTATUS) 0xC0000385L) #endif #ifndef STATUS_SMARTCARD_NO_KEYSET # define STATUS_SMARTCARD_NO_KEYSET ((NTSTATUS) 0xC0000386L) #endif #ifndef STATUS_SMARTCARD_IO_ERROR # define STATUS_SMARTCARD_IO_ERROR ((NTSTATUS) 0xC0000387L) #endif #ifndef STATUS_DOWNGRADE_DETECTED # define STATUS_DOWNGRADE_DETECTED ((NTSTATUS) 0xC0000388L) #endif #ifndef STATUS_SMARTCARD_CERT_REVOKED # define STATUS_SMARTCARD_CERT_REVOKED ((NTSTATUS) 0xC0000389L) #endif #ifndef STATUS_ISSUING_CA_UNTRUSTED # define STATUS_ISSUING_CA_UNTRUSTED ((NTSTATUS) 0xC000038AL) #endif #ifndef STATUS_REVOCATION_OFFLINE_C # define STATUS_REVOCATION_OFFLINE_C ((NTSTATUS) 0xC000038BL) #endif #ifndef STATUS_PKINIT_CLIENT_FAILURE # define STATUS_PKINIT_CLIENT_FAILURE ((NTSTATUS) 0xC000038CL) #endif #ifndef STATUS_SMARTCARD_CERT_EXPIRED # define STATUS_SMARTCARD_CERT_EXPIRED ((NTSTATUS) 0xC000038DL) #endif #ifndef STATUS_DRIVER_FAILED_PRIOR_UNLOAD # define STATUS_DRIVER_FAILED_PRIOR_UNLOAD ((NTSTATUS) 0xC000038EL) #endif #ifndef STATUS_SMARTCARD_SILENT_CONTEXT # define STATUS_SMARTCARD_SILENT_CONTEXT ((NTSTATUS) 0xC000038FL) #endif #ifndef STATUS_PER_USER_TRUST_QUOTA_EXCEEDED # define STATUS_PER_USER_TRUST_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000401L) #endif #ifndef STATUS_ALL_USER_TRUST_QUOTA_EXCEEDED # define STATUS_ALL_USER_TRUST_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000402L) #endif #ifndef STATUS_USER_DELETE_TRUST_QUOTA_EXCEEDED # define STATUS_USER_DELETE_TRUST_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000403L) #endif #ifndef STATUS_DS_NAME_NOT_UNIQUE # define STATUS_DS_NAME_NOT_UNIQUE ((NTSTATUS) 0xC0000404L) #endif #ifndef STATUS_DS_DUPLICATE_ID_FOUND # define STATUS_DS_DUPLICATE_ID_FOUND ((NTSTATUS) 0xC0000405L) #endif #ifndef STATUS_DS_GROUP_CONVERSION_ERROR # define STATUS_DS_GROUP_CONVERSION_ERROR ((NTSTATUS) 0xC0000406L) #endif #ifndef STATUS_VOLSNAP_PREPARE_HIBERNATE # define STATUS_VOLSNAP_PREPARE_HIBERNATE ((NTSTATUS) 0xC0000407L) #endif #ifndef STATUS_USER2USER_REQUIRED # define STATUS_USER2USER_REQUIRED ((NTSTATUS) 0xC0000408L) #endif #ifndef STATUS_STACK_BUFFER_OVERRUN # define STATUS_STACK_BUFFER_OVERRUN ((NTSTATUS) 0xC0000409L) #endif #ifndef STATUS_NO_S4U_PROT_SUPPORT # define STATUS_NO_S4U_PROT_SUPPORT ((NTSTATUS) 0xC000040AL) #endif #ifndef STATUS_CROSSREALM_DELEGATION_FAILURE # define STATUS_CROSSREALM_DELEGATION_FAILURE ((NTSTATUS) 0xC000040BL) #endif #ifndef STATUS_REVOCATION_OFFLINE_KDC # define STATUS_REVOCATION_OFFLINE_KDC ((NTSTATUS) 0xC000040CL) #endif #ifndef STATUS_ISSUING_CA_UNTRUSTED_KDC # define STATUS_ISSUING_CA_UNTRUSTED_KDC ((NTSTATUS) 0xC000040DL) #endif #ifndef STATUS_KDC_CERT_EXPIRED # define STATUS_KDC_CERT_EXPIRED ((NTSTATUS) 0xC000040EL) #endif #ifndef STATUS_KDC_CERT_REVOKED # define STATUS_KDC_CERT_REVOKED ((NTSTATUS) 0xC000040FL) #endif #ifndef STATUS_PARAMETER_QUOTA_EXCEEDED # define STATUS_PARAMETER_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000410L) #endif #ifndef STATUS_HIBERNATION_FAILURE # define STATUS_HIBERNATION_FAILURE ((NTSTATUS) 0xC0000411L) #endif #ifndef STATUS_DELAY_LOAD_FAILED # define STATUS_DELAY_LOAD_FAILED ((NTSTATUS) 0xC0000412L) #endif #ifndef STATUS_AUTHENTICATION_FIREWALL_FAILED # define STATUS_AUTHENTICATION_FIREWALL_FAILED ((NTSTATUS) 0xC0000413L) #endif #ifndef STATUS_VDM_DISALLOWED # define STATUS_VDM_DISALLOWED ((NTSTATUS) 0xC0000414L) #endif 
#ifndef STATUS_HUNG_DISPLAY_DRIVER_THREAD # define STATUS_HUNG_DISPLAY_DRIVER_THREAD ((NTSTATUS) 0xC0000415L) #endif #ifndef STATUS_INSUFFICIENT_RESOURCE_FOR_SPECIFIED_SHARED_SECTION_SIZE # define STATUS_INSUFFICIENT_RESOURCE_FOR_SPECIFIED_SHARED_SECTION_SIZE ((NTSTATUS) 0xC0000416L) #endif #ifndef STATUS_INVALID_CRUNTIME_PARAMETER # define STATUS_INVALID_CRUNTIME_PARAMETER ((NTSTATUS) 0xC0000417L) #endif #ifndef STATUS_NTLM_BLOCKED # define STATUS_NTLM_BLOCKED ((NTSTATUS) 0xC0000418L) #endif #ifndef STATUS_DS_SRC_SID_EXISTS_IN_FOREST # define STATUS_DS_SRC_SID_EXISTS_IN_FOREST ((NTSTATUS) 0xC0000419L) #endif #ifndef STATUS_DS_DOMAIN_NAME_EXISTS_IN_FOREST # define STATUS_DS_DOMAIN_NAME_EXISTS_IN_FOREST ((NTSTATUS) 0xC000041AL) #endif #ifndef STATUS_DS_FLAT_NAME_EXISTS_IN_FOREST # define STATUS_DS_FLAT_NAME_EXISTS_IN_FOREST ((NTSTATUS) 0xC000041BL) #endif #ifndef STATUS_INVALID_USER_PRINCIPAL_NAME # define STATUS_INVALID_USER_PRINCIPAL_NAME ((NTSTATUS) 0xC000041CL) #endif #ifndef STATUS_FATAL_USER_CALLBACK_EXCEPTION # define STATUS_FATAL_USER_CALLBACK_EXCEPTION ((NTSTATUS) 0xC000041DL) #endif #ifndef STATUS_ASSERTION_FAILURE # define STATUS_ASSERTION_FAILURE ((NTSTATUS) 0xC0000420L) #endif #ifndef STATUS_VERIFIER_STOP # define STATUS_VERIFIER_STOP ((NTSTATUS) 0xC0000421L) #endif #ifndef STATUS_CALLBACK_POP_STACK # define STATUS_CALLBACK_POP_STACK ((NTSTATUS) 0xC0000423L) #endif #ifndef STATUS_INCOMPATIBLE_DRIVER_BLOCKED # define STATUS_INCOMPATIBLE_DRIVER_BLOCKED ((NTSTATUS) 0xC0000424L) #endif #ifndef STATUS_HIVE_UNLOADED # define STATUS_HIVE_UNLOADED ((NTSTATUS) 0xC0000425L) #endif #ifndef STATUS_COMPRESSION_DISABLED # define STATUS_COMPRESSION_DISABLED ((NTSTATUS) 0xC0000426L) #endif #ifndef STATUS_FILE_SYSTEM_LIMITATION # define STATUS_FILE_SYSTEM_LIMITATION ((NTSTATUS) 0xC0000427L) #endif #ifndef STATUS_INVALID_IMAGE_HASH # define STATUS_INVALID_IMAGE_HASH ((NTSTATUS) 0xC0000428L) #endif #ifndef STATUS_NOT_CAPABLE # define STATUS_NOT_CAPABLE ((NTSTATUS) 0xC0000429L) #endif #ifndef STATUS_REQUEST_OUT_OF_SEQUENCE # define STATUS_REQUEST_OUT_OF_SEQUENCE ((NTSTATUS) 0xC000042AL) #endif #ifndef STATUS_IMPLEMENTATION_LIMIT # define STATUS_IMPLEMENTATION_LIMIT ((NTSTATUS) 0xC000042BL) #endif #ifndef STATUS_ELEVATION_REQUIRED # define STATUS_ELEVATION_REQUIRED ((NTSTATUS) 0xC000042CL) #endif #ifndef STATUS_NO_SECURITY_CONTEXT # define STATUS_NO_SECURITY_CONTEXT ((NTSTATUS) 0xC000042DL) #endif #ifndef STATUS_PKU2U_CERT_FAILURE # define STATUS_PKU2U_CERT_FAILURE ((NTSTATUS) 0xC000042FL) #endif #ifndef STATUS_BEYOND_VDL # define STATUS_BEYOND_VDL ((NTSTATUS) 0xC0000432L) #endif #ifndef STATUS_ENCOUNTERED_WRITE_IN_PROGRESS # define STATUS_ENCOUNTERED_WRITE_IN_PROGRESS ((NTSTATUS) 0xC0000433L) #endif #ifndef STATUS_PTE_CHANGED # define STATUS_PTE_CHANGED ((NTSTATUS) 0xC0000434L) #endif #ifndef STATUS_PURGE_FAILED # define STATUS_PURGE_FAILED ((NTSTATUS) 0xC0000435L) #endif #ifndef STATUS_CRED_REQUIRES_CONFIRMATION # define STATUS_CRED_REQUIRES_CONFIRMATION ((NTSTATUS) 0xC0000440L) #endif #ifndef STATUS_CS_ENCRYPTION_INVALID_SERVER_RESPONSE # define STATUS_CS_ENCRYPTION_INVALID_SERVER_RESPONSE ((NTSTATUS) 0xC0000441L) #endif #ifndef STATUS_CS_ENCRYPTION_UNSUPPORTED_SERVER # define STATUS_CS_ENCRYPTION_UNSUPPORTED_SERVER ((NTSTATUS) 0xC0000442L) #endif #ifndef STATUS_CS_ENCRYPTION_EXISTING_ENCRYPTED_FILE # define STATUS_CS_ENCRYPTION_EXISTING_ENCRYPTED_FILE ((NTSTATUS) 0xC0000443L) #endif #ifndef STATUS_CS_ENCRYPTION_NEW_ENCRYPTED_FILE # define STATUS_CS_ENCRYPTION_NEW_ENCRYPTED_FILE ((NTSTATUS) 
0xC0000444L) #endif #ifndef STATUS_CS_ENCRYPTION_FILE_NOT_CSE # define STATUS_CS_ENCRYPTION_FILE_NOT_CSE ((NTSTATUS) 0xC0000445L) #endif #ifndef STATUS_INVALID_LABEL # define STATUS_INVALID_LABEL ((NTSTATUS) 0xC0000446L) #endif #ifndef STATUS_DRIVER_PROCESS_TERMINATED # define STATUS_DRIVER_PROCESS_TERMINATED ((NTSTATUS) 0xC0000450L) #endif #ifndef STATUS_AMBIGUOUS_SYSTEM_DEVICE # define STATUS_AMBIGUOUS_SYSTEM_DEVICE ((NTSTATUS) 0xC0000451L) #endif #ifndef STATUS_SYSTEM_DEVICE_NOT_FOUND # define STATUS_SYSTEM_DEVICE_NOT_FOUND ((NTSTATUS) 0xC0000452L) #endif #ifndef STATUS_RESTART_BOOT_APPLICATION # define STATUS_RESTART_BOOT_APPLICATION ((NTSTATUS) 0xC0000453L) #endif #ifndef STATUS_INSUFFICIENT_NVRAM_RESOURCES # define STATUS_INSUFFICIENT_NVRAM_RESOURCES ((NTSTATUS) 0xC0000454L) #endif #ifndef STATUS_INVALID_TASK_NAME # define STATUS_INVALID_TASK_NAME ((NTSTATUS) 0xC0000500L) #endif #ifndef STATUS_INVALID_TASK_INDEX # define STATUS_INVALID_TASK_INDEX ((NTSTATUS) 0xC0000501L) #endif #ifndef STATUS_THREAD_ALREADY_IN_TASK # define STATUS_THREAD_ALREADY_IN_TASK ((NTSTATUS) 0xC0000502L) #endif #ifndef STATUS_CALLBACK_BYPASS # define STATUS_CALLBACK_BYPASS ((NTSTATUS) 0xC0000503L) #endif #ifndef STATUS_FAIL_FAST_EXCEPTION # define STATUS_FAIL_FAST_EXCEPTION ((NTSTATUS) 0xC0000602L) #endif #ifndef STATUS_IMAGE_CERT_REVOKED # define STATUS_IMAGE_CERT_REVOKED ((NTSTATUS) 0xC0000603L) #endif #ifndef STATUS_PORT_CLOSED # define STATUS_PORT_CLOSED ((NTSTATUS) 0xC0000700L) #endif #ifndef STATUS_MESSAGE_LOST # define STATUS_MESSAGE_LOST ((NTSTATUS) 0xC0000701L) #endif #ifndef STATUS_INVALID_MESSAGE # define STATUS_INVALID_MESSAGE ((NTSTATUS) 0xC0000702L) #endif #ifndef STATUS_REQUEST_CANCELED # define STATUS_REQUEST_CANCELED ((NTSTATUS) 0xC0000703L) #endif #ifndef STATUS_RECURSIVE_DISPATCH # define STATUS_RECURSIVE_DISPATCH ((NTSTATUS) 0xC0000704L) #endif #ifndef STATUS_LPC_RECEIVE_BUFFER_EXPECTED # define STATUS_LPC_RECEIVE_BUFFER_EXPECTED ((NTSTATUS) 0xC0000705L) #endif #ifndef STATUS_LPC_INVALID_CONNECTION_USAGE # define STATUS_LPC_INVALID_CONNECTION_USAGE ((NTSTATUS) 0xC0000706L) #endif #ifndef STATUS_LPC_REQUESTS_NOT_ALLOWED # define STATUS_LPC_REQUESTS_NOT_ALLOWED ((NTSTATUS) 0xC0000707L) #endif #ifndef STATUS_RESOURCE_IN_USE # define STATUS_RESOURCE_IN_USE ((NTSTATUS) 0xC0000708L) #endif #ifndef STATUS_HARDWARE_MEMORY_ERROR # define STATUS_HARDWARE_MEMORY_ERROR ((NTSTATUS) 0xC0000709L) #endif #ifndef STATUS_THREADPOOL_HANDLE_EXCEPTION # define STATUS_THREADPOOL_HANDLE_EXCEPTION ((NTSTATUS) 0xC000070AL) #endif #ifndef STATUS_THREADPOOL_SET_EVENT_ON_COMPLETION_FAILED # define STATUS_THREADPOOL_SET_EVENT_ON_COMPLETION_FAILED ((NTSTATUS) 0xC000070BL) #endif #ifndef STATUS_THREADPOOL_RELEASE_SEMAPHORE_ON_COMPLETION_FAILED # define STATUS_THREADPOOL_RELEASE_SEMAPHORE_ON_COMPLETION_FAILED ((NTSTATUS) 0xC000070CL) #endif #ifndef STATUS_THREADPOOL_RELEASE_MUTEX_ON_COMPLETION_FAILED # define STATUS_THREADPOOL_RELEASE_MUTEX_ON_COMPLETION_FAILED ((NTSTATUS) 0xC000070DL) #endif #ifndef STATUS_THREADPOOL_FREE_LIBRARY_ON_COMPLETION_FAILED # define STATUS_THREADPOOL_FREE_LIBRARY_ON_COMPLETION_FAILED ((NTSTATUS) 0xC000070EL) #endif #ifndef STATUS_THREADPOOL_RELEASED_DURING_OPERATION # define STATUS_THREADPOOL_RELEASED_DURING_OPERATION ((NTSTATUS) 0xC000070FL) #endif #ifndef STATUS_CALLBACK_RETURNED_WHILE_IMPERSONATING # define STATUS_CALLBACK_RETURNED_WHILE_IMPERSONATING ((NTSTATUS) 0xC0000710L) #endif #ifndef STATUS_APC_RETURNED_WHILE_IMPERSONATING # define STATUS_APC_RETURNED_WHILE_IMPERSONATING ((NTSTATUS) 
0xC0000711L) #endif #ifndef STATUS_PROCESS_IS_PROTECTED # define STATUS_PROCESS_IS_PROTECTED ((NTSTATUS) 0xC0000712L) #endif #ifndef STATUS_MCA_EXCEPTION # define STATUS_MCA_EXCEPTION ((NTSTATUS) 0xC0000713L) #endif #ifndef STATUS_CERTIFICATE_MAPPING_NOT_UNIQUE # define STATUS_CERTIFICATE_MAPPING_NOT_UNIQUE ((NTSTATUS) 0xC0000714L) #endif #ifndef STATUS_SYMLINK_CLASS_DISABLED # define STATUS_SYMLINK_CLASS_DISABLED ((NTSTATUS) 0xC0000715L) #endif #ifndef STATUS_INVALID_IDN_NORMALIZATION # define STATUS_INVALID_IDN_NORMALIZATION ((NTSTATUS) 0xC0000716L) #endif #ifndef STATUS_NO_UNICODE_TRANSLATION # define STATUS_NO_UNICODE_TRANSLATION ((NTSTATUS) 0xC0000717L) #endif #ifndef STATUS_ALREADY_REGISTERED # define STATUS_ALREADY_REGISTERED ((NTSTATUS) 0xC0000718L) #endif #ifndef STATUS_CONTEXT_MISMATCH # define STATUS_CONTEXT_MISMATCH ((NTSTATUS) 0xC0000719L) #endif #ifndef STATUS_PORT_ALREADY_HAS_COMPLETION_LIST # define STATUS_PORT_ALREADY_HAS_COMPLETION_LIST ((NTSTATUS) 0xC000071AL) #endif #ifndef STATUS_CALLBACK_RETURNED_THREAD_PRIORITY # define STATUS_CALLBACK_RETURNED_THREAD_PRIORITY ((NTSTATUS) 0xC000071BL) #endif #ifndef STATUS_INVALID_THREAD # define STATUS_INVALID_THREAD ((NTSTATUS) 0xC000071CL) #endif #ifndef STATUS_CALLBACK_RETURNED_TRANSACTION # define STATUS_CALLBACK_RETURNED_TRANSACTION ((NTSTATUS) 0xC000071DL) #endif #ifndef STATUS_CALLBACK_RETURNED_LDR_LOCK # define STATUS_CALLBACK_RETURNED_LDR_LOCK ((NTSTATUS) 0xC000071EL) #endif #ifndef STATUS_CALLBACK_RETURNED_LANG # define STATUS_CALLBACK_RETURNED_LANG ((NTSTATUS) 0xC000071FL) #endif #ifndef STATUS_CALLBACK_RETURNED_PRI_BACK # define STATUS_CALLBACK_RETURNED_PRI_BACK ((NTSTATUS) 0xC0000720L) #endif #ifndef STATUS_CALLBACK_RETURNED_THREAD_AFFINITY # define STATUS_CALLBACK_RETURNED_THREAD_AFFINITY ((NTSTATUS) 0xC0000721L) #endif #ifndef STATUS_DISK_REPAIR_DISABLED # define STATUS_DISK_REPAIR_DISABLED ((NTSTATUS) 0xC0000800L) #endif #ifndef STATUS_DS_DOMAIN_RENAME_IN_PROGRESS # define STATUS_DS_DOMAIN_RENAME_IN_PROGRESS ((NTSTATUS) 0xC0000801L) #endif #ifndef STATUS_DISK_QUOTA_EXCEEDED # define STATUS_DISK_QUOTA_EXCEEDED ((NTSTATUS) 0xC0000802L) #endif #ifndef STATUS_DATA_LOST_REPAIR # define STATUS_DATA_LOST_REPAIR ((NTSTATUS) 0x80000803L) #endif #ifndef STATUS_CONTENT_BLOCKED # define STATUS_CONTENT_BLOCKED ((NTSTATUS) 0xC0000804L) #endif #ifndef STATUS_BAD_CLUSTERS # define STATUS_BAD_CLUSTERS ((NTSTATUS) 0xC0000805L) #endif #ifndef STATUS_VOLUME_DIRTY # define STATUS_VOLUME_DIRTY ((NTSTATUS) 0xC0000806L) #endif #ifndef STATUS_FILE_CHECKED_OUT # define STATUS_FILE_CHECKED_OUT ((NTSTATUS) 0xC0000901L) #endif #ifndef STATUS_CHECKOUT_REQUIRED # define STATUS_CHECKOUT_REQUIRED ((NTSTATUS) 0xC0000902L) #endif #ifndef STATUS_BAD_FILE_TYPE # define STATUS_BAD_FILE_TYPE ((NTSTATUS) 0xC0000903L) #endif #ifndef STATUS_FILE_TOO_LARGE # define STATUS_FILE_TOO_LARGE ((NTSTATUS) 0xC0000904L) #endif #ifndef STATUS_FORMS_AUTH_REQUIRED # define STATUS_FORMS_AUTH_REQUIRED ((NTSTATUS) 0xC0000905L) #endif #ifndef STATUS_VIRUS_INFECTED # define STATUS_VIRUS_INFECTED ((NTSTATUS) 0xC0000906L) #endif #ifndef STATUS_VIRUS_DELETED # define STATUS_VIRUS_DELETED ((NTSTATUS) 0xC0000907L) #endif #ifndef STATUS_BAD_MCFG_TABLE # define STATUS_BAD_MCFG_TABLE ((NTSTATUS) 0xC0000908L) #endif #ifndef STATUS_CANNOT_BREAK_OPLOCK # define STATUS_CANNOT_BREAK_OPLOCK ((NTSTATUS) 0xC0000909L) #endif #ifndef STATUS_WOW_ASSERTION # define STATUS_WOW_ASSERTION ((NTSTATUS) 0xC0009898L) #endif #ifndef STATUS_INVALID_SIGNATURE # define STATUS_INVALID_SIGNATURE ((NTSTATUS) 
0xC000A000L) #endif #ifndef STATUS_HMAC_NOT_SUPPORTED # define STATUS_HMAC_NOT_SUPPORTED ((NTSTATUS) 0xC000A001L) #endif #ifndef STATUS_AUTH_TAG_MISMATCH # define STATUS_AUTH_TAG_MISMATCH ((NTSTATUS) 0xC000A002L) #endif #ifndef STATUS_IPSEC_QUEUE_OVERFLOW # define STATUS_IPSEC_QUEUE_OVERFLOW ((NTSTATUS) 0xC000A010L) #endif #ifndef STATUS_ND_QUEUE_OVERFLOW # define STATUS_ND_QUEUE_OVERFLOW ((NTSTATUS) 0xC000A011L) #endif #ifndef STATUS_HOPLIMIT_EXCEEDED # define STATUS_HOPLIMIT_EXCEEDED ((NTSTATUS) 0xC000A012L) #endif #ifndef STATUS_PROTOCOL_NOT_SUPPORTED # define STATUS_PROTOCOL_NOT_SUPPORTED ((NTSTATUS) 0xC000A013L) #endif #ifndef STATUS_FASTPATH_REJECTED # define STATUS_FASTPATH_REJECTED ((NTSTATUS) 0xC000A014L) #endif #ifndef STATUS_LOST_WRITEBEHIND_DATA_NETWORK_DISCONNECTED # define STATUS_LOST_WRITEBEHIND_DATA_NETWORK_DISCONNECTED ((NTSTATUS) 0xC000A080L) #endif #ifndef STATUS_LOST_WRITEBEHIND_DATA_NETWORK_SERVER_ERROR # define STATUS_LOST_WRITEBEHIND_DATA_NETWORK_SERVER_ERROR ((NTSTATUS) 0xC000A081L) #endif #ifndef STATUS_LOST_WRITEBEHIND_DATA_LOCAL_DISK_ERROR # define STATUS_LOST_WRITEBEHIND_DATA_LOCAL_DISK_ERROR ((NTSTATUS) 0xC000A082L) #endif #ifndef STATUS_XML_PARSE_ERROR # define STATUS_XML_PARSE_ERROR ((NTSTATUS) 0xC000A083L) #endif #ifndef STATUS_XMLDSIG_ERROR # define STATUS_XMLDSIG_ERROR ((NTSTATUS) 0xC000A084L) #endif #ifndef STATUS_WRONG_COMPARTMENT # define STATUS_WRONG_COMPARTMENT ((NTSTATUS) 0xC000A085L) #endif #ifndef STATUS_AUTHIP_FAILURE # define STATUS_AUTHIP_FAILURE ((NTSTATUS) 0xC000A086L) #endif #ifndef STATUS_DS_OID_MAPPED_GROUP_CANT_HAVE_MEMBERS # define STATUS_DS_OID_MAPPED_GROUP_CANT_HAVE_MEMBERS ((NTSTATUS) 0xC000A087L) #endif #ifndef STATUS_DS_OID_NOT_FOUND # define STATUS_DS_OID_NOT_FOUND ((NTSTATUS) 0xC000A088L) #endif #ifndef STATUS_HASH_NOT_SUPPORTED # define STATUS_HASH_NOT_SUPPORTED ((NTSTATUS) 0xC000A100L) #endif #ifndef STATUS_HASH_NOT_PRESENT # define STATUS_HASH_NOT_PRESENT ((NTSTATUS) 0xC000A101L) #endif /* This is not the NTSTATUS_FROM_WIN32 that the DDK provides, because the DDK * got it wrong! */ #ifdef NTSTATUS_FROM_WIN32 # undef NTSTATUS_FROM_WIN32 #endif #define NTSTATUS_FROM_WIN32(error) ((NTSTATUS) (error) <= 0 ? \ ((NTSTATUS) (error)) : ((NTSTATUS) (((error) & 0x0000FFFF) | \ (FACILITY_NTWIN32 << 16) | ERROR_SEVERITY_WARNING))) #ifndef JOB_OBJECT_LIMIT_PROCESS_MEMORY # define JOB_OBJECT_LIMIT_PROCESS_MEMORY 0x00000100 #endif #ifndef JOB_OBJECT_LIMIT_JOB_MEMORY # define JOB_OBJECT_LIMIT_JOB_MEMORY 0x00000200 #endif #ifndef JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION # define JOB_OBJECT_LIMIT_DIE_ON_UNHANDLED_EXCEPTION 0x00000400 #endif #ifndef JOB_OBJECT_LIMIT_BREAKAWAY_OK # define JOB_OBJECT_LIMIT_BREAKAWAY_OK 0x00000800 #endif #ifndef JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK # define JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK 0x00001000 #endif #ifndef JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE # define JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE 0x00002000 #endif #ifndef SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE # define SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE 0x00000002 #endif /* from winternl.h */ #if !defined(__UNICODE_STRING_DEFINED) && defined(__MINGW32__) #define __UNICODE_STRING_DEFINED #endif typedef struct _UNICODE_STRING { USHORT Length; USHORT MaximumLength; PWSTR Buffer; } UNICODE_STRING, *PUNICODE_STRING; typedef const UNICODE_STRING *PCUNICODE_STRING; /* from ntifs.h */ #ifndef DEVICE_TYPE # define DEVICE_TYPE DWORD #endif /* MinGW already has a definition for REPARSE_DATA_BUFFER, but mingw-w64 does * not. 
*/ #if defined(_MSC_VER) || defined(__MINGW64_VERSION_MAJOR) typedef struct _REPARSE_DATA_BUFFER { ULONG ReparseTag; USHORT ReparseDataLength; USHORT Reserved; union { struct { USHORT SubstituteNameOffset; USHORT SubstituteNameLength; USHORT PrintNameOffset; USHORT PrintNameLength; ULONG Flags; WCHAR PathBuffer[1]; } SymbolicLinkReparseBuffer; struct { USHORT SubstituteNameOffset; USHORT SubstituteNameLength; USHORT PrintNameOffset; USHORT PrintNameLength; WCHAR PathBuffer[1]; } MountPointReparseBuffer; struct { UCHAR DataBuffer[1]; } GenericReparseBuffer; struct { ULONG StringCount; WCHAR StringList[1]; } AppExecLinkReparseBuffer; }; } REPARSE_DATA_BUFFER, *PREPARSE_DATA_BUFFER; #endif typedef struct _IO_STATUS_BLOCK { union { NTSTATUS Status; PVOID Pointer; }; ULONG_PTR Information; } IO_STATUS_BLOCK, *PIO_STATUS_BLOCK; typedef enum _FILE_INFORMATION_CLASS { FileDirectoryInformation = 1, FileFullDirectoryInformation, FileBothDirectoryInformation, FileBasicInformation, FileStandardInformation, FileInternalInformation, FileEaInformation, FileAccessInformation, FileNameInformation, FileRenameInformation, FileLinkInformation, FileNamesInformation, FileDispositionInformation, FilePositionInformation, FileFullEaInformation, FileModeInformation, FileAlignmentInformation, FileAllInformation, FileAllocationInformation, FileEndOfFileInformation, FileAlternateNameInformation, FileStreamInformation, FilePipeInformation, FilePipeLocalInformation, FilePipeRemoteInformation, FileMailslotQueryInformation, FileMailslotSetInformation, FileCompressionInformation, FileObjectIdInformation, FileCompletionInformation, FileMoveClusterInformation, FileQuotaInformation, FileReparsePointInformation, FileNetworkOpenInformation, FileAttributeTagInformation, FileTrackingInformation, FileIdBothDirectoryInformation, FileIdFullDirectoryInformation, FileValidDataLengthInformation, FileShortNameInformation, FileIoCompletionNotificationInformation, FileIoStatusBlockRangeInformation, FileIoPriorityHintInformation, FileSfioReserveInformation, FileSfioVolumeInformation, FileHardLinkInformation, FileProcessIdsUsingFileInformation, FileNormalizedNameInformation, FileNetworkPhysicalNameInformation, FileIdGlobalTxDirectoryInformation, FileIsRemoteDeviceInformation, FileAttributeCacheInformation, FileNumaNodeInformation, FileStandardLinkInformation, FileRemoteProtocolInformation, FileMaximumInformation } FILE_INFORMATION_CLASS, *PFILE_INFORMATION_CLASS; typedef struct _FILE_DIRECTORY_INFORMATION { ULONG NextEntryOffset; ULONG FileIndex; LARGE_INTEGER CreationTime; LARGE_INTEGER LastAccessTime; LARGE_INTEGER LastWriteTime; LARGE_INTEGER ChangeTime; LARGE_INTEGER EndOfFile; LARGE_INTEGER AllocationSize; ULONG FileAttributes; ULONG FileNameLength; WCHAR FileName[1]; } FILE_DIRECTORY_INFORMATION, *PFILE_DIRECTORY_INFORMATION; typedef struct _FILE_BOTH_DIR_INFORMATION { ULONG NextEntryOffset; ULONG FileIndex; LARGE_INTEGER CreationTime; LARGE_INTEGER LastAccessTime; LARGE_INTEGER LastWriteTime; LARGE_INTEGER ChangeTime; LARGE_INTEGER EndOfFile; LARGE_INTEGER AllocationSize; ULONG FileAttributes; ULONG FileNameLength; ULONG EaSize; CCHAR ShortNameLength; WCHAR ShortName[12]; WCHAR FileName[1]; } FILE_BOTH_DIR_INFORMATION, *PFILE_BOTH_DIR_INFORMATION; typedef struct _FILE_BASIC_INFORMATION { LARGE_INTEGER CreationTime; LARGE_INTEGER LastAccessTime; LARGE_INTEGER LastWriteTime; LARGE_INTEGER ChangeTime; DWORD FileAttributes; } FILE_BASIC_INFORMATION, *PFILE_BASIC_INFORMATION; typedef struct _FILE_STANDARD_INFORMATION { LARGE_INTEGER 
AllocationSize; LARGE_INTEGER EndOfFile; ULONG NumberOfLinks; BOOLEAN DeletePending; BOOLEAN Directory; } FILE_STANDARD_INFORMATION, *PFILE_STANDARD_INFORMATION; typedef struct _FILE_INTERNAL_INFORMATION { LARGE_INTEGER IndexNumber; } FILE_INTERNAL_INFORMATION, *PFILE_INTERNAL_INFORMATION; typedef struct _FILE_EA_INFORMATION { ULONG EaSize; } FILE_EA_INFORMATION, *PFILE_EA_INFORMATION; typedef struct _FILE_ACCESS_INFORMATION { ACCESS_MASK AccessFlags; } FILE_ACCESS_INFORMATION, *PFILE_ACCESS_INFORMATION; typedef struct _FILE_POSITION_INFORMATION { LARGE_INTEGER CurrentByteOffset; } FILE_POSITION_INFORMATION, *PFILE_POSITION_INFORMATION; typedef struct _FILE_MODE_INFORMATION { ULONG Mode; } FILE_MODE_INFORMATION, *PFILE_MODE_INFORMATION; typedef struct _FILE_ALIGNMENT_INFORMATION { ULONG AlignmentRequirement; } FILE_ALIGNMENT_INFORMATION, *PFILE_ALIGNMENT_INFORMATION; typedef struct _FILE_NAME_INFORMATION { ULONG FileNameLength; WCHAR FileName[1]; } FILE_NAME_INFORMATION, *PFILE_NAME_INFORMATION; typedef struct _FILE_END_OF_FILE_INFORMATION { LARGE_INTEGER EndOfFile; } FILE_END_OF_FILE_INFORMATION, *PFILE_END_OF_FILE_INFORMATION; typedef struct _FILE_ALL_INFORMATION { FILE_BASIC_INFORMATION BasicInformation; FILE_STANDARD_INFORMATION StandardInformation; FILE_INTERNAL_INFORMATION InternalInformation; FILE_EA_INFORMATION EaInformation; FILE_ACCESS_INFORMATION AccessInformation; FILE_POSITION_INFORMATION PositionInformation; FILE_MODE_INFORMATION ModeInformation; FILE_ALIGNMENT_INFORMATION AlignmentInformation; FILE_NAME_INFORMATION NameInformation; } FILE_ALL_INFORMATION, *PFILE_ALL_INFORMATION; typedef struct _FILE_DISPOSITION_INFORMATION { BOOLEAN DeleteFile; } FILE_DISPOSITION_INFORMATION, *PFILE_DISPOSITION_INFORMATION; typedef struct _FILE_PIPE_LOCAL_INFORMATION { ULONG NamedPipeType; ULONG NamedPipeConfiguration; ULONG MaximumInstances; ULONG CurrentInstances; ULONG InboundQuota; ULONG ReadDataAvailable; ULONG OutboundQuota; ULONG WriteQuotaAvailable; ULONG NamedPipeState; ULONG NamedPipeEnd; } FILE_PIPE_LOCAL_INFORMATION, *PFILE_PIPE_LOCAL_INFORMATION; #define FILE_SYNCHRONOUS_IO_ALERT 0x00000010 #define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020 typedef enum _FS_INFORMATION_CLASS { FileFsVolumeInformation = 1, FileFsLabelInformation = 2, FileFsSizeInformation = 3, FileFsDeviceInformation = 4, FileFsAttributeInformation = 5, FileFsControlInformation = 6, FileFsFullSizeInformation = 7, FileFsObjectIdInformation = 8, FileFsDriverPathInformation = 9, FileFsVolumeFlagsInformation = 10, FileFsSectorSizeInformation = 11 } FS_INFORMATION_CLASS, *PFS_INFORMATION_CLASS; typedef struct _FILE_FS_VOLUME_INFORMATION { LARGE_INTEGER VolumeCreationTime; ULONG VolumeSerialNumber; ULONG VolumeLabelLength; BOOLEAN SupportsObjects; WCHAR VolumeLabel[1]; } FILE_FS_VOLUME_INFORMATION, *PFILE_FS_VOLUME_INFORMATION; typedef struct _FILE_FS_LABEL_INFORMATION { ULONG VolumeLabelLength; WCHAR VolumeLabel[1]; } FILE_FS_LABEL_INFORMATION, *PFILE_FS_LABEL_INFORMATION; typedef struct _FILE_FS_SIZE_INFORMATION { LARGE_INTEGER TotalAllocationUnits; LARGE_INTEGER AvailableAllocationUnits; ULONG SectorsPerAllocationUnit; ULONG BytesPerSector; } FILE_FS_SIZE_INFORMATION, *PFILE_FS_SIZE_INFORMATION; typedef struct _FILE_FS_DEVICE_INFORMATION { DEVICE_TYPE DeviceType; ULONG Characteristics; } FILE_FS_DEVICE_INFORMATION, *PFILE_FS_DEVICE_INFORMATION; typedef struct _FILE_FS_ATTRIBUTE_INFORMATION { ULONG FileSystemAttributes; LONG MaximumComponentNameLength; ULONG FileSystemNameLength; WCHAR FileSystemName[1]; } 
FILE_FS_ATTRIBUTE_INFORMATION, *PFILE_FS_ATTRIBUTE_INFORMATION; typedef struct _FILE_FS_CONTROL_INFORMATION { LARGE_INTEGER FreeSpaceStartFiltering; LARGE_INTEGER FreeSpaceThreshold; LARGE_INTEGER FreeSpaceStopFiltering; LARGE_INTEGER DefaultQuotaThreshold; LARGE_INTEGER DefaultQuotaLimit; ULONG FileSystemControlFlags; } FILE_FS_CONTROL_INFORMATION, *PFILE_FS_CONTROL_INFORMATION; typedef struct _FILE_FS_FULL_SIZE_INFORMATION { LARGE_INTEGER TotalAllocationUnits; LARGE_INTEGER CallerAvailableAllocationUnits; LARGE_INTEGER ActualAvailableAllocationUnits; ULONG SectorsPerAllocationUnit; ULONG BytesPerSector; } FILE_FS_FULL_SIZE_INFORMATION, *PFILE_FS_FULL_SIZE_INFORMATION; typedef struct _FILE_FS_OBJECTID_INFORMATION { UCHAR ObjectId[16]; UCHAR ExtendedInfo[48]; } FILE_FS_OBJECTID_INFORMATION, *PFILE_FS_OBJECTID_INFORMATION; typedef struct _FILE_FS_DRIVER_PATH_INFORMATION { BOOLEAN DriverInPath; ULONG DriverNameLength; WCHAR DriverName[1]; } FILE_FS_DRIVER_PATH_INFORMATION, *PFILE_FS_DRIVER_PATH_INFORMATION; typedef struct _FILE_FS_VOLUME_FLAGS_INFORMATION { ULONG Flags; } FILE_FS_VOLUME_FLAGS_INFORMATION, *PFILE_FS_VOLUME_FLAGS_INFORMATION; typedef struct _FILE_FS_SECTOR_SIZE_INFORMATION { ULONG LogicalBytesPerSector; ULONG PhysicalBytesPerSectorForAtomicity; ULONG PhysicalBytesPerSectorForPerformance; ULONG FileSystemEffectivePhysicalBytesPerSectorForAtomicity; ULONG Flags; ULONG ByteOffsetForSectorAlignment; ULONG ByteOffsetForPartitionAlignment; } FILE_FS_SECTOR_SIZE_INFORMATION, *PFILE_FS_SECTOR_SIZE_INFORMATION; typedef struct _SYSTEM_PROCESSOR_PERFORMANCE_INFORMATION { LARGE_INTEGER IdleTime; LARGE_INTEGER KernelTime; LARGE_INTEGER UserTime; LARGE_INTEGER DpcTime; LARGE_INTEGER InterruptTime; ULONG InterruptCount; } SYSTEM_PROCESSOR_PERFORMANCE_INFORMATION, *PSYSTEM_PROCESSOR_PERFORMANCE_INFORMATION; #ifndef SystemProcessorPerformanceInformation # define SystemProcessorPerformanceInformation 8 #endif #ifndef ProcessConsoleHostProcess # define ProcessConsoleHostProcess 49 #endif #ifndef FILE_DEVICE_FILE_SYSTEM # define FILE_DEVICE_FILE_SYSTEM 0x00000009 #endif #ifndef FILE_DEVICE_NETWORK # define FILE_DEVICE_NETWORK 0x00000012 #endif #ifndef METHOD_BUFFERED # define METHOD_BUFFERED 0 #endif #ifndef METHOD_IN_DIRECT # define METHOD_IN_DIRECT 1 #endif #ifndef METHOD_OUT_DIRECT # define METHOD_OUT_DIRECT 2 #endif #ifndef METHOD_NEITHER #define METHOD_NEITHER 3 #endif #ifndef METHOD_DIRECT_TO_HARDWARE # define METHOD_DIRECT_TO_HARDWARE METHOD_IN_DIRECT #endif #ifndef METHOD_DIRECT_FROM_HARDWARE # define METHOD_DIRECT_FROM_HARDWARE METHOD_OUT_DIRECT #endif #ifndef FILE_ANY_ACCESS # define FILE_ANY_ACCESS 0 #endif #ifndef FILE_SPECIAL_ACCESS # define FILE_SPECIAL_ACCESS (FILE_ANY_ACCESS) #endif #ifndef FILE_READ_ACCESS # define FILE_READ_ACCESS 0x0001 #endif #ifndef FILE_WRITE_ACCESS # define FILE_WRITE_ACCESS 0x0002 #endif #ifndef CTL_CODE # define CTL_CODE(device_type, function, method, access) \ (((device_type) << 16) | ((access) << 14) | ((function) << 2) | (method)) #endif #ifndef FSCTL_SET_REPARSE_POINT # define FSCTL_SET_REPARSE_POINT CTL_CODE(FILE_DEVICE_FILE_SYSTEM, \ 41, \ METHOD_BUFFERED, \ FILE_SPECIAL_ACCESS) #endif #ifndef FSCTL_GET_REPARSE_POINT # define FSCTL_GET_REPARSE_POINT CTL_CODE(FILE_DEVICE_FILE_SYSTEM, \ 42, \ METHOD_BUFFERED, \ FILE_ANY_ACCESS) #endif #ifndef FSCTL_DELETE_REPARSE_POINT # define FSCTL_DELETE_REPARSE_POINT CTL_CODE(FILE_DEVICE_FILE_SYSTEM, \ 43, \ METHOD_BUFFERED, \ FILE_SPECIAL_ACCESS) #endif #ifndef IO_REPARSE_TAG_SYMLINK # define 
IO_REPARSE_TAG_SYMLINK (0xA000000CL) #endif #ifndef IO_REPARSE_TAG_APPEXECLINK # define IO_REPARSE_TAG_APPEXECLINK (0x8000001BL) #endif typedef VOID (NTAPI *PIO_APC_ROUTINE) (PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, ULONG Reserved); typedef NTSTATUS (NTAPI *sRtlGetVersion) (PRTL_OSVERSIONINFOW lpVersionInformation); typedef ULONG (NTAPI *sRtlNtStatusToDosError) (NTSTATUS Status); typedef NTSTATUS (NTAPI *sNtDeviceIoControlFile) (HANDLE FileHandle, HANDLE Event, PIO_APC_ROUTINE ApcRoutine, PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, ULONG IoControlCode, PVOID InputBuffer, ULONG InputBufferLength, PVOID OutputBuffer, ULONG OutputBufferLength); typedef NTSTATUS (NTAPI *sNtQueryInformationFile) (HANDLE FileHandle, PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation, ULONG Length, FILE_INFORMATION_CLASS FileInformationClass); typedef NTSTATUS (NTAPI *sNtSetInformationFile) (HANDLE FileHandle, PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation, ULONG Length, FILE_INFORMATION_CLASS FileInformationClass); typedef NTSTATUS (NTAPI *sNtQueryVolumeInformationFile) (HANDLE FileHandle, PIO_STATUS_BLOCK IoStatusBlock, PVOID FsInformation, ULONG Length, FS_INFORMATION_CLASS FsInformationClass); typedef NTSTATUS (NTAPI *sNtQuerySystemInformation) (UINT SystemInformationClass, PVOID SystemInformation, ULONG SystemInformationLength, PULONG ReturnLength); typedef NTSTATUS (NTAPI *sNtQueryDirectoryFile) (HANDLE FileHandle, HANDLE Event, PIO_APC_ROUTINE ApcRoutine, PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation, ULONG Length, FILE_INFORMATION_CLASS FileInformationClass, BOOLEAN ReturnSingleEntry, PUNICODE_STRING FileName, BOOLEAN RestartScan ); typedef NTSTATUS (NTAPI *sNtQueryInformationProcess) (HANDLE ProcessHandle, UINT ProcessInformationClass, PVOID ProcessInformation, ULONG Length, PULONG ReturnLength); /* * Kernel32 headers */ #ifndef FILE_SKIP_COMPLETION_PORT_ON_SUCCESS # define FILE_SKIP_COMPLETION_PORT_ON_SUCCESS 0x1 #endif #ifndef FILE_SKIP_SET_EVENT_ON_HANDLE # define FILE_SKIP_SET_EVENT_ON_HANDLE 0x2 #endif #ifndef SYMBOLIC_LINK_FLAG_DIRECTORY # define SYMBOLIC_LINK_FLAG_DIRECTORY 0x1 #endif #if defined(__MINGW32__) && !defined(__MINGW64_VERSION_MAJOR) typedef struct _OVERLAPPED_ENTRY { ULONG_PTR lpCompletionKey; LPOVERLAPPED lpOverlapped; ULONG_PTR Internal; DWORD dwNumberOfBytesTransferred; } OVERLAPPED_ENTRY, *LPOVERLAPPED_ENTRY; #endif /* from wincon.h */ #ifndef ENABLE_INSERT_MODE # define ENABLE_INSERT_MODE 0x20 #endif #ifndef ENABLE_QUICK_EDIT_MODE # define ENABLE_QUICK_EDIT_MODE 0x40 #endif #ifndef ENABLE_EXTENDED_FLAGS # define ENABLE_EXTENDED_FLAGS 0x80 #endif /* from winerror.h */ #ifndef ERROR_ELEVATION_REQUIRED # define ERROR_ELEVATION_REQUIRED 740 #endif #ifndef ERROR_SYMLINK_NOT_SUPPORTED # define ERROR_SYMLINK_NOT_SUPPORTED 1464 #endif #ifndef ERROR_MUI_FILE_NOT_FOUND # define ERROR_MUI_FILE_NOT_FOUND 15100 #endif #ifndef ERROR_MUI_INVALID_FILE # define ERROR_MUI_INVALID_FILE 15101 #endif #ifndef ERROR_MUI_INVALID_RC_CONFIG # define ERROR_MUI_INVALID_RC_CONFIG 15102 #endif #ifndef ERROR_MUI_INVALID_LOCALE_NAME # define ERROR_MUI_INVALID_LOCALE_NAME 15103 #endif #ifndef ERROR_MUI_INVALID_ULTIMATEFALLBACK_NAME # define ERROR_MUI_INVALID_ULTIMATEFALLBACK_NAME 15104 #endif #ifndef ERROR_MUI_FILE_NOT_LOADED # define ERROR_MUI_FILE_NOT_LOADED 15105 #endif typedef BOOL (WINAPI *sGetQueuedCompletionStatusEx) (HANDLE CompletionPort, LPOVERLAPPED_ENTRY lpCompletionPortEntries, ULONG ulCount, PULONG ulNumEntriesRemoved, DWORD dwMilliseconds, BOOL 
fAlertable); /* from powerbase.h */ #ifndef DEVICE_NOTIFY_CALLBACK # define DEVICE_NOTIFY_CALLBACK 2 #endif #ifndef PBT_APMRESUMEAUTOMATIC # define PBT_APMRESUMEAUTOMATIC 18 #endif #ifndef PBT_APMRESUMESUSPEND # define PBT_APMRESUMESUSPEND 7 #endif typedef ULONG CALLBACK _DEVICE_NOTIFY_CALLBACK_ROUTINE( PVOID Context, ULONG Type, PVOID Setting ); typedef _DEVICE_NOTIFY_CALLBACK_ROUTINE* _PDEVICE_NOTIFY_CALLBACK_ROUTINE; typedef struct _DEVICE_NOTIFY_SUBSCRIBE_PARAMETERS { _PDEVICE_NOTIFY_CALLBACK_ROUTINE Callback; PVOID Context; } _DEVICE_NOTIFY_SUBSCRIBE_PARAMETERS, *_PDEVICE_NOTIFY_SUBSCRIBE_PARAMETERS; typedef PVOID _HPOWERNOTIFY; typedef _HPOWERNOTIFY *_PHPOWERNOTIFY; typedef DWORD (WINAPI *sPowerRegisterSuspendResumeNotification) (DWORD Flags, HANDLE Recipient, _PHPOWERNOTIFY RegistrationHandle); /* from Winuser.h */ typedef VOID (CALLBACK* WINEVENTPROC) (HWINEVENTHOOK hWinEventHook, DWORD event, HWND hwnd, LONG idObject, LONG idChild, DWORD idEventThread, DWORD dwmsEventTime); typedef HWINEVENTHOOK (WINAPI *sSetWinEventHook) (UINT eventMin, UINT eventMax, HMODULE hmodWinEventProc, WINEVENTPROC lpfnWinEventProc, DWORD idProcess, DWORD idThread, UINT dwflags); /* From mstcpip.h */ typedef struct _TCP_INITIAL_RTO_PARAMETERS { USHORT Rtt; UCHAR MaxSynRetransmissions; } TCP_INITIAL_RTO_PARAMETERS, *PTCP_INITIAL_RTO_PARAMETERS; #ifndef TCP_INITIAL_RTO_NO_SYN_RETRANSMISSIONS # define TCP_INITIAL_RTO_NO_SYN_RETRANSMISSIONS ((UCHAR) -2) #endif #ifndef SIO_TCP_INITIAL_RTO # define SIO_TCP_INITIAL_RTO _WSAIOW(IOC_VENDOR,17) #endif /* Ntdll function pointers */ extern sRtlGetVersion pRtlGetVersion; extern sRtlNtStatusToDosError pRtlNtStatusToDosError; extern sNtDeviceIoControlFile pNtDeviceIoControlFile; extern sNtQueryInformationFile pNtQueryInformationFile; extern sNtSetInformationFile pNtSetInformationFile; extern sNtQueryVolumeInformationFile pNtQueryVolumeInformationFile; extern sNtQueryDirectoryFile pNtQueryDirectoryFile; extern sNtQuerySystemInformation pNtQuerySystemInformation; extern sNtQueryInformationProcess pNtQueryInformationProcess; /* Kernel32 function pointers */ extern sGetQueuedCompletionStatusEx pGetQueuedCompletionStatusEx; /* Powrprof.dll function pointer */ extern sPowerRegisterSuspendResumeNotification pPowerRegisterSuspendResumeNotification; /* User32.dll function pointer */ extern sSetWinEventHook pSetWinEventHook; /* ws2_32.dll function pointer */ /* mingw doesn't have this definition, so let's declare it here locally */ typedef int (WINAPI *uv_sGetHostNameW) (PWSTR, int); extern uv_sGetHostNameW pGetHostNameW; #endif /* UV_WIN_WINAPI_H_ */ gevent-24.11.1/deps/libuv/src/win/winsock.c000066400000000000000000000365561471441230600204520ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #include #include #include "uv.h" #include "internal.h" /* Whether there are any non-IFS LSPs stacked on TCP */ int uv_tcp_non_ifs_lsp_ipv4; int uv_tcp_non_ifs_lsp_ipv6; /* Ip address used to bind to any port at any interface */ struct sockaddr_in uv_addr_ip4_any_; struct sockaddr_in6 uv_addr_ip6_any_; /* * Retrieves the pointer to a winsock extension function. */ static BOOL uv__get_extension_function(SOCKET socket, GUID guid, void **target) { int result; DWORD bytes; result = WSAIoctl(socket, SIO_GET_EXTENSION_FUNCTION_POINTER, &guid, sizeof(guid), (void*)target, sizeof(*target), &bytes, NULL, NULL); if (result == SOCKET_ERROR) { *target = NULL; return FALSE; } else { return TRUE; } } BOOL uv__get_acceptex_function(SOCKET socket, LPFN_ACCEPTEX* target) { const GUID wsaid_acceptex = WSAID_ACCEPTEX; return uv__get_extension_function(socket, wsaid_acceptex, (void**)target); } BOOL uv__get_connectex_function(SOCKET socket, LPFN_CONNECTEX* target) { const GUID wsaid_connectex = WSAID_CONNECTEX; return uv__get_extension_function(socket, wsaid_connectex, (void**)target); } void uv__winsock_init(void) { WSADATA wsa_data; int errorno; SOCKET dummy; WSAPROTOCOL_INFOW protocol_info; int opt_len; /* Set implicit binding address used by connectEx */ if (uv_ip4_addr("0.0.0.0", 0, &uv_addr_ip4_any_)) { abort(); } if (uv_ip6_addr("::", 0, &uv_addr_ip6_any_)) { abort(); } /* Skip initialization in safe mode without network support */ if (1 == GetSystemMetrics(SM_CLEANBOOT)) return; /* Initialize winsock */ errorno = WSAStartup(MAKEWORD(2, 2), &wsa_data); if (errorno != 0) { uv_fatal_error(errorno, "WSAStartup"); } /* Try to detect non-IFS LSPs */ uv_tcp_non_ifs_lsp_ipv4 = 1; dummy = socket(AF_INET, SOCK_STREAM, IPPROTO_IP); if (dummy != INVALID_SOCKET) { opt_len = (int) sizeof protocol_info; if (getsockopt(dummy, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &protocol_info, &opt_len) == 0) { if (protocol_info.dwServiceFlags1 & XP1_IFS_HANDLES) uv_tcp_non_ifs_lsp_ipv4 = 0; } closesocket(dummy); } /* Try to detect IPV6 support and non-IFS LSPs */ uv_tcp_non_ifs_lsp_ipv6 = 1; dummy = socket(AF_INET6, SOCK_STREAM, IPPROTO_IP); if (dummy != INVALID_SOCKET) { opt_len = (int) sizeof protocol_info; if (getsockopt(dummy, SOL_SOCKET, SO_PROTOCOL_INFOW, (char*) &protocol_info, &opt_len) == 0) { if (protocol_info.dwServiceFlags1 & XP1_IFS_HANDLES) uv_tcp_non_ifs_lsp_ipv6 = 0; } closesocket(dummy); } } int uv__ntstatus_to_winsock_error(NTSTATUS status) { switch (status) { case STATUS_SUCCESS: return ERROR_SUCCESS; case STATUS_PENDING: return ERROR_IO_PENDING; case STATUS_INVALID_HANDLE: case STATUS_OBJECT_TYPE_MISMATCH: return WSAENOTSOCK; case STATUS_INSUFFICIENT_RESOURCES: case STATUS_PAGEFILE_QUOTA: case STATUS_COMMITMENT_LIMIT: case STATUS_WORKING_SET_QUOTA: case STATUS_NO_MEMORY: case STATUS_QUOTA_EXCEEDED: case STATUS_TOO_MANY_PAGING_FILES: case STATUS_REMOTE_RESOURCES: return WSAENOBUFS; case STATUS_TOO_MANY_ADDRESSES: case STATUS_SHARING_VIOLATION: case STATUS_ADDRESS_ALREADY_EXISTS: return WSAEADDRINUSE; case STATUS_LINK_TIMEOUT: case 
STATUS_IO_TIMEOUT: case STATUS_TIMEOUT: return WSAETIMEDOUT; case STATUS_GRACEFUL_DISCONNECT: return WSAEDISCON; case STATUS_REMOTE_DISCONNECT: case STATUS_CONNECTION_RESET: case STATUS_LINK_FAILED: case STATUS_CONNECTION_DISCONNECTED: case STATUS_PORT_UNREACHABLE: case STATUS_HOPLIMIT_EXCEEDED: return WSAECONNRESET; case STATUS_LOCAL_DISCONNECT: case STATUS_TRANSACTION_ABORTED: case STATUS_CONNECTION_ABORTED: return WSAECONNABORTED; case STATUS_BAD_NETWORK_PATH: case STATUS_NETWORK_UNREACHABLE: case STATUS_PROTOCOL_UNREACHABLE: return WSAENETUNREACH; case STATUS_HOST_UNREACHABLE: return WSAEHOSTUNREACH; case STATUS_CANCELLED: case STATUS_REQUEST_ABORTED: return WSAEINTR; case STATUS_BUFFER_OVERFLOW: case STATUS_INVALID_BUFFER_SIZE: return WSAEMSGSIZE; case STATUS_BUFFER_TOO_SMALL: case STATUS_ACCESS_VIOLATION: return WSAEFAULT; case STATUS_DEVICE_NOT_READY: case STATUS_REQUEST_NOT_ACCEPTED: return WSAEWOULDBLOCK; case STATUS_INVALID_NETWORK_RESPONSE: case STATUS_NETWORK_BUSY: case STATUS_NO_SUCH_DEVICE: case STATUS_NO_SUCH_FILE: case STATUS_OBJECT_PATH_NOT_FOUND: case STATUS_OBJECT_NAME_NOT_FOUND: case STATUS_UNEXPECTED_NETWORK_ERROR: return WSAENETDOWN; case STATUS_INVALID_CONNECTION: return WSAENOTCONN; case STATUS_REMOTE_NOT_LISTENING: case STATUS_CONNECTION_REFUSED: return WSAECONNREFUSED; case STATUS_PIPE_DISCONNECTED: return WSAESHUTDOWN; case STATUS_CONFLICTING_ADDRESSES: case STATUS_INVALID_ADDRESS: case STATUS_INVALID_ADDRESS_COMPONENT: return WSAEADDRNOTAVAIL; case STATUS_NOT_SUPPORTED: case STATUS_NOT_IMPLEMENTED: return WSAEOPNOTSUPP; case STATUS_ACCESS_DENIED: return WSAEACCES; default: if ((status & (FACILITY_NTWIN32 << 16)) == (FACILITY_NTWIN32 << 16) && (status & (ERROR_SEVERITY_ERROR | ERROR_SEVERITY_WARNING))) { /* It's a windows error that has been previously mapped to an ntstatus * code. */ return (DWORD) (status & 0xffff); } else { /* The default fallback for unmappable ntstatus codes. */ return WSAEINVAL; } } } /* * This function provides a workaround for a bug in the winsock implementation * of WSARecv. The problem is that when SetFileCompletionNotificationModes is * used to avoid IOCP notifications of completed reads, WSARecv does not * reliably indicate whether we can expect a completion package to be posted * when the receive buffer is smaller than the received datagram. * * However it is desirable to use SetFileCompletionNotificationModes because * it yields a massive performance increase. * * This function provides a workaround for that bug, but it only works for the * specific case that we need it for. E.g. it assumes that the "avoid iocp" * bit has been set, and supports only overlapped operation. It also requires * the user to use the default msafd driver, doesn't work when other LSPs are * stacked on top of it. 
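 *
 * (As the code below shows, the low bit of OVERLAPPED.hEvent doubles as a
 * tag: when it is set, a NULL ApcContext is passed to NtDeviceIoControlFile,
 * so no completion packet is reported to the IOCP for that request.)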
*/ int WSAAPI uv__wsarecv_workaround(SOCKET socket, WSABUF* buffers, DWORD buffer_count, DWORD* bytes, DWORD* flags, WSAOVERLAPPED *overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine) { NTSTATUS status; void* apc_context; IO_STATUS_BLOCK* iosb = (IO_STATUS_BLOCK*) &overlapped->Internal; AFD_RECV_INFO info; DWORD error; if (overlapped == NULL || completion_routine != NULL) { WSASetLastError(WSAEINVAL); return SOCKET_ERROR; } info.BufferArray = buffers; info.BufferCount = buffer_count; info.AfdFlags = AFD_OVERLAPPED; info.TdiFlags = TDI_RECEIVE_NORMAL; if (*flags & MSG_PEEK) { info.TdiFlags |= TDI_RECEIVE_PEEK; } if (*flags & MSG_PARTIAL) { info.TdiFlags |= TDI_RECEIVE_PARTIAL; } if (!((intptr_t) overlapped->hEvent & 1)) { apc_context = (void*) overlapped; } else { apc_context = NULL; } iosb->Status = STATUS_PENDING; iosb->Pointer = 0; status = pNtDeviceIoControlFile((HANDLE) socket, overlapped->hEvent, NULL, apc_context, iosb, IOCTL_AFD_RECEIVE, &info, sizeof(info), NULL, 0); *flags = 0; *bytes = (DWORD) iosb->Information; switch (status) { case STATUS_SUCCESS: error = ERROR_SUCCESS; break; case STATUS_PENDING: error = WSA_IO_PENDING; break; case STATUS_BUFFER_OVERFLOW: error = WSAEMSGSIZE; break; case STATUS_RECEIVE_EXPEDITED: error = ERROR_SUCCESS; *flags = MSG_OOB; break; case STATUS_RECEIVE_PARTIAL_EXPEDITED: error = ERROR_SUCCESS; *flags = MSG_PARTIAL | MSG_OOB; break; case STATUS_RECEIVE_PARTIAL: error = ERROR_SUCCESS; *flags = MSG_PARTIAL; break; default: error = uv__ntstatus_to_winsock_error(status); break; } WSASetLastError(error); if (error == ERROR_SUCCESS) { return 0; } else { return SOCKET_ERROR; } } /* See description of uv__wsarecv_workaround. */ int WSAAPI uv__wsarecvfrom_workaround(SOCKET socket, WSABUF* buffers, DWORD buffer_count, DWORD* bytes, DWORD* flags, struct sockaddr* addr, int* addr_len, WSAOVERLAPPED *overlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE completion_routine) { NTSTATUS status; void* apc_context; IO_STATUS_BLOCK* iosb = (IO_STATUS_BLOCK*) &overlapped->Internal; AFD_RECV_DATAGRAM_INFO info; DWORD error; if (overlapped == NULL || addr == NULL || addr_len == NULL || completion_routine != NULL) { WSASetLastError(WSAEINVAL); return SOCKET_ERROR; } info.BufferArray = buffers; info.BufferCount = buffer_count; info.AfdFlags = AFD_OVERLAPPED; info.TdiFlags = TDI_RECEIVE_NORMAL; info.Address = addr; info.AddressLength = addr_len; if (*flags & MSG_PEEK) { info.TdiFlags |= TDI_RECEIVE_PEEK; } if (*flags & MSG_PARTIAL) { info.TdiFlags |= TDI_RECEIVE_PARTIAL; } if (!((intptr_t) overlapped->hEvent & 1)) { apc_context = (void*) overlapped; } else { apc_context = NULL; } iosb->Status = STATUS_PENDING; iosb->Pointer = 0; status = pNtDeviceIoControlFile((HANDLE) socket, overlapped->hEvent, NULL, apc_context, iosb, IOCTL_AFD_RECEIVE_DATAGRAM, &info, sizeof(info), NULL, 0); *flags = 0; *bytes = (DWORD) iosb->Information; switch (status) { case STATUS_SUCCESS: error = ERROR_SUCCESS; break; case STATUS_PENDING: error = WSA_IO_PENDING; break; case STATUS_BUFFER_OVERFLOW: error = WSAEMSGSIZE; break; case STATUS_RECEIVE_EXPEDITED: error = ERROR_SUCCESS; *flags = MSG_OOB; break; case STATUS_RECEIVE_PARTIAL_EXPEDITED: error = ERROR_SUCCESS; *flags = MSG_PARTIAL | MSG_OOB; break; case STATUS_RECEIVE_PARTIAL: error = ERROR_SUCCESS; *flags = MSG_PARTIAL; break; default: error = uv__ntstatus_to_winsock_error(status); break; } WSASetLastError(error); if (error == ERROR_SUCCESS) { return 0; } else { return SOCKET_ERROR; } } int WSAAPI uv__msafd_poll(SOCKET socket, 
AFD_POLL_INFO* info_in, AFD_POLL_INFO* info_out, OVERLAPPED* overlapped) { IO_STATUS_BLOCK iosb; IO_STATUS_BLOCK* iosb_ptr; HANDLE event = NULL; void* apc_context; NTSTATUS status; DWORD error; if (overlapped != NULL) { /* Overlapped operation. */ iosb_ptr = (IO_STATUS_BLOCK*) &overlapped->Internal; event = overlapped->hEvent; /* Do not report iocp completion if hEvent is tagged. */ if ((uintptr_t) event & 1) { event = (HANDLE)((uintptr_t) event & ~(uintptr_t) 1); apc_context = NULL; } else { apc_context = overlapped; } } else { /* Blocking operation. */ iosb_ptr = &iosb; event = CreateEvent(NULL, FALSE, FALSE, NULL); if (event == NULL) { return SOCKET_ERROR; } apc_context = NULL; } iosb_ptr->Status = STATUS_PENDING; status = pNtDeviceIoControlFile((HANDLE) socket, event, NULL, apc_context, iosb_ptr, IOCTL_AFD_POLL, info_in, sizeof *info_in, info_out, sizeof *info_out); if (overlapped == NULL) { /* If this is a blocking operation, wait for the event to become signaled, * and then grab the real status from the io status block. */ if (status == STATUS_PENDING) { DWORD r = WaitForSingleObject(event, INFINITE); if (r == WAIT_FAILED) { DWORD saved_error = GetLastError(); CloseHandle(event); WSASetLastError(saved_error); return SOCKET_ERROR; } status = iosb.Status; } CloseHandle(event); } switch (status) { case STATUS_SUCCESS: error = ERROR_SUCCESS; break; case STATUS_PENDING: error = WSA_IO_PENDING; break; default: error = uv__ntstatus_to_winsock_error(status); break; } WSASetLastError(error); if (error == ERROR_SUCCESS) { return 0; } else { return SOCKET_ERROR; } } int uv__convert_to_localhost_if_unspecified(const struct sockaddr* addr, struct sockaddr_storage* storage) { struct sockaddr_in* dest4; struct sockaddr_in6* dest6; if (addr == NULL) return UV_EINVAL; switch (addr->sa_family) { case AF_INET: dest4 = (struct sockaddr_in*) storage; memcpy(dest4, addr, sizeof(*dest4)); if (dest4->sin_addr.s_addr == 0) dest4->sin_addr.s_addr = htonl(INADDR_LOOPBACK); return 0; case AF_INET6: dest6 = (struct sockaddr_in6*) storage; memcpy(dest6, addr, sizeof(*dest6)); if (memcmp(&dest6->sin6_addr, &uv_addr_ip6_any_.sin6_addr, sizeof(uv_addr_ip6_any_.sin6_addr)) == 0) { struct in6_addr init_sin6_addr = IN6ADDR_LOOPBACK_INIT; dest6->sin6_addr = init_sin6_addr; } return 0; default: return UV_EINVAL; } } gevent-24.11.1/deps/libuv/src/win/winsock.h000066400000000000000000000146631471441230600204520ustar00rootroot00000000000000/* Copyright Joyent, Inc. and other Node contributors. All rights reserved. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef UV_WIN_WINSOCK_H_ #define UV_WIN_WINSOCK_H_ #include #include #include #include #include #include "winapi.h" /* * MinGW is missing these too */ #ifndef SO_UPDATE_CONNECT_CONTEXT # define SO_UPDATE_CONNECT_CONTEXT 0x7010 #endif #ifndef TCP_KEEPALIVE # define TCP_KEEPALIVE 3 #endif #ifndef IPV6_V6ONLY # define IPV6_V6ONLY 27 #endif #ifndef IPV6_HOPLIMIT # define IPV6_HOPLIMIT 21 #endif #ifndef SIO_BASE_HANDLE # define SIO_BASE_HANDLE 0x48000022 #endif #ifndef MCAST_JOIN_SOURCE_GROUP # define MCAST_JOIN_SOURCE_GROUP 45 #endif #ifndef MCAST_LEAVE_SOURCE_GROUP # define MCAST_LEAVE_SOURCE_GROUP 46 #endif /* * TDI defines that are only in the DDK. * We only need receive flags so far. */ #ifndef TDI_RECEIVE_NORMAL #define TDI_RECEIVE_BROADCAST 0x00000004 #define TDI_RECEIVE_MULTICAST 0x00000008 #define TDI_RECEIVE_PARTIAL 0x00000010 #define TDI_RECEIVE_NORMAL 0x00000020 #define TDI_RECEIVE_EXPEDITED 0x00000040 #define TDI_RECEIVE_PEEK 0x00000080 #define TDI_RECEIVE_NO_RESPONSE_EXP 0x00000100 #define TDI_RECEIVE_COPY_LOOKAHEAD 0x00000200 #define TDI_RECEIVE_ENTIRE_MESSAGE 0x00000400 #define TDI_RECEIVE_AT_DISPATCH_LEVEL 0x00000800 #define TDI_RECEIVE_CONTROL_INFO 0x00001000 #define TDI_RECEIVE_FORCE_INDICATION 0x00002000 #define TDI_RECEIVE_NO_PUSH 0x00004000 #endif /* * The "Auxiliary Function Driver" is the windows kernel-mode driver that does * TCP, UDP etc. Winsock is just a layer that dispatches requests to it. * Having these definitions allows us to bypass winsock and make an AFD kernel * call directly, avoiding a bug in winsock's recvfrom implementation. 
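 *
 * For reference, these are fixed values derived from the definitions below
 * (assuming FSCTL_AFD_BASE == FILE_DEVICE_NETWORK == 0x12):
 *   IOCTL_AFD_RECEIVE          == (0x12 << 12) | (5 << 2) | METHOD_NEITHER  == 0x12017
 *   IOCTL_AFD_RECEIVE_DATAGRAM == (0x12 << 12) | (6 << 2) | METHOD_NEITHER  == 0x1201B
 *   IOCTL_AFD_POLL             == (0x12 << 12) | (9 << 2) | METHOD_BUFFERED == 0x12024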
*/ #define AFD_NO_FAST_IO 0x00000001 #define AFD_OVERLAPPED 0x00000002 #define AFD_IMMEDIATE 0x00000004 #define AFD_POLL_RECEIVE_BIT 0 #define AFD_POLL_RECEIVE (1 << AFD_POLL_RECEIVE_BIT) #define AFD_POLL_RECEIVE_EXPEDITED_BIT 1 #define AFD_POLL_RECEIVE_EXPEDITED (1 << AFD_POLL_RECEIVE_EXPEDITED_BIT) #define AFD_POLL_SEND_BIT 2 #define AFD_POLL_SEND (1 << AFD_POLL_SEND_BIT) #define AFD_POLL_DISCONNECT_BIT 3 #define AFD_POLL_DISCONNECT (1 << AFD_POLL_DISCONNECT_BIT) #define AFD_POLL_ABORT_BIT 4 #define AFD_POLL_ABORT (1 << AFD_POLL_ABORT_BIT) #define AFD_POLL_LOCAL_CLOSE_BIT 5 #define AFD_POLL_LOCAL_CLOSE (1 << AFD_POLL_LOCAL_CLOSE_BIT) #define AFD_POLL_CONNECT_BIT 6 #define AFD_POLL_CONNECT (1 << AFD_POLL_CONNECT_BIT) #define AFD_POLL_ACCEPT_BIT 7 #define AFD_POLL_ACCEPT (1 << AFD_POLL_ACCEPT_BIT) #define AFD_POLL_CONNECT_FAIL_BIT 8 #define AFD_POLL_CONNECT_FAIL (1 << AFD_POLL_CONNECT_FAIL_BIT) #define AFD_POLL_QOS_BIT 9 #define AFD_POLL_QOS (1 << AFD_POLL_QOS_BIT) #define AFD_POLL_GROUP_QOS_BIT 10 #define AFD_POLL_GROUP_QOS (1 << AFD_POLL_GROUP_QOS_BIT) #define AFD_NUM_POLL_EVENTS 11 #define AFD_POLL_ALL ((1 << AFD_NUM_POLL_EVENTS) - 1) typedef struct _AFD_RECV_DATAGRAM_INFO { LPWSABUF BufferArray; ULONG BufferCount; ULONG AfdFlags; ULONG TdiFlags; struct sockaddr* Address; int* AddressLength; } AFD_RECV_DATAGRAM_INFO, *PAFD_RECV_DATAGRAM_INFO; typedef struct _AFD_RECV_INFO { LPWSABUF BufferArray; ULONG BufferCount; ULONG AfdFlags; ULONG TdiFlags; } AFD_RECV_INFO, *PAFD_RECV_INFO; #define _AFD_CONTROL_CODE(operation, method) \ ((FSCTL_AFD_BASE) << 12 | (operation << 2) | method) #define FSCTL_AFD_BASE FILE_DEVICE_NETWORK #define AFD_RECEIVE 5 #define AFD_RECEIVE_DATAGRAM 6 #define AFD_POLL 9 #define IOCTL_AFD_RECEIVE \ _AFD_CONTROL_CODE(AFD_RECEIVE, METHOD_NEITHER) #define IOCTL_AFD_RECEIVE_DATAGRAM \ _AFD_CONTROL_CODE(AFD_RECEIVE_DATAGRAM, METHOD_NEITHER) #define IOCTL_AFD_POLL \ _AFD_CONTROL_CODE(AFD_POLL, METHOD_BUFFERED) #if defined(__MINGW32__) && !defined(__MINGW64_VERSION_MAJOR) typedef struct _IP_ADAPTER_UNICAST_ADDRESS_XP { /* FIXME: __C89_NAMELESS was removed */ /* __C89_NAMELESS */ union { ULONGLONG Alignment; /* __C89_NAMELESS */ struct { ULONG Length; DWORD Flags; }; }; struct _IP_ADAPTER_UNICAST_ADDRESS_XP *Next; SOCKET_ADDRESS Address; IP_PREFIX_ORIGIN PrefixOrigin; IP_SUFFIX_ORIGIN SuffixOrigin; IP_DAD_STATE DadState; ULONG ValidLifetime; ULONG PreferredLifetime; ULONG LeaseLifetime; } IP_ADAPTER_UNICAST_ADDRESS_XP,*PIP_ADAPTER_UNICAST_ADDRESS_XP; typedef struct _IP_ADAPTER_UNICAST_ADDRESS_LH { union { ULONGLONG Alignment; struct { ULONG Length; DWORD Flags; }; }; struct _IP_ADAPTER_UNICAST_ADDRESS_LH *Next; SOCKET_ADDRESS Address; IP_PREFIX_ORIGIN PrefixOrigin; IP_SUFFIX_ORIGIN SuffixOrigin; IP_DAD_STATE DadState; ULONG ValidLifetime; ULONG PreferredLifetime; ULONG LeaseLifetime; UINT8 OnLinkPrefixLength; } IP_ADAPTER_UNICAST_ADDRESS_LH,*PIP_ADAPTER_UNICAST_ADDRESS_LH; #endif int uv__convert_to_localhost_if_unspecified(const struct sockaddr* addr, struct sockaddr_storage* storage); #endif /* UV_WIN_WINSOCK_H_ */ 
gevent-24.11.1/deps/libuv/test/000077500000000000000000000000001471441230600162055ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/test/fixtures/000077500000000000000000000000001471441230600200565ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/test/fixtures/empty_file000066400000000000000000000000001471441230600221240ustar00rootroot00000000000000gevent-24.11.1/deps/libuv/test/fixtures/load_error.node000066400000000000000000000000071471441230600230520ustar00rootroot00000000000000foobar gevent-24.11.1/deps/libuv/test/fixtures/lorem_ipsum.txt000066400000000000000000000006761471441230600231630ustar00rootroot00000000000000Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. gevent-24.11.1/dev-requirements.txt000066400000000000000000000013501471441230600172110ustar00rootroot00000000000000# For viewing README.rst (restview --long-description), # CONTRIBUTING.rst, etc. # https://github.com/mgedmin/restview restview pylint>=1.8.0 ; python_version < "3.4" # pylint 2 needs astroid 2; unfortunately, it uses `typed_ast` # which has a C extension that doesn't build on PyPy pylint >= 2.5.0 ; python_version >= "3.4" and platform_python_implementation == "CPython" astroid >= 2.4.0 ; python_version >= "3.4" and platform_python_implementation == "CPython" # backport of faulthandler faulthandler ; python_version == "2.7" and platform_python_implementation == "CPython" # For generating CHANGES.rst towncrier # For making releases zest.releaser[recommended] # benchmarks use this pyperf >= 1.6.1 greenlet >= 1.0 -e .[test,docs] gevent-24.11.1/docs/000077500000000000000000000000001471441230600141025ustar00rootroot00000000000000gevent-24.11.1/docs/Makefile000066400000000000000000000062761471441230600155550ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The txt pages are in $(BUILDDIR)/text." 
dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/gevent.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/gevent.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." gevent-24.11.1/docs/_about.rst000066400000000000000000000046421471441230600161130ustar00rootroot00000000000000.. This file is included in README.rst from the top-level so it is limited to pure ReST markup, not Sphinx. gevent is a coroutine_ -based Python_ networking library that uses `greenlet `_ to provide a high-level synchronous API on top of the `libev`_ or `libuv`_ event loop. Features include: * Fast event loop based on `libev`_ or `libuv`_. * Lightweight execution units based on greenlets. * API that re-uses concepts from the Python standard library (for examples there are `events`_ and `queues`_). * `Cooperative sockets with SSL support `_ * `Cooperative DNS queries `_ performed through a threadpool, dnspython, or c-ares. * `Monkey patching utility `_ to get 3rd party modules to become cooperative * TCP/UDP/HTTP servers * Subprocess support (through `gevent.subprocess`_) * Thread pools gevent is `inspired by eventlet`_ but features a more consistent API, simpler implementation and better performance. Read why others `use gevent`_ and check out the list of the `open source projects based on gevent`_. gevent was written by `Denis Bilenko `_. Since version 1.1, gevent is maintained by Jason Madden for `NextThought `_ (through gevent 21) and `Institutional Shareholder Services `_ with help from the `contributors `_ and is licensed under the MIT license. See `what's new`_ in the latest major release. Check out the detailed changelog_ for this version. .. _events: http://www.gevent.org/api/gevent.event.html#gevent.event.Event .. _queues: http://www.gevent.org/api/gevent.queue.html#gevent.queue.Queue .. _gevent.subprocess: http://www.gevent.org/api/gevent.subprocess.html#module-gevent.subprocess .. _coroutine: https://en.wikipedia.org/wiki/Coroutine .. _Python: http://python.org .. 
_libev: http://software.schmorp.de/pkg/libev.html .. _libuv: http://libuv.org .. _inspired by eventlet: http://blog.gevent.org/2010/02/27/why-gevent/ .. _use gevent: http://groups.google.com/group/gevent/browse_thread/thread/4de9703e5dca8271 .. _open source projects based on gevent: https://github.com/gevent/gevent/wiki/Projects .. _what's new: http://www.gevent.org/whatsnew_1_5.html .. _changelog: http://www.gevent.org/changelog.html gevent-24.11.1/docs/_static/000077500000000000000000000000001471441230600155305ustar00rootroot00000000000000gevent-24.11.1/docs/_static/5564530.png000066400000000000000000000617021471441230600170770ustar00rootroot00000000000000[binary PNG image data omitted; not representable as text]
[garbled region; surviving Sphinx template fragment:] Home{{ reldelim1 }} {{ super() }} {%- endblock %}
gevent-24.11.1/docs/api/000077500000000000000000000000001471441230600146535ustar00rootroot00000000000000gevent-24.11.1/docs/api/gevent._socket3.rst000066400000000000000000000003371471441230600204110ustar00rootroot00000000000000================================================== :mod:`gevent._socket3` -- Python 3 socket module ================================================== ..
automodule:: gevent._socket3 :members: :inherited-members: gevent-24.11.1/docs/api/gevent.ares.rst000066400000000000000000000004571471441230600176340ustar00rootroot00000000000000====================================================================================== :mod:`gevent.ares` -- Backwards compatibility alias for :mod:`gevent.resolver.cares` ====================================================================================== .. automodule:: gevent.ares :members: gevent-24.11.1/docs/api/gevent.backdoor.rst000066400000000000000000000005431471441230600204620ustar00rootroot00000000000000====================================================================================================== :mod:`gevent.backdoor` -- Interactive greenlet-based network console that can be used in any process ====================================================================================================== .. automodule:: gevent.backdoor :members: gevent-24.11.1/docs/api/gevent.baseserver.rst000066400000000000000000000003661471441230600210420ustar00rootroot00000000000000================================================================= :mod:`gevent.baseserver` -- Base class for implementing servers ================================================================= .. automodule:: gevent.baseserver :members: gevent-24.11.1/docs/api/gevent.builtins.rst000066400000000000000000000004411471441230600205240ustar00rootroot00000000000000================================================================================ :mod:`gevent.builtins` -- gevent friendly implementations of builtin functions ================================================================================ .. automodule:: gevent.builtins :members: gevent-24.11.1/docs/api/gevent.contextvars.rst000066400000000000000000000003421471441230600212530ustar00rootroot00000000000000========================================================== :mod:`gevent.contextvars` -- Cooperative ``contextvars`` ========================================================== .. automodule:: gevent.contextvars :members: gevent-24.11.1/docs/api/gevent.core.rst000066400000000000000000000032321471441230600176240ustar00rootroot00000000000000========================================================== :mod:`gevent.core` - (deprecated) event loop abstraction ========================================================== .. automodule:: gevent.core This module was originally a wrapper around libev_ and followed the libev API pretty closely. Now that we support libuv, it also serves as something of an event loop abstraction layer. Most people will not need to use the objects defined in this module directly. If you need to create watcher objects, you should use the methods defined on the current event loop. Note that gevent creates an event loop transparently for the user and runs it in a dedicated greenlet (called hub), so using this module is not necessary. In fact, if you do use it, chances are that your program is not compatible across different gevent versions (gevent.core in 0.x has a completely different interface and 2.x will probably have yet another interface) and implementations (the libev, libev CFFI and libuv implementations all have different contents in this module). .. caution:: Never instantiate the watcher classes defined in this module (if they are defined in this module; the various event loop implementations do something very different with them). Always use the :class:`watcher methods defined ` on :attr:`the current loop `, i.e., ``get_hub().loop``. 
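For example, here is a minimal sketch of creating and running a timer watcher on the current loop (illustrative only; the watcher construction methods are those defined on the loop object, and details vary slightly between the libev and libuv implementations)::

    import gevent
    from gevent.hub import get_hub

    loop = get_hub().loop             # the event loop of the current hub
    timer = loop.timer(0.5)           # a watcher that fires after 0.5 seconds
    timer.start(lambda: print("timer fired"))
    gevent.sleep(1)                   # let the hub run so the callback can fire
    timer.stop()
    timer.close()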
On Windows, this wrapper will accept Windows handles rather than stdio file descriptors which libev requires. This is to simplify interaction with the rest of the Python, since it requires Windows handles. .. _libev: http://software.schmorp.de/pkg/libev.html gevent-24.11.1/docs/api/gevent.event.rst000066400000000000000000000007671471441230600200270ustar00rootroot00000000000000============================================================ :mod:`gevent.event` -- Notifications of multiple listeners ============================================================ .. module:: gevent.event .. autoclass:: gevent.event.Event :members: set, clear, wait, rawlink, unlink .. method:: is_set() isSet() ready() Return true if and only if the internal flag is true. .. autoclass:: gevent.event.AsyncResult :members: :undoc-members: gevent-24.11.1/docs/api/gevent.events.rst000066400000000000000000000002531471441230600202000ustar00rootroot00000000000000:mod:`gevent.events` -- Publish/subscribe event infrastructure ============================================================== .. automodule:: gevent.events :members: gevent-24.11.1/docs/api/gevent.exceptions.rst000066400000000000000000000002531471441230600210550ustar00rootroot00000000000000======================================== :mod:`gevent.exceptions` -- Exceptions ======================================== .. automodule:: gevent.exceptions :members: gevent-24.11.1/docs/api/gevent.fileobject.rst000066400000000000000000000004271471441230600210050ustar00rootroot00000000000000============================================================================ :mod:`gevent.fileobject` -- Wrappers to make file-like objects cooperative ============================================================================ .. automodule:: gevent.fileobject :members: gevent-24.11.1/docs/api/gevent.greenlet.rst000066400000000000000000000167621471441230600205150ustar00rootroot00000000000000================== Greenlet Objects ================== .. currentmodule:: gevent :class:`gevent.Greenlet` is a light-weight cooperatively-scheduled execution unit. It is a more powerful version of :class:`greenlet.greenlet`. For general information, see :ref:`greenlet-basics`. You can retrieve the current greenlet at any time using :func:`gevent.getcurrent`. Starting Greenlets ================== To start a new greenlet, pass the target function and its arguments to :class:`Greenlet` constructor and call :meth:`Greenlet.start`: >>> from gevent import Greenlet >>> def myfunction(arg1, arg2, kwarg1=None): ... pass >>> g = Greenlet(myfunction, 'arg1', 'arg2', kwarg1=1) >>> g.start() or use classmethod :meth:`Greenlet.spawn` which is a shortcut that does the same: >>> g = Greenlet.spawn(myfunction, 'arg1', 'arg2', kwarg1=1) There are also various spawn helpers in :mod:`gevent`, including: - :func:`gevent.spawn` - :func:`gevent.spawn_later` - :func:`gevent.spawn_raw` Waiting For Greenlets ===================== You can wait for a greenlet to finish with its :meth:`Greenlet.join` method. There are helper functions to join multiple greenlets or heterogenous collections of objects: - :func:`gevent.joinall` - :func:`gevent.wait` - :func:`gevent.iwait` Stopping Greenlets ================== You can forcibly stop a :class:`Greenlet` using its :meth:`Greenlet.kill` method. There are also helper functions that can be useful in limited circumstances (if you might have a :class:`raw greenlet `): - :func:`gevent.kill` - :func:`gevent.killall` Context Managers ================ .. 
versionadded:: 21.1.0 Greenlets also function as context managers, so you can combine spawning and waiting for a greenlet to finish in a single line: .. doctest:: >>> def in_greenlet(): ... print("In the greenlet") ... return 42 >>> with Greenlet.spawn(in_greenlet) as g: ... print("In the with suite") In the with suite In the greenlet >>> g.get(block=False) 42 Normally, the greenlet is joined to wait for it to finish, but if the body of the suite raises an exception, the greenlet is killed with that exception. .. doctest:: >>> import gevent >>> try: ... with Greenlet.spawn(gevent.sleep, 0.1) as g: ... raise Exception("From with body") ... except Exception: ... pass >>> g.dead True >>> g.successful() False >>> g.get(block=False) Traceback (most recent call last): ... Exception: From with body .. _subclassing-greenlet: Subclassing Greenlet ==================== To subclass a :class:`Greenlet`, override its ``_run()`` method and call ``Greenlet.__init__(self)`` in the subclass ``__init__``. This can be done to override :meth:`Greenlet.__str__`: if ``_run`` raises an exception, its string representation will be printed after the traceback it generated. :: class MyNoopGreenlet(Greenlet): def __init__(self, seconds): Greenlet.__init__(self) self.seconds = seconds def _run(self): gevent.sleep(self.seconds) def __str__(self): return 'MyNoopGreenlet(%s)' % self.seconds .. important:: You *SHOULD NOT* attempt to override the ``run()`` method. Boolean Contexts ================ Greenlet objects have a boolean value (``__nonzero__`` or ``__bool__``) which is true if it's active: started but not dead yet. It's possible to use it like this:: >>> g = gevent.spawn(...) >>> while g: # do something while g is alive The Greenlet's boolean value is an improvement on the raw :class:`greenlet's ` boolean value. The raw greenlet's boolean value returns False if the greenlet has not been switched to yet or is already dead. While the latter is OK, the former is not good, because a just spawned Greenlet has not been switched to yet and thus would evaluate to False. .. exception:: GreenletExit A special exception that kills the greenlet silently. When a greenlet raises :exc:`GreenletExit` or a subclass, the traceback is not printed and the greenlet is considered :meth:`successful `. The exception instance is available under :attr:`value ` property as if it was returned by the greenlet, not raised. .. class:: Greenlet .. automethod:: Greenlet.__init__ .. rubric:: Attributes .. autoattribute:: Greenlet.exception .. autoattribute:: Greenlet.minimal_ident .. autoattribute:: Greenlet.name .. autoattribute:: Greenlet.dead .. attribute:: Greenlet.value Holds the value returned by the function if the greenlet has finished successfully. Until then, or if it finished in error, `None`. .. tip:: Recall that a greenlet killed with the default :class:`GreenletExit` is considered to have finished successfully, and the `GreenletExit` exception will be its value. .. attribute:: Greenlet.spawn_tree_locals A dictionary that is shared between all the greenlets in a "spawn tree", that is, a spawning greenlet and all its descendent greenlets. All children of the main (root) greenlet start their own spawn trees. Assign a new dictionary to this attribute on an instance of this class to create a new spawn tree (as far as locals are concerned). .. versionadded:: 1.3a2 .. attribute:: Greenlet.spawning_greenlet A weak-reference to the greenlet that was current when this object was created. 
Note that the :attr:`parent` attribute is always the hub. .. versionadded:: 1.3a2 .. attribute:: Greenlet.spawning_stack A lightweight :obj:`frame `-like object capturing the stack when this greenlet was created as well as the stack when the spawning greenlet was created (if applicable). This can be passed to :func:`traceback.print_stack`. .. versionadded:: 1.3a2 .. attribute:: Greenlet.spawning_stack_limit A class attribute specifying how many levels of the spawning stack will be kept. Specify a smaller number for higher performance, spawning greenlets, specify a larger value for improved debugging. .. versionadded:: 1.3a2 .. rubric:: Methods .. automethod:: Greenlet.spawn .. automethod:: Greenlet.ready .. automethod:: Greenlet.successful .. automethod:: Greenlet.start .. automethod:: Greenlet.start_later .. automethod:: Greenlet.join .. automethod:: Greenlet.get .. automethod:: Greenlet.kill(exception=GreenletExit, block=True, timeout=None) .. automethod:: Greenlet.link(callback) .. automethod:: Greenlet.link_value(callback) .. automethod:: Greenlet.link_exception(callback) .. automethod:: Greenlet.rawlink .. automethod:: Greenlet.unlink .. automethod:: Greenlet.__str__ .. automethod:: Greenlet.add_spawn_callback .. automethod:: Greenlet.remove_spawn_callback Raw greenlet Methods ==================== Being a greenlet__ subclass, :class:`Greenlet` also has `switch() `_ and `throw() `_ methods. However, these should not be used at the application level as they can very easily lead to greenlets that are forever unscheduled. Prefer higher-level safe classes, like :class:`Event ` and :class:`Queue `, instead. __ https://greenlet.readthedocs.io/en/latest/#instantiation .. _switching: https://greenlet.readthedocs.io/en/latest/#switching .. _throw: https://greenlet.readthedocs.io/en/latest/#methods-and-attributes-of-greenlets .. class:: greenlet.greenlet The base class from which `Greenlet` descends. .. LocalWords: Greenlet GreenletExit Greenlet's greenlet's .. LocalWords: automethod gevent-24.11.1/docs/api/gevent.hub.rst000066400000000000000000000025521471441230600174560ustar00rootroot00000000000000============================================= ``gevent.hub`` - The Event Loop and the Hub ============================================= .. module:: gevent.hub The hub is a special greenlet created automatically to run the event loop. The current hub can be retrieved with `get_hub`. .. autofunction:: get_hub .. autoclass:: Hub :members: .. automethod:: wait .. automethod:: cancel_wait .. attribute:: loop the event loop object (`ILoop`) associated with this hub and thus this native thread. The Event Loop ============== The current event loop can be obtained with ``get_hub().loop``. All implementations of the loop provide a common minimum interface. .. autointerface:: gevent._interfaces.ILoop .. autointerface:: gevent._interfaces.IWatcher .. autointerface:: gevent._interfaces.ICallback Utilities ========= .. autoclass:: Waiter Exceptions ========== .. autoclass:: LoopExit The following exceptions *are not* expected to be thrown and *are not* meant to be caught; if they are raised to user code it is generally a serious programming error or a bug in gevent, greenlet, or its event loop implementation. They are presented here for documentation purposes only. .. autoclass:: gevent.exceptions.ConcurrentObjectUseError .. autoclass:: gevent.exceptions.BlockingSwitchOutError .. 
autoclass:: gevent.exceptions.InvalidSwitchError gevent-24.11.1/docs/api/gevent.local.rst000066400000000000000000000002731471441230600177700ustar00rootroot00000000000000=============================================== :mod:`gevent.local` -- Greenlet-local objects =============================================== .. automodule:: gevent.local :members: gevent-24.11.1/docs/api/gevent.lock.rst000066400000000000000000000003031471441230600176200ustar00rootroot00000000000000========================================== :mod:`gevent.lock` -- Locking primitives ========================================== .. automodule:: gevent.lock :members: :inherited-members: gevent-24.11.1/docs/api/gevent.monkey.rst000066400000000000000000000003541471441230600202000ustar00rootroot00000000000000=============================================================== :mod:`gevent.monkey` -- Make the standard library cooperative =============================================================== .. automodule:: gevent.monkey :members: gevent-24.11.1/docs/api/gevent.os.rst000066400000000000000000000004061471441230600173150ustar00rootroot00000000000000========================================================================= :mod:`gevent.os` -- Low-level operating system functions from :mod:`os` ========================================================================= .. automodule:: gevent.os :members: gevent-24.11.1/docs/api/gevent.pool.rst000066400000000000000000000005431471441230600176470ustar00rootroot00000000000000:mod:`gevent.pool` -- Managing greenlets in a group =================================================== .. module:: gevent.pool .. autoclass:: gevent.pool.Group :members: :inherited-members: .. automethod:: __len__ .. automethod:: __contains__ .. autoclass:: gevent.pool.PoolFull :members: .. autoclass:: gevent.pool.Pool :members: gevent-24.11.1/docs/api/gevent.pywsgi.rst000066400000000000000000000003731471441230600202210ustar00rootroot00000000000000==================================================================== :mod:`gevent.pywsgi` -- A pure-Python, gevent-friendly WSGI server ==================================================================== .. automodule:: gevent.pywsgi :members: gevent-24.11.1/docs/api/gevent.queue.rst000066400000000000000000000013431471441230600200210ustar00rootroot00000000000000============================================ :mod:`gevent.queue` -- Synchronized queues ============================================ .. automodule:: gevent.queue :members: :undoc-members: .. exception:: Full An alias for :class:`Queue.Full` .. exception:: Empty An alias for :class:`Queue.Empty` Examples ======== Example of how to wait for enqueued tasks to be completed:: def worker(): while True: item = q.get() try: do_work(item) finally: q.task_done() q = JoinableQueue() for i in range(num_worker_threads): gevent.spawn(worker) for item in source(): q.put(item) q.join() # block until all tasks are done gevent-24.11.1/docs/api/gevent.resolver.ares.rst000066400000000000000000000003631471441230600214700ustar00rootroot00000000000000=============================================================== :mod:`gevent.resolver.ares` -- c-ares based hostname resolver =============================================================== .. 
automodule:: gevent.resolver.ares :members: gevent-24.11.1/docs/api/gevent.resolver.blocking.rst000066400000000000000000000003611471441230600223240ustar00rootroot00000000000000============================================================= :mod:`gevent.resolver.blocking` -- Non-cooperative resolver ============================================================= .. automodule:: gevent.resolver.blocking :members: gevent-24.11.1/docs/api/gevent.resolver.dnspython.rst000066400000000000000000000004041471441230600225600ustar00rootroot00000000000000=================================================================== :mod:`gevent.resolver.dnspython` -- Pure Python hostname resolver =================================================================== .. automodule:: gevent.resolver.dnspython :members: gevent-24.11.1/docs/api/gevent.resolver.thread.rst000066400000000000000000000003731471441230600220060ustar00rootroot00000000000000================================================================= :mod:`gevent.resolver.thread` -- thread based hostname resolver ================================================================= .. automodule:: gevent.resolver.thread :members: gevent-24.11.1/docs/api/gevent.rst000066400000000000000000000050751471441230600167040ustar00rootroot00000000000000=================================== :mod:`gevent` -- common functions =================================== .. module:: gevent The most common functions and classes are available in the :mod:`gevent` top level package. Please read :doc:`/intro` for an introduction to the concepts discussed here. .. seealso:: :mod:`gevent.util` .. seealso:: :doc:`/configuration` For configuring gevent using the environment or the ``gevent.config`` object. .. autodata:: __version__ Working With Greenlets ====================== See :class:`gevent.Greenlet` for more information about greenlet objects. Creating Greenlets ------------------ .. autofunction:: spawn .. autofunction:: spawn_later .. autofunction:: spawn_raw Getting Greenlets ----------------- .. function:: getcurrent() Return the currently executing greenlet (the one that called this function). Note that this may be an instance of :class:`Greenlet` or :class:`greenlet.greenlet`. Stopping Greenlets ------------------ .. autofunction:: kill(greenlet, exception=GreenletExit) .. autofunction:: killall(greenlets, exception=GreenletExit, block=True, timeout=None) Sleeping ======== .. autofunction:: sleep .. autofunction:: idle Switching ========= .. function:: getswitchinterval() -> current switch interval See :func:`setswitchinterval` .. versionadded:: 1.3 .. function:: setswitchinterval(seconds) Set the approximate maximum amount of time that callback functions will be allowed to run before the event loop is cycled to poll for events. This prevents code that does things like the following from monopolizing the loop:: while True: # Burning CPU! gevent.sleep(0) # But yield to other greenlets Prior to this, this code could prevent the event loop from running. On Python 3, this uses the native :func:`sys.setswitchinterval` and :func:`sys.getswitchinterval`. .. versionadded:: 1.3 Waiting ======= .. autofunction:: wait .. autofunction:: iwait .. autofunction:: joinall Working with multiple processes =============================== .. note:: These functions will only be available if :func:`os.fork` is available. In general, prefer to use :func:`gevent.os.fork` instead of manually calling these functions. .. autofunction:: fork .. autofunction:: reinit Signals ======= .. 
autofunction:: signal_handler Timeouts ======== See class :class:`gevent.Timeout` for information about Timeout objects. .. autofunction:: with_timeout .. LocalWords: Greenlet GreenletExit Greenlet's greenlet's .. LocalWords: automethod gevent-24.11.1/docs/api/gevent.select.rst000066400000000000000000000003131471441230600201500ustar00rootroot00000000000000==================================================== :mod:`gevent.select` -- Waiting for I/O completion ==================================================== .. automodule:: gevent.select :members: gevent-24.11.1/docs/api/gevent.selectors.rst000066400000000000000000000003271471441230600207010ustar00rootroot00000000000000======================================================= :mod:`gevent.selectors` -- High-level IO Multiplexing ======================================================= .. automodule:: gevent.selectors :members: gevent-24.11.1/docs/api/gevent.server.rst000066400000000000000000000002471471441230600202050ustar00rootroot00000000000000======================================== :mod:`gevent.server` -- TCP/SSL server ======================================== .. automodule:: gevent.server :members: gevent-24.11.1/docs/api/gevent.signal.rst000066400000000000000000000005111471441230600201460ustar00rootroot00000000000000============================================================================================== :mod:`gevent.signal` -- Cooperative implementation of special cases of :func:`signal.signal` ============================================================================================== .. automodule:: gevent.signal :members: gevent-24.11.1/docs/api/gevent.socket.rst000066400000000000000000000066431471441230600201750ustar00rootroot00000000000000==================================================================== :mod:`gevent.socket` -- Cooperative low-level networking interface ==================================================================== .. module:: gevent.socket This module provides socket operations and some related functions. The API of the functions and classes matches the API of the corresponding items in the standard :mod:`socket` module exactly, but the synchronous functions in this module only block the current greenlet and let the others run. .. tip:: gevent's sockets, like most gevent objects, have thread affinity. That is, they can only be used from the operating system thread that created them (any greenlet in that thread can use the socket). The results of attempting to use the socket in another thread (for example, passing it to the threadpool) are not defined (but one common outcome is a :exc:`~gevent.hub.LoopExit` exception). For convenience, exceptions (like :class:`error ` and :class:`timeout `) as well as the constants from the :mod:`socket` module are imported into this module. In almost all cases one can simply replace ``import socket`` with ``from gevent import socket`` to start using cooperative sockets with no other changes (or use :func:`gevent.monkey.patch_socket` at startup if code changes are not desired or possible). Standard Library Interface ========================== The exact API exposed by this module varies depending on what version of Python you are using. The documents below describe the API for Python 2 and Python 3, respectively. .. note:: All the described APIs should be imported from ``gevent.socket``, and *not* from their implementation modules. Their organization is an implementation detail that may change at any time. .. autofunction:: gevent.socket.gethostbyname .. 
class:: socket The cooperative socket object. See the version documentation for specifics. .. versionchanged:: 20.12.0 Socket objects no longer have a ``__dict__`` or accept arbitrary instance variables. Previously they did, but standard library sockets never have. .. toctree:: Python 3 interface Gevent Extensions ================= Beyond the basic standard library interface, ``gevent.socket`` provides some extensions. These are identical and shared by all versions of Python. .. versionchanged:: 1.3a2 The undocumented class ``BlockingResolver`` has been documented and moved to :class:`gevent.resolver.blocking.Resolver`. Waiting ------- These functions are used to block the current greenlet until an open file (socket) is ready to perform I/O operations. These are low-level functions not commonly used by many programs. .. note:: These use the underlying event loop ``io`` watchers, which means that they share the same implementation limits. For example, on some platforms they can be used with more than just sockets, while on others the applicability is more limited (POSIX platforms like Linux and OS X can use pipes and fifos but Windows is limited to sockets). .. note:: On Windows with the libev event loop, gevent is limited to 1024 open sockets. .. autofunction:: gevent.socket.wait_read .. autofunction:: gevent.socket.wait_write .. autofunction:: gevent.socket.wait_readwrite .. autofunction:: gevent.socket.wait .. autofunction:: gevent.socket.cancel_wait gevent-24.11.1/docs/api/gevent.ssl.rst000066400000000000000000000031731471441230600175010ustar00rootroot00000000000000==================================================================== :mod:`gevent.ssl` -- Secure Sockets Layer (SSL/TLS) module ==================================================================== .. module:: gevent.ssl This module provides SSL/TLS operations and some related functions. The API of the functions and classes matches the API of the corresponding items in the standard :mod:`ssl` module exactly, but the synchronous functions in this module only block the current greenlet and let the others run. The exact API exposed by this module varies depending on what version of Python you are using. The documents below describe the API for Python 3. .. tip:: As an implementation note, gevent's exact behaviour will differ somewhat depending on the underlying TLS version in use. For example, the number of data exchanges involved in the handshake process, and exactly when that process occurs, will vary. This can be indirectly observed by the number and timing of greenlet switches or trips around the event loop gevent makes. Most applications should not notice this, but some applications (and especially tests, where it is common for a process to be both a server and its own client), may find that they have coded in assumptions about the order in which multiple greenlets run. As TLS 1.3 gets deployed, those assumptions are likely to break. .. warning:: All the described APIs should be imported from ``gevent.ssl``, and *not* from their implementation modules. Their organization is an implementation detail that may change at any time. .. automodule:: gevent.ssl :members: gevent-24.11.1/docs/api/gevent.subprocess.rst000066400000000000000000000003601471441230600210630ustar00rootroot00000000000000=============================================================== :mod:`gevent.subprocess` -- Cooperative ``subprocess`` module =============================================================== .. 
automodule:: gevent.subprocess :members: gevent-24.11.1/docs/api/gevent.thread.rst000066400000000000000000000005301471441230600201410ustar00rootroot00000000000000=================================================================================================== :mod:`gevent.thread` -- Implementation of the standard :mod:`thread` module that spawns greenlets =================================================================================================== .. automodule:: gevent.thread :members: gevent-24.11.1/docs/api/gevent.threading.rst000066400000000000000000000005061471441230600206420ustar00rootroot00000000000000============================================================================================ :mod:`gevent.threading` -- Implementation of the standard :mod:`threading` using greenlets ============================================================================================ .. automodule:: gevent.threading :members: gevent-24.11.1/docs/api/gevent.threadpool.rst000066400000000000000000000024171471441230600210410ustar00rootroot00000000000000 ===================================================== :mod:`gevent.threadpool` - A pool of native threads ===================================================== .. currentmodule:: gevent.threadpool .. autoclass:: ThreadPool :inherited-members: :members: imap, imap_unordered, map, map_async, apply_async, kill, join, spawn .. method:: apply(func, args=None, kwds=None) Rough equivalent of the :func:`apply()` builtin function, blocking until the result is ready and returning it. The ``func`` will *usually*, but not *always*, be run in a way that allows the current greenlet to switch out (for example, in a new greenlet or thread, depending on implementation). But if the current greenlet or thread is already one that was spawned by this pool, the pool may choose to immediately run the `func` synchronously. .. note:: As implemented, attempting to use :meth:`Threadpool.apply` from inside another function that was itself spawned in a threadpool (any threadpool) will cause the function to be run immediately. .. versionchanged:: 1.1a2 Now raises any exception raised by *func* instead of dropping it. .. autoclass:: ThreadPoolExecutor gevent-24.11.1/docs/api/gevent.time.rst000066400000000000000000000003601471441230600176310ustar00rootroot00000000000000:mod:`gevent.time` -- Makes *sleep* gevent aware ================================================ This module has all the members of :mod:`time`, but the *sleep* function is :func:`gevent.sleep`. .. automodule:: gevent.time :members: gevent-24.11.1/docs/api/gevent.timeout.rst000066400000000000000000000006001471441230600203560ustar00rootroot00000000000000=============================================== Cooperative Timeouts Using ``gevent.Timeout`` =============================================== Cooperative timeouts can be implemented with the :class:`gevent.Timeout` class, and the helper function :func:`gevent.with_timeout`. .. autoclass:: gevent.Timeout :members: :undoc-members: :special-members: __enter__, __exit__ gevent-24.11.1/docs/api/gevent.util.rst000066400000000000000000000002561471441230600176540ustar00rootroot00000000000000=========================================== :mod:`gevent.util` -- Low-level utilities =========================================== .. 
automodule:: gevent.util :members: gevent-24.11.1/docs/api/gevent.wsgi.rst000066400000000000000000000012461471441230600176500ustar00rootroot00000000000000========================================================== ``gevent.wsgi`` -- Historical note only; does not exist ========================================================== .. warning:: Beginning in gevent 1.3, this module no longer exists. Starting in gevent 1.0a1 (2011), this module was nothing more than an alias for :mod:`gevent.pywsgi`, which is what should be used instead. Prior to gevent 1.0, when gevent was based on libevent, ``gevent.wsgi`` used libevent's http support, but that was dropped with the introduction of libev. libevent's http support had several limitations, including not supporting stream, not supporting pipelining, and not supporting SSL. gevent-24.11.1/docs/api/index.rst000066400000000000000000000034611471441230600165200ustar00rootroot00000000000000=============== API reference =============== Functional Areas ================ This section of the document groups gevent APIs by functional area (it may not be complete). For a complete alphabetical listing by module, see :ref:`api_module_listing`. High-level concepts ------------------- .. toctree:: :maxdepth: 1 gevent gevent.timeout gevent.greenlet .. _networking: Networking interfaces --------------------- .. toctree:: :maxdepth: 1 gevent.socket gevent.ssl gevent.select gevent.selectors Synchronization primitives (locks, queues, events) -------------------------------------------------- .. toctree:: :maxdepth: 1 gevent.event gevent.queue gevent.local gevent.lock Low-level details ----------------- .. toctree:: :maxdepth: 1 gevent.hub gevent.core .. _api_module_listing: Module Listing ============== This section of the document groups gevent APIs by module. .. This should be sorted alphabetically .. toctree:: :maxdepth: 1 gevent gevent.backdoor gevent.baseserver gevent.builtins gevent.contextvars gevent.core gevent.event gevent.events gevent.exceptions gevent.fileobject gevent.hub gevent.local gevent.lock gevent.monkey gevent.os gevent.pool gevent.pywsgi gevent.queue gevent.resolver.ares gevent.resolver.blocking gevent.resolver.dnspython gevent.resolver.thread gevent.select gevent.selectors gevent.server gevent.signal gevent.socket gevent.ssl gevent.subprocess gevent.thread gevent.threading gevent.threadpool gevent.time gevent.util Deprecated Modules ------------------ These modules are deprecated and should not be used in new code. .. toctree:: gevent.ares gevent.wsgi Examples ======== .. toctree:: /examples/index gevent-24.11.1/docs/changelog.rst000066400000000000000000000005071471441230600165650ustar00rootroot00000000000000.. include:: ../CHANGES.rst Older Releases ============== For full information on older versions, see :doc:`older_releases`. .. toctree:: :caption: Older change logs, without context. :maxdepth: 2 changelog_1_5 changelog_1_4 changelog_1_3 changelog_1_2 changelog_1_1 changelog_1_0 changelog_pre gevent-24.11.1/docs/changelog_1_0.rst000066400000000000000000000607501471441230600172320ustar00rootroot00000000000000================= Changes for 1.0 ================= .. currentmodule:: gevent Release 1.0.2 ============= - Fix LifoQueue.peek() to return correct element. :pr:`456`. Patch by Christine Spang. - Upgrade to libev 4.19 - Remove SSL3 entirely as default TLS protocol - Import socket on Windows (closes :issue:`459`) - Fix C90 syntax error (:pr:`449`) - Add compatibility with Python 2.7.9's SSL changes. :issue:`477`. 
Release 1.0.1 ============= - Fix :issue:`423`: Pool's imap/imap_unordered could hang forever. Based on patch and test by Jianfei Wang. Release 1.0 (Nov 26, 2013) ========================== - pywsgi: Pass copy of error list instead of direct reference. Thanks to Jonathan Kamens, Matt Iversen. - Ignore the autogenerated doc/gevent.*.rst files. Patch by Matthias Urlichs. - Fix cythonpp.py on Windows. Patch by Jeryn Mathew. - Remove gevent.run (use gevent.wait). Release 1.0rc3 (Sep 14, 2013) ============================= - Fix :issue:`251`: crash in gevent.core when accessing destroyed loop. - Fix :issue:`235`: Replace self._threadpool.close() with self._threadpool.kill() in hub.py. Patch by Jan-Philip Gehrcke. - Remove unused timeout from select.py (:issue:`254`). Patch by Saúl Ibarra Corretgé. - Rename Greenlet.link()'s argument to 'callback' (closes :issue:`244`). - Fix parallel build (:issue:`193`). Patch by Yichao Yu. - Fix :issue:`263`: potential UnboundLocalError: 'length' in gevent.pywsgi. - Simplify psycopg2_pool.py (:issue:`239`). Patch by Alex Gaynor. - pywsgi: allow Content-Length in GET requests (:issue:`264`). Patch by 陈小玉. - documentation fixes (:issue:`281`) [philipaconrad]. - Fix old documentation about default blocking behavior of kill, killall (:issue:`306`). Patch by Daniel Farina. - Fix :issue:`6`: patch sys after thread. Patch by Anton Patrushev. - subprocess: fix check_output on Py2.6 and older (:issue:`265`). Thanks to Marc Sibson for test. - Fix :issue:`302`: "python -m gevent.monkey" now sets ``__file__`` properly. - pywsgi: fix logging when bound on unix socket (:issue:`295`). Thanks to Chris Meyers, Eugene Pankov. - pywsgi: readout request data to prevent ECONNRESET - Fix :issue:`303`: 'requestline' AttributeError in pywsgi. Thanks to Neil Chintomby. - Fix :issue:`79`: Properly handle HTTP versions. Patch by Luca Wehrstedt. - Fix :issue:`216`: propagate errors raised by Pool.map/imap Release 1.0rc2 (Dec 10, 2012) ============================= - Fixed :issue:`210`: callbacks were not run for non-default loop (bug introduced in 1.0rc1). - patch_all() no longer patches subprocess unless ``subprocess=True`` is passed. - Fixed AttributeError in hub.Waiter. - Fixed :issue:`181`: make hidden imports visible to freezing tools like py2exe. Patch by Ralf Schmitt. - Fixed :issue:`202`: periodically yield when running callbacks (sleep(0) cannot block the event loop now). - Fixed :issue:`204`: os.tp_read/tp_write did not propogate errors to the caller. - Fixed :issue:`217`: do not set SO_REUSEADDR on Windows. - Fixed bug in --module argument for gevent.monkey. Patch by Örjan Persson. - Remove warning from threadpool.py about mixing fork() and threads. - Cleaned up hub.py from code that was needed to support older greenlets. Patch by Saúl Ibarra Corretgé. - Allow for explicit default loop creation via ``get_hub(default=True)``. Patch by Jan-Philip Gehrcke. Release 1.0rc1 (Oct 30, 2012) ============================= - Fixed hub.switch() not to touch stacktrace when switching. greenlet restores the exception information correctly since version 0.3.2. gevent now requires greenlet >= 0.3.2 - Added gevent.wait() and gevent.iwait(). This is like gevent.joinall() but supports more objects, including Greenlet, Event, Semaphore, Popen. Without arguments it waits for the event loop to finish (previously gevent.run() did that). gevent.run will be removed before final release and gevent.joinall() might be deprecated. 
- Reimplemented loop.run_callback with a list and a single prepare watcher; this fixes the order of spawns and improves performance a little. - Fixes Semaphore/Lock not to init hub in ``__init__``, so that it's possible to have module-global locks without initializing the hub. This fixes monkey.patch_all() not to init the hub. - New implementation of callbacks that executes them in the order they were added. core.loop.callback is removed. - Fixed 2.5 compatibility. - Fixed crash on Windows when request 'prev' and 'attr' attributes of 'stat' watcher. The attribute access still fails, but now with an exception. - Added known_failures.txt that lists all the tests that fail. It can be used by testrunner.py via expected option. It's used when running the test suite in travis. - Fixed socket, ssl and fileobject to not mask EBADF error - it is now propogated to the caller. Previously EBADF was converted to empty read/write. Thanks to Vitaly Kruglikov - Removed gevent.event.waitall() - Renamed FileObjectThreadPool -> FileObjectThread - Greenlet: Fixed :issue:`143`: greenlet links are now executed in the order they were added - Synchronize access to FileObjectThread with Semaphore - EINVAL is no longer handled in fileobject. monkey: - Fixed :issue:`178`: disable monkey patch os.read/os.write - Fixed monkey.patch_thread() to patch threading._DummyThread to avoid leak in threading._active. Original patch by Wil Tan. - added Event=False argument to patch_all() and patch_thread - added patch_sys() which patches stdin, stdout, stderr with FileObjectThread wrappers. Experimental / buggy. - monkey patching everything no longer initializes the hub/event loop. socket: - create_connection: do not lookup IPv6 address if IPv6 is unsupported. Patch by Ralf Schmitt. pywsgi: - Fixed :issue:`86`: bytearray is now supported. Original patch by Aaron Westendorf. - Fixed :issue:`116`: Multiline HTTP headers are now handled properly. Patch by Ralf Schmitt. subprocess: - Fixed Windows compatibility. The wait() method now also supports 'timeout' argument on Windows. - Popen: Added rawlink() method, which makes Popen objects supported by gevent.wait(). Updated examples/processes.py - Fixed :issue:`148`: read from errpipe_read in small chunks, to avoid trigger EINVAL issue on Mac OS X. Patch by Vitaly Kruglikov - Do os._exit() in "finally" section to avoid executing unrelated code. Patch by Vitaly Kruglikov. resolver_ares: - improve getaddrinfo: For string ports (e.g. "http") resolver_ares/getaddrinfo previously only checked either getservbyname(port, "tcp") or getservbyname(port, "udp"), but never both. It now checks both of them. - gevent.ares.channel now accepts strings as arguments - upgraded c-ares to cares-1_9_1-12-g805c736 - it is now possible to configure resolver_ares directly with environ, like GEVENTARES_SERVERS os: - Renamed threadpool_read/write to tp_read/write. - Removed posix_read, posix_write. - Added nb_read, nb_write, make_nonblocking. hub: - The system error is now raised immediately in main greenlet in all cases. - Dropped support for old greenlet versions (need >= 0.3.2 now) core: - allow 'callback' property of watcher to be set to None. "del w.callback" no longer works. - added missing 'noinotify' flag Misc: - gevent.thread: allocate_lock is now an alias for LockType/Semaphore. That way it does not fail when being used as class member. - Updated greentest.py to start timeouts with ``ref=False``. 
- pool: remove unused get_values() function - setup.py now recognizes GEVENTSETUP_EV_VERIFY env var which sets EV_VERIFY macro when compiling - Added a few micro benchmarks - stdlib tests that we care about are now included in greentest/2.x directories, so we don't depend on them being installed system-wide - updated util/makedist.py - the testrunner was completely rewritten. Release 1.0b4 (Sep 6, 2012) =========================== - Added gevent.os module with 'read' and 'write' functions. Patch by Geert Jansen. - Moved gevent.hub.fork to gevent.os module (it is still available as gevent.fork). - Fixed :issue:`148`: Made fileobject handle EINVAL, which is randomly raised by os.read/os.write on Mac OS X. Thanks to Mark Hingston. - Fixed :issue:`150`: gevent.fileobject.SocketAdapter.sendall() could needlessly wait for write event on the descriptor. Original patch by Mark Hingston. - Fixed AttributeError in baseserver. In case of error, start() would call kill() which was renamed to close(). Thanks to Vitaly Kruglikov. Release 1.0b3 (Jul 27, 2012) ============================ - New gevent.subprocess module - New gevent.fileobject module - Fixed ThreadPool to discard references of the objects passed to it (function, arguments) asap. Previously they could be stored for unlimited time until the thread gets a new job. - Fixed :issue:`138`: gevent.pool.Pool().imap_unordered hangs with an empty iterator. Thanks to exproxus. - Fixed :issue:`127`: ssl.py could raise TypeError in certain cases. Thanks to Johan Mjones. - Fixed socket.makefile() to keep the timeout setting of the socket instance. Thanks to Colin Marc. - Added 'copy()' method to queues. - The 'nochild' event loop config option is removed. The install_sigchld offer more flexible way of enabling child watchers. - core: all watchers except for 'child' now accept new 'priority' keyword argument - gevent.Timeout accepts new arguments: 'ref' and 'priority'. The default priority for Timeout is -1. - Hub.wait() uses Waiter now instead of raw switching - Updated libev to the latest CVS version - Made pywsgi to raise an AssertionError if non-zero content-length is passed to start_response(204/304) or if non-empty body is attempted to be written for 304/204 response - Removed pywsgi feature to capitalize the passed headers. - Fixed util/cythonpp.py to work on python3.2 (:issue:`123`). Patch by Alexandre Kandalintsev. - Added 'closed' readonly property to socket. - Added 'ref' read/write property to socket. - setup.py now parses CARES_EMBED and LIBEV_EMBED parameters, in addition to EMBED. - gevent.reinit() and gevent.fork() only reinit hub if it was created and do not create it themselves - Fixed setup.py not to add libev and c-ares to include dirs in non-embed mode. Patch by Ralf Schmitt. - Renamed util/make_dist.py to util/makedist.py - testrunner.py now saves more information about the system; the stat printing functionality is moved to a separate util/stat.py script. Release 1.0b2 (Apr 11, 2012) ============================ Major and backward-incompatible changes: - Made the threadpool-based resolver the default. To enable the ares-based resolver, set GEVENT_RESOLVER=ares env var. - Added support for child watchers (not available on Windows). - Libev loop now reaps all children by default. - If NOCHILD flag is passed to the loop, child watchers and child reaping are disabled. - Renamed gevent.coros to gevent.lock. The gevent.coros is still available but deprecated. - Added 'stat' watchers to loop. 
- The setup.py now recognizes gevent_embed env var. When set to "no", bundled c-ares and libev are ignored. - Added optional 'ref' argument to sleep(). When ref=false, the watchers created by sleep() do not hold gevent.run() from exiting. - ThreadPool now calls Hub.handle_error for exceptions in worker threads. - ThreadPool got new method: apply_e. - Added new extension module gevent._util and moved gevent.core.set_exc_info function there. - Added new extension module gevent._semaphore. It contains Semaphore class which is imported by gevent.lock as gevent.lock.Semaphore. Providing Semaphore in extension module ensures that trace function set with settrace will not be called during __exit__. Thanks to Ralf Schmitt. - It is now possible to kill or pre-spawn threads in ThreadPool by setting its 'size' property. core: - Make sure the default loop cannot be destroyed more than once, thus crashing the process. - Make Hub.destroy() method not to destroy the default loop, unless *destroy_loop* is *True*. Non-default loops are still destroyed by default. - loop: Removed properties from loop: fdchangecnt, timercnt, asynccnt. - loop: Added properties: sigfd, origflags, origflags_int - loop: The EVFLAG_NOENV is now always passed to libev. Thus LIBEV_FLAGS env variable is no longer checked. Use GEVENT_BACKEND. Misc: - Check that the argument of link() is callable. Raise TypeError when it's not. - Fixed TypeError in baseserver when parsing an address. - Pool: made add() and discard() usable by external users. Thanks to Danil Eremeev. - When specifying a class to import, it is now possible to use format path/package.module.name - pywsgi: Made sure format_request() does not fail if 'status' attribute is not set yet - pywsgi: Added REMOTE_PORT variable to the environment. Examples: - portforwarder.py now shows how to use gevent.run() to implement graceful shutdown of a server. - psycopg2_pool.py: Changed execute() to return rowcount. - psycopg2_pool.py: Added fetchall() and fetchiter() methods. Developer utilities: - When building, CYTHON env variable can be used to specify Cython executable to use. - util/make_dist.py now recongizes --fast and --revert options. Previous --rsync option is removed. - Added util/winvbox.py which automates building/testing/making binaries on Windows VM. - Fixed typos in exception handling code in testrunner.py - Fixed patching unittest.runner on Python2.7. This caused the details of test cases run lost. - Made testrunner.py kill the whole process group after test is done. Release 1.0b1 (Jan 10, 2012) ============================ Backward-incompatible changes: - Removed "link to greenlet" feature of Greenlet. - If greenlet module older than 0.3.2 is used, then greenlet.GreenletExit.__bases__ is monkey patched to derive from BaseException and not Exception. That way gevent.GreenletExit is always derived from BaseException, regardless of installed greenlet version. - Some code supporting Python 2.4 has been removed. Release highlights: - Added thread pool: gevent.threadpool.ThreadPool. - Added thread pool-based resolver. Enable with GEVENT_RESOLVER=thread. - Added UDP server: gevent.server.DatagramServer - A "configure" is now run on libev. This fixes a problem of 'kqueue' not being available on Mac OS X. - Gevent recognizes some environment variables now: - GEVENT_BACKEND allows passing argument to loop, e.g. "GEVENT_BACKEND=select" for force select backend - GEVENT_RESOLVER allows choosing resolver class. - GEVENT_THREADPOOL allows choosing thread pool class. 
- Added new examples: portforwarder, psycopg2_pool.py, threadpool.py, udp_server.py - Fixed non-embedding build. To build against system libev, remove or rename 'libev' directory. To build against system c-ares, remove or rename 'c-ares'. Thanks to Örjan Persson. misc: - gevent.joinall() method now accepts optional 'count' keyword. - gevent.fork() only calls reinit() in the child process now. - gevent.run() now returns False when exiting because of timeout or event (previous None). - Hub got a new method: destroy(). - Hub got a new property: threadpool. ares.pyx: - Fixed :issue:`104`: made ares_host_result pickable. Thanks to Shaun Cutts. pywsgi: - Removed unused deprecated 'wfile' property from WSGIHandler - Fixed :issue:`92`: raise IOError on truncated POST requests. - Fixed :issue:`93`: do not sent multiple "100 continue" responses core: - Fixed :issue:`97`: the timer watcher now calls ev_now_update() in start() and again() unless 'update' keyword is passed and set to False. - add set_syserr_cb() function; it's used by gevent internally. - gevent now installs syserr callback using libev's set_syserr_cb. This callback is called when libev encounters an error it cannot recover from. The default action is to print a message and abort. With the callback installed, a SystemError() is now raised in the main greenlet. - renamed 'backend_fd' property to 'fileno()' method. (not available if you build gevent against system libev) - added 'asynccnt' property (not available if you build gevent against system libev) - made loop.__repr__ output a bit more compact - the watchers check the arguments for validness now (previously invalid argument would crash libev). - The 'async' watcher now has send() method; - fixed time() function - libev has been upgraded to latest CVS version. - libev has been patched to use send()/recv() for evpipe on windows when libev_vfd.h is in effect resolver_ares: - Slightly improved compatibility with stdlib's socket in some error cases. socket: - Fixed close() method not to reference any globals - Fixed :issue:`115`: _dummy gets unexpected Timeout arg - Removed _fileobject used for python 2.4 compatibility in socket.py - Fixed :issue:`94`: fallback to buffer if memoryview fails in _get_memory on python 2.7 monkey: - Removed patch_httplib() - Fixed :issue:`112`: threading._sleep is not patched. Thanks to David LaBissoniere. - Added get_unpatched() function. However, it is slightly broken at the moment. backdoor: - make 'locals()' not spew out __builtin__.__dict__ in backdoor - add optional banner argument to BackdoorServer servers: - add server.DatagramServer; - StreamServer: 'ssl_enabled' is now a read-only property - servers no longer have 'kill' method; it has been renamed to 'close'. - listeners can now be configured as strings, e.g. ':80' or 80 - modify baseserver.BaseServer in such a way that makes it a good base class for both StreamServer and DatagramServer - BaseServer no longer accepts 'backlog' parameter. It is now done by StreamServer. 
- BaseServer implements start_accepting() and stop_accepting() methods - BaseServer now implements "temporarily stop accepting" strategy - BaseServer now has _do_read method which does everything except for actually calling accept()/recvfrom() - pre_start() method is renamed to init_socket() - renamed _stopped_event to _stop_event - 'started' is now a read-only property (which actually reports state of _stop_event) - post_stop() method is removed - close() now sets _stop_event(), thus setting 'started' to False, thus causing serve_forever() to exit - _tcp_listener() function is moved from baseserver.py to server.py - added 'fatal_errors' class attribute which is a tuple of all errnos that should kill the server coros: - Semaphore: add _start_notify() method - Semaphore: avoid copying list of links; rawlink() no longer schedules notification Release 1.0a3 (Sep 15, 2011) ============================ Added 'ref' property to all watchers. Settings it to False make watcher call ev_unref/ev_ref appropriately so that this watcher does not prevent loop.run()/hub.join()/run() from exiting. Made resolver_ares.Resolver use 'ref' property for internal watcher. In all servers, method "kill" was renamed to "close". The old name is available as deprecated alias. Added a few properties to the loop: backend_fd, fdchangecnt, timercnt. Upgraded c-ares to 1.7.5+patch. Fixed getaddrinfo to return results in the order (::1, IPv4, IPv6). Fixed getaddrinfo() to handle integer of string type. Thanks to kconor. Fixed gethostbyname() to handle '' (empty string). Fixed getaddrinfo() to convert UnicodeEncodeError into error('Int or String expected'). Fixed getaddrinfo() to uses the lowest 16 bits of passed port integer similar to built-in _socket. Fixed getnameinfo() to call getaddrinfo() to process arguments similar to built-in _socket. Fixed gethostbyaddr() to use getaddrinfo() to process arguments. version_info is now a 5-tuple. Added handle_system_error() method to Hub (used internally). Fixed Hub's run() method to never exit. This prevent inappropriate switches into parent greenlet. Fixed Hub.join() to return True if Hub was already dead. Added 'event' argument to Hub.join(). Added ``run()`` function to gevent top level package. Fixed Greenlet.start() to exit silently if greenlet was already started rather than raising :exc:`AssertionError`. Fixed Greenlet.start() not to schedule another switch if greenlet is already dead. Fixed gevent.signal() to spawn Greenlet instead of raw greenlet. Also it'll switch into the new greenlet immediately instead of scheduling additional callback. Do monkey patch create_connection() as gevent's version works better with gevent.socket.socket than the standard create_connection. pywsgi: make sure we don't try to read more requests if socket operation failed with EPIPE pywsgi: if we failed to send the reply, change 'status' to socket error so that the logs mention the error. Release 1.0a2 (Aug 2, 2011) =========================== Fixed a bug in gevent.queue.Channel class. (Thanks to Alexey Borzenkov) Release 1.0a1 (Aug 2, 2011) =========================== Backward-incompatible changes: - Dropped support for Python 2.4. - ``Queue(0)`` is now equivalent to an unbound queue and raises ``DeprecationError``. Use :class:`gevent.queue.Channel` if you need a channel. - Deprecated ability to pass a greenlet instance to :meth:`Greenlet.link`, :meth:`Greenlet.link_value` and :meth:`Greenlet.link_exception`. - All of :mod:`gevent.core` has been rewritten and the interface is not compatible. 
- :exc:`SystemExit` and :exc:`SystemError` now kill the whole process instead of printing a traceback. - Removed deprecated ``util.lazy_property`` property. - Removed ``gevent.dns`` module. - Removed deprecated gevent.sslold module - Removed deprecated gevent.rawgreenlet module - Removed deprecated name ``GreenletSet`` which used to be alias for ``Group``. - ``gevent.wsgi`` is now a deprecated alias for ``gevent.pywsgi``. Release highlights: - The :mod:`gevent.core` module now wraps libev's API and is not compatible with gevent 0.x. - Added a concept of pluggable event loops. By default gevent.core.loop is used, which is a wrapper around libev. - Added a concept of pluggable name resolvers. By default a resolver based on c-ares library is used. - Added support for multiple OS threads, each new thread will get its own Hub instance with its own event loop. - The release now includes and embeds the dependencies: libev and c-ares. - The standard :mod:`signal` works now as expected. - The unhandled errors are now handled uniformely by ``Hub.handle_error`` function. - Added :class:`gevent.queue.Channel` class to :mod:`gevent.queue` module. It is equivalent to ``Queue(0)`` in gevent 0.x, which is deprecated now. - Added method ``peek`` to :class:`gevent.queue.Queue` class. - Added ``idle`` function which blocks until the event loop is idle. - Added a way to gracefully shutdown the application by waiting for all outstanding greenlets/servers/watchers: ``Hub.join``. - Added new ``gevent.ares`` C extension which wraps ``c-ares`` and provides asynchronous DNS resolver. - Added new ``gevent.resolver_ares`` module provides synchronous API on top of ``gevent.ares``. The :mod:`gevent.socket` module: - DNS functions now use c-ares library rather than libevent-dns. This fixes a number of problems with name resolving: - Fix :issue:`2`: DNS resolver no longer breaks after ``fork()``. You still need to call :func:`gevent.fork` (`os.fork` is monkey patched with it if ``monkey.patch_all()`` was called). - DNS resolver no longer ignores ``/etc/resolv.conf`` and ``/etc/hosts``. - The following functions were added to socket module - gethostbyname_ex - getnameinfo - gethostbyaddr - getfqdn - Removed undocumented bind_and_listen and tcp_listener The :class:`gevent.hub.Hub` object: - Added ``join`` method which waits until the event loop exits or optional timeout expires. - Added ``wait`` method which waits until a watcher has got an event. - Added ``handle_error`` method which is called by all of gevent in case of unhandled exception. - Added ``print_exception`` method which is called by ``handle_error`` to print the exception traceback. The :class:`Greenlet` objects: - Added ``__nonzero__`` implementation that returns ``True`` after greenlet was started until it's dead. Previously greenlet was ``False`` after ``start()`` until it was first switched to. The :mod:`gevent.pool` module: - It is now possible to add raw greenlets to the pool. - The ``map`` and ``imap`` methods now start yielding the results as soon as possible. - The ``imap_unordered`` no longer swallows an exception raised while iterating its argument. Miscellaneous: - ``gevent.sleep()`` no longer raises an exception, instead it does ``sleep(0)``. - Added method ``clear`` to internal ``Waiter`` class. - Removed ``wait`` method from internal ``Waiter`` class. 
- The ``WSGIServer`` now sets ``max_accept`` to 1 if ``wsgi.multiprocessing`` is set to ``True``` - Added :func:`gevent.monkey.patch_module` function that monkey patches module using ``__implements__`` list provided by gevent module. All of gevent modules that replace stdlib module now have ``__implements__`` attribute. gevent-24.11.1/docs/changelog_1_1.rst000066400000000000000000000764371471441230600172440ustar00rootroot00000000000000================== Changes for 1.1 ================== .. currentmodule:: gevent 1.1.2 (Jul 21, 2016) ==================== - Python 2: ``sendall`` on a non-blocking socket could spuriously fail with a timeout. - If ``sys.stderr`` has been monkey-patched (not recommended), exceptions that the hub reports aren't lost and can still be caught. Reported in :issue:`825` by Jelle Smet. - :class:`selectors.SelectSelector` is properly monkey-patched regardless of the order of imports. Reported in :issue:`835` by Przemysław Węgrzyn. - Python 2: ``reload(site)`` no longer fails with a ``TypeError`` if gevent has been imported. Reported in :issue:`805` by Jake Hilton. 1.1.1 (Apr 4, 2016) =================== - Nested callbacks that set and clear an Event no longer cause ``wait`` to return prematurely. Reported in :issue:`771` by Sergey Vasilyev. - Fix build on Solaris 10. Reported in :issue:`777` by wiggin15. - The ``ref`` parameter to :func:`gevent.os.fork_and_watch` was being ignored. - Python 3: :class:`gevent.queue.Channel` is now correctly iterable, instead of raising a :exc:`TypeError`. - Python 3: Add support for :meth:`socket.socket.sendmsg`, :meth:`socket.socket.recvmsg` and :meth:`socket.socket.recvmsg_into` on platforms where they are defined. Initial :pr:`773` by Jakub Klama. 1.1.0 (Mar 5, 2016) =================== - Python 3: A monkey-patched :class:`threading.RLock` now properly blocks (or deadlocks) in ``acquire`` if the default value for *timeout* of -1 is used (which differs from gevent's default of None). The ``acquire`` method also raises the same :exc:`ValueError` exceptions that the standard library does for invalid parameters. Reported in :issue:`750` by Joy Zheng. - Fix a race condition in :class:`~gevent.event.Event` that made it return ``False`` when the event was set and cleared by the same greenlet before allowing a switch to already waiting greenlets. (Found by the 3.4 and 3.5 standard library test suites; the same as Python `bug 13502`_. Note that the Python 2 standard library still has this race condition.) - :class:`~gevent.event.Event` and :class:`~.AsyncResult` now wake waiting greenlets in the same (unspecified) order. Previously, ``AsyncResult`` tended to use a FIFO order, but this was never guaranteed. Both classes also use less per-instance memory. - Using a :class:`~logging.Logger` as a :mod:`pywsgi` error or request log stream no longer produces extra newlines. Reported in :issue:`756` by ael-code. - Windows: Installing from an sdist (.tar.gz) on PyPI no longer requires having Cython installed first. (Note that the binary installation formats (wheels, exes, msis) are preferred on Windows.) Reported in :issue:`757` by Ned Batchelder. - Issue a warning when :func:`~gevent.monkey.patch_all` is called with ``os`` set to False (*not* the default) but ``signal`` is still True (the default). This combination of parameters will cause signal handlers for ``SIGCHLD`` to not get called. In the future this might raise an error. Reported by Josh Zuech. 
- Issue a warning when :func:`~gevent.monkey.patch_all` is called more than once with different arguments. That causes the cumulative set of all True arguments to be patched, which may cause unexpected results. - Fix returning the original values of certain ``threading`` attributes from :func:`gevent.monkey.get_original`. .. _bug 13502: http://bugs.python.org/issue13502 1.1rc5 (Feb 24, 2016) ===================== - SSL: Attempting to send empty data using the :meth:`~socket.socket.sendall` method of a gevent SSL socket that has a timeout now returns immediately (like the standard library does), instead of incorrectly raising :exc:`ssl.SSLEOFError`. (Note that sending empty data with the :meth:`~socket.socket.send` method *does* raise ``SSLEOFError`` in both gevent and the standard library.) Reported in :issue:`719` by Mustafa Atik and Tymur Maryokhin, with a reproducible test case provided by Timo Savola. 1.1rc4 (Feb 16, 2016) ===================== - Python 2: Using the blocking API at import time when multiple greenlets are also importing should not lead to ``LoopExit``. Reported in :issue:`728` by Garrett Heel. - Python 2: Don't raise :exc:`OverflowError` when using the ``readline`` method of the WSGI input stream without a size hint or with a large size hint when the client is uploading a large amount of data. (This only impacted CPython 2; PyPy and Python 3 already handled this.) Reported in :issue:`289` by ggjjlldd, with contributions by Nathan Hoad. - :class:`~gevent.baseserver.BaseServer` and its subclasses like :class:`~gevent.pywsgi.WSGIServer` avoid allocating a new closure for each request, reducing overhead. - Python 2: Under 2.7.9 and above (or when the PEP 466 SSL interfaces are available), perform the same hostname validation that the standard library does; previously this was skipped. Also, reading, writing, or handshaking a closed :class:`~ssl.SSLSocket` now raises the same :exc:`ValueError` the standard library does, instead of an :exc:`AttributeError`. Found by updating gevent's copy of the standard library test cases. Initially reported in :issue:`735` by Dmitrij D. Czarkoff. - Python 3: Fix :meth:`~ssl.SSLSocket.unwrap` and SNI callbacks. Also raise the correct exceptions for unconnected SSL sockets and properly validate SSL hostnames. Found via updated standard library tests. - Python 3: Add missing support for :meth:`socket.socket.sendfile`. Found via updated standard library tests. - Python 3.4+: Add missing support for :meth:`socket.socket.get_inheritable` and :meth:`~socket.socket.set_inheritable`. Found via updated standard library tests. 1.1rc3 (Jan 04, 2016) ===================== - Python 2: Support the new PEP 466 :mod:`ssl` interfaces on any Python 2 version that supplies them, not just on the versions it officially shipped with. Some Linux distributions, including RedHat/CentOS and Amazon have backported the changes to older versions. Reported in :issue:`702`. - PyPy: An interaction between Cython compiled code and the garbage collector caused PyPy to crash when a previously-allocated Semaphore was used in a ``__del__`` method, something done in the popular libraries ``requests`` and ``urllib3``. Due to this and other Cython related issues, the Semaphore class is no longer compiled by Cython on PyPy. This means that it is now traceable and not exactly as atomic as the Cython version, though the overall semantics should remain the same. Reported in :issue:`704` by Shaun Crampton. - PyPy: Optimize the CFFI backend to use less memory (two pointers per watcher). 
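For illustration only (this sketch is not part of the original release notes), the Python 3 socket APIs mentioned in the 1.1rc4 entries above, ``sendfile`` and the inheritable-flag accessors, can be exercised on gevent sockets roughly as follows. The use of ``socketpair``, a temporary file, and the tiny payload are illustrative choices, not anything prescribed by the release notes.

.. code-block:: python

    from gevent import monkey; monkey.patch_all()

    import socket
    import tempfile

    # A self-contained pair of connected sockets; no external server needed.
    a, b = socket.socketpair()
    a.set_inheritable(False)        # get_inheritable/set_inheritable (Python 3.4+)
    assert a.get_inheritable() is False

    with tempfile.TemporaryFile() as f:
        f.write(b"hello from gevent")
        f.seek(0)
        a.sendfile(f)               # socket.sendfile support (Python 3)

    a.close()
    print(b.recv(64))               # b'hello from gevent'
    b.close()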
- Python 3: The WSGI ``PATH_INFO`` entry is decoded from URL escapes using latin-1, not UTF-8. This improves compliance with PEP 3333 and compatibility with some frameworks like Django. Fixed in :pr:`712` by Ruben De Visscher. 1.1rc2 (Dec 11, 2015) ===================== - Exceptions raised by gevent's SSL sockets are more consistent with the standard library (e.g., gevent's Python 3 SSL sockets raise :exc:`socket.timeout` instead of :exc:`ssl.SSLError`, a change introduced in Python 3.2). - Python 2: gevent's socket's ``sendall`` method could completely ignore timeouts in some cases. The timeout now refers to the total time taken by ``sendall``. - gevent's SSL socket's ``sendall`` method should no longer raise ``SSL3_WRITE_PENDING`` in rare cases when sending large buffers. Reported in :issue:`317`. - :func:`gevent.signal.signal` now allows resetting (SIG_DFL) and ignoring (SIG_IGN) the SIGCHLD signal at the process level (although this may allow race conditions with libev child watchers). Reported in :issue:`696` by Adam Ning. - :func:`gevent.spawn_raw` now accepts keyword arguments, as previously (incorrectly) documented. Reported in :issue:`680` by Ron Rothman. - PyPy: PyPy 2.6.1 or later is now required (4.0.1 or later is recommended). - The CFFI backend is now built and usable on CPython implementations (except on Windows) if ``cffi`` is installed before gevent is installed. To use the CFFI backend, set the environment variable ``GEVENT_CORE_CFFI_ONLY`` before starting Python. This can aid debugging in some cases and helps ensure parity across all combinations of supported platforms. - The CFFI backend now calls the callback of a watcher whose ``args`` attribute is set to ``None``, just like the Cython backend does. It also only allows ``args`` to be a tuple or ``None``, again matching the Cython backend. - PyPy/CFFI: Fix a potential crash when using stat watchers. - PyPy/CFFI: Encode unicode paths for stat watchers using :meth:`sys.getfilesystemencoding` like the Cython backend. - The internal implementation modules ``gevent._fileobject2``, ``gevent._fileobject3``, and ``gevent._util`` were removed. These haven't been used or tested since 1.1b1. 1.1rc1 (Nov 14, 2015) ===================== - Windows/Python 3: Finish porting the :mod:`gevent.subprocess` module, fixing a large number of failing tests. Examples of failures are in :issue:`668` and :issue:`669` reported by srossross. - Python 3: The SSLSocket class should return an empty ``bytes`` object on an EOF instead of a ``str``. Fixed in :pr:`674` by Dahoon Kim. - Python 2: Workaround a buffering bug in the stdlib ``io`` module that caused ``FileObjectPosix`` to be slower than necessary in some cases. Reported in :issue:`675` by WGH-. - PyPy: Fix a crash. Reported in :issue:`676` by Jay Oster. .. caution:: There are some remaining, relatively rare, PyPy crashes, but their ultimate cause is unknown (gevent, CFFI, greenlet, the PyPy GC?). PyPy users can contribute to :issue:`677` to help track them down. - PyPy: Exceptions raised while handling an error raised by a loop callback function behave like the CPython implementation: the exception is printed, and the rest of the callbacks continue processing. - If a Hub object with active watchers was destroyed and then another one created for the same thread, which itself was then destroyed with ``destroy_loop=True``, the process could crash. Documented in :issue:`237` and fix based on :pr:`238`, both by Jan-Philip Gehrcke. 
- Python 3: Initializing gevent's hub for the first time simultaneously in multiple native background threads could fail with ``AttributeError`` and ``ImportError``. Reported in :issue:`687` by Gregory Petukhov. 1.1b6 (Oct 17, 2015) ==================== - PyPy: Fix a memory leak for code that allocated and disposed of many :class:`gevent.lock.Semaphore` subclasses. If monkey-patched, this could also apply to :class:`threading.Semaphore` objects. Reported in :issue:`660` by Jay Oster. - PyPy: Cython version 0.23.4 or later must be used to avoid a memory leak (`details`_). Thanks to Jay Oster. - Allow subclasses of :class:`~.WSGIHandler` to handle invalid HTTP client requests. Reported by not-bob. - :class:`~.WSGIServer` more robustly supports :class:`~logging.Logger`-like parameters for ``log`` and ``error_log`` (as introduced in 1.1b1, this could cause integration issues with gunicorn). Reported in :issue:`663` by Jay Oster. - :class:`~gevent.threading._DummyThread` objects, created in a monkey-patched system when :func:`threading.current_thread` is called in a new greenlet (which often happens implicitly, such as when logging) are much lighter weight. For example, they no longer allocate and then delete a :class:`~gevent.lock.Semaphore`, which is especially important for PyPy. - Request logging by :mod:`gevent.pywsgi` formats the status code correctly on Python 3. Reported in :issue:`664` by Kevin Chen. - Restore the ability to take a weak reference to instances of exactly :class:`gevent.lock.Semaphore`, which was unintentionally removed as part of making ``Semaphore`` atomic on PyPy on 1.1b1. Reported in :issue:`666` by Ivan-Zhu. - Build Windows wheels for Python 3.5. Reported in :pr:`665` by Hexchain Tong. .. _details: https://mail.python.org/pipermail/cython-devel/2015-October/004571.html 1.1b5 (Sep 18, 2015) ==================== - :mod:`gevent.subprocess` works under Python 3.5. In general, Python 3.5 has preliminary support. Reported in :issue:`653` by Squeaky. - :func:`Popen.communicate ` honors a ``timeout`` argument even if there is no way to communicate with the child process (none of stdin, stdout and stderr were set to ``PIPE``). Noticed as part of the Python 3.5 test suite for the new function ``subprocess.run`` but impacts all versions (``timeout`` is an official argument under Python 3 and a gevent extension with slightly different semantics under Python 2). - Fix a possible ``ValueError`` from :meth:`Queue.peek `. Reported in :issue:`647` by Kevin Chen. - Restore backwards compatibility for using ``gevent.signal`` as a callable, which, depending on the order of imports, could be broken after the addition of the ``gevent.signal`` module. Reported in :issue:`648` by Sylvain Zimmer. - gevent blocking operations performed at the top-level of a module after the system was monkey-patched under Python 2 could result in raising a :exc:`~gevent.hub.LoopExit` instead of completing the expected blocking operation. Note that performing gevent blocking operations in the top-level of a module is typically not recommended, but this situation can arise when monkey-patching existing scripts. Reported in :issue:`651` and :issue:`652` by Mike Kaplinskiy. - ``SIGCHLD`` and ``waitpid`` now work for the pids returned by the (monkey-patched) ``os.forkpty`` and ``pty.fork`` functions in the same way they do for the ``os.fork`` function. Reported in :issue:`650` by Erich Heine. 
- :class:`~gevent.pywsgi.WSGIServer` and :class:`~gevent.pywsgi.WSGIHandler` do a better job detecting and reporting potential encoding errors for headers and the status line during :meth:`~gevent.pywsgi.WSGIHandler.start_response` as recommended by the `WSGI specification`_. In addition, under Python 2, unnecessary encodings and decodings (often a trip through the ASCII encoding) are avoided for conforming applications. This is an enhancement of an already documented and partially enforced constraint: beginning in 1.1a1, under Python 2, ``u'abc'`` would typically previously have been allowed, but ``u'\u1f4a3'`` would not; now, neither will be allowed, more closely matching the specification, improving debuggability and performance and allowing for better error handling both by the application and by gevent (previously, certain encoding errors could result in gevent writing invalid/malformed HTTP responses). Reported by Greg Higgins and Carlos Sanchez. - Code coverage by tests is now reported on `coveralls.io`_. .. _WSGI specification: https://www.python.org/dev/peps/pep-3333/#the-start-response-callable .. _coveralls.io: https://coveralls.io/github/gevent/gevent 1.1b4 (Sep 4, 2015) =================== - Detect and raise an error for several important types of programming errors even if Python interpreter optimizations are enabled with ``-O`` or ``PYTHONOPTIMIZE``. Previously these would go undetected if optimizations were enabled, potentially leading to erratic, difficult to debug behaviour. - Fix an ``AttributeError`` from ``gevent.queue.Queue`` when ``peek`` was called on an empty ``Queue``. Reported in :issue:`643` by michaelvol. - Make ``SIGCHLD`` handlers specified to :func:`gevent.signal.signal` work with the child watchers that are used by default. Also make :func:`gevent.os.waitpid` work with a first argument of -1. (Also applies to the corresponding monkey-patched stdlib functions.) Noted by users of gunicorn. - Under Python 2, any timeout set on a socket would be ignored when using the results of ``socket.makefile``. Reported in :issue:`644` by Karan Lyons. 1.1b3 (Aug 16, 2015) ==================== - Fix an ``AttributeError`` from ``gevent.monkey.patch_builtins`` on Python 2 when the `future`_ library is also installed. Reported by Carlos Sanchez. - PyPy: Fix a ``DistutilsModuleError`` or ``ImportError`` if the CFFI module backing ``gevent.core`` needs to be compiled when the hub is initialized (due to a missing or invalid ``__pycache__`` directory). Now, the module will be automatically compiled when gevent is imported (this may produce compiler output on stdout). Reported in :issue:`619` by Thinh Nguyen and :issue:`631` by Andy Freeland, with contributions by Jay Oster and Matt Dupre. - PyPy: Improve the performance of ``gevent.socket.socket:sendall`` with large inputs. `bench_sendall.py`_ now performs about as well on PyPy as it does on CPython, an improvement of 10x (from ~60MB/s to ~630MB/s). See this `pypy bug`_ for details. - Fix a possible ``TypeError`` when calling ``gevent.socket.wait``. Reported in :issue:`635` by lanstin. - ``gevent.socket.socket:sendto`` properly respects the socket's blocking status (meaning it can raise EWOULDBLOCK now in cases it wouldn't have before). Reported in :pr:`634` by Mike Kaplinskiy. - Common lookup errors using the :mod:`threaded resolver ` are no longer always printed to stderr since they are usually out of the programmer's control and caught explicitly. (Programming errors like ``TypeError`` are still printed.)
Reported in :issue:`617` by Jay Oster and Carlos Sanchez. - PyPy: Fix a ``TypeError`` from ``gevent.idle()``. Reported in :issue:`639` by chilun2008. - The :func:`~gevent.pool.Pool.imap_unordered` methods of a pool-like object support a ``maxsize`` parameter to limit the number of results buffered waiting for the consumer. Reported in :issue:`638` by Sylvain Zimmer. - The class :class:`gevent.queue.Queue` now consistently orders multiple blocked waiting ``put`` and ``get`` callers in the order they arrived. Previously, due to an implementation quirk this was often roughly the case under CPython, but not under PyPy. Now they both behave the same. - The class :class:`gevent.queue.Queue` now supports the :func:`len` function. .. _future: http://python-future.org .. _bench_sendall.py: https://raw.githubusercontent.com/gevent/gevent/master/greentest/bench_sendall.py .. _pypy bug: https://bitbucket.org/pypy/pypy/issues/2091/non-blocking-socketsend-slow-gevent 1.1b2 (Aug 5, 2015) =================== - Enable using the :mod:`c-ares resolver ` under PyPy. Note that its performance characteristics are probably sub-optimal. - On some versions of PyPy on some platforms (notably 2.6.0 on 64-bit Linux), enabling ``gevent.monkey.patch_builtins`` could cause PyPy to crash. Reported in :issue:`618` by Jay Oster. - :func:`gevent.kill` raises the correct exception in the target greenlet. Reported in :issue:`623` by Jonathan Kamens. - Various fixes on Windows. Reported in :issue:`625`, :issue:`627`, and :issue:`628` by jacekt and Yuanteng (Jeff) Pei. Fixed in :pr:`624`. - Add :meth:`~gevent.fileobject.FileObjectPosix.readable` and :meth:`~gevent.fileobject.FileObjectPosix.writable` methods to :class:`~gevent.fileobject.FileObjectPosix`; this fixes e.g., help() on Python 3 when monkey-patched. 1.1b1 (Jul 17, 2015) ==================== - ``setup.py`` can be run from a directory containing spaces. Reported in :issue:`319` by Ivan Smirnov. - ``setup.py`` can build with newer versions of clang on OS X. They enforce the distinction between CFLAGS and CPPFLAGS. - ``gevent.lock.Semaphore`` is atomic on PyPy, just like it is on CPython. This comes at a small performance cost on PyPy. - Fixed regression that failed to set the ``successful`` value to False when killing a greenlet before it ran with a non-default exception. Fixed in :pr:`608` by Heungsub Lee. - libev's child watchers caused :func:`os.waitpid` to become unreliable due to the use of signals on POSIX platforms. This was especially noticeable when using :mod:`gevent.subprocess` in combination with ``multiprocessing``. Now, the monkey-patched ``os`` module provides a :func:`~gevent.os.waitpid` function that seeks to ameliorate this. Reported in :issue:`600` by champax and :issue:`452` by Łukasz Kawczyński. - On platforms that implement :class:`select.poll`, provide a gevent-friendly :class:`gevent.select.poll` and corresponding monkey-patch. Implemented in :pr:`604` by Eddi Linder. - Allow passing of events to the io callback under PyPy. Reported in :issue:`531` by M. Nunberg and implemented in :pr:`604`. - :func:`gevent.thread.allocate_lock` (and so a monkey-patched standard library :func:`~thread.allocate_lock`) more closely matches the behaviour of the builtin: an unlocked lock cannot be released, and attempting to do so throws the correct exception (``thread.error`` on Python 2, ``RuntimeError`` on Python 3). Previously, over-releasing a lock was silently ignored. Reported in :issue:`308` by Jędrzej Nowak. 
- :class:`gevent.fileobject.FileObjectThread` uses the threadpool to close the underlying file-like object. Reported in :issue:`201` by vitaly-krugl. - Malicious or malformed HTTP chunked transfer encoding data sent to the :class:`pywsgi handler ` is handled more robustly, resulting in "HTTP 400 bad request" responses instead of a 500 error or, in the worst case, a server-side hang. Reported in :issue:`229` by Björn Lindqvist. - Importing the standard library ``threading`` module *before* using ``gevent.monkey.patch_all()`` no longer causes Python 3.4 to fail to get the ``repr`` of the main thread, and other CPython platforms to return an unjoinable DummyThread. (Note that this is not recommended.) Reported in :issue:`153`. - Under Python 2, use the ``io`` package to implement :class:`~gevent.fileobject.FileObjectPosix`. This unifies the code with the Python 3 implementation, and fixes problems with using ``seek()``. See :issue:`151`. - Under Python 2, importing a module that uses gevent blocking functions at its top level from multiple greenlets no longer produces import errors (Python 3 handles this case natively). Reported in :issue:`108` by shaun and initial fix based on code by Sylvain Zimmer. - :func:`gevent.spawn`, :func:`spawn_raw` and :func:`spawn_later`, as well as the :class:`~gevent.Greenlet` constructor, immediately produce useful ``TypeErrors`` if asked to run something that cannot be run. Previously, the spawned greenlet would die with an uncaught ``TypeError`` the first time it was switched to. Reported in :issue:`119` by stephan. - Recursive use of :meth:`ThreadPool.apply ` no longer raises a ``LoopExit`` error (using ``ThreadPool.spawn`` and then ``get`` on the result still could; you must be careful to use the correct hub). Reported in :issue:`131` by 8mayday. - When the :mod:`threading` module is :func:`monkey-patched `, the module-level lock in the :mod:`logging` module is made greenlet-aware, as are the instance locks of any configured handlers. This makes it safer to import modules that use the standard pattern of creating a module-level :class:`~logging.Logger` instance before monkey-patching. Configuring ``logging`` with a basic configuration and then monkey-patching is also safer (but not configurations that involve such things as the ``SocketHandler``). - Fix monkey-patching of :class:`threading.RLock` under Python 3. - Under Python 3, monkey-patching at the top-level of a module that was imported by another module could result in a :exc:`RuntimeError` from :mod:`importlib`. Reported in :issue:`615` by Daniel Mizyrycki. (The same thing could happen under Python 2 if a ``threading.RLock`` was held around the monkey-patching call; this is less likely but not impossible with import hooks.) - Fix configuring c-ares for a 32-bit Python when running on a 64-bit platform. Reported in :issue:`381` and fixed in :pr:`616` by Chris Lane. Additional fix in :pr:`626` by Kevin Chen. - (Experimental) Let the :class:`pywsgi.WSGIServer` accept a :class:`logging.Logger` instance for its ``log`` and (new) ``error_log`` parameters. Take care that the system is fully monkey-patched very early in the process's lifetime if attempting this, and note that non-file handlers have not been tested. Fixes :issue:`106`. 1.1a2 (Jul 8, 2015) =================== - ``gevent.threadpool.ThreadPool.imap`` and ``imap_unordered`` now accept multiple iterables.
- (Experimental) Exceptions raised from iterating using the ``ThreadPool`` or ``Group`` mapping/application functions should now have the original traceback. - :meth:`gevent.threadpool.ThreadPool.apply` now raises any exception raised by the called function, the same as :class:`~gevent.pool.Group`/:class:`~gevent.pool.Pool` and the builtin :func:`apply` function. This obsoletes the undocumented ``apply_e`` function. Original PR :issue:`556` by Robert Estelle. - Monkey-patch the ``selectors`` module from ``patch_all`` and ``patch_select`` on Python 3.4. See :issue:`591`. - Additional query functions for the :mod:`gevent.monkey` module allow knowing what was patched. Discussed in :issue:`135` and implemented in :pr:`325` by Nathan Hoad. - In non-monkey-patched environments under Python 2.7.9 or above or Python 3, using a gevent SSL socket could cause the greenlet to block. See :issue:`597` by David Ford. - :meth:`gevent.socket.socket.sendall` supports arbitrary objects that implement the buffer protocol (such as ctypes structures), just like native sockets. Reported in :issue:`466` by tzickel. - Added support for the ``onerror`` attribute present in CFFI 1.2.0 for better signal handling under PyPy. Thanks to Armin Rigo and Omer Katz. (See https://bitbucket.org/cffi/cffi/issue/152/handling-errors-from-signal-handlers-in) - The :mod:`gevent.subprocess` module is closer in behaviour to the standard library under Python 3, at least on POSIX. The ``pass_fds``, ``restore_signals``, and ``start_new_session`` arguments are now implemented, as are the ``timeout`` parameters to various functions. Under Python 2, the previously undocumented ``timeout`` parameter to :meth:`Popen.communicate ` raises an exception like its Python 3 counterpart. - An exception starting a child process with the :mod:`gevent.subprocess` module no longer leaks file descriptors. Reported in :pr:`374` by 陈小玉. - The example ``echoserver.py`` no longer binds to the standard X11 TCP port. Reported in :issue:`485` by minusf. - :func:`gevent.iwait` no longer throws :exc:`~gevent.hub.LoopExit` if the caller switches greenlets between return values. Reported and initial patch in :issue:`467` by Alexey Borzenkov. - The default threadpool and default threaded resolver work in a forked child process, such as with :class:`multiprocessing.Process`. Previously the child process would hang indefinitely. Reported in :issue:`230` by Lx Yu. - Fork watchers are more likely to (eventually) get called in a multi-threaded program (except on Windows). See :issue:`154`. - :func:`gevent.killall` accepts an arbitrary iterable for the greenlets to kill. Reported in :issue:`404` by Martin Bachwerk; seen in combination with older versions of simple-requests. - :class:`gevent.local.local` objects are now eligible for garbage collection as soon as the greenlet finishes running, matching the behaviour of the built-in :class:`threading.local` (when implemented natively). Reported in :issue:`387` by AusIV. - Killing a greenlet (with :func:`gevent.kill` or :meth:`gevent.Greenlet.kill`) before it is actually started and switched to now prevents the greenlet from ever running, instead of raising an exception when it is later switched to. See :issue:`330` reported by Jonathan Kamens. 1.1a1 (Jun 29, 2015) ==================== - Add support for Python 3.3 and 3.4. Many people have contributed to this effort, including but not limited to Fantix King, hashstat, Elizabeth Myers, jander, Luke Woydziak, and others. See :issue:`38`. - Add support for PyPy. 
See :issue:`248`. Note that for best results, you'll need a very recent PyPy build including CFFI 1.2.0. - Drop support for Python 2.5. Python 2.5 users can continue to use gevent 1.0.x. - Fix :func:`gevent.joinall` to not ignore ``count`` when ``raise_error`` is False. See :pr:`512` by Ivan Diao. - Fix :class:`gevent.subprocess.Popen` to not ignore the ``bufsize`` argument. Note that this changes the (platform dependent) default, typically from buffered to unbuffered. See :pr:`542` by Romuald Brunet. - Upgraded c-ares to 1.10.0. See :pr:`579` by Omer Katz. .. caution:: The c-ares ``configure`` script is now more strict about the contents of environment variables such as ``CFLAGS`` and ``LDFLAGS`` and they may have to be modified (for example, ``CFLAGS`` is no longer allowed to include ``-I`` directives, which must instead be placed in ``CPPFLAGS``). - Add a ``count`` argument to :func:`gevent.iwait`. See :pr:`482` by wiggin15. - Add a ``timeout`` argument to :meth:`gevent.queue.JoinableQueue.join` which now returns whether all items were waited for or not. - ``gevent.queue.JoinableQueue`` treats ``items`` passed to ``__init__`` as unfinished tasks, the same as if they were ``put``. Initial :pr:`554` by DuLLSoN. - ``gevent.pywsgi`` no longer prints debugging information for the normal conditions of a premature client disconnect. See :issue:`136`, fixed in :pr:`377` by Paul Collier. - (Experimental.) Waiting on or getting results from greenlets that raised exceptions now usually raises the original traceback. This should assist things like Sentry to track the original problem. See :issue:`450` and :issue:`528` by Rodolfo and Eddi Linder and :issue:`240` by Erik Allik. - Upgrade to libev 4.20. See :pr:`590` by Peter Renström. - Fix ``gevent.baseserver.BaseServer`` to be printable when its ``handle`` function is an instancemethod of itself. See :pr:`501` by Joe Jevnik. - Make the ``acquire`` method of ``gevent.lock.DummySemaphore`` always return True, supporting its use-case as an "infinite" or unbounded semaphore providing no exclusion, and allowing the idiom ``if sem.acquire(): ...``. See :pr:`544` by Mouad Benchchaoui. - Patch ``subprocess`` by default in ``gevent.monkey.patch_all``. See :issue:`446`. - ``gevent.pool.Group.imap`` and ``imap_unordered`` now accept multiple iterables like ``itertools.imap``. :issue:`565` reported by Thomas Steinacher. - *Compatibility note*: ``gevent.baseserver.BaseServer`` and its subclass ``gevent.server.StreamServer`` now deterministically close the client socket when the request handler returns. Previously, the socket was left at the mercies of the garbage collector; under CPython 2.x this meant when the last reference went away, which was usually, but not necessarily, when the request handler returned, but under PyPy it was some arbitrary point in the future and under CPython 3.x a ResourceWarning could be generated. This was undocumented behaviour, and the client socket could be kept open after the request handler returned either accidentally or intentionally. - *Compatibility note*: ``pywsgi`` now ensures that headers can be encoded in latin-1 (ISO-8859-1). This improves adherence to the HTTP standard (and is necessary under Python 3). Under certain conditions, previous versions could have allowed non-ISO-8859-1 headers to be sent, but their interpretation by a conforming recipient is unknown; now, a UnicodeError will be raised. See :issue:`614`. 
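As a purely illustrative sketch (not part of the original release notes), the unconditional ``DummySemaphore.acquire()`` idiom, the ``JoinableQueue`` behaviour, and the more liberal ``killall`` described in the 1.1a1 entries above combine roughly as follows. The worker function, the item values, and the timeouts are arbitrary choices made for the example.

.. code-block:: python

    import gevent
    from gevent.lock import DummySemaphore
    from gevent.queue import JoinableQueue

    sem = DummySemaphore()
    if sem.acquire():                    # always True: an unbounded semaphore providing no exclusion
        sem.release()

    q = JoinableQueue(items=["a", "b"])  # initial items count as unfinished tasks

    def worker():
        while True:
            q.get()
            gevent.sleep(0.01)           # simulate work
            q.task_done()

    g = gevent.spawn(worker)
    print(q.join(timeout=1))             # True if both items were processed in time
    gevent.killall([g])                  # killall accepts any iterable of greenlets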
gevent-24.11.1/docs/changelog_1_2.rst000066400000000000000000000331321471441230600172260ustar00rootroot00000000000000================= Changes for 1.2 ================= .. currentmodule:: gevent 1.2.2 (2017-06-05) ================== - Testing on Python 3.5 now uses Python 3.5.3 due to SSL changes. See :issue:`943`. - Linux CI has been updated from Ubuntu 12.04 to Ubuntu 14.04 since the former has reached EOL. - Linux CI now tests on PyPy2 5.7.1, updated from PyPy2 5.6.0. - Linux CI now tests on PyPy3 3.5-5.7.1-beta, updated from PyPy3 3.3-5.5-alpha. - Python 2 sockets are compatible with the ``SOCK_CLOEXEC`` flag found on Linux. They no longer pass the socket type or protocol to ``getaddrinfo`` when ``connect`` is called. Reported in :issue:`944` by Bernie Hackett. - Replace ``optparse`` module with ``argparse``. See :issue:`947`. - Update to version 1.3.1 of ``tblib`` to fix :issue:`954`, reported by ml31415. - Fix the name of the ``type`` parameter to :func:`gevent.socket.getaddrinfo` to be correct on Python 3. This would cause callers using keyword arguments to raise a :exc:`TypeError`. Reported in :issue:`960` by js6626069. Likewise, correct the argument names for ``fromfd`` and ``socketpair`` on Python 2, although they cannot be called with keyword arguments under CPython. .. note:: The ``gethost*`` functions take different argument names under CPython and PyPy. gevent follows the CPython convention, although these functions cannot be called with keyword arguments on CPython. - The previously-singleton exception objects ``FileObjectClosed`` and ``cancel_wait_ex`` were converted to classes. On Python 3, an exception object is stateful, including references to its context and possibly traceback, which could lead to objects remaining alive longer than intended. - Make sure that ``python -m gevent.monkey You're also welcome to join `#gevent`_ IRC channel on freenode. Russian group ============= Русскоязычная группа находится здесь: `Google Groups (gevent-ru)`_. Чтобы подписаться, отправьте сообщение на gevent-ru+subscribe@googlegroups.com .. _Google Groups (gevent): http://groups.google.com/group/gevent .. _#gevent: http://webchat.freenode.net/?channels=gevent .. _Google Groups (gevent-ru): http://groups.google.com/group/gevent-ru gevent-24.11.1/docs/conf.py000066400000000000000000000207561471441230600154130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # gevent documentation build configuration file, created by # sphinx-quickstart on Thu Oct 1 09:30:02 2009. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. from __future__ import print_function import sys import os # Use the python versions instead of the cython compiled versions # for better documentation extraction and ease of tweaking docs. os.environ['PURE_PYTHON'] = '1' sys.path.append(os.path.dirname(__file__)) # for mysphinxext # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.append(os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # 1.8 was the last version that runs on Python 2; 2.0+ requires Python 3. 
# `autodoc_default_options` was new in 1.8 needs_sphinx = "1.8" # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.coverage', 'sphinx.ext.intersphinx', 'sphinx.ext.extlinks', 'sphinx.ext.viewcode', # Third-party 'repoze.sphinx.autointerface', 'sphinxcontrib.programoutput', # Ours ] intersphinx_mapping = { 'python': ('https://docs.python.org/', None), 'greenlet': ('https://greenlet.readthedocs.io/en/latest/', None), 'zopeevent': ('https://zopeevent.readthedocs.io/en/latest/', None), 'zopecomponent': ('https://zopecomponent.readthedocs.io/en/latest/', None), } extlinks = {'issue': ('https://github.com/gevent/gevent/issues/%s', 'issue #%s'), 'pr': ('https://github.com/gevent/gevent/pull/%s', 'pull request #%s')} # Sphinx 1.8+ prefers this to `autodoc_default_flags`. It's documented that # either True or None mean the same thing as just setting the flag, but # only None works in 1.8 (True works in 2.0) autodoc_default_options = { 'members': None, 'show-inheritance': None, } autodoc_member_order = 'groupwise' autoclass_content = 'both' # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'contents' # General information about the project. project = u'gevent' copyright = u'2009-2023, gevent contributors' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. from gevent import __version__ version = __version__ # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. default_role = 'obj' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = False # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'perldoc' # A list of ignored prefixes for module index sorting. modindex_common_prefix = ['gevent.'] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # XXX: Our custom theme stopped working with Sphinx 7. 
#html_theme = 'mytheme' html_theme = "furo" html_css_files = [ 'custom.css', ] html_theme_options = { "sidebar_hide_name": True, # Because we show a logo 'light_css_variables': { "color-brand-primary": "#7c9a5e", "color-brand-content": "#7c9a5e", "color-foreground-border": "#b7d897", 'font-stack': '"SF Pro",-apple-system,BlinkMacSystemFont,"Segoe UI",Helvetica,Arial,sans-serif,"Apple Color Emoji","Segoe UI Emoji"', 'font-stack--monospace': '"JetBrainsMono", "JetBrains Mono", "JetBrains Mono Regular", "JetBrainsMono-Regular", ui-monospace, profont, monospace', }, } # A shorter title for the navigation bar. Default is the same as html_title. html_short_title = 'Documentation' # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = '_static/5564530.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # This is true by default in sphinx 1.6 html_use_smartypants = True smartquotes = True # 1.7 # Custom sidebar templates, maps document names to template names. html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {'contentstable': 'contentstable.html'} # If false, no module index is generated. html_use_modindex = True # If false, no index is generated. html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. html_show_sourcelink = False # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'geventdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'gevent.tex', u'gevent Documentation', u'gevent contributors', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. 
#latex_use_modindex = True ############################################################################### # prevent some stuff from showing up in docs import socket import gevent.socket for item in gevent.socket.__all__[:]: if getattr(gevent.socket, item) is getattr(socket, item, None): gevent.socket.__all__.remove(item) if not hasattr(gevent.socket, '_fileobject'): # Python 3 building Python 2 docs. gevent.socket._fileobject = object() gevent-24.11.1/docs/configuration.rst000066400000000000000000000003301471441230600174770ustar00rootroot00000000000000.. _gevent-configuration: ==================== Configuring gevent ==================== .. seealso:: :func:`gevent.setswitchinterval` For additional runtime configuration. .. autoclass:: gevent._config.Config gevent-24.11.1/docs/contents.rst000066400000000000000000000014341471441230600164730ustar00rootroot00000000000000=================== Table Of Contents =================== .. tip:: To jump directly to the full API reference, see :doc:`api/index`. Introduction and Basics ======================= .. toctree:: :maxdepth: 2 install changelog intro api/gevent api/gevent.greenlet servers dns monitoring loop_impls configuration Coding With gevent ================== .. toctree:: :maxdepth: 3 api/index .. This anchor is referenced from CONTRIBUTING.rst .. _contents-developing: Developing and Packaging gevent =============================== .. toctree:: :maxdepth: 3 development/index Related Information =================== .. toctree:: :maxdepth: 1 success community older_releases * :ref:`genindex` * :ref:`modindex` * :ref:`search` gevent-24.11.1/docs/development/000077500000000000000000000000001471441230600164245ustar00rootroot00000000000000gevent-24.11.1/docs/development/ci.rst000066400000000000000000000016671471441230600175630ustar00rootroot00000000000000======================== Continuous integration ======================== A test suite is run for every push and pull request submitted. Github Actions CI is used to test on Linux and macOS, and `AppVeyor`_ runs the builds on Windows. .. image:: https://github.com/gevent/gevent/workflows/gevent%20testing/badge.svg :target: https://github.com/gevent/gevent/actions .. image:: https://ci.appveyor.com/api/projects/status/q4kl21ng2yo2ixur?svg=true :target: https://ci.appveyor.com/project/denik/gevent Builds on Github Actions CI automatically submit updates to `coveralls.io`_ to monitor test coverage. .. image:: https://coveralls.io/repos/gevent/gevent/badge.svg?branch=master&service=github :target: https://coveralls.io/github/gevent/gevent?branch=master .. _coverage.py: https://pypi.python.org/pypi/coverage/ .. _coveralls.io: https://coveralls.io/github/gevent/gevent .. _AppVeyor: https://ci.appveyor.com/project/denik/gevent gevent-24.11.1/docs/development/getting_started.rst000066400000000000000000000077551471441230600223630ustar00rootroot00000000000000================= Getting Started ================= Developing gevent requires being able to install gevent from source. See :doc:`installing_from_source` for general information about that. Use A Virtual Environment ========================= It is recommended to install the development copy of gevent in a `virtual environment `_; you can use the :mod:`venv` module distributed with Python 3, or `virtualenv `_, possibly with `virtualenvwrapper `_. You may want a different virtual environment for each Python implementation and version that you'll be working with. 
gevent includes a `tox `_ configuration for automating the process of testing across multiple Python versions, but that can be slow. The rest of this document will assume working in an isolated virtual environment, but usually won't show that in the prompt. An example of creating a virtual environment is shown here:: $ python3 -m venv gevent-env $ cd gevent-env $ . bin/activate (gevent-env) $ Installing Dependencies ======================= To work on gevent, we'll need to get the source, install gevent's dependencies, including test dependencies, and install gevent as an `editable install `_ using pip's ``-e`` option (also known as `development mode `_, this is mostly the same as running ``python setup.py develop``). Getting the source means cloning the git repository:: (gevent-env) $ git clone https://github.com/gevent/gevent.git (gevent-env) $ cd gevent Installing gevent's dependencies, test dependencies, and gevent itself can be done in one line by installing the ``dev-requirements.txt`` file:: (gevent-env) $ pip install -r dev-requirements.txt .. warning:: This pip command does not work with pip 19.1. Either use pip 19.0 or below, or use pip 19.1.1 with ``--no-use-pep517``. See `issue 1412 `_. Making Changes ============== When adding new features (functions, methods, modules), be sure to provide docstrings. The docstring should end with Sphinx's `versionadded directive `_, using a version string of "NEXT". This string will automatically be replaced with the correct version during the release process. For example: .. code-block:: python def make_plumbus(schleem, rub_fleeb=True): """ Produces a plumbus. :param int scheem: The number of schleem to use. Possibly repurposed. :keyword bool rub_fleeb: Whether to rub the fleeb. Rubbing the fleeb is important, so only disable if you know what you're doing. .. versionadded:: NEXT """ When making a change to an existing feature that has already been released, apply the appropriate `versionchanged `_ or `deprecated `_ directive, also using "NEXT". .. code-block:: python def make_plumbus(schleem, rub_fleeb=True): """ Produces a plumbus. :param int schleem: The schleem to use. Possibly repurposed. :keyword bool rub_fleeb: Whether to rub the fleeb. Rubbing the fleeb is important, so only disable if you know what you're doing. :return: A :class:`Plumbus`. .. versionadded:: 20.04.0 .. versionchanged:: NEXT The *rub_fleeb* parameter is ignored; the fleeb must always be rubbed. """ def extract_fleeb_juice(): """ Get the fleeb juice. .. deprecated:: NEXT Extracting fleeb juice now happens automatically. """ gevent-24.11.1/docs/development/index.rst000066400000000000000000000007701471441230600202710ustar00rootroot00000000000000============= Development ============= This document provides information about developing gevent itself, including information about running tests. More information is in the ``CONTRIBUTING.rst`` document in the root of the gevent repository. .. toctree:: :maxdepth: 2 getting_started installing_from_source running_tests ci release_process .. The contributor guide (CONTRIBUTING.rst) references this document. 
Things to include: - Custom commands in ``setup.py`` gevent-24.11.1/docs/development/installing_from_source.rst000066400000000000000000000176671471441230600237460ustar00rootroot00000000000000======================== Installing From Source ======================== If you are unable to use the binary wheels (for platforms where no pre-built wheels are available or if wheel installation is disabled), you can build gevent from source. A normal ``pip install`` will fallback to doing this if no binary wheel is available. (If you'll be :ref:`developing ` gevent, you'll need to install from source also; follow that link for more details.) General Notes ============= - You can force installation of gevent from source with ``pip install --no-binary gevent gevent``. This is useful if there is a binary wheel available, but you want to change the compile-time options, such as to use a system version of libuv instead of the embedded version. See :ref:`build-config`. - You'll need `pip 19 and setuptools 40.8 `_ with fully functional :pep:`518` and :pep:`517` support to install gevent from source. - You'll need a working C compiler toolchain that can build Python extensions. On some platforms, you may need to install Python development packages. You'll also need the ability to compile `cffi `_ modules, which may require installing FFI development packages. Installing `make `_ and other common utilities such as `file `_ may also be required. For example, on Alpine Linux, one might need to do this:: apk add --virtual build-deps file make gcc musl-dev libffi-dev On Fedora Rawhide for Fedora 33, one might need to do this:: yum install python3-devel gcc kernel-devel kernel-headers make diffutils file (Newer versions of Fedora use ``dnf`` for package management but as of this writing ``yum`` is still available and widely used, especially for backwards compatibility.) On the Docker image for Ubuntu 20.04, one might need to do this:: apt-get install python3.9-full python3.9-dev linux-headers-virtual make gcc libtool See :issue:`1567`, :issue:`1559`, and :issue:`1566`. .. note:: The exact set of external dependencies isn't necessarily fixed and depends on the configure scripts of the bundled C libraries such as libev, libuv and c-ares. Disabling :ref:`embed-lib` and using system libraries can reduce these dependencies, although this isn't encouraged. - On older versions of Linux, linking to ``librt`` might be useful to avoid system calls from libev accessing the current time. One way to do this is to set the ``LDFLAGS`` environment variable to include ``-lrt`` when installing from source. (This is done automatically when gevent builds manylinux wheels.) See :pr:`1650`. - Installing from source requires ``setuptools``. This is installed automatically in virtual environments and by buildout. However, gevent uses :pep:`496` environment markers in ``setup.py``. Consequently, you'll need a version of setuptools newer than 25 (mid 2016) to install gevent from source; a version that's too old will produce a ``ValueError``. Older versions of pipenv may also `have issues installing gevent for this reason `_. - gevent 1.5 and newer come with a ``pyproject.toml`` file that installs the build dependencies, including CFFI (needed for libuv support). pip 18 or above is required for this support. - You can use pip's `VCS support `_ to install gevent directly from its code repository. This can be useful to check if a bug you're experiencing has been fixed. 
For example, to install the current development version:: pip install git+git://github.com/gevent/gevent.git#egg=gevent Often one would install this way into a virtual environment. If you're using pip 18 or above, that should be all you need. If you have difficulties, see the development instructions for more information. Common Installation Issues ========================== The following are some common installation problems and solutions for those compiling gevent from source. - Some Linux distributions are now mounting their temporary directories with the ``noexec`` option. This can cause a standard ``pip install gevent`` to fail with an error like ``cannot run C compiled programs``. One fix is to mount the temporary directory without that option. Another may be to use the ``--build`` option to ``pip install`` to specify another directory. See `issue #570 `_ and `issue #612 `_ for examples. - Also check for conflicts with environment variables like ``CFLAGS``. For example, see `Library Updates `_. - Users of a recent SmartOS release may need to customize the ``CPPFLAGS`` (the environment variable containing the default options for the C preprocessor) if they are using the libev shipped with gevent. See `Operating Systems `_ for more information. - If you see ``ValueError: ("Expected ',' or end-of-list in", "cffi >= 1.11.5 ; sys_platform == 'win32' and platform_python_implementation == 'CPython'", 'at', " ; sys_platform == 'win32' and platform_python_implementation == 'CPython'")``, the version of setuptools is too old. Install a more recent version of setuptools. .. _build-config: Build-Time Configuration ======================== There are a few knobs that can be tweaked at gevent build time. These are mostly useful for downstream packagers. They all take the form of environment variables that must be set when ``setup.py`` is called (note that ``pip install`` will invoke ``setup.py``). Toggle flags that have boolean values may take the form of 0/1, true/false, off/on, yes/no. ``CPPFLAGS`` A standard variable used when building the C extensions. gevent may make slight modifications to this variable. ``CFLAGS`` A standard variable used when building the C extensions. gevent may make slight modifications to this variable. ``LDFLAGS`` A standard variable used when building the C extensions. gevent may make slight modifications to this variable. ``GEVENTSETUP_EV_VERIFY`` If set, the value is passed through as the value of the ``EV_VERIFY`` C compiler macro when libev is embedded. In general, setting ``CPPFLAGS`` is more general and can contain other macros recognized by libev. .. _embed-lib: Embedding Libraries ------------------- By default, gevent builds and embeds tested versions of its C dependencies libev, libuv, and c-ares. This is the recommended configuration as the specific versions used are tested by gevent, and sometimes require patches to be applied. Moreover, embedding, especially in the case of libev, can be more efficient as features not needed by gevent can be disabled, resulting in smaller or faster libraries or runtimes. However, this can be disabled, either for all libraries at once or for individual libraries. When embedding a library is disabled, the library must already be installed on the system in a way the compiler can access and link (i.e., correct ``CPPFLAGS``, etc) in order to use the corresponding C extension. ``GEVENTSETUP_EMBED`` A boolean defaulting to true. When turned off (e.g., ``GEVENTSETUP_EMBED=0``), libraries are not embedded in the gevent C extensions. 
The value of this is used as the default for all the libraries if no more specific version is defined. ``GEVENTSETUP_EMBED_LIBEV`` Controls embedding libev. ``GEVENTSETUP_EMBED_CARES`` Controls embedding c-ares. ``GEVENTSETUP_EMBED_LIBUV`` This is not defined or used, only a CFFI extension is available and those are always embedded. Older versions of gevent supported ``EMBED`` and ``LIBEV_EMBED``, etc, to mean the same thing. Those aliases still work but are deprecated and print a warning. gevent-24.11.1/docs/development/release_process.rst000066400000000000000000000050131471441230600223330ustar00rootroot00000000000000================= Release Process ================= Release Cadence and Versions ============================ After :doc:`gevent 1.5 <../whatsnew_1_5>`, gevent releases switched to `CalVer `_, using the scheme ``YY.0M.Micro`` (two-digit year, zero-padded month, micro/patch number). Thus the first release in April of 2020 would be version ``20.04.0``. A second release would be ``20.04.1``, etc. The first release in May would be ``20.05.0``, and so on. If there have been changes to master, gevent should produce a release at least once a month. Deprecation Policy ================== .. This is largely based on what pip says. Any change to gevent that removes or significantly alters user-visible behavior that is described in the gevent documentation will be deprecated for a minimum of 6 months before the change occurs. Deprecation will be called out in the documentation and in some cases with a runtime warning when the feature is used (because of the performance sensitive nature of gevent, not all deprecations will have a runtime warning). Longer deprecation periods, or deprecation warnings for behavior changes that would not normally be covered by this policy, are also possible depending on circumstances, but this is at the discretion of the gevent developers. Note that the documentation is the sole reference for what counts as agreed behavior. If something isn’t explicitly mentioned in the documentation, it can be changed without warning, or any deprecation period, in a gevent release. However, we are aware that the documentation isn’t always complete - PRs that document existing behavior with the intention of covering that behavior with the above deprecation process are always acceptable, and will be considered on their merits. Releasing gevent ================ .. note:: This is a semi-organized collection of notes for gevent maintainers. gevent is released using `zest.releaser `_. Binary wheels are published automatically by Github Actions CI (macOS and manylinux) and Appveyor (Windows) when a tag is uploaded. 1. Push all relevant changes to master. 2. From the gevent working copy, run ``fullrelease``. Fix any issues it brings up. Let it bump the version number (or enter the correct one), commit, create the tag, create the sdist, upload the sdist and push the tag to GitHub. 3. Monitor the build process on the CI systems. If particular builds fail due to test instability, re-run them to allow the binary wheel to be uploaded. gevent-24.11.1/docs/development/running_tests.rst000066400000000000000000000076011471441230600220640ustar00rootroot00000000000000=============== Running Tests =============== .. Things to include: - Writing tests and the gevent test framework: - Avoiding hard test dependencies. - Resource usage. - test files must be executable - Maybe these things belong in a README in the gevent.tests directory? 
gevent has an extensive regression test suite, implemented using the standard :mod:`unittest` module. It uses a :mod:`custom testrunner ` that provides enhanced test isolation (important for monkey-patching), runs tests in parallel, and takes care of other gevent-specific quirks. .. note:: The gevent test process runs Python standard library tests with gevent's monkey-patches applied to ensure that gevent behaves correctly (matches the standard library). The standard library :mod:`test` must be available in order to do this. This is usually the case automatically, but some distributions remove this module. Notably, on Debian, you will probably need ``libpythonX.Y-testsuite`` installed to run all the tests. The test runner has a number of options: .. command-output:: python -mgevent.tests --help The simplest way to run all the tests is just to invoke the test runner, typically from the root of the source checkout:: (gevent-env) $ python -mgevent.tests Running tests in parallel with concurrency 7 ... Ran 3107 tests (skipped=333) in 132 files in 01:52 You can also run an individual gevent test file using the test runner:: (gevent-env) $ python -m gevent.tests test__util.py Running tests in parallel with concurrency 1 + /.../python -u -mgevent.tests.test__util - /.../python -u -mgevent.tests.test__util [Ran 9 tests in 1.1s] Longest-running tests: 1.1 seconds: /.../python -u -mgevent.tests.test__util Ran 9 tests in 1 files in 1.1s Or you can run a monkey-patched standard library test:: (gevent-env) $ python -m gevent.tests.test___monkey_patching test_socket.py Running tests in parallel with concurrency 1 + /.../python -u -W ignore -m gevent.testing.monkey_test test_socket.py Running with patch_all(Event=False): test_socket.py Added imports 1 Skipped testEmptyFileSend (1) ... Ran 555 tests in 23.042s OK (skipped=172) - /.../python -u -W ignore -m gevent.testing.monkey_test test_socket.py [took 26.7s] Longest-running tests: 26.7 seconds: /.../python -u -W ignore -m gevent.testing.monkey_test test_socket.py Ran 0 tests in 1 files in 00:27 Environment Variables ===================== Some testrunner options have equivalent environment variables. Notably, ``--quiet`` is ``GEVENTTEST_QUIET`` and ``-u`` is ``GEVENTTEST_USE_RESOURCES``. Using tox ========= Before submitting a pull request, it's a good idea to run the tests across all supported versions of Python, and to check the code quality using prospector/pylint. This is what is done on CI. Locally it can be done using tox:: pip install tox tox Measuring Code Coverage ======================= This is done on CI so it's not often necessary to do locally. The testrunner accepts a ``--coverage`` argument to enable code coverage metrics through the `coverage.py`_ package. That would go something like this:: python -m gevent.tests --coverage coverage combine coverage html -i .. _limiting-test-resource-usage: Limiting Resource Usage ======================= gevent supports the standard library test suite's resources. All resources are enabled by default. Disabling resources disables the tests that use those resources. For example, to disable tests that access the external network (the Internet), disable the ``network`` resource. There's an option for this:: $ python -m gevent.tests -u-network And an environment variable:: $ GEVENTTEST_USE_RESOURCES=-network python -m gevent.tests .. _coverage.py: https://pypi.python.org/pypi/coverage/ .. 
_coveralls.io: https://coveralls.io/github/gevent/gevent gevent-24.11.1/docs/dns.rst000066400000000000000000000024461471441230600154260ustar00rootroot00000000000000======================= Name Resolution (DNS) ======================= gevent includes support for a pluggable hostname resolution system. Pluggable resolvers are (generally) intended to be cooperative. This pluggable resolution system is used automatically when the system is :mod:`monkey patched `, and may be used manually through the :attr:`resolver attribute ` of the :class:`gevent.hub.Hub` or the corresponding methods in the :mod:`gevent.socket` module. A resolver implements the 5 standard functions from the :mod:`socket` module for resolving hostnames and addresses: * :func:`socket.gethostbyname` * :func:`socket.gethostbyname_ex` * :func:`socket.getaddrinfo` * :func:`socket.gethostbyaddr` * :func:`socket.getnameinfo` Configuration ============= gevent includes four implementations of resolvers, and applications can provide their own implementation. By default, gevent uses :class:`a threadpool `. This can :attr:`be customized `. Please see the documentation for each resolver class to understand the relative performance and correctness tradeoffs. .. toctree:: api/gevent.resolver.thread api/gevent.resolver.ares api/gevent.resolver.dnspython api/gevent.resolver.blocking gevent-24.11.1/docs/examples/000077500000000000000000000000001471441230600157205ustar00rootroot00000000000000gevent-24.11.1/docs/examples/concurrent_download.rst000066400000000000000000000004361471441230600225260ustar00rootroot00000000000000================================ Example concurrent_download.py ================================ .. literalinclude:: ../../examples/concurrent_download.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/dns_mass_resolve.rst000066400000000000000000000004171471441230600220220ustar00rootroot00000000000000============================= Example dns_mass_resolve.py ============================= .. literalinclude:: ../../examples/dns_mass_resolve.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/echoserver.rst000066400000000000000000000003751471441230600206240ustar00rootroot00000000000000============================= Example echoserver.py ============================= .. literalinclude:: ../../examples/echoserver.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/geventsendfile.rst000066400000000000000000000004111471441230600214500ustar00rootroot00000000000000============================= Example geventsendfile.py ============================= .. literalinclude:: ../../examples/geventsendfile.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/index.rst000066400000000000000000000014651471441230600175670ustar00rootroot00000000000000========== Examples ========== .. All files generated with shell oneliner: for i in examples/*py; do bn=`basename $i`; bnp=`basename $i .py`; echo -e "=============================\nExample $bn\n=============================\n.. literalinclude:: ../../examples/$bn\n :language: python\n :linenos:\n\n\`Current source \`_\n" > doc/examples/$bnp.rst; done This is a snapshot of the examples contained in `the gevent source `_. ..
toctree:: concurrent_download dns_mass_resolve echoserver geventsendfile portforwarder processes psycopg2_pool threadpool udp_client udp_server unixsocket_client unixsocket_server webproxy webpy wsgiserver wsgiserver_ssl gevent-24.11.1/docs/examples/portforwarder.rst000066400000000000000000000004061471441230600213520ustar00rootroot00000000000000============================= Example portforwarder.py ============================= .. literalinclude:: ../../examples/portforwarder.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/processes.rst000066400000000000000000000003721471441230600204620ustar00rootroot00000000000000============================= Example processes.py ============================= .. literalinclude:: ../../examples/processes.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/psycopg2_pool.rst000066400000000000000000000004061471441230600212510ustar00rootroot00000000000000============================= Example psycopg2_pool.py ============================= .. literalinclude:: ../../examples/psycopg2_pool.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/threadpool.rst000066400000000000000000000003751471441230600206200ustar00rootroot00000000000000============================= Example threadpool.py ============================= .. literalinclude:: ../../examples/threadpool.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/udp_client.rst000066400000000000000000000003751471441230600206050ustar00rootroot00000000000000============================= Example udp_client.py ============================= .. literalinclude:: ../../examples/udp_client.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/udp_server.rst000066400000000000000000000003751471441230600206350ustar00rootroot00000000000000============================= Example udp_server.py ============================= .. literalinclude:: ../../examples/udp_server.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/unixsocket_client.rst000066400000000000000000000004221471441230600222020ustar00rootroot00000000000000============================= Example unixsocket_client.py ============================= .. literalinclude:: ../../examples/unixsocket_client.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/unixsocket_server.rst000066400000000000000000000004221471441230600222320ustar00rootroot00000000000000============================= Example unixsocket_server.py ============================= .. literalinclude:: ../../examples/unixsocket_server.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/webproxy.rst000066400000000000000000000003671471441230600203370ustar00rootroot00000000000000============================= Example webproxy.py ============================= .. literalinclude:: ../../examples/webproxy.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/webpy.rst000066400000000000000000000003561471441230600176040ustar00rootroot00000000000000============================= Example webpy.py ============================= .. literalinclude:: ../../examples/webpy.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/wsgiserver.rst000066400000000000000000000003751471441230600206570ustar00rootroot00000000000000============================= Example wsgiserver.py ============================= .. 
literalinclude:: ../../examples/wsgiserver.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/examples/wsgiserver_ssl.rst000066400000000000000000000004111471441230600215310ustar00rootroot00000000000000============================= Example wsgiserver_ssl.py ============================= .. literalinclude:: ../../examples/wsgiserver_ssl.py :language: python :linenos: `Current source `_ gevent-24.11.1/docs/index.rst000066400000000000000000000005171471441230600157460ustar00rootroot00000000000000================= What is gevent? ================= .. include:: _about.rst :ref:`Continue reading ` » .. _coroutine: https://en.wikipedia.org/wiki/Coroutine .. _Python: http://python.org .. _greenlet: https://greenlet.readthedocs.io .. _libev: http://software.schmorp.de/pkg/libev.html .. _libuv: http://libuv.org gevent-24.11.1/docs/install.rst000066400000000000000000000151241471441230600163050ustar00rootroot00000000000000=============================== Installation and Requirements =============================== .. _installation: .. This file is included in README.rst so it is limited to plain ReST markup, not Sphinx. .. note:: If you are reading this document on the `Python Package Index`_ (PyPI, https://pypi.org/), it is specific to the version of gevent that you are viewing. If you are viewing this document on gevent.org, it refers to the current state of gevent in source control (git master). Supported Platforms =================== This version of gevent runs on Python 3.8 and up (for exact details of tested versions, see the classifiers on the PyPI page or in ``setup.py``). gevent requires the `greenlet `_ library and will install the `cffi`_ library by default on Windows. The cffi library will become the default on all platforms in a future release of gevent. This version of gevent is also tested on PyPy 3.10 (7.3.12); it should run on PyPy 3.9 and above. On PyPy, there are no external dependencies. gevent is tested on Windows, macOS, and Linux, and should run on most other Unix-like operating systems (e.g., FreeBSD, Solaris, etc.) .. note:: Windows is supported as a tier 2, "best effort," platform. It is suitable for development, but not recommended for production. In particular, PyPy3 on Windows may have issues, especially with subprocesses. On Windows using the deprecated libev backend, gevent is limited to a maximum of 1024 open sockets due to `limitations in libev`_. This limitation should not exist with the default libuv backend. Older Versions of Python ------------------------ Users of older versions of Python 2 or Python 3 may install an older version of gevent. Note that these versions are generally not supported. +-------+-------+ |Python |Gevent | |Version|Version| +=======+=======+ |2.5 |1.0.x | | | | +-------+-------+ |2.6 |1.1.x | +-------+-------+ |<= |1.2.x | |2.7.8 | | +-------+-------+ |3.3 |1.2.x | +-------+-------+ |3.4.0 -| 1.3.x | |3.4.2 | | | | | +-------+-------+ |3.4.3 | 1.4.x | | | | | | | +-------+-------+ |3.5.x | 20.9.0| | | | | | | +-------+-------+ |2.7.9 -| | |2.7.18,| 22.10 | |3.6, | | |3.7 | | | | | +-------+-------+ Installation ============ .. note:: This section is about installing released versions of gevent as distributed on the `Python Package Index`_. For building gevent from source, including customizing the build and embedded libraries, see `Installing From Source`_. .. _Python Package Index: http://pypi.org/project/gevent gevent and greenlet can both be installed with `pip`_, e.g., ``pip install gevent``.
Installation using `buildout `_ is also supported. On Windows, macOS, and Linux, both gevent and greenlet are distributed as binary `wheels`_. .. tip:: You need Pip 8.0 or later, or buildout 2.10.0 to install the binary wheels on Windows or macOS. On Linux, you'll need `pip 19 `_ to install the manylinux2010 wheels. .. tip:: While the x86-64 binaries are considered production quality, they are built with relatively low optimization levels and no hardware specific optimizations. Serious production users are encouraged to install from source with appropriate compiler flags. .. tip:: Beginning with gevent 20.12.0, 64-bit ARM binaries are distributed on PyPI for aarch64 manylinux2014 compatible systems. Installing these needs a very recent version of ``pip``. These wheels *do not* contain the c-ares resolver, are not tested, and are built with very low levels of optimizations. Serious production users of gevent on 64-bit ARM systems are encouraged to build their own binary wheels. Beginning with gevent 22.10.0, ppc64le binaries are distributed on PyPI. The same caveats apply as for 64-bit ARM binaries. Using them for anything other than local development is discouraged. Beginning with gevent 23, muslinux aarch64 and S390X binaries are distributed on PyPI. The same caveats apply as for 64-bit ARM binaries. Using them for anything other than local development is discouraged. Installing From Source ---------------------- If you are unable to use the binary wheels (for platforms where no pre-built wheels are available or if wheel installation is disabled), you can build gevent from source. A normal ``pip install`` will fall back to doing this if no binary wheel is available. See `Installing From Source`_ for more, including common installation issues. Extra Dependencies ================== There are a number of additional libraries that extend gevent's functionality and will be used if they are available. All of these may be installed using `setuptools extras `_, as named below, e.g., ``pip install gevent[events]``. events In versions of gevent up to and including 20.5.0, this provided configurable event support using `zope.event `_ and was highly recommended. In versions after that, this extra is empty and does nothing. It will be removed in gevent 21.0. dnspython Enables a pure-Python resolver, backed by `dnspython `_. On Python 2, this also includes `idna `_. They can be installed with the ``dnspython`` extra. .. note:: This is not compatible with Python 3.10+ or dnspython 2. monitor Enhancements to gevent's self-monitoring capabilities. This includes the `psutil `_ library which is needed to monitor memory usage. (Note that this may not build on all platforms.) recommended A shortcut for installing suggested extras together. This includes the non-test extras defined here, plus additions that improve gevent's operation on certain platforms (for example, in the past, it has included backports of newer APIs). test Everything needed to run the complete gevent test suite. .. _`pip`: https://pip.pypa.io/en/stable/installing/ .. _`wheels`: http://pythonwheels.com .. _`gevent 1.5`: whatsnew_1_5.html .. _`Installing From Source`: https://www.gevent.org/development/installing_from_source.html .. _`cffi`: https://cffi.readthedocs.io .. _`limitations in libev`: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#WIN32_PLATFORM_LIMITATIONS_AND_WORKA gevent-24.11.1/docs/intro.rst000066400000000000000000000311101471441230600157630ustar00rootroot00000000000000============== Introduction ============== .. 
include:: _about.rst Example ======= The following example shows how to run tasks concurrently. >>> import gevent >>> from gevent import socket >>> urls = ['www.google.com', 'www.example.com', 'www.python.org'] >>> jobs = [gevent.spawn(socket.gethostbyname, url) for url in urls] >>> _ = gevent.joinall(jobs, timeout=2) >>> [job.value for job in jobs] ['74.125.79.106', '208.77.188.166', '82.94.164.162'] After the jobs have been spawned, :func:`gevent.joinall` waits for them to complete, allowing up to 2 seconds. The results are then collected by checking the :attr:`~gevent.Greenlet.value` property. The :func:`gevent.socket.gethostbyname` function has the same interface as the standard :func:`socket.gethostbyname` but it does not block the whole interpreter and thus lets the other greenlets proceed with their requests unhindered. .. _monkey-patching: Monkey patching =============== The example above used :mod:`gevent.socket` for socket operations. If the standard :mod:`socket` module was used the example would have taken 3 times longer to complete because the DNS requests would be sequential (serialized). Using the standard socket module inside greenlets makes gevent rather pointless, so what about existing modules and packages that are built on top of :mod:`socket` (including the standard library modules like :mod:`urllib`)? That's where monkey patching comes in. The functions in :mod:`gevent.monkey` carefully replace functions and classes in the standard :mod:`socket` module with their cooperative counterparts. That way even the modules that are unaware of gevent can benefit from running in a multi-greenlet environment. >>> from gevent import monkey; monkey.patch_socket() >>> import requests # it's usable from multiple greenlets now See :doc:`examples/concurrent_download`. .. tip:: Insight into the monkey-patching process can be obtained by observing the events :mod:`gevent.monkey` emits. Beyond sockets -------------- Of course, there are several other parts of the standard library that can block the whole interpreter and result in serialized behavior. gevent provides cooperative versions of many of those as well. They can be patched independently through individual functions, but most programs using monkey patching will want to patch the entire recommended set of modules using the :func:`gevent.monkey.patch_all` function:: >>> from gevent import monkey; monkey.patch_all() >>> import subprocess # it's usable from multiple greenlets now .. tip:: When monkey patching, it is recommended to do so as early as possible in the lifetime of the process. If possible, monkey patching should be the first lines executed. Monkey patching later, especially if native threads have been created, :mod:`atexit` or signal handlers have been installed, or sockets have been created, may lead to unpredictable results including unexpected :exc:`~gevent.hub.LoopExit` errors. Event loop ========== Instead of blocking and waiting for socket operations to complete (a technique known as polling), gevent arranges for the operating system to deliver an event letting it know when, for example, data has arrived to be read from the socket. Having done that, gevent can move on to running another greenlet, perhaps one that itself now has an event ready for it. This repeated process of registering for events and reacting to them as they arrive is the event loop. Unlike other network libraries, though in a similar fashion as eventlet, gevent starts the event loop implicitly in a dedicated greenlet. 
There's no ``reactor`` that you must call a ``run()`` or ``dispatch()`` function on. When a function from gevent's API wants to block, it obtains the :class:`gevent.hub.Hub` instance --- a special greenlet that runs the event loop --- and switches to it (it is said that the greenlet *yielded* control to the Hub). If there's no :class:`~gevent.hub.Hub` instance yet, one is automatically created. .. tip:: Each operating system thread has its own :class:`~gevent.hub.Hub`. This makes it possible to use the gevent blocking API from multiple threads (with care). The event loop uses the best polling mechanism available on the system by default. .. note:: A low-level event loop API is available under the :mod:`gevent.core` module. This module is not documented, not meant for general purpose usage, and its exact contents and semantics change slightly depending on whether the libev or libuv event loop is being used. The callbacks supplied to the event loop API are run in the :class:`~gevent.hub.Hub` greenlet and thus cannot use the synchronous gevent API. It is possible to use the asynchronous API there, like :func:`gevent.spawn` and :meth:`gevent.event.Event.set`. Cooperative multitasking ======================== .. currentmodule:: gevent The greenlets all run in the same OS thread and are scheduled cooperatively. This means that until a particular greenlet gives up control, (by calling a blocking function that will switch to the :class:`~gevent.hub.Hub`), other greenlets won't get a chance to run. This is typically not an issue for an I/O bound app, but one should be aware of this when doing something CPU intensive, or when calling blocking I/O functions that bypass the event loop. .. tip:: Even some apparently cooperative functions, like :func:`gevent.sleep`, can temporarily take priority over waiting I/O operations in some circumstances. Synchronizing access to objects shared across the greenlets is unnecessary in most cases (because yielding control is usually explicit), thus traditional synchronization devices like the :class:`gevent.lock.BoundedSemaphore`, :class:`gevent.lock.RLock` and :class:`gevent.lock.Semaphore` classes, although present, aren't used very often. Other abstractions from threading and multiprocessing remain useful in the cooperative world: - :class:`~event.Event` allows one to wake up a number of greenlets that are calling :meth:`~event.Event.wait` method. - :class:`~event.AsyncResult` is similar to :class:`~event.Event` but allows passing a value or an exception to the waiters. - :class:`~queue.Queue` and :class:`~queue.JoinableQueue`. .. _greenlet-basics: Lightweight pseudothreads ========================= .. currentmodule:: gevent New greenlets are spawned by creating a :class:`~Greenlet` instance and calling its :meth:`start ` method. (The :func:`gevent.spawn` function is a shortcut that does exactly that). The :meth:`start ` method schedules a switch to the greenlet that will happen as soon as the current greenlet gives up control. If there is more than one active greenlet, they will be executed one by one, in an undefined order as they each give up control to the :class:`~gevent.hub.Hub`. If there is an error during execution it won't escape the greenlet's boundaries. An unhandled error results in a stacktrace being printed, annotated by the failed function's signature and arguments: >>> glet = gevent.spawn(lambda : 1/0); glet.join() >>> gevent.sleep(1) Traceback (most recent call last): ... 
ZeroDivisionError: integer division or modulo by zero > failed with ZeroDivisionError The traceback is asynchronously printed to ``sys.stderr`` when the greenlet dies. :class:`Greenlet` instances have a number of useful methods: - :meth:`join ` -- waits until the greenlet exits; - :meth:`kill ` -- interrupts greenlet's execution; - :meth:`get ` -- returns the value returned by greenlet or re-raises the exception that killed it. Greenlets can be subclassed with care. One use for this is to customize the string printed after the traceback by subclassing the :class:`~gevent.Greenlet` class and redefining its ``__str__`` method. For more information, see :ref:`subclassing-greenlet`. Greenlets can be killed synchronously from another greenlet. Killing will resume the sleeping greenlet, but instead of continuing execution, a :exc:`GreenletExit` will be raised. >>> from gevent import Greenlet >>> g = Greenlet(gevent.sleep, 4) >>> g.start() >>> g.kill() >>> g.dead True The :exc:`GreenletExit` exception and its subclasses are handled differently than other exceptions. Raising :exc:`~GreenletExit` is not considered an exceptional situation, so the traceback is not printed. The :exc:`~GreenletExit` is returned by :meth:`get ` as if it were returned by the greenlet, not raised. The :meth:`kill ` method can accept a custom exception to be raised: >>> g = Greenlet.spawn(gevent.sleep, 5) # spawn() creates a Greenlet and starts it >>> g.kill(Exception("A time to kill")) Traceback (most recent call last): ... Exception: A time to kill Greenlet(5) failed with Exception The :meth:`kill ` can also accept a *timeout* argument specifying the number of seconds to wait for the greenlet to exit. Note that :meth:`kill ` cannot guarantee that the target greenlet will not ignore the exception (i.e., it might catch it), thus it's a good idea always to pass a timeout to :meth:`kill ` (otherwise, the greenlet doing the killing will remain blocked forever). .. tip:: The exact timing at which an exception is raised within a target greenlet as the result of :meth:`kill ` is not defined. See that function's documentation for more details. .. caution:: Use care when killing greenlets, especially arbitrary greenlets spawned by a library or otherwise executing code you are not familiar with. If the code being executed is not prepared to deal with exceptions, object state may be corrupted. For example, if it has acquired a ``Lock`` but *does not* use a ``finally`` block to release it, killing the greenlet at the wrong time could result in the lock being permanently locked:: def func(): # DON'T DO THIS lock.acquire() socket.sendall(data) # This could raise many exceptions, including GreenletExit lock.release() `This document `_ describes a similar situation for threads. Greenlets also function as context managers, so you can combine spawning and waiting for a greenlet to finish in a single line: .. doctest:: >>> def in_greenlet(): ... print("In the greenlet") ... return 42 >>> with Greenlet.spawn(in_greenlet) as g: ... print("In the with suite") In the with suite In the greenlet >>> g.get(block=False) 42 Timeouts ======== Many functions in the gevent API are synchronous, blocking the current greenlet until the operation is done. For example, :meth:`kill ` waits until the target greenlet is :attr:`~gevent.Greenlet.dead` before returning [#f1]_. Many of those functions can be made asynchronous by passing the keyword argument ``block=False``. 
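For instance, here is a minimal sketch of the difference (the greenlets and sleep durations are purely illustrative)::

    import gevent

    g = gevent.spawn(gevent.sleep, 2)
    g.kill()              # synchronous: blocks until g is dead

    h = gevent.spawn(gevent.sleep, 2)
    h.kill(block=False)   # asynchronous: the kill is only scheduled; this call returns immediately
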
Furthermore, many of the synchronous functions accept a *timeout* argument, which specifies a limit on how long the function can block (examples include :meth:`gevent.event.Event.wait`, :meth:`gevent.Greenlet.join`, :meth:`gevent.Greenlet.kill`, :meth:`gevent.event.AsyncResult.get`, and many more). The :class:`socket ` and :class:`SSLObject ` instances can also have a timeout, set by the :meth:`settimeout ` method. When these are not enough, the :class:`gevent.Timeout` class and :func:`gevent.with_timeout` can be used to add timeouts to arbitrary sections of (cooperative, yielding) code. Further Reading =============== To limit concurrency, use the :class:`gevent.pool.Pool` class (see :doc:`examples/dns_mass_resolve`). Gevent comes with TCP/SSL/HTTP/WSGI servers. See :doc:`servers`. There are a number of configuration options for gevent. See :ref:`gevent-configuration` for details. This document also explains how to enable gevent's builtin monitoring and debugging features. The objects in :mod:`gevent.util` may be helpful for monitoring and debugging purposes. See :doc:`api/index` for a complete API reference. External resources ================== `Gevent for working Python developer`__ is a comprehensive tutorial. __ http://sdiehl.github.io/gevent-tutorial/ .. rubric:: Footnotes .. [#f1] This was not the case before 0.13.0, :meth:`kill ` method in 0.12.2 and older was asynchronous by default. .. LocalWords: Greenlets gevent-24.11.1/docs/loop_impls.rst000066400000000000000000000223371471441230600170200ustar00rootroot00000000000000============================================= Event Loop Implementations: libuv and libev ============================================= .. versionadded:: 1.3 gevent offers a choice of two event loop libraries (`libev`_ and `libuv`_) and three event loop implementations. This document will explore those implementations and compare them to each other. Using A Non-Default Loop ======================== First, we will describe how to choose an event loop other than the default loop for a given platform. This is done by setting the ``GEVENT_LOOP`` environment variable before starting the program, or by setting :attr:`gevent.config.loop ` in code. .. important:: If you choose to configure the loop in Python code, it must be done *immediately* after importing gevent and before any other gevent imports or asynchronous operations are done, preferably at the top of your program, right above monkey-patching (if done):: import gevent gevent.config.loop = "libuv" .. important:: In gevent 1.4 and 1.3, if you install gevent from a manylinux1 binary wheel as distributed on PyPI, you will not be able to use the libuv loop. You'll need to compile from source to gain access to libuv. gevent 1.5 distributes manylinux2010 wheels which have libuv support. If you use a Linux distribution's package of gevent, you may or may not have any other loops besides the default. Loop Implementations ==================== Here we will describe the available loop implementations. 
+----------+-------+------------+------------+-----+--------------+---------+--------+ |Name |Library|Default |Interpreters|Age |Implementation|Build |Embedded| | | | | | | |Status | | +==========+=======+============+============+=====+==============+=========+========+ |libev |libev |CPython on |CPython only|8 |Cython |Default |Default;| | | |non-Windows | |years| | |optional| | | |platforms | | | | | | +----------+-------+------------+------------+-----+--------------+---------+--------+ |libev-cffi|libev |PyPy on |CPython and |4 |CFFI |Optional;|Default;| | | |non-Windows |PyPy |years| |default |optional| | | |platforms | | | | | | +----------+-------+------------+------------+-----+--------------+---------+--------+ |libuv |libuv |All |CPython and |2 |CFFI |Optional;|Default;| | | |interpreters|PyPy |years| |default |optional| | | |on Windows | | | | | | +----------+-------+------------+------------+-----+--------------+---------+--------+ .. _libev-impl: libev ----- `libev`_ is a venerable event loop library that has been the default in gevent since 1.0a1 in 2011 when it replaced libevent. libev has existed since 2007. .. note:: In the future, this Cython implementation may be deprecated to be replaced with :ref:`libev-cffi`. .. _libev-dev: .. rubric:: Development and Source libev is a stable library and does not change quickly. Changes are accepted in the form of patches emailed to a mailing list. Due to its age and its portability requirements, it makes heavy use of preprocessor macros, which some may find hinders readability of the source code. .. _libev-plat: .. rubric:: Platform Support gevent tests libev on Linux and macOS. There is no known list of platforms officially supported by libev, although FreeBSD, OpenBSD and Solaris/SmartOS have been reported to work with gevent on libev at various points. On Windows, libev has `many limitations `_. gevent relies on the Microsoft C runtime functions to map from Windows socket handles to integer file descriptors for libev using a somewhat complex mapping; this prevents the CFFI implementation from being used (which in turn prevents PyPy from using libev on Windows). There is no known public CI infrastructure for libev itself. .. _libev-cffi: libev-cffi ---------- This uses libev exactly as above, but instead of using Cython it uses CFFI. That makes it suitable (and the default) for PyPy. It can also make it easier to debug, since more details are preserved for tracebacks. .. note:: In the future, this CFFI implementation may become the default and replace :ref:`libev-impl`. .. rubric:: When To Use On PyPy or when debugging. libuv ----- libuv is an event loop library developed since 2011 for the use of node 0.5. It was originally a wrapper around libev on non-Windows platforms and directly used the native Windows IOCP support on Windows (this code was contributed by Microsoft). Now it has its own loop implementation on all supported platforms. libuv provides libev-like `"poll handles" `_, and in gevent 1.3 that is what gevent uses for IO. But libuv also provides higher-level abstractions around read and write requests that may offer improved performance. In the future, gevent might use those abstractions. .. note:: In the future, this implementation may become the default on all platforms. .. rubric:: Development and Source libuv is developed by the libuv organization on `github `_. It has a large, active community and is used in many popular projects including node.js. 
The source code is written in a clean and consistent coding style, potentially making it easier to read and debug. .. rubric:: Platform Support gevent tests libuv on Linux, Windows and macOS. libuv publishes an extensive list of `supported platforms `_ that are likely to work with gevent. libuv `maintains a public CI infrastructure `_. .. rubric:: When To Use libuv - You want to use PyPy on Windows. - You want to develop on Windows (Windows is not recommended for production). - You want to use an operating system not supported by libev such as IBM i. .. note:: Platforms other than Linux, macOS and Windows are not tested by gevent. .. _libuv-limits: Limitations and Differences ~~~~~~~~~~~~~~~~~~~~~~~~~~~ Because of its newness, and because of some design decisions inherent in the library and the ecosystem, there are some limitations and differences in the way gevent behaves using libuv compared to libev. - Timers (such as ``gevent.sleep`` and ``gevent.Timeout``) only support a resolution of 1ms (in practice, it's closer to 1.5ms). Attempting to use something smaller will automatically increase it to 1ms and issue a warning. Because libuv only supports millisecond resolution by rounding a higher-precision clock to an integer number of milliseconds, timers apparently suffer from more jitter. - Using negative timeouts may behave differently from libev. - libuv blocks delivery of all signals, so signals are handled using an (arbitrary) 0.3 second timer. This means that signal handling will be delayed by up to that amount, and that the longest the event loop can sleep in the operating system's ``poll`` call is that amount. Note that this is what gevent does for libev on Windows too. - libuv only supports one io watcher per file descriptor, whereas libev and gevent have always supported many watchers using different settings. The libev behaviour is emulated at the Python level. - Looping multiple times and expecting events for the same file descriptor to be raised each time without any data being read or written (as works with libev) does not appear to work correctly on Linux when using ``gevent.select.poll`` or a monkey-patched ``selectors.PollSelector``. - If anything unexpected happens, libuv likes to ``abort()`` the entire process instead of reporting an error. For example, closing a file descriptor it is using in a watcher may cause the entire process to be exited. - The order in which timers and other callbacks are invoked may be different than in libev. In particular, timers and IO callbacks happen in a different order, and timers may easily be off by up to half of the nominal 1ms resolution. See :issue:`1057`. - There is no support for priorities within classes of watchers. libev has some support for priorities and this is exposed in the low-level gevent API, but it was never documented. - Low-level ``fork`` and ``child`` watchers are not available. gevent emulates these in Python on platforms that supply :func:`os.fork`. Child watchers use ``SIGCHLD``, just as on libev, so the same caveats apply. - Low-level ``prepare`` watchers are not available. gevent uses prepare watchers for internal purposes. If necessary, this could be emulated in Python. Performance =========== In the various micro-benchmarks gevent has, performance among all three loop implementations is roughly the same. There doesn't seem to be a clear winner or loser. .. _libev: http://software.schmorp.de/pkg/libev.html .. _libuv: http://libuv.org .. 
LocalWords: gevent libev cffi PyPy CFFI libuv FreeBSD CPython Cython gevent-24.11.1/docs/make.bat000066400000000000000000000057771471441230600155270ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation set SPHINXBUILD=sphinx-build set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\gevent.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\gevent.ghc goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end gevent-24.11.1/docs/monitoring.rst000066400000000000000000000074251471441230600170310ustar00rootroot00000000000000============================================== Monitoring and Debugging gevent Applications ============================================== gevent applications are often long-running server processes. Beginning with version 1.3, gevent has special support for monitoring such applications and getting visibility into them. 
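As a quick orientation, this monitoring is switched on through ``gevent.config``. The following is a minimal sketch using the configuration attributes described in the sections below; the values shown are illustrative only, and the settings should be applied as early as possible in the process::

    import gevent

    gevent.config.monitor_thread = True    # start the background monitor thread for each hub
    gevent.config.max_blocking_time = 0.5  # report greenlets that block the event loop longer than 0.5s
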
.. tip:: For some additional tools, see the comments on `issue 1021 `_. The Monitor Thread ================== gevent can be :attr:`configured ` to start a native thread to watch over each hub it creates. Out of the box, that thread has support to watch two things, but you can :func:`add your own functions ` to be called periodically in this thread. Blocking -------- When the monitor thread is enabled, by default it will watch for greenlets that block the event loop for longer than a :attr:`configurable ` time interval. When such a blocking greenlet is detected, it will print :func:`a report ` to the hub's :attr:`~gevent.hub.Hub.exception_stream`. It will also emit the :class:`gevent.events.EventLoopBlocked` event. .. seealso:: :func:`gevent.util.assert_switches` For a scoped version of this. Memory Usage ------------ Optionally, you can set a :attr:`memory limit `. The monitor thread will check the process's memory usage every :attr:`~gevent._config.Config.memory_monitor_period` seconds, and if it is found to exceed this value, the :class:`gevent.events.MemoryUsageThresholdExceeded` event will be emitted. If in the future memory usage declines below the configured value, the :class:`gevent.events.MemoryUsageUnderThreshold` event will be emitted. .. important:: `psutil `_ must be installed to monitor memory usage. Visibility ========== .. tip:: Insight into the monkey-patching process can be obtained by observing the events :mod:`gevent.monkey` emits. It is sometimes useful to get an overview of all existing greenlets and their stack traces. The function :func:`gevent.util.print_run_info` will collect this info and print it (:func:`gevent.util.format_run_info` only collects and returns this information). The greenlets are organized into a tree based on the greenlet that spawned them. The ``print_run_info`` function is commonly hooked up to a signal handler to get the application state at any given time. For each greenlet the following information is printed: - Its current execution stack - If it is not running, its termination status and :attr:`gevent.Greenlet.value` or :attr:`gevent.Greenlet.exception` - The :attr:`stack at which it was spawned ` - Its parent (usually the hub) - Its :attr:`~gevent.Greenlet.minimal_ident` - Its :attr:`~gevent.Greenlet.name` - The :attr:`spawn tree locals ` (only for the root of the spawn tree). - The dicts of all :class:`gevent.local.local` objects that are used in the greenlet. The greenlet tree itself is represented as an object that you can also use for your own purposes: :class:`gevent.util.GreenletTree`. Profiling ========= The github repository `nylas/nylas-perftools `_ has some gevent-compatible profilers. - ``stacksampler`` is a sampling profiler meant to be run in a greenlet in your server process and exposes data through an HTTP server; it is designed to be suitable for production usage. - ``py2devtools`` is a greenlet-aware tracing profiler that outputs data that can be used by the Chrome dev tools; it is intended for developer usage. .. 
LocalWords: greenlets gevent greenlet gevent-24.11.1/docs/mytheme/000077500000000000000000000000001471441230600155525ustar00rootroot00000000000000gevent-24.11.1/docs/mytheme/changes/000077500000000000000000000000001471441230600171625ustar00rootroot00000000000000gevent-24.11.1/docs/mytheme/changes/frameset.html000066400000000000000000000006241471441230600216600ustar00rootroot00000000000000 {% trans version=version|e, docstitle=docstitle|e %}Changes in Version {{ version }} — {{ docstitle }}{% endtrans %} gevent-24.11.1/docs/mytheme/changes/rstsource.html000066400000000000000000000006471471441230600221100ustar00rootroot00000000000000 {% trans filename=filename, docstitle=docstitle|e %}{{ filename }} — {{ docstitle }}{% endtrans %}
      {{ text }}
    
gevent-24.11.1/docs/mytheme/changes/versionchanges.html000066400000000000000000000023341471441230600230700ustar00rootroot00000000000000{% macro entries(changes) %}
    {% for entry, docname, lineno in changes %}
  • {{ entry }}
  • {% endfor %}
{% endmacro -%} {% trans version=version|e, docstitle=docstitle|e %}Changes in Version {{ version }} — {{ docstitle }}{% endtrans %}

{% trans version=version|e %}Automatically generated list of changes in version {{ version }}{% endtrans %}

{{ _('Library changes') }}

{% for modname, changes in libchanges %}

{{ modname }}

{{ entries(changes) }} {% endfor %}

{{ _('C API changes') }}

{{ entries(apichanges) }}

{{ _('Other changes') }}

{% for (fn, title), changes in otherchanges %}

{{ title }} ({{ fn }})

{{ entries(changes) }} {% endfor %}
gevent-24.11.1/docs/mytheme/defindex.html000066400000000000000000000024631471441230600202330ustar00rootroot00000000000000{% extends "layout.html" %} {% set title = _('Overview') %} {% block body %}

{{ docstitle|e }}

Welcome! This is {% block description %}the documentation for {{ project|e }} {{ release|e }}{% if last_updated %}, last updated {{ last_updated|e }}{% endif %}{% endblock %}.

{% block tables %}

{{ _('Indices and tables:') }}

{% endblock %} {% endblock %} gevent-24.11.1/docs/mytheme/domainindex.html000066400000000000000000000037411471441230600207440ustar00rootroot00000000000000{# basic/domainindex.html ~~~~~~~~~~~~~~~~~~~~~~ Template for domain indices (module index, ...). :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {% extends "layout.html" %} {% set title = indextitle %} {% block extrahead %} {{ super() }} {% if not embedded and collapse_index %} {% endif %} {% endblock %} {% block body %} {%- set curr_group = 0 %}

{{ indextitle }}

{%- for (letter, entries) in content %} {{ letter }} {%- if not loop.last %} | {% endif %} {%- endfor %}
{%- for letter, entries in content %} {%- for (name, grouptype, page, anchor, extra, qualifier, description) in entries %} {%- if grouptype == 1 %}{% set curr_group = curr_group + 1 %}{% endif %} {%- endfor %} {%- endfor %}
 
{{ letter }}
{% if grouptype == 1 -%} {%- endif %} {% if grouptype == 2 %}   {% endif %} {% if page %}{% endif -%} {{ name|e }} {%- if page %}{% endif %} {%- if extra %} ({{ extra|e }}){% endif -%} {% if qualifier %}{{ qualifier|e }}:{% endif %} {{ description|e }}
{% endblock %} gevent-24.11.1/docs/mytheme/genindex-single.html000066400000000000000000000027241471441230600215250ustar00rootroot00000000000000{% extends "layout.html" %} {% set title = _('Index') %} {% block body %}

{% trans key=key %}Index – {{ key }}{% endtrans %}

{%- set breakat = count // 2 %} {%- set numcols = 1 %} {%- set numitems = 0 %} {% for entryname, (links, subitems) in entries %}
{%- if links -%}{{ entryname|e }} {%- for link in links[1:] %}, [{{ loop.index }}]{% endfor -%} {%- else -%} {{ entryname|e }} {%- endif -%}
{%- if subitems %}
{%- for subentryname, subentrylinks in subitems %}
{{ subentryname|e }} {%- for link in subentrylinks[1:] %}, [{{ loop.index }}]{% endfor -%}
{%- endfor %}
{%- endif -%} {%- set numitems = numitems + 1 + (subitems|length) -%} {%- if numcols < 2 and numitems > breakat -%} {%- set numcols = numcols+1 -%}
{%- endif -%} {%- endfor %}
{% endblock %} {% block sidebarrel %}

Index

{% for key, dummy in genindexentries -%} {{ key }} {% if not loop.last %}| {% endif %} {%- endfor %}

{{ _('Full index on one page') }}

{{ super() }} {% endblock %} gevent-24.11.1/docs/mytheme/genindex-split.html000066400000000000000000000016501471441230600213740ustar00rootroot00000000000000{% extends "layout.html" %} {% set title = _('Index') %} {% block body %}

{{ _('Index') }}

{{ _('Index pages by letter') }}:

{% for key, dummy in genindexentries -%} {{ key }} {% if not loop.last %}| {% endif %} {%- endfor %}

{{ _('Full index on one page') }} ({{ _('can be huge') }})

{% endblock %} {% block sidebarrel %} {% if split_index %}

Index

{% for key, dummy in genindexentries -%} {{ key }} {% if not loop.last %}| {% endif %} {%- endfor %}

{{ _('Full index on one page') }}

{% endif %} {{ super() }} {% endblock %} gevent-24.11.1/docs/mytheme/genindex.html000066400000000000000000000034331471441230600202440ustar00rootroot00000000000000{% extends "layout.html" %} {% set title = _('Index') %} {% block body %}

{{ _('Index') }}

{% for key, dummy in genindexentries -%} {{ key }} {% if not loop.last %}| {% endif %} {%- endfor %}
{% for key, entries in genindexentries %}

{{ key }}

{%- set breakat = genindexcounts[loop.index0] // 2 %} {%- set numcols = 1 %} {%- set numitems = 0 %} {% for entryname, links_subitems in entries %} {%- set links, subitems = links_subitems[:2] %}
{%- if links -%}{{ entryname|e }} {%- for link in links[1:] %}, [{{ loop.index }}]{% endfor -%} {%- else -%} {{ entryname|e }} {%- endif -%}
{%- if subitems %}
{%- for subentryname, subentrylinks in subitems %}
{{ subentryname|e }} {%- for link in subentrylinks[1:] %}, [{{ loop.index }}]{% endfor -%}
{%- endfor %}
{%- endif -%} {%- set numitems = numitems + 1 + (subitems|length) -%} {%- if numcols < 2 and numitems > breakat -%} {%- set numcols = numcols+1 -%}
{%- endif -%} {%- endfor %}
{% endfor %} {% endblock %} {% block sidebarrel %} {% if split_index %}

{{ _('Index') }}

{% for key, dummy in genindexentries -%} {{ key }} {% if not loop.last %}| {% endif %} {%- endfor %}

{{ _('Full index on one page') }}

{% endif %} {{ super() }} {% endblock %} gevent-24.11.1/docs/mytheme/layout.html000066400000000000000000000242041471441230600177570ustar00rootroot00000000000000{%- block doctype -%} {%- endblock %} {%- set reldelim1 = reldelim1 is not defined and ' »' or reldelim1 %} {%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %} {%- set url_root = pathto('', 1) %} {%- if url_root == '#' %}{% set url_root = '' %}{% endif %} {%- macro relbar() %} {%- endmacro %} {%- macro sidebar() %} {%- if not embedded %}{% if not theme_nosidebar|tobool %}
{%- block sidebarlogo %} {%- if logo %} {%- endif %} {%- endblock %} {%- block sidebartoc %} {%- if display_toc %}

{{ _('Table Of Contents') }}

{{ toc }} {%- else %}

{{ _('Navigation') }}

{%- endif %} {%- endblock %} {%- block sidebarsourcelink %} {%- if show_source and has_source and sourcename %}

{{ _('This Page') }}

{%- endif %} {%- endblock %} {%- if customsidebar %} {% include customsidebar %} {%- endif %} {# {%- block sidebarsearch %} {%- if pagename != "search" %} {%- endif %} {%- endblock %} #}
{%- endif %}{% endif %} {%- endmacro %} {%- macro script() %} {%- for js in script_files %} {{ js_tag(js) }} {%- endfor %} {%- endmacro %} {{ metatags }} {%- if not embedded and docstitle %} {%- set titlesuffix = " — "|safe + docstitle|e %} {%- else %} {%- set titlesuffix = "" %} {%- endif %} {{ title|striptags }}{{ titlesuffix }} {%- if not embedded %} {%- block scripts %} {{- script() }} {%- endblock %} {%- if favicon %} {%- endif %} {%- endif %} {%- block linktags %} {%- if hasdoc('about') %} {%- endif %} {%- if hasdoc('genindex') %} {%- endif %} {%- if hasdoc('search') %} {%- endif %} {%- if hasdoc('copyright') %} {%- endif %} {%- if parents %} {%- endif %} {%- if next %} {%- endif %} {%- if prev %} {%- endif %} {%- endblock %} {%- block extrahead %} {% endblock %}
{%- block document %}
{%- if not embedded %}{% if not theme_nosidebar|tobool %}
{%- endif %}{% endif %}
{% block body %} {% endblock %} {%- if next %}

Next page: {{ next.title }}

{%- endif %}
{%- if not embedded %}{% if not theme_nosidebar|tobool %}
{%- endif %}{% endif %}
{%- endblock %}
{%- block sidebar2 %}{{ sidebar() }}{% endblock %}
 
gevent-24.11.1/docs/mytheme/modindex.html000066400000000000000000000031341471441230600202500ustar00rootroot00000000000000{% extends "layout.html" %} {% set title = _('Global Module Index') %} {% block extrahead %} {{ super() }} {% if not embedded and collapse_modindex %} {% endif %} {% endblock %} {% block body %}

{{ _('Global Module Index') }}

{%- for letter in letters %} {{ letter }} {% if not loop.last %}| {% endif %} {%- endfor %}
{%- for modname, collapse, cgroup, indent, fname, synops, pform, dep, stripped in modindexentries %} {%- if not modname -%} {%- else -%} {%- endif -%} {% endfor %}
 
{{ fname }}
{% if collapse -%} {%- endif %} {% if indent %}   {% endif %} {% if fname %}{% endif -%} {{ stripped|e }}{{ modname|e }} {%- if fname %}{% endif %} {%- if pform and pform[0] %} ({{ pform|join(', ') }}){% endif -%} {% if dep %}{{ _('Deprecated')}}:{% endif %} {{ synops|e }}
{% endblock %} gevent-24.11.1/docs/mytheme/page.html000066400000000000000000000001111471441230600173450ustar00rootroot00000000000000{% extends "layout.html" %} {% block body %} {{ body }} {% endblock %} gevent-24.11.1/docs/mytheme/static/000077500000000000000000000000001471441230600170415ustar00rootroot00000000000000gevent-24.11.1/docs/mytheme/static/basic.css_t000066400000000000000000000732151471441230600211670ustar00rootroot00000000000000/* Template name: Simple Organization Template URI: http://templates.arcsin.se/simple-organization-website-template/ Release date: 2009-09-20 Last updated: 2009-09-24 Description: A simple and elegant template suitable for organizations. Author: Viktor Persson Author URI: http://arcsin.se/ This template is licensed under a Creative Commons Attribution 2.5 License: http://templates.arcsin.se/license/ */ /* Reset ------------------------------------------------------------------- */ html, body, div, span, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, /*pre,*/ a, abbr, acronym, address, code, del, dfn, em, img, q, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, textarea, select {margin: 0; padding: 0; border: 0; font-weight: inherit; font-style: inherit; font-size: 100%; font-family: inherit; vertical-align: baseline;} table {border-collapse: collapse; border-spacing: 0;} caption, th, td {text-align: left; font-weight: normal;} table, td, th {vertical-align: middle;} blockquote:before, blockquote:after, q:before, q:after {content: "";} blockquote, q {quotes: "" "";} a img {border: none;} :focus {outline: 0;} /* General ------------------------------------------------------------------- */ html { height: 100%; padding-bottom: 1px; /* force scrollbars */ } body { background: #FFF; color: #444; font: normal 75% sans-serif; line-height: 1.5; } /* Typography ------------------------------------------------------------------- */ /* Headings */ h1,h2,h3,h4,h5,h6 { color: #444; font-weight: normal; line-height: 1; margin-bottom: 0.3em; } /*h4,h5,h6 {font-weight: bold;}*/ h1 {font-size: 2em;} h2 {font-size: 2em;} h3 {font-size: 1.5em;} h4 {font-size: 1.25em;} h5 {font-size: 1.1em;} h6 {font-size: 1em;} h1 img, h2 img, h3 img, h4 img, h5 img, h6 img {margin: 0;} .document h1, .document h2, .document h3 { /* label */ border-left-style: solid; border-left-width: 4px; margin-bottom: 0.2em; padding-left: 10px; /* label-green */ border-left-color: #B7D897; } .document h1 {font-size: 2em; margin-bottom: 1em; } .document h2 {font-size: 1.5em; margin-bottom: 1em; margin-top: 1em; } .document h3 {font-size: 1.25em; margin-bottom: 1em; margin-top: 1em; } .document h4 {font-size: 1.1em; margin-bottom: 1em; margin-top: 1em; } .document h5 {font-size: 1em; margin-bottom: 1em; margin-top: 1em; } .title {color: #7c9a5e;} /* Links */ a:focus,a:hover {color: {{ theme_linkcolor }}; /*#039;*/} a { color: #456; text-decoration: none; } a:hover {text-decoration: underline;} a.feed { background: url('img/icon-feed.gif') no-repeat left center; padding-left: 18px; } a.more { color: #579; font-weight: bold; } a.more:hover {color: #234;} h2 a {color: #444; text-decoration: none;} h3 a {color: #444; text-decoration: none;} h2 a:hover {color: #000; text-decoration: none;} h3 a:hover {color: #000; text-decoration: none;} a .regular {color: #444; } a.nobr { white-space: nowrap; } /* Text elements */ p {margin-bottom: 1em; margin-top: 1em; } abbr, acronym {border-bottom: 1px dotted #666;} address {margin-bottom: 1.5em;} blockquote 
{margin: 1.5em;} del, blockquote { color:#666; } em, dfn, blockquote, address {font-style: italic;} strong, dfn {font-weight: bold;} sup, sub {line-height: 0;} /*pre { margin: 1.5em 0; white-space: pre; } pre,code,tt { font: 1em monospace; line-height: 1.5; }*/ /* Lists */ li ul, li ol {margin-left: 1.5em;} ul, ol {margin: 1.5em 0 1.5em 1.5em;} /*ul {list-style-type: disc;}*/ ol { /*list-style-type: decimal;*/ margin-left: 1.9em; } dl {margin: 0 0 1.5em 0;} dl dt {font-weight: bold;} dd {margin-left: 1.5em;} /* Special lists */ ul.plain-list li, ul.nice-list li, ul.tabbed li { list-style: none; margin-top: 0; } ul.tabbed { display: inline; margin: 0; } ul.tabbed li {float: left;} ul.plain-list {margin: 0;} ul.nice-list {margin-left: 0;} ul.nice-list li { border-top: 1px solid #EEE; list-style: none; padding: 4px 0; } ul.nice-list li:first-child {border-top: none;} ul.nice-list li .right {color: #999;} /* Tables */ table {margin-bottom: 1.4em; width: 100%;} th {font-weight: bold;} thead th {background: #C3D9FF;} th,td,caption {padding: 4px 10px 4px 5px;} tr.even td {background: #F2F6FA;} tfoot {font-style: italic;} caption {background: #EEE;} table.data-table { border: 1px solid #CCB; margin-bottom: 2em; width: 100%; } table.data-table th { background: #F0F0F0; border: 1px solid #DDD; color: #555; text-align: left; } table.data-table tr {border-bottom: 1px solid #DDD;} table.data-table td, table th {padding: 10px;} table.data-table td { background: #F6F6F6; border: 1px solid #DDD; } table.data-table tr.even td {background: #FCFCFC;} /* Misc classes */ .small {font-size: 0.9em;} .smaller {font-size: 0.8em;} .smallest {font-size: 0.7em;} .large {font-size: 1.15em;} .larger {font-size: 1.25em;} .largest {font-size: 1.35em;} .hidden {display: none;} .quiet, .quiet a {color: #999;} .loud, .loud a {color: #000;} .highlight, .highlight a {background:#ff0;} .text-left {text-align: left;} .text-right {text-align: right;} .text-center {text-align: center;} .text-separator {padding: 0 5px;} .error, .notice, .success { border: 1px solid #DDD; margin-bottom: 1em; padding: 0.6em 0.8em; } .error {background: #FBE3E4; color: #8A1F11; border-color: #FBC2C4;} .error a {color: #8A1F11;} .notice {background: #FFF6BF; color: #514721; border-color: #FFD324;} .notice a {color: #514721;} .success {background: #E6EFC2; color: #264409; border-color: #C6D880;} .success a {color: #264409;} /* Labels */ h1.label { border-left-style: solid; border-left-width: 4px; margin-bottom: 0.2em; padding-left: 10px; } h1.label-blue {border-left-color: #55AADA;} h1.label-green {border-left-color: #B7D897;} h1.label-orange {border-left-color: #FA8F6F;} h2.label { border-left-style: solid; border-left-width: 4px; margin-bottom: 0.2em; padding-left: 10px; } h2.label-blue {border-left-color: #55AADA;} h2.label-green {border-left-color: #B7D897;} h2.label-orange {border-left-color: #FA8F6F;} /* Forms ------------------------------------------------------------------- */ label { cursor: pointer; font-weight: bold; } label.checkbox, label.radio {font-weight: normal;} legend { font-weight: bold; font-size: 1.2em; } textarea {overflow: auto;} input.text, textarea, select { background: #FCFCFC; border: 1px inset #AAA; margin: 0.5em 0; padding: 4px 5px; } input.text:focus, textarea:focus, select:focus {background: #FFFFF5;} input.button { background: #DDD; border: 1px outset #AAA; padding: 4px 5px; } input.button:active {border-style: inset;} /* Specific */ form .required {font-weight: bold;} .form-error {border-color: #F00;} .form-row 
{padding: 5px 0;} .form-row-submit { border-top: 1px solid #DDD; padding: 8px 0 10px 76px; margin-top: 10px; } .legend { background: #F0FAF0; border: 1px solid #D6DFD6; font-size: 1.5em; margin: 0; padding: 8px 14px; } .form-property, .form-value {float: left;} .form-property { padding-top: 8px; text-align: right; width: 60px; } .form-value {padding-left: 16px;} .form-error {border-color: #F00;} /* Alignment ------------------------------------------------------------------- */ /* General */ .center,.aligncenter { display: block; margin-left: auto; margin-right: auto; } /* Images */ img.bordered,img.alignleft,img.alignright,img.aligncenter { background-color: #FFF; border: 1px solid #DDD; padding: 3px; } img.alignleft, img.left {margin: 0 1.5em 1em 0;} img.alignright, img.right {margin: 0 0 1em 1.5em;} /* Floats */ .left,.alignleft {float: left;} .right,.alignright {float: right;} .clear,.clearer {clear: both;} .clearer { display: block; font-size: 0; line-height: 0; height: 0; } /* Separators ------------------------------------------------------------------- */ .content-separator, .archive-separator { background: #E5E5E5; clear: both; color: #FFE; display: block; font-size: 0; line-height: 0; height: 1px; } .content-separator {margin: 32px 0;} .archive-separator {margin-bottom: 20px;} /* Posts ------------------------------------------------------------------- */ .post {margin-bottom: 20px;} .post img.left, .post img.right {margin-bottom: 0;} .post-date { color: #777; margin: 2px 0 10px; } .post-date a {color: #444;} .post-meta a {color: #345; } .post-meta a:hover {color: #001;} /*.body {font-size: 133.33333%;}*/ .body {font-size: 1.1em;} .body a {color: {{ theme_linkcolor }}; /*#039;*/} .body a:hover {color: {{ theme_linkcolor }}; /*#039;*/} .body img.left, .body img.right {margin-bottom: 1em;} /* Archives */ .archive-pagination { color: #777; padding: 10px 0; } .archive-pagination-top { border-bottom: 2px solid #DDD; margin-bottom: 24px; } .archive-pagination-bottom { border-top: 2px solid #DDD; margin-top: 24px; } .archive-post-date { background: #F5F5F5; border-bottom: 1px solid #C5C5C5; border-right: 1px solid #CFCFCF; float: left; margin-right: 12px; padding: 2px 0 5px; text-align: center; width: 46px; } .archive-post-title .post-date {margin: 0;} .archive-post-title {padding-top: 4px;} .archive-post-day {font: normal 1.6em Georgia,serif;} /* Comments ------------------------------------------------------------------- */ /* .comment-input-text textarea {width: 80%;} // Comment list .comment-list-wrapper { background: #F6F6F6; margin: 10px 0 0; padding: 5px 12px 10px 7px; } .comment-list { margin: 0; padding: 0; } .comment-list li {list-style: none;} .comment-list ul {margin-bottom: 0;} .comment-profile-wrapper { text-align: center; width: 105px; } .comment-gravatar {margin-bottom: 3px;} .comment-content-wrapper { float: right; width: 481px; } .comment-parent, .comment-single {margin-top: 15px;} .comment-list ul.children, #comments #respond ul { border-left: 1px solid #CCC; margin: 0 0 0 130px; } .comment-list ul.children ul.children {margin-left: 15px;} .comment-list ul.children li { background: url('img/comment-reply.gif') no-repeat left top; margin: 0; padding: 10px 0 0 15px; } .comment-body { background: #FFF; border: 1px solid #DDD; padding: 10px 12px 0; } .comment-list ul.children .comment-body {background: #FCFCFC;} .comment-author {padding-top: 2px;} .comment-text p {margin-bottom: 0.8em;} .comment .post-date, .comment-author {font-size: 0.9em;} .comment .post-date .right a 
{color: #BBB;} .comment .post-date .right a:hover {color: #234;} .comment-arrow { background: url('img/comment-arrow.gif') no-repeat left top; display: block; float: left; height: 45px; margin: 3px 0 -45px -41px; position: absolute; width: 29px; } // Respond #respond li {list-style: none;} #respond { background: #F6F6F6; padding: 10px 12px; } #respond ul {margin: 0;} #respond .legend {margin-bottom: 10px;} #comments #respond {padding: 0;} #comments #respond .legend { border-bottom: 0; margin-bottom: 0; } #comments #respond ul { background: url('img/comment-reply.gif') no-repeat left top; padding: 10px 0 0 15px; } #comments ul.children #respond ul { margin-left: 30px; padding: 0; } #comments #respond .comment-profile-wrapper, #comments #respond .comment-arrow {display: none;} #comments #respond .comment-body {background: #FFF;} #comments #respond .comment-content-wrapper { float: none; width: 100%; } */ /* Layout ------------------------------------------------------------------- */ /* Common */ #top, #sub-nav {border-bottom: 1px solid #DDD;} /* Wrapper */ #site-wrapper { margin: 0 auto; width: 80%; } /* Header */ #header {padding-top: 24px;} /* Top */ #top {padding-bottom: 32px;} /* Logo */ #logo { border-right: 1px solid #DDD; padding: 10px 40px 10px 0; margin-right: 40px; } #logo img {} /* Splash */ #splash {padding-top: 32px;} /* Navigation */ .navigation a { color: #888; text-decoration: none; } .navigation a:hover {color: #002;} .navigation li.current-tab a {color: #222;} #main-nav li:first-child, #sub-nav li:first-child {margin-left: 0;} /* Main navigation */ #main-nav {padding-top: 0px;} #main-nav li {margin: 0 1.5em;} #main-nav a { font-size: 1.45em; line-height: 2em; padding-bottom: 2px; } #main-nav li.current-tab a {color: #333;} #main-nav a:hover {color: #002;} #main-nav li.current-tab a {border-bottom: 2px solid #94CC5F;} #main-nav li.current-tab a:hover {color: #002;} #title {color: #7c9a5e; text-decoration: none} #title:hover {text-decoration: none} /* Subnav */ #sub-nav { border-bottom: 1px solid #DDD; padding: 12px 0; } #sub-nav a { font-size: 1.2em; text-decoration: none; } #sub-nav li {margin: 0 1em;} #sub-nav li.current-tab a {font-weight: bold;} /* Main */ .main {margin: 24px 0;} .main#main-two-columns {background: url('img/main-two-columns.gif') repeat-y right top;} .main#main-two-columns-left {background: url('img/main-two-columns-left.gif') repeat-y left top;} .main#main-two-columns #main-content, .main#main-two-columns-left #main-content {width: 620px;} /* Sidebar */ #sidebar {width: 255px;} /* Columns */ .col3, .col3-mid {width: 31%;} .col3-mid {margin-left: 3%;} .col3big { width: 65% } /* Sections */ .section {margin-bottom: 24px;} .section-title { background-color: #F9F9F9; border-top: 2px solid #DDD; color: #7A7A7A; font: bold 1.2em sans-serif; margin-bottom: 16px; padding: 7px 10px 6px; } .section-title a {color: #7A7A7A;} .section-title a:hover {color: #444; text-decoration: none;} #sidebar .section-title {margin-bottom: 8px;} /* Footer */ #footer { border-top: 1px solid #DDD; color: #777; padding: 16px 0 4px; } #footer-left {width: 259px;} #footer-right { width: 659px; text-align: right; } #footer p {margin-bottom: 0.4em;} #footer .text-separator { padding: 0 3px; color: #BBB; } #footer a:hover {color: #000;} #footer a.quiet-link {text-decoration: none; color: #777} #footer a.quiet-link:hover {text-decoration: underline; color: #000} /* Misc overriding classes ------------------------------------------------------------------- */ /* Border */ .noborder 
{border: 0;} .notborder {border-top: 0;} .norborder {border-right: 0;} .nobborder {border-bottom: 0;} .nolborder {border-left: 0;} /* Margin */ .nomargin {margin: 0;} .notmargin {margin-top: 0;} .normargin {margin-right: 0;} .nobmargin {margin-bottom: 0;} .nolmargin {margin-left: 0;} /* Padding */ .nopadding {padding: 0;} .notpadding {padding-top: 0;} .norpadding {padding-right: 0;} .nobpadding {padding-bottom: 0;} .nolpadding {padding-left: 0;} /* IE Fixes (zzz) ------------------------------------------------------------------- */ * html .navigation, * html #footer, * html #splash, * html .comment ul {height: 0.01%;} * html #footer-left {width: 500px;} .navigation, #splash, .comment ul {min-height: 0.01%;} /* Sphinx stylesheet */ /* -- admonitions ----------------------------------------------------------- */ div.admonition { margin-top: 10px; margin-bottom: 10px; padding: 7px; } div.admonition dt { font-weight: bold; } div.admonition dl { margin-bottom: 0; } p.admonition-title { margin: 0px 10px 5px 0px; font-weight: bold; } div.body p.centered { text-align: center; margin-top: 25px; } tt { background-color: #ecf0f3; padding: 0 1px 0 1px; font-size: 110%; } .warning tt { background: #efc2c2 !important; } .note tt { background: #d6d6d6; } dt:target, .highlight { background-color: #fbe54e; } /* -- body styles ----------------------------------------------------------- */ a { color: {{ theme_linkcolor }}; text-decoration: none; } /* a:hover { text-decoration: underline; } */ a.headerlink { color: {{ theme_headlinkcolor }}; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: {{ theme_headlinkcolor }}; color: white; } /* -- general body styles --------------------------------------------------- */ a.headerlink { visibility: hidden; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink { visibility: visible; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } tt.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } tt.descclassname { background-color: transparent; } tt.xref, a tt { background-color: transparent; font-weight: bold; font-size: 1.2em; } h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { background-color: transparent; } div.admonition p.admonition-title + p { display: inline; } div.admonition p { margin-bottom: 5px; } div.admonition pre { margin-bottom: 5px; } div.admonition ul, div.admonition ol { margin-bottom: 5px; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } div.caution { background-color: #fff6f1; border: 1px solid #ffaaa3; } div.hint, div.tip { border-left-style: solid; border-left-width: 2px; border-left-color: #B7D897; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre { padding: 5px; background-color: {{ theme_codebgcolor }}; color: {{ theme_codetextcolor }}; font-size: 120%; line-height: 150%; border: 1px solid #ac9; border-left: none; border-right: none; /*font-family: {{ theme_bodyfont }};*/ } /* -- other body styles 
----------------------------------------------------- */ ol.arabic { list-style: decimal; } ol.loweralpha { list-style: lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, .highlight { background-color: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } th.field-name { vertical-align: top; } .refcount { color: #060; } .optional { font-size: 1.3em; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .classifier { font-style: oblique; } ul ul { margin-top: 0em; margin-bottom: 0em; } p.rubric { margin-top: 30px; font-weight: bold; } /* Remove RTD cruft */ body > div.injected { display: none; } /* // // Sphinx stylesheet -- basic theme // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ // // -- main layout ----------------------------------------------------------- div.clearer { clear: both; } // -- relbar ---------------------------------------------------------------- div.related { width: 100%; font-size: 90%; } div.related h3 { display: none; } div.related ul { margin: 0; padding: 0 0 0 10px; list-style: none; } div.related li { display: inline; } div.related li.right { float: right; margin-right: 5px; } // -- sidebar --------------------------------------------------------------- div.sphinxsidebarwrapper { padding: 10px 5px 0 10px; } div.sphinxsidebar { float: left; width: 230px; margin-left: -100%; font-size: 90%; } div.sphinxsidebar ul { list-style: none; } div.sphinxsidebar ul ul, div.sphinxsidebar ul.want-points { margin-left: 20px; list-style: square; } div.sphinxsidebar ul ul { margin-top: 0; margin-bottom: 0; } div.sphinxsidebar form { margin-top: 10px; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } img { border: 0; } // -- search page ----------------------------------------------------------- ul.search { margin: 10px 0 0 20px; padding: 0; } ul.search li { padding: 5px 0 5px 20px; background-image: url(file.png); background-repeat: no-repeat; background-position: 0 7px; } ul.search li a { font-weight: bold; } ul.search li div.context { color: #888; margin: 2px 0 0 30px; text-align: left; } ul.keywordmatches li.goodmatch a { font-weight: bold; } // -- index page ------------------------------------------------------------ table.contentstable { width: 90%; } table.contentstable p.biglink { line-height: 150%; } a.biglink { font-size: 1.3em; } span.linkdescr { font-style: italic; padding-top: 5px; font-size: 90%; } // -- general index --------------------------------------------------------- table.indextable td { text-align: left; vertical-align: top; } table.indextable dl, table.indextable dd { margin-top: 0; margin-bottom: 0; } table.indextable tr.pcap { height: 10px; } table.indextable tr.cap { margin-top: 10px; background-color: #f2f2f2; } img.toggler { margin-right: 3px; margin-top: 3px; cursor: pointer; } // -- general body styles --------------------------------------------------- div.body p.caption { text-align: inherit; } div.body 
td { text-align: left; } .field-list ul { padding-left: 1em; } .first { margin-top: 0 !important; } p.rubric { margin-top: 30px; font-weight: bold; } .align-left { text-align: left; } .align-center { clear: both; text-align: center; } .align-right { text-align: right; } // -- sidebars -------------------------------------------------------------- div.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px 7px 0 7px; background-color: #ffe; width: 40%; float: right; } p.sidebar-title { font-weight: bold; } // -- topics ---------------------------------------------------------------- div.topic { border: 1px solid #ccc; padding: 7px 7px 0 7px; margin: 10px 0 10px 0; } p.topic-title { font-size: 1.1em; font-weight: bold; margin-top: 10px; } // -- tables ---------------------------------------------------------------- table.docutils { border: 0; border-collapse: collapse; } table.docutils td, table.docutils th { padding: 1px 8px 1px 0; border-top: 0; border-left: 0; border-right: 0; border-bottom: 1px solid #aaa; } table.field-list td, table.field-list th { border: 0 !important; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } table.citation { border-left: solid 1px gray; margin-left: 1px; } table.citation td { border-bottom: none; } // -- other body styles ----------------------------------------------------- ol.arabic { list-style: decimal; } ol.loweralpha { list-style: lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, .highlight { background-color: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .refcount { color: #060; } .optional { font-size: 1.3em; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .classifier { font-style: oblique; } // -- code displays --------------------------------------------------------- pre { overflow: auto; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } tt.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } tt.descclassname { background-color: transparent; } tt.xref, a tt { background-color: transparent; font-weight: bold; } h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { background-color: transparent; } // -- math display ---------------------------------------------------------- img.math { vertical-align: middle; } div.body div.math p { text-align: center; } span.eqno { float: right; } // -- printout stylesheet --------------------------------------------------- @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } } // default theme body { font-family: {{ theme_bodyfont }}; font-size: 100%; background-color: {{ theme_footerbgcolor }}; color: #000; margin: 0; padding: 0; } 
div.document { background-color: {{ theme_sidebarbgcolor }}; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 230px; } div.body { background-color: {{ theme_bgcolor }}; color: {{ theme_textcolor }}; padding: 0 20px 30px 20px; } {%- if theme_rightsidebar|tobool %} div.bodywrapper { margin: 0 230px 0 0; } {%- endif %} div.footer { color: {{ theme_footertextcolor }}; width: 100%; padding: 9px 0 9px 0; text-align: center; font-size: 75%; } div.footer a { color: {{ theme_footertextcolor }}; text-decoration: underline; } div.related { background-color: {{ theme_relbarbgcolor }}; line-height: 30px; color: {{ theme_relbartextcolor }}; } div.related a { color: {{ theme_relbarlinkcolor }}; } div.sphinxsidebar { {%- if theme_stickysidebar|tobool %} top: 30px; bottom: 0; margin: 0; position: fixed; overflow: auto; height: auto; {%- endif %} {%- if theme_rightsidebar|tobool %} float: right; {%- if theme_stickysidebar|tobool %} right: 0; {%- endif %} {%- endif %} } {%- if theme_stickysidebar|tobool %} // this is nice, but it it leads to hidden headings when jumping // to an anchor // //div.related { // position: fixed; //} // //div.documentwrapper { // margin-top: 30px; //} {%- endif %} div.sphinxsidebar h3 { font-family: {{ theme_headfont }}; color: {{ theme_sidebartextcolor }}; font-size: 1.4em; font-weight: normal; margin: 0; padding: 0; } div.sphinxsidebar h3 a { color: {{ theme_sidebartextcolor }}; } div.sphinxsidebar h4 { font-family: {{ theme_headfont }}; color: {{ theme_sidebartextcolor }}; font-size: 1.3em; font-weight: normal; margin: 5px 0 0 0; padding: 0; } div.sphinxsidebar p { color: {{ theme_sidebartextcolor }}; } div.sphinxsidebar p.topless { margin: 5px 10px 10px 10px; } div.sphinxsidebar ul { margin: 10px; padding: 0; color: {{ theme_sidebartextcolor }}; } div.sphinxsidebar a { color: {{ theme_sidebarlinkcolor }}; } div.sphinxsidebar input { border: 1px solid {{ theme_sidebarlinkcolor }}; font-family: sans-serif; font-size: 1em; } // -- body styles ----------------------------------------------------------- a { color: {{ theme_linkcolor }}; text-decoration: none; } a:hover { text-decoration: underline; } div.body p, div.body dd, div.body li { text-align: justify; line-height: 130%; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: {{ theme_headfont }}; background-color: {{ theme_headbgcolor }}; font-weight: normal; color: {{ theme_headtextcolor }}; border-bottom: 1px solid #ccc; margin: 20px -20px 10px -20px; padding: 3px 0 3px 10px; } div.body h1 { margin-top: 0; font-size: 200%; } div.body h2 { font-size: 160%; } div.body h3 { font-size: 140%; } div.body h4 { font-size: 120%; } div.body h5 { font-size: 110%; } div.body h6 { font-size: 100%; } a.headerlink { color: {{ theme_headlinkcolor }}; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: {{ theme_headlinkcolor }}; color: white; } div.body p, div.body dd, div.body li { text-align: justify; line-height: 130%; } div.admonition p.admonition-title + p { display: inline; } div.admonition p { margin-bottom: 5px; } div.admonition pre { margin-bottom: 5px; } div.admonition ul, div.admonition ol { margin-bottom: 5px; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { 
content: ":"; } pre { padding: 5px; background-color: {{ theme_codebgcolor }}; color: {{ theme_codetextcolor }}; border: 1px solid #ac9; border-left: none; border-right: none; } tt { background-color: #ecf0f3; padding: 0 1px 0 1px; font-size: 0.95em; } .warning tt { background: #efc2c2; } .note tt { background: #d6d6d6; } */ gevent-24.11.1/docs/mytheme/static/file.png000066400000000000000000000006101471441230600204630ustar00rootroot00000000000000PNG  IHDRabKGD pHYs  tIME  )TIDAT8˭J@Ir('[ "&xYZ X0!i|_@tD] #xjv YNaEi(əy@D&`6PZk$)5%"z.NA#Aba`Vs_3c,2mj [klvy|!Iմy;v "߮a?A7`c^nk?Bg}TЙD# "RD1yER*6MJ3K_Ut8F~IENDB`gevent-24.11.1/docs/mytheme/static/img/000077500000000000000000000000001471441230600176155ustar00rootroot00000000000000gevent-24.11.1/docs/mytheme/static/img/main-two-columns.gif000066400000000000000000000010031471441230600235070ustar00rootroot00000000000000GIF89a!,Dڋ<H}ʶ sjLcsĢ!<*LZ JSӪJybܨ ⲹF>ק4 ߸zdn?'8hVx((9iITyٓi):jtZTJ KU{K2+K\\{ }:M=j}} >)>Xn~>/G_w_迬@62( 2 "r!D.'2h 㭊p$3>tɔFVZLS.giΜxzMPM@(jҤ,2EԑӨ%Ra0UNXA`0, jNhN\;p:kI{/`|o⽋6xn专V^{mfvkhzjꨫf:ilzgns{f_gy/?A@2CCEFHIHTKWLY&OZ%'(\)]2(_*2+4`5>l-5=d7n?@pAHx@IpHBJtKSKS[~UV\]^]e_fghnopiqqxrxy{{ƒĊąŌƒǍɍȔɕ̖̐͗͝ΘϟЦѡӡҧөըԯ֩תװزٳڴں۵ݴܻ޼ݼU~P IDATh՚oHg~]? LuC9zuE)A ^PDhE_],Il J4涥!KjS2ZbrdKr[8ȑ3" ]cnyguwSyyј 8(t!.'!}!~Dԙ8E9B1d%zѺ%+*~Ν4]zj+ozբ^4f`&fD)l J+"!.E1P,ZG)|vYPnRpT_WDOo/bG>&+1Egqi~&fxWH1G"ԙUs!i߱/38 r9^fcLw1!I:C39=,:wXF%T1:;ssnSJbQO`PMΊADTۚ}X3\ ַ's2gax䚆mpg!Sqf17z5 j_,娸Yw?V[J&}6_ɾ:·c=@^N i#%.t0$/ Mē)9ipܨ*A[ ഡׁaů3畢)=)+ gM^TLP1hNad?KNWX i`Uѡ|oqU{Kzy]-H/mCSJcTl^C5kItuΒ!xh]~+|Pa 5w}q\KpGPᴋM^vP}3Pbp$5%}MCF9M.Ut( 3*x]fQl9ji{fyVWQoĨfcX3jq,5B8Yn(1> F7}_-FlfAj8@` :.±'ci,7fi\HM )fOg=~>t48 XSAOb0 I֮ҹG񸈇T< {˪K9і19f ;Yn7B:> iSȗ9 ^a {o[>%XQα Ì&[*,/- 9u.8 g*/H)_ #)1DSIk^S~^qhq$MY\Rٕ h:zO-!oُ+-A8[а]N[pݖ2q},x͖26V^ǂĥT䏯^Rv`LyG@5?b8.(*.Nd~/l '9Ea]"G@ݯZo9JK#,;j;1iFa .$XN4*{bv4r@Z0Wo%;_iz^ɑ9dž9:\ygn]zF52 8Q.Gf 6 vrD6#>t̮ZϡG$ Wb!f j*&|Hm"doS;i=5_cUZ.3 !ӄj7>u˄R(j<~xۍ~!0TqVC s`&2Cj`ަs+NI OJ㨸Y| >]e^1q<N,G7rVi+?BTkr2نVk7G3mUe(4X 3ޞSBM1Xޭ.laR\\o$a(ր,Q{ePmlT; UǚnZƀwk% BDbh:z,v]f}=&h=bUpl6B[n=sZY9Wxc~x%ٙ./.O*#U-9{%~IENDB`gevent-24.11.1/docs/mytheme/static/plus.png000066400000000000000000000003071471441230600205320ustar00rootroot00000000000000PNG  IHDR &q pHYs  tIME 1l9tEXtComment̖RIDATcz(BpipPc |IENDB`gevent-24.11.1/docs/mytheme/static/spotify_logo.png000066400000000000000000000027161471441230600222720ustar00rootroot00000000000000PNG  IHDR44sRGBPLTEZ2],Z/T\1VU$Y^#``%b5g{RzEv'z0|?|S|({1w)}Ty9|G~)|2x*NU}H{ *}3y+{;V~I|"+b<~4PW]}<-c{,QD$.^d|->E%~%kSY&/`fM&'0lN(UNms([*niV\]dXk_`lygzhtcio{jqwl~Eszu€†[ÔďŐǗȞȘȓəʛˏ˕̖̜q̤͗͝Θ͞ТШѣҤҪѰұҫլմծר׵ضھڸ۹ݼݽ!k pHYsaa?itIME07QCTIDATHc8PX@*P@`($GSQMF5j"[S!\SRH&`HM)B)pfTXIUAx@VAxxVJDF|!VM)yaP}:{Ll.oâ)w]'9yٛy}jᔗͧڤgo燢gۗ??_73}n_{|oGʥ<'>8vT[$Mэw1:g^16?ͿU|95i k*;qϋ^˿PoF޳xv%jzY}|v?Kqe~ϾeT%ӮE__rmS?oV*-ʑ=2']VzQgv pFqu3j];/KY'u{$9-랽;>bo`hVo6YtM7nr]塊9Şrg猄f >y|qW{?{?`|w0CSWxAt}TWc ft+/)ĕýRzh-P5m ]Bqq*IENDB`gevent-24.11.1/docs/mytheme/static/transparent.gif000066400000000000000000000000611471441230600220660ustar00rootroot00000000000000GIF89a!,D;gevent-24.11.1/docs/mytheme/theme.conf000066400000000000000000000012241471441230600175220ustar00rootroot00000000000000[theme] inherit = basic stylesheet = basic.css pygments_style = sphinx [options] gevent_version = 0.0.0 nosidebar = false rightsidebar = false stickysidebar = false footerbgcolor = #11303d footertextcolor = #ffffff sidebarbgcolor = #1c4e63 sidebartextcolor = #ffffff sidebarlinkcolor = #98dbcc relbarbgcolor = #133f52 relbartextcolor = #ffffff relbarlinkcolor = #ffffff bgcolor = #ffffff textcolor = #000000 
headbgcolor = #f2f2f2 headtextcolor = #20435c headlinkcolor = #c60f0f linkcolor = #355f7c codebgcolor = #eeffcc codetextcolor = #333333 bodyfont = sans-serif headfont = 'Trebuchet MS', sans-serif gevent-24.11.1/docs/older_releases.rst000066400000000000000000000007211471441230600176240ustar00rootroot00000000000000================================== Information About Older Releases ================================== .. toctree:: :caption: Release announcements :maxdepth: 2 whatsnew_1_5 whatsnew_1_4 whatsnew_1_3 whatsnew_1_2 whatsnew_1_1 whatsnew_1_0 .. toctree:: :caption: Older change logs, without context. :maxdepth: 2 changelog_1_5 changelog_1_4 changelog_1_3 changelog_1_2 changelog_1_1 changelog_1_0 changelog_pre gevent-24.11.1/docs/servers.rst000066400000000000000000000046251471441230600163340ustar00rootroot00000000000000.. _implementing-servers: ====================== Implementing servers ====================== .. currentmodule:: gevent.baseserver There are a few classes to simplify server implementation with gevent. They all share a similar interface, inherited from :class:`BaseServer`:: def handle(socket, address): print('new connection!') server = StreamServer(('127.0.0.1', 1234), handle) # creates a new server server.start() # start accepting new connections At this point, any new connection accepted on ``127.0.0.1:1234`` will result in a new :class:`gevent.Greenlet` spawned running the *handle* function. To stop a server use :meth:`BaseServer.stop` method. In case of a :class:`gevent.pywsgi.WSGIServer`, *handle* must be a WSGI application callable. It is possible to limit the maximum number of concurrent connections, by passing a :class:`gevent.pool.Pool` instance. In addition, passing a pool allows the :meth:`BaseServer.stop` method to kill requests that are in progress:: pool = Pool(10000) # do not accept more than 10000 connections server = StreamServer(('127.0.0.1', 1234), handle, spawn=pool) server.serve_forever() .. tip:: If you don't want to limit concurrency, but you *do* want to be able to kill outstanding requests, use a pool created with a size of ``None``. The :meth:`BaseServer.serve_forever` method calls :meth:`BaseServer.start` and then waits until interrupted or until the server is stopped. The :mod:`gevent.pywsgi` module contains an implementation of a :pep:`3333` :class:`WSGI server `. In addition, gunicorn_ is a stand-alone server that supports gevent. .. important:: The provided server implementations are intended primarily for development and testing, or internal usage, and otherwise only generally "safe" scenarios. They have not been security audited. Expose them to the public Internet at your own risk. API Reference ============= - :doc:`api/gevent.baseserver` - :doc:`api/gevent.server` - :doc:`api/gevent.pywsgi` Examples ======== More :doc:`examples ` are available: - :doc:`examples/echoserver` - demonstrates :class:`gevent.server.StreamServer` - :doc:`examples/wsgiserver` - demonstrates :class:`gevent.pywsgi.WSGIServer ` - :doc:`examples/wsgiserver_ssl` - demonstrates :class:`WSGIServer with ssl ` .. _gunicorn: http://gunicorn.org gevent-24.11.1/docs/sfc.rst000066400000000000000000000040301471441230600154040ustar00rootroot00000000000000gevent is technically a member project of the Software Freedom Conservancy, and used to accept donations to them. However, while all of the donations have been much appreciated, none of the gevent developers have ever been able to access these donations. For that reason, gevent is no longer accepting donations. .. 
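As a supplement to the server discussion in ``docs/servers.rst`` above: the WSGI case mentioned there (where *handle* must be a WSGI application callable) can be sketched as follows. This example is not part of the original docs; the application body, address, port and pool size are arbitrary choices for illustration::

    from gevent.pool import Pool
    from gevent.pywsgi import WSGIServer

    def application(environ, start_response):
        # For WSGIServer, the second constructor argument is a WSGI
        # application callable rather than a raw socket handler.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello world\n']

    # A pool bounds concurrency and lets stop() kill in-flight requests,
    # exactly as described for StreamServer.
    pool = Pool(1000)
    WSGIServer(('127.0.0.1', 8088), application, spawn=pool).serve_forever()
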
gevent-24.11.1/docs/success.rst000066400000000000000000000132431471441230600163070ustar00rootroot00000000000000================= Success stories ================= If you have a success story for gevent, please post it to the `google group`_. .. _google group: http://groups.google.com/group/gevent/ Reddit ====== According to `this 2021 post on r/RedditEng`_: [Reddit needs] a web stack that can handle many concurrent requests. Reddit’s stack for most microservices is Python 3, Baseplate_, and gevent. Django/Flask also work well when run with gevent. gevent is a Python library that transparently enables your microservice to handle high concurrency and I/O without requiring changes to your code. It is the secret sauce that allows you to run tens of thousands of pseudo-threads called greenlets (one per concurrent request) on a small number of instances. It allows for threads handling concurrent duplicate requests to be enqueued while waiting to acquire the lock, and then for those queues to be drained as threads acquire the lock and execute serially, all without exhausting the thread pool. .. _this 2021 post on r/RedditEng: https://www.reddit.com/r/RedditEng/comments/obqtfm/solving_the_three_stooges_problem/ .. _Baseplate: https://github.com/reddit/baseplate.py Omegle_ ======= I've been using gevent to power Omegle, my high-volume chat site, since 2010. Omegle is used by nearly half a million people every day, and it has as many as 20,000 users chatting at any given time. It needs to perform well and be extremely reliable, and gevent makes that easy to do: gevent gives you power to do more creative things, and it's fast enough that you can more easily write apps that stand up to a lot of load. gevent is well-engineered, and its development has been maintaining an active, dedicated pace for as long as I've been following it. Any time I've had an issue with gevent that I couldn't solve on my own, the friendly community has been extremely helpful and knowledgeable. I really think gevent is the best library of its type for Python right now, and I would recommend it to anyone who needs a good networking library. -- Leif K-Brooks, Founder, Omegle.com_ .. _Omegle: http://omegle.com .. _Omegle.com: http://omegle.com Pediapress_ =========== Pediapress_ powers Wikipedia_'s PDF rendering cluster. I've started using gevent in 2009 after our NFS based job queue showed serious performance problems on Wikipedia's PDF rendering cluster. I've replaced that with a gevent based job queue server in a short time. gevent is managing the generation of around 100000 PDF files daily and is serving them to wikipedia users. Recently I've refactored the component that fetches articles and images from wikipedia to use gevent instead of twisted. The code is much cleaner and much more manageable than before. -- Ralf Schmitt, Developer, Pediapress_ .. _Pediapress: http://pediapress.com/ .. _Wikipedia: http://www.wikipedia.org/ `ESN Social Software`_ ====================== Wanting to avoid the ravages of asynchronous programming we chose to base our real-time web development framework Planet on gevent and Python. We’ve found gevent to be stable, efficient, highly functional and still simplistic enough for our needs and our customer’s requirements. -- Jonas Tärnström, Product Manager, `ESN Social Software`_ .. 
_ESN Social Software: http://esn.me `Blue Shell Games`_ =================== At Blue Shell Games we use gevent to power the application servers that connect more than a million daily players of our social casino games. Recognizing that our game code is largely I/O bound — whether waiting on a database, social networking data providers, or the clients themselves — we chose gevent as our asynchronous networking framework. Not only does gevent offer the best performance of any of the Python async networking packages, its threading model makes multithreaded application servers far easier to write than traditional kernel threading-based approaches. As our applications add more real-time multiplayer features, gevent is ready to handle these kinds of problems with ease. -- David Young, CTO, Co-Founder, `Blue Shell Games`_ .. _Blue Shell Games: http://www.blueshellgames.com/ TellApart_ ========== At TellApart, we have been using gevent since 2010 as the underpinnings of our frontend servers. It enables us to serve millions of requests every hour through only a handful of servers, while achieving the strict latency constraints of Real-Time Bidding ad exchanges. Since then, we've expanded our use of gevent throughout our stack. Combined with tools such as closures and generators, gevent makes complicated queuing, distribution, and streaming workloads dramatically easier to implement. Our open-source event aggregation service, Taba, couldn't have been built without it. See also: `Gevent at TellApart`_ -- Kevin Ballard, Software Engineer, TellApart_ .. _TellApart: http://tellapart.com .. _Gevent at TellApart: http://tellapart.com/gevent-at-tellapart Disqus ====== See: `Making Disqus Realtime`_ .. _`Making Disqus Realtime`: https://ep2012.europython.eu/conference/talks/making-disqus-realtime Pinterest ========= Pinterest is one of the biggest players of gevents. We started using gevent in 2011 to query our mysql shards concurrently. It served us well so far. We run all our WSGI containers using gevent. We are in the process of making all our service calls gevented. We use a gevented based thrift server which proved to be way more efficient than the normal python version. I think there is a cost upfront to make your code greenlet safe but we saw pretty huge win later. If you are looking to scale out on python gevent is your best friend. -- Yash Nelapati, Engineer, Pinterest_ .. _Pinterest: http://pinterest.com/ TBA: Spotify, Twilio gevent-24.11.1/docs/whatsnew_1_0.rst000066400000000000000000000136731471441230600171450ustar00rootroot00000000000000========================== What's new in gevent 1.0 ========================== .. toctree:: :maxdepth: 2 changelog_1_0 The detailed information is available in :doc:`changelog_1_0`. Below is the summary of all changes since 0.13.8. Gevent 1.0 supports Python 2.5 - 2.7. The version of greenlet required is 0.3.2. The source distribution now includes the dependencies (libev and c-ares) and has no dependencies other than greenlet. New core ======== Now the event loop is using libev instead of libevent (see http://blog.gevent.org/2011/04/28/libev-and-libevent/ for motivation). The new :mod:`gevent.core` has been rewritten to wrap libev's API. (On Windows, the :mod:`gevent.core` accepts Windows handles rather than stdio file descriptors.). The signal handlers set with the standard signal module are no longer blocked by the event loop. The event loops are now pluggable. 
The GEVENT_LOOP environment variable can specify the alternative class to use (the default is ``gevent.core.loop``). The error handling is now done by Hub.handle_error(). The system errors that usually kill the process (SystemError, SystemExit, KeyboardInterrupt) are now re-raised in the main greenlet. Thus ``sys.exit()`` when run inside a greenlet is no longer trapped and kills the process as expected. New dns resolver ================ Two new DNS resolvers: threadpool-based one (enabled by default) and c-ares based one. That threadpool-based resolver was added mostly for Windows and Mac OS X platforms where c-ares might behave differently w.r.t system configuration. On Linux, however, the c-ares based resolver is probably a better choice. To enable c-ares resolver set GEVENT_RESOLVER=ares environment variable. This fixes some major issues with DNS on 0.13.x, namely: - Issue #2: DNS resolver no longer breaks after ``fork()``. You still need to call :func:`gevent.fork` (``os.fork`` is monkey patched with it if ``monkey.patch_all()`` was called). - DNS resolver no longer ignores ``/etc/resolv.conf`` and ``/etc/hosts``. The following functions were added to socket module: - gethostbyname_ex - getnameinfo - gethostbyaddr - getfqdn It is possible to implement your own DNS resolver and make gevent use it. The GEVENT_RESOLVER variable can point to alternative implementation using the format: ``package.module.class``. The default is ``gevent.resolver_thread.Resolver``. The alternative "ares" resolver is an alias for ``gevent.resolver_ares.Resolver``. New API ======= - :func:`gevent.wait` and :func:`gevent.iwait` - UDP server: gevent.server.DatagramServer - Subprocess support New :mod:`gevent.subprocess` implements the interface of the standard subprocess module in a cooperative way. It is possible to monkey patch the standard subprocess module with ``patch_all(subprocess=True)`` (not done by default). - Thread pool **Warning:** this feature is experimental and should be used with care. The :mod:`gevent.threadpool` module provides the usual pool methods (apply, map, imap, etc) but runs passed functions in a real OS thread. There's a default threadpool, available as ``gevent.get_hub().threadpool``. Breaking changes ================ Removed features ---------------- - gevent.dns module (wrapper around libevent-dns) - gevent.http module (wrapper around libevent-http) - ``util.lazy_property`` property. - deprecated gevent.sslold module - deprecated gevent.rawgreenlet module - deprecated name ``GreenletSet`` which used to be alias for :class:`Group`. - link to greenlet feature of Greenlet - undocumented bind_and_listen and tcp_listener Renamed gevent.coros to gevent.lock. The gevent.coros is still available but deprecated. API changes ----------- In all servers, method "kill" was renamed to "close". The old name is available as deprecated alias. - ``Queue(0)`` is now equivalent to an unbound queue and raises :exc:`DeprecationError`. Use :class:`gevent.queue.Channel` if you need a channel. The :class:`gevent.Greenlet` objects: - Added ``__nonzero__`` implementation that returns `True` after greenlet was started until it's dead. This overrides greenlet's __nonzero__ which returned `False` after `start()` until it was first switched to. Bugfixes ======== - Issue #302: "python -m gevent.monkey" now sets __file__ properly. - Issue #143: greenlet links are now executed in the order they were added - Fixed monkey.patch_thread() to patch threading._DummyThread to avoid leak in threading._active. 
- gevent.thread: allocate_lock is now an alias for LockType/Semaphore. That way it does not fail when being used as class member. - It is now possible to add raw greenlets to the pool. - The :meth:`map` and :meth:`imap` methods now start yielding the results as soon as possible. - The :meth:`imap_unordered` no longer swallows an exception raised while iterating its argument. - `gevent.sleep()` no longer raises an exception, instead it does `sleep(0)`. - The :class:`WSGIServer` now sets `max_accept` to 1 if `wsgi.multiprocessing` is set to `True`. - Added :func:`monkey.patch_module` function that monkey patches module using `__implements__` list provided by gevent module. All of gevent modules that replace stdlib module now have `__implements__` attribute. pywsgi: - Fix logging when bound on unix socket (#295). - readout request data to prevent ECONNRESET - Fix #79: Properly handle HTTP versions. - Fix #86: bytearray is now supported. - Fix #92: raise IOError on truncated POST requests. - Fix #93: do not sent multiple "100 continue" responses - Fix #116: Multiline HTTP headers are now handled properly. - Fix #216: propagate errors raised by Pool.map/imap - Fix #303: 'requestline' AttributeError in pywsgi. - Raise an AssertionError if non-zero content-length is passed to start_response(204/304) or if non-empty body is attempted to be written for 304/204 response - Made sure format_request() does not fail if 'status' attribute is not set yet - Added REMOTE_PORT variable to the environment. - Removed unused deprecated 'wfile' property from WSGIHandler gevent-24.11.1/docs/whatsnew_1_1.rst000066400000000000000000000374071471441230600171470ustar00rootroot00000000000000========================== What's new in gevent 1.1 ========================== .. toctree:: :maxdepth: 2 changelog_1_1 Detailed information an what has changed is available in :doc:`changelog_1_1`. This document summarizes the most important changes since :doc:`gevent 1.0.2 `. Broader Platform Support ======================== gevent 1.1 supports Python 2.6, 2.7, 3.3, and 3.4 on the CPython (`python.org`_) interpreter. It also supports `PyPy`_ 2.6.1 and above (PyPy 4.0.1 or higher is recommended); PyPy3 is not supported. Support for Python 2.5 was removed when support for Python 3 was added. Any further releases in the 1.0.x line will maintain support for Python 2.5. .. note:: Version 1.1.x will be the last series of gevent releases to support Python 2.6. The next major release will only support Python 2.7 and above. Python 3.5 has preliminary support, which means that gevent is expected to generally run and function with the same level of support as on Python 3.4, but new features and APIs introduced in 3.5 may not be properly supported (e.g., `DevpollSelector`_) and due to the recent arrival of Python 3.5, the level of testing it has received is lower. For ease of installation on Windows and OS X, gevent 1.1 is distributed as pre-compiled binary wheels, in addition to source code. .. _python.org: http://www.python.org/downloads/ .. _PyPy: http://pypy.org .. _DevpollSelector: https://docs.python.org/3.5/whatsnew/3.5.html#selectors PyPy Notes ---------- PyPy has been tested on OS X and 64-bit Linux from version 2.6.1 through 4.0.0 and 4.0.1, and on 32-bit ARM on Raspbian with version 4.0.1. .. note:: PyPy is not supported on Windows. (gevent's CFFI backend is not available on Windows.) - Version 4.0.1 or above is **highly recommended** due to its extensive bug fixes relative to earlier versions. 
- Version 2.6.1 or above is **required** for proper signal handling. Prior to 2.6.1 and its inclusion of `cffi 1.3.0`_, signals could be delivered incorrectly or fail to be delivered during a blocking operation. (PyPy 2.5.0 includes CFFI 0.8.6 while 2.6.0 has 1.1.0; the necessary feature was added in `1.2.0`_ which is not itself directly present in any PyPy release.) CFFI 1.3.0 also allows using the CFFI backend on CPython. - Overall performance seems to be quite acceptable with newer versions of PyPy. The benchmarks distributed with gevent typically perform as well or better on PyPy than on CPython at least on some platforms. Things that are known or expected to be (relatively) slower under PyPy include the :mod:`c-ares resolver ` and :class:`~gevent.lock.Semaphore`. Whether or not these matter will depend on the workload of each application (:pr:`708` mentions some specific benchmarks for ``Semaphore``). .. caution:: The ``c-ares`` resolver is considered highly experimental under PyPy and is not recommended for production use. Released versions of PyPy through at least 4.0.1 have `a bug`_ that can cause a memory leak when subclassing objects that are implemented in Cython, as is the c-ares resolver. In addition, thanks to reports like :issue:`704`, we know that the PyPy garbage collector can interact badly with Cython-compiled code, leading to crashes. While the intended use of the ares resolver has been loosely audited for these issues, no guarantees are made. .. note:: PyPy 4.0.x on Linux is known to *rarely* (once per 24 hours) encounter crashes when running heavily loaded, heavily networked gevent programs (even without ``c-ares``). The exact cause is unknown and is being tracked in :issue:`677`. .. _cffi 1.3.0: https://bitbucket.org/cffi/cffi/src/ad3140a30a7b0ca912185ef500546a9fb5525ece/doc/source/whatsnew.rst?at=default .. _1.2.0: https://cffi.readthedocs.io/en/latest/whatsnew.html#v1-2-0 .. _a bug: https://bitbucket.org/pypy/pypy/issues/2149/memory-leak-for-python-subclass-of-cpyext .. _operating_systems_label: Operating Systems ----------------- gevent is regularly built and tested on Mac OS X, Ubuntu Linux, and Windows, in both 32- and 64-bit configurations. All three platforms are primarily tested on the x86/amd64 architecture, while Linux is also occasionally tested on Raspian on ARM. In general, gevent should work on any platform that both Python and `libev support`_. However, some less commonly used platforms may require tweaks to the gevent source code or user environment to compile (e.g., `SmartOS`_). Also, due to differences in things such as timing, some platforms may not be able to fully pass gevent's extensive test suite (e.g., `OpenBSD`_). .. _libev support: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#PORTABILITY_NOTES .. _SmartOS: https://github.com/gevent/gevent/pull/711 .. _OpenBSD: https://github.com/gevent/gevent/issues/737 Bug Fixes ========= Since 1.0.2, gevent 1.1 contains over 600 commits from nearly two dozen contributors. Over 200 issues were closed, and over 50 pull requests were merged. Improved subprocess support =========================== In gevent 1.0, support and monkey patching for the :mod:`subprocess` module was added. Monkey patching this module was off by default. In 1.1, monkey patching ``subprocess`` is on by default due to improvements in handling child processes and requirements by downstream libraries, notably `gunicorn`_. 
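To make the effect concrete, here is a minimal illustrative sketch (not one of the official examples) showing that, once ``subprocess`` has been monkey-patched, waiting on a child process yields to the event loop instead of blocking every greenlet. It assumes a POSIX system where the ``sleep`` command is available::

    import gevent
    from gevent import monkey
    monkey.patch_all()  # patches subprocess by default as of 1.1

    import subprocess

    def ticker():
        # Keeps printing while the child process below is being waited on,
        # demonstrating that the wait is cooperative.
        for _ in range(3):
            print('event loop still running')
            gevent.sleep(0.1)

    g = gevent.spawn(ticker)
    # The patched Popen (used internally by check_call) cooperates with the
    # hub, so this call does not freeze the ticker greenlet while the child
    # process runs.
    subprocess.check_call(['sleep', '0.3'])
    g.join()
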
- :func:`gevent.os.fork`, which is monkey patched by default (and should be used to fork a gevent-aware process that expects to use gevent in the child process) has been improved and cooperates with :func:`gevent.os.waitpid` (again monkey patched by default) and :func:`gevent.signal.signal` (which is monkey patched only for the :data:`signal.SIGCHLD` case). The latter two patches are new in 1.1. - In gevent 1.0, use of libev child watchers (which are used internally by ``gevent.subprocess``) had race conditions with user-provided ``SIGCHLD`` handlers, causing many types of unpredictable breakage. The two new APIs described above are intended to rectify this. - Fork-watchers will be called, even in multi-threaded programs (except on Windows). - The default threadpool and threaded resolver work in child processes. - File descriptors are no longer leaked if :class:`gevent.subprocess.Popen` fails to start the child. In addition, simple use of :class:`multiprocessing.Process` is now possible in a monkey patched system, at least on POSIX platforms. .. caution:: Use of :class:`multiprocessing.Queue` when :mod:`thread` has been monkey-patched will lead to a hang due to ``Queue``'s internal use of a blocking pipe and threads. For the same reason, :class:`concurrent.futures.ProcessPoolExecutor`, which internally uses a ``Queue``, will hang. .. caution:: It is not possible to use :mod:`gevent.subprocess` from native threads. See :mod:`gevent.subprocess` for details. .. note:: If the ``SIGCHLD`` signal is to be handled, it is important to monkey patch (or directly use) both :mod:`os` and :mod:`signal`; this is the default for :func:`~gevent.monkey.patch_all`. Failure to do so can result in the ``SIGCHLD`` signal being lost. .. tip:: All of the above entail forking a child process. Forking a child process that uses gevent, greenlets, and libev can have some unexpected consequences if the child doesn't immediately ``exec`` a new binary. Be sure you understand these consequences before using this functionality, especially late in a program's lifecycle. For a more robust solution to certain uses of child process, consider `gipc`_. .. _gunicorn: http://gunicorn.org .. _gipc: https://gehrcke.de/gipc/ Monkey patching =============== Monkey patching is more robust, especially if the standard library :mod:`threading` or :mod:`logging` modules had been imported before applying the patch. In addition, there are now supported ways to determine if something has been monkey patched. API Additions ============= Numerous APIs offer slightly expanded functionality in this version. Look for "changed in version 1.1" or "added in version 1.1" throughout the documentation for specifics. Highlights include: - A gevent-friendly version of :obj:`select.poll` (on platforms that implement it). - :class:`~gevent.fileobject.FileObjectPosix` uses the :mod:`io` package on both Python 2 and Python 3, increasing its functionality, correctness, and performance. (Previously, the Python 2 implementation used the undocumented class :class:`socket._fileobject`.) - Locks raise the same error as standard library locks if they are over-released. Likewise, SSL sockets raise the same errors as their bundled counterparts if they are read or written after being closed. - :meth:`ThreadPool.apply ` can now be used recursively. 
- The various pool objects (:class:`~gevent.pool.Group`, :class:`~gevent.pool.Pool`, :class:`~gevent.threadpool.ThreadPool`) support the same improved APIs: :meth:`imap ` and :meth:`imap_unordered ` accept multiple iterables, :meth:`apply ` raises any exception raised by the target callable, etc. - Killing a greenlet (with :func:`gevent.kill` or :meth:`Greenlet.kill `) before it is actually started and switched to now prevents the greenlet from ever running, instead of raising an exception when it is later switched to. Attempting to spawn a greenlet with an invalid target now immediately produces a useful :exc:`TypeError`, instead of spawning a greenlet that would (usually) immediately die the first time it was switched to. - Almost anywhere that gevent raises an exception from one greenlet to another (e.g., :meth:`Greenlet.get `), the original traceback is preserved and raised. - Various logging/debugging outputs have been cleaned up. - The WSGI server found in :mod:`gevent.pywsgi` is more robust against errors in either the client or the WSGI application, fixing several hangs or HTTP protocol violations. It also supports new functionality such as configurable error handling and logging. - Documentation has been expanded and clarified. .. _library_updates_label: Library Updates =============== The two C libraries that are bundled with gevent have been updated. libev has been updated from 4.19 to 4.20 (`libev release notes`_) and c-ares has been updated from 1.9.1 to 1.10.0 (`c-ares release notes`_). .. caution:: The c-ares ``configure`` script is now *much* stricter about the contents of compilation environment variables such as ``$CFLAGS`` and ``$LDFLAGS``. For example, ``$CFLAGS`` is no longer allowed to contain ``-I`` directives; instead, these must be placed in ``$CPPFLAGS``. That's one common cause of an error like the following when compiling from scratch on a POSIX platform:: Running '(cd "/tmp/easy_install-NT921u/gevent-1.1b2/c-ares" && if [ -e ares_build.h ]; then cp ares_build.h ares_build.h.orig; fi && /bin/sh ./configure CONFIG_COMMANDS= CONFIG_FILES= && cp ares_config.h ares_build.h "$OLDPWD" && mv ares_build.h.orig ares_build.h) > configure-output.txt' in /tmp/easy_install-NT921u/gevent-1.1b2/build/temp.linux-x86_64-2.7/c-ares configure: error: Can not continue. Fix errors mentioned immediately above this line. .. _libev release notes: https://github.com/gevent/gevent/blob/master/libev/Changes#L17 .. _c-ares release notes: https://raw.githubusercontent.com/bagder/c-ares/cares-1_10_0/RELEASE-NOTES Compatibility ============= This release is intended to be compatible with 1.0.x with minimal or no changes to client source code. However, there are a few changes to be aware of that might affect some applications. Most of these changes are due to the increased platform support of Python 3 and PyPy and reduce the cases of undocumented or non-standard behaviour. - :class:`gevent.baseserver.BaseServer` deterministically `closes its sockets `_. As soon as a request completes (the request handler returns), the ``BaseServer`` and its subclasses including :class:`gevent.server.StreamServer` and :class:`gevent.pywsgi.WSGIServer` close the client socket. In gevent 1.0, the client socket was left to the mercies of the garbage collector (this was undocumented). In the typical case, the socket would still be closed as soon as the request handler returned due to CPython's reference-counting garbage collector. 
But this meant that a reference cycle could leave a socket dangling open for an indeterminate amount of time, and a reference leak would result in it never being closed. It also meant that Python 3 would produce ResourceWarnings, and PyPy (which, unlike CPython, `does not use a reference-counted GC`_) would only close (and flush!) the socket at an arbitrary time in the future. If your application relied on the socket not being closed when the request handler returned (e.g., you spawned a greenlet that continued to use the socket) you will need to keep the request handler from returning (e.g., ``join`` the greenlet). If for some reason that isn't possible, you may subclass the server to prevent it from closing the socket, at which point the responsibility for closing and flushing the socket is now yours; *but* the former approach is strongly preferred, and subclassing the server for this reason may not be supported in the future. .. _does not use a reference-counted GC: http://doc.pypy.org/en/latest/cpython_differences.html#differences-related-to-garbage-collection-strategies - :class:`gevent.pywsgi.WSGIServer` ensures that headers (names and values) and the status line set by the application can be encoded in the ISO-8859-1 (Latin-1) charset and are of the *native string type*. Under gevent 1.0, non-``bytes`` headers (that is, ``unicode``, since gevent 1.0 only ran on Python 2, although objects like ``int`` were also allowed) were encoded according to the current default Python encoding. In some cases, this could allow non-Latin-1 characters to be sent in the headers, but this violated the HTTP specification, and their interpretation by the recipient is unknown. In other cases, gevent could send malformed partial HTTP responses. Now, a :exc:`UnicodeError` will be raised proactively. Most applications that adhered to the WSGI PEP, :pep:`3333`, will not need to make any changes. See :issue:`614` for more discussion. - Under Python 2, the previously undocumented ``timeout`` parameter to :meth:`Popen.wait ` (a gevent extension ) now throws an exception, just like the documented parameter to the same stdlib method in Python 3. - Under Python 3, several standard library methods added ``timeout`` parameters. These often default to -1 to mean "no timeout", whereas gevent uses a default of ``None`` to mean the same thing, potentially leading to great confusion and bugs in portable code. In gevent, using a negative value has always been ill-defined and hard to reason about. Because of those two things, as of this release, negative ``timeout`` values should be considered deprecated (unless otherwise documented). The current ill-defined behaviour is maintained, but future releases may choose to treat it the same as ``None`` or raise an error. No runtime warnings are issued for this change for performance reasons. - The previously undocumented class ``gevent.fileobject.SocketAdapter`` has been removed, as have the internal ``gevent._util`` module and some internal implementation modules found in early pre-releases of 1.1. gevent-24.11.1/docs/whatsnew_1_2.rst000066400000000000000000000102201471441230600171300ustar00rootroot00000000000000========================== What's new in gevent 1.2 ========================== .. toctree:: :maxdepth: 2 changelog_1_2 Detailed information on what has changed is available in :doc:`changelog_1_2`. This document summarizes the most important changes since :doc:`gevent 1.1 `. 
In general, gevent 1.2 is a smaller update than gevent 1.1, focusing on platform support, standard library compatibility, security, bug fixes and consistency. Platform Support ======================== gevent 1.2 supports Python 2.7, 3.4, 3.5 and 3.6 on the CPython (`python.org`_) interpreter. It also supports `PyPy2`_ 4.0.1 and above (PyPy2 5.4 or higher is recommended) and PyPy3 5.5.0. .. caution:: Support for Python 2.6 was removed. Support for Python 3.3 is only tested on PyPy3. .. note:: PyPy is not supported on Windows. (gevent's CFFI backend is not available on Windows.) Python 3.6 was released recently and is supported at the same level as 3.5. For ease of installation on Windows and OS X, gevent 1.2 is distributed as pre-compiled binary wheels, in addition to source code. .. _python.org: http://www.python.org/downloads/ .. _PyPy2: http://pypy.org Bug Fixes ========= Since 1.1.2, gevent 1.2 contains over 240 commits from nine different dozen contributors. About two dozen pull requests were merged. Improved subprocess support =========================== In gevent 1.1, subprocess monkey-patching was on by default for the first time. Over time this led to discovery of a few issues and corner cases that have been fixed in 1.2. - Setting SIGCHLD to SIG_IGN or SIG_DFL after :mod:`gevent.subprocess` had been used previously could not be reversed, causing ``Popen.wait`` and other calls to hang. Now, if SIGCHLD has been ignored, the next time :mod:`gevent.subprocess` is used this will be detected and corrected automatically. (This potentially leads to issues with :func:`os.popen` on Python 2, but the signal can always be reset again. Mixing the low-level process handling calls, low-level signal management and high-level use of :mod:`gevent.subprocess` is tricky.) Reported in :issue:`857` by Chris Utz. - ``Popen.kill`` and ``send_signal`` no longer attempt to send signals to processes that are known to be exited. - The :func:`gevent.os.waitpid` function is cooperative in more circumstances. Reported in :issue:`878` by Heungsub Lee. API Additions ============= Numerous APIs offer slightly expanded functionality in this version. Look for "changed in version 1.2" or "added in version 1.2" throughout the documentation for specifics. Of particular note, several backwards compatible updates to the subprocess module have been backported from Python 3 to Python 2, making :mod:`gevent.subprocess` smaller, easier to maintain and in some cases safer, while letting gevent clients use the updated APIs even on older versions of Python. If ``concurrent.futures`` is available (Python 3, or if the Python 2 backport has been installed), then the class :class:`gevent.threadpool.ThreadPoolExecutor` is defined to create an executor that always uses native threads, even when the system is monkey-patched. Library Updates =============== The two C libraries that are bundled with gevent have been updated. libev has been updated from 4.20 to 4.23 (`libev release notes`_) and c-ares has been updated from 1.10.0 to 1.12.0 (`c-ares release notes`_). .. _libev release notes: https://github.com/gevent/gevent/blob/master/deps/libev/Changes .. _c-ares release notes: https://c-ares.haxx.se/changelog.html Compatibility ============= This release is intended to be compatible with 1.1.x with no changes to client source code, so long as only non-deprecated and supported interfaces were used (as always, internal, non-documented implementation details may have changed). 
In particular the deprecated ``gevent.coros`` module has been removed and ``gevent.corecext`` and ``gevent.corecffi`` have also been removed. For security, ``gevent.pywsgi`` no longer accepts incoming headers containing an underscore, and header values passed to ``start_response`` cannot contain a carriage return or newline. See :issue:`819` and :issue:`775`, respectively. gevent-24.11.1/docs/whatsnew_1_3.rst000066400000000000000000000162121471441230600171400ustar00rootroot00000000000000========================== What's new in gevent 1.3 ========================== .. currentmodule:: gevent .. toctree:: :maxdepth: 2 changelog_1_3 Detailed information on what has changed is available in the :doc:`changelog_1_3`. This document summarizes the most important changes since :doc:`gevent 1.2 `. gevent 1.3 is an important update for performance, debugging and monitoring, and platform support. It introduces an (optional) `libuv `_ loop implementation and supports PyPy on Windows. See :doc:`loop_impls` for more. Since gevent 1.2.2 there have been about 450 commits from a half-dozen contributors. Almost 100 pull requests and more than 100 issues have been closed. Platform Support ================ gevent 1.3 supports Python 2.7, 3.4, 3.5, 3.6 and 3.7 on the CPython (`python.org`_) interpreter. It also supports `PyPy2`_ 5.8.0 and above (PyPy2 5.10 or higher is recommended) and PyPy3 5.10.0. .. caution:: Python 2.7.8 and below (Python 2.7 without a modern ``ssl`` module), is no longer tested or supported. The support code remains in this release and gevent can be installed on such implementations, but such usage is not supported. Support for Python 2.7.8 will be removed in the next major version of gevent. .. note:: PyPy is now supported on Windows with the libuv loop implementation. Python 3.7 is in the process of release right now and gevent is tested with 3.7b4, the last scheduled beta for Python 3.7. For ease of installation on Windows, OS X and Linux, gevent 1.3 is distributed as pre-compiled binary wheels, in addition to source code. .. note:: On Linux, you'll need to install gevent from source if you wish to use the libuv loop implementation. This is because the `manylinux1 `_ specification for the distributed wheels does not support libuv. The CFFI library *must* be installed at build time. .. _python.org: http://www.python.org/downloads/ .. _PyPy2: http://pypy.org Greenlet Attributes =================== :class:`Greenlet` objects have gained some useful new attributes: - :attr:`Greenlet.spawning_greenlet` is the greenlet that created this greenlet. Since the ``parent`` of a greenlet is almost always gevent's :class:`hub `, this can be more useful to understand greenlet relationships. - :attr:`Greenlet.spawn_tree_locals` is a dictionary of values maintained through the spawn tree (i.e., all descendents of a particular greenlet based on ``spawning_greenlet``). This is convenient to share values between a set of greenlets, for example, all those involved in processing a request. - :attr:`Greenlet.spawning_stack` is a :obj:`frame ` -like object that captures where the greenlet was created and can be passed to :func:`traceback.print_stack`. - :attr:`Greenlet.minimal_ident` is a small integer unique across all greenlets. - :attr:`Greenlet.name` is a string printed in the greenlet's repr by default. "Raw" greenlets created with `spawn_raw` default to having the ``spawning_greenlet`` and ``spawn_tree_locals``. This extra data is printed by the new :func:`gevent.util.print_run_info` function. 
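A rough sketch of how these attributes fit together (the ``worker`` function
and the ``request_id`` key are invented for this illustration; they are not
part of gevent's API)::

    import gevent
    from gevent import util

    def worker():
        cur = gevent.getcurrent()
        # New in 1.3: a small unique integer and a printable name.
        print(cur.minimal_ident, cur.name)
        # A dict shared by every greenlet in the same spawn tree.
        cur.spawn_tree_locals.setdefault('request_id', 42)

    g = gevent.spawn(worker)
    g.join()

    # Prints the hub, threads and greenlets, including the data captured above.
    util.print_run_info()
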
Performance =========== gevent 1.3 uses Cython on CPython to compile several performance critical modules. As a result, overall performance is improved. Specifically, queues are up to 5 times faster, pools are 10-20% faster, and the :class:`gevent.local.local` is up to 40 times faster. See :pr:`1156`, :pr:`1155`, :pr:`1117` and :pr:`1154`. Better Behaved Callbacks ======================== In gevent 1.2.2, event loop callbacks (including things like ``sleep(0)``) would be run in sequence until we ran them all, or until we ran 10,000. Simply counting the number of callbacks could lead to no IO being serviced for an arbitrary, unbound, amount of time. To correct this, gevent 1.3 introduces `gevent.getswitchinterval` and will run callbacks for only (approximately) that amount of time before checking for IO. (This is similar to the way that Python 2 counted bytecode instructions between thread switches but Python 3 uses the more deterministic timer approach.) The hope is that this will result in "smoother" application behaviour and fewer pitfalls. See :issue:`1072` for more details. Monitoring and Debugging ======================== Many of the new greenlet attributes are useful for monitoring and debugging gevent applications. gevent also now has the (optional) ability to monitor for greenlets that call blocking functions and stall the event loop and to periodically check if the application has exceeded a configured memory limit. See :doc:`monitoring` for more information. New Pure-Python DNS Resolver ============================ The `dnspython `_ library is a new, pure-Python option for :doc:`/dns`. Benchmarks show it to be faster than the existing c-ares resolver and it is also more stable on PyPy. The c-ares resolver may be deprecated and removed in the future. API Additions ============= Numerous APIs offer slightly expanded functionality in this version. Look for "changed in version 1.3" or "added in version 1.3" throughout the documentation for specifics. A few changes of note: - The low-level watcher objects now have a :func:`~gevent._interfaces.IWatcher.close` method that *must* be called to promptly dispose of native (libev or libuv) resources. - `gevent.monkey.patch_all` defaults to patching ``Event``. - `gevent.subprocess.Popen` accepts the same keyword arguments in Python 2 as it does in Python 3. - `gevent.monkey.patch_all` and the various individual patch functions, emit events as patching is being done. This can be used to extend the patching process for new modules. ``patch_all`` also passes all unknown keyword arguments to these events. See :pr:`1169`. - The module :mod:`gevent.events` contains the events that parts of gevent can emit. It will use :mod:`zope.event` if that is installed. Library Updates =============== One of the C libraries that are bundled with gevent have been updated. c-ares has been updated from 1.13.0 to 1.14.0 (`c-ares release notes`_). .. _c-ares release notes: https://c-ares.haxx.se/changelog.html Compatibility ============= This release is intended to be compatible with 1.2.x with no changes to client source code, so long as only non-deprecated and supported interfaces were used (as always, internal, non-documented implementation details may have changed). Here are some specific compatibility notes. - The :doc:`resolvers ` have been refactored. As a result, ``gevent.ares``, ``gevent.resolver_ares`` and ``gevent.resolver_thread`` have been deprecated. 
Choosing a resolver by alias (e.g., 'thread') in the ``GEVENT_RESOLVER`` environment variable continues to work as before. - The internal module ``gevent._threading`` was significantly refactored. As the name indicates this is an internal module not intended as part of the public API, but such uses have been observed. - The module ``gevent.wsgi`` was removed. Use :mod:`gevent.pywsgi` instead. ``gevent.wsgi`` was nothing but an alias for :mod:`gevent.pywsgi` since gevent 1.0a1 (2011). .. LocalWords: Greenlet gevent-24.11.1/docs/whatsnew_1_4.rst000066400000000000000000000020061471441230600171350ustar00rootroot00000000000000========================== What's new in gevent 1.4 ========================== .. currentmodule:: gevent .. toctree:: :maxdepth: 2 changelog_1_4 Detailed information on what has changed is available in the :doc:`changelog_1_4`. This document summarizes the most important changes since :doc:`gevent 1.3 `. gevent 1.4 is a small maintenance release featuring bug fixes and a small number of API improvements. Platform Support ================ gevent 1.4 supports the platforms that gevent 1.3 supported, with the exception that for users of Python 3.4, Python 3.4.3 is the minimum supported version. Test Changes ============ gevent's own test suite is now packaged as part of the gevent install, and the ``greentest/testrunner.py`` script is now gone from a source distribution or checkout. Instead, tests can be run with ``python -m gevent.tests``. Many tests can be run given an installed version of gevent, although the test dependencies, including cffi, must be installed for all of them to run. gevent-24.11.1/docs/whatsnew_1_5.rst000066400000000000000000000062271471441230600171470ustar00rootroot00000000000000========================== What's new in gevent 1.5 ========================== .. currentmodule:: gevent .. toctree:: :maxdepth: 2 changelog_1_5 Detailed information on what has changed is available in the :doc:`changelog_1_5`. This document summarizes the most important changes since :doc:`gevent 1.4 `. gevent 1.5 is a maintenance and feature release including bug fixes and a number of API improvements. Versioning ========== Future releases of gevent will use a scheme similar to `CalVer `_. See :doc:`development/release_process` for information on future deprecations and feature removals. Platform Support ================ gevent 1.5 drops support for Python 3.4, and drops support for PyPy < 7. It also adds official support for Python 3.8. gevent is tested with CPython 2.7.17, 3.5.9, 3.6.10, 3.7.7, 3.8.2, PyPy 2 7.3.0 and PyPy3 7.3.0. .. caution:: Older releases, such as RHEL 5, are no longer supported. Packaging Changes ================= gevent now distributes `manylinux2010 `_ binary wheels for Linux, instead of the older ``manylinux1`` standard. This updated platform tag allows gevent to distribute libuv support by default. CentOS 6 is the baseline for this tag. gevent bundles a ``pyproject.toml`` now. This is useful for building from source. .. caution:: The requirements for building from source may have changed, especially in minimal container environments (e.g., Alpine Linux). See :doc:`development/installing_from_source` for more information. The legacy ``Makefile`` has been removed in favor of built-in setup.py commands. Certain environment variables used at build time have been deprecated and renamed. Generated ``.c`` and ``.h`` files are no longer included in the distribution. Neither are Cython ``.pxd`` files. 
This is because linking to internal C optimizations is not supported and likely to crash if used against a different version of gevent than exactly what it was compiled for. See :issue:`1568` for more details. Library Updates =============== The bundled version of libuv has been updated from 1.24 to 1.34, libev has been updated from 4.23 to 4.31, and c-ares has been updated from 1.14 to 1.15. Version 1.16 or newer of dnspython is required to use the dnspython resolver. Test Updates ============ gevent's test suite has adopted the standard library's notion of "test resources," allowing users to disable certain tests based on their resource usage. This is primarily intended to support downstream packagers. For example, to disable tests that require Internet access, one could disable the ``network`` resource using ``python -m gevent.tests -u-network`` or ``GEVENTTEST_USE_RESOURCES=-network python -m gevent.tests``. See :ref:`limiting-test-resource-usage` for more information. Other Changes ============= The file objects have been reworked to support more modes and behave more like the builtin :func:`open` or func:`io.open` functions and :mod:`io` classes. Previously they essentially only worked with binary streams. Certain default values have been changed as well. The deprecated magic proxy object ``gevent.signal`` has been removed. gevent-24.11.1/examples/000077500000000000000000000000001471441230600147705ustar00rootroot00000000000000gevent-24.11.1/examples/concurrent_download.py000077500000000000000000000014131471441230600214150ustar00rootroot00000000000000#!/usr/bin/python # Copyright (c) 2009 Denis Bilenko. See LICENSE for details. # gevent-test-requires-resource: network """Spawn multiple workers and wait for them to complete""" from __future__ import print_function import gevent from gevent import monkey # patches stdlib (including socket and ssl modules) to cooperate with other greenlets monkey.patch_all() import requests # Note that we're using HTTPS, so # this demonstrates that SSL works. urls = [ 'https://www.google.com/', 'https://www.apple.com/', 'https://www.python.org/' ] def print_head(url): print('Starting %s' % url) data = requests.get(url).text print('%s: %s bytes: %r' % (url, len(data), data[:50])) jobs = [gevent.spawn(print_head, _url) for _url in urls] gevent.wait(jobs) gevent-24.11.1/examples/dns_mass_resolve.py000077500000000000000000000020341471441230600207120ustar00rootroot00000000000000#!/usr/bin/python # gevent-test-requires-resource: network """Resolve hostnames concurrently, exit after 2 seconds. Under the hood, this might use an asynchronous resolver based on c-ares (the default) or thread-pool-based resolver. You can choose between resolvers using GEVENT_RESOLVER environment variable. 
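To try the pure-Python dnspython resolver (this assumes the dnspython
package is installed): GEVENT_RESOLVER=dnspython python dns_mass_resolve.py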
To enable threading resolver: GEVENT_RESOLVER=thread python dns_mass_resolve.py """ from __future__ import print_function import gevent from gevent import socket from gevent.pool import Pool N = 1000 # limit ourselves to max 10 simultaneous outstanding requests pool = Pool(10) finished = 0 def job(url): global finished try: try: ip = socket.gethostbyname(url) print('%s = %s' % (url, ip)) except socket.gaierror as ex: print('%s failed with %s' % (url, ex)) finally: finished += 1 with gevent.Timeout(2, False): for x in range(10, 10 + N): pool.spawn(job, '%s.com' % x) pool.join() print('finished within 2 seconds: %s/%s' % (finished, N)) gevent-24.11.1/examples/echoserver.py000077500000000000000000000025011471441230600175100ustar00rootroot00000000000000#!/usr/bin/env python """Simple server that listens on port 16000 and echos back every input to the client. Connect to it with: telnet 127.0.0.1 16000 Terminate the connection by terminating telnet (typically Ctrl-] and then 'quit'). """ from __future__ import print_function from gevent.server import StreamServer # this handler will be run for each incoming connection in a dedicated greenlet def echo(socket, address): print('New connection from %s:%s' % address) socket.sendall(b'Welcome to the echo server! Type quit to exit.\r\n') # using a makefile because we want to use readline() rfileobj = socket.makefile(mode='rb') while True: line = rfileobj.readline() if not line: print("client disconnected") break if line.strip().lower() == b'quit': print("client quit") break socket.sendall(line) print("echoed %r" % line) rfileobj.close() if __name__ == '__main__': # to make the server use SSL, pass certfile and keyfile arguments to the constructor server = StreamServer(('127.0.0.1', 16000), echo) # to start the server asynchronously, use its start() method; # we use blocking serve_forever() here because we have no other jobs print('Starting echo server on port 16000') server.serve_forever() gevent-24.11.1/examples/geventsendfile.py000066400000000000000000000015621471441230600203500ustar00rootroot00000000000000"""An example how to use sendfile[1] with gevent. [1] http://pypi.python.org/pypi/py-sendfile/ """ # gevent-test-requires-resource: sendfile # pylint:disable=import-error from errno import EAGAIN from sendfile import sendfile as original_sendfile from gevent.socket import wait_write def gevent_sendfile(out_fd, in_fd, offset, count): total_sent = 0 while total_sent < count: try: _offset, sent = original_sendfile(out_fd, in_fd, offset + total_sent, count - total_sent) #print('%s: sent %s [%d%%]' % (out_fd, sent, 100*total_sent/count)) total_sent += sent except OSError as ex: if ex.args[0] == EAGAIN: wait_write(out_fd) else: raise return offset + total_sent, total_sent def patch_sendfile(): import sendfile sendfile.sendfile = gevent_sendfile gevent-24.11.1/examples/portforwarder.py000066400000000000000000000066601471441230600202520ustar00rootroot00000000000000"""Port forwarder with graceful exit. Run the example as python portforwarder.py :8080 gevent.org:80 Then direct your browser to http://localhost:8080 or do "telnet localhost 8080". When the portforwarder receives TERM or INT signal (type Ctrl-C), it closes the listening socket and waits for all existing connections to finish. The existing connections will remain unaffected. The program will exit once the last connection has been closed. 
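For example, while a telnet session is open, send the forwarder a TERM signal
(kill -TERM <pid>): new connections are refused, but the existing telnet
session keeps working until you close it, and only then does the program exit.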
""" import socket import sys import signal import gevent from gevent.server import StreamServer from gevent.socket import create_connection, gethostbyname class PortForwarder(StreamServer): def __init__(self, listener, dest, **kwargs): StreamServer.__init__(self, listener, **kwargs) self.dest = dest def handle(self, source, address): # pylint:disable=method-hidden log('%s:%s accepted', *address[:2]) try: dest = create_connection(self.dest) except IOError as ex: log('%s:%s failed to connect to %s:%s: %s', address[0], address[1], self.dest[0], self.dest[1], ex) return forwarders = (gevent.spawn(forward, source, dest, self), gevent.spawn(forward, dest, source, self)) # if we return from this method, the stream will be closed out # from under us, so wait for our children gevent.joinall(forwarders) def close(self): if self.closed: sys.exit('Multiple exit signals received - aborting.') else: log('Closing listener socket') StreamServer.close(self) def forward(source, dest, server): try: source_address = '%s:%s' % source.getpeername()[:2] dest_address = '%s:%s' % dest.getpeername()[:2] except socket.error as e: # We could be racing signals that close the server # and hence a socket. log("Failed to get all peer names: %s", e) return try: while True: try: data = source.recv(1024) log('%s->%s: %r', source_address, dest_address, data) if not data: break dest.sendall(data) except KeyboardInterrupt: # On Windows, a Ctrl-C signal (sent by a program) usually winds # up here, not in the installed signal handler. if not server.closed: server.close() break except socket.error: if not server.closed: server.close() break finally: source.close() dest.close() server = None def parse_address(address): try: hostname, port = address.rsplit(':', 1) port = int(port) except ValueError: sys.exit('Expected HOST:PORT: %r' % address) return gethostbyname(hostname), port def main(): args = sys.argv[1:] if len(args) != 2: sys.exit('Usage: %s source-address destination-address' % __file__) source = args[0] dest = parse_address(args[1]) server = PortForwarder(source, dest) log('Starting port forwarder %s:%s -> %s:%s', *(server.address[:2] + dest)) gevent.signal_handler(signal.SIGTERM, server.close) gevent.signal_handler(signal.SIGINT, server.close) server.start() gevent.wait() def log(message, *args): message = message % args sys.stderr.write(message + '\n') if __name__ == '__main__': main() gevent-24.11.1/examples/processes.py000077500000000000000000000013121471441230600173500ustar00rootroot00000000000000#!/usr/bin/env python from __future__ import print_function import gevent from gevent import subprocess import sys if sys.platform.startswith('win'): UNAME = ['cmd.exe', '/C', 'ver'] LS = ['dir.exe'] else: UNAME = ['uname'] LS = ['ls'] # run 2 jobs in parallel p1 = subprocess.Popen(UNAME, stdout=subprocess.PIPE) p2 = subprocess.Popen(LS, stdout=subprocess.PIPE) gevent.wait([p1, p2], timeout=2) # print the results (if available) if p1.poll() is not None: print('uname: %r' % p1.stdout.read()) else: print('uname: job is still running') if p2.poll() is not None: print('ls: %r' % p2.stdout.read()) else: print('ls: job is still running') p1.stdout.close() p2.stdout.close() gevent-24.11.1/examples/psycopg2_pool.py000066400000000000000000000114401471441230600201410ustar00rootroot00000000000000from __future__ import print_function # gevent-test-requires-resource: psycopg2 # pylint:disable=import-error,broad-except,bare-except import sys import contextlib import gevent from gevent.queue import Queue from gevent.socket import 
wait_read, wait_write from psycopg2 import extensions, OperationalError, connect if sys.version_info[0] >= 3: integer_types = (int,) else: import __builtin__ integer_types = (int, __builtin__.long) def gevent_wait_callback(conn, timeout=None): """A wait callback useful to allow gevent to work with Psycopg.""" while 1: state = conn.poll() if state == extensions.POLL_OK: break elif state == extensions.POLL_READ: wait_read(conn.fileno(), timeout=timeout) elif state == extensions.POLL_WRITE: wait_write(conn.fileno(), timeout=timeout) else: raise OperationalError( "Bad result from poll: %r" % state) extensions.set_wait_callback(gevent_wait_callback) class AbstractDatabaseConnectionPool(object): def __init__(self, maxsize=100): if not isinstance(maxsize, integer_types): raise TypeError('Expected integer, got %r' % (maxsize, )) self.maxsize = maxsize self.pool = Queue() self.size = 0 def create_connection(self): raise NotImplementedError() def get(self): pool = self.pool if self.size >= self.maxsize or pool.qsize(): return pool.get() self.size += 1 try: new_item = self.create_connection() except: self.size -= 1 raise return new_item def put(self, item): self.pool.put(item) def closeall(self): while not self.pool.empty(): conn = self.pool.get_nowait() try: conn.close() except Exception: pass @contextlib.contextmanager def connection(self, isolation_level=None): conn = self.get() try: if isolation_level is not None: if conn.isolation_level == isolation_level: isolation_level = None else: conn.set_isolation_level(isolation_level) yield conn except: if conn.closed: conn = None self.closeall() else: conn = self._rollback(conn) raise else: if conn.closed: raise OperationalError("Cannot commit because connection was closed: %r" % (conn, )) conn.commit() finally: if conn is not None and not conn.closed: if isolation_level is not None: conn.set_isolation_level(isolation_level) self.put(conn) @contextlib.contextmanager def cursor(self, *args, **kwargs): isolation_level = kwargs.pop('isolation_level', None) with self.connection(isolation_level) as conn: yield conn.cursor(*args, **kwargs) def _rollback(self, conn): try: conn.rollback() except: gevent.get_hub().handle_error(conn, *sys.exc_info()) return return conn def execute(self, *args, **kwargs): with self.cursor(**kwargs) as cursor: cursor.execute(*args) return cursor.rowcount def fetchone(self, *args, **kwargs): with self.cursor(**kwargs) as cursor: cursor.execute(*args) return cursor.fetchone() def fetchall(self, *args, **kwargs): with self.cursor(**kwargs) as cursor: cursor.execute(*args) return cursor.fetchall() def fetchiter(self, *args, **kwargs): with self.cursor(**kwargs) as cursor: cursor.execute(*args) while True: items = cursor.fetchmany() if not items: break for item in items: yield item class PostgresConnectionPool(AbstractDatabaseConnectionPool): def __init__(self, *args, **kwargs): self.connect = kwargs.pop('connect', connect) maxsize = kwargs.pop('maxsize', None) self.args = args self.kwargs = kwargs AbstractDatabaseConnectionPool.__init__(self, maxsize) def create_connection(self): return self.connect(*self.args, **self.kwargs) def main(): import time pool = PostgresConnectionPool("dbname=postgres", maxsize=3) start = time.time() for _ in range(4): gevent.spawn(pool.execute, 'select pg_sleep(1);') gevent.wait() delay = time.time() - start print('Running "select pg_sleep(1);" 4 times with 3 connections. 
Should take about 2 seconds: %.2fs' % delay) if __name__ == '__main__': main() gevent-24.11.1/examples/server.crt000066400000000000000000000034211471441230600170100ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIFCzCCAvOgAwIBAgIUePnEKFfhxpt3oypt6nTicAGTFJowDQYJKoZIhvcNAQEL BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTIxMDcwODExMzQzNVoYDzIxMjEw NjE0MTEzNDM1WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggIiMA0GCSqGSIb3DQEB AQUAA4ICDwAwggIKAoICAQChqfmG6uOG95Jb7uRi6yxohJ8GOR3gi39yX6JB+Xdu kvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc1c0XUEjVoQvCNQHIaDTXiUXOGXfk QNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6HouLpyIC55SXdK7pTEbmU7J1aBjug n3O56cu6FzjU1j/0QVUVGloxApLvv57bmINaX9ygKsh/ug0lhV1RwYLJ9UX57m95 FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZfs72kcDUTRYKNZbRFRRETORdOVRHx lAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66Batun6HU+YAs6z8Qc8S1EMElJdoyV eLCqLA07btICzKq2I16TZAOWVng2P7NOtibAeCzDAxAxJ3Oby+BVikKcu8WmJLxG vRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX9H5l/WF+aNRqSccgs0Umddj33N+b /mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1iqD7iXvTxt7yZeU7LIMRgDqhVe6z oBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si3j3Wkmyca7hEN8qq8DkLWNf1PTcI wo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/eEXTjuUePQlHCvwW9iiY7jTjDfbIv pwIDAQABo1MwUTAdBgNVHQ4EFgQUTUfShFbaXGMwrWEAkm05sXFH/x4wHwYDVR0j BBgwFoAUTUfShFbaXGMwrWEAkm05sXFH/x4wDwYDVR0TAQH/BAUwAwEB/zANBgkq hkiG9w0BAQsFAAOCAgEAe65ORDx0NDxTo1q6EY221KS3vEezUNBdZNaeOQsQeUAY lEO5iZ+2QLIVlWC5UtvISK96FU2CX0ucgAGfHS2ZB7o8i95fbjG2qrWC+VUH4V/6 jse9jlfGlYGkPuU5onNIDGcZ7gay3n0prCDiguAmCzV419GnGDWgSSgyVNCp/0tx b7pR5cVr0kZ5bTZjiysEEprkG2ofAlXzj09VGtTfM8gQvCz9Puj7pGzw2iaIEQVk hSGjoRWlI5x6+o16JOTHXzv9cYRUfDX6tjw3nQJIeMipuUkR8pkHUFjG3EeJEtO3 X/GO0G8rwUPaZiskGPiMZj7XqoVclnYL7JtntwUHR/dU5A/EhDfhgEfTXTqT78Oe cKri+VJE+G/hYxbP0FNYaDtqIwJcX1tsy4HOpKVBncc+K/PvXElVsyQET/+uwH7p Wm5ymndnuLoiQrWIA4nJC6rVwR4GPijuN0NCKcVdE+8jlOCBs3VBJTWKuu0J80RP 71iZy03AoK1YY4+nHglmE9HetAgSsbGh2fWC7DUS/4JzLSzOBeb+nn74zfmIfMU+ qUArFXvVGAtjmZZ/63cWzXDMZsp1BZ+O5dx6Gi2QtjgGYhh6DhW7ocQYXDkAeN/O K1Yzwq/G4AEQA0k0/1I+F0Rdlo41+7tOp+LMCOoZXqUzhM0ZQ2sf3QclubxLX9U= -----END CERTIFICATE----- gevent-24.11.1/examples/server.key000066400000000000000000000063101471441230600170100ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQChqfmG6uOG95Jb 7uRi6yxohJ8GOR3gi39yX6JB+Xdukvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc 1c0XUEjVoQvCNQHIaDTXiUXOGXfkQNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6H ouLpyIC55SXdK7pTEbmU7J1aBjugn3O56cu6FzjU1j/0QVUVGloxApLvv57bmINa X9ygKsh/ug0lhV1RwYLJ9UX57m95FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZf s72kcDUTRYKNZbRFRRETORdOVRHxlAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66B atun6HU+YAs6z8Qc8S1EMElJdoyVeLCqLA07btICzKq2I16TZAOWVng2P7NOtibA eCzDAxAxJ3Oby+BVikKcu8WmJLxGvRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX 9H5l/WF+aNRqSccgs0Umddj33N+b/mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1 iqD7iXvTxt7yZeU7LIMRgDqhVe6zoBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si 3j3Wkmyca7hEN8qq8DkLWNf1PTcIwo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/e EXTjuUePQlHCvwW9iiY7jTjDfbIvpwIDAQABAoICAC3CJMTRe3FaZezro210T2+O Ck0CobhLA9nlw9GUwP9lTtxATwCzmXybrSzOUhknwzUXSUwkmCPIVCqBQbnVmagO G3vu8QA+rqZLTpzVjJ/o0TFBXKsH681pKdCrELDVmeDN135C2W6SABI4Qq4VeIol mCAQHn8gxzyl9Kvkk8AVIfZ/fJDBve5Qbm2+iEye1uSEa/68aEST2Kod9B7JvVKZ 4Nq78vwPH+v2JsZlfNvyuiakGWkOb47eHqVfQIyybaebwzkgxKEmUvGnuIfw0rUP ubI4FVx9/iVIxZYAckHEuQh3HYOD9TmdcK4h79dDWnXP6G6hg3/rwbsT+fR+0aBQ 9rkKnA4uToGikYmplixAQ/jDBwMs3VQqenO+YBIsC4HEZ0fJUbs+l4LEnuUJxYcR UlAvnVQXa1WGne3Yzb2xONWeiocKfhcdJ2JuQo00UR74+2Qonxn/WpimvlLCBDgI uKxHCSWOgv5yPpU2kwTPIjORXcy/y2G9K2bnsQCzznPRDyNkZmavQxxG6greFcrO /0yhRPuBgxKBRvXPO+F5fybKFlU9IPLFehV60jLUybBejab/lMJyxdkh9UMu2Xqy 
FVsRGazJt6T6AGp6TFEEcFUQw7qXNhVo9S7zGGaJFJdYc+Vx8QJRoCe8EAYVH7Mp b/eYGhHaKg6iG7QCjPPxAoIBAQDN54wtuDqpAA+4PmqhiEhQKhabNqAoVmAWUxnJ Db4Zzvkkc3Fo/Yg0HnQVaT0KmkcxY7397lTdtiwNkWPgJ0f6+g7L4K7PA7xh/q84 IoXFGvYWwVdiVXLR1l06jorpA20clnba6CsbezwcllTq4bWvNnrAcM8l1YrAlRnV qqqbPL78Rnba4C8q+VFy8r0d9OGnbvFcV7VWJjhr0a3aZbHQ67jPinNiUWvBVFFx yGrqPMjkeHyiTLMhqQpaSHH67S88rj0g9RKexBaSUrl18QO7xnQHHSCcFWMQOiSN shNvFri48dnU+Ms6ZLc3MBHbTK6uzP8xJCVnmsz/MWPGkQZFAoIBAQDI/vj/3/y/ EpIawyHN7PQAMoto4AQF6sVasrgGd1tRsJnGKrCugH9gILvyke3L7qg0JTV3bDJY e8+vH1vC3NV7PsOlCFjMtRWG0lRbCh/b7Qe3pCvPu4mbFhJgMT/mz+vbl5zvcdgX kvne+St/267NKnY5gHBDhqitBwkZwNlTWJ0zVmTecKXn/KwjS9lX1qU3HiT3UFkd 5Y5Nt5lj1IOK/6NCXkxVkgOc4Zjcxx138Cg03VJhIiHTusRq6z9iTSTDubhkaSbi 2nadptFBiQtkVhAJ5G53U7pl/pIhhiJy901bu/v/wrIMJ2l6hiZIcLrbg6VGXxjV 5dB7LDEtKoL7AoIBAQC8+ffA+mX0N9c1nSuWh5L+6DIJUHBbtTLJKonu6gsAeuJU 3xNGbfK1CwI1qHnaolAW91knlrcTKaBy726ACu1YXmp4GgW2f9JFCk/csGqfxaf4 qIg/+va/ugOku7CoPXnGFB6PuSffOBKqlhrn3DI41kKBHsgwDDYlnHKylMmyYmVS +oUZS0pfIaXsXvbNaLQ2TG9+9gy7Pabo5e+vE0jI25+p84MEyH+iV3XMfUoLI7Cp aB/TgZuimBelVvotd8Sz56K4/dSSHJwuvXfz1Dk9/Nz+rnAAcOyTtxlXZwnJGkx9 iZMIkTNMq6UwJJEu+ckVK5ZHjso5tWzSBo1xcCcVAoIBAQCPL0x1A7zK5VDd7cqE J1w/U8KKiKN1D6VeElkUiiysyjERwdGxzmpvMYKSsDCGCdMbqrInDBXlgPYXnDBD ZgxSywiW5ZZU5l+advWPEWxWwMmxoitvxfqmV5fpnMwYAmDUQ3KSBTjaumJ03G6H nBkvoSMtnXjcMe6xrIRoK0Dmpgb+znn3GKqn1BFQ57TCZW+3DytoX33M1X6FkNie DINVHv3Pxtt8ThNyzCeYh+RPT+9kkZIhDi6o5bENNd8miSw6nnBkX6BLFTRQ5MjH dfh+luzAD1I+gZAVHsA9T4/09IXQZt+DeNBb5iu3FB/rlRsYS/UOZ6qKnjfhtz6l HVbHAoIBAFjNY/UPJDxQ/uG+rMU0nrmSBRGdgBvQkcefjWX/LIZV3MjNilUQ+B2a lXz5AHGmHRnnwQsBVfN8rf4qQLln8l34Kgm7+cIFavgfg2oqVbNyNgezSlUmRq0J Ttf3xYJtRgRUx8F+BcgJXMqlNGTMQJY8wawM/ATkwkbmSwGOKe04sBeIkwEycMId BupvfN5lxDrKqJVPSl1t5Rh4us95CNh22/c5Tq5rsynl02ZB4swlcsVTdv8FSGmM QVf/MkWXGN/x4lHJhKyklHMGv15GGvys1nlPTstMfUYs55ioWRW46TXQ8vOyzzpg 67xzBKYFEde+hgYk7X1Xeqj8A6bsqro= -----END PRIVATE KEY----- gevent-24.11.1/examples/threadpool.py000066400000000000000000000005231471441230600175030ustar00rootroot00000000000000from __future__ import print_function import time import gevent from gevent.threadpool import ThreadPool pool = ThreadPool(3) start = time.time() for _ in range(4): pool.spawn(time.sleep, 1) gevent.wait() delay = time.time() - start print('Running "time.sleep(1)" 4 times with 3 threads. Should take about 2 seconds: %.3fs' % delay) gevent-24.11.1/examples/udp_client.py000066400000000000000000000012431471441230600174700ustar00rootroot00000000000000# Copyright (c) 2012 Denis Bilenko. See LICENSE for details. """Send a datagram to localhost:9000 and receive a datagram back. Usage: python udp_client.py MESSAGE Make sure you're running a UDP server on port 9001 (see udp_server.py). There's nothing gevent-specific here. """ from __future__ import print_function import sys from gevent import socket address = ('127.0.0.1', 9001) message = ' '.join(sys.argv[1:]) sock = socket.socket(type=socket.SOCK_DGRAM) sock.connect(address) print('Sending %s bytes to %s:%s' % ((len(message), ) + address)) sock.send(message.encode()) data, address = sock.recvfrom(8192) print('%s:%s: got %r' % (address + (data, ))) sock.close() gevent-24.11.1/examples/udp_server.py000066400000000000000000000011521471441230600175170ustar00rootroot00000000000000# Copyright (c) 2012 Denis Bilenko. See LICENSE for details. """A simple UDP server. For every message received, it sends a reply back. You can use udp_client.py to send a message. 
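For example, start this server in one terminal and, in another terminal,
run: python udp_client.py hello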
""" from __future__ import print_function from gevent.server import DatagramServer class EchoServer(DatagramServer): def handle(self, data, address): # pylint:disable=method-hidden print('%s: got %r' % (address[0], data)) self.socket.sendto(('Received %s bytes' % len(data)).encode('utf-8'), address) if __name__ == '__main__': print('Receiving datagrams on :9000') EchoServer(':9000').serve_forever() gevent-24.11.1/examples/unixsocket_client.py000066400000000000000000000004711471441230600210760ustar00rootroot00000000000000from __future__ import print_function # gevent-test-requires-resource: unixsocket_server import socket s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s.connect("./unixsocket_server.py.sock") s.send('GET / HTTP/1.0\r\n\r\n') data = s.recv(1024) print('received %s bytes' % len(data)) print(data) s.close() gevent-24.11.1/examples/unixsocket_server.py000066400000000000000000000010511471441230600211210ustar00rootroot00000000000000# gevent-test-requires-resource: unixsocket_client import os from gevent.pywsgi import WSGIServer from gevent import socket def application(environ, start_response): assert environ start_response('200 OK', []) return [] if __name__ == '__main__': listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) sockname = './' + os.path.basename(__file__) + '.sock' if os.path.exists(sockname): os.remove(sockname) listener.bind(sockname) listener.listen(1) WSGIServer(listener, application).serve_forever() gevent-24.11.1/examples/webchat/000077500000000000000000000000001471441230600164055ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/README000066400000000000000000000002031471441230600172600ustar00rootroot00000000000000An example of AJAX chat taken from Tornado demos and converted to use django and gevent. 
To start the server, run $ python run.py gevent-24.11.1/examples/webchat/__init__.py000066400000000000000000000000001471441230600205040ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/application.py000077500000000000000000000007331471441230600212700ustar00rootroot00000000000000#!/usr/bin/python from gevent import monkey; monkey.patch_all() import os import traceback from django.core.handlers.wsgi import WSGIHandler from django.core.signals import got_request_exception from django.core.management import call_command os.environ['DJANGO_SETTINGS_MODULE'] = 'webchat.settings' def exception_printer(sender, **kwargs): traceback.print_exc() got_request_exception.connect(exception_printer) call_command('syncdb') application = WSGIHandler() gevent-24.11.1/examples/webchat/chat/000077500000000000000000000000001471441230600173245ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/chat/__init__.py000066400000000000000000000000001471441230600214230ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/chat/views.py000066400000000000000000000043351471441230600210400ustar00rootroot00000000000000import uuid import simplejson from django.shortcuts import render_to_response from django.template.loader import render_to_string from django.http import HttpResponse from gevent.event import Event from webchat import settings class ChatRoom(object): cache_size = 200 def __init__(self): self.cache = [] self.new_message_event = Event() def main(self, request): if self.cache: request.session['cursor'] = self.cache[-1]['id'] return render_to_response('index.html', {'MEDIA_URL': settings.MEDIA_URL, 'messages': self.cache}) def message_new(self, request): name = request.META.get('REMOTE_ADDR') or 'Anonymous' forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR') if forwarded_for and name == '127.0.0.1': name = forwarded_for msg = create_message(name, request.POST['body']) self.cache.append(msg) if len(self.cache) > self.cache_size: self.cache = self.cache[-self.cache_size:] self.new_message_event.set() self.new_message_event.clear() return json_response(msg) def message_updates(self, request): cursor = request.session.get('cursor') if not self.cache or cursor == self.cache[-1]['id']: self.new_message_event.wait() assert cursor != self.cache[-1]['id'], cursor try: for index, m in enumerate(self.cache): if m['id'] == cursor: return json_response({'messages': self.cache[index + 1:]}) return json_response({'messages': self.cache}) finally: if self.cache: request.session['cursor'] = self.cache[-1]['id'] else: request.session.pop('cursor', None) room = ChatRoom() main = room.main message_new = room.message_new message_updates = room.message_updates def create_message(from_, body): data = {'id': str(uuid.uuid4()), 'from': from_, 'body': body} data['html'] = render_to_string('message.html', dictionary={'message': data}) return data def json_response(value, **kwargs): kwargs.setdefault('content_type', 'text/javascript; charset=UTF-8') return HttpResponse(simplejson.dumps(value), **kwargs) gevent-24.11.1/examples/webchat/manage.py000077500000000000000000000010321471441230600202060ustar00rootroot00000000000000#!/usr/bin/python from django.core.management import execute_manager try: import settings # Assumed to be in the same directory. except ImportError: import sys sys.stderr.write("""Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things. You'll have to run django-admin.py, passing it your settings module. 
(If the file settings.py does indeed exist, it's causing an ImportError somehow.) """ % __file__) raise if __name__ == "__main__": execute_manager(settings) gevent-24.11.1/examples/webchat/run_standalone.py000077500000000000000000000003171471441230600217770ustar00rootroot00000000000000#!/usr/bin/python from __future__ import print_function from gevent.wsgi import WSGIServer from application import application print('Serving on 8000...') WSGIServer(('', 8000), application).serve_forever() gevent-24.11.1/examples/webchat/run_uwsgi000077500000000000000000000002551471441230600203570ustar00rootroot00000000000000#!/bin/sh # see http://projects.unbit.it/uwsgi and http://projects.unbit.it/uwsgi/wiki/Gevent exec uwsgi --loop gevent --http-socket :8000 --module application --async 1000 gevent-24.11.1/examples/webchat/settings.py000066400000000000000000000017631471441230600206260ustar00rootroot00000000000000from os.path import dirname, join, abspath __dir__ = dirname(abspath(__file__)) DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = () MANAGERS = ADMINS DATABASE_ENGINE = 'sqlite3' DATABASE_NAME = '/tmp/gevent-webchat.sqlite' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' TIME_ZONE = 'America/Chicago' LANGUAGE_CODE = 'en-us' SITE_ID = 1 USE_I18N = True MEDIA_ROOT = join(__dir__, 'static') MEDIA_URL = '/media/' SECRET_KEY = 'nv8(yg*&1-lon-8i-3jcs0y!01+rem*54051^5xt#^tzujdj!c' TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', ) ROOT_URLCONF = 'webchat.urls' TEMPLATE_DIRS = ( join(__dir__, 'templates') ) INSTALLED_APPS = ( 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'webchat.chat', ) gevent-24.11.1/examples/webchat/static/000077500000000000000000000000001471441230600176745ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/static/chat.css000066400000000000000000000020021471441230600213170ustar00rootroot00000000000000/* * Copyright 2009 FriendFeed * * Licensed under the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. You may obtain * a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations * under the License. */ body { background: white; margin: 10px; } body, input { font-family: sans-serif; font-size: 10pt; color: black; } table { border-collapse: collapse; border: 0; } td { border: 0; padding: 0; } #body { position: absolute; bottom: 10px; left: 10px; right: 100px; } #input { margin-top: 0.5em; } #inbox .message { padding-top: 0.25em; } #nav { text-align: right; float: right; z-index: 99; } gevent-24.11.1/examples/webchat/static/chat.js000066400000000000000000000072011471441230600211510ustar00rootroot00000000000000// Copyright 2009 FriendFeed // // Licensed under the Apache License, Version 2.0 (the "License"); you may // not use this file except in compliance with the License. 
You may obtain // a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, WITHOUT // WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the // License for the specific language governing permissions and limitations // under the License. $(document).ready(function() { if (!window.console) window.console = {}; if (!window.console.log) window.console.log = function() {}; $("#messageform").live("submit", function() { newMessage($(this)); return false; }); $("#messageform").live("keypress", function(e) { if (e.keyCode == 13) { newMessage($(this)); return false; } }); $("#message").select(); updater.poll(); }); function newMessage(form) { var message = form.formToDict(); var disabled = form.find("input[type=submit]"); disabled.disable(); $.postJSON("/a/message/new", message, function(response) { updater.showMessage(response); if (message.id) { form.parent().remove(); } else { form.find("input[type=text]").val("").select(); disabled.enable(); } }); } function getCookie(name) { var r = document.cookie.match("\\b" + name + "=([^;]*)\\b"); return r ? r[1] : undefined; } jQuery.postJSON = function(url, args, callback) { args._xsrf = getCookie("_xsrf"); $.ajax({url: url, data: $.param(args), dataType: "text", type: "POST", success: function(response) { if (callback) callback(eval("(" + response + ")")); }, error: function(response) { console.log("ERROR:", response) }}); }; jQuery.fn.formToDict = function() { var fields = this.serializeArray(); var json = {} for (var i = 0; i < fields.length; i++) { json[fields[i].name] = fields[i].value; } if (json.next) delete json.next; return json; }; jQuery.fn.disable = function() { this.enable(false); return this; }; jQuery.fn.enable = function(opt_enable) { if (arguments.length && !opt_enable) { this.attr("disabled", "disabled"); } else { this.removeAttr("disabled"); } return this; }; var updater = { errorSleepTime: 500, cursor: null, poll: function() { var args = {"_xsrf": getCookie("_xsrf")}; if (updater.cursor) args.cursor = updater.cursor; $.ajax({url: "/a/message/updates", type: "POST", dataType: "text", data: $.param(args), success: updater.onSuccess, error: updater.onError}); }, onSuccess: function(response) { try { updater.newMessages(eval("(" + response + ")")); } catch (e) { updater.onError(); return; } updater.errorSleepTime = 500; window.setTimeout(updater.poll, 0); }, onError: function(response) { updater.errorSleepTime *= 2; console.log("Poll error; sleeping for", updater.errorSleepTime, "ms"); window.setTimeout(updater.poll, updater.errorSleepTime); }, newMessages: function(response) { if (!response.messages) return; updater.cursor = response.cursor; var messages = response.messages; updater.cursor = messages[messages.length - 1].id; console.log(messages.length, "new messages, cursor:", updater.cursor); for (var i = 0; i < messages.length; i++) { updater.showMessage(messages[i]); } }, showMessage: function(message) { var existing = $("#m" + message.id); if (existing.length > 0) return; var node = $(message.html); node.hide(); $("#inbox").append(node); node.slideDown(); } }; gevent-24.11.1/examples/webchat/templates/000077500000000000000000000000001471441230600204035ustar00rootroot00000000000000gevent-24.11.1/examples/webchat/templates/404.html000066400000000000000000000000231471441230600215730ustar00rootroot00000000000000
<h1>Not Found</h1>
gevent-24.11.1/examples/webchat/templates/500.html000066400000000000000000000000371471441230600215750ustar00rootroot00000000000000
<h1>Internal Server Error</h1>
gevent-24.11.1/examples/webchat/templates/index.html000066400000000000000000000024771471441230600224120ustar00rootroot00000000000000 Chat Demo
{% for message in messages %} {% include "message.html" %} {% endfor %}
gevent-24.11.1/examples/webchat/templates/message.html000066400000000000000000000001401471441230600227100ustar00rootroot00000000000000
<div class="message" id="m{{ message.id }}"><b>{{ message.from }}: </b>{{ message.body }}</div>
gevent-24.11.1/examples/webchat/urls.py000066400000000000000000000010221471441230600177370ustar00rootroot00000000000000from django.conf.urls.defaults import * from webchat import settings urlpatterns = patterns('webchat.chat.views', ('^$', 'main'), ('^a/message/new$', 'message_new'), ('^a/message/updates$', 'message_updates')) urlpatterns += patterns('django.views.static', (r'^%s(?P.*)$' % settings.MEDIA_URL.lstrip('/'), 'serve', {'document_root': settings.MEDIA_ROOT, 'show_indexes': True})) gevent-24.11.1/examples/webproxy.py000077500000000000000000000122411471441230600172240ustar00rootroot00000000000000#!/usr/bin/env python """A web application that retrieves other websites for you. To start serving the application on port 8088, type python webproxy.py To start the server on some other interface/port, use python -m gevent.wsgi -p 8000 -i 0.0.0.0 webproxy.py """ from __future__ import print_function from gevent import monkey; monkey.patch_all() import sys import re import traceback try: from cgi import escape except ImportError: # Python 3.8 removed this API from html import escape try: import urllib2 from urlparse import urlparse from urllib import unquote except ImportError: # pylint:disable=import-error,no-name-in-module from urllib import request as urllib2 from urllib.parse import urlparse from urllib.parse import unquote LISTEN = ('127.0.0.1', 8088) def _as_bytes(s): if not isinstance(s, bytes): # Py3 s = s.encode('utf-8') return s def _as_str(s): if not isinstance(s, str): # Py3 s = s.decode('latin-1') return s def application(env, start_response): proxy_url = 'http://%s/' % env['HTTP_HOST'] method = env['REQUEST_METHOD'] path = env['PATH_INFO'] if env['QUERY_STRING']: path += '?' + env['QUERY_STRING'] path = path.lstrip('/') if (method, path) == ('GET', ''): start_response('200 OK', [('Content-Type', 'text/html')]) return [FORM] if method == 'GET': return proxy(path, start_response, proxy_url) if (method, path) == ('POST', ''): key, value = env['wsgi.input'].read().strip().split(b'=') assert key == b'url', repr(key) value = _as_str(value) start_response('302 Found', [('Location', _as_str(join(proxy_url, unquote(value))))]) elif method == 'POST': start_response('404 Not Found', []) else: start_response('501 Not Implemented', []) return [] def proxy(path, start_response, proxy_url): # pylint:disable=too-many-locals if '://' not in path: path = 'http://' + path try: try: response = urllib2.urlopen(path) except urllib2.HTTPError as ex: response = ex print('%s: %s %s' % (path, response.code, response.msg)) # Beginning in Python 3.8, headers aren't guaranteed to arrive in # lowercase; we must do so ourself. headers = [(k, v) for (k, v) in response.headers.items() if k.lower() not in DROP_HEADERS] scheme, netloc, path, _params, _query, _fragment = urlparse(path) host = (scheme or 'http') + '://' + netloc except Exception as ex: # pylint:disable=broad-except sys.stderr.write('error while reading %s:\n' % path) traceback.print_exc() tb = traceback.format_exc() start_response('502 Bad Gateway', [('Content-Type', 'text/html')]) # pylint:disable=deprecated-method error_str = escape(str(ex) or ex.__class__.__name__ or 'Error') error_str = '
<h1>%s</h1><b>%s</b><pre>%s</pre>
' % (error_str, escape(path), escape(tb)) return [_as_bytes(error_str)] else: print("Returning", headers) start_response('%s %s' % (response.code, response.msg), headers) data = response.read() data = fix_links(data, proxy_url, host) return [data] def join(url1, *rest): if not rest: return url1 url2, rest = rest[0], rest[1:] url1 = _as_bytes(url1) url2 = _as_bytes(url2) if url1.endswith(b'/'): if url2.startswith(b'/'): return join(url1 + url2[1:], *rest) return join(url1 + url2, *rest) if url2.startswith(b'/'): return join(url1 + url2, *rest) return join(url1 + b'/' + url2, *rest) def fix_links(data, proxy_url, host_url): """ >>> fix_links("> %r' % (m.group(0), result)) return result data = _link_re_1.sub(fix_link_cb, data) data = _link_re_2.sub(fix_link_cb, data) return data _link_re_1 = re.compile(br'''(?P(href|src|action)\s*=\s*)(?P['"])(?P[^#].*?)(?P=quote)''') _link_re_2 = re.compile(br'''(?P(href|src|action)\s*=\s*)(?P[^'"#>][^ >]*)''') # The lowercase names of headers that we will *NOT* forward. DROP_HEADERS = { 'transfer-encoding', 'set-cookie' } FORM = b""" Web Proxy - gevent example
Type in URL you want to visit and press Enter
""" if __name__ == '__main__': from gevent.pywsgi import WSGIServer print('Serving on %s...' % (LISTEN,)) WSGIServer(LISTEN, application).serve_forever() gevent-24.11.1/examples/webpy.py000077500000000000000000000022361471441230600164760ustar00rootroot00000000000000#!/usr/bin/python # gevent-test-requires-resource: webpy """A web.py application powered by gevent""" from __future__ import print_function from gevent import monkey; monkey.patch_all() from gevent.pywsgi import WSGIServer import time import web # pylint:disable=import-error urls = ("/", "index", '/long', 'long_polling') class index(object): def GET(self): return 'Hello, world!
/long' class long_polling(object): # Since gevent's WSGIServer executes each incoming connection in a separate greenlet # long running requests such as this one don't block one another; # and thanks to "monkey.patch_all()" statement at the top, thread-local storage used by web.ctx # becomes greenlet-local storage thus making requests isolated as they should be. def GET(self): print('GET /long') time.sleep(10) # possible to block the request indefinitely, without harming others return 'Hello, 10 seconds later' if __name__ == "__main__": application = web.application(urls, globals()).wsgifunc() print('Serving on 8088...') WSGIServer(('', 8088), application).serve_forever() gevent-24.11.1/examples/wsgiserver.py000077500000000000000000000010171471441230600175440ustar00rootroot00000000000000#!/usr/bin/python """WSGI server example""" from __future__ import print_function from gevent.pywsgi import WSGIServer def application(env, start_response): if env['PATH_INFO'] == '/': start_response('200 OK', [('Content-Type', 'text/html')]) return [b"hello world"] start_response('404 Not Found', [('Content-Type', 'text/html')]) return [b'
<h1>Not Found</h1>
'] if __name__ == '__main__': print('Serving on 8088...') WSGIServer(('127.0.0.1', 8088), application).serve_forever() gevent-24.11.1/examples/wsgiserver_ssl.py000077500000000000000000000014001471441230600204210ustar00rootroot00000000000000#!/usr/bin/python """Secure WSGI server example based on gevent.pywsgi""" from __future__ import print_function from gevent import pywsgi def hello_world(env, start_response): if env['PATH_INFO'] == '/': start_response('200 OK', [('Content-Type', 'text/html')]) return [b"hello world"] start_response('404 Not Found', [('Content-Type', 'text/html')]) return [b'
<h1>Not Found</h1>
'] print('Serving on https://:8443') # see src/gevent/tests/test__ssl.py for how to generate server = pywsgi.WSGIServer(('127.0.0.1', 8443), hello_world, keyfile='server.key', certfile='server.crt') # to start the server asynchronously, call server.start() # we use blocking serve_forever() here because we have no other jobs server.serve_forever() gevent-24.11.1/pyproject.toml000066400000000000000000000040651471441230600160730ustar00rootroot00000000000000[build-system] build-backend = "setuptools.build_meta" # Build dependencies. Remember to change these in make-manylinux and appveyor.yml # if you add/remove/change them. requires = [ "setuptools >= 40.8.0", # Python 3.7 requires at least Cython 0.27.3. # 0.28 is faster, and (important!) lets us specify the target module # name to be created so that we can have both foo.py and _foo.so # at the same time. 0.29 fixes some issues with Python 3.7, # and adds the 3str mode for transition to Python 3. 0.29.14+ is # required for Python 3.8. 3.0a2 introduced a change that prevented # us from compiling (https://github.com/gevent/gevent/issues/1599) # but once that was fixed, 3.0a4 led to all of our leak tests # failing in Python 2 (https://travis-ci.org/github/gevent/gevent/jobs/683782800); # This was fixed in 3.0a5 (https://github.com/cython/cython/issues/3578) # 3.0a6 fixes an issue cythonizing source on 32-bit platforms. # 3.0a9 is needed for Python 3.10. # Python 3.12 requires at least 3.0rc2. "Cython >= 3.0.11", # See version requirements in setup.py "cffi >= 1.17.1 ; platform_python_implementation == 'CPython'", # Python 3.7 requires at least 0.4.14, which is ABI incompatible with earlier # releases. Python 3.9 and 3.10 require 0.4.16; # 0.4.17 is ABI incompatible with earlier releases, but compatible with 1.0 # 1.1.3 is needed for CPython 3.11. # 2.0 is not ABI compatible with earlier releases, but with luck it won't # have to break the ABI again. # 3.0 is ABI compatible with earlier releases, so we can switch back and # forth between 2 and 3 without recompiling. 3.0 is required for # Python 3.12, and it fixes some serious bugs in Python 3.11. "greenlet >= 3.0.3 ; platform_python_implementation == 'CPython'", ] [tool.towncrier] directory = "docs/changes" filename = "CHANGES.rst" package = "gevent" package_dir = "src" issue_format = "See :issue:`{issue}`." title_format = false template = "docs/_templates/hr-between-versions.rst.tmpl" gevent-24.11.1/scripts/000077500000000000000000000000001471441230600146415ustar00rootroot00000000000000gevent-24.11.1/scripts/gprospector.py000066400000000000000000000010341471441230600175600ustar00rootroot00000000000000from __future__ import print_function import re import sys from prospector.run import main def _excepthook(e, t, tb): while tb is not None: frame = tb.tb_frame print(frame.f_code, frame.f_code.co_name) for n in ('self', 'node', 'elt'): if n in frame.f_locals: print(n, frame.f_locals[n]) print('---') tb = tb.tb_next sys.excepthook = _excepthook if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) gevent-24.11.1/scripts/install.sh000077500000000000000000000101161471441230600166450ustar00rootroot00000000000000#!/usr/bin/env bash # GEVENT: Taken from https://raw.githubusercontent.com/DRMacIver/hypothesis/master/scripts/install.sh # Special license: Take literally anything you want out of this file. I don't # care. Consider it WTFPL licensed if you like. 
# Basically there's a lot of suffering encoded here that I don't want you to # have to go through and you should feel free to use this to avoid some of # that suffering in advance. set -e set -x # Where installations go BASE=${BUILD_RUNTIMES-$PWD/.runtimes} PYENV=$BASE/pyenv echo $BASE mkdir -p $BASE update_pyenv () { VERSION="$1" if [ ! -d "$PYENV/.git" ]; then rm -rf $PYENV git clone https://github.com/pyenv/pyenv.git $BASE/pyenv else if [ ! -f "$PYENV/plugins/python-build/share/python-build/$VERSION" ]; then echo "Updating $PYENV for $VERSION" back=$PWD cd $PYENV git fetch || echo "Fetch failed to complete. Ignoring" git reset --hard origin/master cd $back fi fi } SNAKEPIT=$BASE/snakepit ## # install(exact-version, bin-alias, dir-alias) # # Produce a python executable at $SNAKEPIT/bin-alias # having the exact version given as exact-version. # # Also produces a $SNAKEPIT/dir-alias/ pointing to the root # of the python install. ## install () { VERSION="$1" ALIAS="$2" DIR_ALIAS="$3" DESTINATION=$BASE/versions/$VERSION mkdir -p $BASE/versions mkdir -p $SNAKEPIT if [ ! -e "$DESTINATION" ]; then mkdir -p $SNAKEPIT mkdir -p $BASE/versions update_pyenv $VERSION # -Ofast makes the build take too long and times out Travis. It also affects # process-wide floating-point flags - see: https://github.com/gevent/gevent/pull/1864 CFLAGS="-O1 -pipe -march=native" $BASE/pyenv/plugins/python-build/bin/python-build $VERSION $DESTINATION fi # Travis CI doesn't take symlink changes (or creation!) into # account on its caching, So we need to write an actual file if we # actually changed something. For python version upgrades, this is # usually handled automatically (obviously) because we installed # python. But if we make changes *just* to symlink locations above, # nothing happens. So for every symlink, write a file...with identical contents, # so that we don't get *spurious* caching. (Travis doesn't check for mod times, # just contents, so echoing each time doesn't cause it to re-cache.) # Overwrite an existing alias. # For whatever reason, ln -sf on Travis works fine for the ALIAS, # but fails for the DIR_ALIAS. No clue why. So we delete an existing one of those # manually. if [ -L "$SNAKEPIT/$DIR_ALIAS" ]; then rm -f $SNAKEPIT/$DIR_ALIAS fi ln -sfv $DESTINATION/bin/python $SNAKEPIT/$ALIAS ln -sfv $DESTINATION $SNAKEPIT/$DIR_ALIAS echo $VERSION $ALIAS $DIR_ALIAS > $SNAKEPIT/$ALIAS.installed $SNAKEPIT/$ALIAS --version $DESTINATION/bin/python --version # Set the PATH to include the install's bin directory so pip # doesn't nag. 
# Use quiet mode for this; PyPy2 has been seen to output # an error: # UnicodeEncodeError: 'ascii' codec can't encode # character u'\u258f' in position 6: ordinal not in range(128) # https://travis-ci.org/github/gevent/gevent/jobs/699973435 PATH="$DESTINATION/bin/:$PATH" $SNAKEPIT/$ALIAS -m pip install -q --upgrade pip wheel virtualenv ls -l $SNAKEPIT ls -l $BASE/versions } for var in "$@"; do case "${var}" in 2.7) install 2.7.17 python2.7 2.7.d ;; 3.5) install 3.5.9 python3.5 3.5.d ;; 3.6) install 3.6.10 python3.6 3.6.d ;; 3.7) install 3.7.7 python3.7 3.7.d ;; 3.8) install 3.8.2 python3.8 3.8.d ;; 3.9) install 3.9.0 python3.9 3.9.d ;; pypy2.7) install pypy2.7-7.3.1 pypy2.7 pypy2.7.d ;; pypy3.6) install pypy3.6-7.3.1 pypy3.6 pypy3.6.d ;; esac done gevent-24.11.1/scripts/releases/000077500000000000000000000000001471441230600164445ustar00rootroot00000000000000gevent-24.11.1/scripts/releases/appveyor-download.py000066400000000000000000000106161471441230600224740ustar00rootroot00000000000000#!/usr/bin/env python """ Use the AppVeyor API to download Windows artifacts. Taken from: https://bitbucket.org/ned/coveragepy/src/tip/ci/download_appveyor.py # Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0 # For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt """ import argparse import os import zipfile import requests # To delete: # DELETE https://ci.appveyor.com/api/projects/{accountName}/{projectSlug}/buildcache # requests.delete(make_url('/projects/denik/gevent/buildcache'), headers=make_auth_headers) def make_auth_headers(fname=".appveyor.token"): """Make the authentication headers needed to use the Appveyor API.""" if not os.path.exists(fname): fname = os.path.expanduser("~/bin/appveyor-token") if not os.path.exists(fname): raise RuntimeError( "Please create a file named `.appveyor.token` in the current directory. 
" "You can get the token from https://ci.appveyor.com/api-token" ) with open(fname) as f: token = f.read().strip() headers = { 'Authorization': 'Bearer {}'.format(token), } return headers def make_url(url, **kwargs): """Build an Appveyor API url.""" return "https://ci.appveyor.com/api" + url.format(**kwargs) def get_project_build(account_project, build_num): """Get the details of the latest Appveyor build.""" url = '/projects/{account_project}' url_args = {'account_project': account_project} if build_num: url += '/build/{buildVersion}' url_args['buildVersion'] = build_num url = make_url(url, **url_args) response = requests.get(url, headers=make_auth_headers()) return response.json() def download_latest_artifacts(account_project, build_num): """Download all the artifacts from the latest build.""" build = get_project_build(account_project, build_num) jobs = build['build']['jobs'] print("Build {0[build][version]}, {1} jobs: {0[build][message]}".format(build, len(jobs))) for job in jobs: name = job['name'].partition(':')[2].split(',')[0].strip() print(" {0}: {1[status]}, {1[artifactsCount]} artifacts".format(name, job)) url = make_url("/buildjobs/{jobid}/artifacts", jobid=job['jobId']) response = requests.get(url, headers=make_auth_headers()) artifacts = response.json() for artifact in artifacts: is_zip = artifact['type'] == "Zip" filename = artifact['fileName'] print(" {0}, {1} bytes".format(filename, artifact['size'])) url = make_url( "/buildjobs/{jobid}/artifacts/{filename}", jobid=job['jobId'], filename=filename ) download_url(url, filename, make_auth_headers()) if is_zip: unpack_zipfile(filename) os.remove(filename) def ensure_dirs(filename): """Make sure the directories exist for `filename`.""" dirname, _ = os.path.split(filename) if dirname and not os.path.exists(dirname): os.makedirs(dirname) def download_url(url, filename, headers): """Download a file from `url` to `filename`.""" ensure_dirs(filename) response = requests.get(url, headers=headers, stream=True) if response.status_code == 200: with open(filename, 'wb') as f: for chunk in response.iter_content(16 * 1024): f.write(chunk) def unpack_zipfile(filename): """Unpack a zipfile, using the names in the zip.""" with open(filename, 'rb') as fzip: z = zipfile.ZipFile(fzip) for name in z.namelist(): print(" extracting {}".format(name)) ensure_dirs(name) z.extract(name) def main(argv=None): import sys argv = argv or sys.argv[1:] parser = argparse.ArgumentParser(description='Download artifacts from AppVeyor.') parser.add_argument( 'name', metavar='ID', help='Project ID in AppVeyor. Example: ionelmc/python-nameless' ) parser.add_argument( 'build', default=None, nargs='?', help=( 'The project build version. If not given, discovers the latest. ' 'Note that this is not the build number. ' 'Example: 1.0.2420' ) ) args = parser.parse_args(argv) download_latest_artifacts(args.name, args.build) if __name__ == "__main__": main() gevent-24.11.1/scripts/releases/geventrel.sh000077500000000000000000000021021471441230600207710ustar00rootroot00000000000000#!/opt/local/bin/bash # # Quick hack script to build a single gevent release in a virtual env. Takes one # argument, the path to python to use. # Has hardcoded paths, probably only works on my (JAM) machine. 
set -e export WORKON_HOME=$HOME/Projects/VirtualEnvs export VIRTUALENVWRAPPER_LOG_DIR=~/.virtualenvs source `which virtualenvwrapper.sh` # Make sure there are no -march flags set # https://github.com/gevent/gevent/issues/791 unset CFLAGS unset CXXFLAGS unset CPPFLAGS # If we're building on 10.12, we have to exclude clock_gettime # because it's not available on earlier releases and leads to # segfaults because the symbol clock_gettime is NULL. # See https://github.com/gevent/gevent/issues/916 #export CPPFLAGS="-D_DARWIN_FEATURE_CLOCK_GETTIME=0" export ARCHFLAGS="-arch arm64 -arch x86_64" BASE=`pwd`/../../ BASE=`greadlink -f $BASE` cd /tmp/gevent virtualenv -p $1 `basename $1` cd `basename $1` echo "Made tmpenv" echo `pwd` source bin/activate echo cloning $BASE git clone $BASE gevent cd ./gevent pip install -U pip pip wheel . -w dist cp dist/gevent*whl /tmp/gevent/ gevent-24.11.1/scripts/releases/geventreleases.sh000077500000000000000000000006051471441230600220200ustar00rootroot00000000000000#!/opt/local/bin/bash # Quick hack script to create many gevent releases. # Contains hardcoded paths. Probably only works on my (JAM) machine # (OS X 10.11) mkdir /tmp/gevent/ ./geventrel.sh /usr/local/bin/python3.8 ./geventrel.sh /usr/local/bin/python3.9 ./geventrel.sh /usr/local/bin/python3.10 ./geventrel.sh /usr/local/bin/python3.11 ./geventrel.sh /usr/local/bin/python3.12 wait gevent-24.11.1/scripts/releases/make-manylinux000077500000000000000000000256671471441230600213510ustar00rootroot00000000000000#!/bin/bash # Initially based on a snippet from the greenlet project. # This needs to be run from the root of the project. # To update: docker pull quay.io/pypa/manylinux2010_x86_64 set -e export PYTHONUNBUFFERED=1 export PYTHONDONTWRITEBYTECODE=1 # Use a fixed hash seed for reproducability export PYTHONHASHSEED=8675309 # Disable tests that use external network resources; # too often we get failures to resolve DNS names or failures # to connect on AppVeyor. export GEVENTTEST_USE_RESOURCES="-network" export CI=1 export TRAVIS=true export GEVENT_MANYLINUX=1 # Don't get warnings about Python 2 support being deprecated. We # know. The env var works for pip 20. export PIP_NO_PYTHON_VERSION_WARNING=1 export PIP_NO_WARN_SCRIPT_LOCATION=1 # Build configuration. export CC="ccache `which gcc`" export LDSHARED="$CC -shared" export LDCCSHARED="$LDSHARED" export LDCXXSHARED="$LDSHARED" export CCACHE_NOCPP2=true export CCACHE_SLOPPINESS=file_macro,time_macros,include_file_ctime,include_file_mtime export CCACHE_NOHASHDIR=true export CCACHE_BASEDIR="/gevent" export BUILD_LIBS=$HOME/.libs # Share the ccache directory export CCACHE_DIR="/ccache" # Disable some warnings produced by libev especially and also some Cython generated code. # Note that changing the value of these variables invalidates configure caches GEVENT_WARNFLAGS="-Wno-strict-aliasing -Wno-comment -Wno-unused-value -Wno-unused-but-set-variable -Wno-sign-compare -Wno-parentheses -Wno-unused-function -Wno-tautological-compare -Wno-strict-prototypes -Wno-return-type -Wno-misleading-indentation" OPTIMIZATION_FLAGS="-O3 -pipe" if [ -n "$GITHUB_ACTIONS" ]; then if [ "$DOCKER_IMAGE" == "quay.io/pypa/manylinux2014_aarch64" ] || [ "$DOCKER_IMAGE" == "quay.io/pypa/manylinux2014_ppc64le" ] || [ "$DOCKER_IMAGE" == "quay.io/pypa/manylinux2014_s390x" ] || [ "$DOCKER_IMAGE" == "quay.io/pypa/musllinux_1_1_aarch64" ] ; then # Compiling with -Ofast is a no-go because of the regression it causes (#1864). # The default settings have -O3, and adding -Os doesn't help much. 
So maybe -O1 will. echo "Compiling with -O1" OPTIMIZATION_FLAGS="-pipe -O1" SLOW_BUILD=1 GEVENTTEST_SKIP_ALL=1 export GEVENTSETUP_DISABLE_ARES=1 # ccache has been seen to have some issues here with too many open files? unset CC unset LDSHARED unset LDCCSHARED unset LDCXXSHARED fi else OPTIMIZATION_FLAGS="-pipe -O3" fi echo "Compiling with $OPTIMIZATION_FLAGS" export CFLAGS="$OPTIMIZATION_FLAGS $GEVENT_WARNFLAGS" # -lrt: Needed for clock_gettime libc support on this version. # -pthread: Needed for pthread_atfork (cffi). # This used to be spelled with LDFLAGS, but that is deprecated and # produces a warning on the 2014 image (?). Still needed on the # 2010 image. export LIBS="-lrt -pthread" export LDFLAGS="$LIBS" # Be sure that we get the loop we expect by default, and not # a fallback loop. export GEVENT_LOOP="libev-cext" SEP="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" if [ -d /gevent -a -d /opt/python ]; then # Running inside docker # Set a cache directory for pip. This was # mounted to be the same as it is outside docker so it # can be persisted. export XDG_CACHE_HOME="/cache" # XXX: This works for macOS, where everything bind-mounted # is seen as owned by root in the container. But when the host is Linux # the actual UIDs come through to the container, triggering # pip to disable the cache when it detects that the owner doesn't match. # The below is an attempt to fix that, taken frob bcrypt. It seems to work on # Github Actions. echo $SEP if [ -n "$GITHUB_ACTIONS" ]; then echo Adjusting pip cache permissions: $(whoami) mkdir -p $XDG_CACHE_HOME/pip chown -R $(whoami) $XDG_CACHE_HOME fi ls -ld /cache ls -ld /cache/pip echo $SEP # Ahh, overprotective security. Disable it. echo "Fixing get's paranoia" git config --global --add safe.directory /gevent/.git echo $SEP echo "Installing Build Deps" if [ -e /usr/bin/yum ]; then yum -y install libffi-devel # Some images/archs (specificaly 2014_aarch64) don't have ccache; # This also seems to have vanished for manylinux_2010 x64/64 after November 30 # 2020 when the OS went EOL and the package repos switched to the "vault" if [ -n "$SLOW_BUILD" ] ; then # This provides access to ccache for the 2014 image echo Installing epel rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-14.noarch.rpm || true fi yum -y install ccache || export CC=gcc LDSHARED="gcc -shared" LDCXXSHARED="gcc -shared" # On Fedora Rawhide (F33) # yum install python39 python3-devel gcc kernel-devel kernel-headers make diffutils file fi if [ -e /sbin/apk ]; then # the muslinux image apk add --no-cache build-base libffi-dev ccache fi echo $SEP echo Current environment echo $SEP env | sort echo $SEP mkdir /tmp/build cd /tmp/build git clone /gevent gevent cd gevent if [ -z "$GEVENTSETUP_DISABLE_ARES" ]; then echo Configuring cares time (cd deps/c-ares && ./configure --disable-dependency-tracking -C > /dev/null ) else echo Not configuring c-ares because it is disabled fi echo $SEP rm -rf /gevent/wheelhouse mkdir /gevent/wheelhouse OPATH="$PATH" which auditwheel # June 2023: 3.8, 3.9, and 3.10 are in security-fix only mode, ending support in # 2024, 2025, and 2026, respectively. Only 3.11+ are in active support mode. # Building a variant in emulation takes at least 9 minutes on Github Actions, # plus many minutes to build the deps (Cython) and then many more minutes to build and # install test dependencies; a complete build for ppc64le with 3.8, 9, 10, 11, and 12 # took 1.5 hours. 
Multiply that times all the SLOW_BUILD images, and you've got a lot of time. # # So, for SLOW_BUILD environments: # - skip security-fix only versions; users still on those versions are the least likely to # upgrade dependencies # - skip running most tests and installing test extras (which we don't need because we only run # a tiny subset of tests) echo Possible Builds ls -l /opt/python/ # If there is no Cython binary wheel available, don't try to build one; it takes # forever! The old way of --install-option="--no-cython-compile" doesn't work because # pip dropped support for it, and the "supported" way, --config-settings='--install-option="--no-cython-compile"' # also doesn't work. Fortunately, Cython also reads an environment variable. export NO_CYTHON_COMPILE=true # Start echoing commands (doing it earlier is too much) set -x for variant in /opt/python/cp{313,312,39,310,311}*; do echo $SEP if [ "$variant" = "/opt/python/cp313-cp313t" ]; then # It appears that Cython 3.0.11 cannot produce code that # works here. Lots of compiler errors. echo "Unable to build without gil" continue fi export PATH="$variant/bin:$OPATH" if [ -n "$SLOW_BUILD" ]; then is_security_fix_only=$(python -c 'import sys; print(sys.version_info[:2] < (3, 10))') if [ "$is_security_fix_only" == "True" ]; then echo "Skipping build of $variant" continue fi fi echo "Building $variant $(python --version)" python -mpip install -U pip # Build the wheel *in place*. This helps with cahching. # The downside is that we must install dependencies manually. # NOTE: We can't upgrade ``wheel`` because ``auditwheel`` depends on # it, and auditwheel is installed in one of these environments. time python -m pip install -U 'cython>=3.0' time python -mpip install -U cffi 'greenlet >= 2.0.0; python_version < "3.12"' 'greenlet >= 3.0a1; python_version >= "3.12"' setuptools echo "$variant: Building wheel" time (python setup.py bdist_wheel) PATH="$OPATH" auditwheel repair dist/gevent*.whl cp wheelhouse/gevent*.whl /gevent/wheelhouse # Install it and its deps to be sure that it can be done; no sense # trying to publish a wheel that can't be installed. time python -mpip install -U --no-compile $(ls dist/gevent*whl) # Basic sanity checks echo "$variant: Installation details" python -c 'from __future__ import print_function; import gevent; print(gevent, gevent.__version__)' python -c 'from __future__ import print_function; from gevent._compat import get_clock_info; print("clock info", get_clock_info("perf_counter"))' python -c 'from __future__ import print_function; import greenlet; print(greenlet, greenlet.__version__)' python -c 'from __future__ import print_function; import gevent.core; print("default loop", gevent.core.loop)' # Other loops we should have GEVENT_LOOP=libuv python -c 'from __future__ import print_function; import gevent.core; print("libuv loop", gevent.core.loop)' GEVENT_LOOP=libev-cffi python -c 'from __future__ import print_function; import gevent.core; print("libev-cffi loop", gevent.core.loop)' if [ -z "$GEVENTSETUP_DISABLE_ARES" ]; then python -c 'from __future__ import print_function; import gevent.ares; print("ares", gevent.ares)' fi if [ -z "$GEVENTTEST_SKIP_ALL" ]; then # With the test extra time python -mpip install -U --no-compile $(ls dist/gevent*whl)[test] python -mgevent.tests --second-chance else # Allow skipping the bulk of the tests. If we're emulating Arm, # running the whole thing takes forever. # XXX: It's possible that what takes forever is actually building gevent itself. 
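# Running just test__core below keeps a minimal smoke test that the
# compiled default loop still imports and runs.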
python -mgevent.tests.test__core fi rm -rf build rm -f dist/gevent*.whl ccache -s || true done ccache -s || true exit 0 fi # Mount the current directory as /gevent # Mount the pip cache directory as /cache # `pip cache` requires pip 20.1 echo $SEP echo Setting up caching python --version python -mpip --version LCACHE="$(dirname `python -mpip cache dir`)" echo Sharing pip cache at $LCACHE $(ls -ld $LCACHE) echo Sharing ccache dir at $HOME/.ccache if [ ! -d "$HOME/.ccache" ]; then mkdir "$HOME/.ccache" fi echo $SEP # Travis CI and locally we want `-ti`, but github actions doesn't have a TTY, so one # or the other of the arguments causes this to fail with 'input device is not a TTY' # Pas through whether we're running on github or not to help with caching. docker run --rm -e GEVENT_MANYLINUX_NAME -e GEVENTSETUP_DISABLE_ARES -e GITHUB_ACTIONS -e GEVENTTEST_SKIP_ALL -e DOCKER_IMAGE -v "$(pwd):/gevent" -v "$LCACHE:/cache" -v "$HOME/.ccache:/ccache" ${DOCKER_IMAGE:-quay.io/pypa/manylinux2010_x86_64} /gevent/scripts/releases/$(basename $0) ls -l wheelhouse gevent-24.11.1/setup.cfg000066400000000000000000000007531471441230600150000ustar00rootroot00000000000000[bdist_wheel] universal = 0 [zest.releaser] python-file-with-version = src/gevent/__init__.py create-wheel = no prereleaser.middle = gevent._util.prereleaser_middle postreleaser.before = gevent._util.postreleaser_before [metadata] long_description_content_type = text/x-rst [check-manifest] ignore = src/gevent/*.c src/gevent/*.html src/gevent/libev/corecext.h src/gevent/libev/corecext.html src/gevent/_generated_include/*.h src/gevent/_generated_include gevent-24.11.1/setup.py000077500000000000000000000420301471441230600146660ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """gevent build & installation script""" from __future__ import print_function import sys import os import os.path # setuptools is *required* on Windows # (https://bugs.python.org/issue23246) and for PyPy. No reason not to # use it everywhere. v24.2.0 is needed for python_requires from setuptools import Extension, setup from setuptools import find_packages # # We import other files that are siblings of this file as modules. In # the past, setuptools guaranteed that this directory was on the path # (typically, the working directory) but in a PEP517 world, that's no # longer guaranteed to be the case. setuptools provides a PEP517 # backend (``setuptools.build_meta:__legacy__``) that *does* guarantee # that, and we used it for a long time. But downstream packagers have begun # complaining about using it. So we futz with the path ourself. 
# (This is also an issue on Python3.11+ with $PYTHONSAFEPATH=1) sys.path.insert(0, os.path.dirname(__file__)) from _setuputils import read from _setuputils import read_version from _setuputils import PYPY, WIN from _setuputils import ConfiguringBuildExt from _setuputils import GeventClean from _setuputils import BuildFailed from _setuputils import cythonize1 from _setuputils import get_include_dirs from _setuputils import bool_from_environ # Environment variables that are intended to be used outside of our own # CI should be documented in ``installing_from_source.rst`` if WIN: # Make sure the env vars that make.cmd needs are set if not os.environ.get('PYTHON_EXE'): os.environ['PYTHON_EXE'] = 'pypy' if PYPY else 'python' if not os.environ.get('PYEXE'): os.environ['PYEXE'] = os.environ['PYTHON_EXE'] __version__ = read_version() from _setuplibev import build_extension as build_libev_extension from _setupares import ARES CORE = cythonize1(build_libev_extension()) # Modules that we cythonize for performance. # Be careful not to use simple names for these modules, # as the non-static symbols cython generates do not include # the module name. Thus an extension of 'gevent._queue' # results in symbols like 'PyInit__queue', which is the same # symbol used by the standard library _queue accelerator module. # The name of the .pxd file must match the local name of the accelerator # extension; however, sadly, the generated .c and .html files will # still use the same name as the .py source. SEMAPHORE = Extension(name="gevent._gevent_c_semaphore", sources=["src/gevent/_semaphore.py"], depends=['src/gevent/_gevent_c_semaphore.pxd'], include_dirs=get_include_dirs()) LOCAL = Extension(name="gevent._gevent_clocal", sources=["src/gevent/local.py"], depends=['src/gevent/_gevent_clocal.pxd'], include_dirs=get_include_dirs()) GREENLET = Extension(name="gevent._gevent_cgreenlet", sources=[ "src/gevent/greenlet.py", ], depends=[ 'src/gevent/_gevent_cgreenlet.pxd', 'src/gevent/_gevent_c_ident.pxd', 'src/gevent/_ident.py' ], include_dirs=get_include_dirs()) ABSTRACT_LINKABLE = Extension(name="gevent._gevent_c_abstract_linkable", sources=["src/gevent/_abstract_linkable.py"], depends=['src/gevent/_gevent_c_abstract_linkable.pxd'], include_dirs=get_include_dirs()) IDENT = Extension(name="gevent._gevent_c_ident", sources=["src/gevent/_ident.py"], depends=['src/gevent/_gevent_c_ident.pxd'], include_dirs=get_include_dirs()) IMAP = Extension(name="gevent._gevent_c_imap", sources=["src/gevent/_imap.py"], depends=['src/gevent/_gevent_c_imap.pxd'], include_dirs=get_include_dirs()) EVENT = Extension(name="gevent._gevent_cevent", sources=["src/gevent/event.py"], depends=['src/gevent/_gevent_cevent.pxd'], include_dirs=get_include_dirs()) QUEUE = Extension(name="gevent._gevent_cqueue", sources=["src/gevent/queue.py"], depends=['src/gevent/_gevent_cqueue.pxd'], include_dirs=get_include_dirs()) HUB_LOCAL = Extension(name="gevent._gevent_c_hub_local", sources=["src/gevent/_hub_local.py"], depends=['src/gevent/_gevent_c_hub_local.pxd'], include_dirs=get_include_dirs()) WAITER = Extension(name="gevent._gevent_c_waiter", sources=["src/gevent/_waiter.py"], depends=['src/gevent/_gevent_c_waiter.pxd'], include_dirs=get_include_dirs()) HUB_PRIMITIVES = Extension(name="gevent._gevent_c_hub_primitives", sources=["src/gevent/_hub_primitives.py"], depends=['src/gevent/_gevent_c_hub_primitives.pxd'], include_dirs=get_include_dirs()) GLT_PRIMITIVES = Extension(name="gevent._gevent_c_greenlet_primitives", 
sources=["src/gevent/_greenlet_primitives.py"], depends=['src/gevent/_gevent_c_greenlet_primitives.pxd'], include_dirs=get_include_dirs()) TRACER = Extension(name="gevent._gevent_c_tracer", sources=["src/gevent/_tracer.py"], depends=['src/gevent/_gevent_c_tracer.pxd'], include_dirs=get_include_dirs()) _to_cythonize = [ GLT_PRIMITIVES, HUB_PRIMITIVES, HUB_LOCAL, WAITER, GREENLET, TRACER, ABSTRACT_LINKABLE, SEMAPHORE, LOCAL, IDENT, IMAP, EVENT, QUEUE, ] EXT_MODULES = [ CORE, ARES, ABSTRACT_LINKABLE, SEMAPHORE, LOCAL, GREENLET, IDENT, IMAP, EVENT, QUEUE, HUB_LOCAL, WAITER, HUB_PRIMITIVES, GLT_PRIMITIVES, TRACER, ] if bool_from_environ('GEVENTSETUP_DISABLE_ARES'): print("c-ares module disabled, not building") EXT_MODULES.remove(ARES) LIBEV_CFFI_MODULE = 'src/gevent/libev/_corecffi_build.py:ffi' LIBUV_CFFI_MODULE = 'src/gevent/libuv/_corecffi_build.py:ffi' cffi_modules = [] if not WIN: # We can't properly handle (hah!) file-descriptors and # handle mapping on Windows/CFFI with libev, because the file needed, # libev_vfd.h, can't be included, linked, and used: it uses # Python API functions, and you're not supposed to do that from # CFFI code. Plus I could never get the libraries= line to ffi.compile() # correct to make linking work. # Also, we use the type `nlink_t`, which is not defined on Windows. cffi_modules.append( LIBEV_CFFI_MODULE ) cffi_modules.append(LIBUV_CFFI_MODULE) greenlet_requires = [ # We need to watch our greenlet version fairly carefully, # since we compile cython code that extends the greenlet object. # Binary compatibility would break if the greenlet struct changes. # (Which it did in 0.4.14 for Python 3.7 and again in 0.4.17; with # the release of 1.0a1 it began promising ABI stability with SemVer # so we can add an upper bound). # 1.1.0 is required for 3.10; it has a new ABI, but only on 1.1.0. # 1.1.3 is needed for 3.11, and supports everything 1.1.0 did. # 2.0.0 supports everything 1.1.3 did, but breaks the ABI in a way that hopefully # won't break again. # 3.0 is ABI compatible and adds support for Python 3.12 (but right # now it's RC pending additional testing, so we only require it on 3.12) 'greenlet >= 3.1.1 ; platform_python_implementation=="CPython"', ] # Note that we don't add cffi to install_requires, it's # optional. We tend to build and distribute wheels with the CFFI # modules built and they can be imported if CFFI is installed. # We need cffi 1.4.0 for new style callbacks; # we need cffi 1.11.3 (on CPython 3) to avoid test errors. # The exception is on Windows, where we want the libuv backend we distribute # to be the default, and that requires cffi; but don't try to install it # on PyPy or it messes up the build CFFI_DEP = "cffi >= 1.17.1 ; platform_python_implementation == 'CPython'" CFFI_REQUIRES = [ CFFI_DEP + " and sys_platform == 'win32'" ] install_requires = greenlet_requires + CFFI_REQUIRES + [ # For event notification. 'zope.event', # For event definitions, and our own interfaces; those should # ultimately be published, but at this writing only the event # interfaces are. 
'zope.interface', ] if PYPY: # These use greenlet/greenlet.h, which doesn't exist on PyPy EXT_MODULES.remove(LOCAL) EXT_MODULES.remove(GREENLET) EXT_MODULES.remove(SEMAPHORE) EXT_MODULES.remove(ABSTRACT_LINKABLE) # As of PyPy 5.10, this builds, but won't import (missing _Py_ReprEnter) EXT_MODULES.remove(CORE) # This uses PyWeakReference and doesn't compile on PyPy EXT_MODULES.remove(IDENT) _to_cythonize.remove(LOCAL) _to_cythonize.remove(GREENLET) _to_cythonize.remove(SEMAPHORE) _to_cythonize.remove(IDENT) _to_cythonize.remove(ABSTRACT_LINKABLE) EXT_MODULES.remove(IMAP) _to_cythonize.remove(IMAP) EXT_MODULES.remove(EVENT) _to_cythonize.remove(EVENT) EXT_MODULES.remove(QUEUE) _to_cythonize.remove(QUEUE) EXT_MODULES.remove(HUB_LOCAL) _to_cythonize.remove(HUB_LOCAL) EXT_MODULES.remove(WAITER) _to_cythonize.remove(WAITER) EXT_MODULES.remove(GLT_PRIMITIVES) _to_cythonize.remove(GLT_PRIMITIVES) EXT_MODULES.remove(HUB_PRIMITIVES) _to_cythonize.remove(HUB_PRIMITIVES) EXT_MODULES.remove(TRACER) _to_cythonize.remove(TRACER) for mod in _to_cythonize: EXT_MODULES.remove(mod) EXT_MODULES.append(cythonize1(mod)) del _to_cythonize ## Extras EXTRA_DNSPYTHON = [ # We're not currently compatible with 2.0, and dnspython 1.x isn't # compatible weth Python 3.10 because of the removal of ``collections.MutableMapping``. 'dnspython >= 1.16.0, < 2.0; python_version < "3.10"', 'idna; python_version < "3.10"', ] EXTRA_EVENTS = [ # No longer does anything, but the extra must stay around # to avoid breaking install scripts. # Remove this in 2021. ] EXTRA_PSUTIL_DEPS = [ # Versions of PyPy2 prior to 7.3.1 (maybe?) are incompatible with # psutil >= 5.6.4. 5.7.0 seems to work. # https://github.com/giampaolo/psutil/issues/1659 # PyPy on Windows can't build psutil, it fails to link with the missing symbol # PyErr_SetFromWindowsErr. 'psutil >= 5.7.0; sys_platform != "win32" or platform_python_implementation == "CPython"', ] EXTRA_MONITOR = [ ] + EXTRA_PSUTIL_DEPS EXTRA_RECOMMENDED = [ # We need this at runtime to use the libev-CFFI and libuv backends CFFI_DEP, ] + EXTRA_DNSPYTHON + EXTRA_EVENTS + EXTRA_MONITOR def make_long_description(): readme = read('README.rst') about = read('docs', '_about.rst') install = read('docs', 'install.rst') readme = readme.replace('.. include:: docs/_about.rst', about) readme = readme.replace('.. include:: docs/install.rst', install) return readme def run_setup(ext_modules): setup( name='gevent', version=__version__, description='Coroutine-based network library', long_description=make_long_description(), license='MIT', keywords='greenlet coroutine cooperative multitasking light threads monkey', author='Denis Bilenko', author_email='denis.bilenko@gmail.com', maintainer='Jason Madden', maintainer_email='jason@nextthought.com', url='http://www.gevent.org/', project_urls={ 'Bug Tracker': 'https://github.com/gevent/gevent/issues', 'Source Code': 'https://github.com/gevent/gevent/', 'Documentation': 'http://www.gevent.org', 'Changes': 'https://www.gevent.org/changelog.html', }, package_dir={'': 'src'}, packages=find_packages('src'), # Using ``include_package_data`` causes our generated ``.c`` # and ``.h`` files to be included in the installation. Those # aren't needed at runtime (the ``.html`` files generated by # Cython's annotation are much nicer to browse anyway, but we # don't want to include those either), and downstream # distributors have complained about them, so we don't want to # include them. 
Nor do we want to include ``.pyx`` or ``.pxd`` # files that aren't considered public; the only ``.pxd`` files # that ever offered the required Cython annotations to produce # stable APIs weere in the libev cext backend; all of the # internal optimizations provided by Cython compiling existing # ``.py`` files using a matching ``.pxd`` do not. Furthermore, # there are ABI issues that make distributing those extremely # fragile. So do not use ``include_package_data``, explicitly # spell out what we need. See https://github.com/gevent/gevent/issues/1568. package_data={ # For any package '': [ # Include files needed to run tests '*.pem', '*.crt', '*.txt', '*.key', # We have a few .py files that aren't technically in packages; # This one enables coverage for testing. 'coveragesite/*.py', ] }, ext_modules=ext_modules, cmdclass={ 'build_ext': ConfiguringBuildExt, 'clean': GeventClean, }, install_requires=install_requires, extras_require={ # Each extra intended for end users must be documented in install.rst 'dnspython': EXTRA_DNSPYTHON, 'events': EXTRA_EVENTS, 'monitor': EXTRA_MONITOR, 'recommended': EXTRA_RECOMMENDED, # End end-user extras 'docs': [ # our custom theme has problems on sphinx 7; # For now, we have switched to the furo theme. 'sphinx', 'furo', 'repoze.sphinx.autointerface', 'sphinxcontrib-programoutput', 'zope.schema', ], # To the extent possible, we should work to make sure # our tests run, at least a basic set, without any of # these extra dependencies (i.e., skip things when they are # missing). This helps serve as a smoketest for users. 'test': EXTRA_RECOMMENDED + [ # examples, called from tests, use this 'requests', # We don't run coverage on Windows, and pypy can't build it there # anyway (coveralls -> cryptopgraphy -> openssl). # coverage 5 needs coveralls 1.11 'coverage >= 5.0 ; sys_platform != "win32"', # leak checks. previously we had a hand-rolled version. 'objgraph', ], }, # It's always safe to pass the CFFI keyword, even if # cffi is not installed: it's just ignored in that case. cffi_modules=cffi_modules, zip_safe=False, test_suite="greentest.testrunner", classifiers=[ "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 3 :: Only", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Programming Language :: Python :: 3.13", "Programming Language :: Python :: Implementation :: CPython", "Programming Language :: Python :: Implementation :: PyPy", "Operating System :: MacOS :: MacOS X", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Topic :: Internet", "Topic :: Software Development :: Libraries :: Python Modules", "Intended Audience :: Developers", "Development Status :: 4 - Beta" ], python_requires=">=3.9", entry_points={ 'gevent.plugins.monkey.will_patch_all': [ "signal_os_incompat = gevent.monkey:_subscribe_signal_os", ], }, ) # Tools like pyroma expect the actual call to `setup` to be performed # at the top-level at import time, so don't stash it away behind 'if # __name__ == __main__' if os.getenv('READTHEDOCS'): # Sometimes RTD fails to put our virtualenv bin directory # on the PATH, meaning we can't run cython. Fix that. 
new_path = os.environ['PATH'] + os.pathsep + os.path.dirname(sys.executable) os.environ['PATH'] = new_path try: run_setup(EXT_MODULES) except BuildFailed: if ARES not in EXT_MODULES or not ARES.optional: raise sys.stderr.write('\nWARNING: The gevent.ares extension has been disabled.\n') EXT_MODULES.remove(ARES) run_setup(EXT_MODULES) gevent-24.11.1/src/000077500000000000000000000000001471441230600137415ustar00rootroot00000000000000gevent-24.11.1/src/gevent/000077500000000000000000000000001471441230600152315ustar00rootroot00000000000000gevent-24.11.1/src/gevent/__init__.py000066400000000000000000000065171471441230600173530ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. """ gevent is a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of libev event loop. See http://www.gevent.org/ for the documentation. .. versionchanged:: 1.3a2 Add the `config` object. """ from __future__ import absolute_import from collections import namedtuple _version_info = namedtuple('version_info', ('major', 'minor', 'micro', 'releaselevel', 'serial')) #: The programatic version identifier. The fields have (roughly) the #: same meaning as :data:`sys.version_info` #: .. deprecated:: 1.2 #: Use ``pkg_resources.parse_version(__version__)`` (or the equivalent #: ``packaging.version.Version(__version__)``). version_info = _version_info(20, 0, 0, 'dev', 0) # XXX: Remove me #: The human-readable PEP 440 version identifier. #: Use ``pkg_resources.parse_version(__version__)`` or #: ``packaging.version.Version(__version__)`` to get a machine-usable #: value. __version__ = '24.11.1' __all__ = [ 'Greenlet', 'GreenletExit', 'Timeout', 'config', # Added in 1.3a2 'fork', 'get_hub', 'getcurrent', 'getswitchinterval', 'idle', 'iwait', 'joinall', 'kill', 'killall', 'reinit', 'setswitchinterval', 'signal_handler', 'sleep', 'spawn', 'spawn_later', 'spawn_raw', 'wait', 'with_timeout', ] import sys if sys.platform == 'win32': # trigger WSAStartup call import socket # pylint:disable=unused-import,useless-suppression del socket # Floating point number, in number of seconds, # like time.time getswitchinterval = sys.getswitchinterval setswitchinterval = sys.setswitchinterval from gevent._config import config from gevent._hub_local import get_hub from gevent._hub_primitives import iwait_on_objects as iwait from gevent._hub_primitives import wait_on_objects as wait from gevent.greenlet import Greenlet, joinall, killall spawn = Greenlet.spawn spawn_later = Greenlet.spawn_later #: The singleton configuration object for gevent. from gevent.timeout import Timeout, with_timeout from gevent.hub import getcurrent, GreenletExit, spawn_raw, sleep, idle, kill, reinit try: from gevent.os import fork except ImportError: __all__.remove('fork') # This used to be available as gevent.signal; that broke in 1.1b4 but # a temporary alias was added (See # https://github.com/gevent/gevent/issues/648). It was ugly and complex and # caused confusion, so it was removed in 1.5. See https://github.com/gevent/gevent/issues/1529 from gevent.hub import signal as signal_handler # the following makes hidden imports visible to freezing tools like # py2exe. see https://github.com/gevent/gevent/issues/181 # This is not well maintained or tested, though, so it likely becomes # outdated on each major release. 
def __dependencies_for_freezing(): # pragma: no cover # pylint:disable=unused-import, import-outside-toplevel from gevent import core from gevent import resolver_thread from gevent import resolver_ares from gevent import socket as _socket from gevent import threadpool from gevent import thread from gevent import threading from gevent import select from gevent import subprocess import pprint import traceback import signal as _signal del __dependencies_for_freezing gevent-24.11.1/src/gevent/_abstract_linkable.py000066400000000000000000000543021471441230600214120ustar00rootroot00000000000000# -*- coding: utf-8 -*- # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False """ Internal module, support for the linkable protocol for "event" like objects. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys from gc import get_objects from greenlet import greenlet from greenlet import error as greenlet_error from gevent._compat import thread_mod_name from gevent._hub_local import get_hub_noargs as get_hub from gevent._hub_local import get_hub_if_exists from gevent.exceptions import InvalidSwitchError from gevent.exceptions import InvalidThreadUseError from gevent.timeout import Timeout locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None __all__ = [ 'AbstractLinkable', ] # Need the real get_ident. We're imported early enough during monkey-patching # that we can be sure nothing is monkey patched yet. _get_thread_ident = __import__(thread_mod_name).get_ident _allocate_thread_lock = __import__(thread_mod_name).allocate_lock class _FakeNotifier(object): __slots__ = ( 'pending', ) def __init__(self): self.pending = False def get_roots_and_hubs(): from gevent.hub import Hub # delay import return { x.parent: x for x in get_objects() # Make sure to only find hubs that have a loop # and aren't destroyed. If we don't do that, we can # get an old hub that no longer works leading to issues in # combined test cases. if isinstance(x, Hub) and x.loop is not None } class AbstractLinkable(object): # Encapsulates the standard parts of the linking and notifying # protocol common to both repeatable events (Event, Semaphore) and # one-time events (AsyncResult). # # With a few careful exceptions, instances of this object can only # be used from a single thread. The exception is that certain methods # may be used from multiple threads IFF: # # 1. They are documented as safe for that purpose; AND # 2a. This object is compiled with Cython and thus is holding the GIL # for the entire duration of the method; OR # 2b. A subclass ensures that a Python-level native thread lock is held # for the duration of the method; this is necessary in pure-Python mode. # The only known implementation of such # a subclass is for Semaphore. AND # 3. The subclass that calls ``capture_hub`` catches # and handles ``InvalidThreadUseError`` # # TODO: As of gevent 1.5, we use the same datastructures and almost # the same algorithm as Greenlet. See about unifying them more. __slots__ = ( 'hub', '_links', '_notifier', '_notify_all', '__weakref__' ) def __init__(self, hub=None): # Before this implementation, AsyncResult and Semaphore # maintained the order of notifications, but Event did not. # In gevent 1.3, before Semaphore extended this class, that # was changed to not maintain the order. It was done because # Event guaranteed to only call callbacks once (a set) but # AsyncResult had no such guarantees. 
When Semaphore was # changed to extend this class, it lost its ordering # guarantees. Unfortunately, that made it unfair. There are # rare cases that this can starve a greenlet # (https://github.com/gevent/gevent/issues/1487) and maybe # even lead to deadlock (not tested). # So in gevent 1.5 we go back to maintaining order. But it's # still important not to make duplicate calls, and it's also # important to avoid O(n^2) behaviour that can result from # naive use of a simple list due to the need to handle removed # links in the _notify_links loop. Cython has special support for # built-in sets, lists, and dicts, but not ordereddict. Rather than # use two data structures, or a dict({link: order}), we simply use a # list and remove objects as we go, keeping track of them so as not to # have duplicates called. This makes `unlink` O(n), but we can avoid # calling it in the common case in _wait_core (even so, the number of # waiters should usually be pretty small) self._links = [] self._notifier = None # This is conceptually a class attribute, defined here for ease of access in # cython. If it's true, when notifiers fire, all existing callbacks are called. # If its false, we only call callbacks as long as ready() returns true. self._notify_all = True # we don't want to do get_hub() here to allow defining module-level objects # without initializing the hub. However, for multiple-thread safety, as soon # as a waiting method is entered, even if it won't have to wait, we # need to grab the hub and assign ownership. But we don't want to grab one prematurely. # The example is three threads, the main thread and two worker threads; if we create # a Semaphore in the main thread but only use it in the two threads, if we had grabbed # the main thread's hub, the two worker threads would have a dependency on it, meaning that # if the main event loop is blocked, the worker threads might get blocked too. self.hub = hub def linkcount(self): # For testing: how many objects are linked to this one? return len(self._links) def ready(self): # Instances must define this raise NotImplementedError def rawlink(self, callback): """ Register a callback to call when this object is ready. *callback* will be called in the :class:`Hub `, so it must not use blocking gevent API. *callback* will be passed one argument: this instance. """ if not callable(callback): raise TypeError('Expected callable: %r' % (callback, )) self._links.append(callback) self._check_and_notify() def unlink(self, callback): """Remove the callback set by :meth:`rawlink`""" try: self._links.remove(callback) except ValueError: pass if not self._links and self._notifier is not None and self._notifier.pending: # If we currently have one queued, but not running, de-queue it. # This will break a reference cycle. # (self._notifier -> self._notify_links -> self) # If it's actually running, though, (and we're here as a result of callbacks) # we don't want to change it; it needs to finish what its doing # so we don't attempt to start a fresh one or swap it out from underneath the # _notify_links method. self._notifier.stop() def _allocate_lock(self): return _allocate_thread_lock() def _getcurrent(self): return getcurrent() # pylint:disable=undefined-variable def _get_thread_ident(self): return _get_thread_ident() def _capture_hub(self, create): # Subclasses should call this as the first action from any # public method that could, in theory, block and switch # to the hub. This may release the GIL. 
It may # raise InvalidThreadUseError if the result would # First, detect a dead hub and drop it. while 1: my_hub = self.hub if my_hub is None: break if my_hub.dead: # dead is a property, could release GIL # back, holding GIL if self.hub is my_hub: self.hub = None my_hub = None break else: break if self.hub is None: # This next line might release the GIL. current_hub = get_hub() if create else get_hub_if_exists() # We have the GIL again. Did anything change? If so, # we lost the race. if self.hub is None: self.hub = current_hub if self.hub is not None and self.hub.thread_ident != _get_thread_ident(): raise InvalidThreadUseError( self.hub, get_hub_if_exists(), getcurrent() # pylint:disable=undefined-variable ) return self.hub def _check_and_notify(self): # If this object is ready to be notified, begin the process. if self.ready() and self._links and not self._notifier: hub = None try: hub = self._capture_hub(False) # Must create, we need it. except InvalidThreadUseError: # The current hub doesn't match self.hub. That's OK, # we still want to start the notifier in the thread running # self.hub (because the links probably contains greenlet.switch # calls valid only in that hub) pass if hub is not None: self._notifier = hub.loop.run_callback(self._notify_links, []) else: # Hmm, no hub. We must be the only thing running. Then its OK # to just directly call the callbacks. self._notifier = 1 try: self._notify_links([]) finally: self._notifier = None def _notify_link_list(self, links): # The core of the _notify_links method to notify # links in order. Lets the ``links`` list be mutated, # and only notifies up to the last item in the list, in case # objects are added to it. if not links: # HMM. How did we get here? Running two threads at once? # Seen once on Py27/Win/Appveyor # https://ci.appveyor.com/project/jamadden/gevent/builds/36875645/job/9wahj9ft4h4qa170 return [] only_while_ready = not self._notify_all final_link = links[-1] done = set() # of ids hub = self.hub if self.hub is not None else get_hub_if_exists() unswitched = [] while links: # remember this can be mutated if only_while_ready and not self.ready(): break link = links.pop(0) # Cython optimizes using list internals id_link = id(link) if id_link not in done: # XXX: JAM: What was I thinking? This doesn't make much sense, # there's a good chance `link` will be deallocated, and its id() will # be free to be reused. This also makes looping difficult, you have to # create new functions inside a loop rather than just once outside the loop. done.add(id_link) try: self._drop_lock_for_switch_out() try: link(self) except greenlet_error: # couldn't switch to a greenlet, we must be # running in a different thread. back on the list it goes for next time. unswitched.append(link) finally: self._acquire_lock_for_switch_in() except: # pylint:disable=bare-except # We're running in the hub, errors must not escape. if hub is not None: hub.handle_error((link, self), *sys.exc_info()) else: import traceback traceback.print_exc() if link is final_link: break return unswitched def _notify_links(self, arrived_while_waiting): # This method must hold the GIL, or be guarded with the lock that guards # this object. Thus, while we are notifying objects, an object from another # thread simply cannot arrive and mutate ``_links`` or ``arrived_while_waiting`` # ``arrived_while_waiting`` is a list of greenlet.switch methods # to call. 
These were objects that called wait() while we were processing, # and which would have run *before* those that had actually waited # and blocked. Instead of returning True immediately, we add them to this # list so they wait their turn. # We release self._notifier here when done invoking links. # The object itself becomes false in a boolean way as soon # as this method returns. notifier = self._notifier if notifier is None: # XXX: How did we get here? self._check_and_notify() return # Early links are allowed to remove later links, and links # are allowed to add more links, thus we must not # make a copy of our the ``_links`` list, we must traverse it and # mutate in place. # # We were ready() at the time this callback was scheduled; we # may not be anymore, and that status may change during # callback processing. Some of our subclasses (Event) will # want to notify everyone who was registered when the status # became true that it was once true, even though it may not be # any more. In that case, we must not keep notifying anyone that's # newly added after that, even if we go ready again. try: unswitched = self._notify_link_list(self._links) # Now, those that arrived after we had begun the notification # process. Follow the same rules, stop with those that are # added so far to prevent starvation. if arrived_while_waiting: un2 = self._notify_link_list(arrived_while_waiting) unswitched.extend(un2) # Anything left needs to go back on the main list. self._links.extend(arrived_while_waiting) finally: # We should not have created a new notifier even if callbacks # released us because we loop through *all* of our links on the # same callback while self._notifier is still true. assert self._notifier is notifier, (self._notifier, notifier) self._notifier = None # TODO: Maybe we should intelligently reset self.hub to # free up thread affinity? In case of a pathological situation where # one object was used from one thread once & first, but usually is # used by another thread. # # BoundedSemaphore does this. # Now we may be ready or not ready. If we're ready, which # could have happened during the last link we called, then we # must have more links than we started with. We need to schedule the # wakeup. self._check_and_notify() if unswitched: self._handle_unswitched_notifications(unswitched) def _handle_unswitched_notifications(self, unswitched): # Given a list of callable objects that raised # ``greenlet.error`` when we called them: If we can determine # that it is a parked greenlet (the callablle is a # ``greenlet.switch`` method) and we can determine the hub # that the greenlet belongs to (either its parent, or, in the # case of a main greenlet, find a hub with the same parent as # this greenlet object) then: # Move this to be a callback in that thread. # (This relies on holding the GIL *or* ``Hub.loop.run_callback`` being # thread-safe! Note that the CFFI implementations are definitely # NOT thread-safe. TODO: Make them? Or an alternative?) # # Otherwise, print some error messages. # TODO: Inline this for individual links. That handles the # "only while ready" case automatically. Be careful about locking in that case. # # TODO: Add a 'strict' mode that prevents doing this dance, since it's # inherently not safe. root_greenlets = None printed_tb = False only_while_ready = not self._notify_all while unswitched: if only_while_ready and not self.ready(): self.__print_unswitched_warning(unswitched, printed_tb) break link = unswitched.pop(0) hub = None # Also serves as a "handled?" 
flag # Is it a greenlet.switch method? if (getattr(link, '__name__', None) == 'switch' and isinstance(getattr(link, '__self__', None), greenlet)): glet = link.__self__ parent = glet.parent while parent is not None: if hasattr(parent, 'loop'): # Assuming the hub. hub = glet.parent break parent = glet.parent if hub is None: if root_greenlets is None: root_greenlets = get_roots_and_hubs() hub = root_greenlets.get(glet) if hub is not None and hub.loop is not None: hub.loop.run_callback_threadsafe(link, self) if hub is None or hub.loop is None: # We couldn't handle it self.__print_unswitched_warning(link, printed_tb) printed_tb = True def __print_unswitched_warning(self, link, printed_tb): print('gevent: error: Unable to switch to greenlet', link, 'from', self, '; crossing thread boundaries is not allowed.', file=sys.stderr) if not printed_tb: printed_tb = True print( 'gevent: error: ' 'This is a result of using gevent objects from multiple threads,', 'and is a bug in the calling code.', file=sys.stderr) import traceback traceback.print_stack() def _quiet_unlink_all(self, obj): if obj is None: return self.unlink(obj) if self._notifier is not None and self._notifier.args: try: self._notifier.args[0].remove(obj) except ValueError: pass def __wait_to_be_notified(self, rawlink): # pylint:disable=too-many-branches resume_this_greenlet = getcurrent().switch # pylint:disable=undefined-variable if rawlink: self.rawlink(resume_this_greenlet) else: self._notifier.args[0].append(resume_this_greenlet) try: self._switch_to_hub(self.hub) # If we got here, we were automatically unlinked already. resume_this_greenlet = None finally: self._quiet_unlink_all(resume_this_greenlet) def _switch_to_hub(self, the_hub): self._drop_lock_for_switch_out() try: result = the_hub.switch() finally: self._acquire_lock_for_switch_in() if result is not self: # pragma: no cover raise InvalidSwitchError( 'Invalid switch into %s.wait(): %r' % ( self.__class__.__name__, result, ) ) def _acquire_lock_for_switch_in(self): return def _drop_lock_for_switch_out(self): return def _wait_core(self, timeout, catch=Timeout): """ The core of the wait implementation, handling switching and linking. This method is NOT safe to call from multiple threads. ``self.hub`` must be initialized before entering this method. The hub that is set is considered the owner and cannot be changed while this method is running. It must only be called from the thread where ``self.hub`` is the current hub. If *catch* is set to ``()``, a timeout that elapses will be allowed to be raised. :return: A true value if the wait succeeded without timing out. That is, a true return value means we were notified and control resumed in this greenlet. """ with Timeout._start_new_or_dummy(timeout) as timer: # Might release # We already checked above (_wait()) if we're ready() try: self.__wait_to_be_notified( True,# Use rawlink() ) return True except catch as ex: if ex is not timer: raise # test_set_and_clear and test_timeout in test_threading # rely on the exact return values, not just truthish-ness return False def _wait_return_value(self, waited, wait_success): # pylint:disable=unused-argument # Subclasses should override this to return a value from _wait. # By default we return None. return None # pragma: no cover all extent subclasses override def _wait(self, timeout=None): # Watch where we could potentially release the GIL. self._capture_hub(True) # Must create, we must have an owner. Might release if self.ready(): # *might* release, if overridden in Python. 
result = self._wait_return_value(False, False) # pylint:disable=assignment-from-none if self._notifier: # We're already notifying waiters; one of them must have run # and switched to this greenlet, which arrived here. Alternately, # we could be in a separate thread (but we're holding the GIL/object lock) self.__wait_to_be_notified(False) # Use self._notifier.args[0] instead of self.rawlink return result gotit = self._wait_core(timeout) return self._wait_return_value(True, gotit) def _at_fork_reinit(self): """ This method was added in Python 3.9 and is called by logging.py ``_after_at_fork_child_reinit_locks`` on Lock objects. It is also called from threading.py, ``_after_fork`` in ``_reset_internal_locks``, and that can hit ``Event`` objects. Subclasses should reset themselves to an initial state. This includes unlocking/releasing, if possible. This method detaches from the previous hub and drops any existing notifier. """ self.hub = None self._notifier = None def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__abstract_linkable') gevent-24.11.1/src/gevent/_compat.h000066400000000000000000000045471471441230600170360ustar00rootroot00000000000000#ifndef _COMPAT_H #define _COMPAT_H /** * Compatibility helpers for things that are better handled at C * compilation time rather than Cython code generation time. */ #include #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wunused-function" #endif #ifdef __cplusplus extern "C" { #endif #if PY_VERSION_HEX >= 0x30B00A6 # define GEVENT_PY311 1 #else # define GEVENT_PY311 0 # define _PyCFrame CFrame #endif #include #if PY_MAJOR_VERSION >= 3 && PY_MINOR_VERSION < 9 /* these were added in 3.9, though they officially became stable in 3.10 */ /* the official versions of these functions return strong references, so we need to increment the refcount before returning, not just to match the official functions, but to match what Cython expects an API like this to return. Otherwise we get crashes. */ static PyObject* PyFrame_GetBack(PyFrameObject* frame) { PyObject* result = (PyObject*)((PyFrameObject*)frame)->f_back; Py_XINCREF(result); return result; } static PyObject* PyFrame_GetCode(PyFrameObject* frame) { PyObject* result = (PyObject*)((PyFrameObject*)frame)->f_code; /* There is always code! */ Py_INCREF(result); return result; } #endif /* support 3.8 and below. */ /** Unlike PyFrame_GetBack, which can return NULL, this method is guaranteed to return a new reference to an object. The object is either a frame object or None. This is necessary to help Cython deal correctly with reference counting. (There are other ways of dealing with this having to do with exactly how variables/return types are declared IIRC, but this is the most straightforward. Still, it is critical that the cython declaration of this function use ``object`` as its return type.) */ static PyObject* Gevent_PyFrame_GetBack(PyObject* frame) { PyObject* back = (PyObject*)PyFrame_GetBack((PyFrameObject*)frame); if (back) { return back; } Py_RETURN_NONE; } /* These are just for typing purposes to appease the compiler. 
*/ static int Gevent_PyFrame_GetLineNumber(PyObject* o) { return PyFrame_GetLineNumber((PyFrameObject*)o); } static PyObject* Gevent_PyFrame_GetCode(PyObject* o) { return (PyObject*)PyFrame_GetCode((PyFrameObject*)o); } #ifdef __cplusplus } #ifdef __clang__ #pragma clang diagnostic pop #endif #endif #endif /* _COMPAT_H */ gevent-24.11.1/src/gevent/_compat.py000066400000000000000000000053661471441230600172370ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ internal gevent python 2/python 3 bridges. Not for external use. """ from __future__ import print_function, absolute_import, division ## Important: This module should generally not have any other gevent ## imports (the exception is _util_py2) import sys import os PY39 = sys.version_info[:2] >= (3, 9) PY311 = sys.version_info[:2] >= (3, 11) PY312 = sys.version_info[:2] >= (3, 12) PY313 = sys.version_info[:2] >= (3, 13) PYPY = hasattr(sys, 'pypy_version_info') WIN = sys.platform.startswith("win") LINUX = sys.platform.startswith('linux') OSX = MAC = sys.platform == 'darwin' PURE_PYTHON = PYPY or os.getenv('PURE_PYTHON') ## Types string_types = (str,) integer_types = (int,) text_type = str native_path_types = (str, bytes) thread_mod_name = '_thread' hostname_types = tuple(set(string_types + (bytearray, bytes))) def NativeStrIO(): import io return io.BytesIO() if str is bytes else io.StringIO() from abc import ABC # pylint:disable=unused-import ## Exceptions def reraise(t, value, tb=None): # pylint:disable=unused-argument if value.__traceback__ is not tb and tb is not None: raise value.with_traceback(tb) raise value def exc_clear(): pass ## import locks try: # In Python 3.4 and newer in CPython and PyPy3, # imp.acquire_lock and imp.release_lock are delegated to # '_imp'. (Which is also used by importlib.) 'imp' itself is # deprecated. Avoid that warning. import _imp as imp except ImportError: import imp # pylint:disable=deprecated-module imp_acquire_lock = imp.acquire_lock imp_release_lock = imp.release_lock ## Functions iteritems = dict.items itervalues = dict.values xrange = range izip = zip ## The __fspath__ protocol from os import PathLike # pylint:disable=unused-import from os import fspath _fspath = fspath from os import fsencode # pylint:disable=unused-import from os import fsdecode # pylint:disable=unused-import ## Clocks # Python 3.3+ (PEP 418) from time import perf_counter from time import get_clock_info from time import monotonic perf_counter = perf_counter monotonic = monotonic get_clock_info = get_clock_info ## Monitoring def get_this_psutil_process(): # Depends on psutil. Defer the import until needed, who knows what # it imports (psutil imports subprocess which on Python 3 imports # selectors. This can expose issues with monkey-patching.) # Returns a freshly queried object each time. try: from psutil import Process, AccessDenied # Make sure it works (why would we be denied access to our own process?) try: proc = Process() proc.memory_full_info() except AccessDenied: # pragma: no cover proc = None except ImportError: proc = None return proc gevent-24.11.1/src/gevent/_config.py000066400000000000000000000516641471441230600172230ustar00rootroot00000000000000# Copyright (c) 2018 gevent. See LICENSE for details. """ gevent tunables. This should be used as ``from gevent import config``. That variable is an object of :class:`Config`. .. versionadded:: 1.3a2 .. versionchanged:: 22.08.0 Invoking this module like ``python -m gevent._config`` will print a help message about available configuration properties. 
This is handy to quickly look for environment variables. """ from __future__ import print_function, absolute_import, division import importlib import os import textwrap from gevent._compat import string_types from gevent._compat import WIN __all__ = [ 'config', ] ALL_SETTINGS = [] class SettingType(type): # pylint:disable=bad-mcs-classmethod-argument def __new__(cls, name, bases, cls_dict): if name == 'Setting': return type.__new__(cls, name, bases, cls_dict) cls_dict["order"] = len(ALL_SETTINGS) if 'name' not in cls_dict: cls_dict['name'] = name.lower() if 'environment_key' not in cls_dict: cls_dict['environment_key'] = 'GEVENT_' + cls_dict['name'].upper() new_class = type.__new__(cls, name, bases, cls_dict) new_class.fmt_desc(cls_dict.get("desc", "")) new_class.__doc__ = new_class.desc ALL_SETTINGS.append(new_class) if new_class.document: setting_name = cls_dict['name'] def getter(self): return self.settings[setting_name].get() def setter(self, value): # pragma: no cover # The setter should never be hit, Config has a # __setattr__ that would override. But for the sake # of consistency we provide one. self.settings[setting_name].set(value) prop = property(getter, setter, doc=new_class.__doc__) setattr(Config, cls_dict['name'], prop) return new_class def fmt_desc(cls, desc): desc = textwrap.dedent(desc).strip() if hasattr(cls, 'shortname_map'): desc += ( "\n\nThis is an importable value. It can be " "given as a string naming an importable object, " "or a list of strings in preference order and the first " "successfully importable object will be used. (Separate values " "in the environment variable with commas.) " "It can also be given as the callable object itself (in code). " ) if cls.shortname_map: desc += "Shorthand names for default objects are %r" % (list(cls.shortname_map),) if getattr(cls.validate, '__doc__'): desc += '\n\n' + textwrap.dedent(cls.validate.__doc__).strip() if isinstance(cls.default, str) and hasattr(cls, 'shortname_map'): default = "`%s`" % (cls.default,) else: default = "`%r`" % (cls.default,) desc += "\n\nThe default value is %s" % (default,) desc += ("\n\nThe environment variable ``%s`` " "can be used to control this." % (cls.environment_key,)) setattr(cls, "desc", desc) return desc def validate_invalid(value): raise ValueError("Not a valid value: %r" % (value,)) def validate_bool(value): """ This is a boolean value. In the environment variable, it may be given as ``1``, ``true``, ``on`` or ``yes`` for `True`, or ``0``, ``false``, ``off``, or ``no`` for `False`. """ if isinstance(value, string_types): value = value.lower().strip() if value in ('1', 'true', 'on', 'yes'): value = True elif value in ('0', 'false', 'off', 'no') or not value: value = False else: raise ValueError("Invalid boolean string: %r" % (value,)) return bool(value) def validate_anything(value): return value convert_str_value_as_is = validate_anything class Setting(object): name = None value = None validate = staticmethod(validate_invalid) default = None environment_key = None document = True desc = """\ A long ReST description. The first line should be a single sentence. """ def _convert(self, value): if isinstance(value, string_types): return value.split(',') return value def _default(self): result = os.environ.get(self.environment_key, self.default) result = self._convert(result) return result def get(self): # If we've been specifically set, return it if 'value' in self.__dict__: return self.value # Otherwise, read from the environment and reify # so we return consistent results. 
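        # Note (descriptive comment, not original source): after this first
        # read the result is cached on the instance, so later changes to the
        # environment variable are not observed; only an explicit ``set()``
        # (or assignment through the ``Config`` object) replaces the cached
        # value.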
self.value = self.validate(self._default()) return self.value def set(self, val): self.value = self.validate(self._convert(val)) Setting = SettingType('Setting', (Setting,), dict(Setting.__dict__)) def make_settings(): """ Return fresh instances of all classes defined in `ALL_SETTINGS`. """ settings = {} for setting_kind in ALL_SETTINGS: setting = setting_kind() assert setting.name not in settings settings[setting.name] = setting return settings class Config(object): """ Global configuration for gevent. There is one instance of this object at ``gevent.config``. If you are going to make changes in code, instead of using the documented environment variables, you need to make the changes before using any parts of gevent that might need those settings. For example:: >>> from gevent import config >>> config.fileobject = 'thread' >>> from gevent import fileobject >>> fileobject.FileObject.__name__ 'FileObjectThread' .. versionadded:: 1.3a2 """ def __init__(self): self.settings = make_settings() def __getattr__(self, name): if name not in self.settings: raise AttributeError("No configuration setting for: %r" % name) return self.settings[name].get() def __setattr__(self, name, value): if name != "settings" and name in self.settings: self.set(name, value) else: super(Config, self).__setattr__(name, value) def set(self, name, value): if name not in self.settings: raise AttributeError("No configuration setting for: %r" % name) self.settings[name].set(value) def __dir__(self): return list(self.settings) def print_help(self): for k, v in self.settings.items(): print(k) print(textwrap.indent(v.__doc__.lstrip(), ' ' * 4)) print() class ImportableSetting(object): def _import_one_of(self, candidates): assert isinstance(candidates, list) if not candidates: raise ImportError('Cannot import from empty list') for item in candidates[:-1]: try: return self._import_one(item) except ImportError: pass return self._import_one(candidates[-1]) def _import_one(self, path, _MISSING=object()): if not isinstance(path, string_types): return path if '.' not in path or '/' in path: raise ImportError("Cannot import %r. " "Required format: [package.]module.class. " "Or choose from %r" % (path, list(self.shortname_map))) module, item = path.rsplit('.', 1) module = importlib.import_module(module) x = getattr(module, item, _MISSING) if x is _MISSING: raise ImportError('Cannot import %r from %r' % (item, module)) return x shortname_map = {} def validate(self, value): if isinstance(value, type): return value return self._import_one_of([self.shortname_map.get(x, x) for x in value]) def get_options(self): result = {} for name, val in self.shortname_map.items(): try: result[name] = self._import_one(val) except ImportError as e: result[name] = e return result class BoolSettingMixin(object): validate = staticmethod(validate_bool) # Don't do string-to-list conversion. _convert = staticmethod(convert_str_value_as_is) class IntSettingMixin(object): # Don't do string-to-list conversion. def _convert(self, value): if value: return int(value) validate = staticmethod(validate_anything) class _PositiveValueMixin(object): def validate(self, value): if value is not None and value <= 0: raise ValueError("Must be positive") return value class FloatSettingMixin(_PositiveValueMixin): def _convert(self, value): if value: return float(value) class ByteCountSettingMixin(_PositiveValueMixin): _MULTIPLES = { # All keys must be the same size. 
'kb': 1024, 'mb': 1024 * 1024, 'gb': 1024 * 1024 * 1024, } _SUFFIX_SIZE = 2 def _convert(self, value): if not value or not isinstance(value, str): return value value = value.lower() for s, m in self._MULTIPLES.items(): if value[-self._SUFFIX_SIZE:] == s: return int(value[:-self._SUFFIX_SIZE]) * m return int(value) class Resolver(ImportableSetting, Setting): desc = """\ The callable that will be used to create :attr:`gevent.hub.Hub.resolver`. See :doc:`dns` for more information. """ default = [ 'thread', 'dnspython', 'ares', 'block', ] shortname_map = { 'ares': 'gevent.resolver.ares.Resolver', 'thread': 'gevent.resolver.thread.Resolver', 'block': 'gevent.resolver.blocking.Resolver', 'dnspython': 'gevent.resolver.dnspython.Resolver', } class Threadpool(ImportableSetting, Setting): desc = """\ The kind of threadpool we use. """ default = 'gevent.threadpool.ThreadPool' class ThreadpoolIdleTaskTimeout(FloatSettingMixin, Setting): document = True name = 'threadpool_idle_task_timeout' environment_key = 'GEVENT_THREADPOOL_IDLE_TASK_TIMEOUT' desc = """\ How long threads in the default threadpool (used for DNS by default) are allowed to be idle before exiting. Use -1 for no timeout. .. versionadded:: 22.08.0 """ # This value is picked pretty much arbitrarily. # We want to balance performance (keeping threads around) # with memory/cpu usage (letting threads go). default = 5.0 class Loop(ImportableSetting, Setting): desc = """\ The kind of the loop we use. On Windows, this defaults to libuv, while on other platforms it defaults to libev. """ default = [ 'libev-cext', 'libev-cffi', 'libuv-cffi', ] if not WIN else [ 'libuv-cffi', 'libev-cext', 'libev-cffi', ] shortname_map = { # pylint:disable=dict-init-mutate 'libev-cext': 'gevent.libev.corecext.loop', 'libev-cffi': 'gevent.libev.corecffi.loop', 'libuv-cffi': 'gevent.libuv.loop.loop', } shortname_map['libuv'] = shortname_map['libuv-cffi'] class FormatContext(ImportableSetting, Setting): name = 'format_context' # using pprint.pformat can override custom __repr__ methods on dict/list # subclasses, which can be a security concern default = 'pprint.saferepr' class LibevBackend(Setting): name = 'libev_backend' environment_key = 'GEVENT_BACKEND' desc = """\ The backend for libev, such as 'select' """ default = None validate = staticmethod(validate_anything) class FileObject(ImportableSetting, Setting): desc = """\ The kind of ``FileObject`` we will use. See :mod:`gevent.fileobject` for a detailed description. """ environment_key = 'GEVENT_FILE' default = [ 'posix', 'thread', ] shortname_map = { 'thread': 'gevent._fileobjectcommon.FileObjectThread', 'posix': 'gevent._fileobjectposix.FileObjectPosix', 'block': 'gevent._fileobjectcommon.FileObjectBlock' } class WatchChildren(BoolSettingMixin, Setting): desc = """\ Should we *not* watch children with the event loop watchers? This is an advanced setting. See :mod:`gevent.os` for a detailed description. """ name = 'disable_watch_children' environment_key = 'GEVENT_NOWAITPID' default = False class TraceMalloc(IntSettingMixin, Setting): name = 'trace_malloc' environment_key = 'PYTHONTRACEMALLOC' default = False desc = """\ Should FFI objects track their allocation? This is only useful for low-level debugging. On Python 3, this environment variable is built in to the interpreter, and it may also be set with the ``-X tracemalloc`` command line argument. On Python 2, gevent interprets this argument and adds extra tracking information for FFI objects. 
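
    For example, one way to turn this on from the environment (a sketch;
    ``my_gevent_app.py`` is a placeholder)::

        PYTHONTRACEMALLOC=1 python my_gevent_app.py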
""" class TrackGreenletTree(BoolSettingMixin, Setting): name = 'track_greenlet_tree' environment_key = 'GEVENT_TRACK_GREENLET_TREE' default = True desc = """\ Should `Greenlet` objects track their spawning tree? Setting this to a false value will make spawning `Greenlet` objects and using `spawn_raw` faster, but the ``spawning_greenlet``, ``spawn_tree_locals`` and ``spawning_stack`` will not be captured. Setting this to a false value can also reduce memory usage because capturing the stack captures some information about Python frames. .. versionadded:: 1.3b1 """ ## Monitoring settings # All env keys should begin with GEVENT_MONITOR class MonitorThread(BoolSettingMixin, Setting): name = 'monitor_thread' environment_key = 'GEVENT_MONITOR_THREAD_ENABLE' default = False desc = """\ Should each hub start a native OS thread to monitor for problems? Such a thread will periodically check to see if the event loop is blocked for longer than `max_blocking_time`, producing output on the hub's exception stream (stderr by default) if it detects this condition. If this setting is true, then this thread will be created the first time the hub is switched to, or you can call :meth:`gevent.hub.Hub.start_periodic_monitoring_thread` at any time to create it (from the same thread that will run the hub). That function will return an instance of :class:`gevent.events.IPeriodicMonitorThread` to which you can add your own monitoring functions. That function also emits an event of :class:`gevent.events.PeriodicMonitorThreadStartedEvent`. .. seealso:: `max_blocking_time` .. versionadded:: 1.3b1 """ class MaxBlockingTime(FloatSettingMixin, Setting): name = 'max_blocking_time' # This environment key doesn't follow the convention because it's # meant to match a key used by existing projects environment_key = 'GEVENT_MAX_BLOCKING_TIME' default = 0.1 desc = """\ If the `monitor_thread` is enabled, this is approximately how long (in seconds) the event loop will be allowed to block before a warning is issued. This function depends on using `greenlet.settrace`, so installing your own trace function after starting the monitoring thread will cause this feature to misbehave unless you call the function returned by `greenlet.settrace`. If you install a tracing function *before* the monitoring thread is started, it will still be called. .. note:: In the unlikely event of creating and using multiple different gevent hubs in the same native thread in a short period of time, especially without destroying the hubs, false positives may be reported. .. versionadded:: 1.3b1 """ class MonitorMemoryPeriod(FloatSettingMixin, Setting): name = 'memory_monitor_period' environment_key = 'GEVENT_MONITOR_MEMORY_PERIOD' default = 5 desc = """\ If `monitor_thread` is enabled, this is approximately how long (in seconds) we will go between checking the processes memory usage. Checking the memory usage is relatively expensive on some operating systems, so this should not be too low. gevent will place a floor value on it. """ class MonitorMemoryMaxUsage(ByteCountSettingMixin, Setting): name = 'max_memory_usage' environment_key = 'GEVENT_MONITOR_MEMORY_MAX' default = None desc = """\ If `monitor_thread` is enabled, then if memory usage exceeds this amount (in bytes), events will be emitted. See `gevent.events`. In the environment variable, you can use a suffix of 'kb', 'mb' or 'gb' to specify the value in kilobytes, megabytes or gigibytes. There is no default value for this setting. If you wish to cap memory usage, you must choose a value. 
""" # The ares settings are all interpreted by # gevent/resolver/ares.pyx, so we don't do # any validation here. class AresSettingMixin(object): document = False @property def kwarg_name(self): return self.name[5:] validate = staticmethod(validate_anything) _convert = staticmethod(convert_str_value_as_is) class AresFlags(AresSettingMixin, Setting): name = 'ares_flags' default = None environment_key = 'GEVENTARES_FLAGS' class AresTimeout(AresSettingMixin, Setting): document = True name = 'ares_timeout' default = None environment_key = 'GEVENTARES_TIMEOUT' desc = """\ .. deprecated:: 1.3a2 Prefer the :attr:`resolver_timeout` setting. If both are set, the results are not defined. """ class AresTries(AresSettingMixin, Setting): name = 'ares_tries' default = None environment_key = 'GEVENTARES_TRIES' class AresNdots(AresSettingMixin, Setting): name = 'ares_ndots' default = None environment_key = 'GEVENTARES_NDOTS' class AresUDPPort(AresSettingMixin, Setting): name = 'ares_udp_port' default = None environment_key = 'GEVENTARES_UDP_PORT' class AresTCPPort(AresSettingMixin, Setting): name = 'ares_tcp_port' default = None environment_key = 'GEVENTARES_TCP_PORT' class AresServers(AresSettingMixin, Setting): document = True name = 'ares_servers' default = None environment_key = 'GEVENTARES_SERVERS' desc = """\ A list of strings giving the IP addresses of nameservers for the ares resolver. In the environment variable, these strings are separated by commas. .. deprecated:: 1.3a2 Prefer the :attr:`resolver_nameservers` setting. If both are set, the results are not defined. """ # Generic nameservers, works for dnspython and ares. class ResolverNameservers(AresSettingMixin, Setting): document = True name = 'resolver_nameservers' default = None environment_key = 'GEVENT_RESOLVER_NAMESERVERS' desc = """\ A list of strings giving the IP addresses of nameservers for the (non-system) resolver. In the environment variable, these strings are separated by commas. .. rubric:: Resolver Behaviour * blocking Ignored * Threaded Ignored * dnspython If this setting is not given, the dnspython resolver will load nameservers to use from ``/etc/resolv.conf`` or the Windows registry. This setting replaces any nameservers read from those means. Note that the file and registry are still read for other settings. .. caution:: dnspython does not validate the members of the list. An improper address (such as a hostname instead of IP) has undefined results, including hanging the process. * ares Similar to dnspython, but with more platform and compile-time options. ares validates that the members of the list are valid addresses. """ # Normal string-to-list rules. But still validate_anything. _convert = Setting._convert # TODO: In the future, support reading a resolv.conf file # *other* than /etc/resolv.conf, and do that both on Windows # and other platforms. Also offer the option to disable the system # configuration entirely. @property def kwarg_name(self): return 'servers' # Generic timeout, works for dnspython and ares class ResolverTimeout(FloatSettingMixin, AresSettingMixin, Setting): document = True name = 'resolver_timeout' environment_key = 'GEVENT_RESOLVER_TIMEOUT' desc = """\ The total amount of time that the DNS resolver will spend making queries. Only the ares and dnspython resolvers support this. .. versionadded:: 1.3a2 """ @property def kwarg_name(self): return 'timeout' config = Config() # Go ahead and attempt to import the loop when this class is # instantiated. The hub won't work if the loop can't be found. 
This # can solve problems with the class being imported from multiple # threads at once, leading to one of the imports failing. # factories are themselves handled lazily. See #687. # Don't cache it though, in case the user re-configures through the # API. try: Loop().get() except ImportError: # pragma: no cover pass if __name__ == '__main__': config.print_help() gevent-24.11.1/src/gevent/_ffi/000077500000000000000000000000001471441230600161345ustar00rootroot00000000000000gevent-24.11.1/src/gevent/_ffi/__init__.py000066400000000000000000000007551471441230600202540ustar00rootroot00000000000000""" Internal helpers for FFI implementations. """ from __future__ import print_function, absolute_import import os import sys def _dbg(*args, **kwargs): # pylint:disable=unused-argument pass #_dbg = print def _pid_dbg(*args, **kwargs): kwargs['file'] = sys.stderr print(os.getpid(), *args, **kwargs) CRITICAL = 1 ERROR = 3 DEBUG = 5 TRACE = 9 GEVENT_DEBUG_LEVEL = vars()[os.getenv("GEVENT_DEBUG", 'CRITICAL').upper()] if GEVENT_DEBUG_LEVEL >= TRACE: _dbg = _pid_dbg gevent-24.11.1/src/gevent/_ffi/alloc.c000066400000000000000000000030631471441230600173740ustar00rootroot00000000000000#include #ifndef GEVENT_ALLOC_C #define GEVENT_ALLOC_C #ifdef PYPY_VERSION_NUM #define GGIL_DECLARE #define GGIL_ENSURE #define GGIL_RELEASE #define GPyObject_Free free #define GPyObject_Realloc realloc #else #include "Python.h" #define GGIL_DECLARE PyGILState_STATE ___save #define GGIL_ENSURE ___save = PyGILState_Ensure(); #define GGIL_RELEASE PyGILState_Release(___save); #define GPyObject_Free PyObject_Free #define GPyObject_Realloc PyObject_Realloc #endif void* gevent_realloc(void* ptr, size_t size) { // libev is interesting and assumes that everything can be // done with realloc(), assuming that passing in a size of 0 means to // free the pointer. But the C/++ standard explicitly says that // this is undefined. So this wrapper function exists to do it all. GGIL_DECLARE; void* result; if(!size && !ptr) { // libev for some reason often tries to free(NULL); I won't specutale // why. No need to acquire the GIL or do anything in that case. return NULL; } // Using PyObject_* APIs to get access to pymalloc allocator on // all versions of CPython; in Python 3, PyMem_* and PyObject_* use // the same allocator, but in Python 2, only PyObject_* uses pymalloc. 
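    //
    // In short (a descriptive summary, not original source): (ptr, size)
    // behaves like a combined malloc/realloc/free. A zero size frees ptr
    // (yielding NULL), anything else is (re)allocated, and under CPython
    // the GIL is acquired around the allocator call (the PyPy build maps
    // these helpers to plain realloc/free instead).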
GGIL_ENSURE; if(!size) { GPyObject_Free(ptr); result = NULL; } else { result = GPyObject_Realloc(ptr, size); } GGIL_RELEASE; return result; } #undef GGIL_DECLARE #undef GGIL_ENSURE #undef GGIL_RELEASE #undef GPyObject_Free #undef GPyObject_Realloc #endif /* GEVENT_ALLOC_C */ gevent-24.11.1/src/gevent/_ffi/callback.py000066400000000000000000000030341471441230600202420ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import print_function from zope.interface import implementer from gevent._interfaces import ICallback __all__ = [ 'callback', ] @implementer(ICallback) class callback(object): __slots__ = ('callback', 'args') def __init__(self, cb, args): self.callback = cb self.args = args def stop(self): self.callback = None self.args = None close = stop # Note that __nonzero__ and pending are different # bool() is used in contexts where we need to know whether to schedule another callback, # so it's true if it's pending or currently running # 'pending' has the same meaning as libev watchers: it is cleared before actually # running the callback def __bool__(self): # it's nonzero if it's pending or currently executing # NOTE: This depends on loop._run_callbacks setting the args property # to None. return self.args is not None @property def pending(self): return self.callback is not None def _format(self): return '' def __repr__(self): result = "<%s at 0x%x" % (self.__class__.__name__, id(self)) if self.pending: result += " pending" if self.callback is not None: result += " callback=%r" % (self.callback, ) if self.args is not None: result += " args=%r" % (self.args, ) if self.callback is None and self.args is None: result += " stopped" return result + ">" gevent-24.11.1/src/gevent/_ffi/loop.py000066400000000000000000000760721471441230600174730ustar00rootroot00000000000000""" Basic loop implementation for ffi-based cores. """ # pylint: disable=too-many-lines, protected-access, redefined-outer-name, not-callable from __future__ import absolute_import, print_function from collections import deque import sys import os import traceback from gevent._ffi import _dbg from gevent._ffi import GEVENT_DEBUG_LEVEL from gevent._ffi import TRACE from gevent._ffi.callback import callback from gevent._compat import PYPY from gevent.exceptions import HubDestroyed from gevent import getswitchinterval __all__ = [ 'AbstractLoop', 'assign_standard_callbacks', ] class _EVENTSType(object): def __repr__(self): return 'gevent.core.EVENTS' EVENTS = GEVENT_CORE_EVENTS = _EVENTSType() class _DiscardedSet(frozenset): __slots__ = () def discard(self, o): "Does nothing." ##### ## Note on CFFI objects, callbacks and the lifecycle of watcher objects # # Each subclass of `watcher` allocates a C structure of the # appropriate type e.g., struct gevent_ev_io and holds this pointer in # its `_gwatcher` attribute. When that watcher instance is garbage # collected, then the C structure is also freed. The C structure is # passed to libev from the watcher's start() method and then to the # appropriate C callback function, e.g., _gevent_ev_io_callback, which # passes it back to python's _python_callback where we need the # watcher instance. Therefore, as long as that callback is active (the # watcher is started), the watcher instance must not be allowed to get # GC'd---any access at the C level or even the FFI level to the freed # memory could crash the process. 
# # However, the typical idiom calls for writing something like this: # loop.io(fd, python_cb).start() # thus forgetting the newly created watcher subclass and allowing it to be immediately # GC'd. To combat this, when the watcher is started, it places itself into the loop's # `_keepaliveset`, and it only removes itself when the watcher's `stop()` method is called. # Often, this is the *only* reference keeping the watcher object, and hence its C structure, # alive. # # This is slightly complicated by the fact that the python-level # callback, called from the C callback, could choose to manually stop # the watcher. When we return to the C level callback, we now have an # invalid pointer, and attempting to pass it back to Python (e.g., to # handle an error) could crash. Hence, _python_callback, # _gevent_io_callback, and _python_handle_error cooperate to make sure # that the watcher instance stays in the loops `_keepaliveset` while # the C code could be running---and if it gets removed, to not call back # to Python again. # See also https://github.com/gevent/gevent/issues/676 #### class AbstractCallbacks(object): def __init__(self, ffi): self.ffi = ffi self.callbacks = [] if GEVENT_DEBUG_LEVEL < TRACE: self.from_handle = ffi.from_handle def from_handle(self, handle): # pylint:disable=method-hidden x = self.ffi.from_handle(handle) return x def python_callback(self, handle, revents): """ Returns an integer having one of three values: - -1 An exception occurred during the callback and you must call :func:`_python_handle_error` to deal with it. The Python watcher object will have the exception tuple saved in ``_exc_info``. - 1 Everything went according to plan. You should check to see if the native watcher is still active, and call :func:`python_stop` if it is not. This will clean up the memory. Finding the watcher still active at the event loop level, but not having stopped itself at the gevent level is a buggy scenario and shouldn't happen. - 2 Everything went according to plan, but the watcher has already been stopped. Its memory may no longer be valid. This function should never return 0, as that's the default value that Python exceptions will produce. """ #_dbg("Running callback", handle) orig_ffi_watcher = None orig_loop = None try: # Even dereferencing the handle needs to be inside the try/except; # if we don't return normally (e.g., a signal) then we wind up going # to the 'onerror' handler (unhandled_onerror), which # is not what we want; that can permanently wedge the loop depending # on which callback was executing. # XXX: See comments in that function. We may be able to restart and do better? if not handle: # Hmm, a NULL handle. That's not supposed to happen. # We can easily get into a loop if we deref it and allow that # to raise. _dbg("python_callback got null handle") return 1 the_watcher = self.from_handle(handle) orig_ffi_watcher = the_watcher._watcher orig_loop = the_watcher.loop args = the_watcher.args if args is None: # Legacy behaviour from corecext: convert None into () # See test__core_watcher.py args = _NOARGS if args and args[0] == GEVENT_CORE_EVENTS: args = (revents, ) + args[1:] the_watcher.callback(*args) # None here means we weren't started except: # pylint:disable=bare-except # It's possible for ``the_watcher`` to be undefined (UnboundLocalError) # if we threw an exception (signal) on the line that created that variable. 
# This is typically the case with a signal under libuv try: the_watcher except UnboundLocalError: the_watcher = self.from_handle(handle) # It may not be safe to do anything with `handle` or `orig_ffi_watcher` # anymore. If the watcher closed or stopped itself *before* throwing the exception, # then the `handle` and `orig_ffi_watcher` may no longer be valid. Attempting to # e.g., dereference the handle is likely to crash the process. the_watcher._exc_info = sys.exc_info() # If it hasn't been stopped, we need to make sure its # memory stays valid so we can stop it at the native level if needed. # If its loop is gone, it has already been stopped, # see https://github.com/gevent/gevent/issues/1295 for a case where # that happened, as well as issue #1482 if ( # The last thing it does. Full successful close. the_watcher.loop is None # Only a partial close. We could leak memory and even crash later. or the_watcher._handle is None ): # Prevent unhandled_onerror from using the invalid handle handle = None exc_info = the_watcher._exc_info del the_watcher._exc_info try: if orig_loop is not None: orig_loop.handle_error(the_watcher, *exc_info) else: self.unhandled_onerror(*exc_info) except: print("WARNING: gevent: Error when handling error", file=sys.stderr) traceback.print_exc() # Signal that we're closed, no need to do more. return 2 # Keep it around so we can close it later. the_watcher.loop._keepaliveset.add(the_watcher) return -1 if (the_watcher.loop is not None and the_watcher in the_watcher.loop._keepaliveset and the_watcher._watcher is orig_ffi_watcher): # It didn't stop itself, *and* it didn't stop itself, reset # its watcher, and start itself again. libuv's io watchers # multiplex and may do this. # The normal, expected scenario when we find the watcher still # in the keepaliveset is that it is still active at the event loop # level, so we don't expect that python_stop gets called. #_dbg("The watcher has not stopped itself, possibly still active", the_watcher) return 1 return 2 # it stopped itself def python_handle_error(self, handle, _revents): _dbg("Handling error for handle", handle) if not handle: return try: watcher = self.from_handle(handle) exc_info = watcher._exc_info del watcher._exc_info # In the past, we passed the ``watcher`` itself as the context, # which typically meant that the Hub would just print # the exception. This is a problem because sometimes we can't # detect signals until late in ``python_callback``; specifically, # test_selectors.py:DefaultSelectorTest.test_select_interrupt_exc # installs a SIGALRM handler that raises an exception. That exception can happen # before we enter ``python_callback`` or at any point within it because of the way # libuv swallows signals. By passing None, we get the exception prapagated into # the main greenlet (which is probably *also* not what we always want, but # I see no way to distinguish the cases). watcher.loop.handle_error(None, *exc_info) finally: # XXX Since we're here on an error condition, and we # made sure that the watcher object was put in loop._keepaliveset, # what about not stopping the watcher? Looks like a possible # memory leak? # XXX: This used to do "if revents & (libev.EV_READ | libev.EV_WRITE)" # before stopping. Why? try: watcher.stop() except: # pylint:disable=bare-except watcher.loop.handle_error(watcher, *sys.exc_info()) return # pylint:disable=lost-exception,return-in-finally def unhandled_onerror(self, t, v, tb): # This is supposed to be called for signals, etc. # This is the onerror= value for CFFI. 
# If we return None, C will get a value of 0/NULL; # if we raise, CFFI will print the exception and then # return 0/NULL; (unless error= was configured) # If things go as planned, we return the value that asks # C to call back and check on if the watcher needs to be closed or # not. # XXX: TODO: Could this cause events to be lost? Maybe we need to return # a value that causes the C loop to try the callback again? # at least for signals under libuv, which are delivered at very odd times. # Hopefully the event still shows up when we poll the next time. watcher = None handle = tb.tb_frame.f_locals.get('handle') if tb is not None else None if handle: # handle could be NULL watcher = self.from_handle(handle) if watcher is not None: watcher.loop.handle_error(None, t, v, tb) return 1 # Raising it causes a lot of noise from CFFI print("WARNING: gevent: Unhandled error with no watcher", file=sys.stderr) traceback.print_exception(t, v, tb) def python_stop(self, handle): if not handle: # pragma: no cover print( "WARNING: gevent: Unable to dereference handle; not stopping watcher. " "Native resources may leak. This is most likely a bug in gevent.", file=sys.stderr) # The alternative is to crash with no helpful information # NOTE: Raising exceptions here does nothing, they're swallowed by CFFI. # Since the C level passed in a null pointer, even dereferencing the handle # will just produce some exceptions. return watcher = self.from_handle(handle) watcher.stop() if not PYPY: def python_check_callback(self, watcher_ptr): # pylint:disable=unused-argument # If we have the onerror callback, this is a no-op; all the real # work to rethrow the exception is done by the onerror callback # NOTE: Unlike the rest of the functions, this is called with a pointer # to the C level structure, *not* a pointer to the void* that represents a # for the Python Watcher object. pass else: # PyPy # On PyPy, we need the function to have some sort of body, otherwise # the signal exceptions don't always get caught, *especially* with # libuv (however, there's no reason to expect this to only be a libuv # issue; it's just that we don't depend on the periodic signal timer # under libev, so the issue is much more pronounced under libuv) # test_socket's test_sendall_interrupted can hang. # See https://github.com/gevent/gevent/issues/1112 def python_check_callback(self, watcher_ptr): # pylint:disable=unused-argument # Things we've tried that *don't* work: # greenlet.getcurrent() # 1 + 1 try: raise MemoryError() except MemoryError: pass def python_prepare_callback(self, watcher_ptr): loop = self._find_loop_from_c_watcher(watcher_ptr) if loop is None: # pragma: no cover print("WARNING: gevent: running prepare callbacks from a destroyed handle: ", watcher_ptr) return loop._run_callbacks() def check_callback_onerror(self, t, v, tb): loop = None watcher_ptr = self._find_watcher_ptr_in_traceback(tb) if watcher_ptr: loop = self._find_loop_from_c_watcher(watcher_ptr) if loop is not None: # None as the context argument causes the exception to be raised # in the main greenlet. 
loop.handle_error(None, t, v, tb) return None raise v # Let CFFI print def _find_loop_from_c_watcher(self, watcher_ptr): raise NotImplementedError() def _find_watcher_ptr_in_traceback(self, tb): return tb.tb_frame.f_locals['watcher_ptr'] if tb is not None else None def assign_standard_callbacks(ffi, lib, callbacks_class, extras=()): # pylint:disable=unused-argument """ Given the typical *ffi* and *lib* arguments, and a subclass of :class:`AbstractCallbacks` in *callbacks_class*, set up the ``def_extern`` Python callbacks from C into an instance of *callbacks_class*. :param tuple extras: If given, this is a sequence of ``(name, error_function)`` additional callbacks to register. Each *name* is an attribute of the *callbacks_class* instance. (Each element cas also be just a *name*.) :return: The *callbacks_class* instance. This object must be kept alive, typically at module scope. """ # callbacks keeps these cdata objects alive at the python level callbacks = callbacks_class(ffi) extras = [extra if len(extra) == 2 else (extra, None) for extra in extras] extras = tuple((getattr(callbacks, name), error) for name, error in extras) for (func, error_func) in ( (callbacks.python_callback, None), (callbacks.python_handle_error, None), (callbacks.python_stop, None), (callbacks.python_check_callback, callbacks.check_callback_onerror), (callbacks.python_prepare_callback, callbacks.check_callback_onerror) ) + extras: # The name of the callback function matches the 'extern Python' declaration. error_func = error_func or callbacks.unhandled_onerror callback = ffi.def_extern(onerror=error_func)(func) # keep alive the cdata # (def_extern returns the original function, and it requests that # the function be "global", so maybe it keeps a hard reference to it somewhere now # unlike ffi.callback(), and we don't need to do this?) callbacks.callbacks.append(callback) # At this point, the library C variable (static function, actually) # is filled in. return callbacks basestring = (bytes, str) integer_types = (int,) _NOARGS = () class AbstractLoop(object): # pylint:disable=too-many-public-methods,too-many-instance-attributes # How many callbacks we should run between checking against the # switch interval. CALLBACK_CHECK_COUNT = 50 error_handler = None _CHECK_POINTER = None _TIMER_POINTER = None _TIMER_CALLBACK_SIG = None _PREPARE_POINTER = None starting_timer_may_update_loop_time = False # Subclasses should set this in __init__ to reflect # whether they were the default loop. _default = None _keepaliveset = _DiscardedSet() _threadsafe_async = None def __init__(self, ffi, lib, watchers, flags=None, default=None): self._ffi = ffi self._lib = lib self._ptr = None self._handle_to_self = self._ffi.new_handle(self) # XXX: Reference cycle? self._watchers = watchers self._in_callback = False self._callbacks = deque() # Stores python watcher objects while they are started self._keepaliveset = set() self._init_loop_and_aux_watchers(flags, default) def _init_loop_and_aux_watchers(self, flags=None, default=None): self._ptr = self._init_loop(flags, default) # self._check is a watcher that runs in each iteration of the # mainloop, just after the blocking call. It's point is to handle # signals. It doesn't run watchers or callbacks, it just exists to give # CFFI a chance to raise signal exceptions so we can handle them. 
self._check = self._ffi.new(self._CHECK_POINTER) self._check.data = self._handle_to_self self._init_and_start_check() # self._prepare is a watcher that runs in each iteration of the mainloop, # just before the blocking call. It's where we run deferred callbacks # from self.run_callback. This cooperates with _setup_for_run_callback() # to schedule self._timer0 if needed. self._prepare = self._ffi.new(self._PREPARE_POINTER) self._prepare.data = self._handle_to_self self._init_and_start_prepare() # A timer we start and stop on demand. If we have callbacks, # too many to run in one iteration of _run_callbacks, we turn this # on so as to have the next iteration of the run loop return to us # as quickly as possible. # TODO: There may be a more efficient way to do this using ev_timer_again; # see the "ev_timer" section of the ev manpage (http://linux.die.net/man/3/ev) # Alternatively, setting the ev maximum block time may also work. self._timer0 = self._ffi.new(self._TIMER_POINTER) self._timer0.data = self._handle_to_self self._init_callback_timer() self._threadsafe_async = self.async_(ref=False) # No need to do anything with this on ``fork()``, both libev and libuv # take care of creating a new pipe in their respective ``loop_fork()`` methods. self._threadsafe_async.start(lambda: None) # TODO: We may be able to do something nicer and use the existing python_callback # combined with onerror and the class check/timer/prepare to simplify things # and unify our handling def _init_loop(self, flags, default): """ Called by __init__ to create or find the loop. The return value is assigned to self._ptr. """ raise NotImplementedError() def _init_and_start_check(self): raise NotImplementedError() def _init_and_start_prepare(self): raise NotImplementedError() def _init_callback_timer(self): raise NotImplementedError() def _stop_callback_timer(self): raise NotImplementedError() def _start_callback_timer(self): raise NotImplementedError() def _check_callback_handle_error(self, t, v, tb): self.handle_error(None, t, v, tb) def _run_callbacks(self): # pylint:disable=too-many-branches # When we're running callbacks, its safe for timers to # update the notion of the current time (because if we're here, # we're not running in a timer callback that may let other timers # run; this is mostly an issue for libuv). # That's actually a bit of a lie: on libev, self._timer0 really is # a timer, and so sometimes this is running in a timer callback, not # a prepare callback. But that's OK, libev doesn't suffer from cascading # timer expiration and its safe to update the loop time at any # moment there. self.starting_timer_may_update_loop_time = True try: count = self.CALLBACK_CHECK_COUNT now = self.now() expiration = now + getswitchinterval() self._stop_callback_timer() while self._callbacks: cb = self._callbacks.popleft() # pylint:disable=assignment-from-no-return count -= 1 self.unref() # XXX: libuv doesn't have a global ref count! callback = cb.callback cb.callback = None args = cb.args if callback is None or args is None: # it's been stopped continue try: callback(*args) except: # pylint:disable=bare-except # If we allow an exception to escape this method (while we are running the ev callback), # then CFFI will print the error and libev will continue executing. # There are two problems with this. The first is that the code after # the loop won't run. 
The second is that any remaining callbacks scheduled # for this loop iteration will be silently dropped; they won't run, but they'll # also not be *stopped* (which is not a huge deal unless you're looking for # consistency or checking the boolean/pending status; the loop doesn't keep # a reference to them like it does to watchers...*UNLESS* the callback itself had # a reference to a watcher; then I don't know what would happen, it depends on # the state of the watcher---a leak or crash is not totally inconceivable). # The Cython implementation in core.ppyx uses gevent_call from callbacks.c # to run the callback, which uses gevent_handle_error to handle any errors the # Python callback raises...it unconditionally simply prints any error raised # by loop.handle_error and clears it, so callback handling continues. # We take a similar approach (but are extra careful about printing) try: self.handle_error(cb, *sys.exc_info()) except: # pylint:disable=bare-except try: print("Exception while handling another error", file=sys.stderr) traceback.print_exc() except: # pylint:disable=bare-except pass # Nothing we can do here finally: # NOTE: this must be reset here, because cb.args is used as a flag in # the callback class so that bool(cb) of a callback that has been run # becomes False cb.args = None # We've finished running one group of callbacks # but we may have more, so before looping check our # switch interval. if count == 0 and self._callbacks: count = self.CALLBACK_CHECK_COUNT self.update_now() if self.now() >= expiration: now = 0 break # Update the time before we start going again, if we didn't # just do so. if now != 0: self.update_now() if self._callbacks: self._start_callback_timer() finally: self.starting_timer_may_update_loop_time = False def _stop_aux_watchers(self): if self._threadsafe_async is not None: self._threadsafe_async.close() self._threadsafe_async = None def destroy(self): ptr = self.ptr if ptr: try: if not self._can_destroy_loop(ptr): return False self._stop_aux_watchers() self._destroy_loop(ptr) finally: # not ffi.NULL, we don't want something that can be # passed to C and crash later. This will create nice friendly # TypeError from CFFI. self._ptr = None del self._handle_to_self del self._callbacks del self._keepaliveset return True def _can_destroy_loop(self, ptr): raise NotImplementedError() def _destroy_loop(self, ptr): raise NotImplementedError() @property def ptr(self): # Use this when you need to be sure the pointer is valid. 
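        # (After ``destroy()`` this becomes None, so callers can simply
        # test its truth value.)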
return self._ptr @property def WatcherType(self): return self._watchers.watcher @property def MAXPRI(self): return 1 @property def MINPRI(self): return 1 def _handle_syserr(self, message, errno): try: errno = os.strerror(errno) except: # pylint:disable=bare-except traceback.print_exc() try: message = '%s: %s' % (message, errno) except: # pylint:disable=bare-except traceback.print_exc() self.handle_error(None, SystemError, SystemError(message), None) def handle_error(self, context, type, value, tb): if type is HubDestroyed: self._callbacks.clear() self.break_() return handle_error = None error_handler = self.error_handler if error_handler is not None: # we do want to do getattr every time so that setting Hub.handle_error property just works handle_error = getattr(error_handler, 'handle_error', error_handler) handle_error(context, type, value, tb) else: self._default_handle_error(context, type, value, tb) def _default_handle_error(self, context, type, value, tb): # pylint:disable=unused-argument # note: Hub sets its own error handler so this is not used by gevent # this is here to make core.loop usable without the rest of gevent # Should cause the loop to stop running. traceback.print_exception(type, value, tb) def run(self, nowait=False, once=False): raise NotImplementedError() def reinit(self): raise NotImplementedError() def ref(self): # XXX: libuv doesn't do it this way raise NotImplementedError() def unref(self): raise NotImplementedError() def break_(self, how=None): raise NotImplementedError() def verify(self): pass def now(self): raise NotImplementedError() def update_now(self): raise NotImplementedError() def update(self): import warnings warnings.warn("'update' is deprecated; use 'update_now'", DeprecationWarning, stacklevel=2) self.update_now() def __repr__(self): return '<%s.%s at 0x%x %s>' % ( self.__class__.__module__, self.__class__.__name__, id(self), self._format() ) @property def default(self): return self._default if self.ptr else False @property def iteration(self): return -1 @property def depth(self): return -1 @property def backend_int(self): return 0 @property def backend(self): return "default" @property def pendingcnt(self): return 0 def io(self, fd, events, ref=True, priority=None): return self._watchers.io(self, fd, events, ref, priority) def closing_fd(self, fd): # pylint:disable=unused-argument return False def timer(self, after, repeat=0.0, ref=True, priority=None): return self._watchers.timer(self, after, repeat, ref, priority) def signal(self, signum, ref=True, priority=None): return self._watchers.signal(self, signum, ref, priority) def idle(self, ref=True, priority=None): return self._watchers.idle(self, ref, priority) def prepare(self, ref=True, priority=None): return self._watchers.prepare(self, ref, priority) def check(self, ref=True, priority=None): return self._watchers.check(self, ref, priority) def fork(self, ref=True, priority=None): return self._watchers.fork(self, ref, priority) def async_(self, ref=True, priority=None): return self._watchers.async_(self, ref, priority) # Provide BWC for those that can use 'async' as is locals()['async'] = async_ if sys.platform != "win32": def child(self, pid, trace=0, ref=True): return self._watchers.child(self, pid, trace, ref) def install_sigchld(self): pass def stat(self, path, interval=0.0, ref=True, priority=None): return self._watchers.stat(self, path, interval, ref, priority) def callback(self, priority=None): return callback(self, priority) def _setup_for_run_callback(self): raise NotImplementedError() 
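    # A minimal usage sketch for the two scheduling helpers below
    # (illustrative only; ``do_later`` is a placeholder callable):
    #
    #   loop.run_callback(do_later)             # intended for the loop's own thread
    #   loop.run_callback_threadsafe(do_later)  # from another thread; also wakes
    #                                           # the loop via the threadsafe async watcher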
def run_callback(self, func, *args): # If we happen to already be running callbacks (inside # _run_callbacks), this could happen almost immediately, # without the loop cycling. cb = callback(func, args) self._callbacks.append(cb) # Relying on the GIL for this to be threadsafe self._setup_for_run_callback() # XXX: This may not be threadsafe. return cb def run_callback_threadsafe(self, func, *args): cb = self.run_callback(func, *args) self._threadsafe_async.send() return cb def _format(self): ptr = self.ptr if not ptr: return 'destroyed' msg = "backend=" + self.backend msg += ' ptr=' + str(ptr) if self.default: msg += ' default' msg += ' pending=%s' % self.pendingcnt msg += self._format_details() return msg def _format_details(self): msg = '' fileno = self.fileno() # pylint:disable=assignment-from-none try: activecnt = self.activecnt except AttributeError: activecnt = None if activecnt is not None: msg += ' ref=' + repr(activecnt) if fileno is not None: msg += ' fileno=' + repr(fileno) #if sigfd is not None and sigfd != -1: # msg += ' sigfd=' + repr(sigfd) msg += ' callbacks=' + str(len(self._callbacks)) return msg def fileno(self): return None @property def activecnt(self): if not self.ptr: raise ValueError('operation on destroyed loop') return 0 gevent-24.11.1/src/gevent/_ffi/watcher.py000066400000000000000000000507321471441230600201520ustar00rootroot00000000000000""" Useful base classes for watchers. The available watchers will depend on the specific event loop. """ # pylint:disable=not-callable from __future__ import absolute_import, print_function import signal as signalmodule import functools import warnings from gevent._config import config from gevent._util import LazyOnClass try: from tracemalloc import get_object_traceback def tracemalloc(init): # PYTHONTRACEMALLOC env var controls this on Python 3. 
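        # Nothing to wrap here (descriptive comment, not original source):
        # when the interpreter's own tracemalloc is enabled it already
        # records allocation tracebacks, so the watcher ``__init__`` is
        # returned unchanged.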
return init except ImportError: # Python < 3.4 if config.trace_malloc: # Use the same env var to turn this on for Python 2 import traceback class _TB(object): __slots__ = ('lines',) def __init__(self, lines): # These end in newlines, which we don't want for consistency self.lines = [x.rstrip() for x in lines] def format(self): return self.lines def tracemalloc(init): @functools.wraps(init) def traces(self, *args, **kwargs): init(self, *args, **kwargs) self._captured_malloc = _TB(traceback.format_stack()) return traces def get_object_traceback(obj): return obj._captured_malloc else: def get_object_traceback(_obj): return None def tracemalloc(init): return init from gevent._compat import fsencode from gevent._ffi import _dbg # pylint:disable=unused-import from gevent._ffi import GEVENT_DEBUG_LEVEL from gevent._ffi import DEBUG from gevent._ffi.loop import GEVENT_CORE_EVENTS from gevent._ffi.loop import _NOARGS ALLOW_WATCHER_DEL = GEVENT_DEBUG_LEVEL >= DEBUG __all__ = [ ] try: ResourceWarning # pylint:disable=used-before-assignment except NameError: class ResourceWarning(Warning): "Python 2 fallback" class _NoWatcherResult(int): def __repr__(self): return "" _NoWatcherResult = _NoWatcherResult(0) def events_to_str(event_field, all_events): result = [] for (flag, string) in all_events: c_flag = flag if event_field & c_flag: result.append(string) event_field &= (~c_flag) if not event_field: break if event_field: result.append(hex(event_field)) return '|'.join(result) def not_while_active(func): @functools.wraps(func) def nw(self, *args, **kwargs): if self.active: raise ValueError("not while active") func(self, *args, **kwargs) return nw def only_if_watcher(func): @functools.wraps(func) def if_w(self): if self._watcher: return func(self) return _NoWatcherResult return if_w class AbstractWatcherType(type): """ Base metaclass for watchers. To use, you will: - subclass the watcher class defined from this type. 
- optionally subclass this type """ # pylint:disable=bad-mcs-classmethod-argument _FFI = None _LIB = None def __new__(cls, name, bases, cls_dict): if name != 'watcher' and not cls_dict.get('_watcher_skip_ffi'): cls._fill_watcher(name, bases, cls_dict) if '__del__' in cls_dict and not ALLOW_WATCHER_DEL: # pragma: no cover raise TypeError("CFFI watchers are not allowed to have __del__") return type.__new__(cls, name, bases, cls_dict) @classmethod def _fill_watcher(cls, name, bases, cls_dict): # TODO: refactor smaller # pylint:disable=too-many-locals if name.endswith('_'): # Strip trailing _ added to avoid keyword duplications # e.g., async_ name = name[:-1] def _mro_get(attr, bases, error=True): for b in bases: try: return getattr(b, attr) except AttributeError: continue if error: # pragma: no cover raise AttributeError(attr) _watcher_prefix = cls_dict.get('_watcher_prefix') or _mro_get('_watcher_prefix', bases) if '_watcher_type' not in cls_dict: watcher_type = _watcher_prefix + '_' + name cls_dict['_watcher_type'] = watcher_type elif not cls_dict['_watcher_type'].startswith(_watcher_prefix): watcher_type = _watcher_prefix + '_' + cls_dict['_watcher_type'] cls_dict['_watcher_type'] = watcher_type active_name = _watcher_prefix + '_is_active' def _watcher_is_active(self): return getattr(self._LIB, active_name) LazyOnClass.lazy(cls_dict, _watcher_is_active) watcher_struct_name = cls_dict.get('_watcher_struct_name') if not watcher_struct_name: watcher_struct_pattern = (cls_dict.get('_watcher_struct_pattern') or _mro_get('_watcher_struct_pattern', bases, False) or 'struct %s') watcher_struct_name = watcher_struct_pattern % (watcher_type,) def _watcher_struct_pointer_type(self): return self._FFI.typeof(watcher_struct_name + ' *') LazyOnClass.lazy(cls_dict, _watcher_struct_pointer_type) callback_name = (cls_dict.get('_watcher_callback_name') or _mro_get('_watcher_callback_name', bases, False) or '_gevent_generic_callback') def _watcher_callback(self): return self._FFI.addressof(self._LIB, callback_name) LazyOnClass.lazy(cls_dict, _watcher_callback) def _make_meth(name, watcher_name): def meth(self): lib_name = self._watcher_type + '_' + name return getattr(self._LIB, lib_name) meth.__name__ = watcher_name return meth for meth_name in 'start', 'stop', 'init': watcher_name = '_watcher' + '_' + meth_name if watcher_name not in cls_dict: LazyOnClass.lazy(cls_dict, _make_meth(meth_name, watcher_name)) def new_handle(cls, obj): return cls._FFI.new_handle(obj) def new(cls, kind): return cls._FFI.new(kind) class watcher(object): _callback = None _args = None _watcher = None # self._handle has a reference to self, keeping it alive. # We must keep self._handle alive for ffi.from_handle() to be # able to work. We only fill this in when we are started, # and when we are stopped we destroy it. # NOTE: This is a GC cycle, so we keep it around for as short # as possible. _handle = None @tracemalloc def __init__(self, _loop, ref=True, priority=None, args=_NOARGS): self.loop = _loop self.__init_priority = priority self.__init_args = args self.__init_ref = ref self._watcher_full_init() def _watcher_full_init(self): priority = self.__init_priority ref = self.__init_ref args = self.__init_args self._watcher_create(ref) if priority is not None: self._watcher_ffi_set_priority(priority) try: self._watcher_ffi_init(args) except: # Let these be GC'd immediately. 
# If we keep them around to when *we* are gc'd, # they're probably invalid, meaning any native calls # we do then to close() them are likely to fail self._watcher = None raise self._watcher_ffi_set_init_ref(ref) @classmethod def _watcher_ffi_close(cls, ffi_watcher): pass def _watcher_create(self, ref): # pylint:disable=unused-argument self._watcher = self._watcher_new() def _watcher_new(self): return type(self).new(self._watcher_struct_pointer_type) # pylint:disable=no-member def _watcher_ffi_set_init_ref(self, ref): pass def _watcher_ffi_set_priority(self, priority): pass def _watcher_ffi_init(self, args): raise NotImplementedError() def _watcher_ffi_start(self): raise NotImplementedError() def _watcher_ffi_stop(self): self._watcher_stop(self.loop.ptr, self._watcher) def _watcher_ffi_ref(self): raise NotImplementedError() def _watcher_ffi_unref(self): raise NotImplementedError() def _watcher_ffi_start_unref(self): # While a watcher is active, we don't keep it # referenced. This allows a timer, for example, to be started, # and still allow the loop to end if there is nothing # else to do. see test__order.TestSleep0 for one example. self._watcher_ffi_unref() def _watcher_ffi_stop_ref(self): self._watcher_ffi_ref() # A string identifying the type of libev object we watch, e.g., 'ev_io' # This should be a class attribute. _watcher_type = None # A class attribute that is the callback on the libev object that init's the C struct, # e.g., libev.ev_io_init. If None, will be set by _init_subclasses. _watcher_init = None # A class attribute that is the callback on the libev object that starts the C watcher, # e.g., libev.ev_io_start. If None, will be set by _init_subclasses. _watcher_start = None # A class attribute that is the callback on the libev object that stops the C watcher, # e.g., libev.ev_io_stop. If None, will be set by _init_subclasses. _watcher_stop = None # A cffi ctype object identifying the struct pointer we create. # This is a class attribute set based on the _watcher_type _watcher_struct_pointer_type = None # The attribute of the libev object identifying the custom # callback function for this type of watcher. This is a class # attribute set based on the _watcher_type in _init_subclasses. _watcher_callback = None _watcher_is_active = None def close(self): if self._watcher is None: return self.stop() _watcher = self._watcher self._watcher = None self._watcher_set_data(_watcher, self._FFI.NULL) # pylint: disable=no-member self._watcher_ffi_close(_watcher) self.loop = None def _watcher_set_data(self, the_watcher, data): # This abstraction exists for the sole benefit of # libuv.watcher.stat, which "subclasses" uv_handle_t. # Can we do something to avoid this extra function call? the_watcher.data = data return data def __enter__(self): return self def __exit__(self, t, v, tb): self.close() if ALLOW_WATCHER_DEL: def __del__(self): if self._watcher: tb = get_object_traceback(self) tb_msg = '' if tb is not None: tb_msg = '\n'.join(tb.format()) tb_msg = '\nTraceback:\n' + tb_msg warnings.warn("Failed to close watcher %r%s" % (self, tb_msg), ResourceWarning) # may fail if __init__ did; will be harmlessly printed self.close() __in_repr = False def __repr__(self): basic = "<%s at 0x%x" % (self.__class__.__name__, id(self)) if self.__in_repr: return basic + '>' # Running child watchers have been seen to have a # recursive repr in ``self.args``, thanks to ``gevent.os.fork_and_watch`` # passing the watcher as an argument to its callback. 
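        # The ``__in_repr`` flag breaks that recursion: a nested call
        # above simply returns the basic form.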
self.__in_repr = True try: result = '%s%s' % (basic, self._format()) if self.pending: result += " pending" if self.callback is not None: fself = getattr(self.callback, '__self__', None) if fself is self: result += " callback=" % (self.callback.__name__) else: result += " callback=%r" % (self.callback, ) if self.args is not None: result += " args=%r" % (self.args, ) if self.callback is None and self.args is None: result += " stopped" result += " watcher=%s" % (self._watcher) result += " handle=%s" % (self._watcher_handle) result += " ref=%s" % (self.ref) return result + ">" finally: self.__in_repr = False @property def _watcher_handle(self): if self._watcher: return self._watcher.data def _format(self): return '' @property def ref(self): raise NotImplementedError() def _get_callback(self): return self._callback if '_callback' in self.__dict__ else None def _set_callback(self, cb): if not callable(cb) and cb is not None: raise TypeError("Expected callable, not %r" % (cb, )) if cb is None: if '_callback' in self.__dict__: del self._callback else: self._callback = cb callback = property(_get_callback, _set_callback) def _get_args(self): return self._args def _set_args(self, args): if not isinstance(args, tuple) and args is not None: raise TypeError("args must be a tuple or None") if args is None: if '_args' in self.__dict__: del self._args else: self._args = args args = property(_get_args, _set_args) def start(self, callback, *args): if callback is None: raise TypeError('callback must be callable, not None') self.callback = callback self.args = args or _NOARGS self.loop._keepaliveset.add(self) self._handle = self._watcher_set_data(self._watcher, type(self).new_handle(self)) # pylint:disable=no-member self._watcher_ffi_start() self._watcher_ffi_start_unref() def stop(self): if self.callback is None: assert self.loop is None or self not in self.loop._keepaliveset return self.callback = None # Only after setting the signal to make this idempotent do # we move ahead. self._watcher_ffi_stop_ref() self._watcher_ffi_stop() self.loop._keepaliveset.discard(self) self._handle = None self._watcher_set_data(self._watcher, self._FFI.NULL) # pylint:disable=no-member self.args = None def _get_priority(self): return None @not_while_active def _set_priority(self, priority): pass priority = property(_get_priority, _set_priority) @property def active(self): if self._watcher is not None and self._watcher_is_active(self._watcher): return True return False @property def pending(self): return False watcher = AbstractWatcherType('watcher', (object,), dict(watcher.__dict__)) class IoMixin(object): EVENT_MASK = 0 def __init__(self, loop, fd, events, ref=True, priority=None, _args=None): # Win32 only works with sockets, and only when we use libuv, because # we don't use _open_osfhandle. See libuv/watchers.py:io for a description. 
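        # Validate the arguments eagerly so misuse fails here rather than
        # deep inside the event loop; EVENT_MASK defaults to 0 above and is
        # expected to be overridden by the concrete implementation.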
if fd < 0: raise ValueError('fd must be non-negative: %r' % fd) if events & ~self.EVENT_MASK: raise ValueError('illegal event mask: %r' % events) self._fd = fd super(IoMixin, self).__init__(loop, ref=ref, priority=priority, args=_args or (fd, events)) def start(self, callback, *args, **kwargs): args = args or _NOARGS if kwargs.get('pass_events'): args = (GEVENT_CORE_EVENTS, ) + args super(IoMixin, self).start(callback, *args) def _format(self): return ' fd=%d' % self._fd class TimerMixin(object): _watcher_type = 'timer' def __init__(self, loop, after=0.0, repeat=0.0, ref=True, priority=None): if repeat < 0.0: raise ValueError("repeat must be positive or zero: %r" % repeat) self._after = after self._repeat = repeat super(TimerMixin, self).__init__(loop, ref=ref, priority=priority, args=(after, repeat)) def start(self, callback, *args, **kw): update = kw.get("update", self.loop.starting_timer_may_update_loop_time) if update: # Quoth the libev doc: "This is a costly operation and is # usually done automatically within ev_run(). This # function is rarely useful, but when some event callback # runs for a very long time without entering the event # loop, updating libev's idea of the current time is a # good idea." # 1.3 changed the default for this to False *unless* the loop is # running a callback; see libuv for details. Note that # starting Timeout objects still sets this to true. self.loop.update_now() super(TimerMixin, self).start(callback, *args) def again(self, callback, *args, **kw): raise NotImplementedError() class SignalMixin(object): _watcher_type = 'signal' def __init__(self, loop, signalnum, ref=True, priority=None): if signalnum < 1 or signalnum >= signalmodule.NSIG: raise ValueError('illegal signal number: %r' % signalnum) # still possible to crash on one of libev's asserts: # 1) "libev: ev_signal_start called with illegal signal number" # EV_NSIG might be different from signal.NSIG on some platforms # 2) "libev: a signal must not be attached to two different loops" # we probably could check that in LIBEV_EMBED mode, but not in general self._signalnum = signalnum super(SignalMixin, self).__init__(loop, ref=ref, priority=priority, args=(signalnum, )) class IdleMixin(object): _watcher_type = 'idle' class PrepareMixin(object): _watcher_type = 'prepare' class CheckMixin(object): _watcher_type = 'check' class ForkMixin(object): _watcher_type = 'fork' class AsyncMixin(object): _watcher_type = 'async' def send(self): raise NotImplementedError() def send_ignoring_arg(self, _ignored): """ Calling compatibility with ``greenlet.switch(arg)`` as used by waiters that have ``rawlink``. This is an advanced method, not usually needed. """ return self.send() @property def pending(self): raise NotImplementedError() class ChildMixin(object): # hack for libuv which doesn't extend watcher _CALL_SUPER_INIT = True def __init__(self, loop, pid, trace=0, ref=True): if not loop.default: raise TypeError('child watchers are only available on the default loop') loop.install_sigchld() self._pid = pid if self._CALL_SUPER_INIT: super(ChildMixin, self).__init__(loop, ref=ref, args=(pid, trace)) def _format(self): return ' pid=%r rstatus=%r' % (self.pid, self.rstatus) @property def pid(self): return self._pid @property def rpid(self): # The received pid, the result of the waitpid() call. 
return self._rpid _rpid = None _rstatus = 0 @property def rstatus(self): return self._rstatus class StatMixin(object): @staticmethod def _encode_path(path): return fsencode(path) def __init__(self, _loop, path, interval=0.0, ref=True, priority=None): # Store the encoded path in the same attribute that corecext does self._paths = self._encode_path(path) # Keep the original path to avoid re-encoding, especially on Python 3 self._path = path # Although CFFI would automatically convert a bytes object into a char* when # calling ev_stat_init(..., char*, ...), on PyPy the char* pointer is not # guaranteed to live past the function call. On CPython, only with a constant/interned # bytes object is the pointer guaranteed to last path the function call. (And since # Python 3 is pretty much guaranteed to produce a newly-encoded bytes object above, thats # rarely the case). Therefore, we must keep a reference to the produced cdata object # so that the struct ev_stat_watcher's `path` pointer doesn't become invalid/deallocated self._cpath = self._FFI.new('char[]', self._paths) self._interval = interval super(StatMixin, self).__init__(_loop, ref=ref, priority=priority, args=(self._cpath, interval)) @property def path(self): return self._path @property def attr(self): raise NotImplementedError @property def prev(self): raise NotImplementedError @property def interval(self): return self._interval gevent-24.11.1/src/gevent/_fileobjectcommon.py000066400000000000000000000574471471441230600213020ustar00rootroot00000000000000""" gevent internals. """ from __future__ import absolute_import, print_function, division try: from errno import EBADF except ImportError: EBADF = 9 import io import functools import sys import os from gevent.hub import _get_hub_noargs as get_hub from gevent._compat import integer_types from gevent._compat import reraise from gevent._compat import fspath from gevent.lock import Semaphore, DummySemaphore class cancel_wait_ex(IOError): def __init__(self): IOError.__init__( self, EBADF, 'File descriptor was closed in another greenlet') class FileObjectClosed(IOError): def __init__(self): IOError.__init__( self, EBADF, 'Bad file descriptor (FileObject was closed)') class UniversalNewlineBytesWrapper(io.TextIOWrapper): """ Uses TextWrapper to decode universal newlines, but returns the results as bytes. This is for Python 2 where the 'rU' mode did that. """ mode = None def __init__(self, fobj, line_buffering): # latin-1 has the ability to round-trip arbitrary bytes. io.TextIOWrapper.__init__(self, fobj, encoding='latin-1', newline=None, line_buffering=line_buffering) def read(self, *args, **kwargs): result = io.TextIOWrapper.read(self, *args, **kwargs) return result.encode('latin-1') def readline(self, limit=-1): result = io.TextIOWrapper.readline(self, limit) return result.encode('latin-1') def __iter__(self): # readlines() is implemented in terms of __iter__ # and TextIOWrapper.__iter__ checks that readline returns # a unicode object, which we don't, so we override return self def __next__(self): line = self.readline() if not line: raise StopIteration return line next = __next__ class FlushingBufferedWriter(io.BufferedWriter): def write(self, b): ret = io.BufferedWriter.write(self, b) self.flush() return ret class WriteallMixin(object): def writeall(self, value): """ Similar to :meth:`socket.socket.sendall`, ensures that all the contents of *value* have been written (though not necessarily flushed) before returning. Returns the length of *value*. .. 
versionadded:: 20.12.0 """ # Do we need to play the same get_memory games we do with sockets? # And what about chunking for large values? See _socketcommon.py write = super(WriteallMixin, self).write total = len(value) while value: l = len(value) w = write(value) if w == l: break value = value[w:] return total class FileIO(io.FileIO): """A subclass that we can dynamically assign __class__ for.""" __slots__ = () class WriteIsWriteallMixin(WriteallMixin): def write(self, value): return self.writeall(value) class WriteallFileIO(WriteIsWriteallMixin, io.FileIO): pass class OpenDescriptor(object): # pylint:disable=too-many-instance-attributes """ Interprets the arguments to `open`. Internal use only. Originally based on code in the stdlib's _pyio.py (Python implementation of the :mod:`io` module), but modified for gevent: - Native strings are returned on Python 2 when neither 'b' nor 't' are in the mode string and no encoding is specified. - Universal newlines work in that mode. - Allows externally unbuffered text IO. :keyword bool atomic_write: If true, then if the opened, wrapped, stream is unbuffered (meaning that ``write`` can produce short writes and the return value needs to be checked), then the implementation will be adjusted so that ``write`` behaves like Python 2 on a built-in file object and writes the entire value. Only set this on Python 2; the only intended user is :class:`gevent.subprocess.Popen`. """ @staticmethod def _collapse_arg(pref_name, preferred_val, old_name, old_val, default): # We could play tricks with the callers ``locals()`` to avoid having to specify # the name (which we only use for error handling) but ``locals()`` may be slow and # inhibit JIT (on PyPy), so we just write it out long hand. if preferred_val is not None and old_val is not None: raise TypeError("Cannot specify both %s=%s and %s=%s" % ( pref_name, preferred_val, old_name, old_val )) if preferred_val is None and old_val is None: return default return preferred_val if preferred_val is not None else old_val def __init__(self, fobj, mode='r', bufsize=None, close=None, encoding=None, errors=None, newline=None, buffering=None, closefd=None, atomic_write=False): # Based on code in the stdlib's _pyio.py from 3.8. # pylint:disable=too-many-locals,too-many-branches,too-many-statements closefd = self._collapse_arg('closefd', closefd, 'close', close, True) del close buffering = self._collapse_arg('buffering', buffering, 'bufsize', bufsize, -1) del bufsize if not hasattr(fobj, 'fileno'): if not isinstance(fobj, integer_types): # Not a fd. Support PathLike on Python 2 and Python <= 3.5. 
fobj = fspath(fobj) if not isinstance(fobj, (str, bytes) + integer_types): # pragma: no cover raise TypeError("invalid file: %r" % fobj) if isinstance(fobj, (str, bytes)): closefd = True if not isinstance(mode, str): raise TypeError("invalid mode: %r" % mode) if not isinstance(buffering, integer_types): raise TypeError("invalid buffering: %r" % buffering) if encoding is not None and not isinstance(encoding, str): raise TypeError("invalid encoding: %r" % encoding) if errors is not None and not isinstance(errors, str): raise TypeError("invalid errors: %r" % errors) modes = set(mode) if modes - set("axrwb+tU") or len(mode) > len(modes): raise ValueError("invalid mode: %r" % mode) creating = "x" in modes reading = "r" in modes writing = "w" in modes appending = "a" in modes updating = "+" in modes text = "t" in modes binary = "b" in modes universal = 'U' in modes can_write = creating or writing or appending or updating if universal: if can_write: raise ValueError("mode U cannot be combined with 'x', 'w', 'a', or '+'") # Just because the stdlib deprecates this, no need for us to do so as well. # Especially not while we still support Python 2. # import warnings # warnings.warn("'U' mode is deprecated", # DeprecationWarning, 4) reading = True if text and binary: raise ValueError("can't have text and binary mode at once") if creating + reading + writing + appending > 1: raise ValueError("can't have read/write/append mode at once") if not (creating or reading or writing or appending): raise ValueError("must have exactly one of read/write/append mode") if binary and encoding is not None: raise ValueError("binary mode doesn't take an encoding argument") if binary and errors is not None: raise ValueError("binary mode doesn't take an errors argument") if binary and newline is not None: raise ValueError("binary mode doesn't take a newline argument") if binary and buffering == 1: import warnings warnings.warn("line buffering (buffering=1) isn't supported in binary " "mode, the default buffer size will be used", RuntimeWarning, 4) self._fobj = fobj self.fileio_mode = ( (creating and "x" or "") + (reading and "r" or "") + (writing and "w" or "") + (appending and "a" or "") + (updating and "+" or "") ) self.mode = self.fileio_mode + ('t' if text else '') + ('b' if binary else '') self.creating = creating self.reading = reading self.writing = writing self.appending = appending self.updating = updating self.text = text self.binary = binary self.can_write = can_write self.can_read = reading or updating self.native = ( not self.text and not self.binary # Neither t nor b given. and not encoding and not errors # And no encoding or error handling either. ) self.universal = universal self.buffering = buffering self.encoding = encoding self.errors = errors self.newline = newline self.closefd = closefd self.atomic_write = atomic_write default_buffer_size = io.DEFAULT_BUFFER_SIZE _opened = None _opened_raw = None def is_fd(self): return isinstance(self._fobj, integer_types) def opened(self): """ Return the :meth:`wrapped` file object. """ if self._opened is None: raw = self.opened_raw() try: self._opened = self.__wrapped(raw) except: # XXX: This might be a bug? Could we wind up closing # something we shouldn't close? 
raw.close() raise return self._opened def _raw_object_is_new(self, raw): return self._fobj is not raw def opened_raw(self): if self._opened_raw is None: self._opened_raw = self._do_open_raw() return self._opened_raw def _do_open_raw(self): if hasattr(self._fobj, 'fileno'): return self._fobj # io.FileIO doesn't allow assigning to its __class__, # and we can't know for sure here whether we need the atomic write() # method or not (it depends on the layers on top of us), # so we use a subclass that *does* allow assigning. return FileIO(self._fobj, self.fileio_mode, self.closefd) @staticmethod def is_buffered(stream): return ( # buffering happens internally in the text codecs isinstance(stream, (io.BufferedIOBase, io.TextIOBase)) or (hasattr(stream, 'buffer') and stream.buffer is not None) ) @classmethod def buffer_size_for_stream(cls, stream): result = cls.default_buffer_size try: bs = os.fstat(stream.fileno()).st_blksize except (OSError, AttributeError): pass else: if bs > 1: result = bs return result def __buffered(self, stream, buffering): if self.updating: Buffer = io.BufferedRandom elif self.creating or self.writing or self.appending: Buffer = io.BufferedWriter elif self.reading: Buffer = io.BufferedReader else: # prgama: no cover raise ValueError("unknown mode: %r" % self.mode) try: result = Buffer(stream, buffering) except AttributeError: # Python 2 file() objects don't have the readable/writable # attributes. But they handle their own buffering. result = stream return result def _make_atomic_write(self, result, raw): # The idea was to swizzle the class with one that defines # write() to call writeall(). This avoids setting any # attribute on the return object, avoids an additional layer # of proxying, and avoids any reference cycles (if setting a # method on the object). # # However, this is not possible with the built-in io classes # (static types defined in C cannot have __class__ assigned). # Fortunately, we need this only for the specific case of # opening a file descriptor (subprocess.py) on Python 2, in # which we fully control the types involved. # # So rather than attempt that, we only implement exactly what we need. if result is not raw or self._raw_object_is_new(raw): if result.__class__ is FileIO: result.__class__ = WriteallFileIO else: # pragma: no cover raise NotImplementedError( "Don't know how to make %s have atomic write. " "Please open a gevent issue with your use-case." % ( result ) ) return result def __wrapped(self, raw): """ Wraps the raw IO object (`RawIOBase` or `io.TextIOBase`) in buffers, text decoding, and newline handling. """ if self.binary and isinstance(raw, io.TextIOBase): # Can't do it. The TextIO object will have its own buffer, and # trying to read from the raw stream or the buffer without going through # the TextIO object is likely to lead to problems with the codec. raise ValueError("Unable to perform binary IO on top of text IO stream") result = raw buffering = self.buffering line_buffering = False if buffering == 1 or buffering < 0 and raw.isatty(): buffering = -1 line_buffering = True if buffering < 0: buffering = self.buffer_size_for_stream(result) if buffering < 0: # pragma: no cover raise ValueError("invalid buffering size") if buffering != 0 and not self.is_buffered(result): # Need to wrap our own buffering around it. If it # is already buffered, don't do so. result = self.__buffered(result, buffering) if not self.binary: # Either native or text at this point. 
# Python 2 and text mode, or Python 3 and either text or native (both are the same) if not isinstance(raw, io.TextIOBase): # Avoid double-wrapping a TextIOBase in another TextIOWrapper. # That tends not to work. See https://github.com/gevent/gevent/issues/1542 result = io.TextIOWrapper(result, self.encoding, self.errors, self.newline, line_buffering) if result is not raw or self._raw_object_is_new(raw): # Set the mode, if possible, but only if we created a new # object. try: result.mode = self.mode except (AttributeError, TypeError): # AttributeError: No such attribute # TypeError: Readonly attribute (py2) pass if ( self.atomic_write and not self.is_buffered(result) and not isinstance(result, WriteIsWriteallMixin) ): # Let subclasses have a say in how they make this atomic, and # whether or not they do so even if we're actually returning the raw object. result = self._make_atomic_write(result, raw) return result class _ClosedIO(object): # Used for FileObjectBase._io when FOB.close() # is called. Lets us drop references to ``_io`` # for GC/resource cleanup reasons, but keeps some useful # information around. __slots__ = ('name',) def __init__(self, io_obj): try: self.name = io_obj.name except AttributeError: pass def __getattr__(self, name): if name == 'name': # We didn't set it in __init__ because there wasn't one raise AttributeError raise FileObjectClosed def __bool__(self): return False __nonzero__ = __bool__ class FileObjectBase(object): """ Internal base class to ensure a level of consistency between :class:`~.FileObjectPosix`, :class:`~.FileObjectThread` and :class:`~.FileObjectBlock`. """ # List of methods we delegate to the wrapping IO object, if they # implement them and we do not. _delegate_methods = ( # General methods 'flush', 'fileno', 'writable', 'readable', 'seek', 'seekable', 'tell', # Read 'read', 'readline', 'readlines', 'read1', 'readinto', # Write. # Note that we do not extend WriteallMixin, # so writeall will be copied, if it exists, and # wrapped. 'write', 'writeall', 'writelines', 'truncate', ) _io = None def __init__(self, descriptor): # type: (OpenDescriptor) -> None self._io = descriptor.opened() # We don't actually use this property ourself, but we save it (and # pass it along) for compatibility. self._close = descriptor.closefd self._do_delegate_methods() io = property(lambda s: s._io, # Historically we either hand-wrote all the delegation methods # to use self.io, or we simply used __getattr__ to look them up at # runtime. This meant people could change the io attribute on the fly # and it would mostly work (subprocess.py used to do that). We don't recommend # that, but we still support it. lambda s, nv: setattr(s, '_io', nv) or s._do_delegate_methods()) def _do_delegate_methods(self): for meth_name in self._delegate_methods: meth = getattr(self._io, meth_name, None) implemented_by_class = hasattr(type(self), meth_name) if meth and not implemented_by_class: setattr(self, meth_name, self._wrap_method(meth)) elif hasattr(self, meth_name) and not implemented_by_class: delattr(self, meth_name) def _wrap_method(self, method): """ Wrap a method we're copying into our dictionary from the underlying io object to do something special or different, if necessary. 
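        For example, a subclass could (as a purely illustrative sketch;
        the real override in this module is ``FileObjectThread._wrap_method``,
        which also dispatches the call to a threadpool) serialize every
        delegated call with a lock the subclass provides::

            def _wrap_method(self, method):
                lock = self.lock  # assumed to be provided by the subclass

                @functools.wraps(method)
                def locked_method(*args, **kwargs):
                    # Serialize access to the underlying io object.
                    with lock:
                        return method(*args, **kwargs)

                return locked_method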
""" return method @property def closed(self): """True if the file is closed""" return isinstance(self._io, _ClosedIO) def close(self): if isinstance(self._io, _ClosedIO): return fobj = self._io self._io = _ClosedIO(self._io) try: self._do_close(fobj, self._close) finally: fobj = None # Remove delegate methods to drop remaining references to # _io. d = self.__dict__ for meth_name in self._delegate_methods: d.pop(meth_name, None) def _do_close(self, fobj, closefd): raise NotImplementedError() def __getattr__(self, name): return getattr(self._io, name) def __repr__(self): return '<%s at 0x%x %s_fobj=%r%s>' % ( self.__class__.__name__, id(self), 'closed' if self.closed else '', self.io, self._extra_repr() ) def _extra_repr(self): return '' def __enter__(self): return self def __exit__(self, *args): self.close() def __iter__(self): return self def __next__(self): line = self.readline() if not line: raise StopIteration return line next = __next__ def __bool__(self): return True __nonzero__ = __bool__ class FileObjectBlock(FileObjectBase): """ FileObjectBlock() A simple synchronous wrapper around a file object. Adds no concurrency or gevent compatibility. """ def __init__(self, fobj, *args, **kwargs): descriptor = OpenDescriptor(fobj, *args, **kwargs) FileObjectBase.__init__(self, descriptor) def _do_close(self, fobj, closefd): fobj.close() class FileObjectThread(FileObjectBase): """ FileObjectThread() A file-like object wrapping another file-like object, performing all blocking operations on that object in a background thread. .. caution:: Attempting to change the threadpool or lock of an existing FileObjectThread has undefined consequences. .. versionchanged:: 1.1b1 The file object is closed using the threadpool. Note that whether or not this action is synchronous or asynchronous is not documented. """ def __init__(self, *args, **kwargs): """ :keyword bool lock: If True (the default) then all operations will be performed one-by-one. Note that this does not guarantee that, if using this file object from multiple threads/greenlets, operations will be performed in any particular order, only that no two operations will be attempted at the same time. You can also pass your own :class:`gevent.lock.Semaphore` to synchronize file operations with an external resource. :keyword bool closefd: If True (the default) then when this object is closed, the underlying object is closed as well. If *fobj* is a path, then *closefd* must be True. """ lock = kwargs.pop('lock', True) threadpool = kwargs.pop('threadpool', None) descriptor = OpenDescriptor(*args, **kwargs) self.threadpool = threadpool or get_hub().threadpool self.lock = lock if self.lock is True: self.lock = Semaphore() elif not self.lock: self.lock = DummySemaphore() if not hasattr(self.lock, '__enter__'): raise TypeError('Expected a Semaphore or boolean, got %r' % type(self.lock)) self.__io_holder = [descriptor.opened()] # signal for _wrap_method FileObjectBase.__init__(self, descriptor) def _do_close(self, fobj, closefd): self.__io_holder[0] = None # for _wrap_method try: with self.lock: self.threadpool.apply(fobj.flush) finally: if closefd: # Note that we're not taking the lock; older code # did fobj.close() without going through the threadpool at all, # so acquiring the lock could potentially introduce deadlocks # that weren't present before. Avoiding the lock doesn't make # the existing race condition any worse. 
# We wrap the close in an exception handler and re-raise directly # to avoid the (common, expected) IOError from being logged by the pool def close(_fobj=fobj): try: _fobj.close() except: # pylint:disable=bare-except # pylint:disable-next=return-in-finally return sys.exc_info() finally: _fobj = None del fobj exc_info = self.threadpool.apply(close) del close if exc_info: reraise(*exc_info) def _do_delegate_methods(self): FileObjectBase._do_delegate_methods(self) self.__io_holder[0] = self._io def _extra_repr(self): return ' threadpool=%r' % (self.threadpool,) def _wrap_method(self, method): # NOTE: We are careful to avoid introducing a refcycle # within self. Our wrapper cannot refer to self. io_holder = self.__io_holder lock = self.lock threadpool = self.threadpool @functools.wraps(method) def thread_method(*args, **kwargs): if io_holder[0] is None: # This is different than FileObjectPosix, etc, # because we want to save the expensive trip through # the threadpool. raise FileObjectClosed with lock: return threadpool.apply(method, args, kwargs) return thread_method gevent-24.11.1/src/gevent/_fileobjectposix.py000066400000000000000000000310501471441230600211320ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import print_function import os import sys from io import BytesIO from io import DEFAULT_BUFFER_SIZE from io import FileIO from io import RawIOBase from io import UnsupportedOperation from gevent._compat import reraise from gevent._fileobjectcommon import cancel_wait_ex from gevent._fileobjectcommon import FileObjectBase from gevent._fileobjectcommon import OpenDescriptor from gevent._fileobjectcommon import WriteIsWriteallMixin from gevent._hub_primitives import wait_on_watcher from gevent.hub import get_hub from gevent.os import _read from gevent.os import _write from gevent.os import ignored_errors from gevent.os import make_nonblocking class GreenFileDescriptorIO(RawIOBase): # Internal, undocumented, class. All that's documented is that this # is a IOBase object. Constructor is private. # Note that RawIOBase has a __del__ method that calls # self.close(). (In C implementations like CPython, this is # the type's tp_dealloc slot; prior to Python 3, the object doesn't # appear to have a __del__ method, even though it functionally does) _read_watcher = None _write_watcher = None _closed = False _seekable = None _keep_alive = None # An object that needs to live as long as we do. def __init__(self, fileno, open_descriptor, closefd=True): RawIOBase.__init__(self) self._closefd = closefd self._fileno = fileno self.name = fileno self.mode = open_descriptor.fileio_mode make_nonblocking(fileno) readable = open_descriptor.can_read writable = open_descriptor.can_write self.hub = get_hub() io_watcher = self.hub.loop.io try: if readable: self._read_watcher = io_watcher(fileno, 1) if writable: self._write_watcher = io_watcher(fileno, 2) except: # If anything goes wrong, it's important to go ahead and # close these watchers *now*, especially under libuv, so # that they don't get eventually reclaimed by the garbage # collector at some random time, thanks to the C level # slot (even though we don't seem to have any actual references # at the Python level). 
Previously, if we didn't close now, # that random close in the future would cause issues if we had duplicated # the fileno (if a wrapping with statement had closed an open fileobject, # for example) # test__fileobject can show a failure if this doesn't happen # TRAVIS=true GEVENT_LOOP=libuv python -m gevent.tests.test__fileobject \ # TestFileObjectPosix.test_seek TestFileObjectThread.test_bufsize_0 self.close() raise def isatty(self): # TODO: Couldn't we just subclass FileIO? f = FileIO(self._fileno, 'r', False) try: return f.isatty() finally: f.close() def readable(self): return self._read_watcher is not None def writable(self): return self._write_watcher is not None def seekable(self): if self._seekable is None: try: os.lseek(self._fileno, 0, os.SEEK_CUR) except OSError: self._seekable = False else: self._seekable = True return self._seekable def fileno(self): return self._fileno @property def closed(self): return self._closed def __destroy_events(self): read_event = self._read_watcher write_event = self._write_watcher hub = self.hub self.hub = self._read_watcher = self._write_watcher = None hub.cancel_waits_close_and_then( (read_event, write_event), cancel_wait_ex, self.__finish_close, self._closefd, self._fileno, self._keep_alive ) def close(self): if self._closed: return self.flush() # TODO: Can we use 'read_event is not None and write_event is # not None' to mean _closed? self._closed = True try: self.__destroy_events() finally: self._fileno = self._keep_alive = None @staticmethod def __finish_close(closefd, fileno, keep_alive): try: if closefd: os.close(fileno) finally: if hasattr(keep_alive, 'close'): keep_alive.close() # RawIOBase provides a 'read' method that will call readall() if # the `size` was missing or -1 and otherwise call readinto(). We # want to take advantage of this to avoid single byte reads when # possible. This is highlighted by a bug in BufferedIOReader that # calls read() in a loop when its readall() method is invoked; # this was fixed in Python 3.3, but we still need our workaround for 2.7. See # https://github.com/gevent/gevent/issues/675) def __read(self, n): if self._read_watcher is None: raise UnsupportedOperation('read') while 1: try: return _read(self._fileno, n) except OSError as ex: if ex.args[0] not in ignored_errors: raise wait_on_watcher(self._read_watcher, None, None, self.hub) def readall(self): ret = BytesIO() while True: try: data = self.__read(DEFAULT_BUFFER_SIZE) except cancel_wait_ex: # We were closed while reading. A buffered reader # just returns what it has handy at that point, # so we do to. data = None if not data: break ret.write(data) return ret.getvalue() def readinto(self, b): data = self.__read(len(b)) n = len(data) try: b[:n] = data except TypeError as err: import array if not isinstance(b, array.array): raise err b[:n] = array.array(b'b', data) return n def write(self, b): if self._write_watcher is None: raise UnsupportedOperation('write') while True: try: return _write(self._fileno, b) except OSError as ex: if ex.args[0] not in ignored_errors: raise wait_on_watcher(self._write_watcher, None, None, self.hub) def seek(self, offset, whence=0): try: return os.lseek(self._fileno, offset, whence) except IOError: # pylint:disable=try-except-raise raise except OSError as ex: # pylint:disable=duplicate-except # Python 2.x # make sure on Python 2.x we raise an IOError # as documented for RawIOBase. 
# See https://github.com/gevent/gevent/issues/1323 reraise(IOError, IOError(*ex.args), sys.exc_info()[2]) def __repr__(self): return "<%s at 0x%x fileno=%s mode=%r>" % ( type(self).__name__, id(self), self._fileno, self.mode ) class GreenFileDescriptorIOWriteall(WriteIsWriteallMixin, GreenFileDescriptorIO): pass class GreenOpenDescriptor(OpenDescriptor): def _do_open_raw(self): if self.is_fd(): fileio = GreenFileDescriptorIO(self._fobj, self, closefd=self.closefd) else: # Either an existing file object or a path string (which # we open to get a file object). In either case, the other object # owns the descriptor and we must not close it. closefd = False raw = OpenDescriptor._do_open_raw(self) fileno = raw.fileno() fileio = GreenFileDescriptorIO(fileno, self, closefd=closefd) fileio._keep_alive = raw # We can usually do better for a name, though. try: fileio.name = raw.name except AttributeError: del fileio.name return fileio def _make_atomic_write(self, result, raw): # Our return value from _do_open_raw is always a new # object that we own, so we're always free to change # the class. assert result is not raw or self._raw_object_is_new(raw) if result.__class__ is GreenFileDescriptorIO: result.__class__ = GreenFileDescriptorIOWriteall else: result = OpenDescriptor._make_atomic_write(self, result, raw) return result class FileObjectPosix(FileObjectBase): """ FileObjectPosix() A file-like object that operates on non-blocking files but provides a synchronous, cooperative interface. .. caution:: This object is only effective wrapping files that can be used meaningfully with :func:`select.select` such as sockets and pipes. In general, on most platforms, operations on regular files (e.g., ``open('a_file.txt')``) are considered non-blocking already, even though they can take some time to complete as data is copied to the kernel and flushed to disk: this time is relatively bounded compared to sockets or pipes, though. A :func:`~os.read` or :func:`~os.write` call on such a file will still effectively block for some small period of time. Therefore, wrapping this class around a regular file is unlikely to make IO gevent-friendly: reading or writing large amounts of data could still block the event loop. If you'll be working with regular files and doing IO in large chunks, you may consider using :class:`~gevent.fileobject.FileObjectThread` or :func:`~gevent.os.tp_read` and :func:`~gevent.os.tp_write` to bypass this concern. .. tip:: Although this object provides a :meth:`fileno` method and so can itself be passed to :func:`fcntl.fcntl`, setting the :data:`os.O_NONBLOCK` flag will have no effect (reads will still block the greenlet, although other greenlets can run). However, removing that flag *will cause this object to no longer be cooperative* (other greenlets will no longer run). You can use the internal ``fileio`` attribute of this object (a :class:`io.RawIOBase`) to perform non-blocking byte reads. Note, however, that once you begin directly using this attribute, the results from using methods of *this* object are undefined, especially in text mode. (See :issue:`222`.) .. versionchanged:: 1.1 Now uses the :mod:`io` package internally. Under Python 2, previously used the undocumented class :class:`socket._fileobject`. This provides better file-like semantics (and portability to Python 3). .. versionchanged:: 1.2a1 Document the ``fileio`` attribute for non-blocking reads. .. versionchanged:: 1.2a1 A bufsize of 0 in write mode is no longer forced to be 1. 
Instead, the underlying buffer is flushed after every write operation to simulate a bufsize of 0. In gevent 1.0, a bufsize of 0 was flushed when a newline was written, while in gevent 1.1 it was flushed when more than one byte was written. Note that this may have performance impacts. .. versionchanged:: 1.3a1 On Python 2, enabling universal newlines no longer forces unicode IO. .. versionchanged:: 1.5 The default value for *mode* was changed from ``rb`` to ``r``. This is consistent with :func:`open`, :func:`io.open`, and :class:`~.FileObjectThread`, which is the default ``FileObject`` on some platforms. .. versionchanged:: 1.5 Stop forcing buffering. Previously, given a ``buffering=0`` argument, *buffering* would be set to 1, and ``buffering=1`` would be forced to the default buffer size. This was a workaround for a long-standing concurrency issue. Now the *buffering* argument is interpreted as intended. """ default_bufsize = DEFAULT_BUFFER_SIZE def __init__(self, *args, **kwargs): descriptor = GreenOpenDescriptor(*args, **kwargs) FileObjectBase.__init__(self, descriptor) # This attribute is documented as available for non-blocking reads. self.fileio = descriptor.opened_raw() def _do_close(self, fobj, closefd): try: fobj.close() # self.fileio already knows whether or not to close the # file descriptor self.fileio.close() finally: self.fileio = None gevent-24.11.1/src/gevent/_gevent_c_abstract_linkable.pxd000066400000000000000000000046721471441230600234340ustar00rootroot00000000000000cimport cython from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub from gevent._gevent_c_hub_local cimport get_hub_if_exists cdef InvalidSwitchError cdef InvalidThreadUseError cdef Timeout cdef _get_thread_ident cdef bint _greenlet_imported cdef get_objects cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. greenlet PyGreenlet_GetCurrent() void PyGreenlet_Import() cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True cdef void _init() cdef dict get_roots_and_hubs() cdef class _FakeNotifier(object): cdef bint pending cdef class AbstractLinkable(object): # We declare the __weakref__ here in the base (even though # that's not really what we want) as a workaround for a Cython # issue we see reliably on 3.7b4 and sometimes on 3.6. 
See # https://github.com/cython/cython/issues/2270 cdef object __weakref__ cdef readonly SwitchOutGreenletWithLoop hub cdef _notifier cdef list _links cdef bint _notify_all cpdef linkcount(self) cpdef rawlink(self, callback) cpdef bint ready(self) cpdef unlink(self, callback) cdef _check_and_notify(self) cdef SwitchOutGreenletWithLoop _capture_hub(self, bint create) cdef __wait_to_be_notified(self, bint rawlink) cdef void _quiet_unlink_all(self, obj) # suppress exceptions cdef int _switch_to_hub(self, the_hub) except -1 @cython.nonecheck(False) cdef list _notify_link_list(self, list links) @cython.nonecheck(False) cpdef _notify_links(self, list arrived_while_waiting) @cython.locals(hub=SwitchOutGreenletWithLoop) cdef _handle_unswitched_notifications(self, list unswitched) cdef __print_unswitched_warning(self, link, bint printed_tb) cpdef _drop_lock_for_switch_out(self) cpdef _acquire_lock_for_switch_in(self) cdef _wait_core(self, timeout, catch=*) cdef _wait_return_value(self, bint waited, bint wait_success) cdef _wait(self, timeout=*) # Unreleated utilities cdef _allocate_lock(self) cdef greenlet _getcurrent(self) cdef _get_thread_ident(self) gevent-24.11.1/src/gevent/_gevent_c_greenlet_primitives.pxd000066400000000000000000000022161471441230600240400ustar00rootroot00000000000000cimport cython # This file must not cimport anything from gevent. cdef get_objects cdef wref cdef BlockingSwitchOutError cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. greenlet PyGreenlet_GetCurrent() object PyGreenlet_Switch(greenlet self, void* args, void* kwargs) void PyGreenlet_Import() @cython.final cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef bint _greenlet_imported cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True cdef inline object _greenlet_switch(greenlet self): return PyGreenlet_Switch(self, NULL, NULL) cdef class TrackedRawGreenlet(greenlet): pass cdef class SwitchOutGreenletWithLoop(TrackedRawGreenlet): cdef public loop cpdef switch(self) cpdef switch_out(self) cpdef list get_reachable_greenlets() cdef type _memoryview cdef type _buffer cpdef get_memory(data) gevent-24.11.1/src/gevent/_gevent_c_hub_local.pxd000066400000000000000000000007311471441230600217100ustar00rootroot00000000000000from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop cdef _threadlocal cpdef get_hub_class() cpdef SwitchOutGreenletWithLoop get_hub_if_exists() cpdef set_hub(SwitchOutGreenletWithLoop hub) cpdef get_loop() cpdef set_loop(loop) cpdef SwitchOutGreenletWithLoop get_hub() # XXX: TODO: Move the definition of TrackedRawGreenlet # into a file that can be cython compiled so get_hub can # return that. 
cpdef SwitchOutGreenletWithLoop get_hub_noargs() gevent-24.11.1/src/gevent/_gevent_c_hub_primitives.pxd000066400000000000000000000037711471441230600230200ustar00rootroot00000000000000cimport cython from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub from gevent._gevent_c_waiter cimport Waiter from gevent._gevent_c_waiter cimport MultipleWaiter cdef InvalidSwitchError cdef _waiter cdef _greenlet_primitives cdef traceback cdef _timeout_error cdef Timeout cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. greenlet PyGreenlet_GetCurrent() void PyGreenlet_Import() @cython.final cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef bint _greenlet_imported cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True cdef class WaitOperationsGreenlet(SwitchOutGreenletWithLoop): # The Hub will extend this class. cpdef wait(self, watcher) cpdef cancel_wait(self, watcher, error, close_watcher=*) cpdef _cancel_wait(self, watcher, error, close_watcher) cdef class _WaitIterator: cdef SwitchOutGreenletWithLoop _hub cdef MultipleWaiter _waiter cdef _switch cdef _timeout cdef _objects cdef _timer cdef Py_ssize_t _count cdef bint _begun cdef _begin(self) cdef _cleanup(self) cpdef __enter__(self) cpdef __exit__(self, typ, value, tb) cpdef iwait_on_objects(objects, timeout=*, count=*) cpdef wait_on_objects(objects=*, timeout=*, count=*) cdef _primitive_wait(watcher, timeout, timeout_exc, WaitOperationsGreenlet hub) cpdef wait_on_watcher(watcher, timeout=*, timeout_exc=*, WaitOperationsGreenlet hub=*) cpdef wait_read(fileno, timeout=*, timeout_exc=*) cpdef wait_write(fileno, timeout=*, timeout_exc=*, event=*) cpdef wait_readwrite(fileno, timeout=*, timeout_exc=*, event=*) cpdef wait_on_socket(socket, watcher, timeout_exc=*) gevent-24.11.1/src/gevent/_gevent_c_ident.pxd000066400000000000000000000007601471441230600210650ustar00rootroot00000000000000cimport cython cdef extern from "Python.h": ctypedef class weakref.ref [object PyWeakReference]: pass cdef heappop cdef heappush cdef object WeakKeyDictionary cdef type ref @cython.internal @cython.final cdef class ValuedWeakRef(ref): cdef object value @cython.final cdef class IdentRegistry: cdef object _registry cdef list _available_idents @cython.final cpdef object get_ident(self, obj) @cython.final cpdef _return_ident(self, ValuedWeakRef ref) gevent-24.11.1/src/gevent/_gevent_c_imap.pxd000066400000000000000000000021471471441230600207110ustar00rootroot00000000000000cimport cython from gevent._gevent_cgreenlet cimport Greenlet from gevent._gevent_c_semaphore cimport Semaphore from gevent._gevent_cqueue cimport UnboundQueue @cython.freelist(100) @cython.internal @cython.final cdef class Failure: cdef readonly exc cdef raise_exception cdef inline _raise_exc(Failure failure) cdef class IMapUnordered(Greenlet): cdef bint _zipped cdef func cdef iterable cdef spawn cdef Semaphore _result_semaphore cdef int _outstanding_tasks cdef int _max_index cdef readonly UnboundQueue queue cdef readonly bint finished cdef _inext(self) cdef _ispawn(self, func, item, int item_index) # Passed to greenlet.link cpdef _on_result(self, greenlet) # Called directly cdef _on_finish(self, exception) cdef _iqueue_value_for_success(self, greenlet) cdef 
_iqueue_value_for_failure(self, greenlet) cdef _iqueue_value_for_self_finished(self) cdef _iqueue_value_for_self_failure(self, exception) cdef class IMap(IMapUnordered): cdef int index cdef dict _results @cython.locals(index=int) cdef _inext(self) gevent-24.11.1/src/gevent/_gevent_c_semaphore.pxd000066400000000000000000000046421471441230600217500ustar00rootroot00000000000000cimport cython from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop from gevent._gevent_c_abstract_linkable cimport AbstractLinkable from gevent._gevent_c_hub_local cimport get_hub_if_exists from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub cdef InvalidThreadUseError cdef LoopExit cdef Timeout cdef _native_sleep cdef monotonic cdef spawn_raw cdef _UNSET cdef _MULTI cdef class _LockReleaseLink(object): cdef object lock cdef class Semaphore(AbstractLinkable): cdef public int counter # On Python 3, thread.get_ident() returns a ``unsigned long``; on # Python 2, it's a plain ``long``. We can conditionally change # the type here (depending on which version is cythonizing the # .py files), but: our algorithm for testing whether it has been # set or not was initially written with ``long`` in mind and used # -1 as a sentinel. That doesn't work on Python 3. Thus, we can't # use the native C type and must keep the wrapped Python object, which # we can test for None. cdef object _multithreaded cpdef bint locked(self) cpdef int release(self) except -1000 # We don't really want this to be public, but # threadpool uses it cpdef _start_notify(self) cpdef int wait(self, object timeout=*) except -1000 @cython.locals( success=bint, e=Exception, ex=Exception, args=tuple, ) cpdef bint acquire(self, bint blocking=*, object timeout=*) except -1000 cpdef __enter__(self) cpdef __exit__(self, object t, object v, object tb) @cython.locals( hub_for_this_thread=SwitchOutGreenletWithLoop, owning_hub=SwitchOutGreenletWithLoop, ) cdef __acquire_from_other_thread(self, tuple args, bint blocking, timeout) cpdef __acquire_from_other_thread_cb(self, list results, bint blocking, timeout, thread_lock) cdef __add_link(self, link) cdef __acquire_using_two_hubs(self, SwitchOutGreenletWithLoop hub_for_this_thread, current_greenlet, timeout) cdef __acquire_using_other_hub(self, SwitchOutGreenletWithLoop owning_hub, timeout) cdef bint __acquire_without_hubs(self, timeout) cdef bint __spin_on_native_lock(self, thread_lock, timeout) cdef class BoundedSemaphore(Semaphore): cdef readonly int _initial_value cpdef int release(self) except -1000 gevent-24.11.1/src/gevent/_gevent_c_tracer.pxd000066400000000000000000000015541471441230600212440ustar00rootroot00000000000000cimport cython cdef sys cdef traceback cdef settrace cdef getcurrent cdef format_run_info cdef perf_counter cdef gmctime cdef class GreenletTracer: cdef readonly object active_greenlet cdef readonly object previous_trace_function cdef readonly Py_ssize_t greenlet_switch_counter cdef bint _killed cpdef _trace(self, str event, tuple args) @cython.locals(did_switch=bint) cpdef did_block_hub(self, hub) cpdef kill(self) @cython.internal cdef class _HubTracer(GreenletTracer): cdef readonly object hub cdef readonly double max_blocking_time cdef class HubSwitchTracer(_HubTracer): cdef readonly double last_entered_hub cdef class MaxSwitchTracer(_HubTracer): cdef readonly double max_blocking cdef readonly double last_switch @cython.locals(switched_at=double) cpdef _trace(self, str event, tuple args) 
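# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the gevent distribution): the declarations
# above describe GreenletTracer and friends, which hook greenlet switches to
# detect a greenlet that blocks the hub. The snippet below shows the core
# mechanism -- ``greenlet.settrace`` -- in isolation. The class name
# ``SwitchCounter`` is invented for this example; it is a simplified sketch
# of the idea, not gevent's actual implementation.

from greenlet import settrace


class SwitchCounter(object):
    """Count greenlet switches, similar in spirit to
    ``GreenletTracer.greenlet_switch_counter``."""

    def __init__(self):
        self.greenlet_switch_counter = 0
        self.active_greenlet = None
        # Install ourself as the trace function, remembering any previously
        # installed tracer so we can chain to it and later restore it.
        self.previous_trace_function = settrace(self)

    def __call__(self, event, args):
        # ``args`` is ``(origin, target)`` for both 'switch' and 'throw'.
        if event in ('switch', 'throw'):
            self.greenlet_switch_counter += 1
            self.active_greenlet = args[1]
        if self.previous_trace_function is not None:
            self.previous_trace_function(event, args)

    def kill(self):
        # Restore whatever tracer was installed before us.
        settrace(self.previous_trace_function)
# ---------------------------------------------------------------------------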
gevent-24.11.1/src/gevent/_gevent_c_waiter.pxd000066400000000000000000000021711471441230600212530ustar00rootroot00000000000000cimport cython from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub cdef sys cdef ConcurrentObjectUseError cdef bint _greenlet_imported cdef _NONE cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. greenlet PyGreenlet_GetCurrent() void PyGreenlet_Import() cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True cdef class Waiter: cdef readonly SwitchOutGreenletWithLoop hub cdef readonly greenlet greenlet cdef readonly value cdef _exception cpdef get(self) cpdef clear(self) # cpdef of switch leads to parameter errors... #cpdef switch(self, value) @cython.final @cython.internal cdef class MultipleWaiter(Waiter): cdef list _values gevent-24.11.1/src/gevent/_gevent_cevent.pxd000066400000000000000000000012041471441230600207360ustar00rootroot00000000000000cimport cython from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub from gevent._gevent_c_abstract_linkable cimport AbstractLinkable cdef _None cdef reraise cdef dump_traceback cdef load_traceback cdef Timeout cdef class Event(AbstractLinkable): cdef bint _flag cdef class AsyncResult(AbstractLinkable): cdef readonly _value cdef readonly tuple _exc_info # For the use of _imap.py cdef public int _imap_task_index cpdef get(self, block=*, timeout=*) cpdef bint successful(self) cpdef wait(self, timeout=*) cpdef bint done(self) cpdef bint cancel(self) cpdef bint cancelled(self) gevent-24.11.1/src/gevent/_gevent_cgreenlet.pxd000066400000000000000000000150451471441230600214320ustar00rootroot00000000000000# cython: auto_pickle=False cimport cython from cpython.ref cimport Py_DECREF from gevent._gevent_c_ident cimport IdentRegistry from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub from gevent._gevent_c_waiter cimport Waiter from gevent._gevent_c_greenlet_primitives cimport SwitchOutGreenletWithLoop cdef bint _PYPY cdef sys_getframe cdef sys_exc_info cdef Timeout cdef GreenletExit cdef InvalidSwitchError cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: # Defining this as a void* means we can't access it as a python attribute # in the Python code; but we can't define it as a greenlet because that doesn't # properly handle the case that it can be NULL. So instead we inline a getparent # function that does the same thing as the green_getparent accessor but without # going through the overhead of generic attribute lookup. #cdef void* parent pass # These are actually macros and so must be included # (defined) in each .pxd, as are the two functions # that call them. greenlet PyGreenlet_GetCurrent() void* PyGreenlet_GetParent(greenlet) void PyGreenlet_Import() @cython.final cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef inline object get_generic_parent(greenlet s): # We don't use any typed functions on the return of this, # so save the type check by making it just an object. 
cdef object result cdef void* parent = PyGreenlet_GetParent(s) if parent != NULL: # The cast will perform an incref; but the GetParent # function already did an incref if we got it (and not NULL). # Therefore, we must DECREF immediately. result = parent Py_DECREF(result) return result cdef inline SwitchOutGreenletWithLoop get_my_hub(greenlet s): # This one we do want type checked on the return value. # Must not be called with s = None cdef object result cdef void* parent = PyGreenlet_GetParent(s) if parent != NULL: result = parent # See above Py_DECREF(result) return result cdef bint _greenlet_imported cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True ctypedef object CodeType ctypedef object FrameType cdef extern from "_compat.h": int Gevent_PyFrame_GetLineNumber(FrameType frame) CodeType Gevent_PyFrame_GetCode(FrameType frame) object Gevent_PyFrame_GetBack(FrameType frame) # We don't do this: # # ctypedef class types.FrameType [object PyFrameObject]: # pass # # to avoid "RuntimeWarning: types.FrameType size changed, may # indicate binary incompatibility. Expected 56 from C header, got # 120 from PyObject" on Python 3.11. That makes the functions that # really require that kind of object not safe and capable of crashing the # interpreter. # # However, as of cython 3.0a11, that results in a failure to compile if # we have a local variable typed as FrameType, so we can't do that. # # Also, it removes a layer of type checking and makes it possible to crash # the interpreter if you call these functions with something that's not a PyFrameObject. # Don't do that. cdef void _init() cdef class SpawnedLink: cdef public object callback @cython.final cdef class SuccessSpawnedLink(SpawnedLink): pass @cython.final cdef class FailureSpawnedLink(SpawnedLink): pass @cython.final @cython.internal @cython.freelist(1000) cdef class _Frame: cdef readonly CodeType f_code cdef readonly int f_lineno cdef readonly _Frame f_back @cython.final @cython.locals(# frame=FrameType, # See above about why we cannot do this newest_Frame=_Frame, newer_Frame=_Frame, older_Frame=_Frame) cdef _Frame _extract_stack(int limit) cdef class Greenlet(greenlet): cdef readonly object value cdef readonly tuple args cdef readonly dict kwargs cdef readonly object spawning_greenlet cdef readonly _Frame spawning_stack cdef public dict spawn_tree_locals cdef list _links cdef tuple _exc_info cdef object _notifier cdef object _start_event cdef str _formatted_info cdef object _ident cpdef bint has_links(self) cpdef join(self, timeout=*) cpdef kill(self, exception=*, block=*, timeout=*) cpdef bint ready(self) cpdef bint successful(self) cpdef rawlink(self, object callback) cpdef str _formatinfo(self) # Helper for killall() cpdef bint _maybe_kill_before_start(self, exception) # This is a helper function for a @property getter; # defining locals() for a @property doesn't seem to work. 
@cython.locals(reg=IdentRegistry) cdef _get_minimal_ident(self) cdef bint __started_but_aborted(self) cdef bint __start_cancelled_by_kill(self) cdef bint __start_pending(self) cdef bint __never_started_or_killed(self) cdef bint __start_completed(self) cdef __handle_death_before_start(self, tuple args) @cython.final cdef inline void __free(self) cdef __cancel_start(self) cdef inline __report_result(self, object result) cdef inline __report_error(self, tuple exc_info) # This is used as the target of a callback # from the loop, and so needs to be a cpdef cpdef _notify_links(self) # Hmm, declaring _raise_exception causes issues when _imap # is also compiled. # TypeError: wrap() takes exactly one argument (0 given) # cpdef _raise_exception(self) # Declare a bunch of imports as cdefs so they can # be accessed directly as static vars without # doing a module global lookup. This is especially important # for spawning greenlets. cdef _greenlet__init__ cdef _threadlocal cdef get_hub_class cdef wref cdef dump_traceback cdef load_traceback cdef Waiter cdef wait cdef iwait cdef reraise cdef GEVENT_CONFIG @cython.final @cython.internal cdef class _dummy_event: cdef readonly bint pending cdef readonly bint active cpdef stop(self) cpdef start(self, cb) cpdef close(self) cdef _dummy_event _cancelled_start_event cdef _dummy_event _start_completed_event cpdef _kill(Greenlet glet, object exception, object waiter) @cython.locals(diehards=list) cdef _killall3(list greenlets, object exception, object waiter) cdef _killall(list greenlets, object exception) @cython.locals(done=list) cpdef joinall(greenlets, timeout=*, raise_error=*, count=*) cdef set _spawn_callbacks cdef void _call_spawn_callbacks(Greenlet gr) except * gevent-24.11.1/src/gevent/_gevent_clocal.pxd000066400000000000000000000053601471441230600207160ustar00rootroot00000000000000# cython: auto_pickle=False cimport cython from gevent._gevent_cgreenlet cimport Greenlet cdef bint _PYPY cdef ref cdef copy cdef object _marker cdef str key_prefix cdef bint _greenlet_imported cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. 
greenlet PyGreenlet_GetCurrent() void PyGreenlet_Import() cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True cdef void _init() @cython.final @cython.internal cdef class _wrefdict(dict): cdef object __weakref__ @cython.final @cython.internal cdef class _greenlet_deleted: cdef object idt cdef object wrdicts @cython.final @cython.internal cdef class _local_deleted: cdef str key cdef object wrthread cdef _greenlet_deleted greenlet_deleted @cython.final @cython.internal cdef class _localimpl: cdef str key cdef dict dicts cdef tuple localargs cdef dict localkwargs cdef tuple localtypeid cdef object __weakref__ @cython.final @cython.internal cdef class _localimpl_dict_entry: cdef object wrgreenlet cdef dict localdict @cython.locals(localdict=dict, key=str, greenlet_deleted=_greenlet_deleted, local_deleted=_local_deleted) cdef dict _localimpl_create_dict(_localimpl self, greenlet greenlet, object idt) cdef set _local_attrs cdef class local: cdef _localimpl _local__impl cdef set _local_type_get_descriptors cdef set _local_type_set_or_del_descriptors cdef set _local_type_del_descriptors cdef set _local_type_set_descriptors cdef set _local_type_vars cdef type _local_type @cython.locals(entry=_localimpl_dict_entry, dct=dict, duplicate=dict, instance=local) cpdef local __copy__(local self) @cython.locals(impl=_localimpl,dct=dict, dct=dict, entry=_localimpl_dict_entry) cdef inline dict _local_get_dict(local self) @cython.locals(entry=_localimpl_dict_entry) cdef _local__copy_dict_from(local self, _localimpl impl, dict duplicate) @cython.locals(mro=list, gets=set, dels=set, set_or_del=set, type_self=type, type_attr=type, sets=set) cdef tuple _local_find_descriptors(local self) @cython.locals(result=list, local_impl=_localimpl, entry=_localimpl_dict_entry, k=str, greenlet_dict=dict) cpdef all_local_dicts_for_greenlet(greenlet greenlet) gevent-24.11.1/src/gevent/_gevent_cqueue.pxd000066400000000000000000000047401471441230600207510ustar00rootroot00000000000000cimport cython from gevent._gevent_c_waiter cimport Waiter from gevent._gevent_cevent cimport Event from gevent._gevent_c_hub_local cimport get_hub_noargs as get_hub cdef bint _greenlet_imported cdef _heappush cdef _heappop cdef _heapify cdef _Empty cdef _Full cdef Timeout cdef InvalidSwitchError cdef extern from "greenlet/greenlet.h": ctypedef class greenlet.greenlet [object PyGreenlet]: pass # These are actually macros and so much be included # (defined) in each .pxd, as are the two functions # that call them. 
greenlet PyGreenlet_GetCurrent() void PyGreenlet_Import() cdef inline greenlet getcurrent(): return PyGreenlet_GetCurrent() cdef inline void greenlet_init(): global _greenlet_imported if not _greenlet_imported: PyGreenlet_Import() _greenlet_imported = True @cython.final cdef _safe_remove(deq, item) cdef class Queue: cdef __weakref__ cdef readonly hub cdef readonly queue cdef readonly bint is_shutdown cdef getters cdef putters cdef _event_unlock cdef Py_ssize_t _maxsize cpdef _get(self) cpdef _put(self, item) cpdef _peek(self) cpdef Py_ssize_t qsize(self) cpdef bint empty(self) cpdef bint full(self) cpdef _create_queue(self, items=*) cpdef put(self, item, block=*, timeout=*) cpdef put_nowait(self, item) cdef __get_or_peek(self, method, block, timeout) cpdef get(self, block=*, timeout=*) cpdef get_nowait(self) cpdef peek(self, block=*, timeout=*) cpdef peek_nowait(self) cpdef shutdown(self, immediate=*) cdef _schedule_unlock(self) cdef _drain_for_immediate_shutdown(self) @cython.final @cython.internal cdef class ItemWaiter(Waiter): cdef readonly item cdef readonly Queue queue ### # XXX: Disabling Cython.final here pending a release > Cython 3.0.11 # because it breaks on GCC. See https://github.com/gevent/gevent/issues/2049#issuecomment-2404700280 # Restore when new cython is released. # # @cython.final ### cdef class UnboundQueue(Queue): pass cdef class PriorityQueue(Queue): pass cdef class JoinableQueue(Queue): cdef Event _cond cdef readonly int unfinished_tasks cdef _did_put_task(self) cdef class LifoQueue(JoinableQueue): pass cdef class Channel: cdef __weakref__ cdef readonly getters cdef readonly putters cdef readonly hub cdef _event_unlock cpdef get(self, block=*, timeout=*) cpdef get_nowait(self) cdef _schedule_unlock(self) gevent-24.11.1/src/gevent/_greenlet_primitives.py000066400000000000000000000110471471441230600220250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright (c) 2018 gevent. See LICENSE. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False """ A collection of primitives used by the hub, and suitable for compilation with Cython because of their frequency of use. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from weakref import ref as wref from gc import get_objects from greenlet import greenlet from gevent.exceptions import BlockingSwitchOutError # In Cython, we define these as 'cdef inline' functions. The # compilation unit cannot have a direct assignment to them (import # is assignment) without generating a 'lvalue is not valid target' # error. locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None locals()['_greenlet_switch'] = greenlet.switch __all__ = [ 'TrackedRawGreenlet', 'SwitchOutGreenletWithLoop', ] class TrackedRawGreenlet(greenlet): def __init__(self, function, parent): greenlet.__init__(self, function, parent) # See greenlet.py's Greenlet class. We capture the cheap # parts to maintain the tree structure, but we do not capture # the stack because that's too expensive for 'spawn_raw'. current = getcurrent() # pylint:disable=undefined-variable self.spawning_greenlet = wref(current) # See Greenlet for how trees are maintained. 
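        # Inherit the shared ``spawn_tree_locals`` dict from the greenlet
        # that is spawning us. If it doesn't have one (e.g., it is the
        # main/root greenlet or a plain raw greenlet), create a fresh dict;
        # unless the spawner is the root greenlet (its ``parent`` is None),
        # store the new dict back on it so later children share the mapping.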
try: self.spawn_tree_locals = current.spawn_tree_locals except AttributeError: self.spawn_tree_locals = {} if current.parent: current.spawn_tree_locals = self.spawn_tree_locals class SwitchOutGreenletWithLoop(TrackedRawGreenlet): # Subclasses must define: # - self.loop # This class defines loop in its .pxd for Cython. This lets us avoid # circular dependencies with the hub. def switch(self): switch_out = getattr(getcurrent(), 'switch_out', None) # pylint:disable=undefined-variable if switch_out is not None: switch_out() return _greenlet_switch(self) # pylint:disable=undefined-variable def switch_out(self): raise BlockingSwitchOutError('Impossible to call blocking function in the event loop callback') def get_reachable_greenlets(): # We compile this loop with Cython so that it's faster, and so that # the GIL isn't dropped at unpredictable times during the loop. # Dropping the GIL could lead to accessing partly constructed objects # in undefined states (particularly, tuples). This helps close a hole # where a `SystemError: Objects/tupleobject.c bad argument to internal function` # could get raised. (Note that this probably doesn't completely close the hole, # if other threads have dropped the GIL, but hopefully the speed makes that # more rare.) See https://github.com/gevent/gevent/issues/1302 return [ x for x in get_objects() if isinstance(x, greenlet) and not getattr(x, 'greenlet_tree_is_ignored', False) ] # Cache the global memoryview so cython can optimize. _memoryview = memoryview try: if isinstance(__builtins__, dict): # Pure-python mode on CPython _buffer = __builtins__['buffer'] else: # Cythonized mode, or PyPy _buffer = __builtins__.buffer except (AttributeError, KeyError): # Python 3. _buffer = memoryview def get_memory(data): # On Python 2, memoryview(memoryview()) can leak in some cases, # notably when an io.BufferedWriter object produced the memoryview. # So we need to check to see if we already have one before we convert. # We do this in Cython to mitigate the performance cost (which turns out to be a # net win.) # We don't specifically test for this leak. # https://github.com/gevent/gevent/issues/1318 try: mv = _memoryview(data) if not isinstance(data, _memoryview) else data if mv.shape: return mv # No shape, probably working with a ctypes object, # or something else exotic that supports the buffer interface return mv.tobytes() except TypeError: # fixes "python2.7 array.array doesn't support memoryview used in # gevent.socket.send" issue # (http://code.google.com/p/gevent/issues/detail?id=94) if _buffer is _memoryview: # Py3 raise return _buffer(data) def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__greenlet_primitives') gevent-24.11.1/src/gevent/_hub_local.py000066400000000000000000000112331471441230600176720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright 2018 gevent. See LICENSE """ Maintains the thread local hub. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import _thread __all__ = [ 'get_hub', 'get_hub_noargs', 'get_hub_if_exists', ] # These must be the "real" native thread versions, # not monkey-patched. # We are imported early enough (by gevent/__init__) that # we can rely on not being monkey-patched in any way yet. 
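# An illustrative sketch of how this module is normally consumed (the names
# below are the functions defined later in this file; nothing here is part
# of the public gevent API):
#
#     from gevent._hub_local import get_hub, get_hub_if_exists
#
#     hub = get_hub()                  # creates this thread's hub on demand
#     maybe_hub = get_hub_if_exists()  # returns None rather than creating one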
assert 'gevent' not in str(_thread._local) class _Threadlocal(_thread._local): def __init__(self): # Use a class with an initializer so that we can test # for 'is None' instead of catching AttributeError, making # the code cleaner and possibly solving some corner cases # (like #687). # # However, under some weird circumstances, it _seems_ like the # __init__ method doesn't get called properly ("seems" is the # keyword). We've seen at least one instance # (https://github.com/gevent/gevent/issues/1961) of # ``AttributeError: '_Threadlocal' object has no attribute # 'hub'`` # which should be impossible unless: # # - Someone manually deletes the attribute # - The _threadlocal object itself is in the process of being # deleted. The C ``tp_clear`` slot for it deletes the ``__dict__`` # of each instance in each thread (and/or the ``tp_clear`` of ``dict`` itself # clears the instance). Now, how we could be getting # cleared while still being used is unclear, but clearing is part of # circular garbage collection, and in the bug report it looks like we're inside a # weakref finalizer or ``__del__`` method, which could suggest that # garbage collection is happening. # # See https://github.com/gevent/gevent/issues/1961 # and ``get_hub_if_exists()`` super(_Threadlocal, self).__init__() self.Hub = None self.loop = None self.hub = None _threadlocal = _Threadlocal() Hub = None # Set when gevent.hub is imported def get_hub_class(): """Return the type of hub to use for the current thread. If there's no type of hub for the current thread yet, 'gevent.hub.Hub' is used. """ hubtype = _threadlocal.Hub if hubtype is None: hubtype = _threadlocal.Hub = Hub return hubtype def set_default_hub_class(hubtype): global Hub Hub = hubtype def get_hub(): """ Return the hub for the current thread. If a hub does not exist in the current thread, a new one is created of the type returned by :func:`get_hub_class`. .. deprecated:: 1.3b1 The ``*args`` and ``**kwargs`` arguments are deprecated. They were only used when the hub was created, and so were non-deterministic---to be sure they were used, *all* callers had to pass them, or they were order-dependent. Use ``set_hub`` instead. .. versionchanged:: 1.5a3 The *args* and *kwargs* arguments are now completely ignored. .. versionchanged:: 23.7.0 The long-deprecated ``args`` and ``kwargs`` parameters are no longer accepted. """ # See get_hub_if_exists try: hub = _threadlocal.hub except AttributeError: hub = None if hub is None: hubtype = get_hub_class() hub = _threadlocal.hub = hubtype() return hub # For Cython purposes, we need to duplicate get_hub into this function so it # can be directly called. def get_hub_noargs(): # See get_hub_if_exists try: hub = _threadlocal.hub except AttributeError: hub = None if hub is None: hubtype = get_hub_class() hub = _threadlocal.hub = hubtype() return hub def get_hub_if_exists(): """ Return the hub for the current thread. Return ``None`` if no hub has been created yet. """ # Attempt a band-aid for the poorly-understood behaviour # seen in https://github.com/gevent/gevent/issues/1961 # where the ``hub`` attribute has gone missing. try: return _threadlocal.hub except AttributeError: # XXX: I'd really like to report this, but I'm not sure how # that can be done safely (because I don't know how we get # here in the first place). We may be in a place where imports # are unsafe, or the interpreter is shutting down, or the # thread is exiting, or... 
return None def set_hub(hub): _threadlocal.hub = hub def get_loop(): return _threadlocal.loop def set_loop(loop): _threadlocal.loop = loop from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__hub_local') gevent-24.11.1/src/gevent/_hub_primitives.py000066400000000000000000000333121471441230600207750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright (c) 2018 gevent. See LICENSE. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False,binding=True """ A collection of primitives used by the hub, and suitable for compilation with Cython because of their frequency of use. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import traceback from gevent.exceptions import InvalidSwitchError from gevent.exceptions import ConcurrentObjectUseError from gevent import _greenlet_primitives from gevent import _waiter from gevent._util import _NONE from gevent._hub_local import get_hub_noargs as get_hub from gevent.timeout import Timeout # In Cython, we define these as 'cdef inline' functions. The # compilation unit cannot have a direct assignment to them (import # is assignment) without generating a 'lvalue is not valid target' # error. locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None locals()['Waiter'] = _waiter.Waiter locals()['MultipleWaiter'] = _waiter.MultipleWaiter locals()['SwitchOutGreenletWithLoop'] = _greenlet_primitives.SwitchOutGreenletWithLoop __all__ = [ 'WaitOperationsGreenlet', 'iwait_on_objects', 'wait_on_objects', 'wait_read', 'wait_write', 'wait_readwrite', ] class WaitOperationsGreenlet(SwitchOutGreenletWithLoop): # pylint:disable=undefined-variable def wait(self, watcher): """ Wait until the *watcher* (which must not be started) is ready. The current greenlet will be unscheduled during this time. """ waiter = Waiter(self) # pylint:disable=undefined-variable watcher.start(waiter.switch, waiter) try: result = waiter.get() if result is not waiter: raise InvalidSwitchError( 'Invalid switch into %s: got %r (expected %r; waiting on %r with %r)' % ( getcurrent(), # pylint:disable=undefined-variable result, waiter, self, watcher ) ) finally: watcher.stop() def cancel_waits_close_and_then(self, watchers, exc_kind, then, *then_args): deferred = [] for watcher in watchers: if watcher is None: continue if watcher.callback is None: watcher.close() else: deferred.append(watcher) if deferred: self.loop.run_callback(self._cancel_waits_then, deferred, exc_kind, then, then_args) else: then(*then_args) def _cancel_waits_then(self, watchers, exc_kind, then, then_args): for watcher in watchers: self._cancel_wait(watcher, exc_kind, True) then(*then_args) def cancel_wait(self, watcher, error, close_watcher=False): """ Cancel an in-progress call to :meth:`wait` by throwing the given *error* in the waiting greenlet. .. versionchanged:: 1.3a1 Added the *close_watcher* parameter. If true, the watcher will be closed after the exception is thrown. The watcher should then be discarded. Closing the watcher is important to release native resources. .. versionchanged:: 1.3a2 Allow the *watcher* to be ``None``. No action is taken in that case. """ if watcher is None: # Presumably already closed. 
# See https://github.com/gevent/gevent/issues/1089 return if watcher.callback is not None: self.loop.run_callback(self._cancel_wait, watcher, error, close_watcher) return if close_watcher: watcher.close() def _cancel_wait(self, watcher, error, close_watcher): # Running in the hub. Switches to the waiting greenlet to raise # the error; assuming the waiting greenlet dies, switches back # to this (because the waiting greenlet's parent is the hub.) # We have to check again to see if it was still active by the time # our callback actually runs. active = watcher.active cb = watcher.callback if close_watcher: watcher.close() if active: # The callback should be greenlet.switch(). It may or may not be None. glet = getattr(cb, '__self__', None) if glet is not None: glet.throw(error) class _WaitIterator(object): def __init__(self, objects, hub, timeout, count): self._hub = hub self._waiter = MultipleWaiter(hub) # pylint:disable=undefined-variable self._switch = self._waiter.switch self._timeout = timeout self._objects = objects self._timer = None self._begun = False # Even if we're only going to return 1 object, # we must still rawlink() *all* of them, so that no # matter which one finishes first we find it. self._count = len(objects) if count is None else min(count, len(objects)) def _begin(self): if self._begun: return self._begun = True # XXX: If iteration doesn't actually happen, we # could leave these links around! for obj in self._objects: obj.rawlink(self._switch) if self._timeout is not None: self._timer = self._hub.loop.timer(self._timeout, priority=-1) self._timer.start(self._switch, self) def __iter__(self): return self def __next__(self): self._begin() if self._count == 0: # Exhausted self._cleanup() raise StopIteration() self._count -= 1 try: item = self._waiter.get() self._waiter.clear() if item is self: # Timer expired, no more self._cleanup() raise StopIteration() return item except: self._cleanup() raise next = __next__ def _cleanup(self): if self._timer is not None: self._timer.close() self._timer = None objs = self._objects self._objects = () for aobj in objs: unlink = getattr(aobj, 'unlink', None) if unlink is not None: try: unlink(self._switch) except: # pylint:disable=bare-except traceback.print_exc() def __enter__(self): return self def __exit__(self, typ, value, tb): self._cleanup() def iwait_on_objects(objects, timeout=None, count=None): """ Iteratively yield *objects* as they are ready, until all (or *count*) are ready or *timeout* expired. If you will only be consuming a portion of the *objects*, you should do so inside a ``with`` block on this object to avoid leaking resources:: with gevent.iwait((a, b, c)) as it: for i in it: if i is a: break :param objects: A sequence (supporting :func:`len`) containing objects implementing the wait protocol (rawlink() and unlink()). :keyword int count: If not `None`, then a number specifying the maximum number of objects to wait for. If ``None`` (the default), all objects are waited for. :keyword float timeout: If given, specifies a maximum number of seconds to wait. If the timeout expires before the desired waited-for objects are available, then this method returns immediately. .. seealso:: :func:`wait` .. versionchanged:: 1.1a1 Add the *count* parameter. .. versionchanged:: 1.1a2 No longer raise :exc:`LoopExit` if our caller switches greenlets in between items yielded by this function. .. versionchanged:: 1.4 Add support to use the returned object as a context manager. 
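    A rough additional sketch of the *timeout* and *count* keywords
    (``events`` is a hypothetical sequence of objects supporting
    ``rawlink()``/``unlink()``, such as :class:`gevent.event.Event`)::

        with gevent.iwait(events, timeout=5, count=2) as it:
            for ready in it:
                print("ready:", ready)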
""" # QQQ would be nice to support iterable here that can be generated slowly (why?) hub = get_hub() if objects is None: return [hub.join(timeout=timeout)] return _WaitIterator(objects, hub, timeout, count) def wait_on_objects(objects=None, timeout=None, count=None): """ Wait for *objects* to become ready or for event loop to finish. If *objects* is provided, it must be a list containing objects implementing the wait protocol (rawlink() and unlink() methods): - :class:`gevent.Greenlet` instance - :class:`gevent.event.Event` instance - :class:`gevent.lock.Semaphore` instance - :class:`gevent.subprocess.Popen` instance If *objects* is ``None`` (the default), ``wait()`` blocks until the current event loop has nothing to do (or until *timeout* passes): - all greenlets have finished - all servers were stopped - all event loop watchers were stopped. If *count* is ``None`` (the default), wait for all *objects* to become ready. If *count* is a number, wait for (up to) *count* objects to become ready. (For example, if count is ``1`` then the function exits when any object in the list is ready). If *timeout* is provided, it specifies the maximum number of seconds ``wait()`` will block. Returns the list of ready objects, in the order in which they were ready. .. seealso:: :func:`iwait` """ if objects is None: hub = get_hub() return hub.join(timeout=timeout) # pylint:disable= return list(iwait_on_objects(objects, timeout, count)) _timeout_error = Exception def set_default_timeout_error(e): global _timeout_error _timeout_error = e def _primitive_wait(watcher, timeout, timeout_exc, hub): if watcher.callback is not None: raise ConcurrentObjectUseError('This socket is already used by another greenlet: %r' % (watcher.callback, )) if hub is None: hub = get_hub() if timeout is None: hub.wait(watcher) return timeout = Timeout._start_new_or_dummy( timeout, (timeout_exc if timeout_exc is not _NONE or timeout is None else _timeout_error('timed out'))) with timeout: hub.wait(watcher) # Suitable to be bound as an instance method def wait_on_socket(socket, watcher, timeout_exc=None): if socket is None or watcher is None: # test__hub TestCloseSocketWhilePolling, on Python 2; Python 3 # catches the EBADF differently. raise ConcurrentObjectUseError("The socket has already been closed by another greenlet") _primitive_wait(watcher, socket.timeout, timeout_exc if timeout_exc is not None else _NONE, socket.hub) def wait_on_watcher(watcher, timeout=None, timeout_exc=_NONE, hub=None): """ wait(watcher, timeout=None, [timeout_exc=None]) -> None Block the current greenlet until *watcher* is ready. If *timeout* is non-negative, then *timeout_exc* is raised after *timeout* second has passed. If :func:`cancel_wait` is called on *watcher* by another greenlet, raise an exception in this blocking greenlet (``socket.error(EBADF, 'File descriptor was closed in another greenlet')`` by default). :param watcher: An event loop watcher, most commonly an IO watcher obtained from :meth:`gevent.core.loop.io` :keyword timeout_exc: The exception to raise if the timeout expires. By default, a :class:`socket.timeout` exception is raised. If you pass a value for this keyword, it is interpreted as for :class:`gevent.timeout.Timeout`. :raises ~gevent.hub.ConcurrentObjectUseError: If the *watcher* is already started. """ _primitive_wait(watcher, timeout, timeout_exc, hub) def wait_read(fileno, timeout=None, timeout_exc=_NONE): """ wait_read(fileno, timeout=None, [timeout_exc=None]) -> None Block the current greenlet until *fileno* is ready to read. 
For the meaning of the other parameters and possible exceptions, see :func:`wait`. .. seealso:: :func:`cancel_wait` """ hub = get_hub() io = hub.loop.io(fileno, 1) try: return wait_on_watcher(io, timeout, timeout_exc, hub) finally: io.close() def wait_write(fileno, timeout=None, timeout_exc=_NONE, event=_NONE): """ wait_write(fileno, timeout=None, [timeout_exc=None]) -> None Block the current greenlet until *fileno* is ready to write. For the meaning of the other parameters and possible exceptions, see :func:`wait`. .. deprecated:: 1.1 The keyword argument *event* is ignored. Applications should not pass this parameter. In the future, doing so will become an error. .. seealso:: :func:`cancel_wait` """ # pylint:disable=unused-argument hub = get_hub() io = hub.loop.io(fileno, 2) try: return wait_on_watcher(io, timeout, timeout_exc, hub) finally: io.close() def wait_readwrite(fileno, timeout=None, timeout_exc=_NONE, event=_NONE): """ wait_readwrite(fileno, timeout=None, [timeout_exc=None]) -> None Block the current greenlet until *fileno* is ready to read or write. For the meaning of the other parameters and possible exceptions, see :func:`wait`. .. deprecated:: 1.1 The keyword argument *event* is ignored. Applications should not pass this parameter. In the future, doing so will become an error. .. seealso:: :func:`cancel_wait` """ # pylint:disable=unused-argument hub = get_hub() io = hub.loop.io(fileno, 3) try: return wait_on_watcher(io, timeout, timeout_exc, hub) finally: io.close() def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__hub_primitives') gevent-24.11.1/src/gevent/_ident.py000066400000000000000000000043111471441230600170440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 gevent contributors. See LICENSE for details. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False from __future__ import absolute_import from __future__ import division from __future__ import print_function from weakref import WeakKeyDictionary from weakref import ref from heapq import heappop from heapq import heappush __all__ = [ 'IdentRegistry', ] class ValuedWeakRef(ref): """ A weak ref with an associated value. """ __slots__ = ('value',) class IdentRegistry(object): """ Maintains a unique mapping of (small) non-negative integer identifiers to objects that can be weakly referenced. It is guaranteed that no two objects will have the the same identifier at the same time, as long as those objects are also uniquely hashable. """ def __init__(self): # {obj -> (ident, wref(obj))} self._registry = WeakKeyDictionary() # A heap of numbers that have been used and returned self._available_idents = [] def get_ident(self, obj): """ Retrieve the identifier for *obj*, creating one if necessary. """ try: return self._registry[obj][0] except KeyError: pass if self._available_idents: # Take the smallest free number ident = heappop(self._available_idents) else: # Allocate a bigger one ident = len(self._registry) vref = ValuedWeakRef(obj, self._return_ident) vref.value = ident # pylint:disable=assigning-non-slot,attribute-defined-outside-init self._registry[obj] = (ident, vref) return ident def _return_ident(self, vref): # By the time this is called, self._registry has been # updated if heappush is not None: # Under some circumstances we can get called # when the interpreter is shutting down, and globals # aren't available any more. 
heappush(self._available_idents, vref.value) def __len__(self): return len(self._registry) from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__ident') gevent-24.11.1/src/gevent/_imap.py000066400000000000000000000167701471441230600167030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2018 gevent # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False,infer_types=True """ Iterators across greenlets or AsyncResult objects. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from gevent import lock from gevent import queue __all__ = [ 'IMapUnordered', 'IMap', ] locals()['Greenlet'] = __import__('gevent').Greenlet locals()['Semaphore'] = lock.Semaphore locals()['UnboundQueue'] = queue.UnboundQueue class Failure(object): __slots__ = ('exc', 'raise_exception') def __init__(self, exc, raise_exception=None): self.exc = exc self.raise_exception = raise_exception def _raise_exc(failure): # For cython. if failure.raise_exception: failure.raise_exception() else: raise failure.exc class IMapUnordered(Greenlet): # pylint:disable=undefined-variable """ At iterator of map results. """ def __init__(self, func, iterable, spawn, maxsize=None, _zipped=False): """ An iterator that. :param callable spawn: The function we use to create new greenlets. :keyword int maxsize: If given and not-None, specifies the maximum number of finished results that will be allowed to accumulated awaiting the reader; more than that number of results will cause map function greenlets to begin to block. This is most useful is there is a great disparity in the speed of the mapping code and the consumer and the results consume a great deal of resources. Using a bound is more computationally expensive than not using a bound. .. versionchanged:: 1.1b3 Added the *maxsize* parameter. """ Greenlet.__init__(self) # pylint:disable=undefined-variable self.spawn = spawn self._zipped = _zipped self.func = func self.iterable = iterable self.queue = UnboundQueue() # pylint:disable=undefined-variable if maxsize: # Bounding the queue is not enough if we want to keep from # accumulating objects; the result value will be around as # the greenlet's result, blocked on self.queue.put(), and # we'll go on to spawn another greenlet, which in turn can # create the result. So we need a semaphore to prevent a # greenlet from exiting while the queue is full so that we # don't spawn the next greenlet (assuming that self.spawn # is of course bounded). (Alternatively we could have the # greenlet itself do the insert into the pool, but that # takes some rework). # # Given the use of a semaphore at this level, sizing the queue becomes # redundant, and that lets us avoid having to use self.link() instead # of self.rawlink() to avoid having blocking methods called in the # hub greenlet. self._result_semaphore = Semaphore(maxsize) # pylint:disable=undefined-variable else: self._result_semaphore = None self._outstanding_tasks = 0 # The index (zero based) of the maximum number of # results we will have. self._max_index = -1 self.finished = False # We're iterating in a different greenlet than we're running. 
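    # An illustrative sketch of how this class is usually reached, via the
    # pool API rather than by constructing it directly (``fetch`` and
    # ``urls`` are hypothetical names):
    #
    #     from gevent.pool import Pool
    #     pool = Pool(10)
    #     for result in pool.imap_unordered(fetch, urls, maxsize=100):
    #         handle(result)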
def __iter__(self): return self def __next__(self): if self._result_semaphore is not None: self._result_semaphore.release() value = self._inext() if isinstance(value, Failure): _raise_exc(value) return value next = __next__ # Py2 def _inext(self): return self.queue.get() def _ispawn(self, func, item, item_index): if self._result_semaphore is not None: self._result_semaphore.acquire() self._outstanding_tasks += 1 g = self.spawn(func, item) if not self._zipped else self.spawn(func, *item) g._imap_task_index = item_index g.rawlink(self._on_result) return g def _run(self): # pylint:disable=method-hidden try: func = self.func for item in self.iterable: self._max_index += 1 self._ispawn(func, item, self._max_index) self._on_finish(None) except BaseException as e: self._on_finish(e) raise finally: self.spawn = None self.func = None self.iterable = None self._result_semaphore = None def _on_result(self, greenlet): # This method will be called in the hub greenlet (we rawlink) self._outstanding_tasks -= 1 count = self._outstanding_tasks finished = self.finished ready = self.ready() put_finished = False if ready and count <= 0 and not finished: finished = self.finished = True put_finished = True if greenlet.successful(): self.queue.put(self._iqueue_value_for_success(greenlet)) else: self.queue.put(self._iqueue_value_for_failure(greenlet)) if put_finished: self.queue.put(self._iqueue_value_for_self_finished()) def _on_finish(self, exception): # Called in this greenlet. if self.finished: return if exception is not None: self.finished = True self.queue.put(self._iqueue_value_for_self_failure(exception)) return if self._outstanding_tasks <= 0: self.finished = True self.queue.put(self._iqueue_value_for_self_finished()) def _iqueue_value_for_success(self, greenlet): return greenlet.value def _iqueue_value_for_failure(self, greenlet): return Failure(greenlet.exception, getattr(greenlet, '_raise_exception')) def _iqueue_value_for_self_finished(self): return Failure(StopIteration()) def _iqueue_value_for_self_failure(self, exception): return Failure(exception, self._raise_exception) class IMap(IMapUnordered): # A specialization of IMapUnordered that returns items # in the order in which they were generated, not # the order in which they finish. def __init__(self, *args, **kwargs): # The result dictionary: {index: value} self._results = {} # The index of the result to return next. self.index = 0 IMapUnordered.__init__(self, *args, **kwargs) def _inext(self): try: value = self._results.pop(self.index) except KeyError: # Wait for our index to finish. while 1: index, value = self.queue.get() if index == self.index: break self._results[index] = value self.index += 1 return value def _iqueue_value_for_success(self, greenlet): return (greenlet._imap_task_index, IMapUnordered._iqueue_value_for_success(self, greenlet)) def _iqueue_value_for_failure(self, greenlet): return (greenlet._imap_task_index, IMapUnordered._iqueue_value_for_failure(self, greenlet)) def _iqueue_value_for_self_finished(self): return (self._max_index + 1, IMapUnordered._iqueue_value_for_self_finished(self)) def _iqueue_value_for_self_failure(self, exception): return (self._max_index + 1, IMapUnordered._iqueue_value_for_self_failure(self, exception)) from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__imap') gevent-24.11.1/src/gevent/_interfaces.py000066400000000000000000000230031471441230600200630ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2018 gevent contributors. See LICENSE for details. 
""" Interfaces gevent uses that don't belong any one place. This is not a public module, these interfaces are not currently exposed to the public, they mostly exist for documentation and testing purposes. .. versionadded:: 1.3b2 """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys from zope.interface import Interface from zope.interface import Attribute _text_type = type(u'') try: from zope import schema except ImportError: # pragma: no cover class _Field(Attribute): __allowed_kw__ = ('readonly', 'min',) def __init__(self, description, required=False, **kwargs): description = u"%s (required? %s)" % (description, required) assert isinstance(description, _text_type) for k in self.__allowed_kw__: kwargs.pop(k, None) if kwargs: raise TypeError("Unexpected keyword arguments: %r" % (kwargs,)) Attribute.__init__(self, description) class schema(object): Bool = _Field Float = _Field # pylint:disable=no-method-argument, unused-argument, no-self-argument # pylint:disable=inherit-non-class __all__ = [ 'ILoop', 'IWatcher', 'ICallback', ] class ILoop(Interface): """ The common interface expected for all event loops. .. caution:: This is an internal, low-level interface. It may change between minor versions of gevent. .. rubric:: Watchers The methods that create event loop watchers are `io`, `timer`, `signal`, `idle`, `prepare`, `check`, `fork`, `async_`, `child`, `stat`. These all return various types of :class:`IWatcher`. All of those methods have one or two common arguments. *ref* is a boolean saying whether the event loop is allowed to exit even if this watcher is still started. *priority* is event loop specific. """ default = schema.Bool( description=u"Boolean indicating whether this is the default loop", required=True, readonly=True, ) approx_timer_resolution = schema.Float( description=u"Floating point number of seconds giving (approximately) the minimum " "resolution of a timer (and hence the minimun value the sleep can sleep for). " "On libuv, this is fixed by the library, but on libev it is just a guess " "and the actual value is system dependent.", required=True, min=0.0, readonly=True, ) def run(nowait=False, once=False): """ Run the event loop. This is usually called automatically by the hub greenlet, but in special cases (when the hub is *not* running) you can use this to control how the event loop runs (for example, to integrate it with another event loop). """ def now(): """ now() -> float Return the loop's notion of the current time. This may not necessarily be related to :func:`time.time` (it may have a different starting point), but it must be expressed in fractional seconds (the same *units* used by :func:`time.time`). """ def update_now(): """ Update the loop's notion of the current time. .. versionadded:: 1.3 In the past, this available as ``update``. This is still available as an alias but will be removed in the future. """ def destroy(): """ Clean up resources used by this loop. If you create loops (especially loops that are not the default) you *should* call this method when you are done with the loop. .. caution:: As an implementation note, the libev C loop implementation has a finalizer (``__del__``) that destroys the object, but the libuv and libev CFFI implementations do not. The C implementation may change. """ def io(fd, events, ref=True, priority=None): """ Create and return a new IO watcher for the given *fd*. *events* is a bitmask specifying which events to watch for. 1 means read, and 2 means write. 
""" def closing_fd(fd): """ Inform the loop that the file descriptor *fd* is about to be closed. The loop may choose to schedule events to be delivered to any active IO watchers for the fd. libev does this so that the active watchers can be closed. :return: A boolean value that's true if active IO watchers were queued to run. Closing the FD should be deferred until the next run of the eventloop with a callback. """ def timer(after, repeat=0.0, ref=True, priority=None): """ Create and return a timer watcher that will fire after *after* seconds. If *repeat* is given, the timer will continue to fire every *repeat* seconds. """ def signal(signum, ref=True, priority=None): """ Create and return a signal watcher for the signal *signum*, one of the constants defined in :mod:`signal`. This is platform and event loop specific. """ def idle(ref=True, priority=None): """ Create and return a watcher that fires when the event loop is idle. """ def prepare(ref=True, priority=None): """ Create and return a watcher that fires before the event loop polls for IO. .. caution:: This method is not supported by libuv. """ def check(ref=True, priority=None): """ Create and return a watcher that fires after the event loop polls for IO. """ def fork(ref=True, priority=None): """ Create a watcher that fires when the process forks. Availability: Unix. """ def async_(ref=True, priority=None): """ Create a watcher that fires when triggered, possibly from another thread. .. versionchanged:: 1.3 This was previously just named ``async``; for compatibility with Python 3.7 where ``async`` is a keyword it was renamed. On older versions of Python the old name is still around, but it will be removed in the future. """ if sys.platform != "win32": def child(pid, trace=0, ref=True): """ Create a watcher that fires for events on the child with process ID *pid*. This is platform specific and not available on Windows. Availability: Unix. """ def stat(path, interval=0.0, ref=True, priority=None): """ Create a watcher that monitors the filesystem item at *path*. If the operating system doesn't support event notifications from the filesystem, poll for changes every *interval* seconds. """ def run_callback(func, *args): """ Run the *func* passing it *args* at the next opportune moment. The next opportune moment may be the next iteration of the event loop, the current iteration, or some other time in the future. Returns a :class:`ICallback` object. See that documentation for important caveats. .. seealso:: :meth:`asyncio.loop.call_soon` The :mod:`asyncio` equivalent. """ def run_callback_threadsafe(func, *args): """ Like :meth:`run_callback`, but for use from *outside* the thread that is running this loop. This not only schedules the *func* to run, it also causes the loop to notice that the *func* has been scheduled (e.g., it causes the loop to wake up). .. versionadded:: 21.1.0 .. seealso:: :meth:`asyncio.loop.call_soon_threadsafe` The :mod:`asyncio` equivalent. """ class IWatcher(Interface): """ An event loop watcher. These objects call their *callback* function when the event loop detects the event has happened. .. important:: You *must* call :meth:`close` when you are done with this object to avoid leaking native resources. """ def start(callback, *args, **kwargs): """ Have the event loop begin watching for this event. When the event is detected, *callback* will be called with *args*. .. caution:: Not all watchers accept ``**kwargs``, and some watchers define special meanings for certain keyword args. 
""" def stop(): """ Have the event loop stop watching this event. In the future you may call :meth:`start` to begin watching again. """ def close(): """ Dispose of any native resources associated with the watcher. If we were active, stop. Attempting to operate on this object after calling close is undefined. You should dispose of any references you have to it after calling this method. """ class ICallback(Interface): """ Represents a function that will be run some time in the future. Callback functions run in the hub, and as such they cannot use gevent's blocking API; any exception they raise cannot be caught. """ pending = schema.Bool(description=u"Has this callback run yet?", readonly=True) def stop(): """ If this object is still `pending`, cause it to no longer be `pending`; the function will not be run. """ def close(): """ An alias of `stop`. """ gevent-24.11.1/src/gevent/_monitor.py000066400000000000000000000273461471441230600174450ustar00rootroot00000000000000# Copyright (c) 2018 gevent. See LICENSE for details. from __future__ import print_function, absolute_import, division import os import sys from weakref import ref as wref from greenlet import getcurrent from gevent import config as GEVENT_CONFIG from gevent.monkey import get_original from gevent.events import notify from gevent.events import EventLoopBlocked from gevent.events import MemoryUsageThresholdExceeded from gevent.events import MemoryUsageUnderThreshold from gevent.events import IPeriodicMonitorThread from gevent.events import implementer from gevent._tracer import GreenletTracer from gevent._compat import thread_mod_name from gevent._compat import perf_counter from gevent._compat import get_this_psutil_process __all__ = [ 'PeriodicMonitoringThread', ] get_thread_ident = get_original(thread_mod_name, 'get_ident') start_new_thread = get_original(thread_mod_name, 'start_new_thread') thread_sleep = get_original('time', 'sleep') class MonitorWarning(RuntimeWarning): """The type of warnings we emit.""" class _MonitorEntry(object): __slots__ = ('function', 'period', 'last_run_time') def __init__(self, function, period): self.function = function self.period = period self.last_run_time = 0 def __eq__(self, other): return self.function == other.function and self.period == other.period def __hash__(self): return hash((self.function, self.period)) def __repr__(self): return repr((self.function, self.period, self.last_run_time)) @implementer(IPeriodicMonitorThread) class PeriodicMonitoringThread(object): # This doesn't extend threading.Thread because that gets monkey-patched. # We use the low-level 'start_new_thread' primitive instead. # The amount of seconds we will sleep when we think we have nothing # to do. inactive_sleep_time = 2.0 # The absolute minimum we will sleep, regardless of # what particular monitoring functions want to say. min_sleep_time = 0.005 # The minimum period in seconds at which we will check memory usage. # Getting memory usage is fairly expensive. min_memory_monitor_period = 2 # A list of _MonitorEntry objects: [(function(hub), period, last_run_time))] # The first entry is always our entry for self.monitor_blocking _monitoring_functions = None # The calculated min sleep time for the monitoring functions list. _calculated_sleep_time = None # A boolean value that also happens to capture the # memory usage at the time we exceeded the threshold. Reset # to 0 when we go back below. 
_memory_exceeded = 0 # The instance of GreenletTracer we're using _greenlet_tracer = None def __init__(self, hub): self._hub_wref = wref(hub, self._on_hub_gc) self.should_run = True # Must be installed in the thread that the hub is running in; # the trace function is threadlocal assert get_thread_ident() == hub.thread_ident self._greenlet_tracer = GreenletTracer() self._monitoring_functions = [_MonitorEntry(self.monitor_blocking, GEVENT_CONFIG.max_blocking_time)] self._calculated_sleep_time = GEVENT_CONFIG.max_blocking_time # Create the actual monitoring thread. This is effectively a "daemon" # thread. self.monitor_thread_ident = start_new_thread(self, ()) # We must track the PID to know if your thread has died after a fork self.pid = os.getpid() def _on_fork(self): # Pseudo-standard method that resolver_ares and threadpool # also have, called by hub.reinit() pid = os.getpid() if pid != self.pid: self.pid = pid self.monitor_thread_ident = start_new_thread(self, ()) @property def hub(self): return self._hub_wref() def monitoring_functions(self): # Return a list of _MonitorEntry objects # Update max_blocking_time each time. mbt = GEVENT_CONFIG.max_blocking_time # XXX: Events so we know when this changes. if mbt != self._monitoring_functions[0].period: self._monitoring_functions[0].period = mbt self._calculated_sleep_time = min(x.period for x in self._monitoring_functions) return self._monitoring_functions def add_monitoring_function(self, function, period): if not callable(function): raise ValueError("function must be callable") if period is None: # Remove. self._monitoring_functions = [ x for x in self._monitoring_functions if x.function != function ] elif period <= 0: raise ValueError("Period must be positive.") else: # Add or update period entry = _MonitorEntry(function, period) self._monitoring_functions = [ x if x.function != function else entry for x in self._monitoring_functions ] if entry not in self._monitoring_functions: self._monitoring_functions.append(entry) self._calculated_sleep_time = min(x.period for x in self._monitoring_functions) def calculate_sleep_time(self): min_sleep = self._calculated_sleep_time if min_sleep <= 0: # Everyone wants to be disabled. Sleep for a longer period of # time than usual so we don't spin unnecessarily. We might be # enabled again in the future. return self.inactive_sleep_time return max((min_sleep, self.min_sleep_time)) def kill(self): if not self.should_run: # Prevent overwriting trace functions. return # Stop this monitoring thread from running. self.should_run = False # Uninstall our tracing hook self._greenlet_tracer.kill() def _on_hub_gc(self, _): self.kill() def __call__(self): # The function that runs in the monitoring thread. # We cannot use threading.current_thread because it would # create an immortal DummyThread object. getcurrent().gevent_monitoring_thread = wref(self) try: while self.should_run: functions = self.monitoring_functions() assert functions sleep_time = self.calculate_sleep_time() thread_sleep(sleep_time) # Make sure the hub is still around, and still active, # and keep it around while we are here. 
hub = self.hub if not hub: self.kill() if self.should_run: this_run = perf_counter() for entry in functions: f = entry.function period = entry.period last_run = entry.last_run_time if period and last_run + period <= this_run: entry.last_run_time = this_run f(hub) del hub # break our reference to hub while we sleep except SystemExit: pass except: # pylint:disable=bare-except # We're a daemon thread, so swallow any exceptions that get here # during interpreter shutdown. if not sys or not sys.stderr: # pragma: no cover # Interpreter is shutting down pass else: hub = self.hub if hub is not None: # XXX: This tends to do bad things like end the process, because we # try to switch *threads*, which can't happen. Need something better. hub.handle_error(self, *sys.exc_info()) def monitor_blocking(self, hub): # Called periodically to see if the trace function has # fired to switch greenlets. If not, we will print # the greenlet tree. # For tests, we return a true value when we think we found something # blocking did_block = self._greenlet_tracer.did_block_hub(hub) if not did_block: return active_greenlet = did_block[1] # pylint:disable=unsubscriptable-object report = self._greenlet_tracer.did_block_hub_report( hub, active_greenlet, dict(greenlet_stacks=False, current_thread_ident=self.monitor_thread_ident)) # Notify the event first. ``report`` is a mutable list, so the event listeners # may mutate it to add or remove information. notify(EventLoopBlocked(active_greenlet, GEVENT_CONFIG.max_blocking_time, report, hub=hub)) # Do the actual reporting in a separate function so that # it can be overridden at runtime, for example, to use a dedicated # file. If you find this useful enough that the interface should be formalized, # please file an issue. return self._show_blocking_report(hub, report, active_greenlet) def _show_blocking_report(self, hub, report, active_greenlet): stream = hub.exception_stream for line in report: # Printing line by line may interleave with other things, # but it should also prevent a "reentrant call to print" # when the report is large. print(line, file=stream) return (active_greenlet, report) def ignore_current_greenlet_blocking(self): self._greenlet_tracer.ignore_current_greenlet_blocking() def monitor_current_greenlet_blocking(self): self._greenlet_tracer.monitor_current_greenlet_blocking() def _get_process(self): # pylint:disable=method-hidden proc = get_this_psutil_process() self._get_process = lambda: proc return proc def can_monitor_memory_usage(self): return self._get_process() is not None def install_monitor_memory_usage(self): # Start monitoring memory usage, if possible. # If not possible, emit a warning. if not self.can_monitor_memory_usage(): import warnings warnings.warn("Unable to monitor memory usage. Install psutil.", MonitorWarning) return self.add_monitoring_function(self.monitor_memory_usage, max(GEVENT_CONFIG.memory_monitor_period, self.min_memory_monitor_period)) def monitor_memory_usage(self, _hub): max_allowed = GEVENT_CONFIG.max_memory_usage if not max_allowed: # They disabled it. return -1 # value for tests rusage = self._get_process().memory_full_info() # uss only documented available on Windows, Linux, and OS X. # If not available, fall back to rss as an aproximation. 
mem_usage = getattr(rusage, 'uss', 0) or rusage.rss event = None # Return value for tests if mem_usage > max_allowed: if mem_usage > self._memory_exceeded: # We're still growing event = MemoryUsageThresholdExceeded( mem_usage, max_allowed, rusage) notify(event) self._memory_exceeded = mem_usage else: # we're below. Were we above it last time? if self._memory_exceeded: event = MemoryUsageUnderThreshold( mem_usage, max_allowed, rusage, self._memory_exceeded) notify(event) self._memory_exceeded = 0 return event def __repr__(self): return '<%s at %s in thread %s greenlet %r for %r>' % ( self.__class__.__name__, hex(id(self)), hex(self.monitor_thread_ident), getcurrent(), self._hub_wref()) gevent-24.11.1/src/gevent/_patcher.py000066400000000000000000000214741471441230600174000ustar00rootroot00000000000000# Copyright 2018 gevent. See LICENSE for details. # Portions of the following are inspired by code from eventlet. I # believe they are distinct enough that no eventlet copyright would # apply (they are not a copy or substantial portion of the eventlot # code). # Added in gevent 1.3a2. Not public in that release. from __future__ import absolute_import, print_function import importlib import sys from gevent._compat import iteritems from gevent._compat import imp_acquire_lock from gevent._compat import imp_release_lock from gevent.builtins import __import__ as g_import MAPPING = { 'gevent.local': '_threading_local', 'gevent.socket': 'socket', 'gevent.select': 'select', 'gevent.selectors': 'selectors', 'gevent.ssl': 'ssl', 'gevent.thread': '_thread', 'gevent.subprocess': 'subprocess', 'gevent.os': 'os', 'gevent.threading': 'threading', 'gevent.builtins': 'builtins', 'gevent.signal': 'signal', 'gevent.time': 'time', 'gevent.queue': 'queue', 'gevent.contextvars': 'contextvars', } OPTIONAL_STDLIB_MODULES = frozenset() _PATCH_PREFIX = '__g_patched_module_' def _collect_stdlib_gevent_modules(): """ Return a map from standard library name to imported gevent module that provides the same API. Optional modules are skipped if they cannot be imported. """ result = {} for gevent_name, stdlib_name in iteritems(MAPPING): try: result[stdlib_name] = importlib.import_module(gevent_name) except ImportError: if stdlib_name in OPTIONAL_STDLIB_MODULES: continue raise return result class _SysModulesPatcher(object): def __init__(self, importing, extra_all=lambda mod_name: ()): # Permanent state. self.extra_all = extra_all self.importing = importing # green modules, replacing regularly imported modules. # This begins as the gevent list of modules, and # then gets extended with green things from the tree we import. self._green_modules = _collect_stdlib_gevent_modules() ## Transient, reset each time we're called. # The set of things imported before we began. self._t_modules_to_restore = {} def _save(self): self._t_modules_to_restore = {} # Copy all the things we know we are going to overwrite. for modname in self._green_modules: self._t_modules_to_restore[modname] = sys.modules.get(modname, None) # Copy anything else in the import tree. for modname, mod in list(iteritems(sys.modules)): if modname.startswith(self.importing): self._t_modules_to_restore[modname] = mod # And remove it. If it had been imported green, it will # be put right back. Otherwise, it was imported "manually" # outside this process and isn't green. 
del sys.modules[modname] # Cover the target modules so that when you import the module it # sees only the patched versions for name, mod in iteritems(self._green_modules): sys.modules[name] = mod def _restore(self): # Anything from the same package tree we imported this time # needs to be saved so we can restore it later, and so it doesn't # leak into the namespace. for modname, mod in list(iteritems(sys.modules)): if modname.startswith(self.importing): self._green_modules[modname] = mod del sys.modules[modname] # Now, what we saved at the beginning needs to be restored. for modname, mod in iteritems(self._t_modules_to_restore): if mod is not None: sys.modules[modname] = mod else: try: del sys.modules[modname] except KeyError: pass def __exit__(self, t, v, tb): try: self._restore() finally: imp_release_lock() self._t_modules_to_restore = None def __enter__(self): imp_acquire_lock() self._save() return self module = None def __call__(self, after_import_hook): if self.module is None: with self: self.module = self.import_one(self.importing, after_import_hook) # Circular reference. Someone must keep a reference to this module alive # for it to be visible. We record it in sys.modules to be that someone, and # to aid debugging. In the past, we worked with multiple completely separate # invocations of `import_patched`, but we no longer do. self.module.__gevent_patcher__ = self sys.modules[_PATCH_PREFIX + self.importing] = self.module return self def import_one(self, module_name, after_import_hook): patched_name = _PATCH_PREFIX + module_name if patched_name in sys.modules: return sys.modules[patched_name] assert module_name.startswith(self.importing) sys.modules.pop(module_name, None) module = g_import(module_name, {}, {}, module_name.split('.')[:-1]) self.module = module # On Python 3, we could probably do something much nicer with the # import machinery? Set the __loader__ or __finder__ or something like that? self._import_all([module]) after_import_hook(module) return module def _import_all(self, queue): # Called while monitoring for patch changes. while queue: module = queue.pop(0) name = module.__name__ mod_all = tuple(getattr(module, '__all__', ())) + self.extra_all(name) for attr_name in mod_all: try: getattr(module, attr_name) except AttributeError: module_name = module.__name__ + '.' + attr_name new_module = g_import(module_name, {}, {}, attr_name) setattr(module, attr_name, new_module) queue.append(new_module) def import_patched(module_name, extra_all=lambda mod_name: (), after_import_hook=lambda module: None): """ Import *module_name* with gevent monkey-patches active, and return an object holding the greened module as *module*. Any sub-modules that were imported by the package are also saved. .. versionchanged:: 1.5a4 If the module defines ``__all__``, then each of those attributes/modules is also imported as part of the same transaction, recursively. The order of ``__all__`` is respected. Anything passed in *extra_all* (which must be in the same namespace tree) is also imported. .. versionchanged:: 1.5a4 You must now do all patching for a given module tree with one call to this method, or at least by using the returned object. """ with cached_platform_architecture(): # Save the current module state, and restore on exit, # capturing desirable changes in the modules package. patcher = _SysModulesPatcher(module_name, extra_all) patcher(after_import_hook) return patcher class cached_platform_architecture(object): """ Context manager that caches ``platform.architecture``. 
Some things that load shared libraries (like Cryptodome, via dnspython) invoke ``platform.architecture()`` for each one. That in turn wants to fork and run commands , which in turn wants to call ``threading._after_fork`` if the GIL has been initialized. All of that means that certain imports done early may wind up wanting to have the hub initialized potentially much earlier than before. Part of the fix is to observe when that happens and delay initializing parts of gevent until as late as possible (e.g., we delay importing and creating the resolver until the hub needs it, unless explicitly configured). The rest of the fix is to avoid the ``_after_fork`` issues by first caching the results of platform.architecture before doing patched imports. (See events.py for similar issues with platform, and test__threading_2.py for notes about threading._after_fork if the GIL has been initialized) """ _arch_result = None _orig_arch = None _platform = None def __enter__(self): import platform self._platform = platform self._arch_result = platform.architecture() self._orig_arch = platform.architecture def arch(*args, **kwargs): if not args and not kwargs: return self._arch_result return self._orig_arch(*args, **kwargs) platform.architecture = arch return self def __exit__(self, *_args): self._platform.architecture = self._orig_arch self._platform = None gevent-24.11.1/src/gevent/_semaphore.py000066400000000000000000000506271471441230600177370ustar00rootroot00000000000000# cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False ### # This file is ``gevent._semaphore`` so that it can be compiled by Cython # individually. However, this is not the place to import from. Everyone, # gevent internal code included, must import from ``gevent.lock``. # The only exception are .pxd files which need access to the # C code; the PURE_PYTHON things that have to happen and which are # handled in ``gevent.lock``, do not apply to them. ### from __future__ import print_function, absolute_import, division __all__ = [ 'Semaphore', 'BoundedSemaphore', ] from time import sleep as _native_sleep from gevent._compat import monotonic from gevent.exceptions import InvalidThreadUseError from gevent.exceptions import LoopExit from gevent.timeout import Timeout def _get_linkable(): x = __import__('gevent._abstract_linkable') return x._abstract_linkable.AbstractLinkable locals()['AbstractLinkable'] = _get_linkable() del _get_linkable from gevent._hub_local import get_hub_if_exists from gevent._hub_local import get_hub from gevent.hub import spawn_raw class _LockReleaseLink(object): __slots__ = ( 'lock', ) def __init__(self, lock): self.lock = lock def __call__(self, _): self.lock.release() _UNSET = object() _MULTI = object() class Semaphore(AbstractLinkable): # pylint:disable=undefined-variable """ Semaphore(value=1) -> Semaphore .. seealso:: :class:`BoundedSemaphore` for a safer version that prevents some classes of bugs. If unsure, most users should opt for `BoundedSemaphore`. A semaphore manages a counter representing the number of `release` calls minus the number of `acquire` calls, plus an initial value. The `acquire` method blocks if necessary until it can return without making the counter negative. A semaphore does not track ownership by greenlets; any greenlet can call `release`, whether or not it has previously called `acquire`. If not given, ``value`` defaults to 1. The semaphore is a context manager and can be used in ``with`` statements. 
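    For example, a rough sketch (``do_work`` is a placeholder)::

        sem = Semaphore(2)

        def worker(item):
            with sem:    # acquire() on entry, release() on exit
                do_work(item)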
This Semaphore's ``__exit__`` method does not call the trace function on CPython, but does under PyPy. .. versionchanged:: 1.4.0 Document that the order in which waiters are awakened is not specified. It was not specified previously, but due to CPython implementation quirks usually went in FIFO order. .. versionchanged:: 1.5a3 Waiting greenlets are now awakened in the order in which they waited. .. versionchanged:: 1.5a3 The low-level ``rawlink`` method (most users won't use this) now automatically unlinks waiters before calling them. .. versionchanged:: 20.12.0 Improved support for multi-threaded usage. When multi-threaded usage is detected, instances will no longer create the thread's hub if it's not present. .. versionchanged:: 24.2.1 Uses Python 3 native lock timeouts for cross-thread operations instead of spinning. """ __slots__ = ( 'counter', # long integer, signed (Py2) or unsigned (Py3); see comments # in the .pxd file for why we store as Python object. Set to ``_UNSET`` # initially. Set to the ident of the first thread that # acquires us. If we later see a different thread ident, set # to ``_MULTI``. '_multithreaded', ) def __init__(self, value=1, hub=None): self.counter = value if self.counter < 0: # Do the check after Cython native int conversion raise ValueError("semaphore initial value must be >= 0") super(Semaphore, self).__init__(hub) self._notify_all = False self._multithreaded = _UNSET def __str__(self): return '<%s at 0x%x counter=%s _links[%s]>' % ( self.__class__.__name__, id(self), self.counter, self.linkcount() ) def locked(self): """ Return a boolean indicating whether the semaphore can be acquired (`False` if the semaphore *can* be acquired). Most useful with binary semaphores (those with an initial value of 1). :rtype: bool """ return self.counter <= 0 def release(self): """ Release the semaphore, notifying any waiters if needed. There is no return value. .. note:: This can be used to over-release the semaphore. (Release more times than it has been acquired or was initially created with.) This is usually a sign of a bug, but under some circumstances it can be used deliberately, for example, to model the arrival of additional resources. :rtype: None """ self.counter += 1 self._check_and_notify() return self.counter def ready(self): """ Return a boolean indicating whether the semaphore can be acquired (`True` if the semaphore can be acquired). :rtype: bool """ return self.counter > 0 def _start_notify(self): self._check_and_notify() def _wait_return_value(self, waited, wait_success): if waited: return wait_success # We didn't even wait, we must be good to go. # XXX: This is probably dead code, we're careful not to go into the wait # state if we don't expect to need to return True def wait(self, timeout=None): """ Wait until it is possible to acquire this semaphore, or until the optional *timeout* elapses. .. note:: If this semaphore was initialized with a *value* of 0, this method will block forever if no timeout is given. :keyword float timeout: If given, specifies the maximum amount of seconds this method will block. :return: A number indicating how many times the semaphore can be acquired before blocking. *This could be 0,* if other waiters acquired the semaphore. :rtype: int """ if self.counter > 0: return self.counter self._wait(timeout) # return value irrelevant, whether we got it or got a timeout return self.counter def acquire(self, blocking=True, timeout=None): """ acquire(blocking=True, timeout=None) -> bool Acquire the semaphore. .. 
note:: If this semaphore was initialized with a *value* of 0, this method will block forever (unless a timeout is given or blocking is set to false). :keyword bool blocking: If True (the default), this function will block until the semaphore is acquired. :keyword float timeout: If given, and *blocking* is true, specifies the maximum amount of seconds this method will block. :return: A `bool` indicating whether the semaphore was acquired. If ``blocking`` is True and ``timeout`` is None (the default), then (so long as this semaphore was initialized with a size greater than 0) this will always return True. If a timeout was given, and it expired before the semaphore was acquired, False will be returned. (Note that this can still raise a ``Timeout`` exception, if some other caller had already started a timer.) """ # pylint:disable=too-many-return-statements,too-many-branches # Sadly, the body of this method is rather complicated. if self._multithreaded is _UNSET: self._multithreaded = self._get_thread_ident() elif self._multithreaded != self._get_thread_ident(): self._multithreaded = _MULTI # We conceptually now belong to the hub of the thread that # called this, whether or not we have to block. Note that we # cannot force it to be created yet, because Semaphore is used # by importlib.ModuleLock which is used when importing the hub # itself! This also checks for cross-thread issues. invalid_thread_use = None try: self._capture_hub(False) except InvalidThreadUseError as e: # My hub belongs to some other thread. We didn't release the GIL/object lock # by raising the exception, so we know this is still true. invalid_thread_use = e.args e = None if not self.counter and blocking: # We would need to block. So coordinate with the main hub. return self.__acquire_from_other_thread(invalid_thread_use, blocking, timeout) if self.counter > 0: self.counter -= 1 return True if not blocking: return False if self._multithreaded is not _MULTI and self.hub is None: # pylint:disable=access-member-before-definition self.hub = get_hub() # pylint:disable=attribute-defined-outside-init if self.hub is None and not invalid_thread_use: # Someone else is holding us. There's not a hub here, # nor is there a hub in that thread. We'll need to use regular locks. # This will be unfair to yet a third thread that tries to use us with greenlets. return self.__acquire_from_other_thread( (None, None, self._getcurrent(), "NoHubs"), blocking, timeout ) # self._wait may drop both the GIL and the _lock_lock. # By the time we regain control, both have been reacquired. try: success = self._wait(timeout) except LoopExit as ex: args = ex.args ex = None if self.counter: success = True else: # Avoid using ex.hub property to keep holding the GIL if len(args) == 3 and args[1].main_hub: # The main hub, meaning the main thread. We probably can do nothing with this. raise return self.__acquire_from_other_thread( (self.hub, get_hub_if_exists(), self._getcurrent(), "LoopExit"), blocking, timeout) if not success: assert timeout is not None # Our timer expired. return False # Neither our timer or another one expired, so we blocked until # awoke. 
Therefore, the counter is ours assert self.counter > 0, (self.counter, blocking, timeout, success,) self.counter -= 1 return True _py3k_acquire = acquire # PyPy needs this; it must be static for Cython def __enter__(self): self.acquire() def __exit__(self, t, v, tb): self.release() def _handle_unswitched_notifications(self, unswitched): # If we fail to switch to a greenlet in another thread to send # a notification, just re-queue it, in the hopes that the # other thread will eventually run notifications itself. # # We CANNOT do what the ``super()`` does and actually allow # this notification to get run sometime in the future by # scheduling a callback in the other thread. The algorithm # that we use to handle cross-thread locking/unlocking was # designed before the schedule-a-callback mechanism was # implemented. If we allow this to be run as a callback, we # can find ourself the victim of ``InvalidSwitchError`` (or # worse, silent corruption) because the switch can come at an # unexpected time: *after* the destination thread has already # acquired the lock. # # This manifests in a fairly reliable test failure, # ``gevent.tests.test__semaphore`` # ``TestSemaphoreMultiThread.test_dueling_threads_with_hub``, # but ONLY when running in PURE_PYTHON mode. # # TODO: Maybe we can rewrite that part of the algorithm to be friendly to # running the callbacks? self._links.extend(unswitched) def __add_link(self, link): if not self._notifier: self.rawlink(link) else: self._notifier.args[0].append(link) def __acquire_from_other_thread(self, ex_args, blocking, timeout): assert blocking # Some other hub owns this object. We must ask it to wake us # up. In general, we can't use a Python-level ``Lock`` because # # (1) it doesn't support a timeout on all platforms; and # (2) we don't want to block this hub from running. # # So we need to do so in a way that cooperates with *two* # hubs. That's what an async watcher is built for. # # Of course, if we don't actually have two hubs, then we must find some other # solution. That involves using a lock. # We have to take an action that drops the GIL and drops the object lock # to allow the main thread (the thread for our hub) to advance. owning_hub = ex_args[0] hub_for_this_thread = ex_args[1] current_greenlet = ex_args[2] if owning_hub is None and hub_for_this_thread is None: return self.__acquire_without_hubs(timeout) if hub_for_this_thread is None: # Probably a background worker thread. We don't want to create # the hub if not needed, and since it didn't exist there are no # other greenlets that we could yield to anyway, so there's nothing # to block and no reason to try to avoid blocking, so using a native # lock is the simplest way to go. return self.__acquire_using_other_hub(owning_hub, timeout) # We have a hub we don't want to block. Use an async watcher # and ask the next releaser of this object to wake us up. return self.__acquire_using_two_hubs(hub_for_this_thread, current_greenlet, timeout) def __acquire_using_two_hubs(self, hub_for_this_thread, current_greenlet, timeout): # Allocating and starting the watcher *could* release the GIL. # with the libev corcext, allocating won't, but starting briefly will. # With other backends, allocating might, and starting might also. # So... watcher = hub_for_this_thread.loop.async_() send = watcher.send_ignoring_arg watcher.start(current_greenlet.switch, self) try: with Timeout._start_new_or_dummy(timeout) as timer: # ... now that we're back holding the GIL, we need to verify our # state. 
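            # (Descriptive note: the loop that follows re-checks the counter on
            # every wakeup rather than trusting the notification, because, as the
            # comments below observe, notifications can be spurious.)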
try: while 1: if self.counter > 0: self.counter -= 1 assert self.counter >= 0, (self,) return True self.__add_link(send) # Releases the object lock self._switch_to_hub(hub_for_this_thread) # We waited and got notified. We should be ready now, so a non-blocking # acquire() should succeed. But sometimes we get spurious notifications? # It's not entirely clear how. So we need to loop until we get it, or until # the timer expires result = self.acquire(0) if result: return result except Timeout as tex: if tex is not timer: raise return False finally: self._quiet_unlink_all(send) watcher.stop() watcher.close() def __acquire_from_other_thread_cb(self, results, blocking, timeout, thread_lock): try: result = self.acquire(blocking, timeout) results.append(result) finally: thread_lock.release() return result def __acquire_using_other_hub(self, owning_hub, timeout): assert owning_hub is not get_hub_if_exists() thread_lock = self._allocate_lock() thread_lock.acquire() results = [] owning_hub.loop.run_callback_threadsafe( spawn_raw, self.__acquire_from_other_thread_cb, results, 1, # blocking, timeout, # timeout, thread_lock) # We MUST use a blocking acquire here, or at least be sure we keep going # until we acquire it. If we timed out waiting here, # just before the callback runs, then we would be out of sync. self.__spin_on_native_lock(thread_lock, None) return results[0] def __acquire_without_hubs(self, timeout): thread_lock = self._allocate_lock() thread_lock.acquire() absolute_expiration = 0 begin = 0 if timeout: absolute_expiration = monotonic() + timeout # Cython won't compile a lambda here link = _LockReleaseLink(thread_lock) while 1: self.__add_link(link) if absolute_expiration: begin = monotonic() got_native = self.__spin_on_native_lock(thread_lock, timeout) self._quiet_unlink_all(link) if got_native: if self.acquire(0): return True if absolute_expiration: now = monotonic() if now >= absolute_expiration: return False duration = now - begin timeout -= duration if timeout <= 0: return False def __spin_on_native_lock(self, thread_lock, timeout): self._drop_lock_for_switch_out() try: # Unlike Python 2, Python 3 thread locks # can be interrupted when blocking, with or # without a timeout. Python 2 didn't even # support a timeout for non -blocking. if timeout: return thread_lock.acquire(True, timeout) return thread_lock.acquire() finally: self._acquire_lock_for_switch_in() class BoundedSemaphore(Semaphore): """ BoundedSemaphore(value=1) -> BoundedSemaphore A bounded semaphore checks to make sure its current value doesn't exceed its initial value. If it does, :class:`ValueError` is raised. In most situations semaphores are used to guard resources with limited capacity. If the semaphore is released too many times it's a sign of a bug. If not given, *value* defaults to 1. """ __slots__ = ( '_initial_value', ) #: For monkey-patching, allow changing the class of error we raise _OVER_RELEASE_ERROR = ValueError def __init__(self, *args, **kwargs): Semaphore.__init__(self, *args, **kwargs) self._initial_value = self.counter def release(self): """ Like :meth:`Semaphore.release`, but raises :class:`ValueError` if the semaphore is being over-released. """ if self.counter >= self._initial_value: raise self._OVER_RELEASE_ERROR("Semaphore released too many times") counter = Semaphore.release(self) # When we are absolutely certain that no one holds this semaphore, # release our hub and go back to floating. This assists in cross-thread # uses. 
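        # (Descriptive note: with no hub recorded, the next thread that calls
        # acquire() will capture its own hub, per Semaphore.acquire above.)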
if counter == self._initial_value: self.hub = None # pylint:disable=attribute-defined-outside-init return counter def _at_fork_reinit(self): super(BoundedSemaphore, self)._at_fork_reinit() self.counter = self._initial_value # By building the semaphore with Cython under PyPy, we get # atomic operations (specifically, exiting/releasing), at the # cost of some speed (one trivial semaphore micro-benchmark put the pure-python version # at around 1s and the compiled version at around 4s). Some clever subclassing # and having only the bare minimum be in cython might help reduce that penalty. # NOTE: You must use version 0.23.4 or later to avoid a memory leak. # https://mail.python.org/pipermail/cython-devel/2015-October/004571.html # However, that's all for naught on up to and including PyPy 4.0.1 which # have some serious crashing bugs with GC interacting with cython. # It hasn't been tested since then, and PURE_PYTHON is assumed to be true # for PyPy in all cases anyway, so this does nothing. from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__semaphore') gevent-24.11.1/src/gevent/_socket3.py000066400000000000000000000547451471441230600173340ustar00rootroot00000000000000# Port of Python 3.3's socket module to gevent """ Python 3 socket module. """ # Our import magic sadly makes this warning useless # pylint: disable=undefined-variable # pylint: disable=too-many-statements,too-many-branches # pylint: disable=too-many-public-methods,unused-argument from __future__ import absolute_import import io import os from gevent import _socketcommon from gevent._util import copy_globals from gevent._compat import PYPY import _socket from os import dup copy_globals(_socketcommon, globals(), names_to_ignore=_socketcommon.__extensions__, dunder_names_to_keep=()) __socket__ = _socketcommon.__socket__ __implements__ = _socketcommon._implements __extensions__ = _socketcommon.__extensions__ __imports__ = _socketcommon.__imports__ __dns__ = _socketcommon.__dns__ SocketIO = __socket__.SocketIO # pylint:disable=no-member class _closedsocket(object): __slots__ = ('family', 'type', 'proto', 'orig_fileno', 'description') def __init__(self, family, type, proto, orig_fileno, description): self.family = family self.type = type self.proto = proto self.orig_fileno = orig_fileno self.description = description def fileno(self): return -1 def close(self): "No-op" detach = fileno def _dummy(*args, **kwargs): # pylint:disable=no-method-argument,unused-argument,no-self-argument raise OSError(EBADF, 'Bad file descriptor') # All _delegate_methods must also be initialized here. send = recv = recv_into = sendto = recvfrom = recvfrom_into = _dummy getsockname = _dummy def __bool__(self): return False __getattr__ = _dummy def __repr__(self): return "" % ( id(self), self.orig_fileno, self.description, ) class _wrefsocket(_socket.socket): # Plain stdlib socket.socket objects subclass _socket.socket # and add weakref ability. The ssl module, for one, counts on this. # We don't create socket.socket objects (because they may have been # monkey patched to be the object from this module), but we still # need to make sure what we do create can be weakrefd. __slots__ = ("__weakref__", ) if PYPY: # server.py unwraps the socket object to get the raw _sock; # it depends on having a timeout property alias, which PyPy does not # provide. timeout = property(lambda s: s.gettimeout(), lambda s, nv: s.settimeout(nv)) class socket(_socketcommon.SocketMixin): """ gevent `socket.socket `_ for Python 3. 
This object should have the same API as the standard library socket linked to above. Not all methods are specifically documented here; when they are they may point out a difference to be aware of or may document a method the standard library does not. """ # Subclasses can set this to customize the type of the # native _socket.socket we create. It MUST be a subclass # of _wrefsocket. (gevent internal usage only) _gevent_sock_class = _wrefsocket __slots__ = ( '_io_refs', '_closed', ) # Take the same approach as socket2: wrap a real socket object, # don't subclass it. This lets code that needs the raw _sock (not tied to the hub) # get it. This shows up in tests like test__example_udp_server. # In 3.7, socket changed to auto-detecting family, type, and proto # when given a fileno. def __init__(self, family=-1, type=-1, proto=-1, fileno=None): super().__init__() self._closed = False if fileno is None: if family == -1: family = AddressFamily.AF_INET if type == -1: type = SOCK_STREAM if proto == -1: proto = 0 self._sock = self._gevent_sock_class(family, type, proto, fileno) self.timeout = None self._io_refs = 0 _socket.socket.setblocking(self._sock, False) fileno = _socket.socket.fileno(self._sock) self.hub = get_hub() io_class = self.hub.loop.io self._read_event = io_class(fileno, 1) self._write_event = io_class(fileno, 2) self.timeout = _socket.getdefaulttimeout() def __getattr__(self, name): return getattr(self._sock, name) def _accept(self): # Python 3.11 started checking for this method on the class object, # so we need to explicitly delegate. return self._sock._accept() if hasattr(_socket, 'SOCK_NONBLOCK'): # Only defined under Linux @property def type(self): # See https://github.com/gevent/gevent/pull/399 if self.timeout != 0.0: return self._sock.type & ~_socket.SOCK_NONBLOCK # pylint:disable=no-member return self._sock.type def __enter__(self): return self def __exit__(self, *args): if not self._closed: self.close() def __repr__(self): """Wrap __repr__() to reveal the real class name.""" try: s = repr(self._sock) except Exception as ex: # pylint:disable=broad-except # Observed on Windows Py3.3, printing the repr of a socket # that just suffered a ConnectionResetError [WinError 10054]: # "OverflowError: no printf formatter to display the socket descriptor in decimal" # Not sure what the actual cause is or if there's a better way to handle this s = '' % ex if s.startswith(" socket object Return a new socket object connected to the same system resource. """ fd = dup(self.fileno()) sock = self.__class__(self.family, self.type, self.proto, fileno=fd) sock.settimeout(self.gettimeout()) return sock def accept(self): """accept() -> (socket object, address info) Wait for an incoming connection. Return a new socket representing the connection, and the address of the client. For IP sockets, the address info is a pair (hostaddr, port). """ while True: try: fd, addr = self._accept() break except BlockingIOError: if self.timeout == 0.0: raise self._wait(self._read_event) sock = socket(self.family, self.type, self.proto, fileno=fd) # Python Issue #7995: if no default timeout is set and the listening # socket had a (non-zero) timeout, force the new socket in blocking # mode to override platform-specific socket flags inheritance. # XXX do we need to do this? 
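            # (Descriptive note: the check below forces the accepted socket back
            # into blocking mode only when no default timeout is set but the
            # listening socket had a non-zero one, matching the stdlib behaviour
            # cited above.)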
if getdefaulttimeout() is None and self.gettimeout(): sock.setblocking(True) return sock, addr def makefile(self, mode="r", buffering=None, *, encoding=None, errors=None, newline=None): """Return an I/O stream connected to the socket The arguments are as for io.open() after the filename, except the only mode characters supported are 'r', 'w' and 'b'. The semantics are similar too. """ # XXX refactor to share code? We ought to be able to use our FileObject, # adding the appropriate amount of refcounting. At the very least we can use our # OpenDescriptor to handle the parsing. for c in mode: if c not in {"r", "w", "b"}: raise ValueError("invalid mode %r (only r, w, b allowed)") writing = "w" in mode reading = "r" in mode or not writing assert reading or writing binary = "b" in mode rawmode = "" if reading: rawmode += "r" if writing: rawmode += "w" raw = SocketIO(self, rawmode) self._io_refs += 1 if buffering is None: buffering = -1 if buffering < 0: buffering = io.DEFAULT_BUFFER_SIZE if buffering == 0: if not binary: raise ValueError("unbuffered streams must be binary") return raw if reading and writing: buffer = io.BufferedRWPair(raw, raw, buffering) elif reading: buffer = io.BufferedReader(raw, buffering) else: assert writing buffer = io.BufferedWriter(raw, buffering) if binary: return buffer text = io.TextIOWrapper(buffer, encoding, errors, newline) text.mode = mode return text def _decref_socketios(self): # Called by SocketIO when it is closed. if self._io_refs > 0: self._io_refs -= 1 if self._closed: self.close() def _drop_ref_on_close(self, sock): # Send the close event to wake up any watchers we don't know about # so that (hopefully) they can be closed before we destroy # the FD and invalidate them. We may be in the hub running pending # callbacks now, or this may take until the next iteration. scheduled_new = self.hub.loop.closing_fd(sock.fileno()) # Schedule the actual close to happen after that, but only if needed. # (If we always defer, we wind up closing things much later than expected.) if scheduled_new: self.hub.loop.run_callback(sock.close) else: sock.close() def _detach_socket(self, reason): if not self._sock: return # Break any references to the underlying socket object. Tested # by test__refcount. (Why does this matter?). Be sure to # preserve our same family/type/proto if possible (if we # don't, we can get TypeError instead of OSError; see # test_socket.SendmsgUDP6Test.testSendmsgAfterClose)... but # this isn't always possible (see test_socket.test_unknown_socket_family_repr) sock = self._sock family = -1 type = -1 proto = -1 fileno = None try: family = sock.family type = sock.type proto = sock.proto fileno = sock.fileno() except OSError: pass # Break any reference to the loop.io objects. Our fileno, # which they were tied to, is about to be free to be reused, so these # objects are no longer functional. # pylint:disable-next=superfluous-parens self._drop_events_and_close(closefd=(reason == 'closed')) self._sock = _closedsocket(family, type, proto, fileno, reason) def _real_close(self, _ss=_socket.socket): # This function should not reference any globals. See Python issue #808164. if not self._sock: return self._detach_socket('closed') def close(self): # This function should not reference any globals. See Python issue #808164. self._closed = True if self._io_refs <= 0: self._real_close() @property def closed(self): return self._closed def detach(self): """ detach() -> file descriptor Close the socket object without closing the underlying file descriptor. 
The object cannot be used after this call; when the real file descriptor is closed, the number that was previously used here may be reused. The fileno() method, after this call, will return an invalid socket id. The previous descriptor is returned. .. versionchanged:: 1.5 Also immediately drop any native event loop resources. """ self._closed = True sock = self._sock self._detach_socket('detached') return sock.detach() if hasattr(_socket.socket, 'recvmsg'): # Only on Unix; PyPy 3.5 5.10.0 provides sendmsg and recvmsg, but not # recvmsg_into (at least on os x) def recvmsg(self, *args): while True: try: return self._sock.recvmsg(*args) except error as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise self._wait(self._read_event) if hasattr(_socket.socket, 'recvmsg_into'): def recvmsg_into(self, buffers, *args): while True: try: if args: # The C code is sensitive about whether extra arguments are # passed or not. return self._sock.recvmsg_into(buffers, *args) return self._sock.recvmsg_into(buffers) except error as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise self._wait(self._read_event) if hasattr(_socket.socket, 'sendmsg'): # Only on Unix def sendmsg(self, buffers, ancdata=(), flags=0, address=None): try: return self._sock.sendmsg(buffers, ancdata, flags, address) except error as ex: if flags & getattr(_socket, 'MSG_DONTWAIT', 0): # Enable non-blocking behaviour # XXX: Do all platforms that have sendmsg have MSG_DONTWAIT? raise if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise self._wait(self._write_event) try: return self._sock.sendmsg(buffers, ancdata, flags, address) except error as ex2: if ex2.args[0] == EWOULDBLOCK: return 0 raise # sendfile: new in 3.5. But there's no real reason to not # support it everywhere. Note that we can't use os.sendfile() # because it's not cooperative. def _sendfile_use_sendfile(self, file, offset=0, count=None): # This is called directly by tests raise __socket__._GiveupOnSendfile() # pylint:disable=no-member def _sendfile_use_send(self, file, offset=0, count=None): self._check_sendfile_params(file, offset, count) if self.gettimeout() == 0: raise ValueError("non-blocking sockets are not supported") if offset: file.seek(offset) blocksize = min(count, 8192) if count else 8192 total_sent = 0 # localize variable access to minimize overhead file_read = file.read sock_send = self.send try: while True: if count: blocksize = min(count - total_sent, blocksize) if blocksize <= 0: break data = memoryview(file_read(blocksize)) if not data: break # EOF while True: try: sent = sock_send(data) except BlockingIOError: continue else: total_sent += sent if sent < len(data): data = data[sent:] else: break return total_sent finally: if total_sent > 0 and hasattr(file, 'seek'): file.seek(offset + total_sent) def _check_sendfile_params(self, file, offset, count): if 'b' not in getattr(file, 'mode', 'b'): raise ValueError("file should be opened in binary mode") if not self.type & SOCK_STREAM: raise ValueError("only SOCK_STREAM type sockets are supported") if count is not None: if not isinstance(count, int): raise TypeError( "count must be a positive integer (got {!r})".format(count)) if count <= 0: raise ValueError( "count must be a positive integer (got {!r})".format(count)) def sendfile(self, file, offset=0, count=None): """sendfile(file[, offset[, count]]) -> sent Send a file until EOF is reached by using high-performance os.sendfile() and return the total number of bytes which were sent. 
*file* must be a regular file object opened in binary mode. If os.sendfile() is not available (e.g. Windows) or file is not a regular file socket.send() will be used instead. *offset* tells from where to start reading the file. If specified, *count* is the total number of bytes to transmit as opposed to sending the file until EOF is reached. File position is updated on return or also in case of error in which case file.tell() can be used to figure out the number of bytes which were sent. The socket must be of SOCK_STREAM type. Non-blocking sockets are not supported. .. versionadded:: 1.1rc4 Added in Python 3.5, but available under all Python 3 versions in gevent. """ return self._sendfile_use_send(file, offset, count) if os.name == 'nt': def get_inheritable(self): return os.get_handle_inheritable(self.fileno()) def set_inheritable(self, inheritable): os.set_handle_inheritable(self.fileno(), inheritable) else: def get_inheritable(self): return os.get_inheritable(self.fileno()) def set_inheritable(self, inheritable): os.set_inheritable(self.fileno(), inheritable) get_inheritable.__doc__ = "Get the inheritable flag of the socket" set_inheritable.__doc__ = "Set the inheritable flag of the socket" SocketType = socket def fromfd(fd, family, type, proto=0): """ fromfd(fd, family, type[, proto]) -> socket object Create a socket object from a duplicate of the given file descriptor. The remaining arguments are the same as for socket(). """ nfd = dup(fd) return socket(family, type, proto, nfd) if hasattr(_socket.socket, "share"): def fromshare(info): """ fromshare(info) -> socket object Create a socket object from a the bytes object returned by socket.share(pid). """ return socket(0, 0, 0, info) __implements__.append('fromshare') def _fallback_socketpair(family=AF_INET, type=SOCK_STREAM, proto=0): # We originally used https://gist.github.com/4325783, by Geert Jansen. (Public domain.) # We took it from 3.6 release, confirmed unchanged in 3.7 and # 3.8a1. Expected to be used only on Win. Added to Win/3.5. # It is always available as `socket._fallback_socketpair` from at least 3.9, # We would like to stop carrying around our own implementation, but # using _fallback_socketpair directly would only work if we are monkey patched. # Current version taken from 3.13rc2 # PyPy doesn't name its fallback `_fallback_socketpair`, it uses # an older copy of socket.py. _LOCALHOST = '127.0.0.1' _LOCALHOST_V6 = '::1' if family == AF_INET: host = _LOCALHOST elif family == AF_INET6: host = _LOCALHOST_V6 else: raise ValueError("Only AF_INET and AF_INET6 socket address families " "are supported") if type != SOCK_STREAM: raise ValueError("Only SOCK_STREAM socket type is supported") if proto != 0: raise ValueError("Only protocol zero is supported") # We create a connected TCP socket. Note the trick with # setblocking(False) that prevents us from having to create a thread. lsock = socket(family, type, proto) try: lsock.bind((host, 0)) lsock.listen() # On IPv6, ignore flow_info and scope_id addr, port = lsock.getsockname()[:2] csock = socket(family, type, proto) try: csock.setblocking(False) try: csock.connect((addr, port)) except (BlockingIOError, InterruptedError): pass csock.setblocking(True) ssock, _ = lsock.accept() except: csock.close() raise finally: lsock.close() # Authenticating avoids using a connection from something else # able to connect to {host}:{port} instead of us. # We expect only AF_INET and AF_INET6 families. 
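    # (Descriptive note: the cross-check below compares each socket's local
    # address against the other's peer address; any mismatch means some other
    # endpoint won the race to connect, so both sockets are closed.)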
try: if ( ssock.getsockname() != csock.getpeername() or csock.getsockname() != ssock.getpeername() ): raise ConnectionError("Unexpected peer connection") except: # getsockname() and getpeername() can fail # if either socket isn't connected. ssock.close() csock.close() raise return (ssock, csock) if hasattr(__socket__, _fallback_socketpair.__name__): __implements__.append(_fallback_socketpair.__name__) if hasattr(_socket, "socketpair"): def socketpair(family=None, type=SOCK_STREAM, proto=0): """socketpair([family[, type[, proto]]]) -> (socket object, socket object) Create a pair of socket objects from the sockets returned by the platform socketpair() function. The arguments are the same as for socket() except the default family is AF_UNIX if defined on the platform; otherwise, the default is AF_INET. .. versionchanged:: 1.2 All Python 3 versions on Windows supply this function (natively supplied by Python 3.5 and above). """ if family is None: try: family = AF_UNIX except NameError: family = AF_INET a, b = _socket.socketpair(family, type, proto) a = socket(family, type, proto, a.detach()) b = socket(family, type, proto, b.detach()) return a, b else: # pragma: no cover socketpair = _fallback_socketpair __all__ = __implements__ + __extensions__ + __imports__ if _fallback_socketpair.__name__ in __all__: __all__.remove(_fallback_socketpair.__name__) __version_specific__ = ( # Python 3.7b1+ 'close', # Python 3.10rc1+ 'TCP_KEEPALIVE', 'TCP_KEEPCNT', ) for _x in __version_specific__: if hasattr(__socket__, _x): vars()[_x] = getattr(__socket__, _x) if _x not in __all__: __all__.append(_x) del _x gevent-24.11.1/src/gevent/_socketcommon.py000066400000000000000000000627221471441230600204540ustar00rootroot00000000000000# Copyright (c) 2009-2014 Denis Bilenko and gevent contributors. See LICENSE for details. 
from __future__ import absolute_import # standard functions and classes that this module re-implements in a gevent-aware way: _implements = [ 'create_connection', 'socket', 'SocketType', 'fromfd', 'socketpair', ] __dns__ = [ 'getaddrinfo', 'gethostbyname', 'gethostbyname_ex', 'gethostbyaddr', 'getnameinfo', 'getfqdn', ] _implements += __dns__ # non-standard functions that this module provides: __extensions__ = [ 'cancel_wait', 'wait_read', 'wait_write', 'wait_readwrite', ] # standard functions and classes that this module re-imports __imports__ = [ 'error', 'gaierror', 'herror', 'htonl', 'htons', 'ntohl', 'ntohs', 'inet_aton', 'inet_ntoa', 'inet_pton', 'inet_ntop', 'timeout', 'gethostname', 'getprotobyname', 'getservbyname', 'getservbyport', 'getdefaulttimeout', 'setdefaulttimeout', # Windows: 'errorTab', # Python 3 'AddressFamily', 'SocketKind', 'CMSG_LEN', 'CMSG_SPACE', 'dup', 'if_indextoname', 'if_nameindex', 'if_nametoindex', 'sethostname', 'create_server', 'has_dualstack_ipv6', ] import time from gevent._hub_local import get_hub_noargs as get_hub from gevent._compat import string_types, integer_types from gevent._compat import PY39 from gevent._compat import WIN as is_windows from gevent._compat import OSX as is_macos from gevent._compat import exc_clear from gevent._util import copy_globals from gevent._greenlet_primitives import get_memory as _get_memory from gevent._hub_primitives import wait_on_socket as _wait_on_socket from gevent.timeout import Timeout if PY39: __imports__.extend([ 'recv_fds', 'send_fds', ]) # pylint:disable=no-name-in-module,unused-import if is_windows: # no such thing as WSAEPERM or error code 10001 according to winsock.h or MSDN from errno import WSAEINVAL as EINVAL from errno import WSAEWOULDBLOCK as EWOULDBLOCK from errno import WSAEINPROGRESS as EINPROGRESS from errno import WSAEALREADY as EALREADY from errno import WSAEISCONN as EISCONN from gevent.win32util import formatError as strerror EAGAIN = EWOULDBLOCK else: from errno import EINVAL from errno import EWOULDBLOCK from errno import EINPROGRESS from errno import EALREADY from errno import EAGAIN from errno import EISCONN from os import strerror try: from errno import EBADF except ImportError: EBADF = 9 try: from errno import EHOSTUNREACH except ImportError: EHOSTUNREACH = -1 try: from errno import ECONNREFUSED except ImportError: ECONNREFUSED = -1 # macOS can return EPROTOTYPE when writing to a socket that is shutting # Down. Retrying the write should return the expected EPIPE error. # Downstream classes (like pywsgi) know how to handle/ignore EPIPE. # This set is used by socket.send() to decide whether the write should # be retried. The default is to retry only on EWOULDBLOCK. Here we add # EPROTOTYPE on macOS to handle this platform-specific race condition. 
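# A rough sketch (in comment form only; the real logic lives in
# SocketMixin.send further down) of how this tuple is consumed: errors whose
# errno is in GSENDAGAIN cause the sender to wait on the write event and
# retry, anything else propagates:
#
#     try:
#         return self._sock.send(data, flags)
#     except OSError as ex:
#         if ex.args[0] not in GSENDAGAIN or timeout == 0.0:
#             raise
#         self._wait(self._write_event)
#         ...  # retry once, treating a second EWOULDBLOCK as 0 bytes sent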
GSENDAGAIN = (EWOULDBLOCK,) if is_macos: from errno import EPROTOTYPE GSENDAGAIN += (EPROTOTYPE,) import _socket _realsocket = _socket.socket import socket as __socket__ _SocketError = __socket__.error _name = _value = None __imports__ = copy_globals(__socket__, globals(), only_names=__imports__, ignore_missing_names=True) for _name in __socket__.__all__: _value = getattr(__socket__, _name) if isinstance(_value, (integer_types, string_types)): globals()[_name] = _value __imports__.append(_name) del _name, _value _timeout_error = timeout # pylint: disable=undefined-variable from gevent import _hub_primitives _hub_primitives.set_default_timeout_error(_timeout_error) wait = _hub_primitives.wait_on_watcher wait_read = _hub_primitives.wait_read wait_write = _hub_primitives.wait_write wait_readwrite = _hub_primitives.wait_readwrite #: The exception raised by default on a call to :func:`cancel_wait` class cancel_wait_ex(error): # pylint: disable=undefined-variable def __init__(self): super(cancel_wait_ex, self).__init__( EBADF, 'File descriptor was closed in another greenlet') def cancel_wait(watcher, error=cancel_wait_ex): """See :meth:`gevent.hub.Hub.cancel_wait`""" get_hub().cancel_wait(watcher, error) def gethostbyname(hostname): """ gethostbyname(host) -> address Return the IP address (a string of the form '255.255.255.255') for a host. .. seealso:: :doc:`/dns` """ return get_hub().resolver.gethostbyname(hostname) def gethostbyname_ex(hostname): """ gethostbyname_ex(host) -> (name, aliaslist, addresslist) Return the true host name, a list of aliases, and a list of IP addresses, for a host. The host argument is a string giving a host name or IP number. Resolve host and port into list of address info entries. .. seealso:: :doc:`/dns` """ return get_hub().resolver.gethostbyname_ex(hostname) def getaddrinfo(host, port, family=0, type=0, proto=0, flags=0): """ Resolve host and port into list of address info entries. Translate the host/port argument into a sequence of 5-tuples that contain all the necessary arguments for creating a socket connected to that service. host is a domain name, a string representation of an IPv4/v6 address or None. port is a string service name such as 'http', a numeric port number or None. By passing None as the value of host and port, you can pass NULL to the underlying C API. The family, type and proto arguments can be optionally specified in order to narrow the list of addresses returned. Passing zero as a value for each of these arguments selects the full range of results. .. seealso:: :doc:`/dns` """ # Also, on Python 3, we need to translate into the special enums. # Our lower-level resolvers, including the thread and blocking, which use _socket, # function simply with integers. addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags) result = [ # pylint:disable=undefined-variable (_intenum_converter(af, AddressFamily), _intenum_converter(socktype, SocketKind), proto, canonname, sa) for af, socktype, proto, canonname, sa in addrlist ] return result def _intenum_converter(value, enum_klass): try: return enum_klass(value) except ValueError: # pragma: no cover return value def gethostbyaddr(ip_address): """ gethostbyaddr(ip_address) -> (name, aliaslist, addresslist) Return the true host name, a list of aliases, and a list of IP addresses, for a host. The host argument is a string giving a host name or IP number. .. 
seealso:: :doc:`/dns` """ return get_hub().resolver.gethostbyaddr(ip_address) def getnameinfo(sockaddr, flags): """ getnameinfo(sockaddr, flags) -> (host, port) Get host and port for a sockaddr. .. seealso:: :doc:`/dns` """ return get_hub().resolver.getnameinfo(sockaddr, flags) def getfqdn(name=''): """Get fully qualified domain name from name. An empty argument is interpreted as meaning the local host. First the hostname returned by gethostbyaddr() is checked, then possibly existing aliases. In case no FQDN is available, hostname from gethostname() is returned. .. versionchanged:: 23.7.0 The IPv6 generic address '::' now returns the result of ``gethostname``, like the IPv4 address '0.0.0.0'. """ # pylint: disable=undefined-variable name = name.strip() # IPv6 added in a late Python 3.10/3.11 patch release. # https://github.com/python/cpython/issues/100374 if not name or name in ('0.0.0.0', '::'): name = gethostname() try: hostname, aliases, _ = gethostbyaddr(name) except error: pass else: aliases.insert(0, hostname) for name in aliases: # EWW! pylint:disable=redefined-argument-from-local if isinstance(name, bytes): if b'.' in name: break elif '.' in name: break else: name = hostname return name def __send_chunk(socket, data_memory, flags, timeleft, end, timeout=_timeout_error): """ Send the complete contents of ``data_memory`` before returning. This is the core loop around :meth:`send`. :param timeleft: Either ``None`` if there is no timeout involved, or a float indicating the timeout to use. :param end: Either ``None`` if there is no timeout involved, or a float giving the absolute end time. :return: An updated value for ``timeleft`` (or None) :raises timeout: If ``timeleft`` was given and elapsed while sending this chunk. """ data_sent = 0 len_data_memory = len(data_memory) started_timer = 0 while data_sent < len_data_memory: chunk = data_memory[data_sent:] if timeleft is None: data_sent += socket.send(chunk, flags) elif started_timer and timeleft <= 0: # Check before sending to guarantee a check # happens even if each chunk successfully sends its data # (especially important for SSL sockets since they have large # buffers). But only do this if we've actually tried to # send something once to avoid spurious timeouts on non-blocking # sockets. raise timeout('timed out') else: started_timer = 1 data_sent += socket.send(chunk, flags, timeout=timeleft) timeleft = end - time.time() return timeleft def _sendall(socket, data_memory, flags, SOL_SOCKET=__socket__.SOL_SOCKET, # pylint:disable=no-member SO_SNDBUF=__socket__.SO_SNDBUF): # pylint:disable=no-member """ Send the *data_memory* (which should be a memoryview) using the gevent *socket*, performing well on PyPy. """ # On PyPy up through 5.10.0, both PyPy2 and PyPy3, subviews # (slices) of a memoryview() object copy the underlying bytes the # first time the builtin socket.send() method is called. On a # non-blocking socket (that thus calls socket.send() many times) # with a large input, this results in many repeated copies of an # ever smaller string, depending on the networking buffering. For # example, if each send() can process 1MB of a 50MB input, and we # naively pass the entire remaining subview each time, we'd copy # 49MB, 48MB, 47MB, etc, thus completely killing performance. To # workaround this problem, we work in reasonable, fixed-size # chunks. 
This results in a 10x improvement to bench_sendall.py, # while having no measurable impact on CPython (since it doesn't # copy at all the only extra overhead is a few python function # calls, which is negligible for large inputs). # On one macOS machine, PyPy3 5.10.1 produced ~ 67.53 MB/s before this change, # and ~ 616.01 MB/s after. # See https://bitbucket.org/pypy/pypy/issues/2091/non-blocking-socketsend-slow-gevent # Too small of a chunk (the socket's buf size is usually too # small) results in reduced perf due to *too many* calls to send and too many # small copies. With a buffer of 143K (the default on my system), for # example, bench_sendall.py yields ~264MB/s, while using 1MB yields # ~653MB/s (matching CPython). 1MB is arbitrary and might be better # chosen, say, to match a page size? len_data_memory = len(data_memory) if not len_data_memory: # Don't try to send empty data at all, no point, and breaks ssl # See issue 719 return 0 chunk_size = max(socket.getsockopt(SOL_SOCKET, SO_SNDBUF), 1024 * 1024) data_sent = 0 end = None timeleft = None if socket.timeout is not None: timeleft = socket.timeout end = time.time() + timeleft while data_sent < len_data_memory: chunk_end = min(data_sent + chunk_size, len_data_memory) chunk = data_memory[data_sent:chunk_end] timeleft = __send_chunk(socket, chunk, flags, timeleft, end) data_sent += len(chunk) # Guaranteed it sent the whole thing # pylint:disable=no-member _RESOLVABLE_FAMILIES = (__socket__.AF_INET,) if __socket__.has_ipv6: _RESOLVABLE_FAMILIES += (__socket__.AF_INET6,) def _resolve_addr(sock, address): # Internal method: resolve the AF_INET[6] address using # getaddrinfo. if sock.family not in _RESOLVABLE_FAMILIES or not isinstance(address, tuple): return address # address is (host, port) (ipv4) or (host, port, flowinfo, scopeid) (ipv6). # If it's already resolved, no need to go through getaddrinfo() again. # That can lose precision (e.g., on IPv6, it can lose scopeid). The standard library # does this in socketmodule.c:setipaddr. (This is only part of the logic, the real # thing is much more complex.) try: if __socket__.inet_pton(sock.family, address[0]): return address except AttributeError: # pragma: no cover # inet_pton might not be available. pass except _SocketError: # Not parseable, needs resolved. pass # We don't pass the port to getaddrinfo because the C # socket module doesn't either (on some systems its # illegal to do that without also passing socket type and # protocol). Instead we join the port back at the end. # See https://github.com/gevent/gevent/issues/1252 host, port = address[:2] r = getaddrinfo(host, None, sock.family) address = r[0][-1] if len(address) == 2: address = (address[0], port) else: address = (address[0], port, address[2], address[3]) return address timeout_default = object() class SocketMixin(object): # pylint:disable=too-many-public-methods __slots__ = ( 'hub', 'timeout', '_read_event', '_write_event', '_sock', '__weakref__', ) def __init__(self): # Writing: # (self.a, self.b) = (None,) * 2 # generates the fastest bytecode. But At least on PyPy, # where the SSLSocket subclass has a timeout property, # it results in the settimeout() method getting the tuple # as the value, not the unpacked None. 
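        # (Descriptive note: hence the individual assignments below instead of
        # tuple unpacking.)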
self._read_event = None self._write_event = None self._sock = None self.hub = None self.timeout = None def _drop_events_and_close(self, closefd=True, _cancel_wait_ex=cancel_wait_ex): hub = self.hub read_event = self._read_event write_event = self._write_event self._read_event = self._write_event = None hub.cancel_waits_close_and_then( (read_event, write_event), _cancel_wait_ex, # Pass the socket to keep it alive until such time as # the waiters are guaranteed to be closed. self._drop_ref_on_close if closefd else id, self._sock ) def _drop_ref_on_close(self, sock): raise NotImplementedError def _get_ref(self): return self._read_event.ref or self._write_event.ref def _set_ref(self, value): self._read_event.ref = value self._write_event.ref = value ref = property(_get_ref, _set_ref) _wait = _wait_on_socket ### # Common methods defined here need to be added to the # API documentation specifically. ### def settimeout(self, howlong): if howlong is not None: try: f = howlong.__float__ except AttributeError: raise TypeError('a float is required', howlong, type(howlong)) howlong = f() if howlong < 0.0: raise ValueError('Timeout value out of range') # avoid recursion with any property on self.timeout SocketMixin.timeout.__set__(self, howlong) def gettimeout(self): # avoid recursion with any property on self.timeout return SocketMixin.timeout.__get__(self, type(self)) def setblocking(self, flag): # Beginning in 3.6.0b3 this is supposed to raise # if the file descriptor is closed, but the test for it # involves closing the fileno directly. Since we # don't touch the fileno here, it doesn't make sense for # us. if flag: self.timeout = None else: self.timeout = 0.0 def shutdown(self, how): if how == 0: # SHUT_RD self.hub.cancel_wait(self._read_event, cancel_wait_ex) elif how == 1: # SHUT_WR self.hub.cancel_wait(self._write_event, cancel_wait_ex) else: self.hub.cancel_wait(self._read_event, cancel_wait_ex) self.hub.cancel_wait(self._write_event, cancel_wait_ex) self._sock.shutdown(how) # pylint:disable-next=undefined-variable family = property(lambda self: _intenum_converter(self._sock.family, AddressFamily)) # pylint:disable-next=undefined-variable type = property(lambda self: _intenum_converter(self._sock.type, SocketKind)) proto = property(lambda self: self._sock.proto) def fileno(self): return self._sock.fileno() def getsockname(self): return self._sock.getsockname() def getpeername(self): return self._sock.getpeername() def bind(self, address): return self._sock.bind(address) def listen(self, *args): return self._sock.listen(*args) def getsockopt(self, *args): return self._sock.getsockopt(*args) def setsockopt(self, *args): return self._sock.setsockopt(*args) if hasattr(__socket__.socket, 'ioctl'): # os.name == 'nt' def ioctl(self, *args): return self._sock.ioctl(*args) if hasattr(__socket__.socket, 'sleeptaskw'): # os.name == 'riscos def sleeptaskw(self, *args): return self._sock.sleeptaskw(*args) def getblocking(self): """ Returns whether the socket will approximate blocking behaviour. .. versionadded:: 1.3a2 Added in Python 3.7. """ return self.timeout != 0.0 def connect(self, address): """ Connect to *address*. .. versionchanged:: 20.6.0 If the host part of the address includes an IPv6 scope ID, it will be used instead of ignored, if the platform supplies :func:`socket.inet_pton`. """ # In the standard library, ``connect`` and ``connect_ex`` are implemented # in C, and they both call a C function ``internal_connect`` to do the real # work. 
This means that it is a visible behaviour difference to have our # Python implementation of ``connect_ex`` simply call ``connect``: # it could be overridden in a subclass or at runtime! Because of our exception handling, # this can make a difference for known subclasses like SSLSocket. self._internal_connect(address) def connect_ex(self, address): """ Connect to *address*, returning a result code. .. versionchanged:: 23.7.0 No longer uses an overridden ``connect`` method on this object. Instead, like the standard library, this method always uses a non-replacable internal connection function. """ try: return self._internal_connect(address) or 0 except __socket__.timeout: return EAGAIN except __socket__.gaierror: # pylint:disable=try-except-raise # gaierror/overflowerror/typerror is not silenced by connect_ex; # gaierror extends error so catch it first raise except _SocketError as ex: # Python 3: error is now OSError and it has various subclasses. # Only those that apply to actually connecting are silenced by # connect_ex. # On Python 3, we want to check ex.errno; on Python 2 # there is no such attribute, we need to look at the first # argument. try: err = ex.errno except AttributeError: err = ex.args[0] if err: return err raise def _internal_connect(self, address): # Like the C function ``internal_connect``, not meant to be overridden, # but exposed for testing. if self.timeout == 0.0: return self._sock.connect(address) address = _resolve_addr(self._sock, address) with Timeout._start_new_or_dummy(self.timeout, __socket__.timeout("timed out")): while 1: err = self.getsockopt(__socket__.SOL_SOCKET, __socket__.SO_ERROR) if err: raise _SocketError(err, strerror(err)) result = self._sock.connect_ex(address) if not result or result == EISCONN: break if (result in (EWOULDBLOCK, EINPROGRESS, EALREADY)) or (result == EINVAL and is_windows): self._wait(self._write_event) else: if (isinstance(address, tuple) and address[0] == 'fe80::1' and result == EHOSTUNREACH): # On Python 3.7 on mac, we see EHOSTUNREACH # returned for this link-local address, but it really is # supposed to be ECONNREFUSED according to the standard library # tests (test_socket.NetworkConnectionNoServer.test_create_connection) # (On previous versions, that code passed the '127.0.0.1' IPv4 address, so # ipv6 link locals were never a factor; 3.7 passes 'localhost'.) # It is something of a mystery how the stdlib socket code doesn't # produce EHOSTUNREACH---I (JAM) can't see how socketmodule.c would avoid # that. The normal connect just calls connect_ex much like we do. 
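                    # (Descriptive note: the assignment below rewrites the errno
                    # to the value the stdlib tests expect before the error is
                    # raised.)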
result = ECONNREFUSED raise _SocketError(result, strerror(result)) def recv(self, *args): while 1: try: return self._sock.recv(*args) except _SocketError as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise # QQQ without clearing exc_info test__refcount.test_clean_exit fails exc_clear() # Python 2 self._wait(self._read_event) def recvfrom(self, *args): while 1: try: return self._sock.recvfrom(*args) except _SocketError as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise exc_clear() # Python 2 self._wait(self._read_event) def recvfrom_into(self, *args): while 1: try: return self._sock.recvfrom_into(*args) except _SocketError as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise exc_clear() # Python 2 self._wait(self._read_event) def recv_into(self, *args): while 1: try: return self._sock.recv_into(*args) except _SocketError as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise exc_clear() # Python 2 self._wait(self._read_event) def sendall(self, data, flags=0): # this sendall is also reused by gevent.ssl.SSLSocket subclass, # so it should not call self._sock methods directly data_memory = _get_memory(data) return _sendall(self, data_memory, flags) def sendto(self, *args): try: return self._sock.sendto(*args) except _SocketError as ex: if ex.args[0] != EWOULDBLOCK or self.timeout == 0.0: raise exc_clear() self._wait(self._write_event) try: return self._sock.sendto(*args) except _SocketError as ex2: if ex2.args[0] == EWOULDBLOCK: exc_clear() return 0 raise def send(self, data, flags=0, timeout=timeout_default): if timeout is timeout_default: timeout = self.timeout try: return self._sock.send(data, flags) except _SocketError as ex: if ex.args[0] not in GSENDAGAIN or timeout == 0.0: raise exc_clear() self._wait(self._write_event) try: return self._sock.send(data, flags) except _SocketError as ex2: if ex2.args[0] == EWOULDBLOCK: exc_clear() return 0 raise @classmethod def _fixup_docstrings(cls): for k, v in vars(cls).items(): if k.startswith('_'): continue if not hasattr(v, '__doc__') or v.__doc__: continue smeth = getattr(__socket__.socket, k, None) if not smeth or not smeth.__doc__: continue try: v.__doc__ = smeth.__doc__ except (AttributeError, TypeError): # slots can't have docs. Py2 raises TypeError, # Py3 raises AttributeError continue SocketMixin._fixup_docstrings() del SocketMixin._fixup_docstrings gevent-24.11.1/src/gevent/_tblib.py000066400000000000000000000276731471441230600170550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # A vendored version of part of https://github.com/ionelmc/python-tblib # pylint:disable=redefined-outer-name,reimported,function-redefined,bare-except,no-else-return,broad-except #### # Copyright (c) 2013-2016, Ionel Cristian Mărieș # All rights reserved. # Redistribution and use in source and binary forms, with or without modification, are permitted provided that the # following conditions are met: # 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following # disclaimer. # 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following # disclaimer in the documentation and/or other materials provided with the distribution. # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, # INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, # WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF # THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #### # __init__.py import re import sys from types import CodeType __version__ = '2.0.0' __all__ = 'Traceback', 'TracebackParseError', 'Frame', 'Code' FRAME_RE = re.compile(r'^\s*File "(?P.+)", line (?P\d+)(, in (?P.+))?$') class _AttrDict(dict): __slots__ = () def __getattr__(self, name): try: return self[name] except KeyError: raise AttributeError(name) from None # noinspection PyPep8Naming class __traceback_maker(Exception): pass class TracebackParseError(Exception): pass class Code: """ Class that replicates just enough of the builtin Code object to enable serialization and traceback rendering. """ co_code = None def __init__(self, code): self.co_filename = code.co_filename self.co_name = code.co_name self.co_argcount = 0 self.co_kwonlyargcount = 0 self.co_varnames = () self.co_nlocals = 0 self.co_stacksize = 0 self.co_flags = 64 self.co_firstlineno = 0 class Frame: """ Class that replicates just enough of the builtin Frame object to enable serialization and traceback rendering. """ def __init__(self, frame): self.f_locals = {} self.f_globals = {k: v for k, v in frame.f_globals.items() if k in ('__file__', '__name__')} self.f_code = Code(frame.f_code) self.f_lineno = frame.f_lineno def clear(self): """ For compatibility with PyPy 3.5; clear() was added to frame in Python 3.4 and is called by traceback.clear_frames(), which in turn is called by unittest.TestCase.assertRaises """ class Traceback: """ Class that wraps builtin Traceback objects. """ tb_next = None def __init__(self, tb): self.tb_frame = Frame(tb.tb_frame) # noinspection SpellCheckingInspection self.tb_lineno = int(tb.tb_lineno) # Build in place to avoid exceeding the recursion limit tb = tb.tb_next prev_traceback = self cls = type(self) while tb is not None: traceback = object.__new__(cls) traceback.tb_frame = Frame(tb.tb_frame) traceback.tb_lineno = int(tb.tb_lineno) prev_traceback.tb_next = traceback prev_traceback = traceback tb = tb.tb_next def as_traceback(self): """ Convert to a builtin Traceback object that is usable for raising or rendering a stacktrace. 
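        A rough round-trip sketch (illustrative only)::

            import sys
            try:
                1 / 0
            except ZeroDivisionError:
                wrapped = Traceback(sys.exc_info()[2])

            tb = wrapped.as_traceback()
            # tb can now be passed to ``raise ....with_traceback(tb)``
            # or handed to the traceback module for rendering.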
""" current = self top_tb = None tb = None while current: f_code = current.tb_frame.f_code code = compile('\n' * (current.tb_lineno - 1) + 'raise __traceback_maker', current.tb_frame.f_code.co_filename, 'exec') if hasattr(code, 'replace'): # Python 3.8 and newer code = code.replace(co_argcount=0, co_filename=f_code.co_filename, co_name=f_code.co_name, co_freevars=(), co_cellvars=()) else: code = CodeType( 0, code.co_kwonlyargcount, code.co_nlocals, code.co_stacksize, code.co_flags, code.co_code, code.co_consts, code.co_names, code.co_varnames, f_code.co_filename, f_code.co_name, code.co_firstlineno, code.co_lnotab, (), (), ) # noinspection PyBroadException try: exec(code, dict(current.tb_frame.f_globals), {}) # noqa: S102 except Exception: next_tb = sys.exc_info()[2].tb_next if top_tb is None: top_tb = next_tb if tb is not None: tb.tb_next = next_tb tb = next_tb del next_tb current = current.tb_next try: return top_tb finally: del top_tb del tb to_traceback = as_traceback def as_dict(self): """ Converts to a dictionary representation. You can serialize the result to JSON as it only has builtin objects like dicts, lists, ints or strings. """ if self.tb_next is None: tb_next = None else: tb_next = self.tb_next.to_dict() code = { 'co_filename': self.tb_frame.f_code.co_filename, 'co_name': self.tb_frame.f_code.co_name, } frame = { 'f_globals': self.tb_frame.f_globals, 'f_code': code, 'f_lineno': self.tb_frame.f_lineno, } return { 'tb_frame': frame, 'tb_lineno': self.tb_lineno, 'tb_next': tb_next, } to_dict = as_dict @classmethod def from_dict(cls, dct): """ Creates an instance from a dictionary with the same structure as ``.as_dict()`` returns. """ if dct['tb_next']: tb_next = cls.from_dict(dct['tb_next']) else: tb_next = None code = _AttrDict( co_filename=dct['tb_frame']['f_code']['co_filename'], co_name=dct['tb_frame']['f_code']['co_name'], ) frame = _AttrDict( f_globals=dct['tb_frame']['f_globals'], f_code=code, f_lineno=dct['tb_frame']['f_lineno'], ) tb = _AttrDict( tb_frame=frame, tb_lineno=dct['tb_lineno'], tb_next=tb_next, ) return cls(tb) @classmethod def from_string(cls, string, strict=True): """ Creates an instance by parsing a stacktrace. Strict means that parsing stops when lines are not indented by at least two spaces anymore. """ frames = [] header = strict for line in string.splitlines(): line = line.rstrip() if header: if line == 'Traceback (most recent call last):': header = False continue frame_match = FRAME_RE.match(line) if frame_match: frames.append(frame_match.groupdict()) elif line.startswith(' '): pass elif strict: break # traceback ended if frames: previous = None for frame in reversed(frames): previous = _AttrDict( frame, tb_frame=_AttrDict( frame, f_globals=_AttrDict( __file__=frame['co_filename'], __name__='?', ), f_code=_AttrDict(frame), f_lineno=int(frame['tb_lineno']), ), tb_next=previous, ) return cls(previous) else: raise TracebackParseError('Could not find any frames in %r.' % string) # pickling_support.py # gevent: Trying the dict support, so maybe we don't even need this # at all. import sys from types import TracebackType #from . import Frame # gevent #from . 
import Traceback # gevent # gevent: defer # if sys.version_info.major >= 3: # import copyreg # else: # import copy_reg as copyreg def unpickle_traceback(tb_frame, tb_lineno, tb_next): ret = object.__new__(Traceback) ret.tb_frame = tb_frame ret.tb_lineno = tb_lineno ret.tb_next = tb_next return ret.as_traceback() def pickle_traceback(tb): return unpickle_traceback, (Frame(tb.tb_frame), tb.tb_lineno, tb.tb_next and Traceback(tb.tb_next)) def unpickle_exception(func, args, cause, tb): inst = func(*args) inst.__cause__ = cause inst.__traceback__ = tb return inst def pickle_exception(obj): # All exceptions, unlike generic Python objects, define __reduce_ex__ # __reduce_ex__(4) should be no different from __reduce_ex__(3). # __reduce_ex__(5) could bring benefits in the unlikely case the exception # directly contains buffers, but PickleBuffer objects will cause a crash when # running on protocol=4, and there's no clean way to figure out the current # protocol from here. Note that any object returned by __reduce_ex__(3) will # still be pickled with protocol 5 if pickle.dump() is running with it. rv = obj.__reduce_ex__(3) if isinstance(rv, str): raise TypeError('str __reduce__ output is not supported') assert isinstance(rv, tuple) assert len(rv) >= 2 return (unpickle_exception, rv[:2] + (obj.__cause__, obj.__traceback__)) + rv[2:] def _get_subclasses(cls): # Depth-first traversal of all direct and indirect subclasses of cls to_visit = [cls] while to_visit: this = to_visit.pop() yield this to_visit += list(this.__subclasses__()) def install(*exc_classes_or_instances): import copyreg copyreg.pickle(TracebackType, pickle_traceback) if sys.version_info.major < 3: # Dummy decorator? if len(exc_classes_or_instances) == 1: exc = exc_classes_or_instances[0] if isinstance(exc, type) and issubclass(exc, BaseException): return exc return if not exc_classes_or_instances: for exception_cls in _get_subclasses(BaseException): copyreg.pickle(exception_cls, pickle_exception) return for exc in exc_classes_or_instances: if isinstance(exc, BaseException): while exc is not None: copyreg.pickle(type(exc), pickle_exception) exc = exc.__cause__ elif isinstance(exc, type) and issubclass(exc, BaseException): copyreg.pickle(exc, pickle_exception) # Allow using @install as a decorator for Exception classes if len(exc_classes_or_instances) == 1: return exc else: raise TypeError('Expected subclasses or instances of BaseException, got %s' % (type(exc))) # gevent API _installed = False def dump_traceback(tb): from pickle import dumps if tb is None: return dumps(None) tb = Traceback(tb) return dumps(tb.to_dict()) def load_traceback(s): from pickle import loads as_dict = loads(s) if as_dict is None: return None tb = Traceback.from_dict(as_dict) return tb.as_traceback() gevent-24.11.1/src/gevent/_threading.py000066400000000000000000000202261471441230600177110ustar00rootroot00000000000000""" A small selection of primitives that always work with native threads. This has very limited utility and is targeted only for the use of gevent's threadpool. """ from __future__ import absolute_import from collections import deque from gevent import monkey from gevent._compat import thread_mod_name __all__ = [ 'Lock', 'Queue', 'EmptyTimeout', ] start_new_thread, Lock, get_thread_ident, = monkey.get_original(thread_mod_name, [ 'start_new_thread', 'allocate_lock', 'get_ident', ]) # We want to support timeouts on locks. In this way, we can allow idle threads to # expire from a thread pool. 
On Python 3, this is native behaviour; on Python 2, # we have to emulate it. For Python 3, we want this to have the lowest possible overhead, # so we'd prefer to use a direct call, rather than go through a wrapper. But we also # don't want to allocate locks at import time because..., so we swizzle out the method # at runtime. # # # In all cases, a timeout value of -1 means "infinite". Sigh. def acquire_with_timeout(lock, timeout=-1): globals()['acquire_with_timeout'] = type(lock).acquire return lock.acquire(timeout=timeout) class _Condition(object): # We could use libuv's ``uv_cond_wait`` to implement this whole # class and get native timeouts and native performance everywhere. # pylint:disable=method-hidden __slots__ = ( '_lock', '_waiters', ) def __init__(self, lock): # This lock is used to protect our own data structures; # calls to ``wait`` and ``notify_one`` *must* be holding this # lock. self._lock = lock self._waiters = [] # No need to special case for _release_save and # _acquire_restore; those are only used for RLock, and # we don't use those. def __enter__(self): return self._lock.__enter__() def __exit__(self, t, v, tb): return self._lock.__exit__(t, v, tb) def __repr__(self): return "" % (self._lock, len(self._waiters)) def wait(self, wait_lock, timeout=-1, _wait_for_notify=acquire_with_timeout): # This variable is for the monitoring utils to know that # this is an idle frame and shouldn't be counted. gevent_threadpool_worker_idle = True # pylint:disable=unused-variable # The _lock must be held. # The ``wait_lock`` must be *un*owned, so the timeout doesn't apply there. # Take that lock now. wait_lock.acquire() self._waiters.append(wait_lock) self._lock.release() try: # We're already holding this native lock, so when we try to acquire it again, # that won't work and we'll block until someone calls notify_one() (which might # have already happened). notified = _wait_for_notify(wait_lock, timeout) finally: self._lock.acquire() # Now that we've acquired _lock again, no one can call notify_one(), or this # method. if not notified: # We need to come out of the waiters list. IF we're still there; it's # possible that between the call to _acquire() returning False, # and the time that we acquired _lock, someone did a ``notify_one`` # and released the lock. For that reason, do a non-blocking acquire() notified = wait_lock.acquire(False) if not notified: # Well narf. No go. We must stil be in the waiters list, so take us out self._waiters.remove(wait_lock) # We didn't get notified, but we're still holding a lock that we # need to release. wait_lock.release() else: # We got notified, so we need to reset. wait_lock.release() return notified def notify_one(self): # The lock SHOULD be owned, but we don't check that. try: waiter = self._waiters.pop() except IndexError: # Nobody around pass else: # The owner of the ``waiter`` is blocked on # acquiring it again, so when we ``release`` it, it # is free to be scheduled and resume. waiter.release() class EmptyTimeout(Exception): """Raised from :meth:`Queue.get` if no item is available in the timeout.""" class Queue(object): """ Create a queue object. The queue is always infinite size. """ __slots__ = ('_queue', '_mutex', '_not_empty', 'unfinished_tasks') def __init__(self): self._queue = deque() # mutex must be held whenever the queue is mutating. All methods # that acquire mutex must release it before returning. mutex # is shared between the three conditions, so acquiring and # releasing the conditions also acquires and releases mutex. 
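# The consumer-side protocol, as implemented by get()/allocate_cookie() below:
# each native worker thread obtains its own plain lock ("cookie") once via
# allocate_cookie() and passes it to every get() call; _Condition.wait() parks
# the thread on that cookie lock while the shared mutex is temporarily released.
# An illustrative sketch (not from the original sources), using only names
# defined in this module:
#
#   q = Queue()
#   cookie = q.allocate_cookie()   # one per consumer thread
#   q.put(work_item)               # producer side
#   try:
#       item = q.get(cookie, timeout=5)
#   except EmptyTimeout:
#       pass                       # idle worker may choose to exit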
self._mutex = Lock() # Notify not_empty whenever an item is added to the queue; a # thread waiting to get is notified then. self._not_empty = _Condition(self._mutex) self.unfinished_tasks = 0 def task_done(self): """Indicate that a formerly enqueued task is complete. Used by Queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete. If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue). Raises a ValueError if called more times than there were items placed in the queue. """ with self._mutex: unfinished = self.unfinished_tasks - 1 if unfinished <= 0: if unfinished < 0: raise ValueError( 'task_done() called too many times; %s remaining tasks' % ( self.unfinished_tasks ) ) self.unfinished_tasks = unfinished def qsize(self, len=len): """Return the approximate size of the queue (not reliable!).""" return len(self._queue) def empty(self): """Return True if the queue is empty, False otherwise (not reliable!).""" return not self.qsize() def full(self): """Return True if the queue is full, False otherwise (not reliable!).""" return False def put(self, item): """Put an item into the queue. """ with self._mutex: self._queue.append(item) self.unfinished_tasks += 1 self._not_empty.notify_one() def get(self, cookie, timeout=-1): """ Remove and return an item from the queue. If *timeout* is given, and is not -1, then we will attempt to wait for only that many seconds to get an item. If those seconds elapse and no item has become available, raises :class:`EmptyTimeout`. """ with self._mutex: while not self._queue: # Temporarily release our mutex and wait for someone # to wake us up. There *should* be an item in the queue # after that. notified = self._not_empty.wait(cookie, timeout) # Ok, we're holding the mutex again, so our state is guaranteed stable. # It's possible that in the brief window where we didn't hold the lock, # someone put something in the queue, and if so, we can take it. if not notified and not self._queue: raise EmptyTimeout item = self._queue.popleft() return item def allocate_cookie(self): """ Create and return the *cookie* to pass to `get()`. Each thread that will use `get` needs a distinct cookie. """ return Lock() def kill(self): """ Call to destroy this object. Use this when it's not possible to safely drain the queue, e.g., after a fork when the locks are in an uncertain state. """ self._queue = None self._mutex = None self._not_empty = None self.unfinished_tasks = None gevent-24.11.1/src/gevent/_tracer.py000066400000000000000000000144671471441230600172360ustar00rootroot00000000000000# Copyright (c) 2018 gevent. See LICENSE for details. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False from __future__ import print_function, absolute_import, division import sys import traceback from greenlet import settrace from greenlet import getcurrent from gevent.util import format_run_info from gevent._compat import perf_counter from gevent._util import gmctime __all__ = [ 'GreenletTracer', 'HubSwitchTracer', 'MaxSwitchTracer', ] # Recall these classes are cython compiled, so # class variable declarations are bad. class GreenletTracer(object): def __init__(self): # A counter, incremented by the greenlet trace function # we install on every greenlet switch. This is reset when the # periodic monitoring thread runs. 
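# (The counter is read and zeroed by did_block_hub(); a nonzero value there
# means at least one greenlet switch happened since the monitor last looked,
# so the loop is not considered blocked.)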
self.greenlet_switch_counter = 0 # The greenlet last switched to. self.active_greenlet = None # The trace function that was previously installed, # if any. # NOTE: Calling a class instance is cheaper than # calling a bound method (at least when compiled with cython) # even when it redirects to another function. prev_trace = settrace(self) self.previous_trace_function = prev_trace self._killed = False def kill(self): # Must be called in the monitored thread. if not self._killed: self._killed = True settrace(self.previous_trace_function) self.previous_trace_function = None def _trace(self, event, args): # This function runs in the thread we are monitoring. self.greenlet_switch_counter += 1 if event in ('switch', 'throw'): # args is (origin, target). This is the only defined # case self.active_greenlet = args[1] else: self.active_greenlet = None if self.previous_trace_function is not None: self.previous_trace_function(event, args) def __call__(self, event, args): return self._trace(event, args) def did_block_hub(self, hub): # Check to see if we have blocked since the last call to this # method. Returns a true value if we blocked (not in the hub), # a false value if everything is fine. # This may be called in the same thread being traced or a # different thread; if a different thread, there is a race # condition with this being incremented in the thread we're # monitoring, but probably not often enough to lead to # annoying false positives. active_greenlet = self.active_greenlet did_switch = self.greenlet_switch_counter != 0 self.greenlet_switch_counter = 0 if did_switch or active_greenlet is None or active_greenlet is hub: # Either we switched, or nothing is running (we got a # trace event we don't know about or were requested to # ignore), or we spent the whole time in the hub, blocked # for IO. Nothing to report. return False return True, active_greenlet def ignore_current_greenlet_blocking(self): # Don't pay attention to the current greenlet. self.active_greenlet = None def monitor_current_greenlet_blocking(self): self.active_greenlet = getcurrent() def did_block_hub_report(self, hub, active_greenlet, format_kwargs): # XXX: On Python 2 with greenlet 1.0a1, '%s' formatting a greenlet # results in a unicode object. This is a bug in greenlet, I think. # https://github.com/python-greenlet/greenlet/issues/218 report = ['=' * 80, '\n%s : Greenlet %s appears to be blocked' % (gmctime(), str(active_greenlet))] report.append(" Reported by %s" % (self,)) try: frame = sys._current_frames()[hub.thread_ident] except KeyError: # The thread holding the hub has died. Perhaps we shouldn't # even report this? stack = ["Unknown: No thread found for hub %r\n" % (hub,)] else: stack = traceback.format_stack(frame) report.append('Blocked Stack (for thread id %s):' % (hex(hub.thread_ident),)) report.append(''.join(stack)) report.append("Info:") report.extend(format_run_info(**format_kwargs)) return report class _HubTracer(GreenletTracer): def __init__(self, hub, max_blocking_time): GreenletTracer.__init__(self) self.max_blocking_time = max_blocking_time self.hub = hub def kill(self): self.hub = None GreenletTracer.kill(self) class HubSwitchTracer(_HubTracer): # A greenlet tracer that records the last time we switched *into* the hub. 
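# Unlike the base GreenletTracer, which only knows whether *any* switch
# happened, this subclass stamps perf_counter() each time the hub is switched
# into; did_block_hub() then reports a block once more than max_blocking_time
# seconds have elapsed since that timestamp.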
def __init__(self, hub, max_blocking_time): _HubTracer.__init__(self, hub, max_blocking_time) self.last_entered_hub = 0 def _trace(self, event, args): GreenletTracer._trace(self, event, args) if self.active_greenlet is self.hub: self.last_entered_hub = perf_counter() def did_block_hub(self, hub): if perf_counter() - self.last_entered_hub > self.max_blocking_time: return True, self.active_greenlet class MaxSwitchTracer(_HubTracer): # A greenlet tracer that records the maximum time between switches, # not including time spent in the hub. def __init__(self, hub, max_blocking_time): _HubTracer.__init__(self, hub, max_blocking_time) self.last_switch = perf_counter() self.max_blocking = 0 def _trace(self, event, args): old_active = self.active_greenlet GreenletTracer._trace(self, event, args) if old_active is not self.hub and old_active is not None: # If we're switching out of the hub, the blocking # time doesn't count. switched_at = perf_counter() self.max_blocking = max(self.max_blocking, switched_at - self.last_switch) def did_block_hub(self, hub): if self.max_blocking == 0: # We never switched. Check the time now self.max_blocking = perf_counter() - self.last_switch if self.max_blocking > self.max_blocking_time: return True, self.active_greenlet from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__tracer') gevent-24.11.1/src/gevent/_util.py000066400000000000000000000253241471441230600167250ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ internal gevent utilities, not for external use. """ # Be very careful not to import anything that would cause issues with # monkey-patching. from __future__ import print_function, absolute_import, division from gevent._compat import iteritems class _NONE(object): """ A special object you must never pass to any gevent API. Used as a marker object for keyword arguments that cannot have the builtin None (because that might be a valid value). """ __slots__ = () def __repr__(self): return '' _NONE = _NONE() WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__') WRAPPER_UPDATES = ('__dict__',) def update_wrapper(wrapper, wrapped, assigned=WRAPPER_ASSIGNMENTS, updated=WRAPPER_UPDATES): """ Based on code from the standard library ``functools``, but doesn't perform any of the troublesome imports. functools imports RLock from _thread for purposes of the ``lru_cache``, making it problematic to use from gevent. The other imports are somewhat heavy: abc, collections, types. """ for attr in assigned: try: value = getattr(wrapped, attr) except AttributeError: pass else: setattr(wrapper, attr, value) for attr in updated: getattr(wrapper, attr).update(getattr(wrapped, attr, {})) # Issue #17482: set __wrapped__ last so we don't inadvertently copy it # from the wrapped function when updating __dict__ wrapper.__wrapped__ = wrapped # Return the wrapper so this can be used as a decorator via partial() return wrapper def copy_globals(source, globs, only_names=None, ignore_missing_names=False, names_to_ignore=(), dunder_names_to_keep=('__implements__', '__all__', '__imports__'), cleanup_globs=True): """ Copy attributes defined in ``source.__dict__`` to the dictionary in globs (which should be the caller's :func:`globals`). Names that start with ``__`` are ignored (unless they are in *dunder_names_to_keep*). Anything found in *names_to_ignore* is also ignored. If *only_names* is given, only those attributes will be considered. 
In this case, *ignore_missing_names* says whether or not to raise an :exc:`AttributeError` if one of those names can't be found. If *cleanup_globs* has a true value, then common things imported but not used at runtime are removed, including this function. Returns a list of the names copied; this should be assigned to ``__imports__``. """ if only_names: if ignore_missing_names: items = ((k, getattr(source, k, _NONE)) for k in only_names) else: items = ((k, getattr(source, k)) for k in only_names) else: items = iteritems(source.__dict__) copied = [] for key, value in items: if value is _NONE: continue if key in names_to_ignore: continue if key.startswith("__") and key not in dunder_names_to_keep: continue globs[key] = value copied.append(key) if cleanup_globs: if 'copy_globals' in globs: del globs['copy_globals'] return copied def import_c_accel(globs, cname): """ Import the C-accelerator for the *cname* and copy its globals. The *cname* should be hardcoded to match the expected C accelerator module. Unless PURE_PYTHON is set (in the environment or automatically on PyPy), then the C-accelerator is required. """ if not cname.startswith('gevent._gevent_c'): # Old module code that hasn't been updated yet. cname = cname.replace('gevent._', 'gevent._gevent_c') name = globs.get('__name__') if not name or name == cname: # Do nothing if we're being exec'd as a file (no name) # or we're running from the C extension return from gevent._compat import PURE_PYTHON if PURE_PYTHON: return import importlib import warnings with warnings.catch_warnings(): # Python 3.7 likes to produce # "ImportWarning: can't resolve # package from __spec__ or __package__, falling back on # __name__ and __path__" # when we load cython compiled files. This is probably a bug in # Cython, but it doesn't seem to have any consequences, it's # just annoying to see and can mess up our unittests. warnings.simplefilter('ignore', ImportWarning) mod = importlib.import_module(cname) # By adopting the entire __dict__, we get a more accurate # __file__ and module repr, plus we don't leak any imported # things we no longer need. globs.clear() globs.update(mod.__dict__) if 'import_c_accel' in globs: del globs['import_c_accel'] class Lazy(object): """ A non-data descriptor used just like @property. The difference is the function value is assigned to the instance dict the first time it is accessed and then the function is never called again. Contrast with `readproperty`. """ def __init__(self, func): self.data = (func, func.__name__) update_wrapper(self, func) def __get__(self, inst, class_): if inst is None: return self func, name = self.data value = func(inst) inst.__dict__[name] = value return value class readproperty(object): """ A non-data descriptor similar to :class:`property`. The difference is that the property can be assigned to directly, without invoking a setter function. When the property is assigned to, it is cached in the instance and the function is not called on that instance again. Contrast with `Lazy`, which caches the result of the function in the instance the first time it is called and never calls the function on that instance again. """ def __init__(self, func): self.func = func update_wrapper(self, func) def __get__(self, inst, class_): if inst is None: return self return self.func(inst) class LazyOnClass(object): """ Similar to `Lazy`, but stores the value in the class. 
This is useful when the getter is expensive and conceptually a shared class value, but we don't want import-time side-effects such as expensive imports because it may not always be used. Probably doesn't mix well with inheritance? """ @classmethod def lazy(cls, cls_dict, func): "Put a LazyOnClass object in *cls_dict* with the same name as *func*" cls_dict[func.__name__] = cls(func) def __init__(self, func, name=None): self.name = name or func.__name__ self.func = func def __get__(self, inst, klass): if inst is None: # pragma: no cover return self val = self.func(inst) setattr(klass, self.name, val) return val def gmctime(): """ Returns the current time as a string in RFC3339 format. """ import time return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()) ### # Release automation. # # Most of this is to integrate zest.releaser with towncrier. There is # a plugin package that can do the same: # https://github.com/collective/zestreleaser.towncrier ### def prereleaser_middle(data): # pragma: no cover """ zest.releaser prerelease middle hook for gevent. The prerelease step: asks you for a version number updates the setup.py or version.txt and the CHANGES/HISTORY/CHANGELOG file (with either this new version number and offers to commit those changes to git The middle hook: All data dictionary items are available and some questions (like new version number) have been asked. No filesystem changes have been made yet. It is our job to finish up the filesystem changes needed, including: - Calling towncrier to handle CHANGES.rst - Add the version number to ``versionadded``, ``versionchanged`` and ``deprecated`` directives in Python source. """ if data['name'] != 'gevent': # We are specified in ``setup.cfg``, not ``setup.py``, so we do not # come into play for other projects, only this one. We shouldn't # need this check, but there it is. return import re import os import subprocess from gevent.testing import modules new_version = data['new_version'] # Generate CHANGES.rst, remove old news entries. subprocess.check_call([ 'towncrier', 'build', '--version', data['new_version'], '--yes' ]) data['update_history'] = False # Because towncrier already did. # But unstage it; we want it to show in the diff zest.releaser will do subprocess.check_call([ 'git', 'restore', '--staged', 'CHANGES.rst', ]) # Put the version number in source files. regex = re.compile(b'.. (versionchanged|versionadded|deprecated):: NEXT') if not isinstance(new_version, bytes): new_version_bytes = new_version.encode('ascii') else: new_version_bytes = new_version new_version_bytes = new_version.encode('ascii') replacement = br'.. \1:: %s' % (new_version_bytes,) # TODO: This should also look in the docs/ directory at # *.rst for path, _ in modules.walk_modules( # Start here basedir=os.path.join(data['reporoot'], 'src', 'gevent'), # Include sub-dirs recursive=True, # Include tests include_tests=True, # and other things usually excluded excluded_modules=(), # Don't return build binaries include_so=False, # Don't try to import things; we want all files. check_optional=False, ): with open(path, 'rb') as f: contents = f.read() new_contents, count = regex.subn(replacement, contents) if count: print("Replaced version NEXT in", path) with open(path, 'wb') as f: f.write(new_contents) def postreleaser_before(data): # pragma: no cover """ Prevents zest.releaser from modifying the CHANGES.rst to add the 'no changes yet' section; towncrier is in charge of CHANGES.rst. Needs zest.releaser 6.15.0. 
""" if data['name'] != 'gevent': # We are specified in ``setup.cfg``, not ``setup.py``, so we do not # come into play for other projects, only this one. We shouldn't # need this check, but there it is. return data['update_history'] = False gevent-24.11.1/src/gevent/_waiter.py000066400000000000000000000163331471441230600172430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright 2018 gevent # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False """ Low-level waiting primitives. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys from gevent._hub_local import get_hub_noargs as get_hub from gevent.exceptions import ConcurrentObjectUseError __all__ = [ 'Waiter', ] _NONE = object() locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None class Waiter(object): """ A low level communication utility for greenlets. Waiter is a wrapper around greenlet's ``switch()`` and ``throw()`` calls that makes them somewhat safer: * switching will occur only if the waiting greenlet is executing :meth:`get` method currently; * any error raised in the greenlet is handled inside :meth:`switch` and :meth:`throw` * if :meth:`switch`/:meth:`throw` is called before the receiver calls :meth:`get`, then :class:`Waiter` will store the value/exception. The following :meth:`get` will return the value/raise the exception. The :meth:`switch` and :meth:`throw` methods must only be called from the :class:`Hub` greenlet. The :meth:`get` method must be called from a greenlet other than :class:`Hub`. >>> from gevent.hub import Waiter >>> from gevent import get_hub >>> result = Waiter() >>> timer = get_hub().loop.timer(0.1) >>> timer.start(result.switch, 'hello from Waiter') >>> result.get() # blocks for 0.1 seconds 'hello from Waiter' >>> timer.close() If switch is called before the greenlet gets a chance to call :meth:`get` then :class:`Waiter` stores the value. >>> from gevent.time import sleep >>> result = Waiter() >>> timer = get_hub().loop.timer(0.1) >>> timer.start(result.switch, 'hi from Waiter') >>> sleep(0.2) >>> result.get() # returns immediately without blocking 'hi from Waiter' >>> timer.close() .. warning:: This is a limited and dangerous way to communicate between greenlets. It can easily leave a greenlet unscheduled forever if used incorrectly. Consider using safer classes such as :class:`gevent.event.Event`, :class:`gevent.event.AsyncResult`, or :class:`gevent.queue.Queue`. """ __slots__ = ['hub', 'greenlet', 'value', '_exception'] def __init__(self, hub=None): self.hub = get_hub() if hub is None else hub self.greenlet = None self.value = None self._exception = _NONE def clear(self): self.greenlet = None self.value = None self._exception = _NONE def __str__(self): if self._exception is _NONE: return '<%s greenlet=%s>' % (type(self).__name__, self.greenlet) if self._exception is None: return '<%s greenlet=%s value=%r>' % (type(self).__name__, self.greenlet, self.value) return '<%s greenlet=%s exc_info=%r>' % (type(self).__name__, self.greenlet, self.exc_info) def ready(self): """Return true if and only if it holds a value or an exception""" return self._exception is not _NONE def successful(self): """Return true if and only if it is ready and holds a value""" return self._exception is None @property def exc_info(self): "Holds the exception info passed to :meth:`throw` if :meth:`throw` was called. Otherwise ``None``." 
if self._exception is not _NONE: return self._exception def switch(self, value): """ Switch to the greenlet if one's available. Otherwise store the *value*. .. versionchanged:: 1.3b1 The *value* is no longer optional. """ greenlet = self.greenlet if greenlet is None: self.value = value self._exception = None else: if getcurrent() is not self.hub: # pylint:disable=undefined-variable raise AssertionError("Can only use Waiter.switch method from the Hub greenlet") switch = greenlet.switch try: switch(value) except: # pylint:disable=bare-except self.hub.handle_error(switch, *sys.exc_info()) def switch_args(self, *args): return self.switch(args) def throw(self, *throw_args): """Switch to the greenlet with the exception. If there's no greenlet, store the exception.""" greenlet = self.greenlet if greenlet is None: self._exception = throw_args else: if getcurrent() is not self.hub: # pylint:disable=undefined-variable raise AssertionError("Can only use Waiter.switch method from the Hub greenlet") throw = greenlet.throw try: throw(*throw_args) except: # pylint:disable=bare-except self.hub.handle_error(throw, *sys.exc_info()) def get(self): """If a value/an exception is stored, return/raise it. Otherwise until switch() or throw() is called.""" if self._exception is not _NONE: if self._exception is None: return self.value getcurrent().throw(*self._exception) # pylint:disable=undefined-variable else: if self.greenlet is not None: raise ConcurrentObjectUseError('This Waiter is already used by %r' % (self.greenlet, )) self.greenlet = getcurrent() # pylint:disable=undefined-variable try: return self.hub.switch() finally: self.greenlet = None def __call__(self, source): if source.exception is None: self.switch(source.value) else: self.throw(source.exception) # can also have a debugging version, that wraps the value in a tuple (self, value) in switch() # and unwraps it in wait() thus checking that switch() was indeed called class MultipleWaiter(Waiter): """ An internal extension of Waiter that can be used if multiple objects must be waited on, and there is a chance that in between waits greenlets might be switched out. All greenlets that switch to this waiter will have their value returned. This does not handle exceptions or throw methods. """ __slots__ = ['_values'] def __init__(self, hub=None): Waiter.__init__(self, hub) # we typically expect a relatively small number of these to be outstanding. # since we pop from the left, a deque might be slightly # more efficient, but since we're in the hub we avoid imports if # we can help it to better support monkey-patching, and delaying the import # here can be impractical (see https://github.com/gevent/gevent/issues/652) self._values = [] def switch(self, value): self._values.append(value) Waiter.switch(self, True) def get(self): if not self._values: Waiter.get(self) Waiter.clear(self) return self._values.pop(0) def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent.__waiter') gevent-24.11.1/src/gevent/ares.py000066400000000000000000000005771471441230600165460ustar00rootroot00000000000000"""Backwards compatibility alias for :mod:`gevent.resolver.cares`. .. 
deprecated:: 1.3 Use :mod:`gevent.resolver.cares` """ # pylint:disable=no-name-in-module,import-error from gevent.resolver.cares import * # pylint:disable=wildcard-import,unused-wildcard-import, import gevent.resolver.cares as _cares __all__ = _cares.__all__ # pylint:disable=c-extension-no-member del _cares gevent-24.11.1/src/gevent/backdoor.py000066400000000000000000000207021471441230600173700ustar00rootroot00000000000000# Copyright (c) 2009-2014, gevent contributors # Based on eventlet.backdoor Copyright (c) 2005-2006, Bob Ippolito """ Interactive greenlet-based network console that can be used in any process. The :class:`BackdoorServer` provides a REPL inside a running process. As long as the process is monkey-patched, the ``BackdoorServer`` can coexist with other elements of the process. .. seealso:: :class:`code.InteractiveConsole` """ from __future__ import print_function, absolute_import import sys import socket from code import InteractiveConsole from gevent.greenlet import Greenlet from gevent.hub import getcurrent from gevent.server import StreamServer from gevent.pool import Pool __all__ = [ 'BackdoorServer', ] try: sys.ps1 except AttributeError: sys.ps1 = '>>> ' try: sys.ps2 except AttributeError: sys.ps2 = '... ' class _Greenlet_stdreplace(Greenlet): # A greenlet that replaces sys.std[in/out/err] while running. __slots__ = ( 'stdin', 'stdout', 'prev_stdin', 'prev_stdout', 'prev_stderr', ) def __init__(self, *args, **kwargs): Greenlet.__init__(self, *args, **kwargs) self.stdin = None self.stdout = None self.prev_stdin = None self.prev_stdout = None self.prev_stderr = None def switch(self, *args, **kw): if self.stdin is not None: self.switch_in() Greenlet.switch(self, *args, **kw) def switch_in(self): self.prev_stdin = sys.stdin self.prev_stdout = sys.stdout self.prev_stderr = sys.stderr sys.stdin = self.stdin sys.stdout = self.stdout sys.stderr = self.stdout def switch_out(self): sys.stdin = self.prev_stdin sys.stdout = self.prev_stdout sys.stderr = self.prev_stderr self.prev_stdin = self.prev_stdout = self.prev_stderr = None def throw(self, *args, **kwargs): # pylint:disable=arguments-differ if self.prev_stdin is None and self.stdin is not None: self.switch_in() Greenlet.throw(self, *args, **kwargs) def run(self): try: return Greenlet.run(self) finally: # Make sure to restore the originals. self.switch_out() class BackdoorServer(StreamServer): """ Provide a backdoor to a program for debugging purposes. .. warning:: This backdoor provides no authentication and makes no attempt to limit what remote users can do. Anyone that can access the server can take any action that the running python process can. Thus, while you may bind to any interface, for security purposes it is recommended that you bind to one only accessible to the local machine, e.g., 127.0.0.1/localhost. Basic usage:: from gevent.backdoor import BackdoorServer server = BackdoorServer(('127.0.0.1', 5001), banner="Hello from gevent backdoor!", locals={'foo': "From defined scope!"}) server.serve_forever() In a another terminal, connect with...:: $ telnet 127.0.0.1 5001 Trying 127.0.0.1... Connected to 127.0.0.1. Escape character is '^]'. Hello from gevent backdoor! >> print(foo) From defined scope! .. versionchanged:: 1.2a1 Spawned greenlets are now tracked in a pool and killed when the server is stopped. """ def __init__(self, listener, locals=None, banner=None, **server_args): """ :keyword locals: If given, a dictionary of "builtin" values that will be available at the top-level. 
:keyword banner: If geven, a string that will be printed to each connecting user. """ group = Pool(greenlet_class=_Greenlet_stdreplace) # no limit on number StreamServer.__init__(self, listener, spawn=group, **server_args) _locals = {'__doc__': None, '__name__': '__console__'} if locals: _locals.update(locals) self.locals = _locals self.banner = banner self.stderr = sys.stderr def _create_interactive_locals(self): # Create and return a *new* locals dictionary based on self.locals, # and set any new entries in it. (InteractiveConsole does not # copy its locals value) _locals = self.locals.copy() # __builtins__ may either be the __builtin__ module or # __builtin__.__dict__; in the latter case typing # locals() at the backdoor prompt spews out lots of # useless stuff try: import __builtin__ _locals["__builtins__"] = __builtin__ except ImportError: import builtins # pylint:disable=import-error _locals["builtins"] = builtins _locals['__builtins__'] = builtins return _locals def handle(self, conn, _address): # pylint: disable=method-hidden """ Interact with one remote user. .. versionchanged:: 1.1b2 Each connection gets its own ``locals`` dictionary. Previously they were shared in a potentially unsafe manner. """ conn.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, True) # pylint:disable=no-member raw_file = conn.makefile(mode="r") getcurrent().stdin = _StdIn(conn, raw_file) getcurrent().stdout = _StdErr(conn, raw_file) # Swizzle the inputs getcurrent().switch_in() try: console = InteractiveConsole(self._create_interactive_locals()) # Beginning in 3.6, the console likes to print "now exiting " # but probably our socket is already closed, so this just causes problems. console.interact(banner=self.banner, exitmsg='') # pylint:disable=unexpected-keyword-arg except SystemExit: # raised by quit(); obviously this cannot propagate. pass finally: raw_file.close() conn.close() class _BaseFileLike(object): # Python 2 likes to test for this before writing to stderr. softspace = None encoding = 'utf-8' __slots__ = ( 'sock', 'fobj', 'fileno', ) def __init__(self, sock, stdin): self.sock = sock self.fobj = stdin # On Python 3, The builtin input() function (used by the # default InteractiveConsole) calls fileno() on # sys.stdin. If it's the same as the C stdin's fileno, # and isatty(fd) (C function call) returns true, # and all of that is also true for stdout, then input() will use # PyOS_Readline to get the input. # # On Python 2, the sys.stdin object has to extend the file() # class, and return true from isatty(fileno(sys.stdin.f_fp)) # (where f_fp is a C-level FILE* member) to use PyOS_Readline. # # If that doesn't hold, both versions fall back to reading and writing # using sys.stdout.write() and sys.stdin.readline(). self.fileno = sock.fileno def __getattr__(self, name): return getattr(self.fobj, name) def close(self): pass class _StdErr(_BaseFileLike): """ A file-like object that wraps the result of socket.makefile (composition instead of inheritance lets us work identically under CPython and PyPy). We write directly to the socket, avoiding the buffering that the text-oriented makefile would want to do (otherwise we'd be at the mercy of waiting on a flush() to get called for the remote user to see data); this beats putting the file in binary mode and translating everywhere with a non-default encoding. """ def flush(self): "Does nothing. raw_input() calls this, only on Python 3." 
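# write() accepts either text or bytes: text is encoded with the class-level
# 'utf-8' encoding and sent straight to the socket with sendall(), which is
# what lets the remote user see output immediately instead of waiting for a
# makefile() buffer to flush.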
def write(self, data): if not isinstance(data, bytes): data = data.encode(self.encoding) self.sock.sendall(data) class _StdIn(_BaseFileLike): # Like _StdErr, but for stdin. def readline(self, *a): try: return self.fobj.readline(*a).replace("\r\n", "\n") except UnicodeError: # Typically, under python 3, a ^C on the other end return '' if __name__ == '__main__': if not sys.argv[1:]: print('USAGE: %s PORT [banner]' % sys.argv[0]) else: BackdoorServer(('127.0.0.1', int(sys.argv[1])), banner=(sys.argv[2] if len(sys.argv) > 2 else None), locals={'hello': 'world'}).serve_forever() gevent-24.11.1/src/gevent/baseserver.py000066400000000000000000000403461471441230600177530ustar00rootroot00000000000000"""Base class for implementing servers""" # Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. from __future__ import print_function from __future__ import absolute_import from __future__ import division import sys import _socket import errno from gevent.greenlet import Greenlet from gevent.event import Event from gevent.hub import get_hub from gevent._compat import string_types from gevent._compat import integer_types from gevent._compat import xrange __all__ = ['BaseServer'] # We define a helper function to handle closing the socket in # do_handle; We'd like to bind it to a kwarg to avoid *any* lookups at # all, but that's incompatible with the calling convention of # do_handle. On CPython, this is ~20% faster than creating and calling # a closure and ~10% faster than using a @staticmethod. (In theory, we # could create a closure only once in set_handle, to wrap self._handle, # but this is safer from a backwards compat standpoint.) # we also avoid unpacking the *args tuple when calling/spawning this object # for a tiny improvement (benchmark shows a wash) def _handle_and_close_when_done(handle, close, args_tuple): try: return handle(*args_tuple) finally: close(*args_tuple) class BaseServer(object): """ An abstract base class that implements some common functionality for the servers in gevent. :param listener: Either be an address that the server should bind on or a :class:`gevent.socket.socket` instance that is already bound (and put into listening mode in case of TCP socket). :keyword handle: If given, the request handler. The request handler can be defined in a few ways. Most commonly, subclasses will implement a ``handle`` method as an instance method. Alternatively, a function can be passed as the ``handle`` argument to the constructor. In either case, the handler can later be changed by calling :meth:`set_handle`. When the request handler returns, the socket used for the request will be closed. Therefore, the handler must not return if the socket is still in use (for example, by manually spawned greenlets). :keyword spawn: If provided, is called to create a new greenlet to run the handler. By default, :func:`gevent.spawn` is used (meaning there is no artificial limit on the number of concurrent requests). Possible values for *spawn*: - a :class:`gevent.pool.Pool` instance -- ``handle`` will be executed using :meth:`gevent.pool.Pool.spawn` only if the pool is not full. While it is full, no new connections are accepted; - :func:`gevent.spawn_raw` -- ``handle`` will be executed in a raw greenlet which has a little less overhead then :class:`gevent.Greenlet` instances spawned by default; - ``None`` -- ``handle`` will be executed right away, in the :class:`Hub` greenlet. ``handle`` cannot use any blocking functions as it would mean switching to the :class:`Hub`. 
- an integer -- a shortcut for ``gevent.pool.Pool(integer)`` .. versionchanged:: 1.1a1 When the *handle* function returns from processing a connection, the client socket will be closed. This resolves the non-deterministic closing of the socket, fixing ResourceWarnings under Python 3 and PyPy. .. versionchanged:: 1.5 Now a context manager that returns itself and calls :meth:`stop` on exit. """ # pylint: disable=too-many-instance-attributes,bare-except,broad-except #: the number of seconds to sleep in case there was an error in accept() call #: for consecutive errors the delay will double until it reaches max_delay #: when accept() finally succeeds the delay will be reset to min_delay again min_delay = 0.01 max_delay = 1 #: Sets the maximum number of consecutive accepts that a process may perform on #: a single wake up. High values give higher priority to high connection rates, #: while lower values give higher priority to already established connections. #: Default is 100. #: #: Note that, in case of multiple working processes on the same #: listening socket, it should be set to a lower value. (pywsgi.WSGIServer sets it #: to 1 when ``environ["wsgi.multiprocess"]`` is true) #: #: This is equivalent to libuv's `uv_tcp_simultaneous_accepts #: `_ #: value. Setting the environment variable UV_TCP_SINGLE_ACCEPT to a true value #: (usually 1) changes the default to 1 (in libuv only; this does not affect gevent). max_accept = 100 _spawn = Greenlet.spawn #: the default timeout that we wait for the client connections to close in stop() stop_timeout = 1 fatal_errors = (errno.EBADF, errno.EINVAL, errno.ENOTSOCK) def __init__(self, listener, handle=None, spawn='default'): self._stop_event = Event() self._stop_event.set() self._watcher = None self._timer = None self._handle = None # XXX: FIXME: Subclasses rely on the presence or absence of the # `socket` attribute to determine whether we are open/should be opened. # Instead, have it be None. # XXX: In general, the state management here is confusing. 
Lots of stuff is # deferred until the various ``set_`` methods are called, and it's not documented # when it's safe to call those self.pool = None # can be set from ``spawn``; overrides self.full() try: self.set_listener(listener) self.set_spawn(spawn) self.set_handle(handle) self.delay = self.min_delay self.loop = get_hub().loop if self.max_accept < 1: raise ValueError('max_accept must be positive int: %r' % (self.max_accept, )) except: self.close() raise def __enter__(self): return self def __exit__(self, *args): self.stop() def set_listener(self, listener): if hasattr(listener, 'accept'): if hasattr(listener, 'do_handshake'): raise TypeError('Expected a regular socket, not SSLSocket: %r' % (listener, )) self.family = listener.family self.address = listener.getsockname() self.socket = listener else: self.family, self.address = parse_address(listener) def set_spawn(self, spawn): if spawn == 'default': self.pool = None self._spawn = self._spawn elif hasattr(spawn, 'spawn'): self.pool = spawn self._spawn = spawn.spawn elif isinstance(spawn, integer_types): from gevent.pool import Pool self.pool = Pool(spawn) self._spawn = self.pool.spawn else: self.pool = None self._spawn = spawn if hasattr(self.pool, 'full'): self.full = self.pool.full if self.pool is not None: self.pool._semaphore.rawlink(self._start_accepting_if_started) def set_handle(self, handle): if handle is not None: self.handle = handle if hasattr(self, 'handle'): self._handle = self.handle else: raise TypeError("'handle' must be provided") def _start_accepting_if_started(self, _event=None): if self.started: self.start_accepting() def start_accepting(self): if self._watcher is None: # just stop watcher without creating a new one? self._watcher = self.loop.io(self.socket.fileno(), 1) self._watcher.start(self._do_read) def stop_accepting(self): if self._watcher is not None: self._watcher.stop() self._watcher.close() self._watcher = None if self._timer is not None: self._timer.stop() self._timer.close() self._timer = None def do_handle(self, *args): spawn = self._spawn handle = self._handle close = self.do_close try: if spawn is None: _handle_and_close_when_done(handle, close, args) else: spawn(_handle_and_close_when_done, handle, close, args) except: close(*args) raise def do_close(self, *args): pass def do_read(self): raise NotImplementedError() def _do_read(self): for _ in xrange(self.max_accept): if self.full(): self.stop_accepting() if self.pool is not None: self.pool._semaphore.rawlink(self._start_accepting_if_started) return try: args = self.do_read() self.delay = self.min_delay if not args: return except: self.loop.handle_error(self, *sys.exc_info()) ex = sys.exc_info()[1] if self.is_fatal_error(ex): self.close() sys.stderr.write('ERROR: %s failed with %s\n' % (self, str(ex) or repr(ex))) return if self.delay >= 0: self.stop_accepting() self._timer = self.loop.timer(self.delay) self._timer.start(self._start_accepting_if_started) self.delay = min(self.max_delay, self.delay * 2) break else: try: self.do_handle(*args) except: self.loop.handle_error((args[1:], self), *sys.exc_info()) if self.delay >= 0: self.stop_accepting() self._timer = self.loop.timer(self.delay) self._timer.start(self._start_accepting_if_started) self.delay = min(self.max_delay, self.delay * 2) break def full(self): # pylint: disable=method-hidden # If a Pool is given for to ``set_spawn`` (the *spawn* argument # of the constructor) it will replace this method. 
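# Returning a constant False keeps the accept loop running without limit;
# when a Pool is passed as ``spawn``, set_spawn() rebinds ``self.full`` to
# Pool.full, so a full pool pauses accepting until a handler finishes.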
return False def __repr__(self): return '<%s at %s %s>' % (type(self).__name__, hex(id(self)), self._formatinfo()) def __str__(self): return '<%s %s>' % (type(self).__name__, self._formatinfo()) def _formatinfo(self): if hasattr(self, 'socket'): try: fileno = self.socket.fileno() except Exception as ex: fileno = str(ex) result = 'fileno=%s ' % fileno else: result = '' try: if isinstance(self.address, tuple) and len(self.address) == 2: result += 'address=%s:%s' % self.address else: result += 'address=%s' % (self.address, ) except Exception as ex: result += str(ex) or '' handle = self.__dict__.get('handle') if handle is not None: fself = getattr(handle, '__self__', None) try: if fself is self: # Checks the __self__ of the handle in case it is a bound # method of self to prevent recursively defined reprs. handle_repr = '' % ( self.__class__.__name__, handle.__name__, ) else: handle_repr = repr(handle) result += ' handle=' + handle_repr except Exception as ex: result += str(ex) or '' return result @property def server_host(self): """IP address that the server is bound to (string).""" if isinstance(self.address, tuple): return self.address[0] @property def server_port(self): """Port that the server is bound to (an integer).""" if isinstance(self.address, tuple): return self.address[1] def init_socket(self): """ If the user initialized the server with an address rather than socket, then this function must create a socket, bind it, and put it into listening mode. It is not supposed to be called by the user, it is called by :meth:`start` before starting the accept loop. """ @property def started(self): return not self._stop_event.is_set() def start(self): """Start accepting the connections. If an address was provided in the constructor, then also create a socket, bind it and put it into the listening mode. """ self.init_socket() self._stop_event.clear() try: self.start_accepting() except: self.close() raise def close(self): """Close the listener socket and stop accepting.""" self._stop_event.set() try: self.stop_accepting() finally: try: self.socket.close() except Exception: pass finally: self.__dict__.pop('socket', None) self.__dict__.pop('handle', None) self.__dict__.pop('_handle', None) self.__dict__.pop('_spawn', None) self.__dict__.pop('full', None) if self.pool is not None: self.pool._semaphore.unlink(self._start_accepting_if_started) # If the pool's semaphore had a notifier already started, # there's a reference cycle we're a part of # (self->pool->semaphere-hub callback->semaphore) # But we can't destroy self.pool, because self.stop() # calls this method, and then wants to join self.pool() @property def closed(self): return not hasattr(self, 'socket') def stop(self, timeout=None): """ Stop accepting the connections and close the listening socket. If the server uses a pool to spawn the requests, then :meth:`stop` also waits for all the handlers to exit. If there are still handlers executing after *timeout* has expired (default 1 second, :attr:`stop_timeout`), then the currently running handlers in the pool are killed. If the server does not use a pool, then this merely stops accepting connections; any spawned greenlets that are handling requests continue running until they naturally complete. 
""" self.close() if timeout is None: timeout = self.stop_timeout if self.pool: self.pool.join(timeout=timeout) self.pool.kill(block=True, timeout=1) def serve_forever(self, stop_timeout=None): """Start the server if it hasn't been already started and wait until it's stopped.""" # add test that serve_forever exists on stop() if not self.started: self.start() try: self._stop_event.wait() finally: Greenlet.spawn(self.stop, timeout=stop_timeout).join() def is_fatal_error(self, ex): return isinstance(ex, _socket.error) and ex.args[0] in self.fatal_errors def _extract_family(host): if host.startswith('[') and host.endswith(']'): host = host[1:-1] return _socket.AF_INET6, host return _socket.AF_INET, host def _parse_address(address): if isinstance(address, tuple): if not address[0] or ':' in address[0]: return _socket.AF_INET6, address return _socket.AF_INET, address if ((isinstance(address, string_types) and ':' not in address) or isinstance(address, integer_types)): # noqa (pep8 E129) # Just a port return _socket.AF_INET6, ('', int(address)) if not isinstance(address, string_types): raise TypeError('Expected tuple or string, got %s' % type(address)) host, port = address.rsplit(':', 1) family, host = _extract_family(host) if host == '*': host = '' return family, (host, int(port)) def parse_address(address): try: return _parse_address(address) except ValueError as ex: # pylint:disable=try-except-raise raise ValueError('Failed to parse address %r: %s' % (address, ex)) gevent-24.11.1/src/gevent/builtins.py000066400000000000000000000100141471441230600174300ustar00rootroot00000000000000# Copyright (c) 2015 gevent contributors. See LICENSE for details. """gevent friendly implementations of builtin functions.""" from __future__ import absolute_import import weakref from gevent.lock import RLock from gevent._compat import imp_acquire_lock from gevent._compat import imp_release_lock import builtins as __gbuiltins__ _allowed_module_name_types = (str,) __target__ = 'builtins' _import = __gbuiltins__.__import__ # We need to protect imports both across threads and across greenlets. # And the order matters. Note that under 3.4, the global import lock # and imp module are deprecated. It seems that in all Py3 versions, a # module lock is used such that this fix is not necessary. # We emulate the per-module locking system under Python 2 in order to # avoid issues acquiring locks in multiple-level-deep imports # that attempt to use the gevent blocking API at runtime; using one lock # could lead to a LoopExit error as a greenlet attempts to block on it while # it's already held by the main greenlet (issue #798). # We base this approach on a simplification of what `importlib._bootstrap` # does; notably, we don't check for deadlocks _g_import_locks = {} # name -> wref of RLock __lock_imports = True def __module_lock(name): # Return the lock for the given module, creating it if necessary. # It will be removed when no longer needed. # Nothing in this function yields, so we're multi-greenlet safe # (But not multi-threading safe.) # XXX: What about on PyPy, where the GC is asynchronous (not ref-counting)? # (Does it stop-the-world first?) 
lock = None try: lock = _g_import_locks[name]() except KeyError: pass if lock is None: lock = RLock() def cb(_): # We've seen a KeyError on PyPy on RPi2 _g_import_locks.pop(name, None) _g_import_locks[name] = weakref.ref(lock, cb) return lock def __import__(*args, **kwargs): """ __import__(name, globals=None, locals=None, fromlist=(), level=0) -> object Normally python protects imports against concurrency by doing some locking at the C level (at least, it does that in CPython). This function just wraps the normal __import__ functionality in a recursive lock, ensuring that we're protected against greenlet import concurrency as well. """ if args and not issubclass(type(args[0]), _allowed_module_name_types): # if a builtin has been acquired as a bound instance method, # python knows not to pass 'self' when the method is called. # No such protection exists for monkey-patched builtins, # however, so this is necessary. args = args[1:] if not __lock_imports: return _import(*args, **kwargs) module_lock = __module_lock(args[0]) # Get a lock for the module name imp_acquire_lock() try: module_lock.acquire() try: result = _import(*args, **kwargs) finally: module_lock.release() finally: imp_release_lock() return result def _unlock_imports(): """ Internal function, called when gevent needs to perform imports lazily, but does not know the state of the system. It may be impossible to take the import lock because there are no other running greenlets, for example. This causes a monkey-patched __import__ to avoid taking any locks. until the corresponding call to lock_imports. This should only be done for limited amounts of time and when the set of imports is statically known to be "safe". """ global __lock_imports # This could easily become a list that we push/pop from or an integer # we increment if we need to do this recursively, but we shouldn't get # that complex. __lock_imports = False def _lock_imports(): global __lock_imports __lock_imports = True __implements__ = [] __import__ = _import __all__ = __implements__ from gevent._util import copy_globals __imports__ = copy_globals(__gbuiltins__, globals(), names_to_ignore=__implements__) gevent-24.11.1/src/gevent/contextvars.py000066400000000000000000000231561471441230600201720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Cooperative ``contextvars`` module. This module was added to Python 3.7. The gevent version is available on all supported versions of Python. However, see an important note about gevent 20.9. Context variables are like greenlet-local variables, just more inconvenient to use. They were designed to work around limitations in :mod:`asyncio` and are rarely needed by greenlet-based code. The primary difference is that snapshots of the state of all context variables in a given greenlet can be taken, and later restored for execution; modifications to context variables are "scoped" to the duration that a particular context is active. (This state-restoration support is rarely useful for greenlets because instead of always running "tasks" sequentially within a single thread like `asyncio` does, greenlet-based code usually spawns new greenlets to handle each task.) The gevent implementation is based on the Python reference implementation from :pep:`567` and doesn't have much optimization. In particular, setting context values isn't constant time. .. versionadded:: 1.5a3 .. versionchanged:: 20.9.0 On Python 3.7 and above, this module is no longer monkey-patched in place of the standard library version. 
gevent depends on greenlet 0.4.17 which includes support for context variables. This means that any number of greenlets can be running any number of asyncio tasks each with their own context variables. This module is only greenlet aware, not asyncio task aware, so its use is not recommended on Python 3.7 and above. On previous versions of Python, this module continues to be a solution for backporting code. It is also available if you wish to use the contextvar API in a strictly greenlet-local manner. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function __all__ = [ 'ContextVar', 'Context', 'copy_context', 'Token', ] try: from collections.abc import Mapping except ImportError: from collections import Mapping # pylint:disable=deprecated-class from gevent._util import _NONE from gevent.local import local __stdlib_expected__ = __all__ __implements__ = __stdlib_expected__ # In the reference implementation, the interpreter level OS thread state # is modified to contain a pointer to the current context. Obviously we can't # touch that here because we're not tied to CPython's internals; plus, of course, # we want to operate with greenlets, not OS threads. So we use a greenlet-local object # to store the active context. class _ContextState(local): def __init__(self): self.context = Context() def _not_base_type(cls): # This is not given in the PEP but is tested in test_context. # Assign this method to __init_subclass__ in each type that can't # be subclassed. (This only works in 3.6+, but context vars are only in # 3.7+) raise TypeError("not an acceptable base type") class _ContextData(object): """ A copy-on-write immutable mapping from ContextVar keys to arbitrary values. Setting values requires a copy, making it O(n), not O(1). """ # In theory, the HAMT used by the stdlib contextvars module could # be used: It's often available at _testcapi.hamt() (see # test_context). We'd need to be sure to add a correct __hash__ # method to ContextVar to make that work well. (See # Python/context.c:contextvar_generate_hash.) __slots__ = ( '_mapping', ) def __init__(self): self._mapping = {} def __getitem__(self, key): return self._mapping[key] def __contains__(self, key): return key in self._mapping def __len__(self): return len(self._mapping) def __iter__(self): return iter(self._mapping) def set(self, key, value): copy = _ContextData() copy._mapping = self._mapping.copy() copy._mapping[key] = value return copy def delete(self, key): copy = _ContextData() copy._mapping = self._mapping.copy() del copy._mapping[key] return copy class ContextVar(object): """ Implementation of :class:`contextvars.ContextVar`. """ __slots__ = ( '_name', '_default', ) def __init__(self, name, default=_NONE): self._name = name self._default = default __init_subclass__ = classmethod(_not_base_type) @classmethod def __class_getitem__(cls, _): # For typing support: ContextVar[str]. # Not in the PEP. # sigh. 
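# Returning the class unchanged is enough to make subscription such as
# ContextVar[str] legal in annotations; no parameterized type is created.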
return cls @property def name(self): return self._name def get(self, default=_NONE): context = _context_state.context try: return context[self] except KeyError: pass if default is not _NONE: return default if self._default is not _NONE: return self._default raise LookupError def set(self, value): context = _context_state.context return context._set_value(self, value) def reset(self, token): token._reset(self) def __repr__(self): # This is not captured in the PEP but is tested by test_context return '<%s.%s name=%r default=%r at 0x%x>' % ( type(self).__module__, type(self).__name__, self._name, self._default, id(self) ) class Token(object): """ Opaque implementation of :class:`contextvars.Token`. """ MISSING = _NONE __slots__ = ( '_context', '_var', '_old_value', '_used', ) def __init__(self, context, var, old_value): self._context = context self._var = var self._old_value = old_value self._used = False __init_subclass__ = classmethod(_not_base_type) @property def var(self): """ A read-only attribute pointing to the variable that created the token """ return self._var @property def old_value(self): """ A read-only attribute set to the value the variable had before the ``set()`` call, or to :attr:`MISSING` if the variable wasn't set before. """ return self._old_value def _reset(self, var): if self._used: raise RuntimeError("Token has already been used once") if self._var is not var: raise ValueError("Token was created by a different ContextVar") if self._context is not _context_state.context: raise ValueError("Token was created in a different Context") self._used = True if self._old_value is self.MISSING: self._context._delete(var) else: self._context._reset_value(var, self._old_value) def __repr__(self): # This is not captured in the PEP but is tested by test_context return '<%s.%s%s var=%r at 0x%x>' % ( type(self).__module__, type(self).__name__, ' used' if self._used else '', self._var, id(self), ) class Context(Mapping): """ Implementation of :class:`contextvars.Context` """ __slots__ = ( '_data', '_prev_context', ) def __init__(self): """ Creates an empty context. """ self._data = _ContextData() self._prev_context = None __init_subclass__ = classmethod(_not_base_type) def run(self, function, *args, **kwargs): if self._prev_context is not None: raise RuntimeError( "Cannot enter context; %s is already entered" % (self,) ) self._prev_context = _context_state.context try: _context_state.context = self return function(*args, **kwargs) finally: _context_state.context = self._prev_context self._prev_context = None def copy(self): """ Return a shallow copy. """ result = Context() result._data = self._data return result ### # Operations used by ContextVar and Token ### def _set_value(self, var, value): try: old_value = self._data[var] except KeyError: old_value = Token.MISSING self._data = self._data.set(var, value) return Token(self, var, old_value) def _delete(self, var): self._data = self._data.delete(var) def _reset_value(self, var, old_value): self._data = self._data.set(var, old_value) # Note that all Mapping methods, including Context.__getitem__ and # Context.get, ignore default values for context variables (i.e. # ContextVar.default). This means that for a variable var that was # created with a default value and was not set in the context: # # - context[var] raises a KeyError, # - var in context returns False, # - the variable isn't included in context.items(), etc. # Checking the type of key isn't part of the PEP but is tested by # test_context.py.
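# Illustrative example of the above (not from the PEP): a variable created as
# ``v = ContextVar('v', default=1)`` and never set reports ``v.get() == 1``,
# while ``copy_context()[v]`` raises KeyError and ``v in copy_context()`` is False.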
@staticmethod def __check_key(key): if type(key) is not ContextVar: # pylint:disable=unidiomatic-typecheck raise TypeError("ContextVar key was expected") def __getitem__(self, key): self.__check_key(key) return self._data[key] def __contains__(self, key): self.__check_key(key) return key in self._data def __len__(self): return len(self._data) def __iter__(self): return iter(self._data) def copy_context(): """ Return a shallow copy of the current context. """ return _context_state.context.copy() _context_state = _ContextState() gevent-24.11.1/src/gevent/core.py000066400000000000000000000007371471441230600165420ustar00rootroot00000000000000# Copyright (c) 2009-2015 Denis Bilenko and gevent contributors. See LICENSE for details. """ Deprecated; this does not reflect all the possible options and its interface varies. .. versionchanged:: 1.3a2 Deprecated. """ from __future__ import absolute_import import sys from gevent._config import config from gevent._util import copy_globals _core = sys.modules[config.loop.__module__] copy_globals(_core, globals()) __all__ = _core.__all__ # pylint:disable=no-member gevent-24.11.1/src/gevent/event.py000066400000000000000000000353471471441230600167400ustar00rootroot00000000000000# Copyright (c) 2009-2016 Denis Bilenko, gevent contributors. See LICENSE for details. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False,infer_types=True """Basic synchronization primitives: Event and AsyncResult""" from __future__ import print_function from gevent._util import _NONE from gevent._compat import reraise from gevent._tblib import dump_traceback, load_traceback from gevent.timeout import Timeout __all__ = [ 'Event', 'AsyncResult', ] def _get_linkable(): x = __import__('gevent._abstract_linkable') return x._abstract_linkable.AbstractLinkable locals()['AbstractLinkable'] = _get_linkable() del _get_linkable class Event(AbstractLinkable): # pylint:disable=undefined-variable """ A synchronization primitive that allows one greenlet to wake up one or more others. It has the same interface as :class:`threading.Event` but works across greenlets. .. important:: This object is for communicating among greenlets within the same thread *only*! Do not try to use it to communicate across threads. An event object manages an internal flag that can be set to true with the :meth:`set` method and reset to false with the :meth:`clear` method. The :meth:`wait` method blocks until the flag is true; as soon as the flag is set to true, all greenlets that are currently blocked in a call to :meth:`wait` will be scheduled to awaken. Note that the flag may be cleared and set many times before any individual greenlet runs; all the greenlet can know for sure is that the flag was set *at least once* while it was waiting. If the greenlet cares whether the flag is still set, it must check with :meth:`ready` and possibly call back into :meth:`wait` again. .. note:: The exact order and timing in which waiting greenlets are awakened is not determined. Once the event is set, other greenlets may run before any waiting greenlets are awakened. While the code here will awaken greenlets in the order in which they waited, each such greenlet that runs may in turn cause other greenlets to run. These details may change in the future. .. versionchanged:: 1.5a3 Waiting greenlets are now awakened in the order in which they waited. .. versionchanged:: 1.5a3 The low-level ``rawlink`` method (most users won't use this) now automatically unlinks waiters before calling them. .. 
versionchanged:: 20.5.1 Callers to ``wait`` that find the event already set will now run after any other waiters that had to block. See :issue:`1520`. """ __slots__ = ('_flag',) def __init__(self): super(Event, self).__init__() self._flag = False def __str__(self): return '<%s.%s %s _links[%s]>' % ( self.__class__.__module__, self.__class__.__name__, 'set' if self._flag else 'clear', self.linkcount() ) def is_set(self): """Return true if and only if the internal flag is true.""" return self._flag def isSet(self): # makes it a better drop-in replacement for threading.Event return self._flag def ready(self): # makes it compatible with AsyncResult and Greenlet (for # example in wait()) return self._flag def set(self): """ Set the internal flag to true. All greenlets waiting for it to become true are awakened in some order at some time in the future. Greenlets that call :meth:`wait` once the flag is true will not block at all (until :meth:`clear` is called). """ self._flag = True self._check_and_notify() def clear(self): """ Reset the internal flag to false. Subsequently, threads calling :meth:`wait` will block until :meth:`set` is called to set the internal flag to true again. """ self._flag = False def _wait_return_value(self, waited, wait_success): # To avoid the race condition outlined in http://bugs.python.org/issue13502, # if we had to wait, then we need to return whether or not # the condition got changed. Otherwise we simply echo # the current state of the flag (which should be true) if not waited: flag = self._flag assert flag, "if we didn't wait we should already be set" return flag return wait_success def wait(self, timeout=None): """ Block until this object is :meth:`ready`. If the internal flag is true on entry, return immediately. Otherwise, block until another thread (greenlet) calls :meth:`set` to set the flag to true, or until the optional *timeout* expires. When the *timeout* argument is present and not ``None``, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). :return: This method returns true if and only if the internal flag has been set to true, either before the wait call or after the wait starts, so it will always return ``True`` except if a timeout is given and the operation times out. .. versionchanged:: 1.1 The return value represents the flag during the elapsed wait, not just after it elapses. This solves a race condition if one greenlet sets and then clears the flag without switching, while other greenlets are waiting. When the waiters wake up, this will return True; previously, they would still wake up, but the return value would be False. This is most noticeable when the *timeout* is present. """ return self._wait(timeout) def _reset_internal_locks(self): # pragma: no cover # for compatibility with threading.Event # Exception AttributeError: AttributeError("'Event' object has no attribute '_reset_internal_locks'",) # in ignored pass class AsyncResult(AbstractLinkable): # pylint:disable=undefined-variable """ A one-time event that stores a value or an exception. Like :class:`Event` it wakes up all the waiters when :meth:`set` or :meth:`set_exception` is called. Waiters may receive the passed value or exception by calling :meth:`get` instead of :meth:`wait`. An :class:`AsyncResult` instance cannot be reset. .. important:: This object is for communicating among greenlets within the same thread *only*! Do not try to use it to communicate across threads. To pass a value call :meth:`set`. 
Calls to :meth:`get` (those that are currently blocking as well as those made in the future) will return the value:: >>> from gevent.event import AsyncResult >>> result = AsyncResult() >>> result.set(100) >>> result.get() 100 To pass an exception call :meth:`set_exception`. This will cause :meth:`get` to raise that exception:: >>> result = AsyncResult() >>> result.set_exception(RuntimeError('failure')) >>> result.get() Traceback (most recent call last): ... RuntimeError: failure :class:`AsyncResult` implements :meth:`__call__` and thus can be used as :meth:`link` target:: >>> import gevent >>> result = AsyncResult() >>> gevent.spawn(lambda : 1/0).link(result) >>> try: ... result.get() ... except ZeroDivisionError: ... print('ZeroDivisionError') ZeroDivisionError .. note:: The order and timing in which waiting greenlets are awakened is not determined. As an implementation note, in gevent 1.1 and 1.0, waiting greenlets are awakened in a undetermined order sometime *after* the current greenlet yields to the event loop. Other greenlets (those not waiting to be awakened) may run between the current greenlet yielding and the waiting greenlets being awakened. These details may change in the future. .. versionchanged:: 1.1 The exact order in which waiting greenlets are awakened is not the same as in 1.0. .. versionchanged:: 1.1 Callbacks :meth:`linked ` to this object are required to be hashable, and duplicates are merged. .. versionchanged:: 1.5a3 Waiting greenlets are now awakened in the order in which they waited. .. versionchanged:: 1.5a3 The low-level ``rawlink`` method (most users won't use this) now automatically unlinks waiters before calling them. """ __slots__ = ('_value', '_exc_info', '_imap_task_index') def __init__(self): super(AsyncResult, self).__init__() self._value = _NONE self._exc_info = () @property def _exception(self): return self._exc_info[1] if self._exc_info else _NONE @property def value(self): """ Holds the value passed to :meth:`set` if :meth:`set` was called. Otherwise, ``None`` """ return self._value if self._value is not _NONE else None @property def exc_info(self): """ The three-tuple of exception information if :meth:`set_exception` was called. """ if self._exc_info: return (self._exc_info[0], self._exc_info[1], load_traceback(self._exc_info[2])) return () def __str__(self): result = '<%s ' % (self.__class__.__name__, ) if self.value is not None or self._exception is not _NONE: result += 'value=%r ' % self.value if self._exception is not None and self._exception is not _NONE: result += 'exception=%r ' % self._exception if self._exception is _NONE: result += 'unset ' return result + ' _links[%s]>' % self.linkcount() def ready(self): """Return true if and only if it holds a value or an exception""" return self._exc_info or self._value is not _NONE def successful(self): """Return true if and only if it is ready and holds a value""" return self._value is not _NONE @property def exception(self): """Holds the exception instance passed to :meth:`set_exception` if :meth:`set_exception` was called. Otherwise ``None``.""" if self._exc_info: return self._exc_info[1] def set(self, value=None): """Store the value and wake up any waiters. All greenlets blocking on :meth:`get` or :meth:`wait` are awakened. Subsequent calls to :meth:`wait` and :meth:`get` will not block at all. """ self._value = value self._check_and_notify() def set_exception(self, exception, exc_info=None): """Store the exception and wake up any waiters. 
All greenlets blocking on :meth:`get` or :meth:`wait` are awakened. Subsequent calls to :meth:`wait` and :meth:`get` will not block at all. :keyword tuple exc_info: If given, a standard three-tuple of type, value, :class:`traceback` as returned by :func:`sys.exc_info`. This will be used when the exception is re-raised to propagate the correct traceback. """ if exc_info: self._exc_info = (exc_info[0], exc_info[1], dump_traceback(exc_info[2])) else: self._exc_info = (type(exception), exception, dump_traceback(None)) self._check_and_notify() def _raise_exception(self): reraise(*self.exc_info) def get(self, block=True, timeout=None): """Return the stored value or raise the exception. If this instance already holds a value or an exception, return or raise it immediately. Otherwise, block until another greenlet calls :meth:`set` or :meth:`set_exception` or until the optional timeout occurs. When the *timeout* argument is present and not ``None``, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). If the *timeout* elapses, the *Timeout* exception will be raised. :keyword bool block: If set to ``False`` and this instance is not ready, immediately raise a :class:`Timeout` exception. """ if self._value is not _NONE: return self._value if self._exc_info: return self._raise_exception() if not block: # Not ready and not blocking, so immediately timeout raise Timeout() self._capture_hub(True) # Wait, raising a timeout that elapses self._wait_core(timeout, ()) # by definition we are now ready return self.get(block=False) def get_nowait(self): """ Return the value or raise the exception without blocking. If this object is not yet :meth:`ready `, raise :class:`gevent.Timeout` immediately. """ return self.get(block=False) def _wait_return_value(self, waited, wait_success): # pylint:disable=unused-argument # Always return the value. Since this is a one-shot event, # no race condition should reset it. return self.value def wait(self, timeout=None): """Block until the instance is ready. If this instance already holds a value, it is returned immediately. If this instance already holds an exception, ``None`` is returned immediately. Otherwise, block until another greenlet calls :meth:`set` or :meth:`set_exception` (at which point either the value or ``None`` will be returned, respectively), or until the optional timeout expires (at which point ``None`` will also be returned). When the *timeout* argument is present and not ``None``, it should be a floating point number specifying a timeout for the operation in seconds (or fractions thereof). .. note:: If a timeout is given and expires, ``None`` will be returned (no timeout exception will be raised). """ return self._wait(timeout) # link protocol def __call__(self, source): if source.successful(): self.set(source.value) else: self.set_exception(source.exception, getattr(source, 'exc_info', None)) # Methods to make us more like concurrent.futures.Future def result(self, timeout=None): return self.get(timeout=timeout) set_result = set def done(self): return self.ready() # we don't support cancelling def cancel(self): return False def cancelled(self): return False # exception is a method, we use it as a property from gevent._util import import_c_accel import_c_accel(globals(), 'gevent._event') gevent-24.11.1/src/gevent/events.py000066400000000000000000000414361471441230600171170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 gevent. See LICENSE for details. 
""" Publish/subscribe event infrastructure. When certain "interesting" things happen during the lifetime of the process, gevent will "publish" an event (an object). That event is delivered to interested "subscribers" (functions that take one parameter, the event object). Higher level frameworks may take this foundation and build richer models on it. :mod:`zope.event` will be used to provide the functionality of `notify` and `subscribers`. See :mod:`zope.event.classhandler` for a simple class-based approach to subscribing to a filtered list of events, and see `zope.component `_ for a much higher-level, flexible system. If you are using one of these systems, you generally will not want to directly modify `subscribers`. .. versionadded:: 1.3b1 .. versionchanged:: 23.7.0 Now uses :mod:`importlib.metadata` instead of :mod:`pkg_resources` to locate entry points. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function __all__ = [ 'subscribers', # monitor thread 'IEventLoopBlocked', 'EventLoopBlocked', 'IMemoryUsageThresholdExceeded', 'MemoryUsageThresholdExceeded', 'IMemoryUsageUnderThreshold', 'MemoryUsageUnderThreshold', # Hub 'IPeriodicMonitorThread', 'IPeriodicMonitorThreadStartedEvent', 'PeriodicMonitorThreadStartedEvent', # monkey 'IGeventPatchEvent', 'GeventPatchEvent', 'IGeventWillPatchEvent', 'DoNotPatch', 'GeventWillPatchEvent', 'IGeventDidPatchEvent', 'IGeventWillPatchModuleEvent', 'GeventWillPatchModuleEvent', 'IGeventDidPatchModuleEvent', 'GeventDidPatchModuleEvent', 'IGeventWillPatchAllEvent', 'GeventWillPatchAllEvent', 'IGeventDidPatchBuiltinModulesEvent', 'GeventDidPatchBuiltinModulesEvent', 'IGeventDidPatchAllEvent', 'GeventDidPatchAllEvent', ] # pylint:disable=no-self-argument,inherit-non-class import platform from zope.interface import Interface from zope.interface import Attribute from zope.interface import implementer from zope.event import subscribers from zope.event import notify #: Applications may register for notification of events by appending a #: callable to the ``subscribers`` list. #: #: Each subscriber takes a single argument, which is the event object #: being published. #: #: Exceptions raised by subscribers will be propagated *without* running #: any remaining subscribers. #: #: This is an alias for `zope.event.subscribers`; prefer to use #: that attribute directly. subscribers = subscribers try: # Cache the platform info. pkg_resources uses # platform.machine() for environment markers, and # platform.machine() wants to call os.popen('uname'), which is # broken on Py2 when the gevent child signal handler is # installed. (see test__monkey_sigchild_2.py) platform.uname() except: # pylint:disable=bare-except pass finally: del platform def notify_and_call_entry_points(event): notify(event) from importlib import metadata import sys # This used to use the old ``pkg_resources.iter_entry_points(group,name=None)`` # API, passing it just the first argument, ``group=event.ENTRY_POINT_NAME``. # In other words, we don't care about the ``name``. if sys.version_info[:2] >= (3, 10): # pylint:disable-next=unexpected-keyword-arg # The only thing you can do with this is iterate it to get # EntryPoint objects. (e.g., accessing by index raises a warning) entry_points = metadata.entry_points(group=event.ENTRY_POINT_NAME) else: # Prior to 3.10, we have to do this all manually (keyword selection # was introduced in 3.10; in 3.9 and before, entry_points returns a plain # ``dict``). 
Using it like this is deprecated in 3.10, so to avoid warnings # we have to write it twice. # # Prior to 3.9, there is no ``.module`` attribute, so if we # needed that we'd have to look at the complete ``.value`` # attribute. ep_dict = metadata.entry_points() __traceback_info__ = ep_dict # On Python 3.8, we can get duplicate EntryPoint objects; it is unclear # why. Drop them into a set to make sure we only get one. # # Running a more recent pylint flags the non-existence of ``get`` # pylint:disable=no-member entry_points = set( ep for ep in ep_dict.get(event.ENTRY_POINT_NAME, ()) ) for plugin in entry_points: subscriber = plugin.load() subscriber(event) class IPeriodicMonitorThread(Interface): """ The contract for the periodic monitoring thread that is started by the hub. """ def add_monitoring_function(function, period): """ Schedule the *function* to be called approximately every *period* fractional seconds. The *function* receives one argument, the hub being monitored. It is called in the monitoring thread, *not* the hub thread. It **must not** attempt to use the gevent asynchronous API. If the *function* is already a monitoring function, then its *period* will be updated for future runs. If the *period* is ``None``, then the function will be removed. A *period* less than or equal to zero is not allowed. """ class IPeriodicMonitorThreadStartedEvent(Interface): """ The event emitted when a hub starts a periodic monitoring thread. You can use this event to add additional monitoring functions. """ monitor = Attribute("The instance of `IPeriodicMonitorThread` that was started.") @implementer(IPeriodicMonitorThreadStartedEvent) class PeriodicMonitorThreadStartedEvent(object): """ The implementation of :class:`IPeriodicMonitorThreadStartedEvent`. .. versionchanged:: 24.11.1 Now actually implements the promised interface. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.hub.periodic_monitor_thread_started' def __init__(self, monitor): self.monitor = monitor class IEventLoopBlocked(Interface): """ The event emitted when the event loop is blocked. This event is emitted in the monitor thread. .. versionchanged:: 24.11.1 Add the *hub* attribute. """ greenlet = Attribute("The greenlet that appeared to be blocking the loop.") blocking_time = Attribute("The approximate time in seconds the loop has been blocked.") info = Attribute("A list of string lines providing extra info. You may modify this list.") hub = Attribute("""If not None, the hub being blocked.""") @implementer(IEventLoopBlocked) class EventLoopBlocked(object): """ The event emitted when the event loop is blocked. Implements `IEventLoopBlocked`. """ def __init__(self, greenlet, blocking_time, info, *, hub=None): self.greenlet = greenlet self.blocking_time = blocking_time self.info = info self.hub = hub class IMemoryUsageThresholdExceeded(Interface): """ The event emitted when the memory usage threshold is exceeded. This event is emitted only while memory continues to grow above the threshold. Only if the condition is corrected (memory usage drops) will the event be emitted in the future. This event is emitted in the monitor thread.
""" mem_usage = Attribute("The current process memory usage, in bytes.") max_allowed = Attribute("The maximum allowed memory usage, in bytes.") memory_info = Attribute("The tuple of memory usage stats returned by psutil.") class _AbstractMemoryEvent(object): def __init__(self, mem_usage, max_allowed, memory_info): self.mem_usage = mem_usage self.max_allowed = max_allowed self.memory_info = memory_info def __repr__(self): return "<%s used=%d max=%d details=%r>" % ( self.__class__.__name__, self.mem_usage, self.max_allowed, self.memory_info, ) @implementer(IMemoryUsageThresholdExceeded) class MemoryUsageThresholdExceeded(_AbstractMemoryEvent): """ Implementation of `IMemoryUsageThresholdExceeded`. """ class IMemoryUsageUnderThreshold(Interface): """ The event emitted when the memory usage drops below the threshold after having previously been above it. This event is emitted only the first time memory usage is detected to be below the threshold after having previously been above it. If memory usage climbs again, an `IMemoryUsageThresholdExceeded` event will be broadcast, and then this event could be broadcast again. This event is emitted in the monitor thread. """ mem_usage = Attribute("The current process memory usage, in bytes.") max_allowed = Attribute("The maximum allowed memory usage, in bytes.") max_memory_usage = Attribute("The memory usage that caused the previous " "IMemoryUsageThresholdExceeded event.") memory_info = Attribute("The tuple of memory usage stats returned by psutil.") @implementer(IMemoryUsageUnderThreshold) class MemoryUsageUnderThreshold(_AbstractMemoryEvent): """ Implementation of `IMemoryUsageUnderThreshold`. """ def __init__(self, mem_usage, max_allowed, memory_info, max_usage): super(MemoryUsageUnderThreshold, self).__init__(mem_usage, max_allowed, memory_info) self.max_memory_usage = max_usage class IGeventPatchEvent(Interface): """ The root for all monkey-patch events gevent emits. """ source = Attribute("The source object containing the patches.") target = Attribute("The destination object to be patched.") @implementer(IGeventPatchEvent) class GeventPatchEvent(object): """ Implementation of `IGeventPatchEvent`. """ def __init__(self, source, target): self.source = source self.target = target def __repr__(self): return '<%s source=%r target=%r at %x>' % (self.__class__.__name__, self.source, self.target, id(self)) class IGeventWillPatchEvent(IGeventPatchEvent): """ An event emitted *before* gevent monkey-patches something. If a subscriber raises `DoNotPatch`, then patching this particular item will not take place. """ class DoNotPatch(BaseException): """ Subscribers to will-patch events can raise instances of this class to tell gevent not to patch that particular item. """ @implementer(IGeventWillPatchEvent) class GeventWillPatchEvent(GeventPatchEvent): """ Implementation of `IGeventWillPatchEvent`. """ class IGeventDidPatchEvent(IGeventPatchEvent): """ An event emitted *after* gevent has patched something. """ @implementer(IGeventDidPatchEvent) class GeventDidPatchEvent(GeventPatchEvent): """ Implementation of `IGeventDidPatchEvent`. """ class IGeventWillPatchModuleEvent(IGeventWillPatchEvent): """ An event emitted *before* gevent begins patching a specific module. Both *source* and *target* attributes are module objects. """ module_name = Attribute("The name of the module being patched. " "This is the same as ``target.__name__``.") target_item_names = Attribute("The list of item names to patch.
" "This can be modified in place with caution.") @implementer(IGeventWillPatchModuleEvent) class GeventWillPatchModuleEvent(GeventWillPatchEvent): """ Implementation of `IGeventWillPatchModuleEvent`. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.monkey.will_patch_module' def __init__(self, module_name, source, target, items): super(GeventWillPatchModuleEvent, self).__init__(source, target) self.module_name = module_name self.target_item_names = items class IGeventDidPatchModuleEvent(IGeventDidPatchEvent): """ An event emitted *after* gevent has completed patching a specific module. """ module_name = Attribute("The name of the module being patched. " "This is the same as ``target.__name__``.") @implementer(IGeventDidPatchModuleEvent) class GeventDidPatchModuleEvent(GeventDidPatchEvent): """ Implementation of `IGeventDidPatchModuleEvent`. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.monkey.did_patch_module' def __init__(self, module_name, source, target): super(GeventDidPatchModuleEvent, self).__init__(source, target) self.module_name = module_name # TODO: Maybe it would be useful for the the module patch events # to have an attribute telling if they're being done during patch_all? class IGeventWillPatchAllEvent(IGeventWillPatchEvent): """ An event emitted *before* gevent begins patching the system. Following this event will be a series of `IGeventWillPatchModuleEvent` and `IGeventDidPatchModuleEvent` for each patched module. Once the gevent builtin modules have been processed, `IGeventDidPatchBuiltinModulesEvent` will be emitted. Processing this event is an ideal time for third-party modules to be imported and patched (which may trigger its own will/did patch module events). Finally, a `IGeventDidPatchAllEvent` will be sent. If a subscriber to this event raises `DoNotPatch`, no patching will be done. The *source* and *target* attributes have undefined values. """ patch_all_arguments = Attribute( "A dictionary of all the arguments to `gevent.monkey.patch_all`. " "This dictionary should not be modified. " ) patch_all_kwargs = Attribute( "A dictionary of the extra arguments to `gevent.monkey.patch_all`. " "This dictionary should not be modified. " ) def will_patch_module(module_name): """ Return whether the module named *module_name* will be patched. """ class _PatchAllMixin(object): def __init__(self, patch_all_arguments, patch_all_kwargs): super(_PatchAllMixin, self).__init__(None, None) self._patch_all_arguments = patch_all_arguments self._patch_all_kwargs = patch_all_kwargs @property def patch_all_arguments(self): return self._patch_all_arguments.copy() @property def patch_all_kwargs(self): return self._patch_all_kwargs.copy() def __repr__(self): return '<%s %r at %x>' % (self.__class__.__name__, self._patch_all_arguments, id(self)) @implementer(IGeventWillPatchAllEvent) class GeventWillPatchAllEvent(_PatchAllMixin, GeventWillPatchEvent): """ Implementation of `IGeventWillPatchAllEvent`. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.monkey.will_patch_all' def will_patch_module(self, module_name): return self.patch_all_arguments.get(module_name) class IGeventDidPatchBuiltinModulesEvent(IGeventDidPatchEvent): """ Event emitted *after* the builtin modules have been patched. 
If you're going to monkey-patch a third-party library, this is usually the event to listen for. The values of the *source* and *target* attributes are undefined. """ patch_all_arguments = Attribute( "A dictionary of all the arguments to `gevent.monkey.patch_all`. " "This dictionary should not be modified. " ) patch_all_kwargs = Attribute( "A dictionary of the extra arguments to `gevent.monkey.patch_all`. " "This dictionary should not be modified. " ) @implementer(IGeventDidPatchBuiltinModulesEvent) class GeventDidPatchBuiltinModulesEvent(_PatchAllMixin, GeventDidPatchEvent): """ Implementation of `IGeventDidPatchBuiltinModulesEvent`. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.monkey.did_patch_builtins' class IGeventDidPatchAllEvent(IGeventDidPatchEvent): """ Event emitted after gevent has patched all modules, both builtin and those provided by plugins/subscribers. The values of the *source* and *target* attributes are undefined. """ @implementer(IGeventDidPatchAllEvent) class GeventDidPatchAllEvent(_PatchAllMixin, GeventDidPatchEvent): """ Implementation of `IGeventDidPatchAllEvent`. """ #: The name of the setuptools entry point that is called when this #: event is emitted. ENTRY_POINT_NAME = 'gevent.plugins.monkey.did_patch_all' gevent-24.11.1/src/gevent/exceptions.py000066400000000000000000000075341471441230600177750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright 2018 gevent """ Exceptions. .. versionadded:: 1.3b1 """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from greenlet import GreenletExit __all__ = [ 'LoopExit', ] class LoopExit(Exception): """ Exception thrown when the hub finishes running (`gevent.hub.Hub.run` would return). In a normal application, this is never thrown or caught explicitly. The internal implementation of functions like :meth:`gevent.hub.Hub.join` and :func:`gevent.joinall` may catch it, but user code generally should not. .. caution:: Errors in application programming can also lead to this exception being raised. Some examples include (but are not limited too): - greenlets deadlocking on a lock; - using a socket or other gevent object with native thread affinity from a different thread """ @property def hub(self): """ The (optional) hub that raised the error. .. versionadded:: 20.12.0 """ # XXX: Note that semaphore.py does this manually. if len(self.args) == 3: # From the hub return self.args[1] def __repr__(self): # pylint:disable=unsubscriptable-object if len(self.args) == 3: # From the hub import pprint return ( "%s\n" "\tHub: %s\n" "\tHandles:\n%s" ) % ( self.args[0], self.args[1], pprint.pformat(self.args[2]) ) return Exception.__repr__(self) def __str__(self): return repr(self) class BlockingSwitchOutError(AssertionError): """ Raised when a gevent synchronous function is called from a low-level event loop callback. This is usually a programming error. """ class InvalidSwitchError(AssertionError): """ Raised when the event loop returns control to a greenlet in an unexpected way. This is usually a bug in gevent, greenlet, or the event loop. """ class ConcurrentObjectUseError(AssertionError): """ Raised when an object is used (waited on) by two greenlets independently, meaning the object was entered into a blocking state by one greenlet and then another while still blocking in the first one. This is usually a programming error. .. 
seealso:: `gevent.socket.wait` """ class InvalidThreadUseError(RuntimeError): """ Raised when an object is used from a different thread than the one it is bound to. Some objects, such as gevent sockets, semaphores, and threadpools, are tightly bound to their hub and its loop. The hub and loop are not thread safe, with a few exceptions. Attempting to use such objects from a different thread is an error, and may cause problems ranging from incorrect results to memory corruption and a crashed process. In some cases, gevent catches this "accidentally", and the result is a `LoopExit`. In some cases, gevent doesn't catch this at all. In other cases (typically when the consequences are suspected to be more on the more severe end of the scale, and when the operation in question is already relatively heavyweight), gevent explicitly checks for this usage and will raise this exception when it is detected. .. versionadded:: 1.5a3 """ class HubDestroyed(GreenletExit): """ Internal exception, raised when we're trying to destroy the hub and we want the loop to stop running callbacks now. This must not be subclassed; the type is tested by identity. Clients outside of gevent must not raise this exception. .. versionadded:: 20.12.0 """ def __init__(self, destroy_loop): GreenletExit.__init__(self, destroy_loop) self.destroy_loop = destroy_loop gevent-24.11.1/src/gevent/fileobject.py000066400000000000000000000057141471441230600177200ustar00rootroot00000000000000""" Wrappers to make file-like objects cooperative. .. class:: FileObject(fobj, mode='r', buffering=-1, closefd=True, encoding=None, errors=None, newline=None) The main entry point to the file-like gevent-compatible behaviour. It will be defined to be the best available implementation. All the parameters are as for :func:`io.open`. :param fobj: Usually a file descriptor of a socket. Can also be another object with a ``fileno()`` method, or an object that can be passed to ``io.open()`` (e.g., a file system path). If the object is not a socket, the results will vary based on the platform and the type of object being opened. All supported versions of Python allow :class:`os.PathLike` objects. .. versionchanged:: 1.5 Accept str and ``PathLike`` objects for *fobj* on all versions of Python. .. versionchanged:: 1.5 Add *encoding*, *errors* and *newline* arguments. .. versionchanged:: 1.5 Accept *closefd* and *buffering* instead of *close* and *bufsize* arguments. The latter remain for backwards compatibility. There are two main implementations of ``FileObject``. On all systems, there is :class:`FileObjectThread` which uses the built-in native threadpool to avoid blocking the entire interpreter. On UNIX systems (those that support the :mod:`fcntl` module), there is also :class:`FileObjectPosix` which uses native non-blocking semantics. A third class, :class:`FileObjectBlock`, is simply a wrapper that executes everything synchronously (and so is not gevent-compatible). It is provided for testing and debugging purposes. All classes have the same signature; some may accept extra keyword arguments. Configuration ============= You may change the default value for ``FileObject`` using the ``GEVENT_FILE`` environment variable. Set it to ``posix``, ``thread``, or ``block`` to choose from :class:`FileObjectPosix`, :class:`FileObjectThread` and :class:`FileObjectBlock`, respectively. You may also set it to the fully qualified class name of another object that implements the file interface to use one of your own objects. .. 
note:: The environment variable must be set at the time this module is first imported. Classes ======= """ from __future__ import absolute_import from gevent._config import config __all__ = [ 'FileObjectPosix', 'FileObjectThread', 'FileObjectBlock', 'FileObject', ] try: from fcntl import fcntl except ImportError: __all__.remove("FileObjectPosix") else: del fcntl from gevent._fileobjectposix import FileObjectPosix from gevent._fileobjectcommon import FileObjectThread from gevent._fileobjectcommon import FileObjectBlock # None of the possible objects can live in this module because # we would get an import cycle and the config couldn't be set from code. # TODO: zope.hookable would be great for allowing this to be imported # without requiring configuration but still being very fast. FileObject = config.fileobject gevent-24.11.1/src/gevent/greenlet.py000066400000000000000000001302231471441230600174110ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False # pylint:disable=too-many-lines from __future__ import absolute_import, print_function, division from sys import _getframe as sys_getframe from sys import exc_info as sys_exc_info from weakref import ref as wref # XXX: How to get cython to let us rename this as RawGreenlet # like we prefer? from greenlet import greenlet from greenlet import GreenletExit from gevent._compat import reraise from gevent._compat import PYPY as _PYPY from gevent._tblib import dump_traceback from gevent._tblib import load_traceback from gevent.exceptions import InvalidSwitchError from gevent._hub_primitives import iwait_on_objects as iwait from gevent._hub_primitives import wait_on_objects as wait from gevent.timeout import Timeout from gevent._config import config as GEVENT_CONFIG from gevent._util import readproperty from gevent._hub_local import get_hub_noargs as get_hub from gevent import _waiter __all__ = [ 'Greenlet', 'joinall', 'killall', ] # In Cython, we define these as 'cdef inline' functions. The # compilation unit cannot have a direct assignment to them (import # is assignment) without generating a 'lvalue is not valid target' # error. locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None locals()['Waiter'] = _waiter.Waiter # With Cython, this raises a TypeError if the parent is *not* # the hub (SwitchOutGreenletWithLoop); in pure-Python, we will # very likely get an AttributeError immediately after when we access `loop`; # The TypeError message is more informative on Python 2. # This must ONLY be called when we know that `s` is not None and is in fact a greenlet # object (e.g., when called on `self`) locals()['get_my_hub'] = lambda s: s.parent # This must also ONLY be called when we know that S is not None and is in fact a greenlet # object (including the result of getcurrent()) locals()['get_generic_parent'] = lambda s: s.parent # Frame access locals()['Gevent_PyFrame_GetCode'] = lambda frame: frame.f_code locals()['Gevent_PyFrame_GetLineNumber'] = lambda frame: frame.f_lineno locals()['Gevent_PyFrame_GetBack'] = lambda frame: frame.f_back if _PYPY: import _continuation # pylint:disable=import-error _continulet = _continuation.continulet class SpawnedLink(object): """ A wrapper around link that calls it in another greenlet. Can be called only from main loop. 
""" __slots__ = ['callback'] def __init__(self, callback): if not callable(callback): raise TypeError("Expected callable: %r" % (callback, )) self.callback = callback def __call__(self, source): g = greenlet(self.callback, get_hub()) g.switch(source) def __hash__(self): return hash(self.callback) def __eq__(self, other): return self.callback == getattr(other, 'callback', other) def __str__(self): return str(self.callback) def __repr__(self): return repr(self.callback) def __getattr__(self, item): assert item != 'callback' return getattr(self.callback, item) class SuccessSpawnedLink(SpawnedLink): """A wrapper around link that calls it in another greenlet only if source succeed. Can be called only from main loop. """ __slots__ = [] def __call__(self, source): if source.successful(): return SpawnedLink.__call__(self, source) class FailureSpawnedLink(SpawnedLink): """A wrapper around link that calls it in another greenlet only if source failed. Can be called only from main loop. """ __slots__ = [] def __call__(self, source): if not source.successful(): return SpawnedLink.__call__(self, source) class _Frame(object): __slots__ = ('f_code', 'f_lineno', 'f_back') def __init__(self): self.f_code = None self.f_back = None self.f_lineno = 0 @property def f_globals(self): return None def _extract_stack(limit): try: frame = sys_getframe() except ValueError: # In certain embedded cases that directly use the Python C api # to call Greenlet.spawn (e.g., uwsgi) this can raise # `ValueError: call stack is not deep enough`. This is because # the Cython stack frames for Greenlet.spawn -> # Greenlet.__init__ -> _extract_stack are all on the C level, # not the Python level. # See https://github.com/gevent/gevent/issues/1212 frame = None newest_Frame = None newer_Frame = None while limit and frame is not None: limit -= 1 older_Frame = _Frame() # Arguments are always passed to the constructor as Python objects, # meaning we wind up boxing the f_lineno just to unbox it if we pass it. # It's faster to simply assign once the object is created. older_Frame.f_code = Gevent_PyFrame_GetCode(frame) # pylint:disable=undefined-variable older_Frame.f_lineno = Gevent_PyFrame_GetLineNumber(frame) # pylint:disable=undefined-variable if newer_Frame is not None: newer_Frame.f_back = older_Frame newer_Frame = older_Frame if newest_Frame is None: newest_Frame = newer_Frame frame = Gevent_PyFrame_GetBack(frame) # pylint:disable=undefined-variable return newest_Frame _greenlet__init__ = greenlet.__init__ class Greenlet(greenlet): """ A light-weight cooperatively-scheduled execution unit. """ # pylint:disable=too-many-public-methods,too-many-instance-attributes spawning_stack_limit = 10 # pylint:disable=keyword-arg-before-vararg,super-init-not-called def __init__(self, run=None, *args, **kwargs): """ :param args: The arguments passed to the ``run`` function. :param kwargs: The keyword arguments passed to the ``run`` function. :keyword callable run: The callable object to run. If not given, this object's `_run` method will be invoked (typically defined by subclasses). .. versionchanged:: 1.1b1 The ``run`` argument to the constructor is now verified to be a callable object. Previously, passing a non-callable object would fail after the greenlet was spawned. .. versionchanged:: 1.3b1 The ``GEVENT_TRACK_GREENLET_TREE`` configuration value may be set to a false value to disable ``spawn_tree_locals``, ``spawning_greenlet``, and ``spawning_stack``. The first two will be None in that case, and the latter will be empty. .. 
versionchanged:: 1.5 Greenlet objects are now more careful to verify that their ``parent`` is really a gevent hub, raising a ``TypeError`` earlier instead of an ``AttributeError`` later. .. versionchanged:: 20.12.1 Greenlet objects now function as context managers. Exiting the ``with`` suite ensures that the greenlet has completed by :meth:`joining ` the greenlet (blocking, with no timeout). If the body of the suite raises an exception, the greenlet is :meth:`killed ` with the default arguments and not joined in that case. """ # The attributes are documented in the .rst file # greenlet.greenlet(run=None, parent=None) # Calling it with both positional arguments instead of a keyword # argument (parent=get_hub()) speeds up creation of this object ~30%: # python -m timeit -s 'import gevent' 'gevent.Greenlet()' # Python 3.5: 2.70usec with keywords vs 1.94usec with positional # Python 3.4: 2.32usec with keywords vs 1.74usec with positional # Python 3.3: 2.55usec with keywords vs 1.92usec with positional # Python 2.7: 1.73usec with keywords vs 1.40usec with positional # Timings taken Feb 21 2018 prior to integration of #755 # python -m perf timeit -s 'import gevent' 'gevent.Greenlet()' # 3.6.4 : Mean +- std dev: 1.08 us +- 0.05 us # 2.7.14 : Mean +- std dev: 1.44 us +- 0.06 us # PyPy2 5.10.0: Mean +- std dev: 2.14 ns +- 0.08 ns # After the integration of spawning_stack, spawning_greenlet, # and spawn_tree_locals on that same date: # 3.6.4 : Mean +- std dev: 8.92 us +- 0.36 us -> 8.2x # 2.7.14 : Mean +- std dev: 14.8 us +- 0.5 us -> 10.2x # PyPy2 5.10.0: Mean +- std dev: 3.24 us +- 0.17 us -> 1.5x # Compiling with Cython gets us to these numbers: # 3.6.4 : Mean +- std dev: 3.63 us +- 0.14 us # 2.7.14 : Mean +- std dev: 3.37 us +- 0.20 us # PyPy2 5.10.0 : Mean +- std dev: 4.44 us +- 0.28 us # Switching to reified frames and some more tuning gets us here: # 3.7.2 : Mean +- std dev: 2.53 us +- 0.15 us # 2.7.16 : Mean +- std dev: 2.35 us +- 0.12 us # PyPy2 7.1 : Mean +- std dev: 11.6 us +- 0.4 us # Compared to the released 1.4 (tested at the same time): # 3.7.2 : Mean +- std dev: 3.21 us +- 0.32 us # 2.7.16 : Mean +- std dev: 3.11 us +- 0.19 us # PyPy2 7.1 : Mean +- std dev: 12.3 us +- 0.8 us _greenlet__init__(self, None, get_hub()) if run is not None: self._run = run # If they didn't pass a callable at all, then they must # already have one. Note that subclassing to override the run() method # itself has never been documented or supported. if not callable(self._run): raise TypeError("The run argument or self._run must be callable") self.args = args self.kwargs = kwargs self.value = None #: An event, such as a timer or a callback that fires. It is established in #: start() and start_later() as those two objects, respectively. #: Once this becomes non-None, the Greenlet cannot be started again. Conversely, #: kill() and throw() check for non-None to determine if this object has ever been #: scheduled for starting. A placeholder _cancelled_start_event is assigned by them to prevent #: the greenlet from being started in the future, if necessary. #: In the usual case, this transitions as follows: None -> event -> _start_completed_event. #: A value of None means we've never been started. self._start_event = None self._notifier = None self._formatted_info = None self._links = [] self._ident = None # Initial state: None. 
# Completed successfully: (None, None, None) # Failed with exception: (t, v, dump_traceback(tb))) self._exc_info = None if GEVENT_CONFIG.track_greenlet_tree: spawner = getcurrent() # pylint:disable=undefined-variable self.spawning_greenlet = wref(spawner) try: self.spawn_tree_locals = spawner.spawn_tree_locals except AttributeError: self.spawn_tree_locals = {} if get_generic_parent(spawner) is not None: # pylint:disable=undefined-variable # The main greenlet has no parent. # Its children get separate locals. spawner.spawn_tree_locals = self.spawn_tree_locals self.spawning_stack = _extract_stack(self.spawning_stack_limit) # Don't copy the spawning greenlet's # '_spawning_stack_frames' into ours. That's somewhat # confusing, and, if we're not careful, a deep spawn tree # can lead to excessive memory usage (an infinite spawning # tree could lead to unbounded memory usage without care # --- see https://github.com/gevent/gevent/issues/1371) # The _spawning_stack_frames may be cleared out later if we access spawning_stack else: # None is the default for all of these in Cython, but we # need to declare them for pure-Python mode. self.spawning_greenlet = None self.spawn_tree_locals = None self.spawning_stack = None def _get_minimal_ident(self): # Helper function for cython, to allow typing `reg` and making a # C call to get_ident. # If we're being accessed from a hub different than the one running # us, aka get_hub() is not self.parent, then calling hub.ident_registry.get_ident() # may be quietly broken: it's not thread safe. # If our parent is no longer the hub for whatever reason, this will raise a # AttributeError or TypeError. hub = get_my_hub(self) # pylint:disable=undefined-variable reg = hub.ident_registry return reg.get_ident(self) @property def minimal_ident(self): """ A small, unique non-negative integer that identifies this object. This is similar to :attr:`threading.Thread.ident` (and `id`) in that as long as this object is alive, no other greenlet *in this hub* will have the same id, but it makes a stronger guarantee that the assigned values will be small and sequential. Sometime after this object has died, the value will be available for reuse. To get ids that are unique across all hubs, combine this with the hub's (``self.parent``) ``minimal_ident``. Accessing this property from threads other than the thread running this greenlet is not defined. .. versionadded:: 1.3a2 """ # Not @Lazy, implemented manually because _ident is in the structure # of the greenlet for fast access if self._ident is None: self._ident = self._get_minimal_ident() return self._ident @readproperty def name(self): """ The greenlet name. By default, a unique name is constructed using the :attr:`minimal_ident`. You can assign a string to this value to change it. It is shown in the `repr` of this object if it has been assigned to or if the `minimal_ident` has already been generated. .. versionadded:: 1.3a2 .. versionchanged:: 1.4 Stop showing generated names in the `repr` when the ``minimal_ident`` hasn't been requested. This reduces overhead and may be less confusing, since ``minimal_ident`` can get reused. 
""" return 'Greenlet-%d' % (self.minimal_ident,) def _raise_exception(self): reraise(*self.exc_info) @property def loop(self): # needed by killall hub = get_my_hub(self) # pylint:disable=undefined-variable return hub.loop def __bool__(self): return self._start_event is not None and self._exc_info is None ### Lifecycle if _PYPY: # oops - pypy's .dead relies on __nonzero__ which we overriden above @property def dead(self): "Boolean indicating that the greenlet is dead and will not run again." # pylint:disable=no-member if self._greenlet__main: return False if self.__start_cancelled_by_kill() or self.__started_but_aborted(): return True return self._greenlet__started and not _continulet.is_pending(self) else: @property def dead(self): """ Boolean indicating that the greenlet is dead and will not run again. This is true if: 1. We were never started, but were :meth:`killed ` immediately after creation (not possible with :meth:`spawn`); OR 2. We were started, but were killed before running; OR 3. We have run and terminated (by raising an exception out of the started function or by reaching the end of the started function). """ return ( self.__start_cancelled_by_kill() or self.__started_but_aborted() or greenlet.dead.__get__(self) ) def __never_started_or_killed(self): return self._start_event is None def __start_pending(self): return ( self._start_event is not None and (self._start_event.pending or getattr(self._start_event, 'active', False)) ) def __start_cancelled_by_kill(self): return self._start_event is _cancelled_start_event def __start_completed(self): return self._start_event is _start_completed_event def __started_but_aborted(self): return ( not self.__never_started_or_killed() # we have been started or killed and not self.__start_cancelled_by_kill() # we weren't killed, so we must have been started and not self.__start_completed() # the start never completed and not self.__start_pending() # and we're not pending, so we must have been aborted ) def __cancel_start(self): if self._start_event is None: # prevent self from ever being started in the future self._start_event = _cancelled_start_event # cancel any pending start event # NOTE: If this was a real pending start event, this will leave a # "dangling" callback/timer object in the hub.loop.callbacks list; # depending on where we are in the event loop, it may even be in a local # variable copy of that list (in _run_callbacks). This isn't a problem, # except for the leak-tests. self._start_event.stop() self._start_event.close() def __handle_death_before_start(self, args): # args is (t, v, tb) or simply t or v. # The last two cases are transformed into (t, v, None); # if the single argument is an exception type, a new instance # is created; if the single argument is not an exception type and also # not an exception, it is wrapped in a BaseException (this is not # documented, but should result in better behaviour in the event of a # user error---instead of silently printing something to stderr, we still # kill the greenlet). if self._exc_info is None and self.dead: # the greenlet was never switched to before and it will # never be; _report_error was not called, the result was # not set, and the links weren't notified. Let's do it # here. # # checking that self.dead is true is essential, because # throw() does not necessarily kill the greenlet (if the # exception raised by throw() is caught somewhere inside # the greenlet). 
if len(args) == 1: arg = args[0] if isinstance(arg, type) and issubclass(arg, BaseException): args = (arg, arg(), None) else: args = (type(arg), arg, None) elif not args: args = (GreenletExit, GreenletExit(), None) if not issubclass(args[0], BaseException): # Random non-type, non-exception arguments. args = (BaseException, BaseException(args), None) assert issubclass(args[0], BaseException) self.__report_error(args) @property def started(self): # DEPRECATED return bool(self) def ready(self): """ Return a true value if and only if the greenlet has finished execution. .. versionchanged:: 1.1 This function is only guaranteed to return true or false *values*, not necessarily the literal constants ``True`` or ``False``. """ return self.dead or self._exc_info is not None def successful(self): """ Return a true value if and only if the greenlet has finished execution successfully, that is, without raising an error. .. tip:: A greenlet that has been killed with the default :class:`GreenletExit` exception is considered successful. That is, ``GreenletExit`` is not considered an error. .. note:: This function is only guaranteed to return true or false *values*, not necessarily the literal constants ``True`` or ``False``. """ return self._exc_info is not None and self._exc_info[1] is None def __repr__(self): classname = self.__class__.__name__ # If no name has been assigned, don't generate one, including a minimal_ident, # if not necessary. This reduces the use of weak references and associated # overhead. if 'name' not in self.__dict__ and self._ident is None: name = ' ' else: name = ' "%s" ' % (self.name,) result = '<%s%sat %s' % (classname, name, hex(id(self))) formatted = self._formatinfo() if formatted: result += ': ' + formatted return result + '>' def _formatinfo(self): info = self._formatted_info if info is not None: return info # Are we running an arbitrary function provided to the constructor, # or did a subclass override _run? func = self._run im_self = getattr(func, '__self__', None) if im_self is self: funcname = '_run' elif im_self is not None: funcname = repr(func) else: funcname = getattr(func, '__name__', '') or repr(func) result = funcname args = [] if self.args: args = [repr(x)[:50] for x in self.args] if self.kwargs: args.extend(['%s=%s' % (key, repr(value)[:50]) for (key, value) in self.kwargs.items()]) if args: result += '(' + ', '.join(args) + ')' # it is important to save the result here, because once the greenlet exits '_run' attribute will be removed self._formatted_info = result return result @property def exception(self): """ Holds the exception instance raised by the function if the greenlet has finished with an error. Otherwise ``None``. """ return self._exc_info[1] if self._exc_info is not None else None @property def exc_info(self): """ Holds the exc_info three-tuple raised by the function if the greenlet finished with an error. Otherwise a false value. .. note:: This is a provisional API and may change. .. versionadded:: 1.1 """ ei = self._exc_info if ei is not None and ei[0] is not None: return ( ei[0], ei[1], # The pickled traceback may be None if we couldn't pickle it. load_traceback(ei[2]) if ei[2] else None ) def throw(self, *args): """Immediately switch into the greenlet and raise an exception in it. Should only be called from the HUB, otherwise the current greenlet is left unscheduled forever. To raise an exception in a safe manner from any greenlet, use :meth:`kill`. 
If a greenlet was started but never switched to yet, then also a) cancel the event that will start it b) fire the notifications as if an exception was raised in a greenlet """ self.__cancel_start() try: if not self.dead: # Prevent switching into a greenlet *at all* if we had never # started it. Usually this is the same thing that happens by throwing, # but if this is done from the hub with nothing else running, prevents a # LoopExit. greenlet.throw(self, *args) finally: self.__handle_death_before_start(args) def start(self): """Schedule the greenlet to run in this loop iteration""" if self._start_event is None: _call_spawn_callbacks(self) hub = get_my_hub(self) # pylint:disable=undefined-variable self._start_event = hub.loop.run_callback(self.switch) def start_later(self, seconds): """ start_later(seconds) -> None Schedule the greenlet to run in the future loop iteration *seconds* later """ if self._start_event is None: _call_spawn_callbacks(self) hub = get_my_hub(self) # pylint:disable=undefined-variable self._start_event = hub.loop.timer(seconds) self._start_event.start(self.switch) @staticmethod def add_spawn_callback(callback): """ add_spawn_callback(callback) -> None Set up a *callback* to be invoked when :class:`Greenlet` objects are started. The invocation order of spawn callbacks is unspecified. Adding the same callback more than one time will not cause it to be called more than once. .. versionadded:: 1.4.0 """ global _spawn_callbacks if _spawn_callbacks is None: # pylint:disable=used-before-assignment _spawn_callbacks = set() _spawn_callbacks.add(callback) @staticmethod def remove_spawn_callback(callback): """ remove_spawn_callback(callback) -> None Remove *callback* function added with :meth:`Greenlet.add_spawn_callback`. This function will not fail if *callback* has been already removed or if *callback* was never added. .. versionadded:: 1.4.0 """ global _spawn_callbacks if _spawn_callbacks is not None: _spawn_callbacks.discard(callback) if not _spawn_callbacks: _spawn_callbacks = None @classmethod def spawn(cls, *args, **kwargs): """ spawn(function, *args, **kwargs) -> Greenlet Create a new :class:`Greenlet` object and schedule it to run ``function(*args, **kwargs)``. This can be used as ``gevent.spawn`` or ``Greenlet.spawn``. The arguments are passed to :meth:`Greenlet.__init__`. .. versionchanged:: 1.1b1 If a *function* is given that is not callable, immediately raise a :exc:`TypeError` instead of spawning a greenlet that will raise an uncaught TypeError. """ g = cls(*args, **kwargs) g.start() return g @classmethod def spawn_later(cls, seconds, *args, **kwargs): """ spawn_later(seconds, function, *args, **kwargs) -> Greenlet Create and return a new `Greenlet` object scheduled to run ``function(*args, **kwargs)`` in a future loop iteration *seconds* later. This can be used as ``Greenlet.spawn_later`` or ``gevent.spawn_later``. The arguments are passed to :meth:`Greenlet.__init__`. .. versionchanged:: 1.1b1 If an argument that's meant to be a function (the first argument in *args*, or the ``run`` keyword ) is given to this classmethod (and not a classmethod of a subclass), it is verified to be callable. Previously, the spawned greenlet would have failed when it started running. 
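# A minimal sketch of the spawn-callback hook above, assuming a stock gevent
# install ('note_spawn' is an invented name).  The callback fires when a
# Greenlet is started (start()/start_later(), and therefore spawn() too),
# not when it is constructed.
import gevent

seen = []

def note_spawn(greenlet):
    seen.append(greenlet)

gevent.Greenlet.add_spawn_callback(note_spawn)
try:
    g = gevent.Greenlet(gevent.sleep, 0)
    assert not seen                  # constructing alone does not fire the hook
    g.start()
    assert seen == [g]               # start() invoked the callback
    g.join()
finally:
    gevent.Greenlet.remove_spawn_callback(note_spawn)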
""" if cls is Greenlet and not args and 'run' not in kwargs: raise TypeError("") g = cls(*args, **kwargs) g.start_later(seconds) return g def _maybe_kill_before_start(self, exception): # Helper for Greenlet.kill(), and also for killall() self.__cancel_start() self.__free() dead = self.dead if dead: if isinstance(exception, tuple) and len(exception) == 3: args = exception else: args = (exception,) self.__handle_death_before_start(args) return dead def kill(self, exception=GreenletExit, block=True, timeout=None): """ Raise the ``exception`` in the greenlet. If ``block`` is ``True`` (the default), wait until the greenlet dies or the optional timeout expires; this may require switching greenlets. If block is ``False``, the current greenlet is not unscheduled. This function always returns ``None`` and never raises an error. It may be called multpile times on the same greenlet object, and may be called on an unstarted or dead greenlet. .. note:: Depending on what this greenlet is executing and the state of the event loop, the exception may or may not be raised immediately when this greenlet resumes execution. It may be raised on a subsequent green call, or, if this greenlet exits before making such a call, it may not be raised at all. As of 1.1, an example where the exception is raised later is if this greenlet had called :func:`sleep(0) `; an example where the exception is raised immediately is if this greenlet had called :func:`sleep(0.1) `. .. caution:: Use care when killing greenlets. If the code executing is not exception safe (e.g., makes proper use of ``finally``) then an unexpected exception could result in corrupted state. Using a :meth:`link` or :meth:`rawlink` (cheaper) may be a safer way to clean up resources. See also :func:`gevent.kill` and :func:`gevent.killall`. :keyword type exception: The type of exception to raise in the greenlet. The default is :class:`GreenletExit`, which indicates a :meth:`successful` completion of the greenlet. .. versionchanged:: 0.13.0 *block* is now ``True`` by default. .. versionchanged:: 1.1a2 If this greenlet had never been switched to, killing it will prevent it from *ever* being switched to. Links (:meth:`rawlink`) will still be executed, though. .. versionchanged:: 20.12.1 If this greenlet is :meth:`ready`, immediately return instead of requiring a trip around the event loop. """ if not self._maybe_kill_before_start(exception): if self.ready(): return waiter = Waiter() if block else None # pylint:disable=undefined-variable hub = get_my_hub(self) # pylint:disable=undefined-variable hub.loop.run_callback(_kill, self, exception, waiter) if waiter is not None: waiter.get() self.join(timeout) def get(self, block=True, timeout=None): """ get(block=True, timeout=None) -> object Return the result the greenlet has returned or re-raise the exception it has raised. If block is ``False``, raise :class:`gevent.Timeout` if the greenlet is still alive. If block is ``True``, unschedule the current greenlet until the result is available or the timeout expires. In the latter case, :class:`gevent.Timeout` is raised. 
""" if self.ready(): if self.successful(): return self.value self._raise_exception() if not block: raise Timeout() switch = getcurrent().switch # pylint:disable=undefined-variable self.rawlink(switch) try: t = Timeout._start_new_or_dummy(timeout) try: result = get_my_hub(self).switch() # pylint:disable=undefined-variable if result is not self: raise InvalidSwitchError('Invalid switch into Greenlet.get(): %r' % (result, )) finally: t.cancel() except: # unlinking in 'except' instead of finally is an optimization: # if switch occurred normally then link was already removed in _notify_links # and there's no need to touch the links set. # Note, however, that if "Invalid switch" assert was removed and invalid switch # did happen, the link would remain, causing another invalid switch later in this greenlet. self.unlink(switch) raise if self.ready(): if self.successful(): return self.value self._raise_exception() def join(self, timeout=None): """ join(timeout=None) -> None Wait until the greenlet finishes or *timeout* expires. Return ``None`` regardless. """ if self.ready(): return switch = getcurrent().switch # pylint:disable=undefined-variable self.rawlink(switch) try: t = Timeout._start_new_or_dummy(timeout) try: result = get_my_hub(self).switch() # pylint:disable=undefined-variable if result is not self: raise InvalidSwitchError('Invalid switch into Greenlet.join(): %r' % (result, )) finally: t.cancel() except Timeout as ex: self.unlink(switch) if ex is not t: raise except: self.unlink(switch) raise def __enter__(self): return self def __exit__(self, t, v, tb): if t is None: try: self.join() finally: self.kill() else: self.kill((t, v, tb)) def __report_result(self, result): self._exc_info = (None, None, None) self.value = result if self._links and not self._notifier: hub = get_my_hub(self) # pylint:disable=undefined-variable self._notifier = hub.loop.run_callback(self._notify_links) def __report_error(self, exc_info): if isinstance(exc_info[1], GreenletExit): self.__report_result(exc_info[1]) return # Depending on the error, we may not be able to pickle it. # In particular, RecursionError can be a problem. try: tb = dump_traceback(exc_info[2]) except: # pylint:disable=bare-except tb = None self._exc_info = exc_info[0], exc_info[1], tb hub = get_my_hub(self) # pylint:disable=undefined-variable if self._links and not self._notifier: self._notifier = hub.loop.run_callback(self._notify_links) try: hub.handle_error(self, *exc_info) finally: del exc_info def run(self): try: self.__cancel_start() self._start_event = _start_completed_event try: result = self._run(*self.args, **self.kwargs) except: # pylint:disable=bare-except self.__report_error(sys_exc_info()) else: self.__report_result(result) finally: self.__free() def __free(self): try: # It seems that Cython 0.29.13 sometimes miscompiles # self.__dict__.pop('_run', None) ? When we moved this out of the # inline finally: block in run(), we started getting strange # exceptions from places that subclassed Greenlet. del self._run except AttributeError: pass self.args = () self.kwargs.clear() def _run(self): """ Subclasses may override this method to take any number of arguments and keyword arguments. .. versionadded:: 1.1a3 Previously, if no callable object was passed to the constructor, the spawned greenlet would later fail with an AttributeError. 
""" # We usually override this in __init__ # pylint: disable=method-hidden return def has_links(self): return len(self._links) def rawlink(self, callback): """ Register a callable to be executed when the greenlet finishes execution. The *callback* will be called with this instance as an argument. The *callback* will be called even if linked after the greenlet is already ready(). .. caution:: The *callback* will be called in the hub and **MUST NOT** raise an exception. """ if not callable(callback): raise TypeError('Expected callable: %r' % (callback, )) self._links.append(callback) # pylint:disable=no-member if self.ready() and self._links and not self._notifier: hub = get_my_hub(self) # pylint:disable=undefined-variable self._notifier = hub.loop.run_callback(self._notify_links) def link(self, callback, SpawnedLink=SpawnedLink): """ Link greenlet's completion to a callable. The *callback* will be called with this instance as an argument once this greenlet is dead. A callable is called in its own :class:`greenlet.greenlet` (*not* a :class:`Greenlet`). The *callback* will be called even if linked after the greenlet is already ready(). """ # XXX: Is the redefinition of SpawnedLink supposed to just be an # optimization, or do people use it? It's not documented # pylint:disable=redefined-outer-name self.rawlink(SpawnedLink(callback)) def unlink(self, callback): """Remove the callback set by :meth:`link` or :meth:`rawlink`""" try: self._links.remove(callback) # pylint:disable=no-member except ValueError: pass def unlink_all(self): """ Remove all the callbacks. .. versionadded:: 1.3a2 """ del self._links[:] def link_value(self, callback, SpawnedLink=SuccessSpawnedLink): """ Like :meth:`link` but *callback* is only notified when the greenlet has completed successfully. """ # pylint:disable=redefined-outer-name self.link(callback, SpawnedLink=SpawnedLink) def link_exception(self, callback, SpawnedLink=FailureSpawnedLink): """ Like :meth:`link` but *callback* is only notified when the greenlet dies because of an unhandled exception. """ # pylint:disable=redefined-outer-name self.link(callback, SpawnedLink=SpawnedLink) def _notify_links(self): while self._links: # Early links are allowed to remove later links # before we get to them, and they're also allowed to # add new links, so we have to be careful about iterating. # We don't expect this list to be very large, so the time spent # manipulating it should be small. a deque is probably not justified. # Cython has optimizations to transform this into a memmove anyway. link = self._links.pop(0) try: link(self) except: # pylint:disable=bare-except, undefined-variable get_my_hub(self).handle_error((link, self), *sys_exc_info()) class _dummy_event(object): __slots__ = ('pending', 'active') def __init__(self): self.pending = self.active = False def stop(self): pass def start(self, cb): # pylint:disable=unused-argument raise AssertionError("Cannot start the dummy event") def close(self): pass _cancelled_start_event = _dummy_event() _start_completed_event = _dummy_event() # This is *only* called as a callback from the hub via Greenlet.kill(), # and its first argument is the Greenlet. So we can be sure about the types. def _kill(glet, exception, waiter): try: if isinstance(exception, tuple) and len(exception) == 3: glet.throw(*exception) else: glet.throw(exception) except: # pylint:disable=bare-except, undefined-variable # XXX do we need this here? 
get_my_hub(glet).handle_error(glet, *sys_exc_info()) if waiter is not None: waiter.switch(None) def joinall(greenlets, timeout=None, raise_error=False, count=None): """ Wait for the ``greenlets`` to finish. :param greenlets: A sequence (supporting :func:`len`) of greenlets to wait for. :keyword float timeout: If given, the maximum number of seconds to wait. :return: A sequence of the greenlets that finished before the timeout (if any) expired. """ if not raise_error: return wait(greenlets, timeout=timeout, count=count) done = [] for obj in iwait(greenlets, timeout=timeout, count=count): if getattr(obj, 'exception', None) is not None: if hasattr(obj, '_raise_exception'): obj._raise_exception() else: raise obj.exception done.append(obj) return done def _killall3(greenlets, exception, waiter): diehards = [] for g in greenlets: if not g.dead: try: g.throw(exception) except: # pylint:disable=bare-except, undefined-variable get_my_hub(g).handle_error(g, *sys_exc_info()) if not g.dead: diehards.append(g) waiter.switch(diehards) def _killall(greenlets, exception): for g in greenlets: if not g.dead: try: g.throw(exception) except: # pylint:disable=bare-except, undefined-variable get_my_hub(g).handle_error(g, *sys_exc_info()) def _call_spawn_callbacks(gr): if _spawn_callbacks is not None: for cb in _spawn_callbacks: cb(gr) _spawn_callbacks = None def killall(greenlets, exception=GreenletExit, block=True, timeout=None): """ Forceably terminate all the *greenlets* by causing them to raise *exception*. .. caution:: Use care when killing greenlets. If they are not prepared for exceptions, this could result in corrupted state. :param greenlets: A **bounded** iterable of the non-None greenlets to terminate. *All* the items in this iterable must be greenlets that belong to the same hub, which should be the hub for this current thread. If this is a generator or iterator that switches greenlets, the results are undefined. :keyword exception: The type of exception to raise in the greenlets. By default this is :class:`GreenletExit`. :keyword bool block: If True (the default) then this function only returns when all the greenlets are dead; the current greenlet is unscheduled during that process. If greenlets ignore the initial exception raised in them, then they will be joined (with :func:`gevent.joinall`) and allowed to die naturally. If False, this function returns immediately and greenlets will raise the exception asynchronously. :keyword float timeout: A time in seconds to wait for greenlets to die. If given, it is only honored when ``block`` is True. :raise Timeout: If blocking and a timeout is given that elapses before all the greenlets are dead. .. versionchanged:: 1.1a2 *greenlets* can be any iterable of greenlets, like an iterator or a set. Previously it had to be a list or tuple. .. versionchanged:: 1.5a3 Any :class:`Greenlet` in the *greenlets* list that hadn't been switched to before calling this method will never be switched to. This makes this function behave like :meth:`Greenlet.kill`. This does not apply to raw greenlets. .. versionchanged:: 1.5a3 Now accepts raw greenlets created by :func:`gevent.spawn_raw`. """ need_killed = [] for glet in greenlets: # Quick pass through to prevent any greenlet from # actually being switched to if it hasn't already. # (Previously we called ``list(greenlets)`` so we're still # linear.) # # We don't use glet.kill() here because we don't want to schedule # any callbacks in the loop; we're about to handle that more directly. 
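# A minimal sketch of joinall()/killall() as documented above, assuming a
# stock gevent install.
import gevent

workers = [gevent.spawn(gevent.sleep, 0.01) for _ in range(3)]
done = gevent.joinall(workers, timeout=1)
assert set(done) == set(workers)          # every worker finished in time

stuck = [gevent.spawn(gevent.sleep, 100) for _ in range(3)]
gevent.killall(stuck, block=True)         # raises GreenletExit in each one
assert all(g.dead for g in stuck)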
try: cancel = glet._maybe_kill_before_start except AttributeError: need_killed.append(glet) else: if not cancel(exception): need_killed.append(glet) if not need_killed: return loop = glet.loop # pylint:disable=undefined-loop-variable if block: waiter = Waiter() # pylint:disable=undefined-variable loop.run_callback(_killall3, need_killed, exception, waiter) t = Timeout._start_new_or_dummy(timeout) try: alive = waiter.get() if alive: joinall(alive, raise_error=False) finally: t.cancel() else: loop.run_callback(_killall, need_killed, exception) def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent._greenlet') gevent-24.11.1/src/gevent/hub.py000066400000000000000000001034361471441230600163700ustar00rootroot00000000000000# Copyright (c) 2009-2015 Denis Bilenko. See LICENSE for details. """ Event-loop hub. """ from __future__ import absolute_import, print_function # XXX: FIXME: Refactor to make this smaller # pylint:disable=too-many-lines from functools import partial as _functools_partial import sys import traceback from greenlet import greenlet as RawGreenlet from greenlet import getcurrent from greenlet import GreenletExit from greenlet import error as GreenletError __all__ = [ 'getcurrent', 'GreenletExit', 'spawn_raw', 'sleep', 'kill', 'signal', 'reinit', 'get_hub', 'Hub', 'Waiter', ] from gevent._config import config as GEVENT_CONFIG from gevent._compat import thread_mod_name from gevent._compat import reraise from gevent._util import readproperty from gevent._util import Lazy from gevent._util import gmctime from gevent._ident import IdentRegistry from gevent._hub_local import get_hub from gevent._hub_local import get_loop from gevent._hub_local import set_hub from gevent._hub_local import set_loop from gevent._hub_local import get_hub_if_exists as _get_hub from gevent._hub_local import get_hub_noargs as _get_hub_noargs from gevent._hub_local import set_default_hub_class from gevent._greenlet_primitives import TrackedRawGreenlet from gevent._hub_primitives import WaitOperationsGreenlet # Export from gevent import _hub_primitives wait = _hub_primitives.wait_on_objects iwait = _hub_primitives.iwait_on_objects from gevent.exceptions import LoopExit from gevent.exceptions import HubDestroyed from gevent._waiter import Waiter # Need the real get_ident. We're imported early enough (by gevent/__init__.py) # that we can be sure nothing is monkey patched yet. get_thread_ident = __import__(thread_mod_name).get_ident MAIN_THREAD_IDENT = get_thread_ident() # XXX: Assuming import is done on the main thread. def spawn_raw(function, *args, **kwargs): """ Create a new :class:`greenlet.greenlet` object and schedule it to run ``function(*args, **kwargs)``. This returns a raw :class:`~greenlet.greenlet` which does not have all the useful methods that :class:`gevent.Greenlet` has. Typically, applications should prefer :func:`~gevent.spawn`, but this method may occasionally be useful as an optimization if there are many greenlets involved. .. versionchanged:: 1.1a3 Verify that ``function`` is callable, raising a TypeError if not. Previously, the spawned greenlet would have failed the first time it was switched to. .. versionchanged:: 1.1b1 If *function* is not callable, immediately raise a :exc:`TypeError` instead of spawning a greenlet that will raise an uncaught TypeError. .. versionchanged:: 1.1rc2 Accept keyword arguments for ``function`` as previously (incorrectly) documented. 
Note that this may incur an additional expense. .. versionchanged:: 1.3a2 Populate the ``spawning_greenlet`` and ``spawn_tree_locals`` attributes of the returned greenlet. .. versionchanged:: 1.3b1 *Only* populate ``spawning_greenlet`` and ``spawn_tree_locals`` if ``GEVENT_TRACK_GREENLET_TREE`` is enabled (the default). If not enabled, those attributes will not be set. .. versionchanged:: 1.5a3 The returned greenlet always has a *loop* attribute matching the current hub's loop. This helps it work better with more gevent APIs. """ if not callable(function): raise TypeError("function must be callable") # The hub is always the parent. hub = _get_hub_noargs() loop = hub.loop factory = TrackedRawGreenlet if GEVENT_CONFIG.track_greenlet_tree else RawGreenlet # The callback class object that we use to run this doesn't # accept kwargs (and those objects are heavily used, as well as being # implemented twice in core.ppyx and corecffi.py) so do it with a partial if kwargs: function = _functools_partial(function, *args, **kwargs) g = factory(function, hub) loop.run_callback(g.switch) else: g = factory(function, hub) loop.run_callback(g.switch, *args) g.loop = hub.loop return g def sleep(seconds=0, ref=True): """ Put the current greenlet to sleep for at least *seconds*. *seconds* may be specified as an integer, or a float if fractional seconds are desired. .. tip:: In the current implementation, a value of 0 (the default) means to yield execution to any other runnable greenlets, but this greenlet may be scheduled again before the event loop cycles (in an extreme case, a greenlet that repeatedly sleeps with 0 can prevent greenlets that are ready to do I/O from being scheduled for some (small) period of time); a value greater than 0, on the other hand, will delay running this greenlet until the next iteration of the loop. If *ref* is False, the greenlet running ``sleep()`` will not prevent :func:`gevent.wait` from exiting. .. versionchanged:: 1.3a1 Sleeping with a value of 0 will now be bounded to approximately block the loop for no longer than :func:`gevent.getswitchinterval`. .. seealso:: :func:`idle` """ hub = _get_hub_noargs() loop = hub.loop if seconds <= 0: waiter = Waiter(hub) loop.run_callback(waiter.switch, None) waiter.get() else: with loop.timer(seconds, ref=ref) as t: # Sleeping is expected to be an "absolute" measure with # respect to time.time(), not a relative measure, so it's # important to update the loop's notion of now before we start loop.update_now() hub.wait(t) def idle(priority=0): """ Cause the calling greenlet to wait until the event loop is idle. Idle is defined as having no other events of the same or higher *priority* pending. That is, as long as sockets, timeouts or even signals of the same or higher priority are being processed, the loop is not idle. .. seealso:: :func:`sleep` """ hub = _get_hub_noargs() with hub.loop.idle() as watcher: if priority: watcher.priority = priority hub.wait(watcher) def kill(greenlet, exception=GreenletExit): """ Kill greenlet asynchronously. The current greenlet is not unscheduled. .. note:: The method :meth:`Greenlet.kill` method does the same and more (and the same caveats listed there apply here). However, the MAIN greenlet - the one that exists initially - does not have a ``kill()`` method, and neither do any created with :func:`spawn_raw`, so you have to use this function. .. caution:: Use care when killing greenlets. If they are not prepared for exceptions, this could result in corrupted state. .. 
versionchanged:: 1.1a2 If the ``greenlet`` has a :meth:`kill ` method, calls it. This prevents a greenlet from being switched to for the first time after it's been killed but not yet executed. """ if not greenlet.dead: if hasattr(greenlet, 'kill'): # dealing with gevent.greenlet.Greenlet. Use it, especially # to avoid allowing one to be switched to for the first time # after it's been killed greenlet.kill(exception=exception, block=False) else: _get_hub_noargs().loop.run_callback(greenlet.throw, exception) class signal(object): """ signal_handler(signalnum, handler, *args, **kwargs) -> object Call the *handler* with the *args* and *kwargs* when the process receives the signal *signalnum*. The *handler* will be run in a new greenlet when the signal is delivered. This returns an object with the useful method ``cancel``, which, when called, will prevent future deliveries of *signalnum* from calling *handler*. It's best to keep the returned object alive until you call ``cancel``. .. note:: This may not operate correctly with ``SIGCHLD`` if libev child watchers are used (as they are by default with `gevent.os.fork`). See :mod:`gevent.signal` for a more general purpose solution. .. versionchanged:: 1.2a1 The ``handler`` argument is required to be callable at construction time. .. versionchanged:: 20.5.1 The ``cancel`` method now properly cleans up all native resources, and drops references to all the arguments of this function. """ # This is documented as a function, not a class, # so we're free to change implementation details. greenlet_class = None def __init__(self, signalnum, handler, *args, **kwargs): if not callable(handler): raise TypeError("signal handler must be callable.") self.hub = _get_hub_noargs() self.watcher = self.hub.loop.signal(signalnum, ref=False) self.handler = handler self.args = args self.kwargs = kwargs if self.greenlet_class is None: from gevent import Greenlet type(self).greenlet_class = Greenlet self.greenlet_class = Greenlet self.watcher.start(self._start) ref = property( lambda self: self.watcher.ref, lambda self, nv: setattr(self.watcher, 'ref', nv) ) def cancel(self): if self.watcher is not None: self.watcher.stop() # Must close the watcher at a deterministic time, otherwise # when CFFI reclaims the memory, the native loop might still # have some reference to it; if anything tries to touch it # we can wind up writing to memory that is no longer valid, # leading to a wide variety of crashes. self.watcher.close() self.watcher = None self.handler = None self.args = None self.kwargs = None self.hub = None self.greenlet_class = None def _start(self): # TODO: Maybe this should just be Greenlet.spawn()? try: greenlet = self.greenlet_class(self.handle) greenlet.switch() except: # pylint:disable=bare-except self.hub.handle_error(None, *sys._exc_info()) # pylint:disable=no-member def handle(self): try: self.handler(*self.args, **self.kwargs) except: # pylint:disable=bare-except self.hub.handle_error(None, *sys.exc_info()) def reinit(hub=None): """ reinit() -> None Prepare the gevent hub to run in a new (forked) process. This should be called *immediately* after :func:`os.fork` in the child process. This is done automatically by :func:`gevent.os.fork` or if the :mod:`os` module has been monkey-patched. If this function is not called in a forked process, symptoms may include hanging of functions like :func:`socket.getaddrinfo`, and the hub's threadpool is unlikely to work. .. 
note:: Registered fork watchers may or may not run before this function (and thus ``gevent.os.fork``) return. If they have not run, they will run "soon", after an iteration of the event loop. You can force this by inserting a few small (but non-zero) calls to :func:`sleep` after fork returns. (As of gevent 1.1 and before, fork watchers will not have run, but this may change in the future.) .. note:: This function may be removed in a future major release if the fork process can be more smoothly managed. .. warning:: See remarks in :func:`gevent.os.fork` about greenlets and event loop watchers in the child process. """ # Note the signature line in the docstring: hub is not a public param. # The loop reinit function in turn calls libev's ev_loop_fork # function. hub = _get_hub() if hub is None else hub if hub is None: return # Note that we reinit the existing loop, not destroy it. # See https://github.com/gevent/gevent/issues/200. hub.loop.reinit() # libev's fork watchers are slow to fire because the only fire # at the beginning of a loop; due to our use of callbacks that # run at the end of the loop, that may be too late. The # threadpool and resolvers depend on the fork handlers being # run (specifically, the threadpool will fail in the forked # child if there were any threads in it, which there will be # if the resolver_thread was in use (the default) before the # fork.) # # If the forked process wants to use the threadpool or # resolver immediately (in a queued callback), it would hang. # # The below is a workaround. Fortunately, all of these # methods are idempotent and can be called multiple times # following a fork if the suddenly started working, or were # already working on some platforms. Other threadpools and fork handlers # will be called at an arbitrary time later ('soon') for obj in (hub._threadpool, hub._resolver, hub.periodic_monitoring_thread): getattr(obj, '_on_fork', lambda: None)() # TODO: We'd like to sleep for a non-zero amount of time to force the loop to make a # pass around before returning to this greenlet. That will allow any # user-provided fork watchers to run. (Two calls are necessary.) HOWEVER, if # we do this, certain tests that heavily mix threads and forking, # like 2.7/test_threading:test_reinit_tls_after_fork, fail. It's not immediately clear # why. #sleep(0.00001) #sleep(0.00001) class Hub(WaitOperationsGreenlet): """ A greenlet that runs the event loop. It is created automatically by :func:`get_hub`. .. rubric:: Switching Every time this greenlet (i.e., the event loop) is switched *to*, if the current greenlet has a ``switch_out`` method, it will be called. This allows a greenlet to take some cleanup actions before yielding control. This method should not call any gevent blocking functions. """ #: If instances of these classes are raised into the event loop, #: they will be propagated out to the main greenlet (where they will #: usually be caught by Python itself) SYSTEM_ERROR = (KeyboardInterrupt, SystemExit, SystemError) #: Instances of these classes are not considered to be errors and #: do not get logged/printed when raised by the event loop. NOT_ERROR = (GreenletExit, SystemExit) #: The size we use for our threadpool. Either use a subclass #: for this, or change it immediately after creating the hub. threadpool_size = 10 # An instance of PeriodicMonitoringThread, if started. periodic_monitoring_thread = None # The ident of the thread we were created in, which should be the # thread that we run in. 
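# A minimal sketch of customizing the class attributes above, assuming a
# stock gevent install ('QuietHub' is an invented name).  A Hub subclass
# installed via set_default_hub_class() before the first hub is created can,
# for example, keep a chosen exception type out of the default error log and
# change the threadpool size.
from gevent.hub import Hub, set_default_hub_class

class QuietHub(Hub):
    NOT_ERROR = Hub.NOT_ERROR + (ConnectionResetError,)
    threadpool_size = 20

set_default_hub_class(QuietHub)   # must run before the first get_hub() call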
thread_ident = None #: A string giving the name of this hub. Useful for associating hubs #: with particular threads. Printed as part of the default repr. #: #: .. versionadded:: 1.3b1 name = '' # NOTE: We cannot define a class-level 'loop' attribute # because that conflicts with the slot we inherit from the # Cythonized-bases. # This is the source for our 'minimal_ident' property. We don't use a # IdentRegistry because we've seen some crashes having to do with # clearing weak references on shutdown in Windows (see known_failures.py). # This gives us slightly different semantics than a greenlet's minimal_ident # (notably, there can be holes) but we never documented this object's minimal_ident, # and there should be few enough hub's over the lifetime of a process so as not # to matter much. _hub_counter = 0 def __init__(self, loop=None, default=None): WaitOperationsGreenlet.__init__(self, None, None) self.thread_ident = get_thread_ident() if hasattr(loop, 'run'): if default is not None: raise TypeError("Unexpected argument: default") self.loop = loop elif get_loop() is not None: # Reuse a loop instance previously set by # destroying a hub without destroying the associated # loop. See #237 and #238. self.loop = get_loop() else: if default is None and self.thread_ident != MAIN_THREAD_IDENT: default = False if loop is None: loop = self.backend self.loop = self.loop_class(flags=loop, default=default) # pylint:disable=not-callable self._resolver = None self._threadpool = None self.format_context = GEVENT_CONFIG.format_context Hub._hub_counter += 1 self.minimal_ident = Hub._hub_counter @Lazy def ident_registry(self): return IdentRegistry() @property def loop_class(self): return GEVENT_CONFIG.loop @property def backend(self): return GEVENT_CONFIG.libev_backend @property def main_hub(self): """ Is this the hub for the main thread? .. versionadded:: 1.3b1 """ return self.thread_ident == MAIN_THREAD_IDENT def __repr__(self): if self.loop is None: info = 'destroyed' else: try: info = self.loop._format() except Exception as ex: # pylint:disable=broad-except info = str(ex) or repr(ex) or 'error' result = '<%s %r at 0x%x %s' % ( self.__class__.__name__, self.name, id(self), info) if self._resolver is not None: result += ' resolver=%r' % self._resolver if self._threadpool is not None: result += ' threadpool=%r' % self._threadpool result += ' thread_ident=%s' % (hex(self.thread_ident), ) return result + '>' def _normalize_exception(self, t, v, tb): # Allow passing in all None if the caller doesn't have # easy access to sys.exc_info() if (t, v, tb) == (None, None, None): t, v, tb = sys.exc_info() if isinstance(v, str): # Cython can raise errors where the value is a plain string # e.g., AttributeError, "_semaphore.Semaphore has no attr", v = t(v) return t, v, tb def handle_error(self, context, type, value, tb): """ Called by the event loop when an error occurs. The default action is to print the exception to the :attr:`exception stream `. The arguments ``type``, ``value``, and ``tb`` are the standard tuple as returned by :func:`sys.exc_info`. (Note that when this is called, it may not be safe to call :func:`sys.exc_info`.) Errors that are :attr:`not errors ` are not printed. Errors that are :attr:`system errors ` are passed to :meth:`handle_system_error` after being printed. Applications can set a property on the hub instance with this same signature to override the error handling provided by this class. This is an advanced usage and requires great care. This function *must not* raise any exceptions. 
:param context: If this is ``None``, indicates a system error that should generally result in exiting the loop and being thrown to the parent greenlet. """ type, value, tb = self._normalize_exception(type, value, tb) if type is HubDestroyed: # We must continue propagating this for it to properly # exit. reraise(type, value, tb) if not issubclass(type, self.NOT_ERROR): self.print_exception(context, type, value, tb) if context is None or issubclass(type, self.SYSTEM_ERROR): self.handle_system_error(type, value, tb) def handle_system_error(self, type, value, tb=None): """ Called from `handle_error` when the exception type is determined to be a :attr:`system error `. System errors cause the exception to be raised in the main greenlet (the parent of this hub). .. versionchanged:: 20.5.1 Allow passing the traceback to associate with the exception if it is rethrown into the main greenlet. """ current = getcurrent() if current is self or current is self.parent or self.loop is None: self.parent.throw(type, value, tb) else: # in case system error was handled and life goes on # switch back to this greenlet as well cb = None try: cb = self.loop.run_callback(current.switch) except: # pylint:disable=bare-except traceback.print_exc(file=self.exception_stream) try: self.parent.throw(type, value, tb) finally: if cb is not None: cb.stop() @readproperty def exception_stream(self): """ The stream to which exceptions will be written. Defaults to ``sys.stderr`` unless assigned. Assigning a false (None) value disables printing exceptions. .. versionadded:: 1.2a1 """ # Unwrap any FileObjectThread we have thrown around sys.stderr # (because it can't be used in the hub). Tricky because we are # called in error situations when it's not safe to import. # Be careful not to access sys if we're in the process of interpreter # shutdown. stderr = sys.stderr if sys else None # pylint:disable=using-constant-test if type(stderr).__name__ == 'FileObjectThread': stderr = stderr.io # pylint:disable=no-member return stderr def print_exception(self, context, t, v, tb): # Python 3 does not gracefully handle None value or tb in # traceback.print_exception() as previous versions did. # pylint:disable=no-member errstream = self.exception_stream if not errstream: # pragma: no cover # If the error stream is gone, such as when the sys dict # gets cleared during interpreter shutdown, # don't cause follow-on errors. # See https://github.com/gevent/gevent/issues/1295 return t, v, tb = self._normalize_exception(t, v, tb) if v is None: errstream.write('%s\n' % t.__name__) else: traceback.print_exception(t, v, tb, file=errstream) del tb try: errstream.write(gmctime()) errstream.write(' ' if context is not None else '\n') except: # pylint:disable=bare-except # Possible not safe to import under certain # error conditions in Python 2 pass if context is not None: if not isinstance(context, str): try: context = self.format_context(context) except: # pylint:disable=bare-except traceback.print_exc(file=self.exception_stream) context = repr(context) errstream.write('%s failed with %s\n\n' % (context, getattr(t, '__name__', 'exception'), )) def run(self): """ Entry-point to running the loop. This method is called automatically when the hub greenlet is scheduled; do not call it directly. :raises gevent.exceptions.LoopExit: If the loop finishes running. This means that there are no other scheduled greenlets, and no active watchers or servers. In some situations, this indicates a programming error. 
""" assert self is getcurrent(), 'Do not call Hub.run() directly' self.start_periodic_monitoring_thread() while 1: loop = self.loop loop.error_handler = self try: loop.run() finally: loop.error_handler = None # break the refcount cycle # This function must never return, as it will cause # switch() in the parent greenlet to return an unexpected # value. This can show up as unexpected failures e.g., # from Waiters raising AssertionError or MulitpleWaiter # raising invalid IndexError. # # It is still possible to kill this greenlet with throw. # However, in that case switching to it is no longer safe, # as switch will return immediately. # # Note that there's a problem with simply doing # ``self.parent.throw()`` and never actually exiting this # greenlet: The greenlet tends to stay alive. This is # because throwing the exception captures stack frames # (regardless of what we do with the argument) and those # get saved. In addition to this object having # ``gr_frame`` pointing to this method, which contains # ``self``, which points to the parent, and both of which point to # an internal thread state dict that points back to the current greenlet for the thread, # which is likely to be the parent: a cycle. # # We can't have ``join()`` tell us to finish, because we # need to be able to resume after this throw. The only way # to dispose of the greenlet is to use ``self.destroy()``. debug = [] if hasattr(loop, 'debug'): debug = loop.debug() loop = None self.parent.throw(LoopExit('This operation would block forever', self, debug)) # Execution could resume here if another blocking API call is made # in the same thread and the hub hasn't been destroyed, so clean # up anything left. debug = None def start_periodic_monitoring_thread(self): if self.periodic_monitoring_thread is None and GEVENT_CONFIG.monitor_thread: # Note that it is possible for one real thread to # (temporarily) wind up with multiple monitoring threads, # if hubs are started and stopped within the thread. This shows up # in the threadpool tests. The monitoring threads will eventually notice their # hub object is gone. from gevent._monitor import PeriodicMonitoringThread from gevent.events import PeriodicMonitorThreadStartedEvent from gevent.events import notify_and_call_entry_points self.periodic_monitoring_thread = PeriodicMonitoringThread(self) if self.main_hub: self.periodic_monitoring_thread.install_monitor_memory_usage() notify_and_call_entry_points(PeriodicMonitorThreadStartedEvent( self.periodic_monitoring_thread)) return self.periodic_monitoring_thread def join(self, timeout=None): """ Wait for the event loop to finish. Exits only when there are no more spawned greenlets, started servers, active timeouts or watchers. .. caution:: This doesn't clean up all resources associated with the hub. For that, see :meth:`destroy`. :param float timeout: If *timeout* is provided, wait no longer than the specified number of seconds. :return: `True` if this method returns because the loop finished execution. Or `False` if the timeout expired. """ assert getcurrent() is self.parent, "only possible from the MAIN greenlet" if self.dead: return True waiter = Waiter(self) if timeout is not None: timeout = self.loop.timer(timeout, ref=False) timeout.start(waiter.switch, None) try: try: # Switch to the hub greenlet and let it continue. # Since we're the parent greenlet of the hub, when it exits # by `parent.throw(LoopExit)`, control will resume here. 
# If the timer elapses, however, ``waiter.switch()`` is called and # again control resumes here, but without an exception. waiter.get() except LoopExit: # Control will immediately be returned to this greenlet. return True finally: # Clean up as much junk as we can. There is a small cycle in the frames, # and it won't be GC'd. # this greenlet -> this frame # this greenlet -> the exception that was thrown # the exception that was thrown -> a bunch of other frames, including this frame. # some frame calling self.run() -> self del waiter # this frame -> waiter -> self del self # this frame -> self if timeout is not None: timeout.stop() timeout.close() del timeout return False def destroy(self, destroy_loop=None): """ Destroy this hub and clean up its resources. If you manually create hubs, or you use a hub or the gevent blocking API from multiple native threads, you *should* call this method before disposing of the hub object reference. Ideally, this should be called from the same thread running the hub, but it can be called from other threads after that thread has exited. Once this is done, it is impossible to continue running the hub. Attempts to use the blocking gevent API with pre-existing objects from this native thread and bound to this hub will fail. .. versionchanged:: 20.5.1 Attempt to ensure that Python stack frames and greenlets referenced by this hub are cleaned up. This guarantees that switching to the hub again is not safe after this. (It was never safe, but it's even less safe.) Note that this only works if the hub is destroyed in the same thread it is running in. If the hub is destroyed by a different thread after a ``fork()``, for example, expect some garbage to leak. """ if destroy_loop is None: destroy_loop = not self.loop.default if self.periodic_monitoring_thread is not None: self.periodic_monitoring_thread.kill() self.periodic_monitoring_thread = None if self._resolver is not None: self._resolver.close() del self._resolver if self._threadpool is not None: self._threadpool.kill() del self._threadpool # Let the frame be cleaned up by causing the run() function to # exit. This is the only way to guarantee that the hub itself # and the main greenlet, if this was a secondary thread, get # cleaned up. Otherwise there are likely to be reference # cycles still around. We MUST do this before we destroy the # loop; if we destroy the loop and then switch into the hub, # things will go VERY, VERY wrong (because we will have destroyed # the C datastructures in the middle of the C function that's # using them; the best we can hope for is a segfault). try: self.throw(HubDestroyed(destroy_loop)) except LoopExit: # Expected. pass except GreenletError: # Must be coming from a different thread. # Note that python stack frames are likely to leak # in this case. pass if destroy_loop: if get_loop() is self.loop: # Don't let anyone try to reuse this set_loop(None) self.loop.destroy() else: # Store in case another hub is created for this # thread. set_loop(self.loop) self.loop = None if _get_hub() is self: set_hub(None) # XXX: We can probably simplify the resolver and threadpool properties. 
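# A minimal sketch of the join()/destroy() contract above, assuming a stock
# gevent install.  It matters most for code that uses gevent from short-lived
# native threads: each such thread gets its own hub and should dispose of it
# before exiting.
import threading
import gevent
from gevent.hub import get_hub

def thread_main():
    gevent.spawn(gevent.sleep, 0.01)
    hub = get_hub()
    assert hub.join()      # True: the loop ran out of work
    hub.destroy()          # free the loop and threadpool owned by this thread

t = threading.Thread(target=thread_main)
t.start()
t.join()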
@property def resolver_class(self): return GEVENT_CONFIG.resolver def _get_resolver(self): if self._resolver is None: self._resolver = self.resolver_class(hub=self) # pylint:disable=not-callable return self._resolver def _set_resolver(self, value): self._resolver = value def _del_resolver(self): self._resolver = None resolver = property(_get_resolver, _set_resolver, _del_resolver, """ The DNS resolver that the socket functions will use. .. seealso:: :doc:`/dns` """) @property def threadpool_class(self): return GEVENT_CONFIG.threadpool def _get_threadpool(self): if self._threadpool is None: # pylint:disable=not-callable self._threadpool = self.threadpool_class( self.threadpool_size, hub=self, idle_task_timeout=GEVENT_CONFIG.threadpool_idle_task_timeout ) return self._threadpool def _set_threadpool(self, value): self._threadpool = value def _del_threadpool(self): self._threadpool = None threadpool = property(_get_threadpool, _set_threadpool, _del_threadpool, """ The threadpool associated with this hub. Usually this is a :class:`gevent.threadpool.ThreadPool`, but you :attr:`can customize that `. Use this object to schedule blocking (non-cooperative) operations in a different thread to prevent them from halting the event loop. """) set_default_hub_class(Hub) class linkproxy(object): __slots__ = ['callback', 'obj'] def __init__(self, callback, obj): self.callback = callback self.obj = obj def __call__(self, *args): callback = self.callback obj = self.obj self.callback = None self.obj = None callback(obj) gevent-24.11.1/src/gevent/libev/000077500000000000000000000000001471441230600163325ustar00rootroot00000000000000gevent-24.11.1/src/gevent/libev/__init__.py000066400000000000000000000002511471441230600204410ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import division from __future__ import print_function # Nothing public here __all__ = [] gevent-24.11.1/src/gevent/libev/_corecffi_build.py000066400000000000000000000077331471441230600220140ustar00rootroot00000000000000# pylint: disable=no-member # This module is only used to create and compile the gevent._corecffi module; # nothing should be directly imported from it except `ffi`, which should only be # used for `ffi.compile()`; programs should import gevent._corecfffi. # However, because we are using "out-of-line" mode, it is necessary to examine # this file to know what functions are created and available on the generated # module. from __future__ import absolute_import, print_function import sys import os import os.path # pylint:disable=no-name-in-module from cffi import FFI sys.path.append(".") try: import _setuplibev import _setuputils except ImportError: print("This file must be imported with setup.py in the current working dir.") raise thisdir = os.path.dirname(os.path.abspath(__file__)) parentdir = os.path.abspath(os.path.join(thisdir, '..')) setup_dir = os.path.abspath(os.path.join(thisdir, '..', '..', '..')) __all__ = [] ffi = FFI() distutils_ext = _setuplibev.build_extension() def read_source(name): # pylint:disable=unspecified-encoding with open(os.path.join(thisdir, name), 'r') as f: return f.read() # cdef goes to the cffi library and determines what can be used in # Python. _cdef = read_source('_corecffi_cdef.c') # These defines and uses help keep the C file readable and lintable by # C tools. 
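# A minimal sketch of how the module produced by this builder is consumed,
# assuming a build that actually uses the CFFI libev loop (for example PyPy);
# on CPython the Cython core is normally used instead, so the import below
# may not be available.  The names come from _corecffi_cdef.c via the usual
# CFFI 'lib' attribute.
from gevent.libev import _corecffi

print(_corecffi.lib.ev_version_major(),
      _corecffi.lib.ev_version_minor())   # declared in the cdef
print(bool(_corecffi.lib.LIBEV_EMBED))    # whether libev was embedded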
_cdef = _cdef.replace('#define GEVENT_STRUCT_DONE int', '') _cdef = _cdef.replace("GEVENT_STRUCT_DONE _;", '...;') _cdef = _cdef.replace('#define GEVENT_ST_NLINK_T int', 'typedef int... nlink_t;') _cdef = _cdef.replace('GEVENT_ST_NLINK_T', 'nlink_t') if _setuplibev.LIBEV_EMBED: # Arrange access to the loop internals _cdef += """ struct ev_loop { int backend_fd; int activecnt; ...; }; """ # arrange to be configured. _setuputils.ConfiguringBuildExt.gevent_add_pre_run_action(distutils_ext.configure) if sys.platform.startswith('win'): # We must have the vfd_open, etc, functions on # Windows. But on other platforms, going through # CFFI to just return the file-descriptor is slower # than just doing it in Python, so we check for and # workaround their absence in corecffi.py _cdef += """ typedef int... vfd_socket_t; int vfd_open(vfd_socket_t); vfd_socket_t vfd_get(int); void vfd_free(int); """ # source goes to the C compiler _source = read_source('_corecffi_source.c') macros = list(distutils_ext.define_macros) try: # We need the data pointer. macros.remove(('EV_COMMON', '')) except ValueError: pass ffi.cdef(_cdef) ffi.set_source( 'gevent.libev._corecffi', _source, include_dirs=distutils_ext.include_dirs + [ thisdir, # "libev.h" parentdir, # _ffi/alloc.c ], define_macros=macros, undef_macros=distutils_ext.undef_macros, libraries=distutils_ext.libraries, ) if __name__ == '__main__': # XXX: Note, on Windows, we would need to specify the external libraries # that should be linked in, such as ws2_32 and (because libev_vfd.h makes # Python.h calls) the proper Python library---at least for PyPy. I never got # that to work though, and calling python functions is strongly discouraged # from CFFI code. # On macOS to make the non-embedded case work correctly, against # our local copy of libev: # # 1) configure and make libev # 2) CPPFLAGS=-Ideps/libev/ LDFLAGS=-Ldeps/libev/.libs GEVENTSETUP_EMBED_LIBEV=0 \ # python setup.py build_ext -i # 3) export DYLD_LIBRARY_PATH=`pwd`/deps/libev/.libs # # The DYLD_LIBRARY_PATH is because the linker hard-codes # /usr/local/lib/libev.4.dylib in the corecffi.so dylib, because # that's the "install name" of the libev dylib that was built. # Adding a -rpath to the LDFLAGS doesn't change things. # This can be fixed with `install_name_tool`: # # 3) install_name_tool -change /usr/local/lib/libev.4.dylib \ # `pwd`/deps/libev/.libs/libev.4.dylib \ # src/gevent/libev/_corecffi.abi3.so ffi.compile(verbose=True) gevent-24.11.1/src/gevent/libev/_corecffi_cdef.c000066400000000000000000000155151471441230600214050ustar00rootroot00000000000000/* access whether we built embedded or not */ #define LIBEV_EMBED ... /* libev interface */ #define EV_MINPRI ... #define EV_MAXPRI ... #define EV_VERSION_MAJOR ... #define EV_VERSION_MINOR ... #define EV_UNDEF ... #define EV_NONE ... #define EV_READ ... #define EV_WRITE ... #define EV__IOFDSET ... #define EV_TIMER ... #define EV_PERIODIC ... #define EV_SIGNAL ... #define EV_CHILD ... #define EV_STAT ... #define EV_IDLE ... #define EV_PREPARE ... #define EV_CHECK ... #define EV_EMBED ... #define EV_FORK ... #define EV_CLEANUP ... #define EV_ASYNC ... #define EV_CUSTOM ... #define EV_ERROR ... #define EVFLAG_AUTO ... #define EVFLAG_NOENV ... #define EVFLAG_FORKCHECK ... #define EVFLAG_NOINOTIFY ... #define EVFLAG_SIGNALFD ... #define EVFLAG_NOSIGMASK ... #define EVBACKEND_SELECT ... #define EVBACKEND_POLL ... #define EVBACKEND_EPOLL ... #define EVBACKEND_KQUEUE ... #define EVBACKEND_DEVPOLL ... #define EVBACKEND_PORT ... 
#define EVBACKEND_LINUXAIO ... #define EVBACKEND_IOURING ... /* #define EVBACKEND_IOCP ... */ #define EVBACKEND_ALL ... #define EVBACKEND_MASK ... #define EVRUN_NOWAIT ... #define EVRUN_ONCE ... #define EVBREAK_CANCEL ... #define EVBREAK_ONE ... #define EVBREAK_ALL ... /* markers for the CFFI parser. Replaced when the string is read. */ #define GEVENT_STRUCT_DONE int #define GEVENT_ST_NLINK_T int /* Note that we don't declare the ev_loop struct and fields here. */ /* If we don't embed libev, we can't access those fields, libev */ /* keeps it opaque. */ // Watcher types // base for all watchers struct ev_watcher{ void* data; GEVENT_STRUCT_DONE _; }; struct ev_io { int fd; int events; void* data; GEVENT_STRUCT_DONE _; }; struct ev_timer { double at; void* data; GEVENT_STRUCT_DONE _; }; struct ev_signal { void* data; GEVENT_STRUCT_DONE _; }; struct ev_idle { void* data; GEVENT_STRUCT_DONE _; }; struct ev_prepare { void* data; GEVENT_STRUCT_DONE _; }; struct ev_check { void* data; GEVENT_STRUCT_DONE _; }; struct ev_fork { void* data; GEVENT_STRUCT_DONE _; }; struct ev_async { void* data; GEVENT_STRUCT_DONE _; }; struct ev_child { int pid; int rpid; int rstatus; void* data; GEVENT_STRUCT_DONE _; }; struct stat { GEVENT_ST_NLINK_T st_nlink; GEVENT_STRUCT_DONE _; }; struct ev_stat { struct stat attr; const char* path; struct stat prev; double interval; void* data; GEVENT_STRUCT_DONE _; }; typedef double ev_tstamp; int ev_version_major(); int ev_version_minor(); unsigned int ev_supported_backends (void); unsigned int ev_recommended_backends (void); unsigned int ev_embeddable_backends (void); ev_tstamp ev_time (void); void ev_set_syserr_cb(void *); void ev_set_userdata(struct ev_loop*, void*); void* ev_userdata(struct ev_loop*); int ev_priority(void*); void ev_set_priority(void*, int); int ev_is_pending(void*); int ev_is_active(void*); void ev_io_init(struct ev_io*, void* callback, int fd, int events); void ev_io_start(struct ev_loop*, struct ev_io*); void ev_io_stop(struct ev_loop*, struct ev_io*); void ev_feed_event(struct ev_loop*, void*, int); void ev_feed_fd_event(struct ev_loop*, int fd, int events); void ev_timer_init(struct ev_timer*, void *callback, double, double); void ev_timer_start(struct ev_loop*, struct ev_timer*); void ev_timer_stop(struct ev_loop*, struct ev_timer*); void ev_timer_again(struct ev_loop*, struct ev_timer*); void ev_signal_init(struct ev_signal*, void* callback, int); void ev_signal_start(struct ev_loop*, struct ev_signal*); void ev_signal_stop(struct ev_loop*, struct ev_signal*); void ev_idle_init(struct ev_idle*, void* callback); void ev_idle_start(struct ev_loop*, struct ev_idle*); void ev_idle_stop(struct ev_loop*, struct ev_idle*); void ev_prepare_init(struct ev_prepare*, void* callback); void ev_prepare_start(struct ev_loop*, struct ev_prepare*); void ev_prepare_stop(struct ev_loop*, struct ev_prepare*); void ev_check_init(struct ev_check*, void* callback); void ev_check_start(struct ev_loop*, struct ev_check*); void ev_check_stop(struct ev_loop*, struct ev_check*); void ev_fork_init(struct ev_fork*, void* callback); void ev_fork_start(struct ev_loop*, struct ev_fork*); void ev_fork_stop(struct ev_loop*, struct ev_fork*); void ev_async_init(struct ev_async*, void* callback); void ev_async_start(struct ev_loop*, struct ev_async*); void ev_async_stop(struct ev_loop*, struct ev_async*); void ev_async_send(struct ev_loop*, struct ev_async*); int ev_async_pending(struct ev_async*); void ev_child_init(struct ev_child*, void* callback, int, int); void 
ev_child_start(struct ev_loop*, struct ev_child*); void ev_child_stop(struct ev_loop*, struct ev_child*); void ev_stat_init(struct ev_stat*, void* callback, char*, double); void ev_stat_start(struct ev_loop*, struct ev_stat*); void ev_stat_stop(struct ev_loop*, struct ev_stat*); struct ev_loop *ev_default_loop (unsigned int flags); struct ev_loop* ev_loop_new(unsigned int flags); void ev_loop_destroy(struct ev_loop*); void ev_loop_fork(struct ev_loop*); int ev_is_default_loop (struct ev_loop *); unsigned int ev_iteration(struct ev_loop*); unsigned int ev_depth(struct ev_loop*); unsigned int ev_backend(struct ev_loop*); void ev_verify(struct ev_loop*); void ev_run(struct ev_loop*, int flags); ev_tstamp ev_now (struct ev_loop *); void ev_now_update (struct ev_loop *); /* update event loop time */ void ev_ref(struct ev_loop*); void ev_unref(struct ev_loop*); void ev_break(struct ev_loop*, int); unsigned int ev_pending_count(struct ev_loop*); struct ev_loop* gevent_ev_default_loop(unsigned int flags); void gevent_install_sigchld_handler(); void gevent_reset_sigchld_handler(); extern void (*gevent_noop)(struct ev_loop *_loop, struct ev_timer *w, int revents); void ev_sleep (ev_tstamp delay); /* sleep for a while */ /* gevent callbacks */ /* These will be created as static functions at the end of the * _source.c and must be declared there too. */ extern "Python" { int python_callback(void* handle, int revents); void python_handle_error(void* handle, int revents); void python_stop(void* handle); void python_check_callback(struct ev_loop*, void*, int); void python_prepare_callback(struct ev_loop*, void*, int); // libev specific void _syserr_cb(char*); } /* * We use a single C callback for every watcher type, which in turn calls the * Python callbacks. The ev_watcher pointer type can be used for every watcher type * because they all start with the same members---libev itself relies on this. Each * watcher types has a 'void* data' that stores the CFFI handle to the Python watcher * object. */ static void _gevent_generic_callback(struct ev_loop* loop, struct ev_watcher* watcher, int revents); static void gevent_zero_check(struct ev_check* handle); static void gevent_zero_timer(struct ev_timer* handle); static void gevent_zero_prepare(struct ev_prepare* handle); static void gevent_set_ev_alloc(); gevent-24.11.1/src/gevent/libev/_corecffi_source.c000066400000000000000000000046731471441230600220070ustar00rootroot00000000000000/* passed to the real C compiler */ #ifndef LIBEV_EMBED /* We're normally used to embed libev, assume that */ /* When this is defined, libev.h includes ev.c */ #define LIBEV_EMBED 1 #endif #ifdef _WIN32 #define EV_STANDALONE 1 #include "libev_vfd.h" #endif #include "libev.h" static void _gevent_noop(struct ev_loop *_loop, struct ev_timer *w, int revents) { } void (*gevent_noop)(struct ev_loop *, struct ev_timer *, int) = &_gevent_noop; static int python_callback(void* handle, int revents); static void python_handle_error(void* handle, int revents); static void python_stop(void* handle); static void _gevent_generic_callback(struct ev_loop* loop, struct ev_watcher* watcher, int revents) { void* handle = watcher->data; int cb_result = python_callback(handle, revents); switch(cb_result) { case -1: // in case of exception, call self.loop.handle_error; // this function is also responsible for stopping the watcher // and allowing memory to be freed python_handle_error(handle, revents); break; case 1: // Code to stop the event. 
Note that if python_callback // has disposed of the last reference to the handle, // `watcher` could now be invalid/disposed memory! if (!ev_is_active(watcher)) { python_stop(handle); } break; case 2: // watcher is already stopped and dead, nothing to do. break; default: fprintf(stderr, "WARNING: gevent: Unexpected return value %d from Python callback " "for watcher %p and handle %p\n", cb_result, watcher, handle); // XXX: Possible leaking of resources here? Should we be // closing the watcher? } } static void gevent_zero_timer(struct ev_timer* handle) { memset(handle, 0, sizeof(struct ev_timer)); } static void gevent_zero_check(struct ev_check* handle) { memset(handle, 0, sizeof(struct ev_check)); } static void gevent_zero_prepare(struct ev_prepare* handle) { memset(handle, 0, sizeof(struct ev_prepare)); } #include "_ffi/alloc.c" static void gevent_set_ev_alloc() { void* (*ptr)(void*, long); ptr = (void*(*)(void*, long))&gevent_realloc; ev_set_allocator(ptr); } #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wunreachable-code" #endif gevent-24.11.1/src/gevent/libev/callbacks.c000066400000000000000000000166561471441230600204330ustar00rootroot00000000000000/* Copyright (c) 2011-2012 Denis Bilenko. See LICENSE for details. */ #include #include "Python.h" #include "ev.h" #include "corecext.h" #include "callbacks.h" #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wunused-parameter" #endif #ifdef Py_PYTHON_H /* define gevent_realloc with libev semantics */ #include "../_ffi/alloc.c" #if PY_MAJOR_VERSION >= 3 #define PyInt_FromLong PyLong_FromLong #endif #ifndef CYTHON_INLINE #if defined(__clang__) #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) #elif defined(__GNUC__) #define CYTHON_INLINE __inline__ #elif defined(_MSC_VER) #define CYTHON_INLINE __inline #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L #define CYTHON_INLINE inline #else #define CYTHON_INLINE #endif #endif #define GGIL_DECLARE PyGILState_STATE ___save #define GGIL_ENSURE ___save = PyGILState_Ensure(); #define GGIL_RELEASE PyGILState_Release(___save); static CYTHON_INLINE void gevent_check_signals(struct PyGeventLoopObject* loop) { if (!ev_is_default_loop(loop->_ptr)) { /* only reporting signals on the default loop */ return; } PyErr_CheckSignals(); if (PyErr_Occurred()) { gevent_handle_error(loop, Py_None); } } #define GET_OBJECT(PY_TYPE, EV_PTR, MEMBER) \ ((struct PY_TYPE *)(((char *)EV_PTR) - offsetof(struct PY_TYPE, MEMBER))) void gevent_noop(struct ev_loop* loop, void* watcher, int revents) {} static void gevent_stop(PyObject* watcher, struct PyGeventLoopObject* loop) { PyObject *result, *method; int error; error = 1; method = PyObject_GetAttrString(watcher, "stop"); if (method) { result = PyObject_Call(method, _empty_tuple, NULL); if (result) { Py_DECREF(result); error = 0; } Py_DECREF(method); } if (error) { assert(PyErr_Occurred()); gevent_handle_error(loop, watcher); } } static void gevent_callback(struct PyGeventLoopObject* loop, PyObject* callback, PyObject* args, PyObject* watcher, void *c_watcher, int revents) { GGIL_DECLARE; PyObject *result, *py_events; long length; py_events = 0; GGIL_ENSURE; Py_INCREF(loop); Py_INCREF(callback); Py_INCREF(args); Py_INCREF(watcher); gevent_check_signals(loop); if (args == Py_None) { args = _empty_tuple; } length = PyTuple_Size(args); if (length < 0) { /* returns -1 and sets an error if args isn't a tuple. 
*/ assert(PyErr_Occurred()); gevent_handle_error(loop, watcher); goto end; } if (length > 0 && PyTuple_GET_ITEM(args, 0) == GEVENT_CORE_EVENTS) { py_events = PyInt_FromLong(revents); if (!py_events) { gevent_handle_error(loop, watcher); goto end; } PyTuple_SET_ITEM(args, 0, py_events); } else { py_events = NULL; } result = PyObject_Call(callback, args, NULL); if (result) { Py_DECREF(result); } else { assert(PyErr_Occurred()); gevent_handle_error(loop, watcher); if (revents & (EV_READ|EV_WRITE)) { /* io watcher: not stopping it may cause the failing callback to be called repeatedly */ gevent_stop(watcher, loop); goto end; } } if (!ev_is_active(c_watcher)) { /* Watcher was stopped, maybe by libev. Let's call stop() to clean up * 'callback' and 'args' properties, do Py_DECREF() and ev_ref() if necessary. * BTW, we don't need to check for EV_ERROR, because libev stops the watcher in that case. */ gevent_stop(watcher, loop); } end: if (py_events) { Py_DECREF(py_events); PyTuple_SET_ITEM(args, 0, GEVENT_CORE_EVENTS); } Py_DECREF(watcher); Py_DECREF(args); Py_DECREF(callback); Py_DECREF(loop); GGIL_RELEASE; } void gevent_call(struct PyGeventLoopObject* loop, struct PyGeventCallbackObject* cb) { /* no need for GIL here because it is only called from run_callbacks which already has GIL */ PyObject *result, *callback, *args; if (!loop || !cb) return; callback = cb->callback; args = cb->args; if (!callback || !args) return; if (callback == Py_None || args == Py_None) return; Py_INCREF(loop); Py_INCREF(callback); Py_INCREF(args); Py_INCREF(Py_None); Py_DECREF(cb->callback); cb->callback = Py_None; /* you are not allowed to use PyObject_Call() with an error pending; debug builds crash with an assertion. How do we get here with an error pending? good question. It's been seen in 3.12 before we stopped using CYTHON_FAST_THREAD_STATE. Hopefully changing that makes this dead code. */ if (PyErr_Occurred()) { /* by now, cb->callback is None, so using it as the context doesn't produce a useful output. The callable object is more helpful. Writing unraisable clears the error, unless it gets an error of its own. */ PyErr_WriteUnraisable(callback); PyErr_Clear(); } assert(!PyErr_Occurred()); result = PyObject_Call(callback, args, NULL); if (result) { Py_DECREF(result); } else { assert(PyErr_Occurred()); gevent_handle_error(loop, (PyObject*)cb); } Py_INCREF(Py_None); Py_DECREF(cb->args); cb->args = Py_None; Py_DECREF(callback); Py_DECREF(args); Py_DECREF(loop); } /* * PyGeventWatcherObject is the first member of all the structs, so * it is the same in all of them and they can all safely be cast to * it. We could also use the *data member of the libev watcher objects. 
*/ #undef DEFINE_CALLBACK #define DEFINE_CALLBACK(WATCHER_LC, WATCHER_TYPE) \ void gevent_callback_##WATCHER_LC(struct ev_loop *_loop, void *c_watcher, int revents) { \ struct PyGeventWatcherObject* watcher = (struct PyGeventWatcherObject*)GET_OBJECT(PyGevent##WATCHER_TYPE##Object, c_watcher, _watcher); \ gevent_callback(watcher->loop, watcher->_callback, watcher->args, (PyObject*)watcher, c_watcher, revents); \ } DEFINE_CALLBACKS void gevent_run_callbacks(struct ev_loop *_loop, void *watcher, int revents) { struct PyGeventLoopObject* loop; PyObject *result; GGIL_DECLARE; GGIL_ENSURE; loop = GET_OBJECT(PyGeventLoopObject, watcher, _prepare); Py_INCREF(loop); gevent_check_signals(loop); result = gevent_loop_run_callbacks(loop); if (result) { Py_DECREF(result); } else { /* For reasons that are unclear, PyErr_WriteUnraisable, which invokes sys.unraisablehook, isn't safe to call here, at least in some cases. test__pool.TestCoroutinePool.test_stderr_raising fails its timeout if we use that. Instead, we use PyErr_Print, which doesn't have any context, but doesn't hang either. It calls sys.excepthook. */ PyErr_Print(); PyErr_Clear(); } Py_DECREF(loop); GGIL_RELEASE; } /* This is only used on Win32 */ void gevent_periodic_signal_check(struct ev_loop *_loop, void *watcher, int revents) { GGIL_DECLARE; GGIL_ENSURE; gevent_check_signals(GET_OBJECT(PyGeventLoopObject, watcher, _periodic_signal_checker)); GGIL_RELEASE; } #undef GGIL_DECLARE #undef GGIL_ENSURE #undef GGIL_RELEASE #endif /* Py_PYTHON_H */ #ifdef __clang__ #pragma clang diagnostic pop #endif gevent-24.11.1/src/gevent/libev/callbacks.h000066400000000000000000000025241471441230600204250ustar00rootroot00000000000000struct ev_loop; struct PyGeventLoopObject; struct PyGeventCallbackObject; #define DEFINE_CALLBACK(WATCHER_LC, WATCHER_TYPE) \ void gevent_callback_##WATCHER_LC(struct ev_loop *, void *, int); #define DEFINE_CALLBACKS0 \ DEFINE_CALLBACK(io, IO); \ DEFINE_CALLBACK(timer, Timer); \ DEFINE_CALLBACK(signal, Signal); \ DEFINE_CALLBACK(idle, Idle); \ DEFINE_CALLBACK(prepare, Prepare); \ DEFINE_CALLBACK(check, Check); \ DEFINE_CALLBACK(fork, Fork); \ DEFINE_CALLBACK(async, Async); \ DEFINE_CALLBACK(stat, Stat); \ DEFINE_CALLBACK(child, Child); #define DEFINE_CALLBACKS DEFINE_CALLBACKS0 DEFINE_CALLBACKS void gevent_run_callbacks(struct ev_loop *, void *, int); void gevent_call(struct PyGeventLoopObject* loop, struct PyGeventCallbackObject* cb); void* gevent_realloc(void* ptr, size_t size); void gevent_noop(struct ev_loop*, void* watcher, int revents); /* Only used on Win32 */ void gevent_periodic_signal_check(struct ev_loop *, void *, int); // We're included in corecext.c. Disable a bunch of annoying warnings // that are in the generated code that we can't do anything about. #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wdeprecated-declarations" #pragma clang diagnostic ignored "-Wunreachable-code" #endif gevent-24.11.1/src/gevent/libev/corecext.pyx000066400000000000000000001357761471441230600207330ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. # This first directive, supported in Cython 0.24+, causes sources # files to be *much* smaller when it's false (139,027 LOC vs 35,000 # LOC) and thus cythonpp.py (and probably the compiler; also Visual C # has limits on source file sizes) to be faster (73s vs 46s). But it does # make debugging more difficult. 
Auto-pickling was added in 0.26, and # that's a new feature that we don't need or want to allow in a gevent # point release. # cython: emit_code_comments=False, auto_pickle=False, language_level=3str # NOTE: We generally cannot use the Cython IF directive as documented # at # http://cython.readthedocs.io/en/latest/src/userguide/language_basics.html#conditional-compilation # (e.g., IF UNAME_SYSNAME == "Windows") because when Cython says # "compilation", it means when *Cython* compiles, not when the C # compiler compiles. We distribute an sdist with a single pre-compiled # C file for all platforms so that end users that don't use a binary # wheel don't have to sit through cythonpp and other steps the Makefile does. # See https://github.com/gevent/gevent/issues/1076 # We compile in 3str mode, which should mean we get absolute import # by default. from __future__ import absolute_import cimport cython cimport libev from cpython.ref cimport Py_INCREF from cpython.ref cimport Py_DECREF from cpython.mem cimport PyMem_Malloc from cpython.mem cimport PyMem_Free from cpython.object cimport PyObject from cpython.exc cimport PyErr_NormalizeException from cpython.exc cimport PyErr_WriteUnraisable from libc.errno cimport errno from cpython cimport PyErr_Fetch from cpython cimport PyErr_Occurred from cpython cimport PyObject from cpython cimport PyErr_Clear cdef extern from "Python.h": int Py_ReprEnter(object) void Py_ReprLeave(object) int PyException_SetTraceback(PyObject* ex, PyObject* tb) except * import sys import os import traceback import signal as signalmodule from gevent import getswitchinterval from gevent.exceptions import HubDestroyed __all__ = ['get_version', 'get_header_version', 'supported_backends', 'recommended_backends', 'embeddable_backends', 'time', 'loop'] cdef extern from "callbacks.h": void gevent_callback_io(libev.ev_loop, void*, int) void gevent_callback_timer(libev.ev_loop, void*, int) void gevent_callback_signal(libev.ev_loop, void*, int) void gevent_callback_idle(libev.ev_loop, void*, int) void gevent_callback_prepare(libev.ev_loop, void*, int) void gevent_callback_check(libev.ev_loop, void*, int) void gevent_callback_fork(libev.ev_loop, void*, int) void gevent_callback_async(libev.ev_loop, void*, int) void gevent_callback_child(libev.ev_loop, void*, int) void gevent_callback_stat(libev.ev_loop, void*, int) void gevent_run_callbacks(libev.ev_loop, void*, int) void gevent_periodic_signal_check(libev.ev_loop, void*, int) void gevent_call(loop, callback) void gevent_noop(libev.ev_loop, void*, int) void* gevent_realloc(void*, long size) cdef extern from "stathelper.c": object _pystat_fromstructstat(void*) UNDEF = libev.EV_UNDEF NONE = libev.EV_NONE READ = libev.EV_READ WRITE = libev.EV_WRITE TIMER = libev.EV_TIMER PERIODIC = libev.EV_PERIODIC SIGNAL = libev.EV_SIGNAL CHILD = libev.EV_CHILD STAT = libev.EV_STAT IDLE = libev.EV_IDLE PREPARE = libev.EV_PREPARE CHECK = libev.EV_CHECK EMBED = libev.EV_EMBED FORK = libev.EV_FORK CLEANUP = libev.EV_CLEANUP ASYNC = libev.EV_ASYNC CUSTOM = libev.EV_CUSTOM ERROR = libev.EV_ERROR READWRITE = libev.EV_READ | libev.EV_WRITE MINPRI = libev.EV_MINPRI MAXPRI = libev.EV_MAXPRI BACKEND_SELECT = libev.EVBACKEND_SELECT BACKEND_POLL = libev.EVBACKEND_POLL BACKEND_EPOLL = libev.EVBACKEND_EPOLL BACKEND_KQUEUE = libev.EVBACKEND_KQUEUE BACKEND_DEVPOLL = libev.EVBACKEND_DEVPOLL BACKEND_PORT = libev.EVBACKEND_PORT BACKEND_LINUXAIO = libev.EVBACKEND_LINUXAIO BACKEND_IOURING = libev.EVBACKEND_IOURING FORKCHECK = libev.EVFLAG_FORKCHECK NOINOTIFY = 
libev.EVFLAG_NOINOTIFY SIGNALFD = libev.EVFLAG_SIGNALFD NOSIGMASK = libev.EVFLAG_NOSIGMASK @cython.internal cdef class _EVENTSType: def __repr__(self): return 'gevent.core.EVENTS' cdef public object GEVENT_CORE_EVENTS = _EVENTSType() EVENTS = GEVENT_CORE_EVENTS def get_version(): return 'libev-%d.%02d' % (libev.ev_version_major(), libev.ev_version_minor()) def get_header_version(): return 'libev-%d.%02d' % (libev.EV_VERSION_MAJOR, libev.EV_VERSION_MINOR) # This list backends in the order they are actually tried by libev, # as defined in loop_init. The names must be lower case. _flags = tuple([ # IOCP (libev.EVBACKEND_PORT, 'port'), (libev.EVBACKEND_KQUEUE, 'kqueue'), (libev.EVBACKEND_IOURING, 'linux_iouring'), (libev.EVBACKEND_LINUXAIO, "linux_aio"), (libev.EVBACKEND_EPOLL, 'epoll'), (libev.EVBACKEND_POLL, 'poll'), (libev.EVBACKEND_SELECT, 'select'), (libev.EVFLAG_NOENV, 'noenv'), (libev.EVFLAG_FORKCHECK, 'forkcheck'), (libev.EVFLAG_NOINOTIFY, 'noinotify'), (libev.EVFLAG_SIGNALFD, 'signalfd'), (libev.EVFLAG_NOSIGMASK, 'nosigmask') ]) _flags_str2int = { string: flag for flag, string in _flags } _events = tuple([ (libev.EV_READ, 'READ'), (libev.EV_WRITE, 'WRITE'), (libev.EV__IOFDSET, '_IOFDSET'), (libev.EV_PERIODIC, 'PERIODIC'), (libev.EV_SIGNAL, 'SIGNAL'), (libev.EV_CHILD, 'CHILD'), (libev.EV_STAT, 'STAT'), (libev.EV_IDLE, 'IDLE'), (libev.EV_PREPARE, 'PREPARE'), (libev.EV_CHECK, 'CHECK'), (libev.EV_EMBED, 'EMBED'), (libev.EV_FORK, 'FORK'), (libev.EV_CLEANUP, 'CLEANUP'), (libev.EV_ASYNC, 'ASYNC'), (libev.EV_CUSTOM, 'CUSTOM'), (libev.EV_ERROR, 'ERROR') ]) cpdef _flags_to_list(unsigned int flags): cdef list result = [] for code, value in _flags: if flags & code: result.append(value) flags &= ~code if not flags: break if flags: result.append(flags) return result cpdef unsigned int _flags_to_int(object flags) except *: # Note, that order does not matter, libev has its own predefined order if not flags: return 0 if isinstance(flags, int): return flags cdef unsigned int result = 0 if isinstance(flags, str): flags = flags.split(',') try: for value in flags: value = value.strip().lower() if value: result |= _flags_str2int[value] except KeyError as ex: # XXX: Cython 3.1a1 crashes on handling exceptions. It's trying to # decref a value it previously set to NULL. 
raise ValueError('Invalid backend or flag: %s\nPossible values: %s' % ( ex, ', '.join(sorted(_flags_str2int.keys())))) return result cdef str _str_hex(object flag): if isinstance(flag, int): return hex(flag) return str(flag) cpdef _check_flags(unsigned int flags): cdef list as_list flags &= libev.EVBACKEND_MASK if not flags: return if not (flags & libev.EVBACKEND_ALL): raise ValueError('Invalid value for backend: 0x%x' % flags) if not (flags & libev.ev_supported_backends()): as_list = [_str_hex(x) for x in _flags_to_list(flags)] raise ValueError('Unsupported backend: %s' % '|'.join(as_list)) cpdef _events_to_str(int events): cdef list result = [] cdef int c_flag for (flag, string) in _events: c_flag = flag if events & c_flag: result.append(string) events = events & (~c_flag) if not events: break if events: result.append(hex(events)) return '|'.join(result) def supported_backends(): return _flags_to_list(libev.ev_supported_backends()) def recommended_backends(): return _flags_to_list(libev.ev_recommended_backends()) def embeddable_backends(): return _flags_to_list(libev.ev_embeddable_backends()) def time(): return libev.ev_time() cdef bint _check_loop(loop loop) except -1: if not loop._ptr: raise ValueError('operation on destroyed loop') return 1 cdef public class callback [object PyGeventCallbackObject, type PyGeventCallback_Type]: cdef public object callback cdef public tuple args cdef callback next def __init__(self, callback, args): self.callback = callback self.args = args def stop(self): self.callback = None self.args = None close = stop # Note, that __bool__ and pending are different # bool is used in contexts where we need to know whether to schedule another callback, # so it's true if it's pending or currently running # 'pending' has the same meaning as libev watchers: it is cleared before entering callback def __bool__(self): # it's bool if it's pending or currently executing return self.args is not None @property def pending(self): return self.callback is not None def __repr__(self): if Py_ReprEnter(self) != 0: return "<...>" try: format = self._format() result = "<%s at 0x%x%s" % (self.__class__.__name__, id(self), format) if self.pending: result += " pending" if self.callback is not None: result += " callback=%r" % (self.callback, ) if self.args is not None: result += " args=%r" % (self.args, ) if self.callback is None and self.args is None: result += " stopped" return result + ">" finally: Py_ReprLeave(self) def _format(self): return '' # See comments in cares.pyx about DEF constants and when to use # what kind. 
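# CALLBACK_CHECK_COUNT, defined in the inline C block just below, is the
# batch size used by loop._run_callbacks() further down: after every 50
# callbacks it refreshes the loop's idea of "now" and, if more than
# getswitchinterval() seconds have passed since callback processing began,
# it stops early and lets libev poll again (re-arming _timer0 so the
# remaining callbacks run on a later iteration). Roughly, as a sketch and
# not the exact code:
#
#     deadline = ev_now(ptr) + getswitchinterval()
#     while callbacks:
#         run(callbacks.popleft())
#         count -= 1
#         if count == 0 and callbacks:
#             count = CALLBACK_CHECK_COUNT
#             ev_now_update(ptr)
#             if ev_now(ptr) >= deadline:
#                 break  # the rest run on the next loop iteration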
cdef extern from *: """ #define CALLBACK_CHECK_COUNT 50 """ int CALLBACK_CHECK_COUNT @cython.final @cython.internal cdef class CallbackFIFO(object): cdef callback head cdef callback tail def __init__(self): self.head = None self.tail = None cdef inline clear(self): self.head = None self.tail = None cdef inline callback popleft(self): cdef callback head = self.head self.head = head.next if self.head is self.tail or self.head is None: self.tail = None head.next = None return head cdef inline append(self, callback new_tail): assert not new_tail.next if self.tail is None: if self.head is None: # Completely empty, so this # is now our head self.head = new_tail return self.tail = self.head assert self.head is not None old_tail = self.tail old_tail.next = new_tail self.tail = new_tail def __bool__(self): return self.head is not None def __len__(self): cdef Py_ssize_t count = 0 head = self.head while head is not None: count += 1 head = head.next return count def __iter__(self): cdef list objects = [] head = self.head while head is not None: objects.append(head) head = head.next return iter(objects) cdef bint has_callbacks(self): return self.head def __repr__(self): return "" % (id(self), len(self), self.head, self.tail) cdef public class loop [object PyGeventLoopObject, type PyGeventLoop_Type]: ## embedded struct members cdef libev.ev_prepare _prepare cdef libev.ev_timer _timer0 cdef libev.ev_async _threadsafe_async # We'll only actually start this timer if we're on Windows, # but it doesn't hurt to compile it in on all platforms. cdef libev.ev_timer _periodic_signal_checker ## pointer members cdef public object error_handler cdef libev.ev_loop* _ptr cdef public CallbackFIFO _callbacks ## data members cdef bint starting_timer_may_update_loop_time # We must capture the 'default' state at initialiaztion # time. Destroying the default loop in libev sets # the libev internal pointer to 0, and ev_is_default_loop will # no longer work. cdef bint _default cdef readonly double approx_timer_resolution def __cinit__(self, object flags=None, object default=None, libev.intptr_t ptr=0): self.starting_timer_may_update_loop_time = 0 self._default = 0 libev.ev_prepare_init(&self._prepare, gevent_run_callbacks) libev.ev_timer_init(&self._periodic_signal_checker, gevent_periodic_signal_check, 0.3, 0.3) libev.ev_timer_init(&self._timer0, gevent_noop, 0.0, 0.0) libev.ev_async_init(&self._threadsafe_async, gevent_noop) cdef unsigned int c_flags if ptr: self._ptr = ptr self._default = libev.ev_is_default_loop(self._ptr) else: c_flags = _flags_to_int(flags) _check_flags(c_flags) c_flags |= libev.EVFLAG_NOENV c_flags |= libev.EVFLAG_FORKCHECK if default is None: default = True if default: self._default = 1 self._ptr = libev.gevent_ev_default_loop(c_flags) if not self._ptr: raise SystemError("ev_default_loop(%s) failed" % (c_flags, )) if sys.platform == "win32": libev.ev_timer_start(self._ptr, &self._periodic_signal_checker) libev.ev_unref(self._ptr) else: self._ptr = libev.ev_loop_new(c_flags) if not self._ptr: raise SystemError("ev_loop_new(%s) failed" % (c_flags, )) if default or SYSERR_CALLBACK is None: set_syserr_cb(self._handle_syserr) # Mark as not destroyed libev.ev_set_userdata(self._ptr, self._ptr) libev.ev_prepare_start(self._ptr, &self._prepare) libev.ev_unref(self._ptr) libev.ev_async_start(self._ptr, &self._threadsafe_async) libev.ev_unref(self._ptr) def __init__(self, object flags=None, object default=None, libev.intptr_t ptr=0): self._callbacks = CallbackFIFO() # See libev.corecffi for this attribute. 
self.approx_timer_resolution = 0.00001 cdef _run_callbacks(self): cdef callback cb cdef int count = CALLBACK_CHECK_COUNT self.starting_timer_may_update_loop_time = True cdef libev.ev_tstamp now = libev.ev_now(self._ptr) cdef libev.ev_tstamp expiration = now + getswitchinterval() cdef object cb_callable # for printing later assert not PyErr_Occurred() try: libev.ev_timer_stop(self._ptr, &self._timer0) while self._callbacks.head is not None: cb = self._callbacks.popleft() libev.ev_unref(self._ptr) # On entry, this will set cb.callback to None, # changing cb.pending from True to False; on exit, # this will set cb.args to None, changing bool(cb) # from True to False. # XXX: Why is this a C callback, not cython? cb_callable = cb.callback gevent_call(self, cb) if PyErr_Occurred(): # Exceptions should not escape gevent_call, # but just in case... # note we don't use gevent_handle_error here, between # running callbacks is a fairly fragile state and # that directs back up to the hub and user code. PyErr_WriteUnraisable(cb_callable) PyErr_Clear() cb_callable = None count -= 1 if count == 0 and self._callbacks.head is not None: # We still have more to run but we've reached # the end of one check group count = CALLBACK_CHECK_COUNT libev.ev_now_update(self._ptr) if libev.ev_now(self._ptr) >= expiration: now = 0 break if now != 0: libev.ev_now_update(self._ptr) if self._callbacks.head is not None: libev.ev_timer_start(self._ptr, &self._timer0) finally: self.starting_timer_may_update_loop_time = False cdef _stop_watchers(self, libev.ev_loop* ptr): if not ptr: return if libev.ev_is_active(&self._prepare): libev.ev_ref(ptr) libev.ev_prepare_stop(ptr, &self._prepare) if libev.ev_is_active(&self._periodic_signal_checker): libev.ev_ref(ptr) libev.ev_timer_stop(ptr, &self._periodic_signal_checker) if libev.ev_is_active(&self._threadsafe_async): libev.ev_ref(ptr) libev.ev_async_stop(ptr, &self._threadsafe_async) def destroy(self): cdef libev.ev_loop* ptr = self._ptr self._ptr = NULL if ptr: if not libev.ev_userdata(ptr): # Whoops! Program error. They destroyed the loop, # using a different loop object. Our _ptr is still # valid, but the libev loop is gone. Doing anything # else with it will likely cause a crash. return # Mark as destroyed self._stop_watchers(ptr) libev.ev_set_userdata(ptr, NULL) if SYSERR_CALLBACK == self._handle_syserr: set_syserr_cb(None) libev.ev_loop_destroy(ptr) def __dealloc__(self): cdef libev.ev_loop* ptr = self._ptr self._ptr = NULL if ptr != NULL: if not libev.ev_userdata(ptr): # See destroy(). This is a bug in the caller. 
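# (A NULL userdata pointer means the underlying libev loop was already
# torn down by some other loop object -- destroy() clears it with
# ev_set_userdata(ptr, NULL) -- so the only safe thing left to do here
# is bail out rather than touch freed libev state.)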
return self._stop_watchers(ptr) if not self._default: libev.ev_loop_destroy(ptr) # Mark as destroyed libev.ev_set_userdata(ptr, NULL) @property def ptr(self): return self._ptr @property def WatcherType(self): return watcher @property def MAXPRI(self): return libev.EV_MAXPRI @property def MINPRI(self): return libev.EV_MINPRI def _handle_syserr(self, message, errno): if sys.version_info[0] >= 3: message = message.decode() self.handle_error(None, SystemError, SystemError(message + ': ' + os.strerror(errno)), None) cpdef handle_error(self, context, type, value, tb): cdef object handle_error cdef object error_handler = self.error_handler if type is HubDestroyed: self._callbacks.clear() self.break_() return if error_handler is not None: # we do want to do getattr every time so that setting Hub.handle_error property just works handle_error = getattr(error_handler, 'handle_error', error_handler) handle_error(context, type, value, tb) else: self._default_handle_error(context, type, value, tb) cpdef _default_handle_error(self, context, type, value, tb): # note: Hub sets its own error handler so this is not used by gevent # this is here to make core.loop usable without the rest of gevent traceback.print_exception(type, value, tb) if self._ptr: libev.ev_break(self._ptr, libev.EVBREAK_ONE) def run(self, nowait=False, once=False): _check_loop(self) cdef unsigned int flags = 0 if nowait: flags |= libev.EVRUN_NOWAIT if once: flags |= libev.EVRUN_ONCE with nogil: libev.ev_run(self._ptr, flags) def reinit(self): if self._ptr: libev.ev_loop_fork(self._ptr) def ref(self): _check_loop(self) libev.ev_ref(self._ptr) def unref(self): _check_loop(self) libev.ev_unref(self._ptr) def break_(self, int how=libev.EVBREAK_ONE): _check_loop(self) libev.ev_break(self._ptr, how) def verify(self): _check_loop(self) libev.ev_verify(self._ptr) cpdef libev.ev_tstamp now(self) except *: _check_loop(self) return libev.ev_now(self._ptr) cpdef void update_now(self) except *: _check_loop(self) libev.ev_now_update(self._ptr) update = update_now # Old name, deprecated. def __repr__(self): return '<%s at 0x%x %s>' % (self.__class__.__name__, id(self), self._format()) @property def default(self): # If we're destroyed, we are not the default loop anymore, # as far as Python is concerned. 
return self._default if self._ptr else False @property def iteration(self): _check_loop(self) return libev.ev_iteration(self._ptr) @property def depth(self): _check_loop(self) return libev.ev_depth(self._ptr) @property def backend_int(self): _check_loop(self) return libev.ev_backend(self._ptr) @property def backend(self): _check_loop(self) cdef unsigned int backend = libev.ev_backend(self._ptr) for key, value in _flags: if key == backend: return value return backend @property def pendingcnt(self): _check_loop(self) return libev.ev_pending_count(self._ptr) def io(self, libev.vfd_socket_t fd, int events, ref=True, priority=None): return io(self, fd, events, ref, priority) def closing_fd(self, libev.vfd_socket_t fd): _check_loop(self) cdef int pending_before = libev.ev_pending_count(self._ptr) libev.ev_feed_fd_event(self._ptr, fd, 0xFFFF) cdef int pending_after = libev.ev_pending_count(self._ptr) return pending_after > pending_before def timer(self, double after, double repeat=0.0, ref=True, priority=None): return timer(self, after, repeat, ref, priority) def signal(self, int signum, ref=True, priority=None): return signal(self, signum, ref, priority) def idle(self, ref=True, priority=None): return idle(self, ref, priority) def prepare(self, ref=True, priority=None): return prepare(self, ref, priority) def check(self, ref=True, priority=None): return check(self, ref, priority) def fork(self, ref=True, priority=None): return fork(self, ref, priority) def async_(self, ref=True, priority=None): return async_(self, ref, priority) # cython doesn't enforce async as a keyword async = async_ def child(self, int pid, bint trace=0, ref=True): if sys.platform == 'win32': raise AttributeError("Child watchers are not supported on Windows") return child(self, pid, trace, ref) def install_sigchld(self): libev.gevent_install_sigchld_handler() def reset_sigchld(self): libev.gevent_reset_sigchld_handler() def stat(self, str path, float interval=0.0, ref=True, priority=None): return stat(self, path, interval, ref, priority) def run_callback(self, func, *args): _check_loop(self) cdef callback cb = callback(func, args) self._callbacks.append(cb) libev.ev_ref(self._ptr) return cb def run_callback_threadsafe(self, func, *args): # We rely on the GIL to make this threadsafe. 
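# Illustrative use (a sketch -- the loop name is whatever the caller has
# in hand): from a thread other than the one running this loop,
#
#     loop.run_callback_threadsafe(some_callable, arg1, arg2)
#
# queues the callback exactly like run_callback() and then fires
# self._threadsafe_async so a loop that is currently blocked polling
# wakes up and processes it.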
cb = self.run_callback(func, *args) libev.ev_async_send(self._ptr, &self._threadsafe_async) return cb def _format(self): if not self._ptr: return 'destroyed' cdef object msg = self.backend if self._default: msg += ' default' msg += ' pending=%s' % self.pendingcnt msg += self._format_details() return msg def _format_details(self): cdef str msg = '' cdef object fileno = self.fileno() cdef object activecnt = None try: activecnt = self.activecnt except AttributeError: pass if activecnt is not None: msg += ' ref=' + repr(activecnt) if fileno is not None: msg += ' fileno=' + repr(fileno) return msg def fileno(self): cdef int fd if self._ptr: fd = libev.gevent_ev_loop_backend_fd(self._ptr) if fd >= 0: return fd @property def activecnt(self): _check_loop(self) return libev.gevent_ev_loop_activecnt(self._ptr) @property def sig_pending(self): _check_loop(self) return libev.gevent_ev_loop_sig_pending(self._ptr) @property def origflags(self): return _flags_to_list(self.origflags_int) @property def origflags_int(self): _check_loop(self) return libev.gevent_ev_loop_origflags(self._ptr) @property def sigfd(self): _check_loop(self) fd = libev.gevent_ev_loop_sigfd(self._ptr) if fd >= 0: return fd # Explicitly not EV_USE_SIGNALFD raise AttributeError("sigfd") from zope.interface import classImplements # XXX: This invokes the side-table lookup, we would # prefer to have it stored directly on the class. That means we # need a class variable ``__implemented__``, but that's hard in # Cython from gevent._interfaces import ILoop from gevent._interfaces import ICallback classImplements(loop, ILoop) classImplements(callback, ICallback) cdef extern from *: """ #define FLAG_WATCHER_OWNS_PYREF (1 << 0) /* 0x1 */ #define FLAG_WATCHER_NEEDS_EVREF (1 << 1) /* 0x2 */ #define FLAG_WATCHER_UNREF_BEFORE_START (1 << 2) /* 0x4 */ #define FLAG_WATCHER_MASK_UNREF_NEEDS_REF 0x6 """ # about readonly _flags attribute: # bit #1 set if object owns Python reference to itself (Py_INCREF was # called and we must call Py_DECREF later) unsigned int FLAG_WATCHER_OWNS_PYREF # bit #2 set if ev_unref() was called and we must call ev_ref() later unsigned int FLAG_WATCHER_NEEDS_EVREF # bit #3 set if user wants to call ev_unref() before start() unsigned int FLAG_WATCHER_UNREF_BEFORE_START # bits 2 and 3 are *both* set when we are active, but the user # request us not to be ref'd anymore. 
We unref us (because going active will # ref us) and then make a note of this in the future unsigned int FLAG_WATCHER_MASK_UNREF_NEEDS_REF cdef void _python_incref(watcher self): if not self._flags & FLAG_WATCHER_OWNS_PYREF: Py_INCREF(self) self._flags |= FLAG_WATCHER_OWNS_PYREF cdef void _python_decref(watcher self): if self._flags & FLAG_WATCHER_OWNS_PYREF: Py_DECREF(self) self._flags &= ~FLAG_WATCHER_OWNS_PYREF cdef void _libev_ref(watcher self): if self._flags & FLAG_WATCHER_NEEDS_EVREF: libev.ev_ref(self.loop._ptr) self._flags &= ~FLAG_WATCHER_NEEDS_EVREF cdef void _libev_unref(watcher self): if self._flags & FLAG_WATCHER_MASK_UNREF_NEEDS_REF == FLAG_WATCHER_UNREF_BEFORE_START: libev.ev_unref(self.loop._ptr) self._flags |= FLAG_WATCHER_NEEDS_EVREF ctypedef void (*start_stop_func)(libev.ev_loop*, void*) nogil cdef struct start_and_stop: start_stop_func start start_stop_func stop cdef start_and_stop make_ss(void* start, void* stop): cdef start_and_stop result = start_and_stop(start, stop) return result cdef bint _watcher_start(watcher self, object callback, tuple args) except -1: # This method should be called by subclasses of watcher, if they # override the python-level `start` function: they've already paid # for argument unpacking, and `start` cannot be cpdef since it # uses varargs. # We keep this as a function, not a cdef method of watcher. # If it's a cdef method, it could potentially be overridden # by a subclass, which means that the watcher gains a pointer to a # function table (vtable), making each object 8 bytes larger. _check_loop(self.loop) if callback is None or not callable(callback): raise TypeError("Expected callable, not %r" % (callback, )) self._callback = callback self.args = args _libev_unref(self) _python_incref(self) self._w_ss.start(self.loop._ptr, self._w_watcher) return 1 cdef public class watcher [object PyGeventWatcherObject, type PyGeventWatcher_Type]: """Abstract base class for all the watchers""" ## pointer members cdef public loop loop cdef object _callback cdef public tuple args # By keeping a _w_watcher cached, the size of the io and timer # structs becomes 152 bytes and child is 160 and stat is 512 (when # the start_and_stop is inlined). On 64-bit macOS CPython 2.7. I # hoped that using libev's data pointer and allocating the # watchers directly and not as inline members would result in # overall savings thanks to better padding, but it didn't. And it # added lots of casts, making the code ugly. # Table: # gevent ver | 1.2 | This | +data # Watcher Kind | | | # Timer | 120 | 152 | 160 # IO | 120 | 152 | 160 # Child | 128 | 160 | 168 # Stat | 480 | 512 | 512 cdef libev.ev_watcher* _w_watcher # By inlining the start_and_stop struct, instead of taking the address # of a static struct or using the watcher's data pointer, we # use an additional pointer of memory and incur an additional pointer copy # on creation. # But we use fewer pointer accesses for start/stop, and they have # better cache locality. (Then again, we're bigger). # Right now we're going for size, so we use the pointer. IO/Timer objects # are then 144 bytes. cdef start_and_stop* _w_ss ## Int members # Our subclasses will declare the ev_X struct # as an inline member. This is good for locality, but # probably bad for alignment, as it will get tacked on # immediately after our data. # But all ev_watchers start with some ints, so maybe we can help that # out by putting our ints here. 
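# To make the flag bits above concrete, here is the approximate life of a
# watcher created with ref=False:
#
#   __init__  -> _flags = FLAG_WATCHER_UNREF_BEFORE_START (0x4)
#   start()   -> _libev_unref() sees only 0x4 set (of the 0x6 mask), calls
#                ev_unref(loop) and adds FLAG_WATCHER_NEEDS_EVREF (0x2);
#                _python_incref() then adds FLAG_WATCHER_OWNS_PYREF (0x1)
#   stop()    -> _libev_ref() undoes the ev_unref and clears 0x2;
#                _python_decref() drops the self-reference and clears 0x1
#
# so an inactive ref=False watcher ends up holding just 0x4 again.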
cdef readonly unsigned int _flags def __init__(self, loop loop, ref=True, priority=None): if not self._w_watcher or not self._w_ss.start or not self._w_ss.stop: raise ValueError("Cannot construct a bare watcher") self.loop = loop self._flags = 0 if ref else FLAG_WATCHER_UNREF_BEFORE_START if priority is not None: libev.ev_set_priority(self._w_watcher, priority) @property def ref(self): return False if self._flags & 4 else True @ref.setter def ref(self, object value): _check_loop(self.loop) if value: # self.ref should be true after this. if self.ref: return # ref is already True if self._flags & FLAG_WATCHER_NEEDS_EVREF: # ev_unref was called, undo libev.ev_ref(self.loop._ptr) # do not want unref, no outstanding unref self._flags &= ~FLAG_WATCHER_MASK_UNREF_NEEDS_REF else: # self.ref must be false after this if not self.ref: return # ref is already False self._flags |= FLAG_WATCHER_UNREF_BEFORE_START if not self._flags & FLAG_WATCHER_NEEDS_EVREF and libev.ev_is_active(self._w_watcher): libev.ev_unref(self.loop._ptr) self._flags |= FLAG_WATCHER_NEEDS_EVREF @property def callback(self): return self._callback @callback.setter def callback(self, object callback): if callback is not None and not callable(callback): raise TypeError("Expected callable, not %r" % (callback, )) self._callback = callback @property def priority(self): return libev.ev_priority(self._w_watcher) @priority.setter def priority(self, int priority): cdef libev.ev_watcher* w = self._w_watcher if libev.ev_is_active(w): raise AttributeError("Cannot set priority of an active watcher") libev.ev_set_priority(w, priority) @property def active(self): return True if libev.ev_is_active(self._w_watcher) else False @property def pending(self): return True if libev.ev_is_pending(self._w_watcher) else False def start(self, object callback, *args): _watcher_start(self, callback, args) def stop(self): _check_loop(self.loop) _libev_ref(self) # The callback cannot possibly fire while we are executing, # so this is safe. 
self._callback = None self.args = None self._w_ss.stop(self.loop._ptr, self._w_watcher) _python_decref(self) def feed(self, int revents, object callback, *args): _check_loop(self.loop) self.callback = callback self.args = args _libev_unref(self) libev.ev_feed_event(self.loop._ptr, self._w_watcher, revents) _python_incref(self) def __repr__(self): if Py_ReprEnter(self) != 0: return "<...>" try: format = self._format() result = "<%s at 0x%x native=0x%x%s" % ( self.__class__.__name__, id(self), self._w_watcher, format ) if self.active: result += " active" if self.pending: result += " pending" if self.callback is not None: result += " callback=%r" % (self.callback, ) if self.args is not None: result += " args=%r" % (self.args, ) return result + ">" finally: Py_ReprLeave(self) def _format(self): return '' def close(self): self.stop() def __enter__(self): return self def __exit__(self, t, v, tb): self.close() return cdef start_and_stop io_ss = make_ss(libev.ev_io_start, libev.ev_io_stop) cdef public class io(watcher) [object PyGeventIOObject, type PyGeventIO_Type]: cdef libev.ev_io _watcher def start(self, object callback, *args, pass_events=False): if pass_events: args = (GEVENT_CORE_EVENTS, ) + args _watcher_start(self, callback, args) def __init__(self, loop loop, libev.vfd_socket_t fd, int events, ref=True, priority=None): watcher.__init__(self, loop, ref, priority) def __cinit__(self, loop loop, libev.vfd_socket_t fd, int events, ref=True, priority=None): if fd < 0: raise ValueError('fd must be non-negative: %r' % fd) if events & ~(libev.EV__IOFDSET | libev.EV_READ | libev.EV_WRITE): raise ValueError('illegal event mask: %r' % events) # All the vfd_functions are no-ops on POSIX cdef int vfd = libev.vfd_open(fd) libev.ev_io_init(&self._watcher, gevent_callback_io, vfd, events) self._w_watcher = &self._watcher self._w_ss = &io_ss def __dealloc__(self): libev.vfd_free(self._watcher.fd) @property def fd(self): return libev.vfd_get(self._watcher.fd) @fd.setter def fd(self, long fd): if libev.ev_is_active(&self._watcher): raise AttributeError("'io' watcher attribute 'fd' is read-only while watcher is active") cdef int vfd = libev.vfd_open(fd) libev.vfd_free(self._watcher.fd) libev.ev_io_init(&self._watcher, gevent_callback_io, vfd, self._watcher.events) @property def events(self): return self._watcher.events @events.setter def events(self, int events): if libev.ev_is_active(&self._watcher): raise AttributeError("'io' watcher attribute 'events' is read-only while watcher is active") libev.ev_io_init(&self._watcher, gevent_callback_io, self._watcher.fd, events) @property def events_str(self): return _events_to_str(self._watcher.events) def _format(self): return ' fd=%s events=%s' % (self.fd, self.events_str) cdef start_and_stop timer_ss = make_ss(libev.ev_timer_start, libev.ev_timer_stop) cdef public class timer(watcher) [object PyGeventTimerObject, type PyGeventTimer_Type]: cdef libev.ev_timer _watcher def __cinit__(self, loop loop, double after=0.0, double repeat=0.0, ref=True, priority=None): if repeat < 0.0: raise ValueError("repeat must be positive or zero: %r" % repeat) libev.ev_timer_init(&self._watcher, gevent_callback_timer, after, repeat) self._w_watcher = &self._watcher self._w_ss = &timer_ss def __init__(self, loop loop, double after=0.0, double repeat=0.0, ref=True, priority=None): watcher.__init__(self, loop, ref, priority) def start(self, object callback, *args, update=None): update = update if update is not None else self.loop.starting_timer_may_update_loop_time if update: 
self.loop.update_now() _watcher_start(self, callback, args) @property def at(self): return self._watcher.at # QQQ: add 'after' and 'repeat' properties? def again(self, object callback, *args, update=True): _check_loop(self.loop) self.callback = callback self.args = args _libev_unref(self) if update: libev.ev_now_update(self.loop._ptr) libev.ev_timer_again(self.loop._ptr, &self._watcher) _python_incref(self) cdef start_and_stop signal_ss = make_ss(libev.ev_signal_start, libev.ev_signal_stop) cdef public class signal(watcher) [object PyGeventSignalObject, type PyGeventSignal_Type]: cdef libev.ev_signal _watcher def __cinit__(self, loop loop, int signalnum, ref=True, priority=None): if signalnum < 1 or signalnum >= signalmodule.NSIG: raise ValueError('illegal signal number: %r' % signalnum) # still possible to crash on one of libev's asserts: # 1) "libev: ev_signal_start called with illegal signal number" # EV_NSIG might be different from signal.NSIG on some platforms # 2) "libev: a signal must not be attached to two different loops" # we probably could check that in LIBEV_EMBED mode, but not in general libev.ev_signal_init(&self._watcher, gevent_callback_signal, signalnum) self._w_watcher = &self._watcher self._w_ss = &signal_ss def __init__(self, loop loop, int signalnum, ref=True, priority=None): watcher.__init__(self, loop, ref, priority) cdef start_and_stop idle_ss = make_ss(libev.ev_idle_start, libev.ev_idle_stop) cdef public class idle(watcher) [object PyGeventIdleObject, type PyGeventIdle_Type]: cdef libev.ev_idle _watcher def __cinit__(self, loop loop, ref=True, priority=None): libev.ev_idle_init(&self._watcher, gevent_callback_idle) self._w_watcher = &self._watcher self._w_ss = &idle_ss cdef start_and_stop prepare_ss = make_ss(libev.ev_prepare_start, libev.ev_prepare_stop) cdef public class prepare(watcher) [object PyGeventPrepareObject, type PyGeventPrepare_Type]: cdef libev.ev_prepare _watcher def __cinit__(self, loop loop, ref=True, priority=None): libev.ev_prepare_init(&self._watcher, gevent_callback_prepare) self._w_watcher = &self._watcher self._w_ss = &prepare_ss cdef start_and_stop check_ss = make_ss(libev.ev_check_start, libev.ev_check_stop) cdef public class check(watcher) [object PyGeventCheckObject, type PyGeventCheck_Type]: cdef libev.ev_check _watcher def __cinit__(self, loop loop, ref=True, priority=None): libev.ev_check_init(&self._watcher, gevent_callback_check) self._w_watcher = &self._watcher self._w_ss = &check_ss cdef start_and_stop fork_ss = make_ss(libev.ev_fork_start, libev.ev_fork_stop) cdef public class fork(watcher) [object PyGeventForkObject, type PyGeventFork_Type]: cdef libev.ev_fork _watcher def __cinit__(self, loop loop, ref=True, priority=None): libev.ev_fork_init(&self._watcher, gevent_callback_fork) self._w_watcher = &self._watcher self._w_ss = &fork_ss cdef start_and_stop async_ss = make_ss(libev.ev_async_start, libev.ev_async_stop) cdef public class async_(watcher) [object PyGeventAsyncObject, type PyGeventAsync_Type]: cdef libev.ev_async _watcher @property def pending(self): # Note the use of ev_async_pending instead of ev_is_pending return True if libev.ev_async_pending(&self._watcher) else False def __cinit__(self, loop loop, ref=True, priority=None): libev.ev_async_init(&self._watcher, gevent_callback_async) self._w_watcher = &self._watcher self._w_ss = &async_ss def send(self): _check_loop(self.loop) libev.ev_async_send(self.loop._ptr, &self._watcher) def send_ignoring_arg(self, _ignored): return self.send() async = async_ cdef 
start_and_stop child_ss = make_ss(libev.ev_child_start, libev.ev_child_stop) cdef public class child(watcher) [object PyGeventChildObject, type PyGeventChild_Type]: cdef libev.ev_child _watcher def __cinit__(self, loop loop, int pid, bint trace=0, ref=True): if sys.platform == 'win32': raise AttributeError("Child watchers are not supported on Windows") if not loop.default: raise TypeError('child watchers are only available on the default loop') libev.gevent_install_sigchld_handler() libev.ev_child_init(&self._watcher, gevent_callback_child, pid, trace) self._w_watcher = &self._watcher self._w_ss = &child_ss def __init__(self, loop loop, int pid, bint trace=0, ref=True): watcher.__init__(self, loop, ref, None) def _format(self): return ' pid=%r rstatus=%r' % (self.pid, self.rstatus) @property def pid(self): return self._watcher.pid @property def rpid(self): return self._watcher.rpid @rpid.setter def rpid(self, int value): self._watcher.rpid = value @property def rstatus(self): return self._watcher.rstatus @rstatus.setter def rstatus(self, int value): self._watcher.rstatus = value cdef start_and_stop stat_ss = make_ss(libev.ev_stat_start, libev.ev_stat_stop) cdef public class stat(watcher) [object PyGeventStatObject, type PyGeventStat_Type]: cdef libev.ev_stat _watcher cdef readonly str path cdef readonly bytes _paths def __cinit__(self, loop loop, str path, float interval=0.0, ref=True, priority=None): self.path = path cdef bytes paths if isinstance(path, unicode): # the famous Python3 filesystem encoding debacle hits us here. Can we do better? # We must keep a reference to the encoded string so that its bytes don't get freed # and overwritten, leading to strange errors from libev ("no such file or directory") paths = (path).encode(sys.getfilesystemencoding()) self._paths = paths else: paths = path self._paths = paths libev.ev_stat_init(&self._watcher, gevent_callback_stat, paths, interval) self._w_watcher = &self._watcher self._w_ss = &stat_ss def __init__(self, loop loop, str path, float interval=0.0, ref=True, priority=None): watcher.__init__(self, loop, ref, priority) @property def attr(self): if not self._watcher.attr.st_nlink: return return _pystat_fromstructstat(&self._watcher.attr) @property def prev(self): if not self._watcher.prev.st_nlink: return return _pystat_fromstructstat(&self._watcher.prev) @property def interval(self): return self._watcher.interval cdef object SYSERR_CALLBACK = None cdef void _syserr_cb(char* msg) noexcept with gil: try: SYSERR_CALLBACK(msg, errno) except: set_syserr_cb(None) print_exc = getattr(traceback, 'print_exc', None) if print_exc is not None: print_exc() cpdef set_syserr_cb(callback): global SYSERR_CALLBACK if callback is None: libev.ev_set_syserr_cb(NULL) SYSERR_CALLBACK = None elif callable(callback): libev.ev_set_syserr_cb(_syserr_cb) SYSERR_CALLBACK = callback else: raise TypeError('Expected callable or None, got %r' % (callback, )) libev.ev_set_allocator(gevent_realloc) LIBEV_EMBED = bool(libev.LIBEV_EMBED) EV_USE_FLOOR = libev.EV_USE_FLOOR EV_USE_CLOCK_SYSCALL = libev.EV_USE_CLOCK_SYSCALL EV_USE_REALTIME = libev.EV_USE_REALTIME EV_USE_MONOTONIC = libev.EV_USE_MONOTONIC EV_USE_NANOSLEEP = libev.EV_USE_NANOSLEEP EV_USE_INOTIFY = libev.EV_USE_INOTIFY EV_USE_SIGNALFD = libev.EV_USE_SIGNALFD EV_USE_EVENTFD = libev.EV_USE_EVENTFD EV_USE_4HEAP = libev.EV_USE_4HEAP # Things used in callbacks.c from traceback import print_exception cdef public void gevent_handle_error(loop loop, object context): cdef PyObject* typep cdef PyObject* valuep cdef 
PyObject* tracebackp cdef object type cdef object value = None cdef object traceback = None # If it was set, this will clear it, and we will own # the references. PyErr_Fetch(&typep, &valuep, &tracebackp) if not typep: return PyErr_NormalizeException(&typep, &valuep, &tracebackp) if tracebackp: PyException_SetTraceback(valuep, tracebackp) # This assignment will do a Py_INCREF # on the value. We already own the reference # returned from PyErr_Fetch, # so we must decref immediately type = typep Py_DECREF(type) if valuep: value = valuep Py_DECREF(value) if tracebackp: traceback = tracebackp Py_DECREF(traceback) # Prior to Cython 3., we relied on Cython printing an # uncaught exception here (because we don't return a Python object, and # we have no except clause). It seems that as-of 3.0b3 at least, # that no longer happens by default; if we want un caught, unraisable exception to be # reported, we need to do so ourself. try: loop.handle_error(context, type, value, traceback) except: # In an except: block, PyErr_Occurred() is actually false. # Cython has captured the exception and moved it around. The # exc_info is available at the python level, but # the C level APIs aren't going to work. In debug builds, # PyErr_WriteUnraisable will crash with an assertion. # # It would be nice to call ``sys.unraisablehook``, but the default # implementation of that requires that the argument be of # a specific private type we cannot construct. print_exception(*sys.exc_info()) cdef public tuple _empty_tuple = () cdef public object gevent_loop_run_callbacks(loop loop): return loop._run_callbacks() gevent-24.11.1/src/gevent/libev/corecffi.py000066400000000000000000000326301471441230600204700ustar00rootroot00000000000000# pylint: disable=too-many-lines, protected-access, redefined-outer-name, not-callable # pylint: disable=no-member from __future__ import absolute_import, print_function import sys # pylint: disable=undefined-all-variable __all__ = [ 'get_version', 'get_header_version', 'supported_backends', 'recommended_backends', 'embeddable_backends', 'time', 'loop', ] from zope.interface import implementer from gevent._interfaces import ILoop from gevent.libev import _corecffi # pylint:disable=no-name-in-module,import-error ffi = _corecffi.ffi # pylint:disable=no-member libev = _corecffi.lib # pylint:disable=no-member if hasattr(libev, 'vfd_open'): # Must be on windows # pylint:disable=c-extension-no-member assert sys.platform.startswith("win"), "vfd functions only needed on windows" vfd_open = libev.vfd_open vfd_free = libev.vfd_free vfd_get = libev.vfd_get else: vfd_open = vfd_free = vfd_get = lambda fd: fd libev.gevent_set_ev_alloc() ##### ## NOTE on Windows: # The C implementation does several things specially for Windows; # a possibly incomplete list is: # # - the loop runs a periodic signal checker; # - the io watcher constructor is different and it has a destructor; # - the child watcher is not defined # # The CFFI implementation does none of these things, and so # is possibly NOT FUNCTIONALLY CORRECT on Win32 ##### from gevent._ffi.loop import AbstractCallbacks from gevent._ffi.loop import assign_standard_callbacks class _Callbacks(AbstractCallbacks): # pylint:disable=arguments-differ,arguments-renamed def python_check_callback(self, *args): # There's a pylint bug (pylint 2.9.3, astroid 2.6.2) that causes pylint to crash # with an AttributeError on certain types of arguments-differ errors # But code in _ffi/loop depends on being able to find the watcher_ptr # argument is the local frame. 
BUT it gets invoked before the function body runs. # Hence the override of _find_watcher_ptr_in_traceback. # pylint:disable=unused-variable _loop, watcher_ptr, _events = args AbstractCallbacks.python_check_callback(self, watcher_ptr) def _find_watcher_ptr_in_traceback(self, tb): if tb is not None: l = tb.tb_frame.f_locals if 'watcher_ptr' in l: return l['watcher_ptr'] if 'args' in l and len(l['args']) == 3: return l['args'][1] return AbstractCallbacks._find_watcher_ptr_in_traceback(self, tb) def python_prepare_callback(self, _loop_ptr, watcher_ptr, _events): AbstractCallbacks.python_prepare_callback(self, watcher_ptr) def _find_loop_from_c_watcher(self, watcher_ptr): loop_handle = ffi.cast('struct ev_watcher*', watcher_ptr).data return self.from_handle(loop_handle) _callbacks = assign_standard_callbacks(ffi, libev, _Callbacks) UNDEF = libev.EV_UNDEF NONE = libev.EV_NONE READ = libev.EV_READ WRITE = libev.EV_WRITE TIMER = libev.EV_TIMER PERIODIC = libev.EV_PERIODIC SIGNAL = libev.EV_SIGNAL CHILD = libev.EV_CHILD STAT = libev.EV_STAT IDLE = libev.EV_IDLE PREPARE = libev.EV_PREPARE CHECK = libev.EV_CHECK EMBED = libev.EV_EMBED FORK = libev.EV_FORK CLEANUP = libev.EV_CLEANUP ASYNC = libev.EV_ASYNC CUSTOM = libev.EV_CUSTOM ERROR = libev.EV_ERROR READWRITE = libev.EV_READ | libev.EV_WRITE MINPRI = libev.EV_MINPRI MAXPRI = libev.EV_MAXPRI BACKEND_PORT = libev.EVBACKEND_PORT BACKEND_KQUEUE = libev.EVBACKEND_KQUEUE BACKEND_EPOLL = libev.EVBACKEND_EPOLL BACKEND_POLL = libev.EVBACKEND_POLL BACKEND_SELECT = libev.EVBACKEND_SELECT FORKCHECK = libev.EVFLAG_FORKCHECK NOINOTIFY = libev.EVFLAG_NOINOTIFY SIGNALFD = libev.EVFLAG_SIGNALFD NOSIGMASK = libev.EVFLAG_NOSIGMASK from gevent._ffi.loop import EVENTS GEVENT_CORE_EVENTS = EVENTS def get_version(): return 'libev-%d.%02d' % (libev.ev_version_major(), libev.ev_version_minor()) def get_header_version(): return 'libev-%d.%02d' % (libev.EV_VERSION_MAJOR, libev.EV_VERSION_MINOR) # This list backends in the order they are actually tried by libev, # as defined in loop_init. The names must be lower case. _flags = [ # IOCP --- not supported/used. 
(libev.EVBACKEND_PORT, 'port'), (libev.EVBACKEND_KQUEUE, 'kqueue'), (libev.EVBACKEND_IOURING, 'linux_iouring'), (libev.EVBACKEND_LINUXAIO, "linux_aio"), (libev.EVBACKEND_EPOLL, 'epoll'), (libev.EVBACKEND_POLL, 'poll'), (libev.EVBACKEND_SELECT, 'select'), (libev.EVFLAG_NOENV, 'noenv'), (libev.EVFLAG_FORKCHECK, 'forkcheck'), (libev.EVFLAG_SIGNALFD, 'signalfd'), (libev.EVFLAG_NOSIGMASK, 'nosigmask') ] _flags_str2int = dict((string, flag) for (flag, string) in _flags) def _flags_to_list(flags): result = [] for code, value in _flags: if flags & code: result.append(value) flags &= ~code if not flags: break if flags: result.append(flags) return result if sys.version_info[0] >= 3: basestring = (bytes, str) integer_types = (int,) else: import __builtin__ # pylint:disable=import-error basestring = (__builtin__.basestring,) integer_types = (int, __builtin__.long) def _flags_to_int(flags): # Note, that order does not matter, libev has its own predefined order if not flags: return 0 if isinstance(flags, integer_types): return flags result = 0 try: if isinstance(flags, basestring): flags = flags.split(',') for value in flags: value = value.strip().lower() if value: result |= _flags_str2int[value] except KeyError as ex: raise ValueError('Invalid backend or flag: %s\nPossible values: %s' % (ex, ', '.join(sorted(_flags_str2int.keys())))) return result def _str_hex(flag): if isinstance(flag, integer_types): return hex(flag) return str(flag) def _check_flags(flags): as_list = [] flags &= libev.EVBACKEND_MASK if not flags: return if not flags & libev.EVBACKEND_ALL: raise ValueError('Invalid value for backend: 0x%x' % flags) if not flags & libev.ev_supported_backends(): as_list = [_str_hex(x) for x in _flags_to_list(flags)] raise ValueError('Unsupported backend: %s' % '|'.join(as_list)) def supported_backends(): return _flags_to_list(libev.ev_supported_backends()) def recommended_backends(): return _flags_to_list(libev.ev_recommended_backends()) def embeddable_backends(): return _flags_to_list(libev.ev_embeddable_backends()) def time(): return libev.ev_time() from gevent._ffi.loop import AbstractLoop from gevent.libev import watcher as _watchers _events_to_str = _watchers._events_to_str # exported @implementer(ILoop) class loop(AbstractLoop): # pylint:disable=too-many-public-methods # libuv parameters simply won't accept anything lower than 1ms # (0.001s), but libev takes fractional seconds. In practice, on # one machine, libev can sleep for very small periods of time: # # sleep(0.00001) -> 0.000024 # sleep(0.0001) -> 0.000156 # sleep(0.001) -> 0.00136 (which is comparable to libuv) approx_timer_resolution = 0.00001 error_handler = None _CHECK_POINTER = 'struct ev_check *' _PREPARE_POINTER = 'struct ev_prepare *' _TIMER_POINTER = 'struct ev_timer *' def __init__(self, flags=None, default=None): AbstractLoop.__init__(self, ffi, libev, _watchers, flags, default) self._default = bool(libev.ev_is_default_loop(self._ptr)) def _init_loop(self, flags, default): c_flags = _flags_to_int(flags) _check_flags(c_flags) c_flags |= libev.EVFLAG_NOENV c_flags |= libev.EVFLAG_FORKCHECK if default is None: default = True if default: ptr = libev.gevent_ev_default_loop(c_flags) if not ptr: raise SystemError("ev_default_loop(%s) failed" % (c_flags, )) else: ptr = libev.ev_loop_new(c_flags) if not ptr: raise SystemError("ev_loop_new(%s) failed" % (c_flags, )) if default or SYSERR_CALLBACK is None: set_syserr_cb(self._handle_syserr) # Mark this loop as being used. 
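# (Storing the loop's own pointer in its userdata slot is how this CFFI
# implementation marks a live loop: _can_destroy_loop() below simply checks
# ev_userdata(ptr), and _destroy_loop() resets it to ffi.NULL -- mirroring
# what the Cython corecext loop does.)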
libev.ev_set_userdata(ptr, ptr) return ptr def _init_and_start_check(self): libev.ev_check_init(self._check, libev.python_check_callback) self._check.data = self._handle_to_self libev.ev_check_start(self._ptr, self._check) self.unref() def _init_and_start_prepare(self): libev.ev_prepare_init(self._prepare, libev.python_prepare_callback) libev.ev_prepare_start(self._ptr, self._prepare) self.unref() def _init_callback_timer(self): libev.ev_timer_init(self._timer0, libev.gevent_noop, 0.0, 0.0) def _stop_callback_timer(self): libev.ev_timer_stop(self._ptr, self._timer0) def _start_callback_timer(self): libev.ev_timer_start(self._ptr, self._timer0) def _stop_aux_watchers(self): super(loop, self)._stop_aux_watchers() if libev.ev_is_active(self._prepare): self.ref() libev.ev_prepare_stop(self._ptr, self._prepare) if libev.ev_is_active(self._check): self.ref() libev.ev_check_stop(self._ptr, self._check) if libev.ev_is_active(self._timer0): libev.ev_timer_stop(self._timer0) def _setup_for_run_callback(self): # XXX: libuv needs to start the callback timer to be sure # that the loop wakes up and calls this. Our C version doesn't # do this. # self._start_callback_timer() self.ref() # we should go through the loop now def destroy(self): if self._ptr: super(loop, self).destroy() # pylint:disable=comparison-with-callable if globals()["SYSERR_CALLBACK"] == self._handle_syserr: set_syserr_cb(None) def _can_destroy_loop(self, ptr): # Is it marked as destroyed? return libev.ev_userdata(ptr) def _destroy_loop(self, ptr): # Mark as destroyed. libev.ev_set_userdata(ptr, ffi.NULL) libev.ev_loop_destroy(ptr) libev.gevent_zero_prepare(self._prepare) libev.gevent_zero_check(self._check) libev.gevent_zero_timer(self._timer0) del self._prepare del self._check del self._timer0 @property def MAXPRI(self): return libev.EV_MAXPRI @property def MINPRI(self): return libev.EV_MINPRI def _default_handle_error(self, context, type, value, tb): # pylint:disable=unused-argument super(loop, self)._default_handle_error(context, type, value, tb) libev.ev_break(self._ptr, libev.EVBREAK_ONE) def run(self, nowait=False, once=False): flags = 0 if nowait: flags |= libev.EVRUN_NOWAIT if once: flags |= libev.EVRUN_ONCE libev.ev_run(self._ptr, flags) def reinit(self): libev.ev_loop_fork(self._ptr) def ref(self): libev.ev_ref(self._ptr) def unref(self): libev.ev_unref(self._ptr) def break_(self, how=libev.EVBREAK_ONE): libev.ev_break(self._ptr, how) def verify(self): libev.ev_verify(self._ptr) def now(self): return libev.ev_now(self._ptr) def update_now(self): libev.ev_now_update(self._ptr) def __repr__(self): return '<%s at 0x%x %s>' % (self.__class__.__name__, id(self), self._format()) @property def iteration(self): return libev.ev_iteration(self._ptr) @property def depth(self): return libev.ev_depth(self._ptr) @property def backend_int(self): return libev.ev_backend(self._ptr) @property def backend(self): backend = libev.ev_backend(self._ptr) for key, value in _flags: if key == backend: return value return backend @property def pendingcnt(self): return libev.ev_pending_count(self._ptr) def closing_fd(self, fd): pending_before = libev.ev_pending_count(self._ptr) libev.ev_feed_fd_event(self._ptr, fd, 0xFFFF) pending_after = libev.ev_pending_count(self._ptr) return pending_after > pending_before if sys.platform != "win32": def install_sigchld(self): libev.gevent_install_sigchld_handler() def reset_sigchld(self): libev.gevent_reset_sigchld_handler() def fileno(self): if self._ptr and LIBEV_EMBED: # If we don't embed, we can't access these 
fields, # the type is opaque fd = self._ptr.backend_fd if fd >= 0: return fd @property def activecnt(self): if not self._ptr: raise ValueError('operation on destroyed loop') if LIBEV_EMBED: return self._ptr.activecnt return -1 @ffi.def_extern() def _syserr_cb(msg): try: msg = ffi.string(msg) SYSERR_CALLBACK(msg, ffi.errno) except: set_syserr_cb(None) raise # let cffi print the traceback def set_syserr_cb(callback): global SYSERR_CALLBACK if callback is None: libev.ev_set_syserr_cb(ffi.NULL) SYSERR_CALLBACK = None elif callable(callback): libev.ev_set_syserr_cb(libev._syserr_cb) SYSERR_CALLBACK = callback else: raise TypeError('Expected callable or None, got %r' % (callback, )) SYSERR_CALLBACK = None LIBEV_EMBED = libev.LIBEV_EMBED gevent-24.11.1/src/gevent/libev/libev.h000066400000000000000000000061311471441230600176050ustar00rootroot00000000000000#if defined(LIBEV_EMBED) && LIBEV_EMBED #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wcomment" #pragma clang diagnostic ignored "-Wsign-compare" #pragma clang diagnostic ignored "-Wextern-initializer" #pragma clang diagnostic ignored "-Wbitwise-op-parentheses" #endif #include "ev.c" #ifdef __clang__ #pragma clang diagnostic pop #endif #undef LIBEV_EMBED #define LIBEV_EMBED 1 #define gevent_ev_loop_origflags(loop) ((loop)->origflags) #define gevent_ev_loop_sig_pending(loop) ((loop))->sig_pending #define gevent_ev_loop_backend_fd(loop) ((loop))->backend_fd #define gevent_ev_loop_activecnt(loop) ((loop))->activecnt #if EV_USE_SIGNALFD #define gevent_ev_loop_sigfd(loop) ((loop))->sigfd #else #define gevent_ev_loop_sigfd(loop) -1 #endif /* !EV_USE_SIGNALFD */ #else /* !LIBEV_EMBED */ #include "ev.h" #define gevent_ev_loop_origflags(loop) -1 #define gevent_ev_loop_sig_pending(loop) -1 #define gevent_ev_loop_backend_fd(loop) -1 #define gevent_ev_loop_activecnt(loop) -1 #define gevent_ev_loop_sigfd(loop) -1 #define LIBEV_EMBED 0 #define EV_USE_FLOOR -1 #define EV_USE_CLOCK_SYSCALL -1 #define EV_USE_REALTIME -1 #define EV_USE_MONOTONIC -1 #define EV_USE_NANOSLEEP -1 #define EV_USE_INOTIFY -1 #define EV_USE_SIGNALFD -1 #define EV_USE_EVENTFD -1 #define EV_USE_4HEAP -1 #ifndef _WIN32 #include #endif /* !_WIN32 */ #endif /* LIBEV_EMBED */ #ifndef _WIN32 static struct sigaction libev_sigchld; /* * Track the state of whether we have installed * the libev sigchld handler specifically. * If it's non-zero, libev_sigchld will be valid and set to the action * that libev needs to do. * If it's 1, we need to install libev_sigchld to make libev * child handlers work (on request). */ static int sigchld_state = 0; static struct ev_loop* gevent_ev_default_loop(unsigned int flags) { struct ev_loop* result; struct sigaction tmp; if (sigchld_state) return ev_default_loop(flags); // Request the old SIGCHLD handler sigaction(SIGCHLD, NULL, &tmp); // Get the loop, which will install a SIGCHLD handler result = ev_default_loop(flags); // XXX what if SIGCHLD received there? // Now restore the previous SIGCHLD handler sigaction(SIGCHLD, &tmp, &libev_sigchld); sigchld_state = 1; return result; } static void gevent_install_sigchld_handler(void) { if (sigchld_state == 1) { sigaction(SIGCHLD, &libev_sigchld, NULL); sigchld_state = 2; } } static void gevent_reset_sigchld_handler(void) { // We could have any state at this point, depending on // whether the default loop has been used. 
If it has, // then always be in state 1 ("need to install) if (sigchld_state) { sigchld_state = 1; } } #else /* !_WIN32 */ #define gevent_ev_default_loop ev_default_loop static void gevent_install_sigchld_handler(void) { } static void gevent_reset_sigchld_handler(void) { } // Fake child functions that we can link to. static void ev_child_start(struct ev_loop* loop, ev_child* w) {}; static void ev_child_stop(struct ev_loop* loop, ev_child* w) {}; #endif /* _WIN32 */ gevent-24.11.1/src/gevent/libev/libev.pxd000066400000000000000000000136171471441230600201600ustar00rootroot00000000000000# From cython/includes/libc/stdint.pxd # Longness only used for type promotion. # Actual compile time size used for conversions. # We don't have stdint.h on visual studio 9.0 (2008) on windows, sigh, # so go with Py_ssize_t # ssize_t -> intptr_t cdef extern from "libev_vfd.h": # cython doesn't process pre-processor directives, so they # don't matter in this file. It just takes the last definition it sees. ctypedef Py_ssize_t intptr_t ctypedef intptr_t vfd_socket_t vfd_socket_t vfd_get(int) int vfd_open(long) except -1 void vfd_free(int) cdef extern from "libev.h" nogil: int LIBEV_EMBED int EV_MINPRI int EV_MAXPRI int EV_VERSION_MAJOR int EV_VERSION_MINOR int EV_USE_FLOOR int EV_USE_CLOCK_SYSCALL int EV_USE_REALTIME int EV_USE_MONOTONIC int EV_USE_NANOSLEEP int EV_USE_SELECT int EV_USE_POLL int EV_USE_EPOLL int EV_USE_KQUEUE int EV_USE_PORT int EV_USE_INOTIFY int EV_USE_SIGNALFD int EV_USE_EVENTFD int EV_USE_4HEAP int EV_USE_IOCP int EV_SELECT_IS_WINSOCKET int EV_UNDEF int EV_NONE int EV_READ int EV_WRITE int EV__IOFDSET int EV_TIMER int EV_PERIODIC int EV_SIGNAL int EV_CHILD int EV_STAT int EV_IDLE int EV_PREPARE int EV_CHECK int EV_EMBED int EV_FORK int EV_CLEANUP int EV_ASYNC int EV_CUSTOM int EV_ERROR int EVFLAG_AUTO int EVFLAG_NOENV int EVFLAG_FORKCHECK int EVFLAG_NOINOTIFY int EVFLAG_SIGNALFD int EVFLAG_NOSIGMASK int EVBACKEND_SELECT int EVBACKEND_POLL int EVBACKEND_EPOLL int EVBACKEND_KQUEUE int EVBACKEND_DEVPOLL int EVBACKEND_PORT int EVBACKEND_IOCP int EVBACKEND_IOURING int EVBACKEND_LINUXAIO int EVBACKEND_ALL int EVBACKEND_MASK int EVRUN_NOWAIT int EVRUN_ONCE int EVBREAK_CANCEL int EVBREAK_ONE int EVBREAK_ALL struct ev_loop: int activecnt int sig_pending int backend_fd int sigfd unsigned int origflags struct ev_watcher: void* data; struct ev_io: int fd int events struct ev_timer: double at struct ev_signal: pass struct ev_idle: pass struct ev_prepare: pass struct ev_check: pass struct ev_fork: pass struct ev_async: pass struct ev_child: int pid int rpid int rstatus struct stat: int st_nlink struct ev_stat: stat attr stat prev double interval union ev_any_watcher: ev_watcher w ev_io io ev_timer timer ev_signal signal ev_idle idle int ev_version_major() int ev_version_minor() unsigned int ev_supported_backends() unsigned int ev_recommended_backends() unsigned int ev_embeddable_backends() ctypedef double ev_tstamp ev_tstamp ev_time() void ev_set_syserr_cb(void*) void ev_set_allocator(void*) int ev_priority(void*) void ev_set_priority(void*, int) int ev_is_pending(void*) int ev_is_active(void*) void ev_io_init(ev_io*, void* callback, int fd, int events) void ev_io_start(ev_loop*, ev_io*) void ev_io_stop(ev_loop*, ev_io*) void ev_feed_event(ev_loop*, void*, int) void ev_feed_fd_event(ev_loop*, vfd_socket_t, int) void ev_timer_init(ev_timer*, void* callback, double, double) void ev_timer_start(ev_loop*, ev_timer*) void ev_timer_stop(ev_loop*, ev_timer*) void ev_timer_again(ev_loop*, ev_timer*) void 
ev_signal_init(ev_signal*, void* callback, int) void ev_signal_start(ev_loop*, ev_signal*) void ev_signal_stop(ev_loop*, ev_signal*) void ev_idle_init(ev_idle*, void* callback) void ev_idle_start(ev_loop*, ev_idle*) void ev_idle_stop(ev_loop*, ev_idle*) void ev_prepare_init(ev_prepare*, void* callback) void ev_prepare_start(ev_loop*, ev_prepare*) void ev_prepare_stop(ev_loop*, ev_prepare*) void ev_check_init(ev_check*, void* callback) void ev_check_start(ev_loop*, ev_check*) void ev_check_stop(ev_loop*, ev_check*) void ev_fork_init(ev_fork*, void* callback) void ev_fork_start(ev_loop*, ev_fork*) void ev_fork_stop(ev_loop*, ev_fork*) void ev_async_init(ev_async*, void* callback) void ev_async_start(ev_loop*, ev_async*) void ev_async_stop(ev_loop*, ev_async*) void ev_async_send(ev_loop*, ev_async*) int ev_async_pending(ev_async*) void ev_child_init(ev_child*, void* callback, int, int) void ev_child_start(ev_loop*, ev_child*) void ev_child_stop(ev_loop*, ev_child*) void ev_stat_init(ev_stat*, void* callback, char*, double) void ev_stat_start(ev_loop*, ev_stat*) void ev_stat_stop(ev_loop*, ev_stat*) ev_loop* ev_default_loop(unsigned int flags) ev_loop* ev_loop_new(unsigned int flags) void* ev_userdata(ev_loop*) void ev_set_userdata(ev_loop*, void*) void ev_loop_destroy(ev_loop*) void ev_loop_fork(ev_loop*) int ev_is_default_loop(ev_loop*) unsigned int ev_iteration(ev_loop*) unsigned int ev_depth(ev_loop*) unsigned int ev_backend(ev_loop*) void ev_verify(ev_loop*) void ev_run(ev_loop*, int flags) nogil ev_tstamp ev_now(ev_loop*) void ev_now_update(ev_loop*) void ev_ref(ev_loop*) void ev_unref(ev_loop*) void ev_break(ev_loop*, int) unsigned int ev_pending_count(ev_loop*) # gevent extra functions. These are defined in libev.h. ev_loop* gevent_ev_default_loop(unsigned int flags) void gevent_install_sigchld_handler() void gevent_reset_sigchld_handler() # These compensate for lack of access to ev_loop struct definition # when LIBEV_EMBED is false. unsigned int gevent_ev_loop_origflags(ev_loop*); int gevent_ev_loop_sig_pending(ev_loop*); int gevent_ev_loop_backend_fd(ev_loop*); int gevent_ev_loop_activecnt(ev_loop*); int gevent_ev_loop_sigfd(ev_loop*); gevent-24.11.1/src/gevent/libev/libev_vfd.h000066400000000000000000000133031471441230600204430ustar00rootroot00000000000000#ifdef _WIN32 /* see discussion in the libuv directory: this is a SOCKET which is a HANDLE which is a PVOID (even though they're really small ints), and CPython and PyPy return that SOCKET cast to an int from fileno() */ typedef intptr_t vfd_socket_t; #define vfd_socket_object PyLong_FromLongLong #ifdef LIBEV_EMBED /* * If libev on win32 is embedded, then we can use an * arbitrary mapping between integer fds and OS * handles. Then by defining special macros libev * will use our functions. */ #define WIN32_LEAN_AND_MEAN #include #include typedef struct vfd_entry_t { vfd_socket_t handle; /* OS handle, i.e. 
SOCKET */ int count; /* Reference count, 0 if free */ int next; /* Next free fd, -1 if last */ } vfd_entry; #define VFD_INCREMENT 128 static int vfd_num = 0; /* num allocated fds */ static int vfd_max = 0; /* max allocated fds */ static int vfd_next = -1; /* next free fd for reuse */ static PyObject* vfd_map = NULL; /* map OS handle -> virtual fd */ static vfd_entry* vfd_entries = NULL; /* list of virtual fd entries */ #ifdef WITH_THREAD static CRITICAL_SECTION* volatile vfd_lock = NULL; static CRITICAL_SECTION* vfd_make_lock() { if (vfd_lock == NULL) { /* must use malloc and not PyMem_Malloc here */ CRITICAL_SECTION* lock = malloc(sizeof(CRITICAL_SECTION)); InitializeCriticalSection(lock); if (InterlockedCompareExchangePointer(&vfd_lock, lock, NULL) != NULL) { /* another thread initialized lock first */ DeleteCriticalSection(lock); free(lock); } } return vfd_lock; } #define VFD_LOCK_ENTER EnterCriticalSection(vfd_make_lock()) #define VFD_LOCK_LEAVE LeaveCriticalSection(vfd_lock) #define VFD_GIL_DECLARE PyGILState_STATE ___save #define VFD_GIL_ENSURE ___save = PyGILState_Ensure() #define VFD_GIL_RELEASE PyGILState_Release(___save) #else /* ! WITH_THREAD */ #define VFD_LOCK_ENTER #define VFD_LOCK_LEAVE #define VFD_GIL_DECLARE #define VFD_GIL_ENSURE #define VFD_GIL_RELEASE #endif /*_WITH_THREAD */ /* * Given a virtual fd returns an OS handle or -1 * This function is speed critical, so it cannot use GIL */ static vfd_socket_t vfd_get(int fd) { vfd_socket_t handle = -1; VFD_LOCK_ENTER; if (vfd_entries != NULL && fd >= 0 && fd < vfd_num) handle = vfd_entries[fd].handle; VFD_LOCK_LEAVE; return handle; } #define EV_FD_TO_WIN32_HANDLE(fd) vfd_get((fd)) /* * Given an OS handle finds or allocates a virtual fd * Returns -1 on failure and sets Python exception if pyexc is non-zero */ static int vfd_open_(vfd_socket_t handle, int pyexc) { VFD_GIL_DECLARE; int fd = -1; unsigned long arg; PyObject* key = NULL; PyObject* value; if (!pyexc) { VFD_GIL_ENSURE; } if (ioctlsocket(handle, FIONREAD, &arg) != 0) { if (pyexc) PyErr_Format(PyExc_IOError, #ifdef _WIN64 "%lld is not a socket (files are not supported)", #else "%ld is not a socket (files are not supported)", #endif handle); goto done; } if (vfd_map == NULL) { vfd_map = PyDict_New(); if (vfd_map == NULL) goto done; } key = vfd_socket_object(handle); /* check if it's already in the dict */ value = PyDict_GetItem(vfd_map, key); if (value != NULL) { /* is it safe to use PyInt_AS_LONG(value) here? 
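     * (Editorial note, an assumption about intent: probably not without an
     * extra type check -- PyInt_AS_LONG is the unchecked macro.  The checked
     * PyInt_AsLong() used just below returns -1 and sets an exception for
     * non-integer values, and the fd >= 0 guard then simply falls through to
     * allocate a fresh entry.)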
*/ fd = PyInt_AsLong(value); if (fd >= 0) { ++vfd_entries[fd].count; goto done; } } /* use the free entry, if available */ if (vfd_next >= 0) { fd = vfd_next; vfd_next = vfd_entries[fd].next; VFD_LOCK_ENTER; goto allocated; } /* check if it would be out of bounds */ if (vfd_num >= FD_SETSIZE) { /* libev's select doesn't support more that FD_SETSIZE fds */ if (pyexc) PyErr_Format(PyExc_IOError, "cannot watch more than %d sockets", (int)FD_SETSIZE); goto done; } /* allocate more space if needed */ VFD_LOCK_ENTER; if (vfd_num >= vfd_max) { int newsize = vfd_max + VFD_INCREMENT; vfd_entry* entries = PyMem_Realloc(vfd_entries, sizeof(vfd_entry) * newsize); if (entries == NULL) { VFD_LOCK_LEAVE; if (pyexc) PyErr_NoMemory(); goto done; } vfd_entries = entries; vfd_max = newsize; } fd = vfd_num++; allocated: /* vfd_lock must be acquired when entering here */ vfd_entries[fd].handle = handle; vfd_entries[fd].count = 1; VFD_LOCK_LEAVE; value = PyInt_FromLong(fd); PyDict_SetItem(vfd_map, key, value); Py_DECREF(value); done: Py_XDECREF(key); if (!pyexc) { VFD_GIL_RELEASE; } return fd; } #define vfd_open(fd) vfd_open_((fd), 1) #define EV_WIN32_HANDLE_TO_FD(handle) vfd_open_((handle), 0) static void vfd_free_(int fd, int needclose) { VFD_GIL_DECLARE; PyObject* key; if (needclose) { VFD_GIL_ENSURE; } if (fd < 0 || fd >= vfd_num) goto done; /* out of bounds */ if (vfd_entries[fd].count <= 0) goto done; /* free entry, ignore */ if (!--vfd_entries[fd].count) { /* fd has just been freed */ vfd_socket_t handle = vfd_entries[fd].handle; vfd_entries[fd].handle = -1; vfd_entries[fd].next = vfd_next; vfd_next = fd; if (needclose) closesocket(handle); /* vfd_map is assumed to be != NULL */ key = vfd_socket_object(handle); PyDict_DelItem(vfd_map, key); Py_DECREF(key); } done: if (needclose) { VFD_GIL_RELEASE; } } #define vfd_free(fd) vfd_free_((fd), 0) #define EV_WIN32_CLOSE_FD(fd) vfd_free_((fd), 1) #else /* !LIBEV_EMBED */ /* * If libev on win32 is not embedded in gevent, then * the only way to map vfds is to use the default of * using C runtime fds in libev. Note that it will leak * fds, because there's no way of closing them safely */ #define vfd_get(fd) _get_osfhandle((fd)) #define vfd_open(fd) _open_osfhandle((fd), 0) #define vfd_free(fd) #endif /* LIBEV_EMBED */ #else /* !_WIN32 */ /* * On non-win32 platforms vfd_* are noop macros */ typedef int vfd_socket_t; #define vfd_get(fd) (fd) #define vfd_open(fd) (fd) #define vfd_free(fd) #endif /* _WIN32 */ gevent-24.11.1/src/gevent/libev/stathelper.c000066400000000000000000000121041471441230600206470ustar00rootroot00000000000000/* copied from Python-2.7.2/Modules/posixmodule.c */ /* XXX: So obviously this is very outdated. See if CPython itself now has the support we need, or update this to a current copy. 
*/ #include "Python.h" #include "structseq.h" #define STRUCT_STAT struct stat #ifdef HAVE_STRUCT_STAT_ST_BLKSIZE #define ST_BLKSIZE_IDX 13 #else #define ST_BLKSIZE_IDX 12 #endif #ifdef HAVE_STRUCT_STAT_ST_BLOCKS #define ST_BLOCKS_IDX (ST_BLKSIZE_IDX+1) #else #define ST_BLOCKS_IDX ST_BLKSIZE_IDX #endif #ifdef HAVE_STRUCT_STAT_ST_RDEV #define ST_RDEV_IDX (ST_BLOCKS_IDX+1) #else #define ST_RDEV_IDX ST_BLOCKS_IDX #endif #ifdef HAVE_STRUCT_STAT_ST_FLAGS #define ST_FLAGS_IDX (ST_RDEV_IDX+1) #else #define ST_FLAGS_IDX ST_RDEV_IDX #endif #ifdef HAVE_STRUCT_STAT_ST_GEN #define ST_GEN_IDX (ST_FLAGS_IDX+1) #else #define ST_GEN_IDX ST_FLAGS_IDX #endif #ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME #define ST_BIRTHTIME_IDX (ST_GEN_IDX+1) #else #define ST_BIRTHTIME_IDX ST_GEN_IDX #endif static PyObject* posixmodule = NULL; static PyTypeObject* pStatResultType = NULL; static PyObject* import_posixmodule(void) { if (!posixmodule) { posixmodule = PyImport_ImportModule("posix"); } return posixmodule; } static PyObject* import_StatResultType(void) { PyObject* p = NULL; if (!pStatResultType) { PyObject* module; module = import_posixmodule(); if (module) { p = PyObject_GetAttrString(module, "stat_result"); } } return p; } static void fill_time(PyObject *v, int index, time_t sec, unsigned long nsec) { PyObject *fval,*ival; #if SIZEOF_TIME_T > SIZEOF_LONG ival = PyLong_FromLongLong((PY_LONG_LONG)sec); #else ival = PyLong_FromLong((long)sec); #endif if (!ival) return; fval = PyFloat_FromDouble(sec + 1e-9*nsec); PyStructSequence_SET_ITEM(v, index, ival); PyStructSequence_SET_ITEM(v, index+3, fval); } /* pack a system stat C structure into the Python stat tuple (used by posix_stat() and posix_fstat()) */ static PyObject* _pystat_fromstructstat(STRUCT_STAT *st) { unsigned long ansec, mnsec, cnsec; PyObject *v; PyTypeObject* StatResultType = (PyTypeObject*)import_StatResultType(); if (StatResultType == NULL) { return NULL; } v = PyStructSequence_New(StatResultType); if (v == NULL) return NULL; PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long)st->st_mode)); #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 1, PyLong_FromLongLong((PY_LONG_LONG)st->st_ino)); #else PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong((long)st->st_ino)); #endif #if defined(HAVE_LONG_LONG) && !defined(MS_WINDOWS) PyStructSequence_SET_ITEM(v, 2, PyLong_FromLongLong((PY_LONG_LONG)st->st_dev)); #else PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long)st->st_dev)); #endif PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long)st->st_nlink)); PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long)st->st_uid)); PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long)st->st_gid)); #ifdef HAVE_LARGEFILE_SUPPORT PyStructSequence_SET_ITEM(v, 6, PyLong_FromLongLong((PY_LONG_LONG)st->st_size)); #else PyStructSequence_SET_ITEM(v, 6, PyLong_FromLong(st->st_size)); #endif #if defined(HAVE_STAT_TV_NSEC) ansec = st->st_atim.tv_nsec; mnsec = st->st_mtim.tv_nsec; cnsec = st->st_ctim.tv_nsec; #elif defined(HAVE_STAT_TV_NSEC2) ansec = st->st_atimespec.tv_nsec; mnsec = st->st_mtimespec.tv_nsec; cnsec = st->st_ctimespec.tv_nsec; #elif defined(HAVE_STAT_NSEC) ansec = st->st_atime_nsec; mnsec = st->st_mtime_nsec; cnsec = st->st_ctime_nsec; #else ansec = mnsec = cnsec = 0; #endif fill_time(v, 7, st->st_atime, ansec); fill_time(v, 8, st->st_mtime, mnsec); fill_time(v, 9, st->st_ctime, cnsec); #ifdef HAVE_STRUCT_STAT_ST_BLKSIZE PyStructSequence_SET_ITEM(v, ST_BLKSIZE_IDX, PyLong_FromLong((long)st->st_blksize)); #endif #ifdef HAVE_STRUCT_STAT_ST_BLOCKS 
PyStructSequence_SET_ITEM(v, ST_BLOCKS_IDX, PyLong_FromLong((long)st->st_blocks)); #endif #ifdef HAVE_STRUCT_STAT_ST_RDEV PyStructSequence_SET_ITEM(v, ST_RDEV_IDX, PyLong_FromLong((long)st->st_rdev)); #endif #ifdef HAVE_STRUCT_STAT_ST_GEN PyStructSequence_SET_ITEM(v, ST_GEN_IDX, PyLong_FromLong((long)st->st_gen)); #endif #ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME { PyObject *val; unsigned long bsec,bnsec; bsec = (long)st->st_birthtime; #ifdef HAVE_STAT_TV_NSEC2 bnsec = st->st_birthtimespec.tv_nsec; #else bnsec = 0; #endif val = PyFloat_FromDouble(bsec + 1e-9*bnsec); PyStructSequence_SET_ITEM(v, ST_BIRTHTIME_IDX, val); } #endif #ifdef HAVE_STRUCT_STAT_ST_FLAGS PyStructSequence_SET_ITEM(v, ST_FLAGS_IDX, PyLong_FromLong((long)st->st_flags)); #endif if (PyErr_Occurred()) { Py_DECREF(v); return NULL; } return v; } gevent-24.11.1/src/gevent/libev/watcher.py000066400000000000000000000174771471441230600203610ustar00rootroot00000000000000# pylint: disable=too-many-lines, protected-access, redefined-outer-name, not-callable # pylint: disable=no-member from __future__ import absolute_import, print_function import sys from gevent.libev import _corecffi # pylint:disable=no-name-in-module,import-error # Nothing public here __all__ = [] ffi = _corecffi.ffi # pylint:disable=no-member libev = _corecffi.lib # pylint:disable=no-member if hasattr(libev, 'vfd_open'): # Must be on windows # pylint:disable=c-extension-no-member assert sys.platform.startswith("win"), "vfd functions only needed on windows" vfd_open = libev.vfd_open vfd_free = libev.vfd_free vfd_get = libev.vfd_get else: vfd_open = vfd_free = vfd_get = lambda fd: fd ##### ## NOTE on Windows: # The C implementation does several things specially for Windows; # a possibly incomplete list is: # # - the loop runs a periodic signal checker; # - the io watcher constructor is different and it has a destructor; # - the child watcher is not defined # # The CFFI implementation does none of these things, and so # is possibly NOT FUNCTIONALLY CORRECT on Win32 ##### _NOARGS = () _events = [(libev.EV_READ, 'READ'), (libev.EV_WRITE, 'WRITE'), (libev.EV__IOFDSET, '_IOFDSET'), (libev.EV_PERIODIC, 'PERIODIC'), (libev.EV_SIGNAL, 'SIGNAL'), (libev.EV_CHILD, 'CHILD'), (libev.EV_STAT, 'STAT'), (libev.EV_IDLE, 'IDLE'), (libev.EV_PREPARE, 'PREPARE'), (libev.EV_CHECK, 'CHECK'), (libev.EV_EMBED, 'EMBED'), (libev.EV_FORK, 'FORK'), (libev.EV_CLEANUP, 'CLEANUP'), (libev.EV_ASYNC, 'ASYNC'), (libev.EV_CUSTOM, 'CUSTOM'), (libev.EV_ERROR, 'ERROR')] from gevent._ffi import watcher as _base def _events_to_str(events): return _base.events_to_str(events, _events) class watcher(_base.watcher): _FFI = ffi _LIB = libev _watcher_prefix = 'ev' # Flags is a bitfield with the following meaning: # 0000 -> default, referenced (when active) # 0010 -> ev_unref has been called # 0100 -> not referenced; independent of 0010 _flags = 0 def __init__(self, _loop, ref=True, priority=None, args=_base._NOARGS): if ref: self._flags = 0 else: self._flags = 4 super(watcher, self).__init__(_loop, ref=ref, priority=priority, args=args) def _watcher_ffi_set_priority(self, priority): libev.ev_set_priority(self._watcher, priority) def _watcher_ffi_init(self, args): self._watcher_init(self._watcher, self._watcher_callback, *args) def _watcher_ffi_start(self): self._watcher_start(self.loop._ptr, self._watcher) def _watcher_ffi_ref(self): if self._flags & 2: # we've told libev we're not referenced self.loop.ref() self._flags &= ~2 def _watcher_ffi_unref(self): if self._flags & 6 == 4: # We're not referenced, but we 
haven't told libev that self.loop.unref() self._flags |= 2 # now we've told libev def _get_ref(self): return not self._flags & 4 def _set_ref(self, value): if value: if not self._flags & 4: return # ref is already True if self._flags & 2: # ev_unref was called, undo self.loop.ref() self._flags &= ~6 # do not want unref, no outstanding unref else: if self._flags & 4: return # ref is already False self._flags |= 4 # we're not referenced if not self._flags & 2 and libev.ev_is_active(self._watcher): # we haven't told libev we're not referenced, but it thinks we're # active so we need to undo that self.loop.unref() self._flags |= 2 # libev knows we're not referenced ref = property(_get_ref, _set_ref) def _get_priority(self): return libev.ev_priority(self._watcher) @_base.not_while_active def _set_priority(self, priority): libev.ev_set_priority(self._watcher, priority) priority = property(_get_priority, _set_priority) def feed(self, revents, callback, *args): self.callback = callback self.args = args or _NOARGS if self._flags & 6 == 4: self.loop.unref() self._flags |= 2 libev.ev_feed_event(self.loop._ptr, self._watcher, revents) if not self._flags & 1: # Py_INCREF(self) self._flags |= 1 @property def pending(self): return bool(self._watcher and libev.ev_is_pending(self._watcher)) class io(_base.IoMixin, watcher): EVENT_MASK = libev.EV__IOFDSET | libev.EV_READ | libev.EV_WRITE def _get_fd(self): return vfd_get(self._watcher.fd) @_base.not_while_active def _set_fd(self, fd): vfd = vfd_open(fd) vfd_free(self._watcher.fd) self._watcher_init(self._watcher, self._watcher_callback, vfd, self._watcher.events) fd = property(_get_fd, _set_fd) def _get_events(self): return self._watcher.events @_base.not_while_active def _set_events(self, events): self._watcher_init(self._watcher, self._watcher_callback, self._watcher.fd, events) events = property(_get_events, _set_events) @property def events_str(self): return _events_to_str(self._watcher.events) def _format(self): return ' fd=%s events=%s' % (self.fd, self.events_str) class timer(_base.TimerMixin, watcher): @property def at(self): return self._watcher.at def again(self, callback, *args, **kw): # Exactly the same as start(), just with a different initializer # function self._watcher_start = libev.ev_timer_again try: self.start(callback, *args, **kw) finally: del self._watcher_start class signal(_base.SignalMixin, watcher): pass class idle(_base.IdleMixin, watcher): pass class prepare(_base.PrepareMixin, watcher): pass class check(_base.CheckMixin, watcher): pass class fork(_base.ForkMixin, watcher): pass class async_(_base.AsyncMixin, watcher): def send(self): libev.ev_async_send(self.loop._ptr, self._watcher) @property def pending(self): return self._watcher is not None and bool(libev.ev_async_pending(self._watcher)) # Provide BWC for those that have async locals()['async'] = async_ class _ClosedWatcher(object): __slots__ = ('pid', 'rpid', 'rstatus') def __init__(self, other): self.pid = other.pid self.rpid = other.rpid self.rstatus = other.rstatus def __bool__(self): return False __nonzero__ = __bool__ class child(_base.ChildMixin, watcher): _watcher_type = 'child' def close(self): # Capture the properties we defer to our _watcher, because # we're about to discard it. 
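        # Editorial note: _ClosedWatcher (defined above) just snapshots
        # pid/rpid/rstatus so the properties below keep working after the
        # underlying CFFI struct is discarded; it is also falsy, so truth
        # tests on the stored watcher treat the closed child as inactive.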
closed_watcher = _ClosedWatcher(self._watcher) super(child, self).close() self._watcher = closed_watcher @property def pid(self): return self._watcher.pid @property def rpid(self): return self._watcher.rpid @rpid.setter def rpid(self, value): self._watcher.rpid = value @property def rstatus(self): return self._watcher.rstatus @rstatus.setter def rstatus(self, value): self._watcher.rstatus = value class stat(_base.StatMixin, watcher): _watcher_type = 'stat' @property def attr(self): if not self._watcher.attr.st_nlink: return return self._watcher.attr @property def prev(self): if not self._watcher.prev.st_nlink: return return self._watcher.prev @property def interval(self): return self._watcher.interval gevent-24.11.1/src/gevent/libuv/000077500000000000000000000000001471441230600163525ustar00rootroot00000000000000gevent-24.11.1/src/gevent/libuv/__init__.py000066400000000000000000000002511471441230600204610ustar00rootroot00000000000000# -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import division from __future__ import print_function # Nothing public here __all__ = [] gevent-24.11.1/src/gevent/libuv/_corecffi_build.py000066400000000000000000000300751471441230600220270ustar00rootroot00000000000000# pylint: disable=no-member # This module is only used to create and compile the gevent.libuv._corecffi module; # nothing should be directly imported from it except `ffi`, which should only be # used for `ffi.compile()`; programs should import gevent._corecfffi. # However, because we are using "out-of-line" mode, it is necessary to examine # this file to know what functions are created and available on the generated # module. from __future__ import absolute_import, print_function import os import os.path # pylint:disable=no-name-in-module import platform import sys from cffi import FFI sys.path.append(".") try: import _setuputils except ImportError: print("This file must be imported with setup.py in the current working dir.") raise __all__ = [] WIN = sys.platform.startswith('win32') LIBUV_EMBED = _setuputils.should_embed('libuv') PY2 = sys.version_info[0] == 2 ffi = FFI() thisdir = os.path.dirname(os.path.abspath(__file__)) parentdir = os.path.abspath(os.path.join(thisdir, '..')) setup_py_dir = os.path.abspath(os.path.join(thisdir, '..', '..', '..')) libuv_dir = os.path.abspath(os.path.join(setup_py_dir, 'deps', 'libuv')) def read_source(name): # pylint:disable=unspecified-encoding with open(os.path.join(thisdir, name), 'r') as f: return f.read() _cdef = read_source('_corecffi_cdef.c') _source = read_source('_corecffi_source.c') # These defines and uses help keep the C file readable and lintable by # C tools. _cdef = _cdef.replace('#define GEVENT_STRUCT_DONE int', '') _cdef = _cdef.replace("GEVENT_STRUCT_DONE _;", '...;') # nlink_t is not used in libuv. _cdef = _cdef.replace('#define GEVENT_ST_NLINK_T int', '') _cdef = _cdef.replace('GEVENT_ST_NLINK_T', 'nlink_t') _cdef = _cdef.replace('#define GEVENT_UV_OS_SOCK_T int', '') # uv_os_sock_t is int on POSIX and SOCKET on Win32, but socket is # just another name for handle, which is just another name for 'void*' # which we will treat as an 'unsigned long' or 'unsigned long long' # since it comes through 'fileno()' where it has been cast as an int. 
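# (Editorial illustration, values hypothetical: on 64-bit Windows a socket's
# fileno() is really a kernel HANDLE and may be wider than a 32-bit C int,
# which is why the substitution below uses ``intptr_t`` there while POSIX
# keeps a plain ``int``.)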
# See class watcher.io _void_pointer_as_integer = 'intptr_t' _cdef = _cdef.replace("GEVENT_UV_OS_SOCK_T", 'int' if not WIN else _void_pointer_as_integer) LIBUV_INCLUDE_DIRS = [ os.path.join(libuv_dir, 'include'), os.path.join(libuv_dir, 'src'), ] # Initially based on https://github.com/saghul/pyuv/blob/v1.x/setup_libuv.py def _libuv_source(rel_path): # Certain versions of setuptools, notably on windows, are *very* # picky about what we feed to sources= "setup() arguments must # *always* be /-separated paths relative to the setup.py # directory, *never* absolute paths." POSIX doesn't have that issue. path = os.path.join('deps', 'libuv', 'src', rel_path) return path LIBUV_SOURCES = [ _libuv_source('fs-poll.c'), _libuv_source('inet.c'), _libuv_source('threadpool.c'), _libuv_source('uv-common.c'), _libuv_source('version.c'), _libuv_source('uv-data-getter-setters.c'), _libuv_source('timer.c'), _libuv_source('idna.c'), _libuv_source('strscpy.c'), # Added between 1.42.0 and 1.44.2; only used # on unix in that release, but generic _libuv_source('strtok.c'), ] if WIN: LIBUV_SOURCES += [ _libuv_source('win/async.c'), _libuv_source('win/core.c'), _libuv_source('win/detect-wakeup.c'), _libuv_source('win/dl.c'), _libuv_source('win/error.c'), _libuv_source('win/fs-event.c'), _libuv_source('win/fs.c'), # getaddrinfo.c refers to ConvertInterfaceIndexToLuid # and ConvertInterfaceLuidToNameA, which are supposedly in iphlpapi.h # and iphlpapi.lib/dll. But on Windows 10 with Python 3.5 and VC 14 (Visual Studio 2015), # I get an undefined warning from the compiler for those functions and # a link error from the linker, so this file can't be included. # This is possibly because the functions are defined for Windows Vista, and # Python 3.5 builds with at earlier SDK? # Fortunately we don't use those functions. #_libuv_source('win/getaddrinfo.c'), # getnameinfo.c refers to uv__getaddrinfo_translate_error from # getaddrinfo.c, which we don't have. 
#_libuv_source('win/getnameinfo.c'), _libuv_source('win/handle.c'), _libuv_source('win/loop-watcher.c'), _libuv_source('win/pipe.c'), _libuv_source('win/poll.c'), _libuv_source('win/process-stdio.c'), _libuv_source('win/process.c'), _libuv_source('win/signal.c'), _libuv_source('win/snprintf.c'), _libuv_source('win/stream.c'), _libuv_source('win/tcp.c'), _libuv_source('win/thread.c'), _libuv_source('win/tty.c'), _libuv_source('win/udp.c'), _libuv_source('win/util.c'), _libuv_source('win/winapi.c'), _libuv_source('win/winsock.c'), ] else: LIBUV_SOURCES += [ _libuv_source('unix/async.c'), _libuv_source('unix/core.c'), _libuv_source('unix/dl.c'), _libuv_source('unix/fs.c'), _libuv_source('unix/getaddrinfo.c'), _libuv_source('unix/getnameinfo.c'), _libuv_source('unix/loop-watcher.c'), _libuv_source('unix/loop.c'), _libuv_source('unix/pipe.c'), _libuv_source('unix/poll.c'), _libuv_source('unix/process.c'), _libuv_source('unix/signal.c'), _libuv_source('unix/stream.c'), _libuv_source('unix/tcp.c'), _libuv_source('unix/thread.c'), _libuv_source('unix/tty.c'), _libuv_source('unix/udp.c'), ] if sys.platform.startswith('linux'): LIBUV_SOURCES += [ _libuv_source('unix/linux-core.c'), _libuv_source('unix/linux-inotify.c'), _libuv_source('unix/linux-syscalls.c'), _libuv_source('unix/procfs-exepath.c'), _libuv_source('unix/proctitle.c'), _libuv_source('unix/random-sysctl-linux.c'), _libuv_source('unix/epoll.c'), ] elif sys.platform == 'darwin': LIBUV_SOURCES += [ _libuv_source('unix/bsd-ifaddrs.c'), _libuv_source('unix/darwin.c'), _libuv_source('unix/darwin-proctitle.c'), _libuv_source('unix/fsevents.c'), _libuv_source('unix/kqueue.c'), _libuv_source('unix/proctitle.c'), ] elif sys.platform.startswith(('freebsd', 'dragonfly')): # pragma: no cover # Not tested LIBUV_SOURCES += [ _libuv_source('unix/bsd-ifaddrs.c'), _libuv_source('unix/freebsd.c'), _libuv_source('unix/kqueue.c'), _libuv_source('unix/posix-hrtime.c'), _libuv_source('unix/bsd-proctitle.c'), ] elif sys.platform.startswith('openbsd'): # pragma: no cover # Not tested LIBUV_SOURCES += [ _libuv_source('unix/bsd-ifaddrs.c'), _libuv_source('unix/kqueue.c'), _libuv_source('unix/openbsd.c'), _libuv_source('unix/posix-hrtime.c'), _libuv_source('unix/bsd-proctitle.c'), ] elif sys.platform.startswith('netbsd'): # pragma: no cover # Not tested LIBUV_SOURCES += [ _libuv_source('unix/bsd-ifaddrs.c'), _libuv_source('unix/kqueue.c'), _libuv_source('unix/netbsd.c'), _libuv_source('unix/posix-hrtime.c'), _libuv_source('unix/bsd-proctitle.c'), ] elif sys.platform.startswith('sunos'): # pragma: no cover # Not tested. LIBUV_SOURCES += [ _libuv_source('unix/no-proctitle.c'), _libuv_source('unix/sunos.c'), ] elif sys.platform.startswith('aix'): # pragma: no cover # Not tested. LIBUV_SOURCES += [ _libuv_source('unix/aix.c'), _libuv_source('unix/aix-common.c'), ] elif sys.platform.startswith('haiku'): # pragma: no cover # Not tested LIBUV_SOURCES += [ _libuv_source('unix/haiku.c') ] elif sys.platform.startswith('cygwin'): # pragma: no cover # Not tested. 
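    # (Editorial note: each platform branch in this section mirrors libuv's
    # own build files -- the shared unix sources plus that platform's poll
    # backend, e.g. epoll.c on Linux and kqueue.c on macOS/BSD; the cygwin
    # list below falls back to posix-poll.c.)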
# Based on Cygwin package sources /usr/src/libuv-1.32.0-1.src/libuv-1.32.0/Makefile.am # Apparently the same upstream at https://github.com/libuv/libuv/blob/v1.x/Makefile.am LIBUV_SOURCES += [ _libuv_source('unix/cygwin.c'), _libuv_source('unix/bsd-ifaddrs.c'), _libuv_source('unix/no-fsevents.c'), _libuv_source('unix/no-proctitle.c'), _libuv_source('unix/posix-hrtime.c'), _libuv_source('unix/posix-poll.c'), _libuv_source('unix/procfs-exepath.c'), _libuv_source('unix/sysinfo-loadavg.c'), _libuv_source('unix/sysinfo-memory.c'), ] LIBUV_MACROS = [ ('LIBUV_EMBED', int(LIBUV_EMBED)), ] def _define_macro(name, value): LIBUV_MACROS.append((name, value)) LIBUV_LIBRARIES = [] def _add_library(name): LIBUV_LIBRARIES.append(name) if sys.platform != 'win32': _define_macro('_LARGEFILE_SOURCE', 1) _define_macro('_FILE_OFFSET_BITS', 64) if sys.platform.startswith('linux'): _add_library('dl') _add_library('rt') _define_macro('_GNU_SOURCE', 1) _define_macro('_POSIX_C_SOURCE', '200112') elif sys.platform == 'darwin': _define_macro('_DARWIN_USE_64_BIT_INODE', 1) _define_macro('_DARWIN_UNLIMITED_SELECT', 1) elif sys.platform.startswith('netbsd'): # pragma: no cover _add_library('kvm') elif sys.platform.startswith('sunos'): # pragma: no cover _define_macro('__EXTENSIONS__', 1) _define_macro('_XOPEN_SOURCE', 500) _define_macro('_REENTRANT', 1) _add_library('kstat') _add_library('nsl') _add_library('sendfile') _add_library('socket') if platform.release() == '5.10': # https://github.com/libuv/libuv/issues/1458 # https://github.com/giampaolo/psutil/blob/4d6a086411c77b7909cce8f4f141bbdecfc0d354/setup.py#L298-L300 _define_macro('SUNOS_NO_IFADDRS', '') elif sys.platform.startswith('aix'): # pragma: no cover _define_macro('_LINUX_SOURCE_COMPAT', 1) if os.uname().sysname != 'OS400': _add_library('perfstat') elif WIN: # All other gevent .pyd files link to the specific minor-version Python # DLL, so we should do the same here. In virtual environments that don't # contain the major-version python?.dll stub, _corecffi.pyd would otherwise # cause the Windows DLL loader to search the entire PATH for a DLL with # that name. This might end up bringing a second, ABI-incompatible Python # version into the process, which can easily lead to crashes. # See https://github.com/gevent/gevent/pull/1814/files _define_macro('_CFFI_NO_LIMITED_API', 1) _define_macro('_GNU_SOURCE', 1) _define_macro('WIN32', 1) _define_macro('_CRT_SECURE_NO_DEPRECATE', 1) _define_macro('_CRT_NONSTDC_NO_DEPRECATE', 1) _define_macro('_CRT_SECURE_NO_WARNINGS', 1) _define_macro('_WIN32_WINNT', '0x0602') _define_macro('WIN32_LEAN_AND_MEAN', 1) # This value isn't available on the platform that we build and # test Python 2.7 on. It's used for getting power management # suspend/resume notifications, maybe for keeping timers accurate? # # TODO: This should be a more targeted check based on the platform # version, but that's complicated because it depends on having a # particular patch installed to the OS, and I don't know how to # check for that...but we're dropping Python 2 support soon, so # I suspect it really doesn't matter. 
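    # (Editorial note, an assumption about intent: defining the constant to 0
    # for those old toolchains lets libuv's LoadLibraryEx calls compile, and
    # a zero flags value just means the default DLL search behaviour.)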
if PY2: _define_macro('LOAD_LIBRARY_SEARCH_SYSTEM32', 0) _add_library('advapi32') _add_library('iphlpapi') _add_library('psapi') _add_library('shell32') _add_library('user32') _add_library('userenv') _add_library('ws2_32') if not LIBUV_EMBED: del LIBUV_SOURCES[:] del LIBUV_INCLUDE_DIRS[:] _add_library('uv') LIBUV_INCLUDE_DIRS.append(parentdir) ffi.cdef(_cdef) ffi.set_source( 'gevent.libuv._corecffi', _source, sources=LIBUV_SOURCES, depends=LIBUV_SOURCES, include_dirs=LIBUV_INCLUDE_DIRS, libraries=list(LIBUV_LIBRARIES), define_macros=list(LIBUV_MACROS), extra_compile_args=list(_setuputils.IGNORE_THIRD_PARTY_WARNINGS), ) if __name__ == '__main__': # See notes in libev/_corecffi_build.py for how to test this. # # Other than the obvious directory changes, the changes are: # # CPPFLAGS=-Ideps/libuv/include/ -Isrc/gevent/ ffi.compile(verbose=True) gevent-24.11.1/src/gevent/libuv/_corecffi_cdef.c000066400000000000000000000324551471441230600214270ustar00rootroot00000000000000/* access whether we built embedded or not */ #define LIBUV_EMBED ... /* markers for the CFFI parser. Replaced when the string is read. */ #define GEVENT_STRUCT_DONE int #define GEVENT_ST_NLINK_T int #define GEVENT_UV_OS_SOCK_T int #define UV_EBUSY ... #define UV_VERSION_MAJOR ... #define UV_VERSION_MINOR ... #define UV_VERSION_PATCH ... typedef enum { UV_RUN_DEFAULT = 0, UV_RUN_ONCE, UV_RUN_NOWAIT } uv_run_mode; typedef enum { UV_UNKNOWN_HANDLE = 0, UV_ASYNC, UV_CHECK, UV_FS_EVENT, UV_FS_POLL, UV_HANDLE, UV_IDLE, UV_NAMED_PIPE, UV_POLL, UV_PREPARE, UV_PROCESS, UV_STREAM, UV_TCP, UV_TIMER, UV_TTY, UV_UDP, UV_SIGNAL, UV_FILE, UV_HANDLE_TYPE_MAX } uv_handle_type; enum uv_poll_event { UV_READABLE = 1, UV_WRITABLE = 2, /* new in 1.9 */ UV_DISCONNECT = 4, /* new in 1.14.0 */ UV_PRIORITIZED = 8, }; enum uv_fs_event { UV_RENAME = 1, UV_CHANGE = 2 }; enum uv_fs_event_flags { /* * By default, if the fs event watcher is given a directory name, we will * watch for all events in that directory. This flags overrides this behavior * and makes fs_event report only changes to the directory entry itself. This * flag does not affect individual files watched. * This flag is currently not implemented yet on any backend. */ UV_FS_EVENT_WATCH_ENTRY = 1, /* * By default uv_fs_event will try to use a kernel interface such as inotify * or kqueue to detect events. This may not work on remote filesystems such * as NFS mounts. This flag makes fs_event fall back to calling stat() on a * regular interval. * This flag is currently not implemented yet on any backend. */ UV_FS_EVENT_STAT = 2, /* * By default, event watcher, when watching directory, is not registering * (is ignoring) changes in it's subdirectories. * This flag will override this behaviour on platforms that support it. 
*/ UV_FS_EVENT_RECURSIVE = 4 }; const char* uv_strerror(int); const char* uv_err_name(int); const char* uv_version_string(void); const char* uv_handle_type_name(uv_handle_type type); // handle structs and types struct uv_loop_s { void* data; GEVENT_STRUCT_DONE _; }; struct uv_handle_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_idle_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_prepare_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_timer_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_signal_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_poll_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_check_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_async_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_fs_event_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; struct uv_fs_poll_s { struct uv_loop_s* loop; uv_handle_type type; void *data; GEVENT_STRUCT_DONE _; }; typedef struct uv_loop_s uv_loop_t; typedef struct uv_handle_s uv_handle_t; typedef struct uv_idle_s uv_idle_t; typedef struct uv_prepare_s uv_prepare_t; typedef struct uv_timer_s uv_timer_t; typedef struct uv_signal_s uv_signal_t; typedef struct uv_poll_s uv_poll_t; typedef struct uv_check_s uv_check_t; typedef struct uv_async_s uv_async_t; typedef struct uv_fs_event_s uv_fs_event_t; typedef struct uv_fs_poll_s uv_fs_poll_t; size_t uv_handle_size(uv_handle_type); // callbacks with the same signature typedef void (*uv_close_cb)(uv_handle_t *handle); typedef void (*uv_idle_cb)(uv_idle_t *handle); typedef void (*uv_timer_cb)(uv_timer_t *handle); typedef void (*uv_check_cb)(uv_check_t* handle); typedef void (*uv_async_cb)(uv_async_t* handle); typedef void (*uv_prepare_cb)(uv_prepare_t *handle); // callbacks with distinct sigs typedef void (*uv_walk_cb)(uv_handle_t *handle, void *arg); typedef void (*uv_poll_cb)(uv_poll_t *handle, int status, int events); typedef void (*uv_signal_cb)(uv_signal_t *handle, int signum); // Callback passed to uv_fs_event_start() which will be called // repeatedly after the handle is started. If the handle was started // with a directory the filename parameter will be a relative path to // a file contained in the directory. The events parameter is an ORed // mask of uv_fs_event elements. 
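// Editorial sketch (not part of libuv): callback typedefs like the one below
// are what let the Python side hand libuv a compiled trampoline, roughly
//   libuv.uv_fs_event_start(handle, libuv._gevent_fs_event_callback3, path, 0)
// where _gevent_fs_event_callback3 is declared later in this cdef and
// defined in _corecffi_source.c.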
typedef void (*uv_fs_event_cb)(uv_fs_event_t* handle, const char* filename, int events, int status); typedef struct { long tv_sec; long tv_nsec; } uv_timespec_t; typedef struct { uint64_t st_dev; uint64_t st_mode; uint64_t st_nlink; uint64_t st_uid; uint64_t st_gid; uint64_t st_rdev; uint64_t st_ino; uint64_t st_size; uint64_t st_blksize; uint64_t st_blocks; uint64_t st_flags; uint64_t st_gen; uv_timespec_t st_atim; uv_timespec_t st_mtim; uv_timespec_t st_ctim; uv_timespec_t st_birthtim; } uv_stat_t; typedef void (*uv_fs_poll_cb)(uv_fs_poll_t* handle, int status, const uv_stat_t* prev, const uv_stat_t* curr); // loop functions uv_loop_t *uv_default_loop(); uv_loop_t* uv_loop_new(); // not documented; neither is uv_loop_delete int uv_loop_init(uv_loop_t* loop); int uv_loop_fork(uv_loop_t* loop); int uv_loop_alive(const uv_loop_t *loop); int uv_loop_close(uv_loop_t* loop); uint64_t uv_backend_timeout(uv_loop_t* loop); int uv_run(uv_loop_t *, uv_run_mode mode); int uv_backend_fd(const uv_loop_t* loop); // The narrative docs for the two time functions say 'const', // but the header does not. void uv_update_time(uv_loop_t* loop); uint64_t uv_now(uv_loop_t* loop); void uv_stop(uv_loop_t *); void uv_walk(uv_loop_t *loop, uv_walk_cb walk_cb, void *arg); // handle functions // uv_handle_t is the base type for all libuv handle types. void uv_ref(void *); void uv_unref(void *); int uv_has_ref(void *); void uv_close(void *handle, uv_close_cb close_cb); int uv_is_active(void *handle); int uv_is_closing(void *handle); // idle functions // Idle handles will run the given callback once per loop iteration, right // before the uv_prepare_t handles. Note: The notable difference with prepare // handles is that when there are active idle handles, the loop will perform a // zero timeout poll instead of blocking for i/o. Warning: Despite the name, // idle handles will get their callbacks called on every loop iteration, not // when the loop is actually "idle". int uv_idle_init(uv_loop_t *, uv_idle_t *idle); int uv_idle_start(uv_idle_t *idle, uv_idle_cb cb); int uv_idle_stop(uv_idle_t *idle); // prepare functions // Prepare handles will run the given callback once per loop iteration, right // before polling for i/o. int uv_prepare_init(uv_loop_t *, uv_prepare_t *prepare); int uv_prepare_start(uv_prepare_t *prepare, uv_prepare_cb cb); int uv_prepare_stop(uv_prepare_t *prepare); // check functions // Check handles will run the given callback once per loop iteration, right int uv_check_init(uv_loop_t *, uv_check_t *check); int uv_check_start(uv_check_t *check, uv_check_cb cb); int uv_check_stop(uv_check_t *check); // async functions // Async handles allow the user to "wakeup" the event loop and get a callback called from another thread. int uv_async_init(uv_loop_t *, uv_async_t*, uv_async_cb); int uv_async_send(uv_async_t*); // timer functions // Timer handles are used to schedule callbacks to be called in the future. int uv_timer_init(uv_loop_t *, uv_timer_t *handle); int uv_timer_start(uv_timer_t *handle, uv_timer_cb cb, uint64_t timeout, uint64_t repeat); int uv_timer_stop(uv_timer_t *handle); int uv_timer_again(uv_timer_t *handle); void uv_timer_set_repeat(uv_timer_t *handle, uint64_t repeat); uint64_t uv_timer_get_repeat(const uv_timer_t *handle); // signal functions // Signal handles implement Unix style signal handling on a per-event loop // bases. 
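// Editorial note: gevent's libuv loop uses a signal watcher of this kind for
// SIGCHLD (see python_sigchld_callback in loop.py); other Python-level
// signal handlers are only noticed by the periodic SIGNAL_CHECK_INTERVAL_MS
// timer that loop.py starts, because libuv otherwise sleeps until i/o or a
// timer expires.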
int uv_signal_init(uv_loop_t *loop, uv_signal_t *handle); int uv_signal_start(uv_signal_t *handle, uv_signal_cb signal_cb, int signum); int uv_signal_stop(uv_signal_t *handle); // poll functions Poll handles are used to watch file descriptors for // readability and writability, similar to the purpose of poll(2). It // is not okay to have multiple active poll handles for the same // socket, this can cause libuv to busyloop or otherwise malfunction. // // The purpose of poll handles is to enable integrating external // libraries that rely on the event loop to signal it about the socket // status changes, like c-ares or libssh2. Using uv_poll_t for any // other purpose is not recommended; uv_tcp_t, uv_udp_t, etc. provide // an implementation that is faster and more scalable than what can be // achieved with uv_poll_t, especially on Windows. // // Note On windows only sockets can be polled with poll handles. On // Unix any file descriptor that would be accepted by poll(2) can be // used. int uv_poll_init(uv_loop_t *loop, uv_poll_t *handle, int fd); // Initialize the handle using a socket descriptor. On Unix this is // identical to uv_poll_init(). On windows it takes a SOCKET handle; // SOCKET handles are another name for HANDLE objects in win32, and // those are defined as PVOID, even though they are not actually // pointers (they're small integers). CPython and PyPy both return // the SOCKET (as cast to an int) from the socket.fileno() method. // libuv uses ``uv_os_sock_t`` for this type, which is defined as an // int on unix. int uv_poll_init_socket(uv_loop_t* loop, uv_poll_t* handle, GEVENT_UV_OS_SOCK_T socket); int uv_poll_start(uv_poll_t *handle, int events, uv_poll_cb cb); int uv_poll_stop(uv_poll_t *handle); // FS Event handles allow the user to monitor a given path for // changes, for example, if the file was renamed or there was a // generic change in it. This handle uses the best backend for the job // on each platform. // // Thereas also uv_fs_poll_t that uses stat for filesystems where // the kernel event isn't available. int uv_fs_event_init(uv_loop_t*, uv_fs_event_t*); int uv_fs_event_start(uv_fs_event_t*, uv_fs_event_cb, const char* path, unsigned int flags); int uv_fs_event_stop(uv_fs_event_t*); int uv_fs_event_getpath(uv_fs_event_t*, char* buffer, size_t* size); // FS Poll handles allow the user to monitor a given path for changes. // Unlike uv_fs_event_t, fs poll handles use stat to detect when a // file has changed so they can work on file systems where fs event // handles can't. // // This is a closer match to libev. int uv_fs_poll_init(void*, void*); int uv_fs_poll_start(void*, uv_fs_poll_cb, const char* path, unsigned int); int uv_fs_poll_stop(void*); // CPU Info unsigned int uv_available_parallelism(void); /* We don't have uv_cpu_info medeled int uv_cpu_info(uv_cpu_info_t** cpu_infos, int* count); void uv_free_cpu_info(uv_cpu_info_t* cpu_infos, int count); */ // DNS /* We don't have sockaddr modeled. int uv_ip4_name(const struct sockaddr_in* src, char* dst, size_t size); int uv_ip6_name(const struct sockaddr_in6* src, char* dst, size_t size); int uv_ip_name(const struct sockaddr* src, char* dst, size_t size); int uv_inet_ntop(int af, const void* src, char* dst, size_t size); int uv_inet_pton(int af, const char* src, void* dst); */ /* Standard library */ void* memset(void *b, int c, size_t len); /* gevent callbacks */ // Implemented in Python code as 'def_extern'. 
In the case of poll callbacks and fs // callbacks, if *status* is less than 0, it will be passed in the revents // field. In cases of no extra arguments, revents will be 0. // These will be created as static functions at the end of the // _source.c and must be pre-declared at the top of that file if we // call them typedef void* GeventWatcherObject; extern "Python" { // Standard gevent._ffi.loop callbacks. int python_callback(GeventWatcherObject handle, int revents); void python_handle_error(GeventWatcherObject handle, int revents); void python_stop(GeventWatcherObject handle); void python_check_callback(uv_check_t* handle); void python_prepare_callback(uv_prepare_t* handle); void python_timer0_callback(uv_check_t* handle); // libuv specific callback void _uv_close_callback(uv_handle_t* handle); void python_sigchld_callback(uv_signal_t* handle, int signum); void python_queue_callback(uv_handle_t* handle, int revents); } static void _gevent_signal_callback1(uv_signal_t* handle, int arg); static void _gevent_async_callback0(uv_async_t* handle); static void _gevent_prepare_callback0(uv_prepare_t* handle); static void _gevent_timer_callback0(uv_timer_t* handle); static void _gevent_check_callback0(uv_check_t* handle); static void _gevent_idle_callback0(uv_idle_t* handle); static void _gevent_poll_callback2(uv_poll_t* handle, int status, int events); static void _gevent_fs_event_callback3(uv_fs_event_t* handle, const char* filename, int events, int status); typedef struct _gevent_fs_poll_s { uv_fs_poll_t handle; uv_stat_t curr; uv_stat_t prev; } gevent_fs_poll_t; static void _gevent_fs_poll_callback3(uv_fs_poll_t* handle, int status, const uv_stat_t* prev, const uv_stat_t* curr); static void gevent_uv_walk_callback_close(uv_handle_t* handle, void* arg); static void gevent_close_all_handles(uv_loop_t* loop); /* gevent utility functions */ static void gevent_zero_timer(uv_timer_t* handle); static void gevent_zero_prepare(uv_prepare_t* handle); static void gevent_zero_check(uv_check_t* handle); static void gevent_zero_loop(uv_loop_t* handle); static void gevent_set_uv_alloc(); static void gevent_test_setup(); gevent-24.11.1/src/gevent/libuv/_corecffi_source.c000066400000000000000000000130301471441230600220120ustar00rootroot00000000000000#include #include #include "uv.h" #include "Python.h" typedef void* GeventWatcherObject; #ifdef __clang__ #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wunused" #pragma clang diagnostic ignored "-Wunused-parameter" #pragma clang diagnostic ignored "-Wundefined-internal" #endif static int python_callback(GeventWatcherObject handle, int revents); static void python_queue_callback(uv_handle_t* watcher_ptr, int revents); static void python_handle_error(GeventWatcherObject handle, int revents); static void python_stop(GeventWatcherObject handle); static void _gevent_generic_callback1(uv_handle_t* watcher, int arg) { python_queue_callback(watcher, arg); } static void _gevent_generic_callback0(uv_handle_t* handle) { _gevent_generic_callback1(handle, 0); } static void _gevent_async_callback0(uv_async_t* handle) { _gevent_generic_callback0((uv_handle_t*)handle); } static void _gevent_timer_callback0(uv_timer_t* handle) { _gevent_generic_callback0((uv_handle_t*)handle); } static void _gevent_prepare_callback0(uv_prepare_t* handle) { _gevent_generic_callback0((uv_handle_t*)handle); } static void _gevent_check_callback0(uv_check_t* handle) { _gevent_generic_callback0((uv_handle_t*)handle); } static void _gevent_idle_callback0(uv_idle_t* handle) { 
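    // Editorial note: every handle-specific trampoline in this file funnels
    // into _gevent_generic_callback1(), whose python_queue_callback() hook
    // (a cffi 'extern "Python"' function) merely queues (watcher, revents)
    // on the loop; the user's callback then runs later in the loop's
    // prepare/check pass instead of being invoked directly from C here.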
_gevent_generic_callback0((uv_handle_t*)handle); } static void _gevent_signal_callback1(uv_signal_t* handle, int signum) { _gevent_generic_callback1((uv_handle_t*)handle, signum); } static void _gevent_poll_callback2(void* handle, int status, int events) { _gevent_generic_callback1(handle, status < 0 ? status : events); } static void _gevent_fs_event_callback3(void* handle, const char* filename, int events, int status) { _gevent_generic_callback1(handle, status < 0 ? status : events); } typedef struct _gevent_fs_poll_s { uv_fs_poll_t handle; uv_stat_t curr; uv_stat_t prev; } gevent_fs_poll_t; static void _gevent_fs_poll_callback3(void* handlep, int status, const uv_stat_t* prev, const uv_stat_t* curr) { // stat pointers are valid for this callback only. // if given, copy them into our structure, where they can be reached // from python, just like libev's watcher does, before calling // the callback. // The callback is invoked with status < 0 if path does not exist // or is inaccessible. The watcher is not stopped but your // callback is not called again until something changes (e.g. when // the file is created or the error reason changes). // In that case the fields will be 0 in curr/prev. gevent_fs_poll_t* handle = (gevent_fs_poll_t*)handlep; handle->curr = *curr; handle->prev = *prev; _gevent_generic_callback1((uv_handle_t*)handle, 0); } static void gevent_uv_walk_callback_close(uv_handle_t* handle, void* arg) { if( handle && !uv_is_closing(handle) ) { uv_close(handle, NULL); handle->data = NULL; } } static void gevent_close_all_handles(uv_loop_t* loop) { if (loop) { uv_walk(loop, gevent_uv_walk_callback_close, NULL); } } static void gevent_zero_timer(uv_timer_t* handle) { memset(handle, 0, sizeof(uv_timer_t)); } static void gevent_zero_check(uv_check_t* handle) { memset(handle, 0, sizeof(uv_check_t)); } static void gevent_zero_prepare(uv_prepare_t* handle) { memset(handle, 0, sizeof(uv_prepare_t)); } static void gevent_zero_loop(uv_loop_t* handle) { memset(handle, 0, sizeof(uv_loop_t)); } /*** * Allocation functions */ #include "_ffi/alloc.c" static void* _gevent_uv_malloc(size_t size) { return gevent_realloc(NULL, size); } static void* _gevent_uv_realloc(void* ptr, size_t size) { return gevent_realloc(ptr, size); } static void _gevent_uv_free(void* ptr) { gevent_realloc(ptr, 0); } static void* _gevent_uv_calloc(size_t count, size_t size) { // We assume no overflows. Not using PyObject_Calloc because // it's not available prior to 3.5 and isn't in PyPy. 
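    // Editorial note: the malloc/realloc/free shims above and this calloc
    // emulation all defer to gevent_realloc() from _ffi/alloc.c;
    // gevent_set_uv_alloc() below hands the four of them to
    // uv_replace_allocator() so libuv's internal allocations go through
    // gevent's allocator.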
void* result; result = _gevent_uv_malloc(count * size); if(result) { memset(result, 0, count * size); } return result; } static void gevent_set_uv_alloc() { uv_replace_allocator(_gevent_uv_malloc, _gevent_uv_realloc, _gevent_uv_calloc, _gevent_uv_free); } /*** * Utility Functions */ #ifdef __APPLE__ #include #include #include // based on code from libuv static void gevent_move_pthread_to_realtime_scheduling_class(pthread_t pthread) { mach_timebase_info_data_t timebase_info; mach_timebase_info(&timebase_info); const uint64_t NANOS_PER_MSEC = 1000000ULL; double clock2abs = ((double)timebase_info.denom / (double)timebase_info.numer) * NANOS_PER_MSEC; thread_time_constraint_policy_data_t policy; policy.period = 0; policy.computation = (uint32_t)(5 * clock2abs); // 5 ms of work policy.constraint = (uint32_t)(10 * clock2abs); policy.preemptible = FALSE; int kr = thread_policy_set( pthread_mach_thread_np(pthread), THREAD_TIME_CONSTRAINT_POLICY, (thread_policy_t)&policy, THREAD_TIME_CONSTRAINT_POLICY_COUNT); if (kr != KERN_SUCCESS) { mach_error("thread_policy_set:", kr); exit(1); } } static void gevent_test_setup() { gevent_move_pthread_to_realtime_scheduling_class(pthread_self()); } #else static void gevent_test_setup() {} #endif #ifdef __clang__ #pragma clang diagnostic pop #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wunreachable-code" #endif gevent-24.11.1/src/gevent/libuv/loop.py000066400000000000000000000657521471441230600177140ustar00rootroot00000000000000""" libuv loop implementation """ # pylint: disable=no-member from __future__ import absolute_import, print_function import os from collections import defaultdict from collections import namedtuple from operator import delitem import signal from zope.interface import implementer from gevent import getcurrent from gevent.exceptions import LoopExit from gevent._ffi import _dbg # pylint: disable=unused-import from gevent._ffi.loop import AbstractLoop from gevent._ffi.loop import assign_standard_callbacks from gevent._ffi.loop import AbstractCallbacks from gevent._interfaces import ILoop from gevent.libuv import _corecffi # pylint:disable=no-name-in-module,import-error ffi = _corecffi.ffi libuv = _corecffi.lib __all__ = [ ] class _Callbacks(AbstractCallbacks): def _find_loop_from_c_watcher(self, watcher_ptr): loop_handle = ffi.cast('uv_handle_t*', watcher_ptr).data return self.from_handle(loop_handle) if loop_handle else None def python_sigchld_callback(self, watcher_ptr, _signum): self.from_handle(ffi.cast('uv_handle_t*', watcher_ptr).data)._sigchld_callback() def python_timer0_callback(self, watcher_ptr): return self.python_prepare_callback(watcher_ptr) def python_queue_callback(self, watcher_ptr, revents): watcher_handle = watcher_ptr.data the_watcher = self.from_handle(watcher_handle) the_watcher.loop._queue_callback(watcher_ptr, revents) _callbacks = assign_standard_callbacks( ffi, libuv, _Callbacks, [ 'python_sigchld_callback', 'python_timer0_callback', 'python_queue_callback', ] ) from gevent._ffi.loop import EVENTS GEVENT_CORE_EVENTS = EVENTS # export from gevent.libuv import watcher as _watchers # pylint:disable=no-name-in-module _events_to_str = _watchers._events_to_str # export READ = libuv.UV_READABLE WRITE = libuv.UV_WRITABLE def get_version(): uv_bytes = ffi.string(libuv.uv_version_string()) if not isinstance(uv_bytes, str): # Py3 uv_str = uv_bytes.decode("ascii") else: uv_str = uv_bytes return 'libuv-' + uv_str def get_header_version(): return 'libuv-%d.%d.%d' % (libuv.UV_VERSION_MAJOR, 
libuv.UV_VERSION_MINOR, libuv.UV_VERSION_PATCH) def supported_backends(): return ['default'] libuv.gevent_set_uv_alloc() @implementer(ILoop) class loop(AbstractLoop): # libuv parameters simply won't accept anything lower than 1ms. In # practice, looping on gevent.sleep(0.001) takes about 0.00138 s # (+- 0.000036s) approx_timer_resolution = 0.001 # 1ms # It's relatively more expensive to break from the callback loop # because we don't do it "inline" from C, we're looping in Python CALLBACK_CHECK_COUNT = max(AbstractLoop.CALLBACK_CHECK_COUNT, 100) # Defines the maximum amount of time the loop will sleep waiting for IO, # which is also the interval at which signals are checked and handled. SIGNAL_CHECK_INTERVAL_MS = 300 error_handler = None _CHECK_POINTER = 'uv_check_t *' _PREPARE_POINTER = 'uv_prepare_t *' _PREPARE_CALLBACK_SIG = "void(*)(void*)" _TIMER_POINTER = _CHECK_POINTER # This is poorly named. It's for the callback "timer" def __init__(self, flags=None, default=None): AbstractLoop.__init__(self, ffi, libuv, _watchers, flags, default) self._child_watchers = defaultdict(list) self._io_watchers = {} self._fork_watchers = set() self._pid = os.getpid() # pylint:disable-next=superfluous-parens self._default = (self._ptr == libuv.uv_default_loop()) self._queued_callbacks = [] def _queue_callback(self, watcher_ptr, revents): self._queued_callbacks.append((watcher_ptr, revents)) def _init_loop(self, flags, default): if default is None: default = True # Unlike libev, libuv creates a new default # loop automatically if the old default loop was # closed. if default: # XXX: If the default loop had been destroyed, this # will create a new one, but we won't destroy it ptr = libuv.uv_default_loop() else: ptr = libuv.uv_loop_new() if not ptr: raise SystemError("Failed to get loop") # Track whether or not any object has destroyed # this loop. See _can_destroy_default_loop ptr.data = self._handle_to_self return ptr _signal_idle = None @property def ptr(self): if not self._ptr: return None if self._ptr and not self._ptr.data: # Another instance of the Python loop destroyed # the C loop. It was probably the default. self._ptr = None return self._ptr def _init_and_start_check(self): libuv.uv_check_init(self.ptr, self._check) libuv.uv_check_start(self._check, libuv.python_check_callback) libuv.uv_unref(self._check) # We also have to have an idle watcher to be able to handle # signals in a timely manner. Without them, libuv won't loop again # and call into its check and prepare handlers. # Note that this basically forces us into a busy-loop # XXX: As predicted, using an idle watcher causes our process # to eat 100% CPU time. We instead use a timer with a max of a .3 second # delay to notice signals. Note that this timeout also implements fork # watchers, effectively. # XXX: Perhaps we could optimize this to notice when there are other # timers in the loop and start/stop it then. When we have a callback # scheduled, this should also be the same and unnecessary? # libev does takes this basic approach on Windows. self._signal_idle = ffi.new("uv_timer_t*") libuv.uv_timer_init(self.ptr, self._signal_idle) self._signal_idle.data = self._handle_to_self sig_cb = ffi.cast('void(*)(uv_timer_t*)', libuv.python_check_callback) libuv.uv_timer_start(self._signal_idle, sig_cb, self.SIGNAL_CHECK_INTERVAL_MS, self.SIGNAL_CHECK_INTERVAL_MS) libuv.uv_unref(self._signal_idle) def __check_and_die(self): if not self.ptr: # We've been destroyed during the middle of self.run(). 
# This method is being called into from C, and it's not # safe to go back to C (Windows in particular can abort # the process with "GetQueuedCompletionStatusEx: (6) The # handle is invalid.") So switch to the parent greenlet. getcurrent().parent.throw(LoopExit('Destroyed during run')) def _run_callbacks(self): self.__check_and_die() # Manually handle fork watchers. curpid = os.getpid() if curpid != self._pid: self._pid = curpid for watcher in self._fork_watchers: watcher._on_fork() # The contents of queued_callbacks at this point should be timers # that expired when the loop began along with any idle watchers. # We need to run them so that any manual callbacks they want to schedule # get added to the list and ran next before we go on to poll for IO. # This is critical for libuv on linux: closing a socket schedules some manual # callbacks to actually stop the watcher; if those don't run before # we poll for IO, then libuv can abort the process for the closed file descriptor. # XXX: There's still a race condition here because we may not run *all* the manual # callbacks. We need a way to prioritize those. # Running these before the manual callbacks lead to some # random test failures. In test__event.TestEvent_SetThenClear # we would get a LoopExit sometimes. The problem occurred when # a timer expired on entering the first loop; we would process # it there, and then process the callback that it created # below, leaving nothing for the loop to do. Having the # self.run() manually process manual callbacks before # continuing solves the problem. (But we must still run callbacks # here again.) self._prepare_ran_callbacks = self.__run_queued_callbacks() super(loop, self)._run_callbacks() def _init_and_start_prepare(self): libuv.uv_prepare_init(self.ptr, self._prepare) libuv.uv_prepare_start(self._prepare, libuv.python_prepare_callback) libuv.uv_unref(self._prepare) def _init_callback_timer(self): libuv.uv_check_init(self.ptr, self._timer0) def _stop_callback_timer(self): libuv.uv_check_stop(self._timer0) def _start_callback_timer(self): # The purpose of the callback timer is to ensure that we run # callbacks as soon as possible on the next iteration of the event loop. # In libev, we set a 0 duration timer with a no-op callback. # This executes immediately *after* the IO poll is done (it # actually determines the time that the IO poll will block # for), so having the timer present simply spins the loop, and # our normal prepare watcher kicks in to run the callbacks. # In libuv, however, timers are run *first*, before prepare # callbacks and before polling for IO. So a no-op 0 duration # timer actually does *nothing*. (Also note that libev queues all # watchers found during IO poll to run at the end (I think), while libuv # runs them in uv__io_poll itself.) # From the loop inside uv_run: # while True: # uv__update_time(loop); # uv__run_timers(loop); # # we don't use pending watchers. They are how libuv # # implements the pipe/udp/tcp streams. # ran_pending = uv__run_pending(loop); # uv__run_idle(loop); # uv__run_prepare(loop); # ... # uv__io_poll(loop, timeout); # <--- IO watchers run here! 
# uv__run_check(loop); # libev looks something like this (pseudo code because the real code is # hard to read): # # do { # run_fork_callbacks(); # run_prepare_callbacks(); # timeout = min(time of all timers or normal block time) # io_poll() # <--- Only queues IO callbacks # update_now(); calculate_expired_timers(); # run callbacks in this order: (although specificying priorities changes it) # check # stat # child # signal # timer # io # } # So instead of running a no-op and letting the side-effect of spinning # the loop run the callbacks, we must explicitly run them here. # If we don't, test__systemerror:TestCallback will be flaky, failing # one time out of ~20, depending on timing. # To get them to run immediately after this current loop, # we use a check watcher, instead of a 0 duration timer entirely. # If we use a 0 duration timer, we can get stuck in a timer loop. # Python 3.6 fails in test_ftplib.py # As a final note, if we have not yet entered the loop *at # all*, and a timer was created with a duration shorter than # the amount of time it took for us to enter the loop in the # first place, it may expire and get called before our callback # does. This could also lead to test__systemerror:TestCallback # appearing to be flaky. # As yet another final note, if we are currently running a # timer callback, meaning we're inside uv__run_timers() in C, # and the Python starts a new timer, if the Python code then # update's the loop's time, it's possible that timer will # expire *and be run in the same iteration of the loop*. This # is trivial to do: In sequential code, anything after # `gevent.sleep(0.1)` is running in a timer callback. Starting # a new timer---e.g., another gevent.sleep() call---will # update the time, *before* uv__run_timers exits, meaning # other timers get a chance to run before our check or prepare # watcher callbacks do. Therefore, we do indeed have to have a 0 # timer to run callbacks---it gets inserted before any other user # timers---ideally, this should be especially careful about how much time # it runs for. # AND YET: We can't actually do that. We get timeouts that I haven't fully # investigated if we do. Probably stuck in a timer loop. # As a partial remedy to this, unlike libev, our timer watcher # class doesn't update the loop time by default. libuv.uv_check_start(self._timer0, libuv.python_timer0_callback) def _stop_aux_watchers(self): super(loop, self)._stop_aux_watchers() assert self._prepare assert self._check assert self._signal_idle libuv.uv_prepare_stop(self._prepare) libuv.uv_ref(self._prepare) # Why are we doing this? libuv.uv_check_stop(self._check) libuv.uv_ref(self._check) libuv.uv_timer_stop(self._signal_idle) libuv.uv_ref(self._signal_idle) libuv.uv_check_stop(self._timer0) def _setup_for_run_callback(self): self._start_callback_timer() libuv.uv_ref(self._timer0) def _can_destroy_loop(self, ptr): return ptr def __close_loop(self, ptr): closed_failed = 1 while closed_failed: closed_failed = libuv.uv_loop_close(ptr) if not closed_failed: break if closed_failed != libuv.UV_EBUSY: raise SystemError("Unknown close failure reason", closed_failed) # We already closed all the handles. Run the loop # once to let them be cut off from the loop. ran_has_more_callbacks = libuv.uv_run(ptr, libuv.UV_RUN_ONCE) if ran_has_more_callbacks: libuv.uv_run(ptr, libuv.UV_RUN_NOWAIT) def _destroy_loop(self, ptr): # We're being asked to destroy a loop that's, potentially, at # the time it was constructed, was the default loop. 
If loop # objects were constructed more than once, it may have already # been destroyed, though. We track this in the data member. data = ptr.data ptr.data = ffi.NULL try: if data: libuv.uv_stop(ptr) libuv.gevent_close_all_handles(ptr) finally: ptr.data = ffi.NULL try: if data: self.__close_loop(ptr) finally: # Destroy the native resources *after* we have closed # the loop. If we do it before, walking the handles # attached to the loop is likely to segfault. # Note that these may have been closed already if the default loop was shared. if data: libuv.gevent_zero_check(self._check) libuv.gevent_zero_check(self._timer0) libuv.gevent_zero_prepare(self._prepare) libuv.gevent_zero_timer(self._signal_idle) libuv.gevent_zero_loop(ptr) del self._check del self._prepare del self._signal_idle del self._timer0 # Destroy any watchers we're still holding on to. del self._io_watchers del self._fork_watchers del self._child_watchers _HandleState = namedtuple("HandleState", ['handle', 'type', 'watcher', 'ref', 'active', 'closing']) def debug(self): """ Return all the handles that are open and their ref status. """ if not self.ptr: return ["Loop has been destroyed"] handle_state = self._HandleState handles = [] # XXX: Convert this to a modern callback. def walk(handle, _arg): data = handle.data if data: watcher = ffi.from_handle(data) else: watcher = None handles.append(handle_state(handle, ffi.string(libuv.uv_handle_type_name(handle.type)), watcher, libuv.uv_has_ref(handle), libuv.uv_is_active(handle), libuv.uv_is_closing(handle))) libuv.uv_walk(self.ptr, ffi.callback("void(*)(uv_handle_t*,void*)", walk), ffi.NULL) return handles def ref(self): pass def unref(self): # XXX: Called by _run_callbacks. pass def break_(self, how=None): if self.ptr: libuv.uv_stop(self.ptr) def reinit(self): # TODO: How to implement? We probably have to simply # re-__init__ this whole class? Does it matter? # OR maybe we need to uv_walk() and close all the handles? # XXX: libuv < 1.12 simply CANNOT handle a fork unless you immediately # exec() in the child. There are multiple calls to abort() that # will kill the child process: # - The OS X poll implementation (kqueue) aborts on an error return # value; since kqueue FDs can't be inherited, then the next call # to kqueue in the child will fail and get aborted; fork() is likely # to be called during the gevent loop, meaning we're deep inside the # runloop already, so we can't even close the loop that we're in: # it's too late, the next call to kqueue is already scheduled. # - The threadpool, should it be in use, also aborts # (https://github.com/joyent/libuv/pull/1136) # - There global shared state that breaks signal handling # and leads to an abort() in the child, EVEN IF the loop in the parent # had already been closed # (https://github.com/joyent/libuv/issues/1405) # In 1.12, the uv_loop_fork function was added (by gevent!) libuv.uv_loop_fork(self.ptr) _prepare_ran_callbacks = False def __run_queued_callbacks(self): if not self._queued_callbacks: return False cbs = self._queued_callbacks[:] del self._queued_callbacks[:] for watcher_ptr, arg in cbs: handle = watcher_ptr.data if not handle: # It's been stopped and possibly closed assert not libuv.uv_is_active(watcher_ptr) continue val = _callbacks.python_callback(handle, arg) if val == -1: # Failure. _callbacks.python_handle_error(handle, arg) elif val == 1: # Success, and we may need to close the Python watcher. if not libuv.uv_is_active(watcher_ptr): # The callback closed the native watcher resources. Good. 
# It's *supposed* to also reset the .data handle to NULL at # that same time. If it resets it to something else, we're # re-using the same watcher object, and that's not correct either. # On Windows in particular, if the .data handle is changed because # the IO multiplexer is being restarted, trying to dereference the # *old* handle can crash with an FFI error. handle_after_callback = watcher_ptr.data try: if handle_after_callback and handle_after_callback == handle: _callbacks.python_stop(handle_after_callback) finally: watcher_ptr.data = ffi.NULL return True def run(self, nowait=False, once=False): # we can only respect one flag or the other. # nowait takes precedence because it can't block mode = libuv.UV_RUN_DEFAULT if once: mode = libuv.UV_RUN_ONCE if nowait: mode = libuv.UV_RUN_NOWAIT if mode == libuv.UV_RUN_DEFAULT: while self._ptr and self._ptr.data: # This is here to better preserve order guarantees. # See _run_callbacks for details. # It may get run again from the prepare watcher, so # potentially we could take twice as long as the # switch interval. # If we have *lots* of callbacks to run, we may not actually # get through them all before we're requested to poll for IO; # so in that case, just spin the loop once (UV_RUN_NOWAIT) and # go again. self._run_callbacks() self._prepare_ran_callbacks = False # UV_RUN_ONCE will poll for IO, blocking for up to the time needed # for the next timer to expire. Worst case, that's our _signal_idle # timer, about 1/3 second. UV_RUN_ONCE guarantees that some forward progress # is made, either by an IO watcher or a timer. # # In contrast, UV_RUN_NOWAIT makes no such guarantee, it only polls for IO once and # immediately returns; it does not update the loop time or timers after # polling for IO. run_mode = ( libuv.UV_RUN_ONCE if not self._callbacks and not self._queued_callbacks else libuv.UV_RUN_NOWAIT ) ran_status = libuv.uv_run(self._ptr, run_mode) # Note that we run queued callbacks when the prepare watcher runs, # thus accounting for timers that expired before polling for IO, # and idle watchers. This next call should get IO callbacks and # callbacks from timers that expired *after* polling for IO. ran_callbacks = self.__run_queued_callbacks() if not ran_status and not ran_callbacks and not self._prepare_ran_callbacks: # A return of 0 means there are no referenced and # active handles. The loop is over. # If we didn't run any callbacks, then we couldn't schedule # anything to switch in the future, so there's no point # running again. 
return ran_status return 0 # Somebody closed the loop result = libuv.uv_run(self._ptr, mode) self.__run_queued_callbacks() return result def now(self): self.__check_and_die() # libuv's now is expressed as an integer number of # milliseconds, so to get it compatible with time.time units # that this method is supposed to return, we have to divide by 1000.0 now = libuv.uv_now(self.ptr) return now / 1000.0 def update_now(self): self.__check_and_die() libuv.uv_update_time(self.ptr) def fileno(self): if self.ptr: fd = libuv.uv_backend_fd(self._ptr) if fd >= 0: return fd _sigchld_watcher = None _sigchld_callback_ffi = None def install_sigchld(self): if not self.default: return if self._sigchld_watcher: return self._sigchld_watcher = ffi.new('uv_signal_t*') libuv.uv_signal_init(self.ptr, self._sigchld_watcher) self._sigchld_watcher.data = self._handle_to_self # Don't let this keep the loop alive libuv.uv_unref(self._sigchld_watcher) libuv.uv_signal_start(self._sigchld_watcher, libuv.python_sigchld_callback, signal.SIGCHLD) def reset_sigchld(self): if not self.default or not self._sigchld_watcher: return libuv.uv_signal_stop(self._sigchld_watcher) # Must go through this to manage the memory lifetime # correctly. Alternately, we could just stop it and restart # it in install_sigchld? _watchers.watcher._watcher_ffi_close(self._sigchld_watcher) del self._sigchld_watcher def _sigchld_callback(self): # Signals can arrive at (relatively) any time. To eliminate # race conditions, and behave more like libev, we "queue" # sigchld to run when we run callbacks. while True: try: pid, status, _usage = os.wait3(os.WNOHANG) except OSError: # Python 3 raises ChildProcessError break if pid == 0: break children_watchers = self._child_watchers.get(pid, []) + self._child_watchers.get(0, []) for watcher in children_watchers: self.run_callback(watcher._set_waitpid_status, pid, status) # Don't invoke child watchers for 0 more than once self._child_watchers[0] = [] def _register_child_watcher(self, watcher): self._child_watchers[watcher._pid].append(watcher) def _unregister_child_watcher(self, watcher): try: # stop() should be idempotent self._child_watchers[watcher._pid].remove(watcher) except ValueError: pass # Now's a good time to clean up any dead watchers we don't need # anymore for pid in list(self._child_watchers): if not self._child_watchers[pid]: del self._child_watchers[pid] def io(self, fd, events, ref=True, priority=None): # We rely on hard references here and explicit calls to # close() on the returned object to correctly manage # the watcher lifetimes. io_watchers = self._io_watchers try: io_watcher = io_watchers[fd] assert io_watcher._multiplex_watchers, ("IO Watcher %s unclosed but should be dead" % io_watcher) except KeyError: # Start the watcher with just the events that we're interested in. # as multiplexers are added, the real event mask will be updated to keep in sync. # If we watch for too much, we get spurious wakeups and busy loops. io_watcher = self._watchers.io(self, fd, 0) io_watchers[fd] = io_watcher io_watcher._no_more_watchers = lambda: delitem(io_watchers, fd) return io_watcher.multiplex(events) def prepare(self, ref=True, priority=None): # We run arbitrary code in python_prepare_callback. That could switch # greenlets. If it does that while also manipulating the active prepare # watchers, we could corrupt the process state, since the prepare watcher # queue is iterated on the stack (on unix). We could workaround this by implementing # prepare watchers in pure Python. 
# See https://github.com/gevent/gevent/issues/1126 raise TypeError("prepare watchers are not currently supported in libuv. " "If you need them, please contact the maintainers.") gevent-24.11.1/src/gevent/libuv/watcher.py000066400000000000000000000660201471441230600203650ustar00rootroot00000000000000# pylint: disable=too-many-lines, protected-access, redefined-outer-name, not-callable # pylint: disable=no-member from __future__ import absolute_import, print_function import functools import sys from gevent.libuv import _corecffi # pylint:disable=no-name-in-module,import-error # Nothing public here __all__ = [] ffi = _corecffi.ffi libuv = _corecffi.lib from gevent._ffi import watcher as _base from gevent._ffi import _dbg # A set of uv_handle_t* CFFI objects. Kept around # to keep the memory alive until libuv is done with them. class _ClosingWatchers(dict): __slots__ = () def remove(self, obj): try: del self[obj] except KeyError: # pragma: no cover # This has been seen to happen if the module is executed twice # and so the callback doesn't match the storage seen by watcher objects. print( 'gevent error: Unable to remove closing watcher from keepaliveset. ' 'Has the module state been corrupted or executed more than once?', file=sys.stderr ) _closing_watchers = _ClosingWatchers() # In debug mode, it would be nice to be able to clear the memory of # the watcher (its size determined by # libuv.uv_handle_size(ffi_watcher.type)) using memset so that if we # are using it after it's supposedly been closed and deleted, we'd # catch it sooner. BUT doing so breaks test__threadpool. We get errors # about `pthread_mutex_lock[3]: Invalid argument` (and sometimes we # crash) suggesting either that we're writing on memory that doesn't # belong to us, somehow, or that we haven't actually lost all # references... _uv_close_callback = ffi.def_extern(name='_uv_close_callback')( _closing_watchers.remove ) _events = [(libuv.UV_READABLE, "READ"), (libuv.UV_WRITABLE, "WRITE")] def _events_to_str(events): # export return _base.events_to_str(events, _events) class UVFuncallError(ValueError): pass class libuv_error_wrapper(object): # Makes sure that everything stored as a function # on the wrapper instances (classes, actually, # because this is used by the metaclass) # checks its return value and raises an error. 
# This expects that everything we call has an int # or void return value and follows the conventions # of error handling (that negative values are errors) def __init__(self, uv): self._libuv = uv def __getattr__(self, name): libuv_func = getattr(self._libuv, name) @functools.wraps(libuv_func) def wrap(*args, **kwargs): if args and isinstance(args[0], watcher): args = args[1:] res = libuv_func(*args, **kwargs) if res is not None and res < 0: raise UVFuncallError( str(ffi.string(libuv.uv_err_name(res)).decode('ascii') + ' ' + ffi.string(libuv.uv_strerror(res)).decode('ascii')) + " Args: " + repr(args) + " KWARGS: " + repr(kwargs) ) return res setattr(self, name, wrap) return wrap class ffi_unwrapper(object): # undoes the wrapping of libuv_error_wrapper for # the methods used by the metaclass that care def __init__(self, ff): self._ffi = ff def __getattr__(self, name): return getattr(self._ffi, name) def addressof(self, lib, name): assert isinstance(lib, libuv_error_wrapper) return self._ffi.addressof(libuv, name) class watcher(_base.watcher): _FFI = ffi_unwrapper(ffi) _LIB = libuv_error_wrapper(libuv) _watcher_prefix = 'uv' _watcher_struct_pattern = '%s_t' @classmethod def _watcher_ffi_close(cls, ffi_watcher): # Managing the lifetime of _watcher is tricky. # They have to be uv_close()'d, but that only # queues them to be closed in the *next* loop iteration. # The memory must stay valid for at least that long, # or assert errors are triggered. We can't use a ffi.gc() # pointer to queue the uv_close, because by the time the # destructor is called, there's no way to keep the memory alive # and it could be re-used. # So here we resort to resurrecting the pointer object out # of our scope, keeping it alive past this object's lifetime. # We then use the uv_close callback to handle removing that # reference. There's no context passed to the close callback, # so we have to do this globally. # Sadly, doing this causes crashes if there were multiple # watchers for a given FD, so we have to take special care # about that. See https://github.com/gevent/gevent/issues/790#issuecomment-208076604 # Note that this cannot be a __del__ method, because we store # the CFFI handle to self on self, which is a cycle, and # objects with a __del__ method cannot be collected on CPython < 3.4 # Instead, this is arranged as a callback to GC when the # watcher class dies. Obviously it's important to keep the ffi # watcher alive. # We can pass in "subclasses" of uv_handle_t that line up at the C level, # but that don't in CFFI without a cast. But be careful what we use the cast # for, don't pass it back to C. ffi_handle_watcher = cls._FFI.cast('uv_handle_t*', ffi_watcher) ffi_handle_watcher.data = ffi.NULL if ffi_handle_watcher.type and not libuv.uv_is_closing(ffi_watcher): # If the type isn't set, we were never properly initialized, # and trying to close it results in libuv terminating the process. # Sigh. Same thing if it's already in the process of being # closed. _closing_watchers[ffi_handle_watcher] = ffi_watcher libuv.uv_close(ffi_watcher, libuv._uv_close_callback) def _watcher_ffi_set_init_ref(self, ref): self.ref = ref def _watcher_ffi_init(self, args): # TODO: we could do a better job chokepointing this return self._watcher_init(self.loop.ptr, self._watcher, *args) def _watcher_ffi_start(self): self._watcher_start(self._watcher, self._watcher_callback) def _watcher_ffi_stop(self): if self._watcher: # The multiplexed io watcher deletes self._watcher # when it closes down. 
If that's in the process of # an error handler, AbstractCallbacks.unhandled_onerror # will try to close us again. self._watcher_stop(self._watcher) @_base.only_if_watcher def _watcher_ffi_ref(self): libuv.uv_ref(self._watcher) @_base.only_if_watcher def _watcher_ffi_unref(self): libuv.uv_unref(self._watcher) def _watcher_ffi_start_unref(self): pass def _watcher_ffi_stop_ref(self): pass def _get_ref(self): # Convert 1/0 to True/False if self._watcher is None: return None return bool(libuv.uv_has_ref(self._watcher)) def _set_ref(self, value): if value: self._watcher_ffi_ref() else: self._watcher_ffi_unref() ref = property(_get_ref, _set_ref) def feed(self, _revents, _callback, *_args): # pylint:disable-next=broad-exception-raised raise Exception("Not implemented") class io(_base.IoMixin, watcher): _watcher_type = 'poll' _watcher_callback_name = '_gevent_poll_callback2' # On Windows is critical to be able to garbage collect these # objects in a timely fashion so that they don't get reused # for multiplexing completely different sockets. This is because # uv_poll_init_socket does a lot of setup for the socket to make # polling work. If get reused for another socket that has the same # fileno, things break badly. (In theory this could be a problem # on posix too, but in practice it isn't). # TODO: We should probably generalize this to all # ffi watchers. Avoiding GC cycles as much as possible # is a good thing, and potentially allocating new handles # as needed gets us better memory locality. # Especially on Windows, we must also account for the case that a # reference to this object has leaked (e.g., the socket object is # still around), but the fileno has been closed and a new one # opened. We must still get a new native watcher at that point. We # handle this case by simply making sure that we don't even have # a native watcher until the object is started, and we shut it down # when the object is stopped. # XXX: I was able to solve at least Windows test_ftplib.py issues # with more of a careful use of io objects in socket.py, so # delaying this entirely is at least temporarily on hold. Instead # sticking with the _watcher_create function override for the # moment. # XXX: Note 2: Moving to a deterministic close model, which was necessary # for PyPy, also seems to solve the Windows issues. So we're completely taking # this object out of the loop's registration; we don't want GC callbacks and # uv_close anywhere *near* this object. _watcher_registers_with_loop_on_create = False EVENT_MASK = libuv.UV_READABLE | libuv.UV_WRITABLE | libuv.UV_DISCONNECT _multiplex_watchers = () def __init__(self, loop, fd, events, ref=True, priority=None): super(io, self).__init__(loop, fd, events, ref=ref, priority=priority, _args=(fd,)) self._fd = fd self._events = events self._multiplex_watchers = [] def _get_fd(self): return self._fd @_base.not_while_active def _set_fd(self, fd): self._fd = fd self._watcher_ffi_init((fd,)) def _get_events(self): return self._events def _set_events(self, events): if events == self._events: return self._events = events if self.active: # We're running but libuv specifically says we can # call start again to change our event mask. 
assert self._handle is not None self._watcher_start(self._watcher, self._events, self._watcher_callback) events = property(_get_events, _set_events) def _watcher_ffi_start(self): self._watcher_start(self._watcher, self._events, self._watcher_callback) if sys.platform.startswith('win32'): # uv_poll can only handle sockets on Windows, but the plain # uv_poll_init we call on POSIX assumes that the fileno # argument is already a C fileno, as created by # _get_osfhandle. C filenos are limited resources, must be # closed with _close. So there are lifetime issues with that: # calling the C function _close to dispose of the fileno # *also* closes the underlying win32 handle, possibly # prematurely. (XXX: Maybe could do something with weak # references? But to what?) # All libuv wants to do with the fileno in uv_poll_init is # turn it back into a Win32 SOCKET handle. # Now, libuv provides uv_poll_init_socket, which instead of # taking a C fileno takes the SOCKET, avoiding the need to dance with # the C runtime. # It turns out that SOCKET (win32 handles in general) can be # represented with `intptr_t`. It further turns out that # CPython *directly* exposes the SOCKET handle as the value of # fileno (32-bit PyPy does some munging on it, which should # rarely matter). So we can pass socket.fileno() through # to uv_poll_init_socket. # See _corecffi_build. _watcher_init = watcher._LIB.uv_poll_init_socket class _multiplexwatcher(object): callback = None args = () pass_events = False ref = True def __init__(self, events, watcher): self._events = events # References: # These objects must keep the original IO object alive; # the IO object SHOULD NOT keep these alive to avoid cycles # We MUST NOT rely on GC to clean up the IO objects, but the explicit # calls to close(); see _multiplex_closed. self._watcher_ref = watcher events = property( lambda self: self._events, _base.not_while_active(lambda self, nv: setattr(self, '_events', nv))) def start(self, callback, *args, **kwargs): self.pass_events = kwargs.get("pass_events") self.callback = callback self.args = args watcher = self._watcher_ref if watcher is not None: if not watcher.active: watcher._io_start() else: # Make sure we're in the event mask watcher._calc_and_update_events() def stop(self): self.callback = None self.pass_events = None self.args = None watcher = self._watcher_ref if watcher is not None: watcher._io_maybe_stop() def close(self): if self._watcher_ref is not None: self._watcher_ref._multiplex_closed(self) self._watcher_ref = None @property def active(self): return self.callback is not None @property def _watcher(self): # For testing. return self._watcher_ref._watcher # ares.pyx depends on this property, # and test__core uses it too fd = property(lambda self: getattr(self._watcher_ref, '_fd', -1), lambda self, nv: self._watcher_ref._set_fd(nv)) def _io_maybe_stop(self): self._calc_and_update_events() for w in self._multiplex_watchers: if w.callback is not None: # There's still a reference to it, and it's started, # so we can't stop. return # If we get here, nothing was started # so we can take ourself out of the polling set self.stop() def _io_start(self): self._calc_and_update_events() self.start(self._io_callback, pass_events=True) def _calc_and_update_events(self): events = 0 for watcher in self._multiplex_watchers: if watcher.callback is not None: # Only ask for events that are active. 
events |= watcher.events self._set_events(events) def multiplex(self, events): watcher = self._multiplexwatcher(events, self) self._multiplex_watchers.append(watcher) self._calc_and_update_events() return watcher def close(self): super(io, self).close() del self._multiplex_watchers def _multiplex_closed(self, watcher): self._multiplex_watchers.remove(watcher) if not self._multiplex_watchers: self.stop() # should already be stopped self._no_more_watchers() # It is absolutely critical that we control when the call # to uv_close() gets made. uv_close() of a uv_poll_t # handle winds up calling uv__platform_invalidate_fd, # which, as the name implies, destroys any outstanding # events for the *fd* that haven't been delivered yet, and also removes # the *fd* from the poll set. So if this happens later, at some # non-deterministic time when (cyclic or otherwise) GC runs, # *and* we've opened a new watcher for the fd, that watcher will # suddenly and mysteriously stop seeing events. So we do this now; # this method is smart enough not to close the handle twice. self.close() else: self._calc_and_update_events() def _no_more_watchers(self): # The loop sets this on an individual watcher to delete it from # the active list where it keeps hard references. pass def _io_callback(self, events): if events < 0: # actually a status error code _dbg("Callback error on", self._fd, ffi.string(libuv.uv_err_name(events)), ffi.string(libuv.uv_strerror(events))) # XXX: We've seen one half of a FileObjectPosix pair # (the read side of a pipe) report errno 11 'bad file descriptor' # after the write side was closed and its watcher removed. But # we still need to attempt to read from it to clear out what's in # its buffers--if we return with the watcher inactive before proceeding to wake up # the reader, we get a LoopExit. So we can't return here and arguably shouldn't print it # either. The negative events mask will match the watcher's mask. # See test__fileobject.py:Test.test_newlines for an example. # On Windows (at least with PyPy), we can get ENOTSOCK (socket operation on non-socket) # if a socket gets closed. If we don't pass the events on, we hang. # See test__makefile_ref.TestSSL for examples. # return for watcher in self._multiplex_watchers: if not watcher.callback: # Stopped continue assert watcher._watcher_ref is self, (self, watcher._watcher_ref) send_event = (events & watcher.events) or events < 0 if send_event: if not watcher.pass_events: watcher.callback(*watcher.args) else: watcher.callback(events, *watcher.args) class _SimulatedWithAsyncMixin(object): _watcher_skip_ffi = True def __init__(self, loop, *args, **kwargs): self._async = loop.async_() try: super(_SimulatedWithAsyncMixin, self).__init__(loop, *args, **kwargs) except: self._async.close() raise def _watcher_create(self, _args): return @property def _watcher_handle(self): return None def _watcher_ffi_init(self, _args): return def _watcher_ffi_set_init_ref(self, ref): self._async.ref = ref @property def active(self): return self._async.active def start(self, cb, *args): assert self._async is not None self._register_loop_callback() self.callback = cb self.args = args self._async.start(cb, *args) def stop(self): self._unregister_loop_callback() self.callback = None self.args = None if self._async is not None: # If we're stop() after close(). # That should be allowed. 
self._async.stop() def close(self): if self._async is not None: a = self._async self._async = None a.close() def _register_loop_callback(self): # called from start() raise NotImplementedError() def _unregister_loop_callback(self): # called from stop raise NotImplementedError() class fork(_SimulatedWithAsyncMixin, _base.ForkMixin, watcher): # We'll have to implement this one completely manually. _watcher_skip_ffi = False def _register_loop_callback(self): self.loop._fork_watchers.add(self) def _unregister_loop_callback(self): try: # stop() should be idempotent self.loop._fork_watchers.remove(self) except KeyError: pass def _on_fork(self): self._async.send() class child(_SimulatedWithAsyncMixin, _base.ChildMixin, watcher): _watcher_skip_ffi = True # We'll have to implement this one completely manually. # Our approach is to use a SIGCHLD handler and the original # os.waitpid call. # On Unix, libuv's uv_process_t and uv_spawn use SIGCHLD, # just like libev does for its child watchers. So # we're not adding any new SIGCHLD related issues not already # present in libev. def _register_loop_callback(self): self.loop._register_child_watcher(self) def _unregister_loop_callback(self): self.loop._unregister_child_watcher(self) def _set_waitpid_status(self, pid, status): self._rpid = pid self._rstatus = status self._async.send() class async_(_base.AsyncMixin, watcher): _watcher_callback_name = '_gevent_async_callback0' # libuv async watchers are different than all other watchers: # They don't have a separate start/stop method (presumably # because of race conditions). Simply initing them places them # into the active queue. # # In the past, we sent a NULL C callback to the watcher, trusting # that no one would call send() without actually starting us (or after # closing us); doing so would crash. But we don't want to delay # initing the struct because it will crash in uv_close() when we get GC'd, # and send() will also crash. Plus that complicates our lifecycle (managing # the memory). # # Now, we always init the correct C callback, and use a dummy # Python callback that gets replaced when we are started and # stopped. This prevents mistakes from being crashes. _callback = lambda: None def _watcher_ffi_init(self, args): # NOTE: uv_async_init is NOT idempotent. Calling it more than # once adds the uv_async_t to the internal queue multiple times, # and uv_close only cleans up one of them, meaning that we tend to # crash. Thus we have to be very careful not to allow that. return self._watcher_init(self.loop.ptr, self._watcher, self._watcher_callback) def _watcher_ffi_start(self): pass def _watcher_ffi_stop(self): pass def send(self): assert self._callback is not async_._callback, "Sending to a closed watcher" if libuv.uv_is_closing(self._watcher): # pylint:disable-next=broad-exception-raised raise Exception("Closing handle") libuv.uv_async_send(self._watcher) @property def pending(self): return None locals()['async'] = async_ class timer(_base.TimerMixin, watcher): _watcher_callback_name = '_gevent_timer_callback0' # In libuv, timer callbacks continue running while any timer is # expired, including newly added timers. Newly added non-zero # timers (especially of small duration) can be seen to be expired # if the loop time is updated while we are in a timer callback. # This can lead to us being stuck running timers for a terribly # long time, which is not good. So default to not updating the # time. 
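    # (Illustrative aside: because libuv timers only have millisecond
    # resolution, _watcher_ffi_init below clamps anything shorter to
    # 0.001 s, so a hypothetical call such as
    #
    #     gevent.sleep(0.0005)
    #
    # on a libuv loop actually waits roughly 1 ms; compare
    # loop.approx_timer_resolution in loop.py.)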
# Also, newly-added timers of 0 duration can *also* stall the # loop, because they'll be seen to be expired immediately. # Updating the time can prevent that, *if* there was already a # timer for a longer duration scheduled. # To mitigate the above problems, our loop implementation turns # zero duration timers into check watchers instead using OneShotCheck. # This ensures the loop cycles. Of course, the 'again' method does # nothing on them and doesn't exist. In practice that's not an issue. _again = False def _watcher_ffi_init(self, args): self._watcher_init(self.loop.ptr, self._watcher) self._after, self._repeat = args if self._after and self._after < 0.001: import warnings # XXX: The stack level is hard to determine, could be getting here # through a number of different ways. warnings.warn("libuv only supports millisecond timer resolution; " "all times less will be set to 1 ms", stacklevel=6) # The alternative is to effectively pass in int(0.1) == 0, which # means no sleep at all, which leads to excessive wakeups self._after = 0.001 if self._repeat and self._repeat < 0.001: import warnings warnings.warn("libuv only supports millisecond timer resolution; " "all times less will be set to 1 ms", stacklevel=6) self._repeat = 0.001 def _watcher_ffi_start(self): if self._again: libuv.uv_timer_again(self._watcher) else: try: self._watcher_start(self._watcher, self._watcher_callback, int(self._after * 1000), int(self._repeat * 1000)) except ValueError: # in case of non-ints in _after/_repeat raise TypeError() def again(self, callback, *args, **kw): if not self.active: # If we've never been started, this is the same as starting us. # libuv makes the distinction, libev doesn't. self.start(callback, *args, **kw) return self._again = True try: self.start(callback, *args, **kw) finally: del self._again class stat(_base.StatMixin, watcher): _watcher_type = 'fs_poll' _watcher_struct_name = 'gevent_fs_poll_t' _watcher_callback_name = '_gevent_fs_poll_callback3' def _watcher_set_data(self, the_watcher, data): the_watcher.handle.data = data return data def _watcher_ffi_init(self, args): return self._watcher_init(self.loop.ptr, self._watcher) MIN_STAT_INTERVAL = 0.1074891 # match libev; 0.0 is default def _watcher_ffi_start(self): # libev changes this when the watcher is started self._interval = max(self._interval, self.MIN_STAT_INTERVAL) self._watcher_start(self._watcher, self._watcher_callback, self._cpath, int(self._interval * 1000)) @property def _watcher_handle(self): return self._watcher.handle.data @property def attr(self): if not self._watcher.curr.st_nlink: return return self._watcher.curr @property def prev(self): if not self._watcher.prev.st_nlink: return return self._watcher.prev class signal(_base.SignalMixin, watcher): _watcher_callback_name = '_gevent_signal_callback1' def _watcher_ffi_init(self, args): self._watcher_init(self.loop.ptr, self._watcher) self.ref = False # libev doesn't ref these by default def _watcher_ffi_start(self): self._watcher_start(self._watcher, self._watcher_callback, self._signalnum) class idle(_base.IdleMixin, watcher): # Because libuv doesn't support priorities, idle watchers are # potentially quite a bit different than under libev _watcher_callback_name = '_gevent_idle_callback0' class check(_base.CheckMixin, watcher): _watcher_callback_name = '_gevent_check_callback0' class OneShotCheck(check): _watcher_skip_ffi = True def __make_cb(self, func): stop = self.stop @functools.wraps(func) def cb(*args): stop() return func(*args) return cb def start(self, callback, 
*args): return check.start(self, self.__make_cb(callback), *args) class prepare(_base.PrepareMixin, watcher): _watcher_callback_name = '_gevent_prepare_callback0' gevent-24.11.1/src/gevent/local.py000066400000000000000000000517211471441230600167030ustar00rootroot00000000000000# cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False """ Greenlet-local objects. This module is based on `_threading_local.py`__ from the standard library of Python 3.4. __ https://github.com/python/cpython/blob/3.4/Lib/_threading_local.py Greenlet-local objects support the management of greenlet-local data. If you have data that you want to be local to a greenlet, simply create a greenlet-local object and use its attributes: >>> import gevent >>> from gevent.local import local >>> mydata = local() >>> mydata.number = 42 >>> mydata.number 42 You can also access the local-object's dictionary: >>> mydata.__dict__ {'number': 42} >>> mydata.__dict__.setdefault('widgets', []) [] >>> mydata.widgets [] What's important about greenlet-local objects is that their data are local to a greenlet. If we access the data in a different greenlet: >>> log = [] >>> def f(): ... items = list(mydata.__dict__.items()) ... items.sort() ... log.append(items) ... mydata.number = 11 ... log.append(mydata.number) >>> greenlet = gevent.spawn(f) >>> greenlet.join() >>> log [[], 11] we get different data. Furthermore, changes made in the other greenlet don't affect data seen in this greenlet: >>> mydata.number 42 Of course, values you get from a local object, including a __dict__ attribute, are for whatever greenlet was current at the time the attribute was read. For that reason, you generally don't want to save these values across greenlets, as they apply only to the greenlet they came from. You can create custom local objects by subclassing the local class: >>> class MyLocal(local): ... number = 2 ... initialized = False ... def __init__(self, **kw): ... if self.initialized: ... raise SystemError('__init__ called too many times') ... self.initialized = True ... self.__dict__.update(kw) ... def squared(self): ... return self.number ** 2 This can be useful to support default values, methods and initialization. Note that if you define an __init__ method, it will be called each time the local object is used in a separate greenlet. This is necessary to initialize each greenlet's dictionary. Now if we create a local object: >>> mydata = MyLocal(color='red') Now we have a default number: >>> mydata.number 2 an initial color: >>> mydata.color 'red' >>> del mydata.color And a method that operates on the data: >>> mydata.squared() 4 As before, we can access the data in a separate greenlet: >>> log = [] >>> greenlet = gevent.spawn(f) >>> greenlet.join() >>> log [[('color', 'red'), ('initialized', True)], 11] without affecting this greenlet's data: >>> mydata.number 2 >>> mydata.color Traceback (most recent call last): ... AttributeError: 'MyLocal' object has no attribute 'color' Note that subclasses can define slots, but they are not greenlet local. They are shared across greenlets:: >>> class MyLocal(local): ... __slots__ = 'number' >>> mydata = MyLocal() >>> mydata.number = 42 >>> mydata.color = 'red' So, the separate greenlet: >>> greenlet = gevent.spawn(f) >>> greenlet.join() affects what we see: >>> mydata.number 11 >>> del mydata .. versionchanged:: 1.1a2 Update the implementation to match Python 3.4 instead of Python 2.5. This results in locals being eligible for garbage collection as soon as their greenlet exits. .. 
versionchanged:: 1.2.3 Use a weak-reference to clear the greenlet link we establish in case the local object dies before the greenlet does. .. versionchanged:: 1.3a1 Implement the methods for attribute access directly, handling descriptors directly here. This allows removing the use of a lock and facilitates greatly improved performance. .. versionchanged:: 1.3a1 The ``__init__`` method of subclasses of ``local`` is no longer called with a lock held. CPython does not use such a lock in its native implementation. This could potentially show as a difference if code that uses multiple dependent attributes in ``__slots__`` (which are shared across all greenlets) switches during ``__init__``. """ from __future__ import print_function from copy import copy from weakref import ref locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None __all__ = [ "local", ] # The key used in the Thread objects' attribute dicts. # We keep it a string for speed but make it unlikely to clash with # a "real" attribute. key_prefix = '_gevent_local_localimpl_' # The overall structure is as follows: # For each local() object: # greenlet.__dict__[key_prefix + str(id(local))] # => _localimpl.dicts[id(greenlet)] => (ref(greenlet), {}) # That final tuple is actually a localimpl_dict_entry object. def all_local_dicts_for_greenlet(greenlet): """ Internal debug helper for getting the local values associated with a greenlet. This is subject to change or removal at any time. :return: A list of ((type, id), {}) pairs, where the first element is the type and id of the local object and the second object is its instance dictionary, as seen from this greenlet. .. versionadded:: 1.3a2 """ result = [] id_greenlet = id(greenlet) greenlet_dict = greenlet.__dict__ for k, v in greenlet_dict.items(): if not k.startswith(key_prefix): continue local_impl = v() if local_impl is None: continue entry = local_impl.dicts.get(id_greenlet) if entry is None: # Not yet used in this greenlet. continue assert entry.wrgreenlet() is greenlet result.append((local_impl.localtypeid, entry.localdict)) return result class _wrefdict(dict): """A dict that can be weak referenced""" class _greenlet_deleted(object): """ A weakref callback for when the greenlet is deleted. If the greenlet is a `gevent.greenlet.Greenlet` and supplies ``rawlink``, that will be used instead of a weakref. 
""" __slots__ = ('idt', 'wrdicts') def __init__(self, idt, wrdicts): self.idt = idt self.wrdicts = wrdicts def __call__(self, _unused): dicts = self.wrdicts() if dicts: dicts.pop(self.idt, None) class _local_deleted(object): __slots__ = ('key', 'wrthread', 'greenlet_deleted') def __init__(self, key, wrthread, greenlet_deleted): self.key = key self.wrthread = wrthread self.greenlet_deleted = greenlet_deleted def __call__(self, _unused): thread = self.wrthread() if thread is not None: try: unlink = thread.unlink except AttributeError: pass else: unlink(self.greenlet_deleted) del thread.__dict__[self.key] class _localimpl(object): """A class managing thread-local dicts""" __slots__ = ('key', 'dicts', 'localargs', 'localkwargs', 'localtypeid', '__weakref__',) def __init__(self, args, kwargs, local_type, id_local): self.key = key_prefix + str(id(self)) # { id(greenlet) -> _localimpl_dict_entry(ref(greenlet), greenlet-local dict) } self.dicts = _wrefdict() self.localargs = args self.localkwargs = kwargs self.localtypeid = local_type, id_local # We need to create the thread dict in anticipation of # __init__ being called, to make sure we don't call it # again ourselves. MUST do this before setting any attributes. greenlet = getcurrent() # pylint:disable=undefined-variable _localimpl_create_dict(self, greenlet, id(greenlet)) class _localimpl_dict_entry(object): """ The object that goes in the ``dicts`` of ``_localimpl`` object for each thread. """ # This is a class, not just a tuple, so that cython can optimize # attribute access __slots__ = ('wrgreenlet', 'localdict') def __init__(self, wrgreenlet, localdict): self.wrgreenlet = wrgreenlet self.localdict = localdict # We use functions instead of methods so that they can be cdef'd in # local.pxd; if they were cdef'd as methods, they would cause # the creation of a pointer and a vtable. This happens # even if we declare the class @cython.final. functions thus save memory overhead # (but not pointer chasing overhead; the vtable isn't used when we declare # the class final). def _localimpl_create_dict(self, greenlet, id_greenlet): """Create a new dict for the current thread, and return it.""" localdict = {} key = self.key wrdicts = ref(self.dicts) # When the greenlet is deleted, remove the local dict. # Note that this is suboptimal if the greenlet object gets # caught in a reference loop. We would like to be called # as soon as the OS-level greenlet ends instead. # If we are working with a gevent.greenlet.Greenlet, we # can pro-actively clear out with a link, avoiding the # issue described above. Use rawlink to avoid spawning any # more greenlets. greenlet_deleted = _greenlet_deleted(id_greenlet, wrdicts) rawlink = getattr(greenlet, 'rawlink', None) if rawlink is not None: rawlink(greenlet_deleted) wrthread = ref(greenlet) else: wrthread = ref(greenlet, greenlet_deleted) # When the localimpl is deleted, remove the thread attribute. 
local_deleted = _local_deleted(key, wrthread, greenlet_deleted) wrlocal = ref(self, local_deleted) greenlet.__dict__[key] = wrlocal self.dicts[id_greenlet] = _localimpl_dict_entry(wrthread, localdict) return localdict _marker = object() def _local_get_dict(self): impl = self._local__impl # Cython can optimize dict[], but not dict.get() greenlet = getcurrent() # pylint:disable=undefined-variable idg = id(greenlet) try: entry = impl.dicts[idg] dct = entry.localdict except KeyError: dct = _localimpl_create_dict(impl, greenlet, idg) self.__init__(*impl.localargs, **impl.localkwargs) return dct def _init(): greenlet_init() # pylint:disable=undefined-variable _local_attrs = { '_local__impl', '_local_type_get_descriptors', '_local_type_set_or_del_descriptors', '_local_type_del_descriptors', '_local_type_set_descriptors', '_local_type', '_local_type_vars', '__class__', '__cinit__', } class local(object): """ An object whose attributes are greenlet-local. """ __slots__ = tuple(_local_attrs - {'__class__', '__cinit__'}) def __cinit__(self, *args, **kw): # pylint:disable=bad-dunder-name if args or kw: if type(self).__init__ == object.__init__: # pylint:disable=comparison-with-callable raise TypeError("Initialization arguments are not supported", args, kw) impl = _localimpl(args, kw, type(self), id(self)) # pylint:disable=attribute-defined-outside-init self._local__impl = impl get, dels, sets_or_dels, sets = _local_find_descriptors(self) self._local_type_get_descriptors = get self._local_type_set_or_del_descriptors = sets_or_dels self._local_type_del_descriptors = dels self._local_type_set_descriptors = sets self._local_type = type(self) self._local_type_vars = set(dir(self._local_type)) def __getattribute__(self, name): # pylint:disable=too-many-return-statements if name in _local_attrs: # The _local__impl, __cinit__, etc, won't be hit by the # Cython version, if we've done things right. If we haven't, # they will be, and this will produce an error. return object.__getattribute__(self, name) dct = _local_get_dict(self) if name == '__dict__': return dct # If there's no possible way we can switch, because this # attribute is *not* found in the class where it might be a # data descriptor (property), and it *is* in the dict # then we don't need to swizzle the dict and take the lock. # We don't have to worry about people overriding __getattribute__ # because if they did, the dict-swizzling would only last as # long as we were in here anyway. # Similarly, a __getattr__ will still be called by _oga() if needed # if it's not in the dict. # Optimization: If we're not subclassed, then # there can be no descriptors except for methods, which will # never need to use __dict__. if self._local_type is local: return dct[name] if name in dct else object.__getattribute__(self, name) # NOTE: If this is a descriptor, this will invoke its __get__. # A broken descriptor that doesn't return itself when called with # a None for the instance argument could mess us up here. # But this is faster than a loop over mro() checking each class __dict__ # manually. if name in dct: if name not in self._local_type_vars: # If there is a dict value, and nothing in the type, # it can't possibly be a descriptor, so it is just returned. return dct[name] # It's in the type *and* in the dict. If the type value is # a data descriptor (defines __get__ *and* either __set__ or # __delete__), then the type wins. If it's a non-data descriptor # (defines just __get__), then the instance wins. 
If it's not a # descriptor at all (doesn't have __get__), the instance wins. # NOTE that the docs for descriptors say that these methods must be # defined on the *class* of the object in the type. if name not in self._local_type_get_descriptors: # Entirely not a descriptor. Instance wins. return dct[name] if name in self._local_type_set_or_del_descriptors: # A data descriptor. # arbitrary code execution while these run. If they touch self again, # they'll call back into us and we'll repeat the dance. type_attr = getattr(self._local_type, name) return type(type_attr).__get__(type_attr, self, self._local_type) # Last case is a non-data descriptor. Instance wins. return dct[name] if name in self._local_type_vars: # Not in the dictionary, but is found in the type. It could be # a non-data descriptor still. Some descriptors, like @staticmethod, # return objects (functions, in this case), that are *themselves* # descriptors, which when invoked, again, would do the wrong thing. # So we can't rely on getattr() on the type for them, we have to # look through the MRO dicts ourself. if name not in self._local_type_get_descriptors: # Not a descriptor, can't execute code. So all we need is # the return value of getattr() on our type. return getattr(self._local_type, name) for base in self._local_type.mro(): bd = base.__dict__ if name in bd: attr_on_type = bd[name] result = type(attr_on_type).__get__(attr_on_type, self, self._local_type) return result # It wasn't in the dict and it wasn't in the type. # So the next step is to invoke type(self)__getattr__, if it # exists, otherwise raise an AttributeError. # we will invoke type(self).__getattr__ or raise an attribute error. if hasattr(self._local_type, '__getattr__'): return self._local_type.__getattr__(self, name) raise AttributeError("%r object has no attribute '%s'" % (self._local_type.__name__, name)) def __setattr__(self, name, value): if name == '__dict__': raise AttributeError( "%r object attribute '__dict__' is read-only" % type(self)) if name in _local_attrs: object.__setattr__(self, name, value) return dct = _local_get_dict(self) if self._local_type is local: # Optimization: If we're not subclassed, we can't # have data descriptors, so this goes right in the dict. dct[name] = value return if name in self._local_type_vars: if name in self._local_type_set_descriptors: type_attr = getattr(self._local_type, name, _marker) # A data descriptor, like a property or a slot. type(type_attr).__set__(type_attr, self, value) return # Otherwise it goes directly in the dict dct[name] = value def __delattr__(self, name): if name == '__dict__': raise AttributeError( "%r object attribute '__dict__' is read-only" % self.__class__.__name__) if name in self._local_type_vars: if name in self._local_type_del_descriptors: # A data descriptor, like a property or a slot. 
type_attr = getattr(self._local_type, name, _marker) type(type_attr).__delete__(type_attr, self) return # Otherwise it goes directly in the dict # Begin inlined function _get_dict() dct = _local_get_dict(self) try: del dct[name] except KeyError: raise AttributeError(name) def __copy__(self): impl = self._local__impl entry = impl.dicts[id(getcurrent())] # pylint:disable=undefined-variable dct = entry.localdict duplicate = copy(dct) cls = type(self) instance = cls(*impl.localargs, **impl.localkwargs) _local__copy_dict_from(instance, impl, duplicate) return instance def _local__copy_dict_from(self, impl, duplicate): current = getcurrent() # pylint:disable=undefined-variable currentId = id(current) new_impl = self._local__impl assert new_impl is not impl entry = new_impl.dicts[currentId] new_impl.dicts[currentId] = _localimpl_dict_entry(entry.wrgreenlet, duplicate) def _local_find_descriptors(self): type_self = type(self) gets = set() dels = set() set_or_del = set() sets = set() mro = list(type_self.mro()) for attr_name in dir(type_self): # Conventionally, descriptors when called on a class # return themself, but not all do. Notable exceptions are # in the zope.interface package, where things like __provides__ # return other class attributes. So we can't use getattr, and instead # walk up the dicts for base in mro: bd = base.__dict__ if attr_name in bd: attr = bd[attr_name] break else: raise AttributeError(attr_name) type_attr = type(attr) if hasattr(type_attr, '__get__'): gets.add(attr_name) if hasattr(type_attr, '__delete__'): dels.add(attr_name) set_or_del.add(attr_name) if hasattr(type_attr, '__set__'): sets.add(attr_name) return (gets, dels, set_or_del, sets) # Cython doesn't let us use __new__, it requires # __cinit__. But we need __new__ if we're not compiled # (e.g., on PyPy). So we set it at runtime. Cython # will raise an error if we're compiled. def __new__(cls, *args, **kw): self = super(local, cls).__new__(cls) # pylint:disable=no-value-for-parameter # We get the cls in *args for some reason # too when we do it this way....except on PyPy3, which does # not *unless* it's wrapped in a classmethod (which it is) self.__cinit__(*args[1:], **kw) return self if local.__module__ == 'gevent.local': # PyPy2/3 and CPython handle adding a __new__ to the class # in different ways. In CPython and PyPy3, it must be wrapped with classmethod; # in PyPy2 < 7.3.3, it must not. In either case, the args that get passed to # it are stil wrong. # # Prior to Python 3.10, Cython-compiled classes were immutable and # raised a TypeError on assignment to __new__, and we relied on that # to detect the compiled version; but that breaks in # 3.10 as classes are now mutable. (See # https://github.com/cython/cython/issues/4326). # # That's OK; post https://github.com/gevent/gevent/issues/1480, the Cython-compiled # module has a different name than the pure-Python version and we can check for that. # It's not as direct, but it works. # So here we're not compiled local.__new__ = classmethod(__new__) else: # pragma: no cover # Make sure we revisit in case of changes to the (accelerator) module names. 
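# A minimal usage sketch, kept as a comment in the same spirit as the other
# inline sketches in this package. The names below (Request, handle, the
# sample user names) are illustrative only. It shows the two behaviors
# implemented above: each greenlet sees its own attribute namespace, and a
# subclass __init__ is re-run the first time a new greenlet touches the
# instance.
#
# import gevent
# from gevent.local import local
#
# class Request(local):
#     def __init__(self):
#         self.user = None          # runs once per greenlet, on first access
#
# request = Request()
#
# def handle(name):
#     request.user = name           # visible only to this greenlet
#     gevent.sleep(0)               # let the other greenlet run
#     assert request.user == name   # not clobbered by the other greenlet
#
# gevent.joinall([gevent.spawn(handle, 'alice'), gevent.spawn(handle, 'bob')])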
if local.__module__ != 'gevent._gevent_clocal': # pylint:disable=else-if-used raise AssertionError("Module names changed (local: %r; __name__: %r); revisit this code" % ( local.__module__, __name__) ) _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent._local') gevent-24.11.1/src/gevent/lock.py000066400000000000000000000262061471441230600165410ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. """ Locking primitives. These include semaphores with arbitrary bounds (:class:`Semaphore` and its safer subclass :class:`BoundedSemaphore`) and a semaphore with infinite bounds (:class:`DummySemaphore`), along with a reentrant lock (:class:`RLock`) with the same API as :class:`threading.RLock`. """ from __future__ import absolute_import from __future__ import print_function from gevent.hub import getcurrent from gevent._compat import PURE_PYTHON # This is the one exception to the rule of where to # import Semaphore, obviously from gevent import monkey from gevent._semaphore import Semaphore from gevent._semaphore import BoundedSemaphore __all__ = [ 'Semaphore', 'BoundedSemaphore', 'DummySemaphore', 'RLock', ] # On PyPy, we don't compile the Semaphore class with Cython. Under # Cython, each individual method holds the GIL for its entire # duration, ensuring that no other thread can interrupt us in an # unsafe state (only when we _wait do we call back into Python and # allow switching threads; this is broken down into the # _drop_lock_for_switch_out and _acquire_lock_for_switch_in methods). # Simulate that here through the use of a manual lock. (We use a # separate lock for each semaphore to allow sys.settrace functions to # use locks *other* than the one being traced.) This, of course, must # also hold for PURE_PYTHON mode when no optional C extensions are # used. _allocate_lock, _get_ident = monkey.get_original( ('_thread', 'thread'), ('allocate_lock', 'get_ident') ) def atomic(meth): def m(self, *args): with self._atomic: return meth(self, *args) return m class _GILLock(object): __slots__ = ( '_owned_thread_id', '_gil', '_atomic', '_recursion_depth', ) # Don't allow re-entry to these functions in a single thread, as # can happen if a sys.settrace is used. (XXX: What does that even # mean? Our original implementation that did that has been # replaced by something more robust) # # This is essentially a variant of the (pure-Python) RLock from the # standard library. def __init__(self): self._owned_thread_id = None self._gil = _allocate_lock() self._atomic = _allocate_lock() self._recursion_depth = 0 @atomic def acquire(self): current_tid = _get_ident() if self._owned_thread_id == current_tid: self._recursion_depth += 1 return True # Not owned by this thread. Only one thread will make it through this point. while 1: self._atomic.release() try: self._gil.acquire() finally: self._atomic.acquire() if self._owned_thread_id is None: break self._owned_thread_id = current_tid self._recursion_depth = 1 return True @atomic def release(self): current_tid = _get_ident() if current_tid != self._owned_thread_id: raise RuntimeError("%s: Releasing lock not owned by you. 
You: 0x%x; Owner: 0x%x" % ( self, current_tid, self._owned_thread_id or 0, )) self._recursion_depth -= 1 if not self._recursion_depth: self._owned_thread_id = None self._gil.release() def __enter__(self): self.acquire() def __exit__(self, t, v, tb): self.release() def locked(self): return self._gil.locked() class _AtomicSemaphoreMixin(object): # Behaves as though the GIL was held for the duration of acquire, wait, # and release, just as if we were in Cython. # # acquire, wait, and release all acquire the lock on entry and release it # on exit. acquire and wait can call _wait, which must release it on entry # and re-acquire it for them on exit. # # Note that this does *NOT*, in-and-of itself, make semaphores safe to use from multiple threads __slots__ = () def __init__(self, *args, **kwargs): self._lock_lock = _GILLock() # pylint:disable=assigning-non-slot super(_AtomicSemaphoreMixin, self).__init__(*args, **kwargs) def _acquire_lock_for_switch_in(self): self._lock_lock.acquire() def _drop_lock_for_switch_out(self): self._lock_lock.release() def _notify_links(self, arrived_while_waiting): with self._lock_lock: return super(_AtomicSemaphoreMixin, self)._notify_links(arrived_while_waiting) def release(self): with self._lock_lock: return super(_AtomicSemaphoreMixin, self).release() def acquire(self, blocking=True, timeout=None): with self._lock_lock: return super(_AtomicSemaphoreMixin, self).acquire(blocking, timeout) _py3k_acquire = acquire def wait(self, timeout=None): with self._lock_lock: return super(_AtomicSemaphoreMixin, self).wait(timeout) class _AtomicSemaphore(_AtomicSemaphoreMixin, Semaphore): __doc__ = Semaphore.__doc__ __slots__ = ( '_lock_lock', ) class _AtomicBoundedSemaphore(_AtomicSemaphoreMixin, BoundedSemaphore): __doc__ = BoundedSemaphore.__doc__ __slots__ = ( '_lock_lock', ) def release(self): # pylint:disable=useless-super-delegation # This method is duplicated here so that it can get # properly documented. return super(_AtomicBoundedSemaphore, self).release() def _fixup_docstrings(): for c in _AtomicSemaphore, _AtomicBoundedSemaphore: b = c.__mro__[2] assert b.__name__.endswith('Semaphore') and 'Atomic' not in b.__name__ assert c.__doc__ == b.__doc__ for m in 'acquire', 'release', 'wait': c_meth = getattr(c, m) b_meth = getattr(b, m) c_meth.__doc__ = b_meth.__doc__ _fixup_docstrings() del _fixup_docstrings if PURE_PYTHON: Semaphore = _AtomicSemaphore Semaphore.__name__ = 'Semaphore' BoundedSemaphore = _AtomicBoundedSemaphore BoundedSemaphore.__name__ = 'BoundedSemaphore' class DummySemaphore(object): """ DummySemaphore(value=None) -> DummySemaphore An object with the same API as :class:`Semaphore`, initialized with "infinite" initial value. None of its methods ever block. This can be used to parameterize on whether or not to actually guard access to a potentially limited resource. If the resource is actually limited, such as a fixed-size thread pool, use a real :class:`Semaphore`, but if the resource is unbounded, use an instance of this class. In that way none of the supporting code needs to change. Similarly, it can be used to parameterize on whether or not to enforce mutual exclusion to some underlying object. If the underlying object is known to be thread-safe itself mutual exclusion is not needed and a ``DummySemaphore`` can be used, but if that's not true, use a real ``Semaphore``. """ # Internally this is used for exactly the purpose described in the # documentation. 
gevent.pool.Pool uses it instead of a Semaphore # when the pool size is unlimited, and # gevent.fileobject.FileObjectThread takes a parameter that # determines whether it should lock around IO to the underlying # file object. def __init__(self, value=None): """ .. versionchanged:: 1.1rc3 Accept and ignore a *value* argument for compatibility with Semaphore. """ def __str__(self): return '<%s>' % self.__class__.__name__ def locked(self): """A DummySemaphore is never locked so this always returns False.""" return False def ready(self): """A DummySemaphore is never locked so this always returns True.""" return True def release(self): """Releasing a dummy semaphore does nothing.""" def rawlink(self, callback): # XXX should still work and notify? pass def unlink(self, callback): pass def wait(self, timeout=None): # pylint:disable=unused-argument """Waiting for a DummySemaphore returns immediately.""" return 1 def acquire(self, blocking=True, timeout=None): """ A DummySemaphore can always be acquired immediately so this always returns True and ignores its arguments. .. versionchanged:: 1.1a1 Always return *true*. """ # pylint:disable=unused-argument return True def __enter__(self): pass def __exit__(self, typ, val, tb): pass class RLock(object): """ A mutex that can be acquired more than once by the same greenlet. A mutex can only be locked by one greenlet at a time. A single greenlet can `acquire` the mutex as many times as desired, though. Each call to `acquire` must be paired with a matching call to `release`. It is an error for a greenlet that has not acquired the mutex to release it. Instances are context managers. """ __slots__ = ( '_block', '_owner', '_count', '__weakref__', ) def __init__(self, hub=None): """ .. versionchanged:: 20.5.1 Add the ``hub`` argument. """ self._block = Semaphore(1, hub) self._owner = None self._count = 0 def __repr__(self): return "<%s at 0x%x _block=%s _count=%r _owner=%r)>" % ( self.__class__.__name__, id(self), self._block, self._count, self._owner) def acquire(self, blocking=True, timeout=None): """ Acquire the mutex, blocking if *blocking* is true, for up to *timeout* seconds. .. versionchanged:: 1.5a4 Added the *timeout* parameter. :return: A boolean indicating whether the mutex was acquired. """ me = getcurrent() if self._owner is me: self._count += 1 return 1 rc = self._block.acquire(blocking, timeout) if rc: self._owner = me self._count = 1 return rc def __enter__(self): return self.acquire() def release(self): """ Release the mutex. Only the greenlet that originally acquired the mutex can release it. """ if self._owner is not getcurrent(): raise RuntimeError("cannot release un-acquired lock. 
Owner: %r Current: %r" % ( self._owner, getcurrent() )) self._count = count = self._count - 1 # pylint:disable=consider-using-augmented-assign if not count: self._owner = None self._block.release() def __exit__(self, typ, value, tb): self.release() # Internal methods used by condition variables def _acquire_restore(self, count_owner): count, owner = count_owner self._block.acquire() self._count = count self._owner = owner def _release_save(self): count = self._count self._count = 0 owner = self._owner self._owner = None self._block.release() return (count, owner) def _is_owned(self): return self._owner is getcurrent() gevent-24.11.1/src/gevent/monkey/000077500000000000000000000000001471441230600165335ustar00rootroot00000000000000gevent-24.11.1/src/gevent/monkey/__init__.py000066400000000000000000000604451471441230600206550ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. # pylint: disable=redefined-outer-name,too-many-lines """ Make the standard library cooperative. The primary purpose of this module is to carefully patch, in place, portions of the standard library with gevent-friendly functions that behave in the same way as the original (at least as closely as possible). The primary interface to this is the :func:`patch_all` function, which performs all the available patches. It accepts arguments to limit the patching to certain modules, but most programs **should** use the default values as they receive the most wide-spread testing, and some monkey patches have dependencies on others. Patching **should be done as early as possible** in the lifecycle of the program. For example, the main module (the one that tests against ``__main__`` or is otherwise the first imported) should begin with this code, ideally before any other imports:: from gevent import monkey monkey.patch_all() A corollary of the above is that patching **should be done on the main thread** and **should be done while the program is single-threaded**. .. tip:: Some frameworks, such as gunicorn, handle monkey-patching for you. Check their documentation to be sure. .. warning:: Patching too late can lead to unreliable behaviour (for example, some modules may still use blocking sockets) or even errors. .. tip:: Be sure to read the documentation for each patch function to check for known incompatibilities. Querying ======== Sometimes it is helpful to know if objects have been monkey-patched, and in advanced cases even to have access to the original standard library functions. This module provides functions for that purpose. - :func:`is_module_patched` - :func:`is_object_patched` - :func:`get_original` .. _plugins: Plugins and Events ================== Beginning in gevent 1.3, events are emitted during the monkey patching process. These events are delivered first to :mod:`gevent.events` subscribers, and then to `setuptools entry points`_. The following events are defined. They are listed in (roughly) the order that a call to :func:`patch_all` will emit them. - :class:`gevent.events.GeventWillPatchAllEvent` - :class:`gevent.events.GeventWillPatchModuleEvent` - :class:`gevent.events.GeventDidPatchModuleEvent` - :class:`gevent.events.GeventDidPatchBuiltinModulesEvent` - :class:`gevent.events.GeventDidPatchAllEvent` Each event class documents the corresponding setuptools entry point name. The entry points will be called with a single argument, the same instance of the class that was sent to the subscribers. 
You can subscribe to the events to monitor the monkey-patching process and to manipulate it, for example by raising :exc:`gevent.events.DoNotPatch`. You can also subscribe to the events to provide additional patching beyond what gevent distributes, either for additional standard library modules, or for third-party packages. The suggested time to do this patching is in the subscriber for :class:`gevent.events.GeventDidPatchBuiltinModulesEvent`. For example, to automatically patch `psycopg2`_ using `psycogreen`_ when the call to :func:`patch_all` is made, you could write code like this:: # mypackage.py def patch_psycopg(event): from psycogreen.gevent import patch_psycopg patch_psycopg() In your ``setup.py`` you would register it like this:: from setuptools import setup setup( ... entry_points={ 'gevent.plugins.monkey.did_patch_builtins': [ 'psycopg2 = mypackage:patch_psycopg', ], }, ... ) For more complex patching, gevent provides a helper method that you can call to replace attributes of modules with attributes of your own modules. This function also takes care of emitting the appropriate events. - :func:`patch_module` .. _setuptools entry points: http://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins .. _psycopg2: https://pypi.python.org/pypi/psycopg2 .. _psycogreen: https://pypi.python.org/pypi/psycogreen Use as a module =============== Sometimes it is useful to run existing python scripts or modules that were not built to be gevent aware under gevent. To do so, this module can be run as the main module, passing the script and its arguments. For details, see the :func:`main` function. .. versionchanged:: 1.3b1 Added support for plugins and began emitting will/did patch events. """ import sys #### # gevent developers: IMPORTANT: Keep imports # as limited and localized as possible to avoid # interfering with the monkey-patch process. # This is why many imports are nested inside functions. # # This applies for this entire package. ### __all__ = [ 'patch_all', 'patch_builtins', 'patch_dns', 'patch_os', 'patch_queue', 'patch_select', 'patch_signal', 'patch_socket', 'patch_ssl', 'patch_subprocess', 'patch_sys', 'patch_thread', 'patch_time', # query functions 'get_original', 'is_module_patched', 'is_object_patched', # 'is_anything_patched', <- see docstring # plugin API 'patch_module', # module functions 'main', # Errors and warnings 'MonkeyPatchWarning', ] WIN = sys.platform.startswith("win") # Unused imports may be removed in a major release after 2024-10. # Used private imports may be renamed or removed or changed # in an incompatible way at that time. from ._errors import MonkeyPatchWarning from ._util import _notify_patch from ._util import _ignores_DoNotPatch from ._state import saved from ._state import is_module_patched from ._state import is_object_patched # Never documented as a public API, but # potentially in use by third parties # given the naming convention. from ._state import is_anything_patched # pylint:disable=unused-import from ._errors import _BadImplements # pylint:disable=unused-import from .api import get_original from .api import patch_module # These are not part of the documented public API, # but they could be used by plugins. TODO: Do we # want to make them public with __all__? 
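# A small hedged sketch (the variable names below are only examples): the
# query API imported above can hand back the unpatched standard-library
# objects even after ``patch_all()`` has run, exactly as this package itself
# does internally for the ``_thread`` primitives.
#
# from gevent import monkey
# monkey.patch_all()
#
# real_sleep = monkey.get_original('time', 'sleep')      # the blocking sleep
# allocate_lock, get_ident = monkey.get_original(
#     ('_thread', 'thread'), ('allocate_lock', 'get_ident'))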
from .api import patch_item from .api import remove_item from ._util import _check_availability from ._util import _patch_module from ._util import _queue_warning from ._util import _process_warnings def _patch_sys_std(name): from gevent.fileobject import FileObjectThread orig = getattr(sys, name) if not isinstance(orig, FileObjectThread): patch_item(sys, name, FileObjectThread(orig)) @_ignores_DoNotPatch def patch_sys(stdin=True, stdout=True, stderr=True): # pylint:disable=unused-argument """ Patch sys.std[in,out,err] to use a cooperative IO via a threadpool. This is relatively dangerous and can have unintended consequences such as hanging the process or `misinterpreting control keys`_ when :func:`input` and :func:`raw_input` are used. :func:`patch_all` does *not* call this function by default. This method does nothing on Python 3. The Python 3 interpreter wants to flush the TextIOWrapper objects that make up stderr/stdout at shutdown time, but using a threadpool at that time leads to a hang. .. _`misinterpreting control keys`: https://github.com/gevent/gevent/issues/274 .. deprecated:: 23.7.0 Does nothing on any supported version. """ return @_ignores_DoNotPatch def patch_os(): """ Replace :func:`os.fork` with :func:`gevent.fork`, and, on POSIX, :func:`os.waitpid` with :func:`gevent.os.waitpid` (if the environment variable ``GEVENT_NOWAITPID`` is not defined). Does nothing if fork is not available. .. caution:: This method must be used with :func:`patch_signal` to have proper `SIGCHLD` handling and thus correct results from ``waitpid``. :func:`patch_all` calls both by default. .. caution:: For `SIGCHLD` handling to work correctly, the event loop must run. The easiest way to help ensure this is to use :func:`patch_all`. """ _patch_module('os') @_ignores_DoNotPatch def patch_queue(): """ Patch objects in :mod:`queue`. Currently, this just replaces :class:`queue.SimpleQueue` (implemented in C) with its Python counterpart, but the details may change at any time. .. versionadded:: 1.3.5 """ _patch_module('queue', items=[ 'SimpleQueue', ]) @_ignores_DoNotPatch def patch_time(): """ Replace :func:`time.sleep` with :func:`gevent.sleep`. """ _patch_module('time') @_ignores_DoNotPatch def patch_contextvars(): """ Replaces the implementations of :mod:`contextvars` with :mod:`gevent.contextvars`. On Python 3.7 and above, this is a standard library module. On earlier versions, a backport that uses the same distribution name and import name is available on PyPI (though this is not recommended). If that is installed, it will be patched. .. versionchanged:: 20.04.0 Clarify that the backport is also patched. .. versionchanged:: 20.9.0 This now does nothing on Python 3.7 and above. gevent now depends on greenlet 0.4.17, which natively handles switching context vars when greenlets are switched. Older versions of Python that have the backport installed will still be patched. .. deprecated:: 23.7.0 Does nothing on any supported version. """ return @_ignores_DoNotPatch def patch_thread(threading=True, _threading_local=True, Event=True, logging=True, existing_locks=True, _warnings=None): """ patch_thread(threading=True, _threading_local=True, Event=True, logging=True, existing_locks=True) -> None Replace the standard :mod:`thread` module to make it greenlet-based. :keyword bool threading: When True (the default), also patch :mod:`threading`. :keyword bool _threading_local: When True (the default), also patch :class:`_threading_local.local`. 
:keyword bool logging: When True (the default), also patch locks taken if the logging module has been configured. :keyword bool existing_locks: When True (the default), and the process is still single threaded, make sure that any :class:`threading.RLock` (and, under Python 3, :class:`importlib._bootstrap._ModuleLock`) instances that are currently locked can be properly unlocked. **Important**: This is a best-effort attempt and, on certain implementations, may not detect all locks. It is important to monkey-patch extremely early in the startup process. Setting this to False is not recommended, especially on Python 2. .. caution:: Monkey-patching :mod:`thread` and using :class:`multiprocessing.Queue` or :class:`concurrent.futures.ProcessPoolExecutor` (which uses a ``Queue``) will hang the process. Monkey-patching with this function and using sub-interpreters (and advanced C-level API) and threads may be unstable on certain platforms. .. versionchanged:: 1.1b1 Add *logging* and *existing_locks* params. .. versionchanged:: 1.3a2 ``Event`` defaults to True. """ if sys.version_info[:2] < (3, 13): from ._patch_thread_lt313 import Patcher else: from ._patch_thread_gte313 import Patcher patch = Patcher(threading=threading, _threading_local=_threading_local, Event=Event, logging=logging, existing_locks=existing_locks, _warnings=_warnings) patch() @_ignores_DoNotPatch def patch_socket(dns=True, aggressive=True): """ Replace the standard socket object with gevent's cooperative sockets. :keyword bool dns: When true (the default), also patch address resolution functions in :mod:`socket`. See :doc:`/dns` for details. """ from gevent import socket # Note: although it seems like it's not strictly necessary to monkey patch 'create_connection', # it's better to do it. If 'create_connection' was not monkey patched, but the rest of socket module # was, create_connection would still use "green" getaddrinfo and "green" socket. # However, because gevent.socket.socket.connect is a Python function, the exception raised by it causes # _socket object to be referenced by the frame, thus causing the next invocation of bind(source_address) to fail. if dns: items = socket.__implements__ # pylint:disable=no-member else: items = set(socket.__implements__) - set(socket.__dns__) # pylint:disable=no-member _patch_module('socket', items=items) if aggressive: if 'ssl' not in socket.__implements__: # pylint:disable=no-member remove_item(socket, 'ssl') @_ignores_DoNotPatch def patch_dns(): """ Replace :doc:`DNS functions ` in :mod:`socket` with cooperative versions. This is only useful if :func:`patch_socket` has been called and is done automatically by that method if requested. """ from gevent import socket _patch_module('socket', items=socket.__dns__) # pylint:disable=no-member def _find_module_refs(to, excluding_names=()): # Looks specifically for module-level references, # i.e., 'from foo import Bar'. We define a module reference # as a dict (subclass) that also has a __name__ attribute. # This does not handle subclasses, but it does find them. # Returns two sets. The first is modules (name, file) that were # found. The second is subclasses that were found. 
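# Rough illustration of that heuristic (a sketch, not executed here): a module
# that did ``from ssl import SSLContext`` keeps the class in its module
# __dict__, and that dict, which carries a '__name__' key, is exactly the kind
# of referrer gc.get_referrers() hands back:
#
# import gc, ssl
# module_dicts = [r for r in gc.get_referrers(ssl.SSLContext)
#                 if isinstance(r, dict) and '__name__' in r]
# # each entry is some module's __dict__, e.g. that of ``ssl`` itself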
gc = __import__('gc') direct_ref_modules = set() subclass_modules = set() def report(mod): return mod['__name__'], mod.get('__file__', '') for r in gc.get_referrers(to): if isinstance(r, dict) and '__name__' in r: if r['__name__'] in excluding_names: continue for v in r.values(): if v is to: direct_ref_modules.add(report(r)) elif isinstance(r, type) and to in r.__bases__ and 'gevent.' not in r.__module__: subclass_modules.add(r) return direct_ref_modules, subclass_modules @_ignores_DoNotPatch def patch_ssl(_warnings=None, _first_time=True): """ patch_ssl() -> None Replace :class:`ssl.SSLSocket` object and socket wrapping functions in :mod:`ssl` with cooperative versions. This is only useful if :func:`patch_socket` has been called. """ may_need_warning = ( _first_time and 'ssl' in sys.modules and hasattr(sys.modules['ssl'], 'SSLContext')) # Previously, we didn't warn on Python 2 if pkg_resources has been imported # because that imports ssl and it's commonly used for namespace packages, # which typically means we're still in some early part of the import cycle. # However, with our new more discriminating check, that no longer seems to be a problem. # Prior to 3.6, we don't have the RecursionError problem, and prior to 3.7 we don't have the # SSLContext.sslsocket_class/SSLContext.sslobject_class problem. gevent_mod, _ = _patch_module('ssl', _warnings=_warnings) if may_need_warning: direct_ref_modules, subclass_modules = _find_module_refs( gevent_mod.orig_SSLContext, excluding_names=('ssl', 'gevent.ssl', 'gevent._ssl3', 'gevent._sslgte279')) if direct_ref_modules or subclass_modules: # Normally you don't want to have dynamic warning strings, because # the cache in the warning module is based on the string. But we # specifically only do this the first time we patch ourself, so it's # ok. direct_ref_mod_str = subclass_str = '' if direct_ref_modules: direct_ref_mod_str = 'Modules that had direct imports (NOT patched): %s. ' % ([ "%s (%s)" % (name, fname) for name, fname in direct_ref_modules ]) if subclass_modules: subclass_str = 'Subclasses (NOT patched): %s. ' % ([ str(t) for t in subclass_modules ]) _queue_warning( 'Monkey-patching ssl after ssl has already been imported ' 'may lead to errors, including RecursionError on Python 3.6. ' 'It may also silently lead to incorrect behaviour on Python 3.7. ' 'Please monkey-patch earlier. ' 'See https://github.com/gevent/gevent/issues/1016. ' + direct_ref_mod_str + subclass_str, _warnings) @_ignores_DoNotPatch def patch_select(aggressive=True): """ Replace :func:`select.select` with :func:`gevent.select.select` and :func:`select.poll` with :class:`gevent.select.poll` (where available). If ``aggressive`` is true (the default), also remove other blocking functions from :mod:`select` . - :func:`select.epoll` - :func:`select.kqueue` - :func:`select.kevent` - :func:`select.devpoll` (Python 3.5+) """ _patch_module('select', _patch_kwargs={'aggressive': aggressive}) @_ignores_DoNotPatch def patch_selectors(aggressive=True): """ Replace :class:`selectors.DefaultSelector` with :class:`gevent.selectors.GeventSelector`. If ``aggressive`` is true (the default), also remove other blocking classes :mod:`selectors`: - :class:`selectors.EpollSelector` - :class:`selectors.KqueueSelector` - :class:`selectors.DevpollSelector` (Python 3.5+) On Python 2, the :mod:`selectors2` module is used instead of :mod:`selectors` if it is available. If this module cannot be imported, no patching is done and :mod:`gevent.selectors` is not available. 
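    For example (a minimal sketch), after a full ``patch_all()`` the default
    selector is gevent's cooperative implementation::

        from gevent import monkey; monkey.patch_all()

        import selectors
        from gevent.selectors import GeventSelector
        assert selectors.DefaultSelector is GeventSelector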
In :func:`patch_all`, the *select* argument controls both this function and :func:`patch_select`. .. versionadded:: 20.6.0 """ try: _check_availability('selectors') except ImportError: # pragma: no cover return _patch_module('selectors', _patch_kwargs={'aggressive': aggressive}) @_ignores_DoNotPatch def patch_subprocess(): """ Replace :func:`subprocess.call`, :func:`subprocess.check_call`, :func:`subprocess.check_output` and :class:`subprocess.Popen` with :mod:`cooperative versions `. .. note:: On Windows under Python 3, the API support may not completely match the standard library. """ _patch_module('subprocess') @_ignores_DoNotPatch def patch_builtins(): """ Make the builtin :func:`__import__` function `greenlet safe`_ under Python 2. .. note:: This does nothing under Python 3 as it is not necessary. Python 3 features improved import locks that are per-module, not global. .. _greenlet safe: https://github.com/gevent/gevent/issues/108 .. deprecated:: 23.7.0 Does nothing on any supported platform. """ @_ignores_DoNotPatch def patch_signal(): """ Make the :func:`signal.signal` function work with a :func:`monkey-patched os `. .. caution:: This method must be used with :func:`patch_os` to have proper ``SIGCHLD`` handling. :func:`patch_all` calls both by default. .. caution:: For proper ``SIGCHLD`` handling, you must yield to the event loop. Using :func:`patch_all` is the easiest way to ensure this. .. seealso:: :mod:`gevent.signal` """ _patch_module("signal") def _check_repatching(**module_settings): _warnings = [] key = '_gevent_saved_patch_all_module_settings' del module_settings['kwargs'] currently_patched = saved.setdefault(key, {}) first_time = not currently_patched if not first_time and currently_patched != module_settings: _queue_warning("Patching more than once will result in the union of all True" " parameters being patched", _warnings) to_patch = {} for k, v in module_settings.items(): # If we haven't seen the setting at all, record it and echo it. # If we have seen the setting, but it became true, record it and echo it. if k not in currently_patched: to_patch[k] = currently_patched[k] = v elif v and not currently_patched[k]: to_patch[k] = currently_patched[k] = True return _warnings, first_time, to_patch def _subscribe_signal_os(will_patch_all): if will_patch_all.will_patch_module('signal') and not will_patch_all.will_patch_module('os'): warnings = will_patch_all._warnings # Internal _queue_warning('Patching signal but not os will result in SIGCHLD handlers' ' installed after this not being called and os.waitpid may not' ' function correctly if gevent.subprocess is used. This may raise an' ' error in the future.', warnings) def patch_all(socket=True, dns=True, time=True, select=True, thread=True, os=True, ssl=True, subprocess=True, sys=False, aggressive=True, Event=True, builtins=True, signal=True, queue=True, contextvars=True, **kwargs): """ Do all of the default monkey patching (calls every other applicable function in this module). :return: A true value if patching all modules wasn't cancelled, a false value if it was. .. versionchanged:: 1.1 Issue a :mod:`warning ` if this function is called multiple times with different arguments. The second and subsequent calls will only add more patches, they can never remove existing patches by setting an argument to ``False``. .. versionchanged:: 1.1 Issue a :mod:`warning ` if this function is called with ``os=False`` and ``signal=True``. This will cause SIGCHLD handlers to not be called. This may be an error in the future. .. 
versionchanged:: 1.3a2 ``Event`` defaults to True. .. versionchanged:: 1.3b1 Defined the return values. .. versionchanged:: 1.3b1 Add ``**kwargs`` for the benefit of event subscribers. CAUTION: gevent may add and interpret additional arguments in the future, so it is suggested to use prefixes for kwarg values to be interpreted by plugins, for example, `patch_all(mylib_futures=True)`. .. versionchanged:: 1.3.5 Add *queue*, defaulting to True, for Python 3.7. .. versionchanged:: 1.5 Remove the ``httplib`` argument. Previously, setting it raised a ``ValueError``. .. versionchanged:: 1.5a3 Add the ``contextvars`` argument. .. versionchanged:: 1.5 Better handling of patching more than once. """ # pylint:disable=too-many-locals,too-many-branches # Check to see if they're changing the patched list _warnings, first_time, modules_to_patch = _check_repatching(**locals()) if not modules_to_patch: # Nothing to do. Either the arguments were identical to what # we previously did, or they specified false values # for things we had previously patched. _process_warnings(_warnings) return for k, v in modules_to_patch.items(): locals()[k] = v from gevent import events try: _notify_patch(events.GeventWillPatchAllEvent(modules_to_patch, kwargs), _warnings) except events.DoNotPatch: return False # order is important if os: patch_os() if thread: patch_thread(Event=Event, _warnings=_warnings) if time: # time must be patched after thread, some modules used by thread # need access to the real time.sleep function. patch_time() # sys must be patched after thread. in other cases threading._shutdown will be # initiated to _MainThread with real thread ident if sys: patch_sys() if socket: patch_socket(dns=dns, aggressive=aggressive) if select: patch_select(aggressive=aggressive) patch_selectors(aggressive=aggressive) if ssl: patch_ssl(_warnings=_warnings, _first_time=first_time) if subprocess: patch_subprocess() if builtins: patch_builtins() if signal: patch_signal() if queue: patch_queue() if contextvars: patch_contextvars() _notify_patch(events.GeventDidPatchBuiltinModulesEvent(modules_to_patch, kwargs), _warnings) _notify_patch(events.GeventDidPatchAllEvent(modules_to_patch, kwargs), _warnings) _process_warnings(_warnings) return True def __getattr__(name): if name == 'main': from ._main import main return main raise AttributeError(name) gevent-24.11.1/src/gevent/monkey/__main__.py000066400000000000000000000000371471441230600206250ustar00rootroot00000000000000from ._main import main main() gevent-24.11.1/src/gevent/monkey/_errors.py000066400000000000000000000010431471441230600205560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Exception classes and errors that this package may raise. """ import logging logger = logging.getLogger(__name__) class _BadImplements(AttributeError): """ Raised when ``__implements__`` is incorrect. """ def __init__(self, module): AttributeError.__init__( self, "Module %r has a bad or missing value for __implements__" % (module,) ) class MonkeyPatchWarning(RuntimeWarning): """ The type of warnings we issue. .. versionadded:: 1.3a2 """ gevent-24.11.1/src/gevent/monkey/_main.py000066400000000000000000000104731471441230600201750ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ The real functionality to run this package as a main module. """ def main(): # TODO: Now that this is its own module, see if # we can refactor. # pylint:disable=too-many-locals import sys from . 
import patch_all args = {} argv = sys.argv[1:] verbose = False run_fn = "run_path" script_help, patch_all_args, modules = _get_script_help() while argv and argv[0].startswith('--'): option = argv[0][2:] if option == 'verbose': verbose += 1 elif option == 'module': run_fn = "run_module" elif option.startswith('no-') and option.replace('no-', '') in patch_all_args: args[option[3:]] = False elif option in patch_all_args: args[option] = True if option in modules: for module in modules: args.setdefault(module, False) else: sys.exit(script_help + '\n\n' + 'Cannot patch %r' % option) del argv[0] # TODO: break on -- if verbose: import pprint import os print('gevent.monkey.patch_all(%s)' % ', '.join('%s=%s' % item for item in args.items())) print('sys.version=%s' % (sys.version.strip().replace('\n', ' '), )) print('sys.path=%s' % pprint.pformat(sys.path)) print('sys.modules=%s' % pprint.pformat(sorted(sys.modules.keys()))) print('cwd=%s' % os.getcwd()) if not argv: print(script_help) return sys.argv[:] = argv # Make sure that we don't get imported again under a different # name (usually it's ``__main__`` here) because that could lead to # double-patching, and making monkey.get_original() not work. try: mod_name = __spec__.name except NameError: # Py2: __spec__ is not defined as standard mod_name = 'gevent.monkey' sys.modules[mod_name] = sys.modules[__name__] # On Python 2, we have to set the gevent.monkey attribute # manually; putting gevent.monkey into sys.modules stops the # import machinery from making that connection, and ``from gevent # import monkey`` is broken. On Python 3 (.8 at least) that's not # necessary. assert 'gevent.monkey' in sys.modules # Running ``patch_all()`` will load pkg_resources entry point plugins # which may attempt to import ``gevent.monkey``, so it is critical that # we have established the correct saved module name first. patch_all(**args) import runpy # Use runpy.run_path to closely (exactly) match what the # interpreter does given 'python '. This includes allowing # passing .pyc/.pyo files and packages with a __main__ and # potentially even zip files. Previously we used exec, which only # worked if we directly read a python source file. run_meth = getattr(runpy, run_fn) return run_meth(sys.argv[0], run_name='__main__') def _get_script_help(): # pylint:disable=deprecated-method import inspect from . import patch_all getter = inspect.getfullargspec patch_all_args = getter(patch_all)[0] modules = [x for x in patch_all_args if 'patch_' + x in globals()] script_help = """gevent.monkey - monkey patch the standard modules to use gevent. USAGE: ``python -m gevent.monkey [MONKEY OPTIONS] [--module] (script|module) [SCRIPT OPTIONS]`` If no MONKEY OPTIONS are present, monkey patches all the modules as if by calling ``patch_all()``. You can exclude a module with --no-, e.g. --no-thread. You can specify a module to patch with --, e.g. --socket. In the latter case only the modules specified on the command line will be patched. The default behavior is to execute the script passed as argument. If you wish to run a module instead, pass the `--module` argument before the module name. .. versionchanged:: 1.3b1 The *script* argument can now be any argument that can be passed to `runpy.run_path`, just like the interpreter itself does, for example a package directory containing ``__main__.py``. Previously it had to be the path to a .py source file. .. versionchanged:: 1.5 The `--module` option has been added. 
MONKEY OPTIONS: ``--verbose %s``""" % ', '.join('--[no-]%s' % m for m in modules) return script_help, patch_all_args, modules main.__doc__ = _get_script_help()[0] if __name__ == '__main__': main() gevent-24.11.1/src/gevent/monkey/_patch_thread_common.py000066400000000000000000000314771471441230600232560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ The implementation of thread patching for Python versions prior to 3.13. Internal use only. """ import sys from ._util import _patch_module from ._util import _queue_warning from ._util import _notify_patch from ._state import is_object_patched def _patch_existing_locks(threading): if len(list(threading.enumerate())) != 1: return # This is used to protect internal data structures for enumerate. # It's acquired when threads are started and when they're stopped. # Stopping a thread checks a Condition, which on Python 2 wants to test # _is_owned of its (patched) Lock. Since our LockType doesn't have # _is_owned, it tries to acquire the lock non-blocking; that triggers a # switch. If the next thing in the callback list was a thread that needed # to start or end, we wouldn't be able to acquire this native lock # because it was being held already; we couldn't switch either, so we'd # block permanently. threading._active_limbo_lock = threading._allocate_lock() try: tid = threading.get_ident() except AttributeError: tid = threading._get_ident() rlock_type = type(threading.RLock()) try: import importlib._bootstrap except ImportError: class _ModuleLock(object): pass else: _ModuleLock = importlib._bootstrap._ModuleLock # python 2 pylint: disable=no-member # It might be possible to walk up all the existing stack frames to find # locked objects...at least if they use `with`. To be sure, we look at every object # Since we're supposed to be done very early in the process, there shouldn't be # too many. # Note that the C implementation of locks, at least on some # versions of CPython, cannot be found and cannot be fixed (they simply # don't show up to GC; see https://github.com/gevent/gevent/issues/1354) # By definition there's only one thread running, so the various # owner attributes were the old (native) thread id. Make it our # current greenlet id so that when it wants to unlock and compare # self.__owner with _get_ident(), they match. gc = __import__('gc') for o in gc.get_objects(): if isinstance(o, rlock_type): for owner_name in ( '_owner', # Python 3 or backported PyPy2 '_RLock__owner', # Python 2 ): if hasattr(o, owner_name): if getattr(o, owner_name) is not None: setattr(o, owner_name, tid) break else: # pragma: no cover raise AssertionError( "Unsupported Python implementation; " "Found unknown lock implementation.", vars(o) ) elif isinstance(o, _ModuleLock): if o.owner is not None: o.owner = tid class BasePatcher: # Description of the hang: # There is an incompatibility with patching 'thread' and the 'multiprocessing' module: # The problem is that multiprocessing.queues.Queue uses a half-duplex multiprocessing.Pipe, # which is implemented with os.pipe() and _multiprocessing.Connection. os.pipe isn't patched # by gevent, as it returns just a fileno. _multiprocessing.Connection is an internal implementation # class implemented in C, which exposes a 'poll(timeout)' method; under the covers, this issues a # (blocking) select() call: hence the need for a real thread. Except for that method, we could # almost replace Connection with gevent.fileobject.SocketAdapter, plus a trivial # patch to os.pipe (below). Sigh, so close. 
(With a little work, we could replicate that method) # import os # import fcntl # os_pipe = os.pipe # def _pipe(): # r, w = os_pipe() # fcntl.fcntl(r, fcntl.F_SETFL, os.O_NONBLOCK) # fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK) # return r, w # os.pipe = _pipe gevent_threading_mod = None gevent_thread_mod = None thread_mod = None threading_mod = None orig_current_thread = None main_thread = None orig_shutdown = None def __init__(self, threading=True, _threading_local=True, Event=True, logging=True, existing_locks=True, _warnings=None): self.threading = threading self.threading_local = _threading_local self.Event = Event self.logging = logging self.existing_locks = existing_locks self.warnings = _warnings def __call__(self): # The 'threading' module copies some attributes from the # thread module the first time it is imported. If we patch 'thread' # before that happens, then we store the wrong values in 'saved', # So if we're going to patch threading, we either need to import it # before we patch thread, or manually clean up the attributes that # are in trouble. The latter is tricky because of the different names # on different versions. self.threading_mod = __import__('threading') # Capture the *real* current thread object before # we start returning DummyThread objects, for comparison # to the main thread. self.orig_current_thread = self.threading_mod.current_thread() self.main_thread = self.threading_mod.main_thread() self.orig_shutdown = self.threading_mod._shutdown gevent_thread_mod, thread_mod = _patch_module('thread', _warnings=self.warnings, _notify_did_subscribers=False) if self.threading: self.patch_threading_event_logging_existing_locks() if self.threading_local: self.patch__threading_local() if self.threading: self.patch_active_threads() # Issue 18808 changes the nature of Thread.join() to use # locks. This means that a greenlet spawned in the main thread # (which is already running) cannot wait for the main thread---it # hangs forever. We patch around this if possible. See also # gevent.threading. already_patched = is_object_patched('threading', '_shutdown') if self.orig_current_thread == self.threading_mod.main_thread() and not already_patched: self.patch_threading_shutdown_on_main_thread_not_already_patched() self.patch_main_thread_cleanup() elif not already_patched: self.patch_shutdown_not_on_main_thread() from gevent import events _notify_patch(events.GeventDidPatchModuleEvent('thread', gevent_thread_mod, thread_mod)) if self.gevent_threading_mod is not None: _notify_patch(events.GeventDidPatchModuleEvent('threading', self.gevent_threading_mod, self.threading_mod)) def patch_threading_event_logging_existing_locks(self): self.gevent_threading_mod, patched_mod = _patch_module( 'threading', _warnings=self.warnings, _notify_did_subscribers=False) assert patched_mod is self.threading_mod if self.Event: self.patch_event() if self.existing_locks: _patch_existing_locks(self.threading_mod) if self.logging and 'logging' in sys.modules: self.patch_logging() def patch_event(self): from .api import patch_item from gevent.event import Event patch_item(self.threading_mod, 'Event', Event) # Python 2 had `Event` as a function returning # the private class `_Event`. Some code may be relying # on that. 
if hasattr(self.threading_mod, '_Event'): patch_item(self.threading_mod, '_Event', Event) def patch_logging(self): from .api import patch_item logging = __import__('logging') patch_item(logging, '_lock', self.threading_mod.RLock()) for wr in logging._handlerList: # In py26, these are actual handlers, not weakrefs handler = wr() if callable(wr) else wr if handler is None: continue if not hasattr(handler, 'lock'): raise TypeError("Unknown/unsupported handler %r" % handler) handler.lock = self.threading_mod.RLock() def patch__threading_local(self): _threading_local = __import__('_threading_local') from .api import patch_item from gevent.local import local patch_item(_threading_local, 'local', local) def patch_active_threads(self): raise NotImplementedError def patch_threading_shutdown_on_main_thread_not_already_patched(self): raise NotImplementedError def patch_main_thread_cleanup(self): # We create a bit of a reference cycle here, # so main_thread doesn't get to be collected in a timely way. # Not good. Take it out of dangling so we don't get # warned about it. main_thread = self.main_thread self.threading_mod._dangling.remove(main_thread) # Patch up the ident of the main thread to match. This # matters if threading was imported before monkey-patching # thread oldid = main_thread.ident main_thread._ident = self.threading_mod.get_ident() if oldid in self.threading_mod._active: self.threading_mod._active[main_thread.ident] = self.threading_mod._active[oldid] if oldid != main_thread.ident: del self.threading_mod._active[oldid] def patch_shutdown_not_on_main_thread(self): _queue_warning("Monkey-patching not on the main thread; " "threading.main_thread().join() will hang from a greenlet", self.warnings) from .api import patch_item main_thread = self.main_thread threading_mod = self.threading_mod get_ident = self.threading_mod.get_ident orig_shutdown = self.orig_shutdown def _shutdown(): # We've patched get_ident but *did not* patch the # main_thread.ident value. Beginning in Python 3.9.8 # and then later releases (3.10.1, probably), the # _main_thread object is only _stop() if the ident of # the current thread (the *real* main thread) matches # the ident of the _main_thread object. But without doing that, # the main thread's shutdown lock (threading._shutdown_locks) is never # removed *or released*, thus hanging the interpreter. # XXX: There's probably a better way to do this. Probably need to take a # step back and look at the whole picture. main_thread._ident = get_ident() orig_shutdown() patch_item(threading_mod, '_shutdown', orig_shutdown) patch_item(threading_mod, '_shutdown', _shutdown) @staticmethod # Static to be sure we don't accidentally capture `self` and keep it alive def _make_existing_non_main_thread_join_func(thread, thread_greenlet, threading_mod): from gevent.hub import sleep from time import time # TODO: This is almost the algorithm that the 3.13 _ThreadHandle class # employs. UNIFY them. def join(timeout=None): end = None if threading_mod.current_thread() is thread: raise RuntimeError("Cannot join current thread") if thread_greenlet is not None and thread_greenlet.dead: return # You may ask: Why not call thread_greenlet.join()? # Well, in the one case we actually have a greenlet, it's the # low-level greenlet.greenlet object for the main thread, which # doesn't have a join method. # # You may ask: Why not become the main greenlet's *parent* # so you can get notified when it finishes? 
Because you can't # create a greenlet cycle (the current greenlet is a descendent # of the parent), and nor can you set a greenlet's parent to None, # so there can only ever be one greenlet with a parent of None: the main # greenlet, the one we need to watch. # # You may ask: why not swizzle out the problematic lock on the main thread # into a gevent friendly lock? Well, the interpreter actually depends on that # for the main thread in threading._shutdown; see below. if not thread.is_alive(): return if timeout: end = time() + timeout while thread.is_alive(): if end is not None and time() > end: return sleep(0.01) return join gevent-24.11.1/src/gevent/monkey/_patch_thread_gte313.py000066400000000000000000000066421471441230600227700ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ The implementation of thread patching for Python versions after 3.13. Internal use only. """ import sys from ._patch_thread_common import BasePatcher class Patcher(BasePatcher): def patch_active_threads(self): from gevent.threading import main_native_thread for thread in self.threading_mod._active.values(): if thread == main_native_thread(): from greenlet import getcurrent from gevent.thread import _ThreadHandle thread._after_fork = lambda new_ident=None: new_ident thread._handle = _ThreadHandle() thread._handle._set_greenlet(getcurrent()) thread._ident = thread._handle.ident assert thread.ident == thread._handle.ident continue thread.join = self._make_existing_non_main_thread_join_func(thread, None, self.threading_mod) def patch_threading_shutdown_on_main_thread_not_already_patched(self): import greenlet from .api import patch_item main_thread = self.main_thread threading_mod = self.threading_mod orig_shutdown = self.orig_shutdown _greenlet = main_thread._greenlet = greenlet.getcurrent() def _shutdown(): # Release anyone trying to join() me, # and let us switch to them. main_thread._handle._set_done() from gevent import sleep try: sleep() except: # pylint:disable=bare-except # A greenlet could have .kill() us # or .throw() to us. I'm the main greenlet, # there's no where else for this to go. from gevent import get_hub get_hub().print_exception(_greenlet, *sys.exc_info()) # Now, this may have resulted in us getting stopped # if some other greenlet actually just ran there. # That's not good, we're not supposed to be stopped # when we enter _shutdown. class FakeHandle: def is_done(self): return False def _set_done(self): return def join(self): return main_thread._handle = FakeHandle() assert main_thread.is_alive() # main_thread._is_stopped = False # main_thread._tstate_lock = main_thread.__real_tstate_lock # main_thread.__real_tstate_lock = None # The only truly blocking native shutdown lock to # acquire should be our own (hopefully), and the call to # _stop that orig_shutdown makes will discard it. # XXX: What if more get spawned? for t in list(threading_mod.enumerate()): if t.daemon or t is main_thread: continue while t.is_alive(): try: t._handle.join(0.001) except RuntimeError: # Joining ourself. t._handle._set_done() break orig_shutdown() patch_item(threading_mod, '_shutdown', self.orig_shutdown) patch_item(self.threading_mod, '_shutdown', _shutdown) gevent-24.11.1/src/gevent/monkey/_patch_thread_lt313.py000066400000000000000000000057561471441230600226350ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ The implementation of thread patching for Python versions prior to 3.13. Internal use only. 
""" import sys from ._patch_thread_common import BasePatcher class Patcher(BasePatcher): def patch_active_threads(self): from gevent.threading import main_native_thread threading_mod = self.threading_mod for thread in threading_mod._active.values(): if thread == main_native_thread(): continue thread.join = self._make_existing_non_main_thread_join_func(thread, None, threading_mod) def patch_threading_shutdown_on_main_thread_not_already_patched(self): import greenlet from .api import patch_item threading_mod = self.threading_mod main_thread = self.main_thread orig_shutdown = self.orig_shutdown _greenlet = main_thread._greenlet = greenlet.getcurrent() main_thread._gevent_real_tstate_lock = main_thread._tstate_lock assert main_thread._gevent_real_tstate_lock is not None # The interpreter will call threading._shutdown # when the main thread exits and is about to # go away. It is called *in* the main thread. This # is a perfect place to notify other greenlets that # the main thread is done. We do this by overriding the # lock of the main thread during operation, and only restoring # it to the native blocking version at shutdown time # (the interpreter also has a reference to this lock in a # C data structure). main_thread._tstate_lock = threading_mod.Lock() main_thread._tstate_lock.acquire() def _shutdown(): # Release anyone trying to join() me, # and let us switch to them. if not main_thread._tstate_lock: return main_thread._tstate_lock.release() from gevent import sleep try: sleep() except: # pylint:disable=bare-except # A greenlet could have .kill() us # or .throw() to us. I'm the main greenlet, # there's no where else for this to go. from gevent import get_hub get_hub().print_exception(_greenlet, *sys.exc_info()) # Now, this may have resulted in us getting stopped # if some other greenlet actually just ran there. # That's not good, we're not supposed to be stopped # when we enter _shutdown. main_thread._is_stopped = False main_thread._tstate_lock = main_thread._gevent_real_tstate_lock main_thread._gevent_real_tstate_lock = None # The only truly blocking native shutdown lock to # acquire should be our own (hopefully), and the call to # _stop that orig_shutdown makes will discard it. orig_shutdown() patch_item(threading_mod, '_shutdown', orig_shutdown) patch_item(threading_mod, '_shutdown', _shutdown) gevent-24.11.1/src/gevent/monkey/_state.py000066400000000000000000000035511471441230600203700ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ State management and query functions for tracking and discovering what has been patched. """ import logging logger = logging.getLogger(__name__) # maps module name -> {attribute name: original item} # e.g. "time" -> {"sleep": built-in function sleep} # NOT A PUBLIC API. However, third-party monkey-patchers may be using # it? TODO: Provide better API for them. saved:dict = {} def is_module_patched(mod_name): """ Check if a module has been replaced with a cooperative version. :param str mod_name: The name of the standard library module, e.g., ``'socket'``. """ return mod_name in saved def is_object_patched(mod_name, item_name): """ Check if an object in a module has been replaced with a cooperative version. :param str mod_name: The name of the standard library module, e.g., ``'socket'``. :param str item_name: The name of the attribute in the module, e.g., ``'create_connection'``. """ return is_module_patched(mod_name) and item_name in saved[mod_name] def is_anything_patched(): """ Check if this module has done any patching in the current process. 
This is currently only used in gevent tests. Not currently a documented, public API, because I'm not convinced it is 100% reliable in the event of third-party patch functions that don't use ``saved``. .. versionadded:: 21.1.0 """ return bool(saved) def _get_original(name, items): d = saved.get(name, {}) values = [] module = None for item in items: if item in d: values.append(d[item]) else: if module is None: module = __import__(name) values.append(getattr(module, item)) return values def _save(module, attr_name, item): saved.setdefault(module.__name__, {}).setdefault(attr_name, item) gevent-24.11.1/src/gevent/monkey/_util.py000066400000000000000000000071511471441230600202250ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Utilities used in patching. Internal use only. """ import sys def _notify_patch(event, _warnings=None): # Raises DoNotPatch if we're not supposed to patch from gevent.events import notify_and_call_entry_points event._warnings = _warnings notify_and_call_entry_points(event) def _ignores_DoNotPatch(func): from functools import wraps @wraps(func) def ignores(*args, **kwargs): from gevent.events import DoNotPatch try: return func(*args, **kwargs) except DoNotPatch: return False return ignores def _check_availability(name): """ Test that the source and target modules for *name* are available and return them. :raise ImportError: If the source or target cannot be imported. :return: The tuple ``(gevent_module, target_module, target_module_name)`` """ # Always import the gevent module first. This helps us be sure we can # use regular imports in gevent files (when we can't use gevent.monkey.get_original()) gevent_module = getattr(__import__('gevent.' + name), name) target_module_name = getattr(gevent_module, '__target__', name) target_module = __import__(target_module_name) return gevent_module, target_module, target_module_name def _patch_module(name, items=None, _warnings=None, _patch_kwargs=None, _notify_will_subscribers=True, _notify_did_subscribers=True, _call_hooks=True): from .api import patch_module gevent_module, target_module, target_module_name = _check_availability(name) patch_module(target_module, gevent_module, items=items, _warnings=_warnings, _patch_kwargs=_patch_kwargs, _notify_will_subscribers=_notify_will_subscribers, _notify_did_subscribers=_notify_did_subscribers, _call_hooks=_call_hooks) # On Python 2, the `futures` package will install # a bunch of modules with the same name as those from Python 3, # such as `_thread`; primarily these just do `from thread import *`, # meaning we have alternate references. If that's already been imported, # we need to attempt to patch that too. # Be sure to keep the original states matching also. alternate_names = getattr(gevent_module, '__alternate_targets__', ()) from ._state import saved # TODO: Add apis for these use cases. for alternate_name in alternate_names: alternate_module = sys.modules.get(alternate_name) if alternate_module is not None and alternate_module is not target_module: saved.pop(alternate_name, None) patch_module(alternate_module, gevent_module, items=items, _warnings=_warnings, _notify_will_subscribers=False, _notify_did_subscribers=False, _call_hooks=False) saved[alternate_name] = saved[target_module_name] return gevent_module, target_module def _queue_warning(message, _warnings): # Queues a warning to show after the monkey-patching process is all done. # Done this way to avoid extra imports during the process itself, just # in case. 
If we're calling a function one-off (unusual) go ahead and do it if _warnings is None: _process_warnings([message]) else: _warnings.append(message) def _process_warnings(_warnings): import warnings from ._errors import MonkeyPatchWarning for warning in _warnings: warnings.warn(warning, MonkeyPatchWarning, stacklevel=3) gevent-24.11.1/src/gevent/monkey/api.py000066400000000000000000000151501471441230600176600ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Higher level functions that comprise parts of the public monkey patching API. """ def get_original(mod_name, item_name): """ Retrieve the original object from a module. If the object has not been patched, then that object will still be retrieved. :param str|sequence mod_name: The name of the standard library module, e.g., ``'socket'``. Can also be a sequence of standard library modules giving alternate names to try, e.g., ``('thread', '_thread')``; the first importable module will supply all *item_name* items. :param str|sequence item_name: A string or sequence of strings naming the attribute(s) on the module ``mod_name`` to return. :return: The original value if a string was given for ``item_name`` or a sequence of original values if a sequence was passed. """ from ._state import _get_original mod_names = [mod_name] if isinstance(mod_name, str) else mod_name if isinstance(item_name, str): item_names = [item_name] unpack = True else: item_names = item_name unpack = False for mod in mod_names: try: result = _get_original(mod, item_names) except ImportError: if mod is mod_names[-1]: raise else: return result[0] if unpack else result _NONE = object() def patch_item(module, attr, newitem): from ._state import _save olditem = getattr(module, attr, _NONE) if olditem is not _NONE: _save(module, attr, olditem) setattr(module, attr, newitem) def remove_item(module, attr): from ._state import _save olditem = getattr(module, attr, _NONE) if olditem is _NONE: return _save(module, attr, olditem) delattr(module, attr) def patch_module(target_module, source_module, items=None, _warnings=None, _patch_kwargs=None, _notify_will_subscribers=True, _notify_did_subscribers=True, _call_hooks=True): """ patch_module(target_module, source_module, items=None) Replace attributes in *target_module* with the attributes of the same name in *source_module*. The *source_module* can provide some attributes to customize the process: * ``__implements__`` is a list of attribute names to copy; if not present, the *items* keyword argument is mandatory. ``__implements__`` must only have names from the standard library module in it. * ``_gevent_will_monkey_patch(target_module, items, warn, **kwargs)`` * ``_gevent_did_monkey_patch(target_module, items, warn, **kwargs)`` These two functions in the *source_module* are called *if* they exist, before and after copying attributes, respectively. The "will" function may modify *items*. The value of *warn* is a function that should be called with a single string argument to issue a warning to the user. If the "will" function raises :exc:`gevent.events.DoNotPatch`, no patching will be done. These functions are called before any event subscribers or plugins. :keyword list items: A list of attribute names to replace. If not given, this will be taken from the *source_module* ``__implements__`` attribute. :return: A true value if patching was done, a false value if patching was canceled. .. 
versionadded:: 1.3b1 """ from gevent import events from ._errors import _BadImplements from ._util import _notify_patch if items is None: try: items = source_module.__implements__ except AttributeError as e: raise _BadImplements(source_module) from e if items is None: raise _BadImplements(source_module) try: if _call_hooks: __call_module_hook(source_module, 'will', target_module, items, _warnings) if _notify_will_subscribers: _notify_patch( events.GeventWillPatchModuleEvent(target_module.__name__, source_module, target_module, items), _warnings) except events.DoNotPatch: return False # Undocumented, internal use: If the module defines # `_gevent_do_monkey_patch(patch_request: _GeventDoPatchRequest)` call that; # the module is responsible for its own patching. do_patch = getattr( source_module, '_gevent_do_monkey_patch', _GeventDoPatchRequest.default_patch_items ) request = _GeventDoPatchRequest(target_module, source_module, items, _patch_kwargs) do_patch(request) if _call_hooks: __call_module_hook(source_module, 'did', target_module, items, _warnings) if _notify_did_subscribers: # We allow turning off the broadcast of the 'did' event for the benefit # of our internal functions which need to do additional work (besides copying # attributes) before their patch can be considered complete. _notify_patch( events.GeventDidPatchModuleEvent(target_module.__name__, source_module, target_module) ) return True class _GeventDoPatchRequest(object): get_original = staticmethod(get_original) def __init__(self, target_module, source_module, items, patch_kwargs): self.target_module = target_module self.source_module = source_module self.items = items self.patch_kwargs = patch_kwargs or {} def __repr__(self): return '<%s target=%r source=%r items=%r kwargs=%r>' % ( self.__class__.__name__, self.target_module, self.source_module, self.items, self.patch_kwargs ) def default_patch_items(self): for attr in self.items: patch_item(self.target_module, attr, getattr(self.source_module, attr)) def remove_item(self, target_module, *items): if isinstance(target_module, str): items = (target_module,) + items target_module = self.target_module for item in items: remove_item(target_module, item) def __call_module_hook(gevent_module, name, module, items, _warnings): # This function can raise DoNotPatch on 'will' def warn(message): from ._util import _queue_warning _queue_warning(message, _warnings) func_name = '_gevent_' + name + '_monkey_patch' try: func = getattr(gevent_module, func_name) except AttributeError: func = lambda *args: None func(module, items, warn) gevent-24.11.1/src/gevent/os.py000066400000000000000000000505721471441230600162350ustar00rootroot00000000000000""" Low-level operating system functions from :mod:`os`. Cooperative I/O =============== This module provides cooperative versions of :func:`os.read` and :func:`os.write`. These functions are *not* monkey-patched; you must explicitly call them or monkey patch them yourself. POSIX functions --------------- On POSIX, non-blocking IO is available. - :func:`nb_read` - :func:`nb_write` - :func:`make_nonblocking` All Platforms ------------- On non-POSIX platforms (e.g., Windows), non-blocking IO is not available. On those platforms (and on POSIX), cooperative IO can be done with the threadpool. - :func:`tp_read` - :func:`tp_write` Child Processes =============== The functions :func:`fork` and (on POSIX) :func:`forkpty` and :func:`waitpid` can be used to manage child processes. .. 
warning:: Forking a process that uses greenlets does not eliminate all non-running greenlets. Any that were scheduled in the hub of the forking thread in the parent remain scheduled in the child; compare this to how normal threads operate. (This behaviour may change in a subsequent major release.) """ from __future__ import absolute_import import os from gevent.hub import _get_hub_noargs as get_hub from gevent.hub import reinit from gevent._config import config from gevent._util import copy_globals import errno EAGAIN = getattr(errno, 'EAGAIN', 11) try: import fcntl except ImportError: fcntl = None __implements__ = ['fork'] __extensions__ = ['tp_read', 'tp_write'] _read = os.read _write = os.write ignored_errors = [EAGAIN, errno.EINTR] if fcntl: __extensions__ += ['make_nonblocking', 'nb_read', 'nb_write'] def make_nonblocking(fd): """Put the file descriptor *fd* into non-blocking mode if possible. :return: A boolean value that evaluates to True if successful. """ flags = fcntl.fcntl(fd, fcntl.F_GETFL, 0) if not bool(flags & os.O_NONBLOCK): fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK) return True def nb_read(fd, n): """ Read up to *n* bytes from file descriptor *fd*. Return a byte string containing the bytes read, which may be shorter than *n*. If end-of-file is reached, an empty string is returned. The descriptor must be in non-blocking mode. """ hub = None event = None try: while 1: try: result = _read(fd, n) return result except OSError as e: if e.errno not in ignored_errors: raise if hub is None: hub = get_hub() event = hub.loop.io(fd, 1) hub.wait(event) finally: if event is not None: event.close() event = None hub = None def nb_write(fd, buf): """ Write some number of bytes from buffer *buf* to file descriptor *fd*. Return the number of bytes written, which may be less than the length of *buf*. The file descriptor must be in non-blocking mode. """ hub = None event = None try: while 1: try: result = _write(fd, buf) return result except OSError as e: if e.errno not in ignored_errors: raise if hub is None: hub = get_hub() event = hub.loop.io(fd, 2) hub.wait(event) finally: if event is not None: event.close() event = None hub = None def tp_read(fd, n): """Read up to *n* bytes from file descriptor *fd*. Return a string containing the bytes read. If end-of-file is reached, an empty string is returned. Reading is done using the threadpool. """ return get_hub().threadpool.apply(_read, (fd, n)) def tp_write(fd, buf): """Write bytes from buffer *buf* to file descriptor *fd*. Return the number of bytes written. Writing is done using the threadpool. """ return get_hub().threadpool.apply(_write, (fd, buf)) if hasattr(os, 'fork'): # pylint:disable=function-redefined,redefined-outer-name _raw_fork = os.fork def fork_gevent(): """ Forks the process using :func:`os.fork` and prepares the child process to continue using gevent before returning. .. note:: The PID returned by this function may not be waitable with either the original :func:`os.waitpid` or this module's :func:`waitpid` and it may not generate SIGCHLD signals if libev child watchers are or ever have been in use. For example, the :mod:`gevent.subprocess` module uses libev child watchers (which parts of gevent use libev child watchers is subject to change at any time).
Most applications should use :func:`fork_and_watch`, which is monkey-patched as the default replacement for :func:`os.fork` and implements the ``fork`` function of this module by default, unless the environment variable ``GEVENT_NOWAITPID`` is defined before this module is imported. .. versionadded:: 1.1b2 """ import warnings # The simple `catch_warnings(action='ignore', category=DeprecationWarning)` # is only available in 3.11+. with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) result = _raw_fork() if not result: reinit() return result def fork(): """ A wrapper for :func:`fork_gevent` for non-POSIX platforms. """ return fork_gevent() if hasattr(os, 'forkpty'): _raw_forkpty = os.forkpty def forkpty_gevent(): """ Forks the process using :func:`os.forkpty` and prepares the child process to continue using gevent before returning. Returns a tuple (pid, master_fd). The `master_fd` is *not* put into non-blocking mode. Availability: Some Unix systems. .. seealso:: This function has the same limitations as :func:`fork_gevent`. .. versionadded:: 1.1b5 """ pid, master_fd = _raw_forkpty() if not pid: reinit() return pid, master_fd forkpty = forkpty_gevent __implements__.append('forkpty') __extensions__.append("forkpty_gevent") if hasattr(os, 'WNOWAIT') or hasattr(os, 'WNOHANG'): # We can only do this on POSIX import time _waitpid = os.waitpid _WNOHANG = os.WNOHANG # replaced by the signal module. _on_child_hook = lambda: None # {pid -> watcher or tuple(pid, rstatus, timestamp)} _watched_children = {} def _on_child(watcher, callback): # XXX: Could handle tracing here by not stopping # until the pid is terminated watcher.stop() try: _watched_children[watcher.pid] = (watcher.pid, watcher.rstatus, time.time()) if callback: callback(watcher) # dispatch an "event"; used by gevent.signal.signal _on_child_hook() # now is as good a time as any to reap children _reap_children() finally: watcher.close() def _reap_children(timeout=60): # Remove all the dead children that haven't been waited on # for the *timeout* seconds. # Some platforms queue delivery of SIGCHLD for all children that die; # in that case, a well-behaved application should call waitpid() for each # signal. # Some platforms (linux) only guarantee one delivery if multiple children # die. On that platform, the well-behave application calls waitpid() in a loop # until it gets back -1, indicating no more dead children need to be waited for. # In either case, waitpid should be called the same number of times as dead children, # thus removing all the watchers when a SIGCHLD arrives. The (generous) timeout # is to work with applications that neglect to call waitpid and prevent "unlimited" # growth. # Note that we don't watch for the case of pid wraparound. That is, we fork a new # child with the same pid as an existing watcher, but the child is already dead, # just not waited on yet. now = time.time() oldest_allowed = now - timeout dead = [ pid for pid, val in _watched_children.items() if isinstance(val, tuple) and val[2] < oldest_allowed ] for pid in dead: del _watched_children[pid] def waitpid(pid, options): """ Wait for a child process to finish. If the child process was spawned using :func:`fork_and_watch`, then this function behaves cooperatively. If not, it *may* have race conditions; see :func:`fork_gevent` for more information. The arguments are as for the underlying :func:`os.waitpid`. Some combinations of *options* may not be supported cooperatively (as of 1.1 that includes WUNTRACED). 
Using a *pid* of 0 to request waiting on only processes from the current process group is not cooperative. A *pid* of -1 to wait for any child is non-blocking, but may or may not require a trip around the event loop, depending on whether any children have already terminated but not been waited on. Availability: POSIX. .. versionadded:: 1.1b1 .. versionchanged:: 1.2a1 More cases are handled in a cooperative manner. """ # pylint: disable=too-many-return-statements # XXX Does not handle tracing children # So long as libev's loop doesn't run, it's OK to add # child watchers. The SIGCHLD handler only feeds events # for the next iteration of the loop to handle. (And the # signal handler itself is only called from the next loop # iteration.) if pid <= 0: # magic functions for multiple children. if pid == -1: # Any child. If we have one that we're watching # and that finished, we will use that one, # preferring the oldest. Otherwise, let the OS # take care of it. finished_at = None for k, v in _watched_children.items(): if ( isinstance(v, tuple) and (finished_at is None or v[2] < finished_at) ): pid = k finished_at = v[2] if pid <= 0: # We didn't have one that was ready. If there are # no funky options set, and the pid was -1 # (meaning any process, not 0, which means process # group--- libev doesn't know about process # groups) then we can use a child watcher of pid 0; otherwise, # pass through to the OS. if pid == -1 and options == 0: hub = get_hub() with hub.loop.child(0, False) as watcher: hub.wait(watcher) return watcher.rpid, watcher.rstatus # There were funky options/pid, so we must go to the OS. return _waitpid(pid, options) if pid in _watched_children: # yes, we're watching it # Note that the remainder of this code must be careful to NOT # yield to the event loop except at well known times, or # we have a race condition between the _on_child callback and the # code here that could lead to a process to hang. if options & _WNOHANG or isinstance(_watched_children[pid], tuple): # We're either asked not to block, or it already finished, in which # case blocking doesn't matter result = _watched_children[pid] if isinstance(result, tuple): # it finished. libev child watchers # are one-shot del _watched_children[pid] return result[:2] # it's not finished return (0, 0) # Ok, we need to "block". Do so via a watcher so that we're # cooperative. We know it's our child, etc, so this should work. watcher = _watched_children[pid] # We can't start a watcher that's already started, # so we can't reuse the existing watcher. Notice that the # old watcher must not have fired already, or during this time, but # only after we successfully `start()` the watcher. So this must # not yield to the event loop. with watcher.loop.child(pid, False) as new_watcher: get_hub().wait(new_watcher) # Ok, so now the new watcher is done. That means # the old watcher's callback (_on_child) should # have fired, potentially taking this child out of # _watched_children (but that could depend on how # many callbacks there were to run, so use the # watcher object directly; libev sets all the # watchers at the same time). 
return watcher.rpid, watcher.rstatus # we're not watching it and it may not even be our child, # so we must go to the OS to be sure to get the right semantics (exception) # XXX # libuv has a race condition because the signal # handler is a Python function, so the InterruptedError # is raised before the signal handler runs and calls the # child watcher # we're not watching it return _waitpid(pid, options) def _watch_child(pid, callback=None, loop=None, ref=False): loop = loop or get_hub().loop watcher = loop.child(pid, ref=ref) _watched_children[pid] = watcher watcher.start(_on_child, watcher, callback) def fork_and_watch(callback=None, loop=None, ref=False, fork=fork_gevent): """ Fork a child process and start a child watcher for it in the parent process. This call cooperates with :func:`waitpid` to enable cooperatively waiting for children to finish. When monkey-patching, these functions are patched in as :func:`os.fork` and :func:`os.waitpid`, respectively. In the child process, this function calls :func:`gevent.hub.reinit` before returning. Availability: POSIX. :keyword callback: If given, a callable that will be called with the child watcher when the child finishes. :keyword loop: The loop to start the watcher in. Defaults to the loop of the current hub. :keyword fork: The fork function. Defaults to :func:`the one defined in this module ` (which automatically calls :func:`gevent.hub.reinit`). Pass the builtin :func:`os.fork` function if you do not need to initialize gevent in the child process. .. versionadded:: 1.1b1 .. seealso:: :func:`gevent.monkey.get_original` To access the builtin :func:`os.fork`. """ pid = fork() if pid: # parent _watch_child(pid, callback, loop, ref) return pid __extensions__.append('fork_and_watch') __extensions__.append('fork_gevent') if 'forkpty' in __implements__: def forkpty_and_watch(callback=None, loop=None, ref=False, forkpty=forkpty_gevent): """ Like :func:`fork_and_watch`, except using :func:`forkpty_gevent`. Availability: Some Unix systems. .. versionadded:: 1.1b5 """ result = [] def _fork(): pid_and_fd = forkpty() result.append(pid_and_fd) return pid_and_fd[0] fork_and_watch(callback, loop, ref, _fork) return result[0] __extensions__.append('forkpty_and_watch') # Watch children by default if not config.disable_watch_children: # Broken out into separate functions instead of simple name aliases # for documentation purposes. def fork(*args, **kwargs): """ Forks a child process and starts a child watcher for it in the parent process so that ``waitpid`` and SIGCHLD work as expected. This implementation of ``fork`` is a wrapper for :func:`fork_and_watch` when the environment variable ``GEVENT_NOWAITPID`` is *not* defined. This is the default and should be used by most applications. .. versionchanged:: 1.1b2 """ # take any args to match fork_and_watch return fork_and_watch(*args, **kwargs) if 'forkpty' in __implements__: def forkpty(*args, **kwargs): """ Like :func:`fork`, but using :func:`forkpty_gevent`. This implementation of ``forkpty`` is a wrapper for :func:`forkpty_and_watch` when the environment variable ``GEVENT_NOWAITPID`` is *not* defined. This is the default and should be used by most applications. .. 
versionadded:: 1.1b5 """ # take any args to match fork_and_watch return forkpty_and_watch(*args, **kwargs) __implements__.append("waitpid") if hasattr(os, 'posix_spawn'): _raw_posix_spawn = os.posix_spawn _raw_posix_spawnp = os.posix_spawnp def posix_spawn(*args, **kwargs): pid = _raw_posix_spawn(*args, **kwargs) _watch_child(pid) return pid def posix_spawnp(*args, **kwargs): pid = _raw_posix_spawnp(*args, **kwargs) _watch_child(pid) return pid __implements__.append("posix_spawn") __implements__.append("posix_spawnp") else: def fork(): """ Forks a child process, initializes gevent in the child, but *does not* prepare the parent to wait for the child or receive SIGCHLD. This implementation of ``fork`` is a wrapper for :func:`fork_gevent` when the environment variable ``GEVENT_NOWAITPID`` *is* defined. This is not recommended for most applications. """ return fork_gevent() if 'forkpty' in __implements__: def forkpty(): """ Like :func:`fork`, but using :func:`os.forkpty` This implementation of ``forkpty`` is a wrapper for :func:`forkpty_gevent` when the environment variable ``GEVENT_NOWAITPID`` *is* defined. This is not recommended for most applications. .. versionadded:: 1.1b5 """ return forkpty_gevent() __extensions__.append("waitpid") else: __implements__.remove('fork') __imports__ = copy_globals(os, globals(), names_to_ignore=__implements__ + __extensions__, dunder_names_to_keep=()) __all__ = list(set(__implements__ + __extensions__)) gevent-24.11.1/src/gevent/pool.py000066400000000000000000000620421471441230600165600ustar00rootroot00000000000000# Copyright (c) 2009-2011 Denis Bilenko. See LICENSE for details. """ Managing greenlets in a group. The :class:`Group` class in this module abstracts a group of running greenlets. When a greenlet dies, it's automatically removed from the group. All running greenlets in a group can be waited on with :meth:`Group.join`, or all running greenlets can be killed with :meth:`Group.kill`. The :class:`Pool` class, which is a subclass of :class:`Group`, provides a way to limit concurrency: its :meth:`spawn ` method blocks if the number of greenlets in the pool has already reached the limit, until there is a free slot. """ from __future__ import print_function, absolute_import, division from gevent.hub import GreenletExit, getcurrent, kill as _kill from gevent.greenlet import joinall, Greenlet from gevent.queue import Full as QueueFull from gevent.timeout import Timeout from gevent.event import Event from gevent.lock import Semaphore, DummySemaphore from gevent._compat import izip from gevent._imap import IMap from gevent._imap import IMapUnordered __all__ = [ 'Group', 'Pool', 'PoolFull', ] class GroupMappingMixin(object): # Internal, non-public API class. # Provides mixin methods for implementing mapping pools. Subclasses must define: __slots__ = () def spawn(self, func, *args, **kwargs): """ A function that runs *func* with *args* and *kwargs*, potentially asynchronously. Return a value with a ``get`` method that blocks until the results of func are available, and a ``rawlink`` method that calls a callback when the results are available. If this object has an upper bound on how many asyncronously executing tasks can exist, this method may block until a slot becomes available. """ raise NotImplementedError() def _apply_immediately(self): """ should the function passed to apply be called immediately, synchronously? """ raise NotImplementedError() def _apply_async_use_greenlet(self): """ Should apply_async directly call Greenlet.spawn(), bypassing `spawn`? 
Return true when self.spawn would block. """ raise NotImplementedError() def _apply_async_cb_spawn(self, callback, result): """ Run the given callback function, possibly asynchronously, possibly synchronously. """ raise NotImplementedError() def apply_cb(self, func, args=None, kwds=None, callback=None): """ :meth:`apply` the given *func(\\*args, \\*\\*kwds)*, and, if a *callback* is given, run it with the results of *func* (unless an exception was raised.) The *callback* may be called synchronously or asynchronously. If called asynchronously, it will not be tracked by this group. (:class:`Group` and :class:`Pool` call it asynchronously in a new greenlet; :class:`~gevent.threadpool.ThreadPool` calls it synchronously in the current greenlet.) """ result = self.apply(func, args, kwds) if callback is not None: self._apply_async_cb_spawn(callback, result) return result def apply_async(self, func, args=None, kwds=None, callback=None): """ A variant of the :meth:`apply` method which returns a :class:`~.Greenlet` object. When the returned greenlet gets to run, it *will* call :meth:`apply`, passing in *func*, *args* and *kwds*. If *callback* is specified, then it should be a callable which accepts a single argument. When the result becomes ready callback is applied to it (unless the call failed). This method will never block, even if this group is full (that is, even if :meth:`spawn` would block, this method will not). .. caution:: The returned greenlet may or may not be tracked as part of this group, so :meth:`joining <join>` this group is not a reliable way to wait for the results to be available or for the returned greenlet to run; instead, join the returned greenlet. .. tip:: Because :class:`~.ThreadPool` objects do not track greenlets, the returned greenlet will never be a part of it. To reduce overhead and improve performance, :class:`Group` and :class:`Pool` may choose to track the returned greenlet. These are implementation details that may change. """ if args is None: args = () if kwds is None: kwds = {} if self._apply_async_use_greenlet(): # cannot call self.spawn() directly because it will block # XXX: This is always the case for ThreadPool, but for Group/Pool # of greenlets, this is only the case when they are full...hence # the weasely language about "may or may not be tracked". Should we make # Group/Pool always return true as well so it's never tracked by any # implementation? That would simplify that logic, but could increase # the total number of greenlets in the system and add a layer of # overhead for the simple cases when the pool isn't full. return Greenlet.spawn(self.apply_cb, func, args, kwds, callback) greenlet = self.spawn(func, *args, **kwds) if callback is not None: greenlet.link(pass_value(callback)) return greenlet def apply(self, func, args=None, kwds=None): """ Rough equivalent of the :func:`apply()` builtin function, blocking until the result is ready and returning it. The ``func`` will *usually*, but not *always*, be run in a way that allows the current greenlet to switch out (for example, in a new greenlet or thread, depending on implementation). But if the current greenlet or thread is already one that was spawned by this pool, the pool may choose to immediately run the `func` synchronously. Any exception ``func`` raises will be propagated to the caller of ``apply`` (that is, this method will raise the exception that ``func`` raised).
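        For example, a minimal sketch using a bounded :class:`Pool` (the
        ``double`` function and the pool size here are illustrative
        assumptions, not part of this API)::

            from gevent.pool import Pool

            def double(n):
                return n * 2

            pool = Pool(2)
            assert pool.apply(double, args=(21,)) == 42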
""" if args is None: args = () if kwds is None: kwds = {} if self._apply_immediately(): return func(*args, **kwds) return self.spawn(func, *args, **kwds).get() def __map(self, func, iterable): return [g.get() for g in [self.spawn(func, i) for i in iterable]] def map(self, func, iterable): """Return a list made by applying the *func* to each element of the iterable. .. seealso:: :meth:`imap` """ # We can't return until they're all done and in order. It # wouldn't seem to much matter what order we wait on them in, # so the simple, fast (50% faster than imap) solution would be: # return [g.get() for g in # [self.spawn(func, i) for i in iterable]] # If the pool size is unlimited (or more than the len(iterable)), this # is equivalent to imap (spawn() will never block, all of them run concurrently, # we call get() in the order the iterable was given). # Now lets imagine the pool if is limited size. Suppose the # func is time.sleep, our pool is limited to 3 threads, and # our input is [10, 1, 10, 1, 1] We would start three threads, # one to sleep for 10, one to sleep for 1, and the last to # sleep for 10. We would block starting the fourth thread. At # time 1, we would finish the second thread and start another # one for time 1. At time 2, we would finish that one and # start the last thread, and then begin executing get() on the first # thread. # Because it's spawn that blocks, this is *also* equivalent to what # imap would do. # The one remaining difference is that imap runs in its own # greenlet, potentially changing the way the event loop runs. # That's easy enough to do. g = Greenlet.spawn(self.__map, func, iterable) return g.get() def map_cb(self, func, iterable, callback=None): result = self.map(func, iterable) if callback is not None: callback(result) # pylint:disable=not-callable return result def map_async(self, func, iterable, callback=None): """ A variant of the map() method which returns a Greenlet object that is executing the map function. If callback is specified then it should be a callable which accepts a single argument. """ return Greenlet.spawn(self.map_cb, func, iterable, callback) def __imap(self, cls, func, *iterables, **kwargs): # Python 2 doesn't support the syntax that lets us mix varargs and # a named kwarg, so we have to unpack manually maxsize = kwargs.pop('maxsize', None) if kwargs: raise TypeError("Unsupported keyword arguments") return cls.spawn(func, izip(*iterables), spawn=self.spawn, _zipped=True, maxsize=maxsize) def imap(self, func, *iterables, **kwargs): """ imap(func, *iterables, maxsize=None) -> iterable An equivalent of :func:`itertools.imap`, operating in parallel. The *func* is applied to each element yielded from each iterable in *iterables* in turn, collecting the result. If this object has a bound on the number of active greenlets it can contain (such as :class:`Pool`), then at most that number of tasks will operate in parallel. :keyword int maxsize: If given and not-None, specifies the maximum number of finished results that will be allowed to accumulate awaiting the reader; more than that number of results will cause map function greenlets to begin to block. This is most useful if there is a great disparity in the speed of the mapping code and the consumer and the results consume a great deal of resources. .. note:: This is separate from any bound on the number of active parallel tasks, though they may have some interaction (for example, limiting the number of parallel tasks to the smallest bound). .. 
note:: Using a bound is slightly more computationally expensive than not using a bound. .. tip:: The :meth:`imap_unordered` method makes much better use of this parameter. Some additional, unspecified, number of objects may be required to be kept in memory to maintain order by this function. :return: An iterable object. .. versionchanged:: 1.1b3 Added the *maxsize* keyword parameter. .. versionchanged:: 1.1a1 Accept multiple *iterables* to iterate in parallel. """ return self.__imap(IMap, func, *iterables, **kwargs) def imap_unordered(self, func, *iterables, **kwargs): """ imap_unordered(func, *iterables, maxsize=None) -> iterable The same as :meth:`imap` except that the ordering of the results from the returned iterator should be considered in arbitrary order. This is lighter weight than :meth:`imap` and should be preferred if order doesn't matter. .. seealso:: :meth:`imap` for more details. """ return self.__imap(IMapUnordered, func, *iterables, **kwargs) class Group(GroupMappingMixin): """ Maintain a group of greenlets that are still running, without limiting their number. Links to each item and removes it upon notification. Groups can be iterated to discover what greenlets they are tracking, they can be tested to see if they contain a greenlet, and they know the number (len) of greenlets they are tracking. If they are not tracking any greenlets, they are False in a boolean context. .. attribute:: greenlet_class Either :class:`gevent.Greenlet` (the default) or a subclass. These are the type of object we will :meth:`spawn`. This can be changed on an instance or in a subclass. """ greenlet_class = Greenlet def __init__(self, *args): assert len(args) <= 1, args self.greenlets = set(*args) if args: for greenlet in args[0]: greenlet.rawlink(self._discard) # each item we kill we place in dying, to avoid killing the same greenlet twice self.dying = set() self._empty_event = Event() self._empty_event.set() def __repr__(self): return '<%s at 0x%x %s>' % (self.__class__.__name__, id(self), self.greenlets) def __len__(self): """ Answer how many greenlets we are tracking. Note that if we are empty, we are False in a boolean context. """ return len(self.greenlets) def __contains__(self, item): """ Answer if we are tracking the given greenlet. """ return item in self.greenlets def __iter__(self): """ Iterate across all the greenlets we are tracking, in no particular order. """ return iter(self.greenlets) def add(self, greenlet): """ Begin tracking the *greenlet*. If this group is :meth:`full`, then this method may block until it is possible to track the greenlet. Typically the *greenlet* should **not** be started when it is added because if this object blocks in this method, then the *greenlet* may run to completion before it is tracked. """ try: rawlink = greenlet.rawlink except AttributeError: pass # non-Greenlet greenlet, like MAIN else: rawlink(self._discard) self.greenlets.add(greenlet) self._empty_event.clear() def _discard(self, greenlet): self.greenlets.discard(greenlet) self.dying.discard(greenlet) if not self.greenlets: self._empty_event.set() def discard(self, greenlet): """ Stop tracking the greenlet. """ self._discard(greenlet) try: unlink = greenlet.unlink except AttributeError: pass # non-Greenlet greenlet, like MAIN else: unlink(self._discard) def start(self, greenlet): """ Add the **unstarted** *greenlet* to the collection of greenlets this group is monitoring, and then start it. 
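        For example, a minimal sketch (the ``worker`` function is an
        illustrative assumption, not part of this API)::

            import gevent
            from gevent.pool import Group

            def worker(n):
                gevent.sleep(0)
                return n * 2

            group = Group()
            g = gevent.Greenlet(worker, 21)  # constructed, but not yet started
            group.start(g)                   # tracked first, then started
            group.join()
            assert g.value == 42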
""" self.add(greenlet) greenlet.start() def spawn(self, *args, **kwargs): # pylint:disable=arguments-differ """ Begin a new greenlet with the given arguments (which are passed to the greenlet constructor) and add it to the collection of greenlets this group is monitoring. :return: The newly started greenlet. """ greenlet = self.greenlet_class(*args, **kwargs) self.start(greenlet) return greenlet # def close(self): # """Prevents any more tasks from being submitted to the pool""" # self.add = RaiseException("This %s has been closed" % self.__class__.__name__) def join(self, timeout=None, raise_error=False): """ Wait for this group to become empty *at least once*. If there are no greenlets in the group, returns immediately. .. note:: By the time the waiting code (the caller of this method) regains control, a greenlet may have been added to this group, and so this object may no longer be empty. (That is, ``group.join(); assert len(group) == 0`` is not guaranteed to hold.) This method only guarantees that the group reached a ``len`` of 0 at some point. :keyword bool raise_error: If True (*not* the default), if any greenlet that finished while the join was in progress raised an exception, that exception will be raised to the caller of this method. If multiple greenlets raised exceptions, which one gets re-raised is not determined. Only greenlets currently in the group when this method is called are guaranteed to be checked for exceptions. :return bool: A value indicating whether this group became empty. If the timeout is specified and the group did not become empty during that timeout, then this will be a false value. Otherwise it will be a true value. .. versionchanged:: 1.2a1 Add the return value. """ greenlets = list(self.greenlets) if raise_error else () result = self._empty_event.wait(timeout=timeout) for greenlet in greenlets: if greenlet.exception is not None: if hasattr(greenlet, '_raise_exception'): greenlet._raise_exception() raise greenlet.exception return result def kill(self, exception=GreenletExit, block=True, timeout=None): """ Kill all greenlets being tracked by this group. """ timer = Timeout._start_new_or_dummy(timeout) try: while self.greenlets: for greenlet in list(self.greenlets): if greenlet in self.dying: continue try: kill = greenlet.kill except AttributeError: _kill(greenlet, exception) else: kill(exception, block=False) self.dying.add(greenlet) if not block: break joinall(self.greenlets) except Timeout as ex: if ex is not timer: raise finally: timer.cancel() def killone(self, greenlet, exception=GreenletExit, block=True, timeout=None): """ If the given *greenlet* is running and being tracked by this group, kill it. """ if greenlet not in self.dying and greenlet in self.greenlets: greenlet.kill(exception, block=False) self.dying.add(greenlet) if block: greenlet.join(timeout) def full(self): """ Return a value indicating whether this group can track more greenlets. In this implementation, because there are no limits on the number of tracked greenlets, this will always return a ``False`` value. """ return False def wait_available(self, timeout=None): """ Block until it is possible to :meth:`spawn` a new greenlet. In this implementation, because there are no limits on the number of tracked greenlets, this will always return immediately. """ # MappingMixin methods def _apply_immediately(self): # If apply() is called from one of our own # worker greenlets, don't spawn a new one---if we're full, that # could deadlock. 
return getcurrent() in self def _apply_async_cb_spawn(self, callback, result): Greenlet.spawn(callback, result) def _apply_async_use_greenlet(self): # cannot call self.spawn() because it will block, so # use a fresh, untracked greenlet that when run will # (indirectly) call self.spawn() for us. return self.full() class PoolFull(QueueFull): """ Raised when a Pool is full and an attempt was made to add a new greenlet to it in non-blocking mode. """ class Pool(Group): def __init__(self, size=None, greenlet_class=None): """ Create a new pool. A pool is like a group, but the maximum number of members is governed by the *size* parameter. :keyword int size: If given, this non-negative integer is the maximum count of active greenlets that will be allowed in this pool. A few values have special significance: * `None` (the default) places no limit on the number of greenlets. This is useful when you want to track, but not limit, greenlets. In general, a :class:`Group` may be a more efficient way to achieve the same effect, but some things need the additional abilities of this class (one example being the *spawn* parameter of :class:`gevent.baseserver.BaseServer` and its subclass :class:`gevent.pywsgi.WSGIServer`). * ``0`` creates a pool that can never have any active greenlets. Attempting to spawn in this pool will block forever. This is only useful if an application uses :meth:`wait_available` with a timeout and checks :meth:`free_count` before attempting to spawn. """ if size is not None and size < 0: raise ValueError('size must not be negative: %r' % (size, )) Group.__init__(self) self.size = size if greenlet_class is not None: self.greenlet_class = greenlet_class if size is None: factory = DummySemaphore else: factory = Semaphore self._semaphore = factory(size) def wait_available(self, timeout=None): """ Wait until it's possible to spawn a greenlet in this pool. :param float timeout: If given, only wait the specified number of seconds. .. warning:: If the pool was initialized with a size of 0, this method will block forever unless a timeout is given. :return: A number indicating how many new greenlets can be put into the pool without blocking. .. versionchanged:: 1.1a3 Added the ``timeout`` parameter. """ return self._semaphore.wait(timeout=timeout) def full(self): """ Return a boolean indicating whether this pool is full, e.g. if :meth:`add` would block. :return: False if there is room for new members, True if there isn't. """ return self.free_count() <= 0 def free_count(self): """ Return a number indicating *approximately* how many more members can be added to this pool. """ if self.size is None: return 1 return max(0, self.size - len(self)) def start(self, greenlet, *args, **kwargs): # pylint:disable=arguments-differ """ start(greenlet, blocking=True, timeout=None) -> None Add the **unstarted** *greenlet* to the collection of greenlets this group is monitoring and then start it. Parameters are as for :meth:`add`. """ self.add(greenlet, *args, **kwargs) greenlet.start() def add(self, greenlet, blocking=True, timeout=None): # pylint:disable=arguments-differ """ Begin tracking the given **unstarted** greenlet, possibly blocking until space is available. Usually you should call :meth:`start` to track and start the greenlet instead of using this lower-level method, or :meth:`spawn` to also create the greenlet. :keyword bool blocking: If True (the default), this function will block until the pool has space or a timeout occurs. 
If False, this function will immediately raise a Timeout if the pool is currently full. :keyword float timeout: The maximum number of seconds this method will block, if ``blocking`` is True. (Ignored if ``blocking`` is False.) :raises PoolFull: if either ``blocking`` is False and the pool was full, or if ``blocking`` is True and ``timeout`` was exceeded. .. caution:: If the *greenlet* has already been started and *blocking* is true, then the greenlet may run to completion while the current greenlet blocks waiting to track it. This would enable higher concurrency than desired. .. seealso:: :meth:`Group.add` .. versionchanged:: 1.3.0 Added the ``blocking`` and ``timeout`` parameters. """ if not self._semaphore.acquire(blocking=blocking, timeout=timeout): # We failed to acquire the semaphore. # If blocking was True, then there was a timeout. If blocking was # False, then there was no capacity. Either way, raise PoolFull. raise PoolFull() try: Group.add(self, greenlet) except: self._semaphore.release() raise def _discard(self, greenlet): Group._discard(self, greenlet) self._semaphore.release() class pass_value(object): __slots__ = ['callback'] def __init__(self, callback): self.callback = callback def __call__(self, source): if source.successful(): self.callback(source.value) def __hash__(self): return hash(self.callback) def __eq__(self, other): return self.callback == getattr(other, 'callback', other) def __str__(self): return str(self.callback) def __repr__(self): return repr(self.callback) def __getattr__(self, item): assert item != 'callback' return getattr(self.callback, item) gevent-24.11.1/src/gevent/pywsgi.py000066400000000000000000002066511471441230600171370ustar00rootroot00000000000000# Copyright (c) 2005-2009, eventlet contributors # Copyright (c) 2009-2018, gevent contributors """ A pure-Python, gevent-friendly WSGI server implementing HTTP/1.1. The server is provided in :class:`WSGIServer`, but most of the actual WSGI work is handled by :class:`WSGIHandler` --- a new instance is created for each request. The server can be customized to use different subclasses of :class:`WSGIHandler`. .. important:: This server is intended primarily for development and testing, and secondarily for other "safe" scenarios where it will not be exposed to potentially malicious input. The code has not been security audited, and is not intended for direct exposure to the public Internet. For production usage on the Internet, either choose a production-strength server such as gunicorn, or put a reverse proxy between gevent and the Internet. .. versionchanged:: 23.9.0 Complies more closely with the HTTP specification for chunked transfer encoding. In particular, we are much stricter about trailers, and trailers that are invalid (too long or featuring disallowed characters) forcibly close the connection to the client *after* the results have been sent. Trailers otherwise continue to be ignored and are not available to the WSGI application. """ from __future__ import absolute_import # FIXME: Can we refactor to make smallor? 
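# A minimal usage sketch (illustrative only; the trivial application below is
# an assumption, not something this module provides): wrap any WSGI callable
# in a WSGIServer and serve it until interrupted.
#
#     from gevent.pywsgi import WSGIServer
#
#     def application(environ, start_response):
#         start_response('200 OK', [('Content-Type', 'text/plain')])
#         return [b'hello world\n']
#
#     WSGIServer(('127.0.0.1', 8080), application).serve_forever()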
# pylint:disable=too-many-lines import errno from io import BytesIO import string import sys import time import traceback from datetime import datetime from urllib.parse import unquote from gevent import socket import gevent from gevent.server import StreamServer from gevent.hub import GreenletExit from gevent._compat import reraise from functools import partial unquote_latin1 = partial(unquote, encoding='latin-1') _no_undoc_members = True # Don't put undocumented things into sphinx __all__ = [ 'WSGIServer', 'WSGIHandler', 'LoggingLogAdapter', 'Environ', 'SecureEnviron', 'WSGISecureEnviron', ] MAX_REQUEST_LINE = 8192 # Weekday and month names for HTTP date/time formatting; always English! _WEEKDAYNAME = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun") _MONTHNAME = (None, # Dummy so we can use 1-based month numbers "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec") # The contents of the "HEX" grammar rule for HTTP, upper and lowercase A-F plus digits, # in byte form for comparing to the network. _HEX = string.hexdigits.encode('ascii') # The characters allowed in "token" rules. # token = 1*tchar # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" # / DIGIT / ALPHA # ; any VCHAR, except delimiters # ALPHA = %x41-5A / %x61-7A ; A-Z / a-z _ALLOWED_TOKEN_CHARS = frozenset( # Remember we have to be careful because bytestrings # inexplicably iterate as integers, which are not equal to bytes. # explicit chars then DIGIT (c.encode('ascii') for c in "!#$%&'*+-.^_`|~0123456789") # Then we add ALPHA ) | {c.encode('ascii') for c in string.ascii_letters} assert b'A' in _ALLOWED_TOKEN_CHARS # Errors _ERRORS = {} _INTERNAL_ERROR_STATUS = '500 Internal Server Error' _INTERNAL_ERROR_BODY = b'Internal Server Error' _INTERNAL_ERROR_HEADERS = ( ('Content-Type', 'text/plain'), ('Connection', 'close'), ('Content-Length', str(len(_INTERNAL_ERROR_BODY))) ) _ERRORS[500] = (_INTERNAL_ERROR_STATUS, _INTERNAL_ERROR_HEADERS, _INTERNAL_ERROR_BODY) _BAD_REQUEST_STATUS = '400 Bad Request' _BAD_REQUEST_BODY = '' _BAD_REQUEST_HEADERS = ( ('Content-Type', 'text/plain'), ('Connection', 'close'), ('Content-Length', str(len(_BAD_REQUEST_BODY))) ) _ERRORS[400] = (_BAD_REQUEST_STATUS, _BAD_REQUEST_HEADERS, _BAD_REQUEST_BODY) _REQUEST_TOO_LONG_RESPONSE = b"HTTP/1.1 414 Request URI Too Long\r\nConnection: close\r\nContent-length: 0\r\n\r\n" _BAD_REQUEST_RESPONSE = b"HTTP/1.1 400 Bad Request\r\nConnection: close\r\nContent-length: 0\r\n\r\n" _CONTINUE_RESPONSE = b"HTTP/1.1 100 Continue\r\n\r\n" def format_date_time(timestamp): # Return a byte-string of the date and time in HTTP format # .. versionchanged:: 1.1b5 # Return a byte string, not a native string year, month, day, hh, mm, ss, wd, _y, _z = time.gmtime(timestamp) value = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % (_WEEKDAYNAME[wd], day, _MONTHNAME[month], year, hh, mm, ss) value = value.encode("latin-1") return value class _InvalidClientInput(IOError): # Internal exception raised by Input indicating that the client # sent invalid data at the lowest level of the stream. The result # *should* be a HTTP 400 error. pass class _InvalidClientRequest(ValueError): # Internal exception raised by WSGIHandler.read_request indicating # that the client sent an HTTP request that cannot be parsed # (e.g., invalid grammar). The result *should* be an HTTP 400 # error. It must have exactly one argument, the fully formatted # error string. 
def __init__(self, message): ValueError.__init__(self, message) self.formatted_message = message class Input(object): __slots__ = ('rfile', 'content_length', 'socket', 'position', 'chunked_input', 'chunk_length', '_chunked_input_error') def __init__(self, rfile, content_length, socket=None, chunked_input=False): # pylint:disable=redefined-outer-name self.rfile = rfile self.content_length = content_length self.socket = socket self.position = 0 self.chunked_input = chunked_input self.chunk_length = -1 self._chunked_input_error = False def _discard(self): if self._chunked_input_error: # We are in an unknown state, so we can't necessarily discard # the body (e.g., if the client keeps the socket open, we could hang # here forever). # In this case, we've raised an exception and the user of this object # is going to close the socket, so we don't have to discard return if self.socket is None and (self.position < (self.content_length or 0) or self.chunked_input): # ## Read and discard body while 1: d = self.read(16384) if not d: break def _send_100_continue(self): if self.socket is not None: self.socket.sendall(_CONTINUE_RESPONSE) self.socket = None def _do_read(self, length=None, use_readline=False): if use_readline: reader = self.rfile.readline else: reader = self.rfile.read content_length = self.content_length if content_length is None: # Either Content-Length or "Transfer-Encoding: chunked" must be present in a request with a body # if it was chunked, then this function would have not been called return b'' self._send_100_continue() left = content_length - self.position if length is None: length = left elif length > left: length = left if not length: return b'' # On Python 2, self.rfile is usually socket.makefile(), which # uses cStringIO.StringIO. If *length* is greater than the C # sizeof(int) (typically 32 bits signed), parsing the argument to # readline raises OverflowError. StringIO.read(), OTOH, uses # PySize_t, typically a long (64 bits). In a bare readline() # case, because the header lines we're trying to read with # readline are typically expected to be small, we can correct # that failure by simply doing a smaller call to readline and # appending; failures in read we let propagate. try: read = reader(length) except OverflowError: if not use_readline: # Expecting to read more than 64 bits of data. Ouch! raise # We could loop on calls to smaller readline(), appending them # until we actually get a newline. For uses in this module, # we expect the actual length to be small, but WSGI applications # are allowed to pass in an arbitrary length. (This loop isn't optimal, # but even client applications *probably* have short lines.) read = b'' while len(read) < length and not read.endswith(b'\n'): read += reader(MAX_REQUEST_LINE) self.position += len(read) if len(read) < length: if (use_readline and not read.endswith(b"\n")) or not use_readline: raise IOError("unexpected end of file while reading request at position %s" % (self.position,)) return read def __read_chunk_length(self, rfile): # Read and return the next integer chunk length. If no # chunk length can be read, raises _InvalidClientInput. 
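        # For illustration (a sketch of the wire format, not data produced here):
        # a chunked body carrying the five bytes b'hello' arrives as
        # b'5\r\nhello\r\n', followed by the last chunk b'0\r\n' and a final
        # b'\r\n' that terminates the (empty) trailer section.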
# Here's the production for a chunk (actually the whole body): # (https://www.rfc-editor.org/rfc/rfc7230#section-4.1) # chunked-body = *chunk # last-chunk # trailer-part # CRLF # # chunk = chunk-size [ chunk-ext ] CRLF # chunk-data CRLF # chunk-size = 1*HEXDIG # last-chunk = 1*("0") [ chunk-ext ] CRLF # trailer-part = *( header-field CRLF ) # chunk-data = 1*OCTET ; a sequence of chunk-size octets # # chunk-ext = *( ";" chunk-ext-name [ "=" chunk-ext-val ] ) # # chunk-ext-name = token # chunk-ext-val = token / quoted-string # To cope with malicious or broken clients that fail to send # valid chunk lines, the strategy is to read character by # character until we either reach a ; or newline. If at any # time we read a non-HEX digit, we bail. If we hit a ;, # indicating an chunk-extension, we'll read up to the next # MAX_REQUEST_LINE characters ("A server ought to limit the # total length of chunk extensions received") looking for the # CRLF, and if we don't find it, we bail. If we read more than # 16 hex characters, (the number needed to represent a 64-bit # chunk size), we bail (this protects us from a client that # sends an infinite stream of `F`, for example). buf = BytesIO() while 1: char = rfile.read(1) if not char: self._chunked_input_error = True raise _InvalidClientInput("EOF before chunk end reached") if char in ( b'\r', # Beginning EOL b';', # Beginning extension ): break if char not in _HEX: # Invalid data. self._chunked_input_error = True raise _InvalidClientInput("Non-hex data", char) buf.write(char) if buf.tell() > 16: # Too many hex bytes self._chunked_input_error = True raise _InvalidClientInput("Chunk-size too large.") if char == b';': i = 0 while i < MAX_REQUEST_LINE: char = rfile.read(1) if char == b'\r': break i += 1 else: # we read more than MAX_REQUEST_LINE without # hitting CR self._chunked_input_error = True raise _InvalidClientInput("Too large chunk extension") if char == b'\r': # We either got here from the main loop or from the # end of an extension self.__read_chunk_size_crlf(rfile, newline_only=True) result = int(buf.getvalue(), 16) if result == 0: # The only time a chunk size of zero is allowed is the final # chunk. It is either followed by another \r\n, or some trailers # which are then followed by \r\n. while self.__read_chunk_trailer(rfile): pass return result # Trailers have the following production (they are a header-field followed by CRLF) # See above for the definition of "token". # # header-field = field-name ":" OWS field-value OWS # field-name = token # field-value = *( field-content / obs-fold ) # field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ] # field-vchar = VCHAR / obs-text # obs-fold = CRLF 1*( SP / HTAB ) # ; obsolete line folding # ; see Section 3.2.4 def __read_chunk_trailer(self, rfile, ): # With rfile positioned just after a \r\n, read a trailer line. # Return a true value if a non-empty trailer was read, and # return false if an empty trailer was read (meaning the trailers are # done). # If a single line exceeds the MAX_REQUEST_LINE, raise an exception. # If the field-name portion contains invalid characters, raise an exception. i = 0 empty = True seen_field_name = False while i < MAX_REQUEST_LINE: char = rfile.read(1) if char == b'\r': # Either read the next \n or raise an error. self.__read_chunk_size_crlf(rfile, newline_only=True) break # Not a \r, so we are NOT an empty chunk. empty = False if char == b':' and i > 0: # We're ending the field-name part; stop validating characters. # Unless : was the first character... 
seen_field_name = True if not seen_field_name and char not in _ALLOWED_TOKEN_CHARS: raise _InvalidClientInput('Invalid token character: %r' % (char,)) i += 1 else: # We read too much self._chunked_input_error = True raise _InvalidClientInput("Too large chunk trailer") return not empty def __read_chunk_size_crlf(self, rfile, newline_only=False): # Also for safety, correctly verify that we get \r\n when expected. if not newline_only: char = rfile.read(1) if char != b'\r': self._chunked_input_error = True raise _InvalidClientInput("Line didn't end in CRLF: %r" % (char,)) char = rfile.read(1) if char != b'\n': self._chunked_input_error = True raise _InvalidClientInput("Line didn't end in LF: %r" % (char,)) def _chunked_read(self, length=None, use_readline=False): # pylint:disable=too-many-branches rfile = self.rfile self._send_100_continue() if length == 0: return b"" if use_readline: reader = self.rfile.readline else: reader = self.rfile.read response = [] while self.chunk_length != 0: maxreadlen = self.chunk_length - self.position if length is not None and length < maxreadlen: maxreadlen = length if maxreadlen > 0: data = reader(maxreadlen) if not data: self.chunk_length = 0 self._chunked_input_error = True raise IOError("unexpected end of file while parsing chunked data") datalen = len(data) response.append(data) self.position += datalen if self.chunk_length == self.position: self.__read_chunk_size_crlf(rfile) if length is not None: length -= datalen if length == 0: break if use_readline and data[-1] == b"\n"[0]: break else: # We're at the beginning of a chunk, so we need to # determine the next size to read self.chunk_length = self.__read_chunk_length(rfile) self.position = 0 # If chunk_length was 0, we already read any trailers and # validated that we have ended with \r\n\r\n. return b''.join(response) def read(self, length=None): if length is not None and length < 0: length = None if self.chunked_input: return self._chunked_read(length) return self._do_read(length) def readline(self, size=None): if size is not None and size < 0: size = None if self.chunked_input: return self._chunked_read(size, True) return self._do_read(size, use_readline=True) def readlines(self, hint=None): # pylint:disable=unused-argument return list(self) def __iter__(self): return self def next(self): line = self.readline() if not line: raise StopIteration return line __next__ = next try: import mimetools headers_factory = mimetools.Message except ImportError: # adapt Python 3 HTTP headers to old API from http import client # pylint:disable=import-error class OldMessage(client.HTTPMessage): def __init__(self, **kwargs): super(client.HTTPMessage, self).__init__(**kwargs) # pylint:disable=bad-super-call self.status = '' def getheader(self, name, default=None): return self.get(name, default) @property def headers(self): for key, value in self._headers: yield '%s: %s\r\n' % (key, value) @property def typeheader(self): return self.get('content-type') def headers_factory(fp, *args): # pylint:disable=unused-argument try: ret = client.parse_headers(fp, _class=OldMessage) except client.LineTooLong: ret = OldMessage() ret.status = 'Line too long' return ret class WSGIHandler(object): """ Handles HTTP requests from a socket, creates the WSGI environment, and interacts with the WSGI application. This is the default value of :attr:`WSGIServer.handler_class`. This class may be subclassed carefully, and that class set on a :class:`WSGIServer` instance through a keyword argument at construction time. 
Instances are constructed with the same arguments as passed to the server's :meth:`WSGIServer.handle` method followed by the server itself. The application and environment are obtained from the server. """ # pylint:disable=too-many-instance-attributes protocol_version = 'HTTP/1.1' def MessageClass(self, *args): return headers_factory(*args) # Attributes reset at various times for each request; not public # documented. Class attributes to keep the constructor fast # (but not make lint tools complain) status = None # byte string: b'200 OK' _orig_status = None # native string: '200 OK' response_headers = None # list of tuples (b'name', b'value') code = None # Integer parsed from status provided_date = None provided_content_length = None close_connection = False time_start = 0 # time.time() when begin handling request time_finish = 0 # time.time() when done handling request headers_sent = False # Have we already sent headers? response_use_chunked = False # Write with transfer-encoding chunked # Was the connection upgraded? We shouldn't try to chunk writes in that # case. connection_upgraded = False environ = None # Dict from self.get_environ application = None # application callable from self.server.application requestline = None # native str 'GET / HTTP/1.1' response_length = 0 # How much data we sent result = None # The return value of the WSGI application wsgi_input = None # Instance of Input() content_length = 0 # From application-provided headers Incoming # request headers, instance of MessageClass (gunicorn uses hasattr # on this so the default value needs to be compatible with the # API) headers = headers_factory(BytesIO()) request_version = None # str: 'HTTP 1.1' command = None # str: 'GET' path = None # str: '/' def __init__(self, sock, address, server, rfile=None): # Deprecation: The rfile kwarg was introduced in 1.0a1 as part # of a refactoring. It was never documented or used. It is # considered DEPRECATED and may be removed in the future. Its # use is not supported. self.socket = sock self.client_address = address self.server = server if rfile is None: self.rfile = sock.makefile('rb', -1) else: self.rfile = rfile def handle(self): """ The main request handling method, called by the server. This method runs a request handling loop, calling :meth:`handle_one_request` until all requests on the connection have been handled (that is, it implements keep-alive). """ try: while self.socket is not None: self.time_start = time.time() self.time_finish = 0 result = self.handle_one_request() if result is None: break if result is True: continue self.status, response_body = result # pylint:disable=unpacking-non-sequence self.socket.sendall(response_body) if self.time_finish == 0: self.time_finish = time.time() self.log_request() break finally: if self.socket is not None: _sock = getattr(self.socket, '_sock', None) # Python 3 try: # read out request data to prevent error: [Errno 104] Connection reset by peer if _sock: try: # socket.recv would hang _sock.recv(16384) finally: _sock.close() self.socket.close() except socket.error: pass self.__dict__.pop('socket', None) self.__dict__.pop('rfile', None) self.__dict__.pop('wsgi_input', None) def _check_http_version(self): version_str = self.request_version if not version_str.startswith("HTTP/"): return False version = tuple(int(x) for x in version_str[5:].split(".")) # "HTTP/" if version[1] < 0 or version < (0, 9) or version >= (2, 0): return False return True def read_request(self, raw_requestline): """ Parse the incoming request. 
Parses various headers into ``self.headers`` using :attr:`MessageClass`. Other attributes that are set upon a successful return of this method include ``self.content_length`` and ``self.close_connection``. :param str raw_requestline: A native :class:`str` representing the request line. A processed version of this will be stored into ``self.requestline``. :raises ValueError: If the request is invalid. This error will not be logged as a traceback (because it's a client issue, not a server problem). :return: A boolean value indicating whether the request was successfully parsed. This method should either return a true value or have raised a ValueError with details about the parsing error. .. versionchanged:: 1.1b6 Raise the previously documented :exc:`ValueError` in more cases instead of returning a false value; this allows subclasses more opportunity to customize behaviour. """ # pylint:disable=too-many-branches self.requestline = raw_requestline.rstrip() words = self.requestline.split() if len(words) == 3: self.command, self.path, self.request_version = words if not self._check_http_version(): raise _InvalidClientRequest('Invalid http version: %r' % (raw_requestline,)) elif len(words) == 2: self.command, self.path = words if self.command != "GET": raise _InvalidClientRequest('Expected GET method; Got command=%r; path=%r; raw=%r' % ( self.command, self.path, raw_requestline,)) self.request_version = "HTTP/0.9" # QQQ I'm pretty sure we can drop support for HTTP/0.9 else: raise _InvalidClientRequest('Invalid HTTP method: %r' % (raw_requestline,)) self.headers = self.MessageClass(self.rfile, 0) if self.headers.status: raise _InvalidClientRequest('Invalid headers status: %r' % (self.headers.status,)) if self.headers.get("transfer-encoding", "").lower() == "chunked": try: del self.headers["content-length"] except KeyError: pass content_length = self.headers.get("content-length") if content_length is not None: content_length = int(content_length) if content_length < 0: raise _InvalidClientRequest('Invalid Content-Length: %r' % (content_length,)) if content_length and self.command in ('HEAD', ): raise _InvalidClientRequest('Unexpected Content-Length') self.content_length = content_length if self.request_version == "HTTP/1.1": conntype = self.headers.get("Connection", "").lower() self.close_connection = (conntype == 'close') # pylint:disable=superfluous-parens elif self.request_version == 'HTTP/1.0': conntype = self.headers.get("Connection", "close").lower() self.close_connection = (conntype != 'keep-alive') # pylint:disable=superfluous-parens else: # XXX: HTTP 0.9. We should drop support self.close_connection = True return True _print_unexpected_exc = staticmethod(traceback.print_exc) def log_error(self, msg, *args): if not args: # Already fully formatted, no need to do it again; msg # might contain % chars that would lead to a formatting # error. message = msg else: try: message = msg % args except Exception: # pylint:disable=broad-except self._print_unexpected_exc() message = '%r %r' % (msg, args) try: message = '%s: %s' % (self.socket, message) except Exception: # pylint:disable=broad-except pass try: self.server.error_log.write(message + '\n') except Exception: # pylint:disable=broad-except self._print_unexpected_exc() def read_requestline(self): """ Read and return the HTTP request line. Under both Python 2 and 3, this should return the native ``str`` type; under Python 3, this probably means the bytes read from the network need to be decoded (using the ISO-8859-1 charset, aka latin-1). 
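
        An illustrative sketch of what this amounts to (the default
        implementation does essentially this)::

            line = self.rfile.readline(MAX_REQUEST_LINE)  # e.g. b'GET / HTTP/1.1' + CRLF
            return line.decode('latin-1')                 # native str 'GET / HTTP/1.1' + CRLF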
""" line = self.rfile.readline(MAX_REQUEST_LINE) line = line.decode('latin-1') return line def handle_one_request(self): """ Handles one HTTP request using ``self.socket`` and ``self.rfile``. Each invocation of this method will do several things, including (but not limited to): - Read the request line using :meth:`read_requestline`; - Read the rest of the request, including headers, with :meth:`read_request`; - Construct a new WSGI environment in ``self.environ`` using :meth:`get_environ`; - Store the application in ``self.application``, retrieving it from the server; - Handle the remainder of the request, including invoking the application, with :meth:`handle_one_response` There are several possible return values to indicate the state of the client connection: - ``None`` The client connection is already closed or should be closed because the WSGI application or client set the ``Connection: close`` header. The request handling loop should terminate and perform cleanup steps. - (status, body) An HTTP status and body tuple. The request was in error, as detailed by the status and body. The request handling loop should terminate, close the connection, and perform cleanup steps. Note that the ``body`` is the complete contents to send to the client, including all headers and the initial status line. - ``True`` The literal ``True`` value. The request was successfully handled and the response sent to the client by :meth:`handle_one_response`. The connection remains open to process more requests and the connection handling loop should call this method again. This is the typical return value. .. seealso:: :meth:`handle` .. versionchanged:: 1.1b6 Funnel exceptions having to do with invalid HTTP requests through :meth:`_handle_client_error` to allow subclasses to customize. Note that this is experimental and may change in the future. """ # pylint:disable=too-many-return-statements if self.rfile.closed: return try: self.requestline = self.read_requestline() # Account for old subclasses that haven't done this if isinstance(self.requestline, bytes): self.requestline = self.requestline.decode('latin-1') except socket.error: # "Connection reset by peer" or other socket errors aren't interesting here return if not self.requestline: return self.response_length = 0 if len(self.requestline) >= MAX_REQUEST_LINE: return ('414', _REQUEST_TOO_LONG_RESPONSE) try: # for compatibility with older versions of pywsgi, we pass self.requestline as an argument there # NOTE: read_request is supposed to raise ValueError on invalid input; allow old # subclasses that return a False value instead. # NOTE: This can mutate the value of self.headers, so self.get_environ() must not be # called until AFTER this call is done. if not self.read_request(self.requestline): return ('400', _BAD_REQUEST_RESPONSE) except Exception as ex: # pylint:disable=broad-except # Notice we don't use self.handle_error because it reports # a 500 error to the client, and this is almost certainly # a client error. # Provide a hook for subclasses. 
return self._handle_client_error(ex) self.environ = self.get_environ() self.application = self.server.application self.handle_one_response() if self.close_connection: return if self.rfile.closed: return return True # read more requests def _connection_upgrade_requested(self): if self.headers.get('Connection', '').lower() == 'upgrade': return True if self.headers.get('Upgrade', '').lower() == 'websocket': return True return False def finalize_headers(self): if self.provided_date is None: self.response_headers.append((b'Date', format_date_time(time.time()))) self.connection_upgraded = self.code == 101 if self.code not in (304, 204): # the reply will include message-body; make sure we have either Content-Length or chunked if self.provided_content_length is None: if hasattr(self.result, '__len__'): total_len = sum(len(chunk) for chunk in self.result) total_len_str = str(total_len) total_len_str = total_len_str.encode("latin-1") self.response_headers.append((b'Content-Length', total_len_str)) else: self.response_use_chunked = ( not self.connection_upgraded and self.request_version != 'HTTP/1.0' ) if self.response_use_chunked: self.response_headers.append((b'Transfer-Encoding', b'chunked')) def _sendall(self, data): try: self.socket.sendall(data) except socket.error as ex: self.status = 'socket error: %s' % ex if self.code > 0: self.code = -self.code raise self.response_length += len(data) def _write(self, data, _bytearray=bytearray): if not data: # The application/middleware are allowed to yield # empty bytestrings. return if self.response_use_chunked: # Write the chunked encoding header header_str = b'%x\r\n' % len(data) towrite = _bytearray(header_str) # data towrite += data # trailer towrite += b'\r\n' self._sendall(towrite) else: self._sendall(data) ApplicationError = AssertionError def write(self, data): # The write() callable we return from start_response. # https://www.python.org/dev/peps/pep-3333/#the-write-callable # Supposed to do pretty much the same thing as yielding values # from the application's return. if self.code in (304, 204) and data: raise self.ApplicationError('The %s response must have no body' % self.code) if self.headers_sent: self._write(data) else: if not self.status: raise self.ApplicationError("The application did not call start_response()") self._write_with_headers(data) def _write_with_headers(self, data): self.headers_sent = True self.finalize_headers() # self.response_headers and self.status are already in latin-1, as encoded by self.start_response towrite = bytearray(b'HTTP/1.1 ') towrite += self.status towrite += b'\r\n' for header, value in self.response_headers: towrite += header towrite += b': ' towrite += value towrite += b"\r\n" towrite += b'\r\n' self._sendall(towrite) # No need to copy the data into towrite; we may make an extra syscall # but the copy time could be substantial too, and it reduces the chances # of sendall being able to send everything in one go self._write(data) def start_response(self, status, headers, exc_info=None): """ .. versionchanged:: 1.2a1 Avoid HTTP header injection by raising a :exc:`ValueError` if *status* or any *header* name or value contains a carriage return or newline. .. versionchanged:: 1.1b5 Pro-actively handle checking the encoding of the status line and headers during this method. On Python 2, avoid some extra encodings. 
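
        For reference, a typical application-side call looks like this
        (``app`` is only an illustrative stand-in for a real application)::

            def app(environ, start_response):
                start_response('200 OK', [('Content-Type', 'text/plain')])
                return [b'hello world']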
""" # pylint:disable=too-many-branches,too-many-statements if exc_info: try: if self.headers_sent: # Re-raise original exception if headers sent reraise(*exc_info) finally: # Avoid dangling circular ref exc_info = None # Pep 3333, "The start_response callable": # https://www.python.org/dev/peps/pep-3333/#the-start-response-callable # "Servers should check for errors in the headers at the time # start_response is called, so that an error can be raised # while the application is still running." Here, we check the encoding. # This aids debugging: headers especially are generated programmatically # and an encoding error in a loop or list comprehension yields an opaque # UnicodeError without any clue which header was wrong. # Note that this results in copying the header list at this point, not modifying it, # although we are allowed to do so if needed. This slightly increases memory usage. # We also check for HTTP Response Splitting vulnerabilities response_headers = [] header = None value = None try: for header, value in headers: if not isinstance(header, str): raise UnicodeError("The header must be a native string", header, value) if not isinstance(value, str): raise UnicodeError("The value must be a native string", header, value) if '\r' in header or '\n' in header: raise ValueError('carriage return or newline in header name', header) if '\r' in value or '\n' in value: raise ValueError('carriage return or newline in header value', value) # Either we're on Python 2, in which case bytes is correct, or # we're on Python 3 and the user screwed up (because it should be a native # string). In either case, make sure that this is latin-1 compatible. Under # Python 2, bytes.encode() will take a round-trip through the system encoding, # which may be ascii, which is not really what we want. However, the latin-1 encoding # can encode everything except control characters and the block from 0x7F to 0x9F, so # explicitly round-tripping bytes through the encoding is unlikely to be of much # benefit, so we go for speed (the WSGI spec specifically calls out allowing the range # from 0x00 to 0xFF, although the HTTP spec forbids the control characters). # Note: Some Python 2 implementations, like Jython, may allow non-octet (above 255) values # in their str implementation; this is mentioned in the WSGI spec, but we don't # run on any platform like that so we can assume that a str value is pure bytes. response_headers.append((header.encode("latin-1"), value.encode("latin-1"))) except UnicodeEncodeError: # If we get here, we're guaranteed to have a header and value raise UnicodeError("Non-latin1 header", repr(header), repr(value)) # Same as above if not isinstance(status, str): raise UnicodeError("The status string must be a native string") if '\r' in status or '\n' in status: raise ValueError("carriage return or newline in status", status) # don't assign to anything until the validation is complete, including parsing the # code code = int(status.split(' ', 1)[0]) self.status = status.encode("latin-1") self._orig_status = status # Preserve the native string for logging self.response_headers = response_headers self.code = code provided_connection = None # Did the wsgi app give us a Connection header? 
self.provided_date = None self.provided_content_length = None for header, value in headers: header = header.lower() if header == 'connection': provided_connection = value elif header == 'date': self.provided_date = value elif header == 'content-length': self.provided_content_length = value if self.request_version == 'HTTP/1.0' and provided_connection is None: conntype = b'close' if self.close_connection else b'keep-alive' response_headers.append((b'Connection', conntype)) elif provided_connection == 'close': self.close_connection = True if self.code in (304, 204): if self.provided_content_length is not None and self.provided_content_length != '0': msg = 'Invalid Content-Length for %s response: %r (must be absent or zero)' % (self.code, self.provided_content_length) msg = msg.encode('latin-1') raise self.ApplicationError(msg) return self.write def log_request(self): self.server.log.write(self.format_request() + '\n') def format_request(self): now = datetime.now().replace(microsecond=0) length = self.response_length or '-' if self.time_finish: delta = '%.6f' % (self.time_finish - self.time_start) else: delta = '-' client_address = self.client_address[0] if isinstance(self.client_address, tuple) else self.client_address return '%s - - [%s] "%s" %s %s %s' % ( client_address or '-', now, self.requestline or '', # Use the native string version of the status, saved so we don't have to # decode. But fallback to the encoded 'status' in case of subclasses # (Is that really necessary? At least there's no overhead.) (self._orig_status or self.status or '000').split()[0], length, delta) def process_result(self): for data in self.result: if data: self.write(data) if self.status and not self.headers_sent: # In other words, the application returned an empty # result iterable (and did not use the write callable) # Trigger the flush of the headers. self.write(b'') if self.response_use_chunked: self._sendall(b'0\r\n\r\n') def run_application(self): assert self.result is None try: self.result = self.application(self.environ, self.start_response) self.process_result() finally: close = getattr(self.result, 'close', None) try: if close is not None: close() finally: # Discard the result. If it's a generator this can # free a lot of hidden resources (if we failed to iterate # all the way through it---the frames are automatically # cleaned up when StopIteration is raised); but other cases # could still free up resources sooner than otherwise. close = None self.result = None #: These errors are silently ignored by :meth:`handle_one_response` to avoid producing #: excess log entries on normal operating conditions. They indicate #: a remote client has disconnected and there is little or nothing #: this process can be expected to do about it. You may change this #: value in a subclass. #: #: The default value includes :data:`errno.EPIPE` and :data:`errno.ECONNRESET`. #: On Windows this also includes :data:`errno.WSAECONNABORTED`. #: #: This is a provisional API, subject to change. See :pr:`377`, :pr:`999` #: and :issue:`136`. #: #: .. versionadded:: 1.3 ignored_socket_errors = (errno.EPIPE, errno.ECONNRESET) try: ignored_socket_errors += (errno.WSAECONNABORTED,) except AttributeError: pass # Not windows def handle_one_response(self): """ Invoke the application to produce one response. This is called by :meth:`handle_one_request` after all the state for the request has been established. It is responsible for error handling. 
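
        As a hedged example of customizing the error handling performed here,
        a subclass might extend :attr:`ignored_socket_errors` (the use of
        ``errno.ETIMEDOUT`` below is purely illustrative)::

            import errno

            class QuietHandler(WSGIHandler):
                ignored_socket_errors = WSGIHandler.ignored_socket_errors + (errno.ETIMEDOUT,)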
""" self.time_start = time.time() self.status = None self.headers_sent = False self.result = None self.response_use_chunked = False self.connection_upgraded = False self.response_length = 0 try: try: self.run_application() finally: try: self.wsgi_input._discard() except _InvalidClientInput: # This one is deliberately raised to the outer # scope, because, with the incoming stream in some bad state, # we can't be sure we can synchronize and properly parse the next # request. raise except socket.error: # Don't let socket exceptions during discarding # input override any exception that may have been # raised by the application, such as our own _InvalidClientInput. # In the general case, these aren't even worth logging (see the comment # just below) pass except _InvalidClientInput as ex: # DO log this one because: # - Some of the data may have been read and acted on by the # application; # - The response may or may not have been sent; # - It's likely that the client is bad, or malicious, and # users might wish to take steps to block the client. self._handle_client_error(ex) self.close_connection = True self._send_error_response_if_possible(400) except socket.error as ex: if ex.args[0] in self.ignored_socket_errors: # See description of self.ignored_socket_errors. self.close_connection = True else: self.handle_error(*sys.exc_info()) except: # pylint:disable=bare-except self.handle_error(*sys.exc_info()) finally: self.time_finish = time.time() self.log_request() def _send_error_response_if_possible(self, error_code): if self.response_length: self.close_connection = True else: status, headers, body = _ERRORS[error_code] try: self.start_response(status, headers[:]) self.write(body) except socket.error: self.close_connection = True def _log_error(self, t, v, tb): # TODO: Shouldn't we dump this to wsgi.errors? If we did that now, it would # wind up getting logged twice if not issubclass(t, GreenletExit): context = self.environ if not isinstance(context, self.server.secure_environ_class): context = self.server.secure_environ_class(context) self.server.loop.handle_error(context, t, v, tb) def handle_error(self, t, v, tb): # Called for internal, unexpected errors, NOT invalid client input self._log_error(t, v, tb) t = v = tb = None self._send_error_response_if_possible(500) def _handle_client_error(self, ex): # Called for invalid client input # Returns the appropriate error response. if not isinstance(ex, (ValueError, _InvalidClientInput)): # XXX: Why not self._log_error to send it through the loop's # handle_error method? # _InvalidClientRequest is a ValueError; _InvalidClientInput is an IOError. traceback.print_exc() if isinstance(ex, _InvalidClientRequest): # No formatting needed, that's already been handled. In fact, because the # formatted message contains user input, it might have a % in it, and attempting # to format that with no arguments would be an error. # However, the error messages do not include the requesting IP # necessarily, so we do add that. 
self.log_error('(from %s) %s', self.client_address, ex.formatted_message) else: self.log_error('Invalid request (from %s): %s', self.client_address, str(ex) or ex.__class__.__name__) return ('400', _BAD_REQUEST_RESPONSE) def _headers(self): key = None value = None IGNORED_KEYS = (None, 'CONTENT_TYPE', 'CONTENT_LENGTH') for header in self.headers.headers: if key is not None and header[:1] in " \t": value += header continue if key not in IGNORED_KEYS: yield 'HTTP_' + key, value.strip() key, value = header.split(':', 1) if '_' in key: # strip incoming bad veaders key = None else: key = key.replace('-', '_').upper() if key not in IGNORED_KEYS: yield 'HTTP_' + key, value.strip() def get_environ(self): """ Construct and return a new WSGI environment dictionary for a specific request. This should begin with asking the server for the base environment using :meth:`WSGIServer.get_environ`, and then proceed to add the request specific values. By the time this method is invoked the request line and request shall have been parsed and ``self.headers`` shall be populated. """ env = self.server.get_environ() env['REQUEST_METHOD'] = self.command # SCRIPT_NAME is explicitly implementation defined. Using an # empty value for SCRIPT_NAME is both explicitly allowed by # both the CGI standard and WSGI PEPs, and also the thing that # makes the most sense from a generic server perspective (we # have no hierarchy or understanding of URLs or files, just a # single application to call. The empty string represents the # application root, which is what we have). Different WSGI # implementations handle this very differently, so portable # applications that rely on SCRIPT_NAME will have to use a # WSGI middleware to set it to a defined value, or otherwise # rely on server-specific mechanisms (e.g, on waitress, use # ``--url-prefix``, in gunicorn set the ``SCRIPT_NAME`` header # or process environment variable, in gevent subclass # WSGIHandler.) # # See https://github.com/gevent/gevent/issues/1667 for discussion. env['SCRIPT_NAME'] = '' path, query = self.path.split('?', 1) if '?' in self.path else (self.path, '') # Note that self.path contains the original str object; if it contains # encoded escapes, it will NOT match PATH_INFO. env['PATH_INFO'] = unquote_latin1(path) env['QUERY_STRING'] = query if self.headers.typeheader is not None: env['CONTENT_TYPE'] = self.headers.typeheader length = self.headers.getheader('content-length') if length: env['CONTENT_LENGTH'] = length env['SERVER_PROTOCOL'] = self.request_version client_address = self.client_address if isinstance(client_address, tuple): env['REMOTE_ADDR'] = str(client_address[0]) env['REMOTE_PORT'] = str(client_address[1]) for key, value in self._headers(): if key in env: if 'COOKIE' in key: env[key] += '; ' + value else: env[key] += ',' + value else: env[key] = value sock = self.socket if env.get('HTTP_EXPECT') == '100-continue' else None chunked = env.get('HTTP_TRANSFER_ENCODING', '').lower() == 'chunked' # Input refuses to read if the data isn't chunked, and there is no content_length # provided. For 'Upgrade: Websocket' requests, neither of those things is true. handling_reads = not self._connection_upgrade_requested() self.wsgi_input = Input(self.rfile, self.content_length, socket=sock, chunked_input=chunked) env['wsgi.input'] = self.wsgi_input if handling_reads else self.rfile # This is a non-standard flag indicating that our input stream is # self-terminated (returns EOF when consumed). 
# See https://github.com/gevent/gevent/issues/1308 env['wsgi.input_terminated'] = handling_reads return env class _NoopLog(object): # Does nothing; implements just enough file-like methods # to pass the WSGI validator def write(self, *args, **kwargs): # pylint:disable=unused-argument return def flush(self): pass def writelines(self, *args, **kwargs): pass class LoggingLogAdapter(object): """ An adapter for :class:`logging.Logger` instances to let them be used with :class:`WSGIServer`. .. warning:: Unless the entire process is monkey-patched at a very early part of the lifecycle (before logging is configured), loggers are likely to not be gevent-cooperative. For example, the socket and syslog handlers use the socket module in a way that can block, and most handlers acquire threading locks. .. warning:: It *may* be possible for the logging functions to be called in the :class:`gevent.Hub` greenlet. Code running in the hub greenlet cannot use any gevent blocking functions without triggering a ``LoopExit``. .. versionadded:: 1.1a3 .. versionchanged:: 1.1b6 Attributes not present on this object are proxied to the underlying logger instance. This permits using custom :class:`~logging.Logger` subclasses (or indeed, even duck-typed objects). .. versionchanged:: 1.1 Strip trailing newline characters on the message passed to :meth:`write` because log handlers will usually add one themselves. """ # gevent avoids importing and using logging because importing it and # creating loggers creates native locks unless monkey-patched. __slots__ = ('_logger', '_level') def __init__(self, logger, level=20): """ Write information to the *logger* at the given *level* (default to INFO). """ self._logger = logger self._level = level def write(self, msg): if msg and msg.endswith('\n'): msg = msg[:-1] self._logger.log(self._level, msg) def flush(self): "No-op; required to be a file-like object" def writelines(self, lines): for line in lines: self.write(line) def __getattr__(self, name): return getattr(self._logger, name) def __setattr__(self, name, value): if name not in LoggingLogAdapter.__slots__: setattr(self._logger, name, value) else: object.__setattr__(self, name, value) def __delattr__(self, name): delattr(self._logger, name) #### ## Environ classes. # These subclass dict. They could subclass collections.UserDict on # 3.3+ and proxy to the underlying real dict to avoid a copy if we # have to print them (on 2.7 it's slightly more complicated to be an # instance of collections.MutableMapping; UserDict.UserDict isn't.) # Then we could have either the WSGIHandler.get_environ or the # WSGIServer.get_environ return one of these proxies, and # WSGIHandler.run_application would know to access the `environ.data` # attribute to be able to pass the *real* dict to the application # (because PEP3333 requires no subclasses, only actual dict objects; # wsgiref.validator and webob.Request both enforce this). This has the # advantage of not being fragile if anybody else tries to print/log # self.environ (and not requiring a copy). However, if there are any # subclasses of Handler or Server, this could break if they don't know # to return this type. #### class Environ(dict): """ A base class that can be used for WSGI environment objects. Provisional API. .. 
versionadded:: 1.2a1 """ __slots__ = () # add no ivars or weakref ability def copy(self): return self.__class__(self) if not hasattr(dict, 'iteritems'): # Python 3 def iteritems(self): return self.items() def __reduce_ex__(self, proto): return (dict, (), None, None, iter(self.iteritems())) class SecureEnviron(Environ): """ An environment that does not print its keys and values by default. Provisional API. This is intended to keep potentially sensitive information like HTTP authorization and cookies from being inadvertently printed or logged. For debugging, each instance can have its *secure_repr* attribute set to ``False``, which will cause it to print like a normal dict. When *secure_repr* is ``True`` (the default), then the value of the *whitelist_keys* attribute is consulted; if this value is true-ish, it should be a container (something that responds to ``in``) of key names (typically a list or set). Keys and values in this dictionary that are in *whitelist_keys* will then be printed, while all other values will be masked. These values may be customized on the class by setting the *default_secure_repr* and *default_whitelist_keys*, respectively:: >>> environ = SecureEnviron(key='value') >>> environ # doctest: +ELLIPSIS >> environ.whitelist_keys = {'key'} >>> environ {'key': 'value'} A non-whitelisted key (*only*, to avoid doctest issues) is masked:: >>> environ['secure'] = 'secret'; del environ['key'] >>> environ {'secure': ''} We can turn it off entirely for the instance:: >>> environ.secure_repr = False >>> environ {'secure': 'secret'} We can also customize it at the class level (here we use a new class to be explicit and to avoid polluting the true default values; we would set this class to be the ``environ_class`` of the server):: >>> class MyEnviron(SecureEnviron): ... default_whitelist_keys = ('key',) ... >>> environ = MyEnviron({'key': 'value'}) >>> environ {'key': 'value'} .. versionadded:: 1.2a1 """ default_secure_repr = True default_whitelist_keys = () default_print_masked_keys = True # Allow instances to override the class values, # but inherit from the class if not present. Keeps instances # small since we can't combine __slots__ with class attributes # of the same name. __slots__ = ('secure_repr', 'whitelist_keys', 'print_masked_keys') def __getattr__(self, name): if name in SecureEnviron.__slots__: return getattr(type(self), 'default_' + name) raise AttributeError(name) def __repr__(self): if self.secure_repr: whitelist = self.whitelist_keys print_masked = self.print_masked_keys if whitelist: safe = {k: self[k] if k in whitelist else "" for k in self if k in whitelist or print_masked} safe_repr = repr(safe) if not print_masked and len(safe) != len(self): safe_repr = safe_repr[:-1] + ", (hidden keys: %d)}" % (len(self) - len(safe)) return safe_repr return "" % (len(self), id(self)) return Environ.__repr__(self) __str__ = __repr__ class WSGISecureEnviron(SecureEnviron): """ Specializes the default list of whitelisted keys to a few common WSGI variables. Example:: >>> environ = WSGISecureEnviron(REMOTE_ADDR='::1', HTTP_AUTHORIZATION='secret') >>> environ {'REMOTE_ADDR': '::1', (hidden keys: 1)} >>> import pprint >>> pprint.pprint(environ) {'REMOTE_ADDR': '::1', (hidden keys: 1)} >>> print(pprint.pformat(environ)) {'REMOTE_ADDR': '::1', (hidden keys: 1)} """ default_whitelist_keys = ('REMOTE_ADDR', 'REMOTE_PORT', 'HTTP_HOST') default_print_masked_keys = False class WSGIServer(StreamServer): """ A WSGI server based on :class:`StreamServer` that supports HTTPS. 
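
    A minimal usage sketch (``app`` stands in for any WSGI application
    callable; the listening address is only an example)::

        from gevent.pywsgi import WSGIServer
        WSGIServer(('127.0.0.1', 8080), app).serve_forever()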
:keyword log: If given, an object with a ``write`` method to which request (access) logs will be written. If not given, defaults to :obj:`sys.stderr`. You may pass ``None`` to disable request logging. You may use a wrapper, around e.g., :mod:`logging`, to support objects that don't implement a ``write`` method. (If you pass a :class:`~logging.Logger` instance, or in general something that provides a ``log`` method but not a ``write`` method, such a wrapper will automatically be created and it will be logged to at the :data:`~logging.INFO` level.) :keyword error_log: If given, a file-like object with ``write``, ``writelines`` and ``flush`` methods to which error logs will be written. If not given, defaults to :obj:`sys.stderr`. You may pass ``None`` to disable error logging (not recommended). You may use a wrapper, around e.g., :mod:`logging`, to support objects that don't implement the proper methods. This parameter will become the value for ``wsgi.errors`` in the WSGI environment (if not already set). (As with *log*, wrappers for :class:`~logging.Logger` instances and the like will be created automatically and logged to at the :data:`~logging.ERROR` level.) .. seealso:: :class:`LoggingLogAdapter` See important warnings before attempting to use :mod:`logging`. .. versionchanged:: 1.1a3 Added the ``error_log`` parameter, and set ``wsgi.errors`` in the WSGI environment to this value. .. versionchanged:: 1.1a3 Add support for passing :class:`logging.Logger` objects to the ``log`` and ``error_log`` arguments. .. versionchanged:: 20.6.0 Passing a ``handle`` kwarg to the constructor is now officially deprecated. """ #: A callable taking three arguments: (socket, address, server) and returning #: an object with a ``handle()`` method. The callable is called once for #: each incoming socket request, as is its handle method. The handle method should not #: return until all use of the socket is complete. #: #: This class uses the :class:`WSGIHandler` object as the default value. You may #: subclass this class and set a different default value, or you may pass #: a value to use in the ``handler_class`` keyword constructor argument. handler_class = WSGIHandler #: The object to which request logs will be written. #: It must never be None. Initialized from the ``log`` constructor #: parameter. log = None #: The object to which error logs will be written. #: It must never be None. Initialized from the ``error_log`` constructor #: parameter. error_log = None #: The class of environ objects passed to the handlers. #: Must be a dict subclass. For compliance with :pep:`3333` #: and libraries like WebOb, this is simply :class:`dict` #: but this can be customized in a subclass or per-instance #: (probably to :class:`WSGISecureEnviron`). #: #: .. versionadded:: 1.2a1 environ_class = dict # Undocumented internal detail: the class that WSGIHandler._log_error # will cast to before passing to the loop. secure_environ_class = WSGISecureEnviron base_env = {'GATEWAY_INTERFACE': 'CGI/1.1', 'SERVER_SOFTWARE': 'gevent/%d.%d Python/%d.%d' % (gevent.version_info[:2] + sys.version_info[:2]), 'SCRIPT_NAME': '', 'wsgi.version': (1, 0), 'wsgi.multithread': False, # XXX: Aren't we really, though? 'wsgi.multiprocess': False, 'wsgi.run_once': False} def __init__(self, listener, application=None, backlog=None, spawn='default', log='default', error_log='default', handler_class=None, environ=None, **ssl_args): if 'handle' in ssl_args: # The ultimate base class (BaseServer) uses 'handle' for # the thing we call 'application'. 
We never deliberately # bass a `handle` argument to the base class, but one # could sneak in through ``**ssl_args``, even though that # is not the intent, while application is None. That # causes our own ``def handle`` method to be replaced, # probably leading to bad results. Passing a 'handle' # instead of an 'application' can really confuse things. import warnings warnings.warn("Passing 'handle' kwarg to WSGIServer is deprecated. " "Did you mean application?", DeprecationWarning, stacklevel=2) StreamServer.__init__(self, listener, backlog=backlog, spawn=spawn, **ssl_args) if application is not None: self.application = application if handler_class is not None: self.handler_class = handler_class # Note that we can't initialize these as class variables: # sys.stderr might get monkey patched at runtime. def _make_log(l, level=20): if l == 'default': return sys.stderr if l is None: return _NoopLog() if not hasattr(l, 'write') and hasattr(l, 'log'): return LoggingLogAdapter(l, level) return l self.log = _make_log(log) self.error_log = _make_log(error_log, 40) # logging.ERROR self.set_environ(environ) self.set_max_accept() def set_environ(self, environ=None): if environ is not None: self.environ = environ environ_update = getattr(self, 'environ', None) self.environ = self.environ_class(self.base_env) if self.ssl_enabled: self.environ['wsgi.url_scheme'] = 'https' else: self.environ['wsgi.url_scheme'] = 'http' if environ_update is not None: self.environ.update(environ_update) if self.environ.get('wsgi.errors') is None: self.environ['wsgi.errors'] = self.error_log def set_max_accept(self): if self.environ.get('wsgi.multiprocess'): self.max_accept = 1 def get_environ(self): return self.environ_class(self.environ) def init_socket(self): StreamServer.init_socket(self) self.update_environ() def update_environ(self): """ Called before the first request is handled to fill in WSGI environment values. This includes getting the correct server name and port. """ address = self.address if isinstance(address, tuple): if 'SERVER_NAME' not in self.environ: try: name = socket.getfqdn(address[0]) except socket.error: name = str(address[0]) if not isinstance(name, str): name = name.decode('ascii') self.environ['SERVER_NAME'] = name self.environ.setdefault('SERVER_PORT', str(address[1])) else: self.environ.setdefault('SERVER_NAME', '') self.environ.setdefault('SERVER_PORT', '') def handle(self, sock, address): """ Create an instance of :attr:`handler_class` to handle the request. This method blocks until the handler returns. """ # pylint:disable=method-hidden handler = self.handler_class(sock, address, self) handler.handle() def _main(): # Provisional main handler, for quick tests, not production # usage. from gevent import monkey; monkey.patch_all() import argparse import importlib parser = argparse.ArgumentParser() parser.add_argument("app", help="dotted name of WSGI app callable [module:callable]") parser.add_argument("-b", "--bind", help="The socket to bind", default=":8080") args = parser.parse_args() module_name, app_name = args.app.split(':') module = importlib.import_module(module_name) app = getattr(module, app_name) bind = args.bind server = WSGIServer(bind, app) server.serve_forever() if __name__ == '__main__': _main() gevent-24.11.1/src/gevent/queue.py000066400000000000000000000621541471441230600167370ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. 
# copyright (c) 2018 gevent # cython: auto_pickle=False,embedsignature=True,always_allow_keywords=False """ Synchronized queues. The :mod:`gevent.queue` module implements multi-producer, multi-consumer queues that work across greenlets, with the API similar to the classes found in the standard :mod:`Queue` and :class:`multiprocessing ` modules. The classes in this module implement the iterator protocol. Iterating over a queue means repeatedly calling :meth:`get ` until :meth:`get ` returns ``StopIteration`` (specifically that class, not an instance or subclass). >>> import gevent.queue >>> queue = gevent.queue.Queue() >>> queue.put(1) >>> queue.put(2) >>> queue.put(StopIteration) >>> for item in queue: ... print(item) 1 2 .. versionchanged:: 1.0 ``Queue(0)`` now means queue of infinite size, not a channel. A :exc:`DeprecationWarning` will be issued with this argument. """ import sys from heapq import heappush as _heappush from heapq import heappop as _heappop from heapq import heapify as _heapify import collections import queue as __queue__ # We re-export these exceptions to client modules. # But we also want fast access to them from Cython with a cdef, # and we do that with the _ definition. _Full = Full = __queue__.Full _Empty = Empty = __queue__.Empty from gevent.timeout import Timeout from gevent._hub_local import get_hub_noargs as get_hub from gevent.exceptions import InvalidSwitchError __all__ = [] __implements__ = ['Queue', 'PriorityQueue', 'LifoQueue'] __extensions__ = ['JoinableQueue', 'Channel'] __imports__ = ['Empty', 'Full'] __all__.append('SimpleQueue') # SimpleQueue is implemented in C and directly allocates locks # unaffected by monkey patching. We need the Python version. SimpleQueue = __queue__._PySimpleQueue # pylint:disable=no-member if hasattr(__queue__, 'ShutDown'): # New in 3.13 ShutDown = __queue__.ShutDown __imports__.append('ShutDown') else: class ShutDown(Exception): """ gevent extension for Python versions less than 3.13 """ __extensions__.append('ShutDown') __all__ += (__implements__ + __extensions__ + __imports__) # pylint 2.0.dev2 things collections.dequeue.popleft() doesn't return # pylint:disable=assignment-from-no-return def _safe_remove(deq, item): # For when the item may have been removed by # Queue._unlock try: deq.remove(item) except ValueError: pass import gevent._waiter locals()['Waiter'] = gevent._waiter.Waiter locals()['getcurrent'] = __import__('greenlet').getcurrent locals()['greenlet_init'] = lambda: None class ItemWaiter(Waiter): # pylint:disable=undefined-variable # pylint:disable=assigning-non-slot __slots__ = ( 'item', 'queue', ) def __init__(self, item, queue): Waiter.__init__(self) # pylint:disable=undefined-variable self.item = item self.queue = queue def put_and_switch(self): self.queue._put(self.item) self.queue = None self.item = None return self.switch(self) class Queue(object): """ Create a queue object with a given maximum size. If *maxsize* is less than or equal to zero or ``None``, the queue size is infinite. Queues have a ``len`` equal to the number of items in them (the :meth:`qsize`), but in a boolean context they are always True. .. versionchanged:: 1.1b3 Queues now support :func:`len`; it behaves the same as :meth:`qsize`. .. versionchanged:: 1.1b3 Multiple greenlets that block on a call to :meth:`put` for a full queue will now be awakened to put their items into the queue in the order in which they arrived. 
Likewise, multiple greenlets that block on a call to :meth:`get` for an empty queue will now receive items in the order in which they blocked. An implementation quirk under CPython *usually* ensured this was roughly the case previously anyway, but that wasn't the case for PyPy. .. versionchanged:: 24.10.1 Implement the ``shutdown`` methods from Python 3.13. """ __slots__ = ( '_maxsize', 'getters', 'putters', 'hub', '_event_unlock', 'queue', '__weakref__', 'is_shutdown', # 3.13 ) def __init__(self, maxsize=None, items=(), _warn_depth=2): if maxsize is not None and maxsize <= 0: if maxsize == 0: import warnings warnings.warn( 'Queue(0) now equivalent to Queue(None); if you want a channel, use Channel', DeprecationWarning, stacklevel=_warn_depth) maxsize = None self._maxsize = maxsize if maxsize is not None else -1 # Explicitly maintain order for getters and putters that block # so that callers can consistently rely on getting things out # in the apparent order they went in. This was once required by # imap_unordered. Previously these were set() objects, and the # items put in the set have default hash() and eq() methods; # under CPython, since new objects tend to have increasing # hash values, this tended to roughly maintain order anyway, # but that's not true under PyPy. An alternative to a deque # (to avoid the linear scan of remove()) might be an # OrderedDict, but it's 2.7 only; we don't expect to have so # many waiters that removing an arbitrary element is a # bottleneck, though. self.getters = collections.deque() self.putters = collections.deque() self.hub = get_hub() self._event_unlock = None self.queue = self._create_queue(items) self.is_shutdown = False @property def maxsize(self): return self._maxsize if self._maxsize > 0 else None @maxsize.setter def maxsize(self, nv): # QQQ make maxsize into a property with setter that schedules unlock if necessary if nv is None or nv <= 0: self._maxsize = -1 else: self._maxsize = nv def copy(self): return type(self)(self.maxsize, self.queue) def _create_queue(self, items=()): return collections.deque(items) def _get(self): return self.queue.popleft() def _peek(self): return self.queue[0] def _put(self, item): self.queue.append(item) def __repr__(self): return '<%s at %s%s>' % (type(self).__name__, hex(id(self)), self._format()) def __str__(self): return '<%s%s>' % (type(self).__name__, self._format()) def _format(self): result = [] if self.maxsize is not None: result.append('maxsize=%r' % (self.maxsize, )) if getattr(self, 'queue', None): result.append('queue=%r' % (self.queue, )) if self.getters: result.append('getters[%s]' % len(self.getters)) if self.putters: result.append('putters[%s]' % len(self.putters)) if result: return ' ' + ' '.join(result) return '' def qsize(self): """Return the size of the queue.""" return len(self.queue) def __len__(self): """ Return the size of the queue. This is the same as :meth:`qsize`. .. versionadded: 1.1b3 Previously, getting len() of a queue would raise a TypeError. """ return self.qsize() def __bool__(self): """ A queue object is always True. .. versionadded: 1.1b3 Now that queues support len(), they need to implement ``__bool__`` to return True for backwards compatibility. """ return True def empty(self): """Return ``True`` if the queue is empty, ``False`` otherwise.""" return not self.qsize() def full(self): """Return ``True`` if the queue is full, ``False`` otherwise. ``Queue(None)`` is never full. 
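
        For example (illustrative)::

            q = Queue(maxsize=2)
            q.put(1)
            q.put(2)
            q.full()   # -> True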
""" return self._maxsize > 0 and self.qsize() >= self._maxsize def put(self, item, block=True, timeout=None): """ Put an item into the queue. If optional arg *block* is true and *timeout* is ``None`` (the default), block if necessary until a free slot is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :class:`Full` exception if no free slot was available within that time. Otherwise (*block* is false), put an item on the queue if a free slot is immediately available, else raise the :class:`Full` exception (*timeout* is ignored in that case). ... versionchanged:: 24.10.1 Now raises a ``ValueError`` for a negative *timeout* in the cases that CPython does. """ if self.is_shutdown: raise ShutDown if self._maxsize == -1 or self.qsize() < self._maxsize: # there's a free slot, put an item right away. # For compatibility with CPython, verify that the timeout is non-negative. if block and timeout is not None and timeout < 0: raise ValueError("'timeout' must be a non-negative number") self._put(item) if self.getters: self._schedule_unlock() return if self.hub is getcurrent(): # pylint:disable=undefined-variable # We're in the mainloop, so we cannot wait; we can switch to other greenlets though. # Check if possible to get a free slot in the queue. while self.getters and self.qsize() and self.qsize() >= self._maxsize: getter = self.getters.popleft() getter.switch(getter) if self.qsize() < self._maxsize: self._put(item) return raise Full if block: waiter = ItemWaiter(item, self) self.putters.append(waiter) timeout = Timeout._start_new_or_dummy(timeout, Full) try: if self.getters: self._schedule_unlock() result = waiter.get() if result is not waiter: raise InvalidSwitchError("Invalid switch into Queue.put: %r" % (result, )) finally: timeout.cancel() _safe_remove(self.putters, waiter) return raise Full def put_nowait(self, item): """Put an item into the queue without blocking. Only enqueue the item if a free slot is immediately available. Otherwise raise the :class:`Full` exception. """ self.put(item, False) def __get_or_peek(self, method, block, timeout): # Internal helper method. The `method` should be either # self._get when called from self.get() or self._peek when # called from self.peek(). Call this after the initial check # to see if there are items in the queue. if self.is_shutdown: raise ShutDown if block and timeout is not None and timeout < 0: raise ValueError("'timeout' must be a non-negative number") if self.hub is getcurrent(): # pylint:disable=undefined-variable # special case to make get_nowait() or peek_nowait() runnable in the mainloop greenlet # there are no items in the queue; try to fix the situation by unlocking putters while self.putters: # Note: get() used popleft(), peek used pop(); popleft # is almost certainly correct. self.putters.popleft().put_and_switch() if self.qsize(): return method() raise Empty if not block: # We can't block, we're not the hub, and we have nothing # to return. No choice but to raise the Empty exception. # # CAUTION: Calling ``q.get(False)`` in a tight loop won't # work like it does in CPython where it should eventually # let another thread make progress, because there's never # a chance to switch greenlets here. We don't sleep() # to enforce that, as that would be a significant behaviour # change. 
raise Empty waiter = Waiter() # pylint:disable=undefined-variable timeout = Timeout._start_new_or_dummy(timeout, Empty) try: self.getters.append(waiter) if self.putters: self._schedule_unlock() result = waiter.get() if result is not waiter: raise InvalidSwitchError('Invalid switch into Queue.get: %r' % (result, )) return method() finally: timeout.cancel() _safe_remove(self.getters, waiter) def get(self, block=True, timeout=None): """ Remove and return an item from the queue. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :class:`Empty` exception if no item was available within that time. Otherwise (*block* is false), return an item if one is immediately available, else raise the :class:`Empty` exception (*timeout* is ignored in that case). """ if self.qsize(): if self.putters: self._schedule_unlock() return self._get() return self.__get_or_peek(self._get, block, timeout) def get_nowait(self): """Remove and return an item from the queue without blocking. Only get an item if one is immediately available. Otherwise raise the :class:`Empty` exception. """ return self.get(False) def peek(self, block=True, timeout=None): """Return an item from the queue without removing it. If optional args *block* is true and *timeout* is ``None`` (the default), block if necessary until an item is available. If *timeout* is a positive number, it blocks at most *timeout* seconds and raises the :class:`Empty` exception if no item was available within that time. Otherwise (*block* is false), return an item if one is immediately available, else raise the :class:`Empty` exception (*timeout* is ignored in that case). """ if self.qsize(): # This doesn't schedule an unlock like get() does because we're not # actually making any space. return self._peek() return self.__get_or_peek(self._peek, block, timeout) def peek_nowait(self): """Return an item from the queue without blocking. Only return an item if one is immediately available. Otherwise raise the :class:`Empty` exception. """ return self.peek(False) def _unlock(self): while True: repeat = False if self.putters and (self._maxsize == -1 or self.qsize() < self._maxsize): repeat = True try: putter = self.putters.popleft() self._put(putter.item) except: # pylint:disable=bare-except putter.throw(*sys.exc_info()) else: putter.switch(putter) if self.getters and self.qsize(): repeat = True getter = self.getters.popleft() getter.switch(getter) if not repeat: return def _schedule_unlock(self): if not self._event_unlock: self._event_unlock = self.hub.loop.run_callback(self._unlock) def __iter__(self): return self def __next__(self): result = self.get() if result is StopIteration: raise result return result def shutdown(self, immediate=False): """ "Shut-down the queue, making queue gets and puts raise `ShutDown`. By default, gets will only raise once the queue is empty. Set *immediate* to True to make gets raise immediately instead. All blocked callers of `put` and `get` will be unblocked. In joinable queues, if *immediate*, a task is marked as done for each item remaining in the queue, which may unblock callers of `join`. 
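
        An illustrative sketch of the semantics (not a doctest)::

            q = Queue()
            q.put(1)
            q.shutdown()
            try:
                q.put(2)            # raises ShutDown immediately
            except ShutDown:
                pass
            q.get()                 # still returns 1; the queue was not drained
            q.get(block=False)      # now raises ShutDown instead of Empty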
""" self.is_shutdown = True if immediate: self._drain_for_immediate_shutdown() getters = list(self.getters) putters = list(self.putters) self.getters.clear() self.putters.clear() for waiter in getters + putters: self.hub.loop.run_callback(waiter.throw, ShutDown) def _drain_for_immediate_shutdown(self): while self.qsize(): self.get() class UnboundQueue(Queue): # A specialization of Queue that knows it can never # be bound. Changing its maxsize has no effect. __slots__ = () def __init__(self, maxsize=None, items=()): if maxsize is not None: raise ValueError("UnboundQueue has no maxsize") Queue.__init__(self, maxsize, items) self.putters = None # Will never be used. def put(self, item, block=True, timeout=None): self._put(item) if self.getters: self._schedule_unlock() class PriorityQueue(Queue): '''A subclass of :class:`Queue` that retrieves entries in priority order (lowest first). Entries are typically tuples of the form: ``(priority number, data)``. .. versionchanged:: 1.2a1 Any *items* given to the constructor will now be passed through :func:`heapq.heapify` to ensure the invariants of this class hold. Previously it was just assumed that they were already a heap. ''' __slots__ = () def _create_queue(self, items=()): q = list(items) _heapify(q) return q def _put(self, item): _heappush(self.queue, item) def _get(self): return _heappop(self.queue) class JoinableQueue(Queue): """ A subclass of :class:`Queue` that additionally has :meth:`task_done` and :meth:`join` methods. """ __slots__ = ( '_cond', 'unfinished_tasks', ) def __init__(self, maxsize=None, items=(), unfinished_tasks=None): """ .. versionchanged:: 1.1a1 If *unfinished_tasks* is not given, then all the given *items* (if any) will be considered unfinished. """ Queue.__init__(self, maxsize, items, _warn_depth=3) from gevent.event import Event self._cond = Event() self._cond.set() if unfinished_tasks: self.unfinished_tasks = unfinished_tasks elif items: self.unfinished_tasks = len(items) else: self.unfinished_tasks = 0 if self.unfinished_tasks: self._cond.clear() def copy(self): return type(self)(self.maxsize, self.queue, self.unfinished_tasks) def _format(self): result = Queue._format(self) if self.unfinished_tasks: result += ' tasks=%s _cond=%s' % (self.unfinished_tasks, self._cond) return result def _put(self, item): Queue._put(self, item) self._did_put_task() def _did_put_task(self): self.unfinished_tasks += 1 self._cond.clear() def task_done(self): '''Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each :meth:`get ` used to fetch a task, a subsequent call to :meth:`task_done` tells the queue that the processing on the task is complete. If a :meth:`join` is currently blocking, it will resume when all items have been processed (meaning that a :meth:`task_done` call was received for every item that had been :meth:`put ` into the queue). Raises a :exc:`ValueError` if called more times than there were items placed in the queue. ''' if self.unfinished_tasks <= 0: raise ValueError('task_done() called too many times') self.unfinished_tasks -= 1 if self.unfinished_tasks == 0: self._cond.set() def join(self, timeout=None): ''' Block until all items in the queue have been gotten and processed. The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls :meth:`task_done` to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, :meth:`join` unblocks. 
:param float timeout: If not ``None``, then wait no more than this time in seconds for all tasks to finish. :return: ``True`` if all tasks have finished; if ``timeout`` was given and expired before all tasks finished, ``False``. .. versionchanged:: 1.1a1 Add the *timeout* parameter. ''' return self._cond.wait(timeout=timeout) def _drain_for_immediate_shutdown(self): while self.qsize(): self.get() self.task_done() class LifoQueue(JoinableQueue): """ A subclass of :class:`JoinableQueue` that retrieves most recently added entries first. .. versionchanged:: 24.10.1 Now extends :class:`JoinableQueue` instead of just :class:`Queue`. """ __slots__ = () def _create_queue(self, items=()): return list(items) def _put(self, item): self.queue.append(item) self._did_put_task() def _get(self): return self.queue.pop() def _peek(self): return self.queue[-1] class Channel(object): __slots__ = ( 'getters', 'putters', 'hub', '_event_unlock', '__weakref__', ) def __init__(self, maxsize=1): # We take maxsize to simplify certain kinds of code if maxsize != 1: raise ValueError("Channels have a maxsize of 1") self.getters = collections.deque() self.putters = collections.deque() self.hub = get_hub() self._event_unlock = None def __repr__(self): return '<%s at %s %s>' % (type(self).__name__, hex(id(self)), self._format()) def __str__(self): return '<%s %s>' % (type(self).__name__, self._format()) def _format(self): result = '' if self.getters: result += ' getters[%s]' % len(self.getters) if self.putters: result += ' putters[%s]' % len(self.putters) return result @property def balance(self): return len(self.putters) - len(self.getters) def qsize(self): return 0 def empty(self): return True def full(self): return True def put(self, item, block=True, timeout=None): if self.hub is getcurrent(): # pylint:disable=undefined-variable if self.getters: getter = self.getters.popleft() getter.switch(item) return raise Full if not block: timeout = 0 waiter = Waiter() # pylint:disable=undefined-variable item = (item, waiter) self.putters.append(item) timeout = Timeout._start_new_or_dummy(timeout, Full) try: if self.getters: self._schedule_unlock() result = waiter.get() if result is not waiter: raise InvalidSwitchError("Invalid switch into Channel.put: %r" % (result, )) except: _safe_remove(self.putters, item) raise finally: timeout.cancel() def put_nowait(self, item): self.put(item, False) def get(self, block=True, timeout=None): if self.hub is getcurrent(): # pylint:disable=undefined-variable if self.putters: item, putter = self.putters.popleft() self.hub.loop.run_callback(putter.switch, putter) return item if not block: timeout = 0 waiter = Waiter() # pylint:disable=undefined-variable timeout = Timeout._start_new_or_dummy(timeout, Empty) try: self.getters.append(waiter) if self.putters: self._schedule_unlock() return waiter.get() except: self.getters.remove(waiter) raise finally: timeout.close() def get_nowait(self): return self.get(False) def _unlock(self): while self.putters and self.getters: getter = self.getters.popleft() item, putter = self.putters.popleft() getter.switch(item) putter.switch(putter) def _schedule_unlock(self): if not self._event_unlock: self._event_unlock = self.hub.loop.run_callback(self._unlock) def __iter__(self): return self def __next__(self): result = self.get() if result is StopIteration: raise result return result next = __next__ # Py2 def _init(): greenlet_init() # pylint:disable=undefined-variable _init() from gevent._util import import_c_accel import_c_accel(globals(), 'gevent._queue') 
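
# ---------------------------------------------------------------------------
# Usage sketch (illustrative only, not part of the library API): a bounded
# JoinableQueue shared between a producer and worker greenlets, using
# ``task_done``/``join`` to wait for completion. The names ``_demo`` and
# ``worker`` below are hypothetical.
if __name__ == '__main__':  # pragma: no cover
    import gevent

    def _demo():
        tasks = JoinableQueue(maxsize=4)

        def worker(n):
            while True:
                try:
                    item = tasks.get(timeout=1)
                except Empty:
                    return
                try:
                    print('worker %d processed %r' % (n, item))
                finally:
                    tasks.task_done()

        workers = [gevent.spawn(worker, n) for n in range(2)]
        for i in range(10):
            tasks.put(i)   # blocks while the queue already holds maxsize items
        tasks.join()       # returns once task_done() was called for every item
        gevent.joinall(workers)

    _demo()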
gevent-24.11.1/src/gevent/resolver/000077500000000000000000000000001471441230600170725ustar00rootroot00000000000000gevent-24.11.1/src/gevent/resolver/__init__.py000066400000000000000000000250101471441230600212010ustar00rootroot00000000000000# Copyright (c) 2018 gevent contributors. See LICENSE for details. import _socket from _socket import AF_INET from _socket import AF_UNSPEC from _socket import AI_CANONNAME from _socket import AI_PASSIVE from _socket import AI_NUMERICHOST from _socket import EAI_NONAME from _socket import EAI_SERVICE from _socket import SOCK_DGRAM from _socket import SOCK_STREAM from _socket import SOL_TCP from _socket import error from _socket import gaierror from _socket import getaddrinfo as native_getaddrinfo from _socket import getnameinfo as native_getnameinfo from _socket import gethostbyaddr as native_gethostbyaddr from _socket import gethostbyname as native_gethostbyname from _socket import gethostbyname_ex as native_gethostbyname_ex from _socket import getservbyname as native_getservbyname from gevent._compat import string_types from gevent._compat import text_type from gevent._compat import hostname_types from gevent._compat import integer_types from gevent._compat import PYPY from gevent._compat import MAC from gevent.resolver._addresses import is_ipv6_addr # Nothing public here. __all__ = () # trigger import of encodings.idna to avoid https://github.com/gevent/gevent/issues/349 u'foo'.encode('idna') def _lookup_port(port, socktype): # pylint:disable=too-many-branches socktypes = [] if isinstance(port, string_types): try: port = int(port) except ValueError: try: if socktype == 0: origport = port try: port = native_getservbyname(port, 'tcp') socktypes.append(SOCK_STREAM) except error: port = native_getservbyname(port, 'udp') socktypes.append(SOCK_DGRAM) else: try: if port == native_getservbyname(origport, 'udp'): socktypes.append(SOCK_DGRAM) except error: pass elif socktype == SOCK_STREAM: port = native_getservbyname(port, 'tcp') elif socktype == SOCK_DGRAM: port = native_getservbyname(port, 'udp') else: raise gaierror(EAI_SERVICE, 'Servname not supported for ai_socktype') except error as ex: if 'not found' in str(ex): raise gaierror(EAI_SERVICE, 'Servname not supported for ai_socktype') raise gaierror(str(ex)) except UnicodeEncodeError: raise error('Int or String expected', port) elif port is None: port = 0 elif isinstance(port, integer_types): pass else: raise error('Int or String expected', port, type(port)) port = int(port % 65536) if not socktypes and socktype: socktypes.append(socktype) return port, socktypes def _resolve_special(hostname, family): if not isinstance(hostname, hostname_types): raise TypeError("argument 1 must be str, bytes or bytearray, not %s" % (type(hostname),)) if hostname in (u'', b''): result = native_getaddrinfo(None, 0, family, SOCK_DGRAM, 0, AI_PASSIVE) if len(result) != 1: raise error('wildcard resolved to multiple address') return result[0][4][0] return hostname class AbstractResolver(object): HOSTNAME_ENCODING = 'idna' _LOCAL_HOSTNAMES = ( b'localhost', b'ip6-localhost', b'::1', b'127.0.0.1', ) _LOCAL_AND_BROADCAST_HOSTNAMES = _LOCAL_HOSTNAMES + ( b'255.255.255.255', b'', ) EAI_NONAME_MSG = ( 'nodename nor servname provided, or not known' if MAC else 'Name or service not known' ) EAI_FAMILY_MSG = ( 'ai_family not supported' ) _KNOWN_ADDR_FAMILIES = { v for k, v in vars(_socket).items() if k.startswith('AF_') } _KNOWN_SOCKTYPES = { v for k, v in vars(_socket).items() if k.startswith('SOCK_') and k not in ('SOCK_CLOEXEC', 
'SOCK_MAX_SIZE') } def close(self): """ Release resources held by this object. Subclasses that define resources should override. .. versionadded:: 22.10.1 """ @staticmethod def fixup_gaierror(func): import functools @functools.wraps(func) def resolve(self, *args, **kwargs): try: return func(self, *args, **kwargs) except gaierror as ex: if ex.args[0] == EAI_NONAME and len(ex.args) == 1: # dnspython doesn't set an error message ex.args = (EAI_NONAME, self.EAI_NONAME_MSG) ex.errno = EAI_NONAME raise return resolve def _hostname_to_bytes(self, hostname): if isinstance(hostname, text_type): hostname = hostname.encode(self.HOSTNAME_ENCODING) elif not isinstance(hostname, (bytes, bytearray)): raise TypeError('Expected str, bytes or bytearray, not %s' % type(hostname).__name__) return bytes(hostname) def gethostbyname(self, hostname, family=AF_INET): # The native ``gethostbyname`` and ``gethostbyname_ex`` have some different # behaviour with special names. Notably, ``gethostbyname`` will handle # both "" and "255.255.255.255", while ``gethostbyname_ex`` refuses to # handle those; they result in different errors, too. So we can't # pass those through. hostname = self._hostname_to_bytes(hostname) if hostname in self._LOCAL_AND_BROADCAST_HOSTNAMES: return native_gethostbyname(hostname) hostname = _resolve_special(hostname, family) return self.gethostbyname_ex(hostname, family)[-1][0] def _gethostbyname_ex(self, hostname_bytes, family): """Raise an ``herror`` or a ``gaierror``.""" aliases = self._getaliases(hostname_bytes, family) addresses = [] tuples = self.getaddrinfo(hostname_bytes, 0, family, SOCK_STREAM, SOL_TCP, AI_CANONNAME) canonical = tuples[0][3] for item in tuples: addresses.append(item[4][0]) # XXX we just ignore aliases return (canonical, aliases, addresses) def gethostbyname_ex(self, hostname, family=AF_INET): hostname = self._hostname_to_bytes(hostname) if hostname in self._LOCAL_AND_BROADCAST_HOSTNAMES: # The broadcast specials aren't handled here, but they may produce # special errors that are hard to replicate across all systems. return native_gethostbyname_ex(hostname) return self._gethostbyname_ex(hostname, family) def _getaddrinfo(self, host_bytes, port, family, socktype, proto, flags): raise NotImplementedError def getaddrinfo(self, host, port, family=0, socktype=0, proto=0, flags=0): host = self._hostname_to_bytes(host) if host is not None else None if ( not isinstance(host, bytes) # 1, 2 or (flags & AI_NUMERICHOST) # 3 or host in self._LOCAL_HOSTNAMES # 4 or (is_ipv6_addr(host) and host.startswith(b'fe80')) # 5 ): # This handles cases which do not require network access # 1) host is None # 2) host is of an invalid type # 3) AI_NUMERICHOST flag is set # 4) It's a well-known alias. TODO: This is special casing for c-ares that we don't # really want to do. It's here because it resolves a discrepancy with the system # resolvers caught by test cases. In gevent 20.4.0, this only worked correctly on # Python 3 and not Python 2, by accident. # 5) host is a link-local ipv6; dnspython returns the wrong # scope-id for those. 
return native_getaddrinfo(host, port, family, socktype, proto, flags) return self._getaddrinfo(host, port, family, socktype, proto, flags) def _getaliases(self, hostname, family): # pylint:disable=unused-argument return [] def _gethostbyaddr(self, ip_address_bytes): """Raises herror.""" raise NotImplementedError def gethostbyaddr(self, ip_address): ip_address = _resolve_special(ip_address, AF_UNSPEC) ip_address = self._hostname_to_bytes(ip_address) if ip_address in self._LOCAL_AND_BROADCAST_HOSTNAMES: return native_gethostbyaddr(ip_address) return self._gethostbyaddr(ip_address) def _getnameinfo(self, address_bytes, port, sockaddr, flags): raise NotImplementedError def getnameinfo(self, sockaddr, flags): if not isinstance(flags, integer_types): raise TypeError('an integer is required') if not isinstance(sockaddr, tuple): raise TypeError('getnameinfo() argument 1 must be a tuple') address = sockaddr[0] address = self._hostname_to_bytes(sockaddr[0]) if address in self._LOCAL_AND_BROADCAST_HOSTNAMES: return native_getnameinfo(sockaddr, flags) port = sockaddr[1] if not isinstance(port, integer_types): raise TypeError('port must be an integer, not %s' % type(port)) if not PYPY and port >= 65536: # System resolvers do different things with an # out-of-bound port; macOS CPython 3.8 raises ``gaierror: [Errno 8] # nodename nor servname provided, or not known``, while # manylinux CPython 2.7 appears to ignore it and raises ``error: # sockaddr resolved to multiple addresses``. TravisCI, at least ot # one point, successfully resolved www.gevent.org to ``(readthedocs.org, '0')``. # But c-ares 1.16 would raise ``gaierror(25, 'ARES_ESERVICE: unknown')``. # Doing this appears to get the expected results on CPython port = 0 if PYPY and (port < 0 or port >= 65536): # PyPy seems to always be strict about that and produce the same results # on all platforms. raise OverflowError("port must be 0-65535.") if len(sockaddr) > 2: # Must be IPv6: (host, port, [flowinfo, [scopeid]]) flowinfo = sockaddr[2] if flowinfo > 0xfffff: raise OverflowError("getnameinfo(): flowinfo must be 0-1048575.") return self._getnameinfo(address, port, sockaddr, flags) gevent-24.11.1/src/gevent/resolver/_addresses.py000066400000000000000000000112731471441230600215640ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2019 gevent contributors. See LICENSE for details. # # Portions of this code taken from dnspython # https://github.com/rthalley/dnspython # # Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license # Copyright (C) 2003-2017 Nominum, Inc. # # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose with or without fee is hereby granted, # provided that the above copyright notice and this permission notice # appear in all copies. # # THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT # OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. """ Private support for parsing textual addresses. 
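
For illustration, the helpers defined below are intended to behave like
this (a sketch, not a doctest)::

    is_ipv4_addr('127.0.0.1')     # -> True
    is_ipv6_addr('::1')           # -> True
    is_ipv4_addr('300.1.1.1')     # -> False: not a valid dotted quad
    is_ipv6_addr('fe80::1%eth0')  # -> True: a zone/scope suffix is allowed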
""" from __future__ import absolute_import, division, print_function import binascii import re import struct from gevent.resolver import hostname_types class AddressSyntaxError(ValueError): pass def _ipv4_inet_aton(text): """ Convert an IPv4 address in text form to binary struct. *text*, a ``text``, the IPv4 address in textual form. Returns a ``binary``. """ if not isinstance(text, bytes): text = text.encode() parts = text.split(b'.') if len(parts) != 4: raise AddressSyntaxError(text) for part in parts: if not part.isdigit(): raise AddressSyntaxError if len(part) > 1 and part[0] == '0': # No leading zeros raise AddressSyntaxError(text) try: ints = [int(part) for part in parts] return struct.pack('BBBB', *ints) except: raise AddressSyntaxError(text) def _ipv6_inet_aton(text, _v4_ending=re.compile(br'(.*):(\d+\.\d+\.\d+\.\d+)$'), _colon_colon_start=re.compile(br'::.*'), _colon_colon_end=re.compile(br'.*::$')): """ Convert an IPv6 address in text form to binary form. *text*, a ``text``, the IPv6 address in textual form. Returns a ``binary``. """ # pylint:disable=too-many-branches # # Our aim here is not something fast; we just want something that works. # if not isinstance(text, bytes): text = text.encode() if text == b'::': text = b'0::' # # Get rid of the icky dot-quad syntax if we have it. # m = _v4_ending.match(text) if not m is None: b = bytearray(_ipv4_inet_aton(m.group(2))) text = (u"{}:{:02x}{:02x}:{:02x}{:02x}".format(m.group(1).decode(), b[0], b[1], b[2], b[3])).encode() # # Try to turn '::' into ':'; if no match try to # turn '::' into ':' # m = _colon_colon_start.match(text) if not m is None: text = text[1:] else: m = _colon_colon_end.match(text) if not m is None: text = text[:-1] # # Now canonicalize into 8 chunks of 4 hex digits each # chunks = text.split(b':') l = len(chunks) if l > 8: raise SyntaxError seen_empty = False canonical = [] for c in chunks: if c == b'': if seen_empty: raise AddressSyntaxError(text) seen_empty = True for _ in range(0, 8 - l + 1): canonical.append(b'0000') else: lc = len(c) if lc > 4: raise AddressSyntaxError(text) if lc != 4: c = (b'0' * (4 - lc)) + c canonical.append(c) if l < 8 and not seen_empty: raise AddressSyntaxError(text) text = b''.join(canonical) # # Finally we can go to binary. # try: return binascii.unhexlify(text) except (binascii.Error, TypeError): raise AddressSyntaxError(text) def _is_addr(host, parse=_ipv4_inet_aton): if not host or not isinstance(host, hostname_types): return False try: parse(host) except AddressSyntaxError: return False return True # Return True if host is a valid IPv4 address is_ipv4_addr = _is_addr def is_ipv6_addr(host): # Return True if host is a valid IPv6 address if host and isinstance(host, hostname_types): s = '%' if isinstance(host, str) else b'%' host = host.split(s, 1)[0] return _is_addr(host, _ipv6_inet_aton) gevent-24.11.1/src/gevent/resolver/_hostsfile.py000066400000000000000000000110251471441230600216020ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2019 gevent contributors. See LICENSE for details. # # Portions of this code taken from dnspython # https://github.com/rthalley/dnspython # # Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license # Copyright (C) 2003-2017 Nominum, Inc. # # Permission to use, copy, modify, and distribute this software and its # documentation for any purpose with or without fee is hereby granted, # provided that the above copyright notice and this permission notice # appear in all copies. 
# # THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES # WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF # MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR # ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES # WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN # ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT # OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. """ Private support for parsing /etc/hosts. """ from __future__ import absolute_import, division, print_function import sys import os import re from gevent.resolver._addresses import is_ipv4_addr from gevent.resolver._addresses import is_ipv6_addr from gevent._compat import iteritems class HostsFile(object): """ A class to read the contents of a hosts file (/etc/hosts). """ LINES_RE = re.compile(r""" \s* # Leading space ([^\r\n#]+?) # The actual match, non-greedy so as not to include trailing space \s* # Trailing space (?:[#][^\r\n]+)? # Comments (?:$|[\r\n]+) # EOF or newline """, re.VERBOSE) def __init__(self, fname=None): self.v4 = {} # name -> ipv4 self.v6 = {} # name -> ipv6 self.aliases = {} # name -> canonical_name self.reverse = {} # ip addr -> some name if fname is None: if os.name == 'posix': fname = '/etc/hosts' elif os.name == 'nt': # pragma: no cover fname = os.path.expandvars( r'%SystemRoot%\system32\drivers\etc\hosts') self.fname = fname assert self.fname self._last_load = 0 def _readlines(self): # Read the contents of the hosts file. # # Return list of lines, comment lines and empty lines are # excluded. Note that this performs disk I/O so can be # blocking. with open(self.fname, 'rb') as fp: fdata = fp.read() # XXX: Using default decoding. Is that correct? udata = fdata.decode(errors='ignore') if not isinstance(fdata, str) else fdata return self.LINES_RE.findall(udata) def load(self): # pylint:disable=too-many-locals # Load hosts file # This will (re)load the data from the hosts # file if it has changed. try: load_time = os.stat(self.fname).st_mtime needs_load = load_time > self._last_load except OSError: from gevent import get_hub get_hub().handle_error(self, *sys.exc_info()) needs_load = False if not needs_load: return v4 = {} v6 = {} aliases = {} reverse = {} for line in self._readlines(): parts = line.split() if len(parts) < 2: continue ip = parts.pop(0) if is_ipv4_addr(ip): ipmap = v4 elif is_ipv6_addr(ip): if ip.startswith('fe80'): # Do not use link-local addresses, OSX stores these here continue ipmap = v6 else: continue cname = parts.pop(0).lower() ipmap[cname] = ip for alias in parts: alias = alias.lower() ipmap[alias] = ip aliases[alias] = cname # XXX: This is wrong for ipv6 if ipmap is v4: ptr = '.'.join(reversed(ip.split('.'))) + '.in-addr.arpa' else: ptr = ip + '.ip6.arpa.' if ptr not in reverse: reverse[ptr] = cname self._last_load = load_time self.v4 = v4 self.v6 = v6 self.aliases = aliases self.reverse = reverse def iter_all_host_addr_pairs(self): self.load() for name, addr in iteritems(self.v4): yield name, addr for name, addr in iteritems(self.v6): yield name, addr gevent-24.11.1/src/gevent/resolver/ares.py000066400000000000000000000312451471441230600204030ustar00rootroot00000000000000# Copyright (c) 2011-2015 Denis Bilenko. See LICENSE for details. """ c-ares based hostname resolver. 
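
To select this resolver, set the ``GEVENT_RESOLVER`` environment variable to
``ares`` before gevent is imported, or configure it early in the process
(a sketch; this assumes the documented :obj:`gevent.config` object)::

    import gevent
    gevent.config.resolver = 'ares'   # must happen before the hub's resolver is first used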
""" from __future__ import absolute_import, print_function, division import os import warnings from _socket import gaierror from _socket import herror from _socket import error from _socket import EAI_NONAME from gevent._compat import text_type from gevent._compat import integer_types from gevent.hub import Waiter from gevent.hub import get_hub from gevent.socket import AF_UNSPEC from gevent.socket import AF_INET from gevent.socket import AF_INET6 from gevent.socket import SOCK_DGRAM from gevent.socket import SOCK_STREAM from gevent.socket import SOL_TCP from gevent.socket import SOL_UDP from gevent._config import config from gevent._config import AresSettingMixin from .cares import channel, InvalidIP # pylint:disable=import-error,no-name-in-module from . import _lookup_port as lookup_port from . import AbstractResolver __all__ = ['Resolver'] class Resolver(AbstractResolver): """ Implementation of the resolver API using the `c-ares`_ library. This implementation uses the c-ares library to handle name resolution. c-ares is natively asynchronous at the socket level and so integrates well into gevent's event loop. In comparison to :class:`gevent.resolver_thread.Resolver` (which delegates to the native system resolver), the implementation is much more complex. In addition, there have been reports of it not properly honoring certain system configurations (for example, the order in which IPv4 and IPv6 results are returned may not match the threaded resolver). However, because it does not use threads, it may scale better for applications that make many lookups. There are some known differences from the system resolver. - ``gethostbyname_ex`` and ``gethostbyaddr`` may return different for the ``aliaslist`` tuple member. (Sometimes the same, sometimes in a different order, sometimes a different alias altogether.) - ``gethostbyname_ex`` may return the ``ipaddrlist`` in a different order. - ``getaddrinfo`` does not return ``SOCK_RAW`` results. - ``getaddrinfo`` may return results in a different order. - Handling of ``.local`` (mDNS) names may be different, even if they are listed in the hosts file. - c-ares will not resolve ``broadcasthost``, even if listed in the hosts file prior to 2020-04-30. - This implementation may raise ``gaierror(4)`` where the system implementation would raise ``herror(1)`` or vice versa, with different error numbers. However, after 2020-04-30, this should be much reduced. - The results for ``localhost`` may be different. In particular, some system resolvers will return more results from ``getaddrinfo`` than c-ares does, such as SOCK_DGRAM results, and c-ares may report more ips on a multi-homed host. - The system implementation may return some names fully qualified, where this implementation returns only the host name. This appears to be the case only with entries found in ``/etc/hosts``. - c-ares supports a limited set of flags for ``getnameinfo`` and ``getaddrinfo``; unknown flags are ignored. System-specific flags such as ``AI_V4MAPPED_CFG`` are not supported. - ``getaddrinfo`` may return canonical names even without the ``AI_CANONNAME`` being set. - ``getaddrinfo`` does not appear to support IPv6 symbolic scope IDs. .. caution:: This module is considered extremely experimental on PyPy, and due to its implementation in cython, it may be slower. It may also lead to interpreter crashes. .. versionchanged:: 1.5.0 This version of gevent typically embeds c-ares 1.15.0 or newer. 
In that version of c-ares, domains ending in ``.onion`` `are never resolved `_ or even sent to the DNS server. .. versionchanged:: 20.5.0 ``getaddrinfo`` is now implemented using the native c-ares function from c-ares 1.16 or newer. .. versionchanged:: 20.5.0 Now ``herror`` and ``gaierror`` are raised more consistently with the standard library resolver, and have more consistent errno values. Handling of localhost and broadcast names is now more consistent. .. versionchanged:: 22.10.1 Now has a ``__del__`` method that warns if the object is destroyed without being properly closed. .. _c-ares: http://c-ares.haxx.se """ cares_class = channel def __init__(self, hub=None, use_environ=True, **kwargs): AbstractResolver.__init__(self) if hub is None: hub = get_hub() self.hub = hub if use_environ: for setting in config.settings.values(): if isinstance(setting, AresSettingMixin): value = setting.get() if value is not None: kwargs.setdefault(setting.kwarg_name, value) self.cares = self.cares_class(hub.loop, **kwargs) self.pid = os.getpid() self.params = kwargs self.fork_watcher = hub.loop.fork(ref=False) # We shouldn't keep the loop alive self.fork_watcher.start(self._on_fork) def __repr__(self): return '' % (id(self), self.cares) def _on_fork(self): # NOTE: See comment in gevent.hub.reinit. pid = os.getpid() if pid != self.pid: self.hub.loop.run_callback(self.cares.destroy) self.cares = self.cares_class(self.hub.loop, **self.params) self.pid = pid def close(self): AbstractResolver.close(self) if self.cares is not None: self.hub.loop.run_callback(self.cares.destroy) self.cares = None self.fork_watcher.stop() def __del__(self): if self.cares is not None: warnings.warn("cares Resolver destroyed while not closed", ResourceWarning) self.close() def _gethostbyname_ex(self, hostname_bytes, family): while True: ares = self.cares try: waiter = Waiter(self.hub) ares.gethostbyname(waiter, hostname_bytes, family) result = waiter.get() if not result[-1]: raise herror(EAI_NONAME, self.EAI_NONAME_MSG) return result except herror as ex: if ares is self.cares: if ex.args[0] == 1: # Somewhere along the line, the internal # implementation of gethostbyname_ex changed to invoke # getaddrinfo() as a first pass, much like we do for ``getnameinfo()``; # this means it raises a different error for not-found hosts. raise gaierror(EAI_NONAME, self.EAI_NONAME_MSG) raise # "self.cares is not ares" means channel was destroyed (because we were forked) def _lookup_port(self, port, socktype): return lookup_port(port, socktype) def __getaddrinfo( self, host, port, family=0, socktype=0, proto=0, flags=0, fill_in_type_proto=True ): """ Returns a list ``(family, socktype, proto, canonname, sockaddr)`` :raises gaierror: If no results are found. """ # pylint:disable=too-many-locals,too-many-branches if isinstance(host, text_type): host = host.encode('idna') if isinstance(port, text_type): port = port.encode('ascii') elif isinstance(port, integer_types): if port == 0: port = None else: port = str(port).encode('ascii') waiter = Waiter(self.hub) self.cares.getaddrinfo( waiter, host, port, family, socktype, proto, flags, ) # Result is a list of: # (family, socktype, proto, canonname, sockaddr) # Where sockaddr depends on family; for INET it is # (address, port) # and INET6 is # (address, port, flow info, scope id) result = waiter.get() if not result: raise gaierror(EAI_NONAME, self.EAI_NONAME_MSG) if fill_in_type_proto: # c-ares 1.16 DOES NOT fill in socktype or proto in the results, # ever. 
It's at least supposed to do that if they were given as # hints, but it doesn't (https://github.com/c-ares/c-ares/issues/317) # Sigh. # The SOL_* constants are another (older?) name for IPPROTO_* if socktype: hard_type_proto = [ (socktype, SOL_TCP if socktype == SOCK_STREAM else SOL_UDP), ] elif proto: hard_type_proto = [ (SOCK_STREAM if proto == SOL_TCP else SOCK_DGRAM, proto), ] else: hard_type_proto = [ (SOCK_STREAM, SOL_TCP), (SOCK_DGRAM, SOL_UDP), ] # pylint:disable=not-an-iterable,unsubscriptable-object result = [ (rfamily, hard_type if not rtype else rtype, hard_proto if not rproto else rproto, rcanon, raddr) for rfamily, rtype, rproto, rcanon, raddr in result for hard_type, hard_proto in hard_type_proto ] return result def _getaddrinfo(self, host_bytes, port, family, socktype, proto, flags): while True: ares = self.cares try: return self.__getaddrinfo(host_bytes, port, family, socktype, proto, flags) except gaierror: if ares is self.cares: raise def __gethostbyaddr(self, ip_address): waiter = Waiter(self.hub) try: self.cares.gethostbyaddr(waiter, ip_address) return waiter.get() except InvalidIP: result = self._getaddrinfo(ip_address, None, family=AF_UNSPEC, socktype=SOCK_DGRAM, proto=0, flags=0) if not result: raise # pylint:disable=unsubscriptable-object _ip_address = result[0][-1][0] if isinstance(_ip_address, text_type): _ip_address = _ip_address.encode('ascii') if _ip_address == ip_address: raise waiter.clear() self.cares.gethostbyaddr(waiter, _ip_address) return waiter.get() def _gethostbyaddr(self, ip_address_bytes): while True: ares = self.cares try: return self.__gethostbyaddr(ip_address_bytes) except herror: if ares is self.cares: raise def __getnameinfo(self, hostname, port, sockaddr, flags): result = self.__getaddrinfo( hostname, port, family=AF_UNSPEC, socktype=SOCK_DGRAM, proto=0, flags=0, fill_in_type_proto=False) if len(result) != 1: raise error('sockaddr resolved to multiple addresses') family, _socktype, _proto, _name, address = result[0] if family == AF_INET: if len(sockaddr) != 2: raise error("IPv4 sockaddr must be 2 tuple") elif family == AF_INET6: address = address[:2] + sockaddr[2:] waiter = Waiter(self.hub) self.cares.getnameinfo(waiter, address, flags) node, service = waiter.get() if service is None: # ares docs: "If the query did not complete # successfully, or one of the values was not # requested, node or service will be NULL ". Python 2 # allows that for the service, but Python 3 raises # an error. This is tested by test_socket in py 3.4 err = gaierror(EAI_NONAME, self.EAI_NONAME_MSG) err.errno = EAI_NONAME raise err return node, service or '0' def _getnameinfo(self, address_bytes, port, sockaddr, flags): while True: ares = self.cares try: return self.__getnameinfo(address_bytes, port, sockaddr, flags) except gaierror: if ares is self.cares: raise # # Things that need proper error handling # gethostbyaddr = AbstractResolver.convert_gaierror_to_herror(AbstractResolver.gethostbyaddr) gevent-24.11.1/src/gevent/resolver/blocking.py000066400000000000000000000023001471441230600212270ustar00rootroot00000000000000# Copyright (c) 2018 gevent contributors. See LICENSE for details. import _socket __all__ = [ 'Resolver', ] class Resolver(object): """ A resolver that directly uses the system's resolver functions. .. caution:: This resolver is *not* cooperative. This resolver has the lowest overhead of any resolver and typically approaches the speed of the unmodified :mod:`socket` functions. 
However, it is not cooperative, so if name resolution blocks, the entire thread and all its greenlets will be blocked. This can be useful during debugging, or it may be a good choice if your operating system provides a good caching resolver (such as macOS's Directory Services) that is usually very fast and functionally non-blocking. .. versionchanged:: 1.3a2 This was previously undocumented and existed in :mod:`gevent.socket`. """ def __init__(self, hub=None): pass def close(self): pass for method in ( 'gethostbyname', 'gethostbyname_ex', 'getaddrinfo', 'gethostbyaddr', 'getnameinfo' ): locals()[method] = staticmethod(getattr(_socket, method)) gevent-24.11.1/src/gevent/resolver/cares.pyx000066400000000000000000000606041471441230600207370ustar00rootroot00000000000000# Copyright (c) 2011-2012 Denis Bilenko. See LICENSE for details. # Automatic pickling of cdef classes was added in 0.26. Unfortunately it # seems to be buggy (at least for the `result` class) and produces code that # can't compile ("local variable 'result' referenced before assignment"). # See https://github.com/cython/cython/issues/1786 # cython: auto_pickle=False,language_level=3str cimport libcares as cares import sys from cpython.version cimport PY_MAJOR_VERSION from cpython.tuple cimport PyTuple_Check from cpython.getargs cimport PyArg_ParseTuple from cpython.ref cimport Py_INCREF from cpython.ref cimport Py_DECREF from cpython.mem cimport PyMem_Malloc from cpython.mem cimport PyMem_Free from libc.string cimport memset from gevent._compat import MAC import _socket from _socket import gaierror from _socket import herror __all__ = ['channel'] cdef tuple string_types cdef type text_type if PY_MAJOR_VERSION >= 3: string_types = str, text_type = str else: string_types = __builtins__.basestring, text_type = __builtins__.unicode # These three constants used to be DEF, but the DEF construct # is deprecated in Cython. Using a cdef extern, the generated # C code refers to the symbol (DEF would have inlined the value). # That's great when we're strictly in a C context, but for passing to # Python, it means we do a runtime translation from the C int to the # Python int. That is avoided if we use a cdef constant. TIMEOUT # is the only one that interacts with Python, but not in a performance-sensitive # way, so runtime translation is fine to keep it consistent. 
cdef extern from *: """ #define TIMEOUT 1 #define EV_READ 1 #define EV_WRITE 2 """ int TIMEOUT int EV_READ int EV_WRITE cdef extern from *: """ #ifdef CARES_EMBED #include "ares_setup.h" #endif #ifdef HAVE_NETDB_H #include #endif #ifndef EAI_ADDRFAMILY #define EAI_ADDRFAMILY -1 #endif #ifndef EAI_BADHINTS #define EAI_BADHINTS -2 #endif #ifndef EAI_NODATA #define EAI_NODATA -3 #endif #ifndef EAI_OVERFLOW #define EAI_OVERFLOW -4 #endif #ifndef EAI_PROTOCOL #define EAI_PROTOCOL -5 #endif #ifndef EAI_SYSTEM #define EAI_SYSTEM #endif """ cdef extern from "ares.h": int AF_INET int AF_INET6 int INET6_ADDRSTRLEN struct hostent: char* h_name int h_addrtype char** h_aliases char** h_addr_list struct sockaddr_t "sockaddr": pass struct ares_channeldata: pass struct in_addr: unsigned int s_addr struct sockaddr_in: int sin_family int sin_port in_addr sin_addr struct in6_addr: char s6_addr[16] struct sockaddr_in6: int sin6_family int sin6_port unsigned int sin6_flowinfo in6_addr sin6_addr unsigned int sin6_scope_id unsigned int htons(unsigned int hostshort) unsigned int ntohs(unsigned int hostshort) unsigned int htonl(unsigned int hostlong) unsigned int ntohl(unsigned int hostlong) cdef int AI_NUMERICSERV = _socket.AI_NUMERICSERV cdef int AI_CANONNAME = _socket.AI_CANONNAME cdef int NI_NUMERICHOST = _socket.NI_NUMERICHOST cdef int NI_NUMERICSERV = _socket.NI_NUMERICSERV cdef int NI_NOFQDN = _socket.NI_NOFQDN cdef int NI_NAMEREQD = _socket.NI_NAMEREQD cdef int NI_DGRAM = _socket.NI_DGRAM cdef dict _ares_errors = dict([ (cares.ARES_SUCCESS, 'ARES_SUCCESS'), (cares.ARES_EADDRGETNETWORKPARAMS, 'ARES_EADDRGETNETWORKPARAMS'), (cares.ARES_EBADFAMILY, 'ARES_EBADFAMILY'), (cares.ARES_EBADFLAGS, 'ARES_EBADFLAGS'), (cares.ARES_EBADHINTS, 'ARES_EBADHINTS'), (cares.ARES_EBADNAME, 'ARES_EBADNAME'), (cares.ARES_EBADQUERY, 'ARES_EBADQUERY'), (cares.ARES_EBADRESP, 'ARES_EBADRESP'), (cares.ARES_EBADSTR, 'ARES_EBADSTR'), (cares.ARES_ECANCELLED, 'ARES_ECANCELLED'), (cares.ARES_ECONNREFUSED, 'ARES_ECONNREFUSED'), (cares.ARES_EDESTRUCTION, 'ARES_EDESTRUCTION'), (cares.ARES_EFILE, 'ARES_EFILE'), (cares.ARES_EFORMERR, 'ARES_EFORMERR'), (cares.ARES_ELOADIPHLPAPI, 'ARES_ELOADIPHLPAPI'), (cares.ARES_ENODATA, 'ARES_ENODATA'), (cares.ARES_ENOMEM, 'ARES_ENOMEM'), (cares.ARES_ENONAME, 'ARES_ENONAME'), (cares.ARES_ENOTFOUND, 'ARES_ENOTFOUND'), (cares.ARES_ENOTIMP, 'ARES_ENOTIMP'), (cares.ARES_ENOTINITIALIZED, 'ARES_ENOTINITIALIZED'), (cares.ARES_EOF, 'ARES_EOF'), (cares.ARES_EREFUSED, 'ARES_EREFUSED'), (cares.ARES_ESERVICE, 'ARES_ESERVICE'), (cares.ARES_ESERVFAIL, 'ARES_ESERVFAIL'), (cares.ARES_ETIMEOUT, 'ARES_ETIMEOUT'), ]) cdef dict _ares_to_gai_system = { cares.ARES_EBADFAMILY: cares.EAI_ADDRFAMILY, cares.ARES_EBADFLAGS: cares.EAI_BADFLAGS, cares.ARES_EBADHINTS: cares.EAI_BADHINTS, cares.ARES_ENOMEM: cares.EAI_MEMORY, cares.ARES_ENONAME: cares.EAI_NONAME, cares.ARES_ENOTFOUND: cares.EAI_NONAME, cares.ARES_ENOTIMP: cares.EAI_FAMILY, # While EAI_NODATA ("No address associated with nodename") might # seem to be the natural mapping, typical resolvers actually # return EAI_NONAME in that same situation; I've yet to find EAI_NODATA # in a test. cares.ARES_ENODATA: cares.EAI_NONAME, # This one gets raised for unknown port/service names. 
cares.ARES_ESERVICE: cares.EAI_NONAME if MAC else cares.EAI_SERVICE, } cdef _gevent_gai_strerror(code): cdef const char* err_str cdef object result = None cdef int system try: system = _ares_to_gai_system[code] except KeyError: err_str = cares.ares_strerror(code) result = '%s: %s' % (_ares_errors.get(code) or code, _as_str(err_str)) else: err_str = cares.gai_strerror(system) result = _as_str(err_str) return result cdef object _gevent_gaierror_from_status(int ares_status): cdef object code = _ares_to_gai_system.get(ares_status, ares_status) cdef object message = _gevent_gai_strerror(ares_status) return gaierror(code, message) cdef dict _ares_to_host_system = { cares.ARES_ENONAME: cares.HOST_NOT_FOUND, cares.ARES_ENOTFOUND: cares.HOST_NOT_FOUND, cares.ARES_ENODATA: cares.NO_DATA, } cdef _gevent_herror_strerror(code): cdef const char* err_str cdef object result = None cdef int system try: system = _ares_to_host_system[code] except KeyError: err_str = cares.ares_strerror(code) result = '%s: %s' % (_ares_errors.get(code) or code, _as_str(err_str)) else: err_str = cares.hstrerror(system) result = _as_str(err_str) return result cdef object _gevent_herror_from_status(int ares_status): cdef object code = _ares_to_host_system.get(ares_status, ares_status) cdef object message = _gevent_herror_strerror(ares_status) return herror(code, message) class InvalidIP(ValueError): pass cdef void gevent_sock_state_callback(void *data, int s, int read, int write): if not data: return cdef channel ch = data ch._sock_state_callback(s, read, write) cdef class Result(object): cdef public object value cdef public object exception def __init__(self, object value=None, object exception=None): self.value = value self.exception = exception def __repr__(self): if self.exception is None: return '%s(%r)' % (self.__class__.__name__, self.value) elif self.value is None: return '%s(exception=%r)' % (self.__class__.__name__, self.exception) else: return '%s(value=%r, exception=%r)' % (self.__class__.__name__, self.value, self.exception) # add repr_recursive precaution def successful(self): return self.exception is None def get(self): if self.exception is not None: raise self.exception return self.value class ares_host_result(tuple): def __new__(cls, family, iterable): cdef object self = tuple.__new__(cls, iterable) self.family = family return self def __getnewargs__(self): return (self.family, tuple(self)) cdef list _parse_h_aliases(hostent* host): cdef list result = [] cdef char** aliases = host.h_aliases if not aliases or not aliases[0]: return result while aliases[0]: # *aliases # The old C version of this excluded an alias if # it matched the host name. I don't think the stdlib does that? result.append(_as_str(aliases[0])) aliases += 1 return result cdef list _parse_h_addr_list(hostent* host): cdef list result = [] cdef char** addr_list = host.h_addr_list cdef int addr_type = host.h_addrtype # INET6_ADDRSTRLEN is 46, but we can't use that named constant # here; cython doesn't like it. 
cdef char tmpbuf[46] if not addr_list or not addr_list[0]: return result while addr_list[0]: if not cares.ares_inet_ntop(host.h_addrtype, addr_list[0], tmpbuf, INET6_ADDRSTRLEN): raise _socket.error("Failed in ares_inet_ntop") result.append(_as_str(tmpbuf)) addr_list += 1 return result cdef object _as_str(const char* val): if not val: return None if PY_MAJOR_VERSION < 3: return val return val.decode('utf-8') cdef void gevent_ares_nameinfo_callback(void *arg, int status, int timeouts, char *c_node, char *c_service): cdef channel channel cdef object callback channel, callback = arg Py_DECREF(arg) cdef object node cdef object service try: if status: callback(Result(None, _gevent_gaierror_from_status(status))) else: node = _as_str(c_node) service = _as_str(c_service) callback(Result((node, service))) except: channel.loop.handle_error(callback, *sys.exc_info()) cdef int _make_sockaddr(const char* hostp, int port, int flowinfo, int scope_id, sockaddr_in6* sa6): if cares.ares_inet_pton(AF_INET, hostp, &(sa6).sin_addr.s_addr) > 0: (sa6).sin_family = AF_INET (sa6).sin_port = htons(port) return sizeof(sockaddr_in) if cares.ares_inet_pton(AF_INET6, hostp, &(sa6.sin6_addr).s6_addr) > 0: sa6.sin6_family = AF_INET6 sa6.sin6_port = htons(port) sa6.sin6_flowinfo = flowinfo sa6.sin6_scope_id = scope_id return sizeof(sockaddr_in6); return -1; cdef class channel: cdef ares_channeldata* channel cdef readonly object loop cdef dict _watchers cdef object _timer def __init__(self, object loop, flags=None, timeout=None, tries=None, ndots=None, udp_port=None, tcp_port=None, servers=None): cdef ares_channeldata* channel = NULL cdef cares.ares_options options memset(&options, 0, sizeof(cares.ares_options)) cdef int optmask = cares.ARES_OPT_SOCK_STATE_CB options.sock_state_cb = gevent_sock_state_callback options.sock_state_cb_data = self if flags is not None: options.flags = int(flags) optmask |= cares.ARES_OPT_FLAGS if timeout is not None: options.timeout = int(float(timeout) * 1000) optmask |= cares.ARES_OPT_TIMEOUTMS if tries is not None: options.tries = int(tries) optmask |= cares.ARES_OPT_TRIES if ndots is not None: options.ndots = int(ndots) optmask |= cares.ARES_OPT_NDOTS if udp_port is not None: options.udp_port = int(udp_port) optmask |= cares.ARES_OPT_UDP_PORT if tcp_port is not None: options.tcp_port = int(tcp_port) optmask |= cares.ARES_OPT_TCP_PORT cdef int result = cares.ares_library_init(cares.ARES_LIB_INIT_ALL) # ARES_LIB_INIT_WIN32 -DUSE_WINSOCK? if result: raise gaierror(result, _gevent_gai_strerror(result)) result = cares.ares_init_options(&channel, &options, optmask) if result: raise gaierror(result, _gevent_gai_strerror(result)) self._timer = loop.timer(TIMEOUT, TIMEOUT) self._watchers = {} self.channel = channel try: if servers is not None: self.set_servers(servers) self.loop = loop except: self.destroy() raise def __repr__(self): args = (self.__class__.__name__, id(self), self._timer, len(self._watchers)) return '<%s at 0x%x _timer=%r _watchers[%s]>' % args def destroy(self): self.__destroy() cdef __destroy(self): if self.channel: # XXX ares_library_cleanup? 
cares.ares_destroy(self.channel) self.channel = NULL self._watchers.clear() self._timer.stop() self.loop = None def __dealloc__(self): self.__destroy() cpdef set_servers(self, servers=None): if not self.channel: raise gaierror(cares.ARES_EDESTRUCTION, 'this ares channel has been destroyed') if not servers: servers = [] if isinstance(servers, string_types): servers = servers.split(',') cdef int length = len(servers) cdef int result, index cdef char* string cdef cares.ares_addr_node* c_servers if length <= 0: result = cares.ares_set_servers(self.channel, NULL) else: c_servers = PyMem_Malloc(sizeof(cares.ares_addr_node) * length) if not c_servers: raise MemoryError try: index = 0 for server in servers: if isinstance(server, unicode): server = server.encode('ascii') string = server if cares.ares_inet_pton(AF_INET, string, &c_servers[index].addr) > 0: c_servers[index].family = AF_INET elif cares.ares_inet_pton(AF_INET6, string, &c_servers[index].addr) > 0: c_servers[index].family = AF_INET6 else: raise InvalidIP(repr(string)) c_servers[index].next = &c_servers[index] + 1 index += 1 if index >= length: break c_servers[length - 1].next = NULL index = cares.ares_set_servers(self.channel, c_servers) if index: raise ValueError(_gevent_gai_strerror(index)) finally: PyMem_Free(c_servers) # this crashes c-ares #def cancel(self): # cares.ares_cancel(self.channel) cdef _sock_state_callback(self, int socket, int read, int write): if not self.channel: return cdef object watcher = self._watchers.get(socket) cdef int events = 0 if read: events |= EV_READ if write: events |= EV_WRITE if watcher is None: if not events: return watcher = self.loop.io(socket, events) self._watchers[socket] = watcher elif events: if watcher.events == events: return watcher.stop() watcher.events = events else: watcher.stop() watcher.close() self._watchers.pop(socket, None) if not self._watchers: self._timer.stop() return watcher.start(self._process_fd, watcher, pass_events=True) self._timer.again(self._on_timer) def _on_timer(self): cares.ares_process_fd(self.channel, cares.ARES_SOCKET_BAD, cares.ARES_SOCKET_BAD) def _process_fd(self, int events, object watcher): if not self.channel: return cdef int read_fd = watcher.fd cdef int write_fd = read_fd if not (events & EV_READ): read_fd = cares.ARES_SOCKET_BAD if not (events & EV_WRITE): write_fd = cares.ARES_SOCKET_BAD cares.ares_process_fd(self.channel, read_fd, write_fd) @staticmethod cdef void _gethostbyname_or_byaddr_cb(void *arg, int status, int timeouts, hostent* host): cdef channel channel cdef object callback channel, callback = arg Py_DECREF(arg) cdef object host_result try: if status or not host: callback(Result(None, _gevent_herror_from_status(status))) else: try: host_result = ares_host_result(host.h_addrtype, (_as_str(host.h_name), _parse_h_aliases(host), _parse_h_addr_list(host))) except: callback(Result(None, sys.exc_info()[1])) else: callback(Result(host_result)) except: channel.loop.handle_error(callback, *sys.exc_info()) def gethostbyname(self, object callback, char* name, int family=AF_INET): if not self.channel: raise gaierror(cares.ARES_EDESTRUCTION, 'this ares channel has been destroyed') # note that for file lookups still AF_INET can be returned for AF_INET6 request cdef object arg = (self, callback) Py_INCREF(arg) cares.ares_gethostbyname(self.channel, name, family, channel._gethostbyname_or_byaddr_cb, arg) def gethostbyaddr(self, object callback, char* addr): if not self.channel: raise gaierror(cares.ARES_EDESTRUCTION, 'this ares channel has been destroyed') # 
will guess the family cdef char addr_packed[16] cdef int family cdef int length if cares.ares_inet_pton(AF_INET, addr, addr_packed) > 0: family = AF_INET length = 4 elif cares.ares_inet_pton(AF_INET6, addr, addr_packed) > 0: family = AF_INET6 length = 16 else: raise InvalidIP(repr(addr)) cdef object arg = (self, callback) Py_INCREF(arg) cares.ares_gethostbyaddr(self.channel, addr_packed, length, family, channel._gethostbyname_or_byaddr_cb, arg) cpdef _getnameinfo(self, object callback, tuple sockaddr, int flags): if not self.channel: raise gaierror(cares.ARES_EDESTRUCTION, 'this ares channel has been destroyed') cdef char* hostp = NULL cdef int port = 0 cdef int flowinfo = 0 cdef int scope_id = 0 cdef sockaddr_in6 sa6 if not PyTuple_Check(sockaddr): raise TypeError('expected a tuple, got %r' % (sockaddr, )) PyArg_ParseTuple(sockaddr, "si|ii", &hostp, &port, &flowinfo, &scope_id) # if port < 0 or port > 65535: # raise gaierror(-8, 'Invalid value for port: %r' % port) cdef int length = _make_sockaddr(hostp, port, flowinfo, scope_id, &sa6) if length <= 0: raise InvalidIP(repr(hostp)) cdef object arg = (self, callback) Py_INCREF(arg) cdef sockaddr_t* x = &sa6 cares.ares_getnameinfo(self.channel, x, length, flags, gevent_ares_nameinfo_callback, arg) @staticmethod cdef int _convert_cares_ni_flags(int flags): cdef int cares_flags = cares.ARES_NI_LOOKUPHOST | cares.ARES_NI_LOOKUPSERVICE if flags & NI_NUMERICHOST: cares_flags |= cares.ARES_NI_NUMERICHOST if flags & NI_NUMERICSERV: cares_flags |= cares.ARES_NI_NUMERICSERV if flags & NI_NOFQDN: cares_flags |= cares.ARES_NI_NOFQDN if flags & NI_NAMEREQD: cares_flags |= cares.ARES_NI_NAMEREQD if flags & NI_DGRAM: cares_flags |= cares.ARES_NI_DGRAM return cares_flags def getnameinfo(self, object callback, tuple sockaddr, int flags): flags = channel._convert_cares_ni_flags(flags) return self._getnameinfo(callback, sockaddr, flags) @staticmethod cdef int _convert_cares_ai_flags(int flags): # c-ares supports a limited set of flags. # We always want NOSORT, because that implies that # c-ares will not connect to resolved addresses. cdef int cares_flags = cares.ARES_AI_NOSORT if flags & AI_CANONNAME: cares_flags |= cares.ARES_AI_CANONNAME if flags & AI_NUMERICSERV: cares_flags |= cares.ARES_AI_NUMERICSERV return cares_flags @staticmethod cdef void _getaddrinfo_cb(void *arg, int status, int timeouts, cares.ares_addrinfo* result): cdef cares.ares_addrinfo_node* nodes cdef cares.ares_addrinfo_cname* cnames cdef sockaddr_in* sadr4 cdef sockaddr_in6* sadr6 cdef object canonname = '' cdef channel channel cdef object callback # INET6_ADDRSTRLEN is 46, but we can't use that named constant # here; cython doesn't like it. cdef char tmpbuf[46] channel, callback = arg Py_DECREF(arg) # Result is a list of: # (family, socktype, proto, canonname, sockaddr) # Where sockaddr depends on family; for INET it is # (address, port) # and INET6 is # (address, port, flow info, scope id) # TODO: Check the canonnames. addrs = [] try: if status != cares.ARES_SUCCESS: callback(Result(None, _gevent_gaierror_from_status(status))) return if result.cnames: # These tend to come in pairs: # # alias: www.gevent.org name: python-gevent.readthedocs.org # alias: python-gevent.readthedocs.org name: readthedocs.io # # The standard library returns the last name so we do too. 
cnames = result.cnames while cnames: canonname = _as_str(cnames.name) cnames = cnames.next nodes = result.nodes while nodes: if nodes.ai_family == AF_INET: sadr4 = nodes.ai_addr cares.ares_inet_ntop(nodes.ai_family, &sadr4.sin_addr, tmpbuf, INET6_ADDRSTRLEN) sockaddr = ( _as_str(tmpbuf), ntohs(sadr4.sin_port), ) elif nodes.ai_family == AF_INET6: sadr6 = nodes.ai_addr cares.ares_inet_ntop(nodes.ai_family, &sadr6.sin6_addr, tmpbuf, INET6_ADDRSTRLEN) sockaddr = ( _as_str(tmpbuf), ntohs(sadr6.sin6_port), sadr6.sin6_flowinfo, sadr6.sin6_scope_id, ) addrs.append(( nodes.ai_family, nodes.ai_socktype, nodes.ai_protocol, canonname, sockaddr, )) nodes = nodes.ai_next callback(Result(addrs, None)) except: channel.loop.handle_error(callback, *sys.exc_info()) finally: if result: cares.ares_freeaddrinfo(result) def getaddrinfo(self, object callback, const char* name, object service, # AKA port int family=0, int type=0, int proto=0, int flags=0): if not self.channel: raise gaierror(cares.ARES_EDESTRUCTION, 'this ares channel has been destroyed') cdef cares.ares_addrinfo_hints hints memset(&hints, 0, sizeof(cares.ares_addrinfo_hints)) hints.ai_flags = channel._convert_cares_ai_flags(flags) hints.ai_family = family hints.ai_socktype = type hints.ai_protocol = proto cdef object arg = (self, callback) Py_INCREF(arg) cares.ares_getaddrinfo( self.channel, name, NULL if service is None else service, &hints, channel._getaddrinfo_cb, arg ) gevent-24.11.1/src/gevent/resolver/dnspython.py000066400000000000000000000503421471441230600214760ustar00rootroot00000000000000# Copyright (c) 2018 gevent contributors. See LICENSE for details. # Portions of this code taken from the gogreen project: # http://github.com/slideinc/gogreen # # Copyright (c) 2005-2010 Slide, Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following # disclaimer in the documentation and/or other materials provided # with the distribution. # * Neither the name of the author nor the names of other # contributors may be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
# Portions of this code taken from the eventlet project: # https://github.com/eventlet/eventlet/blob/master/eventlet/support/greendns.py # Unless otherwise noted, the files in Eventlet are under the following MIT license: # Copyright (c) 2005-2006, Bob Ippolito # Copyright (c) 2007-2010, Linden Research, Inc. # Copyright (c) 2008-2010, Eventlet Contributors (see AUTHORS) # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import absolute_import, print_function, division import sys import time from _socket import error from _socket import gaierror from _socket import herror from _socket import NI_NUMERICSERV from _socket import AF_INET from _socket import AF_INET6 from _socket import AF_UNSPEC from _socket import EAI_NONAME from _socket import EAI_FAMILY import socket from gevent.resolver import AbstractResolver from gevent.resolver._hostsfile import HostsFile from gevent.builtins import __import__ as g_import from gevent._compat import string_types from gevent._compat import iteritems from gevent._config import config __all__ = [ 'Resolver', ] # Import the DNS packages to use the gevent modules, # even if the system is not monkey-patched. If it *is* already # patched, this imports a second copy under a different name, # which is probably not strictly necessary, but matches # what we've historically done, and allows configuring the resolvers # differently. def _patch_dns(): from gevent._patcher import import_patched as importer # The dns package itself is empty but defines __all__ # we make sure to import all of those things now under the # patch. Note this triggers two DeprecationWarnings, # one of which we could avoid. extras = { 'dns': ('rdata', 'resolver', 'rdtypes'), 'dns.rdtypes': ('IN', 'ANY', ), 'dns.rdtypes.IN': ('A', 'AAAA',), 'dns.rdtypes.ANY': ('SOA', 'PTR'), } def extra_all(mod_name): return extras.get(mod_name, ()) def after_import_hook(dns): # pylint:disable=redefined-outer-name # Runs while still in the original patching scope. # The dns.rdata:get_rdata_class() function tries to # dynamically import modules using __import__ and then walk # through the attribute tree to find classes in `dns.rdtypes`. # It is critical that this all matches up, otherwise we can # get different exception classes that don't get caught. # We could patch __import__ to do things at runtime, but it's # easier to enumerate the world and populate the cache now # before we then switch the names back. 
rdata = dns.rdata get_rdata_class = rdata.get_rdata_class try: rdclass_values = list(dns.rdataclass.RdataClass) except AttributeError: # dnspython < 2.0 rdclass_values = dns.rdataclass._by_value try: rdtype_values = list(dns.rdatatype.RdataType) except AttributeError: # dnspython < 2.0 rdtype_values = dns.rdatatype._by_value for rdclass in rdclass_values: for rdtype in rdtype_values: get_rdata_class(rdclass, rdtype) patcher = importer('dns', extra_all, after_import_hook) top = patcher.module # Now disable the dynamic imports def _no_dynamic_imports(name): raise ValueError(name) top.rdata.__import__ = _no_dynamic_imports return top dns = _patch_dns() resolver = dns.resolver dTimeout = dns.resolver.Timeout # This is a wrapper for dns.resolver._getaddrinfo with two crucial changes. # First, it backports https://github.com/rthalley/dnspython/issues/316 # from version 2.0. This can be dropped when we support only dnspython 2 # (which means only Python 3.) # Second, it adds calls to sys.exc_clear() to avoid failing tests in # test__refcount.py (timeouts) on Python 2. (Actually, this isn't # strictly necessary, it was necessary to increase the timeouts in # that function because dnspython is doing some parsing/regex/host # lookups that are not super fast. But it does have a habit of leaving # exceptions around which can complicate our memleak checks.) def _getaddrinfo(host=None, service=None, family=AF_UNSPEC, socktype=0, proto=0, flags=0, _orig_gai=resolver._getaddrinfo, _exc_clear=getattr(sys, 'exc_clear', lambda: None)): if flags & (socket.AI_ADDRCONFIG | socket.AI_V4MAPPED) != 0: # Not implemented. We raise a gaierror as opposed to a # NotImplementedError as it helps callers handle errors more # appropriately. [Issue #316] raise socket.gaierror(socket.EAI_SYSTEM) res = _orig_gai(host, service, family, socktype, proto, flags) _exc_clear() return res resolver._getaddrinfo = _getaddrinfo HOSTS_TTL = 300.0 class _HostsAnswer(dns.resolver.Answer): # Answer class for HostsResolver object def __init__(self, qname, rdtype, rdclass, rrset, raise_on_no_answer=True): self.response = None self.qname = qname self.rdtype = rdtype self.rdclass = rdclass self.canonical_name = qname if not rrset and raise_on_no_answer: raise dns.resolver.NoAnswer() self.rrset = rrset self.expiration = (time.time() + rrset.ttl if hasattr(rrset, 'ttl') else 0) class _HostsResolver(object): """ Class to parse the hosts file """ def __init__(self, fname=None, interval=HOSTS_TTL): self.hosts_file = HostsFile(fname) self.interval = interval self._last_load = 0 def query(self, qname, rdtype=dns.rdatatype.A, rdclass=dns.rdataclass.IN, tcp=False, source=None, raise_on_no_answer=True): # pylint:disable=unused-argument # Query the hosts file # # The known rdtypes are dns.rdatatype.A, dns.rdatatype.AAAA and # dns.rdatatype.CNAME. # The ``rdclass`` parameter must be dns.rdataclass.IN while the # ``tcp`` and ``source`` parameters are ignored. # Return a HostAnswer instance or raise a dns.resolver.NoAnswer # exception. 
now = time.time() hosts_file = self.hosts_file if self._last_load + self.interval < now: self._last_load = now hosts_file.load() rdclass = dns.rdataclass.IN # Always if isinstance(qname, string_types): name = qname qname = dns.name.from_text(qname) else: name = str(qname) name = name.lower() rrset = dns.rrset.RRset(qname, rdclass, rdtype) rrset.ttl = self._last_load + self.interval - now mapping = None kind = None if rdtype == dns.rdatatype.A: mapping = hosts_file.v4 kind = dns.rdtypes.IN.A.A elif rdtype == dns.rdatatype.AAAA: mapping = hosts_file.v6 kind = dns.rdtypes.IN.AAAA.AAAA elif rdtype == dns.rdatatype.CNAME: mapping = hosts_file.aliases kind = lambda c, t, addr: dns.rdtypes.ANY.CNAME.CNAME(c, t, dns.name.from_text(addr)) elif rdtype == dns.rdatatype.PTR: mapping = hosts_file.reverse kind = lambda c, t, addr: dns.rdtypes.ANY.PTR.PTR(c, t, dns.name.from_text(addr)) addr = mapping.get(name) if not addr and qname.is_absolute(): addr = mapping.get(name[:-1]) if addr: rrset.add(kind(rdclass, rdtype, addr)) return _HostsAnswer(qname, rdtype, rdclass, rrset, raise_on_no_answer) def getaliases(self, hostname): # Return a list of all the aliases of a given cname # Due to the way store aliases this is a bit inefficient, this # clearly was an afterthought. But this is only used by # gethostbyname_ex so it's probably fine. aliases = self.hosts_file.aliases result = [] if hostname in aliases: # pylint:disable=consider-using-get cannon = aliases[hostname] else: cannon = hostname result.append(cannon) for alias, cname in iteritems(aliases): if cannon == cname: result.append(alias) result.remove(hostname) return result class _DualResolver(object): def __init__(self): self.hosts_resolver = _HostsResolver() self.network_resolver = resolver.get_default_resolver() self.network_resolver.cache = resolver.LRUCache() def query(self, qname, rdtype=dns.rdatatype.A, rdclass=dns.rdataclass.IN, tcp=False, source=None, raise_on_no_answer=True, _hosts_rdtypes=(dns.rdatatype.A, dns.rdatatype.AAAA, dns.rdatatype.PTR)): # Query the resolver, using /etc/hosts # Behavior: # 1. if hosts is enabled and contains answer, return it now # 2. query nameservers for qname if qname is None: qname = '0.0.0.0' if not isinstance(qname, string_types): if isinstance(qname, bytes): qname = qname.decode("idna") if isinstance(qname, string_types): qname = dns.name.from_text(qname, None) if isinstance(rdtype, string_types): rdtype = dns.rdatatype.from_text(rdtype) if rdclass == dns.rdataclass.IN and rdtype in _hosts_rdtypes: try: answer = self.hosts_resolver.query(qname, rdtype, raise_on_no_answer=False) except Exception: # pylint: disable=broad-except from gevent import get_hub get_hub().handle_error(self, *sys.exc_info()) else: if answer.rrset: return answer return self.network_resolver.query(qname, rdtype, rdclass, tcp, source, raise_on_no_answer=raise_on_no_answer) def _family_to_rdtype(family): if family == socket.AF_INET: rdtype = dns.rdatatype.A elif family == socket.AF_INET6: rdtype = dns.rdatatype.AAAA else: raise socket.gaierror(socket.EAI_FAMILY, 'Address family not supported') return rdtype class Resolver(AbstractResolver): """ An *experimental* resolver that uses `dnspython`_. This is typically slower than the default threaded resolver (unless there's a cache hit, in which case it can be much faster). It is usually much faster than the c-ares resolver. It tends to scale well as more concurrent resolutions are attempted. 
Under Python 2, if the ``idna`` package is installed, this resolver can resolve Unicode host names that the system resolver cannot. .. note:: This **does not** use dnspython's default resolver object, or share any classes with ``import dns``. A separate copy of the objects is imported to be able to function in a non monkey-patched process. The documentation for the resolver object still applies. The resolver that we use is available as the :attr:`resolver` attribute of this object (typically ``gevent.get_hub().resolver.resolver``). .. caution:: Many of the same caveats about DNS results apply here as are documented for :class:`gevent.resolver.ares.Resolver`. In addition, the handling of symbolic scope IDs in IPv6 addresses passed to ``getaddrinfo`` exhibits some differences. On PyPy, ``getnameinfo`` can produce results when CPython raises ``socket.error``, and gevent's DNSPython resolver also raises ``socket.error``. .. caution:: This resolver is experimental. It may be removed or modified in the future. As always, feedback is welcome. .. versionadded:: 1.3a2 .. versionchanged:: 20.5.0 The errors raised are now much more consistent with those raised by the standard library resolvers. Handling of localhost and broadcast names is now more consistent. .. _dnspython: http://www.dnspython.org """ def __init__(self, hub=None): # pylint: disable=unused-argument if resolver._resolver is None: _resolver = resolver._resolver = _DualResolver() if config.resolver_nameservers: _resolver.network_resolver.nameservers[:] = config.resolver_nameservers if config.resolver_timeout: _resolver.network_resolver.lifetime = config.resolver_timeout # Different hubs in different threads could be sharing the same # resolver. assert isinstance(resolver._resolver, _DualResolver) self._resolver = resolver._resolver @property def resolver(self): """ The dnspython resolver object we use. This object has several useful attributes that can be used to adjust the behaviour of the DNS system: * ``cache`` is a :class:`dns.resolver.LRUCache`. Its maximum size can be configured by calling :meth:`resolver.cache.set_max_size` * ``nameservers`` controls which nameservers to talk to * ``lifetime`` configures a timeout for each individual query. """ return self._resolver.network_resolver def close(self): pass def _getaliases(self, hostname, family): if not isinstance(hostname, str): if isinstance(hostname, bytes): hostname = hostname.decode("idna") aliases = self._resolver.hosts_resolver.getaliases(hostname) net_resolver = self._resolver.network_resolver rdtype = _family_to_rdtype(family) while 1: try: ans = net_resolver.query(hostname, dns.rdatatype.CNAME, rdtype) except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN, dns.resolver.NoNameservers): break except dTimeout: break except AttributeError as ex: if hostname is None or isinstance(hostname, int): raise TypeError(ex) raise else: aliases.extend(str(rr.target) for rr in ans.rrset) hostname = ans[0].target return aliases def _getaddrinfo(self, host_bytes, port, family, socktype, proto, flags): # dnspython really wants the host to be in native format. if not isinstance(host_bytes, str): host_bytes = host_bytes.decode(self.HOSTNAME_ENCODING) if host_bytes == 'ff02::1de:c0:face:8D': # This is essentially a hack to make stdlib # test_socket:GeneralModuleTests.test_getaddrinfo_ipv6_basic # pass. They expect to get back a lowercase ``D``, but # dnspython does not do that. 
# ``test_getaddrinfo_ipv6_scopeid_symbolic`` also expect # the scopeid to be dropped, but again, dnspython does not # do that; we cant fix that here so we skip that test. host_bytes = 'ff02::1de:c0:face:8d' if family == AF_UNSPEC: # This tends to raise in the case that a v6 address did not exist # but a v4 does. So we break it into two parts. # Note that if there is no ipv6 in the hosts file, but there *is* # an ipv4, and there *is* an ipv6 in the nameservers, we will return # both (from the first call). The system resolver on OS X only returns # the results from the hosts file. doubleclick.com is one example. # See also https://github.com/gevent/gevent/issues/1012 try: return _getaddrinfo(host_bytes, port, family, socktype, proto, flags) except gaierror: try: return _getaddrinfo(host_bytes, port, AF_INET6, socktype, proto, flags) except gaierror: return _getaddrinfo(host_bytes, port, AF_INET, socktype, proto, flags) else: try: return _getaddrinfo(host_bytes, port, family, socktype, proto, flags) except gaierror as ex: if ex.args[0] == EAI_NONAME and family not in self._KNOWN_ADDR_FAMILIES: # It's possible that we got sent an unsupported family. Check # that. ex.args = (EAI_FAMILY, self.EAI_FAMILY_MSG) ex.errno = EAI_FAMILY raise def _getnameinfo(self, address_bytes, port, sockaddr, flags): try: return resolver._getnameinfo(sockaddr, flags) except error: if not flags: # dnspython doesn't like getting ports it can't resolve. # We have one test, test__socket_dns.py:Test_getnameinfo_geventorg.test_port_zero # that does this. We conservatively fix it here; this could be expanded later. return resolver._getnameinfo(sockaddr, NI_NUMERICSERV) def _gethostbyaddr(self, ip_address_bytes): try: return resolver._gethostbyaddr(ip_address_bytes) except gaierror as ex: if ex.args[0] == EAI_NONAME: # Note: The system doesn't *always* raise herror; # sometimes the original gaierror propagates through. # It's impossible to say ahead of time or just based # on the name which it should be. The herror seems to # be by far the most common, though. raise herror(1, "Unknown host") raise # Things that need proper error handling getnameinfo = AbstractResolver.fixup_gaierror(AbstractResolver.getnameinfo) gethostbyaddr = AbstractResolver.fixup_gaierror(AbstractResolver.gethostbyaddr) gethostbyname_ex = AbstractResolver.fixup_gaierror(AbstractResolver.gethostbyname_ex) getaddrinfo = AbstractResolver.fixup_gaierror(AbstractResolver.getaddrinfo) gevent-24.11.1/src/gevent/resolver/libcares.pxd000066400000000000000000000124151471441230600213760ustar00rootroot00000000000000cdef extern from "ares.h": # These two are defined in and , respectively, # on POSIX. On Windows, they are in . "ares.h" winds up # indirectly including both of those. struct sockaddr: pass struct hostent: pass # Errors from getaddrinfo int EAI_ADDRFAMILY # The specified network host does not have # any network addresses in the requested address family (Linux) int EAI_AGAIN # temporary failure in name resolution int EAI_BADFLAGS # invalid value for ai_flags int EAI_BADHINTS # invalid value for hints (macOS only) int EAI_FAIL # non-recoverable failure in name resolution int EAI_FAMILY # ai_family not supported int EAI_MEMORY # memory allocation failure int EAI_NODATA # The specified network host exists, but does not have # any network addresses defined. 
(Linux) int EAI_NONAME # hostname or servname not provided, or not known int EAI_OVERFLOW # argument buffer overflow (macOS only) int EAI_PROTOCOL # resolved protocol is unknown (macOS only) int EAI_SERVICE # servname not supported for ai_socktype int EAI_SOCKTYPE # ai_socktype not supported int EAI_SYSTEM # system error returned in errno (macOS and Linux) char* gai_strerror(int ecode) # Errors from gethostbyname and gethostbyaddr int HOST_NOT_FOUND int TRY_AGAIN int NO_RECOVERY int NO_DATA char* hstrerror(int err) struct ares_options: int flags void* sock_state_cb void* sock_state_cb_data int timeout int tries int ndots unsigned short udp_port unsigned short tcp_port char **domains int ndomains char* lookups int ARES_OPT_FLAGS int ARES_OPT_SOCK_STATE_CB int ARES_OPT_TIMEOUTMS int ARES_OPT_TRIES int ARES_OPT_NDOTS int ARES_OPT_TCP_PORT int ARES_OPT_UDP_PORT int ARES_OPT_SERVERS int ARES_OPT_DOMAINS int ARES_OPT_LOOKUPS int ARES_FLAG_USEVC int ARES_FLAG_PRIMARY int ARES_FLAG_IGNTC int ARES_FLAG_NORECURSE int ARES_FLAG_STAYOPEN int ARES_FLAG_NOSEARCH int ARES_FLAG_NOALIASES int ARES_FLAG_NOCHECKRESP int ARES_LIB_INIT_ALL int ARES_SOCKET_BAD int ARES_SUCCESS int ARES_EADDRGETNETWORKPARAMS int ARES_EBADFAMILY int ARES_EBADFLAGS int ARES_EBADHINTS int ARES_EBADNAME int ARES_EBADQUERY int ARES_EBADRESP int ARES_EBADSTR int ARES_ECANCELLED int ARES_ECONNREFUSED int ARES_EDESTRUCTION int ARES_EFILE int ARES_EFORMERR int ARES_ELOADIPHLPAPI int ARES_ENODATA int ARES_ENOMEM int ARES_ENONAME int ARES_ENOTFOUND int ARES_ENOTIMP int ARES_ENOTINITIALIZED int ARES_EOF int ARES_EREFUSED int ARES_ESERVFAIL int ARES_ESERVICE int ARES_ETIMEOUT int ARES_NI_NOFQDN int ARES_NI_NUMERICHOST int ARES_NI_NAMEREQD int ARES_NI_NUMERICSERV int ARES_NI_DGRAM int ARES_NI_TCP int ARES_NI_UDP int ARES_NI_SCTP int ARES_NI_DCCP int ARES_NI_NUMERICSCOPE int ARES_NI_LOOKUPHOST int ARES_NI_LOOKUPSERVICE ctypedef int ares_socklen_t int ares_library_init(int flags) void ares_library_cleanup() int ares_init_options(void *channelptr, ares_options *options, int) int ares_init(void *channelptr) void ares_destroy(void *channelptr) void ares_gethostbyname(void* channel, char *name, int family, void* callback, void *arg) void ares_gethostbyaddr(void* channel, void *addr, int addrlen, int family, void* callback, void *arg) void ares_process_fd(void* channel, int read_fd, int write_fd) char* ares_strerror(int code) void ares_cancel(void* channel) void ares_getnameinfo(void* channel, void* sa, int salen, int flags, void* callback, void *arg) # Added in 1.10 int ares_inet_pton(int af, const char *src, void *dst) const char* ares_inet_ntop(int af, const void *src, char *dst, ares_socklen_t size); struct in_addr: pass struct ares_in6_addr: pass struct addr_union: in_addr addr4 ares_in6_addr addr6 struct ares_addr_node: ares_addr_node *next int family addr_union addr int ares_set_servers(void* channel, ares_addr_node *servers) # Added in 1.16 int ARES_AI_NOSORT int ARES_AI_ENVHOSTS int ARES_AI_CANONNAME int ARES_AI_NUMERICSERV struct ares_addrinfo_hints: int ai_flags int ai_family int ai_socktype int ai_protocol struct ares_addrinfo_node: int ai_ttl int ai_flags int ai_family int ai_socktype int ai_protocol ares_socklen_t ai_addrlen sockaddr *ai_addr ares_addrinfo_node *ai_next struct ares_addrinfo_cname: int ttl char *alias char *name ares_addrinfo_cname *next struct ares_addrinfo: ares_addrinfo_cname *cnames ares_addrinfo_node *nodes void ares_getaddrinfo( void* channel, const char *name, const char* service, const ares_addrinfo_hints 
*hints, #ares_addrinfo_callback callback, void* callback, void *arg) void ares_freeaddrinfo(ares_addrinfo *ai) gevent-24.11.1/src/gevent/resolver/thread.py000066400000000000000000000046671471441230600207300ustar00rootroot00000000000000# Copyright (c) 2012-2015 Denis Bilenko. See LICENSE for details. """ Native thread-based hostname resolver. """ import _socket from gevent.hub import get_hub __all__ = ['Resolver'] class Resolver(object): """ Implementation of the resolver API using native threads and native resolution functions. Using the native resolution mechanisms ensures the highest compatibility with what a non-gevent program would return including good support for platform specific configuration mechanisms. The use of native (non-greenlet) threads ensures that a caller doesn't block other greenlets. This implementation also has the benefit of being very simple in comparison to :class:`gevent.resolver_ares.Resolver`. .. tip:: Most users find this resolver to be quite reliable in a properly monkey-patched environment. However, there have been some reports of long delays, slow performance or even hangs, particularly in long-lived programs that make many, many DNS requests. If you suspect that may be happening to you, try the dnspython or ares resolver (and submit a bug report). """ def __init__(self, hub=None): if hub is None: hub = get_hub() self.pool = hub.threadpool if _socket.gaierror not in hub.NOT_ERROR: # Do not cause lookup failures to get printed by the default # error handler. This can be very noisy. hub.NOT_ERROR += (_socket.gaierror, _socket.herror) def __repr__(self): return '<%s.%s at 0x%x pool=%r>' % (type(self).__module__, type(self).__name__, id(self), self.pool) def close(self): pass # from briefly reading socketmodule.c, it seems that all of the functions # below are thread-safe in Python, even if they are not thread-safe in C. def gethostbyname(self, *args): return self.pool.apply(_socket.gethostbyname, args) def gethostbyname_ex(self, *args): return self.pool.apply(_socket.gethostbyname_ex, args) def getaddrinfo(self, *args, **kwargs): return self.pool.apply(_socket.getaddrinfo, args, kwargs) def gethostbyaddr(self, *args, **kwargs): return self.pool.apply(_socket.gethostbyaddr, args, kwargs) def getnameinfo(self, *args, **kwargs): return self.pool.apply(_socket.getnameinfo, args, kwargs) gevent-24.11.1/src/gevent/resolver_ares.py000066400000000000000000000007461471441230600204650ustar00rootroot00000000000000"""Backwards compatibility alias for :mod:`gevent.resolver.ares`. .. deprecated:: 1.3 Use :mod:`gevent.resolver.ares` """ import warnings warnings.warn( "gevent.resolver_ares is deprecated and will be removed in 1.5. " "Use gevent.resolver.ares instead.", DeprecationWarning, stacklevel=2 ) del warnings from gevent.resolver.ares import * # pylint:disable=wildcard-import,unused-wildcard-import import gevent.resolver.ares as _ares __all__ = _ares.__all__ del _ares gevent-24.11.1/src/gevent/resolver_thread.py000066400000000000000000000007701471441230600207770ustar00rootroot00000000000000"""Backwards compatibility alias for :mod:`gevent.resolver.thread`. .. deprecated:: 1.3 Use :mod:`gevent.resolver.thread` """ import warnings warnings.warn( "gevent.resolver_thread is deprecated and will be removed in 1.5. 
" "Use gevent.resolver.thread instead.", DeprecationWarning, stacklevel=2 ) del warnings from gevent.resolver.thread import * # pylint:disable=wildcard-import,unused-wildcard-import import gevent.resolver.thread as _thread __all__ = _thread.__all__ del _thread gevent-24.11.1/src/gevent/select.py000066400000000000000000000317111471441230600170650ustar00rootroot00000000000000# Copyright (c) 2009-2011 Denis Bilenko. See LICENSE for details. """ Waiting for I/O completion. """ from __future__ import absolute_import, division, print_function import sys import select as __select__ from gevent.event import Event from gevent.hub import _get_hub_noargs as get_hub from gevent.hub import sleep as _g_sleep from gevent._compat import integer_types from gevent._compat import iteritems from gevent._util import copy_globals from gevent._util import _NONE from errno import EINTR _real_original_select = __select__.select if sys.platform.startswith('win32'): def _original_select(r, w, x, t): # windows can't handle three empty lists, but we've always # accepted that if not r and not w and not x: return ((), (), ()) return _real_original_select(r, w, x, t) else: _original_select = _real_original_select # These will be replaced by copy_globals if they are defined by the # platform. They're not defined on Windows, but we still provide # poll() there. We only pay attention to POLLIN and POLLOUT. POLLIN = 1 POLLPRI = 2 POLLOUT = 4 POLLERR = 8 POLLHUP = 16 POLLNVAL = 32 POLLRDNORM = 64 POLLRDBAND = 128 POLLWRNORM = 4 POLLWRBAND = 256 __implements__ = [ 'select', ] if hasattr(__select__, 'poll'): __implements__.append('poll') else: __extra__ = [ 'poll', ] __all__ = ['error'] + __implements__ error = __select__.error __imports__ = copy_globals(__select__, globals(), names_to_ignore=__all__, dunder_names_to_keep=()) _EV_READ = 1 _EV_WRITE = 2 def get_fileno(obj): try: fileno_f = obj.fileno except AttributeError: if not isinstance(obj, integer_types): raise TypeError('argument must be an int, or have a fileno() method: %r' % (obj,)) return obj return fileno_f() class SelectResult(object): __slots__ = () @staticmethod def _make_callback(ready_collection, event, mask): def cb(fd, watcher): ready_collection.append(fd) watcher.close() event.set() cb.mask = mask return cb @classmethod def _make_watchers(cls, watchers, *fd_cb): loop = get_hub().loop io = loop.io MAXPRI = loop.MAXPRI for fdlist, callback in fd_cb: try: for fd in fdlist: watcher = io(get_fileno(fd), callback.mask) watcher.priority = MAXPRI watchers.append(watcher) watcher.start(callback, fd, watcher) except IOError as ex: raise error(*ex.args) @staticmethod def _closeall(watchers): for watcher in watchers: watcher.stop() watcher.close() del watchers[:] def select(self, rlist, wlist, timeout): watchers = [] # read and write are the collected ready objects, accumulated # by the callback. Note that we could get spurious callbacks # if the socket is closed while we're blocked. We can't easily # detect that (libev filters the events passed so we can't # pass arbitrary events). After an iteration of polling for # IO, libev will invoke all the pending IO watchers, and then # any newly added (fed) events, and then we will invoke added # callbacks. With libev 4.27+ and EV_VERIFY, it's critical to # close our watcher immediately once we get an event. That # could be the close event (coming just before the actual # close happens), and once the FD is closed, libev will abort # the process if we stop the watcher. 
read = [] write = [] event = Event() add_read = self._make_callback(read, event, _EV_READ) add_write = self._make_callback(write, event, _EV_WRITE) try: self._make_watchers(watchers, (rlist, add_read), (wlist, add_write)) event.wait(timeout=timeout) return read, write, [] finally: self._closeall(watchers) def select(rlist, wlist, xlist, timeout=None): # pylint:disable=unused-argument """An implementation of :obj:`select.select` that blocks only the current greenlet. .. caution:: *xlist* is ignored. .. versionchanged:: 1.2a1 Raise a :exc:`ValueError` if timeout is negative. This matches Python 3's behaviour (Python 2 would raise a ``select.error``). Previously gevent had undefined behaviour. .. versionchanged:: 1.2a1 Raise an exception if any of the file descriptors are invalid. """ if timeout is not None and timeout < 0: # Raise an error like the real implementation; which error # depends on the version. Python 3, where select.error is OSError, # raises a ValueError (which makes sense). Older pythons raise # the error from the select syscall...but we don't actually get there. # We choose to just raise the ValueError as it makes more sense and is # forward compatible raise ValueError("timeout must be non-negative") # since rlist and wlist can be any iterable we will have to first # copy them into a list, so we can use them in both _original_select # and in SelectResult.select. We don't need to do it for xlist, since # that one will only be passed into _original_select rlist = rlist if isinstance(rlist, (list, tuple)) else list(rlist) wlist = wlist if isinstance(wlist, (list, tuple)) else list(wlist) # First, do a poll with the original select system call. This is # the most efficient way to check to see if any of the file # descriptors have previously been closed and raise the correct # corresponding exception. (Because libev tends to just return # them as ready, or, if built with EV_VERIFY >= 2 and libev >= # 4.27, crash the process. And libuv also tends to crash the # process.) # # We accept the *xlist* here even though we can't # below because this is all about error handling. sel_results = ((), (), ()) try: sel_results = _original_select(rlist, wlist, xlist, 0) except error as e: enumber = getattr(e, 'errno', None) or e.args[0] if enumber != EINTR: # Ignore interrupted syscalls raise if sel_results[0] or sel_results[1] or sel_results[2] or (timeout is not None and timeout == 0): # If we actually had stuff ready, go ahead and return it. No need # to go through the trouble of doing our own stuff. # Likewise, if the timeout is 0, we already did a 0 timeout # select and we don't need to do it again. Note that in libuv, # zero duration timers may be called immediately, without # cycling the event loop at all. 2.7/test_telnetlib.py "hangs" # calling zero-duration timers if we go to the loop here. # However, because this is typically a place where scheduling switches # can occur, we need to make sure that's still the case; otherwise a single # consumer could monopolize the thread. (shows up in test_ftplib.) 
_g_sleep() return sel_results result = SelectResult() return result.select(rlist, wlist, timeout) class PollResult(object): __slots__ = ('events', 'event') def __init__(self): self.events = set() self.event = Event() def add_event(self, events, fd): if events < 0: result_flags = POLLNVAL else: result_flags = 0 if events & _EV_READ: result_flags = POLLIN if events & _EV_WRITE: result_flags |= POLLOUT self.events.add((fd, result_flags)) self.event.set() class poll(object): """ An implementation of :obj:`select.poll` that blocks only the current greenlet. With only one exception, the interface is the same as the standard library interface. .. caution:: ``POLLPRI`` data is not supported. .. versionadded:: 1.1b1 .. versionchanged:: 1.5 This is now always defined, regardless of whether the standard library defines :func:`select.poll` or not. Note that it may have different performance characteristics. """ def __init__(self): # {int -> flags} # We can't keep watcher objects in here because people commonly # just drop the poll object when they're done, without calling # unregister(). dnspython does this. self.fds = {} self.loop = get_hub().loop def register(self, fd, eventmask=_NONE): """ Register a file descriptor *fd* with the polling object. Future calls to the :meth:`poll`` method will then check whether the file descriptor has any pending I/O events. *fd* can be either an integer, or an object with a ``fileno()`` method that returns an integer. File objects implement ``fileno()``, so they can also be used as the argument (but remember that regular files are usually always ready). *eventmask* is an optional bitmask describing the type of events you want to check for, and can be a combination of the constants ``POLLIN``, and ``POLLOUT`` (``POLLPRI`` is not supported). """ if eventmask is _NONE: flags = _EV_READ | _EV_WRITE else: flags = 0 if eventmask & POLLIN: flags = _EV_READ if eventmask & POLLOUT: flags |= _EV_WRITE # If they ask for POLLPRI, we can't support # that. Should we raise an error? fileno = get_fileno(fd) self.fds[fileno] = flags def modify(self, fd, eventmask): """ Change the set of events being watched on *fd*. """ self.register(fd, eventmask) def _get_started_watchers(self, watcher_cb): watchers = [] io = self.loop.io MAXPRI = self.loop.MAXPRI try: for fd, flags in iteritems(self.fds): watcher = io(fd, flags) watchers.append(watcher) watcher.priority = MAXPRI watcher.start(watcher_cb, fd, pass_events=True) except: for awatcher in watchers: awatcher.stop() awatcher.close() raise return watchers def poll(self, timeout=None): """ poll the registered fds. .. versionchanged:: 1.2a1 File descriptors that are closed are reported with POLLNVAL. .. versionchanged:: 1.3a2 Under libuv, interpret *timeout* values less than 0 the same as *None*, i.e., block. This was always the case with libev. """ result = PollResult() watchers = self._get_started_watchers(result.add_event) try: if timeout is not None: if timeout < 0: # The docs for python say that an omitted timeout, # a negative timeout and a timeout of None are all # supposed to block forever. Many, but not all # OS's accept any negative number to mean that. Some # OS's raise errors for anything negative but not -1. # Python 3.7 changes to always pass exactly -1 in that # case from selectors. # Our Timeout class currently does not have a defined behaviour # for negative values. On libuv, it uses a check watcher and effectively # doesn't block. On libev, it seems to block. 
In either case, we # *want* to block, so turn this into the sure fire block request. timeout = None elif timeout: # The docs for poll.poll say timeout is in # milliseconds. Our result objects work in # seconds, so this should be *=, shouldn't it? timeout /= 1000.0 result.event.wait(timeout=timeout) return list(result.events) finally: for awatcher in watchers: awatcher.stop() awatcher.close() def unregister(self, fd): """ Unregister the *fd*. .. versionchanged:: 1.2a1 Raise a `KeyError` if *fd* was not registered, like the standard library. Previously gevent did nothing. """ fileno = get_fileno(fd) del self.fds[fileno] def _gevent_do_monkey_patch(patch_request): aggressive = patch_request.patch_kwargs['aggressive'] patch_request.default_patch_items() if aggressive: # since these are blocking we're removing them here. This makes some other # modules (e.g. asyncore) non-blocking, as they use select that we provide # when none of these are available. patch_request.remove_item( 'epoll', 'kqueue', 'kevent', 'devpoll', ) gevent-24.11.1/src/gevent/selectors.py000066400000000000000000000267151471441230600176210ustar00rootroot00000000000000# Copyright (c) 2020 gevent contributors. """ This module provides :class:`GeventSelector`, a high-level IO multiplexing mechanism. This is aliased to :class:`DefaultSelector`. This module provides the same API as the selectors defined in :mod:`selectors`. On Python 2, this module is only available if the `selectors2 `_ backport is installed. .. versionadded:: 20.6.0 """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from collections import defaultdict try: import selectors as __selectors__ except ImportError: # Probably on Python 2. Do we have the backport? import selectors2 as __selectors__ __target__ = 'selectors2' from gevent.hub import _get_hub_noargs as get_hub from gevent import sleep from gevent._compat import iteritems from gevent._compat import itervalues from gevent._util import copy_globals from gevent._util import Lazy from gevent.event import Event from gevent.select import _EV_READ from gevent.select import _EV_WRITE __implements__ = [ 'DefaultSelector', ] __extra__ = [ 'GeventSelector', ] __all__ = __implements__ + __extra__ __imports__ = copy_globals( __selectors__, globals(), names_to_ignore=__all__, # Copy __all__; __all__ is defined by selectors2 but not Python 3. dunder_names_to_keep=('__all__',) ) _POLL_ALL = _EV_READ | _EV_WRITE EVENT_READ = __selectors__.EVENT_READ EVENT_WRITE = __selectors__.EVENT_WRITE _ALL_EVENTS = EVENT_READ | EVENT_WRITE SelectorKey = __selectors__.SelectorKey # In 3.4 and selectors2, BaseSelector is a concrete # class that can be called. In 3.5 and later, it's an # ABC, with the real implementation being # passed to _BaseSelectorImpl. _BaseSelectorImpl = getattr( __selectors__, '_BaseSelectorImpl', __selectors__.BaseSelector ) class GeventSelector(_BaseSelectorImpl): """ A selector implementation using gevent primitives. This is a type of :class:`selectors.BaseSelector`, so the documentation for that class applies here. .. caution:: As the base class indicates, it is critically important to unregister file objects before closing them. (Or close the selector they are registered with before closing them.) Failure to do so may crash the process or have other unintended results. """ # Notes on the approach: # # It's easy to wrap a selector implementation around # ``gevent.select.poll``; in fact that's what happens by default # when monkey-patching in Python 3. 
But the problem with that is # each call to ``selector.select()`` will result in creating and # then destroying new kernel-level polling resources, as nothing # in ``gevent.select`` can keep watchers around (because the underlying # file could be closed at any time). This ends up producing a large # number of syscalls that are unnecessary. # # So here, we take advantage of the fact that it is documented and # required that files not be closed while they are registered. # This lets us persist watchers. Indeed, it lets us continually # accrue events in the background before a call to ``select()`` is even # made. We can take advantage of this to return results immediately, without # a syscall, if we have them. # # We create watchers in ``register()`` and destroy them in # ``unregister()``. They do not get started until the first call # to ``select()``, though. Once they are started, they don't get # stopped until they deliver an event. # Lifecycle: # register() -> inactive_watchers # select() -> inactive_watchers -> active_watchers; # active_watchers -> inactive_watchers def __init__(self, hub=None): if hub is not None: self.hub = hub # {fd: watcher} self._active_watchers = {} self._inactive_watchers = {} # {fd: EVENT_READ|EVENT_WRITE} self._accumulated_events = defaultdict(int) self._ready = Event() super(GeventSelector, self).__init__() if not hasattr(_BaseSelectorImpl, '_key_from_fd'): def _key_from_fd(self, fd): # Removed in 3.13; this duplicates the old function. try: return self._fd_to_key[fd] except KeyError: return None def __callback(self, events, fd): if events > 0: cur_event_for_fd = self._accumulated_events[fd] if events & _EV_READ: cur_event_for_fd |= EVENT_READ if events & _EV_WRITE: cur_event_for_fd |= EVENT_WRITE self._accumulated_events[fd] = cur_event_for_fd self._ready.set() @Lazy def hub(self): # pylint:disable=method-hidden return get_hub() def register(self, fileobj, events, data=None): key = _BaseSelectorImpl.register(self, fileobj, events, data) if events == _ALL_EVENTS: flags = _POLL_ALL elif events == EVENT_READ: flags = _EV_READ else: flags = _EV_WRITE loop = self.hub.loop io = loop.io MAXPRI = loop.MAXPRI self._inactive_watchers[key.fd] = watcher = io(key.fd, flags) watcher.priority = MAXPRI return key def unregister(self, fileobj): key = _BaseSelectorImpl.unregister(self, fileobj) if key.fd in self._active_watchers: watcher = self._active_watchers.pop(key.fd) else: watcher = self._inactive_watchers.pop(key.fd) watcher.stop() watcher.close() self._accumulated_events.pop(key.fd, None) return key # XXX: Can we implement ``modify`` more efficiently than # ``unregister()``+``register()``? We could detect the no-change # case and do nothing; recent versions of the standard library # do that. def select(self, timeout=None): """ Poll for I/O. Note that, like the built-in selectors, this will block indefinitely if no timeout is given and no files have been registered. """ # timeout > 0 : block seconds # timeout <= 0 : No blocking. # timeout = None: Block forever # Event.wait doesn't deal with negative values if timeout is not None and timeout < 0: timeout = 0 # Start any watchers that need started. Note that they may # not actually get a chance to do anything yet if we already had # events set. 
for fd, watcher in iteritems(self._inactive_watchers): watcher.start(self.__callback, fd, pass_events=True) self._active_watchers.update(self._inactive_watchers) self._inactive_watchers.clear() # The _ready event is either already set (in which case # there are some results waiting in _accumulated_events) or # not set, in which case we have to block. But to make the two cases # behave the same, we will always yield to the event loop. if self._ready.is_set(): sleep() self._ready.wait(timeout) self._ready.clear() # TODO: If we have nothing ready, but they ask us not to block, # should we make an effort to actually spin the event loop and let # it check for events? result = [] for fd, event in iteritems(self._accumulated_events): key = self._key_from_fd(fd) watcher = self._active_watchers.pop(fd) ## The below is taken without comment from ## https://github.com/gevent/gevent/pull/1523/files and ## hasn't been checked: # # Since we are emulating an epoll object within another epoll object, # once a watcher has fired, we must deactivate it until poll is called # next. If we did not, someone else could call, e.g., gevent.time.sleep # and any unconsumed bytes on our watched fd would prevent the process # from sleeping correctly. watcher.stop() if key: result.append((key, event & key.events)) self._inactive_watchers[fd] = watcher else: # pragma: no cover # If the key was gone, then somehow we've been unregistered. # Don't put it back in inactive, close it. watcher.close() self._accumulated_events.clear() return result def close(self): for d in self._active_watchers, self._inactive_watchers: if d is None: continue # already closed for watcher in itervalues(d): watcher.stop() watcher.close() self._active_watchers = self._inactive_watchers = None self._accumulated_events = None self.hub = None _BaseSelectorImpl.close(self) DefaultSelector = GeventSelector def _gevent_do_monkey_patch(patch_request): aggressive = patch_request.patch_kwargs['aggressive'] target_mod = patch_request.target_module patch_request.default_patch_items() import sys if 'selectors' not in sys.modules: # Py2: Make 'import selectors' work sys.modules['selectors'] = sys.modules[__name__] # Python 3 wants to use `select.select` as a member function, # leading to this error in selectors.py (because # gevent.select.select is not a builtin and doesn't get the # magic auto-static that they do): # # r, w, _ = self._select(self._readers, self._writers, [], timeout) # TypeError: select() takes from 3 to 4 positional arguments but 5 were given # # Note that this obviously only happens if selectors was # imported after we had patched select; but there is a code # path that leads to it being imported first (but now we've # patched select---so we can't compare them identically). It also doesn't # happen on Windows, because they define a normal method for _select, to work around # some weirdness in the handling of the third argument. # # The backport doesn't have that. 
orig_select_select = patch_request.get_original('select', 'select') assert target_mod.select is not orig_select_select selectors = __selectors__ SelectSelector = selectors.SelectSelector if hasattr(SelectSelector, '_select') and SelectSelector._select in ( target_mod.select, orig_select_select ): from gevent.select import select def _select(self, *args, **kwargs): # pylint:disable=unused-argument return select(*args, **kwargs) selectors.SelectSelector._select = _select _select._gevent_monkey = True # prove for test cases if aggressive: # If `selectors` had already been imported before we removed # select.epoll|kqueue|devpoll, these may have been defined in terms # of those functions. They'll fail at runtime. patch_request.remove_item( selectors, 'EpollSelector', 'KqueueSelector', 'DevpollSelector', ) selectors.DefaultSelector = DefaultSelector # Python 3.7 refactors the poll-like selectors to use a common # base class and capture a reference to select.poll, etc, at # import time. selectors tends to get imported early # (importing 'platform' does it: platform -> subprocess -> selectors), # so we need to clean that up. if hasattr(selectors, 'PollSelector') and hasattr(selectors.PollSelector, '_selector_cls'): from gevent.select import poll selectors.PollSelector._selector_cls = poll gevent-24.11.1/src/gevent/server.py000066400000000000000000000251441471441230600171170ustar00rootroot00000000000000# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. """TCP/SSL server""" from __future__ import print_function from __future__ import absolute_import from __future__ import division import sys from _socket import error as SocketError from _socket import SOL_SOCKET from _socket import SO_REUSEADDR from _socket import AF_INET from _socket import SOCK_DGRAM from gevent.baseserver import BaseServer from gevent.socket import EWOULDBLOCK from gevent.socket import socket as GeventSocket __all__ = ['StreamServer', 'DatagramServer'] if sys.platform == 'win32': # SO_REUSEADDR on Windows does not mean the same thing as on *nix (issue #217) DEFAULT_REUSE_ADDR = None else: DEFAULT_REUSE_ADDR = 1 # sockets and SSL sockets are context managers on Python 3 def _closing_socket(sock): return sock class StreamServer(BaseServer): """ A generic TCP server. Accepts connections on a listening socket and spawns user-provided *handle* function for each connection with 2 arguments: the client socket and the client address. Note that although the errors in a successfully spawned handler will not affect the server or other connections, the errors raised by :func:`accept` and *spawn* cause the server to stop accepting for a short amount of time. The exact period depends on the values of :attr:`min_delay` and :attr:`max_delay` attributes. The delay starts with :attr:`min_delay` and doubles with each successive error until it reaches :attr:`max_delay`. A successful :func:`accept` resets the delay to :attr:`min_delay` again. See :class:`~gevent.baseserver.BaseServer` for information on defining the *handle* function and important restrictions on it. **SSL Support** The server can optionally work in SSL mode when given the correct keyword arguments. (That is, the presence of any keyword arguments will trigger SSL mode.) On Python 2.7.9 and later (any Python version that supports the :class:`ssl.SSLContext`), this can be done with a configured ``SSLContext``. On any Python version, it can be done by passing the appropriate arguments for :func:`ssl.wrap_socket`. 
The incoming socket will be wrapped into an SSL socket before being passed to the *handle* function. If the *ssl_context* keyword argument is present, it should contain an :class:`ssl.SSLContext`. The remaining keyword arguments are passed to the :meth:`ssl.SSLContext.wrap_socket` method of that object. Depending on the Python version, supported arguments may include: - server_hostname - suppress_ragged_eofs - do_handshake_on_connect .. caution:: When using an SSLContext, it should either be imported from :mod:`gevent.ssl`, or the process needs to be monkey-patched. If the process is not monkey-patched and you pass the standard library SSLContext, the resulting client sockets will not cooperate with gevent. Otherwise, keyword arguments are assumed to apply to :func:`ssl.wrap_socket`. These keyword arguments may include: - keyfile - certfile - cert_reqs - ssl_version - ca_certs - suppress_ragged_eofs - do_handshake_on_connect - ciphers .. versionchanged:: 1.2a2 Add support for the *ssl_context* keyword argument. """ # the default backlog to use if none was provided in __init__ # For TCP, 128 is the (default) maximum at the operating system level on Linux and macOS # larger values are truncated to 128. # # Windows defines SOMAXCONN=0x7fffffff to mean "max reasonable value" --- that value # was undocumented and subject to change, but appears to be 200. # Beginning in Windows 8 there's SOMAXCONN_HINT(b)=(-(b)) which means "at least # as many SOMAXCONN but no more than b" which is a portable way to write 200. backlog = 128 reuse_addr = DEFAULT_REUSE_ADDR def __init__(self, listener, handle=None, backlog=None, spawn='default', **ssl_args): BaseServer.__init__(self, listener, handle=handle, spawn=spawn) try: if ssl_args: ssl_args.setdefault('server_side', True) if 'ssl_context' in ssl_args: ssl_context = ssl_args.pop('ssl_context') self.wrap_socket = ssl_context.wrap_socket self.ssl_args = ssl_args else: from gevent.ssl import wrap_socket self.wrap_socket = wrap_socket self.ssl_args = ssl_args else: self.ssl_args = None if backlog is not None: if hasattr(self, 'socket'): raise TypeError('backlog must be None when a socket instance is passed') self.backlog = backlog except: self.close() raise @property def ssl_enabled(self): return self.ssl_args is not None def set_listener(self, listener): BaseServer.set_listener(self, listener) def _make_socket_stdlib(self, fresh): # We want to unwrap the gevent wrapping of the listening socket. # This lets us be just a hair more efficient: when our 'do_read' is # called, we've already waited on the socket to be ready to accept(), so # we don't need to (potentially) do it again. Also we avoid a layer # of method calls. The cost, though, is that we have to manually wrap # sockets back up to be non-blocking in do_read(). I'm not sure that's worth # it. # # In the past, we only did this when set_listener() was called with a socket # object and not an address. It makes sense to do it always though, # so that we get consistent behaviour. while hasattr(self.socket, '_sock'): if fresh: if hasattr(self.socket, '_drop_events'): # Discard event listeners. This socket object is not shared, # so we don't need them anywhere else. # This matters somewhat for libuv, where we have to multiplex # listeners, and we're about to create a new listener. # If we don't do this, on Windows libuv tends to miss incoming # connects and our _do_read callback doesn't get called. self.socket._drop_events() # XXX: Do we need to _drop() for PyPy? 
self.socket = self.socket._sock # pylint:disable=attribute-defined-outside-init def init_socket(self): fresh = False if not hasattr(self, 'socket'): fresh = True # FIXME: clean up the socket lifetime # pylint:disable=attribute-defined-outside-init self.socket = self.get_listener(self.address, self.backlog, self.family) self.address = self.socket.getsockname() if self.ssl_args: self._handle = self.wrap_socket_and_handle else: self._handle = self.handle self._make_socket_stdlib(fresh) @classmethod def get_listener(cls, address, backlog=None, family=None): if backlog is None: backlog = cls.backlog return _tcp_listener(address, backlog=backlog, reuse_addr=cls.reuse_addr, family=family) def do_read(self): sock = self.socket try: fd, address = sock._accept() except BlockingIOError: # python 2: pylint: disable=undefined-variable if not sock.timeout: return raise sock = GeventSocket(sock.family, sock.type, sock.proto, fileno=fd) # XXX Python issue #7995? "if no default timeout is set # and the listening socket had a (non-zero) timeout, force # the new socket in blocking mode to override # platform-specific socket flags inheritance." return sock, address def do_close(self, sock, *args): # pylint:disable=arguments-differ sock.close() def wrap_socket_and_handle(self, client_socket, address): # used in case of ssl sockets with _closing_socket(self.wrap_socket(client_socket, **self.ssl_args)) as ssl_socket: return self.handle(ssl_socket, address) class DatagramServer(BaseServer): """A UDP server""" reuse_addr = DEFAULT_REUSE_ADDR def __init__(self, *args, **kwargs): # The raw (non-gevent) socket, if possible self._socket = None BaseServer.__init__(self, *args, **kwargs) from gevent.lock import Semaphore self._writelock = Semaphore() def init_socket(self): if not hasattr(self, 'socket'): # FIXME: clean up the socket lifetime # pylint:disable=attribute-defined-outside-init self.socket = self.get_listener(self.address, self.family) self.address = self.socket.getsockname() self._socket = self.socket try: self._socket = self._socket._sock except AttributeError: pass @classmethod def get_listener(cls, address, family=None): return _udp_socket(address, reuse_addr=cls.reuse_addr, family=family) def do_read(self): try: data, address = self._socket.recvfrom(8192) except SocketError as err: if err.args[0] == EWOULDBLOCK: return raise return data, address def sendto(self, *args): self._writelock.acquire() try: self.socket.sendto(*args) finally: self._writelock.release() def _tcp_listener(address, backlog=50, reuse_addr=None, family=AF_INET): """A shortcut to create a TCP socket, bind it and put it into listening state.""" sock = GeventSocket(family=family) if reuse_addr is not None: sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, reuse_addr) try: sock.bind(address) except SocketError as ex: strerror = getattr(ex, 'strerror', None) if strerror is not None: ex.strerror = strerror + ': ' + repr(address) raise sock.listen(backlog) sock.setblocking(0) return sock def _udp_socket(address, backlog=50, reuse_addr=None, family=AF_INET): # backlog argument for compat with tcp_listener # pylint:disable=unused-argument # we want gevent.socket.socket here sock = GeventSocket(family=family, type=SOCK_DGRAM) if reuse_addr is not None: sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, reuse_addr) try: sock.bind(address) except SocketError as ex: strerror = getattr(ex, 'strerror', None) if strerror is not None: ex.strerror = strerror + ': ' + repr(address) raise return sock 
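# ---------------------------------------------------------------------------
# Hedged usage sketch (added for exposition; not part of the original module).
# It shows the minimal way the StreamServer defined above is typically used:
# pass a listener address and a handler taking ``(socket, address)``. The
# handler name ``_echo`` and the port 16000 are arbitrary illustrative
# choices, not names defined by gevent itself.
def _echo(sock, address):  # hypothetical example handler
    # ``sock`` is a cooperative gevent socket; echo bytes back until the
    # peer closes the connection (recv returns b'').
    data = sock.recv(1024)
    while data:
        sock.sendall(data)
        data = sock.recv(1024)

if __name__ == '__main__':
    # Listen on a loopback port and serve forever; each accepted connection
    # runs ``_echo`` in its own greenlet under the default ``spawn`` policy.
    StreamServer(('127.0.0.1', 16000), _echo).serve_forever()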
gevent-24.11.1/src/gevent/signal.py000066400000000000000000000121411471441230600170570ustar00rootroot00000000000000""" Cooperative implementation of special cases of :func:`signal.signal`. This module is designed to work with libev's child watchers, as used by default in :func:`gevent.os.fork` Note that each ``SIGCHLD`` handler will be run in a new greenlet when the signal is delivered (just like :class:`gevent.hub.signal`) The implementations in this module are only monkey patched if :func:`gevent.os.waitpid` is being used (the default) and if :const:`signal.SIGCHLD` is available; see :func:`gevent.os.fork` for information on configuring this not to be the case for advanced uses. .. versionadded:: 1.1b4 .. versionchanged:: 1.5a4 Previously there was a backwards compatibility alias ``gevent.signal``, introduced in 1.1b4, that partly shadowed this module, confusing humans and static analysis tools alike. That alias has been removed. (See `gevent.signal_handler`.) """ from __future__ import absolute_import from gevent._util import _NONE as _INITIAL from gevent._util import copy_globals import signal as _signal __implements__ = [] __extensions__ = [] _child_handler = _INITIAL _signal_signal = _signal.signal _signal_getsignal = _signal.getsignal def getsignal(signalnum): """ Exactly the same as :func:`signal.getsignal` except where :const:`signal.SIGCHLD` is concerned. For :const:`signal.SIGCHLD`, this cooperates with :func:`signal` to provide consistent answers. """ if signalnum != _signal.SIGCHLD: return _signal_getsignal(signalnum) global _child_handler if _child_handler is _INITIAL: _child_handler = _signal_getsignal(_signal.SIGCHLD) return _child_handler def signal(signalnum, handler): """ Exactly the same as :func:`signal.signal` except where :const:`signal.SIGCHLD` is concerned. .. note:: A :const:`signal.SIGCHLD` handler installed with this function will only be triggered for children that are forked using :func:`gevent.os.fork` (:func:`gevent.os.fork_and_watch`); children forked before monkey patching, or otherwise by the raw :func:`os.fork`, will not trigger the handler installed by this function. (It's unlikely that a SIGCHLD handler installed with the builtin :func:`signal.signal` would be triggered either; libev typically overwrites such a handler at the C level. At the very least, it's full of race conditions.) .. note:: Use of ``SIG_IGN`` and ``SIG_DFL`` may also have race conditions with libev child watchers and the :mod:`gevent.subprocess` module. .. versionchanged:: 1.2a1 If ``SIG_IGN`` or ``SIG_DFL`` are used to ignore ``SIGCHLD``, a future use of ``gevent.subprocess`` and libev child watchers will once again work. However, on Python 2, use of ``os.popen`` will fail. .. versionchanged:: 1.1rc2 Allow using ``SIG_IGN`` and ``SIG_DFL`` to reset and ignore ``SIGCHLD``. However, this allows the possibility of a race condition if ``gevent.subprocess`` had already been used. """ if signalnum != _signal.SIGCHLD: return _signal_signal(signalnum, handler) # TODO: raise value error if not called from the main # greenlet, just like threads if handler != _signal.SIG_IGN and handler != _signal.SIG_DFL and not callable(handler): # exact same error message raised by the stdlib raise TypeError("signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object") old_handler = getsignal(signalnum) global _child_handler _child_handler = handler if handler in (_signal.SIG_IGN, _signal.SIG_DFL): # Allow resetting/ignoring this signal at the process level. 
# Note that this conflicts with gevent.subprocess and other users # of child watchers, until the next time gevent.subprocess/loop.install_sigchld() # is called. from gevent.hub import get_hub # Are we always safe to import here? _signal_signal(signalnum, handler) get_hub().loop.reset_sigchld() return old_handler def _on_child_hook(): # This is called in the hub greenlet. To let the function # do more useful work, like use blocking functions, # we run it in a new greenlet; see gevent.hub.signal if callable(_child_handler): # None is a valid value for the frame argument from gevent import Greenlet greenlet = Greenlet(_child_handler, _signal.SIGCHLD, None) greenlet.switch() import gevent.os # pylint:disable=no-member if 'waitpid' in gevent.os.__implements__ and hasattr(_signal, 'SIGCHLD'): # Tightly coupled here to gevent.os and its waitpid implementation; only use these # if necessary. gevent.os._on_child_hook = _on_child_hook __implements__.append("signal") __implements__.append("getsignal") else: # XXX: This breaks test__all__ on windows __extensions__.append("signal") __extensions__.append("getsignal") __imports__ = copy_globals(_signal, globals(), names_to_ignore=__implements__ + __extensions__, dunder_names_to_keep=()) __all__ = __implements__ + __extensions__ gevent-24.11.1/src/gevent/socket.py000066400000000000000000000145471471441230600171060ustar00rootroot00000000000000# Copyright (c) 2009-2014 Denis Bilenko and gevent contributors. See LICENSE for details. """Cooperative low-level networking interface. This module provides socket operations and some related functions. The API of the functions and classes matches the API of the corresponding items in the standard :mod:`socket` module exactly, but the synchronous functions in this module only block the current greenlet and let the others run. For convenience, exceptions (like :class:`error ` and :class:`timeout `) as well as the constants from the :mod:`socket` module are imported into this module. """ # Our import magic sadly makes this warning useless # pylint: disable=undefined-variable from gevent._compat import PY311 from gevent._compat import exc_clear from gevent._util import copy_globals from gevent import _socket3 as _source # define some things we're expecting to overwrite; each module # needs to define these __implements__ = __dns__ = __all__ = __extensions__ = __imports__ = () class error(Exception): errno = None def getfqdn(*args): # pylint:disable=unused-argument raise NotImplementedError() copy_globals(_source, globals(), dunder_names_to_keep=('__implements__', '__dns__', '__all__', '__extensions__', '__imports__', '__socket__'), cleanup_globs=False) # The _socket2 and _socket3 don't import things defined in # __extensions__, to help avoid confusing reference cycles in the # documentation and to prevent importing from the wrong place, but we # *do* need to expose them here. (NOTE: This may lead to some sphinx # warnings like: # WARNING: missing attribute mentioned in :members: or __all__: # module gevent._socket2, attribute cancel_wait # These can be ignored.) 
from gevent import _socketcommon copy_globals(_socketcommon, globals(), only_names=_socketcommon.__extensions__) try: _GLOBAL_DEFAULT_TIMEOUT = __socket__._GLOBAL_DEFAULT_TIMEOUT except AttributeError: _GLOBAL_DEFAULT_TIMEOUT = object() def create_connection(address, timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None, *, all_errors=False): """ create_connection(address, timeout=None, source_address=None, *, all_errors=False) -> socket Connect to *address* and return the :class:`gevent.socket.socket` object. Convenience function. Connect to *address* (a 2-tuple ``(host, port)``) and return the socket object. Passing the optional *timeout* parameter will set the timeout on the socket instance before attempting to connect. If no *timeout* is supplied, the global default timeout setting returned by :func:`getdefaulttimeout` is used. If *source_address* is set it must be a tuple of (host, port) for the socket to bind as a source address before making the connection. A host of '' or port 0 tells the OS to use the default. .. versionchanged:: 20.6.0 If the host part of the address includes an IPv6 scope ID, it will be used instead of ignored, if the platform supplies :func:`socket.inet_pton`. .. versionchanged:: 22.08.0 Add the *all_errors* argument. This only has meaning on Python 3.11+; it is a programming error to pass it on earlier versions. .. versionchanged:: 23.7.0 You can pass a value for ``all_errors`` on any version of Python. It is forced to false for any version before 3.11 inside the function. """ # Sigh. This function is a near-copy of the CPython implementation. # Even though we simplified some things, it's still a little complex to # cope with error handling, which got even more complicated in 3.11. # pylint:disable=too-many-locals,too-many-branches if not PY311: all_errors = False host, port = address exceptions = [] # getaddrinfo is documented as returning a list, but our interface # is pluggable, so be sure it does. addrs = list(getaddrinfo(host, port, 0, SOCK_STREAM)) if not addrs: raise error("getaddrinfo returns an empty list") for res in addrs: af, socktype, proto, _canonname, sa = res sock = None try: sock = socket(af, socktype, proto) if timeout is not _GLOBAL_DEFAULT_TIMEOUT: sock.settimeout(timeout) if source_address: sock.bind(source_address) sock.connect(sa) except error as exc: if not all_errors: exceptions = [exc] # raise only the last error else: exceptions.append(exc) del exc # cycle if sock is not None: sock.close() sock = None if res is addrs[-1]: if not all_errors: del exceptions[:] raise try: # pylint isn't smart enough to see that we only use this # on supported versions. # pylint:disable=using-exception-groups-in-unsupported-version raise ExceptionGroup("create_connection failed", exceptions) finally: # Break explicitly a reference cycle del exceptions[:] # without exc_clear(), if connect() fails once, the socket # is referenced by the frame in exc_info and the next # bind() fails (see test__socket.TestCreateConnection) # that does not happen with regular sockets though, # because _socket.socket.connect() is a built-in. this is # similar to "getnameinfo loses a reference" failure in # test_socket.py exc_clear() except BaseException: # Things like GreenletExit, Timeout and KeyboardInterrupt. 
# These get raised immediately, being sure to # close the socket if sock is not None: sock.close() sock = None raise else: # break reference cycles del exceptions[:] try: return sock finally: sock = None # This is promised to be in the __all__ of the _source, but, for circularity reasons, # we implement it in this module. Mostly for documentation purposes, put it # in the _source too. _source.create_connection = create_connection gevent-24.11.1/src/gevent/ssl.py000066400000000000000000001037241471441230600164130ustar00rootroot00000000000000# Wrapper module for _ssl. Written by Bill Janssen. # Ported to gevent by Denis Bilenko. """SSL wrapper for socket objects on Python 3. For the documentation, refer to :mod:`ssl` module manual. This module implements cooperative SSL socket wrappers. """ from __future__ import absolute_import import ssl as __ssl__ _ssl = __ssl__._ssl import errno from gevent.socket import socket, timeout_default from gevent.socket import timeout as _socket_timeout from gevent._util import copy_globals socket_error = OSError from weakref import ref as _wref __implements__ = [ 'SSLContext', 'SSLSocket', 'get_server_certificate', ] if hasattr(__ssl__, 'wrap_socket'): __implements__.append('wrap_socket') __extra__ = [] else: __extra__ = [ 'wrap_socket', ] # Manually import things we use so we get better linting. # Also, in the past (adding 3.9 support) it turned out we were # relying on certain global variables being defined in the ssl module # that weren't required to be there, e.g., AF_INET, which should be imported # from socket from socket import AF_INET from socket import SOCK_STREAM from socket import SO_TYPE from socket import SOL_SOCKET from ssl import SSLWantReadError from ssl import SSLWantWriteError from ssl import SSLEOFError from ssl import SSLZeroReturnError from ssl import CERT_NONE from ssl import SSLError from ssl import SSL_ERROR_EOF from ssl import SSL_ERROR_WANT_READ from ssl import SSL_ERROR_WANT_WRITE from ssl import PROTOCOL_SSLv23 #from ssl import SSLObject from ssl import CHANNEL_BINDING_TYPES from ssl import CERT_REQUIRED from ssl import DER_cert_to_PEM_cert from ssl import create_connection # Import all symbols from Python's ssl.py, except those that we are implementing # and "private" symbols. __imports__ = copy_globals( __ssl__, globals(), # SSLSocket *must* subclass gevent.socket.socket; see issue 597 names_to_ignore=__implements__ + ['socket'], dunder_names_to_keep=()) __all__ = __implements__ + __imports__ + __extra__ if 'namedtuple' in __all__: __all__.remove('namedtuple') orig_SSLContext = __ssl__.SSLContext # pylint:disable=no-member # We have to pass the raw stdlib socket to SSLContext.wrap_socket. # That method in turn can pass that object on to things like SNI callbacks. # It wouldn't have access to any of the attributes on the SSLSocket, like # context, that it's supposed to (see test_ssl.test_sni_callback). Previously # we just delegated to the sslsocket with __getattr__, but 3.8 # added some new callbacks and a test that the object they get is an instance # of the high-level SSLSocket class, so that doesn't work anymore. Instead, # we wrap the callback and get the real socket to pass on. 
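# In user terms this means an SNI callback set on a gevent SSLContext still
# receives the high-level SSLSocket, just as with the stdlib. Illustrative
# sketch (the callback body and protocol choice are arbitrary; requires
# Python 3.7+ for the sni_callback property):
#
#     import ssl  # assuming gevent.monkey.patch_all(), or use gevent.ssl
#
#     def on_sni(ssl_sock, server_name, context):
#         # ssl_sock is the gevent SSLSocket, so e.g. ssl_sock.context works
#         # exactly as documented for ssl.SSLSocket.
#         return None
#
#     ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
#     ctx.sni_callback = on_sni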
class _contextawaresock(socket._gevent_sock_class): __slots__ = ('_sslsock',) def __init__(self, family, type, proto, fileno, sslsocket_wref): super().__init__(family, type, proto, fileno) self._sslsock = sslsocket_wref class _Callback(object): __slots__ = ('user_function',) def __init__(self, user_function): self.user_function = user_function def __call__(self, conn, *args): conn = conn._sslsock() return self.user_function(conn, *args) class SSLContext(orig_SSLContext): __slots__ = () # Added in Python 3.7 sslsocket_class = None # SSLSocket is assigned later def wrap_socket(self, sock, server_side=False, do_handshake_on_connect=True, suppress_ragged_eofs=True, server_hostname=None, session=None): # pylint:disable=arguments-differ,not-callable # (3.6 adds session) # Sadly, using *args and **kwargs doesn't work return self.sslsocket_class( sock=sock, server_side=server_side, do_handshake_on_connect=do_handshake_on_connect, suppress_ragged_eofs=suppress_ragged_eofs, server_hostname=server_hostname, _context=self, _session=session) if hasattr(orig_SSLContext.options, 'setter'): # In 3.6, these became properties. They want to access the # property __set__ method in the superclass, and they do so by using # super(SSLContext, SSLContext). But we rebind SSLContext when we monkey # patch, which causes infinite recursion. # https://github.com/python/cpython/commit/328067c468f82e4ec1b5c510a4e84509e010f296 # pylint:disable=no-member @orig_SSLContext.options.setter def options(self, value): super(orig_SSLContext, orig_SSLContext).options.__set__(self, value) @orig_SSLContext.verify_flags.setter def verify_flags(self, value): super(orig_SSLContext, orig_SSLContext).verify_flags.__set__(self, value) @orig_SSLContext.verify_mode.setter def verify_mode(self, value): super(orig_SSLContext, orig_SSLContext).verify_mode.__set__(self, value) if hasattr(orig_SSLContext, 'minimum_version'): # Like the above, added in 3.7 # pylint:disable=no-member @orig_SSLContext.minimum_version.setter def minimum_version(self, value): super(orig_SSLContext, orig_SSLContext).minimum_version.__set__(self, value) @orig_SSLContext.maximum_version.setter def maximum_version(self, value): super(orig_SSLContext, orig_SSLContext).maximum_version.__set__(self, value) if hasattr(orig_SSLContext, '_msg_callback'): # And ditto for 3.8 # msg_callback is more complex because they want to actually *do* stuff # in the setter, so we need to call it. For that to work we temporarily rebind # SSLContext back. This function cannot switch, so it should be safe, # unless somehow we have multiple threads in a monkey-patched ssl module # at the same time, which doesn't make much sense. @property def _msg_callback(self): result = super()._msg_callback if isinstance(result, _Callback): result = result.user_function return result @_msg_callback.setter def _msg_callback(self, value): if value and callable(value): value = _Callback(value) __ssl__.SSLContext = orig_SSLContext try: super(SSLContext, SSLContext)._msg_callback.__set__(self, value) # pylint:disable=no-member finally: __ssl__.SSLContext = SSLContext if hasattr(orig_SSLContext, 'sni_callback'): # Added in 3.7. 
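        # Like _msg_callback above, wrap the user's callable in _Callback on
        # the way in and unwrap it on the way out, so user code always sees
        # the high-level gevent SSLSocket instead of the raw
        # _contextawaresock.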
@property def sni_callback(self): result = super().sni_callback if isinstance(result, _Callback): result = result.user_function # pylint:disable=no-member return result @sni_callback.setter def sni_callback(self, value): if value and callable(value): value = _Callback(value) super(orig_SSLContext, orig_SSLContext).sni_callback.__set__(self, value) # pylint:disable=no-member else: # In newer versions, this just sets sni_callback. def set_servername_callback(self, server_name_callback): if server_name_callback and callable(server_name_callback): server_name_callback = _Callback(server_name_callback) super().set_servername_callback(server_name_callback) class SSLSocket(socket): """ gevent `ssl.SSLSocket `_ for Python 3. """ # pylint:disable=too-many-instance-attributes,too-many-public-methods def __init__(self, sock=None, keyfile=None, certfile=None, server_side=False, cert_reqs=CERT_NONE, ssl_version=PROTOCOL_SSLv23, ca_certs=None, do_handshake_on_connect=True, family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None, suppress_ragged_eofs=True, npn_protocols=None, ciphers=None, server_hostname=None, _session=None, # 3.6 _context=None): # When a *sock* argument is passed, it is used only for its fileno() # and is immediately detach()'d *unless* we raise an error. # pylint:disable=too-many-locals,too-many-statements,too-many-branches if _context: self._context = _context else: if server_side and not certfile: raise ValueError("certfile must be specified for server-side " "operations") if keyfile and not certfile: raise ValueError("certfile must be specified") if certfile and not keyfile: keyfile = certfile self._context = SSLContext(ssl_version) self._context.verify_mode = cert_reqs if ca_certs: self._context.load_verify_locations(ca_certs) if certfile: self._context.load_cert_chain(certfile, keyfile) if npn_protocols: self._context.set_npn_protocols(npn_protocols) if ciphers: self._context.set_ciphers(ciphers) self.keyfile = keyfile self.certfile = certfile self.cert_reqs = cert_reqs self.ssl_version = ssl_version self.ca_certs = ca_certs self.ciphers = ciphers # Can't use sock.type as other flags (such as SOCK_NONBLOCK) get # mixed in. if sock.getsockopt(SOL_SOCKET, SO_TYPE) != SOCK_STREAM: raise NotImplementedError("only stream sockets are supported") if server_side: if server_hostname: raise ValueError("server_hostname can only be specified " "in client mode") if _session is not None: raise ValueError("session can only be specified " "in client mode") if self._context.check_hostname and not server_hostname: raise ValueError("check_hostname requires server_hostname") self._session = _session self.server_side = server_side self.server_hostname = server_hostname self.do_handshake_on_connect = do_handshake_on_connect self.suppress_ragged_eofs = suppress_ragged_eofs connected = False sock_timeout = None if sock is not None: # We're going non-blocking below, can't set timeout yet. sock_timeout = sock.gettimeout() socket.__init__(self, family=sock.family, type=sock.type, proto=sock.proto, fileno=sock.fileno()) # When Python 3 sockets are __del__, they close() themselves, # including their underlying fd, unless they have been detached. # Only detach if we succeed in taking ownership; if we raise an exception, # then the user might have no way to close us and release the resources. 
sock.detach() elif fileno is not None: socket.__init__(self, fileno=fileno) else: socket.__init__(self, family=family, type=type, proto=proto) self._closed = False self._sslobj = None # see if we're connected try: self._sock.getpeername() except OSError as e: if e.errno != errno.ENOTCONN: # This file descriptor is hosed, shared or not. # Clean up. self.close() raise # Next block is originally from # https://github.com/python/cpython/commit/75a875e0df0530b75b1470d797942f90f4a718d3, # intended to fix https://github.com/python/cpython/issues/108310 blocking = self.getblocking() self.setblocking(False) try: # We are not connected so this is not supposed to block, but # testing revealed otherwise on macOS and Windows so we do # the non-blocking dance regardless. Our raise when any data # is found means consuming the data is harmless. notconn_pre_handshake_data = self.recv(1) except OSError as e: # pylint:disable=redefined-outer-name # EINVAL occurs for recv(1) on non-connected on unix sockets. if e.errno not in (errno.ENOTCONN, errno.EINVAL): raise notconn_pre_handshake_data = b'' self.setblocking(blocking) if notconn_pre_handshake_data: # This prevents pending data sent to the socket before it was # closed from escaping to the caller who could otherwise # presume it came through a successful TLS connection. reason = "Closed before TLS handshake with data in recv buffer." notconn_pre_handshake_data_error = SSLError(e.errno, reason) # Add the SSLError attributes that _ssl.c always adds. notconn_pre_handshake_data_error.reason = reason notconn_pre_handshake_data_error.library = None try: self.close() except OSError: pass raise notconn_pre_handshake_data_error else: connected = True self.settimeout(sock_timeout) self._connected = connected if connected: # create the SSL object try: self._sslobj = self.__create_sslobj(server_side, _session) if do_handshake_on_connect: timeout = self.gettimeout() if timeout == 0.0: # non-blocking raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") self.do_handshake() except OSError: self.close() raise def _gevent_sock_class(self, family, type, proto, fileno): return _contextawaresock(family, type, proto, fileno, _wref(self)) def _extra_repr(self): return ' server=%s, cipher=%r' % ( self.server_side, self._sslobj.cipher() if self._sslobj is not None else '' ) @property def context(self): return self._context @context.setter def context(self, ctx): self._context = ctx self._sslobj.context = ctx @property def session(self): """The SSLSession for client socket.""" if self._sslobj is not None: return self._sslobj.session @session.setter def session(self, session): self._session = session if self._sslobj is not None: self._sslobj.session = session @property def session_reused(self): """Was the client session reused during handshake""" if self._sslobj is not None: return self._sslobj.session_reused def dup(self): raise NotImplementedError("Can't dup() %s instances" % self.__class__.__name__) def _checkClosed(self, msg=None): # raise an exception here if you wish to check for spurious closes pass def _check_connected(self): if not self._connected: # getpeername() will raise ENOTCONN if the socket is really # not connected; note that we can be connected even without # _connected being set, e.g. if connect() first returned # EAGAIN. self.getpeername() def read(self, nbytes=2014, buffer=None): """Read up to LEN bytes and return them. Return zero-length string on EOF. .. 
versionchanged:: 24.2.1 No longer requires a non-None *buffer* to implement ``len()``. This is a backport from 3.11.8. """ # pylint:disable=too-many-branches self._checkClosed() # The stdlib signature is (len=1024, buffer=None) # but that shadows the len builtin, and its hard/annoying to # get it back. # # Also, the return values are weird. If *buffer* is given, # we return the count of bytes added to buffer. Otherwise, # we return the string we read. bytes_read = 0 while True: if not self._sslobj: raise ValueError("Read on closed or unwrapped SSL socket.") if nbytes == 0: return b'' if buffer is None else 0 # Negative lengths are handled natively when the buffer is None # to raise a ValueError try: if buffer is not None: bytes_read += self._sslobj.read(nbytes, buffer) return bytes_read return self._sslobj.read(nbytes or 1024) except SSLWantReadError: if self.timeout == 0.0: raise self._wait(self._read_event, timeout_exc=_SSLErrorReadTimeout) except SSLWantWriteError: if self.timeout == 0.0: raise # note: using _SSLErrorReadTimeout rather than _SSLErrorWriteTimeout below is intentional self._wait(self._write_event, timeout_exc=_SSLErrorReadTimeout) except SSLZeroReturnError: # This one is only seen in PyPy 7.3.17 if self.suppress_ragged_eofs: return b'' if buffer is None else bytes_read raise except SSLError as ex: # All the other SSLxxxxxError classes extend SSLError, # so catch it last. if ex.args[0] == SSL_ERROR_EOF and self.suppress_ragged_eofs: return b'' if buffer is None else bytes_read raise # Certain versions of Python, built against certain # versions of OpenSSL operating in certain modes, can # produce ``ConnectionResetError`` instead of # ``SSLError``. Notably, it looks like anything built # against 1.1.1c does that? gevent briefly (from support of TLS 1.3 # in Sept 2019 to issue #1637 it June 2020) caught that error and treaded # it just like SSL_ERROR_EOF. But that's not what the standard library does. # So presumably errors that result from unexpected ``ConnectionResetError`` # are issues in gevent tests. def write(self, data): """Write DATA to the underlying SSL channel. Returns number of bytes of DATA actually transmitted.""" self._checkClosed() while True: if not self._sslobj: raise ValueError("Write on closed or unwrapped SSL socket.") try: return self._sslobj.write(data) except SSLError as ex: if ex.args[0] == SSL_ERROR_WANT_READ: if self.timeout == 0.0: raise self._wait(self._read_event, timeout_exc=_SSLErrorWriteTimeout) elif ex.args[0] == SSL_ERROR_WANT_WRITE: if self.timeout == 0.0: raise self._wait(self._write_event, timeout_exc=_SSLErrorWriteTimeout) else: raise def getpeercert(self, binary_form=False): """Returns a formatted version of the data in the certificate provided by the other end of the SSL channel. 
Return None if no certificate was provided, {} if a certificate was provided, but not validated.""" self._checkClosed() self._check_connected() try: c = self._sslobj.peer_certificate except AttributeError: # 3.6 c = self._sslobj.getpeercert return c(binary_form) def selected_npn_protocol(self): self._checkClosed() if not self._sslobj or not _ssl.HAS_NPN: return None return self._sslobj.selected_npn_protocol() if hasattr(_ssl, 'HAS_ALPN'): # 3.5+ def selected_alpn_protocol(self): self._checkClosed() if not self._sslobj or not _ssl.HAS_ALPN: # pylint:disable=no-member return None return self._sslobj.selected_alpn_protocol() def shared_ciphers(self): """Return a list of ciphers shared by the client during the handshake or None if this is not a valid server connection. """ return self._sslobj.shared_ciphers() def version(self): """Return a string identifying the protocol version used by the current SSL channel. """ if not self._sslobj: return None return self._sslobj.version() # We inherit sendfile from super(); it always uses `send` def cipher(self): self._checkClosed() if not self._sslobj: return None return self._sslobj.cipher() def compression(self): self._checkClosed() if not self._sslobj: return None return self._sslobj.compression() def send(self, data, flags=0, timeout=timeout_default): self._checkClosed() if timeout is timeout_default: timeout = self.timeout if self._sslobj: if flags != 0: raise ValueError( "non-zero flags not allowed in calls to send() on %s" % self.__class__) while True: try: return self._sslobj.write(data) except SSLWantReadError: if self.timeout == 0.0: return 0 self._wait(self._read_event) except SSLWantWriteError: if self.timeout == 0.0: return 0 self._wait(self._write_event) else: return socket.send(self, data, flags, timeout) def sendto(self, data, flags_or_addr, addr=None): self._checkClosed() if self._sslobj: raise ValueError("sendto not allowed on instances of %s" % self.__class__) if addr is None: return socket.sendto(self, data, flags_or_addr) return socket.sendto(self, data, flags_or_addr, addr) def sendmsg(self, *args, **kwargs): # Ensure programs don't send data unencrypted if they try to # use this method. raise NotImplementedError("sendmsg not allowed on instances of %s" % self.__class__) def sendall(self, data, flags=0): self._checkClosed() if self._sslobj: if flags != 0: raise ValueError( "non-zero flags not allowed in calls to sendall() on %s" % self.__class__) try: return socket.sendall(self, data, flags) except _socket_timeout: if self.timeout == 0.0: # Raised by the stdlib on non-blocking sockets raise SSLWantWriteError("The operation did not complete (write)") raise def recv(self, buflen=1024, flags=0): self._checkClosed() if self._sslobj: if flags != 0: raise ValueError( "non-zero flags not allowed in calls to recv() on %s" % self.__class__) if buflen == 0: # https://github.com/python/cpython/commit/00915577dd84ba75016400793bf547666e6b29b5 # Python #23804 return b'' return self.read(buflen) return socket.recv(self, buflen, flags) def recv_into(self, buffer, nbytes=None, flags=0): """ .. versionchanged:: 24.2.1 No longer requires a non-None *buffer* to implement ``len()``. This is a backport from 3.11.8. 
""" self._checkClosed() if nbytes is None: if buffer is not None: with memoryview(buffer) as view: nbytes = view.nbytes if not nbytes: nbytes = 1024 if self._sslobj: if flags != 0: raise ValueError("non-zero flags not allowed in calls to recv_into() on %s" % self.__class__) return self.read(nbytes, buffer) return socket.recv_into(self, buffer, nbytes, flags) def recvfrom(self, buflen=1024, flags=0): self._checkClosed() if self._sslobj: raise ValueError("recvfrom not allowed on instances of %s" % self.__class__) return socket.recvfrom(self, buflen, flags) def recvfrom_into(self, buffer, nbytes=None, flags=0): self._checkClosed() if self._sslobj: raise ValueError("recvfrom_into not allowed on instances of %s" % self.__class__) return socket.recvfrom_into(self, buffer, nbytes, flags) def recvmsg(self, *args, **kwargs): raise NotImplementedError("recvmsg not allowed on instances of %s" % self.__class__) def recvmsg_into(self, *args, **kwargs): raise NotImplementedError("recvmsg_into not allowed on instances of " "%s" % self.__class__) def pending(self): self._checkClosed() if self._sslobj: return self._sslobj.pending() return 0 def shutdown(self, how): self._checkClosed() self._sslobj = None socket.shutdown(self, how) def unwrap(self): if not self._sslobj: raise ValueError("No SSL wrapper around " + str(self)) try: # 3.7 and newer, that use the SSLSocket object # call its shutdown. shutdown = self._sslobj.shutdown except AttributeError: # Earlier versions use SSLObject, which covers # that with a layer. shutdown = self._sslobj.unwrap s = self._sock while True: try: s = shutdown() break except SSLWantReadError: # Callers of this method expect to get a socket # back, so we can't simply return 0, we have # to let these be raised if self.timeout == 0.0: raise self._wait(self._read_event) except SSLWantWriteError: if self.timeout == 0.0: raise self._wait(self._write_event) except SSLEOFError: break except SSLZeroReturnError: # Between PyPy 7.3.12 and PyPy 7.3.17, it started raising # this. This is equivalent to SSLEOFError for our purposes: # both indicate the connection has been closed, # the former uncleanly, the latter cleanly. break except OSError as e: if e.errno == 0: # The equivalent of SSLEOFError on unpatched versions of Python. # https://bugs.python.org/issue31122 break raise self._sslobj = None # The return value of shutting down the SSLObject is the # original wrapped socket passed to _wrap_socket, i.e., # _contextawaresock. But that object doesn't have the # gevent wrapper around it so it can't be used. We have to # wrap it back up with a gevent wrapper. assert s is self._sock # In the stdlib, SSLSocket subclasses socket.socket and passes itself # to _wrap_socket, so it gets itself back. We can't do that, we have to # pass our subclass of _socket.socket, _contextawaresock. # So ultimately we should return ourself. # See test_ftplib.py:TestTLS_FTPClass.test_ccc return self def _real_close(self): self._sslobj = None socket._real_close(self) def do_handshake(self): """Perform a TLS/SSL handshake.""" self._check_connected() while True: try: self._sslobj.do_handshake() break except SSLWantReadError: if self.timeout == 0.0: raise self._wait(self._read_event, timeout_exc=_SSLErrorHandshakeTimeout) except SSLWantWriteError: if self.timeout == 0.0: raise self._wait(self._write_event, timeout_exc=_SSLErrorHandshakeTimeout) # 3.7+, making it difficult to create these objects. # There's a new type, _ssl.SSLSocket, that takes the # place of SSLObject for self._sslobj. This one does it all. 
def __create_sslobj(self, server_side=False, session=None): return self.context._wrap_socket( self._sock, server_side, self.server_hostname, owner=self._sock, session=session ) def _real_connect(self, addr, connect_ex): if self.server_side: raise ValueError("can't connect in server-side mode") # Here we assume that the socket is client-side, and not # connected at the time of the call. We connect it, then wrap it. if self._connected: raise ValueError("attempt to connect already-connected SSLSocket!") self._sslobj = self.__create_sslobj(False, self._session) try: if connect_ex: rc = socket.connect_ex(self, addr) else: rc = None socket.connect(self, addr) if not rc: if self.do_handshake_on_connect: self.do_handshake() self._connected = True return rc except socket_error: self._sslobj = None raise def connect(self, addr): """Connects to remote ADDR, and then wraps the connection in an SSL channel.""" self._real_connect(addr, False) def connect_ex(self, addr): """Connects to remote ADDR, and then wraps the connection in an SSL channel.""" return self._real_connect(addr, True) def accept(self): """ Accepts a new connection from a remote client, and returns a tuple containing that new connection wrapped with a server-side SSL channel, and the address of the remote client. """ newsock, addr = super().accept() try: newsock = self._context.wrap_socket( newsock, do_handshake_on_connect=self.do_handshake_on_connect, suppress_ragged_eofs=self.suppress_ragged_eofs, server_side=True ) return newsock, addr except: newsock.close() raise def get_channel_binding(self, cb_type="tls-unique"): """Get channel binding data for current connection. Raise ValueError if the requested `cb_type` is not supported. Return bytes of the data or None if the data is not available (e.g. before the handshake). """ if hasattr(self._sslobj, 'get_channel_binding'): # 3.7+, and sslobj is not None return self._sslobj.get_channel_binding(cb_type) if cb_type not in CHANNEL_BINDING_TYPES: raise ValueError("Unsupported channel binding type") if cb_type != "tls-unique": raise NotImplementedError("{0} channel binding type not implemented".format(cb_type)) if self._sslobj is None: return None return self._sslobj.tls_unique_cb() def verify_client_post_handshake(self): # Only present in 3.7.1+; an attributeerror is alright if self._sslobj: return self._sslobj.verify_client_post_handshake() raise ValueError("No SSL wrapper around " + str(self)) if hasattr(__ssl__.SSLSocket, 'get_verified_chain'): # Added in 3.13 def get_verified_chain(self): chain = self._sslobj.get_verified_chain() if chain is None: return [] return [cert.public_bytes(_ssl.ENCODING_DER) for cert in chain] def get_unverified_chain(self): chain = self._sslobj.get_unverified_chain() if chain is None: return [] return [cert.public_bytes(_ssl.ENCODING_DER) for cert in chain] # Python does not support forward declaration of types SSLContext.sslsocket_class = SSLSocket # Python 3.2 onwards raise normal timeout errors, not SSLError. 
# See https://bugs.python.org/issue10272 _SSLErrorReadTimeout = _socket_timeout('The read operation timed out') _SSLErrorWriteTimeout = _socket_timeout('The write operation timed out') _SSLErrorHandshakeTimeout = _socket_timeout('The handshake operation timed out') def wrap_socket(sock, keyfile=None, certfile=None, server_side=False, cert_reqs=CERT_NONE, ssl_version=PROTOCOL_SSLv23, ca_certs=None, do_handshake_on_connect=True, suppress_ragged_eofs=True, ciphers=None): return SSLSocket(sock=sock, keyfile=keyfile, certfile=certfile, server_side=server_side, cert_reqs=cert_reqs, ssl_version=ssl_version, ca_certs=ca_certs, do_handshake_on_connect=do_handshake_on_connect, suppress_ragged_eofs=suppress_ragged_eofs, ciphers=ciphers) def get_server_certificate(addr, ssl_version=PROTOCOL_SSLv23, ca_certs=None): """Retrieve the certificate from the server at the specified address, and return it as a PEM-encoded string. If 'ca_certs' is specified, validate the server cert against it. If 'ssl_version' is specified, use it in the connection attempt.""" _, _ = addr if ca_certs is not None: cert_reqs = CERT_REQUIRED else: cert_reqs = CERT_NONE with create_connection(addr) as sock: with wrap_socket(sock, ssl_version=ssl_version, cert_reqs=cert_reqs, ca_certs=ca_certs) as sslsock: dercert = sslsock.getpeercert(True) sslsock = sock = None return DER_cert_to_PEM_cert(dercert) gevent-24.11.1/src/gevent/subprocess.py000066400000000000000000002430661471441230600200060ustar00rootroot00000000000000""" Cooperative ``subprocess`` module. .. caution:: On POSIX platforms, this module is not usable from native threads other than the main thread; attempting to do so will raise a :exc:`TypeError`. This module depends on libev's fork watchers. On POSIX systems, fork watchers are implemented using signals, and the thread to which process-directed signals are delivered `is not defined`_. Because each native thread has its own gevent/libev loop, this means that a fork watcher registered with one loop (thread) may never see the signal about a child it spawned if the signal is sent to a different thread. .. note:: The interface of this module is intended to match that of the standard library :mod:`subprocess` module (with many backwards compatible extensions from Python 3 backported to Python 2). There are some small differences between the Python 2 and Python 3 versions of that module (the Python 2 ``TimeoutExpired`` exception, notably, extends ``Timeout`` and there is no ``SubprocessError``) and between the POSIX and Windows versions. The HTML documentation here can only describe one version; for definitive documentation, see the standard library or the source code. .. _is not defined: http://www.linuxprogrammingblog.com/all-about-linux-signals?page=11 """ from __future__ import absolute_import, print_function # Can we split this up to make it cleaner? 
See https://github.com/gevent/gevent/issues/748 # pylint: disable=too-many-lines # Most of this we inherit from the standard lib # pylint: disable=bare-except,too-many-locals,too-many-statements,attribute-defined-outside-init # pylint: disable=too-many-branches,too-many-instance-attributes # Most of this is cross-platform # pylint: disable=no-member,expression-not-assigned,unused-argument,unused-variable import errno import gc import os import signal import sys import traceback # Python 3.9 try: from types import GenericAlias except ImportError: GenericAlias = None try: import grp except ImportError: grp = None try: import pwd except ImportError: pwd = None from gevent.event import AsyncResult from gevent.hub import _get_hub_noargs as get_hub from gevent.hub import linkproxy from gevent.hub import sleep from gevent.hub import getcurrent from gevent._compat import integer_types, string_types, xrange from gevent._compat import PY311 from gevent._compat import PYPY from gevent._compat import PY313 from gevent._compat import WIN from gevent._compat import fsdecode from gevent._compat import fsencode from gevent._compat import PathLike from gevent._util import _NONE from gevent._util import copy_globals from gevent.greenlet import Greenlet, joinall spawn = Greenlet.spawn import subprocess as __subprocess__ # Standard functions and classes that this module re-implements in a gevent-aware way. __implements__ = [ 'Popen', 'call', 'check_call', 'check_output', ] if not sys.platform.startswith('win32'): __implements__.append("_posixsubprocess") _posixsubprocess = None # Some symbols we define that we expect to export; # useful for static analysis PIPE = "PIPE should be imported" # Standard functions and classes that this module re-imports. __imports__ = [ 'PIPE', 'STDOUT', 'CalledProcessError', # Windows: 'CREATE_NEW_CONSOLE', 'CREATE_NEW_PROCESS_GROUP', 'STD_INPUT_HANDLE', 'STD_OUTPUT_HANDLE', 'STD_ERROR_HANDLE', 'SW_HIDE', 'STARTF_USESTDHANDLES', 'STARTF_USESHOWWINDOW', ] __extra__ = [ 'MAXFD', '_eintr_retry_call', 'STARTUPINFO', 'pywintypes', 'list2cmdline', '_subprocess', '_winapi', # Python 2.5 does not have _subprocess, so we don't use it # XXX We don't run on Py 2.5 anymore; can/could/should we use _subprocess? # It's only used on mswindows 'WAIT_OBJECT_0', 'WaitForSingleObject', 'GetExitCodeProcess', 'GetStdHandle', 'CreatePipe', 'DuplicateHandle', 'GetCurrentProcess', 'DUPLICATE_SAME_ACCESS', 'GetModuleFileName', 'GetVersion', 'CreateProcess', 'INFINITE', 'TerminateProcess', 'STILL_ACTIVE', # These were added for 3.5, but we make them available everywhere. 
'run', 'CompletedProcess', ] __imports__ += [ 'DEVNULL', 'getstatusoutput', 'getoutput', 'SubprocessError', 'TimeoutExpired', ] # Became standard in 3.5 __extra__.remove('run') __extra__.remove('CompletedProcess') __implements__.append('run') __implements__.append('CompletedProcess') # Removed in Python 3.5; this is the exact code that was removed: # https://hg.python.org/cpython/rev/f98b0a5e5ef5 __extra__.remove('MAXFD') try: MAXFD = os.sysconf("SC_OPEN_MAX") except: MAXFD = 256 # This was added to __all__ for windows in 3.6 __extra__.remove('STARTUPINFO') __imports__.append('STARTUPINFO') __imports__.extend([ 'ABOVE_NORMAL_PRIORITY_CLASS', 'BELOW_NORMAL_PRIORITY_CLASS', 'HIGH_PRIORITY_CLASS', 'IDLE_PRIORITY_CLASS', 'NORMAL_PRIORITY_CLASS', 'REALTIME_PRIORITY_CLASS', 'CREATE_NO_WINDOW', 'DETACHED_PROCESS', 'CREATE_DEFAULT_ERROR_MODE', 'CREATE_BREAKAWAY_FROM_JOB' ]) if PY313 and WIN: __imports__.extend([ 'STARTF_FORCEONFEEDBACK', 'STARTF_FORCEOFFFEEDBACK' ]) # Using os.posix_spawn() to start subprocesses # bypasses our child watchers on certain operating systems, # and with certain library versions. Possibly the right # fix is to monkey-patch os.posix_spawn like we do os.fork? # These have no effect, they're just here to match the stdlib. # TODO: When available, given a monkey patch on them, I think # we ought to be able to use them if the stdlib has identified them # as suitable. __implements__.extend([ '_use_posix_spawn', ]) def _use_posix_spawn(): return False _USE_POSIX_SPAWN = False if __subprocess__._USE_POSIX_SPAWN: __implements__.extend([ '_USE_POSIX_SPAWN', ]) else: __imports__.extend([ '_USE_POSIX_SPAWN', ]) if PY311: # Python 3.11 added some module-level attributes to control the # use of vfork. The docs specifically say that you should not try to read # them, only set them, so we don't provide them. # # Python 3.11 also added a test, test_surrogates_error_message, that behaves # differently based on whether or not the pure python implementation of forking # is in use, or the one written in C from _posixsubprocess. Obviously we don't call # that, so we need to make us look like a pure python version; it checks that this attribute # is none for that. 
_fork_exec = None __implements__.extend([ '_fork_exec', ] if sys.platform != 'win32' else [ ]) actually_imported = copy_globals(__subprocess__, globals(), only_names=__imports__, ignore_missing_names=True) # anything we couldn't import from here we may need to find # elsewhere __extra__.extend(set(__imports__).difference(set(actually_imported))) __imports__ = actually_imported del actually_imported # In Python 3 on Windows, a lot of the functions previously # in _subprocess moved to _winapi _subprocess = getattr(__subprocess__, '_subprocess', _NONE) _winapi = getattr(__subprocess__, '_winapi', _NONE) _attr_resolution_order = [__subprocess__, _subprocess, _winapi] for name in list(__extra__): if name in globals(): continue value = _NONE for place in _attr_resolution_order: value = getattr(place, name, _NONE) if value is not _NONE: break if value is _NONE: __extra__.remove(name) else: globals()[name] = value del _attr_resolution_order __all__ = __implements__ + __imports__ # Some other things we want to document for _x in ('run', 'CompletedProcess', 'TimeoutExpired'): if _x not in __all__: __all__.append(_x) mswindows = WIN if mswindows: import msvcrt # pylint: disable=import-error class Handle(int): closed = False def Close(self): if not self.closed: self.closed = True _winapi.CloseHandle(self) def Detach(self): if not self.closed: self.closed = True return int(self) raise ValueError("already closed") def __repr__(self): return "Handle(%d)" % int(self) __del__ = Close __str__ = __repr__ else: import fcntl import pickle from gevent import monkey fork = monkey.get_original('os', 'fork') from gevent.os import fork_and_watch STDOUT = __subprocess__.STDOUT # static analysis def call(*popenargs, **kwargs): """ call(args, *, stdin=None, stdout=None, stderr=None, shell=False, timeout=None) -> returncode Run command with arguments. Wait for command to complete or timeout, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example:: retcode = call(["ls", "-l"]) .. versionchanged:: 1.2a1 The ``timeout`` keyword argument is now accepted on all supported versions of Python (not just Python 3) and if it expires will raise a :exc:`TimeoutExpired` exception (under Python 2 this is a subclass of :exc:`~.Timeout`). """ timeout = kwargs.pop('timeout', None) with Popen(*popenargs, **kwargs) as p: try: return p.wait(timeout=timeout, _raise_exc=True) except: p.kill() p.wait() raise def check_call(*popenargs, **kwargs): """ check_call(args, *, stdin=None, stdout=None, stderr=None, shell=False, timeout=None) -> 0 Run command with arguments. Wait for command to complete. If the exit code was zero then return, otherwise raise :exc:`CalledProcessError`. The ``CalledProcessError`` object will have the return code in the returncode attribute. The arguments are the same as for the Popen constructor. Example:: retcode = check_call(["ls", "-l"]) """ retcode = call(*popenargs, **kwargs) if retcode: cmd = kwargs.get("args") if cmd is None: cmd = popenargs[0] raise CalledProcessError(retcode, cmd) # pylint:disable=undefined-variable return 0 def check_output(*popenargs, **kwargs): r""" check_output(args, *, input=None, stdin=None, stderr=None, shell=False, universal_newlines=False, timeout=None) -> output Run command with arguments and return its output. If the exit code was non-zero it raises a :exc:`CalledProcessError`. The ``CalledProcessError`` object will have the return code in the returncode attribute and output in the output attribute. 
The arguments are the same as for the Popen constructor. Example:: >>> check_output(["ls", "-1", "/dev/null"]) '/dev/null\n' The ``stdout`` argument is not allowed as it is used internally. To capture standard error in the result, use ``stderr=STDOUT``:: >>> output = check_output(["/bin/sh", "-c", ... "ls -l non_existent_file ; exit 0"], ... stderr=STDOUT).decode('ascii').strip() >>> print(output.rsplit(':', 1)[1].strip()) No such file or directory There is an additional optional argument, "input", allowing you to pass a string to the subprocess's stdin. If you use this argument you may not also use the Popen constructor's "stdin" argument, as it too will be used internally. Example:: >>> check_output(["sed", "-e", "s/foo/bar/"], ... input=b"when in the course of fooman events\n") 'when in the course of barman events\n' If ``universal_newlines=True`` is passed, the return value will be a string rather than bytes. .. versionchanged:: 1.2a1 The ``timeout`` keyword argument is now accepted on all supported versions of Python (not just Python 3) and if it expires will raise a :exc:`TimeoutExpired` exception (under Python 2 this is a subclass of :exc:`~.Timeout`). .. versionchanged:: 1.2a1 The ``input`` keyword argument is now accepted on all supported versions of Python, not just Python 3 .. versionchanged:: 22.08.0 Passing the ``check`` keyword argument is forbidden, just as in Python 3.11. """ timeout = kwargs.pop('timeout', None) if 'stdout' in kwargs: raise ValueError('stdout argument not allowed, it will be overridden.') if 'check' in kwargs: raise ValueError('check argument not allowed, it will be overridden.') if 'input' in kwargs: if 'stdin' in kwargs: raise ValueError('stdin and input arguments may not both be used.') inputdata = kwargs['input'] del kwargs['input'] kwargs['stdin'] = PIPE else: inputdata = None with Popen(*popenargs, stdout=PIPE, **kwargs) as process: try: output, unused_err = process.communicate(inputdata, timeout=timeout) except TimeoutExpired: process.kill() output, unused_err = process.communicate() raise TimeoutExpired(process.args, timeout, output=output) except: process.kill() process.wait() raise retcode = process.poll() if retcode: # pylint:disable=undefined-variable raise CalledProcessError(retcode, process.args, output=output) return output _PLATFORM_DEFAULT_CLOSE_FDS = object() if 'TimeoutExpired' not in globals(): # Python 2 # Make TimeoutExpired inherit from _Timeout so it can be caught # the way we used to throw things (except Timeout), but make sure it doesn't # init a timer. Note that we can't have a fake 'SubprocessError' that inherits # from exception, because we need TimeoutExpired to just be a BaseException for # bwc. from gevent.timeout import Timeout as _Timeout class TimeoutExpired(_Timeout): """ This exception is raised when the timeout expires while waiting for a child process in `communicate`. Under Python 2, this is a gevent extension with the same name as the Python 3 class for source-code forward compatibility. However, it extends :class:`gevent.timeout.Timeout` for backwards compatibility (because we used to just raise a plain ``Timeout``); note that ``Timeout`` is a ``BaseException``, *not* an ``Exception``. .. 
versionadded:: 1.2a1 """ def __init__(self, cmd, timeout, output=None): _Timeout.__init__(self, None) self.cmd = cmd self.seconds = timeout self.output = output @property def timeout(self): return self.seconds def __str__(self): return ("Command '%s' timed out after %s seconds" % (self.cmd, self.timeout)) if hasattr(os, 'set_inheritable'): _set_inheritable = os.set_inheritable else: _set_inheritable = lambda i, v: True def FileObject(*args, **kwargs): # Defer importing FileObject until we need it # to allow it to be configured more easily. from gevent.fileobject import FileObject as _FileObject globals()['FileObject'] = _FileObject return _FileObject(*args) class _CommunicatingGreenlets(object): # At most, exactly one of these objects may be created # for a given Popen object. This ensures that only one background # greenlet at a time will be reading from the file object. This matters because # if a timeout exception is raised, the user may call back into communicate() to # get the output (usually after killing the process; see run()). We must not # lose output in that case (Python 3 specifically documents that raising a timeout # doesn't lose output). Also, attempting to read from a pipe while it's already # being read from results in `RuntimeError: reentrant call in io.BufferedReader`; # the same thing happens if you attempt to close() it while that's in progress. __slots__ = ( 'stdin', 'stdout', 'stderr', '_all_greenlets', ) def __init__(self, popen, input_data): self.stdin = self.stdout = self.stderr = None if popen.stdin: # Even if no data, we need to close self.stdin = spawn(self._write_and_close, popen.stdin, input_data) # If the timeout parameter is used, and the caller calls back after # getting a TimeoutExpired exception, we can wind up with multiple # greenlets trying to run and read from and close stdout/stderr. # That's bad because it can lead to 'RuntimeError: reentrant call in io.BufferedReader'. # We can't just kill the previous greenlets when a timeout happens, # though, because we risk losing the output collected by that greenlet # (and Python 3, where timeout is an official parameter, explicitly says # that no output should be lost in the event of a timeout.) Instead, we're # watching for the exception and ignoring it. It's not elegant, # but it works if popen.stdout: self.stdout = spawn(self._read_and_close, popen.stdout) if popen.stderr: self.stderr = spawn(self._read_and_close, popen.stderr) all_greenlets = [] for g in self.stdin, self.stdout, self.stderr: if g is not None: all_greenlets.append(g) self._all_greenlets = tuple(all_greenlets) def __iter__(self): return iter(self._all_greenlets) def __bool__(self): return bool(self._all_greenlets) __nonzero__ = __bool__ def __len__(self): return len(self._all_greenlets) @staticmethod def _write_and_close(fobj, data): try: if data: fobj.write(data) if hasattr(fobj, 'flush'): # 3.6 started expecting flush to be called. fobj.flush() except OSError as ex: # Test cases from the stdlib can raise BrokenPipeError # without setting an errno value. This matters because # Python 2 doesn't have a BrokenPipeError. 
if isinstance(ex, BrokenPipeError) and ex.errno is None: ex.errno = errno.EPIPE if ex.errno not in (errno.EPIPE, errno.EINVAL): raise finally: try: fobj.close() except EnvironmentError: pass @staticmethod def _read_and_close(fobj): try: return fobj.read() finally: try: fobj.close() except EnvironmentError: pass class Popen(object): """ The underlying process creation and management in this module is handled by the Popen class. It offers a lot of flexibility so that developers are able to handle the less common cases not covered by the convenience functions. .. seealso:: :class:`subprocess.Popen` This class should have the same interface as the standard library class. .. caution:: The default values of some arguments, notably ``buffering``, differ between Python 2 and Python 3. For the most consistent behaviour across versions, it's best to explicitly pass the desired values. .. caution:: On Python 2, the ``read`` method of the ``stdout`` and ``stderr`` attributes will not be buffered unless buffering is explicitly requested (e.g., `bufsize=-1`). This is different than the ``read`` method of the standard library attributes, which will buffer internally even if no buffering has been requested. This matches the Python 3 behaviour. For portability, please explicitly request buffering if you want ``read(n)`` to return all ``n`` bytes, making more than one system call if needed. See `issue 1701 `_ for more context. .. versionchanged:: 1.2a1 Instances can now be used as context managers under Python 2.7. Previously this was restricted to Python 3. .. versionchanged:: 1.2a1 Instances now save the ``args`` attribute under Python 2.7. Previously this was restricted to Python 3. .. versionchanged:: 1.2b1 Add the ``encoding`` and ``errors`` parameters for Python 3. .. versionchanged:: 1.3a1 Accept "path-like" objects for the *cwd* parameter on all platforms. This was added to Python 3.6. Previously with gevent, it only worked on POSIX platforms on 3.6. .. versionchanged:: 1.3a1 Add the ``text`` argument as a synonym for ``universal_newlines``, as added on Python 3.7. .. versionchanged:: 1.3a2 Allow the same keyword arguments under Python 2 as Python 3: ``pass_fds``, ``start_new_session``, ``restore_signals``, ``encoding`` and ``errors``. Under Python 2, ``encoding`` and ``errors`` are ignored because native handling of universal newlines is used. .. versionchanged:: 1.3a2 Under Python 2, ``restore_signals`` defaults to ``False``. Previously it defaulted to ``True``, the same as it did in Python 3. .. versionchanged:: 20.6.0 Add the *group*, *extra_groups*, *user*, and *umask* arguments. These were added to Python 3.9, but are available in any gevent version, provided the underlying platform support is present. .. versionchanged:: 20.12.0 On Python 2 only, if unbuffered binary communication is requested, the ``stdin`` attribute of this object will have a ``write`` method that actually performs internal buffering and looping, similar to the standard library. It guarantees to write all the data given to it in a single call (but internally it may make many system calls and/or trips around the event loop to accomplish this). See :issue:`1711`. .. versionchanged:: 21.12.0 Added the ``pipesize`` argument for compatibility with Python 3.10. This is ignored on all platforms. .. versionchanged:: 22.08.0 Added the ``process_group`` and ``check`` arguments for compatibility with Python 3.11. .. versionchanged:: 24.10.1 To match Python 3.13, ``stdout=STDOUT`` now raises a :exc:`ValueError`. 
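    A minimal usage sketch (the command and the timeout value are illustrative
    only)::

        with Popen(["ls", "-l"], stdout=PIPE) as p:
            out, _ = p.communicate(timeout=10)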
""" if GenericAlias is not None: # 3.9, annoying typing is creeping everywhere. __class_getitem__ = classmethod(GenericAlias) # The value returned from communicate() when there was nothing to read. # Changes if we're in text mode or universal newlines mode. _communicate_empty_value = b'' # pylint:disable-next=too-many-positional-arguments def __init__(self, args, bufsize=-1, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS, shell=False, cwd=None, env=None, universal_newlines=None, startupinfo=None, creationflags=0, restore_signals=True, start_new_session=False, pass_fds=(), # Added in 3.6. These are kept as ivars encoding=None, errors=None, # Added in 3.7. Not an ivar directly. text=None, # Added in 3.9 group=None, extra_groups=None, user=None, umask=-1, # Added in 3.10, but ignored. pipesize=-1, # Added in 3.11 process_group=None, # gevent additions threadpool=None): self.encoding = encoding self.errors = errors hub = get_hub() if bufsize is None: # Python 2 doesn't allow None at all, but Python 3 treats # it the same as the default. We do as well. bufsize = -1 if not isinstance(bufsize, integer_types): raise TypeError("bufsize must be an integer") if stdout is STDOUT: raise ValueError("STDOUT can only be used for stderr") if mswindows: if preexec_fn is not None: raise ValueError("preexec_fn is not supported on Windows " "platforms") if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS: close_fds = True if threadpool is None: threadpool = hub.threadpool self.threadpool = threadpool self._waiting = False else: # POSIX if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS: # close_fds has different defaults on Py3/Py2 close_fds = True if pass_fds and not close_fds: import warnings warnings.warn("pass_fds overriding close_fds.", RuntimeWarning) close_fds = True if startupinfo is not None: raise ValueError("startupinfo is only supported on Windows " "platforms") if creationflags != 0: raise ValueError("creationflags is only supported on Windows " "platforms") assert threadpool is None self._loop = hub.loop # Validate the combinations of text and universal_newlines if (text is not None and universal_newlines is not None and bool(universal_newlines) != bool(text)): # pylint:disable=undefined-variable raise SubprocessError('Cannot disambiguate when both text ' 'and universal_newlines are supplied but ' 'different. Pass one or the other.') self.args = args # Previously this was Py3 only. self.stdin = None self.stdout = None self.stderr = None self.pid = None self.returncode = None self.universal_newlines = universal_newlines self.result = AsyncResult() # Input and output objects. The general principle is like # this: # # Parent Child # ------ ----- # p2cwrite ---stdin---> p2cread # c2pread <--stdout--- c2pwrite # errread <--stderr--- errwrite # # On POSIX, the child objects are file descriptors. On # Windows, these are Windows file handles. The parent objects # are file descriptors on both platforms. The parent objects # are -1 when not using PIPEs. The child objects are -1 # when not redirecting. (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) = self._get_handles(stdin, stdout, stderr) # We wrap OS handles *before* launching the child, otherwise a # quickly terminating child could make our fds unwrappable # (see #8458). 
if mswindows: if p2cwrite != -1: p2cwrite = msvcrt.open_osfhandle(p2cwrite.Detach(), 0) if c2pread != -1: c2pread = msvcrt.open_osfhandle(c2pread.Detach(), 0) if errread != -1: errread = msvcrt.open_osfhandle(errread.Detach(), 0) text_mode = self.encoding or self.errors or universal_newlines or text if text_mode or universal_newlines: # Always a native str in universal_newlines mode, even when that # str type is bytes. Additionally, text_mode is only true under # Python 3, so it's actually a unicode str self._communicate_empty_value = '' uid, gid, gids = self.__handle_uids(user, group, extra_groups) if p2cwrite != -1: if text_mode: # Under Python 3, if we left on the 'b' we'd get different results # depending on whether we used FileObjectPosix or FileObjectThread self.stdin = FileObject(p2cwrite, 'w', bufsize, encoding=self.encoding, errors=self.errors) else: self.stdin = FileObject(p2cwrite, 'wb', bufsize) if c2pread != -1: if universal_newlines or text_mode: self.stdout = FileObject(c2pread, 'r', bufsize, encoding=self.encoding, errors=self.errors) # NOTE: Universal Newlines are broken on Windows/Py3, at least # in some cases. This is true in the stdlib subprocess module # as well; the following line would fix the test cases in # test__subprocess.py that depend on python_universal_newlines, # but would be inconsistent with the stdlib: else: self.stdout = FileObject(c2pread, 'rb', bufsize) if errread != -1: if universal_newlines or text_mode: self.stderr = FileObject(errread, 'r', bufsize, encoding=encoding, errors=errors) else: self.stderr = FileObject(errread, 'rb', bufsize) self._closed_child_pipe_fds = False # Convert here for the sake of all platforms. os.chdir accepts # path-like objects natively under 3.6, but CreateProcess # doesn't. cwd = fsdecode(cwd) if cwd is not None else None try: self._execute_child(args, executable, preexec_fn, close_fds, pass_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, gid, gids, uid, umask, start_new_session, process_group) except: # Cleanup if the child failed starting. # (gevent: New in python3, but reported as gevent bug in #347. # Note that under Py2, any error raised below will replace the # original error so we have to use reraise) for f in filter(None, (self.stdin, self.stdout, self.stderr)): try: f.close() except OSError: pass # Ignore EBADF or other errors. 
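            # The child's ends of any pipes we created (and the devnull fd,
            # if one was opened) are not wrapped in the file objects above,
            # so close them here too or they would leak when the child never
            # started.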
if not self._closed_child_pipe_fds: to_close = [] if stdin == PIPE: to_close.append(p2cread) if stdout == PIPE: to_close.append(c2pwrite) if stderr == PIPE: to_close.append(errwrite) if hasattr(self, '_devnull'): to_close.append(self._devnull) for fd in to_close: try: os.close(fd) except OSError: pass raise def __handle_uids(self, user, group, extra_groups): gid = None if group is not None: if not hasattr(os, 'setregid'): raise ValueError("The 'group' parameter is not supported on the " "current platform") if isinstance(group, str): if grp is None: raise ValueError("The group parameter cannot be a string " "on systems without the grp module") gid = grp.getgrnam(group).gr_gid elif isinstance(group, int): gid = group else: raise TypeError("Group must be a string or an integer, not {}" .format(type(group))) if gid < 0: raise ValueError("Group ID cannot be negative, got %s" % gid) gids = None if extra_groups is not None: if not hasattr(os, 'setgroups'): raise ValueError("The 'extra_groups' parameter is not " "supported on the current platform") if isinstance(extra_groups, str): raise ValueError("Groups must be a list, not a string") gids = [] for extra_group in extra_groups: if isinstance(extra_group, str): if grp is None: raise ValueError("Items in extra_groups cannot be " "strings on systems without the " "grp module") gids.append(grp.getgrnam(extra_group).gr_gid) elif isinstance(extra_group, int): if extra_group >= 2**64: # This check is implicit in the C version of _Py_Gid_Converter. # # We actually need access to the C type ``gid_t`` to get # its actual length. This just makes the test that was added # for the bug pass. That's OK though, if we guess too big here, # we should get an OverflowError from the setgroups() # call we make. The only difference is the type of exception. # # See https://bugs.python.org/issue42655 raise ValueError("Item in extra_groups is too large") gids.append(extra_group) else: raise TypeError("Items in extra_groups must be a string " "or integer, not {}" .format(type(extra_group))) # make sure that the gids are all positive here so we can do less # checking in the C code for gid_check in gids: if gid_check < 0: raise ValueError("Group ID cannot be negative, got %s" % (gid_check,)) uid = None if user is not None: if not hasattr(os, 'setreuid'): raise ValueError("The 'user' parameter is not supported on " "the current platform") if isinstance(user, str): if pwd is None: raise ValueError("The user parameter cannot be a string " "on systems without the pwd module") uid = pwd.getpwnam(user).pw_uid elif isinstance(user, int): uid = user else: raise TypeError("User must be a string or an integer") if uid < 0: raise ValueError("User ID cannot be negative, got %s" % (uid,)) return uid, gid, gids def __repr__(self): return '<%s at 0x%x pid=%r returncode=%r>' % (self.__class__.__name__, id(self), self.pid, self.returncode) def _on_child(self, watcher): watcher.stop() status = watcher.rstatus if os.WIFSIGNALED(status): self.returncode = -os.WTERMSIG(status) else: self.returncode = os.WEXITSTATUS(status) self.result.set(self.returncode) def _get_devnull(self): if not hasattr(self, '_devnull'): self._devnull = os.open(os.devnull, os.O_RDWR) return self._devnull _communicating_greenlets = None def communicate(self, input=None, timeout=None): """ Interact with process and return its output and error. - Send *input* data to stdin. - Read data from stdout and stderr, until end-of-file is reached. - Wait for process to terminate. 
The optional *input* argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr). :keyword timeout: Under Python 2, this is a gevent extension; if given and it expires, we will raise :exc:`TimeoutExpired`, which extends :exc:`gevent.timeout.Timeout` (note that this only extends :exc:`BaseException`, *not* :exc:`Exception`) Under Python 3, this raises the standard :exc:`TimeoutExpired` exception. .. versionchanged:: 1.1a2 Under Python 2, if the *timeout* elapses, raise the :exc:`gevent.timeout.Timeout` exception. Previously, we silently returned. .. versionchanged:: 1.1b5 Honor a *timeout* even if there's no way to communicate with the child (stdin, stdout, and stderr are not pipes). """ if self._communicating_greenlets is None: self._communicating_greenlets = _CommunicatingGreenlets(self, input) greenlets = self._communicating_greenlets # If we were given stdin=stdout=stderr=None, we have no way to # communicate with the child, and thus no greenlets to wait # on. This is a nonsense case, but it comes up in the test # case for Python 3.5 (test_subprocess.py # RunFuncTestCase.test_timeout). Instead, we go directly to # self.wait if not greenlets and timeout is not None: self.wait(timeout=timeout, _raise_exc=True) done = joinall(greenlets, timeout=timeout) # Allow finished greenlets, if any, to raise. This takes priority over # the timeout exception. for greenlet in done: greenlet.get() if timeout is not None and len(done) != len(self._communicating_greenlets): raise TimeoutExpired(self.args, timeout) # Close only after we're sure that everything is done # (there was no timeout, or there was, but everything finished). # There should be no greenlets still running, even from a prior # attempt. If there are, then this can raise RuntimeError: 'reentrant call'. # So we ensure that previous greenlets are dead. for pipe in (self.stdout, self.stderr): if pipe: try: pipe.close() except RuntimeError: pass self.wait() return (None if greenlets.stdout is None else greenlets.stdout.get(), None if greenlets.stderr is None else greenlets.stderr.get()) def poll(self): """Check if child process has terminated. Set and return :attr:`returncode` attribute.""" return self._internal_poll() def __enter__(self): return self def __exit__(self, t, v, tb): if self.stdout: self.stdout.close() if self.stderr: self.stderr.close() try: # Flushing a BufferedWriter may raise an error if self.stdin: self.stdin.close() finally: # Wait for the process to terminate, to avoid zombies. # JAM: gevent: If the process never terminates, this # blocks forever. 
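# --- Illustrative aside (not part of this module): the communicate()-with-timeout
# pattern documented above, sketched for callers. Assumes gevent is installed and
# a POSIX 'sleep' binary exists; this mirrors what run() further below does when
# its own timeout expires.
#
#     from gevent.subprocess import Popen, PIPE, TimeoutExpired
#
#     p = Popen(['sleep', '10'], stdout=PIPE, stderr=PIPE)
#     try:
#         out, err = p.communicate(timeout=0.5)
#     except TimeoutExpired:
#         # The child is still running: kill it, then collect whatever it
#         # already wrote so the pipes are drained and closed.
#         p.kill()
#         out, err = p.communicate()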
self.wait() def _gevent_result_wait(self, timeout=None, raise_exc=True): result = self.result.wait(timeout=timeout) if raise_exc and timeout is not None and not self.result.ready(): raise TimeoutExpired(self.args, timeout) return result if mswindows: # # Windows methods # def _get_handles(self, stdin, stdout, stderr): """Construct and return tuple with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ # pylint:disable=undefined-variable if stdin is None and stdout is None and stderr is None: return (-1, -1, -1, -1, -1, -1) p2cread, p2cwrite = -1, -1 c2pread, c2pwrite = -1, -1 errread, errwrite = -1, -1 try: DEVNULL except NameError: _devnull = object() else: _devnull = DEVNULL if stdin is None: p2cread = GetStdHandle(STD_INPUT_HANDLE) if p2cread is None: p2cread, _ = CreatePipe(None, 0) p2cread = Handle(p2cread) _winapi.CloseHandle(_) elif stdin == PIPE: p2cread, p2cwrite = CreatePipe(None, 0) p2cread, p2cwrite = Handle(p2cread), Handle(p2cwrite) elif stdin == _devnull: p2cread = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stdin, int): p2cread = msvcrt.get_osfhandle(stdin) else: # Assuming file-like object p2cread = msvcrt.get_osfhandle(stdin.fileno()) p2cread = self._make_inheritable(p2cread) if stdout is None: c2pwrite = GetStdHandle(STD_OUTPUT_HANDLE) if c2pwrite is None: _, c2pwrite = CreatePipe(None, 0) c2pwrite = Handle(c2pwrite) _winapi.CloseHandle(_) elif stdout == PIPE: c2pread, c2pwrite = CreatePipe(None, 0) c2pread, c2pwrite = Handle(c2pread), Handle(c2pwrite) elif stdout == _devnull: c2pwrite = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stdout, int): c2pwrite = msvcrt.get_osfhandle(stdout) else: # Assuming file-like object c2pwrite = msvcrt.get_osfhandle(stdout.fileno()) c2pwrite = self._make_inheritable(c2pwrite) if stderr is None: errwrite = GetStdHandle(STD_ERROR_HANDLE) if errwrite is None: _, errwrite = CreatePipe(None, 0) errwrite = Handle(errwrite) _winapi.CloseHandle(_) elif stderr == PIPE: errread, errwrite = CreatePipe(None, 0) errread, errwrite = Handle(errread), Handle(errwrite) elif stderr == STDOUT: errwrite = c2pwrite elif stderr == _devnull: errwrite = msvcrt.get_osfhandle(self._get_devnull()) elif isinstance(stderr, int): errwrite = msvcrt.get_osfhandle(stderr) else: # Assuming file-like object errwrite = msvcrt.get_osfhandle(stderr.fileno()) errwrite = self._make_inheritable(errwrite) return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" # pylint:disable=undefined-variable return DuplicateHandle(GetCurrentProcess(), handle, GetCurrentProcess(), 0, 1, DUPLICATE_SAME_ACCESS) def _find_w9xpopen(self): """Find and return absolute path to w9xpopen.exe""" # pylint:disable=undefined-variable w9xpopen = os.path.join(os.path.dirname(GetModuleFileName(0)), "w9xpopen.exe") if not os.path.exists(w9xpopen): # Eeek - file-not-found - possibly an embedding # situation - see if we can locate it in sys.exec_prefix w9xpopen = os.path.join(os.path.dirname(sys.exec_prefix), "w9xpopen.exe") if not os.path.exists(w9xpopen): raise RuntimeError("Cannot locate w9xpopen.exe, which is " "needed for Popen to work with your " "shell or platform.") return w9xpopen def _filter_handle_list(self, handle_list): """Filter out console handles that can't be used in lpAttributeList["handle_list"] and make sure the list isn't empty. 
This also removes duplicate handles.""" # An handle with it's lowest two bits set might be a special console # handle that if passed in lpAttributeList["handle_list"], will # cause it to fail. # Only works on 3.7+ return list({handle for handle in handle_list if handle & 0x3 != 0x3 or _winapi.GetFileType(handle) != _winapi.FILE_TYPE_CHAR}) def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group): """Execute program (MS Windows version)""" # pylint:disable=undefined-variable assert not pass_fds, "pass_fds not supported on Windows." if isinstance(args, str): pass elif isinstance(args, bytes): if shell: raise TypeError('bytes args is not allowed on Windows') args = list2cmdline([args]) elif isinstance(args, PathLike): if shell: raise TypeError('path-like args is not allowed when ' 'shell is true') args = list2cmdline([args]) else: args = list2cmdline(args) if executable is not None: executable = fsdecode(executable) if not isinstance(args, string_types): args = list2cmdline(args) # Process startup details if startupinfo is None: startupinfo = STARTUPINFO() elif hasattr(startupinfo, 'copy'): # bpo-34044: Copy STARTUPINFO since it is modified below, # so the caller can reuse it multiple times. startupinfo = startupinfo.copy() elif hasattr(startupinfo, '_copy'): # When the fix was backported to Python 3.7, copy() was # made private as _copy. startupinfo = startupinfo._copy() use_std_handles = -1 not in (p2cread, c2pwrite, errwrite) if use_std_handles: startupinfo.dwFlags |= STARTF_USESTDHANDLES startupinfo.hStdInput = p2cread startupinfo.hStdOutput = c2pwrite startupinfo.hStdError = errwrite if hasattr(startupinfo, 'lpAttributeList'): # Support for Python >= 3.7 attribute_list = startupinfo.lpAttributeList have_handle_list = bool(attribute_list and "handle_list" in attribute_list and attribute_list["handle_list"]) # If we were given an handle_list or need to create one if have_handle_list or (use_std_handles and close_fds): if attribute_list is None: attribute_list = startupinfo.lpAttributeList = {} handle_list = attribute_list["handle_list"] = \ list(attribute_list.get("handle_list", [])) if use_std_handles: handle_list += [int(p2cread), int(c2pwrite), int(errwrite)] handle_list[:] = self._filter_handle_list(handle_list) if handle_list: if not close_fds: import warnings warnings.warn("startupinfo.lpAttributeList['handle_list'] " "overriding close_fds", RuntimeWarning) # When using the handle_list we always request to inherit # handles but the only handles that will be inherited are # the ones in the handle_list close_fds = False if shell: startupinfo.dwFlags |= STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_HIDE comspec = os.environ.get("COMSPEC", "cmd.exe") args = '{} /c "{}"'.format(comspec, args) if GetVersion() >= 0x80000000 or os.path.basename(comspec).lower() == "command.com": # Win9x, or using command.com on NT. We need to # use the w9xpopen intermediate program. For more # information, see KB Q150956 # (http://web.archive.org/web/20011105084002/http://support.microsoft.com/support/kb/articles/Q150/9/56.asp) w9xpopen = self._find_w9xpopen() args = '"%s" %s' % (w9xpopen, args) # Not passing CREATE_NEW_CONSOLE has been known to # cause random failures on win9x. 
Specifically a # dialog: "Your program accessed mem currently in # use at xxx" and a hopeful warning about the # stability of your system. Cost is Ctrl+C wont # kill children. creationflags |= CREATE_NEW_CONSOLE # PyPy 2.7 7.3.6 is now producing these errors. This # happens automatically on Posix platforms, and is built # in to the CreateProcess call on CPython 2 & 3. It's not # clear why we don't pick it up for free from the # CreateProcess call on PyPy. Currently we don't test PyPy3 on Windows, # so we don't know for sure if it's built into CreateProcess there. if PYPY: def _check_nul(s, err_kind=ValueError): if not s: return nul = b'\0' if isinstance(s, bytes) else '\0' if nul in s: # PyPy 2 expects a TypeError; Python 3 raises ValueError always. raise err_kind("argument must be a string without NUL characters") def _check_env(): if not env: return for k, v in env.items(): _check_nul(k) _check_nul(v) if '=' in k: raise ValueError("'=' not allowed in environment keys") _check_nul(executable) _check_nul(args) _check_env() # Start the process try: hp, ht, pid, tid = CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, cwd, # fsdecode handled earlier startupinfo) # except IOError as e: # From 2.6 on, pywintypes.error was defined as IOError # # Translate pywintypes.error to WindowsError, which is # # a subclass of OSError. FIXME: We should really # # translate errno using _sys_errlist (or similar), but # # how can this be done from Python? # raise # don't remap here # raise WindowsError(*e.args) finally: # Child is launched. Close the parent's copy of those pipe # handles that only the child should have open. You need # to make sure that no handles to the write end of the # output pipe are maintained in this process or else the # pipe will not close when the child process exits and the # ReadFile will hang. def _close(x): if x is not None and x != -1: if hasattr(x, 'Close'): x.Close() else: _winapi.CloseHandle(x) _close(p2cread) _close(c2pwrite) _close(errwrite) if hasattr(self, '_devnull'): os.close(self._devnull) # Retain the process handle, but close the thread handle self._child_created = True self._handle = Handle(hp) if not hasattr(hp, 'Close') else hp self.pid = pid _winapi.CloseHandle(ht) if not hasattr(ht, 'Close') else ht.Close() def _internal_poll(self): """Check if child process has terminated. Returns returncode attribute. """ # pylint:disable=undefined-variable if self.returncode is None: if WaitForSingleObject(self._handle, 0) == WAIT_OBJECT_0: self.returncode = GetExitCodeProcess(self._handle) self.result.set(self.returncode) return self.returncode def rawlink(self, callback): if not self.result.ready() and not self._waiting: self._waiting = True Greenlet.spawn(self._wait) self.result.rawlink(linkproxy(callback, self)) # XXX unlink def _blocking_wait(self): # pylint:disable=undefined-variable WaitForSingleObject(self._handle, INFINITE) self.returncode = GetExitCodeProcess(self._handle) return self.returncode def _wait(self): self.threadpool.spawn(self._blocking_wait).rawlink(self.result) def wait(self, timeout=None, _raise_exc=True): """Wait for child process to terminate. 
Returns returncode attribute.""" if self.returncode is None: if not self._waiting: self._waiting = True self._wait() return self._gevent_result_wait(timeout, _raise_exc) def send_signal(self, sig): """Send a signal to the process """ if sig == signal.SIGTERM: self.terminate() elif sig == signal.CTRL_C_EVENT: os.kill(self.pid, signal.CTRL_C_EVENT) elif sig == signal.CTRL_BREAK_EVENT: os.kill(self.pid, signal.CTRL_BREAK_EVENT) else: raise ValueError("Unsupported signal: {}".format(sig)) def terminate(self): """Terminates the process """ # pylint:disable=undefined-variable # Don't terminate a process that we know has already died. if self.returncode is not None: return try: TerminateProcess(self._handle, 1) except OSError as e: # ERROR_ACCESS_DENIED (winerror 5) is received when the # process already died. if e.winerror != 5: raise rc = GetExitCodeProcess(self._handle) if rc == STILL_ACTIVE: raise self.returncode = rc self.result.set(self.returncode) kill = terminate else: # # POSIX methods # def rawlink(self, callback): # Not public documented, part of the link protocol self.result.rawlink(linkproxy(callback, self)) # XXX unlink def _get_handles(self, stdin, stdout, stderr): """Construct and return tuple with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ p2cread, p2cwrite = -1, -1 c2pread, c2pwrite = -1, -1 errread, errwrite = -1, -1 try: DEVNULL except NameError: _devnull = object() else: _devnull = DEVNULL if stdin is None: pass elif stdin == PIPE: p2cread, p2cwrite = self.pipe_cloexec() elif stdin == _devnull: p2cread = self._get_devnull() elif isinstance(stdin, int): p2cread = stdin else: # Assuming file-like object p2cread = stdin.fileno() if stdout is None: pass elif stdout == PIPE: c2pread, c2pwrite = self.pipe_cloexec() elif stdout == _devnull: c2pwrite = self._get_devnull() elif isinstance(stdout, int): c2pwrite = stdout else: # Assuming file-like object c2pwrite = stdout.fileno() if stderr is None: pass elif stderr == PIPE: errread, errwrite = self.pipe_cloexec() elif stderr == STDOUT: # pylint:disable=undefined-variable if c2pwrite != -1: errwrite = c2pwrite else: # child's stdout is not set, use parent's stdout errwrite = sys.__stdout__.fileno() elif stderr == _devnull: errwrite = self._get_devnull() elif isinstance(stderr, int): errwrite = stderr else: # Assuming file-like object errwrite = stderr.fileno() return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _set_cloexec_flag(self, fd, cloexec=True): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) if cloexec: fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) else: fcntl.fcntl(fd, fcntl.F_SETFD, old & ~cloexec_flag) def _remove_nonblock_flag(self, fd): flags = fcntl.fcntl(fd, fcntl.F_GETFL) & (~os.O_NONBLOCK) fcntl.fcntl(fd, fcntl.F_SETFL, flags) def pipe_cloexec(self): """Create a pipe with FDs set CLOEXEC.""" # Pipes' FDs are set CLOEXEC by default because we don't want them # to be inherited by other subprocesses: the CLOEXEC flag is removed # from the child's FDs by _dup2(), between fork() and exec(). # This is not atomic: we would need the pipe2() syscall for that. r, w = os.pipe() self._set_cloexec_flag(r) self._set_cloexec_flag(w) return r, w _POSSIBLE_FD_DIRS = ( '/proc/self/fd', # Linux '/dev/fd', # BSD, including macOS ) @classmethod def _close_fds(cls, keep, errpipe_write): # From the C code: # errpipe_write is part of keep. 
It must be closed at # exec(), but kept open in the child process until exec() is # called. for path in cls._POSSIBLE_FD_DIRS: if os.path.isdir(path): return cls._close_fds_from_path(path, keep, errpipe_write) return cls._close_fds_brute_force(keep, errpipe_write) @classmethod def _close_fds_from_path(cls, path, keep, errpipe_write): # path names a directory whose only entries have # names that are ascii strings of integers in base10, # corresponding to the fds the current process has open try: fds = [int(fname) for fname in os.listdir(path)] except (ValueError, OSError): cls._close_fds_brute_force(keep, errpipe_write) else: for i in keep: if i == errpipe_write: continue _set_inheritable(i, True) for fd in fds: if fd in keep or fd < 3: continue try: os.close(fd) except: pass @classmethod def _close_fds_brute_force(cls, keep, errpipe_write): # `keep` is a set of fds, so we # use os.closerange from 3 to min(keep) # and then from max(keep + 1) to MAXFD and # loop through filling in the gaps. # Under new python versions, we need to explicitly set # passed fds to be inheritable or they will go away on exec # XXX: Bug: We implicitly rely on errpipe_write being the largest open # FD so that we don't change its cloexec flag. assert hasattr(os, 'closerange') # Added in 2.7 keep = sorted(keep) min_keep = min(keep) max_keep = max(keep) os.closerange(3, min_keep) os.closerange(max_keep + 1, MAXFD) for i in xrange(min_keep, max_keep): if i in keep: _set_inheritable(i, True) continue try: os.close(i) except: pass # pylint:disable-next=too-many-positional-arguments def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, gid, gids, uid, umask, start_new_session, process_group): """Execute program (POSIX version)""" if isinstance(args, (str, bytes)): args = [args] elif isinstance(args, PathLike): if shell: raise TypeError('path-like args is not allowed when ' 'shell is true') args = [fsencode(args)] # os.PathLike -> [str] else: args = list(args) if shell: # On Android the default shell is at '/system/bin/sh'. unix_shell = ( '/system/bin/sh' if hasattr(sys, 'getandroidapilevel') else '/bin/sh' ) args = [unix_shell, "-c"] + args if executable: args[0] = executable if executable is None: executable = args[0] self._loop.install_sigchld() # For transferring possible exec failure from child to parent # The first char specifies the exception type: 0 means # OSError, 1 means some other error. errpipe_read, errpipe_write = self.pipe_cloexec() # errpipe_write must not be in the standard io 0, 1, or 2 fd range. low_fds_to_close = [] while errpipe_write < 3: low_fds_to_close.append(errpipe_write) errpipe_write = os.dup(errpipe_write) for low_fd in low_fds_to_close: os.close(low_fd) try: try: gc_was_enabled = gc.isenabled() # Disable gc to avoid bug where gc -> file_dealloc -> # write to stderr -> hang. http://bugs.python.org/issue1336 gc.disable() try: self.pid = fork_and_watch(self._on_child, self._loop, True, fork) except: if gc_was_enabled: gc.enable() raise if self.pid == 0: # Child # In various places on the child side of things, we catch OSError # and add attributes to it that detail where in the process we failed; # like all exceptions until we have exec'd, this exception is pickled # and sent to the parent to raise in the calling process. # The parent uses this to decide how to treat that exception, # adjusting certain information about it as needed. 
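# --- Illustrative aside (not part of this module): how the shell/args
# normalization above behaves for callers on POSIX. Assumes gevent is installed;
# 'echo' is only an example command.
#
#     from gevent.subprocess import Popen, PIPE
#
#     # shell=True: the string is handed to the shell, effectively running
#     #     ['/bin/sh', '-c', 'echo $HOME']     (/system/bin/sh on Android)
#     out, _ = Popen('echo $HOME', shell=True, stdout=PIPE).communicate()
#
#     # shell=False with a list: the program is exec'd directly with this argv,
#     # so no shell expansion of $HOME happens here.
#     out, _ = Popen(['echo', '$HOME'], stdout=PIPE).communicate()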
# # Python 3.11.8 --- yes, a minor patch release --- stopped # letting the 'filename' parameter get set in the resulting # exception for many cases. We're not quite interpreting this # the same way the stdlib is, I'm sure, but this makes the stdlib # tests pass. # XXX: Technically we're doing a lot of stuff here that # may not be safe to do before a exec(), depending on the OS. # CPython 3 goes to great lengths to precompute a lot # of this info before the fork and pass it all to C functions that # try hard not to call things like malloc(). (Of course, # CPython 2 pretty much did what we're doing.) try: # Close parent's pipe ends if p2cwrite != -1: os.close(p2cwrite) if c2pread != -1: os.close(c2pread) if errread != -1: os.close(errread) os.close(errpipe_read) # When duping fds, if there arises a situation # where one of the fds is either 0, 1 or 2, it # is possible that it is overwritten (#12607). if c2pwrite == 0: c2pwrite = os.dup(c2pwrite) _set_inheritable(c2pwrite, False) while errwrite in (0, 1): errwrite = os.dup(errwrite) _set_inheritable(errwrite, False) # Dup fds for child def _dup2(existing, desired): # dup2() removes the CLOEXEC flag but # we must do it ourselves if dup2() # would be a no-op (issue #10806). if existing == desired: self._set_cloexec_flag(existing, False) elif existing != -1: os.dup2(existing, desired) try: self._remove_nonblock_flag(desired) except OSError: # Ignore EBADF, it may not actually be # open yet. # Tested beginning in 3.7.0b3 test_subprocess.py pass _dup2(p2cread, 0) _dup2(c2pwrite, 1) _dup2(errwrite, 2) # Close pipe fds. Make sure we don't close the # same fd more than once, or standard fds. if not True: closed = set([None]) for fd in (p2cread, c2pwrite, errwrite): if fd not in closed and fd > 2: os.close(fd) closed.add(fd) # Python 3 (with a working set_inheritable): # We no longer manually close p2cread, # c2pwrite, and errwrite here as # _close_open_fds takes care when it is # not already non-inheritable. if cwd is not None: try: os.chdir(cwd) except OSError as e: e._failed_chdir = True raise # Python 3.9 if umask >= 0: os.umask(umask) # XXX: CPython does _Py_RestoreSignals here. # Then setsid() based on ??? try: if gids: os.setgroups(gids) if gid: os.setregid(gid, gid) if uid: os.setreuid(uid, uid) if process_group is not None: os.setpgid(0, process_group) except OSError as e: e._failed_chuser = True raise if preexec_fn: preexec_fn() # Close all other fds, if asked for. This must be done # after preexec_fn runs. if close_fds: fds_to_keep = set(pass_fds) fds_to_keep.add(errpipe_write) self._close_fds(fds_to_keep, errpipe_write) if restore_signals: # restore the documented signals back to sig_dfl; # not all will be defined on every platform for sig in 'SIGPIPE', 'SIGXFZ', 'SIGXFSZ': sig = getattr(signal, sig, None) if sig is not None: signal.signal(sig, signal.SIG_DFL) if start_new_session: os.setsid() try: if env is None: os.execvp(executable, args) else: # Python 3.6 started testing for # bytes values in the env; it also # started encoding strs using # fsencode and using a lower-level # API that takes a list of keys # and values. We don't have access # to that API, so we go the reverse direction. 
env = {os.fsdecode(k) if isinstance(k, bytes) else k: os.fsdecode(v) if isinstance(v, bytes) else v for k, v in env.items()} os.execvpe(executable, args, env) except OSError as e: e._failed_exec = True raise except: exc_type, exc_value, tb = sys.exc_info() # Save the traceback and attach it to the exception object exc_lines = traceback.format_exception(exc_type, exc_value, tb) exc_value.child_traceback = ''.join(exc_lines) os.write(errpipe_write, pickle.dumps(exc_value)) finally: # Make sure that the process exits no matter what. # The return code does not matter much as it won't be # reported to the application os._exit(1) # Parent self._child_created = True if gc_was_enabled: gc.enable() finally: # be sure the FD is closed no matter what os.close(errpipe_write) # self._devnull is not always defined. devnull_fd = getattr(self, '_devnull', None) if p2cread != -1 and p2cwrite != -1 and p2cread != devnull_fd: os.close(p2cread) if c2pwrite != -1 and c2pread != -1 and c2pwrite != devnull_fd: os.close(c2pwrite) if errwrite != -1 and errread != -1 and errwrite != devnull_fd: os.close(errwrite) if devnull_fd is not None: os.close(devnull_fd) # Prevent a double close of these fds from __init__ on error. self._closed_child_pipe_fds = True # Wait for exec to fail or succeed; possibly raising exception errpipe_read = FileObject(errpipe_read, 'rb') data = errpipe_read.read() finally: try: if hasattr(errpipe_read, 'close'): errpipe_read.close() else: os.close(errpipe_read) except OSError: # Especially on PyPy, we sometimes see the above # `os.close(errpipe_read)` raise an OSError. # It's not entirely clear why, but it happens in # InterprocessSignalTests.test_main sometimes, which must mean # we have some sort of race condition. pass finally: errpipe_read = -1 if data != b"": self.wait() child_exception = pickle.loads(data) for fd in (p2cwrite, c2pread, errread): if fd is not None and fd != -1: os.close(fd) if isinstance(child_exception, OSError): child_exception.filename = executable if hasattr(child_exception, '_failed_chdir'): child_exception.filename = cwd if getattr(child_exception, '_failed_chuser', False): child_exception.filename = None raise child_exception def _handle_exitstatus(self, sts, _WIFSIGNALED=os.WIFSIGNALED, _WTERMSIG=os.WTERMSIG, _WIFEXITED=os.WIFEXITED, _WEXITSTATUS=os.WEXITSTATUS, _WIFSTOPPED=os.WIFSTOPPED, _WSTOPSIG=os.WSTOPSIG): # This method is called (indirectly) by __del__, so it cannot # refer to anything outside of its local scope. # (gevent: We don't have a __del__, that's in the CPython implementation.) if _WIFSIGNALED(sts): self.returncode = -_WTERMSIG(sts) elif _WIFEXITED(sts): self.returncode = _WEXITSTATUS(sts) elif _WIFSTOPPED(sts): self.returncode = -_WSTOPSIG(sts) else: # Should never happen raise RuntimeError("Unknown child exit status!") def _internal_poll(self): """Check if child process has terminated. Returns returncode attribute. """ if self.returncode is None: if get_hub() is not getcurrent(): sig_pending = getattr(self._loop, 'sig_pending', True) if sig_pending: sleep(0.00001) return self.returncode def wait(self, timeout=None, _raise_exc=True): """ Wait for child process to terminate. Returns :attr:`returncode` attribute. :keyword timeout: The floating point number of seconds to wait. Under Python 2, this is a gevent extension, and we simply return if it expires. Under Python 3, if this time elapses without finishing the process, :exc:`TimeoutExpired` is raised. 
""" return self._gevent_result_wait(timeout, _raise_exc) def send_signal(self, sig): """Send a signal to the process """ # Skip signalling a process that we know has already died. if self.returncode is None: os.kill(self.pid, sig) def terminate(self): """Terminate the process with SIGTERM """ self.send_signal(signal.SIGTERM) def kill(self): """Kill the process with SIGKILL """ self.send_signal(signal.SIGKILL) def _with_stdout_stderr(exc, stderr): # Prior to Python 3.5, most exceptions didn't have stdout # and stderr attributes and can't take the stderr attribute in their # constructor exc.stdout = exc.output exc.stderr = stderr return exc class CompletedProcess(object): """ A process that has finished running. This is returned by run(). Attributes: - args: The list or str args passed to run(). - returncode: The exit code of the process, negative for signals. - stdout: The standard output (None if not captured). - stderr: The standard error (None if not captured). .. versionadded:: 1.2a1 This first appeared in Python 3.5 and is available to all Python versions in gevent. """ if GenericAlias is not None: # Sigh, 3.9 spreading typing stuff all over everything __class_getitem__ = classmethod(GenericAlias) def __init__(self, args, returncode, stdout=None, stderr=None): self.args = args self.returncode = returncode self.stdout = stdout self.stderr = stderr def __repr__(self): args = ['args={!r}'.format(self.args), 'returncode={!r}'.format(self.returncode)] if self.stdout is not None: args.append('stdout={!r}'.format(self.stdout)) if self.stderr is not None: args.append('stderr={!r}'.format(self.stderr)) return "{}({})".format(type(self).__name__, ', '.join(args)) def check_returncode(self): """Raise CalledProcessError if the exit code is non-zero.""" if self.returncode: # pylint:disable=undefined-variable raise _with_stdout_stderr(CalledProcessError(self.returncode, self.args, self.stdout), self.stderr) def run(*popenargs, **kwargs): """ run(args, *, stdin=None, input=None, stdout=None, stderr=None, shell=False, timeout=None, check=False) -> CompletedProcess Run command with arguments and return a CompletedProcess instance. The returned instance will have attributes args, returncode, stdout and stderr. By default, stdout and stderr are not captured, and those attributes will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them. If check is True and the exit code was non-zero, it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute, and output & stderr attributes if those streams were captured. If timeout is given, and the process takes too long, a TimeoutExpired exception will be raised. There is an optional argument "input", allowing you to pass a string to the subprocess's stdin. If you use this argument you may not also use the Popen constructor's "stdin" argument, as it will be used internally. The other arguments are the same as for the Popen constructor. If universal_newlines=True is passed, the "input" argument must be a string and stdout/stderr in the returned object will be strings rather than bytes. .. versionadded:: 1.2a1 This function first appeared in Python 3.5. It is available on all Python versions gevent supports. .. versionchanged:: 1.3a2 Add the ``capture_output`` argument from Python 3.7. It automatically sets ``stdout`` and ``stderr`` to ``PIPE``. It is an error to pass either of those arguments along with ``capture_output``. 
""" input = kwargs.pop('input', None) timeout = kwargs.pop('timeout', None) check = kwargs.pop('check', False) capture_output = kwargs.pop('capture_output', False) if input is not None: if 'stdin' in kwargs: raise ValueError('stdin and input arguments may not both be used.') kwargs['stdin'] = PIPE if capture_output: if ('stdout' in kwargs) or ('stderr' in kwargs): raise ValueError('stdout and stderr arguments may not be used ' 'with capture_output.') kwargs['stdout'] = PIPE kwargs['stderr'] = PIPE with Popen(*popenargs, **kwargs) as process: try: stdout, stderr = process.communicate(input, timeout=timeout) except TimeoutExpired: process.kill() stdout, stderr = process.communicate() raise _with_stdout_stderr(TimeoutExpired(process.args, timeout, output=stdout), stderr) except: process.kill() process.wait() raise retcode = process.poll() if check and retcode: # pylint:disable=undefined-variable raise _with_stdout_stderr(CalledProcessError(retcode, process.args, stdout), stderr) return CompletedProcess(process.args, retcode, stdout, stderr) def _gevent_did_monkey_patch(target_module, *_args, **_kwargs): # Beginning on 3.8 on Mac, the 'spawn' method became the default # start method. That doesn't fire fork watchers and we can't # easily patch to make it do so: multiprocessing uses the private # c accelerated _subprocess module to implement this. Instead we revert # back to using fork. from gevent._compat import MAC if MAC: import multiprocessing if hasattr(multiprocessing, 'set_start_method'): multiprocessing.set_start_method('fork', force=True) gevent-24.11.1/src/gevent/testing/000077500000000000000000000000001471441230600167065ustar00rootroot00000000000000gevent-24.11.1/src/gevent/testing/__init__.py000066400000000000000000000127171471441230600210270ustar00rootroot00000000000000# Copyright (c) 2008-2009 AG Projects # Copyright 2018 gevent community # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import unittest # pylint:disable=unused-import # It's important to do this ASAP, because if we're monkey patched, # then importing things like the standard library test.support can # need to construct the hub (to check for IPv6 support using a socket). # We can't do it in the testrunner, as the testrunner spawns new, unrelated # processes. from .hub import QuietHub import gevent.hub gevent.hub.set_default_hub_class(QuietHub) try: import faulthandler except ImportError: # The backport isn't installed. pass else: # Enable faulthandler for stack traces. 
We have to do this here # for the same reasons as above. faulthandler.enable() try: from gevent.libuv import _corecffi except ImportError: pass else: _corecffi.lib.gevent_test_setup() # pylint:disable=no-member del _corecffi from .sysinfo import VERBOSE from .sysinfo import WIN from .sysinfo import LINUX from .sysinfo import OSX from .sysinfo import LIBUV from .sysinfo import CFFI_BACKEND from .sysinfo import DEBUG from .sysinfo import RUN_LEAKCHECKS from .sysinfo import RUN_COVERAGE from .sysinfo import PY2 from .sysinfo import PY3 from .sysinfo import PY36 from .sysinfo import PY37 from .sysinfo import PY38 from .sysinfo import PY39 from .sysinfo import PY310 from .sysinfo import PYPY from .sysinfo import PYPY3 from .sysinfo import CPYTHON from .sysinfo import PLATFORM_SPECIFIC_SUFFIXES from .sysinfo import NON_APPLICABLE_SUFFIXES from .sysinfo import SHARED_OBJECT_EXTENSION from .sysinfo import RUNNING_ON_TRAVIS from .sysinfo import RUNNING_ON_APPVEYOR from .sysinfo import RUNNING_ON_CI from .sysinfo import RESOLVER_NOT_SYSTEM from .sysinfo import RESOLVER_DNSPYTHON from .sysinfo import RESOLVER_ARES from .sysinfo import resolver_dnspython_available from .sysinfo import EXPECT_POOR_TIMER_RESOLUTION from .sysinfo import CONN_ABORTED_ERRORS from .skipping import skipOnWindows from .skipping import skipOnAppVeyor from .skipping import skipOnCI from .skipping import skipOnPyPy3OnCI from .skipping import skipOnPyPy from .skipping import skipOnPyPyOnCI from .skipping import skipOnPyPyOnWindows from .skipping import skipOnPyPy3 from .skipping import skipIf from .skipping import skipUnless from .skipping import skipOnLibev from .skipping import skipOnLibuv from .skipping import skipOnLibuvOnWin from .skipping import skipOnLibuvOnCI from .skipping import skipOnLibuvOnCIOnPyPy from .skipping import skipOnLibuvOnPyPyOnWin from .skipping import skipOnPurePython from .skipping import skipWithCExtensions from .skipping import skipOnPy37 from .skipping import skipOnPy310 from .skipping import skipOnPy312 from .skipping import skipOnPy3 from .skipping import skipWithoutResource from .skipping import skipWithoutExternalNetwork from .skipping import skipOnManylinux from .skipping import skipOnMacOnCI from .exception import ExpectedException from .leakcheck import ignores_leakcheck from .params import LARGE_TIMEOUT from .params import DEFAULT_LOCAL_HOST_ADDR from .params import DEFAULT_LOCAL_HOST_ADDR6 from .params import DEFAULT_BIND_ADDR from .params import DEFAULT_BIND_ADDR_TUPLE from .params import DEFAULT_CONNECT_HOST from .params import DEFAULT_SOCKET_TIMEOUT from .params import DEFAULT_XPC_SOCKET_TIMEOUT main = unittest.main SkipTest = unittest.SkipTest from .sockets import bind_and_listen from .sockets import tcp_listener from .openfiles import get_number_open_files from .openfiles import get_open_files from .testcase import TestCase from .modules import walk_modules BaseTestCase = unittest.TestCase from .flaky import reraiseFlakyTestTimeout from .flaky import reraiseFlakyTestRaceCondition from .flaky import reraises_flaky_timeout from .flaky import reraises_flaky_race_condition def gc_collect_if_needed(): "Collect garbage if necessary for destructors to run" import gc if PYPY: # pragma: no cover gc.collect() # Our usage of mock should be limited to '@mock.patch()' # and other things that are easily...mocked...here on Python 2 # when mock is not installed. 
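# --- Illustrative aside (not part of this package): a hedged sketch of how
# gevent's own tests typically consume the helpers re-exported above. The test
# class, test name, and skip reason are hypothetical.
#
#     import gevent.testing as greentest
#
#     class TestExample(greentest.TestCase):
#
#         @greentest.skipOnWindows("Relies on POSIX-only behaviour")
#         def test_something(self):
#             self.assertTrue(True)
#
#     if __name__ == '__main__':
#         greentest.main()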
try: from unittest import mock except ImportError: # Python 2 try: import mock except ImportError: # pragma: no cover # Backport not installed class mock(object): @staticmethod def patch(reason): return unittest.skip(reason) mock = mock # zope.interface from zope.interface import verify gevent-24.11.1/src/gevent/testing/coveragesite/000077500000000000000000000000001471441230600213665ustar00rootroot00000000000000gevent-24.11.1/src/gevent/testing/coveragesite/sitecustomize.py000066400000000000000000000010561471441230600246510ustar00rootroot00000000000000# When testrunner.py is invoked with --coverage, it puts this first # on the path as per https://coverage.readthedocs.io/en/coverage-4.0b3/subprocess.html. # Note that this disables other sitecustomize.py files. import coverage try: coverage.process_startup() except coverage.CoverageException as e: if str(e) == "Can't support concurrency=greenlet with PyTracer, only threads are supported": pass else: import traceback traceback.print_exc() raise except: import traceback traceback.print_exc() raise gevent-24.11.1/src/gevent/testing/errorhandler.py000066400000000000000000000044311471441230600217510ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import print_function from functools import wraps def wrap_error_fatal(method): from gevent._hub_local import get_hub_class system_error = get_hub_class().SYSTEM_ERROR @wraps(method) def fatal_error_wrapper(self, *args, **kwargs): # XXX should also be able to do gevent.SYSTEM_ERROR = object # which is a global default to all hubs get_hub_class().SYSTEM_ERROR = object try: return method(self, *args, **kwargs) finally: get_hub_class().SYSTEM_ERROR = system_error return fatal_error_wrapper def wrap_restore_handle_error(method): from gevent._hub_local import get_hub_if_exists from gevent import getcurrent @wraps(method) def restore_fatal_error_wrapper(self, *args, **kwargs): try: return method(self, *args, **kwargs) finally: # Remove any customized handle_error, if set on the # instance. 
            try:
                del get_hub_if_exists().handle_error
            except AttributeError:
                pass
            if self.peek_error()[0] is not None:
                getcurrent().throw(*self.peek_error()[1:])
    return restore_fatal_error_wrapper
gevent-24.11.1/src/gevent/testing/exception.py000066400000000000000000000023611471441230600212600ustar00rootroot00000000000000
# Copyright (c) 2018 gevent community
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from __future__ import absolute_import, print_function, division


class ExpectedException(Exception):
    """An exception whose traceback should be ignored by the hub"""
gevent-24.11.1/src/gevent/testing/flaky.py000066400000000000000000000100101471441230600203600ustar00rootroot00000000000000
# Copyright (c) 2018 gevent community
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
from __future__ import absolute_import, print_function, division

import sys
import functools
import unittest

from . import sysinfo
from . import six


class FlakyAssertionError(AssertionError):
    "Re-raised so that we know it's a known-flaky test."

# The next exceptions allow us to raise them in a highly
# greppable way so that we can debug them later.

class FlakyTest(unittest.SkipTest):
    """
    A unittest exception that causes the test to be skipped when raised.

    Use this carefully, it is a code smell and indicates an undebugged problem.
    """

class FlakyTestRaceCondition(FlakyTest):
    """
    Use this when the flaky test is definitely caused by a race condition.
""" class FlakyTestTimeout(FlakyTest): """ Use this when the flaky test is definitely caused by an unexpected timeout. """ class FlakyTestCrashes(FlakyTest): """ Use this when the test sometimes crashes. """ def reraiseFlakyTestRaceCondition(): six.reraise(FlakyAssertionError, FlakyAssertionError(sys.exc_info()[1]), sys.exc_info()[2]) reraiseFlakyTestTimeout = reraiseFlakyTestRaceCondition reraiseFlakyTestRaceConditionLibuv = reraiseFlakyTestRaceCondition reraiseFlakyTestTimeoutLibuv = reraiseFlakyTestRaceCondition if sysinfo.RUNNING_ON_CI or (sysinfo.PYPY and sysinfo.WIN): # pylint: disable=function-redefined def reraiseFlakyTestRaceCondition(): # Getting stack traces is incredibly expensive # in pypy on win, at least in test virtual machines. # It can take minutes. The traceback consistently looks like # the following when interrupted: # dump_stacks -> traceback.format_stack # -> traceback.extract_stack -> linecache.checkcache # -> os.stat -> _structseq.structseq_new # Moreover, without overriding __repr__ or __str__, # the msg doesn't get printed like we would want (its basically # unreadable, all printed on one line). So skip that. #msg = '\n'.join(dump_stacks()) msg = str(sys.exc_info()[1]) six.reraise(FlakyTestRaceCondition, FlakyTestRaceCondition(msg), sys.exc_info()[2]) def reraiseFlakyTestTimeout(): msg = str(sys.exc_info()[1]) six.reraise(FlakyTestTimeout, FlakyTestTimeout(msg), sys.exc_info()[2]) if sysinfo.LIBUV: reraiseFlakyTestRaceConditionLibuv = reraiseFlakyTestRaceCondition reraiseFlakyTestTimeoutLibuv = reraiseFlakyTestTimeout def reraises_flaky_timeout(exc_kind=AssertionError, _func=reraiseFlakyTestTimeout): def wrapper(f): @functools.wraps(f) def m(*args): try: f(*args) except exc_kind: _func() return m return wrapper def reraises_flaky_race_condition(exc_kind=AssertionError): return reraises_flaky_timeout(exc_kind, _func=reraiseFlakyTestRaceCondition) gevent-24.11.1/src/gevent/testing/hub.py000066400000000000000000000060531471441230600200420ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
from __future__ import absolute_import, print_function, division from contextlib import contextmanager from gevent.hub import Hub from .exception import ExpectedException class QuietHub(Hub): _resolver = None _threadpool = None EXPECTED_TEST_ERROR = (ExpectedException,) IGNORE_EXPECTED_TEST_ERROR = False @contextmanager def ignoring_expected_test_error(self): """ Code in the body of this context manager will ignore ``EXPECTED_TEST_ERROR`` objects reported to ``handle_error``; they will not get a chance to go to the hub's parent. This completely changes the semantics of normal error handling by avoiding some switches (to the main greenlet, and eventually once a callback is processed, back to the hub). This should be used in narrow ways for test compatibility for tests that assume ``ExpectedException`` objects behave this way. """ old = self.IGNORE_EXPECTED_TEST_ERROR self.IGNORE_EXPECTED_TEST_ERROR = True try: yield finally: self.IGNORE_EXPECTED_TEST_ERROR = old def handle_error(self, context, type, value, tb): type, value, tb = self._normalize_exception(type, value, tb) # If we check that the ``type`` is a subclass of ``EXPECTED_TEST_ERROR``, # and return, we completely change the semantics: We avoid raising # this error in the main greenlet, which cuts out several switches. # Overall, not good. if self.IGNORE_EXPECTED_TEST_ERROR and issubclass(type, self.EXPECTED_TEST_ERROR): # Don't pass these up; avoid switches return return Hub.handle_error(self, context, type, value, tb) def print_exception(self, context, t, v, tb): t, v, tb = self._normalize_exception(t, v, tb) if issubclass(t, self.EXPECTED_TEST_ERROR): # see handle_error return return Hub.print_exception(self, context, t, v, tb) gevent-24.11.1/src/gevent/testing/leakcheck.py000066400000000000000000000200431471441230600211710ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import print_function import sys import gc from functools import wraps import unittest try: import objgraph except ImportError: # pragma: no cover # Optional test dependency objgraph = None import gevent import gevent.core def ignores_leakcheck(func): """ Ignore the given object during leakchecks. Can be applied to a method, in which case the method will run, but will not be subject to leak checks. If applied to a class, the entire class will be skipped during leakchecks. 
This is intended to be used for classes that are very slow and cause problems such as test timeouts; typically it will be used for classes that are subclasses of a base class and specify variants of behaviour (such as pool sizes). """ func.ignore_leakcheck = True return func class _RefCountChecker(object): # Some builtin things that we ignore. # For awhile, we also ignored types.FrameType and types.TracebackType, # but those are important and often involved in leaks. IGNORED_TYPES = (tuple, dict,) try: CALLBACK_KIND = gevent.core.callback except AttributeError: # Must be using FFI. from gevent._ffi.callback import callback as CALLBACK_KIND def __init__(self, testcase, function): self.testcase = testcase self.function = function self.deltas = [] self.peak_stats = {} # The very first time we are called, we have already been # self.setUp() by the test runner, so we don't need to do it again. self.needs_setUp = False def _ignore_object_p(self, obj): if obj is self: return False try: # Certain badly written __eq__ and __contains__ methods # (I'm looking at you, Python 3.10 importlib.metadata._text! # ``__eq__(self, other): return self.lower() == other.lower()``) # raise AttributeError which propagates here, and must be caught. # Similarly, we can get a TypeError if ( obj in self.__dict__.values() or obj == self._ignore_object_p # pylint:disable=comparison-with-callable ): return False except (AttributeError, TypeError): # `obj` is things like that _text class. Also have seen # - psycopg2._psycopg.type # - relstorage.adapters.drivers._ClassDriverFactory return True kind = type(obj) if kind in self.IGNORED_TYPES: return False if kind is self.CALLBACK_KIND and obj.callback is None and obj.args is None: # these represent callbacks that have been stopped, but # the event loop hasn't cycled around to run them. The only # known cause of this is killing greenlets before they get a chance # to run for the first time. return False return True def _growth(self): return objgraph.growth(limit=None, peak_stats=self.peak_stats, filter=self._ignore_object_p) def _report_diff(self, growth): if not growth: return "" lines = [] width = max(len(name) for name, _, _ in growth) for name, count, delta in growth: lines.append('%-*s%9d %+9d' % (width, name, count, delta)) diff = '\n'.join(lines) return diff def _run_test(self, args, kwargs): gc_enabled = gc.isenabled() gc.disable() if self.needs_setUp: self.testcase.setUp() self.testcase.skipTearDown = False try: self.function(self.testcase, *args, **kwargs) finally: self.testcase.tearDown() self.testcase.doCleanups() self.testcase.skipTearDown = True self.needs_setUp = True if gc_enabled: gc.enable() def _growth_after(self): # Grab post snapshot if 'urlparse' in sys.modules: sys.modules['urlparse'].clear_cache() if 'urllib.parse' in sys.modules: sys.modules['urllib.parse'].clear_cache() # pylint:disable=no-member return self._growth() def _check_deltas(self, growth): # Return false when we have decided there is no leak, # true if we should keep looping, raises an assertion # if we have decided there is a leak. deltas = self.deltas if not deltas: # We haven't run yet, no data, keep looping return True if gc.garbage: raise AssertionError("Generated uncollectable garbage %r" % (gc.garbage,)) # the following configurations are classified as "no leak" # [0, 0] # [x, 0, 0] # [... a, b, c, d] where a+b+c+d = 0 # # the following configurations are classified as "leak" # [... 
z, z, z] where z > 0 if deltas[-2:] == [0, 0] and len(deltas) in (2, 3): return False if deltas[-3:] == [0, 0, 0]: return False if len(deltas) >= 4 and sum(deltas[-4:]) == 0: return False if len(deltas) >= 3 and deltas[-1] > 0 and deltas[-1] == deltas[-2] and deltas[-2] == deltas[-3]: diff = self._report_diff(growth) raise AssertionError('refcount increased by %r\n%s' % (deltas, diff)) # OK, we don't know for sure yet. Let's search for more if sum(deltas[-3:]) <= 0 or sum(deltas[-4:]) <= 0 or deltas[-4:].count(0) >= 2: # this is suspicious, so give a few more runs limit = 11 else: limit = 7 if len(deltas) >= limit: raise AssertionError('refcount increased by %r\n%s' % (deltas, self._report_diff(growth))) # We couldn't decide yet, keep going return True def __call__(self, args, kwargs): for _ in range(3): gc.collect() # Capture state before; the incremental will be # updated by each call to _growth_after growth = self._growth() while self._check_deltas(growth): self._run_test(args, kwargs) growth = self._growth_after() self.deltas.append(sum((stat[2] for stat in growth))) def wrap_refcount(method): if objgraph is None or getattr(method, 'ignore_leakcheck', False): if objgraph is None: import warnings warnings.warn("objgraph not available, leakchecks disabled") @wraps(method) def _method_skipped_during_leakcheck(self, *_args, **_kwargs): self.skipTest("This method ignored during leakchecks") return _method_skipped_during_leakcheck @wraps(method) def wrapper(self, *args, **kwargs): # pylint:disable=too-many-branches if getattr(self, 'ignore_leakcheck', False): raise unittest.SkipTest("This class ignored during leakchecks") return _RefCountChecker(self, method)(args, kwargs) return wrapper gevent-24.11.1/src/gevent/testing/modules.py000066400000000000000000000115731471441230600207370ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import absolute_import, print_function, division import importlib import os.path import warnings import gevent from . import sysinfo # Avoid importing this at the top level because # it imports threading and subprocess, and this module # is always imported in our monkey-patched stdlib unittests, # and some of them don't like it when those are imported. # from . import util OPTIONAL_MODULES = frozenset({ ## Resolvers. 
# ares might not be built 'gevent.resolver_ares', 'gevent.resolver.ares', # dnspython might not be installed 'gevent.resolver.dnspython', ## Backends 'gevent.libev', 'gevent.libev.watcher', 'gevent.libuv.loop', 'gevent.libuv.watcher', }) EXCLUDED_MODULES = frozenset({ '__init__', 'core', 'ares', '_util', '_semaphore', 'corecffi', '_corecffi', '_corecffi_build', }) def walk_modules( basedir=None, modpath=None, include_so=False, recursive=False, check_optional=True, include_tests=False, optional_modules=OPTIONAL_MODULES, excluded_modules=EXCLUDED_MODULES, ): """ Find gevent modules, yielding tuples of ``(path, importable_module_name)``. :keyword bool check_optional: If true (the default), then if we discover a module that is known to be optional on this system (such as a backend), we will attempt to import it; if the import fails, it will not be returned. If false, then we will not make such an attempt, the caller will need to be prepared for an `ImportError`; the caller can examine *optional_modules* against the yielded *importable_module_name*. """ # pylint:disable=too-many-branches,too-many-locals if sysinfo.PYPY: include_so = False if basedir is None: basedir = os.path.dirname(gevent.__file__) if modpath is None: modpath = 'gevent.' else: if modpath is None: # pylint:disable=else-if-used modpath = '' for fn in sorted(os.listdir(basedir)): path = os.path.join(basedir, fn) if os.path.isdir(path): if not recursive: continue if not include_tests and fn in ['testing', 'tests']: continue pkg_init = os.path.join(path, '__init__.py') if os.path.exists(pkg_init): yield pkg_init, modpath + fn for p, m in walk_modules( path, modpath + fn + ".", include_so=include_so, recursive=recursive, check_optional=check_optional, include_tests=include_tests, optional_modules=optional_modules, excluded_modules=excluded_modules, ): yield p, m continue if fn.endswith('.py'): x = fn[:-3] if x.endswith('_d'): x = x[:-2] if x in excluded_modules: continue modname = modpath + x if check_optional and modname in optional_modules: try: with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) importlib.import_module(modname) except ImportError: from . import util util.debug("Unable to import optional module %s", modname) continue yield path, modname elif include_so and fn.endswith(sysinfo.SHARED_OBJECT_EXTENSION): if '.pypy-' in fn: continue if fn.endswith('_d.so'): yield path, modpath + fn[:-5] else: yield path, modpath + fn[:-3] gevent-24.11.1/src/gevent/testing/monkey_test.py000066400000000000000000000131421471441230600216220ustar00rootroot00000000000000import sys import os test_filename = sys.argv[1] del sys.argv[1] from gevent import monkey # Only test the default set of patch arguments. monkey.patch_all() from .patched_tests_setup import disable_tests_in_source from . import support from . import resources from . import SkipTest from . import util # This uses the internal built-in function ``_thread._count()``, # which we don't/can't monkey-patch, so it returns inaccurate information. def threading_setup(): return (1, ()) # This then tries to wait for that value to return to its original value; # but if we started worker threads that can never happen. 
def threading_cleanup(*_args): return assert support.threading_setup assert support.threading_cleanup support.threading_setup = threading_setup support.threading_cleanup = threading_cleanup # On all versions of Python 3.6+, this also uses ``_thread._count()``, # meaning it suffers from inaccuracies, # and test_socket.py constantly fails with an extra thread # on some random test. We disable it entirely. # XXX: Figure out how to make a *definition* in ./support.py actually # override the original in test.support, without having to # manually set it # import contextlib @contextlib.contextmanager def wait_threads_exit(timeout=None): # pylint:disable=unused-argument # On < 3.10, this is support.wait_threads_exit; # on >= 3.10, this is threading_helper.wait_threads_exit yield support.wait_threads_exit = wait_threads_exit # On Python 3.11, they changed the way that they deal with this, # meaning that this method no longer works. (Actually, it's not # clear that any of our patches to `support` are doing anything on # Python 3 at all? They certainly aren't on 3.11). This was a good # thing As it led to adding the timeout value for the threadpool # idle threads. But...the default of 5s meant that many tests in # test_socket were each taking at least 5s to run, leading to the # whole thing exceeding the allowed test timeout. We could set the # GEVENT_THREADPOOL_IDLE_TASK_TIMEOUT env variable to a smaller # value, and although that might stress the system nicely, it's # not indicative of what end users see. And it's still hard to get # a correct value. # # So try harder to make sure our patches apply. # # If this fails, symptoms are very long running tests that can be resolved # by setting that TASK_TIMEOUT value small, and/or setting GEVENT_RESOLVER=block. # Also, some number of warnings about dangling threads, or failures # from wait_threads_exit try: from test import support as ts except ImportError: pass else: ts.threading_setup = threading_setup ts.threading_cleanup = threading_cleanup ts.wait_threads_exit = wait_threads_exit ts.print_warning = lambda msg: msg try: from test.support import threading_helper except ImportError: pass else: threading_helper.wait_threads_exit = wait_threads_exit threading_helper.threading_setup = threading_setup threading_helper.threading_cleanup = threading_cleanup try: from test.support import import_helper except ImportError: pass else: # Importing fresh modules breaks our monkey patches. we can't allow that. def import_fresh_module(name, *_args, **_kwargs): import importlib return importlib.import_module(name) import_helper.import_fresh_module = import_fresh_module # So we don't have to patch test_threading to use our # version of lock_tests, we patch from gevent.tests import lock_tests try: import test.lock_tests except ImportError: pass else: test.lock_tests = lock_tests sys.modules['tests.lock_tests'] = lock_tests # Configure allowed resources resources.setup_resources() if not os.path.exists(test_filename) and os.sep not in test_filename: # A simple filename, given without a path, that doesn't exist. # So we change to the appropriate directory, if we can find it. 
# This happens when copy-pasting the output of the testrunner for d in util.find_stdlib_tests(): if os.path.exists(os.path.join(d, test_filename)): os.chdir(d) break __file__ = os.path.abspath(test_filename) #os.path.join(os.getcwd(), test_filename) test_name = os.path.splitext(os.path.basename(test_filename))[0] print('Running with patch_all(): %s' % (__file__,)) with open(test_filename, encoding='utf-8') as module_file: module_source = module_file.read() module_source = disable_tests_in_source(module_source, test_name) # We write the module source to a file so that tracebacks # show correctly, since disabling the tests changes line # numbers. However, note that __file__ must still point to the # real location so that data files can be found. # See https://github.com/gevent/gevent/issues/1306 import tempfile temp_handle, temp_path = tempfile.mkstemp(prefix=test_name, suffix='.py', text=True) os.write(temp_handle, module_source.encode('utf-8')) os.close(temp_handle) remove_file = not os.environ.get('GEVENT_DEBUG') try: module_code = compile(module_source, temp_path, 'exec', dont_inherit=True) exec(module_code, globals()) remove_file = True except SkipTest as e: remove_file = True # Some tests can raise test.support.ResourceDenied # in their main method before the testrunner takes over. # That's a kind of SkipTest. we can't get a true skip count because it # hasn't run, though. print(e) # Match the regular unittest output, including ending with skipped print("Ran 0 tests in 0.0s") print('OK (skipped=0)') finally: if remove_file: try: os.remove(temp_path) except OSError: pass gevent-24.11.1/src/gevent/testing/openfiles.py000066400000000000000000000210431471441230600212440ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import absolute_import, print_function, division import os import unittest import re import gc import functools from . import sysinfo # Linux/OS X/BSD platforms /can/ implement this by calling out to lsof. # However, if psutil is available (it is cross-platform) use that. # It is *much* faster than shelling out to lsof each time # (Running 14 tests takes 3.964s with lsof and 0.046 with psutil) # However, it still doesn't completely solve the issue on Windows: fds are reported # as -1 there, so we can't fully check those. def _collects(func): # We've seen OSError: No such file or directory /proc/PID/fd/NUM. # This occurs in the loop that checks open files. 
It first does # listdir() and then tries readlink() on each file. But the file # went away. This must be because of async GC in PyPy running # destructors at arbitrary times. This became an issue in PyPy 7.2 # but could theoretically be an issue with any objects caught in a # cycle. This is one reason we GC before we begin. (The other is # to clean up outstanding objects that will close files in # __del__.) # # Note that this can hide errors, though, by causing greenlets to get # collected and drop references and thus close files. We should be deterministic # and careful about closing things. @functools.wraps(func) def f(**kw): gc.collect() gc.collect() enabled = gc.isenabled() gc.disable() try: return func(**kw) finally: if enabled: gc.enable() return f if sysinfo.WIN: def _run_lsof(): raise unittest.SkipTest("lsof not expected on Windows") else: @_collects def _run_lsof(): import tempfile pid = os.getpid() fd, tmpname = tempfile.mkstemp('get_open_files') os.close(fd) lsof_command = 'lsof -p %s > %s' % (pid, tmpname) if os.system(lsof_command): # XXX: This prints to the console an annoying message: 'lsof is not recognized' raise unittest.SkipTest("lsof failed") with open(tmpname) as fobj: # pylint:disable=unspecified-encoding data = fobj.read().strip() os.remove(tmpname) return data def default_get_open_files(pipes=False, **_kwargs): data = _run_lsof() results = {} for line in data.split('\n'): line = line.strip() if not line or line.startswith("COMMAND"): # Skip header and blank lines continue split = re.split(r'\s+', line) # Note that this needs the real lsof; it won't work with # the lsof that comes from BusyBox. You'll get parsing errors # here. _command, _pid, _user, fd = split[:4] # Pipes (on OS X, at least) get an fd like "3" while normal files get an fd like "1u" if fd[:-1].isdigit() or fd.isdigit(): if not pipes and fd[-1].isdigit(): continue fd = int(fd[:-1]) if not fd[-1].isdigit() else int(fd) if fd in results: params = (fd, line, split, results.get(fd), data) raise AssertionError('error when parsing lsof output: duplicate fd=%r\nline=%r\nsplit=%r\nprevious=%r\ndata:\n%s' % params) results[fd] = line if not results: raise AssertionError('failed to parse lsof:\n%s' % (data, )) results['data'] = data return results @_collects def default_get_number_open_files(): if os.path.exists('/proc/'): # Linux only fd_directory = '/proc/%d/fd' % os.getpid() return len(os.listdir(fd_directory)) try: return len(get_open_files(pipes=True)) - 1 except (OSError, AssertionError, unittest.SkipTest): return 0 lsof_get_open_files = default_get_open_files try: # psutil import subprocess which on Python 3 imports selectors. # This can expose issues with monkey-patching. import psutil except ImportError: get_open_files = default_get_open_files get_number_open_files = default_get_number_open_files else: class _TrivialOpenFile(object): __slots__ = ('fd',) def __init__(self, fd): self.fd = fd @_collects def get_open_files(count_closing_as_open=True, **_kw): """ Return a list of popenfile and pconn objects. Note that other than `fd`, they have different attributes. .. important:: If you want to find open sockets, on Windows and linux, it is important that the socket at least be listening (socket.listen(1)). Unlike the lsof implementation, this will only return sockets in a state like that. """ results = {} for _ in range(3): try: if count_closing_as_open and os.path.exists('/proc/'): # Linux only. # psutil doesn't always see all connections, even though # they exist in the filesystem. 
It's not entirely clear why. # It sees them on Travis (prior to Ubuntu Bionic, at least) # but doesn't in the manylinux image or Fedora 33 Rawhide image. # This happens in test__makefile_ref TestSSL.*; in particular, if a # ``sslsock.makefile()`` is opened and used to read all data, and the sending # side shuts down, psutil no longer finds the open file. So we add them # back in. # # Of course, the flip side of this is that we sometimes find connections # we're not expecting. # I *think* this has to do with CLOSE_WAIT handling? fd_directory = '/proc/%d/fd' % os.getpid() fd_files = os.listdir(fd_directory) else: fd_files = [] process = psutil.Process() results['data'] = process.open_files() results['data'] += process.connections('all') break except OSError: pass else: # No break executed raise unittest.SkipTest("Unable to read open files") for x in results['data']: results[x.fd] = x for fd_str in fd_files: if fd_str not in results: fd = int(fd_str) results[fd] = _TrivialOpenFile(fd) results['data'] += [('From psutil', process)] results['data'] += [('fd files', fd_files)] return results @_collects def get_number_open_files(): process = psutil.Process() try: return process.num_fds() except AttributeError: # num_fds is unix only. Is num_handles close enough on Windows? return 0 class DoesNotLeakFilesMixin(object): # pragma: no cover """ A test case mixin that helps find a method that's leaking an open file. Only mix this in when needed to debug, it slows tests down. """ def setUp(self): self.__open_files_count = get_number_open_files() super(DoesNotLeakFilesMixin, self).setUp() def tearDown(self): super(DoesNotLeakFilesMixin, self).tearDown() after = get_number_open_files() if after > self.__open_files_count: raise AssertionError( "Too many open files. Before: %s < After: %s.\n%s" % ( self.__open_files_count, after, get_open_files() ) ) gevent-24.11.1/src/gevent/testing/params.py000066400000000000000000000051621471441230600205470ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from . 
import support from .sysinfo import PY3 from .sysinfo import PYPY from .sysinfo import WIN from .sysinfo import LIBUV from .sysinfo import EXPECT_POOR_TIMER_RESOLUTION # Travis is slow and overloaded; Appveyor used to be faster, but # as of Dec 2015 it's almost always slower and/or has much worse timer # resolution CI_TIMEOUT = 15 if (PY3 and PYPY) or (PYPY and WIN and LIBUV): # pypy3 is very slow right now, # as is PyPy2 on windows (which only has libuv) CI_TIMEOUT = 20 if PYPY and LIBUV: # slow and flaky timeouts LOCAL_TIMEOUT = CI_TIMEOUT else: LOCAL_TIMEOUT = 2 LARGE_TIMEOUT = max(LOCAL_TIMEOUT, CI_TIMEOUT) # Previously we set this manually to 'localhost' # and then had some conditions where we changed it to # 127.0.0.1 (e.g., on Windows or OSX or travis), but Python's test.support says # # Don't use "localhost", since resolving it uses the DNS under recent # # Windows versions (see issue #18792). # and sets it unconditionally to 127.0.0.1. DEFAULT_LOCAL_HOST_ADDR = support.HOST DEFAULT_LOCAL_HOST_ADDR6 = support.HOSTv6 # Not all TCP stacks support dual binding where '' # binds to both v4 and v6. # XXX: This is badly named; you often want DEFAULT_BIND_ADDR_TUPLE DEFAULT_BIND_ADDR = support.HOST DEFAULT_CONNECT_HOST = DEFAULT_CONNECT = DEFAULT_LOCAL_HOST_ADDR DEFAULT_BIND_ADDR_TUPLE = (DEFAULT_BIND_ADDR, 0) # For in-process sockets DEFAULT_SOCKET_TIMEOUT = 0.1 if not EXPECT_POOR_TIMER_RESOLUTION else 2.0 # For cross-process sockets DEFAULT_XPC_SOCKET_TIMEOUT = 2.0 if not EXPECT_POOR_TIMER_RESOLUTION else 4.0 gevent-24.11.1/src/gevent/testing/patched_tests_setup.py000066400000000000000000002016071471441230600233400ustar00rootroot00000000000000# pylint:disable=missing-docstring,invalid-name,too-many-lines from __future__ import print_function, absolute_import, division import collections import contextlib import functools import sys import os # At least on 3.6+, importing platform # imports subprocess, which imports selectors. That # can expose issues with monkey patching. We don't need it # though. # import platform import re from .sysinfo import RUNNING_ON_APPVEYOR as APPVEYOR from .sysinfo import RUNNING_ON_TRAVIS as TRAVIS from .sysinfo import RESOLVER_NOT_SYSTEM as ARES from .sysinfo import RESOLVER_ARES from .sysinfo import RESOLVER_DNSPYTHON from .sysinfo import RUNNING_ON_CI from .sysinfo import RUNNING_ON_MUSLLINUX from .sysinfo import RUN_COVERAGE from .sysinfo import PYPY from .sysinfo import PYPY3 from .sysinfo import PY38 from .sysinfo import PY39 from .sysinfo import PY39_EXACTLY from .sysinfo import PY310 from .sysinfo import PY311 from .sysinfo import PY312 from .sysinfo import PY313 from .sysinfo import WIN from .sysinfo import OSX from .sysinfo import LINUX from .sysinfo import LIBUV from .sysinfo import CFFI_BACKEND from . import flaky CPYTHON = not PYPY # By default, test cases are expected to switch and emit warnings if there was none # If a test is found in this list, it's expected not to switch. 
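# Each entry below names a test with a simple glob-like pattern: ``*``
# is expanded by ``make_re`` (just below) into a non-greedy regex
# wildcard, and the lines are OR-ed together into one anchored regular
# expression, so ``test_patched_ftplib.*.test_all_errors`` matches that
# method on every TestCase class in the module. A leading ``#``
# effectively disables an entry, since no real test name begins with
# ``#``.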
no_switch_tests = '''test_patched_select.SelectTestCase.test_error_conditions test_patched_ftplib.*.test_all_errors test_patched_ftplib.*.test_getwelcome test_patched_ftplib.*.test_sanitize test_patched_ftplib.*.test_set_pasv #test_patched_ftplib.TestIPv6Environment.test_af test_patched_socket.TestExceptions.testExceptionTree test_patched_socket.Urllib2FileobjectTest.testClose test_patched_socket.TestLinuxAbstractNamespace.testLinuxAbstractNamespace test_patched_socket.TestLinuxAbstractNamespace.testMaxName test_patched_socket.TestLinuxAbstractNamespace.testNameOverflow test_patched_socket.FileObjectInterruptedTestCase.* test_patched_urllib.* test_patched_asyncore.HelperFunctionTests.* test_patched_httplib.BasicTest.* test_patched_httplib.HTTPSTimeoutTest.test_attributes test_patched_httplib.HeaderTests.* test_patched_httplib.OfflineTest.* test_patched_httplib.HTTPSTimeoutTest.test_host_port test_patched_httplib.SourceAddressTest.testHTTPSConnectionSourceAddress test_patched_select.SelectTestCase.test_error_conditions test_patched_smtplib.NonConnectingTests.* test_patched_urllib2net.OtherNetworkTests.* test_patched_wsgiref.* test_patched_subprocess.HelperFunctionTests.* ''' ignore_switch_tests = ''' test_patched_socket.GeneralModuleTests.* test_patched_httpservers.BaseHTTPRequestHandlerTestCase.* test_patched_queue.* test_patched_signal.SiginterruptTest.* test_patched_urllib2.* test_patched_ssl.* test_patched_signal.BasicSignalTests.* test_patched_threading_local.* test_patched_threading.* ''' def make_re(tests): tests = [x.strip().replace(r'\.', r'\\.').replace('*', '.*?') for x in tests.split('\n') if x.strip()] return re.compile('^%s$' % '|'.join(tests)) no_switch_tests = make_re(no_switch_tests) ignore_switch_tests = make_re(ignore_switch_tests) def get_switch_expected(fullname): """ >>> get_switch_expected('test_patched_select.SelectTestCase.test_error_conditions') False >>> get_switch_expected('test_patched_socket.GeneralModuleTests.testCrucialConstants') False >>> get_switch_expected('test_patched_socket.SomeOtherTest.testHello') True >>> get_switch_expected("test_patched_httplib.BasicTest.test_bad_status_repr") False """ # certain pylint versions mistype the globals as # str, not re. # pylint:disable=no-member if ignore_switch_tests.match(fullname) is not None: return None if no_switch_tests.match(fullname) is not None: return False return True disabled_tests = [ # Want's __module__ to be 'signal', which of course it isn't once # monkey-patched. 'test_signal.GenericTests.test_functions_module_attr', # XXX: While we debug latest updates. This is leaking 'test_threading.ThreadTests.test_no_refcycle_through_target', # The server side takes awhile to shut down 'test_httplib.HTTPSTest.test_local_bad_hostname', # These were previously 3.5+ issues (same as above) # but have been backported. 
'test_httplib.HTTPSTest.test_local_good_hostname', 'test_httplib.HTTPSTest.test_local_unknown_cert', 'test_threading.ThreadTests.test_PyThreadState_SetAsyncExc', # uses some internal C API of threads not available when threads are emulated with greenlets 'test_threading.ThreadTests.test_join_nondaemon_on_shutdown', # asserts that repr(sleep) is '' 'test_urllib2net.TimeoutTest.test_ftp_no_timeout', 'test_urllib2net.TimeoutTest.test_ftp_timeout', 'test_urllib2net.TimeoutTest.test_http_no_timeout', 'test_urllib2net.TimeoutTest.test_http_timeout', # accesses _sock.gettimeout() which is always in non-blocking mode 'test_urllib2net.OtherNetworkTests.test_ftp', # too slow 'test_urllib2net.OtherNetworkTests.test_urlwithfrag', # fails dues to some changes on python.org 'test_urllib2net.OtherNetworkTests.test_sites_no_connection_close', # flaky 'test_socket.UDPTimeoutTest.testUDPTimeout', # has a bug which makes it fail with error: (107, 'Transport endpoint is not connected') # (it creates a TCP socket, not UDP) 'test_socket.GeneralModuleTests.testRefCountGetNameInfo', # fails with "socket.getnameinfo loses a reference" while the reference is only "lost" # because it is referenced by the traceback - any Python function would lose a reference like that. # the original getnameinfo does not "lose" it because it's in C. 'test_socket.NetworkConnectionNoServer.test_create_connection_timeout', # replaces socket.socket with MockSocket and then calls create_connection. # this unfortunately does not work with monkey patching, because gevent.socket.create_connection # is bound to gevent.socket.socket and updating socket.socket does not affect it. # this issues also manifests itself when not monkey patching DNS: http://code.google.com/p/gevent/issues/detail?id=54 # create_connection still uses gevent.socket.getaddrinfo while it should be using socket.getaddrinfo 'test_asyncore.BaseTestAPI.test_handle_expt', # sends some OOB data and expect it to be detected as such; gevent.select.select does not support that # This one likes to check its own filename, but we rewrite # the file to a temp location during patching. 'test_asyncore.HelperFunctionTests.test_compact_traceback', # expects time.sleep() to return prematurely in case of a signal; # gevent.sleep() is better than that and does not get interrupted # (unless signal handler raises an error) 'test_signal.WakeupSignalTests.test_wakeup_fd_early', # expects select.select() to raise select.error(EINTR'interrupted # system call') gevent.select.select() does not get interrupted # (unless signal handler raises an error) maybe it should? 'test_signal.WakeupSignalTests.test_wakeup_fd_during', # these rely on os.read raising EINTR which never happens with gevent.os.read 'test_signal.SiginterruptTest.test_without_siginterrupt', 'test_signal.SiginterruptTest.test_siginterrupt_on', 'test_signal.SiginterruptTest.test_siginterrupt_off', # This one takes forever and relies on threading details 'test_signal.StressTest.test_stress_modifying_handlers', # This uses an external file, and launches it. This means that it's not # actually testing gevent because there's no monkey-patch. 'test_signal.PosixTests.test_interprocess_signal', 'test_subprocess.ProcessTestCase.test_leak_fast_process_del_killed', 'test_subprocess.ProcessTestCase.test_zombie_fast_process_del', # relies on subprocess._active which we don't use # Very slow, tries to open lots and lots of subprocess and files, # tends to timeout on CI. 
'test_subprocess.ProcessTestCase.test_no_leaking', # This test is also very slow, and has been timing out on Travis # since November of 2016 on Python 3, but now also seen on Python 2/Pypy. 'test_subprocess.ProcessTestCase.test_leaking_fds_on_error', # Added between 3.6.0 and 3.6.3, uses _testcapi and internals # of the subprocess module. Backported to Python 2.7.16. 'test_subprocess.POSIXProcessTestCase.test_stopped', 'test_ssl.ThreadedTests.test_default_ciphers', 'test_ssl.ThreadedTests.test_empty_cert', 'test_ssl.ThreadedTests.test_malformed_cert', 'test_ssl.ThreadedTests.test_malformed_key', 'test_ssl.NetworkedTests.test_non_blocking_connect_ex', # XXX needs investigating 'test_ssl.NetworkedTests.test_algorithms', # The host this wants to use, sha256.tbs-internet.com, is not resolvable # right now (2015-10-10), and we need to get Windows wheels # This started timing out randomly on Travis in oct/nov 2018. It appears # to be something with random number generation taking too long. 'test_ssl.BasicSocketTests.test_random_fork', # Relies on the repr of objects (Py3) 'test_ssl.BasicSocketTests.test_dealloc_warn', # Takes forever 'test_ssl.BasicSocketTests.test_connect_ex_error', 'test_urllib2.HandlerTests.test_cookie_redirect', # this uses cookielib which we don't care about 'test_thread.ThreadRunningTests.test__count', 'test_thread.TestForkInThread.test_forkinthread', # XXX needs investigating 'test_subprocess.POSIXProcessTestCase.test_preexec_errpipe_does_not_double_close_pipes', # Does not exist in the test suite until 2.7.4+. Subclasses Popen, and overrides # _execute_child. But our version has a different parameter list than the # version that comes with PyPy/CPython, so fails with a TypeError. # This one crashes the interpreter if it has a bug parsing the # invalid data. 'test_ssl.BasicSocketTests.test_parse_cert_CVE_2019_5010', # We had to copy in a newer version of the test file for SSL fixes # and this doesn't work reliably on all versions. 'test_httplib.HeaderTests.test_headers_debuglevel', # On Appveyor with Python 3.8.0 and 3.7.5, this test # for __class_getitem__ fails. Presumably this was added # in a patch release (it's not in the PEP.) Sigh. # https://bugs.python.org/issue38979 'test_context.ContextTest.test_contextvar_getitem', # The same patch that fixed that removed this test, # because it would now fail. 'test_context.ContextTest.test_context_var_new_2', # The C queue objects are immune to monkey patching, disable them 'test_queue.CLifoQueueTest', 'test_queue.CPriorityQueueTest', 'test_queue.CQueueTest', 'test_queue.CSimpleQueueTest', 'test_queue.CFailingQueueTest', # (These will actually catch tests for many different queue types.) # They needs to be disabled because it uses tight loops in "threads" # calling ``q.get(False)`` which doesn't allow any other greenlet to # make progress, so the shutdown request never gets through. 'test_queue.PyLifoQueueTest.test_shutdown_all_methods_in_many_threads', 'test_queue.PyLifoQueueTest.test_shutdown_immediate_all_methods_in_many_threads', ] if OSX: disabled_tests += [ # These are timing dependent, and sometimes run into the OS X # kernel bug leading to 'Protocol wrong type for socket'. # See discussion at https://github.com/benoitc/gunicorn/issues/1487 'test_ssl.SimpleBackgroundTests.test_connect_capath', 'test_ssl.SimpleBackgroundTests.test_connect_with_context', ] if PYPY: disabled_tests += [ # The exact error message the code code checks for is different # (possibly just on macOS?). Plain PyPy3 fails as well. 
'test_signal.WakeupSignalTests.test_wakeup_write_error', ] if 'thread' in os.getenv('GEVENT_FILE', ''): disabled_tests += [ 'test_subprocess.ProcessTestCase.test_double_close_on_error' # Fails with "OSError: 9 invalid file descriptor"; expect GC/lifetime issues ] if LIBUV: # epoll appears to work with these just fine in some cases; # kqueue (at least on OS X, the only tested kqueue system) # never does (failing with abort()) # (epoll on Raspbian 8.0/Debian Jessie/Linux 4.1.20 works; # on a VirtualBox image of Ubuntu 15.10/Linux 4.2.0 both tests fail; # Travis CI Ubuntu 12.04 precise/Linux 3.13 causes one of these tests to hang forever) # XXX: Retry this with libuv 1.12+ disabled_tests += [ # A 2.7 test. Tries to fork, and libuv cannot fork 'test_signal.InterProcessSignalTests.test_main', # Likewise, a forking problem 'test_signal.SiginterruptTest.test_siginterrupt_off', ] disabled_tests += [ # This test wants to pass an arbitrary fileno # to a socket and do things with it. libuv doesn't like this, # it raises EPERM. It is disabled on windows already. # It depends on whether we had a fd already open and multiplexed with 'test_socket.GeneralModuleTests.test_unknown_socket_family_repr', # And yes, there's a typo in some versions. 'test_socket.GeneralModuleTests.test_uknown_socket_family_repr', # This test sometimes fails at line 358. It's apparently # extremely sensitive to timing. 'test_selectors.PollSelectorTestCase.test_timeout', ] if OSX: disabled_tests += [ # XXX: Starting when we upgraded from libuv 1.18.0 # to 1.19.2, this sometimes (usually) started having # a series of calls ('select.poll(0)', 'select.poll(-1)') # take longer than the allowed 0.5 seconds. Debugging showed that # it was the second call that took longer, for no apparent reason. # There doesn't seem to be a change in the source code to libuv that # would affect this. # XXX-XXX: This actually disables too many tests :( 'test_selectors.PollSelectorTestCase.test_timeout', ] if RUN_COVERAGE: disabled_tests += [ # Starting with #1145 this test (actually # TestTLS_FTPClassMixin) becomes sensitive to timings # under coverage. 'test_ftplib.TestFTPClass.test_storlines', ] if sys.platform.startswith('linux'): disabled_tests += [ # crashes with EPERM, which aborts the epoll loop, even # though it was allowed in in the first place. 'test_asyncore.FileWrapperTest.test_dispatcher', ] if PYPY: disabled_tests += [ # appears to timeout? 'test_threading.ThreadTests.test_finalize_with_trace', 'test_asyncore.DispatcherWithSendTests_UsePoll.test_send', 'test_asyncore.DispatcherWithSendTests.test_send', # More unexpected timeouts 'test_ssl.ContextTests.test__https_verify_envvar', 'test_subprocess.ProcessTestCase.test_check_output', 'test_telnetlib.ReadTests.test_read_eager_A', # But on Windows, our gc fix for that doesn't work anyway # so we have to disable it. 'test_urllib2_localnet.TestUrlopen.test_https_with_cafile', # These tests hang. see above. 'test_threading.ThreadJoinOnShutdown.test_1_join_on_shutdown', 'test_threading.ThreadingExceptionTests.test_print_exception', # Our copy of these in test__subprocess.py also hangs. # Anything that uses Popen.communicate or directly uses # Popen.stdXXX.read hangs. It's not clear why. 
'test_subprocess.ProcessTestCase.test_communicate', 'test_subprocess.ProcessTestCase.test_cwd', 'test_subprocess.ProcessTestCase.test_env', 'test_subprocess.ProcessTestCase.test_stderr_pipe', 'test_subprocess.ProcessTestCase.test_stdout_pipe', 'test_subprocess.ProcessTestCase.test_stdout_stderr_pipe', 'test_subprocess.ProcessTestCase.test_stderr_redirect_with_no_stdout_redirect', 'test_subprocess.ProcessTestCase.test_stdout_filedes_of_stdout', 'test_subprocess.ProcessTestcase.test_stdout_none', 'test_subprocess.ProcessTestcase.test_universal_newlines', 'test_subprocess.ProcessTestcase.test_writes_before_communicate', 'test_subprocess.Win32ProcessTestCase._kill_process', 'test_subprocess.Win32ProcessTestCase._kill_dead_process', 'test_subprocess.Win32ProcessTestCase.test_shell_sequence', 'test_subprocess.Win32ProcessTestCase.test_shell_string', 'test_subprocess.CommandsWithSpaces.with_spaces', ] if WIN: disabled_tests += [ # This test winds up hanging a long time. # Inserting GCs doesn't fix it. 'test_ssl.ThreadedTests.test_handshake_timeout', # These sometimes raise LoopExit, for no apparent reason, # mostly but not exclusively on Python 2. Sometimes (often?) # this happens in the setUp() method when we attempt to get a client # connection 'test_socket.BufferIOTest.testRecvFromIntoBytearray', 'test_socket.BufferIOTest.testRecvFromIntoArray', 'test_socket.BufferIOTest.testRecvIntoArray', 'test_socket.BufferIOTest.testRecvIntoMemoryview', 'test_socket.BufferIOTest.testRecvFromIntoEmptyBuffer', 'test_socket.BufferIOTest.testRecvFromIntoMemoryview', 'test_socket.BufferIOTest.testRecvFromIntoSmallBuffer', 'test_socket.BufferIOTest.testRecvIntoBytearray', ] if PYPY: if TRAVIS: disabled_tests += [ # This sometimes causes a segfault for no apparent reason. # See https://travis-ci.org/gevent/gevent/jobs/327328704 # Can't reproduce locally. 'test_subprocess.ProcessTestCase.test_universal_newlines_communicate', ] if RUN_COVERAGE and CFFI_BACKEND: disabled_tests += [ # This test hangs in this combo for some reason 'test_socket.GeneralModuleTests.test_sendall_interrupted', # This can get a timeout exception instead of the Alarm 'test_socket.TCPTimeoutTest.testInterruptedTimeout', # This test sometimes gets the wrong answer (due to changed timing?) 'test_socketserver.SocketServerTest.test_ForkingUDPServer', # Timing and signals are off, so a handler exception doesn't get raised. # Seen under libev 'test_signal.InterProcessSignalTests.test_main', ] if PYPY and sys.pypy_version_info[:2] == (7, 3): # pylint:disable=no-member if OSX: disabled_tests += [ # This is expected to produce an SSLError, but instead it appears to # actually work. See above for when it started failing the same on # Travis. 'test_ssl.ThreadedTests.test_alpn_protocols', # This fails, presumably due to the OpenSSL it's compiled with. 'test_ssl.ThreadedTests.test_default_ecdh_curve', ] if PYPY3 and TRAVIS: disabled_tests += [ # If socket.SOCK_CLOEXEC is defined, this creates a socket # and tests its type with ``sock.type & socket.SOCK_CLOEXEC`` # We have a ``@property`` for ``type`` that takes care of # ``SOCK_NONBLOCK`` on Linux, but otherwise it's just a pass-through. # This started failing with PyPy 7.3.1 and it's not clear why. 'test_socket.InheritanceTest.test_SOCK_CLOEXEC', ] if PYPY3 and WIN: disabled_tests += [ # test_httpservers.CGIHTTPServerTestCase all seem to hang. # There seem to be some general subprocess issues. 
This is # ignored entirely from known_failures.py # This produces: # # OSError: [Errno 10014] The system detected an invalid # pointer address in attempting to use a pointer argument in # a call # # When calling socket.socket(fileno=fd) when we actually # call ``self._socket =self._gevent_sock_class()``. 'test_socket.GeneralModuleTests.test_socket_fileno', # This doesn't respect the scope properly # # self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) # AssertionError: Tuples differ: ('ff02::1de:c0:face:8d%42', 1234, 0, 42) != ('ff02::1de:c0:face:8d', 1234, 0, 42 # 'test_socket.GeneralModuleTests.test_getaddrinfo_ipv6_scopeid_numeric', # self.assertEqual(newsock.get_inheritable(), False) # AssertionError: True != False 'test_socket.InheritanceTest.test_dup', ] def _make_run_with_original(mod_name, func_name): @contextlib.contextmanager def with_orig(): mod = __import__(mod_name) now = getattr(mod, func_name) from gevent.monkey import get_original orig = get_original(mod_name, func_name) try: setattr(mod, func_name, orig) yield finally: setattr(mod, func_name, now) return with_orig @contextlib.contextmanager def _gc_at_end(): try: yield finally: import gc gc.collect() gc.collect() @contextlib.contextmanager def _flaky_socket_timeout(): import socket try: yield except socket.timeout: flaky.reraiseFlakyTestTimeout() # Map from FQN to a context manager that will be wrapped around # that test. wrapped_tests = { } class _PatchedTest(object): def __init__(self, test_fqn): self._patcher = wrapped_tests[test_fqn] def __call__(self, orig_test_fn): @functools.wraps(orig_test_fn) def test(*args, **kwargs): with self._patcher(): return orig_test_fn(*args, **kwargs) return test if OSX: disabled_tests += [ 'test_subprocess.POSIXProcessTestCase.test_run_abort', # causes Mac OS X to show "Python crashes" dialog box which is annoying ] if WIN: disabled_tests += [ # Issue with Unix vs DOS newlines in the file vs from the server 'test_ssl.ThreadedTests.test_socketserver', # This sometimes hangs (only on appveyor) 'test_ssl.ThreadedTests.test_asyncore_server', # On appveyor, this sometimes produces 'A non-blocking socket # operation could not be completed immediately', followed by # 'No connection could be made because the target machine # actively refused it' 'test_socket.NonBlockingTCPTests.testAccept', # On appveyor, this test has been seen to fail on 3.9 and 3.8 ] if sys.version_info[:2] <= (3, 9): disabled_tests += [ 'test_context.HamtTest.test_hamt_collision_3', # Sometimes fails:: # # self.assertIn('got more than ', str(cm.exception)) # AssertionError: 'got more than ' not found in # 'Remote end closed connection without response' # 'test_httplib.BasicTest.test_overflowing_header_limit_after_100', ] # These are a problem on 3.5; on 3.6+ they wind up getting (accidentally) disabled. wrapped_tests.update({ 'test_socket.SendfileUsingSendTest.testWithTimeout': _flaky_socket_timeout, 'test_socket.SendfileUsingSendTest.testOffset': _flaky_socket_timeout, 'test_socket.SendfileUsingSendTest.testRegularFile': _flaky_socket_timeout, 'test_socket.SendfileUsingSendTest.testCount': _flaky_socket_timeout, }) if PYPY: disabled_tests += [ # Does not exist in the CPython test suite, tests for a specific bug # in PyPy's forking. 
Only runs on linux and is specific to the PyPy # implementation of subprocess (possibly explains the extra parameter to # _execut_child) 'test_subprocess.ProcessTestCase.test_failed_child_execute_fd_leak', # On some platforms, this returns "zlib_compression", but the test is looking for # "ZLIB" 'test_ssl.ThreadedTests.test_compression', # These are flaxy, apparently a race condition? Began with PyPy 2.7-7 and 3.6-7 'test_asyncore.TestAPI_UsePoll.test_handle_error', 'test_asyncore.TestAPI_UsePoll.test_handle_read', ] if WIN: disabled_tests += [ # Starting in 7.3.1 on Windows, this stopped raising ValueError; it appears to # be a bug in PyPy. 'test_signal.WakeupFDTests.test_invalid_fd', # Likewise for 7.3.1. See the comments for PY35 'test_socket.GeneralModuleTests.test_sock_ioctl', ] disabled_tests += [ # These are flaky, beginning in 3.6-alpha 7.0, not finding some flag # set, apparently a race condition 'test_asyncore.TestAPI_UveIPv6Poll.test_handle_accept', 'test_asyncore.TestAPI_UveIPv6Poll.test_handle_accepted', 'test_asyncore.TestAPI_UveIPv6Poll.test_handle_close', 'test_asyncore.TestAPI_UveIPv6Poll.test_handle_write', 'test_asyncore.TestAPI_UseIPV6Select.test_handle_read', # These are reporting 'ssl has no attribute ...' # This could just be an OSX thing 'test_ssl.ContextTests.test__create_stdlib_context', 'test_ssl.ContextTests.test_create_default_context', 'test_ssl.ContextTests.test_get_ciphers', 'test_ssl.ContextTests.test_options', 'test_ssl.ContextTests.test_constants', # These tend to hang for some reason, probably not properly # closed sockets. 'test_socketserver.SocketServerTest.test_write', # This uses ctypes to do funky things including using ptrace, # it hangs 'test_subprocess.ProcessTestcase.test_child_terminated_in_stopped_state', # Certificate errors; need updated test 'test_urllib2_localnet.TestUrlopen.test_https', ] # Generic Python 3 disabled_tests += [ # Triggers the crash reporter 'test_threading.SubinterpThreadingTests.test_daemon_threads_fatal_error', # Relies on an implementation detail, Thread._tstate_lock 'test_threading.ThreadTests.test_tstate_lock', # Relies on an implementation detail (reprs); we have our own version 'test_threading.ThreadTests.test_various_ops', 'test_threading.ThreadTests.test_various_ops_large_stack', 'test_threading.ThreadTests.test_various_ops_small_stack', # Relies on Event having a _cond and an _reset_internal_locks() # XXX: These are commented out in the source code of test_threading because # this doesn't work. # 'lock_tests.EventTests.test_reset_internal_locks', # Python bug 13502. We may or may not suffer from this as its # basically a timing race condition. # XXX Same as above # 'lock_tests.EventTests.test_set_and_clear', # These tests want to assert on the type of the class that implements # `Popen.stdin`; we use a FileObject, but they expect different subclasses # from the `io` module 'test_subprocess.ProcessTestCase.test_io_buffered_by_default', 'test_subprocess.ProcessTestCase.test_io_unbuffered_works', # 3.3 exposed the `endtime` argument to wait accidentally. # It is documented as deprecated and not to be used since 3.4 # This test in 3.6.3 wants to use it though, and we don't have it. 'test_subprocess.ProcessTestCase.test_wait_endtime', # These all want to inspect the string value of an exception raised # by the exec() call in the child. The _posixsubprocess module arranges # for better exception handling and printing than we do. 
'test_subprocess.POSIXProcessTestCase.test_exception_bad_args_0', 'test_subprocess.POSIXProcessTestCase.test_exception_bad_executable', 'test_subprocess.POSIXProcessTestCase.test_exception_cwd', # Relies on a 'fork_exec' attribute that we don't provide 'test_subprocess.POSIXProcessTestCase.test_exception_errpipe_bad_data', 'test_subprocess.POSIXProcessTestCase.test_exception_errpipe_normal', # Python 3 fixed a bug if the stdio file descriptors were closed; # we still have that bug 'test_subprocess.POSIXProcessTestCase.test_small_errpipe_write_fd', # Relies on implementation details (some of these tests were added in 3.4, # but PyPy3 is also shipping them.) 'test_socket.GeneralModuleTests.test_SocketType_is_socketobject', 'test_socket.GeneralModuleTests.test_dealloc_warn', 'test_socket.GeneralModuleTests.test_repr', 'test_socket.GeneralModuleTests.test_str_for_enums', 'test_socket.GeneralModuleTests.testGetaddrinfo', ] if TRAVIS: disabled_tests += [ # test_cwd_with_relative_executable tends to fail # on Travis...it looks like the test processes are stepping # on each other and messing up their temp directories. We tend to get things like # saved_dir = os.getcwd() # FileNotFoundError: [Errno 2] No such file or directory 'test_subprocess.ProcessTestCase.test_cwd_with_relative_arg', 'test_subprocess.ProcessTestCaseNoPoll.test_cwd_with_relative_arg', 'test_subprocess.ProcessTestCase.test_cwd_with_relative_executable', # In 3.7 and 3.8 on Travis CI, this appears to take the full 3 seconds. # Can't reproduce it locally. We have our own copy of this that takes # timing on CI into account. 'test_subprocess.RunFuncTestCase.test_run_with_shell_timeout_and_capture_output', ] disabled_tests += [ # XXX: BUG: We simply don't handle this correctly. On CPython, # we wind up raising a BlockingIOError and then # BrokenPipeError and then some random TypeErrors, all on the # server. CPython 3.5 goes directly to socket.send() (via # socket.makefile), whereas CPython 3.6 uses socket.sendall(). # On PyPy, the behaviour is much worse: we hang indefinitely, perhaps exposing a problem # with our signal handling. # In actuality, though, this test doesn't fully test the EINTR it expects # to under gevent (because if its EWOULDBLOCK retry behaviour.) # Instead, the failures were all due to `pthread_kill` trying to send a signal # to a greenlet instead of a real thread. The solution is to deliver the signal # to the real thread by letting it get the correct ID, and we previously # used make_run_with_original to make it do that. # # But now that we have disabled our wrappers around Thread.join() in favor # of the original implementation, that causes problems: # background.join() thinks that it is the current thread, and won't let it # be joined. 'test_wsgiref.IntegrationTests.test_interrupted_write', ] # PyPy3 3.5.5 v5.8-beta if PYPY3: disabled_tests += [ # This raises 'RuntimeError: reentrant call' when exiting the # process tries to close the stdout stream; no other platform does this. # Seen in both 3.3 and 3.5 (5.7 and 5.8) 'test_signal.SiginterruptTest.test_siginterrupt_off', ] disabled_tests += [ # This fails to close all the FDs, at least on CI. On OS X, many of the # POSIXProcessTestCase fd tests have issues. 'test_subprocess.POSIXProcessTestCase.test_close_fds_when_max_fd_is_lowered', # This has the wrong constants in 5.8 (but worked in 5.7), at least on # OS X. It finds "zlib compression" but expects "ZLIB". 
'test_ssl.ThreadedTests.test_compression', # The below are new with 5.10.1 # This gets an EOF in violation of protocol; again, even without gevent # (at least on OS X; it's less consistent about that on travis) 'test_ssl.NetworkedBIOTests.test_handshake', # This passes various "invalid" strings and expects a ValueError. not sure why # we don't see errors on CPython. 'test_subprocess.ProcessTestCase.test_invalid_env', ] if OSX: disabled_tests += [ # These all fail with "invalid_literal for int() with base 10: b''" 'test_subprocess.POSIXProcessTestCase.test_close_fds', 'test_subprocess.POSIXProcessTestCase.test_close_fds_after_preexec', 'test_subprocess.POSIXProcessTestCase.test_pass_fds', 'test_subprocess.POSIXProcessTestCase.test_pass_fds_inheritable', 'test_subprocess.POSIXProcessTestCase.test_pipe_cloexec', # The below are new with 5.10.1 # These fail with 'OSError: received malformed or improperly truncated ancillary data' 'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen0', 'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen0Plus1', 'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen1', 'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen2Minus1', # Using the provided High Sierra binary, these fail with # 'ValueError: invalid protocol version _SSLMethod.PROTOCOL_SSLv3'. # gevent code isn't involved and running them unpatched has the same issue. 'test_ssl.ContextTests.test_constructor', 'test_ssl.ContextTests.test_protocol', 'test_ssl.ContextTests.test_session_stats', 'test_ssl.ThreadedTests.test_echo', 'test_ssl.ThreadedTests.test_protocol_sslv23', 'test_ssl.ThreadedTests.test_protocol_sslv3', 'test_ssl.ThreadedTests.test_protocol_tlsv1', 'test_ssl.ThreadedTests.test_protocol_tlsv1_1', # Similar, they fail without monkey-patching. 'test_ssl.TestPostHandshakeAuth.test_pha_no_pha_client', 'test_ssl.TestPostHandshakeAuth.test_pha_optional', 'test_ssl.TestPostHandshakeAuth.test_pha_required', # This gets None instead of http1.1, even without gevent 'test_ssl.ThreadedTests.test_npn_protocols', # This fails to decode a filename even without gevent, # at least on High Sierra. Newer versions of the tests actually skip this. 'test_httpservers.SimpleHTTPServerTestCase.test_undecodable_filename', ] disabled_tests += [ # This seems to be a buffering issue? Something isn't # getting flushed. (The output is wrong). Under PyPy3 5.7, # I couldn't reproduce locally in Ubuntu 16 in a VM # or a laptop with OS X. Under 5.8.0, I can reproduce it, but only # when run by the testrunner, not when run manually on the command line, # so something is changing in stdout buffering in those situations. 'test_threading.ThreadJoinOnShutdown.test_2_join_in_forked_process', 'test_threading.ThreadJoinOnShutdown.test_1_join_in_forked_process', ] if TRAVIS: disabled_tests += [ # Likewise, but I haven't produced it locally. 'test_threading.ThreadJoinOnShutdown.test_1_join_on_shutdown', ] if PYPY: wrapped_tests.update({ # XXX: gevent: The error that was raised by that last call # left a socket open on the server or client. The server gets # to http/server.py(390)handle_one_request and blocks on # self.rfile.readline which apparently is where the SSL # handshake is done. That results in the exception being # raised on the client above, but apparently *not* on the # server. Consequently it sits trying to read from that # socket. On CPython, when the client socket goes out of scope # it is closed and the server raises an exception, closing the # socket. On PyPy, we need a GC cycle for that to happen. 
# Without the socket being closed and exception being raised, # the server cannot be stopped (it runs each request in the # same thread that would notice it had been stopped), and so # the cleanup method added by start_https_server to stop the # server blocks "forever". # This is an important test, so rather than skip it in patched_tests_setup, # we do the gc before we return. 'test_urllib2_localnet.TestUrlopen.test_https_with_cafile': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_command': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_handler': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_head_keep_alive': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_head_via_send_error': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_header_close': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_internal_key_error': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_request_line_trimming': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_return_custom_status': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_return_header_keep_alive': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_send_blank': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_send_error': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_version_bogus': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_version_digits': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_version_invalid': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_version_none': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_version_none_get': _gc_at_end, 'test_httpservers.BaseHTTPServerTestCase.test_get': _gc_at_end, 'test_httpservers.SimpleHTTPServerTestCase.test_get': _gc_at_end, 'test_httpservers.SimpleHTTPServerTestCase.test_head': _gc_at_end, 'test_httpservers.SimpleHTTPServerTestCase.test_invalid_requests': _gc_at_end, 'test_httpservers.SimpleHTTPServerTestCase.test_path_without_leading_slash': _gc_at_end, 'test_httpservers.CGIHTTPServerTestCase.test_invaliduri': _gc_at_end, 'test_httpservers.CGIHTTPServerTestCase.test_issue19435': _gc_at_end, 'test_httplib.TunnelTests.test_connect': _gc_at_end, 'test_httplib.SourceAddressTest.testHTTPConnectionSourceAddress': _gc_at_end, # Unclear 'test_urllib2_localnet.ProxyAuthTests.test_proxy_with_bad_password_raises_httperror': _gc_at_end, 'test_urllib2_localnet.ProxyAuthTests.test_proxy_with_no_password_raises_httperror': _gc_at_end, }) disabled_tests += [ 'test_subprocess.ProcessTestCase.test_threadsafe_wait', # XXX: It seems that threading.Timer is not being greened properly, possibly # due to a similar issue to what gevent.threading documents for normal threads. # In any event, this test hangs forever 'test_subprocess.POSIXProcessTestCase.test_preexec_errpipe_does_not_double_close_pipes', # Subclasses Popen, and overrides _execute_child. Expects things to be done # in a particular order in an exception case, but we don't follow that # exact order 'test_selectors.PollSelectorTestCase.test_above_fd_setsize', # This test attempts to open many many file descriptors and # poll on them, expecting them all to be ready at once. But # libev limits the number of events it will return at once. Specifically, # on linux with epoll, it returns a max of 64 (ev_epoll.c). 
# XXX: Hangs (Linux only) 'test_socket.NonBlockingTCPTests.testInitNonBlocking', # We don't handle the Linux-only SOCK_NONBLOCK option 'test_socket.NonblockConstantTest.test_SOCK_NONBLOCK', # Tries to use multiprocessing which doesn't quite work in # monkey_test module (Windows only) 'test_socket.TestSocketSharing.testShare', # Windows-only: Sockets have a 'ioctl' method in Python 3 # implemented in the C code. This test tries to check # for the presence of the method in the class, which we don't # have because we don't inherit the C implementation. But # it should be found at runtime. 'test_socket.GeneralModuleTests.test_sock_ioctl', # XXX This fails for an unknown reason 'test_httplib.HeaderTests.test_parse_all_octets', ] if OSX: disabled_tests += [ # These raise "OSError: 12 Cannot allocate memory" on both # patched and unpatched runs 'test_socket.RecvmsgSCMRightsStreamTest.testFDPassEmpty', ] if TRAVIS: # This has been seen to produce "Inconsistency detected by # ld.so: dl-open.c: 231: dl_open_worker: Assertion # `_dl_debug_initialize (0, args->nsid)->r_state == # RT_CONSISTENT' failed!" and fail. disabled_tests += [ 'test_threading.ThreadTests.test_is_alive_after_fork', # This has timing constraints that are strict and do not always # hold. 'test_selectors.PollSelectorTestCase.test_timeout', ] if TRAVIS: disabled_tests += [ 'test_subprocess.ProcessTestCase.test_double_close_on_error', # This test is racy or OS-dependent. It passes locally (sufficiently fast machine) # but fails under Travis ] disabled_tests += [ # XXX: Hangs 'test_ssl.ThreadedTests.test_nonblocking_send', 'test_ssl.ThreadedTests.test_socketserver', # Uses direct sendfile, doesn't properly check for it being enabled 'test_socket.GeneralModuleTests.test__sendfile_use_sendfile', # Relies on the regex of the repr having the locked state (TODO: it'd be nice if # we did that). # XXX: These are commented out in the source code of test_threading because # this doesn't work. # 'lock_tests.LockTests.lest_locked_repr', # 'lock_tests.LockTests.lest_repr', # This test opens a socket, creates a new socket with the same fileno, # closes the original socket (and hence fileno) and then # expects that the calling setblocking() on the duplicate socket # will raise an error. Our implementation doesn't work that way because # setblocking() doesn't actually touch the file descriptor. # That's probably OK because this was a GIL state error in CPython # see https://github.com/python/cpython/commit/fa22b29960b4e683f4e5d7e308f674df2620473c 'test_socket.TestExceptions.test_setblocking_invalidfd', ] if ARES: disabled_tests += [ # These raise different errors or can't resolve # the IP address correctly 'test_socket.GeneralModuleTests.test_host_resolution', 'test_socket.GeneralModuleTests.test_getnameinfo', ] disabled_tests += [ 'test_threading.MiscTestCase.test__all__', ] # We don't actually implement socket._sendfile_use_sendfile, # so these tests, which think they're using that and os.sendfile, # fail. 
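# For context, CPython's ``socket.sendfile()`` dispatches roughly like
# this (a simplified sketch, not the exact stdlib code)::
#
#   def sendfile(self, file, offset=0, count=None):
#       try:
#           return self._sendfile_use_sendfile(file, offset, count)
#       except _GiveupOnSendfile:
#           return self._sendfile_use_send(file, offset, count)
#
# gevent's socket object only provides the ``send``-based fallback, so
# the ``SendfileUsingSendfileTest`` cases listed next cannot pass.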
disabled_tests += [ 'test_socket.SendfileUsingSendfileTest.testCount', 'test_socket.SendfileUsingSendfileTest.testCountSmall', 'test_socket.SendfileUsingSendfileTest.testCountWithOffset', 'test_socket.SendfileUsingSendfileTest.testOffset', 'test_socket.SendfileUsingSendfileTest.testRegularFile', 'test_socket.SendfileUsingSendfileTest.testWithTimeout', 'test_socket.SendfileUsingSendfileTest.testEmptyFileSend', 'test_socket.SendfileUsingSendfileTest.testNonBlocking', 'test_socket.SendfileUsingSendfileTest.test_errors', ] # Ditto disabled_tests += [ 'test_socket.GeneralModuleTests.test__sendfile_use_sendfile', ] disabled_tests += [ # This test requires Linux >= 4.3. When we were running 'dist: # trusty' on the 4.4 kernel, it passed (~July 2017). But when # trusty became the default dist in September 2017 and updated # the kernel to 4.11.6, it begain failing. It fails on `res = # op.recv(assoclen + len(plain) + taglen)` (where 'op' is the # client socket) with 'OSError: [Errno 22] Invalid argument' # for unknown reasons. This is *after* having successfully # called `op.sendmsg_afalg`. Post 3.6.0, what we test with, # the test was changed to require Linux 4.9 and the data was changed, # so this is not our fault. We should eventually update this when we # update our 3.6 version. # See https://bugs.python.org/issue29324 'test_socket.LinuxKernelCryptoAPI.test_aead_aes_gcm', ] disabled_tests += [ # These want to use the private '_communicate' method, which # our Popen doesn't have. 'test_subprocess.MiscTests.test_call_keyboardinterrupt_no_kill', 'test_subprocess.MiscTests.test_context_manager_keyboardinterrupt_no_kill', 'test_subprocess.MiscTests.test_run_keyboardinterrupt_no_kill', # This wants to check that the underlying fileno is blocking, # but it isn't. 'test_socket.NonBlockingTCPTests.testSetBlocking', # 3.7b2 made it impossible to instantiate SSLSocket objects # directly, and this tests for that, but we don't follow that change. 'test_ssl.BasicSocketTests.test_private_init', # 3.7b2 made a change to this test that on the surface looks incorrect, # but it passes when they run it and fails when we do. It's not # clear why. 'test_ssl.ThreadedTests.test_check_hostname_idn', # These appear to hang, haven't investigated why 'test_ssl.SimpleBackgroundTests.test_get_server_certificate', # Probably the same as NetworkConnectionNoServer.test_create_connection_timeout 'test_socket.NetworkConnectionNoServer.test_create_connection', # Internals of the threading module that change. 'test_threading.ThreadTests.test_finalization_shutdown', 'test_threading.ThreadTests.test_shutdown_locks', # Expects a deprecation warning we don't raise 'test_threading.ThreadTests.test_old_threading_api', # This tries to use threading.interrupt_main() from a new Thread; # but of course that's actually the same thread and things don't # work as expected. 'test_threading.InterruptMainTests.test_interrupt_main_subthread', 'test_threading.InterruptMainTests.test_interrupt_main_noerror', # TLS1.3 seems flaky 'test_ssl.ThreadedTests.test_wrong_cert_tls13', ] if APPVEYOR: disabled_tests += [ # This sometimes produces ``self.assertEqual(1, len(s.select(0))): 1 != 0``. # Probably needs to spin the loop once. 
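        # ("Spinning the loop" would mean yielding to the event loop
        #  first, for example ``gevent.sleep(0)``, so that pending
        #  callbacks run before the selector is polled; that is a guess
        #  at a fix, not something the test currently does.)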
'test_selectors.BaseSelectorTestCase.test_timeout', # AssertionError: OSError not raised 'test_socket.PurePythonSocketPairTest.test_injected_authentication_failure', # subprocess.TimeoutExpired: Command '('C:\\Program Files\\Git\\usr\\bin\\true.EXE',)' # timed out after -1 seconds # Seen on 3.12.5 and 3.13.0rc1 # Perhaps they changed handling of negative timeouts? Doesn't # affect any other platforms though. 'test_subprocess.ProcessTestCase.test_wait_negative_timeout', ] disabled_tests += [ # This one seems very strict: doesn't want a pathlike # first argument when shell is true. 'test_subprocess.RunFuncTestCase.test_run_with_pathlike_path', # This tests for a warning we don't raise. 'test_subprocess.RunFuncTestCase.test_bufsize_equal_one_binary_mode', # This compares the output of threading.excepthook with # data constructed in Python. But excepthook is implemented in C # and can't see the patched threading.get_ident() we use, so the # output doesn't match. 'test_threading.ExceptHookTests.test_excepthook_thread_None', ] if sys.version_info[:3] < (3, 8, 1): disabled_tests += [ # Earlier versions parse differently so the newer test breaks 'test_ssl.BasicSocketTests.test_parse_all_sans', 'test_ssl.BasicSocketTests.test_parse_cert_CVE_2013_4238', ] if sys.version_info[:3] < (3, 8, 10): disabled_tests += [ # These were added for fixes sometime between 3.8.1 and 3.8.10 'test_ftplib.TestFTPClass.test_makepasv_issue43285_security_disabled', 'test_ftplib.TestFTPClass.test_makepasv_issue43285_security_enabled_default', 'test_httplib.BasicTest.test_dir_with_added_behavior_on_status', 'test_httplib.TunnelTests.test_tunnel_connect_single_send_connection_setup', 'test_ssl.TestSSLDebug.test_msg_callback_deadlock_bpo43577', # This one fails with the updated certs 'test_ssl.ContextTests.test_load_verify_cadata', # This one times out on 3.7.1 on Appveyor 'test_ftplib.TestTLS_FTPClassMixin.test_retrbinary_rest', ] if RESOLVER_DNSPYTHON: disabled_tests += [ # This does two things DNS python doesn't. First, it sends it # capital letters and expects them to be returned lowercase. # Second, it expects the symbolic scopeid to be stripped from the end. 'test_socket.GeneralModuleTests.test_getaddrinfo_ipv6_scopeid_symbolic', ] # if 'signalfd' in os.environ.get('GEVENT_BACKEND', ''): # # tests that don't interact well with signalfd # disabled_tests.extend([ # 'test_signal.SiginterruptTest.test_siginterrupt_off', # 'test_socketserver.SocketServerTest.test_ForkingTCPServer', # 'test_socketserver.SocketServerTest.test_ForkingUDPServer', # 'test_socketserver.SocketServerTest.test_ForkingUnixStreamServer']) # LibreSSL reports OPENSSL_VERSION_INFO (2, 0, 0, 0, 0) regardless of its version, # so this is known to fail on some distros. We don't want to detect this because we # don't want to trigger the side-effects of importing ssl prematurely if we will # be monkey-patching, so we skip this test everywhere. It doesn't do much for us # anyway. disabled_tests += [ 'test_ssl.BasicSocketTests.test_openssl_version' ] if OSX: disabled_tests += [ # This sometimes produces OSError: Errno 40: Message too long 'test_socket.RecvmsgIntoTCPTest.testRecvmsgIntoGenerator', # These sometime timeout. Cannot reproduce locally. 
'test_ftp.TestTLS_FTPClassMixin.test_mlsd', 'test_ftp.TestTLS_FTPClassMixin.test_retrlines_too_long', 'test_ftp.TestTLS_FTPClassMixin.test_storlines', 'test_ftp.TestTLS_FTPClassMixin.test_retrbinary_rest', ] if RESOLVER_ARES and PY38 and not RUNNING_ON_CI: disabled_tests += [ # When updating to 1.16.0 this was seen locally, but not on CI. # Tuples differ: ('ff02::1de:c0:face:8d', 1234, 0, 0) # != ('ff02::1de:c0:face:8d', 1234, 0, 1) 'test_socket.GeneralModuleTests.test_getaddrinfo_ipv6_scopeid_symbolic', ] if PY39: disabled_tests += [ # Depends on exact details of the repr. Eww. 'test_subprocess.ProcessTestCase.test_repr', # Tries to wait for the process without using Popen APIs, and expects the # ``returncode`` attribute to stay None. But we have already hooked SIGCHLD, so # we see and set the ``returncode``; there is no way to wait that doesn't do that. 'test_subprocess.POSIXProcessTestTest.test_send_signal_race', ] if sys.version_info[:3] < (3, 9, 5): disabled_tests += [ # These were added for fixes sometime between 3.9.1 and 3.9.5 'test_ftplib.TestFTPClass.test_makepasv_issue43285_security_disabled', 'test_ftplib.TestFTPClass.test_makepasv_issue43285_security_enabled_default', 'test_httplib.BasicTest.test_dir_with_added_behavior_on_status', 'test_httplib.TunnelTests.test_tunnel_connect_single_send_connection_setup', 'test_ssl.TestSSLDebug.test_msg_callback_deadlock_bpo43577', # This one fails with the updated certs 'test_ssl.ContextTests.test_load_verify_cadata', # These time out on 3.9.1 on Appveyor 'test_ftplib.TestTLS_FTPClassMixin.test_retrbinary_rest', 'test_ftplib.TestTLS_FTPClassMixin.test_retrlines_too_long', ] if PY39_EXACTLY: if OSX: disabled_tests += [ ] if APPVEYOR: disabled_tests += [ # On 3.9.13, this has been seen to cause # # SystemError: returned # NULL without setting an error # # But this isn't even our code! That's a CPython function. Must be flaky. 'test_socket.GeneralModuleTests.testInvalidInterfaceIndexToName', ] if PY310: disabled_tests += [ # They arbitrarily made some types so that they can't be created; # that's an implementation detail we're not going to follow ( # it would require them to be factory functions). 'test_select.SelectTestCase.test_disallow_instantiation', 'test_threading.ThreadTests.test_disallow_instantiation', # This wants two true threads to work, but a CPU bound loop # in a greenlet can't be interrupted. 'test_threading.InterruptMainTests.test_can_interrupt_tight_loops', # We don't currently implement pipesize. 'test_subprocess.ProcessTestCase.test_pipesize_default', 'test_subprocess.ProcessTestCase.test_pipesizes', # Unknown 'test_signal.SiginterruptTest.test_siginterrupt_off', ] if TRAVIS: disabled_tests += [ # The mixing of subinterpreters (with threads) and gevent apparently # leads to a segfault on Ubuntu/GitHubActions/3.10rc1. Not clear why. # But that's not a great use case for gevent. 'test_threading.SubinterpThreadingTests.test_threads_join', 'test_threading.SubinterpThreadingTests.test_threads_join_2', ] if PY311: disabled_tests += [ # CPython issue #27718: This wants to require all objects to # have a __module__ of 'signal' because pydoc. Obviously our patches don't. 'test_signal.GenericTests.test_functions_module_attr', # 3.11 added subprocess._USE_VFORK and subprocess._USE_POSIX_SPAWN. # We don't support either of those (although USE_VFORK might be possible?) 'test_subprocess.ProcessTestCase.test__use_vfork', # This test only runs if threading has not been imported already # when the subprocess runs its script. 
But locally, and on CI, # threading is already imported. It's not clear how, because getting the backtrace just # shows all the internal frozen importlib modules, no actual calling code. # The closest we get is this: # # (626)() # (609)main() # (541)venv() # (394)addsitepackages() # (236)addsitedir() # (195)addpackage() # (1)() # # which makes it appear like it's one of the .pth files? # # The monkey_test.py runner adds some ``gevent.testing`` # imports to the top of ``test_threading.py``, and those do # ultimately import threading, but that shouldn't affect the # Python subprocess that runs the script being tested. # # Locally, I had a .pythonrc that was importing threading # (``rich.traceback`` -> ``pygments.lexers`` -> # ``pygments.plugin`` -> ``importlib.metadata`` -> ``zipfile`` # -> ``importlib.util`` -> ``threading``). # # That provided the clue we needed. ``importlib._bootstrap`` # imports ``importlib.util``, which imports threading. No idea # how this test gets to pass on the CPython test suite. 'test_threading.ThreadTests.test_import_from_another_thread', ] if sys.version_info[:3] < (3, 11, 8): # New tests in that version that won't pass on earlier versions. disabled_tests += [ 'test_threading.ThreadTests.test_main_thread_after_fork_from_dummy_thread', 'tets_ssl.TestPreHandshakeClose.test_preauth_data_to_tls_client', 'test_ssl.TestPreHandshakeClose.test_preauth_data_to_tls_server', 'test_signal.PosixTests.test_no_repr_is_called_on_signal_handler', 'test_socket.GeneralModuleTests.testInvalidInterfaceIndexToName', ] if WIN: disabled_tests += [ 'test_subprocess.ProcessTestCase.test_win32_duplicate_envs', 'test_ssl.SimpleBackgroundTests.test_transport_eof', 'test_ssl.SimpleBackgroundTests.test_bio_read_write_data', 'test_ssl.SimpleBackgroundTests.test_bio_handshake', 'test_httplib.ExtendedReadTestContentLengthKnown.test_readline_without_limit', 'test_httplib.ExtendedReadTestContentLengthKnown.test_readline', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1_unbounded', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1_bounded', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1', 'test_httplib.HeaderTests.test_ipv6host_header', ] if PY312: disabled_tests += [ # This test is new in 3.12.1; it appears to essentially rely # on blocking sockets to fully read data in one call, and our # version delivers a short initial read. 'test_ssl.ThreadedTests.test_recv_into_buffer_protocol_len', ] if sys.version_info[:3] < (3, 12, 1): # Appveyor still has 3.12.0 when we added the 3.12.1 tests. # Some of them depend on changes to the stdlib/interpreter. disabled_tests += [ 'test_httplib.HeaderTests.test_ipv6host_header', 'test_interpreters.TestInterpreterClose.test_subthreads_still_running', 'test_interpreters.TestInterpreterIsRunning.test_main', 'test_interpreters.TestInterpreterIsRunning.test_with_only_background_threads', 'test_interpreters.TestInterpreterRun.test_with_background_threads_still_running', 'test_interpreters.FinalizationTests.test_gh_109793', 'test_interpreters.StartupTests.test_sys_path_0', 'test_threading.SubinterpThreadingTests.test_threads_join_with_no_main', 'test_threading.MiscTestCase.test_gh112826_missing__thread__is_main_interpreter', ] if RUN_COVERAGE: disabled_tests += [ # This test wants to look for installed tracing functions, and # having a coverage tracer function installed breaks it. 
'test_threading.ThreadTests.test_gettrace_all_threads', ] if WIN: disabled_tests += [ # These three are looking for an error string that matches, # and ours differs very slightly 'test_socket.BasicHyperVTest.testCreateHyperVSocketAddrNotTupleFailure', 'test_socket.BasicHyperVTest.testCreateHyperVSocketAddrServiceIdNotValidUUIDFailure', 'test_socket.BasicHyperVTest.testCreateHyperVSocketAddrVmIdNotValidUUIDFailure', ] # Like the case for 3.12.1, these need VM or stdlib updates. if sys.version_info[:3] < (3, 12, 2): disabled_tests += [ 'test_socket.GeneralModuleTests.testInvalidInterfaceIndexToName', 'test_subprocess.ProcessTestCase.test_win32_duplicate_envs', 'test_httplib.ExtendedReadTestContentLengthKnown.test_readline_without_limit', 'test_httplib.ExtendedReadTestContentLengthKnown.test_readline', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1_unbounded', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1_bounded', 'test_httplib.ExtendedReadTestContentLengthKnown.test_read1', ] if LINUX and RUNNING_ON_CI: disabled_tests += [ # These two try to forcibly close a socket, preventing some data # from reaching its destination. That works OK on some platforms, but # in this set of circumstances, because of the event loop, gevent is # able to send that data. 'test_ssl.TestPreHandshakeClose.test_preauth_data_to_tls_client', 'test_ssl.TestPreHandshakeClose.test_preauth_data_to_tls_server', ] if ARES: disabled_tests += [ # c-ares doesn't like the IPv6 syntax it uses here. 'test_socket.GeneralModuleTests.test_getaddrinfo_ipv6_scopeid_symbolic', ] if PY313: disabled_tests += [ # Creates a fork bomb because all the threads that have been created in the # parent process continue running in each child process. 'test_threading.ThreadJoinOnShutdown.test_reinit_tls_after_fork', ] if OSX and RUNNING_ON_CI: disabled_tests += [ # Sometimes times out. Cannot reproduce locally. 'test_signal.ItimerTest.test_itimer_virtual', ] if APPVEYOR: disabled_tests += [ ] if TRAVIS: disabled_tests += [ # These tests frequently break when we try to use newer Travis CI images, # due to different versions of OpenSSL being available. See above for some # specific examples. Usually the tests catch up, eventually (e.g., at this writing, # the 3.9b1 tests are fine on Ubuntu Bionic, but all other versions fail). 'test_ssl.ContextTests.test_options', 'test_ssl.ThreadedTests.test_alpn_protocols', 'test_ssl.ThreadedTests.test_default_ecdh_curve', 'test_ssl.ThreadedTests.test_shared_ciphers', ] if RUNNING_ON_MUSLLINUX: disabled_tests += [ # This is supposed to *not* crash, but on the muslilnux image, it # does crash (exitcode -11, ie, SIGSEGV) 'test_threading.ThreadingExceptionTests.test_recursion_limit', ] # Now build up the data structure we'll use to actually find disabled tests # to avoid a linear scan for every file (it seems the list could get quite large) # (First, freeze the source list to make sure it isn't modified anywhere) def _build_test_structure(sequence_of_tests): _disabled_tests = frozenset(sequence_of_tests) disabled_tests_by_file = collections.defaultdict(set) for file_case_meth in _disabled_tests: file_name, _rest = file_case_meth.split('.', 1) by_file = disabled_tests_by_file[file_name] by_file.add(file_case_meth) return disabled_tests_by_file _disabled_tests_by_file = _build_test_structure(disabled_tests) _wrapped_tests_by_file = _build_test_structure(wrapped_tests) def disable_tests_in_source(source, filename): # Source and filename are both native strings. 
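    # Illustrative sketch (added commentary, not in the original source) of
    # the rewriting performed below. A stdlib test file that begins as
    #
    #   import socket
    #   class GeneralModuleTests(unittest.TestCase):
    #       def test_getaddrinfo_ipv6_scopeid_symbolic(self):
    #           ...
    #
    # comes out roughly as
    #
    #   from gevent.testing import patched_tests_setup as _GEVENT_PTS;import unittest as _GEVENT_UTS;import socket
    #   class GeneralModuleTests(unittest.TestCase):
    #       @_GEVENT_UTS.skip('Removed by patched_tests_setup: test_socket.GeneralModuleTests.test_getaddrinfo_ipv6_scopeid_symbolic')
    #       def test_getaddrinfo_ipv6_scopeid_symbolic(self):
    #           ...
    #
    # when that test name appears in the disabled list. Wrapped tests get a
    # @_GEVENT_PTS._PatchedTest(...) decorator instead, and disabling an
    # entire class renames it (class _GEVENT_DISABLE_...) so unittest
    # discovery no longer collects it.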
if filename.startswith('./'): # turn "./test_socket.py" (used for auto-complete) into "test_socket.py" filename = filename[2:] if filename.endswith('.py'): filename = filename[:-3] # XXX ignoring TestCase class name (just using function name). # Maybe we should do this with the AST, or even after the test is # imported. my_disabled_tests = _disabled_tests_by_file.get(filename, ()) my_wrapped_tests = _wrapped_tests_by_file.get(filename, {}) if my_disabled_tests or my_wrapped_tests: # Insert our imports early in the file. # If we do it on a def-by-def basis, we can break syntax # if the function is already decorated pattern = r'^import .*' replacement = r'from gevent.testing import patched_tests_setup as _GEVENT_PTS;' replacement += r'import unittest as _GEVENT_UTS;' replacement += r'\g<0>' source, n = re.subn(pattern, replacement, source, count=1, flags=re.MULTILINE) print("Added imports", n) # Test cases will always be indented some, # so use [ \t]+. Without indentation, test_main, commonly used as the # __main__ function at the top level, could get matched. \s matches # newlines even in MULTILINE mode so it would still match that. my_disabled_testcases = set() for test in my_disabled_tests: # test will be: # test_module.TestClass.test_method # or # test_module.TestClass if test.count('.') == 1: _module, class_name = test.split('.') # disabling a class. # # class CLifoTest(BaseCLass): # -> # class _GEVENT_DISABLE_ClifoTest: # # Unittest discover finds subclasses of TestCase, # so make that not be true. This will fail if the definition # crosses multiple lines... pattern = r"class " + class_name + r'.*\):' no_test_class_name = class_name.replace("Test", "") replacement = 'class _GEVENT_DISABLE_' + no_test_class_name + ":" source, n = re.subn(pattern, replacement, source, flags=re.MULTILINE) else: testcase = test.split('.')[-1] my_disabled_testcases.add(testcase) # def foo_bar(self) # -> # @_GEVENT_UTS.skip('Removed by patched_tests_setup') # def foo_bar(self) pattern = r"^([ \t]+)def " + testcase replacement = r"\1@_GEVENT_UTS.skip('Removed by patched_tests_setup: %s')\n" % (test,) replacement += r"\g<0>" source, n = re.subn(pattern, replacement, source, flags=re.MULTILINE) print('Skipped %s (%d)' % (test, n), file=sys.stderr) for test in my_wrapped_tests: testcase = test.split('.')[-1] if testcase in my_disabled_testcases: print("Not wrapping %s because it is skipped" % (test,)) continue # def foo_bar(self) # -> # @_GEVENT_PTS._PatchedTest('file.Case.name') # def foo_bar(self) pattern = r"^([ \t]+)def " + testcase replacement = r"\1@_GEVENT_PTS._PatchedTest('%s')\n" % (test,) replacement += r"\g<0>" source, n = re.subn(pattern, replacement, source, 0, re.MULTILINE) print('Wrapped %s (%d)' % (testcase, n), file=sys.stderr) return source gevent-24.11.1/src/gevent/testing/resources.py000066400000000000000000000164711471441230600213030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ Test environment setup. This establishes the resources that are available for use, which are tested with `support.is_resource_enabled`. """ from __future__ import absolute_import, division, print_function # This file may be imported early, so it should take care not to import # things it doesn't need, which means deferred imports. def get_ALL_RESOURCES(): "Return a fresh list of resource names." # RESOURCE_NAMES is the list of all known resources, including those that # shouldn't be enabled by default or when asking for "all" resources. # ALL_RESOURCES is the list of resources enabled by default or with "all" resources. try: # 3.6 and 3.7 from test.libregrtest import ALL_RESOURCES except ImportError: # 2.7 through 3.5 # Don't do this: ## from test.regrtest import ALL_RESOURCES # On Python 2.7 to 3.5, importing regrtest iterates # sys.modules and does modifications. That doesn't work well # when it's imported from another module at module scope. # Also, it makes some assumptions about module __file__ that # may not hold true (at least on 2.7), especially when six or # other module proxy objects are involved. # So we hardcode the list. This is from 2.7, which is a superset # of the defined resources through 3.5. ALL_RESOURCES = ( 'audio', 'curses', 'largefile', 'network', 'bsddb', 'decimal', 'cpu', 'subprocess', 'urlfetch', 'gui', 'xpickle' ) return list(ALL_RESOURCES) + [ # Do we test the stdlib monkey-patched? 'gevent_monkey', ] def parse_resources(resource_str=None): # str -> Sequence[str] # Parse it like libregrtest.cmdline documents: # -u is used to specify which special resource intensive tests to run, # such as those requiring large file support or network connectivity. # The argument is a comma-separated list of words indicating the # resources to test. Currently only the following are defined: # all - Enable all special resources. # # none - Disable all special resources (this is the default). # # network - It is okay to run tests that use external network # resource, e.g. testing SSL support for sockets. # # # subprocess Run all tests for the subprocess module. # # # To enable all resources except one, use '-uall,-'. For # example, to run all the tests except for the gui tests, give the # option '-uall,-gui'. # We make a change though: we default to 'all' resources, instead of # 'none'. Encountering either of those later in the string resets # it, for ease of working with appending to environment variables. 
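    # Illustrative examples (added commentary, not in the original source)
    # of how the rules below combine:
    #
    #   parse_resources(None)           -> all resources, including 'gevent_monkey',
    #                                      unless GEVENTTEST_USE_RESOURCES says otherwise
    #   parse_resources('')             -> all resources (an empty string keeps the default)
    #   parse_resources('all,-network') -> everything except 'network'
    #   parse_resources('none,network') -> only 'network'
    #   parse_resources(',-gui')        -> everything except 'gui'
    #                                      (the empty leading entry is ignored)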
if resource_str is None: import os resource_str = os.environ.get('GEVENTTEST_USE_RESOURCES') resources = get_ALL_RESOURCES() if not resource_str: return resources requested_resources = resource_str.split(',') for requested_resource in requested_resources: # empty strings are ignored; this can happen when working with # the environment variable if not already set: # ENV=$ENV,-network if not requested_resource: continue if requested_resource == 'all': resources = get_ALL_RESOURCES() elif requested_resource == 'none': resources = [] elif requested_resource.startswith('-'): if requested_resource[1:] in resources: resources.remove(requested_resource[1:]) else: # TODO: Produce a warning if it's an unknown resource? resources.append(requested_resource) return resources def unparse_resources(resources): """ Given a list of enabled resources, produce the correct environment variable setting to enable (only) that list. """ # By default, we assume all resources are enabled, so explicitly # listing them here doesn't actually disable anything. To do that, we want to # list the ones that are disabled. This is usually shorter than starting with # 'none', and manually adding them back in one by one. # # 'none' must be special cased because an empty option string # means 'all'. Still, we're explicit about that. # # TODO: Make this produce the minimal output; sometimes 'none' and # adding would be shorter. all_resources = set(get_ALL_RESOURCES()) enabled = set(resources) if enabled == all_resources: result = 'all' elif resources: explicitly_disabled = all_resources - enabled result = ''.join(sorted('-' + x for x in explicitly_disabled)) else: result = 'none' return result def setup_resources(resources=None): """ Call either with a list of resources or a resource string. If ``None`` is given, get the resource string from the environment. """ if isinstance(resources, str) or resources is None: resources = parse_resources(resources) from . import support support.use_resources = list(resources) support.gevent_has_setup_resources = True return resources def ensure_setup_resources(): # Call when you don't know if resources have been setup and you want to # get the environment variable if needed. # Returns an object with `is_resource_enabled`. from . import support if not support.gevent_has_setup_resources: setup_resources() return support def exit_without_resource(resource): """ Call this in standalone test modules that can't use unittest.SkipTest. Exits with a status of 0 if the resource isn't enabled. 
""" if not ensure_setup_resources().is_resource_enabled(resource): print("Skipped: %r not enabled" % (resource,)) import sys sys.exit(0) def skip_without_resource(resource, reason=''): requires = 'Requires resource %r' % (resource,) if not reason: reason = requires else: reason = reason + ' (' + requires + ')' if not ensure_setup_resources().is_resource_enabled(resource): import unittest raise unittest.SkipTest(reason) if __name__ == '__main__': print(setup_resources()) gevent-24.11.1/src/gevent/testing/six.py000066400000000000000000000020131471441230600200570ustar00rootroot00000000000000import sys # pylint:disable=unused-argument,import-error PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] >= 3 if PY3: import builtins exec_ = getattr(builtins, "exec") def reraise(tp, value, tb=None): if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value xrange = range string_types = (str,) text_type = str else: def exec_(code, globs=None, locs=None): """Execute code in a namespace.""" if globs is None: frame = sys._getframe(1) globs = frame.f_globals if locs is None: locs = frame.f_locals del frame elif locs is None: locs = globs exec("""exec code in globs, locs""") import __builtin__ as builtins xrange = builtins.xrange string_types = (builtins.basestring,) text_type = builtins.unicode exec_("""def reraise(tp, value, tb=None): try: raise tp, value, tb finally: tb = None """) gevent-24.11.1/src/gevent/testing/skipping.py000066400000000000000000000150771471441230600211160ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import absolute_import, print_function, division import functools import unittest from . 
import sysinfo def _identity(f): return f def _do_not_skip(reason): assert reason return _identity skipOnMac = _do_not_skip skipOnMacOnCI = _do_not_skip skipOnWindows = _do_not_skip skipOnAppVeyor = _do_not_skip skipOnCI = _do_not_skip skipOnManylinux = _do_not_skip skipOnPyPy = _do_not_skip skipOnPyPyOnCI = _do_not_skip skipOnPyPy3OnCI = _do_not_skip skipOnPyPy3 = _do_not_skip skipOnPyPyOnWindows = _do_not_skip skipOnPy3 = unittest.skip if sysinfo.PY3 else _do_not_skip skipOnPy37 = unittest.skip if sysinfo.PY37 else _do_not_skip skipOnPy310 = unittest.skip if sysinfo.PY310 else _do_not_skip skipOnPy312 = unittest.skip if sysinfo.PY312 else _do_not_skip skipOnPurePython = unittest.skip if sysinfo.PURE_PYTHON else _do_not_skip skipWithCExtensions = unittest.skip if not sysinfo.PURE_PYTHON else _do_not_skip skipOnLibuv = _do_not_skip skipOnLibuvOnWin = _do_not_skip skipOnLibuvOnCI = _do_not_skip skipOnLibuvOnCIOnPyPy = _do_not_skip skipOnLibuvOnPyPyOnWin = _do_not_skip skipOnLibev = _do_not_skip if sysinfo.WIN: skipOnWindows = unittest.skip if sysinfo.OSX: skipOnMac = unittest.skip if sysinfo.RUNNING_ON_APPVEYOR: # See comments scattered around about timeouts and the timer # resolution available on appveyor (lots of jitter). this # seems worse with the 62-bit builds. # Note that we skip/adjust these tests only on AppVeyor, not # win32---we don't think there's gevent related problems but # environment related problems. These can be tested and debugged # separately on windows in a more stable environment. skipOnAppVeyor = unittest.skip if sysinfo.RUNNING_ON_CI: skipOnCI = unittest.skip if sysinfo.OSX: skipOnMacOnCI = unittest.skip if sysinfo.RUNNING_ON_MANYLINUX: skipOnManylinux = unittest.skip if sysinfo.PYPY: skipOnPyPy = unittest.skip if sysinfo.RUNNING_ON_CI: skipOnPyPyOnCI = unittest.skip if sysinfo.WIN: skipOnPyPyOnWindows = unittest.skip if sysinfo.PYPY3: skipOnPyPy3 = unittest.skip if sysinfo.RUNNING_ON_CI: # Same as above, for PyPy3.3-5.5-alpha and 3.5-5.7.1-beta and 3.5-5.8 skipOnPyPy3OnCI = unittest.skip skipUnderCoverage = unittest.skip if sysinfo.RUN_COVERAGE else _do_not_skip skipIf = unittest.skipIf skipUnless = unittest.skipUnless _has_psutil_process = None def _check_psutil(): global _has_psutil_process if _has_psutil_process is None: _has_psutil_process = sysinfo.get_this_psutil_process() is not None return _has_psutil_process def _make_runtime_skip_decorator(reason, predicate): def decorator(test_item): if not isinstance(test_item, type): f = test_item @functools.wraps(test_item) def skip_wrapper(*args, **kwargs): if not predicate(): raise unittest.SkipTest(reason) return f(*args, **kwargs) test_item = skip_wrapper else: # given a class, override setUp() to skip it. # # Internally, unittest uses two flags on the class to do this: # __unittest_skip__ and __unittest_skip_why__. It *appears* # these are evaluated for each method in the test, so we can safely # change them at runtime. **This isn't documented.** # # If they are set before execution begins, then the class setUpClass # and tearDownClass are skipped. So changing them at runtime could result # in something being set up but not torn down. It is substantially # faster, though, to set them. 
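            # Illustrative usage sketch (added commentary, not in the original
            # source) for the decorators built on this helper, such as
            # skipWithoutPSUtil and skipWithoutResource defined below. The
            # predicate is evaluated when the test runs, not at import time:
            #
            #   @skipWithoutPSUtil("needs process statistics")
            #   def test_uses_psutil(self):
            #       ...
            #
            #   @skipWithoutResource('network')
            #   class ExternalNetworkTests(unittest.TestCase):
            #       ...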
base = test_item base_setUp = base.setUp @functools.wraps(test_item) def setUp(self): if not predicate(): base.__unittest_skip__ = True base.__unittest_skip_why__ = reason raise unittest.SkipTest(reason) base_setUp(self) base.setUp = setUp return test_item return decorator def skipWithoutPSUtil(reason): reason = "psutil not available: " + reason # Defer the check until runtime to avoid imports return _make_runtime_skip_decorator(reason, _check_psutil) if sysinfo.LIBUV: skipOnLibuv = unittest.skip if sysinfo.RUNNING_ON_CI: skipOnLibuvOnCI = unittest.skip if sysinfo.PYPY: skipOnLibuvOnCIOnPyPy = unittest.skip if sysinfo.WIN: skipOnLibuvOnWin = unittest.skip if sysinfo.PYPY: skipOnLibuvOnPyPyOnWin = unittest.skip else: skipOnLibev = unittest.skip def skipWithoutResource(resource, reason=''): requires = 'Requires resource %r' % (resource,) if not reason: reason = requires else: reason = reason + ' (' + requires + ')' # Defer until runtime; resources are established as part # of test startup. def predicate(): # This is easily cached if needed from . import resources return resources.ensure_setup_resources().is_resource_enabled(resource) return _make_runtime_skip_decorator(reason, predicate) def skipWithoutExternalNetwork(reason=''): # Use to decorate test functions or classes that # need access to external network resources (e.g., DNS, HTTP servers, etc) # # Important: If you use this on classes, you must not use the # two-argument form of super() return skipWithoutResource('network', reason) gevent-24.11.1/src/gevent/testing/sockets.py000066400000000000000000000043551471441230600207420ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
from __future__ import absolute_import, print_function, division from .params import DEFAULT_BIND_ADDR_TUPLE def bind_and_listen(sock, address=DEFAULT_BIND_ADDR_TUPLE, backlog=50, reuse_addr=True): from socket import SOL_SOCKET, SO_REUSEADDR, error if reuse_addr: try: sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, sock.getsockopt(SOL_SOCKET, SO_REUSEADDR) | 1) except error: pass sock.bind(address) if backlog is not None: # udp sock.listen(backlog) def tcp_listener(address=DEFAULT_BIND_ADDR_TUPLE, backlog=50, reuse_addr=True): """A shortcut to create a TCP socket, bind it and put it into listening state.""" from gevent import socket sock = socket.socket() bind_and_listen(sock, address, backlog=backlog, reuse_addr=reuse_addr) return sock def udp_listener(address=DEFAULT_BIND_ADDR_TUPLE, reuse_addr=True): """A shortcut to create a UDF socket, bind it and put it into listening state.""" from gevent import socket sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) bind_and_listen(sock, address, backlog=None, reuse_addr=reuse_addr) return sock gevent-24.11.1/src/gevent/testing/support.py000066400000000000000000000110271471441230600207750ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ A re-export of the support module from Python's test package, with some version compatibility shims and overrides. The manylinux docker images do not include test.support at all, for space reasons, so we need to be vaguely functional to run tests in that environment. """ import sys # Proxy through, so that changes to this module reflect in the real # module too. (In 3.7, this is natively supported with __getattr__ at # module scope.) This breaks static analysis (pylint), so we configure # pylint to ignore this module. class _Default(object): # A descriptor-like object that will # only be used if the actual stdlib module # doesn't have the value. 
def __init__(self, value): self.value = value class _ModuleProxy(object): __slots__ = ('_this_mod', '_stdlib_support') def __init__(self): self._this_mod = sys.modules[__name__] self._stdlib_support = self def __get_stdlib_support(self): if self._stdlib_support is self: try: from test import support as stdlib_support except ImportError: stdlib_support = None self._stdlib_support = stdlib_support return self._stdlib_support def __getattr__(self, name): try: local_val = getattr(self._this_mod, name) except AttributeError: return getattr(self.__get_stdlib_support(), name) if isinstance(local_val, _Default): try: return getattr(self.__get_stdlib_support(), name) except AttributeError: return local_val.value return local_val def __setattr__(self, name, value): if name in _ModuleProxy.__slots__: super(_ModuleProxy, self).__setattr__(name, value) return # Setting it deletes it from this module, so that # we then continue to fall through to the original module. try: setattr(self.__get_stdlib_support(), name, value) except AttributeError: setattr(self._this_mod, name, value) else: try: delattr(self._this_mod, name) except AttributeError: pass def __repr__(self): return repr(self._this_mod) HOSTv6 = _Default('::1') HOST = _Default("localhost") HOSTv4 = _Default("127.0.0.1") verbose = _Default(False) @_Default def is_resource_enabled(_): return False @_Default def bind_port(sock, host=None): # pragma: no cover import socket host = host if host is not None else sys.modules[__name__].HOST if sock.family == socket.AF_INET and sock.type == socket.SOCK_STREAM: if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'): sock.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1) # pylint:disable=no-member sock.bind((host, 0)) port = sock.getsockname()[1] return port @_Default def find_unused_port(family=None, socktype=None): # pragma: no cover import socket family = family or socket.AF_INET socktype = socktype or socket.SOCK_STREAM tempsock = socket.socket(family, socktype) try: port = sys.modules[__name__].bind_port(tempsock) finally: tempsock.close() del tempsock return port @_Default def threading_setup(): return [] @_Default def threading_cleanup(*_): pass @_Default def reap_children(): pass # Set by resources.setup_resources() gevent_has_setup_resources = False sys.modules[__name__] = _ModuleProxy() gevent-24.11.1/src/gevent/testing/switching.py000066400000000000000000000052241471441230600212620ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
from __future__ import absolute_import, print_function, division from functools import wraps from gevent.hub import _get_hub from .hub import QuietHub from .patched_tests_setup import get_switch_expected def wrap_switch_count_check(method): @wraps(method) def wrapper(self, *args, **kwargs): initial_switch_count = getattr(_get_hub(), 'switch_count', None) self.switch_expected = getattr(self, 'switch_expected', True) if initial_switch_count is not None: fullname = getattr(self, 'fullname', None) if self.switch_expected == 'default' and fullname: self.switch_expected = get_switch_expected(fullname) result = method(self, *args, **kwargs) if initial_switch_count is not None and self.switch_expected is not None: switch_count = _get_hub().switch_count - initial_switch_count if self.switch_expected is True: assert switch_count >= 0 if not switch_count: raise AssertionError('%s did not switch' % fullname) elif self.switch_expected is False: if switch_count: raise AssertionError('%s switched but not expected to' % fullname) else: raise AssertionError('Invalid value for switch_expected: %r' % (self.switch_expected, )) return result return wrapper class CountingHub(QuietHub): switch_count = 0 def switch(self, *args): # pylint:disable=arguments-differ self.switch_count += 1 return QuietHub.switch(self, *args) gevent-24.11.1/src/gevent/testing/sysinfo.py000066400000000000000000000200231471441230600207470ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import errno import os import sys import gevent.core from gevent import _compat as gsysinfo VERBOSE = sys.argv.count('-v') > 1 # Python implementations PYPY = gsysinfo.PYPY CPYTHON = not PYPY # Platform/operating system WIN = gsysinfo.WIN LINUX = gsysinfo.LINUX OSX = gsysinfo.OSX PURE_PYTHON = gsysinfo.PURE_PYTHON get_this_psutil_process = gsysinfo.get_this_psutil_process # XXX: Formalize this better LIBUV = 'libuv' in gevent.core.loop.__module__ # pylint:disable=no-member CFFI_BACKEND = PYPY or LIBUV or 'cffi' in os.getenv('GEVENT_LOOP', '') if '--debug-greentest' in sys.argv: sys.argv.remove('--debug-greentest') DEBUG = True else: DEBUG = False RUN_LEAKCHECKS = os.getenv('GEVENTTEST_LEAKCHECK') RUN_COVERAGE = os.getenv("COVERAGE_PROCESS_START") or os.getenv("GEVENTTEST_COVERAGE") # Generally, ignore the portions that are only implemented # on particular platforms; they generally contain partial # implementations completed in different modules. 
PLATFORM_SPECIFIC_SUFFIXES = ('2', '279', '3') if WIN: PLATFORM_SPECIFIC_SUFFIXES += ('posix',) PY2 = False # Never again PY3 = True PY35 = None PY36 = None PY37 = None PY38 = None PY39 = None PY39_EXACTLY = None PY310 = None PY311 = None PY312 = None PY313 = None NON_APPLICABLE_SUFFIXES = () if sys.version_info[0] == 3: # Python 3 NON_APPLICABLE_SUFFIXES += ('2', '279') PY2 = False PY3 = True if sys.version_info[1] >= 5: PY35 = True if sys.version_info[1] >= 6: PY36 = True if sys.version_info[1] >= 7: PY37 = True if sys.version_info[1] >= 8: PY38 = True if sys.version_info[1] >= 9: PY39 = True if sys.version_info[:2] == (3, 9): PY39_EXACTLY = True if sys.version_info[1] >= 10: PY310 = True if sys.version_info[1] >= 11: PY311 = True if sys.version_info[1] >= 12: PY312 = True if sys.version_info[1] >= 13: PY313 = True else: # pragma: no cover # Python 4? raise ImportError('Unsupported major python version') PYPY3 = PYPY and PY3 if WIN: NON_APPLICABLE_SUFFIXES += ("posix",) # This is intimately tied to FileObjectPosix NON_APPLICABLE_SUFFIXES += ("fileobject2",) SHARED_OBJECT_EXTENSION = ".pyd" else: SHARED_OBJECT_EXTENSION = ".so" # We define GitHub actions to be similar to travis RUNNING_ON_GITHUB_ACTIONS = os.environ.get('GITHUB_ACTIONS') RUNNING_ON_TRAVIS = os.environ.get('TRAVIS') or RUNNING_ON_GITHUB_ACTIONS RUNNING_ON_APPVEYOR = os.environ.get('APPVEYOR') RUNNING_ON_CI = RUNNING_ON_TRAVIS or RUNNING_ON_APPVEYOR RUNNING_ON_MANYLINUX = os.environ.get('GEVENT_MANYLINUX') # I'm not sure how to reliably auto-detect this, without # importing platform, something we don't want to do. RUNNING_ON_MUSLLINUX = 'musllinux' in os.environ.get('GEVENT_MANYLINUX_NAME', '') if RUNNING_ON_APPVEYOR: # We can't exec corecext on appveyor if we haven't run setup.py in # 'develop' mode (i.e., we install) NON_APPLICABLE_SUFFIXES += ('corecext',) EXPECT_POOR_TIMER_RESOLUTION = ( PYPY3 # Really, this is probably only in VMs. But that's all I test # Windows with. or WIN or (LIBUV and PYPY) or RUN_COVERAGE or (OSX and RUNNING_ON_CI) ) CONN_ABORTED_ERRORS = [] def _make_socket_errnos(*names): result = [] for name in names: try: x = getattr(errno, name) except AttributeError: pass else: result.append(x) return frozenset(result) CONN_ABORTED_ERRORS = _make_socket_errnos('WSAECONNABORTED', 'ECONNRESET') CONN_REFUSED_ERRORS = _make_socket_errnos('WSAECONNREFUSED', 'ECONNREFUSED') RESOLVER_ARES = os.getenv('GEVENT_RESOLVER') == 'ares' RESOLVER_DNSPYTHON = os.getenv('GEVENT_RESOLVER') == 'dnspython' RESOLVER_NOT_SYSTEM = RESOLVER_ARES or RESOLVER_DNSPYTHON def get_python_version(): """ Return a string of the simple python version, such as '3.8.0b4'. Handles alpha, beta, release candidate, and final releases. 
""" version = '%s.%s.%s' % sys.version_info[:3] if sys.version_info[3] == 'alpha': version += 'a%s' % sys.version_info[4] elif sys.version_info[3] == 'beta': version += 'b%s' % sys.version_info[4] elif sys.version_info[3] == 'candidate': version += 'rc%s' % sys.version_info[4] return version def _parse_version(ver_str): try: from packaging.version import Version # InvalidVersion is a type of ValueError except ImportError: import warnings warnings.warn('packaging.version not available; assuming no advanced Linux backends') raise ValueError try: return Version(ver_str) except ValueError: import warnings warnings.warn('Unable to parse version %s' % (ver_str,)) raise def _check_linux_version_at_least(major, minor, error_kind): # pylint:disable=too-many-return-statements # ^ Yeah, but this is the most linear and simple way to # write this. from platform import system if system() != 'Linux': return False from platform import release as _release release = _release() try: # Linux versions like '6.8.0-1014-azure' cannot be parsed # by packaging.version.Version, and distutils.LooseVersion, which # did handle that, is deprecated. Neither module is guaranteed to be available # anyway, so do the best we can manually. ver_strings = (release or '0').split('.', 2) if not ver_strings or int(ver_strings[0]) < major: # no way. return False if int(ver_strings[0]) > major: # Way newer! return True assert major == int(ver_strings[0]) # Exactly the major if len(ver_strings) < 2: # no minor version, assume no return False if int(ver_strings[1]) < minor: return False assert int(ver_strings[1]) >= minor, (ver_strings[1], minor) return True except AssertionError: raise except Exception: # pylint:disable=broad-exception-caught import warnings warnings.warn('Unable to parse version %r; assuming no %s support' % ( release, error_kind )) return False def libev_supports_linux_aio(): # libev requires kernel 4.19 or above to be able to support # linux AIO. It can still be compiled in, but will fail to create # the loop at runtime. return _check_linux_version_at_least(4, 19, 'aio') def libev_supports_linux_iouring(): # libev requires kernel XXX to be able to support linux io_uring. # It fails with the kernel in fedora rawhide (4.19.76) but # works (doesn't fail catastrophically when asked to create one) # with kernel 5.3.0 (Ubuntu Bionic) return _check_linux_version_at_least(5, 3, 'iouring') def resolver_dnspython_available(): # Try hard not to leave around junk we don't have to. from importlib import metadata try: metadata.distribution('dnspython') except metadata.PackageNotFoundError: return False return True gevent-24.11.1/src/gevent/testing/testcase.py000066400000000000000000000442521471441230600211020ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. from __future__ import absolute_import, print_function, division import sys import os.path from contextlib import contextmanager from unittest import TestCase as BaseTestCase from functools import wraps import gevent from gevent._util import LazyOnClass from gevent._compat import perf_counter from gevent._compat import get_clock_info from gevent._hub_local import get_hub_if_exists from . import sysinfo from . import params from . import leakcheck from . import errorhandler from . import flaky from .patched_tests_setup import get_switch_expected class TimeAssertMixin(object): @flaky.reraises_flaky_timeout() def assertTimeoutAlmostEqual(self, first, second, places=None, msg=None, delta=None): try: self.assertAlmostEqual(first, second, places=places, msg=msg, delta=delta) except AssertionError: flaky.reraiseFlakyTestTimeout() if sysinfo.EXPECT_POOR_TIMER_RESOLUTION: # pylint:disable=unused-argument def assertTimeWithinRange(self, time_taken, min_time, max_time): return else: def assertTimeWithinRange(self, time_taken, min_time, max_time): self.assertLessEqual(time_taken, max_time) self.assertGreaterEqual(time_taken, min_time) @contextmanager def runs_in_given_time(self, expected, fuzzy=None, min_time=None): if fuzzy is None: if sysinfo.EXPECT_POOR_TIMER_RESOLUTION or sysinfo.LIBUV: # The noted timer jitter issues on appveyor/pypy3 fuzzy = expected * 5.0 else: fuzzy = expected / 2.0 min_time = min_time if min_time is not None else expected - fuzzy max_time = expected + fuzzy start = perf_counter() yield (min_time, max_time) elapsed = perf_counter() - start try: self.assertTrue( min_time <= elapsed <= max_time, 'Expected: %r; elapsed: %r; min: %r; max: %r; fuzzy %r; clock_info: %s' % ( expected, elapsed, min_time, max_time, fuzzy, get_clock_info('perf_counter') )) except AssertionError: flaky.reraiseFlakyTestRaceCondition() def runs_in_no_time( self, fuzzy=(0.01 if not sysinfo.EXPECT_POOR_TIMER_RESOLUTION and not sysinfo.LIBUV else 1.0)): return self.runs_in_given_time(0.0, fuzzy) class GreenletAssertMixin(object): """Assertions related to greenlets.""" def assert_greenlet_ready(self, g): self.assertTrue(g.dead, g) self.assertTrue(g.ready(), g) self.assertFalse(g, g) def assert_greenlet_not_ready(self, g): self.assertFalse(g.dead, g) self.assertFalse(g.ready(), g) def assert_greenlet_spawned(self, g): self.assertTrue(g.started, g) self.assertFalse(g.dead, g) # No difference between spawned and switched-to once assert_greenlet_started = assert_greenlet_spawned def assert_greenlet_finished(self, g): self.assertFalse(g.started, g) self.assertTrue(g.dead, g) class StringAssertMixin(object): """ Assertions dealing with strings. 
""" @LazyOnClass def HEX_NUM_RE(self): import re return re.compile('-?0x[0123456789abcdef]+L?', re.I) def normalize_addr(self, s, replace='X'): # https://github.com/PyCQA/pylint/issues/1127 return self.HEX_NUM_RE.sub(replace, s) # pylint:disable=no-member def normalize_module(self, s, module=None, replace='module'): if module is None: module = type(self).__module__ return s.replace(module, replace) def normalize(self, s): return self.normalize_module(self.normalize_addr(s)) def assert_nstr_endswith(self, o, val): s = str(o) n = self.normalize(s) self.assertTrue(n.endswith(val), (s, n)) def assert_nstr_startswith(self, o, val): s = str(o) n = self.normalize(s) self.assertTrue(n.startswith(val), (s, n)) class TestTimeout(gevent.Timeout): _expire_info = '' def __init__(self, timeout, method='Not Given'): gevent.Timeout.__init__( self, timeout, '%r: test timed out (set class __timeout__ to increase)\n' % (method,), ref=False ) def _on_expiration(self, prev_greenlet, ex): from gevent.util import format_run_info loop = gevent.get_hub().loop debug_info = 'N/A' if hasattr(loop, 'debug'): debug_info = [str(s) for s in loop.debug()] run_info = format_run_info() self._expire_info = 'Loop Debug:\n%s\nRun Info:\n%s' % ( '\n'.join(debug_info), '\n'.join(run_info) ) gevent.Timeout._on_expiration(self, prev_greenlet, ex) def __str__(self): s = gevent.Timeout.__str__(self) s += self._expire_info return s def _wrap_timeout(timeout, method): if timeout is None: return method @wraps(method) def timeout_wrapper(self, *args, **kwargs): with TestTimeout(timeout, method): return method(self, *args, **kwargs) return timeout_wrapper def _get_class_attr(classDict, bases, attr, default=AttributeError): NONE = object() value = classDict.get(attr, NONE) if value is not NONE: return value for base in bases: value = getattr(base, attr, NONE) if value is not NONE: return value if default is AttributeError: raise AttributeError('Attribute %r not found\n%s\n%s\n' % (attr, classDict, bases)) return default class TestCaseMetaClass(type): # wrap each test method with # a) timeout check # b) fatal error check # c) restore the hub's error handler (see expect_one_error) # d) totalrefcount check def __new__(mcs, classname, bases, classDict): # pylint and pep8 fight over what this should be called (mcs or cls). # pylint gets it right, but we cant scope disable pep8, so we go with # its convention. # pylint: disable=bad-mcs-classmethod-argument timeout = classDict.get('__timeout__', 'NONE') if timeout == 'NONE': timeout = getattr(bases[0], '__timeout__', None) if sysinfo.RUN_LEAKCHECKS and timeout is not None: timeout *= 6 check_totalrefcount = _get_class_attr(classDict, bases, 'check_totalrefcount', True) error_fatal = _get_class_attr(classDict, bases, 'error_fatal', True) uses_handle_error = _get_class_attr(classDict, bases, 'uses_handle_error', True) # Python 3: must copy, we mutate the classDict. Interestingly enough, # it doesn't actually error out, but under 3.6 we wind up wrapping # and re-wrapping the same items over and over and over. for key, value in list(classDict.items()): if key.startswith('test') and callable(value): classDict.pop(key) # XXX: When did we stop doing this? 
#value = wrap_switch_count_check(value) value = _wrap_timeout(timeout, value) error_fatal = getattr(value, 'error_fatal', error_fatal) if error_fatal: value = errorhandler.wrap_error_fatal(value) if uses_handle_error: value = errorhandler.wrap_restore_handle_error(value) if check_totalrefcount and sysinfo.RUN_LEAKCHECKS: value = leakcheck.wrap_refcount(value) classDict[key] = value return type.__new__(mcs, classname, bases, classDict) def _noop(): return class SubscriberCleanupMixin(object): def setUp(self): super(SubscriberCleanupMixin, self).setUp() from gevent import events self.__old_subscribers = events.subscribers[:] def addSubscriber(self, sub): from gevent import events events.subscribers.append(sub) def tearDown(self): from gevent import events events.subscribers[:] = self.__old_subscribers super(SubscriberCleanupMixin, self).tearDown() class TestCase( # TestCaseMetaClass("NewBase", # (SubscriberCleanupMixin, # TimeAssertMixin, # GreenletAssertMixin, # StringAssertMixin, # BaseTestCase,), # {})): SubscriberCleanupMixin, TimeAssertMixin, GreenletAssertMixin, StringAssertMixin, BaseTestCase, metaclass=TestCaseMetaClass ): __timeout__ = params.LOCAL_TIMEOUT if not sysinfo.RUNNING_ON_CI else params.CI_TIMEOUT switch_expected = 'default' #: Set this to true to cause errors that get reported to the hub to #: always get propagated to the main greenlet. This can be done at the #: class or method level. #: .. caution:: This can hide errors and make it look like exceptions #: are propagated even if they're not. error_fatal = True uses_handle_error = True close_on_teardown = () # This is really used by the SubscriberCleanupMixin __old_subscribers = () # pylint:disable=unused-private-member def run(self, *args, **kwargs): # pylint:disable=signature-differs if self.switch_expected == 'default': self.switch_expected = get_switch_expected(self.fullname) return super(TestCase, self).run(*args, **kwargs) def setUp(self): super(TestCase, self).setUp() # Especially if we're running in leakcheck mode, where # the same test gets executed repeatedly, we need to update the # current time. Tests don't always go through the full event loop, # so that doesn't always happen. test__pool.py:TestPoolYYY.test_async # tends to show timeouts that are too short if we don't. # XXX: Should some core part of the loop call this? hub = get_hub_if_exists() if hub and hub.loop: hub.loop.update_now() self.close_on_teardown = [] self.addCleanup(self._tearDownCloseOnTearDown) def _callTestMethod(self, method): # 3.12 started raising a stupid warning about returning # non-None from ``test_...()`` being deprecated. Since the # test framework never cares about the return value anyway, # this is an utterly pointless annoyance. Override the method # that raises that deprecation. (Are the maintainers planning # to make the return value _mean_ something someday? That's # the only valid reason for them to do this. Answer: No, no # they're not. They're just trying to protect people from # writing broken tests that accidentally turn into generators # or something. Which...if people don't notice their tests # aren't working...well. Now, perhaps this got worse in the # era of asyncio where *everything* is a generator. But that's # not our problem; we have better ways of dealing with the # shortcomings of asyncio, namely, don't use it. 
# https://bugs.python.org/issue41322) method() def tearDown(self): if getattr(self, 'skipTearDown', False): del self.close_on_teardown[:] return cleanup = getattr(self, 'cleanup', _noop) cleanup() self._error = self._none super(TestCase, self).tearDown() def _tearDownCloseOnTearDown(self): while self.close_on_teardown: x = self.close_on_teardown.pop() close = getattr(x, 'close', x) try: close() except Exception: # pylint:disable=broad-except pass def _close_on_teardown(self, resource): """ *resource* either has a ``close`` method, or is a callable. """ self.close_on_teardown.append(resource) return resource @property def testname(self): return getattr(self, '_testMethodName', '') or getattr(self, '_TestCase__testMethodName') @property def testcasename(self): return self.__class__.__name__ + '.' + self.testname @property def modulename(self): return os.path.basename(sys.modules[self.__class__.__module__].__file__).rsplit('.', 1)[0] @property def fullname(self): return os.path.splitext(os.path.basename(self.modulename))[0] + '.' + self.testcasename _none = (None, None, None) # (context, kind, value) _error = _none def expect_one_error(self): self.assertEqual(self._error, self._none) gevent.get_hub().handle_error = self._store_error def _store_error(self, where, t, value, tb): del tb if self._error != self._none: gevent.get_hub().parent.throw(t, value) else: self._error = (where, t, value) def peek_error(self): return self._error def get_error(self): try: return self._error finally: self._error = self._none def assert_error(self, kind=None, value=None, error=None, where_type=None): if error is None: error = self.get_error() econtext, ekind, evalue = error if kind is not None: self.assertIsInstance(kind, type) self.assertIsNotNone( ekind, "Error must not be none %r" % (error,)) assert issubclass(ekind, kind), error if value is not None: if isinstance(value, str): self.assertEqual(str(evalue), value) else: self.assertIs(evalue, value) if where_type is not None: self.assertIsInstance(econtext, where_type) return error def assertMonkeyPatchedFuncSignatures(self, mod_name, func_names=(), exclude=()): # If inspect.getfullargspec is not available, # We use inspect.getargspec because it's the only thing available # in Python 2.7, but it is deprecated # pylint:disable=deprecated-method,too-many-locals import inspect import warnings from gevent.monkey import get_original # XXX: Very similar to gevent.monkey.patch_module. Should refactor? gevent_module = getattr(__import__('gevent.' + mod_name), mod_name) module_name = getattr(gevent_module, '__target__', mod_name) funcs_given = True if not func_names: funcs_given = False func_names = getattr(gevent_module, '__implements__') for func_name in func_names: if func_name in exclude: continue gevent_func = getattr(gevent_module, func_name) if not inspect.isfunction(gevent_func) and not funcs_given: continue func = get_original(module_name, func_name) try: with warnings.catch_warnings(): getfullargspec = inspect.getfullargspec gevent_sig = getfullargspec(gevent_func) sig = getfullargspec(func) except TypeError: if funcs_given: raise # Can't do this one. If they specifically asked for it, # it's an error, otherwise it's not. # Python 3 can check a lot more than Python 2 can. continue self.assertEqual(sig.args, gevent_sig.args, func_name) # The next two might not actually matter? 
self.assertEqual(sig.varargs, gevent_sig.varargs, func_name) self.assertEqual(sig.defaults, gevent_sig.defaults, func_name) if hasattr(sig, 'keywords'): # the old version msg = (func_name, sig.keywords, gevent_sig.keywords) try: self.assertEqual(sig.keywords, gevent_sig.keywords, msg) except AssertionError: # Ok, if we take `kwargs` and the original function doesn't, # that's OK. We have to do that as a compatibility hack sometimes to # work across multiple python versions. self.assertIsNone(sig.keywords, msg) self.assertEqual('kwargs', gevent_sig.keywords) else: # The new hotness. Unfortunately, we can't actually check these things # until we drop Python 2 support from the shared code. The only known place # this is a problem is python 3.11 socket.create_connection(), which we manually # ignore. So the checks all pass as is. self.assertEqual(sig.kwonlyargs, gevent_sig.kwonlyargs, func_name) self.assertEqual(sig.kwonlydefaults, gevent_sig.kwonlydefaults, func_name) # Should deal with others: https://docs.python.org/3/library/inspect.html#inspect.getfullargspec def assertEqualFlakyRaceCondition(self, a, b): try: self.assertEqual(a, b) except AssertionError: flaky.reraiseFlakyTestRaceCondition() def assertStartsWith(self, it, has_prefix): self.assertTrue(it.startswith(has_prefix), (it, has_prefix)) def assertNotMonkeyPatched(self): from gevent import monkey self.assertFalse(monkey.is_anything_patched()) gevent-24.11.1/src/gevent/testing/testrunner.py000066400000000000000000001072071471441230600215000ustar00rootroot00000000000000# -*- coding: utf-8 -*- import re import sys import os import glob import operator import traceback import importlib from contextlib import contextmanager from datetime import timedelta from multiprocessing.pool import ThreadPool from multiprocessing import cpu_count from gevent._util import Lazy from . import util from .resources import parse_resources from .resources import setup_resources from .resources import unparse_resources from .sysinfo import RUNNING_ON_CI from .sysinfo import PYPY from .sysinfo import PY2 from .sysinfo import RESOLVER_ARES from .sysinfo import RUN_LEAKCHECKS from .sysinfo import OSX from . import six from . import travis # Import this while we're probably single-threaded/single-processed # to try to avoid issues with PyPy 5.10. 
# See https://bitbucket.org/pypy/pypy/issues/2769/systemerror-unexpected-internal-exception try: __import__('_testcapi') except (ImportError, OSError): # This can raise a wide variety of errors pass TIMEOUT = 100 # seconds AVAIL_NWORKERS = cpu_count() - 1 DEFAULT_NWORKERS = int(os.environ.get('NWORKERS') or max(AVAIL_NWORKERS, 4)) if DEFAULT_NWORKERS > 15: DEFAULT_NWORKERS = 10 if RUN_LEAKCHECKS: # Capturing the stats takes time, and we run each # test at least twice TIMEOUT = 200 DEFAULT_RUN_OPTIONS = { 'timeout': TIMEOUT } if RUNNING_ON_CI: # Too many and we get spurious timeouts DEFAULT_NWORKERS = 4 if not OSX else 2 def _package_relative_filename(filename, package): if not os.path.isfile(filename) and package: # Ok, try to locate it as a module in the package package_dir = _dir_from_package_name(package) return os.path.join(package_dir, filename) return filename def _dir_from_package_name(package): package_mod = importlib.import_module(package) package_dir = os.path.dirname(package_mod.__file__) return package_dir class ResultCollector(object): def __init__(self): self.total = 0 self.failed = {} self.passed = {} self.total_cases = 0 self.total_skipped = 0 # Every RunResult reported: failed, passed, rerun self._all_results = [] self.reran = {} def __iadd__(self, result): self._all_results.append(result) if not result: self.failed[result.name] = result #[cmd, kwargs] else: self.passed[result.name] = True self.total_cases += result.run_count self.total_skipped += result.skipped_count return self def __ilshift__(self, result): """ collector <<= result Stores the result, but does not count it towards the number of cases run, skipped, passed or failed. """ self._all_results.append(result) self.reran[result.name] = result return self @property def longest_running_tests(self): """ A new list of RunResult objects, sorted from longest running to shortest running. """ return sorted(self._all_results, key=operator.attrgetter('run_duration'), reverse=True) class FailFast(Exception): pass class Runner(object): TIME_WAIT_REAP = 0.1 TIME_WAIT_SPAWN = 0.05 def __init__(self, tests, *, allowed_return_codes=(), configured_failing_tests=(), failfast=False, quiet=False, configured_run_alone_tests=(), worker_count=DEFAULT_NWORKERS, second_chance=False): """ :keyword allowed_return_codes: Return codes other than 0 that are counted as a success. Needed because some versions of Python give ``unittest`` weird return codes. :keyword quiet: Set to True or False to explicitly choose. Set to `None` to use the default, which may come from the environment variable ``GEVENTTEST_QUIET``. 
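        A rough usage sketch (not a doctest; ``tests`` is a sequence of
        ``(cmd, options)`` pairs such as `Discovery` produces, and the keyword
        values mirror what ``main()`` further down passes)::

            runner = Runner(
                tests,
                configured_failing_tests=FAILING_TESTS,
                configured_run_alone_tests=RUN_ALONE,
                worker_count=DEFAULT_NWORKERS,
            )
            runner()  # runs everything in parallel, then reports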
""" self._tests = tests self._configured_failing_tests = configured_failing_tests self._quiet = quiet self._configured_run_alone_tests = configured_run_alone_tests assert not (failfast and second_chance) self._failfast = failfast self._second_chance = second_chance self.results = ResultCollector() self.results.total = len(self._tests) self._running_jobs = [] self._worker_count = min(len(tests), worker_count) or 1 self._allowed_return_codes = allowed_return_codes def _run_one(self, cmd, **kwargs): kwargs['allowed_return_codes'] = self._allowed_return_codes if self._quiet is not None: kwargs['quiet'] = self._quiet result = util.run(cmd, **kwargs) if not result and self._second_chance: self.results <<= result util.log("> %s", result.name, color='warning') result = util.run(cmd, **kwargs) if not result and self._failfast: # Under Python 3.9 (maybe older versions?), raising the # SystemExit here (a background thread belonging to the # pool) doesn't seem to work well. It gets stuck waiting # for a lock? The job never shows up as finished. raise FailFast(cmd) self.results += result def _reap(self): "Clean up the list of running jobs, returning how many are still outstanding." for r in self._running_jobs[:]: if not r.ready(): continue if r.successful(): self._running_jobs.remove(r) else: r.get() sys.exit('Internal error in testrunner.py: %r' % (r, )) return len(self._running_jobs) def _reap_all(self): util.log("Reaping %d jobs", len(self._running_jobs), color="debug") while self._running_jobs: if not self._reap(): break util.sleep(self.TIME_WAIT_REAP) def _spawn(self, pool, cmd, options): while True: if self._reap() < self._worker_count: job = pool.apply_async(self._run_one, (cmd, ), options or {}) self._running_jobs.append(job) return util.sleep(self.TIME_WAIT_SPAWN) def __call__(self): util.log("Running tests in parallel with concurrency %s %s." % ( self._worker_count, util._colorize('number', '(concurrency available: %d)' % AVAIL_NWORKERS) ),) # Setting global state, in theory we can be used multiple times. # This is fine as long as we are single threaded and call these # sequentially. util.BUFFER_OUTPUT = self._worker_count > 1 or self._quiet start = util.perf_counter() try: self._run_tests() except KeyboardInterrupt: self._report(util.perf_counter() - start, exit=False) util.log('(partial results)\n') raise except: traceback.print_exc() raise self._reap_all() self._report(util.perf_counter() - start, exit=True) def _run_tests(self): "Runs the tests, produces no report." run_alone = [] tests = self._tests pool = ThreadPool(self._worker_count) try: for cmd, options in tests: options = options or {} if matches(self._configured_run_alone_tests, cmd): run_alone.append((cmd, options)) else: self._spawn(pool, cmd, options) pool.close() pool.join() if run_alone: util.log("Running tests marked standalone") for cmd, options in run_alone: self._run_one(cmd, **options) except KeyboardInterrupt: try: util.log('Waiting for currently running to finish...') self._reap_all() except KeyboardInterrupt: pool.terminate() raise except: pool.terminate() raise def _report(self, elapsed_time, exit=False): results = self.results report( results, exit=exit, took=elapsed_time, configured_failing_tests=self._configured_failing_tests, ) class TravisFoldingRunner(object): def __init__(self, runner, travis_fold_msg): self._runner = runner self._travis_fold_msg = travis_fold_msg self._travis_fold_name = str(int(util.perf_counter())) # A zope-style acquisition proxy would be convenient here. 
run_tests = runner._run_tests def _run_tests(): self._begin_fold() try: run_tests() finally: self._end_fold() runner._run_tests = _run_tests def _begin_fold(self): travis.fold_start(self._travis_fold_name, self._travis_fold_msg) def _end_fold(self): travis.fold_end(self._travis_fold_name) def __call__(self): return self._runner() class Discovery(object): package_dir = None package = None def __init__( self, tests=None, ignore_files=None, ignored=(), coverage=False, package=None, config=None, allow_combine=True, ): self.config = config or {} self.ignore = set(ignored or ()) self.tests = tests self.configured_test_options = config.get('TEST_FILE_OPTIONS', set()) self.allow_combine = allow_combine if ignore_files: ignore_files = ignore_files.split(',') for f in ignore_files: self.ignore.update(set(load_list_from_file(f, package))) if coverage: self.ignore.update(config.get('IGNORE_COVERAGE', set())) if package: self.package = package self.package_dir = _dir_from_package_name(package) class Discovered(object): def __init__(self, package, configured_test_options, ignore, config, allow_combine): self.orig_dir = os.getcwd() self.configured_run_alone = config['RUN_ALONE'] self.configured_failing_tests = config['FAILING_TESTS'] self.package = package self.configured_test_options = configured_test_options self.allow_combine = allow_combine self.ignore = ignore self.to_import = [] self.std_monkey_patch_files = [] self.no_monkey_patch_files = [] self.commands = [] @staticmethod def __makes_simple_monkey_patch( contents, _patch_present=re.compile(br'[^#].*patch_all\(\)'), _patch_indented=re.compile(br' .*patch_all\(\)') ): return ( # A non-commented patch_all() call is present bool(_patch_present.search(contents)) # that is not indented (because that implies its not at the top-level, # so some preconditions are being set) and not _patch_indented.search(contents) ) @staticmethod def __file_allows_monkey_combine(contents): return b'testrunner-no-monkey-combine' not in contents @staticmethod def __file_allows_combine(contents): return b'testrunner-no-combine' not in contents @staticmethod def __calls_unittest_main_toplevel( contents, _greentest_main=re.compile(br' greentest.main\(\)'), _unittest_main=re.compile(br' unittest.main\(\)'), _import_main=re.compile(br'from gevent.testing import.*main'), _main=re.compile(br' main\(\)'), ): # TODO: Add a check that this comes in a line directly after # if __name__ == __main__. 
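        # As a rough illustration, each of these (hypothetical) file bodies
        # would be recognized here as invoking a unittest-style main at the
        # top level:
        #
        #   b"if __name__ == '__main__':\n    greentest.main()\n"
        #   b"if __name__ == '__main__':\n    unittest.main()\n"
        #   b"from gevent.testing import main\n...\nif __name__ == '__main__':\n    main()\n"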
return ( _greentest_main.search(contents) or _unittest_main.search(contents) or (_import_main.search(contents) and _main.search(contents)) ) def __has_config(self, filename): return ( RUN_LEAKCHECKS or filename in self.configured_test_options or filename in self.configured_run_alone or matches(self.configured_failing_tests, filename) ) def __can_monkey_combine(self, filename, contents): return ( self.allow_combine and not self.__has_config(filename) and self.__makes_simple_monkey_patch(contents) and self.__file_allows_monkey_combine(contents) and self.__file_allows_combine(contents) and self.__calls_unittest_main_toplevel(contents) ) @staticmethod def __makes_no_monkey_patch(contents, _patch_present=re.compile(br'[^#].*patch_\w*\(')): return not _patch_present.search(contents) def __can_nonmonkey_combine(self, filename, contents): return ( self.allow_combine and not self.__has_config(filename) and self.__makes_no_monkey_patch(contents) and self.__file_allows_combine(contents) and self.__calls_unittest_main_toplevel(contents) ) def __begin_command(self): cmd = [sys.executable, '-u'] # XXX: -X track-resources is broken. This happened when I updated to # PyPy 7.3.2. It started failing to even start inside the virtual environment # with # # debug: OperationError: # debug: operror-type: ImportError # debug: operror-value: No module named traceback # # I don't know if this is PyPy's problem or a problem in virtualenv: # # virtualenv==20.0.35 # virtualenv-clone==0.5.4 # virtualenvwrapper==4.8.4 # # Deferring investigation until I need this... # if PYPY and PY2: # # Doesn't seem to be an env var for this. # # XXX: track-resources is broken in virtual environments # # on 7.3.2. # cmd.extend(('-X', 'track-resources')) return cmd def __add_test(self, qualified_name, filename, contents): if b'TESTRUNNER' in contents: # test__monkey_patching.py # XXX: Rework this to avoid importing. # XXX: Rework this to allow test combining (it could write the files out and return # them directly; we would use 'python -m gevent.monkey --module unittest ...) self.to_import.append(qualified_name) elif self.__can_monkey_combine(filename, contents): self.std_monkey_patch_files.append(qualified_name if self.package else filename) elif self.__can_nonmonkey_combine(filename, contents): self.no_monkey_patch_files.append(qualified_name if self.package else filename) else: # XXX: For simple python module tests, try this with # `runpy.run_module`, very similar to the way we run # things for monkey patching. The idea here is that we # can perform setup ahead of time (e.g., # setup_resources()) in each test without having to do # it manually or force calls or modifications to those # tests. 
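            # A concrete (illustrative) example: with the default package
            # 'gevent.tests', a file test__foo.py that can't be combined ends
            # up in self.commands roughly as
            #   ([sys.executable, '-u', '-mgevent.tests.test__foo'],
            #    {'timeout': TIMEOUT})
            # possibly updated by a per-file TEST_FILE_OPTIONS entry.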
cmd = self.__begin_command() if self.package: # Using a package is the best way to work with coverage 5 # when we specify 'source = ' cmd.append('-m' + qualified_name) else: cmd.append(filename) options = DEFAULT_RUN_OPTIONS.copy() options.update(self.configured_test_options.get(filename, {})) self.commands.append((cmd, options)) @staticmethod def __remove_options(lst): return [x for x in lst if x and not x.startswith('-')] def __expand_imports(self): for qualified_name in self.to_import: module = importlib.import_module(qualified_name) for cmd, options in module.TESTRUNNER(): if self.__remove_options(cmd)[-1] in self.ignore: continue self.commands.append((cmd, options)) del self.to_import[:] def __combine_commands(self, files, group_size=5): if not files: return from itertools import groupby cnt = [0, 0] def make_group(_): if cnt[0] > group_size: cnt[0] = 0 cnt[1] += 1 cnt[0] += 1 return cnt[1] for _, group in groupby(files, make_group): cmd = self.__begin_command() cmd.append('-m') cmd.append('unittest') # cmd.append('-v') for name in group: cmd.append(name) self.commands.insert(0, (cmd, DEFAULT_RUN_OPTIONS.copy())) del files[:] def visit_file(self, filename): # Support either 'gevent.tests.foo' or 'gevent/tests/foo.py' if filename.startswith('gevent.tests'): # XXX: How does this interact with 'package'? Probably not well qualified_name = module_name = filename filename = filename[len('gevent.tests') + 1:] filename = filename.replace('.', os.sep) + '.py' else: module_name = os.path.splitext(filename)[0] qualified_name = self.package + '.' + module_name if self.package else module_name # Also allow just 'foo' as a shortcut for 'gevent.tests.foo' abs_filename = os.path.abspath(filename) if ( not os.path.exists(abs_filename) and not filename.endswith('.py') and os.path.exists(abs_filename + '.py') ): abs_filename += '.py' with open(abs_filename, 'rb') as f: # Some of the test files (e.g., test__socket_dns) are # UTF8 encoded. Depending on the environment, Python 3 may # try to decode those as ASCII, which fails with UnicodeDecodeError. # Thus, be sure to open and compare in binary mode. # Open the absolute path to make errors more clear, # but we can't store the absolute path, our configuration is based on # relative file names. contents = f.read() self.__add_test(qualified_name, filename, contents) def visit_files(self, filenames): for filename in filenames: self.visit_file(filename) with Discovery._in_dir(self.orig_dir): self.__expand_imports() self.__combine_commands(self.std_monkey_patch_files) self.__combine_commands(self.no_monkey_patch_files) @staticmethod @contextmanager def _in_dir(package_dir): olddir = os.getcwd() if package_dir: os.chdir(package_dir) try: yield finally: os.chdir(olddir) @Lazy def discovered(self): tests = self.tests discovered = self.Discovered(self.package, self.configured_test_options, self.ignore, self.config, self.allow_combine) # We need to glob relative names, our config is based on filenames still with self._in_dir(self.package_dir): if not tests: tests = set(glob.glob('test_*.py')) - set(['test_support.py']) else: tests = set(tests) if self.ignore: # Always ignore the designated list, even if tests # were specified on the command line. This fixes a # nasty interaction with # test__threading_vs_settrace.py being run under # coverage when 'grep -l subprocess test*py' is used # to list the tests to run. 
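                # For instance, if 'grep -l subprocess test*py' picked up
                # test__threading_vs_settrace.py but that file is listed in
                # IGNORED_TESTS, it still gets removed here.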
tests -= self.ignore tests = sorted(tests) discovered.visit_files(tests) return discovered def __iter__(self): return iter(self.discovered.commands) # pylint:disable=no-member def __len__(self): return len(self.discovered.commands) # pylint:disable=no-member def load_list_from_file(filename, package): result = [] if filename: # pylint:disable=unspecified-encoding with open(_package_relative_filename(filename, package)) as f: for x in f: x = x.split('#', 1)[0].strip() if x: result.append(x) return result def matches(possibilities, command, include_flaky=True): if isinstance(command, list): command = ' '.join(command) for line in possibilities: if not include_flaky and line.startswith('FLAKY '): continue line = line.replace('FLAKY ', '') # Our configs are still mostly written in terms of file names, # but the non-monkey tests are now using package names. # Strip off '.py' from filenames to see if we match a module. # XXX: This could be much better. Our command needs better structure. if command.endswith(' ' + line) or command.endswith(line.replace(".py", '')): return True if ' ' not in command and command == line: return True return False def format_seconds(seconds): if seconds < 20: return '%.1fs' % seconds seconds = str(timedelta(seconds=round(seconds))) if seconds.startswith('0:'): seconds = seconds[2:] return seconds def _show_longest_running(result_collector, how_many=5): longest_running_tests = result_collector.longest_running_tests if not longest_running_tests: return # The only tricky part is handling repeats. we want to show them, # but not count them as a distinct entry. util.log('\nLongest-running tests:') length_of_longest_formatted_decimal = len('%.1f' % longest_running_tests[0].run_duration) frmt = '%' + str(length_of_longest_formatted_decimal) + '.1f seconds: %s' seen_names = set() for result in longest_running_tests: util.log(frmt, result.run_duration, result.name) seen_names.add(result.name) if len(seen_names) >= how_many: break def report(result_collector, # type: ResultCollector exit=True, took=None, configured_failing_tests=()): # pylint:disable=redefined-builtin,too-many-branches,too-many-locals total = result_collector.total failed = result_collector.failed passed = result_collector.passed total_cases = result_collector.total_cases total_skipped = result_collector.total_skipped _show_longest_running(result_collector) if took: took = ' in %s' % format_seconds(took) else: took = '' failed_expected = [] failed_unexpected = [] passed_unexpected = [] for name in passed: if matches(configured_failing_tests, name, include_flaky=False): passed_unexpected.append(name) if passed_unexpected: util.log('\n%s/%s unexpected passes', len(passed_unexpected), total, color='error') print_list(passed_unexpected) if result_collector.reran: util.log('\n%s/%s tests rerun', len(result_collector.reran), total, color='warning') print_list(result_collector.reran) if failed: util.log('\n%s/%s tests failed%s', len(failed), total, took, color='warning') for name in failed: if matches(configured_failing_tests, name, include_flaky=True): failed_expected.append(name) else: failed_unexpected.append(name) if failed_expected: util.log('\n%s/%s expected failures', len(failed_expected), total, color='warning') print_list(failed_expected) if failed_unexpected: util.log('\n%s/%s unexpected failures', len(failed_unexpected), total, color='error') print_list(failed_unexpected) util.log( '\nRan %s tests%s in %s files%s', total_cases, util._colorize('skipped', " (skipped=%d)" % total_skipped) if total_skipped 
else '', total, took, ) if exit: if failed_unexpected: sys.exit(min(100, len(failed_unexpected))) if passed_unexpected: sys.exit(101) if total <= 0: sys.exit('No tests found.') def print_list(lst): for name in lst: util.log(' - %s', name) def _setup_environ(debug=False): def not_set(key): return not bool(os.environ.get(key)) if (not_set('PYTHONWARNINGS') and (not sys.warnoptions # Python 3.7 goes from [] to ['default'] for nothing or sys.warnoptions == ['default'])): # action:message:category:module:line # - when a warning matches # more than one option, the action for the last matching # option is performed. # - action is one of : ignore, default, all, module, once, error # Enable default warnings such as ResourceWarning. # ResourceWarning doesn't exist on Py2, so don't put it # in there to avoid a warnnig. defaults = [ 'default', 'default::DeprecationWarning', ] if not PY2: defaults.append('default::ResourceWarning') os.environ['PYTHONWARNINGS'] = ','.join(defaults + [ # On Python 3[.6], the system site.py module has # "open(fullname, 'rU')" which produces the warning that # 'U' is deprecated, so ignore warnings from site.py 'ignore:::site:', # pkgutil on Python 2 complains about missing __init__.py 'ignore:::pkgutil:', # importlib/_bootstrap.py likes to spit out "ImportWarning: # can't resolve package from __spec__ or __package__, falling # back on __name__ and __path__". I have no idea what that means, but it seems harmless # and is annoying. 'ignore:::importlib._bootstrap:', 'ignore:::importlib._bootstrap_external:', # importing ABCs from collections, not collections.abc 'ignore:::pkg_resources._vendor.pyparsing:', 'ignore:::dns.namedict:', # dns.hash itself is being deprecated, importing it raises the warning; # we don't import it, but dnspython still does 'ignore:::dns.hash:', # dns.zone uses some raw regular expressions # without the r'' syntax, leading to DeprecationWarning: invalid # escape sequence. This is fixed in 2.0 (Python 3 only). 'ignore:::dns.zone:', ]) if not_set('PYTHONFAULTHANDLER'): os.environ['PYTHONFAULTHANDLER'] = 'true' if not_set('GEVENT_DEBUG') and debug: os.environ['GEVENT_DEBUG'] = 'debug' if not_set('PYTHONTRACEMALLOC') and debug: # This slows the tests down quite a bit. Reserve # for debugging. os.environ['PYTHONTRACEMALLOC'] = '10' if not_set('PYTHONDEVMODE'): # Python 3.7 and above. os.environ['PYTHONDEVMODE'] = '1' if not_set('PYTHONMALLOC') and debug: # Python 3.6 and above. # This slows the tests down some, but # can detect memory corruption. Unfortunately # it can also be flaky, especially in pre-release # versions of Python (e.g., lots of crashes on Python 3.8b4). os.environ['PYTHONMALLOC'] = 'debug' if sys.version_info.releaselevel != 'final' and not debug: os.environ['PYTHONMALLOC'] = 'default' os.environ['PYTHONDEVMODE'] = '' # PYTHONSAFEPATH breaks the assumptions of some tests, notably test_interpreters.py os.environ.pop('PYTHONSAFEPATH', None) interesting_envs = { k: os.environ[k] for k in os.environ if k.startswith(('PYTHON', 'GEVENT')) } widest_k = max(len(k) for k in interesting_envs) for k, v in sorted(interesting_envs.items()): util.log('%*s\t=\t%s', widest_k, k, v, color="debug") def main(): # pylint:disable=too-many-locals,too-many-statements,too-many-branches import argparse parser = argparse.ArgumentParser() parser.add_argument('--ignore') parser.add_argument( '--discover', action='store_true', help="Only print the tests found." 
) parser.add_argument( '--config', default='known_failures.py', help="The path to the config file containing " "FAILING_TESTS, IGNORED_TESTS and RUN_ALONE. " "Defaults to %(default)s." ) parser.add_argument( "--coverage", action="store_true", help="Enable coverage recording with coverage.py." ) # TODO: Quiet and verbose should be mutually exclusive parser.add_argument( "--quiet", action="store_true", default=True, help="Be quiet. Defaults to %(default)s. Also the " "GEVENTTEST_QUIET environment variable." ) parser.add_argument("--verbose", action="store_false", dest='quiet') parser.add_argument( "--debug", action="store_true", default=False, help="Enable debug settings. If the GEVENT_DEBUG environment variable is not set, " "this sets it to 'debug'. This can also enable PYTHONTRACEMALLOC and the debug PYTHONMALLOC " "allocators, if not already set. Defaults to %(default)s." ) parser.add_argument( "--package", default="gevent.tests", help="Load tests from the given package. Defaults to %(default)s." ) parser.add_argument( "--processes", "-j", default=DEFAULT_NWORKERS, type=int, help="Use up to the given number of parallel processes to execute tests. " "Defaults to %(default)s." ) parser.add_argument( '--no-combine', default=True, action='store_false', help="Do not combine tests into process groups." ) parser.add_argument('-u', '--use', metavar='RES1,RES2,...', action='store', type=parse_resources, help='specify which special resource intensive tests ' 'to run. "all" is the default; "none" may also be used. ' 'Disable individual resources with a leading -.' 'For example, "-u-network". GEVENTTEST_USE_RESOURCES is used ' 'if no argument is given. To only use one resources, specify ' '"-unone,resource".') parser.add_argument("--travis-fold", metavar="MSG", help="Emit Travis CI log fold markers around the output.") fail_parser = parser.add_mutually_exclusive_group() fail_parser.add_argument( "--second-chance", action="store_true", default=False, help="Give failed tests a second chance.") fail_parser.add_argument( '--failfast', '-x', action='store_true', default=False, help="Stop running after the first failure.") parser.add_argument('tests', nargs='*') options = parser.parse_args() # options.use will be either None for not given, or a list # of the last specified -u argument. # If not given, use the default, which we'll take from the environment, if set. options.use = list(set(parse_resources() if options.use is None else options.use)) # Whether or not it came from the environment, put it in the # environment now. 
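    # For example (matching the help text for -u above): '-unone,network'
    # runs only the network-intensive tests, while '-u-network' -- or, we
    # assume, GEVENTTEST_USE_RESOURCES=-network -- disables just that
    # resource and keeps the rest enabled.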
os.environ['GEVENTTEST_USE_RESOURCES'] = unparse_resources(options.use) setup_resources(options.use) # Set this before any test imports in case of 'from .util import QUIET'; # not that this matters much because we spawn tests in subprocesses, # it's the environment setting that matters util.QUIET = options.quiet if 'GEVENTTEST_QUIET' not in os.environ: os.environ['GEVENTTEST_QUIET'] = str(options.quiet) FAILING_TESTS = [] IGNORED_TESTS = [] RUN_ALONE = [] coverage = False if options.coverage or os.environ.get("GEVENTTEST_COVERAGE"): if PYPY and RUNNING_ON_CI: print("Ignoring coverage option on PyPy on CI; slow") else: coverage = True cov_config = os.environ['COVERAGE_PROCESS_START'] = os.path.abspath(".coveragerc") if PYPY: cov_config = os.environ['COVERAGE_PROCESS_START'] = os.path.abspath(".coveragerc-pypy") this_dir = os.path.dirname(__file__) site_dir = os.path.join(this_dir, 'coveragesite') site_dir = os.path.abspath(site_dir) os.environ['PYTHONPATH'] = site_dir + os.pathsep + os.environ.get("PYTHONPATH", "") # We change directory often, use an absolute path to keep all the # coverage files (which will have distinct suffixes because of parallel=true in .coveragerc # in this directory; makes them easier to combine and use with coverage report) os.environ['COVERAGE_FILE'] = os.path.abspath(".") + os.sep + ".coverage" # XXX: Log this with color. Right now, it interferes (buffering) with other early # output. print("Enabling coverage to", os.environ['COVERAGE_FILE'], "with site", site_dir, "and configuration file", cov_config) assert os.path.exists(cov_config) assert os.path.exists(os.path.join(site_dir, 'sitecustomize.py')) _setup_environ(debug=options.debug) if options.config: config = {} options.config = _package_relative_filename(options.config, options.package) with open(options.config) as f: # pylint:disable=unspecified-encoding config_data = f.read() six.exec_(config_data, config) FAILING_TESTS = config['FAILING_TESTS'] IGNORED_TESTS = config['IGNORED_TESTS'] RUN_ALONE = config['RUN_ALONE'] tests = Discovery( options.tests, ignore_files=options.ignore, ignored=IGNORED_TESTS, coverage=coverage, package=options.package, config=config, allow_combine=options.no_combine, ) if options.discover: for cmd, options in tests: print(util.getname(cmd, env=options.get('env'), setenv=options.get('setenv'))) print('%s tests found.' % len(tests)) else: if PYPY and RESOLVER_ARES: # XXX: Add a way to force these. print("Not running tests on pypy with c-ares; not a supported configuration") return if options.package: # Put this directory on the path so relative imports work. package_dir = _dir_from_package_name(options.package) os.environ['PYTHONPATH'] = os.environ.get('PYTHONPATH', "") + os.pathsep + package_dir allowed_return_codes = () if sys.version_info[:3] >= (3, 12, 1): # unittest suddenly started failing with this return code # if all tests in a module are skipped in 3.12.1. 
allowed_return_codes += (5,) runner = Runner( tests, allowed_return_codes=allowed_return_codes, configured_failing_tests=FAILING_TESTS, failfast=options.failfast, quiet=options.quiet, configured_run_alone_tests=RUN_ALONE, worker_count=options.processes, second_chance=options.second_chance, ) if options.travis_fold: runner = TravisFoldingRunner(runner, options.travis_fold) runner() if __name__ == '__main__': main() gevent-24.11.1/src/gevent/testing/timing.py000066400000000000000000000115661471441230600205600ustar00rootroot00000000000000# Copyright (c) 2018 gevent community # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import gevent from gevent._compat import perf_counter from . import sysinfo from . import leakcheck from .testcase import TestCase SMALLEST_RELIABLE_DELAY = 0.001 # 1ms, because of libuv SMALL_TICK = 0.01 SMALL_TICK_MIN_ADJ = SMALLEST_RELIABLE_DELAY SMALL_TICK_MAX_ADJ = 0.11 if sysinfo.RUNNING_ON_APPVEYOR: # Timing resolution is extremely poor on Appveyor # and subject to jitter. 
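    # With these values, _check_delay_bounds() below will accept a nominal
    # SMALL_TICK (0.01s) wait that actually took anywhere in roughly
    # [0.009, 1.51] seconds on AppVeyor.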
SMALL_TICK_MAX_ADJ = 1.5 LARGE_TICK = 0.2 LARGE_TICK_MIN_ADJ = LARGE_TICK / 2.0 LARGE_TICK_MAX_ADJ = SMALL_TICK_MAX_ADJ class _DelayWaitMixin(object): _default_wait_timeout = SMALL_TICK _default_delay_min_adj = SMALL_TICK_MIN_ADJ _default_delay_max_adj = SMALL_TICK_MAX_ADJ def wait(self, timeout): raise NotImplementedError('override me in subclass') def _check_delay_bounds(self, timeout, delay, delay_min_adj=None, delay_max_adj=None): delay_min_adj = self._default_delay_min_adj if not delay_min_adj else delay_min_adj delay_max_adj = self._default_delay_max_adj if not delay_max_adj else delay_max_adj self.assertTimeWithinRange(delay, timeout - delay_min_adj, timeout + delay_max_adj) def _wait_and_check(self, timeout=None): if timeout is None: timeout = self._default_wait_timeout # gevent.timer instances have a 'seconds' attribute, # otherwise it's the raw number seconds = getattr(timeout, 'seconds', timeout) gevent.get_hub().loop.update_now() start = perf_counter() try: result = self.wait(timeout) finally: self._check_delay_bounds(seconds, perf_counter() - start, self._default_delay_min_adj, self._default_delay_max_adj) return result def test_outer_timeout_is_not_lost(self): timeout = gevent.Timeout.start_new(SMALLEST_RELIABLE_DELAY, ref=False) try: with self.assertRaises(gevent.Timeout) as exc: self.wait(timeout=1) self.assertIs(exc.exception, timeout) finally: timeout.close() class AbstractGenericWaitTestCase(_DelayWaitMixin, TestCase): # pylint:disable=abstract-method _default_wait_timeout = LARGE_TICK _default_delay_min_adj = LARGE_TICK_MIN_ADJ _default_delay_max_adj = LARGE_TICK_MAX_ADJ @leakcheck.ignores_leakcheck # waiting checks can be very sensitive to timing def test_returns_none_after_timeout(self): result = self._wait_and_check() # join and wait simply return after timeout expires self.assertIsNone(result) class AbstractGenericGetTestCase(_DelayWaitMixin, TestCase): # pylint:disable=abstract-method Timeout = gevent.Timeout def cleanup(self): pass def test_raises_timeout_number(self): with self.assertRaises(self.Timeout): self._wait_and_check(timeout=SMALL_TICK) # get raises Timeout after timeout expired self.cleanup() def test_raises_timeout_Timeout(self): timeout = gevent.Timeout(self._default_wait_timeout) try: self._wait_and_check(timeout=timeout) except gevent.Timeout as ex: self.assertIs(ex, timeout) finally: timeout.close() self.cleanup() def test_raises_timeout_Timeout_exc_customized(self): error = RuntimeError('expected error') timeout = gevent.Timeout(self._default_wait_timeout, exception=error) try: with self.assertRaises(RuntimeError) as exc: self._wait_and_check(timeout=timeout) self.assertIs(exc.exception, error) self.cleanup() finally: timeout.close() gevent-24.11.1/src/gevent/testing/travis.py000066400000000000000000000015551471441230600205760ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Support functions for travis # See https://github.com/travis-ci/travis-rubies/blob/9f7962a881c55d32da7c76baefc58b89e3941d91/build.sh from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys commands = {} def command(func): commands[func.__name__] = lambda: func(*sys.argv[2:]) return func @command def fold_start(name, msg): sys.stdout.write('travis_fold:start:') sys.stdout.write(name) sys.stdout.write(chr(0o33)) sys.stdout.write('[33;1m') sys.stdout.write(msg) sys.stdout.write(chr(0o33)) sys.stdout.write('[33;0m\n') @command def fold_end(name): sys.stdout.write("\ntravis_fold:end:") sys.stdout.write(name) 
sys.stdout.write("\r\n") def main(): cmd = sys.argv[1] commands[cmd]() if __name__ == '__main__': main() gevent-24.11.1/src/gevent/testing/util.py000066400000000000000000000455461471441230600202530ustar00rootroot00000000000000""" .. caution:: This module imports `subprocess` and `threading`; this can break monkey-patched unittests. Specifically, ``test_threading.ThreadTests.test_import_from_another_thread``. """ import re import sys import os import traceback import unittest import threading import subprocess from time import sleep from . import six from gevent._config import validate_bool from gevent._compat import perf_counter from gevent.monkey import get_original # pylint: disable=broad-except,attribute-defined-outside-init BUFFER_OUTPUT = False # This is set by the testrunner, defaulting to true (be quiet) # But if we're run standalone, default to false QUIET = validate_bool(os.environ.get('GEVENTTEST_QUIET', '0')) class Popen(subprocess.Popen): """ Depending on when we're imported and if the process has been monkey-patched, this could use cooperative or native Popen. """ timer = None # a threading.Timer instance def __enter__(self): return self def __exit__(self, *args): kill(self) # Coloring code based on zope.testrunner # These colors are carefully chosen to have enough contrast # on terminals with both black and white background. _colorscheme = { 'normal': 'normal', 'default': 'default', 'actual-output': 'red', 'character-diffs': 'magenta', 'debug': 'cyan', 'diff-chunk': 'magenta', 'error': 'brightred', 'error-number': 'brightred', 'exception': 'red', 'expected-output': 'green', 'failed-example': 'cyan', 'filename': 'lightblue', 'info': 'normal', 'lineno': 'lightred', 'number': 'green', 'ok-number': 'green', 'skipped': 'brightyellow', 'slow-test': 'brightmagenta', 'suboptimal-behaviour': 'magenta', 'testname': 'lightcyan', 'warning': 'cyan', } _prefixes = [ ('dark', '0;'), ('light', '1;'), ('bright', '1;'), ('bold', '1;'), ] _colorcodes = { 'default': 0, 'normal': 0, 'black': 30, 'red': 31, 'green': 32, 'brown': 33, 'yellow': 33, 'blue': 34, 'magenta': 35, 'cyan': 36, 'grey': 37, 'gray': 37, 'white': 37 } def _color_code(color): prefix_code = '' for prefix, code in _prefixes: if color.startswith(prefix): color = color[len(prefix):] prefix_code = code break color_code = _colorcodes[color] return '\033[%s%sm' % (prefix_code, color_code) def _color(what): return _color_code(_colorscheme[what]) def _colorize(what, message, normal='normal'): return _color(what) + message + _color(normal) def log(message, *args, **kwargs): """ Log a *message* :keyword str color: One of the values from _colorscheme """ color = kwargs.pop('color', 'normal') if args: string = message % args else: string = message string = _colorize(color, string) with output_lock: # pylint:disable=not-context-manager sys.stderr.write(string + '\n') def debug(message, *args, **kwargs): """ Log the *message* only if we're not in quiet mode. """ if not QUIET: kwargs.setdefault('color', 'debug') log(message, *args, **kwargs) def killpg(pid): if not hasattr(os, 'killpg'): return try: return os.killpg(pid, 9) except OSError as ex: if ex.errno != 3: log('killpg(%r, 9) failed: %s: %s', pid, type(ex).__name__, ex) except Exception as ex: log('killpg(%r, 9) failed: %s: %s', pid, type(ex).__name__, ex) def kill_processtree(pid): ignore_msg = 'ERROR: The process "%s" not found.' 
% pid err = Popen('taskkill /F /PID %s /T' % pid, stderr=subprocess.PIPE).communicate()[1] if err and err.strip() not in [ignore_msg, '']: log('%r', err) def _kill(popen): if hasattr(popen, 'kill'): try: popen.kill() except OSError as ex: if ex.errno == 3: # No such process return if ex.errno == 13: # Permission denied (translated from windows error 5: "Access is denied") return raise else: try: os.kill(popen.pid, 9) except EnvironmentError: pass def kill(popen): if popen.timer is not None: popen.timer.cancel() popen.timer = None if popen.poll() is not None: return popen.was_killed = True try: if getattr(popen, 'setpgrp_enabled', None): killpg(popen.pid) elif sys.platform.startswith('win'): kill_processtree(popen.pid) except Exception: traceback.print_exc() try: _kill(popen) except Exception: traceback.print_exc() try: popen.wait() except Exception: traceback.print_exc() # A set of environment keys we ignore for printing purposes IGNORED_GEVENT_ENV_KEYS = { 'GEVENTTEST_QUIET', 'GEVENT_DEBUG', 'GEVENTSETUP_EV_VERIFY', 'GEVENTSETUP_EMBED', } # A set of (name, value) pairs we ignore for printing purposes. # These should match the defaults. IGNORED_GEVENT_ENV_ITEMS = { ('GEVENT_RESOLVER', 'thread'), ('GEVENT_RESOLVER_NAMESERVERS', '8.8.8.8'), ('GEVENTTEST_USE_RESOURCES', 'all'), } def getname(command, env=None, setenv=None): result = [] env = (env or os.environ).copy() env.update(setenv or {}) for key, value in sorted(env.items()): if not key.startswith('GEVENT'): continue if key in IGNORED_GEVENT_ENV_KEYS: continue if (key, value) in IGNORED_GEVENT_ENV_ITEMS: continue result.append('%s=%s' % (key, value)) if isinstance(command, six.string_types): result.append(command) else: result.extend(command) return ' '.join(result) def start(command, quiet=False, **kwargs): timeout = kwargs.pop('timeout', None) preexec_fn = None if not os.environ.get('DO_NOT_SETPGRP'): preexec_fn = getattr(os, 'setpgrp', None) env = kwargs.pop('env', None) setenv = kwargs.pop('setenv', None) or {} name = getname(command, env=env, setenv=setenv) if preexec_fn is not None: setenv['DO_NOT_SETPGRP'] = '1' if setenv: env = env.copy() if env else os.environ.copy() env.update(setenv) if not quiet: log('+ %s', name) popen = Popen(command, preexec_fn=preexec_fn, env=env, **kwargs) popen.name = name popen.setpgrp_enabled = preexec_fn is not None popen.was_killed = False if timeout is not None: t = get_original('threading', 'Timer')(timeout, kill, args=(popen, )) popen.timer = t t.daemon = True t.start() popen.timer = t return popen class RunResult(object): """ The results of running an external command. If the command was successful, this has a boolean value of True; otherwise, a boolean value of false. The integer value of this object is the command's exit code. 
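    An illustrative (non-doctest) example::

        result = run([sys.executable, '-c', 'pass'])
        bool(result)  # True, because the exit code was 0
        int(result)   # 0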
""" def __init__(self, command, run_kwargs, code, output=None, # type: str error=None, # type: str name=None, run_count=0, skipped_count=0, run_duration=0, # type: float ): self.command = command self.run_kwargs = run_kwargs self.code = code self.output = output self.error = error self.name = name self.run_count = run_count self.skipped_count = skipped_count self.run_duration = run_duration @property def output_lines(self): return self.output.splitlines() def __bool__(self): return not bool(self.code) __nonzero__ = __bool__ def __int__(self): return self.code def __repr__(self): return ( "RunResult of: %r\n" "Code: %s\n" "kwargs: %r\n" "Output:\n" "----\n" "%s" "----\n" "Error:\n" "----\n" "%s" "----\n" ) % ( self.command, self.code, self.run_kwargs, self.output, self.error ) def _should_show_warning_output(out): if 'Warning' in out: # Strip out some patterns we specifically do not # care about. # from test.support for monkey-patched tests out = out.replace('Warning -- reap_children', 'NADA') out = out.replace("Warning -- threading_cleanup", 'NADA') # The below *could* be done with sophisticated enough warning # filters passed to the children # collections.abc is the new home; setuptools uses the old one, # as does dnspython out = out.replace("DeprecationWarning: Using or importing the ABCs", 'NADA') # libuv poor timer resolution out = out.replace('UserWarning: libuv only supports', 'NADA') # Packages on Python 2 out = out.replace('ImportWarning: Not importing directory', 'NADA') # Testing that U mode does the same thing out = out.replace("DeprecationWarning: 'U' mode is deprecated", 'NADA') out = out.replace("DeprecationWarning: dns.hash module", 'NADA') return 'Warning' in out output_lock = threading.Lock() def _find_test_status(took, out): status = '[took %.1fs%s]' skipped = '' run_count = 0 skipped_count = 0 if out: m = re.search(r"Ran (\d+) tests in", out) if m: result = out[m.start():m.end()] status = status.replace('took', result) run_count = int(out[m.start(1):m.end(1)]) m = re.search(r' \(skipped=(\d+)\)$', out) if m: skipped = _colorize('skipped', out[m.start():m.end()]) skipped_count = int(out[m.start(1):m.end(1)]) status = status % (took, skipped) # pylint:disable=consider-using-augmented-assign if took > 10: status = _colorize('slow-test', status) return status, run_count, skipped_count def run(command, **kwargs): # pylint:disable=too-many-locals """ Execute *command*, returning a `RunResult`. This blocks until *command* finishes or until it times out. """ buffer_output = kwargs.pop('buffer_output', BUFFER_OUTPUT) quiet = kwargs.pop('quiet', QUIET) verbose = not quiet nested = kwargs.pop('nested', False) allowed_return_codes = kwargs.pop('allowed_return_codes', ()) if buffer_output: assert 'stdout' not in kwargs and 'stderr' not in kwargs, kwargs kwargs['stderr'] = subprocess.STDOUT kwargs['stdout'] = subprocess.PIPE popen = start(command, quiet=quiet, **kwargs) name = popen.name try: time_start = perf_counter() out, err = popen.communicate() duration = perf_counter() - time_start if popen.was_killed or popen.poll() is None: result = 'TIMEOUT' else: result = popen.poll() finally: kill(popen) assert popen.timer is None # We don't want to treat return codes that are allowed as failures, # but we do want to log those specially. That's why we retain the distinction # between ``failed`` and ``result`` (failed takes the allowed codes into account). 
failed = bool(result) and result not in allowed_return_codes if out: out = out.strip() out = out if isinstance(out, str) else out.decode('utf-8', 'ignore') if out and (failed or verbose or _should_show_warning_output(out)): out = ' ' + out.replace('\n', '\n ') out = out.rstrip() out += '\n' log('| %s\n%s', name, out) status, run_count, skipped_count = _find_test_status(duration, out) if result: log('! %s [code %s] %s', name, result, status, color='error' if failed else 'suboptimal-behaviour') elif not nested: log('- %s %s', name, status) # For everything outside this function, we need to pretend that # allowed codes are actually successes. return RunResult( command, kwargs, 0 if result in allowed_return_codes else result, output=out, error=err, name=name, run_count=run_count, skipped_count=skipped_count, run_duration=duration, ) class NoSetupPyFound(Exception): "Raised by find_setup_py_above" def find_setup_py_above(a_file): "Return the directory containing setup.py somewhere above *a_file*" root = os.path.dirname(os.path.abspath(a_file)) while not os.path.exists(os.path.join(root, 'setup.py')): prev, root = root, os.path.dirname(root) if root == prev: # Let's avoid infinite loops at root raise NoSetupPyFound('could not find my setup.py above %r' % (a_file,)) return root def search_for_setup_py(a_file=None, a_module_name=None, a_class=None, climb_cwd=True): if a_file is not None: try: return find_setup_py_above(a_file) except NoSetupPyFound: pass if a_class is not None: try: return find_setup_py_above(sys.modules[a_class.__module__].__file__) except NoSetupPyFound: pass if a_module_name is not None: try: return find_setup_py_above(sys.modules[a_module_name].__file__) except NoSetupPyFound: pass if climb_cwd: return find_setup_py_above("./dne") raise NoSetupPyFound("After checking %r" % (locals(),)) def _version_dir_components(): directory = '%s.%s' % sys.version_info[:2] full_directory = '%s.%s.%s' % sys.version_info[:3] if hasattr(sys, 'pypy_version_info'): directory += 'pypy' full_directory += 'pypy' return directory, full_directory def find_stdlib_tests(): """ Return a sequence of directories that could contain stdlib tests for the running version of Python. The most specific tests are at the end of the sequence. No checks are performed on existence of the directories. """ setup_py = search_for_setup_py(a_file=__file__) greentest = os.path.join(setup_py, 'src', 'greentest') directory, full_directory = _version_dir_components() directory = '%s.%s' % sys.version_info[:2] full_directory = '%s.%s.%s' % sys.version_info[:3] if hasattr(sys, 'pypy_version_info'): directory += 'pypy' full_directory += 'pypy' directory = os.path.join(greentest, directory) full_directory = os.path.join(greentest, full_directory) return directory, full_directory def absolute_pythonpath(): """ Return the PYTHONPATH environment variable (if set) with each entry being an absolute path. If not set, returns None. """ if 'PYTHONPATH' not in os.environ: return None path = os.environ['PYTHONPATH'] path = [os.path.abspath(p) for p in path.split(os.path.pathsep)] return os.path.pathsep.join(path) class ExampleMixin(object): """ Something that uses the ``examples/`` directory from the root of the gevent distribution. The `cwd` property is set to the root of the gevent distribution. """ #: Arguments to pass to the example file. 
example_args = [] before_delay = 3 after_delay = 0.5 #: Path of the example Python file, relative to `cwd` example = None # subclasses define this to be the path to the server.py #: Keyword arguments to pass to the start or run method. start_kwargs = None def find_setup_py(self): "Return the directory containing setup.py" return search_for_setup_py( a_file=__file__, a_class=type(self) ) @property def cwd(self): try: root = self.find_setup_py() except NoSetupPyFound as e: raise unittest.SkipTest("Unable to locate file/dir to run: %s" % (e,)) return os.path.join(root, 'examples') @property def setenv(self): """ Returns a dictionary of environment variables to set for the child in addition to (or replacing) the ones already in the environment. Since the child is run in `cwd`, relative paths in ``PYTHONPATH`` need to be converted to absolute paths. """ abs_pythonpath = absolute_pythonpath() return {'PYTHONPATH': abs_pythonpath} if abs_pythonpath else None def _start(self, meth): if getattr(self, 'args', None): raise AssertionError("Invalid test", self, self.args) if getattr(self, 'server', None): raise AssertionError("Invalid test", self, self.server) try: # These could be or are properties that can raise server = self.example server_dir = self.cwd except NoSetupPyFound as e: raise unittest.SkipTest("Unable to locate file/dir to run: %s" % (e,)) kwargs = self.start_kwargs or {} setenv = self.setenv if setenv: if 'setenv' in kwargs: kwargs['setenv'].update(setenv) else: kwargs['setenv'] = setenv return meth( [sys.executable, '-W', 'ignore', '-u', server] + self.example_args, cwd=server_dir, **kwargs ) def start_example(self): return self._start(meth=start) def run_example(self):# run() is a unittest method. return self._start(meth=run) class TestServer(ExampleMixin, unittest.TestCase): popen = None def running_server(self): from contextlib import contextmanager @contextmanager def running_server(): with self.start_example() as popen: self.popen = popen self.before() yield self.after() return running_server() def test(self): with self.running_server(): self._run_all_tests() def before(self): if self.before_delay is not None: sleep(self.before_delay) self.assertIsNone(self.popen.poll(), '%s died with code %s' % ( self.example, self.popen.poll(), )) def after(self): if self.after_delay is not None: sleep(self.after_delay) self.assertIsNone(self.popen.poll(), '%s died with code %s' % ( self.example, self.popen.poll(), )) def _run_all_tests(self): ran = False for method in sorted(dir(self)): if method.startswith('_test'): function = getattr(self, method) if callable(function): function() ran = True assert ran class alarm(threading.Thread): # can't use signal.alarm because of Windows def __init__(self, timeout): threading.Thread.__init__(self) self.daemon = True self.timeout = timeout self.start() def run(self): sleep(self.timeout) sys.stderr.write('Timeout.\n') os._exit(5) gevent-24.11.1/src/gevent/tests/000077500000000000000000000000001471441230600163735ustar00rootroot00000000000000gevent-24.11.1/src/gevent/tests/2_7_keycert.pem000066400000000000000000000117311471441230600212160ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQChqfmG6uOG95Jb 7uRi6yxohJ8GOR3gi39yX6JB+Xdukvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc 1c0XUEjVoQvCNQHIaDTXiUXOGXfkQNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6H ouLpyIC55SXdK7pTEbmU7J1aBjugn3O56cu6FzjU1j/0QVUVGloxApLvv57bmINa X9ygKsh/ug0lhV1RwYLJ9UX57m95FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZf 
s72kcDUTRYKNZbRFRRETORdOVRHxlAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66B atun6HU+YAs6z8Qc8S1EMElJdoyVeLCqLA07btICzKq2I16TZAOWVng2P7NOtibA eCzDAxAxJ3Oby+BVikKcu8WmJLxGvRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX 9H5l/WF+aNRqSccgs0Umddj33N+b/mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1 iqD7iXvTxt7yZeU7LIMRgDqhVe6zoBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si 3j3Wkmyca7hEN8qq8DkLWNf1PTcIwo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/e EXTjuUePQlHCvwW9iiY7jTjDfbIvpwIDAQABAoICAC3CJMTRe3FaZezro210T2+O Ck0CobhLA9nlw9GUwP9lTtxATwCzmXybrSzOUhknwzUXSUwkmCPIVCqBQbnVmagO G3vu8QA+rqZLTpzVjJ/o0TFBXKsH681pKdCrELDVmeDN135C2W6SABI4Qq4VeIol mCAQHn8gxzyl9Kvkk8AVIfZ/fJDBve5Qbm2+iEye1uSEa/68aEST2Kod9B7JvVKZ 4Nq78vwPH+v2JsZlfNvyuiakGWkOb47eHqVfQIyybaebwzkgxKEmUvGnuIfw0rUP ubI4FVx9/iVIxZYAckHEuQh3HYOD9TmdcK4h79dDWnXP6G6hg3/rwbsT+fR+0aBQ 9rkKnA4uToGikYmplixAQ/jDBwMs3VQqenO+YBIsC4HEZ0fJUbs+l4LEnuUJxYcR UlAvnVQXa1WGne3Yzb2xONWeiocKfhcdJ2JuQo00UR74+2Qonxn/WpimvlLCBDgI uKxHCSWOgv5yPpU2kwTPIjORXcy/y2G9K2bnsQCzznPRDyNkZmavQxxG6greFcrO /0yhRPuBgxKBRvXPO+F5fybKFlU9IPLFehV60jLUybBejab/lMJyxdkh9UMu2Xqy FVsRGazJt6T6AGp6TFEEcFUQw7qXNhVo9S7zGGaJFJdYc+Vx8QJRoCe8EAYVH7Mp b/eYGhHaKg6iG7QCjPPxAoIBAQDN54wtuDqpAA+4PmqhiEhQKhabNqAoVmAWUxnJ Db4Zzvkkc3Fo/Yg0HnQVaT0KmkcxY7397lTdtiwNkWPgJ0f6+g7L4K7PA7xh/q84 IoXFGvYWwVdiVXLR1l06jorpA20clnba6CsbezwcllTq4bWvNnrAcM8l1YrAlRnV qqqbPL78Rnba4C8q+VFy8r0d9OGnbvFcV7VWJjhr0a3aZbHQ67jPinNiUWvBVFFx yGrqPMjkeHyiTLMhqQpaSHH67S88rj0g9RKexBaSUrl18QO7xnQHHSCcFWMQOiSN shNvFri48dnU+Ms6ZLc3MBHbTK6uzP8xJCVnmsz/MWPGkQZFAoIBAQDI/vj/3/y/ EpIawyHN7PQAMoto4AQF6sVasrgGd1tRsJnGKrCugH9gILvyke3L7qg0JTV3bDJY e8+vH1vC3NV7PsOlCFjMtRWG0lRbCh/b7Qe3pCvPu4mbFhJgMT/mz+vbl5zvcdgX kvne+St/267NKnY5gHBDhqitBwkZwNlTWJ0zVmTecKXn/KwjS9lX1qU3HiT3UFkd 5Y5Nt5lj1IOK/6NCXkxVkgOc4Zjcxx138Cg03VJhIiHTusRq6z9iTSTDubhkaSbi 2nadptFBiQtkVhAJ5G53U7pl/pIhhiJy901bu/v/wrIMJ2l6hiZIcLrbg6VGXxjV 5dB7LDEtKoL7AoIBAQC8+ffA+mX0N9c1nSuWh5L+6DIJUHBbtTLJKonu6gsAeuJU 3xNGbfK1CwI1qHnaolAW91knlrcTKaBy726ACu1YXmp4GgW2f9JFCk/csGqfxaf4 qIg/+va/ugOku7CoPXnGFB6PuSffOBKqlhrn3DI41kKBHsgwDDYlnHKylMmyYmVS +oUZS0pfIaXsXvbNaLQ2TG9+9gy7Pabo5e+vE0jI25+p84MEyH+iV3XMfUoLI7Cp aB/TgZuimBelVvotd8Sz56K4/dSSHJwuvXfz1Dk9/Nz+rnAAcOyTtxlXZwnJGkx9 iZMIkTNMq6UwJJEu+ckVK5ZHjso5tWzSBo1xcCcVAoIBAQCPL0x1A7zK5VDd7cqE J1w/U8KKiKN1D6VeElkUiiysyjERwdGxzmpvMYKSsDCGCdMbqrInDBXlgPYXnDBD ZgxSywiW5ZZU5l+advWPEWxWwMmxoitvxfqmV5fpnMwYAmDUQ3KSBTjaumJ03G6H nBkvoSMtnXjcMe6xrIRoK0Dmpgb+znn3GKqn1BFQ57TCZW+3DytoX33M1X6FkNie DINVHv3Pxtt8ThNyzCeYh+RPT+9kkZIhDi6o5bENNd8miSw6nnBkX6BLFTRQ5MjH dfh+luzAD1I+gZAVHsA9T4/09IXQZt+DeNBb5iu3FB/rlRsYS/UOZ6qKnjfhtz6l HVbHAoIBAFjNY/UPJDxQ/uG+rMU0nrmSBRGdgBvQkcefjWX/LIZV3MjNilUQ+B2a lXz5AHGmHRnnwQsBVfN8rf4qQLln8l34Kgm7+cIFavgfg2oqVbNyNgezSlUmRq0J Ttf3xYJtRgRUx8F+BcgJXMqlNGTMQJY8wawM/ATkwkbmSwGOKe04sBeIkwEycMId BupvfN5lxDrKqJVPSl1t5Rh4us95CNh22/c5Tq5rsynl02ZB4swlcsVTdv8FSGmM QVf/MkWXGN/x4lHJhKyklHMGv15GGvys1nlPTstMfUYs55ioWRW46TXQ8vOyzzpg 67xzBKYFEde+hgYk7X1Xeqj8A6bsqro= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIFCzCCAvOgAwIBAgIUePnEKFfhxpt3oypt6nTicAGTFJowDQYJKoZIhvcNAQEL BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTIxMDcwODExMzQzNVoYDzIxMjEw NjE0MTEzNDM1WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggIiMA0GCSqGSIb3DQEB AQUAA4ICDwAwggIKAoICAQChqfmG6uOG95Jb7uRi6yxohJ8GOR3gi39yX6JB+Xdu kvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc1c0XUEjVoQvCNQHIaDTXiUXOGXfk QNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6HouLpyIC55SXdK7pTEbmU7J1aBjug n3O56cu6FzjU1j/0QVUVGloxApLvv57bmINaX9ygKsh/ug0lhV1RwYLJ9UX57m95 FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZfs72kcDUTRYKNZbRFRRETORdOVRHx lAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66Batun6HU+YAs6z8Qc8S1EMElJdoyV 
eLCqLA07btICzKq2I16TZAOWVng2P7NOtibAeCzDAxAxJ3Oby+BVikKcu8WmJLxG vRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX9H5l/WF+aNRqSccgs0Umddj33N+b /mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1iqD7iXvTxt7yZeU7LIMRgDqhVe6z oBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si3j3Wkmyca7hEN8qq8DkLWNf1PTcI wo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/eEXTjuUePQlHCvwW9iiY7jTjDfbIv pwIDAQABo1MwUTAdBgNVHQ4EFgQUTUfShFbaXGMwrWEAkm05sXFH/x4wHwYDVR0j BBgwFoAUTUfShFbaXGMwrWEAkm05sXFH/x4wDwYDVR0TAQH/BAUwAwEB/zANBgkq hkiG9w0BAQsFAAOCAgEAe65ORDx0NDxTo1q6EY221KS3vEezUNBdZNaeOQsQeUAY lEO5iZ+2QLIVlWC5UtvISK96FU2CX0ucgAGfHS2ZB7o8i95fbjG2qrWC+VUH4V/6 jse9jlfGlYGkPuU5onNIDGcZ7gay3n0prCDiguAmCzV419GnGDWgSSgyVNCp/0tx b7pR5cVr0kZ5bTZjiysEEprkG2ofAlXzj09VGtTfM8gQvCz9Puj7pGzw2iaIEQVk hSGjoRWlI5x6+o16JOTHXzv9cYRUfDX6tjw3nQJIeMipuUkR8pkHUFjG3EeJEtO3 X/GO0G8rwUPaZiskGPiMZj7XqoVclnYL7JtntwUHR/dU5A/EhDfhgEfTXTqT78Oe cKri+VJE+G/hYxbP0FNYaDtqIwJcX1tsy4HOpKVBncc+K/PvXElVsyQET/+uwH7p Wm5ymndnuLoiQrWIA4nJC6rVwR4GPijuN0NCKcVdE+8jlOCBs3VBJTWKuu0J80RP 71iZy03AoK1YY4+nHglmE9HetAgSsbGh2fWC7DUS/4JzLSzOBeb+nn74zfmIfMU+ qUArFXvVGAtjmZZ/63cWzXDMZsp1BZ+O5dx6Gi2QtjgGYhh6DhW7ocQYXDkAeN/O K1Yzwq/G4AEQA0k0/1I+F0Rdlo41+7tOp+LMCOoZXqUzhM0ZQ2sf3QclubxLX9U= -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/__init__.py000066400000000000000000000000001471441230600204720ustar00rootroot00000000000000gevent-24.11.1/src/gevent/tests/__main__.py000066400000000000000000000002631471441230600204660ustar00rootroot00000000000000#!/usr/bin/env python from __future__ import print_function, absolute_import, division if __name__ == '__main__': from gevent.testing import testrunner testrunner.main() gevent-24.11.1/src/gevent/tests/_blocks_at_top_level.py000066400000000000000000000000601471441230600231120ustar00rootroot00000000000000from gevent import sleep sleep(0.01) x = "done" gevent-24.11.1/src/gevent/tests/_import_import_patch.py000066400000000000000000000000341471441230600231640ustar00rootroot00000000000000__import__('_import_patch') gevent-24.11.1/src/gevent/tests/_import_patch.py000066400000000000000000000000571471441230600215770ustar00rootroot00000000000000import gevent.monkey gevent.monkey.patch_all() gevent-24.11.1/src/gevent/tests/_import_wait.py000066400000000000000000000010721471441230600214420ustar00rootroot00000000000000# test__import_wait.py calls this via an import statement, # so all of this is happening with import locks held (especially on py2) import gevent def fn2(): return 2 # A blocking function doesn't raise LoopExit def fn(): return gevent.wait([gevent.spawn(fn2), gevent.spawn(fn2)]) gevent.spawn(fn).get() # Marshalling the traceback across greenlets doesn't # raise LoopExit def raise_name_error(): raise NameError("ThisIsExpected") try: gevent.spawn(raise_name_error).get() raise AssertionError("Should fail") except NameError as e: x = e gevent-24.11.1/src/gevent/tests/_imports_at_top_level.py000066400000000000000000000000671471441230600233410ustar00rootroot00000000000000# We simply import a stdlib module __import__('netrc') gevent-24.11.1/src/gevent/tests/_imports_imports_at_top_level.py000066400000000000000000000005311471441230600251120ustar00rootroot00000000000000import gevent # For reproducing #728: We spawn a greenlet at import time, # that itself wants to import, and wait on it at import time. 
# If we're the only greenlet running, and locks aren't granular # enough, this results in a LoopExit (and also a lock deadlock) def f(): __import__('_imports_at_top_level') g = gevent.spawn(f) g.get() gevent-24.11.1/src/gevent/tests/badcert.pem000066400000000000000000000036101471441230600205020ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/badkey.pem000066400000000000000000000041621471441230600203400ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- 
-----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/getaddrinfo_module.py000066400000000000000000000001641471441230600226010ustar00rootroot00000000000000import socket import gevent.socket as gevent_socket gevent_socket.getaddrinfo(u'gevent.org', None, socket.AF_INET) gevent-24.11.1/src/gevent/tests/hosts_file.txt000066400000000000000000010701351471441230600213020ustar00rootroot00000000000000## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost Localhost localhost.localdomain testsite.mc.com mathcounts.mc.com platform.osu.edu 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 172.178.0.51 excelsior excelsior.example.com 162.168.8.27 memoryprime.local memoryprime 122.168.9.64 isy.local isy 192.168.1.172 drivefoo.local 172.168.15.95 aragefoo.local 172.168.15.105 livgfoo.local 172.168.16.109 upsirsfoo.local 172.168.15.140 bacorthfoo.local 172.168.15.142 bacouthfoo.local 172.168.16.144 drisfoo.local 172.168.15.152 nghborfoo.local 172.168.15.154 fntfoo.local 172.168.18.151 as.local # Internals 146.120.241.22 ds3 146.120.241.23 ds4 146.120.241.21 ds2 146.120.241.20 ds1 # Not blocked by Mar 18 2013 0.0.0.0 h.ppjol.com 0.0.0.0 s.ppjol.net 0.0.0.0 yayfollowers.com 0.0.0.0 pagead2.googlesyndication.com 0.0.0.0 www.googletagservices.com 0.0.0.0 cdn.teads.tv 0.0.0.0 js.moatads.com 0.0.0.0 cdn2.teads.tv # This hosts file is brought to you by Dan Pollock and can be found at # http://someonewhocares.org/hosts/zero/ # # For example, to block unpleasant pages, try: 0.0.0.0 goatse.cx # More information on sites such as 0.0.0.0 www.goatse.cx # these can be found in this article 0.0.0.0 oralse.cx # en.wikipedia.org/wiki/List_of_shock_sites 0.0.0.0 www.oralse.cx 0.0.0.0 goatse.ca 0.0.0.0 www.goatse.ca 0.0.0.0 oralse.ca 0.0.0.0 www.oralse.ca 0.0.0.0 goat.cx 0.0.0.0 www.goat.cx 0.0.0.0 goatse.ru 0.0.0.0 www.goatse.ru 0.0.0.0 1girl1pitcher.com 0.0.0.0 1girl1pitcher.org 0.0.0.0 1guy1cock.com 0.0.0.0 1man1jar.org 0.0.0.0 1man2needles.com 0.0.0.0 1priest1nun.com 0.0.0.0 2girls1cup.com 0.0.0.0 2girls1cup-free.com 0.0.0.0 2girls1cup.nl 0.0.0.0 2girls1cup.ws 0.0.0.0 2girls1finger.com 0.0.0.0 2girls1finger.org 0.0.0.0 2guys1stump.org 0.0.0.0 3guys1hammer.ws 0.0.0.0 4girlsfingerpaint.com 0.0.0.0 4girlsfingerpaint.org 0.0.0.0 bagslap.com 0.0.0.0 ballsack.org 0.0.0.0 bluewaffle.biz 0.0.0.0 bottleguy.com 0.0.0.0 bowlgirl.com 0.0.0.0 cadaver.org 0.0.0.0 clownsong.com 0.0.0.0 copyright-reform.info 0.0.0.0 cshacks.partycat.us 0.0.0.0 cyberscat.com 0.0.0.0 dadparty.com 
0.0.0.0 detroithardcore.com 0.0.0.0 donotwatch.org 0.0.0.0 dontwatch.us 0.0.0.0 eelsoup.net 0.0.0.0 fruitlauncher.com 0.0.0.0 fuck.org 0.0.0.0 funnelchair.com 0.0.0.0 goatse.bz 0.0.0.0 goatsegirl.org 0.0.0.0 goatse.ru 0.0.0.0 hai2u.com 0.0.0.0 homewares.org 0.0.0.0 howtotroll.org 0.0.0.0 japscat.org 0.0.0.0 jiztini.com 0.0.0.0 junecleeland.com 0.0.0.0 kids-in-sandbox.com 0.0.0.0 kidsinsandbox.info 0.0.0.0 lemonparty.biz 0.0.0.0 lemonparty.org 0.0.0.0 lolhello.com 0.0.0.0 loltrain.com 0.0.0.0 meatspin.biz 0.0.0.0 meatspin.com 0.0.0.0 merryholidays.org 0.0.0.0 milkfountain.com 0.0.0.0 mudfall.com 0.0.0.0 mudmonster.org 0.0.0.0 nimp.org 0.0.0.0 nobrain.dk 0.0.0.0 nutabuse.com 0.0.0.0 octopusgirl.com 0.0.0.0 on.nimp.org 0.0.0.0 painolympics.info 0.0.0.0 phonejapan.com 0.0.0.0 pressurespot.com 0.0.0.0 prolapseman.com 0.0.0.0 scrollbelow.com 0.0.0.0 selfpwn.org 0.0.0.0 sexitnow.com 0.0.0.0 sourmath.com 0.0.0.0 suckdude.com 0.0.0.0 thatsjustgay.com 0.0.0.0 thatsphucked.com 0.0.0.0 thehomo.org 0.0.0.0 themacuser.org 0.0.0.0 thepounder.com 0.0.0.0 tubgirl.me 0.0.0.0 tubgirl.org 0.0.0.0 turdgasm.com 0.0.0.0 vomitgirl.org 0.0.0.0 walkthedinosaur.com 0.0.0.0 whipcrack.org 0.0.0.0 wormgush.com 0.0.0.0 www.1girl1pitcher.org 0.0.0.0 www.1guy1cock.com 0.0.0.0 www.1man1jar.org 0.0.0.0 www.1man2needles.com 0.0.0.0 www.1priest1nun.com 0.0.0.0 www.2girls1cup-free.com 0.0.0.0 www.2girls1cup.nl 0.0.0.0 www.2girls1cup.ws 0.0.0.0 www.2girls1finger.org 0.0.0.0 www.2guys1stump.org 0.0.0.0 www.3guys1hammer.ws 0.0.0.0 www.4girlsfingerpaint.org 0.0.0.0 www.bagslap.com 0.0.0.0 www.ballsack.org 0.0.0.0 www.bluewaffle.biz 0.0.0.0 www.bottleguy.com 0.0.0.0 www.bowlgirl.com 0.0.0.0 www.cadaver.org 0.0.0.0 www.clownsong.com 0.0.0.0 www.copyright-reform.info 0.0.0.0 www.cshacks.partycat.us 0.0.0.0 www.cyberscat.com 0.0.0.0 www.dadparty.com 0.0.0.0 www.detroithardcore.com 0.0.0.0 www.donotwatch.org 0.0.0.0 www.dontwatch.us 0.0.0.0 www.eelsoup.net 0.0.0.0 www.fruitlauncher.com 0.0.0.0 www.fuck.org 0.0.0.0 www.funnelchair.com 0.0.0.0 www.goatse.bz 0.0.0.0 www.goatsegirl.org 0.0.0.0 www.goatse.ru 0.0.0.0 www.hai2u.com 0.0.0.0 www.homewares.org 0.0.0.0 www.howtotroll.org 0.0.0.0 www.japscat.org 0.0.0.0 www.jiztini.com 0.0.0.0 www.junecleeland.com 0.0.0.0 www.kids-in-sandbox.com 0.0.0.0 www.kidsinsandbox.info 0.0.0.0 www.lemonparty.biz 0.0.0.0 www.lemonparty.org 0.0.0.0 www.lolhello.com 0.0.0.0 www.loltrain.com 0.0.0.0 www.meatspin.biz 0.0.0.0 www.meatspin.com 0.0.0.0 www.merryholidays.org 0.0.0.0 www.milkfountain.com 0.0.0.0 www.mudfall.com 0.0.0.0 www.mudmonster.org 0.0.0.0 www.nimp.org 0.0.0.0 www.nobrain.dk 0.0.0.0 www.nutabuse.com 0.0.0.0 www.octopusgirl.com 0.0.0.0 www.on.nimp.org 0.0.0.0 www.painolympics.info 0.0.0.0 www.phonejapan.com 0.0.0.0 www.pressurespot.com 0.0.0.0 www.prolapseman.com 0.0.0.0 www.punishtube.com 0.0.0.0 www.scrollbelow.com 0.0.0.0 www.selfpwn.org 0.0.0.0 www.sourmath.com 0.0.0.0 www.suckdude.com 0.0.0.0 www.thatsjustgay.com 0.0.0.0 www.thatsphucked.com 0.0.0.0 www.theexgirlfriends.com 0.0.0.0 www.thehomo.org 0.0.0.0 www.themacuser.org 0.0.0.0 www.thepounder.com 0.0.0.0 www.tubgirl.me 0.0.0.0 www.tubgirl.org 0.0.0.0 www.turdgasm.com 0.0.0.0 www.vomitgirl.org 0.0.0.0 www.walkthedinosaur.com 0.0.0.0 www.whipcrack.org 0.0.0.0 www.wormgush.com 0.0.0.0 www.xvideoslive.com 0.0.0.0 www.y8.com 0.0.0.0 www.youaresogay.com 0.0.0.0 www.ypmate.com 0.0.0.0 www.zentastic.com 0.0.0.0 youaresogay.com 0.0.0.0 zentastic.com # 0.0.0.0 ads234.com 0.0.0.0 ads345.com 0.0.0.0 www.ads234.com 0.0.0.0 www.ads345.com # # # 
0.0.0.0 auto.search.msn.com # Microsoft uses this server to redirect # mistyped URLs to search engines. They # log all such errors. 0.0.0.0 sitefinder.verisign.com # Verisign has joined the game 0.0.0.0 sitefinder-idn.verisign.com # of trying to hijack mistyped # URLs to their site. # May break iOS Game Center. 0.0.0.0 s0.2mdn.net # This may interfere with some streaming # video on sites such as cbc.ca 0.0.0.0 ad.doubleclick.net # This may interefere with www.sears.com # and potentially other sites. 0.0.0.0 media.fastclick.net # Likewise, this may interfere with some 0.0.0.0 cdn.fastclick.net # sites. 0.0.0.0 ebay.doubleclick.net # may interfere with ebay #0.0.0.0 google-analytics.com # breaks some sites #0.0.0.0 ssl.google-analytics.com #0.0.0.0 www.google-analytics.l.google.com 0.0.0.0 stat.livejournal.com # There are reports that this may mess # up CSS on livejournal 0.0.0.0 stats.surfaid.ihost.com # This has been known cause # problems with NPR.org 0.0.0.0 www.google-analytics.com # breaks some sites 0.0.0.0 ads.imeem.com # Seems to interfere with the functioning of imeem.com # 0.0.0.0 006.free-counter.co.uk 0.0.0.0 006.freecounters.co.uk 0.0.0.0 06272002-dbase.hitcountz.net # Web bugs in spam 0.0.0.0 123counter.mycomputer.com 0.0.0.0 123counter.superstats.com 0.0.0.0 1ca.cqcounter.com 0.0.0.0 1uk.cqcounter.com 0.0.0.0 1us.cqcounter.com 0.0.0.0 1xxx.cqcounter.com 0.0.0.0 2001-007.com 0.0.0.0 3bc3fd26-91cf-46b2-8ec6-b1559ada0079.statcamp.net 0.0.0.0 3ps.go.com 0.0.0.0 4-counter.com 0.0.0.0 a796faee-7163-4757-a34f-e5b48cada4cb.statcamp.net 0.0.0.0 abscbn.spinbox.net 0.0.0.0 activity.serving-sys.com #eyeblaster.com 0.0.0.0 adadvisor.net 0.0.0.0 adclient.rottentomatoes.com 0.0.0.0 adcodes.aim4media.com 0.0.0.0 adcounter.globeandmail.com 0.0.0.0 adcounter.theglobeandmail.com 0.0.0.0 addfreestats.com 0.0.0.0 ademails.com 0.0.0.0 adlog.com.com # Used by Ziff Davis to serve # ads and track users across # the com.com family of sites 0.0.0.0 ad-logics.com 0.0.0.0 admanmail.com 0.0.0.0 adopt.specificclick.net 0.0.0.0 ads.tiscali.com 0.0.0.0 ads.tiscali.it 0.0.0.0 adult.foxcounter.com 0.0.0.0 affiliate.ab1trk.com 0.0.0.0 affiliate.irotracker.com 0.0.0.0 ai062.insightexpress.com 0.0.0.0 ai078.insightexpressai.com 0.0.0.0 ai087.insightexpress.com 0.0.0.0 ai113.insightexpressai.com 0.0.0.0 ai125.insightexpressai.com 0.0.0.0 alpha.easy-hit-counters.com 0.0.0.0 amateur.xxxcounter.com 0.0.0.0 amer.hops.glbdns.microsoft.com 0.0.0.0 amer.rel.msn.com 0.0.0.0 analytics.msnbc.msn.com 0.0.0.0 analytics.prx.org 0.0.0.0 anm.intelli-direct.com 0.0.0.0 ant.conversive.nl 0.0.0.0 apac.rel.msn.com 0.0.0.0 api.bizographics.com 0.0.0.0 apprep.smartscreen.microsoft.com 0.0.0.0 app.yesware.com 0.0.0.0 arbo.hit.gemius.pl 0.0.0.0 au052.insightexpress.com 0.0.0.0 auspice.augur.io 0.0.0.0 au.track.decideinteractive.com 0.0.0.0 a.visualrevenue.com 0.0.0.0 banner.0catch.com 0.0.0.0 banners.webcounter.com 0.0.0.0 beacon-1.newrelic.com 0.0.0.0 beacon.scorecardresearch.com 0.0.0.0 beacons.hottraffic.nl 0.0.0.0 be.sitestat.com 0.0.0.0 best-search.cc #spyware 0.0.0.0 beta.easy-hit-counter.com 0.0.0.0 beta.easy-hit-counters.com 0.0.0.0 beta.easyhitcounters.com 0.0.0.0 bilbo.counted.com 0.0.0.0 bin.clearspring.com 0.0.0.0 birta.stats.is 0.0.0.0 bluekai.com 0.0.0.0 bluestreak.com 0.0.0.0 bookproplus.com 0.0.0.0 broadcastpc.tv 0.0.0.0 report.broadcastpc.tv 0.0.0.0 www.broadcastpc.tv 0.0.0.0 bserver.blick.com 0.0.0.0 bstats.adbrite.com 0.0.0.0 b.stats.paypal.com 0.0.0.0 by.optimost.com 0.0.0.0 c10.statcounter.com 0.0.0.0 
c11.statcounter.com 0.0.0.0 c12.statcounter.com 0.0.0.0 c13.statcounter.com 0.0.0.0 c14.statcounter.com 0.0.0.0 c15.statcounter.com 0.0.0.0 c16.statcounter.com 0.0.0.0 c17.statcounter.com 0.0.0.0 c1.statcounter.com 0.0.0.0 c1.thecounter.com 0.0.0.0 c1.thecounter.de 0.0.0.0 c1.xxxcounter.com 0.0.0.0 c2.gostats.com 0.0.0.0 c2.thecounter.com 0.0.0.0 c2.thecounter.de 0.0.0.0 c2.xxxcounter.com 0.0.0.0 c3.gostats.com 0.0.0.0 c3.statcounter.com 0.0.0.0 c3.thecounter.com 0.0.0.0 c3.xxxcounter.com 0.0.0.0 c4.myway.com 0.0.0.0 c4.statcounter.com 0.0.0.0 c5.statcounter.com 0.0.0.0 c6.statcounter.com 0.0.0.0 c7.statcounter.com 0.0.0.0 c8.statcounter.com 0.0.0.0 c9.statcounter.com 0.0.0.0 ca.cqcounter.com 0.0.0.0 cashcounter.com 0.0.0.0 cb1.counterbot.com 0.0.0.0 cdn.krxd.net 0.0.0.0 cdn.oggifinogi.com 0.0.0.0 cdn.taboolasyndication.com 0.0.0.0 cdxbin.vulnerap.com 0.0.0.0 cf.addthis.com 0.0.0.0 cgicounter.onlinehome.de 0.0.0.0 cgicounter.puretec.de 0.0.0.0 cgi.hotstat.nl 0.0.0.0 cgi.sexlist.com 0.0.0.0 ci-mpsnare.iovation.com # See http://www.codingthewheel.com/archives/online-gambling-privacy-iesnare 0.0.0.0 citrix.tradedoubler.com 0.0.0.0 cjt1.net 0.0.0.0 click.atdmt.com 0.0.0.0 clickauditor.net 0.0.0.0 click.fivemtn.com 0.0.0.0 click.investopedia.com 0.0.0.0 click.jve.net 0.0.0.0 clickmeter.com 0.0.0.0 click.payserve.com 0.0.0.0 clicks.emarketmakers.com 0.0.0.0 click.silvercash.com 0.0.0.0 clicks.m4n.nl 0.0.0.0 clicks.natwest.com 0.0.0.0 clickspring.net #used by a spyware product called PurityScan 0.0.0.0 clicks.rbs.co.uk 0.0.0.0 clicktrack.onlineemailmarketing.com 0.0.0.0 clicktracks.webmetro.com 0.0.0.0 clit10.sextracker.com 0.0.0.0 clit13.sextracker.com 0.0.0.0 clit15.sextracker.com 0.0.0.0 clit2.sextracker.com 0.0.0.0 clit4.sextracker.com 0.0.0.0 clit6.sextracker.com 0.0.0.0 clit7.sextracker.com 0.0.0.0 clit8.sextracker.com 0.0.0.0 clit9.sextracker.com 0.0.0.0 clk.aboxdeal.com 0.0.0.0 clk.relestar.com 0.0.0.0 cnn.entertainment.printthis.clickability.com 0.0.0.0 cnt.xcounter.com 0.0.0.0 collector.deepmetrix.com 0.0.0.0 collector.newsx.cc 0.0.0.0 connectionlead.com 0.0.0.0 connexity.net 0.0.0.0 cookies.cmpnet.com 0.0.0.0 count.channeladvisor.com 0.0.0.0 counter10.bravenet.com 0.0.0.0 counter10.sextracker.be 0.0.0.0 counter10.sextracker.com 0.0.0.0 counter11.bravenet.com 0.0.0.0 counter11.sextracker.be 0.0.0.0 counter11.sextracker.com 0.0.0.0 counter.123counts.com 0.0.0.0 counter12.bravenet.com 0.0.0.0 counter12.sextracker.be 0.0.0.0 counter12.sextracker.com 0.0.0.0 counter13.bravenet.com 0.0.0.0 counter13.sextracker.be 0.0.0.0 counter13.sextracker.com 0.0.0.0 counter14.bravenet.com 0.0.0.0 counter14.sextracker.be 0.0.0.0 counter14.sextracker.com 0.0.0.0 counter15.bravenet.com 0.0.0.0 counter15.sextracker.be 0.0.0.0 counter15.sextracker.com 0.0.0.0 counter16.bravenet.com 0.0.0.0 counter16.sextracker.be 0.0.0.0 counter16.sextracker.com 0.0.0.0 counter17.bravenet.com 0.0.0.0 counter18.bravenet.com 0.0.0.0 counter19.bravenet.com 0.0.0.0 counter1.bravenet.com 0.0.0.0 counter1.sextracker.be 0.0.0.0 counter1.sextracker.com 0.0.0.0 counter.1stblaze.com 0.0.0.0 counter20.bravenet.com 0.0.0.0 counter21.bravenet.com 0.0.0.0 counter22.bravenet.com 0.0.0.0 counter23.bravenet.com 0.0.0.0 counter24.bravenet.com 0.0.0.0 counter25.bravenet.com 0.0.0.0 counter26.bravenet.com 0.0.0.0 counter27.bravenet.com 0.0.0.0 counter28.bravenet.com 0.0.0.0 counter29.bravenet.com 0.0.0.0 counter2.bravenet.com 0.0.0.0 counter2.freeware.de 0.0.0.0 counter2.hitslink.com 0.0.0.0 counter2.sextracker.be 0.0.0.0 counter2.sextracker.com 
0.0.0.0 counter30.bravenet.com 0.0.0.0 counter31.bravenet.com 0.0.0.0 counter32.bravenet.com 0.0.0.0 counter33.bravenet.com 0.0.0.0 counter34.bravenet.com 0.0.0.0 counter35.bravenet.com 0.0.0.0 counter36.bravenet.com 0.0.0.0 counter37.bravenet.com 0.0.0.0 counter38.bravenet.com 0.0.0.0 counter39.bravenet.com 0.0.0.0 counter3.bravenet.com 0.0.0.0 counter3.sextracker.be 0.0.0.0 counter3.sextracker.com 0.0.0.0 counter40.bravenet.com 0.0.0.0 counter41.bravenet.com 0.0.0.0 counter42.bravenet.com 0.0.0.0 counter43.bravenet.com 0.0.0.0 counter44.bravenet.com 0.0.0.0 counter45.bravenet.com 0.0.0.0 counter46.bravenet.com 0.0.0.0 counter47.bravenet.com 0.0.0.0 counter48.bravenet.com 0.0.0.0 counter49.bravenet.com 0.0.0.0 counter4all.dk 0.0.0.0 counter4.bravenet.com 0.0.0.0 counter4.sextracker.be 0.0.0.0 counter4.sextracker.com 0.0.0.0 counter4u.de 0.0.0.0 counter50.bravenet.com 0.0.0.0 counter5.bravenet.com 0.0.0.0 counter5.sextracker.be 0.0.0.0 counter5.sextracker.com 0.0.0.0 counter6.bravenet.com 0.0.0.0 counter6.sextracker.be 0.0.0.0 counter6.sextracker.com 0.0.0.0 counter7.bravenet.com 0.0.0.0 counter7.sextracker.be 0.0.0.0 counter7.sextracker.com 0.0.0.0 counter8.bravenet.com 0.0.0.0 counter8.sextracker.be 0.0.0.0 counter8.sextracker.com 0.0.0.0 counter9.bravenet.com 0.0.0.0 counter9.sextracker.be 0.0.0.0 counter9.sextracker.com 0.0.0.0 counter.aaddzz.com 0.0.0.0 counterad.de 0.0.0.0 counter.adultcheck.com 0.0.0.0 counter.adultrevenueservice.com 0.0.0.0 counter.advancewebhosting.com 0.0.0.0 counter.aport.ru 0.0.0.0 counteraport.spylog.com 0.0.0.0 counter.asexhound.com 0.0.0.0 counter.avp2000.com 0.0.0.0 counter.bizland.com 0.0.0.0 counter.bloke.com 0.0.0.0 counterbot.com 0.0.0.0 counter.clubnet.ro 0.0.0.0 counter.cnw.cz 0.0.0.0 countercrazy.com 0.0.0.0 counter.credo.ru 0.0.0.0 counter.cz 0.0.0.0 counter.digits.com 0.0.0.0 counter.dreamhost.com 0.0.0.0 counter.e-audit.it 0.0.0.0 counter.execpc.com 0.0.0.0 counter.fateback.com 0.0.0.0 counter.gamespy.com 0.0.0.0 counter.hitslink.com 0.0.0.0 counter.hitslinks.com 0.0.0.0 counter.htmlvalidator.com 0.0.0.0 counter.impressur.com 0.0.0.0 counter.inetusa.com 0.0.0.0 counter.inti.fr 0.0.0.0 counter.kaspersky.com 0.0.0.0 counter.letssingit.com 0.0.0.0 counter.mtree.com 0.0.0.0 counter.mycomputer.com 0.0.0.0 counter.netmore.net 0.0.0.0 counter.nope.dk 0.0.0.0 counter.nowlinux.com 0.0.0.0 counter.pcgames.de 0.0.0.0 counter.rambler.ru 0.0.0.0 counters.auctionhelper.com # comment these 0.0.0.0 counters.auctionwatch.com # out to allow 0.0.0.0 counters.auctiva.com # tracking by 0.0.0.0 counters.honesty.com # ebay users 0.0.0.0 counter.search.bg 0.0.0.0 counter.sexhound.nl 0.0.0.0 counters.gigya.com 0.0.0.0 counter.sparklit.com 0.0.0.0 counter.superstats.com 0.0.0.0 counter.surfcounters.com 0.0.0.0 counters.xaraonline.com 0.0.0.0 counter.times.lv 0.0.0.0 counter.topping.com.ua 0.0.0.0 counter.tripod.com 0.0.0.0 counter.uq.edu.au 0.0.0.0 counter.w3open.com 0.0.0.0 counter.webcom.com 0.0.0.0 counter.webmedia.pl 0.0.0.0 counter.webtrends.com 0.0.0.0 counter.webtrends.net 0.0.0.0 counter.xxxcool.com 0.0.0.0 counter.yadro.ru 0.0.0.0 count.paycounter.com 0.0.0.0 count.xhit.com 0.0.0.0 cs.sexcounter.com 0.0.0.0 c.statcounter.com 0.0.0.0 c.thecounter.de 0.0.0.0 cw.nu 0.0.0.0 cyseal.cyveillance.com 0.0.0.0 cz3.clickzs.com 0.0.0.0 cz6.clickzs.com 0.0.0.0 da.ce.bd.a9.top.list.ru 0.0.0.0 da.newstogram.com 0.0.0.0 data2.perf.overture.com 0.0.0.0 data.coremetrics.com 0.0.0.0 data.webads.co.nz 0.0.0.0 dclk.haaretz.co.il 0.0.0.0 dclk.themarker.com 0.0.0.0 dclk.themarketer.com 
0.0.0.0 delivery.loopingclick.com 0.0.0.0 de.sitestat.com 0.0.0.0 didtheyreadit.com # email bugs 0.0.0.0 digistats.westjet.com 0.0.0.0 dimeprice.com # "spam bugs" 0.0.0.0 directads.mcafee.com 0.0.0.0 dotcomsecrets.com 0.0.0.0 dpbolvw.net 0.0.0.0 ds.247realmedia.com 0.0.0.0 ds.amateurmatch.com 0.0.0.0 dwclick.com 0.0.0.0 e-2dj6wfk4ehd5afq.stats.esomniture.com 0.0.0.0 e-2dj6wfk4ggdzkbo.stats.esomniture.com 0.0.0.0 e-2dj6wfk4gkcpiep.stats.esomniture.com 0.0.0.0 e-2dj6wfk4skdpogo.stats.esomniture.com 0.0.0.0 e-2dj6wfkiakdjgcp.stats.esomniture.com 0.0.0.0 e-2dj6wfkiepczoeo.stats.esomniture.com 0.0.0.0 e-2dj6wfkikjd5glq.stats.esomniture.com 0.0.0.0 e-2dj6wfkiokc5odp.stats.esomniture.com 0.0.0.0 e-2dj6wfkiqjcpifp.stats.esomniture.com 0.0.0.0 e-2dj6wfkocjczedo.stats.esomniture.com 0.0.0.0 e-2dj6wfkokjajseq.stats.esomniture.com 0.0.0.0 e-2dj6wfkowkdjokp.stats.esomniture.com 0.0.0.0 e-2dj6wfkykpazskq.stats.esomniture.com 0.0.0.0 e-2dj6wflicocjklo.stats.esomniture.com 0.0.0.0 e-2dj6wfligpd5iap.stats.esomniture.com 0.0.0.0 e-2dj6wflikgdpodo.stats.esomniture.com 0.0.0.0 e-2dj6wflikiajslo.stats.esomniture.com 0.0.0.0 e-2dj6wflioldzoco.stats.esomniture.com 0.0.0.0 e-2dj6wfliwpczolp.stats.esomniture.com 0.0.0.0 e-2dj6wfloenczmkq.stats.esomniture.com 0.0.0.0 e-2dj6wflokmajedo.stats.esomniture.com 0.0.0.0 e-2dj6wfloqgc5mho.stats.esomniture.com 0.0.0.0 e-2dj6wfmysgdzobo.stats.esomniture.com 0.0.0.0 e-2dj6wgkigpcjedo.stats.esomniture.com 0.0.0.0 e-2dj6wgkisnd5abo.stats.esomniture.com 0.0.0.0 e-2dj6wgkoandzieq.stats.esomniture.com 0.0.0.0 e-2dj6wgkycpcpsgq.stats.esomniture.com 0.0.0.0 e-2dj6wgkyepajmeo.stats.esomniture.com 0.0.0.0 e-2dj6wgkyknd5sko.stats.esomniture.com 0.0.0.0 e-2dj6wgkyomdpalp.stats.esomniture.com 0.0.0.0 e-2dj6whkiandzkko.stats.esomniture.com 0.0.0.0 e-2dj6whkiepd5iho.stats.esomniture.com 0.0.0.0 e-2dj6whkiwjdjwhq.stats.esomniture.com 0.0.0.0 e-2dj6wjk4amd5mfp.stats.esomniture.com 0.0.0.0 e-2dj6wjk4kkcjalp.stats.esomniture.com 0.0.0.0 e-2dj6wjk4ukazebo.stats.esomniture.com 0.0.0.0 e-2dj6wjkosodpmaq.stats.esomniture.com 0.0.0.0 e-2dj6wjkouhd5eao.stats.esomniture.com 0.0.0.0 e-2dj6wjkowhd5ggo.stats.esomniture.com 0.0.0.0 e-2dj6wjkowjajcbo.stats.esomniture.com 0.0.0.0 e-2dj6wjkyandpogq.stats.esomniture.com 0.0.0.0 e-2dj6wjkycpdzckp.stats.esomniture.com 0.0.0.0 e-2dj6wjkyqmdzcgo.stats.esomniture.com 0.0.0.0 e-2dj6wjkysndzigp.stats.esomniture.com 0.0.0.0 e-2dj6wjl4qhd5kdo.stats.esomniture.com 0.0.0.0 e-2dj6wjlichdjoep.stats.esomniture.com 0.0.0.0 e-2dj6wjliehcjglp.stats.esomniture.com 0.0.0.0 e-2dj6wjlignajgaq.stats.esomniture.com 0.0.0.0 e-2dj6wjloagc5oco.stats.esomniture.com 0.0.0.0 e-2dj6wjlougazmao.stats.esomniture.com 0.0.0.0 e-2dj6wjlyamdpogo.stats.esomniture.com 0.0.0.0 e-2dj6wjlyckcpelq.stats.esomniture.com 0.0.0.0 e-2dj6wjlyeodjkcq.stats.esomniture.com 0.0.0.0 e-2dj6wjlygkd5ecq.stats.esomniture.com 0.0.0.0 e-2dj6wjmiekc5olo.stats.esomniture.com 0.0.0.0 e-2dj6wjmyehd5mfo.stats.esomniture.com 0.0.0.0 e-2dj6wjmyooczoeo.stats.esomniture.com 0.0.0.0 e-2dj6wjny-1idzkh.stats.esomniture.com 0.0.0.0 e-2dj6wjnyagcpkko.stats.esomniture.com 0.0.0.0 e-2dj6wjnyeocpcdo.stats.esomniture.com 0.0.0.0 e-2dj6wjnygidjskq.stats.esomniture.com 0.0.0.0 e-2dj6wjnyqkajabp.stats.esomniture.com 0.0.0.0 easy-web-stats.com 0.0.0.0 ecestats.theglobeandmail.com 0.0.0.0 economisttestcollect.insightfirst.com 0.0.0.0 ehg.fedex.com 0.0.0.0 eitbglobal.ojdinteractiva.com 0.0.0.0 emea.rel.msn.com 0.0.0.0 engine.cmmeglobal.com 0.0.0.0 enoratraffic.com 0.0.0.0 entry-stats.huffingtonpost.com 0.0.0.0 
environmentalgraffiti.uk.intellitxt.com 0.0.0.0 e-n.y-1shz2prbmdj6wvny-1sez2pra2dj6wjmyepdzadpwudj6x9ny-1seq-2-2.stats.esomniture.com 0.0.0.0 e-ny.a-1shz2prbmdj6wvny-1sez2pra2dj6wjny-1jcpgbowsdj6x9ny-1seq-2-2.stats.esomniture.com 0.0.0.0 es.optimost.com 0.0.0.0 fastcounter.bcentral.com 0.0.0.0 fastcounter.com 0.0.0.0 fastcounter.linkexchange.com 0.0.0.0 fastcounter.linkexchange.net 0.0.0.0 fastcounter.linkexchange.nl 0.0.0.0 fastcounter.onlinehoster.net 0.0.0.0 fastwebcounter.com 0.0.0.0 fcstats.bcentral.com 0.0.0.0 fi.sitestat.com 0.0.0.0 fl01.ct2.comclick.com 0.0.0.0 flycast.com 0.0.0.0 forbescollect.247realmedia.com 0.0.0.0 formalyzer.com 0.0.0.0 foxcounter.com 0.0.0.0 free-counter.5u.com 0.0.0.0 freeinvisiblecounters.com 0.0.0.0 freestats.com 0.0.0.0 freewebcounter.com 0.0.0.0 free.xxxcounter.com 0.0.0.0 fs10.fusestats.com 0.0.0.0 ft2.autonomycloud.com 0.0.0.0 gapl.hit.gemius.pl 0.0.0.0 gator.com 0.0.0.0 gcounter.hosting4u.net 0.0.0.0 gd.mlb.com 0.0.0.0 geocounter.net 0.0.0.0 gkkzngresullts.com 0.0.0.0 go-in-search.net 0.0.0.0 goldstats.com 0.0.0.0 googfle.com 0.0.0.0 googletagservices.com 0.0.0.0 gostats.com 0.0.0.0 grafix.xxxcounter.com 0.0.0.0 gtcc1.acecounter.com 0.0.0.0 g-wizzads.net 0.0.0.0 hc2.humanclick.com 0.0.0.0 hit10.hotlog.ru 0.0.0.0 hit2.hotlog.ru 0.0.0.0 hit37.chark.dk 0.0.0.0 hit37.chart.dk 0.0.0.0 hit39.chart.dk 0.0.0.0 hit5.hotlog.ru 0.0.0.0 hit8.hotlog.ru 0.0.0.0 hit.clickaider.com 0.0.0.0 hit-counter.5u.com 0.0.0.0 hit-counter.udub.com 0.0.0.0 hits.guardian.co.uk 0.0.0.0 hits.gureport.co.uk 0.0.0.0 hits.nextstat.com 0.0.0.0 hits.webstat.com 0.0.0.0 hitx.statistics.ro 0.0.0.0 hst.tradedoubler.com 0.0.0.0 htm.freelogs.com 0.0.0.0 http300.edge.ru4.com 0.0.0.0 iccee.com 0.0.0.0 idm.hit.gemius.pl 0.0.0.0 ieplugin.com 0.0.0.0 iesnare.com # See http://www.codingthewheel.com/archives/online-gambling-privacy-iesnare 0.0.0.0 ig.insightgrit.com 0.0.0.0 ih.constantcontacts.com 0.0.0.0 i.kissmetrics.com # http://www.wired.com/epicenter/2011/07/undeletable-cookie/ 0.0.0.0 ilead.itrack.it 0.0.0.0 image.masterstats.com 0.0.0.0 images1.paycounter.com 0.0.0.0 images-aud.freshmeat.net 0.0.0.0 images-aud.slashdot.org 0.0.0.0 images-aud.sourceforge.net 0.0.0.0 images.dailydiscounts.com # "spam bugs" 0.0.0.0 images.itchydawg.com 0.0.0.0 impacts.alliancehub.com # "spam bugs" 0.0.0.0 impch.tradedoubler.com 0.0.0.0 imp.clickability.com 0.0.0.0 impde.tradedoubler.com 0.0.0.0 impdk.tradedoubler.com 0.0.0.0 impes.tradedoubler.com 0.0.0.0 impfr.tradedoubler.com 0.0.0.0 impgb.tradedoubler.com 0.0.0.0 impie.tradedoubler.com 0.0.0.0 impit.tradedouble.com 0.0.0.0 impit.tradedoubler.com 0.0.0.0 impnl.tradedoubler.com 0.0.0.0 impno.tradedoubler.com 0.0.0.0 impse.tradedoubler.com 0.0.0.0 in.paycounter.com 0.0.0.0 insightfirst.com 0.0.0.0 insightxe.looksmart.com 0.0.0.0 int.sitestat.com 0.0.0.0 in.webcounter.cc 0.0.0.0 iprocollect.realmedia.com 0.0.0.0 izitracking.izimailing.com 0.0.0.0 jgoyk.cjt1.net 0.0.0.0 jkearns.freestats.com 0.0.0.0 journalism.uk.smarttargetting.com 0.0.0.0 js.cybermonitor.com 0.0.0.0 jsonlinecollect.247realmedia.com 0.0.0.0 js.revsci.net 0.0.0.0 kissmetrics.com 0.0.0.0 kqzyfj.com 0.0.0.0 kt4.kliptracker.com 0.0.0.0 leadpub.com 0.0.0.0 liapentruromania.ro 0.0.0.0 lin31.metriweb.be 0.0.0.0 linkcounter.com 0.0.0.0 linkcounter.pornosite.com 0.0.0.0 link.masterstats.com 0.0.0.0 linktrack.bravenet.com 0.0.0.0 livestats.atlanta-airport.com #0.0.0.0 ll.a.hulu.com # Uncomment to block Hulu. 
0.0.0.0 loc1.hitsprocessor.com 0.0.0.0 log1.countomat.com 0.0.0.0 log4.quintelligence.com 0.0.0.0 log999.goo.ne.jp 0.0.0.0 loga.xiti.com 0.0.0.0 log.btopenworld.com 0.0.0.0 logc146.xiti.com 0.0.0.0 logc1.xiti.com 0.0.0.0 logc22.xiti.com 0.0.0.0 logc25.xiti.com 0.0.0.0 logc31.xiti.com 0.0.0.0 log.clickstream.co.za 0.0.0.0 log.hankooki.com 0.0.0.0 logi6.xiti.com 0.0.0.0 logi7.xiti.com 0.0.0.0 logi8.xiti.com 0.0.0.0 logp3.xiti.com 0.0.0.0 logs.comics.com 0.0.0.0 logs.eresmas.com 0.0.0.0 logs.eresmas.net 0.0.0.0 log.statistici.ro 0.0.0.0 logv14.xiti.com 0.0.0.0 logv17.xiti.com 0.0.0.0 logv18.xiti.com 0.0.0.0 logv21.xiti.com 0.0.0.0 logv25.xiti.com 0.0.0.0 logv27.xiti.com 0.0.0.0 logv29.xiti.com 0.0.0.0 logv32.xiti.com 0.0.0.0 logv4.xiti.com 0.0.0.0 logv.xiti.com 0.0.0.0 luycos.com 0.0.0.0 lycoscollect.247realmedia.com 0.0.0.0 lycoscollect.realmedia.com 0.0.0.0 m1.nedstatbasic.net 0.0.0.0 m1.webstats4u.com 0.0.0.0 mailcheckisp.biz # "spam bugs" 0.0.0.0 mama128.valuehost.ru 0.0.0.0 marketscore.com 0.0.0.0 mature.xxxcounter.com 0.0.0.0 mbox5.offermatica.com 0.0.0.0 media101.sitebrand.com 0.0.0.0 media.superstats.com 0.0.0.0 mediatrack.revenue.net 0.0.0.0 metric.10best.com 0.0.0.0 metric.infoworld.com 0.0.0.0 metric.nationalgeographic.com 0.0.0.0 metric.nwsource.com 0.0.0.0 metric.olivegarden.com 0.0.0.0 metrics2.pricegrabber.com 0.0.0.0 metrics.accuweather.com 0.0.0.0 metrics.al.com 0.0.0.0 metrics.boston.com 0.0.0.0 metrics.cbc.ca 0.0.0.0 metrics.cleveland.com 0.0.0.0 metrics.cnn.com 0.0.0.0 metrics.csmonitor.com 0.0.0.0 metrics.ctv.ca 0.0.0.0 metrics.dallasnews.com 0.0.0.0 metrics.elle.com 0.0.0.0 metrics.experts-exchange.com 0.0.0.0 metrics.fandome.com 0.0.0.0 metrics.foxnews.com 0.0.0.0 metrics.gap.com 0.0.0.0 metrics.health.com 0.0.0.0 metrics.hrblock.com 0.0.0.0 metrics.ioffer.com 0.0.0.0 metrics.ireport.com 0.0.0.0 metrics.kgw.com 0.0.0.0 metrics.ktvb.com 0.0.0.0 metrics.landolakes.com 0.0.0.0 metrics.lhj.com 0.0.0.0 metrics.maxim.com 0.0.0.0 metrics.mlive.com 0.0.0.0 metrics.mms.mavenapps.net 0.0.0.0 metrics.mpora.com 0.0.0.0 metrics.mysanantonio.com 0.0.0.0 metrics.nba.com 0.0.0.0 metrics.nextgov.com 0.0.0.0 metrics.nfl.com 0.0.0.0 metrics.npr.org 0.0.0.0 metrics.oclc.org 0.0.0.0 metrics.olivegarden.com 0.0.0.0 metrics.oregonlive.com 0.0.0.0 metrics.parallels.com 0.0.0.0 metrics.performancing.com 0.0.0.0 metrics.philly.com 0.0.0.0 metrics.post-gazette.com 0.0.0.0 metrics.premiere.com 0.0.0.0 metrics.rottentomatoes.com 0.0.0.0 metrics.sephora.com 0.0.0.0 metrics.soundandvision.com 0.0.0.0 metrics.soundandvisionmag.com 0.0.0.0 metrics.sun.com 0.0.0.0 metric.starz.com 0.0.0.0 metrics.technologyreview.com 0.0.0.0 metrics.theatlantic.com 0.0.0.0 metrics.thedailybeast.com 0.0.0.0 metrics.thefa.com 0.0.0.0 metrics.thefrisky.com 0.0.0.0 metrics.thenation.com 0.0.0.0 metrics.theweathernetwork.com #0.0.0.0 metrics.ticketmaster.com # interferes with logging in to ticketmaster.com 0.0.0.0 metrics.tmz.com 0.0.0.0 metrics.toyota.com 0.0.0.0 metrics.tulsaworld.com 0.0.0.0 metrics.washingtonpost.com 0.0.0.0 metrics.whitepages.com 0.0.0.0 metrics.womansday.com 0.0.0.0 metrics.yellowpages.com 0.0.0.0 metrics.yousendit.com 0.0.0.0 metric.thenation.com 0.0.0.0 mng1.clickalyzer.com 0.0.0.0 monster.gostats.com 0.0.0.0 mpsnare.iesnare.com # See http://www.codingthewheel.com/archives/online-gambling-privacy-iesnare 0.0.0.0 msn1.com 0.0.0.0 msnm.com 0.0.0.0 mt122.mtree.com 0.0.0.0 mtcount.channeladvisor.com 0.0.0.0 mtrcs.popcap.com 0.0.0.0 mtv.247realmedia.com 0.0.0.0 multi1.rmuk.co.uk 0.0.0.0 
mvs.mediavantage.de 0.0.0.0 mvtracker.com 0.0.0.0 mystats.com 0.0.0.0 nedstat.s0.nl 0.0.0.0 nethit-free.nl 0.0.0.0 net-radar.com 0.0.0.0 network.leadpub.com 0.0.0.0 nextgenstats.com 0.0.0.0 nht-2.extreme-dm.com 0.0.0.0 nl.nedstatbasic.net 0.0.0.0 nl.sitestat.com 0.0.0.0 o.addthis.com 0.0.0.0 objects.tremormedia.com 0.0.0.0 okcounter.com 0.0.0.0 omniture.theglobeandmail.com 0.0.0.0 one.123counters.com 0.0.0.0 oss-crules.marketscore.com 0.0.0.0 oss-survey.marketscore.com 0.0.0.0 ostats.mozilla.com 0.0.0.0 other.xxxcounter.com 0.0.0.0 out.true-counter.com 0.0.0.0 p.addthis.com 0.0.0.0 partner.alerts.aol.com 0.0.0.0 partners.pantheranetwork.com 0.0.0.0 passpport.com 0.0.0.0 paxito.sitetracker.com 0.0.0.0 paycounter.com 0.0.0.0 pei-ads.thesmokingjacket.com 0.0.0.0 perso.estat.com 0.0.0.0 pf.tradedoubler.com 0.0.0.0 pings.blip.tv 0.0.0.0 pix02.revsci.net 0.0.0.0 pix03.revsci.net 0.0.0.0 pix04.revsci.net 0.0.0.0 pixel.invitemedia.com 0.0.0.0 pmg.ad-logics.com 0.0.0.0 pn2.adserver.yahoo.com 0.0.0.0 pointclicktrack.com 0.0.0.0 pong.qubitproducts.com 0.0.0.0 postclick.adcentriconline.com 0.0.0.0 postgazettecollect.247realmedia.com 0.0.0.0 precisioncounter.com 0.0.0.0 p.reuters.com 0.0.0.0 printmail.biz 0.0.0.0 prof.estat.com 0.0.0.0 pro.hit.gemius.pl 0.0.0.0 proxycfg.marketscore.com 0.0.0.0 proxy.ia2.marketscore.com 0.0.0.0 proxy.ia3.marketscore.com 0.0.0.0 proxy.ia4.marketscore.com 0.0.0.0 proxy.or3.marketscore.com 0.0.0.0 proxy.or4.marketscore.com 0.0.0.0 proxy.sj3.marketscore.com 0.0.0.0 proxy.sj4.marketscore.com 0.0.0.0 quantserve.com #: Ad Tracking, JavaScript, etc. 0.0.0.0 quareclk.com 0.0.0.0 raw.oggifinogi.com 0.0.0.0 r.clickdensity.com 0.0.0.0 remotrk.com 0.0.0.0 rightmedia.net 0.0.0.0 rightstats.com 0.0.0.0 roskatrack.roskadirect.com 0.0.0.0 rr1.xxxcounter.com 0.0.0.0 rr2.xxxcounter.com 0.0.0.0 rr3.xxxcounter.com 0.0.0.0 rr4.xxxcounter.com 0.0.0.0 rr5.xxxcounter.com 0.0.0.0 rr7.xxxcounter.com 0.0.0.0 rts.pgmediaserve.com 0.0.0.0 rts.phn.doublepimp.com 0.0.0.0 s10.histats.com 0.0.0.0 s10.sitemeter.com 0.0.0.0 s11.sitemeter.com 0.0.0.0 s12.sitemeter.com 0.0.0.0 s13.sitemeter.com 0.0.0.0 s14.sitemeter.com 0.0.0.0 s15.sitemeter.com 0.0.0.0 s16.sitemeter.com 0.0.0.0 s17.sitemeter.com 0.0.0.0 s18.sitemeter.com 0.0.0.0 s19.sitemeter.com 0.0.0.0 s1.shinystat.it 0.0.0.0 s1.thecounter.com 0.0.0.0 s20.sitemeter.com 0.0.0.0 s21.sitemeter.com 0.0.0.0 s22.sitemeter.com 0.0.0.0 s23.sitemeter.com 0.0.0.0 s24.sitemeter.com 0.0.0.0 s25.sitemeter.com 0.0.0.0 s26.sitemeter.com 0.0.0.0 s27.sitemeter.com 0.0.0.0 s28.sitemeter.com 0.0.0.0 s29.sitemeter.com 0.0.0.0 s2.statcounter.com 0.0.0.0 s2.youtube.com 0.0.0.0 s30.sitemeter.com 0.0.0.0 s31.sitemeter.com 0.0.0.0 s32.sitemeter.com 0.0.0.0 s33.sitemeter.com 0.0.0.0 s34.sitemeter.com 0.0.0.0 s35.sitemeter.com 0.0.0.0 s36.sitemeter.com 0.0.0.0 s37.sitemeter.com 0.0.0.0 s38.sitemeter.com 0.0.0.0 s39.sitemeter.com 0.0.0.0 s3.hit.stat.pl 0.0.0.0 s41.sitemeter.com 0.0.0.0 s42.sitemeter.com 0.0.0.0 s43.sitemeter.com 0.0.0.0 s44.sitemeter.com 0.0.0.0 s45.sitemeter.com 0.0.0.0 s46.sitemeter.com 0.0.0.0 s47.sitemeter.com 0.0.0.0 s48.sitemeter.com 0.0.0.0 s4.histats.com 0.0.0.0 s4.shinystat.com 0.0.0.0 s.clickability.com 0.0.0.0 scorecardresearch.com 0.0.0.0 scribe.twitter.com 0.0.0.0 scrooge.channelcincinnati.com 0.0.0.0 scrooge.channeloklahoma.com 0.0.0.0 scrooge.click10.com 0.0.0.0 scrooge.clickondetroit.com 0.0.0.0 scrooge.nbc11.com 0.0.0.0 scrooge.nbc4columbus.com 0.0.0.0 scrooge.nbc4.com 0.0.0.0 scrooge.nbcsandiego.com 0.0.0.0 scrooge.newsnet5.com 0.0.0.0 
scrooge.thebostonchannel.com 0.0.0.0 scrooge.thedenverchannel.com 0.0.0.0 scrooge.theindychannel.com 0.0.0.0 scrooge.thekansascitychannel.com 0.0.0.0 scrooge.themilwaukeechannel.com 0.0.0.0 scrooge.theomahachannel.com 0.0.0.0 scrooge.wesh.com 0.0.0.0 scrooge.wftv.com 0.0.0.0 scrooge.wnbc.com 0.0.0.0 scrooge.wsoctv.com 0.0.0.0 scrooge.wtov9.com 0.0.0.0 sdc.rbistats.com 0.0.0.0 searchadv.com 0.0.0.0 sekel.ch 0.0.0.0 servedby.valuead.com 0.0.0.0 server10.opentracker.net 0.0.0.0 server11.opentracker.net 0.0.0.0 server12.opentracker.net 0.0.0.0 server13.opentracker.net 0.0.0.0 server14.opentracker.net 0.0.0.0 server15.opentracker.net 0.0.0.0 server16.opentracker.net 0.0.0.0 server17.opentracker.net 0.0.0.0 server18.opentracker.net 0.0.0.0 server1.opentracker.net 0.0.0.0 server2.opentracker.net 0.0.0.0 server3.opentracker.net 0.0.0.0 server3.web-stat.com 0.0.0.0 server4.opentracker.net 0.0.0.0 server5.opentracker.net 0.0.0.0 server6.opentracker.net 0.0.0.0 server7.opentracker.net 0.0.0.0 server8.opentracker.net 0.0.0.0 server9.opentracker.net 0.0.0.0 service.bfast.com 0.0.0.0 services.krxd.net 0.0.0.0 se.sitestat.com 0.0.0.0 sexcounter.com 0.0.0.0 seznam.hit.gemius.pl 0.0.0.0 showads.pubmatic.com 0.0.0.0 showcount.honest.com 0.0.0.0 sideshow.directtrack.com 0.0.0.0 sitestat.com 0.0.0.0 sitestats.tiscali.co.uk 0.0.0.0 sm1.sitemeter.com 0.0.0.0 sm2.sitemeter.com 0.0.0.0 sm3.sitemeter.com 0.0.0.0 sm4.sitemeter.com 0.0.0.0 sm5.sitemeter.com 0.0.0.0 sm6.sitemeter.com 0.0.0.0 sm7.sitemeter.com 0.0.0.0 sm8.sitemeter.com 0.0.0.0 sm9.sitemeter.com 0.0.0.0 smartstats.com 0.0.0.0 softcore.xxxcounter.com 0.0.0.0 sostats.mozilla.com 0.0.0.0 sovereign.sitetracker.com 0.0.0.0 spinbox.maccentral.com 0.0.0.0 spinbox.versiontracker.com 0.0.0.0 spklds.com 0.0.0.0 s.statistici.ro 0.0.0.0 s.stats.wordpress.com 0.0.0.0 ss.tiscali.com 0.0.0.0 ss.tiscali.it 0.0.0.0 st1.hit.gemius.pl 0.0.0.0 stags.peer39.net 0.0.0.0 stast2.gq.com 0.0.0.0 stat1.z-stat.com 0.0.0.0 stat3.cybermonitor.com 0.0.0.0 stat.4u.pl 0.0.0.0 stat.alibaba.com 0.0.0.0 statcounter.com 0.0.0.0 stat-counter.tass-online.ru 0.0.0.0 stat.discogs.com 0.0.0.0 static.kibboko.com 0.0.0.0 static.smni.com # Santa Monica - popunders 0.0.0.0 statik.topica.com 0.0.0.0 statistics.dynamicsitestats.com 0.0.0.0 statistics.elsevier.nl 0.0.0.0 statistics.reedbusiness.nl 0.0.0.0 statistics.theonion.com 0.0.0.0 statistik-gallup.net 0.0.0.0 stat.netmonitor.fi 0.0.0.0 stat.onestat.com 0.0.0.0 stats1.clicktracks.com 0.0.0.0 stats1.corusradio.com 0.0.0.0 stats1.in 0.0.0.0 stats.24ways.org 0.0.0.0 stats2.clicktracks.com 0.0.0.0 stats2.gourmet.com 0.0.0.0 stats2.newyorker.com 0.0.0.0 stats2.rte.ie 0.0.0.0 stats2.unrulymedia.com 0.0.0.0 stats2.vanityfair.com 0.0.0.0 stats4all.com 0.0.0.0 stats5.lightningcast.com 0.0.0.0 stats6.lightningcast.net 0.0.0.0 stats.absol.co.za 0.0.0.0 stats.adbrite.com 0.0.0.0 stats.adotube.com 0.0.0.0 stats.adultswim.com 0.0.0.0 stats.airfarewatchdog.com 0.0.0.0 stats.allliquid.com 0.0.0.0 stats.askmen.com 0.0.0.0 stats.bbc.co.uk 0.0.0.0 stats.becu.org 0.0.0.0 stats.big-boards.com 0.0.0.0 stats.blogoscoop.net 0.0.0.0 stats.bonzaii.no 0.0.0.0 stats.break.com 0.0.0.0 stats.brides.com 0.0.0.0 stats.buysellads.com 0.0.0.0 stats.cafepress.com 0.0.0.0 stats.canalblog.com 0.0.0.0 stats.cartoonnetwork.com 0.0.0.0 stats.channel4.com 0.0.0.0 stats.clickability.com 0.0.0.0 stats.concierge.com 0.0.0.0 stats.cts-bv.nl 0.0.0.0 stats.darkbluesea.com 0.0.0.0 stats.datahjaelp.net 0.0.0.0 stats.directnic.com 0.0.0.0 stats.dziennik.pl 0.0.0.0 stats.economist.com 0.0.0.0 
stats.epicurious.com 0.0.0.0 statse.webtrendslive.com # Fortune.com among others 0.0.0.0 stats.examiner.com 0.0.0.0 stats.fairmont.com 0.0.0.0 stats.fastcompany.com 0.0.0.0 stats.foxcounter.com 0.0.0.0 stats.free-rein.net 0.0.0.0 stats.f-secure.com 0.0.0.0 stats.ft.com 0.0.0.0 stats.gamestop.com 0.0.0.0 stats.globesports.com 0.0.0.0 stats.groupninetyfour.com 0.0.0.0 stats.idsoft.com 0.0.0.0 stats.ign.com 0.0.0.0 stats.ilsemedia.nl 0.0.0.0 stats.independent.co.uk 0.0.0.0 stats.indexstats.com 0.0.0.0 stats.indextools.com 0.0.0.0 stats.investors.com 0.0.0.0 stats.iwebtrack.com 0.0.0.0 stats.jippii.com 0.0.0.0 stats.klsoft.com 0.0.0.0 stats.ladotstats.nl 0.0.0.0 stats.macworld.com 0.0.0.0 stats.magnify.net 0.0.0.0 stats.manticoretechnology.com 0.0.0.0 stats.mbamupdates.com 0.0.0.0 stats.millanusa.com 0.0.0.0 stats.nowpublic.com 0.0.0.0 stats.paycounter.com 0.0.0.0 stats.platinumbucks.com 0.0.0.0 stats.popscreen.com 0.0.0.0 stats.reinvigorate.net 0.0.0.0 stats.resellerratings.com 0.0.0.0 stats.revenue.net 0.0.0.0 stats.searchles.com 0.0.0.0 stats.ssa.gov 0.0.0.0 stats.superstats.com 0.0.0.0 stats.telegraph.co.uk 0.0.0.0 stats.thoughtcatalog.com 0.0.0.0 stats.townnews.com 0.0.0.0 stats.ultimate-webservices.com 0.0.0.0 stats.unionleader.com 0.0.0.0 stats.video.search.yahoo.com 0.0.0.0 stats.vodpod.com 0.0.0.0 stats.wordpress.com 0.0.0.0 stats.www.ibm.com 0.0.0.0 stats.yourminis.com 0.0.0.0 stat.webmedia.pl 0.0.0.0 stat.www.fi 0.0.0.0 stat.yellowtracker.com 0.0.0.0 stat.youku.com 0.0.0.0 stl.p.a1.traceworks.com 0.0.0.0 straighttangerine.cz.cc 0.0.0.0 st.sageanalyst.net 0.0.0.0 sugoicounter.com 0.0.0.0 superstats.com 0.0.0.0 s.youtube.com #0.0.0.0 t2.hulu.com # Uncomment to block Hulu. 0.0.0.0 tagging.outrider.com 0.0.0.0 talkcity.realtracker.com 0.0.0.0 targetnet.com 0.0.0.0 tates.freestats.com 0.0.0.0 tcookie.usatoday.com 0.0.0.0 tcr.tynt.com # See http://daringfireball.net/2010/05/tynt_copy_paste_jerks 0.0.0.0 tgpcounter.freethumbnailgalleries.com 0.0.0.0 thecounter.com 0.0.0.0 the-counter.net 0.0.0.0 themecounter.com 0.0.0.0 the.sextracker.com 0.0.0.0 tipsurf.com 0.0.0.0 toolbarpartner.com 0.0.0.0 tools.spylog.ru 0.0.0.0 top.mail.ru 0.0.0.0 topstats.com 0.0.0.0 topstats.net 0.0.0.0 torstarcollect.247realmedia.com 0.0.0.0 track2.mybloglog.com 0.0.0.0 track.adform.com 0.0.0.0 track.adform.net 0.0.0.0 track.did-it.com 0.0.0.0 track.directleads.com 0.0.0.0 track.domainsponsor.com 0.0.0.0 track.effiliation.com 0.0.0.0 tracker.bonnint.net 0.0.0.0 tracker.clicktrade.com 0.0.0.0 tracker.idg.co.uk 0.0.0.0 tracker.mattel.com 0.0.0.0 tracker.netklix.com 0.0.0.0 tracker.tradedoubler.com 0.0.0.0 track.exclusivecpa.com 0.0.0.0 track.ft.com 0.0.0.0 track.gawker.com 0.0.0.0 track.homestead.com #0.0.0.0 track.hulu.com # Uncomment to block Hulu. 
0.0.0.0 tracking.10e20.com 0.0.0.0 tracking.adjug.com 0.0.0.0 tracking.allposters.com 0.0.0.0 tracking.foxnews.com 0.0.0.0 tracking.iol.co.za 0.0.0.0 tracking.msadcenter.msn.com 0.0.0.0 tracking.oggifinogi.com 0.0.0.0 tracking.percentmobile.com 0.0.0.0 tracking.publicidees.com 0.0.0.0 tracking.quisma.com 0.0.0.0 tracking.rangeonlinemedia.com 0.0.0.0 tracking.searchmarketing.com 0.0.0.0 tracking.summitmedia.co.uk 0.0.0.0 tracking.trafficjunky.net 0.0.0.0 tracking.trutv.com 0.0.0.0 tracking.vindicosuite.com 0.0.0.0 track.lfstmedia.com 0.0.0.0 track.mybloglog.com 0.0.0.0 track.omg2.com 0.0.0.0 track.roiservice.com 0.0.0.0 track.searchignite.com 0.0.0.0 tracksurf.daooda.com 0.0.0.0 track.webgains.com 0.0.0.0 tradedoubler.com 0.0.0.0 tradedoubler.sonvideopro.com 0.0.0.0 tr.adinterax.com 0.0.0.0 traffic-stats.streamsolutions.co.uk 0.0.0.0 trax.gamespot.com 0.0.0.0 trc.taboolasyndication.com 0.0.0.0 trk.kissmetrics.com 0.0.0.0 trk.tidaltv.com 0.0.0.0 true-counter.com 0.0.0.0 truehits1.gits.net.th 0.0.0.0 t.senaluno.com 0.0.0.0 tu.connect.wunderloop.net 0.0.0.0 tynt.com 0.0.0.0 u1817.16.spylog.com 0.0.0.0 u3102.47.spylog.com 0.0.0.0 u3305.71.spylog.com 0.0.0.0 u3608.20.spylog.com 0.0.0.0 u4056.56.spylog.com 0.0.0.0 u432.77.spylog.com 0.0.0.0 u4396.79.spylog.com 0.0.0.0 u4443.84.spylog.com 0.0.0.0 u4556.11.spylog.com 0.0.0.0 u5234.87.spylog.com 0.0.0.0 u5234.98.spylog.com 0.0.0.0 u5687.48.spylog.com 0.0.0.0 u574.07.spylog.com 0.0.0.0 u604.41.spylog.com 0.0.0.0 u6762.46.spylog.com 0.0.0.0 u6905.71.spylog.com 0.0.0.0 u7748.16.spylog.com 0.0.0.0 u810.15.spylog.com 0.0.0.0 u920.31.spylog.com 0.0.0.0 u977.40.spylog.com 0.0.0.0 udc.msn.com 0.0.0.0 uk.cqcounter.com 0.0.0.0 uk.sitestat.com 0.0.0.0 ultimatecounter.com 0.0.0.0 us.2.cqcounter.com 0.0.0.0 usa.nedstat.net 0.0.0.0 us.cqcounter.com 0.0.0.0 v1.nedstatbasic.net 0.0.0.0 v7.stats.load.com 0.0.0.0 valueclick.com 0.0.0.0 valueclick.net 0.0.0.0 vertical-stats.huffpost.com 0.0.0.0 video-stats.video.google.com 0.0.0.0 vip.clickzs.com 0.0.0.0 virtualbartendertrack.beer.com 0.0.0.0 visit.theglobeandmail.com # Visits to theglobeandmail.com 0.0.0.0 vis.sexlist.com 0.0.0.0 voken.eyereturn.com 0.0.0.0 vs.dmtracker.com 0.0.0.0 vsii.spinbox.net 0.0.0.0 vsii.spindox.net 0.0.0.0 w1.tcr112.tynt.com 0.0.0.0 warlog.info 0.0.0.0 wau.tynt.com 0.0.0.0 web1.realtracker.com 0.0.0.0 web2.realtracker.com 0.0.0.0 web3.realtracker.com 0.0.0.0 web4.realtracker.com 0.0.0.0 webanalytics.globalthoughtz.com 0.0.0.0 webbug.seatreport.com # web bugs 0.0.0.0 web-counter.5u.com 0.0.0.0 webcounter.com 0.0.0.0 webcounter.goweb.de 0.0.0.0 webcounter.together.net 0.0.0.0 webhit.aftenposten.no 0.0.0.0 webhit.afterposten.no 0.0.0.0 webmasterkai.sitetracker.com 0.0.0.0 webpdp.gator.com 0.0.0.0 webstat.channel4.com 0.0.0.0 webtrends.telenet.be 0.0.0.0 webtrends.thisis.co.uk 0.0.0.0 webtrends.townhall.com 0.0.0.0 wtnj.worldnow.com 0.0.0.0 www.0stats.com 0.0.0.0 www101.coolsavings.com 0.0.0.0 www.123count.com 0.0.0.0 www.123counter.superstats.com 0.0.0.0 www.123stat.com 0.0.0.0 www1.addfreestats.com 0.0.0.0 www1.counter.bloke.com 0.0.0.0 www.1quickclickrx.com 0.0.0.0 www1.tynt.com 0.0.0.0 www.2001-007.com 0.0.0.0 www2.addfreestats.com 0.0.0.0 www2.counter.bloke.com 0.0.0.0 www2.pagecount.com 0.0.0.0 www3.addfreestats.com 0.0.0.0 www3.click-fr.com 0.0.0.0 www3.counter.bloke.com 0.0.0.0 www.3dstats.com 0.0.0.0 www4.addfreestats.com 0.0.0.0 www4.counter.bloke.com 0.0.0.0 www5.addfreestats.com 0.0.0.0 www5.counter.bloke.com 0.0.0.0 www60.valueclick.com 0.0.0.0 www6.addfreestats.com 0.0.0.0 
www6.click-fr.com 0.0.0.0 www6.counter.bloke.com 0.0.0.0 www7.addfreestats.com 0.0.0.0 www7.counter.bloke.com 0.0.0.0 www8.addfreestats.com 0.0.0.0 www8.counter.bloke.com 0.0.0.0 www9.counter.bloke.com 0.0.0.0 www.addfreecounter.com 0.0.0.0 www.addfreestats.com 0.0.0.0 www.ademails.com 0.0.0.0 www.affiliatesuccess.net 0.0.0.0 www.bar.ry2002.02-ry014.snpr.hotmx.hair.zaam.net # In spam 0.0.0.0 www.belstat.nl 0.0.0.0 www.betcounter.com 0.0.0.0 www.bigbadted.com 0.0.0.0 www.bluestreak.com 0.0.0.0 www.c1.thecounter.de 0.0.0.0 www.c2.thecounter.de 0.0.0.0 www.clickclick.com 0.0.0.0 www.clickspring.net #used by a spyware product called PurityScan 0.0.0.0 www.clixgalore.com 0.0.0.0 www.connectionlead.com 0.0.0.0 www.counter10.sextracker.be 0.0.0.0 www.counter11.sextracker.be 0.0.0.0 www.counter12.sextracker.be 0.0.0.0 www.counter13.sextracker.be 0.0.0.0 www.counter14.sextracker.be 0.0.0.0 www.counter15.sextracker.be 0.0.0.0 www.counter16.sextracker.be 0.0.0.0 www.counter1.sextracker.be 0.0.0.0 www.counter2.sextracker.be 0.0.0.0 www.counter3.sextracker.be 0.0.0.0 www.counter4all.com 0.0.0.0 www.counter4all.de 0.0.0.0 www.counter4.sextracker.be 0.0.0.0 www.counter5.sextracker.be 0.0.0.0 www.counter6.sextracker.be 0.0.0.0 www.counter7.sextracker.be 0.0.0.0 www.counter8.sextracker.be 0.0.0.0 www.counter9.sextracker.be 0.0.0.0 www.counter.bloke.com 0.0.0.0 www.counterguide.com 0.0.0.0 www.counter.sexhound.nl 0.0.0.0 www.counter.superstats.com 0.0.0.0 www.c.thecounter.de 0.0.0.0 www.cw.nu 0.0.0.0 www.directgrowthhormone.com 0.0.0.0 www.dpbolvw.net 0.0.0.0 www.dwclick.com 0.0.0.0 www.easycounter.com 0.0.0.0 www.emaildeals.biz 0.0.0.0 www.estats4all.com 0.0.0.0 www.fastcounter.linkexchange.nl 0.0.0.0 www.formalyzer.com 0.0.0.0 www.foxcounter.com 0.0.0.0 www.freestats.com 0.0.0.0 www.fxcounters.com 0.0.0.0 www.gator.com 0.0.0.0 www.googkle.com 0.0.0.0 www.googletagservices.com 0.0.0.0 www.hitstats.co.uk 0.0.0.0 www.iccee.com 0.0.0.0 www.iesnare.com # See http://www.codingthewheel.com/archives/online-gambling-privacy-iesnare 0.0.0.0 www.jellycounter.com 0.0.0.0 www.kqzyfj.com 0.0.0.0 www.leadpub.com 0.0.0.0 www.linkcounter.com 0.0.0.0 www.marketscore.com 0.0.0.0 www.megacounter.de 0.0.0.0 www.metareward.com # web bugs in spam 0.0.0.0 www.naturalgrowthstore.biz 0.0.0.0 www.nedstat.com 0.0.0.0 www.nextgenstats.com 0.0.0.0 www.ntsearch.com 0.0.0.0 www.onestat.com 0.0.0.0 www.originalicons.com # installs IE extension 0.0.0.0 www.paycounter.com 0.0.0.0 www.pointclicktrack.com 0.0.0.0 www.popuptrafic.com 0.0.0.0 www.precisioncounter.com 0.0.0.0 www.premiumsmail.net 0.0.0.0 www.printmail.biz 0.0.0.0 www.quantserve.com #: Ad Tracking, JavaScript, etc. 
0.0.0.0 www.quareclk.com 0.0.0.0 www.remotrk.com 0.0.0.0 www.rightmedia.net 0.0.0.0 www.rightstats.com 0.0.0.0 www.searchadv.com 0.0.0.0 www.sekel.ch 0.0.0.0 www.shockcounter.com 0.0.0.0 www.simplecounter.net 0.0.0.0 www.specificclick.com 0.0.0.0 www.specificpop.com 0.0.0.0 www.spklds.com 0.0.0.0 www.statcount.com 0.0.0.0 www.statcounter.com 0.0.0.0 www.statsession.com 0.0.0.0 www.stattrax.com 0.0.0.0 www.stiffnetwork.com 0.0.0.0 www.testracking.com 0.0.0.0 www.thecounter.com 0.0.0.0 www.the-counter.net 0.0.0.0 www.toolbarcounter.com 0.0.0.0 www.tradedoubler.com 0.0.0.0 www.tradedoubler.com.ar 0.0.0.0 www.trafficmagnet.net # web bugs in spam 0.0.0.0 www.trafic.ro 0.0.0.0 www.trendcounter.com 0.0.0.0 www.true-counter.com 0.0.0.0 www.tynt.com 0.0.0.0 www.ultimatecounter.com 0.0.0.0 www.v61.com 0.0.0.0 www.webcounter.com 0.0.0.0 www.web-stat.com 0.0.0.0 www.webstat.com 0.0.0.0 www.whereugetxxx.com 0.0.0.0 www.xxxcounter.com 0.0.0.0 x.cb.kount.com 0.0.0.0 xcnn.com 0.0.0.0 xxxcounter.com 0.0.0.0 xyz.freelogs.com 0.0.0.0 zz.cqcounter.com # # # sites with known trojans, phishing, or other malware 0.0.0.0 05tz2e9.com 0.0.0.0 09killspyware.com 0.0.0.0 11398.onceedge.ru 0.0.0.0 2006mindfreaklike.blogspot.com # Facebook trojan 0.0.0.0 20-yrs-1.info 0.0.0.0 59-106-20-39.r-bl100.sakura.ne.jp 0.0.0.0 662bd114b7c9.onceedge.ru 0.0.0.0 a15172379.alturo-server.de 0.0.0.0 aaukqiooaseseuke.org 0.0.0.0 abetterinternet.com 0.0.0.0 abruzzoinitaly.co.uk 0.0.0.0 acglgoa.com 0.0.0.0 acim.moqhixoz.cn 0.0.0.0 adshufffle.com 0.0.0.0 adwitty.com 0.0.0.0 adwords.google.lloymlincs.com 0.0.0.0 afantispy.com 0.0.0.0 afdbande.cn 0.0.0.0 allhqpics.com # Facebook trojan 0.0.0.0 alphabirdnetwork.com 0.0.0.0 antispywareexpert.com 0.0.0.0 antivirus-online-scan5.com 0.0.0.0 antivirus-scanner8.com 0.0.0.0 antivirus-scanner.com 0.0.0.0 a.oix.com 0.0.0.0 a.oix.net 0.0.0.0 armsart.com 0.0.0.0 articlefuns.cn 0.0.0.0 articleidea.cn 0.0.0.0 asianread.com 0.0.0.0 autohipnose.com 0.0.0.0 a.webwise.com 0.0.0.0 a.webwise.net 0.0.0.0 a.webwise.org 0.0.0.0 beloysoff.ru 0.0.0.0 binsservicesonline.info 0.0.0.0 blackhat.be 0.0.0.0 blenz-me.net 0.0.0.0 bnvxcfhdgf.blogspot.com.es 0.0.0.0 b.oix.com 0.0.0.0 b.oix.net 0.0.0.0 BonusCashh.com 0.0.0.0 brunga.at # Facebook phishing attempt 0.0.0.0 bt.webwise.com 0.0.0.0 bt.webwise.net 0.0.0.0 bt.webwise.org 0.0.0.0 b.webwise.com 0.0.0.0 b.webwise.net 0.0.0.0 b.webwise.org 0.0.0.0 callawaypos.com 0.0.0.0 callbling.com 0.0.0.0 cambonanza.com 0.0.0.0 ccudl.com 0.0.0.0 changduk26.com # Facebook trojan 0.0.0.0 chelick.net # Facebook trojan 0.0.0.0 cioco-froll.com 0.0.0.0 cira.login.cqr.ssl.igotmyloverback.com 0.0.0.0 cleanchain.net 0.0.0.0 click.get-answers-fast.com 0.0.0.0 clien.net 0.0.0.0 cnbc.com-article906773.us 0.0.0.0 co8vd.cn 0.0.0.0 c.oix.com 0.0.0.0 c.oix.net 0.0.0.0 conduit.com 0.0.0.0 cra-arc-gc-ca.noads.biz 0.0.0.0 custom3hurricanedigitalmedia.com 0.0.0.0 c.webwise.com 0.0.0.0 c.webwise.net 0.0.0.0 c.webwise.org 0.0.0.0 dbios.org 0.0.0.0 dhauzja511.co.cc 0.0.0.0 dietpharmacyrx.net 0.0.0.0 download.abetterinternet.com 0.0.0.0 drc-group.net 0.0.0.0 dubstep.onedumb.com 0.0.0.0 east.05tz2e9.com 0.0.0.0 e-kasa.w8w.pl 0.0.0.0 en.likefever.org # Facebook trojan 0.0.0.0 enteryouremail.net 0.0.0.0 eviboli576.o-f.com 0.0.0.0 facebook-repto1040s2.ahlamountada.com 0.0.0.0 faceboook-replyei0ki.montadalitihad.com 0.0.0.0 facemail.com 0.0.0.0 faggotry.com 0.0.0.0 familyupport1.com 0.0.0.0 feaecebook.com 0.0.0.0 fengyixin.com 0.0.0.0 filosvybfimpsv.ru.gg 0.0.0.0 froling.bee.pl 0.0.0.0 fromru.su 0.0.0.0 
ftdownload.com 0.0.0.0 fu.golikeus.net # Facebook trojan 0.0.0.0 gamelights.ru 0.0.0.0 gasasthe.freehostia.com 0.0.0.0 get-answers-fast.com 0.0.0.0 gglcash4u.info # twitter worm 0.0.0.0 girlownedbypolicelike.blogspot.com # Facebook trojan 0.0.0.0 goggle.com 0.0.0.0 greatarcadehits.com 0.0.0.0 gyros.es 0.0.0.0 h1317070.stratoserver.net 0.0.0.0 hackerz.ir 0.0.0.0 hakerzy.net 0.0.0.0 hatrecord.ru # Facebook trojan 0.0.0.0 hellwert.biz 0.0.0.0 hotchix.servepics.com 0.0.0.0 hsb-canada.com # phishing site for hsbc.ca 0.0.0.0 hsbconline.ca # phishing site for hsbc.ca 0.0.0.0 icecars.com 0.0.0.0 idea21.org 0.0.0.0 Iframecash.biz 0.0.0.0 infopaypal.com 0.0.0.0 installmac.com 0.0.0.0 ipadzu.net 0.0.0.0 ircleaner.com 0.0.0.0 itwititer.com 0.0.0.0 ity.elusmedic.ru 0.0.0.0 jajajaj-thats-you-really.com 0.0.0.0 janezk.50webs.co 0.0.0.0 jujitsu-ostrava.info 0.0.0.0 jump.ewoss.net 0.0.0.0 juste.ru # Twitter trojan 0.0.0.0 kczambians.com 0.0.0.0 keybinary.com 0.0.0.0 kirgo.at # Facebook phishing attempt 0.0.0.0 klowns4phun.com 0.0.0.0 konflow.com # Facebook trojan 0.0.0.0 kplusd.far.ru 0.0.0.0 kpremium.com 0.0.0.0 lank.ru 0.0.0.0 lighthouse2k.com 0.0.0.0 like.likewut.net 0.0.0.0 likeportal.com # Facebook trojan 0.0.0.0 likespike.com # Facebook trojan 0.0.0.0 likethislist.biz # Facebook trojan 0.0.0.0 likethis.mbosoft.com # Facebook trojan 0.0.0.0 loseweight.asdjiiw.com 0.0.0.0 lucibad.home.ro 0.0.0.0 luxcart.ro 0.0.0.0 m01.oix.com 0.0.0.0 m01.oix.net 0.0.0.0 m01.webwise.com 0.0.0.0 m01.webwise.net 0.0.0.0 m01.webwise.org 0.0.0.0 m02.oix.com 0.0.0.0 m02.oix.net 0.0.0.0 m02.webwise.com 0.0.0.0 m02.webwise.net 0.0.0.0 m02.webwise.org 0.0.0.0 mail.cyberh.fr 0.0.0.0 malware-live-pro-scanv1.com 0.0.0.0 maxi4.firstvds.ru 0.0.0.0 megasurfin.com 0.0.0.0 monkeyball.osa.pl 0.0.0.0 movies.701pages.com 0.0.0.0 mplayerdownloader.com 0.0.0.0 murcia-ban.es 0.0.0.0 mylike.co.uk # Facebook trojan 0.0.0.0 nactx.com 0.0.0.0 natashyabaydesign.com 0.0.0.0 new-dating-2012.info 0.0.0.0 new-vid-zone-1.blogspot.com.au 0.0.0.0 newwayscanner.info 0.0.0.0 novemberrainx.com 0.0.0.0 ns1.oix.com 0.0.0.0 ns1.oix.net 0.0.0.0 ns1.webwise.com 0.0.0.0 ns1.webwise.net 0.0.0.0 ns1.webwise.org 0.0.0.0 ns2.oix.com 0.0.0.0 ns2.oix.net 0.0.0.0 ns2.webwise.com 0.0.0.0 ns2.webwise.net 0.0.0.0 ns2.webwise.org 0.0.0.0 nufindings.info 0.0.0.0 office.officenet.co.kr 0.0.0.0 oix.com 0.0.0.0 oix.net 0.0.0.0 oj.likewut.net 0.0.0.0 online-antispym4.com 0.0.0.0 oo-na-na-pics.com 0.0.0.0 ordersildenafil.com 0.0.0.0 otsserver.com 0.0.0.0 outerinfo.com 0.0.0.0 paincake.yoll.net 0.0.0.0 pc-scanner16.com 0.0.0.0 personalantispy.com 0.0.0.0 phatthalung.go.th 0.0.0.0 picture-uploads.com 0.0.0.0 pilltabletsrxbargain.net 0.0.0.0 powabcyfqe.com 0.0.0.0 premium-live-scan.com 0.0.0.0 products-gold.net 0.0.0.0 proflashdata.com # Facebook trojan 0.0.0.0 protectionupdatecenter.com 0.0.0.0 pv.wantsfly.com 0.0.0.0 qip.ru 0.0.0.0 qy.corrmedic.ru 0.0.0.0 rd.alphabirdnetwork.com 0.0.0.0 rickrolling.com 0.0.0.0 roifmd.info 0.0.0.0 russian-sex.com 0.0.0.0 s4d.in 0.0.0.0 scan.antispyware-free-scanner.com 0.0.0.0 scanner.best-click-av1.info 0.0.0.0 scanner.best-protect.info 0.0.0.0 scottishstuff-online.com # Canadian bank phishing site 0.0.0.0 sc-spyware.com 0.0.0.0 search.conduit.com 0.0.0.0 securedliveuploads.com 0.0.0.0 securityandroidupdate.dinamikaprinting.com 0.0.0.0 securityscan.us 0.0.0.0 sexymarissa.net 0.0.0.0 shell.xhhow4.com 0.0.0.0 shoppstop.comood.opsource.net 0.0.0.0 shop.skin-safety.com 0.0.0.0 signin-ebay-com-ws-ebayisapi-dll-signin-webscr.ocom.pl 0.0.0.0 sinera.org 
0.0.0.0 sjguild.com 0.0.0.0 smarturl.it 0.0.0.0 smile-angel.com 0.0.0.0 software-updates.co 0.0.0.0 software-wenc.co.cc 0.0.0.0 someonewhocares.com 0.0.0.0 sousay.info 0.0.0.0 start.qip.ru 0.0.0.0 superegler.net 0.0.0.0 supernaturalart.com 0.0.0.0 superprotection10.com 0.0.0.0 sverd.net 0.0.0.0 tahoesup.com 0.0.0.0 tattooshaha.info # Facebook trojan 0.0.0.0 test.ishvara-yoga.com 0.0.0.0 TheBizMeet.com 0.0.0.0 thedatesafe.com # Facebook trojan 0.0.0.0 themoneyclippodcast.com 0.0.0.0 themusicnetwork.co.uk 0.0.0.0 thinstall.abetterinternet.com 0.0.0.0 tivvitter.com 0.0.0.0 tomorrownewstoday.com # I'm not sure what it does, but it seems to be associated with a phishing attempt on Facebook 0.0.0.0 toolbarbest.biz 0.0.0.0 toolbarbucks.biz 0.0.0.0 toolbarcool.biz 0.0.0.0 toolbardollars.biz 0.0.0.0 toolbarmoney.biz 0.0.0.0 toolbarnew.biz 0.0.0.0 toolbarsale.biz 0.0.0.0 toolbarweb.biz 0.0.0.0 traffic.adwitty.com 0.0.0.0 trialreg.com 0.0.0.0 tvshowslist.com 0.0.0.0 twitter.login.kevanshome.org 0.0.0.0 twitter.secure.bzpharma.net 0.0.0.0 uawj.moqhixoz.cn 0.0.0.0 ughmvqf.spitt.ru 0.0.0.0 uqz.com 0.0.0.0 users16.jabry.com 0.0.0.0 utenti.lycos.it 0.0.0.0 vcipo.info 0.0.0.0 videos.dskjkiuw.com 0.0.0.0 videos.twitter.secure-logins01.com # twitter worm (http://mashable.com/2009/09/23/twitter-worm-dms/) 0.0.0.0 vxiframe.biz 0.0.0.0 waldenfarms.com 0.0.0.0 weblover.info 0.0.0.0 webpaypal.com 0.0.0.0 webwise.com 0.0.0.0 webwise.net 0.0.0.0 webwise.org 0.0.0.0 west.05tz2e9.com 0.0.0.0 wewillrocknow.com 0.0.0.0 willysy.com 0.0.0.0 wm.maxysearch.info 0.0.0.0 womo.corrmedic.ru 0.0.0.0 www1.bmo.com.hotfrio.com.br 0.0.0.0 www1.firesavez5.com 0.0.0.0 www1.firesavez6.com 0.0.0.0 www1.realsoft34.com 0.0.0.0 www4.gy7k.net 0.0.0.0 www.abetterinternet.com 0.0.0.0 www.adshufffle.com 0.0.0.0 www.adwords.google.lloymlincs.com 0.0.0.0 www.afantispy.com 0.0.0.0 www.akoneplatit.sk 0.0.0.0 www.allhqpics.com # Facebook trojan 0.0.0.0 www.alrpost69.com 0.0.0.0 www.anatol.com 0.0.0.0 www.articlefuns.cn 0.0.0.0 www.articleidea.cn 0.0.0.0 www.asianread.com 0.0.0.0 www.backsim.ru 0.0.0.0 www.bankofamerica.com.ok.am 0.0.0.0 www.be4life.ru 0.0.0.0 www.blenz-me.net 0.0.0.0 www.cambonanza.com 0.0.0.0 www.chelick.net # Facebook trojan 0.0.0.0 www.didata.bw 0.0.0.0 www.dietsecret.ru 0.0.0.0 www.eroyear.ru 0.0.0.0 www.exbays.com 0.0.0.0 www.faggotry.com 0.0.0.0 www.feaecebook.com 0.0.0.0 www.fictioncinema.com 0.0.0.0 www.fischereszter.hu 0.0.0.0 www.froling.bee.pl 0.0.0.0 www.gns-consola.com 0.0.0.0 www.goggle.com 0.0.0.0 www.grouphappy.com 0.0.0.0 www.hakerzy.net 0.0.0.0 www.haoyunlaid.com 0.0.0.0 www.icecars.com 0.0.0.0 www.indesignstudioinfo.com 0.0.0.0 www.infopaypal.com 0.0.0.0 www.keybinary.com 0.0.0.0 www.kinomarathon.ru 0.0.0.0 www.kpremium.com 0.0.0.0 www.likeportal.com # Facebook trojan 0.0.0.0 www.likespike.com # Facebook trojan 0.0.0.0 www.likethislist.biz # Facebook trojan 0.0.0.0 www.likethis.mbosoft.com # Facebook trojan 0.0.0.0 www.lomalindasda.org # Facebook trojan 0.0.0.0 www.lovecouple.ru 0.0.0.0 www.lovetrust.ru 0.0.0.0 www.mikras.nl 0.0.0.0 www.monkeyball.osa.pl 0.0.0.0 www.monsonis.net 0.0.0.0 www.movie-port.ru 0.0.0.0 www.mplayerdownloader.com 0.0.0.0 www.mylike.co.uk # Facebook trojan 0.0.0.0 www.mylovecards.com 0.0.0.0 www.nine2rack.in 0.0.0.0 www.novemberrainx.com 0.0.0.0 www.nu26.com 0.0.0.0 www.oix.com 0.0.0.0 www.oix.net 0.0.0.0 www.onlyfreeoffersonline.com 0.0.0.0 www.oreidofitilho.com.br 0.0.0.0 www.otsserver.com 0.0.0.0 www.pay-pal.com-cgibin-canada.4mcmeta4v.cn 0.0.0.0 www.picture-uploads.com 0.0.0.0 
www.portaldimensional.com 0.0.0.0 www.poxudeli.ru 0.0.0.0 www.proflashdata.com # Facebook trojan 0.0.0.0 www.rickrolling.com 0.0.0.0 www.russian-sex.com 0.0.0.0 www.scotiaonline.scotiabank.salferreras.com 0.0.0.0 www.sdlpgift.com 0.0.0.0 www.securityscan.us 0.0.0.0 www.servertasarimbu.com 0.0.0.0 www.sexytiger.ru 0.0.0.0 www.shinilchurch.net # domain was hacked and had a trojan installed 0.0.0.0 www.sinera.org 0.0.0.0 www.someonewhocares.com 0.0.0.0 www.tanger.com.br 0.0.0.0 www.tattooshaha.info # Facebook trojan 0.0.0.0 www.te81.net 0.0.0.0 www.thedatesafe.com # Facebook trojan 0.0.0.0 www.trucktirehotline.com 0.0.0.0 www.tvshowslist.com 0.0.0.0 www.upi6.pillsstore-c.com # Facebook trojan 0.0.0.0 www.uqz.com 0.0.0.0 www.via99.org 0.0.0.0 www.videolove.clanteam.com 0.0.0.0 www.videostan.ru 0.0.0.0 www.vippotexa.ru 0.0.0.0 www.wantsfly.com 0.0.0.0 www.webpaypal.com 0.0.0.0 www.webwise.com 0.0.0.0 www.webwise.net 0.0.0.0 www.webwise.org 0.0.0.0 www.wewillrocknow.com 0.0.0.0 www.willysy.com 0.0.0.0 xfotosx01.fromru.su 0.0.0.0 xponlinescanner.com 0.0.0.0 xvrxyzba253.hotmail.ru 0.0.0.0 yrwap.cn 0.0.0.0 zarozinski.info 0.0.0.0 zettapetta.com 0.0.0.0 zfotos.fromru.su 0.0.0.0 zip.er.cz 0.0.0.0 ztrf.net 0.0.0.0 zviframe.biz # # 0.0.0.0 3ad.doubleclick.net 0.0.0.0 ad2.doubleclick.net 0.0.0.0 ad.3au.doubleclick.net 0.0.0.0 ad.ae.doubleclick.net 0.0.0.0 ad.au.doubleclick.net 0.0.0.0 ad.be.doubleclick.net 0.0.0.0 ad.br.doubleclick.net 0.0.0.0 ad.de.doubleclick.net 0.0.0.0 ad.dk.doubleclick.net 0.0.0.0 ad-emea.doubleclick.net 0.0.0.0 ad.es.doubleclick.net 0.0.0.0 ad.fi.doubleclick.net 0.0.0.0 ad.fr.doubleclick.net 0.0.0.0 ad-g.doubleclick.net 0.0.0.0 ad.it.doubleclick.net 0.0.0.0 ad.jp.doubleclick.net 0.0.0.0 ad.mo.doubleclick.net 0.0.0.0 ad.n2434.doubleclick.net 0.0.0.0 ad.nl.doubleclick.net 0.0.0.0 ad.no.doubleclick.net 0.0.0.0 ad.nz.doubleclick.net 0.0.0.0 ad.pl.doubleclick.net 0.0.0.0 ad.se.doubleclick.net 0.0.0.0 ad.sg.doubleclick.net 0.0.0.0 ad.uk.doubleclick.net 0.0.0.0 ad.ve.doubleclick.net 0.0.0.0 ad-yt-bfp.doubleclick.net 0.0.0.0 ad.za.doubleclick.net 0.0.0.0 amn.doubleclick.net 0.0.0.0 creative.cc-dt.com 0.0.0.0 doubleclick.de 0.0.0.0 doubleclick.net 0.0.0.0 ebaycn.doubleclick.net 0.0.0.0 ebaytw.doubleclick.net 0.0.0.0 exnjadgda1.doubleclick.net 0.0.0.0 exnjadgda2.doubleclick.net 0.0.0.0 exnjadgds1.doubleclick.net 0.0.0.0 exnjmdgda1.doubleclick.net 0.0.0.0 exnjmdgds1.doubleclick.net 0.0.0.0 feedads.g.doubleclick.net 0.0.0.0 fls.doubleclick.net 0.0.0.0 gd10.doubleclick.net 0.0.0.0 gd11.doubleclick.net 0.0.0.0 gd12.doubleclick.net 0.0.0.0 gd13.doubleclick.net 0.0.0.0 gd14.doubleclick.net 0.0.0.0 gd15.doubleclick.net 0.0.0.0 gd16.doubleclick.net 0.0.0.0 gd17.doubleclick.net 0.0.0.0 gd18.doubleclick.net 0.0.0.0 gd19.doubleclick.net 0.0.0.0 gd1.doubleclick.net 0.0.0.0 gd20.doubleclick.net 0.0.0.0 gd21.doubleclick.net 0.0.0.0 gd22.doubleclick.net 0.0.0.0 gd23.doubleclick.net 0.0.0.0 gd24.doubleclick.net 0.0.0.0 gd25.doubleclick.net 0.0.0.0 gd26.doubleclick.net 0.0.0.0 gd27.doubleclick.net 0.0.0.0 gd28.doubleclick.net 0.0.0.0 gd29.doubleclick.net 0.0.0.0 gd2.doubleclick.net 0.0.0.0 gd30.doubleclick.net 0.0.0.0 gd31.doubleclick.net 0.0.0.0 gd3.doubleclick.net 0.0.0.0 gd4.doubleclick.net 0.0.0.0 gd5.doubleclick.net 0.0.0.0 gd7.doubleclick.net 0.0.0.0 gd8.doubleclick.net 0.0.0.0 gd9.doubleclick.net 0.0.0.0 googleads.g.doubleclick.net 0.0.0.0 iv.doubleclick.net 0.0.0.0 ln.doubleclick.net 0.0.0.0 m1.2mdn.net 0.0.0.0 m1.ae.2mdn.net 0.0.0.0 m1.au.2mdn.net 0.0.0.0 m1.be.2mdn.net 0.0.0.0 m1.br.2mdn.net 
0.0.0.0 m1.ca.2mdn.net 0.0.0.0 m1.cn.2mdn.net 0.0.0.0 m1.de.2mdn.net 0.0.0.0 m1.dk.2mdn.net 0.0.0.0 m1.doubleclick.net 0.0.0.0 m1.es.2mdn.net 0.0.0.0 m1.fi.2mdn.net 0.0.0.0 m1.fr.2mdn.net 0.0.0.0 m1.it.2mdn.net 0.0.0.0 m1.jp.2mdn.net 0.0.0.0 m1.nl.2mdn.net 0.0.0.0 m1.no.2mdn.net 0.0.0.0 m1.nz.2mdn.net 0.0.0.0 m1.pl.2mdn.net 0.0.0.0 m1.se.2mdn.net 0.0.0.0 m1.sg.2mdn.net 0.0.0.0 m1.uk.2mdn.net 0.0.0.0 m1.ve.2mdn.net 0.0.0.0 m1.za.2mdn.net 0.0.0.0 m2.ae.2mdn.net 0.0.0.0 m2.au.2mdn.net 0.0.0.0 m2.be.2mdn.net 0.0.0.0 m2.br.2mdn.net 0.0.0.0 m2.ca.2mdn.net 0.0.0.0 m2.cn.2mdn.net 0.0.0.0 m2.cn.doubleclick.net 0.0.0.0 m2.de.2mdn.net 0.0.0.0 m2.dk.2mdn.net 0.0.0.0 m2.doubleclick.net 0.0.0.0 m2.es.2mdn.net 0.0.0.0 m2.fi.2mdn.net 0.0.0.0 m2.fr.2mdn.net 0.0.0.0 m2.it.2mdn.net 0.0.0.0 m2.jp.2mdn.net 0.0.0.0 m.2mdn.net 0.0.0.0 m2.nl.2mdn.net 0.0.0.0 m2.no.2mdn.net 0.0.0.0 m2.nz.2mdn.net 0.0.0.0 m2.pl.2mdn.net 0.0.0.0 m2.se.2mdn.net 0.0.0.0 m2.sg.2mdn.net 0.0.0.0 m2.uk.2mdn.net 0.0.0.0 m2.ve.2mdn.net 0.0.0.0 m2.za.2mdn.net 0.0.0.0 m3.ae.2mdn.net 0.0.0.0 m3.au.2mdn.net 0.0.0.0 m3.be.2mdn.net 0.0.0.0 m3.br.2mdn.net 0.0.0.0 m3.ca.2mdn.net 0.0.0.0 m3.cn.2mdn.net 0.0.0.0 m3.de.2mdn.net 0.0.0.0 m3.dk.2mdn.net 0.0.0.0 m3.doubleclick.net 0.0.0.0 m3.es.2mdn.net 0.0.0.0 m3.fi.2mdn.net 0.0.0.0 m3.fr.2mdn.net 0.0.0.0 m3.it.2mdn.net 0.0.0.0 m3.jp.2mdn.net 0.0.0.0 m3.nl.2mdn.net 0.0.0.0 m3.no.2mdn.net 0.0.0.0 m3.nz.2mdn.net 0.0.0.0 m3.pl.2mdn.net 0.0.0.0 m3.se.2mdn.net 0.0.0.0 m3.sg.2mdn.net 0.0.0.0 m3.uk.2mdn.net 0.0.0.0 m3.ve.2mdn.net 0.0.0.0 m3.za.2mdn.net 0.0.0.0 m4.ae.2mdn.net 0.0.0.0 m4.au.2mdn.net 0.0.0.0 m4.be.2mdn.net 0.0.0.0 m4.br.2mdn.net 0.0.0.0 m4.ca.2mdn.net 0.0.0.0 m4.cn.2mdn.net 0.0.0.0 m4.de.2mdn.net 0.0.0.0 m4.dk.2mdn.net 0.0.0.0 m4.doubleclick.net 0.0.0.0 m4.es.2mdn.net 0.0.0.0 m4.fi.2mdn.net 0.0.0.0 m4.fr.2mdn.net 0.0.0.0 m4.it.2mdn.net 0.0.0.0 m4.jp.2mdn.net 0.0.0.0 m4.nl.2mdn.net 0.0.0.0 m4.no.2mdn.net 0.0.0.0 m4.nz.2mdn.net 0.0.0.0 m4.pl.2mdn.net 0.0.0.0 m4.se.2mdn.net 0.0.0.0 m4.sg.2mdn.net 0.0.0.0 m4.uk.2mdn.net 0.0.0.0 m4.ve.2mdn.net 0.0.0.0 m4.za.2mdn.net 0.0.0.0 m5.ae.2mdn.net 0.0.0.0 m5.au.2mdn.net 0.0.0.0 m5.be.2mdn.net 0.0.0.0 m5.br.2mdn.net 0.0.0.0 m5.ca.2mdn.net 0.0.0.0 m5.cn.2mdn.net 0.0.0.0 m5.de.2mdn.net 0.0.0.0 m5.dk.2mdn.net 0.0.0.0 m5.doubleclick.net 0.0.0.0 m5.es.2mdn.net 0.0.0.0 m5.fi.2mdn.net 0.0.0.0 m5.fr.2mdn.net 0.0.0.0 m5.it.2mdn.net 0.0.0.0 m5.jp.2mdn.net 0.0.0.0 m5.nl.2mdn.net 0.0.0.0 m5.no.2mdn.net 0.0.0.0 m5.nz.2mdn.net 0.0.0.0 m5.pl.2mdn.net 0.0.0.0 m5.se.2mdn.net 0.0.0.0 m5.sg.2mdn.net 0.0.0.0 m5.uk.2mdn.net 0.0.0.0 m5.ve.2mdn.net 0.0.0.0 m5.za.2mdn.net 0.0.0.0 m6.ae.2mdn.net 0.0.0.0 m6.au.2mdn.net 0.0.0.0 m6.be.2mdn.net 0.0.0.0 m6.br.2mdn.net 0.0.0.0 m6.ca.2mdn.net 0.0.0.0 m6.cn.2mdn.net 0.0.0.0 m6.de.2mdn.net 0.0.0.0 m6.dk.2mdn.net 0.0.0.0 m6.doubleclick.net 0.0.0.0 m6.es.2mdn.net 0.0.0.0 m6.fi.2mdn.net 0.0.0.0 m6.fr.2mdn.net 0.0.0.0 m6.it.2mdn.net 0.0.0.0 m6.jp.2mdn.net 0.0.0.0 m6.nl.2mdn.net 0.0.0.0 m6.no.2mdn.net 0.0.0.0 m6.nz.2mdn.net 0.0.0.0 m6.pl.2mdn.net 0.0.0.0 m6.se.2mdn.net 0.0.0.0 m6.sg.2mdn.net 0.0.0.0 m6.uk.2mdn.net 0.0.0.0 m6.ve.2mdn.net 0.0.0.0 m6.za.2mdn.net 0.0.0.0 m7.ae.2mdn.net 0.0.0.0 m7.au.2mdn.net 0.0.0.0 m7.be.2mdn.net 0.0.0.0 m7.br.2mdn.net 0.0.0.0 m7.ca.2mdn.net 0.0.0.0 m7.cn.2mdn.net 0.0.0.0 m7.de.2mdn.net 0.0.0.0 m7.dk.2mdn.net 0.0.0.0 m7.doubleclick.net 0.0.0.0 m7.es.2mdn.net 0.0.0.0 m7.fi.2mdn.net 0.0.0.0 m7.fr.2mdn.net 0.0.0.0 m7.it.2mdn.net 0.0.0.0 m7.jp.2mdn.net 0.0.0.0 m7.nl.2mdn.net 0.0.0.0 m7.no.2mdn.net 0.0.0.0 m7.nz.2mdn.net 
0.0.0.0 m7.pl.2mdn.net 0.0.0.0 m7.se.2mdn.net 0.0.0.0 m7.sg.2mdn.net 0.0.0.0 m7.uk.2mdn.net 0.0.0.0 m7.ve.2mdn.net 0.0.0.0 m7.za.2mdn.net 0.0.0.0 m8.ae.2mdn.net 0.0.0.0 m8.au.2mdn.net 0.0.0.0 m8.be.2mdn.net 0.0.0.0 m8.br.2mdn.net 0.0.0.0 m8.ca.2mdn.net 0.0.0.0 m8.cn.2mdn.net 0.0.0.0 m8.de.2mdn.net 0.0.0.0 m8.dk.2mdn.net 0.0.0.0 m8.doubleclick.net 0.0.0.0 m8.es.2mdn.net 0.0.0.0 m8.fi.2mdn.net 0.0.0.0 m8.fr.2mdn.net 0.0.0.0 m8.it.2mdn.net 0.0.0.0 m8.jp.2mdn.net 0.0.0.0 m8.nl.2mdn.net 0.0.0.0 m8.no.2mdn.net 0.0.0.0 m8.nz.2mdn.net 0.0.0.0 m8.pl.2mdn.net 0.0.0.0 m8.se.2mdn.net 0.0.0.0 m8.sg.2mdn.net 0.0.0.0 m8.uk.2mdn.net 0.0.0.0 m8.ve.2mdn.net 0.0.0.0 m8.za.2mdn.net 0.0.0.0 m9.ae.2mdn.net 0.0.0.0 m9.au.2mdn.net 0.0.0.0 m9.be.2mdn.net 0.0.0.0 m9.br.2mdn.net 0.0.0.0 m9.ca.2mdn.net 0.0.0.0 m9.cn.2mdn.net 0.0.0.0 m9.de.2mdn.net 0.0.0.0 m9.dk.2mdn.net 0.0.0.0 m9.doubleclick.net 0.0.0.0 m9.es.2mdn.net 0.0.0.0 m9.fi.2mdn.net 0.0.0.0 m9.fr.2mdn.net 0.0.0.0 m9.it.2mdn.net 0.0.0.0 m9.jp.2mdn.net 0.0.0.0 m9.nl.2mdn.net 0.0.0.0 m9.no.2mdn.net 0.0.0.0 m9.nz.2mdn.net 0.0.0.0 m9.pl.2mdn.net 0.0.0.0 m9.se.2mdn.net 0.0.0.0 m9.sg.2mdn.net 0.0.0.0 m9.uk.2mdn.net 0.0.0.0 m9.ve.2mdn.net 0.0.0.0 m9.za.2mdn.net 0.0.0.0 m.de.2mdn.net 0.0.0.0 m.doubleclick.net 0.0.0.0 n3302ad.doubleclick.net 0.0.0.0 n3349ad.doubleclick.net 0.0.0.0 n4061ad.doubleclick.net 0.0.0.0 n4403ad.doubleclick.net 0.0.0.0 n479ad.doubleclick.net 0.0.0.0 optimize.doubleclick.net 0.0.0.0 pubads.g.doubleclick.net 0.0.0.0 rd.intl.doubleclick.net 0.0.0.0 securepubads.g.doubleclick.net 0.0.0.0 stats.g.doubleclick.net 0.0.0.0 twx.2mdn.net 0.0.0.0 twx.doubleclick.net 0.0.0.0 ukrpts.net 0.0.0.0 uunyadgda1.doubleclick.net 0.0.0.0 uunyadgds1.doubleclick.net 0.0.0.0 www.ukrpts.net # # 0.0.0.0 1up.us.intellitxt.com 0.0.0.0 5starhiphop.us.intellitxt.com 0.0.0.0 askmen2.us.intellitxt.com 0.0.0.0 bargainpda.us.intellitxt.com 0.0.0.0 businesspundit.us.intellitxt.com 0.0.0.0 canadafreepress.us.intellitxt.com 0.0.0.0 contactmusic.uk.intellitxt.com 0.0.0.0 ctv.us.intellitxt.com 0.0.0.0 designtechnica.us.intellitxt.com 0.0.0.0 devshed.us.intellitxt.com 0.0.0.0 digitaltrends.us.intellitxt.com 0.0.0.0 dnps.us.intellitxt.com 0.0.0.0 doubleviking.us.intellitxt.com 0.0.0.0 drizzydrake.us.intellitxt.com 0.0.0.0 ehow.us.intellitxt.com 0.0.0.0 entertainment.msnbc.us.intellitxt.com 0.0.0.0 examnotes.us.intellitxt.com 0.0.0.0 excite.us.intellitxt.com 0.0.0.0 experts.us.intellitxt.com 0.0.0.0 extremetech.us.intellitxt.com 0.0.0.0 ferrago.uk.intellitxt.com 0.0.0.0 filmschoolrejects.us.intellitxt.com 0.0.0.0 filmwad.us.intellitxt.com 0.0.0.0 firstshowing.us.intellitxt.com 0.0.0.0 flashmagazine.us.intellitxt.com 0.0.0.0 foxnews.us.intellitxt.com 0.0.0.0 foxtv.us.intellitxt.com 0.0.0.0 freedownloadcenter.uk.intellitxt.com 0.0.0.0 gadgets.fosfor.se.intellitxt.com 0.0.0.0 gamesradar.us.intellitxt.com 0.0.0.0 gannettbroadcast.us.intellitxt.com 0.0.0.0 gonintendo.us.intellitxt.com 0.0.0.0 gorillanation.us.intellitxt.com 0.0.0.0 hackedgadgets.us.intellitxt.com 0.0.0.0 hardcoreware.us.intellitxt.com 0.0.0.0 hardocp.us.intellitxt.com 0.0.0.0 hothardware.us.intellitxt.com 0.0.0.0 hotonlinenews.us.intellitxt.com 0.0.0.0 ign.us.intellitxt.com 0.0.0.0 images.intellitxt.com 0.0.0.0 itxt2.us.intellitxt.com 0.0.0.0 joblo.us.intellitxt.com 0.0.0.0 johnchow.us.intellitxt.com 0.0.0.0 laptopmag.us.intellitxt.com 0.0.0.0 linuxforums.us.intellitxt.com 0.0.0.0 maccity.it.intellitxt.com 0.0.0.0 macnn.us.intellitxt.com 0.0.0.0 macuser.uk.intellitxt.com 0.0.0.0 macworld.uk.intellitxt.com 0.0.0.0 
metro.uk.intellitxt.com 0.0.0.0 mobile9.us.intellitxt.com 0.0.0.0 monstersandcritics.uk.intellitxt.com 0.0.0.0 moviesonline.ca.intellitxt.com 0.0.0.0 mustangevolution.us.intellitxt.com 0.0.0.0 neowin.us.intellitxt.com 0.0.0.0 newcarnet.uk.intellitxt.com 0.0.0.0 newlaunches.uk.intellitxt.com 0.0.0.0 nexys404.us.intellitxt.com 0.0.0.0 ohgizmo.us.intellitxt.com 0.0.0.0 pcadvisor.uk.intellitxt.com 0.0.0.0 pcgameshardware.de.intellitxt.com 0.0.0.0 pcmag.us.intellitxt.com 0.0.0.0 pcper.us.intellitxt.com 0.0.0.0 penton.us.intellitxt.com 0.0.0.0 physorg.uk.intellitxt.com 0.0.0.0 physorg.us.intellitxt.com 0.0.0.0 playfuls.uk.intellitxt.com 0.0.0.0 pocketlint.uk.intellitxt.com 0.0.0.0 popularmechanics.us.intellitxt.com 0.0.0.0 postchronicle.us.intellitxt.com 0.0.0.0 projectorreviews.us.intellitxt.com 0.0.0.0 psp3d.us.intellitxt.com 0.0.0.0 pspcave.uk.intellitxt.com 0.0.0.0 qj.us.intellitxt.com 0.0.0.0 rasmussenreports.us.intellitxt.com 0.0.0.0 rawstory.us.intellitxt.com 0.0.0.0 savemanny.us.intellitxt.com 0.0.0.0 sc.intellitxt.com 0.0.0.0 siliconera.us.intellitxt.com 0.0.0.0 slashphone.us.intellitxt.com 0.0.0.0 soft32.us.intellitxt.com 0.0.0.0 softpedia.uk.intellitxt.com 0.0.0.0 somethingawful.us.intellitxt.com 0.0.0.0 splashnews.uk.intellitxt.com 0.0.0.0 spymac.us.intellitxt.com 0.0.0.0 techeblog.us.intellitxt.com 0.0.0.0 technewsworld.us.intellitxt.com 0.0.0.0 technologyreview.us.intellitxt.com 0.0.0.0 techspot.us.intellitxt.com 0.0.0.0 tgdaily.us.intellitxt.com 0.0.0.0 the-gadgeteer.us.intellitxt.com 0.0.0.0 thelastboss.us.intellitxt.com 0.0.0.0 thetechzone.us.intellitxt.com 0.0.0.0 thoughtsmedia.us.intellitxt.com 0.0.0.0 tmcnet.us.intellitxt.com 0.0.0.0 tomsnetworking.us.intellitxt.com 0.0.0.0 toms.us.intellitxt.com 0.0.0.0 tribal.us.intellitxt.com # vibrantmedia.com 0.0.0.0 universetoday.us.intellitxt.com 0.0.0.0 us.intellitxt.com 0.0.0.0 warp2search.us.intellitxt.com 0.0.0.0 wi-fitechnology.uk.intellitxt.com 0.0.0.0 worldnetdaily.us.intellitxt.com # # # Red Sheriff and imrworldwide.com -- server side tracking 0.0.0.0 devfw.imrworldwide.com 0.0.0.0 fe1-au.imrworldwide.com 0.0.0.0 fe1-fi.imrworldwide.com 0.0.0.0 fe1-it.imrworldwide.com 0.0.0.0 fe2-au.imrworldwide.com 0.0.0.0 fe3-au.imrworldwide.com 0.0.0.0 fe3-gc.imrworldwide.com 0.0.0.0 fe3-uk.imrworldwide.com 0.0.0.0 fe4-uk.imrworldwide.com 0.0.0.0 fe-au.imrworldwide.com 0.0.0.0 imrworldwide.com 0.0.0.0 lycos-eu.imrworldwide.com 0.0.0.0 ninemsn.imrworldwide.com 0.0.0.0 rc-au.imrworldwide.com 0.0.0.0 redsheriff.com #0.0.0.0 secure-au.imrworldwide.com 0.0.0.0 secure-jp.imrworldwide.com 0.0.0.0 secure-nz.imrworldwide.com 0.0.0.0 secure-uk.imrworldwide.com 0.0.0.0 secure-us.imrworldwide.com 0.0.0.0 secure-za.imrworldwide.com 0.0.0.0 server-au.imrworldwide.com 0.0.0.0 server-br.imrworldwide.com 0.0.0.0 server-by.imrworldwide.com 0.0.0.0 server-ca.imrworldwide.com 0.0.0.0 server-de.imrworldwide.com 0.0.0.0 server-dk.imrworldwide.com 0.0.0.0 server-ee.imrworldwide.com 0.0.0.0 server-fi.imrworldwide.com 0.0.0.0 server-fr.imrworldwide.com 0.0.0.0 server-hk.imrworldwide.com 0.0.0.0 server-it.imrworldwide.com 0.0.0.0 server-jp.imrworldwide.com 0.0.0.0 server-lt.imrworldwide.com 0.0.0.0 server-lv.imrworldwide.com 0.0.0.0 server-no.imrworldwide.com 0.0.0.0 server-nz.imrworldwide.com 0.0.0.0 server-oslo.imrworldwide.com 0.0.0.0 server-pl.imrworldwide.com 0.0.0.0 server-ru.imrworldwide.com 0.0.0.0 server-se.imrworldwide.com 0.0.0.0 server-sg.imrworldwide.com 0.0.0.0 server-stockh.imrworldwide.com 0.0.0.0 server-ua.imrworldwide.com 0.0.0.0 
server-uk.imrworldwide.com 0.0.0.0 server-us.imrworldwide.com 0.0.0.0 server-za.imrworldwide.com 0.0.0.0 survey1-au.imrworldwide.com 0.0.0.0 telstra.imrworldwide.com 0.0.0.0 www.imrworldwide.com 0.0.0.0 www.imrworldwide.com.au 0.0.0.0 www.redsheriff.com # # # cydoor -- server side tracking 0.0.0.0 cydoor.com 0.0.0.0 j.2004cms.com # cydoor 0.0.0.0 jbaventures.cjt1.net 0.0.0.0 jbeet.cjt1.net 0.0.0.0 jbit.cjt1.net 0.0.0.0 jcollegehumor.cjt1.net 0.0.0.0 jcontent.bns1.net 0.0.0.0 jdownloadacc.cjt1.net 0.0.0.0 jgen10.cjt1.net 0.0.0.0 jgen11.cjt1.net 0.0.0.0 jgen12.cjt1.net 0.0.0.0 jgen13.cjt1.net 0.0.0.0 jgen14.cjt1.net 0.0.0.0 jgen15.cjt1.net 0.0.0.0 jgen16.cjt1.net 0.0.0.0 jgen17.cjt1.net 0.0.0.0 jgen18.cjt1.net 0.0.0.0 jgen19.cjt1.net 0.0.0.0 jgen1.cjt1.net 0.0.0.0 jgen20.cjt1.net 0.0.0.0 jgen21.cjt1.net 0.0.0.0 jgen22.cjt1.net 0.0.0.0 jgen23.cjt1.net 0.0.0.0 jgen24.cjt1.net 0.0.0.0 jgen25.cjt1.net 0.0.0.0 jgen26.cjt1.net 0.0.0.0 jgen27.cjt1.net 0.0.0.0 jgen28.cjt1.net 0.0.0.0 jgen29.cjt1.net 0.0.0.0 jgen2.cjt1.net 0.0.0.0 jgen30.cjt1.net 0.0.0.0 jgen31.cjt1.net 0.0.0.0 jgen32.cjt1.net 0.0.0.0 jgen33.cjt1.net 0.0.0.0 jgen34.cjt1.net 0.0.0.0 jgen35.cjt1.net 0.0.0.0 jgen36.cjt1.net 0.0.0.0 jgen37.cjt1.net 0.0.0.0 jgen38.cjt1.net 0.0.0.0 jgen39.cjt1.net 0.0.0.0 jgen3.cjt1.net 0.0.0.0 jgen40.cjt1.net 0.0.0.0 jgen41.cjt1.net 0.0.0.0 jgen42.cjt1.net 0.0.0.0 jgen43.cjt1.net 0.0.0.0 jgen44.cjt1.net 0.0.0.0 jgen45.cjt1.net 0.0.0.0 jgen46.cjt1.net 0.0.0.0 jgen47.cjt1.net 0.0.0.0 jgen48.cjt1.net 0.0.0.0 jgen49.cjt1.net 0.0.0.0 jgen4.cjt1.net 0.0.0.0 jgen5.cjt1.net 0.0.0.0 jgen6.cjt1.net 0.0.0.0 jgen7.cjt1.net 0.0.0.0 jgen8.cjt1.net 0.0.0.0 jgen9.cjt1.net 0.0.0.0 jhumour.cjt1.net 0.0.0.0 jmbi58.cjt1.net 0.0.0.0 jnova.cjt1.net 0.0.0.0 jpirate.cjt1.net 0.0.0.0 jsandboxer.cjt1.net 0.0.0.0 jumcna.cjt1.net 0.0.0.0 jwebbsense.cjt1.net 0.0.0.0 www.cydoor.com # #<2o7-sites> # 2o7.net -- server side tracking 0.0.0.0 102.112.2o7.net 0.0.0.0 102.122.2o7.net 0.0.0.0 112.2o7.net 0.0.0.0 122.2o7.net 0.0.0.0 192.168.112.2o7.net 0.0.0.0 2o7.net 0.0.0.0 actforvictory.112.2o7.net 0.0.0.0 adbrite.112.2o7.net 0.0.0.0 adbrite.122.2o7.net 0.0.0.0 aehistory.112.2o7.net 0.0.0.0 aetv.112.2o7.net 0.0.0.0 agamgreetingscom.112.2o7.net 0.0.0.0 allbritton.122.2o7.net 0.0.0.0 americanbaby.112.2o7.net 0.0.0.0 ancestrymsn.112.2o7.net 0.0.0.0 ancestryuki.112.2o7.net 0.0.0.0 angiba.112.2o7.net 0.0.0.0 angmar.112.2o7.net 0.0.0.0 angtr.112.2o7.net 0.0.0.0 angts.112.2o7.net 0.0.0.0 angvac.112.2o7.net 0.0.0.0 anm.112.2o7.net 0.0.0.0 aolcareers.122.2o7.net 0.0.0.0 aoldlama.122.2o7.net 0.0.0.0 aoljournals.122.2o7.net 0.0.0.0 aolnsnews.122.2o7.net 0.0.0.0 aolpf.122.2o7.net 0.0.0.0 aolpolls.112.2o7.net 0.0.0.0 aolpolls.122.2o7.net 0.0.0.0 aolsearch.122.2o7.net 0.0.0.0 aolsvc.122.2o7.net 0.0.0.0 aoltmz.122.2o7.net 0.0.0.0 aolturnercnnmoney.112.2o7.net 0.0.0.0 aolturnercnnmoney.122.2o7.net 0.0.0.0 aolturnersi.122.2o7.net 0.0.0.0 aolukglobal.122.2o7.net 0.0.0.0 aolwinamp.122.2o7.net 0.0.0.0 aolwpaim.112.2o7.net 0.0.0.0 aolwpicq.122.2o7.net 0.0.0.0 aolwpmq.112.2o7.net 0.0.0.0 aolwpmqnoban.112.2o7.net 0.0.0.0 apdigitalorg.112.2o7.net 0.0.0.0 apdigitalorgovn.112.2o7.net 0.0.0.0 apnonline.112.2o7.net #0.0.0.0 appleglobal.112.2o7.net #breaks apple.com #0.0.0.0 applestoreus.112.2o7.net #breaks apple.com 0.0.0.0 atlassian.122.2o7.net 0.0.0.0 autobytel.112.2o7.net 0.0.0.0 autoweb.112.2o7.net 0.0.0.0 bbcnewscouk.112.2o7.net 0.0.0.0 bellca.112.2o7.net 0.0.0.0 bellglobemediapublishing.122.2o7.net 0.0.0.0 bellglovemediapublishing.122.2o7.net 0.0.0.0 
bellserviceeng.112.2o7.net 0.0.0.0 betterhg.112.2o7.net 0.0.0.0 bhgmarketing.112.2o7.net 0.0.0.0 bidentonrccom.122.2o7.net 0.0.0.0 biwwltvcom.112.2o7.net 0.0.0.0 biwwltvcom.122.2o7.net 0.0.0.0 blackpress.122.2o7.net 0.0.0.0 bnkr8dev.112.2o7.net 0.0.0.0 bntbcstglobal.112.2o7.net 0.0.0.0 bosecom.112.2o7.net 0.0.0.0 brightcove.112.2o7.net 0.0.0.0 bulldog.122.2o7.net 0.0.0.0 businessweekpoc.112.2o7.net 0.0.0.0 bzresults.122.2o7.net 0.0.0.0 cablevision.112.2o7.net 0.0.0.0 canwest.112.2o7.net 0.0.0.0 canwestcom.112.2o7.net 0.0.0.0 canwestglobal.112.2o7.net 0.0.0.0 capcityadvcom.112.2o7.net 0.0.0.0 capcityadvcom.122.2o7.net 0.0.0.0 careers.112.2o7.net 0.0.0.0 cartoonnetwork.122.2o7.net 0.0.0.0 cbaol.112.2o7.net 0.0.0.0 cbc.122.2o7.net 0.0.0.0 cbcca.112.2o7.net 0.0.0.0 cbcca.122.2o7.net 0.0.0.0 cbcincinnatienquirer.112.2o7.net 0.0.0.0 cbmsn.112.2o7.net 0.0.0.0 cbs.112.2o7.net 0.0.0.0 cbsncaasports.112.2o7.net 0.0.0.0 cbsnfl.112.2o7.net 0.0.0.0 cbspgatour.112.2o7.net 0.0.0.0 cbsspln.112.2o7.net 0.0.0.0 ccrbudgetca.112.2o7.net 0.0.0.0 ccrgaviscom.112.2o7.net 0.0.0.0 cfrfa.112.2o7.net 0.0.0.0 chicagosuntimes.122.2o7.net 0.0.0.0 chumtv.122.2o7.net 0.0.0.0 classifiedscanada.112.2o7.net 0.0.0.0 classmatescom.112.2o7.net 0.0.0.0 cmpglobalvista.112.2o7.net 0.0.0.0 cnetasiapacific.122.2o7.net 0.0.0.0 cnetaustralia.122.2o7.net 0.0.0.0 cneteurope.122.2o7.net 0.0.0.0 cnetnews.112.2o7.net 0.0.0.0 cnetzdnet.112.2o7.net 0.0.0.0 cnhienid.122.2o7.net 0.0.0.0 cnhimcalesternews.122.2o7.net 0.0.0.0 cnhipicayuneitemv.112.2o7.net 0.0.0.0 cnhitribunestar.122.2o7.net 0.0.0.0 cnhitribunestara.122.2o7.net 0.0.0.0 cnhregisterherald.122.2o7.net 0.0.0.0 cnn.122.2o7.net 0.0.0.0 computerworldcom.112.2o7.net 0.0.0.0 condenast.112.2o7.net 0.0.0.0 coxnetmasterglobal.112.2o7.net 0.0.0.0 coxpalmbeachpost.112.2o7.net 0.0.0.0 csoonlinecom.112.2o7.net 0.0.0.0 ctvcrimelibrary.112.2o7.net 0.0.0.0 ctvsmokinggun.112.2o7.net 0.0.0.0 cxociocom.112.2o7.net 0.0.0.0 denverpost.112.2o7.net 0.0.0.0 diginet.112.2o7.net 0.0.0.0 digitalhomediscountptyltd.122.2o7.net 0.0.0.0 disccglobal.112.2o7.net 0.0.0.0 disccstats.112.2o7.net 0.0.0.0 dischannel.112.2o7.net 0.0.0.0 divx.112.2o7.net 0.0.0.0 dixonslnkcouk.112.2o7.net 0.0.0.0 dogpile.112.2o7.net 0.0.0.0 donval.112.2o7.net 0.0.0.0 dowjones.122.2o7.net 0.0.0.0 dreammates.112.2o7.net 0.0.0.0 eaeacom.112.2o7.net 0.0.0.0 eagamesuk.112.2o7.net 0.0.0.0 earthlnkpsplive.122.2o7.net 0.0.0.0 ebay1.112.2o7.net 0.0.0.0 ebaynonreg.112.2o7.net 0.0.0.0 ebayreg.112.2o7.net 0.0.0.0 ebayus.112.2o7.net 0.0.0.0 ebcom.112.2o7.net 0.0.0.0 ectestlampsplus1.112.2o7.net 0.0.0.0 edietsmain.112.2o7.net 0.0.0.0 edmundsinsideline.112.2o7.net 0.0.0.0 edsa.112.2o7.net 0.0.0.0 ehg-moma.hitbox.com.112.2o7.net 0.0.0.0 emc.122.2o7.net 0.0.0.0 employ22.112.2o7.net 0.0.0.0 employ26.112.2o7.net 0.0.0.0 employment.112.2o7.net 0.0.0.0 enterprisenewsmedia.122.2o7.net 0.0.0.0 epost.122.2o7.net 0.0.0.0 ewsnaples.112.2o7.net 0.0.0.0 ewstcpalm.112.2o7.net 0.0.0.0 examinercom.122.2o7.net 0.0.0.0 execulink.112.2o7.net 0.0.0.0 expedia4.112.2o7.net 0.0.0.0 expedia.ca.112.2o7.net 0.0.0.0 f2ncracker.112.2o7.net 0.0.0.0 f2nsmh.112.2o7.net 0.0.0.0 f2ntheage.112.2o7.net 0.0.0.0 faceoff.112.2o7.net 0.0.0.0 fbkmnr.112.2o7.net 0.0.0.0 forbesattache.112.2o7.net 0.0.0.0 forbesauto.112.2o7.net 0.0.0.0 forbesautos.112.2o7.net 0.0.0.0 forbescom.112.2o7.net 0.0.0.0 ford.112.2o7.net 0.0.0.0 foxcom.112.2o7.net 0.0.0.0 foxsimpsons.112.2o7.net 0.0.0.0 georgewbush.112.2o7.net 0.0.0.0 georgewbushcom.112.2o7.net 0.0.0.0 gettyimages.122.2o7.net 0.0.0.0 
gjfastcompanycom.112.2o7.net 0.0.0.0 gmchevyapprentice.112.2o7.net 0.0.0.0 gmhummer.112.2o7.net 0.0.0.0 gntbcstglobal.112.2o7.net 0.0.0.0 gntbcstkxtv.112.2o7.net 0.0.0.0 gntbcstwtsp.112.2o7.net 0.0.0.0 gpaper104.112.2o7.net 0.0.0.0 gpaper105.112.2o7.net 0.0.0.0 gpaper107.112.2o7.net 0.0.0.0 gpaper108.112.2o7.net 0.0.0.0 gpaper109.112.2o7.net 0.0.0.0 gpaper110.112.2o7.net 0.0.0.0 gpaper111.112.2o7.net 0.0.0.0 gpaper112.112.2o7.net 0.0.0.0 gpaper113.112.2o7.net 0.0.0.0 gpaper114.112.2o7.net 0.0.0.0 gpaper115.112.2o7.net 0.0.0.0 gpaper116.112.2o7.net 0.0.0.0 gpaper117.112.2o7.net 0.0.0.0 gpaper118.112.2o7.net 0.0.0.0 gpaper119.112.2o7.net 0.0.0.0 gpaper120.112.2o7.net 0.0.0.0 gpaper121.112.2o7.net 0.0.0.0 gpaper122.112.2o7.net 0.0.0.0 gpaper123.112.2o7.net 0.0.0.0 gpaper124.112.2o7.net 0.0.0.0 gpaper125.112.2o7.net 0.0.0.0 gpaper126.112.2o7.net 0.0.0.0 gpaper127.112.2o7.net 0.0.0.0 gpaper128.112.2o7.net 0.0.0.0 gpaper129.112.2o7.net 0.0.0.0 gpaper131.112.2o7.net 0.0.0.0 gpaper132.112.2o7.net 0.0.0.0 gpaper133.112.2o7.net 0.0.0.0 gpaper138.112.2o7.net 0.0.0.0 gpaper139.112.2o7.net 0.0.0.0 gpaper140.112.2o7.net 0.0.0.0 gpaper141.112.2o7.net 0.0.0.0 gpaper142.112.2o7.net 0.0.0.0 gpaper144.112.2o7.net 0.0.0.0 gpaper145.112.2o7.net 0.0.0.0 gpaper147.112.2o7.net 0.0.0.0 gpaper149.112.2o7.net 0.0.0.0 gpaper151.112.2o7.net 0.0.0.0 gpaper154.112.2o7.net 0.0.0.0 gpaper156.112.2o7.net 0.0.0.0 gpaper157.112.2o7.net 0.0.0.0 gpaper158.112.2o7.net 0.0.0.0 gpaper162.112.2o7.net 0.0.0.0 gpaper164.112.2o7.net 0.0.0.0 gpaper166.112.2o7.net 0.0.0.0 gpaper167.112.2o7.net 0.0.0.0 gpaper169.112.2o7.net 0.0.0.0 gpaper170.112.2o7.net 0.0.0.0 gpaper171.112.2o7.net 0.0.0.0 gpaper172.112.2o7.net 0.0.0.0 gpaper173.112.2o7.net 0.0.0.0 gpaper174.112.2o7.net 0.0.0.0 gpaper176.112.2o7.net 0.0.0.0 gpaper177.112.2o7.net 0.0.0.0 gpaper180.112.2o7.net 0.0.0.0 gpaper183.112.2o7.net 0.0.0.0 gpaper184.112.2o7.net 0.0.0.0 gpaper191.112.2o7.net 0.0.0.0 gpaper192.112.2o7.net 0.0.0.0 gpaper193.112.2o7.net 0.0.0.0 gpaper194.112.2o7.net 0.0.0.0 gpaper195.112.2o7.net 0.0.0.0 gpaper196.112.2o7.net 0.0.0.0 gpaper197.112.2o7.net 0.0.0.0 gpaper198.112.2o7.net 0.0.0.0 gpaper202.112.2o7.net 0.0.0.0 gpaper204.112.2o7.net 0.0.0.0 gpaper205.112.2o7.net 0.0.0.0 gpaper212.112.2o7.net 0.0.0.0 gpaper214.112.2o7.net 0.0.0.0 gpaper219.112.2o7.net 0.0.0.0 gpaper223.112.2o7.net 0.0.0.0 harpo.122.2o7.net 0.0.0.0 hchrmain.112.2o7.net 0.0.0.0 heavycom.112.2o7.net 0.0.0.0 heavycom.122.2o7.net 0.0.0.0 homesclick.112.2o7.net 0.0.0.0 hostdomainpeople.112.2o7.net 0.0.0.0 hostdomainpeopleca.112.2o7.net 0.0.0.0 hostpowermedium.112.2o7.net 0.0.0.0 hpglobal.112.2o7.net 0.0.0.0 hphqglobal.112.2o7.net 0.0.0.0 hphqsearch.112.2o7.net 0.0.0.0 infomart.ca.112.2o7.net 0.0.0.0 infospace.com.112.2o7.net 0.0.0.0 intelcorpcim.112.2o7.net 0.0.0.0 intelglobal.112.2o7.net 0.0.0.0 ivillageglobal.112.2o7.net 0.0.0.0 jijsonline.122.2o7.net 0.0.0.0 jitmj4.122.2o7.net 0.0.0.0 johnlewis.112.2o7.net 0.0.0.0 journalregistercompany.122.2o7.net 0.0.0.0 kddi.122.2o7.net 0.0.0.0 krafteurope.112.2o7.net 0.0.0.0 ktva.112.2o7.net 0.0.0.0 ladieshj.112.2o7.net 0.0.0.0 laptopmag.122.2o7.net 0.0.0.0 laxnws.112.2o7.net 0.0.0.0 laxprs.112.2o7.net 0.0.0.0 laxpsd.112.2o7.net 0.0.0.0 ldsfch.112.2o7.net 0.0.0.0 leeenterprises.112.2o7.net 0.0.0.0 lenovo.112.2o7.net 0.0.0.0 logoworksdev.112.2o7.net 0.0.0.0 losu.112.2o7.net 0.0.0.0 mailtribune.112.2o7.net 0.0.0.0 maxim.122.2o7.net 0.0.0.0 maxvr.112.2o7.net 0.0.0.0 mdamarillo.112.2o7.net 0.0.0.0 mdjacksonville.112.2o7.net 0.0.0.0 mdtopeka.112.2o7.net 0.0.0.0 
mdwardmore.112.2o7.net 0.0.0.0 mdwsavannah.112.2o7.net 0.0.0.0 medbroadcast.112.2o7.net 0.0.0.0 mediabistrocom.112.2o7.net 0.0.0.0 mediamatters.112.2o7.net 0.0.0.0 meetupcom.112.2o7.net 0.0.0.0 metacafe.122.2o7.net 0.0.0.0 mgjournalnow.112.2o7.net 0.0.0.0 mgtbo.112.2o7.net 0.0.0.0 mgtimesdispatch.112.2o7.net 0.0.0.0 mgwsls.112.2o7.net 0.0.0.0 mgwspa.112.2o7.net 0.0.0.0 microsoftconsumermarketing.112.2o7.net 0.0.0.0 microsofteup.112.2o7.net 0.0.0.0 microsoftwindows.112.2o7.net 0.0.0.0 midala.112.2o7.net 0.0.0.0 midar.112.2o7.net 0.0.0.0 midsen.112.2o7.net 0.0.0.0 mlbastros.112.2o7.net 0.0.0.0 mlbcolorado.112.2o7.net 0.0.0.0 mlbcom.112.2o7.net 0.0.0.0 mlbglobal08.112.2o7.net 0.0.0.0 mlbglobal.112.2o7.net 0.0.0.0 mlbhouston.112.2o7.net 0.0.0.0 mlbstlouis.112.2o7.net 0.0.0.0 mlbtoronto.112.2o7.net 0.0.0.0 mmsshopcom.112.2o7.net 0.0.0.0 mnfidnahub.112.2o7.net 0.0.0.0 mngidmn.112.2o7.net 0.0.0.0 mngirockymtnnews.112.2o7.net 0.0.0.0 mngislctrib.112.2o7.net 0.0.0.0 mngiyrkdr.112.2o7.net 0.0.0.0 mseuppremain.112.2o7.net 0.0.0.0 msnmercom.112.2o7.net 0.0.0.0 msnportal.112.2o7.net 0.0.0.0 mtvn.112.2o7.net 0.0.0.0 mtvu.112.2o7.net 0.0.0.0 mxmacromedia.112.2o7.net 0.0.0.0 myfamilyancestry.112.2o7.net 0.0.0.0 nasdaq.122.2o7.net 0.0.0.0 natgeoeditco.112.2o7.net 0.0.0.0 natgeoeditcom.112.2o7.net 0.0.0.0 natgeonews.112.2o7.net 0.0.0.0 natgeongmcom.112.2o7.net 0.0.0.0 nationalpost.112.2o7.net 0.0.0.0 nba.112.2o7.net 0.0.0.0 neber.112.2o7.net 0.0.0.0 netrp.112.2o7.net 0.0.0.0 netsdartboards.122.2o7.net 0.0.0.0 newsinteractive.112.2o7.net 0.0.0.0 newstimeslivecom.112.2o7.net 0.0.0.0 nike.112.2o7.net 0.0.0.0 nikeplus.112.2o7.net 0.0.0.0 nmanchorage.112.2o7.net 0.0.0.0 nmbrampton.112.2o7.net 0.0.0.0 nmcommancomedia.112.2o7.net 0.0.0.0 nmfresno.112.2o7.net 0.0.0.0 nmhiltonhead.112.2o7.net 0.0.0.0 nmkawartha.112.2o7.net 0.0.0.0 nmminneapolis.112.2o7.net 0.0.0.0 nmmississauga.112.2o7.net 0.0.0.0 nmnandomedia.112.2o7.net 0.0.0.0 nmraleigh.112.2o7.net 0.0.0.0 nmrockhill.112.2o7.net 0.0.0.0 nmsacramento.112.2o7.net 0.0.0.0 nmtoronto.112.2o7.net 0.0.0.0 nmtricity.112.2o7.net 0.0.0.0 nmyork.112.2o7.net 0.0.0.0 novellcom.112.2o7.net 0.0.0.0 nytbglobe.112.2o7.net 0.0.0.0 nytglobe.112.2o7.net 0.0.0.0 nythglobe.112.2o7.net 0.0.0.0 nytimesglobal.112.2o7.net 0.0.0.0 nytimesnonsampled.112.2o7.net 0.0.0.0 nytimesnoonsampled.112.2o7.net 0.0.0.0 nytmembercenter.112.2o7.net 0.0.0.0 nytrflorence.112.2o7.net 0.0.0.0 nytrgadsden.112.2o7.net 0.0.0.0 nytrgainseville.112.2o7.net 0.0.0.0 nytrhendersonville.112.2o7.net 0.0.0.0 nytrhouma.112.2o7.net 0.0.0.0 nytrlakeland.112.2o7.net 0.0.0.0 nytrsantarosa.112.2o7.net 0.0.0.0 nytrsarasota.112.2o7.net 0.0.0.0 nytrwilmington.112.2o7.net 0.0.0.0 nyttechnology.112.2o7.net 0.0.0.0 omniture.112.2o7.net 0.0.0.0 omnitureglobal.112.2o7.net 0.0.0.0 onlineindigoca.112.2o7.net 0.0.0.0 oracle.112.2o7.net 0.0.0.0 oraclecom.112.2o7.net 0.0.0.0 overstock.com.112.2o7.net 0.0.0.0 overturecomvista.112.2o7.net 0.0.0.0 paypal.112.2o7.net 0.0.0.0 poacprod.122.2o7.net 0.0.0.0 poconorecordcom.112.2o7.net 0.0.0.0 projectorpeople.112.2o7.net 0.0.0.0 publicationsunbound.112.2o7.net 0.0.0.0 pulharktheherald.112.2o7.net 0.0.0.0 pulpantagraph.112.2o7.net 0.0.0.0 rckymtnnws.112.2o7.net 0.0.0.0 recordnetcom.112.2o7.net 0.0.0.0 recordonlinecom.112.2o7.net 0.0.0.0 rey3935.112.2o7.net 0.0.0.0 rezrezwhistler.112.2o7.net 0.0.0.0 riptownmedia.122.2o7.net 0.0.0.0 rncgopcom.122.2o7.net 0.0.0.0 roxio.112.2o7.net 0.0.0.0 salesforce.122.2o7.net 0.0.0.0 santacruzsentinel.112.2o7.net 0.0.0.0 sciamglobal.112.2o7.net 0.0.0.0 
scrippsbathvert.112.2o7.net 0.0.0.0 scrippsfoodnet.112.2o7.net 0.0.0.0 scrippswfts.112.2o7.net 0.0.0.0 scrippswxyz.112.2o7.net 0.0.0.0 seacoastonlinecom.112.2o7.net 0.0.0.0 searscom.112.2o7.net 0.0.0.0 smibs.112.2o7.net 0.0.0.0 smwww.112.2o7.net 0.0.0.0 sonycorporate.122.2o7.net 0.0.0.0 sonyglobal.112.2o7.net 0.0.0.0 southcoasttoday.112.2o7.net 0.0.0.0 spiketv.112.2o7.net 0.0.0.0 stpetersburgtimes.122.2o7.net 0.0.0.0 suncom.112.2o7.net 0.0.0.0 sunglobal.112.2o7.net 0.0.0.0 sunonesearch.112.2o7.net 0.0.0.0 survey.112.2o7.net 0.0.0.0 sympmsnsports.112.2o7.net 0.0.0.0 techreview.112.2o7.net 0.0.0.0 thestar.122.2o7.net 0.0.0.0 thestardev.122.2o7.net 0.0.0.0 thinkgeek.112.2o7.net 0.0.0.0 timebus2.112.2o7.net 0.0.0.0 timecom.112.2o7.net 0.0.0.0 timeew.122.2o7.net 0.0.0.0 timefortune.112.2o7.net 0.0.0.0 timehealth.112.2o7.net 0.0.0.0 timeofficepirates.122.2o7.net 0.0.0.0 timepeople.122.2o7.net 0.0.0.0 timepopsci.122.2o7.net 0.0.0.0 timerealsimple.112.2o7.net 0.0.0.0 timewarner.122.2o7.net 0.0.0.0 tmsscion.112.2o7.net 0.0.0.0 tmstoyota.112.2o7.net 0.0.0.0 tnttv.112.2o7.net 0.0.0.0 torstardigital.122.2o7.net 0.0.0.0 travidiathebrick.112.2o7.net 0.0.0.0 tribuneinteractive.122.2o7.net 0.0.0.0 usatoday1.112.2o7.net 0.0.0.0 usnews.122.2o7.net 0.0.0.0 usun.112.2o7.net 0.0.0.0 vanns.112.2o7.net 0.0.0.0 verisignwildcard.112.2o7.net 0.0.0.0 verisonwildcard.112.2o7.net 0.0.0.0 vh1com.112.2o7.net 0.0.0.0 viaatomvideo.112.2o7.net 0.0.0.0 viacomedycentralrl.112.2o7.net 0.0.0.0 viagametrailers.112.2o7.net 0.0.0.0 viamtvcom.112.2o7.net 0.0.0.0 viasyndimedia.112.2o7.net 0.0.0.0 viavh1com.112.2o7.net 0.0.0.0 viay2m.112.2o7.net 0.0.0.0 vintacom.112.2o7.net 0.0.0.0 viralvideo.112.2o7.net 0.0.0.0 walmartcom.112.2o7.net 0.0.0.0 westjet.112.2o7.net 0.0.0.0 wileydumcom.112.2o7.net 0.0.0.0 wmg.112.2o7.net 0.0.0.0 wmgmulti.112.2o7.net 0.0.0.0 workopolis.122.2o7.net 0.0.0.0 wpni.112.2o7.net 0.0.0.0 xhealthmobiletools.112.2o7.net 0.0.0.0 youtube.112.2o7.net 0.0.0.0 yrkeve.112.2o7.net 0.0.0.0 ziffdavisglobal.112.2o7.net 0.0.0.0 ziffdavispennyarcade.112.2o7.net # # ads 0.0.0.0 0101011.com 0.0.0.0 0427d7.se 0.0.0.0 0d79ed.r.axf8.net 0.0.0.0 104231.dtiblog.com 0.0.0.0 10.im.cz 0.0.0.0 123.fluxads.com 0.0.0.0 123specialgifts.com #0.0.0.0 140cc.v.fwmrm.net #interferes with Comedy Central videos 0.0.0.0 1.adbrite.com 0.0.0.0 1.forgetstore.com 0.0.0.0 1.httpads.com 0.0.0.0 1.primaryads.com 0.0.0.0 207-87-18-203.wsmg.digex.net 0.0.0.0 247support.adtech.fr 0.0.0.0 247support.adtech.us 0.0.0.0 24ratownik.hit.gemius.pl 0.0.0.0 24trk.com 0.0.0.0 25184.hittail.com 0.0.0.0 2754.btrll.com 0.0.0.0 2912a.v.fwmrm.net 0.0.0.0 2.adbrite.com 0.0.0.0 2-art-coliseum.com 0.0.0.0 312.1d27c9b8fb.com 0.0.0.0 321cba.com 0.0.0.0 32red.it 0.0.0.0 360ads.com 0.0.0.0 3.adbrite.com 0.0.0.0 3.cennter.com 0.0.0.0 3fns.com 0.0.0.0 4.adbrite.com 0.0.0.0 4c28d6.r.axf8.net 0.0.0.0 4qinvite.4q.iperceptions.com 0.0.0.0 7500.com 0.0.0.0 76.a.boom.ro 0.0.0.0 7adpower.com 0.0.0.0 7bpeople.com 0.0.0.0 7bpeople.data.7bpeople.com 0.0.0.0 7cnbcnews.com 0.0.0.0 85103.hittail.com 0.0.0.0 8574dnj3yzjace8c8io6zr9u3n.hop.clickbank.net 0.0.0.0 888casino.com 0.0.0.0 961.com 0.0.0.0 9cf9.v.fwmrm.net 0.0.0.0 a01.gestionpub.com 0.0.0.0 a.0day.kiev.ua 0.0.0.0 a1.greenadworks.net 0.0.0.0 a1.interclick.com 0.0.0.0 a200.yieldoptimizer.com 0.0.0.0 a2.mediagra.com 0.0.0.0 a2.websponsors.com 0.0.0.0 a3.suntimes.com 0.0.0.0 a3.websponsors.com 0.0.0.0 a4.websponsors.com 0.0.0.0 a5.websponsors.com 0.0.0.0 a.admaxserver.com 0.0.0.0 a.adorika.net 0.0.0.0 a.ad.playstation.net 0.0.0.0 
a.adready.com 0.0.0.0 a.ads1.msn.com 0.0.0.0 a.ads2.msn.com 0.0.0.0 a.adstome.com 0.0.0.0 aads.treehugger.com 0.0.0.0 aams1.aim4media.com 0.0.0.0 aan.amazon.com 0.0.0.0 aa-nb.marketgid.com 0.0.0.0 aa.newsblock.dt00.net 0.0.0.0 aa.newsblock.marketgid.com 0.0.0.0 a.as-eu.falkag.net 0.0.0.0 a.as-us.falkag.net 0.0.0.0 aax-us-east.amazon-adsystem.com 0.0.0.0 abcnews.footprint.net 0.0.0.0 a.boom.ro 0.0.0.0 abrogatesdv.info 0.0.0.0 abseckw.adtlgc.com 0.0.0.0 a.collective-media.net 0.0.0.0 ac.rnm.ca 0.0.0.0 actiondesk.com 0.0.0.0 actionflash.com 0.0.0.0 action.ientry.net 0.0.0.0 action.mathtag.com 0.0.0.0 action.media6degrees.com 0.0.0.0 actionsplash.com 0.0.0.0 ac.tynt.com 0.0.0.0 acvs.mediaonenetwork.net 0.0.0.0 acvsrv.mediaonenetwork.net 0.0.0.0 ad01.adonspot.com 0.0.0.0 ad01.focalink.com 0.0.0.0 ad01.mediacorpsingapore.com 0.0.0.0 ad02.focalink.com 0.0.0.0 ad03.focalink.com 0.0.0.0 ad04.focalink.com 0.0.0.0 ad05.focalink.com 0.0.0.0 ad06.focalink.com 0.0.0.0 ad07.focalink.com 0.0.0.0 ad08.focalink.com 0.0.0.0 ad09.focalink.com 0.0.0.0 ad0.haynet.com 0.0.0.0 ad101com.adbureau.net 0.0.0.0 ad10.bannerbank.ru 0.0.0.0 ad10.focalink.com 0.0.0.0 ad11.bannerbank.ru 0.0.0.0 ad11.focalink.com 0.0.0.0 ad12.bannerbank.ru 0.0.0.0 ad12.focalink.com 0.0.0.0 ad13.focalink.com 0.0.0.0 ad14.focalink.com 0.0.0.0 ad15.focalink.com 0.0.0.0 ad16.focalink.com 0.0.0.0 ad17.focalink.com 0.0.0.0 ad18.focalink.com 0.0.0.0 ad19.focalink.com 0.0.0.0 ad1.adtitan.net 0.0.0.0 ad1.bannerbank.ru 0.0.0.0 ad1.clickhype.com 0.0.0.0 ad1.emediate.dk 0.0.0.0 ad1.emediate.se 0.0.0.0 ad1.gamezone.com 0.0.0.0 ad1.hotel.com 0.0.0.0 ad1.lbn.ru 0.0.0.0 ad1.peel.com 0.0.0.0 ad1.popcap.com 0.0.0.0 ad1.yomiuri.co.jp 0.0.0.0 ad1.yourmedia.com 0.0.0.0 ad234.prbn.ru 0.0.0.0 ad2.adecn.com 0.0.0.0 ad2.bal.dotandad.com 0.0.0.0 ad2.bannerbank.ru 0.0.0.0 ad2.bannerhost.ru 0.0.0.0 ad2.bbmedia.cz 0.0.0.0 ad2.cooks.com 0.0.0.0 ad2.firehousezone.com 0.0.0.0 ad2games.com 0.0.0.0 ad2.gammae.com 0.0.0.0 ad2.hotel.com 0.0.0.0 ad2.ip.ro 0.0.0.0 ad2.lbn.ru 0.0.0.0 ad2.nationalreview.com 0.0.0.0 ad2.pamedia.com 0.0.0.0 ad2.parom.hu 0.0.0.0 ad2.peel.com 0.0.0.0 ad2.pl 0.0.0.0 ad2.pl.mediainter.net 0.0.0.0 ad2.sbisec.co.jp 0.0.0.0 ad2.smni.com 0.0.0.0 ad.360yield.com 0.0.0.0 ad3.adfarm1.adition.com 0.0.0.0 ad3.bannerbank.ru 0.0.0.0 ad3.bb.ru 0.0.0.0 ad.3dnews.ru 0.0.0.0 ad3.lbn.ru 0.0.0.0 ad3.nationalreview.com 0.0.0.0 ad3.rambler.ru 0.0.0.0 ad41.atlas.cz 0.0.0.0 ad4.adfarm1.adition.com 0.0.0.0 ad4.bannerbank.ru 0.0.0.0 ad4.lbn.ru 0.0.0.0 ad4.liverail.com 0.0.0.0 ad4.speedbit.com 0.0.0.0 ad5.bannerbank.ru 0.0.0.0 ad5.lbn.ru 0.0.0.0 ad6.bannerbank.ru 0.0.0.0 ad6.horvitznewspapers.net 0.0.0.0 ad.71i.de 0.0.0.0 ad7.bannerbank.ru 0.0.0.0 ad8.bannerbank.ru 0.0.0.0 ad9.bannerbank.ru 0.0.0.0 ad.abcnews.com 0.0.0.0 ad.aboutwebservices.com 0.0.0.0 ad.adfunky.com 0.0.0.0 ad.adition.de 0.0.0.0 ad.adition.net 0.0.0.0 ad.adlegend.com 0.0.0.0 ad.admarketplace.net 0.0.0.0 ad.adnet.biz 0.0.0.0 ad.adnet.de 0.0.0.0 ad.adnetwork.com.br 0.0.0.0 ad.adnetwork.net 0.0.0.0 ad.adorika.com 0.0.0.0 ad.adperium.com 0.0.0.0 ad.adriver.ru 0.0.0.0 ad.adserve.com 0.0.0.0 ad.adserverplus.com 0.0.0.0 ad.adsmart.net 0.0.0.0 ad.adtegrity.net 0.0.0.0 ad.adtoma.com 0.0.0.0 ad.adverticum.net 0.0.0.0 ad.advertstream.com 0.0.0.0 ad.adview.pl 0.0.0.0 ad.afilo.pl 0.0.0.0 ad.aftenposten.no 0.0.0.0 ad.aftonbladet.se 0.0.0.0 ad.afy11.net 0.0.0.0 ad.agava.tbn.ru 0.0.0.0 adagiobanner.s3.amazonaws.com 0.0.0.0 ad.agkn.com 0.0.0.0 ad.amgdgt.com 0.0.0.0 adap.tv 0.0.0.0 ad.aquamediadirect.com 0.0.0.0 ad.asv.de 
0.0.0.0 ad-audit.tubemogul.com 0.0.0.0 ad.auditude.com 0.0.0.0 ad.bannerbank.ru 0.0.0.0 ad.bannerconnect.net 0.0.0.0 adblade.com 0.0.0.0 ad.bnmla.com 0.0.0.0 adbnr.ru 0.0.0.0 adbot.theonion.com 0.0.0.0 adbrite.com 0.0.0.0 adbucks.brandreachsys.com 0.0.0.0 adc2.adcentriconline.com 0.0.0.0 adcache.aftenposten.no 0.0.0.0 adcanadian.com 0.0.0.0 adcash.com 0.0.0.0 adcast.deviantart.com 0.0.0.0 adcentriconline.com 0.0.0.0 adcentric.randomseed.com 0.0.0.0 ad.cibleclick.com 0.0.0.0 ad.clickdistrict.com 0.0.0.0 adclick.hit.gemius.pl 0.0.0.0 ad.clickotmedia.com 0.0.0.0 adclient-af.lp.uol.com.br 0.0.0.0 adclient.uimserv.net 0.0.0.0 adcode.adengage.com 0.0.0.0 adcontent.gamespy.com 0.0.0.0 adcontent.reedbusiness.com 0.0.0.0 adcontent.videoegg.com 0.0.0.0 adcontroller.unicast.com 0.0.0.0 adcount.ohmynews.com 0.0.0.0 adcreative.tribuneinteractive.com 0.0.0.0 adcycle.footymad.net 0.0.0.0 adcycle.icpeurope.net 0.0.0.0 ad.dc2.adtech.de 0.0.0.0 addelivery.thestreet.com 0.0.0.0 ad.designtaxi.com 0.0.0.0 ad.deviantart.com 0.0.0.0 ad.directrev.com 0.0.0.0 addthiscdn.com 0.0.0.0 addthis.com 0.0.0.0 adecn.com 0.0.0.0 ad.egloos.com 0.0.0.0 adengine.rt.ru 0.0.0.0 ad.espn.starwave.com 0.0.0.0 ad.eurosport.com 0.0.0.0 adexpansion.com 0.0.0.0 adexprt.com 0.0.0.0 adexprt.me 0.0.0.0 adexprts.com 0.0.0.0 adext.inkclub.com 0.0.0.0 adfarm1.adition.com 0.0.0.0 adfarm.mserve.ca 0.0.0.0 adfiles.pitchforkmedia.com 0.0.0.0 ad.filmweb.pl 0.0.0.0 ad.firstadsolution.com 0.0.0.0 ad.flux.com #0.0.0.0 adf.ly 0.0.0.0 adforce.ads.imgis.com 0.0.0.0 adforce.adtech.de 0.0.0.0 adforce.adtech.fr 0.0.0.0 adforce.adtech.us 0.0.0.0 adforce.imgis.com 0.0.0.0 adform.com 0.0.0.0 adfu.blockstackers.com 0.0.0.0 ad.funpic.de 0.0.0.0 adfusion.com 0.0.0.0 ad.garantiarkadas.com 0.0.0.0 adgardener.com 0.0.0.0 ad.gazeta.pl 0.0.0.0 ad.goo.ne.jp 0.0.0.0 adgraphics.theonion.com 0.0.0.0 ad.gra.pl 0.0.0.0 ad.gr.doubleclick.net 0.0.0.0 ad.greenmarquee.com 0.0.0.0 adgroup.naver.com 0.0.0.0 ad.hankooki.com 0.0.0.0 ad.harrenmedianetwork.com 0.0.0.0 adhearus.com 0.0.0.0 adhese.be 0.0.0.0 adhese.com 0.0.0.0 adhitzads.com 0.0.0.0 ad.horvitznewspapers.net 0.0.0.0 ad.host.bannerflow.com 0.0.0.0 ad.howstuffworks.com 0.0.0.0 adhref.pl #0.0.0.0 ad.hulu.com # Uncomment to block Hulu. 
0.0.0.0 ad.iconadserver.com 0.0.0.0 adidm.idmnet.pl 0.0.0.0 adidm.supermedia.pl 0.0.0.0 adimage.asia1.com.sg 0.0.0.0 adimage.asiaone.com 0.0.0.0 adimage.asiaone.com.sg 0.0.0.0 adimage.blm.net 0.0.0.0 adimages.earthweb.com 0.0.0.0 adimages.go.com 0.0.0.0 adimages.mp3.com 0.0.0.0 adimages.watchmygf.net 0.0.0.0 adi.mainichi.co.jp 0.0.0.0 adimg.activeadv.net 0.0.0.0 adimg.com.com 0.0.0.0 adincl.gopher.com 0.0.0.0 ad.insightexpressai.com 0.0.0.0 ad.investopedia.com 0.0.0.0 adipics.com 0.0.0.0 adireland.com 0.0.0.0 ad.ir.ru 0.0.0.0 ad.isohunt.com 0.0.0.0 adition.com 0.0.0.0 ad.iwin.com 0.0.0.0 adj10.thruport.com 0.0.0.0 adj11.thruport.com 0.0.0.0 adj12.thruport.com 0.0.0.0 adj13.thruport.com 0.0.0.0 adj14.thruport.com 0.0.0.0 adj15.thruport.com 0.0.0.0 adj16r1.thruport.com 0.0.0.0 adj16.thruport.com 0.0.0.0 adj17.thruport.com 0.0.0.0 adj18.thruport.com 0.0.0.0 adj19.thruport.com 0.0.0.0 adj1.thruport.com 0.0.0.0 adj22.thruport.com 0.0.0.0 adj23.thruport.com 0.0.0.0 adj24.thruport.com 0.0.0.0 adj25.thruport.com 0.0.0.0 adj26.thruport.com 0.0.0.0 adj27.thruport.com 0.0.0.0 adj28.thruport.com 0.0.0.0 adj29.thruport.com 0.0.0.0 adj2.thruport.com 0.0.0.0 adj30.thruport.com 0.0.0.0 adj31.thruport.com 0.0.0.0 adj32.thruport.com 0.0.0.0 adj33.thruport.com 0.0.0.0 adj34.thruport.com 0.0.0.0 adj35.thruport.com 0.0.0.0 adj36.thruport.com 0.0.0.0 adj37.thruport.com 0.0.0.0 adj38.thruport.com 0.0.0.0 adj39.thruport.com 0.0.0.0 adj3.thruport.com 0.0.0.0 adj40.thruport.com 0.0.0.0 adj41.thruport.com 0.0.0.0 adj43.thruport.com 0.0.0.0 adj44.thruport.com 0.0.0.0 adj45.thruport.com 0.0.0.0 adj46.thruport.com 0.0.0.0 adj47.thruport.com 0.0.0.0 adj48.thruport.com 0.0.0.0 adj49.thruport.com 0.0.0.0 adj4.thruport.com 0.0.0.0 adj50.thruport.com 0.0.0.0 adj51.thruport.com 0.0.0.0 adj52.thruport.com 0.0.0.0 adj53.thruport.com 0.0.0.0 adj54.thruport.com 0.0.0.0 adj55.thruport.com 0.0.0.0 adj56.thruport.com 0.0.0.0 adj5.thruport.com 0.0.0.0 adj6.thruport.com 0.0.0.0 adj7.thruport.com 0.0.0.0 adj8.thruport.com 0.0.0.0 adj9.thruport.com 0.0.0.0 ad.jamba.net 0.0.0.0 ad.jamster.ca 0.0.0.0 adjmps.com 0.0.0.0 adjuggler.net 0.0.0.0 adjuggler.yourdictionary.com 0.0.0.0 ad.kataweb.it 0.0.0.0 ad.kat.ph 0.0.0.0 adkontekst.pl 0.0.0.0 ad.krutilka.ru 0.0.0.0 ad.leadcrunch.com 0.0.0.0 ad.lgappstv.com 0.0.0.0 ad.linkexchange.com 0.0.0.0 ad.linksynergy.com 0.0.0.0 admanager1.collegepublisher.com 0.0.0.0 admanager2.broadbandpublisher.com 0.0.0.0 admanager3.collegepublisher.com 0.0.0.0 admanager.adam4adam.com 0.0.0.0 admanager.beweb.com 0.0.0.0 admanager.btopenworld.com 0.0.0.0 admanager.collegepublisher.com 0.0.0.0 adman.freeze.com 0.0.0.0 adman.in.gr 0.0.0.0 ad.mastermedia.ru 0.0.0.0 admatcher.videostrip.com #http://admatcher.videostrip.com/?puid=23940627&host=www.dumpert.nl&categories=default 0.0.0.0 admatch-syndication.mochila.com 0.0.0.0 admax.quisma.com 0.0.0.0 ad.media-servers.net 0.0.0.0 admedia.xoom.com 0.0.0.0 admeld.com 0.0.0.0 admeta.vo.llnwd.net #0.0.0.0 adm.fwmrm.net #may interfere with nhl.com 0.0.0.0 admin.digitalacre.com 0.0.0.0 admin.hotkeys.com 0.0.0.0 admin.inq.com 0.0.0.0 admonkey.dapper.net 0.0.0.0 ad.moscowtimes.ru 0.0.0.0 adm.shacknews.com 0.0.0.0 adms.physorg.com 0.0.0.0 ad.my.doubleclick.net 0.0.0.0 ad.nate.com 0.0.0.0 adn.ebay.com 0.0.0.0 adnet.asahi.com 0.0.0.0 adnet.biz 0.0.0.0 adnet.chicago.tribune.com 0.0.0.0 adnet.com 0.0.0.0 adnet.de 0.0.0.0 ad.network60.com 0.0.0.0 adnetwork.nextgen.net 0.0.0.0 adnetwork.rovicorp.com 0.0.0.0 adnetxchange.com 0.0.0.0 adng.ascii24.com 0.0.0.0 adn.kinkydollars.com 0.0.0.0 
ad.nozonedata.com 0.0.0.0 adnxs.com 0.0.0.0 adnxs.revsci.net 0.0.0.0 adobee.com 0.0.0.0 adobe.tt.omtrdc.net 0.0.0.0 adocean.pl 0.0.0.0 ad.ohmynews.com 0.0.0.0 adopt.euroclick.com 0.0.0.0 adopt.precisead.com 0.0.0.0 adotube.com 0.0.0.0 ad.parom.hu 0.0.0.0 ad.partis.si 0.0.0.0 adpepper.dk 0.0.0.0 adp.gazeta.pl 0.0.0.0 ad.ph-prt.tbn.ru 0.0.0.0 adpick.switchboard.com 0.0.0.0 ad.pravda.ru 0.0.0.0 ad.preferences.com 0.0.0.0 ad.pro-advertising.com 0.0.0.0 ad.propellerads.com 0.0.0.0 ad.prv.pl 0.0.0.0 adpulse.ads.targetnet.com 0.0.0.0 adpush.dreamscape.com 0.0.0.0 adq.nextag.com 0.0.0.0 adremote.pathfinder.com 0.0.0.0 adremote.timeinc.aol.com 0.0.0.0 adremote.timeinc.net 0.0.0.0 ad.repubblica.it 0.0.0.0 adriver.ru 0.0.0.0 adroll.com 0.0.0.0 adrotate.se 0.0.0.0 adrotator.se 0.0.0.0 ad.ru.doubleclick.net 0.0.0.0 ads01.focalink.com 0.0.0.0 ads01.hyperbanner.net 0.0.0.0 ads02.focalink.com 0.0.0.0 ads02.hyperbanner.net 0.0.0.0 ads03.focalink.com 0.0.0.0 ads03.hyperbanner.net 0.0.0.0 ads04.focalink.com 0.0.0.0 ads04.hyperbanner.net 0.0.0.0 ads05.focalink.com 0.0.0.0 ads05.hyperbanner.net 0.0.0.0 ads06.focalink.com 0.0.0.0 ads06.hyperbanner.net 0.0.0.0 ads07.focalink.com 0.0.0.0 ads07.hyperbanner.net 0.0.0.0 ads08.focalink.com 0.0.0.0 ads08.hyperbanner.net 0.0.0.0 ads09.focalink.com 0.0.0.0 ads09.hyperbanner.net 0.0.0.0 ads0.okcupid.com 0.0.0.0 ads10.focalink.com 0.0.0.0 ads10.hyperbanner.net 0.0.0.0 ads10.speedbit.com 0.0.0.0 ads10.udc.advance.net 0.0.0.0 ads11.focalink.com 0.0.0.0 ads11.hyperbanner.net 0.0.0.0 ads11.udc.advance.net 0.0.0.0 ads12.focalink.com 0.0.0.0 ads12.hyperbanner.net 0.0.0.0 ads12.udc.advance.net 0.0.0.0 ads13.focalink.com 0.0.0.0 ads13.hyperbanner.net 0.0.0.0 ads13.udc.advance.net 0.0.0.0 ads14.bpath.com 0.0.0.0 ads14.focalink.com 0.0.0.0 ads14.hyperbanner.net 0.0.0.0 ads14.udc.advance.net 0.0.0.0 ads15.bpath.com 0.0.0.0 ads15.focalink.com 0.0.0.0 ads15.hyperbanner.net 0.0.0.0 ads15.udc.advance.net 0.0.0.0 ads16.advance.net 0.0.0.0 ads16.focalink.com 0.0.0.0 ads16.hyperbanner.net 0.0.0.0 ads16.udc.advance.net 0.0.0.0 ads17.focalink.com 0.0.0.0 ads17.hyperbanner.net 0.0.0.0 ads18.focalink.com 0.0.0.0 ads18.hyperbanner.net 0.0.0.0 ads19.focalink.com 0.0.0.0 ads1.activeagent.at 0.0.0.0 ads1.ad-flow.com 0.0.0.0 ads1.admedia.ro 0.0.0.0 ads1.advance.net 0.0.0.0 ads1.advertwizard.com 0.0.0.0 ads1.ami-admin.com 0.0.0.0 ads1.canoe.ca 0.0.0.0 ads1.destructoid.com 0.0.0.0 ads1.empiretheatres.com 0.0.0.0 ads1.erotism.com 0.0.0.0 ads1.eudora.com 0.0.0.0 ads1.globeandmail.com 0.0.0.0 ads1.itadnetwork.co.uk 0.0.0.0 ads1.jev.co.za 0.0.0.0 ads1.msads.net 0.0.0.0 ads1.msn.com 0.0.0.0 ads1.perfadbrite.com.akadns.net 0.0.0.0 ads1.performancingads.com 0.0.0.0 ads1.realcities.com 0.0.0.0 ads1.revenue.net 0.0.0.0 ads1.sptimes.com 0.0.0.0 ads1.theglobeandmail.com 0.0.0.0 ads1.ucomics.com 0.0.0.0 ads1.udc.advance.net 0.0.0.0 ads1.updated.com 0.0.0.0 ads1.virtumundo.com 0.0.0.0 ads1.zdnet.com 0.0.0.0 ads20.focalink.com 0.0.0.0 ads21.focalink.com 0.0.0.0 ads22.focalink.com 0.0.0.0 ads23.focalink.com 0.0.0.0 ads24.focalink.com 0.0.0.0 ads25.focalink.com 0.0.0.0 ads2.adbrite.com 0.0.0.0 ads2.ad-flow.com 0.0.0.0 ads2.advance.net 0.0.0.0 ads2.advertwizard.com 0.0.0.0 ads2.canoe.ca 0.0.0.0 ads2.clearchannel.com 0.0.0.0 ads2.clickad.com 0.0.0.0 ads2.collegclub.com 0.0.0.0 ads2.collegeclub.com 0.0.0.0 ads2.contentabc.com 0.0.0.0 ads2.drivelinemedia.com 0.0.0.0 ads2.emeraldcoast.com 0.0.0.0 ads2.exhedra.com 0.0.0.0 ads2.firingsquad.com 0.0.0.0 ads2.gamecity.net 0.0.0.0 ads2.jubii.dk 0.0.0.0 ads2.ljworld.com 0.0.0.0 
ads2.msn.com 0.0.0.0 ads2.newtimes.com 0.0.0.0 ads2.osdn.com 0.0.0.0 ads2.pittsburghlive.com 0.0.0.0 ads2.realcities.com 0.0.0.0 ads2.revenue.net 0.0.0.0 ads2.rp.pl 0.0.0.0 ads2srv.com 0.0.0.0 ads2.theglobeandmail.com 0.0.0.0 ads2.udc.advance.net 0.0.0.0 ads2.virtumundo.com 0.0.0.0 ads2.weblogssl.com 0.0.0.0 ads2.zdnet.com 0.0.0.0 ads2.zeusclicks.com 0.0.0.0 ads360.com 0.0.0.0 ads36.hyperbanner.net 0.0.0.0 ads3.ad-flow.com 0.0.0.0 ads3.adman.gr 0.0.0.0 ads3.advance.net 0.0.0.0 ads3.advertwizard.com 0.0.0.0 ads3.canoe.ca 0.0.0.0 ads3.freebannertrade.com 0.0.0.0 ads3.gamecity.net 0.0.0.0 ads3.jubii.dk 0.0.0.0 ads3.realcities.com 0.0.0.0 ads3.udc.advance.net 0.0.0.0 ads3.virtumundo.com 0.0.0.0 ads3.zdnet.com 0.0.0.0 ads4.ad-flow.com 0.0.0.0 ads4.advance.net 0.0.0.0 ads4.advertwizard.com 0.0.0.0 ads4.canoe.ca 0.0.0.0 ads4.clearchannel.com 0.0.0.0 ads4.gamecity.net 0.0.0.0 ads4homes.com 0.0.0.0 ads4.realcities.com 0.0.0.0 ads4.udc.advance.net 0.0.0.0 ads4.virtumundo.com 0.0.0.0 ads5.ad-flow.com 0.0.0.0 ads5.advance.net 0.0.0.0 ads5.advertwizard.com 0.0.0.0 ads5.canoe.ca 0.0.0.0 ads.5ci.lt 0.0.0.0 ads5.fxdepo.com 0.0.0.0 ads5.mconetwork.com 0.0.0.0 ads5.udc.advance.net 0.0.0.0 ads5.virtumundo.com 0.0.0.0 ads6.ad-flow.com 0.0.0.0 ads6.advance.net 0.0.0.0 ads6.advertwizard.com 0.0.0.0 ads6.gamecity.net 0.0.0.0 ads6.udc.advance.net 0.0.0.0 ads7.ad-flow.com 0.0.0.0 ads7.advance.net 0.0.0.0 ads7.advertwizard.com 0.0.0.0 ads.7days.ae 0.0.0.0 ads7.gamecity.net 0.0.0.0 ads7.speedbit.com 0.0.0.0 ads7.udc.advance.net 0.0.0.0 ads.8833.com 0.0.0.0 ads8.ad-flow.com 0.0.0.0 ads8.advertwizard.com 0.0.0.0 ads8.com 0.0.0.0 ads8.udc.advance.net 0.0.0.0 ads9.ad-flow.com 0.0.0.0 ads9.advertwizard.com 0.0.0.0 ads9.udc.advance.net 0.0.0.0 ads.abs-cbn.com 0.0.0.0 ads.accelerator-media.com 0.0.0.0 ads.aceweb.net 0.0.0.0 ads.activeagent.at 0.0.0.0 ads.active.com 0.0.0.0 ads.ad4game.com 0.0.0.0 ads.adap.tv 0.0.0.0 ads.adbrite.com 0.0.0.0 ads.adbroker.de 0.0.0.0 ads.adcorps.com 0.0.0.0 ads.addesktop.com 0.0.0.0 ads.addynamix.com 0.0.0.0 ads.adengage.com 0.0.0.0 ads.ad-flow.com 0.0.0.0 ads.adfox.ru 0.0.0.0 ads.adgoto.com 0.0.0.0 ads.adhall.com 0.0.0.0 ads.adhearus.com 0.0.0.0 ads.adhostingsolutions.com 0.0.0.0 ads.admarvel.com 0.0.0.0 ads.admaximize.com 0.0.0.0 adsadmin.aspentimes.com 0.0.0.0 adsadmin.corusradionetwork.com 0.0.0.0 adsadmin.vaildaily.com 0.0.0.0 ads.admonitor.net 0.0.0.0 ads.adn.com 0.0.0.0 ads.adroar.com 0.0.0.0 ads.adsag.com 0.0.0.0 ads.adsbookie.com 0.0.0.0 ads.adshareware.net 0.0.0.0 ads.adsinimages.com 0.0.0.0 ads.adsonar.com 0.0.0.0 ads.adsrvmedia.com 0.0.0.0 ads.adtegrity.net 0.0.0.0 ads.adtiger.de 0.0.0.0 ads.adultfriendfinder.com 0.0.0.0 ads.adultswim.com 0.0.0.0 ads.advance.net 0.0.0.0 ads.adverline.com 0.0.0.0 ads.adviva.net 0.0.0.0 ads.advolume.com 0.0.0.0 ads.adworldnetwork.com 0.0.0.0 ads.adx.nu 0.0.0.0 ads.adxpansion.com 0.0.0.0 ads.adxpose.com 0.0.0.0 ads.adxpose.mpire.akadns.net 0.0.0.0 ads.affiliates.match.com 0.0.0.0 ads.aftonbladet.se 0.0.0.0 ads.ah-ha.com 0.0.0.0 ads.aintitcool.com 0.0.0.0 ads.airamericaradio.com 0.0.0.0 ads.ak.facebook.com 0.0.0.0 ads.albawaba.com 0.0.0.0 ads.al.com 0.0.0.0 ads.allsites.com 0.0.0.0 ads.allvertical.com 0.0.0.0 ads.amarillo.com 0.0.0.0 ads.amateurmatch.com 0.0.0.0 ads.amazingmedia.com 0.0.0.0 ads.amgdgt.com 0.0.0.0 ads.ami-admin.com 0.0.0.0 ads.anm.co.uk 0.0.0.0 ads.anvato.com 0.0.0.0 ads.aol.com 0.0.0.0 ads.apartmenttherapy.com 0.0.0.0 ads.apn.co.nz 0.0.0.0 ads.apn.co.za 0.0.0.0 ads.appleinsider.com 0.0.0.0 ads.arcadechain.com 0.0.0.0 
ads.aroundtherings.com 0.0.0.0 ads.as4x.tmcs.net 0.0.0.0 ads.as4x.tmcs.ticketmaster.ca 0.0.0.0 ads.as4x.tmcs.ticketmaster.com 0.0.0.0 ads.asia1.com 0.0.0.0 ads.asia1.com.sg 0.0.0.0 ads.aspalliance.com 0.0.0.0 ads.aspentimes.com 0.0.0.0 ads.asp.net 0.0.0.0 ads.associatedcontent.com 0.0.0.0 ads.astalavista.us 0.0.0.0 ads.atlantamotorspeedway.com 0.0.0.0 adsatt.abcnews.starwave.com 0.0.0.0 adsatt.espn.go.com 0.0.0.0 adsatt.espn.starwave.com 0.0.0.0 ads.auctionads.com 0.0.0.0 ads.auctioncity.co.nz 0.0.0.0 ads.auctions.yahoo.com 0.0.0.0 ads.augusta.com 0.0.0.0 ads.aversion2.com 0.0.0.0 ads.aws.sitepoint.com 0.0.0.0 ads.azjmp.com 0.0.0.0 ads.baazee.com 0.0.0.0 ads.bangkokpost.co.th 0.0.0.0 ads.banner.t-online.de 0.0.0.0 ads.barnonedrinks.com 0.0.0.0 ads.battle.net 0.0.0.0 ads.bauerpublishing.com 0.0.0.0 ads.baventures.com 0.0.0.0 ads.bbcworld.com 0.0.0.0 ads.bcnewsgroup.com 0.0.0.0 ads.beeb.com 0.0.0.0 ads.beliefnet.com 0.0.0.0 ads.belointeractive.com 0.0.0.0 ads.beta.itravel2000.com 0.0.0.0 ads.betanews.com 0.0.0.0 ads.bfast.com 0.0.0.0 ads.bfm.valueclick.net 0.0.0.0 ads.bianca.com 0.0.0.0 ads.bidclix.com 0.0.0.0 ads.bidvertiser.com 0.0.0.0 ads.bigcitytools.com 0.0.0.0 ads.biggerboat.com 0.0.0.0 ads.bitsonthewire.com 0.0.0.0 ads.bizhut.com 0.0.0.0 ads.blixem.nl 0.0.0.0 ads.blog.com 0.0.0.0 ads.blogherads.com 0.0.0.0 ads.bloomberg.com 0.0.0.0 ads.blp.calueclick.net 0.0.0.0 ads.blp.valueclick.net 0.0.0.0 ads.bluelithium.com 0.0.0.0 ads.bluemountain.com 0.0.0.0 ads.bonnint.net 0.0.0.0 ads.box.sk 0.0.0.0 ads.brabys.com 0.0.0.0 ads.brand.net 0.0.0.0 ads.bridgetrack.com 0.0.0.0 ads.britishexpats.com 0.0.0.0 ads.buscape.com.br 0.0.0.0 ads.businessclick.com 0.0.0.0 ads.businessweek.com 0.0.0.0 ads.calgarysun.com 0.0.0.0 ads.callofdutyblackopsforum.net 0.0.0.0 ads.camrecord.com 0.0.0.0 ads.canoe.ca 0.0.0.0 ads.cardea.se 0.0.0.0 ads.cardplayer.com 0.0.0.0 ads.carltononline.com 0.0.0.0 ads.carocean.co.uk 0.0.0.0 ads.casinocity.com 0.0.0.0 ads.catholic.org 0.0.0.0 ads.cavello.com 0.0.0.0 ads.cbc.ca 0.0.0.0 ads.cdfreaks.com 0.0.0.0 ads.cdnow.com 0.0.0.0 adscendmedia.com 0.0.0.0 ads.centraliprom.com 0.0.0.0 ads.cgchannel.com 0.0.0.0 ads.chalomumbai.com 0.0.0.0 ads.champs-elysees.com 0.0.0.0 ads.channel4.com 0.0.0.0 ads.checkm8.co.za 0.0.0.0 ads.chipcenter.com 0.0.0.0 adscholar.com 0.0.0.0 ads.chumcity.com 0.0.0.0 ads.cjonline.com 0.0.0.0 ads.clamav.net 0.0.0.0 ads.clara.net 0.0.0.0 ads.clearchannel.com 0.0.0.0 ads.cleveland.com 0.0.0.0 ads.clickability.com 0.0.0.0 ads.clickad.com.pl 0.0.0.0 ads.clickagents.com 0.0.0.0 ads.clickhouse.com 0.0.0.0 ads.clicksor.com 0.0.0.0 ads.clickthru.net 0.0.0.0 ads.clicmanager.fr 0.0.0.0 ads.clubzone.com 0.0.0.0 ads.cluster01.oasis.zmh.zope.net 0.0.0.0 ads.cmediaworld.com 0.0.0.0 ads.cmg.valueclick.net 0.0.0.0 ads.cnn.com 0.0.0.0 ads.cnngo.com 0.0.0.0 ads.cobrad.com 0.0.0.0 ads.collegclub.com 0.0.0.0 ads.collegehumor.com 0.0.0.0 ads.collegemix.com 0.0.0.0 ads.com.com 0.0.0.0 ads.comediagroup.hu 0.0.0.0 ads.comicbookresources.com 0.0.0.0 ads.contactmusic.com 0.0.0.0 ads.contentabc.com 0.0.0.0 ads.coopson.com 0.0.0.0 ads.corusradionetwork.com 0.0.0.0 ads.courierpostonline.com 0.0.0.0 ads.cpsgsoftware.com 0.0.0.0 ads.crakmedia.com 0.0.0.0 ads.crapville.com 0.0.0.0 ads.creative-serving.com 0.0.0.0 ads.crosscut.com 0.0.0.0 ads.ctvdigital.net 0.0.0.0 ads.currantbun.com 0.0.0.0 ads.cyberfight.ru 0.0.0.0 ads.cybersales.cz 0.0.0.0 ads.cybertrader.com 0.0.0.0 ads.dada.it 0.0.0.0 ads.danworld.net 0.0.0.0 adsdaq.com 0.0.0.0 ads.dbforums.com 0.0.0.0 ads.ddj.com 0.0.0.0 ads.dealnews.com 
0.0.0.0 ads.democratandchronicle.com 0.0.0.0 ads.dennisnet.co.uk 0.0.0.0 ads.designboom.com 0.0.0.0 ads.designtaxi.com 0.0.0.0 ads.desmoinesregister.com 0.0.0.0 ads-de.spray.net 0.0.0.0 ads.detelefoongids.nl 0.0.0.0 ads.developershed.com 0.0.0.0 ads.deviantart.com 0.0.0.0 ads-dev.youporn.com 0.0.0.0 ads.digitalacre.com 0.0.0.0 ads.digital-digest.com 0.0.0.0 ads.digitalhealthcare.com 0.0.0.0 ads.digitalmedianet.com 0.0.0.0 ads.digitalpoint.com 0.0.0.0 ads.dimcab.com 0.0.0.0 ads.directionsmag.com 0.0.0.0 ads-direct.prodigy.net 0.0.0.0 ads.discovery.com 0.0.0.0 ads.dk 0.0.0.0 ads.doclix.com 0.0.0.0 ads.domeus.com 0.0.0.0 ads.dontpanicmedia.com 0.0.0.0 ads.dothads.com 0.0.0.0 ads.doubleviking.com 0.0.0.0 ads.drf.com 0.0.0.0 ads.drivelinemedia.com 0.0.0.0 ads.drugs.com 0.0.0.0 ads.dumpalink.com 0.0.0.0 adsearch.adkontekst.pl 0.0.0.0 adsearch.pl 0.0.0.0 adsearch.wp.pl 0.0.0.0 ads.ecircles.com 0.0.0.0 ads.economist.com 0.0.0.0 ads.ecosalon.com 0.0.0.0 ads.edirectme.com 0.0.0.0 ads.einmedia.com 0.0.0.0 ads.eircom.net 0.0.0.0 ads.emeraldcoast.com 0.0.0.0 ads.enliven.com 0.0.0.0 ad.sensismediasmart.com.au 0.0.0.0 adsentnetwork.com 0.0.0.0 adserer.ihigh.com 0.0.0.0 ads.erotism.com 0.0.0.0 adserv001.adtech.de 0.0.0.0 adserv001.adtech.fr 0.0.0.0 adserv001.adtech.us 0.0.0.0 adserv002.adtech.de 0.0.0.0 adserv002.adtech.fr 0.0.0.0 adserv002.adtech.us 0.0.0.0 adserv003.adtech.de 0.0.0.0 adserv003.adtech.fr 0.0.0.0 adserv003.adtech.us 0.0.0.0 adserv004.adtech.de 0.0.0.0 adserv004.adtech.fr 0.0.0.0 adserv004.adtech.us 0.0.0.0 adserv005.adtech.de 0.0.0.0 adserv005.adtech.fr 0.0.0.0 adserv005.adtech.us 0.0.0.0 adserv006.adtech.de 0.0.0.0 adserv006.adtech.fr 0.0.0.0 adserv006.adtech.us 0.0.0.0 adserv007.adtech.de 0.0.0.0 adserv007.adtech.fr 0.0.0.0 adserv007.adtech.us 0.0.0.0 adserv008.adtech.de 0.0.0.0 adserv008.adtech.fr 0.0.0.0 adserv008.adtech.us 0.0.0.0 adserv2.bravenet.com 0.0.0.0 adserv.aip.org 0.0.0.0 adservant.guj.de 0.0.0.0 adserv.bravenet.com 0.0.0.0 adserve5.nikkeibp.co.jp 0.0.0.0 adserve.adtoll.com 0.0.0.0 adserve.canadawidemagazines.com 0.0.0.0 adserve.city-ad.com 0.0.0.0 adserve.ehpub.com 0.0.0.0 adserve.gossipgirls.com 0.0.0.0 adserve.mizzenmedia.com 0.0.0.0 adserv.entriq.net 0.0.0.0 adserve.podaddies.com 0.0.0.0 adserve.profit-smart.com 0.0.0.0 adserver01.ancestry.com 0.0.0.0 adserver.100free.com 0.0.0.0 adserver.163.com 0.0.0.0 adserver1.adserver.com.pl 0.0.0.0 adserver1.adtech.com.tr 0.0.0.0 adserver1.backbeatmedia.com 0.0.0.0 adserver1.economist.com 0.0.0.0 adserver1.eudora.com 0.0.0.0 adserver1.harvestadsdepot.com 0.0.0.0 adserver1.hookyouup.com 0.0.0.0 adserver1-images.backbeatmedia.com 0.0.0.0 adserver1.isohunt.com 0.0.0.0 adserver1.lokitorrent.com 0.0.0.0 adserver1.mediainsight.de 0.0.0.0 adserver1.ogilvy-interactive.de 0.0.0.0 adserver1.realtracker.com 0.0.0.0 adserver1.sonymusiceurope.com 0.0.0.0 adserver1.teracent.net 0.0.0.0 adserver1.wmads.com 0.0.0.0 adserver.2618.com 0.0.0.0 adserver2.adserver.com.pl 0.0.0.0 adserver2.atman.pl 0.0.0.0 adserver2.christianitytoday.com 0.0.0.0 adserver2.condenast.co.uk 0.0.0.0 adserver2.creative.com 0.0.0.0 adserver2.eudora.com 0.0.0.0 adserver-2.ig.com.br 0.0.0.0 adserver2.mediainsight.de 0.0.0.0 adserver2.news-journalonline.com 0.0.0.0 adserver2.popdata.de 0.0.0.0 adserver2.realtracker.com 0.0.0.0 adserver2.teracent.net 0.0.0.0 adserver.3digit.de 0.0.0.0 adserver3.eudora.com 0.0.0.0 adserver-3.ig.com.br 0.0.0.0 adserver4.eudora.com 0.0.0.0 adserver-4.ig.com.br 0.0.0.0 adserver-5.ig.com.br 0.0.0.0 adserver.71i.de 0.0.0.0 adserver9.contextad.com 
0.0.0.0 adserver.ad-it.dk 0.0.0.0 adserver.adreactor.com 0.0.0.0 adserver.adremedy.com 0.0.0.0 adserver.ads360.com 0.0.0.0 adserver.adserver.com.pl 0.0.0.0 adserver.adsincontext.com 0.0.0.0 adserver.adtech.de 0.0.0.0 adserver.adtech.fr 0.0.0.0 adserver.adtech.us 0.0.0.0 adserver.adtechus.com 0.0.0.0 adserver.adultfriendfinder.com 0.0.0.0 adserver.advertist.com 0.0.0.0 adserver.affiliatemg.com 0.0.0.0 adserver.affiliation.com 0.0.0.0 adserver.aim4media.com 0.0.0.0 adserver.a.in.monster.com 0.0.0.0 adserver.airmiles.ca 0.0.0.0 adserver.akqa.net 0.0.0.0 adserver.allheadlinenews.com 0.0.0.0 adserver.amnews.com 0.0.0.0 adserver.ancestry.com 0.0.0.0 adserver.anemo.com 0.0.0.0 adserver.anm.co.uk 0.0.0.0 adserver.aol.fr 0.0.0.0 adserver.archant.co.uk 0.0.0.0 adserver.artempireindustries.com 0.0.0.0 adserver.arttoday.com 0.0.0.0 adserver.atari.net 0.0.0.0 adserverb.conjelco.com 0.0.0.0 adserver.betandwin.de 0.0.0.0 adserver.billiger-surfen.de 0.0.0.0 adserver.billiger-telefonieren.de 0.0.0.0 adserver.bizland-inc.net 0.0.0.0 adserver.bluereactor.com 0.0.0.0 adserver.bluereactor.net 0.0.0.0 adserver.bluewin.ch 0.0.0.0 adserver.buttonware.com 0.0.0.0 adserver.buttonware.net 0.0.0.0 adserver.cams.com 0.0.0.0 adserver.cantv.net 0.0.0.0 adserver.cebu-online.com 0.0.0.0 adserver.cheatplanet.com 0.0.0.0 adserver.chickclick.com 0.0.0.0 adserver.click4cash.de 0.0.0.0 adserver.clubic.com 0.0.0.0 adserver.clundressed.com 0.0.0.0 adserver.co.il 0.0.0.0 adserver.colleges.com 0.0.0.0 adserver.com 0.0.0.0 adserver.comparatel.fr 0.0.0.0 adserver.com-solutions.com 0.0.0.0 adserver.conjelco.com 0.0.0.0 adserver.corusradionetwork.com 0.0.0.0 adserver.creative-asia.com 0.0.0.0 adserver.creativeinspire.com 0.0.0.0 adserver.dayrates.com 0.0.0.0 adserver.dbusiness.com 0.0.0.0 adserver.developersnetwork.com 0.0.0.0 adserver.devx.com 0.0.0.0 adserver.digitalpartners.com 0.0.0.0 adserver.digitoday.com 0.0.0.0 adserver.directforce.com 0.0.0.0 adserver.directforce.net 0.0.0.0 adserver.dnps.com 0.0.0.0 adserver.dotcommedia.de 0.0.0.0 adserver.dotmusic.com 0.0.0.0 adserver.eham.net 0.0.0.0 adserver.emapadserver.com 0.0.0.0 adserver.emporis.com 0.0.0.0 adserver.emulation64.com 0.0.0.0 adserver-espnet.sportszone.net 0.0.0.0 adserver.eudora.com 0.0.0.0 adserver.eva2000.com 0.0.0.0 adserver.expatica.nxs.nl 0.0.0.0 adserver.ezzhosting.com 0.0.0.0 adserver.filefront.com 0.0.0.0 adserver.fmpub.net 0.0.0.0 adserver.fr.adtech.de 0.0.0.0 adserver.freecity.de 0.0.0.0 adserver.freenet.de 0.0.0.0 adserver.friendfinder.com 0.0.0.0 adserver.gameparty.net 0.0.0.0 adserver.gamesquad.net 0.0.0.0 adserver.garden.com 0.0.0.0 adserver.gorillanation.com 0.0.0.0 adserver.gr 0.0.0.0 adserver.gunaxin.com 0.0.0.0 adserver.hardsextube.com 0.0.0.0 adserver.hardwareanalysis.com 0.0.0.0 adserver.harktheherald.com 0.0.0.0 adserver.harvestadsdepot.com 0.0.0.0 adserver.hellasnet.gr 0.0.0.0 adserver.hg-computer.de 0.0.0.0 adserver.hi-m.de 0.0.0.0 adserver.hispavista.com 0.0.0.0 adserver.hk.outblaze.com 0.0.0.0 adserver.home.pl 0.0.0.0 adserver.hostinteractive.com 0.0.0.0 adserver.humanux.com 0.0.0.0 adserver.hwupgrade.it 0.0.0.0 adserver.ifmagazine.com 0.0.0.0 adserver.ig.com.br 0.0.0.0 adserver.ign.com 0.0.0.0 adserver.ilounge.com 0.0.0.0 adserver.infinit.net 0.0.0.0 adserver.infotiger.com 0.0.0.0 adserver.interfree.it 0.0.0.0 adserver.inwind.it 0.0.0.0 adserver.ision.de 0.0.0.0 adserver.isonews.com 0.0.0.0 adserver.ixm.co.uk 0.0.0.0 adserver.jacotei.com.br 0.0.0.0 adserver.janes.com 0.0.0.0 adserver.janes.net 0.0.0.0 adserver.janes.org 0.0.0.0 
adserver.jolt.co.uk 0.0.0.0 adserver.journalinteractive.com 0.0.0.0 adserver.juicyads.com 0.0.0.0 adserver.kcilink.com 0.0.0.0 adserver.killeraces.com 0.0.0.0 adserver.kylemedia.com 0.0.0.0 adserver.lanacion.com.ar 0.0.0.0 adserver.lanepress.com 0.0.0.0 adserver.latimes.com 0.0.0.0 adserver.legacy-network.com 0.0.0.0 adserver.libero.it 0.0.0.0 adserver.linktrader.co.uk 0.0.0.0 adserver.livejournal.com 0.0.0.0 adserver.lostreality.com 0.0.0.0 adserver.lunarpages.com 0.0.0.0 adserver.lycos.co.jp 0.0.0.0 adserver.m2kcore.com 0.0.0.0 adserver.magazyn.pl 0.0.0.0 adserver.matchcraft.com 0.0.0.0 adserver.merc.com 0.0.0.0 adserver.mindshare.de 0.0.0.0 adserver.mobsmith.com 0.0.0.0 adserver.monster.com 0.0.0.0 adserver.monstersandcritics.com 0.0.0.0 adserver.motonews.pl 0.0.0.0 adserver.myownemail.com 0.0.0.0 adserver.netcreators.nl 0.0.0.0 adserver.netshelter.net 0.0.0.0 adserver.newdigitalgroup.com 0.0.0.0 adserver.newmassmedia.net 0.0.0.0 adserver.news.com 0.0.0.0 adserver.news.com.au 0.0.0.0 adserver.news-journalonline.com 0.0.0.0 adserver.newtimes.com 0.0.0.0 adserver.ngz-network.de 0.0.0.0 adserver.nydailynews.com 0.0.0.0 adserver.nzoom.com 0.0.0.0 adserver.o2.pl 0.0.0.0 adserver.onwisconsin.com 0.0.0.0 adserver.passion.com 0.0.0.0 adserver.phatmax.net 0.0.0.0 adserver.phillyburbs.com 0.0.0.0 adserver.pl 0.0.0.0 adserver.planet-multiplayer.de 0.0.0.0 adserver.plhb.com 0.0.0.0 adserver.pollstar.com 0.0.0.0 adserver.portalofevil.com 0.0.0.0 adserver.portal.pl 0.0.0.0 adserver.portugalmail.pt 0.0.0.0 adserver.prodigy.net 0.0.0.0 adserver.proteinos.com 0.0.0.0 adserver.radio-canada.ca 0.0.0.0 adserver.ratestar.net 0.0.0.0 adserver.revver.com 0.0.0.0 adserver.ro 0.0.0.0 adserver.sabc.co.za 0.0.0.0 adserver.sabcnews.co.za 0.0.0.0 adserver.sanomawsoy.fi 0.0.0.0 adserver.scmp.com 0.0.0.0 adserver.securityfocus.com 0.0.0.0 adserver.sextracker.com 0.0.0.0 adserver.sharewareonline.com 0.0.0.0 adserver.singnet.com 0.0.0.0 adserver.sl.kharkov.ua 0.0.0.0 adserver.smashtv.com 0.0.0.0 adserver.snowball.com 0.0.0.0 adserver.softonic.com 0.0.0.0 adserver.soloserver.com 0.0.0.0 adserversolutions.com 0.0.0.0 adserver.swiatobrazu.pl 0.0.0.0 adserver.synergetic.de 0.0.0.0 adserver.telalink.net 0.0.0.0 adserver.te.pt 0.0.0.0 adserver.teracent.net 0.0.0.0 adserver.terra.com.br 0.0.0.0 adserver.terra.es 0.0.0.0 adserver.theknot.com 0.0.0.0 adserver.theonering.net 0.0.0.0 adserver.thirty4.com 0.0.0.0 adserver.thisislondon.co.uk 0.0.0.0 adserver.tilted.net 0.0.0.0 adserver.tqs.ca 0.0.0.0 adserver.track-star.com 0.0.0.0 adserver.trader.ca 0.0.0.0 adserver.trafficsyndicate.com 0.0.0.0 adserver.trb.com 0.0.0.0 adserver.tribuneinteractive.com 0.0.0.0 adserver.tsgadv.com 0.0.0.0 adserver.tulsaworld.com 0.0.0.0 adserver.tweakers.net 0.0.0.0 adserver.twitpic.com 0.0.0.0 adserver.ugo.com 0.0.0.0 adserver.ugo.nl 0.0.0.0 adserver.ukplus.co.uk 0.0.0.0 adserver.uproxx.com 0.0.0.0 adserver.usermagnet.com 0.0.0.0 adserver.van.net 0.0.0.0 adserver.virginmedia.com 0.0.0.0 adserver.virgin.net 0.0.0.0 adserver.virtualminds.nl 0.0.0.0 adserver.virtuous.co.uk 0.0.0.0 adserver.voir.ca 0.0.0.0 adserver.webads.co.uk 0.0.0.0 adserver.webads.nl 0.0.0.0 adserver.wemnet.nl 0.0.0.0 adserver.x3.hu 0.0.0.0 adserver.ya.com 0.0.0.0 adserver.yahoo.com 0.0.0.0 adserver.zaz.com.br 0.0.0.0 adserver.zeads.com 0.0.0.0 adserve.shopzilla.com 0.0.0.0 adserve.splicetoday.com 0.0.0.0 adserve.viaarena.com 0.0.0.0 adserv.free6.com 0.0.0.0 adserv.geocomm.com 0.0.0.0 adserv.iafrica.com 0.0.0.0 adservices.google.com 0.0.0.0 adservices.picadmedia.com 0.0.0.0 
adservingcentral.com 0.0.0.0 adserving.cpxinteractive.com 0.0.0.0 adserv.internetfuel.com 0.0.0.0 adserv.jupiter.com 0.0.0.0 adserv.lwmn.net 0.0.0.0 adserv.maineguide.com 0.0.0.0 adserv.muchosucko.com 0.0.0.0 adserv.mywebtimes.com 0.0.0.0 adserv.pitchforkmedia.com 0.0.0.0 adserv.postbulletin.com 0.0.0.0 adserv.qconline.com 0.0.0.0 adserv.quality-channel.de 0.0.0.0 adserv.usps.com 0.0.0.0 adserwer.o2.pl 0.0.0.0 ads.espn.adsonar.com 0.0.0.0 ads.eudora.com 0.0.0.0 ads.eu.msn.com 0.0.0.0 ads.euniverseads.com 0.0.0.0 adseu.novem.pl 0.0.0.0 ads.examiner.net 0.0.0.0 ads.exhedra.com 0.0.0.0 ads.expedia.com 0.0.0.0 ads.expekt.com 0.0.0.0 ads.ezboard.com 0.0.0.0 adsfac.eu 0.0.0.0 adsfac.net 0.0.0.0 adsfac.us 0.0.0.0 ads.fairfax.com.au 0.0.0.0 ads.fark.com 0.0.0.0 ads.fayettevillenc.com 0.0.0.0 ads.filecloud.com 0.0.0.0 ads.fileindexer.com 0.0.0.0 ads.filmup.com 0.0.0.0 ads.first-response.be 0.0.0.0 ads.flabber.nl 0.0.0.0 ads.flashgames247.com 0.0.0.0 ads.fling.com 0.0.0.0 ads.floridatoday.com 0.0.0.0 ads.fool.com 0.0.0.0 ads.forbes.com 0.0.0.0 ads.forbes.net 0.0.0.0 ads.fortunecity.com 0.0.0.0 ads.fredericksburg.com 0.0.0.0 ads.freebannertrade.com 0.0.0.0 ads.freshmeat.net 0.0.0.0 ads.fresnobee.com 0.0.0.0 ads.friendfinder.com 0.0.0.0 ads.ft.com 0.0.0.0 ads.gamblinghit.com 0.0.0.0 ads.gamecity.net 0.0.0.0 ads.gamecopyworld.no 0.0.0.0 ads.gameinformer.com 0.0.0.0 ads.game.net 0.0.0.0 ads.gamershell.com 0.0.0.0 ads.gamespy.com 0.0.0.0 ads.gamespyid.com 0.0.0.0 ads.gateway.com 0.0.0.0 ads.gawker.com 0.0.0.0 ads.gettools.com 0.0.0.0 ads.gigaom.com.php5-12.websitetestlink.com 0.0.0.0 ads.globeandmail.com 0.0.0.0 ads.gmg.valueclick.net 0.0.0.0 ads.gmodules.com 0.0.0.0 ads.god.co.uk 0.0.0.0 ads.gorillanation.com 0.0.0.0 ads.gplusmedia.com 0.0.0.0 ads.granadamedia.com 0.0.0.0 ads.greenbaypressgazette.com 0.0.0.0 ads.greenvilleonline.com 0.0.0.0 ads.guardian.co.uk 0.0.0.0 ads.guardianunlimited.co.uk 0.0.0.0 ads.gunaxin.com 0.0.0.0 ads.halogennetwork.com 0.0.0.0 ads.hamptonroads.com 0.0.0.0 ads.hamtonroads.com 0.0.0.0 ads.hardwarezone.com 0.0.0.0 ads.harpers.org 0.0.0.0 ads.hbv.de 0.0.0.0 ads.hearstmags.com 0.0.0.0 ads.heartlight.org 0.0.0.0 ads.herald-mail.com 0.0.0.0 ads.heraldnet.com 0.0.0.0 ads.heraldonline.com 0.0.0.0 ads.heraldsun.com 0.0.0.0 ads.heroldonline.com 0.0.0.0 ads.he.valueclick.net 0.0.0.0 ads.hitcents.com 0.0.0.0 ads.hlwd.valueclick.net 0.0.0.0 ads.hollandsentinel.com 0.0.0.0 ads.hollywood.com 0.0.0.0 ads.hooqy.com 0.0.0.0 ads.hothardware.com 0.0.0.0 ad.showbizz.net 0.0.0.0 ads.hulu.com.edgesuite.net #0.0.0.0 ads.hulu.com # Uncomment to block Hulu. 
0.0.0.0 ads.humorbua.no 0.0.0.0 ads.i12.de 0.0.0.0 ads.i33.com 0.0.0.0 ads.iafrica.com 0.0.0.0 ads.i-am-bored.com 0.0.0.0 ads.iboost.com 0.0.0.0 ads.icq.com 0.0.0.0 ads.iforex.com 0.0.0.0 ads.ign.com 0.0.0.0 ads.illuminatednation.com 0.0.0.0 ads.imdb.com 0.0.0.0 ads.imgur.com 0.0.0.0 ads.imposibil.ro 0.0.0.0 ads.indiatimes.com 0.0.0.0 ads.indya.com 0.0.0.0 ads.indystar.com 0.0.0.0 ads.inedomedia.com 0.0.0.0 ads.inetdirectories.com 0.0.0.0 ads.inetinteractive.com 0.0.0.0 ads.infi.net 0.0.0.0 ads.infospace.com 0.0.0.0 adsinimages.com 0.0.0.0 ads.injersey.com 0.0.0.0 ads.insidehighered.com 0.0.0.0 ads.intellicast.com 0.0.0.0 ads.internic.co.il 0.0.0.0 ads.inthesidebar.com 0.0.0.0 adsintl.starwave.com 0.0.0.0 ads.iol.co.il 0.0.0.0 ads.ipowerweb.com 0.0.0.0 ads.ireport.com 0.0.0.0 ads.isat-tech.com 0.0.0.0 ads.isoftmarketing.com 0.0.0.0 ads.isum.de 0.0.0.0 ads.itv.com 0.0.0.0 ads.iwon.com 0.0.0.0 ads.jacksonville.com 0.0.0.0 ads.jeneauempire.com 0.0.0.0 ads.jetpackdigital.com 0.0.0.0 ads.jetphotos.net 0.0.0.0 ads.jewcy.com 0.0.0.0 ads.jimworld.com 0.0.0.0 ads.joetec.net 0.0.0.0 ads.jokaroo.com 0.0.0.0 ads.jornadavirtual.com.mx 0.0.0.0 ads.jossip.com 0.0.0.0 ads.jpost.com 0.0.0.0 ads.jubii.dk 0.0.0.0 ads.juicyads.com 0.0.0.0 ads.juneauempire.com 0.0.0.0 ads.jwtt3.com 0.0.0.0 ads.kazaa.com 0.0.0.0 ads.keywordblocks.com 0.0.0.0 ads.kixer.com 0.0.0.0 ads.kleinman.com 0.0.0.0 ads.kmpads.com 0.0.0.0 ads.koreanfriendfinder.com 0.0.0.0 ads.ksl.com 0.0.0.0 ad.slashgear.com 0.0.0.0 ads.leo.org 0.0.0.0 ads.lfstmedia.com 0.0.0.0 ads.lilengine.com 0.0.0.0 ads.link4ads.com 0.0.0.0 ads.linksponsor.com 0.0.0.0 ads.linktracking.net 0.0.0.0 ads.linuxjournal.com 0.0.0.0 ads.linuxsecurity.com 0.0.0.0 ads.list-universe.com 0.0.0.0 ads.live365.com 0.0.0.0 ads.ljworld.com 0.0.0.0 ads.lnkworld.com 0.0.0.0 ads.localnow.com 0.0.0.0 ads-local.sixapart.com 0.0.0.0 ads.lubbockonline.com 0.0.0.0 ads.lucidmedia.com 0.0.0.0 ads.lucidmedia.com.gslb.com 0.0.0.0 ads.lycos.com 0.0.0.0 ads.lycos-europe.com 0.0.0.0 ads.lzjl.com 0.0.0.0 ads.macnews.de 0.0.0.0 ads.macupdate.com 0.0.0.0 ads.madisonavenue.com 0.0.0.0 ads.madison.com 0.0.0.0 ads.magnetic.is 0.0.0.0 ads.mail.com 0.0.0.0 ads.mambocommunities.com 0.0.0.0 ad.sma.punto.net 0.0.0.0 ads.mariuana.it 0.0.0.0 adsmart.com 0.0.0.0 adsmart.co.uk 0.0.0.0 adsmart.net 0.0.0.0 ads.mcafee.com 0.0.0.0 ads.mdchoice.com 0.0.0.0 ads.mediamayhemcorp.com 0.0.0.0 ads.mediaodyssey.com 0.0.0.0 ads.mediaturf.net 0.0.0.0 ads.mefeedia.com 0.0.0.0 ads.megaproxy.com 0.0.0.0 ads.metblogs.com 0.0.0.0 ads.mgnetwork.com 0.0.0.0 ads.mindsetnetwork.com 0.0.0.0 ads.miniclip.com 0.0.0.0 ads.mininova.org 0.0.0.0 ads.mircx.com 0.0.0.0 ads.mixtraffic.com 0.0.0.0 ads.mlive.com 0.0.0.0 ads.mm.ap.org 0.0.0.0 ads.mndaily.com 0.0.0.0 ad.smni.com 0.0.0.0 ads.mobiledia.com 0.0.0.0 ads.mobygames.com 0.0.0.0 ads.modbee.com 0.0.0.0 ads.mofos.com 0.0.0.0 ads.money.pl 0.0.0.0 ads.monster.com 0.0.0.0 ads.mouseplanet.com 0.0.0.0 ads.movieweb.com 0.0.0.0 ads.mp3searchy.com 0.0.0.0 adsm.soush.com 0.0.0.0 ads.mt.valueclick.net 0.0.0.0 ads.mtv.uol.com.br 0.0.0.0 ads.multimania.lycos.fr 0.0.0.0 ads.musiccity.com 0.0.0.0 ads.mustangworks.com 0.0.0.0 ads.mysimon.com 0.0.0.0 ads.mytelus.com 0.0.0.0 ads.nandomedia.com 0.0.0.0 ads.nationalreview.com 0.0.0.0 ads.nativeinstruments.de 0.0.0.0 ads.neoseeker.com 0.0.0.0 ads.neowin.net 0.0.0.0 ads.nerve.com 0.0.0.0 ads.netmechanic.com 0.0.0.0 ads.networkwcs.net 0.0.0.0 ads.networldmedia.net 0.0.0.0 ads.neudesicmediagroup.com 0.0.0.0 ads.newcity.com 0.0.0.0 ads.newcitynet.com 0.0.0.0 
ads.newdream.net 0.0.0.0 ads.newgrounds.com 0.0.0.0 ads.newsint.co.uk 0.0.0.0 ads.newsminerextra.com 0.0.0.0 ads.newsobserver.com 0.0.0.0 ads.newsquest.co.uk 0.0.0.0 ads.newtention.net 0.0.0.0 ads.newtimes.com 0.0.0.0 adsnew.userfriendly.org 0.0.0.0 ads.ngenuity.com 0.0.0.0 ads.ninemsn.com.au 0.0.0.0 adsniper.ru 0.0.0.0 ads.nola.com 0.0.0.0 ads.northjersey.com 0.0.0.0 ads.novem.pl 0.0.0.0 ads.nowrunning.com 0.0.0.0 ads.npr.valueclick.net 0.0.0.0 ads.ntadvice.com 0.0.0.0 ads.nudecards.com 0.0.0.0 ads.nwsource.com 0.0.0.0 ads.nwsource.com.edgesuite.net 0.0.0.0 ads.nyi.net 0.0.0.0 ads.nyjournalnews.com 0.0.0.0 ads.nypost.com 0.0.0.0 ads.nytimes.com 0.0.0.0 ads.o2.pl 0.0.0.0 adsoftware.com 0.0.0.0 adsoldier.com 0.0.0.0 ads.ole.com 0.0.0.0 ads.omaha.com 0.0.0.0 adsonar.com 0.0.0.0 adson.awempire.com 0.0.0.0 ads.onlineathens.com 0.0.0.0 ads.online.ie 0.0.0.0 ads.onvertise.com 0.0.0.0 ads.ookla.com 0.0.0.0 ads.open.pl 0.0.0.0 ads.opensubtitles.org 0.0.0.0 ads.oregonlive.com 0.0.0.0 ads.orsm.net 0.0.0.0 ads.osdn.com 0.0.0.0 ad-souk.com 0.0.0.0 adspaces.ero-advertising.com 0.0.0.0 ads.parrysound.com 0.0.0.0 ads.partner2profit.com 0.0.0.0 ads.pastemagazine.com 0.0.0.0 ads.paxnet.co.kr 0.0.0.0 ads.pcper.com 0.0.0.0 ads.pdxguide.com 0.0.0.0 ads.peel.com 0.0.0.0 ads.peninsulaclarion.com 0.0.0.0 ads.penny-arcade.com 0.0.0.0 ads.pennyweb.com 0.0.0.0 ads.people.com.cn 0.0.0.0 ads.pg.valueclick.net 0.0.0.0 ads.pheedo.com 0.0.0.0 ads.phillyburbs.com 0.0.0.0 ads.phpclasses.org 0.0.0.0 ads.pilotonline.com 0.0.0.0 adspirit.net 0.0.0.0 adspiro.pl 0.0.0.0 ads.pitchforkmedia.com 0.0.0.0 ads.pittsburghlive.com 0.0.0.0 ads.pixiq.com 0.0.0.0 ads.place1.com 0.0.0.0 ads.planet-f1.com 0.0.0.0 ads.plantyours.com 0.0.0.0 ads.pni.com 0.0.0.0 ads.pno.net 0.0.0.0 ads.poconorecord.com 0.0.0.0 ads.pointroll.com 0.0.0.0 ads.portlandmercury.com 0.0.0.0 ads.premiumnetwork.com 0.0.0.0 ads.premiumnetwork.net 0.0.0.0 ads.pressdemo.com 0.0.0.0 ads.pricescan.com 0.0.0.0 ads.primaryclick.com 0.0.0.0 ads.primeinteractive.net 0.0.0.0 ads.prisacom.com 0.0.0.0 ads.profitsdeluxe.com 0.0.0.0 ads.profootballtalk.com 0.0.0.0 ads.program3.com 0.0.0.0 ads.pro-market.net 0.0.0.0 ads.pro-market.net.edgesuite.net 0.0.0.0 ads.prospect.org 0.0.0.0 ads.pubmatic.com 0.0.0.0 ads.queendom.com 0.0.0.0 ads.quicken.com 0.0.0.0 adsr3pg.com.br 0.0.0.0 ads.rackshack.net 0.0.0.0 ads.rasmussenreports.com 0.0.0.0 ads.ratemyprofessors.com 0.0.0.0 adsrc.bankrate.com 0.0.0.0 ads.rcgroups.com 0.0.0.0 ads.rdstore.com 0.0.0.0 ads.realcastmedia.com 0.0.0.0 ads.realcities.com 0.0.0.0 ads.realmedia.de 0.0.0.0 ads.realtechnetwork.net 0.0.0.0 ads.reason.com 0.0.0.0 ads.rediff.com 0.0.0.0 ads.redorbit.com 0.0.0.0 ads.register.com 0.0.0.0 adsremote.scripps.com 0.0.0.0 adsremote.scrippsnetwork.com 0.0.0.0 ads.revenews.com 0.0.0.0 ads.revenue.net 0.0.0.0 adsrevenue.net 0.0.0.0 ads.revsci.net 0.0.0.0 ads.rim.co.uk 0.0.0.0 ads-rm.looksmart.com 0.0.0.0 ads.roanoke.com 0.0.0.0 ads.rockstargames.com 0.0.0.0 ads.rodale.com 0.0.0.0 ads.roiserver.com 0.0.0.0 ads.rondomondo.com 0.0.0.0 ads.rootzoo.com 0.0.0.0 ads.rottentomatoes.com 0.0.0.0 ads.rp-online.de 0.0.0.0 ads.ruralpress.com 0.0.0.0 adsrv2.wilmingtonstar.com 0.0.0.0 adsrv.bankrate.com 0.0.0.0 adsrv.dispatch.com 0.0.0.0 adsrv.emporis.com 0.0.0.0 adsrv.heraldtribune.com 0.0.0.0 adsrv.hpg.com.br 0.0.0.0 adsrv.iol.co.za 0.0.0.0 adsrv.lua.pl 0.0.0.0 adsrv.news.com.au 0.0.0.0 adsrvr.com 0.0.0.0 adsrv.tuscaloosanews.com 0.0.0.0 adsrv.wilmingtonstar.com 0.0.0.0 ads.sacbee.com 0.0.0.0 ads.satyamonline.com 0.0.0.0 ads.savannahnow.com 
0.0.0.0 ads.scabee.com 0.0.0.0 ads.schwabtrader.com 0.0.0.0 ads.scifi.com 0.0.0.0 ads.seattletimes.com 0.0.0.0 ads.sfusion.com 0.0.0.0 ads.shizmoo.com 0.0.0.0 ads.shoppingads.com 0.0.0.0 ads.shoutfile.com 0.0.0.0 ads.sify.com 0.0.0.0 ads.simtel.com 0.0.0.0 ads.simtel.net 0.0.0.0 ads.sitemeter.com 0.0.0.0 ads.sixapart.com 0.0.0.0 adssl01.adtech.de 0.0.0.0 adssl01.adtech.fr 0.0.0.0 adssl01.adtech.us 0.0.0.0 adssl02.adtech.de 0.0.0.0 adssl02.adtech.fr 0.0.0.0 adssl02.adtech.us 0.0.0.0 ads.sl.interpals.net 0.0.0.0 ads.smartclick.com 0.0.0.0 ads.smartclicks.com 0.0.0.0 ads.smartclicks.net 0.0.0.0 ads.snowball.com 0.0.0.0 ads.socialmedia.com 0.0.0.0 ads.sohh.com 0.0.0.0 ads.somethingawful.com 0.0.0.0 ads.space.com 0.0.0.0 adsspace.net 0.0.0.0 ads.specificclick.com 0.0.0.0 ads.specificmedia.com 0.0.0.0 ads.specificpop.com 0.0.0.0 ads.sptimes.com 0.0.0.0 ads.spymac.net 0.0.0.0 ads.stackoverflow.com 0.0.0.0 ads.starbanner.com 0.0.0.0 ads.stephensmedia.com 0.0.0.0 ads.stileproject.com 0.0.0.0 ads.stupid.com 0.0.0.0 ads.sunjournal.com 0.0.0.0 ads.sup.com 0.0.0.0 ads.swiftnews.com 0.0.0.0 ads.switchboard.com 0.0.0.0 ads.teamyehey.com 0.0.0.0 ads.technoratimedia.com 0.0.0.0 ads.techtv.com 0.0.0.0 ads.techvibes.com 0.0.0.0 ads.techweb.com 0.0.0.0 ads.telegraaf.nl 0.0.0.0 ads.telegraph.co.uk 0.0.0.0 ads.the15thinternet.com 0.0.0.0 ads.theawl.com 0.0.0.0 ads.thebugs.ws 0.0.0.0 ads.thecoolhunter.net 0.0.0.0 ads.thecrimson.com 0.0.0.0 ads.thefrisky.com 0.0.0.0 ads.thegauntlet.com 0.0.0.0 ads.theglobeandmail.com 0.0.0.0 ads.theindependent.com 0.0.0.0 ads.theolympian.com 0.0.0.0 ads.thesmokinggun.com 0.0.0.0 ads.thestar.com #Toronto Star 0.0.0.0 ads.thestranger.com 0.0.0.0 ads.thewebfreaks.com 0.0.0.0 adstil.indiatimes.com 0.0.0.0 ads.timesunion.com 0.0.0.0 ads.tiscali.fr 0.0.0.0 ads.tmcs.net 0.0.0.0 ads.tnt.tv 0.0.0.0 adstogo.com 0.0.0.0 adstome.com 0.0.0.0 ads.top500.org #TOP500 SuperComputer Site 0.0.0.0 ads.top-banners.com 0.0.0.0 ads.toronto.com 0.0.0.0 ads.townhall.com 0.0.0.0 ads.track.net 0.0.0.0 ads.traderonline.com 0.0.0.0 ads.traffichaus.com 0.0.0.0 ads.trafficjunky.net 0.0.0.0 ads.traffikings.com 0.0.0.0 adstream.cardboardfish.com 0.0.0.0 adstreams.org 0.0.0.0 ads.treehugger.com 0.0.0.0 ads.tricityherald.com 0.0.0.0 ads.trinitymirror.co.uk 0.0.0.0 ads.tripod.com 0.0.0.0 ads.tripod.lycos.co.uk 0.0.0.0 ads.tripod.lycos.de 0.0.0.0 ads.tripod.lycos.es 0.0.0.0 ads.tromaville.com 0.0.0.0 ads-t.ru 0.0.0.0 ads.trutv.com 0.0.0.0 ads.tucows.com 0.0.0.0 ads.tw.adsonar.com 0.0.0.0 ads.ucomics.com 0.0.0.0 ads.uigc.net 0.0.0.0 ads.undertone.com 0.0.0.0 ads.unixathome.org 0.0.0.0 ads.update.com 0.0.0.0 ad.suprnova.org 0.0.0.0 ads.uproar.com 0.0.0.0 ads.urbandictionary.com 0.0.0.0 ads.usatoday.com 0.0.0.0 ads.us.e-planning.ne 0.0.0.0 ads.us.e-planning.net 0.0.0.0 ads.userfriendly.org 0.0.0.0 ads.v3.com 0.0.0.0 ads.v3exchange.com 0.0.0.0 ads.vaildaily.com 0.0.0.0 ads.valuead.com 0.0.0.0 ads.vegas.com 0.0.0.0 ads.veloxia.com 0.0.0.0 ads.ventivmedia.com 0.0.0.0 ads.veoh.com 0.0.0.0 ads.verkata.com 0.0.0.0 ads.vesperexchange.com 0.0.0.0 ads.vg.basefarm.net 0.0.0.0 ads.viddler.com 0.0.0.0 ads.videoadvertising.com 0.0.0.0 ads.viewlondon.co.uk 0.0.0.0 ads.virginislandsdailynews.com 0.0.0.0 ads.virtualcountries.com 0.0.0.0 ads.vnuemedia.com 0.0.0.0 adsvr.adknowledge.com 0.0.0.0 ads.vs.co 0.0.0.0 ads.vs.com 0.0.0.0 ads.wanadooregie.com 0.0.0.0 ads.warcry.com 0.0.0.0 ads.watershed-publishing.com 0.0.0.0 ads.wave.si 0.0.0.0 ads.weather.ca 0.0.0.0 ads.weather.com 0.0.0.0 ads.web21.com 0.0.0.0 ads.web.alwayson-network.com 
0.0.0.0 ads.web.aol.com 0.0.0.0 ads.webattack.com 0.0.0.0 ads.web.compuserve.com 0.0.0.0 ads.webcoretech.com 0.0.0.0 ads.web.cs.com 0.0.0.0 ads.web.de 0.0.0.0 ads.webfeat.com 0.0.0.0 ads.webheat.com 0.0.0.0 ads.webhosting.info 0.0.0.0 ads.webindia123.com 0.0.0.0 ads-web.mail.com 0.0.0.0 ads.webmd.com 0.0.0.0 ads.webnet.advance.net 0.0.0.0 ads.websponsors.com 0.0.0.0 adsweb.tiscali.cz 0.0.0.0 ads.weissinc.com 0.0.0.0 ads.whaleads.com 0.0.0.0 ads.whi.co.nz 0.0.0.0 ads.winsite.com 0.0.0.0 ads.wnd.com 0.0.0.0 ads.wunderground.com 0.0.0.0 ads.x10.com 0.0.0.0 ads.x10.net 0.0.0.0 ads.x17online.com 0.0.0.0 ads.xboxic.com 0.0.0.0 ads.xbox-scene.com 0.0.0.0 ads.xposed.com 0.0.0.0 ads.xtra.ca 0.0.0.0 ads.xtra.co.nz 0.0.0.0 ads.xtramsn.co.nz 0.0.0.0 ads.yahoo.com 0.0.0.0 ads.yimg.com 0.0.0.0 ads.yimg.com.edgesuite.net 0.0.0.0 ads.yldmgrimg.net 0.0.0.0 adsyndication.msn.com 0.0.0.0 adsyndication.yelldirect.com 0.0.0.0 adsynergy.com 0.0.0.0 ads.youporn.com 0.0.0.0 ads.youtube.com 0.0.0.0 adsys.townnews.com 0.0.0.0 ads.zap2it.com 0.0.0.0 ads.zdnet.com 0.0.0.0 adtag.msn.ca 0.0.0.0 adtag.sympatico.ca 0.0.0.0 adtaily.com 0.0.0.0 adtaily.pl 0.0.0.0 ad.tbn.ru 0.0.0.0 adtcp.ru 0.0.0.0 adtech.de 0.0.0.0 ad.technoramedia.com 0.0.0.0 adtech.panthercustomer.com 0.0.0.0 adtechus.com 0.0.0.0 adtegrity.spinbox.net 0.0.0.0 adtext.pl 0.0.0.0 ad.text.tbn.ru 0.0.0.0 ad.tgdaily.com 0.0.0.0 ad.thehill.com 0.0.0.0 ad.thetyee.ca 0.0.0.0 ad.thewheelof.com 0.0.0.0 adthru.com 0.0.0.0 adtigerpl.adspirit.net 0.0.0.0 ad.tiscali.com 0.0.0.0 adtlgc.com 0.0.0.0 adtology3.com 0.0.0.0 ad.tomshardware.com 0.0.0.0 adtotal.pl 0.0.0.0 adtracking.vinden.nl 0.0.0.0 adtrader.com 0.0.0.0 ad.trafficmp.com 0.0.0.0 adtrak.net 0.0.0.0 ad.turn.com 0.0.0.0 ad.tv2.no 0.0.0.0 ad.twitchguru.com 0.0.0.0 ad.ubnm.co.kr 0.0.0.0 ad.uk.tangozebra.com 0.0.0.0 ad-uk.tiscali.com 0.0.0.0 adultadworld.com 0.0.0.0 ad.usatoday.com 0.0.0.0 adv0005.247realmedia.com 0.0.0.0 adv0035.247realmedia.com 0.0.0.0 adv.440net.com 0.0.0.0 adv.adgates.com 0.0.0.0 adv.adtotal.pl 0.0.0.0 adv.adview.pl 0.0.0.0 adv.bannercity.ru 0.0.0.0 adv.bbanner.it 0.0.0.0 adv.bookclubservices.ca 0.0.0.0 adveng.hiasys.com 0.0.0.0 adveraction.pl 0.0.0.0 advert.bayarea.com 0.0.0.0 advertise.com 0.0.0.0 advertisers.federatedmedia.net 0.0.0.0 advertising.aol.com 0.0.0.0 advertisingbay.com 0.0.0.0 advertising.bbcworldwide.com 0.0.0.0 advertising.com 0.0.0.0 advertising.gfxartist.com 0.0.0.0 advertising.hiasys.com 0.0.0.0 advertising.illinimedia.com 0.0.0.0 advertising.online-media24.de 0.0.0.0 advertising.paltalk.com 0.0.0.0 advertising.wellpack.fr 0.0.0.0 advertising.zenit.org 0.0.0.0 advertlets.com 0.0.0.0 advertpro.investorvillage.com 0.0.0.0 advertpro.sitepoint.com 0.0.0.0 adverts.digitalspy.co.uk 0.0.0.0 adverts.ecn.co.uk 0.0.0.0 adverts.freeloader.com 0.0.0.0 adverts.im4ges.com 0.0.0.0 advertstream.com 0.0.0.0 advert.uloz.to 0.0.0.0 adv.federalpost.ru 0.0.0.0 adv.gazeta.pl 0.0.0.0 advicepl.adocean.pl 0.0.0.0 adview.pl 0.0.0.0 adviva.net 0.0.0.0 adv.lampsplus.com 0.0.0.0 advmaker.ru 0.0.0.0 adv.merlin.co.il 0.0.0.0 adv.netshelter.net 0.0.0.0 adv.publy.net 0.0.0.0 adv.surinter.net 0.0.0.0 advt.webindia123.com 0.0.0.0 ad.vurts.com 0.0.0.0 adv.virgilio.it 0.0.0.0 adv.webmd.com 0.0.0.0 adv.wp.pl 0.0.0.0 adv.zapal.ru 0.0.0.0 advzilla.com 0.0.0.0 adware.kogaryu.com 0.0.0.0 adweb2.hornymatches.com 0.0.0.0 ad.webprovider.com 0.0.0.0 adw.sapo.pt 0.0.0.0 ad.wsod.com 0.0.0.0 adx.adrenalinesk.sk 0.0.0.0 adx.gainesvillesun.com 0.0.0.0 adx.gainesvillsun.com 0.0.0.0 adx.groupstate.com 0.0.0.0 
adx.hendersonvillenews.com 0.0.0.0 adx.heraldtribune.com 0.0.0.0 adxpose.com 0.0.0.0 adx.starnewsonline.com 0.0.0.0 ad.xtendmedia.com 0.0.0.0 adx.theledger.com 0.0.0.0 ad.yadro.ru 0.0.0.0 ad.yieldmanager.com 0.0.0.0 adz.afterdawn.net 0.0.0.0 ad.zanox.com 0.0.0.0 adzerk.net 0.0.0.0 ad.zodera.hu 0.0.0.0 adzone.ro 0.0.0.0 adzone.stltoday.com 0.0.0.0 adzservice.theday.com 0.0.0.0 ae.goodsblock.marketgid.com 0.0.0.0 afe2.specificclick.net 0.0.0.0 afe.specificclick.net 0.0.0.0 aff.foxtab.com 0.0.0.0 affiliate.a4dtracker.com 0.0.0.0 affiliate.aol.com 0.0.0.0 affiliate.baazee.com 0.0.0.0 affiliate.cfdebt.com 0.0.0.0 affiliate.exabytes.com.my 0.0.0.0 affiliate-fr.com 0.0.0.0 affiliate.fr.espotting.com 0.0.0.0 affiliate.googleusercontent.com 0.0.0.0 affiliate.hbytracker.com 0.0.0.0 affiliate.mlntracker.com 0.0.0.0 affiliates.arvixe.com 0.0.0.0 affiliates.eblastengine.com 0.0.0.0 affiliates.genealogybank.com 0.0.0.0 affiliates.globat.com 0.0.0.0 affiliation-france.com 0.0.0.0 affimg.pop6.com 0.0.0.0 afform.co.uk 0.0.0.0 affpartners.com 0.0.0.0 aff.ringtonepartner.com 0.0.0.0 afi.adocean.pl 0.0.0.0 afilo.pl 0.0.0.0 agkn.com 0.0.0.0 aj.600z.com 0.0.0.0 ajcclassifieds.com 0.0.0.0 akaads-espn.starwave.com 0.0.0.0 aka-cdn.adtechus.com 0.0.0.0 aka-cdn-ns.adtech.de 0.0.0.0 aka-cdn-ns.adtechus.com 0.0.0.0 akamai.invitemedia.com 0.0.0.0 ak.buyservices.com 0.0.0.0 a.kerg.net 0.0.0.0 ak.maxserving.com 0.0.0.0 ako.cc 0.0.0.0 ak.p.openx.net 0.0.0.0 al1.sharethis.com 0.0.0.0 alert.police-patrol-agent.com 0.0.0.0 a.ligatus.com 0.0.0.0 a.ligatus.de 0.0.0.0 alliance.adbureau.net 0.0.0.0 all.orfr.adgtw.orangeads.fr 0.0.0.0 altfarm.mediaplex.com 0.0.0.0 amch.questionmarket.com 0.0.0.0 americansingles.click-url.com 0.0.0.0 a.mktw.net 0.0.0.0 amscdn.btrll.com 0.0.0.0 analysis.fc2.com 0.0.0.0 analytics.kwebsoft.com 0.0.0.0 analytics.percentmobile.com 0.0.0.0 analyzer51.fc2.com 0.0.0.0 ankieta-online.pl 0.0.0.0 annuaire-autosurf.com 0.0.0.0 anrtx.tacoda.net 0.0.0.0 answers.us.intellitxt.com 0.0.0.0 an.tacoda.net 0.0.0.0 an.yandex.ru 0.0.0.0 apex-ad.com 0.0.0.0 api.addthis.com 0.0.0.0 api.affinesystems.com 0.0.0.0 api-public.addthis.com 0.0.0.0 apopt.hbmediapro.com 0.0.0.0 apparelncs.com 0.0.0.0 apparel-offer.com 0.0.0.0 appdev.addthis.com 0.0.0.0 appnexus.com 0.0.0.0 apps5.oingo.com 0.0.0.0 app.scanscout.com 0.0.0.0 ap.read.mediation.pns.ap.orangeads.fr 0.0.0.0 a.prisacom.com 0.0.0.0 apx.moatads.com 0.0.0.0 a.rad.live.com 0.0.0.0 a.rad.msn.com 0.0.0.0 arbomedia.pl 0.0.0.0 arbopl.bbelements.com 0.0.0.0 arsconsole.global-intermedia.com 0.0.0.0 art-music-rewardpath.com 0.0.0.0 art-offer.com 0.0.0.0 art-offer.net 0.0.0.0 art-photo-music-premiumblvd.com 0.0.0.0 art-photo-music-rewardempire.com 0.0.0.0 art-photo-music-savingblvd.com 0.0.0.0 as1.falkag.de 0.0.0.0 as1image1.adshuffle.com 0.0.0.0 as1image2.adshuffle.com 0.0.0.0 as1.inoventiv.com 0.0.0.0 as2.falkag.de 0.0.0.0 as3.falkag.de 0.0.0.0 as4.falkag.de 0.0.0.0 as.5to1.com 0.0.0.0 asa.tynt.com 0.0.0.0 asb.tynt.com 0.0.0.0 as.casalemedia.com 0.0.0.0 as.ebz.io 0.0.0.0 asg01.casalemedia.com 0.0.0.0 asg02.casalemedia.com 0.0.0.0 asg03.casalemedia.com 0.0.0.0 asg04.casalemedia.com 0.0.0.0 asg05.casalemedia.com 0.0.0.0 asg06.casalemedia.com 0.0.0.0 asg07.casalemedia.com 0.0.0.0 asg08.casalemedia.com 0.0.0.0 asg09.casalemedia.com 0.0.0.0 asg10.casalemedia.com 0.0.0.0 asg11.casalemedia.com 0.0.0.0 asg12.casalemedia.com 0.0.0.0 asg13.casalemedia.com 0.0.0.0 ask-gps.ru 0.0.0.0 asklots.com 0.0.0.0 askmen.thruport.com 0.0.0.0 asm2.z1.adserver.com 0.0.0.0 asm3.z1.adserver.com 0.0.0.0 
asn.advolution.de 0.0.0.0 asn.cunda.advolution.biz 0.0.0.0 a.ss34.on9mail.com 0.0.0.0 assets.igapi.com 0.0.0.0 assets.kixer.com 0.0.0.0 assets.percentmobile.com 0.0.0.0 as.sexad.net 0.0.0.0 asv.nuggad.net 0.0.0.0 as.vs4entertainment.com 0.0.0.0 as.webmd.com 0.0.0.0 a.tadd.react2media.com 0.0.0.0 at-adserver.alltop.com 0.0.0.0 at.campaigns.f2.com.au 0.0.0.0 at.ceofreehost.com 0.0.0.0 atdmt.com 0.0.0.0 atemda.com 0.0.0.0 athena-ads.wikia.com 0.0.0.0 at.m1.nedstatbasic.net 0.0.0.0 a.total-media.net 0.0.0.0 a.tribalfusion.com 0.0.0.0 a.triggit.com 0.0.0.0 au.adserver.yahoo.com 0.0.0.0 au.ads.link4ads.com 0.0.0.0 aud.pubmatic.com 0.0.0.0 aureate.com 0.0.0.0 auslieferung.commindo-media-ressourcen.de 0.0.0.0 austria1.adverserve.net 0.0.0.0 autocontext.begun.ru 0.0.0.0 automotive-offer.com 0.0.0.0 automotive-rewardpath.com 0.0.0.0 avcounter10.com 0.0.0.0 avpa.dzone.com 0.0.0.0 avpa.javalobby.org 0.0.0.0 a.websponsors.com 0.0.0.0 awesomevipoffers.com 0.0.0.0 awrz.net 0.0.0.0 axp.zedo.com 0.0.0.0 azcentra.app.ur.gcion.com 0.0.0.0 azoogleads.com 0.0.0.0 b1.adbrite.com 0.0.0.0 b1.azjmp.com 0.0.0.0 b2b.filecloud.me 0.0.0.0 babycenter.tt.omtrdc.net 0.0.0.0 b.ads2.msn.com 0.0.0.0 badservant.guj.de 0.0.0.0 b.am15.net 0.0.0.0 bananacashback.com 0.0.0.0 banery.acr.pl 0.0.0.0 banery.netart.pl 0.0.0.0 banery.onet.pl 0.0.0.0 banki.onet.pl 0.0.0.0 bankofamerica.tt.omtrdc.net 0.0.0.0 banman.nepsecure.co.uk 0.0.0.0 banner.1and1.co.uk 0.0.0.0 banner1.pornhost.com 0.0.0.0 banner2.inet-traffic.com 0.0.0.0 bannerads.anytimenews.com 0.0.0.0 bannerads.de 0.0.0.0 bannerads.zwire.com 0.0.0.0 banner.affactive.com 0.0.0.0 banner.betroyalaffiliates.com 0.0.0.0 banner.betwwts.com 0.0.0.0 banner.cdpoker.com 0.0.0.0 banner.clubdicecasino.com 0.0.0.0 bannerconnect.net 0.0.0.0 banner.coza.com 0.0.0.0 banner.diamondclubcasino.com 0.0.0.0 bannerdriven.ru 0.0.0.0 banner.easyspace.com 0.0.0.0 bannerfarm.ace.advertising.com 0.0.0.0 banner.free6.com # www.free6.com 0.0.0.0 bannerhost.egamingonline.com 0.0.0.0 bannerimages.0catch.com 0.0.0.0 banner.joylandcasino.com 0.0.0.0 banner.media-system.de 0.0.0.0 banner.monacogoldcasino.com 0.0.0.0 banner.newyorkcasino.com 0.0.0.0 banner.northsky.com 0.0.0.0 banner.oddcast.com 0.0.0.0 banner.orb.net 0.0.0.0 banner.piratos.de 0.0.0.0 banner.playgatecasino.com 0.0.0.0 bannerpower.com 0.0.0.0 banner.prestigecasino.com 0.0.0.0 banner.publisher.to 0.0.0.0 banner.rbc.ru 0.0.0.0 banner.relcom.ru 0.0.0.0 banners1.linkbuddies.com 0.0.0.0 banners2.castles.org 0.0.0.0 banners3.spacash.com 0.0.0.0 banners.adgoto.com 0.0.0.0 banners.adultfriendfinder.com 0.0.0.0 banners.affiliatefuel.com 0.0.0.0 banners.affiliatefuture.com 0.0.0.0 banners.aftrk.com 0.0.0.0 banners.audioholics.com 0.0.0.0 banners.blogads.com 0.0.0.0 banners.bol.se 0.0.0.0 banners.broadwayworld.com 0.0.0.0 banners.celebritybling.com 0.0.0.0 banners.crisscross.com 0.0.0.0 banners.directnic.com 0.0.0.0 banners.dnastudio.com 0.0.0.0 banners.easydns.com 0.0.0.0 banners.easysolutions.be 0.0.0.0 banners.ebay.com 0.0.0.0 banners.expressindia.com 0.0.0.0 banners.flair.be 0.0.0.0 banners.free6.com # www.free6.com 0.0.0.0 banners.fuifbeest.be 0.0.0.0 banners.globovision.com 0.0.0.0 banners.img.uol.com.br 0.0.0.0 banners.ims.nl 0.0.0.0 banners.iop.org 0.0.0.0 banners.ipotd.com 0.0.0.0 banners.japantoday.com 0.0.0.0 banners.kfmb.com 0.0.0.0 banners.ksl.com 0.0.0.0 banners.linkbuddies.com 0.0.0.0 banners.looksmart.com 0.0.0.0 banners.nbcupromotes.com 0.0.0.0 banners.netcraft.com 0.0.0.0 banners.newsru.com 0.0.0.0 banners.nextcard.com 0.0.0.0 
banners.passion.com 0.0.0.0 banners.pennyweb.com 0.0.0.0 banners.primaryclick.com 0.0.0.0 banners.resultonline.com 0.0.0.0 banners.rspworldwide.com 0.0.0.0 banners.sextracker.com 0.0.0.0 banners.spiceworks.com 0.0.0.0 banners.thegridwebmaster.com 0.0.0.0 banners.thestranger.com 0.0.0.0 banners.thgimages.co.uk 0.0.0.0 banners.tribute.ca 0.0.0.0 banners.tucson.com 0.0.0.0 banners.unibet.com 0.0.0.0 bannersurvey.biz 0.0.0.0 banners.valuead.com 0.0.0.0 banners.videosecrets.com 0.0.0.0 banners.webmasterplan.com 0.0.0.0 banners.wunderground.com 0.0.0.0 banners.zbs.ru 0.0.0.0 banner.tattomedia.com 0.0.0.0 banner.techarp.com 0.0.0.0 bannert.ru 0.0.0.0 bannerus1.axelsfun.com 0.0.0.0 bannerus3.axelsfun.com 0.0.0.0 banner.usacasino.com 0.0.0.0 banniere.reussissonsensemble.fr 0.0.0.0 bans.bride.ru 0.0.0.0 banstex.com 0.0.0.0 bansys.onzin.com 0.0.0.0 bargainbeautybuys.com 0.0.0.0 barnesandnoble.bfast.com 0.0.0.0 b.as-us.falkag.net 0.0.0.0 bayoubuzz.advertserve.com 0.0.0.0 bbcdn.go.adlt.bbelements.com 0.0.0.0 bbcdn.go.adnet.bbelements.com 0.0.0.0 bbcdn.go.arbo.bbelements.com 0.0.0.0 bbcdn.go.eu.bbelements.com 0.0.0.0 bbcdn.go.ihned.bbelements.com 0.0.0.0 bbcdn.go.pl.bbelements.com 0.0.0.0 bb.crwdcntrl.net 0.0.0.0 bbnaut.bbelements.com 0.0.0.0 bc685d37-266c-488e-824e-dd95d1c0e98b.statcamp.net 0.0.0.0 bcp.crwdcntrl.net 0.0.0.0 bdnad1.bangornews.com 0.0.0.0 bdv.bidvertiser.com 0.0.0.0 beacon-3.newrelic.com 0.0.0.0 beacons.helium.com 0.0.0.0 bell.adcentriconline.com 0.0.0.0 beseenad.looksmart.com 0.0.0.0 bestgift4you.cn 0.0.0.0 bestshopperrewards.com 0.0.0.0 beta.hotkeys.com 0.0.0.0 bet-at-home.com 0.0.0.0 betterperformance.goldenopps.info 0.0.0.0 bfast.com 0.0.0.0 bidclix.net 0.0.0.0 bid.openx.net 0.0.0.0 bidsystem.com 0.0.0.0 bidtraffic.com 0.0.0.0 bidvertiser.com 0.0.0.0 bigads.guj.de 0.0.0.0 bigbrandpromotions.com 0.0.0.0 bigbrandrewards.com 0.0.0.0 biggestgiftrewards.com 0.0.0.0 billing.speedboink.com 0.0.0.0 bitburg.adtech.de 0.0.0.0 bitburg.adtech.fr 0.0.0.0 bitburg.adtech.us 0.0.0.0 bitcast-d.bitgravity.com 0.0.0.0 bizad.nikkeibp.co.jp 0.0.0.0 biz-offer.com 0.0.0.0 bizopprewards.com 0.0.0.0 blabla4u.adserver.co.il 0.0.0.0 blasphemysfhs.info 0.0.0.0 blatant8jh.info 0.0.0.0 b.liquidustv.com 0.0.0.0 blog.addthis.com 0.0.0.0 blogads.com 0.0.0.0 blogads.ebanner.nl 0.0.0.0 blogvertising.pl 0.0.0.0 bluediamondoffers.com 0.0.0.0 blu.mobileads.msn.com 0.0.0.0 bl.wavecdn.de 0.0.0.0 b.myspace.com 0.0.0.0 bn.bfast.com 0.0.0.0 bnmgr.adinjector.net 0.0.0.0 bnrs.ilm.ee 0.0.0.0 boksy.dir.onet.pl 0.0.0.0 boksy.onet.pl 0.0.0.0 bookclub-offer.com 0.0.0.0 books-media-edu-premiumblvd.com 0.0.0.0 books-media-edu-rewardempire.com 0.0.0.0 books-media-rewardpath.com 0.0.0.0 bostonsubwayoffer.com 0.0.0.0 bp.specificclick.net 0.0.0.0 b.rad.live.com 0.0.0.0 b.rad.msn.com 0.0.0.0 br.adserver.yahoo.com 0.0.0.0 brandrewardcentral.com 0.0.0.0 brandsurveypanel.com 0.0.0.0 bravo.israelinfo.ru 0.0.0.0 bravospots.com 0.0.0.0 br.naked.com 0.0.0.0 broadcast.piximedia.fr 0.0.0.0 broadent.vo.llnwd.net 0.0.0.0 brokertraffic.com 0.0.0.0 bsads.looksmart.com #0.0.0.0 b.scorecardresearch.com # interferes with Huffington Post slideshows 0.0.0.0 bs.israelinfo.ru 0.0.0.0 bs.serving-sys.com #eyeblaster.com 0.0.0.0 bt.linkpulse.com 0.0.0.0 burns.adtech.de 0.0.0.0 burns.adtech.fr 0.0.0.0 burns.adtech.us 0.0.0.0 business-rewardpath.com 0.0.0.0 bus-offer.com 0.0.0.0 buttcandy.com 0.0.0.0 buttons.googlesyndication.com 0.0.0.0 buzzbox.buzzfeed.com 0.0.0.0 bwp.lastfm.com.com 0.0.0.0 bwp.news.com 0.0.0.0 c1.popads.net 0.0.0.0 c1.teaser-goods.ru 0.0.0.0 
c1.zedo.com 0.0.0.0 c2.zedo.com 0.0.0.0 c3.zedo.com 0.0.0.0 c4.maxserving.com 0.0.0.0 c4.zedo.com 0.0.0.0 c5.zedo.com 0.0.0.0 c6.zedo.com 0.0.0.0 c7.zedo.com 0.0.0.0 c8.zedo.com 0.0.0.0 ca.adserver.yahoo.com 0.0.0.0 cache.addthiscdn.com 0.0.0.0 cache.addthis.com 0.0.0.0 cache.blogads.com 0.0.0.0 cache-dev.addthis.com 0.0.0.0 cacheserve.eurogrand.com 0.0.0.0 cacheserve.prestigecasino.com 0.0.0.0 cache.unicast.com 0.0.0.0 c.actiondesk.com 0.0.0.0 c.adroll.com 0.0.0.0 califia.imaginemedia.com 0.0.0.0 c.am10.ru 0.0.0.0 camgeil.com 0.0.0.0 campaign.iitech.dk 0.0.0.0 campaign.indieclick.com 0.0.0.0 campaigns.f2.com.au 0.0.0.0 campaigns.interclick.com 0.0.0.0 capath.com 0.0.0.0 cardgamespidersolitaire.com 0.0.0.0 cards.virtuagirlhd.com 0.0.0.0 careers.canwestad.net 0.0.0.0 careers-rewardpath.com 0.0.0.0 c.ar.msn.com 0.0.0.0 carrier.bz 0.0.0.0 car-truck-boat-bonuspath.com 0.0.0.0 car-truck-boat-premiumblvd.com 0.0.0.0 casalemedia.com 0.0.0.0 cas.clickability.com 0.0.0.0 cashback.co.uk 0.0.0.0 cashbackwow.co.uk 0.0.0.0 cashflowmarketing.com 0.0.0.0 casino770.com 0.0.0.0 c.as-us.falkag.net 0.0.0.0 catalinkcashback.com 0.0.0.0 catchvid.info 0.0.0.0 c.at.msn.com 0.0.0.0 cbanners.virtuagirlhd.com 0.0.0.0 c.be.msn.com 0.0.0.0 c.blogads.com 0.0.0.0 c.br.msn.com 0.0.0.0 c.ca.msn.com 0.0.0.0 c.casalemedia.com 0.0.0.0 ccas.clearchannel.com 0.0.0.0 c.cl.msn.com 0.0.0.0 c.de.msn.com 0.0.0.0 c.dk.msn.com 0.0.0.0 cdn1.adexprt.com 0.0.0.0 cdn1.ads.mofos.com 0.0.0.0 cdn1.eyewonder.com 0.0.0.0 cdn1.rmgserving.com 0.0.0.0 cdn1.traffichaus.com 0.0.0.0 cdn1.xlightmedia.com 0.0.0.0 cdn2.adsdk.com 0.0.0.0 cdn2.amateurmatch.com 0.0.0.0 cdn2.emediate.eu 0.0.0.0 cdn3.adexprts.com 0.0.0.0 cdn3.telemetryverification.net 0.0.0.0 cdn454.telemetryverification.net 0.0.0.0 cdn5.tribalfusion.com 0.0.0.0 cdn6.emediate.eu 0.0.0.0 cdn.adigniter.org 0.0.0.0 cdn.adnxs.com 0.0.0.0 cdnads.cam4.com 0.0.0.0 cdn.ads.ookla.com 0.0.0.0 cdn.amateurmatch.com 0.0.0.0 cdn.amgdgt.com 0.0.0.0 cdn.assets.craveonline.com 0.0.0.0 cdn.banners.scubl.com 0.0.0.0 cdn.cpmstar.com 0.0.0.0 cdn.crowdignite.com 0.0.0.0 cdn.directrev.com 0.0.0.0 cdn.eyewonder.com 0.0.0.0 cdn.go.arbo.bbelements.com 0.0.0.0 cdn.go.arbopl.bbelements.com 0.0.0.0 cdn.go.cz.bbelements.com 0.0.0.0 cdn.go.idmnet.bbelements.com 0.0.0.0 cdn.go.pol.bbelements.com 0.0.0.0 cdn.hadj7.adjuggler.net 0.0.0.0 cdn.innovid.com 0.0.0.0 cdn.krxd.net 0.0.0.0 cdn.mediative.ca 0.0.0.0 cdn.merchenta.com 0.0.0.0 cdn.mobicow.com 0.0.0.0 cdn.nearbyad.com 0.0.0.0 cdn.nsimg.net 0.0.0.0 cdn.onescreen.net #0.0.0.0 cdns.gigya.com 0.0.0.0 cdns.mydirtyhobby.com 0.0.0.0 cdns.privatamateure.com 0.0.0.0 cdn.stat.easydate.biz 0.0.0.0 cdn.syn.verticalacuity.com 0.0.0.0 cdn.tabnak.ir 0.0.0.0 cdnt.yottos.com 0.0.0.0 cdn.udmserve.net 0.0.0.0 cdn.undertone.com 0.0.0.0 cdn.wg.uproxx.com 0.0.0.0 cdnw.ringtonepartner.com 0.0.0.0 cdn.yottos.com 0.0.0.0 cdn.zeusclicks.com 0.0.0.0 cds.adecn.com 0.0.0.0 cecash.com 0.0.0.0 ced.sascdn.com 0.0.0.0 cell-phone-giveaways.com 0.0.0.0 cellphoneincentives.com 0.0.0.0 cent.adbureau.net 0.0.0.0 c.es.msn.com 0.0.0.0 c.fi.msn.com 0.0.0.0 cf.kampyle.com 0.0.0.0 c.fr.msn.com 0.0.0.0 cgirm.greatfallstribune.com 0.0.0.0 cgm.adbureau.ne 0.0.0.0 cgm.adbureau.net 0.0.0.0 c.gr.msn.com 0.0.0.0 chainsawoffer.com 0.0.0.0 chartbeat.com 0.0.0.0 checkintocash.data.7bpeople.com 0.0.0.0 cherryhi.app.ur.gcion.com 0.0.0.0 c.hk.msn.com 0.0.0.0 chkpt.zdnet.com 0.0.0.0 choicedealz.com 0.0.0.0 choicesurveypanel.com 0.0.0.0 christianbusinessadvertising.com 0.0.0.0 c.id.msn.com 0.0.0.0 c.ie.msn.com 0.0.0.0 
c.il.msn.com 0.0.0.0 c.imedia.cz 0.0.0.0 c.in.msn.com 0.0.0.0 cithingy.info 0.0.0.0 citi.bridgetrack.com 0.0.0.0 c.it.msn.com 0.0.0.0 citrix.market2lead.com 0.0.0.0 cityads.telus.net 0.0.0.0 citycash2.blogspot.com 0.0.0.0 c.jp.msn.com 0.0.0.0 cl21.v4.adaction.se 0.0.0.0 cl320.v4.adaction.se 0.0.0.0 claimfreerewards.com 0.0.0.0 clashmediausa.com 0.0.0.0 classicjack.com 0.0.0.0 c.latam.msn.com 0.0.0.0 click1.mainadv.com 0.0.0.0 click1.rbc.magna.ru 0.0.0.0 click2.rbc.magna.ru 0.0.0.0 click3.rbc.magna.ru 0.0.0.0 click4.rbc.magna.ru 0.0.0.0 clickad.eo.pl 0.0.0.0 clickarrows.com 0.0.0.0 click.avenuea.com 0.0.0.0 clickbangpop.com 0.0.0.0 clickcash.webpower.com 0.0.0.0 click.go2net.com 0.0.0.0 click.israelinfo.ru 0.0.0.0 clickit.go2net.com 0.0.0.0 clickmedia.ro 0.0.0.0 click.pulse360.com 0.0.0.0 clicks2.virtuagirl.com 0.0.0.0 clicks.adultplex.com 0.0.0.0 clicks.deskbabes.com 0.0.0.0 click-see-save.com 0.0.0.0 clicksor.com 0.0.0.0 clicksotrk.com 0.0.0.0 clicks.totemcash.com 0.0.0.0 clicks.toteme.com 0.0.0.0 clicks.virtuagirl.com 0.0.0.0 clicks.virtuagirlhd.com 0.0.0.0 clicks.virtuaguyhd.com 0.0.0.0 clicks.walla.co.il 0.0.0.0 clickthru.net 0.0.0.0 clickthrunet.net 0.0.0.0 clickthruserver.com 0.0.0.0 clickthrutraffic.com 0.0.0.0 clicktorrent.info 0.0.0.0 clipserv.adclip.com 0.0.0.0 clkads.com 0.0.0.0 clk.cloudyisland.com 0.0.0.0 clk.tradedoubler.com 0.0.0.0 clkuk.tradedoubler.com 0.0.0.0 c.lomadee.com 0.0.0.0 closeoutproductsreview.com 0.0.0.0 cluster3.adultadworld.com 0.0.0.0 cluster.adultadworld.com 0.0.0.0 cm1359.com 0.0.0.0 cmads.sv.publicus.com 0.0.0.0 cmads.us.publicus.com 0.0.0.0 cmap.am.ace.advertising.com 0.0.0.0 cmap.an.ace.advertising.com 0.0.0.0 cmap.at.ace.advertising.com 0.0.0.0 cmap.dc.ace.advertising.com 0.0.0.0 cmap.ox.ace.advertising.com 0.0.0.0 cmap.pub.ace.advertising.com 0.0.0.0 cmap.rm.ace.advertising.com 0.0.0.0 cmap.rub.ace.advertising.com 0.0.0.0 cmhtml.overture.com 0.0.0.0 cmn1lsm2.beliefnet.com 0.0.0.0 cm.npc-hearst.overture.com 0.0.0.0 cmps.mt50ad.com 0.0.0.0 cm.the-n.overture.com 0.0.0.0 c.my.msn.com 0.0.0.0 cnad1.economicoutlook.net 0.0.0.0 cnad2.economicoutlook.net 0.0.0.0 cnad3.economicoutlook.net 0.0.0.0 cnad4.economicoutlook.net 0.0.0.0 cnad5.economicoutlook.net 0.0.0.0 cnad6.economicoutlook.net 0.0.0.0 cnad7.economicoutlook.net 0.0.0.0 cnad8.economicoutlook.net 0.0.0.0 cnad9.economicoutlook.net 0.0.0.0 cnad.economicoutlook.net 0.0.0.0 cn.adserver.yahoo.com 0.0.0.0 cnf.adshuffle.com 0.0.0.0 c.ninemsn.com.au 0.0.0.0 c.nl.msn.com 0.0.0.0 c.no.msn.com 0.0.0.0 c.novostimira.biz 0.0.0.0 cnt1.xhamster.com 0.0.0.0 code2.adtlgc.com 0.0.0.0 code.adtlgc.com 0.0.0.0 collectiveads.net 0.0.0.0 col.mobileads.msn.com 0.0.0.0 comadverts.bcmpweb.co.nz 0.0.0.0 comcastresidentialservices.tt.omtrdc.net 0.0.0.0 com.cool-premiums-now.com 0.0.0.0 come-see-it-all.com 0.0.0.0 com.htmlwww.youfck.com 0.0.0.0 commerce-offer.com 0.0.0.0 commerce-rewardpath.com 0.0.0.0 commerce.www.ibm.com 0.0.0.0 common.ziffdavisinternet.com 0.0.0.0 companion.adap.tv 0.0.0.0 computer-offer.com 0.0.0.0 computer-offer.net 0.0.0.0 computers-electronics-rewardpath.com 0.0.0.0 computersncs.com 0.0.0.0 com.shc-rebates.com 0.0.0.0 connect.247media.ads.link4ads.com 0.0.0.0 consumergiftcenter.com 0.0.0.0 consumerincentivenetwork.com 0.0.0.0 consumerinfo.tt.omtrdc.net 0.0.0.0 consumer-org.com 0.0.0.0 contaxe.com 0.0.0.0 content.ad-flow.com 0.0.0.0 content.clipster.ws 0.0.0.0 content.codelnet.com 0.0.0.0 content.promoisland.net 0.0.0.0 contentsearch.de.espotting.com 0.0.0.0 content.yieldmanager.edgesuite.net 0.0.0.0 
context3.kanoodle.com 0.0.0.0 context5.kanoodle.com 0.0.0.0 context.adshadow.net 0.0.0.0 contextweb.com 0.0.0.0 conv.adengage.com 0.0.0.0 conversion-pixel.invitemedia.com 0.0.0.0 cookiecontainer.blox.pl 0.0.0.0 cookie.pebblemedia.be 0.0.0.0 cookingtiprewards.com 0.0.0.0 cookonsea.com 0.0.0.0 cool-premiums.com 0.0.0.0 cool-premiums-now.com 0.0.0.0 coolpremiumsnow.com 0.0.0.0 coolsavings.com 0.0.0.0 corba.adtech.de 0.0.0.0 corba.adtech.fr 0.0.0.0 corba.adtech.us 0.0.0.0 core0.node12.top.mail.ru 0.0.0.0 core2.adtlgc.com 0.0.0.0 coreg.flashtrack.net 0.0.0.0 coreglead.co.uk 0.0.0.0 core.insightexpressai.com 0.0.0.0 core.videoegg.com 0.0.0.0 cornflakes.pathfinder.com 0.0.0.0 corusads.dserv.ca 0.0.0.0 cosmeticscentre.uk.com 0.0.0.0 count6.51yes.com 0.0.0.0 count.casino-trade.com 0.0.0.0 cover.m2y.siemens.ch 0.0.0.0 c.ph.msn.com 0.0.0.0 cpmadvisors.com 0.0.0.0 cp.promoisland.net 0.0.0.0 c.prodigy.msn.com 0.0.0.0 c.pt.msn.com 0.0.0.0 cpu.firingsquad.com 0.0.0.0 creatiby1.unicast.com 0.0.0.0 creative.adshuffle.com 0.0.0.0 creative.ak.facebook.com 0.0.0.0 creatives.livejasmin.com 0.0.0.0 creatives.rgadvert.com 0.0.0.0 creatrixads.com 0.0.0.0 crediblegfj.info 0.0.0.0 creditburner.blueadvertise.com 0.0.0.0 creditsoffer.blogspot.com 0.0.0.0 creview.adbureau.net 0.0.0.0 crosspixel.demdex.net 0.0.0.0 crowdgravity.com 0.0.0.0 crowdignite.com 0.0.0.0 c.ru.msn.com 0.0.0.0 crux.songline.com 0.0.0.0 crwdcntrl.net 0.0.0.0 c.se.msn.com 0.0.0.0 cserver.mii.instacontent.net 0.0.0.0 c.sg.msn.com 0.0.0.0 csh.actiondesk.com 0.0.0.0 csm.rotator.hadj7.adjuggler.net 0.0.0.0 cspix.media6degrees.com 0.0.0.0 cs.prd.msys.playstation.net 0.0.0.0 csr.onet.pl 0.0.0.0 ctbdev.net 0.0.0.0 c.th.msn.com 0.0.0.0 c.tr.msn.com 0.0.0.0 cts.channelintelligence.com 0.0.0.0 c.tw.msn.com 0.0.0.0 ctxtad.tribalfusion.com 0.0.0.0 c.uk.msn.com 0.0.0.0 cxoadfarm.dyndns.info 0.0.0.0 cxtad.specificmedia.com 0.0.0.0 cyber-incentives.com 0.0.0.0 cz8.clickzs.com 0.0.0.0 c.za.msn.com 0.0.0.0 cz.bbelements.com 0.0.0.0 d.101m3.com 0.0.0.0 d10.zedo.com 0.0.0.0 d11.zedo.com 0.0.0.0 d12.zedo.com 0.0.0.0 d14.zedo.com 0.0.0.0 d1.openx.org 0.0.0.0 d1ros97qkrwjf5.cloudfront.net 0.0.0.0 d1.zedo.com 0.0.0.0 d2.zedo.com 0.0.0.0 d3.zedo.com 0.0.0.0 d4.zedo.com 0.0.0.0 d5phz18u4wuww.cloudfront.net 0.0.0.0 d5.zedo.com 0.0.0.0 d6.c5.b0.a2.top.mail.ru 0.0.0.0 d6.zedo.com 0.0.0.0 d7.zedo.com 0.0.0.0 d8.zedo.com 0.0.0.0 d9.zedo.com 0.0.0.0 da.2000888.com 0.0.0.0 d.adnetxchange.com 0.0.0.0 d.adserve.com 0.0.0.0 dads.new.digg.com 0.0.0.0 d.ads.readwriteweb.com 0.0.0.0 d.agkn.com 0.0.0.0 daily-saver.com 0.0.0.0 darmowe-liczniki.info 0.0.0.0 dart.chron.com 0.0.0.0 data.flurry.com 0.0.0.0 date.ventivmedia.com 0.0.0.0 datingadvertising.com 0.0.0.0 db4.net-filter.com 0.0.0.0 dbbsrv.com 0.0.0.0 dc.sabela.com.pl 0.0.0.0 dctracking.com 0.0.0.0 de.adserver.yahoo.com 0.0.0.0 del1.phillyburbs.com 0.0.0.0 delb.mspaceads.com 0.0.0.0 delivery.adyea.com 0.0.0.0 delivery.trafficjunky.net 0.0.0.0 delivery.w00tads.com 0.0.0.0 delivery.way2traffic.com 0.0.0.0 demr.mspaceads.com 0.0.0.0 demr.opt.fimserve.com 0.0.0.0 derkeiler.com 0.0.0.0 desb.mspaceads.com 0.0.0.0 descargas2.tuvideogratis.com 0.0.0.0 designbloxlive.com 0.0.0.0 desk.mspaceads.com 0.0.0.0 desk.opt.fimserve.com 0.0.0.0 dev.adforum.com 0.0.0.0 devart.adbureau.net 0.0.0.0 devlp1.linkpulse.com 0.0.0.0 dev.sfbg.com 0.0.0.0 dgm2.com 0.0.0.0 dgmaustralia.com 0.0.0.0 dg.specificclick.net 0.0.0.0 dietoftoday.ca.pn #security risk/fake news# 0.0.0.0 diff3.smartadserver.com 0.0.0.0 dinoadserver1.roka.net 0.0.0.0 dinoadserver2.roka.net 
0.0.0.0 directleads.com 0.0.0.0 directpowerrewards.com 0.0.0.0 directrev.cloudapp.net 0.0.0.0 dirtyrhino.com 0.0.0.0 discount-savings-more.com 0.0.0.0 discoverecommerce.tt.omtrdc.net 0.0.0.0 display.gestionpub.com 0.0.0.0 dist.belnk.com 0.0.0.0 divx.adbureau.net 0.0.0.0 djbanners.deadjournal.com 0.0.0.0 djugoogs.com 0.0.0.0 dk.adserver.yahoo.com 0.0.0.0 dl.ncbuy.com 0.0.0.0 dl-plugin.com 0.0.0.0 dlvr.readserver.net 0.0.0.0 dnads.directnic.com 0.0.0.0 dnps.com 0.0.0.0 dnse.linkpulse.com 0.0.0.0 dosugcz.biz 0.0.0.0 dot.wp.pl 0.0.0.0 downloadcdn.com 0.0.0.0 do-wn-lo-ad.com 0.0.0.0 downloads.larivieracasino.com 0.0.0.0 downloads.mytvandmovies.com 0.0.0.0 dqs001.adtech.de 0.0.0.0 dqs001.adtech.fr 0.0.0.0 dqs001.adtech.us 0.0.0.0 dra.amazon-adsystem.com 0.0.0.0 drowle.com 0.0.0.0 ds.contextweb.com 0.0.0.0 ds.onet.pl 0.0.0.0 ds.serving-sys.com 0.0.0.0 dt.linkpulse.com 0.0.0.0 dub.mobileads.msn.com 0.0.0.0 e0.extreme-dm.com 0.0.0.0 e1.addthis.com 0.0.0.0 e2.cdn.qnsr.com 0.0.0.0 e2.emediate.se 0.0.0.0 eads-adserving.com 0.0.0.0 ead.sharethis.com 0.0.0.0 earnmygift.com 0.0.0.0 earnpointsandgifts.com 0.0.0.0 e.as-eu.falkag.net 0.0.0.0 easyadservice.com 0.0.0.0 easyweb.tdcanadatrust.secureserver.host1.customer-identification-process.b88600d8.com 0.0.0.0 eatps.web.aol.com 0.0.0.0 eb.adbureau.net 0.0.0.0 eblastengine.upickem.net 0.0.0.0 ecomadserver.com 0.0.0.0 eddamedia.linkpulse.com 0.0.0.0 edge.bnmla.com 0.0.0.0 edge.quantserve.com 0.0.0.0 edirect.hotkeys.com 0.0.0.0 education-rewardpath.com 0.0.0.0 edu-offer.com 0.0.0.0 electronics-bonuspath.com 0.0.0.0 electronics-offer.net 0.0.0.0 electronicspresent.com 0.0.0.0 electronics-rewardpath.com 0.0.0.0 emailadvantagegroup.com 0.0.0.0 emailproductreview.com 0.0.0.0 emapadserver.com 0.0.0.0 emea-bidder.mathtag.com 0.0.0.0 engage.everyone.net 0.0.0.0 engage.speedera.net 0.0.0.0 engine2.adzerk.net 0.0.0.0 engine.4chan-ads.org 0.0.0.0 engine.adland.ru 0.0.0.0 engine.adzerk.net 0.0.0.0 engine.carbonads.com 0.0.0.0 engine.espace.netavenir.com 0.0.0.0 engine.influads.com 0.0.0.0 engine.rorer.ru 0.0.0.0 enirocode.adtlgc.com 0.0.0.0 enirodk.adtlgc.com 0.0.0.0 enn.advertserve.com 0.0.0.0 entertainment-rewardpath.com 0.0.0.0 entertainment-specials.com 0.0.0.0 es.adserver.yahoo.com 0.0.0.0 escape.insites.eu 0.0.0.0 espn.footprint.net 0.0.0.0 etad.telegraph.co.uk 0.0.0.0 etrk.asus.com 0.0.0.0 etype.adbureau.net 0.0.0.0 eu2.madsone.com 0.0.0.0 euniverseads.com 0.0.0.0 eu-pn4.adserver.yahoo.com 0.0.0.0 europe.adserver.yahoo.com 0.0.0.0 eu.xtms.net 0.0.0.0 eventtracker.videostrip.com 0.0.0.0 exclusivegiftcards.com 0.0.0.0 exits1.webquest.net 0.0.0.0 exits2.webquest.net 0.0.0.0 exponential.com 0.0.0.0 eyewonder.com 0.0.0.0 ezboard.bigbangmedia.com 0.0.0.0 falkag.net 0.0.0.0 family-offer.com 0.0.0.0 farm.plista.com 0.0.0.0 f.as-eu.falkag.net 0.0.0.0 fatcatrewards.com 0.0.0.0 fbcdn-creative-a.akamaihd.net 0.0.0.0 fbfreegifts.com 0.0.0.0 fbi.gov.id402037057-8235504608.d9680.com 0.0.0.0 fcg.casino770.com 0.0.0.0 fc.webmasterpro.de 0.0.0.0 fdimages.fairfax.com.au 0.0.0.0 feedads.googleadservices.com 0.0.0.0 feeds.videosz.com 0.0.0.0 feeds.weselltraffic.com 0.0.0.0 fei.pro-market.net 0.0.0.0 fe.lea.lycos.es 0.0.0.0 fhm.valueclick.net 0.0.0.0 fif49.info 0.0.0.0 files.adbrite.com 0.0.0.0 fin.adbureau.net 0.0.0.0 finance-offer.com 0.0.0.0 finanzmeldungen.com 0.0.0.0 finder.cox.net 0.0.0.0 fixbonus.com 0.0.0.0 floatingads.madisonavenue.com 0.0.0.0 floridat.app.ur.gcion.com 0.0.0.0 flowers-offer.com 0.0.0.0 fls-na.amazon.com 0.0.0.0 flu23.com 0.0.0.0 fmads.osdn.com 0.0.0.0 
focusin.ads.targetnet.com 0.0.0.0 folloyu.com 0.0.0.0 food-drink-bonuspath.com 0.0.0.0 food-drink-rewardpath.com 0.0.0.0 foodmixeroffer.com 0.0.0.0 food-offer.com 0.0.0.0 foreignpolicy.advertserve.com 0.0.0.0 fp.uclo.net 0.0.0.0 fp.valueclick.com 0.0.0.0 fr.a2dfp.net 0.0.0.0 fr.adserver.yahoo.com 0.0.0.0 fr.classic.clickintext.net 0.0.0.0 freebiegb.co.uk 0.0.0.0 freecameraonus.com 0.0.0.0 freecameraprovider.com 0.0.0.0 freecamerasource.com 0.0.0.0 freecamerauk.co.uk 0.0.0.0 freecoolgift.com 0.0.0.0 freedesignerhandbagreviews.com 0.0.0.0 freedinnersource.com 0.0.0.0 freedvddept.com 0.0.0.0 freeelectronicscenter.com 0.0.0.0 freeelectronicsdepot.com 0.0.0.0 freeelectronicsonus.com 0.0.0.0 freeelectronicssource.com 0.0.0.0 freeentertainmentsource.com 0.0.0.0 freefoodprovider.com 0.0.0.0 freefoodsource.com 0.0.0.0 freefuelcard.com 0.0.0.0 freefuelcoupon.com 0.0.0.0 freegasonus.com 0.0.0.0 freegasprovider.com 0.0.0.0 free-gift-cards-now.com 0.0.0.0 freegiftcardsource.com 0.0.0.0 freegiftreward.com 0.0.0.0 free-gifts-comp.com 0.0.0.0 free.hotsocialz.com 0.0.0.0 freeipodnanouk.co.uk 0.0.0.0 freeipoduk.com 0.0.0.0 freeipoduk.co.uk 0.0.0.0 freelaptopgift.com 0.0.0.0 freelaptopnation.com 0.0.0.0 free-laptop-reward.com 0.0.0.0 freelaptopreward.com 0.0.0.0 freelaptopwebsites.com 0.0.0.0 freenation.com 0.0.0.0 freeoffers-toys.com 0.0.0.0 freepayasyougotopupuk.co.uk 0.0.0.0 freeplasmanation.com 0.0.0.0 freerestaurantprovider.com 0.0.0.0 freerestaurantsource.com 0.0.0.0 free-rewards.com-s.tv 0.0.0.0 freeshoppingprovider.com 0.0.0.0 freeshoppingsource.com 0.0.0.0 free.thesocialsexnetwork.com 0.0.0.0 freevideodownloadforpc.com 0.0.0.0 frontend-loadbalancer.meteorsolutions.com 0.0.0.0 fwdservice.com 0.0.0.0 fwmrm.net 0.0.0.0 g1.idg.pl 0.0.0.0 g2.gumgum.com 0.0.0.0 g3t4d5.madison.com 0.0.0.0 g4p.grt02.com 0.0.0.0 gadgeteer.pdamart.com 0.0.0.0 gam.adnxs.com 0.0.0.0 gameconsolerewards.com 0.0.0.0 games-toys-bonuspath.com 0.0.0.0 games-toys-free.com 0.0.0.0 games-toys-rewardpath.com 0.0.0.0 gate.hyperpaysys.com 0.0.0.0 gavzad.keenspot.com 0.0.0.0 gazeta.hit.gemius.pl 0.0.0.0 gazetteextra.advertserve.com 0.0.0.0 gbanners.hornymatches.com 0.0.0.0 gcads.osdn.com 0.0.0.0 gcdn.2mdn.net 0.0.0.0 gc.gcl.ru 0.0.0.0 gcir.gannett-tv.com 0.0.0.0 gcirm2.indystar.com 0.0.0.0 gcirm.argusleader.com 0.0.0.0 gcirm.argusleader.gcion.com 0.0.0.0 gcirm.battlecreekenquirer.com 0.0.0.0 gcirm.burlingtonfreepress.com 0.0.0.0 gcirm.centralohio.com 0.0.0.0 gcirm.centralohio.gcion.com 0.0.0.0 gcirm.cincinnati.com 0.0.0.0 gcirm.citizen-times.com 0.0.0.0 gcirm.clarionledger.com 0.0.0.0 gcirm.coloradoan.com 0.0.0.0 gcirm.courier-journal.com 0.0.0.0 gcirm.courierpostonline.com 0.0.0.0 gcirm.customcoupon.com 0.0.0.0 gcirm.dailyrecord.com 0.0.0.0 gcirm.delawareonline.com 0.0.0.0 gcirm.democratandchronicle.com 0.0.0.0 gcirm.desmoinesregister.com 0.0.0.0 gcirm.detnews.com 0.0.0.0 gcirm.dmp.gcion.com 0.0.0.0 gcirm.dmregister.com 0.0.0.0 gcirm.dnj.com 0.0.0.0 gcirm.flatoday.com 0.0.0.0 gcirm.gannettnetwork.com 0.0.0.0 gcirm.gannett-tv.com 0.0.0.0 gcirm.greatfallstribune.com 0.0.0.0 gcirm.greenvilleonline.com 0.0.0.0 gcirm.greenvilleonline.gcion.com 0.0.0.0 gcirm.honoluluadvertiser.gcion.com 0.0.0.0 gcirm.idahostatesman.com 0.0.0.0 gcirm.idehostatesman.com 0.0.0.0 gcirm.indystar.com 0.0.0.0 gcirm.injersey.com 0.0.0.0 gcirm.jacksonsun.com 0.0.0.0 gcirm.laregionalonline.com 0.0.0.0 gcirm.lsj.com 0.0.0.0 gcirm.montgomeryadvertiser.com 0.0.0.0 gcirm.muskogeephoenix.com 0.0.0.0 gcirm.newsleader.com 0.0.0.0 gcirm.news-press.com 0.0.0.0 gcirm.ozarksnow.com 
0.0.0.0 gcirm.pensacolanewsjournal.com 0.0.0.0 gcirm.press-citizen.com 0.0.0.0 gcirm.pressconnects.com 0.0.0.0 gcirm.rgj.com 0.0.0.0 gcirm.sctimes.com 0.0.0.0 gcirm.stargazette.com 0.0.0.0 gcirm.statesmanjournal.com 0.0.0.0 gcirm.tallahassee.com 0.0.0.0 gcirm.tennessean.com 0.0.0.0 gcirm.thedailyjournal.com 0.0.0.0 gcirm.thedesertsun.com 0.0.0.0 gcirm.theithacajournal.com 0.0.0.0 gcirm.thejournalnews.com 0.0.0.0 gcirm.theolympian.com 0.0.0.0 gcirm.thespectrum.com 0.0.0.0 gcirm.tucson.com 0.0.0.0 gcirm.wisinfo.com 0.0.0.0 gde.adocean.pl 0.0.0.0 gdeee.hit.gemius.pl 0.0.0.0 gdelt.hit.gemius.pl 0.0.0.0 gdelv.hit.gemius.pl 0.0.0.0 gdyn.cnngo.com 0.0.0.0 gdyn.trutv.com 0.0.0.0 gemius.pl 0.0.0.0 geoads.osdn.com 0.0.0.0 geoloc11.geovisite.com 0.0.0.0 geo.precisionclick.com 0.0.0.0 getacool100.com 0.0.0.0 getacool500.com 0.0.0.0 getacoollaptop.com 0.0.0.0 getacooltv.com 0.0.0.0 getafreeiphone.org 0.0.0.0 getagiftonline.com 0.0.0.0 getmyfreebabystuff.com 0.0.0.0 getmyfreegear.com 0.0.0.0 getmyfreegiftcard.com 0.0.0.0 getmyfreelaptop.com 0.0.0.0 getmyfreelaptophere.com 0.0.0.0 getmyfreeplasma.com 0.0.0.0 getmylaptopfree.com 0.0.0.0 getmyplasmatv.com 0.0.0.0 getspecialgifts.com 0.0.0.0 getyour5kcredits0.blogspot.com 0.0.0.0 getyourfreecomputer.com 0.0.0.0 getyourfreetv.com 0.0.0.0 getyourgiftnow2.blogspot.com 0.0.0.0 getyourgiftnow3.blogspot.com 0.0.0.0 gg.adocean.pl 0.0.0.0 giftcardchallenge.com 0.0.0.0 giftcardsurveys.us.com 0.0.0.0 giftrewardzone.com 0.0.0.0 gifts-flowers-rewardpath.com 0.0.0.0 gimmethatreward.com 0.0.0.0 gingert.net 0.0.0.0 globalwebads.com 0.0.0.0 gmads.net 0.0.0.0 gm.preferences.com 0.0.0.0 go2.hit.gemius.pl 0.0.0.0 go.adee.bbelements.com 0.0.0.0 go.adlt.bbelements.com 0.0.0.0 go.adlv.bbelements.com 0.0.0.0 go.admulti.com 0.0.0.0 go.adnet.bbelements.com 0.0.0.0 go.arbo.bbelements.com 0.0.0.0 go.arbopl.bbelements.com 0.0.0.0 go.arboru.bbelements.com 0.0.0.0 go.bb007.bbelements.com 0.0.0.0 go.evolutionmedia.bbelements.com 0.0.0.0 go-free-gifts.com 0.0.0.0 gofreegifts.com 0.0.0.0 go.ihned.bbelements.com 0.0.0.0 go.intact.bbelements.com 0.0.0.0 go.lfstmedia.com 0.0.0.0 go.lotech.bbelements.com 0.0.0.0 goodsblock.marketgid.com 0.0.0.0 goody-garage.com 0.0.0.0 go.pl.bbelements.com 0.0.0.0 got2goshop.com 0.0.0.0 goto.trafficmultiplier.com 0.0.0.0 gozing.directtrack.com 0.0.0.0 grabbit-rabbit.com 0.0.0.0 graphics.adultfriendfinder.com 0.0.0.0 graphics.pop6.com 0.0.0.0 gratkapl.adocean.pl 0.0.0.0 gravitron.chron.com 0.0.0.0 greasypalm.com 0.0.0.0 grfx.mp3.com 0.0.0.0 groupon.pl 0.0.0.0 grz67.com 0.0.0.0 gs1.idsales.co.uk 0.0.0.0 gserv.cneteu.net 0.0.0.0 gspro.hit.gemius.pl 0.0.0.0 g.thinktarget.com 0.0.0.0 guiaconsumidor.com 0.0.0.0 guide2poker.com 0.0.0.0 guptamedianetwork.com 0.0.0.0 guru.sitescout.netdna-cdn.com 0.0.0.0 gwallet.com 0.0.0.0 gx-in-f109.1e100.net 0.0.0.0 h-afnetwww.adshuffle.com 0.0.0.0 halfords.ukrpts.net 0.0.0.0 happydiscountspecials.com 0.0.0.0 harvest176.adgardener.com 0.0.0.0 harvest284.adgardener.com 0.0.0.0 harvest285.adgardener.com 0.0.0.0 harvest.adgardener.com 0.0.0.0 hathor.eztonez.com 0.0.0.0 haynet.adbureau.net 0.0.0.0 hbads.eboz.com 0.0.0.0 hbadz.eboz.com 0.0.0.0 healthbeautyncs.com 0.0.0.0 health-beauty-rewardpath.com 0.0.0.0 health-beauty-savingblvd.com 0.0.0.0 healthclicks.co.uk 0.0.0.0 hebdotop.com 0.0.0.0 help.adtech.de 0.0.0.0 help.adtech.fr 0.0.0.0 help.adtech.us 0.0.0.0 helpint.mywebsearch.com 0.0.0.0 hightrafficads.com 0.0.0.0 himediads.com 0.0.0.0 hit4.hotlog.ru 0.0.0.0 hk.adserver.yahoo.com 0.0.0.0 hlcc.ca 0.0.0.0 holiday-gift-offers.com 0.0.0.0 
holidayproductpromo.com 0.0.0.0 holidayshoppingrewards.com 0.0.0.0 home4bizstart.ru 0.0.0.0 homeelectronicproducts.com 0.0.0.0 home-garden-premiumblvd.com 0.0.0.0 home-garden-rewardempire.com 0.0.0.0 home-garden-rewardpath.com 0.0.0.0 homeimprovementonus.com 0.0.0.0 honolulu.app.ur.gcion.com 0.0.0.0 hooqy.com 0.0.0.0 host207.ewtn.com 0.0.0.0 hostedaje14.thruport.com 0.0.0.0 hosting.adjug.com 0.0.0.0 hot-daily-deal.com 0.0.0.0 hotgiftzone.com 0.0.0.0 hot-product-hangout.com 0.0.0.0 hpad.www.infoseek.co.jp 0.0.0.0 h.ppjol.com 0.0.0.0 htmlads.ru 0.0.0.0 html.centralmediaserver.com 0.0.0.0 htmlwww.youfck.com 0.0.0.0 http300.content.ru4.com 0.0.0.0 httpads.com 0.0.0.0 httpwwwadserver.com 0.0.0.0 hub.com.pl 0.0.0.0 huiwiw.hit.gemius.pl 0.0.0.0 huntingtonbank.tt.omtrdc.net 0.0.0.0 huomdgde.adocean.pl 0.0.0.0 hyperion.adtech.de 0.0.0.0 hyperion.adtech.fr 0.0.0.0 hyperion.adtech.us 0.0.0.0 i1.teaser-goods.ru 0.0.0.0 iacas.adbureau.net 0.0.0.0 iad.anm.co.uk 0.0.0.0 iadc.qwapi.com #0.0.0.0 iadsdk.apple.com #may interfere with iTunes radio 0.0.0.0 ib.adnxs.com 0.0.0.0 ibis.lgappstv.com 0.0.0.0 i.blogads.com 0.0.0.0 i.casalemedia.com 0.0.0.0 icon.clickthru.net 0.0.0.0 id11938.luxup.ru 0.0.0.0 id5576.al21.luxup.ru 0.0.0.0 idearc.tt.omtrdc.net 0.0.0.0 idpix.media6degrees.com 0.0.0.0 ieee.adbureau.net 0.0.0.0 if.bbanner.it 0.0.0.0 iftarvakitleri.net 0.0.0.0 ih2.gamecopyworld.com 0.0.0.0 i.hotkeys.com 0.0.0.0 i.interia.pl 0.0.0.0 i.laih.com 0.0.0.0 ilinks.industrybrains.com 0.0.0.0 im.adtech.de 0.0.0.0 image2.pubmatic.com 0.0.0.0 imageads.canoe.ca 0.0.0.0 imagec08.247realmedia.com 0.0.0.0 imagec12.247realmedia.com 0.0.0.0 imagec14.247realmedia.com 0.0.0.0 imagecache2.allposters.com 0.0.0.0 imageceu1.247realmedia.com 0.0.0.0 image.click.livedoor.com 0.0.0.0 image.i1img.com 0.0.0.0 image.linkexchange.com 0.0.0.0 images2.laih.com 0.0.0.0 images3.linkwithin.com 0.0.0.0 images.ads.fairfax.com.au 0.0.0.0 images.blogads.com 0.0.0.0 images.bluetime.com 0.0.0.0 images-cdn.azoogleads.com 0.0.0.0 images.clickfinders.com 0.0.0.0 images.conduit-banners.com 0.0.0.0 images.cybereps.com 0.0.0.0 images.directtrack.com 0.0.0.0 images.emapadserver.com 0.0.0.0 imageserv.adtech.de 0.0.0.0 imageserv.adtech.fr 0.0.0.0 imageserv.adtech.us 0.0.0.0 imageserver1.thruport.com 0.0.0.0 images.jambocast.com 0.0.0.0 images.linkwithin.com 0.0.0.0 images.mbuyu.nl 0.0.0.0 images.netcomvad.com 0.0.0.0 images.newsx.cc 0.0.0.0 images.people2people.com 0.0.0.0 images.primaryads.com 0.0.0.0 images.sexlist.com 0.0.0.0 images.steamray.com 0.0.0.0 images.trafficmp.com 0.0.0.0 im.banner.t-online.de 0.0.0.0 i.media.cz 0.0.0.0 img0.ru.redtram.com 0.0.0.0 img1.ru.redtram.com 0.0.0.0 img2.ru.redtram.com 0.0.0.0 img4.cdn.adjuggler.com 0.0.0.0 img-a2.ak.imagevz.net 0.0.0.0 img.blogads.com 0.0.0.0 img-cdn.mediaplex.com 0.0.0.0 img.directtrack.com 0.0.0.0 imgg.dt00.net 0.0.0.0 imgg.marketgid.com 0.0.0.0 img.layer-ads.de 0.0.0.0 img.marketgid.com 0.0.0.0 imgn.dt00.net 0.0.0.0 imgn.dt07.com 0.0.0.0 imgn.marketgid.com 0.0.0.0 imgserv.adbutler.com 0.0.0.0 img.sn00.net 0.0.0.0 img.soulmate.com 0.0.0.0 img.xnxx.com 0.0.0.0 im.of.pl 0.0.0.0 impact.cossette-webpact.com 0.0.0.0 impbe.tradedoubler.com 0.0.0.0 imp.partner2profit.com 0.0.0.0 imppl.tradedoubler.com 0.0.0.0 impressionaffiliate.com 0.0.0.0 impressionaffiliate.mobi 0.0.0.0 impressionlead.com 0.0.0.0 impressionperformance.biz 0.0.0.0 imserv001.adtech.de 0.0.0.0 imserv001.adtech.fr 0.0.0.0 imserv001.adtech.us 0.0.0.0 imserv002.adtech.de 0.0.0.0 imserv002.adtech.fr 0.0.0.0 imserv002.adtech.us 0.0.0.0 
imserv003.adtech.de 0.0.0.0 imserv003.adtech.fr 0.0.0.0 imserv003.adtech.us 0.0.0.0 imserv004.adtech.de 0.0.0.0 imserv004.adtech.fr 0.0.0.0 imserv004.adtech.us 0.0.0.0 imserv005.adtech.de 0.0.0.0 imserv005.adtech.fr 0.0.0.0 imserv005.adtech.us 0.0.0.0 imserv006.adtech.de 0.0.0.0 imserv006.adtech.fr 0.0.0.0 imserv006.adtech.us 0.0.0.0 imserv00x.adtech.de 0.0.0.0 imserv00x.adtech.fr 0.0.0.0 imserv00x.adtech.us 0.0.0.0 imssl01.adtech.de 0.0.0.0 imssl01.adtech.fr 0.0.0.0 imssl01.adtech.us 0.0.0.0 im.xo.pl 0.0.0.0 in.adserver.yahoo.com 0.0.0.0 incentivegateway.com 0.0.0.0 incentiverewardcenter.com 0.0.0.0 incentive-scene.com 0.0.0.0 indexhu.adocean.pl 0.0.0.0 infinite-ads.com 0.0.0.0 inklineglobal.com 0.0.0.0 inl.adbureau.net 0.0.0.0 input.insights.gravity.com 0.0.0.0 insightxe.pittsburghlive.com 0.0.0.0 insightxe.vtsgonline.com 0.0.0.0 ins-offer.com 0.0.0.0 installer.zutrack.com 0.0.0.0 insurance-rewardpath.com 0.0.0.0 intela.com 0.0.0.0 intelliads.com 0.0.0.0 internet.billboard.cz 0.0.0.0 intnet-offer.com 0.0.0.0 intrack.pl 0.0.0.0 invitefashion.com 0.0.0.0 ipacc1.adtech.de 0.0.0.0 ipacc1.adtech.fr 0.0.0.0 ipacc1.adtech.us 0.0.0.0 ipad2free4u.com 0.0.0.0 i.pcp001.com 0.0.0.0 ipdata.adtech.de 0.0.0.0 ipdata.adtech.fr 0.0.0.0 ipdata.adtech.us 0.0.0.0 iq001.adtech.de 0.0.0.0 iq001.adtech.fr 0.0.0.0 iq001.adtech.us 0.0.0.0 i.qitrck.com 0.0.0.0 is.casalemedia.com 0.0.0.0 i.securecontactinfo.com 0.0.0.0 isg01.casalemedia.com 0.0.0.0 isg02.casalemedia.com 0.0.0.0 isg03.casalemedia.com 0.0.0.0 isg04.casalemedia.com 0.0.0.0 isg05.casalemedia.com 0.0.0.0 isg06.casalemedia.com 0.0.0.0 isg07.casalemedia.com 0.0.0.0 isg08.casalemedia.com 0.0.0.0 isg09.casalemedia.com 0.0.0.0 i.simpli.fi 0.0.0.0 it.adserver.yahoo.com 0.0.0.0 i.total-media.net 0.0.0.0 itrackerpro.com 0.0.0.0 i.trkjmp.com 0.0.0.0 itsfree123.com 0.0.0.0 itxt.vibrantmedia.com 0.0.0.0 iwantmyfreecash.com 0.0.0.0 iwantmy-freelaptop.com 0.0.0.0 iwantmyfree-laptop.com 0.0.0.0 iwantmyfreelaptop.com 0.0.0.0 iwantmygiftcard.com 0.0.0.0 jambocast.com 0.0.0.0 jb9clfifs6.s.ad6media.fr 0.0.0.0 jcarter.spinbox.net 0.0.0.0 j.clickdensity.com 0.0.0.0 jcrew.tt.omtrdc.net 0.0.0.0 jersey-offer.com 0.0.0.0 jgedads.cjt.net 0.0.0.0 jh.revolvermaps.com 0.0.0.0 jivox.com 0.0.0.0 jl29jd25sm24mc29.com 0.0.0.0 jlinks.industrybrains.com 0.0.0.0 jmn.jangonetwork.com 0.0.0.0 join1.winhundred.com 0.0.0.0 js1.bloggerads.net 0.0.0.0 js77.neodatagroup.com 0.0.0.0 js.adlink.net 0.0.0.0 js.admngr.com 0.0.0.0 js.adscale.de 0.0.0.0 js.adserverpub.com 0.0.0.0 js.adsonar.com 0.0.0.0 jsc.dt07.net 0.0.0.0 js.goods.redtram.com 0.0.0.0 js.himediads.com 0.0.0.0 js.hotkeys.com 0.0.0.0 jsn.dt07.net 0.0.0.0 js.ru.redtram.com 0.0.0.0 js.selectornews.com 0.0.0.0 js.smi2.ru 0.0.0.0 js.tongji.linezing.com 0.0.0.0 js.zevents.com 0.0.0.0 judo.salon.com 0.0.0.0 juggler.inetinteractive.com 0.0.0.0 justwebads.com 0.0.0.0 jxliu.com 0.0.0.0 k5ads.osdn.com 0.0.0.0 kaartenhuis.nl.site-id.nl 0.0.0.0 kansas.valueclick.com 0.0.0.0 katu.adbureau.net 0.0.0.0 kazaa.adserver.co.il 0.0.0.0 kermit.macnn.com 0.0.0.0 kestrel.ospreymedialp.com 0.0.0.0 keys.dmtracker.com 0.0.0.0 keywordblocks.com 0.0.0.0 keywords.adtlgc.com 0.0.0.0 kitaramarketplace.com 0.0.0.0 kitaramedia.com 0.0.0.0 kitaratrk.com 0.0.0.0 kithrup.matchlogic.com 0.0.0.0 kixer.com 0.0.0.0 klikk.linkpulse.com 0.0.0.0 klikmoney.net 0.0.0.0 kliksaya.com 0.0.0.0 klipads.dvlabs.com 0.0.0.0 klipmart.dvlabs.com 0.0.0.0 klipmart.forbes.com 0.0.0.0 kmdl101.com 0.0.0.0 knc.lv 0.0.0.0 knight.economist.com 0.0.0.0 kona2.kontera.com 0.0.0.0 kona3.kontera.com 
0.0.0.0 kona4.kontera.com 0.0.0.0 kona5.kontera.com 0.0.0.0 kona6.kontera.com 0.0.0.0 kona7.kontera.com 0.0.0.0 kona8.kontera.com 0.0.0.0 kona.kontera.com 0.0.0.0 kontera.com 0.0.0.0 kreaffiliation.com 0.0.0.0 kropka.onet.pl 0.0.0.0 kuhdi.com 0.0.0.0 l.5min.com 0.0.0.0 ladyclicks.ru 0.0.0.0 lanzar.publicidadweb.com 0.0.0.0 laptopreportcard.com 0.0.0.0 laptoprewards.com 0.0.0.0 laptoprewardsgroup.com 0.0.0.0 laptoprewardszone.com 0.0.0.0 larivieracasino.com 0.0.0.0 lasthr.info 0.0.0.0 lastmeasure.zoy.org 0.0.0.0 launch.adserver.yahoo.com 0.0.0.0 layer-ads.de 0.0.0.0 lb-adserver.ig.com.br 0.0.0.0 ld1.criteo.com 0.0.0.0 ld2.criteo.com 0.0.0.0 ldglob01.adtech.de 0.0.0.0 ldglob01.adtech.fr 0.0.0.0 ldglob01.adtech.us 0.0.0.0 ldglob02.adtech.de 0.0.0.0 ldglob02.adtech.fr 0.0.0.0 ldglob02.adtech.us 0.0.0.0 ldimage01.adtech.de 0.0.0.0 ldimage01.adtech.fr 0.0.0.0 ldimage01.adtech.us 0.0.0.0 ldimage02.adtech.de 0.0.0.0 ldimage02.adtech.fr 0.0.0.0 ldimage02.adtech.us 0.0.0.0 ldserv01.adtech.de 0.0.0.0 ldserv01.adtech.fr 0.0.0.0 ldserv01.adtech.us 0.0.0.0 ldserv02.adtech.de 0.0.0.0 ldserv02.adtech.fr 0.0.0.0 ldserv02.adtech.us 0.0.0.0 le1er.net 0.0.0.0 leadback.advertising.com 0.0.0.0 leader.linkexchange.com 0.0.0.0 lead.program3.com 0.0.0.0 leadsynaptic.go2jump.org 0.0.0.0 learning-offer.com 0.0.0.0 legal-rewardpath.com 0.0.0.0 leisure-offer.com 0.0.0.0 lg.brandreachsys.com 0.0.0.0 liberty.gedads.com 0.0.0.0 link2me.ru 0.0.0.0 link4ads.com 0.0.0.0 linktracker.angelfire.com 0.0.0.0 linuxpark.adtech.de 0.0.0.0 linuxpark.adtech.fr 0.0.0.0 linuxpark.adtech.us 0.0.0.0 liquidad.narrowcastmedia.com 0.0.0.0 live-cams-1.livejasmin.com 0.0.0.0 livingnet.adtech.de 0.0.0.0 ll.atdmt.com 0.0.0.0 l.linkpulse.com 0.0.0.0 lnads.osdn.com 0.0.0.0 load.exelator.com 0.0.0.0 load.focalex.com 0.0.0.0 loading321.com 0.0.0.0 loadm.exelator.com 0.0.0.0 local.promoisland.net 0.0.0.0 logc252.xiti.com 0.0.0.0 log.feedjit.com 0.0.0.0 login.linkpulse.com 0.0.0.0 log.olark.com 0.0.0.0 looksmartcollect.247realmedia.com 0.0.0.0 louisvil.app.ur.gcion.com 0.0.0.0 louisvil.ur.gcion.com 0.0.0.0 lp1.linkpulse.com 0.0.0.0 lp4.linkpulse.com 0.0.0.0 lpcloudsvr405.com 0.0.0.0 lstats.qip.ru 0.0.0.0 lt.andomedia.com 0.0.0.0 lt.angelfire.com 0.0.0.0 lucky-day-uk.com 0.0.0.0 luxup.ru 0.0.0.0 lw1.gamecopyworld.com 0.0.0.0 lw2.gamecopyworld.com 0.0.0.0 lycos.247realmedia.com 0.0.0.0 l.yieldmanager.net 0.0.0.0 m1.emea.2mdn.net.edgesuite.net 0.0.0.0 m2.sexgarantie.nl 0.0.0.0 m3.2mdn.net 0.0.0.0 macaddictads.snv.futurenet.com 0.0.0.0 macads.net 0.0.0.0 mackeeperapp1.zeobit.com 0.0.0.0 mad2.brandreachsys.com 0.0.0.0 m.adbridge.de 0.0.0.0 mads.aol.com 0.0.0.0 mads.cnet.com 0.0.0.0 mail.radar.imgsmail.ru 0.0.0.0 manage001.adtech.de 0.0.0.0 manage001.adtech.fr 0.0.0.0 manage001.adtech.us 0.0.0.0 manager.rovion.com 0.0.0.0 manuel.theonion.com 0.0.0.0 marketgid.com 0.0.0.0 marketing.888.com 0.0.0.0 marketing-rewardpath.com 0.0.0.0 marriottinternationa.tt.omtrdc.net 0.0.0.0 mastertracks.be 0.0.0.0 matomy.adk2.co 0.0.0.0 matrix.mediavantage.de 0.0.0.0 maxadserver.corusradionetwork.com 0.0.0.0 maxads.ruralpress.com 0.0.0.0 maxbounty.com 0.0.0.0 maximumpcads.imaginemedia.com 0.0.0.0 maxmedia.sgaonline.com 0.0.0.0 maxserving.com 0.0.0.0 mb01.com 0.0.0.0 mbox2.offermatica.com 0.0.0.0 mbox9.offermatica.com 0.0.0.0 mds.centrport.net 0.0.0.0 media2021.videostrip.com 0.0.0.0 media2.adshuffle.com 0.0.0.0 media2.legacy.com 0.0.0.0 media2.travelzoo.com 0.0.0.0 media4021.videostrip.com #http://media4021.videostrip.com/dev8/0/000/449/0000449408.mp4 0.0.0.0 
media5021.videostrip.com #http://media5021.videostrip.com/dev14/0/000/363/0000363146.mp4 0.0.0.0 media6021.videostrip.com 0.0.0.0 media6.sitebrand.com 0.0.0.0 media.888.com 0.0.0.0 media.adcentriconline.com 0.0.0.0 media.adrcdn.com 0.0.0.0 media.adrevolver.com 0.0.0.0 media.adrime.com 0.0.0.0 media.adshadow.net 0.0.0.0 media.b.lead.program3.com 0.0.0.0 media.bonnint.net 0.0.0.0 mediacharger.com 0.0.0.0 media.contextweb.com 0.0.0.0 media.elb-kind.de 0.0.0.0 media.espace-plus.net 0.0.0.0 media.fairlink.ru 0.0.0.0 mediafr.247realmedia.com 0.0.0.0 media.funpic.de 0.0.0.0 medialand.relax.ru 0.0.0.0 media.markethealth.com 0.0.0.0 media.naked.com 0.0.0.0 media.nk-net.pl 0.0.0.0 media.ontarionorth.com 0.0.0.0 media.popuptraffic.com 0.0.0.0 mediapst.adbureau.net 0.0.0.0 mediapst-images.adbureau.net 0.0.0.0 mediative.ca 0.0.0.0 mediative.com 0.0.0.0 media.trafficfactory.biz 0.0.0.0 media.trafficjunky.net 0.0.0.0 mediauk.247realmedia.com 0.0.0.0 media.ventivmedia.com 0.0.0.0 media.viwii.net 0.0.0.0 medical-offer.com 0.0.0.0 medical-rewardpath.com 0.0.0.0 medleyads.com 0.0.0.0 medrx.sensis.com.au 0.0.0.0 megapanel.gem.pl 0.0.0.0 mercury.bravenet.com 0.0.0.0 messagent.duvalguillaume.com 0.0.0.0 messagia.adcentric.proximi-t.com 0.0.0.0 meter-svc.nytimes.com 0.0.0.0 metrics.natmags.co.uk 0.0.0.0 metrics.sfr.fr 0.0.0.0 metrics.target.com 0.0.0.0 m.fr.a2dfp.net 0.0.0.0 m.friendlyduck.com 0.0.0.0 mf.sitescout.com 0.0.0.0 mg.dt00.net 0.0.0.0 mgid.com 0.0.0.0 mhlnk.com 0.0.0.0 mi.adinterax.com 0.0.0.0 microsof.wemfbox.ch 0.0.0.0 mightymagoo.com 0.0.0.0 mii-image.adjuggler.com 0.0.0.0 mini.videostrip.com 0.0.0.0 mirror.pointroll.com 0.0.0.0 mjxads.internet.com 0.0.0.0 mjx.ads.nwsource.com 0.0.0.0 mklik.gazeta.pl 0.0.0.0 mktg-offer.com 0.0.0.0 mlntracker.com 0.0.0.0 mm.admob.com 0.0.0.0 mm.chitika.net 0.0.0.0 mob.adwhirl.com 0.0.0.0 mobileads.msn.com 0.0.0.0 mobile.juicyads.com 0.0.0.0 mobularity.com 0.0.0.0 mochibot.com 0.0.0.0 mojofarm.mediaplex.com 0.0.0.0 moneyraid.com 0.0.0.0 monstersandcritics.advertserve.com 0.0.0.0 morefreecamsecrets.com 0.0.0.0 morevisits.info 0.0.0.0 motd.pinion.gg 0.0.0.0 movieads.imgs.sapo.pt 0.0.0.0 mp3playersource.com 0.0.0.0 mp.tscapeplay.com 0.0.0.0 msn.allyes.com 0.0.0.0 msnbe-hp.metriweb.be 0.0.0.0 msn-cdn.effectivemeasure.net 0.0.0.0 msn.oewabox.at 0.0.0.0 msn.tns-cs.net 0.0.0.0 msn.uvwbox.de 0.0.0.0 msn.wrating.com 0.0.0.0 mt58.mtree.com 0.0.0.0 m.tribalfusion.com 0.0.0.0 mu-in-f167.1e100.net 0.0.0.0 multi.xnxx.com 0.0.0.0 mvonline.com 0.0.0.0 mx.adserver.yahoo.com 0.0.0.0 myao.adocean.pl 0.0.0.0 my.blueadvertise.com 0.0.0.0 mycashback.co.uk 0.0.0.0 mycelloffer.com 0.0.0.0 mychoicerewards.com 0.0.0.0 myexclusiverewards.com 0.0.0.0 myfreedinner.com 0.0.0.0 myfreegifts.co.uk 0.0.0.0 myfreemp3player.com 0.0.0.0 mygiftcardcenter.com 0.0.0.0 mygiftresource.com 0.0.0.0 mygreatrewards.com 0.0.0.0 myoffertracking.com 0.0.0.0 my-reward-channel.com 0.0.0.0 my-rewardsvault.com 0.0.0.0 myseostats.com 0.0.0.0 myusersonline.com 0.0.0.0 myyearbookdigital.checkm8.com 0.0.0.0 n4g.us.intellitxt.com 0.0.0.0 n4p.ru.redtram.com 0.0.0.0 nationalissuepanel.com 0.0.0.0 nationalpost.adperfect.com 0.0.0.0 nationalsurveypanel.com 0.0.0.0 nbads.com 0.0.0.0 nbc.adbureau.net 0.0.0.0 nbimg.dt00.net 0.0.0.0 nb.netbreak.com.au 0.0.0.0 nc.ru.redtram.com 0.0.0.0 nctracking.com 0.0.0.0 nd1.gamecopyworld.com 0.0.0.0 nearbyad.com 0.0.0.0 needadvertising.com 0.0.0.0 netads.hotwired.com 0.0.0.0 netadsrv.iworld.com 0.0.0.0 netads.sohu.com 0.0.0.0 netcomm.spinbox.net 0.0.0.0 netpalnow.com 0.0.0.0 
netshelter.adtrix.com 0.0.0.0 netspiderads2.indiatimes.com 0.0.0.0 netsponsors.com 0.0.0.0 networkads.net 0.0.0.0 network-ca.247realmedia.com 0.0.0.0 network.realmedia.com 0.0.0.0 network.realtechnetwork.net 0.0.0.0 newads.cmpnet.com 0.0.0.0 newadserver.interfree.it 0.0.0.0 new-ads.eurogamer.net 0.0.0.0 newbs.hutz.co.il 0.0.0.0 news6health.com 0.0.0.0 newsblock.marketgid.com 0.0.0.0 new.smartcontext.pl 0.0.0.0 newssourceoftoday.com #security risk/fake news# 0.0.0.0 newt1.adultadworld.com 0.0.0.0 newt1.adultworld.com 0.0.0.0 ng3.ads.warnerbros.com 0.0.0.0 ngads.smartage.com 0.0.0.0 nitrous.exitfuel.com 0.0.0.0 nitrous.internetfuel.com 0.0.0.0 nivendas.net 0.0.0.0 nkcache.brandreachsys.com 0.0.0.0 nl.adserver.yahoo.com 0.0.0.0 no.adserver.yahoo.com 0.0.0.0 nospartenaires.com 0.0.0.0 nothing-but-value.com 0.0.0.0 novafinanza.com 0.0.0.0 novem.onet.pl 0.0.0.0 nrads.1host.co.il 0.0.0.0 nrkno.linkpulse.com 0.0.0.0 ns1.lalibco.com 0.0.0.0 ns1.primeinteractive.net 0.0.0.0 ns2.hitbox.com 0.0.0.0 ns2.lalibco.com 0.0.0.0 ns2.primeinteractive.net 0.0.0.0 nsads4.us.publicus.com 0.0.0.0 nsads.hotwired.com 0.0.0.0 nsads.us.publicus.com 0.0.0.0 nspmotion.com 0.0.0.0 ns-vip1.hitbox.com 0.0.0.0 ns-vip2.hitbox.com 0.0.0.0 ns-vip3.hitbox.com 0.0.0.0 ntbanner.digitalriver.com 0.0.0.0 nx-adv0005.247realmedia.com 0.0.0.0 nxs.kidcolez.cn 0.0.0.0 nxtscrn.adbureau.net 0.0.0.0 nysubwayoffer.com 0.0.0.0 nytadvertising.nytimes.com 0.0.0.0 o0.winfuture.de 0.0.0.0 o1.qnsr.com 0.0.0.0 o2.eyereturn.com 0.0.0.0 oads.cracked.com 0.0.0.0 oamsrhads.us.publicus.com 0.0.0.0 oas-1.rmuk.co.uk 0.0.0.0 oasads.whitepages.com 0.0.0.0 oasc02023.247realmedia.com 0.0.0.0 oasc02.247realmedia.com 0.0.0.0 oasc03.247realmedia.com 0.0.0.0 oasc04.247.realmedia.com 0.0.0.0 oasc05050.247realmedia.com 0.0.0.0 oasc05.247realmedia.com 0.0.0.0 oasc16.247realmedia.com 0.0.0.0 oascenral.phoenixnewtimes.com 0.0.0.0 oascentral.videodome.com 0.0.0.0 oas.dn.se 0.0.0.0 oas-eu.247realmedia.com 0.0.0.0 oas.heise.de 0.0.0.0 oasis2.advfn.com 0.0.0.0 oasis.411affiliates.ca 0.0.0.0 oasis.nysun.com 0.0.0.0 oasis.promon.cz 0.0.0.0 oasis.realbeer.com 0.0.0.0 oasis.zmh.zope.com 0.0.0.0 oasis.zmh.zope.net 0.0.0.0 oasn03.247realmedia.com 0.0.0.0 oassis.zmh.zope.com 0.0.0.0 objects.abcvisiteurs.com 0.0.0.0 objects.designbloxlive.com 0.0.0.0 obozua.adocean.pl 0.0.0.0 observer.advertserve.com 0.0.0.0 obs.nnm2.ru 0.0.0.0 offers.impower.com 0.0.0.0 offerx.co.uk 0.0.0.0 oinadserve.com 0.0.0.0 old-darkroast.adknowledge.com 0.0.0.0 ometrics.warnerbros.com 0.0.0.0 onclickads.net 0.0.0.0 online1.webcams.com 0.0.0.0 onlineads.magicvalley.com 0.0.0.0 onlinebestoffers.net 0.0.0.0 onocollect.247realmedia.com 0.0.0.0 open.4info.net 0.0.0.0 openadext.tf1.fr 0.0.0.0 openad.infobel.com 0.0.0.0 openads.dimcab.com 0.0.0.0 openads.friendfinder.com 0.0.0.0 openads.nightlifemagazine.ca 0.0.0.0 openads.smithmag.net 0.0.0.0 openads.zeads.com 0.0.0.0 openad.travelnow.com 0.0.0.0 opentable.tt.omtrdc.net 0.0.0.0 openx2.fotoflexer.com 0.0.0.0 openx.adfactor.nl 0.0.0.0 openx.coolconcepts.nl 0.0.0.0 openx.shinyads.com 0.0.0.0 openxxx.viragemedia.com 0.0.0.0 optimized-by.rubiconproject.com 0.0.0.0 optimized.by.vitalads.net 0.0.0.0 optimize.indieclick.com 0.0.0.0 optimzedby.rmxads.com 0.0.0.0 orange.weborama.fr 0.0.0.0 ordie.adbureau.net 0.0.0.0 origin.chron.com 0.0.0.0 out.popads.net 0.0.0.0 overflow.adsoftware.com 0.0.0.0 overlay.ringtonematcher.com 0.0.0.0 overstock.tt.omtrdc.net 0.0.0.0 ox-d.hbr.org 0.0.0.0 ox-d.hulkshare.com 0.0.0.0 ox-d.hypeads.org 0.0.0.0 ox-d.zenoviagroup.com 0.0.0.0 
ox.eurogamer.net 0.0.0.0 ox-i.zenoviagroup.com 0.0.0.0 ozonemedia.adbureau.net 0.0.0.0 oz.valueclick.com 0.0.0.0 oz.valueclick.ne.jp 0.0.0.0 p0rnuha.com 0.0.0.0 p1.adhitzads.com 0.0.0.0 pagead1.googlesyndication.com 0.0.0.0 pagead2.googlesyndication.com 0.0.0.0 pagead3.googlesyndication.com 0.0.0.0 pagead.googlesyndication.com 0.0.0.0 pages.etology.com 0.0.0.0 paime.com 0.0.0.0 panel.adtify.pl 0.0.0.0 paperg.com 0.0.0.0 partner01.oingo.com 0.0.0.0 partner02.oingo.com 0.0.0.0 partner03.oingo.com 0.0.0.0 partner.ah-ha.com 0.0.0.0 partner.ceneo.pl 0.0.0.0 partner.join.com.ua 0.0.0.0 partner.magna.ru 0.0.0.0 partner.pobieraczek.pl 0.0.0.0 partners.sprintrade.com 0.0.0.0 partners.webmasterplan.com 0.0.0.0 partner.wapacz.pl 0.0.0.0 partner.wapster.pl 0.0.0.0 pathforpoints.com 0.0.0.0 paulsnetwork.com 0.0.0.0 pbid.pro-market.net 0.0.0.0 pb.tynt.com 0.0.0.0 pcads.ru 0.0.0.0 pei-ads.playboy.com 0.0.0.0 people-choice-sites.com 0.0.0.0 personalcare-offer.com 0.0.0.0 personalcashbailout.com 0.0.0.0 pg2.solution.weborama.fr 0.0.0.0 ph-ad01.focalink.com 0.0.0.0 ph-ad02.focalink.com 0.0.0.0 ph-ad03.focalink.com 0.0.0.0 ph-ad04.focalink.com 0.0.0.0 ph-ad05.focalink.com 0.0.0.0 ph-ad06.focalink.com 0.0.0.0 ph-ad07.focalink.com 0.0.0.0 ph-ad08.focalink.com 0.0.0.0 ph-ad09.focalink.com 0.0.0.0 ph-ad10.focalink.com 0.0.0.0 ph-ad11.focalink.com 0.0.0.0 ph-ad12.focalink.com 0.0.0.0 ph-ad13.focalink.com 0.0.0.0 ph-ad14.focalink.com 0.0.0.0 ph-ad15.focalink.com 0.0.0.0 ph-ad16.focalink.com 0.0.0.0 ph-ad17.focalink.com 0.0.0.0 ph-ad18.focalink.com 0.0.0.0 ph-ad19.focalink.com 0.0.0.0 ph-ad20.focalink.com 0.0.0.0 ph-ad21.focalink.com 0.0.0.0 ph-cdn.effectivemeasure.net 0.0.0.0 phoenixads.co.in 0.0.0.0 photobucket.adnxs.com 0.0.0.0 photos0.pop6.com 0.0.0.0 photos1.pop6.com 0.0.0.0 photos2.pop6.com 0.0.0.0 photos3.pop6.com 0.0.0.0 photos4.pop6.com 0.0.0.0 photos5.pop6.com 0.0.0.0 photos6.pop6.com 0.0.0.0 photos7.pop6.com 0.0.0.0 photos8.pop6.com 0.0.0.0 photos.daily-deals.analoganalytics.com 0.0.0.0 photos.pop6.com 0.0.0.0 phpads.astalavista.us 0.0.0.0 phpads.cnpapers.com 0.0.0.0 phpads.flipcorp.com 0.0.0.0 phpads.foundrymusic.com 0.0.0.0 phpads.i-merge.net 0.0.0.0 phpads.macbidouille.com 0.0.0.0 phpadsnew.gamefolk.de 0.0.0.0 phpadsnew.wn.com 0.0.0.0 php.fark.com 0.0.0.0 pick-savings.com 0.0.0.0 p.ic.tynt.com 0.0.0.0 pink.habralab.ru 0.0.0.0 pix01.revsci.net 0.0.0.0 pix521.adtech.de 0.0.0.0 pix521.adtech.fr 0.0.0.0 pix521.adtech.us 0.0.0.0 pix522.adtech.de 0.0.0.0 pix522.adtech.fr 0.0.0.0 pix522.adtech.us 0.0.0.0 pixel.everesttech.net 0.0.0.0 pixel.mathtag.com 0.0.0.0 pixel.quantserve.com 0.0.0.0 pixel.sitescout.com 0.0.0.0 plasmatv4free.com 0.0.0.0 plasmatvreward.com 0.0.0.0 playlink.pl 0.0.0.0 playtime.tubemogul.com 0.0.0.0 pl.bbelements.com 0.0.0.0 pmstrk.mercadolivre.com.br 0.0.0.0 pntm.adbureau.net 0.0.0.0 pntm-images.adbureau.net 0.0.0.0 pol.bbelements.com 0.0.0.0 politicalopinionsurvey.com 0.0.0.0 pool.pebblemedia.adhese.com 0.0.0.0 popadscdn.net 0.0.0.0 popclick.net 0.0.0.0 poponclick.com 0.0.0.0 popunder.adsrevenue.net 0.0.0.0 popunder.paypopup.com 0.0.0.0 popupclick.ru 0.0.0.0 popupdomination.com 0.0.0.0 popup.matchmaker.com 0.0.0.0 popups.ad-logics.com 0.0.0.0 popups.infostart.com 0.0.0.0 postmasterdirect.com 0.0.0.0 post.rmbn.ru 0.0.0.0 pp.free.fr 0.0.0.0 p.profistats.net 0.0.0.0 p.publico.es 0.0.0.0 premium.ascensionweb.com 0.0.0.0 premiumholidayoffers.com 0.0.0.0 premiumproductsonline.com 0.0.0.0 premium-reward-club.com 0.0.0.0 prexyone.appspot.com 0.0.0.0 primetime.ad.primetime.net 0.0.0.0 
privitize.com 0.0.0.0 prizes.co.uk 0.0.0.0 productopinionpanel.com 0.0.0.0 productresearchpanel.com 0.0.0.0 producttestpanel.com 0.0.0.0 profile.uproxx.com 0.0.0.0 promo.awempire.com 0.0.0.0 promo.easy-dating.org 0.0.0.0 promos.fling.com 0.0.0.0 promote-bz.net 0.0.0.0 promotion.partnercash.com 0.0.0.0 proximityads.flipcorp.com 0.0.0.0 proxy.blogads.com 0.0.0.0 ptrads.mp3.com 0.0.0.0 pubdirecte.com 0.0.0.0 pubimgs.sapo.pt 0.0.0.0 publiads.com 0.0.0.0 publicidades.redtotalonline.com 0.0.0.0 publicis.adcentriconline.com 0.0.0.0 publish.bonzaii.no 0.0.0.0 publishers.adscholar.com 0.0.0.0 publishers.bidtraffic.com 0.0.0.0 publishers.brokertraffic.com 0.0.0.0 publishing.kalooga.com 0.0.0.0 pub.sapo.pt 0.0.0.0 pubshop.img.uol.com.br 0.0.0.0 purgecolon.net 0.0.0.0 px10.net 0.0.0.0 q.azcentral.com 0.0.0.0 q.b.h.cltomedia.info 0.0.0.0 qip.magna.ru 0.0.0.0 qitrck.com 0.0.0.0 quickbrowsersearch.com 0.0.0.0 r1-ads.ace.advertising.com 0.0.0.0 r.ace.advertising.com 0.0.0.0 radaronline.advertserve.com 0.0.0.0 r.admob.com 0.0.0.0 rad.msn.com 0.0.0.0 rads.stackoverflow.com 0.0.0.0 ravel-rewardpath.com 0.0.0.0 rb.burstway.com 0.0.0.0 rb.newsru.com 0.0.0.0 rbqip.pochta.ru 0.0.0.0 rc.asci.freenet.de 0.0.0.0 rc.bt.ilsemedia.nl 0.0.0.0 rccl.bridgetrack.com 0.0.0.0 rcdna.gwallet.com 0.0.0.0 r.chitika.net 0.0.0.0 rc.hotkeys.com 0.0.0.0 rcm-images.amazon.com 0.0.0.0 rcm-it.amazon.it 0.0.0.0 rc.rlcdn.com 0.0.0.0 rc.wl.webads.nl 0.0.0.0 realads.realmedia.com 0.0.0.0 realgfsbucks.com 0.0.0.0 realmedia-a800.d4p.net # Scientific American 0.0.0.0 realmedia.advance.net 0.0.0.0 recreation-leisure-rewardpath.com 0.0.0.0 red01.as-eu.falkag.net 0.0.0.0 red01.as-us.falkag.net 0.0.0.0 red02.as-eu.falkag.net 0.0.0.0 red02.as-us.falkag.net 0.0.0.0 red03.as-eu.falkag.net 0.0.0.0 red03.as-us.falkag.net 0.0.0.0 red04.as-eu.falkag.net 0.0.0.0 red04.as-us.falkag.net 0.0.0.0 red.as-eu.falkag.net 0.0.0.0 red.as-us.falkag.net 0.0.0.0 redherring.ngadcenter.net 0.0.0.0 redirect.click2net.com 0.0.0.0 redirect.hotkeys.com 0.0.0.0 reduxads.valuead.com 0.0.0.0 reg.coolsavings.com 0.0.0.0 regflow.com 0.0.0.0 regie.espace-plus.net 0.0.0.0 regio.adlink.de 0.0.0.0 reklama.onet.pl 0.0.0.0 reklamy.sfd.pl 0.0.0.0 re.kontera.com 0.0.0.0 rek.www.wp.pl 0.0.0.0 relestar.com 0.0.0.0 remotead.cnet.com 0.0.0.0 report02.adtech.de 0.0.0.0 report02.adtech.fr 0.0.0.0 report02.adtech.us 0.0.0.0 reporter001.adtech.de 0.0.0.0 reporter001.adtech.fr 0.0.0.0 reporter001.adtech.us 0.0.0.0 reporter.adtech.de 0.0.0.0 reporter.adtech.fr 0.0.0.0 reporter.adtech.us 0.0.0.0 reportimage.adtech.de 0.0.0.0 reportimage.adtech.fr 0.0.0.0 reportimage.adtech.us 0.0.0.0 resolvingserver.com 0.0.0.0 resources.infolinks.com 0.0.0.0 restaurantcom.tt.omtrdc.net 0.0.0.0 reverso.refr.adgtw.orangeads.fr 0.0.0.0 revsci.net 0.0.0.0 rewardblvd.com 0.0.0.0 rewardhotspot.com 0.0.0.0 rewardsflow.com 0.0.0.0 rhads.sv.publicus.com 0.0.0.0 rh.revolvermaps.com 0.0.0.0 richmedia.yimg.com 0.0.0.0 ridepush.com 0.0.0.0 ringtonepartner.com 0.0.0.0 rmbn.ru 0.0.0.0 rmedia.boston.com 0.0.0.0 rmm1u.checkm8.com 0.0.0.0 rms.admeta.com 0.0.0.0 ro.bbelements.com 0.0.0.0 romepartners.com 0.0.0.0 roosevelt.gjbig.com 0.0.0.0 rosettastone.tt.omtrdc.net 0.0.0.0 rotabanner100.utro.ru 0.0.0.0 rotabanner468.utro.ru 0.0.0.0 rotate.infowars.com 0.0.0.0 rotator.adjuggler.com 0.0.0.0 rotator.juggler.inetinteractive.com 0.0.0.0 rotobanner468.utro.ru 0.0.0.0 rovion.com 0.0.0.0 rpc.trafficfactory.biz 0.0.0.0 rp.hit.gemius.pl 0.0.0.0 r.reklama.biz 0.0.0.0 rscounter10.com 0.0.0.0 rsense-ad.realclick.co.kr 0.0.0.0 
rss.buysellads.com 0.0.0.0 rt2.infolinks.com 0.0.0.0 rt3.infolinks.com 0.0.0.0 rtb.pclick.yahoo.com 0.0.0.0 rtb.tubemogul.com 0.0.0.0 rtr.innovid.com 0.0.0.0 rts.sparkstudios.com 0.0.0.0 r.turn.com 0.0.0.0 ru.bbelements.com 0.0.0.0 ru.redtram.com 0.0.0.0 russ-shalavy.ru 0.0.0.0 rv.adcpx.v1.de.eusem.adaos-ads.net 0.0.0.0 rya.rockyou.com 0.0.0.0 s0b.bluestreak.com 0.0.0.0 s1.buysellads.com 0.0.0.0 s1.cz.adocean.pl 0.0.0.0 s1.gratkapl.adocean.pl 0.0.0.0 s2.buysellads.com 0.0.0.0 s3.buysellads.com 0.0.0.0 s5.addthis.com 0.0.0.0 s7.addthis.com 0.0.0.0 s.admulti.com 0.0.0.0 sad.sharethis.com 0.0.0.0 safe.hyperpaysys.com 0.0.0.0 safenyplanet.in 0.0.0.0 salesforcecom.tt.omtrdc.net 0.0.0.0 s.amazon-adsystem.com 0.0.0.0 samsung3.solution.weborama.fr 0.0.0.0 s.as-us.falkag.net 0.0.0.0 sat-city-ads.com 0.0.0.0 s.atemda.com 0.0.0.0 saturn.tiser.com.au 0.0.0.0 save-plan.com 0.0.0.0 savings-specials.com 0.0.0.0 savings-time.com 0.0.0.0 s.boom.ro 0.0.0.0 schoorsteen.geenstijl.nl 0.0.0.0 schumacher.adtech.de 0.0.0.0 schumacher.adtech.fr 0.0.0.0 schumacher.adtech.us 0.0.0.0 schwab.tt.omtrdc.net 0.0.0.0 s.clicktale.net 0.0.0.0 scoremygift.com 0.0.0.0 screen-mates.com 0.0.0.0 script.banstex.com 0.0.0.0 script.crsspxl.com 0.0.0.0 scripts.verticalacuity.com 0.0.0.0 scr.kliksaya.com 0.0.0.0 s.di.com.pl 0.0.0.0 se.adserver.yahoo.com 0.0.0.0 search.addthis.com 0.0.0.0 search.freeonline.com 0.0.0.0 search.keywordblocks.com 0.0.0.0 search.netseer.com 0.0.0.0 searchportal.information.com 0.0.0.0 searchwe.com 0.0.0.0 seasonalsamplerspecials.com 0.0.0.0 sec.hit.gemius.pl 0.0.0.0 secimage.adtech.de 0.0.0.0 secimage.adtech.fr 0.0.0.0 secimage.adtech.us 0.0.0.0 secserv.adtech.de 0.0.0.0 secserv.adtech.fr 0.0.0.0 secserv.adtech.us 0.0.0.0 secure.ace-tag.advertising.com 0.0.0.0 secure.addthis.com 0.0.0.0 secureads.ft.com 0.0.0.0 secure.bidvertiserr.com 0.0.0.0 securecontactinfo.com 0.0.0.0 secure.gaug.es 0.0.0.0 secure.img-cdn.mediaplex.com 0.0.0.0 securerunner.com 0.0.0.0 secure.webconnect.net 0.0.0.0 seduction-zone.com 0.0.0.0 sel.as-eu.falkag.net 0.0.0.0 sel.as-us.falkag.net 0.0.0.0 select001.adtech.de 0.0.0.0 select001.adtech.fr 0.0.0.0 select001.adtech.us 0.0.0.0 select002.adtech.de 0.0.0.0 select002.adtech.fr 0.0.0.0 select002.adtech.us 0.0.0.0 select003.adtech.de 0.0.0.0 select003.adtech.fr 0.0.0.0 select003.adtech.us 0.0.0.0 select004.adtech.de 0.0.0.0 select004.adtech.fr 0.0.0.0 select004.adtech.us 0.0.0.0 sergarius.popunder.ru 0.0.0.0 serv2.ad-rotator.com 0.0.0.0 serv.ad-rotator.com 0.0.0.0 servads.aip.org 0.0.0.0 serv.adspeed.com 0.0.0.0 servedbyadbutler.com 0.0.0.0 servedby.adcombination.com 0.0.0.0 servedby.advertising.com 0.0.0.0 servedby.flashtalking.com 0.0.0.0 servedby.netshelter.net 0.0.0.0 servedby.precisionclick.com 0.0.0.0 serve.freegaypix.com 0.0.0.0 serve.popads.net 0.0.0.0 serve.prestigecasino.com 0.0.0.0 server01.popupmoney.com 0.0.0.0 server2.as5000.com 0.0.0.0 server2.mediajmp.com 0.0.0.0 server3.yieldmanaged.com 0.0.0.0 server.as5000.com 0.0.0.0 server.bittads.com 0.0.0.0 server.cpmstar.com 0.0.0.0 server.popads.net 0.0.0.0 server-ssl.yieldmanaged.com 0.0.0.0 service001.adtech.de 0.0.0.0 service001.adtech.fr 0.0.0.0 service001.adtech.us 0.0.0.0 service002.adtech.de 0.0.0.0 service002.adtech.fr 0.0.0.0 service002.adtech.us 0.0.0.0 service003.adtech.de 0.0.0.0 service003.adtech.fr 0.0.0.0 service003.adtech.us 0.0.0.0 service004.adtech.fr 0.0.0.0 service004.adtech.us 0.0.0.0 service00x.adtech.de 0.0.0.0 service00x.adtech.fr 0.0.0.0 service00x.adtech.us 0.0.0.0 service.adtech.de 0.0.0.0 
service.adtech.fr 0.0.0.0 service.adtech.us 0.0.0.0 services1.adtech.de 0.0.0.0 services1.adtech.fr 0.0.0.0 services1.adtech.us 0.0.0.0 services.adtech.de 0.0.0.0 services.adtech.fr 0.0.0.0 services.adtech.us 0.0.0.0 serving.plexop.net 0.0.0.0 sexpartnerx.com 0.0.0.0 sexsponsors.com 0.0.0.0 sexzavod.com 0.0.0.0 sfads.osdn.com 0.0.0.0 s.flite.com 0.0.0.0 sg.adserver.yahoo.com 0.0.0.0 sgs001.adtech.de 0.0.0.0 sgs001.adtech.fr 0.0.0.0 sgs001.adtech.us 0.0.0.0 sh4sure-images.adbureau.net 0.0.0.0 shareasale.com 0.0.0.0 sharebar.addthiscdn.com 0.0.0.0 share-server.com 0.0.0.0 shc-rebates.com 0.0.0.0 shinystat.shiny.it 0.0.0.0 shopperpromotions.com 0.0.0.0 shopping-offer.com 0.0.0.0 shoppingsiterewards.com 0.0.0.0 shops-malls-rewardpath.com 0.0.0.0 shoptosaveenergy.com 0.0.0.0 showads1000.pubmatic.com 0.0.0.0 showadsak.pubmatic.com 0.0.0.0 sifomedia.citypaketet.se 0.0.0.0 signup.advance.net 0.0.0.0 si.hit.gemius.pl 0.0.0.0 simg.zedo.com 0.0.0.0 simpleads.net 0.0.0.0 simpli.fi 0.0.0.0 s.innovid.com 0.0.0.0 sixapart.adbureau.net 0.0.0.0 sizzle-savings.com 0.0.0.0 skgde.adocean.pl 0.0.0.0 skill.skilljam.com 0.0.0.0 slider.plugrush.com 0.0.0.0 smartadserver 0.0.0.0 smartadserver.com 0.0.0.0 smart.besonders.ru 0.0.0.0 smartclip.com 0.0.0.0 smartclip.net 0.0.0.0 smartcontext.pl 0.0.0.0 smartinit.webads.nl 0.0.0.0 smart-scripts.com 0.0.0.0 smartshare.lgtvsdp.com 0.0.0.0 s.media-imdb.com 0.0.0.0 s.megaclick.com 0.0.0.0 smile.modchipstore.com 0.0.0.0 smm.sitescout.com 0.0.0.0 s.moatads.com 0.0.0.0 smokersopinionpoll.com 0.0.0.0 smsmovies.net 0.0.0.0 snaps.vidiemi.com 0.0.0.0 sn.baventures.com 0.0.0.0 snip.answers.com 0.0.0.0 snipjs.answcdn.com 0.0.0.0 sochr.com 0.0.0.0 social.bidsystem.com 0.0.0.0 softlinkers.popunder.ru 0.0.0.0 sokrates.adtech.de 0.0.0.0 sokrates.adtech.fr 0.0.0.0 sokrates.adtech.us 0.0.0.0 sol.adbureau.net 0.0.0.0 sol-images.adbureau.net 0.0.0.0 solitairetime.com 0.0.0.0 solution.weborama.fr 0.0.0.0 somethingawful.crwdcntrl.net 0.0.0.0 sonycomputerentertai.tt.omtrdc.net 0.0.0.0 soongu.info 0.0.0.0 spanids.dictionary.com 0.0.0.0 spanids.thesaurus.com 0.0.0.0 spc.cekfmeoejdbfcfichgbfcgjf.vast2as3.glammedia-pubnet.northamerica.telemetryverification.net 0.0.0.0 spe.atdmt.com 0.0.0.0 specialgiftrewards.com 0.0.0.0 specialoffers.aol.com 0.0.0.0 specialonlinegifts.com 0.0.0.0 specials-rewardpath.com 0.0.0.0 speedboink.com 0.0.0.0 speedclicks.ero-advertising.com 0.0.0.0 speed.pointroll.com # Microsoft 0.0.0.0 spinbox.com 0.0.0.0 spinbox.consumerreview.com 0.0.0.0 spinbox.freedom.com 0.0.0.0 spinbox.macworld.com 0.0.0.0 spinbox.techtracker.com 0.0.0.0 spin.spinbox.net 0.0.0.0 sponsor1.com 0.0.0.0 sponsors.behance.com 0.0.0.0 sponsors.ezgreen.com 0.0.0.0 sponsorships.net 0.0.0.0 sports-bonuspath.com 0.0.0.0 sports-fitness-rewardpath.com 0.0.0.0 sports-offer.com 0.0.0.0 sports-offer.net 0.0.0.0 sports-premiumblvd.com 0.0.0.0 spotxchange.com 0.0.0.0 s.ppjol.net 0.0.0.0 sq2trk2.com 0.0.0.0 srs.targetpoint.com 0.0.0.0 srv.juiceadv.com 0.0.0.0 ssads.osdn.com 0.0.0.0 s.skimresources.com 0.0.0.0 sso.canada.com 0.0.0.0 staging.snip.answers.com 0.0.0.0 stampen.adtlgc.com 0.0.0.0 stampen.linkpulse.com 0.0.0.0 stampscom.tt.omtrdc.net 0.0.0.0 stanzapub.advertserve.com 0.0.0.0 star-advertising.com 0.0.0.0 stat.blogads.com 0.0.0.0 stat.dealtime.com 0.0.0.0 stat.ebuzzing.com 0.0.0.0 static1.influads.com 0.0.0.0 static.2mdn.net 0.0.0.0 static.admaximize.com 0.0.0.0 staticads.btopenworld.com 0.0.0.0 static.adsonar.com 0.0.0.0 static.adtaily.pl 0.0.0.0 static.adzerk.net 0.0.0.0 static.aff-landing-tmp.foxtab.com 
0.0.0.0 staticb.mydirtyhobby.com 0.0.0.0 static.carbonads.com 0.0.0.0 static.clicktorrent.info 0.0.0.0 static.creatives.livejasmin.com 0.0.0.0 static.doubleclick.net 0.0.0.0 static.everyone.net 0.0.0.0 static.exoclick.com 0.0.0.0 static.fastpic.ru 0.0.0.0 static.firehunt.com 0.0.0.0 static.fmpub.net 0.0.0.0 static.freenet.de 0.0.0.0 static.groupy.co.nz 0.0.0.0 static.hitfarm.com 0.0.0.0 static.ifa.camads.net 0.0.0.0 static.l3.cdn.adbucks.com 0.0.0.0 static.l3.cdn.adsucks.com 0.0.0.0 static.plista.com 0.0.0.0 static.plugrush.com 0.0.0.0 static.pulse360.com 0.0.0.0 static.scanscout.com 0.0.0.0 static.vpptechnologies.com 0.0.0.0 static.way2traffic.com 0.0.0.0 statistik-gallup.dk 0.0.0.0 stats2.dooyoo.com 0.0.0.0 stats.askmoses.com 0.0.0.0 stats.buzzparadise.com 0.0.0.0 stats.jtvnw.net 0.0.0.0 stats.shopify.com 0.0.0.0 status.addthis.com 0.0.0.0 st.blogads.com 0.0.0.0 s.tcimg.com 0.0.0.0 st.marketgid.com 0.0.0.0 stocker.bonnint.net 0.0.0.0 storage.softure.com 0.0.0.0 storage.trafic.ro 0.0.0.0 streamate.com 0.0.0.0 stts.rbc.ru 0.0.0.0 st.valueclick.com 0.0.0.0 su.addthis.com 0.0.0.0 subtracts.userplane.com 0.0.0.0 sudokuwhiz.com 0.0.0.0 sunmaker.com 0.0.0.0 superbrewards.com 0.0.0.0 support.sweepstakes.com 0.0.0.0 supremeadsonline.com 0.0.0.0 suresafe1.adsovo.com 0.0.0.0 surplus-suppliers.com 0.0.0.0 surveycentral.directinsure.info 0.0.0.0 surveymonkeycom.tt.omtrdc.net 0.0.0.0 surveypass.com 0.0.0.0 susi.adtech.fr 0.0.0.0 susi.adtech.us 0.0.0.0 svd2.adtlgc.com 0.0.0.0 svd.adtlgc.com 0.0.0.0 sview.avenuea.com 0.0.0.0 sweetsforfree.com 0.0.0.0 symbiosting.com 0.0.0.0 synad2.nuffnang.com.cn 0.0.0.0 synad.nuffnang.com.sg 0.0.0.0 syncaccess.net 0.0.0.0 sync.mathtag.com 0.0.0.0 syndicated.mondominishows.com 0.0.0.0 syndication.exoclick.com 0.0.0.0 syndication.traffichaus.com 0.0.0.0 syn.verticalacuity.com 0.0.0.0 sysadmin.map24.com 0.0.0.0 t1.adserver.com 0.0.0.0 t4.liverail.com 0.0.0.0 t-ads.adap.tv 0.0.0.0 tag1.webabacus.com 0.0.0.0 tag.admeld.com 0.0.0.0 tag.contextweb.com 0.0.0.0 tag.regieci.com 0.0.0.0 tags.bluekai.com 0.0.0.0 tags.hypeads.org 0.0.0.0 tag.webcompteur.com 0.0.0.0 tag.yieldoptimizer.com 0.0.0.0 taloussanomat.linkpulse.com 0.0.0.0 tap2-cdn.rubiconproject.com 0.0.0.0 tbtrack.zutrack.com 0.0.0.0 tcadops.ca 0.0.0.0 tcimg.com 0.0.0.0 t.cpmadvisors.com 0.0.0.0 tdameritrade.tt.omtrdc.net 0.0.0.0 tdc.advertorials.dk 0.0.0.0 tdkads.ads.dk 0.0.0.0 techreview.adbureau.net 0.0.0.0 techreview-images.adbureau.net 0.0.0.0 teeser.ru 0.0.0.0 te.kontera.com 0.0.0.0 tel.geenstijl.nl 0.0.0.0 textads.madisonavenue.com 0.0.0.0 textad.traficdublu.ro 0.0.0.0 text-link-ads.com 0.0.0.0 text-link-ads.ientry.com 0.0.0.0 text-link-ads-inventory.com 0.0.0.0 textsrv.com 0.0.0.0 tf.nexac.com 0.0.0.0 tgpmanager.com 0.0.0.0 the-binary-trader.biz 0.0.0.0 the-path-gateway.com 0.0.0.0 thepiratetrader.com 0.0.0.0 the-smart-stop.com 0.0.0.0 theuploadbusiness.com 0.0.0.0 theuseful.com 0.0.0.0 theuseful.net 0.0.0.0 thinknyc.eu-adcenter.net 0.0.0.0 thinktarget.com 0.0.0.0 thinlaptoprewards.com 0.0.0.0 this.content.served.by.adshuffle.com 0.0.0.0 thoughtfully-free.com 0.0.0.0 thruport.com 0.0.0.0 tmp3.nexac.com 0.0.0.0 tmsads.tribune.com 0.0.0.0 tmx.technoratimedia.com 0.0.0.0 tn.adserve.com 0.0.0.0 toads.osdn.com 0.0.0.0 tons-to-see.com 0.0.0.0 toolbar.adperium.com 0.0.0.0 top100-images.rambler.ru 0.0.0.0 top1site.3host.com 0.0.0.0 top5.mail.ru 0.0.0.0 topbrandrewards.com 0.0.0.0 topconsumergifts.com 0.0.0.0 topdemaroc.com 0.0.0.0 topica.advertserve.com 0.0.0.0 top.list.ru 0.0.0.0 toplist.throughput.de 0.0.0.0 
topmarketcenter.com 0.0.0.0 touche.adcentric.proximi-t.com 0.0.0.0 tower.adexpedia.com 0.0.0.0 toy-offer.com 0.0.0.0 toy-offer.net 0.0.0.0 tpads.ovguide.com 0.0.0.0 tpc.googlesyndication.com 0.0.0.0 tps30.doubleverify.com 0.0.0.0 tps31.doubleverify.com 0.0.0.0 track.adbooth.net 0.0.0.0 trackadvertising.net 0.0.0.0 track-apmebf.cj.akadns.net 0.0.0.0 track.bigbrandpromotions.com 0.0.0.0 track.e7r.com.br 0.0.0.0 trackers.1st-affiliation.fr 0.0.0.0 tracking.craktraffic.com 0.0.0.0 tracking.edvisors.com 0.0.0.0 tracking.eurowebaffiliates.com 0.0.0.0 tracking.joker.com 0.0.0.0 tracking.keywordmax.com 0.0.0.0 tracking.veoxa.com 0.0.0.0 track.omgpl.com 0.0.0.0 track.the-members-section.com 0.0.0.0 track.vscash.com 0.0.0.0 tradearabia.advertserve.com 0.0.0.0 tradefx.advertserve.com 0.0.0.0 trafficbee.com 0.0.0.0 trafficrevenue.net 0.0.0.0 traffictraders.com 0.0.0.0 traffprofit.com 0.0.0.0 trafmag.com 0.0.0.0 trafsearchonline.com 0.0.0.0 traktum.com 0.0.0.0 travel-leisure-bonuspath.com 0.0.0.0 travel-leisure-premiumblvd.com 0.0.0.0 traveller-offer.com 0.0.0.0 traveller-offer.net 0.0.0.0 travelncs.com 0.0.0.0 trekmedia.net 0.0.0.0 trendnews.com 0.0.0.0 trk.alskeip.com 0.0.0.0 trk.etrigue.com 0.0.0.0 trk.yadomedia.com 0.0.0.0 trustsitesite.com 0.0.0.0 trvlnet.adbureau.net 0.0.0.0 trvlnet-images.adbureau.net 0.0.0.0 tr.wl.webads.nl 0.0.0.0 tsms-ad.tsms.com 0.0.0.0 tste.ivillage.com 0.0.0.0 tste.mcclatchyinteractive.com 0.0.0.0 tste.startribune.com 0.0.0.0 ttarget.adbureau.net 0.0.0.0 ttuk.offers4u.mobi 0.0.0.0 turnerapac.d1.sc.omtrdc.net 0.0.0.0 tv2no.linkpulse.com 0.0.0.0 tvshowsnow.tvmax.hop.clickbank.net 0.0.0.0 tw.adserver.yahoo.com 0.0.0.0 twnads.weather.ca # Canadian Weather Network 0.0.0.0 uac.advertising.com 0.0.0.0 u-ads.adap.tv 0.0.0.0 uav.tidaltv.com 0.0.0.0 uc.csc.adserver.yahoo.com 0.0.0.0 uedata.amazon.com 0.0.0.0 uelbdc74fn.s.ad6media.fr 0.0.0.0 uf2.svrni.ca 0.0.0.0 ugo.eu-adcenter.net 0.0.0.0 ui.ppjol.com 0.0.0.0 uk.adserver.yahoo.com 0.0.0.0 uleadstrk.com 0.0.0.0 ultimatefashiongifts.com 0.0.0.0 ultrabestportal.com 0.0.0.0 um.simpli.fi 0.0.0.0 undertonenetworks.com 0.0.0.0 uole.ad.uol.com.br 0.0.0.0 u.openx.net 0.0.0.0 upload.adtech.de 0.0.0.0 upload.adtech.fr 0.0.0.0 upload.adtech.us 0.0.0.0 uproar.com 0.0.0.0 uproar.fortunecity.com 0.0.0.0 upsellit.com 0.0.0.0 us.adserver.yahoo.com 0.0.0.0 usads.vibrantmedia.com 0.0.0.0 usatoday.app.ur.gcion.com 0.0.0.0 usatravel-specials.com 0.0.0.0 usatravel-specials.net 0.0.0.0 us-choicevalue.com 0.0.0.0 usemax.de 0.0.0.0 usr.marketgid.com 0.0.0.0 us-topsites.com 0.0.0.0 ut.addthis.com 0.0.0.0 utarget.ru 0.0.0.0 utils.media-general.com 0.0.0.0 utils.mediageneral.com 0.0.0.0 vad.adbasket.net 0.0.0.0 vads.adbrite.com 0.0.0.0 van.ads.link4ads.com 0.0.0.0 vast.bp3845260.btrll.com 0.0.0.0 vast.bp3846806.btrll.com 0.0.0.0 vast.bp3846885.btrll.com 0.0.0.0 vast.tubemogul.com 0.0.0.0 vclick.adbrite.com 0.0.0.0 venus.goclick.com 0.0.0.0 ve.tscapeplay.com 0.0.0.0 v.fwmrm.net 0.0.0.0 vibrantmedia.com 0.0.0.0 videocop.com 0.0.0.0 videoegg.adbureau.net 0.0.0.0 video-game-rewards-central.com 0.0.0.0 videogamerewardscentral.com 0.0.0.0 videos.fleshlight.com 0.0.0.0 videoslots.888.com 0.0.0.0 videos.video-loader.com 0.0.0.0 view.atdmt.com #This may interfere with downloading from Microsoft, MSDN and TechNet websites. 
0.0.0.0 view.avenuea.com 0.0.0.0 view.binlayer.com 0.0.0.0 view.iballs.a1.avenuea.com 0.0.0.0 view.jamba.de 0.0.0.0 view.netrams.com 0.0.0.0 views.m4n.nl 0.0.0.0 viglink.com 0.0.0.0 viglink.pgpartner.com 0.0.0.0 villagevoicecollect.247realmedia.com 0.0.0.0 vip1.tw.adserver.yahoo.com 0.0.0.0 vipfastmoney.com 0.0.0.0 vk.18sexporn.ru 0.0.0.0 vmcsatellite.com 0.0.0.0 vmix.adbureau.net 0.0.0.0 vms.boldchat.com 0.0.0.0 vnu.eu-adcenter.net 0.0.0.0 vodafoneit.solution.weborama.fr 0.0.0.0 vp.tscapeplay.com 0.0.0.0 vu.veoxa.com 0.0.0.0 vzarabotke.ru 0.0.0.0 w100.am15.net 0.0.0.0 w10.am15.net 0.0.0.0 w10.centralmediaserver.com 0.0.0.0 w11.am15.net 0.0.0.0 w11.centralmediaserver.com 0.0.0.0 w12.am15.net 0.0.0.0 w13.am15.net 0.0.0.0 w14.am15.net 0.0.0.0 w15.am15.net 0.0.0.0 w16.am15.net 0.0.0.0 w17.am15.net 0.0.0.0 w18.am15.net 0.0.0.0 w19.am15.net 0.0.0.0 w1.am15.net 0.0.0.0 w1.webcompteur.com 0.0.0.0 w20.am15.net 0.0.0.0 w21.am15.net 0.0.0.0 w22.am15.net 0.0.0.0 w23.am15.net 0.0.0.0 w24.am15.net 0.0.0.0 w25.am15.net 0.0.0.0 w26.am15.net 0.0.0.0 w27.am15.net 0.0.0.0 w28.am15.net 0.0.0.0 w29.am15.net 0.0.0.0 w2.am15.net 0.0.0.0 w30.am15.net 0.0.0.0 w31.am15.net 0.0.0.0 w32.am15.net 0.0.0.0 w33.am15.net 0.0.0.0 w34.am15.net 0.0.0.0 w35.am15.net 0.0.0.0 w36.am15.net 0.0.0.0 w37.am15.net 0.0.0.0 w38.am15.net 0.0.0.0 w39.am15.net 0.0.0.0 w3.am15.net 0.0.0.0 w40.am15.net 0.0.0.0 w41.am15.net 0.0.0.0 w42.am15.net 0.0.0.0 w43.am15.net 0.0.0.0 w44.am15.net 0.0.0.0 w45.am15.net 0.0.0.0 w46.am15.net 0.0.0.0 w47.am15.net 0.0.0.0 w48.am15.net 0.0.0.0 w49.am15.net 0.0.0.0 w4.am15.net 0.0.0.0 w50.am15.net 0.0.0.0 w51.am15.net 0.0.0.0 w52.am15.net 0.0.0.0 w53.am15.net 0.0.0.0 w54.am15.net 0.0.0.0 w55.am15.net 0.0.0.0 w56.am15.net 0.0.0.0 w57.am15.net 0.0.0.0 w58.am15.net 0.0.0.0 w59.am15.net 0.0.0.0 w5.am15.net 0.0.0.0 w60.am15.net 0.0.0.0 w61.am15.net 0.0.0.0 w62.am15.net 0.0.0.0 w63.am15.net 0.0.0.0 w64.am15.net 0.0.0.0 w65.am15.net 0.0.0.0 w66.am15.net 0.0.0.0 w67.am15.net 0.0.0.0 w68.am15.net 0.0.0.0 w69.am15.net 0.0.0.0 w6.am15.net 0.0.0.0 w70.am15.net 0.0.0.0 w71.am15.net 0.0.0.0 w72.am15.net 0.0.0.0 w73.am15.net 0.0.0.0 w74.am15.net 0.0.0.0 w75.am15.net 0.0.0.0 w76.am15.net 0.0.0.0 w77.am15.net 0.0.0.0 w78.am15.net 0.0.0.0 w79.am15.net 0.0.0.0 w7.am15.net 0.0.0.0 w80.am15.net 0.0.0.0 w81.am15.net 0.0.0.0 w82.am15.net 0.0.0.0 w83.am15.net 0.0.0.0 w84.am15.net 0.0.0.0 w85.am15.net 0.0.0.0 w86.am15.net 0.0.0.0 w87.am15.net 0.0.0.0 w88.am15.net 0.0.0.0 w89.am15.net 0.0.0.0 w8.am15.net 0.0.0.0 w90.am15.net 0.0.0.0 w91.am15.net 0.0.0.0 w92.am15.net 0.0.0.0 w93.am15.net 0.0.0.0 w94.am15.net 0.0.0.0 w95.am15.net 0.0.0.0 w96.am15.net 0.0.0.0 w97.am15.net 0.0.0.0 w98.am15.net 0.0.0.0 w99.am15.net 0.0.0.0 w9.am15.net 0.0.0.0 wahoha.com 0.0.0.0 warp.crystalad.com 0.0.0.0 wdm29.com 0.0.0.0 web1b.netreflector.com 0.0.0.0 web.adblade.com 0.0.0.0 webads.bizservers.com 0.0.0.0 webads.nl 0.0.0.0 webcompteur.com 0.0.0.0 webhosting-ads.home.pl 0.0.0.0 webmdcom.tt.omtrdc.net 0.0.0.0 web.nyc.ads.juno.co 0.0.0.0 webservices-rewardpath.com 0.0.0.0 websurvey.spa-mr.com 0.0.0.0 wegetpaid.net 0.0.0.0 w.ic.tynt.com 0.0.0.0 widget3.linkwithin.com 0.0.0.0 widget5.linkwithin.com 0.0.0.0 widget.crowdignite.com 0.0.0.0 widget.plugrush.com 0.0.0.0 widgets.outbrain.com 0.0.0.0 widgets.tcimg.com 0.0.0.0 wigetmedia.com 0.0.0.0 wikiforosh.ir 0.0.0.0 williamhill.es 0.0.0.0 wmedia.rotator.hadj7.adjuggler.net 0.0.0.0 wordplaywhiz.com 0.0.0.0 work-offer.com 0.0.0.0 worry-free-savings.com 0.0.0.0 wppluginspro.com 0.0.0.0 ws.addthis.com 0.0.0.0 
wtp101.com 0.0.0.0 ww251.smartadserver.com 0.0.0.0 wwbtads.com 0.0.0.0 www10.ad.tomshardware.com 0.0.0.0 www10.glam.com 0.0.0.0 www10.indiads.com 0.0.0.0 www10.paypopup.com 0.0.0.0 www11.ad.tomshardware.com 0.0.0.0 www123.glam.com 0.0.0.0 www.123specialgifts.com 0.0.0.0 www12.ad.tomshardware.com 0.0.0.0 www12.glam.com 0.0.0.0 www13.ad.tomshardware.com 0.0.0.0 www13.glam.com 0.0.0.0 www14.ad.tomshardware.com 0.0.0.0 www15.ad.tomshardware.com 0.0.0.0 www17.glam.com 0.0.0.0 www18.glam.com 0.0.0.0 www1.adireland.com 0.0.0.0 www1.ad.tomshardware.com 0.0.0.0 www1.bannerspace.com 0.0.0.0 www1.belboon.de 0.0.0.0 www1.clicktorrent.info 0.0.0.0 www1.mpnrs.com 0.0.0.0 www1.popinads.com 0.0.0.0 www1.safenyplanet.in 0.0.0.0 www210.paypopup.com 0.0.0.0 www211.paypopup.com 0.0.0.0 www212.paypopup.com 0.0.0.0 www213.paypopup.com 0.0.0.0 www.247realmedia.com 0.0.0.0 www24a.glam.com 0.0.0.0 www24.glam.com 0.0.0.0 www25a.glam.com 0.0.0.0 www25.glam.com 0.0.0.0 www2.adireland.com 0.0.0.0 www2.adserverpub.com 0.0.0.0 www2.ad.tomshardware.com 0.0.0.0 www.2-art-coliseum.com 0.0.0.0 www2.bannerspace.com 0.0.0.0 www2.glam.com 0.0.0.0 www30a1.glam.com 0.0.0.0 www30a1-orig.glam.com 0.0.0.0 www30a2-orig.glam.com 0.0.0.0 www30a3.glam.com 0.0.0.0 www30a3-orig.glam.com 0.0.0.0 www30a7.glam.com 0.0.0.0 www30.glam.com 0.0.0.0 www30l2.glam.com 0.0.0.0 www30t1-orig.glam.com 0.0.0.0 www.321cba.com 0.0.0.0 www35f.glam.com 0.0.0.0 www35jm.glam.com 0.0.0.0 www35t.glam.com 0.0.0.0 www.360ads.com 0.0.0.0 www3.addthis.com 0.0.0.0 www3.adireland.com 0.0.0.0 www3.ad.tomshardware.com 0.0.0.0 www3.bannerspace.com 0.0.0.0 www3.game-advertising-online.com 0.0.0.0 www.3qqq.net 0.0.0.0 www.3turtles.com 0.0.0.0 www.404errorpage.com 0.0.0.0 www4.ad.tomshardware.com 0.0.0.0 www4.bannerspace.com 0.0.0.0 www4.glam.com 0.0.0.0 www4.smartadserver.com 0.0.0.0 www5.ad.tomshardware.com 0.0.0.0 www5.bannerspace.com 0.0.0.0 www.5thavenue.com 0.0.0.0 www6.ad.tomshardware.com 0.0.0.0 www6.bannerspace.com 0.0.0.0 www74.valueclick.com 0.0.0.0 www.7500.com 0.0.0.0 www7.ad.tomshardware.com 0.0.0.0 www7.bannerspace.com 0.0.0.0 www.7bpeople.com 0.0.0.0 www.7cnbcnews.com 0.0.0.0 www.805m.com 0.0.0.0 www81.valueclick.com 0.0.0.0 www.888casino.com 0.0.0.0 www.888.com 0.0.0.0 www.888poker.com 0.0.0.0 www8.ad.tomshardware.com 0.0.0.0 www8.bannerspace.com 0.0.0.0 www.961.com 0.0.0.0 www9.ad.tomshardware.com 0.0.0.0 www9.paypopup.com 0.0.0.0 www.abrogatesdv.info 0.0.0.0 www.actiondesk.com 0.0.0.0 www.action.ientry.net 0.0.0.0 www.adbanner.gr 0.0.0.0 www.adbrite.com 0.0.0.0 www.adcanadian.com 0.0.0.0 www.adcash.com 0.0.0.0 www.addthiscdn.com 0.0.0.0 www.adengage.com 0.0.0.0 www.adfunkyserver.com 0.0.0.0 www.adfusion.com 0.0.0.0 www.adimages.beeb.com 0.0.0.0 www.adipics.com 0.0.0.0 www.adireland.com 0.0.0.0 www.adjmps.com 0.0.0.0 www.adjug.com 0.0.0.0 www.adloader.com 0.0.0.0 www.adlogix.com 0.0.0.0 www.admex.com 0.0.0.0 www.adnet.biz 0.0.0.0 www.adnet.com 0.0.0.0 www.adnet.de 0.0.0.0 www.adobee.com 0.0.0.0 www.adocean.pl 0.0.0.0 www.adotube.com 0.0.0.0 www.adpepper.dk 0.0.0.0 www.adpowerzone.com 0.0.0.0 www.adquest3d.com 0.0.0.0 www.adreporting.com 0.0.0.0 www.ads2srv.com 0.0.0.0 www.adsentnetwork.com 0.0.0.0 www.adserver.co.il 0.0.0.0 www.adserver.com 0.0.0.0 www.adserver.com.my 0.0.0.0 www.adserver.com.pl 0.0.0.0 www.adserver-espnet.sportszone.net 0.0.0.0 www.adserver.janes.net 0.0.0.0 www.adserver.janes.org 0.0.0.0 www.adserver.jolt.co.uk 0.0.0.0 www.adserver.net 0.0.0.0 www.adserver.ugo.nl 0.0.0.0 www.adservtech.com 0.0.0.0 www.adsinimages.com 0.0.0.0 
www.ads.joetec.net 0.0.0.0 www.adsoftware.com 0.0.0.0 www.ad-souk.com 0.0.0.0 www.adspics.com 0.0.0.0 www.ads.revenue.net 0.0.0.0 www.adstogo.com 0.0.0.0 www.adstreams.org 0.0.0.0 www.adtaily.pl 0.0.0.0 www.adtechus.com 0.0.0.0 www.ad.tgdaily.com 0.0.0.0 www.adtlgc.com 0.0.0.0 www.ad.tomshardware.com 0.0.0.0 www.adtrader.com 0.0.0.0 www.adtrix.com 0.0.0.0 www.ad.twitchguru.com 0.0.0.0 www.ad-up.com 0.0.0.0 www.advaliant.com 0.0.0.0 www.advertising-department.com 0.0.0.0 www.advertlets.com 0.0.0.0 www.advertpro.com 0.0.0.0 www.adverts.dcthomson.co.uk 0.0.0.0 www.advertyz.com 0.0.0.0 www.ad-words.ru 0.0.0.0 www.afcyhf.com 0.0.0.0 www.affiliateclick.com 0.0.0.0 www.affiliate-fr.com 0.0.0.0 www.affiliation-france.com 0.0.0.0 www.afform.co.uk 0.0.0.0 www.affpartners.com 0.0.0.0 www.afterdownload.com 0.0.0.0 www.agkn.com 0.0.0.0 www.alexxe.com 0.0.0.0 www.allosponsor.com 0.0.0.0 www.annuaire-autosurf.com 0.0.0.0 www.apparelncs.com 0.0.0.0 www.apparel-offer.com 0.0.0.0 www.applelounge.com 0.0.0.0 www.appnexus.com 0.0.0.0 www.art-music-rewardpath.com 0.0.0.0 www.art-offer.com 0.0.0.0 www.art-offer.net 0.0.0.0 www.art-photo-music-premiumblvd.com 0.0.0.0 www.art-photo-music-rewardempire.com 0.0.0.0 www.art-photo-music-savingblvd.com 0.0.0.0 www.auctionshare.net 0.0.0.0 www.aureate.com 0.0.0.0 www.autohipnose.com 0.0.0.0 www.automotive-offer.com 0.0.0.0 www.automotive-rewardpath.com 0.0.0.0 www.avcounter10.com 0.0.0.0 www.avsads.com 0.0.0.0 www.a.websponsors.com 0.0.0.0 www.awesomevipoffers.com 0.0.0.0 www.awin1.com 0.0.0.0 www.awltovhc.com #qksrv 0.0.0.0 www.bananacashback.com 0.0.0.0 www.banner4all.dk 0.0.0.0 www.bannerads.de 0.0.0.0 www.bannerbackup.com 0.0.0.0 www.bannerconnect.net 0.0.0.0 www.banners.paramountzone.com 0.0.0.0 www.bannersurvey.biz 0.0.0.0 www.banstex.com 0.0.0.0 www.bargainbeautybuys.com 0.0.0.0 www.bbelements.com 0.0.0.0 www.bestshopperrewards.com 0.0.0.0 www.bet365.com 0.0.0.0 www.bidtraffic.com 0.0.0.0 www.bidvertiser.com 0.0.0.0 www.bigbrandpromotions.com 0.0.0.0 www.bigbrandrewards.com 0.0.0.0 www.biggestgiftrewards.com 0.0.0.0 www.binarysystem4u.com 0.0.0.0 www.biz-offer.com 0.0.0.0 www.bizopprewards.com 0.0.0.0 www.blasphemysfhs.info 0.0.0.0 www.blatant8jh.info 0.0.0.0 www.bluediamondoffers.com 0.0.0.0 www.bnnr.nl 0.0.0.0 www.bonzi.com 0.0.0.0 www.bookclub-offer.com 0.0.0.0 www.books-media-edu-premiumblvd.com 0.0.0.0 www.books-media-edu-rewardempire.com 0.0.0.0 www.books-media-rewardpath.com 0.0.0.0 www.boonsolutions.com 0.0.0.0 www.bostonsubwayoffer.com 0.0.0.0 www.brandrewardcentral.com 0.0.0.0 www.brandsurveypanel.com 0.0.0.0 www.brokertraffic.com 0.0.0.0 www.budsinc.com 0.0.0.0 www.bugsbanner.it 0.0.0.0 www.bulkclicks.com 0.0.0.0 www.bulletads.com 0.0.0.0 www.burstnet.com 0.0.0.0 www.business-rewardpath.com 0.0.0.0 www.bus-offer.com 0.0.0.0 www.buttcandy.com 0.0.0.0 www.buwobarun.cn 0.0.0.0 www.buycheapadvertising.com 0.0.0.0 www.buyhitscheap.com 0.0.0.0 www.capath.com 0.0.0.0 www.careers-rewardpath.com 0.0.0.0 www.car-truck-boat-bonuspath.com 0.0.0.0 www.car-truck-boat-premiumblvd.com 0.0.0.0 www.cashback.co.uk 0.0.0.0 www.cashbackwow.co.uk 0.0.0.0 www.cashcount.com 0.0.0.0 www.casino770.com 0.0.0.0 www.catalinkcashback.com 0.0.0.0 www.cell-phone-giveaways.com 0.0.0.0 www.cellphoneincentives.com 0.0.0.0 www.chainsawoffer.com 0.0.0.0 www.chartbeat.com 0.0.0.0 www.choicedealz.com 0.0.0.0 www.choicesurveypanel.com 0.0.0.0 www.christianbusinessadvertising.com 0.0.0.0 www.ciqugasox.cn 0.0.0.0 www.claimfreerewards.com 0.0.0.0 www.clashmediausa.com 0.0.0.0 www.click10.com 
0.0.0.0 www.click4click.com 0.0.0.0 www.clickbank.com 0.0.0.0 www.clickdensity.com 0.0.0.0 www.click-find-save.com 0.0.0.0 www.click-see-save.com 0.0.0.0 www.clicksor.com 0.0.0.0 www.clicksotrk.com 0.0.0.0 www.clicktale.com 0.0.0.0 www.clicktale.net 0.0.0.0 www.clickthruserver.com 0.0.0.0 www.clickthrutraffic.com 0.0.0.0 www.clicktilluwin.com 0.0.0.0 www.clicktorrent.info 0.0.0.0 www.clickxchange.com 0.0.0.0 www.closeoutproductsreview.com 0.0.0.0 www.cm1359.com 0.0.0.0 www.come-see-it-all.com 0.0.0.0 www.commerce-offer.com 0.0.0.0 www.commerce-rewardpath.com 0.0.0.0 www.computer-offer.com 0.0.0.0 www.computer-offer.net 0.0.0.0 www.computers-electronics-rewardpath.com 0.0.0.0 www.computersncs.com 0.0.0.0 www.consumergiftcenter.com 0.0.0.0 www.consumerincentivenetwork.com 0.0.0.0 www.consumer-org.com 0.0.0.0 www.contaxe.com 0.0.0.0 www.contextuads.com 0.0.0.0 www.contextweb.com 0.0.0.0 www.cookingtiprewards.com 0.0.0.0 www.coolconcepts.nl 0.0.0.0 www.cool-premiums.com 0.0.0.0 www.cool-premiums-now.com 0.0.0.0 www.coolpremiumsnow.com 0.0.0.0 www.coolsavings.com 0.0.0.0 www.coreglead.co.uk 0.0.0.0 www.cosmeticscentre.uk.com 0.0.0.0 www.cpabank.com 0.0.0.0 www.cpmadvisors.com 0.0.0.0 www.crazypopups.com 0.0.0.0 www.crazywinnings.com 0.0.0.0 www.crediblegfj.info 0.0.0.0 www.crispads.com 0.0.0.0 www.crowdgravity.com 0.0.0.0 www.crowdignite.com 0.0.0.0 www.ctbdev.net 0.0.0.0 www.cyber-incentives.com 0.0.0.0 www.d03x2011.com 0.0.0.0 www.da-ads.com 0.0.0.0 www.daily-saver.com 0.0.0.0 www.datatech.es 0.0.0.0 www.datingadvertising.com 0.0.0.0 www.dctracking.com 0.0.0.0 www.depravedwhores.com 0.0.0.0 www.designbloxlive.com 0.0.0.0 www.dgmaustralia.com 0.0.0.0 www.dietoftoday.ca.pn 0.0.0.0 www.digimedia.com 0.0.0.0 www.directnetadvertising.net 0.0.0.0 www.directpowerrewards.com 0.0.0.0 www.dirtyrhino.com 0.0.0.0 www.discount-savings-more.com 0.0.0.0 www.djugoogs.com 0.0.0.0 www.dl-plugin.com 0.0.0.0 www.drowle.com 0.0.0.0 www.dutchsales.org 0.0.0.0 www.earnmygift.com 0.0.0.0 www.earnpointsandgifts.com 0.0.0.0 www.easyadservice.com 0.0.0.0 www.e-bannerx.com 0.0.0.0 www.ebaybanner.com 0.0.0.0 www.education-rewardpath.com 0.0.0.0 www.edu-offer.com 0.0.0.0 www.electronics-bonuspath.com 0.0.0.0 www.electronics-offer.net 0.0.0.0 www.electronicspresent.com 0.0.0.0 www.electronics-rewardpath.com 0.0.0.0 www.emailadvantagegroup.com 0.0.0.0 www.emailproductreview.com 0.0.0.0 www.emarketmakers.com 0.0.0.0 www.entertainment-rewardpath.com 0.0.0.0 www.entertainment-specials.com 0.0.0.0 www.eshopads2.com 0.0.0.0 www.euros4click.de 0.0.0.0 www.exclusivegiftcards.com 0.0.0.0 www.eyeblaster-bs.com 0.0.0.0 www.eyewonder.com #: Interactive Digital Advertising, Rich Media Ads, Flash Ads, Online Advertising 0.0.0.0 www.falkag.de 0.0.0.0 www.family-offer.com 0.0.0.0 www.fast-adv.it 0.0.0.0 www.fatcatrewards.com 0.0.0.0 www.feedjit.com 0.0.0.0 www.feedstermedia.com 0.0.0.0 www.fif49.info 0.0.0.0 www.finance-offer.com 0.0.0.0 www.finder.cox.net 0.0.0.0 www.fineclicks.com 0.0.0.0 www.flagcounter.com 0.0.0.0 www.flowers-offer.com 0.0.0.0 www.flu23.com 0.0.0.0 www.focalex.com 0.0.0.0 www.folloyu.com 0.0.0.0 www.food-drink-bonuspath.com 0.0.0.0 www.food-drink-rewardpath.com 0.0.0.0 www.foodmixeroffer.com 0.0.0.0 www.food-offer.com 0.0.0.0 www.fpctraffic2.com 0.0.0.0 www.freeadguru.com 0.0.0.0 www.freebiegb.co.uk 0.0.0.0 www.freecameraonus.com 0.0.0.0 www.freecameraprovider.com 0.0.0.0 www.freecamerasource.com 0.0.0.0 www.freecamerauk.co.uk 0.0.0.0 www.freecamsecrets.com 0.0.0.0 www.freecoolgift.com 0.0.0.0 
www.freedesignerhandbagreviews.com 0.0.0.0 www.freedinnersource.com 0.0.0.0 www.freedvddept.com 0.0.0.0 www.freeelectronicscenter.com 0.0.0.0 www.freeelectronicsdepot.com 0.0.0.0 www.freeelectronicsonus.com 0.0.0.0 www.freeelectronicssource.com 0.0.0.0 www.freeentertainmentsource.com 0.0.0.0 www.freefoodprovider.com 0.0.0.0 www.freefoodsource.com 0.0.0.0 www.freefuelcard.com 0.0.0.0 www.freefuelcoupon.com 0.0.0.0 www.freegasonus.com 0.0.0.0 www.freegasprovider.com 0.0.0.0 www.free-gift-cards-now.com 0.0.0.0 www.freegiftcardsource.com 0.0.0.0 www.freegiftreward.com 0.0.0.0 www.free-gifts-comp.com 0.0.0.0 www.freeipodnanouk.co.uk 0.0.0.0 www.freeipoduk.com 0.0.0.0 www.freeipoduk.co.uk 0.0.0.0 www.freelaptopgift.com 0.0.0.0 www.freelaptopnation.com 0.0.0.0 www.free-laptop-reward.com 0.0.0.0 www.freelaptopreward.com 0.0.0.0 www.freelaptopwebsites.com 0.0.0.0 www.freenation.com 0.0.0.0 www.freeoffers-toys.com 0.0.0.0 www.freepayasyougotopupuk.co.uk 0.0.0.0 www.freeplasmanation.com 0.0.0.0 www.freerestaurantprovider.com 0.0.0.0 www.freerestaurantsource.com 0.0.0.0 www.freeshoppingprovider.com 0.0.0.0 www.freeshoppingsource.com 0.0.0.0 www.friendlyduck.com 0.0.0.0 www.frontpagecash.com 0.0.0.0 www.ftjcfx.com #commission junction 0.0.0.0 www.fusionbanners.com 0.0.0.0 www.gameconsolerewards.com 0.0.0.0 www.games-toys-bonuspath.com 0.0.0.0 www.games-toys-free.com 0.0.0.0 www.games-toys-rewardpath.com 0.0.0.0 www.gatoradvertisinginformationnetwork.com 0.0.0.0 www.getacool100.com 0.0.0.0 www.getacool500.com 0.0.0.0 www.getacoollaptop.com 0.0.0.0 www.getacooltv.com 0.0.0.0 www.getagiftonline.com 0.0.0.0 www.getloan.com 0.0.0.0 www.getmyfreebabystuff.com 0.0.0.0 www.getmyfreegear.com 0.0.0.0 www.getmyfreegiftcard.com 0.0.0.0 www.getmyfreelaptop.com 0.0.0.0 www.getmyfreelaptophere.com 0.0.0.0 www.getmyfreeplasma.com 0.0.0.0 www.getmylaptopfree.com 0.0.0.0 www.getmyplasmatv.com 0.0.0.0 www.getspecialgifts.com 0.0.0.0 www.getyourfreecomputer.com 0.0.0.0 www.getyourfreetv.com 0.0.0.0 www.giftcardchallenge.com 0.0.0.0 www.giftcardsurveys.us.com 0.0.0.0 www.giftrewardzone.com 0.0.0.0 www.gifts-flowers-rewardpath.com 0.0.0.0 www.gimmethatreward.com 0.0.0.0 www.gmads.net 0.0.0.0 www.go-free-gifts.com 0.0.0.0 www.gofreegifts.com 0.0.0.0 www.goody-garage.com 0.0.0.0 www.gopopup.com 0.0.0.0 www.grabbit-rabbit.com 0.0.0.0 www.greasypalm.com 0.0.0.0 www.grz67.com 0.0.0.0 www.guesstheview.com 0.0.0.0 www.guptamedianetwork.com 0.0.0.0 www.happydiscountspecials.com 0.0.0.0 www.healthbeautyncs.com 0.0.0.0 www.health-beauty-rewardpath.com 0.0.0.0 www.health-beauty-savingblvd.com 0.0.0.0 www.healthclicks.co.uk 0.0.0.0 www.hebdotop.com 0.0.0.0 www.hightrafficads.com 0.0.0.0 www.holiday-gift-offers.com 0.0.0.0 www.holidayproductpromo.com 0.0.0.0 www.holidayshoppingrewards.com 0.0.0.0 www.home4bizstart.ru 0.0.0.0 www.homeelectronicproducts.com 0.0.0.0 www.home-garden-premiumblvd.com 0.0.0.0 www.home-garden-rewardempire.com 0.0.0.0 www.home-garden-rewardpath.com 0.0.0.0 www.hooqy.com 0.0.0.0 www.hot-daily-deal.com 0.0.0.0 www.hotgiftzone.com 0.0.0.0 www.hotkeys.com 0.0.0.0 www.hot-product-hangout.com 0.0.0.0 www.idealcasino.net 0.0.0.0 www.idirect.com 0.0.0.0 www.iicdn.com 0.0.0.0 www.ijacko.net 0.0.0.0 www.ilovecheating.com 0.0.0.0 www.impressionaffiliate.com 0.0.0.0 www.impressionaffiliate.mobi 0.0.0.0 www.impressionlead.com 0.0.0.0 www.impressionperformance.biz 0.0.0.0 www.incentivegateway.com 0.0.0.0 www.incentiverewardcenter.com 0.0.0.0 www.incentive-scene.com 0.0.0.0 www.inckamedia.com 0.0.0.0 www.indiads.com 0.0.0.0 
www.infinite-ads.com # www.shareactor.com 0.0.0.0 www.ins-offer.com 0.0.0.0 www.insurance-rewardpath.com 0.0.0.0 www.intela.com 0.0.0.0 www.interstitialzone.com 0.0.0.0 www.intnet-offer.com 0.0.0.0 www.invitefashion.com 0.0.0.0 www.is1.clixgalore.com 0.0.0.0 www.itrackerpro.com 0.0.0.0 www.itsfree123.com 0.0.0.0 www.iwantmyfreecash.com 0.0.0.0 www.iwantmy-freelaptop.com 0.0.0.0 www.iwantmyfree-laptop.com 0.0.0.0 www.iwantmyfreelaptop.com 0.0.0.0 www.iwantmygiftcard.com 0.0.0.0 www.jersey-offer.com 0.0.0.0 www.jetseeker.com 0.0.0.0 www.jivox.com 0.0.0.0 www.jl29jd25sm24mc29.com 0.0.0.0 www.joinfree.ro 0.0.0.0 www.jxliu.com 0.0.0.0 www.keybinary.com 0.0.0.0 www.keywordblocks.com 0.0.0.0 www.kitaramarketplace.com 0.0.0.0 www.kitaramedia.com 0.0.0.0 www.kitaratrk.com 0.0.0.0 www.kixer.com 0.0.0.0 www.kliksaya.com 0.0.0.0 www.kmdl101.com 0.0.0.0 www.kontera.com 0.0.0.0 www.konversation.com 0.0.0.0 www.kreaffiliation.com 0.0.0.0 www.kuhdi.com 0.0.0.0 www.ladyclicks.ru 0.0.0.0 www.laptopreportcard.com 0.0.0.0 www.laptoprewards.com 0.0.0.0 www.laptoprewardsgroup.com 0.0.0.0 www.laptoprewardszone.com 0.0.0.0 www.larivieracasino.com 0.0.0.0 www.lasthr.info 0.0.0.0 www.lduhtrp.net #commission junction 0.0.0.0 www.le1er.net 0.0.0.0 www.leadgreed.com 0.0.0.0 www.learning-offer.com 0.0.0.0 www.legal-rewardpath.com 0.0.0.0 www.leisure-offer.com 0.0.0.0 www.linkhut.com 0.0.0.0 www.linkpulse.com 0.0.0.0 www.linkwithin.com 0.0.0.0 www.liveinternet.ru 0.0.0.0 www.lottoforever.com 0.0.0.0 www.lpcloudsvr302.com 0.0.0.0 www.lucky-day-uk.com 0.0.0.0 www.macombdisplayads.com 0.0.0.0 www.marketing-rewardpath.com 0.0.0.0 www.mastertracks.be 0.0.0.0 www.maxbounty.com 0.0.0.0 www.mb01.com 0.0.0.0 www.media2.travelzoo.com 0.0.0.0 www.media-motor.com 0.0.0.0 www.medical-offer.com 0.0.0.0 www.medical-rewardpath.com 0.0.0.0 www.merchantapp.com 0.0.0.0 www.merlin.co.il 0.0.0.0 www.mgid.com 0.0.0.0 www.mightymagoo.com 0.0.0.0 www.mktg-offer.com 0.0.0.0 www.mlntracker.com 0.0.0.0 www.mochibot.com 0.0.0.0 www.morefreecamsecrets.com 0.0.0.0 www.morevisits.info 0.0.0.0 www.mp3playersource.com 0.0.0.0 www.mpression.net 0.0.0.0 www.myadsl.co.za 0.0.0.0 www.myaffiliateprogram.com 0.0.0.0 www.myairbridge.com 0.0.0.0 www.mycashback.co.uk 0.0.0.0 www.mycelloffer.com 0.0.0.0 www.mychoicerewards.com 0.0.0.0 www.myexclusiverewards.com 0.0.0.0 www.myfreedinner.com 0.0.0.0 www.myfreegifts.co.uk 0.0.0.0 www.myfreemp3player.com 0.0.0.0 www.mygiftcardcenter.com 0.0.0.0 www.mygreatrewards.com 0.0.0.0 www.myoffertracking.com 0.0.0.0 www.my-reward-channel.com 0.0.0.0 www.my-rewardsvault.com 0.0.0.0 www.myseostats.com 0.0.0.0 www.my-stats.com 0.0.0.0 www.myuitm.com 0.0.0.0 www.myusersonline.com 0.0.0.0 www.na47.com 0.0.0.0 www.nationalissuepanel.com 0.0.0.0 www.nationalsurveypanel.com 0.0.0.0 www.nctracking.com 0.0.0.0 www.nearbyad.com 0.0.0.0 www.needadvertising.com 0.0.0.0 www.neptuneads.com 0.0.0.0 www.netpalnow.com 0.0.0.0 www.netpaloffers.net 0.0.0.0 www.news6health.com 0.0.0.0 www.newssourceoftoday.com 0.0.0.0 www.nospartenaires.com 0.0.0.0 www.nothing-but-value.com 0.0.0.0 www.nysubwayoffer.com 0.0.0.0 www.offerx.co.uk 0.0.0.0 www.oinadserve.com 0.0.0.0 www.onlinebestoffers.net 0.0.0.0 www.ontheweb.com 0.0.0.0 www.opendownload.de 0.0.0.0 www.openload.de 0.0.0.0 www.optiad.net 0.0.0.0 www.paperg.com 0.0.0.0 www.parsads.com 0.0.0.0 www.pathforpoints.com 0.0.0.0 www.paypopup.com 0.0.0.0 www.people-choice-sites.com 0.0.0.0 www.personalcare-offer.com 0.0.0.0 www.personalcashbailout.com 0.0.0.0 www.phoenixads.co.in 0.0.0.0 www.pick-savings.com 
0.0.0.0 www.plasmatv4free.com 0.0.0.0 www.plasmatvreward.com 0.0.0.0 www.politicalopinionsurvey.com 0.0.0.0 www.poponclick.com 0.0.0.0 www.popupad.net 0.0.0.0 www.popupdomination.com 0.0.0.0 www.popuptraffic.com 0.0.0.0 www.postmasterbannernet.com 0.0.0.0 www.postmasterdirect.com 0.0.0.0 www.postnewsads.com 0.0.0.0 www.premiumholidayoffers.com 0.0.0.0 www.premiumproductsonline.com 0.0.0.0 www.premium-reward-club.com 0.0.0.0 www.prizes.co.uk 0.0.0.0 www.probabilidades.net 0.0.0.0 www.productopinionpanel.com 0.0.0.0 www.productresearchpanel.com 0.0.0.0 www.producttestpanel.com 0.0.0.0 www.psclicks.com 0.0.0.0 www.pubdirecte.com 0.0.0.0 www.qitrck.com 0.0.0.0 www.quickbrowsersearch.com 0.0.0.0 www.radiate.com 0.0.0.0 www.rankyou.com 0.0.0.0 www.ravel-rewardpath.com 0.0.0.0 www.recreation-leisure-rewardpath.com 0.0.0.0 www.regflow.com 0.0.0.0 www.registrarads.com 0.0.0.0 www.resolvingserver.com 0.0.0.0 www.rewardblvd.com 0.0.0.0 www.rewardhotspot.com 0.0.0.0 www.rewardsflow.com 0.0.0.0 www.ringtonepartner.com 0.0.0.0 www.romepartners.com 0.0.0.0 www.roulettebotplus.com 0.0.0.0 www.rovion.com 0.0.0.0 www.rscounter10.com 0.0.0.0 www.rtcode.com 0.0.0.0 www.rwpads.net 0.0.0.0 www.sa44.net 0.0.0.0 www.salesonline.ie 0.0.0.0 www.save-plan.com 0.0.0.0 www.savings-specials.com 0.0.0.0 www.savings-time.com 0.0.0.0 www.scoremygift.com 0.0.0.0 www.screen-mates.com 0.0.0.0 www.searchwe.com 0.0.0.0 www.seasonalsamplerspecials.com 0.0.0.0 www.securecontactinfo.com 0.0.0.0 www.securerunner.com 0.0.0.0 www.servedby.advertising.com 0.0.0.0 www.sexpartnerx.com 0.0.0.0 www.sexsponsors.com 0.0.0.0 www.shareasale.com 0.0.0.0 www.share-server.com 0.0.0.0 www.shc-rebates.com 0.0.0.0 www.shopperpromotions.com 0.0.0.0 www.shoppingjobshere.com 0.0.0.0 www.shopping-offer.com 0.0.0.0 www.shoppingsiterewards.com 0.0.0.0 www.shops-malls-rewardpath.com 0.0.0.0 www.shoptosaveenergy.com 0.0.0.0 www.simpli.fi 0.0.0.0 www.sizzle-savings.com 0.0.0.0 www.smartadserver.com 0.0.0.0 www.smart-scripts.com 0.0.0.0 www.smarttargetting.com 0.0.0.0 www.smokersopinionpoll.com 0.0.0.0 www.smspop.com 0.0.0.0 www.sochr.com 0.0.0.0 www.sociallypublish.com 0.0.0.0 www.soongu.info 0.0.0.0 www.specialgiftrewards.com 0.0.0.0 www.specialonlinegifts.com 0.0.0.0 www.specials-rewardpath.com 0.0.0.0 www.speedboink.com 0.0.0.0 www.speedyclick.com 0.0.0.0 www.spinbox.com 0.0.0.0 www.sponsorads.de 0.0.0.0 www.sponsoradulto.com 0.0.0.0 www.sports-bonuspath.com 0.0.0.0 www.sports-fitness-rewardpath.com 0.0.0.0 www.sports-offer.com 0.0.0.0 www.sports-offer.net 0.0.0.0 www.sports-premiumblvd.com 0.0.0.0 www.sq2trk2.com 0.0.0.0 www.star-advertising.com 0.0.0.0 www.subsitesadserver.co.uk 0.0.0.0 www.sudokuwhiz.com 0.0.0.0 www.superbrewards.com 0.0.0.0 www.supremeadsonline.com 0.0.0.0 www.surplus-suppliers.com 0.0.0.0 www.sweetsforfree.com 0.0.0.0 www.symbiosting.com 0.0.0.0 www.syncaccess.net 0.0.0.0 www.system-live-media.cz 0.0.0.0 www.tcimg.com 0.0.0.0 www.textbanners.net 0.0.0.0 www.text-link-ads.com 0.0.0.0 www.textsrv.com 0.0.0.0 www.tgpmanager.com 0.0.0.0 www.thatrendsystem.com 0.0.0.0 www.the-binary-options-guide.com 0.0.0.0 www.the-binary-theorem.com 0.0.0.0 www.the-path-gateway.com 0.0.0.0 www.the-smart-stop.com 0.0.0.0 www.thetraderinpajamas.com 0.0.0.0 www.theuseful.com 0.0.0.0 www.theuseful.net 0.0.0.0 www.thinktarget.com 0.0.0.0 www.thinlaptoprewards.com 0.0.0.0 www.thoughtfully-free.com 0.0.0.0 www.thruport.com 0.0.0.0 www.tons-to-see.com 0.0.0.0 www.top20free.com 0.0.0.0 www.topbrandrewards.com 0.0.0.0 www.topconsumergifts.com 0.0.0.0 
www.topdemaroc.com 0.0.0.0 www.toy-offer.com 0.0.0.0 www.toy-offer.net 0.0.0.0 www.tqlkg.com #commission junction 0.0.0.0 www.trackadvertising.net 0.0.0.0 www.tracklead.net 0.0.0.0 www.trafficrevenue.net 0.0.0.0 www.traffictrader.net 0.0.0.0 www.traffictraders.com 0.0.0.0 www.trafsearchonline.com 0.0.0.0 www.traktum.com 0.0.0.0 www.traveladvertising.com 0.0.0.0 www.travel-leisure-bonuspath.com 0.0.0.0 www.travel-leisure-premiumblvd.com 0.0.0.0 www.traveller-offer.com 0.0.0.0 www.traveller-offer.net 0.0.0.0 www.travelncs.com 0.0.0.0 www.treeloot.com 0.0.0.0 www.trendnews.com 0.0.0.0 www.trendsonline.biz 0.0.0.0 www.trendsonline.me 0.0.0.0 www.trendsonline.mobi 0.0.0.0 www.trndsys.mobi 0.0.0.0 www.tutop.com 0.0.0.0 www.tuttosessogratis.org 0.0.0.0 www.ukbanners.com 0.0.0.0 www.uleadstrk.com 0.0.0.0 www.ultimatefashiongifts.com 0.0.0.0 www.uproar.com 0.0.0.0 www.upsellit.com 0.0.0.0 www.usatravel-specials.com 0.0.0.0 www.usatravel-specials.net 0.0.0.0 www.us-choicevalue.com 0.0.0.0 www.usemax.de 0.0.0.0 www.us-topsites.com 0.0.0.0 www.utarget.co.uk 0.0.0.0 www.valueclick.com 0.0.0.0 www.via22.net 0.0.0.0 www.vibrantmedia.com 0.0.0.0 www.video-game-rewards-central.com 0.0.0.0 www.videogamerewardscentral.com 0.0.0.0 www.view4cash.de 0.0.0.0 www.virtumundo.com 0.0.0.0 www.vmcsatellite.com 0.0.0.0 www.wdm29.com 0.0.0.0 www.webcashvideos.com 0.0.0.0 www.webcompteur.com 0.0.0.0 www.webservices-rewardpath.com 0.0.0.0 www.websponsors.com 0.0.0.0 www.wegetpaid.net 0.0.0.0 www.whatuwhatuwhatuwant.com 0.0.0.0 www.widgetbucks.com 0.0.0.0 www.wigetmedia.com 0.0.0.0 www.williamhill.es 0.0.0.0 www.windaily.com 0.0.0.0 www.winnerschoiceservices.com 0.0.0.0 www.wordplaywhiz.com 0.0.0.0 www.work-offer.com 0.0.0.0 www.worry-free-savings.com 0.0.0.0 www.wppluginspro.com 0.0.0.0 www.wtp101.com 0.0.0.0 www.xbn.ru # exclusive banner network (Russian) 0.0.0.0 www.yceml.net 0.0.0.0 www.yibaruxet.cn 0.0.0.0 www.yieldmanager.net 0.0.0.0 www.youfck.com 0.0.0.0 www.yourdvdplayer.com 0.0.0.0 www.yourfreegascard.com 0.0.0.0 www.yourgascards.com 0.0.0.0 www.yourgiftrewards.com 0.0.0.0 www.your-gift-zone.com 0.0.0.0 www.yourgiftzone.com 0.0.0.0 www.yourhandytips.com 0.0.0.0 www.yourhotgiftzone.com 0.0.0.0 www.youripad4free.com 0.0.0.0 www.yourrewardzone.com 0.0.0.0 www.yoursmartrewards.com 0.0.0.0 www.zemgo.com 0.0.0.0 www.zevents.com 0.0.0.0 x86adserve006.adtech.de 0.0.0.0 xads.zedo.com 0.0.0.0 x.azjmp.com 0.0.0.0 x.iasrv.com 0.0.0.0 x.interia.pl 0.0.0.0 xlonhcld.xlontech.net 0.0.0.0 xml.adtech.de 0.0.0.0 xml.adtech.fr 0.0.0.0 xml.adtech.us 0.0.0.0 xml.click9.com 0.0.0.0 x.mochiads.com 0.0.0.0 xpantivirus.com 0.0.0.0 xpcs.ads.yahoo.com 0.0.0.0 xstatic.nk-net.pl 0.0.0.0 yadro.ru 0.0.0.0 y.cdn.adblade.com 0.0.0.0 yieldmanagement.adbooth.net 0.0.0.0 yieldmanager.net 0.0.0.0 ym.adnxs.com 0.0.0.0 yodleeinc.tt.omtrdc.net 0.0.0.0 youfck.com 0.0.0.0 yourdvdplayer.com 0.0.0.0 yourfreegascard.com 0.0.0.0 your-free-iphone.com 0.0.0.0 yourgascards.com 0.0.0.0 yourgiftrewards.com 0.0.0.0 your-gift-zone.com 0.0.0.0 yourgiftzone.com 0.0.0.0 yourhandytips.com 0.0.0.0 yourhotgiftzone.com 0.0.0.0 youripad4free.com 0.0.0.0 yourrewardzone.com 0.0.0.0 yoursmartrewards.com 0.0.0.0 ypn-js.overture.com 0.0.0.0 ysiu.freenation.com 0.0.0.0 ytaahg.vo.llnwd.net 0.0.0.0 yumenetworks.com 0.0.0.0 yx-in-f108.1e100.net 0.0.0.0 z1.adserver.com 0.0.0.0 zads.zedo.com 0.0.0.0 z.blogads.com 0.0.0.0 z.ceotrk.com 0.0.0.0 zdads.e-media.com 0.0.0.0 zeevex-online.com 0.0.0.0 zemgo.com 0.0.0.0 zevents.com 0.0.0.0 zuzzer5.com # # # yahoo banner ads 0.0.0.0 
eur.a1.yimg.com 0.0.0.0 in.yimg.com 0.0.0.0 sg.yimg.com 0.0.0.0 uk.i1.yimg.com 0.0.0.0 us.a1.yimg.com 0.0.0.0 us.b1.yimg.com 0.0.0.0 us.c1.yimg.com 0.0.0.0 us.d1.yimg.com 0.0.0.0 us.e1.yimg.com 0.0.0.0 us.f1.yimg.com 0.0.0.0 us.g1.yimg.com 0.0.0.0 us.h1.yimg.com #0.0.0.0 us.i1.yimg.com #Uncomment this to block yahoo images 0.0.0.0 us.j1.yimg.com 0.0.0.0 us.k1.yimg.com 0.0.0.0 us.l1.yimg.com 0.0.0.0 us.m1.yimg.com 0.0.0.0 us.n1.yimg.com 0.0.0.0 us.o1.yimg.com 0.0.0.0 us.p1.yimg.com 0.0.0.0 us.q1.yimg.com 0.0.0.0 us.r1.yimg.com 0.0.0.0 us.s1.yimg.com 0.0.0.0 us.t1.yimg.com 0.0.0.0 us.u1.yimg.com 0.0.0.0 us.v1.yimg.com 0.0.0.0 us.w1.yimg.com 0.0.0.0 us.x1.yimg.com 0.0.0.0 us.y1.yimg.com 0.0.0.0 us.z1.yimg.com # # # hitbox.com web bugs 0.0.0.0 1cgi.hitbox.com 0.0.0.0 2cgi.hitbox.com 0.0.0.0 adminec1.hitbox.com 0.0.0.0 ads.hitbox.com 0.0.0.0 ag1.hitbox.com 0.0.0.0 ahbn1.hitbox.com 0.0.0.0 ahbn2.hitbox.com 0.0.0.0 ahbn3.hitbox.com 0.0.0.0 ahbn4.hitbox.com 0.0.0.0 aibg.hitbox.com 0.0.0.0 aibl.hitbox.com 0.0.0.0 aics.hitbox.com 0.0.0.0 ai.hitbox.com 0.0.0.0 aiui.hitbox.com 0.0.0.0 bigip1.hitbox.com 0.0.0.0 bigip2.hitbox.com 0.0.0.0 blowfish.hitbox.com 0.0.0.0 cdb.hitbox.com 0.0.0.0 cgi.hitbox.com 0.0.0.0 counter2.hitbox.com 0.0.0.0 counter.hitbox.com 0.0.0.0 dev101.hitbox.com 0.0.0.0 dev102.hitbox.com 0.0.0.0 dev103.hitbox.com 0.0.0.0 dev.hitbox.com 0.0.0.0 download.hitbox.com 0.0.0.0 ec1.hitbox.com 0.0.0.0 ehg-247internet.hitbox.com 0.0.0.0 ehg-accuweather.hitbox.com 0.0.0.0 ehg-acdsystems.hitbox.com 0.0.0.0 ehg-adeptscience.hitbox.com 0.0.0.0 ehg-affinitynet.hitbox.com 0.0.0.0 ehg-aha.hitbox.com 0.0.0.0 ehg-amerix.hitbox.com 0.0.0.0 ehg-apcc.hitbox.com 0.0.0.0 ehg-associatenewmedia.hitbox.com 0.0.0.0 ehg-ati.hitbox.com 0.0.0.0 ehg-attenza.hitbox.com 0.0.0.0 ehg-autodesk.hitbox.com 0.0.0.0 ehg-baa.hitbox.com 0.0.0.0 ehg-backweb.hitbox.com 0.0.0.0 ehg-bestbuy.hitbox.com 0.0.0.0 ehg-bizjournals.hitbox.com 0.0.0.0 ehg-bmwna.hitbox.com 0.0.0.0 ehg-boschsiemens.hitbox.com 0.0.0.0 ehg-bskyb.hitbox.com 0.0.0.0 ehg-cafepress.hitbox.com 0.0.0.0 ehg-careerbuilder.hitbox.com 0.0.0.0 ehg-cbc.hitbox.com 0.0.0.0 ehg-cbs.hitbox.com 0.0.0.0 ehg-cbsradio.hitbox.com 0.0.0.0 ehg-cedarpoint.hitbox.com 0.0.0.0 ehg-clearchannel.hitbox.com 0.0.0.0 ehg-closetmaid.hitbox.com 0.0.0.0 ehg-commjun.hitbox.com 0.0.0.0 ehg.commjun.hitbox.com 0.0.0.0 ehg-communityconnect.hitbox.com 0.0.0.0 ehg-communityconnet.hitbox.com 0.0.0.0 ehg-comscore.hitbox.com 0.0.0.0 ehg-corusentertainment.hitbox.com 0.0.0.0 ehg-coverityinc.hitbox.com 0.0.0.0 ehg-crain.hitbox.com 0.0.0.0 ehg-ctv.hitbox.com 0.0.0.0 ehg-cygnusbm.hitbox.com 0.0.0.0 ehg-datamonitor.hitbox.com 0.0.0.0 ehg-digg.hitbox.com 0.0.0.0 ehg-dig.hitbox.com 0.0.0.0 ehg-eckounlimited.hitbox.com 0.0.0.0 ehg-esa.hitbox.com 0.0.0.0 ehg-espn.hitbox.com 0.0.0.0 ehg-fifa.hitbox.com 0.0.0.0 ehg-findlaw.hitbox.com 0.0.0.0 ehg-foundation.hitbox.com 0.0.0.0 ehg-foxsports.hitbox.com 0.0.0.0 ehg-futurepub.hitbox.com 0.0.0.0 ehg-gamedaily.hitbox.com 0.0.0.0 ehg-gamespot.hitbox.com 0.0.0.0 ehg-gatehousemedia.hitbox.com 0.0.0.0 ehg-gatehoussmedia.hitbox.com 0.0.0.0 ehg-glam.hitbox.com 0.0.0.0 ehg-groceryworks.hitbox.com 0.0.0.0 ehg-groupernetworks.hitbox.com 0.0.0.0 ehg-guardian.hitbox.com 0.0.0.0 ehg-hasbro.hitbox.com 0.0.0.0 ehg-hellodirect.hitbox.com 0.0.0.0 ehg-himedia.hitbox.com 0.0.0.0 ehg.hitbox.com 0.0.0.0 ehg-hitent.hitbox.com 0.0.0.0 ehg-hollywood.hitbox.com 0.0.0.0 ehg-idgentertainment.hitbox.com 0.0.0.0 ehg-idg.hitbox.com 0.0.0.0 ehg-ifilm.hitbox.com 0.0.0.0 ehg-ignitemedia.hitbox.com 
0.0.0.0 ehg-intel.hitbox.com 0.0.0.0 ehg-ittoolbox.hitbox.com 0.0.0.0 ehg-itworldcanada.hitbox.com 0.0.0.0 ehg-kingstontechnology.hitbox.com 0.0.0.0 ehg-knightridder.hitbox.com 0.0.0.0 ehg-learningco.hitbox.com 0.0.0.0 ehg-legonewyorkinc.hitbox.com 0.0.0.0 ehg-liveperson.hitbox.com 0.0.0.0 ehg-macpublishingllc.hitbox.com 0.0.0.0 ehg-macromedia.hitbox.com 0.0.0.0 ehg-magicalia.hitbox.com 0.0.0.0 ehg-maplesoft.hitbox.com 0.0.0.0 ehg-mgnlimited.hitbox.com 0.0.0.0 ehg-mindshare.hitbox.com 0.0.0.0 ehg.mindshare.hitbox.com 0.0.0.0 ehg-mtv.hitbox.com 0.0.0.0 ehg-mybc.hitbox.com 0.0.0.0 ehg-newarkinone.hitbox.com.hitbox.com 0.0.0.0 ehg-newegg.hitbox.com 0.0.0.0 ehg-newscientist.hitbox.com 0.0.0.0 ehg-newsinternational.hitbox.com 0.0.0.0 ehg-nokiafin.hitbox.com 0.0.0.0 ehg-novell.hitbox.com 0.0.0.0 ehg-nvidia.hitbox.com 0.0.0.0 ehg-oreilley.hitbox.com 0.0.0.0 ehg-oreilly.hitbox.com 0.0.0.0 ehg-pacifictheatres.hitbox.com 0.0.0.0 ehg-pennwell.hitbox.com 0.0.0.0 ehg-peoplesoft.hitbox.com 0.0.0.0 ehg-philipsvheusen.hitbox.com 0.0.0.0 ehg-pizzahut.hitbox.com 0.0.0.0 ehg-playboy.hitbox.com 0.0.0.0 ehg-presentigsolutions.hitbox.com 0.0.0.0 ehg-qualcomm.hitbox.com 0.0.0.0 ehg-quantumcorp.hitbox.com 0.0.0.0 ehg-randomhouse.hitbox.com 0.0.0.0 ehg-redherring.hitbox.com 0.0.0.0 ehg-register.hitbox.com 0.0.0.0 ehg-researchinmotion.hitbox.com 0.0.0.0 ehg-rfa.hitbox.com 0.0.0.0 ehg-rodale.hitbox.com 0.0.0.0 ehg-salesforce.hitbox.com 0.0.0.0 ehg-salonmedia.hitbox.com 0.0.0.0 ehg-samsungusa.hitbox.com 0.0.0.0 ehg-seca.hitbox.com 0.0.0.0 ehg-shoppersdrugmart.hitbox.com 0.0.0.0 ehg-sonybssc.hitbox.com 0.0.0.0 ehg-sonycomputer.hitbox.com 0.0.0.0 ehg-sonyelec.hitbox.com 0.0.0.0 ehg-sonymusic.hitbox.com 0.0.0.0 ehg-sonyny.hitbox.com 0.0.0.0 ehg-space.hitbox.com 0.0.0.0 ehg-sportsline.hitbox.com 0.0.0.0 ehg-streamload.hitbox.com 0.0.0.0 ehg-superpages.hitbox.com 0.0.0.0 ehg-techtarget.hitbox.com 0.0.0.0 ehg-tfl.hitbox.com 0.0.0.0 ehg-thefirstchurchchrist.hitbox.com 0.0.0.0 ehg-tigerdirect2.hitbox.com 0.0.0.0 ehg-tigerdirect.hitbox.com 0.0.0.0 ehg-topps.hitbox.com 0.0.0.0 ehg-tribute.hitbox.com 0.0.0.0 ehg-tumbleweed.hitbox.com 0.0.0.0 ehg-ubisoft.hitbox.com 0.0.0.0 ehg-uniontrib.hitbox.com 0.0.0.0 ehg-usnewsworldreport.hitbox.com 0.0.0.0 ehg-verizoncommunications.hitbox.com 0.0.0.0 ehg-viacom.hitbox.com 0.0.0.0 ehg-vmware.hitbox.com 0.0.0.0 ehg-vonage.hitbox.com 0.0.0.0 ehg-wachovia.hitbox.com 0.0.0.0 ehg-wacomtechnology.hitbox.com 0.0.0.0 ehg-warner-brothers.hitbox.com 0.0.0.0 ehg-wizardsofthecoast.hitbox.com.hitbox.com 0.0.0.0 ehg-womanswallstreet.hitbox.com 0.0.0.0 ehg-wss.hitbox.com 0.0.0.0 ehg-xxolympicwintergames.hitbox.com 0.0.0.0 ehg-yellowpages.hitbox.com 0.0.0.0 ehg-youtube.hitbox.com 0.0.0.0 ejs.hitbox.com 0.0.0.0 enterprise-admin.hitbox.com 0.0.0.0 enterprise.hitbox.com 0.0.0.0 esg.hitbox.com 0.0.0.0 evwr.hitbox.com 0.0.0.0 get.hitbox.com 0.0.0.0 hg10.hitbox.com 0.0.0.0 hg11.hitbox.com 0.0.0.0 hg12.hitbox.com 0.0.0.0 hg13.hitbox.com 0.0.0.0 hg14.hitbox.com 0.0.0.0 hg15.hitbox.com 0.0.0.0 hg16.hitbox.com 0.0.0.0 hg17.hitbox.com 0.0.0.0 hg1.hitbox.com 0.0.0.0 hg2.hitbox.com 0.0.0.0 hg3.hitbox.com 0.0.0.0 hg4.hitbox.com 0.0.0.0 hg5.hitbox.com 0.0.0.0 hg6a.hitbox.com 0.0.0.0 hg6.hitbox.com 0.0.0.0 hg7.hitbox.com 0.0.0.0 hg8.hitbox.com 0.0.0.0 hg9.hitbox.com 0.0.0.0 hitboxbenchmarker.com 0.0.0.0 hitboxcentral.com 0.0.0.0 hitbox.com 0.0.0.0 hitboxenterprise.com 0.0.0.0 hitboxwireless.com 0.0.0.0 host6.hitbox.com 0.0.0.0 ias2.hitbox.com 0.0.0.0 ias.hitbox.com 0.0.0.0 ibg.hitbox.com 0.0.0.0 ics.hitbox.com 0.0.0.0 
idb.hitbox.com 0.0.0.0 js1.hitbox.com 0.0.0.0 lb.hitbox.com 0.0.0.0 lesbian-erotica.hitbox.com 0.0.0.0 lookup2.hitbox.com 0.0.0.0 lookup.hitbox.com 0.0.0.0 mrtg.hitbox.com 0.0.0.0 myhitbox.com 0.0.0.0 na.hitbox.com 0.0.0.0 narwhal.hitbox.com 0.0.0.0 nei.hitbox.com 0.0.0.0 nocboard.hitbox.com 0.0.0.0 noc.hitbox.com 0.0.0.0 noc-request.hitbox.com 0.0.0.0 ns1.hitbox.com 0.0.0.0 oas.hitbox.com 0.0.0.0 phg.hitbox.com 0.0.0.0 pure.hitbox.com 0.0.0.0 rainbowclub.hitbox.com 0.0.0.0 rd1.hitbox.com 0.0.0.0 reseller.hitbox.com 0.0.0.0 resources.hitbox.com 0.0.0.0 sitesearch.hitbox.com 0.0.0.0 specialtyclub.hitbox.com 0.0.0.0 ss.hitbox.com 0.0.0.0 stage101.hitbox.com 0.0.0.0 stage102.hitbox.com 0.0.0.0 stage103.hitbox.com 0.0.0.0 stage104.hitbox.com 0.0.0.0 stage105.hitbox.com 0.0.0.0 stage.hitbox.com 0.0.0.0 stats2.hitbox.com 0.0.0.0 stats3.hitbox.com 0.0.0.0 stats.hitbox.com 0.0.0.0 switch10.hitbox.com 0.0.0.0 switch11.hitbox.com 0.0.0.0 switch1.hitbox.com 0.0.0.0 switch5.hitbox.com 0.0.0.0 switch6.hitbox.com 0.0.0.0 switch8.hitbox.com 0.0.0.0 switch9.hitbox.com 0.0.0.0 switch.hitbox.com 0.0.0.0 tetra.hitbox.com 0.0.0.0 tools2.hitbox.com 0.0.0.0 toolsa.hitbox.com 0.0.0.0 tools.hitbox.com 0.0.0.0 ts1.hitbox.com 0.0.0.0 ts2.hitbox.com 0.0.0.0 vwr1.hitbox.com 0.0.0.0 vwr2.hitbox.com 0.0.0.0 vwr3.hitbox.com 0.0.0.0 w100.hitbox.com 0.0.0.0 w101.hitbox.com 0.0.0.0 w102.hitbox.com 0.0.0.0 w103.hitbox.com 0.0.0.0 w104.hitbox.com 0.0.0.0 w105.hitbox.com 0.0.0.0 w106.hitbox.com 0.0.0.0 w107.hitbox.com 0.0.0.0 w108.hitbox.com 0.0.0.0 w109.hitbox.com 0.0.0.0 w10.hitbox.com 0.0.0.0 w110.hitbox.com 0.0.0.0 w111.hitbox.com 0.0.0.0 w112.hitbox.com 0.0.0.0 w113.hitbox.com 0.0.0.0 w114.hitbox.com 0.0.0.0 w115.hitbox.com 0.0.0.0 w116.hitbox.com 0.0.0.0 w117.hitbox.com 0.0.0.0 w118.hitbox.com 0.0.0.0 w119.hitbox.com 0.0.0.0 w11.hitbox.com 0.0.0.0 w120.hitbox.com 0.0.0.0 w121.hitbox.com 0.0.0.0 w122.hitbox.com 0.0.0.0 w123.hitbox.com 0.0.0.0 w124.hitbox.com 0.0.0.0 w126.hitbox.com 0.0.0.0 w128.hitbox.com 0.0.0.0 w129.hitbox.com 0.0.0.0 w12.hitbox.com 0.0.0.0 w130.hitbox.com 0.0.0.0 w131.hitbox.com 0.0.0.0 w132.hitbox.com 0.0.0.0 w133.hitbox.com 0.0.0.0 w135.hitbox.com 0.0.0.0 w136.hitbox.com 0.0.0.0 w137.hitbox.com 0.0.0.0 w138.hitbox.com 0.0.0.0 w139.hitbox.com 0.0.0.0 w13.hitbox.com 0.0.0.0 w140.hitbox.com 0.0.0.0 w141.hitbox.com 0.0.0.0 w144.hitbox.com 0.0.0.0 w147.hitbox.com 0.0.0.0 w14.hitbox.com 0.0.0.0 w153.hitbox.com 0.0.0.0 w154.hitbox.com 0.0.0.0 w155.hitbox.com 0.0.0.0 w157.hitbox.com 0.0.0.0 w159.hitbox.com 0.0.0.0 w15.hitbox.com 0.0.0.0 w161.hitbox.com 0.0.0.0 w162.hitbox.com 0.0.0.0 w167.hitbox.com 0.0.0.0 w168.hitbox.com 0.0.0.0 w16.hitbox.com 0.0.0.0 w170.hitbox.com 0.0.0.0 w175.hitbox.com 0.0.0.0 w177.hitbox.com 0.0.0.0 w179.hitbox.com 0.0.0.0 w17.hitbox.com 0.0.0.0 w18.hitbox.com 0.0.0.0 w19.hitbox.com 0.0.0.0 w1.hitbox.com 0.0.0.0 w20.hitbox.com 0.0.0.0 w21.hitbox.com 0.0.0.0 w22.hitbox.com 0.0.0.0 w23.hitbox.com 0.0.0.0 w24.hitbox.com 0.0.0.0 w25.hitbox.com 0.0.0.0 w26.hitbox.com 0.0.0.0 w27.hitbox.com 0.0.0.0 w28.hitbox.com 0.0.0.0 w29.hitbox.com 0.0.0.0 w2.hitbox.com 0.0.0.0 w30.hitbox.com 0.0.0.0 w31.hitbox.com 0.0.0.0 w32.hitbox.com 0.0.0.0 w33.hitbox.com 0.0.0.0 w34.hitbox.com 0.0.0.0 w35.hitbox.com 0.0.0.0 w36.hitbox.com 0.0.0.0 w3.hitbox.com 0.0.0.0 w4.hitbox.com 0.0.0.0 w5.hitbox.com 0.0.0.0 w6.hitbox.com 0.0.0.0 w7.hitbox.com 0.0.0.0 w8.hitbox.com 0.0.0.0 w9.hitbox.com 0.0.0.0 webload101.hitbox.com 0.0.0.0 wss-gw-1.hitbox.com 0.0.0.0 wss-gw-3.hitbox.com 0.0.0.0 wvwr1.hitbox.com 0.0.0.0 
ww1.hitbox.com 0.0.0.0 ww2.hitbox.com 0.0.0.0 ww3.hitbox.com 0.0.0.0 wwa.hitbox.com 0.0.0.0 wwb.hitbox.com 0.0.0.0 wwc.hitbox.com 0.0.0.0 wwd.hitbox.com 0.0.0.0 www.ehg-rr.hitbox.com 0.0.0.0 www.hitbox.com 0.0.0.0 www.hitboxwireless.com 0.0.0.0 y2k.hitbox.com 0.0.0.0 yang.hitbox.com 0.0.0.0 ying.hitbox.com # # # www.extreme-dm.com tracking 0.0.0.0 extreme-dm.com 0.0.0.0 reports.extreme-dm.com 0.0.0.0 t0.extreme-dm.com 0.0.0.0 t1.extreme-dm.com 0.0.0.0 t.extreme-dm.com 0.0.0.0 u0.extreme-dm.com 0.0.0.0 u1.extreme-dm.com 0.0.0.0 u.extreme-dm.com 0.0.0.0 v0.extreme-dm.com 0.0.0.0 v1.extreme-dm.com 0.0.0.0 v.extreme-dm.com 0.0.0.0 w0.extreme-dm.com 0.0.0.0 w1.extreme-dm.com 0.0.0.0 w2.extreme-dm.com 0.0.0.0 w3.extreme-dm.com 0.0.0.0 w4.extreme-dm.com 0.0.0.0 w5.extreme-dm.com 0.0.0.0 w6.extreme-dm.com 0.0.0.0 w7.extreme-dm.com 0.0.0.0 w8.extreme-dm.com 0.0.0.0 w9.extreme-dm.com 0.0.0.0 w.extreme-dm.com 0.0.0.0 www.extreme-dm.com 0.0.0.0 x3.extreme-dm.com 0.0.0.0 y0.extreme-dm.com 0.0.0.0 y1.extreme-dm.com 0.0.0.0 y.extreme-dm.com 0.0.0.0 z0.extreme-dm.com 0.0.0.0 z1.extreme-dm.com 0.0.0.0 z.extreme-dm.com # # # realmedia.com's Open Ad Stream 0.0.0.0 ap.oasfile.aftenposten.no 0.0.0.0 imagenen1.247realmedia.com 0.0.0.0 oacentral.cepro.com 0.0.0.0 oas.adx.nu 0.0.0.0 oas.aurasports.com 0.0.0.0 oas.benchmark.fr 0.0.0.0 oasc03012.247realmedia.com 0.0.0.0 oasc03049.247realmedia.com 0.0.0.0 oasc06006.247realmedia.com 0.0.0.0 oasc08008.247realmedia.com 0.0.0.0 oasc09.247realmedia.com 0.0.0.0 oascentral.123greetings.com 0.0.0.0 oascentral.abclocal.go.com 0.0.0.0 oascentral.adage.com 0.0.0.0 oascentral.adageglobal.com 0.0.0.0 oascentral.aircanada.com 0.0.0.0 oascentral.alanicnewsnet.ca 0.0.0.0 oascentral.alanticnewsnet.ca 0.0.0.0 oascentral.americanheritage.com 0.0.0.0 oascentral.artistdirect.com 0.0.0.0 oascentral.artistirect.com 0.0.0.0 oascentral.askmen.com 0.0.0.0 oascentral.aviationnow.com 0.0.0.0 oascentral.blackenterprises.com 0.0.0.0 oascentral.blogher.org 0.0.0.0 oascentral.bostonherald.com 0.0.0.0 oascentral.bostonphoenix.com 0.0.0.0 oascentral.businessinsider.com 0.0.0.0 oascentral.businessweek.com 0.0.0.0 oascentral.businessweeks.com 0.0.0.0 oascentral.buy.com 0.0.0.0 oascentral.canadaeast.com 0.0.0.0 oascentral.canadianliving.com 0.0.0.0 oascentral.charleston.net 0.0.0.0 oascentral.chicagobusiness.com 0.0.0.0 oascentral.chron.com 0.0.0.0 oascentral.citypages.com 0.0.0.0 oascentral.clearchannel.com 0.0.0.0 oascentral.comcast.net 0.0.0.0 oascentral.comics.com 0.0.0.0 oascentral.construction.com 0.0.0.0 oascentral.consumerreports.org 0.0.0.0 oascentral.covers.com 0.0.0.0 oascentral.crainsdetroit.com 0.0.0.0 oascentral.crimelibrary.com 0.0.0.0 oascentral.cybereps.com 0.0.0.0 oascentral.dailybreeze.com 0.0.0.0 oascentral.dailyherald.com 0.0.0.0 oascentral.dilbert.com 0.0.0.0 oascentral.discovery.com 0.0.0.0 oascentral.drphil.com 0.0.0.0 oascentral.eastbayexpress.com 0.0.0.0 oas-central.east.realmedia.com 0.0.0.0 oascentral.encyclopedia.com 0.0.0.0 oascentral.fashionmagazine.com 0.0.0.0 oascentral.fayettevillenc.com 0.0.0.0 oascentral.feedroom.com 0.0.0.0 oascentral.forsythnews.com 0.0.0.0 oascentral.fortunecity.com 0.0.0.0 oascentral.foxnews.com 0.0.0.0 oascentral.freedom.com 0.0.0.0 oascentral.g4techtv.com 0.0.0.0 oascentral.ggl.com 0.0.0.0 oascentral.gigex.com 0.0.0.0 oascentral.globalpost.com 0.0.0.0 oascentral.hamptonroads.com 0.0.0.0 oascentral.hamptoroads.com 0.0.0.0 oascentral.hamtoroads.com 0.0.0.0 oascentral.herenb.com 0.0.0.0 oascentral.hollywood.com 0.0.0.0 oascentral.houstonpress.com 
0.0.0.0 oascentral.inq7.net 0.0.0.0 oascentral.investors.com 0.0.0.0 oascentral.investorwords.com 0.0.0.0 oascentral.itbusiness.ca 0.0.0.0 oascentral.killsometime.com 0.0.0.0 oascentral.laptopmag.com 0.0.0.0 oascentral.law.com 0.0.0.0 oascentral.laweekly.com 0.0.0.0 oascentral.looksmart.com 0.0.0.0 oascentral.lycos.com 0.0.0.0 oascentral.mailtribune.com 0.0.0.0 oascentral.mayoclinic.com 0.0.0.0 oascentral.medbroadcast.com 0.0.0.0 oascentral.metro.us 0.0.0.0 oascentral.minnpost.com 0.0.0.0 oascentral.mochila.com 0.0.0.0 oascentral.motherjones.com 0.0.0.0 oascentral.nerve.com 0.0.0.0 oascentral.newsmax.com 0.0.0.0 oascentral.nowtoronto.com 0.0.0.0 oascentralnx.comcast.net 0.0.0.0 oascentral.onwisconsin.com 0.0.0.0 oascentral.phoenixnewtimes.com 0.0.0.0 oascentral.phoenixvillenews.com 0.0.0.0 oascentral.pitch.com 0.0.0.0 oascentral.poconorecord.com 0.0.0.0 oascentral.politico.com 0.0.0.0 oascentral.post-gazette.com 0.0.0.0 oascentral.pottsmerc.com 0.0.0.0 oascentral.princetonreview.com 0.0.0.0 oascentral.publicradio.org 0.0.0.0 oascentral.radaronline.com 0.0.0.0 oascentral.rcrnews.com 0.0.0.0 oas-central.realmedia.com 0.0.0.0 oascentral.redherring.com 0.0.0.0 oascentral.redorbit.com 0.0.0.0 oascentral.redstate.com 0.0.0.0 oascentral.reference.com 0.0.0.0 oascentral.regalinterative.com 0.0.0.0 oascentral.register.com 0.0.0.0 oascentral.registerguard.com 0.0.0.0 oascentral.registguard.com 0.0.0.0 oascentral.riverfronttimes.com 0.0.0.0 oascentral.salon.com 0.0.0.0 oascentral.santacruzsentinel.com 0.0.0.0 oascentral.sciam.com 0.0.0.0 oascentral.scientificamerican.com 0.0.0.0 oascentral.seacoastonline.com 0.0.0.0 oascentral.seattleweekly.com 0.0.0.0 oascentral.sfgate.com 0.0.0.0 oascentral.sfweekly.com 0.0.0.0 oascentral.sina.com 0.0.0.0 oascentral.sina.com.hk 0.0.0.0 oascentral.sparknotes.com 0.0.0.0 oascentral.sptimes.com 0.0.0.0 oascentral.starbulletin.com 0.0.0.0 oascentral.suntimes.com 0.0.0.0 oascentral.surfline.com 0.0.0.0 oascentral.thechronicleherald.ca 0.0.0.0 oascentral.thehockeynews.com 0.0.0.0 oascentral.thenation.com 0.0.0.0 oascentral.theonionavclub.com 0.0.0.0 oascentral.theonion.com 0.0.0.0 oascentral.thephoenix.com 0.0.0.0 oascentral.thesmokinggun.com 0.0.0.0 oascentral.thespark.com 0.0.0.0 oascentral.tmcnet.com 0.0.0.0 oascentral.tnr.com 0.0.0.0 oascentral.tourismvancouver.com 0.0.0.0 oascentral.townhall.com 0.0.0.0 oascentral.tribe.net 0.0.0.0 oascentral.trutv.com 0.0.0.0 oascentral.upi.com 0.0.0.0 oascentral.urbanspoon.com 0.0.0.0 oascentral.villagevoice.com 0.0.0.0 oascentral.virtualtourist.com 0.0.0.0 oascentral.warcry.com 0.0.0.0 oascentral.washtimes.com 0.0.0.0 oascentral.wciv.com 0.0.0.0 oascentral.westword.com 0.0.0.0 oascentral.where.ca 0.0.0.0 oascentral.wjla.com 0.0.0.0 oascentral.wkrn.com 0.0.0.0 oascentral.wwe.com 0.0.0.0 oascentral.yellowpages.com 0.0.0.0 oascentral.ywlloewpages.ca 0.0.0.0 oascentral.zwire.com 0.0.0.0 oascentreal.adcritic.com 0.0.0.0 oascetral.laweekly.com 0.0.0.0 oas.dispatch.com 0.0.0.0 oas.foxnews.com 0.0.0.0 oas.greensboro.com 0.0.0.0 oas.guardian.co.uk 0.0.0.0 oas.ibnlive.com 0.0.0.0 oas.lee.net 0.0.0.0 oas.nrjlink.fr 0.0.0.0 oas.nzz.ch 0.0.0.0 oas.portland.com 0.0.0.0 oas.publicitas.ch 0.0.0.0 oasroanoke.com 0.0.0.0 oas.salon.com 0.0.0.0 oas.sciencemag.org 0.0.0.0 oas.signonsandiego.com 0.0.0.0 oas.startribune.com 0.0.0.0 oas.toronto.com 0.0.0.0 oas.uniontrib.com 0.0.0.0 oas.villagevoice.com 0.0.0.0 oas.vtsgonline.com # # # fastclick banner ads 0.0.0.0 media1.fastclick.net 0.0.0.0 media2.fastclick.net 0.0.0.0 media3.fastclick.net 0.0.0.0 
media4.fastclick.net 0.0.0.0 media5.fastclick.net 0.0.0.0 media6.fastclick.net 0.0.0.0 media7.fastclick.net 0.0.0.0 media8.fastclick.net 0.0.0.0 media9.fastclick.net 0.0.0.0 media10.fastclick.net 0.0.0.0 media11.fastclick.net 0.0.0.0 media12.fastclick.net 0.0.0.0 media13.fastclick.net 0.0.0.0 media14.fastclick.net 0.0.0.0 media15.fastclick.net 0.0.0.0 media16.fastclick.net 0.0.0.0 media17.fastclick.net 0.0.0.0 media18.fastclick.net 0.0.0.0 media19.fastclick.net 0.0.0.0 media20.fastclick.net 0.0.0.0 media21.fastclick.net 0.0.0.0 media22.fastclick.net 0.0.0.0 media23.fastclick.net 0.0.0.0 media24.fastclick.net 0.0.0.0 media25.fastclick.net 0.0.0.0 media26.fastclick.net 0.0.0.0 media27.fastclick.net 0.0.0.0 media28.fastclick.net 0.0.0.0 media29.fastclick.net 0.0.0.0 media30.fastclick.net 0.0.0.0 media31.fastclick.net 0.0.0.0 media32.fastclick.net 0.0.0.0 media33.fastclick.net 0.0.0.0 media34.fastclick.net 0.0.0.0 media35.fastclick.net 0.0.0.0 media36.fastclick.net 0.0.0.0 media37.fastclick.net 0.0.0.0 media38.fastclick.net 0.0.0.0 media39.fastclick.net 0.0.0.0 media40.fastclick.net 0.0.0.0 media41.fastclick.net 0.0.0.0 media42.fastclick.net 0.0.0.0 media43.fastclick.net 0.0.0.0 media44.fastclick.net 0.0.0.0 media45.fastclick.net 0.0.0.0 media46.fastclick.net 0.0.0.0 media47.fastclick.net 0.0.0.0 media48.fastclick.net 0.0.0.0 media49.fastclick.net 0.0.0.0 media50.fastclick.net 0.0.0.0 media51.fastclick.net 0.0.0.0 media52.fastclick.net 0.0.0.0 media53.fastclick.net 0.0.0.0 media54.fastclick.net 0.0.0.0 media55.fastclick.net 0.0.0.0 media56.fastclick.net 0.0.0.0 media57.fastclick.net 0.0.0.0 media58.fastclick.net 0.0.0.0 media59.fastclick.net 0.0.0.0 media60.fastclick.net 0.0.0.0 media61.fastclick.net 0.0.0.0 media62.fastclick.net 0.0.0.0 media63.fastclick.net 0.0.0.0 media64.fastclick.net 0.0.0.0 media65.fastclick.net 0.0.0.0 media66.fastclick.net 0.0.0.0 media67.fastclick.net 0.0.0.0 media68.fastclick.net 0.0.0.0 media69.fastclick.net 0.0.0.0 media70.fastclick.net 0.0.0.0 media71.fastclick.net 0.0.0.0 media72.fastclick.net 0.0.0.0 media73.fastclick.net 0.0.0.0 media74.fastclick.net 0.0.0.0 media75.fastclick.net 0.0.0.0 media76.fastclick.net 0.0.0.0 media77.fastclick.net 0.0.0.0 media78.fastclick.net 0.0.0.0 media79.fastclick.net 0.0.0.0 media80.fastclick.net 0.0.0.0 media81.fastclick.net 0.0.0.0 media82.fastclick.net 0.0.0.0 media83.fastclick.net 0.0.0.0 media84.fastclick.net 0.0.0.0 media85.fastclick.net 0.0.0.0 media86.fastclick.net 0.0.0.0 media87.fastclick.net 0.0.0.0 media88.fastclick.net 0.0.0.0 media89.fastclick.net 0.0.0.0 media90.fastclick.net 0.0.0.0 media91.fastclick.net 0.0.0.0 media92.fastclick.net 0.0.0.0 media93.fastclick.net 0.0.0.0 media94.fastclick.net 0.0.0.0 media95.fastclick.net 0.0.0.0 media96.fastclick.net 0.0.0.0 media97.fastclick.net 0.0.0.0 media98.fastclick.net 0.0.0.0 media99.fastclick.net 0.0.0.0 fastclick.net # # # belo interactive ads 0.0.0.0 te.about.com 0.0.0.0 te.adlandpro.com 0.0.0.0 te.advance.net 0.0.0.0 te.ap.org 0.0.0.0 te.astrology.com 0.0.0.0 te.audiencematch.net 0.0.0.0 te.belointeractive.com 0.0.0.0 te.boston.com 0.0.0.0 te.businessweek.com 0.0.0.0 te.chicagotribune.com 0.0.0.0 te.chron.com 0.0.0.0 te.cleveland.net 0.0.0.0 te.ctnow.com 0.0.0.0 te.dailycamera.com 0.0.0.0 te.dailypress.com 0.0.0.0 te.dentonrc.com 0.0.0.0 te.greenwichtime.com 0.0.0.0 te.idg.com 0.0.0.0 te.infoworld.com 0.0.0.0 te.ivillage.com 0.0.0.0 te.journalnow.com 0.0.0.0 te.latimes.com 0.0.0.0 te.mcall.com 0.0.0.0 te.mgnetwork.com 0.0.0.0 te.mysanantonio.com 0.0.0.0 te.newsday.com 
0.0.0.0 te.nytdigital.com 0.0.0.0 te.orlandosentinel.com 0.0.0.0 te.scripps.com 0.0.0.0 te.scrippsnetworksprivacy.com 0.0.0.0 te.scrippsnewspapersprivacy.com 0.0.0.0 te.sfgate.com 0.0.0.0 te.signonsandiego.com 0.0.0.0 te.stamfordadvocate.com 0.0.0.0 te.sun-sentinel.com 0.0.0.0 te.sunspot.net 0.0.0.0 te.suntimes.com 0.0.0.0 te.tbo.com 0.0.0.0 te.thestar.ca 0.0.0.0 te.thestar.com 0.0.0.0 te.trb.com 0.0.0.0 te.versiontracker.com 0.0.0.0 te.wsls.com # # # popup traps -- sites that bounce you around or won't let you leave 0.0.0.0 24hwebsex.com 0.0.0.0 adultfriendfinder.com 0.0.0.0 all-tgp.org 0.0.0.0 fioe.info 0.0.0.0 incestland.com 0.0.0.0 lesview.com 0.0.0.0 searchforit.com 0.0.0.0 www.asiansforu.com 0.0.0.0 www.bangbuddy.com 0.0.0.0 www.datanotary.com 0.0.0.0 www.entercasino.com 0.0.0.0 www.incestdot.com 0.0.0.0 www.incestgold.com 0.0.0.0 www.justhookup.com 0.0.0.0 www.mangayhentai.com 0.0.0.0 www.myluvcrush.ca 0.0.0.0 www.ourfuckbook.com 0.0.0.0 www.realincestvideos.com 0.0.0.0 www.searchforit.com 0.0.0.0 www.searchv.com 0.0.0.0 www.secretosx.com 0.0.0.0 www.seductiveamateurs.com 0.0.0.0 www.smsmovies.net 0.0.0.0 www.wowjs.1www.cn 0.0.0.0 www.xxxnations.com 0.0.0.0 www.xxxnightly.com 0.0.0.0 www.xxxtoolbar.com 0.0.0.0 www.yourfuckbook.com # # # malicious e-card -- these sites send out mass quantities of spam # and some distribute adware and spyware 0.0.0.0 123greetings.com # contains one link to distributor of adware or spyware 0.0.0.0 2000greetings.com 0.0.0.0 celebwelove.com 0.0.0.0 ecard4all.com 0.0.0.0 eforu.com 0.0.0.0 freewebcards.com 0.0.0.0 fukkad.com 0.0.0.0 fun-e-cards.com 0.0.0.0 funnyreign.com # heavy spam (Site Advisor received 1075 e-mails/week) 0.0.0.0 funsilly.com 0.0.0.0 myfuncards.com 0.0.0.0 www.cool-downloads.com 0.0.0.0 www.cool-downloads.net 0.0.0.0 www.friend-card.com 0.0.0.0 www.friend-cards.com 0.0.0.0 www.friend-cards.net 0.0.0.0 www.friend-greeting.com 0.0.0.0 www.friend-greetings.com 0.0.0.0 www.friendgreetings.com 0.0.0.0 www.friend-greetings.net 0.0.0.0 www.friendgreetings.net 0.0.0.0 www.laugh-mail.com 0.0.0.0 www.laugh-mail.net # # # European network of tracking sites 0.0.0.0 0ivwbox.de 0.0.0.0 1ivwbox.de 0.0.0.0 1und1.ivwbox.de 0.0.0.0 2ivwbox.de 0.0.0.0 3ivwbox.de 0.0.0.0 4ivwbox.de 0.0.0.0 5ivwbox.de 0.0.0.0 6ivwbox.de 0.0.0.0 7ivwbox.de 0.0.0.0 8ivwbox.de 0.0.0.0 8vwbox.de 0.0.0.0 9ivwbox.de 0.0.0.0 9vwbox.de 0.0.0.0 aivwbox.de 0.0.0.0 avwbox.de 0.0.0.0 bild.ivwbox.de 0.0.0.0 bivwbox.de 0.0.0.0 civwbox.de 0.0.0.0 divwbox.de 0.0.0.0 eevwbox.de 0.0.0.0 eivwbox.de 0.0.0.0 evwbox.de 0.0.0.0 faz.ivwbox.de 0.0.0.0 fivwbox.de 0.0.0.0 givwbox.de 0.0.0.0 hivwbox.de 0.0.0.0 i8vwbox.de 0.0.0.0 i9vwbox.de 0.0.0.0 iavwbox.de 0.0.0.0 ibvwbox.de 0.0.0.0 ibwbox.de 0.0.0.0 icvwbox.de 0.0.0.0 icwbox.de 0.0.0.0 ievwbox.de 0.0.0.0 ifvwbox.de 0.0.0.0 ifwbox.de 0.0.0.0 igvwbox.de 0.0.0.0 igwbox.de 0.0.0.0 iivwbox.de 0.0.0.0 ijvwbox.de 0.0.0.0 ikvwbox.de 0.0.0.0 iovwbox.de 0.0.0.0 iuvwbox.de 0.0.0.0 iv2box.de 0.0.0.0 iv2wbox.de 0.0.0.0 iv3box.de 0.0.0.0 iv3wbox.de 0.0.0.0 ivabox.de 0.0.0.0 ivawbox.de 0.0.0.0 ivbox.de 0.0.0.0 ivbwbox.de 0.0.0.0 ivbwox.de 0.0.0.0 ivcwbox.de 0.0.0.0 ivebox.de 0.0.0.0 ivewbox.de 0.0.0.0 ivfwbox.de 0.0.0.0 ivgwbox.de 0.0.0.0 ivqbox.de 0.0.0.0 ivqwbox.de 0.0.0.0 ivsbox.de 0.0.0.0 ivswbox.de 0.0.0.0 ivvbox.de 0.0.0.0 ivvwbox.de 0.0.0.0 ivw2box.de 0.0.0.0 ivw3box.de 0.0.0.0 ivwabox.de 0.0.0.0 ivwb0ox.de 0.0.0.0 ivwb0x.de 0.0.0.0 ivwb9ox.de 0.0.0.0 ivwb9x.de 0.0.0.0 ivwbaox.de 0.0.0.0 ivwbax.de 0.0.0.0 ivwbbox.de 0.0.0.0 ivwbeox.de 0.0.0.0 ivwbex.de 
0.0.0.0 ivwbgox.de 0.0.0.0 ivwbhox.de 0.0.0.0 ivwbiox.de 0.0.0.0 ivwbix.de 0.0.0.0 ivwbkox.de 0.0.0.0 ivwbkx.de 0.0.0.0 ivwblox.de 0.0.0.0 ivwblx.de 0.0.0.0 ivwbnox.de 0.0.0.0 ivwbo0x.de 0.0.0.0 ivwbo9x.de 0.0.0.0 ivwboax.de 0.0.0.0 ivwboc.de 0.0.0.0 ivwbock.de 0.0.0.0 ivwbocx.de 0.0.0.0 ivwbod.de 0.0.0.0 ivwbo.de 0.0.0.0 ivwbodx.de 0.0.0.0 ivwboex.de 0.0.0.0 ivwboix.de 0.0.0.0 ivwboks.de 0.0.0.0 ivwbokx.de 0.0.0.0 ivwbolx.de 0.0.0.0 ivwboox.de 0.0.0.0 ivwbopx.de 0.0.0.0 ivwbos.de 0.0.0.0 ivwbosx.de 0.0.0.0 ivwboux.de 0.0.0.0 ivwbox0.de 0.0.0.0 ivwbox1.de 0.0.0.0 ivwbox2.de 0.0.0.0 ivwbox3.de 0.0.0.0 ivwbox4.de 0.0.0.0 ivwbox5.de 0.0.0.0 ivwbox6.de 0.0.0.0 ivwbox7.de 0.0.0.0 ivwbox8.de 0.0.0.0 ivwbox9.de 0.0.0.0 ivwboxa.de 0.0.0.0 ivwboxb.de 0.0.0.0 ivwboxc.de 0.0.0.0 ivwboxd.de 0.0.0.0 ivwbox.de 0.0.0.0 ivwboxe.de 0.0.0.0 ivwboxes.de 0.0.0.0 ivwboxf.de 0.0.0.0 ivwboxg.de 0.0.0.0 ivwboxh.de 0.0.0.0 ivwboxi.de 0.0.0.0 ivwboxj.de 0.0.0.0 ivwboxk.de 0.0.0.0 ivwboxl.de 0.0.0.0 ivwboxm.de 0.0.0.0 ivwboxn.de 0.0.0.0 ivwboxo.de 0.0.0.0 ivwboxp.de 0.0.0.0 ivwboxq.de 0.0.0.0 ivwboxr.de 0.0.0.0 ivwboxs.de 0.0.0.0 ivwboxt.de 0.0.0.0 ivwboxu.de 0.0.0.0 ivwboxv.de 0.0.0.0 ivwboxw.de 0.0.0.0 ivwboxx.de 0.0.0.0 ivwboxy.de 0.0.0.0 ivwboxz.de 0.0.0.0 ivwboyx.de 0.0.0.0 ivwboz.de 0.0.0.0 ivwbozx.de 0.0.0.0 ivwbpox.de 0.0.0.0 ivwbpx.de 0.0.0.0 ivwbuox.de 0.0.0.0 ivwbux.de 0.0.0.0 ivwbvox.de 0.0.0.0 ivwbx.de 0.0.0.0 ivwbxo.de 0.0.0.0 ivwbyox.de 0.0.0.0 ivwbyx.de 0.0.0.0 ivwebox.de 0.0.0.0 ivwgbox.de 0.0.0.0 ivwgox.de 0.0.0.0 ivwhbox.de 0.0.0.0 ivwhox.de 0.0.0.0 ivwnbox.de 0.0.0.0 ivwnox.de 0.0.0.0 ivwobx.de 0.0.0.0 ivwox.de 0.0.0.0 ivwpbox.de 0.0.0.0 ivwpox.de 0.0.0.0 ivwqbox.de 0.0.0.0 ivwsbox.de 0.0.0.0 ivwvbox.de 0.0.0.0 ivwvox.de 0.0.0.0 ivwwbox.de 0.0.0.0 iwbox.de 0.0.0.0 iwvbox.de 0.0.0.0 iwvwbox.de 0.0.0.0 iwwbox.de 0.0.0.0 iyvwbox.de 0.0.0.0 jivwbox.de 0.0.0.0 jvwbox.de 0.0.0.0 kicker.ivwbox.de 0.0.0.0 kivwbox.de 0.0.0.0 kvwbox.de 0.0.0.0 livwbox.de 0.0.0.0 mivwbox.de 0.0.0.0 netzmarkt.ivwbox.de 0.0.0.0 nivwbox.de 0.0.0.0 ntv.ivwbox.de 0.0.0.0 oivwbox.de 0.0.0.0 onvis.ivwbox.de 0.0.0.0 ovwbox.de 0.0.0.0 pivwbox.de 0.0.0.0 qivwbox.de 0.0.0.0 rivwbox.de 0.0.0.0 sivwbox.de 0.0.0.0 spiegel.ivwbox.de 0.0.0.0 tivwbox.de 0.0.0.0 uivwbox.de 0.0.0.0 uvwbox.de 0.0.0.0 vivwbox.de 0.0.0.0 viwbox.de 0.0.0.0 vwbox.de 0.0.0.0 wivwbox.de 0.0.0.0 wwivwbox.de 0.0.0.0 www.0ivwbox.de 0.0.0.0 www.1ivwbox.de 0.0.0.0 www.2ivwbox.de 0.0.0.0 www.3ivwbox.de 0.0.0.0 www.4ivwbox.de 0.0.0.0 www.5ivwbox.de 0.0.0.0 www.6ivwbox.de 0.0.0.0 www.7ivwbox.de 0.0.0.0 www.8ivwbox.de 0.0.0.0 www.8vwbox.de 0.0.0.0 www.9ivwbox.de 0.0.0.0 www.9vwbox.de 0.0.0.0 www.aivwbox.de 0.0.0.0 www.avwbox.de 0.0.0.0 www.bivwbox.de 0.0.0.0 www.civwbox.de 0.0.0.0 www.divwbox.de 0.0.0.0 www.eevwbox.de 0.0.0.0 www.eivwbox.de 0.0.0.0 www.evwbox.de 0.0.0.0 www.fivwbox.de 0.0.0.0 www.givwbox.de 0.0.0.0 www.hivwbox.de 0.0.0.0 www.i8vwbox.de 0.0.0.0 www.i9vwbox.de 0.0.0.0 www.iavwbox.de 0.0.0.0 www.ibvwbox.de 0.0.0.0 www.ibwbox.de 0.0.0.0 www.icvwbox.de 0.0.0.0 www.icwbox.de 0.0.0.0 www.ievwbox.de 0.0.0.0 www.ifvwbox.de 0.0.0.0 www.ifwbox.de 0.0.0.0 www.igvwbox.de 0.0.0.0 www.igwbox.de 0.0.0.0 www.iivwbox.de 0.0.0.0 www.ijvwbox.de 0.0.0.0 www.ikvwbox.de 0.0.0.0 www.iovwbox.de 0.0.0.0 www.iuvwbox.de 0.0.0.0 www.iv2box.de 0.0.0.0 www.iv2wbox.de 0.0.0.0 www.iv3box.de 0.0.0.0 www.iv3wbox.de 0.0.0.0 www.ivabox.de 0.0.0.0 www.ivawbox.de 0.0.0.0 www.ivbox.de 0.0.0.0 www.ivbwbox.de 0.0.0.0 www.ivbwox.de 0.0.0.0 www.ivcwbox.de 0.0.0.0 www.ivebox.de 0.0.0.0 www.ivewbox.de 
0.0.0.0 www.ivfwbox.de 0.0.0.0 www.ivgwbox.de 0.0.0.0 www.ivqbox.de 0.0.0.0 www.ivqwbox.de 0.0.0.0 www.ivsbox.de 0.0.0.0 www.ivswbox.de 0.0.0.0 www.ivvbox.de 0.0.0.0 www.ivvwbox.de 0.0.0.0 www.ivw2box.de 0.0.0.0 www.ivw3box.de 0.0.0.0 www.ivwabox.de 0.0.0.0 www.ivwb0ox.de 0.0.0.0 www.ivwb0x.de 0.0.0.0 www.ivwb9ox.de 0.0.0.0 www.ivwb9x.de 0.0.0.0 www.ivwbaox.de 0.0.0.0 www.ivwbax.de 0.0.0.0 www.ivwbbox.de 0.0.0.0 www.ivwbeox.de 0.0.0.0 www.ivwbex.de 0.0.0.0 www.ivwbgox.de 0.0.0.0 www.ivwbhox.de 0.0.0.0 www.ivwbiox.de 0.0.0.0 www.ivwbix.de 0.0.0.0 www.ivwbkox.de 0.0.0.0 www.ivwbkx.de 0.0.0.0 www.ivwblox.de 0.0.0.0 www.ivwblx.de 0.0.0.0 www.ivwbnox.de 0.0.0.0 www.ivwbo0x.de 0.0.0.0 www.ivwbo9x.de 0.0.0.0 www.ivwboax.de 0.0.0.0 www.ivwboc.de 0.0.0.0 www.ivwbock.de 0.0.0.0 www.ivwbocx.de 0.0.0.0 www.ivwbod.de 0.0.0.0 www.ivwbo.de 0.0.0.0 www.ivwbodx.de 0.0.0.0 www.ivwboex.de 0.0.0.0 www.ivwboix.de 0.0.0.0 www.ivwboks.de 0.0.0.0 www.ivwbokx.de 0.0.0.0 www.ivwbolx.de 0.0.0.0 www.ivwboox.de 0.0.0.0 www.ivwbopx.de 0.0.0.0 www.ivwbos.de 0.0.0.0 www.ivwbosx.de 0.0.0.0 www.ivwboux.de 0.0.0.0 www.ivwbox0.de 0.0.0.0 www.ivwbox1.de 0.0.0.0 www.ivwbox2.de 0.0.0.0 www.ivwbox3.de 0.0.0.0 www.ivwbox4.de 0.0.0.0 www.ivwbox5.de 0.0.0.0 www.ivwbox6.de 0.0.0.0 www.ivwbox7.de 0.0.0.0 www.ivwbox8.de 0.0.0.0 www.ivwbox9.de 0.0.0.0 www.ivwboxa.de 0.0.0.0 www.ivwboxb.de 0.0.0.0 www.ivwboxc.de 0.0.0.0 www.ivwboxd.de 0.0.0.0 www.ivwbox.de 0.0.0.0 wwwivwbox.de 0.0.0.0 www.ivwboxe.de 0.0.0.0 www.ivwboxes.de 0.0.0.0 www.ivwboxf.de 0.0.0.0 www.ivwboxg.de 0.0.0.0 www.ivwboxh.de 0.0.0.0 www.ivwboxi.de 0.0.0.0 www.ivwboxj.de 0.0.0.0 www.ivwboxk.de 0.0.0.0 www.ivwboxl.de 0.0.0.0 www.ivwboxm.de 0.0.0.0 www.ivwboxn.de 0.0.0.0 www.ivwboxo.de 0.0.0.0 www.ivwboxp.de 0.0.0.0 www.ivwboxq.de 0.0.0.0 www.ivwboxr.de 0.0.0.0 www.ivwboxs.de 0.0.0.0 www.ivwboxt.de 0.0.0.0 www.ivwboxu.de 0.0.0.0 www.ivwboxv.de 0.0.0.0 www.ivwboxw.de 0.0.0.0 www.ivwboxx.de 0.0.0.0 www.ivwboxy.de 0.0.0.0 www.ivwboxz.de 0.0.0.0 www.ivwboyx.de 0.0.0.0 www.ivwboz.de 0.0.0.0 www.ivwbozx.de 0.0.0.0 www.ivwbpox.de 0.0.0.0 www.ivwbpx.de 0.0.0.0 www.ivwbuox.de 0.0.0.0 www.ivwbux.de 0.0.0.0 www.ivwbvox.de 0.0.0.0 www.ivwbx.de 0.0.0.0 www.ivwbxo.de 0.0.0.0 www.ivwbyox.de 0.0.0.0 www.ivwbyx.de 0.0.0.0 www.ivwebox.de 0.0.0.0 www.ivwgbox.de 0.0.0.0 www.ivwgox.de 0.0.0.0 www.ivwhbox.de 0.0.0.0 www.ivwhox.de 0.0.0.0 www.ivwnbox.de 0.0.0.0 www.ivwnox.de 0.0.0.0 www.ivwobx.de 0.0.0.0 www.ivwox.de 0.0.0.0 www.ivwpbox.de 0.0.0.0 www.ivwpox.de 0.0.0.0 www.ivwqbox.de 0.0.0.0 www.ivwsbox.de 0.0.0.0 www.ivwvbox.de 0.0.0.0 www.ivwvox.de 0.0.0.0 www.ivwwbox.de 0.0.0.0 www.iwbox.de 0.0.0.0 www.iwvbox.de 0.0.0.0 www.iwvwbox.de 0.0.0.0 www.iwwbox.de 0.0.0.0 www.iyvwbox.de 0.0.0.0 www.jivwbox.de 0.0.0.0 www.jvwbox.de 0.0.0.0 www.kivwbox.de 0.0.0.0 www.kvwbox.de 0.0.0.0 www.livwbox.de 0.0.0.0 www.mivwbox.de 0.0.0.0 www.nivwbox.de 0.0.0.0 www.oivwbox.de 0.0.0.0 www.ovwbox.de 0.0.0.0 www.pivwbox.de 0.0.0.0 www.qivwbox.de 0.0.0.0 www.rivwbox.de 0.0.0.0 www.sivwbox.de 0.0.0.0 www.tivwbox.de 0.0.0.0 www.uivwbox.de 0.0.0.0 www.uvwbox.de 0.0.0.0 www.vivwbox.de 0.0.0.0 www.viwbox.de 0.0.0.0 www.vwbox.de 0.0.0.0 www.wivwbox.de 0.0.0.0 www.wwivwbox.de 0.0.0.0 www.wwwivwbox.de 0.0.0.0 www.xivwbox.de 0.0.0.0 www.yevwbox.de 0.0.0.0 www.yivwbox.de 0.0.0.0 www.yvwbox.de 0.0.0.0 www.zivwbox.de 0.0.0.0 xivwbox.de 0.0.0.0 yevwbox.de 0.0.0.0 yivwbox.de 0.0.0.0 yvwbox.de 0.0.0.0 zivwbox.de # # # message board and wiki spam -- these sites are linked in # message board spam and are unlikely to be real sites 
0.0.0.0 10pg.scl5fyd.info 0.0.0.0 21jewelry.com 0.0.0.0 24x7.soliday.org 0.0.0.0 2site.com 0.0.0.0 33b.b33r.net 0.0.0.0 48.2mydns.net 0.0.0.0 4allfree.com 0.0.0.0 55.2myip.com 0.0.0.0 6165.rapidforum.com 0.0.0.0 6pg.ryf3hgf.info 0.0.0.0 7x7.ruwe.net 0.0.0.0 7x.cc 0.0.0.0 911.x24hr.com 0.0.0.0 ab.5.p2l.info 0.0.0.0 aboutharrypotter.fasthost.tv 0.0.0.0 aciphex.about-tabs.com 0.0.0.0 actonel.about-tabs.com 0.0.0.0 actos.about-tabs.com 0.0.0.0 acyclovir.1.p2l.info 0.0.0.0 adderall.ourtablets.com 0.0.0.0 adderallxr.freespaces.com 0.0.0.0 adipex.1.p2l.info 0.0.0.0 adipex.24sws.ws 0.0.0.0 adipex.3.p2l.info 0.0.0.0 adipex.4.p2l.info 0.0.0.0 adipex.hut1.ru 0.0.0.0 adipex.ourtablets.com 0.0.0.0 adipexp.3xforum.ro 0.0.0.0 adipex.shengen.ru 0.0.0.0 adipex.t-amo.net 0.0.0.0 adsearch.www1.biz 0.0.0.0 adult.shengen.ru 0.0.0.0 aguileranude.1stOK.com 0.0.0.0 ahh-teens.com 0.0.0.0 aid-golf-golfdust-training.tabrays.com 0.0.0.0 airline-ticket.gloses.net 0.0.0.0 air-plane-ticket.beesearch.info 0.0.0.0 ak.5.p2l.info 0.0.0.0 al.5.p2l.info 0.0.0.0 alcohol-treatment.gloses.net 0.0.0.0 allegra.1.p2l.info 0.0.0.0 allergy.1.p2l.info 0.0.0.0 all-sex.shengen.ru 0.0.0.0 alprazolamonline.findmenow.info 0.0.0.0 alprazolam.ourtablets.com 0.0.0.0 alyssamilano.1stOK.com 0.0.0.0 alyssamilano.ca.tt 0.0.0.0 alyssamilano.home.sapo.pt 0.0.0.0 amateur-mature-sex.adaltabaza.net 0.0.0.0 ambien.1.p2l.info 0.0.0.0 ambien.3.p2l.info 0.0.0.0 ambien.4.p2l.info 0.0.0.0 ambien.ourtablets.com 0.0.0.0 amoxicillin.ourtablets.com 0.0.0.0 angelinajolie.1stOK.com 0.0.0.0 angelinajolie.ca.tt 0.0.0.0 anklets.shengen.ru 0.0.0.0 annanicolesannanicolesmith.ca.tt 0.0.0.0 annanicolesmith.1stOK.com 0.0.0.0 antidepressants.1.p2l.info 0.0.0.0 anxiety.1.p2l.info 0.0.0.0 aol.spb.su 0.0.0.0 ar.5.p2l.info 0.0.0.0 arcade.ya.com 0.0.0.0 armanix.white.prohosting.com 0.0.0.0 arthritis.atspace.com 0.0.0.0 as.5.p2l.info 0.0.0.0 aspirin.about-tabs.com 0.0.0.0 ativan.ourtablets.com 0.0.0.0 austria-car-rental.findworm.net 0.0.0.0 auto.allewagen.de 0.0.0.0 az.5.p2l.info 0.0.0.0 azz.badazz.org 0.0.0.0 balabass.peerserver.com 0.0.0.0 balab.portx.net 0.0.0.0 bbs.ws 0.0.0.0 bc.5.p2l.info 0.0.0.0 beauty.finaltips.com 0.0.0.0 berkleynude.ca.tt 0.0.0.0 bestlolaray.com 0.0.0.0 bet-online.petrovka.info 0.0.0.0 betting-online.petrovka.info 0.0.0.0 bextra.ourtablets.com 0.0.0.0 bextra-store.shengen.ru 0.0.0.0 bingo-online.petrovka.info 0.0.0.0 birth-control.1.p2l.info 0.0.0.0 bontril.1.p2l.info 0.0.0.0 bontril.ourtablets.com 0.0.0.0 britneyspears.1stOK.com 0.0.0.0 britneyspears.ca.tt 0.0.0.0 br.rawcomm.net 0.0.0.0 bupropion-hcl.1.p2l.info 0.0.0.0 buspar.1.p2l.info 0.0.0.0 buspirone.1.p2l.info 0.0.0.0 butalbital-apap.1.p2l.info 0.0.0.0 buy-adipex.aca.ru 0.0.0.0 buy-adipex-cheap-adipex-online.com 0.0.0.0 buy-adipex.hut1.ru 0.0.0.0 buy-adipex.i-jogo.net 0.0.0.0 buy-adipex-online.md-online24.de 0.0.0.0 buy-adipex.petrovka.info 0.0.0.0 buy-carisoprodol.polybuild.ru 0.0.0.0 buy-cheap-phentermine.blogspot.com 0.0.0.0 buy-cheap-xanax.all.at 0.0.0.0 buy-cialis-cheap-cialis-online.info 0.0.0.0 buy-cialis.freewebtools.com 0.0.0.0 buycialisonline.7h.com 0.0.0.0 buycialisonline.bigsitecity.com 0.0.0.0 buy-cialis-online.iscool.nl 0.0.0.0 buy-cialis-online.meperdoe.net 0.0.0.0 buy-cialis.splinder.com 0.0.0.0 buy-diazepam.connect.to 0.0.0.0 buyfioricet.findmenow.info 0.0.0.0 buy-fioricet.hut1.ru 0.0.0.0 buyfioricetonline.7h.com 0.0.0.0 buyfioricetonline.bigsitecity.com 0.0.0.0 buyfioricetonline.freeservers.com 0.0.0.0 buy-flower.petrovka.info 0.0.0.0 buy-hydrocodone.aca.ru 0.0.0.0 
buyhydrocodone.all.at 0.0.0.0 buy-hydrocodone-cheap-hydrocodone-online.com 0.0.0.0 buy-hydrocodone.este.ru 0.0.0.0 buyhydrocodoneonline.findmenow.info 0.0.0.0 buy-hydrocodone-online.tche.com 0.0.0.0 buy-hydrocodone.petrovka.info 0.0.0.0 buy-hydrocodone.polybuild.ru 0.0.0.0 buy-hydrocodone.quesaudade.net 0.0.0.0 buy-hydrocodone.scromble.com 0.0.0.0 buylevitra.3xforum.ro 0.0.0.0 buy-levitra-cheap-levitra-online.info 0.0.0.0 buylevitraonline.7h.com 0.0.0.0 buylevitraonline.bigsitecity.com 0.0.0.0 buy-lortab-cheap-lortab-online.com 0.0.0.0 buy-lortab.hut1.ru 0.0.0.0 buylortabonline.7h.com 0.0.0.0 buylortabonline.bigsitecity.com 0.0.0.0 buy-lortab-online.iscool.nl 0.0.0.0 buypaxilonline.7h.com 0.0.0.0 buypaxilonline.bigsitecity.com 0.0.0.0 buy-phentermine-cheap-phentermine-online.com 0.0.0.0 buy-phentermine.hautlynx.com 0.0.0.0 buy-phentermine-online.135.it 0.0.0.0 buyphentermineonline.7h.com 0.0.0.0 buyphentermineonline.bigsitecity.com 0.0.0.0 buy-phentermine-online.i-jogo.net 0.0.0.0 buy-phentermine-online.i-ltda.net 0.0.0.0 buy-phentermine.polybuild.ru 0.0.0.0 buy-phentermine.thepizza.net 0.0.0.0 buy-tamiflu.asian-flu-vaccine.com 0.0.0.0 buy-ultram-online.iscool.nl 0.0.0.0 buy-valium-cheap-valium-online.com 0.0.0.0 buy-valium.este.ru 0.0.0.0 buy-valium.hut1.ru 0.0.0.0 buy-valium.polybuild.ru 0.0.0.0 buyvalium.polybuild.ru 0.0.0.0 buy-viagra.aca.ru 0.0.0.0 buy-viagra.go.to 0.0.0.0 buy-viagra.polybuild.ru 0.0.0.0 buyviagra.polybuild.ru 0.0.0.0 buy-vicodin-cheap-vicodin-online.com 0.0.0.0 buy-vicodin.dd.vu 0.0.0.0 buy-vicodin.hut1.ru 0.0.0.0 buy-vicodin.iscool.nl 0.0.0.0 buy-vicodin-online.i-blog.net 0.0.0.0 buy-vicodin-online.seumala.net 0.0.0.0 buy-vicodin-online.supersite.fr 0.0.0.0 buyvicodinonline.veryweird.com 0.0.0.0 buy-xanax.aztecaonline.net 0.0.0.0 buy-xanax-cheap-xanax-online.com 0.0.0.0 buy-xanax.hut1.ru 0.0.0.0 buy-xanax-online.amovoce.net 0.0.0.0 buy-zyban.all.at 0.0.0.0 bx6.blrf.net 0.0.0.0 ca.5.p2l.info 0.0.0.0 camerondiaznude.1stOK.com 0.0.0.0 camerondiaznude.ca.tt 0.0.0.0 car-donation.shengen.ru 0.0.0.0 car-insurance.inshurance-from.com 0.0.0.0 carisoprodol.1.p2l.info 0.0.0.0 carisoprodol.hut1.ru 0.0.0.0 carisoprodol.ourtablets.com 0.0.0.0 carisoprodol.polybuild.ru 0.0.0.0 carisoprodol.shengen.ru 0.0.0.0 car-loan.shengen.ru 0.0.0.0 carmenelectra.1stOK.com 0.0.0.0 cash-advance.now-cash.com 0.0.0.0 casino-gambling-online.searchservice.info 0.0.0.0 casino-online.100gal.net 0.0.0.0 cat.onlinepeople.net 0.0.0.0 cc5f.dnyp.com 0.0.0.0 celebrex.1.p2l.info 0.0.0.0 celexa.1.p2l.info 0.0.0.0 celexa.3.p2l.info 0.0.0.0 celexa.4.p2l.info 0.0.0.0 cephalexin.ourtablets.com 0.0.0.0 charlizetheron.1stOK.com 0.0.0.0 cheap-adipex.hut1.ru 0.0.0.0 cheap-carisoprodol.polybuild.ru 0.0.0.0 cheap-hydrocodone.go.to 0.0.0.0 cheap-hydrocodone.polybuild.ru 0.0.0.0 cheap-phentermine.polybuild.ru 0.0.0.0 cheap-valium.polybuild.ru 0.0.0.0 cheap-viagra.polybuild.ru 0.0.0.0 cheap-web-hosting-here.blogspot.com 0.0.0.0 cheap-xanax-here.blogspot.com 0.0.0.0 cheapxanax.hut1.ru 0.0.0.0 cialis.1.p2l.info 0.0.0.0 cialis.3.p2l.info 0.0.0.0 cialis.4.p2l.info 0.0.0.0 cialis-finder.com 0.0.0.0 cialis-levitra-viagra.com.cn 0.0.0.0 cialis.ourtablets.com 0.0.0.0 cialis-store.shengen.ru 0.0.0.0 co.5.p2l.info 0.0.0.0 co.dcclan.co.uk 0.0.0.0 codeine.ourtablets.com 0.0.0.0 creampie.afdss.info 0.0.0.0 credit-card-application.now-cash.com 0.0.0.0 credit-cards.shengen.ru 0.0.0.0 ct.5.p2l.info 0.0.0.0 cuiland.info 0.0.0.0 cyclobenzaprine.1.p2l.info 0.0.0.0 cyclobenzaprine.ourtablets.com 0.0.0.0 dal.d.la 0.0.0.0 
danger-phentermine.allforyourlife.com 0.0.0.0 darvocet.ourtablets.com 0.0.0.0 dc.5.p2l.info 0.0.0.0 de.5.p2l.info 0.0.0.0 debt.shengen.ru 0.0.0.0 def.5.p2l.info 0.0.0.0 demimoorenude.1stOK.com 0.0.0.0 deniserichards.1stOK.com 0.0.0.0 detox-kit.com 0.0.0.0 detox.shengen.ru 0.0.0.0 diazepam.ourtablets.com 0.0.0.0 diazepam.razma.net 0.0.0.0 diazepam.shengen.ru 0.0.0.0 didrex.1.p2l.info 0.0.0.0 diet-pills.hut1.ru 0.0.0.0 digital-cable-descrambler.planet-high-heels.com 0.0.0.0 dir.opank.com 0.0.0.0 dos.velek.com 0.0.0.0 drewbarrymore.ca.tt 0.0.0.0 drugdetox.shengen.ru 0.0.0.0 drug-online.petrovka.info 0.0.0.0 drug-testing.shengen.ru 0.0.0.0 eb.dd.bluelinecomputers.be 0.0.0.0 eb.prout.be 0.0.0.0 ed.at.is13.de 0.0.0.0 ed.at.thamaster.de 0.0.0.0 e-dot.hut1.ru 0.0.0.0 efam4.info 0.0.0.0 effexor-xr.1.p2l.info 0.0.0.0 e-hosting.hut1.ru 0.0.0.0 ei.imbucurator-de-prost.com 0.0.0.0 eminemticket.freespaces.com 0.0.0.0 en.dd.blueline.be 0.0.0.0 enpresse.1.p2l.info 0.0.0.0 en.ultrex.ru 0.0.0.0 epson-printer-ink.beesearch.info 0.0.0.0 erectile.byethost33.com 0.0.0.0 esgic.1.p2l.info 0.0.0.0 fahrrad.bikesshop.de 0.0.0.0 famous-pics.com 0.0.0.0 famvir.1.p2l.info 0.0.0.0 farmius.org 0.0.0.0 fee-hydrocodone.bebto.com 0.0.0.0 female-v.1.p2l.info 0.0.0.0 femaleviagra.findmenow.info 0.0.0.0 fg.softguy.com 0.0.0.0 findmenow.info 0.0.0.0 fioricet.1.p2l.info 0.0.0.0 fioricet.3.p2l.info 0.0.0.0 fioricet.4.p2l.info 0.0.0.0 fioricet-online.blogspot.com 0.0.0.0 firstfinda.info 0.0.0.0 fl.5.p2l.info 0.0.0.0 flexeril.1.p2l.info 0.0.0.0 flextra.1.p2l.info 0.0.0.0 flonase.1.p2l.info 0.0.0.0 flonase.3.p2l.info 0.0.0.0 flonase.4.p2l.info 0.0.0.0 florineff.ql.st 0.0.0.0 flower-online.petrovka.info 0.0.0.0 fluoxetine.1.p2l.info 0.0.0.0 fo4n.com 0.0.0.0 forex-broker.hut1.ru 0.0.0.0 forex-chart.hut1.ru 0.0.0.0 forex-market.hut1.ru 0.0.0.0 forex-news.hut1.ru 0.0.0.0 forex-online.hut1.ru 0.0.0.0 forex-signal.hut1.ru 0.0.0.0 forex-trade.hut1.ru 0.0.0.0 forex-trading-benefits.blogspot.com 0.0.0.0 forextrading.hut1.ru 0.0.0.0 freechat.llil.de 0.0.0.0 free.hostdepartment.com 0.0.0.0 free-money.host.sk 0.0.0.0 free-viagra.polybuild.ru 0.0.0.0 free-virus-scan.100gal.net 0.0.0.0 ga.5.p2l.info 0.0.0.0 game-online-video.petrovka.info 0.0.0.0 gaming-online.petrovka.info 0.0.0.0 gastrointestinal.1.p2l.info 0.0.0.0 gen-hydrocodone.polybuild.ru 0.0.0.0 getcarisoprodol.polybuild.ru 0.0.0.0 gocarisoprodol.polybuild.ru 0.0.0.0 gsm-mobile-phone.beesearch.info 0.0.0.0 gu.5.p2l.info 0.0.0.0 guerria-skateboard-tommy.tabrays.com 0.0.0.0 gwynethpaltrow.ca.tt 0.0.0.0 h1.ripway.com 0.0.0.0 hair-dos.resourcesarchive.com 0.0.0.0 halleberrynude.ca.tt 0.0.0.0 heathergraham.ca.tt 0.0.0.0 herpes.1.p2l.info 0.0.0.0 herpes.3.p2l.info 0.0.0.0 herpes.4.p2l.info 0.0.0.0 hf.themafia.info 0.0.0.0 hi.5.p2l.info 0.0.0.0 hi.pacehillel.org 0.0.0.0 holobumo.info 0.0.0.0 homehre.bravehost.com 0.0.0.0 homehre.ifrance.com 0.0.0.0 homehre.tripod.com 0.0.0.0 hoodia.kogaryu.com 0.0.0.0 hotel-las-vegas.gloses.net 0.0.0.0 hydrocodone-buy-online.blogspot.com 0.0.0.0 hydrocodone.irondel.swisshost.by 0.0.0.0 hydrocodone.on.to 0.0.0.0 hydrocodone.shengen.ru 0.0.0.0 hydrocodone.t-amo.net 0.0.0.0 hydrocodone.visa-usa.ru 0.0.0.0 hydro.polybuild.ru 0.0.0.0 ia.5.p2l.info 0.0.0.0 ia.warnet-thunder.net 0.0.0.0 ibm-notebook-battery.wp-club.net 0.0.0.0 id.5.p2l.info 0.0.0.0 il.5.p2l.info 0.0.0.0 imitrex.1.p2l.info 0.0.0.0 imitrex.3.p2l.info 0.0.0.0 imitrex.4.p2l.info 0.0.0.0 in.5.p2l.info 0.0.0.0 ionamin.1.p2l.info 0.0.0.0 ionamin.t35.com 0.0.0.0 irondel.swisshost.by 0.0.0.0 
japanese-girl-xxx.com 0.0.0.0 java-games.bestxs.de 0.0.0.0 jg.hack-inter.net 0.0.0.0 job-online.petrovka.info 0.0.0.0 jobs-online.petrovka.info 0.0.0.0 kitchen-island.mensk.us 0.0.0.0 konstantin.freespaces.com 0.0.0.0 ks.5.p2l.info 0.0.0.0 ky.5.p2l.info 0.0.0.0 la.5.p2l.info 0.0.0.0 lamictal.about-tabs.com 0.0.0.0 lamisil.about-tabs.com 0.0.0.0 levitra.1.p2l.info 0.0.0.0 levitra.3.p2l.info 0.0.0.0 levitra.4.p2l.info 0.0.0.0 lexapro.1.p2l.info 0.0.0.0 lexapro.3.p2l.info 0.0.0.0 lexapro.4.p2l.info 0.0.0.0 loan.aol.msk.su 0.0.0.0 loan.maybachexelero.org 0.0.0.0 loestrin.1.p2l.info 0.0.0.0 lo.ljkeefeco.com 0.0.0.0 lol.to 0.0.0.0 lortab-cod.hut1.ru 0.0.0.0 lortab.hut1.ru 0.0.0.0 ma.5.p2l.info 0.0.0.0 mailforfreedom.com 0.0.0.0 make-money.shengen.ru 0.0.0.0 maps-antivert58.eksuziv.net 0.0.0.0 maps-spyware251-300.eksuziv.net 0.0.0.0 marketing.beesearch.info 0.0.0.0 mb.5.p2l.info 0.0.0.0 mba-online.petrovka.info 0.0.0.0 md.5.p2l.info 0.0.0.0 me.5.p2l.info 0.0.0.0 medical.carway.net 0.0.0.0 mens.1.p2l.info 0.0.0.0 meridia.1.p2l.info 0.0.0.0 meridia.3.p2l.info 0.0.0.0 meridia.4.p2l.info 0.0.0.0 meridiameridia.3xforum.ro 0.0.0.0 mesotherapy.jino-net.ru 0.0.0.0 mi.5.p2l.info 0.0.0.0 micardiss.ql.st 0.0.0.0 microsoft-sql-server.wp-club.net 0.0.0.0 mn.5.p2l.info 0.0.0.0 mo.5.p2l.info 0.0.0.0 moc.silk.com 0.0.0.0 mortgage-memphis.hotmail.ru 0.0.0.0 mortgage-rates.now-cash.com 0.0.0.0 mp.5.p2l.info 0.0.0.0 mrjeweller.us 0.0.0.0 ms.5.p2l.info 0.0.0.0 mt.5.p2l.info 0.0.0.0 multimedia-projector.katrina.ru 0.0.0.0 muscle-relaxers.1.p2l.info 0.0.0.0 music102.awardspace.com 0.0.0.0 mydaddy.b0x.com 0.0.0.0 myphentermine.polybuild.ru 0.0.0.0 nasacort.1.p2l.info 0.0.0.0 nasonex.1.p2l.info 0.0.0.0 nb.5.p2l.info 0.0.0.0 nc.5.p2l.info 0.0.0.0 nd.5.p2l.info 0.0.0.0 ne.5.p2l.info 0.0.0.0 nellyticket.beast-space.com 0.0.0.0 nelsongod.ca 0.0.0.0 nexium.1.p2l.info 0.0.0.0 nextel-ringtone.komi.su 0.0.0.0 nextel-ringtone.spb.su 0.0.0.0 nf.5.p2l.info 0.0.0.0 nh.5.p2l.info 0.0.0.0 nj.5.p2l.info 0.0.0.0 nm.5.p2l.info 0.0.0.0 nordette.1.p2l.info 0.0.0.0 nordette.3.p2l.info 0.0.0.0 nordette.4.p2l.info 0.0.0.0 norton-antivirus-trial.searchservice.info 0.0.0.0 notebook-memory.searchservice.info 0.0.0.0 ns.5.p2l.info 0.0.0.0 nv.5.p2l.info 0.0.0.0 ny.5.p2l.info 0.0.0.0 o8.aus.cc 0.0.0.0 ofni.al0ne.info 0.0.0.0 oh.5.p2l.info 0.0.0.0 ok.5.p2l.info 0.0.0.0 on.5.p2l.info 0.0.0.0 online-auto-insurance.petrovka.info 0.0.0.0 online-bingo.petrovka.info 0.0.0.0 online-broker.petrovka.info 0.0.0.0 online-cash.petrovka.info 0.0.0.0 online-casino.shengen.ru 0.0.0.0 online-casino.webpark.pl 0.0.0.0 online-cigarettes.hitslog.net 0.0.0.0 online-college.petrovka.info 0.0.0.0 online-degree.petrovka.info 0.0.0.0 online-florist.petrovka.info 0.0.0.0 online-forex.hut1.ru 0.0.0.0 online-forex-trading-systems.blogspot.com 0.0.0.0 online-gaming.petrovka.info 0.0.0.0 online-job.petrovka.info 0.0.0.0 online-loan.petrovka.info 0.0.0.0 online-mortgage.petrovka.info 0.0.0.0 online-personal.petrovka.info 0.0.0.0 online-personals.petrovka.info 0.0.0.0 online-pharmacy-online.blogspot.com 0.0.0.0 online-pharmacy.petrovka.info 0.0.0.0 online-phentermine.petrovka.info 0.0.0.0 online-poker-gambling.petrovka.info 0.0.0.0 online-poker-game.petrovka.info 0.0.0.0 online-poker.shengen.ru 0.0.0.0 online-prescription.petrovka.info 0.0.0.0 online-school.petrovka.info 0.0.0.0 online-schools.petrovka.info 0.0.0.0 online-single.petrovka.info 0.0.0.0 online-tarot-reading.beesearch.info 0.0.0.0 online-travel.petrovka.info 0.0.0.0 online-university.petrovka.info 0.0.0.0 
online-viagra.petrovka.info 0.0.0.0 online-xanax.petrovka.info 0.0.0.0 onlypreteens.com 0.0.0.0 only-valium.go.to 0.0.0.0 only-valium.shengen.ru 0.0.0.0 or.5.p2l.info 0.0.0.0 oranla.info 0.0.0.0 orderadipex.findmenow.info 0.0.0.0 order-hydrocodone.polybuild.ru 0.0.0.0 order-phentermine.polybuild.ru 0.0.0.0 order-valium.polybuild.ru 0.0.0.0 ortho-tri-cyclen.1.p2l.info 0.0.0.0 pa.5.p2l.info 0.0.0.0 pacific-poker.e-online-poker-4u.net 0.0.0.0 pain-relief.1.p2l.info 0.0.0.0 paintball-gun.tripod.com 0.0.0.0 patio-furniture.dreamhoster.com 0.0.0.0 paxil.1.p2l.info 0.0.0.0 pay-day-loans.beesearch.info 0.0.0.0 payday-loans.now-cash.com 0.0.0.0 pctuzing.php5.cz 0.0.0.0 pd1.funnyhost.com 0.0.0.0 pe.5.p2l.info 0.0.0.0 peter-north-cum-shot.blogspot.com 0.0.0.0 pets.finaltips.com 0.0.0.0 pharmacy-canada.forsearch.net 0.0.0.0 pharmacy.hut1.ru 0.0.0.0 pharmacy-news.blogspot.com 0.0.0.0 pharmacy-online.petrovka.info 0.0.0.0 phendimetrazine.1.p2l.info 0.0.0.0 phentermine.1.p2l.info 0.0.0.0 phentermine.3.p2l.info 0.0.0.0 phentermine.4.p2l.info 0.0.0.0 phentermine.aussie7.com 0.0.0.0 phentermine-buy-online.hitslog.net 0.0.0.0 phentermine-buy.petrovka.info 0.0.0.0 phentermine-online.iscool.nl 0.0.0.0 phentermine-online.petrovka.info 0.0.0.0 phentermine.petrovka.info 0.0.0.0 phentermine.polybuild.ru 0.0.0.0 phentermine.shengen.ru 0.0.0.0 phentermine.t-amo.net 0.0.0.0 phentermine.webpark.pl 0.0.0.0 phone-calling-card.exnet.su 0.0.0.0 plavix.shengen.ru 0.0.0.0 play-poker-free.forsearch.net 0.0.0.0 poker-games.e-online-poker-4u.net 0.0.0.0 pop.egi.biz 0.0.0.0 pr.5.p2l.info 0.0.0.0 prescription-drugs.easy-find.net 0.0.0.0 prescription-drugs.shengen.ru 0.0.0.0 preteenland.com 0.0.0.0 preteensite.com 0.0.0.0 prevacid.1.p2l.info 0.0.0.0 prevent-asian-flu.com 0.0.0.0 prilosec.1.p2l.info 0.0.0.0 propecia.1.p2l.info 0.0.0.0 protonix.shengen.ru 0.0.0.0 psorias.atspace.com 0.0.0.0 purchase.hut1.ru 0.0.0.0 qc.5.p2l.info 0.0.0.0 qz.informs.com 0.0.0.0 refinance.shengen.ru 0.0.0.0 relenza.asian-flu-vaccine.com 0.0.0.0 renova.1.p2l.info 0.0.0.0 replacement-windows.gloses.net 0.0.0.0 re.rutan.org 0.0.0.0 resanium.com 0.0.0.0 retin-a.1.p2l.info 0.0.0.0 ri.5.p2l.info 0.0.0.0 rise-media.ru 0.0.0.0 root.dns.bz 0.0.0.0 roulette-online.petrovka.info 0.0.0.0 router.googlecom.biz 0.0.0.0 s32.bilsay.com 0.0.0.0 samsclub33.pochta.ru 0.0.0.0 sc10.net 0.0.0.0 sc.5.p2l.info 0.0.0.0 sd.5.p2l.info 0.0.0.0 search4you.50webs.com 0.0.0.0 search-phentermine.hpage.net 0.0.0.0 searchpill.boom.ru 0.0.0.0 seasonale.1.p2l.info 0.0.0.0 shop.kauffes.de 0.0.0.0 single-online.petrovka.info 0.0.0.0 sk.5.p2l.info 0.0.0.0 skelaxin.1.p2l.info 0.0.0.0 skelaxin.3.p2l.info 0.0.0.0 skelaxin.4.p2l.info 0.0.0.0 skin-care.1.p2l.info 0.0.0.0 skocz.pl 0.0.0.0 sleep-aids.1.p2l.info 0.0.0.0 sleeper-sofa.dreamhoster.com 0.0.0.0 slf5cyd.info 0.0.0.0 sobolev.net.ru 0.0.0.0 soma.1.p2l.info 0.0.0.0 soma.3xforum.ro 0.0.0.0 soma-store.visa-usa.ru 0.0.0.0 sonata.1.p2l.info 0.0.0.0 sport-betting-online.hitslog.net 0.0.0.0 spyware-removers.shengen.ru 0.0.0.0 spyware-scan.100gal.net 0.0.0.0 spyware.usafreespace.com 0.0.0.0 sq7.co.uk 0.0.0.0 sql-server-driver.beesearch.info 0.0.0.0 starlix.ql.st 0.0.0.0 stop-smoking.1.p2l.info 0.0.0.0 supplements.1.p2l.info 0.0.0.0 sx.nazari.org 0.0.0.0 sx.z0rz.com 0.0.0.0 ta.at.ic5mp.net 0.0.0.0 ta.at.user-mode-linux.net 0.0.0.0 tamiflu-in-canada.asian-flu-vaccine.com 0.0.0.0 tamiflu-no-prescription.asian-flu-vaccine.com 0.0.0.0 tamiflu-purchase.asian-flu-vaccine.com 0.0.0.0 tamiflu-without-prescription.asian-flu-vaccine.com 0.0.0.0 
tenuate.1.p2l.info 0.0.0.0 texas-hold-em.e-online-poker-4u.net 0.0.0.0 texas-holdem.shengen.ru 0.0.0.0 ticket20.tripod.com 0.0.0.0 tizanidine.1.p2l.info 0.0.0.0 tn.5.p2l.info 0.0.0.0 topmeds10.com 0.0.0.0 top.pcanywhere.net 0.0.0.0 toyota.cyberealhosting.com 0.0.0.0 tramadol.1.p2l.info 0.0.0.0 tramadol2006.3xforum.ro 0.0.0.0 tramadol.3.p2l.info 0.0.0.0 tramadol.4.p2l.info 0.0.0.0 travel-insurance-quotes.beesearch.info 0.0.0.0 triphasil.1.p2l.info 0.0.0.0 triphasil.3.p2l.info 0.0.0.0 triphasil.4.p2l.info 0.0.0.0 tx.5.p2l.info 0.0.0.0 uf2aasn.111adfueo.us 0.0.0.0 ultracet.1.p2l.info 0.0.0.0 ultram.1.p2l.info 0.0.0.0 united-airline-fare.100pantyhose.com 0.0.0.0 university-online.petrovka.info 0.0.0.0 urlcut.net 0.0.0.0 urshort.net 0.0.0.0 us.kopuz.com 0.0.0.0 ut.5.p2l.info 0.0.0.0 utairway.com 0.0.0.0 va.5.p2l.info 0.0.0.0 vacation.toppick.info 0.0.0.0 valium.este.ru 0.0.0.0 valium.hut1.ru 0.0.0.0 valium.ourtablets.com 0.0.0.0 valium.polybuild.ru 0.0.0.0 valiumvalium.3xforum.ro 0.0.0.0 valtrex.1.p2l.info 0.0.0.0 valtrex.3.p2l.info 0.0.0.0 valtrex.4.p2l.info 0.0.0.0 valtrex.7h.com 0.0.0.0 vaniqa.1.p2l.info 0.0.0.0 vi.5.p2l.info 0.0.0.0 viagra.1.p2l.info 0.0.0.0 viagra.3.p2l.info 0.0.0.0 viagra.4.p2l.info 0.0.0.0 viagra-online.petrovka.info 0.0.0.0 viagra-pill.blogspot.com 0.0.0.0 viagra.polybuild.ru 0.0.0.0 viagra-soft-tabs.1.p2l.info 0.0.0.0 viagra-store.shengen.ru 0.0.0.0 viagraviagra.3xforum.ro 0.0.0.0 vicodin-online.petrovka.info 0.0.0.0 vicodin-store.shengen.ru 0.0.0.0 vicodin.t-amo.net 0.0.0.0 viewtools.com 0.0.0.0 vioxx.1.p2l.info 0.0.0.0 vitalitymax.1.p2l.info 0.0.0.0 vt.5.p2l.info 0.0.0.0 vxv.phre.net 0.0.0.0 w0.drag0n.org 0.0.0.0 wa.5.p2l.info 0.0.0.0 water-bed.8p.org.uk 0.0.0.0 web-hosting.hitslog.net 0.0.0.0 webhosting.hut1.ru 0.0.0.0 weborg.hut1.ru 0.0.0.0 weight-loss.1.p2l.info 0.0.0.0 weight-loss.3.p2l.info 0.0.0.0 weight-loss.4.p2l.info 0.0.0.0 weight-loss.hut1.ru 0.0.0.0 wellbutrin.1.p2l.info 0.0.0.0 wellbutrin.3.p2l.info 0.0.0.0 wellbutrin.4.p2l.info 0.0.0.0 wellnessmonitor.bravehost.com 0.0.0.0 wi.5.p2l.info 0.0.0.0 world-trade-center.hawaiicity.com 0.0.0.0 wp-club.net 0.0.0.0 ws01.do.nu 0.0.0.0 ws02.do.nu 0.0.0.0 ws03.do.nu 0.0.0.0 ws03.home.sapo.pt 0.0.0.0 ws04.do.nu 0.0.0.0 ws04.home.sapo.pt 0.0.0.0 ws05.home.sapo.pt 0.0.0.0 ws06.home.sapo.pt 0.0.0.0 wv.5.p2l.info 0.0.0.0 www.31d.net 0.0.0.0 www3.ddns.ms 0.0.0.0 www4.at.debianbase.de 0.0.0.0 www4.epac.to 0.0.0.0 www5.3-a.net 0.0.0.0 www69.bestdeals.at 0.0.0.0 www69.byinter.net 0.0.0.0 www69.dynu.com 0.0.0.0 www69.findhere.org 0.0.0.0 www69.fw.nu 0.0.0.0 www69.ugly.as 0.0.0.0 www6.ezua.com 0.0.0.0 www6.ns1.name 0.0.0.0 www7.ygto.com 0.0.0.0 www8.ns01.us 0.0.0.0 www99.bounceme.net 0.0.0.0 www99.fdns.net 0.0.0.0 www99.zapto.org 0.0.0.0 www9.compblue.com 0.0.0.0 www9.servequake.com 0.0.0.0 www9.trickip.org 0.0.0.0 www.adspoll.com 0.0.0.0 www.adult-top-list.com 0.0.0.0 www.aektschen.de 0.0.0.0 www.aeqs.com 0.0.0.0 www.alladultdirectories.com 0.0.0.0 www.alladultdirectory.net 0.0.0.0 www.arbeitssuche-web.de 0.0.0.0 www.bestrxpills.com 0.0.0.0 www.bigsister.cxa.de 0.0.0.0 www.bigsister-puff.cxa.de 0.0.0.0 www.bitlocker.net 0.0.0.0 www.cheap-laptops-notebook-computers.info 0.0.0.0 www.cheap-online-stamp.cast.cc 0.0.0.0 www.codez-knacken.de 0.0.0.0 www.computerxchange.com 0.0.0.0 www.credit-dreams.com 0.0.0.0 www.edle-stuecke.de 0.0.0.0 www.exe-file.de 0.0.0.0 www.exttrem.de 0.0.0.0 www.fetisch-pornos.cxa.de 0.0.0.0 www.ficken-ficken-ficken.cxa.de 0.0.0.0 www.ficken-xxx.cxa.de 0.0.0.0 www.financial-advice-books.com 0.0.0.0 
www.finanzmarkt2004.de 0.0.0.0 www.furnitureulimited.com 0.0.0.0 www.gewinnspiele-slotmachine.de 0.0.0.0 www.hardware4freaks.de 0.0.0.0 www.healthyaltprods.com 0.0.0.0 www.heimlich-gefilmt.cxa.de 0.0.0.0 www.huberts-kochseite.de 0.0.0.0 www.huren-verzeichnis.is4all.de 0.0.0.0 www.kaaza-legal.de 0.0.0.0 www.kajahdfssa.net 0.0.0.0 www.keyofhealth.com 0.0.0.0 www.kitchentablegang.org 0.0.0.0 www.km69.de 0.0.0.0 www.koch-backrezepte.de 0.0.0.0 www.kvr-systems.de 0.0.0.0 www.lesben-pornos.cxa.de 0.0.0.0 www.links-private-krankenversicherung.de 0.0.0.0 www.littledevildoubt.com 0.0.0.0 www.mailforfreedom.com 0.0.0.0 www.masterspace.biz 0.0.0.0 www.medical-research-books.com 0.0.0.0 www.microsoft2010.com 0.0.0.0 www.nelsongod.ca 0.0.0.0 www.nextstudent.com 0.0.0.0 www.ntdesk.de 0.0.0.0 www.nutten-verzeichnis.cxa.de 0.0.0.0 www.obesitycheck.com 0.0.0.0 www.pawnauctions.net 0.0.0.0 www.pills-home.com 0.0.0.0 www.poker4spain.com 0.0.0.0 www.poker-new.com 0.0.0.0 www.poker-unique.com 0.0.0.0 www.porno-lesben.cxa.de 0.0.0.0 www.prevent-asian-flu.com 0.0.0.0 www.randppro-cuts.com 0.0.0.0 www.romanticmaui.net 0.0.0.0 www.salldo.de 0.0.0.0 www.samsclub33.pochta.ru 0.0.0.0 www.schwarz-weisses.de 0.0.0.0 www.schwule-boys-nackt.cxa.de 0.0.0.0 www.shopping-artikel.de 0.0.0.0 www.showcaserealestate.net 0.0.0.0 www.skattabrain.com 0.0.0.0 www.softcha.com 0.0.0.0 www.striemline.de 0.0.0.0 www.talentbroker.net 0.0.0.0 www.the-discount-store.com 0.0.0.0 www.topmeds10.com 0.0.0.0 www.uniqueinternettexasholdempoker.com 0.0.0.0 www.viagra-home.com 0.0.0.0 www.vthought.com 0.0.0.0 www.vtoyshop.com 0.0.0.0 www.vulcannonibird.de 0.0.0.0 www.webabrufe.de 0.0.0.0 www.wilddreams.info 0.0.0.0 www.willcommen.de 0.0.0.0 www.xcr-286.com 0.0.0.0 wy.5.p2l.info 0.0.0.0 x25.2mydns.com 0.0.0.0 x25.plorp.com 0.0.0.0 x4.lov3.net 0.0.0.0 x6x.a.la 0.0.0.0 x888x.myserver.org 0.0.0.0 x8x.dyndns.dk 0.0.0.0 x8x.trickip.net 0.0.0.0 xanax-online.dot.de 0.0.0.0 xanax-online.run.to 0.0.0.0 xanax-online.sms2.us 0.0.0.0 xanax.ourtablets.com 0.0.0.0 xanax-store.shengen.ru 0.0.0.0 xanax.t-amo.net 0.0.0.0 xanaxxanax.3xforum.ro 0.0.0.0 x-box.t35.com 0.0.0.0 xcr-286.com 0.0.0.0 xenical.1.p2l.info 0.0.0.0 xenical.3.p2l.info 0.0.0.0 xenical.4.p2l.info 0.0.0.0 x-hydrocodone.info 0.0.0.0 xoomer.alice.it 0.0.0.0 x-phentermine.info 0.0.0.0 xr.h4ck.la 0.0.0.0 yasmin.1.p2l.info 0.0.0.0 yasmin.3.p2l.info 0.0.0.0 yasmin.4.p2l.info 0.0.0.0 yt.5.p2l.info 0.0.0.0 zanaflex.1.p2l.info 0.0.0.0 zebutal.1.p2l.info 0.0.0.0 zocor.about-tabs.com 0.0.0.0 zoloft.1.p2l.info 0.0.0.0 zoloft.3.p2l.info 0.0.0.0 zoloft.4.p2l.info 0.0.0.0 zoloft.about-tabs.com 0.0.0.0 zyban.1.p2l.info 0.0.0.0 zyban.about-tabs.com 0.0.0.0 zyban-store.shengen.ru 0.0.0.0 zyprexa.about-tabs.com 0.0.0.0 zyrtec.1.p2l.info 0.0.0.0 zyrtec.3.p2l.info 0.0.0.0 zyrtec.4.p2l.info # # # Phorm contextual advertising sites 0.0.0.0 a.oix.com 0.0.0.0 a.oix.net 0.0.0.0 a.openinternetexchange.com 0.0.0.0 a.phormlabs.com 0.0.0.0 a.webwise.com 0.0.0.0 a.webwise.net 0.0.0.0 b.oix.net 0.0.0.0 br.phorm.com 0.0.0.0 bt.phorm.com 0.0.0.0 bt.webwise.com 0.0.0.0 b.webwise.net 0.0.0.0 c.webwise.com 0.0.0.0 c.webwise.net 0.0.0.0 d.oix.com 0.0.0.0 d.phormlabs.com 0.0.0.0 ig.fp.oix.net 0.0.0.0 invite.gezinti.com 0.0.0.0 kentsucks.youcanoptout.com 0.0.0.0 kr.phorm.com 0.0.0.0 mail.youcanoptout.com 0.0.0.0 mail.youcanoptout.net 0.0.0.0 mail.youcanoptout.org 0.0.0.0 monitor.phorm.com 0.0.0.0 mx01.openinternetexchange.com 0.0.0.0 mx01.openinternetexchange.net 0.0.0.0 mx01.webwise.com 0.0.0.0 mx03.phorm.com 0.0.0.0 navegador.oi.com.br 
0.0.0.0 navegador.telefonica.com.br 0.0.0.0 ns1.oix.com 0.0.0.0 ns1.openinternetexchange.com 0.0.0.0 ns1.phorm.com 0.0.0.0 ns2.oix.com 0.0.0.0 ns2.openinternetexchange.com 0.0.0.0 ns2.phorm.com 0.0.0.0 ns2.youcanoptout.com 0.0.0.0 ns3.openinternetexchange.com 0.0.0.0 oi.webnavegador.com.br 0.0.0.0 oixcrv-lab.net 0.0.0.0 oixcrv.net 0.0.0.0 oixcrv-stage.net 0.0.0.0 oix.phorm.com 0.0.0.0 oixpre.net 0.0.0.0 oixpre-stage.net 0.0.0.0 oixssp-lab.net 0.0.0.0 oixssp.net 0.0.0.0 oix-stage.net 0.0.0.0 openinternetexchange.com 0.0.0.0 openinternetexchange.net 0.0.0.0 phorm.kr 0.0.0.0 phormlabs.com 0.0.0.0 prm-ext.phorm.com 0.0.0.0 romdiscover.com 0.0.0.0 rtc.romdiscover.com 0.0.0.0 stats.oix.com 0.0.0.0 stopphoulplay.com 0.0.0.0 stopphoulplay.net 0.0.0.0 telefonica.webnavegador.com.br 0.0.0.0 webnavegador.com.br 0.0.0.0 webwise.com 0.0.0.0 webwise.net 0.0.0.0 w.oix.net 0.0.0.0 www.gezinti.com 0.0.0.0 www.gozatar.com 0.0.0.0 www.oix.com 0.0.0.0 www.openinternetexchange.com 0.0.0.0 www.phormlabs.com 0.0.0.0 www.stopphoulplay.com 0.0.0.0 www.youcanoptout.com 0.0.0.0 www.youcanoptout.net 0.0.0.0 www.youcanoptout.org 0.0.0.0 xxyyzz.youcanoptout.com 0.0.0.0 youcanoptout.com 0.0.0.0 youcanoptout.net 0.0.0.0 youcanoptout.org # gevent-24.11.1/src/gevent/tests/https_svn_python_org_root.pem000066400000000000000000000050111471441230600244360ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY 
sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/keycert.pem000066400000000000000000000035201471441230600205440ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/known_failures.py000066400000000000000000000421011471441230600217710ustar00rootroot00000000000000# This is a list of known failures (=bugs). 
# The tests listed there must fail (or testrunner.py will report error) unless they are prefixed with FLAKY # in which cases the result of them is simply ignored from __future__ import print_function import sys import struct from gevent.testing import sysinfo class Condition(object): __slots__ = () def __and__(self, other): return AndCondition(self, other) def __or__(self, other): return OrCondition(self, other) def __bool__(self): raise NotImplementedError class AbstractBinaryCondition(Condition): # pylint:disable=abstract-method __slots__ = ( 'lhs', 'rhs', ) OP = None def __init__(self, lhs, rhs): self.lhs = lhs self.rhs = rhs def __repr__(self): return "(%r %s %r)" % ( self.lhs, self.OP, self.rhs ) class OrCondition(AbstractBinaryCondition): __slots__ = () OP = '|' def __bool__(self): return bool(self.lhs) or bool(self.rhs) class AndCondition(AbstractBinaryCondition): __slots__ = () OP = '&' def __bool__(self): return bool(self.lhs) and bool(self.rhs) class ConstantCondition(Condition): __slots__ = ( 'value', '__name__', ) def __init__(self, value, name=None): self.value = bool(value) self.__name__ = name or str(value) def __bool__(self): return self.value def __repr__(self): return self.__name__ ALWAYS = ConstantCondition(True) NEVER = ConstantCondition(False) class _AttrCondition(ConstantCondition): __slots__ = ( ) def __init__(self, name): ConstantCondition.__init__(self, getattr(sysinfo, name), name) PYPY = _AttrCondition('PYPY') PYPY3 = _AttrCondition('PYPY3') PY3 = _AttrCondition('PY3') PY2 = _AttrCondition('PY2') OSX = _AttrCondition('OSX') LIBUV = _AttrCondition('LIBUV') WIN = _AttrCondition('WIN') APPVEYOR = _AttrCondition('RUNNING_ON_APPVEYOR') TRAVIS = _AttrCondition('RUNNING_ON_TRAVIS') CI = _AttrCondition('RUNNING_ON_CI') LEAKTEST = _AttrCondition('RUN_LEAKCHECKS') COVERAGE = _AttrCondition('RUN_COVERAGE') RESOLVER_NOT_SYSTEM = _AttrCondition('RESOLVER_NOT_SYSTEM') BIT_64 = ConstantCondition(struct.calcsize('P') * 8 == 64, 'BIT_64') PY380_EXACTLY = ConstantCondition(sys.version_info[:3] == (3, 8, 0), 'PY380_EXACTLY') PY312B3_EXACTLY = ConstantCondition(sys.version_info == (3, 12, 0, 'beta', 3)) PY312B4_EXACTLY = ConstantCondition(sys.version_info == (3, 12, 0, 'beta', 4)) class _Definition(object): __slots__ = ( '__name__', # When does the class of this condition apply? 'when', # When should this test be run alone, if it's run? 'run_alone', # Should this test be ignored during coverage measurement? 
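# ---------------------------------------------------------------------------
# Illustrative sketch only -- not part of gevent. The Condition classes above
# overload ``&`` and ``|`` so that platform requirements can be written
# declaratively and only turned into a real boolean when consulted. The
# standalone snippet below shows the same pattern with made-up flag names
# (WINDOWS, CI, SLOW are assumptions for illustration only).
class _Cond(object):
    def __and__(self, other):
        return _Combined(self, other, '&')
    def __or__(self, other):
        return _Combined(self, other, '|')

class _Const(_Cond):
    def __init__(self, name, value):
        self.name, self.value = name, bool(value)
    def __bool__(self):
        return self.value
    def __repr__(self):
        return self.name

class _Combined(_Cond):
    def __init__(self, lhs, rhs, op):
        self.lhs, self.rhs, self.op = lhs, rhs, op
    def __bool__(self):
        # Evaluation is deferred until the composite is used in a boolean
        # context, mirroring AndCondition/OrCondition above.
        if self.op == '&':
            return bool(self.lhs) and bool(self.rhs)
        return bool(self.lhs) or bool(self.rhs)
    def __repr__(self):
        return '(%r %s %r)' % (self.lhs, self.op, self.rhs)

WINDOWS = _Const('WINDOWS', False)
CI = _Const('CI', True)
SLOW = _Const('SLOW', True)

flaky_when = WINDOWS | (CI & SLOW)   # composed declaratively...
assert bool(flaky_when)              # ...evaluated only here
assert repr(flaky_when) == '(WINDOWS | (CI & SLOW))'
# ---------------------------------------------------------------------------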
'ignore_coverage', # {name: (Condition, value)} 'options', ) def __init__(self, when, run_alone, ignore_coverage, options): assert isinstance(when, Condition) assert isinstance(run_alone, Condition) assert isinstance(ignore_coverage, Condition) self.when = when self.__name__ = None # pylint:disable=non-str-assignment-to-dunder-name self.run_alone = run_alone self.ignore_coverage = ignore_coverage if options: for v in options.values(): assert isinstance(v, tuple) and len(v) == 2 assert isinstance(v[0], Condition) self.options = options def __set_name__(self, owner, name): self.__name__ = name def __repr__(self): return '<%s for %s when=%r=%s run_alone=%r=%s>' % ( type(self).__name__, self.__name__, self.when, bool(self.when), self.run_alone, bool(self.run_alone) ) class _Action(_Definition): __slots__ = ( 'reason', ) def __init__(self, reason='', when=ALWAYS, run_alone=NEVER, ignore_coverage=NEVER, options=None): _Definition.__init__(self, when, run_alone, ignore_coverage, options) self.reason = reason class RunAlone(_Action): __slots__ = () def __init__(self, reason='', when=ALWAYS, ignore_coverage=NEVER): _Action.__init__(self, reason, run_alone=when, ignore_coverage=ignore_coverage) class Failing(_Action): __slots__ = () class Flaky(Failing): __slots__ = () class Ignored(_Action): __slots__ = () class Multi(object): def __init__(self): self._conds = [] def flaky(self, reason='', when=True, ignore_coverage=NEVER, run_alone=NEVER): self._conds.append( Flaky( reason, when=when, ignore_coverage=ignore_coverage, run_alone=run_alone, ) ) return self def ignored(self, reason='', when=True): self._conds.append(Ignored(reason, when=when)) return self def __set_name__(self, owner, name): for c in self._conds: c.__set_name__(owner, name) class DefinitionsMeta(type): # a metaclass on Python 3 that makes sure we only set attributes once. pylint doesn't # warn about that. @classmethod def __prepare__(mcs, name, bases): # pylint:disable=unused-argument,bad-dunder-name return SetOnceMapping() class SetOnceMapping(dict): def __setitem__(self, name, value): if name in self: raise AttributeError(name) dict.__setitem__(self, name, value) som = SetOnceMapping() som[1] = 1 try: som[1] = 2 except AttributeError: del som else: raise AssertionError("SetOnceMapping is broken") DefinitionsBase = DefinitionsMeta('DefinitionsBase', (object,), {}) class Definitions(DefinitionsBase): test__util = RunAlone( """ If we have extra greenlets hanging around due to changes in GC, we won't match the expected output. So far, this is only seen on one version, in CI environment. """, when=(CI & (PY312B3_EXACTLY | PY312B4_EXACTLY)) ) test__issue6 = Flaky( """test__issue6 (see comments in test file) is really flaky on both Travis and Appveyor; on Travis we could just run the test again (but that gets old fast), but on appveyor we don't have that option without a new commit---and sometimes we really need a build to succeed in order to get a release wheel""" ) test__core_fork = Ignored( """fork watchers don't get called on windows because fork is not a concept windows has. See this file for a detailed explanation.""", when=WIN ) test__greenletset = Flaky( when=WIN, ignore_coverage=PYPY ) test__example_udp_client = test__example_udp_server = Flaky( """ These both run on port 9000 and can step on each other...seems like the appveyor containers aren't fully port safe? Or it takes longer for the processes to shut down? Or we run them in a different order in the process pool than we do other places? 
On PyPy on Travis, this fails to get the correct results, sometimes. I can't reproduce locally """, when=APPVEYOR | (PYPY & TRAVIS) ) # This one sometimes randomly closes connections, but no indication # of a server crash, only a client side close. test__server_pywsgi = Flaky(when=APPVEYOR) test_threading = Multi().ignored( """ This one seems to just stop right after patching is done. It passes on a local win 10 vm, and the main test_threading_2.py does as well. Based on the printouts we added, it appears to not even finish importing: https://ci.appveyor.com/project/denik/gevent/build/1.0.1277/job/tpvhesij5gldjxqw#L1190 Ignored because it takes two minutes to time out. """, when=APPVEYOR & LIBUV & PYPY ).flaky( """ test_set_and_clear in Py3 relies on 5 threads all starting and coming to an Event wait point while a sixth thread sleeps for a half second. The sixth thread then does something and checks that the 5 threads were all at the wait point. But the timing is sometimes too tight for appveyor. This happens even if Event isn't monkey-patched """, when=APPVEYOR & PY3 ) test_ftplib = Flaky( r""" could be a problem of appveyor - not sure ====================================================================== ERROR: test_af (__main__.TestIPv6Environment) ---------------------------------------------------------------------- File "C:\Python27-x64\lib\ftplib.py", line 135, in connect self.sock = socket.create_connection((self.host, self.port), self.timeout) File "c:\projects\gevent\gevent\socket.py", line 73, in create_connection raise err error: [Errno 10049] [Error 10049] The requested address is not valid in its context. XXX: On Jan 3 2016 this suddenly started passing on Py27/64; no idea why, the python version was 2.7.11 before and after. """, when=APPVEYOR & BIT_64 ) test__backdoor = Flaky(when=LEAKTEST | PYPY) test__socket_errors = Flaky(when=LEAKTEST) test_signal = Multi().flaky( "On Travis, this very frequently fails due to timing", when=TRAVIS & LEAKTEST, # Partial workaround for the _testcapi issue on PyPy, # but also because signal delivery can sometimes be slow, and this # spawn processes of its own run_alone=APPVEYOR, ).ignored( """ This fails to run a single test. It looks like just importing the module can hang. All I see is the output from patch_all() """, when=APPVEYOR & PYPY3 ) test__monkey_sigchld_2 = Ignored( """ This hangs for no apparent reason when run by the testrunner, even wher maked standalone when run standalone from the command line, it's fine. Issue in pypy2 6.0? """, when=PYPY & LIBUV ) test_ssl = Ignored( """ PyPy 7.0 and 7.1 on Travis with Ubunto Xenial 16.04 can't allocate SSL Context objects, either in Python 2.7 or 3.6. There must be some library incompatibility. No point even running them. XXX: Remember to turn this back on. On Windows, with PyPy3.7 7.3.7, there seem to be all kind of certificate errors. """, when=(PYPY & TRAVIS) | (PYPY3 & WIN) ) test_httpservers = Ignored( """ All the CGI tests hang. There appear to be subprocess problems. """, when=PYPY3 & WIN ) test__pywsgi = Ignored( """ XXX: Re-enable this when we can investigate more. This has started crashing with a SystemError. I cannot reproduce with the same version on macOS and I cannot reproduce with the same version in a Linux vm. Commenting out individual tests just moves the crash around. 
https://bitbucket.org/pypy/pypy/issues/2769/systemerror-unexpected-internal-exception On Appveyor 3.8.0, for some reason this takes *way* too long, about 100s, which often goes just over the default timeout of 100s. This makes no sense. But it also takes nearly that long in 3.7. 3.6 and earlier are much faster. It also takes just over 100s on PyPy 3.7. """, when=(PYPY & TRAVIS & LIBUV) | PY380_EXACTLY, # https://bitbucket.org/pypy/pypy/issues/2769/systemerror-unexpected-internal-exception run_alone=(CI & LEAKTEST & PY3) | (PYPY & LIBUV), # This often takes much longer on PyPy on CI. options={'timeout': (CI & PYPY, 180)}, ) test_subprocess = Multi().flaky( "Unknown, can't reproduce locally; times out one test", when=PYPY & PY3 & TRAVIS, ignore_coverage=ALWAYS, ).ignored( "Tests don't even start before the process times out.", when=PYPY3 & WIN ) test__threadpool = Ignored( """ XXX: Re-enable these when we have more time to investigate. This test, which normally takes ~60s, sometimes hangs forever after running several tests. I cannot reproduce, it seems highly load dependent. Observed with both libev and libuv. """, when=TRAVIS & (PYPY | OSX), # This often takes much longer on PyPy on CI. options={'timeout': (CI & PYPY, 180)}, ) test__threading_2 = Ignored( """ This test, which normally takes 4-5s, sometimes hangs forever after running two tests. I cannot reproduce, it seems highly load dependent. Observed with both libev and libuv. """, when=TRAVIS & (PYPY | OSX), # This often takes much longer on PyPy on CI. options={'timeout': (CI & PYPY, 180)}, ) test__issue230 = Ignored( """ This rarely hangs for unknown reasons. I cannot reproduce locally. """, when=TRAVIS & OSX ) test_selectors = Flaky( """ Timing issues on appveyor. """, when=PY3 & APPVEYOR, ignore_coverage=ALWAYS, ) test__example_portforwarder = Flaky( """ This one sometimes times out, often after output "The process with PID XXX could not be terminated. Reason: There is no running instance of the task.", """, when=APPVEYOR | COVERAGE ) test__issue302monkey = test__threading_vs_settrace = Flaky( """ The gevent concurrency plugin tends to slow things down and get us past our default timeout value. These tests in particular are sensitive to it. So in fact we just turn them off. """, when=COVERAGE, ignore_coverage=ALWAYS, ) test__hub_join_timeout = Ignored( r""" This sometimes times out. It appears to happen when the times take too long and a test raises a FlakyTestTimeout error, aka a unittest.SkipTest error. This probably indicates that we're not cleaning something up correctly: .....ss GEVENTTEST_USE_RESOURCES=-network C:\Python38-x64\python.exe -u \ -mgevent.tests.test__hub_join_timeout [code TIMEOUT] [took 100.4s] """, when=APPVEYOR ) test__example_wsgiserver = test__example_webproxy = RunAlone( """ These share the same port, which means they can conflict between concurrent test runs too XXX: Fix this by dynamically picking a port. """, ) test__pool = RunAlone( """ On a heavily loaded box, these can all take upwards of 200s. 
""", when=(CI & LEAKTEST) | (PYPY3 & APPVEYOR) ) test_socket = RunAlone( "Sometimes has unexpected timeouts", when=CI & PYPY & PY3, ignore_coverage=ALWAYS, # times out ) test__refcount = Ignored( "Sometimes fails to connect for no reason", when=(CI & OSX) | (CI & PYPY) | APPVEYOR, ignore_coverage=PYPY ) test__doctests = Ignored( "Sometimes times out during/after gevent._config.Config", when=CI & OSX ) # tests that can't be run when coverage is enabled # TODO: Now that we have this declarative, we could eliminate this list, # just add them to the main IGNORED_TESTS list. IGNORE_COVERAGE = [ ] # A mapping from test file basename to a dictionary of # options that will be applied on top of the DEFAULT_RUN_OPTIONS. TEST_FILE_OPTIONS = { } FAILING_TESTS = [] IGNORED_TESTS = [] # tests that don't do well when run on busy box # or that are mutually exclusive RUN_ALONE = [ ] def populate(): # pylint:disable=too-many-branches # TODO: Maybe move to the metaclass. # TODO: This could be better. for k, v in Definitions.__dict__.items(): if isinstance(v, Multi): actions = v._conds else: actions = (v,) test_name = k + '.py' del k, v for action in actions: if not isinstance(action, _Action): continue if action.run_alone: RUN_ALONE.append(test_name) if action.ignore_coverage: IGNORE_COVERAGE.append(test_name) if action.options: for opt_name, (condition, value) in action.options.items(): # TODO: Verify that this doesn't match more than once. if condition: TEST_FILE_OPTIONS.setdefault(test_name, {})[opt_name] = value if action.when: if isinstance(action, Ignored): IGNORED_TESTS.append(test_name) elif isinstance(action, Flaky): FAILING_TESTS.append('FLAKY ' + test_name) elif isinstance(action, Failing): FAILING_TESTS.append(test_name) FAILING_TESTS.sort() IGNORED_TESTS.sort() RUN_ALONE.sort() populate() if __name__ == '__main__': print('known_failures:\n', FAILING_TESTS) print('ignored tests:\n', IGNORED_TESTS) print('run alone:\n', RUN_ALONE) print('options:\n', TEST_FILE_OPTIONS) print("ignore during coverage:\n", IGNORE_COVERAGE) gevent-24.11.1/src/gevent/tests/lock_tests.py000066400000000000000000000525421471441230600211270ustar00rootroot00000000000000""" Various tests for synchronization primitives. """ # pylint:disable=no-member,abstract-method import sys import time try: from thread import start_new_thread, get_ident except ImportError: from _thread import start_new_thread, get_ident import threading import unittest from gevent.testing import support from gevent.testing.testcase import TimeAssertMixin def _wait(): # A crude wait/yield function not relying on synchronization primitives. time.sleep(0.01) class Bunch(object): """ A bunch of threads. """ def __init__(self, f, n, wait_before_exit=False): """ Construct a bunch of `n` threads running the same function `f`. If `wait_before_exit` is True, the threads won't terminate until do_finish() is called. 
""" self.f = f self.n = n self.started = [] self.finished = [] self._can_exit = not wait_before_exit def task(): tid = get_ident() self.started.append(tid) try: f() finally: self.finished.append(tid) while not self._can_exit: _wait() for _ in range(n): start_new_thread(task, ()) def wait_for_started(self): while len(self.started) < self.n: _wait() def wait_for_finished(self): while len(self.finished) < self.n: _wait() def do_finish(self): self._can_exit = True class BaseTestCase(TimeAssertMixin, unittest.TestCase): def setUp(self): self._threads = support.threading_setup() def tearDown(self): support.threading_cleanup(*self._threads) support.reap_children() class BaseLockTests(BaseTestCase): """ Tests for both recursive and non-recursive locks. """ def locktype(self): raise NotImplementedError() def test_constructor(self): lock = self.locktype() del lock def test_acquire_destroy(self): lock = self.locktype() lock.acquire() del lock def test_acquire_release(self): lock = self.locktype() lock.acquire() lock.release() del lock def test_try_acquire(self): lock = self.locktype() self.assertTrue(lock.acquire(False)) lock.release() def test_try_acquire_contended(self): lock = self.locktype() lock.acquire() result = [] def f(): result.append(lock.acquire(False)) Bunch(f, 1).wait_for_finished() self.assertFalse(result[0]) lock.release() def test_acquire_contended(self): lock = self.locktype() lock.acquire() N = 5 def f(): lock.acquire() lock.release() b = Bunch(f, N) b.wait_for_started() _wait() self.assertEqual(len(b.finished), 0) lock.release() b.wait_for_finished() self.assertEqual(len(b.finished), N) def test_with(self): lock = self.locktype() def f(): lock.acquire() lock.release() def _with(err=None): with lock: if err is not None: raise err # pylint:disable=raising-bad-type _with() # Check the lock is unacquired Bunch(f, 1).wait_for_finished() self.assertRaises(TypeError, _with, TypeError) # Check the lock is unacquired Bunch(f, 1).wait_for_finished() def test_thread_leak(self): # The lock shouldn't leak a Thread instance when used from a foreign # (non-threading) thread. lock = self.locktype() def f(): lock.acquire() lock.release() n = len(threading.enumerate()) # We run many threads in the hope that existing threads ids won't # be recycled. Bunch(f, 15).wait_for_finished() self.assertEqual(n, len(threading.enumerate())) class LockTests(BaseLockTests): # pylint:disable=abstract-method """ Tests for non-recursive, weak locks (which can be acquired and released from different threads). """ def test_reacquire(self): # Lock needs to be released before re-acquiring. lock = self.locktype() phase = [] def f(): lock.acquire() phase.append(None) lock.acquire() phase.append(None) start_new_thread(f, ()) while not phase: _wait() _wait() self.assertEqual(len(phase), 1) lock.release() while len(phase) == 1: _wait() self.assertEqual(len(phase), 2) def test_different_thread(self): # Lock can be released from a different thread. lock = self.locktype() lock.acquire() def f(): lock.release() b = Bunch(f, 1) b.wait_for_finished() lock.acquire() lock.release() class RLockTests(BaseLockTests): """ Tests for recursive locks. 
""" def test_reacquire(self): lock = self.locktype() lock.acquire() lock.acquire() lock.release() lock.acquire() lock.release() lock.release() def test_release_unacquired(self): # Cannot release an unacquired lock lock = self.locktype() self.assertRaises(RuntimeError, lock.release) lock.acquire() lock.acquire() lock.release() lock.acquire() lock.release() lock.release() self.assertRaises(RuntimeError, lock.release) def test_different_thread(self): # Cannot release from a different thread lock = self.locktype() def f(): lock.acquire() b = Bunch(f, 1, True) try: self.assertRaises(RuntimeError, lock.release) finally: b.do_finish() def test__is_owned(self): lock = self.locktype() self.assertFalse(lock._is_owned()) lock.acquire() self.assertTrue(lock._is_owned()) lock.acquire() self.assertTrue(lock._is_owned()) result = [] def f(): result.append(lock._is_owned()) Bunch(f, 1).wait_for_finished() self.assertFalse(result[0]) lock.release() self.assertTrue(lock._is_owned()) lock.release() self.assertFalse(lock._is_owned()) class EventTests(BaseTestCase): """ Tests for Event objects. """ def eventtype(self): raise NotImplementedError() def test_is_set(self): evt = self.eventtype() self.assertFalse(evt.is_set()) evt.set() self.assertTrue(evt.is_set()) evt.set() self.assertTrue(evt.is_set()) evt.clear() self.assertFalse(evt.is_set()) evt.clear() self.assertFalse(evt.is_set()) def _check_notify(self, evt): # All threads get notified N = 5 results1 = [] results2 = [] def f(): evt.wait() results1.append(evt.is_set()) evt.wait() results2.append(evt.is_set()) b = Bunch(f, N) b.wait_for_started() _wait() self.assertEqual(len(results1), 0) evt.set() b.wait_for_finished() self.assertEqual(results1, [True] * N) self.assertEqual(results2, [True] * N) def test_notify(self): evt = self.eventtype() self._check_notify(evt) # Another time, after an explicit clear() evt.set() evt.clear() self._check_notify(evt) def test_timeout(self): evt = self.eventtype() results1 = [] results2 = [] N = 5 def f(): evt.wait(0.0) results1.append(evt.is_set()) t1 = time.time() evt.wait(0.2) r = evt.is_set() t2 = time.time() results2.append((r, t2 - t1)) Bunch(f, N).wait_for_finished() self.assertEqual(results1, [False] * N) for r, dt in results2: self.assertFalse(r) self.assertTimeWithinRange(dt, 0.18, 10) # The event is set results1 = [] results2 = [] evt.set() Bunch(f, N).wait_for_finished() self.assertEqual(results1, [True] * N) for r, dt in results2: self.assertTrue(r) class ConditionTests(BaseTestCase): """ Tests for condition variables. """ def condtype(self, *args): raise NotImplementedError() def test_acquire(self): cond = self.condtype() # Be default we have an RLock: the condition can be acquired multiple # times. 
# pylint:disable=consider-using-with cond.acquire() cond.acquire() cond.release() cond.release() lock = threading.Lock() cond = self.condtype(lock) cond.acquire() self.assertFalse(lock.acquire(False)) cond.release() self.assertTrue(lock.acquire(False)) self.assertFalse(cond.acquire(False)) lock.release() with cond: self.assertFalse(lock.acquire(False)) def test_unacquired_wait(self): cond = self.condtype() self.assertRaises(RuntimeError, cond.wait) def test_unacquired_notify(self): cond = self.condtype() self.assertRaises(RuntimeError, cond.notify) def _check_notify(self, cond): N = 5 results1 = [] results2 = [] phase_num = 0 def f(): cond.acquire() cond.wait() cond.release() results1.append(phase_num) cond.acquire() cond.wait() cond.release() results2.append(phase_num) b = Bunch(f, N) b.wait_for_started() _wait() self.assertEqual(results1, []) # Notify 3 threads at first cond.acquire() cond.notify(3) _wait() phase_num = 1 cond.release() while len(results1) < 3: _wait() self.assertEqual(results1, [1] * 3) self.assertEqual(results2, []) # Notify 5 threads: they might be in their first or second wait cond.acquire() cond.notify(5) _wait() phase_num = 2 cond.release() while len(results1) + len(results2) < 8: _wait() self.assertEqual(results1, [1] * 3 + [2] * 2) self.assertEqual(results2, [2] * 3) # Notify all threads: they are all in their second wait cond.acquire() cond.notify_all() _wait() phase_num = 3 cond.release() while len(results2) < 5: _wait() self.assertEqual(results1, [1] * 3 + [2] * 2) self.assertEqual(results2, [2] * 3 + [3] * 2) b.wait_for_finished() def test_notify(self): cond = self.condtype() self._check_notify(cond) # A second time, to check internal state is still ok. self._check_notify(cond) def test_timeout(self): cond = self.condtype() results = [] N = 5 def f(): cond.acquire() t1 = time.time() cond.wait(0.2) t2 = time.time() cond.release() results.append(t2 - t1) Bunch(f, N).wait_for_finished() self.assertEqual(len(results), 5) for dt in results: # XXX: libuv sometimes produces 0.19958 self.assertTimeWithinRange(dt, 0.19, 2.0) class BaseSemaphoreTests(BaseTestCase): """ Common tests for {bounded, unbounded} semaphore objects. 
""" def semtype(self, *args): raise NotImplementedError() def test_constructor(self): self.assertRaises(ValueError, self.semtype, value=-1) # Py3 doesn't have sys.maxint self.assertRaises((ValueError, OverflowError), self.semtype, value=-getattr(sys, 'maxint', getattr(sys, 'maxsize', None))) def test_acquire(self): sem = self.semtype(1) sem.acquire() sem.release() sem = self.semtype(2) sem.acquire() sem.acquire() sem.release() sem.release() def test_acquire_destroy(self): sem = self.semtype() sem.acquire() del sem def test_acquire_contended(self): sem = self.semtype(7) sem.acquire() #N = 10 results1 = [] results2 = [] phase_num = 0 def f(): sem.acquire() results1.append(phase_num) sem.acquire() results2.append(phase_num) b = Bunch(f, 10) b.wait_for_started() while len(results1) + len(results2) < 6: _wait() self.assertEqual(results1 + results2, [0] * 6) phase_num = 1 for _ in range(7): sem.release() while len(results1) + len(results2) < 13: _wait() self.assertEqual(sorted(results1 + results2), [0] * 6 + [1] * 7) phase_num = 2 for _ in range(6): sem.release() while len(results1) + len(results2) < 19: _wait() self.assertEqual(sorted(results1 + results2), [0] * 6 + [1] * 7 + [2] * 6) # The semaphore is still locked self.assertFalse(sem.acquire(False)) # Final release, to let the last thread finish sem.release() b.wait_for_finished() def test_try_acquire(self): sem = self.semtype(2) self.assertTrue(sem.acquire(False)) self.assertTrue(sem.acquire(False)) self.assertFalse(sem.acquire(False)) sem.release() self.assertTrue(sem.acquire(False)) def test_try_acquire_contended(self): sem = self.semtype(4) sem.acquire() results = [] def f(): results.append(sem.acquire(False)) results.append(sem.acquire(False)) Bunch(f, 5).wait_for_finished() # There can be a thread switch between acquiring the semaphore and # appending the result, therefore results will not necessarily be # ordered. self.assertEqual(sorted(results), [False] * 7 + [True] * 3) def test_default_value(self): # The default initial value is 1. sem = self.semtype() sem.acquire() def f(): sem.acquire() sem.release() b = Bunch(f, 1) b.wait_for_started() _wait() self.assertFalse(b.finished) sem.release() b.wait_for_finished() def test_with(self): sem = self.semtype(2) def _with(err=None): with sem: self.assertTrue(sem.acquire(False)) sem.release() with sem: self.assertFalse(sem.acquire(False)) if err: raise err # pylint:disable=raising-bad-type _with() self.assertTrue(sem.acquire(False)) sem.release() self.assertRaises(TypeError, _with, TypeError) self.assertTrue(sem.acquire(False)) sem.release() class SemaphoreTests(BaseSemaphoreTests): """ Tests for unbounded semaphores. """ def test_release_unacquired(self): # Unbounded releases are allowed and increment the semaphore's value sem = self.semtype(1) sem.release() sem.acquire() sem.acquire() sem.release() class BoundedSemaphoreTests(BaseSemaphoreTests): """ Tests for bounded semaphores. """ def test_release_unacquired(self): # Cannot go past the initial value sem = self.semtype() self.assertRaises(ValueError, sem.release) sem.acquire() sem.release() self.assertRaises(ValueError, sem.release) class BarrierTests(BaseTestCase): """ Tests for Barrier objects. 
""" N = 5 defaultTimeout = 2.0 def setUp(self): self.barrier = self.barriertype(self.N, timeout=self.defaultTimeout) def tearDown(self): self.barrier.abort() def run_threads(self, f): b = Bunch(f, self.N-1) f() b.wait_for_finished() def multipass(self, results, n): m = self.barrier.parties self.assertEqual(m, self.N) for i in range(n): results[0].append(True) self.assertEqual(len(results[1]), i * m) self.barrier.wait() results[1].append(True) self.assertEqual(len(results[0]), (i + 1) * m) self.barrier.wait() self.assertEqual(self.barrier.n_waiting, 0) self.assertFalse(self.barrier.broken) def test_barrier(self, passes=1): """ Test that a barrier is passed in lockstep """ results = [[], []] def f(): self.multipass(results, passes) self.run_threads(f) def test_barrier_10(self): """ Test that a barrier works for 10 consecutive runs """ return self.test_barrier(10) def test_wait_return(self): """ test the return value from barrier.wait """ results = [] def f(): r = self.barrier.wait() results.append(r) self.run_threads(f) self.assertEqual(sum(results), sum(range(self.N))) def test_action(self): """ Test the 'action' callback """ results = [] def action(): results.append(True) barrier = self.barriertype(self.N, action) def f(): barrier.wait() self.assertEqual(len(results), 1) self.run_threads(f) def test_abort(self): """ Test that an abort will put the barrier in a broken state """ results1 = [] results2 = [] def f(): try: i = self.barrier.wait() if i == self.N//2: raise RuntimeError self.barrier.wait() results1.append(True) except threading.BrokenBarrierError: results2.append(True) except RuntimeError: self.barrier.abort() self.run_threads(f) self.assertEqual(len(results1), 0) self.assertEqual(len(results2), self.N-1) self.assertTrue(self.barrier.broken) def test_reset(self): """ Test that a 'reset' on a barrier frees the waiting threads """ results1 = [] results2 = [] results3 = [] def f(): i = self.barrier.wait() if i == self.N//2: # Wait until the other threads are all in the barrier. while self.barrier.n_waiting < self.N-1: time.sleep(0.001) self.barrier.reset() else: try: self.barrier.wait() results1.append(True) except threading.BrokenBarrierError: results2.append(True) # Now, pass the barrier again self.barrier.wait() results3.append(True) self.run_threads(f) self.assertEqual(len(results1), 0) self.assertEqual(len(results2), self.N-1) self.assertEqual(len(results3), self.N) def test_abort_and_reset(self): """ Test that a barrier can be reset after being broken. """ results1 = [] results2 = [] results3 = [] barrier2 = self.barriertype(self.N) def f(): try: i = self.barrier.wait() if i == self.N//2: raise RuntimeError self.barrier.wait() results1.append(True) except threading.BrokenBarrierError: results2.append(True) except RuntimeError: self.barrier.abort() # Synchronize and reset the barrier. Must synchronize first so # that everyone has left it when we reset, and after so that no # one enters it before the reset. if barrier2.wait() == self.N//2: self.barrier.reset() barrier2.wait() self.barrier.wait() results3.append(True) self.run_threads(f) self.assertEqual(len(results1), 0) self.assertEqual(len(results2), self.N-1) self.assertEqual(len(results3), self.N) def test_timeout(self): """ Test wait(timeout) """ def f(): i = self.barrier.wait() if i == self.N // 2: # One thread is late! time.sleep(1.0) # Default timeout is 2.0, so this is shorter. 
self.assertRaises(threading.BrokenBarrierError, self.barrier.wait, 0.5) self.run_threads(f) def test_default_timeout(self): """ Test the barrier's default timeout """ # create a barrier with a low default timeout barrier = self.barriertype(self.N, timeout=0.3) def f(): i = barrier.wait() if i == self.N // 2: # One thread is later than the default timeout of 0.3s. time.sleep(1.0) self.assertRaises(threading.BrokenBarrierError, barrier.wait) self.run_threads(f) def test_single_thread(self): b = self.barriertype(1) b.wait() b.wait() if __name__ == '__main__': print("This module contains no tests; it is used by other test cases like test_threading_2") gevent-24.11.1/src/gevent/tests/monkey_package/000077500000000000000000000000001471441230600213505ustar00rootroot00000000000000gevent-24.11.1/src/gevent/tests/monkey_package/__init__.py000066400000000000000000000005111471441230600234560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Make a package. This file has no other functionality. Individual modules in this package are used for testing, often being run with 'python -m ...' in individual test cases (functions). """ from __future__ import absolute_import from __future__ import division from __future__ import print_function gevent-24.11.1/src/gevent/tests/monkey_package/__main__.py000066400000000000000000000005531471441230600234450ustar00rootroot00000000000000from __future__ import print_function # This file makes this directory into a runnable package. # it exists to test 'python -m gevent.monkey monkey_package' # Note that the __file__ may differ slightly; starting with # Python 3.9, directly running it gets an abspath, but # using ``runpy`` doesn't. import os.path print(os.path.abspath(__file__)) print(__name__) gevent-24.11.1/src/gevent/tests/monkey_package/issue1526_no_monkey.py000066400000000000000000000010751471441230600254510ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Test for issue #1526: - dnspython is imported first; - no monkey-patching is done. 
""" from __future__ import print_function from __future__ import absolute_import import dns # pylint:disable=import-error assert dns import gevent.socket as socket # pylint:disable=consider-using-from-import socket.getfqdn() # create the resolver from gevent.resolver.dnspython import dns as gdns import dns.rdtypes # pylint:disable=import-error assert dns is not gdns, (dns, gdns) assert dns.rdtypes is not gdns.rdtypes import sys print(sorted(sys.modules)) gevent-24.11.1/src/gevent/tests/monkey_package/issue1526_with_monkey.py000066400000000000000000000012321471441230600260030ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Test for issue #1526: - dnspython is imported first; - monkey-patching happens early """ from __future__ import print_function, absolute_import from gevent import monkey monkey.patch_all() # pylint:disable=import-error import dns assert dns import socket import sys socket.getfqdn() import gevent.resolver.dnspython from gevent.resolver.dnspython import dns as gdns from dns import rdtypes # NOT import dns.rdtypes assert gevent.resolver.dnspython.dns is gdns assert gdns is not dns, (gdns, dns, "id dns", id(dns)) assert gdns.rdtypes is not rdtypes, (gdns.rdtypes, rdtypes) assert hasattr(dns, 'rdtypes') print(sorted(sys.modules)) gevent-24.11.1/src/gevent/tests/monkey_package/issue302monkey.py000066400000000000000000000016471471441230600245320ustar00rootroot00000000000000from __future__ import print_function import socket import sys import os.path if sys.argv[1] == 'patched': print('gevent' in repr(socket.socket)) else: assert sys.argv[1] == 'stdlib' print('gevent' not in repr(socket.socket)) print(os.path.abspath(__file__)) if sys.argv[1] == 'patched': # __package__ is handled differently, for some reason, and # runpy doesn't let us override it. When we call it, it # becomes ''. This appears to be against the documentation for # runpy, which says specifically "If the supplied path # directly references a script file (whether as source or as # precompiled byte code), then __file__ will be set to the # supplied path, and __spec__, __cached__, __loader__ and # __package__ will all be set to None." print(__package__ == '') # pylint:disable=compare-to-empty-string else: # but the interpreter sets it to None print(__package__ is None) gevent-24.11.1/src/gevent/tests/monkey_package/script.py000066400000000000000000000006531471441230600232320ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Test script file, to be used directly as a file. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function # We need some global imports from textwrap import dedent def use_import(): return dedent(" text") if __name__ == '__main__': import os.path print(os.path.abspath(__file__)) print(__name__) print(use_import()) gevent-24.11.1/src/gevent/tests/monkey_package/threadpool_monkey_patches.py000066400000000000000000000015451471441230600271610ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This file runs ``gevent.monkey.patch_all()``. It is intended to be used by ``python -m gevent.monkey `` to prove that monkey-patching twice doesn't have unfortunate sife effects (such as breaking the threadpool). 
""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys from gevent import monkey from gevent import get_hub monkey.patch_all(thread=False, sys=True) def thread_is_greenlet(): from gevent.thread import get_ident as gr_ident std_thread_mod = 'thread' if bytes is str else '_thread' thr_ident = monkey.get_original(std_thread_mod, 'get_ident') return thr_ident() == gr_ident() is_greenlet = get_hub().threadpool.apply(thread_is_greenlet) print(is_greenlet) print(len(sys._current_frames())) sys.stdout.flush() sys.stderr.flush() gevent-24.11.1/src/gevent/tests/monkey_package/threadpool_no_monkey.py000066400000000000000000000014231471441230600261410ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This file *does not* run ``gevent.monkey.patch_all()``. It is intended to be used by ``python -m gevent.monkey `` to prove that the threadpool and getting the original value of things works. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys from gevent import monkey from gevent import get_hub from gevent.thread import get_ident as gr_ident std_thread_mod = 'thread' if bytes is str else '_thread' thr_ident = monkey.get_original(std_thread_mod, 'get_ident') print(thr_ident is gr_ident) def thread_is_greenlet(): return thr_ident() == gr_ident() is_greenlet = get_hub().threadpool.apply(thread_is_greenlet) print(is_greenlet) print(len(sys._current_frames())) gevent-24.11.1/src/gevent/tests/nullcert.pem000066400000000000000000000000001471441230600207140ustar00rootroot00000000000000gevent-24.11.1/src/gevent/tests/server.crt000066400000000000000000000034211471441230600204130ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIFCzCCAvOgAwIBAgIUePnEKFfhxpt3oypt6nTicAGTFJowDQYJKoZIhvcNAQEL BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTIxMDcwODExMzQzNVoYDzIxMjEw NjE0MTEzNDM1WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggIiMA0GCSqGSIb3DQEB AQUAA4ICDwAwggIKAoICAQChqfmG6uOG95Jb7uRi6yxohJ8GOR3gi39yX6JB+Xdu kvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc1c0XUEjVoQvCNQHIaDTXiUXOGXfk QNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6HouLpyIC55SXdK7pTEbmU7J1aBjug n3O56cu6FzjU1j/0QVUVGloxApLvv57bmINaX9ygKsh/ug0lhV1RwYLJ9UX57m95 FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZfs72kcDUTRYKNZbRFRRETORdOVRHx lAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66Batun6HU+YAs6z8Qc8S1EMElJdoyV eLCqLA07btICzKq2I16TZAOWVng2P7NOtibAeCzDAxAxJ3Oby+BVikKcu8WmJLxG vRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX9H5l/WF+aNRqSccgs0Umddj33N+b /mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1iqD7iXvTxt7yZeU7LIMRgDqhVe6z oBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si3j3Wkmyca7hEN8qq8DkLWNf1PTcI wo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/eEXTjuUePQlHCvwW9iiY7jTjDfbIv pwIDAQABo1MwUTAdBgNVHQ4EFgQUTUfShFbaXGMwrWEAkm05sXFH/x4wHwYDVR0j BBgwFoAUTUfShFbaXGMwrWEAkm05sXFH/x4wDwYDVR0TAQH/BAUwAwEB/zANBgkq hkiG9w0BAQsFAAOCAgEAe65ORDx0NDxTo1q6EY221KS3vEezUNBdZNaeOQsQeUAY lEO5iZ+2QLIVlWC5UtvISK96FU2CX0ucgAGfHS2ZB7o8i95fbjG2qrWC+VUH4V/6 jse9jlfGlYGkPuU5onNIDGcZ7gay3n0prCDiguAmCzV419GnGDWgSSgyVNCp/0tx b7pR5cVr0kZ5bTZjiysEEprkG2ofAlXzj09VGtTfM8gQvCz9Puj7pGzw2iaIEQVk hSGjoRWlI5x6+o16JOTHXzv9cYRUfDX6tjw3nQJIeMipuUkR8pkHUFjG3EeJEtO3 X/GO0G8rwUPaZiskGPiMZj7XqoVclnYL7JtntwUHR/dU5A/EhDfhgEfTXTqT78Oe cKri+VJE+G/hYxbP0FNYaDtqIwJcX1tsy4HOpKVBncc+K/PvXElVsyQET/+uwH7p Wm5ymndnuLoiQrWIA4nJC6rVwR4GPijuN0NCKcVdE+8jlOCBs3VBJTWKuu0J80RP 71iZy03AoK1YY4+nHglmE9HetAgSsbGh2fWC7DUS/4JzLSzOBeb+nn74zfmIfMU+ qUArFXvVGAtjmZZ/63cWzXDMZsp1BZ+O5dx6Gi2QtjgGYhh6DhW7ocQYXDkAeN/O K1Yzwq/G4AEQA0k0/1I+F0Rdlo41+7tOp+LMCOoZXqUzhM0ZQ2sf3QclubxLX9U= -----END 
CERTIFICATE----- gevent-24.11.1/src/gevent/tests/server.key000066400000000000000000000063101471441230600204130ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQChqfmG6uOG95Jb 7uRi6yxohJ8GOR3gi39yX6JB+Xdukvqxy2/vsjH1+CF1i8jKZZO0hJLGT+/GmKIc 1c0XUEjVoQvCNQHIaDTXiUXOGXfkQNKR0vtJH5ZOZn/tvYAKPniYPmHuF3TpAB6H ouLpyIC55SXdK7pTEbmU7J1aBjugn3O56cu6FzjU1j/0QVUVGloxApLvv57bmINa X9ygKsh/ug0lhV1RwYLJ9UX57m95FIlcofa98tCuoKi++G+sWsjopDXVmsiTbjZf s72kcDUTRYKNZbRFRRETORdOVRHxlAIPEn4QFYn/3wVSNFvfeY0j8RI5YcPLU66B atun6HU+YAs6z8Qc8S1EMElJdoyVeLCqLA07btICzKq2I16TZAOWVng2P7NOtibA eCzDAxAxJ3Oby+BVikKcu8WmJLxGvRvaPljdD76xjPB5NK6O0J62C3uU3EWhPODX 9H5l/WF+aNRqSccgs0Umddj33N+b/mTJnHn1GpanThrv1UfOFGKfxjemwESz66d1 iqD7iXvTxt7yZeU7LIMRgDqhVe6zoBpJEeWl9YYyfGPwgIOhwzNVZ5WkzQARs7si 3j3Wkmyca7hEN8qq8DkLWNf1PTcIwo/239wKRbyW3Z+U4IGRrVMdeSoC2JpRAx/e EXTjuUePQlHCvwW9iiY7jTjDfbIvpwIDAQABAoICAC3CJMTRe3FaZezro210T2+O Ck0CobhLA9nlw9GUwP9lTtxATwCzmXybrSzOUhknwzUXSUwkmCPIVCqBQbnVmagO G3vu8QA+rqZLTpzVjJ/o0TFBXKsH681pKdCrELDVmeDN135C2W6SABI4Qq4VeIol mCAQHn8gxzyl9Kvkk8AVIfZ/fJDBve5Qbm2+iEye1uSEa/68aEST2Kod9B7JvVKZ 4Nq78vwPH+v2JsZlfNvyuiakGWkOb47eHqVfQIyybaebwzkgxKEmUvGnuIfw0rUP ubI4FVx9/iVIxZYAckHEuQh3HYOD9TmdcK4h79dDWnXP6G6hg3/rwbsT+fR+0aBQ 9rkKnA4uToGikYmplixAQ/jDBwMs3VQqenO+YBIsC4HEZ0fJUbs+l4LEnuUJxYcR UlAvnVQXa1WGne3Yzb2xONWeiocKfhcdJ2JuQo00UR74+2Qonxn/WpimvlLCBDgI uKxHCSWOgv5yPpU2kwTPIjORXcy/y2G9K2bnsQCzznPRDyNkZmavQxxG6greFcrO /0yhRPuBgxKBRvXPO+F5fybKFlU9IPLFehV60jLUybBejab/lMJyxdkh9UMu2Xqy FVsRGazJt6T6AGp6TFEEcFUQw7qXNhVo9S7zGGaJFJdYc+Vx8QJRoCe8EAYVH7Mp b/eYGhHaKg6iG7QCjPPxAoIBAQDN54wtuDqpAA+4PmqhiEhQKhabNqAoVmAWUxnJ Db4Zzvkkc3Fo/Yg0HnQVaT0KmkcxY7397lTdtiwNkWPgJ0f6+g7L4K7PA7xh/q84 IoXFGvYWwVdiVXLR1l06jorpA20clnba6CsbezwcllTq4bWvNnrAcM8l1YrAlRnV qqqbPL78Rnba4C8q+VFy8r0d9OGnbvFcV7VWJjhr0a3aZbHQ67jPinNiUWvBVFFx yGrqPMjkeHyiTLMhqQpaSHH67S88rj0g9RKexBaSUrl18QO7xnQHHSCcFWMQOiSN shNvFri48dnU+Ms6ZLc3MBHbTK6uzP8xJCVnmsz/MWPGkQZFAoIBAQDI/vj/3/y/ EpIawyHN7PQAMoto4AQF6sVasrgGd1tRsJnGKrCugH9gILvyke3L7qg0JTV3bDJY e8+vH1vC3NV7PsOlCFjMtRWG0lRbCh/b7Qe3pCvPu4mbFhJgMT/mz+vbl5zvcdgX kvne+St/267NKnY5gHBDhqitBwkZwNlTWJ0zVmTecKXn/KwjS9lX1qU3HiT3UFkd 5Y5Nt5lj1IOK/6NCXkxVkgOc4Zjcxx138Cg03VJhIiHTusRq6z9iTSTDubhkaSbi 2nadptFBiQtkVhAJ5G53U7pl/pIhhiJy901bu/v/wrIMJ2l6hiZIcLrbg6VGXxjV 5dB7LDEtKoL7AoIBAQC8+ffA+mX0N9c1nSuWh5L+6DIJUHBbtTLJKonu6gsAeuJU 3xNGbfK1CwI1qHnaolAW91knlrcTKaBy726ACu1YXmp4GgW2f9JFCk/csGqfxaf4 qIg/+va/ugOku7CoPXnGFB6PuSffOBKqlhrn3DI41kKBHsgwDDYlnHKylMmyYmVS +oUZS0pfIaXsXvbNaLQ2TG9+9gy7Pabo5e+vE0jI25+p84MEyH+iV3XMfUoLI7Cp aB/TgZuimBelVvotd8Sz56K4/dSSHJwuvXfz1Dk9/Nz+rnAAcOyTtxlXZwnJGkx9 iZMIkTNMq6UwJJEu+ckVK5ZHjso5tWzSBo1xcCcVAoIBAQCPL0x1A7zK5VDd7cqE J1w/U8KKiKN1D6VeElkUiiysyjERwdGxzmpvMYKSsDCGCdMbqrInDBXlgPYXnDBD ZgxSywiW5ZZU5l+advWPEWxWwMmxoitvxfqmV5fpnMwYAmDUQ3KSBTjaumJ03G6H nBkvoSMtnXjcMe6xrIRoK0Dmpgb+znn3GKqn1BFQ57TCZW+3DytoX33M1X6FkNie DINVHv3Pxtt8ThNyzCeYh+RPT+9kkZIhDi6o5bENNd8miSw6nnBkX6BLFTRQ5MjH dfh+luzAD1I+gZAVHsA9T4/09IXQZt+DeNBb5iu3FB/rlRsYS/UOZ6qKnjfhtz6l HVbHAoIBAFjNY/UPJDxQ/uG+rMU0nrmSBRGdgBvQkcefjWX/LIZV3MjNilUQ+B2a lXz5AHGmHRnnwQsBVfN8rf4qQLln8l34Kgm7+cIFavgfg2oqVbNyNgezSlUmRq0J Ttf3xYJtRgRUx8F+BcgJXMqlNGTMQJY8wawM/ATkwkbmSwGOKe04sBeIkwEycMId BupvfN5lxDrKqJVPSl1t5Rh4us95CNh22/c5Tq5rsynl02ZB4swlcsVTdv8FSGmM QVf/MkWXGN/x4lHJhKyklHMGv15GGvys1nlPTstMfUYs55ioWRW46TXQ8vOyzzpg 67xzBKYFEde+hgYk7X1Xeqj8A6bsqro= -----END PRIVATE KEY----- gevent-24.11.1/src/gevent/tests/sha256.pem000066400000000000000000000040211471441230600201030ustar00rootroot00000000000000-----BEGIN 
CERTIFICATE----- MIIFxzCCA6+gAwIBAgIJALnlnf5uzTkIMA0GCSqGSIb3DQEBCwUAMEsxCzAJBgNV BAYTAkRFMRcwFQYDVQQKEw5zY2hva29rZWtzLm9yZzEjMCEGCSqGSIb3DQEJARYU aGFubm9Ac2Nob2tva2Vrcy5vcmcwHhcNMTAwMTI3MDAyMTI1WhcNMjAwMTI1MDAy MTI1WjBLMQswCQYDVQQGEwJERTEXMBUGA1UEChMOc2Nob2tva2Vrcy5vcmcxIzAh BgkqhkiG9w0BCQEWFGhhbm5vQHNjaG9rb2tla3Mub3JnMIICIjANBgkqhkiG9w0B AQEFAAOCAg8AMIICCgKCAgEApJ4ODPwEooMW35dQPlBqdvcfkEvjhcsA7jmJfFqN e/1T34zT44X9+KnMBSG2InacbD7eyFgjfaENFsZ87YkEBDIFZ/SHotLJZORQ8PUj YoxPG4mjKN+yL2WthNcYbRyJreTbbDroNMuw6tkTSxeSXyYFQrKMCUfErVbZa/d5 RvfFVk+Au9dVUFhed/Stn5cv+a0ffvpyA7ygihm1kMFICbvPeI0846tmC2Ph7rM5 pYQyNBDOVpULODTk5Wu6jiiJJygvJWCZ1FdpsdBs5aKWHWdRhX++quGuflTTjH5d qaIka4op9H7XksYphTDXmV+qHnva5jbPogwutDQcVsGBQcJaLmQqhsQK13bf4khE iWJvfBLfHn8OOpY25ZwwuigJIwifNCxQeeT1FrLmyuYNhz2phPpzx065kqSUSR+A Iw8DPE6e65UqMDKqZnID3dQeiQaFrHEV+Ibo0U/tD0YSBw5p33TMh0Es33IBWMac m7x4hIFWdhl8W522u6qOrTswY3s8vB7blNWqMc9n7oWH8ybFf7EgKeDVtEN9AyBE 0WotXIEZWI+WvDbU1ACJXau9sQhYP/eerg7Zwr3iGUy4IQ5oUJibnjtcE+z8zmDN pE6YcMCLJyLjXiQ3iHG9mNXzw7wPnslTbEEEukrfSlHGgW8Dm+VrNyW0JUM1bntx vbMCAwEAAaOBrTCBqjAdBgNVHQ4EFgQUCedv7pDTuXtCxm4HTw9hUtrTvsowewYD VR0jBHQwcoAUCedv7pDTuXtCxm4HTw9hUtrTvsqhT6RNMEsxCzAJBgNVBAYTAkRF MRcwFQYDVQQKEw5zY2hva29rZWtzLm9yZzEjMCEGCSqGSIb3DQEJARYUaGFubm9A c2Nob2tva2Vrcy5vcmeCCQC55Z3+bs05CDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQBHKAxA7WA/MEFjet03K8ouzEOr6Jrk2fZOuRhoDZ+9gr4FtaJB P3Hh5D00kuSOvDnwsvCohxeNd1KTMAwVmVoH+NZkHERn3UXniUENlp18koI1ehlr CZbXbzzE9Te9BelliSFA63q0cq0yJN1x9GyabU34XkAouCAmOqfSpKNZWZHGBHPF bbYnZrHEMcsye6vKeTOcg1GqUHGrQM2WK0QaOwnCQv2RblI9VN+SeRoUJ44qTXdW TwIYStsIPesacNcAQTStnHgKqIPx4zCwdx5xo8zONbXJfocqwyFqiAofvb9dN1nW g1noVBcXB+oRBZW5CjFw87U88itq39i9+BWl835DWLBW2pVmx1QTLGv0RNgs/xVx mWnjH4nNHvrjn6pRmqHZTk/SS0Hkl2qtDsynVxIl8EiMTfWSU3DBTuD2J/RSzuOE eKtAbaoXkXE31jCl4FEZLITIZd8UkXacb9rN304tAK92L76JOAV+xOZxFRipmvx4 +A9qQXgLhtP4VaDajb44V/kCKPSA0Vm3apehke9Wl8dDtagfos1e6MxSu3EVLXRF SP2U777V77pdMSd0f/7cerKn5FjrxW1v1FaP1oIGniMk4qQNTgA/jvvhjybsPlVA jsfnhWGbh1voJa0RQcMiRMsxpw2P1KNOEu37W2eq/vFghVztZJQUmb5iNw== -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/test__GreenletExit.py000066400000000000000000000001771471441230600225470ustar00rootroot00000000000000from gevent import GreenletExit assert issubclass(GreenletExit, BaseException) assert not issubclass(GreenletExit, Exception) gevent-24.11.1/src/gevent/tests/test___config.py000066400000000000000000000114311471441230600215470ustar00rootroot00000000000000# Copyright 2018 gevent contributors. See LICENSE for details. import os import unittest import sys from gevent import _config class TestResolver(unittest.TestCase): old_resolver = None def setUp(self): if 'GEVENT_RESOLVER' in os.environ: self.old_resolver = os.environ['GEVENT_RESOLVER'] del os.environ['GEVENT_RESOLVER'] def tearDown(self): if self.old_resolver: os.environ['GEVENT_RESOLVER'] = self.old_resolver def test_key(self): self.assertEqual(_config.Resolver.environment_key, 'GEVENT_RESOLVER') def test_default(self): from gevent.resolver.thread import Resolver conf = _config.Resolver() self.assertEqual(conf.get(), Resolver) def test_env(self): from gevent.resolver.blocking import Resolver os.environ['GEVENT_RESOLVER'] = 'foo,bar,block,dnspython' conf = _config.Resolver() self.assertEqual(conf.get(), Resolver) os.environ['GEVENT_RESOLVER'] = 'dnspython' # The existing value is unchanged self.assertEqual(conf.get(), Resolver) # A new object reflects it try: from gevent.resolver.dnspython import Resolver as DResolver except ImportError: # pragma: no cover # dnspython is optional; skip it. 
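# ---------------------------------------------------------------------------
# Illustrative sketch only -- not part of the test suite. test__GreenletExit.py
# above asserts that GreenletExit derives from BaseException but not Exception.
# The practical consequence is that a broad ``except Exception:`` handler in a
# worker cannot swallow Greenlet.kill():
import gevent
from gevent import GreenletExit

def _worker():
    try:
        while True:
            gevent.sleep(0.1)
    except Exception:
        # Never reached on kill(): GreenletExit is not an Exception.
        return 'caught Exception'
    except GreenletExit:
        return 'killed'

g = gevent.spawn(_worker)
gevent.sleep(0.2)
g.kill()                 # raises GreenletExit in the worker and waits for it
assert g.value == 'killed'
# ---------------------------------------------------------------------------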
import warnings warnings.warn('dnspython not installed') else: conf = _config.Resolver() self.assertEqual(conf.get(), DResolver) def test_set_str_long(self): from gevent.resolver.blocking import Resolver conf = _config.Resolver() conf.set('gevent.resolver.blocking.Resolver') self.assertEqual(conf.get(), Resolver) def test_set_str_short(self): from gevent.resolver.blocking import Resolver conf = _config.Resolver() conf.set('block') self.assertEqual(conf.get(), Resolver) def test_set_class(self): from gevent.resolver.blocking import Resolver conf = _config.Resolver() conf.set(Resolver) self.assertEqual(conf.get(), Resolver) def test_set_through_config(self): from gevent.resolver.thread import Resolver as Default from gevent.resolver.blocking import Resolver conf = _config.Config() self.assertEqual(conf.resolver, Default) conf.resolver = 'block' self.assertEqual(conf.resolver, Resolver) class TestFunctions(unittest.TestCase): def test_validate_bool(self): self.assertTrue(_config.validate_bool('on')) self.assertTrue(_config.validate_bool('1')) self.assertFalse(_config.validate_bool('off')) self.assertFalse(_config.validate_bool('0')) self.assertFalse(_config.validate_bool('')) with self.assertRaises(ValueError): _config.validate_bool(' hmm ') def test_validate_invalid(self): with self.assertRaises(ValueError): _config.validate_invalid(self) class TestConfig(unittest.TestCase): def test__dir__(self): self.assertEqual(sorted(_config.config.settings), sorted(dir(_config.config))) def test_getattr(self): # Bypass the property that might be set here self.assertIsNotNone(_config.config.__getattr__('resolver')) def test__getattr__invalid(self): with self.assertRaises(AttributeError): getattr(_config.config, 'no_such_setting') def test_set_invalid(self): with self.assertRaises(AttributeError): _config.config.set('no such setting', True) class TestImportableSetting(unittest.TestCase): def test_empty_list(self): i = _config.ImportableSetting() with self.assertRaisesRegex(ImportError, "Cannot import from empty list"): i._import_one_of([]) def test_path_not_supported(self): import warnings i = _config.ImportableSetting() path = list(sys.path) with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") with self.assertRaisesRegex(ImportError, "Cannot import 'foo/bar/gevent.no_such_module'"): i._import_one('foo/bar/gevent.no_such_module') # We restored the path self.assertEqual(path, sys.path) # We did not issue a warning self.assertEqual(len(w), 0) def test_non_string(self): i = _config.ImportableSetting() self.assertIs(i._import_one(self), self) def test_get_options(self): i = _config.ImportableSetting() self.assertEqual({}, i.get_options()) i.shortname_map = {'foo': 'bad/path'} options = i.get_options() self.assertIn('foo', options) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test___ident.py000066400000000000000000000040751471441230600214130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # copyright 2018 gevent contributors. See LICENSE for details. 
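# ---------------------------------------------------------------------------
# Illustrative sketch only -- not part of this test file. test___config.py
# above verifies that the resolver can be selected by short name ('block',
# 'thread', 'dnspython', 'ares'), by dotted path, by class, or through the
# GEVENT_RESOLVER environment variable. From application code the same setting
# is reached through ``gevent.config``; it has to be assigned before the
# resolver is first used (ideally at process start-up):
import gevent

gevent.config.resolver = 'block'      # same effect as GEVENT_RESOLVER=block

from gevent import socket
socket.getaddrinfo('localhost', 80)   # now served by the blocking resolver

print(gevent.get_hub().resolver)      # a gevent.resolver.blocking.Resolver
# ---------------------------------------------------------------------------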
from __future__ import absolute_import from __future__ import division from __future__ import print_function import gc import gevent.testing as greentest from gevent._ident import IdentRegistry from gevent._compat import PYPY class Target(object): pass class TestIdent(greentest.TestCase): def setUp(self): self.reg = IdentRegistry() def tearDown(self): self.reg = None def test_basic(self): target = Target() self.assertEqual(0, self.reg.get_ident(target)) self.assertEqual(1, len(self.reg)) self.assertEqual(0, self.reg.get_ident(target)) self.assertEqual(1, len(self.reg)) target2 = Target() self.assertEqual(1, self.reg.get_ident(target2)) self.assertEqual(2, len(self.reg)) self.assertEqual(1, self.reg.get_ident(target2)) self.assertEqual(2, len(self.reg)) self.assertEqual(0, self.reg.get_ident(target)) # When an object dies, we can re-use # its id. Under PyPy we need to collect garbage first. del target if PYPY: for _ in range(3): gc.collect() self.assertEqual(1, len(self.reg)) target3 = Target() self.assertEqual(1, self.reg.get_ident(target2)) self.assertEqual(0, self.reg.get_ident(target3)) self.assertEqual(2, len(self.reg)) @greentest.skipOnPyPy("This would need to GC very frequently") def test_circle(self): keep_count = 3 keepalive = [None] * keep_count for i in range(1000): target = Target() # Drop an old one. keepalive[i % keep_count] = target self.assertLessEqual(self.reg.get_ident(target), keep_count) @greentest.skipOnPurePython("Needs C extension") class TestCExt(greentest.TestCase): def test_c_extension(self): self.assertEqual(IdentRegistry.__module__, 'gevent._gevent_c_ident') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test___monitor.py000066400000000000000000000305201471441230600217710ustar00rootroot00000000000000# Copyright 2018 gevent contributors. See LICENSE for details. import gc import unittest from greenlet import gettrace from greenlet import settrace from gevent.monkey import get_original from gevent._compat import thread_mod_name from gevent._compat import NativeStrIO from gevent.testing import verify from gevent.testing.skipping import skipWithoutPSUtil from gevent import _monitor as monitor from gevent import config as GEVENT_CONFIG get_ident = get_original(thread_mod_name, 'get_ident') class MockHub(object): _threadpool = None _resolver = None def __init__(self): self.thread_ident = get_ident() self.exception_stream = NativeStrIO() self.dead = False def __bool__(self): return not self.dead __nonzero__ = __bool__ def handle_error(self, *args): # pylint:disable=unused-argument raise # pylint:disable=misplaced-bare-raise @property def loop(self): return self def reinit(self): "mock loop.reinit" class _AbstractTestPeriodicMonitoringThread(object): # Makes sure we don't actually spin up a new monitoring thread. 
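# ---------------------------------------------------------------------------
# Illustrative sketch only -- not part of this test module. The tests below
# drive PeriodicMonitoringThread against a MockHub so that no real monitor
# thread is started. In a real program the same machinery is enabled through
# ``gevent.config`` (environment variables GEVENT_MONITOR_THREAD_ENABLE and
# GEVENT_MAX_BLOCKING_TIME), and blocking reports are delivered to
# ``gevent.events`` subscribers:
import gevent
from gevent import config, events

config.monitor_thread = True          # must be set before the hub is created
config.max_blocking_time = 0.1        # seconds a greenlet may hog the loop

def _report(event):
    # Subscribers see every gevent event; filter for the blocking report.
    if isinstance(event, events.EventLoopBlocked):
        print('loop blocked by %r for ~%.2fs' % (event.greenlet, event.blocking_time))

events.subscribers.append(_report)

def _hog():
    import time
    time.sleep(0.3)   # deliberately non-cooperative: never yields to the hub

gevent.spawn(_hog).join()
# ---------------------------------------------------------------------------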
# pylint:disable=no-member def setUp(self): super(_AbstractTestPeriodicMonitoringThread, self).setUp() self._orig_start_new_thread = monitor.start_new_thread self._orig_thread_sleep = monitor.thread_sleep monitor.thread_sleep = lambda _s: gc.collect() # For PyPy self.tid = 0xDEADBEEF def start_new_thread(_f, _a): r = self.tid self.tid += 1 return r monitor.start_new_thread = start_new_thread self.hub = MockHub() self.pmt = monitor.PeriodicMonitoringThread(self.hub) self.hub.periodic_monitoring_thread = self.pmt self.pmt_default_funcs = self.pmt.monitoring_functions()[:] self.len_pmt_default_funcs = len(self.pmt_default_funcs) def tearDown(self): monitor.start_new_thread = self._orig_start_new_thread monitor.thread_sleep = self._orig_thread_sleep prev = self.pmt._greenlet_tracer.previous_trace_function self.pmt.kill() assert gettrace() is prev, (gettrace(), prev) settrace(None) super(_AbstractTestPeriodicMonitoringThread, self).tearDown() class TestPeriodicMonitoringThread(_AbstractTestPeriodicMonitoringThread, unittest.TestCase): def test_constructor(self): self.assertEqual(0xDEADBEEF, self.pmt.monitor_thread_ident) self.assertEqual(gettrace(), self.pmt._greenlet_tracer) @skipWithoutPSUtil("Verifies the process") def test_get_process(self): proc = self.pmt._get_process() self.assertIsNotNone(proc) # Same object is returned each time. self.assertIs(proc, self.pmt._get_process()) def test_hub_wref(self): self.assertIs(self.hub, self.pmt.hub) del self.hub gc.collect() self.assertIsNone(self.pmt.hub) # And it killed itself. self.assertFalse(self.pmt.should_run) self.assertIsNone(gettrace()) def test_add_monitoring_function(self): self.assertRaises(ValueError, self.pmt.add_monitoring_function, None, 1) self.assertRaises(ValueError, self.pmt.add_monitoring_function, lambda: None, -1) def f(): "Does nothing" # Add self.pmt.add_monitoring_function(f, 1) self.assertEqual(self.len_pmt_default_funcs + 1, len(self.pmt.monitoring_functions())) self.assertEqual(1, self.pmt.monitoring_functions()[1].period) # Update self.pmt.add_monitoring_function(f, 2) self.assertEqual(self.len_pmt_default_funcs + 1, len(self.pmt.monitoring_functions())) self.assertEqual(2, self.pmt.monitoring_functions()[1].period) # Remove self.pmt.add_monitoring_function(f, None) self.assertEqual(self.len_pmt_default_funcs, len(self.pmt.monitoring_functions())) def test_calculate_sleep_time(self): self.assertEqual( self.pmt.monitoring_functions()[0].period, self.pmt.calculate_sleep_time()) # Pretend that GEVENT_CONFIG.max_blocking_time was set to 0, # to disable this monitor. self.pmt._calculated_sleep_time = 0 self.assertEqual( self.pmt.inactive_sleep_time, self.pmt.calculate_sleep_time() ) # Getting the list of monitoring functions will also # do this, if it looks like it has changed self.pmt.monitoring_functions()[0].period = -1 self.pmt._calculated_sleep_time = 0 self.pmt.monitoring_functions() self.assertEqual( self.pmt.monitoring_functions()[0].period, self.pmt.calculate_sleep_time()) self.assertEqual( self.pmt.monitoring_functions()[0].period, self.pmt._calculated_sleep_time) def test_call_destroyed_hub(self): # Add a function that destroys the hub so we break out (eventually) # This clears the wref, which eventually calls kill() def f(_hub): _hub = None self.hub = None gc.collect() self.pmt.add_monitoring_function(f, 0.1) self.pmt() self.assertFalse(self.pmt.should_run) def test_call_dead_hub(self): # Add a function that makes the hub go false (e.g., it quit) # This causes the function to kill itself. 
def f(hub): hub.dead = True self.pmt.add_monitoring_function(f, 0.1) self.pmt() self.assertFalse(self.pmt.should_run) def test_call_SystemExit(self): # breaks the loop def f(_hub): raise SystemExit() self.pmt.add_monitoring_function(f, 0.1) self.pmt() def test_call_other_error(self): class MyException(Exception): pass def f(_hub): raise MyException() self.pmt.add_monitoring_function(f, 0.1) with self.assertRaises(MyException): self.pmt() def test_hub_reinit(self): import os from gevent.hub import reinit self.pmt.pid = -1 old_tid = self.pmt.monitor_thread_ident reinit(self.hub) self.assertEqual(os.getpid(), self.pmt.pid) self.assertEqual(old_tid + 1, self.pmt.monitor_thread_ident) class TestPeriodicMonitorBlocking(_AbstractTestPeriodicMonitoringThread, unittest.TestCase): def test_previous_trace(self): self.pmt.kill() self.assertIsNone(gettrace()) called = [] def f(*args): called.append(args) settrace(f) self.pmt = monitor.PeriodicMonitoringThread(self.hub) self.assertEqual(gettrace(), self.pmt._greenlet_tracer) self.assertIs(self.pmt._greenlet_tracer.previous_trace_function, f) self.pmt._greenlet_tracer('event', ('args',)) self.assertEqual([('event', ('args',))], called) def test__greenlet_tracer(self): self.assertEqual(0, self.pmt._greenlet_tracer.greenlet_switch_counter) # Unknown event still counts as a switch (should it?) self.pmt._greenlet_tracer('unknown', None) self.assertEqual(1, self.pmt._greenlet_tracer.greenlet_switch_counter) self.assertIsNone(self.pmt._greenlet_tracer.active_greenlet) origin = object() target = object() self.pmt._greenlet_tracer('switch', (origin, target)) self.assertEqual(2, self.pmt._greenlet_tracer.greenlet_switch_counter) self.assertIs(target, self.pmt._greenlet_tracer.active_greenlet) # Unknown event removes active greenlet self.pmt._greenlet_tracer('unknown', ()) self.assertEqual(3, self.pmt._greenlet_tracer.greenlet_switch_counter) self.assertIsNone(self.pmt._greenlet_tracer.active_greenlet) def test_monitor_blocking(self): # Initially there's no active greenlet and no switches, # so nothing is considered blocked from gevent.events import subscribers from gevent.events import IEventLoopBlocked events = [] subscribers.append(events.append) self.assertFalse(self.pmt.monitor_blocking(self.hub)) # Give it an active greenlet origin = object() target = object() self.pmt._greenlet_tracer('switch', (origin, target)) # We've switched, so we're not blocked self.assertFalse(self.pmt.monitor_blocking(self.hub)) self.assertFalse(events) # Again without switching is a problem. 
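        # (no switch was recorded since the previous check, so the monitor
        # treats the greenlet as blocked and emits an IEventLoopBlocked event)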
self.assertTrue(self.pmt.monitor_blocking(self.hub)) self.assertTrue(events) verify.verifyObject(IEventLoopBlocked, events[0]) del events[:] # But we can order it not to be a problem self.pmt.ignore_current_greenlet_blocking() self.assertFalse(self.pmt.monitor_blocking(self.hub)) self.assertFalse(events) # And back again self.pmt.monitor_current_greenlet_blocking() self.assertTrue(self.pmt.monitor_blocking(self.hub)) # A bad thread_ident in the hub doesn't mess things up self.hub.thread_ident = -1 self.assertTrue(self.pmt.monitor_blocking(self.hub)) class MockProcess(object): def __init__(self, rss): self.rss = rss def memory_full_info(self): return self @skipWithoutPSUtil("Accessess memory info") class TestPeriodicMonitorMemory(_AbstractTestPeriodicMonitoringThread, unittest.TestCase): rss = 0 def setUp(self): _AbstractTestPeriodicMonitoringThread.setUp(self) self._old_max = GEVENT_CONFIG.max_memory_usage GEVENT_CONFIG.max_memory_usage = None self.pmt._get_process = lambda: MockProcess(self.rss) def tearDown(self): GEVENT_CONFIG.max_memory_usage = self._old_max _AbstractTestPeriodicMonitoringThread.tearDown(self) def test_can_monitor_and_install(self): # We run tests with psutil installed, and we have access to our # process. self.assertTrue(self.pmt.can_monitor_memory_usage()) # No warning, adds a function self.pmt.install_monitor_memory_usage() self.assertEqual(self.len_pmt_default_funcs + 1, len(self.pmt.monitoring_functions())) def test_cannot_monitor_and_install(self): import warnings self.pmt._get_process = lambda: None self.assertFalse(self.pmt.can_monitor_memory_usage()) # This emits a warning, visible by default with warnings.catch_warnings(record=True) as ws: self.pmt.install_monitor_memory_usage() self.assertEqual(1, len(ws)) self.assertIs(monitor.MonitorWarning, ws[0].category) def test_monitor_no_allowed(self): self.assertEqual(-1, self.pmt.monitor_memory_usage(None)) def test_monitor_greater(self): from gevent import events self.rss = 2 GEVENT_CONFIG.max_memory_usage = 1 # Initial event event = self.pmt.monitor_memory_usage(None) self.assertIsInstance(event, events.MemoryUsageThresholdExceeded) self.assertEqual(2, event.mem_usage) self.assertEqual(1, event.max_allowed) # pylint:disable=no-member self.assertIsInstance(event.memory_info, MockProcess) # pylint:disable=no-member # No growth, no event event = self.pmt.monitor_memory_usage(None) self.assertIsNone(event) # Growth, event self.rss = 3 event = self.pmt.monitor_memory_usage(None) self.assertIsInstance(event, events.MemoryUsageThresholdExceeded) self.assertEqual(3, event.mem_usage) # Shrinking below gets us back self.rss = 1 event = self.pmt.monitor_memory_usage(None) self.assertIsInstance(event, events.MemoryUsageUnderThreshold) self.assertEqual(1, event.mem_usage) # coverage repr(event) # No change, no event event = self.pmt.monitor_memory_usage(None) self.assertIsNone(event) # Growth, event self.rss = 3 event = self.pmt.monitor_memory_usage(None) self.assertIsInstance(event, events.MemoryUsageThresholdExceeded) self.assertEqual(3, event.mem_usage) def test_monitor_initial_below(self): self.rss = 1 GEVENT_CONFIG.max_memory_usage = 10 event = self.pmt.monitor_memory_usage(None) self.assertIsNone(event) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test___monkey_patching.py000066400000000000000000000066561471441230600234760ustar00rootroot00000000000000import sys import os import glob import atexit # subprocess: include in subprocess tests from gevent.testing import util from gevent.testing 
import sysinfo from gevent.testing.support import is_resource_enabled TIMEOUT = 120 # XXX: Generalize this so other packages can use it. def get_absolute_pythonpath(): paths = [os.path.abspath(p) for p in os.environ.get('PYTHONPATH', '').split(os.pathsep)] return os.pathsep.join(paths) def TESTRUNNER(tests=None): if not is_resource_enabled('gevent_monkey'): util.log('WARNING: Testing monkey-patched stdlib has been disabled', color="suboptimal-behaviour") return try: test_dir, version_test_dir = util.find_stdlib_tests() except util.NoSetupPyFound as e: util.log("WARNING: No setup.py and src/greentest found: %r", e, color="suboptimal-behaviour") return if not os.path.exists(test_dir): util.log('WARNING: No test directory found at %s', test_dir, color="suboptimal-behaviour") return # pylint:disable=unspecified-encoding with open(os.path.join(test_dir, 'version')) as f: preferred_version = f.read().strip() running_version = sysinfo.get_python_version() if preferred_version != running_version: util.log('WARNING: The tests in %s/ are from version %s and your Python is %s', test_dir, preferred_version, running_version, color="suboptimal-behaviour") version_tests = glob.glob('%s/test_*.py' % version_test_dir) version_tests = sorted(version_tests) if not tests: tests = glob.glob('%s/test_*.py' % test_dir) tests = sorted(tests) PYTHONPATH = (os.getcwd() + os.pathsep + get_absolute_pythonpath()).rstrip(':') tests = sorted(set(os.path.basename(x) for x in tests)) version_tests = sorted(set(os.path.basename(x) for x in version_tests)) util.log("Discovered %d tests in %s", len(tests), test_dir) util.log("Discovered %d version-specific tests in %s", len(version_tests), version_test_dir) options = { 'cwd': test_dir, 'timeout': TIMEOUT, 'setenv': { 'PYTHONPATH': PYTHONPATH, # debug produces resource tracking warnings for the # CFFI backends. On Python 2, many of the stdlib tests # rely on refcounting to close sockets so they produce # lots of noise. Python 3 is not completely immune; # test_ftplib.py tends to produce warnings---and the Python 3 # test framework turns those into test failures! 
'GEVENT_DEBUG': 'error', } } if tests and not sys.platform.startswith("win"): atexit.register(os.system, 'rm -f */@test*') basic_args = [sys.executable, '-u', '-W', 'ignore', '-m', 'gevent.testing.monkey_test'] for filename in tests: if filename in version_tests: util.log("Overriding %s from %s with file from %s", filename, test_dir, version_test_dir) continue yield basic_args + [filename], options.copy() options['cwd'] = version_test_dir for filename in version_tests: yield basic_args + [filename], options.copy() def main(): from gevent.testing import testrunner discovered_tests = TESTRUNNER(sys.argv[1:]) discovered_tests = list(discovered_tests) return testrunner.Runner(discovered_tests, quiet=None)() if __name__ == '__main__': main() gevent-24.11.1/src/gevent/tests/test__all__.py000066400000000000000000000264331471441230600212210ustar00rootroot00000000000000# Check __all__, __implements__, __extensions__, __imports__ of the modules from __future__ import print_function from __future__ import absolute_import import functools import sys import unittest import types import importlib import warnings from gevent.testing import six from gevent.testing import modules from gevent.testing.sysinfo import PLATFORM_SPECIFIC_SUFFIXES from gevent.testing.util import debug from gevent._patcher import MAPPING class ANY(object): def __contains__(self, item): return True ANY = ANY() NOT_IMPLEMENTED = { 'socket': ['CAPI'], 'thread': ['allocate', 'exit_thread', 'interrupt_main', 'start_new'], 'select': ANY, 'os': ANY, 'threading': ANY, '__builtin__' if six.PY2 else 'builtins': ANY, 'signal': ANY, } COULD_BE_MISSING = { 'socket': ['create_connection', 'RAND_add', 'RAND_egd', 'RAND_status'], 'subprocess': ['_posixsubprocess'], } # Things without an __all__ should generally be internal implementation # helpers NO_ALL = { 'gevent.threading', 'gevent._compat', 'gevent._corecffi', 'gevent._ffi', 'gevent._fileobjectcommon', 'gevent._fileobjectposix', 'gevent._patcher', 'gevent._socketcommon', 'gevent._tblib', 'gevent._util', 'gevent.resolver._addresses', 'gevent.resolver._hostsfile', # gevent.monkey is handled specially. } ALLOW_IMPLEMENTS = [ 'gevent._queue', # 'gevent.resolver.dnspython', # 'gevent.resolver_thread', # 'gevent.resolver.blocking', # 'gevent.resolver_ares', # 'gevent.server', # 'gevent._resolver.hostfile', # 'gevent.util', # 'gevent.threadpool', # 'gevent.timeout', ] # A list of modules that may contain things that aren't actually, technically, # extensions, but that need to be in __extensions__ anyway due to the way, # for example, monkey patching, needs to work. 
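# (For example, on Windows gevent.signal is appended below even though the
# standard library ships a signal module of its own.)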
EXTRA_EXTENSIONS = [] if sys.platform.startswith('win'): EXTRA_EXTENSIONS.append('gevent.signal') _MISSING = '' def skip_if_no_stdlib_counterpart(f): @functools.wraps(f) def m(self): if not self.stdlib_module: self.skipTest("Need stdlib counterpart to %s" % self.modname) f(self) return m class AbstractTestMixin(object): modname = None stdlib_has_all = False stdlib_all = None stdlib_name = None stdlib_module = None @classmethod def setUpClass(cls): modname = cls.modname if modname.endswith(PLATFORM_SPECIFIC_SUFFIXES): raise unittest.SkipTest("Module %s is platform specific" % modname) with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) try: cls.module = importlib.import_module(modname) except ImportError: if modname in modules.OPTIONAL_MODULES: msg = "Unable to import %s" % modname raise unittest.SkipTest(msg) raise cls.__implements__ = getattr(cls.module, '__implements__', None) cls.__imports__ = getattr(cls.module, '__imports__', []) cls.__extensions__ = getattr(cls.module, '__extensions__', []) cls.stdlib_name = MAPPING.get(modname) if cls.stdlib_name is not None: try: cls.stdlib_module = __import__(cls.stdlib_name) except ImportError: pass else: cls.stdlib_has_all = True cls.stdlib_all = getattr(cls.stdlib_module, '__all__', None) if cls.stdlib_all is None: cls.stdlib_has_all = False cls.stdlib_all = [ name for name in dir(cls.stdlib_module) if not name.startswith('_') and not isinstance(getattr(cls.stdlib_module, name), types.ModuleType) ] def skipIfNoAll(self): if not hasattr(self.module, '__all__'): self.assertTrue( self.modname in NO_ALL or self.modname.startswith('gevent.monkey.'), "Module has no all" ) self.skipTest("%s Needs __all__" % self.modname) def test_all(self): # Check that __all__ is present in the gevent module, # and only includes things that actually exist and can be # imported from it. self.skipIfNoAll() names = {} six.exec_("from %s import *" % self.modname, names) names.pop('__builtins__', None) self.maxDiff = None # It should match both as a set self.assertEqual(set(names), set(self.module.__all__)) # and it should not contain duplicates. self.assertEqual(sorted(names), sorted(self.module.__all__)) def test_all_formula(self): self.skipIfNoAll() # Check __all__ = __implements__ + __extensions__ + __imported__ # This is disabled because it was previously being skipped entirely # back when we had to call things manually. In that time, it drifted # out of sync. It should be enabled again and problems corrected. all_calculated = ( tuple(self.__implements__ or ()) + tuple(self.__imports__ or ()) + tuple(self.__extensions__ or ()) ) try: self.assertEqual(sorted(all_calculated), sorted(self.module.__all__)) except AssertionError: self.skipTest("Module %s fails the all formula; fix it" % self.modname) except TypeError: # TypeError: '<' not supported between instances of 'type' and 'str' raise AssertionError( "Unable to sort %r from all %s in %s" % ( all_calculated, self.module.__all__, self.module ) ) def test_implements_presence_justified(self): # Check that __implements__ is present only if the module is modeled # after a module from stdlib (like gevent.socket). 
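        # Modules explicitly named in ALLOW_IMPLEMENTS are exempt from this check.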
if self.modname in ALLOW_IMPLEMENTS: return if self.__implements__ is not None and self.stdlib_module is None: raise AssertionError( '%s (%r) has __implements__ (%s) but no stdlib counterpart module exists (%s)' % (self.modname, self.module, self.__implements__, self.stdlib_name)) @skip_if_no_stdlib_counterpart def test_implements_subset_of_stdlib_all(self): # Check that __implements__ + __imports__ is a subset of the # corresponding standard module __all__ or dir() for name in tuple(self.__implements__ or ()) + tuple(self.__imports__): if name in self.stdlib_all: continue if name in COULD_BE_MISSING.get(self.stdlib_name, ()): continue if name in dir(self.stdlib_module): # like thread._local which is not in thread.__all__ continue raise AssertionError('%r is not found in %r.__all__ nor in dir(%r)' % (name, self.stdlib_module, self.stdlib_module)) @skip_if_no_stdlib_counterpart def test_implements_actually_implements(self): # Check that the module actually implements the entries from # __implements__ for name in self.__implements__ or (): item = getattr(self.module, name) try: stdlib_item = getattr(self.stdlib_module, name) self.assertIsNot(item, stdlib_item) except AttributeError: if name not in COULD_BE_MISSING.get(self.stdlib_name, []): raise @skip_if_no_stdlib_counterpart def test_imports_actually_imports(self): # Check that the module actually imports the entries from # __imports__ for name in self.__imports__: item = getattr(self.module, name) stdlib_item = getattr(self.stdlib_module, name) self.assertIs(item, stdlib_item) @skip_if_no_stdlib_counterpart def test_extensions_actually_extend(self): # Check that the module actually defines new entries in # __extensions__ if self.modname in EXTRA_EXTENSIONS: return for name in self.__extensions__: try: if hasattr(self.stdlib_module, name): raise AssertionError("'%r' is not an extension, it is found in %r" % ( name, self.stdlib_module )) except TypeError as ex: # TypeError: attribute name must be string, not 'type' raise AssertionError( "Got TypeError (%r) getting %r (of %s) from %s/%s" % ( ex, name, self.__extensions__, self.stdlib_module, self.modname ) ) @skip_if_no_stdlib_counterpart def test_completeness(self): # pylint:disable=too-many-branches # Check that __all__ (or dir()) of the corresponsing stdlib is # a subset of __all__ of this module missed = [] for name in self.stdlib_all: if name not in getattr(self.module, '__all__', []): missed.append(name) # handle stuff like ssl.socket and ssl.socket_error which have no reason to be in gevent.ssl.__all__ if not self.stdlib_has_all: for name in missed[:]: if hasattr(self.module, name): missed.remove(name) # remove known misses not_implemented = NOT_IMPLEMENTED.get(self.stdlib_name) if not_implemented is not None: result = [] for name in missed: if name in not_implemented: # We often don't want __all__ to be set because we wind up # documenting things that we just copy in from the stdlib. 
# But if we implement it, don't print a warning if getattr(self.module, name, _MISSING) is _MISSING: debug('IncompleteImplWarning: %s.%s' % (self.modname, name)) else: result.append(name) missed = result if missed: if self.stdlib_has_all: msg = '''The following items in %r.__all__ are missing from %r: %r''' % (self.stdlib_module, self.module, missed) else: msg = '''The following items in dir(%r) are missing from %r: %r''' % (self.stdlib_module, self.module, missed) raise AssertionError(msg) def _create_tests(): for _, modname in modules.walk_modules(include_so=False, recursive=True, check_optional=False): if modname.endswith(PLATFORM_SPECIFIC_SUFFIXES): continue if modname.endswith('__main__'): # gevent.monkey.__main__ especially is a problem. # These aren't meant to be imported anyway. continue orig_modname = modname modname_no_period = orig_modname.replace('.', '_') cls = type( 'Test_' + modname_no_period, (AbstractTestMixin, unittest.TestCase), { '__module__': __name__, 'modname': orig_modname } ) globals()[cls.__name__] = cls _create_tests() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/gevent/tests/test__api.py000066400000000000000000000107061471441230600207200ustar00rootroot00000000000000# Copyright (c) 2008 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import gevent.testing as greentest import gevent from gevent import util, socket DELAY = 0.1 class Test(greentest.TestCase): @greentest.skipOnAppVeyor("Timing causes the state to often be [start,finished]") def test_killing_dormant(self): state = [] def test(): try: state.append('start') gevent.sleep(DELAY * 3.0) except: # pylint:disable=bare-except state.append('except') # catching GreenletExit state.append('finished') g = gevent.spawn(test) gevent.sleep(DELAY / 2) assert state == ['start'], state g.kill() # will not get there, unless switching is explicitly scheduled by kill self.assertEqual(state, ['start', 'except', 'finished']) def test_nested_with_timeout(self): def func(): return gevent.with_timeout(0.2, gevent.sleep, 2, timeout_value=1) self.assertRaises(gevent.Timeout, gevent.with_timeout, 0.1, func) def test_sleep_invalid_switch(self): p = gevent.spawn(util.wrap_errors(AssertionError, gevent.sleep), 2) gevent.sleep(0) # wait for p to start, because actual order of switching is reversed switcher = gevent.spawn(p.switch, None) result = p.get() assert isinstance(result, AssertionError), result assert 'Invalid switch' in str(result), repr(str(result)) switcher.kill() if hasattr(socket, 'socketpair'): def _test_wait_read_invalid_switch(self, sleep): sock1, sock2 = socket.socketpair() try: p = gevent.spawn(util.wrap_errors(AssertionError, socket.wait_read), # pylint:disable=no-member sock1.fileno()) gevent.get_hub().loop.run_callback(switch_None, p) if sleep is not None: gevent.sleep(sleep) result = p.get() assert isinstance(result, AssertionError), result assert 'Invalid switch' in str(result), repr(str(result)) finally: sock1.close() sock2.close() def test_invalid_switch_None(self): self._test_wait_read_invalid_switch(None) def test_invalid_switch_0(self): self._test_wait_read_invalid_switch(0) def test_invalid_switch_1(self): self._test_wait_read_invalid_switch(0.001) # we don't test wait_write the same way, because socket is always ready to write def switch_None(g): g.switch(None) class TestTimers(greentest.TestCase): def test_timer_fired(self): lst = [1] def func(): gevent.spawn_later(0.01, lst.pop) gevent.sleep(0.02) gevent.spawn(func) # Func has not run yet self.assertEqual(lst, [1]) # Run callbacks but don't yield. gevent.sleep() # Let timers fire. Func should be done. gevent.sleep(0.1) self.assertEqual(lst, []) def test_spawn_is_not_cancelled(self): lst = [1] def func(): gevent.spawn(lst.pop) # exiting immediately, but self.lst.pop must be called gevent.spawn(func) gevent.sleep(0.1) self.assertEqual(lst, []) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__api_timeout.py000066400000000000000000000142631471441230600224700ustar00rootroot00000000000000# Copyright (c) 2008 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import sys import gevent.testing as greentest import weakref import time import gc from gevent import sleep from gevent import Timeout from gevent import get_hub from gevent.testing.timing import SMALL_TICK as DELAY from gevent.testing import flaky class Error(Exception): pass class _UpdateNowProxy(object): update_now_calls = 0 def __init__(self, loop): self.loop = loop def __getattr__(self, name): return getattr(self.loop, name) def update_now(self): self.update_now_calls += 1 self.loop.update_now() class _UpdateNowWithTimerProxy(_UpdateNowProxy): def timer(self, *_args, **_kwargs): return _Timer(self) class _Timer(object): pending = False active = False def __init__(self, loop): self.loop = loop def start(self, *_args, **kwargs): if kwargs.get("update"): self.loop.update_now() self.pending = self.active = True def stop(self): self.active = self.pending = False def close(self): "Does nothing" class Test(greentest.TestCase): def test_timeout_calls_update_now(self): hub = get_hub() loop = hub.loop proxy = _UpdateNowWithTimerProxy(loop) hub.loop = proxy try: with Timeout(DELAY * 2) as t: self.assertTrue(t.pending) finally: hub.loop = loop self.assertEqual(1, proxy.update_now_calls) def test_sleep_calls_update_now(self): hub = get_hub() loop = hub.loop proxy = _UpdateNowProxy(loop) hub.loop = proxy try: sleep(0.01) finally: hub.loop = loop self.assertEqual(1, proxy.update_now_calls) @greentest.skipOnAppVeyor("Timing is flaky, especially under Py 3.4/64-bit") @greentest.skipOnPyPy3OnCI("Timing is flaky, especially under Py 3.4/64-bit") @greentest.reraises_flaky_timeout((Timeout, AssertionError)) def test_api(self): # Nothing happens if with-block finishes before the timeout expires t = Timeout(DELAY * 2) self.assertFalse(t.pending, t) with t: self.assertTrue(t.pending, t) sleep(DELAY) # check if timer was actually cancelled self.assertFalse(t.pending, t) sleep(DELAY * 2) # An exception will be raised if it's not with self.assertRaises(Timeout) as exc: with Timeout(DELAY) as t: sleep(DELAY * 10) self.assertIs(exc.exception, t) # You can customize the exception raised: with self.assertRaises(IOError): with Timeout(DELAY, IOError("Operation takes way too long")): sleep(DELAY * 10) # Providing classes instead of values should be possible too: with self.assertRaises(ValueError): with Timeout(DELAY, ValueError): sleep(DELAY * 10) try: 1 / 0 except ZeroDivisionError: with self.assertRaises(ZeroDivisionError): with Timeout(DELAY, sys.exc_info()[0]): sleep(DELAY * 10) raise AssertionError('should not get there') raise AssertionError('should not get there') else: raise AssertionError('should not get there') # It's possible to cancel the timer inside the block: with Timeout(DELAY) as timer: timer.cancel() sleep(DELAY * 2) # To silent the exception before exiting the block, pass False as second parameter. 
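        # (the timer still fires and interrupts the sleep, but the exception
        # is swallowed so the with-block exits normally)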
XDELAY = 0.1 start = time.time() with Timeout(XDELAY, False): sleep(XDELAY * 2) delta = time.time() - start self.assertTimeWithinRange(delta, 0, XDELAY * 2) # passing None as seconds disables the timer with Timeout(None): sleep(DELAY) sleep(DELAY) def test_ref(self): err = Error() err_ref = weakref.ref(err) with Timeout(DELAY * 2, err): sleep(DELAY) del err gc.collect() self.assertFalse(err_ref(), err_ref) @flaky.reraises_flaky_race_condition() def test_nested_timeout(self): with Timeout(DELAY, False): with Timeout(DELAY * 10, False): sleep(DELAY * 3 * 20) raise AssertionError('should not get there') with Timeout(DELAY) as t1: with Timeout(DELAY * 20) as t2: with self.assertRaises(Timeout) as exc: sleep(DELAY * 30) self.assertIs(exc.exception, t1) self.assertFalse(t1.pending, t1) self.assertTrue(t2.pending, t2) self.assertFalse(t2.pending) with Timeout(DELAY * 20) as t1: with Timeout(DELAY) as t2: with self.assertRaises(Timeout) as exc: sleep(DELAY * 30) self.assertIs(exc.exception, t2) self.assertTrue(t1.pending, t1) self.assertFalse(t2.pending, t2) self.assertFalse(t1.pending) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__ares_host_result.py000066400000000000000000000016141471441230600235320ustar00rootroot00000000000000from __future__ import print_function import pickle import gevent.testing as greentest try: from gevent.resolver.cares import ares_host_result except ImportError: # pragma: no cover ares_host_result = None @greentest.skipIf(ares_host_result is None, "Must be able to import ares") class TestPickle(greentest.TestCase): # Issue 104: ares.ares_host_result unpickleable def _test(self, protocol): r = ares_host_result('family', ('arg1', 'arg2', )) dumped = pickle.dumps(r, protocol) loaded = pickle.loads(dumped) self.assertEqual(r, loaded) # pylint:disable=no-member self.assertEqual(r.family, loaded.family) for i in range(0, pickle.HIGHEST_PROTOCOL): def make_test(j): return lambda self: self._test(j) setattr(TestPickle, 'test' + str(i), make_test(i)) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__ares_timeout.py000066400000000000000000000017671471441230600226560ustar00rootroot00000000000000from __future__ import print_function import unittest import gevent try: from gevent.resolver.ares import Resolver except ImportError as ex: Resolver = None from gevent import socket import gevent.testing as greentest from gevent.testing.sockets import udp_listener @unittest.skipIf( Resolver is None, "Needs ares resolver" ) class TestTimeout(greentest.TestCase): __timeout__ = 30 def test(self): listener = self._close_on_teardown(udp_listener()) address = listener.getsockname() def reader(): while True: listener.recvfrom(10000) greader = gevent.spawn(reader) self._close_on_teardown(greader.kill) r = Resolver(servers=[address[0]], timeout=0.001, tries=1, udp_port=address[-1]) self._close_on_teardown(r) with self.assertRaisesRegex(socket.herror, "ARES_ETIMEOUT"): r.gethostbyname('www.google.com') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__backdoor.py000066400000000000000000000127141471441230600217340ustar00rootroot00000000000000from __future__ import print_function from __future__ import absolute_import import gevent from gevent import socket from gevent import backdoor import gevent.testing as greentest from gevent.testing.params import DEFAULT_BIND_ADDR_TUPLE from gevent.testing.params import DEFAULT_CONNECT def read_until(conn, postfix): read = b'' assert isinstance(postfix, bytes) 
while not read.endswith(postfix): result = conn.recv(1) if not result: raise AssertionError('Connection ended before %r. Data read:\n%r' % (postfix, read)) read += result return read if isinstance(read, str) else read.decode('utf-8') def readline(conn): with conn.makefile() as f: return f.readline() class WorkerGreenlet(gevent.Greenlet): spawning_stack_limit = 2 class SocketWithBanner(socket.socket): __slots__ = ('banner',) def __init__(self, *args, **kwargs): self.banner = None super(SocketWithBanner, self).__init__(*args, **kwargs) def __enter__(self): return socket.socket.__enter__(self) def __exit__(self, t, v, tb): return socket.socket.__exit__(self, t, v, tb) @greentest.skipOnAppVeyor( "With the update to libev 4.31 and potentially closing sockets in the background, " "alternate tests started hanging on appveyor. Something like .E.E.E. " "See https://ci.appveyor.com/project/denik/gevent/build/job/n9fynkoyt2bvk8b5 " "It's not clear why, but presumably a socket isn't getting closed and a watcher is tied " "to the wrong file descriptor. I haven't been able to reproduce. If it were a systemic " "problem I'd expect to see more failures, so it is probably specific to resource management " "in this test." ) class Test(greentest.TestCase): __timeout__ = 10 def tearDown(self): gevent.sleep() # let spawned greenlets die super(Test, self).tearDown() def _make_and_start_server(self, *args, **kwargs): server = backdoor.BackdoorServer(DEFAULT_BIND_ADDR_TUPLE, *args, **kwargs) server.start() return server def _create_connection(self, server): conn = SocketWithBanner() conn.connect((DEFAULT_CONNECT, server.server_port)) # pylint:disable=not-callable try: banner = self._wait_for_prompt(conn) except: conn.close() raise conn.banner = banner return conn def _wait_for_prompt(self, conn): return read_until(conn, b'>>> ') def _close(self, conn, cmd=b'quit()\r\n)'): conn.sendall(cmd) line = readline(conn) self.assertEqual(line, '') conn.close() @greentest.skipOnMacOnCI( "Sometimes fails to get the right answers; " "https://travis-ci.org/github/gevent/gevent/jobs/692184822" ) def test_multi(self): with self._make_and_start_server() as server: def connect(): with self._create_connection(server) as conn: conn.sendall(b'2+2\r\n') line = readline(conn) self.assertEqual(line.strip(), '4', repr(line)) self._close(conn) jobs = [WorkerGreenlet.spawn(connect) for _ in range(10)] try: done = gevent.joinall(jobs, raise_error=True) finally: gevent.joinall(jobs, raise_error=False) self.assertEqual(len(done), len(jobs), done) def test_quit(self): with self._make_and_start_server() as server: with self._create_connection(server) as conn: self._close(conn) def test_sys_exit(self): with self._make_and_start_server() as server: with self._create_connection(server) as conn: self._close(conn, b'import sys; sys.exit(0)\r\n') def test_banner(self): expected_banner = "Welcome stranger!" # native string with self._make_and_start_server(banner=expected_banner) as server: with self._create_connection(server) as conn: banner = conn.banner self._close(conn) self.assertEqual(banner[:len(expected_banner)], expected_banner, banner) def test_builtins(self): with self._make_and_start_server() as server: with self._create_connection(server) as conn: conn.sendall(b'locals()["__builtins__"]\r\n') response = read_until(conn, b'>>> ') self._close(conn) self.assertLess( len(response), 300, msg="locals() unusable: %s..." 
% response) def test_switch_exc(self): from gevent.queue import Queue, Empty def bad(): q = Queue() print('switching out, then throwing in') try: q.get(block=True, timeout=0.1) except Empty: print("Got Empty") print('switching out') gevent.sleep(0.1) print('switched in') with self._make_and_start_server(locals={'bad': bad}) as server: with self._create_connection(server) as conn: conn.sendall(b'bad()\r\n') response = self._wait_for_prompt(conn) self._close(conn) response = response.replace('\r\n', '\n') self.assertEqual( 'switching out, then throwing in\nGot Empty\nswitching out\nswitched in\n>>> ', response) if __name__ == '__main__': greentest.main() # pragma: testrunner-no-combine gevent-24.11.1/src/gevent/tests/test__close_backend_fd.py000066400000000000000000000074311471441230600233750ustar00rootroot00000000000000from __future__ import print_function import os import unittest import gevent from gevent import core from gevent.hub import Hub from gevent.testing import sysinfo @unittest.skipUnless( getattr(core, 'LIBEV_EMBED', False), "Needs embedded libev. " "hub.loop.fileno is only defined when " "we embed libev for some reason. " "Choosing specific backends is also only supported by libev " "(not libuv), and besides, libuv has a nasty tendency to " "abort() the process if its FD gets closed. " ) class Test(unittest.TestCase): # NOTE that we extend unittest.TestCase, not greentest.TestCase # Extending the later causes the wrong hub to get used. BACKENDS_THAT_SUCCEED_WHEN_FD_CLOSED = ( 'kqueue', 'epoll', 'linux_aio', 'linux_iouring', ) BACKENDS_THAT_WILL_FAIL_TO_CREATE_AT_RUNTIME = ( # This fails on the Fedora Rawhide 33 image. It's not clear # why; needs investigated. 'linux_iouring', ) if not sysinfo.libev_supports_linux_iouring() else ( # Even on systems that ought to support it, according to the # version number, it often fails to actually work. See the # manylinux_2_28_x86_64 image as-of 2024-10-03; it seems that # 6.8.0-1014-azure, as used in the Ubuntu 22.04 Github Actions # runner works fine, though. So it's not clear what we can do # other than skip it. It's possible this has to do with # running the manylinux images in docker? " Docker also # consequently disabled io_uring from their default seccomp # profile." ) BACKENDS_TO_SKIP = ( 'linux_iouring', ) BACKENDS_THAT_WILL_FAIL_TO_CREATE_AT_RUNTIME += ( # This can be compiled on any (?) version of # linux, but there's a runtime check that you're # running at least kernel 4.19, so we can fail to create # the hub. When we updated to libev 4.31 from 4.25, Travis Ci # was still on kernel 1.15 (Ubuntu 16.04). 'linux_aio', ) if not sysinfo.libev_supports_linux_aio() else ( ) def _check_backend(self, backend): hub = Hub(backend, default=False) try: self.assertEqual(hub.loop.backend, backend) gevent.sleep(0.001) fileno = hub.loop.fileno() if fileno is None: return # nothing to close, test implicitly passes. 
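            # Close the backend's file descriptor out from under libev: some
            # backends tolerate this, the others should raise SystemError on
            # the next loop iteration.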
os.close(fileno) if backend in self.BACKENDS_THAT_SUCCEED_WHEN_FD_CLOSED: gevent.sleep(0.001) else: with self.assertRaisesRegex(SystemError, "(libev)"): gevent.sleep(0.001) hub.destroy() self.assertIn('destroyed', repr(hub)) finally: if hub.loop is not None: hub.destroy() @classmethod def _make_test(cls, count, backend): # pylint:disable=no-self-argument if backend in cls.BACKENDS_TO_SKIP: def test(self): self.skipTest('Expecting %s to fail' % (backend, )) elif backend in cls.BACKENDS_THAT_WILL_FAIL_TO_CREATE_AT_RUNTIME: def test(self): with self.assertRaisesRegex(SystemError, 'ev_loop_new'): Hub(backend, default=False) else: def test(self): self._check_backend(backend) test.__name__ = 'test_' + backend + '_' + str(count) return test.__name__, test @classmethod def _make_tests(cls): count = backend = None for count in range(2): for backend in core.supported_backends(): name, func = cls._make_test(count, backend) setattr(cls, name, func) name = func = None Test._make_tests() if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__compat.py000066400000000000000000000026371471441230600214360ustar00rootroot00000000000000from __future__ import absolute_import, print_function, division import os import unittest class TestFSPath(unittest.TestCase): def setUp(self): self.__path = None def __fspath__(self): if self.__path is not None: return self.__path raise AttributeError("Accessing path data") def _callFUT(self, arg): from gevent._compat import _fspath return _fspath(arg) def test_text(self): s = u'path' self.assertIs(s, self._callFUT(s)) def test_bytes(self): s = b'path' self.assertIs(s, self._callFUT(s)) def test_None(self): with self.assertRaises(TypeError): self._callFUT(None) def test_working_path(self): self.__path = u'text' self.assertIs(self.__path, self._callFUT(self)) self.__path = b'bytes' self.assertIs(self.__path, self._callFUT(self)) def test_failing_path_AttributeError(self): self.assertIsNone(self.__path) with self.assertRaises(AttributeError): self._callFUT(self) def test_fspath_non_str(self): self.__path = object() with self.assertRaises(TypeError): self._callFUT(self) @unittest.skipUnless(hasattr(os, 'fspath'), "Tests native os.fspath") class TestNativeFSPath(TestFSPath): def _callFUT(self, arg): return os.fspath(arg) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__contextvars.py000066400000000000000000000757451471441230600225450ustar00rootroot00000000000000# gevent: copied from 3.7 to test our monkey-patch. # Modified to work on all versions of Python. 
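# patch_all() runs before anything else is imported so the stdlib-derived
# tests below exercise the monkey-patched environment; where the stdlib has
# no contextvars module we fall back to gevent.contextvars.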
from gevent import monkey monkey.patch_all() # pylint:disable=superfluous-parens,pointless-statement,not-callable # pylint:disable=unused-argument,too-many-public-methods,unused-variable # pylint:disable=too-many-branches,too-many-statements import concurrent.futures try: import contextvars except ImportError: from gevent import contextvars import functools # import gc import random import time import unittest # import weakref # try: # from _testcapi import hamt # except ImportError: # hamt = None hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): if not hasattr(unittest.TestCase, 'assertRaisesRegex'): assertRaisesRegex = unittest.TestCase.assertRaisesRegexp def test_context_var_new_1(self): with self.assertRaises(TypeError): contextvars.ContextVar() # gevent: Doesn't raise # with self.assertRaisesRegex(TypeError, 'must be a str'): # contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = contextvars.ContextVar('a', default=123) self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', repr(t)) # gevent: Doesn't raise # def test_context_subclassing_1(self): # with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): # class MyContextVar(contextvars.ContextVar): # # Potentially we might want ContextVars to be subclassable. 
# pass # with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): # class MyContext(contextvars.Context): # pass # with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): # class MyToken(contextvars.Token): # pass def test_context_new_1(self): with self.assertRaises(TypeError): contextvars.Context(1) with self.assertRaises(TypeError): contextvars.Context(1, a=1) with self.assertRaises(TypeError): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) # gevent: This doesn't raise # def test_context_run_1(self): # ctx = contextvars.Context() # with self.assertRaisesRegex(TypeError, 'missing 1 required'): # ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 42) ctx.run(fun) def test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') 
self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num with concurrent.futures.ThreadPoolExecutor(max_workers=10) as tp: results = list(tp.map(sub, range(10))) self.assertEqual(results, list(range(10))) # gevent: clases's can't be subscripted on Python 3.6 # def test_contextvar_getitem(self): # clss = contextvars.ContextVar # self.assertEqual(clss[str], clss) # HAMT Tests # class HashKey: # _crasher = None # def __init__(self, hash, name, error_on_eq_to=None): # assert hash != -1 # self.name = name # self.hash = hash # self.error_on_eq_to = error_on_eq_to # # def __repr__(self): # # return f'' # def __hash__(self): # if self._crasher is not None and self._crasher.error_on_hash: # raise HashingError # return self.hash # def 
__eq__(self, other): # if not isinstance(other, HashKey): # return NotImplemented # if self._crasher is not None and self._crasher.error_on_eq: # raise EqError # if self.error_on_eq_to is not None and self.error_on_eq_to is other: # raise ValueError#(f'cannot compare {self!r} to {other!r}') # if other.error_on_eq_to is not None and other.error_on_eq_to is self: # raise ValueError#(f'cannot compare {other!r} to {self!r}') # return (self.name, self.hash) == (other.name, other.hash) # class KeyStr(str): # def __hash__(self): # if HashKey._crasher is not None and HashKey._crasher.error_on_hash: # raise HashingError # return super().__hash__() # def __eq__(self, other): # if HashKey._crasher is not None and HashKey._crasher.error_on_eq: # raise EqError # return super().__eq__(other) # class HaskKeyCrasher: # def __init__(self, error_on_hash=False, error_on_eq=False): # self.error_on_hash = error_on_hash # self.error_on_eq = error_on_eq # def __enter__(self): # if HashKey._crasher is not None: # raise RuntimeError('cannot nest crashers') # HashKey._crasher = self # def __exit__(self, *exc): # HashKey._crasher = None # class HashingError(Exception): # pass # class EqError(Exception): # pass # @unittest.skipIf(hamt is None, '_testcapi lacks "hamt()" function') # class HamtTest(unittest.TestCase): # def test_hashkey_helper_1(self): # k1 = HashKey(10, 'aaa') # k2 = HashKey(10, 'bbb') # self.assertNotEqual(k1, k2) # self.assertEqual(hash(k1), hash(k2)) # d = dict() # d[k1] = 'a' # d[k2] = 'b' # self.assertEqual(d[k1], 'a') # self.assertEqual(d[k2], 'b') # def test_hamt_basics_1(self): # h = hamt() # h = None # NoQA # def test_hamt_basics_2(self): # h = hamt() # self.assertEqual(len(h), 0) # h2 = h.set('a', 'b') # self.assertIsNot(h, h2) # self.assertEqual(len(h), 0) # self.assertEqual(len(h2), 1) # self.assertIsNone(h.get('a')) # self.assertEqual(h.get('a', 42), 42) # self.assertEqual(h2.get('a'), 'b') # h3 = h2.set('b', 10) # self.assertIsNot(h2, h3) # self.assertEqual(len(h), 0) # self.assertEqual(len(h2), 1) # self.assertEqual(len(h3), 2) # self.assertEqual(h3.get('a'), 'b') # self.assertEqual(h3.get('b'), 10) # self.assertIsNone(h.get('b')) # self.assertIsNone(h2.get('b')) # self.assertIsNone(h.get('a')) # self.assertEqual(h2.get('a'), 'b') # h = h2 = h3 = None # def test_hamt_basics_3(self): # h = hamt() # o = object() # h1 = h.set('1', o) # h2 = h1.set('1', o) # self.assertIs(h1, h2) # def test_hamt_basics_4(self): # h = hamt() # h1 = h.set('key', []) # h2 = h1.set('key', []) # self.assertIsNot(h1, h2) # self.assertEqual(len(h1), 1) # self.assertEqual(len(h2), 1) # self.assertIsNot(h1.get('key'), h2.get('key')) # def test_hamt_collision_1(self): # k1 = HashKey(10, 'aaa') # k2 = HashKey(10, 'bbb') # k3 = HashKey(10, 'ccc') # h = hamt() # h2 = h.set(k1, 'a') # h3 = h2.set(k2, 'b') # self.assertEqual(h.get(k1), None) # self.assertEqual(h.get(k2), None) # self.assertEqual(h2.get(k1), 'a') # self.assertEqual(h2.get(k2), None) # self.assertEqual(h3.get(k1), 'a') # self.assertEqual(h3.get(k2), 'b') # h4 = h3.set(k2, 'cc') # h5 = h4.set(k3, 'aa') # self.assertEqual(h3.get(k1), 'a') # self.assertEqual(h3.get(k2), 'b') # self.assertEqual(h4.get(k1), 'a') # self.assertEqual(h4.get(k2), 'cc') # self.assertEqual(h4.get(k3), None) # self.assertEqual(h5.get(k1), 'a') # self.assertEqual(h5.get(k2), 'cc') # self.assertEqual(h5.get(k2), 'cc') # self.assertEqual(h5.get(k3), 'aa') # self.assertEqual(len(h), 0) # self.assertEqual(len(h2), 1) # self.assertEqual(len(h3), 2) # self.assertEqual(len(h4), 2) # 
self.assertEqual(len(h5), 3) # def test_hamt_stress(self): # COLLECTION_SIZE = 7000 # TEST_ITERS_EVERY = 647 # CRASH_HASH_EVERY = 97 # CRASH_EQ_EVERY = 11 # RUN_XTIMES = 3 # for _ in range(RUN_XTIMES): # h = hamt() # d = dict() # for i in range(COLLECTION_SIZE): # key = KeyStr(i) # if not (i % CRASH_HASH_EVERY): # with HaskKeyCrasher(error_on_hash=True): # with self.assertRaises(HashingError): # h.set(key, i) # h = h.set(key, i) # if not (i % CRASH_EQ_EVERY): # with HaskKeyCrasher(error_on_eq=True): # with self.assertRaises(EqError): # h.get(KeyStr(i)) # really trigger __eq__ # d[key] = i # self.assertEqual(len(d), len(h)) # if not (i % TEST_ITERS_EVERY): # self.assertEqual(set(h.items()), set(d.items())) # self.assertEqual(len(h.items()), len(d.items())) # self.assertEqual(len(h), COLLECTION_SIZE) # for key in range(COLLECTION_SIZE): # self.assertEqual(h.get(KeyStr(key), 'not found'), key) # keys_to_delete = list(range(COLLECTION_SIZE)) # random.shuffle(keys_to_delete) # for iter_i, i in enumerate(keys_to_delete): # key = KeyStr(i) # if not (iter_i % CRASH_HASH_EVERY): # with HaskKeyCrasher(error_on_hash=True): # with self.assertRaises(HashingError): # h.delete(key) # if not (iter_i % CRASH_EQ_EVERY): # with HaskKeyCrasher(error_on_eq=True): # with self.assertRaises(EqError): # h.delete(KeyStr(i)) # h = h.delete(key) # self.assertEqual(h.get(key, 'not found'), 'not found') # del d[key] # self.assertEqual(len(d), len(h)) # if iter_i == COLLECTION_SIZE // 2: # hm = h # dm = d.copy() # if not (iter_i % TEST_ITERS_EVERY): # self.assertEqual(set(h.keys()), set(d.keys())) # self.assertEqual(len(h.keys()), len(d.keys())) # self.assertEqual(len(d), 0) # self.assertEqual(len(h), 0) # # ============ # for key in dm: # self.assertEqual(hm.get(str(key)), dm[key]) # self.assertEqual(len(dm), len(hm)) # for i, key in enumerate(keys_to_delete): # hm = hm.delete(str(key)) # self.assertEqual(hm.get(str(key), 'not found'), 'not found') # dm.pop(str(key), None) # self.assertEqual(len(d), len(h)) # if not (i % TEST_ITERS_EVERY): # self.assertEqual(set(h.values()), set(d.values())) # self.assertEqual(len(h.values()), len(d.values())) # self.assertEqual(len(d), 0) # self.assertEqual(len(h), 0) # self.assertEqual(list(h.items()), []) # def test_hamt_delete_1(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(102, 'C') # D = HashKey(103, 'D') # E = HashKey(104, 'E') # Z = HashKey(-100, 'Z') # Er = HashKey(103, 'Er', error_on_eq_to=D) # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # orig_len = len(h) # # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # # : 'a' # # : 'b' # # : 'c' # # : 'd' # # : 'e' # h = h.delete(C) # self.assertEqual(len(h), orig_len - 1) # with self.assertRaisesRegex(ValueError, 'cannot compare'): # h.delete(Er) # h = h.delete(D) # self.assertEqual(len(h), orig_len - 2) # h2 = h.delete(Z) # self.assertIs(h2, h) # h = h.delete(A) # self.assertEqual(len(h), orig_len - 3) # self.assertEqual(h.get(A, 42), 42) # self.assertEqual(h.get(B), 'b') # self.assertEqual(h.get(E), 'e') # def test_hamt_delete_2(self): # A = HashKey(100, 'A') # B = HashKey(201001, 'B') # C = HashKey(101001, 'C') # D = HashKey(103, 'D') # E = HashKey(104, 'E') # Z = HashKey(-100, 'Z') # Er = HashKey(201001, 'Er', error_on_eq_to=B) # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # orig_len = len(h) # # BitmapNode(size=8 bitmap=0b1110010000): # # : 'a' # # : 'd' # # : 'e' # 
# NULL: # # BitmapNode(size=4 bitmap=0b100000000001000000000): # # : 'b' # # : 'c' # with self.assertRaisesRegex(ValueError, 'cannot compare'): # h.delete(Er) # h = h.delete(Z) # self.assertEqual(len(h), orig_len) # h = h.delete(C) # self.assertEqual(len(h), orig_len - 1) # h = h.delete(B) # self.assertEqual(len(h), orig_len - 2) # h = h.delete(A) # self.assertEqual(len(h), orig_len - 3) # self.assertEqual(h.get(D), 'd') # self.assertEqual(h.get(E), 'e') # h = h.delete(A) # h = h.delete(B) # h = h.delete(D) # h = h.delete(E) # self.assertEqual(len(h), 0) # def test_hamt_delete_3(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(100100, 'C') # D = HashKey(100100, 'D') # E = HashKey(104, 'E') # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # orig_len = len(h) # # BitmapNode(size=6 bitmap=0b100110000): # # NULL: # # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # # : 'a' # # NULL: # # CollisionNode(size=4 id=0x108572410): # # : 'c' # # : 'd' # # : 'b' # # : 'e' # h = h.delete(A) # self.assertEqual(len(h), orig_len - 1) # h = h.delete(E) # self.assertEqual(len(h), orig_len - 2) # self.assertEqual(h.get(C), 'c') # self.assertEqual(h.get(B), 'b') # def test_hamt_delete_4(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(100100, 'C') # D = HashKey(100100, 'D') # E = HashKey(100100, 'E') # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # orig_len = len(h) # # BitmapNode(size=4 bitmap=0b110000): # # NULL: # # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # # : 'a' # # NULL: # # CollisionNode(size=6 id=0x10515ef30): # # : 'c' # # : 'd' # # : 'e' # # : 'b' # h = h.delete(D) # self.assertEqual(len(h), orig_len - 1) # h = h.delete(E) # self.assertEqual(len(h), orig_len - 2) # h = h.delete(C) # self.assertEqual(len(h), orig_len - 3) # h = h.delete(A) # self.assertEqual(len(h), orig_len - 4) # h = h.delete(B) # self.assertEqual(len(h), 0) # def test_hamt_delete_5(self): # h = hamt() # keys = [] # for i in range(17): # key = HashKey(i, str(i)) # keys.append(key) # h = h.set(key, 'val-{i}'.format(i=i)) # collision_key16 = HashKey(16, '18') # h = h.set(collision_key16, 'collision') # # ArrayNode(id=0x10f8b9318): # # 0:: # # BitmapNode(size=2 count=1 bitmap=0b1): # # : 'val-0' # # # # ... 14 more BitmapNodes ... 
# # # # 15:: # # BitmapNode(size=2 count=1 bitmap=0b1): # # : 'val-15' # # # # 16:: # # BitmapNode(size=2 count=1 bitmap=0b1): # # NULL: # # CollisionNode(size=4 id=0x10f2f5af8): # # : 'val-16' # # : 'collision' # self.assertEqual(len(h), 18) # h = h.delete(keys[2]) # self.assertEqual(len(h), 17) # h = h.delete(collision_key16) # self.assertEqual(len(h), 16) # h = h.delete(keys[16]) # self.assertEqual(len(h), 15) # h = h.delete(keys[1]) # self.assertEqual(len(h), 14) # h = h.delete(keys[1]) # self.assertEqual(len(h), 14) # for key in keys: # h = h.delete(key) # self.assertEqual(len(h), 0) # def test_hamt_items_1(self): # A = HashKey(100, 'A') # B = HashKey(201001, 'B') # C = HashKey(101001, 'C') # D = HashKey(103, 'D') # E = HashKey(104, 'E') # F = HashKey(110, 'F') # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # h = h.set(F, 'f') # it = h.items() # self.assertEqual( # set(list(it)), # {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) # def test_hamt_items_2(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(100100, 'C') # D = HashKey(100100, 'D') # E = HashKey(100100, 'E') # F = HashKey(110, 'F') # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # h = h.set(F, 'f') # it = h.items() # self.assertEqual( # set(list(it)), # {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) # def test_hamt_keys_1(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(100100, 'C') # D = HashKey(100100, 'D') # E = HashKey(100100, 'E') # F = HashKey(110, 'F') # h = hamt() # h = h.set(A, 'a') # h = h.set(B, 'b') # h = h.set(C, 'c') # h = h.set(D, 'd') # h = h.set(E, 'e') # h = h.set(F, 'f') # self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) # self.assertEqual(set(list(h)), {A, B, C, D, E, F}) # def test_hamt_items_3(self): # h = hamt() # self.assertEqual(len(h.items()), 0) # self.assertEqual(list(h.items()), []) # def test_hamt_eq_1(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # C = HashKey(100100, 'C') # D = HashKey(100100, 'D') # E = HashKey(120, 'E') # h1 = hamt() # h1 = h1.set(A, 'a') # h1 = h1.set(B, 'b') # h1 = h1.set(C, 'c') # h1 = h1.set(D, 'd') # h2 = hamt() # h2 = h2.set(A, 'a') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.set(B, 'b') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.set(C, 'c') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.set(D, 'd2') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.set(D, 'd') # self.assertTrue(h1 == h2) # self.assertFalse(h1 != h2) # h2 = h2.set(E, 'e') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.delete(D) # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # h2 = h2.set(E, 'd') # self.assertFalse(h1 == h2) # self.assertTrue(h1 != h2) # def test_hamt_eq_2(self): # A = HashKey(100, 'A') # Er = HashKey(100, 'Er', error_on_eq_to=A) # h1 = hamt() # h1 = h1.set(A, 'a') # h2 = hamt() # h2 = h2.set(Er, 'a') # with self.assertRaisesRegex(ValueError, 'cannot compare'): # h1 == h2 # with self.assertRaisesRegex(ValueError, 'cannot compare'): # h1 != h2 # def test_hamt_gc_1(self): # A = HashKey(100, 'A') # h = hamt() # h = h.set(0, 0) # empty HAMT node is memoized in hamt.c # ref = weakref.ref(h) # a = [] # a.append(a) # a.append(h) # b = [] # a.append(b) # b.append(a) # h = h.set(A, b) # del h, a, b # gc.collect() # gc.collect() # gc.collect() # self.assertIsNone(ref()) # 
def test_hamt_gc_2(self): # A = HashKey(100, 'A') # B = HashKey(101, 'B') # h = hamt() # h = h.set(A, 'a') # h = h.set(A, h) # ref = weakref.ref(h) # hi = h.items() # next(hi) # del h, hi # gc.collect() # gc.collect() # gc.collect() # self.assertIsNone(ref()) # def test_hamt_in_1(self): # A = HashKey(100, 'A') # AA = HashKey(100, 'A') # B = HashKey(101, 'B') # h = hamt() # h = h.set(A, 1) # self.assertTrue(A in h) # self.assertFalse(B in h) # with self.assertRaises(EqError): # with HaskKeyCrasher(error_on_eq=True): # AA in h # with self.assertRaises(HashingError): # with HaskKeyCrasher(error_on_hash=True): # AA in h # def test_hamt_getitem_1(self): # A = HashKey(100, 'A') # AA = HashKey(100, 'A') # B = HashKey(101, 'B') # h = hamt() # h = h.set(A, 1) # self.assertEqual(h[A], 1) # self.assertEqual(h[AA], 1) # with self.assertRaises(KeyError): # h[B] # with self.assertRaises(EqError): # with HaskKeyCrasher(error_on_eq=True): # h[AA] # with self.assertRaises(HashingError): # with HaskKeyCrasher(error_on_hash=True): # h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/gevent/tests/test__core.py000066400000000000000000000127621471441230600211030ustar00rootroot00000000000000 from __future__ import absolute_import, print_function, division # Important: This file should have no dependencies that are part of the # ``test`` extra, because it is sometimes run for quick checks without those # installed. import unittest import sys import gevent.testing as greentest from gevent._config import Loop available_loops = Loop().get_options() available_loops.pop('libuv', None) def not_available(name): return isinstance(available_loops[name], ImportError) class WatcherTestMixin(object): kind = None def _makeOne(self): return self.kind(default=False) # pylint:disable=not-callable def destroyOne(self, loop): loop.destroy() def setUp(self): self.loop = self._makeOne() self.core = sys.modules[self.kind.__module__] def tearDown(self): self.destroyOne(self.loop) del self.loop def test_get_version(self): version = self.core.get_version() # pylint: disable=no-member self.assertIsInstance(version, str) self.assertTrue(version) header_version = self.core.get_header_version() # pylint: disable=no-member self.assertIsInstance(header_version, str) self.assertTrue(header_version) self.assertEqual(version, header_version) def test_events_conversion(self): self.assertEqual(self.core._events_to_str(self.core.READ | self.core.WRITE), # pylint: disable=no-member 'READ|WRITE') def test_EVENTS(self): self.assertEqual(str(self.core.EVENTS), # pylint: disable=no-member 'gevent.core.EVENTS') self.assertEqual(repr(self.core.EVENTS), # pylint: disable=no-member 'gevent.core.EVENTS') def test_io(self): if greentest.WIN: # libev raises IOError, libuv raises ValueError Error = (IOError, ValueError) else: Error = ValueError with self.assertRaises(Error): self.loop.io(-1, 1) if hasattr(self.core, 'TIMER'): # libev with self.assertRaises(ValueError): self.loop.io(1, self.core.TIMER) # pylint:disable=no-member # Test we can set events and io before it's started if not greentest.WIN: # We can't do this with arbitrary FDs on windows; # see libev_vfd.h io = self.loop.io(1, self.core.READ) # pylint:disable=no-member io.fd = 2 self.assertEqual(io.fd, 2) io.events = self.core.WRITE # pylint:disable=no-member if not hasattr(self.core, 'libuv'): # libev # pylint:disable=no-member self.assertEqual(self.core._events_to_str(io.events), 'WRITE|_IOFDSET') else: self.assertEqual(self.core._events_to_str(io.events), # 
pylint:disable=no-member 'WRITE') io.start(lambda: None) io.close() def test_timer_constructor(self): with self.assertRaises(ValueError): self.loop.timer(1, -1) def test_signal_constructor(self): with self.assertRaises(ValueError): self.loop.signal(1000) class LibevTestMixin(WatcherTestMixin): def test_flags_conversion(self): # pylint: disable=no-member core = self.core if not greentest.WIN: self.assertEqual(core.loop(2, default=False).backend_int, 2) self.assertEqual(core.loop('select', default=False).backend, 'select') self.assertEqual(core._flags_to_int(None), 0) self.assertEqual(core._flags_to_int(['kqueue', 'SELECT']), core.BACKEND_KQUEUE | core.BACKEND_SELECT) self.assertEqual(core._flags_to_list(core.BACKEND_PORT | core.BACKEND_POLL), ['port', 'poll']) self.assertRaises(ValueError, core.loop, ['port', 'blabla']) self.assertRaises(TypeError, core.loop, object()) @unittest.skipIf(not_available('libev-cext'), "Needs libev-cext") class TestLibevCext(LibevTestMixin, unittest.TestCase): kind = available_loops['libev-cext'] @unittest.skipIf(not_available('libev-cffi'), "Needs libev-cffi") class TestLibevCffi(LibevTestMixin, unittest.TestCase): kind = available_loops['libev-cffi'] @unittest.skipIf(not_available('libuv-cffi'), "Needs libuv-cffi") class TestLibuvCffi(WatcherTestMixin, unittest.TestCase): kind = available_loops['libuv-cffi'] @greentest.skipOnLibev("libuv-specific") @greentest.skipOnWindows("Destroying the loop somehow fails") def test_io_multiplex_events(self): # pylint:disable=no-member import socket sock = socket.socket() fd = sock.fileno() core = self.core read = self.loop.io(fd, core.READ) write = self.loop.io(fd, core.WRITE) try: real_watcher = read._watcher_ref read.start(lambda: None) self.assertEqual(real_watcher.events, core.READ) write.start(lambda: None) self.assertEqual(real_watcher.events, core.READ | core.WRITE) write.stop() self.assertEqual(real_watcher.events, core.READ) write.start(lambda: None) self.assertEqual(real_watcher.events, core.READ | core.WRITE) read.stop() self.assertEqual(real_watcher.events, core.WRITE) write.stop() self.assertEqual(real_watcher.events, 0) finally: read.close() write.close() sock.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__core_async.py000066400000000000000000000013711471441230600222720ustar00rootroot00000000000000from __future__ import print_function import gevent import gevent.core import time try: import thread except ImportError: import _thread as thread from gevent import testing as greentest class Test(greentest.TestCase): def test(self): hub = gevent.get_hub() watcher = hub.loop.async_() # BWC for <3.7: This should still be an attribute assert hasattr(hub.loop, 'async') gevent.spawn_later(0.1, thread.start_new_thread, watcher.send, ()) start = time.time() with gevent.Timeout(1.0): # Large timeout for appveyor hub.wait(watcher) print('Watcher %r reacted after %.6f seconds' % (watcher, time.time() - start - 0.1)) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__core_callback.py000066400000000000000000000011521471441230600227060ustar00rootroot00000000000000import gevent from gevent.hub import get_hub from gevent import testing as greentest class Test(greentest.TestCase): def test(self): loop = get_hub().loop called = [] def f(): called.append(1) x = loop.run_callback(f) assert x, x gevent.sleep(0) assert called == [1], called assert not x, (x, bool(x)) x = loop.run_callback(f) assert x, x x.stop() assert not x, x gevent.sleep(0) assert called == 
[1], called assert not x, x if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__core_fork.py000066400000000000000000000052621471441230600221210ustar00rootroot00000000000000from __future__ import print_function from gevent import monkey monkey.patch_all() import os import unittest import multiprocessing import gevent hub = gevent.get_hub() pid = os.getpid() newpid = None def on_fork(): global newpid newpid = os.getpid() fork_watcher = hub.loop.fork(ref=False) fork_watcher.start(on_fork) def in_child(q): # libev only calls fork callbacks at the beginning of # the loop; we use callbacks extensively so it takes *two* # calls to sleep (with a timer) to actually get wrapped # around to the beginning of the loop. gevent.sleep(0.001) gevent.sleep(0.001) q.put(newpid) class Test(unittest.TestCase): def test(self): self.assertEqual(hub.threadpool.size, 0) # Use a thread to make us multi-threaded hub.threadpool.apply(lambda: None) self.assertEqual(hub.threadpool.size, 1) # Not all platforms use fork by default, so we want to force it, # where possible. The test is still useful even if we can't # fork though. try: fork_ctx = multiprocessing.get_context('fork') except (AttributeError, ValueError): # ValueError if fork isn't supported. # AttributeError on Python 2, which doesn't have get_context fork_ctx = multiprocessing # If the Queue is global, q.get() hangs on Windows; must pass as # an argument. q = fork_ctx.Queue() p = fork_ctx.Process(target=in_child, args=(q,)) p.start() p.join() p_val = q.get() self.assertIsNone( newpid, "The fork watcher ran in the parent for some reason." ) self.assertIsNotNone( p_val, "The child process returned nothing, meaning the fork watcher didn't run in the child." ) self.assertNotEqual(p_val, pid) assert p_val != pid if __name__ == '__main__': # Must call for Windows to fork properly; the fork can't be in the top-level multiprocessing.freeze_support() # fork watchers weren't firing in multi-threading processes. # This test is designed to prove that they are. # However, it fails on Windows: The fork watcher never runs! # This makes perfect sense: on Windows, our patches to os.fork() # that call gevent.hub.reinit() don't get used; os.fork doesn't # exist and multiprocessing.Process uses the windows-specific _subprocess.CreateProcess() # to create a whole new process that has no relation to the current process; # that process then calls multiprocessing.forking.main() to do its work. # Since no state is shared, a fork watcher cannot exist in that process. 
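    # A minimal, purely illustrative sketch (never called by this test): on
    # POSIX platforms gevent's patched os.fork() calls gevent.hub.reinit() in
    # the child, which is what lets fork watchers like the one registered at
    # the top of this file fire there. Code that forks without the monkey
    # patch can do the same step by hand; the helper below only sketches that
    # and assumes a platform where os.fork() exists.
    def _sketch_manual_reinit_after_fork():
        import os
        import gevent.hub

        pid = os.fork()
        if pid == 0:
            # Child: refresh the hub/loop state, as the patched fork() would.
            gevent.hub.reinit()
        return pid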
unittest.main() gevent-24.11.1/src/gevent/tests/test__core_loop_run.py000066400000000000000000000007561471441230600230200ustar00rootroot00000000000000from __future__ import print_function import sys from gevent import core from gevent import signal_handler as signal loop = core.loop(default=False) signal = signal(2, sys.stderr.write, 'INTERRUPT!') print('must exit immediately...') loop.run() # must exit immediately print('...and once more...') loop.run() # repeating does not fail print('..done') print('must exit after 0.5 seconds.') timer = loop.timer(0.5) timer.start(lambda: None) loop.run() timer.close() loop.destroy() del loop gevent-24.11.1/src/gevent/tests/test__core_stat.py000066400000000000000000000072521471441230600221340ustar00rootroot00000000000000from __future__ import print_function import os import tempfile import time import gevent import gevent.core import gevent.testing as greentest import gevent.testing.flaky #pylint: disable=protected-access DELAY = 0.5 WIN = greentest.WIN LIBUV = greentest.LIBUV class TestCoreStat(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT def setUp(self): super(TestCoreStat, self).setUp() fd, path = tempfile.mkstemp(suffix='.gevent_test_core_stat') os.close(fd) self.temp_path = path self.hub = gevent.get_hub() # If we don't specify an interval, we default to zero. # libev interprets that as meaning to use its default interval, # which is about 5 seconds. If we go below it's minimum check # threshold, it bumps it up to the minimum. self.watcher = self.hub.loop.stat(self.temp_path, interval=-1) def tearDown(self): self.watcher.close() if os.path.exists(self.temp_path): os.unlink(self.temp_path) super(TestCoreStat, self).tearDown() def _write(self): with open(self.temp_path, 'wb', buffering=0) as f: f.write(b'x') def _check_attr(self, name, none): # Deals with the complex behaviour of the 'attr' and 'prev' # attributes on Windows. This codifies it, rather than simply letting # the test fail, so we know exactly when and what changes it. try: x = getattr(self.watcher, name) except ImportError: if WIN: # the 'posix' module is not available pass else: raise else: if WIN and not LIBUV: # The ImportError is only raised for the first time; # after that, the attribute starts returning None self.assertIsNone(x, "Only None is supported on Windows") if none: self.assertIsNone(x, name) else: self.assertIsNotNone(x, name) def _wait_on_greenlet(self, func, *greenlet_args): start = time.time() self.hub.loop.update_now() greenlet = gevent.spawn_later(DELAY, func, *greenlet_args) with gevent.Timeout(5 + DELAY + 0.5): self.hub.wait(self.watcher) now = time.time() self.assertGreaterEqual(now, start, "Time must move forward") wait_duration = now - start reaction = wait_duration - DELAY if reaction <= 0.0: # Sigh. This is especially true on PyPy on Windows raise gevent.testing.flaky.FlakyTestRaceCondition( "Bad timer resolution (on Windows?), test is useless. 
Start %s, now %s" % (start, now)) self.assertGreaterEqual( reaction, 0.0, 'Watcher %s reacted too early: %.3fs' % (self.watcher, reaction)) greenlet.join() def test_watcher_basics(self): watcher = self.watcher filename = self.temp_path self.assertEqual(watcher.path, filename) filenames = filename if isinstance(filename, bytes) else filename.encode('ascii') self.assertEqual(watcher._paths, filenames) self.assertEqual(watcher.interval, -1) def test_write(self): self._wait_on_greenlet(self._write) self._check_attr('attr', False) self._check_attr('prev', False) # The watcher interval changed after it started; -1 is illegal self.assertNotEqual(self.watcher.interval, -1) def test_unlink(self): self._wait_on_greenlet(os.unlink, self.temp_path) self._check_attr('attr', True) self._check_attr('prev', False) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__core_timer.py000066400000000000000000000103521471441230600222740ustar00rootroot00000000000000from __future__ import print_function from gevent import config import gevent.testing as greentest from gevent.testing import TestCase from gevent.testing import LARGE_TIMEOUT from gevent.testing.sysinfo import CFFI_BACKEND from gevent.testing.flaky import reraises_flaky_timeout class Test(TestCase): __timeout__ = LARGE_TIMEOUT repeat = 0 timer_duration = 0.001 def setUp(self): super(Test, self).setUp() self.called = [] self.loop = config.loop(default=False) self.timer = self.loop.timer(self.timer_duration, repeat=self.repeat) assert not self.loop.default def cleanup(self): # cleanup instead of tearDown to cooperate well with # leakcheck.py self.timer.close() # cycle the loop so libuv close callbacks fire self.loop.run() self.loop.destroy() self.loop = None self.timer = None def f(self, x=None): self.called.append(1) if x is not None: x.stop() def assertTimerInKeepalive(self): if CFFI_BACKEND: self.assertIn(self.timer, self.loop._keepaliveset) def assertTimerNotInKeepalive(self): if CFFI_BACKEND: self.assertNotIn(self.timer, self.loop._keepaliveset) def test_main(self): loop = self.loop x = self.timer x.start(self.f) self.assertTimerInKeepalive() self.assertTrue(x.active, x) with self.assertRaises((AttributeError, ValueError)): x.priority = 1 loop.run() self.assertEqual(x.pending, 0) self.assertEqual(self.called, [1]) self.assertIsNone(x.callback) self.assertIsNone(x.args) if x.priority is not None: self.assertEqual(x.priority, 0) x.priority = 1 self.assertEqual(x.priority, 1) x.stop() self.assertTimerNotInKeepalive() class TestAgain(Test): repeat = 1 def test_main(self): # Again works for a new timer x = self.timer x.again(self.f, x) self.assertTimerInKeepalive() self.assertEqual(x.args, (x,)) # XXX: On libev, this takes 1 second. On libuv, # it takes the expected time. self.loop.run() self.assertEqual(self.called, [1]) x.stop() self.assertTimerNotInKeepalive() class TestTimerResolution(Test): # On CI, with *all* backends, sometimes we get timer values of # 0.02 or higher. @reraises_flaky_timeout(AssertionError) def test_resolution(self): # pylint:disable=too-many-locals # Make sure that having an active IO watcher # doesn't badly throw off our timer resolution. 
# (This was a specific problem with libuv) # https://github.com/gevent/gevent/pull/1194 from gevent._compat import perf_counter import socket s = socket.socket() self._close_on_teardown(s) fd = s.fileno() ran_at_least_once = False fired_at = [] def timer_counter(): fired_at.append(perf_counter()) loop = self.loop timer_multiplier = 11 max_time = self.timer_duration * timer_multiplier assert max_time < 0.3 for _ in range(150): # in libuv, our signal timer fires every 300ms; depending on # when this runs, we could artificially get a better # resolution than we expect. Run it multiple times to be more sure. io = loop.io(fd, 1) io.start(lambda events=None: None) now = perf_counter() del fired_at[:] timer = self.timer timer.start(timer_counter) loop.run(once=True) io.stop() io.close() timer.stop() if fired_at: ran_at_least_once = True self.assertEqual(1, len(fired_at)) self.assertTimeWithinRange(fired_at[0] - now, 0, max_time) if not greentest.RUNNING_ON_CI: # Hmm, this always fires locally on mocOS but # not an Travis? self.assertTrue(ran_at_least_once) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__core_watcher.py000066400000000000000000000067651471441230600226260ustar00rootroot00000000000000from __future__ import absolute_import, print_function import gevent.testing as greentest from gevent import config from gevent.testing.sysinfo import CFFI_BACKEND from gevent.core import READ # pylint:disable=no-name-in-module from gevent.core import WRITE # pylint:disable=no-name-in-module class Test(greentest.TestCase): __timeout__ = None def setUp(self): super(Test, self).setUp() self.loop = config.loop(default=False) self.timer = self.loop.timer(0.01) def tearDown(self): if self.timer is not None: self.timer.close() if self.loop is not None: self.loop.destroy() self.loop = self.timer = None super(Test, self).tearDown() def test_non_callable_to_start(self): # test that cannot pass non-callable thing to start() self.assertRaises(TypeError, self.timer.start, None) self.assertRaises(TypeError, self.timer.start, 5) def test_non_callable_after_start(self): # test that cannot set 'callback' to non-callable thing later either lst = [] timer = self.timer timer.start(lst.append) with self.assertRaises(TypeError): timer.callback = False with self.assertRaises(TypeError): timer.callback = 5 def test_args_can_be_changed_after_start(self): lst = [] timer = self.timer self.timer.start(lst.append) self.assertEqual(timer.args, ()) timer.args = (1, 2, 3) self.assertEqual(timer.args, (1, 2, 3)) # Only tuple can be args with self.assertRaises(TypeError): timer.args = 5 with self.assertRaises(TypeError): timer.args = [4, 5] self.assertEqual(timer.args, (1, 2, 3)) # None also works, means empty tuple # XXX why? timer.args = None self.assertEqual(timer.args, None) def test_run(self): loop = self.loop lst = [] self.timer.start(lambda *args: lst.append(args)) loop.run() loop.update_now() self.assertEqual(lst, [()]) # Even if we lose all references to it, the ref in the callback # keeps it alive self.timer.start(reset, self.timer, lst) self.timer = None loop.run() self.assertEqual(lst, [(), 25]) def test_invalid_fd(self): loop = self.loop # Negative case caught everywhere. 
ValueError # on POSIX, OSError on Windows Py3, IOError on Windows Py2 with self.assertRaises((ValueError, OSError, IOError)): loop.io(-1, READ) @greentest.skipOnWindows("Stdout can't be watched on Win32") def test_reuse_io(self): loop = self.loop # Watchers aren't reused once all outstanding # refs go away BUT THEY MUST BE CLOSED tty_watcher = loop.io(1, WRITE) watcher_handle = tty_watcher._watcher if CFFI_BACKEND else tty_watcher tty_watcher.close() del tty_watcher # XXX: Note there is a cycle in the CFFI code # from watcher_handle._handle -> watcher_handle. # So it doesn't go away until a GC runs. import gc gc.collect() tty_watcher = loop.io(1, WRITE) self.assertIsNot(tty_watcher._watcher if CFFI_BACKEND else tty_watcher, watcher_handle) tty_watcher.close() def reset(watcher, lst): watcher.args = None watcher.callback = lambda: None lst.append(25) watcher.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__destroy.py000066400000000000000000000033451471441230600216410ustar00rootroot00000000000000from __future__ import absolute_import, print_function import gevent import unittest class TestDestroyHub(unittest.TestCase): def test_destroy_hub(self): # Loop of initial Hub is default loop. hub = gevent.get_hub() self.assertTrue(hub.loop.default) # Save `gevent.core.loop` object for later comparison. initloop = hub.loop # Increase test complexity via threadpool creation. # Implicitly creates fork watcher connected to the current event loop. tp = hub.threadpool self.assertIsNotNone(tp) # Destroy hub. Does not destroy libev default loop if not explicitly told to. hub.destroy() # Create new hub. Must re-use existing libev default loop. hub = gevent.get_hub() self.assertTrue(hub.loop.default) # Ensure that loop object is identical to the initial one. self.assertIs(hub.loop, initloop) # Destroy hub including default loop. hub.destroy(destroy_loop=True) # Create new hub and explicitly request creation of a new default loop. # (using default=True, but that's no longer possible.) hub = gevent.get_hub() self.assertTrue(hub.loop.default) # `gevent.core.loop` objects as well as libev loop pointers must differ. self.assertIsNot(hub.loop, initloop) self.assertIsNot(hub.loop.ptr, initloop.ptr) self.assertNotEqual(hub.loop.ptr, initloop.ptr) # Destroy hub including default loop. The default loop regenerates. hub.destroy(destroy_loop=True) hub = gevent.get_hub() self.assertTrue(hub.loop.default) hub.destroy() if __name__ == '__main__': unittest.main() # pragma: testrunner-no-combine gevent-24.11.1/src/gevent/tests/test__destroy_default_loop.py000066400000000000000000000042271471441230600243760ustar00rootroot00000000000000from __future__ import print_function import gevent import unittest class TestDestroyDefaultLoop(unittest.TestCase): def tearDown(self): self._reset_hub() super(TestDestroyDefaultLoop, self).tearDown() def _reset_hub(self): from gevent._hub_local import set_hub from gevent._hub_local import set_loop from gevent._hub_local import get_hub_if_exists hub = get_hub_if_exists() if hub is not None: hub.destroy(destroy_loop=True) set_hub(None) set_loop(None) def test_destroy_gc(self): # Issue 1098: destroying the default loop # while using the C extension could crash # the interpreter when it exits # Create the hub greenlet. This creates one loop # object pointing to the default loop. 
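        # (gevent.get_hub() lazily creates the hub, and therefore a loop
        # object, the first time it is called in a given thread.)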
gevent.get_hub() # Get a new loop object, but using the default # C loop loop = gevent.config.loop(default=True) self.assertTrue(loop.default) # Destroy it loop.destroy() # It no longer claims to be the default self.assertFalse(loop.default) # Delete it del loop # Delete the hub. This prompts garbage # collection of it and its loop object. # (making this test more repeatable; the exit # crash only happened when that greenlet object # was collected at exit time, which was most common # in CPython 3.5) self._reset_hub() def test_destroy_two(self): # Get two new loop object, but using the default # C loop loop1 = gevent.config.loop(default=True) loop2 = gevent.config.loop(default=True) self.assertTrue(loop1.default) self.assertTrue(loop2.default) # Destroy the first loop1.destroy() # It no longer claims to be the default self.assertFalse(loop1.default) # Destroy the second. This doesn't crash. loop2.destroy() self.assertFalse(loop2.default) self.assertFalse(loop2.ptr) self._reset_hub() self.assertTrue(gevent.get_hub().loop.ptr) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__doctests.py000066400000000000000000000070351471441230600220000ustar00rootroot00000000000000from __future__ import print_function import doctest import functools import os import re import sys import unittest # Ignore tracebacks: ZeroDivisionError def myfunction(*_args, **_kwargs): pass class RENormalizingOutputChecker(doctest.OutputChecker): """ Pattern-normalizing output checker. Inspired by one used in zope.testing. """ def __init__(self, patterns): self.transformers = [functools.partial(re.sub, replacement) for re, replacement in patterns] def check_output(self, want, got, optionflags): if got == want: return True for transformer in self.transformers: want = transformer(want) got = transformer(got) return doctest.OutputChecker.check_output(self, want, got, optionflags) FORBIDDEN_MODULES = set() class Modules(object): def __init__(self, allowed_modules): from gevent.testing import walk_modules self.allowed_modules = allowed_modules self.modules = set() for path, module in walk_modules(recursive=True): self.add_module(module, path) def add_module(self, name, path): if self.allowed_modules and name not in self.allowed_modules: return if name in FORBIDDEN_MODULES: return self.modules.add((name, path)) def __bool__(self): return bool(self.modules) __nonzero__ = __bool__ def __iter__(self): return iter(self.modules) def main(): # pylint:disable=too-many-locals cwd = os.getcwd() # Use pure_python to get the correct module source and docstrings os.environ['PURE_PYTHON'] = '1' import gevent from gevent import socket from gevent.testing import util from gevent.testing import sysinfo if sysinfo.WIN: FORBIDDEN_MODULES.update({ # Uses commands only found on posix 'gevent.subprocess', }) try: allowed_modules = sys.argv[1:] sys.path.append('.') globs = { 'myfunction': myfunction, 'gevent': gevent, 'socket': socket, } modules = Modules(allowed_modules) if not modules: sys.exit('No modules found matching %s' % ' '.join(allowed_modules)) suite = unittest.TestSuite() checker = RENormalizingOutputChecker(( # Normalize subprocess.py: BSD ls is in the example, gnu ls outputs # 'cannot access' (re.compile( "ls: cannot access 'non_existent_file': No such file or directory"), "ls: non_existent_file: No such file or directory"), # Python 3 bytes add a "b". 
(re.compile(r'b(".*?")'), r"\1"), (re.compile(r"b('.*?')"), r"\1"), )) tests_count = 0 modules_count = 0 for m, path in sorted(modules): with open(path, 'rb') as f: contents = f.read() if re.search(br'^\s*>>> ', contents, re.M): s = doctest.DocTestSuite(m, extraglobs=globs, checker=checker) test_count = len(s._tests) util.log('%s (from %s): %s tests', m, path, test_count) suite.addTest(s) modules_count += 1 tests_count += test_count util.log('Total: %s tests in %s modules', tests_count, modules_count) # TODO: Pass this off to unittest.main() runner = unittest.TextTestRunner(verbosity=2) runner.run(suite) finally: os.chdir(cwd) if __name__ == '__main__': main() gevent-24.11.1/src/gevent/tests/test__environ.py000066400000000000000000000011401471441230600216170ustar00rootroot00000000000000import os import sys import gevent import gevent.core import subprocess if not sys.argv[1:]: os.environ['GEVENT_BACKEND'] = 'select' # (not in Py2) pylint:disable=consider-using-with popen = subprocess.Popen([sys.executable, __file__, '1']) assert popen.wait() == 0, popen.poll() else: # pragma: no cover hub = gevent.get_hub() # pylint:disable-next=no-member if 'select' in gevent.core.supported_backends(): assert hub.loop.backend == 'select', hub.loop.backend else: # libuv isn't configurable assert hub.loop.backend == 'default', hub.loop.backend gevent-24.11.1/src/gevent/tests/test__event.py000066400000000000000000000335401471441230600212710ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import print_function from __future__ import division import weakref import gevent from gevent.event import Event, AsyncResult import gevent.testing as greentest from gevent.testing.six import xrange from gevent.testing.timing import AbstractGenericGetTestCase from gevent.testing.timing import AbstractGenericWaitTestCase from gevent.testing.timing import SMALL_TICK from gevent.testing.timing import SMALL_TICK_MAX_ADJ DELAY = SMALL_TICK + SMALL_TICK_MAX_ADJ class TestEventWait(AbstractGenericWaitTestCase): def wait(self, timeout): Event().wait(timeout=timeout) def test_cover(self): str(Event()) class TestGeventWaitOnEvent(AbstractGenericWaitTestCase): def wait(self, timeout): gevent.wait([Event()], timeout=timeout) def test_set_during_wait(self): # https://github.com/gevent/gevent/issues/771 # broke in the refactoring. we must not add new links # while we're running the callback event = Event() def setter(): event.set() def waiter(): s = gevent.spawn(setter) # let the setter set() the event; # when this method returns we'll be running in the Event._notify_links callback # (that is, it switched to us) res = event.wait() self.assertTrue(res) self.assertTrue(event.ready()) s.join() # make sure it's dead # Clear the event. Now we can't wait for the event without # another set to happen. 
event.clear() self.assertFalse(event.ready()) # Before the bug fix, this would return "immediately" with # event in the result list, because the _notify_links loop would # immediately add the waiter and call it o = gevent.wait((event,), timeout=0.01) self.assertFalse(event.ready()) self.assertNotIn(event, o) gevent.spawn(waiter).join() class TestAsyncResultWait(AbstractGenericWaitTestCase): def wait(self, timeout): AsyncResult().wait(timeout=timeout) class TestWaitAsyncResult(AbstractGenericWaitTestCase): def wait(self, timeout): gevent.wait([AsyncResult()], timeout=timeout) class TestAsyncResultGet(AbstractGenericGetTestCase): def wait(self, timeout): AsyncResult().get(timeout=timeout) class MyException(Exception): pass class TestAsyncResult(greentest.TestCase): def test_link(self): ar = AsyncResult() self.assertRaises(TypeError, ar.rawlink, None) ar.unlink(None) # doesn't raise ar.unlink(None) # doesn't raise str(ar) # cover def test_set_exc(self): log = [] e = AsyncResult() self.assertEqual(e.exc_info, ()) self.assertEqual(e.exception, None) def waiter(): with self.assertRaises(MyException) as exc: e.get() log.append(('caught', exc.exception)) gevent.spawn(waiter) obj = MyException() e.set_exception(obj) gevent.sleep(0) self.assertEqual(log, [('caught', obj)]) def test_set(self): event1 = AsyncResult() timer_exc = MyException('interrupted') # Notice that this test is racy: # After DELAY, we set the event. We also try to immediately # raise the exception with a timer of 0 --- but that depends # on cycling the loop. Hence the fairly large value for DELAY. g = gevent.spawn_later(DELAY, event1.set, 'hello event1') self._close_on_teardown(g.kill) with gevent.Timeout.start_new(0, timer_exc): with self.assertRaises(MyException) as exc: event1.get() self.assertIs(timer_exc, exc.exception) def test_set_with_timeout(self): event2 = AsyncResult() X = object() result = gevent.with_timeout(DELAY, event2.get, timeout_value=X) self.assertIs( result, X, 'Nobody sent anything to event2 yet it received %r' % (result, )) def test_nonblocking_get(self): ar = AsyncResult() self.assertRaises(gevent.Timeout, ar.get, block=False) self.assertRaises(gevent.Timeout, ar.get_nowait) class TestAsyncResultCrossThread(greentest.TestCase): def _makeOne(self): return AsyncResult() def _setOne(self, one): one.set('from main') BG_WAIT_DELAY = 60 def _check_pypy_switch(self): # On PyPy 7.3.3, switching to the main greenlet of a thread from a # different thread silently does nothing. We can't detect the cross-thread # switch, and so this test breaks # https://foss.heptapod.net/pypy/pypy/-/issues/3381 if greentest.PYPY: import sys if sys.pypy_version_info[:3] <= (7, 3, 3): # pylint:disable=no-member self.skipTest("PyPy bug: https://foss.heptapod.net/pypy/pypy/-/issues/3381") @greentest.ignores_leakcheck def test_cross_thread_use(self, timed_wait=False, wait_in_bg=False): # Issue 1739. # AsyncResult has *never* been thread safe, and using it from one # thread to another is not safe. However, in some very careful use cases # that can actually work. # # This test makes sure it doesn't hang in one careful use # scenario. 
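        # (The careful pattern relies on the fact that, unpatched, these are
        # real OS threads and each thread that touches gevent gets its own hub
        # and event loop; only the AsyncResult/Event object itself is shared.)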
self.assertNotMonkeyPatched() # Need real threads, event objects from threading import Thread as NativeThread from threading import Event as NativeEvent if not wait_in_bg: self._check_pypy_switch() test = self class Thread(NativeThread): def __init__(self): NativeThread.__init__(self) self.daemon = True self.running_event = NativeEvent() self.finished_event = NativeEvent() self.async_result = test._makeOne() self.result = '' def run(self): # Give the loop in this thread something to do g_event = Event() def spin(): while not g_event.is_set(): g_event.wait(DELAY * 2) glet = gevent.spawn(spin) def work(): self.running_event.set() # If we use a timed wait(), the bug doesn't manifest. # This is probably because the loop wakes up to handle the timer, # and notices the callback. # See https://github.com/gevent/gevent/issues/1735 if timed_wait: self.result = self.async_result.wait(test.BG_WAIT_DELAY) else: self.result = self.async_result.wait() if wait_in_bg: # This results in a separate code path worker = gevent.spawn(work) worker.join() del worker else: work() g_event.set() glet.join() del glet self.finished_event.set() gevent.get_hub().destroy(destroy_loop=True) thread = Thread() thread.start() try: thread.running_event.wait() self._setOne(thread.async_result) thread.finished_event.wait(DELAY * 5) finally: thread.join(DELAY * 15) self._check_result(thread.result) def _check_result(self, result): self.assertEqual(result, 'from main') def test_cross_thread_use_bg(self): self.test_cross_thread_use(timed_wait=False, wait_in_bg=True) def test_cross_thread_use_timed(self): self.test_cross_thread_use(timed_wait=True, wait_in_bg=False) def test_cross_thread_use_timed_bg(self): self.test_cross_thread_use(timed_wait=True, wait_in_bg=True) @greentest.ignores_leakcheck def test_cross_thread_use_set_in_bg(self): self.assertNotMonkeyPatched() # Need real threads, event objects from threading import Thread as NativeThread from threading import Event as NativeEvent self._check_pypy_switch() test = self class Thread(NativeThread): def __init__(self): NativeThread.__init__(self) self.daemon = True self.running_event = NativeEvent() self.finished_event = NativeEvent() self.async_result = test._makeOne() self.result = '' def run(self): self.running_event.set() test._setOne(self.async_result) self.finished_event.set() gevent.get_hub().destroy(destroy_loop=True) thread = Thread() glet = None try: glet = gevent.spawn(thread.start) result = thread.async_result.wait(self.BG_WAIT_DELAY) finally: thread.join(DELAY * 15) if glet is not None: glet.join(DELAY) self._check_result(result) @greentest.ignores_leakcheck def test_cross_thread_use_set_in_bg2(self): # Do it again to make sure it works multiple times. 
self.test_cross_thread_use_set_in_bg() class TestEventCrossThread(TestAsyncResultCrossThread): def _makeOne(self): return Event() def _setOne(self, one): one.set() def _check_result(self, result): self.assertTrue(result) class TestAsyncResultAsLinkTarget(greentest.TestCase): error_fatal = False def test_set(self): g = gevent.spawn(lambda: 1) s1, s2, s3 = AsyncResult(), AsyncResult(), AsyncResult() g.link(s1) g.link_value(s2) g.link_exception(s3) self.assertEqual(s1.get(), 1) self.assertEqual(s2.get(), 1) X = object() result = gevent.with_timeout(DELAY, s3.get, timeout_value=X) self.assertIs(result, X) def test_set_exception(self): def func(): raise greentest.ExpectedException('TestAsyncResultAsLinkTarget.test_set_exception') g = gevent.spawn(func) s1, s2, s3 = AsyncResult(), AsyncResult(), AsyncResult() g.link(s1) g.link_value(s2) g.link_exception(s3) self.assertRaises(greentest.ExpectedException, s1.get) X = object() result = gevent.with_timeout(DELAY, s2.get, timeout_value=X) self.assertIs(result, X) self.assertRaises(greentest.ExpectedException, s3.get) class TestEvent_SetThenClear(greentest.TestCase): N = 1 def test(self): e = Event() waiters = [gevent.spawn(e.wait) for i in range(self.N)] gevent.sleep(0.001) e.set() e.clear() for greenlet in waiters: greenlet.join() class TestEvent_SetThenClear100(TestEvent_SetThenClear): N = 100 class TestEvent_SetThenClear1000(TestEvent_SetThenClear): N = 1000 class TestWait(greentest.TestCase): N = 5 count = None timeout = 1 period = timeout / 100.0 def _sender(self, events, asyncs): while events or asyncs: gevent.sleep(self.period) if events: events.pop().set() gevent.sleep(self.period) if asyncs: asyncs.pop().set() @greentest.skipOnAppVeyor("Not all results have arrived sometimes due to timer issues") def test(self): events = [Event() for _ in xrange(self.N)] asyncs = [AsyncResult() for _ in xrange(self.N)] max_len = len(events) + len(asyncs) sender = gevent.spawn(self._sender, events, asyncs) results = gevent.wait(events + asyncs, count=self.count, timeout=self.timeout) if self.timeout is None: expected_len = max_len else: expected_len = min(max_len, self.timeout / self.period) if self.count is None: self.assertTrue(sender.ready(), sender) else: expected_len = min(self.count, expected_len) self.assertFalse(sender.ready(), sender) sender.kill() self.assertEqual(expected_len, len(results), (expected_len, len(results), results)) class TestWait_notimeout(TestWait): timeout = None class TestWait_count1(TestWait): count = 1 class TestWait_count2(TestWait): count = 2 class TestEventBasics(greentest.TestCase): def test_weakref(self): # Event objects should allow weakrefs e = Event() r = weakref.ref(e) self.assertIs(e, r()) del e del r def test_wait_while_notifying(self): # If someone calls wait() on an Event that is # ready, and notifying other waiters, that new # waiter still runs at the end, but this does not # require a trip around the event loop. # See https://github.com/gevent/gevent/issues/1520 event = Event() results = [] def wait_then_append(arg): event.wait() results.append(arg) gevent.spawn(wait_then_append, 1) gevent.spawn(wait_then_append, 2) gevent.idle() self.assertEqual(2, event.linkcount()) check = gevent.get_hub().loop.check() check.start(results.append, 4) event.set() wait_then_append(3) self.assertEqual(results, [1, 2, 3]) # Note that the check event DID NOT run. check.stop() check.close() def test_gevent_wait_twice_when_already_set(self): event = Event() event.set() # First one works fine. 
result = gevent.wait([event]) self.assertEqual(result, [event]) # Second one used to fail with an AssertionError, # now it passes result = gevent.wait([event]) self.assertEqual(result, [event]) del AbstractGenericGetTestCase del AbstractGenericWaitTestCase if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__events.py000066400000000000000000000026711471441230600214550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright 2018 gevent. See LICENSE. from __future__ import absolute_import from __future__ import division from __future__ import print_function import unittest from gevent import events try: from zope.interface import verify except ImportError: verify = None try: from zope import event except ImportError: event = None @unittest.skipIf(verify is None, "Needs zope.interface") class TestImplements(unittest.TestCase): def test_event_loop_blocked(self): verify.verifyClass(events.IEventLoopBlocked, events.EventLoopBlocked) def test_mem_threshold(self): verify.verifyClass(events.IMemoryUsageThresholdExceeded, events.MemoryUsageThresholdExceeded) verify.verifyObject(events.IMemoryUsageThresholdExceeded, events.MemoryUsageThresholdExceeded(0, 0, 0)) def test_mem_decreased(self): verify.verifyClass(events.IMemoryUsageUnderThreshold, events.MemoryUsageUnderThreshold) verify.verifyObject(events.IMemoryUsageUnderThreshold, events.MemoryUsageUnderThreshold(0, 0, 0, 0)) @unittest.skipIf(event is None, "Needs zope.event") class TestEvents(unittest.TestCase): def test_is_zope(self): self.assertIs(events.subscribers, event.subscribers) self.assertIs(events.notify, event.notify) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__example_echoserver.py000066400000000000000000000022561471441230600240300ustar00rootroot00000000000000from gevent.socket import create_connection, timeout import gevent.testing as greentest import gevent from gevent.testing import util from gevent.testing import params class Test(util.TestServer): example = 'echoserver.py' def _run_all_tests(self): def test_client(message): if greentest.PY3: kwargs = {'buffering': 1} else: kwargs = {'bufsize': 1} kwargs['mode'] = 'rb' conn = create_connection((params.DEFAULT_LOCAL_HOST_ADDR, 16000)) conn.settimeout(greentest.DEFAULT_XPC_SOCKET_TIMEOUT) rfile = conn.makefile(**kwargs) welcome = rfile.readline() self.assertIn(b'Welcome', welcome) conn.sendall(message) received = rfile.read(len(message)) self.assertEqual(received, message) self.assertRaises(timeout, conn.recv, 1) rfile.close() conn.close() client1 = gevent.spawn(test_client, b'hello\r\n') client2 = gevent.spawn(test_client, b'world\r\n') gevent.joinall([client1, client2], raise_error=True) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__example_portforwarder.py000066400000000000000000000037511471441230600245640ustar00rootroot00000000000000from __future__ import print_function, absolute_import from gevent import monkey; monkey.patch_all() import signal import socket from time import sleep import gevent from gevent.server import StreamServer import gevent.testing as greentest from gevent.testing import util @greentest.skipOnLibuvOnCIOnPyPy("Timing issues sometimes lead to connection refused") class Test(util.TestServer): example = 'portforwarder.py' # [listen on, forward to] example_args = ['127.0.0.1:10011', '127.0.0.1:10012'] if greentest.WIN: from subprocess import CREATE_NEW_PROCESS_GROUP # Must be in a new process group to use CTRL_C_EVENT, otherwise # we get killed too 
start_kwargs = {'creationflags': CREATE_NEW_PROCESS_GROUP} def after(self): if greentest.WIN: self.assertIsNotNone(self.popen.poll()) else: self.assertEqual(self.popen.poll(), 0) def _run_all_tests(self): log = [] def handle(sock, _address): while True: data = sock.recv(1024) print('got %r' % data) if not data: break log.append(data) server = StreamServer(self.example_args[1], handle) server.start() try: conn = socket.create_connection(('127.0.0.1', 10011)) conn.sendall(b'msg1') sleep(0.1) # On Windows, SIGTERM actually abruptly terminates the process; # it can't be caught. However, CTRL_C_EVENT results in a KeyboardInterrupt # being raised, so we can shut down properly. self.popen.send_signal(getattr(signal, 'CTRL_C_EVENT', signal.SIGTERM)) sleep(0.1) conn.sendall(b'msg2') conn.close() with gevent.Timeout(2.1): self.popen.wait() finally: server.close() self.assertEqual([b'msg1', b'msg2'], log) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__example_udp_client.py000066400000000000000000000015641471441230600240120ustar00rootroot00000000000000from gevent import monkey monkey.patch_all() from gevent.server import DatagramServer from gevent.testing import util from gevent.testing import main class Test_udp_client(util.TestServer): start_kwargs = {'timeout': 10} example = 'udp_client.py' example_args = ['Test_udp_client'] def test(self): log = [] def handle(message, address): log.append(message) server.sendto(b'reply-from-server', address) server = DatagramServer('127.0.0.1:9001', handle) server.start() try: self.run_example() finally: server.close() self.assertEqual(log, [b'Test_udp_client']) if __name__ == '__main__': # Running this following test__example_portforwarder on Appveyor # doesn't work in the same process for some reason. main() # pragma: testrunner-no-combine gevent-24.11.1/src/gevent/tests/test__example_udp_server.py000066400000000000000000000010011471441230600240240ustar00rootroot00000000000000import socket from gevent.testing import util from gevent.testing import main class Test(util.TestServer): example = 'udp_server.py' def _run_all_tests(self): sock = socket.socket(type=socket.SOCK_DGRAM) try: sock.connect(('127.0.0.1', 9000)) sock.send(b'Test udp_server') data, _address = sock.recvfrom(8192) self.assertEqual(data, b'Received 15 bytes') finally: sock.close() if __name__ == '__main__': main() gevent-24.11.1/src/gevent/tests/test__example_webproxy.py000066400000000000000000000014471471441230600235430ustar00rootroot00000000000000from unittest import SkipTest import gevent.testing as greentest from . 
import test__example_wsgiserver @greentest.skipOnCI("Timing issues sometimes lead to a connection refused") @greentest.skipWithoutExternalNetwork("Tries to reach google.com") class Test_webproxy(test__example_wsgiserver.Test_wsgiserver): example = 'webproxy.py' def _run_all_tests(self): status, data = self.read('/') self.assertEqual(status, '200 OK') self.assertIn(b"gevent example", data) status, data = self.read('/http://www.google.com') self.assertEqual(status, '200 OK') self.assertIn(b'google', data.lower()) def test_a_blocking_client(self): # Not applicable raise SkipTest("Not applicable") if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__example_wsgiserver.py000066400000000000000000000062021471441230600240560ustar00rootroot00000000000000import sys try: from urllib import request as urllib2 except ImportError: import urllib2 import socket import ssl import gevent.testing as greentest from gevent.testing import DEFAULT_XPC_SOCKET_TIMEOUT from gevent.testing import util from gevent.testing import params @greentest.skipOnCI("Timing issues sometimes lead to a connection refused") class Test_wsgiserver(util.TestServer): example = 'wsgiserver.py' URL = 'http://%s:8088' % (params.DEFAULT_LOCAL_HOST_ADDR,) PORT = 8088 not_found_message = b'
Not Found
' ssl_ctx = None _use_ssl = False def read(self, path='/'): url = self.URL + path try: kwargs = {} if self.ssl_ctx is not None: kwargs = {'context': self.ssl_ctx} response = urllib2.urlopen(url, None, DEFAULT_XPC_SOCKET_TIMEOUT, **kwargs) except urllib2.HTTPError: response = sys.exc_info()[1] result = '%s %s' % (response.code, response.msg), response.read() # XXX: It looks like under PyPy this isn't directly closing the socket # when SSL is in use. It takes a GC cycle to make that true. response.close() return result def _test_hello(self): status, data = self.read('/') self.assertEqual(status, '200 OK') self.assertEqual(data, b"hello world") def _test_not_found(self): status, data = self.read('/xxx') self.assertEqual(status, '404 Not Found') self.assertEqual(data, self.not_found_message) def _do_test_a_blocking_client(self): # We spawn this in a separate server because if it's broken # the whole server hangs with self.running_server(): # First, make sure we can talk to it. self._test_hello() # Now create a connection and only partway finish # the transaction sock = socket.create_connection((params.DEFAULT_LOCAL_HOST_ADDR, self.PORT)) ssl_sock = None if self._use_ssl: context = ssl.SSLContext() ssl_sock = context.wrap_socket(sock) sock_file = ssl_sock.makefile(mode='rwb') else: sock_file = sock.makefile(mode='rwb') # write an incomplete request sock_file.write(b'GET /xxx HTTP/1.0\r\n') sock_file.flush() # Leave it open and not doing anything # while the other request runs to completion. # This demonstrates that a blocking client # doesn't hang the whole server self._test_hello() # now finish the original request sock_file.write(b'\r\n') sock_file.flush() line = sock_file.readline() self.assertEqual(line, b'HTTP/1.1 404 Not Found\r\n') sock_file.close() if ssl_sock is not None: ssl_sock.close() sock.close() def test_a_blocking_client(self): self._do_test_a_blocking_client() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__example_wsgiserver_ssl.py000066400000000000000000000012111471441230600247320ustar00rootroot00000000000000import ssl import gevent.testing as greentest from gevent.testing import params from . import test__example_wsgiserver @greentest.skipOnCI("Timing issues sometimes lead to a connection refused") class Test_wsgiserver_ssl(test__example_wsgiserver.Test_wsgiserver): example = 'wsgiserver_ssl.py' URL = 'https://%s:8443' % (params.DEFAULT_LOCAL_HOST_ADDR,) PORT = 8443 _use_ssl = True if hasattr(ssl, '_create_unverified_context'): # Disable verification for our self-signed cert # on Python >= 2.7.9 and 3.4 ssl_ctx = ssl._create_unverified_context() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__examples.py000066400000000000000000000064101471441230600217620ustar00rootroot00000000000000""" Test the contents of the ``examples/`` directory. If an existing test in *this* directory named ``test__example_.py`` exists, where ```` is the base filename of an example file, it will not be tested here. Examples can specify that they need particular test resources to be enabled by commenting (one per line) ``# gevent-test-requires-resource: ``; most commonly the resource will be ``network``. You can use this technique to specify non-existant resources for things that should never be tested. 
""" import re import os import glob import time import unittest import gevent.testing as greentest from gevent.testing import util this_dir = os.path.dirname(__file__) def _find_files_to_ignore(): old_dir = os.getcwd() try: os.chdir(this_dir) result = [x[14:] for x in glob.glob('test__example_*.py')] if greentest.PYPY and greentest.RUNNING_ON_APPVEYOR: # For some reason on Windows with PyPy, this times out, # when it should be very fast. result.append("processes.py") finally: os.chdir(old_dir) return result default_time_range = (2, 10) time_ranges = { # what is this even supposed to mean? pylint:disable=consider-using-namedtuple-or-dataclass 'concurrent_download.py': (0, 30), 'processes.py': (0, default_time_range[-1]) } class _AbstractTestMixin(util.ExampleMixin): time_range = default_time_range example = None def _check_resources(self): from gevent.testing import resources # pylint:disable=unspecified-encoding with open(os.path.join(self.cwd, self.example), 'r') as f: contents = f.read() pattern = re.compile('^# gevent-test-requires-resource: (.*)$', re.MULTILINE) resources_needed = re.finditer(pattern, contents) for match in resources_needed: needed = contents[match.start(1):match.end(1)] resources.skip_without_resource(needed) def test_runs(self): self._check_resources() start = time.time() min_time, max_time = self.time_range self.start_kwargs = { 'timeout': max_time, 'quiet': True, 'buffer_output': True, 'nested': True, 'setenv': {'GEVENT_DEBUG': 'error'} } if not self.run_example(): self.fail("Failed example: " + self.example) else: took = time.time() - start self.assertGreaterEqual(took, min_time) def _build_test_classes(): result = {} try: example_dir = util.ExampleMixin().cwd except unittest.SkipTest: util.log("WARNING: No examples dir found", color='suboptimal-behaviour') return result ignore = _find_files_to_ignore() for filename in glob.glob(example_dir + '/*.py'): bn = os.path.basename(filename) if bn in ignore: continue tc = type( 'Test_' + bn, (_AbstractTestMixin, greentest.TestCase), { 'example': bn, 'time_range': time_ranges.get(bn, _AbstractTestMixin.time_range) } ) result[tc.__name__] = tc return result for k, v in _build_test_classes().items(): locals()[k] = v if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__exc_info.py000066400000000000000000000025411471441230600217370ustar00rootroot00000000000000import gevent import sys import gevent.testing as greentest from gevent.testing import six from gevent.testing import ExpectedException as ExpectedError if six.PY2: sys.exc_clear() class RawException(Exception): pass def hello(err): assert sys.exc_info() == (None, None, None), sys.exc_info() raise err def hello2(): try: hello(ExpectedError('expected exception in hello')) except ExpectedError: pass class Test(greentest.TestCase): def test1(self): error = RawException('hello') expected_error = ExpectedError('expected exception in hello') try: raise error except RawException: self.expect_one_error() g = gevent.spawn(hello, expected_error) g.join() self.assert_error(ExpectedError, expected_error) self.assertIsInstance(g.exception, ExpectedError) try: raise except: # pylint:disable=bare-except ex = sys.exc_info()[1] self.assertIs(ex, error) def test2(self): timer = gevent.get_hub().loop.timer(0) timer.start(hello2) try: gevent.sleep(0.1) self.assertEqual(sys.exc_info(), (None, None, None)) finally: timer.close() if __name__ == '__main__': greentest.main() 
gevent-24.11.1/src/gevent/tests/test__execmodules.py000066400000000000000000000031341471441230600224610ustar00rootroot00000000000000import unittest import warnings from gevent.testing import modules from gevent.testing import main from gevent.testing.sysinfo import NON_APPLICABLE_SUFFIXES from gevent.testing import six import pathlib def make_exec_test(path, module): path = pathlib.Path(path) def test(_): with open(path, 'rb') as f: src = f.read() with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) globs = {'__file__': path, '__name__': module} if path.name == '__init__.py': globs['__path__'] = [path] globs['__package__'] = module elif '.' in module: globs['__package__'] = module.rsplit('.', 1)[0] try: six.exec_(src, globs) except ImportError: if module in modules.OPTIONAL_MODULES: raise unittest.SkipTest("Unable to import optional module %s" % module) raise name = "test_" + module.replace(".", "_") test.__name__ = name return test def make_all_tests(cls): for path, module in modules.walk_modules(recursive=True, check_optional=False): if module.endswith(NON_APPLICABLE_SUFFIXES): continue test = make_exec_test(path, module) setattr(cls, test.__name__, test) return cls @make_all_tests class Test(unittest.TestCase): pass if __name__ == '__main__': # This should not be combined with other tests in the same process # because it messes with global shared state. # pragma: testrunner-no-combine main() gevent-24.11.1/src/gevent/tests/test__fileobject.py000066400000000000000000000436511471441230600222620ustar00rootroot00000000000000from __future__ import print_function from __future__ import absolute_import import functools import gc import io import os import sys import tempfile import unittest import gevent from gevent import fileobject from gevent._fileobjectcommon import OpenDescriptor try: from gevent._fileobjectposix import GreenOpenDescriptor except ImportError: GreenOpenDescriptor = None import gevent.testing as greentest from gevent.testing import sysinfo # pylint:disable=unspecified-encoding def Writer(fobj, line): for character in line: fobj.write(character) fobj.flush() fobj.close() def close_fd_quietly(fd): try: os.close(fd) except OSError: pass def skipUnlessWorksWithRegularFiles(func): @functools.wraps(func) def f(self): if not self.WORKS_WITH_REGULAR_FILES: self.skipTest("Doesn't work with regular files") func(self) return f class CleanupMixin(object): def _mkstemp(self, suffix): fileno, path = tempfile.mkstemp(suffix) self.addCleanup(os.remove, path) self.addCleanup(close_fd_quietly, fileno) return fileno, path def _pipe(self): r, w = os.pipe() self.addCleanup(close_fd_quietly, r) self.addCleanup(close_fd_quietly, w) return r, w class TestFileObjectBlock(CleanupMixin, greentest.TestCase): # serves as a base for the concurrent tests too WORKS_WITH_REGULAR_FILES = True def _getTargetClass(self): return fileobject.FileObjectBlock def _makeOne(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) def _test_del(self, **kwargs): r, w = self._pipe() self._do_test_del((r, w), **kwargs) def _do_test_del(self, pipe, **kwargs): r, w = pipe s = self._makeOne(w, 'wb', **kwargs) s.write(b'x') try: s.flush() except IOError: # Sometimes seen on Windows/AppVeyor print("Failed flushing fileobject", repr(s), file=sys.stderr) import traceback traceback.print_exc() import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', ResourceWarning) # Deliberately getting ResourceWarning with FileObject(Thread) under Py3 del s gc.collect() # 
PyPy if kwargs.get("close", True): with self.assertRaises((OSError, IOError)): # expected, because FileObject already closed it os.close(w) else: os.close(w) with self._makeOne(r, 'rb') as fobj: self.assertEqual(fobj.read(), b'x') def test_del(self): # Close should be true by default self._test_del() def test_del_close(self): self._test_del(close=True) @skipUnlessWorksWithRegularFiles def test_seek(self): fileno, path = self._mkstemp('.gevent.test__fileobject.test_seek') s = b'a' * 1024 os.write(fileno, b'B' * 15) os.write(fileno, s) os.close(fileno) with open(path, 'rb') as f: f.seek(15) native_data = f.read(1024) with open(path, 'rb') as f_raw: f = self._makeOne(f_raw, 'rb', close=False) # On Python 3, all objects should have seekable. # On Python 2, only our custom objects do. self.assertTrue(f.seekable()) f.seek(15) self.assertEqual(15, f.tell()) # Note that a duplicate close() of the underlying # file descriptor can look like an OSError from this line # as we exit the with block fileobj_data = f.read(1024) self.assertEqual(native_data, s) self.assertEqual(native_data, fileobj_data) def __check_native_matches(self, byte_data, open_mode, meth='read', open_path=True, **open_kwargs): fileno, path = self._mkstemp('.gevent_test_' + open_mode) os.write(fileno, byte_data) os.close(fileno) with io.open(path, open_mode, **open_kwargs) as f: native_data = getattr(f, meth)() if open_path: with self._makeOne(path, open_mode, **open_kwargs) as f: gevent_data = getattr(f, meth)() else: # Note that we don't use ``io.open()`` for the raw file, # on Python 2. We want 'r' to mean what the usual call to open() means. opener = io.open with opener(path, open_mode, **open_kwargs) as raw: with self._makeOne(raw) as f: gevent_data = getattr(f, meth)() self.assertEqual(native_data, gevent_data) return gevent_data @skipUnlessWorksWithRegularFiles def test_str_default_to_native(self): # With no 'b' or 't' given, read and write native str. gevent_data = self.__check_native_matches(b'abcdefg', 'r') self.assertIsInstance(gevent_data, str) @skipUnlessWorksWithRegularFiles def test_text_encoding(self): gevent_data = self.__check_native_matches( u'\N{SNOWMAN}'.encode('utf-8'), 'r+', buffering=5, encoding='utf-8' ) self.assertIsInstance(gevent_data, str) @skipUnlessWorksWithRegularFiles def test_does_not_leak_on_exception(self): # If an exception occurs during opening, # everything still gets cleaned up. pass @skipUnlessWorksWithRegularFiles def test_rbU_produces_bytes_readline(self): if sys.version_info > (3, 11): self.skipTest("U file mode was removed in 3.11") # Including U in rb still produces bytes. # Note that the universal newline behaviour is # essentially ignored in explicit bytes mode. 
gevent_data = self.__check_native_matches( b'line1\nline2\r\nline3\rlastline\n\n', 'rbU', meth='readlines', ) self.assertIsInstance(gevent_data[0], bytes) self.assertEqual(len(gevent_data), 4) @skipUnlessWorksWithRegularFiles def test_rU_produces_native(self): if sys.version_info > (3, 11): self.skipTest("U file mode was removed in 3.11") gevent_data = self.__check_native_matches( b'line1\nline2\r\nline3\rlastline\n\n', 'rU', meth='readlines', ) self.assertIsInstance(gevent_data[0], str) @skipUnlessWorksWithRegularFiles def test_r_readline_produces_native(self): gevent_data = self.__check_native_matches( b'line1\n', 'r', meth='readline', ) self.assertIsInstance(gevent_data, str) @skipUnlessWorksWithRegularFiles def test_r_readline_on_fobject_produces_native(self): gevent_data = self.__check_native_matches( b'line1\n', 'r', meth='readline', open_path=False, ) self.assertIsInstance(gevent_data, str) def test_close_pipe(self): # Issue #190, 203 r, w = os.pipe() x = self._makeOne(r) y = self._makeOne(w, 'w') x.close() y.close() @skipUnlessWorksWithRegularFiles @greentest.ignores_leakcheck def test_name_after_close(self): fileno, path = self._mkstemp('.gevent_test_named_path_after_close') # Passing the fileno; the name is the same as the fileno, and # doesn't change when closed. f = self._makeOne(fileno) nf = os.fdopen(fileno) # On Python 2, os.fdopen() produces a name of ; # we follow the Python 3 semantics everywhere. nf_name = '' if greentest.PY2 else fileno self.assertEqual(f.name, fileno) self.assertEqual(nf.name, nf_name) # A file-like object that has no name; we'll close the # `f` after this because we reuse the fileno, which # gets passed to fcntl and so must still be valid class Nameless(object): def fileno(self): return fileno close = flush = isatty = closed = writable = lambda self: False seekable = readable = lambda self: True nameless = self._makeOne(Nameless(), 'rb') with self.assertRaises(AttributeError): getattr(nameless, 'name') nameless.close() with self.assertRaises(AttributeError): getattr(nameless, 'name') f.close() try: nf.close() except OSError: # OSError: Py3, IOError: Py2 pass self.assertEqual(f.name, fileno) self.assertEqual(nf.name, nf_name) def check(arg): f = self._makeOne(arg) self.assertEqual(f.name, path) f.close() # Doesn't change after closed. self.assertEqual(f.name, path) # Passing the string check(path) # Passing an opened native object with open(path) as nf: check(nf) # An io object with io.open(path) as nf: check(nf) @skipUnlessWorksWithRegularFiles def test_readinto_serial(self): fileno, path = self._mkstemp('.gevent_test_readinto') os.write(fileno, b'hello world') os.close(fileno) buf = bytearray(32) mbuf = memoryview(buf) def assertReadInto(byte_count, expected_data): bytes_read = f.readinto(mbuf[:byte_count]) self.assertEqual(bytes_read, len(expected_data)) self.assertEqual(buf[:bytes_read], expected_data) with self._makeOne(path, 'rb') as f: assertReadInto(2, b'he') assertReadInto(1, b'l') assertReadInto(32, b'lo world') assertReadInto(32, b'') class ConcurrentFileObjectMixin(object): # Additional tests for fileobjects that cooperate # and we have full control of the implementation def test_read1_binary_present(self): # Issue #840 r, w = self._pipe() reader = self._makeOne(r, 'rb') self._close_on_teardown(reader) writer = self._makeOne(w, 'w') self._close_on_teardown(writer) self.assertTrue(hasattr(reader, 'read1'), dir(reader)) def test_read1_text_not_present(self): # Only defined for binary. 
r, w = self._pipe() reader = self._makeOne(r, 'rt') self._close_on_teardown(reader) self.addCleanup(os.close, w) self.assertFalse(hasattr(reader, 'read1'), dir(reader)) def test_read1_default(self): # If just 'r' is given, whether it has one or not # depends on if we're Python 2 or 3. r, w = self._pipe() self.addCleanup(os.close, w) reader = self._makeOne(r) self._close_on_teardown(reader) self.assertFalse(hasattr(reader, 'read1')) def test_bufsize_0(self): # Issue #840 r, w = self._pipe() x = self._makeOne(r, 'rb', bufsize=0) y = self._makeOne(w, 'wb', bufsize=0) self._close_on_teardown(x) self._close_on_teardown(y) y.write(b'a') b = x.read(1) self.assertEqual(b, b'a') y.writelines([b'2']) b = x.read(1) self.assertEqual(b, b'2') def test_newlines(self): import warnings r, w = self._pipe() lines = [b'line1\n', b'line2\r', b'line3\r\n', b'line4\r\nline5', b'\nline6'] g = gevent.spawn(Writer, self._makeOne(w, 'wb'), lines) try: with warnings.catch_warnings(): if sys.version_info > (3, 11): # U is removed in Python 3.11 mode = 'r' self.skipTest("U file mode was removed in 3.11") else: # U is deprecated in Python 3, shows up on FileObjectThread warnings.simplefilter('ignore', DeprecationWarning) mode = 'rU' fobj = self._makeOne(r, mode) result = fobj.read() fobj.close() self.assertEqual('line1\nline2\nline3\nline4\nline5\nline6', result) finally: g.kill() def test_readinto(self): # verify that .readinto() is cooperative. # if .readinto() is not cooperative spawned greenlet will not be able # to run and call to .readinto() will block forever. r, w = self._pipe() rf = self._close_on_teardown(self._makeOne(r, 'rb')) wf = self._close_on_teardown(self._makeOne(w, 'wb')) g = gevent.spawn(Writer, wf, [b'hello']) try: buf1 = bytearray(32) buf2 = bytearray(32) n1 = rf.readinto(buf1) n2 = rf.readinto(buf2) self.assertEqual(n1, 5) self.assertEqual(buf1[:n1], b'hello') self.assertEqual(n2, 0) finally: g.kill() class TestFileObjectThread(ConcurrentFileObjectMixin, # pylint:disable=too-many-ancestors TestFileObjectBlock): def _getTargetClass(self): return fileobject.FileObjectThread def test_del_noclose(self): # In the past, we used os.fdopen() when given a file descriptor, # and that has a destructor that can't be bypassed, so # close=false wasn't allowed. Now that we do everything with the # io module, it is allowed. self._test_del(close=False) # We don't test this with FileObjectThread. Sometimes the # visibility of the 'close' operation, which happens in a # background thread, doesn't make it to the foreground # thread in a timely fashion, leading to 'os.close(4) must # not succeed' in test_del_close. We have the same thing # with flushing and closing in test_newlines. Both of # these are most commonly (only?) observed on Py27/64-bit. # They also appear on 64-bit 3.6 with libuv def test_del(self): raise unittest.SkipTest("Race conditions") def test_del_close(self): raise unittest.SkipTest("Race conditions") @unittest.skipUnless( hasattr(fileobject, 'FileObjectPosix'), "Needs FileObjectPosix" ) class TestFileObjectPosix(ConcurrentFileObjectMixin, # pylint:disable=too-many-ancestors TestFileObjectBlock): if sysinfo.LIBUV and sysinfo.LINUX: # On Linux, initializing the watcher for a regular # file results in libuv raising EPERM. But that works # fine on other platforms. 
WORKS_WITH_REGULAR_FILES = False def _getTargetClass(self): return fileobject.FileObjectPosix def test_seek_raises_ioerror(self): # https://github.com/gevent/gevent/issues/1323 # Get a non-seekable file descriptor r, _w = self._pipe() with self.assertRaises(OSError) as ctx: os.lseek(r, 0, os.SEEK_SET) os_ex = ctx.exception with self.assertRaises(IOError) as ctx: f = self._makeOne(r, 'r', close=False) # Seek directly using the underlying GreenFileDescriptorIO; # the buffer may do different things, depending # on the version of Python (especially 3.7+) f.fileio.seek(0) io_ex = ctx.exception self.assertEqual(io_ex.errno, os_ex.errno) self.assertEqual(io_ex.strerror, os_ex.strerror) self.assertEqual(io_ex.args, os_ex.args) self.assertEqual(str(io_ex), str(os_ex)) class TestTextMode(CleanupMixin, unittest.TestCase): def test_default_mode_writes_linesep(self): # See https://github.com/gevent/gevent/issues/1282 # libuv 1.x interferes with the default line mode on # Windows. # First, make sure we initialize gevent gevent.get_hub() fileno, path = self._mkstemp('.gevent.test__fileobject.test_default') os.close(fileno) with open(path, "w") as f: f.write("\n") with open(path, "rb") as f: data = f.read() self.assertEqual(data, os.linesep.encode('ascii')) class TestOpenDescriptor(CleanupMixin, greentest.TestCase): def _getTargetClass(self): return OpenDescriptor def _makeOne(self, *args, **kwargs): return self._getTargetClass()(*args, **kwargs) def _check(self, regex, kind, *args, **kwargs): with self.assertRaisesRegex(kind, regex): self._makeOne(*args, **kwargs) case = lambda re, **kwargs: (re, TypeError, kwargs) vase = lambda re, **kwargs: (re, ValueError, kwargs) CASES = ( case('mode', mode=42), case('buffering', buffering='nope'), case('encoding', encoding=42), case('errors', errors=42), vase('mode', mode='aoeug'), vase('mode U cannot be combined', mode='wU'), vase('text and binary', mode='rtb'), vase('append mode at once', mode='rw'), vase('exactly one', mode='+'), vase('take an encoding', mode='rb', encoding='ascii'), vase('take an errors', mode='rb', errors='strict'), vase('take a newline', mode='rb', newline='\n'), ) def test_atomicwrite_fd(self): from gevent._fileobjectcommon import WriteallMixin # It basically only does something when buffering is otherwise disabled fileno, _w = self._pipe() desc = self._makeOne(fileno, 'wb', buffering=0, closefd=False, atomic_write=True) self.assertTrue(desc.atomic_write) fobj = desc.opened() self.assertIsInstance(fobj, WriteallMixin) os.close(fileno) def pop(): for regex, kind, kwargs in TestOpenDescriptor.CASES: setattr( TestOpenDescriptor, 'test_' + regex.replace(' ', '_'), lambda self, _re=regex, _kind=kind, _kw=kwargs: self._check(_re, _kind, 1, **_kw) ) pop() @unittest.skipIf(GreenOpenDescriptor is None, "No support for non-blocking IO") class TestGreenOpenDescripton(TestOpenDescriptor): def _getTargetClass(self): return GreenOpenDescriptor if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__getaddrinfo_import.py000066400000000000000000000007171471441230600240300ustar00rootroot00000000000000# On Python 2, a deadlock is possible if we import a module that runs gevent's getaddrinfo # with a unicode hostname, which starts Python's getaddrinfo on a thread, which # attempts to import encodings.idna but blocks on the import lock. Verify # that gevent avoids this deadlock. 
try: import getaddrinfo_module # pylint:disable=import-error except ModuleNotFoundError: from gevent.tests import getaddrinfo_module del getaddrinfo_module # fix pyflakes gevent-24.11.1/src/gevent/tests/test__greenio.py000066400000000000000000000126231471441230600215770ustar00rootroot00000000000000# Copyright (c) 2006-2007, Linden Research, Inc. # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import sys import gevent from gevent import socket from gevent import testing as greentest from gevent.testing import TestCase, tcp_listener from gevent.testing import gc_collect_if_needed from gevent.testing import skipOnPyPy from gevent.testing import params PY3 = sys.version_info[0] >= 3 def _write_to_closed(f, s): try: r = f.write(s) except ValueError: assert PY3 else: assert r is None, r class TestGreenIo(TestCase): def test_close_with_makefile(self): def accept_close_early(listener): # verify that the makefile and the socket are truly independent # by closing the socket prior to using the made file try: conn, _ = listener.accept() fd = conn.makefile(mode='wb') conn.close() fd.write(b'hello\n') fd.close() _write_to_closed(fd, b'a') self.assertRaises(socket.error, conn.send, b'b') finally: listener.close() def accept_close_late(listener): # verify that the makefile and the socket are truly independent # by closing the made file and then sending a character try: conn, _ = listener.accept() fd = conn.makefile(mode='wb') fd.write(b'hello') fd.close() conn.send(b'\n') conn.close() _write_to_closed(fd, b'a') self.assertRaises(socket.error, conn.send, b'b') finally: listener.close() def did_it_work(server): client = socket.create_connection((params.DEFAULT_CONNECT, server.getsockname()[1])) fd = client.makefile(mode='rb') client.close() self.assertEqual(fd.readline(), b'hello\n') self.assertFalse(fd.read()) fd.close() server = tcp_listener() server_greenlet = gevent.spawn(accept_close_early, server) did_it_work(server) server_greenlet.kill() server = tcp_listener() server_greenlet = gevent.spawn(accept_close_late, server) did_it_work(server) server_greenlet.kill() @skipOnPyPy("Takes multiple GCs and issues a warning we can't catch") def test_del_closes_socket(self): import warnings def accept_once(listener): # delete/overwrite the original conn # object, only keeping the file object around # closing the file object should close everything # This is not *exactly* true on Python 3. This produces # a ResourceWarning, which we silence below. 
(Previously we actually # *saved* a reference to the socket object, so we # weren't testing what we thought we were.) # It's definitely not true on PyPy, which needs GC to # reliably close everything; sometimes this is more than # one collection cycle. And PyPy issues a warning with -X # track-resources that we cannot catch. with warnings.catch_warnings(): warnings.simplefilter('ignore') try: conn = listener.accept()[0] # Note that we overwrite the original variable, # losing our reference to the socket. conn = conn.makefile(mode='wb') conn.write(b'hello\n') conn.close() _write_to_closed(conn, b'a') finally: listener.close() del listener del conn gc_collect_if_needed() gc_collect_if_needed() server = tcp_listener() gevent.spawn(accept_once, server) client = socket.create_connection((params.DEFAULT_CONNECT, server.getsockname()[1])) with gevent.Timeout.start_new(0.5): fd = client.makefile() client.close() self.assertEqual(fd.read(), 'hello\n') # If the socket isn't closed when 'accept_once' finished, # then this will hang and exceed the timeout self.assertEqual(fd.read(), '') fd.close() del client del fd if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__greenlet.py000066400000000000000000000764121471441230600217620ustar00rootroot00000000000000# Copyright (c) 2008-2009 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import functools import unittest from unittest.mock import patch as Patch import gevent.testing as greentest import gevent from gevent import sleep, with_timeout, getcurrent from gevent import greenlet from gevent.event import AsyncResult from gevent.queue import Queue, Channel from gevent.testing.timing import AbstractGenericWaitTestCase from gevent.testing.timing import AbstractGenericGetTestCase from gevent.testing import timing from gevent.testing import ignores_leakcheck DELAY = timing.SMALL_TICK greentest.TestCase.error_fatal = False class ExpectedError(greentest.ExpectedException): pass class ExpectedJoinError(ExpectedError): pass class SuiteExpectedException(ExpectedError): pass class GreenletRaisesJoin(gevent.Greenlet): killed = False joined = False raise_on_join = True def join(self, timeout=None): self.joined += 1 if self.raise_on_join: raise ExpectedJoinError return gevent.Greenlet.join(self, timeout) def kill(self, *args, **kwargs): # pylint:disable=signature-differs self.killed += 1 return gevent.Greenlet.kill(self, *args, **kwargs) class TestLink(greentest.TestCase): def test_link_to_asyncresult(self): p = gevent.spawn(lambda: 100) event = AsyncResult() p.link(event) self.assertEqual(event.get(), 100) for _ in range(3): event2 = AsyncResult() p.link(event2) self.assertEqual(event2.get(), 100) def test_link_to_asyncresult_exception(self): err = ExpectedError('test_link_to_asyncresult_exception') p = gevent.spawn(lambda: getcurrent().throw(err)) event = AsyncResult() p.link(event) with self.assertRaises(ExpectedError) as exc: event.get() self.assertIs(exc.exception, err) for _ in range(3): event2 = AsyncResult() p.link(event2) with self.assertRaises(ExpectedError) as exc: event2.get() self.assertIs(exc.exception, err) def test_link_to_queue(self): p = gevent.spawn(lambda: 100) q = Queue() p.link(q.put) self.assertEqual(q.get().get(), 100) for _ in range(3): p.link(q.put) self.assertEqual(q.get().get(), 100) def test_link_to_channel(self): p1 = gevent.spawn(lambda: 101) p2 = gevent.spawn(lambda: 102) p3 = gevent.spawn(lambda: 103) q = Channel() p1.link(q.put) p2.link(q.put) p3.link(q.put) results = [q.get().get(), q.get().get(), q.get().get()] self.assertEqual(sorted(results), [101, 102, 103], results) class TestUnlink(greentest.TestCase): switch_expected = False def _test_func(self, p, link): link(dummy_test_func) self.assertEqual(1, p.has_links()) p.unlink(dummy_test_func) self.assertEqual(0, p.has_links()) link(self.setUp) self.assertEqual(1, p.has_links()) p.unlink(self.setUp) self.assertEqual(0, p.has_links()) p.kill() def test_func_link(self): p = gevent.spawn(dummy_test_func) self._test_func(p, p.link) def test_func_link_value(self): p = gevent.spawn(dummy_test_func) self._test_func(p, p.link_value) def test_func_link_exception(self): p = gevent.spawn(dummy_test_func) self._test_func(p, p.link_exception) class LinksTestCase(greentest.TestCase): link_method = None def link(self, p, listener=None): getattr(p, self.link_method)(listener) def set_links(self, p): event = AsyncResult() self.link(p, event) queue = Queue(1) self.link(p, queue.put) callback_flag = ['initial'] self.link(p, lambda *args: callback_flag.remove('initial')) for _ in range(10): self.link(p, AsyncResult()) self.link(p, Queue(1).put) return event, queue, callback_flag def set_links_timeout(self, link): # stuff that won't be touched event = AsyncResult() link(event) queue = Channel() link(queue.put) return event, queue def check_timed_out(self, event, queue): got = with_timeout(DELAY, event.get, 
timeout_value=X) self.assertIs(got, X) got = with_timeout(DELAY, queue.get, timeout_value=X) self.assertIs(got, X) def return25(): return 25 class TestReturn_link(LinksTestCase): link_method = 'link' p = None def cleanup(self): self.p.unlink_all() self.p = None def test_return(self): self.p = gevent.spawn(return25) for _ in range(3): self._test_return(self.p, 25) self.p.kill() def _test_return(self, p, result): event, queue, callback_flag = self.set_links(p) # stuff that will time out because there's no unhandled exception: xxxxx = self.set_links_timeout(p.link_exception) sleep(DELAY * 2) self.assertFalse(p) self.assertEqual(event.get(), result) self.assertEqual(queue.get().get(), result) sleep(DELAY) self.assertFalse(callback_flag) self.check_timed_out(*xxxxx) def _test_kill(self, p): event, queue, callback_flag = self.set_links(p) xxxxx = self.set_links_timeout(p.link_exception) p.kill() sleep(DELAY) self.assertFalse(p) self.assertIsInstance(event.get(), gevent.GreenletExit) self.assertIsInstance(queue.get().get(), gevent.GreenletExit) sleep(DELAY) self.assertFalse(callback_flag) self.check_timed_out(*xxxxx) def test_kill(self): p = self.p = gevent.spawn(sleep, DELAY) for _ in range(3): self._test_kill(p) class TestReturn_link_value(TestReturn_link): link_method = 'link_value' class TestRaise_link(LinksTestCase): link_method = 'link' def _test_raise(self, p): event, queue, callback_flag = self.set_links(p) xxxxx = self.set_links_timeout(p.link_value) sleep(DELAY) self.assertFalse(p, p) self.assertRaises(ExpectedError, event.get) self.assertEqual(queue.get(), p) sleep(DELAY) self.assertFalse(callback_flag, callback_flag) self.check_timed_out(*xxxxx) def test_raise(self): p = gevent.spawn(lambda: getcurrent().throw(ExpectedError('test_raise'))) for _ in range(3): self._test_raise(p) class TestRaise_link_exception(TestRaise_link): link_method = 'link_exception' class TestStuff(greentest.TestCase): def test_minimal_id(self): g = gevent.spawn(lambda: 1) self.assertGreaterEqual(g.minimal_ident, 0) self.assertGreaterEqual(g.parent.minimal_ident, 0) g.join() # don't leave dangling, breaks the leak checks def test_wait_noerrors(self): x = gevent.spawn(lambda: 1) y = gevent.spawn(lambda: 2) z = gevent.spawn(lambda: 3) gevent.joinall([x, y, z], raise_error=True) self.assertEqual([x.value, y.value, z.value], [1, 2, 3]) e = AsyncResult() x.link(e) self.assertEqual(e.get(), 1) x.unlink(e) e = AsyncResult() x.link(e) self.assertEqual(e.get(), 1) @ignores_leakcheck def test_wait_error(self): def x(): sleep(DELAY) return 1 x = gevent.spawn(x) y = gevent.spawn(lambda: getcurrent().throw(ExpectedError('test_wait_error'))) self.assertRaises(ExpectedError, gevent.joinall, [x, y], raise_error=True) self.assertRaises(ExpectedError, gevent.joinall, [y], raise_error=True) x.join() @ignores_leakcheck def test_joinall_exception_order(self): # if there're several exceptions raised, the earliest one must be raised by joinall def first(): sleep(0.1) raise ExpectedError('first') a = gevent.spawn(first) b = gevent.spawn(lambda: getcurrent().throw(ExpectedError('second'))) with self.assertRaisesRegex(ExpectedError, 'second'): gevent.joinall([a, b], raise_error=True) gevent.joinall([a, b]) def test_joinall_count_raise_error(self): # When joinall is asked not to raise an error, the 'count' param still # works. 
def raises_but_ignored(): raise ExpectedError("count") def sleep_forever(): while True: sleep(0.1) sleeper = gevent.spawn(sleep_forever) raiser = gevent.spawn(raises_but_ignored) gevent.joinall([sleeper, raiser], raise_error=False, count=1) self.assert_greenlet_ready(raiser) self.assert_greenlet_not_ready(sleeper) # Clean up our mess sleeper.kill() self.assert_greenlet_ready(sleeper) def test_multiple_listeners_error(self): # if there was an error while calling a callback # it should not prevent the other listeners from being called # also, all of the errors should be logged, check the output # manually that they are p = gevent.spawn(lambda: 5) results = [] def listener1(*_args): results.append(10) raise ExpectedError('listener1') def listener2(*_args): results.append(20) raise ExpectedError('listener2') def listener3(*_args): raise ExpectedError('listener3') p.link(listener1) p.link(listener2) p.link(listener3) sleep(DELAY * 10) self.assertIn(results, [[10, 20], [20, 10]]) p = gevent.spawn(lambda: getcurrent().throw(ExpectedError('test_multiple_listeners_error'))) results = [] p.link(listener1) p.link(listener2) p.link(listener3) sleep(DELAY * 10) self.assertIn(results, [[10, 20], [20, 10]]) class Results(object): def __init__(self): self.results = [] def listener1(self, p): p.unlink(self.listener2) self.results.append(5) raise ExpectedError('listener1') def listener2(self, p): p.unlink(self.listener1) self.results.append(5) raise ExpectedError('listener2') def listener3(self, _p): raise ExpectedError('listener3') def _test_multiple_listeners_error_unlink(self, _p, link): # notification must not happen after unlink even # though notification process has been already started results = self.Results() link(results.listener1) link(results.listener2) link(results.listener3) sleep(DELAY * 10) self.assertEqual([5], results.results) def test_multiple_listeners_error_unlink_Greenlet_link(self): p = gevent.spawn(lambda: 5) self._test_multiple_listeners_error_unlink(p, p.link) p.kill() def test_multiple_listeners_error_unlink_Greenlet_rawlink(self): p = gevent.spawn(lambda: 5) self._test_multiple_listeners_error_unlink(p, p.rawlink) def test_multiple_listeners_error_unlink_AsyncResult_rawlink(self): e = AsyncResult() gevent.spawn(e.set, 6) self._test_multiple_listeners_error_unlink(e, e.rawlink) def dummy_test_func(*_args): pass class A(object): def method(self): pass class Subclass(gevent.Greenlet): pass class TestStr(greentest.TestCase): def test_function(self): g = gevent.Greenlet.spawn(dummy_test_func) self.assert_nstr_endswith(g, 'at X: dummy_test_func>') self.assert_greenlet_not_ready(g) g.join() self.assert_greenlet_ready(g) self.assert_nstr_endswith(g, 'at X: dummy_test_func>') def test_method(self): g = gevent.Greenlet.spawn(A().method) self.assert_nstr_startswith(g, '>>') self.assert_greenlet_not_ready(g) g.join() self.assert_greenlet_ready(g) self.assert_nstr_endswith(g, 'at X: >>') def test_subclass(self): g = Subclass() self.assert_nstr_startswith(g, '') g = Subclass(None, 'question', answer=42) self.assert_nstr_endswith(g, " at X: _run('question', answer=42)>") class TestJoin(AbstractGenericWaitTestCase): def wait(self, timeout): g = gevent.spawn(gevent.sleep, 10) try: return g.join(timeout=timeout) finally: g.kill() class TestGet(AbstractGenericGetTestCase): def wait(self, timeout): g = gevent.spawn(gevent.sleep, 10) try: return g.get(timeout=timeout) finally: g.kill() class TestJoinAll0(AbstractGenericWaitTestCase): g = gevent.Greenlet() def wait(self, timeout): 
gevent.joinall([self.g], timeout=timeout) class TestJoinAll(AbstractGenericWaitTestCase): def wait(self, timeout): g = gevent.spawn(gevent.sleep, 10) try: gevent.joinall([g], timeout=timeout) finally: g.kill() class TestBasic(greentest.TestCase): def test_spawn_non_callable(self): self.assertRaises(TypeError, gevent.spawn, 1) self.assertRaises(TypeError, gevent.spawn_raw, 1) # Not passing the run argument, just the seconds argument self.assertRaises(TypeError, gevent.spawn_later, 1) # Passing both, but not implemented self.assertRaises(TypeError, gevent.spawn_later, 1, 1) def test_spawn_raw_kwargs(self): value = [] def f(*args, **kwargs): value.append(args) value.append(kwargs) g = gevent.spawn_raw(f, 1, name='value') gevent.sleep(0.01) self.assertFalse(g) self.assertEqual(value[0], (1,)) self.assertEqual(value[1], {'name': 'value'}) def test_simple_exit(self): link_test = [] def func(delay, return_value=4): gevent.sleep(delay) return return_value g = gevent.Greenlet(func, 0.01, return_value=5) g.rawlink(link_test.append) # use rawlink to avoid timing issues on Appveyor/Travis (not always successful) self.assertFalse(g, g) self.assertFalse(g.dead, g) self.assertFalse(g.started, g) self.assertFalse(g.ready(), g) self.assertFalse(g.successful(), g) self.assertIsNone(g.value, g) self.assertIsNone(g.exception, g) g.start() self.assertTrue(g, g) # changed self.assertFalse(g.dead, g) self.assertTrue(g.started, g) # changed self.assertFalse(g.ready(), g) self.assertFalse(g.successful(), g) self.assertIsNone(g.value, g) self.assertIsNone(g.exception, g) gevent.sleep(0.001) self.assertTrue(g) self.assertFalse(g.dead, g) self.assertTrue(g.started, g) self.assertFalse(g.ready(), g) self.assertFalse(g.successful(), g) self.assertIsNone(g.value, g) self.assertIsNone(g.exception, g) self.assertFalse(link_test) gevent.sleep(0.02) self.assertFalse(g, g) # changed self.assertTrue(g.dead, g) # changed self.assertFalse(g.started, g) # changed self.assertTrue(g.ready(), g) # changed self.assertTrue(g.successful(), g) # changed self.assertEqual(g.value, 5) # changed self.assertIsNone(g.exception, g) self._check_flaky_eq(link_test, g) def _check_flaky_eq(self, link_test, g): if not greentest.RUNNING_ON_CI: # TODO: Change this to assertEqualFlakyRaceCondition and figure # out what the CI issue is. self.assertEqual(link_test, [g]) # changed def test_error_exit(self): link_test = [] def func(delay, return_value=4): gevent.sleep(delay) error = ExpectedError('test_error_exit') setattr(error, 'myattr', return_value) raise error g = gevent.Greenlet(func, timing.SMALLEST_RELIABLE_DELAY, return_value=5) # use rawlink to avoid timing issues on Appveyor (not always successful) g.rawlink(link_test.append) g.start() gevent.sleep() gevent.sleep(timing.LARGE_TICK) self.assertFalse(g) self.assertTrue(g.dead) self.assertFalse(g.started) self.assertTrue(g.ready()) self.assertFalse(g.successful()) self.assertIsNone(g.value) # not changed self.assertEqual(g.exception.myattr, 5) self._check_flaky_eq(link_test, g) def test_exc_info_no_error(self): # Before running self.assertFalse(greenlet.Greenlet().exc_info) g = greenlet.Greenlet(gevent.sleep) g.start() g.join() self.assertFalse(g.exc_info) @greentest.skipOnCI( "Started getting a Fatal Python error on " "Github Actions on 2020-12-18, even with recursion limits " "in place. It was fine before that." ) def test_recursion_error(self): # https://github.com/gevent/gevent/issues/1704 # A RuntimeError: recursion depth exceeded # does not break things. 
# # However, sometimes, on some interpreter versions on some # systems, actually exhausting the stack results in "Fatal # Python error: Cannot recover from stack overflow.". So we # need to use a low recursion limit so that doesn't happen. # Doesn't seem to help though. # See https://github.com/gevent/gevent/runs/1577692901?check_suite_focus=true#step:21:46 import sys limit = sys.getrecursionlimit() self.addCleanup(sys.setrecursionlimit, limit) sys.setrecursionlimit(limit // 4) def recur(): recur() # This is expected to raise RecursionError errors = [] def handle_error(glet, t, v, tb): errors.append((glet, t, v, tb)) try: gevent.get_hub().handle_error = handle_error g = gevent.spawn(recur) def wait(): return gevent.joinall([g]) g2 = gevent.spawn(wait) gevent.joinall([g2]) finally: del gevent.get_hub().handle_error try: expected_exc = RecursionError except NameError: expected_exc = RuntimeError with self.assertRaises(expected_exc): g.get() self.assertFalse(g.successful()) self.assertTrue(g.dead) self.assertTrue(errors) self.assertEqual(1, len(errors)) self.assertIs(errors[0][0], g) self.assertEqual(errors[0][1], expected_exc) del errors[:] def test_tree_locals(self): g = g2 = None def func(): child = greenlet.Greenlet() self.assertIs(child.spawn_tree_locals, getcurrent().spawn_tree_locals) self.assertIs(child.spawning_greenlet(), getcurrent()) g = greenlet.Greenlet(func) g2 = greenlet.Greenlet(func) # Creating those greenlets did not give the main greenlet # a locals dict. self.assertFalse(hasattr(getcurrent(), 'spawn_tree_locals'), getcurrent()) self.assertIsNot(g.spawn_tree_locals, g2.spawn_tree_locals) g.start() g.join() raw = gevent.spawn_raw(func) self.assertIsNotNone(raw.spawn_tree_locals) self.assertIsNot(raw.spawn_tree_locals, g.spawn_tree_locals) self.assertIs(raw.spawning_greenlet(), getcurrent()) while not raw.dead: gevent.sleep(0.01) def test_add_spawn_callback(self): called = {'#': 0} def cb(gr): called['#'] += 1 gr._called_test = True gevent.Greenlet.add_spawn_callback(cb) try: g = gevent.spawn(lambda: None) self.assertTrue(hasattr(g, '_called_test')) g.join() self.assertEqual(called['#'], 1) g = gevent.spawn_later(1e-5, lambda: None) self.assertTrue(hasattr(g, '_called_test')) g.join() self.assertEqual(called['#'], 2) g = gevent.Greenlet(lambda: None) g.start() self.assertTrue(hasattr(g, '_called_test')) g.join() self.assertEqual(called['#'], 3) gevent.Greenlet.remove_spawn_callback(cb) g = gevent.spawn(lambda: None) self.assertFalse(hasattr(g, '_called_test')) g.join() self.assertEqual(called['#'], 3) finally: gevent.Greenlet.remove_spawn_callback(cb) def test_getframe_value_error(self): def get(): raise ValueError("call stack is not deep enough") try: ogf = greenlet.sys_getframe except AttributeError: # pragma: no cover # Must be running cython compiled raise unittest.SkipTest("Cannot mock when Cython compiled") greenlet.sys_getframe = get try: child = greenlet.Greenlet() self.assertIsNone(child.spawning_stack) finally: greenlet.sys_getframe = ogf def test_minimal_ident_parent_not_hub(self): g = gevent.spawn(lambda: 1) self.assertIs(g.parent, gevent.get_hub()) g.parent = getcurrent() try: self.assertIsNot(g.parent, gevent.get_hub()) with self.assertRaisesRegex((TypeError, # Cython AttributeError), # PyPy 'Cannot convert|ident_registry'): getattr(g, 'minimal_ident') finally: # Attempting to switch into this later, when we next cycle the # loop, would raise an InvalidSwitchError if we don't put # things back the way they were (or kill the greenlet) g.parent = gevent.get_hub() 
g.kill() class TestKill(greentest.TestCase): def setUp(self): super().setUp() hub = gevent.get_hub() patcher = Patch.object(hub, 'print_exception', autospec=True) patcher.start() self.addCleanup(patcher.stop) def __assertKilled(self, g, successful): self.assertFalse(g) self.assertTrue(g.dead) self.assertFalse(g.started) self.assertTrue(g.ready()) if successful: self.assertTrue(g.successful(), (repr(g), g.value, g.exception)) self.assertIsInstance(g.value, gevent.GreenletExit) self.assertIsNone(g.exception) else: self.assertFalse(g.successful(), (repr(g), g.value, g.exception)) self.assertNotIsInstance(g.value, gevent.GreenletExit) self.assertIsNotNone(g.exception) def assertKilled(self, g, successful=True): self.__assertKilled(g, successful) gevent.sleep(0.01) # spin the loop to make sure it doesn't run. self.__assertKilled(g, successful) def __kill_greenlet(self, g, block, killall, exc=None): if exc is None: exc = gevent.GreenletExit if killall: killer = functools.partial(gevent.killall, [g], exception=exc, block=block) else: killer = functools.partial(g.kill, exception=exc, block=block) killer() if not block: # Must spin the loop to take effect (if it was scheduled) gevent.sleep(timing.SMALLEST_RELIABLE_DELAY) successful = exc is None or (isinstance(exc, type) and issubclass(exc, gevent.GreenletExit)) self.assertKilled(g, successful) # kill second time must not hurt killer() self.assertKilled(g, successful) @staticmethod def _run_in_greenlet(result_collector): result_collector.append(1) def _start_greenlet(self, g): """ Subclasses should override. This doesn't actually start a greenlet. """ _after_kill_greenlet = _start_greenlet def _do_test(self, block, killall, exc=None): link_test = [] result = [] g = gevent.Greenlet(self._run_in_greenlet, result) g.link(link_test.append) self._start_greenlet(g) self.__kill_greenlet(g, block, killall, exc) self._after_kill_greenlet(g) self.assertFalse(result) self.assertEqual(link_test, [g]) def test_block(self): self._do_test(block=True, killall=False) def test_non_block(self): self._do_test(block=False, killall=False) def test_block_killall(self): self._do_test(block=True, killall=True) def test_non_block_killal(self): self._do_test(block=False, killall=True) def test_non_type_exception(self): self._do_test(block=True, killall=False, exc=Exception()) def test_non_type_exception_non_block(self): self._do_test(block=False, killall=False, exc=Exception()) def test_non_type_exception_killall(self): self._do_test(block=True, killall=True, exc=Exception()) def test_non_type_exception_killall_non_block(self): self._do_test(block=False, killall=True, exc=Exception()) def test_non_exc_exception(self): self._do_test(block=True, killall=False, exc=42) def test_non_exc_exception_non_block(self): self._do_test(block=False, killall=False, exc=42) def test_non_exc_exception_killall(self): self._do_test(block=True, killall=True, exc=42) def test_non_exc_exception_killall_non_block(self): self._do_test(block=False, killall=True, exc=42) class TestKillAfterStart(TestKill): def _start_greenlet(self, g): g.start() class TestKillAfterStartLater(TestKill): def _start_greenlet(self, g): g.start_later(timing.LARGE_TICK) class TestKillWhileRunning(TestKill): @staticmethod def _run_in_greenlet(result_collector): gevent.sleep(10) # The above should die with the GreenletExit exception, # so this should never run TestKill._run_in_greenlet(result_collector) def _after_kill_greenlet(self, g): TestKill._after_kill_greenlet(self, g) gevent.sleep(0.01) class 
TestKillallRawGreenlet(greentest.TestCase): def test_killall_raw(self): g = gevent.spawn_raw(lambda: 1) gevent.killall([g]) class TestContextManager(greentest.TestCase): def test_simple(self): with gevent.spawn(gevent.sleep, timing.SMALL_TICK) as g: self.assert_greenlet_spawned(g) # It is completed after the suite self.assert_greenlet_finished(g) def test_wait_in_suite(self): with gevent.spawn(self._raise_exception) as g: with self.assertRaises(greentest.ExpectedException): g.get() self.assert_greenlet_finished(g) @staticmethod def _raise_exception(): raise greentest.ExpectedException def test_greenlet_raises(self): with gevent.spawn(self._raise_exception) as g: pass self.assert_greenlet_finished(g) with self.assertRaises(greentest.ExpectedException): g.get() def test_join_raises(self): suite_ran = 0 with self.assertRaises(ExpectedJoinError): with GreenletRaisesJoin.spawn(gevent.sleep, timing.SMALL_TICK) as g: self.assert_greenlet_spawned(g) suite_ran = 1 self.assertTrue(suite_ran) self.assert_greenlet_finished(g) self.assertTrue(g.killed) def test_suite_body_raises(self, delay=None): greenlet_sleep = timing.SMALL_TICK if not delay else timing.LARGE_TICK with self.assertRaises(SuiteExpectedException): with GreenletRaisesJoin.spawn(gevent.sleep, greenlet_sleep) as g: self.assert_greenlet_spawned(g) if delay: g.raise_on_join = False gevent.sleep(delay) raise SuiteExpectedException self.assert_greenlet_finished(g) self.assertTrue(g.killed) if delay: self.assertTrue(g.joined) else: self.assertFalse(g.joined) self.assertFalse(g.successful()) with self.assertRaises(SuiteExpectedException): g.get() def test_suite_body_raises_with_delay(self): self.test_suite_body_raises(delay=timing.SMALL_TICK) class TestStart(greentest.TestCase): def test_start(self): g = gevent.spawn(gevent.sleep, timing.SMALL_TICK) self.assert_greenlet_spawned(g) g.start() self.assert_greenlet_started(g) g.join() self.assert_greenlet_finished(g) # cannot start again g.start() self.assert_greenlet_finished(g) class TestRef(greentest.TestCase): def test_init(self): self.switch_expected = False # in python-dbg mode this will check that Greenlet() does not create any circular refs gevent.Greenlet() def test_kill_scheduled(self): gevent.spawn(gevent.sleep, timing.LARGE_TICK).kill() def test_kill_started(self): g = gevent.spawn(gevent.sleep, timing.LARGE_TICK) try: gevent.sleep(timing.SMALLEST_RELIABLE_DELAY) finally: g.kill() @greentest.skipOnPurePython("Needs C extension") class TestCExt(greentest.TestCase): # pragma: no cover (we only do coverage on pure-Python) def test_c_extension(self): self.assertEqual(greenlet.Greenlet.__module__, 'gevent._gevent_cgreenlet') self.assertEqual(greenlet.SpawnedLink.__module__, 'gevent._gevent_cgreenlet') @greentest.skipWithCExtensions("Needs pure python") class TestPure(greentest.TestCase): def test_pure(self): self.assertEqual(greenlet.Greenlet.__module__, 'gevent.greenlet') self.assertEqual(greenlet.SpawnedLink.__module__, 'gevent.greenlet') X = object() del AbstractGenericGetTestCase del AbstractGenericWaitTestCase if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__greenletset.py000066400000000000000000000116501471441230600224670ustar00rootroot00000000000000from __future__ import print_function, division, absolute_import import time import gevent.testing as greentest from gevent.testing import timing import gevent from gevent import pool from gevent.timeout import Timeout DELAY = timing.LARGE_TICK class SpecialError(Exception): pass class Undead(object): def 
__init__(self): self.shot_count = 0 def __call__(self): while True: try: gevent.sleep(1) except SpecialError: break except: # pylint:disable=bare-except self.shot_count += 1 class Test(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT def test_basic(self): s = pool.Group() s.spawn(gevent.sleep, timing.LARGE_TICK) self.assertEqual(len(s), 1, s) s.spawn(gevent.sleep, timing.LARGE_TICK * 5) self.assertEqual(len(s), 2, s) gevent.sleep() gevent.sleep(timing.LARGE_TICK * 2 + timing.LARGE_TICK_MIN_ADJ) self.assertEqual(len(s), 1, s) gevent.sleep(timing.LARGE_TICK * 5 + timing.LARGE_TICK_MIN_ADJ) self.assertFalse(s) def test_waitall(self): s = pool.Group() s.spawn(gevent.sleep, DELAY) s.spawn(gevent.sleep, DELAY * 2) assert len(s) == 2, s start = time.time() s.join(raise_error=True) delta = time.time() - start self.assertFalse(s) self.assertEqual(len(s), 0) self.assertTimeWithinRange(delta, DELAY * 1.9, DELAY * 2.5) def test_kill_block(self): s = pool.Group() s.spawn(gevent.sleep, DELAY) s.spawn(gevent.sleep, DELAY * 2) assert len(s) == 2, s start = time.time() s.kill() self.assertFalse(s) self.assertEqual(len(s), 0) delta = time.time() - start assert delta < DELAY * 0.8, delta def test_kill_noblock(self): s = pool.Group() s.spawn(gevent.sleep, DELAY) s.spawn(gevent.sleep, DELAY * 2) assert len(s) == 2, s s.kill(block=False) assert len(s) == 2, s gevent.sleep(0.0001) self.assertFalse(s) self.assertEqual(len(s), 0) def test_kill_fires_once(self): u1 = Undead() u2 = Undead() p1 = gevent.spawn(u1) p2 = gevent.spawn(u2) def check(count1, count2): self.assertTrue(p1) self.assertTrue(p2) self.assertFalse(p1.dead, p1) self.assertFalse(p2.dead, p2) self.assertEqual(u1.shot_count, count1) self.assertEqual(u2.shot_count, count2) gevent.sleep(0.01) s = pool.Group([p1, p2]) self.assertEqual(len(s), 2, s) check(0, 0) s.killone(p1, block=False) check(0, 0) gevent.sleep(0) check(1, 0) s.killone(p1) check(1, 0) s.killone(p1) check(1, 0) s.kill(block=False) s.kill(block=False) s.kill(block=False) check(1, 0) gevent.sleep(DELAY) check(1, 1) X = object() kill_result = gevent.with_timeout(DELAY, s.kill, block=True, timeout_value=X) assert kill_result is X, repr(kill_result) assert len(s) == 2, s check(1, 1) p1.kill(SpecialError) p2.kill(SpecialError) def test_killall_subclass(self): p1 = GreenletSubclass.spawn(lambda: 1 / 0) p2 = GreenletSubclass.spawn(lambda: gevent.sleep(10)) s = pool.Group([p1, p2]) s.kill() def test_killall_iterable_argument_non_block(self): p1 = GreenletSubclass.spawn(lambda: gevent.sleep(0.5)) p2 = GreenletSubclass.spawn(lambda: gevent.sleep(0.5)) s = set() s.add(p1) s.add(p2) gevent.killall(s, block=False) gevent.sleep(0.5) for g in s: assert g.dead def test_killall_iterable_argument_timeout_not_started(self): def f(): try: gevent.sleep(1.5) except: # pylint:disable=bare-except gevent.sleep(1) p1 = GreenletSubclass.spawn(f) p2 = GreenletSubclass.spawn(f) s = set() s.add(p1) s.add(p2) gevent.killall(s, timeout=0.5) for g in s: self.assertTrue(g.dead, g) def test_killall_iterable_argument_timeout_started(self): def f(): try: gevent.sleep(1.5) except: # pylint:disable=bare-except gevent.sleep(1) p1 = GreenletSubclass.spawn(f) p2 = GreenletSubclass.spawn(f) s = set() s.add(p1) s.add(p2) # Get them both running. 
gevent.sleep(timing.SMALLEST_RELIABLE_DELAY) with self.assertRaises(Timeout): gevent.killall(s, timeout=0.5) for g in s: self.assertFalse(g.dead, g) class GreenletSubclass(gevent.Greenlet): pass if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__greenness.py000066400000000000000000000053461471441230600221440ustar00rootroot00000000000000# Copyright (c) 2008 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """ Trivial test that a single process (and single thread) can both read and write from green sockets (when monkey patched). """ from __future__ import print_function from __future__ import absolute_import from __future__ import division from gevent import monkey monkey.patch_all() import gevent.testing as greentest try: from urllib import request as urllib2 from http.server import HTTPServer from http.server import SimpleHTTPRequestHandler except ImportError: # Python 2 import urllib2 from BaseHTTPServer import HTTPServer from SimpleHTTPServer import SimpleHTTPRequestHandler import gevent from gevent.testing import params class QuietHandler(SimpleHTTPRequestHandler, object): def log_message(self, *args): # pylint:disable=arguments-differ self.server.messages += ((args,),) class Server(HTTPServer, object): messages = () requests_handled = 0 def __init__(self): HTTPServer.__init__(self, params.DEFAULT_BIND_ADDR_TUPLE, QuietHandler) def handle_request(self): HTTPServer.handle_request(self) self.requests_handled += 1 class TestGreenness(greentest.TestCase): check_totalrefcount = False def test_urllib2(self): httpd = Server() server_greenlet = gevent.spawn(httpd.handle_request) port = httpd.socket.getsockname()[1] rsp = urllib2.urlopen('http://127.0.0.1:%s' % port) rsp.read() rsp.close() server_greenlet.join() self.assertEqual(httpd.requests_handled, 1) httpd.server_close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__hub.py000066400000000000000000000326401471441230600207260ustar00rootroot00000000000000# Copyright (c) 2009 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # 
The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. import re import time import unittest import gevent.testing as greentest import gevent.testing.timing import gevent from gevent import socket from gevent.hub import Waiter, get_hub from gevent._compat import NativeStrIO from gevent._compat import get_this_psutil_process DELAY = 0.1 class TestCloseSocketWhilePolling(greentest.TestCase): def test(self): sock = socket.socket() self._close_on_teardown(sock) t = get_hub().loop.timer(0) t.start(sock.close) with self.assertRaises(socket.error): try: sock.connect(('python.org', 81)) finally: t.close() gevent.sleep(0) class TestExceptionInMainloop(greentest.TestCase): def test_sleep(self): # even if there was an error in the mainloop, the hub should continue to work start = time.time() gevent.sleep(DELAY) delay = time.time() - start delay_range = DELAY * 0.9 self.assertTimeWithinRange(delay, DELAY - delay_range, DELAY + delay_range) error = greentest.ExpectedException('TestExceptionInMainloop.test_sleep/fail') def fail(): raise error with get_hub().loop.timer(0.001) as t: t.start(fail) self.expect_one_error() start = time.time() gevent.sleep(DELAY) delay = time.time() - start self.assert_error(value=error) self.assertTimeWithinRange(delay, DELAY - delay_range, DELAY + delay_range) class TestSleep(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): gevent.sleep(timeout) def test_simple(self): gevent.sleep(0) class TestWaiterGet(gevent.testing.timing.AbstractGenericWaitTestCase): def setUp(self): super(TestWaiterGet, self).setUp() self.waiter = Waiter() def wait(self, timeout): with get_hub().loop.timer(timeout) as evt: evt.start(self.waiter.switch, None) return self.waiter.get() class TestWaiter(greentest.TestCase): def test(self): waiter = Waiter() self.assertEqual(str(waiter), '') waiter.switch(25) self.assertEqual(str(waiter), '') self.assertEqual(waiter.get(), 25) waiter = Waiter() waiter.throw(ZeroDivisionError) assert re.match('^ count_before: # We could be off by exactly 1. Not entirely clear where. # But it only happens the first time. count_after -= 1 # If we were run in multiple process, our count could actually have # gone down due to the GC's we did. 
self.assertEqual(count_after, count_before) @ignores_leakcheck def test_join_in_new_thread_doesnt_leak_hub_or_greenlet(self): # https://github.com/gevent/gevent/issues/1601 import threading clean = self.__clean def thread_main(): g = gevent.Greenlet(run=lambda: 0) g.start() g.join() hub = gevent.get_hub() hub.join() hub.destroy(destroy_loop=True) del hub def tester(main): t = threading.Thread(target=main) t.start() t.join() clean() with self.assert_no_greenlet_growth(): for _ in range(10): tester(thread_main) del tester del thread_main @ignores_leakcheck def test_destroy_in_main_thread_from_new_thread(self): # https://github.com/gevent/gevent/issues/1631 import threading clean = self.__clean class Thread(threading.Thread): hub = None def run(self): g = gevent.Greenlet(run=lambda: 0) g.start() g.join() del g hub = gevent.get_hub() hub.join() self.hub = hub def tester(Thread, clean): t = Thread() t.start() t.join() t.hub.destroy(destroy_loop=True) t.hub = None del t clean() # Unfortunately, this WILL leak greenlets, # at least on CPython. The frames of the dead threads # are referenced by the hub in some sort of cycle, and # greenlets don't particpate in GC. for _ in range(10): tester(Thread, clean) del tester del Thread if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__hub_join_timeout.py000066400000000000000000000055411471441230600235130ustar00rootroot00000000000000import functools import unittest import gevent import gevent.core from gevent.event import Event from gevent.testing.testcase import TimeAssertMixin SMALL_TICK = 0.05 # setting up signal does not affect join() gevent.signal_handler(1, lambda: None) # wouldn't work on windows def repeated(func, repetitions=2): @functools.wraps(func) def f(self): for _ in range(repetitions): func(self) return f class Test(TimeAssertMixin, unittest.TestCase): @repeated def test_callback(self): # exiting because the spawned greenlet finished execution (spawn (=callback) variant) x = gevent.spawn(lambda: 5) with self.runs_in_no_time(): result = gevent.wait(timeout=10) self.assertTrue(result) self.assertTrue(x.dead, x) self.assertEqual(x.value, 5) @repeated def test_later(self): # exiting because the spawned greenlet finished execution (spawn_later (=timer) variant) x = gevent.spawn_later(SMALL_TICK, lambda: 5) with self.runs_in_given_time(SMALL_TICK): result = gevent.wait(timeout=10) self.assertTrue(result) self.assertTrue(x.dead, x) @repeated def test_timeout(self): # exiting because of timeout (the spawned greenlet still runs) x = gevent.spawn_later(10, lambda: 5) with self.runs_in_given_time(SMALL_TICK): result = gevent.wait(timeout=SMALL_TICK) self.assertFalse(result) self.assertFalse(x.dead, x) x.kill() with self.runs_in_no_time(): result = gevent.wait() self.assertTrue(result) @repeated def test_event(self): # exiting because of event (the spawned greenlet still runs) x = gevent.spawn_later(10, lambda: 5) event = Event() event_set = gevent.spawn_later(SMALL_TICK, event.set) with self.runs_in_given_time(SMALL_TICK): result = gevent.wait([event]) self.assertEqual(result, [event]) self.assertFalse(x.dead, x) self.assertTrue(event_set.dead) self.assertTrue(event.is_set) x.kill() with self.runs_in_no_time(): result = gevent.wait() self.assertTrue(result) @repeated def test_ref_arg(self): # checking "ref=False" argument gevent.get_hub().loop.timer(10, ref=False).start(lambda: None) with self.runs_in_no_time(): result = gevent.wait() self.assertTrue(result) @repeated def test_ref_attribute(self): # checking "ref=False" 
attribute w = gevent.get_hub().loop.timer(10) w.start(lambda: None) w.ref = False with self.runs_in_no_time(): result = gevent.wait() self.assertTrue(result) class TestAgain(Test): "Repeat the same tests" if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__import_blocking_in_greenlet.py000066400000000000000000000021561471441230600257040ustar00rootroot00000000000000#!/usr/bin/python # See https://github.com/gevent/gevent/issues/108 import gevent from gevent import monkey import_errors = [] def some_func(): try: from _blocks_at_top_level import x assert x == 'done' except ImportError as e: import_errors.append(e) raise if __name__ == '__main__': import sys if sys.version_info[:2] == (3, 13): import unittest class Test(unittest.TestCase): def test_it(self): self.skipTest( 'On Python 3.13, no matter how I arrange the PYTHONPATH/sys.path ' 'we get "cannot import name x from partially initialized module ' '_blocks_at_top_level". It is unclear why. Limiting the scope of ' 'the exclusion for now.' ) unittest.main() else: monkey.patch_all() import sys import os p = os.path.dirname(__file__) sys.path.insert(0, p) gs = [gevent.spawn(some_func) for i in range(2)] gevent.joinall(gs) assert not import_errors, import_errors gevent-24.11.1/src/gevent/tests/test__import_wait.py000066400000000000000000000003741471441230600225050ustar00rootroot00000000000000# https://github.com/gevent/gevent/issues/652 and 651 from gevent import monkey monkey.patch_all() try: import _import_wait # pylint:disable=import-error except ModuleNotFoundError: from gevent.tests import _import_wait assert _import_wait.x gevent-24.11.1/src/gevent/tests/test__issue112.py000066400000000000000000000005221471441230600215160ustar00rootroot00000000000000import sys import unittest import threading import gevent import gevent.monkey gevent.monkey.patch_all() @unittest.skipUnless( sys.version_info[0] == 2, "Only on Python 2" ) class Test(unittest.TestCase): def test(self): self.assertIs(threading._sleep, gevent.sleep) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__issue1686.py000066400000000000000000000054761471441230600216340ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests for https://github.com/gevent/gevent/issues/1686 which is about destroying a hub when there are active callbacks or IO in operation. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import unittest from gevent import testing as greentest # Don't let the testrunner put us in a process with other # tests; we are strict on the state of the hub and greenlets. # pragma: testrunner-no-combine @greentest.skipOnWindows("Uses os.fork") class TestDestroyInChildWithActiveSpawn(unittest.TestCase): def test(self): # pylint:disable=too-many-locals # If this test is broken, there are a few failure modes. # - In the original examples, the parent process just hangs, because the # child has raced ahead, spawned the greenlet and read the data. When the # greenlet goes to read in the parent, it blocks, and the hub and loop # wait for it. # - Here, our child detects the greenlet ran when it shouldn't and # raises an error, which translates to a non-zero exit status, # which the parent checks for and fails by raising an exception before # returning control to the hub. We can replicate the hang by removing the # assertion in the child. 
from time import sleep as hang from gevent import get_hub from gevent import spawn from gevent.socket import wait_read from gevent.os import nb_read from gevent.os import nb_write from gevent.os import make_nonblocking from gevent.os import fork from gevent.os import waitpid pipe_read_fd, pipe_write_fd = os.pipe() make_nonblocking(pipe_read_fd) make_nonblocking(pipe_write_fd) run = [] def reader(): run.append(1) return nb_read(pipe_read_fd, 4096) # Put data in the pipe DATA = b'test' nb_write(pipe_write_fd, DATA) # Make sure we're ready to read it wait_read(pipe_read_fd) # Schedule a greenlet to start reader = spawn(reader) hub = get_hub() pid = fork() if pid == 0: # Child destroys the hub. The reader should not have run. hub.destroy(destroy_loop=True) self.assertFalse(run) os._exit(0) return # pylint:disable=unreachable # The parent. # Briefly prevent us from spinning our event loop. hang(0.5) wait_child_result = waitpid(pid, 0) self.assertEqual(wait_child_result, (pid, 0)) # We should get the data; the greenlet only runs in the parent. data = reader.get() self.assertEqual(run, [1]) self.assertEqual(data, DATA) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue1864.py000066400000000000000000000024131471441230600216160ustar00rootroot00000000000000import sys import unittest from gevent import testing as greentest class TestSubnormalFloatsAreNotDisabled(unittest.TestCase): @greentest.skipOnCI('Some of our tests we compile with -Ofast, which breaks this.') def test_subnormal_is_not_zero(self): # Enabling the -Ofast compiler flag resulted in subnormal floats getting # disabled the moment when gevent was imported. This impacted libraries # that expect subnormal floats to be enabled. # # NOTE: This test is supposed to catch that. It doesn't seem to work perfectly, though. # The test passes under Python 2 on macOS no matter whether -ffast-math is given or not; # perhaps this is a difference in clang vs gcc? In contrast, the test on Python 2.7 always # *fails* on GitHub actions (in both CPython 2.7 and PyPy). We're far past the EOL of # Python 2.7 so I'm not going to spend much time investigating. __import__('gevent') # `sys.float_info.min` is the minimum representable positive normalized # float, so dividing it by two gives us a positive subnormal float, # as long as subnormals floats are not disabled. self.assertGreater(sys.float_info.min / 2, 0.0) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/gevent/tests/test__issue230.py000066400000000000000000000007641471441230600215270ustar00rootroot00000000000000import gevent.monkey gevent.monkey.patch_all() import socket import multiprocessing from gevent import testing as greentest # Make sure that using the resolver in a forked process # doesn't hang forever. 
def block(): socket.getaddrinfo('localhost', 8001) class Test(greentest.TestCase): def test(self): socket.getaddrinfo('localhost', 8001) p = multiprocessing.Process(target=block) p.start() p.join() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue330.py000066400000000000000000000044351471441230600215270ustar00rootroot00000000000000# A greenlet that's killed before it is ever started # should never be switched to import gevent import gevent.testing as greentest class MyException(Exception): pass class TestSwitch(greentest.TestCase): def setUp(self): super(TestSwitch, self).setUp() self.switched_to = [False, False] self.caught = None def should_never_run(self, i): # pragma: no cover self.switched_to[i] = True def check(self, g, g2): gevent.joinall((g, g2)) self.assertEqual([False, False], self.switched_to) # They both have a GreenletExit as their value self.assertIsInstance(g.value, gevent.GreenletExit) self.assertIsInstance(g2.value, gevent.GreenletExit) # They both have no reported exc_info self.assertIsNone(g.exc_info) self.assertIsNone(g2.exc_info) self.assertIsNone(g.exception) self.assertIsNone(g2.exception) def test_gevent_kill(self): g = gevent.spawn(self.should_never_run, 0) # create but do not switch to g2 = gevent.spawn(self.should_never_run, 1) # create but do not switch to # Using gevent.kill gevent.kill(g) gevent.kill(g2) self.check(g, g2) def test_greenlet_kill(self): # killing directly g = gevent.spawn(self.should_never_run, 0) g2 = gevent.spawn(self.should_never_run, 1) g.kill() g2.kill() self.check(g, g2) def test_throw(self): # throwing g = gevent.spawn(self.should_never_run, 0) g2 = gevent.spawn(self.should_never_run, 1) g.throw(gevent.GreenletExit) g2.throw(gevent.GreenletExit) self.check(g, g2) def catcher(self): try: while True: gevent.sleep(0) except MyException as e: self.caught = e def test_kill_exception(self): # Killing with gevent.kill gets the right exception, # and we can pass exception objects, not just exception classes. 
g = gevent.spawn(self.catcher) g.start() gevent.sleep() gevent.kill(g, MyException()) gevent.sleep() self.assertIsInstance(self.caught, MyException) self.assertIsNone(g.exception, MyException) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue467.py000066400000000000000000000022651471441230600215410ustar00rootroot00000000000000import gevent from gevent import testing as greentest #import socket # on windows # iwait should not raise `LoopExit: This operation would block forever` # or `AssertionError: Invalid switch into ...` # if the caller of iwait causes greenlets to switch in between # return values def worker(i): # Have one of them raise an exception to test that case if i == 2: raise ValueError(i) return i class Test(greentest.TestCase): def test(self): finished = 0 # Wait on a group that includes one that will already be # done, plus some that will finish as we watch done_worker = gevent.spawn(worker, "done") gevent.joinall((done_worker,)) workers = [gevent.spawn(worker, i) for i in range(3)] workers.append(done_worker) for _ in gevent.iwait(workers): finished += 1 # Simulate doing something that causes greenlets to switch; # a non-zero timeout is crucial try: gevent.sleep(0.01) except ValueError as ex: self.assertEqual(ex.args[0], 2) self.assertEqual(finished, 4) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue6.py000066400000000000000000000027721471441230600213710ustar00rootroot00000000000000from __future__ import print_function from __future__ import absolute_import from __future__ import division import sys if not sys.argv[1:]: from subprocess import Popen, PIPE # not on Py2 pylint:disable=consider-using-with p = Popen([sys.executable, __file__, 'subprocess'], stdin=PIPE, stdout=PIPE, stderr=PIPE) out, err = p.communicate(b'hello world\n') code = p.poll() assert p.poll() == 0, (out, err, code) assert out.strip() == b'11 chars.', (out, err, code) # XXX: This is seen sometimes to fail on Travis with the following value in err but a code of 0; # it seems load related: # 'Unhandled exception in thread started by \nsys.excepthook is missing\nlost sys.stderr\n'. # If warnings are enabled, Python 3 has started producing this: # '...importlib/_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ # or __package__, falling back on __name__ and __path__\n return f(*args, **kwds)\n' assert err == b'' or b'sys.excepthook' in err or b'Warning' in err, (out, err, code) elif sys.argv[1:] == ['subprocess']: # pragma: no cover import gevent import gevent.monkey gevent.monkey.patch_all(sys=True) def printline(): try: line = raw_input() except NameError: line = input() # pylint:disable=bad-builtin print('%s chars.' % len(line)) sys.stdout.flush() gevent.spawn(printline).join() else: # pragma: no cover sys.exit('Invalid arguments: %r' % (sys.argv, )) gevent-24.11.1/src/gevent/tests/test__issue600.py000066400000000000000000000025521471441230600215250ustar00rootroot00000000000000# Make sure that libev child watchers, implicitly installed through the use # of subprocess, do not cause waitpid() to fail to poll for processes. # NOTE: This was only reproducible under python 2. 
from __future__ import print_function import gevent from gevent import monkey monkey.patch_all() import sys from multiprocessing import Process from subprocess import Popen, PIPE from gevent import testing as greentest def f(sleep_sec): gevent.sleep(sleep_sec) class TestIssue600(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_invoke(self): # Run a subprocess through Popen to make sure # libev is handling SIGCHLD. This could *probably* be simplified to use # just hub.loop.install_sigchld # (no __enter__/__exit__ on Py2) pylint:disable=consider-using-with p = Popen([sys.executable, '-V'], stdout=PIPE, stderr=PIPE) gevent.sleep(0) p.communicate() gevent.sleep(0) def test_process(self): # Launch p = Process(target=f, args=(0.5,)) p.start() with gevent.Timeout(3): # Poll for up to 10 seconds. If the bug exists, # this will timeout because our subprocess should # be long gone by now p.join(10) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue607.py000066400000000000000000000025121471441230600215300ustar00rootroot00000000000000# A greenlet that's killed with an exception should fail. import gevent.testing as greentest import gevent class ExpectedError(greentest.ExpectedException): pass def f(): gevent.sleep(999) class TestKillWithException(greentest.TestCase): def test_kill_without_exception(self): g = gevent.spawn(f) g.kill() assert g.successful() assert isinstance(g.get(), gevent.GreenletExit) def test_kill_with_exception(self): # issue-607 pointed this case. g = gevent.spawn(f) with gevent.get_hub().ignoring_expected_test_error(): # Hmm, this only needs the `with ignoring...` in # PURE_PYTHON mode (or PyPy). g.kill(ExpectedError) self.assertFalse(g.successful()) self.assertRaises(ExpectedError, g.get) self.assertIsNone(g.value) self.assertIsInstance(g.exception, ExpectedError) def test_kill_with_exception_after_started(self): with gevent.get_hub().ignoring_expected_test_error(): g = gevent.spawn(f) g.join(0) g.kill(ExpectedError) self.assertFalse(g.successful()) self.assertRaises(ExpectedError, g.get) self.assertIsNone(g.value) self.assertIsInstance(g.exception, ExpectedError) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue639.py000066400000000000000000000003261471441230600215360ustar00rootroot00000000000000# Test idle import gevent from gevent import testing as greentest class Test(greentest.TestCase): def test(self): gevent.sleep() gevent.idle() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__issue_728.py000066400000000000000000000003241471441230600216720ustar00rootroot00000000000000#!/usr/bin/env python from gevent.monkey import patch_all patch_all() if __name__ == '__main__': # Reproducing #728 requires a series of nested # imports __import__('_imports_imports_at_top_level') gevent-24.11.1/src/gevent/tests/test__issues461_471.py000066400000000000000000000074611471441230600223140ustar00rootroot00000000000000'''Test for GitHub issues 461 and 471. When moving to Python 3, handling of KeyboardInterrupt exceptions caused by a Ctrl-C raised an exception while printing the traceback for a greenlet preventing the process from exiting. This test tests for proper handling of KeyboardInterrupt. 
''' import sys if sys.argv[1:] == ['subprocess']: # pragma: no cover import gevent def task(): sys.stdout.write('ready\n') sys.stdout.flush() gevent.sleep(30) try: gevent.spawn(task).get() except KeyboardInterrupt: pass sys.exit(0) else: import signal from subprocess import Popen, PIPE import time import unittest import gevent.testing as greentest from gevent.testing.sysinfo import CFFI_BACKEND from gevent.testing.sysinfo import RUN_COVERAGE from gevent.testing.sysinfo import WIN from gevent.testing.sysinfo import PYPY3 class Test(unittest.TestCase): @unittest.skipIf( (CFFI_BACKEND and RUN_COVERAGE) or (PYPY3 and WIN), "Interferes with the timing; times out waiting for the child") def test_hang(self): # XXX: Why does PyPy3 on Win fail to kill the child? (This was before we switched # to pypy3w; perhaps that makes a difference?) if WIN: from subprocess import CREATE_NEW_PROCESS_GROUP kwargs = {'creationflags': CREATE_NEW_PROCESS_GROUP} else: kwargs = {} # (not on Py2) pylint:disable=consider-using-with p = Popen([sys.executable, __file__, 'subprocess'], stdout=PIPE, **kwargs) line = p.stdout.readline() if not isinstance(line, str): line = line.decode('ascii') # Windows needs the \n in the string to write (because of buffering), but # because of newline handling it doesn't make it through the read; whereas # it does on other platforms. Universal newlines is broken on Py3, so the best # thing to do is to strip it line = line.strip() self.assertEqual(line, 'ready') # On Windows, we have to send the CTRL_BREAK_EVENT (which seems to terminate the process); SIGINT triggers # "ValueError: Unsupported signal: 2". The CTRL_C_EVENT is ignored on Python 3 (but not Python 2). # So this test doesn't test much on Windows. signal_to_send = signal.SIGINT if not WIN else getattr(signal, 'CTRL_BREAK_EVENT') p.send_signal(signal_to_send) # Wait a few seconds for child process to die. Sometimes signal delivery is delayed # or even swallowed by Python, so send the signal a few more times if necessary wait_seconds = 25.0 now = time.time() midtime = now + (wait_seconds / 2.0) endtime = time.time() + wait_seconds while time.time() < endtime: if p.poll() is not None: break if time.time() > midtime: p.send_signal(signal_to_send) midtime = endtime + 1 # only once time.sleep(0.1) else: # Kill unresponsive child and exit with error 1 p.terminate() p.wait() raise AssertionError("Failed to wait for child") # If we get here, it's because we caused the process to exit; it # didn't hang. Under Windows, however, we have to use CTRL_BREAK_EVENT, # which has an arbitrary returncode depending on versions (so does CTRL_C_EVENT # on Python 2). We still # count this as success. self.assertEqual(p.returncode if not WIN else 0, 0) p.stdout.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__iwait.py000066400000000000000000000022651471441230600212650ustar00rootroot00000000000000import gevent import gevent.testing as greentest from gevent.lock import Semaphore class Testiwait(greentest.TestCase): def test_noiter(self): # Test that gevent.iwait returns objects which can be iterated upon # without additional calls to iter() sem1 = Semaphore() sem2 = Semaphore() gevent.spawn(sem1.release) ready = next(gevent.iwait((sem1, sem2))) self.assertEqual(sem1, ready) def test_iwait_partial(self): # Test that the iwait context manager allows the iterator to be # consumed partially without a memory leak. 
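# Illustrative sketch (added commentary, not part of the original test): the API
# shape exercised above. gevent.iwait() can be used as a context manager; the
# body may stop iterating after only some objects become ready, and leaving the
# block removes the links iwait attached to the objects that never fired. The
# names below are hypothetical.
def _sketch_iwait_partial_consumption():
    import gevent as _gevent
    from gevent.lock import Semaphore as _Semaphore

    sem_released = _Semaphore()
    sem_untouched = _Semaphore()
    releaser = _gevent.spawn(sem_released.release)

    with _gevent.iwait((sem_released, sem_untouched), timeout=0.01) as waiting:
        # Consume just the first ready object, then abandon the iterator.
        assert next(waiting) is sem_released
    # Exiting the block cleans up after the unconsumed sem_untouched.
    releaser.get()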
sem = Semaphore() let = gevent.spawn(sem.release) with gevent.iwait((sem,), timeout=0.01) as iterator: self.assertEqual(sem, next(iterator)) let.get() def test_iwait_nogarbage(self): sem1 = Semaphore() sem2 = Semaphore() let = gevent.spawn(sem1.release) with gevent.iwait((sem1, sem2)) as iterator: self.assertEqual(sem1, next(iterator)) self.assertEqual(sem2.linkcount(), 1) self.assertEqual(sem2.linkcount(), 0) let.get() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__joinall.py000066400000000000000000000004501471441230600215720ustar00rootroot00000000000000import gevent from gevent import testing as greentest class Test(greentest.TestCase): def test(self): def func(): pass a = gevent.spawn(func) b = gevent.spawn(func) gevent.joinall([a, b, a]) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__local.py000066400000000000000000000267351471441230600212520ustar00rootroot00000000000000import gevent.testing as greentest from copy import copy # Comment the line below to see that the standard thread.local is working correct from gevent import monkey; monkey.patch_all() from threading import local from threading import Thread from zope import interface try: from collections.abc import Mapping except ImportError: from collections import Mapping # pylint:disable=deprecated-class class ReadProperty(object): """A property that can be overridden""" # A non-data descriptor def __get__(self, inst, klass): return 42 if inst is not None else self class A(local): __slots__ = ['initialized', 'obj'] path = '' type_path = 'MyPath' read_property = ReadProperty() def __init__(self, obj): super(A, self).__init__() if not hasattr(self, 'initialized'): self.obj = obj self.path = '' class Obj(object): pass # These next two classes have to be global to avoid the leakchecks deleted_sentinels = [] created_sentinels = [] class Sentinel(object): def __del__(self): deleted_sentinels.append(id(self)) class MyLocal(local): CLASS_PROP = 42 def __init__(self): local.__init__(self) self.sentinel = Sentinel() created_sentinels.append(id(self.sentinel)) @property def desc(self): return self class MyLocalSubclass(MyLocal): pass class WithGetattr(local): def __getattr__(self, name): if name == 'foo': return 42 return super(WithGetattr, self).__getattr__(name) # pylint:disable=no-member class LocalWithABC(local, Mapping): def __getitem__(self, name): return self.d[name] def __iter__(self): return iter(self.d) def __len__(self): return len(self.d) class LocalWithStaticMethod(local): @staticmethod def a_staticmethod(): return 42 class LocalWithClassMethod(local): @classmethod def a_classmethod(cls): return cls class TestGeventLocal(greentest.TestCase): # pylint:disable=attribute-defined-outside-init,blacklisted-name def setUp(self): del deleted_sentinels[:] del created_sentinels[:] tearDown = setUp def test_create_local_subclass_init_args(self): with self.assertRaisesRegex(TypeError, "Initialization arguments are not supported"): local("foo") with self.assertRaisesRegex(TypeError, "Initialization arguments are not supported"): local(kw="foo") def test_local_opts_not_subclassed(self): l = local() l.attr = 1 self.assertEqual(l.attr, 1) def test_cannot_set_delete_dict(self): l = local() with self.assertRaises(AttributeError): l.__dict__ = 1 with self.assertRaises(AttributeError): del l.__dict__ def test_delete_with_no_dict(self): l = local() with self.assertRaises(AttributeError): delattr(l, 'thing') def del_local(): with self.assertRaises(AttributeError): delattr(l, 
'thing') t = Thread(target=del_local) t.start() t.join() def test_slot_and_type_attributes(self): a = A(Obj()) a.initialized = 1 self.assertEqual(a.initialized, 1) # The slot is shared def demonstrate_slots_shared(): self.assertEqual(a.initialized, 1) a.initialized = 2 greenlet = Thread(target=demonstrate_slots_shared) greenlet.start() greenlet.join() self.assertEqual(a.initialized, 2) # The slot overrides dict values a.__dict__['initialized'] = 42 # pylint:disable=unsupported-assignment-operation self.assertEqual(a.initialized, 2) # Deleting the slot deletes the slot, but not the dict del a.initialized self.assertFalse(hasattr(a, 'initialized')) self.assertIn('initialized', a.__dict__) # We can delete the 'path' ivar # and fall back to the type del a.path self.assertEqual(a.path, '') with self.assertRaises(AttributeError): del a.path # A read property calls get self.assertEqual(a.read_property, 42) a.read_property = 1 self.assertEqual(a.read_property, 1) self.assertIsInstance(A.read_property, ReadProperty) # Type attributes can be read self.assertEqual(a.type_path, 'MyPath') self.assertNotIn('type_path', a.__dict__) # and replaced in the dict a.type_path = 'Local' self.assertEqual(a.type_path, 'Local') self.assertIn('type_path', a.__dict__) def test_attribute_error(self): # pylint:disable=attribute-defined-outside-init a = A(Obj()) with self.assertRaises(AttributeError): getattr(a, 'fizz_buzz') def set_fizz_buzz(): a.fizz_buzz = 1 greenlet = Thread(target=set_fizz_buzz) greenlet.start() greenlet.join() with self.assertRaises(AttributeError): getattr(a, 'fizz_buzz') def test_getattr_called(self): getter = WithGetattr() self.assertEqual(42, getter.foo) getter.foo = 'baz' self.assertEqual('baz', getter.foo) def test_copy(self): a = A(Obj()) a.path = '123' a.obj.echo = 'test' b = copy(a) # Copy makes a shallow copy. Meaning that the attribute path # has to be independent in the original and the copied object because the # value is a string, but the attribute obj should be just reference to # the instance of the class Obj self.assertEqual(a.path, b.path, 'The values in the two objects must be equal') self.assertEqual(a.obj, b.obj, 'The values must be equal') b.path = '321' self.assertNotEqual(a.path, b.path, 'The values in the two objects must be different') a.obj.echo = "works" self.assertEqual(a.obj, b.obj, 'The values must be equal') def test_copy_no_subclass(self): a = local() setattr(a, 'thing', 42) b = copy(a) self.assertEqual(b.thing, 42) self.assertIsNot(a.__dict__, b.__dict__) def test_objects(self): # Test which failed in the eventlet?! 
a = A({}) a.path = '123' b = A({'one': 2}) b.path = '123' self.assertEqual(a.path, b.path, 'The values in the two objects must be equal') b.path = '321' self.assertNotEqual(a.path, b.path, 'The values in the two objects must be different') def test_class_attr(self, kind=MyLocal): mylocal = kind() self.assertEqual(42, mylocal.CLASS_PROP) mylocal.CLASS_PROP = 1 self.assertEqual(1, mylocal.CLASS_PROP) self.assertEqual(mylocal.__dict__['CLASS_PROP'], 1) # pylint:disable=unsubscriptable-object del mylocal.CLASS_PROP self.assertEqual(42, mylocal.CLASS_PROP) self.assertIs(mylocal, mylocal.desc) def test_class_attr_subclass(self): self.test_class_attr(kind=MyLocalSubclass) def test_locals_collected_when_greenlet_dead_but_still_referenced(self): # https://github.com/gevent/gevent/issues/387 import gevent my_local = MyLocal() my_local.sentinel = None greentest.gc_collect_if_needed() del created_sentinels[:] del deleted_sentinels[:] def demonstrate_my_local(): # Get the important parts getattr(my_local, 'sentinel') # Create and reference greenlets greenlets = [Thread(target=demonstrate_my_local) for _ in range(5)] for t in greenlets: t.start() gevent.sleep() self.assertEqual(len(created_sentinels), len(greenlets)) for g in greenlets: assert not g.is_alive() gevent.sleep() # let the callbacks run greentest.gc_collect_if_needed() # The sentinels should be gone too self.assertEqual(len(deleted_sentinels), len(greenlets)) @greentest.skipOnLibuvOnPyPyOnWin("GC makes this non-deterministic, especially on Windows") def test_locals_collected_when_unreferenced_even_in_running_greenlet(self): # In fact only on Windows do we see GC being an issue; # pypy2 5.0 on macos and travis don't have a problem. # https://github.com/gevent/gevent/issues/981 import gevent import gc gc.collect() count = 1000 running_greenlet = None def demonstrate_my_local(): for _ in range(1000): x = MyLocal() self.assertIsNotNone(x.sentinel) x = None gc.collect() gc.collect() self.assertEqual(count, len(created_sentinels)) # They're all dead, even though this greenlet is # still running self.assertEqual(count, len(deleted_sentinels)) # The links were removed as well. self.assertFalse(running_greenlet.has_links()) running_greenlet = gevent.spawn(demonstrate_my_local) gevent.sleep() running_greenlet.join() self.assertEqual(count, len(deleted_sentinels)) @greentest.ignores_leakcheck def test_local_dicts_for_greenlet(self): import gevent from gevent.local import all_local_dicts_for_greenlet class MyGreenlet(gevent.Greenlet): results = None id_x = None def _run(self): # pylint:disable=method-hidden x = local() x.foo = 42 self.id_x = id(x) self.results = all_local_dicts_for_greenlet(self) g = MyGreenlet() g.start() g.join() self.assertTrue(g.successful, g) self.assertEqual(g.results, [((local, g.id_x), {'foo': 42})]) def test_local_with_abc(self): # an ABC (or generally any non-exact-type) in the MRO doesn't # break things. 
See https://github.com/gevent/gevent/issues/1201 x = LocalWithABC() x.d = {'a': 1} self.assertEqual({'a': 1}, x.d) # The ABC part works self.assertIn('a', x.d) self.assertEqual(['a'], list(x.keys())) def test_local_with_staticmethod(self): x = LocalWithStaticMethod() self.assertEqual(42, x.a_staticmethod()) def test_local_with_classmethod(self): x = LocalWithClassMethod() self.assertIs(LocalWithClassMethod, x.a_classmethod()) class TestLocalInterface(greentest.TestCase): __timeout__ = None @greentest.ignores_leakcheck def test_provides(self): # https://github.com/gevent/gevent/issues/1122 # pylint:disable=inherit-non-class class IFoo(interface.Interface): pass @interface.implementer(IFoo) class Base(object): pass class Derived(Base, local): pass d = Derived() p = list(interface.providedBy(d)) self.assertEqual([IFoo], p) @greentest.skipOnPurePython("Needs C extension") class TestCExt(greentest.TestCase): # pragma: no cover def test_c_extension(self): self.assertEqual(local.__module__, 'gevent._gevent_clocal') @greentest.skipWithCExtensions("Needs pure-python") class TestPure(greentest.TestCase): def test_extension(self): self.assertEqual(local.__module__, 'gevent.local') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__lock.py000066400000000000000000000021141471441230600210710ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import division from __future__ import print_function from gevent import lock import gevent.testing as greentest from gevent.tests import test__semaphore class TestRLockMultiThread(test__semaphore.TestSemaphoreMultiThread): def _makeOne(self): # If we don't set the hub before returning, # there's a potential race condition, if the implementation # isn't careful. If it's the background hub that winds up capturing # the hub, it will ask the hub to switch back to itself and # then switch to the hub, which will raise LoopExit (nothing # for the background thread to do). What is supposed to happen # is that the background thread realizes it's the background thread, # starts an async watcher and then switches to the hub. # # So we deliberately don't set the hub to help test that condition. 
return lock.RLock() def assertOneHasNoHub(self, sem): self.assertIsNone(sem._block.hub) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__loop_callback.py000066400000000000000000000005441471441230600227330ustar00rootroot00000000000000from gevent import get_hub from gevent import testing as greentest class Test(greentest.TestCase): def test(self): count = [0] def incr(): count[0] += 1 loop = get_hub().loop loop.run_callback(incr) loop.run() self.assertEqual(count, [1]) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__makefile_ref.py000066400000000000000000000453161471441230600225650ustar00rootroot00000000000000from __future__ import print_function import os from gevent import monkey; monkey.patch_all() import socket import ssl import threading import errno import weakref import gevent.testing as greentest from gevent.testing.params import DEFAULT_BIND_ADDR_TUPLE from gevent.testing.params import DEFAULT_CONNECT from gevent.testing.sockets import tcp_listener dirname = os.path.dirname(os.path.abspath(__file__)) CERTFILE = os.path.join(dirname, '2_7_keycert.pem') pid = os.getpid() PY3 = greentest.PY3 PYPY = greentest.PYPY CPYTHON = not PYPY PY2 = not PY3 fd_types = int WIN = greentest.WIN from gevent.testing import get_open_files try: import psutil except ImportError: psutil = None # wrap_socket() is considered deprecated in 3.9 # pylint:disable=deprecated-method class Test(greentest.TestCase): extra_allowed_open_states = () def tearDown(self): self.extra_allowed_open_states = () super(Test, self).tearDown() def assert_raises_EBADF(self, func): try: result = func() except OSError as ex: # Windows/Py3 raises "OSError: [WinError 10038]" if ex.args[0] == errno.EBADF: return if WIN and ex.args[0] == 10038: return raise raise AssertionError('NOT RAISED EBADF: %r() returned %r' % (func, result)) if WIN or (PYPY and greentest.LINUX): def __assert_fd_open(self, fileno): # We can't detect open file descriptors on Windows. # On PyPy 3.6-7.3 on Travis CI (linux), for some reason the # client file descriptors don't always show as open. Don't know why, # was fine in 7.2. # On March 23 2020 we had to pin psutil back to a version # for PyPy 2 (see setup.py) and this same problem started happening there. # PyPy on macOS was unaffected. pass else: def __assert_fd_open(self, fileno): assert isinstance(fileno, fd_types) open_files = get_open_files() if fileno not in open_files: raise AssertionError('%r is not open:\n%s' % (fileno, open_files['data'])) def assert_fd_closed(self, fileno): assert isinstance(fileno, fd_types), repr(fileno) assert fileno > 0, fileno # Here, if we're in the process of closing, don't consider it open. 
# This goes into details of psutil open_files = get_open_files(count_closing_as_open=False) if fileno in open_files: raise AssertionError('%r is not closed:\n%s' % (fileno, open_files['data'])) def _assert_sock_open(self, sock): # requires the psutil output open_files = get_open_files() sockname = sock.getsockname() for x in open_files['data']: if getattr(x, 'laddr', None) == sockname: assert x.status in (psutil.CONN_LISTEN, psutil.CONN_ESTABLISHED) + self.extra_allowed_open_states, x.status return raise AssertionError("%r is not open:\n%s" % (sock, open_files['data'])) def assert_open(self, sock, *rest): if isinstance(sock, fd_types): self.__assert_fd_open(sock) else: fileno = sock.fileno() assert isinstance(fileno, fd_types), fileno sockname = sock.getsockname() assert isinstance(sockname, tuple), sockname if not WIN: self.__assert_fd_open(fileno) else: self._assert_sock_open(sock) if rest: self.assert_open(rest[0], *rest[1:]) def assert_closed(self, sock, *rest): if isinstance(sock, fd_types): self.assert_fd_closed(sock) else: # Under Python3, the socket module returns -1 for a fileno # of a closed socket; under Py2 it raises if PY3: self.assertEqual(sock.fileno(), -1) else: self.assert_raises_EBADF(sock.fileno) self.assert_raises_EBADF(sock.getsockname) self.assert_raises_EBADF(sock.accept) if rest: self.assert_closed(rest[0], *rest[1:]) def make_open_socket(self): s = socket.socket() try: s.bind(DEFAULT_BIND_ADDR_TUPLE) if WIN or greentest.LINUX: # Windows and linux (with psutil) doesn't show as open until # we call listen (linux with lsof accepts either) s.listen(1) self.assert_open(s, s.fileno()) except: s.close() s = None raise return s # Sometimes its this one, sometimes it's test_ssl. No clue why or how. @greentest.skipOnAppVeyor("This sometimes times out for no apparent reason.") class TestSocket(Test): def test_simple_close(self): with Closing() as closer: s = closer(self.make_open_socket()) fileno = s.fileno() s.close() self.assert_closed(s, fileno) def test_makefile1(self): with Closing() as closer: s = closer(self.make_open_socket()) fileno = s.fileno() f = closer(s.makefile()) self.assert_open(s, fileno) # Under python 2, this closes socket wrapper object but not the file descriptor; # under python 3, both stay open s.close() if PY3: self.assert_open(s, fileno) else: self.assert_closed(s) self.assert_open(fileno) f.close() self.assert_closed(s) self.assert_closed(fileno) def test_makefile2(self): with Closing() as closer: s = closer(self.make_open_socket()) fileno = s.fileno() self.assert_open(s, fileno) f = closer(s.makefile()) self.assert_open(s) self.assert_open(s, fileno) f.close() # closing fileobject does not close the socket self.assert_open(s, fileno) s.close() self.assert_closed(s, fileno) def test_server_simple(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) def connect(): connector.connect((DEFAULT_CONNECT, port)) closer.running_task(threading.Thread(target=connect)) client_socket = closer.accept(listener) fileno = client_socket.fileno() self.assert_open(client_socket, fileno) client_socket.close() self.assert_closed(client_socket) def test_server_makefile1(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) def connect(): connector.connect((DEFAULT_CONNECT, port)) closer.running_task(threading.Thread(target=connect)) client_socket = closer.accept(listener) fileno = 
client_socket.fileno() f = closer(client_socket.makefile()) self.assert_open(client_socket, fileno) client_socket.close() # Under python 2, this closes socket wrapper object but not the file descriptor; # under python 3, both stay open if PY3: self.assert_open(client_socket, fileno) else: self.assert_closed(client_socket) self.assert_open(fileno) f.close() self.assert_closed(client_socket, fileno) def test_server_makefile2(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) def connect(): connector.connect((DEFAULT_CONNECT, port)) closer.running_task(threading.Thread(target=connect)) client_socket = closer.accept(listener) fileno = client_socket.fileno() f = closer(client_socket.makefile()) self.assert_open(client_socket, fileno) # closing fileobject does not close the socket f.close() self.assert_open(client_socket, fileno) client_socket.close() self.assert_closed(client_socket, fileno) @greentest.skipOnAppVeyor("This sometimes times out for no apparent reason.") class TestSSL(Test): def _ssl_connect_task(self, connector, port, accepted_event): connector.connect((DEFAULT_CONNECT, port)) try: # Note: We get ResourceWarning about 'x' # on Python 3 if we don't join the spawned thread x = ssl.SSLContext().wrap_socket(connector) # Wait to be fully accepted. We could otherwise raise ahead # of the server and close ourself before it's ready to read. accepted_event.wait() except socket.error: # Observed on Windows with PyPy2 5.9.0 and libuv: # if we don't switch in a timely enough fashion, # the server side runs ahead of us and closes # our socket first, so this fails. pass else: x.close() def _make_ssl_connect_task(self, connector, port): accepted_event = threading.Event() t = threading.Thread(target=self._ssl_connect_task, args=(connector, port, accepted_event)) t.daemon = True t.accepted_event = accepted_event return t def test_simple_close(self): with Closing() as closer: s = closer(self.make_open_socket()) fileno = s.fileno() s = closer(ssl.SSLContext().wrap_socket(s)) fileno = s.fileno() self.assert_open(s, fileno) s.close() self.assert_closed(s, fileno) def test_makefile1(self): with Closing() as closer: raw_s = closer(self.make_open_socket()) s = closer(ssl.SSLContext().wrap_socket(raw_s)) fileno = s.fileno() self.assert_open(s, fileno) f = closer(s.makefile()) self.assert_open(s, fileno) s.close() self.assert_open(s, fileno) f.close() raw_s.close() self.assert_closed(s, fileno) def test_makefile2(self): with Closing() as closer: s = closer(self.make_open_socket()) fileno = s.fileno() s = closer(ssl.SSLContext().wrap_socket(s)) fileno = s.fileno() self.assert_open(s, fileno) f = closer(s.makefile()) self.assert_open(s, fileno) f.close() # closing fileobject does not close the socket self.assert_open(s, fileno) s.close() self.assert_closed(s, fileno) def _wrap_socket(self, sock, *, keyfile, certfile, server_side=False): context = ssl.SSLContext() context.load_cert_chain(certfile=certfile, keyfile=keyfile) return context.wrap_socket(sock, server_side=server_side) def test_server_simple(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) t = self._make_ssl_connect_task(connector, port) closer.running_task(t) client_socket = closer.accept(listener) t.accepted_event.set() client_socket = closer( self._wrap_socket(client_socket, keyfile=CERTFILE, certfile=CERTFILE, server_side=True)) fileno = 
client_socket.fileno() self.assert_open(client_socket, fileno) client_socket.close() self.assert_closed(client_socket, fileno) def test_server_makefile1(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) t = self._make_ssl_connect_task(connector, port) closer.running_task(t) client_socket = closer.accept(listener) t.accepted_event.set() client_socket = closer( self._wrap_socket(client_socket, keyfile=CERTFILE, certfile=CERTFILE, server_side=True)) fileno = client_socket.fileno() self.assert_open(client_socket, fileno) f = client_socket.makefile() self.assert_open(client_socket, fileno) client_socket.close() self.assert_open(client_socket, fileno) f.close() self.assert_closed(client_socket, fileno) def test_server_makefile2(self): with Closing() as closer: listener = closer(tcp_listener(backlog=1)) port = listener.getsockname()[1] connector = closer(socket.socket()) t = self._make_ssl_connect_task(connector, port) closer.running_task(t) t.accepted_event.set() client_socket = closer.accept(listener) client_socket = closer( self._wrap_socket(client_socket, keyfile=CERTFILE, certfile=CERTFILE, server_side=True)) fileno = client_socket.fileno() self.assert_open(client_socket, fileno) f = client_socket.makefile() self.assert_open(client_socket, fileno) # Closing fileobject does not close SSLObject f.close() self.assert_open(client_socket, fileno) client_socket.close() self.assert_closed(client_socket, fileno) def test_serverssl_makefile1(self): raw_listener = tcp_listener(backlog=1) fileno = raw_listener.fileno() port = raw_listener.getsockname()[1] listener = self._wrap_socket(raw_listener, keyfile=CERTFILE, certfile=CERTFILE) connector = socket.socket() t = self._make_ssl_connect_task(connector, port) t.start() with CleaningUp(t, listener, raw_listener, connector) as client_socket: t.accepted_event.set() fileno = client_socket.fileno() self.assert_open(client_socket, fileno) f = client_socket.makefile() self.assert_open(client_socket, fileno) client_socket.close() self.assert_open(client_socket, fileno) f.close() self.assert_closed(client_socket, fileno) def test_serverssl_makefile2(self): raw_listener = tcp_listener(backlog=1) port = raw_listener.getsockname()[1] listener = self._wrap_socket(raw_listener, keyfile=CERTFILE, certfile=CERTFILE) accepted_event = threading.Event() def connect(connector=socket.socket()): try: connector.connect((DEFAULT_CONNECT, port)) s = ssl.SSLContext().wrap_socket(connector) accepted_event.wait() s.sendall(b'test_serverssl_makefile2') s.shutdown(socket.SHUT_RDWR) s.close() finally: connector.close() t = threading.Thread(target=connect) t.daemon = True t.start() client_socket = None with CleaningUp(t, listener, raw_listener) as client_socket: accepted_event.set() fileno = client_socket.fileno() self.assert_open(client_socket, fileno) f = client_socket.makefile() self.assert_open(client_socket, fileno) self.assertEqual(f.read(), 'test_serverssl_makefile2') self.assertEqual(f.read(), '') # Closing file object does not close the socket. f.close() if WIN and psutil: # Hmm? 
self.extra_allowed_open_states = (psutil.CONN_CLOSE_WAIT,) self.assert_open(client_socket, fileno) client_socket.close() self.assert_closed(client_socket, fileno) class Closing(object): def __init__(self, *init): self._objects = [] for i in init: self.closing(i) self.task = None def accept(self, listener): client_socket, _addr = listener.accept() return self.closing(client_socket) def __enter__(self): o = self.objects() if len(o) == 1: return o[0] return self if PY2 and CPYTHON: # This implementation depends or refcounting # for things to close. Eww. def closing(self, o): self._objects.append(weakref.ref(o)) return o def objects(self): return [r() for r in self._objects if r() is not None] else: def objects(self): # PyPy returns an object without __len__... return list(reversed(self._objects)) def closing(self, o): self._objects.append(o) return o __call__ = closing def running_task(self, thread): assert self.task is None self.task = thread self.task.start() return self.task def __exit__(self, t, v, tb): # workaround for test_server_makefile1, test_server_makefile2, # test_server_simple, test_serverssl_makefile1. # On PyPy on Linux, it is important to join the SSL Connect # Task FIRST, before closing the sockets. If we do it after # (which makes more sense) we hang. It's not clear why, except # that it has something to do with context switches. Inserting a call to # gevent.sleep(0.1) instead of joining the task has the same # effect. If the previous tests hang, then later tests can fail with # SSLError: unknown alert type. # XXX: Why do those two things happen? # On PyPy on macOS, we don't have that problem and can use the # more logical order. try: if self.task is not None: self.task.join() finally: self.task = None for o in self.objects(): try: o.close() except Exception: # pylint:disable=broad-except pass self._objects = () class CleaningUp(Closing): def __init__(self, task, listener, *other_sockets): super(CleaningUp, self).__init__(listener, *other_sockets) self.task = task self.listener = listener def __enter__(self): return self.accept(self.listener) def __exit__(self, t, v, tb): try: Closing.__exit__(self, t, v, tb) finally: self.listener = None if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__memleak.py000066400000000000000000000026721471441230600215650ustar00rootroot00000000000000import sys import unittest from gevent.testing import TestCase import gevent from gevent.timeout import Timeout @unittest.skipUnless( hasattr(sys, 'gettotalrefcount'), "Needs debug build" ) # XXX: This name makes no sense. What was this for originally? class TestQueue(TestCase): # pragma: no cover # pylint:disable=bare-except,no-member def test(self): refcounts = [] for _ in range(15): try: Timeout.start_new(0.01) gevent.sleep(0.1) self.fail('must raise Timeout') except Timeout: pass refcounts.append(sys.gettotalrefcount()) # Refcounts may go down, but not up # XXX: JAM: I think this may just be broken. Each time we add # a new integer to our list of refcounts, we'll be # creating a new reference. 
This makes sense when we see the list # go up by one each iteration: # # AssertionError: 530631 not less than or equal to 530630 # : total refcount mismatch: # [530381, 530618, 530619, 530620, 530621, # 530622, 530623, 530624, 530625, 530626, # 530627, 530628, 530629, 530630, 530631] final = refcounts[-1] previous = refcounts[-2] self.assertLessEqual( final, previous, "total refcount mismatch: %s" % refcounts) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__monkey.py000066400000000000000000000151021471441230600214440ustar00rootroot00000000000000from gevent import monkey monkey.patch_all() import sys import unittest from gevent.testing.testcase import SubscriberCleanupMixin class TestMonkey(SubscriberCleanupMixin, unittest.TestCase): maxDiff = None def setUp(self): super(TestMonkey, self).setUp() self.all_events = [] self.addSubscriber(self.all_events.append) self.orig_saved = orig_saved = {} for k, v in monkey.saved.items(): orig_saved[k] = v.copy() def tearDown(self): monkey.saved = self.orig_saved del self.orig_saved del self.all_events super(TestMonkey, self).tearDown() def test_time(self): import time from gevent import time as gtime self.assertIs(time.sleep, gtime.sleep) def test_thread(self): import _thread as thread import threading from gevent import thread as gthread self.assertIs(thread.start_new_thread, gthread.start_new_thread) if sys.version_info[:2] < (3, 13): self.assertIs(threading._start_new_thread, gthread.start_new_thread) else: self.assertIs(threading._start_joinable_thread, gthread.start_joinable_thread) # Event patched by default self.assertTrue(monkey.is_object_patched('threading', 'Event')) if sys.version_info[0] == 2: from gevent import threading as gthreading from gevent.event import Event as GEvent self.assertIs(threading._sleep, gthreading._sleep) self.assertTrue(monkey.is_object_patched('threading', '_Event')) self.assertIs(threading._Event, GEvent) def test_socket(self): import socket from gevent import socket as gevent_socket self.assertIs(socket.create_connection, gevent_socket.create_connection) def test_os(self): import os import types from gevent import os as gos for name in ('fork', 'forkpty'): if hasattr(os, name): attr = getattr(os, name) self.assertNotIn('built-in', repr(attr)) self.assertNotIsInstance(attr, types.BuiltinFunctionType) self.assertIsInstance(attr, types.FunctionType) self.assertIs(attr, getattr(gos, name)) def test_saved(self): self.assertTrue(monkey.saved) for modname, objects in monkey.saved.items(): self.assertTrue(monkey.is_module_patched(modname)) for objname in objects: self.assertTrue(monkey.is_object_patched(modname, objname)) def test_patch_subprocess_twice(self): Popen = monkey.get_original('subprocess', 'Popen') self.assertNotIn('gevent', repr(Popen)) self.assertIs(Popen, monkey.get_original('subprocess', 'Popen')) monkey.patch_subprocess() self.assertIs(Popen, monkey.get_original('subprocess', 'Popen')) def test_patch_twice_warnings_events(self): import warnings all_events = self.all_events with warnings.catch_warnings(record=True) as issued_warnings: # Patch again, triggering just one warning, for # a different set of arguments. Because we're going to False instead of # turning something on, nothing is actually done, no events are issued. monkey.patch_all(os=False, extra_kwarg=42) self.assertEqual(len(issued_warnings), 1) self.assertIn('more than once', str(issued_warnings[0].message)) self.assertEqual(all_events, []) # Same warning again, but still nothing is done. 
del issued_warnings[:] monkey.patch_all(os=False) self.assertEqual(len(issued_warnings), 1) self.assertIn('more than once', str(issued_warnings[0].message)) self.assertEqual(all_events, []) self.orig_saved['_gevent_saved_patch_all_module_settings'] = monkey.saved[ '_gevent_saved_patch_all_module_settings'] # Make sure that re-patching did not change the monkey.saved # attribute, overwriting the original functions. if 'logging' in monkey.saved and 'logging' not in self.orig_saved: # some part of the warning or unittest machinery imports logging self.orig_saved['logging'] = monkey.saved['logging'] self.assertEqual(self.orig_saved, monkey.saved) # Make sure some problematic attributes stayed correct. # NOTE: This was only a problem if threading was not previously imported. for k, v in monkey.saved['threading'].items(): self.assertNotIn('gevent', str(v), (k, v)) def test_patch_events(self): from gevent import events from gevent.testing import verify all_events = self.all_events def veto(event): if isinstance(event, events.GeventWillPatchModuleEvent) and event.module_name == 'ssl': raise events.DoNotPatch self.addSubscriber(veto) monkey.saved = {} # Reset monkey.patch_all(thread=False, select=False, extra_kwarg=42) # Go again self.assertIsInstance(all_events[0], events.GeventWillPatchAllEvent) self.assertEqual({'extra_kwarg': 42}, all_events[0].patch_all_kwargs) verify.verifyObject(events.IGeventWillPatchAllEvent, all_events[0]) self.assertIsInstance(all_events[1], events.GeventWillPatchModuleEvent) verify.verifyObject(events.IGeventWillPatchModuleEvent, all_events[1]) self.assertIsInstance(all_events[2], events.GeventDidPatchModuleEvent) verify.verifyObject(events.IGeventWillPatchModuleEvent, all_events[1]) self.assertIsInstance(all_events[-2], events.GeventDidPatchBuiltinModulesEvent) verify.verifyObject(events.IGeventDidPatchBuiltinModulesEvent, all_events[-2]) self.assertIsInstance(all_events[-1], events.GeventDidPatchAllEvent) verify.verifyObject(events.IGeventDidPatchAllEvent, all_events[-1]) for e in all_events: self.assertFalse(isinstance(e, events.GeventDidPatchModuleEvent) and e.module_name == 'ssl') def test_patch_queue(self): try: import queue except ImportError: # Python 2 called this Queue. Note that having # python-future installed gives us a queue module on # Python 2 as well. queue = None if not hasattr(queue, 'SimpleQueue'): raise unittest.SkipTest("Needs SimpleQueue") # pylint:disable=no-member self.assertIs(queue.SimpleQueue, queue._PySimpleQueue) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__monkey_builtins_future.py000066400000000000000000000010111471441230600247410ustar00rootroot00000000000000# Under Python 2, if the `future` module is installed, we get # a `builtins` module, which mimics the `builtins` module from # Python 3, but does not have the __import__ and some other functions. # Make sure we can still run in that case. import sys try: # fake out a "broken" builtins module import builtins except ImportError: class builtins(object): pass sys.modules['builtins'] = builtins() if not hasattr(builtins, '__import__'): import gevent.monkey gevent.monkey.patch_builtins() gevent-24.11.1/src/gevent/tests/test__monkey_hub_in_thread.py000066400000000000000000000010101471441230600243100ustar00rootroot00000000000000from gevent.monkey import patch_all patch_all(thread=False) from threading import Thread import time # The first time we init the hub is in the native # thread with time.sleep(), needing multiple # threads at the same time. 
Note: this is very timing # dependent. # See #687 def func(): time.sleep() def main(): threads = [] for _ in range(3): th = Thread(target=func) th.start() threads.append(th) for th in threads: th.join() if __name__ == '__main__': main() gevent-24.11.1/src/gevent/tests/test__monkey_logging.py000066400000000000000000000030341471441230600231530ustar00rootroot00000000000000# If the logging module is imported *before* monkey patching, # the existing handlers are correctly monkey patched to use gevent locks import logging logging.basicConfig() import threading def _inner_lock(lock): try: return lock._block except AttributeError: return None def _check_type(root, lock, inner_semaphore, kind): if not isinstance(inner_semaphore, kind): raise AssertionError( "Expected .[_]lock._block to be of type %s, " "but it was of type %s.\n" "\t.[_]lock=%r\n" "\t.[_]lock._block=%r\n" "\t=%r" % ( kind, type(inner_semaphore), lock, inner_semaphore, root ) ) def checkLocks(kind, ignore_none=True): handlers = logging._handlerList assert handlers for weakref in handlers: # In py26, these are actual handlers, not weakrefs handler = weakref() if callable(weakref) else weakref block = _inner_lock(handler.lock) if block is None and ignore_none: continue _check_type(handler, handler.lock, block, kind) attr = _inner_lock(logging._lock) if attr is None and ignore_none: return _check_type(logging, logging._lock, attr, kind) checkLocks(type(threading._allocate_lock())) import gevent.monkey gevent.monkey.patch_all() import gevent.lock import gevent.thread checkLocks(type(gevent.thread.allocate_lock()), ignore_none=False) gevent-24.11.1/src/gevent/tests/test__monkey_module_run.py000066400000000000000000000104201471441230600236730ustar00rootroot00000000000000""" Tests for running ``gevent.monkey`` as a module to launch a patched script. Uses files in the ``monkey_package/`` directory. 
""" from __future__ import print_function from __future__ import absolute_import from __future__ import division import os import os.path import sys from gevent import testing as greentest from gevent.testing.util import absolute_pythonpath from gevent.testing.util import run class TestRun(greentest.TestCase): maxDiff = None def setUp(self): self.abs_pythonpath = absolute_pythonpath() # before we cd self.cwd = os.getcwd() os.chdir(os.path.dirname(__file__)) def tearDown(self): os.chdir(self.cwd) def _run(self, script, module=False): env = os.environ.copy() env['PYTHONWARNINGS'] = 'ignore' if self.abs_pythonpath: env['PYTHONPATH'] = self.abs_pythonpath run_kwargs = dict( buffer_output=True, quiet=True, nested=True, env=env, timeout=10, ) args = [sys.executable, '-m', 'gevent.monkey'] if module: args.append('--module') args += [script, 'patched'] monkey_result = run( args, **run_kwargs ) self.assertTrue(monkey_result) if module: args = [sys.executable, "-m", script, 'stdlib'] else: args = [sys.executable, script, 'stdlib'] std_result = run( args, **run_kwargs ) self.assertTrue(std_result) monkey_out_lines = monkey_result.output_lines std_out_lines = std_result.output_lines self.assertEqual(monkey_out_lines, std_out_lines) self.assertEqual(monkey_result.error, std_result.error) return monkey_out_lines def test_run_simple(self): self._run(os.path.join('monkey_package', 'script.py')) def _run_package(self, module): lines = self._run('monkey_package', module=module) self.assertTrue(lines[0].endswith(u'__main__.py'), lines[0]) self.assertEqual(lines[1].strip(), u'__main__') def test_run_package(self): # Run a __main__ inside a package, even without specifying -m self._run_package(module=False) def test_run_module(self): # Run a __main__ inside a package, when specifying -m self._run_package(module=True) def test_issue_302(self): monkey_lines = self._run(os.path.join('monkey_package', 'issue302monkey.py')) self.assertEqual(monkey_lines[0].strip(), u'True') monkey_lines[1] = monkey_lines[1].replace(u'\\', u'/') # windows path self.assertTrue(monkey_lines[1].strip().endswith(u'monkey_package/issue302monkey.py')) self.assertEqual(monkey_lines[2].strip(), u'True', monkey_lines) # These three tests all sometimes fail on Py2 on CI, writing # to stderr: # Unhandled exception in thread started by \n # sys.excepthook is missing\n # lost sys.stderr\n # Fatal Python error: PyImport_GetModuleDict: no module dictionary!\n' # I haven't been able to produce this locally on macOS or Linux. # The last line seems new with 2.7.17? # Also, occasionally, they get '3' instead of '2' for the number of threads. # That could have something to do with...? Most commonly that's PyPy, but # sometimes CPython. Again, haven't reproduced. # Not relevant since Py2 has been dropped. 
def test_threadpool_in_patched_after_patch(self): # Issue 1484 # If we don't have this correct, then we get exceptions out = self._run(os.path.join('monkey_package', 'threadpool_monkey_patches.py')) self.assertEqual(out, ['False', '2']) def test_threadpool_in_patched_after_patch_module(self): # Issue 1484 # If we don't have this correct, then we get exceptions out = self._run('monkey_package.threadpool_monkey_patches', module=True) self.assertEqual(out, ['False', '2']) def test_threadpool_not_patched_after_patch_module(self): # Issue 1484 # If we don't have this correct, then we get exceptions out = self._run('monkey_package.threadpool_no_monkey', module=True) self.assertEqual(out, ['False', 'False', '2']) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__monkey_multiple_imports.py000066400000000000000000000004501471441230600251340ustar00rootroot00000000000000# https://github.com/gevent/gevent/issues/615 # Under Python 3, with its use of importlib, # if the monkey patch is done when the importlib import lock is held # (e.g., during recursive imports) we could fail to release the lock. # This is surprisingly common. __import__('_import_import_patch') gevent-24.11.1/src/gevent/tests/test__monkey_queue.py000066400000000000000000000300611471441230600226510ustar00rootroot00000000000000# Some simple queue module tests, plus some failure conditions # to ensure the Queue locks remain stable. from gevent import monkey monkey.patch_all() from gevent import queue as Queue import threading import time import unittest QUEUE_SIZE = 5 # A thread to run a function that unclogs a blocked Queue. class _TriggerThread(threading.Thread): def __init__(self, fn, args): self.fn = fn self.args = args #self.startedEvent = threading.Event() from gevent.event import Event self.startedEvent = Event() threading.Thread.__init__(self) def run(self): # The sleep isn't necessary, but is intended to give the blocking # function in the main thread a chance at actually blocking before # we unclog it. But if the sleep is longer than the timeout-based # tests wait in their blocking functions, those tests will fail. # So we give them much longer timeout values compared to the # sleep here (I aimed at 10 seconds for blocking functions -- # they should never actually wait that long - they should make # progress as soon as we call self.fn()). time.sleep(0.01) self.startedEvent.set() self.fn(*self.args) # Execute a function that blocks, and in a separate thread, a function that # triggers the release. Returns the result of the blocking function. Caution: # block_func must guarantee to block until trigger_func is called, and # trigger_func must guarantee to change queue state so that block_func can make # enough progress to return. In particular, a block_func that just raises an # exception regardless of whether trigger_func is called will lead to # timing-dependent sporadic failures, and one of those went rarely seen but # undiagnosed for years. Now block_func must be unexceptional. If block_func # is supposed to raise an exception, call do_exceptional_blocking_test() # instead. class BlockingTestMixin(object): def do_blocking_test(self, block_func, block_args, trigger_func, trigger_args): self.t = _TriggerThread(trigger_func, trigger_args) self.t.start() self.result = block_func(*block_args) # If block_func returned before our thread made the call, we failed! 
if not self.t.startedEvent.isSet(): self.fail("blocking function '%r' appeared not to block" % block_func) self.t.join(10) # make sure the thread terminates if self.t.is_alive(): self.fail("trigger function '%r' appeared to not return" % trigger_func) return self.result # Call this instead if block_func is supposed to raise an exception. def do_exceptional_blocking_test(self, block_func, block_args, trigger_func, trigger_args, expected_exception_class): self.t = _TriggerThread(trigger_func, trigger_args) self.t.start() try: with self.assertRaises(expected_exception_class): block_func(*block_args) finally: self.t.join(10) # make sure the thread terminates if self.t.is_alive(): self.fail("trigger function '%r' appeared to not return" % trigger_func) if not self.t.startedEvent.isSet(): self.fail("trigger thread ended but event never set") class BaseQueueTest(unittest.TestCase, BlockingTestMixin): type2test = Queue.Queue def setUp(self): self.cum = 0 self.cumlock = threading.Lock() def simple_queue_test(self, q): if not q.empty(): raise RuntimeError("Call this function with an empty queue") # I guess we better check things actually queue correctly a little :) q.put(111) q.put(333) q.put(222) q.put(444) target_first_items = dict( Queue=111, LifoQueue=444, PriorityQueue=111) actual_first_item = (q.peek(), q.get()) self.assertEqual(actual_first_item, (target_first_items[q.__class__.__name__], target_first_items[q.__class__.__name__]), "q.peek() and q.get() are not equal!") target_order = dict(Queue=[333, 222, 444], LifoQueue=[222, 333, 111], PriorityQueue=[222, 333, 444]) actual_order = [q.get(), q.get(), q.get()] self.assertEqual(actual_order, target_order[q.__class__.__name__], "Didn't seem to queue the correct data!") for i in range(QUEUE_SIZE-1): q.put(i) self.assertFalse(q.empty(), "Queue should not be empty") self.assertFalse(q.full(), "Queue should not be full") q.put(999) self.assertTrue(q.full(), "Queue should be full") try: q.put(888, block=0) self.fail("Didn't appear to block with a full queue") except Queue.Full: pass try: q.put(888, timeout=0.01) self.fail("Didn't appear to time-out with a full queue") except Queue.Full: pass self.assertEqual(q.qsize(), QUEUE_SIZE) # Test a blocking put self.do_blocking_test(q.put, (888,), q.get, ()) self.do_blocking_test(q.put, (888, True, 10), q.get, ()) # Empty it for i in range(QUEUE_SIZE): q.get() self.assertTrue(q.empty(), "Queue should be empty") try: q.get(block=0) self.fail("Didn't appear to block with an empty queue") except Queue.Empty: pass try: q.get(timeout=0.01) self.fail("Didn't appear to time-out with an empty queue") except Queue.Empty: pass # Test a blocking get self.do_blocking_test(q.get, (), q.put, ('empty',)) self.do_blocking_test(q.get, (True, 10), q.put, ('empty',)) def worker(self, q): while True: x = q.get() if x is None: q.task_done() return #with self.cumlock: self.cum += x q.task_done() def queue_join_test(self, q): self.cum = 0 for i in (0, 1): threading.Thread(target=self.worker, args=(q,)).start() for i in range(100): q.put(i) q.join() self.assertEqual(self.cum, sum(range(100)), "q.join() did not block until all tasks were done") for i in (0, 1): q.put(None) # instruct the threads to close q.join() # verify that you can join twice def test_queue_task_done(self): # Test to make sure a queue task completed successfully. 
q = Queue.JoinableQueue() # self.type2test() # XXX the same test in subclasses try: q.task_done() except ValueError: pass else: self.fail("Did not detect task count going negative") def test_queue_join(self): # Test that a queue join()s successfully, and before anything else # (done twice for insurance). q = Queue.JoinableQueue() # self.type2test() # XXX the same test in subclass self.queue_join_test(q) self.queue_join_test(q) try: q.task_done() except ValueError: pass else: self.fail("Did not detect task count going negative") def test_queue_task_done_with_items(self): # Passing items to the constructor allows for as # many task_done calls. Joining before all the task done # are called returns false # XXX the same test in subclass l = [1, 2, 3] q = Queue.JoinableQueue(items=l) for i in l: self.assertFalse(q.join(timeout=0.001)) self.assertEqual(i, q.get()) q.task_done() try: q.task_done() except ValueError: pass else: self.fail("Did not detect task count going negative") self.assertTrue(q.join(timeout=0.001)) def test_simple_queue(self): # Do it a couple of times on the same queue. # Done twice to make sure works with same instance reused. q = self.type2test(QUEUE_SIZE) self.simple_queue_test(q) self.simple_queue_test(q) class LifoQueueTest(BaseQueueTest): type2test = Queue.LifoQueue class PriorityQueueTest(BaseQueueTest): type2test = Queue.PriorityQueue def test__init(self): item1 = (2, 'b') item2 = (1, 'a') q = self.type2test(items=[item1, item2]) self.assertTupleEqual(item2, q.get_nowait()) self.assertTupleEqual(item1, q.get_nowait()) # A Queue subclass that can provoke failure at a moment's notice :) class FailingQueueException(Exception): pass class FailingQueue(Queue.Queue): def __init__(self, *args): self.fail_next_put = False self.fail_next_get = False Queue.Queue.__init__(self, *args) def _put(self, item): if self.fail_next_put: self.fail_next_put = False raise FailingQueueException("You Lose") return Queue.Queue._put(self, item) def _get(self): if self.fail_next_get: self.fail_next_get = False raise FailingQueueException("You Lose") return Queue.Queue._get(self) class FailingQueueTest(unittest.TestCase, BlockingTestMixin): def failing_queue_test(self, q): if not q.empty(): raise RuntimeError("Call this function with an empty queue") for i in range(QUEUE_SIZE-1): q.put(i) # Test a failing non-blocking put. q.fail_next_put = True with self.assertRaises(FailingQueueException): q.put("oops", block=0) q.fail_next_put = True with self.assertRaises(FailingQueueException): q.put("oops", timeout=0.1) q.put(999) self.assertTrue(q.full(), "Queue should be full") # Test a failing blocking put q.fail_next_put = True with self.assertRaises(FailingQueueException): self.do_blocking_test(q.put, (888,), q.get, ()) # Check the Queue isn't damaged. # put failed, but get succeeded - re-add q.put(999) # Test a failing timeout put q.fail_next_put = True self.do_exceptional_blocking_test(q.put, (888, True, 10), q.get, (), FailingQueueException) # Check the Queue isn't damaged. 
# put failed, but get succeeded - re-add q.put(999) self.assertTrue(q.full(), "Queue should be full") q.get() self.assertFalse(q.full(), "Queue should not be full") q.put(999) self.assertTrue(q.full(), "Queue should be full") # Test a blocking put self.do_blocking_test(q.put, (888,), q.get, ()) # Empty it for i in range(QUEUE_SIZE): q.get() self.assertTrue(q.empty(), "Queue should be empty") q.put("first") q.fail_next_get = True with self.assertRaises(FailingQueueException): q.get() self.assertFalse(q.empty(), "Queue should not be empty") q.fail_next_get = True with self.assertRaises(FailingQueueException): q.get(timeout=0.1) self.assertFalse(q.empty(), "Queue should not be empty") q.get() self.assertTrue(q.empty(), "Queue should be empty") q.fail_next_get = True self.do_exceptional_blocking_test(q.get, (), q.put, ('empty',), FailingQueueException) # put succeeded, but get failed. self.assertFalse(q.empty(), "Queue should not be empty") q.get() self.assertTrue(q.empty(), "Queue should be empty") def test_failing_queue(self): # Test to make sure a queue is functioning correctly. # Done twice to the same instance. q = FailingQueue(QUEUE_SIZE) self.failing_queue_test(q) self.failing_queue_test(q) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/gevent/tests/test__monkey_select.py000066400000000000000000000013451471441230600230070ustar00rootroot00000000000000# Tests for the monkey-patched select module. from gevent import monkey monkey.patch_all() import select import gevent.testing as greentest class TestSelect(greentest.TestCase): def _make_test(name, ns): # pylint:disable=no-self-argument def test(self): self.assertIs(getattr(select, name, self), self) self.assertFalse(hasattr(select, name)) test.__name__ = 'test_' + name + '_removed' ns[test.__name__] = test for name in ( 'epoll', 'kqueue', 'kevent', 'devpoll', ): _make_test(name, locals()) # pylint:disable=too-many-function-args del name del _make_test if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__monkey_selectors.py000066400000000000000000000050241471441230600235310ustar00rootroot00000000000000 try: # Do this before the patch to be sure we clean # things up properly if the order is wrong. import selectors except ImportError: import selectors2 as selectors from gevent.monkey import patch_all import gevent.testing as greentest patch_all() from gevent.selectors import DefaultSelector from gevent.selectors import GeventSelector from gevent.tests.test__selectors import SelectorTestMixin class TestSelectors(SelectorTestMixin, greentest.TestCase): @greentest.skipOnWindows( "SelectSelector._select is a normal function on Windows" ) def test_selectors_select_is_patched(self): # https://github.com/gevent/gevent/issues/835 _select = selectors.SelectSelector._select self.assertIn('_gevent_monkey', dir(_select)) def test_default(self): # Depending on the order of imports, gevent.select.poll may be defined but # selectors.PollSelector may not be defined. # https://github.com/gevent/gevent/issues/1466 self.assertIs(DefaultSelector, GeventSelector) self.assertIs(selectors.DefaultSelector, GeventSelector) def test_import_selectors(self): # selectors can always be imported once monkey-patched. On Python 2, # this is an alias for gevent.selectors. 
__import__('selectors') def _make_test(name, kind): # pylint:disable=no-self-argument if kind is None: def m(self): self.skipTest(name + ' is not defined') else: def m(self, k=kind): with k() as sel: self._check_selector(sel) m.__name__ = 'test_selector_' + name return m SelKind = SelKindName = None for SelKindName in ( # The subclass hierarchy changes between versions, and is # complex (e.g, BaseSelector <- BaseSelectorImpl <- # _PollLikSelector <- PollSelector) so its easier to check against # names. 'KqueueSelector', 'EpollSelector', 'DevpollSelector', 'PollSelector', 'SelectSelector', GeventSelector, ): if not isinstance(SelKindName, type): SelKind = getattr(selectors, SelKindName, None) else: SelKind = SelKindName SelKindName = SelKind.__name__ m = _make_test(SelKindName, SelKind) # pylint:disable=too-many-function-args locals()[m.__name__] = m del SelKind del SelKindName del _make_test if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__monkey_sigchld.py000066400000000000000000000055731471441230600231540ustar00rootroot00000000000000import errno import os import sys import gevent import gevent.monkey gevent.monkey.patch_all() pid = None awaiting_child = [] def handle_sigchld(*_args): # Make sure we can do a blocking operation gevent.sleep() # Signal completion awaiting_child.pop() # Raise an ignored error raise TypeError("This should be ignored but printed") # Try to produce output compatible with unittest output so # our status parsing functions work. import signal if hasattr(signal, 'SIGCHLD'): # In Python 3.8.0 final, on both Travis CI/Linux and locally # on macOS, the *child* process started crashing on exit with a memory # error: # # Debug memory block at address p=0x7fcf5d6b5000: API '' # 6508921152173528397 bytes originally requested # The 7 pad bytes at p-7 are not all FORBIDDENBYTE (0xfd): # # When PYTHONDEVMODE is set. This happens even if we just simply fork # the child process and don't have gevent even /imported/ in the most # minimal test case. It's not clear what caused that. 
if sys.version_info[:2] >= (3, 8) and os.environ.get("PYTHONDEVMODE"): print("Ran 1 tests in 0.0s (skipped=1)") sys.exit(0) assert signal.getsignal(signal.SIGCHLD) == signal.SIG_DFL signal.signal(signal.SIGCHLD, handle_sigchld) handler = signal.getsignal(signal.SIGCHLD) assert signal.getsignal(signal.SIGCHLD) is handle_sigchld, handler if hasattr(os, 'forkpty'): def forkpty(): # For printing in errors return os.forkpty()[0] funcs = (os.fork, forkpty) else: funcs = (os.fork,) for func in funcs: awaiting_child = [True] pid = func() if not pid: # child gevent.sleep(0.3) sys.exit(0) else: timeout = gevent.Timeout(1) try: while awaiting_child: gevent.sleep(0.01) # We should now be able to waitpid() for an arbitrary child wpid, status = os.waitpid(-1, os.WNOHANG) if wpid != pid: raise AssertionError("Failed to wait on a child pid forked with a function", wpid, pid, func) # And a second call should raise ECHILD try: wpid, status = os.waitpid(-1, os.WNOHANG) raise AssertionError("Should not be able to wait again") except OSError as e: assert e.errno == errno.ECHILD except gevent.Timeout as t: if timeout is not t: raise raise AssertionError("Failed to wait using", func) finally: timeout.close() print("Ran 1 tests in 0.0s") sys.exit(0) else: print("No SIGCHLD, not testing") print("Ran 1 tests in 0.0s (skipped=1)") gevent-24.11.1/src/gevent/tests/test__monkey_sigchld_2.py000066400000000000000000000033561471441230600233720ustar00rootroot00000000000000# Mimics what gunicorn workers do: monkey patch in the child process # and try to reset signal handlers to SIG_DFL. # NOTE: This breaks again when gevent.subprocess is used, or any child # watcher. import os import sys import signal pid = None def handle(*_args): if not pid: # We only do this is the child so our # parent's waitpid can get the status. # This is the opposite of gunicorn. os.waitpid(-1, os.WNOHANG) # The signal watcher must be installed *before* monkey patching if hasattr(signal, 'SIGCHLD'): if sys.version_info[:2] >= (3, 8) and os.environ.get("PYTHONDEVMODE"): # See test__monkey_sigchld.py print("Ran 1 tests in 0.0s (skipped=1)") sys.exit(0) # On Python 2, the signal handler breaks the platform # module, because it uses os.popen. pkg_resources uses the platform # module. # Cache that info. import platform platform.uname() signal.signal(signal.SIGCHLD, handle) pid = os.fork() if pid: # parent try: _, stat = os.waitpid(pid, 0) except OSError: # Interrupted system call _, stat = os.waitpid(pid, 0) assert stat == 0, stat else: # Under Python 2, os.popen() directly uses the popen call, and # popen's file uses the pclose() system call to # wait for the child. If it's already waited on, # it raises the same exception. # Python 3 uses the subprocess module directly which doesn't # have this problem. import gevent.monkey gevent.monkey.patch_all() signal.signal(signal.SIGCHLD, signal.SIG_DFL) f = os.popen('true') f.close() sys.exit(0) else: print("No SIGCHLD, not testing") gevent-24.11.1/src/gevent/tests/test__monkey_sigchld_3.py000066400000000000000000000033331471441230600233660ustar00rootroot00000000000000# Mimics what gunicorn workers do *if* the arbiter is also monkey-patched: # After forking from the master monkey-patched process, the child # resets signal handlers to SIG_DFL. If we then fork and watch *again*, # we shouldn't hang. (Note that we carefully handle this so as not to break # os.popen) from __future__ import print_function # Patch in the parent process. 
import gevent.monkey gevent.monkey.patch_all() from gevent import get_hub import os import sys import signal import subprocess def _waitpid(p): try: _, stat = os.waitpid(p, 0) except OSError: # Interrupted system call _, stat = os.waitpid(p, 0) assert stat == 0, stat if hasattr(signal, 'SIGCHLD'): if sys.version_info[:2] >= (3, 8) and os.environ.get("PYTHONDEVMODE"): # See test__monkey_sigchld.py print("Ran 1 tests in 0.0s (skipped=1)") sys.exit(0) # Do what subprocess does and make sure we have the watcher # in the parent get_hub().loop.install_sigchld() pid = os.fork() if pid: # parent _waitpid(pid) else: # Child resets. signal.signal(signal.SIGCHLD, signal.SIG_DFL) # Go through subprocess because we expect it to automatically # set up the waiting for us. # not on Py2 pylint:disable=consider-using-with popen = subprocess.Popen([sys.executable, '-c', 'import sys'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) popen.stderr.read() popen.stdout.read() popen.wait() # This hangs if it doesn't. popen.stderr.close() popen.stdout.close() sys.exit(0) else: print("No SIGCHLD, not testing") print("Ran 1 tests in 0.0s (skipped=1)") gevent-24.11.1/src/gevent/tests/test__monkey_ssl_warning.py000066400000000000000000000023421471441230600240540ustar00rootroot00000000000000import unittest import warnings # This file should only have this one test in it # because we have to be careful about our imports # and because we need to be careful about our patching. class Test(unittest.TestCase): def test_with_pkg_resources(self): # Issue 1108: Python 2, importing pkg_resources, # as is done for namespace packages, imports ssl, # leading to an unwanted SSL warning. # This is a deprecated API though. with warnings.catch_warnings(): try: __import__('pkg_resources') except ImportError: self.skipTest('Uses pkg_resources (setuptools) which is not installed') from gevent import monkey self.assertFalse(monkey.saved) with warnings.catch_warnings(record=True) as issued_warnings: warnings.simplefilter('always') monkey.patch_all() monkey.patch_all() issued_warnings = [x for x in issued_warnings if isinstance(x.message, monkey.MonkeyPatchWarning)] self.assertFalse(issued_warnings, [str(i) for i in issued_warnings]) self.assertEqual(0, len(issued_warnings)) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__monkey_ssl_warning2.py000066400000000000000000000023471471441230600241430ustar00rootroot00000000000000import unittest import warnings import sys # All supported python versions now provide SSLContext. # We import it by name and subclass it here by name. # compare with warning3.py from ssl import SSLContext class MySubclass(SSLContext): pass # This file should only have this one test in it # because we have to be careful about our imports # and because we need to be careful about our patching. 
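# A sketch (hypothetical helper, never called by this test) of how an
# application that knowingly subclasses SSLContext before patching could
# silence the resulting MonkeyPatchWarning; the test below instead asserts
# that exactly one such warning is issued:
def _example_patch_all_quietly():
    from gevent import monkey
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', monkey.MonkeyPatchWarning)
        monkey.patch_all()
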
class Test(unittest.TestCase): @unittest.skipIf(sys.version_info[:2] < (3, 6), "Only on Python 3.6+") def test_ssl_subclass_and_module_reference(self): from gevent import monkey self.assertFalse(monkey.saved) with warnings.catch_warnings(record=True) as issued_warnings: warnings.simplefilter('always') monkey.patch_all() monkey.patch_all() issued_warnings = [x for x in issued_warnings if isinstance(x.message, monkey.MonkeyPatchWarning)] self.assertEqual(1, len(issued_warnings)) message = issued_warnings[0].message self.assertIn("Modules that had direct imports", str(message)) self.assertIn("Subclasses (NOT patched)", str(message)) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__monkey_ssl_warning3.py000066400000000000000000000024621471441230600241420ustar00rootroot00000000000000import unittest import warnings import sys # All supported python versions now provide SSLContext. # We subclass without importing by name. Compare with # warning2.py import ssl class MySubclass(ssl.SSLContext): pass # This file should only have this one test in it # because we have to be careful about our imports # and because we need to be careful about our patching. class Test(unittest.TestCase): @unittest.skipIf(sys.version_info[:2] < (3, 6), "Only on Python 3.6+") def test_ssl_subclass_and_module_reference(self): from gevent import monkey self.assertFalse(monkey.saved) with warnings.catch_warnings(record=True) as issued_warnings: warnings.simplefilter('always') monkey.patch_all() monkey.patch_all() issued_warnings = [x for x in issued_warnings if isinstance(x.message, monkey.MonkeyPatchWarning)] self.assertEqual(1, len(issued_warnings)) message = str(issued_warnings[0].message) self.assertNotIn("Modules that had direct imports", message) self.assertIn("Subclasses (NOT patched)", message) # the gevent subclasses should not be in here. 
self.assertNotIn('gevent.', message) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__nondefaultloop.py000066400000000000000000000003141471441230600231720ustar00rootroot00000000000000# test for issue #210 from gevent import core from gevent.testing.util import alarm alarm(1) log = [] loop = core.loop(default=False) loop.run_callback(log.append, 1) loop.run() assert log == [1], log gevent-24.11.1/src/gevent/tests/test__order.py000066400000000000000000000021451471441230600212600ustar00rootroot00000000000000import gevent import gevent.testing as greentest from gevent.testing.six import xrange class appender(object): def __init__(self, lst, item): self.lst = lst self.item = item def __call__(self, *args): self.lst.append(self.item) class Test(greentest.TestCase): count = 2 def test_greenlet_link(self): lst = [] # test that links are executed in the same order as they were added g = gevent.spawn(lst.append, 0) for i in xrange(1, self.count): g.link(appender(lst, i)) g.join() self.assertEqual(lst, list(range(self.count))) class Test3(Test): count = 3 class Test4(Test): count = 4 class TestM(Test): count = 1000 class TestSleep0(greentest.TestCase): def test(self): lst = [] gevent.spawn(sleep0, lst, '1') gevent.spawn(sleep0, lst, '2') gevent.wait() self.assertEqual(' '.join(lst), '1A 2A 1B 2B') def sleep0(lst, param): lst.append(param + 'A') gevent.sleep(0) lst.append(param + 'B') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__os.py000066400000000000000000000135131471441230600205670ustar00rootroot00000000000000from __future__ import print_function, absolute_import, division import sys from os import pipe import gevent from gevent import os from gevent import Greenlet, joinall from gevent import testing as greentest from gevent.testing import mock from gevent.testing import six from gevent.testing.skipping import skipOnLibuvOnPyPyOnWin class TestOS_tp(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT def pipe(self): return pipe() read = staticmethod(os.tp_read) write = staticmethod(os.tp_write) @skipOnLibuvOnPyPyOnWin("Sometimes times out") def _test_if_pipe_blocks(self, buffer_class): r, w = self.pipe() # set nbytes such that for sure it is > maximum pipe buffer nbytes = 1000000 block = b'x' * 4096 buf = buffer_class(block) # Lack of "nonlocal" keyword in Python 2.x: bytesread = [0] byteswritten = [0] def produce(): while byteswritten[0] != nbytes: bytesleft = nbytes - byteswritten[0] byteswritten[0] += self.write(w, buf[:min(bytesleft, 4096)]) def consume(): while bytesread[0] != nbytes: bytesleft = nbytes - bytesread[0] bytesread[0] += len(self.read(r, min(bytesleft, 4096))) producer = Greenlet(produce) producer.start() consumer = Greenlet(consume) consumer.start_later(1) # If patching was not succesful, the producer will have filled # the pipe before the consumer starts, and would block the entire # process. Therefore the next line would never finish. 
joinall([producer, consumer]) self.assertEqual(bytesread[0], nbytes) self.assertEqual(bytesread[0], byteswritten[0]) if sys.version_info[0] < 3: def test_if_pipe_blocks_buffer(self): self._test_if_pipe_blocks(six.builtins.buffer) if sys.version_info[:2] >= (2, 7): def test_if_pipe_blocks_memoryview(self): self._test_if_pipe_blocks(six.builtins.memoryview) @greentest.skipUnless(hasattr(os, 'make_nonblocking'), "Only on POSIX") class TestOS_nb(TestOS_tp): def read(self, fd, count): return os.nb_read(fd, count) def write(self, fd, count): return os.nb_write(fd, count) def pipe(self): r, w = super(TestOS_nb, self).pipe() os.make_nonblocking(r) os.make_nonblocking(w) return r, w def _make_ignored_oserror(self): import errno ignored_oserror = OSError() ignored_oserror.errno = errno.EINTR return ignored_oserror def _check_hub_event_closed(self, mock_get_hub, fd, event): mock_get_hub.assert_called_once_with() hub = mock_get_hub.return_value io = hub.loop.io io.assert_called_once_with(fd, event) event = io.return_value event.close.assert_called_once_with() def _test_event_closed_on_normal_io(self, nb_func, nb_arg, mock_io, mock_get_hub, event): mock_io.side_effect = [self._make_ignored_oserror(), 42] fd = 100 result = nb_func(fd, nb_arg) self.assertEqual(result, 42) self._check_hub_event_closed(mock_get_hub, fd, event) def _test_event_closed_on_io_error(self, nb_func, nb_arg, mock_io, mock_get_hub, event): mock_io.side_effect = [self._make_ignored_oserror(), ValueError()] fd = 100 with self.assertRaises(ValueError): nb_func(fd, nb_arg) self._check_hub_event_closed(mock_get_hub, fd, event) @mock.patch('gevent.os.get_hub') @mock.patch('gevent.os._write') def test_event_closed_on_write(self, mock_write, mock_get_hub): self._test_event_closed_on_normal_io(os.nb_write, b'buf', mock_write, mock_get_hub, 2) @mock.patch('gevent.os.get_hub') @mock.patch('gevent.os._write') def test_event_closed_on_write_error(self, mock_write, mock_get_hub): self._test_event_closed_on_io_error(os.nb_write, b'buf', mock_write, mock_get_hub, 2) @mock.patch('gevent.os.get_hub') @mock.patch('gevent.os._read') def test_event_closed_on_read(self, mock_read, mock_get_hub): self._test_event_closed_on_normal_io(os.nb_read, b'buf', mock_read, mock_get_hub, 1) @mock.patch('gevent.os.get_hub') @mock.patch('gevent.os._read') def test_event_closed_on_read_error(self, mock_read, mock_get_hub): self._test_event_closed_on_io_error(os.nb_read, b'buf', mock_read, mock_get_hub, 1) @greentest.skipUnless(hasattr(os, 'fork_and_watch'), "Only on POSIX") class TestForkAndWatch(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT def test_waitpid_all(self): # Cover this specific case. pid = os.fork_and_watch() if pid: os.waitpid(-1, 0) # Can't assert on what the found pid actually was, # our testrunner may have spawned multiple children. os._reap_children(0) # make the leakchecker happy else: # pragma: no cover gevent.sleep(2) # The test framework will catch a regular SystemExit # from sys.exit(), we need to just kill the process. 
os._exit(0) def test_waitpid_wrong_neg(self): self.assertRaises(OSError, os.waitpid, -2, 0) def test_waitpid_wrong_pos(self): self.assertRaises(OSError, os.waitpid, 1, 0) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__pool.py000066400000000000000000000436571471441230600211330ustar00rootroot00000000000000from time import time import gevent import gevent.pool from gevent.event import Event from gevent.queue import Queue import gevent.testing as greentest import gevent.testing.timing import random from gevent.testing import ExpectedException import unittest class TestCoroutinePool(unittest.TestCase): klass = gevent.pool.Pool def test_apply_async(self): done = Event() def some_work(_): done.set() pool = self.klass(2) pool.apply_async(some_work, ('x', )) done.wait() def test_apply(self): value = 'return value' def some_work(): return value pool = self.klass(2) result = pool.apply(some_work) self.assertEqual(value, result) def test_apply_raises(self): pool = self.klass(1) def raiser(): raise ExpectedException() try: pool.apply(raiser) except ExpectedException: pass else: self.fail("Should have raised ExpectedException") # Don't let the metaclass automatically force any error # that reaches the hub from a spawned greenlet to become # fatal; that defeats the point of the test. test_apply_raises.error_fatal = False def test_multiple_coros(self): evt = Event() results = [] def producer(): gevent.sleep(0.001) results.append('prod') evt.set() def consumer(): results.append('cons1') evt.wait() results.append('cons2') pool = self.klass(2) done = pool.spawn(consumer) pool.apply_async(producer) done.get() self.assertEqual(['cons1', 'prod', 'cons2'], results) def dont_test_timer_cancel(self): timer_fired = [] def fire_timer(): timer_fired.append(True) def some_work(): gevent.timer(0, fire_timer) # pylint:disable=no-member pool = self.klass(2) pool.apply(some_work) gevent.sleep(0) self.assertEqual(timer_fired, []) def test_reentrant(self): pool = self.klass(1) result = pool.apply(pool.apply, (lambda a: a + 1, (5, ))) self.assertEqual(result, 6) evt = Event() pool.apply_async(evt.set) evt.wait() @greentest.skipOnPyPy("Does not work on PyPy") # Why? def test_stderr_raising(self): # testing that really egregious errors in the error handling code # (that prints tracebacks to stderr) don't cause the pool to lose # any members import sys pool = self.klass(size=1) # we're going to do this by causing the traceback.print_exc in # safe_apply to raise an exception and thus exit _main_loop normal_err = sys.stderr try: sys.stderr = FakeFile() waiter = pool.spawn(crash) with gevent.Timeout(2): # Without the timeout, we can get caught...doing something? # If we call PyErr_WriteUnraisable at a certain point, # we appear to switch back to the hub and do nothing, # meaning we sit forever. The timeout at least keeps us from # doing that and fails the test if we mess up error handling. 
self.assertRaises(RuntimeError, waiter.get) # the pool should have something free at this point since the # waiter returned # pool.Pool change: if an exception is raised during execution of a link, # the rest of the links are scheduled to be executed on the next hub iteration # this introduces a delay in updating pool.sem which makes pool.free_count() report 0 # therefore, sleep: gevent.sleep(0) self.assertEqual(pool.free_count(), 1) # shouldn't block when trying to get with gevent.Timeout.start_new(0.1): pool.apply(gevent.sleep, (0, )) finally: sys.stderr = normal_err pool.join() def crash(*_args, **_kw): raise RuntimeError("Raising an error from the crash() function") class FakeFile(object): def write(self, *_args): raise RuntimeError('Writing to the file failed') class PoolBasicTests(greentest.TestCase): klass = gevent.pool.Pool def test_execute_async(self): p = self.klass(size=2) self.assertEqual(p.free_count(), 2) r = [] first = p.spawn(r.append, 1) self.assertEqual(p.free_count(), 1) first.get() self.assertEqual(r, [1]) gevent.sleep(0) self.assertEqual(p.free_count(), 2) #Once the pool is exhausted, calling an execute forces a yield. p.apply_async(r.append, (2, )) self.assertEqual(1, p.free_count()) self.assertEqual(r, [1]) p.apply_async(r.append, (3, )) self.assertEqual(0, p.free_count()) self.assertEqual(r, [1]) p.apply_async(r.append, (4, )) self.assertEqual(r, [1]) gevent.sleep(0.01) self.assertEqual(sorted(r), [1, 2, 3, 4]) def test_discard(self): p = self.klass(size=1) first = p.spawn(gevent.sleep, 1000) p.discard(first) first.kill() self.assertFalse(first) self.assertEqual(len(p), 0) self.assertEqual(p._semaphore.counter, 1) def test_add_method(self): p = self.klass(size=1) first = gevent.spawn(gevent.sleep, 1000) try: second = gevent.spawn(gevent.sleep, 1000) try: self.assertEqual(p.free_count(), 1) self.assertEqual(len(p), 0) p.add(first) self.assertEqual(p.free_count(), 0) self.assertEqual(len(p), 1) with self.assertRaises(gevent.Timeout): with gevent.Timeout(0.1): p.add(second) self.assertEqual(p.free_count(), 0) self.assertEqual(len(p), 1) finally: second.kill() finally: first.kill() @greentest.ignores_leakcheck def test_add_method_non_blocking(self): p = self.klass(size=1) first = gevent.spawn(gevent.sleep, 1000) try: second = gevent.spawn(gevent.sleep, 1000) try: p.add(first) with self.assertRaises(gevent.pool.PoolFull): p.add(second, blocking=False) finally: second.kill() finally: first.kill() @greentest.ignores_leakcheck def test_add_method_timeout(self): p = self.klass(size=1) first = gevent.spawn(gevent.sleep, 1000) try: second = gevent.spawn(gevent.sleep, 1000) try: p.add(first) with self.assertRaises(gevent.pool.PoolFull): p.add(second, timeout=0.100) finally: second.kill() finally: first.kill() @greentest.ignores_leakcheck def test_start_method_timeout(self): p = self.klass(size=1) first = gevent.spawn(gevent.sleep, 1000) try: second = gevent.Greenlet(gevent.sleep, 1000) try: p.add(first) with self.assertRaises(gevent.pool.PoolFull): p.start(second, timeout=0.100) finally: second.kill() finally: first.kill() def test_apply(self): p = self.klass() result = p.apply(lambda a: ('foo', a), (1, )) self.assertEqual(result, ('foo', 1)) def test_init_error(self): self.switch_expected = False self.assertRaises(ValueError, self.klass, -1) # # tests from standard library test/test_multiprocessing.py class TimingWrapper(object): def __init__(self, func): self.func = func self.elapsed = None def __call__(self, *args, **kwds): t = time() try: return self.func(*args, **kwds) 
finally: self.elapsed = time() - t def sqr(x, wait=0.0): gevent.sleep(wait) return x * x def squared(x): return x * x def sqr_random_sleep(x): gevent.sleep(random.random() * 0.1) return x * x def final_sleep(): yield from range(3) gevent.sleep(0.2) TIMEOUT1, TIMEOUT2, TIMEOUT3 = 0.082, 0.035, 0.14 SMALL_RANGE = 10 LARGE_RANGE = 1000 if (greentest.PYPY and greentest.WIN) or greentest.RUN_LEAKCHECKS or greentest.RUN_COVERAGE: # See comments in test__threadpool.py. LARGE_RANGE = 25 elif greentest.RUNNING_ON_CI or greentest.EXPECT_POOR_TIMER_RESOLUTION: LARGE_RANGE = 100 class TestPool(greentest.TestCase): # pylint:disable=too-many-public-methods __timeout__ = greentest.LARGE_TIMEOUT size = 1 def setUp(self): greentest.TestCase.setUp(self) self.pool = gevent.pool.Pool(self.size) def cleanup(self): self.pool.join() def test_apply(self): papply = self.pool.apply self.assertEqual(papply(sqr, (5,)), 25) self.assertEqual(papply(sqr, (), {'x': 3}), 9) def test_map(self): pmap = self.pool.map self.assertEqual(pmap(sqr, range(SMALL_RANGE)), list(map(squared, range(SMALL_RANGE)))) self.assertEqual(pmap(sqr, range(100)), list(map(squared, range(100)))) def test_async(self): res = self.pool.apply_async(sqr, (7, TIMEOUT1,)) get = TimingWrapper(res.get) self.assertEqual(get(), 49) self.assertTimeoutAlmostEqual(get.elapsed, TIMEOUT1, 1) def test_async_callback(self): result = [] res = self.pool.apply_async(sqr, (7, TIMEOUT1,), callback=result.append) get = TimingWrapper(res.get) self.assertEqual(get(), 49) self.assertTimeoutAlmostEqual(get.elapsed, TIMEOUT1, 1) gevent.sleep(0) # lets the callback run self.assertEqual(result, [49]) def test_async_timeout(self): res = self.pool.apply_async(sqr, (6, TIMEOUT2 + 0.2)) get = TimingWrapper(res.get) self.assertRaises(gevent.Timeout, get, timeout=TIMEOUT2) self.assertTimeoutAlmostEqual(get.elapsed, TIMEOUT2, 1) self.pool.join() def test_imap_list_small(self): it = self.pool.imap(sqr, range(SMALL_RANGE)) self.assertEqual(list(it), list(map(sqr, range(SMALL_RANGE)))) def test_imap_it_small(self): it = self.pool.imap(sqr, range(SMALL_RANGE)) for i in range(SMALL_RANGE): self.assertEqual(next(it), i * i) self.assertRaises(StopIteration, next, it) def test_imap_it_large(self): it = self.pool.imap(sqr, range(LARGE_RANGE)) for i in range(LARGE_RANGE): self.assertEqual(next(it), i * i) self.assertRaises(StopIteration, next, it) def test_imap_random(self): it = self.pool.imap(sqr_random_sleep, range(SMALL_RANGE)) self.assertEqual(list(it), list(map(squared, range(SMALL_RANGE)))) def test_imap_unordered(self): it = self.pool.imap_unordered(sqr, range(LARGE_RANGE)) self.assertEqual(sorted(it), list(map(squared, range(LARGE_RANGE)))) it = self.pool.imap_unordered(sqr, range(LARGE_RANGE)) self.assertEqual(sorted(it), list(map(squared, range(LARGE_RANGE)))) def test_imap_unordered_random(self): it = self.pool.imap_unordered(sqr_random_sleep, range(SMALL_RANGE)) self.assertEqual(sorted(it), list(map(squared, range(SMALL_RANGE)))) def test_empty_imap_unordered(self): it = self.pool.imap_unordered(sqr, []) self.assertEqual(list(it), []) def test_empty_imap(self): it = self.pool.imap(sqr, []) self.assertEqual(list(it), []) def test_empty_map(self): self.assertEqual(self.pool.map(sqr, []), []) def test_terminate(self): result = self.pool.map_async(gevent.sleep, [0.1] * ((self.size or 10) * 2)) gevent.sleep(0.1) kill = TimingWrapper(self.pool.kill) kill() self.assertTimeWithinRange(kill.elapsed, 0.0, 0.5) result.join() def sleep(self, x): gevent.sleep(float(x) / 10.) 
return str(x) def test_imap_unordered_sleep(self): # testing that imap_unordered returns items in competion order result = list(self.pool.imap_unordered(self.sleep, [10, 1, 2])) if self.pool.size == 1: expected = ['10', '1', '2'] else: expected = ['1', '2', '10'] self.assertEqual(result, expected) # https://github.com/gevent/gevent/issues/423 def test_imap_no_stop(self): q = Queue() q.put(123) gevent.spawn_later(0.1, q.put, StopIteration) result = list(self.pool.imap(lambda _: _, q)) self.assertEqual(result, [123]) def test_imap_unordered_no_stop(self): q = Queue() q.put(1234) gevent.spawn_later(0.1, q.put, StopIteration) result = list(self.pool.imap_unordered(lambda _: _, q)) self.assertEqual(result, [1234]) # same issue, but different test: https://github.com/gevent/gevent/issues/311 def test_imap_final_sleep(self): result = list(self.pool.imap(sqr, final_sleep())) self.assertEqual(result, [0, 1, 4]) def test_imap_unordered_final_sleep(self): result = list(self.pool.imap_unordered(sqr, final_sleep())) self.assertEqual(result, [0, 1, 4]) # Issue 638 def test_imap_unordered_bounded_queue(self): iterable = list(range(100)) running = [0] def short_running_func(i, _j): running[0] += 1 return i def make_reader(mapping): # Simulate a long running reader. No matter how many workers # we have, we will never have a queue more than size 1 def reader(): result = [] for i, x in enumerate(mapping): self.assertTrue(running[0] <= i + 2, running[0]) result.append(x) gevent.sleep(0.01) self.assertTrue(len(mapping.queue) <= 2, len(mapping.queue)) return result return reader # Send two iterables to make sure varargs and kwargs are handled # correctly for meth in self.pool.imap_unordered, self.pool.imap: running[0] = 0 mapping = meth(short_running_func, iterable, iterable, maxsize=1) reader = make_reader(mapping) l = reader() self.assertEqual(sorted(l), iterable) @greentest.ignores_leakcheck class TestPool2(TestPool): size = 2 @greentest.ignores_leakcheck class TestPool3(TestPool): size = 3 @greentest.ignores_leakcheck class TestPool10(TestPool): size = 10 class TestPoolUnlimit(TestPool): size = None class TestPool0(greentest.TestCase): size = 0 def test_wait_full(self): p = gevent.pool.Pool(size=0) self.assertEqual(0, p.free_count()) self.assertTrue(p.full()) self.assertEqual(0, p.wait_available(timeout=0.01)) class TestJoinSleep(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): p = gevent.pool.Pool() g = p.spawn(gevent.sleep, 10) try: p.join(timeout=timeout) finally: g.kill() class TestJoinSleep_raise_error(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): p = gevent.pool.Pool() g = p.spawn(gevent.sleep, 10) try: p.join(timeout=timeout, raise_error=True) finally: g.kill() class TestJoinEmpty(greentest.TestCase): switch_expected = False def test(self): p = gevent.pool.Pool() res = p.join() self.assertTrue(res, "empty should return true") class TestSpawn(greentest.TestCase): switch_expected = True def test(self): p = gevent.pool.Pool(1) self.assertEqual(len(p), 0) p.spawn(gevent.sleep, 0.1) self.assertEqual(len(p), 1) p.spawn(gevent.sleep, 0.1) # this spawn blocks until the old one finishes self.assertEqual(len(p), 1) gevent.sleep(0.19 if not greentest.EXPECT_POOR_TIMER_RESOLUTION else 0.5) self.assertEqual(len(p), 0) def testSpawnAndWait(self): p = gevent.pool.Pool(1) self.assertEqual(len(p), 0) p.spawn(gevent.sleep, 0.1) self.assertEqual(len(p), 1) res = p.join(0.01) self.assertFalse(res, "waiting on a full pool should return false") res = p.join() 
self.assertTrue(res, "waiting to finish should be true") self.assertEqual(len(p), 0) def error_iter(): yield 1 yield 2 raise ExpectedException class TestErrorInIterator(greentest.TestCase): error_fatal = False def test(self): p = gevent.pool.Pool(3) self.assertRaises(ExpectedException, p.map, lambda x: None, error_iter()) gevent.sleep(0.001) def test_unordered(self): p = gevent.pool.Pool(3) def unordered(): return list(p.imap_unordered(lambda x: None, error_iter())) self.assertRaises(ExpectedException, unordered) gevent.sleep(0.001) def divide_by(x): return 1.0 / x class TestErrorInHandler(greentest.TestCase): error_fatal = False def test_map(self): p = gevent.pool.Pool(3) self.assertRaises(ZeroDivisionError, p.map, divide_by, [1, 0, 2]) def test_imap(self): p = gevent.pool.Pool(1) it = p.imap(divide_by, [1, 0, 2]) self.assertEqual(next(it), 1.0) self.assertRaises(ZeroDivisionError, next, it) self.assertEqual(next(it), 0.5) self.assertRaises(StopIteration, next, it) def test_imap_unordered(self): p = gevent.pool.Pool(1) it = p.imap_unordered(divide_by, [1, 0, 2]) self.assertEqual(next(it), 1.0) self.assertRaises(ZeroDivisionError, next, it) self.assertEqual(next(it), 0.5) self.assertRaises(StopIteration, next, it) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__pywsgi.py000066400000000000000000002200721471441230600214700ustar00rootroot00000000000000# Copyright (c) 2007, Linden Research, Inc. # Copyright (c) 2009-2010 gevent contributors # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
# pylint: disable=too-many-lines,unused-argument,too-many-ancestors from gevent import monkey monkey.patch_all() from contextlib import contextmanager from urllib.parse import parse_qs import os import sys from io import BytesIO as StringIO import weakref import unittest from wsgiref.validate import validator import gevent.testing as greentest import gevent from gevent.testing import PY3, PYPY from gevent.testing.exception import ExpectedException from gevent import socket from gevent import pywsgi from gevent.pywsgi import Input class ExpectedAssertionError(ExpectedException, AssertionError): """An expected assertion error""" CONTENT_LENGTH = 'Content-Length' CONN_ABORTED_ERRORS = greentest.CONN_ABORTED_ERRORS REASONS = { 200: 'OK', 500: 'Internal Server Error' } class ConnectionClosed(Exception): pass def read_headers(fd): response_line = fd.readline() if not response_line: raise ConnectionClosed response_line = response_line.decode('latin-1') headers = {} while True: line = fd.readline().strip() if not line: break line = line.decode('latin-1') try: key, value = line.split(': ', 1) except: print('Failed to split: %r' % (line, )) raise assert key.lower() not in {x.lower() for x in headers}, 'Header %r:%r sent more than once: %r' % (key, value, headers) headers[key] = value return response_line, headers def iread_chunks(fd): while True: line = fd.readline() chunk_size = line.strip() chunk_size = int(chunk_size, 16) if chunk_size == 0: crlf = fd.read(2) assert crlf == b'\r\n', repr(crlf) break data = fd.read(chunk_size) yield data crlf = fd.read(2) assert crlf == b'\r\n', repr(crlf) class Response(object): def __init__(self, status_line, headers): self.status_line = status_line self.headers = headers self.body = None self.chunks = False try: version, code, self.reason = status_line[:-2].split(' ', 2) self.code = int(code) HTTP, self.version = version.split('/') assert HTTP == 'HTTP', repr(HTTP) assert self.version in ('1.0', '1.1'), repr(self.version) except Exception: print('Error: %r' % status_line) raise def __iter__(self): yield self.status_line yield self.headers yield self.body def __str__(self): args = (self.__class__.__name__, self.status_line, self.headers, self.body, self.chunks) return '<%s status_line=%r headers=%r body=%r chunks=%r>' % args def assertCode(self, code): if hasattr(code, '__contains__'): assert self.code in code, 'Unexpected code: %r (expected %r)\n%s' % (self.code, code, self) else: assert self.code == code, 'Unexpected code: %r (expected %r)\n%s' % (self.code, code, self) def assertReason(self, reason): assert self.reason == reason, 'Unexpected reason: %r (expected %r)\n%s' % (self.reason, reason, self) def assertVersion(self, version): assert self.version == version, 'Unexpected version: %r (expected %r)\n%s' % (self.version, version, self) def assertHeader(self, header, value): real_value = self.headers.get(header, False) assert real_value == value, \ 'Unexpected header %r: %r (expected %r)\n%s' % (header, real_value, value, self) def assertBody(self, body): if isinstance(body, str) and PY3: body = body.encode("ascii") assert self.body == body, 'Unexpected body: %r (expected %r)\n%s' % (self.body, body, self) @classmethod def read(cls, fd, code=200, reason='default', version='1.1', body=None, chunks=None, content_length=None): """ Read an HTTP response, optionally perform assertions, and return the Response object. 
""" # pylint:disable=too-many-branches _status_line, headers = read_headers(fd) self = cls(_status_line, headers) if code is not None: self.assertCode(code) if reason == 'default': reason = REASONS.get(code) if reason is not None: self.assertReason(reason) if version is not None: self.assertVersion(version) if self.code == 100: return self if content_length is not None: if isinstance(content_length, int): content_length = str(content_length) self.assertHeader('Content-Length', content_length) if 'chunked' in headers.get('Transfer-Encoding', ''): if CONTENT_LENGTH in headers: print("WARNING: server used chunked transfer-encoding despite having Content-Length header (libevent 1.x's bug)") self.chunks = list(iread_chunks(fd)) self.body = b''.join(self.chunks) elif CONTENT_LENGTH in headers: num = int(headers[CONTENT_LENGTH]) self.body = fd.read(num) else: self.body = fd.read() if body is not None: self.assertBody(body) if chunks is not None: assert chunks == self.chunks, (chunks, self.chunks) return self read_http = Response.read class TestCase(greentest.TestCase): server = None validator = staticmethod(validator) application = None # Bind to default address, which should give us ipv6 (when available) # and ipv4. (see self.connect()) listen_addr = greentest.DEFAULT_BIND_ADDR # connect on ipv4, even though we bound to ipv6 too # to prove ipv4 works...except on Windows, it apparently doesn't. # So use the hostname. connect_addr = greentest.DEFAULT_LOCAL_HOST_ADDR class handler_class(pywsgi.WSGIHandler): ApplicationError = ExpectedAssertionError def init_logger(self): import logging logger = logging.getLogger('gevent.tests.pywsgi') logger.setLevel(logging.CRITICAL) return logger def init_server(self, application): logger = self.logger = self.init_logger() self.server = pywsgi.WSGIServer( (self.listen_addr, 0), application, log=logger, error_log=logger, handler_class=self.handler_class, ) def setUp(self): application = self.application if self.validator is not None: application = self.validator(application) self.init_server(application) self.server.start() while not self.server.server_port: print("Waiting on server port") self.port = self.server.server_port assert self.port greentest.TestCase.setUp(self) if greentest.CPYTHON and greentest.PY2: # Keeping raw sockets alive keeps SSL sockets # from being closed too, at least on CPython2, so we # need to use weakrefs. 
# In contrast, on PyPy, *only* having a weakref lets the # original socket die and leak def _close_on_teardown(self, resource): self.close_on_teardown.append(weakref.ref(resource)) return resource def _tearDownCloseOnTearDown(self): self.close_on_teardown = [r() for r in self.close_on_teardown if r() is not None] super(TestCase, self)._tearDownCloseOnTearDown() def tearDown(self): greentest.TestCase.tearDown(self) if self.server is not None: with gevent.Timeout.start_new(0.5): self.server.stop() self.server = None if greentest.PYPY: import gc gc.collect() gc.collect() @contextmanager def connect(self): conn = socket.create_connection((self.connect_addr, self.port)) result = conn if PY3: conn_makefile = conn.makefile def makefile(*args, **kwargs): if 'bufsize' in kwargs: kwargs['buffering'] = kwargs.pop('bufsize') if 'mode' in kwargs: return conn_makefile(*args, **kwargs) # Under Python3, you can't read and write to the same # makefile() opened in (default) r, and r+ is not allowed kwargs['mode'] = 'rwb' rconn = conn_makefile(*args, **kwargs) _rconn_write = rconn.write def write(data): if isinstance(data, str): data = data.encode('ascii') return _rconn_write(data) rconn.write = write self._close_on_teardown(rconn) return rconn class proxy(object): def __getattribute__(self, name): if name == 'makefile': return makefile return getattr(conn, name) result = proxy() try: yield result finally: result.close() @contextmanager def makefile(self): with self.connect() as sock: try: result = sock.makefile(bufsize=1) # pylint:disable=unexpected-keyword-arg yield result finally: result.close() def urlopen(self, *args, **kwargs): with self.connect() as sock: with sock.makefile(bufsize=1) as fd: # pylint:disable=unexpected-keyword-arg fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') return read_http(fd, *args, **kwargs) HTTP_CLIENT_VERSION = '1.1' DEFAULT_EXTRA_CLIENT_HEADERS = {} def format_request(self, method='GET', path='/', **headers): def_headers = self.DEFAULT_EXTRA_CLIENT_HEADERS.copy() def_headers.update(headers) headers = def_headers headers = '\r\n'.join('%s: %s' % item for item in headers.items()) headers = headers + '\r\n' if headers else headers result = ( '%(method)s %(path)s HTTP/%(http_ver)s\r\n' 'Host: localhost\r\n' '%(headers)s' '\r\n' ) result = result % dict( method=method, path=path, http_ver=self.HTTP_CLIENT_VERSION, headers=headers ) return result class CommonTestMixin(object): PIPELINE_NOT_SUPPORTED_EXS = () EXPECT_CLOSE = False EXPECT_KEEPALIVE = False def test_basic(self): with self.makefile() as fd: fd.write(self.format_request()) response = read_http(fd, body='hello world') if response.headers.get('Connection') == 'close': self.assertTrue(self.EXPECT_CLOSE, "Server closed connection, not expecting that") return response, None self.assertFalse(self.EXPECT_CLOSE) if self.EXPECT_KEEPALIVE: response.assertHeader('Connection', 'keep-alive') fd.write(self.format_request(path='/notexist')) dne_response = read_http(fd, code=404, reason='Not Found', body='not found') fd.write(self.format_request()) response = read_http(fd, body='hello world') return response, dne_response def test_pipeline(self): exception = AssertionError('HTTP pipelining not supported; the second request is thrown away') with self.makefile() as fd: fd.write(self.format_request() + self.format_request(path='/notexist')) read_http(fd, body='hello world') try: timeout = gevent.Timeout.start_new(0.5, exception=exception) try: read_http(fd, code=404, reason='Not Found', body='not found') finally: timeout.close() 
except self.PIPELINE_NOT_SUPPORTED_EXS: pass except AssertionError as ex: if ex is not exception: raise def test_connection_close(self): with self.makefile() as fd: fd.write(self.format_request()) response = read_http(fd) if response.headers.get('Connection') == 'close': self.assertTrue(self.EXPECT_CLOSE, "Server closed connection, not expecting that") return self.assertFalse(self.EXPECT_CLOSE) if self.EXPECT_KEEPALIVE: response.assertHeader('Connection', 'keep-alive') fd.write(self.format_request(Connection='close')) read_http(fd) fd.write(self.format_request()) # This may either raise, or it may return an empty response, # depend on timing and the Python version. try: result = fd.readline() except socket.error as ex: if ex.args[0] not in CONN_ABORTED_ERRORS: raise else: self.assertFalse( result, 'The remote side is expected to close the connection, but it sent %r' % (result,)) @unittest.skip("Not sure") def test_006_reject_long_urls(self): path_parts = [] for _ in range(3000): path_parts.append('path') path = '/'.join(path_parts) with self.makefile() as fd: request = 'GET /%s HTTP/1.0\r\nHost: localhost\r\n\r\n' % path fd.write(request) result = fd.readline() status = result.split(' ')[1] self.assertEqual(status, '414') class TestNoChunks(CommonTestMixin, TestCase): # when returning a list of strings a shortcut is employed by the server: # it calculates the content-length and joins all the chunks before sending validator = None last_environ = None def _check_environ(self, input_terminated=True): if input_terminated: self.assertTrue(self.last_environ.get('wsgi.input_terminated')) else: self.assertFalse(self.last_environ['wsgi.input_terminated']) def application(self, env, start_response): self.last_environ = env path = env['PATH_INFO'] if path == '/': start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'hello ', b'world'] if path == '/websocket': write = start_response('101 Switching Protocols', [('Content-Type', 'text/plain'), # Con:close is to make our simple client # happy; otherwise it wants to read data from the # body thot's being kept open. ('Connection', 'close')]) write(b'') # Trigger finalizing the headers now. 
return [b'upgrading to', b'websocket'] start_response('404 Not Found', [('Content-Type', 'text/plain')]) return [b'not ', b'found'] def test_basic(self): response, dne_response = super(TestNoChunks, self).test_basic() self._check_environ() self.assertFalse(response.chunks) response.assertHeader('Content-Length', '11') if dne_response is not None: self.assertFalse(dne_response.chunks) dne_response.assertHeader('Content-Length', '9') def test_dne(self): with self.makefile() as fd: fd.write(self.format_request(path='/notexist')) response = read_http(fd, code=404, reason='Not Found', body='not found') self.assertFalse(response.chunks) self._check_environ() response.assertHeader('Content-Length', '9') class TestConnectionUpgrades(TestNoChunks): def test_connection_upgrade(self): with self.makefile() as fd: fd.write(self.format_request(path='/websocket', Connection='upgrade')) response = read_http(fd, code=101) self._check_environ(input_terminated=False) self.assertFalse(response.chunks) def test_upgrade_websocket(self): with self.makefile() as fd: fd.write(self.format_request(path='/websocket', Upgrade='websocket')) response = read_http(fd, code=101) self._check_environ(input_terminated=False) self.assertFalse(response.chunks) class TestNoChunks10(TestNoChunks): HTTP_CLIENT_VERSION = '1.0' PIPELINE_NOT_SUPPORTED_EXS = (ConnectionClosed,) EXPECT_CLOSE = True class TestNoChunks10KeepAlive(TestNoChunks10): DEFAULT_EXTRA_CLIENT_HEADERS = { 'Connection': 'keep-alive', } EXPECT_CLOSE = False EXPECT_KEEPALIVE = True class TestExplicitContentLength(TestNoChunks): # pylint:disable=too-many-ancestors # when returning a list of strings a shortcut is employed by the # server - it caculates the content-length def application(self, env, start_response): self.last_environ = env self.assertTrue(env.get('wsgi.input_terminated')) path = env['PATH_INFO'] if path == '/': start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '11')]) return [b'hello ', b'world'] start_response('404 Not Found', [('Content-Type', 'text/plain'), ('Content-Length', '9')]) return [b'not ', b'found'] class TestYield(CommonTestMixin, TestCase): @staticmethod def application(env, start_response): path = env['PATH_INFO'] if path == '/': start_response('200 OK', [('Content-Type', 'text/plain')]) yield b"hello world" else: start_response('404 Not Found', [('Content-Type', 'text/plain')]) yield b"not found" class TestBytearray(CommonTestMixin, TestCase): validator = None @staticmethod def application(env, start_response): path = env['PATH_INFO'] if path == '/': start_response('200 OK', [('Content-Type', 'text/plain')]) return [bytearray(b"hello "), bytearray(b"world")] start_response('404 Not Found', [('Content-Type', 'text/plain')]) return [bytearray(b"not found")] class TestMultiLineHeader(TestCase): @staticmethod def application(env, start_response): assert "test.submit" in env["CONTENT_TYPE"] start_response('200 OK', [('Content-Type', 'text/plain')]) return [b"ok"] def test_multiline_116(self): """issue #116""" request = '\r\n'.join(( 'POST / HTTP/1.0', 'Host: localhost', 'Content-Type: multipart/related; boundary="====XXXX====";', ' type="text/xml";start="test.submit"', 'Content-Length: 0', '', '')) with self.makefile() as fd: fd.write(request) read_http(fd) class TestGetArg(TestCase): @staticmethod def application(env, start_response): body = env['wsgi.input'].read(3) if PY3: body = body.decode('ascii') a = parse_qs(body).get('a', [1])[0] start_response('200 OK', [('Content-Type', 'text/plain')]) return [('a is %s, 
body is %s' % (a, body)).encode('ascii')] def test_007_get_arg(self): # define a new handler that does a get_arg as well as a read_body request = '\r\n'.join(( 'POST / HTTP/1.0', 'Host: localhost', 'Content-Length: 3', '', 'a=a')) with self.makefile() as fd: fd.write(request) # send some junk after the actual request fd.write('01234567890123456789') read_http(fd, body='a is a, body is a=a') class TestCloseIter(TestCase): # The *Validator* closes the iterators! validator = None def application(self, env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return self def __iter__(self): yield bytearray(b"Hello World") yield b"!" closed = False def close(self): self.closed += 1 def test_close_is_called(self): self.closed = False with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') read_http(fd, body=b"Hello World!", chunks=[b'Hello World', b'!']) # We got closed exactly once. self.assertEqual(self.closed, 1) class TestChunkedApp(TestCase): chunks = [b'this', b'is', b'chunked'] def body(self): return b''.join(self.chunks) def application(self, env, start_response): self.assertTrue(env.get('wsgi.input_terminated')) start_response('200 OK', [('Content-Type', 'text/plain')]) yield from self.chunks def test_chunked_response(self): with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') response = read_http(fd, body=self.body(), chunks=None) response.assertHeader('Transfer-Encoding', 'chunked') self.assertEqual(response.chunks, self.chunks) def test_no_chunked_http_1_0(self): with self.makefile() as fd: fd.write('GET / HTTP/1.0\r\nHost: localhost\r\nConnection: close\r\n\r\n') response = read_http(fd) self.assertEqual(response.body, self.body()) self.assertEqual(response.headers.get('Transfer-Encoding'), None) content_length = response.headers.get('Content-Length') if content_length is not None: self.assertEqual(content_length, str(len(self.body()))) class TestBigChunks(TestChunkedApp): chunks = [b'a' * 8192] * 3 class TestNegativeRead(TestCase): def application(self, env, start_response): self.assertTrue(env.get('wsgi.input_terminated')) start_response('200 OK', [('Content-Type', 'text/plain')]) if env['PATH_INFO'] == '/read': data = env['wsgi.input'].read(-1) return [data] def test_negative_chunked_read(self): data = (b'POST /read HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') def test_negative_nonchunked_read(self): data = (b'POST /read HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Content-Length: 6\r\n\r\n' b'oh hai') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') class TestNegativeReadline(TestCase): validator = None @staticmethod def application(env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) if env['PATH_INFO'] == '/readline': data = env['wsgi.input'].readline(-1) return [data] def test_negative_chunked_readline(self): data = (b'POST /readline HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') def test_negative_nonchunked_readline(self): data = (b'POST /readline HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Content-Length: 6\r\n\r\n' b'oh hai') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') 
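# ---------------------------------------------------------------------------
# Editorial sketch (not part of the original test suite): the chunked-body
# tests below, starting with TestChunkedPost, hand-build HTTP/1.1 chunked
# request bodies, so this is a minimal reminder of the framing they rely on
# (RFC 7230, section 4.1). Each chunk is "<size-in-hex>\r\n<data>\r\n"; the
# body ends with the zero-length last-chunk "0\r\n", optional trailer fields,
# and a final CRLF. The helper name here is invented for illustration; the
# real tests use their own chunk_encode helper defined later in this module.
def _example_encode_chunked(chunks):
    """Frame an iterable of byte strings as an HTTP/1.1 chunked body."""
    body = b''
    for chunk in chunks:
        body += b'%x\r\n%s\r\n' % (len(chunk), chunk)
    return body + b'0\r\n\r\n'  # last-chunk plus the terminating CRLF

# Matches the literal body used by TestChunkedPost.test_014_chunked_post below.
assert _example_encode_chunked([b'oh', b' hai']) == b'2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n'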
class TestChunkedPost(TestCase): calls = 0 def setUp(self): super().setUp() self.calls = 0 def application(self, env, start_response): self.calls += 1 self.assertTrue(env.get('wsgi.input_terminated')) start_response('200 OK', [('Content-Type', 'text/plain')]) if env['PATH_INFO'] == '/a': data = env['wsgi.input'].read(6) return [data] if env['PATH_INFO'] == '/b': lines = list(iter(lambda: env['wsgi.input'].read(6), b'')) return lines if env['PATH_INFO'] == '/c': return list(iter(lambda: env['wsgi.input'].read(1), b'')) return [b'We should not get here', env['PATH_INFO'].encode('ascii')] def test_014_chunked_post(self): data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') # self.close_opened() # XXX: Why? with self.makefile() as fd: fd.write(data.replace(b'/a', b'/b')) read_http(fd, body='oh hai') with self.makefile() as fd: fd.write(data.replace(b'/a', b'/c')) read_http(fd, body='oh hai') def test_229_incorrect_chunk_no_newline(self): # Giving both a Content-Length and a Transfer-Encoding, # TE is preferred. But if the chunking is bad from the client, # missing its terminating newline, # the server doesn't hang data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Content-Length: 12\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'{"hi": "ho"}') with self.makefile() as fd: fd.write(data) read_http(fd, code=400) def test_229_incorrect_chunk_non_hex(self): # Giving both a Content-Length and a Transfer-Encoding, # TE is preferred. But if the chunking is bad from the client, # the server doesn't hang data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Content-Length: 12\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'{"hi": "ho"}\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, code=400) def test_229_correct_chunk_quoted_ext(self): data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2;token="oh hi"\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') def test_229_correct_chunk_token_ext(self): data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2;token=oh_hi\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') def test_229_incorrect_chunk_token_ext_too_long(self): data = (b'POST /a HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' b'Transfer-Encoding: chunked\r\n\r\n' b'2;token=oh_hi\r\noh\r\n4\r\n hai\r\n0\r\n\r\n') data = data.replace(b'oh_hi', b'_oh_hi' * 4000) with self.makefile() as fd: fd.write(data) read_http(fd, code=400) # XXX: Not sure which one, but one (or more) of these is leading to a # test timeout on Windows. Figure out what/why and solve. @greentest.skipOnWindows('Maybe hangs') def test_trailers_keepalive_ignored(self): # Trailers after a chunk are ignored. data1 = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: keep-alive\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0\r\n' # last-chunk # Normally the final CRLF would go here, but if you put in a # trailer, it doesn't. b'trailer1: value1\r\n' b'trailer2: value2\r\n' b'\r\n' # Really terminate the chunk. 
) data2 = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: close\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n bye\r\n' b'0\r\n' # last-chunk ) with self.makefile() as fd: fd.write(data1) read_http(fd, body='oh hai') fd.write(data2) read_http(fd, body='oh bye') self.assertEqual(self.calls, 2) @greentest.skipOnWindows('Maybe hangs') def test_trailers_close_ignored(self): data = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: close\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0\r\n' # last-chunk # Normally the final CRLF would go here, but if you put in a # trailer, it doesn't. # b'\r\n' b'GETpath2a:123 HTTP/1.1\r\n' b'Host: a.com\r\n' b'Connection: close\r\n' b'\r\n' ) with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') with self.assertRaises(ConnectionClosed): read_http(fd) @greentest.skipOnWindows('Maybe hangs') def test_trailers_too_long(self): # Trailers after a chunk are ignored. data = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: keep-alive\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0\r\n' # last-chunk # Normally the final CRLF would go here, but if you put in a # trailer, it doesn't. b'trailer2: value2' # note lack of \r\n ) data += b't' * pywsgi.MAX_REQUEST_LINE # No termination, because we detect the trailer as being too # long and abort the connection. with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') with self.assertRaises(ConnectionClosed): read_http(fd, body='oh bye') @greentest.skipOnWindows('Maybe hangs') def test_trailers_request_smuggling_missing_last_chunk_keep_alive(self): # When something that looks like a request line comes in the trailer # as the first line, immediately after an invalid last chunk. # We detect this and abort the connection, because the # whitespace in the GET line isn't a legal part of a trailer. # If we didn't abort the connection, then, because we specified # keep-alive, the server would be hanging around waiting for more input. data = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: keep-alive\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0' # last-chunk, but missing the \r\n # Normally the final CRLF would go here, but if you put in a # trailer, it doesn't. # b'\r\n' b'GET /path2?a=:123 HTTP/1.1\r\n' b'Host: a.com\r\n' b'Connection: close\r\n' b'\r\n' ) with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') with self.assertRaises(ConnectionClosed): read_http(fd) self.assertEqual(self.calls, 1) @greentest.skipOnWindows('Maybe hangs') def test_trailers_request_smuggling_header_first(self): # When something that looks like a header comes in the first line. 
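# The "Header: value" pair occupies the spot where a trailer may legally
# appear and is read as one; the GET line after it is not a valid trailer
# because it contains whitespace. As in the keep-alive variant above, the
# server drops the connection rather than dispatching the smuggled request,
# so self.calls stays at 1.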
data = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: keep-alive\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0\r\n' # last-chunk, but only one CRLF b'Header: value\r\n' b'GET /path2?a=:123 HTTP/1.1\r\n' b'Host: a.com\r\n' b'Connection: close\r\n' b'\r\n' ) with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') with self.assertRaises(ConnectionClosed): read_http(fd, code=400) self.assertEqual(self.calls, 1) @greentest.skipOnWindows('Maybe hangs') def test_trailers_request_smuggling_request_terminates_then_header(self): data = ( b'POST /a HTTP/1.1\r\n' b'Host: localhost\r\n' b'Connection: keep-alive\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n' b'2\r\noh\r\n' b'4\r\n hai\r\n' b'0\r\n' # last-chunk b'\r\n' b'Header: value' b'GET /path2?a=:123 HTTP/1.1\r\n' b'Host: a.com\r\n' b'Connection: close\r\n' b'\r\n' ) with self.makefile() as fd: fd.write(data) read_http(fd, body='oh hai') read_http(fd, code=400) self.assertEqual(self.calls, 1) class TestUseWrite(TestCase): body = b'abcde' end = b'end' content_length = str(len(body + end)) def application(self, env, start_response): if env['PATH_INFO'] == '/explicit-content-length': write = start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', self.content_length)]) write(self.body) elif env['PATH_INFO'] == '/no-content-length': write = start_response('200 OK', [('Content-Type', 'text/plain')]) write(self.body) elif env['PATH_INFO'] == '/no-content-length-twice': write = start_response('200 OK', [('Content-Type', 'text/plain')]) write(self.body) write(self.body) else: # pylint:disable-next=broad-exception-raised raise Exception('Invalid url') return [self.end] def test_explicit_content_length(self): with self.makefile() as fd: fd.write('GET /explicit-content-length HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') response = read_http(fd, body=self.body + self.end) response.assertHeader('Content-Length', self.content_length) response.assertHeader('Transfer-Encoding', False) def test_no_content_length(self): with self.makefile() as fd: fd.write('GET /no-content-length HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') response = read_http(fd, body=self.body + self.end) response.assertHeader('Content-Length', False) response.assertHeader('Transfer-Encoding', 'chunked') def test_no_content_length_twice(self): with self.makefile() as fd: fd.write('GET /no-content-length-twice HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') response = read_http(fd, body=self.body + self.body + self.end) response.assertHeader('Content-Length', False) response.assertHeader('Transfer-Encoding', 'chunked') self.assertEqual(response.chunks, [self.body, self.body, self.end]) class HttpsTestCase(TestCase): certfile = os.path.join(os.path.dirname(__file__), 'test_server.crt') keyfile = os.path.join(os.path.dirname(__file__), 'test_server.key') def init_server(self, application): self.server = pywsgi.WSGIServer((self.listen_addr, 0), application, certfile=self.certfile, keyfile=self.keyfile) def urlopen(self, method='GET', post_body=None, **kwargs): # pylint:disable=arguments-differ import ssl with self.connect() as raw_sock: with ssl.wrap_socket(raw_sock) as sock: # pylint:disable=deprecated-method,no-member with sock.makefile(bufsize=1) as fd: # pylint:disable=unexpected-keyword-arg fd.write('%s / HTTP/1.1\r\nHost: localhost\r\n' % method) if post_body is not None: fd.write('Content-Length: %s\r\n\r\n' % len(post_body)) fd.write(post_body) if kwargs.get('body') 
is None: kwargs['body'] = post_body else: fd.write('\r\n') fd.flush() return read_http(fd, **kwargs) def application(self, environ, start_response): assert environ['wsgi.url_scheme'] == 'https', environ['wsgi.url_scheme'] start_response('200 OK', [('Content-Type', 'text/plain')]) return [environ['wsgi.input'].read(10)] import gevent.ssl HAVE_SSLCONTEXT = getattr(gevent.ssl, 'create_default_context') if HAVE_SSLCONTEXT: class HttpsSslContextTestCase(HttpsTestCase): def init_server(self, application): # On 2.7, our certs don't line up with hostname. # If we just use create_default_context as-is, we get # `ValueError: check_hostname requires server_hostname`. # If we set check_hostname to False, we get # `SSLError: [SSL: PEER_DID_NOT_RETURN_A_CERTIFICATE] peer did not return a certificate` # (Neither of which happens in Python 3.) But the unverified context # works both places. See also test___example_servers.py from gevent.ssl import _create_unverified_context # pylint:disable=no-name-in-module context = _create_unverified_context() context.load_cert_chain(certfile=self.certfile, keyfile=self.keyfile) self.server = pywsgi.WSGIServer((self.listen_addr, 0), application, ssl_context=context) class TestHttps(HttpsTestCase): if hasattr(socket, 'ssl'): def test_012_ssl_server(self): result = self.urlopen(method="POST", post_body='abc') self.assertEqual(result.body, 'abc') def test_013_empty_return(self): result = self.urlopen() self.assertEqual(result.body, '') if HAVE_SSLCONTEXT: class TestHttpsWithContext(HttpsSslContextTestCase, TestHttps): # pylint:disable=too-many-ancestors pass class TestInternational(TestCase): validator = None # wsgiref.validate.IteratorWrapper([]) does not have __len__ def application(self, environ, start_response): path_bytes = b'/\xd0\xbf\xd1\x80\xd0\xb8\xd0\xb2\xd0\xb5\xd1\x82' if PY3: # Under PY3, the escapes were decoded as latin-1 path_bytes = path_bytes.decode('latin-1') self.assertEqual(environ['PATH_INFO'], path_bytes) self.assertEqual(environ['QUERY_STRING'], '%D0%B2%D0%BE%D0%BF%D1%80%D0%BE%D1%81=%D0%BE%D1%82%D0%B2%D0%B5%D1%82') start_response("200 PASSED", [('Content-Type', 'text/plain')]) return [] def test(self): with self.connect() as sock: sock.sendall( b'''GET /%D0%BF%D1%80%D0%B8%D0%B2%D0%B5%D1%82?%D0%B2%D0%BE%D0%BF%D1%80%D0%BE%D1%81=%D0%BE%D1%82%D0%B2%D0%B5%D1%82 HTTP/1.1 Host: localhost Connection: close '''.replace(b'\n', b'\r\n')) with sock.makefile() as fd: read_http(fd, reason='PASSED', chunks=False, body='', content_length=0) class TestNonLatin1HeaderFromApplication(TestCase): error_fatal = False # Allow sending the exception response, don't kill the greenlet validator = None # Don't validate the application, it's deliberately bad header = b'\xe1\xbd\x8a3' # bomb in utf-8 bytes should_error = PY3 # non-native string under Py3 def setUp(self): super(TestNonLatin1HeaderFromApplication, self).setUp() self.errors = [] def tearDown(self): self.errors = [] super(TestNonLatin1HeaderFromApplication, self).tearDown() def application(self, environ, start_response): # We return a header that cannot be encoded in latin-1 try: start_response("200 PASSED", [('Content-Type', 'text/plain'), ('Custom-Header', self.header)]) except: self.errors.append(sys.exc_info()[:2]) raise return [] def test(self): with self.connect() as sock: self.expect_one_error() sock.sendall(b'''GET / HTTP/1.1\r\n\r\n''') with sock.makefile() as fd: if self.should_error: read_http(fd, code=500, reason='Internal Server Error') self.assert_error(where_type=pywsgi.SecureEnviron) 
self.assertEqual(len(self.errors), 1) _, v = self.errors[0] self.assertIsInstance(v, UnicodeError) else: read_http(fd, code=200, reason='PASSED') self.assertEqual(len(self.errors), 0) class TestNonLatin1UnicodeHeaderFromApplication(TestNonLatin1HeaderFromApplication): # Flip-flop of the superclass: Python 3 native string, Python 2 unicode object header = u"\u1f4a3" # bomb in unicode # Error both on py3 and py2. On py2, non-native string. On py3, native string # that cannot be encoded to latin-1 should_error = True class TestInputReadline(TestCase): # this test relies on the fact that readline() returns '' after it reached EOF # this behaviour is not mandated by WSGI spec, it's just happens that gevent.wsgi behaves like that # as such, this may change in the future validator = None def application(self, environ, start_response): input = environ['wsgi.input'] lines = [] while True: line = input.readline() if not line: break line = line.decode('ascii') if PY3 else line lines.append(repr(line) + ' ') start_response('200 hello', []) return [l.encode('ascii') for l in lines] if PY3 else lines def test(self): with self.makefile() as fd: content = 'hello\n\nworld\n123' fd.write('POST / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n' 'Content-Length: %s\r\n\r\n%s' % (len(content), content)) fd.flush() read_http(fd, reason='hello', body="'hello\\n' '\\n' 'world\\n' '123' ") class TestInputIter(TestInputReadline): def application(self, environ, start_response): input = environ['wsgi.input'] lines = [] for line in input: if not line: break line = line.decode('ascii') if PY3 else line lines.append(repr(line) + ' ') start_response('200 hello', []) return [l.encode('ascii') for l in lines] if PY3 else lines class TestInputReadlines(TestInputReadline): def application(self, environ, start_response): input = environ['wsgi.input'] lines = [l.decode('ascii') if PY3 else l for l in input.readlines()] lines = [repr(line) + ' ' for line in lines] start_response('200 hello', []) return [l.encode('ascii') for l in lines] if PY3 else lines class TestInputN(TestCase): # testing for this: # File "/home/denis/work/gevent/gevent/pywsgi.py", line 70, in _do_read # if length and length > self.content_length - self.position: # TypeError: unsupported operand type(s) for -: 'NoneType' and 'int' validator = None def application(self, environ, start_response): environ['wsgi.input'].read(5) start_response('200 OK', []) return [] def test(self): self.urlopen() class TestErrorInApplication(TestCase): error = object() error_fatal = False def application(self, env, start_response): self.error = greentest.ExpectedException('TestError.application') raise self.error def test(self): self.expect_one_error() self.urlopen(code=500) self.assert_error(greentest.ExpectedException, self.error) class TestError_after_start_response(TestErrorInApplication): def application(self, env, start_response): self.error = greentest.ExpectedException('TestError_after_start_response.application') start_response('200 OK', [('Content-Type', 'text/plain')]) raise self.error class TestEmptyYield(TestCase): @staticmethod def application(env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) yield b"" yield b"" def test_err(self): chunks = [] with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') read_http(fd, body='', chunks=chunks) garbage = fd.read() self.assertEqual(garbage, b"", "got garbage: %r" % garbage) class TestFirstEmptyYield(TestCase): @staticmethod def 
application(env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) yield b"" yield b"hello" def test_err(self): chunks = [b'hello'] with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') read_http(fd, body='hello', chunks=chunks) garbage = fd.read() self.assertEqual(garbage, b"") class TestEmptyYield304(TestCase): @staticmethod def application(env, start_response): start_response('304 Not modified', []) yield b"" yield b"" def test_err(self): with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') read_http(fd, code=304, body='', chunks=False) garbage = fd.read() self.assertEqual(garbage, b"") class TestContentLength304(TestCase): validator = None def application(self, env, start_response): try: start_response('304 Not modified', [('Content-Length', '100')]) except AssertionError as ex: start_response('200 Raised', []) return ex.args raise AssertionError('start_response did not fail but it should') def test_err(self): body = "Invalid Content-Length for 304 response: '100' (must be absent or zero)" with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') read_http(fd, code=200, reason='Raised', body=body, chunks=False) garbage = fd.read() self.assertEqual(garbage, b"") class TestBody304(TestCase): validator = None def application(self, env, start_response): start_response('304 Not modified', []) return [b'body'] def test_err(self): with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') with self.assertRaises(AssertionError) as exc: read_http(fd) ex = exc.exception self.assertEqual(str(ex), 'The 304 response must have no body') class TestWrite304(TestCase): validator = None error_raised = None def application(self, env, start_response): write = start_response('304 Not modified', []) self.error_raised = False try: write('body') except AssertionError as ex: self.error_raised = True raise ExpectedAssertionError(*ex.args) def test_err(self): with self.makefile() as fd: fd.write(b'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') with self.assertRaises(AssertionError) as exc: read_http(fd) ex = exc.exception self.assertEqual(str(ex), 'The 304 response must have no body') self.assertTrue(self.error_raised, 'write() must raise') class TestEmptyWrite(TestEmptyYield): @staticmethod def application(env, start_response): write = start_response('200 OK', [('Content-Type', 'text/plain')]) write(b"") write(b"") return [] class BadRequestTests(TestCase): validator = None # pywsgi checks content-length, but wsgi does not content_length = None assert TestCase.handler_class._print_unexpected_exc class handler_class(TestCase.handler_class): def _print_unexpected_exc(self): raise AssertionError("Should not print a traceback") def application(self, env, start_response): self.assertEqual(env['CONTENT_LENGTH'], self.content_length) start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'hello'] def test_negative_content_length(self): self.content_length = '-100' with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nContent-Length: %s\r\n\r\n' % self.content_length) read_http(fd, code=(200, 400)) def test_illegal_content_length(self): self.content_length = 'abc' with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nContent-Length: %s\r\n\r\n' % self.content_length) read_http(fd, code=(200, 400)) def 
test_bad_request_line_with_percent(self): # If the request is invalid and contains Python formatting characters (%) # we don't fail to log the error and we do generate a 400. # https://github.com/gevent/gevent/issues/1708 bad_request = 'GET / HTTP %\r\n' with self.makefile() as fd: fd.write(bad_request) read_http(fd, code=400) class ChunkedInputTests(TestCase): dirt = "" validator = None def application(self, env, start_response): input = env['wsgi.input'] response = [] pi = env["PATH_INFO"] if pi == "/short-read": d = input.read(10) response = [d] elif pi == "/lines": for x in input: response.append(x) elif pi == "/ping": input.read(1) response.append(b"pong") else: raise RuntimeError("bad path") start_response('200 OK', [('Content-Type', 'text/plain')]) return response def chunk_encode(self, chunks, dirt=None): if dirt is None: dirt = self.dirt return chunk_encode(chunks, dirt=dirt) def body(self, dirt=None): return self.chunk_encode(["this", " is ", "chunked", "\nline", " 2", "\n", "line3", ""], dirt=dirt) def ping(self, fd): fd.write("GET /ping HTTP/1.1\r\n\r\n") read_http(fd, body="pong") def ping_if_possible(self, fd): self.ping(fd) def test_short_read_with_content_length(self): body = self.body() req = b"POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\nContent-Length:1000\r\n\r\n" + body with self.connect() as conn: with conn.makefile(bufsize=1) as fd: # pylint:disable=unexpected-keyword-arg fd.write(req) read_http(fd, body="this is ch") self.ping_if_possible(fd) def test_short_read_with_zero_content_length(self): body = self.body() req = b"POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\nContent-Length:0\r\n\r\n" + body #print("REQUEST:", repr(req)) with self.makefile() as fd: fd.write(req) read_http(fd, body="this is ch") self.ping_if_possible(fd) def test_short_read(self): body = self.body() req = b"POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body with self.makefile() as fd: fd.write(req) read_http(fd, body="this is ch") self.ping_if_possible(fd) def test_dirt(self): body = self.body(dirt="; here is dirt\0bla") req = b"POST /ping HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body with self.makefile() as fd: fd.write(req) read_http(fd, body="pong") self.ping_if_possible(fd) def test_chunked_readline(self): body = self.body() req = "POST /lines HTTP/1.1\r\nContent-Length: %s\r\ntransfer-encoding: Chunked\r\n\r\n" % (len(body)) req = req.encode('latin-1') req += body with self.makefile() as fd: fd.write(req) read_http(fd, body='this is chunked\nline 2\nline3') def test_close_before_finished(self): self.expect_one_error() body = b'4\r\nthi' req = b"POST /short-read HTTP/1.1\r\ntransfer-encoding: Chunked\r\n\r\n" + body with self.connect() as sock: with sock.makefile(bufsize=1, mode='wb') as fd:# pylint:disable=unexpected-keyword-arg fd.write(req) fd.close() # Python 3 keeps the socket open even though the only # makefile is gone; python 2 closed them both (because there were # no outstanding references to the socket). Closing is essential for the server # to get the message that the read will fail. 
It's better to be explicit # to avoid a ResourceWarning sock.close() # Under Py2 it still needs to go away, which was implicit before del fd del sock gevent.get_hub().loop.update_now() gevent.sleep(0.01) # timing needed for cpython if greentest.PYPY: # XXX: Something is keeping the socket alive, # by which I mean, the close event is not propagating to the server # and waking up its recv() loop...we are stuck with the three bytes of # 'thi' in the buffer and trying to read the forth. No amount of tinkering # with the timing changes this...the only thing that does is running a # GC and letting some object get collected. Might this be a problem in real life? import gc gc.collect() gevent.sleep(0.01) gevent.get_hub().loop.update_now() gc.collect() gevent.sleep(0.01) # XXX2: Sometimes windows and PyPy/Travis fail to get this error, leading to a test failure. # This would have to be due to the socket being kept around and open, # not closed at the low levels. I haven't seen this locally. # In the PyPy case, I've seen the IOError reported on the console, but not # captured in the variables. # https://travis-ci.org/gevent/gevent/jobs/329232976#L1374 self.assert_error(IOError, 'unexpected end of file while parsing chunked data') class Expect100ContinueTests(TestCase): validator = None def application(self, environ, start_response): content_length = int(environ['CONTENT_LENGTH']) if content_length > 1024: start_response('417 Expectation Failed', [('Content-Length', '7'), ('Content-Type', 'text/plain')]) return [b'failure'] # pywsgi did sent a "100 continue" for each read # see http://code.google.com/p/gevent/issues/detail?id=93 text = environ['wsgi.input'].read(1) text += environ['wsgi.input'].read(content_length - 1) start_response('200 OK', [('Content-Length', str(len(text))), ('Content-Type', 'text/plain')]) return [text] def test_continue(self): with self.makefile() as fd: fd.write('PUT / HTTP/1.1\r\nHost: localhost\r\nContent-length: 1025\r\nExpect: 100-continue\r\n\r\n') read_http(fd, code=417, body="failure") fd.write('PUT / HTTP/1.1\r\nHost: localhost\r\nContent-length: 7\r\nExpect: 100-continue\r\n\r\ntesting') read_http(fd, code=100) read_http(fd, body="testing") class MultipleCookieHeadersTest(TestCase): validator = None def application(self, environ, start_response): self.assertEqual(environ['HTTP_COOKIE'], 'name1="value1"; name2="value2"') self.assertEqual(environ['HTTP_COOKIE2'], 'nameA="valueA"; nameB="valueB"') start_response('200 OK', []) return [] def test(self): with self.makefile() as fd: fd.write('''GET / HTTP/1.1 Host: localhost Cookie: name1="value1" Cookie2: nameA="valueA" Cookie2: nameB="valueB" Cookie: name2="value2"\n\n'''.replace('\n', '\r\n')) read_http(fd) class TestLeakInput(TestCase): _leak_wsgi_input = None _leak_environ = None def tearDown(self): TestCase.tearDown(self) self._leak_wsgi_input = None self._leak_environ = None def application(self, environ, start_response): pi = environ["PATH_INFO"] self._leak_wsgi_input = environ["wsgi.input"] self._leak_environ = environ if pi == "/leak-frame": environ["_leak"] = sys._getframe(0) text = b"foobar" start_response('200 OK', [('Content-Length', str(len(text))), ('Content-Type', 'text/plain')]) return [text] def test_connection_close_leak_simple(self): with self.makefile() as fd: fd.write(b"GET / HTTP/1.0\r\nConnection: close\r\n\r\n") d = fd.read() self.assertTrue(d.startswith(b"HTTP/1.1 200 OK"), d) def test_connection_close_leak_frame(self): with self.makefile() as fd: fd.write(b"GET /leak-frame HTTP/1.0\r\nConnection: 
close\r\n\r\n") d = fd.read() self.assertTrue(d.startswith(b"HTTP/1.1 200 OK"), d) self._leak_environ.pop('_leak') class TestHTTPResponseSplitting(TestCase): # The validator would prevent the app from doing the # bad things it needs to do validator = None status = '200 OK' headers = () start_exc = None def setUp(self): TestCase.setUp(self) self.start_exc = None self.status = TestHTTPResponseSplitting.status self.headers = TestHTTPResponseSplitting.headers def tearDown(self): TestCase.tearDown(self) self.start_exc = None def application(self, environ, start_response): try: start_response(self.status, self.headers) except Exception as e: # pylint: disable=broad-except self.start_exc = e else: self.start_exc = None return () def _assert_failure(self, message): with self.makefile() as fd: fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') fd.read() self.assertIsInstance(self.start_exc, ValueError) self.assertEqual(self.start_exc.args[0], message) def test_newline_in_status(self): self.status = '200 OK\r\nConnection: close\r\nContent-Length: 0\r\n\r\n' self._assert_failure('carriage return or newline in status') def test_newline_in_header_value(self): self.headers = [('Test', 'Hi\r\nConnection: close')] self._assert_failure('carriage return or newline in header value') def test_newline_in_header_name(self): self.headers = [('Test\r\n', 'Hi')] self._assert_failure('carriage return or newline in header name') class TestInvalidEnviron(TestCase): validator = None # check that WSGIServer does not insert any default values for CONTENT_LENGTH def application(self, environ, start_response): for key, value in environ.items(): if key in ('CONTENT_LENGTH', 'CONTENT_TYPE') or key.startswith('HTTP_'): if key != 'HTTP_HOST': raise ExpectedAssertionError('Unexpected environment variable: %s=%r' % ( key, value)) start_response('200 OK', []) return [] def test(self): with self.makefile() as fd: fd.write('GET / HTTP/1.0\r\nHost: localhost\r\n\r\n') read_http(fd) with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n') read_http(fd) class TestInvalidHeadersDropped(TestCase): validator = None # check that invalid headers with a _ are dropped def application(self, environ, start_response): self.assertNotIn('HTTP_X_AUTH_USER', environ) start_response('200 OK', []) return [] def test(self): with self.makefile() as fd: fd.write('GET / HTTP/1.0\r\nx-auth_user: bob\r\n\r\n') read_http(fd) class TestHandlerSubclass(TestCase): validator = None class handler_class(TestCase.handler_class): def read_requestline(self): data = self.rfile.read(7) if data[0] == b'<'[0]: # py3: indexing bytes returns ints. sigh. 
# Returning nothing stops handle_one_request() # Note that closing or even deleting self.socket() here # can lead to the read side throwing Connection Reset By Peer, # depending on the Python version and OS data += self.rfile.read(15) if data.lower() == b'': self.socket.sendall(b'HELLO') else: self.log_error('Invalid request: %r', data) return None return data + self.rfile.readline() def application(self, environ, start_response): start_response('200 OK', []) return [] def test(self): with self.makefile() as fd: fd.write(b'\x00') fd.flush() # flush() is needed on PyPy, apparently it buffers slightly differently self.assertEqual(fd.read(), b'HELLO') with self.makefile() as fd: fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n') fd.flush() read_http(fd) with self.makefile() as fd: # Trigger an error fd.write('\x00') fd.flush() self.assertEqual(fd.read(), b'') class TestErrorAfterChunk(TestCase): validator = None @staticmethod def application(env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) yield b"hello" raise greentest.ExpectedException('TestErrorAfterChunk') def test(self): with self.makefile() as fd: self.expect_one_error() fd.write('GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n') with self.assertRaises(ValueError): read_http(fd) self.assert_error(greentest.ExpectedException) def chunk_encode(chunks, dirt=None): if dirt is None: dirt = "" b = b"" for c in chunks: x = "%x%s\r\n%s\r\n" % (len(c), dirt, c) b += x.encode('ascii') return b class TestInputRaw(greentest.BaseTestCase): def make_input(self, data, content_length=None, chunked_input=False): if isinstance(data, list): data = chunk_encode(data) chunked_input = True elif isinstance(data, str) and PY3: data = data.encode("ascii") return Input(StringIO(data), content_length=content_length, chunked_input=chunked_input) if PY3: def assertEqual(self, first, second, msg=None): if isinstance(second, str): second = second.encode('ascii') super(TestInputRaw, self).assertEqual(first, second, msg) def test_short_post(self): i = self.make_input("1", content_length=2) self.assertRaises(IOError, i.read) def test_short_post_read_with_length(self): i = self.make_input("1", content_length=2) self.assertRaises(IOError, i.read, 2) def test_short_post_readline(self): i = self.make_input("1", content_length=2) self.assertRaises(IOError, i.readline) def test_post(self): i = self.make_input("12", content_length=2) data = i.read() self.assertEqual(data, "12") def test_post_read_with_length(self): i = self.make_input("12", content_length=2) data = i.read(10) self.assertEqual(data, "12") def test_chunked(self): i = self.make_input(["1", "2", ""]) data = i.read() self.assertEqual(data, "12") def test_chunked_read_with_length(self): i = self.make_input(["1", "2", ""]) data = i.read(10) self.assertEqual(data, "12") def test_chunked_missing_chunk(self): i = self.make_input(["1", "2"]) self.assertRaises(IOError, i.read) def test_chunked_missing_chunk_read_with_length(self): i = self.make_input(["1", "2"]) self.assertRaises(IOError, i.read, 10) def test_chunked_missing_chunk_readline(self): i = self.make_input(["1", "2"]) self.assertRaises(IOError, i.readline) def test_chunked_short_chunk(self): i = self.make_input("2\r\n1", chunked_input=True) self.assertRaises(IOError, i.read) def test_chunked_short_chunk_read_with_length(self): i = self.make_input("2\r\n1", chunked_input=True) self.assertRaises(IOError, i.read, 2) def test_chunked_short_chunk_readline(self): i = self.make_input("2\r\n1", 
chunked_input=True) self.assertRaises(IOError, i.readline) def test_32bit_overflow(self): # https://github.com/gevent/gevent/issues/289 # Should not raise an OverflowError on Python 2 data = b'asdf\nghij\n' long_data = b'a' * (pywsgi.MAX_REQUEST_LINE + 10) long_data += b'\n' data += long_data partial_data = b'qjk\n' # Note terminating \n n = 25 * 1000000000 if hasattr(n, 'bit_length'): self.assertEqual(n.bit_length(), 35) if not PY3 and not PYPY: # Make sure we have the impl we think we do self.assertRaises(OverflowError, StringIO(data).readline, n) i = self.make_input(data, content_length=n) # No size hint, but we have too large a content_length to fit self.assertEqual(i.readline(), b'asdf\n') # Large size hint self.assertEqual(i.readline(n), b'ghij\n') self.assertEqual(i.readline(n), long_data) # Now again with the real content length, assuring we can't read past it i = self.make_input(data + partial_data, content_length=len(data) + 1) self.assertEqual(i.readline(), b'asdf\n') self.assertEqual(i.readline(n), b'ghij\n') self.assertEqual(i.readline(n), long_data) # Now we've reached content_length so we shouldn't be able to # read anymore except the one byte remaining self.assertEqual(i.readline(n), b'q') class Test414(TestCase): @staticmethod def application(env, start_response): raise AssertionError('should not get there') def test(self): longline = 'x' * 20000 with self.makefile() as fd: fd.write(('''GET /%s HTTP/1.0\r\nHello: world\r\n\r\n''' % longline).encode('latin-1')) read_http(fd, code=414) class TestLogging(TestCase): # Something that gets wrapped in a LoggingLogAdapter class Logger(object): accessed = None logged = None thing = None def log(self, level, msg): self.logged = (level, msg) def access(self, msg): self.accessed = msg def get_thing(self): return self.thing def init_logger(self): return self.Logger() @staticmethod def application(env, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'hello'] # Tests for issue #663 def test_proxy_methods_on_log(self): # An object that looks like a logger gets wrapped # with a proxy that self.assertTrue(isinstance(self.server.log, pywsgi.LoggingLogAdapter)) self.server.log.access("access") self.server.log.write("write") self.assertEqual(self.server.log.accessed, "access") self.assertEqual(self.server.log.logged, (20, "write")) def test_set_attributes(self): # Not defined by the wrapper, it goes to the logger self.server.log.thing = 42 self.assertEqual(self.server.log.get_thing(), 42) del self.server.log.thing self.assertEqual(self.server.log.get_thing(), None) def test_status_log(self): # Issue 664: Make sure we format the status line as a string self.urlopen() msg = self.server.log.logged[1] self.assertTrue('"GET / HTTP/1.1" 200 ' in msg, msg) # Issue 756: Make sure we don't throw a newline on the end self.assertTrue('\n' not in msg, msg) class TestEnviron(TestCase): # The wsgiref validator asserts type(environ) is dict. 
# https://mail.python.org/pipermail/web-sig/2016-March/005455.html validator = None def init_server(self, application): super(TestEnviron, self).init_server(application) self.server.environ_class = pywsgi.SecureEnviron def application(self, env, start_response): self.assertIsInstance(env, pywsgi.SecureEnviron) start_response('200 OK', [('Content-Type', 'text/plain')]) return [] def test_environ_is_secure_by_default(self): self.urlopen() def test_default_secure_repr(self): environ = pywsgi.SecureEnviron() self.assertIn('"}), str(environ)) self.assertEqual(repr({'key': ""}), repr(environ)) environ.whitelist_keys = ('key',) self.assertEqual(str({'key': 'value'}), str(environ)) self.assertEqual(repr({'key': 'value'}), repr(environ)) del environ.whitelist_keys def test_override_class_defaults(self): class EnvironClass(pywsgi.SecureEnviron): __slots__ = () environ = EnvironClass() self.assertTrue(environ.secure_repr) EnvironClass.default_secure_repr = False self.assertFalse(environ.secure_repr) self.assertEqual(str({}), str(environ)) self.assertEqual(repr({}), repr(environ)) EnvironClass.default_secure_repr = True EnvironClass.default_whitelist_keys = ('key',) environ['key'] = 1 self.assertEqual(str({'key': 1}), str(environ)) self.assertEqual(repr({'key': 1}), repr(environ)) # Clean up for leaktests del environ del EnvironClass import gc; gc.collect() def test_copy_still_secure(self): for cls in (pywsgi.Environ, pywsgi.SecureEnviron): self.assertIsInstance(cls().copy(), cls) def test_pickle_copy_returns_dict(self): # Anything going through copy.copy/pickle should # return the same pickle that a dict would. import pickle import json for cls in (pywsgi.Environ, pywsgi.SecureEnviron): bltin = {'key': 'value'} env = cls(bltin) self.assertIsInstance(env, cls) self.assertEqual(bltin, env) self.assertEqual(env, bltin) for protocol in range(0, pickle.HIGHEST_PROTOCOL + 1): # It's impossible to get a subclass of dict to pickle # identically, but it can restore identically env_dump = pickle.dumps(env, protocol) self.assertNotIn(b'Environ', env_dump) loaded = pickle.loads(env_dump) self.assertEqual(type(loaded), dict) self.assertEqual(json.dumps(bltin), json.dumps(env)) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__queue.py000066400000000000000000000314631471441230600212760ustar00rootroot00000000000000import unittest import gevent.testing as greentest from gevent.testing import TestCase import gevent from gevent.hub import get_hub, LoopExit from gevent import util from gevent import queue from gevent.queue import Empty, Full from gevent.event import AsyncResult from gevent.testing.timing import AbstractGenericGetTestCase # pylint:disable=too-many-ancestors class TestQueue(TestCase): def test_send_first(self): self.switch_expected = False q = queue.Queue() q.put('hi') self.assertEqual(q.peek(), 'hi') self.assertEqual(q.get(), 'hi') def test_peek_empty(self): q = queue.Queue() # No putters waiting, in the main loop: LoopExit with self.assertRaises(LoopExit): q.peek() def waiter(q): self.assertRaises(Empty, q.peek, timeout=0.01) g = gevent.spawn(waiter, q) gevent.sleep(0.1) g.join() def test_peek_multi_greenlet(self): q = queue.Queue() g = gevent.spawn(q.peek) g.start() gevent.sleep(0) q.put(1) g.join() self.assertTrue(g.exception is None) self.assertEqual(q.peek(), 1) def test_send_last(self): q = queue.Queue() def waiter(q): with gevent.Timeout(0.1 if not greentest.RUNNING_ON_APPVEYOR else 0.5): self.assertEqual(q.get(), 'hi2') return "OK" p = gevent.spawn(waiter, q) 
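# Yield to the hub so the waiter greenlet is already blocked in q.get()
# before 'hi2' is put; the second sleep lets it finish, and
# p.get(timeout=0) then asserts that it did, without blocking again.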
gevent.sleep(0.01) q.put('hi2') gevent.sleep(0.01) assert p.get(timeout=0) == "OK" def test_max_size(self): q = queue.Queue(2) results = [] def putter(q): q.put('a') results.append('a') q.put('b') results.append('b') q.put('c') results.append('c') return "OK" p = gevent.spawn(putter, q) gevent.sleep(0) self.assertEqual(results, ['a', 'b']) self.assertEqual(q.get(), 'a') gevent.sleep(0) self.assertEqual(results, ['a', 'b', 'c']) self.assertEqual(q.get(), 'b') self.assertEqual(q.get(), 'c') assert p.get(timeout=0) == "OK" def test_zero_max_size(self): q = queue.Channel() def sender(evt, q): q.put('hi') evt.set('done') def receiver(evt, q): x = q.get() evt.set(x) e1 = AsyncResult() e2 = AsyncResult() p1 = gevent.spawn(sender, e1, q) gevent.sleep(0.001) self.assertTrue(not e1.ready()) p2 = gevent.spawn(receiver, e2, q) self.assertEqual(e2.get(), 'hi') self.assertEqual(e1.get(), 'done') with gevent.Timeout(0): gevent.joinall([p1, p2]) def test_multiple_waiters(self): # tests that multiple waiters get their results back q = queue.Queue() def waiter(q, evt): evt.set(q.get()) sendings = ['1', '2', '3', '4'] evts = [AsyncResult() for x in sendings] for i, _ in enumerate(sendings): gevent.spawn(waiter, q, evts[i]) # XXX use waitall for them gevent.sleep(0.01) # get 'em all waiting results = set() def collect_pending_results(): for e in evts: with gevent.Timeout(0.001, False): x = e.get() results.add(x) return len(results) q.put(sendings[0]) self.assertEqual(collect_pending_results(), 1) q.put(sendings[1]) self.assertEqual(collect_pending_results(), 2) q.put(sendings[2]) q.put(sendings[3]) self.assertEqual(collect_pending_results(), 4) def test_waiters_that_cancel(self): q = queue.Queue() def do_receive(q, evt): with gevent.Timeout(0, RuntimeError()): try: result = q.get() evt.set(result) # pragma: no cover (should have raised) except RuntimeError: evt.set('timed out') evt = AsyncResult() gevent.spawn(do_receive, q, evt) self.assertEqual(evt.get(), 'timed out') q.put('hi') self.assertEqual(q.get(), 'hi') def test_senders_that_die(self): q = queue.Queue() def do_send(q): q.put('sent') gevent.spawn(do_send, q) self.assertEqual(q.get(), 'sent') def test_two_waiters_one_dies(self): def waiter(q, evt): evt.set(q.get()) def do_receive(q, evt): with gevent.Timeout(0, RuntimeError()): try: result = q.get() evt.set(result) # pragma: no cover (should have raised) except RuntimeError: evt.set('timed out') q = queue.Queue() dying_evt = AsyncResult() waiting_evt = AsyncResult() gevent.spawn(do_receive, q, dying_evt) gevent.spawn(waiter, q, waiting_evt) gevent.sleep(0.1) q.put('hi') self.assertEqual(dying_evt.get(), 'timed out') self.assertEqual(waiting_evt.get(), 'hi') def test_two_bogus_waiters(self): def do_receive(q, evt): with gevent.Timeout(0, RuntimeError()): try: result = q.get() evt.set(result) # pragma: no cover (should have raised) except RuntimeError: evt.set('timed out') q = queue.Queue() e1 = AsyncResult() e2 = AsyncResult() gevent.spawn(do_receive, q, e1) gevent.spawn(do_receive, q, e2) gevent.sleep(0.1) q.put('sent') self.assertEqual(e1.get(), 'timed out') self.assertEqual(e2.get(), 'timed out') self.assertEqual(q.get(), 'sent') class TestChannel(TestCase): def test_send(self): channel = queue.Channel() events = [] def another_greenlet(): events.append(channel.get()) events.append(channel.get()) g = gevent.spawn(another_greenlet) events.append('sending') channel.put('hello') events.append('sent hello') channel.put('world') events.append('sent world') self.assertEqual(['sending', 'hello', 'sent 
hello', 'world', 'sent world'], events) g.get() def test_wait(self): channel = queue.Channel() events = [] def another_greenlet(): events.append('sending hello') channel.put('hello') events.append('sending world') channel.put('world') events.append('sent world') g = gevent.spawn(another_greenlet) events.append('waiting') events.append(channel.get()) events.append(channel.get()) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world'], events) gevent.sleep(0) self.assertEqual(['waiting', 'sending hello', 'hello', 'sending world', 'world', 'sent world'], events) g.get() def test_iterable(self): channel = queue.Channel() gevent.spawn(channel.put, StopIteration) r = list(channel) self.assertEqual(r, []) class TestJoinableQueue(TestCase): def test_task_done(self): channel = queue.JoinableQueue() X = object() gevent.spawn(channel.put, X) result = channel.get() self.assertIs(result, X) self.assertEqual(1, channel.unfinished_tasks) channel.task_done() self.assertEqual(0, channel.unfinished_tasks) class TestNoWait(TestCase): def test_put_nowait_simple(self): result = [] q = queue.Queue(1) def store_result(func, *args): result.append(func(*args)) run_callback = get_hub().loop.run_callback run_callback(store_result, util.wrap_errors(Full, q.put_nowait), 2) run_callback(store_result, util.wrap_errors(Full, q.put_nowait), 3) gevent.sleep(0) assert len(result) == 2, result assert result[0] is None, result assert isinstance(result[1], queue.Full), result def test_get_nowait_simple(self): result = [] q = queue.Queue(1) q.put(4) def store_result(func, *args): result.append(func(*args)) run_callback = get_hub().loop.run_callback run_callback(store_result, util.wrap_errors(Empty, q.get_nowait)) run_callback(store_result, util.wrap_errors(Empty, q.get_nowait)) gevent.sleep(0) assert len(result) == 2, result assert result[0] == 4, result assert isinstance(result[1], queue.Empty), result # get_nowait must work from the mainloop def test_get_nowait_unlock(self): result = [] q = queue.Queue(1) p = gevent.spawn(q.put, 5) def store_result(func, *args): result.append(func(*args)) assert q.empty(), q gevent.sleep(0) assert q.full(), q get_hub().loop.run_callback(store_result, q.get_nowait) gevent.sleep(0) assert q.empty(), q assert result == [5], result assert p.ready(), p assert p.dead, p assert q.empty(), q def test_get_nowait_unlock_channel(self): # get_nowait runs fine in the hub, and # it switches to a waiting putter if needed. 
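# A Channel has no internal buffer, so it reports itself as both empty()
# and full() both before and after the putter greenlet blocks; get_nowait()
# still succeeds from the hub because the value is handed straight from the
# waiting putter to the caller (result becomes [5] below).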
result = [] q = queue.Channel() p = gevent.spawn(q.put, 5) def store_result(func, *args): result.append(func(*args)) self.assertTrue(q.empty()) self.assertTrue(q.full()) gevent.sleep(0.001) self.assertTrue(q.empty()) self.assertTrue(q.full()) get_hub().loop.run_callback(store_result, q.get_nowait) gevent.sleep(0.001) self.assertTrue(q.empty()) self.assertTrue(q.full()) self.assertEqual(result, [5]) self.assertTrue(p.ready()) self.assertTrue(p.dead) self.assertTrue(q.empty()) # put_nowait must work from the mainloop def test_put_nowait_unlock(self): result = [] q = queue.Queue() p = gevent.spawn(q.get) def store_result(func, *args): result.append(func(*args)) self.assertTrue(q.empty(), q) self.assertFalse(q.full(), q) gevent.sleep(0.001) self.assertTrue(q.empty(), q) self.assertFalse(q.full(), q) get_hub().loop.run_callback(store_result, q.put_nowait, 10) self.assertFalse(p.ready(), p) gevent.sleep(0.001) self.assertEqual(result, [None]) self.assertTrue(p.ready(), p) self.assertFalse(q.full(), q) self.assertTrue(q.empty(), q) class TestJoinEmpty(TestCase): def test_issue_45(self): """Test that join() exits immediately if not jobs were put into the queue""" self.switch_expected = False q = queue.JoinableQueue() q.join() class AbstractTestWeakRefMixin(object): def test_weak_reference(self): import weakref one = self._makeOne() ref = weakref.ref(one) self.assertIs(one, ref()) class TestGetInterrupt(AbstractTestWeakRefMixin, AbstractGenericGetTestCase): Timeout = Empty kind = queue.Queue def wait(self, timeout): return self._makeOne().get(timeout=timeout) def _makeOne(self): return self.kind() class TestGetInterruptJoinableQueue(TestGetInterrupt): kind = queue.JoinableQueue class TestGetInterruptLifoQueue(TestGetInterrupt): kind = queue.LifoQueue class TestGetInterruptPriorityQueue(TestGetInterrupt): kind = queue.PriorityQueue class TestGetInterruptChannel(TestGetInterrupt): kind = queue.Channel class TestPutInterrupt(AbstractGenericGetTestCase): kind = queue.Queue Timeout = Full def setUp(self): super(TestPutInterrupt, self).setUp() self.queue = self._makeOne() def wait(self, timeout): while not self.queue.full(): self.queue.put(1) return self.queue.put(2, timeout=timeout) def _makeOne(self): return self.kind(1) class TestPutInterruptJoinableQueue(TestPutInterrupt): kind = queue.JoinableQueue class TestPutInterruptLifoQueue(TestPutInterrupt): kind = queue.LifoQueue class TestPutInterruptPriorityQueue(TestPutInterrupt): kind = queue.PriorityQueue class TestPutInterruptChannel(TestPutInterrupt): kind = queue.Channel def _makeOne(self): return self.kind() if hasattr(queue, 'SimpleQueue'): class TestGetInterruptSimpleQueue(TestGetInterrupt): kind = queue.SimpleQueue def test_raises_timeout_Timeout(self): raise unittest.SkipTest("Not supported") test_raises_timeout_Timeout_exc_customized = test_raises_timeout_Timeout test_outer_timeout_is_not_lost = test_raises_timeout_Timeout del AbstractGenericGetTestCase if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__real_greenlet.py000066400000000000000000000012651471441230600227570ustar00rootroot00000000000000"""Testing that greenlet restores sys.exc_info. 
Passes with CPython + greenlet 0.4.0 Fails with PyPy 2.2.1 """ from __future__ import print_function import sys from gevent import testing as greentest class Test(greentest.TestCase): def test(self): import greenlet print('Your greenlet version: %s' % (getattr(greenlet, '__version__', None), )) result = [] def func(): result.append(repr(sys.exc_info())) g = greenlet.greenlet(func) try: 1 / 0 except ZeroDivisionError: g.switch() self.assertEqual(result, ['(None, None, None)']) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__refcount.py000066400000000000000000000133521471441230600217740ustar00rootroot00000000000000# Copyright (c) 2008 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. """This test checks that underlying socket instances (gevent.socket.socket._sock) are not leaked by the hub. """ from __future__ import print_function from _socket import socket as c_socket import sys if sys.version_info[0] >= 3: # Python3 enforces that __weakref__ appears only once, # and not when a slotted class inherits from an unslotted class. # We mess around with the class MRO below and violate that rule # (because socket.socket defines __slots__ with __weakref__), # so import socket.socket before that can happen. __import__('socket') Socket = c_socket else: class Socket(c_socket): "Something we can have a weakref to" import _socket _socket.socket = Socket from gevent import monkey; monkey.patch_all() import gevent.testing as greentest from gevent.testing import support from gevent.testing import params try: from thread import start_new_thread except ImportError: from _thread import start_new_thread from time import sleep import weakref import gc import socket socket._realsocket = Socket SOCKET_TIMEOUT = 0.1 if greentest.RESOLVER_DNSPYTHON: # Takes a bit longer to resolve the client # address initially. 
SOCKET_TIMEOUT *= 2 if greentest.RUNNING_ON_CI: SOCKET_TIMEOUT *= 2 class Server(object): listening = False client_data = None server_port = None def __init__(self, raise_on_timeout): self.raise_on_timeout = raise_on_timeout self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: self.server_port = support.bind_port(self.socket, params.DEFAULT_BIND_ADDR) except: self.close() raise def close(self): self.socket.close() self.socket = None def handle_request(self): try: self.socket.settimeout(SOCKET_TIMEOUT) self.socket.listen(5) self.listening = True try: conn, _ = self.socket.accept() # pylint:disable=no-member except socket.timeout: if self.raise_on_timeout: raise return try: self.client_data = conn.recv(100) conn.send(b'bye') finally: conn.close() finally: self.close() class Client(object): server_data = None def __init__(self, server_port): self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.server_port = server_port def close(self): self.socket.close() self.socket = None def make_request(self): try: self.socket.connect((params.DEFAULT_CONNECT, self.server_port)) self.socket.send(b'hello') self.server_data = self.socket.recv(100) finally: self.close() class Test(greentest.TestCase): __timeout__ = greentest.LARGE_TIMEOUT def run_interaction(self, run_client): server = Server(raise_on_timeout=run_client) wref_to_hidden_server_socket = weakref.ref(server.socket._sock) client = None start_new_thread(server.handle_request) if run_client: client = Client(server.server_port) start_new_thread(client.make_request) # Wait until we do our business; we will always close # the server; We may also close the client. # On PyPy, we may not actually see the changes they write to # their dicts immediately. for obj in server, client: if obj is None: continue while obj.socket is not None: sleep(0.01) # If we have a client, then we should have data if run_client: self.assertEqual(server.client_data, b'hello') self.assertEqual(client.server_data, b'bye') return wref_to_hidden_server_socket def run_and_check(self, run_client): wref_to_hidden_server_socket = self.run_interaction(run_client=run_client) greentest.gc_collect_if_needed() if wref_to_hidden_server_socket(): from pprint import pformat print(pformat(gc.get_referrers(wref_to_hidden_server_socket()))) for x in gc.get_referrers(wref_to_hidden_server_socket()): print(pformat(x)) for y in gc.get_referrers(x): print('-', pformat(y)) self.fail('server socket should be dead by now') def test_clean_exit(self): self.run_and_check(True) self.run_and_check(True) def test_timeout_exit(self): self.run_and_check(False) self.run_and_check(False) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__refcount_core.py000066400000000000000000000011301471441230600227730ustar00rootroot00000000000000import sys import weakref from gevent import testing as greentest class Dummy(object): def __init__(self): __import__('gevent.core') @greentest.skipIf(weakref.ref(Dummy())() is not None, "Relies on refcounting for fast weakref cleanup") class Test(greentest.TestCase): def test(self): from gevent import socket s = socket.socket() r = weakref.ref(s) s.close() del s self.assertIsNone(r()) assert weakref.ref(Dummy())() is None or hasattr(sys, 'pypy_version_info') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__resolver_dnspython.py000066400000000000000000000021351471441230600241130ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests explicitly using the DNS python resolver. 
""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import sys import unittest import subprocess import os from gevent import testing as greentest @unittest.skipUnless(greentest.resolver_dnspython_available(), "dnspython not available") class TestDnsPython(unittest.TestCase): def _run_one(self, mod_name): cmd = [ sys.executable, '-m', 'gevent.tests.monkey_package.' + mod_name ] env = dict(os.environ) env['GEVENT_RESOLVER'] = 'dnspython' output = subprocess.check_output(cmd, env=env) self.assertIn(b'_g_patched_module_dns', output) self.assertNotIn(b'_g_patched_module_dns.rdtypes', output) return output def test_import_dns_no_monkey_patch(self): self._run_one('issue1526_no_monkey') def test_import_dns_with_monkey_patch(self): self._run_one('issue1526_with_monkey') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__select.py000066400000000000000000000077451471441230600214370ustar00rootroot00000000000000from gevent.testing import six import sys import os import errno from gevent import select, socket import gevent.core import gevent.testing as greentest import gevent.testing.timing import unittest class TestSelect(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): select.select([], [], [], timeout) @greentest.skipOnWindows("Cant select on files") class TestSelectRead(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): r, w = os.pipe() try: select.select([r], [], [], timeout) finally: os.close(r) os.close(w) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): # Backported from test_select.py in 3.4 with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: # Python 3 self.assertEqual(err.errno, errno.EBADF) except select.error as err: # pylint:disable=duplicate-except # Python 2 (select.error is OSError on py3) self.assertEqual(err.args[0], errno.EBADF) else: self.fail("exception not raised") @unittest.skipUnless(hasattr(select, 'poll'), "Needs poll") @greentest.skipOnWindows("Cant poll on files") class TestPollRead(gevent.testing.timing.AbstractGenericWaitTestCase): def wait(self, timeout): # On darwin, the read pipe is reported as writable # immediately, for some reason. So we carefully register # it only for read events (the default is read and write) r, w = os.pipe() try: poll = select.poll() poll.register(r, select.POLLIN) poll.poll(timeout * 1000) finally: poll.unregister(r) os.close(r) os.close(w) def test_unregister_never_registered(self): # "Attempting to remove a file descriptor that was # never registered causes a KeyError exception to be # raised." poll = select.poll() self.assertRaises(KeyError, poll.unregister, 5) def test_poll_invalid(self): self.skipTest( "libev >= 4.27 aborts the process if built with EV_VERIFY >= 2. " "For libuv, depending on whether the fileno is reused or not " "this either crashes or does nothing.") with open(__file__, 'rb') as fp: fd = fp.fileno() poll = select.poll() poll.register(fd, select.POLLIN) # Close after registering; libuv refuses to even # create a watcher if it would get EBADF (so this turns into # a test of whether or not we successfully initted the watcher). 
fp.close() result = poll.poll(0) self.assertEqual(result, [(fd, select.POLLNVAL)]) # pylint:disable=no-member class TestSelectTypes(greentest.TestCase): def test_int(self): sock = socket.socket() try: select.select([int(sock.fileno())], [], [], 0.001) finally: sock.close() if hasattr(six.builtins, 'long'): def test_long(self): sock = socket.socket() try: select.select( [six.builtins.long(sock.fileno())], [], [], 0.001) finally: sock.close() def test_iterable(self): sock = socket.socket() def fileno_iter(): yield int(sock.fileno()) try: select.select(fileno_iter(), [], [], 0.001) finally: sock.close() def test_string(self): self.switch_expected = False self.assertRaises(TypeError, select.select, ['hello'], [], [], 0.001) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__selectors.py000066400000000000000000000073151471441230600221540ustar00rootroot00000000000000# Tests for gevent.selectors in its native form, without # monkey-patching. import gevent from gevent import socket from gevent import selectors import gevent.testing as greentest class SelectorTestMixin(object): @staticmethod def run_selector_once(sel, timeout=3): # Run in a background greenlet, leaving the main # greenlet free to send data. events = sel.select(timeout=timeout) for key, mask in events: key.data(sel, key.fileobj, mask) gevent.sleep() unregister_after_send = True def read_from_ready_socket_and_reply(self, selector, conn, _events): data = conn.recv(100) # Should be ready if data: conn.send(data) # Hope it won't block # Must unregister before we close. if self.unregister_after_send: selector.unregister(conn) conn.close() def _check_selector(self, sel): server, client = socket.socketpair() glet = None try: sel.register(server, selectors.EVENT_READ, self.read_from_ready_socket_and_reply) glet = gevent.spawn(self.run_selector_once, sel) DATA = b'abcdef' client.send(DATA) data = client.recv(50) # here is probably where we yield to the event loop self.assertEqual(data, DATA) finally: sel.close() server.close() client.close() if glet is not None: glet.join(10) self.assertTrue(glet is not None and glet.ready()) class GeventSelectorTest(SelectorTestMixin, greentest.TestCase): def test_select_using_socketpair(self): # Basic test. with selectors.GeventSelector() as sel: self._check_selector(sel) def test_select_many_sockets(self): try: AF_UNIX = socket.AF_UNIX except AttributeError: AF_UNIX = None pairs = [socket.socketpair() for _ in range(10)] try: server_sel = selectors.GeventSelector() client_sel = selectors.GeventSelector() for i, pair in enumerate(pairs): server, client = pair server_sel.register(server, selectors.EVENT_READ, self.read_from_ready_socket_and_reply) client_sel.register(client, selectors.EVENT_READ, i) # Prime them all to be ready at once. data = str(i).encode('ascii') client.send(data) # Read and reply to all the clients.. # Everyone should be ready, so we ask not to block. # The call to gevent.idle() is there to make sure that # all event loop implementations (looking at you, libuv) # get a chance to poll for IO. Without it, libuv # doesn't find any results here. # Not blocking only works for AF_UNIX sockets, though. # If we got AF_INET (Windows) the data may need some time to # traverse through the layers. 
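            # (In the selectors API a timeout <= 0 means "poll without
            # blocking", which is why -1 is passed for AF_UNIX below.)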
gevent.idle() self.run_selector_once( server_sel, timeout=-1 if pairs[0][0].family == AF_UNIX else 3) found = 0 for key, _ in client_sel.select(timeout=3): expected = str(key.data).encode('ascii') data = key.fileobj.recv(50) self.assertEqual(data, expected) found += 1 self.assertEqual(found, len(pairs)) finally: server_sel.close() client_sel.close() for pair in pairs: for s in pair: s.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__semaphore.py000066400000000000000000000326741471441230600221420ustar00rootroot00000000000000### # This file is test__semaphore.py only for organization purposes. # The public API, # and the *only* correct place to import Semaphore --- even in tests --- # is ``gevent.lock``, never ``gevent._semaphore``. ## from __future__ import print_function from __future__ import absolute_import import weakref import gevent import gevent.exceptions from gevent.lock import Semaphore from gevent.lock import BoundedSemaphore import gevent.testing as greentest from gevent.testing import timing class TestSemaphore(greentest.TestCase): # issue 39 def test_acquire_returns_false_after_timeout(self): s = Semaphore(value=0) result = s.acquire(timeout=0.01) assert result is False, repr(result) def test_release_twice(self): s = Semaphore() result = [] s.rawlink(lambda s: result.append('a')) s.release() s.rawlink(lambda s: result.append('b')) s.release() gevent.sleep(0.001) # The order, though, is not guaranteed. self.assertEqual(sorted(result), ['a', 'b']) def test_semaphore_weakref(self): s = Semaphore() r = weakref.ref(s) self.assertEqual(s, r()) @greentest.ignores_leakcheck def test_semaphore_in_class_with_del(self): # Issue #704. This used to crash the process # under PyPy through at least 4.0.1 if the Semaphore # was implemented with Cython. class X(object): def __init__(self): self.s = Semaphore() def __del__(self): self.s.acquire() X() import gc gc.collect() gc.collect() def test_rawlink_on_unacquired_runs_notifiers(self): # https://github.com/gevent/gevent/issues/1287 # Rawlinking a ready semaphore should fire immediately, # not raise LoopExit s = Semaphore() gevent.wait([s]) class TestSemaphoreMultiThread(greentest.TestCase): # Tests that the object can be acquired correctly across # multiple threads. # Used as a base class. # See https://github.com/gevent/gevent/issues/1437 def _getTargetClass(self): return Semaphore def _makeOne(self): # Create an object that is associated with the current hub. If # we don't do this now, it gets initialized lazily the first # time it would have to block, which, in the event of threads, # would be from an arbitrary thread. 
return self._getTargetClass()(1) def _makeThreadMain(self, thread_running, thread_acquired, sem, acquired, exc_info, **thread_acquire_kwargs): from gevent._hub_local import get_hub_if_exists import sys def thread_main(): thread_running.set() try: acquired.append( sem.acquire(**thread_acquire_kwargs) ) except: exc_info[:] = sys.exc_info() raise # Print finally: hub = get_hub_if_exists() if hub is not None: hub.join() hub.destroy(destroy_loop=True) thread_acquired.set() return thread_main IDLE_ITERATIONS = 5 def _do_test_acquire_in_one_then_another(self, release=True, require_thread_acquired_to_finish=False, **thread_acquire_kwargs): from gevent import monkey self.assertFalse(monkey.is_module_patched('threading')) import threading thread_running = threading.Event() thread_acquired = threading.Event() sem = self._makeOne() # Make future acquires block sem.acquire() exc_info = [] acquired = [] t = threading.Thread(target=self._makeThreadMain( thread_running, thread_acquired, sem, acquired, exc_info, **thread_acquire_kwargs )) t.daemon = True t.start() thread_running.wait(10) # implausibly large time if release: sem.release() # Spin the loop to be sure the release gets through. # (Release schedules the notifier to run, and when the # notifier run it sends the async notification to the # other thread. Depending on exactly where we are in the # event loop, and the limit to the number of callbacks # that get run (including time-based) the notifier may or # may not be immediately ready to run, so this can take up # to two iterations.) for _ in range(self.IDLE_ITERATIONS): gevent.idle() if thread_acquired.wait(timing.LARGE_TICK): break self.assertEqual(acquired, [True]) if not release and thread_acquire_kwargs.get("timeout"): # Spin the loop to be sure that the timeout has a chance to # process. Interleave this with something that drops the GIL # so the background thread has a chance to notice that. for _ in range(self.IDLE_ITERATIONS): gevent.idle() if thread_acquired.wait(timing.LARGE_TICK): break thread_acquired.wait(timing.LARGE_TICK * 5) if require_thread_acquired_to_finish: self.assertTrue(thread_acquired.is_set()) try: self.assertEqual(exc_info, []) finally: exc_info = None return sem, acquired def test_acquire_in_one_then_another(self): self._do_test_acquire_in_one_then_another(release=True) def test_acquire_in_one_then_another_timed(self): sem, acquired_in_thread = self._do_test_acquire_in_one_then_another( release=False, require_thread_acquired_to_finish=True, timeout=timing.SMALLEST_RELIABLE_DELAY) self.assertEqual([False], acquired_in_thread) # This doesn't, of course, notify anything, because # the waiter has given up. sem.release() notifier = getattr(sem, '_notifier', None) self.assertIsNone(notifier) def test_acquire_in_one_wait_greenlet_wait_thread_gives_up(self): # The waiter in the thread both arrives and gives up while # the notifier is already running...or at least, that's what # we'd like to arrange, but the _notify_links function doesn't # drop the GIL/object lock, so the other thread is stuck and doesn't # actually get to call into the acquire method. from gevent import monkey self.assertFalse(monkey.is_module_patched('threading')) import threading sem = self._makeOne() # Make future acquires block sem.acquire() def greenlet_one(): ack = sem.acquire() # We're running in the notifier function right now. It switched to # us. 
thread.start() gevent.sleep(timing.LARGE_TICK) return ack exc_info = [] acquired = [] glet = gevent.spawn(greenlet_one) thread = threading.Thread(target=self._makeThreadMain( threading.Event(), threading.Event(), sem, acquired, exc_info, timeout=timing.LARGE_TICK )) thread.daemon = True gevent.idle() sem.release() glet.join() for _ in range(3): gevent.idle() thread.join(timing.LARGE_TICK) self.assertEqual(glet.value, True) self.assertEqual([], exc_info) self.assertEqual([False], acquired) self.assertTrue(glet.dead, glet) glet = None def assertOneHasNoHub(self, sem): self.assertIsNone(sem.hub, sem) @greentest.skipOnPyPyOnWindows("Flaky there; can't reproduce elsewhere") def test_dueling_threads(self, acquire_args=(), create_hub=None): # pylint:disable=too-many-locals,too-many-statements # Threads doing nothing but acquiring and releasing locks, without # having any other greenlets to switch to. # https://github.com/gevent/gevent/issues/1698 from gevent import monkey from gevent._hub_local import get_hub_if_exists self.assertFalse(monkey.is_module_patched('threading')) import threading from time import sleep as native_sleep sem = self._makeOne() self.assertOneHasNoHub(sem) count = 10000 results = [-1, -1] run = True def do_it(ix): if create_hub: gevent.get_hub() try: for i in range(count): if not run: break acquired = sem.acquire(*acquire_args) assert acquire_args or acquired if acquired: sem.release() results[ix] = i if not create_hub: # We don't artificially create the hub. self.assertIsNone( get_hub_if_exists(), (get_hub_if_exists(), ix, i) ) if create_hub and i % 10 == 0: gevent.sleep(timing.SMALLEST_RELIABLE_DELAY) elif i % 100 == 0: native_sleep(timing.SMALLEST_RELIABLE_DELAY) except Exception as ex: # pylint:disable=broad-except import traceback; traceback.print_exc() results[ix] = str(ex) ex = None finally: hub = get_hub_if_exists() if hub is not None: hub.join() hub.destroy(destroy_loop=True) t1 = threading.Thread(target=do_it, args=(0,)) t1.daemon = True t2 = threading.Thread(target=do_it, args=(1,)) t2.daemon = True t1.start() t2.start() t1.join(1) t2.join(1) while t1.is_alive() or t2.is_alive(): cur = list(results) t1.join(7) t2.join(7) if cur == results: # Hmm, after two seconds, no progress run = False break self.assertEqual(results, [count - 1, count - 1]) def test_dueling_threads_timeout(self): self.test_dueling_threads((True, 4)) def test_dueling_threads_with_hub(self): self.test_dueling_threads(create_hub=True) # XXX: Need a test with multiple greenlets in a non-primary # thread. Things should work, just very slowly; instead of moving through # greenlet.switch(), they'll be moving with async watchers. class TestBoundedSemaphoreMultiThread(TestSemaphoreMultiThread): def _getTargetClass(self): return BoundedSemaphore @greentest.skipOnPurePython("Needs C extension") class TestCExt(greentest.TestCase): def test_c_extension(self): self.assertEqual(Semaphore.__module__, 'gevent._gevent_c_semaphore') class SwitchWithFixedHash(object): # Replaces greenlet.switch with a callable object # with a hash code we control. This only matters if # we're hashing this somewhere (which we used to), but # that doesn't preserve order, so we don't do # that anymore. 
def __init__(self, greenlet, hashcode): self.switch = greenlet.switch self.hashcode = hashcode def __hash__(self): raise AssertionError def __eq__(self, other): raise AssertionError def __call__(self, *args, **kwargs): return self.switch(*args, **kwargs) def __repr__(self): return repr(self.switch) class FirstG(gevent.Greenlet): # A greenlet whose switch method will have a low hashcode. hashcode = 10 def __init__(self, *args, **kwargs): gevent.Greenlet.__init__(self, *args, **kwargs) self.switch = SwitchWithFixedHash(self, self.hashcode) class LastG(FirstG): # A greenlet whose switch method will have a high hashcode. hashcode = 12 def acquire_then_exit(sem, should_quit): sem.acquire() should_quit.append(True) def acquire_then_spawn(sem, should_quit): if should_quit: return sem.acquire() g = FirstG.spawn(release_then_spawn, sem, should_quit) g.join() def release_then_spawn(sem, should_quit): sem.release() if should_quit: # pragma: no cover return g = FirstG.spawn(acquire_then_spawn, sem, should_quit) g.join() class TestSemaphoreFair(greentest.TestCase): def test_fair_or_hangs(self): # If the lock isn't fair, this hangs, spinning between # the last two greenlets. # See https://github.com/gevent/gevent/issues/1487 sem = Semaphore() should_quit = [] keep_going1 = FirstG.spawn(acquire_then_spawn, sem, should_quit) keep_going2 = FirstG.spawn(acquire_then_spawn, sem, should_quit) exiting = LastG.spawn(acquire_then_exit, sem, should_quit) with self.assertRaises(gevent.exceptions.LoopExit): gevent.joinall([keep_going1, keep_going2, exiting]) self.assertTrue(exiting.dead, exiting) self.assertTrue(keep_going2.dead, keep_going2) self.assertFalse(keep_going1.dead, keep_going1) sem.release() keep_going1.kill() keep_going2.kill() exiting.kill() gevent.idle() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__server.py000066400000000000000000000466521471441230600214660ustar00rootroot00000000000000from __future__ import print_function, division from contextlib import contextmanager import unittest import errno import os import gevent.testing as greentest from gevent.testing import PY3 from gevent.testing import sysinfo from gevent.testing import DEFAULT_SOCKET_TIMEOUT as _DEFAULT_SOCKET_TIMEOUT from gevent.testing.timing import SMALLEST_RELIABLE_DELAY from gevent.testing.sockets import tcp_listener from gevent.testing import WIN from gevent import socket import gevent from gevent.server import StreamServer from gevent.exceptions import LoopExit class SimpleStreamServer(StreamServer): def handle(self, client_socket, _address): # pylint:disable=method-hidden fd = client_socket.makefile() try: request_line = fd.readline() if not request_line: return try: _method, path, _rest = request_line.split(' ', 3) except Exception: print('Failed to parse request line: %r' % (request_line, )) raise if path == '/ping': client_socket.sendall(b'HTTP/1.0 200 OK\r\n\r\nPONG') elif path in ['/long', '/short']: client_socket.sendall(b'hello') while True: data = client_socket.recv(1) if not data: break else: client_socket.sendall(b'HTTP/1.0 404 WTF?\r\n\r\n') finally: fd.close() def sleep_to_clear_old_sockets(*_args): try: # Allow any queued callbacks needed to close sockets # to run. On Windows, this needs to spin the event loop to # allow proper FD cleanup. Otherwise we risk getting an # old FD that's being closed and then get spurious connection # errors. 
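        # A plain sleep(0) only yields to callbacks that are already
        # queued; the small positive delay used on Windows presumably
        # gives the loop a real iteration in which to run the close
        # callbacks mentioned above.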
gevent.sleep(0 if not WIN else SMALLEST_RELIABLE_DELAY) except Exception: # pylint:disable=broad-except pass class Settings(object): ServerClass = StreamServer ServerSubClass = SimpleStreamServer restartable = True close_socket_detected = True @staticmethod def assertAcceptedConnectionError(inst): with inst.makefile() as conn: try: result = conn.read() except socket.timeout: result = None inst.assertFalse(result) assert500 = assertAcceptedConnectionError @staticmethod def assert503(inst): # regular reads timeout inst.assert500() # attempt to send anything reset the connection try: inst.send_request() except socket.error as ex: if ex.args[0] not in greentest.CONN_ABORTED_ERRORS: raise @staticmethod def assertPoolFull(inst): with inst.assertRaises(socket.timeout): inst.assertRequestSucceeded(timeout=0.01) @staticmethod def fill_default_server_args(inst, kwargs): kwargs.setdefault('spawn', inst.get_spawn()) return kwargs class TestCase(greentest.TestCase): # pylint: disable=too-many-public-methods __timeout__ = greentest.LARGE_TIMEOUT Settings = Settings server = None def cleanup(self): if getattr(self, 'server', None) is not None: self.server.stop() self.server = None sleep_to_clear_old_sockets() def get_listener(self): return self._close_on_teardown(tcp_listener(backlog=5)) def get_server_host_port_family(self): server_host = self.server.server_host if not server_host: server_host = greentest.DEFAULT_LOCAL_HOST_ADDR elif server_host == '::': server_host = greentest.DEFAULT_LOCAL_HOST_ADDR6 try: family = self.server.socket.family except AttributeError: # server deletes socket when closed family = socket.AF_INET return server_host, self.server.server_port, family @contextmanager def makefile(self, timeout=_DEFAULT_SOCKET_TIMEOUT, bufsize=1, include_raw_socket=False): server_host, server_port, family = self.get_server_host_port_family() bufarg = 'buffering' if PY3 else 'bufsize' makefile_kwargs = {bufarg: bufsize} if PY3: # Under Python3, you can't read and write to the same # makefile() opened in r, and r+ is not allowed makefile_kwargs['mode'] = 'rwb' with socket.socket(family=family) as sock: rconn = None # We want the socket to be accessible from the fileobject # we return. On Python 2, natively this is available as # _sock, but Python 3 doesn't have that. sock.connect((server_host, server_port)) sock.settimeout(timeout) with sock.makefile(**makefile_kwargs) as rconn: result = rconn if not include_raw_socket else (rconn, sock) yield result def send_request(self, url='/', timeout=_DEFAULT_SOCKET_TIMEOUT, bufsize=1): with self.makefile(timeout=timeout, bufsize=bufsize) as conn: self.send_request_to_fd(conn, url) def send_request_to_fd(self, fd, url='/'): fd.write(('GET %s HTTP/1.0\r\n\r\n' % url).encode('latin-1')) fd.flush() LOCAL_CONN_REFUSED_ERRORS = () if greentest.OSX: # A kernel bug in OS X sometimes results in this LOCAL_CONN_REFUSED_ERRORS = (errno.EPROTOTYPE,) elif greentest.WIN and greentest.PYPY3: # We see WinError 10049: The requested address is not valid # which is not one of the errors we get anywhere else. # Not sure which errno constant this is? LOCAL_CONN_REFUSED_ERRORS = (10049,) def assertConnectionRefused(self, in_proc_server=True): try: with self.assertRaises(socket.error) as exc: with self.makefile() as conn: conn.close() except LoopExit: if not in_proc_server: raise # A LoopExit is fine. If we've killed the server # and don't have any other greenlets to run, then # blocking to open the connection might raise this. 
# This became likely on Windows once we stopped # passing IP addresses through an extra call to # ``getaddrinfo``, which changed the number of switches return ex = exc.exception self.assertIn(ex.args[0], (errno.ECONNREFUSED, errno.EADDRNOTAVAIL, errno.ECONNRESET, errno.ECONNABORTED) + self.LOCAL_CONN_REFUSED_ERRORS, (ex, ex.args)) def assert500(self): self.Settings.assert500(self) def assert503(self): self.Settings.assert503(self) def assertAcceptedConnectionError(self): self.Settings.assertAcceptedConnectionError(self) def assertPoolFull(self): self.Settings.assertPoolFull(self) def assertNotAccepted(self): try: with self.makefile(include_raw_socket=True) as (conn, sock): conn.write(b'GET / HTTP/1.0\r\n\r\n') conn.flush() result = b'' try: while True: data = sock.recv(1) if not data: break result += data except socket.timeout: self.assertFalse(result) return except LoopExit: # See assertConnectionRefused return self.assertTrue(result.startswith(b'HTTP/1.0 500 Internal Server Error'), repr(result)) def assertRequestSucceeded(self, timeout=_DEFAULT_SOCKET_TIMEOUT): with self.makefile(timeout=timeout) as conn: conn.write(b'GET /ping HTTP/1.0\r\n\r\n') result = conn.read() self.assertTrue(result.endswith(b'\r\n\r\nPONG'), repr(result)) def start_server(self): self.server.start() self.assertRequestSucceeded() self.assertRequestSucceeded() def stop_server(self): self.server.stop() self.assertConnectionRefused() def report_netstat(self, _msg): # At one point this would call 'sudo netstat -anp | grep PID' # with os.system. We can probably do better with psutil. return def _create_server(self, *args, **kwargs): kind = kwargs.pop('server_kind', self.ServerSubClass) addr = kwargs.pop('server_listen_addr', (greentest.DEFAULT_BIND_ADDR, 0)) return kind(addr, *args, **kwargs) def init_server(self, *args, **kwargs): self.server = self._create_server(*args, **kwargs) self.server.start() sleep_to_clear_old_sockets() @property def socket(self): return self.server.socket def _test_invalid_callback(self): if sysinfo.RUNNING_ON_APPVEYOR: self.skipTest("Sometimes misses the error") # XXX: Why? try: # Can't use a kwarg here, WSGIServer and StreamServer # take different things (application and handle) self.init_server(lambda: None) self.expect_one_error() self.assert500() self.assert_error(TypeError) finally: self.server.stop() # XXX: There's something unreachable (with a traceback?) # We need to clear it to make the leak checks work on Travis; # so far I can't reproduce it locally on OS X. 
import gc; gc.collect() def fill_default_server_args(self, kwargs): return self.Settings.fill_default_server_args(self, kwargs) def ServerClass(self, *args, **kwargs): return self.Settings.ServerClass(*args, **self.fill_default_server_args(kwargs)) def ServerSubClass(self, *args, **kwargs): return self.Settings.ServerSubClass(*args, **self.fill_default_server_args(kwargs)) def get_spawn(self): return None class TestDefaultSpawn(TestCase): def get_spawn(self): return gevent.spawn def _test_server_start_stop(self, restartable): self.report_netstat('before start') self.start_server() self.report_netstat('after start') if restartable and self.Settings.restartable: self.server.stop_accepting() self.report_netstat('after stop_accepting') self.assertNotAccepted() self.server.start_accepting() self.report_netstat('after start_accepting') sleep_to_clear_old_sockets() self.assertRequestSucceeded() self.stop_server() self.report_netstat('after stop') def test_backlog_is_not_accepted_for_socket(self): self.switch_expected = False with self.assertRaises(TypeError): self.ServerClass(self.get_listener(), backlog=25) @greentest.skipOnLibuvOnCIOnPyPy("Sometimes times out") @greentest.skipOnAppVeyor("Sometimes times out.") def test_backlog_is_accepted_for_address(self): self.server = self.ServerSubClass((greentest.DEFAULT_BIND_ADDR, 0), backlog=25) self.assertConnectionRefused() self._test_server_start_stop(restartable=False) def test_subclass_just_create(self): self.server = self.ServerSubClass(self.get_listener()) self.assertNotAccepted() @greentest.skipOnAppVeyor("Sometimes times out.") def test_subclass_with_socket(self): self.server = self.ServerSubClass(self.get_listener()) # the connection won't be refused, because there exists a # listening socket, but it won't be handled also self.assertNotAccepted() self._test_server_start_stop(restartable=True) def test_subclass_with_address(self): self.server = self.ServerSubClass((greentest.DEFAULT_BIND_ADDR, 0)) self.assertConnectionRefused() self._test_server_start_stop(restartable=True) def test_invalid_callback(self): self._test_invalid_callback() @greentest.reraises_flaky_timeout(socket.timeout) def _test_serve_forever(self): g = gevent.spawn(self.server.serve_forever) try: sleep_to_clear_old_sockets() self.assertRequestSucceeded() self.server.stop() self.assertFalse(self.server.started) self.assertConnectionRefused() finally: g.kill() g.get() self.server.stop() def test_serve_forever(self): self.server = self.ServerSubClass((greentest.DEFAULT_BIND_ADDR, 0)) self.assertFalse(self.server.started) self.assertConnectionRefused() self._test_serve_forever() def test_serve_forever_after_start(self): self.server = self.ServerSubClass((greentest.DEFAULT_BIND_ADDR, 0)) self.assertConnectionRefused() self.assertFalse(self.server.started) self.server.start() self.assertTrue(self.server.started) self._test_serve_forever() @greentest.skipIf(greentest.EXPECT_POOR_TIMER_RESOLUTION, "Sometimes spuriously fails") def test_server_closes_client_sockets(self): self.server = self.ServerClass((greentest.DEFAULT_BIND_ADDR, 0), lambda *args: []) self.server.start() sleep_to_clear_old_sockets() with self.makefile() as conn: self.send_request_to_fd(conn) # use assert500 below? 
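            # The 1-second Timeout is a safety net: if the server never
            # closes the connection, the read below errors out quickly
            # instead of hanging the whole test.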
with gevent.Timeout._start_new_or_dummy(1): try: result = conn.read() if result: assert result.startswith('HTTP/1.0 500 Internal Server Error'), repr(result) except socket.timeout: pass except socket.error as ex: if ex.args[0] == 10053: pass # "established connection was aborted by the software in your host machine" elif ex.args[0] == errno.ECONNRESET: pass else: raise self.stop_server() @property def socket(self): return self.server.socket def test_error_in_spawn(self): self.init_server() self.assertTrue(self.server.started) error = ExpectedError('test_error_in_spawn') def _spawn(*_args): gevent.getcurrent().throw(error) self.server._spawn = _spawn self.expect_one_error() self.assertAcceptedConnectionError() self.assert_error(ExpectedError, error) def test_server_repr_when_handle_is_instancemethod(self): # PR 501 self.init_server() assert self.server.started self.assertIn('Server', repr(self.server)) self.server.set_handle(self.server.handle) self.assertIn('handle=', repr(self.server)) self.server.set_handle(self.test_server_repr_when_handle_is_instancemethod) self.assertIn('test_server_repr_when_handle_is_instancemethod', repr(self.server)) def handle(): pass self.server.set_handle(handle) self.assertIn('handle= returned a result with an error set # It's not safe to continue after a SystemError, so we just skip the test there. # As of Jan 2018 with CFFI 1.11.2 this happens reliably on macOS 3.6 and 3.7 # as well. # See https://bitbucket.org/cffi/cffi/issues/352/systemerror-returned-a-result-with-an # This is fixed in 1.11.3 import gevent.signal # make sure it's in sys.modules pylint:disable=redefined-outer-name assert gevent.signal import site if greentest.PY3: from importlib import reload as reload_module else: # builtin on py2 reload_module = reload # pylint:disable=undefined-variable try: reload_module(site) except TypeError: # Non-CFFI on Travis triggers this, for some reason, # but only on 3.6, not 3.4 or 3.5, and not yet on 3.7. # The only module seen to trigger this is __main__, i.e., this module. # This is hard to trigger in a virtualenv since it appears they # install their own site.py, different from the one that ships with # Python 3.6., and at least the version I have doesn't mess with # __cached__ assert greentest.PY36 import sys for m in set(sys.modules.values()): try: if m.__cached__ is None: print("Module has None __cached__", m, file=sys.stderr) except AttributeError: continue if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__sleep0.py000066400000000000000000000002131471441230600213270ustar00rootroot00000000000000import gevent from gevent.testing.util import alarm alarm(3) with gevent.Timeout(0.01, False): while True: gevent.sleep(0) gevent-24.11.1/src/gevent/tests/test__socket.py000066400000000000000000000557021471441230600214440ustar00rootroot00000000000000from __future__ import print_function from __future__ import absolute_import from gevent import monkey # This line can be commented out so that most tests run with the # system socket for comparison. 
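# When the line is left enabled it must stay ahead of the ``from x import y``
# style imports further down (e.g. ``from threading import Thread``): those
# bind names at import time, so patching afterwards would leave this module
# holding the unpatched objects.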
monkey.patch_all() import sys import array import socket import time import unittest from functools import wraps import gevent from gevent._compat import reraise import gevent.testing as greentest from gevent.testing import six from gevent.testing import LARGE_TIMEOUT from gevent.testing import support from gevent.testing import params from gevent.testing.sockets import tcp_listener from gevent.testing.skipping import skipWithoutExternalNetwork from gevent.testing.skipping import skipOnMacOnCI # we use threading on purpose so that we can test both regular and # gevent sockets with the same code from threading import Thread as _Thread from threading import Event errno_types = int # socket.accept/unwrap/makefile aren't found for some reason # pylint:disable=no-member class BaseThread(object): terminal_exc = None def __init__(self, target): @wraps(target) def errors_are_fatal(*args, **kwargs): try: return target(*args, **kwargs) except: # pylint:disable=bare-except self.terminal_exc = sys.exc_info() raise self.target = errors_are_fatal class GreenletThread(BaseThread): def __init__(self, target=None, args=()): BaseThread.__init__(self, target) self.glet = gevent.spawn(self.target, *args) def join(self, *args, **kwargs): return self.glet.join(*args, **kwargs) def is_alive(self): return not self.glet.ready() if not monkey.is_module_patched('threading'): class ThreadThread(BaseThread, _Thread): def __init__(self, **kwargs): target = kwargs.pop('target') BaseThread.__init__(self, target) _Thread.__init__(self, target=self.target, **kwargs) self.start() Thread = ThreadThread else: Thread = GreenletThread class TestTCP(greentest.TestCase): __timeout__ = None TIMEOUT_ERROR = socket.timeout long_data = ", ".join([str(x) for x in range(20000)]) if not isinstance(long_data, bytes): long_data = long_data.encode('ascii') def setUp(self): super(TestTCP, self).setUp() if '-v' in sys.argv: printed = [] try: from time import perf_counter as now except ImportError: from time import time as now def log(*args): if not printed: print() printed.append(1) print("\t -> %0.6f" % now(), *args) orig_cot = self._close_on_teardown def cot(o): log("Registering for teardown", o) def c(o=o): log("Closing on teardown", o) o.close() o = None orig_cot(c) return o self._close_on_teardown = cot else: def log(*_args): "Does nothing" self.log = log self.listener = self._close_on_teardown(self._setup_listener()) # It is important to watch the lifetimes of socket objects and # ensure that: # (1) they are closed; and # (2) *before* the next test begins. # # For example, it's a bad bad thing to leave a greenlet running past the # scope of the individual test method if that greenlet will close # a socket object --- especially if that socket object might also have been # closed explicitly. # # On Windows, we've seen issue with filenos getting reused while something # still thinks they have the original fileno around. When they later # close that fileno, a completely unrelated object is closed. 
self.port = self.listener.getsockname()[1] def _setup_listener(self): return tcp_listener() def create_connection(self, host=None, port=None, timeout=None, blocking=None): sock = self._close_on_teardown(socket.socket()) sock.connect((host or params.DEFAULT_CONNECT, port or self.port)) if timeout is not None: sock.settimeout(timeout) if blocking is not None: sock.setblocking(blocking) return sock def _test_sendall(self, data, match_data=None, client_method='sendall', **client_args): # pylint:disable=too-many-locals,too-many-branches,too-many-statements log = self.log log("test_sendall using method", client_method) read_data = [] accepted_event = Event() def accept_and_read(): log("\taccepting", self.listener) conn, _ = self.listener.accept() try: with conn.makefile(mode='rb') as r: log("\taccepted on server; client conn is", conn, "file is", r) accepted_event.set() log("\treading") read_data.append(r.read()) log("\tdone reading", r, "got bytes", len(read_data[0])) del r finally: conn.close() del conn server = Thread(target=accept_and_read) try: log("creating client connection") client = self.create_connection(**client_args) # It's important to wait for the server to fully accept before # we shutdown and close the socket. In SSL mode, the number # and timing of data exchanges to complete the handshake and # thus exactly when greenlet switches occur, varies by TLS version. # # It turns out that on < TLS1.3, we were getting lucky and the # server was the greenlet that raced ahead and blocked in r.read() # before the client returned from create_connection(). # # But when TLS 1.3 was deployed (OpenSSL 1.1), the *client* was the # one that raced ahead while the server had yet to return from # self.listener.accept(). So the client sent the data to the socket, # and closed, before the server could do anything, and the server, # when it got switched to by server.join(), found its new socket # dead. accepted_event.wait() log("Client got accepted event from server", client, "; sending data", len(data)) try: x = getattr(client, client_method)(data) log("Client sent data: result from method", x) finally: log("Client will unwrap and shutdown") if hasattr(client, 'unwrap'): # Are we dealing with an SSLSocket? If so, unwrap it # before attempting to shut down the socket. This does the # SSL shutdown handshake and (hopefully) stops ``accept_and_read`` # from generating ``ConnectionResetError`` on AppVeyor. try: client = client.unwrap() except (ValueError, OSError): # PyPy raises _cffi_ssl._stdssl.error.SSLSyscallError, # which is an IOError in 2.7 and OSError in 3.7 pass try: # The implicit reference-based nastiness of Python 2 # sockets interferes, especially when using SSL sockets. # The best way to get a decent FIN to the server is to shutdown # the output. Doing that on Python 3, OTOH, is contraindicated # except on PyPy, so this used to read ``PY2 or PYPY``. But # it seems that a shutdown is generally good practice, and I didn't # document what errors we saw without it. Per issue #1637 # lets do a shutdown everywhere, but only after removing any # SSL wrapping. 
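                    # SHUT_RDWR disallows any further sends and receives
                    # and puts a FIN on the wire, which is what lets the
                    # server's blocking read() return cleanly.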
client.shutdown(socket.SHUT_RDWR) except OSError: pass log("Client will close") client.close() finally: server.join(10) assert not server.is_alive() if server.terminal_exc: reraise(*server.terminal_exc) if match_data is None: match_data = self.long_data read_data = read_data[0].split(b',') match_data = match_data.split(b',') self.assertEqual(read_data[0], match_data[0]) self.assertEqual(len(read_data), len(match_data)) self.assertEqual(read_data, match_data) def test_sendall_str(self): self._test_sendall(self.long_data) if six.PY2: def test_sendall_unicode(self): self._test_sendall(six.text_type(self.long_data)) @skipOnMacOnCI("Sometimes fails for no apparent reason (buffering?)") def test_sendall_array(self): data = array.array("B", self.long_data) self._test_sendall(data) def test_sendall_empty(self): data = b'' self._test_sendall(data, data) def test_sendall_empty_with_timeout(self): # Issue 719 data = b'' self._test_sendall(data, data, timeout=10) def test_sendall_nonblocking(self): # https://github.com/benoitc/gunicorn/issues/1282 # Even if the socket is non-blocking, we make at least # one attempt to send data. Under Py2 before this fix, we # would incorrectly immediately raise a timeout error data = b'hi\n' self._test_sendall(data, data, blocking=False) def test_empty_send(self): # Issue 719 data = b'' self._test_sendall(data, data, client_method='send') def test_fullduplex(self): N = 100000 def server(): remote_client, _ = self.listener.accept() self._close_on_teardown(remote_client) # start reading, then, while reading, start writing. the reader should not hang forever sender = Thread(target=remote_client.sendall, args=((b't' * N),)) try: result = remote_client.recv(1000) self.assertEqual(result, b'hello world') finally: sender.join() server_thread = Thread(target=server) client = self.create_connection() client_file = self._close_on_teardown(client.makefile()) client_reader = Thread(target=client_file.read, args=(N, )) time.sleep(0.1) client.sendall(b'hello world') time.sleep(0.1) # close() used to hang client_file.close() client.close() # this tests "full duplex" bug; server_thread.join() client_reader.join() def test_recv_timeout(self): def accept(): # make sure the conn object stays alive until the end; # premature closing triggers a ResourceWarning and # EOF on the client. conn, _ = self.listener.accept() self._close_on_teardown(conn) acceptor = Thread(target=accept) client = self.create_connection() try: client.settimeout(1) start = time.time() with self.assertRaises(self.TIMEOUT_ERROR): client.recv(1024) took = time.time() - start self.assertTimeWithinRange(took, 1 - 0.1, 1 + 0.1) finally: acceptor.join() # Subclasses can disable this _test_sendall_timeout_check_time = True # Travis-CI container infrastructure is configured with # large socket buffers, at least 2MB, as-of Jun 3, 2015, # so we must be sure to send more data than that. # In 2018, this needs to be increased *again* as a smaller value was # still often being sent. _test_sendall_data = b'hello' * 100000000 # This doesn't make much sense...why are we really skipping this? 
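    # Most likely because, as the skip reason says, Windows happily buffers
    # the data without blocking, so sendall() returns and no timeout fires.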
@greentest.skipOnWindows("On Windows send() accepts whatever is thrown at it") def test_sendall_timeout(self): client_sock = [] acceptor = Thread(target=lambda: client_sock.append(self.listener.accept())) client = self.create_connection() time.sleep(0.1) assert client_sock client.settimeout(0.1) start = time.time() try: with self.assertRaises(self.TIMEOUT_ERROR): client.sendall(self._test_sendall_data) if self._test_sendall_timeout_check_time: took = time.time() - start self.assertTimeWithinRange(took, 0.09, 0.21) finally: acceptor.join() client.close() client_sock[0][0].close() def test_makefile(self): def accept_once(): conn, _ = self.listener.accept() fd = conn.makefile(mode='wb') fd.write(b'hello\n') fd.flush() fd.close() conn.close() # for pypy acceptor = Thread(target=accept_once) try: client = self.create_connection() # Closing the socket doesn't close the file client_file = client.makefile(mode='rb') client.close() line = client_file.readline() self.assertEqual(line, b'hello\n') self.assertEqual(client_file.read(), b'') client_file.close() finally: acceptor.join() def test_makefile_timeout(self): def accept_once(): conn, _ = self.listener.accept() try: time.sleep(0.3) finally: conn.close() # for pypy acceptor = Thread(target=accept_once) try: client = self.create_connection() client.settimeout(0.1) fd = client.makefile(mode='rb') self.assertRaises(self.TIMEOUT_ERROR, fd.readline) client.close() fd.close() finally: acceptor.join() def test_attributes(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0) self.assertIs(s.family, socket.AF_INET) self.assertEqual(s.type, socket.SOCK_DGRAM) self.assertEqual(0, s.proto) if hasattr(socket, 'SOCK_NONBLOCK'): s.settimeout(1) self.assertIs(s.family, socket.AF_INET) s.setblocking(0) std_socket = monkey.get_original('socket', 'socket')(socket.AF_INET, socket.SOCK_DGRAM, 0) try: std_socket.setblocking(0) self.assertEqual(std_socket.type, s.type) finally: std_socket.close() s.close() def test_connect_ex_nonblocking_bad_connection(self): # Issue 841 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.setblocking(False) ret = s.connect_ex((greentest.DEFAULT_LOCAL_HOST_ADDR, support.find_unused_port())) self.assertIsInstance(ret, errno_types) finally: s.close() @skipWithoutExternalNetwork("Tries to resolve hostname") def test_connect_ex_gaierror(self): # Issue 841 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: with self.assertRaises(socket.gaierror): s.connect_ex(('foo.bar.fizzbuzz', support.find_unused_port())) finally: s.close() @skipWithoutExternalNetwork("Tries to resolve hostname") def test_connect_ex_not_call_connect(self): # Issue 1931 def do_it(sock): try: with self.assertRaises(socket.gaierror): sock.connect_ex(('foo.bar.fizzbuzz', support.find_unused_port())) finally: sock.close() # An instance attribute doesn't matter because we can't set it s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with self.assertRaises(AttributeError): s.connect = None s.close() # A subclass class S(socket.socket): def connect(self, *args): raise AssertionError('Should not be called') s = S(socket.AF_INET, socket.SOCK_STREAM) do_it(s) def test_connect_ex_nonblocking_overflow(self): # Issue 841 s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.setblocking(False) with self.assertRaises(OverflowError): s.connect_ex((greentest.DEFAULT_LOCAL_HOST_ADDR, 65539)) finally: s.close() @unittest.skipUnless(hasattr(socket, 'SOCK_CLOEXEC'), "Requires SOCK_CLOEXEC") def test_connect_with_type_flags_ignored(self): # Issue 944 # If 
we have SOCK_CLOEXEC or similar, we shouldn't be passing # them through to the getaddrinfo call that connect() makes SOCK_CLOEXEC = socket.SOCK_CLOEXEC # pylint:disable=no-member s = socket.socket(socket.AF_INET, socket.SOCK_STREAM | SOCK_CLOEXEC) def accept_once(): conn, _ = self.listener.accept() fd = conn.makefile(mode='wb') fd.write(b'hello\n') fd.close() conn.close() acceptor = Thread(target=accept_once) try: s.connect((params.DEFAULT_CONNECT, self.port)) fd = s.makefile(mode='rb') self.assertEqual(fd.readline(), b'hello\n') fd.close() s.close() finally: acceptor.join() class TestCreateConnection(greentest.TestCase): __timeout__ = LARGE_TIMEOUT def test_refuses(self, **conn_args): connect_port = support.find_unused_port() with self.assertRaisesRegex( socket.error, # We really expect "connection refused". It's unclear # where/why we would get '[errno -2] name or service # not known' but it seems some systems generate that. # https://github.com/gevent/gevent/issues/1389 Somehow # extremly rarely we've also seen 'address already in # use', which makes even less sense. The manylinux # 2010 environment produces 'errno 99 Cannot assign # requested address', which, I guess? # Meanwhile, the musllinux_1 environment produces # '[Errno 99] Address not available' 'refused|not known|already in use|assign|not available' ): socket.create_connection( (greentest.DEFAULT_BIND_ADDR, connect_port), timeout=30, **conn_args ) def test_refuses_from_port(self): source_port = support.find_unused_port() # Usually we don't want to bind/connect to '', but # using it as the source is required if we don't want to hang, # at least on some systems (OS X) self.test_refuses(source_address=('', source_port)) @greentest.ignores_leakcheck @skipWithoutExternalNetwork("Tries to resolve hostname") def test_base_exception(self): # such as a GreenletExit or a gevent.timeout.Timeout class E(BaseException): pass class MockSocket(object): created = () closed = False def __init__(self, *_): MockSocket.created += (self,) def connect(self, _): raise E(_) def close(self): self.closed = True def mockgetaddrinfo(*_): return [(1, 2, 3, 3, 5),] import gevent.socket as gsocket # Make sure we're monkey patched self.assertEqual(gsocket.create_connection, socket.create_connection) orig_socket = gsocket.socket orig_getaddrinfo = gsocket.getaddrinfo try: gsocket.socket = MockSocket gsocket.getaddrinfo = mockgetaddrinfo with self.assertRaises(E): socket.create_connection(('host', 'port')) self.assertEqual(1, len(MockSocket.created)) self.assertTrue(MockSocket.created[0].closed) finally: MockSocket.created = () gsocket.socket = orig_socket gsocket.getaddrinfo = orig_getaddrinfo class TestFunctions(greentest.TestCase): @greentest.ignores_leakcheck # Creating new types in the function takes a cycle to cleanup. def test_wait_timeout(self): # Issue #635 from gevent import socket as gsocket class io(object): callback = None def start(self, *_args): gevent.sleep(10) with self.assertRaises(gsocket.timeout): gsocket.wait(io(), timeout=0.01) # pylint:disable=no-member def test_signatures(self): # https://github.com/gevent/gevent/issues/960 exclude = [] if greentest.PYPY: # Up through at least PyPy 5.7.1, they define these as # gethostbyname(host), whereas the official CPython argument name # is hostname. But cpython doesn't allow calling with keyword args. 
# Likewise for gethostbyaddr: PyPy uses host, cpython uses ip_address exclude.append('gethostbyname') exclude.append('gethostbyname_ex') exclude.append('gethostbyaddr') if sys.version_info[:2] < (3, 11): # 3.11+ add ``*, all_errors=False``. We allow that on all versions, # forcing it to a false value if the user sends a true value before # exception groups exist. exclude.append('create_connection') self.assertMonkeyPatchedFuncSignatures('socket', exclude=exclude) def test_resolve_ipv6_scope_id(self): from gevent import _socketcommon as SC if not SC.__socket__.has_ipv6: self.skipTest("Needs IPv6") # pragma: no cover if not hasattr(SC.__socket__, 'inet_pton'): self.skipTest("Needs inet_pton") # pragma: no cover # A valid IPv6 address, with a scope. addr = ('2607:f8b0:4000:80e::200e', 80, 0, 9) # Mock socket class sock(object): family = SC.AF_INET6 # pylint:disable=no-member self.assertIs(addr, SC._resolve_addr(sock, addr)) class TestSocket(greentest.TestCase): def test_shutdown_when_closed(self): # https://github.com/gevent/gevent/issues/1089 # we once raised an AttributeError. s = socket.socket() s.close() with self.assertRaises(socket.error): s.shutdown(socket.SHUT_RDWR) def test_can_be_weak_ref(self): # stdlib socket can be weak reffed. import weakref s = socket.socket() try: w = weakref.ref(s) self.assertIsNotNone(w) finally: s.close() def test_has_no_dict(self): # stdlib socket has no dict s = socket.socket() try: with self.assertRaises(AttributeError): getattr(s, '__dict__') finally: s.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_close.py000066400000000000000000000035061471441230600226240ustar00rootroot00000000000000import gevent from gevent import socket from gevent import server import gevent.testing as greentest # XXX also test: send, sendall, recvfrom, recvfrom_into, sendto def readall(sock, _): while sock.recv(1024): pass # pragma: no cover we never actually send the data sock.close() class Test(greentest.TestCase): error_fatal = False def setUp(self): self.server = server.StreamServer(greentest.DEFAULT_BIND_ADDR_TUPLE, readall) self.server.start() def tearDown(self): self.server.stop() def test_recv_closed(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((greentest.DEFAULT_CONNECT_HOST, self.server.server_port)) receiver = gevent.spawn(sock.recv, 25) try: gevent.sleep(0.001) sock.close() receiver.join(timeout=0.1) self.assertTrue(receiver.ready(), receiver) self.assertEqual(receiver.value, None) self.assertIsInstance(receiver.exception, socket.error) self.assertEqual(receiver.exception.errno, socket.EBADF) finally: receiver.kill() # XXX: This is possibly due to the bad behaviour of small sleeps? 
# The timeout is the global test timeout, 10s @greentest.skipOnLibuvOnCI("Sometimes randomly times out") def test_recv_twice(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((greentest.DEFAULT_CONNECT_HOST, self.server.server_port)) receiver = gevent.spawn(sock.recv, 25) try: gevent.sleep(0.001) self.assertRaises(AssertionError, sock.recv, 25) self.assertRaises(AssertionError, sock.recv, 25) finally: receiver.kill() sock.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_dns.py000066400000000000000000001075111471441230600223040ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- from __future__ import print_function from __future__ import absolute_import from __future__ import division import gevent from gevent import monkey import os import re import unittest import socket from time import time import traceback import gevent.socket as gevent_socket import gevent.testing as greentest from gevent.testing import util from gevent.testing.six import xrange from gevent.testing import flaky from gevent.testing.skipping import skipWithoutExternalNetwork resolver = gevent.get_hub().resolver util.debug('Resolver: %s', resolver) if getattr(resolver, 'pool', None) is not None: resolver.pool.size = 1 from gevent.testing.sysinfo import RESOLVER_NOT_SYSTEM from gevent.testing.sysinfo import RESOLVER_DNSPYTHON from gevent.testing.sysinfo import RESOLVER_ARES from gevent.testing.sysinfo import PY2 from gevent.testing.sysinfo import PYPY import gevent.testing.timing assert gevent_socket.gaierror is socket.gaierror assert gevent_socket.error is socket.error RUN_ALL_HOST_TESTS = os.getenv('GEVENTTEST_RUN_ALL_ETC_HOST_TESTS', '') def add(klass, hostname, name=None, skip=None, skip_reason=None, require_equal_errors=True): call = callable(hostname) def _setattr(k, n, func): if skip: func = greentest.skipIf(skip, skip_reason,)(func) if not hasattr(k, n): setattr(k, n, func) if name is None: if call: name = hostname.__name__ else: name = re.sub(r'[^\w]+', '_', repr(hostname)) assert name, repr(hostname) def test_getaddrinfo_http(self): x = hostname() if call else hostname self._test('getaddrinfo', x, 'http', require_equal_errors=require_equal_errors) test_getaddrinfo_http.__name__ = 'test_%s_getaddrinfo_http' % name _setattr(klass, test_getaddrinfo_http.__name__, test_getaddrinfo_http) def test_gethostbyname(self): x = hostname() if call else hostname ipaddr = self._test('gethostbyname', x, require_equal_errors=require_equal_errors) if not isinstance(ipaddr, Exception): self._test('gethostbyaddr', ipaddr, require_equal_errors=require_equal_errors) test_gethostbyname.__name__ = 'test_%s_gethostbyname' % name _setattr(klass, test_gethostbyname.__name__, test_gethostbyname) def test_gethostbyname_ex(self): x = hostname() if call else hostname self._test('gethostbyname_ex', x, require_equal_errors=require_equal_errors) test_gethostbyname_ex.__name__ = 'test_%s_gethostbyname_ex' % name _setattr(klass, test_gethostbyname_ex.__name__, test_gethostbyname_ex) def test4(self): x = hostname() if call else hostname self._test('gethostbyaddr', x, require_equal_errors=require_equal_errors) test4.__name__ = 'test_%s_gethostbyaddr' % name _setattr(klass, test4.__name__, test4) def test5(self): x = hostname() if call else hostname self._test('getnameinfo', (x, 80), 0, require_equal_errors=require_equal_errors) test5.__name__ = 'test_%s_getnameinfo' % name _setattr(klass, test5.__name__, test5) @skipWithoutExternalNetwork("Tries to resolve 
and compare hostnames/addrinfo") class TestCase(greentest.TestCase): maxDiff = None __timeout__ = 30 switch_expected = None TRACE = not util.QUIET and os.getenv('GEVENT_DEBUG', '') == 'trace' verbose_dns = TRACE def trace(self, message, *args, **kwargs): if self.TRACE: util.debug(message, *args, **kwargs) # Things that the stdlib should never raise and neither should we; # these indicate bugs in our code and we want to raise them. REAL_ERRORS = (AttributeError, ValueError, NameError) def __run_resolver(self, function, args): try: result = function(*args) assert not isinstance(result, BaseException), repr(result) return result except self.REAL_ERRORS: raise except Exception as ex: # pylint:disable=broad-except if self.TRACE: traceback.print_exc() return ex def __trace_call(self, result, runtime, function, *args): util.debug(self.__format_call(function, args)) self.__trace_fresult(result, runtime) def __format_call(self, function, args): args = repr(args) if args.endswith(',)'): args = args[:-2] + ')' try: module = function.__module__.replace('gevent._socketcommon', 'gevent') name = function.__name__ return '%s:%s%s' % (module, name, args) except AttributeError: return function + args def __trace_fresult(self, result, seconds): if isinstance(result, Exception): msg = ' -=> raised %r' % (result, ) else: msg = ' -=> returned %r' % (result, ) time_ms = ' %.2fms' % (seconds * 1000.0, ) space = 80 - len(msg) - len(time_ms) if space > 0: space = ' ' * space else: space = '' util.debug(msg + space + time_ms) if not TRACE: def run_resolver(self, function, func_args): now = time() return self.__run_resolver(function, func_args), time() - now else: def run_resolver(self, function, func_args): self.trace(self.__format_call(function, func_args)) delta = time() result = self.__run_resolver(function, func_args) delta = time() - delta self.__trace_fresult(result, delta) return result, delta def setUp(self): super(TestCase, self).setUp() if not self.verbose_dns: # Silence the default reporting of errors from the ThreadPool, # we handle those here. gevent.get_hub().exception_stream = None def tearDown(self): if not self.verbose_dns: try: del gevent.get_hub().exception_stream except AttributeError: pass # Happens under leak tests super(TestCase, self).tearDown() def should_log_results(self, result1, result2): if not self.verbose_dns: return False if isinstance(result1, BaseException) and isinstance(result2, BaseException): return type(result1) is not type(result2) return repr(result1) != repr(result2) def _test(self, func_name, *args, **kwargs): """ Runs the function *func_name* with *args* and compares gevent and the system. Keyword arguments are passed to the function itself; variable args are used for the socket function. Returns the gevent result. 
""" gevent_func = getattr(gevent_socket, func_name) real_func = monkey.get_original('socket', func_name) tester = getattr(self, '_run_test_' + func_name, self._run_test_generic) result = tester(func_name, real_func, gevent_func, args, **kwargs) _real_result, time_real, gevent_result, time_gevent = result if self.verbose_dns and time_gevent > time_real + 0.02 and time_gevent > 0.03: msg = 'gevent:%s%s took %dms versus %dms stdlib' % ( func_name, args, time_gevent * 1000.0, time_real * 1000.0) if time_gevent > time_real + 1: word = 'VERY' else: word = 'quite' util.log('\nWARNING: %s slow: %s', word, msg, color='warning') return gevent_result def _run_test_generic(self, func_name, real_func, gevent_func, func_args, require_equal_errors=True): real_result, time_real = self.run_resolver(real_func, func_args) gevent_result, time_gevent = self.run_resolver(gevent_func, func_args) if util.QUIET and self.should_log_results(real_result, gevent_result): util.log('') self.__trace_call(real_result, time_real, real_func, func_args) self.__trace_call(gevent_result, time_gevent, gevent_func, func_args) self.assertEqualResults(real_result, gevent_result, func_name, require_equal_errors=require_equal_errors) return real_result, time_real, gevent_result, time_gevent def _normalize_result(self, result, func_name): norm_name = '_normalize_result_' + func_name if hasattr(self, norm_name): return getattr(self, norm_name)(result) return result NORMALIZE_GAI_IGNORE_CANONICAL_NAME = RESOLVER_ARES # It tends to return them even when not asked for if not RESOLVER_NOT_SYSTEM: def _normalize_result_getaddrinfo(self, result): return result def _normalize_result_gethostbyname_ex(self, result): return result else: def _normalize_result_gethostbyname_ex(self, result): # Often the second and third part of the tuple (hostname, aliaslist, ipaddrlist) # can be in different orders if we're hitting different servers, # or using the native and ares resolvers due to load-balancing techniques. # We sort them. if isinstance(result, BaseException): return result # result[1].sort() # we wind up discarding this # On Py2 in test_russion_gethostbyname_ex, this # is actually an integer, for some reason. In TestLocalhost.tets__ip6_localhost, # the result isn't this long (maybe an error?). try: result[2].sort() except AttributeError: pass except IndexError: return result # On some systems, a random alias is found in the aliaslist # by the system resolver, but not by cares, and vice versa. We deem the aliaslist # unimportant and discard it. # On some systems (Travis CI), the ipaddrlist for 'localhost' can come back # with two entries 127.0.0.1 (presumably two interfaces?) for c-ares ips = result[2] if ips == ['127.0.0.1', '127.0.0.1']: ips = ['127.0.0.1'] # On some systems, the hostname can get caps return (result[0].lower(), [], ips) def _normalize_result_getaddrinfo(self, result): # Result is a list # (family, socktype, proto, canonname, sockaddr) # e.g., # (AF_INET, SOCK_STREAM, IPPROTO_TCP, 'readthedocs.io', (127.0.0.1, 80)) if isinstance(result, BaseException): return result # On Python 3, the builtin resolver can return SOCK_RAW results, but # c-ares doesn't do that. So we remove those if we find them. # Likewise, on certain Linux systems, even on Python 2, IPPROTO_SCTP (132) # results may be returned --- but that may not even have a constant in the # socket module! 
So to be safe, we strip out anything that's not # SOCK_STREAM or SOCK_DGRAM if isinstance(result, list): result = [ x for x in result if x[1] in (socket.SOCK_STREAM, socket.SOCK_DGRAM) and x[2] in (socket.IPPROTO_TCP, socket.IPPROTO_UDP) ] if self.NORMALIZE_GAI_IGNORE_CANONICAL_NAME: result = [ (family, kind, proto, '', addr) for family, kind, proto, _, addr in result ] if isinstance(result, list): result.sort() return result def _normalize_result_getnameinfo(self, result): return result NORMALIZE_GHBA_IGNORE_ALIAS = False def _normalize_result_gethostbyaddr(self, result): if not RESOLVER_NOT_SYSTEM: return result if self.NORMALIZE_GHBA_IGNORE_ALIAS and isinstance(result, tuple): # On some systems, a random alias is found in the aliaslist # by the system resolver, but not by cares and vice versa. This is *probably* only the # case for localhost or things otherwise in /etc/hosts. We deem the aliaslist # unimportant and discard it. return (result[0], [], result[2]) return result def _compare_exceptions_strict(self, real_result, gevent_result, func_name): if repr(real_result) == repr(gevent_result): # Catch things like `OverflowError('port must be 0-65535.',)``` return msg = (func_name, 'system:', repr(real_result), 'gevent:', repr(gevent_result)) self.assertIs(type(gevent_result), type(real_result), msg) if isinstance(real_result, TypeError): return if PYPY and isinstance(real_result, socket.herror): # PyPy doesn't do errno or multiple arguments in herror; # it just puts a string like 'host lookup failed: '; # it must be doing that manually. return self.assertEqual(real_result.args, gevent_result.args, msg) if hasattr(real_result, 'errno'): self.assertEqual(real_result.errno, gevent_result.errno) def _compare_exceptions_lenient(self, real_result, gevent_result, func_name): try: self._compare_exceptions_strict(real_result, gevent_result, func_name) except AssertionError: # Allow raising different things in a few rare cases. if ( func_name not in ( 'getaddrinfo', 'gethostbyaddr', 'gethostbyname', 'gethostbyname_ex', 'getnameinfo', ) or type(real_result) not in (socket.herror, socket.gaierror) or type(gevent_result) not in (socket.herror, socket.gaierror, socket.error) ): raise util.log('WARNING: error type mismatch for %s: %r (gevent) != %r (stdlib)', func_name, gevent_result, real_result, color='warning') _compare_exceptions = _compare_exceptions_lenient if RESOLVER_NOT_SYSTEM else _compare_exceptions_strict def _compare_results(self, real_result, gevent_result, func_name): if real_result == gevent_result: return True compare_func = getattr(self, '_compare_results_' + func_name, self._generic_compare_results) return compare_func(real_result, gevent_result, func_name) def _generic_compare_results(self, real_result, gevent_result, func_name): try: if len(real_result) != len(gevent_result): return False except TypeError: return False return all(self._compare_results(x, y, func_name) for (x, y) in zip(real_result, gevent_result)) def _compare_results_getaddrinfo(self, real_result, gevent_result, func_name): # On some systems, we find more results with # one resolver than we do with the other resolver. # So as long as they have some subset in common, # we'll take it. 
errors = isinstance(real_result, Exception) + isinstance(gevent_result, Exception) if errors == 2: return True if errors == 1: return False if not set(real_result).isdisjoint(set(gevent_result)): return True return self._generic_compare_results(real_result, gevent_result, func_name) def _compare_address_strings(self, a, b): # IPv6 address from different requests might be different a_segments = a.count(':') b_segments = b.count(':') if a_segments and b_segments: if a_segments == b_segments and a_segments in (4, 5, 6, 7): return True if a.rstrip(':').startswith(b.rstrip(':')) or b.rstrip(':').startswith(a.rstrip(':')): return True if a_segments >= 2 and b_segments >= 2 and a.split(':')[:2] == b.split(':')[:2]: return True return a.split('.', 1)[-1] == b.split('.', 1)[-1] def _compare_results_gethostbyname(self, real_result, gevent_result, _func_name): # Both strings. return self._compare_address_strings(real_result, gevent_result) def _compare_results_gethostbyname_ex(self, real_result, gevent_result, _func_name): # Results are IPv4 only: # (hostname, [aliaslist], [ipaddrlist]) # As for getaddrinfo, we'll just check the ipaddrlist has something in common. return not set(real_result[2]).isdisjoint(set(gevent_result[2])) def assertEqualResults(self, real_result, gevent_result, func_name, require_equal_errors=True): errors = ( OverflowError, TypeError, UnicodeError, socket.error, socket.gaierror, socket.herror, ) if isinstance(real_result, errors) and isinstance(gevent_result, errors): if require_equal_errors: self._compare_exceptions(real_result, gevent_result, func_name) return real_result = self._normalize_result(real_result, func_name) gevent_result = self._normalize_result(gevent_result, func_name) if self._compare_results(real_result, gevent_result, func_name): return # If we're using a different resolver, allow the real resolver to generate an # error that the gevent resolver actually gets an answer to. if ( RESOLVER_NOT_SYSTEM and isinstance(real_result, errors) and not isinstance(gevent_result, errors) ): return # On PyPy, socket.getnameinfo() can produce results even when the hostname resolves to # multiple addresses, like www.gevent.org does. DNSPython (and c-ares?) don't do that, # they refuse to pick a name and raise ``socket.error`` if ( RESOLVER_NOT_SYSTEM and PYPY and func_name == 'getnameinfo' and isinstance(gevent_result, socket.error) and not isinstance(real_result, socket.error) ): return # From 2.7 on, assertEqual does a better job highlighting the results than we would # because it calls assertSequenceEqual, which highlights the exact # difference in the tuple self.assertEqual(real_result, gevent_result) class TestTypeError(TestCase): pass add(TestTypeError, None) add(TestTypeError, 25) class TestHostname(TestCase): NORMALIZE_GHBA_IGNORE_ALIAS = True def __normalize_name(self, result): if (RESOLVER_ARES or RESOLVER_DNSPYTHON) and isinstance(result, tuple): # The system resolver can return the FQDN, in the first result, # when given certain configurations. But c-ares and dnspython # do not. 
name = result[0] name = name.split('.', 1)[0] result = (name,) + result[1:] return result def _normalize_result_gethostbyaddr(self, result): result = TestCase._normalize_result_gethostbyaddr(self, result) return self.__normalize_name(result) def _normalize_result_getnameinfo(self, result): result = TestCase._normalize_result_getnameinfo(self, result) if PY2: # Not sure why we only saw this on Python 2 result = self.__normalize_name(result) return result add( TestHostname, socket.gethostname, skip=greentest.RUNNING_ON_TRAVIS and greentest.RESOLVER_NOT_SYSTEM, skip_reason=("Sometimes get a different result for getaddrinfo " "with dnspython; c-ares produces different results for " "localhost on Travis beginning Sept 2019") ) class TestLocalhost(TestCase): # certain tests in test_patched_socket.py only work if getaddrinfo('localhost') does not switch # (e.g. NetworkConnectionAttributesTest.testSourceAddress) #switch_expected = False # XXX: The above has been commented out for some time. Apparently this isn't the case # anymore. def _normalize_result_getaddrinfo(self, result): if RESOLVER_NOT_SYSTEM: # We see that some impls (OS X) return extra results # like DGRAM that ares does not. return () return super(TestLocalhost, self)._normalize_result_getaddrinfo(result) NORMALIZE_GHBA_IGNORE_ALIAS = True if greentest.RUNNING_ON_TRAVIS and greentest.PY2 and RESOLVER_NOT_SYSTEM: def _normalize_result_gethostbyaddr(self, result): # Beginning in November 2017 after an upgrade to Travis, # we started seeing ares return ::1 for localhost, but # the system resolver is still returning 127.0.0.1 under Python 2 result = super(TestLocalhost, self)._normalize_result_gethostbyaddr(result) if isinstance(result, tuple): result = (result[0], result[1], ['127.0.0.1']) return result add( TestLocalhost, 'ip6-localhost', skip=RESOLVER_DNSPYTHON, # XXX: Fix these. skip_reason="Can return gaierror(-2)" ) add( TestLocalhost, 'localhost', skip=greentest.RUNNING_ON_TRAVIS, skip_reason="Can return gaierror(-2)" ) class TestNonexistent(TestCase): pass add(TestNonexistent, 'nonexistentxxxyyy') class Test1234(TestCase): pass add(Test1234, '1.2.3.4') class Test127001(TestCase): NORMALIZE_GHBA_IGNORE_ALIAS = True add( Test127001, '127.0.0.1', # skip=RESOLVER_DNSPYTHON, # skip_reason="Beginning Dec 1 2017, ares started returning ip6-localhost " # "instead of localhost" ) class TestBroadcast(TestCase): switch_expected = False if RESOLVER_DNSPYTHON: # dnspython raises errors for broadcasthost/255.255.255.255, but the system # can resolve it. 
@unittest.skip('ares raises errors for broadcasthost/255.255.255.255') def test__broadcast__gethostbyaddr(self): return test__broadcast__gethostbyname = test__broadcast__gethostbyaddr add(TestBroadcast, '<broadcast>') from gevent.resolver._hostsfile import HostsFile class SanitizedHostsFile(HostsFile): def iter_all_host_addr_pairs(self): for name, addr in super(SanitizedHostsFile, self).iter_all_host_addr_pairs(): if (RESOLVER_NOT_SYSTEM and (name.endswith('local') # ignore bonjour, ares can't find them # ignore common aliases that ares can't find or addr == '255.255.255.255' or name == 'broadcasthost' # We get extra results from some impls, like OS X # it returns DGRAM results or name == 'localhost')): continue # pragma: no cover if name.endswith('local'): # These can only be found if bonjour is running, # and are very slow to do so with the system resolver on OS X continue yield name, addr @greentest.skipIf(greentest.RUNNING_ON_CI, "This sometimes randomly fails on Travis with ares and on appveyor, beginning Feb 13, 2018") # Probably due to round-robin DNS, # since this is not actually the system's etc hosts file. # TODO: Rethink this. We need something reliable. Go back to using # the system's etc hosts? class TestEtcHosts(TestCase): MAX_HOSTS = int(os.getenv('GEVENTTEST_MAX_ETC_HOSTS', '10')) @classmethod def populate_tests(cls): hf = SanitizedHostsFile(os.path.join(os.path.dirname(__file__), 'hosts_file.txt')) all_etc_hosts = sorted(hf.iter_all_host_addr_pairs()) if len(all_etc_hosts) > cls.MAX_HOSTS and not RUN_ALL_HOST_TESTS: all_etc_hosts = all_etc_hosts[:cls.MAX_HOSTS] for host, ip in all_etc_hosts: add(cls, host) add(cls, ip) TestEtcHosts.populate_tests() class TestGeventOrg(TestCase): # For this test to work correctly, it needs to resolve to # an address with a single A record; round-robin DNS and multiple A records # may mess it up (subsequent requests---and we always make two---may return # unequal results). We used to use gevent.org, but that now has multiple A records; # trying www.gevent.org which is a CNAME to readthedocs.org then worked, but it became # an alias for python-gevent.readthedocs.org, which is an alias for readthedocs.io, # and which also has multiple addresses. So we run the resolver twice to try to get # the different answers, if needed. Even then it's not enough, so # we normalize the two addresses we get to a single one. HOSTNAME = 'www.gevent.org' def _normalize_result_gethostbyname(self, result): if result == '104.17.33.82': result = '104.17.32.82' if result == '104.16.254.120': result = '104.16.253.120' return result def _normalize_result_gethostbyname_ex(self, result): result = super(TestGeventOrg, self)._normalize_result_gethostbyname_ex(result) if result[0] == 'python-gevent.readthedocs.org': result = ('readthedocs.io', ) + result[1:] return result def test_AI_CANONNAME(self): # Not all systems support AI_CANONNAME; notably the manylinux # resolvers *sometimes* do not. Specifically, sometimes they # provide the canonical name *only* on the first result.
args = ( # host TestGeventOrg.HOSTNAME, # port None, # family socket.AF_INET, # type 0, # proto 0, # flags socket.AI_CANONNAME ) gevent_result = gevent_socket.getaddrinfo(*args) self.assertEqual(gevent_result[0][3], 'readthedocs.io') real_result = socket.getaddrinfo(*args) self.NORMALIZE_GAI_IGNORE_CANONICAL_NAME = not all(r[3] for r in real_result) try: self.assertEqualResults(real_result, gevent_result, 'getaddrinfo') finally: del self.NORMALIZE_GAI_IGNORE_CANONICAL_NAME add(TestGeventOrg, TestGeventOrg.HOSTNAME) class TestFamily(TestCase): def test_inet(self): self._test('getaddrinfo', TestGeventOrg.HOSTNAME, None, socket.AF_INET) def test_unspec(self): self._test('getaddrinfo', TestGeventOrg.HOSTNAME, None, socket.AF_UNSPEC) def test_badvalue(self): self._test('getaddrinfo', TestGeventOrg.HOSTNAME, None, 255) self._test('getaddrinfo', TestGeventOrg.HOSTNAME, None, 255000) self._test('getaddrinfo', TestGeventOrg.HOSTNAME, None, -1) @unittest.skipIf(RESOLVER_DNSPYTHON, "Raises the wrong errno") def test_badtype(self): self._test('getaddrinfo', TestGeventOrg.HOSTNAME, 'x') class Test_getaddrinfo(TestCase): def _test_getaddrinfo(self, *args): self._test('getaddrinfo', *args) def test_80(self): self._test_getaddrinfo(TestGeventOrg.HOSTNAME, 80) def test_int_string(self): self._test_getaddrinfo(TestGeventOrg.HOSTNAME, '80') def test_0(self): self._test_getaddrinfo(TestGeventOrg.HOSTNAME, 0) def test_http(self): self._test_getaddrinfo(TestGeventOrg.HOSTNAME, 'http') def test_notexistent_tld(self): self._test_getaddrinfo('myhost.mytld', 53) def test_notexistent_dot_com(self): self._test_getaddrinfo('sdfsdfgu5e66098032453245wfdggd.com', 80) def test1(self): return self._test_getaddrinfo(TestGeventOrg.HOSTNAME, 52, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, 0) def test2(self): return self._test_getaddrinfo(TestGeventOrg.HOSTNAME, 53, socket.AF_INET, socket.SOCK_DGRAM, 17) @unittest.skipIf(RESOLVER_DNSPYTHON, "dnspython only returns some of the possibilities") def test3(self): return self._test_getaddrinfo('google.com', 'http', socket.AF_INET6) @greentest.skipIf(PY2, "Enums only on Python 3.4+") def test_enums(self): # https://github.com/gevent/gevent/issues/1310 # On Python 3, getaddrinfo does special things to make sure that # the fancy enums are returned. gai = gevent_socket.getaddrinfo('example.com', 80, socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_TCP) af, socktype, _proto, _canonname, _sa = gai[0] self.assertIs(socktype, socket.SOCK_STREAM) self.assertIs(af, socket.AF_INET) class TestInternational(TestCase): if PY2: # We expect these to raise UnicodeEncodeError, which is a # subclass of ValueError REAL_ERRORS = set(TestCase.REAL_ERRORS) - {ValueError,} if RESOLVER_ARES: def test_russian_getaddrinfo_http(self): # And somehow, test_russion_getaddrinfo_http (``getaddrinfo(name, 'http')``) # manages to work with recent versions of Python 2, but our preemptive encoding # to ASCII causes it to fail with the c-ares resolver; but only that one test out of # all of them. self.skipTest("ares fails to encode.") # dns python can actually resolve these: it uses # the 2008 version of idna encoding, whereas on Python 2, # with the default resolver, it tries to encode to ascii and # raises a UnicodeEncodeError. So we get different results. # Starting 20221027, on GitHub Actions and *some* versions of Python, # we started getting a different error result from our own resolver # compared to the system. This is very weird because our own resolver # calls the system. I can't reproduce locally. 
Perhaps the two # different answers are because of caching? One from the real DNS # server, one from the local resolver library? Hence # require_equal_errors=False # ('system:', "herror(2, 'Host name lookup failure')", # 'gevent:', "herror(1, 'Unknown host')") add(TestInternational, u'президент.рф', 'russian', skip=(PY2 and RESOLVER_DNSPYTHON), skip_reason="dnspython can actually resolve these", require_equal_errors=False) add(TestInternational, u'президент.рф'.encode('idna'), 'idna', require_equal_errors=False) @skipWithoutExternalNetwork("Tries to resolve and compare hostnames/addrinfo") class TestInterrupted_gethostbyname(gevent.testing.timing.AbstractGenericWaitTestCase): # There are refs to a Waiter in the C code that don't go # away yet; one gc may or may not do it. @greentest.ignores_leakcheck def test_returns_none_after_timeout(self): super(TestInterrupted_gethostbyname, self).test_returns_none_after_timeout() def wait(self, timeout): with gevent.Timeout(timeout, False): for index in xrange(1000000): try: gevent_socket.gethostbyname('www.x%s.com' % index) except socket.error: pass raise AssertionError('Timeout was not raised') def cleanup(self): # Depending on timing, this can raise: # (This suddenly started happening on Apr 6 2016; www.x1000000.com # is apparently no longer around) # File "test__socket_dns.py", line 538, in cleanup # gevent.get_hub().threadpool.join() # File "/home/travis/build/gevent/gevent/src/gevent/threadpool.py", line 108, in join # sleep(delay) # File "/home/travis/build/gevent/gevent/src/gevent/hub.py", line 169, in sleep # hub.wait(loop.timer(seconds, ref=ref)) # File "/home/travis/build/gevent/gevent/src/gevent/hub.py", line 651, in wait # result = waiter.get() # File "/home/travis/build/gevent/gevent/src/gevent/hub.py", line 899, in get # return self.hub.switch() # File "/home/travis/build/gevent/gevent/src/greentest/greentest.py", line 520, in switch # return _original_Hub.switch(self, *args) # File "/home/travis/build/gevent/gevent/src/gevent/hub.py", line 630, in switch # return RawGreenlet.switch(self) # gaierror: [Errno -2] Name or service not known try: gevent.get_hub().threadpool.join() except Exception: # pragma: no cover pylint:disable=broad-except traceback.print_exc() # class TestInterrupted_getaddrinfo(greentest.GenericWaitTestCase): # # def wait(self, timeout): # with gevent.Timeout(timeout, False): # for index in range(1000): # try: # gevent_socket.getaddrinfo('www.a%s.com' % index, 'http') # except socket.gaierror: # pass class TestBadName(TestCase): pass add(TestBadName, 'xxxxxxxxxxxx') class TestBadIP(TestCase): pass add(TestBadIP, '1.2.3.400') @greentest.skipIf(greentest.RUNNING_ON_TRAVIS, "Travis began returning ip6-localhost") class Test_getnameinfo_127001(TestCase): def test(self): self._test('getnameinfo', ('127.0.0.1', 80), 0) def test_DGRAM(self): self._test('getnameinfo', ('127.0.0.1', 779), 0) self._test('getnameinfo', ('127.0.0.1', 779), socket.NI_DGRAM) def test_NOFQDN(self): # I get ('localhost', 'www') with _socket but ('localhost.localdomain', 'www') with gevent.socket self._test('getnameinfo', ('127.0.0.1', 80), socket.NI_NOFQDN) def test_NAMEREQD(self): self._test('getnameinfo', ('127.0.0.1', 80), socket.NI_NAMEREQD) class Test_getnameinfo_geventorg(TestCase): @unittest.skipIf(RESOLVER_DNSPYTHON, "dnspython raises an error when multiple results are returned") def test_NUMERICHOST(self): self._test('getnameinfo', (TestGeventOrg.HOSTNAME, 80), 0) self._test('getnameinfo', (TestGeventOrg.HOSTNAME, 80), socket.NI_NUMERICHOST) 
@unittest.skipIf(RESOLVER_DNSPYTHON, "dnspython raises an error when multiple results are returned") def test_NUMERICSERV(self): self._test('getnameinfo', (TestGeventOrg.HOSTNAME, 80), socket.NI_NUMERICSERV) def test_domain1(self): self._test('getnameinfo', (TestGeventOrg.HOSTNAME, 80), 0) def test_domain2(self): self._test('getnameinfo', ('www.gevent.org', 80), 0) def test_port_zero(self): self._test('getnameinfo', ('www.gevent.org', 0), 0) class Test_getnameinfo_fail(TestCase): def test_port_string(self): self._test('getnameinfo', ('www.gevent.org', 'http'), 0) def test_bad_flags(self): self._test('getnameinfo', ('localhost', 80), 55555555) class TestInvalidPort(TestCase): @flaky.reraises_flaky_race_condition() def test_overflow_neg_one(self): # On Appveyor beginning 2019-03-21, the system resolver # sometimes returns ('23.100.69.251', '65535') instead of # raising an error. That IP address belongs to # readthedocs[.io?] which is where www.gevent.org is a CNAME # to...but it doesn't actually *reverse* to readthedocs.io. # Can't reproduce locally, not sure what's happening self._test('getnameinfo', ('www.gevent.org', -1), 0) # Beginning with PyPy 2.7 7.1 on Appveyor, we sometimes see this # return an OverflowError instead of the TypeError about None @greentest.skipOnLibuvOnPyPyOnWin("Errors don't match") def test_typeerror_none(self): self._test('getnameinfo', ('www.gevent.org', None), 0) # Beginning with PyPy 2.7 7.1 on Appveyor, we sometimes see this # return a TypeError instead of the OverflowError. # XXX: But see Test_getnameinfo_fail.test_port_string where this does work. @greentest.skipOnLibuvOnPyPyOnWin("Errors don't match") def test_typeerror_str(self): self._test('getnameinfo', ('www.gevent.org', 'x'), 0) def test_overflow_port_too_large(self): self._test('getnameinfo', ('www.gevent.org', 65536), 0) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_dns6.py000066400000000000000000000072041471441230600223700ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- from __future__ import print_function, absolute_import, division import socket import unittest import gevent.testing as greentest from gevent.tests.test__socket_dns import TestCase, add from gevent.testing.sysinfo import OSX from gevent.testing.sysinfo import RESOLVER_DNSPYTHON from gevent.testing.sysinfo import RESOLVER_ARES from gevent.testing.sysinfo import PYPY from gevent.testing.sysinfo import PY2 # We can't control the DNS servers on CI (or in general...) # for the system. This works best with the google DNS servers # The getnameinfo test can fail on CI. # Previously only Test6_ds failed, but as of Jan 2018, Test6 # and Test6_google begin to fail: # First differing element 0: # 'vm2.test-ipv6.com' # 'ip119.gigo.com' # - ('vm2.test-ipv6.com', [], ['2001:470:1:18::125']) # ? --------- ^^ ^^ # + ('ip119.gigo.com', [], ['2001:470:1:18::119']) # ? ^^^^^^^^ ^^ # These are known to work on jamadden's OS X machine using the google # resolvers (but not with DNSPython; things don't *quite* match)...so # by default we skip the tests everywhere else. class Test6(TestCase): NORMALIZE_GHBA_IGNORE_ALIAS = True # host that only has AAAA record host = 'aaaa.test-ipv6.com' def _normalize_result_gethostbyaddr(self, result): # This part of the test is effectively disabled. There are multiple addresses # that resolve and which ones you get depend on the settings # of the system and ares. They don't match exactly.
return () if RESOLVER_ARES and PY2: def _normalize_result_getnameinfo(self, result): # Beginning 2020-07-23, # c-ares returns a scope id on the result: # ('2001:470:1:18::115%0', 'http') # The standard library does not (on linux or os x). # I've only seen '%0', so only remove that ipaddr, service = result if ipaddr.endswith('%0'): ipaddr = ipaddr[:-2] return (ipaddr, service) if not OSX and RESOLVER_DNSPYTHON: # It raises gaierror instead of socket.error, # which is not great and leads to failures. def _run_test_getnameinfo(self, *_args, **_kwargs): return (), 0, (), 0 def _run_test_gethostbyname(self, *_args, **_kwargs): raise unittest.SkipTest("gethostbyname[_ex] does not support IPV6") _run_test_gethostbyname_ex = _run_test_gethostbyname def test_empty(self): self._test('getaddrinfo', self.host, 'http') def test_inet(self): self._test('getaddrinfo', self.host, None, socket.AF_INET) def test_inet6(self): self._test('getaddrinfo', self.host, None, socket.AF_INET6) def test_unspec(self): self._test('getaddrinfo', self.host, None, socket.AF_UNSPEC) class Test6_google(Test6): host = 'ipv6.google.com' if greentest.RUNNING_ON_CI: # Disabled, there are multiple possibilities # and we can get different ones. Even the system resolvers # can go round-robin and provide different answers. def _normalize_result_getnameinfo(self, result): return () if PYPY: # PyPy tends to be especially problematic in that area. _normalize_result_getaddrinfo = _normalize_result_getnameinfo add(Test6, Test6.host) add(Test6_google, Test6_google.host) class Test6_ds(Test6): # host that has both A and AAAA records host = 'ds.test-ipv6.com' _normalize_result_gethostbyname = Test6._normalize_result_gethostbyaddr add(Test6_ds, Test6_ds.host) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_errors.py000066400000000000000000000035151471441230600230330ustar00rootroot00000000000000# Copyright (c) 2008-2009 AG Projects # Author: Denis Bilenko # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import gevent.testing as greentest from gevent.testing import support from gevent.testing import sysinfo from gevent.socket import socket, error from gevent.exceptions import LoopExit class TestSocketErrors(greentest.TestCase): __timeout__ = 5 def test_connection_refused(self): port = support.find_unused_port() with socket() as s: try: with self.assertRaises(error) as exc: s.connect((greentest.DEFAULT_CONNECT_HOST, port)) except LoopExit: return ex = exc.exception self.assertIn(ex.args[0], sysinfo.CONN_REFUSED_ERRORS, ex) self.assertIn('refused', str(ex).lower()) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_ex.py000066400000000000000000000021261471441230600221300ustar00rootroot00000000000000import gevent.testing as greentest from gevent import socket import errno import sys class TestClosedSocket(greentest.TestCase): switch_expected = False def test(self): sock = socket.socket() sock.close() try: sock.send(b'a', timeout=1) self.fail("Should raise socket error") except OSError as ex: if ex.args[0] != errno.EBADF: if sys.platform.startswith('win'): # Windows/Py3 raises "OSError: [WinError 10038] " # which is not standard and not what it does # on Py2. pass else: raise class TestRef(greentest.TestCase): switch_expected = False def test(self): # pylint:disable=no-member sock = socket.socket() self.assertTrue(sock.ref) sock.ref = False self.assertFalse(sock.ref) self.assertFalse(sock._read_event.ref) self.assertFalse(sock._write_event.ref) sock.close() if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_send_memoryview.py000066400000000000000000000017001471441230600247250ustar00rootroot00000000000000# See issue #466 import unittest import ctypes import gevent.testing as greentest class AnStructure(ctypes.Structure): _fields_ = [("x", ctypes.c_int)] def _send(socket): for meth in ('sendall', 'send'): anStructure = AnStructure() sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.connect((greentest.DEFAULT_CONNECT_HOST, 12345)) getattr(sock, meth)(anStructure) sock.close() sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.connect((greentest.DEFAULT_CONNECT_HOST, 12345)) sock.settimeout(1.0) getattr(sock, meth)(anStructure) sock.close() class TestSendBuiltinSocket(unittest.TestCase): def test_send(self): import socket _send(socket) class TestSendGeventSocket(unittest.TestCase): def test_send(self): import gevent.socket _send(gevent.socket) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_ssl.py000066400000000000000000000015411471441230600223150ustar00rootroot00000000000000#!/usr/bin/python from gevent import monkey monkey.patch_all() try: import httplib except ImportError: from http import client as httplib import socket import gevent.testing as greentest @greentest.skipUnless( hasattr(socket, 'ssl'), "Needs socket.ssl (Python 2)" ) @greentest.skipWithoutExternalNetwork("Tries to access amazon.com") class AmazonHTTPSTests(greentest.TestCase): __timeout__ = 30 def test_amazon_response(self): conn = httplib.HTTPSConnection('sdb.amazonaws.com') conn.request('GET', '/') conn.getresponse() def test_str_and_repr(self): conn = socket.socket() conn.connect(('sdb.amazonaws.com', 443)) ssl_conn = socket.ssl(conn) # pylint:disable=no-member assert str(ssl_conn) assert repr(ssl_conn) if __name__ == "__main__": greentest.main() gevent-24.11.1/src/gevent/tests/test__socket_timeout.py000066400000000000000000000025071471441230600232050ustar00rootroot00000000000000import 
gevent from gevent import socket import gevent.testing as greentest class Test(greentest.TestCase): server = None acceptor = None server_port = None def _accept(self): try: conn, _ = self.server.accept() self._close_on_teardown(conn) except socket.error: pass def setUp(self): super(Test, self).setUp() self.server = self._close_on_teardown(greentest.tcp_listener(backlog=1)) self.server_port = self.server.getsockname()[1] self.acceptor = gevent.spawn(self._accept) gevent.sleep(0) def tearDown(self): if self.acceptor is not None: self.acceptor.kill() self.acceptor = None if self.server is not None: self.server.close() self.server = None super(Test, self).tearDown() def test_timeout(self): gevent.sleep(0) sock = socket.socket() self._close_on_teardown(sock) sock.connect((greentest.DEFAULT_CONNECT_HOST, self.server_port)) sock.settimeout(0.1) with self.assertRaises(socket.error) as cm: sock.recv(1024) ex = cm.exception self.assertEqual(ex.args, ('timed out',)) self.assertEqual(str(ex), 'timed out') if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__socketpair.py000066400000000000000000000016671471441230600223210ustar00rootroot00000000000000from gevent import monkey; monkey.patch_all() import socket import unittest class TestSocketpair(unittest.TestCase): def test_makefile(self): msg = b'hello world' x, y = socket.socketpair() x.sendall(msg) x.close() with y.makefile('rb') as f: read = f.read() self.assertEqual(msg, read) y.close() @unittest.skipUnless(hasattr(socket, 'fromfd'), 'Needs socket.fromfd') def test_fromfd(self): msg = b'hello world' x, y = socket.socketpair() xx = socket.fromfd(x.fileno(), x.family, socket.SOCK_STREAM) x.close() yy = socket.fromfd(y.fileno(), y.family, socket.SOCK_STREAM) y.close() xx.sendall(msg) xx.close() with yy.makefile('rb') as f: read = f.read() self.assertEqual(msg, read) yy.close() if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__ssl.py000066400000000000000000000127351471441230600207540ustar00rootroot00000000000000from __future__ import print_function, division, absolute_import from gevent import monkey monkey.patch_all() import os import socket import gevent.testing as greentest # Be careful not to have TestTCP as a bare attribute in this module, # even aliased, to avoid running duplicate tests from gevent.tests import test__socket import ssl def ssl_listener(private_key, certificate): raw_listener = socket.socket() greentest.bind_and_listen(raw_listener) # pylint:disable=deprecated-method sock = wrap_socket(raw_listener, keyfile=private_key, certfile=certificate, server_side=True) return sock, raw_listener def wrap_socket(sock, *, keyfile=None, certfile=None, server_side=False): context = ssl.SSLContext( protocol=ssl.PROTOCOL_TLS ) context.verify_mode = ssl.CERT_NONE context.check_hostname = False context.load_default_certs() if keyfile is not None or certfile is not None: context.load_cert_chain(certfile=certfile, keyfile=keyfile) return context.wrap_socket(sock, server_side=server_side) class TestSSL(test__socket.TestTCP): # To generate: # openssl req -x509 -newkey rsa:4096 -keyout test_server.key -out test_server.crt -days 36500 -nodes -subj '/CN=localhost' certfile = os.path.join(os.path.dirname(__file__), 'test_server.crt') privfile = os.path.join(os.path.dirname(__file__), 'test_server.key') # Python 2.x has socket.sslerror (which is an alias for # ssl.SSLError); That's gone in Py3 though. 
In Python 2, most timeouts are raised # as SSLError, but Python 3 raises the normal socket.timeout instead. So this has # the effect of making TIMEOUT_ERROR be SSLError on Py2 and socket.timeout on Py3 # See https://bugs.python.org/issue10272. # PyPy3 7.2 has a bug, though: it shares much of the SSL implementation with Python 2, # and it unconditionally does `socket.sslerror = SSLError` when ssl is imported. # So we can't rely on getattr/hasattr tests, we must be explicit. TIMEOUT_ERROR = socket.timeout # pylint:disable=no-member def _setup_listener(self): listener, raw_listener = ssl_listener(self.privfile, self.certfile) self._close_on_teardown(raw_listener) return listener def create_connection(self, *args, **kwargs): # pylint:disable=signature-differs return self._close_on_teardown( # pylint:disable=deprecated-method wrap_socket(super(TestSSL, self).create_connection(*args, **kwargs))) # The SSL library can take a long time to buffer the large amount of data we're trying # to send, so we can't compare to the timeout values _test_sendall_timeout_check_time = False # The SSL layer has extra buffering, so test_sendall needs # to send a very large amount to make it timeout _test_sendall_data = data_sent = b'hello' * 100000000 test_sendall_array = greentest.skipOnMacOnCI("Sometimes misses data")( greentest.skipOnManylinux("Sometimes misses data")( test__socket.TestTCP.test_sendall_array ) ) test_sendall_str = greentest.skipOnMacOnCI("Sometimes misses data")( greentest.skipOnManylinux("Sometimes misses data")( test__socket.TestTCP.test_sendall_str ) ) @greentest.skipOnWindows("Not clear why we're skipping") def test_ssl_sendall_timeout0(self): # Issue #317: SSL_WRITE_PENDING in some corner cases server_sock = [] acceptor = test__socket.Thread(target=lambda: server_sock.append( # pylint:disable=no-member self.listener.accept())) client = self.create_connection() client.setblocking(False) try: # Python 3 raises ssl.SSLWantWriteError; Python 2 simply *hangs* # on non-blocking sockets because it's a simple loop around # send(). Python 2.6 doesn't have SSLWantWriteError expected = getattr(ssl, 'SSLWantWriteError', ssl.SSLError) with self.assertRaises(expected): client.sendall(self._test_sendall_data) finally: acceptor.join() client.close() server_sock[0][0].close() # def test_fullduplex(self): # try: # super(TestSSL, self).test_fullduplex() # except LoopExit: # if greentest.LIBUV and greentest.WIN: # # XXX: Unable to duplicate locally # raise greentest.SkipTest("libuv on Windows sometimes raises LoopExit") # raise @greentest.ignores_leakcheck @greentest.skipOnPy310("No longer raises SSLError") def test_empty_send(self): # Issue 719 # Sending empty bytes with the 'send' method raises # ssl.SSLEOFError in the stdlib. PyPy 4.0 and CPython 2.6 # both just raise the superclass, ssl.SSLError. # Ignored during leakchecks because the third or fourth iteration of the # test hangs on CPython 2/posix for some reason, likely due to # the use of _close_on_teardown keeping something alive longer than intended. # cf test__makefile_ref with self.assertRaises(ssl.SSLError): super(TestSSL, self).test_empty_send() @greentest.ignores_leakcheck def test_sendall_nonblocking(self): # Override; doesn't work with SSL sockets. pass @greentest.ignores_leakcheck def test_connect_with_type_flags_ignored(self): # Override; doesn't work with SSL sockets. 
pass if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__subprocess.py000066400000000000000000000473601471441230600223450ustar00rootroot00000000000000import sys import os import errno import unittest import time import tempfile import gevent.testing as greentest import gevent from gevent.testing import mock from gevent import subprocess if not hasattr(subprocess, 'mswindows'): # PyPy3, native python subprocess subprocess.mswindows = False PYPY = hasattr(sys, 'pypy_version_info') PY3 = sys.version_info[0] >= 3 if subprocess.mswindows: SETBINARY = 'import msvcrt; msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY);' else: SETBINARY = '' python_universal_newlines = hasattr(sys.stdout, 'newlines') # The stdlib of Python 3 on Windows doesn't properly handle universal newlines # (it produces broken results compared to Python 2) # See gevent.subprocess for more details. python_universal_newlines_broken = PY3 and subprocess.mswindows @greentest.skipWithoutResource('subprocess') class TestPopen(greentest.TestCase): # Use the normal error handling. Make sure that any background greenlets # subprocess spawns propagate errors as expected. error_fatal = False def test_exit(self): popen = subprocess.Popen([sys.executable, '-c', 'import sys; sys.exit(10)']) self.assertEqual(popen.wait(), 10) def test_wait(self): popen = subprocess.Popen([sys.executable, '-c', 'import sys; sys.exit(11)']) gevent.wait([popen]) self.assertEqual(popen.poll(), 11) def test_child_exception(self): with self.assertRaises(OSError) as exc: subprocess.Popen(['*']).wait() self.assertEqual(exc.exception.errno, 2) def test_leak(self): num_before = greentest.get_number_open_files() p = subprocess.Popen([sys.executable, "-c", "print()"], stdout=subprocess.PIPE) p.wait() p.stdout.close() del p num_after = greentest.get_number_open_files() self.assertEqual(num_before, num_after) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_communicate(self): p = subprocess.Popen([sys.executable, "-W", "ignore", "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") if sys.executable.endswith('-dbg'): assert stderr.startswith(b'pineapple') else: self.assertEqual(stderr, b"pineapple") @greentest.skipIf(subprocess.mswindows, "Windows does weird things here") @greentest.skipOnLibuvOnCIOnPyPy("Sometimes segfaults") def test_communicate_universal(self): # Native string all the things. See https://github.com/gevent/gevent/issues/1039 p = subprocess.Popen( [ sys.executable, "-W", "ignore", "-c", 'import sys,os;' 'sys.stderr.write("pineapple\\r\\n\\xff\\xff\\xf2\\xf9\\r\\n");' 'sys.stdout.write(sys.stdin.read())' ], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True ) (stdout, stderr) = p.communicate('banana\r\n\xff\xff\xf2\xf9\r\n') self.assertIsInstance(stdout, str) self.assertIsInstance(stderr, str) self.assertEqual(stdout, 'banana\n\xff\xff\xf2\xf9\n') self.assertEqual(stderr, 'pineapple\n\xff\xff\xf2\xf9\n') @greentest.skipOnWindows("Windows IO is weird; this doesn't raise") def test_communicate_undecodable(self): # If the subprocess writes non-decodable data, `communicate` raises the # same UnicodeDecodeError that the stdlib does, instead of # printing it to the hub. This only applies to Python 3, because only it # will actually use text mode. 
# See https://github.com/gevent/gevent/issues/1510 with subprocess.Popen( [ sys.executable, '-W', 'ignore', '-c', "import os, sys; " r'os.write(sys.stdout.fileno(), b"\xff")' ], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, universal_newlines=True ) as p: with self.assertRaises(UnicodeDecodeError): p.communicate() @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_universal1(self): with subprocess.Popen( [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' 'sys.stdout.flush();' 'sys.stdout.write("line3\\r\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line4\\r");' 'sys.stdout.flush();' 'sys.stdout.write("\\nline5");' 'sys.stdout.flush();' 'sys.stdout.write("\\nline6");' ], stdout=subprocess.PIPE, universal_newlines=1, bufsize=1 ) as p: stdout = p.stdout.read() if python_universal_newlines: # Interpreter with universal newline support if not python_universal_newlines_broken: self.assertEqual(stdout, "line1\nline2\nline3\nline4\nline5\nline6") else: # Note the extra newline after line 3 self.assertEqual(stdout, 'line1\nline2\nline3\n\nline4\n\nline5\nline6') else: # Interpreter without universal newline support self.assertEqual(stdout, "line1\nline2\rline3\r\nline4\r\nline5\nline6") @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_universal2(self): with subprocess.Popen( [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'sys.stdout.write("line1\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line2\\r");' 'sys.stdout.flush();' 'sys.stdout.write("line3\\r\\n");' 'sys.stdout.flush();' 'sys.stdout.write("line4\\r\\nline5");' 'sys.stdout.flush();' 'sys.stdout.write("\\nline6");' ], stdout=subprocess.PIPE, universal_newlines=1, bufsize=1 ) as p: stdout = p.stdout.read() if python_universal_newlines: # Interpreter with universal newline support if not python_universal_newlines_broken: self.assertEqual(stdout, "line1\nline2\nline3\nline4\nline5\nline6") else: # Note the extra newline after line 3 self.assertEqual(stdout, 'line1\nline2\nline3\n\nline4\n\nline5\nline6') else: # Interpreter without universal newline support self.assertEqual(stdout, "line1\nline2\rline3\r\nline4\r\nline5\nline6") @greentest.skipOnWindows("Uses 'grep' command") def test_nonblock_removed(self): # see issue #134 r, w = os.pipe() stdin = subprocess.FileObject(r) with subprocess.Popen(['grep', 'text'], stdin=stdin) as p: try: # Closing one half of the pipe causes Python 3 on OS X to terminate the # child process; it exits with code 1 and the assert that p.poll is None # fails. Removing the close lets it pass under both Python 3 and 2.7. 
# If subprocess.Popen._remove_nonblock_flag is changed to a noop, then # the test fails (as expected) even with the close removed #os.close(w) time.sleep(0.1) self.assertEqual(p.poll(), None) finally: if p.poll() is None: p.kill() stdin.close() os.close(w) def test_issue148(self): for _ in range(7): with self.assertRaises(OSError) as exc: with subprocess.Popen('this_name_must_not_exist'): pass self.assertEqual(exc.exception.errno, errno.ENOENT) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_check_output_keyword_error(self): with self.assertRaises(subprocess.CalledProcessError) as exc: # pylint:disable=no-member subprocess.check_output([sys.executable, '-c', 'import sys; sys.exit(44)']) self.assertEqual(exc.exception.returncode, 44) @greentest.skipOnPy3("The default buffer changed in Py3") def test_popen_bufsize(self): # Test that subprocess has unbuffered output by default # (as the vanilla subprocess module) with subprocess.Popen( [sys.executable, '-u', '-c', 'import sys; sys.stdout.write(sys.stdin.readline())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE ) as p: p.stdin.write(b'foobar\n') r = p.stdout.readline() self.assertEqual(r, b'foobar\n') @greentest.ignores_leakcheck @greentest.skipOnWindows("Not sure why?") def test_subprocess_in_native_thread(self): # gevent.subprocess doesn't work from a background # native thread. See #688 from gevent import monkey # must be a native thread; defend against monkey-patching ex = [] Thread = monkey.get_original('threading', 'Thread') def fn(): with self.assertRaises(TypeError) as exc: gevent.subprocess.Popen('echo 123', shell=True) ex.append(exc.exception) thread = Thread(target=fn) thread.start() thread.join() self.assertEqual(len(ex), 1) self.assertTrue(isinstance(ex[0], TypeError), ex) self.assertEqual(ex[0].args[0], 'child watchers are only available on the default loop') @greentest.skipOnLibuvOnPyPyOnWin("hangs") def __test_no_output(self, kwargs, kind): with subprocess.Popen( [sys.executable, '-c', 'pass'], stdout=subprocess.PIPE, **kwargs ) as proc: stdout, stderr = proc.communicate() self.assertIsInstance(stdout, kind) self.assertIsNone(stderr) @greentest.skipOnLibuvOnCIOnPyPy("Sometimes segfaults; " "https://travis-ci.org/gevent/gevent/jobs/327357682") def test_universal_newlines_text_mode_no_output_is_always_str(self): # If the file is in universal_newlines mode, we should always get a str when # there is no output. # https://github.com/gevent/gevent/pull/939 self.__test_no_output({'universal_newlines': True}, str) @greentest.skipIf(sys.version_info[:2] < (3, 6), "Need encoding argument") def test_encoded_text_mode_no_output_is_str(self): # If the file is in universal_newlines mode, we should always get a str when # there is no output. # https://github.com/gevent/gevent/pull/939 self.__test_no_output({'encoding': 'utf-8'}, str) def test_default_mode_no_output_is_always_str(self): # If the file is in default mode, we should always get a str when # there is no output. 
# https://github.com/gevent/gevent/pull/939 self.__test_no_output({}, bytes) @greentest.skipOnWindows("Testing POSIX fd closing") class TestFDs(unittest.TestCase): @mock.patch('os.closerange') @mock.patch('gevent.subprocess._set_inheritable') @mock.patch('os.close') def test_close_fds_brute_force(self, close, set_inheritable, closerange): keep = ( 4, 5, # Leave a hole # 6, 7, ) subprocess.Popen._close_fds_brute_force(keep, None) closerange.assert_has_calls([ mock.call(3, 4), mock.call(8, subprocess.MAXFD), ]) set_inheritable.assert_has_calls([ mock.call(4, True), mock.call(5, True), ]) close.assert_called_once_with(6) @mock.patch('gevent.subprocess.Popen._close_fds_brute_force') @mock.patch('os.listdir') def test_close_fds_from_path_bad_values(self, listdir, brute_force): listdir.return_value = 'Not an Integer' subprocess.Popen._close_fds_from_path('path', [], 42) brute_force.assert_called_once_with([], 42) @mock.patch('os.listdir') @mock.patch('os.closerange') @mock.patch('gevent.subprocess._set_inheritable') @mock.patch('os.close') def test_close_fds_from_path(self, close, set_inheritable, closerange, listdir): keep = ( 4, 5, # Leave a hole # 6, 7, ) listdir.return_value = ['1', '6', '37'] subprocess.Popen._close_fds_from_path('path', keep, 5) self.assertEqual([], closerange.mock_calls) set_inheritable.assert_has_calls([ mock.call(4, True), mock.call(7, True), ]) close.assert_has_calls([ mock.call(6), mock.call(37), ]) @mock.patch('gevent.subprocess.Popen._close_fds_brute_force') @mock.patch('os.path.isdir') def test_close_fds_no_dir(self, isdir, brute_force): isdir.return_value = False subprocess.Popen._close_fds([], 42) brute_force.assert_called_once_with([], 42) isdir.assert_has_calls([ mock.call('/proc/self/fd'), mock.call('/dev/fd'), ]) @mock.patch('gevent.subprocess.Popen._close_fds_from_path') @mock.patch('gevent.subprocess.Popen._close_fds_brute_force') @mock.patch('os.path.isdir') def test_close_fds_with_dir(self, isdir, brute_force, from_path): isdir.return_value = True subprocess.Popen._close_fds([7], 42) self.assertEqual([], brute_force.mock_calls) from_path.assert_called_once_with('/proc/self/fd', [7], 42) class RunFuncTestCase(greentest.TestCase): # Based on code from python 3.6+ __timeout__ = greentest.LARGE_TIMEOUT @greentest.skipWithoutResource('subprocess') def run_python(self, code, **kwargs): """Run Python code in a subprocess using subprocess.run""" argv = [sys.executable, "-c", code] return subprocess.run(argv, **kwargs) def test_returncode(self): # call() function with sequence argument cp = self.run_python("import sys; sys.exit(47)") self.assertEqual(cp.returncode, 47) with self.assertRaises(subprocess.CalledProcessError): # pylint:disable=no-member cp.check_returncode() def test_check(self): with self.assertRaises(subprocess.CalledProcessError) as c: # pylint:disable=no-member self.run_python("import sys; sys.exit(47)", check=True) self.assertEqual(c.exception.returncode, 47) def test_check_zero(self): # check_returncode shouldn't raise when returncode is zero cp = self.run_python("import sys; sys.exit(0)", check=True) self.assertEqual(cp.returncode, 0) def test_timeout(self): # run() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.run waits for the # child. 
with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file with tempfile.TemporaryFile() as tf: tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' with tempfile.TemporaryFile() as tf: tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: self.run_python( ( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)" ), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. @greentest.skipOnWindows("requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): #Output capturing after a timeout mustn't hang forever on open filehandles with self.runs_in_given_time(0.1): with self.assertRaises(subprocess.TimeoutExpired): subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__subprocess_interrupted.py000066400000000000000000000036021471441230600247610ustar00rootroot00000000000000import sys if 'runtestcase' in sys.argv[1:]: # pragma: no cover import gevent import gevent.subprocess gevent.spawn(sys.exit, 'bye') # Look closely, this doesn't actually do anything, that's a string # not a division gevent.subprocess.Popen([sys.executable, '-c', '"1/0"']) gevent.sleep(1) else: # XXX: Handle this more automatically. See comments in the testrunner. 
from gevent.testing.resources import exit_without_resource exit_without_resource('subprocess') import subprocess for _ in range(5): # not on Py2 pylint:disable=consider-using-with out, err = subprocess.Popen([sys.executable, '-W', 'ignore', __file__, 'runtestcase'], stderr=subprocess.PIPE).communicate() # We've seen a few unexpected forms of output. # # The first involves 'refs'; I don't remember what that was # about, but I think it had to do with debug builds of Python. # # The second is the classic "Unhandled exception in thread # started by \nsys.excepthook is missing\nlost sys.stderr". # This is a race condition between closing sys.stderr and # writing buffered data to a pipe that hasn't been read. We # only see this using GEVENT_FILE=thread (which makes sense); # likewise, on Python 2 with thread, we can sometimes get # `super() argument 1 must be type, not None`; this happens on module # cleanup. # # The third is similar to the second: "AssertionError: # ...\nIOError: close() called during concurrent operation on # the same file object.\n" if b'refs' in err or b'sys.excepthook' in err or b'concurrent' in err: assert err.startswith(b'bye'), repr(err) # pragma: no cover else: assert err.strip() == b'bye', repr(err) gevent-24.11.1/src/gevent/tests/test__subprocess_poll.py000066400000000000000000000005321471441230600233610ustar00rootroot00000000000000import sys # XXX: Handle this more automatically. See comments in the testrunner. from gevent.testing.resources import exit_without_resource exit_without_resource('subprocess') from gevent.subprocess import Popen from gevent.testing.util import alarm alarm(3) popen = Popen([sys.executable, '-c', 'pass']) while popen.poll() is None: pass gevent-24.11.1/src/gevent/tests/test__systemerror.py000066400000000000000000000063371471441230600225520ustar00rootroot00000000000000import sys import gevent.testing as greentest import gevent from gevent.hub import get_hub def raise_(ex): raise ex MSG = 'should be re-raised and caught' class Test(greentest.TestCase): x = None error_fatal = False def start(self, *args): raise NotImplementedError def setUp(self): self.x = None def test_sys_exit(self): self.start(sys.exit, MSG) try: gevent.sleep(0.001) except SystemExit as ex: assert str(ex) == MSG, repr(str(ex)) else: raise AssertionError('must raise SystemExit') def test_keyboard_interrupt(self): self.start(raise_, KeyboardInterrupt) try: gevent.sleep(0.001) except KeyboardInterrupt: pass else: raise AssertionError('must raise KeyboardInterrupt') def test_keyboard_interrupt_stderr_patched(self): # XXX: This one non-top-level call prevents us from being # run in a process with other tests. from gevent import monkey monkey.patch_sys(stdin=False, stdout=False, stderr=True) try: try: self.start(raise_, KeyboardInterrupt) while True: gevent.sleep(0.1) except KeyboardInterrupt: pass # expected finally: sys.stderr = monkey.get_original('sys', 'stderr') def test_system_error(self): self.start(raise_, SystemError(MSG)) with self.assertRaisesRegex(SystemError, MSG): gevent.sleep(0.002) def test_exception(self): self.start(raise_, Exception('regular exception must not kill the program')) gevent.sleep(0.001) class TestCallback(Test): def tearDown(self): if self.x is not None: # libuv: See the notes in libuv/loop.py:loop._start_callback_timer # If that's broken, test_exception can fail sporadically. # If the issue is the same, then adding `gevent.sleep(0)` here # will solve it. There's also a race condition for the first loop, # so we sleep twice. 
assert not self.x.pending, self.x def start(self, *args): self.x = get_hub().loop.run_callback(*args) if greentest.LIBUV: def test_exception(self): # This call will enter the loop for the very first time (if we're running # standalone). On libuv, where timers run first, that means that depending on the # amount of time that elapses between the call to uv_timer_start and uv_run, # this timer might fire before our check or prepare watchers, and hence callbacks, # run. # We make this call now so that the call in the super class is guaranteed to be # somewhere in the loop and not subject to that race condition. gevent.sleep(0.001) super(TestCallback, self).test_exception() class TestSpawn(Test): def tearDown(self): gevent.sleep(0.0001) if self.x is not None: assert self.x.dead, self.x def start(self, *args): self.x = gevent.spawn(*args) del Test if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__thread.py000066400000000000000000000014141471441230600214120ustar00rootroot00000000000000from __future__ import print_function from __future__ import absolute_import from gevent.thread import allocate_lock import gevent.testing as greentest try: from _thread import allocate_lock as std_allocate_lock except ImportError: # Py2 from thread import allocate_lock as std_allocate_lock class TestLock(greentest.TestCase): def test_release_unheld_lock(self): std_lock = std_allocate_lock() g_lock = allocate_lock() with self.assertRaises(Exception) as exc: std_lock.release() std_exc = exc.exception with self.assertRaises(Exception) as exc: g_lock.release() g_exc = exc.exception self.assertIsInstance(g_exc, type(std_exc)) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__threading.py000066400000000000000000000050321471441230600221100ustar00rootroot00000000000000""" Tests specifically for the monkey-patched threading module. """ from gevent import monkey; monkey.patch_all() # pragma: testrunner-no-monkey-combine import gevent.hub # check that the locks initialized by 'threading' did not init the hub assert gevent.hub._get_hub() is None, 'monkey.patch_all() should not init hub' import gevent import gevent.testing as greentest import threading def helper(): threading.current_thread() gevent.sleep(0.2) class TestCleanup(greentest.TestCase): def _do_test(self, spawn): before = len(threading._active) g = spawn(helper) gevent.sleep(0.1) self.assertEqual(len(threading._active), before + 1) try: g.join() except AttributeError: while not g.dead: gevent.sleep() # Raw greenlet has no join(), uses a weakref to cleanup. # so the greenlet has to die. On CPython, it's enough to # simply delete our reference. del g # On PyPy, it might take a GC, but for some reason, even # running several GC's doesn't clean it up under 5.6.0. # So we skip the test. 
#import gc #gc.collect() self.assertEqual(len(threading._active), before) def test_cleanup_gevent(self): self._do_test(gevent.spawn) @greentest.skipOnPyPy("weakref is not cleaned up in a timely fashion") def test_cleanup_raw(self): self._do_test(gevent.spawn_raw) class TestLockThread(greentest.TestCase): def _spawn(self, func): t = threading.Thread(target=func) t.start() return t def test_spin_lock_switches(self): # https://github.com/gevent/gevent/issues/1464 # pylint:disable=consider-using-with lock = threading.Lock() lock.acquire() spawned = [] def background(): spawned.append(True) while not lock.acquire(False): pass thread = threading.Thread(target=background) # If lock.acquire(False) doesn't yield when it fails, # then this never returns. thread.start() # Verify it tried to run self.assertEqual(spawned, [True]) # We can attempt to join it, which won't work. thread.join(0) # We can release the lock and then it will acquire. lock.release() thread.join() class TestLockGreenlet(TestLockThread): def _spawn(self, func): return gevent.spawn(func) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__threading_2.py000066400000000000000000000557571471441230600223540ustar00rootroot00000000000000# testing gevent's Event, Lock, RLock, Semaphore, BoundedSemaphore with standard test_threading from gevent.testing.six import xrange import gevent.testing as greentest setup_ = """from gevent import monkey; monkey.patch_all() from gevent.event import Event from gevent.lock import RLock, Semaphore, BoundedSemaphore from gevent.thread import allocate_lock as Lock import threading threading.Event = Event threading.Lock = Lock # NOTE: We're completely patching around the allocate_lock # patch we try to do with RLock; our monkey patch doesn't # behave this way, but we do it in tests to make sure that # our RLock implementation behaves correctly by itself. # However, we must test the patched version too, so make it # available. threading.NativeRLock = threading.RLock threading.RLock = RLock threading.Semaphore = Semaphore threading.BoundedSemaphore = BoundedSemaphore """ exec(setup_) setup_3 = '\n'.join(' %s' % line for line in setup_.split('\n')) setup_4 = '\n'.join(' %s' % line for line in setup_.split('\n')) from gevent.testing import support verbose = support.verbose import random import re import sys import threading import _thread as thread import time import unittest import weakref from gevent.tests import lock_tests verbose = False # pylint:disable=consider-using-with from unittest.mock import patch as Patch def skipDueToHang(cls): return unittest.skipIf( greentest.PYPY3 and greentest.RUNNING_ON_CI, "SKIPPED: Timeout on PyPy3 on Travis" )(cls) class Counter(object): # A trivial mutable counter. 
def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % ( self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. %d tasks are running' % ( self.name, self.nrunning.get())) @skipDueToHang class ThreadTests(unittest.TestCase): maxDiff = None # Create a bunch of threads, let each do some work, wait until all are # done. def test_various_ops(self): # This takes about n/3 seconds to run (about n/3 clumps of tasks, # times about 1 second per clump). NUMTASKS = 10 # no more than 3 of the 10 can run at once sema = threading.BoundedSemaphore(value=3) mutex = threading.RLock() numrunning = Counter() threads = [] initial_regex = r'' if sys.version_info[:2] < (3, 13): # prior to 3.13, they distinguished the initial state from # the stopped state. initial_regex = r'' for i in range(NUMTASKS): t = TestThread("" % i, self, sema, mutex, numrunning) threads.append(t) # pylint:disable-next=attribute-defined-outside-init t.daemon = False # Under PYPY we get daemon by default? self.assertIsNone(t.ident) self.assertFalse(t.daemon) self.assertTrue(re.match(initial_regex, repr(t)), repr(t)) t.start() if verbose: print('waiting for all tasks to complete') for t in threads: t.join(NUMTASKS) self.assertFalse(t.is_alive(), t.__dict__) if hasattr(t, 'ident'): self.assertNotEqual(t.ident, 0) self.assertFalse(t.ident is None) self.assertTrue(re.match(r'', repr(t))) if verbose: print('all tasks done') self.assertEqual(numrunning.get(), 0) def test_ident_of_no_threading_threads(self): # The ident still must work for the main thread and dummy threads, # as must the repr and str. t = threading.current_thread() self.assertFalse(t.ident is None) str(t) repr(t) def f(): t = threading.current_thread() ident.append(t.ident) str(t) repr(t) done.set() done = threading.Event() ident = [] thread.start_new_thread(f, ()) done.wait() self.assertFalse(ident[0] is None) # Kill the "immortal" _DummyThread del threading._active[ident[0]] # run with a small(ish) thread stack size (256kB) def test_various_ops_small_stack(self): if verbose: print('with 256kB thread stack size...') try: threading.stack_size(262144) except thread.error: if verbose: print('platform does not support changing thread stack size') return self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1MB) def test_various_ops_large_stack(self): if verbose: print('with 1MB thread stack size...') try: threading.stack_size(0x100000) except thread.error: if verbose: print('platform does not support changing thread stack size') return self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. def f(mutex): # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. 
threading.current_thread() mutex.release() mutex = threading.Lock() mutex.acquire() tid = thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() self.assertIn(tid, threading._active) self.assertIsInstance(threading._active[tid], threading._DummyThread) del threading._active[tid] # in gevent, we actually clean up threading._active, but it's not happended there yet # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def SKIP_test_PyThreadState_SetAsyncExc(self): try: import ctypes except ImportError: if verbose: print("test_PyThreadState_SetAsyncExc can't import ctypes") return # can't do anything set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): id = None finished = False def run(self): self.id = thread.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() # pylint:disable-next=attribute-defined-outside-init t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. if verbose: print(" trying nonsensical thread id") result = set_async_exc(ctypes.c_long(-1), exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") worker_started.wait() if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(ctypes.c_long(t.id), exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=10) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. def fail_new_thread(*_args, **_kw): raise thread.error() if hasattr(threading, '_start_new_thread'): patcher = Patch.object(threading, '_start_new_thread', new=fail_new_thread) else: # 3.13 or later patcher = Patch.object(threading, '_start_joinable_thread', new=fail_new_thread) self.addCleanup(patcher.stop) patcher.start() t = threading.Thread(target=lambda: None) self.assertRaises(thread.error, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") def test_finalize_runnning_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. 
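        # In the embedded script below, a background thread creates an
        # object whose __del__ calls PyGILState_Ensure/PyGILState_Release
        # and is still running (still holding that object) when the main
        # thread exits with code 42; the assertion at the end only checks
        # for that exit code.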
try: import ctypes getattr(ctypes, 'pythonapi') # not available on PyPy getattr(ctypes.pythonapi, 'PyGILState_Ensure') # not available on PyPy3 except (ImportError, AttributeError): if verbose: print("test_finalize_with_runnning_thread can't import ctypes") return # can't do anything del ctypes # pyflakes fix import subprocess rc = subprocess.call([sys.executable, "-W", "ignore", "-c", """if 1: %s import ctypes, sys, time try: import thread except ImportError: import _thread as thread # Py3 # This lock is used as a simple event variable. ready = thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. sys.exit(42) """ % setup_3]) self.assertEqual(rc, 42) @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown import subprocess script = """if 1: %s import threading from time import sleep def child(): sleep(0.3) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is: %%s.%%s" %% (sleep.__module__, sleep.__name__)) threading.Thread(target=child).start() raise SystemExit """ % setup_4 p = subprocess.Popen([sys.executable, "-W", "ignore", "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() stdout = stdout.strip() stdout = stdout.decode('utf-8') stderr = stderr.decode('utf-8') self.assertEqual( 'Woke up, sleep function is: gevent.hub.sleep', stdout) # On Python 2, importing pkg_resources tends to result in some 'ImportWarning' # being printed to stderr about packages missing __init__.py; the -W ignore is... # ignored. # self.assertEqual(stderr, "") @greentest.skipIf( not(hasattr(sys, 'getcheckinterval')), "Needs sys.getcheckinterval" ) def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. enum = threading.enumerate import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) # get/set checkinterval are deprecated in Python 3, # and removed in Python 3.9 old_interval = sys.getcheckinterval() # pylint:disable=no-member try: for i in xrange(1, 100): # Try a couple times at each thread-switching interval # to get more interleavings. sys.setcheckinterval(i // 5) # pylint:disable=no-member t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertFalse(t in l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setcheckinterval(old_interval) # pylint:disable=no-member if not hasattr(sys, 'pypy_version_info'): def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes. 
self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'_yet_another': self}) self.thread.start() def _run(self, _other_ref, _yet_another): if self.should_raise: raise SystemExit cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) @skipDueToHang class ThreadJoinOnShutdown(unittest.TestCase): maxDiff = None def _run_and_join(self, script): script = """if 1: %s import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') \n""" % setup_3 + script import subprocess p = subprocess.Popen([sys.executable, "-W", "ignore", "-c", script], stdout=subprocess.PIPE) rc = p.wait() data = p.stdout.read().replace(b'\r', b'') p.stdout.close() data = data.decode('utf-8') self.assertEqual(data, "end of main\nend of thread\n") self.assertNotEqual(rc, 2, "interpreter was blocked") self.assertEqual(rc, 0, "Unexpected error") @greentest.skipOnLibuvOnPyPyOnWin("hangs") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.2) print('end of main') """ self._run_and_join(script) @greentest.skipOnPyPy3OnCI("Sometimes randomly times out") def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter import os if not hasattr(os, 'fork'): return script = """if 1: childpid = os.fork() if childpid != 0: os.waitpid(childpid, 0) sys.exit(0) t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. import os if not hasattr(os, 'fork'): return # Skip platforms with known problems forking from a worker thread. # See http://bugs.python.org/issue3863. # skip disable because I think the bug shouldn't apply to gevent -- denis #if sys.platform in ('freebsd4', 'freebsd5', 'freebsd6', 'os2emx'): # print(('Skipping test_3_join_in_forked_from_thread' # ' due to known OS bugs on'), sys.platform, file=sys.stderr) # return # A note on CFFI: Under Python 3, using CFFI tends to initialize the GIL, # whether or not we spawn any actual threads. Now, os.fork() calls # PyEval_ReInitThreads, which only does any work of the GIL has been taken. # One of the things it does is call threading._after_fork to reset # some thread state, which causes the main thread (threading._main_thread) # to be reset to the current thread---which for Python >= 3.4 happens # to be our version of thread, gevent.threading.Thread, which doesn't # initialize the _tstate_lock ivar. This causes threading._shutdown to crash # with an AssertionError and this test to fail. We hack around this by # making sure _after_fork is not called in the child process. 
# XXX: Figure out how to really fix that. script = """if 1: main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: os.waitpid(childpid, 0) sys.exit(0) t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() import sys if sys.version_info[:2] >= (3, 7) or (sys.version_info[:2] >= (3, 5) and hasattr(sys, 'pypy_version_info') and sys.platform != 'darwin'): w.join() """ # In PyPy3 5.8.0, if we don't wait on this top-level "thread", 'w', # we never see "end of thread". It's not clear why, since that's being # done in a child of this process. Yet in normal CPython 3, waiting on this # causes the whole process to lock up (possibly because of some loop within # the interpreter waiting on thread locks, like the issue described in threading.py # for Python 3.4? in any case, it doesn't hang in Python 2.) This changed in # 3.7a1 and waiting on it is again necessary and doesn't hang. # PyPy3 5.10.1 is back to the "old" cpython behaviour, and waiting on it # causes the whole process to hang, but apparently only on OS X---linux was fine without it self._run_and_join(script) @skipDueToHang class ThreadingExceptionTests(unittest.TestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. # pylint:disable=bad-thread-instantiation def test_start_thread_again(self): thread_ = threading.Thread() thread_.start() self.assertRaises(RuntimeError, thread_.start) def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join) def test_joining_inactive_thread(self): thread_ = threading.Thread() self.assertRaises(RuntimeError, thread_.join) def test_daemonize_active_thread(self): thread_ = threading.Thread() thread_.start() self.assertRaises(RuntimeError, setattr, thread_, "daemon", True) @skipDueToHang class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) @skipDueToHang class RLockTests(lock_tests.RLockTests): locktype = staticmethod(threading.RLock) @skipDueToHang class NativeRLockTests(lock_tests.RLockTests): # See comments at the top of the file for the difference # between this and RLockTests, and why they both matter locktype = staticmethod(threading.NativeRLock) @skipDueToHang class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) @skipDueToHang class ConditionAsRLockTests(lock_tests.RLockTests): # An Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) @skipDueToHang class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) @skipDueToHang class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) @skipDueToHang class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) if __name__ == "__main__": greentest.main() gevent-24.11.1/src/gevent/tests/test__threading_before_monkey.py000066400000000000000000000013121471441230600250110ustar00rootroot00000000000000# If stdlib threading is imported *BEFORE* monkey patching, # we can still get the current (main) thread, and it's not a DummyThread. 
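# (Contrast this with test__threading_monkey_in_thread.py in this directory,
# where the patching happens from a secondary native thread and that thread
# is afterwards reported as a dummy thread.)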
import threading from gevent import monkey monkey.patch_all() # pragma: testrunner-no-monkey-combine import gevent.testing as greentest class Test(greentest.TestCase): def test_main_thread(self): current = threading.current_thread() self.assertFalse(isinstance(current, threading._DummyThread)) self.assertTrue(isinstance(current, monkey.get_original('threading', 'Thread'))) # in 3.4, if the patch is incorrectly done, getting the repr # of the thread fails repr(current) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__threading_fork_from_dummy.py000066400000000000000000000042051471441230600253700ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests for `issue 2020 `_: 3.11.8 and 3.12.2 try to swizzle ``__class__`` of the dummy thread found when forking from inside a greenlet. """ from gevent import monkey; monkey.patch_all() # pragma: testrunner-no-monkey-combine import unittest import gevent.testing as greentest import tempfile @greentest.skipOnWindows("Uses os.fork") class Test(greentest.TestCase): def test_fork_from_dummy_thread(self): import os import threading import contextlib import gevent from gevent.threading import _DummyThread if not _DummyThread._NEEDS_CLASS_FORK_FIXUP: self.skipTest('No patch need be applied') def do_it(out): # Be sure we've put the DummyThread in the threading # maps assert isinstance(threading.current_thread(), threading._DummyThread) with open(out, 'wt', encoding='utf-8') as f: with contextlib.redirect_stderr(f): r = os.fork() if r == 0: # In the child. # Our stderr redirect above will capture the # "Exception ignored in _after_fork()" that the interpreter # prints; actually printing the main_thread will result in # raising an exception if our patch doesn't work. main = threading.main_thread() print(main, file=f) print(type(main), file=f) return r with tempfile.NamedTemporaryFile('w+t', suffix='.gevent_threading.txt') as tf: glet = gevent.spawn(do_it, tf.name) glet.join() pid = glet.get() if pid == 0: # Dump the child process quickly os._exit(0) os.waitpid(pid, 0) tf.seek(0, 0) contents = tf.read() self.assertIn("", contents) self.assertNotIn("_DummyThread", contents) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/gevent/tests/test__threading_holding_lock_while_monkey.py000066400000000000000000000006331471441230600274000ustar00rootroot00000000000000from gevent import monkey import threading # Make sure that we can patch gevent while holding # a threading lock. Under Python2, where RLock is implemented # in python code, this used to throw RuntimeErro("Cannot release un-acquired lock") # See https://github.com/gevent/gevent/issues/615 # pylint:disable=useless-with-lock with threading.RLock(): monkey.patch_all() # pragma: testrunner-no-monkey-combine gevent-24.11.1/src/gevent/tests/test__threading_monkey_in_thread.py000066400000000000000000000044761471441230600255220ustar00rootroot00000000000000# We can monkey-patch in a thread, but things don't work as expected. 
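# Concretely, as the test below asserts: after patch_all() runs from a
# secondary native thread, threading.current_thread() in that thread returns
# a (new) dummy thread object, and gevent warns that some APIs will not be
# available and that a KeyError may be printed at shutdown.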
from __future__ import print_function import threading from gevent import monkey import gevent.testing as greentest class Test(greentest.TestCase): @greentest.ignores_leakcheck # can't be run multiple times def test_patch_in_thread(self): all_warnings = [] try: get_ident = threading.get_ident except AttributeError: get_ident = threading._get_ident def process_warnings(warnings): all_warnings.extend(warnings) monkey._process_warnings = process_warnings current = threading.current_thread() current_id = get_ident() def target(): tcurrent = threading.current_thread() monkey.patch_all() # pragma: testrunner-no-monkey-combine tcurrent2 = threading.current_thread() self.assertIsNot(tcurrent, current) # We get a dummy thread now self.assertIsNot(tcurrent, tcurrent2) thread = threading.Thread(target=target) thread.start() try: thread.join() except: # pylint:disable=bare-except # XXX: This can raise LoopExit in some cases. greentest.reraiseFlakyTestRaceCondition() self.assertNotIsInstance(current, threading._DummyThread) self.assertIsInstance(current, monkey.get_original('threading', 'Thread')) # We generated some warnings if greentest.PY3: self.assertEqual( all_warnings, ['Monkey-patching outside the main native thread. Some APIs will not be ' 'available. Expect a KeyError to be printed at shutdown.', 'Monkey-patching not on the main thread; threading.main_thread().join() ' 'will hang from a greenlet']) else: self.assertEqual( all_warnings, ['Monkey-patching outside the main native thread. Some APIs will not be ' 'available. Expect a KeyError to be printed at shutdown.']) # Manual clean up so we don't get a KeyError del threading._active[current_id] threading._active[(getattr(threading, 'get_ident', None) or threading._get_ident)()] = current if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test__threading_native_before_monkey.py000066400000000000000000000041551471441230600263670ustar00rootroot00000000000000# If stdlib threading is imported *BEFORE* monkey patching, *and* # there is a native thread created, we can still get the current # (main) thread, and it's not a DummyThread. 
# Joining the native thread also does not fail import threading from time import sleep as time_sleep import gevent.testing as greentest class NativeThread(threading.Thread): do_run = True def run(self): while self.do_run: time_sleep(0.1) def stop(self, timeout=None): self.do_run = False self.join(timeout=timeout) native_thread = None monkey = None class Test(greentest.TestCase): @classmethod def tearDownClass(cls): global native_thread if native_thread is not None: native_thread.stop(1) native_thread = None def test_main_thread(self): current = threading.current_thread() self.assertNotIsInstance(current, threading._DummyThread) # pylint:disable-next=used-before-assignment self.assertIsInstance(current, monkey.get_original('threading', 'Thread')) # in 3.4, if the patch is incorrectly done, getting the repr # of the thread fails repr(current) if hasattr(threading, 'main_thread'): # py 3.4 self.assertEqual(threading.current_thread(), threading.main_thread()) @greentest.ignores_leakcheck # because it can't be run multiple times def test_join_native_thread(self): if native_thread is None or not native_thread.do_run: # pragma: no cover self.skipTest("native_thread already closed") self.assertTrue(native_thread.is_alive()) native_thread.stop(timeout=1) self.assertFalse(native_thread.is_alive()) # again, idempotent native_thread.stop() self.assertFalse(native_thread.is_alive()) if __name__ == '__main__': native_thread = NativeThread() native_thread.daemon = True # pylint:disable=attribute-defined-outside-init native_thread.start() # Only patch after we're running from gevent import monkey monkey.patch_all() # pragma: testrunner-no-monkey-combine greentest.main() gevent-24.11.1/src/gevent/tests/test__threading_no_monkey.py000066400000000000000000000014461471441230600241730ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests for ``gevent.threading`` that DO NOT monkey patch. This allows easy comparison with the standard module. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import threading from gevent import threading as gthreading from gevent import testing class TestDummyThread(testing.TestCase): def test_name(self): # Matches the stdlib. 
# https://github.com/gevent/gevent/issues/1659 std_dummy = threading._DummyThread() gvt_dummy = gthreading._DummyThread() self.assertIsNot(type(std_dummy), type(gvt_dummy)) self.assertStartsWith(std_dummy.name, 'Dummy-') self.assertStartsWith(gvt_dummy.name, 'Dummy-') if __name__ == '__main__': testing.main() gevent-24.11.1/src/gevent/tests/test__threading_patched_local.py000066400000000000000000000012431471441230600247520ustar00rootroot00000000000000from gevent import monkey; monkey.patch_all() import threading localdata = threading.local() localdata.x = "hello" assert localdata.x == 'hello' success = [] def func(): try: getattr(localdata, 'x') raise AssertionError('localdata.x must raise AttributeError') except AttributeError: pass # We really want to check this is exactly an empty dict, # not just anything falsey # pylint:disable=use-implicit-booleaness-not-comparison assert localdata.__dict__ == {}, localdata.__dict__ success.append(1) t = threading.Thread(None, func) t.start() t.join() assert success == [1], 'test failed' assert localdata.x == 'hello' gevent-24.11.1/src/gevent/tests/test__threading_vs_settrace.py000066400000000000000000000116371471441230600245220ustar00rootroot00000000000000from __future__ import print_function import sys import subprocess import unittest from gevent.thread import allocate_lock import gevent.testing as greentest script = """ from gevent import monkey monkey.patch_all() # pragma: testrunner-no-monkey-combine import sys, os, threading, time # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): time.sleep(0.2) sys.stdout.write('..program blocked; aborting!') sys.stdout.flush() os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() def trace(frame, event, arg): if threading is not None: threading.current_thread() return trace def doit(): sys.stdout.write("..thread started..") def test1(): t = threading.Thread(target=doit) t.start() t.join() sys.settrace(None) sys.settrace(trace) if len(sys.argv) > 1: test1() sys.stdout.write("..finishing..") """ class TestTrace(unittest.TestCase): @greentest.skipOnPurePython("Locks can be traced in Pure Python") def test_untraceable_lock(self): # Untraceable locks were part of the solution to https://bugs.python.org/issue1733757 # which details a deadlock that could happen if a trace function invoked # threading.currentThread at shutdown time---the cleanup lock would be held # by the VM, and calling currentThread would try to acquire it again. The interpreter # changed in 2.6 to use the `with` statement (https://hg.python.org/cpython/rev/76f577a9ec03/), # which apparently doesn't trace in quite the same way. 
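        # The check below: install a trace function while a gevent lock is
        # held, then verify that releasing the lock (leaving the `with`
        # block) produced no trace events at all.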
if hasattr(sys, 'gettrace'): old = sys.gettrace() else: old = None lst = [] try: def trace(frame, ev, _arg): lst.append((frame.f_code.co_filename, frame.f_lineno, ev)) print("TRACE: %s:%s %s" % lst[-1]) return trace with allocate_lock(): sys.settrace(trace) finally: sys.settrace(old) self.assertEqual(lst, [], "trace not empty") @greentest.skipOnPurePython("Locks can be traced in Pure Python") def test_untraceable_lock_uses_different_lock(self): if hasattr(sys, 'gettrace'): old = sys.gettrace() else: old = None lst = [] # we should be able to use unrelated locks from within the trace function l = allocate_lock() try: def trace(frame, ev, _arg): with l: lst.append((frame.f_code.co_filename, frame.f_lineno, ev)) # print("TRACE: %s:%s %s" % lst[-1]) return trace l2 = allocate_lock() sys.settrace(trace) # Separate functions, not the C-implemented `with` so the trace # function gets a crack at them l2.acquire() l2.release() finally: sys.settrace(old) # Have an assert so that we know if we miscompile self.assertTrue(lst, "should not compile on pypy") @greentest.skipOnPurePython("Locks can be traced in Pure Python") def test_untraceable_lock_uses_same_lock(self): from gevent.hub import LoopExit if hasattr(sys, 'gettrace'): old = sys.gettrace() else: old = None lst = [] e = None # we should not be able to use the same lock from within the trace function # because it's over acquired but instead of deadlocking it raises an exception l = allocate_lock() try: def trace(frame, ev, _arg): with l: lst.append((frame.f_code.co_filename, frame.f_lineno, ev)) return trace sys.settrace(trace) # Separate functions, not the C-implemented `with` so the trace # function gets a crack at them l.acquire() except LoopExit as ex: e = ex finally: sys.settrace(old) # Have an assert so that we know if we miscompile self.assertTrue(lst, "should not compile on pypy") self.assertTrue(isinstance(e, LoopExit)) def run_script(self, more_args=()): if ( greentest.PYPY3 and greentest.RUNNING_ON_APPVEYOR and sys.version_info[:2] == (3, 7) ): # Somehow launching the subprocess fails with exit code 1, and # produces no output. It's not clear why. 
self.skipTest("Known to hang on AppVeyor") args = [sys.executable, "-u", "-c", script] args.extend(more_args) rc = subprocess.call(args) self.assertNotEqual(rc, 2, "interpreter was blocked") self.assertEqual(rc, 0, "Unexpected error") def test_finalize_with_trace(self): self.run_script() def test_bootstrap_inner_with_trace(self): self.run_script(["1"]) if __name__ == "__main__": greentest.main() gevent-24.11.1/src/gevent/tests/test__threadpool.py000066400000000000000000000603711471441230600223130ustar00rootroot00000000000000from __future__ import print_function from time import time, sleep import contextlib import random import weakref import gc import gevent.threadpool from gevent.threadpool import ThreadPool import gevent from gevent.exceptions import InvalidThreadUseError import gevent.testing as greentest from gevent.testing import ExpectedException from gevent.testing import PYPY # pylint:disable=too-many-ancestors @contextlib.contextmanager def disabled_gc(): was_enabled = gc.isenabled() gc.disable() try: yield finally: if was_enabled: gc.enable() class TestCase(greentest.TestCase): # These generally need more time __timeout__ = greentest.LARGE_TIMEOUT pool = None _all_pools = () ClassUnderTest = ThreadPool def _FUT(self): return self.ClassUnderTest def _makeOne(self, maxsize, create_all_worker_threads=greentest.RUN_LEAKCHECKS): self.pool = pool = self._FUT()(maxsize) self._all_pools += (pool,) if create_all_worker_threads: # Max size to help eliminate false positives self.pool.size = maxsize return pool def cleanup(self): self.pool = None all_pools, self._all_pools = self._all_pools, () for pool in all_pools: kill = getattr(pool, 'kill', None) or getattr(pool, 'shutdown') kill() del kill if greentest.RUN_LEAKCHECKS: # Each worker thread created a greenlet object and switched to it. # It's a custom subclass, but even if it's not, it appears that # the root greenlet for the new thread sticks around until there's a # gc. Simply calling 'getcurrent()' is enough to "leak" a greenlet.greenlet # and a weakref. for _ in range(3): gc.collect() class PoolBasicTests(TestCase): def test_execute_async(self): pool = self._makeOne(2) r = [] first = pool.spawn(r.append, 1) first.get() self.assertEqual(r, [1]) gevent.sleep(0) pool.apply_async(r.append, (2, )) self.assertEqual(r, [1]) pool.apply_async(r.append, (3, )) self.assertEqual(r, [1]) pool.apply_async(r.append, (4, )) self.assertEqual(r, [1]) gevent.sleep(0.01) self.assertEqualFlakyRaceCondition(sorted(r), [1, 2, 3, 4]) def test_apply(self): pool = self._makeOne(1) result = pool.apply(lambda a: ('foo', a), (1, )) self.assertEqual(result, ('foo', 1)) def test_apply_raises(self): pool = self._makeOne(1) def raiser(): raise ExpectedException() with self.assertRaises(ExpectedException): pool.apply(raiser) # Don't let the metaclass automatically force any error # that reaches the hub from a spawned greenlet to become # fatal; that defeats the point of the test. 
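    # (Setting ``error_fatal`` on the test method is how a single test opts
    # out of that behavior; it can also be set class-wide, as
    # test__systemerror.py in this directory does.)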
test_apply_raises.error_fatal = False def test_init_valueerror(self): self.switch_expected = False with self.assertRaises(ValueError): self._makeOne(-1) # # tests from standard library test/test_multiprocessing.py class TimingWrapper(object): def __init__(self, the_func): self.func = the_func self.elapsed = None def __call__(self, *args, **kwds): t = time() try: return self.func(*args, **kwds) finally: self.elapsed = time() - t def sqr(x, wait=0.0): sleep(wait) return x * x def sqr_random_sleep(x): sleep(random.random() * 0.1) return x * x TIMEOUT1, TIMEOUT2, TIMEOUT3 = 0.082, 0.035, 0.14 class _AbstractPoolTest(TestCase): size = 1 MAP_IS_GEN = False def setUp(self): greentest.TestCase.setUp(self) self._makeOne(self.size) @greentest.ignores_leakcheck def test_map(self): pmap = self.pool.map if self.MAP_IS_GEN: pmap = lambda f, i: list(self.pool.map(f, i)) self.assertEqual(pmap(sqr, range(10)), list(map(sqr, range(10)))) self.assertEqual(pmap(sqr, range(100)), list(map(sqr, range(100)))) self.pool.kill() del self.pool del pmap SMALL_RANGE = 10 LARGE_RANGE = 1000 if (greentest.PYPY and (greentest.WIN or greentest.RUN_COVERAGE)) or greentest.RUN_LEAKCHECKS: # PyPy 5.10 is *really* slow at spawning or switching between # threads (especially on Windows or when coverage is enabled) Tests that happen # instantaneously on other platforms time out due to the overhead. # Leakchecks also take much longer due to all the calls into the GC, # most especially on Python 3 LARGE_RANGE = 50 class TestPool(_AbstractPoolTest): def test_greenlet_class(self): from greenlet import getcurrent from gevent.threadpool import _WorkerGreenlet worker_greenlet = self.pool.apply(getcurrent) self.assertIsInstance(worker_greenlet, _WorkerGreenlet) r = repr(worker_greenlet) self.assertIn('ThreadPoolWorker', r) self.assertIn('thread_ident', r) self.assertIn('hub=', r) from gevent.util import format_run_info info = '\n'.join(format_run_info()) self.assertIn("") value = hexobj.sub('X', value) value = value.replace('epoll', 'select') value = value.replace('select', 'default') value = value.replace('test__util', '__main__') value = re.compile(' fileno=.').sub('', value) value = value.replace('ref=-1', 'ref=0') value = value.replace("type.current_tree", 'GreenletTree.current_tree') value = value.replace('gevent.tests.__main__.MyLocal', '__main__.MyLocal') # The repr in CPython greenlet 1.0a1 added extra info value = value.replace('(otid=X) ', '') value = value.replace(' dead>', '>') value = value.replace(' current active started main>', '>') return value @greentest.ignores_leakcheck def test_tree(self): with gevent.get_hub().ignoring_expected_test_error(): tree, str_tree, tree_format = self._build_tree() self.assertTrue(tree.root) self.assertNotIn('Parent', str_tree) # Simple output value = self._normalize_tree_format(tree_format) expected = """\ : Parent: None : Greenlet Locals: : Local at X : {'foo': 42} +--- : Parent: +--- ; finished with value | +--- ; finished with exception ExpectedException() : Parent: +--- ; finished with value | +--- ; finished with exception ExpectedException() : Parent: +--- ; finished with value : Spawn Tree Locals : {'stl': 'STL'} | +--- ; finished with value | +--- ; finished with exception ExpectedException() : Parent: +--- >>; finished with value """.strip() self.assertEqual(expected, value) @greentest.ignores_leakcheck def test_tree_no_track(self): gevent.config.track_greenlet_tree = False with gevent.get_hub().ignoring_expected_test_error(): self._build_tree() @greentest.ignores_leakcheck def 
test_forest_fake_parent(self): from greenlet import greenlet as RawGreenlet def t4(): # Ignore this one, make the child the parent, # and don't be a child of the hub. c = RawGreenlet(util.GreenletTree.current_tree) c.parent.greenlet_tree_is_ignored = True c.greenlet_tree_is_root = True return c.switch() g = RawGreenlet(t4) tree = g.switch() tree_format = tree.format(details={'running_stacks': False, 'spawning_stacks': False}) value = self._normalize_tree_format(tree_format) expected = """\ ; not running : Parent: """.strip() self.assertEqual(expected, value) class TestAssertSwitches(unittest.TestCase): def test_time_sleep(self): # A real blocking function from time import sleep # No time given, we detect the failure to switch immediately with self.assertRaises(util._FailedToSwitch) as exc: with util.assert_switches(): sleep(0.001) message = str(exc.exception) self.assertIn('To any greenlet in', message) # Supply a max blocking allowed and exceed it with self.assertRaises(util._FailedToSwitch): with util.assert_switches(0.001): sleep(0.1) # Supply a max blocking allowed, and exit before that happens, # but don't switch to the hub as requested with self.assertRaises(util._FailedToSwitch) as exc: with util.assert_switches(0.001, hub_only=True): sleep(0) message = str(exc.exception) self.assertIn('To the hub in', message) self.assertIn('(max allowed 0.0010 seconds)', message) # Supply a max blocking allowed, and exit before that happens, # and allow any switch (or no switch). # Note that we need to use a relatively long duration; # sleep(0) on Windows can actually take a substantial amount of time # sometimes (more than 0.001s) with util.assert_switches(1.0, hub_only=False): sleep(0) def test_no_switches_no_function(self): # No blocking time given, no switch performed: exception with self.assertRaises(util._FailedToSwitch): with util.assert_switches(): pass # blocking time given, for all greenlets, no switch performed: nothing with util.assert_switches(max_blocking_time=1, hub_only=False): pass def test_exception_not_supressed(self): with self.assertRaises(NameError): with util.assert_switches(): raise NameError() def test_nested(self): from greenlet import gettrace with util.assert_switches() as outer: self.assertEqual(gettrace(), outer.tracer) self.assertIsNotNone(outer.tracer.active_greenlet) with util.assert_switches() as inner: self.assertEqual(gettrace(), inner.tracer) self.assertEqual(inner.tracer.previous_trace_function, outer.tracer) inner.tracer('switch', (self, self)) self.assertIs(self, inner.tracer.active_greenlet) self.assertIs(self, outer.tracer.active_greenlet) self.assertEqual(gettrace(), outer.tracer) class TestFuncs(greentest.TestCase): def test_clear_stack_frames(self): import inspect import threading completed = [] def do_it(): util.clear_stack_frames(inspect.currentframe()) completed.append(1) t = threading.Thread(target=do_it) t.start() t.join(10) self.assertEqual(completed, [1]) if __name__ == '__main__': greentest.main() gevent-24.11.1/src/gevent/tests/test_server.crt000066400000000000000000000034211471441230600214520ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIFCzCCAvOgAwIBAgIUFL7iwYYAfAarNFw2C0Q1zEjC4yUwDQYJKoZIhvcNAQEL BQAwFDESMBAGA1UEAwwJbG9jYWxob3N0MCAXDTIxMDcwODExMzAwN1oYDzIxMjEw NjE0MTEzMDA3WjAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwggIiMA0GCSqGSIb3DQEB AQUAA4ICDwAwggIKAoICAQC2v+TV2yx9DK78LWCLhBKl1g0AcdWi4H9LIbc30RbO 5LOnhL+FxPE9vRU1nD5Z01o7zgqJr8boNqU1oOxhOAyUZkSZwd6SeJHQvLRZQDRI ov3QCL4nYb53T3usSlXw5MuxUql/OwvLcvPO/8FBXKmIBpfOHHxfAwA7+BU8f8ZF 
aDB02sNnLlAXZc9xB1FkDNAZnM9fjWAAJbtfRcJO0l7zq8AQ/EdO6YVK6vhScf/I ovKcMbDV3GPt8YUSlqLAuIv3rFPclDvpDdp+c96OXA3wK6YhsFBvYmzgRnoVfX8V FQdp4vlcXsqEh9tPhvDmWvfU2xldbX50I1S9/TIucIxrksY7W9787p4lGEjJTfkF mfo/jdNcY7GE/sHj7aVbbK753ZEWV3j7ZbO1llweI5m6Qk4nPwd/H2CHIKqRbitZ Qg7ymGAAoCmbbXnrKI4UUrMysQgtuFYUMKstIMYO8bLAF5npVoVuMg10XxNKgBYC o0+D/RUaTM2rQRtfcwXeIFXNxDuhvblwTTrW2xG+Z2xVENeFVFAjgqEa4YPdjtxO A3mlldtrM5lLClvCLvcusw79RMYShC3NwMNmVTN9wdX1Vgmcf401dlXN4LCqIj51 yIfhB7LD6ll3eAM/qK5gwPPvhz330zfWax8f0lzLRQ1r7l9IY/Y91n7KFRLDy9cD IQIDAQABo1MwUTAdBgNVHQ4EFgQUGSmTQfHLd9mwvtfNtCJykD8F5jkwHwYDVR0j BBgwFoAUGSmTQfHLd9mwvtfNtCJykD8F5jkwDwYDVR0TAQH/BAUwAwEB/zANBgkq hkiG9w0BAQsFAAOCAgEAmeKcbwDzSnZhL9H4oPEzOslTEazn1vRGNTkDabGzHlO1 b56Mw36fOKW9cPSS9By1HiB3iQipUZ6AQ9pIIBv75Z0yNPxsIqhTpDACWEx8jk/k rhzCIMIoxURfBAKQ3Oml7U++EagyBZgQAHjGEROuRE++kUDeEy0SwIWiXmEX1OZ4 tBbaW+Q7Gc+CPHVouOZUq8Ogt9zI98rIiT5VFPm2hBZrcguoqmqSN533HJTJVimi vCBtkRK3YfsMsZYO0jmj8TWsTAZly3wwgMkjV4g5hLtrYOHU6sm8H32QjDcbahLG 7JCgQR5WCgfs/u2RHFysNwURf/Hq+9ieCEtSQrk4u19YvkwpZxVD9xUONaGNZvPR ottciZKo4pGShJADtUTnkKJEOYLTgg3jSUJPQ55AzVwAJTudLEyUGPwJL1lJ4nFu WDSSiZXqoAaD1j2CNGhkzWBT1mJEcvPTuKxDNwYzhF44B0KQSeS3vJtMibELCOZ8 a4WuR4xFe6fleL4fqHYpjI5IWYUDfFRC8lqvWdJl4oCSMH+s0B/m/FWme3lt+7/K Z0vOk3uvi09OLQZTTuGgcSVPoO+zzJOuhLzTdO+FzlbBHlZax/iNZQ1GYZ0gk6wY 9+gxqdVZQXy4UIhjHV2TbW8OlhVyRC1O+YN5pjyD884aYLD+JrxZXQtSlNlurcw= -----END CERTIFICATE----- gevent-24.11.1/src/gevent/tests/test_server.key000066400000000000000000000063041471441230600214550ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIJQQIBADANBgkqhkiG9w0BAQEFAASCCSswggknAgEAAoICAQC2v+TV2yx9DK78 LWCLhBKl1g0AcdWi4H9LIbc30RbO5LOnhL+FxPE9vRU1nD5Z01o7zgqJr8boNqU1 oOxhOAyUZkSZwd6SeJHQvLRZQDRIov3QCL4nYb53T3usSlXw5MuxUql/OwvLcvPO /8FBXKmIBpfOHHxfAwA7+BU8f8ZFaDB02sNnLlAXZc9xB1FkDNAZnM9fjWAAJbtf RcJO0l7zq8AQ/EdO6YVK6vhScf/IovKcMbDV3GPt8YUSlqLAuIv3rFPclDvpDdp+ c96OXA3wK6YhsFBvYmzgRnoVfX8VFQdp4vlcXsqEh9tPhvDmWvfU2xldbX50I1S9 /TIucIxrksY7W9787p4lGEjJTfkFmfo/jdNcY7GE/sHj7aVbbK753ZEWV3j7ZbO1 llweI5m6Qk4nPwd/H2CHIKqRbitZQg7ymGAAoCmbbXnrKI4UUrMysQgtuFYUMKst IMYO8bLAF5npVoVuMg10XxNKgBYCo0+D/RUaTM2rQRtfcwXeIFXNxDuhvblwTTrW 2xG+Z2xVENeFVFAjgqEa4YPdjtxOA3mlldtrM5lLClvCLvcusw79RMYShC3NwMNm VTN9wdX1Vgmcf401dlXN4LCqIj51yIfhB7LD6ll3eAM/qK5gwPPvhz330zfWax8f 0lzLRQ1r7l9IY/Y91n7KFRLDy9cDIQIDAQABAoICAAml7+rqe1rOxJ5Dtwkmm+Vt e5o+aE0FFTNeQFIOE+owYNvDQmlJkIL17Jy79v6/DdCCfEPwp8uytt4x9MjdMKzV CWIkvh91hh1DGTJtFVWQZV4KWB+0JV4fMCRUeF0Tdz2RY6l38JN5Ki4PiqBsx/aK gpE7J8XMXsLLwjNDe7BGY+iHdDGKXGgf0+ffvwhNNN9lS/17dUoMs+u/vxZyPNkY hDdhWlJsOcFOznVr11k8YRql9PQVgqEZUzE8CrOqCpm022iV2uPe+14Zt/JEIehA JbE5ocV/qMfecKuZyI/QYGfSt9+MkZyVn5p/QVCoFNWEC77G/Rock2jEaVXSU1dz uxiU65WrqMdvcetZ6xzhUB/Bz3N5aevjwmFmPjMHzF4npw2xn2dejPkQ31YeIdOF a9Z1tWq8q/UHA2RoooM2hCMJjaIwcABSemCbuFw7ZXm3YUzUFMycav9RGcQ8Q/0h ZPZWo52eVWhQdvI7Xy+gsssBS2/bk+nPgdDDSdprt+IiB1WuL762xB5upcim+cUJ vrx7CiDo5Fh8kJhgvSHjBCON5A/l6eg8XPX56+MBA3t0frTTub3o0tubJlYSlClF nqoNHlXczd0vdtoMpSSaBj7N2GL3FtaWi0jbyagK0IndaouMQM8njBQoI0bTmvHg COfd4uDg4h23jgseqmbBAoIBAQDepq2qASQpQ38qeq586a/YoD4PtZOeapjRNfGj gnvQaSSouaLoq3dRSlKmZwE4nKnSnCKI4Ixje9Likfbdt4/I+hXODTzEy+WpmqlJ 1x4svF6MsB/YZ6r76koGK8/vgPO/w7xLGp5Xi2E5gTaH04o/PuUo/k7yx0FkSHoA EU7WDoriH/6sgkolUL6xsmq8ljq4kHidP2UfYwzsIwngTL+jFYznzTsr6CXsTdH9 I7ppvpS3xhFsdDp0YQyMGHdtvAdEeGz/m6cxxsnwAz0EhwN2qYJD9oNCP+uxk0zD d1QD/XUUcMUrUxyQ1EBn2wAmcj9yYNeMNmhZYz47CowHym0tAoIBAQDSHzwCpk/M 3h5dk0yMgyRMq+flwj1P+eDOrGpmk9VmdCcvIQs1ArbW0/VMQ+lQNXgTZ74o/ccT ABogeD1WOq3fh5PcU6wHAVD1GL1sZ6ZCP9jQXalxt0/1vooDu+LRDLNhcsFY0AJR 
QfPu37beaCFjlwFf5P0lEPcTTpXBfEaqSvjA2kCys2IMeiKce36GQTy2HJBIe0py Pj+cgxZ3/lg0vGV6SrnMXh5wPbWtsVnhQBilG7niQ15txSrgV5rUYUQPNEfIuDdS MVjH1USbjoNAMlwYJF5Kcel9fn6neHfWqvW3bregw598iCg0Y67KbJl4iFzOqumh lZUy1gD2P65FAoIBADA8P+dSs/jUjJoxVdft8JCntopEtiRdx5mbbCwWOqid/rkm 7molq4XK6jjum88d8ZSVCs5Ih2GOE9PN94N1HwtVUp//MikYWzrxLLe4iOr8LCei iGOjoeFNkpffqf6jGytyRjqnG6KvqXKB0cR/SbYF9DN7VLM4A6ysHvIgzcmGAQSY Fd5do56N7aIlmwYcLcCKW/cFIu030jbeKGeVePbl1k7poWYTtxOIkHOc5+e8yA9A M8ohLADGfadkLYtybsigpkyB9ijMfjcnHHL8pP1yH6yFnU4e9vrThI/cLDFpGZJC FBUcvlWKBiH5ygCKQ8CNxmSz7Mtguryjvk55xkkCggEAQuwRx+JCXkSMNU95vPLz t7u0oxfHQVabhBej18HT4MqzxC3pDNwtcaSWZtDmWVZ+ROfwx8t0ARgyOg8xseoE gMIElNLNYnnH2BgmFIW6jTUaj9qU4hP5UpJ6EJBhwCUkaLAM5oVxh4HS+EymSJWv tLFejbU37vtFRg/sYHB9bTVtnrakjoXVf5XSujYW6RmUBYh5Z6xk3Jf42Jdjq5oF a95pD5cHMBD17teoqoZm0vgAIW4AOREt3RZD/qnINUY5UAJdro8Fh5cR6KuDK2wr X2HqtQG4SkuXixGjsyEKQgO3ONH5iCll/Vq8O1tYSz5lbt83d9c1i/JBT6ybJ9LG ZQKCAQALlAI1Cd8swU1g5a9newXRHfkOYYlJ/CpKbvblBlc0oejEI6oJAD9ykZrE v3/6JMojI0J7ILjGj5a1eyGtleqY1JyrO5dy/djueaHHgLQNNYBkGWbdFN99XV9s iE1iuYthyVKXqMkbhxKcW8L1kyer/Z+o4I3LP4NfMzC9ibhPkjyg6JD2zBT1N49t 26SUApm2Icz54+HVVKHbimfVI6R9NqcVjO7TmQae5UjKeiI8xOiBcjDrtU9K5rj3 O2IOx7mAEc08Mz8ApLo9dnY0+dmPhprJPcpZl1haAvY3CF50iYTcABbAlk1nVfoo 0OV9kUaHy/EaY8/cPeMERFc/SVZi -----END PRIVATE KEY----- gevent-24.11.1/src/gevent/tests/tests_that_dont_do_leakchecks.txt000066400000000000000000000003371471441230600252040ustar00rootroot00000000000000test___monkey_patching.py test__monkey_ssl_warning.py test___monitor.py test__monkey_scope.py test__ares_timeout.py test__close_backend_fd.py test__hub_join.py test__hub_join_timeout.py test__issue112.py test__issue1864.py gevent-24.11.1/src/gevent/tests/tests_that_dont_monkeypatch.txt000066400000000000000000000012071471441230600247440ustar00rootroot00000000000000test___config.py test___ident.py test___monitor.py test__ares_timeout.py test__backdoor.py test__close_backend_fd.py test__events.py test__example_echoserver.py test__example_portforwarder.py test__example_udp_client.py test__example_wsgiserver.py test__example_wsgiserver_ssl.py test__example_webproxy.py test__examples.py test__getaddrinfo_import.py test__hub_join.py test__hub_join_timeout.py test__issue1864.py test__issue330.py test__iwait.py test__monkey_scope.py test__pywsgi.py test__server.py test__server_pywsgi.py test__socket_close.py test__socket_dns6.py test__socket_errors.py test__socket_send_memoryview.py test__socket_timeout.py gevent-24.11.1/src/gevent/tests/tests_that_dont_use_resolver.txt000066400000000000000000000061601471441230600251420ustar00rootroot00000000000000test__all__.py #uses socket test__api.py test__api_timeout.py test__ares_host_result.py test__ares_timeout.py # explicitly uses ares resolver # uses socket test__backdoor.py test__close_backend_fd.py test__core_async.py test__core_callback.py test__core_loop_run.py test__core.py test__core_stat.py test__core_timer.py test__core_watcher.py test__destroy.py # uses socket test__doctests.py test__environ.py test__event.py # uses socket test__example_echoserver.py # uses socket test__example_portforwarder.py # uses socket test__example_w*.py # uses bunch of things test__examples.py # uses socket test__example_udp_client.py # uses socket test__example_udp_server.py test__exc_info.py #test__execmodules.py test__fileobject.py # uses socket test__greenio.py test__GreenletExit.py test__greenlet.py test__greenletset.py # uses socket test__greenness.py test__hub_join.py test__hub_join_timeout.py # uses socket test__hub.py test__issue112.py test__issue1864.py 
test__joinall.py test__local.py test__loop_callback.py test__memleak.py # uses lots of things test___monkey_patching.py test__monkey.py test__order.py test__os.py test__pool.py # uses socket test__pywsgi.py test__queue.py test__monkey_queue.py # uses socket test__refcount.py test__select.py test__semaphore.py # uses socket test__server.py # test__server_pywsgi.py test__signal.py # uses socket test__socket_close.py # test__socket_dns6.py # test__socket_dns.py # test__socket_errors.py # test__socket.py # test__socket_ssl.py # test__socket_timeout.py test__subprocess_interrupted.py test__subprocess.py test__systemerror.py test__threading_2.py test__threading_patched_local.py test__threading_vs_settrace.py test__threadpool.py test__timeout.py test__compat.py test__core_fork.py test__doctests.py test__core_loop_run_sig_mod.py test__execmodules.py test__greenio.py test__greenness.py test__hub.py test__import_blocking_in_greenlet.py test__import_wait.py test__issue230.py test__issue330.py test__issue467.py test__issue6.py test__issue600.py test__issue607.py test__issue461_471.py test__monkey_builtins_future.py test__monkey_hub_in_thread.py test__monkey_logging.py test__monkey_multiple_imports.py test__monkey_scope.py test__monkey_selectors.py test__monkey_sigchld.py test__monkey_sigchld_2.py test__nondefaultloop.py test__monkey_sigchld_3.py test__real_greenlet.py test__refcount.py test__sleep0.py test__subprocess_poll.py test__threading.py test__threading_before_monkey.py test__threading_holding_lock_while_monkey.py test__threading_monkey_in_thread.py test__threading_native_before_monkey.py test__threadpool_executor_patched.py # monkey patched standard tests: test_queue.py test_select.py test_signal.py test_subprocess.py test_threading_local.py test_threading.py test_thread.py test_selectors.py test_timeout.py # test_asyncore probably does use the resolver, but only # implicitly for localhost, which is covered well enough # elsewhere that we don't need to spend the 20s (*2) test_asyncore.py test___config.py test__destroy_default_loop.py test__util.py test___ident.py test__issue639.py test__issue_728.py test__refcount_core.py test__api.py test__monitor.py test__events.py test__iwait.py gevent-24.11.1/src/gevent/tests/wrongcert.pem000066400000000000000000000035301471441230600211110ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXAIBAAKBgQC89ZNxjTgWgq7Z1g0tJ65w+k7lNAj5IgjLb155UkUrz0XsHDnH FlbsVUg2Xtk6+bo2UEYIzN7cIm5ImpmyW/2z0J1IDVDlvR2xJ659xrE0v5c2cB6T f9lnNTwpSoeK24Nd7Jwq4j9vk95fLrdqsBq0/KVlsCXeixS/CaqqduXfvwIDAQAB AoGAQFko4uyCgzfxr4Ezb4Mp5pN3Npqny5+Jey3r8EjSAX9Ogn+CNYgoBcdtFgbq 1yif/0sK7ohGBJU9FUCAwrqNBI9ZHB6rcy7dx+gULOmRBGckln1o5S1+smVdmOsW 7zUVLBVByKuNWqTYFlzfVd6s4iiXtAE2iHn3GCyYdlICwrECQQDhMQVxHd3EFbzg SFmJBTARlZ2GKA3c1g/h9/XbkEPQ9/RwI3vnjJ2RaSnjlfoLl8TOcf0uOGbOEyFe 19RvCLXjAkEA1s+UE5ziF+YVkW3WolDCQ2kQ5WG9+ccfNebfh6b67B7Ln5iG0Sbg ky9cjsO3jbMJQtlzAQnH1850oRD5Gi51dQJAIbHCDLDZU9Ok1TI+I2BhVuA6F666 lEZ7TeZaJSYq34OaUYUdrwG9OdqwZ9sy9LUav4ESzu2lhEQchCJrKMn23QJAReqs ZLHUeTjfXkVk7dHhWPWSlUZ6AhmIlA/AQ7Payg2/8wM/JkZEJEPvGVykms9iPUrv frADRr+hAGe43IewnQJBAJWKZllPgKuEBPwoEldHNS8nRu61D7HzxEzQ2xnfj+Nk 2fgf1MAzzTRsikfGENhVsVWeqOcijWb6g5gsyCmlRpc= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICsDCCAhmgAwIBAgIJAOqYOYFJfEEoMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQwHhcNMDgwNjI2MTgxNTUyWhcNMDkwNjI2MTgxNTUyWjBF MQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50 
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKB gQC89ZNxjTgWgq7Z1g0tJ65w+k7lNAj5IgjLb155UkUrz0XsHDnHFlbsVUg2Xtk6 +bo2UEYIzN7cIm5ImpmyW/2z0J1IDVDlvR2xJ659xrE0v5c2cB6Tf9lnNTwpSoeK 24Nd7Jwq4j9vk95fLrdqsBq0/KVlsCXeixS/CaqqduXfvwIDAQABo4GnMIGkMB0G A1UdDgQWBBTctMtI3EO9OjLI0x9Zo2ifkwIiNjB1BgNVHSMEbjBsgBTctMtI3EO9 OjLI0x9Zo2ifkwIiNqFJpEcwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgTClNvbWUt U3RhdGUxITAfBgNVBAoTGEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZIIJAOqYOYFJ fEEoMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEFBQADgYEAQwa7jya/DfhaDn7E usPkpgIX8WCL2B1SqnRTXEZfBPPVq/cUmFGyEVRVATySRuMwi8PXbVcOhXXuocA+ 43W+iIsD9pXapCZhhOerCq18TC1dWK98vLUsoK8PMjB6e5H/O8bqojv0EeC+fyCw eSHj5jpC8iZKjCHBn+mAi4cQ514= -----END CERTIFICATE----- gevent-24.11.1/src/gevent/thread.py000066400000000000000000000245321471441230600170600ustar00rootroot00000000000000""" Implementation of the standard :mod:`thread` module that spawns greenlets. .. note:: This module is a helper for :mod:`gevent.monkey` and is not intended to be used directly. For spawning greenlets in your applications, prefer higher level constructs like :class:`gevent.Greenlet` class or :func:`gevent.spawn`. """ import sys __implements__ = [ 'allocate_lock', 'get_ident', 'exit', 'LockType', 'stack_size', 'start_new_thread', '_local', ] + ([ 'start_joinable_thread', 'lock', '_ThreadHandle', '_make_thread_handle', ] if sys.version_info[:2] >= (3, 13) else [ ]) __imports__ = ['error'] import _thread as __thread__ # pylint:disable=import-error __target__ = '_thread' __imports__ += [ 'TIMEOUT_MAX', 'allocate', 'exit_thread', 'interrupt_main', 'start_new' ] # We can't actually produce a value that "may be used # to identify this particular thread system-wide", right? # Even if we could, I imagine people will want to pass this to # non-Python (native) APIs, so we shouldn't mess with it. __imports__.append('get_native_id') # Added to 3.12 if hasattr(__thread__, 'daemon_threads_allowed'): __imports__.append('daemon_threads_allowed') error = __thread__.error from gevent._compat import PYPY from gevent._util import copy_globals from gevent.hub import getcurrent from gevent.hub import GreenletExit from gevent.hub import sleep from gevent._hub_local import get_hub_if_exists from gevent.greenlet import Greenlet from gevent.lock import BoundedSemaphore from gevent.local import local as _local from gevent.exceptions import LoopExit if hasattr(__thread__, 'RLock'): # Added in Python 3.4, backported to PyPy 2.7-7.0 __imports__.append("RLock") def get_ident(gr=None): if gr is None: gr = getcurrent() return id(gr) def _start_new_greenlet(function, args=(), kwargs=None): if kwargs is not None: greenlet = Greenlet.spawn(function, *args, **kwargs) # pylint:disable=not-a-mapping else: greenlet = Greenlet.spawn(function, *args) return greenlet def start_new_thread(function, args=(), kwargs=None): return get_ident(_start_new_greenlet(function, args, kwargs)) def start_joinable_thread(function, handle=None, daemon=True): # pylint:disable=unused-argument """ *For internal use only*: start a new thread. Like start_new_thread(), this starts a new thread calling the given function. Unlike start_new_thread(), this returns a handle object with methods to join or detach the given thread. This function is not for third-party code, please use the `threading` module instead. During finalization the runtime will not wait for the thread to exit if daemon is True. If handle is provided it must be a newly created thread._ThreadHandle instance. """ # The above docstring is from python 3.13. 
# # _thread._ThreadHandle has: # - readonly property `ident` # - method is_done # - method join # - method _set_done - threading._shutdown calls this # # I have no idea what it means if you pass a provided handle, # because you can't change the ident once created, and # the constructor of ThreadHandle takes arbitrary positional # and keyword arguments, and throws them away. (The ident is set # by C code directly accessing internal structure members). greenlet = _start_new_greenlet(function) # XXX: Daemon is ignored if handle is not None: assert isinstance(handle, _ThreadHandle) handle._set_greenlet(greenlet) return handle class _ThreadHandle: # The constructor must accept and ignore all arguments # to match the stdlib. def __init__(self, *_args, **_kwargs): """Does nothing; ignores args""" # Must keep a weak reference to the greenlet # to avoid problems managing the _active list of # threads, which can sometimes rely on garbage collection. # Also, this breaks a cycle. _greenlet_ref = None def _set_greenlet(self, glet): from weakref import ref assert glet is not None self._greenlet_ref = ref(glet) def _get_greenlet(self): return ( self._greenlet_ref() if self._greenlet_ref is not None else None ) def join(self, timeout): # TODO: This is what we patch Thread.join to do on all versions, # so there's another implementation in gevent.monkey._patch_thread_common. # UNIFY THEM. glet = self._get_greenlet() if glet is not None: if glet is getcurrent(): raise RuntimeError('Cannot join current thread') if hasattr(glet, 'join'): return glet.join(timeout) # working with a raw greenlet. That # means it's probably the MainThread, because the main # greenlet is always raw. But it could also be a dummy from time import time end = None if timeout: end = time() + timeout while not self.is_done(): if end is not None and time() > end: return sleep(0.001) return None @property def ident(self): glet = self._get_greenlet() if glet is not None: return get_ident(glet) return None def is_done(self): glet = self._get_greenlet() if glet is None: return True return glet.dead def _set_done(self, enter_hub=True): """ Mark the thread as complete. This releases our reference (if any) to our greenlet. By default, this will bounce back to the hub so that waiters in ``join`` can get notified. Set *enter_hub* to false not to do this. """ self._greenlet_ref = None # Let the loop go around so that anyone waiting in # join() gets to know about it. This is particularly # important during threading/interpreter shutdown. if enter_hub: sleep(0.001) def __repr__(self): return '<%s.%s at 0x%x greenlet=%r>' % ( self.__class__.__module__, self.__class__.__name__, id(self), self._get_greenlet() ) def _make_thread_handle(*_args): """ Called on 3.13 after forking in the child. Takes ``(module, ident)``, returns a handle object with that ident. """ # The argument _should_ be a thread identifier int handle = _ThreadHandle() handle._set_greenlet(getcurrent()) return handle class LockType(BoundedSemaphore): """ The basic lock type. .. versionchanged:: 24.10.1 Subclassing this object is no longer allowed. This matches the Python 3 API. """ # Change the ValueError into the appropriate thread error # and any other API changes we need to make to match behaviour _OVER_RELEASE_ERROR = __thread__.error if PYPY: _OVER_RELEASE_ERROR = RuntimeError _TIMEOUT_MAX = __thread__.TIMEOUT_MAX # pylint:disable=no-member def __init__(self): """ .. versionchanged:: 24.10.1 No longer accepts arguments to pass to the super class.
If you want a semaphore with a different count, use a semaphore class directly. This matches the Lock API of Python 3 """ super().__init__() @classmethod def __init_subclass__(cls): raise TypeError def acquire(self, blocking=True, timeout=-1): # This is the Python 3 signature. # On Python 2, Lock.acquire has the signature `Lock.acquire([wait])` # where `wait` is a boolean that cannot be passed by name, only position. # so we're fine to use the Python 3 signature. # Transform the default -1 argument into the None that our # semaphore implementation expects, and raise the same error # the stdlib implementation does. if timeout == -1: timeout = None if not blocking and timeout is not None: raise ValueError("can't specify a timeout for a non-blocking call") if timeout is not None: if timeout < 0: # in C: if(timeout < 0 && timeout != -1) raise ValueError("timeout value must be strictly positive") if timeout > self._TIMEOUT_MAX: raise OverflowError('timeout value is too large') try: acquired = BoundedSemaphore.acquire(self, blocking, timeout) except LoopExit: # Raised when the semaphore was not trivially ours, and we needed # to block. Some other thread presumably owns the semaphore, and there are no greenlets # running in this thread to switch to. So the best we can do is # release the GIL and try again later. if blocking: # pragma: no cover raise acquired = False if not acquired and not blocking and getcurrent() is not get_hub_if_exists(): # Run other callbacks. This makes spin locks works. # We can't do this if we're in the hub, which we could easily be: # printing the repr of a thread checks its tstate_lock, and sometimes we # print reprs in the hub. # See https://github.com/gevent/gevent/issues/1464 # By using sleep() instead of self.wait(0), we don't force a trip # around the event loop *unless* we've been running callbacks for # longer than our switch interval. sleep() return acquired # Should we implement _is_owned, at least for Python 2? See notes in # monkey.py's patch_existing_locks. allocate_lock = lock = LockType def exit(): raise GreenletExit if hasattr(__thread__, 'stack_size'): _original_stack_size = __thread__.stack_size def stack_size(size=None): if size is None: return _original_stack_size() if size > _original_stack_size(): return _original_stack_size(size) # not going to decrease stack_size, because otherwise other # greenlets in this thread will suffer else: __implements__.remove('stack_size') __imports__ = copy_globals(__thread__, globals(), only_names=__imports__, ignore_missing_names=True) __all__ = __implements__ + __imports__ __all__.remove('_local') # XXX interrupt_main # XXX _count() gevent-24.11.1/src/gevent/threading.py000066400000000000000000000406501471441230600175550ustar00rootroot00000000000000""" Implementation of the standard :mod:`threading` using greenlets. .. note:: This module is a helper for :mod:`gevent.monkey` and is not intended to be used directly. For spawning greenlets in your applications, prefer higher level constructs like :class:`gevent.Greenlet` class or :func:`gevent.spawn`. Attributes in this module like ``__threading__`` are implementation artifacts subject to change at any time. .. versionchanged:: 1.2.3 Defer adjusting the stdlib's list of active threads until we are monkey patched. Previously this was done at import time. We are documented to only be used as a helper for monkey patching, so this should functionally be the same, but some applications ignore the documentation and directly import this module anyway. 
A positive consequence is that ``import gevent.threading, threading; threading.current_thread()`` will no longer return a DummyThread before monkey-patching. """ import os import sys __implements__ = [ 'local', '_allocate_lock', 'Lock', '_get_ident', '_sleep', '_DummyThread', # RLock cannot go here, even though we need to import it. # If it goes here, it replaces the RLock from the native # threading module, but we really just need it here when some # things import this module. #'RLock', ] + ([ '_start_new_thread', ] if sys.version_info[:2] < (3, 13) else [ '_start_joinable_thread', '_ThreadHandle', '_make_thread_handle', ]) __extensions__ = [ ] import threading as __threading__ # imports os, sys, _thread, functools, time, itertools _DummyThread_ = __threading__._DummyThread _MainThread_ = __threading__._MainThread from gevent.local import local from gevent.thread import start_new_thread as _start_new_thread from gevent.thread import start_joinable_thread from gevent.thread import _ThreadHandle from gevent.thread import _make_thread_handle from gevent.thread import allocate_lock as _allocate_lock from gevent.thread import get_ident as _get_ident from gevent.hub import sleep as _sleep, getcurrent from gevent.lock import RLock from gevent._util import LazyOnClass # Exports, prevent unused import warnings. # XXX: Why don't we use __all__? local = local start_new_thread = _start_new_thread _start_joinable_thread = start_joinable_thread _make_thread_handle = _make_thread_handle _ThreadHandle = _ThreadHandle allocate_lock = _allocate_lock _get_ident = _get_ident _sleep = _sleep getcurrent = getcurrent Lock = _allocate_lock RLock = RLock def _cleanup(g): __threading__._active.pop(_get_ident(g), None) def _make_cleanup_id(gid): def _(_r): __threading__._active.pop(gid, None) return _ _weakref = None class _DummyThread(_DummyThread_): # We avoid calling the superclass constructor. This makes us about # twice as fast: # # - 1.16 vs 0.68usec on PyPy (unknown version, older Intel mac) # - 29.3 vs 17.7usec on CPython 2.7 (older intel Mac) # - 0.98 vs 2.95usec on CPython 3.12.2 (newer M2 mac) # # It als has the important effect of avoiding allocation and then # immediate deletion of _Thread__block, a lock. This is especially # important on PyPy where locks go through the cpyext API and # Cython, which is known to be slow and potentially buggy (e.g., # https://bitbucket.org/pypy/pypy/issues/2149/memory-leak-for-python-subclass-of-cpyext#comment-22347393) # These objects are constructed quite frequently in some cases, so # the optimization matters: for example, in gunicorn, which uses # pywsgi.WSGIServer, most every request is handled in a new greenlet, # and every request uses a logging.Logger to write the access log, # and every call to a log method captures the current thread (by # default). # # (Obviously we have to duplicate the effects of the constructor, # at least for external state purposes, which is potentially # slightly fragile.) # For the same reason, instances of this class will cleanup their own entry # in ``threading._active`` # This class also solves a problem forking process with subprocess: after forking, # Thread.__stop is called, which throws an exception when __block doesn't # exist. # Capture the static things as class vars to save on memory/ # construction time. 
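# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# The surrounding _DummyThread class is what code sees from
# threading.current_thread() when it runs in a plain greenlet of a
# monkey-patched process, and the rawlink/weakref cleanup keeps
# threading._active from growing as those greenlets die. A hedged demo
# (the helper name is invented and patch_all() is assumed to be acceptable):
def _demo_current_thread_per_greenlet():
    from gevent import monkey; monkey.patch_all()
    import threading
    import gevent

    names = []

    def record():
        names.append(threading.current_thread().name)

    before = threading.active_count()
    gevent.joinall([gevent.spawn(record) for _ in range(3)])
    gevent.sleep(0)   # give the cleanup callbacks a chance to run
    # Each greenlet appeared as its own "Dummy-N" thread and was cleaned up.
    assert all(name.startswith('Dummy-') for name in names)
    assert threading.active_count() <= before + 1
    return names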
# In Py2, they're all private; in Py3, they become protected _Thread__stopped = _is_stopped = _stopped = False _Thread__initialized = _initialized = True _Thread__daemonic = _daemonic = True _Thread__args = _args = () _Thread__kwargs = _kwargs = None _Thread__target = _target = None _Thread_ident = _ident = None _Thread__started = _started = __threading__.Event() _Thread__started.set() _tstate_lock = None _handle = None # 3.13 def __init__(self): # pylint:disable=super-init-not-called #_DummyThread_.__init__(self) # It'd be nice to use a pattern like "greenlet-%d", but there are definitely # third-party libraries checking thread names to detect DummyThread objects. self._name = self._Thread__name = __threading__._newname("Dummy-%d") # All dummy threads in the same native thread share the same ident # (that of the native thread), unless we're monkey-patched. self._set_ident() # _handle is only needed for 3.13; keeps a weak reference # to the greenlet. self._handle = _make_thread_handle(self._ident) # ``_native_id`` backs the ``native_id`` property. self._native_id = __threading__.get_native_id() g = getcurrent() gid = _get_ident(g) __threading__._active[gid] = self rawlink = getattr(g, 'rawlink', None) if rawlink is not None: # raw greenlet.greenlet greenlets don't # have rawlink... rawlink(_cleanup) else: # ... so for them we use weakrefs. # See https://github.com/gevent/gevent/issues/918 ref = self.__weakref_ref ref = ref(g, _make_cleanup_id(gid)) # pylint:disable=too-many-function-args self.__raw_ref = ref assert self.__raw_ref is ref # prevent pylint thinking its unused def _Thread__stop(self): pass _stop = _Thread__stop # py3 def _wait_for_tstate_lock(self, *args, **kwargs): # pylint:disable=signature-differs pass @LazyOnClass def __weakref_ref(self): return __import__('weakref').ref # In Python 3.11.8+ and 3.12.2+ (yes, minor patch releases), # CPython's ``threading._after_fork`` hook began swizzling the # type of the _DummyThread into _MainThread if such a dummy thread # was the current thread when ``os.fork()`` gets called. # From CPython's perspective, that's a more-or-less fine thing to do. # While _DummyThread isn't a subclass of _MainThread, they are both # subclasses of Thread, and _MainThread doesn't add any new instance # variables. # # From gevent's perspective, that's NOT good. Our _DummyThread # doesn't have all the instance variables that Thread does, and so # attempting to do anything with this now-fake _MainThread doesn't work. # You in fact immediately get assertion errors from inside ``_after_fork``. # Now, these are basically harmless --- they're printed, and they prevent the cleanup # of some globals in _threading, but that probably doesn't matter --- but # people complained, and it could break some test scenarios (due to unexpected # output on stderr, for example) # # We thought of a few options to patch around this: # # - Live with the performance penalty. Newer CPythons are making it # harder and harder to perform well, so if we can possibly avoid # adding our own performance regressions, that would be good. # # - ``after_fork`` uses ``isinstance(current, _DummyThread)`` # before swizzling, so we could use a metaclass to make that # check return false. That's a fairly large compatibility risk, # both because of the use of a metaclass (what if some other # subclass of _DummyTHread is using an incompatible metaclass?) # and the change in ``isinstance`` behaviour. 
We could limit the latter # to a window around the fork, using ``os.register_at_fork(before, after_in_parent=)``, # but that's a lot of moving pieces requiring the use of a global or class # variable to track state. # # - We could copy the ivars of the current main thread into the # _DummyThread in ``register_at_fork(before=)``. That appears to # work, but also requires the use of # ``register_at_fork(after_in_parent=)`` to reverse it. # # - We could simply prevent swizzling the class in the first # place. In combination with # ``register_at_fork(after_in_child=)`` to establish a *real* # new _MainThread, that's a clean solution. Establishing a real # new _MainThread is something that CPython itself is prepared # to do if it can't figure out what the current thread is. The # compatibility risk of this is relatively low: swizzling # classes is frowned upon and uncommon, and we can limit it to # just preventing this specific case. And if somebody was # attempting this already with some other thread subclass, it # would (probably?) have the exact same issues, so we can be pretty # sure nobody is doing that. # # We're initially going with the last fix; the __class__ part is here, # the ``after_in_child`` fixup we only apply if we're monkey-patching. # # Now, all of this is moot in 3.13, which takes a very different # approach to handling this, and also changes some names. See # https://github.com/python/cpython/commit/0e9c364f4ac18a2237bdbac702b96bcf8ef9cb09 # Tests pass just fine in 3.8 (and presumably 3.9 and 3.10) with these fixes # applied, but just in case, we only do it where we know it's necessary. _NEEDS_CLASS_FORK_FIXUP = ( (sys.version_info[:2] == (3, 11) and sys.version_info[:3] >= (3, 11, 8)) or sys.version_info[:3] >= (3, 12, 2) ) if _NEEDS_CLASS_FORK_FIXUP: # Override with a property, as opposed to using __setattr__, # to avoid adding overhead on any other attribute setting. @property def __class__(self): return type(self) @__class__.setter def __class__(self, new_class): # Even if we wanted to allow setting this, I'm not sure # exactly how to do so when we have a property object handling it. # Getting the descriptor from ``object.__dict__['__class__']`` # and using its ``__set__`` method raises a TypeError (as does # the simpler ``super().__class__``). # # Better allow the TypeError for now as opposed to silently ignoring # the assignment. if new_class is not _MainThread_: object.__dict__['__class__'].__set__(self, new_class) def main_native_thread(): return __threading__.main_thread() # pylint:disable=no-member # XXX: Issue 18808 breaks us on Python 3.4+. # Thread objects now expect a callback from the interpreter itself # (threadmodule.c:release_sentinel) when the C-level PyThreadState # object is being deallocated. Because this never happens # when a greenlet exits, join() and friends will block forever. # Fortunately this is easy to fix: just ensure that the allocation of the # lock, _set_sentinel, creates a *gevent* lock, and release it when # we're done. The main _shutdown code is in Python and deals with # this gracefully. 
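# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# The Thread subclass defined just below ties the interpreter's per-thread
# lock bookkeeping to the greenlet's lifetime (via rawlink, on versions that
# use the tstate lock), which is what lets Thread.join() return when the
# greenlet finishes. A hedged usage demo (the helper name is invented and a
# monkey-patched process is assumed):
def _demo_join_patched_thread():
    from gevent import monkey; monkey.patch_all()
    import threading

    results = []
    t = threading.Thread(target=results.append, args=(42,))
    t.start()                    # runs in a greenlet, not a native thread
    t.join(timeout=5)            # returns because the greenlet hook fired
    assert results == [42] and not t.is_alive()
    return t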
class Thread(__threading__.Thread): # Only happens in < 3.13 def _set_tstate_lock(self): super(Thread, self)._set_tstate_lock() greenlet = getcurrent() greenlet.rawlink(self.__greenlet_finished) def __greenlet_finished(self, _): if self._tstate_lock: self._tstate_lock.release() self._stop() __implements__.append('Thread') class Timer(Thread, __threading__.Timer): # pylint:disable=abstract-method,inherit-non-class pass __implements__.append('Timer') _set_sentinel = allocate_lock if sys.version_info[:2] < (3, 13): __implements__.append('_set_sentinel') else: __extensions__.append('_set_sentinel') # The main thread is patched up with more care # in _gevent_will_monkey_patch __implements__.remove('_get_ident') __implements__.append('get_ident') get_ident = _get_ident __implements__.remove('_sleep') if hasattr(__threading__, '_CRLock'): # Python 3 changed the implementation of threading.RLock # Previously it was a factory function around threading._RLock # which in turn used _allocate_lock. Now, it wants to use # threading._CRLock, which is imported from _thread.RLock and as such # is implemented in C. So it bypasses our _allocate_lock function. # Fortunately they left the Python fallback in place and use it # if the imported _CRLock is None; this arranges for that to be the case. # This was also backported to PyPy 2.7-7.0 _CRLock = None __implements__.append('_CRLock') class _ForkHooks: _before_fork_current_thread = None _before_fork_active = None def before_fork_in_parent(self): self._before_fork_active = dict(__threading__._active) self._before_fork_current_thread = __threading__.current_thread() def after_fork_in_child(self): # We've already imported threading, which installed its "after" hook, # so we're going to be called after that hook. # Note that this is only installed when monkey-patching. # TODO: Is there any point to checking to see if the current thread is # our dummy thread before doing this? active = __threading__._active assert len(active) == 1 # We cannot actually kill the greenlets via throw(): # - If it was the main greenlet, the process will exit # - In any case, that would unwind the stack and execute # code that, before 2024-10-03, would never have executed. # # But we can take them out of the map and make them appear # stopped; if they truly are no longer referenced, GC will # kick in a nd delete the greenlet. If gevent is still waiting # to switch to them, that will still happen... # # This happens automatically on <= 3.12 which hardcodes the call to # ``Thread._stop``; for 3.13, we need to go through the handle. current_ident = get_ident() for green_ident, thread in self._before_fork_active.items(): if green_ident != current_ident: try: handle = thread._handle except AttributeError: assert sys.version_info[:2] < (3, 13) assert not thread.is_alive() else: # We DO NOT want to bounce to the hub. We're running # at a very sensitive time and it's best to keep tight control # over what gets to run. handle._set_done(enter_hub=False) main = __threading__._MainThread() main._ident = get_ident() # 3.13: reset to the greenlet version. __threading__._active[__threading__.get_ident()] = main __threading__._main_thread = main main = __threading__.main_thread() # XXX: Not the case. Maybe don't save main. 
assert main.ident == __threading__.get_ident() _fork_hooks = _ForkHooks() def _gevent_will_monkey_patch(native_module, items, warn): # pylint:disable=unused-argument # Make sure the MainThread can be found by our current greenlet ID, # otherwise we get a new DummyThread, which cannot be joined. # Fixes tests in test_threading_2 under PyPy. main_thread = main_native_thread() if __threading__.current_thread() != main_thread: warn("Monkey-patching outside the main native thread. Some APIs " "will not be available. Expect a KeyError to be printed at shutdown.") return if _get_ident() not in __threading__._active: main_id = main_thread.ident del __threading__._active[main_id] main_thread._ident = main_thread._Thread__ident = _get_ident() __threading__._active[_get_ident()] = main_thread register_at_fork = getattr(os, 'register_at_fork', None) if register_at_fork: #if _DummyThread._NEEDS_CLASS_FORK_FIXUP: register_at_fork( before=_fork_hooks.before_fork_in_parent, after_in_child=_fork_hooks.after_fork_in_child) gevent-24.11.1/src/gevent/threadpool.py000066400000000000000000000743771471441230600177660ustar00rootroot00000000000000# Copyright (c) 2012 Denis Bilenko. See LICENSE for details. from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import sys from greenlet import greenlet as RawGreenlet from gevent import monkey from gevent._compat import integer_types from gevent.event import AsyncResult from gevent.exceptions import InvalidThreadUseError from gevent.greenlet import Greenlet from gevent._hub_local import get_hub_if_exists from gevent.hub import _get_hub_noargs as get_hub from gevent.hub import getcurrent from gevent.hub import sleep from gevent.lock import Semaphore from gevent.pool import GroupMappingMixin from gevent.util import clear_stack_frames from gevent._threading import Queue from gevent._threading import EmptyTimeout from gevent._threading import start_new_thread from gevent._threading import get_thread_ident __all__ = [ 'ThreadPool', 'ThreadResult', ] def _format_hub(hub): if hub is None: return '' return '<%s at 0x%x thread_ident=0x%x>' % ( hub.__class__.__name__, id(hub), hub.thread_ident ) def _get_thread_profile(_sys=sys): if 'threading' in _sys.modules: return _sys.modules['threading']._profile_hook def _get_thread_trace(_sys=sys): if 'threading' in _sys.modules: return _sys.modules['threading']._trace_hook class _WorkerGreenlet(RawGreenlet): # Exists to produce a more useful repr for worker pool # threads/greenlets, and manage the communication of the worker # thread with the threadpool. # Inform the gevent.util.GreenletTree that this should be # considered the root (for printing purposes) greenlet_tree_is_root = True _thread_ident = 0 _exc_info = sys.exc_info _get_hub_if_exists = staticmethod(get_hub_if_exists) # We capture the hub each time through the loop in case its created # so we can destroy it after a fork. _hub_of_worker = None # The hub of the threadpool we're working for. Just for info. _hub = None # A cookie passed to task_queue.get() _task_queue_cookie = None # If not -1, how long to block waiting for a task before we # exit. _idle_task_timeout = -1 def __init__(self, threadpool): # Construct in the main thread (owner of the threadpool) # The parent greenlet and thread identifier will be set once the # new thread begins running. 
RawGreenlet.__init__(self) self._hub = threadpool.hub # Avoid doing any imports in the background thread if it's not # necessary (monkey.get_original imports if not patched). # Background imports can hang Python 2 (gevent's thread resolver runs in the BG, # and resolving may have to import the idna module, which needs an import lock, so # resolving at module scope) if monkey.is_module_patched('sys'): stderr = monkey.get_original('sys', 'stderr') else: stderr = sys.stderr self._stderr = stderr # We can capture the task_queue; even though it can change if the threadpool # is re-innitted, we won't be running in that case self._task_queue = threadpool.task_queue # type:gevent._threading.Queue self._task_queue_cookie = self._task_queue.allocate_cookie() self._unregister_worker = threadpool._unregister_worker self._idle_task_timeout = threadpool._idle_task_timeout threadpool._register_worker(self) try: start_new_thread(self._begin, ()) except: self._unregister_worker(self) raise def _begin(self, _get_c=getcurrent, _get_ti=get_thread_ident): # Pass arguments to avoid accessing globals during module shutdown. # we're in the new thread (but its root greenlet). Establish invariants and get going # by making this the current greenlet. self.parent = _get_c() # pylint:disable=attribute-defined-outside-init self._thread_ident = _get_ti() # ignore the parent attribute. (We can't set parent to None.) self.parent.greenlet_tree_is_ignored = True try: self.switch() # goto run() except: # pylint:disable=bare-except # run() will attempt to print any exceptions, but that might # not work during shutdown. sys.excepthook and such may be gone, # so things might not get printed at all except for a cryptic # message. This is especially true on Python 2 (doesn't seem to be # an issue on Python 3). pass def __fixup_hub_before_block(self): hub = self._get_hub_if_exists() # Don't create one; only set if a worker function did it if hub is not None: hub.name = 'ThreadPool Worker Hub' # While we block, don't let the monitoring thread, if any, # report us as blocked. 
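# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# The _before_run_task/_after_run_task hooks defined just below copy whatever
# threading.setprofile()/settrace() currently holds into the worker thread for
# the duration of each task. A hedged demo of observing pool work through a
# profile hook (helper and hook names are invented for illustration):
def _demo_worker_profile_hook():
    import threading
    import gevent

    calls = []

    def profiler(frame, event, arg):
        if event == 'call':
            calls.append(frame.f_code.co_name)

    def work():
        return 6 * 7

    threading.setprofile(profiler)       # consulted right before each task
    try:
        gevent.get_hub().threadpool.spawn(work).get()
    finally:
        threading.setprofile(None)
    assert 'work' in calls               # the task itself was seen by the hook
    return calls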
Indeed, so long as we never # try to switch greenlets, don't report us as blocked--- # the threadpool is *meant* to run blocking tasks if hub is not None and hub.periodic_monitoring_thread is not None: hub.periodic_monitoring_thread.ignore_current_greenlet_blocking() self._hub_of_worker = hub @staticmethod def __print_tb(tb, stderr): # Extracted from traceback to avoid accessing any module # globals (these sometimes happen during interpreter shutdown; # see test__subprocess_interrupted) while tb is not None: f = tb.tb_frame lineno = tb.tb_lineno co = f.f_code filename = co.co_filename name = co.co_name print(' File "%s", line %d, in %s' % (filename, lineno, name), file=stderr) tb = tb.tb_next def _before_run_task(self, func, args, kwargs, thread_result, _sys=sys, _get_thread_profile=_get_thread_profile, _get_thread_trace=_get_thread_trace): # pylint:disable=unused-argument _sys.setprofile(_get_thread_profile()) _sys.settrace(_get_thread_trace()) def _after_run_task(self, func, args, kwargs, thread_result, _sys=sys): # pylint:disable=unused-argument _sys.setprofile(None) _sys.settrace(None) def __run_task(self, func, args, kwargs, thread_result): self._before_run_task(func, args, kwargs, thread_result) try: thread_result.set(func(*args, **kwargs)) except: # pylint:disable=bare-except thread_result.handle_error((self, func), self._exc_info()) finally: self._after_run_task(func, args, kwargs, thread_result) del func, args, kwargs, thread_result def run(self): # pylint:disable=too-many-branches task = None exc_info = sys.exc_info fixup_hub_before_block = self.__fixup_hub_before_block task_queue_get = self._task_queue.get task_queue_cookie = self._task_queue_cookie run_task = self.__run_task task_queue_done = self._task_queue.task_done idle_task_timeout = self._idle_task_timeout try: # pylint:disable=too-many-nested-blocks while 1: # tiny bit faster than True on Py2 fixup_hub_before_block() try: task = task_queue_get(task_queue_cookie, idle_task_timeout) except EmptyTimeout: # Nothing to do, exit the thread. Do not # go into the next block where we would call # queue.task_done(), because we didn't actually # take a task. return try: if task is None: return run_task(*task) except: task = repr(task) raise finally: task = None if not isinstance(task, str) else task task_queue_done() except Exception as e: # pylint:disable=broad-except print( "Failed to run worker thread. Task=%r Exception=%r" % ( task, e ), file=self._stderr) self.__print_tb(exc_info()[-1], self._stderr) finally: # Re-check for the hub in case the task created it but then # failed. self.cleanup(self._get_hub_if_exists()) def cleanup(self, hub_of_worker): if self._hub is not None: self._hub = None self._unregister_worker(self) self._unregister_worker = lambda _: None self._task_queue = None self._task_queue_cookie = None if hub_of_worker is not None: hub_of_worker.destroy(True) def __repr__(self, _format_hub=_format_hub): return "<ThreadPoolWorker at 0x%x thread_ident=0x%x hub=%s>" % ( id(self), self._thread_ident, _format_hub(self._hub) ) class ThreadPool(GroupMappingMixin): """ A pool of native worker threads. This can be useful for CPU intensive functions, or those that otherwise will not cooperate with gevent. The best functions to execute in a thread pool are small functions with a single purpose; ideally they release the CPython GIL. Such functions are extension functions implemented in C. It implements the same operations as a :class:`gevent.pool.Pool`, but using threads instead of greenlets. ..
note:: The method :meth:`apply_async` will always return a new greenlet, bypassing the threadpool entirely. Most users will not need to create instances of this class. Instead, use the threadpool already associated with gevent's hub:: pool = gevent.get_hub().threadpool result = pool.spawn(lambda: "Some func").get() .. important:: It is only possible to use instances of this class from the thread running their hub. Typically that means from the thread that created them. Using the pattern shown above takes care of this. There is no gevent-provided way to have a single process-wide limit on the number of threads in various pools when doing that, however. The suggested way to use gevent and threadpools is to have a single gevent hub and its one threadpool (which is the default without doing any extra work). Only dispatch minimal blocking functions to the threadpool, functions that do not use the gevent hub. The `len` of instances of this class is the number of enqueued (unfinished) tasks. Just before a task starts running in a worker thread, the values of :func:`threading.setprofile` and :func:`threading.settrace` are consulted. Any values there are installed in that thread for the duration of the task (using :func:`sys.setprofile` and :func:`sys.settrace`, respectively). (Because worker threads are long-lived and outlast any given task, this arrangement lets the hook functions change between tasks, but does not let them see the bookkeeping done by the worker thread itself.) .. caution:: Instances of this class are only true if they have unfinished tasks. .. versionchanged:: 1.5a3 The undocumented ``apply_e`` function, deprecated since 1.1, was removed. .. versionchanged:: 20.12.0 Install the profile and trace functions in the worker thread while the worker thread is running the supplied task. .. versionchanged:: 22.08.0 Add the option to let idle threads expire and be removed from the pool after *idle_task_timeout* seconds (-1 for no timeout) """ __slots__ = ( 'hub', '_maxsize', # A Greenlet that runs to adjust the number of worker # threads. 'manager', # The PID of the process we were created in. # Used to help detect a fork and then re-create # internal state. 'pid', 'fork_watcher', # A semaphore initialized with ``maxsize`` counting the # number of available worker threads we have. As a # gevent.lock.Semaphore, this is only safe to use from a single # native thread. '_available_worker_threads_greenlet_sem', # A set of running or pending _WorkerGreenlet objects; # we rely on the GIL for thread safety. '_worker_greenlets', # The task queue is itself safe to use from multiple # native threads. 'task_queue', '_idle_task_timeout', ) _WorkerGreenlet = _WorkerGreenlet def __init__(self, maxsize, hub=None, idle_task_timeout=-1): if hub is None: hub = get_hub() self.hub = hub self.pid = os.getpid() self.manager = None self.task_queue = Queue() self.fork_watcher = None self._idle_task_timeout = idle_task_timeout self._worker_greenlets = set() self._maxsize = 0 # Note that by starting with 1, we actually allow # maxsize + 1 tasks in the queue. 
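# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# The usage pattern documented above: hand a blocking call to the hub's
# threadpool so the event loop (and every other greenlet) keeps running while
# native worker threads do the waiting. A hedged demo (helper name invented);
# in an unpatched process time.sleep() really blocks its worker thread:
def _demo_hub_threadpool():
    import time
    import gevent

    pool = gevent.get_hub().threadpool
    start = time.monotonic()
    # Two sleeps run concurrently in worker threads, so this typically takes
    # about 0.1s of wall time rather than 0.2s.
    results = [pool.spawn(time.sleep, 0.1) for _ in range(2)]
    gevent.wait(results)          # AsyncResults work with gevent.wait()
    return time.monotonic() - start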
self._available_worker_threads_greenlet_sem = Semaphore(1, hub) self._set_maxsize(maxsize) self.fork_watcher = hub.loop.fork(ref=False) def _register_worker(self, worker): self._worker_greenlets.add(worker) def _unregister_worker(self, worker): self._worker_greenlets.discard(worker) def _set_maxsize(self, maxsize): if not isinstance(maxsize, integer_types): raise TypeError('maxsize must be integer: %r' % (maxsize, )) if maxsize < 0: raise ValueError('maxsize must not be negative: %r' % (maxsize, )) difference = maxsize - self._maxsize self._available_worker_threads_greenlet_sem.counter += difference self._maxsize = maxsize self.adjust() # make sure all currently blocking spawn() start unlocking if maxsize increased self._available_worker_threads_greenlet_sem._start_notify() def _get_maxsize(self): return self._maxsize maxsize = property(_get_maxsize, _set_maxsize, doc="""\ The maximum allowed number of worker threads. This is also (approximately) a limit on the number of tasks that can be queued without blocking the waiting greenlet. If this many tasks are already running, then the next greenlet that submits a task will block waiting for a task to finish. """) def __repr__(self, _format_hub=_format_hub): return '<%s at 0x%x tasks=%s size=%s maxsize=%s hub=%s>' % ( self.__class__.__name__, id(self), len(self), self.size, self.maxsize, _format_hub(self.hub), ) def __len__(self): # XXX just do unfinished_tasks property # Note that this becomes the boolean value of this class, # that's probably not what we want! return self.task_queue.unfinished_tasks def _get_size(self): return len(self._worker_greenlets) def _set_size(self, size): if size < 0: raise ValueError('Size of the pool cannot be negative: %r' % (size, )) if size > self._maxsize: raise ValueError('Size of the pool cannot be bigger than maxsize: %r > %r' % (size, self._maxsize)) if self.manager: self.manager.kill() while len(self._worker_greenlets) < size: self._add_thread() delay = self.hub.loop.approx_timer_resolution while len(self._worker_greenlets) > size: while len(self._worker_greenlets) - size > self.task_queue.unfinished_tasks: self.task_queue.put(None) if getcurrent() is self.hub: break sleep(delay) delay = min(delay * 2, .05) if self._worker_greenlets: self.fork_watcher.start(self._on_fork) else: self.fork_watcher.stop() size = property(_get_size, _set_size, doc="""\ The number of running pooled worker threads. Setting this attribute will add or remove running worker threads, up to `maxsize`. Initially there are no pooled running worker threads, and threads are created on demand to satisfy concurrent requests up to `maxsize` threads. """) def _on_fork(self): # fork() only leaves one thread; also screws up locks; # let's re-create locks and threads, and do our best to # clean up any worker threads left behind. # NOTE: See comment in gevent.hub.reinit. pid = os.getpid() if pid != self.pid: # The OS threads have been destroyed, but the Python # objects may live on, creating refcount "leaks". Python 2 # leaves dead frames (those that are for dead OS threads) # around; Python 3.8 does not. thread_ident_to_frame = dict(sys._current_frames()) for worker in list(self._worker_greenlets): frame = thread_ident_to_frame.get(worker._thread_ident) clear_stack_frames(frame) worker.cleanup(worker._hub_of_worker) # We can't throw anything to the greenlet, nor can we # switch to it or set a parent. Those would all be cross-thread # operations, which aren't allowed. 
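# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# The maxsize/size properties documented above can be adjusted at runtime;
# worker threads are created lazily, up to maxsize, as tasks arrive. A hedged
# demo (helper name invented; it reconfigures the shared hub pool, which a
# real program may not want to do):
def _demo_resize_pool():
    import gevent

    pool = gevent.get_hub().threadpool
    pool.maxsize = 1                            # allow at most one worker
    assert pool.spawn(sum, [1, 2, 3]).get() == 6
    assert pool.size <= pool.maxsize == 1       # one lazily created worker
    return pool.size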
worker.__dict__.clear() # We've cleared f_locals and on Python 3.4, possibly the actual # array locals of the stack frame, but the task queue may still be # referenced if we didn't actually get all the locals. Shut it down # and clear it before we throw away our reference. self.task_queue.kill() self.__init__(self._maxsize) def join(self): """Waits until all outstanding tasks have been completed.""" delay = max(0.0005, self.hub.loop.approx_timer_resolution) while self.task_queue.unfinished_tasks > 0: sleep(delay) delay = min(delay * 2, .05) def kill(self): self.size = 0 self.fork_watcher.close() def _adjust_step(self): # if there is a possibility & necessity for adding a thread, do it while (len(self._worker_greenlets) < self._maxsize and self.task_queue.unfinished_tasks > len(self._worker_greenlets)): self._add_thread() # while the number of threads is more than maxsize, kill one # we do not check what's already in task_queue - it could be all Nones while len(self._worker_greenlets) - self._maxsize > self.task_queue.unfinished_tasks: self.task_queue.put(None) if self._worker_greenlets: self.fork_watcher.start(self._on_fork) elif self.fork_watcher is not None: self.fork_watcher.stop() def _adjust_wait(self): delay = self.hub.loop.approx_timer_resolution while True: self._adjust_step() if len(self._worker_greenlets) <= self._maxsize: return sleep(delay) delay = min(delay * 2, .05) def adjust(self): self._adjust_step() if not self.manager and len(self._worker_greenlets) > self._maxsize: # might need to feed more Nones into the pool to shutdown # threads. self.manager = Greenlet.spawn(self._adjust_wait) def _add_thread(self): self._WorkerGreenlet(self) def spawn(self, func, *args, **kwargs): """ Add a new task to the threadpool that will run ``func(*args, **kwargs)``. Waits until a slot is available. Creates a new native thread if necessary. This must only be called from the native thread that owns this object's hub. This is because creating the necessary data structures to communicate back to this thread isn't thread safe, so the hub must not be running something else. Also, ensuring the pool size stays correct only works within a single thread. :return: A :class:`gevent.event.AsyncResult`. :raises InvalidThreadUseError: If called from a different thread. .. versionchanged:: 1.5 Document the thread-safety requirements. """ if self.hub != get_hub(): raise InvalidThreadUseError while 1: semaphore = self._available_worker_threads_greenlet_sem semaphore.acquire() if semaphore is self._available_worker_threads_greenlet_sem: # If we were asked to change size or re-init we could have changed # semaphore objects. break # Returned; lets a greenlet in this thread wait # for the pool thread. Signaled when the async watcher # is fired from the pool thread back into this thread. result = AsyncResult() task_queue = self.task_queue # Encapsulates the async watcher the worker thread uses to # call back into this thread. Immediately allocates and starts the # async watcher in this thread, because it uses this hub/loop, # which is not thread safe. 
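# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# spawn() above returns an AsyncResult; the Pool-style helpers inherited from
# GroupMappingMixin (map, apply, imap, ...) are built on top of it. A hedged
# demo (helper name invented), called from the hub's own thread as the
# docstring requires:
def _demo_pool_map():
    import gevent

    pool = gevent.get_hub().threadpool
    # Each len() call runs in a native worker; results come back in order.
    lengths = pool.map(len, ['a', 'bb', 'ccc'])
    assert lengths == [1, 2, 3]
    # apply() blocks only the calling greenlet, not the event loop.
    assert pool.apply(max, ([5, 9, 2],)) == 9
    return lengths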
thread_result = None try: thread_result = ThreadResult(result, self.hub, semaphore.release) task_queue.put((func, args, kwargs, thread_result)) self.adjust() except: if thread_result is not None: thread_result.destroy_in_main_thread() semaphore.release() raise return result def _apply_immediately(self): # If we're being called from a different thread than the one that # created us, e.g., because a worker task is trying to use apply() # recursively, we have no choice but to run the task immediately; # if we try to AsyncResult.get() in the worker thread, it's likely to have # nothing to switch to and lead to a LoopExit. return get_hub() is not self.hub def _apply_async_cb_spawn(self, callback, result): callback(result) def _apply_async_use_greenlet(self): # Always go to Greenlet because our self.spawn uses threads return True class _FakeAsync(object): def send(self): pass close = stop = send def __call__(self, result): "fake out for 'receiver'" def __bool__(self): return False __nonzero__ = __bool__ _FakeAsync = _FakeAsync() class ThreadResult(object): """ A one-time event for cross-thread communication. Uses a hub's "async" watcher capability; it must be constructed and destroyed in the thread running the hub (because creating, starting, and destroying async watchers isn't guaranteed to be thread safe). """ # Using slots here helps to debug reference cycles/leaks __slots__ = ('exc_info', 'async_watcher', '_call_when_ready', 'value', 'context', 'hub', 'receiver') def __init__(self, receiver, hub, call_when_ready): self.receiver = receiver self.hub = hub self.context = None self.value = None self.exc_info = () self.async_watcher = hub.loop.async_() self._call_when_ready = call_when_ready self.async_watcher.start(self._on_async) @property def exception(self): return self.exc_info[1] if self.exc_info else None def _on_async(self): # Called in the hub thread. aw = self.async_watcher self.async_watcher = _FakeAsync aw.stop() aw.close() # Typically this is pool.semaphore.release and we have to # call this in the Hub; if we don't we get the dreaded # LoopExit (XXX: Why?) try: self._call_when_ready() if self.exc_info: self.hub.handle_error(self.context, *self.exc_info) self.context = None self.async_watcher = _FakeAsync self.hub = None self._call_when_ready = _FakeAsync self.receiver(self) finally: self.receiver = _FakeAsync self.value = None if self.exc_info: self.exc_info = (self.exc_info[0], self.exc_info[1], None) def destroy_in_main_thread(self): """ This must only be called from the thread running the hub. """ self.async_watcher.stop() self.async_watcher.close() self.async_watcher = _FakeAsync self.context = None self.hub = None self._call_when_ready = _FakeAsync self.receiver = _FakeAsync def set(self, value): self.value = value self.async_watcher.send() def handle_error(self, context, exc_info): self.context = context self.exc_info = exc_info self.async_watcher.send() # link protocol: def successful(self): return self.exception is None try: import concurrent.futures except ImportError: pass else: __all__.append("ThreadPoolExecutor") from gevent.timeout import Timeout as GTimeout from gevent._util import Lazy from concurrent.futures import _base as cfb def _ignore_error(future_proxy, fn): def cbwrap(_): del _ # We're called with the async result (from the threadpool), but # be sure to pass in the user-visible _FutureProxy object.. try: fn(future_proxy) except Exception: # pylint: disable=broad-except # Just print, don't raise to the hub's parent. 
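# --- Illustrative sketch (editor's addition; not part of the gevent source) ---
# This glue feeds into the ThreadPoolExecutor defined a bit further below:
# the futures it returns are backed by AsyncResult, so they cooperate with
# gevent.wait(). A hedged demo (helper name invented):
def _demo_executor():
    import gevent
    from gevent.threadpool import ThreadPoolExecutor

    with ThreadPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(pow, 2, n) for n in (8, 16)]
        gevent.wait(futures)                  # gevent-aware waiting primitive
        values = [f.result() for f in futures]
    assert values == [256, 65536]
    return values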
future_proxy.hub.print_exception((fn, future_proxy), None, None, None) return cbwrap def _wrap(future_proxy, fn): def f(_): fn(future_proxy) return f class _FutureProxy(object): def __init__(self, asyncresult): self.asyncresult = asyncresult # Internal implementation details of a c.f.Future @Lazy def _condition(self): if monkey.is_module_patched('threading') or self.done(): import threading return threading.Condition() # We can only properly work with conditions # when we've been monkey-patched. This is necessary # for the wait/as_completed module functions. raise AttributeError("_condition") @Lazy def _waiters(self): self.asyncresult.rawlink(self.__when_done) return [] def __when_done(self, _): # We should only be called when _waiters has # already been accessed. waiters = getattr(self, '_waiters') for w in waiters: # pylint:disable=not-an-iterable if self.successful(): w.add_result(self) else: w.add_exception(self) @property def _state(self): if self.done(): return cfb.FINISHED return cfb.RUNNING def set_running_or_notify_cancel(self): # Does nothing, not even any consistency checks. It's # meant to be internal to the executor and we don't use it. return def result(self, timeout=None): try: return self.asyncresult.result(timeout=timeout) except GTimeout: # XXX: Theoretically this could be a completely # unrelated timeout instance. Do we care about that? raise concurrent.futures.TimeoutError() def exception(self, timeout=None): try: self.asyncresult.get(timeout=timeout) return self.asyncresult.exception except GTimeout: raise concurrent.futures.TimeoutError() def add_done_callback(self, fn): """Exceptions raised by *fn* are ignored.""" if self.done(): fn(self) else: self.asyncresult.rawlink(_ignore_error(self, fn)) def rawlink(self, fn): self.asyncresult.rawlink(_wrap(self, fn)) def __str__(self): return str(self.asyncresult) def __getattr__(self, name): return getattr(self.asyncresult, name) class ThreadPoolExecutor(concurrent.futures.ThreadPoolExecutor): """ A version of :class:`concurrent.futures.ThreadPoolExecutor` that always uses native threads, even when threading is monkey-patched. The ``Future`` objects returned from this object can be used with gevent waiting primitives like :func:`gevent.wait`. .. caution:: If threading is *not* monkey-patched, then the ``Future`` objects returned by this object are not guaranteed to work with :func:`~concurrent.futures.as_completed` and :func:`~concurrent.futures.wait`. The individual blocking methods like :meth:`~concurrent.futures.Future.result` and :meth:`~concurrent.futures.Future.exception` will always work. .. versionadded:: 1.2a1 This is a provisional API. """ def __init__(self, *args, **kwargs): """ Takes the same arguments as ``concurrent.futures.ThreadPoolExecuter``, which vary between Python versions. The first argument is always *max_workers*, the maximum number of threads to use. Most other arguments, while accepted, are ignored. 
""" super(ThreadPoolExecutor, self).__init__(*args, **kwargs) self._threadpool = ThreadPool(self._max_workers) def submit(self, fn, *args, **kwargs): # pylint:disable=arguments-differ with self._shutdown_lock: # pylint:disable=not-context-manager if self._shutdown: raise RuntimeError('cannot schedule new futures after shutdown') future = self._threadpool.spawn(fn, *args, **kwargs) return _FutureProxy(future) def shutdown(self, wait=True, **kwargs): # pylint:disable=arguments-differ # In 3.9, this added ``cancel_futures=False`` super(ThreadPoolExecutor, self).shutdown(wait, **kwargs) # XXX: We don't implement wait properly kill = getattr(self._threadpool, 'kill', None) if kill: # pylint:disable=using-constant-test self._threadpool.kill() self._threadpool = None kill = shutdown # greentest compat def _adjust_thread_count(self): # Does nothing. We don't want to spawn any "threads", # let the threadpool handle that. pass gevent-24.11.1/src/gevent/time.py000066400000000000000000000007531471441230600165460ustar00rootroot00000000000000# Copyright (c) 2018 gevent. See LICENSE for details. """ The standard library :mod:`time` module, but :func:`sleep` is gevent-aware. .. versionadded:: 1.3a2 """ from __future__ import absolute_import __implements__ = [ 'sleep', ] __all__ = __implements__ import time as __time__ from gevent._util import copy_globals __imports__ = copy_globals(__time__, globals(), names_to_ignore=__implements__) from gevent.hub import sleep sleep = sleep # pylint gevent-24.11.1/src/gevent/timeout.py000066400000000000000000000325561471441230600173040ustar00rootroot00000000000000# Copyright (c) 2009-2010 Denis Bilenko. See LICENSE for details. """ Timeouts. Many functions in :mod:`gevent` have a *timeout* argument that allows limiting the time the function will block. When that is not available, the :class:`Timeout` class and :func:`with_timeout` function in this module add timeouts to arbitrary code. .. warning:: Timeouts can only work when the greenlet switches to the hub. If a blocking function is called or an intense calculation is ongoing during which no switches occur, :class:`Timeout` is powerless. """ from __future__ import absolute_import, print_function, division from gevent._compat import string_types from gevent._util import _NONE from greenlet import getcurrent from gevent._hub_local import get_hub_noargs as get_hub __all__ = [ 'Timeout', 'with_timeout', ] class _FakeTimer(object): # An object that mimics the API of get_hub().loop.timer, but # without allocating any native resources. This is useful for timeouts # that will never expire. # Also partially mimics the API of Timeout itself for use in _start_new_or_dummy # This object is used as a singleton, so it should be # immutable. __slots__ = () @property def pending(self): return False active = pending @property def seconds(self): "Always returns None" timer = exception = seconds def start(self, *args, **kwargs): # pylint:disable=unused-argument raise AssertionError("non-expiring timer cannot be started") def stop(self): return cancel = stop stop = close = cancel def __enter__(self): return self def __exit__(self, _t, _v, _tb): return _FakeTimer = _FakeTimer() class Timeout(BaseException): """ Timeout(seconds=None, exception=None, ref=True, priority=-1) Raise *exception* in the current greenlet after *seconds* have elapsed:: timeout = Timeout(seconds, exception) timeout.start() try: ... # exception will be raised here, after *seconds* passed since start() call finally: timeout.close() .. 
warning:: You must **always** call `close` on a ``Timeout`` object you have created, whether or not the code that the timeout was protecting finishes executing before the timeout elapses (whether or not the ``Timeout`` exception is raised) This ``try/finally`` construct or a ``with`` statement is a good pattern. (If the timeout object will be started again, use `cancel` instead of `close`; this is rare. You must still `close` it when you are done.) When *exception* is omitted or ``None``, the ``Timeout`` instance itself is raised:: >>> import gevent >>> gevent.Timeout(0.1).start() >>> gevent.sleep(0.2) #doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): ... Timeout: 0.1 seconds If the *seconds* argument is not given or is ``None`` (e.g., ``Timeout()``), then the timeout will never expire and never raise *exception*. This is convenient for creating functions which take an optional timeout parameter of their own. (Note that this is **not** the same thing as a *seconds* value of ``0``.) :: def function(args, timeout=None): "A function with an optional timeout." timer = Timeout(timeout) with timer: ... .. caution:: A *seconds* value less than ``0.0`` (e.g., ``-1``) is poorly defined. In the future, support for negative values is likely to do the same thing as a value of ``None`` or ``0`` A *seconds* value of ``0`` requests that the event loop spin and poll for I/O; it will immediately expire as soon as control returns to the event loop. .. rubric:: Use As A Context Manager To simplify starting and canceling timeouts, the ``with`` statement can be used:: with gevent.Timeout(seconds, exception) as timeout: pass # ... code block ... This is equivalent to the try/finally block above with one additional feature: if *exception* is the literal ``False``, the timeout is still raised, but the context manager suppresses it, so the code outside the with-block won't see it. This is handy for adding a timeout to the functions that don't support a *timeout* parameter themselves:: data = None with gevent.Timeout(5, False): data = mysock.makefile().readline() if data is None: ... # 5 seconds passed without reading a line else: ... # a line was read within 5 seconds .. caution:: If ``readline()`` above catches and doesn't re-raise :exc:`BaseException` (for example, with a bare ``except:``), then your timeout will fail to function and control won't be returned to you when you expect. .. rubric:: Catching Timeouts When catching timeouts, keep in mind that the one you catch may not be the one you have set (a calling function may have set its own timeout); if you going to silence a timeout, always check that it's the instance you need:: timeout = Timeout(1) timeout.start() try: ... except Timeout as t: if t is not timeout: raise # not my timeout finally: timeout.close() .. versionchanged:: 1.1b2 If *seconds* is not given or is ``None``, no longer allocate a native timer object that will never be started. .. versionchanged:: 1.1 Add warning about negative *seconds* values. .. versionchanged:: 1.3a1 Timeout objects now have a :meth:`close` method that *must* be called when the timeout will no longer be used to properly clean up native resources. The ``with`` statement does this automatically. .. versionchanged:: 24.10.1 Timeout values can be compared to be less than an integer value, or to be less than other timeouts, e.g., ``Timeout(0) < 1`` is true. Timeouts are not absolutely ordered and support no other comparisons; this is purely for convenience and may be removed or altered in the future. 
""" # We inherit a __dict__ from BaseException, so __slots__ actually # makes us larger. def __init__(self, seconds=None, exception=None, ref=True, priority=-1, _one_shot=False): BaseException.__init__(self) self.seconds = seconds self.exception = exception self._one_shot = _one_shot if seconds is None: # Avoid going through the timer codepath if no timeout is # desired; this avoids some CFFI interactions on PyPy that can lead to a # RuntimeError if this implementation is used during an `import` statement. See # https://bitbucket.org/pypy/pypy/issues/2089/crash-in-pypy-260-linux64-with-gevent-11b1 # and https://github.com/gevent/gevent/issues/618. # Plus, in general, it should be more efficient self.timer = _FakeTimer else: # XXX: A timer <= 0 could cause libuv to block the loop; we catch # that case in libuv/loop.py self.timer = get_hub().loop.timer(seconds or 0.0, ref=ref, priority=priority) def start(self): """Schedule the timeout.""" if self.pending: raise AssertionError('%r is already started; to restart it, cancel it first' % self) if self.seconds is None: # "fake" timeout (never expires) return if self.exception is None or self.exception is False or isinstance(self.exception, string_types): # timeout that raises self throws = self else: # regular timeout with user-provided exception throws = self.exception # Make sure the timer updates the current time so that we don't # expire prematurely. self.timer.start(self._on_expiration, getcurrent(), throws, update=True) def _on_expiration(self, prev_greenlet, ex): # Hook for subclasses. prev_greenlet.throw(ex) @classmethod def start_new(cls, timeout=None, exception=None, ref=True, _one_shot=False): """Create a started :class:`Timeout`. This is a shortcut, the exact action depends on *timeout*'s type: * If *timeout* is a :class:`Timeout`, then call its :meth:`start` method if it's not already begun. * Otherwise, create a new :class:`Timeout` instance, passing (*timeout*, *exception*) as arguments, then call its :meth:`start` method. Returns the :class:`Timeout` instance. """ if isinstance(timeout, Timeout): if not timeout.pending: timeout.start() return timeout timeout = cls(timeout, exception, ref=ref, _one_shot=_one_shot) timeout.start() return timeout @staticmethod def _start_new_or_dummy(timeout, exception=None, ref=True): # Internal use only in 1.1 # Return an object with a 'cancel' method; if timeout is None, # this will be a shared instance object that does nothing. Otherwise, # return an actual Timeout. A 0 value is allowed and creates a real Timeout. # Because negative values are hard to reason about, # and are often used as sentinels in Python APIs, in the future it's likely # that a negative timeout will also return the shared instance. # This saves the previously common idiom of # 'timer = Timeout.start_new(t) if t is not None else None' # followed by 'if timer is not None: timer.cancel()'. # That idiom was used to avoid any object allocations. # A staticmethod is slightly faster under CPython, compared to a classmethod; # under PyPy in synthetic benchmarks it makes no difference. if timeout is None: return _FakeTimer return Timeout.start_new(timeout, exception, ref, _one_shot=True) @property def pending(self): """True if the timeout is scheduled to be raised.""" return self.timer.pending or self.timer.active def cancel(self): """ If the timeout is pending, cancel it. Otherwise, do nothing. The timeout object can be :meth:`started ` again. If you will not start the timeout again, you should use :meth:`close` instead. 
""" self.timer.stop() if self._one_shot: self.close() def close(self): """ Close the timeout and free resources. The timer cannot be started again after this method has been used. """ self.timer.stop() self.timer.close() self.timer = _FakeTimer def __repr__(self): classname = type(self).__name__ if self.pending: pending = ' pending' else: pending = '' if self.exception is None: exception = '' else: exception = ' exception=%r' % self.exception return '<%s at %s seconds=%s%s%s>' % (classname, hex(id(self)), self.seconds, exception, pending) def __str__(self): """ >>> raise Timeout #doctest: +IGNORE_EXCEPTION_DETAIL Traceback (most recent call last): ... Timeout """ if self.seconds is None: return '' suffix = '' if self.seconds == 1 else 's' if self.exception is None: return '%s second%s' % (self.seconds, suffix) if self.exception is False: return '%s second%s (silent)' % (self.seconds, suffix) return '%s second%s: %s' % (self.seconds, suffix, self.exception) def __enter__(self): """ Start and return the timer. If the timer is already started, just return it. """ if not self.pending: self.start() return self def __exit__(self, typ, value, tb): """ Stop the timer. .. versionchanged:: 1.3a1 The underlying native timer is also stopped. This object cannot be used again. """ self.close() if value is self and self.exception is False: return True # Suppress the exception def __lt__(self, other): """ For convenience, timeouts can be compared to integers (numbers) based on their seconds value. """ try: return self.seconds < other.seconds except AttributeError: try: return self.seconds < other except TypeError: return NotImplemented def with_timeout(seconds, function, *args, **kwds): """Wrap a call to *function* with a timeout; if the called function fails to return before the timeout, cancel it and return a flag value, provided by *timeout_value* keyword argument. If timeout expires but *timeout_value* is not provided, raise :class:`Timeout`. Keyword argument *timeout_value* is not passed to *function*. """ timeout_value = kwds.pop("timeout_value", _NONE) timeout = Timeout.start_new(seconds, _one_shot=True) try: try: return function(*args, **kwds) except Timeout as ex: if ex is timeout and timeout_value is not _NONE: return timeout_value raise finally: timeout.cancel() gevent-24.11.1/src/gevent/util.py000066400000000000000000000540531471441230600165670ustar00rootroot00000000000000# Copyright (c) 2009 Denis Bilenko. See LICENSE for details. """ Low-level utilities. """ from __future__ import absolute_import, print_function, division import functools import pprint import sys import traceback from greenlet import getcurrent from gevent._compat import perf_counter from gevent._compat import PYPY from gevent._compat import thread_mod_name from gevent._util import _NONE __all__ = [ 'format_run_info', 'print_run_info', 'GreenletTree', 'wrap_errors', 'assert_switches', ] # PyPy is very slow at formatting stacks # for some reason. _STACK_LIMIT = 20 if PYPY else None def _noop(): return None def _ready(): return False class wrap_errors(object): """ Helper to make function return an exception, rather than raise it. Because every exception that is unhandled by greenlet will be logged, it is desirable to prevent non-error exceptions from leaving a greenlet. 
This can be done with a simple ``try/except`` construct:: def wrapped_func(*args, **kwargs): try: return func(*args, **kwargs) except (TypeError, ValueError, AttributeError) as ex: return ex This class provides a shortcut to write that in one line:: wrapped_func = wrap_errors((TypeError, ValueError, AttributeError), func) It also preserves ``__str__`` and ``__repr__`` of the original function. """ # QQQ could also support using wrap_errors as a decorator def __init__(self, errors, func): """ Calling this makes a new function from *func*, such that it catches *errors* (an :exc:`BaseException` subclass, or a tuple of :exc:`BaseException` subclasses) and returns it as a value. """ self.__errors = errors self.__func = func # Set __doc__, __wrapped__, etc, especially useful on Python 3. functools.update_wrapper(self, func) def __call__(self, *args, **kwargs): func = self.__func try: return func(*args, **kwargs) except self.__errors as ex: return ex def __str__(self): return str(self.__func) def __repr__(self): return repr(self.__func) def __getattr__(self, name): return getattr(self.__func, name) def print_run_info(thread_stacks=True, greenlet_stacks=True, limit=_NONE, file=None): """ Call `format_run_info` and print the results to *file*. If *file* is not given, `sys.stderr` will be used. .. versionadded:: 1.3b1 """ lines = format_run_info(thread_stacks=thread_stacks, greenlet_stacks=greenlet_stacks, limit=limit) file = sys.stderr if file is None else file for l in lines: print(l, file=file) def format_run_info(thread_stacks=True, greenlet_stacks=True, limit=_NONE, current_thread_ident=None): """ format_run_info(thread_stacks=True, greenlet_stacks=True, limit=None) -> [str] Request information about the running threads of the current process. This is a debugging utility. Its output has no guarantees other than being intended for human consumption. :keyword bool thread_stacks: If true, then include the stacks for running threads. :keyword bool greenlet_stacks: If true, then include the stacks for running greenlets. (Spawning stacks will always be printed.) Setting this to False can reduce the output volume considerably without reducing the overall information if *thread_stacks* is true and you can associate a greenlet to a thread (using ``thread_ident`` printed values). :keyword int limit: If given, passed directly to `traceback.format_stack`. If not given, this defaults to the whole stack under CPython, and a smaller stack under PyPy. :return: A sequence of text lines detailing the stacks of running threads and greenlets. (One greenlet will duplicate one thread, the current thread and greenlet. If there are multiple running threads, the stack for the current greenlet may be incorrectly duplicated in multiple greenlets.) Extra information about :class:`gevent.Greenlet` objects will also be returned. .. versionadded:: 1.3a1 .. versionchanged:: 1.3a2 Renamed from ``dump_stacks`` to reflect the fact that this prints additional information about greenlets, including their spawning stack, parent, locals, and any spawn tree locals. .. versionchanged:: 1.3b1 Added the *thread_stacks*, *greenlet_stacks*, and *limit* params.
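    Example — a sketch only; the exact lines produced are unspecified and are
    intended for human reading, so the loop below simply prints them::

        from gevent.util import format_run_info

        for line in format_run_info(greenlet_stacks=False):
            print(line)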
""" if current_thread_ident is None: from gevent import monkey current_thread_ident = monkey.get_original(thread_mod_name, 'get_ident')() lines = [] limit = _STACK_LIMIT if limit is _NONE else limit _format_thread_info(lines, thread_stacks, limit, current_thread_ident) _format_greenlet_info(lines, greenlet_stacks, limit) return lines def is_idle_threadpool_worker(frame): return frame.f_locals and frame.f_locals.get('gevent_threadpool_worker_idle') def _format_thread_info(lines, thread_stacks, limit, current_thread_ident): import threading threads = {th.ident: th for th in threading.enumerate()} lines.append('*' * 80) lines.append('* Threads') thread = None frame = None for thread_ident, frame in sys._current_frames().items(): do_stacks = thread_stacks lines.append("*" * 80) thread = threads.get(thread_ident) name = None if not thread: # Is it an idle threadpool thread? thread pool threads # don't have a Thread object, they're low-level if is_idle_threadpool_worker(frame): name = 'idle threadpool worker' do_stacks = False else: name = thread.name if getattr(thread, 'gevent_monitoring_thread', None): name = repr(thread.gevent_monitoring_thread()) if current_thread_ident == thread_ident: name = '%s) (CURRENT' % (name,) lines.append('Thread 0x%x (%s)\n' % (thread_ident, name)) if do_stacks: lines.append(''.join(traceback.format_stack(frame, limit))) elif not thread_stacks: lines.append('\t...stack elided...') # We may have captured our own frame, creating a reference # cycle, so clear it out. del thread del frame del lines del threads def _format_greenlet_info(lines, greenlet_stacks, limit): # Use the gc module to inspect all objects to find the greenlets # since there isn't a global registry lines.append('*' * 80) lines.append('* Greenlets') lines.append('*' * 80) for tree in sorted(GreenletTree.forest(), key=lambda t: '' if t.is_current_tree else repr(t.greenlet)): lines.append("---- Thread boundary") lines.extend(tree.format_lines(details={ # greenlets from other threads tend to have their current # frame just match our current frame, which is not helpful, # so don't render their stack. 
'running_stacks': greenlet_stacks if tree.is_current_tree else False, 'running_stack_limit': limit, })) del lines dump_stacks = format_run_info def _line(f): @functools.wraps(f) def w(self, *args, **kwargs): r = f(self, *args, **kwargs) self.lines.append(r) return w class _TreeFormatter(object): UP_AND_RIGHT = '+' HORIZONTAL = '-' VERTICAL = '|' VERTICAL_AND_RIGHT = '+' DATA = ':' label_space = 1 horiz_width = 3 indent = 1 def __init__(self, details, depth=0): self.lines = [] self.depth = depth self.details = details if not details: self.child_data = lambda *args, **kwargs: None def deeper(self): return type(self)(self.details, self.depth + 1) @_line def node_label(self, text): return text @_line def child_head(self, label, right=VERTICAL_AND_RIGHT): return ( ' ' * self.indent + right + self.HORIZONTAL * self.horiz_width + ' ' * self.label_space + label ) def last_child_head(self, label): return self.child_head(label, self.UP_AND_RIGHT) @_line def child_tail(self, line, vertical=VERTICAL): return ( ' ' * self.indent + vertical + ' ' * self.horiz_width + line ) def last_child_tail(self, line): return self.child_tail(line, vertical=' ' * len(self.VERTICAL)) @_line def child_data(self, data, data_marker=DATA): # pylint:disable=method-hidden return (( ' ' * self.indent + (data_marker if not self.depth else ' ') + ' ' * self.horiz_width + ' ' * self.label_space + data ),) def last_child_data(self, data): return self.child_data(data, ' ') def child_multidata(self, data): # Remove embedded newlines for l in data.splitlines(): self.child_data(l) class GreenletTree(object): """ Represents a tree of greenlets. In gevent, the *parent* of a greenlet is usually the hub, so this tree is primarily arganized along the *spawning_greenlet* dimension. This object has a small str form showing this hierarchy. The `format` method can output more details. The exact output is unspecified but is intended to be human readable. Use the `forest` method to get the root greenlet trees for all threads, and the `current_tree` to get the root greenlet tree for the current thread. """ #: The greenlet this tree represents. greenlet = None #: Is this tree the root for the current thread? is_current_tree = False def __init__(self, greenlet): self.greenlet = greenlet self.child_trees = [] def add_child(self, tree): if tree is self: return self.child_trees.append(tree) @property def root(self): return self.greenlet.parent is None def __getattr__(self, name): return getattr(self.greenlet, name) DEFAULT_DETAILS = { 'running_stacks': True, 'running_stack_limit': _STACK_LIMIT, 'spawning_stacks': True, 'locals': True, } def format_lines(self, details=True): """ Return a sequence of lines for the greenlet tree. :keyword bool details: If true (the default), then include more informative details in the output. """ if not isinstance(details, dict): if not details: details = {} else: details = self.DEFAULT_DETAILS.copy() else: params = details details = self.DEFAULT_DETAILS.copy() details.update(params) tree = _TreeFormatter(details, depth=0) lines = [l[0] if isinstance(l, tuple) else l for l in self._render(tree)] return lines def format(self, details=True): """ Like `format_lines` but returns a string. """ lines = self.format_lines(details) return '\n'.join(lines) def __str__(self): return self.format(False) # Prior to greenlet 3.0rc1, getting tracebacks of inactive # greenlets could crash on Python 3.12. So we added a # version-based setting here to disable it. 
That's fixed in the # 3.0 final releases, but appears to be back with Python 3.12.1; # this is likely related to https://github.com/python-greenlet/greenlet/issues/388 #_SUPPORTS_TRACEBACK = sys.version_info[:3] < (3, 12, 1) _SUPPORTS_TRACEBACK = True @classmethod def __render_tb(cls, tree, label, frame, limit): tree.child_data(label) if cls._SUPPORTS_TRACEBACK: tb = ''.join(traceback.format_stack(frame, limit)) else: tb = '' tree.child_multidata(tb) @staticmethod def __spawning_parent(greenlet): return (getattr(greenlet, 'spawning_greenlet', None) or _noop)() def __render_locals(self, tree): # Defer the import to avoid cycles from gevent.local import all_local_dicts_for_greenlet gr_locals = all_local_dicts_for_greenlet(self.greenlet) if gr_locals: tree.child_data("Greenlet Locals:") for (kind, idl), vals in gr_locals: if not vals: continue # not set in this greenlet; ignore it. tree.child_data(" Local %s at %s" % (kind, hex(idl))) tree.child_multidata(" " + pprint.pformat(vals)) def _render(self, tree): label = repr(self.greenlet) if not self.greenlet: # Not running or dead # raw greenlets do not have ready if getattr(self.greenlet, 'ready', _ready)(): label += '; finished' if self.greenlet.value is not None: label += ' with value ' + repr(self.greenlet.value)[:30] elif getattr(self.greenlet, 'exception', None) is not None: label += ' with exception ' + repr(self.greenlet.exception) else: label += '; not running' tree.node_label(label) tree.child_data('Parent: ' + repr(self.greenlet.parent)) if getattr(self.greenlet, 'gevent_monitoring_thread', None) is not None: tree.child_data('Monitoring Thread:' + repr(self.greenlet.gevent_monitoring_thread())) if self.greenlet and tree.details and tree.details['running_stacks']: self.__render_tb(tree, 'Running:', self.greenlet.gr_frame, tree.details['running_stack_limit']) spawning_stack = getattr(self.greenlet, 'spawning_stack', None) if spawning_stack and tree.details and tree.details['spawning_stacks']: # We already placed a limit on the spawning stack when we captured it. self.__render_tb(tree, 'Spawned at:', spawning_stack, None) spawning_parent = self.__spawning_parent(self.greenlet) tree_locals = getattr(self.greenlet, 'spawn_tree_locals', None) if tree_locals and tree_locals is not getattr(spawning_parent, 'spawn_tree_locals', None): tree.child_data('Spawn Tree Locals') tree.child_multidata(pprint.pformat(tree_locals)) self.__render_locals(tree) try: self.__render_children(tree) except RuntimeError: # pragma: no cover # If the tree is exceptionally deep, we can hit the recursion error. # Usually it's several levels down so we can make a print call. # This came up in test__semaphore before TestSemaphoreFair # was fixed. print("When rendering children", *sys.exc_info()) return tree.lines def __render_children(self, tree): children = sorted(self.child_trees, key=lambda c: ( # raw greenlets first. Note that we could be accessing # minimal_ident for a hub from a different thread, which isn't # technically thread safe. 
getattr(c, 'minimal_ident', -1), # running greenlets next getattr(c, 'ready', _ready)(), id(c.parent))) for n, child in enumerate(children): child_tree = child._render(tree.deeper()) head = tree.child_head tail = tree.child_tail data = tree.child_data if n == len(children) - 1: # last child does not get the line drawn head = tree.last_child_head tail = tree.last_child_tail data = tree.last_child_data head(child_tree.pop(0)) for child_data in child_tree: if isinstance(child_data, tuple): data(child_data[0]) else: tail(child_data) return tree.lines @staticmethod def _root_greenlet(greenlet): while greenlet.parent is not None and not getattr(greenlet, 'greenlet_tree_is_root', False): greenlet = greenlet.parent return greenlet @classmethod def _forest(cls): from gevent._greenlet_primitives import get_reachable_greenlets main_greenlet = cls._root_greenlet(getcurrent()) trees = {} # greenlet -> GreenletTree roots = {} # root greenlet -> GreenletTree current_tree = roots[main_greenlet] = trees[main_greenlet] = cls(main_greenlet) current_tree.is_current_tree = True root_greenlet = cls._root_greenlet glets = get_reachable_greenlets() for ob in glets: spawn_parent = cls.__spawning_parent(ob) if spawn_parent is None: # spawn parent is dead, or raw greenlet. # reparent under the root. spawn_parent = root_greenlet(ob) if spawn_parent is root_greenlet(spawn_parent) and spawn_parent not in roots: assert spawn_parent not in trees trees[spawn_parent] = roots[spawn_parent] = cls(spawn_parent) try: parent_tree = trees[spawn_parent] except KeyError: # pragma: no cover parent_tree = trees[spawn_parent] = cls(spawn_parent) try: # If the child also happened to be a spawning parent, # we could have seen it before; the reachable greenlets # are in no particular order. child_tree = trees[ob] except KeyError: trees[ob] = child_tree = cls(ob) parent_tree.add_child(child_tree) return roots, current_tree @classmethod def forest(cls): """ forest() -> sequence Return a sequence of `GreenletTree`, one for each running native thread. """ return list(cls._forest()[0].values()) @classmethod def current_tree(cls): """ current_tree() -> GreenletTree Returns the `GreenletTree` for the current thread. """ return cls._forest()[1] class _FailedToSwitch(AssertionError): pass class assert_switches(object): """ A context manager for ensuring a block of code switches greenlets. This performs a similar function as the :doc:`monitoring thread `, but the scope is limited to the body of the with statement. If the code within the body doesn't yield to the hub (and doesn't raise an exception), then upon exiting the context manager an :exc:`AssertionError` will be raised. This is useful in unit tests and for debugging purposes. :keyword float max_blocking_time: If given, the body is allowed to block for up to this many fractional seconds before an error is raised. :keyword bool hub_only: If True, then *max_blocking_time* only refers to the amount of time spent between switches into the hub. If False, then it refers to the maximum time between *any* switches. If *max_blocking_time* is not given, has no effect. Example:: # This will always raise an exception: nothing switched with assert_switches(): pass # This will never raise an exception; nothing switched, # but it happened very fast with assert_switches(max_blocking_time=1.0): pass .. versionadded:: 1.3 .. versionchanged:: 1.4 If an exception is raised, it now includes information about the duration of blocking and the parameters of this object. 
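    A further sketch: ``gevent.sleep`` yields to the hub, so no error is
    raised here (the specific durations are arbitrary)::

        import gevent

        with assert_switches(max_blocking_time=1.0):
            gevent.sleep(0.01)  # switches to the hub well within the allowed time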
""" hub = None tracer = None _entered = None def __init__(self, max_blocking_time=None, hub_only=False): self.max_blocking_time = max_blocking_time self.hub_only = hub_only def __enter__(self): from gevent import get_hub from gevent import _tracer self.hub = hub = get_hub() # TODO: We could optimize this to use the GreenletTracer # installed by the monitoring thread, if there is one. # As it is, we will chain trace calls back to it. if not self.max_blocking_time: self.tracer = _tracer.GreenletTracer() elif self.hub_only: self.tracer = _tracer.HubSwitchTracer(hub, self.max_blocking_time) else: self.tracer = _tracer.MaxSwitchTracer(hub, self.max_blocking_time) self._entered = perf_counter() self.tracer.monitor_current_greenlet_blocking() return self def __exit__(self, t, v, tb): self.tracer.kill() hub = self.hub; self.hub = None tracer = self.tracer; self.tracer = None # Only check if there was no exception raised, we # don't want to hide anything if t is not None: return did_block = tracer.did_block_hub(hub) if did_block: execution_time_s = perf_counter() - self._entered active_greenlet = did_block[1] report_lines = tracer.did_block_hub_report(hub, active_greenlet, {}) message = 'To the hub' if self.hub_only else 'To any greenlet' message += ' in %.4f seconds' % (execution_time_s,) max_block = self.max_blocking_time message += ' (max allowed %.4f seconds)' % (max_block,) if max_block else '' message += '\n' message += '\n'.join(report_lines) raise _FailedToSwitch(message) def clear_stack_frames(frame): """Do our best to clear local variables in all frames in a stack.""" # On Python 3, frames have a .clear() method that can raise a RuntimeError. while frame is not None: try: frame.clear() except (RuntimeError, AttributeError): pass try: frame.f_locals.clear() except AttributeError: # Python 3.13 removed clear(); # f_locals is now a FrameLocalsProxy. pass frame = frame.f_back gevent-24.11.1/src/gevent/win32util.py000066400000000000000000000070651471441230600174530ustar00rootroot00000000000000# Copyright (c) 2001-2007 Twisted Matrix Laboratories. # Permission is hereby granted, free of charge, to any person obtaining # a copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, # distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so, subject to # the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """Error formatting function for Windows. The code is taken from twisted.python.win32 module. """ from __future__ import absolute_import import os __all__ = ['formatError'] class _ErrorFormatter(object): """ Formatter for Windows error messages. @ivar winError: A callable which takes one integer error number argument and returns an L{exceptions.WindowsError} instance for that error (like L{ctypes.WinError}). 
@ivar formatMessage: A callable which takes one integer error number argument and returns a C{str} giving the message for that error (like L{win32api.FormatMessage}). @ivar errorTab: A mapping from integer error numbers to C{str} messages which correspond to those errors (like L{socket.errorTab}). """ def __init__(self, WinError, FormatMessage, errorTab): self.winError = WinError self.formatMessage = FormatMessage self.errorTab = errorTab @classmethod def fromEnvironment(cls): """ Get as many of the platform-specific error translation objects as possible and return an instance of C{cls} created with them. """ try: from ctypes import WinError except ImportError: WinError = None try: from win32api import FormatMessage except ImportError: FormatMessage = None try: from socket import errorTab except ImportError: errorTab = None return cls(WinError, FormatMessage, errorTab) def formatError(self, errorcode): """ Returns the string associated with a Windows error message, such as the ones found in socket.error. Attempts direct lookup against the win32 API via ctypes and then pywin32 if available), then in the error table in the socket module, then finally defaulting to C{os.strerror}. @param errorcode: the Windows error code @type errorcode: C{int} @return: The error message string @rtype: C{str} """ if self.winError is not None: return str(self.winError(errorcode)) if self.formatMessage is not None: return self.formatMessage(errorcode) if self.errorTab is not None: result = self.errorTab.get(errorcode) if result is not None: return result return os.strerror(errorcode) formatError = _ErrorFormatter.fromEnvironment().formatError gevent-24.11.1/src/greentest/000077500000000000000000000000001471441230600157415ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.10/000077500000000000000000000000001471441230600163225ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.10/allsans.pem000066400000000000000000000235711471441230600204720ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB 
zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ 
X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk 
xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/badcert.pem000066400000000000000000000036101471441230600204310ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/badkey.pem000066400000000000000000000041621471441230600202670ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE 
KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/000077500000000000000000000000001471441230600175625ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.10/capath/4e1295a3.0000066400000000000000000000014561471441230600207260ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/5ed36f99.0000066400000000000000000000050111471441230600210160ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY 
gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/6e88d7b8.0000066400000000000000000000014561471441230600210300ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/99d0fa06.0000066400000000000000000000050111471441230600210020ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 
YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/b1930218.0000066400000000000000000000030721471441230600206360ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/capath/ceff1710.0000066400000000000000000000030721471441230600210610ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx 
NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/ffdh3072.pem000066400000000000000000000042441471441230600202540ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.10/idnsans.pem000066400000000000000000000233321471441230600204670ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- 
MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn /+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 
0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 
2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycert.passwd.pem000066400000000000000000000102011471441230600217650ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I 
qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycert.pem000066400000000000000000000077321471441230600205040ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c 
RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycert2.pem000066400000000000000000000077561471441230600205740ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR 
WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq 
QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycert3.pem000066400000000000000000000223501471441230600205600ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 
9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi 
M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycert4.pem000066400000000000000000000223661471441230600205700ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO 
CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 
05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/keycertecc.pem000066400000000000000000000130051471441230600211450ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature 
Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg 
Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/nokia.pem000066400000000000000000000036031471441230600201300ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/nosan.pem000066400000000000000000000170471471441230600201540ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t 
RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 
5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/nullbytecert.pem000066400000000000000000000124731471441230600215500ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org 
Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu 
Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/nullcert.pem000066400000000000000000000000001471441230600206430ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.10/pycacert.pem000066400000000000000000000130401471441230600206350ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 
83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/pycakey.pem000066400000000000000000000046641471441230600205040ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN 
jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.10/revocation.crl000066400000000000000000000014401471441230600211740ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.10/secp384r1.pem000066400000000000000000000004001471441230600204530ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.10/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600246660ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH 
OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/signalinterproctester.py000066400000000000000000000053631471441230600233350ustar00rootroot00000000000000import os import signal import subprocess import sys import time import unittest from test import support class SIGUSR1Exception(Exception): pass class InterProcessSignalTests(unittest.TestCase): def setUp(self): self.got_signals = {'SIGHUP': 0, 'SIGUSR1': 0, 'SIGALRM': 0} def sighup_handler(self, signum, frame): self.got_signals['SIGHUP'] += 1 def sigusr1_handler(self, signum, frame): self.got_signals['SIGUSR1'] += 1 raise SIGUSR1Exception def wait_signal(self, child, signame): if child is not None: # This wait should be interrupted by exc_class # (if set) child.wait() timeout = support.SHORT_TIMEOUT deadline = time.monotonic() + timeout while time.monotonic() < deadline: if self.got_signals[signame]: return signal.pause() self.fail('signal %s not received after %s seconds' % (signame, timeout)) def subprocess_send_signal(self, pid, signame): code = 'import os, signal; os.kill(%s, signal.%s)' % (pid, signame) args = [sys.executable, '-I', '-c', code] return subprocess.Popen(args) def test_interprocess_signal(self): # Install handlers. This function runs in a sub-process, so we # don't worry about re-setting the default handlers. signal.signal(signal.SIGHUP, self.sighup_handler) signal.signal(signal.SIGUSR1, self.sigusr1_handler) signal.signal(signal.SIGUSR2, signal.SIG_IGN) signal.signal(signal.SIGALRM, signal.default_int_handler) # Let the sub-processes know who to send signals to. 
pid = str(os.getpid()) with self.subprocess_send_signal(pid, "SIGHUP") as child: self.wait_signal(child, 'SIGHUP') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 0, 'SIGALRM': 0}) with self.assertRaises(SIGUSR1Exception): with self.subprocess_send_signal(pid, "SIGUSR1") as child: self.wait_signal(child, 'SIGUSR1') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) with self.subprocess_send_signal(pid, "SIGUSR2") as child: # Nothing should happen: SIGUSR2 is ignored child.wait() try: with self.assertRaises(KeyboardInterrupt): signal.alarm(1) self.wait_signal(None, 'SIGALRM') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) finally: signal.alarm(0) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/ssl_cert.pem000066400000000000000000000030421471441230600206420ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/ssl_key.passwd.pem000066400000000000000000000051361471441230600220030ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT 
maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.10/ssl_key.pem000066400000000000000000000046701471441230600205050ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX 
uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.10/talos-2019-0758.pem000066400000000000000000000024621471441230600211450ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.10/test_asyncore.py000066400000000000000000000642011471441230600215610ustar00rootroot00000000000000import unittest import select import os import socket import sys import time import errno import struct import threading from test import support from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper from io import BytesIO if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) import asyncore HAS_UNIX_SOCKETS = hasattr(socket, 'AF_UNIX') class dummysocket: def __init__(self): self.closed = False def close(self): self.closed = True def fileno(self): return 42 class dummychannel: def __init__(self): self.socket = dummysocket() def close(self): self.socket.close() class exitingdummy: def __init__(self): pass def handle_read_event(self): raise asyncore.ExitNow() handle_write_event = handle_read_event handle_close = handle_read_event handle_expt_event = handle_read_event class crashingdummy: def __init__(self): self.error_handled = False def handle_read_event(self): raise Exception() handle_write_event = handle_read_event handle_close = handle_read_event 
handle_expt_event = handle_read_event def handle_error(self): self.error_handled = True # used when testing senders; just collects what it gets until newline is sent def capture_server(evt, buf, serv): try: serv.listen() conn, addr = serv.accept() except TimeoutError: pass else: n = 200 start = time.monotonic() while n > 0 and time.monotonic() - start < 3.0: r, w, e = select.select([conn], [], [], 0.1) if r: n -= 1 data = conn.recv(10) # keep everything except for the newline terminator buf.write(data.replace(b'\n', b'')) if b'\n' in data: break time.sleep(0.01) conn.close() finally: serv.close() evt.set() def bind_af_aware(sock, addr): """Helper function to bind a socket according to its family.""" if HAS_UNIX_SOCKETS and sock.family == socket.AF_UNIX: # Make sure the path doesn't exist. os_helper.unlink(addr) socket_helper.bind_unix_socket(sock, addr) else: sock.bind(addr) class HelperFunctionTests(unittest.TestCase): def test_readwriteexc(self): # Check exception handling behavior of read, write and _exception # check that ExitNow exceptions in the object handler method # bubbles all the way up through asyncore read/write/_exception calls tr1 = exitingdummy() self.assertRaises(asyncore.ExitNow, asyncore.read, tr1) self.assertRaises(asyncore.ExitNow, asyncore.write, tr1) self.assertRaises(asyncore.ExitNow, asyncore._exception, tr1) # check that an exception other than ExitNow in the object handler # method causes the handle_error method to get called tr2 = crashingdummy() asyncore.read(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore.write(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore._exception(tr2) self.assertEqual(tr2.error_handled, True) # asyncore.readwrite uses constants in the select module that # are not present in Windows systems (see this thread: # http://mail.python.org/pipermail/python-list/2001-October/109973.html) # These constants should be present as long as poll is available @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') def test_readwrite(self): # Check that correct methods are called by readwrite() attributes = ('read', 'expt', 'write', 'closed', 'error_handled') expected = ( (select.POLLIN, 'read'), (select.POLLPRI, 'expt'), (select.POLLOUT, 'write'), (select.POLLERR, 'closed'), (select.POLLHUP, 'closed'), (select.POLLNVAL, 'closed'), ) class testobj: def __init__(self): self.read = False self.write = False self.closed = False self.expt = False self.error_handled = False def handle_read_event(self): self.read = True def handle_write_event(self): self.write = True def handle_close(self): self.closed = True def handle_expt_event(self): self.expt = True def handle_error(self): self.error_handled = True for flag, expectedattr in expected: tobj = testobj() self.assertEqual(getattr(tobj, expectedattr), False) asyncore.readwrite(tobj, flag) # Only the attribute modified by the routine we expect to be # called should be True. 
for attr in attributes: self.assertEqual(getattr(tobj, attr), attr==expectedattr) # check that ExitNow exceptions in the object handler method # bubbles all the way up through asyncore readwrite call tr1 = exitingdummy() self.assertRaises(asyncore.ExitNow, asyncore.readwrite, tr1, flag) # check that an exception other than ExitNow in the object handler # method causes the handle_error method to get called tr2 = crashingdummy() self.assertEqual(tr2.error_handled, False) asyncore.readwrite(tr2, flag) self.assertEqual(tr2.error_handled, True) def test_closeall(self): self.closeall_check(False) def test_closeall_default(self): self.closeall_check(True) def closeall_check(self, usedefault): # Check that close_all() closes everything in a given map l = [] testmap = {} for i in range(10): c = dummychannel() l.append(c) self.assertEqual(c.socket.closed, False) testmap[i] = c if usedefault: socketmap = asyncore.socket_map try: asyncore.socket_map = testmap asyncore.close_all() finally: testmap, asyncore.socket_map = asyncore.socket_map, socketmap else: asyncore.close_all(testmap) self.assertEqual(len(testmap), 0) for c in l: self.assertEqual(c.socket.closed, True) def test_compact_traceback(self): try: raise Exception("I don't like spam!") except: real_t, real_v, real_tb = sys.exc_info() r = asyncore.compact_traceback() else: self.fail("Expected exception") (f, function, line), t, v, info = r self.assertEqual(os.path.split(f)[-1], 'test_asyncore.py') self.assertEqual(function, 'test_compact_traceback') self.assertEqual(t, real_t) self.assertEqual(v, real_v) self.assertEqual(info, '[%s|%s|%s]' % (f, function, line)) class DispatcherTests(unittest.TestCase): def setUp(self): pass def tearDown(self): asyncore.close_all() def test_basic(self): d = asyncore.dispatcher() self.assertEqual(d.readable(), True) self.assertEqual(d.writable(), True) def test_repr(self): d = asyncore.dispatcher() self.assertEqual(repr(d), '' % id(d)) def test_log(self): d = asyncore.dispatcher() # capture output of dispatcher.log() (to stderr) l1 = "Lovely spam! Wonderful spam!" l2 = "I don't like spam!" with support.captured_stderr() as stderr: d.log(l1) d.log(l2) lines = stderr.getvalue().splitlines() self.assertEqual(lines, ['log: %s' % l1, 'log: %s' % l2]) def test_log_info(self): d = asyncore.dispatcher() # capture output of dispatcher.log_info() (to stdout via print) l1 = "Have you got anything without spam?" l2 = "Why can't she have egg bacon spam and sausage?" l3 = "THAT'S got spam in it!" 
with support.captured_stdout() as stdout: d.log_info(l1, 'EGGS') d.log_info(l2) d.log_info(l3, 'SPAM') lines = stdout.getvalue().splitlines() expected = ['EGGS: %s' % l1, 'info: %s' % l2, 'SPAM: %s' % l3] self.assertEqual(lines, expected) def test_unhandled(self): d = asyncore.dispatcher() d.ignore_log_types = () # capture output of dispatcher.log_info() (to stdout via print) with support.captured_stdout() as stdout: d.handle_expt() d.handle_read() d.handle_write() d.handle_connect() lines = stdout.getvalue().splitlines() expected = ['warning: unhandled incoming priority event', 'warning: unhandled read event', 'warning: unhandled write event', 'warning: unhandled connect event'] self.assertEqual(lines, expected) def test_strerror(self): # refers to bug #8573 err = asyncore._strerror(errno.EPERM) if hasattr(os, 'strerror'): self.assertEqual(err, os.strerror(errno.EPERM)) err = asyncore._strerror(-1) self.assertTrue(err != "") class dispatcherwithsend_noread(asyncore.dispatcher_with_send): def readable(self): return False def handle_connect(self): pass class DispatcherWithSendTests(unittest.TestCase): def setUp(self): pass def tearDown(self): asyncore.close_all() @threading_helper.reap_threads def test_send(self): evt = threading.Event() sock = socket.socket() sock.settimeout(3) port = socket_helper.bind_port(sock) cap = BytesIO() args = (evt, cap, sock) t = threading.Thread(target=capture_server, args=args) t.start() try: # wait a little longer for the server to initialize (it sometimes # refuses connections on slow machines without this wait) time.sleep(0.2) data = b"Suppose there isn't a 16-ton weight?" d = dispatcherwithsend_noread() d.create_socket() d.connect((socket_helper.HOST, port)) # give time for socket to connect time.sleep(0.1) d.send(data) d.send(data) d.send(b'\n') n = 1000 while d.out_buffer and n > 0: asyncore.poll() n -= 1 evt.wait() self.assertEqual(cap.getvalue(), data*2) finally: threading_helper.join_thread(t) @unittest.skipUnless(hasattr(asyncore, 'file_wrapper'), 'asyncore.file_wrapper required') class FileWrapperTest(unittest.TestCase): def setUp(self): self.d = b"It's not dead, it's sleeping!" with open(os_helper.TESTFN, 'wb') as file: file.write(self.d) def tearDown(self): os_helper.unlink(os_helper.TESTFN) def test_recv(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) w = asyncore.file_wrapper(fd) os.close(fd) self.assertNotEqual(w.fd, fd) self.assertNotEqual(w.fileno(), fd) self.assertEqual(w.recv(13), b"It's not dead") self.assertEqual(w.read(6), b", it's") w.close() self.assertRaises(OSError, w.read, 1) def test_send(self): d1 = b"Come again?" d2 = b"I want to buy some cheese." 
fd = os.open(os_helper.TESTFN, os.O_WRONLY | os.O_APPEND) w = asyncore.file_wrapper(fd) os.close(fd) w.write(d1) w.send(d2) w.close() with open(os_helper.TESTFN, 'rb') as file: self.assertEqual(file.read(), self.d + d1 + d2) @unittest.skipUnless(hasattr(asyncore, 'file_dispatcher'), 'asyncore.file_dispatcher required') def test_dispatcher(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) data = [] class FileDispatcher(asyncore.file_dispatcher): def handle_read(self): data.append(self.recv(29)) s = FileDispatcher(fd) os.close(fd) asyncore.loop(timeout=0.01, use_poll=True, count=2) self.assertEqual(b"".join(data), self.d) def test_resource_warning(self): # Issue #11453 fd = os.open(os_helper.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) with warnings_helper.check_warnings(('', ResourceWarning)): f = None support.gc_collect() def test_close_twice(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) os.close(f.fd) # file_wrapper dupped fd with self.assertRaises(OSError): f.close() self.assertEqual(f.fd, -1) # calling close twice should not fail f.close() class BaseTestHandler(asyncore.dispatcher): def __init__(self, sock=None): asyncore.dispatcher.__init__(self, sock) self.flag = False def handle_accept(self): raise Exception("handle_accept not supposed to be called") def handle_accepted(self): raise Exception("handle_accepted not supposed to be called") def handle_connect(self): raise Exception("handle_connect not supposed to be called") def handle_expt(self): raise Exception("handle_expt not supposed to be called") def handle_close(self): raise Exception("handle_close not supposed to be called") def handle_error(self): raise class BaseServer(asyncore.dispatcher): """A server which listens on an address and dispatches the connection to a handler. 
""" def __init__(self, family, addr, handler=BaseTestHandler): asyncore.dispatcher.__init__(self) self.create_socket(family) self.set_reuse_addr() bind_af_aware(self.socket, addr) self.listen(5) self.handler = handler @property def address(self): return self.socket.getsockname() def handle_accepted(self, sock, addr): self.handler(sock) def handle_error(self): raise class BaseClient(BaseTestHandler): def __init__(self, family, address): BaseTestHandler.__init__(self) self.create_socket(family) self.connect(address) def handle_connect(self): pass class BaseTestAPI: def tearDown(self): asyncore.close_all(ignore_all=True) def loop_waiting_for_flag(self, instance, timeout=5): timeout = float(timeout) / 100 count = 100 while asyncore.socket_map and count > 0: asyncore.loop(timeout=0.01, count=1, use_poll=self.use_poll) if instance.flag: return count -= 1 time.sleep(timeout) self.fail("flag not set") def test_handle_connect(self): # make sure handle_connect is called on connect() class TestClient(BaseClient): def handle_connect(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_accept(self): # make sure handle_accept() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_accepted(self): # make sure handle_accepted() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): asyncore.dispatcher.handle_accept(self) def handle_accepted(self, sock, addr): sock.close() self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_read(self): # make sure handle_read is called on data received class TestClient(BaseClient): def handle_read(self): self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.send(b'x' * 1024) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_write(self): # make sure handle_write is called class TestClient(BaseClient): def handle_write(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close(self): # make sure handle_close is called when the other end closes # the connection class TestClient(BaseClient): def handle_read(self): # in order to make handle_close be called we are supposed # to make at least one recv() call self.recv(1024) def handle_close(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.close() server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close_after_conn_broken(self): # Check 
that ECONNRESET/EPIPE is correctly handled (issues #5661 and # #11265). data = b'\0' * 128 class TestClient(BaseClient): def handle_write(self): self.send(data) def handle_close(self): self.flag = True self.close() def handle_expt(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def handle_read(self): self.recv(len(data)) self.close() def writable(self): return False server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) @unittest.skipIf(sys.platform.startswith("sunos"), "OOB support is broken on Solaris") def test_handle_expt(self): # Make sure handle_expt is called on OOB data received. # Note: this might fail on some platforms as OOB data is # tenuously supported and rarely used. if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") if sys.platform == "darwin" and self.use_poll: self.skipTest("poll may fail on macOS; see issue #28087") class TestClient(BaseClient): def handle_expt(self): self.socket.recv(1024, socket.MSG_OOB) self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.socket.send(bytes(chr(244), 'latin-1'), socket.MSG_OOB) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_error(self): class TestClient(BaseClient): def handle_write(self): 1.0 / 0 def handle_error(self): self.flag = True try: raise except ZeroDivisionError: pass else: raise Exception("exception not raised") server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_connection_attributes(self): server = BaseServer(self.family, self.addr) client = BaseClient(self.family, server.address) # we start disconnected self.assertFalse(server.connected) self.assertTrue(server.accepting) # this can't be taken for granted across all platforms #self.assertFalse(client.connected) self.assertFalse(client.accepting) # execute some loops so that client connects to server asyncore.loop(timeout=0.01, use_poll=self.use_poll, count=100) self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertTrue(client.connected) self.assertFalse(client.accepting) # disconnect the client client.close() self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertFalse(client.connected) self.assertFalse(client.accepting) # stop serving server.close() self.assertFalse(server.connected) self.assertFalse(server.accepting) def test_create_socket(self): s = asyncore.dispatcher() s.create_socket(self.family) self.assertEqual(s.socket.type, socket.SOCK_STREAM) self.assertEqual(s.socket.family, self.family) self.assertEqual(s.socket.gettimeout(), 0) self.assertFalse(s.socket.get_inheritable()) def test_bind(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") s1 = asyncore.dispatcher() s1.create_socket(self.family) s1.bind(self.addr) s1.listen(5) port = s1.socket.getsockname()[1] s2 = asyncore.dispatcher() s2.create_socket(self.family) # EADDRINUSE indicates the socket was correctly bound self.assertRaises(OSError, s2.bind, (self.addr[0], port)) def test_set_reuse_addr(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") with socket.socket(self.family) as sock: try: 
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) except OSError: unittest.skip("SO_REUSEADDR not supported on this platform") else: # if SO_REUSEADDR succeeded for sock we expect asyncore # to do the same s = asyncore.dispatcher(socket.socket(self.family)) self.assertFalse(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) s.socket.close() s.create_socket(self.family) s.set_reuse_addr() self.assertTrue(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) @threading_helper.reap_threads def test_quick_connect(self): # see: http://bugs.python.org/issue10340 if self.family not in (socket.AF_INET, getattr(socket, "AF_INET6", object())): self.skipTest("test specific to AF_INET and AF_INET6") server = BaseServer(self.family, self.addr) # run the thread 500 ms: the socket should be connected in 200 ms t = threading.Thread(target=lambda: asyncore.loop(timeout=0.1, count=5)) t.start() try: with socket.socket(self.family, socket.SOCK_STREAM) as s: s.settimeout(.2) s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) try: s.connect(server.address) except OSError: pass finally: threading_helper.join_thread(t) class TestAPI_UseIPv4Sockets(BaseTestAPI): family = socket.AF_INET addr = (socket_helper.HOST, 0) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 support required') class TestAPI_UseIPv6Sockets(BaseTestAPI): family = socket.AF_INET6 addr = (socket_helper.HOSTv6, 0) @unittest.skipUnless(HAS_UNIX_SOCKETS, 'Unix sockets required') class TestAPI_UseUnixSockets(BaseTestAPI): if HAS_UNIX_SOCKETS: family = socket.AF_UNIX addr = os_helper.TESTFN def tearDown(self): os_helper.unlink(self.addr) BaseTestAPI.tearDown(self) class TestAPI_UseIPv4Select(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv4Poll(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = True class TestAPI_UseIPv6Select(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv6Poll(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = True class TestAPI_UseUnixSocketsSelect(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseUnixSocketsPoll(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = True if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_context.py000066400000000000000000000752021471441230600214250ustar00rootroot00000000000000import concurrent.futures import contextvars import functools import gc import random import time import unittest import weakref try: from _testcapi import hamt except ImportError: hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): def test_context_var_new_1(self): with self.assertRaisesRegex(TypeError, 'takes exactly 1'): contextvars.ContextVar() with self.assertRaisesRegex(TypeError, 'must be a str'): contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = 
contextvars.ContextVar('a', default=123) self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', repr(t)) def test_context_subclassing_1(self): with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContextVar(contextvars.ContextVar): # Potentially we might want ContextVars to be subclassable. pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContext(contextvars.Context): pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyToken(contextvars.Token): pass def test_context_new_1(self): with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1, a=1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) def test_context_run_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'missing 1 required'): ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 
42) ctx.run(fun) def test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num tp = concurrent.futures.ThreadPoolExecutor(max_workers=10) try: results = list(tp.map(sub, range(10))) finally: tp.shutdown() 
self.assertEqual(results, list(range(10))) # HAMT Tests class HashKey: _crasher = None def __init__(self, hash, name, *, error_on_eq_to=None): assert hash != -1 self.name = name self.hash = hash self.error_on_eq_to = error_on_eq_to def __repr__(self): return f'' def __hash__(self): if self._crasher is not None and self._crasher.error_on_hash: raise HashingError return self.hash def __eq__(self, other): if not isinstance(other, HashKey): return NotImplemented if self._crasher is not None and self._crasher.error_on_eq: raise EqError if self.error_on_eq_to is not None and self.error_on_eq_to is other: raise ValueError(f'cannot compare {self!r} to {other!r}') if other.error_on_eq_to is not None and other.error_on_eq_to is self: raise ValueError(f'cannot compare {other!r} to {self!r}') return (self.name, self.hash) == (other.name, other.hash) class KeyStr(str): def __hash__(self): if HashKey._crasher is not None and HashKey._crasher.error_on_hash: raise HashingError return super().__hash__() def __eq__(self, other): if HashKey._crasher is not None and HashKey._crasher.error_on_eq: raise EqError return super().__eq__(other) class HaskKeyCrasher: def __init__(self, *, error_on_hash=False, error_on_eq=False): self.error_on_hash = error_on_hash self.error_on_eq = error_on_eq def __enter__(self): if HashKey._crasher is not None: raise RuntimeError('cannot nest crashers') HashKey._crasher = self def __exit__(self, *exc): HashKey._crasher = None class HashingError(Exception): pass class EqError(Exception): pass @unittest.skipIf(hamt is None, '_testcapi lacks "hamt()" function') class HamtTest(unittest.TestCase): def test_hashkey_helper_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') self.assertNotEqual(k1, k2) self.assertEqual(hash(k1), hash(k2)) d = dict() d[k1] = 'a' d[k2] = 'b' self.assertEqual(d[k1], 'a') self.assertEqual(d[k2], 'b') def test_hamt_basics_1(self): h = hamt() h = None # NoQA def test_hamt_basics_2(self): h = hamt() self.assertEqual(len(h), 0) h2 = h.set('a', 'b') self.assertIsNot(h, h2) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertIsNone(h.get('a')) self.assertEqual(h.get('a', 42), 42) self.assertEqual(h2.get('a'), 'b') h3 = h2.set('b', 10) self.assertIsNot(h2, h3) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(h3.get('a'), 'b') self.assertEqual(h3.get('b'), 10) self.assertIsNone(h.get('b')) self.assertIsNone(h2.get('b')) self.assertIsNone(h.get('a')) self.assertEqual(h2.get('a'), 'b') h = h2 = h3 = None def test_hamt_basics_3(self): h = hamt() o = object() h1 = h.set('1', o) h2 = h1.set('1', o) self.assertIs(h1, h2) def test_hamt_basics_4(self): h = hamt() h1 = h.set('key', []) h2 = h1.set('key', []) self.assertIsNot(h1, h2) self.assertEqual(len(h1), 1) self.assertEqual(len(h2), 1) self.assertIsNot(h1.get('key'), h2.get('key')) def test_hamt_collision_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') k3 = HashKey(10, 'ccc') h = hamt() h2 = h.set(k1, 'a') h3 = h2.set(k2, 'b') self.assertEqual(h.get(k1), None) self.assertEqual(h.get(k2), None) self.assertEqual(h2.get(k1), 'a') self.assertEqual(h2.get(k2), None) self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') h4 = h3.set(k2, 'cc') h5 = h4.set(k3, 'aa') self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') self.assertEqual(h4.get(k1), 'a') self.assertEqual(h4.get(k2), 'cc') self.assertEqual(h4.get(k3), None) self.assertEqual(h5.get(k1), 'a') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k2), 
'cc') self.assertEqual(h5.get(k3), 'aa') self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(len(h4), 2) self.assertEqual(len(h5), 3) def test_hamt_collision_3(self): # Test that iteration works with the deepest tree possible. # https://github.com/python/cpython/issues/93065 C = HashKey(0b10000000_00000000_00000000_00000000, 'C') D = HashKey(0b10000000_00000000_00000000_00000000, 'D') E = HashKey(0b00000000_00000000_00000000_00000000, 'E') h = hamt() h = h.set(C, 'C') h = h.set(D, 'D') h = h.set(E, 'E') # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=4 count=2 bitmap=0b101): # : 'E' # NULL: # CollisionNode(size=4 id=0x107a24520): # : 'C' # : 'D' self.assertEqual({k.name for k in h.keys()}, {'C', 'D', 'E'}) def test_hamt_stress(self): COLLECTION_SIZE = 7000 TEST_ITERS_EVERY = 647 CRASH_HASH_EVERY = 97 CRASH_EQ_EVERY = 11 RUN_XTIMES = 3 for _ in range(RUN_XTIMES): h = hamt() d = dict() for i in range(COLLECTION_SIZE): key = KeyStr(i) if not (i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.set(key, i) h = h.set(key, i) if not (i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.get(KeyStr(i)) # really trigger __eq__ d[key] = i self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.items()), set(d.items())) self.assertEqual(len(h.items()), len(d.items())) self.assertEqual(len(h), COLLECTION_SIZE) for key in range(COLLECTION_SIZE): self.assertEqual(h.get(KeyStr(key), 'not found'), key) keys_to_delete = list(range(COLLECTION_SIZE)) random.shuffle(keys_to_delete) for iter_i, i in enumerate(keys_to_delete): key = KeyStr(i) if not (iter_i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.delete(key) if not (iter_i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.delete(KeyStr(i)) h = h.delete(key) self.assertEqual(h.get(key, 'not found'), 'not found') del d[key] self.assertEqual(len(d), len(h)) if iter_i == COLLECTION_SIZE // 2: hm = h dm = d.copy() if not (iter_i % TEST_ITERS_EVERY): self.assertEqual(set(h.keys()), set(d.keys())) self.assertEqual(len(h.keys()), len(d.keys())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) # ============ for key in dm: self.assertEqual(hm.get(str(key)), dm[key]) self.assertEqual(len(dm), len(hm)) for i, key in enumerate(keys_to_delete): hm = hm.delete(str(key)) self.assertEqual(hm.get(str(key), 'not found'), 'not found') dm.pop(str(key), None) self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.values()), set(d.values())) self.assertEqual(len(h.values()), len(d.values())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) self.assertEqual(list(h.items()), []) def test_hamt_delete_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(102, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(103, 'Er', error_on_eq_to=D) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # : 'a' # : 'b' # : 'c' # : 'd' # : 'e' h = 
h.delete(C) self.assertEqual(len(h), orig_len - 1) with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(D) self.assertEqual(len(h), orig_len - 2) h2 = h.delete(Z) self.assertIs(h2, h) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(A, 42), 42) self.assertEqual(h.get(B), 'b') self.assertEqual(h.get(E), 'e') def test_hamt_delete_2(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(201001, 'Er', error_on_eq_to=B) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=8 bitmap=0b1110010000): # : 'a' # : 'd' # : 'e' # NULL: # BitmapNode(size=4 bitmap=0b100000000001000000000): # : 'b' # : 'c' with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(Z) self.assertEqual(len(h), orig_len) h = h.delete(C) self.assertEqual(len(h), orig_len - 1) h = h.delete(B) self.assertEqual(len(h), orig_len - 2) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(D), 'd') self.assertEqual(h.get(E), 'e') h = h.delete(A) h = h.delete(B) h = h.delete(D) h = h.delete(E) self.assertEqual(len(h), 0) def test_hamt_delete_3(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(104, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=6 bitmap=0b100110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=4 id=0x108572410): # : 'c' # : 'd' # : 'b' # : 'e' h = h.delete(A) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) self.assertEqual(h.get(C), 'c') self.assertEqual(h.get(B), 'b') def test_hamt_delete_4(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=4 bitmap=0b110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=6 id=0x10515ef30): # : 'c' # : 'd' # : 'e' # : 'b' h = h.delete(D) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) h = h.delete(C) self.assertEqual(len(h), orig_len - 3) h = h.delete(A) self.assertEqual(len(h), orig_len - 4) h = h.delete(B) self.assertEqual(len(h), 0) def test_hamt_delete_5(self): h = hamt() keys = [] for i in range(17): key = HashKey(i, str(i)) keys.append(key) h = h.set(key, f'val-{i}') collision_key16 = HashKey(16, '18') h = h.set(collision_key16, 'collision') # ArrayNode(id=0x10f8b9318): # 0:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-0' # # ... 14 more BitmapNodes ... 
# # 15:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-15' # # 16:: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # CollisionNode(size=4 id=0x10f2f5af8): # : 'val-16' # : 'collision' self.assertEqual(len(h), 18) h = h.delete(keys[2]) self.assertEqual(len(h), 17) h = h.delete(collision_key16) self.assertEqual(len(h), 16) h = h.delete(keys[16]) self.assertEqual(len(h), 15) h = h.delete(keys[1]) self.assertEqual(len(h), 14) h = h.delete(keys[1]) self.assertEqual(len(h), 14) for key in keys: h = h.delete(key) self.assertEqual(len(h), 0) def test_hamt_items_1(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_items_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_keys_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) self.assertEqual(set(list(h)), {A, B, C, D, E, F}) def test_hamt_items_3(self): h = hamt() self.assertEqual(len(h.items()), 0) self.assertEqual(list(h.items()), []) def test_hamt_eq_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(120, 'E') h1 = hamt() h1 = h1.set(A, 'a') h1 = h1.set(B, 'b') h1 = h1.set(C, 'c') h1 = h1.set(D, 'd') h2 = hamt() h2 = h2.set(A, 'a') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(B, 'b') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(C, 'c') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd2') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd') self.assertTrue(h1 == h2) self.assertFalse(h1 != h2) h2 = h2.set(E, 'e') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.delete(D) self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(E, 'd') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) def test_hamt_eq_2(self): A = HashKey(100, 'A') Er = HashKey(100, 'Er', error_on_eq_to=A) h1 = hamt() h1 = h1.set(A, 'a') h2 = hamt() h2 = h2.set(Er, 'a') with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 == h2 with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 != h2 def test_hamt_gc_1(self): A = HashKey(100, 'A') h = hamt() h = h.set(0, 0) # empty HAMT node is memoized in hamt.c ref = weakref.ref(h) a = [] a.append(a) a.append(h) b = [] a.append(b) b.append(a) h = h.set(A, b) del h, a, b gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_gc_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 'a') h = h.set(A, h) ref = weakref.ref(h) hi = h.items() next(hi) del h, hi gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_in_1(self): A = HashKey(100, 'A') AA = 
HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertTrue(A in h) self.assertFalse(B in h) with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): AA in h with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): AA in h def test_hamt_getitem_1(self): A = HashKey(100, 'A') AA = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertEqual(h[A], 1) self.assertEqual(h[AA], 1) with self.assertRaises(KeyError): h[B] with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): h[AA] with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_ftplib.py000066400000000000000000001237631471441230600212270ustar00rootroot00000000000000"""Test script for ftplib module.""" # Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS # environment import ftplib import socket import io import errno import os import threading import time import unittest try: import ssl except ImportError: ssl = None from unittest import TestCase, skipUnless from test import support from test.support import threading_helper from test.support import socket_helper from test.support import warnings_helper from test.support.socket_helper import HOST, HOSTv6 import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) import asyncore import asynchat TIMEOUT = support.LOOPBACK_TIMEOUT DEFAULT_ENCODING = 'utf-8' # the dummy data returned by server over the data channel when # RETR, LIST, NLST, MLSD commands are issued RETR_DATA = 'abcde12345\r\n' * 1000 + 'non-ascii char \xAE\r\n' LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n" "type=pdir;perm=e;unique==keVO1+d?3; ..\r\n" "type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n" "type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n" "type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n" "type=file;perm=awr;unique==keVO1+8G4; writable\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n" "type=dir;perm=;unique==keVO1+1t2; no-exec\r\n" "type=file;perm=r;unique==keVO1+EG4; two words\r\n" "type=file;perm=r;unique==keVO1+IH4; leading space\r\n" "type=file;perm=r;unique==keVO1+1G4; file1\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n" "type=file;perm=r;unique==keVO1+1G4; file2\r\n" "type=file;perm=r;unique==keVO1+1G4; file3\r\n" "type=file;perm=r;unique==keVO1+1G4; file4\r\n" "type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n" "type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n") def default_error_handler(): # bpo-44359: Silently ignore socket errors. Such errors occur when a client # socket is closed, in TestFTPClass.tearDown() and makepasv() tests, and # the server gets an error on its side. pass class DummyDTPHandler(asynchat.async_chat): dtp_conn_closed = False def __init__(self, conn, baseclass): asynchat.async_chat.__init__(self, conn) self.baseclass = baseclass self.baseclass.last_received_data = '' self.encoding = baseclass.encoding def handle_read(self): new_data = self.recv(1024).decode(self.encoding, 'replace') self.baseclass.last_received_data += new_data def handle_close(self): # XXX: this method can be called many times in a row for a single # connection, including in clear-text (non-TLS) mode. 
# (behaviour witnessed with test_data_connection) if not self.dtp_conn_closed: self.baseclass.push('226 transfer complete') self.close() self.dtp_conn_closed = True def push(self, what): if self.baseclass.next_data is not None: what = self.baseclass.next_data self.baseclass.next_data = None if not what: return self.close_when_done() super(DummyDTPHandler, self).push(what.encode(self.encoding)) def handle_error(self): default_error_handler() class DummyFTPHandler(asynchat.async_chat): dtp_handler = DummyDTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): asynchat.async_chat.__init__(self, conn) # tells the socket to handle urgent data inline (ABOR command) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1) self.set_terminator(b"\r\n") self.in_buffer = [] self.dtp = None self.last_received_cmd = None self.last_received_data = '' self.next_response = '' self.next_data = None self.rest = None self.next_retr_data = RETR_DATA self.push('220 welcome') self.encoding = encoding # We use this as the string IPv4 address to direct the client # to in response to a PASV command. To test security behavior. # https://bugs.python.org/issue43285/. self.fake_pasv_server_ip = '252.253.254.255' def collect_incoming_data(self, data): self.in_buffer.append(data) def found_terminator(self): line = b''.join(self.in_buffer).decode(self.encoding) self.in_buffer = [] if self.next_response: self.push(self.next_response) self.next_response = '' cmd = line.split(' ')[0].lower() self.last_received_cmd = cmd space = line.find(' ') if space != -1: arg = line[space + 1:] else: arg = "" if hasattr(self, 'cmd_' + cmd): method = getattr(self, 'cmd_' + cmd) method(arg) else: self.push('550 command "%s" not understood.' %cmd) def handle_error(self): default_error_handler() def push(self, data): asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n') def cmd_port(self, arg): addr = list(map(int, arg.split(','))) ip = '%d.%d.%d.%d' %tuple(addr[:4]) port = (addr[4] * 256) + addr[5] s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_pasv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0)) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] ip = self.fake_pasv_server_ip ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256 self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2)) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_eprt(self, arg): af, ip, port = arg.split(arg[0])[1:-1] port = int(port) s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_epsv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0), family=socket.AF_INET6) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] self.push('229 entering extended passive mode (|||%d|)' %port) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_echo(self, arg): # sends back the received string (used by the test suite) self.push(arg) def cmd_noop(self, arg): self.push('200 noop ok') def cmd_user(self, arg): self.push('331 username ok') def cmd_pass(self, arg): self.push('230 password ok') def cmd_acct(self, arg): self.push('230 acct ok') def cmd_rnfr(self, arg): self.push('350 rnfr ok') def cmd_rnto(self, arg): self.push('250 rnto ok') def cmd_dele(self, arg): self.push('250 dele 
ok') def cmd_cwd(self, arg): self.push('250 cwd ok') def cmd_size(self, arg): self.push('250 1000') def cmd_mkd(self, arg): self.push('257 "%s"' %arg) def cmd_rmd(self, arg): self.push('250 rmd ok') def cmd_pwd(self, arg): self.push('257 "pwd ok"') def cmd_type(self, arg): self.push('200 type ok') def cmd_quit(self, arg): self.push('221 quit ok') self.close() def cmd_abor(self, arg): self.push('226 abor ok') def cmd_stor(self, arg): self.push('125 stor ok') def cmd_rest(self, arg): self.rest = arg self.push('350 rest ok') def cmd_retr(self, arg): self.push('125 retr ok') if self.rest is not None: offset = int(self.rest) else: offset = 0 self.dtp.push(self.next_retr_data[offset:]) self.dtp.close_when_done() self.rest = None def cmd_list(self, arg): self.push('125 list ok') self.dtp.push(LIST_DATA) self.dtp.close_when_done() def cmd_nlst(self, arg): self.push('125 nlst ok') self.dtp.push(NLST_DATA) self.dtp.close_when_done() def cmd_opts(self, arg): self.push('200 opts ok') def cmd_mlsd(self, arg): self.push('125 mlsd ok') self.dtp.push(MLSD_DATA) self.dtp.close_when_done() def cmd_setlongretr(self, arg): # For testing. Next RETR will return long line. self.next_retr_data = 'x' * int(arg) self.push('125 setlongretr ok') class DummyFTPServer(asyncore.dispatcher, threading.Thread): handler = DummyFTPHandler def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING): threading.Thread.__init__(self) asyncore.dispatcher.__init__(self) self.daemon = True self.create_socket(af, socket.SOCK_STREAM) self.bind(address) self.listen(5) self.active = False self.active_lock = threading.Lock() self.host, self.port = self.socket.getsockname()[:2] self.handler_instance = None self.encoding = encoding def start(self): assert not self.active self.__flag = threading.Event() threading.Thread.start(self) self.__flag.wait() def run(self): self.active = True self.__flag.set() while self.active and asyncore.socket_map: self.active_lock.acquire() asyncore.loop(timeout=0.1, count=1) self.active_lock.release() asyncore.close_all(ignore_all=True) def stop(self): assert self.active self.active = False self.join() def handle_accepted(self, conn, addr): self.handler_instance = self.handler(conn, encoding=self.encoding) def handle_connect(self): self.close() handle_read = handle_connect def writable(self): return 0 def handle_error(self): default_error_handler() if ssl is not None: CERTFILE = os.path.join(os.path.dirname(__file__), "keycert3.pem") CAFILE = os.path.join(os.path.dirname(__file__), "pycacert.pem") class SSLConnection(asyncore.dispatcher): """An asyncore.dispatcher subclass supporting TLS/SSL.""" _ssl_accepting = False _ssl_closing = False def secure_connection(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain(CERTFILE) socket = context.wrap_socket(self.socket, suppress_ragged_eofs=False, server_side=True, do_handshake_on_connect=False) self.del_channel() self.set_socket(socket) self._ssl_accepting = True def _do_ssl_handshake(self): try: self.socket.do_handshake() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return elif err.args[0] == ssl.SSL_ERROR_EOF: return self.handle_close() # TODO: SSLError does not expose alert information elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]: return self.handle_close() raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def _do_ssl_shutdown(self): self._ssl_closing = True try: self.socket = 
self.socket.unwrap() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return except OSError: # Any "socket error" corresponds to a SSL_ERROR_SYSCALL return # from OpenSSL's SSL_shutdown(), corresponding to a # closed socket condition. See also: # http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html pass self._ssl_closing = False if getattr(self, '_ccc', False) is False: super(SSLConnection, self).close() else: pass def handle_read_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_read_event() def handle_write_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_write_event() def send(self, data): try: return super(SSLConnection, self).send(data) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN, ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return 0 raise def recv(self, buffer_size): try: return super(SSLConnection, self).recv(buffer_size) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return b'' if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN): self.handle_close() return b'' raise def handle_error(self): default_error_handler() def close(self): if (isinstance(self.socket, ssl.SSLSocket) and self.socket._sslobj is not None): self._do_ssl_shutdown() else: super(SSLConnection, self).close() class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler): """A DummyDTPHandler subclass supporting TLS/SSL.""" def __init__(self, conn, baseclass): DummyDTPHandler.__init__(self, conn, baseclass) if self.baseclass.secure_data_channel: self.secure_connection() class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler): """A DummyFTPHandler subclass supporting TLS/SSL.""" dtp_handler = DummyTLS_DTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): DummyFTPHandler.__init__(self, conn, encoding=encoding) self.secure_data_channel = False self._ccc = False def cmd_auth(self, line): """Set up secure control channel.""" self.push('234 AUTH TLS successful') self.secure_connection() def cmd_ccc(self, line): self.push('220 Reverting back to clear-text') self._ccc = True self._do_ssl_shutdown() def cmd_pbsz(self, line): """Negotiate size of buffer for secure data transfer. For TLS/SSL the only valid value for the parameter is '0'. Any other value is accepted but ignored. 
""" self.push('200 PBSZ=0 successful.') def cmd_prot(self, line): """Setup un/secure data channel.""" arg = line.upper() if arg == 'C': self.push('200 Protection set to Clear') self.secure_data_channel = False elif arg == 'P': self.push('200 Protection set to Private') self.secure_data_channel = True else: self.push("502 Unrecognized PROT type (use C or P).") class DummyTLS_FTPServer(DummyFTPServer): handler = DummyTLS_FTPHandler class TestFTPClass(TestCase): def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyFTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def check_data(self, received, expected): self.assertEqual(len(received), len(expected)) self.assertEqual(received, expected) def test_getwelcome(self): self.assertEqual(self.client.getwelcome(), '220 welcome') def test_sanitize(self): self.assertEqual(self.client.sanitize('foo'), repr('foo')) self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****')) self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****')) def test_exceptions(self): self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599') self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999') def test_all_errors(self): exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm, ftplib.error_proto, ftplib.Error, OSError, EOFError) for x in exceptions: try: raise x('exception not included in all_errors set') except ftplib.all_errors: pass def test_set_pasv(self): # passive mode is supposed to be enabled by default self.assertTrue(self.client.passiveserver) self.client.set_pasv(True) self.assertTrue(self.client.passiveserver) self.client.set_pasv(False) self.assertFalse(self.client.passiveserver) def test_voidcmd(self): self.client.voidcmd('echo 200') self.client.voidcmd('echo 299') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300') def test_login(self): self.client.login() def test_acct(self): self.client.acct('passwd') def test_rename(self): self.client.rename('a', 'b') self.server.handler_instance.next_response = '200' self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b') def test_delete(self): self.client.delete('foo') self.server.handler_instance.next_response = '199' self.assertRaises(ftplib.error_reply, self.client.delete, 'foo') def test_size(self): self.client.size('foo') def test_mkd(self): dir = self.client.mkd('/foo') self.assertEqual(dir, '/foo') def test_rmd(self): self.client.rmd('foo') def test_cwd(self): dir = self.client.cwd('/foo') self.assertEqual(dir, '250 cwd ok') def test_pwd(self): dir = self.client.pwd() self.assertEqual(dir, 'pwd ok') def test_quit(self): self.assertEqual(self.client.quit(), '221 quit ok') # Ensure the connection gets closed; sock attribute should be None 
self.assertEqual(self.client.sock, None) def test_abort(self): self.client.abort() def test_retrbinary(self): def callback(data): received.append(data.decode(self.client.encoding)) received = [] self.client.retrbinary('retr', callback) self.check_data(''.join(received), RETR_DATA) def test_retrbinary_rest(self): def callback(data): received.append(data.decode(self.client.encoding)) for rest in (0, 10, 20): received = [] self.client.retrbinary('retr', callback, rest=rest) self.check_data(''.join(received), RETR_DATA[rest:]) def test_retrlines(self): received = [] self.client.retrlines('retr', received.append) self.check_data(''.join(received), RETR_DATA.replace('\r\n', '')) def test_storbinary(self): f = io.BytesIO(RETR_DATA.encode(self.client.encoding)) self.client.storbinary('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA) # test new callback arg flag = [] f.seek(0) self.client.storbinary('stor', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) def test_storbinary_rest(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) for r in (30, '30'): f.seek(0) self.client.storbinary('stor', f, rest=r) self.assertEqual(self.server.handler_instance.rest, str(r)) def test_storlines(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) self.client.storlines('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA) # test new callback arg flag = [] f.seek(0) self.client.storlines('stor foo', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) f = io.StringIO(RETR_DATA.replace('\r\n', '\n')) # storlines() expects a binary file, not a text file with warnings_helper.check_warnings(('', BytesWarning), quiet=True): self.assertRaises(TypeError, self.client.storlines, 'stor foo', f) def test_nlst(self): self.client.nlst() self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1]) def test_dir(self): l = [] self.client.dir(lambda x: l.append(x)) self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', '')) def test_mlsd(self): list(self.client.mlsd()) list(self.client.mlsd(path='/')) list(self.client.mlsd(path='/', facts=['size', 'type'])) ls = list(self.client.mlsd()) for name, facts in ls: self.assertIsInstance(name, str) self.assertIsInstance(facts, dict) self.assertTrue(name) self.assertIn('type', facts) self.assertIn('perm', facts) self.assertIn('unique', facts) def set_data(data): self.server.handler_instance.next_data = data def test_entry(line, type=None, perm=None, unique=None, name=None): type = 'type' if type is None else type perm = 'perm' if perm is None else perm unique = 'unique' if unique is None else unique name = 'name' if name is None else name set_data(line) _name, facts = next(self.client.mlsd()) self.assertEqual(_name, name) self.assertEqual(facts['type'], type) self.assertEqual(facts['perm'], perm) self.assertEqual(facts['unique'], unique) # plain test_entry('type=type;perm=perm;unique=unique; name\r\n') # "=" in fact value test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe") test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type") test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe") test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====") # spaces in name test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me") test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ") 
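        # Editorial note (not part of the upstream test): each test_entry()
        # call pushes one raw MLSD fact line through the dummy server and
        # checks that FTP.mlsd() yields a (name, facts) pair -- for example a
        # line such as 'type=file;perm=r; file1\r\n' is expected to come back
        # as ('file1', {'type': 'file', 'perm': 'r'}).  The calls above cover
        # plain lines, '=' inside fact values and embedded spaces; the calls
        # below cover further space variants, ';' inside names, case-insensitive
        # fact keys, and an empty listing.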
test_entry('type=type;perm=perm;unique=unique; name\r\n', name=" name") test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e") # ";" in name test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me") test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name") test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;") test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;") # case sensitiveness set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n') _name, facts = next(self.client.mlsd()) for x in facts: self.assertTrue(x.islower()) # no data (directory empty) set_data('') self.assertRaises(StopIteration, next, self.client.mlsd()) set_data('') for x in self.client.mlsd(): self.fail("unexpected data %s" % x) def test_makeport(self): with self.client.makeport(): # IPv4 is in use, just make sure send_eprt has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'port') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() # IPv4 is in use, just make sure send_epsv has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv') def test_makepasv_issue43285_security_disabled(self): """Test the opt-in to the old vulnerable behavior.""" self.client.trust_server_pasv_ipv4_address = True bad_host, port = self.client.makepasv() self.assertEqual( bad_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. socket.create_connection((self.client.sock.getpeername()[0], port), timeout=TIMEOUT).close() def test_makepasv_issue43285_security_enabled_default(self): self.assertFalse(self.client.trust_server_pasv_ipv4_address) trusted_host, port = self.client.makepasv() self.assertNotEqual( trusted_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. 
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close() def test_with_statement(self): self.client.quit() def is_client_connected(): if self.client.sock is None: return False try: self.client.sendcmd('noop') except (OSError, EOFError): return False return True # base test with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.assertTrue(is_client_connected()) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # QUIT sent inside the with block with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.client.quit() self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # force a wrong response code to be sent on QUIT: error_perm # is expected and the connection is supposed to be closed try: with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.server.handler_instance.next_response = '550 error on quit' except ftplib.error_perm as err: self.assertEqual(str(err), '550 error on quit') else: self.fail('Exception not raised') # needed to give the threaded server some time to set the attribute # which otherwise would still be == 'noop' time.sleep(0.1) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) def test_source_address(self): self.client.quit() port = socket_helper.find_unused_port() try: self.client.connect(self.server.host, self.server.port, source_address=(HOST, port)) self.assertEqual(self.client.sock.getsockname()[1], port) self.client.quit() except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_source_address_passive_connection(self): port = socket_helper.find_unused_port() self.client.source_address = (HOST, port) try: with self.client.transfercmd('list') as sock: self.assertEqual(sock.getsockname()[1], port) except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_parse257(self): self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar') self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar') self.assertEqual(ftplib.parse257('257 ""'), '') self.assertEqual(ftplib.parse257('257 "" created'), '') self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"') # The 257 response is supposed to include the directory # name and in case it contains embedded double-quotes # they must be doubled (see RFC-959, chapter 7, appendix 2). 
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar') self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar') def test_line_too_long(self): self.assertRaises(ftplib.Error, self.client.sendcmd, 'x' * self.client.maxline * 2) def test_retrlines_too_long(self): self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2)) received = [] self.assertRaises(ftplib.Error, self.client.retrlines, 'retr', received.append) def test_storlines_too_long(self): f = io.BytesIO(b'x' * self.client.maxline * 2) self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f) def test_encoding_param(self): encodings = ['latin-1', 'utf-8'] for encoding in encodings: with self.subTest(encoding=encoding): self.tearDown() self.setUp(encoding=encoding) self.assertEqual(encoding, self.client.encoding) self.test_retrbinary() self.test_storbinary() self.test_retrlines() new_dir = self.client.mkd('/non-ascii dir \xAE') self.check_data(new_dir, '/non-ascii dir \xAE') # Check default encoding client = ftplib.FTP(timeout=TIMEOUT) self.assertEqual(DEFAULT_ENCODING, client.encoding) @skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class TestIPv6Environment(TestCase): def setUp(self): self.server = DummyFTPServer((HOSTv6, 0), af=socket.AF_INET6, encoding=DEFAULT_ENCODING) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_af(self): self.assertEqual(self.client.af, socket.AF_INET6) def test_makeport(self): with self.client.makeport(): self.assertEqual(self.server.handler_instance.last_received_cmd, 'eprt') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv') def test_transfer(self): def retr(): def callback(data): received.append(data.decode(self.client.encoding)) received = [] self.client.retrbinary('retr', callback) self.assertEqual(len(''.join(received)), len(RETR_DATA)) self.assertEqual(''.join(received), RETR_DATA) self.client.set_pasv(True) retr() self.client.set_pasv(False) retr() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClassMixin(TestFTPClass): """Repeat TestFTPClass tests starting the TLS layer for both control and data connections first. 
""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) # enable TLS self.client.auth() self.client.prot_p() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClass(TestCase): """Specific TLS_FTP class tests.""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_control_connection(self): self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIsInstance(self.client.sock, ssl.SSLSocket) def test_data_connection(self): # clear text with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # secured, after PROT P self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIsInstance(sock, ssl.SSLSocket) # consume from SSL socket to finalize handshake and avoid # "SSLError [SSL] shutdown while in init" self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # PROT C is issued, the connection must be in cleartext again self.client.prot_c() with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") def test_login(self): # login() is supposed to implicitly secure the control connection self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.login() self.assertIsInstance(self.client.sock, ssl.SSLSocket) # make sure that AUTH TLS doesn't get issued again self.client.login() def test_auth_issued_twice(self): self.client.auth() self.assertRaises(ValueError, self.client.auth) def test_context(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE self.assertRaises(ValueError, ftplib.FTP_TLS, keyfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, keyfile=CERTFILE, context=ctx) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIs(self.client.sock.context, ctx) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIs(sock.context, ctx) self.assertIsInstance(sock, ssl.SSLSocket) def test_ccc(self): self.assertRaises(ValueError, self.client.ccc) self.client.login(secure=True) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.ccc() self.assertRaises(ValueError, self.client.sock.unwrap) @skipUnless(False, "FIXME: bpo-32706") def test_check_hostname(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.check_hostname, True) ctx.load_verify_locations(CAFILE) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) # 127.0.0.1 doesn't match SAN self.client.connect(self.server.host, self.server.port) with self.assertRaises(ssl.CertificateError): self.client.auth() # exception quits connection self.client.connect(self.server.host, self.server.port) self.client.prot_p() with self.assertRaises(ssl.CertificateError): with self.client.transfercmd("list") as sock: pass self.client.quit() self.client.connect("localhost", self.server.port) self.client.auth() self.client.quit() self.client.connect("localhost", self.server.port) self.client.prot_p() with self.client.transfercmd("list") as sock: pass class TestTimeouts(TestCase): def setUp(self): self.evt = threading.Event() self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.settimeout(20) self.port = socket_helper.bind_port(self.sock) self.server_thread = threading.Thread(target=self.server) self.server_thread.daemon = True self.server_thread.start() # Wait for the server to be ready. self.evt.wait() self.evt.clear() self.old_port = ftplib.FTP.port ftplib.FTP.port = self.port def tearDown(self): ftplib.FTP.port = self.old_port self.server_thread.join() # Explicitly clear the attribute to prevent dangling thread self.server_thread = None def server(self): # This method sets the evt 3 times: # 1) when the connection is ready to be accepted. # 2) when it is safe for the caller to close the connection # 3) when we have closed the socket self.sock.listen() # (1) Signal the caller that we are ready to accept the connection. self.evt.set() try: conn, addr = self.sock.accept() except TimeoutError: pass else: conn.sendall(b"1 Hola mundo\n") conn.shutdown(socket.SHUT_WR) # (2) Signal the caller that it is safe to close the socket. 
self.evt.set() conn.close() finally: self.sock.close() def testTimeoutDefault(self): # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST) finally: socket.setdefaulttimeout(None) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutNone(self): # no timeout -- do not use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST, timeout=None) finally: socket.setdefaulttimeout(None) self.assertIsNone(ftp.sock.gettimeout()) self.evt.wait() ftp.close() def testTimeoutValue(self): # a value ftp = ftplib.FTP(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() # bpo-39259 with self.assertRaises(ValueError): ftplib.FTP(HOST, timeout=0) def testTimeoutConnect(self): ftp = ftplib.FTP() ftp.connect(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDifferentOrder(self): ftp = ftplib.FTP(timeout=30) ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDirectAccess(self): ftp = ftplib.FTP() ftp.timeout = 30 ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() class MiscTestCase(TestCase): def test__all__(self): not_exported = { 'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF', 'Error', 'parse150', 'parse227', 'parse229', 'parse257', 'print_line', 'ftpcp', 'test'} support.check__all__(self, ftplib, not_exported=not_exported) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/greentest/3.10/test_httplib.py000066400000000000000000002362151471441230600214120ustar00rootroot00000000000000import errno from http import client, HTTPStatus import io import itertools import os import array import re import socket import threading import warnings import unittest from unittest import mock TestCase = unittest.TestCase from test import support from test.support import os_helper from test.support import socket_helper from test.support import warnings_helper here = os.path.dirname(__file__) # Self-signed cert file for 'localhost' CERT_localhost = os.path.join(here, 'keycert.pem') # Self-signed cert file for 'fakehostname' CERT_fakehostname = os.path.join(here, 'keycert2.pem') # Self-signed cert file for self-signed.pythontest.net CERT_selfsigned_pythontestdotnet = os.path.join(here, 'selfsigned_pythontestdotnet.pem') # constants for testing chunked encoding chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd! \r\n' '8\r\n' 'and now \r\n' '22\r\n' 'for something completely different\r\n' ) chunked_expected = b'hello world! 
and now for something completely different' chunk_extension = ";foo=bar" last_chunk = "0\r\n" last_chunk_extended = "0" + chunk_extension + "\r\n" trailers = "X-Dummy: foo\r\nX-Dumm2: bar\r\n" chunked_end = "\r\n" HOST = socket_helper.HOST class FakeSocket: def __init__(self, text, fileclass=io.BytesIO, host=None, port=None): if isinstance(text, str): text = text.encode("ascii") self.text = text self.fileclass = fileclass self.data = b'' self.sendall_calls = 0 self.file_closed = False self.host = host self.port = port def sendall(self, data): self.sendall_calls += 1 self.data += data def makefile(self, mode, bufsize=None): if mode != 'r' and mode != 'rb': raise client.UnimplementedFileMode() # keep the file around so we can check how much was read from it self.file = self.fileclass(self.text) self.file.close = self.file_close #nerf close () return self.file def file_close(self): self.file_closed = True def close(self): pass def setsockopt(self, level, optname, value): pass class EPipeSocket(FakeSocket): def __init__(self, text, pipe_trigger): # When sendall() is called with pipe_trigger, raise EPIPE. FakeSocket.__init__(self, text) self.pipe_trigger = pipe_trigger def sendall(self, data): if self.pipe_trigger in data: raise OSError(errno.EPIPE, "gotcha") self.data += data def close(self): pass class NoEOFBytesIO(io.BytesIO): """Like BytesIO, but raises AssertionError on EOF. This is used below to test that http.client doesn't try to read more from the underlying file than it should. """ def read(self, n=-1): data = io.BytesIO.read(self, n) if data == b'': raise AssertionError('caller tried to read past EOF') return data def readline(self, length=None): data = io.BytesIO.readline(self, length) if data == b'': raise AssertionError('caller tried to read past EOF') return data class FakeSocketHTTPConnection(client.HTTPConnection): """HTTPConnection subclass using FakeSocket; counts connect() calls""" def __init__(self, *args): self.connections = 0 super().__init__('example.com') self.fake_socket_args = args self._create_connection = self.create_connection def connect(self): """Count the number of times connect() is invoked""" self.connections += 1 return super().connect() def create_connection(self, *pos, **kw): return FakeSocket(*self.fake_socket_args) class HeaderTests(TestCase): def test_auto_headers(self): # Some headers are added automatically, but should not be added by # .request() if they are explicitly set. 
class HeaderCountingBuffer(list): def __init__(self): self.count = {} def append(self, item): kv = item.split(b':') if len(kv) > 1: # item is a 'Key: Value' header string lcKey = kv[0].decode('ascii').lower() self.count.setdefault(lcKey, 0) self.count[lcKey] += 1 list.append(self, item) for explicit_header in True, False: for header in 'Content-length', 'Host', 'Accept-encoding': conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('blahblahblah') conn._buffer = HeaderCountingBuffer() body = 'spamspamspam' headers = {} if explicit_header: headers[header] = str(len(body)) conn.request('POST', '/', body, headers) self.assertEqual(conn._buffer.count[header.lower()], 1) def test_content_length_0(self): class ContentLengthChecker(list): def __init__(self): list.__init__(self) self.content_length = None def append(self, item): kv = item.split(b':', 1) if len(kv) > 1 and kv[0].lower() == b'content-length': self.content_length = kv[1].strip() list.append(self, item) # Here, we're testing that methods expecting a body get a # content-length set to zero if the body is empty (either None or '') bodies = (None, '') methods_with_body = ('PUT', 'POST', 'PATCH') for method, body in itertools.product(methods_with_body, bodies): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', body) self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # For these methods, we make sure that content-length is not set when # the body is None because it might cause unexpected behaviour on the # server. methods_without_body = ( 'GET', 'CONNECT', 'DELETE', 'HEAD', 'OPTIONS', 'TRACE', ) for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', None) self.assertEqual( conn._buffer.content_length, None, 'Header Content-Length set for empty body on {}'.format(method) ) # If the body is set to '', that's considered to be "present but # empty" rather than "missing", so content length would be set, even # for methods that don't expect a body. for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', '') self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # If the body is set, make sure Content-Length is set. 
for method in itertools.chain(methods_without_body, methods_with_body): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', ' ') self.assertEqual( conn._buffer.content_length, b'1', 'Header Content-Length incorrect on {}'.format(method) ) def test_putheader(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.putrequest('GET','/') conn.putheader('Content-length', 42) self.assertIn(b'Content-length: 42', conn._buffer) conn.putheader('Foo', ' bar ') self.assertIn(b'Foo: bar ', conn._buffer) conn.putheader('Bar', '\tbaz\t') self.assertIn(b'Bar: \tbaz\t', conn._buffer) conn.putheader('Authorization', 'Bearer mytoken') self.assertIn(b'Authorization: Bearer mytoken', conn._buffer) conn.putheader('IterHeader', 'IterA', 'IterB') self.assertIn(b'IterHeader: IterA\r\n\tIterB', conn._buffer) conn.putheader('LatinHeader', b'\xFF') self.assertIn(b'LatinHeader: \xFF', conn._buffer) conn.putheader('Utf8Header', b'\xc3\x80') self.assertIn(b'Utf8Header: \xc3\x80', conn._buffer) conn.putheader('C1-Control', b'next\x85line') self.assertIn(b'C1-Control: next\x85line', conn._buffer) conn.putheader('Embedded-Fold-Space', 'is\r\n allowed') self.assertIn(b'Embedded-Fold-Space: is\r\n allowed', conn._buffer) conn.putheader('Embedded-Fold-Tab', 'is\r\n\tallowed') self.assertIn(b'Embedded-Fold-Tab: is\r\n\tallowed', conn._buffer) conn.putheader('Key Space', 'value') self.assertIn(b'Key Space: value', conn._buffer) conn.putheader('KeySpace ', 'value') self.assertIn(b'KeySpace : value', conn._buffer) conn.putheader(b'Nonbreak\xa0Space', 'value') self.assertIn(b'Nonbreak\xa0Space: value', conn._buffer) conn.putheader(b'\xa0NonbreakSpace', 'value') self.assertIn(b'\xa0NonbreakSpace: value', conn._buffer) def test_ipv6host_header(self): # Default host header on IPv6 transaction should be wrapped by [] if # it is an IPv6 address expected = b'GET /foo HTTP/1.1\r\nHost: [2001::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001::]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [2001:102A::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001:102A::]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) def test_malformed_headers_coped_with(self): # Issue 19996 body = "HTTP/1.1 200 OK\r\nFirst: val\r\n: nval\r\nSecond: val\r\n\r\n" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('First'), 'val') self.assertEqual(resp.getheader('Second'), 'val') def test_parse_all_octets(self): # Ensure no valid header field octet breaks the parser body = ( b'HTTP/1.1 200 OK\r\n' b"!#$%&'*+-.^_`|~: value\r\n" # Special token characters b'VCHAR: ' + bytes(range(0x21, 0x7E + 1)) + b'\r\n' b'obs-text: ' + bytes(range(0x80, 0xFF + 1)) + b'\r\n' b'obs-fold: text\r\n' b' folded with space\r\n' b'\tfolded with tab\r\n' b'Content-Length: 0\r\n' b'\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('Content-Length'), '0') self.assertEqual(resp.msg['Content-Length'], '0') self.assertEqual(resp.getheader("!#$%&'*+-.^_`|~"), 'value') self.assertEqual(resp.msg["!#$%&'*+-.^_`|~"], 'value') vchar = ''.join(map(chr, range(0x21, 0x7E + 1))) self.assertEqual(resp.getheader('VCHAR'), vchar) 
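        # Editorial note (not part of the upstream test): the response parsed
        # above also carries an 'obs-fold' header whose value is continued on
        # two extra physical lines (one starting with a space, one with a tab).
        # The parser is expected to join such folded continuation lines into a
        # single value, which the assertions on `folded` just below verify.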
self.assertEqual(resp.msg['VCHAR'], vchar) self.assertIsNotNone(resp.getheader('obs-text')) self.assertIn('obs-text', resp.msg) for folded in (resp.getheader('obs-fold'), resp.msg['obs-fold']): self.assertTrue(folded.startswith('text')) self.assertIn(' folded with space', folded) self.assertTrue(folded.endswith('folded with tab')) def test_invalid_headers(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/') # http://tools.ietf.org/html/rfc7230#section-3.2.4, whitespace is no # longer allowed in header names cases = ( (b'Invalid\r\nName', b'ValidValue'), (b'Invalid\rName', b'ValidValue'), (b'Invalid\nName', b'ValidValue'), (b'\r\nInvalidName', b'ValidValue'), (b'\rInvalidName', b'ValidValue'), (b'\nInvalidName', b'ValidValue'), (b' InvalidName', b'ValidValue'), (b'\tInvalidName', b'ValidValue'), (b'Invalid:Name', b'ValidValue'), (b':InvalidName', b'ValidValue'), (b'ValidName', b'Invalid\r\nValue'), (b'ValidName', b'Invalid\rValue'), (b'ValidName', b'Invalid\nValue'), (b'ValidName', b'InvalidValue\r\n'), (b'ValidName', b'InvalidValue\r'), (b'ValidName', b'InvalidValue\n'), ) for name, value in cases: with self.subTest((name, value)): with self.assertRaisesRegex(ValueError, 'Invalid header'): conn.putheader(name, value) def test_headers_debuglevel(self): body = ( b'HTTP/1.1 200 OK\r\n' b'First: val\r\n' b'Second: val1\r\n' b'Second: val2\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock, debuglevel=1) with support.captured_stdout() as output: resp.begin() lines = output.getvalue().splitlines() self.assertEqual(lines[0], "reply: 'HTTP/1.1 200 OK\\r\\n'") self.assertEqual(lines[1], "header: First: val") self.assertEqual(lines[2], "header: Second: val1") self.assertEqual(lines[3], "header: Second: val2") class HttpMethodTests(TestCase): def test_invalid_method_names(self): methods = ( 'GET\r', 'POST\n', 'PUT\n\r', 'POST\nValue', 'POST\nHOST:abc', 'GET\nrHost:abc\n', 'POST\rRemainder:\r', 'GET\rHOST:\n', '\nPUT' ) for method in methods: with self.assertRaisesRegex( ValueError, "method can't contain control characters"): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.request(method=method, url="/") class TransferEncodingTest(TestCase): expected_body = b"It's just a flesh wound" def test_endheaders_chunked(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.putrequest('POST', '/') conn.endheaders(self._make_body(), encode_chunked=True) _, _, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) def test_explicit_headers(self): # explicit chunked conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') # this shouldn't actually be automatically chunk-encoded because the # calling code has explicitly stated that it's taking care of it conn.request( 'POST', '/', self._make_body(), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # explicit chunked, string body conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self.expected_body.decode('latin-1'), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) 
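        # Editorial note (not part of the upstream test): with an explicit
        # 'Transfer-Encoding: chunked' header the body is sent verbatim --
        # the caller is responsible for the chunk framing -- so the parsed
        # body equals expected_body here.  When request() performs the
        # encoding itself (encode_chunked=True), each piece is framed as
        # "<hex size>\r\n<data>\r\n" and terminated with "0\r\n\r\n", which
        # is what _parse_chunked() below undoes.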
self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # User-specified TE, but request() does the chunk encoding conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', headers={'Transfer-Encoding': 'gzip, chunked'}, encode_chunked=True, body=self._make_body()) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(headers['Transfer-Encoding'], 'gzip, chunked') self.assertEqual(self._parse_chunked(body), self.expected_body) def test_request(self): for empty_lines in (False, True,): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self._make_body(empty_lines=empty_lines)) _, headers, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) self.assertEqual(headers['Transfer-Encoding'], 'chunked') # Content-Length and Transfer-Encoding SHOULD not be sent in the # same request self.assertNotIn('content-length', [k.lower() for k in headers]) def test_empty_body(self): # Zero-length iterable should be treated like any other iterable conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', ()) _, headers, body = self._parse_request(conn.sock.data) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(body, b"0\r\n\r\n") def _make_body(self, empty_lines=False): lines = self.expected_body.split(b' ') for idx, line in enumerate(lines): # for testing handling empty lines if empty_lines and idx % 2: yield b'' if idx < len(lines) - 1: yield line + b' ' else: yield line def _parse_request(self, data): lines = data.split(b'\r\n') request = lines[0] headers = {} n = 1 while n < len(lines) and len(lines[n]) > 0: key, val = lines[n].split(b':') key = key.decode('latin-1').strip() headers[key] = val.decode('latin-1').strip() n += 1 return request, headers, b'\r\n'.join(lines[n + 1:]) def _parse_chunked(self, data): body = [] trailers = {} n = 0 lines = data.split(b'\r\n') # parse body while True: size, chunk = lines[n:n+2] size = int(size, 16) if size == 0: n += 1 break self.assertEqual(size, len(chunk)) body.append(chunk) n += 2 # we /should/ hit the end chunk, but check against the size of # lines so we're not stuck in an infinite loop should we get # malformed data if n > len(lines): break return b''.join(body) class BasicTest(TestCase): def test_dir_with_added_behavior_on_status(self): # see issue40084 self.assertTrue({'description', 'name', 'phrase', 'value'} <= set(dir(HTTPStatus(404)))) def test_status_lines(self): # Test HTTP status lines body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(0), b'') # Issue #20007 self.assertFalse(resp.isclosed()) self.assertFalse(resp.closed) self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) self.assertRaises(client.BadStatusLine, resp.begin) def test_bad_status_repr(self): exc = client.BadStatusLine('') self.assertEqual(repr(exc), '''BadStatusLine("''")''') def test_partial_reads(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we 
read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_mixed_reads(self): # readline() should update the remaining length, so that read() knows # how much data is left and does not raise IncompleteRead body = "HTTP/1.1 200 Ok\r\nContent-Length: 13\r\n\r\nText\r\nAnother" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.readline(), b'Text\r\n') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(), b'Another') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_past_end(self): # if we have Content-Length, clip reads to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(10), b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_past_end(self): # if we have Content-Length, clip readintos to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(10) n = resp.readinto(b) self.assertEqual(n, 4) self.assertEqual(bytes(b)[:4], b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 
10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) def test_partial_readintos_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:80", "www.python.org", 80), ("www.python.org:", "www.python.org", 80), ("www.python.org", "www.python.org", 80), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 80), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 80)): c = client.HTTPConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_response_headers(self): # test response with multiple message headers with the same field name. text = ('HTTP/1.1 200 OK\r\n' 'Set-Cookie: Customer="WILE_E_COYOTE"; ' 'Version="1"; Path="/acme"\r\n' 'Set-Cookie: Part_Number="Rocket_Launcher_0001"; Version="1";' ' Path="/acme"\r\n' '\r\n' 'No body\r\n') hdr = ('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"' ', ' 'Part_Number="Rocket_Launcher_0001"; Version="1"; Path="/acme"') s = FakeSocket(text) r = client.HTTPResponse(s) r.begin() cookies = r.getheader("Set-Cookie") self.assertEqual(cookies, hdr) def test_read_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() if resp.read(): self.fail("Did not expect response from HEAD request") def test_readinto_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) 
sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) if resp.readinto(b) != 0: self.fail("Did not expect response from HEAD request") self.assertEqual(bytes(b), b'\x00'*5) def test_too_many_headers(self): headers = '\r\n'.join('Header%d: foo' % i for i in range(client._MAXHEADERS + 1)) + '\r\n' text = ('HTTP/1.1 200 OK\r\n' + headers) s = FakeSocket(text) r = client.HTTPResponse(s) self.assertRaisesRegex(client.HTTPException, r"got more than \d+ headers", r.begin) def test_send_file(self): expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' b'Accept-Encoding: identity\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n') with open(__file__, 'rb') as body: conn = client.HTTPConnection('example.com') sock = FakeSocket(body) conn.sock = sock conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected), '%r != %r' % (sock.data[:len(expected)], expected)) def test_send(self): expected = b'this is a test this is only a test' conn = client.HTTPConnection('example.com') sock = FakeSocket(None) conn.sock = sock conn.send(expected) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(array.array('b', expected)) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(io.BytesIO(expected)) self.assertEqual(expected, sock.data) def test_send_updating_file(self): def data(): yield 'data' yield None yield 'data_two' class UpdatingFile(io.TextIOBase): mode = 'r' d = data() def read(self, blocksize=-1): return next(self.d) expected = b'data' conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.send(UpdatingFile()) self.assertEqual(sock.data, expected) def test_send_iter(self): expected = b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' \ b'Accept-Encoding: identity\r\nContent-Length: 11\r\n' \ b'\r\nonetwothree' def body(): yield b"one" yield b"two" yield b"three" conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.request('GET', '/foo', body(), {'Content-Length': '11'}) self.assertEqual(sock.data, expected) def test_blocksize_request(self): """Check that request() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.request("PUT", "/", io.BytesIO(expected), {"Content-Length": "9"}) self.assertEqual(sock.sendall_calls, 3) body = sock.data.split(b"\r\n\r\n", 1)[1] self.assertEqual(body, expected) def test_blocksize_send(self): """Check that send() respects the configured block size.""" blocksize = 8 # For easy debugging. 
conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.send(io.BytesIO(expected)) self.assertEqual(sock.sendall_calls, 2) self.assertEqual(sock.data, expected) def test_send_type_error(self): # See: Issue #12676 conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') with self.assertRaises(TypeError): conn.request('POST', 'test', conn) def test_chunked(self): expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(n) + resp.read(n) + resp.read(), expected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_readinto_chunked(self): expected = chunked_expected nexpected = len(expected) b = bytearray(128) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() n = resp.readinto(b) self.assertEqual(b[:nexpected], expected) self.assertEqual(n, nexpected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() m = memoryview(b) i = resp.readinto(m[0:n]) i += resp.readinto(m[i:n + i]) i += resp.readinto(m[i:]) self.assertEqual(b[:nexpected], expected) self.assertEqual(i, nexpected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: n = resp.readinto(b) except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() self.assertEqual(resp.read(), b'') self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) n = resp.readinto(b) self.assertEqual(n, 0) self.assertEqual(bytes(b), b'\x00'*5) self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_negative_content_length(self): sock 
= FakeSocket( 'HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), b'Hello\r\n') self.assertTrue(resp.isclosed()) def test_incomplete_read(self): sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, b'Hello\r\n') self.assertEqual(repr(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertEqual(str(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertTrue(resp.isclosed()) else: self.fail('IncompleteRead expected') def test_epipe(self): sock = EPipeSocket( "HTTP/1.0 401 Authorization Required\r\n" "Content-type: text/html\r\n" "WWW-Authenticate: Basic realm=\"example\"\r\n", b"Content-Length") conn = client.HTTPConnection("example.com") conn.sock = sock self.assertRaises(OSError, lambda: conn.request("PUT", "/url", "body")) resp = conn.getresponse() self.assertEqual(401, resp.status) self.assertEqual("Basic realm=\"example\"", resp.getheader("www-authenticate")) # Test lines overflowing the max line size (_MAXLINE in http.client) def test_overflowing_status_line(self): body = "HTTP/1.1 200 Ok" + "k" * 65536 + "\r\n" resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises((client.LineTooLong, client.BadStatusLine), resp.begin) def test_overflowing_header_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'X-Foo: bar' + 'r' * 65536 + '\r\n\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises(client.LineTooLong, resp.begin) def test_overflowing_header_limit_after_100(self): body = ( 'HTTP/1.1 100 OK\r\n' 'r\n' * 32768 ) resp = client.HTTPResponse(FakeSocket(body)) with self.assertRaises(client.HTTPException) as cm: resp.begin() # We must assert more because other reasonable errors that we # do not want can also be HTTPException derived. 
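# Roughly what the library is doing when it gives up here -- a minimal
# sketch, not http.client's actual code; the limits are passed in as
# parameters because the real module keeps them in private constants:
def _read_headers_sketch(fp, max_line=65536, max_headers=100):
    headers = []
    while True:
        line = fp.readline(max_line + 1)
        if len(line) > max_line:
            raise ValueError("header line too long")   # LineTooLong in http.client
        headers.append(line)
        if len(headers) > max_headers:
            raise ValueError("got more than %d headers" % max_headers)
        if line in (b"\r\n", b"\n", b""):
            return headers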
self.assertIn('got more than ', str(cm.exception)) self.assertIn('headers', str(cm.exception)) def test_overflowing_chunked_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' + '0' * 65536 + 'a\r\n' 'hello world\r\n' '0\r\n' '\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) resp.begin() self.assertRaises(client.LineTooLong, resp.read) def test_early_eof(self): # Test httpresponse with no \r\n termination, body = "HTTP/1.1 200 Ok" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_error_leak(self): # Test that the socket is not leaked if getresponse() fails conn = client.HTTPConnection('example.com') response = None class Response(client.HTTPResponse): def __init__(self, *pos, **kw): nonlocal response response = self # Avoid garbage collector closing the socket client.HTTPResponse.__init__(self, *pos, **kw) conn.response_class = Response conn.sock = FakeSocket('Invalid status line') conn.request('GET', '/') self.assertRaises(client.BadStatusLine, conn.getresponse) self.assertTrue(response.closed) self.assertTrue(conn.sock.file_closed) def test_chunked_extension(self): extra = '3;foo=bar\r\n' + 'abc\r\n' expected = chunked_expected + b'abc' sock = FakeSocket(chunked_start + extra + last_chunk_extended + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_missing_end(self): """some servers may serve up a short chunked encoding stream""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk) #no terminating crlf resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_trailers(self): """See that trailers are read and ignored""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # we should have reached the end of the file self.assertEqual(sock.file.read(), b"") #we read to the end resp.close() def test_chunked_sync(self): """Check that we don't read past the end of the chunked-encoding stream""" expected = chunked_expected extradata = "extradata" sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata.encode("ascii")) #we read to the end resp.close() def test_content_length_sync(self): """Check that we don't read past the end of the Content-Length stream""" extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readlines_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readlines(2000), [expected]) # the file should now have 
our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(2000), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readline_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readline(10), expected) self.assertEqual(resp.readline(10), b"") # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 30\r\n\r\n' + expected*3 + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(20), expected*2) self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_response_fileno(self): # Make sure fd returned by fileno is valid. serv = socket.create_server((HOST, 0)) self.addCleanup(serv.close) result = None def run_server(): [conn, address] = serv.accept() with conn, conn.makefile("rb") as reader: # Read the request header until a blank line while True: line = reader.readline() if not line.rstrip(b"\r\n"): break conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n") nonlocal result result = reader.read() thread = threading.Thread(target=run_server) thread.start() self.addCleanup(thread.join, float(1)) conn = client.HTTPConnection(*serv.getsockname()) conn.request("CONNECT", "dummy:1234") response = conn.getresponse() try: self.assertEqual(response.status, client.OK) s = socket.socket(fileno=response.fileno()) try: s.sendall(b"proxied data\n") finally: s.detach() finally: response.close() conn.close() thread.join() self.assertEqual(result, b"proxied data\n") def test_putrequest_override_domain_validation(self): """ It should be possible to override the default validation behavior in putrequest (bpo-38216). """ class UnsafeHTTPConnection(client.HTTPConnection): def _validate_path(self, url): pass conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/\x00') def test_putrequest_override_host_validation(self): class UnsafeHTTPConnection(client.HTTPConnection): def _validate_host(self, url): pass conn = UnsafeHTTPConnection('example.com\r\n') conn.sock = FakeSocket('') # set skip_host so a ValueError is not raised upon adding the # invalid URL as the value of the "Host:" header conn.putrequest('GET', '/', skip_host=1) def test_putrequest_override_encoding(self): """ It should be possible to override the default encoding to transmit bytes in another encoding even if invalid (bpo-36274). 
""" class UnsafeHTTPConnection(client.HTTPConnection): def _encode_request(self, str_url): return str_url.encode('utf-8') conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/☃') class ExtendedReadTest(TestCase): """ Test peek(), read1(), readline() """ lines = ( 'HTTP/1.1 200 OK\r\n' '\r\n' 'hello world!\n' 'and now \n' 'for something completely different\n' 'foo' ) lines_expected = lines[lines.find('hello'):].encode("ascii") lines_chunked = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) def setUp(self): sock = FakeSocket(self.lines) resp = client.HTTPResponse(sock, method="GET") resp.begin() resp.fp = io.BufferedReader(resp.fp) self.resp = resp def test_peek(self): resp = self.resp # patch up the buffered peek so that it returns not too much stuff oldpeek = resp.fp.peek def mypeek(n=-1): p = oldpeek(n) if n >= 0: return p[:n] return p[:10] resp.fp.peek = mypeek all = [] while True: # try a short peek p = resp.peek(3) if p: self.assertGreater(len(p), 0) # then unbounded peek p2 = resp.peek() self.assertGreaterEqual(len(p2), len(p)) self.assertTrue(p2.startswith(p)) next = resp.read(len(p2)) self.assertEqual(next, p2) else: next = resp.read() self.assertFalse(next) all.append(next) if not next: break self.assertEqual(b"".join(all), self.lines_expected) def test_readline(self): resp = self.resp self._verify_readline(self.resp.readline, self.lines_expected) def _verify_readline(self, readline, expected): all = [] while True: # short readlines line = readline(5) if line and line != b"foo": if len(line) < 5: self.assertTrue(line.endswith(b"\n")) all.append(line) if not line: break self.assertEqual(b"".join(all), expected) def test_read1(self): resp = self.resp def r(): res = resp.read1(4) self.assertLessEqual(len(res), 4) return res readliner = Readliner(r) self._verify_readline(readliner.readline, self.lines_expected) def test_read1_unbounded(self): resp = self.resp all = [] while True: data = resp.read1() if not data: break all.append(data) self.assertEqual(b"".join(all), self.lines_expected) def test_read1_bounded(self): resp = self.resp all = [] while True: data = resp.read1(10) if not data: break self.assertLessEqual(len(data), 10) all.append(data) self.assertEqual(b"".join(all), self.lines_expected) def test_read1_0(self): self.assertEqual(self.resp.read1(0), b"") def test_peek_0(self): p = self.resp.peek(0) self.assertLessEqual(0, len(p)) class ExtendedReadTestChunked(ExtendedReadTest): """ Test peek(), read1(), readline() in chunked mode """ lines = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) class Readliner: """ a simple readline class that uses an arbitrary read function and buffering """ def __init__(self, readfunc): self.readfunc = readfunc self.remainder = b"" def readline(self, limit): data = [] datalen = 0 read = self.remainder try: while True: idx = read.find(b'\n') if idx != -1: break if datalen + len(read) >= limit: idx = limit - datalen - 1 # read more data data.append(read) read = self.readfunc() if not read: idx = 0 #eof condition break idx += 1 data.append(read[:idx]) self.remainder = read[idx:] return b"".join(data) 
except: self.remainder = b"".join(data) raise class OfflineTest(TestCase): def test_all(self): # Documented objects defined in the module should be in __all__ expected = {"responses"} # Allowlist documented dict() object # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity denylist = {"HTTPMessage", "parse_headers"} for name in dir(client): if name.startswith("_") or name in denylist: continue module_object = getattr(client, name) if getattr(module_object, "__module__", None) == "http.client": expected.add(name) self.assertCountEqual(client.__all__, expected) def test_responses(self): self.assertEqual(client.responses[client.NOT_FOUND], "Not Found") def test_client_constants(self): # Make sure we don't break backward compatibility with 3.4 expected = [ 'CONTINUE', 'SWITCHING_PROTOCOLS', 'PROCESSING', 'OK', 'CREATED', 'ACCEPTED', 'NON_AUTHORITATIVE_INFORMATION', 'NO_CONTENT', 'RESET_CONTENT', 'PARTIAL_CONTENT', 'MULTI_STATUS', 'IM_USED', 'MULTIPLE_CHOICES', 'MOVED_PERMANENTLY', 'FOUND', 'SEE_OTHER', 'NOT_MODIFIED', 'USE_PROXY', 'TEMPORARY_REDIRECT', 'BAD_REQUEST', 'UNAUTHORIZED', 'PAYMENT_REQUIRED', 'FORBIDDEN', 'NOT_FOUND', 'METHOD_NOT_ALLOWED', 'NOT_ACCEPTABLE', 'PROXY_AUTHENTICATION_REQUIRED', 'REQUEST_TIMEOUT', 'CONFLICT', 'GONE', 'LENGTH_REQUIRED', 'PRECONDITION_FAILED', 'REQUEST_ENTITY_TOO_LARGE', 'REQUEST_URI_TOO_LONG', 'UNSUPPORTED_MEDIA_TYPE', 'REQUESTED_RANGE_NOT_SATISFIABLE', 'EXPECTATION_FAILED', 'IM_A_TEAPOT', 'MISDIRECTED_REQUEST', 'UNPROCESSABLE_ENTITY', 'LOCKED', 'FAILED_DEPENDENCY', 'UPGRADE_REQUIRED', 'PRECONDITION_REQUIRED', 'TOO_MANY_REQUESTS', 'REQUEST_HEADER_FIELDS_TOO_LARGE', 'UNAVAILABLE_FOR_LEGAL_REASONS', 'INTERNAL_SERVER_ERROR', 'NOT_IMPLEMENTED', 'BAD_GATEWAY', 'SERVICE_UNAVAILABLE', 'GATEWAY_TIMEOUT', 'HTTP_VERSION_NOT_SUPPORTED', 'INSUFFICIENT_STORAGE', 'NOT_EXTENDED', 'NETWORK_AUTHENTICATION_REQUIRED', 'EARLY_HINTS', 'TOO_EARLY' ] for const in expected: with self.subTest(constant=const): self.assertTrue(hasattr(client, const)) class SourceAddressTest(TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.source_port = socket_helper.find_unused_port() self.serv.listen() self.conn = None def tearDown(self): if self.conn: self.conn.close() self.conn = None self.serv.close() self.serv = None def testHTTPConnectionSourceAddress(self): self.conn = client.HTTPConnection(HOST, self.port, source_address=('', self.source_port)) self.conn.connect() self.assertEqual(self.conn.sock.getsockname()[1], self.source_port) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not defined') def testHTTPSConnectionSourceAddress(self): self.conn = client.HTTPSConnection(HOST, self.port, source_address=('', self.source_port)) # We don't test anything here other than the constructor not barfing as # this code doesn't deal with setting up an active running SSL server # for an ssl_wrapped connect() to actually return from. class TimeoutTest(TestCase): PORT = None def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) TimeoutTest.PORT = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None def testTimeoutAttribute(self): # This will prove that the timeout gets through HTTPConnection # and into the socket. 
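# In rough outline (a sketch, not the library's exact code), connect()
# simply forwards its timeout to socket.create_connection(), which is
# why sock.gettimeout() below reflects whatever was passed in -- or the
# global default installed via socket.setdefaulttimeout():
def _connect_sketch(host, port, timeout=socket._GLOBAL_DEFAULT_TIMEOUT):
    # create_connection() applies `timeout` to the socket it returns.
    return socket.create_connection((host, port), timeout)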
# default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() # no timeout -- do not use global socket default self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=None) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), None) httpConn.close() # a value httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=30) httpConn.connect() self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() class PersistenceTest(TestCase): def test_reuse_reconnect(self): # Should reuse or reconnect depending on header from server tests = ( ('1.0', '', False), ('1.0', 'Connection: keep-alive\r\n', True), ('1.1', '', True), ('1.1', 'Connection: close\r\n', False), ('1.0', 'Connection: keep-ALIVE\r\n', True), ('1.1', 'Connection: cloSE\r\n', False), ) for version, header, reuse in tests: with self.subTest(version=version, header=header): msg = ( 'HTTP/{} 200 OK\r\n' '{}' 'Content-Length: 12\r\n' '\r\n' 'Dummy body\r\n' ).format(version, header) conn = FakeSocketHTTPConnection(msg) self.assertIsNone(conn.sock) conn.request('GET', '/open-connection') with conn.getresponse() as response: self.assertEqual(conn.sock is None, not reuse) response.read() self.assertEqual(conn.sock is None, not reuse) self.assertEqual(conn.connections, 1) conn.request('GET', '/subsequent-request') self.assertEqual(conn.connections, 1 if reuse else 2) def test_disconnected(self): def make_reset_reader(text): """Return BufferedReader that raises ECONNRESET at EOF""" stream = io.BytesIO(text) def readinto(buffer): size = io.BytesIO.readinto(stream, buffer) if size == 0: raise ConnectionResetError() return size stream.readinto = readinto return io.BufferedReader(stream) tests = ( (io.BytesIO, client.RemoteDisconnected), (make_reset_reader, ConnectionResetError), ) for stream_factory, exception in tests: with self.subTest(exception=exception): conn = FakeSocketHTTPConnection(b'', stream_factory) conn.request('GET', '/eof-response') self.assertRaises(exception, conn.getresponse) self.assertIsNone(conn.sock) # HTTPConnection.connect() should be automatically invoked conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) def test_100_close(self): conn = FakeSocketHTTPConnection( b'HTTP/1.1 100 Continue\r\n' b'\r\n' # Missing final response ) conn.request('GET', '/', headers={'Expect': '100-continue'}) self.assertRaises(client.RemoteDisconnected, conn.getresponse) self.assertIsNone(conn.sock) conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) class HTTPSTest(TestCase): def setUp(self): if not hasattr(client, 'HTTPSConnection'): self.skipTest('ssl support required') def make_server(self, certfile): from test.ssl_servers import make_https_server return make_https_server(self, certfile=certfile) def test_attributes(self): # simple test to check it's storing the timeout h = client.HTTPSConnection(HOST, TimeoutTest.PORT, timeout=30) self.assertEqual(h.timeout, 30) def test_networked(self): # Default settings: requires a valid cert from a trusted CA import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): h = client.HTTPSConnection('self-signed.pythontest.net', 443) with 
self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_networked_noverification(self): # Switch off cert verification import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl._create_unverified_context() h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) h.request('GET', '/') resp = h.getresponse() h.close() self.assertIn('nginx', resp.getheader('server')) resp.close() @support.system_must_validate_cert def test_networked_trusted_by_default_cert(self): # Default settings: requires a valid cert from a trusted CA support.requires('network') with socket_helper.transient_internet('www.python.org'): h = client.HTTPSConnection('www.python.org', 443) h.request('GET', '/') resp = h.getresponse() content_type = resp.getheader('content-type') resp.close() h.close() self.assertIn('text/html', content_type) def test_networked_good_cert(self): # We feed the server's cert as a validating cert import ssl support.requires('network') selfsigned_pythontestdotnet = 'self-signed.pythontest.net' with socket_helper.transient_internet(selfsigned_pythontestdotnet): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(context.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(context.check_hostname, True) context.load_verify_locations(CERT_selfsigned_pythontestdotnet) try: h = client.HTTPSConnection(selfsigned_pythontestdotnet, 443, context=context) h.request('GET', '/') resp = h.getresponse() except ssl.SSLError as ssl_err: ssl_err_str = str(ssl_err) # In the error message of [SSL: CERTIFICATE_VERIFY_FAILED] on # modern Linux distros (Debian Buster, etc) default OpenSSL # configurations it'll fail saying "key too weak" until we # address https://bugs.python.org/issue36816 to use a proper # key size on self-signed.pythontest.net. if re.search(r'(?i)key.too.weak', ssl_err_str): raise unittest.SkipTest( f'Got {ssl_err_str} trying to connect ' f'to {selfsigned_pythontestdotnet}. 
' 'See https://bugs.python.org/issue36816.') raise server_string = resp.getheader('server') resp.close() h.close() self.assertIn('nginx', server_string) def test_networked_bad_cert(self): # We feed a "CA" cert that is unrelated to the server's cert import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_unknown_cert(self): # The custom cert isn't known to the default trust bundle import ssl server = self.make_server(CERT_localhost) h = client.HTTPSConnection('localhost', server.port) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_good_hostname(self): # The (valid) cert validates the HTTP hostname import ssl server = self.make_server(CERT_localhost) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('localhost', server.port, context=context) self.addCleanup(h.close) h.request('GET', '/nonexistent') resp = h.getresponse() self.addCleanup(resp.close) self.assertEqual(resp.status, 404) def test_local_bad_hostname(self): # The (valid) cert doesn't validate the HTTP hostname import ssl server = self.make_server(CERT_fakehostname) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_fakehostname) h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # Same with explicit check_hostname=True with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # With check_hostname=False, the mismatching is ignored context.check_hostname = False with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=False) h.request('GET', '/nonexistent') resp = h.getresponse() resp.close() h.close() self.assertEqual(resp.status, 404) # The context's check_hostname setting is used if one isn't passed to # HTTPSConnection. context.check_hostname = False h = client.HTTPSConnection('localhost', server.port, context=context) h.request('GET', '/nonexistent') resp = h.getresponse() self.assertEqual(resp.status, 404) resp.close() h.close() # Passing check_hostname to HTTPSConnection should override the # context's setting. 
with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not available') def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPSConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:443", "www.python.org", 443), ("www.python.org:", "www.python.org", 443), ("www.python.org", "www.python.org", 443), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 443), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 443)): c = client.HTTPSConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_tls13_pha(self): import ssl if not ssl.HAS_TLSv1_3: self.skipTest('TLS 1.3 support required') # just check status of PHA flag h = client.HTTPSConnection('localhost', 443) self.assertTrue(h._context.post_handshake_auth) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertFalse(context.post_handshake_auth) h = client.HTTPSConnection('localhost', 443, context=context) self.assertIs(h._context, context) self.assertFalse(h._context.post_handshake_auth) with warnings.catch_warnings(): warnings.filterwarnings('ignore', 'key_file, cert_file and check_hostname are deprecated', DeprecationWarning) h = client.HTTPSConnection('localhost', 443, context=context, cert_file=CERT_localhost) self.assertTrue(h._context.post_handshake_auth) class RequestBodyTest(TestCase): """Test cases where a request includes a message body.""" def setUp(self): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket("") self.conn.sock = self.sock def get_headers_and_fp(self): f = io.BytesIO(self.sock.data) f.readline() # read the request line message = client.parse_headers(f) return message, f def test_list_body(self): # Note that no content-length is automatically calculated for # an iterable. The request will fall back to send chunked # transfer encoding. cases = ( ([b'foo', b'bar'], b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ((b'foo', b'bar'), b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ) for body, expected in cases: with self.subTest(body): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket('') self.conn.request('PUT', '/url', body) msg, f = self.get_headers_and_fp() self.assertNotIn('Content-Type', msg) self.assertNotIn('Content-Length', msg) self.assertEqual(msg.get('Transfer-Encoding'), 'chunked') self.assertEqual(expected, f.read()) def test_manual_content_length(self): # Set an incorrect content-length so that we can verify that # it will not be over-ridden by the library. 
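# What "not over-ridden" means, sketched with an invented helper: a
# Content-Length is only computed when the caller supplied neither a
# Content-Length nor a Transfer-Encoding header of their own.
def _maybe_add_content_length_sketch(headers, body):
    names = {name.lower() for name in headers}
    if 'content-length' not in names and 'transfer-encoding' not in names:
        headers = dict(headers, **{'Content-Length': str(len(body))})
    return headers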
self.conn.request("PUT", "/url", "body", {"Content-Length": "42"}) message, f = self.get_headers_and_fp() self.assertEqual("42", message.get("content-length")) self.assertEqual(4, len(f.read())) def test_ascii_body(self): self.conn.request("PUT", "/url", "body") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("4", message.get("content-length")) self.assertEqual(b'body', f.read()) def test_latin1_body(self): self.conn.request("PUT", "/url", "body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_bytes_body(self): self.conn.request("PUT", "/url", b"body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_text_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "w", encoding="utf-8") as f: f.write("body") with open(os_helper.TESTFN, encoding="utf-8") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) # No content-length will be determined for files; the body # will be sent using chunked transfer encoding instead. self.assertIsNone(message.get("content-length")) self.assertEqual("chunked", message.get("transfer-encoding")) self.assertEqual(b'4\r\nbody\r\n0\r\n\r\n', f.read()) def test_binary_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "wb") as f: f.write(b"body\xc1") with open(os_helper.TESTFN, "rb") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("chunked", message.get("Transfer-Encoding")) self.assertNotIn("Content-Length", message) self.assertEqual(b'5\r\nbody\xc1\r\n0\r\n\r\n', f.read()) class HTTPResponseTest(TestCase): def setUp(self): body = "HTTP/1.1 200 Ok\r\nMy-Header: first-value\r\nMy-Header: \ second-value\r\n\r\nText" sock = FakeSocket(body) self.resp = client.HTTPResponse(sock) self.resp.begin() def test_getting_header(self): header = self.resp.getheader('My-Header') self.assertEqual(header, 'first-value, second-value') header = self.resp.getheader('My-Header', 'some default') self.assertEqual(header, 'first-value, second-value') def test_getting_nonexistent_header_with_string_default(self): header = self.resp.getheader('No-Such-Header', 'default-value') self.assertEqual(header, 'default-value') def test_getting_nonexistent_header_with_iterable_default(self): header = self.resp.getheader('No-Such-Header', ['default', 'values']) self.assertEqual(header, 'default, values') header = self.resp.getheader('No-Such-Header', ('default', 'values')) self.assertEqual(header, 'default, values') def test_getting_nonexistent_header_without_default(self): header = self.resp.getheader('No-Such-Header') self.assertEqual(header, None) def test_getting_header_defaultint(self): header = self.resp.getheader('No-Such-Header',default=42) self.assertEqual(header, 42) class TunnelTests(TestCase): def setUp(self): response_text = ( 'HTTP/1.0 
200 OK\r\n\r\n' # Reply to CONNECT 'HTTP/1.1 200 OK\r\n' # Reply to HEAD 'Content-Length: 42\r\n\r\n' ) self.host = 'proxy.com' self.conn = client.HTTPConnection(self.host) self.conn._create_connection = self._create_connection(response_text) def tearDown(self): self.conn.close() def _create_connection(self, response_text): def create_connection(address, timeout=None, source_address=None): return FakeSocket(response_text, host=address[0], port=address[1]) return create_connection def test_set_tunnel_host_port_headers(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)'} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_disallow_set_tunnel_after_connect(self): # Once connected, we shouldn't be able to tunnel anymore self.conn.connect() self.assertRaises(RuntimeError, self.conn.set_tunnel, 'destination.com') def test_connect_with_tunnel(self): self.conn.set_tunnel('destination.com') self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) # issue22095 self.assertNotIn(b'Host: destination.com:None', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) # This test should be removed when CONNECT gets the HTTP/1.1 blessing self.assertNotIn(b'Host: proxy.com', self.conn.sock.data) def test_tunnel_connect_single_send_connection_setup(self): """Regression test for https://bugs.python.org/issue43332.""" with mock.patch.object(self.conn, 'send') as mock_send: self.conn.set_tunnel('destination.com') self.conn.connect() self.conn.request('GET', '/') mock_send.assert_called() # Likely 2, but this test only cares about the first.
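# For reference, the shape of the buffer that first send() call should
# carry -- a hand-rolled sketch of the CONNECT preamble, request line
# plus tunnel headers flushed as a single blob ending in a blank line:
def _connect_preamble_sketch(host, port, headers=()):
    lines = ['CONNECT %s:%d HTTP/1.1' % (host, port),
             'Host: %s:%d' % (host, port)]
    lines.extend('%s: %s' % item for item in headers)
    return ('\r\n'.join(lines) + '\r\n\r\n').encode('ascii')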
self.assertGreater( len(mock_send.mock_calls), 1, msg=f'unexpected number of send calls: {mock_send.mock_calls}') proxy_setup_data_sent = mock_send.mock_calls[0][1][0] self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent) self.assertTrue( proxy_setup_data_sent.endswith(b'\r\n\r\n'), msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}') def test_connect_put_request(self): self.conn.set_tunnel('destination.com') self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) def test_tunnel_debuglog(self): expected_header = 'X-Dummy: 1' response_text = 'HTTP/1.0 200 OK\r\n{}\r\n\r\n'.format(expected_header) self.conn.set_debuglevel(1) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') with support.captured_stdout() as output: self.conn.request('PUT', '/', '') lines = output.getvalue().splitlines() self.assertIn('header: {}'.format(expected_header), lines) if __name__ == '__main__': unittest.main(verbosity=2) gevent-24.11.1/src/greentest/3.10/test_select.py000066400000000000000000000065061471441230600212210ustar00rootroot00000000000000import errno import os import select import subprocess import sys import textwrap import unittest from test import support @unittest.skipIf((sys.platform[:3]=='win'), "can't easily test on this system") class SelectTestCase(unittest.TestCase): class Nope: pass class Almost: def fileno(self): return 'fileno' def test_error_conditions(self): self.assertRaises(TypeError, select.select, 1, 2, 3) self.assertRaises(TypeError, select.select, [self.Nope()], [], []) self.assertRaises(TypeError, select.select, [self.Almost()], [], []) self.assertRaises(TypeError, select.select, [], [], [], "not a number") self.assertRaises(ValueError, select.select, [], [], [], -1) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: self.assertEqual(err.errno, errno.EBADF) else: self.fail("exception not raised") def test_returned_list_identity(self): # See issue #8329 r, w, x = select.select([], [], [], 1) self.assertIsNot(r, w) self.assertIsNot(r, x) self.assertIsNot(w, x) @unittest.skipUnless(hasattr(os, 'popen'), "need os.popen()") def test_select(self): code = textwrap.dedent(''' import time for i in range(10): print("testing...", flush=True) time.sleep(0.050) ''') cmd = [sys.executable, '-I', '-c', code] with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc: pipe = proc.stdout for timeout in (0, 1, 2, 4, 8, 16) + (None,)*10: if support.verbose: print(f'timeout = {timeout}') rfd, wfd, xfd = select.select([pipe], [], [], timeout) self.assertEqual(wfd, []) self.assertEqual(xfd, []) if not rfd: continue if rfd == [pipe]: line = pipe.readline() if support.verbose: print(repr(line)) if not line: if support.verbose: print('EOF') break continue self.fail('Unexpected return values from select():', rfd, wfd, xfd) # Issue 16230: Crash on select resized list def test_select_mutated(self): a = [] class F: def fileno(self): del a[-1] return sys.__stdout__.fileno() a[:] = [F()] * 10 self.assertEqual(select.select([], a, []), ([], a[:5], [])) def 
test_disallow_instantiation(self): support.check_disallow_instantiation(self, type(select.poll())) if hasattr(select, 'devpoll'): support.check_disallow_instantiation(self, type(select.devpoll())) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_selectors.py000066400000000000000000000444231471441230600217450ustar00rootroot00000000000000import errno import os import random import selectors import signal import socket import sys from test import support from test.support import os_helper from test.support import socket_helper from time import sleep import unittest import unittest.mock import tempfile from time import monotonic as time try: import resource except ImportError: resource = None if hasattr(socket, 'socketpair'): socketpair = socket.socketpair else: def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): with socket.socket(family, type, proto) as l: l.bind((socket_helper.HOST, 0)) l.listen() c = socket.socket(family, type, proto) try: c.connect(l.getsockname()) caddr = c.getsockname() while True: a, addr = l.accept() # check that we've got the correct client if addr == caddr: return c, a a.close() except OSError: c.close() raise def find_ready_matching(ready, flag): match = [] for key, events in ready: if events & flag: match.append(key.fileobj) return match class BaseSelectorTestCase: def make_socketpair(self): rd, wr = socketpair() self.addCleanup(rd.close) self.addCleanup(wr.close) return rd, wr def test_register(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertIsInstance(key, selectors.SelectorKey) self.assertEqual(key.fileobj, rd) self.assertEqual(key.fd, rd.fileno()) self.assertEqual(key.events, selectors.EVENT_READ) self.assertEqual(key.data, "data") # register an unknown event self.assertRaises(ValueError, s.register, 0, 999999) # register an invalid FD self.assertRaises(ValueError, s.register, -10, selectors.EVENT_READ) # register twice self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) # register the same FD, but with a different object self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) def test_unregister(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.unregister(rd) # unregister an unknown file obj self.assertRaises(KeyError, s.unregister, 999999) # unregister twice self.assertRaises(KeyError, s.unregister, rd) def test_unregister_after_fd_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(r) s.unregister(w) @unittest.skipUnless(os.name == 'posix', "requires posix") def test_unregister_after_fd_close_and_reuse(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd2, wr2 = self.make_socketpair() rd.close() wr.close() os.dup2(rd2.fileno(), r) os.dup2(wr2.fileno(), w) self.addCleanup(os.close, r) self.addCleanup(os.close, w) s.unregister(r) s.unregister(w) def test_unregister_after_socket_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) rd.close() 
wr.close() s.unregister(rd) s.unregister(wr) def test_modify(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ) # modify events key2 = s.modify(rd, selectors.EVENT_WRITE) self.assertNotEqual(key.events, key2.events) self.assertEqual(key2, s.get_key(rd)) s.unregister(rd) # modify data d1 = object() d2 = object() key = s.register(rd, selectors.EVENT_READ, d1) key2 = s.modify(rd, selectors.EVENT_READ, d2) self.assertEqual(key.events, key2.events) self.assertNotEqual(key.data, key2.data) self.assertEqual(key2, s.get_key(rd)) self.assertEqual(key2.data, d2) # modify unknown file obj self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ) # modify use a shortcut d3 = object() s.register = unittest.mock.Mock() s.unregister = unittest.mock.Mock() s.modify(rd, selectors.EVENT_READ, d3) self.assertFalse(s.register.called) self.assertFalse(s.unregister.called) def test_modify_unregister(self): # Make sure the fd is unregister()ed in case of error on # modify(): http://bugs.python.org/issue30014 if self.SELECTOR.__name__ == 'EpollSelector': patch = unittest.mock.patch( 'selectors.EpollSelector._selector_cls') elif self.SELECTOR.__name__ == 'PollSelector': patch = unittest.mock.patch( 'selectors.PollSelector._selector_cls') elif self.SELECTOR.__name__ == 'DevpollSelector': patch = unittest.mock.patch( 'selectors.DevpollSelector._selector_cls') else: raise self.skipTest("") with patch as m: m.return_value.modify = unittest.mock.Mock( side_effect=ZeroDivisionError) s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) self.assertEqual(len(s._map), 1) with self.assertRaises(ZeroDivisionError): s.modify(rd, selectors.EVENT_WRITE) self.assertEqual(len(s._map), 0) def test_close(self): s = self.SELECTOR() self.addCleanup(s.close) mapping = s.get_map() rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) s.close() self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) self.assertRaises(KeyError, mapping.__getitem__, rd) self.assertRaises(KeyError, mapping.__getitem__, wr) def test_get_key(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertEqual(key, s.get_key(rd)) # unknown file obj self.assertRaises(KeyError, s.get_key, 999999) def test_get_map(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() keys = s.get_map() self.assertFalse(keys) self.assertEqual(len(keys), 0) self.assertEqual(list(keys), []) key = s.register(rd, selectors.EVENT_READ, "data") self.assertIn(rd, keys) self.assertEqual(key, keys[rd]) self.assertEqual(len(keys), 1) self.assertEqual(list(keys), [rd.fileno()]) self.assertEqual(list(keys.values()), [key]) # unknown file obj with self.assertRaises(KeyError): keys[999999] # Read-only mapping with self.assertRaises(TypeError): del keys[rd] def test_select(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) wr_key = s.register(wr, selectors.EVENT_WRITE) result = s.select() for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertTrue(events) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) self.assertEqual([(wr_key, selectors.EVENT_WRITE)], result) def test_context_manager(self): s = self.SELECTOR() 
self.addCleanup(s.close) rd, wr = self.make_socketpair() with s as sel: sel.register(rd, selectors.EVENT_READ) sel.register(wr, selectors.EVENT_WRITE) self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) def test_fileno(self): s = self.SELECTOR() self.addCleanup(s.close) if hasattr(s, 'fileno'): fd = s.fileno() self.assertTrue(isinstance(fd, int)) self.assertGreaterEqual(fd, 0) def test_selector(self): s = self.SELECTOR() self.addCleanup(s.close) NUM_SOCKETS = 12 MSG = b" This is a test." MSG_LEN = len(MSG) readers = [] writers = [] r2w = {} w2r = {} for i in range(NUM_SOCKETS): rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) readers.append(rd) writers.append(wr) r2w[rd] = wr w2r[wr] = rd bufs = [] while writers: ready = s.select() ready_writers = find_ready_matching(ready, selectors.EVENT_WRITE) if not ready_writers: self.fail("no sockets ready for writing") wr = random.choice(ready_writers) wr.send(MSG) for i in range(10): ready = s.select() ready_readers = find_ready_matching(ready, selectors.EVENT_READ) if ready_readers: break # there might be a delay between the write to the write end and # the read end is reported ready sleep(0.1) else: self.fail("no sockets ready for reading") self.assertEqual([w2r[wr]], ready_readers) rd = ready_readers[0] buf = rd.recv(MSG_LEN) self.assertEqual(len(buf), MSG_LEN) bufs.append(buf) s.unregister(r2w[rd]) s.unregister(rd) writers.remove(r2w[rd]) self.assertEqual(bufs, [MSG] * NUM_SOCKETS) @unittest.skipIf(sys.platform == 'win32', 'select.select() cannot be used with empty fd sets') def test_empty_select(self): # Issue #23009: Make sure EpollSelector.select() works when no FD is # registered. s = self.SELECTOR() self.addCleanup(s.close) self.assertEqual(s.select(timeout=0), []) def test_timeout(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(wr, selectors.EVENT_WRITE) t = time() self.assertEqual(1, len(s.select(0))) self.assertEqual(1, len(s.select(-1))) self.assertLess(time() - t, 0.5) s.unregister(wr) s.register(rd, selectors.EVENT_READ) t = time() self.assertFalse(s.select(0)) self.assertFalse(s.select(-1)) self.assertLess(time() - t, 0.5) t0 = time() self.assertFalse(s.select(1)) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_exc(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() class InterruptSelect(Exception): pass def handler(*args): raise InterruptSelect orig_alrm_handler = signal.signal(signal.SIGALRM, handler) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal which raises an exception with self.assertRaises(InterruptSelect): s.select(30) # select() was interrupted before the timeout of 30 seconds self.assertLess(time() - t, 5.0) finally: signal.alarm(0) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_noraise(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda *args: None) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is 
interrupted by a signal, but the signal handler doesn't # raise an exception, so select() should by retries with a recomputed # timeout self.assertFalse(s.select(1.5)) self.assertGreaterEqual(time() - t, 1.0) finally: signal.alarm(0) class ScalableSelectorMixIn: # see issue #18963 for why it's skipped on older OS X versions @support.requires_mac_ver(10, 5) @unittest.skipUnless(resource, "Test needs resource module") def test_above_fd_setsize(self): # A scalable implementation should have no problem with more than # FD_SETSIZE file descriptors. Since we don't know the value, we just # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling. soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) try: resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)) self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE, (soft, hard)) NUM_FDS = min(hard, 2**16) except (OSError, ValueError): NUM_FDS = soft # guard for already allocated FDs (stdin, stdout...) NUM_FDS -= 32 s = self.SELECTOR() self.addCleanup(s.close) for i in range(NUM_FDS // 2): try: rd, wr = self.make_socketpair() except OSError: # too many FDs, skip - note that we should only catch EMFILE # here, but apparently *BSD and Solaris can fail upon connect() # or bind() with EADDRNOTAVAIL, so let's be safe self.skipTest("FD limit reached") try: s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) except OSError as e: if e.errno == errno.ENOSPC: # this can be raised by epoll if we go over # fs.epoll.max_user_watches sysctl self.skipTest("FD limit reached") raise try: fds = s.select() except OSError as e: if e.errno == errno.EINVAL and sys.platform == 'darwin': # unexplainable errors on macOS don't need to fail the test self.skipTest("Invalid argument error calling poll()") raise self.assertEqual(NUM_FDS // 2, len(fds)) class DefaultSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.DefaultSelector class SelectSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.SelectSelector @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'PollSelector', None) @unittest.skipUnless(hasattr(selectors, 'EpollSelector'), "Test needs selectors.EpollSelector") class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'EpollSelector', None) def test_register_file(self): # epoll(7) returns EPERM when given a file to watch s = self.SELECTOR() with tempfile.NamedTemporaryFile() as f: with self.assertRaises(IOError): s.register(f, selectors.EVENT_READ) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(f) @unittest.skipUnless(hasattr(selectors, 'KqueueSelector'), "Test needs selectors.KqueueSelector)") class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'KqueueSelector', None) def test_register_bad_fd(self): # a file descriptor that's been closed should raise an OSError # with EBADF s = self.SELECTOR() bad_f = os_helper.make_bad_fd() with self.assertRaises(OSError) as cm: s.register(bad_f, selectors.EVENT_READ) self.assertEqual(cm.exception.errno, errno.EBADF) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(bad_f) def test_empty_select_timeout(self): # Issues #23009, #29255: Make sure timeout is applied when no fds # 
are registered. s = self.SELECTOR() self.addCleanup(s.close) t0 = time() self.assertEqual(s.select(1), []) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(selectors, 'DevpollSelector'), "Test needs selectors.DevpollSelector") class DevpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'DevpollSelector', None) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_signal.py000066400000000000000000001414601471441230600212160ustar00rootroot00000000000000import errno import inspect import os import random import signal import socket import statistics import subprocess import sys import threading import time import unittest from test import support from test.support import os_helper from test.support.script_helper import assert_python_ok, spawn_python try: import _testcapi except ImportError: _testcapi = None class GenericTests(unittest.TestCase): def test_enums(self): for name in dir(signal): sig = getattr(signal, name) if name in {'SIG_DFL', 'SIG_IGN'}: self.assertIsInstance(sig, signal.Handlers) elif name in {'SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'}: self.assertIsInstance(sig, signal.Sigmasks) elif name.startswith('SIG') and not name.startswith('SIG_'): self.assertIsInstance(sig, signal.Signals) elif name.startswith('CTRL_'): self.assertIsInstance(sig, signal.Signals) self.assertEqual(sys.platform, "win32") def test_functions_module_attr(self): # Issue #27718: If __all__ is not defined all non-builtin functions # should have correct __module__ to be displayed by pydoc. for name in dir(signal): value = getattr(signal, name) if inspect.isroutine(value) and not inspect.isbuiltin(value): self.assertEqual(value.__module__, 'signal') @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class PosixTests(unittest.TestCase): def trivial_signal_handler(self, *args): pass def test_out_of_range_signal_number_raises_error(self): self.assertRaises(ValueError, signal.getsignal, 4242) self.assertRaises(ValueError, signal.signal, 4242, self.trivial_signal_handler) self.assertRaises(ValueError, signal.strsignal, 4242) def test_setting_signal_handler_to_none_raises_error(self): self.assertRaises(TypeError, signal.signal, signal.SIGUSR1, None) def test_getsignal(self): hup = signal.signal(signal.SIGHUP, self.trivial_signal_handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), self.trivial_signal_handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) def test_strsignal(self): self.assertIn("Interrupt", signal.strsignal(signal.SIGINT)) self.assertIn("Terminated", signal.strsignal(signal.SIGTERM)) self.assertIn("Hangup", signal.strsignal(signal.SIGHUP)) # Issue 3864, unknown if this affects earlier versions of freebsd also def test_interprocess_signal(self): dirname = os.path.dirname(__file__) script = os.path.join(dirname, 'signalinterproctester.py') assert_python_ok(script) def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertIn(signal.Signals.SIGINT, s) self.assertIn(signal.Signals.SIGALRM, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) @unittest.skipUnless(sys.executable, "sys.executable required.") def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers exit via 
SIGINT.""" process = subprocess.run( [sys.executable, "-c", "import os, signal, time\n" "os.kill(os.getpid(), signal.SIGINT)\n" "for _ in range(999): time.sleep(0.01)"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) self.assertEqual(process.returncode, -signal.SIGINT) # Caveat: The exit code is insufficient to guarantee we actually died # via a signal. POSIX shells do more than look at the 8 bit value. # Writing an automation friendly test of an interactive shell # to confirm that our process died via a SIGINT proved too complex. @unittest.skipUnless(sys.platform == "win32", "Windows specific") class WindowsSignalTests(unittest.TestCase): def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertGreaterEqual(len(s), 6) self.assertIn(signal.Signals.SIGINT, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) def test_issue9324(self): # Updated for issue #10003, adding SIGBREAK handler = lambda x, y: None checked = set() for sig in (signal.SIGABRT, signal.SIGBREAK, signal.SIGFPE, signal.SIGILL, signal.SIGINT, signal.SIGSEGV, signal.SIGTERM): # Set and then reset a handler for signals that work on windows. # Issue #18396, only for signals without a C-level handler. if signal.getsignal(sig) is not None: signal.signal(sig, signal.signal(sig, handler)) checked.add(sig) # Issue #18396: Ensure the above loop at least tested *something* self.assertTrue(checked) with self.assertRaises(ValueError): signal.signal(-1, handler) with self.assertRaises(ValueError): signal.signal(7, handler) @unittest.skipUnless(sys.executable, "sys.executable required.") def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers an exit using STATUS_CONTROL_C_EXIT.""" # We don't test via os.kill(os.getpid(), signal.CTRL_C_EVENT) here # as that requires setting up a console control handler in a child # in its own process group. Doable, but quite complicated. 
(see # @eryksun on https://github.com/python/cpython/pull/11862) process = subprocess.run( [sys.executable, "-c", "raise KeyboardInterrupt"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) STATUS_CONTROL_C_EXIT = 0xC000013A self.assertEqual(process.returncode, STATUS_CONTROL_C_EXIT) class WakeupFDTests(unittest.TestCase): def test_invalid_call(self): # First parameter is positional-only with self.assertRaises(TypeError): signal.set_wakeup_fd(signum=signal.SIGINT) # warn_on_full_buffer is a keyword-only parameter with self.assertRaises(TypeError): signal.set_wakeup_fd(signal.SIGINT, False) def test_invalid_fd(self): fd = os_helper.make_bad_fd() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) def test_invalid_socket(self): sock = socket.socket() fd = sock.fileno() sock.close() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) def test_set_wakeup_fd_result(self): r1, w1 = os.pipe() self.addCleanup(os.close, r1) self.addCleanup(os.close, w1) r2, w2 = os.pipe() self.addCleanup(os.close, r2) self.addCleanup(os.close, w2) if hasattr(os, 'set_blocking'): os.set_blocking(w1, False) os.set_blocking(w2, False) signal.set_wakeup_fd(w1) self.assertEqual(signal.set_wakeup_fd(w2), w1) self.assertEqual(signal.set_wakeup_fd(-1), w2) self.assertEqual(signal.set_wakeup_fd(-1), -1) def test_set_wakeup_fd_socket_result(self): sock1 = socket.socket() self.addCleanup(sock1.close) sock1.setblocking(False) fd1 = sock1.fileno() sock2 = socket.socket() self.addCleanup(sock2.close) sock2.setblocking(False) fd2 = sock2.fileno() signal.set_wakeup_fd(fd1) self.assertEqual(signal.set_wakeup_fd(fd2), fd1) self.assertEqual(signal.set_wakeup_fd(-1), fd2) self.assertEqual(signal.set_wakeup_fd(-1), -1) # On Windows, files are always blocking and Windows does not provide a # function to test if a socket is in non-blocking mode. @unittest.skipIf(sys.platform == "win32", "tests specific to POSIX") def test_set_wakeup_fd_blocking(self): rfd, wfd = os.pipe() self.addCleanup(os.close, rfd) self.addCleanup(os.close, wfd) # fd must be non-blocking os.set_blocking(wfd, True) with self.assertRaises(ValueError) as cm: signal.set_wakeup_fd(wfd) self.assertEqual(str(cm.exception), "the fd %s must be in non-blocking mode" % wfd) # non-blocking is ok os.set_blocking(wfd, False) signal.set_wakeup_fd(wfd) signal.set_wakeup_fd(-1) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class WakeupSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def check_wakeup(self, test_body, *signals, ordered=True): # use a subprocess to have only one thread code = """if 1: import _testcapi import os import signal import struct signals = {!r} def handler(signum, frame): pass def check_signum(signals): data = os.read(read, len(signals)+1) raised = struct.unpack('%uB' % len(data), data) if not {!r}: raised = set(raised) signals = set(signals) if raised != signals: raise Exception("%r != %r" % (raised, signals)) {} signal.signal(signal.SIGALRM, handler) read, write = os.pipe() os.set_blocking(write, False) signal.set_wakeup_fd(write) test() check_signum(signals) os.close(read) os.close(write) """.format(tuple(map(int, signals)), ordered, test_body) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_wakeup_write_error(self): # Issue #16105: write() errors in the C signal handler should not # pass silently. # Use a subprocess to have only one thread. 
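# The machinery these wakeup tests lean on, in miniature -- a
# self-contained, POSIX-only sketch kept separate from the subprocess
# code below: set_wakeup_fd() makes the interpreter's C-level signal
# handler write the signal number as one byte to the fd, so a loop
# blocked in select()/poll() wakes up promptly.
def _wakeup_fd_sketch():
    import os, signal, struct
    read_fd, write_fd = os.pipe()
    os.set_blocking(write_fd, False)      # set_wakeup_fd() requires a non-blocking fd
    signal.set_wakeup_fd(write_fd)
    old_handler = signal.signal(signal.SIGUSR1, lambda *args: None)
    try:
        signal.raise_signal(signal.SIGUSR1)
        (signum,) = struct.unpack('B', os.read(read_fd, 1))
        return signum                     # == signal.SIGUSR1
    finally:
        signal.signal(signal.SIGUSR1, old_handler)
        signal.set_wakeup_fd(-1)
        os.close(read_fd)
        os.close(write_fd)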
code = """if 1: import _testcapi import errno import os import signal import sys from test.support import captured_stderr def handler(signum, frame): 1/0 signal.signal(signal.SIGALRM, handler) r, w = os.pipe() os.set_blocking(r, False) # Set wakeup_fd a read-only file descriptor to trigger the error signal.set_wakeup_fd(r) try: with captured_stderr() as err: signal.raise_signal(signal.SIGALRM) except ZeroDivisionError: # An ignored exception should have been printed out on stderr err = err.getvalue() if ('Exception ignored when trying to write to the signal wakeup fd' not in err): raise AssertionError(err) if ('OSError: [Errno %d]' % errno.EBADF) not in err: raise AssertionError(err) else: raise AssertionError("ZeroDivisionError not raised") os.close(r) os.close(w) """ r, w = os.pipe() try: os.write(r, b'x') except OSError: pass else: self.skipTest("OS doesn't report write() error on the read end of a pipe") finally: os.close(r) os.close(w) assert_python_ok('-c', code) def test_wakeup_fd_early(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) # We attempt to get a signal during the sleep, # before select is called try: select.select([], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") before_time = time.monotonic() select.select([read], [], [], TIMEOUT_FULL) after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_wakeup_fd_during(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) before_time = time.monotonic() # We attempt to get a signal during the select call try: select.select([read], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_signum(self): self.check_wakeup("""def test(): signal.signal(signal.SIGUSR1, handler) signal.raise_signal(signal.SIGUSR1) signal.raise_signal(signal.SIGALRM) """, signal.SIGUSR1, signal.SIGALRM) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pending(self): self.check_wakeup("""def test(): signum1 = signal.SIGUSR1 signum2 = signal.SIGUSR2 signal.signal(signum1, handler) signal.signal(signum2, handler) signal.pthread_sigmask(signal.SIG_BLOCK, (signum1, signum2)) signal.raise_signal(signum1) signal.raise_signal(signum2) # Unblocking the 2 signals calls the C signal handler twice signal.pthread_sigmask(signal.SIG_UNBLOCK, (signum1, signum2)) """, signal.SIGUSR1, signal.SIGUSR2, ordered=False) @unittest.skipUnless(hasattr(socket, 'socketpair'), 'need socket.socketpair') class WakeupSocketSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_socket(self): # use a subprocess to have only one thread code = """if 1: import signal import socket import struct import _testcapi signum = signal.SIGINT signals = (signum,) def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() 
write.setblocking(False) signal.set_wakeup_fd(write.fileno()) signal.raise_signal(signum) data = read.recv(1) if not data: raise Exception("no signum written") raised = struct.unpack('B', data) if raised != signals: raise Exception("%r != %r" % (raised, signals)) read.close() write.close() """ assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_send_error(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() read.setblocking(False) write.setblocking(False) signal.set_wakeup_fd(write.fileno()) # Close sockets: send() will fail read.close() write.close() with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if ('Exception ignored when trying to {action} to the signal wakeup fd' not in err): raise AssertionError(err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_warn_on_full_buffer(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT # This handler will be called, but we intentionally won't read from # the wakeup fd. def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() # Fill the socketpair buffer if sys.platform == 'win32': # bpo-34130: On Windows, sometimes non-blocking send fails to fill # the full socketpair buffer, so use a timeout of 50 ms instead. write.settimeout(0.050) else: write.setblocking(False) written = 0 if sys.platform == "vxworks": CHUNK_SIZES = (1,) else: # Start with large chunk size to reduce the # number of send needed to fill the buffer. 
CHUNK_SIZES = (2 ** 16, 2 ** 8, 1) for chunk_size in CHUNK_SIZES: chunk = b"x" * chunk_size try: while True: write.send(chunk) written += chunk_size except (BlockingIOError, TimeoutError): pass print(f"%s bytes written into the socketpair" % written, flush=True) write.setblocking(False) try: write.send(b"x") except BlockingIOError: # The socketpair buffer seems full pass else: raise AssertionError("%s bytes failed to fill the socketpair " "buffer" % written) # By default, we get a warning when a signal arrives msg = ('Exception ignored when trying to {action} ' 'to the signal wakeup fd') signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("first set_wakeup_fd() test failed, " "stderr: %r" % err) # And also if warn_on_full_buffer=True signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=True) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("set_wakeup_fd(warn_on_full_buffer=True) " "test failed, stderr: %r" % err) # But not if warn_on_full_buffer=False signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=False) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if err != "": raise AssertionError("set_wakeup_fd(warn_on_full_buffer=False) " "test failed, stderr: %r" % err) # And then check the default again, to make sure warn_on_full_buffer # settings don't leak across calls. signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("second set_wakeup_fd() test failed, " "stderr: %r" % err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'siginterrupt'), "needs signal.siginterrupt()") class SiginterruptTest(unittest.TestCase): def readpipe_interrupted(self, interrupt): """Perform a read during which a signal will arrive. Return True if the read is interrupted by the signal and raises an exception. Return False if it returns normally. """ # use a subprocess to have only one thread, to have a timeout on the # blocking read and to not touch signal handling in this process code = """if 1: import errno import os import signal import sys interrupt = %r r, w = os.pipe() def handler(signum, frame): 1 / 0 signal.signal(signal.SIGALRM, handler) if interrupt is not None: signal.siginterrupt(signal.SIGALRM, interrupt) print("ready") sys.stdout.flush() # run the test twice try: for loop in range(2): # send a SIGALRM in a second (during the read) signal.alarm(1) try: # blocking call: read from a pipe without data os.read(r, 1) except ZeroDivisionError: pass else: sys.exit(2) sys.exit(3) finally: os.close(r) os.close(w) """ % (interrupt,) with spawn_python('-c', code) as process: try: # wait until the child process is loaded and has started first_line = process.stdout.readline() stdout, stderr = process.communicate(timeout=support.SHORT_TIMEOUT) except subprocess.TimeoutExpired: process.kill() return False else: stdout = first_line + stdout exitcode = process.wait() if exitcode not in (2, 3): raise Exception("Child error (exit code %s): %r" % (exitcode, stdout)) return (exitcode == 3) def test_without_siginterrupt(self): # If a signal handler is installed and siginterrupt is not called # at all, when that signal arrives, it interrupts a syscall that's in # progress. 
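        # --- Editorial sketch, not part of the vendored CPython test ---
        # A minimal, self-contained illustration of the behaviour these
        # siginterrupt tests probe, assuming a POSIX platform with SIGALRM:
        #
        #     import os, signal
        #     def handler(signum, frame):
        #         raise InterruptedError
        #     r, w = os.pipe()
        #     signal.signal(signal.SIGALRM, handler)
        #     signal.siginterrupt(signal.SIGALRM, False)  # SA_RESTART: restart the syscall
        #     signal.alarm(1)
        #     os.read(r, 1)  # keeps blocking after the alarm; with
        #                    # siginterrupt(..., True) the handler would
        #                    # interrupt the read instead
        # ----------------------------------------------------------------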
        interrupted = self.readpipe_interrupted(None)
        self.assertTrue(interrupted)

    def test_siginterrupt_on(self):
        # If a signal handler is installed and siginterrupt is called with
        # a true value for the second argument, when that signal arrives, it
        # interrupts a syscall that's in progress.
        interrupted = self.readpipe_interrupted(True)
        self.assertTrue(interrupted)

    def test_siginterrupt_off(self):
        # If a signal handler is installed and siginterrupt is called with
        # a false value for the second argument, when that signal arrives, it
        # does not interrupt a syscall that's in progress.
        interrupted = self.readpipe_interrupted(False)
        self.assertFalse(interrupted)


@unittest.skipIf(sys.platform == "win32", "Not valid on Windows")
@unittest.skipUnless(hasattr(signal, 'getitimer') and hasattr(signal, 'setitimer'),
                     "needs signal.getitimer() and signal.setitimer()")
class ItimerTest(unittest.TestCase):
    def setUp(self):
        self.hndl_called = False
        self.hndl_count = 0
        self.itimer = None
        self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm)

    def tearDown(self):
        signal.signal(signal.SIGALRM, self.old_alarm)
        if self.itimer is not None:  # test_itimer_exc doesn't change this attr
            # just ensure that itimer is stopped
            signal.setitimer(self.itimer, 0)

    def sig_alrm(self, *args):
        self.hndl_called = True

    def sig_vtalrm(self, *args):
        self.hndl_called = True

        if self.hndl_count > 3:
            # it shouldn't be here, because it should have been disabled.
            raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL "
                                     "timer.")
        elif self.hndl_count == 3:
            # disable ITIMER_VIRTUAL, this function shouldn't be called anymore
            signal.setitimer(signal.ITIMER_VIRTUAL, 0)

        self.hndl_count += 1

    def sig_prof(self, *args):
        self.hndl_called = True
        signal.setitimer(signal.ITIMER_PROF, 0)

    def test_itimer_exc(self):
        # XXX I'm assuming -1 is an invalid itimer, but maybe some platform
        # defines it ?
        self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0)
        # Negative times are treated as zero on some platforms.
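        # --- Editorial sketch, not part of the vendored CPython test ---
        # For reference, setitimer(which, seconds, interval=0.0) arms the
        # timer to fire after `seconds` and then every `interval` seconds,
        # returning the previous (delay, interval) pair; passing 0 disables it:
        #
        #     old = signal.setitimer(signal.ITIMER_REAL, 0.5, 0.1)
        #     # ... do work; SIGALRM fires after 0.5s, then every 0.1s ...
        #     signal.setitimer(signal.ITIMER_REAL, 0)  # cancel
        # ----------------------------------------------------------------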
        if 0:
            self.assertRaises(signal.ItimerError, signal.setitimer,
                              signal.ITIMER_REAL, -1)

    def test_itimer_real(self):
        self.itimer = signal.ITIMER_REAL
        signal.setitimer(self.itimer, 1.0)
        signal.pause()
        self.assertEqual(self.hndl_called, True)

    # Issue 3864, unknown if this affects earlier versions of freebsd also
    @unittest.skipIf(sys.platform in ('netbsd5',),
        'itimer not reliable (does not mix well with threading) on some BSDs.')
    def test_itimer_virtual(self):
        self.itimer = signal.ITIMER_VIRTUAL
        signal.signal(signal.SIGVTALRM, self.sig_vtalrm)
        signal.setitimer(self.itimer, 0.3, 0.2)

        start_time = time.monotonic()
        while time.monotonic() - start_time < 60.0:
            # use up some virtual time by doing real work
            _ = pow(12345, 67890, 10000019)
            if signal.getitimer(self.itimer) == (0.0, 0.0):
                break  # sig_vtalrm handler stopped this itimer
        else:  # Issue 8424
            self.skipTest("timeout: likely cause: machine too slow or load too "
                          "high")

        # virtual itimer should be (0.0, 0.0) now
        self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0))
        # and the handler should have been called
        self.assertEqual(self.hndl_called, True)

    def test_itimer_prof(self):
        self.itimer = signal.ITIMER_PROF
        signal.signal(signal.SIGPROF, self.sig_prof)
        signal.setitimer(self.itimer, 0.2, 0.2)

        start_time = time.monotonic()
        while time.monotonic() - start_time < 60.0:
            # do some work
            _ = pow(12345, 67890, 10000019)
            if signal.getitimer(self.itimer) == (0.0, 0.0):
                break  # sig_prof handler stopped this itimer
        else:  # Issue 8424
            self.skipTest("timeout: likely cause: machine too slow or load too "
                          "high")

        # profiling itimer should be (0.0, 0.0) now
        self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0))
        # and the handler should have been called
        self.assertEqual(self.hndl_called, True)

    def test_setitimer_tiny(self):
        # bpo-30807: C setitimer() takes a microsecond-resolution interval.
        # Check that float -> timeval conversion doesn't round
        # the interval down to zero, which would disable the timer.
        self.itimer = signal.ITIMER_REAL
        signal.setitimer(self.itimer, 1e-6)
        time.sleep(1)
        self.assertEqual(self.hndl_called, True)


class PendingSignalsTests(unittest.TestCase):
    """
    Test pthread_sigmask(), pthread_kill(), sigpending() and sigwait()
    functions.
""" @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending_empty(self): self.assertEqual(signal.sigpending(), set()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending(self): code = """if 1: import os import signal def handler(signum, frame): 1/0 signum = signal.SIGUSR1 signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) os.kill(os.getpid(), signum) pending = signal.sigpending() for sig in pending: assert isinstance(sig, signal.Signals), repr(pending) if pending != {signum}: raise Exception('%s != {%s}' % (pending, signum)) try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') def test_pthread_kill(self): code = """if 1: import signal import threading import sys signum = signal.SIGUSR1 def handler(signum, frame): 1/0 signal.signal(signum, handler) tid = threading.get_ident() try: signal.pthread_kill(tid, signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def wait_helper(self, blocked, test): """ test: body of the "def test(signum):" function. blocked: number of the blocked signal """ code = '''if 1: import signal import sys from signal import Signals def handler(signum, frame): 1/0 %s blocked = %s signum = signal.SIGALRM # child: block and wait the signal try: signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [blocked]) # Do the tests test(signum) # The handler must not be called on unblock try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [blocked]) except ZeroDivisionError: print("the signal handler has been called", file=sys.stderr) sys.exit(1) except BaseException as err: print("error: {}".format(err), file=sys.stderr) sys.stderr.flush() sys.exit(1) ''' % (test.strip(), blocked) # sig*wait* must be called with the signal blocked: since the current # process might have several threads running, use a subprocess to have # a single thread. 
assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') def test_sigwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) received = signal.sigwait([signum]) assert isinstance(received, signal.Signals), received if received != signum: raise Exception('received %s, not %s' % (received, signum)) ''') @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'), 'need signal.sigwaitinfo()') def test_sigwaitinfo(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigwaitinfo([signum]) if info.si_signo != signum: raise Exception("info.si_signo != %s" % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigtimedwait([signum], 10.1000) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_poll(self): # check that polling with sigtimedwait works self.wait_helper(signal.SIGALRM, ''' def test(signum): import os os.kill(os.getpid(), signum) info = signal.sigtimedwait([signum], 0) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_timeout(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): received = signal.sigtimedwait([signum], 1.0) if received is not None: raise Exception("received=%r" % (received,)) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_negative_timeout(self): signum = signal.SIGALRM self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_sigwait_thread(self): # Check that calling sigwait() from a thread doesn't suspend the whole # process. A new interpreter is spawned to avoid problems when mixing # threads and fork(): only async-safe functions are allowed between # fork() and exec(). 
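        # --- Editorial sketch, not part of the vendored CPython test ---
        # New threads inherit the signal mask of the thread that creates them,
        # so the usual recipe is to block the signal in the main thread
        # *before* spawning workers, then sigwait() for it, roughly:
        #
        #     signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
        #     threading.Thread(target=worker).start()   # worker is hypothetical
        #     signal.sigwait({signal.SIGUSR1})
        # ----------------------------------------------------------------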
assert_python_ok("-c", """if True: import os, threading, sys, time, signal # the default handler terminates the process signum = signal.SIGUSR1 def kill_later(): # wait until the main thread is waiting in sigwait() time.sleep(1) os.kill(os.getpid(), signum) # the signal must be blocked by all the threads signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) killer = threading.Thread(target=kill_later) killer.start() received = signal.sigwait([signum]) if received != signum: print("sigwait() received %s, not %s" % (received, signum), file=sys.stderr) sys.exit(1) killer.join() # unblock the signal, which should have been cleared by sigwait() signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) """) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_arguments(self): self.assertRaises(TypeError, signal.pthread_sigmask) self.assertRaises(TypeError, signal.pthread_sigmask, 1) self.assertRaises(TypeError, signal.pthread_sigmask, 1, 2, 3) self.assertRaises(OSError, signal.pthread_sigmask, 1700, []) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [signal.NSIG]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [0]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [1<<1000]) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_valid_signals(self): s = signal.pthread_sigmask(signal.SIG_BLOCK, signal.valid_signals()) self.addCleanup(signal.pthread_sigmask, signal.SIG_SETMASK, s) # Get current blocked set s = signal.pthread_sigmask(signal.SIG_UNBLOCK, signal.valid_signals()) self.assertLessEqual(s, signal.valid_signals()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask(self): code = """if 1: import signal import os; import threading def handler(signum, frame): 1/0 def kill(signum): os.kill(os.getpid(), signum) def check_mask(mask): for sig in mask: assert isinstance(sig, signal.Signals), repr(sig) def read_sigmask(): sigmask = signal.pthread_sigmask(signal.SIG_BLOCK, []) check_mask(sigmask) return sigmask signum = signal.SIGUSR1 # Install our signal handler old_handler = signal.signal(signum, handler) # Unblock SIGUSR1 (and copy the old mask) to test our signal handler old_mask = signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) check_mask(old_mask) try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Block and then raise SIGUSR1. 
The signal is blocked: the signal # handler is not called, and the signal is now pending mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) check_mask(mask) kill(signum) # Check the new mask blocked = read_sigmask() check_mask(blocked) if signum not in blocked: raise Exception("%s not in %s" % (signum, blocked)) if old_mask ^ blocked != {signum}: raise Exception("%s ^ %s != {%s}" % (old_mask, blocked, signum)) # Unblock SIGUSR1 try: # unblock the pending signal calls immediately the signal handler signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Check the new mask unblocked = read_sigmask() if signum in unblocked: raise Exception("%s in %s" % (signum, unblocked)) if blocked ^ unblocked != {signum}: raise Exception("%s ^ %s != {%s}" % (blocked, unblocked, signum)) if old_mask != unblocked: raise Exception("%s != %s" % (old_mask, unblocked)) """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') def test_pthread_kill_main_thread(self): # Test that a signal can be sent to the main thread with pthread_kill() # before any other thread has been created (see issue #12392). code = """if True: import threading import signal import sys def handler(signum, frame): sys.exit(3) signal.signal(signal.SIGUSR1, handler) signal.pthread_kill(threading.get_ident(), signal.SIGUSR1) sys.exit(2) """ with spawn_python('-c', code) as process: stdout, stderr = process.communicate() exitcode = process.wait() if exitcode != 3: raise Exception("Child error (exit code %s): %s" % (exitcode, stdout)) class StressTest(unittest.TestCase): """ Stress signal delivery, especially when a signal arrives in the middle of recomputing the signal state or executing previously tripped signal handlers. """ def setsig(self, signum, handler): old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) def measure_itimer_resolution(self): N = 20 times = [] def handler(signum=None, frame=None): if len(times) < N: times.append(time.perf_counter()) # 1 µs is the smallest possible timer interval, # we want to measure what the concrete duration # will be on this platform signal.setitimer(signal.ITIMER_REAL, 1e-6) self.addCleanup(signal.setitimer, signal.ITIMER_REAL, 0) self.setsig(signal.SIGALRM, handler) handler() while len(times) < N: time.sleep(1e-3) durations = [times[i+1] - times[i] for i in range(len(times) - 1)] med = statistics.median(durations) if support.verbose: print("detected median itimer() resolution: %.6f s." % (med,)) return med def decide_itimer_count(self): # Some systems have poor setitimer() resolution (for example # measured around 20 ms. on FreeBSD 9), so decide on a reasonable # number of sequential timers based on that. reso = self.measure_itimer_resolution() if reso <= 1e-4: return 10000 elif reso <= 1e-2: return 100 else: self.skipTest("detected itimer resolution (%.3f s.) too high " "(> 10 ms.) on this platform (or system too busy)" % (reso,)) @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_dependent(self): """ This test uses dependent signal handlers. """ N = self.decide_itimer_count() sigs = [] def first_handler(signum, frame): # 1e-6 is the minimum non-zero value for `setitimer()`. # Choose a random delay so as to improve chances of # triggering a race condition. 
Ideally the signal is received # when inside critical signal-handling routines such as # Py_MakePendingCalls(). signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) def second_handler(signum=None, frame=None): sigs.append(signum) # Here on Linux, SIGPROF > SIGALRM > SIGUSR1. By using both # ascending and descending sequences (SIGUSR1 then SIGALRM, # SIGPROF then SIGALRM), we maximize chances of hitting a bug. self.setsig(signal.SIGPROF, first_handler) self.setsig(signal.SIGUSR1, first_handler) self.setsig(signal.SIGALRM, second_handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: os.kill(os.getpid(), signal.SIGPROF) expected_sigs += 1 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 1 while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_simultaneous(self): """ This test uses simultaneous signal handlers. """ N = self.decide_itimer_count() sigs = [] def handler(signum, frame): sigs.append(signum) self.setsig(signal.SIGUSR1, handler) self.setsig(signal.SIGALRM, handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: # Hopefully the SIGALRM will be received somewhere during # initial processing of SIGUSR1. signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 2 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "SIGUSR1"), "test needs SIGUSR1") def test_stress_modifying_handlers(self): # bpo-43406: race condition between trip_signal() and signal.signal signum = signal.SIGUSR1 num_sent_signals = 0 num_received_signals = 0 do_stop = False def custom_handler(signum, frame): nonlocal num_received_signals num_received_signals += 1 def set_interrupts(): nonlocal num_sent_signals while not do_stop: signal.raise_signal(signum) num_sent_signals += 1 def cycle_handlers(): while num_sent_signals < 100: for i in range(20000): # Cycle between a Python-defined and a non-Python handler for handler in [custom_handler, signal.SIG_IGN]: signal.signal(signum, handler) old_handler = signal.signal(signum, custom_handler) self.addCleanup(signal.signal, signum, old_handler) t = threading.Thread(target=set_interrupts) try: ignored = False with support.catch_unraisable_exception() as cm: t.start() cycle_handlers() do_stop = True t.join() if cm.unraisable is not None: # An unraisable exception may be printed out when # a signal is ignored due to the aforementioned # race condition, check it. self.assertIsInstance(cm.unraisable.exc_value, OSError) self.assertIn( f"Signal {signum:d} ignored due to race condition", str(cm.unraisable.exc_value)) ignored = True # bpo-43406: Even if it is unlikely, it's technically possible that # all signals were ignored because of race conditions. 
if not ignored: # Sanity check that some signals were received, but not all self.assertGreater(num_received_signals, 0) self.assertLess(num_received_signals, num_sent_signals) finally: do_stop = True t.join() class RaiseSignalTest(unittest.TestCase): def test_sigint(self): with self.assertRaises(KeyboardInterrupt): signal.raise_signal(signal.SIGINT) @unittest.skipIf(sys.platform != "win32", "Windows specific test") def test_invalid_argument(self): try: SIGHUP = 1 # not supported on win32 signal.raise_signal(SIGHUP) self.fail("OSError (Invalid argument) expected") except OSError as e: if e.errno == errno.EINVAL: pass else: raise def test_handler(self): is_ok = False def handler(a, b): nonlocal is_ok is_ok = True old_signal = signal.signal(signal.SIGINT, handler) self.addCleanup(signal.signal, signal.SIGINT, old_signal) signal.raise_signal(signal.SIGINT) self.assertTrue(is_ok) def test__thread_interrupt_main(self): # See https://github.com/python/cpython/issues/102397 code = """if 1: import _thread class Foo(): def __del__(self): _thread.interrupt_main() x = Foo() """ rc, out, err = assert_python_ok('-c', code) self.assertIn(b'OSError: Signal 2 ignored due to race condition', err) class PidfdSignalTest(unittest.TestCase): @unittest.skipUnless( hasattr(signal, "pidfd_send_signal"), "pidfd support not built in", ) def test_pidfd_send_signal(self): with self.assertRaises(OSError) as cm: signal.pidfd_send_signal(0, signal.SIGINT) if cm.exception.errno == errno.ENOSYS: self.skipTest("kernel does not support pidfds") elif cm.exception.errno == errno.EPERM: self.skipTest("Not enough privileges to use pidfs") self.assertEqual(cm.exception.errno, errno.EBADF) my_pidfd = os.open(f'/proc/{os.getpid()}', os.O_DIRECTORY) self.addCleanup(os.close, my_pidfd) with self.assertRaisesRegex(TypeError, "^siginfo must be None$"): signal.pidfd_send_signal(my_pidfd, signal.SIGINT, object(), 0) with self.assertRaises(KeyboardInterrupt): signal.pidfd_send_signal(my_pidfd, signal.SIGINT) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_smtpd.py000066400000000000000000001212351471441230600210660ustar00rootroot00000000000000import unittest import textwrap from test import support, mock_socket from test.support import socket_helper from test.support import warnings_helper import socket import io import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) import smtpd import asyncore class DummyServer(smtpd.SMTPServer): def __init__(self, *args, **kwargs): smtpd.SMTPServer.__init__(self, *args, **kwargs) self.messages = [] if self._decode_data: self.return_status = 'return status' else: self.return_status = b'return status' def process_message(self, peer, mailfrom, rcpttos, data, **kw): self.messages.append((peer, mailfrom, rcpttos, data)) if data == self.return_status: return '250 Okish' if 'mail_options' in kw and 'SMTPUTF8' in kw['mail_options']: return '250 SMTPUTF8 message okish' class DummyDispatcherBroken(Exception): pass class BrokenDummyServer(DummyServer): def listen(self, num): raise DummyDispatcherBroken() class SMTPDServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def test_process_message_unimplemented(self): server = smtpd.SMTPServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) def write_line(line): channel.socket.queue_recv(line) 
channel.handle_read() write_line(b'HELO example') write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') self.assertRaises(NotImplementedError, write_line, b'spam\r\n.\r\n') def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPServer, (socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket class DebuggingServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def send_data(self, channel, data, enable_SMTPUTF8=False): def write_line(line): channel.socket.queue_recv(line) channel.handle_read() write_line(b'EHLO example') if enable_SMTPUTF8: write_line(b'MAIL From:eggs@example BODY=8BITMIME SMTPUTF8') else: write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') write_line(data) write_line(b'.') def test_process_message_with_decode_data_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nhello\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- From: test X-Peer: peer-address hello ------------ END MESSAGE ------------ """)) def test_process_message_with_decode_data_false(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_SMTPUTF8_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n', enable_SMTPUTF8=True) stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- mail options: ['BODY=8BITMIME', 'SMTPUTF8'] b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket class TestFamilyDetection(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") def test_socket_uses_IPv6(self): server = 
smtpd.SMTPServer((socket_helper.HOSTv6, 0), (socket_helper.HOSTv4, 0)) self.assertEqual(server.socket.family, socket.AF_INET6) def test_socket_uses_IPv4(self): server = smtpd.SMTPServer((socket_helper.HOSTv4, 0), (socket_helper.HOSTv6, 0)) self.assertEqual(server.socket.family, socket.AF_INET) class TestRcptOptionParsing(unittest.TestCase): error_response = (b'555 RCPT TO parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_params_rejected(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: foo=bar') self.assertEqual(channel.socket.last, self.error_response) def test_nothing_accepted(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: ') self.assertEqual(channel.socket.last, b'250 OK\r\n') class TestMailOptionParsing(unittest.TestCase): error_response = (b'555 MAIL FROM parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_with_decode_data_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', b'MAIL from: size=20 BODY=UNKNOWN', b'MAIL from: size=20 body=8bitmime', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line(channel, b'MAIL from: size=20') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_decode_data_false(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line( channel, b'MAIL from: size=20 SMTPUTF8 BODY=UNKNOWN') self.assertEqual( channel.socket.last, b'501 Error: BODY can only be one of 7BIT, 8BITMIME\r\n') self.write_line( channel, b'MAIL from: size=20 body=8bitmime') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_enable_smtputf8_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) self.write_line(channel, b'EHLO example') 
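        # --- Editorial sketch, not part of the vendored CPython test ---
        # From the client side the same MAIL options would be requested via
        # smtplib (host/port here are hypothetical, and the server must
        # advertise 8BITMIME and SMTPUTF8 in its EHLO response):
        #
        #     import smtplib
        #     with smtplib.SMTP('localhost', 8025) as client:
        #         client.ehlo('example')
        #         client.sendmail('a@example', ['b@example'], b'body',
        #                         mail_options=['BODY=8BITMIME', 'SMTPUTF8'])
        # ----------------------------------------------------------------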
self.write_line( channel, b'MAIL from: size=20 body=8bitmime smtputf8') self.assertEqual(channel.socket.last, b'250 OK\r\n') class SMTPDChannelTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_broken_connect(self): self.assertRaises( DummyDispatcherBroken, BrokenDummyServer, (socket_helper.HOST, 0), ('b', 0), decode_data=True) def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPChannel, self.server, self.channel.conn, self.channel.addr, enable_SMTPUTF8=True, decode_data=True) def test_server_accept(self): self.server.handle_accept() def test_missing_data(self): self.write_line(b'') self.assertEqual(self.channel.socket.last, b'500 Error: bad syntax\r\n') def test_EHLO(self): self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'250 HELP\r\n') def test_EHLO_bad_syntax(self): self.write_line(b'EHLO') self.assertEqual(self.channel.socket.last, b'501 Syntax: EHLO hostname\r\n') def test_EHLO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_EHLO_HELO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO(self): name = smtpd.socket.getfqdn() self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, '250 {}\r\n'.format(name).encode('ascii')) def test_HELO_EHLO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELP(self): self.write_line(b'HELP') self.assertEqual(self.channel.socket.last, b'250 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELP_command(self): self.write_line(b'HELP MAIL') self.assertEqual(self.channel.socket.last, b'250 Syntax: MAIL FROM:
\r\n') def test_HELP_command_unknown(self): self.write_line(b'HELP SPAM') self.assertEqual(self.channel.socket.last, b'501 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELO_bad_syntax(self): self.write_line(b'HELO') self.assertEqual(self.channel.socket.last, b'501 Syntax: HELO hostname\r\n') def test_HELO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO_parameter_rejected_when_extensions_not_enabled(self): self.extended_smtp = False self.write_line(b'HELO example') self.write_line(b'MAIL from: SIZE=1234') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_allows_space_after_colon(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_extended_MAIL_allows_space_after_colon(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: size=20') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP(self): self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_HELO_NOOP(self): self.write_line(b'HELO example') self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP_bad_syntax(self): self.write_line(b'NOOP hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: NOOP\r\n') def test_QUIT(self): self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_HELO_QUIT(self): self.write_line(b'HELO example') self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_QUIT_arg_ignored(self): self.write_line(b'QUIT bye bye') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_bad_state(self): self.channel.smtp_state = 'BAD STATE' self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'451 Internal confusion\r\n') def test_command_too_long(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ' + b'a' * self.channel.command_size_limit + b'@example') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_limit_extended_with_SIZE(self): self.write_line(b'EHLO example') fill_len = self.channel.command_size_limit - len('MAIL from:<@example>') self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL from:<' + b'a' * (fill_len + 26) + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_rejects_SMTPUTF8_by_default(self): self.write_line(b'EHLO example') self.write_line( b'MAIL from: BODY=8BITMIME SMTPUTF8') self.assertEqual(self.channel.socket.last[0:1], b'5') def test_data_longer_than_default_data_size_limit(self): # Hack the default so we don't have to generate so much data. self.channel.data_size_limit = 1048 self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'A' * self.channel.data_size_limit + b'A\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') def test_MAIL_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=512') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_invalid_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=invalid') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_RCPT_unknown_parameters(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: ham=green') self.assertEqual(self.channel.socket.last, b'555 MAIL FROM parameters not recognized or not implemented\r\n') self.write_line(b'MAIL FROM:') self.write_line(b'RCPT TO: ham=green') self.assertEqual(self.channel.socket.last, b'555 RCPT TO parameters not recognized or not implemented\r\n') def test_MAIL_size_parameter_larger_than_default_data_size_limit(self): self.channel.data_size_limit = 1048 self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=2096') self.assertEqual(self.channel.socket.last, b'552 Error: message size exceeds fixed maximum message size\r\n') def test_need_MAIL(self): self.write_line(b'HELO example') self.write_line(b'RCPT to:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: need MAIL command\r\n') def test_MAIL_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_missing_address(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_chevrons(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_empty_chevrons(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from:<>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_quoted_localpart(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com> SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_nested_MAIL(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:eggs@example') self.write_line(b'MAIL from:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: nested MAIL command\r\n') def test_VRFY(self): self.write_line(b'VRFY eggs@example') self.assertEqual(self.channel.socket.last, b'252 Cannot VRFY user, but will accept message and attempt ' + \ b'delivery\r\n') def test_VRFY_syntax(self): self.write_line(b'VRFY') self.assertEqual(self.channel.socket.last, b'501 Syntax: VRFY
\r\n') def test_EXPN_not_implemented(self): self.write_line(b'EXPN') self.assertEqual(self.channel.socket.last, b'502 EXPN not implemented\r\n') def test_no_HELO_MAIL(self): self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_need_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'503 Error: need RCPT command\r\n') def test_RCPT_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
\r\n') def test_RCPT_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
[SP ]\r\n') def test_RCPT_lowercase_to_OK(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_no_HELO_RCPT(self): self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def test_DATA_syntax(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'501 Syntax: DATA\r\n') def test_no_HELO_DATA(self): self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_transparency_section_4_5_2(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'..\r\n.\r\n') self.assertEqual(self.channel.received_data, '.') def test_multiple_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RCPT To:ham@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example','ham@example'], 'data')]) def test_manual_status(self): # checks that the Channel is able to return a custom status message self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'return status\r\n.') self.assertEqual(self.channel.socket.last, b'250 Okish\r\n') def test_RSET(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL From:foo@example') self.write_line(b'RCPT To:eggs@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'foo@example', ['eggs@example'], 'data')]) def test_HELO_RSET(self): self.write_line(b'HELO example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_RSET_syntax(self): self.write_line(b'RSET hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: RSET\r\n') def test_unknown_command(self): self.write_line(b'UNKNOWN_CMD') self.assertEqual(self.channel.socket.last, b'500 Error: command "UNKNOWN_CMD" not ' + \ b'recognized\r\n') def test_attribute_deprecations(self): with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__server with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__server = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = 
self.channel._SMTPChannel__line with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__line = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__state with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__state = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__greeting with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__greeting = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__mailfrom with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__mailfrom = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__rcpttos with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__rcpttos = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__data with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__data = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__fqdn with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__fqdn = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__peer with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__peer = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__conn with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__conn = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__addr with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__addr = 'spam' @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class SMTPDChannelIPv6Test(SMTPDChannelTest): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOSTv6, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) class SMTPDChannelWithDataSizeLimitTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set DATA size limit to 32 bytes for easy testing self.channel = smtpd.SMTPChannel(self.server, conn, addr, 32, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_data_limit_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') 
self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def test_data_limit_dialog_too_much_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'This message is longer than 32 bytes\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') class SMTPDChannelWithDecodeDataFalse(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, b'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87\n' b'and some plain ascii') class SMTPDChannelWithDecodeDataTrue(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set decode_data to True self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, 'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, 'utf8 enriched text: żźć\nand some plain ascii') class SMTPDChannelTestWithEnableSMTPUTF8True(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = 
DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, enable_SMTPUTF8=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_MAIL_command_accepts_SMTPUTF8_when_announced(self): self.write_line(b'EHLO example') self.write_line( 'MAIL from: BODY=8BITMIME SMTPUTF8'.encode( 'utf-8') ) self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_process_smtputf8_message(self): self.write_line(b'EHLO example') for mail_parameters in [b'', b'BODY=8BITMIME SMTPUTF8']: self.write_line(b'MAIL from: ' + mail_parameters) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'rcpt to:') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'data') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'c\r\n.') if mail_parameters == b'': self.assertEqual(self.channel.socket.last, b'250 OK\r\n') else: self.assertEqual(self.channel.socket.last, b'250 SMTPUTF8 message okish\r\n') def test_utf8_data(self): self.write_line(b'EHLO example') self.write_line( 'MAIL From: naïve@examplé BODY=8BITMIME SMTPUTF8'.encode('utf-8')) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line('RCPT To:späm@examplé'.encode('utf-8')) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'.') self.assertEqual( self.channel.received_data, b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') def test_MAIL_command_limit_extended_with_SIZE_and_SMTPUTF8(self): self.write_line(b'ehlo example') fill_len = (512 + 26 + 10) - len('mail from:<@example>') self.write_line(b'MAIL from:<' + b'a' * (fill_len + 1) + b'@example>') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_multiple_emails_with_extended_command_length(self): self.write_line(b'ehlo example') fill_len = (512 + 26 + 10) - len('mail from:<@example>') for char in [b'a', b'b', b'c']: self.write_line(b'MAIL from:<' + char * fill_len + b'a@example>') self.assertEqual(self.channel.socket.last[0:3], b'500') self.write_line(b'MAIL from:<' + char * fill_len + b'@example>') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'rcpt to:') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'data') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'test\r\n.') self.assertEqual(self.channel.socket.last[0:3], b'250') class MiscTestCase(unittest.TestCase): def test__all__(self): not_exported = { "program", "Devnull", "DEBUGSTREAM", "NEWLINE", "COMMASPACE", "DATA_SIZE_DEFAULT", "usage", "Options", "parseargs", } support.check__all__(self, smtpd, not_exported=not_exported) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_socket.py000066400000000000000000007541401471441230600212360ustar00rootroot00000000000000import unittest from test import support from test.support import os_helper from test.support import socket_helper from test.support import threading_helper import errno import io import itertools import socket import select 
import tempfile import time import traceback import queue import sys import os import platform import array import contextlib from weakref import proxy import signal import math import pickle import struct import random import shutil import string import _thread as thread import threading try: import multiprocessing except ImportError: multiprocessing = False try: import fcntl except ImportError: fcntl = None HOST = socket_helper.HOST # test unicode string and carriage return MSG = 'Michael Gilfix was here\u1234\r\n'.encode('utf-8') VSOCKPORT = 1234 AIX = platform.system() == "AIX" try: import _socket except ImportError: _socket = None def get_cid(): if fcntl is None: return None if not hasattr(socket, 'IOCTL_VM_SOCKETS_GET_LOCAL_CID'): return None try: with open("/dev/vsock", "rb") as f: r = fcntl.ioctl(f, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, " ") except OSError: return None else: return struct.unpack("I", r)[0] def _have_socket_can(): """Check whether CAN sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_isotp(): """Check whether CAN ISOTP sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_j1939(): """Check whether CAN J1939 sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_rds(): """Check whether RDS sockets are supported on this host.""" try: s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_alg(): """Check whether AF_ALG sockets are supported on this host.""" try: s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_qipcrtr(): """Check whether AF_QIPCRTR sockets are supported on this host.""" try: s = socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_vsock(): """Check whether AF_VSOCK sockets are supported on this host.""" ret = get_cid() is not None return ret def _have_socket_bluetooth(): """Check whether AF_BLUETOOTH sockets are supported on this host.""" try: # RFCOMM is supported by all platforms with bluetooth support. Windows # does not support omitting the protocol. 
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) except (AttributeError, OSError): return False else: s.close() return True @contextlib.contextmanager def socket_setdefaulttimeout(timeout): old_timeout = socket.getdefaulttimeout() try: socket.setdefaulttimeout(timeout) yield finally: socket.setdefaulttimeout(old_timeout) HAVE_SOCKET_CAN = _have_socket_can() HAVE_SOCKET_CAN_ISOTP = _have_socket_can_isotp() HAVE_SOCKET_CAN_J1939 = _have_socket_can_j1939() HAVE_SOCKET_RDS = _have_socket_rds() HAVE_SOCKET_ALG = _have_socket_alg() HAVE_SOCKET_QIPCRTR = _have_socket_qipcrtr() HAVE_SOCKET_VSOCK = _have_socket_vsock() HAVE_SOCKET_UDPLITE = hasattr(socket, "IPPROTO_UDPLITE") HAVE_SOCKET_BLUETOOTH = _have_socket_bluetooth() # Size in bytes of the int type SIZEOF_INT = array.array("i").itemsize class SocketTCPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None class SocketUDPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.port = socket_helper.bind_port(self.serv) def tearDown(self): self.serv.close() self.serv = None class SocketUDPLITETest(SocketUDPTest): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) self.port = socket_helper.bind_port(self.serv) class ThreadSafeCleanupTestCase: """Subclass of unittest.TestCase with thread-safe cleanup methods. This subclass protects the addCleanup() and doCleanups() methods with a recursive lock. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._cleanup_lock = threading.RLock() def addCleanup(self, *args, **kwargs): with self._cleanup_lock: return super().addCleanup(*args, **kwargs) def doCleanups(self, *args, **kwargs): with self._cleanup_lock: return super().doCleanups(*args, **kwargs) class SocketCANTest(unittest.TestCase): """To be able to run this test, a `vcan0` CAN interface can be created with the following commands: # modprobe vcan # ip link add dev vcan0 type vcan # ip link set up vcan0 """ interface = 'vcan0' bufsize = 128 """The CAN frame structure is defined in : struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ __u8 can_dlc; /* data length code: 0 .. 8 */ __u8 data[8] __attribute__((aligned(8))); }; """ can_frame_fmt = "=IB3x8s" can_frame_size = struct.calcsize(can_frame_fmt) """The Broadcast Management Command frame structure is defined in : struct bcm_msg_head { __u32 opcode; __u32 flags; __u32 count; struct timeval ival1, ival2; canid_t can_id; __u32 nframes; struct can_frame frames[0]; } `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see `struct can_frame` definition). Must use native not standard types for packing. 
""" bcm_cmd_msg_fmt = "@3I4l2I" bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8) def setUp(self): self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) self.addCleanup(self.s.close) try: self.s.bind((self.interface,)) except OSError: self.skipTest('network interface `%s` does not exist' % self.interface) class SocketRDSTest(unittest.TestCase): """To be able to run this test, the `rds` kernel module must be loaded: # modprobe rds """ bufsize = 8192 def setUp(self): self.serv = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) self.addCleanup(self.serv.close) try: self.port = socket_helper.bind_port(self.serv) except OSError: self.skipTest('unable to bind RDS socket') class ThreadableTest: """Threadable Test class The ThreadableTest class makes it easy to create a threaded client/server pair from an existing unit test. To create a new threaded class from an existing unit test, use multiple inheritance: class NewClass (OldClass, ThreadableTest): pass This class defines two new fixture functions with obvious purposes for overriding: clientSetUp () clientTearDown () Any new test functions within the class must then define tests in pairs, where the test name is preceded with a '_' to indicate the client portion of the test. Ex: def testFoo(self): # Server portion def _testFoo(self): # Client portion Any exceptions raised by the clients during their tests are caught and transferred to the main thread to alert the testing framework. Note, the server setup function cannot call any blocking functions that rely on the client thread during setup, unless serverExplicitReady() is called just before the blocking call (such as in setting up a client/server connection and performing the accept() in setUp(). """ def __init__(self): # Swap the true setup function self.__setUp = self.setUp self.setUp = self._setUp def serverExplicitReady(self): """This method allows the server to explicitly indicate that it wants the client thread to proceed. This is useful if the server is about to execute a blocking routine that is dependent upon the client thread during its setup routine.""" self.server_ready.set() def _setUp(self): self.wait_threads = threading_helper.wait_threads_exit() self.wait_threads.__enter__() self.addCleanup(self.wait_threads.__exit__, None, None, None) self.server_ready = threading.Event() self.client_ready = threading.Event() self.done = threading.Event() self.queue = queue.Queue(1) self.server_crashed = False def raise_queued_exception(): if self.queue.qsize(): raise self.queue.get() self.addCleanup(raise_queued_exception) # Do some munging to start the client test. 
methodname = self.id() i = methodname.rfind('.') methodname = methodname[i+1:] test_method = getattr(self, '_' + methodname) self.client_thread = thread.start_new_thread( self.clientRun, (test_method,)) try: self.__setUp() except: self.server_crashed = True raise finally: self.server_ready.set() self.client_ready.wait() self.addCleanup(self.done.wait) def clientRun(self, test_func): self.server_ready.wait() try: self.clientSetUp() except BaseException as e: self.queue.put(e) self.clientTearDown() return finally: self.client_ready.set() if self.server_crashed: self.clientTearDown() return if not hasattr(test_func, '__call__'): raise TypeError("test_func must be a callable function") try: test_func() except BaseException as e: self.queue.put(e) finally: self.clientTearDown() def clientSetUp(self): raise NotImplementedError("clientSetUp must be implemented.") def clientTearDown(self): self.done.set() thread.exit() class ThreadedTCPSocketTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedUDPSocketTest(SocketUDPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class ThreadedUDPLITESocketTest(SocketUDPLITETest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPLITETest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedCANSocketTest(SocketCANTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketCANTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) try: self.cli.bind((self.interface,)) except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedRDSSocketTest(SocketRDSTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketRDSTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) try: # RDS sockets must be bound explicitly to send or receive data self.cli.bind((HOST, 0)) self.cli_addr = self.cli.getsockname() except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') @unittest.skipUnless(get_cid() != 2, "This test can only be run on a virtual guest.") class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest): def __init__(self, 
methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.serv.close) self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT)) self.serv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.serv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): time.sleep(0.1) self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.cli.close) cid = get_cid() self.cli.connect((cid, VSOCKPORT)) def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) def _testStream(self): self.cli.send(MSG) self.cli.close() class SocketConnectedTest(ThreadedTCPSocketTest): """Socket tests for client-server connection. self.cli_conn is a client socket connected to the server. The setUp() method guarantees that it is connected to the server. """ def __init__(self, methodName='runTest'): ThreadedTCPSocketTest.__init__(self, methodName=methodName) def setUp(self): ThreadedTCPSocketTest.setUp(self) # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None ThreadedTCPSocketTest.tearDown(self) def clientSetUp(self): ThreadedTCPSocketTest.clientSetUp(self) self.cli.connect((HOST, self.port)) self.serv_conn = self.cli def clientTearDown(self): self.serv_conn.close() self.serv_conn = None ThreadedTCPSocketTest.clientTearDown(self) class SocketPairTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv, self.cli = socket.socketpair() def tearDown(self): self.serv.close() self.serv = None def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) # The following classes are used by the sendmsg()/recvmsg() tests. # Combining, for instance, ConnectedStreamTestMixin and TCPTestBase # gives a drop-in replacement for SocketConnectedTest, but different # address families can be used, and the attributes serv_addr and # cli_addr will be set to the addresses of the endpoints. class SocketTestBase(unittest.TestCase): """A base class for socket tests. Subclasses must provide methods newSocket() to return a new socket and bindSock(sock) to bind it to an unused address. Creates a socket self.serv and sets self.serv_addr to its address. """ def setUp(self): self.serv = self.newSocket() self.bindServer() def bindServer(self): """Bind server socket and set self.serv_addr to its address.""" self.bindSock(self.serv) self.serv_addr = self.serv.getsockname() def tearDown(self): self.serv.close() self.serv = None class SocketListeningTestMixin(SocketTestBase): """Mixin to listen on the server socket.""" def setUp(self): super().setUp() self.serv.listen() class ThreadedSocketTestMixin(ThreadSafeCleanupTestCase, SocketTestBase, ThreadableTest): """Mixin to add client socket and allow client/server tests. Client socket is self.cli and its address is self.cli_addr. See ThreadableTest for usage information. 
""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = self.newClientSocket() self.bindClient() def newClientSocket(self): """Return a new socket for use as client.""" return self.newSocket() def bindClient(self): """Bind client socket and set self.cli_addr to its address.""" self.bindSock(self.cli) self.cli_addr = self.cli.getsockname() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ConnectedStreamTestMixin(SocketListeningTestMixin, ThreadedSocketTestMixin): """Mixin to allow client/server stream tests with connected client. Server's socket representing connection to client is self.cli_conn and client's connection to server is self.serv_conn. (Based on SocketConnectedTest.) """ def setUp(self): super().setUp() # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None super().tearDown() def clientSetUp(self): super().clientSetUp() self.cli.connect(self.serv_addr) self.serv_conn = self.cli def clientTearDown(self): try: self.serv_conn.close() self.serv_conn = None except AttributeError: pass super().clientTearDown() class UnixSocketTestBase(SocketTestBase): """Base class for Unix-domain socket tests.""" # This class is used for file descriptor passing tests, so we # create the sockets in a private directory so that other users # can't send anything that might be problematic for a privileged # user running the tests. def setUp(self): self.dir_path = tempfile.mkdtemp() self.addCleanup(os.rmdir, self.dir_path) super().setUp() def bindSock(self, sock): path = tempfile.mktemp(dir=self.dir_path) socket_helper.bind_unix_socket(sock, path) self.addCleanup(os_helper.unlink, path) class UnixStreamBase(UnixSocketTestBase): """Base class for Unix-domain SOCK_STREAM tests.""" def newSocket(self): return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) class InetTestBase(SocketTestBase): """Base class for IPv4 socket tests.""" host = HOST def setUp(self): super().setUp() self.port = self.serv_addr[1] def bindSock(self, sock): socket_helper.bind_port(sock, host=self.host) class TCPTestBase(InetTestBase): """Base class for TCP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) class UDPTestBase(InetTestBase): """Base class for UDP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM) class UDPLITETestBase(InetTestBase): """Base class for UDPLITE-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) class SCTPStreamBase(InetTestBase): """Base class for SCTP tests in one-to-one (SOCK_STREAM) mode.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP) class Inet6TestBase(InetTestBase): """Base class for IPv6 socket tests.""" host = socket_helper.HOSTv6 class UDP6TestBase(Inet6TestBase): """Base class for UDP-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) class UDPLITE6TestBase(Inet6TestBase): """Base class for UDPLITE-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) # Test-skipping decorators for use with ThreadableTest. 
def skipWithClientIf(condition, reason): """Skip decorated test if condition is true, add client_skip decorator. If the decorated object is not a class, sets its attribute "client_skip" to a decorator which will return an empty function if the test is to be skipped, or the original function if it is not. This can be used to avoid running the client part of a skipped test when using ThreadableTest. """ def client_pass(*args, **kwargs): pass def skipdec(obj): retval = unittest.skip(reason)(obj) if not isinstance(obj, type): retval.client_skip = lambda f: client_pass return retval def noskipdec(obj): if not (isinstance(obj, type) or hasattr(obj, "client_skip")): obj.client_skip = lambda f: f return obj return skipdec if condition else noskipdec def requireAttrs(obj, *attributes): """Skip decorated test if obj is missing any of the given attributes. Sets client_skip attribute as skipWithClientIf() does. """ missing = [name for name in attributes if not hasattr(obj, name)] return skipWithClientIf( missing, "don't have " + ", ".join(name for name in missing)) def requireSocket(*args): """Skip decorated test if a socket cannot be created with given arguments. When an argument is given as a string, will use the value of that attribute of the socket module, or skip the test if it doesn't exist. Sets client_skip attribute as skipWithClientIf() does. """ err = None missing = [obj for obj in args if isinstance(obj, str) and not hasattr(socket, obj)] if missing: err = "don't have " + ", ".join(name for name in missing) else: callargs = [getattr(socket, obj) if isinstance(obj, str) else obj for obj in args] try: s = socket.socket(*callargs) except OSError as e: # XXX: check errno? err = str(e) else: s.close() return skipWithClientIf( err is not None, "can't create socket({0}): {1}".format( ", ".join(str(o) for o in args), err)) ####################################################################### ## Begin Tests class GeneralModuleTests(unittest.TestCase): def test_SocketType_is_socketobject(self): import _socket self.assertTrue(socket.SocketType is _socket.socket) s = socket.socket() self.assertIsInstance(s, socket.SocketType) s.close() def test_repr(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with s: self.assertIn('fd=%i' % s.fileno(), repr(s)) self.assertIn('family=%s' % socket.AF_INET, repr(s)) self.assertIn('type=%s' % socket.SOCK_STREAM, repr(s)) self.assertIn('proto=0', repr(s)) self.assertNotIn('raddr', repr(s)) s.bind(('127.0.0.1', 0)) self.assertIn('laddr', repr(s)) self.assertIn(str(s.getsockname()), repr(s)) self.assertIn('[closed]', repr(s)) self.assertNotIn('laddr', repr(s)) @unittest.skipUnless(_socket is not None, 'need _socket module') def test_csocket_repr(self): s = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) try: expected = ('' % (s.fileno(), s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) finally: s.close() expected = ('' % (s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) def test_weakref(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: p = proxy(s) self.assertEqual(p.fileno(), s.fileno()) s = None support.gc_collect() # For PyPy or other GCs. try: p.fileno() except ReferenceError: pass else: self.fail('Socket proxy still exists') def testSocketError(self): # Testing socket module exceptions msg = "Error raising socket exception (%s)." 
with self.assertRaises(OSError, msg=msg % 'OSError'): raise OSError with self.assertRaises(OSError, msg=msg % 'socket.herror'): raise socket.herror with self.assertRaises(OSError, msg=msg % 'socket.gaierror'): raise socket.gaierror def testSendtoErrors(self): # Testing that sendto doesn't mask failures. See #10169. s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind(('', 0)) sockname = s.getsockname() # 2 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None) self.assertIn('not NoneType',str(cm.exception)) # 3 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, None) self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 'bar', sockname) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None, None) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto(b'foo') self.assertIn('(1 given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, sockname, 4) self.assertIn('(4 given)', str(cm.exception)) def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET if socket.has_ipv6: socket.AF_INET6 socket.SOCK_STREAM socket.SOCK_DGRAM socket.SOCK_RAW socket.SOCK_RDM socket.SOCK_SEQPACKET socket.SOL_SOCKET socket.SO_REUSEADDR def testCrucialIpProtoConstants(self): socket.IPPROTO_TCP socket.IPPROTO_UDP if socket.has_ipv6: socket.IPPROTO_IPV6 @unittest.skipUnless(os.name == "nt", "Windows specific") def testWindowsSpecificConstants(self): socket.IPPROTO_ICLFXBM socket.IPPROTO_ST socket.IPPROTO_CBT socket.IPPROTO_IGP socket.IPPROTO_RDP socket.IPPROTO_PGM socket.IPPROTO_L2TP socket.IPPROTO_SCTP @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test3542SocketOptions(self): # Ref. 
issue #35569 and https://tools.ietf.org/html/rfc3542 opts = { 'IPV6_CHECKSUM', 'IPV6_DONTFRAG', 'IPV6_DSTOPTS', 'IPV6_HOPLIMIT', 'IPV6_HOPOPTS', 'IPV6_NEXTHOP', 'IPV6_PATHMTU', 'IPV6_PKTINFO', 'IPV6_RECVDSTOPTS', 'IPV6_RECVHOPLIMIT', 'IPV6_RECVHOPOPTS', 'IPV6_RECVPATHMTU', 'IPV6_RECVPKTINFO', 'IPV6_RECVRTHDR', 'IPV6_RECVTCLASS', 'IPV6_RTHDR', 'IPV6_RTHDRDSTOPTS', 'IPV6_RTHDR_TYPE_0', 'IPV6_TCLASS', 'IPV6_USE_MIN_MTU', } for opt in opts: self.assertTrue( hasattr(socket, opt), f"Missing RFC3542 socket option '{opt}'" ) def testHostnameRes(self): # Testing hostname resolution mechanisms hostname = socket.gethostname() try: ip = socket.gethostbyname(hostname) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertTrue(ip.find('.') >= 0, "Error resolving host to ip.") try: hname, aliases, ipaddrs = socket.gethostbyaddr(ip) except OSError: # Probably a similar problem as above; skip this test self.skipTest('name lookup failure') all_host_names = [hostname, hname] + aliases fqhn = socket.getfqdn(ip) if not fqhn in all_host_names: self.fail("Error testing host resolution mechanisms. (fqdn: %s, all: %s)" % (fqhn, repr(all_host_names))) def test_host_resolution(self): for addr in [socket_helper.HOSTv4, '10.0.0.1', '255.255.255.255']: self.assertEqual(socket.gethostbyname(addr), addr) # we don't test socket_helper.HOSTv6 because there's a chance it doesn't have # a matching name entry (e.g. 'ip6-localhost') for host in [socket_helper.HOSTv4]: self.assertIn(host, socket.gethostbyaddr(host)[2]) def test_host_resolution_bad_address(self): # These are all malformed IP addresses and expected not to resolve to # any result. But some ISPs, e.g. AWS, may successfully resolve these # IPs. explanation = ( "resolving an invalid IP address did not raise OSError; " "can be caused by a broken DNS server" ) for addr in ['0.1.1.~1', '1+.1.1.1', '::1q', '::1::2', '1:1:1:1:1:1:1:1:1']: with self.assertRaises(OSError, msg=addr): socket.gethostbyname(addr) with self.assertRaises(OSError, msg=explanation): socket.gethostbyaddr(addr) @unittest.skipUnless(hasattr(socket, 'sethostname'), "test needs socket.sethostname()") @unittest.skipUnless(hasattr(socket, 'gethostname'), "test needs socket.gethostname()") def test_sethostname(self): oldhn = socket.gethostname() try: socket.sethostname('new') except OSError as e: if e.errno == errno.EPERM: self.skipTest("test should be run as root") else: raise try: # running test as root! 
self.assertEqual(socket.gethostname(), 'new') # Should work with bytes objects too socket.sethostname(b'bar') self.assertEqual(socket.gethostname(), 'bar') finally: socket.sethostname(oldhn) @unittest.skipUnless(hasattr(socket, 'if_nameindex'), 'socket.if_nameindex() not available.') def testInterfaceNameIndex(self): interfaces = socket.if_nameindex() for index, name in interfaces: self.assertIsInstance(index, int) self.assertIsInstance(name, str) # interface indices are non-zero integers self.assertGreater(index, 0) _index = socket.if_nametoindex(name) self.assertIsInstance(_index, int) self.assertEqual(index, _index) _name = socket.if_indextoname(index) self.assertIsInstance(_name, str) self.assertEqual(name, _name) @unittest.skipUnless(hasattr(socket, 'if_indextoname'), 'socket.if_indextoname() not available.') def testInvalidInterfaceIndexToName(self): self.assertRaises(OSError, socket.if_indextoname, 0) self.assertRaises(TypeError, socket.if_indextoname, '_DEADBEEF') @unittest.skipUnless(hasattr(socket, 'if_nametoindex'), 'socket.if_nametoindex() not available.') def testInvalidInterfaceNameToIndex(self): self.assertRaises(TypeError, socket.if_nametoindex, 0) self.assertRaises(OSError, socket.if_nametoindex, '_DEADBEEF') @unittest.skipUnless(hasattr(sys, 'getrefcount'), 'test needs sys.getrefcount()') def testRefCountGetNameInfo(self): # Testing reference count for getnameinfo try: # On some versions, this loses a reference orig = sys.getrefcount(__name__) socket.getnameinfo(__name__,0) except TypeError: if sys.getrefcount(__name__) != orig: self.fail("socket.getnameinfo loses a reference") def testInterpreterCrash(self): # Making sure getnameinfo doesn't crash the interpreter try: # On some versions, this crashes the interpreter. socket.getnameinfo(('x', 0, 0, 0), 0) except OSError: pass def testNtoH(self): # This just checks that htons etc. are their own inverse, # when looking at the lower 16 or 32 bits. sizes = {socket.htonl: 32, socket.ntohl: 32, socket.htons: 16, socket.ntohs: 16} for func, size in sizes.items(): mask = (1<= 23): port2 = socket.getservbyname(service) eq(port, port2) # Try udp, but don't barf if it doesn't exist try: udpport = socket.getservbyname(service, 'udp') except OSError: udpport = None else: eq(udpport, port) # Now make sure the lookup by port returns the same service name # Issue #26936: Android getservbyport() is broken. if not support.is_android: eq(socket.getservbyport(port2), service) eq(socket.getservbyport(port, 'tcp'), service) if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. 
self.assertRaises(OverflowError, socket.getservbyport, -1) self.assertRaises(OverflowError, socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout # The default timeout should initially be None self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as s: self.assertEqual(s.gettimeout(), None) # Set the default timeout to 10, and see if it propagates with socket_setdefaulttimeout(10): self.assertEqual(socket.getdefaulttimeout(), 10) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), 10) # Reset the default timeout to None, and see if it propagates socket.setdefaulttimeout(None) self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), None) # Check that setting it to an invalid value raises ValueError self.assertRaises(ValueError, socket.setdefaulttimeout, -1) # Check that setting it to an invalid type raises TypeError self.assertRaises(TypeError, socket.setdefaulttimeout, "spam") @unittest.skipUnless(hasattr(socket, 'inet_aton'), 'test needs socket.inet_aton()') def testIPv4_inet_aton_fourbytes(self): # Test that issue1008086 and issue767150 are fixed. # It must return 4 bytes. self.assertEqual(b'\x00'*4, socket.inet_aton('0.0.0.0')) self.assertEqual(b'\xff'*4, socket.inet_aton('255.255.255.255')) @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv4toString(self): from socket import inet_aton as f, inet_pton, AF_INET g = lambda a: inet_pton(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual(b'\x00\x00\x00\x00', f('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', f('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', f('170.170.170.170')) self.assertEqual(b'\x01\x02\x03\x04', f('1.2.3.4')) self.assertEqual(b'\xff\xff\xff\xff', f('255.255.255.255')) # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid(f, '0.0.0.') assertInvalid(f, '300.0.0.0') assertInvalid(f, 'a.0.0.0') assertInvalid(f, '1.2.3.4.5') assertInvalid(f, '::1') self.assertEqual(b'\x00\x00\x00\x00', g('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', g('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', g('170.170.170.170')) self.assertEqual(b'\xff\xff\xff\xff', g('255.255.255.255')) assertInvalid(g, '0.0.0.') assertInvalid(g, '300.0.0.0') assertInvalid(g, 'a.0.0.0') assertInvalid(g, '1.2.3.4.5') assertInvalid(g, '::1') @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv6toString(self): try: from socket import inet_pton, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_pton(AF_INET6, '::') except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_pton(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual(b'\x00' * 16, f('::')) self.assertEqual(b'\x00' * 16, f('0::0')) self.assertEqual(b'\x00\x01' + b'\x00' * 14, f('1::')) self.assertEqual( b'\x45\xef\x76\xcb\x00\x1a\x56\xef\xaf\xeb\x0b\xac\x19\x24\xae\xae', f('45ef:76cb:1a:56ef:afeb:bac:1924:aeae') ) self.assertEqual( b'\xad\x42\x0a\xbc' + b'\x00' * 4 + b'\x01\x27\x00\x00\x02\x54\x00\x02', f('ad42:abc::127:0:254:2') ) self.assertEqual(b'\x00\x12\x00\x0a' + b'\x00' * 12, f('12:a::')) assertInvalid('0x20::') assertInvalid(':::') assertInvalid('::0::') 
assertInvalid('1::abc::') assertInvalid('1::abc::def') assertInvalid('1:2:3:4:5:6') assertInvalid('1:2:3:4:5:6:') assertInvalid('1:2:3:4:5:6:7:8:0') # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid('1:2:3:4:5:6:7:8:') self.assertEqual(b'\x00' * 12 + b'\xfe\x2a\x17\x40', f('::254.42.23.64') ) self.assertEqual( b'\x00\x42' + b'\x00' * 8 + b'\xa2\x9b\xfe\x2a\x17\x40', f('42::a29b:254.42.23.64') ) self.assertEqual( b'\x00\x42\xa8\xb9\x00\x00\x00\x02\xff\xff\xa2\x9b\xfe\x2a\x17\x40', f('42:a8b9:0:2:ffff:a29b:254.42.23.64') ) assertInvalid('255.254.253.252') assertInvalid('1::260.2.3.0') assertInvalid('1::0.be.e.0') assertInvalid('1:2:3:4:5:6:7:1.2.3.4') assertInvalid('::1.2.3.4:0') assertInvalid('0.100.200.0:3:4:5:6:7:8') @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv4(self): from socket import inet_ntoa as f, inet_ntop, AF_INET g = lambda a: inet_ntop(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual('1.0.1.0', f(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', f(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', f(b'\xff\xff\xff\xff')) self.assertEqual('1.2.3.4', f(b'\x01\x02\x03\x04')) assertInvalid(f, b'\x00' * 3) assertInvalid(f, b'\x00' * 5) assertInvalid(f, b'\x00' * 16) self.assertEqual('170.85.170.85', f(bytearray(b'\xaa\x55\xaa\x55'))) self.assertEqual('1.0.1.0', g(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', g(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', g(b'\xff\xff\xff\xff')) assertInvalid(g, b'\x00' * 3) assertInvalid(g, b'\x00' * 5) assertInvalid(g, b'\x00' * 16) self.assertEqual('170.85.170.85', g(bytearray(b'\xaa\x55\xaa\x55'))) @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv6(self): try: from socket import inet_ntop, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_ntop(AF_INET6, b'\x00' * 16) except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_ntop(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual('::', f(b'\x00' * 16)) self.assertEqual('::1', f(b'\x00' * 15 + b'\x01')) self.assertEqual( 'aef:b01:506:1001:ffff:9997:55:170', f(b'\x0a\xef\x0b\x01\x05\x06\x10\x01\xff\xff\x99\x97\x00\x55\x01\x70') ) self.assertEqual('::1', f(bytearray(b'\x00' * 15 + b'\x01'))) assertInvalid(b'\x12' * 15) assertInvalid(b'\x12' * 17) assertInvalid(b'\x12' * 4) # XXX The following don't test module-level functionality... def testSockName(self): # Testing getsockname() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind(("0.0.0.0", port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break name = sock.getsockname() # XXX(nnorwitz): http://tinyurl.com/os5jz seems to indicate # it reasonable to get the host's addr in addition to 0.0.0.0. # At least for eCos. This is required for the S/390 to pass. 
try: my_ip_addr = socket.gethostbyname(socket.gethostname()) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertIn(name[0], ("0.0.0.0", my_ip_addr), '%s invalid' % name[0]) self.assertEqual(name[1], port) def testGetSockOpt(self): # Testing getsockopt() # We know a socket should start without reuse==0 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse != 0, "initial mode is reuse") def testSetSockOpt(self): # Testing setsockopt() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse == 0, "failed to set reuse mode") def testSendAfterClose(self): # testing send() after close() with timeout with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.settimeout(1) self.assertRaises(OSError, sock.send, b"spam") def testCloseException(self): sock = socket.socket() sock.bind((socket._LOCALHOST, 0)) socket.socket(fileno=sock.fileno()).close() try: sock.close() except OSError as err: # Winsock apparently raises ENOTSOCK self.assertIn(err.errno, (errno.EBADF, errno.ENOTSOCK)) else: self.fail("close() should raise EBADF/ENOTSOCK") def testNewAttributes(self): # testing .family, .type and .protocol with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: self.assertEqual(sock.family, socket.AF_INET) if hasattr(socket, 'SOCK_CLOEXEC'): self.assertIn(sock.type, (socket.SOCK_STREAM | socket.SOCK_CLOEXEC, socket.SOCK_STREAM)) else: self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def test_getsockaddrarg(self): sock = socket.socket() self.addCleanup(sock.close) port = socket_helper.find_unused_port() big_port = port + 65536 neg_port = port - 65536 self.assertRaises(OverflowError, sock.bind, (HOST, big_port)) self.assertRaises(OverflowError, sock.bind, (HOST, neg_port)) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. 
for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind((HOST, port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break @unittest.skipUnless(os.name == "nt", "Windows specific") def test_sock_ioctl(self): self.assertTrue(hasattr(socket.socket, 'ioctl')) self.assertTrue(hasattr(socket, 'SIO_RCVALL')) self.assertTrue(hasattr(socket, 'RCVALL_ON')) self.assertTrue(hasattr(socket, 'RCVALL_OFF')) self.assertTrue(hasattr(socket, 'SIO_KEEPALIVE_VALS')) s = socket.socket() self.addCleanup(s.close) self.assertRaises(ValueError, s.ioctl, -1, None) s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 100, 100)) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(hasattr(socket, 'SIO_LOOPBACK_FAST_PATH'), 'Loopback fast path support required for this test') def test_sio_loopback_fast_path(self): s = socket.socket() self.addCleanup(s.close) try: s.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True) except OSError as exc: WSAEOPNOTSUPP = 10045 if exc.winerror == WSAEOPNOTSUPP: self.skipTest("SIO_LOOPBACK_FAST_PATH is defined but " "doesn't implemented in this Windows version") raise self.assertRaises(TypeError, s.ioctl, socket.SIO_LOOPBACK_FAST_PATH, None) def testGetaddrinfo(self): try: socket.getaddrinfo('localhost', 80) except socket.gaierror as err: if err.errno == socket.EAI_SERVICE: # see http://bugs.python.org/issue1282647 self.skipTest("buggy libc version") raise # len of every sequence is supposed to be == 5 for info in socket.getaddrinfo(HOST, None): self.assertEqual(len(info), 5) # host can be a domain name, a string representation of an # IPv4/v6 address or None socket.getaddrinfo('localhost', 80) socket.getaddrinfo('127.0.0.1', 80) socket.getaddrinfo(None, 80) if socket_helper.IPV6_ENABLED: socket.getaddrinfo('::1', 80) # port can be a string service name such as "http", a numeric # port number or None # Issue #26936: Android getaddrinfo() was broken before API level 23. 
if (not hasattr(sys, 'getandroidapilevel') or sys.getandroidapilevel() >= 23): socket.getaddrinfo(HOST, "http") socket.getaddrinfo(HOST, 80) socket.getaddrinfo(HOST, None) # test family and socktype filters infos = socket.getaddrinfo(HOST, 80, socket.AF_INET, socket.SOCK_STREAM) for family, type, _, _, _ in infos: self.assertEqual(family, socket.AF_INET) self.assertEqual(str(family), 'AddressFamily.AF_INET') self.assertEqual(type, socket.SOCK_STREAM) self.assertEqual(str(type), 'SocketKind.SOCK_STREAM') infos = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) for _, socktype, _, _, _ in infos: self.assertEqual(socktype, socket.SOCK_STREAM) # test proto and flags arguments socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) # a server willing to support both IPv4 and IPv6 will # usually do this socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) # test keyword arguments a = socket.getaddrinfo(HOST, None) b = socket.getaddrinfo(host=HOST, port=None) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, socket.AF_INET) b = socket.getaddrinfo(HOST, None, family=socket.AF_INET) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) b = socket.getaddrinfo(HOST, None, type=socket.SOCK_STREAM) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) b = socket.getaddrinfo(HOST, None, proto=socket.SOL_TCP) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(HOST, None, flags=socket.AI_PASSIVE) self.assertEqual(a, b) a = socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(host=None, port=0, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM, proto=0, flags=socket.AI_PASSIVE) self.assertEqual(a, b) # Issue #6697. self.assertRaises(UnicodeEncodeError, socket.getaddrinfo, 'localhost', '\uD800') # Issue 17269: test workaround for OS X platform bug segfault if hasattr(socket, 'AI_NUMERICSERV'): try: # The arguments here are undefined and the call may succeed # or fail. All we care here is that it doesn't segfault. socket.getaddrinfo("localhost", None, 0, 0, 0, socket.AI_NUMERICSERV) except socket.gaierror: pass def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) @unittest.skipUnless(support.is_resource_enabled('network'), 'network is not enabled') def test_idna(self): # Check for internet access before running test # (issue #12804, issue #25138). with socket_helper.transient_internet('python.org'): socket.gethostbyname('python.org') # these should all be successful domain = 'испытание.pythontest.net' socket.gethostbyname(domain) socket.gethostbyname_ex(domain) socket.getaddrinfo(domain,0,socket.AF_UNSPEC,socket.SOCK_STREAM) # this may not work if the forward lookup chooses the IPv6 address, as that doesn't # have a reverse entry yet # socket.gethostbyaddr('испытание.python.org') def check_sendall_interrupted(self, with_timeout): # socketpair() is not strictly required, but it makes things easier. if not hasattr(signal, 'alarm') or not hasattr(socket, 'socketpair'): self.skipTest("signal.alarm and socket.socketpair required for this test") # Our signal handlers clobber the C errno by calling a math function # with an invalid domain value. 
def ok_handler(*args): self.assertRaises(ValueError, math.acosh, 0) def raising_handler(*args): self.assertRaises(ValueError, math.acosh, 0) 1 // 0 c, s = socket.socketpair() old_alarm = signal.signal(signal.SIGALRM, raising_handler) try: if with_timeout: # Just above the one second minimum for signal.alarm c.settimeout(1.5) with self.assertRaises(ZeroDivisionError): signal.alarm(1) c.sendall(b"x" * support.SOCK_MAX_SIZE) if with_timeout: signal.signal(signal.SIGALRM, ok_handler) signal.alarm(1) self.assertRaises(TimeoutError, c.sendall, b"x" * support.SOCK_MAX_SIZE) finally: signal.alarm(0) signal.signal(signal.SIGALRM, old_alarm) c.close() s.close() def test_sendall_interrupted(self): self.check_sendall_interrupted(False) def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) def test_dealloc_warn(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) r = repr(sock) with self.assertWarns(ResourceWarning) as cm: sock = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) # An open socket file object gets dereferenced after the socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) f = sock.makefile('rb') r = repr(sock) sock = None support.gc_collect() with self.assertWarns(ResourceWarning): f = None support.gc_collect() def test_name_closed_socketio(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: fp = sock.makefile("rb") fp.close() self.assertEqual(repr(fp), "<_io.BufferedReader name=-1>") def test_unusable_closed_socketio(self): with socket.socket() as sock: fp = sock.makefile("rb", buffering=0) self.assertTrue(fp.readable()) self.assertFalse(fp.writable()) self.assertFalse(fp.seekable()) fp.close() self.assertRaises(ValueError, fp.readable) self.assertRaises(ValueError, fp.writable) self.assertRaises(ValueError, fp.seekable) def test_socket_close(self): sock = socket.socket() try: sock.bind((HOST, 0)) socket.close(sock.fileno()) with self.assertRaises(OSError): sock.listen(1) finally: with self.assertRaises(OSError): # sock.close() fails with EBADF sock.close() with self.assertRaises(TypeError): socket.close(None) with self.assertRaises(OSError): socket.close(-1) def test_makefile_mode(self): for mode in 'r', 'rb', 'rw', 'w', 'wb': with self.subTest(mode=mode): with socket.socket() as sock: encoding = None if "b" in mode else "utf-8" with sock.makefile(mode, encoding=encoding) as fp: self.assertEqual(fp.mode, mode) def test_makefile_invalid_mode(self): for mode in 'rt', 'x', '+', 'a': with self.subTest(mode=mode): with socket.socket() as sock: with self.assertRaisesRegex(ValueError, 'invalid mode'): sock.makefile(mode) def test_pickle(self): sock = socket.socket() with sock: for protocol in range(pickle.HIGHEST_PROTOCOL + 1): self.assertRaises(TypeError, pickle.dumps, sock, protocol) for protocol in range(pickle.HIGHEST_PROTOCOL + 1): family = pickle.loads(pickle.dumps(socket.AF_INET, protocol)) self.assertEqual(family, socket.AF_INET) type = pickle.loads(pickle.dumps(socket.SOCK_STREAM, protocol)) self.assertEqual(type, socket.SOCK_STREAM) def test_listen_backlog(self): for backlog in 0, -1: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen(backlog) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen() @support.cpython_only def test_listen_backlog_overflow(self): # Issue 15989 import _testcapi with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) self.assertRaises(OverflowError, 
srv.listen, _testcapi.INT_MAX + 1) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_flowinfo(self): self.assertRaises(OverflowError, socket.getnameinfo, (socket_helper.HOSTv6, 0, 0xffffffff), 0) with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s: self.assertRaises(OverflowError, s.bind, (socket_helper.HOSTv6, 0, -10)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_getaddrinfo_ipv6_basic(self): ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D', # Note capital letter `D`. 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, 0)) def test_getfqdn_filter_localhost(self): self.assertEqual(socket.getfqdn(), socket.getfqdn("0.0.0.0")) self.assertEqual(socket.getfqdn(), socket.getfqdn("::")) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getaddrinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface (Linux, Mac OS X) (ifindex, test_interface) = socket.if_nameindex()[0] ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + test_interface, 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getaddrinfo_ipv6_scopeid_numeric(self): # Also works on Linux and Mac OS X, but is not documented (?) # Windows, Linux and Max OS X allow nonexistent interface numbers here. ifindex = 42 ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + str(ifindex), 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getnameinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface. (ifindex, test_interface) = socket.if_nameindex()[0] sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + test_interface, '1234')) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getnameinfo_ipv6_scopeid_numeric(self): # Also works on Linux (undocumented), but does not work on Mac OS X # Windows and Linux allow nonexistent interface numbers here. ifindex = 42 sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. 
nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + str(ifindex), '1234')) def test_str_for_enums(self): # Make sure that the AF_* and SOCK_* constants have enum-like string # reprs. with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: self.assertEqual(str(s.family), 'AddressFamily.AF_INET') self.assertEqual(str(s.type), 'SocketKind.SOCK_STREAM') def test_socket_consistent_sock_type(self): SOCK_NONBLOCK = getattr(socket, 'SOCK_NONBLOCK', 0) SOCK_CLOEXEC = getattr(socket, 'SOCK_CLOEXEC', 0) sock_type = socket.SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC with socket.socket(socket.AF_INET, sock_type) as s: self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(1) self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(0) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(True) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(False) self.assertEqual(s.type, socket.SOCK_STREAM) def test_unknown_socket_family_repr(self): # Test that when created with a family that's not one of the known # AF_*/SOCK_* constants, socket.family just returns the number. # # To do this we fool socket.socket into believing it already has an # open fd because on this path it doesn't actually verify the family and # type and populates the socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) fd = sock.detach() unknown_family = max(socket.AddressFamily.__members__.values()) + 1 unknown_type = max( kind for name, kind in socket.SocketKind.__members__.items() if name not in {'SOCK_NONBLOCK', 'SOCK_CLOEXEC'} ) + 1 with socket.socket( family=unknown_family, type=unknown_type, proto=23, fileno=fd) as s: self.assertEqual(s.family, unknown_family) self.assertEqual(s.type, unknown_type) # some OS like macOS ignore proto self.assertIn(s.proto, {0, 23}) @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()') def test__sendfile_use_sendfile(self): class File: def __init__(self, fd): self.fd = fd def fileno(self): return self.fd with socket.socket() as sock: fd = os.open(os.curdir, os.O_RDONLY) os.close(fd) with self.assertRaises(socket._GiveupOnSendfile): sock._sendfile_use_sendfile(File(fd)) with self.assertRaises(OverflowError): sock._sendfile_use_sendfile(File(2**1000)) with self.assertRaises(TypeError): sock._sendfile_use_sendfile(File(None)) def _test_socket_fileno(self, s, family, stype): self.assertEqual(s.family, family) self.assertEqual(s.type, stype) fd = s.fileno() s2 = socket.socket(fileno=fd) self.addCleanup(s2.close) # detach old fd to avoid double close s.detach() self.assertEqual(s2.family, family) self.assertEqual(s2.type, stype) self.assertEqual(s2.fileno(), fd) def test_socket_fileno(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_STREAM) if hasattr(socket, "SOCK_DGRAM"): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_DGRAM) if socket_helper.IPV6_ENABLED: s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOSTv6, 0, 0, 0)) self._test_socket_fileno(s, socket.AF_INET6, socket.SOCK_STREAM) if hasattr(socket, "AF_UNIX"): tmpdir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, tmpdir) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.addCleanup(s.close) try: 
s.bind(os.path.join(tmpdir, 'socket')) except PermissionError: pass else: self._test_socket_fileno(s, socket.AF_UNIX, socket.SOCK_STREAM) def test_socket_fileno_rejects_float(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=42.5) def test_socket_fileno_rejects_other_types(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno="foo") def test_socket_fileno_rejects_invalid_socket(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1) @unittest.skipIf(os.name == "nt", "Windows disallows -1 only") def test_socket_fileno_rejects_negative(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-42) def test_socket_fileno_requires_valid_fd(self): WSAENOTSOCK = 10038 with self.assertRaises(OSError) as cm: socket.socket(fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) def test_socket_fileno_requires_socket_fd(self): with tempfile.NamedTemporaryFile() as afile: with self.assertRaises(OSError): socket.socket(fileno=afile.fileno()) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=afile.fileno()) self.assertEqual(cm.exception.errno, errno.ENOTSOCK) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class BasicCANTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_RAW @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCMConstants(self): socket.CAN_BCM # opcodes socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task socket.CAN_BCM_TX_SEND # send one CAN frame socket.CAN_BCM_RX_SETUP # create RX content filter subscription socket.CAN_BCM_RX_DELETE # remove RX content filter subscription socket.CAN_BCM_RX_READ # read properties of RX content filter subscription socket.CAN_BCM_TX_STATUS # reply to TX_READ request socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) socket.CAN_BCM_RX_STATUS # reply to RX_READ request socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) # flags socket.CAN_BCM_SETTIMER socket.CAN_BCM_STARTTIMER socket.CAN_BCM_TX_COUNTEVT socket.CAN_BCM_TX_ANNOUNCE socket.CAN_BCM_TX_CP_CAN_ID socket.CAN_BCM_RX_FILTER_ID socket.CAN_BCM_RX_CHECK_DLC socket.CAN_BCM_RX_NO_AUTOTIMER socket.CAN_BCM_RX_ANNOUNCE_RESUME socket.CAN_BCM_TX_RESET_MULTI_IDX socket.CAN_BCM_RX_RTR_FRAME def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testCreateBCMSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: pass def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: address = ('', ) s.bind(address) self.assertEqual(s.getsockname(), address) def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be 
sure with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: self.assertRaisesRegex(OSError, 'interface name too long', s.bind, ('x' * 1024,)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_LOOPBACK"), 'socket.CAN_RAW_LOOPBACK required for this test.') def testLoopback(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: for loopback in (0, 1): s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK, loopback) self.assertEqual(loopback, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_FILTER"), 'socket.CAN_RAW_FILTER required for this test.') def testFilter(self): can_id, can_mask = 0x200, 0x700 can_filter = struct.pack("=II", can_id, can_mask) with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter) self.assertEqual(can_filter, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, 8)) s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, bytearray(can_filter)) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class CANTest(ThreadedCANSocketTest): def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @classmethod def build_can_frame(cls, can_id, data): """Build a CAN frame.""" can_dlc = len(data) data = data.ljust(8, b'\x00') return struct.pack(cls.can_frame_fmt, can_id, can_dlc, data) @classmethod def dissect_can_frame(cls, frame): """Dissect a CAN frame.""" can_id, can_dlc, data = struct.unpack(cls.can_frame_fmt, frame) return (can_id, can_dlc, data[:can_dlc]) def testSendFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) self.assertEqual(addr[0], self.interface) def _testSendFrame(self): self.cf = self.build_can_frame(0x00, b'\x01\x02\x03\x04\x05') self.cli.send(self.cf) def testSendMaxFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) def _testSendMaxFrame(self): self.cf = self.build_can_frame(0x00, b'\x07' * 8) self.cli.send(self.cf) def testSendMultiFrames(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf1, cf) cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf2, cf) def _testSendMultiFrames(self): self.cf1 = self.build_can_frame(0x07, b'\x44\x33\x22\x11') self.cli.send(self.cf1) self.cf2 = self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def _testBCM(self): cf, addr = self.cli.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) can_id, can_dlc, data = self.dissect_can_frame(cf) self.assertEqual(self.can_id, can_id) self.assertEqual(self.data, data) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCM(self): bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) self.addCleanup(bcm.close) bcm.connect((self.interface,)) self.can_id = 0x123 self.data = bytes([0xc0, 0xff, 0xee]) self.cf = self.build_can_frame(self.can_id, self.data) opcode = socket.CAN_BCM_TX_SEND flags = 0 count = 0 ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 bcm_can_id = 0x0222 nframes = 1 assert len(self.cf) == 16 header = struct.pack(self.bcm_cmd_msg_fmt, opcode, flags, count, ival1_seconds, ival1_usec, ival2_seconds, ival2_usec, bcm_can_id, nframes, ) header_plus_frame = header + self.cf bytes_sent = bcm.send(header_plus_frame) self.assertEqual(bytes_sent, 
len(header_plus_frame)) @unittest.skipUnless(HAVE_SOCKET_CAN_ISOTP, 'CAN ISOTP required for this test.') class ISOTPTest(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_ISOTP socket.SOCK_DGRAM def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_ISOTP"), 'socket.CAN_ISOTP required for this test.') def testCreateISOTPSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: pass def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: with self.assertRaisesRegex(OSError, 'interface name too long'): s.bind(('x' * 1024, 1, 2)) def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: addr = self.interface, 0x123, 0x456 s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_CAN_J1939, 'CAN J1939 required for this test.') class J1939Test(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testJ1939Constants(self): socket.CAN_J1939 socket.J1939_MAX_UNICAST_ADDR socket.J1939_IDLE_ADDR socket.J1939_NO_ADDR socket.J1939_NO_NAME socket.J1939_PGN_REQUEST socket.J1939_PGN_ADDRESS_CLAIMED socket.J1939_PGN_ADDRESS_COMMANDED socket.J1939_PGN_PDU1_MAX socket.J1939_PGN_MAX socket.J1939_NO_PGN # J1939 socket options socket.SO_J1939_FILTER socket.SO_J1939_PROMISC socket.SO_J1939_SEND_PRIO socket.SO_J1939_ERRQUEUE socket.SCM_J1939_DEST_ADDR socket.SCM_J1939_DEST_NAME socket.SCM_J1939_PRIO socket.SCM_J1939_ERRQUEUE socket.J1939_NLA_PAD socket.J1939_NLA_BYTES_ACKED socket.J1939_EE_INFO_NONE socket.J1939_EE_INFO_TX_ABORT socket.J1939_FILTER_MAX @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testCreateJ1939Socket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: pass def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: addr = self.interface, socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_RDS socket.PF_RDS def testCreateSocket(self): with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: pass def testSocketBufferSize(self): bufsize = 16384 with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize) s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize) @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class RDSTest(ThreadedRDSSocketTest): def __init__(self, methodName='runTest'): ThreadedRDSSocketTest.__init__(self, methodName=methodName) def setUp(self): super().setUp() self.evt = 
threading.Event() def testSendAndRecv(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) self.assertEqual(self.cli_addr, addr) def _testSendAndRecv(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) def testPeek(self): data, addr = self.serv.recvfrom(self.bufsize, socket.MSG_PEEK) self.assertEqual(self.data, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testPeek(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) @requireAttrs(socket.socket, 'recvmsg') def testSendAndRecvMsg(self): data, ancdata, msg_flags, addr = self.serv.recvmsg(self.bufsize) self.assertEqual(self.data, data) @requireAttrs(socket.socket, 'sendmsg') def _testSendAndRecvMsg(self): self.data = b'hello ' * 10 self.cli.sendmsg([self.data], (), 0, (HOST, self.port)) def testSendAndRecvMulti(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data1, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data2, data) def _testSendAndRecvMulti(self): self.data1 = b'bacon' self.cli.sendto(self.data1, 0, (HOST, self.port)) self.data2 = b'egg' self.cli.sendto(self.data2, 0, (HOST, self.port)) def testSelect(self): r, w, x = select.select([self.serv], [], [], 3.0) self.assertIn(self.serv, r) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testSelect(self): self.data = b'select' self.cli.sendto(self.data, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_QIPCRTR, 'QIPCRTR sockets required for this test.') class BasicQIPCRTRTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_QIPCRTR def testCreateSocket(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: pass def testUnbound(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertEqual(s.getsockname()[1], 0) def testBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: socket_helper.bind_port(s, host=s.getsockname()[0]) self.assertNotEqual(s.getsockname()[1], 0) def testInvalidBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertRaises(OSError, socket_helper.bind_port, s, host=-2) def testAutoBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: s.connect((123, 123)) self.assertNotEqual(s.getsockname()[1], 0) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class BasicVSOCKTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_VSOCK def testVSOCKConstants(self): socket.SO_VM_SOCKETS_BUFFER_SIZE socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE socket.VMADDR_CID_ANY socket.VMADDR_PORT_ANY socket.VMADDR_CID_HOST socket.VM_SOCKETS_INVALID_VERSION socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID def testCreateSocket(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: pass def testSocketBufferSize(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: orig_max = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE) orig = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE) orig_min = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE, orig_max * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE, orig * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE, orig_min * 
2) self.assertEqual(orig_max * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE)) self.assertEqual(orig * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE)) self.assertEqual(orig_min * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE)) @unittest.skipUnless(HAVE_SOCKET_BLUETOOTH, 'Bluetooth sockets required for this test.') class BasicBluetoothTest(unittest.TestCase): def testBluetoothConstants(self): socket.BDADDR_ANY socket.BDADDR_LOCAL socket.AF_BLUETOOTH socket.BTPROTO_RFCOMM if sys.platform != "win32": socket.BTPROTO_HCI socket.SOL_HCI socket.BTPROTO_L2CAP if not sys.platform.startswith("freebsd"): socket.BTPROTO_SCO def testCreateRfcommSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support L2CAP sockets") def testCreateL2capSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support HCI sockets") def testCreateHciSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI) as s: pass @unittest.skipIf(sys.platform == "win32" or sys.platform.startswith("freebsd"), "windows and freebsd do not support SCO sockets") def testCreateScoSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_SCO) as s: pass class BasicTCPTest(SocketConnectedTest): def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecv(self): # Testing large receive over TCP msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.serv_conn.send(MSG) def testOverFlowRecv(self): # Testing receive in chunks over TCP seg1 = self.cli_conn.recv(len(MSG) - 3) seg2 = self.cli_conn.recv(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecv(self): self.serv_conn.send(MSG) def testRecvFrom(self): # Testing large recvfrom() over TCP msg, addr = self.cli_conn.recvfrom(1024) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.serv_conn.send(MSG) def testOverFlowRecvFrom(self): # Testing recvfrom() in chunks over TCP seg1, addr = self.cli_conn.recvfrom(len(MSG)-3) seg2, addr = self.cli_conn.recvfrom(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecvFrom(self): self.serv_conn.send(MSG) def testSendAll(self): # Testing sendall() with a 2048 byte string over TCP msg = b'' while 1: read = self.cli_conn.recv(1024) if not read: break msg += read self.assertEqual(msg, b'f' * 2048) def _testSendAll(self): big_chunk = b'f' * 2048 self.serv_conn.sendall(big_chunk) def testFromFd(self): # Testing fromfd() fd = self.cli_conn.fileno() sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) self.assertIsInstance(sock, socket.socket) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testFromFd(self): self.serv_conn.send(MSG) def testDup(self): # Testing dup() sock = self.cli_conn.dup() self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDup(self): self.serv_conn.send(MSG) def testShutdown(self): # Testing shutdown() msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) # wait for _testShutdown to finish: on OS X, when the server # closes the connection the client also becomes disconnected, # and the client's shutdown call will fail. (Issue #4397.) 
self.done.wait() def _testShutdown(self): self.serv_conn.send(MSG) self.serv_conn.shutdown(2) testShutdown_overflow = support.cpython_only(testShutdown) @support.cpython_only def _testShutdown_overflow(self): import _testcapi self.serv_conn.send(MSG) # Issue 15989 self.assertRaises(OverflowError, self.serv_conn.shutdown, _testcapi.INT_MAX + 1) self.assertRaises(OverflowError, self.serv_conn.shutdown, 2 + (_testcapi.UINT_MAX + 1)) self.serv_conn.shutdown(2) def testDetach(self): # Testing detach() fileno = self.cli_conn.fileno() f = self.cli_conn.detach() self.assertEqual(f, fileno) # cli_conn cannot be used anymore... self.assertTrue(self.cli_conn._closed) self.assertRaises(OSError, self.cli_conn.recv, 1024) self.cli_conn.close() # ...but we can create another socket using the (still open) # file descriptor sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=f) self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDetach(self): self.serv_conn.send(MSG) class BasicUDPTest(ThreadedUDPSocketTest): def __init__(self, methodName='runTest'): ThreadedUDPSocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDP msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDP msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class BasicUDPLITETest(ThreadedUDPLITESocketTest): def __init__(self, methodName='runTest'): ThreadedUDPLITESocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDPLITE msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDPLITE msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) # Tests for the sendmsg()/recvmsg() interface. Where possible, the # same test code is used with different families and types of socket # (e.g. stream, datagram), and tests using recvmsg() are repeated # using recvmsg_into(). # # The generic test classes such as SendmsgTests and # RecvmsgGenericTests inherit from SendrecvmsgBase and expect to be # supplied with sockets cli_sock and serv_sock representing the # client's and the server's end of the connection respectively, and # attributes cli_addr and serv_addr holding their (numeric where # appropriate) addresses. 
# # The final concrete test classes combine these with subclasses of # SocketTestBase which set up client and server sockets of a specific # type, and with subclasses of SendrecvmsgBase such as # SendrecvmsgDgramBase and SendrecvmsgConnectedBase which map these # sockets to cli_sock and serv_sock and override the methods and # attributes of SendrecvmsgBase to fill in destination addresses if # needed when sending, check for specific flags in msg_flags, etc. # # RecvmsgIntoMixin provides a version of doRecvmsg() implemented using # recvmsg_into(). # XXX: like the other datagram (UDP) tests in this module, the code # here assumes that datagram delivery on the local machine will be # reliable. class SendrecvmsgBase(ThreadSafeCleanupTestCase): # Base class for sendmsg()/recvmsg() tests. # Time in seconds to wait before considering a test failed, or # None for no timeout. Not all tests actually set a timeout. fail_timeout = support.LOOPBACK_TIMEOUT def setUp(self): self.misc_event = threading.Event() super().setUp() def sendToServer(self, msg): # Send msg to the server. return self.cli_sock.send(msg) # Tuple of alternative default arguments for sendmsg() when called # via sendmsgToServer() (e.g. to include a destination address). sendmsg_to_server_defaults = () def sendmsgToServer(self, *args): # Call sendmsg() on self.cli_sock with the given arguments, # filling in any arguments which are not supplied with the # corresponding items of self.sendmsg_to_server_defaults, if # any. return self.cli_sock.sendmsg( *(args + self.sendmsg_to_server_defaults[len(args):])) def doRecvmsg(self, sock, bufsize, *args): # Call recvmsg() on sock with given arguments and return its # result. Should be used for tests which can use either # recvmsg() or recvmsg_into() - RecvmsgIntoMixin overrides # this method with one which emulates it using recvmsg_into(), # thus allowing the same test to be used for both methods. result = sock.recvmsg(bufsize, *args) self.registerRecvmsgResult(result) return result def registerRecvmsgResult(self, result): # Called by doRecvmsg() with the return value of recvmsg() or # recvmsg_into(). Can be overridden to arrange cleanup based # on the returned ancillary data, for instance. pass def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer. self.assertEqual(addr1, addr2) # Flags that are normally unset in msg_flags msg_flags_common_unset = 0 for name in ("MSG_CTRUNC", "MSG_OOB"): msg_flags_common_unset |= getattr(socket, name, 0) # Flags that are normally set msg_flags_common_set = 0 # Flags set when a complete record has been received (e.g. MSG_EOR # for SCTP) msg_flags_eor_indicator = 0 # Flags set when a complete record has not been received # (e.g. MSG_TRUNC for datagram sockets) msg_flags_non_eor_indicator = 0 def checkFlags(self, flags, eor=None, checkset=0, checkunset=0, ignore=0): # Method to check the value of msg_flags returned by recvmsg[_into](). # # Checks that all bits in msg_flags_common_set attribute are # set in "flags" and all bits in msg_flags_common_unset are # unset. # # The "eor" argument specifies whether the flags should # indicate that a full record (or datagram) has been received. 
# If "eor" is None, no checks are done; otherwise, checks # that: # # * if "eor" is true, all bits in msg_flags_eor_indicator are # set and all bits in msg_flags_non_eor_indicator are unset # # * if "eor" is false, all bits in msg_flags_non_eor_indicator # are set and all bits in msg_flags_eor_indicator are unset # # If "checkset" and/or "checkunset" are supplied, they require # the given bits to be set or unset respectively, overriding # what the attributes require for those bits. # # If any bits are set in "ignore", they will not be checked, # regardless of the other inputs. # # Will raise Exception if the inputs require a bit to be both # set and unset, and it is not ignored. defaultset = self.msg_flags_common_set defaultunset = self.msg_flags_common_unset if eor: defaultset |= self.msg_flags_eor_indicator defaultunset |= self.msg_flags_non_eor_indicator elif eor is not None: defaultset |= self.msg_flags_non_eor_indicator defaultunset |= self.msg_flags_eor_indicator # Function arguments override defaults defaultset &= ~checkunset defaultunset &= ~checkset # Merge arguments with remaining defaults, and check for conflicts checkset |= defaultset checkunset |= defaultunset inboth = checkset & checkunset & ~ignore if inboth: raise Exception("contradictory set, unset requirements for flags " "{0:#x}".format(inboth)) # Compare with given msg_flags value mask = (checkset | checkunset) & ~ignore self.assertEqual(flags & mask, checkset & mask) class RecvmsgIntoMixin(SendrecvmsgBase): # Mixin to implement doRecvmsg() using recvmsg_into(). def doRecvmsg(self, sock, bufsize, *args): buf = bytearray(bufsize) result = sock.recvmsg_into([buf], *args) self.registerRecvmsgResult(result) self.assertGreaterEqual(result[0], 0) self.assertLessEqual(result[0], bufsize) return (bytes(buf[:result[0]]),) + result[1:] class SendrecvmsgDgramFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for datagram sockets. @property def msg_flags_non_eor_indicator(self): return super().msg_flags_non_eor_indicator | socket.MSG_TRUNC class SendrecvmsgSCTPFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for SCTP sockets. @property def msg_flags_eor_indicator(self): return super().msg_flags_eor_indicator | socket.MSG_EOR class SendrecvmsgConnectionlessBase(SendrecvmsgBase): # Base class for tests on connectionless-mode sockets. Users must # supply sockets on attributes cli and serv to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.serv @property def cli_sock(self): return self.cli @property def sendmsg_to_server_defaults(self): return ([], [], 0, self.serv_addr) def sendToServer(self, msg): return self.cli_sock.sendto(msg, self.serv_addr) class SendrecvmsgConnectedBase(SendrecvmsgBase): # Base class for tests on connected sockets. Users must supply # sockets on attributes serv_conn and cli_conn (representing the # connections *to* the server and the client), to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.cli_conn @property def cli_sock(self): return self.serv_conn def checkRecvmsgAddress(self, addr1, addr2): # Address is currently "unspecified" for a connected socket, # so we don't examine it pass class SendrecvmsgServerTimeoutBase(SendrecvmsgBase): # Base class to set a timeout on server's socket. 
def setUp(self): super().setUp() self.serv_sock.settimeout(self.fail_timeout) class SendmsgTests(SendrecvmsgServerTimeoutBase): # Tests for sendmsg() which can use any socket type and do not # involve recvmsg() or recvmsg_into(). def testSendmsg(self): # Send a simple message with sendmsg(). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG]), len(MSG)) def testSendmsgDataGenerator(self): # Send from buffer obtained from a generator (not a sequence). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgDataGenerator(self): self.assertEqual(self.sendmsgToServer((o for o in [MSG])), len(MSG)) def testSendmsgAncillaryGenerator(self): # Gather (empty) ancillary data from a generator. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgAncillaryGenerator(self): self.assertEqual(self.sendmsgToServer([MSG], (o for o in [])), len(MSG)) def testSendmsgArray(self): # Send data from an array instead of the usual bytes object. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgArray(self): self.assertEqual(self.sendmsgToServer([array.array("B", MSG)]), len(MSG)) def testSendmsgGather(self): # Send message data from more than one buffer (gather write). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgGather(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) def testSendmsgBadArgs(self): # Check that sendmsg() rejects invalid arguments. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadArgs(self): self.assertRaises(TypeError, self.cli_sock.sendmsg) self.assertRaises(TypeError, self.sendmsgToServer, b"not in an iterable") self.assertRaises(TypeError, self.sendmsgToServer, object()) self.assertRaises(TypeError, self.sendmsgToServer, [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG, object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], 0, object()) self.sendToServer(b"done") def testSendmsgBadCmsg(self): # Check that invalid ancillary data items are rejected. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(object(), 0, b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, object(), b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, object())]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0)]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b"data", 42)]) self.sendToServer(b"done") @requireAttrs(socket, "CMSG_SPACE") def testSendmsgBadMultiCmsg(self): # Check that invalid ancillary data items are rejected when # more than one item is present. self.assertEqual(self.serv_sock.recv(1000), b"done") @testSendmsgBadMultiCmsg.client_skip def _testSendmsgBadMultiCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [0, 0, b""]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b""), object()]) self.sendToServer(b"done") def testSendmsgExcessCmsgReject(self): # Check that sendmsg() rejects excess ancillary data items # when the number that can be sent is limited. 
self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgExcessCmsgReject(self): if not hasattr(socket, "CMSG_SPACE"): # Can only send one item with self.assertRaises(OSError) as cm: self.sendmsgToServer([MSG], [(0, 0, b""), (0, 0, b"")]) self.assertIsNone(cm.exception.errno) self.sendToServer(b"done") def testSendmsgAfterClose(self): # Check that sendmsg() fails on a closed socket. pass def _testSendmsgAfterClose(self): self.cli_sock.close() self.assertRaises(OSError, self.sendmsgToServer, [MSG]) class SendmsgStreamTests(SendmsgTests): # Tests for sendmsg() which require a stream socket and do not # involve recvmsg() or recvmsg_into(). def testSendmsgExplicitNoneAddr(self): # Check that peer address can be specified as None. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgExplicitNoneAddr(self): self.assertEqual(self.sendmsgToServer([MSG], [], 0, None), len(MSG)) def testSendmsgTimeout(self): # Check that timeout works with sendmsg(). self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) def _testSendmsgTimeout(self): try: self.cli_sock.settimeout(0.03) try: while True: self.sendmsgToServer([b"a"*512]) except TimeoutError: pass except OSError as exc: if exc.errno != errno.ENOMEM: raise # bpo-33937 the test randomly fails on Travis CI with # "OSError: [Errno 12] Cannot allocate memory" else: self.fail("TimeoutError not raised") finally: self.misc_event.set() # XXX: would be nice to have more tests for sendmsg flags argument. # Linux supports MSG_DONTWAIT when sending, but in general, it # only works when receiving. Could add other platforms if they # support it too. @skipWithClientIf(sys.platform not in {"linux"}, "MSG_DONTWAIT not known to work on this platform when " "sending") def testSendmsgDontWait(self): # Check that MSG_DONTWAIT in flags causes non-blocking behaviour. self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @testSendmsgDontWait.client_skip def _testSendmsgDontWait(self): try: with self.assertRaises(OSError) as cm: while True: self.sendmsgToServer([b"a"*512], [], socket.MSG_DONTWAIT) # bpo-33937: catch also ENOMEM, the test randomly fails on Travis CI # with "OSError: [Errno 12] Cannot allocate memory" self.assertIn(cm.exception.errno, (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOMEM)) finally: self.misc_event.set() class SendmsgConnectionlessTests(SendmsgTests): # Tests for sendmsg() which require a connectionless-mode # (e.g. datagram) socket, and do not involve recvmsg() or # recvmsg_into(). def testSendmsgNoDestAddr(self): # Check that sendmsg() fails when no destination address is # given for unconnected socket. pass def _testSendmsgNoDestAddr(self): self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG]) self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG], [], 0, None) class RecvmsgGenericTests(SendrecvmsgBase): # Tests for recvmsg() which can also be emulated using # recvmsg_into(), and can use any socket type. def testRecvmsg(self): # Receive a simple message with recvmsg[_into](). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsg(self): self.sendToServer(MSG) def testRecvmsgExplicitDefaults(self): # Test recvmsg[_into]() with default arguments provided explicitly. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgExplicitDefaults(self): self.sendToServer(MSG) def testRecvmsgShorter(self): # Receive a message smaller than buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) + 42) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShorter(self): self.sendToServer(MSG) def testRecvmsgTrunc(self): # Receive part of message, check for truncation indicators. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) def _testRecvmsgTrunc(self): self.sendToServer(MSG) def testRecvmsgShortAncillaryBuf(self): # Test ancillary data buffer too small to hold any ancillary data. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShortAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgLongAncillaryBuf(self): # Test large ancillary data buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgLongAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgAfterClose(self): # Check that recvmsg[_into]() fails on a closed socket. self.serv_sock.close() self.assertRaises(OSError, self.doRecvmsg, self.serv_sock, 1024) def _testRecvmsgAfterClose(self): pass def testRecvmsgTimeout(self): # Check that timeout works. try: self.serv_sock.settimeout(0.03) self.assertRaises(TimeoutError, self.doRecvmsg, self.serv_sock, len(MSG)) finally: self.misc_event.set() def _testRecvmsgTimeout(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @requireAttrs(socket, "MSG_PEEK") def testRecvmsgPeek(self): # Check that MSG_PEEK in flags enables examination of pending # data without consuming it. # Receive part of data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3, 0, socket.MSG_PEEK) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) # Ignoring MSG_TRUNC here (so this test is the same for stream # and datagram sockets). Some wording in POSIX seems to # suggest that it needn't be set when peeking, but that may # just be a slip. self.checkFlags(flags, eor=False, ignore=getattr(socket, "MSG_TRUNC", 0)) # Receive all data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, socket.MSG_PEEK) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) # Check that the same data can still be received normally. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgPeek.client_skip def _testRecvmsgPeek(self): self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") def testRecvmsgFromSendmsg(self): # Test receiving with recvmsg[_into]() when message is sent # using sendmsg(). self.serv_sock.settimeout(self.fail_timeout) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgFromSendmsg.client_skip def _testRecvmsgFromSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) class RecvmsgGenericStreamTests(RecvmsgGenericTests): # Tests which require a stream socket and can use either recvmsg() # or recvmsg_into(). def testRecvmsgEOF(self): # Receive end-of-stream indicator (b"", peer socket closed). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.assertEqual(msg, b"") self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=None) # Might not have end-of-record marker def _testRecvmsgEOF(self): self.cli_sock.close() def testRecvmsgOverflow(self): # Receive a message in more than one chunk. seg1, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) seg2, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testRecvmsgOverflow(self): self.sendToServer(MSG) class RecvmsgTests(RecvmsgGenericTests): # Tests for recvmsg() which can use any socket type. def testRecvmsgBadArgs(self): # Check that recvmsg() rejects invalid arguments. self.assertRaises(TypeError, self.serv_sock.recvmsg) self.assertRaises(ValueError, self.serv_sock.recvmsg, -1, 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg, len(MSG), -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, [bytearray(10)], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, object(), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), 0, object()) msg, ancdata, flags, addr = self.serv_sock.recvmsg(len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgBadArgs(self): self.sendToServer(MSG) class RecvmsgIntoTests(RecvmsgIntoMixin, RecvmsgGenericTests): # Tests for recvmsg_into() which can use any socket type. def testRecvmsgIntoBadArgs(self): # Check that recvmsg_into() rejects invalid arguments. 
buf = bytearray(len(MSG)) self.assertRaises(TypeError, self.serv_sock.recvmsg_into) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, len(MSG), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, buf, 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [object()], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [b"I'm not writable"], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf, object()], 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg_into, [buf], -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], 0, object()) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf], 0, 0) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoBadArgs(self): self.sendToServer(MSG) def testRecvmsgIntoGenerator(self): # Receive into buffer obtained from a generator (not a sequence). buf = bytearray(len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( (o for o in [buf])) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoGenerator(self): self.sendToServer(MSG) def testRecvmsgIntoArray(self): # Receive into an array rather than the usual bytearray. buf = array.array("B", [0] * len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf]) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf.tobytes(), MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoArray(self): self.sendToServer(MSG) def testRecvmsgIntoScatter(self): # Receive into multiple buffers (scatter write). b1 = bytearray(b"----") b2 = bytearray(b"0123456789") b3 = bytearray(b"--------------") nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( [b1, memoryview(b2)[2:9], b3]) self.assertEqual(nbytes, len(b"Mary had a little lamb")) self.assertEqual(b1, bytearray(b"Mary")) self.assertEqual(b2, bytearray(b"01 had a 9")) self.assertEqual(b3, bytearray(b"little lamb---")) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoScatter(self): self.sendToServer(b"Mary had a little lamb") class CmsgMacroTests(unittest.TestCase): # Test the functions CMSG_LEN() and CMSG_SPACE(). Tests # assumptions used by sendmsg() and recvmsg[_into](), which share # code with these functions. # Match the definition in socketmodule.c try: import _testcapi except ImportError: socklen_t_limit = 0x7fffffff else: socklen_t_limit = min(0x7fffffff, _testcapi.INT_MAX) @requireAttrs(socket, "CMSG_LEN") def testCMSG_LEN(self): # Test CMSG_LEN() with various valid and invalid values, # checking the assumptions used by recvmsg() and sendmsg(). 
toobig = self.socklen_t_limit - socket.CMSG_LEN(0) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(socket.CMSG_LEN(0), array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_LEN(n) # This is how recvmsg() calculates the data size self.assertEqual(ret - socket.CMSG_LEN(0), n) self.assertLessEqual(ret, self.socklen_t_limit) self.assertRaises(OverflowError, socket.CMSG_LEN, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_LEN, toobig) self.assertRaises(OverflowError, socket.CMSG_LEN, sys.maxsize) @requireAttrs(socket, "CMSG_SPACE") def testCMSG_SPACE(self): # Test CMSG_SPACE() with various valid and invalid values, # checking the assumptions used by sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_SPACE(1) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) last = socket.CMSG_SPACE(0) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(last, array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_SPACE(n) self.assertGreaterEqual(ret, last) self.assertGreaterEqual(ret, socket.CMSG_LEN(n)) self.assertGreaterEqual(ret, n + socket.CMSG_LEN(0)) self.assertLessEqual(ret, self.socklen_t_limit) last = ret self.assertRaises(OverflowError, socket.CMSG_SPACE, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_SPACE, toobig) self.assertRaises(OverflowError, socket.CMSG_SPACE, sys.maxsize) class SCMRightsTest(SendrecvmsgServerTimeoutBase): # Tests for file descriptor passing on Unix-domain sockets. # Invalid file descriptor value that's unlikely to evaluate to a # real FD even if one of its bytes is replaced with a different # value (which shouldn't actually happen). badfd = -0x5555 def newFDs(self, n): # Return a list of n file descriptors for newly-created files # containing their list indices as ASCII numbers. fds = [] for i in range(n): fd, path = tempfile.mkstemp() self.addCleanup(os.unlink, path) self.addCleanup(os.close, fd) os.write(fd, str(i).encode()) fds.append(fd) return fds def checkFDs(self, fds): # Check that the file descriptors in the given list contain # their correct list indices as ASCII numbers. for n, fd in enumerate(fds): os.lseek(fd, 0, os.SEEK_SET) self.assertEqual(os.read(fd, 1024), str(n).encode()) def registerRecvmsgResult(self, result): self.addCleanup(self.closeRecvmsgFDs, result) def closeRecvmsgFDs(self, recvmsg_result): # Close all file descriptors specified in the ancillary data # of the given return value from recvmsg() or recvmsg_into(). for cmsg_level, cmsg_type, cmsg_data in recvmsg_result[1]: if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS): fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) for fd in fds: os.close(fd) def createAndSendFDs(self, n): # Send n new file descriptors created by newFDs() to the # server, with the constant MSG as the non-ancillary data. self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(n)))]), len(MSG)) def checkRecvmsgFDs(self, numfds, result, maxcmsgs=1, ignoreflags=0): # Check that constant MSG was received with numfds file # descriptors in a maximum of maxcmsgs control messages (which # must contain only complete integers). 
By default, check # that MSG_CTRUNC is unset, but ignore any flags in # ignoreflags. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertIsInstance(ancdata, list) self.assertLessEqual(len(ancdata), maxcmsgs) fds = array.array("i") for item in ancdata: self.assertIsInstance(item, tuple) cmsg_level, cmsg_type, cmsg_data = item self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data) % SIZEOF_INT, 0) fds.frombytes(cmsg_data) self.assertEqual(len(fds), numfds) self.checkFDs(fds) def testFDPassSimple(self): # Pass a single FD (array read from bytes object). self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testFDPassSimple(self): self.assertEqual( self.sendmsgToServer( [MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(1)).tobytes())]), len(MSG)) def testMultipleFDPass(self): # Pass multiple FDs in a single array. self.checkRecvmsgFDs(4, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testMultipleFDPass(self): self.createAndSendFDs(4) @requireAttrs(socket, "CMSG_SPACE") def testFDPassCMSG_SPACE(self): # Test using CMSG_SPACE() to calculate ancillary buffer size. self.checkRecvmsgFDs( 4, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(4 * SIZEOF_INT))) @testFDPassCMSG_SPACE.client_skip def _testFDPassCMSG_SPACE(self): self.createAndSendFDs(4) def testFDPassCMSG_LEN(self): # Test using CMSG_LEN() to calculate ancillary buffer size. self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(4 * SIZEOF_INT)), # RFC 3542 says implementations may set # MSG_CTRUNC if there isn't enough space # for trailing padding. ignoreflags=socket.MSG_CTRUNC) def _testFDPassCMSG_LEN(self): self.createAndSendFDs(1) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparate(self): # Pass two FDs in two separate arrays. Arrays may be combined # into a single control message by the OS. self.checkRecvmsgFDs(2, self.doRecvmsg(self.serv_sock, len(MSG), 10240), maxcmsgs=2) @testFDPassSeparate.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparate(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparateMinSpace(self): # Pass two FDs in two separate arrays, receiving them into the # minimum space for two arrays. 
num_fds = 2 self.checkRecvmsgFDs(num_fds, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT * num_fds)), maxcmsgs=2, ignoreflags=socket.MSG_CTRUNC) @testFDPassSeparateMinSpace.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparateMinSpace(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) def sendAncillaryIfPossible(self, msg, ancdata): # Try to send msg and ancdata to server, but if the system # call fails, just send msg with no ancillary data. try: nbytes = self.sendmsgToServer([msg], ancdata) except OSError as e: # Check that it was the system call that failed self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer([msg]) self.assertEqual(nbytes, len(msg)) @unittest.skipIf(sys.platform == "darwin", "see issue #24725") def testFDPassEmpty(self): # Try to pass an empty FD array. Can receive either no array # or an empty array. self.checkRecvmsgFDs(0, self.doRecvmsg(self.serv_sock, len(MSG), 10240), ignoreflags=socket.MSG_CTRUNC) def _testFDPassEmpty(self): self.sendAncillaryIfPossible(MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, b"")]) def testFDPassPartialInt(self): # Try to pass a truncated FD array. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertLess(len(cmsg_data), SIZEOF_INT) def _testFDPassPartialInt(self): self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [self.badfd]).tobytes()[:-1])]) @requireAttrs(socket, "CMSG_SPACE") def testFDPassPartialIntInMiddle(self): # Try to pass two FD arrays, the first of which is truncated. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 2) fds = array.array("i") # Arrays may have been combined in a single control message for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.assertLessEqual(len(fds), 2) self.checkFDs(fds) @testFDPassPartialIntInMiddle.client_skip def _testFDPassPartialIntInMiddle(self): fd0, fd1 = self.newFDs(2) self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0, self.badfd]).tobytes()[:-1]), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]) def checkTruncatedHeader(self, result, ignoreflags=0): # Check that no ancillary data items are returned when data is # truncated inside the cmsghdr structure. 
msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no buffer size # is specified. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG)), # BSD seems to set MSG_CTRUNC only # if an item has been partially # received. ignoreflags=socket.MSG_CTRUNC) def _testCmsgTruncNoBufSize(self): self.createAndSendFDs(1) def testCmsgTrunc0(self): # Check that no ancillary data is received when buffer size is 0. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 0), ignoreflags=socket.MSG_CTRUNC) def _testCmsgTrunc0(self): self.createAndSendFDs(1) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. def testCmsgTrunc1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 1)) def _testCmsgTrunc1(self): self.createAndSendFDs(1) def testCmsgTrunc2Int(self): # The cmsghdr structure has at least three members, two of # which are ints, so we still shouldn't see any ancillary # data. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), SIZEOF_INT * 2)) def _testCmsgTrunc2Int(self): self.createAndSendFDs(1) def testCmsgTruncLen0Minus1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(0) - 1)) def _testCmsgTruncLen0Minus1(self): self.createAndSendFDs(1) # The following tests try to truncate the control message in the # middle of the FD array. def checkTruncatedArray(self, ancbuf, maxdata, mindata=0): # Check that file descriptor data is truncated to between # mindata and maxdata bytes when received with buffer size # ancbuf, and that any complete file descriptor numbers are # valid. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbuf) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) if mindata == 0 and ancdata == []: return self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertGreaterEqual(len(cmsg_data), mindata) self.assertLessEqual(len(cmsg_data), maxdata) fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.checkFDs(fds) def testCmsgTruncLen0(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0), maxdata=0) def _testCmsgTruncLen0(self): self.createAndSendFDs(1) def testCmsgTruncLen0Plus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0) + 1, maxdata=1) def _testCmsgTruncLen0Plus1(self): self.createAndSendFDs(2) def testCmsgTruncLen1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(SIZEOF_INT), maxdata=SIZEOF_INT) def _testCmsgTruncLen1(self): self.createAndSendFDs(2) def testCmsgTruncLen2Minus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(2 * SIZEOF_INT) - 1, maxdata=(2 * SIZEOF_INT) - 1) def _testCmsgTruncLen2Minus1(self): self.createAndSendFDs(2) class RFC3542AncillaryTest(SendrecvmsgServerTimeoutBase): # Test sendmsg() and recvmsg[_into]() using the ancillary data # features of the RFC 3542 Advanced Sockets API for IPv6. # Currently we can only handle certain data items (e.g. 
traffic # class, hop limit, MTU discovery and fragmentation settings) # without resorting to unportable means such as the struct module, # but the tests here are aimed at testing the ancillary data # handling in sendmsg() and recvmsg() rather than the IPv6 API # itself. # Test value to use when setting hop limit of packet hop_limit = 2 # Test value to use when setting traffic class of packet. # -1 means "use kernel default". traffic_class = -1 def ancillaryMapping(self, ancdata): # Given ancillary data list ancdata, return a mapping from # pairs (cmsg_level, cmsg_type) to corresponding cmsg_data. # Check that no (level, type) pair appears more than once. d = {} for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertNotIn((cmsg_level, cmsg_type), d) d[(cmsg_level, cmsg_type)] = cmsg_data return d def checkHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space. Check that data is MSG, ancillary data is not # truncated (but ignore any flags in ignoreflags), and hop # limit is between 0 and maxhop inclusive. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) self.assertIsInstance(ancdata[0], tuple) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimit(self): # Test receiving the packet hop limit as ancillary data. self.checkHopLimit(ancbufsize=10240) @testRecvHopLimit.client_skip def _testRecvHopLimit(self): # Need to wait until server has asked to receive ancillary # data, as implementations are not required to buffer it # otherwise. self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimitCMSG_SPACE(self): # Test receiving hop limit, using CMSG_SPACE to calculate buffer size. self.checkHopLimit(ancbufsize=socket.CMSG_SPACE(SIZEOF_INT)) @testRecvHopLimitCMSG_SPACE.client_skip def _testRecvHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Could test receiving into buffer sized using CMSG_LEN, but RFC # 3542 says portable applications must provide space for trailing # padding. Implementations may set MSG_CTRUNC if there isn't # enough space for the padding. @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSetHopLimit(self): # Test setting hop limit on outgoing packet and receiving it # at the other end. 
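# A standalone sketch of the RFC 3542 pattern checkHopLimit() exercises:
# opt in with IPV6_RECVHOPLIMIT, then read the hop limit back as a single
# ancillary data item from recvmsg(). It assumes the IPv6 loopback is
# usable and the relevant constants exist; the helper name is illustrative
# only.
import array
import socket


def _recv_hop_limit_sketch():
    needed = ("IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "CMSG_SPACE")
    if not all(hasattr(socket, name) for name in needed):
        return  # platform lacks the RFC 3542 pieces
    try:
        recv_sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    except OSError:
        return  # no IPv6 support
    send_sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        recv_sock.bind(("::1", 0))
        recv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1)
        send_sock.sendto(b"ping", recv_sock.getsockname())
        int_size = array.array("i").itemsize
        msg, ancdata, flags, addr = recv_sock.recvmsg(
            4, socket.CMSG_SPACE(int_size))
        for cmsg_level, cmsg_type, cmsg_data in ancdata:
            if (cmsg_level == socket.IPPROTO_IPV6
                    and cmsg_type == socket.IPV6_HOPLIMIT):
                hop_limit = array.array("i")
                hop_limit.frombytes(cmsg_data)
                assert 0 <= hop_limit[0] <= 255  # hop limit fits in one byte
    finally:
        recv_sock.close()
        send_sock.close()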
self.checkHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetHopLimit.client_skip def _testSetHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) def checkTrafficClassAndHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space. Check that data is MSG, ancillary # data is not truncated (but ignore any flags in ignoreflags), # and traffic class and hop limit are in range (hop limit no # more than maxhop). self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 2) ancmap = self.ancillaryMapping(ancdata) tcdata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_TCLASS)] self.assertEqual(len(tcdata), SIZEOF_INT) a = array.array("i") a.frombytes(tcdata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) hldata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT)] self.assertEqual(len(hldata), SIZEOF_INT) a = array.array("i") a.frombytes(hldata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimit(self): # Test receiving traffic class and hop limit as ancillary data. self.checkTrafficClassAndHopLimit(ancbufsize=10240) @testRecvTrafficClassAndHopLimit.client_skip def _testRecvTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimitCMSG_SPACE(self): # Test receiving traffic class and hop limit, using # CMSG_SPACE() to calculate buffer size. self.checkTrafficClassAndHopLimit( ancbufsize=socket.CMSG_SPACE(SIZEOF_INT) * 2) @testRecvTrafficClassAndHopLimitCMSG_SPACE.client_skip def _testRecvTrafficClassAndHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSetTrafficClassAndHopLimit(self): # Test setting traffic class and hop limit on outgoing packet, # and receiving them at the other end. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetTrafficClassAndHopLimit.client_skip def _testSetTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testOddCmsgSize(self): # Try to send ancillary data with first item one byte too # long. 
Fall back to sending with correct size if this fails, # and check that second item was handled correctly. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testOddCmsgSize.client_skip def _testOddCmsgSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) try: nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class]).tobytes() + b"\x00"), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) except OSError as e: self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) self.assertEqual(nbytes, len(MSG)) # Tests for proper handling of truncated ancillary data def checkHopLimitTruncatedHeader(self, ancbufsize, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space, which should be too small to contain the ancillary # data header (if ancbufsize is None, pass no second argument # to recvmsg()). Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and no ancillary data is # returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() args = () if ancbufsize is None else (ancbufsize,) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), *args) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no ancillary # buffer size is provided. self.checkHopLimitTruncatedHeader(ancbufsize=None, # BSD seems to set # MSG_CTRUNC only if an item # has been partially # received. ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc0(self): # Check that no ancillary data is received when ancillary # buffer size is zero. self.checkHopLimitTruncatedHeader(ancbufsize=0, ignoreflags=socket.MSG_CTRUNC) @testSingleCmsgTrunc0.client_skip def _testSingleCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. 
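# The truncation tests above size their ancillary buffers with CMSG_LEN()
# and CMSG_SPACE(). A minimal sketch of the relationship between the two,
# assuming a Unix platform that exposes both helpers (they are unavailable
# on Windows); the helper name is illustrative only.
#
#   CMSG_LEN(n)   is the exact byte count for one cmsg header plus n bytes
#                 of payload, with no trailing padding.
#   CMSG_SPACE(n) is CMSG_LEN(n) rounded up so that a following control
#                 message starts properly aligned.
#
# Passing CMSG_LEN(0) - 1, as the tests do, is therefore guaranteed to be
# too small even for an empty control message.
import array
import socket


def _cmsg_sizes_sketch():
    if not (hasattr(socket, "CMSG_LEN") and hasattr(socket, "CMSG_SPACE")):
        return  # platform without the ancillary-data helpers
    int_size = array.array("i").itemsize
    one_int = socket.CMSG_LEN(int_size)        # room for exactly one int payload
    one_int_padded = socket.CMSG_SPACE(int_size)
    assert socket.CMSG_LEN(0) - 1 < one_int    # too small for any payload
    assert one_int <= one_int_padded           # SPACE only adds alignment padding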
@requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc1(self): self.checkHopLimitTruncatedHeader(ancbufsize=1) @testSingleCmsgTrunc1.client_skip def _testSingleCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc2Int(self): self.checkHopLimitTruncatedHeader(ancbufsize=2 * SIZEOF_INT) @testSingleCmsgTrunc2Int.client_skip def _testSingleCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncLen0Minus1(self): self.checkHopLimitTruncatedHeader(ancbufsize=socket.CMSG_LEN(0) - 1) @testSingleCmsgTruncLen0Minus1.client_skip def _testSingleCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncInData(self): # Test truncation of a control message inside its associated # data. The message may be returned with its data truncated, # or not returned at all. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertLess(len(cmsg_data), SIZEOF_INT) @testSingleCmsgTruncInData.client_skip def _testSingleCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) def checkTruncatedSecondHeader(self, ancbufsize, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space, which should be large enough to # contain the first item, but too small to contain the header # of the second. Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and only one ancillary # data item is returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertIn(cmsg_type, {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT}) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) # Try the above test with various buffer sizes. 
@requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc0(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT), ignoreflags=socket.MSG_CTRUNC) @testSecondCmsgTrunc0.client_skip def _testSecondCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 1) @testSecondCmsgTrunc1.client_skip def _testSecondCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc2Int(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 2 * SIZEOF_INT) @testSecondCmsgTrunc2Int.client_skip def _testSecondCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncLen0Minus1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(0) - 1) @testSecondCmsgTruncLen0Minus1.client_skip def _testSecondCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncInData(self): # Test truncation of the second of two control messages inside # its associated data. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) cmsg_types = {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT} cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertLess(len(cmsg_data), SIZEOF_INT) self.assertEqual(ancdata, []) @testSecondCmsgTruncInData.client_skip def _testSecondCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Derive concrete test classes for different socket types. 
class SendrecvmsgUDPTestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPTest(SendmsgConnectionlessTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPTest(RecvmsgTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPTest(RecvmsgIntoTests, SendrecvmsgUDPTestBase): pass class SendrecvmsgUDP6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDP6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDP6Test(SendmsgConnectionlessTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDP6Test(RecvmsgTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDP6Test(RecvmsgIntoTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDP6Test(RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDP6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITETestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPLITETest(SendmsgConnectionlessTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPLITETest(RecvmsgTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPLITETest(RecvmsgIntoTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITE6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITE6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') 
@requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDPLITE6Test(SendmsgConnectionlessTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDPLITE6Test(RecvmsgTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDPLITE6Test(RecvmsgIntoTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDPLITE6Test(RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDPLITE6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass class SendrecvmsgTCPTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, TCPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgTCPTest(SendmsgStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgTCPTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoTCPTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass class SendrecvmsgSCTPStreamTestBase(SendrecvmsgSCTPFlagsBase, SendrecvmsgConnectedBase, ConnectedStreamTestMixin, SCTPStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class SendmsgSCTPStreamTest(SendmsgStreamTests, SendrecvmsgSCTPStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgSCTPStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgIntoSCTPStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgIntoSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) 
- see issue #13876") class SendrecvmsgUnixStreamTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, UnixStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "AF_UNIX") class SendmsgUnixStreamTest(SendmsgStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @requireAttrs(socket, "AF_UNIX") class RecvmsgUnixStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @requireAttrs(socket, "AF_UNIX") class RecvmsgIntoUnixStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgSCMRightsStreamTest(SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg_into") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgIntoSCMRightsStreamTest(RecvmsgIntoMixin, SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass # Test interrupting the interruptible send/receive methods with a # signal when a timeout is set. These tests avoid having multiple # threads alive during the test so that the OS cannot deliver the # signal to the wrong one. class InterruptedTimeoutBase: # Base class for interrupted send/receive tests. Installs an # empty handler for SIGALRM and removes it on teardown, along with # any scheduled alarms. def setUp(self): super().setUp() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda signum, frame: 1 / 0) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) # Timeout for socket operations timeout = support.LOOPBACK_TIMEOUT # Provide setAlarm() method to schedule delivery of SIGALRM after # given number of seconds, or cancel it if zero, and an # appropriate time value to use. Use setitimer() if available. if hasattr(signal, "setitimer"): alarm_time = 0.05 def setAlarm(self, seconds): signal.setitimer(signal.ITIMER_REAL, seconds) else: # Old systems may deliver the alarm up to one second early alarm_time = 2 def setAlarm(self, seconds): signal.alarm(seconds) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedRecvTimeoutTest(InterruptedTimeoutBase, UDPTestBase): # Test interrupting the recv*() methods with signals when a # timeout is set. def setUp(self): super().setUp() self.serv.settimeout(self.timeout) def checkInterruptedRecv(self, func, *args, **kwargs): # Check that func(*args, **kwargs) raises # errno of EINTR when interrupted by a signal. 
try: self.setAlarm(self.alarm_time) with self.assertRaises(ZeroDivisionError) as cm: func(*args, **kwargs) finally: self.setAlarm(0) def testInterruptedRecvTimeout(self): self.checkInterruptedRecv(self.serv.recv, 1024) def testInterruptedRecvIntoTimeout(self): self.checkInterruptedRecv(self.serv.recv_into, bytearray(1024)) def testInterruptedRecvfromTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom, 1024) def testInterruptedRecvfromIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom_into, bytearray(1024)) @requireAttrs(socket.socket, "recvmsg") def testInterruptedRecvmsgTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg, 1024) @requireAttrs(socket.socket, "recvmsg_into") def testInterruptedRecvmsgIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg_into, [bytearray(1024)]) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedSendTimeoutTest(InterruptedTimeoutBase, ThreadSafeCleanupTestCase, SocketListeningTestMixin, TCPTestBase): # Test interrupting the interruptible send*() methods with signals # when a timeout is set. def setUp(self): super().setUp() self.serv_conn = self.newSocket() self.addCleanup(self.serv_conn.close) # Use a thread to complete the connection, but wait for it to # terminate before running the test, so that there is only one # thread to accept the signal. cli_thread = threading.Thread(target=self.doConnect) cli_thread.start() self.cli_conn, addr = self.serv.accept() self.addCleanup(self.cli_conn.close) cli_thread.join() self.serv_conn.settimeout(self.timeout) def doConnect(self): self.serv_conn.connect(self.serv_addr) def checkInterruptedSend(self, func, *args, **kwargs): # Check that func(*args, **kwargs), run in a loop, raises # OSError with an errno of EINTR when interrupted by a # signal. try: with self.assertRaises(ZeroDivisionError) as cm: while True: self.setAlarm(self.alarm_time) func(*args, **kwargs) finally: self.setAlarm(0) # Issue #12958: The following tests have problems on OS X prior to 10.7 @support.requires_mac_ver(10, 7) def testInterruptedSendTimeout(self): self.checkInterruptedSend(self.serv_conn.send, b"a"*512) @support.requires_mac_ver(10, 7) def testInterruptedSendtoTimeout(self): # Passing an actual address here as Python's wrapper for # sendto() doesn't allow passing a zero-length one; POSIX # requires that the address is ignored since the socket is # connection-mode, however. self.checkInterruptedSend(self.serv_conn.sendto, b"a"*512, self.serv_addr) @support.requires_mac_ver(10, 7) @requireAttrs(socket.socket, "sendmsg") def testInterruptedSendmsgTimeout(self): self.checkInterruptedSend(self.serv_conn.sendmsg, [b"a"*512]) class TCPCloserTest(ThreadedTCPSocketTest): def testClose(self): conn, addr = self.serv.accept() conn.close() sd = self.cli read, write, err = select.select([sd], [], [], 1.0) self.assertEqual(read, [sd]) self.assertEqual(sd.recv(1), b'') # Calling close() many times should be safe. 
conn.close() conn.close() def _testClose(self): self.cli.connect((HOST, self.port)) time.sleep(1.0) class BasicSocketPairTest(SocketPairTest): def __init__(self, methodName='runTest'): SocketPairTest.__init__(self, methodName=methodName) def _check_defaults(self, sock): self.assertIsInstance(sock, socket.socket) if hasattr(socket, 'AF_UNIX'): self.assertEqual(sock.family, socket.AF_UNIX) else: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def _testDefaults(self): self._check_defaults(self.cli) def testDefaults(self): self._check_defaults(self.serv) def testRecv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.send(MSG) def testSend(self): self.serv.send(MSG) def _testSend(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) class NonBlockingTCPTests(ThreadedTCPSocketTest): def __init__(self, methodName='runTest'): self.event = threading.Event() ThreadedTCPSocketTest.__init__(self, methodName=methodName) def assert_sock_timeout(self, sock, timeout): self.assertEqual(self.serv.gettimeout(), timeout) blocking = (timeout != 0.0) self.assertEqual(sock.getblocking(), blocking) if fcntl is not None: # When a Python socket has a non-zero timeout, it's switched # internally to a non-blocking mode. Later, sock.sendall(), # sock.recv(), and other socket operations use a select() call and # handle EWOULDBLOCK/EGAIN on all socket operations. That's how # timeouts are enforced. fd_blocking = (timeout is None) flag = fcntl.fcntl(sock, fcntl.F_GETFL, os.O_NONBLOCK) self.assertEqual(not bool(flag & os.O_NONBLOCK), fd_blocking) def testSetBlocking(self): # Test setblocking() and settimeout() methods self.serv.setblocking(True) self.assert_sock_timeout(self.serv, None) self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.settimeout(None) self.assert_sock_timeout(self.serv, None) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) self.serv.settimeout(10) self.assert_sock_timeout(self.serv, 10) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) def _testSetBlocking(self): pass @support.cpython_only def testSetBlocking_overflow(self): # Issue 15989 import _testcapi if _testcapi.UINT_MAX >= _testcapi.ULONG_MAX: self.skipTest('needs UINT_MAX < ULONG_MAX') self.serv.setblocking(False) self.assertEqual(self.serv.gettimeout(), 0.0) self.serv.setblocking(_testcapi.UINT_MAX + 1) self.assertIsNone(self.serv.gettimeout()) _testSetBlocking_overflow = support.cpython_only(_testSetBlocking) @unittest.skipUnless(hasattr(socket, 'SOCK_NONBLOCK'), 'test needs socket.SOCK_NONBLOCK') @support.requires_linux_version(2, 6, 28) def testInitNonBlocking(self): # create a socket with SOCK_NONBLOCK self.serv.close() self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) self.assert_sock_timeout(self.serv, 0) def _testInitNonBlocking(self): pass def testInheritFlagsBlocking(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must be blocking. with socket_setdefaulttimeout(None): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testInheritFlagsBlocking(self): self.cli.connect((HOST, self.port)) def testInheritFlagsTimeout(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must inherit # the default timeout. 
default_timeout = 20.0 with socket_setdefaulttimeout(default_timeout): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertEqual(conn.gettimeout(), default_timeout) def _testInheritFlagsTimeout(self): self.cli.connect((HOST, self.port)) def testAccept(self): # Testing non-blocking accept self.serv.setblocking(False) # connect() didn't start: non-blocking accept() fails start_time = time.monotonic() with self.assertRaises(BlockingIOError): conn, addr = self.serv.accept() dt = time.monotonic() - start_time self.assertLess(dt, 1.0) self.event.set() read, write, err = select.select([self.serv], [], [], support.LONG_TIMEOUT) if self.serv not in read: self.fail("Error trying to do accept after select.") # connect() completed: non-blocking accept() doesn't block conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testAccept(self): # don't connect before event is set to check # that non-blocking accept() raises BlockingIOError self.event.wait() self.cli.connect((HOST, self.port)) def testRecv(self): # Testing non-blocking recv conn, addr = self.serv.accept() self.addCleanup(conn.close) conn.setblocking(False) # the server didn't send data yet: non-blocking recv() fails with self.assertRaises(BlockingIOError): msg = conn.recv(len(MSG)) self.event.set() read, write, err = select.select([conn], [], [], support.LONG_TIMEOUT) if conn not in read: self.fail("Error during select call to non-blocking socket.") # the server sent data yet: non-blocking recv() doesn't block msg = conn.recv(len(MSG)) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.connect((HOST, self.port)) # don't send anything before event is set to check # that non-blocking recv() raises BlockingIOError self.event.wait() # send data: recv() will no longer block self.cli.sendall(MSG) class FileObjectClassTestCase(SocketConnectedTest): """Unit tests for the object returned by socket.makefile() self.read_file is the io object returned by makefile() on the client connection. You can read from this file to get output from the server. self.write_file is the io object returned by makefile() on the server connection. You can write to this file to send output to the client. """ bufsize = -1 # Use default buffer size encoding = 'utf-8' errors = 'strict' newline = None read_mode = 'rb' read_msg = MSG write_mode = 'wb' write_msg = MSG def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def setUp(self): self.evt1, self.evt2, self.serv_finished, self.cli_finished = [ threading.Event() for i in range(4)] SocketConnectedTest.setUp(self) self.read_file = self.cli_conn.makefile( self.read_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def tearDown(self): self.serv_finished.set() self.read_file.close() self.assertTrue(self.read_file.closed) self.read_file = None SocketConnectedTest.tearDown(self) def clientSetUp(self): SocketConnectedTest.clientSetUp(self) self.write_file = self.serv_conn.makefile( self.write_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def clientTearDown(self): self.cli_finished.set() self.write_file.close() self.assertTrue(self.write_file.closed) self.write_file = None SocketConnectedTest.clientTearDown(self) def testReadAfterTimeout(self): # Issue #7322: A file object must disallow further reads # after a timeout has occurred. 
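# FileObjectClassTestCase drives the io object returned by socket.makefile().
# A minimal sketch of that API over a socketpair, assuming socketpair() is
# available; the modes and the helper name are illustrative and not the
# fixtures used by the test class itself.
import socket


def _makefile_sketch():
    left, right = socket.socketpair()
    try:
        writer = left.makefile("wb")    # buffered binary writer over one end
        reader = right.makefile("rb")   # buffered binary reader over the other
        writer.write(b"hello\n")
        writer.flush()                  # makefile() objects buffer; flush to send
        assert reader.readline() == b"hello\n"
        # Closing the file objects does not close the underlying sockets.
        writer.close()
        reader.close()
        assert not left._closed and not right._closed
    finally:
        left.close()
        right.close()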
self.cli_conn.settimeout(1) self.read_file.read(3) # First read raises a timeout self.assertRaises(TimeoutError, self.read_file.read, 1) # Second read is disallowed with self.assertRaises(OSError) as ctx: self.read_file.read(1) self.assertIn("cannot read from timed out object", str(ctx.exception)) def _testReadAfterTimeout(self): self.write_file.write(self.write_msg[0:3]) self.write_file.flush() self.serv_finished.wait() def testSmallRead(self): # Performing small file read test first_seg = self.read_file.read(len(self.read_msg)-3) second_seg = self.read_file.read(3) msg = first_seg + second_seg self.assertEqual(msg, self.read_msg) def _testSmallRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testFullRead(self): # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testFullRead(self): self.write_file.write(self.write_msg) self.write_file.close() def testUnbufferedRead(self): # Performing unbuffered file read test buf = type(self.read_msg)() while 1: char = self.read_file.read(1) if not char: break buf += char self.assertEqual(buf, self.read_msg) def _testUnbufferedRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testReadline(self): # Performing file readline test line = self.read_file.readline() self.assertEqual(line, self.read_msg) def _testReadline(self): self.write_file.write(self.write_msg) self.write_file.flush() def testCloseAfterMakefile(self): # The file returned by makefile should keep the socket open. self.cli_conn.close() # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testCloseAfterMakefile(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileAfterMakefileClose(self): self.read_file.close() msg = self.cli_conn.recv(len(MSG)) if isinstance(self.read_msg, str): msg = msg.decode() self.assertEqual(msg, self.read_msg) def _testMakefileAfterMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testClosedAttr(self): self.assertTrue(not self.read_file.closed) def _testClosedAttr(self): self.assertTrue(not self.write_file.closed) def testAttributes(self): self.assertEqual(self.read_file.mode, self.read_mode) self.assertEqual(self.read_file.name, self.cli_conn.fileno()) def _testAttributes(self): self.assertEqual(self.write_file.mode, self.write_mode) self.assertEqual(self.write_file.name, self.serv_conn.fileno()) def testRealClose(self): self.read_file.close() self.assertRaises(ValueError, self.read_file.fileno) self.cli_conn.close() self.assertRaises(OSError, self.cli_conn.getsockname) def _testRealClose(self): pass class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase): """Repeat the tests from FileObjectClassTestCase with bufsize==0. In this case (and in this case only), it should be possible to create a file object, read a line from it, create another file object, read another line from it, without loss of data in the first file object's buffer. Note that http.client relies on this when reading multiple requests from the same socket.""" bufsize = 0 # Use unbuffered mode def testUnbufferedReadline(self): # Read a line, create a new file object, read another line with it line = self.read_file.readline() # first line self.assertEqual(line, b"A. " + self.write_msg) # first line self.read_file = self.cli_conn.makefile('rb', 0) line = self.read_file.readline() # second line self.assertEqual(line, b"B. 
" + self.write_msg) # second line def _testUnbufferedReadline(self): self.write_file.write(b"A. " + self.write_msg) self.write_file.write(b"B. " + self.write_msg) self.write_file.flush() def testMakefileClose(self): # The file returned by makefile should keep the socket open... self.cli_conn.close() msg = self.cli_conn.recv(1024) self.assertEqual(msg, self.read_msg) # ...until the file is itself closed self.read_file.close() self.assertRaises(OSError, self.cli_conn.recv, 1024) def _testMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileCloseSocketDestroy(self): refcount_before = sys.getrefcount(self.cli_conn) self.read_file.close() refcount_after = sys.getrefcount(self.cli_conn) self.assertEqual(refcount_before - 1, refcount_after) def _testMakefileCloseSocketDestroy(self): pass # Non-blocking ops # NOTE: to set `read_file` as non-blocking, we must call # `cli_conn.setblocking` and vice-versa (see setUp / clientSetUp). def testSmallReadNonBlocking(self): self.cli_conn.setblocking(False) self.assertEqual(self.read_file.readinto(bytearray(10)), None) self.assertEqual(self.read_file.read(len(self.read_msg) - 3), None) self.evt1.set() self.evt2.wait(1.0) first_seg = self.read_file.read(len(self.read_msg) - 3) if first_seg is None: # Data not arrived (can happen under Windows), wait a bit time.sleep(0.5) first_seg = self.read_file.read(len(self.read_msg) - 3) buf = bytearray(10) n = self.read_file.readinto(buf) self.assertEqual(n, 3) msg = first_seg + buf[:n] self.assertEqual(msg, self.read_msg) self.assertEqual(self.read_file.readinto(bytearray(16)), None) self.assertEqual(self.read_file.read(1), None) def _testSmallReadNonBlocking(self): self.evt1.wait(1.0) self.write_file.write(self.write_msg) self.write_file.flush() self.evt2.set() # Avoid closing the socket before the server test has finished, # otherwise system recv() will return 0 instead of EWOULDBLOCK. self.serv_finished.wait(5.0) def testWriteNonBlocking(self): self.cli_finished.wait(5.0) # The client thread can't skip directly - the SkipTest exception # would appear as a failure. if self.serv_skipped: self.skipTest(self.serv_skipped) def _testWriteNonBlocking(self): self.serv_skipped = None self.serv_conn.setblocking(False) # Try to saturate the socket buffer pipe with repeated large writes. BIG = b"x" * support.SOCK_MAX_SIZE LIMIT = 10 # The first write() succeeds since a chunk of data can be buffered n = self.write_file.write(BIG) self.assertGreater(n, 0) for i in range(LIMIT): n = self.write_file.write(BIG) if n is None: # Succeeded break self.assertGreater(n, 0) else: # Let us know that this test didn't manage to establish # the expected conditions. This is not a failure in itself but, # if it happens repeatedly, the test should be fixed. 
self.serv_skipped = "failed to saturate the socket buffer" class LineBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 1 # Default-buffered for reading; line-buffered for writing class SmallBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 2 # Exercise the buffering code class UnicodeReadFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'wb' write_msg = MSG newline = '' class UnicodeWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'rb' read_msg = MSG write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class UnicodeReadWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class NetworkConnectionTest(object): """Prove network connection.""" def clientSetUp(self): # We're inherited below by BasicTCPTest2, which also inherits # BasicTCPTest, which defines self.port referenced below. self.cli = socket.create_connection((HOST, self.port)) self.serv_conn = self.cli class BasicTCPTest2(NetworkConnectionTest, BasicTCPTest): """Tests that NetworkConnection does not break existing TCP functionality. """ class NetworkConnectionNoServer(unittest.TestCase): class MockSocket(socket.socket): def connect(self, *args): raise TimeoutError('timed out') @contextlib.contextmanager def mocked_socket_module(self): """Return a socket which times out on connect""" old_socket = socket.socket socket.socket = self.MockSocket try: yield finally: socket.socket = old_socket def test_connect(self): port = socket_helper.find_unused_port() cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(cli.close) with self.assertRaises(OSError) as cm: cli.connect((HOST, port)) self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) def test_create_connection(self): # Issue #9792: errors raised by create_connection() should have # a proper errno attribute. port = socket_helper.find_unused_port() with self.assertRaises(OSError) as cm: socket.create_connection((HOST, port)) # Issue #16257: create_connection() calls getaddrinfo() against # 'localhost'. This may result in an IPV6 addr being returned # as well as an IPV4 one: # >>> socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) # >>> [(2, 2, 0, '', ('127.0.0.1', 41230)), # (26, 2, 0, '', ('::1', 41230, 0, 0))] # # create_connection() enumerates through all the addresses returned # and if it doesn't successfully bind to any of them, it propagates # the last exception it encountered. # # On Solaris, ENETUNREACH is returned in this circumstance instead # of ECONNREFUSED. So, if that errno exists, add it to our list of # expected errnos. expected_errnos = socket_helper.get_socket_conn_refused_errs() self.assertIn(cm.exception.errno, expected_errnos) def test_create_connection_timeout(self): # Issue #9792: create_connection() should not recast timeout errors # as generic socket errors. 
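# The NetworkConnection* tests around here exercise socket.create_connection(),
# which resolves the host, tries each returned address in turn, and raises the
# last error only if every attempt fails. A minimal usage sketch against a
# throwaway loopback listener; the helper name is illustrative only.
import socket


def _create_connection_sketch():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        listener.bind(("127.0.0.1", 0))     # port 0 lets the OS pick a free port
        listener.listen(1)
        host, port = listener.getsockname()
        conn = socket.create_connection((host, port), timeout=5)
        with conn:
            assert conn.gettimeout() == 5   # explicit timeout overrides the global default
        server_side, _ = listener.accept()
        server_side.close()
    finally:
        listener.close()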
with self.mocked_socket_module(): try: socket.create_connection((HOST, 1234)) except TimeoutError: pass except OSError as exc: if socket_helper.IPV6_ENABLED or exc.errno != errno.EAFNOSUPPORT: raise else: self.fail('TimeoutError not raised') class NetworkConnectionAttributesTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.source_port = socket_helper.find_unused_port() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def _justAccept(self): conn, addr = self.serv.accept() conn.close() testFamily = _justAccept def _testFamily(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT) self.addCleanup(self.cli.close) self.assertEqual(self.cli.family, 2) testSourceAddress = _justAccept def _testSourceAddress(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT, source_address=('', self.source_port)) self.addCleanup(self.cli.close) self.assertEqual(self.cli.getsockname()[1], self.source_port) # The port number being used is sufficient to show that the bind() # call happened. testTimeoutDefault = _justAccept def _testTimeoutDefault(self): # passing no explicit timeout uses socket's global default self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(42) try: self.cli = socket.create_connection((HOST, self.port)) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), 42) testTimeoutNone = _justAccept def _testTimeoutNone(self): # None timeout means the same as sock.settimeout(None) self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(30) try: self.cli = socket.create_connection((HOST, self.port), timeout=None) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), None) testTimeoutValueNamed = _justAccept def _testTimeoutValueNamed(self): self.cli = socket.create_connection((HOST, self.port), timeout=30) self.assertEqual(self.cli.gettimeout(), 30) testTimeoutValueNonamed = _justAccept def _testTimeoutValueNonamed(self): self.cli = socket.create_connection((HOST, self.port), 30) self.addCleanup(self.cli.close) self.assertEqual(self.cli.gettimeout(), 30) class NetworkConnectionBehaviourTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def testInsideTimeout(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) time.sleep(3) conn.send(b"done!") testOutsideTimeout = testInsideTimeout def _testInsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port)) data = sock.recv(5) self.assertEqual(data, b"done!") def _testOutsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port), timeout=1) self.assertRaises(TimeoutError, lambda: sock.recv(5)) class TCPTimeoutTest(SocketTCPTest): def testTCPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.accept() self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (TCP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.accept() except TimeoutError: self.fail("caught 
timeout instead of error (TCP)") except OSError: ok = True except: self.fail("caught unexpected exception (TCP)") if not ok: self.fail("accept() returned success when we did not expect it") @unittest.skipUnless(hasattr(signal, 'alarm'), 'test needs signal.alarm()') def testInterruptedTimeout(self): # XXX I don't know how to do this test on MSWindows or any other # platform that doesn't support signal.alarm() or os.kill(), though # the bug should have existed on all platforms. self.serv.settimeout(5.0) # must be longer than alarm class Alarm(Exception): pass def alarm_handler(signal, frame): raise Alarm old_alarm = signal.signal(signal.SIGALRM, alarm_handler) try: try: signal.alarm(2) # POSIX allows alarm to be up to 1 second early foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of Alarm") except Alarm: pass except: self.fail("caught other exception instead of Alarm:" " %s(%s):\n%s" % (sys.exc_info()[:2] + (traceback.format_exc(),))) else: self.fail("nothing caught") finally: signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: # no alarm can be pending. Safe to restore old handler. signal.signal(signal.SIGALRM, old_alarm) class UDPTimeoutTest(SocketUDPTest): def testUDPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDP)") except OSError: ok = True except: self.fail("caught unexpected exception (UDP)") if not ok: self.fail("recv() returned success when we did not expect it") @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class UDPLITETimeoutTest(SocketUDPLITETest): def testUDPLITETimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDPLITE)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDPLITE)") except OSError: ok = True except: self.fail("caught unexpected exception (UDPLITE)") if not ok: self.fail("recv() returned success when we did not expect it") class TestExceptions(unittest.TestCase): def testExceptionTree(self): self.assertTrue(issubclass(OSError, Exception)) self.assertTrue(issubclass(socket.herror, OSError)) self.assertTrue(issubclass(socket.gaierror, OSError)) self.assertTrue(issubclass(socket.timeout, OSError)) self.assertIs(socket.error, OSError) self.assertIs(socket.timeout, TimeoutError) def test_setblocking_invalidfd(self): # Regression test for issue #28471 sock0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, 0, sock0.fileno()) sock0.close() self.addCleanup(sock.detach) with self.assertRaises(OSError): sock.setblocking(False) @unittest.skipUnless(sys.platform == 'linux', 'Linux specific test') class TestLinuxAbstractNamespace(unittest.TestCase): UNIX_PATH_MAX = 108 def testLinuxAbstractNamespace(self): address = b"\x00python-test-hello\x00\xff" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind(address) s1.listen() with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.connect(s1.getsockname()) with s1.accept()[0] as s3: 
self.assertEqual(s1.getsockname(), address) self.assertEqual(s2.getpeername(), address) def testMaxName(self): address = b"\x00" + b"h" * (self.UNIX_PATH_MAX - 1) with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(address) self.assertEqual(s.getsockname(), address) def testNameOverflow(self): address = "\x00" + "h" * self.UNIX_PATH_MAX with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: self.assertRaises(OSError, s.bind, address) def testStrName(self): # Check that an abstract name can be passed as a string. s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: s.bind("\x00python\x00test\x00") self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") finally: s.close() def testBytearrayName(self): # Check that an abstract name can be passed as a bytearray. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(bytearray(b"\x00python\x00test\x00")) self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") def testAutobind(self): # Check that binding to an empty string binds to an available address # in the abstract namespace as specified in unix(7) "Autobind feature". abstract_address = b"^\0[0-9a-f]{5}" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind("") self.assertRegex(s1.getsockname(), abstract_address) # Each socket is bound to a different abstract address. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.bind("") self.assertRegex(s2.getsockname(), abstract_address) self.assertNotEqual(s1.getsockname(), s2.getsockname()) @unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX') class TestUnixDomain(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) def tearDown(self): self.sock.close() def encoded(self, path): # Return the given path encoded in the file system encoding, # or skip the test if this is not possible. try: return os.fsencode(path) except UnicodeEncodeError: self.skipTest( "Pathname {0!a} cannot be represented in file " "system encoding {1!r}".format( path, sys.getfilesystemencoding())) def bind(self, sock, path): # Bind the socket try: socket_helper.bind_unix_socket(sock, path) except OSError as e: if str(e) == "AF_UNIX path too long": self.skipTest( "Pathname {0!a} is too long to serve as an AF_UNIX path" .format(path)) else: raise def testUnbound(self): # Issue #30205 (note getsockname() can return None on OS X) self.assertIn(self.sock.getsockname(), ('', None)) def testStrAddr(self): # Test binding to and retrieving a normal string pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testBytesAddr(self): # Test binding to a bytes pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, self.encoded(path)) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testSurrogateescapeBind(self): # Test binding to a valid non-ASCII pathname, with the # non-ASCII bytes supplied using surrogateescape encoding. path = os.path.abspath(os_helper.TESTFN_UNICODE) b = self.encoded(path) self.bind(self.sock, b.decode("ascii", "surrogateescape")) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testUnencodableAddr(self): # Test binding to a pathname that cannot be encoded in the # file system encoding. 
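# TestLinuxAbstractNamespace above binds AF_UNIX sockets to names that begin
# with a NUL byte; on Linux those live in the abstract namespace and never
# create a filesystem entry. A minimal sketch, assuming Linux; the socket
# name and helper name are illustrative only.
import socket
import sys


def _abstract_namespace_sketch():
    if sys.platform != "linux":
        return
    name = b"\x00demo-abstract-socket"
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(name)                  # no file is created, nothing to unlink later
        srv.listen(1)
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
            cli.connect(srv.getsockname())
            assert srv.getsockname() == name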
if os_helper.TESTFN_UNENCODABLE is None: self.skipTest("No unencodable filename available") path = os.path.abspath(os_helper.TESTFN_UNENCODABLE) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) @unittest.skipIf(sys.platform == 'linux', 'Linux specific test') def testEmptyAddress(self): # Test that binding empty address fails. self.assertRaises(OSError, self.sock.bind, "") class BufferIOTest(SocketConnectedTest): """ Test the buffer versions of socket.recv() and socket.send(). """ def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecvIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvIntoBytearray(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoBytearray = _testRecvIntoArray def testRecvIntoMemoryview(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoMemoryview = _testRecvIntoArray def testRecvFromIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvFromIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvFromIntoBytearray(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoBytearray = _testRecvFromIntoArray def testRecvFromIntoMemoryview(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoMemoryview = _testRecvFromIntoArray def testRecvFromIntoSmallBuffer(self): # See issue #20246. buf = bytearray(8) self.assertRaises(ValueError, self.cli_conn.recvfrom_into, buf, 1024) def _testRecvFromIntoSmallBuffer(self): self.serv_conn.send(MSG) def testRecvFromIntoEmptyBuffer(self): buf = bytearray() self.cli_conn.recvfrom_into(buf) self.cli_conn.recvfrom_into(buf, 0) _testRecvFromIntoEmptyBuffer = _testRecvFromIntoArray TIPC_STYPE = 2000 TIPC_LOWER = 200 TIPC_UPPER = 210 def isTipcAvailable(): """Check if the TIPC module is loaded The TIPC module is not loaded automatically on Ubuntu and probably other Linux distros. """ if not hasattr(socket, "AF_TIPC"): return False try: f = open("/proc/modules", encoding="utf-8") except (FileNotFoundError, IsADirectoryError, PermissionError): # It's ok if the file does not exist, is a directory or if we # have not the permission to read it. 
return False with f: for line in f: if line.startswith("tipc "): return True return False @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCTest(unittest.TestCase): def testRDM(self): srv = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) cli = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) self.addCleanup(srv.close) self.addCleanup(cli.close) srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) srv.bind(srvaddr) sendaddr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) cli.sendto(MSG, sendaddr) msg, recvaddr = srv.recvfrom(1024) self.assertEqual(cli.getsockname(), recvaddr) self.assertEqual(msg, MSG) @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCThreadableTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName = 'runTest'): unittest.TestCase.__init__(self, methodName = methodName) ThreadableTest.__init__(self) def setUp(self): self.srv = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.srv.close) self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) self.srv.bind(srvaddr) self.srv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.srv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): # There is a hittable race between serverExplicitReady() and the # accept() call; sleep a little while to avoid it, otherwise # we could get an exception time.sleep(0.1) self.cli = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.cli.close) addr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) self.cli.connect(addr) self.cliaddr = self.cli.getsockname() def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) self.assertEqual(self.cliaddr, self.connaddr) def _testStream(self): self.cli.send(MSG) self.cli.close() class ContextManagersTest(ThreadedTCPSocketTest): def _testSocketClass(self): # base test with socket.socket() as sock: self.assertFalse(sock._closed) self.assertTrue(sock._closed) # close inside with block with socket.socket() as sock: sock.close() self.assertTrue(sock._closed) # exception inside with block with socket.socket() as sock: self.assertRaises(OSError, sock.sendall, b'foo') self.assertTrue(sock._closed) def testCreateConnectionBase(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionBase(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: self.assertFalse(sock._closed) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') self.assertTrue(sock._closed) def testCreateConnectionClose(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionClose(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: sock.close() self.assertTrue(sock._closed) self.assertRaises(OSError, sock.sendall, b'foo') class InheritanceTest(unittest.TestCase): @unittest.skipUnless(hasattr(socket, "SOCK_CLOEXEC"), "SOCK_CLOEXEC not defined") @support.requires_linux_version(2, 6, 28) def test_SOCK_CLOEXEC(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC) as s: self.assertEqual(s.type, 
socket.SOCK_STREAM) self.assertFalse(s.get_inheritable()) def test_default_inheritable(self): sock = socket.socket() with sock: self.assertEqual(sock.get_inheritable(), False) def test_dup(self): sock = socket.socket() with sock: newsock = sock.dup() sock.close() with newsock: self.assertEqual(newsock.get_inheritable(), False) def test_set_inheritable(self): sock = socket.socket() with sock: sock.set_inheritable(True) self.assertEqual(sock.get_inheritable(), True) sock.set_inheritable(False) self.assertEqual(sock.get_inheritable(), False) @unittest.skipIf(fcntl is None, "need fcntl") def test_get_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(sock.get_inheritable(), False) # clear FD_CLOEXEC flag flags = fcntl.fcntl(fd, fcntl.F_GETFD) flags &= ~fcntl.FD_CLOEXEC fcntl.fcntl(fd, fcntl.F_SETFD, flags) self.assertEqual(sock.get_inheritable(), True) @unittest.skipIf(fcntl is None, "need fcntl") def test_set_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, fcntl.FD_CLOEXEC) sock.set_inheritable(True) self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, 0) def test_socketpair(self): s1, s2 = socket.socketpair() self.addCleanup(s1.close) self.addCleanup(s2.close) self.assertEqual(s1.get_inheritable(), False) self.assertEqual(s2.get_inheritable(), False) @unittest.skipUnless(hasattr(socket, "SOCK_NONBLOCK"), "SOCK_NONBLOCK not defined") class NonblockConstantTest(unittest.TestCase): def checkNonblock(self, s, nonblock=True, timeout=0.0): if nonblock: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), timeout) self.assertTrue( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) if timeout == 0: # timeout == 0: means that getblocking() must be False. self.assertFalse(s.getblocking()) else: # If timeout > 0, the socket will be in a "blocking" mode # from the standpoint of the Python API. For Python socket # object, "blocking" means that operations like 'sock.recv()' # will block. Internally, file descriptors for # "blocking" Python sockets *with timeouts* are in a # *non-blocking* mode, and 'sock.recv()' uses 'select()' # and handles EWOULDBLOCK/EAGAIN to enforce the timeout. 
self.assertTrue(s.getblocking()) else: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), None) self.assertFalse( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) self.assertTrue(s.getblocking()) @support.requires_linux_version(2, 6, 28) def test_SOCK_NONBLOCK(self): # a lot of it seems silly and redundant, but I wanted to test that # changing back and forth worked ok with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) as s: self.checkNonblock(s) s.setblocking(True) self.checkNonblock(s, nonblock=False) s.setblocking(False) self.checkNonblock(s) s.settimeout(None) self.checkNonblock(s, nonblock=False) s.settimeout(2.0) self.checkNonblock(s, timeout=2.0) s.setblocking(True) self.checkNonblock(s, nonblock=False) # defaulttimeout t = socket.getdefaulttimeout() socket.setdefaulttimeout(0.0) with socket.socket() as s: self.checkNonblock(s) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(2.0) with socket.socket() as s: self.checkNonblock(s, timeout=2.0) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(t) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(multiprocessing, "need multiprocessing") class TestSocketSharing(SocketTCPTest): # This must be classmethod and not staticmethod or multiprocessing # won't be able to bootstrap it. @classmethod def remoteProcessServer(cls, q): # Recreate socket from shared data sdata = q.get() message = q.get() s = socket.fromshare(sdata) s2, c = s.accept() # Send the message s2.sendall(message) s2.close() s.close() def testShare(self): # Transfer the listening server socket to another process # and service it from there. # Create process: q = multiprocessing.Queue() p = multiprocessing.Process(target=self.remoteProcessServer, args=(q,)) p.start() # Get the shared socket data data = self.serv.share(p.pid) # Pass the shared socket to the other process addr = self.serv.getsockname() self.serv.close() q.put(data) # The data that the server will send us message = b"slapmahfro" q.put(message) # Connect s = socket.create_connection(addr) # listen for the data m = [] while True: data = s.recv(100) if not data: break m.append(data) s.close() received = b"".join(m) self.assertEqual(received, message) p.join() def testShareLength(self): data = self.serv.share(os.getpid()) self.assertRaises(ValueError, socket.fromshare, data[:-1]) self.assertRaises(ValueError, socket.fromshare, data+b"foo") def compareSockets(self, org, other): # socket sharing is expected to work only for blocking socket # since the internal python timeout value isn't transferred. self.assertEqual(org.gettimeout(), None) self.assertEqual(org.gettimeout(), other.gettimeout()) self.assertEqual(org.family, other.family) self.assertEqual(org.type, other.type) # If the user specified "0" for proto, then # internally windows will have picked the correct value. # Python introspection on the socket however will still return # 0. For the shared socket, the python value is recreated # from the actual value, so it may not compare correctly. 
if org.proto != 0: self.assertEqual(org.proto, other.proto) def testShareLocal(self): data = self.serv.share(os.getpid()) s = socket.fromshare(data) try: self.compareSockets(self.serv, s) finally: s.close() def testTypes(self): families = [socket.AF_INET, socket.AF_INET6] types = [socket.SOCK_STREAM, socket.SOCK_DGRAM] for f in families: for t in types: try: source = socket.socket(f, t) except OSError: continue # This combination is not supported try: data = source.share(os.getpid()) shared = socket.fromshare(data) try: self.compareSockets(source, shared) finally: shared.close() finally: source.close() class SendfileUsingSendTest(ThreadedTCPSocketTest): """ Test the send() implementation of socket.sendfile(). """ FILESIZE = (10 * 1024 * 1024) # 10 MiB BUFSIZE = 8192 FILEDATA = b"" TIMEOUT = support.LOOPBACK_TIMEOUT @classmethod def setUpClass(cls): def chunks(total, step): assert total >= step while total > step: yield step total -= step if total: yield total chunk = b"".join([random.choice(string.ascii_letters).encode() for i in range(cls.BUFSIZE)]) with open(os_helper.TESTFN, 'wb') as f: for csize in chunks(cls.FILESIZE, cls.BUFSIZE): f.write(chunk) with open(os_helper.TESTFN, 'rb') as f: cls.FILEDATA = f.read() assert len(cls.FILEDATA) == cls.FILESIZE @classmethod def tearDownClass(cls): os_helper.unlink(os_helper.TESTFN) def accept_conn(self): self.serv.settimeout(support.LONG_TIMEOUT) conn, addr = self.serv.accept() conn.settimeout(self.TIMEOUT) self.addCleanup(conn.close) return conn def recv_data(self, conn): received = [] while True: chunk = conn.recv(self.BUFSIZE) if not chunk: break received.append(chunk) return b''.join(received) def meth_from_sock(self, sock): # Depending on the mixin class being run return either send() # or sendfile() method implementation. 
return getattr(sock, "_sendfile_use_send") # regular file def _testRegularFile(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) def testRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # non regular file def _testNonRegularFile(self): address = self.serv.getsockname() file = io.BytesIO(self.FILEDATA) with socket.create_connection(address) as sock, file as file: sent = sock.sendfile(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) self.assertRaises(socket._GiveupOnSendfile, sock._sendfile_use_sendfile, file) def testNonRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # empty file def _testEmptyFileSend(self): address = self.serv.getsockname() filename = os_helper.TESTFN + "2" with open(filename, 'wb'): self.addCleanup(os_helper.unlink, filename) file = open(filename, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, 0) self.assertEqual(file.tell(), 0) def testEmptyFileSend(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(data, b"") # offset def _testOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file, offset=5000) self.assertEqual(sent, self.FILESIZE - 5000) self.assertEqual(file.tell(), self.FILESIZE) def testOffset(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE - 5000) self.assertEqual(data, self.FILEDATA[5000:]) # count def _testCount(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 5000007 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCount(self): count = 5000007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count small def _testCountSmall(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 1 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCountSmall(self): count = 1 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count + offset def _testCountWithOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address, timeout=2) as sock, file as file: count = 100007 meth = self.meth_from_sock(sock) sent = meth(file, offset=2007, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count + 2007) def testCountWithOffset(self): count = 100007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) 
self.assertEqual(data, self.FILEDATA[2007:count+2007]) # non blocking sockets are not supposed to work def _testNonBlocking(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: sock.setblocking(False) meth = self.meth_from_sock(sock) self.assertRaises(ValueError, meth, file) self.assertRaises(ValueError, sock.sendfile, file) def testNonBlocking(self): conn = self.accept_conn() if conn.recv(8192): self.fail('was not supposed to receive any data') # timeout (non-triggered) def _testWithTimeout(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) def testWithTimeout(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # timeout (triggered) def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(os_helper.TESTFN, 'rb') as file: with socket.create_connection(address) as sock: sock.settimeout(0.01) meth = self.meth_from_sock(sock) self.assertRaises(TimeoutError, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) time.sleep(1) # errors def _test_errors(self): pass def test_errors(self): with open(os_helper.TESTFN, 'rb') as file: with socket.socket(type=socket.SOCK_DGRAM) as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "SOCK_STREAM", meth, file) with open(os_helper.TESTFN, encoding="utf-8") as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "binary mode", meth, file) with open(os_helper.TESTFN, 'rb') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex(TypeError, "positive integer", meth, file, count='2') self.assertRaisesRegex(TypeError, "positive integer", meth, file, count=0.1) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=0) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=-1) @unittest.skipUnless(hasattr(os, "sendfile"), 'os.sendfile() required for this test.') class SendfileUsingSendfileTest(SendfileUsingSendTest): """ Test the sendfile() implementation of socket.sendfile(). 
""" def meth_from_sock(self, sock): return getattr(sock, "_sendfile_use_sendfile") @unittest.skipUnless(HAVE_SOCKET_ALG, 'AF_ALG required') class LinuxKernelCryptoAPI(unittest.TestCase): # tests for AF_ALG def create_alg(self, typ, name): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) try: sock.bind((typ, name)) except FileNotFoundError as e: # type / algorithm is not available sock.close() raise unittest.SkipTest(str(e), typ, name) else: return sock # bpo-31705: On kernel older than 4.5, sendto() failed with ENOKEY, # at least on ppc64le architecture @support.requires_linux_version(4, 5) def test_sha256(self): expected = bytes.fromhex("ba7816bf8f01cfea414140de5dae2223b00361a396" "177a9cb410ff61f20015ad") with self.create_alg('hash', 'sha256') as algo: op, _ = algo.accept() with op: op.sendall(b"abc") self.assertEqual(op.recv(512), expected) op, _ = algo.accept() with op: op.send(b'a', socket.MSG_MORE) op.send(b'b', socket.MSG_MORE) op.send(b'c', socket.MSG_MORE) op.send(b'') self.assertEqual(op.recv(512), expected) def test_hmac_sha1(self): expected = bytes.fromhex("effcdf6ae5eb2fa2d27416d5f184df9c259a7c79") with self.create_alg('hash', 'hmac(sha1)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, b"Jefe") op, _ = algo.accept() with op: op.sendall(b"what do ya want for nothing?") self.assertEqual(op.recv(512), expected) # Although it should work with 3.19 and newer the test blocks on # Ubuntu 15.10 with Kernel 4.2.0-19. @support.requires_linux_version(4, 3) def test_aes_cbc(self): key = bytes.fromhex('06a9214036b8a15b512e03d534120006') iv = bytes.fromhex('3dafba429d9eb430b422da802c9fac41') msg = b"Single block msg" ciphertext = bytes.fromhex('e353779c1079aeb82708942dbe77181a') msglen = len(msg) with self.create_alg('skcipher', 'cbc(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, flags=socket.MSG_MORE) op.sendall(msg) self.assertEqual(op.recv(msglen), ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=iv) self.assertEqual(op.recv(msglen), msg) # long message multiplier = 1024 longmsg = [msg] * multiplier op, _ = algo.accept() with op: op.sendmsg_afalg(longmsg, op=socket.ALG_OP_ENCRYPT, iv=iv) enc = op.recv(msglen * multiplier) self.assertEqual(len(enc), msglen * multiplier) self.assertEqual(enc[:msglen], ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([enc], op=socket.ALG_OP_DECRYPT, iv=iv) dec = op.recv(msglen * multiplier) self.assertEqual(len(dec), msglen * multiplier) self.assertEqual(dec, msg * multiplier) @support.requires_linux_version(4, 9) # see issue29324 def test_aead_aes_gcm(self): key = bytes.fromhex('c939cc13397c1d37de6ae0e1cb7c423c') iv = bytes.fromhex('b3d8cc017cbb89b39e0f67e2') plain = bytes.fromhex('c3b3c41f113a31b73d9a5cd432103069') assoc = bytes.fromhex('24825602bd12a984e0092d3e448eda5f') expected_ct = bytes.fromhex('93fe7d9e9bfd10348a5606e5cafa7354') expected_tag = bytes.fromhex('0032a1dc85f1c9786925a2e71d8272dd') taglen = len(expected_tag) assoclen = len(assoc) with self.create_alg('aead', 'gcm(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, taglen) # send assoc, plain and tag buffer in separate steps op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen, flags=socket.MSG_MORE) op.sendall(assoc, socket.MSG_MORE) op.sendall(plain) res = 
op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # now with msg op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg_afalg([msg], op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # create anc data manually pack_uint32 = struct.Struct('I').pack op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg( [msg], ([socket.SOL_ALG, socket.ALG_SET_OP, pack_uint32(socket.ALG_OP_ENCRYPT)], [socket.SOL_ALG, socket.ALG_SET_IV, pack_uint32(len(iv)) + iv], [socket.SOL_ALG, socket.ALG_SET_AEAD_ASSOCLEN, pack_uint32(assoclen)], ) ) res = op.recv(len(msg) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # decrypt and verify op, _ = algo.accept() with op: msg = assoc + expected_ct + expected_tag op.sendmsg_afalg([msg], op=socket.ALG_OP_DECRYPT, iv=iv, assoclen=assoclen) res = op.recv(len(msg) - taglen) self.assertEqual(plain, res[assoclen:]) @support.requires_linux_version(4, 3) # see test_aes_cbc def test_drbg_pr_sha256(self): # deterministic random bit generator, prediction resistance, sha256 with self.create_alg('rng', 'drbg_pr_sha256') as algo: extra_seed = os.urandom(32) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, extra_seed) op, _ = algo.accept() with op: rn = op.recv(32) self.assertEqual(len(rn), 32) def test_sendmsg_afalg_args(self): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) with sock: with self.assertRaises(TypeError): sock.sendmsg_afalg() with self.assertRaises(TypeError): sock.sendmsg_afalg(op=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(1) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=-1) def test_length_restriction(self): # bpo-35050, off-by-one error in length check sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) self.addCleanup(sock.close) # salg_type[14] with self.assertRaises(FileNotFoundError): sock.bind(("t" * 13, "name")) with self.assertRaisesRegex(ValueError, "type too long"): sock.bind(("t" * 14, "name")) # salg_name[64] with self.assertRaises(FileNotFoundError): sock.bind(("type", "n" * 63)) with self.assertRaisesRegex(ValueError, "name too long"): sock.bind(("type", "n" * 64)) @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') class TestMacOSTCPFlags(unittest.TestCase): def test_tcp_keepalive(self): self.assertTrue(socket.TCP_KEEPALIVE) @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") class TestMSWindowsTCPFlags(unittest.TestCase): knownTCPFlags = { # available since long time ago 'TCP_MAXSEG', 'TCP_NODELAY', # available starting with Windows 10 1607 'TCP_FASTOPEN', # available starting with Windows 10 1703 'TCP_KEEPCNT', # available starting with Windows 10 1709 'TCP_KEEPIDLE', 'TCP_KEEPINTVL' } def test_new_tcp_flags(self): provided = [s for s in dir(socket) if s.startswith('TCP')] unknown = [s for s in provided if s not in self.knownTCPFlags] self.assertEqual([], unknown, "New TCP flags were discovered. 
See bpo-32394 for more information") class CreateServerTest(unittest.TestCase): def test_address(self): port = socket_helper.find_unused_port() with socket.create_server(("127.0.0.1", port)) as sock: self.assertEqual(sock.getsockname()[0], "127.0.0.1") self.assertEqual(sock.getsockname()[1], port) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", port), family=socket.AF_INET6) as sock: self.assertEqual(sock.getsockname()[0], "::1") self.assertEqual(sock.getsockname()[1], port) def test_family_and_type(self): with socket.create_server(("127.0.0.1", 0)) as sock: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", 0), family=socket.AF_INET6) as s: self.assertEqual(s.family, socket.AF_INET6) self.assertEqual(sock.type, socket.SOCK_STREAM) def test_reuse_port(self): if not hasattr(socket, "SO_REUSEPORT"): with self.assertRaises(ValueError): socket.create_server(("localhost", 0), reuse_port=True) else: with socket.create_server(("localhost", 0)) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertEqual(opt, 0) with socket.create_server(("localhost", 0), reuse_port=True) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertNotEqual(opt, 0) @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6_only_default(self): with socket.create_server(("::1", 0), family=socket.AF_INET6) as sock: assert sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dualstack_ipv6_family(self): with socket.create_server(("::1", 0), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.assertEqual(sock.family, socket.AF_INET6) class CreateServerFunctionalTest(unittest.TestCase): timeout = support.LOOPBACK_TIMEOUT def echo_server(self, sock): def run(sock): with sock: conn, _ = sock.accept() with conn: event.wait(self.timeout) msg = conn.recv(1024) if not msg: return conn.sendall(msg) event = threading.Event() sock.settimeout(self.timeout) thread = threading.Thread(target=run, args=(sock, )) thread.start() self.addCleanup(thread.join, self.timeout) event.set() def echo_client(self, addr, family): with socket.socket(family=family) as sock: sock.settimeout(self.timeout) sock.connect(addr) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') def test_tcp4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port)) as sock: self.echo_server(sock) self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_tcp6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) # --- dual stack tests @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) 
self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) @requireAttrs(socket, "send_fds") @requireAttrs(socket, "recv_fds") @requireAttrs(socket, "AF_UNIX") class SendRecvFdsTests(unittest.TestCase): def testSendAndRecvFds(self): def close_pipes(pipes): for fd1, fd2 in pipes: os.close(fd1) os.close(fd2) def close_fds(fds): for fd in fds: os.close(fd) # send 10 file descriptors pipes = [os.pipe() for _ in range(10)] self.addCleanup(close_pipes, pipes) fds = [rfd for rfd, wfd in pipes] # use a UNIX socket pair to exchange file descriptors locally sock1, sock2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) with sock1, sock2: socket.send_fds(sock1, [MSG], fds) # request more data and file descriptors than expected msg, fds2, flags, addr = socket.recv_fds(sock2, len(MSG) * 2, len(fds) * 2) self.addCleanup(close_fds, fds2) self.assertEqual(msg, MSG) self.assertEqual(len(fds2), len(fds)) self.assertEqual(flags, 0) # don't test addr # test that file descriptors are connected for index, fds in enumerate(pipes): rfd, wfd = fds os.write(wfd, str(index).encode()) for index, rfd in enumerate(fds2): data = os.read(rfd, 100) self.assertEqual(data, str(index).encode()) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_ssl.py000066400000000000000000006345241471441230600205520ustar00rootroot00000000000000# Test the support for SSL and sockets import sys import unittest import unittest.mock from test import support from test.support import import_helper from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper import socket import select import time import datetime import gc import os import errno import pprint import urllib.request import threading import traceback import weakref import platform import sysconfig import functools try: import ctypes except ImportError: ctypes = None import warnings with warnings.catch_warnings(): warnings.simplefilter('ignore', DeprecationWarning) import asyncore ssl = import_helper.import_module("ssl") import _ssl from ssl import TLSVersion, _TLSContentType, _TLSMessageType, _TLSAlertType Py_DEBUG = hasattr(sys, 'gettotalrefcount') Py_DEBUG_WIN32 = Py_DEBUG and sys.platform == 'win32' PROTOCOLS = sorted(ssl._PROTOCOL_NAMES) HOST = socket_helper.HOST IS_OPENSSL_3_0_0 = ssl.OPENSSL_VERSION_INFO >= (3, 0, 0) PY_SSL_DEFAULT_CIPHERS = sysconfig.get_config_var('PY_SSL_DEFAULT_CIPHERS') PROTOCOL_TO_TLS_VERSION = {} for proto, ver in ( ("PROTOCOL_SSLv23", "SSLv3"), ("PROTOCOL_TLSv1", "TLSv1"), ("PROTOCOL_TLSv1_1", "TLSv1_1"), ): try: proto = getattr(ssl, proto) ver = getattr(ssl.TLSVersion, ver) except AttributeError: continue PROTOCOL_TO_TLS_VERSION[proto] = ver def data_file(*name): return os.path.join(os.path.dirname(__file__), *name) # The custom key and certificate files used in test_ssl are generated # using Lib/test/make_ssl_certs.py. 
# Other certificates are simply fetched from the internet servers they # are meant to authenticate. CERTFILE = data_file("keycert.pem") BYTES_CERTFILE = os.fsencode(CERTFILE) ONLYCERT = data_file("ssl_cert.pem") ONLYKEY = data_file("ssl_key.pem") BYTES_ONLYCERT = os.fsencode(ONLYCERT) BYTES_ONLYKEY = os.fsencode(ONLYKEY) CERTFILE_PROTECTED = data_file("keycert.passwd.pem") ONLYKEY_PROTECTED = data_file("ssl_key.passwd.pem") KEY_PASSWORD = "somepass" CAPATH = data_file("capath") BYTES_CAPATH = os.fsencode(CAPATH) CAFILE_NEURONIO = data_file("capath", "4e1295a3.0") CAFILE_CACERT = data_file("capath", "5ed36f99.0") CERTFILE_INFO = { 'issuer': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'notAfter': 'Aug 26 14:23:15 2028 GMT', 'notBefore': 'Aug 29 14:23:15 2018 GMT', 'serialNumber': '98A7CF88C74A32ED', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } # empty CRL CRLFILE = data_file("revocation.crl") # Two keys and certs signed by the same CA (for SNI tests) SIGNED_CERTFILE = data_file("keycert3.pem") SIGNED_CERTFILE_HOSTNAME = 'localhost' SIGNED_CERTFILE_INFO = { 'OCSP': ('http://testca.pythontest.net/testca/ocsp/',), 'caIssuers': ('http://testca.pythontest.net/testca/pycacert.cer',), 'crlDistributionPoints': ('http://testca.pythontest.net/testca/revocation.crl',), 'issuer': ((('countryName', 'XY'),), (('organizationName', 'Python Software Foundation CA'),), (('commonName', 'our-ca-server'),)), 'notAfter': 'Oct 28 14:23:16 2037 GMT', 'notBefore': 'Aug 29 14:23:16 2018 GMT', 'serialNumber': 'CB2D80995A69525C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } SIGNED_CERTFILE2 = data_file("keycert4.pem") SIGNED_CERTFILE2_HOSTNAME = 'fakehostname' SIGNED_CERTFILE_ECC = data_file("keycertecc.pem") SIGNED_CERTFILE_ECC_HOSTNAME = 'localhost-ecc' # Same certificate as pycacert.pem, but without extra text in file SIGNING_CA = data_file("capath", "ceff1710.0") # cert with all kinds of subject alt names ALLSANFILE = data_file("allsans.pem") IDNSANSFILE = data_file("idnsans.pem") NOSANFILE = data_file("nosan.pem") NOSAN_HOSTNAME = 'localhost' REMOTE_HOST = "self-signed.pythontest.net" EMPTYCERT = data_file("nullcert.pem") BADCERT = data_file("badcert.pem") NONEXISTINGCERT = data_file("XXXnonexisting.pem") BADKEY = data_file("badkey.pem") NOKIACERT = data_file("nokia.pem") NULLBYTECERT = data_file("nullbytecert.pem") TALOS_INVALID_CRLDP = data_file("talos-2019-0758.pem") DHFILE = data_file("ffdh3072.pem") BYTES_DHFILE = os.fsencode(DHFILE) # Not defined in all versions of OpenSSL OP_NO_COMPRESSION = getattr(ssl, "OP_NO_COMPRESSION", 0) OP_SINGLE_DH_USE = getattr(ssl, "OP_SINGLE_DH_USE", 0) OP_SINGLE_ECDH_USE = getattr(ssl, "OP_SINGLE_ECDH_USE", 0) OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0) OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0) # Ubuntu has patched OpenSSL and changed behavior of security level 2 # see https://bugs.python.org/issue41561#msg389003 def is_ubuntu(): try: # Assume that any references of "ubuntu" implies Ubuntu-like distro # The workaround is not required for 18.04, but doesn't 
hurt either. with open("/etc/os-release", encoding="utf-8") as f: return "ubuntu" in f.read() except FileNotFoundError: return False if is_ubuntu(): def seclevel_workaround(*ctxs): """"Lower security level to '1' and allow all ciphers for TLS 1.0/1""" for ctx in ctxs: if ( hasattr(ctx, "minimum_version") and ctx.minimum_version <= ssl.TLSVersion.TLSv1_1 ): ctx.set_ciphers("@SECLEVEL=1:ALL") else: def seclevel_workaround(*ctxs): pass def has_tls_protocol(protocol): """Check if a TLS protocol is available and enabled :param protocol: enum ssl._SSLMethod member or name :return: bool """ if isinstance(protocol, str): assert protocol.startswith('PROTOCOL_') protocol = getattr(ssl, protocol, None) if protocol is None: return False if protocol in { ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT }: # auto-negotiate protocols are always available return True name = protocol.name return has_tls_version(name[len('PROTOCOL_'):]) @functools.lru_cache def has_tls_version(version): """Check if a TLS/SSL version is enabled :param version: TLS version name or ssl.TLSVersion member :return: bool """ if version == "SSLv2": # never supported and not even in TLSVersion enum return False if isinstance(version, str): version = ssl.TLSVersion.__members__[version] # check compile time flags like ssl.HAS_TLSv1_2 if not getattr(ssl, f'HAS_{version.name}'): return False if IS_OPENSSL_3_0_0 and version < ssl.TLSVersion.TLSv1_2: # bpo43791: 3.0.0-alpha14 fails with TLSV1_ALERT_INTERNAL_ERROR return False # check runtime and dynamic crypto policy settings. A TLS version may # be compiled in but disabled by a policy or config option. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) if ( hasattr(ctx, 'minimum_version') and ctx.minimum_version != ssl.TLSVersion.MINIMUM_SUPPORTED and version < ctx.minimum_version ): return False if ( hasattr(ctx, 'maximum_version') and ctx.maximum_version != ssl.TLSVersion.MAXIMUM_SUPPORTED and version > ctx.maximum_version ): return False return True def requires_tls_version(version): """Decorator to skip tests when a required TLS version is not available :param version: TLS version name or ssl.TLSVersion member :return: """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if not has_tls_version(version): raise unittest.SkipTest(f"{version} is not available.") else: return func(*args, **kw) return wrapper return decorator def handle_error(prefix): exc_format = ' '.join(traceback.format_exception(*sys.exc_info())) if support.verbose: sys.stdout.write(prefix + exc_format) def utc_offset(): #NOTE: ignore issues like #1647654 # local time = utc time + utc offset if time.daylight and time.localtime().tm_isdst > 0: return -time.altzone # seconds return -time.timezone ignore_deprecation = warnings_helper.ignore_warnings( category=DeprecationWarning ) def test_wrap_socket(sock, *, cert_reqs=ssl.CERT_NONE, ca_certs=None, ciphers=None, certfile=None, keyfile=None, **kwargs): if not kwargs.get("server_side"): kwargs["server_hostname"] = SIGNED_CERTFILE_HOSTNAME context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) else: context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) if cert_reqs is not None: if cert_reqs == ssl.CERT_NONE: context.check_hostname = False context.verify_mode = cert_reqs if ca_certs is not None: context.load_verify_locations(ca_certs) if certfile is not None or keyfile is not None: context.load_cert_chain(certfile, keyfile) if ciphers is not None: context.set_ciphers(ciphers) return context.wrap_socket(sock, **kwargs) def 
testing_context(server_cert=SIGNED_CERTFILE, *, server_chain=True): """Create context client_context, server_context, hostname = testing_context() """ if server_cert == SIGNED_CERTFILE: hostname = SIGNED_CERTFILE_HOSTNAME elif server_cert == SIGNED_CERTFILE2: hostname = SIGNED_CERTFILE2_HOSTNAME elif server_cert == NOSANFILE: hostname = NOSAN_HOSTNAME else: raise ValueError(server_cert) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(server_cert) if server_chain: server_context.load_verify_locations(SIGNING_CA) return client_context, server_context, hostname class BasicSocketTests(unittest.TestCase): def test_constants(self): ssl.CERT_NONE ssl.CERT_OPTIONAL ssl.CERT_REQUIRED ssl.OP_CIPHER_SERVER_PREFERENCE ssl.OP_SINGLE_DH_USE ssl.OP_SINGLE_ECDH_USE ssl.OP_NO_COMPRESSION self.assertEqual(ssl.HAS_SNI, True) self.assertEqual(ssl.HAS_ECDH, True) self.assertEqual(ssl.HAS_TLSv1_2, True) self.assertEqual(ssl.HAS_TLSv1_3, True) ssl.OP_NO_SSLv2 ssl.OP_NO_SSLv3 ssl.OP_NO_TLSv1 ssl.OP_NO_TLSv1_3 ssl.OP_NO_TLSv1_1 ssl.OP_NO_TLSv1_2 self.assertEqual(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv23) def test_ssl_types(self): ssl_types = [ _ssl._SSLContext, _ssl._SSLSocket, _ssl.MemoryBIO, _ssl.Certificate, _ssl.SSLSession, _ssl.SSLError, ] for ssl_type in ssl_types: with self.subTest(ssl_type=ssl_type): with self.assertRaisesRegex(TypeError, "immutable type"): ssl_type.value = None support.check_disallow_instantiation(self, _ssl.Certificate) def test_private_init(self): with self.assertRaisesRegex(TypeError, "public constructor"): with socket.socket() as s: ssl.SSLSocket(s) def test_str_for_enums(self): # Make sure that the PROTOCOL_* constants have enum-like string # reprs. 
proto = ssl.PROTOCOL_TLS_CLIENT self.assertEqual(str(proto), '_SSLMethod.PROTOCOL_TLS_CLIENT') ctx = ssl.SSLContext(proto) self.assertIs(ctx.protocol, proto) def test_random(self): v = ssl.RAND_status() if support.verbose: sys.stdout.write("\n RAND_status is %d (%s)\n" % (v, (v and "sufficient randomness") or "insufficient randomness")) with warnings_helper.check_warnings(): data, is_cryptographic = ssl.RAND_pseudo_bytes(16) self.assertEqual(len(data), 16) self.assertEqual(is_cryptographic, v == 1) if v: data = ssl.RAND_bytes(16) self.assertEqual(len(data), 16) else: self.assertRaises(ssl.SSLError, ssl.RAND_bytes, 16) # negative num is invalid self.assertRaises(ValueError, ssl.RAND_bytes, -5) with warnings_helper.check_warnings(): self.assertRaises(ValueError, ssl.RAND_pseudo_bytes, -5) ssl.RAND_add("this is a random string", 75.0) ssl.RAND_add(b"this is a random bytes object", 75.0) ssl.RAND_add(bytearray(b"this is a random bytearray object"), 75.0) def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate # parsing code self.assertEqual( ssl._ssl._test_decode_cert(CERTFILE), CERTFILE_INFO ) self.assertEqual( ssl._ssl._test_decode_cert(SIGNED_CERTFILE), SIGNED_CERTFILE_INFO ) # Issue #13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subjectAltName'], (('DNS', 'projects.developer.nokia.com'), ('DNS', 'projects.forum.nokia.com')) ) # extra OCSP and AIA fields self.assertEqual(p['OCSP'], ('http://ocsp.verisign.com',)) self.assertEqual(p['caIssuers'], ('http://SVRIntl-G3-aia.verisign.com/SVRIntlG3.cer',)) self.assertEqual(p['crlDistributionPoints'], ('http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl',)) def test_parse_cert_CVE_2019_5010(self): p = ssl._ssl._test_decode_cert(TALOS_INVALID_CRLDP) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual( p, { 'issuer': ( (('countryName', 'UK'),), (('commonName', 'cody-ca'),)), 'notAfter': 'Jun 14 18:00:58 2028 GMT', 'notBefore': 'Jun 18 18:00:58 2018 GMT', 'serialNumber': '02', 'subject': ((('countryName', 'UK'),), (('commonName', 'codenomicon-vm-2.test.lal.cisco.com'),)), 'subjectAltName': ( ('DNS', 'codenomicon-vm-2.test.lal.cisco.com'),), 'version': 3 } ) def test_parse_cert_CVE_2013_4238(self): p = ssl._ssl._test_decode_cert(NULLBYTECERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") subject = ((('countryName', 'US'),), (('stateOrProvinceName', 'Oregon'),), (('localityName', 'Beaverton'),), (('organizationName', 'Python Software Foundation'),), (('organizationalUnitName', 'Python Core Development'),), (('commonName', 'null.python.org\x00example.org'),), (('emailAddress', 'python-dev@python.org'),)) self.assertEqual(p['subject'], subject) self.assertEqual(p['issuer'], subject) if ssl._OPENSSL_API_VERSION >= (0, 9, 8): san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '2001:DB8:0:0:0:0:0:1')) else: # OpenSSL 0.9.7 doesn't support IPv6 addresses in subjectAltName san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '')) 
self.assertEqual(p['subjectAltName'], san) def test_parse_all_sans(self): p = ssl._ssl._test_decode_cert(ALLSANFILE) self.assertEqual(p['subjectAltName'], ( ('DNS', 'allsans'), ('othername', ''), ('othername', ''), ('email', 'user@example.org'), ('DNS', 'www.example.org'), ('DirName', ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'dirname example'),))), ('URI', 'https://www.python.org/'), ('IP Address', '127.0.0.1'), ('IP Address', '0:0:0:0:0:0:0:1'), ('Registered ID', '1.2.3.4.5') ) ) def test_DER_to_PEM(self): with open(CAFILE_CACERT, 'r') as f: pem = f.read() d1 = ssl.PEM_cert_to_DER_cert(pem) p2 = ssl.DER_cert_to_PEM_cert(d1) d2 = ssl.PEM_cert_to_DER_cert(p2) self.assertEqual(d1, d2) if not p2.startswith(ssl.PEM_HEADER + '\n'): self.fail("DER-to-PEM didn't include correct header:\n%r\n" % p2) if not p2.endswith('\n' + ssl.PEM_FOOTER + '\n'): self.fail("DER-to-PEM didn't include correct footer:\n%r\n" % p2) def test_openssl_version(self): n = ssl.OPENSSL_VERSION_NUMBER t = ssl.OPENSSL_VERSION_INFO s = ssl.OPENSSL_VERSION self.assertIsInstance(n, int) self.assertIsInstance(t, tuple) self.assertIsInstance(s, str) # Some sanity checks follow # >= 1.1.1 self.assertGreaterEqual(n, 0x10101000) # < 4.0 self.assertLess(n, 0x40000000) major, minor, fix, patch, status = t self.assertGreaterEqual(major, 1) self.assertLess(major, 4) self.assertGreaterEqual(minor, 0) self.assertLess(minor, 256) self.assertGreaterEqual(fix, 0) self.assertLess(fix, 256) self.assertGreaterEqual(patch, 0) self.assertLessEqual(patch, 63) self.assertGreaterEqual(status, 0) self.assertLessEqual(status, 15) libressl_ver = f"LibreSSL {major:d}" if major >= 3: # 3.x uses 0xMNN00PP0L openssl_ver = f"OpenSSL {major:d}.{minor:d}.{patch:d}" else: openssl_ver = f"OpenSSL {major:d}.{minor:d}.{fix:d}" self.assertTrue( s.startswith((openssl_ver, libressl_ver)), (s, t, hex(n)) ) @support.cpython_only def test_refcycle(self): # Issue #7943: an SSL object doesn't create reference cycles with # itself. s = socket.socket(socket.AF_INET) ss = test_wrap_socket(s) wr = weakref.ref(ss) with warnings_helper.check_warnings(("", ResourceWarning)): del ss self.assertEqual(wr(), None) def test_wrapped_unconnected(self): # Methods on an unconnected SSLSocket propagate the original # OSError raise by the underlying socket object. s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertRaises(OSError, ss.recv, 1) self.assertRaises(OSError, ss.recv_into, bytearray(b'x')) self.assertRaises(OSError, ss.recvfrom, 1) self.assertRaises(OSError, ss.recvfrom_into, bytearray(b'x'), 1) self.assertRaises(OSError, ss.send, b'x') self.assertRaises(OSError, ss.sendto, b'x', ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.dup) self.assertRaises(NotImplementedError, ss.sendmsg, [b'x'], (), 0, ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.recvmsg, 100) self.assertRaises(NotImplementedError, ss.recvmsg_into, [bytearray(100)]) def test_timeout(self): # Issue #8524: when creating an SSL socket, the timeout of the # original socket should be retained. 
for timeout in (None, 0.0, 5.0): s = socket.socket(socket.AF_INET) s.settimeout(timeout) with test_wrap_socket(s) as ss: self.assertEqual(timeout, ss.gettimeout()) def test_openssl111_deprecations(self): options = [ ssl.OP_NO_TLSv1, ssl.OP_NO_TLSv1_1, ssl.OP_NO_TLSv1_2, ssl.OP_NO_TLSv1_3 ] protocols = [ ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS ] versions = [ ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ] for option in options: with self.subTest(option=option): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.options |= option self.assertEqual( 'ssl.OP_NO_SSL*/ssl.OP_NO_TLS* options are deprecated', str(cm.warning) ) for protocol in protocols: if not has_tls_protocol(protocol): continue with self.subTest(protocol=protocol): with self.assertWarns(DeprecationWarning) as cm: ssl.SSLContext(protocol) self.assertEqual( f'ssl.{protocol.name} is deprecated', str(cm.warning) ) for version in versions: if not has_tls_version(version): continue with self.subTest(version=version): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.minimum_version = version self.assertEqual( f'ssl.{version!s} is deprecated', str(cm.warning) ) @ignore_deprecation def test_errors_sslwrap(self): sock = socket.socket() self.assertRaisesRegex(ValueError, "certfile must be specified", ssl.wrap_socket, sock, keyfile=CERTFILE) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True, certfile="") with ssl.wrap_socket(sock, server_side=True, certfile=CERTFILE) as s: self.assertRaisesRegex(ValueError, "can't connect in server-side mode", s.connect, (HOST, 8080)) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=CERTFILE, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) def bad_cert_test(self, certfile): """Check that trying to use the given client certificate fails""" certfile = os.path.join(os.path.dirname(__file__) or os.curdir, certfile) sock = socket.socket() self.addCleanup(sock.close) with self.assertRaises(ssl.SSLError): test_wrap_socket(sock, certfile=certfile) def test_empty_cert(self): """Wrapping with an empty cert file""" self.bad_cert_test("nullcert.pem") def test_malformed_cert(self): """Wrapping with a badly formatted certificate (syntax error)""" self.bad_cert_test("badcert.pem") def test_malformed_key(self): """Wrapping with a badly formatted key (syntax error)""" self.bad_cert_test("badkey.pem") @ignore_deprecation def test_match_hostname(self): def ok(cert, hostname): ssl.match_hostname(cert, hostname) def fail(cert, hostname): self.assertRaises(ssl.CertificateError, ssl.match_hostname, cert, hostname) # -- Hostname matching -- cert = {'subject': ((('commonName', 'example.com'),),)} ok(cert, 'example.com') ok(cert, 'ExAmple.cOm') fail(cert, 'www.example.com') fail(cert, '.example.com') fail(cert, 'example.org') 
fail(cert, 'exampleXcom') cert = {'subject': ((('commonName', '*.a.com'),),)} ok(cert, 'foo.a.com') fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') # only match wildcards when they are the only thing # in left-most segment cert = {'subject': ((('commonName', 'f*.com'),),)} fail(cert, 'foo.com') fail(cert, 'f.com') fail(cert, 'bar.com') fail(cert, 'foo.a.com') fail(cert, 'bar.foo.com') # NULL bytes are bad, CVE-2013-4073 cert = {'subject': ((('commonName', 'null.python.org\x00example.org'),),)} ok(cert, 'null.python.org\x00example.org') # or raise an error? fail(cert, 'example.org') fail(cert, 'null.python.org') # error cases with wildcards cert = {'subject': ((('commonName', '*.*.a.com'),),)} fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') cert = {'subject': ((('commonName', 'a.*.com'),),)} fail(cert, 'a.foo.com') fail(cert, 'a..com') fail(cert, 'a.com') # wildcard doesn't match IDNA prefix 'xn--' idna = 'püthon.python.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} ok(cert, idna) cert = {'subject': ((('commonName', 'x*.python.org'),),)} fail(cert, idna) cert = {'subject': ((('commonName', 'xn--p*.python.org'),),)} fail(cert, idna) # wildcard in first fragment and IDNA A-labels in sequent fragments # are supported. idna = 'www*.pythön.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} fail(cert, 'www.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'www1.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'ftp.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'pythön.org'.encode("idna").decode("ascii")) # Slightly fake real-world example cert = {'notAfter': 'Jun 26 21:41:46 2011 GMT', 'subject': ((('commonName', 'linuxfrz.org'),),), 'subjectAltName': (('DNS', 'linuxfr.org'), ('DNS', 'linuxfr.com'), ('othername', ''))} ok(cert, 'linuxfr.org') ok(cert, 'linuxfr.com') # Not a "DNS" entry fail(cert, '') # When there is a subjectAltName, commonName isn't used fail(cert, 'linuxfrz.org') # A pristine real-world example cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),), (('commonName', 'mail.google.com'),))} ok(cert, 'mail.google.com') fail(cert, 'gmail.com') # Only commonName is considered fail(cert, 'California') # -- IPv4 matching -- cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': (('DNS', 'example.com'), ('IP Address', '10.11.12.13'), ('IP Address', '14.15.16.17'), ('IP Address', '127.0.0.1'))} ok(cert, '10.11.12.13') ok(cert, '14.15.16.17') # socket.inet_ntoa(socket.inet_aton('127.1')) == '127.0.0.1' fail(cert, '127.1') fail(cert, '14.15.16.17 ') fail(cert, '14.15.16.17 extra data') fail(cert, '14.15.16.18') fail(cert, 'example.net') # -- IPv6 matching -- if socket_helper.IPV6_ENABLED: cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': ( ('DNS', 'example.com'), ('IP Address', '2001:0:0:0:0:0:0:CAFE\n'), ('IP Address', '2003:0:0:0:0:0:0:BABA\n'))} ok(cert, '2001::cafe') ok(cert, '2003::baba') fail(cert, '2003::baba ') fail(cert, '2003::baba extra data') fail(cert, '2003::bebe') fail(cert, 'example.net') # -- Miscellaneous -- # Neither commonName nor subjectAltName cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), 
(('organizationName', 'Google Inc'),))} fail(cert, 'mail.google.com') # No DNS entry in subjectAltName but a commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('commonName', 'mail.google.com'),)), 'subjectAltName': (('othername', 'blabla'), )} ok(cert, 'mail.google.com') # No DNS entry subjectAltName and no commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),)), 'subjectAltName': (('othername', 'blabla'),)} fail(cert, 'google.com') # Empty cert / no cert self.assertRaises(ValueError, ssl.match_hostname, None, 'example.com') self.assertRaises(ValueError, ssl.match_hostname, {}, 'example.com') # Issue #17980: avoid denials of service by refusing more than one # wildcard per fragment. cert = {'subject': ((('commonName', 'a*b.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "partial wildcards in leftmost label are not supported"): ssl.match_hostname(cert, 'axxb.example.com') cert = {'subject': ((('commonName', 'www.*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "wildcard can only be present in the leftmost label"): ssl.match_hostname(cert, 'www.sub.example.com') cert = {'subject': ((('commonName', 'a*b*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "too many wildcards"): ssl.match_hostname(cert, 'axxbxxc.example.com') cert = {'subject': ((('commonName', '*'),),)} with self.assertRaisesRegex( ssl.CertificateError, "sole wildcard without additional labels are not support"): ssl.match_hostname(cert, 'host') cert = {'subject': ((('commonName', '*.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, r"hostname 'com' doesn't match '\*.com'"): ssl.match_hostname(cert, 'com') # extra checks for _inet_paton() for invalid in ['1', '', '1.2.3', '256.0.0.1', '127.0.0.1/24']: with self.assertRaises(ValueError): ssl._inet_paton(invalid) for ipaddr in ['127.0.0.1', '192.168.0.1']: self.assertTrue(ssl._inet_paton(ipaddr)) if socket_helper.IPV6_ENABLED: for ipaddr in ['::1', '2001:db8:85a3::8a2e:370:7334']: self.assertTrue(ssl._inet_paton(ipaddr)) def test_server_side(self): # server_hostname doesn't work for server sockets ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with socket.socket() as sock: self.assertRaises(ValueError, ctx.wrap_socket, sock, True, server_hostname="some.hostname") def test_unknown_channel_binding(self): # should raise ValueError for unknown type s = socket.create_server(('127.0.0.1', 0)) c = socket.socket(socket.AF_INET) c.connect(s.getsockname()) with test_wrap_socket(c, do_handshake_on_connect=False) as ss: with self.assertRaises(ValueError): ss.get_channel_binding("unknown-type") s.close() @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): # unconnected should return None for known type s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) # the same for server-side s = socket.socket(socket.AF_INET) with test_wrap_socket(s, server_side=True, certfile=CERTFILE) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) def test_dealloc_warn(self): ss = test_wrap_socket(socket.socket(socket.AF_INET)) r = repr(ss) with self.assertWarns(ResourceWarning) 
as cm: ss = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) def test_get_default_verify_paths(self): paths = ssl.get_default_verify_paths() self.assertEqual(len(paths), 6) self.assertIsInstance(paths, ssl.DefaultVerifyPaths) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE paths = ssl.get_default_verify_paths() self.assertEqual(paths.cafile, CERTFILE) self.assertEqual(paths.capath, CAPATH) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_certificates(self): self.assertTrue(ssl.enum_certificates("CA")) self.assertTrue(ssl.enum_certificates("ROOT")) self.assertRaises(TypeError, ssl.enum_certificates) self.assertRaises(WindowsError, ssl.enum_certificates, "") trust_oids = set() for storename in ("CA", "ROOT"): store = ssl.enum_certificates(storename) self.assertIsInstance(store, list) for element in store: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 3) cert, enc, trust = element self.assertIsInstance(cert, bytes) self.assertIn(enc, {"x509_asn", "pkcs_7_asn"}) self.assertIsInstance(trust, (frozenset, set, bool)) if isinstance(trust, (frozenset, set)): trust_oids.update(trust) serverAuth = "1.3.6.1.5.5.7.3.1" self.assertIn(serverAuth, trust_oids) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_crls(self): self.assertTrue(ssl.enum_crls("CA")) self.assertRaises(TypeError, ssl.enum_crls) self.assertRaises(WindowsError, ssl.enum_crls, "") crls = ssl.enum_crls("CA") self.assertIsInstance(crls, list) for element in crls: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 2) self.assertIsInstance(element[0], bytes) self.assertIn(element[1], {"x509_asn", "pkcs_7_asn"}) def test_asn1object(self): expected = (129, 'serverAuth', 'TLS Web Server Authentication', '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertEqual(val, expected) self.assertEqual(val.nid, 129) self.assertEqual(val.shortname, 'serverAuth') self.assertEqual(val.longname, 'TLS Web Server Authentication') self.assertEqual(val.oid, '1.3.6.1.5.5.7.3.1') self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object, 'serverAuth') val = ssl._ASN1Object.fromnid(129) self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object.fromnid, -1) with self.assertRaisesRegex(ValueError, "unknown NID 100000"): ssl._ASN1Object.fromnid(100000) for i in range(1000): try: obj = ssl._ASN1Object.fromnid(i) except ValueError: pass else: self.assertIsInstance(obj.nid, int) self.assertIsInstance(obj.shortname, str) self.assertIsInstance(obj.longname, str) self.assertIsInstance(obj.oid, (str, type(None))) val = ssl._ASN1Object.fromname('TLS Web Server Authentication') self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertEqual(ssl._ASN1Object.fromname('serverAuth'), expected) self.assertEqual(ssl._ASN1Object.fromname('1.3.6.1.5.5.7.3.1'), expected) with self.assertRaisesRegex(ValueError, "unknown object 'serverauth'"): ssl._ASN1Object.fromname('serverauth') def test_purpose_enum(self): val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertIsInstance(ssl.Purpose.SERVER_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.SERVER_AUTH, val) self.assertEqual(ssl.Purpose.SERVER_AUTH.nid, 129) self.assertEqual(ssl.Purpose.SERVER_AUTH.shortname, 'serverAuth') self.assertEqual(ssl.Purpose.SERVER_AUTH.oid, '1.3.6.1.5.5.7.3.1') val = 
ssl._ASN1Object('1.3.6.1.5.5.7.3.2') self.assertIsInstance(ssl.Purpose.CLIENT_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.CLIENT_AUTH, val) self.assertEqual(ssl.Purpose.CLIENT_AUTH.nid, 130) self.assertEqual(ssl.Purpose.CLIENT_AUTH.shortname, 'clientAuth') self.assertEqual(ssl.Purpose.CLIENT_AUTH.oid, '1.3.6.1.5.5.7.3.2') def test_unsupported_dtls(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) with self.assertRaises(NotImplementedError) as cx: test_wrap_socket(s, cert_reqs=ssl.CERT_NONE) self.assertEqual(str(cx.exception), "only stream sockets are supported") ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(NotImplementedError) as cx: ctx.wrap_socket(s) self.assertEqual(str(cx.exception), "only stream sockets are supported") def cert_time_ok(self, timestring, timestamp): self.assertEqual(ssl.cert_time_to_seconds(timestring), timestamp) def cert_time_fail(self, timestring): with self.assertRaises(ValueError): ssl.cert_time_to_seconds(timestring) @unittest.skipUnless(utc_offset(), 'local time needs to be different from UTC') def test_cert_time_to_seconds_timezone(self): # Issue #19940: ssl.cert_time_to_seconds() returns wrong # results if local timezone is not UTC self.cert_time_ok("May 9 00:00:00 2007 GMT", 1178668800.0) self.cert_time_ok("Jan 5 09:34:43 2018 GMT", 1515144883.0) def test_cert_time_to_seconds(self): timestring = "Jan 5 09:34:43 2018 GMT" ts = 1515144883.0 self.cert_time_ok(timestring, ts) # accept keyword parameter, assert its name self.assertEqual(ssl.cert_time_to_seconds(cert_time=timestring), ts) # accept both %e and %d (space or zero generated by strftime) self.cert_time_ok("Jan 05 09:34:43 2018 GMT", ts) # case-insensitive self.cert_time_ok("JaN 5 09:34:43 2018 GmT", ts) self.cert_time_fail("Jan 5 09:34 2018 GMT") # no seconds self.cert_time_fail("Jan 5 09:34:43 2018") # no GMT self.cert_time_fail("Jan 5 09:34:43 2018 UTC") # not GMT timezone self.cert_time_fail("Jan 35 09:34:43 2018 GMT") # invalid day self.cert_time_fail("Jon 5 09:34:43 2018 GMT") # invalid month self.cert_time_fail("Jan 5 24:00:00 2018 GMT") # invalid hour self.cert_time_fail("Jan 5 09:60:43 2018 GMT") # invalid minute newyear_ts = 1230768000.0 # leap seconds self.cert_time_ok("Dec 31 23:59:60 2008 GMT", newyear_ts) # same timestamp self.cert_time_ok("Jan 1 00:00:00 2009 GMT", newyear_ts) self.cert_time_ok("Jan 5 09:34:59 2018 GMT", 1515144899) # allow 60th second (even if it is not a leap second) self.cert_time_ok("Jan 5 09:34:60 2018 GMT", 1515144900) # allow 2nd leap second for compatibility with time.strptime() self.cert_time_ok("Jan 5 09:34:61 2018 GMT", 1515144901) self.cert_time_fail("Jan 5 09:34:62 2018 GMT") # invalid seconds # no special treatment for the special value: # 99991231235959Z (rfc 5280) self.cert_time_ok("Dec 31 23:59:59 9999 GMT", 253402300799.0) @support.run_with_locale('LC_ALL', '') def test_cert_time_to_seconds_locale(self): # `cert_time_to_seconds()` should be locale independent def local_february_name(): return time.strftime('%b', (1, 2, 3, 4, 5, 6, 0, 0, 0)) if local_february_name().lower() == 'feb': self.skipTest("locale-specific month name needs to be " "different from C locale") # locale-independent self.cert_time_ok("Feb 9 00:00:00 2007 GMT", 1170979200.0) self.cert_time_fail(local_february_name() + " 9 00:00:00 2007 GMT") def test_connect_ex_error(self): server = socket.socket(socket.AF_INET) self.addCleanup(server.close) port = socket_helper.bind_port(server) # Reserve port but don't listen s = 
test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) rc = s.connect_ex((HOST, port)) # Issue #19919: Windows machines or VMs hosted on Windows # machines sometimes return EWOULDBLOCK. errors = ( errno.ECONNREFUSED, errno.EHOSTUNREACH, errno.ETIMEDOUT, errno.EWOULDBLOCK, ) self.assertIn(rc, errors) def test_read_write_zero(self): # empty reads and writes now work, bpo-42854, bpo-31711 client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.recv(0), b"") self.assertEqual(s.send(b""), 0) class ContextTests(unittest.TestCase): def test_constructor(self): for protocol in PROTOCOLS: if has_tls_protocol(protocol): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.protocol, protocol) with warnings_helper.check_warnings(): ctx = ssl.SSLContext() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertRaises(ValueError, ssl.SSLContext, -1) self.assertRaises(ValueError, ssl.SSLContext, 42) def test_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers("ALL") ctx.set_ciphers("DEFAULT") with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): ctx.set_ciphers("^$:,;?*'dorothyx") @unittest.skipUnless(PY_SSL_DEFAULT_CIPHERS == 1, "Test applies only to Python default ciphers") def test_python_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ciphers = ctx.get_ciphers() for suite in ciphers: name = suite['name'] self.assertNotIn("PSK", name) self.assertNotIn("SRP", name) self.assertNotIn("MD5", name) self.assertNotIn("RC4", name) self.assertNotIn("3DES", name) def test_get_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers('AESGCM') names = set(d['name'] for d in ctx.get_ciphers()) expected = { 'AES128-GCM-SHA256', 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'DHE-RSA-AES128-GCM-SHA256', 'AES256-GCM-SHA384', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'DHE-RSA-AES256-GCM-SHA384', } intersection = names.intersection(expected) self.assertGreaterEqual( len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}" ) def test_options(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # OP_ALL | OP_NO_SSLv2 | OP_NO_SSLv3 is the default value default = (ssl.OP_ALL | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) # SSLContext also enables these by default default |= (OP_NO_COMPRESSION | OP_CIPHER_SERVER_PREFERENCE | OP_SINGLE_DH_USE | OP_SINGLE_ECDH_USE | OP_ENABLE_MIDDLEBOX_COMPAT) self.assertEqual(default, ctx.options) with warnings_helper.check_warnings(): ctx.options |= ssl.OP_NO_TLSv1 self.assertEqual(default | ssl.OP_NO_TLSv1, ctx.options) with warnings_helper.check_warnings(): ctx.options = (ctx.options & ~ssl.OP_NO_TLSv1) self.assertEqual(default, ctx.options) ctx.options = 0 # Ubuntu has OP_NO_SSLv3 forced on by default self.assertEqual(0, ctx.options & ~ssl.OP_NO_SSLv3) def test_verify_mode_protocol(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) # Default value self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) ctx.verify_mode = ssl.CERT_OPTIONAL self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, 
ssl.CERT_NONE) with self.assertRaises(TypeError): ctx.verify_mode = None with self.assertRaises(ValueError): ctx.verify_mode = 42 ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) def test_hostname_checks_common_name(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.hostname_checks_common_name) if ssl.HAS_NEVER_CHECK_COMMON_NAME: ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = False self.assertFalse(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) else: with self.assertRaises(AttributeError): ctx.hostname_checks_common_name = True @ignore_deprecation def test_min_max_version(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # OpenSSL default is MINIMUM_SUPPORTED, however some vendors like # Fedora override the setting to TLS 1.0. minimum_range = { # stock OpenSSL ssl.TLSVersion.MINIMUM_SUPPORTED, # Fedora 29 uses TLS 1.0 by default ssl.TLSVersion.TLSv1, # RHEL 8 uses TLS 1.2 by default ssl.TLSVersion.TLSv1_2 } maximum_range = { # stock OpenSSL ssl.TLSVersion.MAXIMUM_SUPPORTED, # Fedora 32 uses TLS 1.3 by default ssl.TLSVersion.TLSv1_3 } self.assertIn( ctx.minimum_version, minimum_range ) self.assertIn( ctx.maximum_version, maximum_range ) ctx.minimum_version = ssl.TLSVersion.TLSv1_1 ctx.maximum_version = ssl.TLSVersion.TLSv1_2 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.TLSv1_1 ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1_2 ) ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED ctx.maximum_version = ssl.TLSVersion.TLSv1 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.MINIMUM_SUPPORTED ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1 ) ctx.maximum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED self.assertIn( ctx.maximum_version, {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3} ) ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertIn( ctx.minimum_version, {ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3} ) with self.assertRaises(ValueError): ctx.minimum_version = 42 if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1) self.assertIn( ctx.minimum_version, minimum_range ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) with self.assertRaises(ValueError): ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED with self.assertRaises(ValueError): ctx.maximum_version = ssl.TLSVersion.TLSv1 @unittest.skipUnless( hasattr(ssl.SSLContext, 'security_level'), "requires OpenSSL >= 1.1.0" ) def test_security_level(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # The default security callback allows for levels between 0-5 # with OpenSSL defaulting to 1, however some vendors override the # default value (e.g. 
Debian defaults to 2) security_level_range = { 0, 1, # OpenSSL default 2, # Debian 3, 4, 5, } self.assertIn(ctx.security_level, security_level_range) def test_verify_flags(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # default value tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT | tf) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_CHAIN self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_CHAIN) ctx.verify_flags = ssl.VERIFY_DEFAULT self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT) ctx.verify_flags = ssl.VERIFY_ALLOW_PROXY_CERTS self.assertEqual(ctx.verify_flags, ssl.VERIFY_ALLOW_PROXY_CERTS) # supports any value ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT) with self.assertRaises(TypeError): ctx.verify_flags = None def test_load_cert_chain(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Combined key and cert in a single file ctx.load_cert_chain(CERTFILE, keyfile=None) ctx.load_cert_chain(CERTFILE, keyfile=CERTFILE) self.assertRaises(TypeError, ctx.load_cert_chain, keyfile=CERTFILE) with self.assertRaises(OSError) as cm: ctx.load_cert_chain(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(BADCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(EMPTYCERT) # Separate key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_cert_chain(ONLYCERT, ONLYKEY) ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) ctx.load_cert_chain(certfile=BYTES_ONLYCERT, keyfile=BYTES_ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(certfile=ONLYKEY, keyfile=ONLYCERT) # Mismatching key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with self.assertRaisesRegex(ssl.SSLError, "key values mismatch"): ctx.load_cert_chain(CAFILE_CACERT, ONLYKEY) # Password protected key and cert ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD) ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD.encode()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=bytearray(KEY_PASSWORD.encode())) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD.encode()) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, bytearray(KEY_PASSWORD.encode())) with self.assertRaisesRegex(TypeError, "should be a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=True) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password="badpass") with self.assertRaisesRegex(ValueError, "cannot be longer"): # openssl has a fixed limit on the password buffer. # PEM_BUFSIZE is generally set to 1kb. # Return a string larger than this. 
ctx.load_cert_chain(CERTFILE_PROTECTED, password=b'a' * 102400) # Password callback def getpass_unicode(): return KEY_PASSWORD def getpass_bytes(): return KEY_PASSWORD.encode() def getpass_bytearray(): return bytearray(KEY_PASSWORD.encode()) def getpass_badpass(): return "badpass" def getpass_huge(): return b'a' * (1024 * 1024) def getpass_bad_type(): return 9 def getpass_exception(): raise Exception('getpass error') class GetPassCallable: def __call__(self): return KEY_PASSWORD def getpass(self): return KEY_PASSWORD ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_unicode) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytes) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytearray) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable().getpass) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_badpass) with self.assertRaisesRegex(ValueError, "cannot be longer"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_huge) with self.assertRaisesRegex(TypeError, "must return a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bad_type) with self.assertRaisesRegex(Exception, "getpass error"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_exception) # Make sure the password function isn't called if it isn't needed ctx.load_cert_chain(CERTFILE, password=getpass_exception) def test_load_verify_locations(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_verify_locations(CERTFILE) ctx.load_verify_locations(cafile=CERTFILE, capath=None) ctx.load_verify_locations(BYTES_CERTFILE) ctx.load_verify_locations(cafile=BYTES_CERTFILE, capath=None) self.assertRaises(TypeError, ctx.load_verify_locations) self.assertRaises(TypeError, ctx.load_verify_locations, None, None, None) with self.assertRaises(OSError) as cm: ctx.load_verify_locations(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_verify_locations(BADCERT) ctx.load_verify_locations(CERTFILE, CAPATH) ctx.load_verify_locations(CERTFILE, capath=BYTES_CAPATH) # Issue #10989: crash if the second argument type is invalid self.assertRaises(TypeError, ctx.load_verify_locations, None, True) def test_load_verify_cadata(self): # test cadata with open(CAFILE_CACERT) as f: cacert_pem = f.read() cacert_der = ssl.PEM_cert_to_DER_cert(cacert_pem) with open(CAFILE_NEURONIO) as f: neuronio_pem = f.read() neuronio_der = ssl.PEM_cert_to_DER_cert(neuronio_pem) # test PEM ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 0) ctx.load_verify_locations(cadata=cacert_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 1) ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = "\n".join((cacert_pem, neuronio_pem)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # with junk around the certs ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = ["head", cacert_pem, "other", neuronio_pem, "again", neuronio_pem, "tail"] ctx.load_verify_locations(cadata="\n".join(combined)) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # test DER ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=cacert_der) ctx.load_verify_locations(cadata=neuronio_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=cacert_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = b"".join((cacert_der, neuronio_der)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # error cases ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_verify_locations, cadata=object) with self.assertRaisesRegex( ssl.SSLError, "no start line: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata="broken") with self.assertRaisesRegex( ssl.SSLError, "not enough data: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata=b"broken") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_load_dh_params(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_dh_params(DHFILE) if os.name != 'nt': ctx.load_dh_params(BYTES_DHFILE) self.assertRaises(TypeError, ctx.load_dh_params) self.assertRaises(TypeError, ctx.load_dh_params, None) with self.assertRaises(FileNotFoundError) as cm: ctx.load_dh_params(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) def test_session_stats(self): for proto in {ssl.PROTOCOL_TLS_CLIENT, ssl.PROTOCOL_TLS_SERVER}: ctx = ssl.SSLContext(proto) self.assertEqual(ctx.session_stats(), { 'number': 0, 'connect': 0, 'connect_good': 0, 'connect_renegotiate': 0, 'accept': 0, 'accept_good': 0, 'accept_renegotiate': 0, 'hits': 0, 'misses': 0, 'timeouts': 0, 'cache_full': 0, }) def test_set_default_verify_paths(self): # There's not much we can do to test that it acts as expected, # so just check it doesn't crash or raise an exception. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_default_verify_paths() @unittest.skipUnless(ssl.HAS_ECDH, "ECDH disabled on this OpenSSL build") def test_set_ecdh_curve(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.set_ecdh_curve("prime256v1") ctx.set_ecdh_curve(b"prime256v1") self.assertRaises(TypeError, ctx.set_ecdh_curve) self.assertRaises(TypeError, ctx.set_ecdh_curve, None) self.assertRaises(ValueError, ctx.set_ecdh_curve, "foo") self.assertRaises(ValueError, ctx.set_ecdh_curve, b"foo") def test_sni_callback(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # set_servername_callback expects a callable, or None self.assertRaises(TypeError, ctx.set_servername_callback) self.assertRaises(TypeError, ctx.set_servername_callback, 4) self.assertRaises(TypeError, ctx.set_servername_callback, "") self.assertRaises(TypeError, ctx.set_servername_callback, ctx) def dummycallback(sock, servername, ctx): pass ctx.set_servername_callback(None) ctx.set_servername_callback(dummycallback) def test_sni_callback_refcycle(self): # Reference cycles through the servername callback are detected # and cleared. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def dummycallback(sock, servername, ctx, cycle=ctx): pass ctx.set_servername_callback(dummycallback) wr = weakref.ref(ctx) del ctx, dummycallback gc.collect() self.assertIs(wr(), None) def test_cert_store_stats(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_cert_chain(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 1}) ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 1, 'crl': 0, 'x509': 2}) def test_get_ca_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.get_ca_certs(), []) # CERTFILE is not flagged as X509v3 Basic Constraints: CA:TRUE ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.get_ca_certs(), []) # but CAFILE_CACERT is a CA cert ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.get_ca_certs(), [{'issuer': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'notAfter': 'Mar 29 12:29:49 2033 GMT', 'notBefore': 'Mar 30 12:29:49 2003 GMT', 'serialNumber': '00', 'crlDistributionPoints': ('https://www.cacert.org/revoke.crl',), 'subject': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'version': 3}]) with open(CAFILE_CACERT) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) self.assertEqual(ctx.get_ca_certs(True), [der]) def test_load_default_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.SERVER_AUTH) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_default_certs, None) self.assertRaises(TypeError, ctx.load_default_certs, 'SERVER_AUTH') @unittest.skipIf(sys.platform == "win32", "not-Windows specific") def test_load_default_certs_env(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() self.assertEqual(ctx.cert_store_stats(), {"crl": 0, "x509": 1, "x509_ca": 0}) @unittest.skipUnless(sys.platform == "win32", "Windows specific") @unittest.skipIf(hasattr(sys, "gettotalrefcount"), "Debug build does not share environment between CRTs") def test_load_default_certs_env_windows(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() stats = ctx.cert_store_stats() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() stats["x509"] += 1 self.assertEqual(ctx.cert_store_stats(), stats) def _assert_context_options(self, ctx): self.assertEqual(ctx.options & ssl.OP_NO_SSLv2, ssl.OP_NO_SSLv2) if OP_NO_COMPRESSION != 0: self.assertEqual(ctx.options & OP_NO_COMPRESSION, OP_NO_COMPRESSION) if OP_SINGLE_DH_USE != 0: self.assertEqual(ctx.options & OP_SINGLE_DH_USE, OP_SINGLE_DH_USE) if OP_SINGLE_ECDH_USE != 0: 
self.assertEqual(ctx.options & OP_SINGLE_ECDH_USE, OP_SINGLE_ECDH_USE) if OP_CIPHER_SERVER_PREFERENCE != 0: self.assertEqual(ctx.options & OP_CIPHER_SERVER_PREFERENCE, OP_CIPHER_SERVER_PREFERENCE) def test_create_default_context(self): ctx = ssl.create_default_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) with open(SIGNING_CA) as f: cadata = f.read() ctx = ssl.create_default_context(cafile=SIGNING_CA, capath=CAPATH, cadata=cadata) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self._assert_context_options(ctx) ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test__create_stdlib_context(self): ctx = ssl._create_stdlib_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) self._assert_context_options(ctx) if has_tls_protocol(ssl.PROTOCOL_TLSv1): with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context( ssl.PROTOCOL_TLSv1_2, cert_reqs=ssl.CERT_REQUIRED, check_hostname=True ) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1_2) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) ctx = ssl._create_stdlib_context(purpose=ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test_check_hostname(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set CERT_REQUIRED ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_REQUIRED self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # Changing verify_mode does not affect check_hostname ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_OPTIONAL ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # keep CERT_OPTIONAL ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # Cannot set CERT_NONE with check_hostname enabled with self.assertRaises(ValueError): ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_client_server(self): # PROTOCOL_TLS_CLIENT has sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) 
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # PROTOCOL_TLS_SERVER has different but also sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_custom_class(self): class MySSLSocket(ssl.SSLSocket): pass class MySSLObject(ssl.SSLObject): pass ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.sslsocket_class = MySSLSocket ctx.sslobject_class = MySSLObject with ctx.wrap_socket(socket.socket(), server_side=True) as sock: self.assertIsInstance(sock, MySSLSocket) obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_side=True) self.assertIsInstance(obj, MySSLObject) def test_num_tickest(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.num_tickets, 2) ctx.num_tickets = 1 self.assertEqual(ctx.num_tickets, 1) ctx.num_tickets = 0 self.assertEqual(ctx.num_tickets, 0) with self.assertRaises(ValueError): ctx.num_tickets = -1 with self.assertRaises(TypeError): ctx.num_tickets = None ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.num_tickets, 2) with self.assertRaises(ValueError): ctx.num_tickets = 1 class SSLErrorTests(unittest.TestCase): def test_str(self): # The str() of a SSLError doesn't include the errno e = ssl.SSLError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) # Same for a subclass e = ssl.SSLZeroReturnError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_lib_reason(self): # Test the library and reason attributes ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) self.assertEqual(cm.exception.library, 'PEM') self.assertEqual(cm.exception.reason, 'NO_START_LINE') s = str(cm.exception) self.assertTrue(s.startswith("[PEM: NO_START_LINE] no start line"), s) def test_subclass(self): # Check that the appropriate SSLError subclass is raised # (this only tests one of them) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with socket.create_server(("127.0.0.1", 0)) as s: c = socket.create_connection(s.getsockname()) c.setblocking(False) with ctx.wrap_socket(c, False, do_handshake_on_connect=False) as c: with self.assertRaises(ssl.SSLWantReadError) as cm: c.do_handshake() s = str(cm.exception) self.assertTrue(s.startswith("The operation did not complete (read)"), s) # For compatibility self.assertEqual(cm.exception.errno, ssl.SSL_ERROR_WANT_READ) def test_bad_server_hostname(self): ctx = ssl.create_default_context() with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="") with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname=".example.org") with self.assertRaises(TypeError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="example.org\x00evil.com") class MemoryBIOTests(unittest.TestCase): def test_read_write(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') self.assertEqual(bio.read(), b'') bio.write(b'foo') bio.write(b'bar') self.assertEqual(bio.read(), b'foobar') self.assertEqual(bio.read(), b'') bio.write(b'baz') self.assertEqual(bio.read(2), b'ba') self.assertEqual(bio.read(1), b'z') self.assertEqual(bio.read(1), b'') def test_eof(self): bio = ssl.MemoryBIO() self.assertFalse(bio.eof) self.assertEqual(bio.read(), b'') 
self.assertFalse(bio.eof) bio.write(b'foo') self.assertFalse(bio.eof) bio.write_eof() self.assertFalse(bio.eof) self.assertEqual(bio.read(2), b'fo') self.assertFalse(bio.eof) self.assertEqual(bio.read(1), b'o') self.assertTrue(bio.eof) self.assertEqual(bio.read(), b'') self.assertTrue(bio.eof) def test_pending(self): bio = ssl.MemoryBIO() self.assertEqual(bio.pending, 0) bio.write(b'foo') self.assertEqual(bio.pending, 3) for i in range(3): bio.read(1) self.assertEqual(bio.pending, 3-i-1) for i in range(3): bio.write(b'x') self.assertEqual(bio.pending, i+1) bio.read() self.assertEqual(bio.pending, 0) def test_buffer_types(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') bio.write(bytearray(b'bar')) self.assertEqual(bio.read(), b'bar') bio.write(memoryview(b'baz')) self.assertEqual(bio.read(), b'baz') def test_error_types(self): bio = ssl.MemoryBIO() self.assertRaises(TypeError, bio.write, 'foo') self.assertRaises(TypeError, bio.write, None) self.assertRaises(TypeError, bio.write, True) self.assertRaises(TypeError, bio.write, 1) class SSLObjectTests(unittest.TestCase): def test_private_init(self): bio = ssl.MemoryBIO() with self.assertRaisesRegex(TypeError, "public constructor"): ssl.SSLObject(bio, bio) def test_unwrap(self): client_ctx, server_ctx, hostname = testing_context() c_in = ssl.MemoryBIO() c_out = ssl.MemoryBIO() s_in = ssl.MemoryBIO() s_out = ssl.MemoryBIO() client = client_ctx.wrap_bio(c_in, c_out, server_hostname=hostname) server = server_ctx.wrap_bio(s_in, s_out, server_side=True) # Loop on the handshake for a bit to get it settled for _ in range(5): try: client.do_handshake() except ssl.SSLWantReadError: pass if c_out.pending: s_in.write(c_out.read()) try: server.do_handshake() except ssl.SSLWantReadError: pass if s_out.pending: c_in.write(s_out.read()) # Now the handshakes should be complete (don't raise WantReadError) client.do_handshake() server.do_handshake() # Now if we unwrap one side unilaterally, it should send close-notify # and raise WantReadError: with self.assertRaises(ssl.SSLWantReadError): client.unwrap() # But server.unwrap() does not raise, because it reads the client's # close-notify: s_in.write(c_out.read()) server.unwrap() # And now that the client gets the server's close-notify, it doesn't # raise either. c_in.write(s_out.read()) client.unwrap() class SimpleBackgroundTests(unittest.TestCase): """Tests that connect to a simple server running in the background""" def setUp(self): self.server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.server_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=self.server_context) self.server_addr = (HOST, server.port) server.__enter__() self.addCleanup(server.__exit__, None, None, None) def test_connect(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) self.assertFalse(s.server_side) # this should succeed because we specify the root cert with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) as s: s.connect(self.server_addr) self.assertTrue(s.getpeercert()) self.assertFalse(s.server_side) def test_connect_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. 
s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_ex(self): # Issue #11326: check connect_ex() implementation s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) self.addCleanup(s.close) self.assertEqual(0, s.connect_ex(self.server_addr)) self.assertTrue(s.getpeercert()) def test_non_blocking_connect_ex(self): # Issue #11326: non-blocking connect_ex() should allow handshake # to proceed after the socket gets ready. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA, do_handshake_on_connect=False) self.addCleanup(s.close) s.setblocking(False) rc = s.connect_ex(self.server_addr) # EWOULDBLOCK under Windows, EINPROGRESS elsewhere self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) # Wait for connect to finish select.select([], [s], [], 5.0) # Non-blocking handshake while True: try: s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], [], 5.0) except ssl.SSLWantWriteError: select.select([], [s], [], 5.0) # SSL established self.assertTrue(s.getpeercert()) def test_connect_with_context(self): # Same as test_connect, but with a separately created context ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) # Same with a server hostname with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname="dummy") as s: s.connect(self.server_addr) ctx.verify_mode = ssl.CERT_REQUIRED # This should succeed because we specify the root cert ctx.load_verify_locations(SIGNING_CA) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_with_context_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) s = ctx.wrap_socket( socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME ) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_capath(self): # Verify server certificates using the `capath` argument # NOTE: the subject hashing algorithm has been changed between # OpenSSL 0.9.8n and 1.0.0, as a result the capath directory must # contain both versions of each certificate (same content, different # filename) for this test to be portable across OpenSSL releases. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # Same with a bytes `capath` argument ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=BYTES_CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_cadata(self): with open(SIGNING_CA) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=pem) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # same with DER ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=der) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't # delay closing the underlying "real socket" (here tested with its # file descriptor, hence skipping the test under Windows). ss = test_wrap_socket(socket.socket(socket.AF_INET)) ss.connect(self.server_addr) fd = ss.fileno() f = ss.makefile() f.close() # The fd is still open os.read(fd, 0) # Closing the SSL socket should close the fd too ss.close() gc.collect() with self.assertRaises(OSError) as e: os.read(fd, 0) self.assertEqual(e.exception.errno, errno.EBADF) def test_non_blocking_handshake(self): s = socket.socket(socket.AF_INET) s.connect(self.server_addr) s.setblocking(False) s = test_wrap_socket(s, cert_reqs=ssl.CERT_NONE, do_handshake_on_connect=False) self.addCleanup(s.close) count = 0 while True: try: count += 1 s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], []) except ssl.SSLWantWriteError: select.select([], [s], []) if support.verbose: sys.stdout.write("\nNeeded %d calls to do_handshake() to establish session.\n" % count) def test_get_server_certificate(self): _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) def test_get_server_certificate_sni(self): host, port = self.server_addr server_names = [] # We store servername_cb arguments to make sure they match the host def servername_cb(ssl_sock, server_name, initial_context): server_names.append(server_name) self.server_context.set_servername_callback(servername_cb) pem = ssl.get_server_certificate((host, port)) if not pem: self.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=SIGNING_CA) if not pem: self.fail("No server certificate on %s:%s!" 
% (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port, pem)) self.assertEqual(server_names, [host, host]) def test_get_server_certificate_fail(self): # Connection failure crashes ThreadedEchoServer, so run this in an # independent test method _test_get_server_certificate_fail(self, *self.server_addr) def test_get_server_certificate_timeout(self): def servername_cb(ssl_sock, server_name, initial_context): time.sleep(0.2) self.server_context.set_servername_callback(servername_cb) with self.assertRaises(socket.timeout): ssl.get_server_certificate(self.server_addr, ca_certs=SIGNING_CA, timeout=0.1) def test_ciphers(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="ALL") as s: s.connect(self.server_addr) with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="DEFAULT") as s: s.connect(self.server_addr) # Error checking can happen at instantiation or when connecting with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): with socket.socket(socket.AF_INET) as sock: s = test_wrap_socket(sock, cert_reqs=ssl.CERT_NONE, ciphers="^$:,;?*'dorothyx") s.connect(self.server_addr) def test_get_ca_certs_capath(self): # capath certs are loaded on request ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) self.assertEqual(ctx.get_ca_certs(), []) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname='localhost') as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) self.assertEqual(len(ctx.get_ca_certs()), 1) def test_context_setget(self): # Check that the context of a connected socket can be replaced. ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx1.load_verify_locations(capath=CAPATH) ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx2.load_verify_locations(capath=CAPATH) s = socket.socket(socket.AF_INET) with ctx1.wrap_socket(s, server_hostname='localhost') as ss: ss.connect(self.server_addr) self.assertIs(ss.context, ctx1) self.assertIs(ss._sslobj.context, ctx1) ss.context = ctx2 self.assertIs(ss.context, ctx2) self.assertIs(ss._sslobj.context, ctx2) def ssl_io_loop(self, sock, incoming, outgoing, func, *args, **kwargs): # A simple IO loop. Call func(*args) depending on the error we get # (WANT_READ or WANT_WRITE) move data between the socket and the BIOs. timeout = kwargs.get('timeout', support.SHORT_TIMEOUT) deadline = time.monotonic() + timeout count = 0 while True: if time.monotonic() > deadline: self.fail("timeout") errno = None count += 1 try: ret = func(*args) except ssl.SSLError as e: if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): raise errno = e.errno # Get any data from the outgoing BIO irrespective of any error, and # send it to the socket. buf = outgoing.read() sock.sendall(buf) # If there's no error, we're done. For WANT_READ, we need to get # data from the socket and put it in the incoming BIO. 
if errno is None: break elif errno == ssl.SSL_ERROR_WANT_READ: buf = sock.recv(32768) if buf: incoming.write(buf) else: incoming.write_eof() if support.verbose: sys.stdout.write("Needed %d calls to complete %s().\n" % (count, func.__name__)) return ret def test_bio_handshake(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.load_verify_locations(SIGNING_CA) sslobj = ctx.wrap_bio(incoming, outgoing, False, SIGNED_CERTFILE_HOSTNAME) self.assertIs(sslobj._sslobj.owner, sslobj) self.assertIsNone(sslobj.cipher()) self.assertIsNone(sslobj.version()) self.assertIsNone(sslobj.shared_ciphers()) self.assertRaises(ValueError, sslobj.getpeercert) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertIsNone(sslobj.get_channel_binding('tls-unique')) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) self.assertTrue(sslobj.cipher()) self.assertIsNone(sslobj.shared_ciphers()) self.assertIsNotNone(sslobj.version()) self.assertTrue(sslobj.getpeercert()) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertTrue(sslobj.get_channel_binding('tls-unique')) try: self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) except ssl.SSLSyscallError: # If the server shuts down the TCP connection without sending a # secure shutdown message, this is reported as SSL_ERROR_SYSCALL pass self.assertRaises(ssl.SSLError, sslobj.write, b'foo') def test_bio_read_write_data(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sslobj = ctx.wrap_bio(incoming, outgoing, False) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) req = b'FOO\n' self.ssl_io_loop(sock, incoming, outgoing, sslobj.write, req) buf = self.ssl_io_loop(sock, incoming, outgoing, sslobj.read, 1024) self.assertEqual(buf, b'foo\n') self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) def test_transport_eof(self): client_context, server_context, hostname = testing_context() with socket.socket(socket.AF_INET) as sock: sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() sslobj = client_context.wrap_bio(incoming, outgoing, server_hostname=hostname) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) # Simulate EOF from the transport. incoming.write_eof() self.assertRaises(ssl.SSLEOFError, sslobj.read) @support.requires_resource('network') class NetworkedTests(unittest.TestCase): def test_timeout_connect_ex(self): # Issue #12065: on a timeout, connect_ex() should return the original # errno (mimicking the behaviour of non-SSL sockets). 
with socket_helper.transient_internet(REMOTE_HOST): s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, do_handshake_on_connect=False) self.addCleanup(s.close) s.settimeout(0.0000001) rc = s.connect_ex((REMOTE_HOST, 443)) if rc == 0: self.skipTest("REMOTE_HOST responded too quickly") elif rc == errno.ENETUNREACH: self.skipTest("Network unreachable.") self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6') def test_get_server_certificate_ipv6(self): with socket_helper.transient_internet('ipv6.google.com'): _test_get_server_certificate(self, 'ipv6.google.com', 443) _test_get_server_certificate_fail(self, 'ipv6.google.com', 443) def _test_get_server_certificate(test, host, port, cert=None): pem = ssl.get_server_certificate((host, port)) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=cert) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port ,pem)) def _test_get_server_certificate_fail(test, host, port): try: pem = ssl.get_server_certificate((host, port), ca_certs=CERTFILE) except ssl.SSLError as x: #should fail if support.verbose: sys.stdout.write("%s\n" % x) else: test.fail("Got server certificate %s for %s:%s!" % (pem, host, port)) from test.ssl_servers import make_https_server class ThreadedEchoServer(threading.Thread): class ConnectionHandler(threading.Thread): """A mildly complicated class, because we want it to work both with and without the SSL wrapper around the socket connection, so that we can test the STARTTLS functionality.""" def __init__(self, server, connsock, addr): self.server = server self.running = False self.sock = connsock self.addr = addr self.sock.setblocking(True) self.sslconn = None threading.Thread.__init__(self) self.daemon = True def wrap_conn(self): try: self.sslconn = self.server.context.wrap_socket( self.sock, server_side=True) self.server.selected_alpn_protocols.append(self.sslconn.selected_alpn_protocol()) except (ConnectionResetError, BrokenPipeError, ConnectionAbortedError) as e: # We treat ConnectionResetError as though it were an # SSLError - OpenSSL on Ubuntu abruptly closes the # connection when asked to use an unsupported protocol. # # BrokenPipeError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake. # https://github.com/openssl/openssl/issues/6342 # # ConnectionAbortedError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake when using WinSock. self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") self.running = False self.close() return False except (ssl.SSLError, OSError) as e: # OSError may occur with wrong protocols, e.g. both # sides use PROTOCOL_TLS_SERVER. # # XXX Various errors can have happened here, for example # a mismatching protocol version, an invalid certificate, # or a low-level bug. This should be made more discriminating. 
# # bpo-31323: Store the exception as string to prevent # a reference leak: server -> conn_errors -> exception # -> traceback -> self (ConnectionHandler) -> server self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") # bpo-44229, bpo-43855, bpo-44237, and bpo-33450: # Ignore spurious EPROTOTYPE returned by write() on macOS. # See also http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno != errno.EPROTOTYPE and sys.platform != "darwin": self.running = False self.server.stop() self.close() return False else: self.server.shared_ciphers.append(self.sslconn.shared_ciphers()) if self.server.context.verify_mode == ssl.CERT_REQUIRED: cert = self.sslconn.getpeercert() if support.verbose and self.server.chatty: sys.stdout.write(" client cert is " + pprint.pformat(cert) + "\n") cert_binary = self.sslconn.getpeercert(True) if support.verbose and self.server.chatty: if cert_binary is None: sys.stdout.write(" client did not provide a cert\n") else: sys.stdout.write(f" cert binary is {len(cert_binary)}b\n") cipher = self.sslconn.cipher() if support.verbose and self.server.chatty: sys.stdout.write(" server: connection cipher is now " + str(cipher) + "\n") return True def read(self): if self.sslconn: return self.sslconn.read() else: return self.sock.recv(1024) def write(self, bytes): if self.sslconn: return self.sslconn.write(bytes) else: return self.sock.send(bytes) def close(self): if self.sslconn: self.sslconn.close() else: self.sock.close() def run(self): self.running = True if not self.server.starttls_server: if not self.wrap_conn(): return while self.running: try: msg = self.read() stripped = msg.strip() if not stripped: # eof, so quit this handler self.running = False try: self.sock = self.sslconn.unwrap() except OSError: # Many tests shut the TCP connection down # without an SSL shutdown. This causes # unwrap() to raise OSError with errno=0! 
pass else: self.sslconn = None self.close() elif stripped == b'over': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: client closed connection\n") self.close() return elif (self.server.starttls_server and stripped == b'STARTTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read STARTTLS from client, sending OK...\n") self.write(b"OK\n") if not self.wrap_conn(): return elif (self.server.starttls_server and self.sslconn and stripped == b'ENDTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read ENDTLS from client, sending OK...\n") self.write(b"OK\n") self.sock = self.sslconn.unwrap() self.sslconn = None if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: connection is now unencrypted...\n") elif stripped == b'CB tls-unique': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read CB tls-unique from client, sending our CB data...\n") data = self.sslconn.get_channel_binding("tls-unique") self.write(repr(data).encode("us-ascii") + b"\n") elif stripped == b'PHA': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: initiating post handshake auth\n") try: self.sslconn.verify_client_post_handshake() except ssl.SSLError as e: self.write(repr(e).encode("us-ascii") + b"\n") else: self.write(b"OK\n") elif stripped == b'HASCERT': if self.sslconn.getpeercert() is not None: self.write(b'TRUE\n') else: self.write(b'FALSE\n') elif stripped == b'GETCERT': cert = self.sslconn.getpeercert() self.write(repr(cert).encode("us-ascii") + b"\n") elif stripped == b'VERIFIEDCHAIN': certs = self.sslconn._sslobj.get_verified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") elif stripped == b'UNVERIFIEDCHAIN': certs = self.sslconn._sslobj.get_unverified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") else: if (support.verbose and self.server.connectionchatty): ctype = (self.sslconn and "encrypted") or "unencrypted" sys.stdout.write(" server: read %r (%s), sending back %r (%s)...\n" % (msg, ctype, msg.lower(), ctype)) self.write(msg.lower()) except OSError as e: # handles SSLError and socket errors if self.server.chatty and support.verbose: if isinstance(e, ConnectionError): # OpenSSL 1.1.1 sometimes raises # ConnectionResetError when connection is not # shut down gracefully. 
print( f" Connection reset by peer: {self.addr}" ) else: handle_error("Test server failure:\n") try: self.write(b"ERROR\n") except OSError: pass self.close() self.running = False # normally, we'd just stop here, but for the test # harness, we want to stop the server self.server.stop() def __init__(self, certificate=None, ssl_version=None, certreqs=None, cacerts=None, chatty=True, connectionchatty=False, starttls_server=False, alpn_protocols=None, ciphers=None, context=None): if context: self.context = context else: self.context = ssl.SSLContext(ssl_version if ssl_version is not None else ssl.PROTOCOL_TLS_SERVER) self.context.verify_mode = (certreqs if certreqs is not None else ssl.CERT_NONE) if cacerts: self.context.load_verify_locations(cacerts) if certificate: self.context.load_cert_chain(certificate) if alpn_protocols: self.context.set_alpn_protocols(alpn_protocols) if ciphers: self.context.set_ciphers(ciphers) self.chatty = chatty self.connectionchatty = connectionchatty self.starttls_server = starttls_server self.sock = socket.socket() self.port = socket_helper.bind_port(self.sock) self.flag = None self.active = False self.selected_alpn_protocols = [] self.shared_ciphers = [] self.conn_errors = [] threading.Thread.__init__(self) self.daemon = True def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): self.stop() self.join() def start(self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.sock.settimeout(1.0) self.sock.listen(5) self.active = True if self.flag: # signal an event self.flag.set() while self.active: try: newconn, connaddr = self.sock.accept() if support.verbose and self.chatty: sys.stdout.write(' server: new connection from ' + repr(connaddr) + '\n') handler = self.ConnectionHandler(self, newconn, connaddr) handler.start() handler.join() except TimeoutError as e: if support.verbose: sys.stdout.write(f' connection timeout {e!r}\n') except KeyboardInterrupt: self.stop() except BaseException as e: if support.verbose and self.chatty: sys.stdout.write( ' connection handling failed: ' + repr(e) + '\n') self.close() def close(self): if self.sock is not None: self.sock.close() self.sock = None def stop(self): self.active = False class AsyncoreEchoServer(threading.Thread): # this one's based on asyncore.dispatcher class EchoServer (asyncore.dispatcher): class ConnectionHandler(asyncore.dispatcher_with_send): def __init__(self, conn, certfile): self.socket = test_wrap_socket(conn, server_side=True, certfile=certfile, do_handshake_on_connect=False) asyncore.dispatcher_with_send.__init__(self, self.socket) self._ssl_accepting = True self._do_ssl_handshake() def readable(self): if isinstance(self.socket, ssl.SSLSocket): while self.socket.pending() > 0: self.handle_read_event() return True def _do_ssl_handshake(self): try: self.socket.do_handshake() except (ssl.SSLWantReadError, ssl.SSLWantWriteError): return except ssl.SSLEOFError: return self.handle_close() except ssl.SSLError: raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def handle_read(self): if self._ssl_accepting: self._do_ssl_handshake() else: data = self.recv(1024) if support.verbose: sys.stdout.write(" server: read %s from client\n" % repr(data)) if not data: self.close() else: self.send(data.lower()) def handle_close(self): self.close() if support.verbose: sys.stdout.write(" server: closed connection %s\n" % self.socket) def handle_error(self): raise def 
__init__(self, certfile): self.certfile = certfile sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(sock, '') asyncore.dispatcher.__init__(self, sock) self.listen(5) def handle_accepted(self, sock_obj, addr): if support.verbose: sys.stdout.write(" server: new connection from %s:%s\n" %addr) self.ConnectionHandler(sock_obj, self.certfile) def handle_error(self): raise def __init__(self, certfile): self.flag = None self.active = False self.server = self.EchoServer(certfile) self.port = self.server.port threading.Thread.__init__(self) self.daemon = True def __str__(self): return "<%s %s>" % (self.__class__.__name__, self.server) def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): if support.verbose: sys.stdout.write(" cleanup: stopping server.\n") self.stop() if support.verbose: sys.stdout.write(" cleanup: joining server thread.\n") self.join() if support.verbose: sys.stdout.write(" cleanup: successfully joined.\n") # make sure that ConnectionHandler is removed from socket_map asyncore.close_all(ignore_all=True) def start (self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.active = True if self.flag: self.flag.set() while self.active: try: asyncore.loop(1) except: pass def stop(self): self.active = False self.server.close() def server_params_test(client_context, server_context, indata=b"FOO\n", chatty=True, connectionchatty=False, sni_name=None, session=None): """ Launch a server, connect a client to it and try various reads and writes. """ stats = {} server = ThreadedEchoServer(context=server_context, chatty=chatty, connectionchatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=sni_name, session=session) as s: s.connect((HOST, server.port)) for arg in [indata, bytearray(indata), memoryview(indata)]: if connectionchatty: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(arg) outdata = s.read() if connectionchatty: if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): raise AssertionError( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if connectionchatty: if support.verbose: sys.stdout.write(" client: closing connection.\n") stats.update({ 'compression': s.compression(), 'cipher': s.cipher(), 'peercert': s.getpeercert(), 'client_alpn_protocol': s.selected_alpn_protocol(), 'version': s.version(), 'session_reused': s.session_reused, 'session': s.session, }) s.close() stats['server_alpn_protocols'] = server.selected_alpn_protocols stats['server_shared_ciphers'] = server.shared_ciphers return stats def try_protocol_combo(server_protocol, client_protocol, expect_success, certsreqs=None, server_options=0, client_options=0): """ Try to SSL-connect using *client_protocol* to *server_protocol*. If *expect_success* is true, assert that the connection succeeds, if it's false, assert that the connection fails. Also, if *expect_success* is a string, assert that it is the protocol version actually used by the connection. 
""" if certsreqs is None: certsreqs = ssl.CERT_NONE certtype = { ssl.CERT_NONE: "CERT_NONE", ssl.CERT_OPTIONAL: "CERT_OPTIONAL", ssl.CERT_REQUIRED: "CERT_REQUIRED", }[certsreqs] if support.verbose: formatstr = (expect_success and " %s->%s %s\n") or " {%s->%s} %s\n" sys.stdout.write(formatstr % (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol), certtype)) with warnings_helper.check_warnings(): # ignore Deprecation warnings client_context = ssl.SSLContext(client_protocol) client_context.options |= client_options server_context = ssl.SSLContext(server_protocol) server_context.options |= server_options min_version = PROTOCOL_TO_TLS_VERSION.get(client_protocol, None) if (min_version is not None # SSLContext.minimum_version is only available on recent OpenSSL # (setter added in OpenSSL 1.1.0, getter added in OpenSSL 1.1.1) and hasattr(server_context, 'minimum_version') and server_protocol == ssl.PROTOCOL_TLS and server_context.minimum_version > min_version ): # If OpenSSL configuration is strict and requires more recent TLS # version, we have to change the minimum to test old TLS versions. with warnings_helper.check_warnings(): server_context.minimum_version = min_version # NOTE: we must enable "ALL" ciphers on the client, otherwise an # SSLv23 client will send an SSLv3 hello (rather than SSLv2) # starting from OpenSSL 1.0.0 (see issue #8322). if client_context.protocol == ssl.PROTOCOL_TLS: client_context.set_ciphers("ALL") seclevel_workaround(server_context, client_context) for ctx in (client_context, server_context): ctx.verify_mode = certsreqs ctx.load_cert_chain(SIGNED_CERTFILE) ctx.load_verify_locations(SIGNING_CA) try: stats = server_params_test(client_context, server_context, chatty=False, connectionchatty=False) # Protocol mismatch can result in either an SSLError, or a # "Connection reset by peer" error. except ssl.SSLError: if expect_success: raise except OSError as e: if expect_success or e.errno != errno.ECONNRESET: raise else: if not expect_success: raise AssertionError( "Client protocol %s succeeded with server protocol %s!" 
% (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol))) elif (expect_success is not True and expect_success != stats['version']): raise AssertionError("version mismatch: expected %r, got %r" % (expect_success, stats['version'])) class ThreadedTests(unittest.TestCase): def test_echo(self): """Basic test of an SSL client connecting to a server""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_SERVER): server_params_test(client_context=client_context, server_context=server_context, chatty=True, connectionchatty=True, sni_name=hostname) client_context.check_hostname = False with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_SERVER): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=server_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception)) def test_getpeercert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), do_handshake_on_connect=False, server_hostname=hostname) as s: s.connect((HOST, server.port)) # getpeercert() raise ValueError while the handshake isn't # done. with self.assertRaises(ValueError): s.getpeercert() s.do_handshake() cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher() if support.verbose: sys.stdout.write(pprint.pformat(cert) + '\n') sys.stdout.write("Connection cipher is " + str(cipher) + '.\n') if 'subject' not in cert: self.fail("No subject field in certificate: %s." 
% pprint.pformat(cert)) if ((('organizationName', 'Python Software Foundation'),) not in cert['subject']): self.fail( "Missing or invalid 'organizationName' field in certificate subject; " "should be 'Python Software Foundation'.") self.assertIn('notBefore', cert) self.assertIn('notAfter', cert) before = ssl.cert_time_to_seconds(cert['notBefore']) after = ssl.cert_time_to_seconds(cert['notAfter']) self.assertLess(before, after) def test_crl_check(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(client_context.verify_flags, ssl.VERIFY_DEFAULT | tf) # VERIFY_DEFAULT should pass server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # VERIFY_CRL_CHECK_LEAF without a loaded CRL file fails client_context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaisesRegex(ssl.SSLError, "certificate verify failed"): s.connect((HOST, server.port)) # now load a CRL file. The CRL file is signed by the CA. client_context.load_verify_locations(CRLFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_check_hostname(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname="invalid") as s: with self.assertRaisesRegex( ssl.CertificateError, "Hostname mismatch, certificate is not valid for 'invalid'."): s.connect((HOST, server.port)) # missing server_hostname arg should cause an exception, too server = ThreadedEchoServer(context=server_context, chatty=True) with server: with socket.socket() as s: with self.assertRaisesRegex(ValueError, "check_hostname requires server_hostname"): client_context.wrap_socket(s) @unittest.skipUnless( ssl.HAS_NEVER_CHECK_COMMON_NAME, "test requires hostname_checks_common_name" ) def test_hostname_checks_common_name(self): client_context, server_context, hostname = testing_context() assert client_context.hostname_checks_common_name client_context.hostname_checks_common_name = False # default cert has a SAN server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) client_context, server_context, hostname = testing_context(NOSANFILE) client_context.hostname_checks_common_name = False server = ThreadedEchoServer(context=server_context, chatty=True) with server: with 
client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLCertVerificationError): s.connect((HOST, server.port)) def test_ecc_cert(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC cert server_context.load_cert_chain(SIGNED_CERTFILE_ECC) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_dual_rsa_ecc(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) # TODO: fix TLSv1.3 once SSLContext can restrict signature # algorithms. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # only ECDSA certs client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC and RSA key/cert pairs server_context.load_cert_chain(SIGNED_CERTFILE_ECC) server_context.load_cert_chain(SIGNED_CERTFILE) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_check_hostname_idn(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(IDNSANSFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations(SIGNING_CA) # correct hostname should verify, when specified in several # different ways idn_hostnames = [ ('könig.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), (b'xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('königsgäßchen.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), ('xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), (b'xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), # ('königsgäßchen.idna2008.pythontest.net', # 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ('xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), (b'xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ] for server_hostname, expected_hostname in idn_hostnames: server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=server_hostname) as s: self.assertEqual(s.server_hostname, expected_hostname) s.connect((HOST, server.port)) cert = s.getpeercert() self.assertEqual(s.server_hostname, expected_hostname) self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should 
raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname="python.example.org") as s: with self.assertRaises(ssl.CertificateError): s.connect((HOST, server.port)) def test_wrong_cert_tls12(self): """Connecting when the server rejects the client's certificate Launch a server with CERT_REQUIRED, and check that trying to connect to it with a wrong client certificate fails. """ client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) # require TLS client authentication server_context.verify_mode = ssl.CERT_REQUIRED # TLS 1.3 has different handshake client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: try: # Expect either an SSL error about the server rejecting # the connection, or a low-level connection reset (which # sometimes happens on Windows) s.connect((HOST, server.port)) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") @requires_tls_version('TLSv1_3') def test_wrong_cert_tls13(self): client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.minimum_version = ssl.TLSVersion.TLSv1_3 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: # TLS 1.3 perform client cert exchange after handshake s.connect((HOST, server.port)) try: s.write(b'data') s.read(1000) s.write(b'should have failed already') s.read(1000) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") def test_rude_shutdown(self): """A brutal shutdown of an SSL server should raise an OSError in the client when attempting handshake. """ listener_ready = threading.Event() listener_gone = threading.Event() s = socket.socket() port = socket_helper.bind_port(s, HOST) # `listener` runs in a thread. It sits in an accept() until # the main thread connects. Then it rudely closes the socket, # and sets Event `listener_gone` to let the main thread know # the socket is gone. 
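        # Editor's note (illustrative sketch, not part of the vendored
        # test): the listener/connector pair below boils down to "accept,
        # then close without any TLS handshake", which a standalone
        # reproduction might set up roughly like this (names here are
        # hypothetical):
        #
        #   lsock = socket.socket()
        #   lsock.bind((HOST, 0))
        #   lsock.listen()
        #   def rude_peer():
        #       conn, _ = lsock.accept()
        #       conn.close()            # no TLS handshake, no TLS shutdown
        #   threading.Thread(target=rude_peer).start()
        #   # ... the client then connects and expects wrap_socket() to
        #   # raise OSError, exactly as connector() asserts below.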
def listener(): s.listen() listener_ready.set() newsock, addr = s.accept() newsock.close() s.close() listener_gone.set() def connector(): listener_ready.wait() with socket.socket() as c: c.connect((HOST, port)) listener_gone.wait() try: ssl_sock = test_wrap_socket(c) except OSError: pass else: self.fail('connecting to closed SSL socket should have failed') t = threading.Thread(target=listener) t.start() try: connector() finally: t.join() def test_ssl_cert_verify_error(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: try: s.connect((HOST, server.port)) except ssl.SSLError as e: msg = 'unable to get local issuer certificate' self.assertIsInstance(e, ssl.SSLCertVerificationError) self.assertEqual(e.verify_code, 20) self.assertEqual(e.verify_message, msg) self.assertIn(msg, repr(e)) self.assertIn('certificate verify failed', repr(e)) @requires_tls_version('SSLv2') def test_protocol_sslv2(self): """Connecting to an SSLv2 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLSv1, False) # SSLv23 client with specific SSL options try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) def test_PROTOCOL_TLS(self): """Connecting to an SSLv23 server with various client options""" if support.verbose: sys.stdout.write("\n") if has_tls_version('SSLv2'): try: try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv2, True) except OSError as x: # this fails on some older versions of OpenSSL (0.9.7l, for instance) if support.verbose: sys.stdout.write( " SSL2 client to SSL23 server test unexpectedly failed:\n %s\n" % str(x)) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_OPTIONAL) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_REQUIRED) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) # Server with specific SSL options if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, server_options=ssl.OP_NO_SSLv3) # Will choose TLSv1 try_protocol_combo(ssl.PROTOCOL_TLS, 
ssl.PROTOCOL_TLS, True, server_options=ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, False, server_options=ssl.OP_NO_TLSv1) @requires_tls_version('SSLv3') def test_protocol_sslv3(self): """Connecting to an SSLv3 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3') try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @requires_tls_version('TLSv1') def test_protocol_tlsv1(self): """Connecting to a TLSv1 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1') try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) @requires_tls_version('TLSv1_1') def test_protocol_tlsv1_1(self): """Connecting to a TLSv1.1 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_1) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) @requires_tls_version('TLSv1_2') def test_protocol_tlsv1_2(self): """Connecting to a TLSv1.2 server with various client options. 
Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2', server_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2, client_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2,) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_2) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2') if has_tls_protocol(ssl.PROTOCOL_TLSv1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False) if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) def test_starttls(self): """Switching from clear text to encrypted and back again.""" msgs = (b"msg 1", b"MSG 2", b"STARTTLS", b"MSG 3", b"msg 4", b"ENDTLS", b"msg 5", b"msg 6") server = ThreadedEchoServer(CERTFILE, starttls_server=True, chatty=True, connectionchatty=True) wrapped = False with server: s = socket.socket() s.setblocking(True) s.connect((HOST, server.port)) if support.verbose: sys.stdout.write("\n") for indata in msgs: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) if wrapped: conn.write(indata) outdata = conn.read() else: s.send(indata) outdata = s.recv(1024) msg = outdata.strip().lower() if indata == b"STARTTLS" and msg.startswith(b"ok"): # STARTTLS ok, switch to secure mode if support.verbose: sys.stdout.write( " client: read %r from server, starting TLS...\n" % msg) conn = test_wrap_socket(s) wrapped = True elif indata == b"ENDTLS" and msg.startswith(b"ok"): # ENDTLS ok, switch back to clear text if support.verbose: sys.stdout.write( " client: read %r from server, ending TLS...\n" % msg) s = conn.unwrap() wrapped = False else: if support.verbose: sys.stdout.write( " client: read %r from server\n" % msg) if support.verbose: sys.stdout.write(" client: closing connection.\n") if wrapped: conn.write(b"over\n") else: s.send(b"over\n") if wrapped: conn.close() else: s.close() def test_socketserver(self): """Using socketserver to create and manage SSL connections.""" server = make_https_server(self, certfile=SIGNED_CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') with open(CERTFILE, 'rb') as f: d1 = f.read() d2 = '' # now fetch the same data from the HTTPS server url = 'https://localhost:%d/%s' % ( server.port, os.path.split(CERTFILE)[1]) context = ssl.create_default_context(cafile=SIGNING_CA) f = urllib.request.urlopen(url, context=context) try: dlen = f.info().get("content-length") if dlen and (int(dlen) > 0): d2 = f.read(int(dlen)) if support.verbose: sys.stdout.write( " client: read %d bytes from remote server '%s'\n" % (len(d2), server)) finally: f.close() self.assertEqual(d1, d2) def test_asyncore_server(self): """Check the example asyncore integration.""" if support.verbose: sys.stdout.write("\n") indata = b"FOO\n" server = AsyncoreEchoServer(CERTFILE) with server: s = test_wrap_socket(socket.socket()) s.connect(('127.0.0.1', server.port)) if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(indata) outdata = s.read() if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): self.fail( "bad data 
<<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if support.verbose: sys.stdout.write(" client: closing connection.\n") s.close() if support.verbose: sys.stdout.write(" client: connection closed.\n") def test_recv_send(self): """Test recv(), send() and friends.""" if support.verbose: sys.stdout.write("\n") server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) # helper methods for standardising recv* method signatures def _recv_into(): b = bytearray(b"\0"*100) count = s.recv_into(b) return b[:count] def _recvfrom_into(): b = bytearray(b"\0"*100) count, addr = s.recvfrom_into(b) return b[:count] # (name, method, expect success?, *args, return value func) send_methods = [ ('send', s.send, True, [], len), ('sendto', s.sendto, False, ["some.address"], len), ('sendall', s.sendall, True, [], lambda x: None), ] # (name, method, whether to expect success, *args) recv_methods = [ ('recv', s.recv, True, []), ('recvfrom', s.recvfrom, False, ["some.address"]), ('recv_into', _recv_into, True, []), ('recvfrom_into', _recvfrom_into, False, []), ] data_prefix = "PREFIX_" for (meth_name, send_meth, expect_success, args, ret_val_meth) in send_methods: indata = (data_prefix + meth_name).encode('ascii') try: ret = send_meth(indata, *args) msg = "sending with {}".format(meth_name) self.assertEqual(ret, ret_val_meth(indata), msg=msg) outdata = s.read() if outdata != indata.lower(): self.fail( "While sending with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to send with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) for meth_name, recv_meth, expect_success, args in recv_methods: indata = (data_prefix + meth_name).encode('ascii') try: s.send(indata) outdata = recv_meth(*args) if outdata != indata.lower(): self.fail( "While receiving with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to receive with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) # consume data s.read() # read(-1, buffer) is supported, even though read(-1) is not data = b"data" s.send(data) buffer = bytearray(len(data)) self.assertEqual(s.read(-1, buffer), len(data)) self.assertEqual(buffer, data) # sendall accepts bytes-like objects if ctypes is not None: ubyte = ctypes.c_ubyte * len(data) byteslike = ubyte.from_buffer_copy(data) s.sendall(byteslike) self.assertEqual(s.read(), data) # Make sure sendmsg et al are disallowed to avoid # inadvertent disclosure of data and/or corruption # of the encrypted data 
stream self.assertRaises(NotImplementedError, s.dup) self.assertRaises(NotImplementedError, s.sendmsg, [b"data"]) self.assertRaises(NotImplementedError, s.recvmsg, 100) self.assertRaises(NotImplementedError, s.recvmsg_into, [bytearray(100)]) s.write(b"over\n") self.assertRaises(ValueError, s.recv, -1) self.assertRaises(ValueError, s.read, -1) s.close() def test_recv_zero(self): server = ThreadedEchoServer(CERTFILE) server.__enter__() self.addCleanup(server.__exit__, None, None) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) # recv/read(0) should return no data s.send(b"data") self.assertEqual(s.recv(0), b"") self.assertEqual(s.read(0), b"") self.assertEqual(s.read(), b"data") # Should not block if the other end sends no data s.setblocking(False) self.assertEqual(s.recv(0), b"") self.assertEqual(s.recv_into(bytearray()), 0) def test_nonblocking_send(self): server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) s.setblocking(False) # If we keep sending data, at some point the buffers # will be full and the call will block buf = bytearray(8192) def fill_buffer(): while True: s.send(buf) self.assertRaises((ssl.SSLWantWriteError, ssl.SSLWantReadError), fill_buffer) # Now read all the output and discard it s.setblocking(True) s.close() def test_handshake_timeout(self): # Issue #5103: SSL handshake must respect the socket timeout server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) started = threading.Event() finish = False def serve(): server.listen() started.set() conns = [] while not finish: r, w, e = select.select([server], [], [], 0.1) if server in r: # Let the socket hang around rather than having # it closed by garbage collection. conns.append(server.accept()[0]) for sock in conns: sock.close() t = threading.Thread(target=serve) t.start() started.wait() try: try: c = socket.socket(socket.AF_INET) c.settimeout(0.2) c.connect((host, port)) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", test_wrap_socket, c) finally: c.close() try: c = socket.socket(socket.AF_INET) c = test_wrap_socket(c) c.settimeout(0.2) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", c.connect, (host, port)) finally: c.close() finally: finish = True t.join() server.close() def test_server_accept(self): # Issue #16357: accept() on a SSLSocket created through # SSLContext.wrap_socket(). client_ctx, server_ctx, hostname = testing_context() server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) server = server_ctx.wrap_socket(server, server_side=True) self.assertTrue(server.server_side) evt = threading.Event() remote = None peer = None def serve(): nonlocal remote, peer server.listen() # Block on the accept and wait on the connection to close. evt.set() remote, peer = server.accept() remote.send(remote.recv(4)) t = threading.Thread(target=serve) t.start() # Client wait until server setup and perform a connect. 
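        # Editor's note (sketch of the API under test, not new behavior):
        # wrapping a *listening* socket with server_side=True and calling
        # accept() on it yields an ssl.SSLSocket per connection, which is
        # what the sanity checks at the end of this test rely on:
        #
        #   srv = server_ctx.wrap_socket(listening_sock, server_side=True)
        #   conn, addr = srv.accept()          # conn is an ssl.SSLSocket
        #
        # (server_ctx / listening_sock stand in for the objects created
        # above.)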
evt.wait() client = client_ctx.wrap_socket( socket.socket(), server_hostname=hostname ) client.connect((hostname, port)) client.send(b'data') client.recv() client_addr = client.getsockname() client.close() t.join() remote.close() server.close() # Sanity checks. self.assertIsInstance(remote, ssl.SSLSocket) self.assertEqual(peer, client_addr) def test_getpeercert_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.getpeercert() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_do_handshake_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.do_handshake() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_no_shared_ciphers(self): client_context, server_context, hostname = testing_context() # OpenSSL enables all TLS 1.3 ciphers, enforce TLS 1.2 for test client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Force different suites on client and server client_context.set_ciphers("AES128") server_context.set_ciphers("AES256") with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(OSError): s.connect((HOST, server.port)) self.assertIn("no shared cipher", server.conn_errors[0]) def test_version_basic(self): """ Basic tests for SSLSocket.version(). More tests are done in the test_protocol_*() methods. """ context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with ThreadedEchoServer(CERTFILE, ssl_version=ssl.PROTOCOL_TLS_SERVER, chatty=False) as server: with context.wrap_socket(socket.socket()) as s: self.assertIs(s.version(), None) self.assertIs(s._sslobj, None) s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.3') self.assertIs(s._sslobj, None) self.assertIs(s.version(), None) @requires_tls_version('TLSv1_3') def test_tls1_3(self): client_context, server_context, hostname = testing_context() client_context.minimum_version = ssl.TLSVersion.TLSv1_3 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn(s.cipher()[0], { 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256', }) self.assertEqual(s.version(), 'TLSv1.3') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_tlsv1_2(self): client_context, server_context, hostname = testing_context() # client TLSv1.0 to 1.2 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # server only TLSv1.2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 server_context.maximum_version = ssl.TLSVersion.TLSv1_2 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.2') @requires_tls_version('TLSv1_1') @ignore_deprecation def test_min_max_version_tlsv1_1(self): client_context, server_context, hostname = testing_context() # client 1.0 to 1.2, server 1.0 to 1.1 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = 
ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1 server_context.maximum_version = ssl.TLSVersion.TLSv1_1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.1') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_mismatch(self): client_context, server_context, hostname = testing_context() # client 1.0, server 1.2 (mismatch) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 client_context.maximum_version = ssl.TLSVersion.TLSv1 client_context.minimum_version = ssl.TLSVersion.TLSv1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError) as e: s.connect((HOST, server.port)) self.assertIn("alert", str(e.exception)) @requires_tls_version('SSLv3') def test_min_max_version_sslv3(self): client_context, server_context, hostname = testing_context() server_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.maximum_version = ssl.TLSVersion.SSLv3 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'SSLv3') def test_default_ecdh_curve(self): # Issue #21015: elliptic curve-based Diffie Hellman key exchange # should be enabled by default on SSL contexts. client_context, server_context, hostname = testing_context() # TLSv1.3 defaults to PFS key agreement and no longer has KEA in # cipher name. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Prior to OpenSSL 1.0.0, ECDH ciphers have to be enabled # explicitly using the 'ECCdraft' cipher alias. Otherwise, # our default cipher list should prefer ECDH-based ciphers # automatically. 
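        # Editor's note (illustrative only): the negotiated key exchange is
        # inspected through SSLSocket.cipher(), which returns a
        # (name, protocol, secret_bits) tuple; the assertion below simply
        # checks that "ECDH" appears in the cipher name, e.g.:
        #
        #   name, protocol, secret_bits = tls_sock.cipher()
        #   uses_ecdh = "ECDH" in name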
with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn("ECDH", s.cipher()[0]) @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): """Test tls-unique channel binding.""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=True, connectionchatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # get the data cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( " got channel binding data: {0!r}\n".format(cb_data)) # check if it is sane self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 # and compare with the peers version s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(cb_data).encode("us-ascii")) # now, again with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) new_cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( "got another channel binding data: {0!r}\n".format( new_cb_data) ) # is it really unique self.assertNotEqual(cb_data, new_cb_data) self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(new_cb_data).encode("us-ascii")) def test_compression(self): client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) if support.verbose: sys.stdout.write(" got compression: {!r}\n".format(stats['compression'])) self.assertIn(stats['compression'], { None, 'ZLIB', 'RLE' }) @unittest.skipUnless(hasattr(ssl, 'OP_NO_COMPRESSION'), "ssl.OP_NO_COMPRESSION needed for this test") def test_compression_disabled(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_COMPRESSION server_context.options |= ssl.OP_NO_COMPRESSION stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['compression'], None) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_dh_params(self): # Check we can get a connection with ephemeral Diffie-Hellman client_context, server_context, hostname = testing_context() # test scenario needs TLS <= 1.2 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.load_dh_params(DHFILE) server_context.set_ciphers("kEDH") server_context.maximum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) cipher = stats["cipher"][0] parts = cipher.split("-") if "ADH" not in parts and "EDH" not in parts and "DHE" not in parts: self.fail("Non-DH cipher: " + cipher[0]) def test_ecdh_curve(self): # server secp384r1, client auto client_context, server_context, hostname = testing_context() server_context.set_ecdh_curve("secp384r1") 
server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server auto, client secp384r1 client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server / client curve mismatch client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("prime256v1") server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 with self.assertRaises(ssl.SSLError): server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_selected_alpn_protocol(self): # selected_alpn_protocol() is None unless ALPN is used. client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_selected_alpn_protocol_if_server_uses_alpn(self): # selected_alpn_protocol() is None unless ALPN is used by the client. client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(['foo', 'bar']) stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_alpn_protocols(self): server_protocols = ['foo', 'bar', 'milkshake'] protocol_tests = [ (['foo', 'bar'], 'foo'), (['bar', 'foo'], 'foo'), (['milkshake'], 'milkshake'), (['http/3.0', 'http/4.0'], None) ] for client_protocols, expected in protocol_tests: client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(server_protocols) client_context.set_alpn_protocols(client_protocols) try: stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) except ssl.SSLError as e: stats = e msg = "failed trying %s (s) and %s (c).\n" \ "was expecting %s, but got %%s from the %%s" \ % (str(server_protocols), str(client_protocols), str(expected)) client_result = stats['client_alpn_protocol'] self.assertEqual(client_result, expected, msg % (client_result, "client")) server_result = stats['server_alpn_protocols'][-1] \ if len(stats['server_alpn_protocols']) else 'nothing' self.assertEqual(server_result, expected, msg % (server_result, "server")) def test_npn_protocols(self): assert not ssl.HAS_NPN def sni_contexts(self): server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) other_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) other_context.load_cert_chain(SIGNED_CERTFILE2) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) return server_context, other_context, client_context def check_common_name(self, stats, name): cert = stats['peercert'] self.assertIn((('commonName', name),), cert['subject']) def test_sni_callback(self): calls = [] server_context, other_context, client_context = self.sni_contexts() client_context.check_hostname = False def servername_cb(ssl_sock, server_name, initial_context): 
calls.append((server_name, initial_context)) if server_name is not None: ssl_sock.context = other_context server_context.set_servername_callback(servername_cb) stats = server_params_test(client_context, server_context, chatty=True, sni_name='supermessage') # The hostname was fetched properly, and the certificate was # changed for the connection. self.assertEqual(calls, [("supermessage", server_context)]) # CERTFILE4 was selected self.check_common_name(stats, 'fakehostname') calls = [] # The callback is called with server_name=None stats = server_params_test(client_context, server_context, chatty=True, sni_name=None) self.assertEqual(calls, [(None, server_context)]) self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) # Check disabling the callback calls = [] server_context.set_servername_callback(None) stats = server_params_test(client_context, server_context, chatty=True, sni_name='notfunny') # Certificate didn't change self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) self.assertEqual(calls, []) def test_sni_callback_alert(self): # Returning a TLS alert is reflected to the connecting client server_context, other_context, client_context = self.sni_contexts() def cb_returning_alert(ssl_sock, server_name, initial_context): return ssl.ALERT_DESCRIPTION_ACCESS_DENIED server_context.set_servername_callback(cb_returning_alert) with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_ACCESS_DENIED') def test_sni_callback_raising(self): # Raising fails the connection with a TLS handshake failure alert. server_context, other_context, client_context = self.sni_contexts() def cb_raising(ssl_sock, server_name, initial_context): 1/0 server_context.set_servername_callback(cb_raising) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'SSLV3_ALERT_HANDSHAKE_FAILURE') self.assertEqual(catch.unraisable.exc_type, ZeroDivisionError) def test_sni_callback_wrong_return_type(self): # Returning the wrong return type terminates the TLS connection # with an internal error alert. 
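        # Editor's note summarizing the three SNI-callback tests in this
        # group (no new behavior): a callback registered with
        # set_servername_callback() may return None to continue the
        # handshake, return an ssl.ALERT_DESCRIPTION_* constant to send
        # that alert (see cb_returning_alert above), or -- if it raises or
        # returns an unexpected type -- the handshake is aborted with a
        # handshake-failure / internal-error alert, e.g.:
        #
        #   def servername_cb(ssl_sock, server_name, initial_context):
        #       if server_name is None:
        #           return ssl.ALERT_DESCRIPTION_ACCESS_DENIED
        #       ssl_sock.context = other_context   # pick a per-name context
        #       return None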
server_context, other_context, client_context = self.sni_contexts() def cb_wrong_return_type(ssl_sock, server_name, initial_context): return "foo" server_context.set_servername_callback(cb_wrong_return_type) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_INTERNAL_ERROR') self.assertEqual(catch.unraisable.exc_type, TypeError) def test_shared_ciphers(self): client_context, server_context, hostname = testing_context() client_context.set_ciphers("AES128:AES256") server_context.set_ciphers("AES256:eNULL") expected_algs = [ "AES256", "AES-256", # TLS 1.3 ciphers are always enabled "TLS_CHACHA20", "TLS_AES", ] stats = server_params_test(client_context, server_context, sni_name=hostname) ciphers = stats['server_shared_ciphers'][0] self.assertGreater(len(ciphers), 0) for name, tls_version, bits in ciphers: if not any(alg in name for alg in expected_algs): self.fail(name) def test_read_write_after_close_raises_valuerror(self): client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: s = client_context.wrap_socket(socket.socket(), server_hostname=hostname) s.connect((HOST, server.port)) s.close() self.assertRaises(ValueError, s.read, 1024) self.assertRaises(ValueError, s.write, b'hello') def test_sendfile(self): TEST_DATA = b"x" * 512 with open(os_helper.TESTFN, 'wb') as f: f.write(TEST_DATA) self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with open(os_helper.TESTFN, 'rb') as file: s.sendfile(file) self.assertEqual(s.recv(1024), TEST_DATA) def test_session(self): client_context, server_context, hostname = testing_context() # TODO: sessions aren't compatible with TLSv1.3 yet client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # first connection without session stats = server_params_test(client_context, server_context, sni_name=hostname) session = stats['session'] self.assertTrue(session.id) self.assertGreater(session.time, 0) self.assertGreater(session.timeout, 0) self.assertTrue(session.has_ticket) self.assertGreater(session.ticket_lifetime_hint, 0) self.assertFalse(stats['session_reused']) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 1) self.assertEqual(sess_stat['hits'], 0) # reuse session stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 2) self.assertEqual(sess_stat['hits'], 1) self.assertTrue(stats['session_reused']) session2 = stats['session'] self.assertEqual(session2.id, session.id) self.assertEqual(session2, session) self.assertIsNot(session2, session) self.assertGreaterEqual(session2.time, session.time) self.assertGreaterEqual(session2.timeout, session.timeout) # another one without session stats = server_params_test(client_context, server_context, sni_name=hostname) self.assertFalse(stats['session_reused']) session3 = stats['session'] self.assertNotEqual(session3.id, session.id) self.assertNotEqual(session3, session) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 3) 
self.assertEqual(sess_stat['hits'], 1) # reuse session again stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) self.assertTrue(stats['session_reused']) session4 = stats['session'] self.assertEqual(session4.id, session.id) self.assertEqual(session4, session) self.assertGreaterEqual(session4.time, session.time) self.assertGreaterEqual(session4.timeout, session.timeout) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 4) self.assertEqual(sess_stat['hits'], 2) def test_session_handling(self): client_context, server_context, hostname = testing_context() client_context2, _, _ = testing_context() # TODO: session reuse does not work with TLSv1.3 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context2.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # session is None before handshake self.assertEqual(s.session, None) self.assertEqual(s.session_reused, None) s.connect((HOST, server.port)) session = s.session self.assertTrue(session) with self.assertRaises(TypeError) as e: s.session = object self.assertEqual(str(e.exception), 'Value is not a SSLSession.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # cannot set session after handshake with self.assertRaises(ValueError) as e: s.session = session self.assertEqual(str(e.exception), 'Cannot set session after handshake.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # can set session before handshake and before the # connection was established s.session = session s.connect((HOST, server.port)) self.assertEqual(s.session.id, session.id) self.assertEqual(s.session, session) self.assertEqual(s.session_reused, True) with client_context2.wrap_socket(socket.socket(), server_hostname=hostname) as s: # cannot re-use session with a different SSLContext with self.assertRaises(ValueError) as e: s.session = session s.connect((HOST, server.port)) self.assertEqual(str(e.exception), 'Session refers to a different SSLContext.') @unittest.skipUnless(has_tls_version('TLSv1_3'), "Test needs TLS 1.3") class TestPostHandshakeAuth(unittest.TestCase): def test_pha_setter(self): protocols = [ ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT ] for protocol in protocols: ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.post_handshake_auth, False) ctx.post_handshake_auth = True self.assertEqual(ctx.post_handshake_auth, True) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, True) ctx.post_handshake_auth = False self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, False) ctx.verify_mode = ssl.CERT_OPTIONAL ctx.post_handshake_auth = True self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) self.assertEqual(ctx.post_handshake_auth, True) def test_pha_required(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') 
self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA method just returns true when cert is already available s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'GETCERT') cert_text = s.recv(4096).decode('us-ascii') self.assertIn('Python Software Foundation CA', cert_text) def test_pha_required_nocert(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True def msg_cb(conn, direction, version, content_type, msg_type, data): if support.verbose and content_type == _TLSContentType.ALERT: info = (conn, direction, version, content_type, msg_type, data) sys.stdout.write(f"TLS: {info!r}\n") server_context._msg_callback = msg_cb client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) s.write(b'PHA') # test sometimes fails with EOF error. Test passes as long as # server aborts connection with an error. with self.assertRaisesRegex( ssl.SSLError, '(certificate required|EOF occurred)' ): # receive CertificateRequest data = s.recv(1024) self.assertEqual(data, b'OK\n') # send empty Certificate + Finish s.write(b'HASCERT') # receive alert s.recv(1024) def test_pha_optional(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # check CERT_OPTIONAL server_context.verify_mode = ssl.CERT_OPTIONAL server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_optional_nocert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_OPTIONAL client_context.post_handshake_auth = True server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') # optional doesn't fail when client does not have a cert s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') def test_pha_no_pha_client(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex(ssl.SSLError, 'not server'): s.verify_client_post_handshake() s.write(b'PHA') self.assertIn(b'extension not received', 
s.recv(1024)) def test_pha_no_pha_server(self): # server doesn't have PHA enabled, cert is requested in handshake client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA doesn't fail if there is already a cert s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_not_tls13(self): # TLS 1.2 client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # PHA fails for TLS != 1.3 s.write(b'PHA') self.assertIn(b'WRONG_SSL_VERSION', s.recv(1024)) def test_bpo37428_pha_cert_none(self): # verify that post_handshake_auth does not implicitly enable cert # validation. hostname = SIGNED_CERTFILE_HOSTNAME client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # no cert validation and CA on client side client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) server_context.load_verify_locations(SIGNING_CA) server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # server cert has not been validated self.assertEqual(s.getpeercert(), {}) def test_internal_chain_client(self): client_context, server_context, hostname = testing_context( server_chain=False ) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) vc = s._sslobj.get_verified_chain() self.assertEqual(len(vc), 2) ee, ca = vc uvc = s._sslobj.get_unverified_chain() self.assertEqual(len(uvc), 1) self.assertEqual(ee, uvc[0]) self.assertEqual(hash(ee), hash(uvc[0])) self.assertEqual(repr(ee), repr(uvc[0])) self.assertNotEqual(ee, ca) self.assertNotEqual(hash(ee), hash(ca)) self.assertNotEqual(repr(ee), repr(ca)) self.assertNotEqual(ee.get_info(), ca.get_info()) self.assertIn("CN=localhost", repr(ee)) self.assertIn("CN=our-ca-server", repr(ca)) pem = ee.public_bytes(_ssl.ENCODING_PEM) der = ee.public_bytes(_ssl.ENCODING_DER) self.assertIsInstance(pem, str) self.assertIn("-----BEGIN CERTIFICATE-----", pem) self.assertIsInstance(der, bytes) self.assertEqual( ssl.PEM_cert_to_DER_cert(pem), der ) def test_internal_chain_server(self): client_context, 
server_context, hostname = testing_context() client_context.load_cert_chain(SIGNED_CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) s.write(b'VERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') s.write(b'UNVERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') HAS_KEYLOG = hasattr(ssl.SSLContext, 'keylog_filename') requires_keylog = unittest.skipUnless( HAS_KEYLOG, 'test requires OpenSSL 1.1.1 with keylog callback') class TestSSLDebug(unittest.TestCase): def keylog_lines(self, fname=os_helper.TESTFN): with open(fname) as f: return len(list(f)) @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_defaults(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) self.assertFalse(os.path.isfile(os_helper.TESTFN)) ctx.keylog_filename = os_helper.TESTFN self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) self.assertTrue(os.path.isfile(os_helper.TESTFN)) self.assertEqual(self.keylog_lines(), 1) ctx.keylog_filename = None self.assertEqual(ctx.keylog_filename, None) with self.assertRaises((IsADirectoryError, PermissionError)): # Windows raises PermissionError ctx.keylog_filename = os.path.dirname( os.path.abspath(os_helper.TESTFN)) with self.assertRaises(TypeError): ctx.keylog_filename = 1 @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_filename(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() client_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # header, 5 lines for TLS 1.3 self.assertEqual(self.keylog_lines(), 6) client_context.keylog_filename = None server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 11) client_context.keylog_filename = os_helper.TESTFN server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 21) client_context.keylog_filename = None server_context.keylog_filename = None @requires_keylog @unittest.skipIf(sys.flags.ignore_environment, "test is not compatible with ignore_environment") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_env(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with unittest.mock.patch.dict(os.environ): os.environ['SSLKEYLOGFILE'] = os_helper.TESTFN self.assertEqual(os.environ['SSLKEYLOGFILE'], os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) ctx = ssl.create_default_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) ctx = 
ssl._create_stdlib_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) def test_msg_callback(self): client_context, server_context, hostname = testing_context() def msg_cb(conn, direction, version, content_type, msg_type, data): pass self.assertIs(client_context._msg_callback, None) client_context._msg_callback = msg_cb self.assertIs(client_context._msg_callback, msg_cb) with self.assertRaises(TypeError): client_context._msg_callback = object() def test_msg_callback_tls12(self): client_context, server_context, hostname = testing_context() client_context.maximum_version = ssl.TLSVersion.TLSv1_2 msg = [] def msg_cb(conn, direction, version, content_type, msg_type, data): self.assertIsInstance(conn, ssl.SSLSocket) self.assertIsInstance(data, bytes) self.assertIn(direction, {'read', 'write'}) msg.append((direction, version, content_type, msg_type)) client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn( ("read", TLSVersion.TLSv1_2, _TLSContentType.HANDSHAKE, _TLSMessageType.SERVER_KEY_EXCHANGE), msg ) self.assertIn( ("write", TLSVersion.TLSv1_2, _TLSContentType.CHANGE_CIPHER_SPEC, _TLSMessageType.CHANGE_CIPHER_SPEC), msg ) def test_msg_callback_deadlock_bpo43577(self): client_context, server_context, hostname = testing_context() server_context2 = testing_context()[1] def msg_cb(conn, direction, version, content_type, msg_type, data): pass def sni_cb(sock, servername, ctx): sock.context = server_context2 server_context._msg_callback = msg_cb server_context.sni_callback = sni_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) def setUpModule(): if support.verbose: plats = { 'Mac': platform.mac_ver, 'Windows': platform.win32_ver, } for name, func in plats.items(): plat = func() if plat and plat[0]: plat = '%s %r' % (name, plat) break else: plat = repr(platform.platform()) print("test_ssl: testing with %r %r" % (ssl.OPENSSL_VERSION, ssl.OPENSSL_VERSION_INFO)) print(" under %s" % plat) print(" HAS_SNI = %r" % ssl.HAS_SNI) print(" OP_ALL = 0x%8x" % ssl.OP_ALL) try: print(" OP_NO_TLSv1_1 = 0x%8x" % ssl.OP_NO_TLSv1_1) except AttributeError: pass for filename in [ CERTFILE, BYTES_CERTFILE, ONLYCERT, ONLYKEY, BYTES_ONLYCERT, BYTES_ONLYKEY, SIGNED_CERTFILE, SIGNED_CERTFILE2, SIGNING_CA, BADCERT, BADKEY, EMPTYCERT]: if not os.path.exists(filename): raise support.TestFailed("Can't read certificate file %r" % filename) thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_subprocess.py000066400000000000000000004705261471441230600221410ustar00rootroot00000000000000import unittest from unittest import mock from test import support from test.support import check_sanitizer from test.support import import_helper from test.support import os_helper from test.support import warnings_helper import subprocess import sys import signal import io import itertools import os import errno import tempfile import time import traceback import types import selectors import sysconfig import select import shutil import threading 
import gc
import textwrap
import json
import pathlib
from test.support.os_helper import FakePath

try:
    import _testcapi
except ImportError:
    _testcapi = None

try:
    import pwd
except ImportError:
    pwd = None
try:
    import grp
except ImportError:
    grp = None
try:
    import fcntl
except:
    fcntl = None

if support.PGO:
    raise unittest.SkipTest("test is not helpful for PGO")

mswindows = (sys.platform == "win32")

#
# Depends on the following external programs: Python
#

if mswindows:
    SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), '
                 'os.O_BINARY);')
else:
    SETBINARY = ''

NONEXISTING_CMD = ('nonexisting_i_hope',)
# Ignore errors that indicate the command was not found
NONEXISTING_ERRORS = (FileNotFoundError, NotADirectoryError, PermissionError)

ZERO_RETURN_CMD = (sys.executable, '-c', 'pass')


def setUpModule():
    shell_true = shutil.which('true')
    if shell_true is None:
        return
    if (os.access(shell_true, os.X_OK) and
        subprocess.run([shell_true]).returncode == 0):
        global ZERO_RETURN_CMD
        ZERO_RETURN_CMD = (shell_true,)  # Faster than Python startup.


class BaseTestCase(unittest.TestCase):
    def setUp(self):
        # Try to minimize the number of children we have so this test
        # doesn't crash on some buildbots (Alphas in particular).
        support.reap_children()

    def tearDown(self):
        if not mswindows:
            # subprocess._active is not used on Windows and is set to None.
            for inst in subprocess._active:
                inst.wait()
            subprocess._cleanup()
            self.assertFalse(
                subprocess._active, "subprocess._active not empty"
            )
        self.doCleanups()
        support.reap_children()


class PopenTestException(Exception):
    pass


class PopenExecuteChildRaises(subprocess.Popen):
    """Popen subclass for testing cleanup of subprocess.PIPE filehandles
    when _execute_child fails.
    """
    def _execute_child(self, *args, **kwargs):
        raise PopenTestException("Forced Exception for Test")


class ProcessTestCase(BaseTestCase):

    def test_io_buffered_by_default(self):
        p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        try:
            self.assertIsInstance(p.stdin, io.BufferedIOBase)
            self.assertIsInstance(p.stdout, io.BufferedIOBase)
            self.assertIsInstance(p.stderr, io.BufferedIOBase)
        finally:
            p.stdin.close()
            p.stdout.close()
            p.stderr.close()
            p.wait()

    def test_io_unbuffered_works(self):
        p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             bufsize=0)
        try:
            self.assertIsInstance(p.stdin, io.RawIOBase)
            self.assertIsInstance(p.stdout, io.RawIOBase)
            self.assertIsInstance(p.stderr, io.RawIOBase)
        finally:
            p.stdin.close()
            p.stdout.close()
            p.stderr.close()
            p.wait()

    def test_call_seq(self):
        # call() function with sequence argument
        rc = subprocess.call([sys.executable, "-c",
                              "import sys; sys.exit(47)"])
        self.assertEqual(rc, 47)

    def test_call_timeout(self):
        # call() function with timeout argument; we want to test that the child
        # process gets killed when the timeout expires. If the child isn't
        # killed, this call will deadlock since subprocess.call waits for the
        # child.
        self.assertRaises(subprocess.TimeoutExpired, subprocess.call,
                          [sys.executable, "-c", "while True: pass"],
                          timeout=0.1)

    def test_check_call_zero(self):
        # check_call() function with zero return code
        rc = subprocess.check_call(ZERO_RETURN_CMD)
        self.assertEqual(rc, 0)

    def test_check_call_nonzero(self):
        # check_call() function with non-zero return code
        with self.assertRaises(subprocess.CalledProcessError) as c:
            subprocess.check_call([sys.executable, "-c",
                                   "import sys; sys.exit(47)"])
        self.assertEqual(c.exception.returncode, 47)

    def test_check_output(self):
        # check_output() function with zero return code
        output = subprocess.check_output(
                [sys.executable, "-c", "print('BDFL')"])
        self.assertIn(b'BDFL', output)

    def test_check_output_nonzero(self):
        # check_call() function with non-zero return code
        with self.assertRaises(subprocess.CalledProcessError) as c:
            subprocess.check_output(
                    [sys.executable, "-c", "import sys; sys.exit(5)"])
        self.assertEqual(c.exception.returncode, 5)

    def test_check_output_stderr(self):
        # check_output() function stderr redirected to stdout
        output = subprocess.check_output(
                [sys.executable, "-c", "import sys; sys.stderr.write('BDFL')"],
                stderr=subprocess.STDOUT)
        self.assertIn(b'BDFL', output)

    def test_check_output_stdin_arg(self):
        # check_output() can be called with stdin set to a file
        tf = tempfile.TemporaryFile()
        self.addCleanup(tf.close)
        tf.write(b'pear')
        tf.seek(0)
        output = subprocess.check_output(
                [sys.executable, "-c",
                 "import sys; sys.stdout.write(sys.stdin.read().upper())"],
                stdin=tf)
        self.assertIn(b'PEAR', output)

    def test_check_output_input_arg(self):
        # check_output() can be called with input set to a string
        output = subprocess.check_output(
                [sys.executable, "-c",
                 "import sys; sys.stdout.write(sys.stdin.read().upper())"],
                input=b'pear')
        self.assertIn(b'PEAR', output)

    def test_check_output_input_none(self):
        """input=None has a legacy meaning of input='' on check_output."""
        output = subprocess.check_output(
                [sys.executable, "-c",
                 "import sys; print('XX' if sys.stdin.read() else '')"],
                input=None)
        self.assertNotIn(b'XX', output)

    def test_check_output_input_none_text(self):
        output = subprocess.check_output(
                [sys.executable, "-c",
                 "import sys; print('XX' if sys.stdin.read() else '')"],
                input=None, text=True)
        self.assertNotIn('XX', output)

    def test_check_output_input_none_universal_newlines(self):
        output = subprocess.check_output(
                [sys.executable, "-c",
                 "import sys; print('XX' if sys.stdin.read() else '')"],
                input=None, universal_newlines=True)
        self.assertNotIn('XX', output)

    def test_check_output_input_none_encoding_errors(self):
        output = subprocess.check_output(
                [sys.executable, "-c", "print('foo')"],
                input=None, encoding='utf-8', errors='ignore')
        self.assertIn('foo', output)

    def test_check_output_stdout_arg(self):
        # check_output() refuses to accept 'stdout' argument
        with self.assertRaises(ValueError) as c:
            output = subprocess.check_output(
                    [sys.executable, "-c", "print('will not be run')"],
                    stdout=sys.stdout)
            self.fail("Expected ValueError when stdout arg supplied.")
        self.assertIn('stdout', c.exception.args[0])

    def test_check_output_stdin_with_input_arg(self):
        # check_output() refuses to accept 'stdin' with 'input'
        tf = tempfile.TemporaryFile()
        self.addCleanup(tf.close)
        tf.write(b'pear')
        tf.seek(0)
        with self.assertRaises(ValueError) as c:
            output = subprocess.check_output(
                    [sys.executable, "-c", "print('will not be run')"],
                    stdin=tf, input=b'hare')
            self.fail("Expected ValueError when stdin and input args supplied.")
        self.assertIn('stdin', c.exception.args[0])
        self.assertIn('input', c.exception.args[0])

    def test_check_output_timeout(self):
        # check_output() function with timeout arg
        with self.assertRaises(subprocess.TimeoutExpired) as c:
            output = subprocess.check_output(
                    [sys.executable, "-c",
                     "import sys, time\n"
                     "sys.stdout.write('BDFL')\n"
                     "sys.stdout.flush()\n"
                     "time.sleep(3600)"],
                    # Some heavily loaded buildbots (sparc Debian 3.x) require
                    # this much time to start and print.
                    timeout=3)
            self.fail("Expected TimeoutExpired.")
        self.assertEqual(c.exception.output, b'BDFL')

    def test_call_kwargs(self):
        # call() function with keyword args
        newenv = os.environ.copy()
        newenv["FRUIT"] = "banana"
        rc = subprocess.call([sys.executable, "-c",
                              'import sys, os;'
                              'sys.exit(os.getenv("FRUIT")=="banana")'],
                             env=newenv)
        self.assertEqual(rc, 1)

    def test_invalid_args(self):
        # Popen() called with invalid arguments should raise TypeError
        # but Popen.__del__ should not complain (issue #12085)
        with support.captured_stderr() as s:
            self.assertRaises(TypeError, subprocess.Popen, invalid_arg_name=1)
            argcount = subprocess.Popen.__init__.__code__.co_argcount
            too_many_args = [0] * (argcount + 1)
            self.assertRaises(TypeError, subprocess.Popen, *too_many_args)
        self.assertEqual(s.getvalue(), '')

    def test_stdin_none(self):
        # .stdin is None when not redirected
        p = subprocess.Popen([sys.executable, "-c", 'print("banana")'],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        self.addCleanup(p.stdout.close)
        self.addCleanup(p.stderr.close)
        p.wait()
        self.assertEqual(p.stdin, None)

    def test_stdout_none(self):
        # .stdout is None when not redirected, and the child's stdout will
        # be inherited from the parent. In order to test this we run a
        # subprocess in a subprocess:
        # this_test
        #   \-- subprocess created by this test (parent)
        #          \-- subprocess created by the parent subprocess (child)
        # The parent doesn't specify stdout, so the child will use the
        # parent's stdout. This test checks that the message printed by the
        # child goes to the parent stdout. The parent also checks that the
        # child's stdout is None. See #11963.
        code = ('import sys; from subprocess import Popen, PIPE;'
                'p = Popen([sys.executable, "-c", "print(\'test_stdout_none\')"],'
                ' stdin=PIPE, stderr=PIPE);'
                'p.wait(); assert p.stdout is None;')
        p = subprocess.Popen([sys.executable, "-c", code],
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        self.addCleanup(p.stdout.close)
        self.addCleanup(p.stderr.close)
        out, err = p.communicate()
        self.assertEqual(p.returncode, 0, err)
        self.assertEqual(out.rstrip(), b'test_stdout_none')

    def test_stderr_none(self):
        # .stderr is None when not redirected
        p = subprocess.Popen([sys.executable, "-c", 'print("banana")'],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        self.addCleanup(p.stdout.close)
        self.addCleanup(p.stdin.close)
        p.wait()
        self.assertEqual(p.stderr, None)

    def _assert_python(self, pre_args, **kwargs):
        # We include sys.exit() to prevent the test runner from hanging
        # whenever python is found.
        args = pre_args + ["import sys; sys.exit(47)"]
        p = subprocess.Popen(args, **kwargs)
        p.wait()
        self.assertEqual(47, p.returncode)

    def test_executable(self):
        # Check that the executable argument works.
        #
        # On Unix (non-Mac and non-Windows), Python looks at args[0] to
        # determine where its standard library is, so we need the directory
        # of args[0] to be valid for the Popen() call to Python to succeed.
        # See also issue #16170 and issue #7774.
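        # Note that 'executable' only replaces the program that is actually
        # executed; args[0] is still what the child receives as its argv[0].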
doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=sys.executable) def test_bytes_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=os.fsencode(sys.executable)) def test_pathlike_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=FakePath(sys.executable)) def test_executable_takes_precedence(self): # Check that the executable argument takes precedence over args[0]. # # Verify first that the call succeeds without the executable arg. pre_args = [sys.executable, "-c"] self._assert_python(pre_args) self.assertRaises(NONEXISTING_ERRORS, self._assert_python, pre_args, executable=NONEXISTING_CMD[0]) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_executable_replaces_shell(self): # Check that the executable argument replaces the default shell # when shell=True. self._assert_python([], executable=sys.executable, shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_bytes_executable_replaces_shell(self): self._assert_python([], executable=os.fsencode(sys.executable), shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_pathlike_executable_replaces_shell(self): self._assert_python([], executable=FakePath(sys.executable), shell=True) # For use in the test_cwd* tests below. def _normalize_cwd(self, cwd): # Normalize an expected cwd (for Tru64 support). # We can't use os.path.realpath since it doesn't expand Tru64 {memb} # strings. See bug #1063571. with os_helper.change_cwd(cwd): return os.getcwd() # For use in the test_cwd* tests below. def _split_python_path(self): # Return normalized (python_dir, python_base). python_path = os.path.realpath(sys.executable) return os.path.split(python_path) # For use in the test_cwd* tests below. def _assert_cwd(self, expected_cwd, python_arg, **kwargs): # Invoke Python via Popen, and assert that (1) the call succeeds, # and that (2) the current working directory of the child process # matches *expected_cwd*. p = subprocess.Popen([python_arg, "-c", "import os, sys; " "buf = sys.stdout.buffer; " "buf.write(os.getcwd().encode()); " "buf.flush(); " "sys.exit(47)"], stdout=subprocess.PIPE, **kwargs) self.addCleanup(p.stdout.close) p.wait() self.assertEqual(47, p.returncode) normcase = os.path.normcase self.assertEqual(normcase(expected_cwd), normcase(p.stdout.read().decode())) def test_cwd(self): # Check that cwd changes the cwd for the child process. temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=temp_dir) def test_cwd_with_bytes(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=os.fsencode(temp_dir)) def test_cwd_with_pathlike(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=FakePath(temp_dir)) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_arg(self): # Check that Popen looks for args[0] relative to cwd if args[0] # is relative. 
python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python]) self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, rel_python, cwd=python_dir) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_executable(self): # Check that Popen looks for executable relative to cwd if executable # is relative (and that executable takes precedence over args[0]). python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) doesntexist = "somethingyoudonthave" with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python) self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python, cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, doesntexist, executable=rel_python, cwd=python_dir) def test_cwd_with_absolute_arg(self): # Check that Popen can find the executable when the cwd is wrong # if args[0] is an absolute path. python_dir, python_base = self._split_python_path() abs_python = os.path.join(python_dir, python_base) rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_dir() as wrong_dir: # Before calling with an absolute path, confirm that using a # relative path fails. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) wrong_dir = self._normalize_cwd(wrong_dir) self._assert_cwd(wrong_dir, abs_python, cwd=wrong_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') def test_executable_with_cwd(self): python_dir, python_base = self._split_python_path() python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, "somethingyoudonthave", executable=sys.executable, cwd=python_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') @unittest.skipIf(sysconfig.is_python_build(), "need an installed Python. See #7774") def test_executable_without_cwd(self): # For a normal installation, it should work without 'cwd' # argument. For test runs in the build directory, see #7774. 
self._assert_cwd(os.getcwd(), "somethingyoudonthave", executable=sys.executable) def test_stdin_pipe(self): # stdin redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.stdin.write(b"pear") p.stdin.close() p.wait() self.assertEqual(p.returncode, 1) def test_stdin_filedes(self): # stdin is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() os.write(d, b"pear") os.lseek(d, 0, 0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=d) p.wait() self.assertEqual(p.returncode, 1) def test_stdin_fileobj(self): # stdin is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b"pear") tf.seek(0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=tf) p.wait() self.assertEqual(p.returncode, 1) def test_stdout_pipe(self): # stdout redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read(), b"orange") def test_stdout_filedes(self): # stdout is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"orange") def test_stdout_fileobj(self): # stdout is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"orange") def test_stderr_pipe(self): # stderr redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=subprocess.PIPE) with p: self.assertEqual(p.stderr.read(), b"strawberry") def test_stderr_filedes(self): # stderr is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"strawberry") def test_stderr_fileobj(self): # stderr is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"strawberry") def test_stderr_redirect_with_no_stdout_redirect(self): # test stderr=STDOUT while stdout=None (not set) # - grandchild prints to stderr # - child redirects grandchild's stderr to its stdout # - the parent should get grandchild's stderr in child's stdout p = subprocess.Popen([sys.executable, "-c", 'import sys, subprocess;' 'rc = subprocess.call([sys.executable, "-c",' ' "import sys;"' ' "sys.stderr.write(\'42\')"],' ' stderr=subprocess.STDOUT);' 'sys.exit(rc)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() #NOTE: stdout should get stderr from grandchild self.assertEqual(stdout, b'42') self.assertEqual(stderr, b'') # should be empty self.assertEqual(p.returncode, 0) def test_stdout_stderr_pipe(self): # capture stdout and stderr to the same pipe p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=subprocess.PIPE, 
stderr=subprocess.STDOUT) with p: self.assertEqual(p.stdout.read(), b"appleorange") def test_stdout_stderr_file(self): # capture stdout and stderr to the same open file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=tf, stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"appleorange") def test_stdout_filedes_of_stdout(self): # stdout is set to 1 (#1531862). # To avoid printing the text on stdout, we do something similar to # test_stdout_none (see above). The parent subprocess calls the child # subprocess passing stdout=1, and this test uses stdout=PIPE in # order to capture and check the output of the parent. See #11963. code = ('import sys, subprocess; ' 'rc = subprocess.call([sys.executable, "-c", ' ' "import os, sys; sys.exit(os.write(sys.stdout.fileno(), ' 'b\'test with stdout=1\'))"], stdout=1); ' 'assert rc == 18') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test with stdout=1') def test_stdout_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'for i in range(10240):' 'print("x" * 1024)'], stdout=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdout, None) def test_stderr_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys\n' 'for i in range(10240):' 'sys.stderr.write("x" * 1024)'], stderr=subprocess.DEVNULL) p.wait() self.assertEqual(p.stderr, None) def test_stdin_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdin.read(1)'], stdin=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdin, None) @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesizes(self): test_pipe_r, test_pipe_w = os.pipe() try: # Get the default pipesize with F_GETPIPE_SZ pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) finally: os.close(test_pipe_r) os.close(test_pipe_w) pipesize = pipesize_default // 2 if pipesize < 512: # the POSIX minimum raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=pipesize) try: for fifo in [p.stdin, p.stdout, p.stderr]: self.assertEqual( fcntl.fcntl(fifo.fileno(), fcntl.F_GETPIPE_SZ), pipesize) # Windows pipe size can be acquired via GetNamedPipeInfoFunction # https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-getnamedpipeinfo # However, this function is not yet in _winapi. 
p.stdin.write(b"pear") p.stdin.close() finally: p.kill() p.wait() @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesize_default(self): p = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=-1) try: fp_r, fp_w = os.pipe() try: default_pipesize = fcntl.fcntl(fp_w, fcntl.F_GETPIPE_SZ) for fifo in [p.stdin, p.stdout, p.stderr]: self.assertEqual( fcntl.fcntl(fifo.fileno(), fcntl.F_GETPIPE_SZ), default_pipesize) finally: os.close(fp_r) os.close(fp_w) # On other platforms we cannot test the pipe size (yet). But above # code using pipesize=-1 should not crash. p.stdin.close() finally: p.kill() p.wait() def test_env(self): newenv = os.environ.copy() newenv["FRUIT"] = "orange" with subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange") # Windows requires at least the SYSTEMROOT environment variable to start # Python @unittest.skipIf(sys.platform == 'win32', 'cannot test an empty env on Windows') @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'with an empty environment.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_empty_env(self): """Verify that env={} is as empty as possible.""" def is_env_var_to_ignore(n): """Determine if an environment variable is under our control.""" # This excludes some __CF_* and VERSIONER_* keys MacOS insists # on adding even when the environment in exec is empty. # Gentoo sandboxes also force LD_PRELOAD and SANDBOX_* to exist. 
return ('VERSIONER' in n or '__CF' in n or # MacOS n == 'LD_PRELOAD' or n.startswith('SANDBOX') or # Gentoo n == 'LC_CTYPE') # Locale coercion triggered with subprocess.Popen([sys.executable, "-c", 'import os; print(list(os.environ.keys()))'], stdout=subprocess.PIPE, env={}) as p: stdout, stderr = p.communicate() child_env_names = eval(stdout.strip()) self.assertIsInstance(child_env_names, list) child_env_names = [k for k in child_env_names if not is_env_var_to_ignore(k)] self.assertEqual(child_env_names, []) def test_invalid_cmd(self): # null character in the command name cmd = sys.executable + '\0' with self.assertRaises(ValueError): subprocess.Popen([cmd, "-c", "pass"]) # null character in the command argument with self.assertRaises(ValueError): subprocess.Popen([sys.executable, "-c", "pass#\0"]) def test_invalid_env(self): # null character in the environment variable name newenv = os.environ.copy() newenv["FRUIT\0VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # null character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange\0VEGETABLE=cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable name newenv = os.environ.copy() newenv["FRUIT=ORANGE"] = "lemon" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange=lemon" with subprocess.Popen([sys.executable, "-c", 'import sys, os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange=lemon") def test_communicate_stdin(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.communicate(b"pear") self.assertEqual(p.returncode, 1) def test_communicate_stdout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("pineapple")'], stdout=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, b"pineapple") self.assertEqual(stderr, None) def test_communicate_stderr(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("pineapple")'], stderr=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, b"pineapple") def test_communicate(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") self.assertEqual(stderr, b"pineapple") def test_communicate_timeout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stderr.write("pineapple\\n");' 'time.sleep(1);' 'sys.stderr.write("pear\\n");' 'sys.stdout.write(sys.stdin.read())'], universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, "banana", timeout=0.3) # Make sure we can keep waiting for it, and that we get the whole output # after it completes. 
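        # (TimeoutExpired does not kill the child; it keeps running, so a
        # later communicate() call can still collect everything it writes.
        # The documented pattern is roughly:
        #     try:
        #         out, err = p.communicate(input, timeout=...)
        #     except subprocess.TimeoutExpired:
        #         p.kill()
        #         out, err = p.communicate()
        # Here the test simply waits for the child to finish on its own.)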
(stdout, stderr) = p.communicate() self.assertEqual(stdout, "banana") self.assertEqual(stderr.encode(), b"pineapple\npear\n") def test_communicate_timeout_large_output(self): # Test an expiring timeout while the child is outputting lots of data. p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));'], stdout=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, timeout=0.4) (stdout, _) = p.communicate() self.assertEqual(len(stdout), 4 * 64 * 1024) # Test for the fd leak reported in http://bugs.python.org/issue2791. def test_communicate_pipe_fd_leak(self): for stdin_pipe in (False, True): for stdout_pipe in (False, True): for stderr_pipe in (False, True): options = {} if stdin_pipe: options['stdin'] = subprocess.PIPE if stdout_pipe: options['stdout'] = subprocess.PIPE if stderr_pipe: options['stderr'] = subprocess.PIPE if not options: continue p = subprocess.Popen(ZERO_RETURN_CMD, **options) p.communicate() if p.stdin is not None: self.assertTrue(p.stdin.closed) if p.stdout is not None: self.assertTrue(p.stdout.closed) if p.stderr is not None: self.assertTrue(p.stderr.closed) def test_communicate_returns(self): # communicate() should return None if no redirection is active p = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(47)"]) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, None) def test_communicate_pipe_buf(self): # communicate() with writes larger than pipe_buf # This test will probably deadlock rather than fail, if # communicate() does not work properly. 
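        # communicate() has to interleave reading and writing here: if it
        # wrote all of stdin before reading, parent and child could both
        # block on full pipe buffers.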
x, y = os.pipe() os.close(x) os.close(y) p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read(47));' 'sys.stderr.write("x" * %d);' 'sys.stdout.write(sys.stdin.read())' % support.PIPE_MAX_SIZE], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) string_to_write = b"a" * support.PIPE_MAX_SIZE (stdout, stderr) = p.communicate(string_to_write) self.assertEqual(stdout, string_to_write) def test_writes_before_communicate(self): # stdin.write before communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.stdin.write(b"banana") (stdout, stderr) = p.communicate(b"split") self.assertEqual(stdout, b"bananasplit") self.assertEqual(stderr, b"") def test_universal_newlines_and_text(self): args = [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(sys.stdin.readline().encode());' 'buf.flush();' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(sys.stdin.read().encode());' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'] for extra_kwarg in ('universal_newlines', 'text'): p = subprocess.Popen(args, **{'stdin': subprocess.PIPE, 'stdout': subprocess.PIPE, extra_kwarg: True}) with p: p.stdin.write("line1\n") p.stdin.flush() self.assertEqual(p.stdout.readline(), "line1\n") p.stdin.write("line3\n") p.stdin.close() self.addCleanup(p.stdout.close) self.assertEqual(p.stdout.readline(), "line2\n") self.assertEqual(p.stdout.read(6), "line3\n") self.assertEqual(p.stdout.read(), "line4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate(self): # universal newlines through communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=1) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate() self.assertEqual(stdout, "line2\nline4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate_stdin(self): # universal newlines through communicate(), with only stdin p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.readline() assert s == "line1\\n", repr(s) s = sys.stdin.read() assert s == "line3\\n", repr(s) ''')], stdin=subprocess.PIPE, universal_newlines=1) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_input_none(self): # Test communicate(input=None) with universal newlines. # # We set stdout to PIPE because, as of this writing, a different # code path is tested when the number of pipes is zero or one. 
p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) p.communicate() self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_stdin_stdout_stderr(self): # universal newlines through communicate(), with stdin, stdout, stderr p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.buffer.readline() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line2\\r") sys.stderr.buffer.write(b"eline2\\n") s = sys.stdin.buffer.read() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line4\\n") sys.stdout.buffer.write(b"line5\\r\\n") sys.stderr.buffer.write(b"eline6\\r") sys.stderr.buffer.write(b"eline7\\r\\nz") ''')], stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) self.assertEqual("line1\nline2\nline3\nline4\nline5\n", stdout) # Python debug build push something like "[42442 refs]\n" # to stderr at exit of subprocess. self.assertTrue(stderr.startswith("eline2\neline6\neline7\n")) def test_universal_newlines_communicate_encodings(self): # Check that universal newlines mode works for various encodings, # in particular for encodings in the UTF-16 and UTF-32 families. # See issue #15595. # # UTF-16 and UTF-32-BE are sufficient to check both with BOM and # without, and UTF-16 and UTF-32. for encoding in ['utf-16', 'utf-32-be']: code = ("import sys; " r"sys.stdout.buffer.write('1\r\n2\r3\n4'.encode('%s'))" % encoding) args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding=encoding) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '1\n2\n3\n4') def test_communicate_errors(self): for errors, expected in [ ('ignore', ''), ('replace', '\ufffd\ufffd'), ('surrogateescape', '\udc80\udc80'), ('backslashreplace', '\\x80\\x80'), ]: code = ("import sys; " r"sys.stdout.buffer.write(b'[\x80\x80]')") args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8', errors=errors) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '[{}]'.format(expected)) def test_no_leaking(self): # Make sure we leak no resources if not mswindows: max_handles = 1026 # too much for most UNIX systems else: max_handles = 2050 # too much for (at least some) Windows setups handles = [] tmpdir = tempfile.mkdtemp() try: for i in range(max_handles): try: tmpfile = os.path.join(tmpdir, os_helper.TESTFN) handles.append(os.open(tmpfile, os.O_WRONLY|os.O_CREAT)) except OSError as e: if e.errno != errno.EMFILE: raise break else: self.skipTest("failed to reach the file descriptor limit " "(tried %d)" % max_handles) # Close a couple of them (should be enough for a subprocess) for i in range(10): os.close(handles.pop()) # Loop creating some subprocesses. If one of them leaks some fds, # the next loop iteration will fail by reaching the max fd limit. 
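            # (Only about ten descriptors were freed above, so even a small
            # per-iteration leak would hit EMFILE before the loop finishes.)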
for i in range(15): p = subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.read())"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate(b"lime")[0] self.assertEqual(data, b"lime") finally: for h in handles: os.close(h) shutil.rmtree(tmpdir) def test_list2cmdline(self): self.assertEqual(subprocess.list2cmdline(['a b c', 'd', 'e']), '"a b c" d e') self.assertEqual(subprocess.list2cmdline(['ab"c', '\\', 'd']), 'ab\\"c \\ d') self.assertEqual(subprocess.list2cmdline(['ab"c', ' \\', 'd']), 'ab\\"c " \\\\" d') self.assertEqual(subprocess.list2cmdline(['a\\\\\\b', 'de fg', 'h']), 'a\\\\\\b "de fg" h') self.assertEqual(subprocess.list2cmdline(['a\\"b', 'c', 'd']), 'a\\\\\\"b c d') self.assertEqual(subprocess.list2cmdline(['a\\\\b c', 'd', 'e']), '"a\\\\b c" d e') self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), '"a\\\\b\\ c" d e') self.assertEqual(subprocess.list2cmdline(['ab', '']), 'ab ""') def test_poll(self): p = subprocess.Popen([sys.executable, "-c", "import os; os.read(0, 1)"], stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) self.assertIsNone(p.poll()) os.write(p.stdin.fileno(), b'A') p.wait() # Subsequent invocations should just return the returncode self.assertEqual(p.poll(), 0) def test_wait(self): p = subprocess.Popen(ZERO_RETURN_CMD) self.assertEqual(p.wait(), 0) # Subsequent invocations should just return the returncode self.assertEqual(p.wait(), 0) def test_wait_timeout(self): p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.3)"]) with self.assertRaises(subprocess.TimeoutExpired) as c: p.wait(timeout=0.0001) self.assertIn("0.0001", str(c.exception)) # For coverage of __str__. self.assertEqual(p.wait(timeout=support.SHORT_TIMEOUT), 0) def test_invalid_bufsize(self): # an invalid type of the bufsize argument should raise # TypeError. with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, "orange") def test_bufsize_is_none(self): # bufsize=None should be the same as bufsize=0. p = subprocess.Popen(ZERO_RETURN_CMD, None) self.assertEqual(p.wait(), 0) # Again with keyword arg p = subprocess.Popen(ZERO_RETURN_CMD, bufsize=None) self.assertEqual(p.wait(), 0) def _test_bufsize_equal_one(self, line, expected, universal_newlines): # subprocess may deadlock with bufsize=1, see issue #21332 with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.readline());" "sys.stdout.flush()"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, bufsize=1, universal_newlines=universal_newlines) as p: p.stdin.write(line) # expect that it flushes the line in text mode os.close(p.stdin.fileno()) # close it without flushing the buffer read_line = p.stdout.readline() with support.SuppressCrashReport(): try: p.stdin.close() except OSError: pass p.stdin = None self.assertEqual(p.returncode, 0) self.assertEqual(read_line, expected) def test_bufsize_equal_one_text_mode(self): # line is flushed in text mode with bufsize=1. # we should get the full line in return line = "line\n" self._test_bufsize_equal_one(line, line, universal_newlines=True) def test_bufsize_equal_one_binary_mode(self): # line is not flushed in binary mode with bufsize=1. 
# we should get empty response line = b'line' + os.linesep.encode() # assume ascii-based locale with self.assertWarnsRegex(RuntimeWarning, 'line buffering'): self._test_bufsize_equal_one(line, b'', universal_newlines=False) def test_leaking_fds_on_error(self): # see bug #5179: Popen leaks file descriptors to PIPEs if # the child fails to execute; this will eventually exhaust # the maximum number of open fds. 1024 seems a very common # value for that limit, but Windows has 2048, so we loop # 1024 times (each call leaked two fds). for i in range(1024): with self.assertRaises(NONEXISTING_ERRORS): subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def test_nonexisting_with_pipes(self): # bpo-30121: Popen with pipes must close properly pipes on error. # Previously, os.close() was called with a Windows handle which is not # a valid file descriptor. # # Run the test in a subprocess to control how the CRT reports errors # and to get stderr content. try: import msvcrt msvcrt.CrtSetReportMode except (AttributeError, ImportError): self.skipTest("need msvcrt.CrtSetReportMode") code = textwrap.dedent(f""" import msvcrt import subprocess cmd = {NONEXISTING_CMD!r} for report_type in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]: msvcrt.CrtSetReportMode(report_type, msvcrt.CRTDBG_MODE_FILE) msvcrt.CrtSetReportFile(report_type, msvcrt.CRTDBG_FILE_STDERR) try: subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError: pass """) cmd = [sys.executable, "-c", code] proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True) with proc: stderr = proc.communicate()[1] self.assertEqual(stderr, "") self.assertEqual(proc.returncode, 0) def test_double_close_on_error(self): # Issue #18851 fds = [] def open_fds(): for i in range(20): fds.extend(os.pipe()) time.sleep(0.001) t = threading.Thread(target=open_fds) t.start() try: with self.assertRaises(EnvironmentError): subprocess.Popen(NONEXISTING_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: t.join() exc = None for fd in fds: # If a double close occurred, some of those fds will # already have been closed by mistake, and os.close() # here will raise. try: os.close(fd) except OSError as e: exc = e if exc is not None: raise exc def test_threadsafe_wait(self): """Issue21291: Popen.wait() needs to be threadsafe for returncode.""" proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(12)']) self.assertEqual(proc.returncode, None) results = [] def kill_proc_timer_thread(): results.append(('thread-start-poll-result', proc.poll())) # terminate it from the thread and wait for the result. proc.kill() proc.wait() results.append(('thread-after-kill-and-wait', proc.returncode)) # this wait should be a no-op given the above. proc.wait() results.append(('thread-after-second-wait', proc.returncode)) # This is a timing sensitive test, the failure mode is # triggered when both the main thread and this thread are in # the wait() call at once. The delay here is to allow the # main thread to most likely be blocked in its wait() call. t = threading.Timer(0.2, kill_proc_timer_thread) t.start() if mswindows: expected_errorcode = 1 else: # Should be -9 because of the proc.kill() from the thread. expected_errorcode = -9 # Wait for the process to finish; the thread should kill it # long before it finishes on its own. Supplying a timeout # triggers a different code path for better coverage. 
proc.wait(timeout=support.SHORT_TIMEOUT) self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in wait from main thread") # This should be a no-op with no change in returncode. proc.wait() self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in second main wait.") t.join() # Ensure that all of the thread results are as expected. # When a race condition occurs in wait(), the returncode could # be set by the wrong thread that doesn't actually have it # leading to an incorrect value. self.assertEqual([('thread-start-poll-result', None), ('thread-after-kill-and-wait', expected_errorcode), ('thread-after-second-wait', expected_errorcode)], results) def test_issue8780(self): # Ensure that stdout is inherited from the parent # if stdout=PIPE is not used code = ';'.join(( 'import subprocess, sys', 'retcode = subprocess.call(' "[sys.executable, '-c', 'print(\"Hello World!\")'])", 'assert retcode == 0')) output = subprocess.check_output([sys.executable, '-c', code]) self.assertTrue(output.startswith(b'Hello World!'), ascii(output)) def test_handles_closed_on_exception(self): # If CreateProcess exits with an error, ensure the # duplicate output handles are released ifhandle, ifname = tempfile.mkstemp() ofhandle, ofname = tempfile.mkstemp() efhandle, efname = tempfile.mkstemp() try: subprocess.Popen (["*"], stdin=ifhandle, stdout=ofhandle, stderr=efhandle) except OSError: os.close(ifhandle) os.remove(ifname) os.close(ofhandle) os.remove(ofname) os.close(efhandle) os.remove(efname) self.assertFalse(os.path.exists(ifname)) self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) def test_communicate_epipe(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.communicate(b"x" * 2**20) def test_repr(self): path_cmd = pathlib.Path("my-tool.py") pathlib_cls = path_cmd.__class__.__name__ cases = [ ("ls", True, 123, ""), ('a' * 100, True, 0, ""), (["ls"], False, None, ""), (["ls", '--my-opts', 'a' * 100], False, None, ""), (path_cmd, False, 7, f"") ] with unittest.mock.patch.object(subprocess.Popen, '_execute_child'): for cmd, shell, code, sx in cases: p = subprocess.Popen(cmd, shell=shell) p.returncode = code self.assertEqual(repr(p), sx) def test_communicate_epipe_only_stdin(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) p.wait() p.communicate(b"x" * 2**20) @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), "Requires signal.SIGUSR1") @unittest.skipUnless(hasattr(os, 'kill'), "Requires os.kill") @unittest.skipUnless(hasattr(os, 'getppid'), "Requires os.getppid") def test_communicate_eintr(self): # Issue #12493: communicate() should handle EINTR def handler(signum, frame): pass old_handler = signal.signal(signal.SIGUSR1, handler) self.addCleanup(signal.signal, signal.SIGUSR1, old_handler) args = [sys.executable, "-c", 'import os, signal;' 'os.kill(os.getppid(), signal.SIGUSR1)'] for stream in ('stdout', 'stderr'): kw = {stream: subprocess.PIPE} with subprocess.Popen(args, **kw) as process: # communicate() will be interrupted by SIGUSR1 process.communicate() # This test is Linux-ish specific for simplicity to at least have # some coverage. It is not a platform specific bug. 
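    # Illustration (not executed as part of this suite): the /proc-based
    # descriptor accounting idea the next test relies on. The helper name is
    # hypothetical and this only works on Linux.
    #
    #     import os
    #     def list_open_fds():
    #         return set(os.listdir('/proc/%d/fd' % os.getpid()))
    #     before = list_open_fds()
    #     # ... perform the operation that must not leak ...
    #     assert list_open_fds() == before, "file descriptors leaked"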
@unittest.skipUnless(os.path.isdir('/proc/%d/fd' % os.getpid()), "Linux specific") def test_failed_child_execute_fd_leak(self): """Test for the fork() failure fd leak reported in issue16327.""" fd_directory = '/proc/%d/fd' % os.getpid() fds_before_popen = os.listdir(fd_directory) with self.assertRaises(PopenTestException): PopenExecuteChildRaises( ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # NOTE: This test doesn't verify that the real _execute_child # does not close the file descriptors itself on the way out # during an exception. Code inspection has confirmed that. fds_after_exception = os.listdir(fd_directory) self.assertEqual(fds_before_popen, fds_after_exception) @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_includes_filename(self): with self.assertRaises(FileNotFoundError) as c: subprocess.call(['/opt/nonexistent_binary', 'with', 'some', 'args']) self.assertEqual(c.exception.filename, '/opt/nonexistent_binary') @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_with_bad_cwd(self): with self.assertRaises(FileNotFoundError) as c: subprocess.Popen(['exit', '0'], cwd='/some/nonexistent/directory') self.assertEqual(c.exception.filename, '/some/nonexistent/directory') def test_class_getitems(self): self.assertIsInstance(subprocess.Popen[bytes], types.GenericAlias) self.assertIsInstance(subprocess.CompletedProcess[str], types.GenericAlias) class RunFuncTestCase(BaseTestCase): def run_python(self, code, **kwargs): """Run Python code in a subprocess using subprocess.run""" argv = [sys.executable, "-c", code] return subprocess.run(argv, **kwargs) def test_returncode(self): # call() function with sequence argument cp = self.run_python("import sys; sys.exit(47)") self.assertEqual(cp.returncode, 47) with self.assertRaises(subprocess.CalledProcessError): cp.check_returncode() def test_check(self): with self.assertRaises(subprocess.CalledProcessError) as c: self.run_python("import sys; sys.exit(47)", check=True) self.assertEqual(c.exception.returncode, 47) def test_check_zero(self): # check_returncode shouldn't raise when returncode is zero cp = subprocess.run(ZERO_RETURN_CMD, check=True) self.assertEqual(cp.returncode, 0) def test_timeout(self): # run() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.run waits for the # child. 
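    # Illustration (not executed as part of this suite): the contract being
    # tested, sketched with a hypothetical long-running child. On timeout,
    # run() kills the child before raising, so no zombie is left behind.
    #
    #     import subprocess, sys
    #     try:
    #         subprocess.run([sys.executable, '-c', 'import time; time.sleep(60)'],
    #                        timeout=0.5)
    #     except subprocess.TimeoutExpired:
    #         pass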
with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: output = self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: cp = self.run_python(( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) def test_run_with_pathlike_path(self): # bpo-31961: test run(pathlike_object) # the name of a command that can be run without # any arguments that exit fast prog = 'tree.com' if mswindows else 'ls' path = shutil.which(prog) if path is None: self.skipTest(f'{prog} required for this test') path = FakePath(path) res = subprocess.run(path, stdout=subprocess.DEVNULL) self.assertEqual(res.returncode, 0) with self.assertRaises(TypeError): subprocess.run(path, stdout=subprocess.DEVNULL, shell=True) def test_run_with_bytes_path_and_arguments(self): # bpo-31961: test run([bytes_object, b'additional arguments']) path = os.fsencode(sys.executable) args = [path, '-c', b'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_run_with_pathlike_path_and_arguments(self): # bpo-31961: test run([pathlike_object, 'additional arguments']) path = FakePath(sys.executable) args = [path, '-c', 'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_capture_output(self): cp = self.run_python(("import sys;" "sys.stdout.write('BDFL'); " "sys.stderr.write('FLUFL')"), capture_output=True) self.assertIn(b'BDFL', cp.stdout) self.assertIn(b'FLUFL', cp.stderr) def test_stdout_with_capture_output_arg(self): # run() refuses to accept 'stdout' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with 
self.assertRaises(ValueError, msg=("Expected ValueError when stdout and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stdout=tf) self.assertIn('stdout', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) def test_stderr_with_capture_output_arg(self): # run() refuses to accept 'stderr' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stderr and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stderr=tf) self.assertIn('stderr', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. @unittest.skipIf(mswindows, "requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): """Output capturing after a timeout mustn't hang forever on open filehandles.""" before_secs = time.monotonic() try: subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. except subprocess.TimeoutExpired as exc: after_secs = time.monotonic() stacks = traceback.format_exc() # assertRaises doesn't give this. else: self.fail("TimeoutExpired not raised.") self.assertLess(after_secs - before_secs, 1.5, msg="TimeoutExpired was delayed! Bad traceback:\n```\n" f"{stacks}```") @unittest.skipIf(not sysconfig.get_config_var("HAVE_VFORK"), "vfork() not enabled by configure.") def test__use_vfork(self): # Attempts code coverage within _posixsubprocess.c on the code that # probes the subprocess module for the existence and value of this # attribute in 3.10.5. self.assertTrue(subprocess._USE_VFORK) # The default value regardless. with mock.patch.object(subprocess, "_USE_VFORK", False): self.assertEqual(self.run_python("pass").returncode, 0, msg="False _USE_VFORK failed") class RaisingBool: def __bool__(self): raise RuntimeError("force PyObject_IsTrue to return -1") with mock.patch.object(subprocess, "_USE_VFORK", RaisingBool()): self.assertEqual(self.run_python("pass").returncode, 0, msg="odd bool()-error _USE_VFORK failed") del subprocess._USE_VFORK self.assertEqual(self.run_python("pass").returncode, 0, msg="lack of a _USE_VFORK attribute failed") def _get_test_grp_name(): for name_group in ('staff', 'nogroup', 'grp', 'nobody', 'nfsnobody'): if grp: try: grp.getgrnam(name_group) except KeyError: continue return name_group else: raise unittest.SkipTest('No identified group name to use for this test on this platform.') @unittest.skipIf(mswindows, "POSIX specific tests") class POSIXProcessTestCase(BaseTestCase): def setUp(self): super().setUp() self._nonexistent_dir = "/_this/pa.th/does/not/exist" def _get_chdir_exception(self): try: os.chdir(self._nonexistent_dir) except OSError as e: # This avoids hard coding the errno value or the OS perror() # string and instead capture the exception that we want to see # below for comparison. desired_exception = e else: self.fail("chdir to nonexistent directory %s succeeded." 
% self._nonexistent_dir) return desired_exception def test_exception_cwd(self): """Test error in the child raised in the parent for a bad cwd.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], cwd=self._nonexistent_dir) except OSError as e: # Test that the child process chdir failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_executable(self): """Test error in the child raised in the parent for a bad executable.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], executable=self._nonexistent_dir) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_args_0(self): """Test error in the child raised in the parent for a bad args[0].""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([self._nonexistent_dir, "-c", ""]) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) # We mock the __del__ method for Popen in the next two tests # because it does cleanup based on the pid returned by fork_exec # along with issuing a resource warning if it still exists. Since # we don't actually spawn a process in these tests we can forego # the destructor. 
An alternative would be to set _child_created to # False before the destructor is called but there is no easy way # to do that class PopenNoDestructor(subprocess.Popen): def __del__(self): pass @mock.patch("subprocess._posixsubprocess.fork_exec") def test_exception_errpipe_normal(self, fork_exec): """Test error passing done through errpipe_write in the good case""" def proper_error(*args): errpipe_write = args[13] # Write the hex for the error code EISDIR: 'is a directory' err_code = '{:x}'.format(errno.EISDIR).encode() os.write(errpipe_write, b"OSError:" + err_code + b":") return 0 fork_exec.side_effect = proper_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(IsADirectoryError): self.PopenNoDestructor(["non_existent_command"]) @mock.patch("subprocess._posixsubprocess.fork_exec") def test_exception_errpipe_bad_data(self, fork_exec): """Test error passing done through errpipe_write where its not in the expected format""" error_data = b"\xFF\x00\xDE\xAD" def bad_error(*args): errpipe_write = args[13] # Anything can be in the pipe, no assumptions should # be made about its encoding, so we'll write some # arbitrary hex bytes to test it out os.write(errpipe_write, error_data) return 0 fork_exec.side_effect = bad_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(subprocess.SubprocessError) as e: self.PopenNoDestructor(["non_existent_command"]) self.assertIn(repr(error_data), str(e.exception)) @unittest.skipIf(not os.path.exists('/proc/self/status'), "need /proc/self/status") def test_restore_signals(self): # Blindly assume that cat exists on systems with /proc/self/status... default_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=False) for line in default_proc_status.splitlines(): if line.startswith(b'SigIgn'): default_sig_ign_mask = line break else: self.skipTest("SigIgn not found in /proc/self/status.") restored_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=True) for line in restored_proc_status.splitlines(): if line.startswith(b'SigIgn'): restored_sig_ign_mask = line break self.assertNotEqual(default_sig_ign_mask, restored_sig_ign_mask, msg="restore_signals=True should've unblocked " "SIGPIPE and friends.") def test_start_new_session(self): # For code coverage of calling setsid(). We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getsid(0))"], start_new_session=True) except OSError as e: if e.errno != errno.EPERM: raise else: parent_sid = os.getsid(0) child_sid = int(output) self.assertNotEqual(parent_sid, child_sid) @unittest.skipUnless(hasattr(os, 'setreuid'), 'no setreuid on platform') def test_user(self): # For code coverage of the user parameter. We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. 
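    # Illustration (not executed as part of this suite): a sketch of the user=
    # keyword checked below. The uid value is hypothetical and switching to it
    # normally requires elevated privileges.
    #
    #     import subprocess, sys
    #     out = subprocess.check_output(
    #         [sys.executable, '-c', 'import os; print(os.getuid())'],
    #         user=65534)          # uid change happens in the child before exec()
    #     assert int(out) == 65534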
uid = os.geteuid() test_users = [65534 if uid != 65534 else 65533, uid] name_uid = "nobody" if sys.platform != 'darwin' else "unknown" if pwd is not None: try: pwd.getpwnam(name_uid) test_users.append(name_uid) except KeyError: # unknown user name name_uid = None for user in test_users: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(user=user, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getuid())"], user=user, close_fds=close_fds) except PermissionError: # (EACCES, EPERM) pass except OSError as e: if e.errno not in (errno.EACCES, errno.EPERM): raise else: if isinstance(user, str): user_uid = pwd.getpwnam(user).pw_uid else: user_uid = user child_user = int(output) self.assertEqual(child_user, user_uid) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, user=2**64) if pwd is None and name_uid is not None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=name_uid) @unittest.skipIf(hasattr(os, 'setreuid'), 'setreuid() available on platform') def test_user_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=65535) @unittest.skipUnless(hasattr(os, 'setregid'), 'no setregid() on platform') def test_group(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) for group in group_list + [gid]: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(group=group, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getgid())"], group=group, close_fds=close_fds) except PermissionError: # (EACCES, EPERM) pass else: if isinstance(group, str): group_gid = grp.getgrnam(group).gr_gid else: group_gid = group child_group = int(output) self.assertEqual(child_group, group_gid) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, group=2**64) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=name_group) @unittest.skipIf(hasattr(os, 'setregid'), 'setregid() available on platform') def test_group_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=65535) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() perm_error = False if grp is not None: group_list.append(name_group) try: output = subprocess.check_output( [sys.executable, "-c", "import os, sys, json; json.dump(os.getgroups(), sys.stdout)"], extra_groups=group_list) except OSError as ex: if ex.errno != errno.EPERM: raise perm_error = True else: parent_groups = os.getgroups() child_groups = json.loads(output) if grp is not None: desired_gids = [grp.getgrnam(g).gr_gid if isinstance(g, str) else g for g in group_list] else: desired_gids = group_list if perm_error: self.assertEqual(set(child_groups), set(parent_groups)) else: self.assertEqual(set(desired_gids), set(child_groups)) # make sure we bomb on negative values with 
self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[-1]) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, extra_groups=[2**64]) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[name_group]) @unittest.skipIf(hasattr(os, 'setgroups'), 'setgroups() available on platform') def test_extra_groups_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[]) @unittest.skipIf(mswindows or not hasattr(os, 'umask'), 'POSIX umask() is not available.') def test_umask(self): tmpdir = None try: tmpdir = tempfile.mkdtemp() name = os.path.join(tmpdir, "beans") # We set an unusual umask in the child so as a unique mode # for us to test the child's touched file for. subprocess.check_call( [sys.executable, "-c", f"open({name!r}, 'w').close()"], umask=0o053) # Ignore execute permissions entirely in our test, # filesystems could be mounted to ignore or force that. st_mode = os.stat(name).st_mode & 0o666 expected_mode = 0o624 self.assertEqual(expected_mode, st_mode, msg=f'{oct(expected_mode)} != {oct(st_mode)}') finally: if tmpdir is not None: shutil.rmtree(tmpdir) def test_run_abort(self): # returncode handles signal termination with support.SuppressCrashReport(): p = subprocess.Popen([sys.executable, "-c", 'import os; os.abort()']) p.wait() self.assertEqual(-p.returncode, signal.SIGABRT) def test_CalledProcessError_str_signal(self): err = subprocess.CalledProcessError(-int(signal.SIGABRT), "fake cmd") error_string = str(err) # We're relying on the repr() of the signal.Signals intenum to provide # the word signal, the signal name and the numeric value. self.assertIn("signal", error_string.lower()) # We're not being specific about the signal name as some signals have # multiple names and which name is revealed can vary. self.assertIn("SIG", error_string) self.assertIn(str(signal.SIGABRT), error_string) def test_CalledProcessError_str_unknown_signal(self): err = subprocess.CalledProcessError(-9876543, "fake cmd") error_string = str(err) self.assertIn("unknown signal 9876543.", error_string) def test_CalledProcessError_str_non_zero(self): err = subprocess.CalledProcessError(2, "fake cmd") error_string = str(err) self.assertIn("non-zero exit status 2.", error_string) def test_preexec(self): # DISCLAIMER: Setting environment variables is *not* a good use # of a preexec_fn. This is merely a test. 
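    # Illustration (not executed as part of this suite): a more typical
    # preexec_fn sketch; the callable runs in the child between fork() and
    # exec(), so it should stay small and avoid touching threads or locks.
    #
    #     import os, subprocess, sys
    #     subprocess.call([sys.executable, '-c', ''],
    #                     preexec_fn=lambda: os.nice(5))   # deprioritise the child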
p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, preexec_fn=lambda: os.putenv("FRUIT", "apple")) with p: self.assertEqual(p.stdout.read(), b"apple") def test_preexec_exception(self): def raise_it(): raise ValueError("What if two swallows carried a coconut?") try: p = subprocess.Popen([sys.executable, "-c", ""], preexec_fn=raise_it) except subprocess.SubprocessError as e: self.assertTrue( subprocess._posixsubprocess, "Expected a ValueError from the preexec_fn") except ValueError as e: self.assertIn("coconut", e.args[0]) else: self.fail("Exception raised by preexec_fn did not make it " "to the parent process.") class _TestExecuteChildPopen(subprocess.Popen): """Used to test behavior at the end of _execute_child.""" def __init__(self, testcase, *args, **kwargs): self._testcase = testcase subprocess.Popen.__init__(self, *args, **kwargs) def _execute_child(self, *args, **kwargs): try: subprocess.Popen._execute_child(self, *args, **kwargs) finally: # Open a bunch of file descriptors and verify that # none of them are the same as the ones the Popen # instance is using for stdin/stdout/stderr. devzero_fds = [os.open("/dev/zero", os.O_RDONLY) for _ in range(8)] try: for fd in devzero_fds: self._testcase.assertNotIn( fd, (self.stdin.fileno(), self.stdout.fileno(), self.stderr.fileno()), msg="At least one fd was closed early.") finally: for fd in devzero_fds: os.close(fd) @unittest.skipIf(not os.path.exists("/dev/zero"), "/dev/zero required.") def test_preexec_errpipe_does_not_double_close_pipes(self): """Issue16140: Don't double close pipes on preexec error.""" def raise_it(): raise subprocess.SubprocessError( "force the _execute_child() errpipe_data path.") with self.assertRaises(subprocess.SubprocessError): self._TestExecuteChildPopen( self, ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=raise_it) def test_preexec_gc_module_failure(self): # This tests the code that disables garbage collection if the child # process will execute any Python. 
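    # Illustration (not executed as part of this suite): a sketch, not the real
    # implementation, of the pattern being verified here; the collector is
    # paused around fork() and only re-enabled if it was enabled beforehand.
    #
    #     import gc
    #     was_enabled = gc.isenabled()
    #     gc.disable()
    #     try:
    #         pass                      # fork()/exec() would happen here
    #     finally:
    #         if was_enabled:
    #             gc.enable()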
enabled = gc.isenabled() try: gc.disable() self.assertFalse(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertFalse(gc.isenabled(), "Popen enabled gc when it shouldn't.") gc.enable() self.assertTrue(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertTrue(gc.isenabled(), "Popen left gc disabled.") finally: if not enabled: gc.disable() @unittest.skipIf( sys.platform == 'darwin', 'setrlimit() seems to fail on OS X') def test_preexec_fork_failure(self): # The internal code did not preserve the previous exception when # re-enabling garbage collection try: from resource import getrlimit, setrlimit, RLIMIT_NPROC except ImportError as err: self.skipTest(err) # RLIMIT_NPROC is specific to Linux and BSD limits = getrlimit(RLIMIT_NPROC) [_, hard] = limits setrlimit(RLIMIT_NPROC, (0, hard)) self.addCleanup(setrlimit, RLIMIT_NPROC, limits) try: subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) except BlockingIOError: # Forking should raise EAGAIN, translated to BlockingIOError pass else: self.skipTest('RLIMIT_NPROC had no effect; probably superuser') def test_args_string(self): # args is a string fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) p = subprocess.Popen(fname) p.wait() os.remove(fname) self.assertEqual(p.returncode, 47) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], startupinfo=47) self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], creationflags=47) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen(["echo $FRUIT"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen("echo $FRUIT", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_call_string(self): # call() function with string argument on UNIX fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) rc = subprocess.call(fname) os.remove(fname) self.assertEqual(rc, 47) def test_specific_shell(self): # Issue #9265: Incorrect name passed as arg[0]. shells = [] for prefix in ['/bin', '/usr/bin/', '/usr/local/bin']: for name in ['bash', 'ksh']: sh = os.path.join(prefix, name) if os.path.isfile(sh): shells.append(sh) if not shells: # Will probably work for any shell but csh. self.skipTest("bash or ksh required for this test") sh = '/bin/sh' if os.path.isfile(sh) and not os.path.islink(sh): # Test will fail if /bin/sh is a symlink to csh. shells.append(sh) for sh in shells: p = subprocess.Popen("echo $0", executable=sh, shell=True, stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) def _kill_process(self, method, *args): # Do not inherit file handles from the parent. 
# It should fix failures on some platforms. # Also set the SIGINT handler to the default to make sure it's not # being ignored (some tests rely on that.) old_handler = signal.signal(signal.SIGINT, signal.default_int_handler) try: p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: signal.signal(signal.SIGINT, old_handler) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) return p @unittest.skipIf(sys.platform.startswith(('netbsd', 'openbsd')), "Due to known OS bug (issue #16762)") def _kill_dead_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) p.communicate() def test_send_signal(self): p = self._kill_process('send_signal', signal.SIGINT) _, stderr = p.communicate() self.assertIn(b'KeyboardInterrupt', stderr) self.assertNotEqual(p.wait(), 0) def test_kill(self): p = self._kill_process('kill') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGKILL) def test_terminate(self): p = self._kill_process('terminate') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGTERM) def test_send_signal_dead(self): # Sending a signal to a dead process self._kill_dead_process('send_signal', signal.SIGINT) def test_kill_dead(self): # Killing a dead process self._kill_dead_process('kill') def test_terminate_dead(self): # Terminating a dead process self._kill_dead_process('terminate') def _save_fds(self, save_fds): fds = [] for fd in save_fds: inheritable = os.get_inheritable(fd) saved = os.dup(fd) fds.append((fd, saved, inheritable)) return fds def _restore_fds(self, fds): for fd, saved, inheritable in fds: os.dup2(saved, fd, inheritable=inheritable) os.close(saved) def check_close_std_fds(self, fds): # Issue #9905: test that subprocess pipes still work properly with # some standard fds closed stdin = 0 saved_fds = self._save_fds(fds) for fd, saved, inheritable in saved_fds: if fd == 0: stdin = saved break try: for fd in fds: os.close(fd) out, err = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() self.assertEqual(out, b'apple') self.assertEqual(err, b'orange') finally: self._restore_fds(saved_fds) def test_close_fd_0(self): self.check_close_std_fds([0]) def test_close_fd_1(self): self.check_close_std_fds([1]) def test_close_fd_2(self): self.check_close_std_fds([2]) def test_close_fds_0_1(self): self.check_close_std_fds([0, 1]) def test_close_fds_0_2(self): self.check_close_std_fds([0, 2]) def test_close_fds_1_2(self): self.check_close_std_fds([1, 2]) def test_close_fds_0_1_2(self): # Issue #10806: test that subprocess pipes still work properly with # all standard fds closed. 
self.check_close_std_fds([0, 1, 2]) def test_small_errpipe_write_fd(self): """Issue #15798: Popen should work when stdio fds are available.""" new_stdin = os.dup(0) new_stdout = os.dup(1) try: os.close(0) os.close(1) # Side test: if errpipe_write fails to have its CLOEXEC # flag set this should cause the parent to think the exec # failed. Extremely unlikely: everyone supports CLOEXEC. subprocess.Popen([ sys.executable, "-c", "print('AssertionError:0:CLOEXEC failure.')"]).wait() finally: # Restore original stdin and stdout os.dup2(new_stdin, 0) os.dup2(new_stdout, 1) os.close(new_stdin) os.close(new_stdout) def test_remapping_std_fds(self): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] try: temp_fds = [fd for fd, fname in temps] # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # write some data to what will become stdin, and rewind os.write(temp_fds[1], b"STDIN") os.lseek(temp_fds[1], 0, 0) # move the standard file descriptors out of the way saved_fds = self._save_fds(range(3)) try: # duplicate the file objects over the standard fd's for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # now use those files in the "wrong" order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=temp_fds[1], stdout=temp_fds[2], stderr=temp_fds[0]) p.wait() finally: self._restore_fds(saved_fds) for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(temp_fds[2], 1024) err = os.read(temp_fds[0], 1024).strip() self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) def check_swap_fds(self, stdin_no, stdout_no, stderr_no): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] temp_fds = [fd for fd, fname in temps] try: # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # save a copy of the standard file descriptors saved_fds = self._save_fds(range(3)) try: # duplicate the temp files over the standard fd's 0, 1, 2 for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # write some data to what will become stdin, and rewind os.write(stdin_no, b"STDIN") os.lseek(stdin_no, 0, 0) # now use those files in the given order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=stdin_no, stdout=stdout_no, stderr=stderr_no) p.wait() for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(stdout_no, 1024) err = os.read(stderr_no, 1024).strip() finally: self._restore_fds(saved_fds) self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) # When duping fds, if there arises a situation where one of the fds is # either 0, 1 or 2, it is possible that it is overwritten (#12607). # This tests all combinations of this. 
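    # Illustration (not executed as part of this suite): the user-level form of
    # the std-fd redirection exercised by the surrounding tests, sketched with
    # temporary files.
    #
    #     import subprocess, sys, tempfile
    #     with tempfile.TemporaryFile() as out, tempfile.TemporaryFile() as err:
    #         subprocess.Popen([sys.executable, '-c',
    #                           'import sys; print("o"); print("e", file=sys.stderr)'],
    #                          stdout=out, stderr=err).wait()
    #         out.seek(0); err.seek(0)
    #         assert out.read().strip() == b'o'
    #         assert err.read().strip() == b'e'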
def test_swap_fds(self): self.check_swap_fds(0, 1, 2) self.check_swap_fds(0, 2, 1) self.check_swap_fds(1, 0, 2) self.check_swap_fds(1, 2, 0) self.check_swap_fds(2, 0, 1) self.check_swap_fds(2, 1, 0) def _check_swap_std_fds_with_one_closed(self, from_fds, to_fds): saved_fds = self._save_fds(range(3)) try: for from_fd in from_fds: with tempfile.TemporaryFile() as f: os.dup2(f.fileno(), from_fd) fd_to_close = (set(range(3)) - set(from_fds)).pop() os.close(fd_to_close) arg_names = ['stdin', 'stdout', 'stderr'] kwargs = {} for from_fd, to_fd in zip(from_fds, to_fds): kwargs[arg_names[to_fd]] = from_fd code = textwrap.dedent(r''' import os, sys skipped_fd = int(sys.argv[1]) for fd in range(3): if fd != skipped_fd: os.write(fd, str(fd).encode('ascii')) ''') skipped_fd = (set(range(3)) - set(to_fds)).pop() rc = subprocess.call([sys.executable, '-c', code, str(skipped_fd)], **kwargs) self.assertEqual(rc, 0) for from_fd, to_fd in zip(from_fds, to_fds): os.lseek(from_fd, 0, os.SEEK_SET) read_bytes = os.read(from_fd, 1024) read_fds = list(map(int, read_bytes.decode('ascii'))) msg = textwrap.dedent(f""" When testing {from_fds} to {to_fds} redirection, parent descriptor {from_fd} got redirected to descriptor(s) {read_fds} instead of descriptor {to_fd}. """) self.assertEqual([to_fd], read_fds, msg) finally: self._restore_fds(saved_fds) # Check that subprocess can remap std fds correctly even # if one of them is closed (#32844). def test_swap_std_fds_with_one_closed(self): for from_fds in itertools.combinations(range(3), 2): for to_fds in itertools.permutations(range(3), 2): self._check_swap_std_fds_with_one_closed(from_fds, to_fds) def test_surrogates_error_message(self): def prepare(): raise ValueError("surrogate:\uDCff") try: subprocess.call( ZERO_RETURN_CMD, preexec_fn=prepare) except ValueError as err: # Pure Python implementations keeps the message self.assertIsNone(subprocess._posixsubprocess) self.assertEqual(str(err), "surrogate:\uDCff") except subprocess.SubprocessError as err: # _posixsubprocess uses a default message self.assertIsNotNone(subprocess._posixsubprocess) self.assertEqual(str(err), "Exception occurred in preexec_fn.") else: self.fail("Expected ValueError or subprocess.SubprocessError") def test_undecodable_env(self): for key, value in (('test', 'abc\uDCFF'), ('test\uDCFF', '42')): encoded_value = value.encode("ascii", "surrogateescape") # test str with surrogates script = "import os; print(ascii(os.getenv(%s)))" % repr(key) env = os.environ.copy() env[key] = value # Use C locale to get ASCII for the locale encoding to force # surrogate-escaping of \xFF in the child process env['LC_ALL'] = 'C' decoded_value = value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(decoded_value)) # test bytes key = key.encode("ascii", "surrogateescape") script = "import os; print(ascii(os.getenvb(%s)))" % repr(key) env = os.environ.copy() env[key] = encoded_value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(encoded_value)) def test_bytes_program(self): abs_program = os.fsencode(ZERO_RETURN_CMD[0]) args = list(ZERO_RETURN_CMD[1:]) path, program = os.path.split(ZERO_RETURN_CMD[0]) program = os.fsencode(program) # absolute bytes path exitcode = subprocess.call([abs_program]+args) self.assertEqual(exitcode, 0) # absolute bytes path as a string cmd = b"'%s' %s" % (abs_program, " 
".join(args).encode("utf-8")) exitcode = subprocess.call(cmd, shell=True) self.assertEqual(exitcode, 0) # bytes program, unicode PATH env = os.environ.copy() env["PATH"] = path exitcode = subprocess.call([program]+args, env=env) self.assertEqual(exitcode, 0) # bytes program, bytes PATH envb = os.environb.copy() envb[b"PATH"] = os.fsencode(path) exitcode = subprocess.call([program]+args, env=envb) self.assertEqual(exitcode, 0) def test_pipe_cloexec(self): sleeper = support.findfile("input_reader.py", subdir="subprocessdata") fd_status = support.findfile("fd_status.py", subdir="subprocessdata") p1 = subprocess.Popen([sys.executable, sleeper], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False) self.addCleanup(p1.communicate, b'') p2 = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, error = p2.communicate() result_fds = set(map(int, output.split(b','))) unwanted_fds = set([p1.stdin.fileno(), p1.stdout.fileno(), p1.stderr.fileno()]) self.assertFalse(result_fds & unwanted_fds, "Expected no fds from %r to be open in child, " "found %r" % (unwanted_fds, result_fds & unwanted_fds)) def test_pipe_cloexec_real_tools(self): qcat = support.findfile("qcat.py", subdir="subprocessdata") qgrep = support.findfile("qgrep.py", subdir="subprocessdata") subdata = b'zxcvbn' data = subdata * 4 + b'\n' p1 = subprocess.Popen([sys.executable, qcat], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=False) p2 = subprocess.Popen([sys.executable, qgrep, subdata], stdin=p1.stdout, stdout=subprocess.PIPE, close_fds=False) self.addCleanup(p1.wait) self.addCleanup(p2.wait) def kill_p1(): try: p1.terminate() except ProcessLookupError: pass def kill_p2(): try: p2.terminate() except ProcessLookupError: pass self.addCleanup(kill_p1) self.addCleanup(kill_p2) p1.stdin.write(data) p1.stdin.close() readfiles, ignored1, ignored2 = select.select([p2.stdout], [], [], 10) self.assertTrue(readfiles, "The child hung") self.assertEqual(p2.stdout.read(), data) p1.stdout.close() p2.stdout.close() def test_close_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) open_fds = set(fds) # add a bunch more fds for _ in range(9): fd = os.open(os.devnull, os.O_RDONLY) self.addCleanup(os.close, fd) open_fds.add(fd) for fd in open_fds: os.set_inheritable(fd, True) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertEqual(remaining_fds & open_fds, open_fds, "Some fds were closed") p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse(remaining_fds & open_fds, "Some fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") # Keep some of the fd's we opened open in the subprocess. # This tests _posixsubprocess.c's proper handling of fds_to_keep. 
fds_to_keep = set(open_fds.pop() for _ in range(8)) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=fds_to_keep) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse((remaining_fds - fds_to_keep) & open_fds, "Some fds not in pass_fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") @unittest.skipIf(sys.platform.startswith("freebsd") and os.stat("/dev").st_dev == os.stat("/dev/fd").st_dev, "Requires fdescfs mounted on /dev/fd on FreeBSD.") def test_close_fds_when_max_fd_is_lowered(self): """Confirm that issue21618 is fixed (may fail under valgrind).""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # This launches the meat of the test in a child process to # avoid messing with the larger unittest processes maximum # number of file descriptors. # This process launches: # +--> Process that lowers its RLIMIT_NOFILE aftr setting up # a bunch of high open fds above the new lower rlimit. # Those are reported via stdout before launching a new # process with close_fds=False to run the actual test: # +--> The TEST: This one launches a fd_status.py # subprocess with close_fds=True so we can find out if # any of the fds above the lowered rlimit are still open. p = subprocess.Popen([sys.executable, '-c', textwrap.dedent( ''' import os, resource, subprocess, sys, textwrap open_fds = set() # Add a bunch more fds to pass down. for _ in range(40): fd = os.open(os.devnull, os.O_RDONLY) open_fds.add(fd) # Leave a two pairs of low ones available for use by the # internal child error pipe and the stdout pipe. # We also leave 10 more open as some Python buildbots run into # "too many open files" errors during the test if we do not. for fd in sorted(open_fds)[:14]: os.close(fd) open_fds.remove(fd) for fd in open_fds: #self.addCleanup(os.close, fd) os.set_inheritable(fd, True) max_fd_open = max(open_fds) # Communicate the open_fds to the parent unittest.TestCase process. print(','.join(map(str, sorted(open_fds)))) sys.stdout.flush() rlim_cur, rlim_max = resource.getrlimit(resource.RLIMIT_NOFILE) try: # 29 is lower than the highest fds we are leaving open. resource.setrlimit(resource.RLIMIT_NOFILE, (29, rlim_max)) # Launch a new Python interpreter with our low fd rlim_cur that # inherits open fds above that limit. It then uses subprocess # with close_fds=True to get a report of open fds in the child. # An explicit list of fds to check is passed to fd_status.py as # letting fd_status rely on its default logic would miss the # fds above rlim_cur as it normally only checks up to that limit. 
subprocess.Popen( [sys.executable, '-c', textwrap.dedent(""" import subprocess, sys subprocess.Popen([sys.executable, %r] + [str(x) for x in range({max_fd})], close_fds=True).wait() """.format(max_fd=max_fd_open+1))], close_fds=False).wait() finally: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_cur, rlim_max)) ''' % fd_status)], stdout=subprocess.PIPE) output, unused_stderr = p.communicate() output_lines = output.splitlines() self.assertEqual(len(output_lines), 2, msg="expected exactly two lines of output:\n%r" % output) opened_fds = set(map(int, output_lines[0].strip().split(b','))) remaining_fds = set(map(int, output_lines[1].strip().split(b','))) self.assertFalse(remaining_fds & opened_fds, msg="Some fds were left open.") # Mac OS X Tiger (10.4) has a kernel bug: sometimes, the file # descriptor of a pipe closed in the parent process is valid in the # child process according to fstat(), but the mode of the file # descriptor is invalid, and read or write raise an error. @support.requires_mac_ver(10, 5) def test_pass_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") open_fds = set() for x in range(5): fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) os.set_inheritable(fds[0], True) os.set_inheritable(fds[1], True) open_fds.update(fds) for fd in open_fds: p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=(fd, )) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) to_be_closed = open_fds - {fd} self.assertIn(fd, remaining_fds, "fd to be passed not passed") self.assertFalse(remaining_fds & to_be_closed, "fd to be closed passed") # pass_fds overrides close_fds with a warning. with self.assertWarns(RuntimeWarning) as context: self.assertFalse(subprocess.call( ZERO_RETURN_CMD, close_fds=False, pass_fds=(fd, ))) self.assertIn('overriding close_fds', str(context.warning)) def test_pass_fds_inheritable(self): script = support.findfile("fd_status.py", subdir="subprocessdata") inheritable, non_inheritable = os.pipe() self.addCleanup(os.close, inheritable) self.addCleanup(os.close, non_inheritable) os.set_inheritable(inheritable, True) os.set_inheritable(non_inheritable, False) pass_fds = (inheritable, non_inheritable) args = [sys.executable, script] args += list(map(str, pass_fds)) p = subprocess.Popen(args, stdout=subprocess.PIPE, close_fds=True, pass_fds=pass_fds) output, ignored = p.communicate() fds = set(map(int, output.split(b','))) # the inheritable file descriptor must be inherited, so its inheritable # flag must be set in the child process after fork() and before exec() self.assertEqual(fds, set(pass_fds), "output=%a" % output) # inheritable flag must not be changed in the parent process self.assertEqual(os.get_inheritable(inheritable), True) self.assertEqual(os.get_inheritable(non_inheritable), False) # bpo-32270: Ensure that descriptors specified in pass_fds # are inherited even if they are used in redirections. # Contributed by @izbyshev. 
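    # Illustration (not executed as part of this suite): the inheritance rules
    # the next test depends on. Since Python 3.4 (PEP 446) descriptors are
    # non-inheritable by default; pass_fds and std* redirections opt them back in.
    #
    #     import os
    #     r, w = os.pipe()
    #     assert os.get_inheritable(r) is False    # default under PEP 446
    #     os.set_inheritable(r, True)              # explicit opt-in for exec'd children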
def test_pass_fds_redirected(self): """Regression test for https://bugs.python.org/issue32270.""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") pass_fds = [] for _ in range(2): fd = os.open(os.devnull, os.O_RDWR) self.addCleanup(os.close, fd) pass_fds.append(fd) stdout_r, stdout_w = os.pipe() self.addCleanup(os.close, stdout_r) self.addCleanup(os.close, stdout_w) pass_fds.insert(1, stdout_w) with subprocess.Popen([sys.executable, fd_status], stdin=pass_fds[0], stdout=pass_fds[1], stderr=pass_fds[2], close_fds=True, pass_fds=pass_fds): output = os.read(stdout_r, 1024) fds = {int(num) for num in output.split(b',')} self.assertEqual(fds, {0, 1, 2} | frozenset(pass_fds), f"output={output!a}") def test_stdout_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stdin=inout) p.wait() def test_stdout_stderr_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stderr=inout) p.wait() def test_stderr_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stderr=inout, stdin=inout) p.wait() def test_wait_when_sigchild_ignored(self): # NOTE: sigchild_ignore.py may not be an effective test on all OSes. sigchild_ignore = support.findfile("sigchild_ignore.py", subdir="subprocessdata") p = subprocess.Popen([sys.executable, sigchild_ignore], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" " non-zero with this error:\n%s" % stderr.decode('utf-8')) def test_select_unbuffered(self): # Issue #11459: bufsize=0 should really set the pipes as # unbuffered (and therefore let select() work properly). select = import_helper.import_module("select") p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple")'], stdout=subprocess.PIPE, bufsize=0) f = p.stdout self.addCleanup(f.close) try: self.assertEqual(f.read(4), b"appl") self.assertIn(f, select.select([f], [], [], 0.0)[0]) finally: p.wait() def test_zombie_fast_process_del(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, it wouldn't be added to subprocess._active, and would # remain a zombie. # spawn a Popen, and delete its reference before it exits p = subprocess.Popen([sys.executable, "-c", 'import sys, time;' 'time.sleep(0.2)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) def test_leak_fast_process_del_killed(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, and the process got killed by a signal, it would never # be removed from subprocess._active, which triggered a FD and memory # leak. # spawn a Popen, delete its reference and kill it p = subprocess.Popen([sys.executable, "-c", 'import time;' 'time.sleep(3)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None support.gc_collect() # For PyPy or other GCs. 
os.kill(pid, signal.SIGKILL) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) # let some time for the process to exit, and create a new Popen: this # should trigger the wait() of p time.sleep(0.2) with self.assertRaises(OSError): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass # p should have been wait()ed on, and removed from the _active list self.assertRaises(OSError, os.waitpid, pid, 0) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: self.assertNotIn(ident, [id(o) for o in subprocess._active]) def test_close_fds_after_preexec(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # this FD is used as dup2() target by preexec_fn, and should be closed # in the child process fd = os.dup(1) self.addCleanup(os.close, fd) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, preexec_fn=lambda: os.dup2(1, fd)) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertNotIn(fd, remaining_fds) @support.cpython_only def test_fork_exec(self): # Issue #22290: fork_exec() must not crash on memory allocation failure # or other errors import _posixsubprocess gc_enabled = gc.isenabled() try: # Use a preexec function and enable the garbage collector # to force fork_exec() to re-enable the garbage collector # on error. func = lambda: None gc.enable() for args, exe_list, cwd, env_list in ( (123, [b"exe"], None, [b"env"]), ([b"arg"], 123, None, [b"env"]), ([b"arg"], [b"exe"], 123, [b"env"]), ([b"arg"], [b"exe"], None, 123), ): with self.assertRaises(TypeError) as err: _posixsubprocess.fork_exec( args, exe_list, True, (), cwd, env_list, -1, -1, -1, -1, 1, 2, 3, 4, True, True, False, [], 0, -1, func) # Attempt to prevent # "TypeError: fork_exec() takes exactly N arguments (M given)" # from passing the test. More refactoring to have us start # with a valid *args list, confirm a good call with that works # before mutating it in various ways to ensure that bad calls # with individual arg type errors raise a typeerror would be # ideal. Saving that for a future PR... self.assertNotIn('takes exactly', str(err.exception)) finally: if not gc_enabled: gc.disable() @support.cpython_only def test_fork_exec_sorted_fd_sanity_check(self): # Issue #23564: sanity check the fork_exec() fds_to_keep sanity check. import _posixsubprocess class BadInt: first = True def __init__(self, value): self.value = value def __int__(self): if self.first: self.first = False return self.value raise ValueError gc_enabled = gc.isenabled() try: gc.enable() for fds_to_keep in ( (-1, 2, 3, 4, 5), # Negative number. ('str', 4), # Not an int. (18, 23, 42, 2**63), # Out of range. (5, 4), # Not sorted. (6, 7, 7, 8), # Duplicate. 
(BadInt(1), BadInt(2)), ): with self.assertRaises( ValueError, msg='fds_to_keep={}'.format(fds_to_keep)) as c: _posixsubprocess.fork_exec( [b"false"], [b"false"], True, fds_to_keep, None, [b"env"], -1, -1, -1, -1, 1, 2, 3, 4, True, True, None, None, None, -1, None) self.assertIn('fds_to_keep', str(c.exception)) finally: if not gc_enabled: gc.disable() def test_communicate_BrokenPipeError_stdin_close(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError proc.communicate() # Should swallow BrokenPipeError from close. mock_proc_stdin.close.assert_called_with() def test_communicate_BrokenPipeError_stdin_write(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.write.side_effect = BrokenPipeError proc.communicate(b'stuff') # Should swallow the BrokenPipeError. mock_proc_stdin.write.assert_called_once_with(b'stuff') mock_proc_stdin.close.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_flush(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin, \ open(os.devnull, 'wb') as dev_null: mock_proc_stdin.flush.side_effect = BrokenPipeError # because _communicate registers a selector using proc.stdin... mock_proc_stdin.fileno.return_value = dev_null.fileno() # _communicate() should swallow BrokenPipeError from flush. proc.communicate(b'stuff') mock_proc_stdin.flush.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_close_with_timeout(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError # _communicate() should swallow BrokenPipeError from close. proc.communicate(timeout=999) mock_proc_stdin.close.assert_called_once_with() @unittest.skipUnless(_testcapi is not None and hasattr(_testcapi, 'W_STOPCODE'), 'need _testcapi.W_STOPCODE') def test_stopped(self): """Test wait() behavior when waitpid returns WIFSTOPPED; issue29335.""" args = ZERO_RETURN_CMD proc = subprocess.Popen(args) # Wait until the real process completes to avoid zombie process support.wait_process(proc.pid, exitcode=0) status = _testcapi.W_STOPCODE(3) with mock.patch('subprocess.os.waitpid', return_value=(proc.pid, status)): returncode = proc.wait() self.assertEqual(returncode, -3) def test_send_signal_race(self): # bpo-38630: send_signal() must poll the process exit status to reduce # the risk of sending the signal to the wrong process. proc = subprocess.Popen(ZERO_RETURN_CMD) # wait until the process completes without using the Popen APIs. support.wait_process(proc.pid, exitcode=0) # returncode is still None but the process completed. 
self.assertIsNone(proc.returncode) with mock.patch("os.kill") as mock_kill: proc.send_signal(signal.SIGTERM) # send_signal() didn't call os.kill() since the process already # completed. mock_kill.assert_not_called() # Don't check the returncode value: the test reads the exit status, # so Popen failed to read it and uses a default returncode instead. self.assertIsNotNone(proc.returncode) def test_send_signal_race2(self): # bpo-40550: the process might exist between the returncode check and # the kill operation p = subprocess.Popen([sys.executable, '-c', 'exit(1)']) # wait for process to exit while not p.returncode: p.poll() with mock.patch.object(p, 'poll', new=lambda: None): p.returncode = None p.send_signal(signal.SIGTERM) def test_communicate_repeated_call_after_stdout_close(self): proc = subprocess.Popen([sys.executable, '-c', 'import os, time; os.close(1), time.sleep(2)'], stdout=subprocess.PIPE) while True: try: proc.communicate(timeout=0.1) return except subprocess.TimeoutExpired: pass @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): def test_startupinfo(self): # startupinfo argument # We uses hardcoded constants, because we do not want to # depend on win32all. STARTF_USESHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_MAXIMIZE # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_keywords(self): # startupinfo argument # We use hardcoded constants, because we do not want to # depend on win32all. STARTF_USERSHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO( dwFlags=STARTF_USERSHOWWINDOW, wShowWindow=SW_MAXIMIZE ) # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_copy(self): # bpo-34044: Popen must not modify input STARTUPINFO structure startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW startupinfo.wShowWindow = subprocess.SW_HIDE # Call Popen() twice with the same startupinfo object to make sure # that it's not modified for _ in range(2): cmd = ZERO_RETURN_CMD with open(os.devnull, 'w') as null: proc = subprocess.Popen(cmd, stdout=null, stderr=subprocess.STDOUT, startupinfo=startupinfo) with proc: proc.communicate() self.assertEqual(proc.returncode, 0) self.assertEqual(startupinfo.dwFlags, subprocess.STARTF_USESHOWWINDOW) self.assertIsNone(startupinfo.hStdInput) self.assertIsNone(startupinfo.hStdOutput) self.assertIsNone(startupinfo.hStdError) self.assertEqual(startupinfo.wShowWindow, subprocess.SW_HIDE) self.assertEqual(startupinfo.lpAttributeList, {"handle_list": []}) def test_creationflags(self): # creationflags argument CREATE_NEW_CONSOLE = 16 sys.stderr.write(" a DOS box should flash briefly ...\n") subprocess.call(sys.executable + ' -c "import time; time.sleep(0.25)"', creationflags=CREATE_NEW_CONSOLE) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], preexec_fn=lambda: 1) @support.cpython_only def test_issue31471(self): # There shouldn't be an assertion failure in Popen() in case the env # argument has a bad keys() method. 
class BadEnv(dict): keys = None with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, env=BadEnv()) def test_close_fds(self): # close file descriptors rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"], close_fds=True) self.assertEqual(rc, 47) def test_close_fds_with_stdio(self): import msvcrt fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) handles = [] for fd in fds: os.set_inheritable(fd, True) handles.append(msvcrt.get_osfhandle(fd)) p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) int(stdout.strip()) # Check that stdout is an integer p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # The same as the previous call, but with an empty handle_list handle_list = [] startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handle_list} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # Check for a warning due to using handle_list and close_fds=False with warnings_helper.check_warnings((".*overriding close_fds", RuntimeWarning)): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handles[:]} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) def test_empty_attribute_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_empty_handle_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": []} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen(["set"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_encodings(self): # Run command through the shell (string) for enc in ['ansi', 'oem']: newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv, encoding=enc) with p: self.assertIn("physalis", p.stdout.read(), enc) def test_call_string(self): # call() function with string argument on Windows rc = subprocess.call(sys.executable + ' -c "import sys; sys.exit(47)"') self.assertEqual(rc, 47) def _kill_process(self, method, *args): # Some win32 buildbot raises EOFError if stdin is inherited p = subprocess.Popen([sys.executable, 
"-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') returncode = p.wait() self.assertNotEqual(returncode, 0) def _kill_dead_process(self, method, *args): p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() sys.exit(42) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') rc = p.wait() self.assertEqual(rc, 42) def test_send_signal(self): self._kill_process('send_signal', signal.SIGTERM) def test_kill(self): self._kill_process('kill') def test_terminate(self): self._kill_process('terminate') def test_send_signal_dead(self): self._kill_dead_process('send_signal', signal.SIGTERM) def test_kill_dead(self): self._kill_dead_process('kill') def test_terminate_dead(self): self._kill_dead_process('terminate') class MiscTests(unittest.TestCase): class RecordingPopen(subprocess.Popen): """A Popen that saves a reference to each instance for testing.""" instances_created = [] def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.instances_created.append(self) @mock.patch.object(subprocess.Popen, "_communicate") def _test_keyboardinterrupt_no_kill(self, popener, mock__communicate, **kwargs): """Fake a SIGINT happening during Popen._communicate() and ._wait(). This avoids the need to actually try and get test environments to send and receive signals reliably across platforms. The net effect of a ^C happening during a blocking subprocess execution which we want to clean up from is a KeyboardInterrupt coming out of communicate() or wait(). """ mock__communicate.side_effect = KeyboardInterrupt try: with mock.patch.object(subprocess.Popen, "_wait") as mock__wait: # We patch out _wait() as no signal was involved so the # child process isn't actually going to exit rapidly. 
mock__wait.side_effect = KeyboardInterrupt with mock.patch.object(subprocess, "Popen", self.RecordingPopen): with self.assertRaises(KeyboardInterrupt): popener([sys.executable, "-c", "import time\ntime.sleep(9)\nimport sys\n" "sys.stderr.write('\\n!runaway child!\\n')"], stdout=subprocess.DEVNULL, **kwargs) for call in mock__wait.call_args_list[1:]: self.assertNotEqual( call, mock.call(timeout=None), "no open-ended wait() after the first allowed: " f"{mock__wait.call_args_list}") sigint_calls = [] for call in mock__wait.call_args_list: if call == mock.call(timeout=0.25): # from Popen.__init__ sigint_calls.append(call) self.assertLessEqual(mock__wait.call_count, 2, msg=mock__wait.call_args_list) self.assertEqual(len(sigint_calls), 1, msg=mock__wait.call_args_list) finally: # cleanup the forgotten (due to our mocks) child process process = self.RecordingPopen.instances_created.pop() process.kill() process.wait() self.assertEqual([], self.RecordingPopen.instances_created) def test_call_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.call, timeout=6.282) def test_run_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.run, timeout=6.282) def test_context_manager_keyboardinterrupt_no_kill(self): def popen_via_context_manager(*args, **kwargs): with subprocess.Popen(*args, **kwargs) as unused_process: raise KeyboardInterrupt # Test how __exit__ handles ^C. self._test_keyboardinterrupt_no_kill(popen_via_context_manager) def test_getoutput(self): self.assertEqual(subprocess.getoutput('echo xyzzy'), 'xyzzy') self.assertEqual(subprocess.getstatusoutput('echo xyzzy'), (0, 'xyzzy')) # we use mkdtemp in the next line to create an empty directory # under our exclusive control; from that, we can invent a pathname # that we _know_ won't exist. This is guaranteed to fail. 
dir = None try: dir = tempfile.mkdtemp() name = os.path.join(dir, "foo") status, output = subprocess.getstatusoutput( ("type " if mswindows else "cat ") + name) self.assertNotEqual(status, 0) finally: if dir is not None: os.rmdir(dir) def test__all__(self): """Ensure that __all__ is populated properly.""" intentionally_excluded = {"list2cmdline", "Handle", "pwd", "grp", "fcntl"} exported = set(subprocess.__all__) possible_exports = set() import types for name, value in subprocess.__dict__.items(): if name.startswith('_'): continue if isinstance(value, (types.ModuleType,)): continue possible_exports.add(name) self.assertEqual(exported, possible_exports - intentionally_excluded) @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class ProcessTestCaseNoPoll(ProcessTestCase): def setUp(self): self.orig_selector = subprocess._PopenSelector subprocess._PopenSelector = selectors.SelectSelector ProcessTestCase.setUp(self) def tearDown(self): subprocess._PopenSelector = self.orig_selector ProcessTestCase.tearDown(self) @unittest.skipUnless(mswindows, "Windows-specific tests") class CommandsWithSpaces (BaseTestCase): def setUp(self): super().setUp() f, fname = tempfile.mkstemp(".py", "te st") self.fname = fname.lower () os.write(f, b"import sys;" b"sys.stdout.write('%d %s' % (len(sys.argv), [a.lower () for a in sys.argv]))" ) os.close(f) def tearDown(self): os.remove(self.fname) super().tearDown() def with_spaces(self, *args, **kwargs): kwargs['stdout'] = subprocess.PIPE p = subprocess.Popen(*args, **kwargs) with p: self.assertEqual( p.stdout.read ().decode("mbcs"), "2 [%r, 'ab cd']" % self.fname ) def test_shell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd"), shell=1) def test_shell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"], shell=1) def test_noshell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd")) def test_noshell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"]) class ContextManagerTests(BaseTestCase): def test_pipe(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write('stdout');" "sys.stderr.write('stderr');"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: self.assertEqual(proc.stdout.read(), b"stdout") self.assertEqual(proc.stderr.read(), b"stderr") self.assertTrue(proc.stdout.closed) self.assertTrue(proc.stderr.closed) def test_returncode(self): with subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(100)"]) as proc: pass # __exit__ calls wait(), so the returncode should be set self.assertEqual(proc.returncode, 100) def test_communicate_stdin(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.exit(sys.stdin.read() == 'context')"], stdin=subprocess.PIPE) as proc: proc.communicate(b"context") self.assertEqual(proc.returncode, 1) def test_invalid_args(self): with self.assertRaises(NONEXISTING_ERRORS): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass def test_broken_pipe_cleanup(self): """Broken pipe error should not prevent wait() (Issue 21619)""" proc = subprocess.Popen(ZERO_RETURN_CMD, 
stdin=subprocess.PIPE, bufsize=support.PIPE_MAX_SIZE*2) proc = proc.__enter__() # Prepare to send enough data to overflow any OS pipe buffering and # guarantee a broken pipe error. Data is held in BufferedWriter # buffer until closed. proc.stdin.write(b'x' * support.PIPE_MAX_SIZE) self.assertIsNone(proc.returncode) # EPIPE expected under POSIX; EINVAL under Windows self.assertRaises(OSError, proc.__exit__, None, None, None) self.assertEqual(proc.returncode, 0) self.assertTrue(proc.stdin.closed) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_threading.py000066400000000000000000001670261471441230600217140ustar00rootroot00000000000000""" Tests for the threading module. """ import test.support from test.support import threading_helper from test.support import verbose, cpython_only, os_helper from test.support.import_helper import import_module from test.support.script_helper import assert_python_ok, assert_python_failure import random import sys import _thread import threading import time import unittest import weakref import os import subprocess import signal import textwrap import traceback from unittest import mock from gevent.tests import lock_tests # gevent: use our local copy #from test import lock_tests from test import support # Between fork() and exec(), only async-safe functions are allowed (issues # #12316 and #11870), and fork() from a worker thread is known to trigger # problems with some operating systems (issue #3863): skip problematic tests # on platforms known to behave badly. platforms_to_skip = ('netbsd5', 'hp-ux11') # Is Python built with Py_DEBUG macro defined? Py_DEBUG = hasattr(sys, 'gettotalrefcount') def restore_default_excepthook(testcase): testcase.addCleanup(setattr, threading, 'excepthook', threading.excepthook) threading.excepthook = threading.__excepthook__ # A trivial mutable counter. class Counter(object): def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % (self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. 
%d tasks are running' % (self.name, self.nrunning.get())) class BaseTestCase(unittest.TestCase): def setUp(self): self._threads = threading_helper.threading_setup() def tearDown(self): threading_helper.threading_cleanup(*self._threads) test.support.reap_children() class ThreadTests(BaseTestCase): @cpython_only def test_name(self): def func(): pass thread = threading.Thread(name="myname1") self.assertEqual(thread.name, "myname1") # Convert int name to str thread = threading.Thread(name=123) self.assertEqual(thread.name, "123") # target name is ignored if name is specified thread = threading.Thread(target=func, name="myname2") self.assertEqual(thread.name, "myname2") with mock.patch.object(threading, '_counter', return_value=2): thread = threading.Thread(name="") self.assertEqual(thread.name, "Thread-2") with mock.patch.object(threading, '_counter', return_value=3): thread = threading.Thread() self.assertEqual(thread.name, "Thread-3") with mock.patch.object(threading, '_counter', return_value=5): thread = threading.Thread(target=func) self.assertEqual(thread.name, "Thread-5 (func)") @cpython_only def test_disallow_instantiation(self): # Ensure that the type disallows instantiation (bpo-43916) lock = threading.Lock() test.support.check_disallow_instantiation(self, type(lock)) # Create a bunch of threads, let each do some work, wait until all are # done. def test_various_ops(self): # This takes about n/3 seconds to run (about n/3 clumps of tasks, # times about 1 second per clump). NUMTASKS = 10 # no more than 3 of the 10 can run at once sema = threading.BoundedSemaphore(value=3) mutex = threading.RLock() numrunning = Counter() threads = [] for i in range(NUMTASKS): t = TestThread(""%i, self, sema, mutex, numrunning) threads.append(t) self.assertIsNone(t.ident) self.assertRegex(repr(t), r'^$') t.start() if hasattr(threading, 'get_native_id'): native_ids = set(t.native_id for t in threads) | {threading.get_native_id()} self.assertNotIn(None, native_ids) self.assertEqual(len(native_ids), NUMTASKS + 1) if verbose: print('waiting for all tasks to complete') for t in threads: t.join() self.assertFalse(t.is_alive()) self.assertNotEqual(t.ident, 0) self.assertIsNotNone(t.ident) self.assertRegex(repr(t), r'^$') if verbose: print('all tasks done') self.assertEqual(numrunning.get(), 0) def test_ident_of_no_threading_threads(self): # The ident still must work for the main thread and dummy threads. self.assertIsNotNone(threading.current_thread().ident) def f(): ident.append(threading.current_thread().ident) done.set() done = threading.Event() ident = [] with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, ()) done.wait() self.assertEqual(ident[0], tid) # Kill the "immortal" _DummyThread del threading._active[ident[0]] # run with a small(ish) thread stack size (256 KiB) def test_various_ops_small_stack(self): if verbose: print('with 256 KiB thread stack size...') try: threading.stack_size(262144) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1 MiB) def test_various_ops_large_stack(self): if verbose: print('with 1 MiB thread stack size...') try: threading.stack_size(0x100000) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. 
def f(mutex): # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. threading.current_thread() mutex.release() mutex = threading.Lock() mutex.acquire() with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() self.assertIn(tid, threading._active) self.assertIsInstance(threading._active[tid], threading._DummyThread) #Issue 29376 self.assertTrue(threading._active[tid].is_alive()) self.assertRegex(repr(threading._active[tid]), '_DummyThread') del threading._active[tid] # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def test_PyThreadState_SetAsyncExc(self): ctypes = import_module("ctypes") set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc set_async_exc.argtypes = (ctypes.c_ulong, ctypes.py_object) class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # First check it works when setting the exception from the same thread. tid = threading.get_ident() self.assertIsInstance(tid, int) self.assertGreater(tid, 0) try: result = set_async_exc(tid, exception) # The exception is async, so we might have to keep the VM busy until # it notices. while True: pass except AsyncExc: pass else: # This code is unreachable but it reflects the intent. If we wanted # to be smarter the above loop wouldn't be infinite. self.fail("AsyncExc not raised") try: self.assertEqual(result, 1) # one thread state modified except UnboundLocalError: # The exception was raised too quickly for us to get the result. pass # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): def run(self): self.id = threading.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. if verbose: print(" trying nonsensical thread id") result = set_async_exc(-1, exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") ret = worker_started.wait() self.assertTrue(ret) if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(t.id, exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=support.SHORT_TIMEOUT) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. 
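        # The test below swaps threading._start_new_thread for a stub that
        # always raises ThreadError, so Thread.start() is guaranteed to fail;
        # the assertion then verifies the failed thread was removed from
        # threading._limbo. The real function is restored in the finally block.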
def fail_new_thread(*args): raise threading.ThreadError() _start_new_thread = threading._start_new_thread threading._start_new_thread = fail_new_thread try: t = threading.Thread(target=lambda: None) self.assertRaises(threading.ThreadError, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") finally: threading._start_new_thread = _start_new_thread def test_finalize_running_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. import_module("ctypes") rc, out, err = assert_python_failure("-c", """if 1: import ctypes, sys, time, _thread # This lock is used as a simple event variable. ready = _thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) _thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. sys.exit(42) """) self.assertEqual(rc, 42) def test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown assert_python_ok("-c", """if 1: import sys, threading # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): import os, time time.sleep(2) print('program blocked; aborting') os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() # This is the trace function def func(frame, event, arg): threading.current_thread() return func sys.settrace(func) """) def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown rc, out, err = assert_python_ok("-c", """if 1: import threading from time import sleep def child(): sleep(1) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is:", sleep) threading.Thread(target=child).start() raise SystemExit """) self.assertEqual(out.strip(), b"Woke up, sleep function is: ") self.assertEqual(err, b"") def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. enum = threading.enumerate old_interval = sys.getswitchinterval() try: for i in range(1, 100): sys.setswitchinterval(i * 0.0002) t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertNotIn(t, l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setswitchinterval(old_interval) def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes. 
self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'yet_another':self}) self.thread.start() def _run(self, other_ref, yet_another): if self.should_raise: raise SystemExit restore_default_excepthook(self) cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) def test_old_threading_api(self): # Just a quick sanity check to make sure the old method names are # still present t = threading.Thread() with self.assertWarnsRegex(DeprecationWarning, r'get the daemon attribute'): t.isDaemon() with self.assertWarnsRegex(DeprecationWarning, r'set the daemon attribute'): t.setDaemon(True) with self.assertWarnsRegex(DeprecationWarning, r'get the name attribute'): t.getName() with self.assertWarnsRegex(DeprecationWarning, r'set the name attribute'): t.setName("name") e = threading.Event() with self.assertWarnsRegex(DeprecationWarning, 'use is_set()'): e.isSet() cond = threading.Condition() cond.acquire() with self.assertWarnsRegex(DeprecationWarning, 'use notify_all()'): cond.notifyAll() with self.assertWarnsRegex(DeprecationWarning, 'use active_count()'): threading.activeCount() with self.assertWarnsRegex(DeprecationWarning, 'use current_thread()'): threading.currentThread() def test_repr_daemon(self): t = threading.Thread() self.assertNotIn('daemon', repr(t)) t.daemon = True self.assertIn('daemon', repr(t)) def test_daemon_param(self): t = threading.Thread() self.assertFalse(t.daemon) t = threading.Thread(daemon=False) self.assertFalse(t.daemon) t = threading.Thread(daemon=True) self.assertTrue(t.daemon) @unittest.skipUnless(hasattr(os, 'fork'), 'needs os.fork()') def test_fork_at_exit(self): # bpo-42350: Calling os.fork() after threading._shutdown() must # not log an error. code = textwrap.dedent(""" import atexit import os import sys from test.support import wait_process # Import the threading module to register its "at fork" callback import threading def exit_handler(): pid = os.fork() if not pid: print("child process ok", file=sys.stderr, flush=True) # child process else: wait_process(pid, exitcode=0) # exit_handler() will be called after threading._shutdown() atexit.register(exit_handler) """) _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err.rstrip(), b'child process ok') @unittest.skipUnless(hasattr(os, 'fork'), 'test needs fork()') def test_dummy_thread_after_fork(self): # Issue #14308: a dummy thread in the active list doesn't mess up # the after-fork mechanism. 
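        # The embedded script below starts a raw _thread function that calls
        # threading.current_thread(), registering a _DummyThread in the active
        # list. After os.fork(), the child must report active_count() == 1,
        # i.e. the dummy thread must not survive into the forked process.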
code = """if 1: import _thread, threading, os, time def background_thread(evt): # Creates and registers the _DummyThread instance threading.current_thread() evt.set() time.sleep(10) evt = threading.Event() _thread.start_new_thread(background_thread, (evt,)) evt.wait() assert threading.active_count() == 2, threading.active_count() if os.fork() == 0: assert threading.active_count() == 1, threading.active_count() os._exit(0) else: os.wait() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") def test_is_alive_after_fork(self): # Try hard to trigger #18418: is_alive() could sometimes be True on # threads that vanished after a fork. old_interval = sys.getswitchinterval() self.addCleanup(sys.setswitchinterval, old_interval) # Make the bug more likely to manifest. test.support.setswitchinterval(1e-6) for i in range(20): t = threading.Thread(target=lambda: None) t.start() pid = os.fork() if pid == 0: os._exit(11 if t.is_alive() else 10) else: t.join() support.wait_process(pid, exitcode=10) def test_main_thread(self): main = threading.main_thread() self.assertEqual(main.name, 'MainThread') self.assertEqual(main.ident, threading.current_thread().ident) self.assertEqual(main.ident, threading.get_ident()) def f(): self.assertNotEqual(threading.main_thread().ident, threading.current_thread().ident) th = threading.Thread(target=f) th.start() th.join() @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()") @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork(self): code = """if 1: import os, threading from test import support pid = os.fork() if pid == 0: main = threading.main_thread() print(main.name) print(main.ident == threading.current_thread().ident) print(main.ident == threading.get_ident()) else: support.wait_process(pid, exitcode=0) """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "MainThread\nTrue\nTrue\n") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()") @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_nonmain_thread(self): code = """if 1: import os, threading, sys from test import support def func(): pid = os.fork() if pid == 0: main = threading.main_thread() print(main.name) print(main.ident == threading.current_thread().ident) print(main.ident == threading.get_ident()) # stdout is fully buffered because not a tty, # we have to flush before exit. 
sys.stdout.flush() else: support.wait_process(pid, exitcode=0) th = threading.Thread(target=func) th.start() th.join() """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "Thread-1 (func)\nTrue\nTrue\n") def test_main_thread_during_shutdown(self): # bpo-31516: current_thread() should still point to the main thread # at shutdown code = """if 1: import gc, threading main_thread = threading.current_thread() assert main_thread is threading.main_thread() # sanity check class RefCycle: def __init__(self): self.cycle = self def __del__(self): print("GC:", threading.current_thread() is main_thread, threading.main_thread() is main_thread, threading.enumerate() == [main_thread]) RefCycle() gc.collect() # sanity check x = RefCycle() """ _, out, err = assert_python_ok("-c", code) data = out.decode() self.assertEqual(err, b"") self.assertEqual(data.splitlines(), ["GC: True True True"] * 2) def test_finalization_shutdown(self): # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait # until Python thread states of all non-daemon threads get deleted. # # Test similar to SubinterpThreadingTests.test_threads_join_2(), but # test the finalization of the main interpreter. code = """if 1: import os import threading import time import random def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_Finalize() is called. random_sleep() tls.x = Sleeper() random_sleep() threading.Thread(target=f).start() random_sleep() """ rc, out, err = assert_python_ok("-c", code) self.assertEqual(err, b"") def test_tstate_lock(self): # Test an implementation detail of Thread objects. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() time.sleep(0.01) # The tstate lock is None until the thread is started t = threading.Thread(target=f) self.assertIs(t._tstate_lock, None) t.start() started.acquire() self.assertTrue(t.is_alive()) # The tstate lock can't be acquired when the thread is running # (or suspended). tstate_lock = t._tstate_lock self.assertFalse(tstate_lock.acquire(timeout=0), False) finish.release() # When the thread ends, the state_lock can be successfully # acquired. self.assertTrue(tstate_lock.acquire(timeout=support.SHORT_TIMEOUT), False) # But is_alive() is still True: we hold _tstate_lock now, which # prevents is_alive() from knowing the thread's end-of-life C code # is done. self.assertTrue(t.is_alive()) # Let is_alive() find out the C code is done. tstate_lock.release() self.assertFalse(t.is_alive()) # And verify the thread disposed of _tstate_lock. self.assertIsNone(t._tstate_lock) t.join() def test_repr_stopped(self): # Verify that "stopped" shows up in repr(Thread) appropriately. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() t = threading.Thread(target=f) t.start() started.acquire() self.assertIn("started", repr(t)) finish.release() # "stopped" should appear in the repr in a reasonable amount of time. # Implementation detail: as of this writing, that's trivially true # if .join() is called, and almost trivially true if .is_alive() is # called. The detail we're testing here is that "stopped" shows up # "all on its own". 
LOOKING_FOR = "stopped" for i in range(500): if LOOKING_FOR in repr(t): break time.sleep(0.01) self.assertIn(LOOKING_FOR, repr(t)) # we waited at least 5 seconds t.join() def test_BoundedSemaphore_limit(self): # BoundedSemaphore should raise ValueError if released too often. for limit in range(1, 10): bs = threading.BoundedSemaphore(limit) threads = [threading.Thread(target=bs.acquire) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() threads = [threading.Thread(target=bs.release) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() self.assertRaises(ValueError, bs.release) @cpython_only def test_frame_tstate_tracing(self): # Issue #14432: Crash when a generator is created in a C thread that is # destroyed while the generator is still used. The issue was that a # generator contains a frame, and the frame kept a reference to the # Python state of the destroyed C thread. The crash occurs when a trace # function is setup. def noop_trace(frame, event, arg): # no operation return noop_trace def generator(): while 1: yield "generator" def callback(): if callback.gen is None: callback.gen = generator() return next(callback.gen) callback.gen = None old_trace = sys.gettrace() sys.settrace(noop_trace) try: # Install a trace function threading.settrace(noop_trace) # Create a generator in a C thread which exits after the call import _testcapi _testcapi.call_in_temporary_c_thread(callback) # Call the generator in a different Python thread, check that the # generator didn't keep a reference to the destroyed thread state for test in range(3): # The trace function is still called here callback() finally: sys.settrace(old_trace) def test_gettrace(self): def noop_trace(frame, event, arg): # no operation return noop_trace old_trace = threading.gettrace() try: threading.settrace(noop_trace) trace_func = threading.gettrace() self.assertEqual(noop_trace,trace_func) finally: threading.settrace(old_trace) def test_getprofile(self): def fn(*args): pass old_profile = threading.getprofile() try: threading.setprofile(fn) self.assertEqual(fn, threading.getprofile()) finally: threading.setprofile(old_profile) @cpython_only def test_shutdown_locks(self): for daemon in (False, True): with self.subTest(daemon=daemon): event = threading.Event() thread = threading.Thread(target=event.wait, daemon=daemon) # Thread.start() must add lock to _shutdown_locks, # but only for non-daemon thread thread.start() tstate_lock = thread._tstate_lock if not daemon: self.assertIn(tstate_lock, threading._shutdown_locks) else: self.assertNotIn(tstate_lock, threading._shutdown_locks) # unblock the thread and join it event.set() thread.join() # Thread._stop() must remove tstate_lock from _shutdown_locks. # Daemon threads must never add it to _shutdown_locks. self.assertNotIn(tstate_lock, threading._shutdown_locks) def test_locals_at_exit(self): # bpo-19466: thread locals must not be deleted before destructors # are called rc, out, err = assert_python_ok("-c", """if 1: import threading class Atexit: def __del__(self): print("thread_dict.atexit = %r" % thread_dict.atexit) thread_dict = threading.local() thread_dict.atexit = "value" atexit = Atexit() """) self.assertEqual(out.rstrip(), b"thread_dict.atexit = 'value'") def test_boolean_target(self): # bpo-41149: A thread that had a boolean value of False would not # run, regardless of whether it was callable. The correct behaviour # is for a thread to do nothing if its target is None, and to call # the target otherwise. 
class BooleanTarget(object): def __init__(self): self.ran = False def __bool__(self): return False def __call__(self): self.ran = True target = BooleanTarget() thread = threading.Thread(target=target) thread.start() thread.join() self.assertTrue(target.ran) def test_leak_without_join(self): # bpo-37788: Test that a thread which is not joined explicitly # does not leak. Test written for reference leak checks. def noop(): pass with threading_helper.wait_threads_exit(): threading.Thread(target=noop).start() # Thread.join() is not called @unittest.skipUnless(Py_DEBUG, 'need debug build (Py_DEBUG)') def test_debug_deprecation(self): # bpo-44584: The PYTHONTHREADDEBUG environment variable is deprecated rc, out, err = assert_python_ok("-Wdefault", "-c", "pass", PYTHONTHREADDEBUG="1") msg = (b'DeprecationWarning: The threading debug ' b'(PYTHONTHREADDEBUG environment variable) ' b'is deprecated and will be removed in Python 3.12') self.assertIn(msg, err) def test_import_from_another_thread(self): # bpo-1596321: If the threading module is first import from a thread # different than the main thread, threading._shutdown() must handle # this case without logging an error at Python exit. code = textwrap.dedent(''' import _thread import sys event = _thread.allocate_lock() event.acquire() def import_threading(): import threading event.release() if 'threading' in sys.modules: raise Exception('threading is already imported') _thread.start_new_thread(import_threading, ()) # wait until the threading module is imported event.acquire() event.release() if 'threading' not in sys.modules: raise Exception('threading is not imported') # don't wait until the thread completes ''') rc, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') class ThreadJoinOnShutdown(BaseTestCase): def _run_and_join(self, script): script = """if 1: import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') # stdout is fully buffered because not a tty, we have to flush # before exit. sys.stdout.flush() \n""" + script rc, out, err = assert_python_ok("-c", script) data = out.decode().replace('\r', '') self.assertEqual(data, "end of main\nend of thread\n") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.1) print('end of main') """ self._run_and_join(script) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter script = """if 1: from test import support childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. 
script = """if 1: from test import support main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() """ self._run_and_join(script) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_4_daemon_threads(self): # Check that a daemon thread cannot crash the interpreter on shutdown # by manipulating internal structures that are being disposed of in # the main thread. script = """if True: import os import random import sys import time import threading thread_has_run = set() def random_io(): '''Loop for a while sleeping random tiny amounts and doing some I/O.''' while True: with open(os.__file__, 'rb') as in_f: stuff = in_f.read(200) with open(os.devnull, 'wb') as null_f: null_f.write(stuff) time.sleep(random.random() / 1995) thread_has_run.add(threading.current_thread()) def main(): count = 0 for _ in range(40): new_thread = threading.Thread(target=random_io) new_thread.daemon = True new_thread.start() count += 1 while len(thread_has_run) < count: time.sleep(0.001) # Trigger process shutdown sys.exit(0) main() """ rc, out, err = assert_python_ok('-c', script) self.assertFalse(err) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_reinit_tls_after_fork(self): # Issue #13817: fork() would deadlock in a multithreaded program with # the ad-hoc TLS implementation. def do_fork_and_wait(): # just fork a child process and wait it pid = os.fork() if pid > 0: support.wait_process(pid, exitcode=50) else: os._exit(50) # start a bunch of threads that will fork() child processes threads = [] for i in range(16): t = threading.Thread(target=do_fork_and_wait) threads.append(t) t.start() for t in threads: t.join() @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") def test_clear_threads_states_after_fork(self): # Issue #17094: check that threads states are cleared after fork() # start a bunch of threads threads = [] for i in range(16): t = threading.Thread(target=lambda : time.sleep(0.3)) threads.append(t) t.start() pid = os.fork() if pid == 0: # check that threads states have been cleared if len(sys._current_frames()) == 1: os._exit(51) else: os._exit(52) else: support.wait_process(pid, exitcode=51) for t in threads: t.join() class SubinterpThreadingTests(BaseTestCase): def pipe(self): r, w = os.pipe() self.addCleanup(os.close, r) self.addCleanup(os.close, w) if hasattr(os, 'set_blocking'): os.set_blocking(r, False) return (r, w) def test_threads_join(self): # Non-daemon threads should be joined at subinterpreter shutdown # (issue #18808) r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. 
self.assertEqual(os.read(r, 1), b"x") def test_threads_join_2(self): # Same as above, but a delay gets introduced after the thread's # Python code returned but before the thread state is deleted. # To achieve this, we register a thread-local object which sleeps # a bit when deallocated. r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() tls.x = Sleeper() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. self.assertEqual(os.read(r, 1), b"x") @cpython_only def test_daemon_threads_fatal_error(self): subinterp_code = f"""if 1: import os import threading import time def f(): # Make sure the daemon thread is still running when # Py_EndInterpreter is called. time.sleep({test.support.SHORT_TIMEOUT}) threading.Thread(target=f, daemon=True).start() """ script = r"""if 1: import _testcapi _testcapi.run_in_subinterp(%r) """ % (subinterp_code,) with test.support.SuppressCrashReport(): rc, out, err = assert_python_failure("-c", script) self.assertIn("Fatal Python error: Py_EndInterpreter: " "not the last thread", err.decode()) class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. def test_start_thread_again(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, thread.start) thread.join() def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join); def test_joining_inactive_thread(self): thread = threading.Thread() self.assertRaises(RuntimeError, thread.join) def test_daemonize_active_thread(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, setattr, thread, "daemon", True) thread.join() def test_releasing_unacquired_lock(self): lock = threading.Lock() self.assertRaises(RuntimeError, lock.release) def test_recursion_limit(self): # Issue 9670 # test that excessive recursion within a non-main thread causes # an exception rather than crashing the interpreter on platforms # like Mac OS X or FreeBSD which have small default stack sizes # for threads script = """if True: import threading def recurse(): return recurse() def outer(): try: recurse() except RecursionError: pass w = threading.Thread(target=outer) w.start() w.join() print('end of main thread') """ expected_output = "end of main thread\n" p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() data = stdout.decode().replace('\r', '') self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) self.assertEqual(data, expected_output) def test_print_exception(self): script = r"""if True: import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) 
self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_1(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) sys.stderr = None running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_2(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 sys.stderr = None t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise class Issue27558(threading.Thread): exc = None def run(self): try: bare_raise() except Exception as exc: self.exc = exc thread = Issue27558() thread.start() thread.join() self.assertIsNotNone(thread.exc) self.assertIsInstance(thread.exc, RuntimeError) # explicitly break the reference cycle to not leak a dangling thread thread.exc = None def test_multithread_modify_file_noerror(self): # See issue25872 def modify_file(): with open(os_helper.TESTFN, 'w', encoding='utf-8') as fp: fp.write(' ') traceback.format_stack() self.addCleanup(os_helper.unlink, os_helper.TESTFN) threads = [ threading.Thread(target=modify_file) for i in range(100) ] for t in threads: t.start() t.join() class ThreadRunFail(threading.Thread): def run(self): raise ValueError("run failed") class ExceptHookTests(BaseTestCase): def setUp(self): restore_default_excepthook(self) super().setUp() def test_excepthook(self): with support.captured_output("stderr") as stderr: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {thread.name}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("run failed")', stderr) self.assertIn('ValueError: run failed', stderr) @support.cpython_only def test_excepthook_thread_None(self): # threading.excepthook called with thread=None: log the thread # identifier in this case. 
with support.captured_output("stderr") as stderr: try: raise ValueError("bug") except Exception as exc: args = threading.ExceptHookArgs([*sys.exc_info(), None]) try: threading.excepthook(args) finally: # Explicitly break a reference cycle args = None stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {threading.get_ident()}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("bug")', stderr) self.assertIn('ValueError: bug', stderr) def test_system_exit(self): class ThreadExit(threading.Thread): def run(self): sys.exit(1) # threading.excepthook() silently ignores SystemExit with support.captured_output("stderr") as stderr: thread = ThreadExit() thread.start() thread.join() self.assertEqual(stderr.getvalue(), '') def test_custom_excepthook(self): args = None def hook(hook_args): nonlocal args args = hook_args try: with support.swap_attr(threading, 'excepthook', hook): thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(args.exc_type, ValueError) self.assertEqual(str(args.exc_value), 'run failed') self.assertEqual(args.exc_traceback, args.exc_value.__traceback__) self.assertIs(args.thread, thread) finally: # Break reference cycle args = None def test_custom_excepthook_fail(self): def threading_hook(args): raise ValueError("threading_hook failed") err_str = None def sys_hook(exc_type, exc_value, exc_traceback): nonlocal err_str err_str = str(exc_value) with support.swap_attr(threading, 'excepthook', threading_hook), \ support.swap_attr(sys, 'excepthook', sys_hook), \ support.captured_output('stderr') as stderr: thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(stderr.getvalue(), 'Exception in threading.excepthook:\n') self.assertEqual(err_str, 'threading_hook failed') def test_original_excepthook(self): def run_thread(): with support.captured_output("stderr") as output: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() return output.getvalue() def threading_hook(args): print("Running a thread failed", file=sys.stderr) default_output = run_thread() with support.swap_attr(threading, 'excepthook', threading_hook): custom_hook_output = run_thread() threading.excepthook = threading.__excepthook__ recovered_output = run_thread() self.assertEqual(default_output, recovered_output) self.assertNotEqual(default_output, custom_hook_output) self.assertEqual(custom_hook_output, "Running a thread failed\n") class TimerTests(BaseTestCase): def setUp(self): BaseTestCase.setUp(self) self.callback_args = [] self.callback_event = threading.Event() def test_init_immutable_default_args(self): # Issue 17435: constructor defaults were mutable objects, they could be # mutated via the object attributes and affect other Timer objects. 
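        # Two timers share the same callback spy. After the first timer fires,
        # its args/kwargs are mutated in place; the second timer must still
        # invoke the callback with empty defaults, which the recorded call
        # list below asserts.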
timer1 = threading.Timer(0.01, self._callback_spy) timer1.start() self.callback_event.wait() timer1.args.append("blah") timer1.kwargs["foo"] = "bar" self.callback_event.clear() timer2 = threading.Timer(0.01, self._callback_spy) timer2.start() self.callback_event.wait() self.assertEqual(len(self.callback_args), 2) self.assertEqual(self.callback_args, [((), {}), ((), {})]) timer1.join() timer2.join() def _callback_spy(self, *args, **kwargs): self.callback_args.append((args[:], kwargs.copy())) self.callback_event.set() class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) class PyRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._PyRLock) @unittest.skipIf(threading._CRLock is None, 'RLock not implemented in C') class CRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._CRLock) class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) class ConditionAsRLockTests(lock_tests.RLockTests): # Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) class BarrierTests(lock_tests.BarrierTests): barriertype = staticmethod(threading.Barrier) class MiscTestCase(unittest.TestCase): def test__all__(self): restore_default_excepthook(self) extra = {"ThreadError"} not_exported = {'currentThread', 'activeCount'} support.check__all__(self, threading, ('threading', '_thread'), extra=extra, not_exported=not_exported) class InterruptMainTests(unittest.TestCase): def check_interrupt_main_with_signal_handler(self, signum): def handler(signum, frame): 1/0 old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) with self.assertRaises(ZeroDivisionError): _thread.interrupt_main() def check_interrupt_main_noerror(self, signum): handler = signal.getsignal(signum) try: # No exception should arise. signal.signal(signum, signal.SIG_IGN) _thread.interrupt_main(signum) signal.signal(signum, signal.SIG_DFL) _thread.interrupt_main(signum) finally: # Restore original handler signal.signal(signum, handler) def test_interrupt_main_subthread(self): # Calling start_new_thread with a function that executes interrupt_main # should raise KeyboardInterrupt upon completion. def call_interrupt(): _thread.interrupt_main() t = threading.Thread(target=call_interrupt) with self.assertRaises(KeyboardInterrupt): t.start() t.join() t.join() def test_interrupt_main_mainthread(self): # Make sure that if interrupt_main is called in main thread that # KeyboardInterrupt is raised instantly. 
with self.assertRaises(KeyboardInterrupt): _thread.interrupt_main() def test_interrupt_main_with_signal_handler(self): self.check_interrupt_main_with_signal_handler(signal.SIGINT) self.check_interrupt_main_with_signal_handler(signal.SIGTERM) def test_interrupt_main_noerror(self): self.check_interrupt_main_noerror(signal.SIGINT) self.check_interrupt_main_noerror(signal.SIGTERM) def test_interrupt_main_invalid_signal(self): self.assertRaises(ValueError, _thread.interrupt_main, -1) self.assertRaises(ValueError, _thread.interrupt_main, signal.NSIG) self.assertRaises(ValueError, _thread.interrupt_main, 1000000) @threading_helper.reap_threads def test_can_interrupt_tight_loops(self): cont = [True] started = [False] interrupted = [False] def worker(started, cont, interrupted): iterations = 100_000_000 started[0] = True while cont[0]: if iterations: iterations -= 1 else: return pass interrupted[0] = True t = threading.Thread(target=worker,args=(started, cont, interrupted)) t.start() while not started[0]: pass cont[0] = False t.join() self.assertTrue(interrupted[0]) class AtexitTests(unittest.TestCase): def test_atexit_output(self): rc, out, err = assert_python_ok("-c", """if True: import threading def run_last(): print('parrot') threading._register_atexit(run_last) """) self.assertFalse(err) self.assertEqual(out.strip(), b'parrot') def test_atexit_called_once(self): rc, out, err = assert_python_ok("-c", """if True: import threading from unittest.mock import Mock mock = Mock() threading._register_atexit(mock) mock.assert_not_called() # force early shutdown to ensure it was called once threading._shutdown() mock.assert_called_once() """) self.assertFalse(err) def test_atexit_after_shutdown(self): # The only way to do this is by registering an atexit within # an atexit, which is intended to raise an exception. 
rc, out, err = assert_python_ok("-c", """if True: import threading def func(): pass def run_last(): threading._register_atexit(func) threading._register_atexit(run_last) """) self.assertTrue(err) self.assertIn("RuntimeError: can't register atexit after shutdown", err.decode()) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/test_wsgiref.py000066400000000000000000000743111471441230600214070ustar00rootroot00000000000000from unittest import mock from test import support from test.support import socket_helper from test.support import warnings_helper from test.test_httpservers import NoLogRequestHandler from unittest import TestCase from wsgiref.util import setup_testing_defaults from wsgiref.headers import Headers from wsgiref.handlers import BaseHandler, BaseCGIHandler, SimpleHandler from wsgiref import util from wsgiref.validate import validator from wsgiref.simple_server import WSGIServer, WSGIRequestHandler from wsgiref.simple_server import make_server from http.client import HTTPConnection from io import StringIO, BytesIO, BufferedReader from socketserver import BaseServer from platform import python_implementation import os import re import signal import sys import threading import unittest class MockServer(WSGIServer): """Non-socket HTTP server""" def __init__(self, server_address, RequestHandlerClass): BaseServer.__init__(self, server_address, RequestHandlerClass) self.server_bind() def server_bind(self): host, port = self.server_address self.server_name = host self.server_port = port self.setup_environ() class MockHandler(WSGIRequestHandler): """Non-socket HTTP handler""" def setup(self): self.connection = self.request self.rfile, self.wfile = self.connection def finish(self): pass def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), ('Date','Mon, 05 Jun 2006 18:49:54 GMT') ]) return [b"Hello, world!"] def header_app(environ, start_response): start_response("200 OK", [ ('Content-Type', 'text/plain'), ('Date', 'Mon, 05 Jun 2006 18:49:54 GMT') ]) return [';'.join([ environ['HTTP_X_TEST_HEADER'], environ['QUERY_STRING'], environ['PATH_INFO'] ]).encode('iso-8859-1')] def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): server = make_server("", 80, app, MockServer, MockHandler) inp = BufferedReader(BytesIO(data)) out = BytesIO() olderr = sys.stderr err = sys.stderr = StringIO() try: server.finish_request((inp, out), ("127.0.0.1",8888)) finally: sys.stderr = olderr return out.getvalue(), err.getvalue() def compare_generic_iter(make_it,match): """Utility to compare a generic 2.1/2.2+ iterator with an iterable If running under Python 2.2+, this tests the iterator using iter()/next(), as well as __getitem__. 
'make_it' must be a function returning a fresh iterator to be tested (since this may test the iterator twice).""" it = make_it() n = 0 for item in match: if not it[n]==item: raise AssertionError n+=1 try: it[n] except IndexError: pass else: raise AssertionError("Too many items from __getitem__",it) try: iter, StopIteration except NameError: pass else: # Only test iter mode under 2.2+ it = make_it() if not iter(it) is it: raise AssertionError for item in match: if not next(it) == item: raise AssertionError try: next(it) except StopIteration: pass else: raise AssertionError("Too many items from .__next__()", it) class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): pyver = (python_implementation() + "/" + sys.version.split()[0]) self.assertEqual(out, ("HTTP/1.0 200 OK\r\n" "Server: WSGIServer/0.2 " + pyver +"\r\n" "Content-Type: text/plain\r\n" "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" + (has_length and "Content-Length: 13\r\n" or "") + "\r\n" "Hello, world!").encode("iso-8859-1") ) def test_plain_hello(self): out, err = run_amock() self.check_hello(out) def test_environ(self): request = ( b"GET /p%61th/?query=test HTTP/1.0\n" b"X-Test-Header: Python test \n" b"X-Test-Header: Python test 2\n" b"Content-Length: 0\n\n" ) out, err = run_amock(header_app, request) self.assertEqual( out.splitlines()[-1], b"Python test,Python test 2;query=test;/path/" ) def test_request_length(self): out, err = run_amock(data=b"GET " + (b"x" * 65537) + b" HTTP/1.0\n\n") self.assertEqual(out.splitlines()[0], b"HTTP/1.0 414 Request-URI Too Long") def test_validated_hello(self): out, err = run_amock(validator(hello_app)) # the middleware doesn't support len(), so content-length isn't there self.check_hello(out, has_length=False) def test_simple_validation_error(self): def bad_app(environ,start_response): start_response("200 OK", ('Content-Type','text/plain')) return ["Hello, world!"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual( err.splitlines()[-2], "AssertionError: Headers (('Content-Type', 'text/plain')) must" " be of type list: " ) def test_status_validation_errors(self): def create_bad_app(status): def bad_app(environ, start_response): start_response(status, [("Content-Type", "text/plain; charset=utf-8")]) return [b"Hello, world!"] return bad_app tests = [ ('200', 'AssertionError: Status must be at least 4 characters'), ('20X OK', 'AssertionError: Status message must begin w/3-digit code'), ('200OK', 'AssertionError: Status message must have a space after code'), ] for status, exc_message in tests: with self.subTest(status=status): out, err = run_amock(create_bad_app(status)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual(err.splitlines()[-2], exc_message) def test_wsgi_input(self): def bad_app(e,s): e["wsgi.input"].read() s("200 OK", [("Content-Type", "text/plain; charset=utf-8")]) return [b"data"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." 
)) self.assertEqual( err.splitlines()[-2], "AssertionError" ) def test_bytes_validation(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) return [b"data"] out, err = run_amock(validator(app)) self.assertTrue(err.endswith('"GET / HTTP/1.0" 200 4\n')) ver = sys.version.split()[0].encode('ascii') py = python_implementation().encode('ascii') pyver = py + b"/" + ver self.assertEqual( b"HTTP/1.0 200 OK\r\n" b"Server: WSGIServer/0.2 "+ pyver + b"\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Date: Wed, 24 Dec 2008 13:29:32 GMT\r\n" b"\r\n" b"data", out) def test_cp1252_url(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) # PEP3333 says environ variables are decoded as latin1. # Encode as latin1 to get original bytes return [e["PATH_INFO"].encode("latin1")] out, err = run_amock( validator(app), data=b"GET /\x80%80 HTTP/1.0") self.assertEqual( [ b"HTTP/1.0 200 OK", mock.ANY, b"Content-Type: text/plain", b"Date: Wed, 24 Dec 2008 13:29:32 GMT", b"", b"/\x80\x80", ], out.splitlines()) def test_interrupted_write(self): # BaseHandler._write() and _flush() have to write all data, even if # it takes multiple send() calls. Test this by interrupting a send() # call with a Unix signal. pthread_kill = support.get_attribute(signal, "pthread_kill") def app(environ, start_response): start_response("200 OK", []) return [b'\0' * support.SOCK_MAX_SIZE] class WsgiHandler(NoLogRequestHandler, WSGIRequestHandler): pass server = make_server(socket_helper.HOST, 0, app, handler_class=WsgiHandler) self.addCleanup(server.server_close) interrupted = threading.Event() def signal_handler(signum, frame): interrupted.set() original = signal.signal(signal.SIGUSR1, signal_handler) self.addCleanup(signal.signal, signal.SIGUSR1, original) received = None main_thread = threading.get_ident() def run_client(): http = HTTPConnection(*server.server_address) http.request("GET", "/") with http.getresponse() as response: response.read(100) # The main thread should now be blocking in a send() system # call. But in theory, it could get interrupted by other # signals, and then retried. So keep sending the signal in a # loop, in case an earlier signal happens to be delivered at # an inconvenient moment. 
while True: pthread_kill(main_thread, signal.SIGUSR1) if interrupted.wait(timeout=float(1)): break nonlocal received received = len(response.read()) http.close() background = threading.Thread(target=run_client) background.start() server.handle_request() background.join() self.assertEqual(received, support.SOCK_MAX_SIZE - 100) class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in} util.setup_testing_defaults(env) self.assertEqual(util.shift_path_info(env),part) self.assertEqual(env['PATH_INFO'],pi_out) self.assertEqual(env['SCRIPT_NAME'],sn_out) return env def checkDefault(self, key, value, alt=None): # Check defaulting when empty env = {} util.setup_testing_defaults(env) if isinstance(value, StringIO): self.assertIsInstance(env[key], StringIO) elif isinstance(value,BytesIO): self.assertIsInstance(env[key],BytesIO) else: self.assertEqual(env[key], value) # Check existing value env = {key:alt} util.setup_testing_defaults(env) self.assertIs(env[key], alt) def checkCrossDefault(self,key,value,**kw): util.setup_testing_defaults(kw) self.assertEqual(kw[key],value) def checkAppURI(self,uri,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.application_uri(kw),uri) def checkReqURI(self,uri,query=1,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) @warnings_helper.ignore_warnings(category=DeprecationWarning) def checkFW(self,text,size,match): def make_it(text=text,size=size): return util.FileWrapper(StringIO(text),size) compare_generic_iter(make_it,match) it = make_it() self.assertFalse(it.filelike.closed) for item in it: pass self.assertFalse(it.filelike.closed) it.close() self.assertTrue(it.filelike.closed) def test_filewrapper_getitem_deprecation(self): wrapper = util.FileWrapper(StringIO('foobar'), 3) with self.assertWarnsRegex(DeprecationWarning, r'Use iterator protocol instead'): # This should have returned 'bar'. 
self.assertEqual(wrapper[1], 'foo') def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') self.checkShift('/','', None, '/', '') self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') self.checkShift('/a/b', '//y', 'y', '/a/b/y', '') self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '') self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/') self.checkShift('/a/b', '///', '', '/a/b/', '') self.checkShift('/a/b', '/.//', '', '/a/b/', '') self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), ('SERVER_PORT', '80'), ('SERVER_PROTOCOL','HTTP/1.0'), ('HTTP_HOST','127.0.0.1'), ('REQUEST_METHOD','GET'), ('SCRIPT_NAME',''), ('PATH_INFO','/'), ('wsgi.version', (1,0)), ('wsgi.run_once', 0), ('wsgi.multithread', 0), ('wsgi.multiprocess', 0), ('wsgi.input', BytesIO()), ('wsgi.errors', StringIO()), ('wsgi.url_scheme','http'), ]: self.checkDefault(key,value) def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes") self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkAppURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkAppURI("http://spam.example.com:2071/", HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071") self.checkAppURI("http://spam.example.com/", SERVER_NAME="spam.example.com") self.checkAppURI("http://127.0.0.1/", HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com") self.checkAppURI("https://127.0.0.1/", HTTPS="on") self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000", HTTP_HOST=None) def testReqURIs(self): self.checkReqURI("http://127.0.0.1/") self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkReqURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam", SCRIPT_NAME="/spammity", PATH_INFO="/spam") self.checkReqURI("http://127.0.0.1/spammity/sp%E4m", SCRIPT_NAME="/spammity", PATH_INFO="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam;ham", SCRIPT_NAME="/spammity", PATH_INFO="/spam;ham") self.checkReqURI("http://127.0.0.1/spammity/spam;cookie=1234,5678", SCRIPT_NAME="/spammity", PATH_INFO="/spam;cookie=1234,5678") self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") self.checkReqURI("http://127.0.0.1/spammity/spam?s%E4y=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="s%E4y=ni") 
self.checkReqURI("http://127.0.0.1/spammity/spam", 0, SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") def testFileWrapper(self): self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10]) def testHopByHop(self): for hop in ( "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization " "TE Trailers Transfer-Encoding Upgrade" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertTrue(util.is_hop_by_hop(alt)) # Not comprehensive, just a few random header names for hop in ( "Accept Cache-Control Date Pragma Trailer Via Warning" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertFalse(util.is_hop_by_hop(alt)) class HeaderTests(TestCase): def testMappingInterface(self): test = [('x','y')] self.assertEqual(len(Headers()), 0) self.assertEqual(len(Headers([])),0) self.assertEqual(len(Headers(test[:])),1) self.assertEqual(Headers(test[:]).keys(), ['x']) self.assertEqual(Headers(test[:]).values(), ['y']) self.assertEqual(Headers(test[:]).items(), test) self.assertIsNot(Headers(test).items(), test) # must be copy! h = Headers() del h['foo'] # should not raise an error h['Foo'] = 'bar' for m in h.__contains__, h.get, h.get_all, h.__getitem__: self.assertTrue(m('foo')) self.assertTrue(m('Foo')) self.assertTrue(m('FOO')) self.assertFalse(m('bar')) self.assertEqual(h['foo'],'bar') h['foo'] = 'baz' self.assertEqual(h['FOO'],'baz') self.assertEqual(h.get_all('foo'),['baz']) self.assertEqual(h.get("foo","whee"), "baz") self.assertEqual(h.get("zoo","whee"), "whee") self.assertEqual(h.setdefault("foo","whee"), "baz") self.assertEqual(h.setdefault("zoo","whee"), "whee") self.assertEqual(h["foo"],"baz") self.assertEqual(h["zoo"],"whee") def testRequireList(self): self.assertRaises(TypeError, Headers, "foo") def testExtras(self): h = Headers() self.assertEqual(str(h),'\r\n') h.add_header('foo','bar',baz="spam") self.assertEqual(h['foo'], 'bar; baz="spam"') self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n') h.add_header('Foo','bar',cheese=None) self.assertEqual(h.get_all('foo'), ['bar; baz="spam"', 'bar; cheese']) self.assertEqual(str(h), 'foo: bar; baz="spam"\r\n' 'Foo: bar; cheese\r\n' '\r\n' ) class ErrorHandler(BaseCGIHandler): """Simple handler subclass for testing BaseHandler""" # BaseHandler records the OS environment at import time, but envvars # might have been changed later by other tests, which trips up # HandlerTests.testEnviron(). os_environ = dict(os.environ.items()) def __init__(self,**kw): setup_testing_defaults(kw) BaseCGIHandler.__init__( self, BytesIO(), BytesIO(), StringIO(), kw, multithread=True, multiprocess=True ) class TestHandler(ErrorHandler): """Simple handler subclass for testing BaseHandler, w/error passthru""" def handle_error(self): raise # for testing, we want to see what's happening class HandlerTests(TestCase): # testEnviron() can produce long error message maxDiff = 80 * 50 def testEnviron(self): os_environ = { # very basic environment 'HOME': '/my/home', 'PATH': '/my/path', 'LANG': 'fr_FR.UTF-8', # set some WSGI variables 'SCRIPT_NAME': 'test_script_name', 'SERVER_NAME': 'test_server_name', } with support.swap_attr(TestHandler, 'os_environ', os_environ): # override X and HOME variables handler = TestHandler(X="Y", HOME="/override/home") handler.setup_environ() # Check that wsgi_xxx attributes are copied to wsgi.xxx variables # of handler.environ for attr in ('version', 'multithread', 'multiprocess', 'run_once', 'file_wrapper'): self.assertEqual(getattr(handler, 'wsgi_' + attr), handler.environ['wsgi.' 
+ attr]) # Test handler.environ as a dict expected = {} setup_testing_defaults(expected) # Handler inherits os_environ variables which are not overridden # by SimpleHandler.add_cgi_vars() (SimpleHandler.base_env) for key, value in os_environ.items(): if key not in expected: expected[key] = value expected.update({ # X doesn't exist in os_environ "X": "Y", # HOME is overridden by TestHandler 'HOME': "/override/home", # overridden by setup_testing_defaults() "SCRIPT_NAME": "", "SERVER_NAME": "127.0.0.1", # set by BaseHandler.setup_environ() 'wsgi.input': handler.get_stdin(), 'wsgi.errors': handler.get_stderr(), 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.multithread': True, 'wsgi.multiprocess': True, 'wsgi.file_wrapper': util.FileWrapper, }) self.assertDictEqual(handler.environ, expected) def testCGIEnviron(self): h = BaseCGIHandler(None,None,None,{}) h.setup_environ() for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors': self.assertIn(key, h.environ) def testScheme(self): h=TestHandler(HTTPS="on"); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'https') h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') def testAbstractMethods(self): h = BaseHandler() for name in [ '_flush','get_stdin','get_stderr','add_cgi_vars' ]: self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") def testContentLength(self): # Demo one reason iteration is better than write()... ;) def trivial_app1(e,s): s('200 OK',[]) return [e['wsgi.url_scheme'].encode('iso-8859-1')] def trivial_app2(e,s): s('200 OK',[])(e['wsgi.url_scheme'].encode('iso-8859-1')) return [] def trivial_app3(e,s): s('200 OK',[]) return ['\u0442\u0435\u0441\u0442'.encode("utf-8")] def trivial_app4(e,s): # Simulate a response to a HEAD request s('200 OK',[('Content-Length', '12345')]) return [] h = TestHandler() h.run(trivial_app1) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 4\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app2) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app3) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 8\r\n' b'\r\n' b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82') h = TestHandler() h.run(trivial_app4) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 12345\r\n' b'\r\n') def testBasicErrorOutput(self): def non_error_app(e,s): s('200 OK',[]) return [] def error_app(e,s): raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(non_error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n").encode("iso-8859-1")) self.assertEqual(h.stderr.getvalue(),"") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: %s\r\n" "Content-Type: text/plain\r\n" "Content-Length: %d\r\n" "\r\n" % (h.error_status,len(h.error_body))).encode('iso-8859-1') + h.error_body) self.assertIn("AssertionError", h.stderr.getvalue()) def testErrorAfterOutput(self): MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) def testHeaderFormats(self): def non_error_app(e,s): 
s('200 OK',[]) return [] stdpat = ( r"HTTP/%s 200 OK\r\n" r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n" r"%s" r"Content-Length: 0\r\n" r"\r\n" ) shortpat = ( "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n" ).encode("iso-8859-1") for ssw in "FooBar/1.0", None: sw = ssw and "Server: %s\r\n" % ssw or "" for version in "1.0", "1.1": for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1": h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = False h.http_version = version h.server_software = ssw h.run(non_error_app) self.assertEqual(shortpat,h.stdout.getvalue()) h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = True h.http_version = version h.server_software = ssw h.run(non_error_app) if proto=="HTTP/0.9": self.assertEqual(h.stdout.getvalue(),b"") else: self.assertTrue( re.match((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()), ((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()) ) def testBytesData(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ]) return [b"data"] h = TestHandler() h.run(app) self.assertEqual(b"Status: 200 OK\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Content-Length: 4\r\n" b"\r\n" b"data", h.stdout.getvalue()) def testCloseOnError(self): side_effects = {'close_called': False} MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) class CrashyIterable(object): def __iter__(self): while True: yield b'blah' raise AssertionError("This should be caught by handler") def close(self): side_effects['close_called'] = True return CrashyIterable() h = ErrorHandler() h.run(error_app) self.assertEqual(side_effects['close_called'], True) def testPartialWrite(self): written = bytearray() class PartialWriter: def write(self, b): partial = b[:7] written.extend(partial) return len(partial) def flush(self): pass environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), PartialWriter(), sys.stderr, environ) msg = "should not do partial writes" with self.assertWarnsRegex(DeprecationWarning, msg): h.run(hello_app) self.assertEqual(b"HTTP/1.0 200 OK\r\n" b"Content-Type: text/plain\r\n" b"Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" b"Content-Length: 13\r\n" b"\r\n" b"Hello, world!", written) def testClientConnectionTerminations(self): environ = {"SERVER_PROTOCOL": "HTTP/1.0"} for exception in ( ConnectionAbortedError, BrokenPipeError, ConnectionResetError, ): with self.subTest(exception=exception): class AbortingWriter: def write(self, b): raise exception stderr = StringIO() h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertFalse(stderr.getvalue()) def testDontResetInternalStateOnException(self): class CustomException(ValueError): pass # We are raising CustomException here to trigger an exception # during the execution of SimpleHandler.finish_response(), so # we can easily test that the internal state of the handler is # preserved in case of an exception. class AbortingWriter: def write(self, b): raise CustomException stderr = StringIO() environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertIn("CustomException", stderr.getvalue()) # Test that the internal state of the handler is preserved. 
self.assertIsNotNone(h.result) self.assertIsNotNone(h.headers) self.assertIsNotNone(h.status) self.assertIsNotNone(h.environ) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.10/version000066400000000000000000000000101471441230600177210ustar00rootroot000000000000003.10.12 gevent-24.11.1/src/greentest/3.11/000077500000000000000000000000001471441230600163235ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/allsans.pem000066400000000000000000000235711471441230600204730ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 
8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/badcert.pem000066400000000000000000000036101471441230600204320ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L 
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/badkey.pem000066400000000000000000000041621471441230600202700ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/capath/000077500000000000000000000000001471441230600175635ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/capath/4e1295a3.0000066400000000000000000000014561471441230600207270ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD 
VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/capath/5ed36f99.0000066400000000000000000000050111471441230600210170ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.11/capath/6e88d7b8.0000066400000000000000000000014561471441230600210310ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/capath/99d0fa06.0000066400000000000000000000050111471441230600210030ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk 
zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/capath/b1930218.0000066400000000000000000000030721471441230600206370ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/capath/ceff1710.0000066400000000000000000000030721471441230600210620ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v 
dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/000077500000000000000000000000001471441230600201125ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/certdata/allsans.pem000066400000000000000000000235711471441230600222620ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 
7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs 
bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/badcert.pem000066400000000000000000000036101471441230600222210ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU 
D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/badkey.pem000066400000000000000000000041621471441230600220570ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/000077500000000000000000000000001471441230600213525ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/certdata/capath/4e1295a3.0000066400000000000000000000014561471441230600225160ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X 
DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/5ed36f99.0000066400000000000000000000050111471441230600226060ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/6e88d7b8.0000066400000000000000000000014561471441230600226200ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv 
bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/99d0fa06.0000066400000000000000000000050111471441230600225720ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/b1930218.0000066400000000000000000000030721471441230600224260ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- 
MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/capath/ceff1710.0000066400000000000000000000030721471441230600226510ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/ffdh3072.pem000066400000000000000000000042441471441230600220440ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 
4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.11/certdata/idnsans.pem000066400000000000000000000233321471441230600222570ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn 
/+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl 
Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV 
FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycert.passwd.pem000066400000000000000000000102011471441230600235550ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 
T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycert.pem000066400000000000000000000077321471441230600222740ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== 
-----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycert2.pem000066400000000000000000000077561471441230600223640ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA 
jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycert3.pem000066400000000000000000000223501471441230600223500ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu 
hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software 
Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx 
P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycert4.pem000066400000000000000000000223661471441230600223600ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 
1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD 
vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/keycertecc.pem000066400000000000000000000130051471441230600227350ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: 
sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/make_ssl_certs.py000066400000000000000000000223751471441230600234730ustar00rootroot00000000000000"""Make the custom certificate and private key files used by test_ssl and friends.""" import os import pprint import shutil import tempfile from subprocess import * startdate = "20180829142316Z" enddate = "20371028142316Z" req_template = """ [ default ] base_url = http://testca.pythontest.net/testca [req] distinguished_name = req_distinguished_name prompt = no [req_distinguished_name] C = XY L = Castle Anthrax O = Python Software Foundation CN = {hostname} [req_x509_extensions_nosan] 
[req_x509_extensions_simple] subjectAltName = @san [req_x509_extensions_full] subjectAltName = @san keyUsage = critical,keyEncipherment,digitalSignature extendedKeyUsage = serverAuth,clientAuth basicConstraints = critical,CA:false subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer:always authorityInfoAccess = @issuer_ocsp_info crlDistributionPoints = @crl_info [ issuer_ocsp_info ] caIssuers;URI.0 = $base_url/pycacert.cer OCSP;URI.0 = $base_url/ocsp/ [ crl_info ] URI.0 = $base_url/revocation.crl [san] DNS.1 = {hostname} {extra_san} [dir_sect] C = XY L = Castle Anthrax O = Python Software Foundation CN = dirname example [princ_name] realm = EXP:0, GeneralString:KERBEROS.REALM principal_name = EXP:1, SEQUENCE:principal_seq [principal_seq] name_type = EXP:0, INTEGER:1 name_string = EXP:1, SEQUENCE:principals [principals] princ1 = GeneralString:username [ ca ] default_ca = CA_default [ CA_default ] dir = cadir database = $dir/index.txt crlnumber = $dir/crl.txt default_md = sha256 startdate = {startdate} default_startdate = {startdate} enddate = {enddate} default_enddate = {enddate} default_days = 7000 default_crl_days = 7000 certificate = pycacert.pem private_key = pycakey.pem serial = $dir/serial RANDFILE = $dir/.rand policy = policy_match [ policy_match ] countryName = match stateOrProvinceName = optional organizationName = match organizationalUnitName = optional commonName = supplied emailAddress = optional [ policy_anything ] countryName = optional stateOrProvinceName = optional localityName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional [ v3_ca ] subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer basicConstraints = CA:true """ here = os.path.abspath(os.path.dirname(__file__)) def make_cert_key(hostname, sign=False, extra_san='', ext='req_x509_extensions_full', key='rsa:3072'): print("creating cert for " + hostname) tempnames = [] for i in range(3): with tempfile.NamedTemporaryFile(delete=False) as f: tempnames.append(f.name) req_file, cert_file, key_file = tempnames try: req = req_template.format( hostname=hostname, extra_san=extra_san, startdate=startdate, enddate=enddate ) with open(req_file, 'w') as f: f.write(req) args = ['req', '-new', '-nodes', '-days', '7000', '-newkey', key, '-keyout', key_file, '-extensions', ext, '-config', req_file] if sign: with tempfile.NamedTemporaryFile(delete=False) as f: tempnames.append(f.name) reqfile = f.name args += ['-out', reqfile ] else: args += ['-x509', '-out', cert_file ] check_call(['openssl'] + args) if sign: args = [ 'ca', '-config', req_file, '-extensions', ext, '-out', cert_file, '-outdir', 'cadir', '-policy', 'policy_anything', '-batch', '-infiles', reqfile ] check_call(['openssl'] + args) with open(cert_file, 'r') as f: cert = f.read() with open(key_file, 'r') as f: key = f.read() return cert, key finally: for name in tempnames: os.remove(name) TMP_CADIR = 'cadir' def unmake_ca(): shutil.rmtree(TMP_CADIR) def make_ca(): os.mkdir(TMP_CADIR) with open(os.path.join('cadir','index.txt'),'a+') as f: pass # empty file with open(os.path.join('cadir','crl.txt'),'a+') as f: f.write("00") with open(os.path.join('cadir','index.txt.attr'),'w+') as f: f.write('unique_subject = no') # random start value for serial numbers with open(os.path.join('cadir','serial'), 'w') as f: f.write('CB2D80995A69525B\n') with tempfile.NamedTemporaryFile("w") as t: req = req_template.format( hostname='our-ca-server', extra_san='', startdate=startdate, 
enddate=enddate ) t.write(req) t.flush() with tempfile.NamedTemporaryFile() as f: args = ['req', '-config', t.name, '-new', '-nodes', '-newkey', 'rsa:3072', '-keyout', 'pycakey.pem', '-out', f.name, '-subj', '/C=XY/L=Castle Anthrax/O=Python Software Foundation CA/CN=our-ca-server'] check_call(['openssl'] + args) args = ['ca', '-config', t.name, '-out', 'pycacert.pem', '-batch', '-outdir', TMP_CADIR, '-keyfile', 'pycakey.pem', '-selfsign', '-extensions', 'v3_ca', '-infiles', f.name ] check_call(['openssl'] + args) args = ['ca', '-config', t.name, '-gencrl', '-out', 'revocation.crl'] check_call(['openssl'] + args) # capath hashes depend on subject! check_call([ 'openssl', 'x509', '-in', 'pycacert.pem', '-out', 'capath/ceff1710.0' ]) shutil.copy('capath/ceff1710.0', 'capath/b1930218.0') def print_cert(path): import _ssl pprint.pprint(_ssl._test_decode_cert(path)) if __name__ == '__main__': os.chdir(here) cert, key = make_cert_key('localhost', ext='req_x509_extensions_simple') with open('ssl_cert.pem', 'w') as f: f.write(cert) with open('ssl_key.pem', 'w') as f: f.write(key) print("password protecting ssl_key.pem in ssl_key.passwd.pem") check_call(['openssl','pkey','-in','ssl_key.pem','-out','ssl_key.passwd.pem','-aes256','-passout','pass:somepass']) check_call(['openssl','pkey','-in','ssl_key.pem','-out','keycert.passwd.pem','-aes256','-passout','pass:somepass']) with open('keycert.pem', 'w') as f: f.write(key) f.write(cert) with open('keycert.passwd.pem', 'a+') as f: f.write(cert) # For certificate matching tests make_ca() cert, key = make_cert_key('fakehostname', ext='req_x509_extensions_simple') with open('keycert2.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('localhost', sign=True) with open('keycert3.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('fakehostname', sign=True) with open('keycert4.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key( 'localhost-ecc', sign=True, key='param:secp384r1.pem' ) with open('keycertecc.pem', 'w') as f: f.write(key) f.write(cert) extra_san = [ 'otherName.1 = 1.2.3.4;UTF8:some other identifier', 'otherName.2 = 1.3.6.1.5.2.2;SEQUENCE:princ_name', 'email.1 = user@example.org', 'DNS.2 = www.example.org', # GEN_X400 'dirName.1 = dir_sect', # GEN_EDIPARTY 'URI.1 = https://www.python.org/', 'IP.1 = 127.0.0.1', 'IP.2 = ::1', 'RID.1 = 1.2.3.4.5', ] cert, key = make_cert_key('allsans', sign=True, extra_san='\n'.join(extra_san)) with open('allsans.pem', 'w') as f: f.write(key) f.write(cert) extra_san = [ # könig (king) 'DNS.2 = xn--knig-5qa.idn.pythontest.net', # königsgäßchen (king's alleyway) 'DNS.3 = xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'DNS.4 = xn--knigsgchen-b4a3dun.idna2008.pythontest.net', # βόλοσ (marble) 'DNS.5 = xn--nxasmq6b.idna2003.pythontest.net', 'DNS.6 = xn--nxasmm1c.idna2008.pythontest.net', ] # IDN SANS, signed cert, key = make_cert_key('idnsans', sign=True, extra_san='\n'.join(extra_san)) with open('idnsans.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('nosan', sign=True, ext='req_x509_extensions_nosan') with open('nosan.pem', 'w') as f: f.write(key) f.write(cert) unmake_ca() print("update Lib/test/test_ssl.py and Lib/test/test_asyncio/utils.py") print_cert('keycert.pem') print_cert('keycert3.pem') gevent-24.11.1/src/greentest/3.11/certdata/nokia.pem000066400000000000000000000036031471441230600217200ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- 
MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/nosan.pem000066400000000000000000000170471471441230600217440ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf 
SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 
6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/nullbytecert.pem000066400000000000000000000124731471441230600233400ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: 
************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/nullcert.pem000066400000000000000000000000001471441230600224330ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/certdata/pycacert.pem000066400000000000000000000130401471441230600224250ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: 
Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi 
bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/pycakey.pem000066400000000000000000000046641471441230600222740ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/certdata/revocation.crl000066400000000000000000000014401471441230600227640ustar00rootroot00000000000000-----BEGIN 
X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.11/certdata/secp384r1.pem000066400000000000000000000004001471441230600222430ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.11/certdata/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600264560ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.11/certdata/ssl_cert.pem000066400000000000000000000030421471441230600224320ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/certdata/ssl_key.passwd.pem000066400000000000000000000051361471441230600235730ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX 
LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/certdata/ssl_key.pem000066400000000000000000000046701471441230600222750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H 
kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/certdata/talos-2019-0758.pem000066400000000000000000000024621471441230600227350ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/ffdh3072.pem000066400000000000000000000042441471441230600202550ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu 
N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.11/idnsans.pem000066400000000000000000000233321471441230600204700ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn /+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 
78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 
0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/keycert.passwd.pem000066400000000000000000000102011471441230600217660ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 
GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/keycert.pem000066400000000000000000000077321471441230600205050ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT 
sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.11/keycert2.pem000066400000000000000000000077561471441230600205750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ 
n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/keycert3.pem000066400000000000000000000223501471441230600205610ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, 
L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/keycert4.pem000066400000000000000000000223661471441230600205710ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE 
Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 
76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/keycertecc.pem000066400000000000000000000130051471441230600211460ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 
0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV 
HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/nokia.pem000066400000000000000000000036031471441230600201310ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/nosan.pem000066400000000000000000000170471471441230600201550ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd 
LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: 
aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/nullbytecert.pem000066400000000000000000000124731471441230600215510ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not 
Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA 
oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/nullcert.pem000066400000000000000000000000001471441230600206440ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.11/pycacert.pem000066400000000000000000000130401471441230600206360ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 
3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/pycakey.pem000066400000000000000000000046641471441230600205050ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML 
rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/revocation.crl000066400000000000000000000014401471441230600211750ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.11/secp384r1.pem000066400000000000000000000004001471441230600204540ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.11/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600246670ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK 
DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/signalinterproctester.py000066400000000000000000000061171471441230600233340ustar00rootroot00000000000000import gc import os import signal import subprocess import sys import time import unittest from test import support class SIGUSR1Exception(Exception): pass class InterProcessSignalTests(unittest.TestCase): def setUp(self): self.got_signals = {'SIGHUP': 0, 'SIGUSR1': 0, 'SIGALRM': 0} def sighup_handler(self, signum, frame): self.got_signals['SIGHUP'] += 1 def sigusr1_handler(self, signum, frame): self.got_signals['SIGUSR1'] += 1 raise SIGUSR1Exception def wait_signal(self, child, signame): if child is not None: # This wait should be interrupted by exc_class # (if set) child.wait() timeout = support.SHORT_TIMEOUT deadline = time.monotonic() + timeout while time.monotonic() < deadline: if self.got_signals[signame]: return signal.pause() self.fail('signal %s not received after %s seconds' % (signame, timeout)) def subprocess_send_signal(self, pid, signame): code = 'import os, signal; os.kill(%s, signal.%s)' % (pid, signame) args = [sys.executable, '-I', '-c', code] return subprocess.Popen(args) def test_interprocess_signal(self): # Install handlers. This function runs in a sub-process, so we # don't worry about re-setting the default handlers. signal.signal(signal.SIGHUP, self.sighup_handler) signal.signal(signal.SIGUSR1, self.sigusr1_handler) signal.signal(signal.SIGUSR2, signal.SIG_IGN) signal.signal(signal.SIGALRM, signal.default_int_handler) # Let the sub-processes know who to send signals to. pid = str(os.getpid()) with self.subprocess_send_signal(pid, "SIGHUP") as child: self.wait_signal(child, 'SIGHUP') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 0, 'SIGALRM': 0}) # gh-110033: Make sure that the subprocess.Popen is deleted before # the next test which raises an exception. Otherwise, the exception # may be raised when Popen.__del__() is executed and so be logged # as "Exception ignored in: ". 
child = None gc.collect() with self.assertRaises(SIGUSR1Exception): with self.subprocess_send_signal(pid, "SIGUSR1") as child: self.wait_signal(child, 'SIGUSR1') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) with self.subprocess_send_signal(pid, "SIGUSR2") as child: # Nothing should happen: SIGUSR2 is ignored child.wait() try: with self.assertRaises(KeyboardInterrupt): signal.alarm(1) self.wait_signal(None, 'SIGALRM') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) finally: signal.alarm(0) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/ssl_cert.pem000066400000000000000000000030421471441230600206430ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/ssl_key.passwd.pem000066400000000000000000000051361471441230600220040ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 
c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/ssl_key.pem000066400000000000000000000046701471441230600205060ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO 
yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.11/talos-2019-0758.pem000066400000000000000000000024621471441230600211460ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.11/test_asyncore.py000066400000000000000000000641541471441230600215710ustar00rootroot00000000000000import unittest import select import os import socket import sys import time import errno import struct import threading from test import support from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper from io import BytesIO if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") support.requires_working_socket(module=True) asyncore = warnings_helper.import_deprecated('asyncore') HAS_UNIX_SOCKETS = hasattr(socket, 'AF_UNIX') class dummysocket: def __init__(self): self.closed = False def close(self): self.closed = True def fileno(self): return 42 class dummychannel: def __init__(self): self.socket = dummysocket() def close(self): self.socket.close() class exitingdummy: def __init__(self): pass def handle_read_event(self): raise asyncore.ExitNow() handle_write_event = handle_read_event handle_close = handle_read_event handle_expt_event = handle_read_event class crashingdummy: def __init__(self): self.error_handled = False def handle_read_event(self): raise Exception() handle_write_event = handle_read_event handle_close = handle_read_event handle_expt_event = handle_read_event def handle_error(self): self.error_handled = True # used when testing senders; just collects what it gets until 
newline is sent def capture_server(evt, buf, serv): try: serv.listen() conn, addr = serv.accept() except TimeoutError: pass else: n = 200 start = time.monotonic() while n > 0 and time.monotonic() - start < 3.0: r, w, e = select.select([conn], [], [], 0.1) if r: n -= 1 data = conn.recv(10) # keep everything except for the newline terminator buf.write(data.replace(b'\n', b'')) if b'\n' in data: break time.sleep(0.01) conn.close() finally: serv.close() evt.set() def bind_af_aware(sock, addr): """Helper function to bind a socket according to its family.""" if HAS_UNIX_SOCKETS and sock.family == socket.AF_UNIX: # Make sure the path doesn't exist. os_helper.unlink(addr) socket_helper.bind_unix_socket(sock, addr) else: sock.bind(addr) class HelperFunctionTests(unittest.TestCase): def test_readwriteexc(self): # Check exception handling behavior of read, write and _exception # check that ExitNow exceptions in the object handler method # bubbles all the way up through asyncore read/write/_exception calls tr1 = exitingdummy() self.assertRaises(asyncore.ExitNow, asyncore.read, tr1) self.assertRaises(asyncore.ExitNow, asyncore.write, tr1) self.assertRaises(asyncore.ExitNow, asyncore._exception, tr1) # check that an exception other than ExitNow in the object handler # method causes the handle_error method to get called tr2 = crashingdummy() asyncore.read(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore.write(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore._exception(tr2) self.assertEqual(tr2.error_handled, True) # asyncore.readwrite uses constants in the select module that # are not present in Windows systems (see this thread: # http://mail.python.org/pipermail/python-list/2001-October/109973.html) # These constants should be present as long as poll is available @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') def test_readwrite(self): # Check that correct methods are called by readwrite() attributes = ('read', 'expt', 'write', 'closed', 'error_handled') expected = ( (select.POLLIN, 'read'), (select.POLLPRI, 'expt'), (select.POLLOUT, 'write'), (select.POLLERR, 'closed'), (select.POLLHUP, 'closed'), (select.POLLNVAL, 'closed'), ) class testobj: def __init__(self): self.read = False self.write = False self.closed = False self.expt = False self.error_handled = False def handle_read_event(self): self.read = True def handle_write_event(self): self.write = True def handle_close(self): self.closed = True def handle_expt_event(self): self.expt = True def handle_error(self): self.error_handled = True for flag, expectedattr in expected: tobj = testobj() self.assertEqual(getattr(tobj, expectedattr), False) asyncore.readwrite(tobj, flag) # Only the attribute modified by the routine we expect to be # called should be True. 
for attr in attributes: self.assertEqual(getattr(tobj, attr), attr==expectedattr) # check that ExitNow exceptions in the object handler method # bubbles all the way up through asyncore readwrite call tr1 = exitingdummy() self.assertRaises(asyncore.ExitNow, asyncore.readwrite, tr1, flag) # check that an exception other than ExitNow in the object handler # method causes the handle_error method to get called tr2 = crashingdummy() self.assertEqual(tr2.error_handled, False) asyncore.readwrite(tr2, flag) self.assertEqual(tr2.error_handled, True) def test_closeall(self): self.closeall_check(False) def test_closeall_default(self): self.closeall_check(True) def closeall_check(self, usedefault): # Check that close_all() closes everything in a given map l = [] testmap = {} for i in range(10): c = dummychannel() l.append(c) self.assertEqual(c.socket.closed, False) testmap[i] = c if usedefault: socketmap = asyncore.socket_map try: asyncore.socket_map = testmap asyncore.close_all() finally: testmap, asyncore.socket_map = asyncore.socket_map, socketmap else: asyncore.close_all(testmap) self.assertEqual(len(testmap), 0) for c in l: self.assertEqual(c.socket.closed, True) def test_compact_traceback(self): try: raise Exception("I don't like spam!") except: real_t, real_v, real_tb = sys.exc_info() r = asyncore.compact_traceback() else: self.fail("Expected exception") (f, function, line), t, v, info = r self.assertEqual(os.path.split(f)[-1], 'test_asyncore.py') self.assertEqual(function, 'test_compact_traceback') self.assertEqual(t, real_t) self.assertEqual(v, real_v) self.assertEqual(info, '[%s|%s|%s]' % (f, function, line)) class DispatcherTests(unittest.TestCase): def setUp(self): pass def tearDown(self): asyncore.close_all() def test_basic(self): d = asyncore.dispatcher() self.assertEqual(d.readable(), True) self.assertEqual(d.writable(), True) def test_repr(self): d = asyncore.dispatcher() self.assertEqual(repr(d), '' % id(d)) def test_log(self): d = asyncore.dispatcher() # capture output of dispatcher.log() (to stderr) l1 = "Lovely spam! Wonderful spam!" l2 = "I don't like spam!" with support.captured_stderr() as stderr: d.log(l1) d.log(l2) lines = stderr.getvalue().splitlines() self.assertEqual(lines, ['log: %s' % l1, 'log: %s' % l2]) def test_log_info(self): d = asyncore.dispatcher() # capture output of dispatcher.log_info() (to stdout via print) l1 = "Have you got anything without spam?" l2 = "Why can't she have egg bacon spam and sausage?" l3 = "THAT'S got spam in it!" 
with support.captured_stdout() as stdout: d.log_info(l1, 'EGGS') d.log_info(l2) d.log_info(l3, 'SPAM') lines = stdout.getvalue().splitlines() expected = ['EGGS: %s' % l1, 'info: %s' % l2, 'SPAM: %s' % l3] self.assertEqual(lines, expected) def test_unhandled(self): d = asyncore.dispatcher() d.ignore_log_types = () # capture output of dispatcher.log_info() (to stdout via print) with support.captured_stdout() as stdout: d.handle_expt() d.handle_read() d.handle_write() d.handle_connect() lines = stdout.getvalue().splitlines() expected = ['warning: unhandled incoming priority event', 'warning: unhandled read event', 'warning: unhandled write event', 'warning: unhandled connect event'] self.assertEqual(lines, expected) def test_strerror(self): # refers to bug #8573 err = asyncore._strerror(errno.EPERM) if hasattr(os, 'strerror'): self.assertEqual(err, os.strerror(errno.EPERM)) err = asyncore._strerror(-1) self.assertTrue(err != "") class dispatcherwithsend_noread(asyncore.dispatcher_with_send): def readable(self): return False def handle_connect(self): pass class DispatcherWithSendTests(unittest.TestCase): def setUp(self): pass def tearDown(self): asyncore.close_all() @threading_helper.reap_threads def test_send(self): evt = threading.Event() sock = socket.socket() sock.settimeout(3) port = socket_helper.bind_port(sock) cap = BytesIO() args = (evt, cap, sock) t = threading.Thread(target=capture_server, args=args) t.start() try: # wait a little longer for the server to initialize (it sometimes # refuses connections on slow machines without this wait) time.sleep(0.2) data = b"Suppose there isn't a 16-ton weight?" d = dispatcherwithsend_noread() d.create_socket() d.connect((socket_helper.HOST, port)) # give time for socket to connect time.sleep(0.1) d.send(data) d.send(data) d.send(b'\n') n = 1000 while d.out_buffer and n > 0: asyncore.poll() n -= 1 evt.wait() self.assertEqual(cap.getvalue(), data*2) finally: threading_helper.join_thread(t) @unittest.skipUnless(hasattr(asyncore, 'file_wrapper'), 'asyncore.file_wrapper required') class FileWrapperTest(unittest.TestCase): def setUp(self): self.d = b"It's not dead, it's sleeping!" with open(os_helper.TESTFN, 'wb') as file: file.write(self.d) def tearDown(self): os_helper.unlink(os_helper.TESTFN) def test_recv(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) w = asyncore.file_wrapper(fd) os.close(fd) self.assertNotEqual(w.fd, fd) self.assertNotEqual(w.fileno(), fd) self.assertEqual(w.recv(13), b"It's not dead") self.assertEqual(w.read(6), b", it's") w.close() self.assertRaises(OSError, w.read, 1) def test_send(self): d1 = b"Come again?" d2 = b"I want to buy some cheese." 
fd = os.open(os_helper.TESTFN, os.O_WRONLY | os.O_APPEND) w = asyncore.file_wrapper(fd) os.close(fd) w.write(d1) w.send(d2) w.close() with open(os_helper.TESTFN, 'rb') as file: self.assertEqual(file.read(), self.d + d1 + d2) @unittest.skipUnless(hasattr(asyncore, 'file_dispatcher'), 'asyncore.file_dispatcher required') def test_dispatcher(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) data = [] class FileDispatcher(asyncore.file_dispatcher): def handle_read(self): data.append(self.recv(29)) s = FileDispatcher(fd) os.close(fd) asyncore.loop(timeout=0.01, use_poll=True, count=2) self.assertEqual(b"".join(data), self.d) def test_resource_warning(self): # Issue #11453 fd = os.open(os_helper.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) with warnings_helper.check_warnings(('', ResourceWarning)): f = None support.gc_collect() def test_close_twice(self): fd = os.open(os_helper.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) os.close(f.fd) # file_wrapper dupped fd with self.assertRaises(OSError): f.close() self.assertEqual(f.fd, -1) # calling close twice should not fail f.close() class BaseTestHandler(asyncore.dispatcher): def __init__(self, sock=None): asyncore.dispatcher.__init__(self, sock) self.flag = False def handle_accept(self): raise Exception("handle_accept not supposed to be called") def handle_accepted(self): raise Exception("handle_accepted not supposed to be called") def handle_connect(self): raise Exception("handle_connect not supposed to be called") def handle_expt(self): raise Exception("handle_expt not supposed to be called") def handle_close(self): raise Exception("handle_close not supposed to be called") def handle_error(self): raise class BaseServer(asyncore.dispatcher): """A server which listens on an address and dispatches the connection to a handler. 
""" def __init__(self, family, addr, handler=BaseTestHandler): asyncore.dispatcher.__init__(self) self.create_socket(family) self.set_reuse_addr() bind_af_aware(self.socket, addr) self.listen(5) self.handler = handler @property def address(self): return self.socket.getsockname() def handle_accepted(self, sock, addr): self.handler(sock) def handle_error(self): raise class BaseClient(BaseTestHandler): def __init__(self, family, address): BaseTestHandler.__init__(self) self.create_socket(family) self.connect(address) def handle_connect(self): pass class BaseTestAPI: def tearDown(self): asyncore.close_all(ignore_all=True) def loop_waiting_for_flag(self, instance, timeout=5): timeout = float(timeout) / 100 count = 100 while asyncore.socket_map and count > 0: asyncore.loop(timeout=0.01, count=1, use_poll=self.use_poll) if instance.flag: return count -= 1 time.sleep(timeout) self.fail("flag not set") def test_handle_connect(self): # make sure handle_connect is called on connect() class TestClient(BaseClient): def handle_connect(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_accept(self): # make sure handle_accept() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_accepted(self): # make sure handle_accepted() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): asyncore.dispatcher.handle_accept(self) def handle_accepted(self, sock, addr): sock.close() self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_read(self): # make sure handle_read is called on data received class TestClient(BaseClient): def handle_read(self): self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.send(b'x' * 1024) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_write(self): # make sure handle_write is called class TestClient(BaseClient): def handle_write(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close(self): # make sure handle_close is called when the other end closes # the connection class TestClient(BaseClient): def handle_read(self): # in order to make handle_close be called we are supposed # to make at least one recv() call self.recv(1024) def handle_close(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.close() server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close_after_conn_broken(self): # Check 
that ECONNRESET/EPIPE is correctly handled (issues #5661 and # #11265). data = b'\0' * 128 class TestClient(BaseClient): def handle_write(self): self.send(data) def handle_close(self): self.flag = True self.close() def handle_expt(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def handle_read(self): self.recv(len(data)) self.close() def writable(self): return False server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) @unittest.skipIf(sys.platform.startswith("sunos"), "OOB support is broken on Solaris") def test_handle_expt(self): # Make sure handle_expt is called on OOB data received. # Note: this might fail on some platforms as OOB data is # tenuously supported and rarely used. if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") if sys.platform == "darwin" and self.use_poll: self.skipTest("poll may fail on macOS; see issue #28087") class TestClient(BaseClient): def handle_expt(self): self.socket.recv(1024, socket.MSG_OOB) self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.socket.send(bytes(chr(244), 'latin-1'), socket.MSG_OOB) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_error(self): class TestClient(BaseClient): def handle_write(self): 1.0 / 0 def handle_error(self): self.flag = True try: raise except ZeroDivisionError: pass else: raise Exception("exception not raised") server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_connection_attributes(self): server = BaseServer(self.family, self.addr) client = BaseClient(self.family, server.address) # we start disconnected self.assertFalse(server.connected) self.assertTrue(server.accepting) # this can't be taken for granted across all platforms #self.assertFalse(client.connected) self.assertFalse(client.accepting) # execute some loops so that client connects to server asyncore.loop(timeout=0.01, use_poll=self.use_poll, count=100) self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertTrue(client.connected) self.assertFalse(client.accepting) # disconnect the client client.close() self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertFalse(client.connected) self.assertFalse(client.accepting) # stop serving server.close() self.assertFalse(server.connected) self.assertFalse(server.accepting) def test_create_socket(self): s = asyncore.dispatcher() s.create_socket(self.family) self.assertEqual(s.socket.type, socket.SOCK_STREAM) self.assertEqual(s.socket.family, self.family) self.assertEqual(s.socket.gettimeout(), 0) self.assertFalse(s.socket.get_inheritable()) def test_bind(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") s1 = asyncore.dispatcher() s1.create_socket(self.family) s1.bind(self.addr) s1.listen(5) port = s1.socket.getsockname()[1] s2 = asyncore.dispatcher() s2.create_socket(self.family) # EADDRINUSE indicates the socket was correctly bound self.assertRaises(OSError, s2.bind, (self.addr[0], port)) def test_set_reuse_addr(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") with socket.socket(self.family) as sock: try: 
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) except OSError: unittest.skip("SO_REUSEADDR not supported on this platform") else: # if SO_REUSEADDR succeeded for sock we expect asyncore # to do the same s = asyncore.dispatcher(socket.socket(self.family)) self.assertFalse(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) s.socket.close() s.create_socket(self.family) s.set_reuse_addr() self.assertTrue(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) @threading_helper.reap_threads def test_quick_connect(self): # see: http://bugs.python.org/issue10340 if self.family not in (socket.AF_INET, getattr(socket, "AF_INET6", object())): self.skipTest("test specific to AF_INET and AF_INET6") server = BaseServer(self.family, self.addr) # run the thread 500 ms: the socket should be connected in 200 ms t = threading.Thread(target=lambda: asyncore.loop(timeout=0.1, count=5)) t.start() try: with socket.socket(self.family, socket.SOCK_STREAM) as s: s.settimeout(.2) s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) try: s.connect(server.address) except OSError: pass finally: threading_helper.join_thread(t) class TestAPI_UseIPv4Sockets(BaseTestAPI): family = socket.AF_INET addr = (socket_helper.HOST, 0) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 support required') class TestAPI_UseIPv6Sockets(BaseTestAPI): family = socket.AF_INET6 addr = (socket_helper.HOSTv6, 0) @unittest.skipUnless(HAS_UNIX_SOCKETS, 'Unix sockets required') class TestAPI_UseUnixSockets(BaseTestAPI): if HAS_UNIX_SOCKETS: family = socket.AF_UNIX addr = os_helper.TESTFN def tearDown(self): os_helper.unlink(self.addr) BaseTestAPI.tearDown(self) class TestAPI_UseIPv4Select(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv4Poll(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = True class TestAPI_UseIPv6Select(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv6Poll(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = True class TestAPI_UseUnixSocketsSelect(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseUnixSocketsPoll(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = True if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_context.py000066400000000000000000000754361471441230600214370ustar00rootroot00000000000000import concurrent.futures import contextvars import functools import gc import random import time import unittest import weakref from test import support from test.support import threading_helper try: from _testcapi import hamt except ImportError: hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): def test_context_var_new_1(self): with self.assertRaisesRegex(TypeError, 'takes exactly 1'): contextvars.ContextVar() with self.assertRaisesRegex(TypeError, 'must be a str'): contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = 
contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = contextvars.ContextVar('a', default=123) self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', repr(t)) def test_context_subclassing_1(self): with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContextVar(contextvars.ContextVar): # Potentially we might want ContextVars to be subclassable. pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContext(contextvars.Context): pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyToken(contextvars.Token): pass def test_context_new_1(self): with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1, a=1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) def test_context_run_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'missing 1 required'): ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) 
self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 42) ctx.run(fun) def test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context @threading_helper.requires_working_threading() def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num tp = 
concurrent.futures.ThreadPoolExecutor(max_workers=10) try: results = list(tp.map(sub, range(10))) finally: tp.shutdown() self.assertEqual(results, list(range(10))) # HAMT Tests class HashKey: _crasher = None def __init__(self, hash, name, *, error_on_eq_to=None): assert hash != -1 self.name = name self.hash = hash self.error_on_eq_to = error_on_eq_to def __repr__(self): return f'' def __hash__(self): if self._crasher is not None and self._crasher.error_on_hash: raise HashingError return self.hash def __eq__(self, other): if not isinstance(other, HashKey): return NotImplemented if self._crasher is not None and self._crasher.error_on_eq: raise EqError if self.error_on_eq_to is not None and self.error_on_eq_to is other: raise ValueError(f'cannot compare {self!r} to {other!r}') if other.error_on_eq_to is not None and other.error_on_eq_to is self: raise ValueError(f'cannot compare {other!r} to {self!r}') return (self.name, self.hash) == (other.name, other.hash) class KeyStr(str): def __hash__(self): if HashKey._crasher is not None and HashKey._crasher.error_on_hash: raise HashingError return super().__hash__() def __eq__(self, other): if HashKey._crasher is not None and HashKey._crasher.error_on_eq: raise EqError return super().__eq__(other) class HaskKeyCrasher: def __init__(self, *, error_on_hash=False, error_on_eq=False): self.error_on_hash = error_on_hash self.error_on_eq = error_on_eq def __enter__(self): if HashKey._crasher is not None: raise RuntimeError('cannot nest crashers') HashKey._crasher = self def __exit__(self, *exc): HashKey._crasher = None class HashingError(Exception): pass class EqError(Exception): pass @unittest.skipIf(hamt is None, '_testcapi lacks "hamt()" function') class HamtTest(unittest.TestCase): def test_hashkey_helper_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') self.assertNotEqual(k1, k2) self.assertEqual(hash(k1), hash(k2)) d = dict() d[k1] = 'a' d[k2] = 'b' self.assertEqual(d[k1], 'a') self.assertEqual(d[k2], 'b') def test_hamt_basics_1(self): h = hamt() h = None # NoQA def test_hamt_basics_2(self): h = hamt() self.assertEqual(len(h), 0) h2 = h.set('a', 'b') self.assertIsNot(h, h2) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertIsNone(h.get('a')) self.assertEqual(h.get('a', 42), 42) self.assertEqual(h2.get('a'), 'b') h3 = h2.set('b', 10) self.assertIsNot(h2, h3) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(h3.get('a'), 'b') self.assertEqual(h3.get('b'), 10) self.assertIsNone(h.get('b')) self.assertIsNone(h2.get('b')) self.assertIsNone(h.get('a')) self.assertEqual(h2.get('a'), 'b') h = h2 = h3 = None def test_hamt_basics_3(self): h = hamt() o = object() h1 = h.set('1', o) h2 = h1.set('1', o) self.assertIs(h1, h2) def test_hamt_basics_4(self): h = hamt() h1 = h.set('key', []) h2 = h1.set('key', []) self.assertIsNot(h1, h2) self.assertEqual(len(h1), 1) self.assertEqual(len(h2), 1) self.assertIsNot(h1.get('key'), h2.get('key')) def test_hamt_collision_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') k3 = HashKey(10, 'ccc') h = hamt() h2 = h.set(k1, 'a') h3 = h2.set(k2, 'b') self.assertEqual(h.get(k1), None) self.assertEqual(h.get(k2), None) self.assertEqual(h2.get(k1), 'a') self.assertEqual(h2.get(k2), None) self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') h4 = h3.set(k2, 'cc') h5 = h4.set(k3, 'aa') self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') self.assertEqual(h4.get(k1), 'a') self.assertEqual(h4.get(k2), 'cc') 
self.assertEqual(h4.get(k3), None) self.assertEqual(h5.get(k1), 'a') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k3), 'aa') self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(len(h4), 2) self.assertEqual(len(h5), 3) def test_hamt_collision_3(self): # Test that iteration works with the deepest tree possible. # https://github.com/python/cpython/issues/93065 C = HashKey(0b10000000_00000000_00000000_00000000, 'C') D = HashKey(0b10000000_00000000_00000000_00000000, 'D') E = HashKey(0b00000000_00000000_00000000_00000000, 'E') h = hamt() h = h.set(C, 'C') h = h.set(D, 'D') h = h.set(E, 'E') # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=4 count=2 bitmap=0b101): # : 'E' # NULL: # CollisionNode(size=4 id=0x107a24520): # : 'C' # : 'D' self.assertEqual({k.name for k in h.keys()}, {'C', 'D', 'E'}) @support.requires_resource('cpu') def test_hamt_stress(self): COLLECTION_SIZE = 7000 TEST_ITERS_EVERY = 647 CRASH_HASH_EVERY = 97 CRASH_EQ_EVERY = 11 RUN_XTIMES = 3 for _ in range(RUN_XTIMES): h = hamt() d = dict() for i in range(COLLECTION_SIZE): key = KeyStr(i) if not (i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.set(key, i) h = h.set(key, i) if not (i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.get(KeyStr(i)) # really trigger __eq__ d[key] = i self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.items()), set(d.items())) self.assertEqual(len(h.items()), len(d.items())) self.assertEqual(len(h), COLLECTION_SIZE) for key in range(COLLECTION_SIZE): self.assertEqual(h.get(KeyStr(key), 'not found'), key) keys_to_delete = list(range(COLLECTION_SIZE)) random.shuffle(keys_to_delete) for iter_i, i in enumerate(keys_to_delete): key = KeyStr(i) if not (iter_i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.delete(key) if not (iter_i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.delete(KeyStr(i)) h = h.delete(key) self.assertEqual(h.get(key, 'not found'), 'not found') del d[key] self.assertEqual(len(d), len(h)) if iter_i == COLLECTION_SIZE // 2: hm = h dm = d.copy() if not (iter_i % TEST_ITERS_EVERY): self.assertEqual(set(h.keys()), set(d.keys())) self.assertEqual(len(h.keys()), len(d.keys())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) # ============ for key in dm: self.assertEqual(hm.get(str(key)), dm[key]) self.assertEqual(len(dm), len(hm)) for i, key in enumerate(keys_to_delete): hm = hm.delete(str(key)) self.assertEqual(hm.get(str(key), 'not found'), 'not found') dm.pop(str(key), None) self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.values()), set(d.values())) self.assertEqual(len(h.values()), len(d.values())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) self.assertEqual(list(h.items()), []) def test_hamt_delete_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(102, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(103, 'Er', error_on_eq_to=D) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 
'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # : 'a' # : 'b' # : 'c' # : 'd' # : 'e' h = h.delete(C) self.assertEqual(len(h), orig_len - 1) with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(D) self.assertEqual(len(h), orig_len - 2) h2 = h.delete(Z) self.assertIs(h2, h) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(A, 42), 42) self.assertEqual(h.get(B), 'b') self.assertEqual(h.get(E), 'e') def test_hamt_delete_2(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(201001, 'Er', error_on_eq_to=B) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=8 bitmap=0b1110010000): # : 'a' # : 'd' # : 'e' # NULL: # BitmapNode(size=4 bitmap=0b100000000001000000000): # : 'b' # : 'c' with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(Z) self.assertEqual(len(h), orig_len) h = h.delete(C) self.assertEqual(len(h), orig_len - 1) h = h.delete(B) self.assertEqual(len(h), orig_len - 2) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(D), 'd') self.assertEqual(h.get(E), 'e') h = h.delete(A) h = h.delete(B) h = h.delete(D) h = h.delete(E) self.assertEqual(len(h), 0) def test_hamt_delete_3(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(104, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=6 bitmap=0b100110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=4 id=0x108572410): # : 'c' # : 'd' # : 'b' # : 'e' h = h.delete(A) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) self.assertEqual(h.get(C), 'c') self.assertEqual(h.get(B), 'b') def test_hamt_delete_4(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=4 bitmap=0b110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=6 id=0x10515ef30): # : 'c' # : 'd' # : 'e' # : 'b' h = h.delete(D) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) h = h.delete(C) self.assertEqual(len(h), orig_len - 3) h = h.delete(A) self.assertEqual(len(h), orig_len - 4) h = h.delete(B) self.assertEqual(len(h), 0) def test_hamt_delete_5(self): h = hamt() keys = [] for i in range(17): key = HashKey(i, str(i)) keys.append(key) h = h.set(key, f'val-{i}') collision_key16 = HashKey(16, '18') h = h.set(collision_key16, 'collision') # ArrayNode(id=0x10f8b9318): # 0:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-0' # # ... 14 more BitmapNodes ... 
# # 15:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-15' # # 16:: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # CollisionNode(size=4 id=0x10f2f5af8): # : 'val-16' # : 'collision' self.assertEqual(len(h), 18) h = h.delete(keys[2]) self.assertEqual(len(h), 17) h = h.delete(collision_key16) self.assertEqual(len(h), 16) h = h.delete(keys[16]) self.assertEqual(len(h), 15) h = h.delete(keys[1]) self.assertEqual(len(h), 14) h = h.delete(keys[1]) self.assertEqual(len(h), 14) for key in keys: h = h.delete(key) self.assertEqual(len(h), 0) def test_hamt_items_1(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_items_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_keys_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) self.assertEqual(set(list(h)), {A, B, C, D, E, F}) def test_hamt_items_3(self): h = hamt() self.assertEqual(len(h.items()), 0) self.assertEqual(list(h.items()), []) def test_hamt_eq_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(120, 'E') h1 = hamt() h1 = h1.set(A, 'a') h1 = h1.set(B, 'b') h1 = h1.set(C, 'c') h1 = h1.set(D, 'd') h2 = hamt() h2 = h2.set(A, 'a') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(B, 'b') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(C, 'c') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd2') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd') self.assertTrue(h1 == h2) self.assertFalse(h1 != h2) h2 = h2.set(E, 'e') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.delete(D) self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(E, 'd') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) def test_hamt_eq_2(self): A = HashKey(100, 'A') Er = HashKey(100, 'Er', error_on_eq_to=A) h1 = hamt() h1 = h1.set(A, 'a') h2 = hamt() h2 = h2.set(Er, 'a') with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 == h2 with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 != h2 def test_hamt_gc_1(self): A = HashKey(100, 'A') h = hamt() h = h.set(0, 0) # empty HAMT node is memoized in hamt.c ref = weakref.ref(h) a = [] a.append(a) a.append(h) b = [] a.append(b) b.append(a) h = h.set(A, b) del h, a, b gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_gc_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 'a') h = h.set(A, h) ref = weakref.ref(h) hi = h.items() next(hi) del h, hi gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_in_1(self): A = HashKey(100, 'A') AA = 
HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertTrue(A in h) self.assertFalse(B in h) with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): AA in h with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): AA in h def test_hamt_getitem_1(self): A = HashKey(100, 'A') AA = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertEqual(h[A], 1) self.assertEqual(h[AA], 1) with self.assertRaises(KeyError): h[B] with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): h[AA] with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_ftplib.py000066400000000000000000001236771471441230600212340ustar00rootroot00000000000000"""Test script for ftplib module.""" # Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS # environment import ftplib import socket import io import errno import os import threading import time import unittest try: import ssl except ImportError: ssl = None from unittest import TestCase, skipUnless from test import support from test.support import threading_helper from test.support import socket_helper from test.support import warnings_helper from test.support.socket_helper import HOST, HOSTv6 asynchat = warnings_helper.import_deprecated('asynchat') asyncore = warnings_helper.import_deprecated('asyncore') support.requires_working_socket(module=True) TIMEOUT = support.LOOPBACK_TIMEOUT DEFAULT_ENCODING = 'utf-8' # the dummy data returned by server over the data channel when # RETR, LIST, NLST, MLSD commands are issued RETR_DATA = 'abcde\xB9\xB2\xB3\xA4\xA6\r\n' * 1000 LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n" "type=pdir;perm=e;unique==keVO1+d?3; ..\r\n" "type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n" "type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n" "type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n" "type=file;perm=awr;unique==keVO1+8G4; writable\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n" "type=dir;perm=;unique==keVO1+1t2; no-exec\r\n" "type=file;perm=r;unique==keVO1+EG4; two words\r\n" "type=file;perm=r;unique==keVO1+IH4; leading space\r\n" "type=file;perm=r;unique==keVO1+1G4; file1\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n" "type=file;perm=r;unique==keVO1+1G4; file2\r\n" "type=file;perm=r;unique==keVO1+1G4; file3\r\n" "type=file;perm=r;unique==keVO1+1G4; file4\r\n" "type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n" "type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n") def default_error_handler(): # bpo-44359: Silently ignore socket errors. Such errors occur when a client # socket is closed, in TestFTPClass.tearDown() and makepasv() tests, and # the server gets an error on its side. pass class DummyDTPHandler(asynchat.async_chat): dtp_conn_closed = False def __init__(self, conn, baseclass): asynchat.async_chat.__init__(self, conn) self.baseclass = baseclass self.baseclass.last_received_data = bytearray() self.encoding = baseclass.encoding def handle_read(self): new_data = self.recv(1024) self.baseclass.last_received_data += new_data def handle_close(self): # XXX: this method can be called many times in a row for a single # connection, including in clear-text (non-TLS) mode. 
# (behaviour witnessed with test_data_connection) if not self.dtp_conn_closed: self.baseclass.push('226 transfer complete') self.close() self.dtp_conn_closed = True def push(self, what): if self.baseclass.next_data is not None: what = self.baseclass.next_data self.baseclass.next_data = None if not what: return self.close_when_done() super(DummyDTPHandler, self).push(what.encode(self.encoding)) def handle_error(self): default_error_handler() class DummyFTPHandler(asynchat.async_chat): dtp_handler = DummyDTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): asynchat.async_chat.__init__(self, conn) # tells the socket to handle urgent data inline (ABOR command) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1) self.set_terminator(b"\r\n") self.in_buffer = [] self.dtp = None self.last_received_cmd = None self.last_received_data = bytearray() self.next_response = '' self.next_data = None self.rest = None self.next_retr_data = RETR_DATA self.push('220 welcome') self.encoding = encoding # We use this as the string IPv4 address to direct the client # to in response to a PASV command. To test security behavior. # https://bugs.python.org/issue43285/. self.fake_pasv_server_ip = '252.253.254.255' def collect_incoming_data(self, data): self.in_buffer.append(data) def found_terminator(self): line = b''.join(self.in_buffer).decode(self.encoding) self.in_buffer = [] if self.next_response: self.push(self.next_response) self.next_response = '' cmd = line.split(' ')[0].lower() self.last_received_cmd = cmd space = line.find(' ') if space != -1: arg = line[space + 1:] else: arg = "" if hasattr(self, 'cmd_' + cmd): method = getattr(self, 'cmd_' + cmd) method(arg) else: self.push('550 command "%s" not understood.' %cmd) def handle_error(self): default_error_handler() def push(self, data): asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n') def cmd_port(self, arg): addr = list(map(int, arg.split(','))) ip = '%d.%d.%d.%d' %tuple(addr[:4]) port = (addr[4] * 256) + addr[5] s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_pasv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0)) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] ip = self.fake_pasv_server_ip ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256 self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2)) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_eprt(self, arg): af, ip, port = arg.split(arg[0])[1:-1] port = int(port) s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_epsv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0), family=socket.AF_INET6) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] self.push('229 entering extended passive mode (|||%d|)' %port) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_echo(self, arg): # sends back the received string (used by the test suite) self.push(arg) def cmd_noop(self, arg): self.push('200 noop ok') def cmd_user(self, arg): self.push('331 username ok') def cmd_pass(self, arg): self.push('230 password ok') def cmd_acct(self, arg): self.push('230 acct ok') def cmd_rnfr(self, arg): self.push('350 rnfr ok') def cmd_rnto(self, arg): self.push('250 rnto ok') def cmd_dele(self, arg): 
self.push('250 dele ok') def cmd_cwd(self, arg): self.push('250 cwd ok') def cmd_size(self, arg): self.push('250 1000') def cmd_mkd(self, arg): self.push('257 "%s"' %arg) def cmd_rmd(self, arg): self.push('250 rmd ok') def cmd_pwd(self, arg): self.push('257 "pwd ok"') def cmd_type(self, arg): self.push('200 type ok') def cmd_quit(self, arg): self.push('221 quit ok') self.close() def cmd_abor(self, arg): self.push('226 abor ok') def cmd_stor(self, arg): self.push('125 stor ok') def cmd_rest(self, arg): self.rest = arg self.push('350 rest ok') def cmd_retr(self, arg): self.push('125 retr ok') if self.rest is not None: offset = int(self.rest) else: offset = 0 self.dtp.push(self.next_retr_data[offset:]) self.dtp.close_when_done() self.rest = None def cmd_list(self, arg): self.push('125 list ok') self.dtp.push(LIST_DATA) self.dtp.close_when_done() def cmd_nlst(self, arg): self.push('125 nlst ok') self.dtp.push(NLST_DATA) self.dtp.close_when_done() def cmd_opts(self, arg): self.push('200 opts ok') def cmd_mlsd(self, arg): self.push('125 mlsd ok') self.dtp.push(MLSD_DATA) self.dtp.close_when_done() def cmd_setlongretr(self, arg): # For testing. Next RETR will return long line. self.next_retr_data = 'x' * int(arg) self.push('125 setlongretr ok') class DummyFTPServer(asyncore.dispatcher, threading.Thread): handler = DummyFTPHandler def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING): threading.Thread.__init__(self) asyncore.dispatcher.__init__(self) self.daemon = True self.create_socket(af, socket.SOCK_STREAM) self.bind(address) self.listen(5) self.active = False self.active_lock = threading.Lock() self.host, self.port = self.socket.getsockname()[:2] self.handler_instance = None self.encoding = encoding def start(self): assert not self.active self.__flag = threading.Event() threading.Thread.start(self) self.__flag.wait() def run(self): self.active = True self.__flag.set() while self.active and asyncore.socket_map: self.active_lock.acquire() asyncore.loop(timeout=0.1, count=1) self.active_lock.release() asyncore.close_all(ignore_all=True) def stop(self): assert self.active self.active = False self.join() def handle_accepted(self, conn, addr): self.handler_instance = self.handler(conn, encoding=self.encoding) def handle_connect(self): self.close() handle_read = handle_connect def writable(self): return 0 def handle_error(self): default_error_handler() if ssl is not None: CERTFILE = os.path.join(os.path.dirname(__file__), "certdata", "keycert3.pem") CAFILE = os.path.join(os.path.dirname(__file__), "certdata", "pycacert.pem") class SSLConnection(asyncore.dispatcher): """An asyncore.dispatcher subclass supporting TLS/SSL.""" _ssl_accepting = False _ssl_closing = False def secure_connection(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain(CERTFILE) socket = context.wrap_socket(self.socket, suppress_ragged_eofs=False, server_side=True, do_handshake_on_connect=False) self.del_channel() self.set_socket(socket) self._ssl_accepting = True def _do_ssl_handshake(self): try: self.socket.do_handshake() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return elif err.args[0] == ssl.SSL_ERROR_EOF: return self.handle_close() # TODO: SSLError does not expose alert information elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]: return self.handle_close() raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def _do_ssl_shutdown(self): 
self._ssl_closing = True try: self.socket = self.socket.unwrap() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return except OSError: # Any "socket error" corresponds to a SSL_ERROR_SYSCALL return # from OpenSSL's SSL_shutdown(), corresponding to a # closed socket condition. See also: # http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html pass self._ssl_closing = False if getattr(self, '_ccc', False) is False: super(SSLConnection, self).close() else: pass def handle_read_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_read_event() def handle_write_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_write_event() def send(self, data): try: return super(SSLConnection, self).send(data) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN, ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return 0 raise def recv(self, buffer_size): try: return super(SSLConnection, self).recv(buffer_size) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return b'' if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN): self.handle_close() return b'' raise def handle_error(self): default_error_handler() def close(self): if (isinstance(self.socket, ssl.SSLSocket) and self.socket._sslobj is not None): self._do_ssl_shutdown() else: super(SSLConnection, self).close() class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler): """A DummyDTPHandler subclass supporting TLS/SSL.""" def __init__(self, conn, baseclass): DummyDTPHandler.__init__(self, conn, baseclass) if self.baseclass.secure_data_channel: self.secure_connection() class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler): """A DummyFTPHandler subclass supporting TLS/SSL.""" dtp_handler = DummyTLS_DTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): DummyFTPHandler.__init__(self, conn, encoding=encoding) self.secure_data_channel = False self._ccc = False def cmd_auth(self, line): """Set up secure control channel.""" self.push('234 AUTH TLS successful') self.secure_connection() def cmd_ccc(self, line): self.push('220 Reverting back to clear-text') self._ccc = True self._do_ssl_shutdown() def cmd_pbsz(self, line): """Negotiate size of buffer for secure data transfer. For TLS/SSL the only valid value for the parameter is '0'. Any other value is accepted but ignored. 
""" self.push('200 PBSZ=0 successful.') def cmd_prot(self, line): """Setup un/secure data channel.""" arg = line.upper() if arg == 'C': self.push('200 Protection set to Clear') self.secure_data_channel = False elif arg == 'P': self.push('200 Protection set to Private') self.secure_data_channel = True else: self.push("502 Unrecognized PROT type (use C or P).") class DummyTLS_FTPServer(DummyFTPServer): handler = DummyTLS_FTPHandler class TestFTPClass(TestCase): def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyFTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def check_data(self, received, expected): self.assertEqual(len(received), len(expected)) self.assertEqual(received, expected) def test_getwelcome(self): self.assertEqual(self.client.getwelcome(), '220 welcome') def test_sanitize(self): self.assertEqual(self.client.sanitize('foo'), repr('foo')) self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****')) self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****')) def test_exceptions(self): self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599') self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999') def test_all_errors(self): exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm, ftplib.error_proto, ftplib.Error, OSError, EOFError) for x in exceptions: try: raise x('exception not included in all_errors set') except ftplib.all_errors: pass def test_set_pasv(self): # passive mode is supposed to be enabled by default self.assertTrue(self.client.passiveserver) self.client.set_pasv(True) self.assertTrue(self.client.passiveserver) self.client.set_pasv(False) self.assertFalse(self.client.passiveserver) def test_voidcmd(self): self.client.voidcmd('echo 200') self.client.voidcmd('echo 299') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300') def test_login(self): self.client.login() def test_acct(self): self.client.acct('passwd') def test_rename(self): self.client.rename('a', 'b') self.server.handler_instance.next_response = '200' self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b') def test_delete(self): self.client.delete('foo') self.server.handler_instance.next_response = '199' self.assertRaises(ftplib.error_reply, self.client.delete, 'foo') def test_size(self): self.client.size('foo') def test_mkd(self): dir = self.client.mkd('/foo') self.assertEqual(dir, '/foo') def test_rmd(self): self.client.rmd('foo') def test_cwd(self): dir = self.client.cwd('/foo') self.assertEqual(dir, '250 cwd ok') def test_pwd(self): dir = self.client.pwd() self.assertEqual(dir, 'pwd ok') def test_quit(self): self.assertEqual(self.client.quit(), '221 quit ok') # Ensure the connection gets closed; sock attribute should be None 
self.assertEqual(self.client.sock, None) def test_abort(self): self.client.abort() def test_retrbinary(self): received = [] self.client.retrbinary('retr', received.append) self.check_data(b''.join(received), RETR_DATA.encode(self.client.encoding)) def test_retrbinary_rest(self): for rest in (0, 10, 20): received = [] self.client.retrbinary('retr', received.append, rest=rest) self.check_data(b''.join(received), RETR_DATA[rest:].encode(self.client.encoding)) def test_retrlines(self): received = [] self.client.retrlines('retr', received.append) self.check_data(''.join(received), RETR_DATA.replace('\r\n', '')) def test_storbinary(self): f = io.BytesIO(RETR_DATA.encode(self.client.encoding)) self.client.storbinary('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storbinary('stor', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) def test_storbinary_rest(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) for r in (30, '30'): f.seek(0) self.client.storbinary('stor', f, rest=r) self.assertEqual(self.server.handler_instance.rest, str(r)) def test_storlines(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) self.client.storlines('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storlines('stor foo', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) f = io.StringIO(RETR_DATA.replace('\r\n', '\n')) # storlines() expects a binary file, not a text file with warnings_helper.check_warnings(('', BytesWarning), quiet=True): self.assertRaises(TypeError, self.client.storlines, 'stor foo', f) def test_nlst(self): self.client.nlst() self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1]) def test_dir(self): l = [] self.client.dir(l.append) self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', '')) def test_mlsd(self): list(self.client.mlsd()) list(self.client.mlsd(path='/')) list(self.client.mlsd(path='/', facts=['size', 'type'])) ls = list(self.client.mlsd()) for name, facts in ls: self.assertIsInstance(name, str) self.assertIsInstance(facts, dict) self.assertTrue(name) self.assertIn('type', facts) self.assertIn('perm', facts) self.assertIn('unique', facts) def set_data(data): self.server.handler_instance.next_data = data def test_entry(line, type=None, perm=None, unique=None, name=None): type = 'type' if type is None else type perm = 'perm' if perm is None else perm unique = 'unique' if unique is None else unique name = 'name' if name is None else name set_data(line) _name, facts = next(self.client.mlsd()) self.assertEqual(_name, name) self.assertEqual(facts['type'], type) self.assertEqual(facts['perm'], perm) self.assertEqual(facts['unique'], unique) # plain test_entry('type=type;perm=perm;unique=unique; name\r\n') # "=" in fact value test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe") test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type") test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe") test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====") # spaces in name test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me") test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ") test_entry('type=type;perm=perm;unique=unique; name\r\n', 
name=" name") test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e") # ";" in name test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me") test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name") test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;") test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;") # case sensitiveness set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n') _name, facts = next(self.client.mlsd()) for x in facts: self.assertTrue(x.islower()) # no data (directory empty) set_data('') self.assertRaises(StopIteration, next, self.client.mlsd()) set_data('') for x in self.client.mlsd(): self.fail("unexpected data %s" % x) def test_makeport(self): with self.client.makeport(): # IPv4 is in use, just make sure send_eprt has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'port') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() # IPv4 is in use, just make sure send_epsv has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv') def test_makepasv_issue43285_security_disabled(self): """Test the opt-in to the old vulnerable behavior.""" self.client.trust_server_pasv_ipv4_address = True bad_host, port = self.client.makepasv() self.assertEqual( bad_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. socket.create_connection((self.client.sock.getpeername()[0], port), timeout=TIMEOUT).close() def test_makepasv_issue43285_security_enabled_default(self): self.assertFalse(self.client.trust_server_pasv_ipv4_address) trusted_host, port = self.client.makepasv() self.assertNotEqual( trusted_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. 
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close() def test_with_statement(self): self.client.quit() def is_client_connected(): if self.client.sock is None: return False try: self.client.sendcmd('noop') except (OSError, EOFError): return False return True # base test with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.assertTrue(is_client_connected()) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # QUIT sent inside the with block with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.client.quit() self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # force a wrong response code to be sent on QUIT: error_perm # is expected and the connection is supposed to be closed try: with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.server.handler_instance.next_response = '550 error on quit' except ftplib.error_perm as err: self.assertEqual(str(err), '550 error on quit') else: self.fail('Exception not raised') # needed to give the threaded server some time to set the attribute # which otherwise would still be == 'noop' time.sleep(0.1) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) def test_source_address(self): self.client.quit() port = socket_helper.find_unused_port() try: self.client.connect(self.server.host, self.server.port, source_address=(HOST, port)) self.assertEqual(self.client.sock.getsockname()[1], port) self.client.quit() except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_source_address_passive_connection(self): port = socket_helper.find_unused_port() self.client.source_address = (HOST, port) try: with self.client.transfercmd('list') as sock: self.assertEqual(sock.getsockname()[1], port) except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_parse257(self): self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar') self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar') self.assertEqual(ftplib.parse257('257 ""'), '') self.assertEqual(ftplib.parse257('257 "" created'), '') self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"') # The 257 response is supposed to include the directory # name and in case it contains embedded double-quotes # they must be doubled (see RFC-959, chapter 7, appendix 2). 
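        # For example, a directory literally named /foo/b"ar has to be sent
        # as "/foo/b""ar" in the 257 reply; the assertions below check that
        # parse257() collapses the doubled quotes back to a single one.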
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar') self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar') def test_line_too_long(self): self.assertRaises(ftplib.Error, self.client.sendcmd, 'x' * self.client.maxline * 2) def test_retrlines_too_long(self): self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2)) received = [] self.assertRaises(ftplib.Error, self.client.retrlines, 'retr', received.append) def test_storlines_too_long(self): f = io.BytesIO(b'x' * self.client.maxline * 2) self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f) def test_encoding_param(self): encodings = ['latin-1', 'utf-8'] for encoding in encodings: with self.subTest(encoding=encoding): self.tearDown() self.setUp(encoding=encoding) self.assertEqual(encoding, self.client.encoding) self.test_retrbinary() self.test_storbinary() self.test_retrlines() new_dir = self.client.mkd('/non-ascii dir \xAE') self.check_data(new_dir, '/non-ascii dir \xAE') # Check default encoding client = ftplib.FTP(timeout=TIMEOUT) self.assertEqual(DEFAULT_ENCODING, client.encoding) @skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class TestIPv6Environment(TestCase): def setUp(self): self.server = DummyFTPServer((HOSTv6, 0), af=socket.AF_INET6, encoding=DEFAULT_ENCODING) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_af(self): self.assertEqual(self.client.af, socket.AF_INET6) def test_makeport(self): with self.client.makeport(): self.assertEqual(self.server.handler_instance.last_received_cmd, 'eprt') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv') def test_transfer(self): def retr(): received = [] self.client.retrbinary('retr', received.append) self.assertEqual(b''.join(received), RETR_DATA.encode(self.client.encoding)) self.client.set_pasv(True) retr() self.client.set_pasv(False) retr() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClassMixin(TestFTPClass): """Repeat TestFTPClass tests starting the TLS layer for both control and data connections first. 
""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) # enable TLS self.client.auth() self.client.prot_p() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClass(TestCase): """Specific TLS_FTP class tests.""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_control_connection(self): self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIsInstance(self.client.sock, ssl.SSLSocket) def test_data_connection(self): # clear text with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # secured, after PROT P self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIsInstance(sock, ssl.SSLSocket) # consume from SSL socket to finalize handshake and avoid # "SSLError [SSL] shutdown while in init" self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # PROT C is issued, the connection must be in cleartext again self.client.prot_c() with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") def test_login(self): # login() is supposed to implicitly secure the control connection self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.login() self.assertIsInstance(self.client.sock, ssl.SSLSocket) # make sure that AUTH TLS doesn't get issued again self.client.login() def test_auth_issued_twice(self): self.client.auth() self.assertRaises(ValueError, self.client.auth) def test_context(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE self.assertRaises(ValueError, ftplib.FTP_TLS, keyfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, keyfile=CERTFILE, context=ctx) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIs(self.client.sock.context, ctx) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIs(sock.context, ctx) self.assertIsInstance(sock, ssl.SSLSocket) def test_ccc(self): self.assertRaises(ValueError, self.client.ccc) self.client.login(secure=True) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.ccc() self.assertRaises(ValueError, self.client.sock.unwrap) @skipUnless(False, "FIXME: bpo-32706") def test_check_hostname(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 
        self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED)
        self.assertEqual(ctx.check_hostname, True)
        ctx.load_verify_locations(CAFILE)
        self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT)

        # 127.0.0.1 doesn't match SAN
        self.client.connect(self.server.host, self.server.port)
        with self.assertRaises(ssl.CertificateError):
            self.client.auth()
        # exception quits connection

        self.client.connect(self.server.host, self.server.port)
        self.client.prot_p()
        with self.assertRaises(ssl.CertificateError):
            with self.client.transfercmd("list") as sock:
                pass
        self.client.quit()

        self.client.connect("localhost", self.server.port)
        self.client.auth()
        self.client.quit()

        self.client.connect("localhost", self.server.port)
        self.client.prot_p()
        with self.client.transfercmd("list") as sock:
            pass


class TestTimeouts(TestCase):

    def setUp(self):
        self.evt = threading.Event()
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.settimeout(20)
        self.port = socket_helper.bind_port(self.sock)
        self.server_thread = threading.Thread(target=self.server)
        self.server_thread.daemon = True
        self.server_thread.start()
        # Wait for the server to be ready.
        self.evt.wait()
        self.evt.clear()
        self.old_port = ftplib.FTP.port
        ftplib.FTP.port = self.port

    def tearDown(self):
        ftplib.FTP.port = self.old_port
        self.server_thread.join()
        # Explicitly clear the attribute to prevent dangling thread
        self.server_thread = None

    def server(self):
        # This method sets the evt 3 times:
        # 1) when the connection is ready to be accepted.
        # 2) when it is safe for the caller to close the connection
        # 3) when we have closed the socket
        self.sock.listen()
        # (1) Signal the caller that we are ready to accept the connection.
        self.evt.set()
        try:
            conn, addr = self.sock.accept()
        except TimeoutError:
            pass
        else:
            conn.sendall(b"1 Hola mundo\n")
            conn.shutdown(socket.SHUT_WR)
            # (2) Signal the caller that it is safe to close the socket.
            self.evt.set()
            conn.close()
        finally:
            self.sock.close()

    def testTimeoutDefault(self):
        # default -- use global socket timeout
        self.assertIsNone(socket.getdefaulttimeout())
        socket.setdefaulttimeout(30)
        try:
            ftp = ftplib.FTP(HOST)
        finally:
            socket.setdefaulttimeout(None)
        self.assertEqual(ftp.sock.gettimeout(), 30)
        self.evt.wait()
        ftp.close()

    def testTimeoutNone(self):
        # no timeout -- do not use global socket timeout
        self.assertIsNone(socket.getdefaulttimeout())
        socket.setdefaulttimeout(30)
        try:
            ftp = ftplib.FTP(HOST, timeout=None)
        finally:
            socket.setdefaulttimeout(None)
        self.assertIsNone(ftp.sock.gettimeout())
        self.evt.wait()
        ftp.close()

    def testTimeoutValue(self):
        # a value
        ftp = ftplib.FTP(HOST, timeout=30)
        self.assertEqual(ftp.sock.gettimeout(), 30)
        self.evt.wait()
        ftp.close()

        # bpo-39259
        with self.assertRaises(ValueError):
            ftplib.FTP(HOST, timeout=0)

    def testTimeoutConnect(self):
        ftp = ftplib.FTP()
        ftp.connect(HOST, timeout=30)
        self.assertEqual(ftp.sock.gettimeout(), 30)
        self.evt.wait()
        ftp.close()

    def testTimeoutDifferentOrder(self):
        ftp = ftplib.FTP(timeout=30)
        ftp.connect(HOST)
        self.assertEqual(ftp.sock.gettimeout(), 30)
        self.evt.wait()
        ftp.close()

    def testTimeoutDirectAccess(self):
        ftp = ftplib.FTP()
        ftp.timeout = 30
        ftp.connect(HOST)
        self.assertEqual(ftp.sock.gettimeout(), 30)
        self.evt.wait()
        ftp.close()


class MiscTestCase(TestCase):
    def test__all__(self):
        not_exported = {
            'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF', 'Error',
            'parse150', 'parse227', 'parse229', 'parse257', 'print_line',
            'ftpcp', 'test'}
        support.check__all__(self, ftplib, not_exported=not_exported)


def setUpModule():
    thread_info = threading_helper.threading_setup()
    unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info)


if __name__ == '__main__':
    unittest.main()
gevent-24.11.1/src/greentest/3.11/test_httplib.py000066400000000000000000002615121471441230600214110ustar00rootroot00000000000000import enum
import errno
from http import client, HTTPStatus
import io
import itertools
import os
import array
import re
import socket
import threading
import warnings

import unittest
from unittest import mock
TestCase = unittest.TestCase

from test import support
from test.support import os_helper
from test.support import socket_helper
from test.support import warnings_helper

support.requires_working_socket(module=True)

here = os.path.dirname(__file__)
# Self-signed cert file for 'localhost'
CERT_localhost = os.path.join(here, 'certdata', 'keycert.pem')
# Self-signed cert file for 'fakehostname'
CERT_fakehostname = os.path.join(here, 'certdata', 'keycert2.pem')
# Self-signed cert file for self-signed.pythontest.net
CERT_selfsigned_pythontestdotnet = os.path.join(
    here, 'certdata', 'selfsigned_pythontestdotnet.pem',
)

# constants for testing chunked encoding
chunked_start = (
    'HTTP/1.1 200 OK\r\n'
    'Transfer-Encoding: chunked\r\n\r\n'
    'a\r\n'
    'hello worl\r\n'
    '3\r\n'
    'd! \r\n'
    '8\r\n'
    'and now \r\n'
    '22\r\n'
    'for something completely different\r\n'
)
chunked_expected = b'hello world! 
and now for something completely different' chunk_extension = ";foo=bar" last_chunk = "0\r\n" last_chunk_extended = "0" + chunk_extension + "\r\n" trailers = "X-Dummy: foo\r\nX-Dumm2: bar\r\n" chunked_end = "\r\n" HOST = socket_helper.HOST class FakeSocket: def __init__(self, text, fileclass=io.BytesIO, host=None, port=None): if isinstance(text, str): text = text.encode("ascii") self.text = text self.fileclass = fileclass self.data = b'' self.sendall_calls = 0 self.file_closed = False self.host = host self.port = port def sendall(self, data): self.sendall_calls += 1 self.data += data def makefile(self, mode, bufsize=None): if mode != 'r' and mode != 'rb': raise client.UnimplementedFileMode() # keep the file around so we can check how much was read from it self.file = self.fileclass(self.text) self.file.close = self.file_close #nerf close () return self.file def file_close(self): self.file_closed = True def close(self): pass def setsockopt(self, level, optname, value): pass class EPipeSocket(FakeSocket): def __init__(self, text, pipe_trigger): # When sendall() is called with pipe_trigger, raise EPIPE. FakeSocket.__init__(self, text) self.pipe_trigger = pipe_trigger def sendall(self, data): if self.pipe_trigger in data: raise OSError(errno.EPIPE, "gotcha") self.data += data def close(self): pass class NoEOFBytesIO(io.BytesIO): """Like BytesIO, but raises AssertionError on EOF. This is used below to test that http.client doesn't try to read more from the underlying file than it should. """ def read(self, n=-1): data = io.BytesIO.read(self, n) if data == b'': raise AssertionError('caller tried to read past EOF') return data def readline(self, length=None): data = io.BytesIO.readline(self, length) if data == b'': raise AssertionError('caller tried to read past EOF') return data class FakeSocketHTTPConnection(client.HTTPConnection): """HTTPConnection subclass using FakeSocket; counts connect() calls""" def __init__(self, *args): self.connections = 0 super().__init__('example.com') self.fake_socket_args = args self._create_connection = self.create_connection def connect(self): """Count the number of times connect() is invoked""" self.connections += 1 return super().connect() def create_connection(self, *pos, **kw): return FakeSocket(*self.fake_socket_args) class HeaderTests(TestCase): def test_auto_headers(self): # Some headers are added automatically, but should not be added by # .request() if they are explicitly set. 
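        # HeaderCountingBuffer (below) tallies how many times each header key
        # is appended to the connection's output buffer, so the test can
        # assert that an explicitly supplied header is sent exactly once
        # rather than being duplicated by the automatic header logic.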
class HeaderCountingBuffer(list): def __init__(self): self.count = {} def append(self, item): kv = item.split(b':') if len(kv) > 1: # item is a 'Key: Value' header string lcKey = kv[0].decode('ascii').lower() self.count.setdefault(lcKey, 0) self.count[lcKey] += 1 list.append(self, item) for explicit_header in True, False: for header in 'Content-length', 'Host', 'Accept-encoding': conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('blahblahblah') conn._buffer = HeaderCountingBuffer() body = 'spamspamspam' headers = {} if explicit_header: headers[header] = str(len(body)) conn.request('POST', '/', body, headers) self.assertEqual(conn._buffer.count[header.lower()], 1) def test_content_length_0(self): class ContentLengthChecker(list): def __init__(self): list.__init__(self) self.content_length = None def append(self, item): kv = item.split(b':', 1) if len(kv) > 1 and kv[0].lower() == b'content-length': self.content_length = kv[1].strip() list.append(self, item) # Here, we're testing that methods expecting a body get a # content-length set to zero if the body is empty (either None or '') bodies = (None, '') methods_with_body = ('PUT', 'POST', 'PATCH') for method, body in itertools.product(methods_with_body, bodies): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', body) self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # For these methods, we make sure that content-length is not set when # the body is None because it might cause unexpected behaviour on the # server. methods_without_body = ( 'GET', 'CONNECT', 'DELETE', 'HEAD', 'OPTIONS', 'TRACE', ) for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', None) self.assertEqual( conn._buffer.content_length, None, 'Header Content-Length set for empty body on {}'.format(method) ) # If the body is set to '', that's considered to be "present but # empty" rather than "missing", so content length would be set, even # for methods that don't expect a body. for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', '') self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # If the body is set, make sure Content-Length is set. 
for method in itertools.chain(methods_without_body, methods_with_body): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', ' ') self.assertEqual( conn._buffer.content_length, b'1', 'Header Content-Length incorrect on {}'.format(method) ) def test_putheader(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.putrequest('GET','/') conn.putheader('Content-length', 42) self.assertIn(b'Content-length: 42', conn._buffer) conn.putheader('Foo', ' bar ') self.assertIn(b'Foo: bar ', conn._buffer) conn.putheader('Bar', '\tbaz\t') self.assertIn(b'Bar: \tbaz\t', conn._buffer) conn.putheader('Authorization', 'Bearer mytoken') self.assertIn(b'Authorization: Bearer mytoken', conn._buffer) conn.putheader('IterHeader', 'IterA', 'IterB') self.assertIn(b'IterHeader: IterA\r\n\tIterB', conn._buffer) conn.putheader('LatinHeader', b'\xFF') self.assertIn(b'LatinHeader: \xFF', conn._buffer) conn.putheader('Utf8Header', b'\xc3\x80') self.assertIn(b'Utf8Header: \xc3\x80', conn._buffer) conn.putheader('C1-Control', b'next\x85line') self.assertIn(b'C1-Control: next\x85line', conn._buffer) conn.putheader('Embedded-Fold-Space', 'is\r\n allowed') self.assertIn(b'Embedded-Fold-Space: is\r\n allowed', conn._buffer) conn.putheader('Embedded-Fold-Tab', 'is\r\n\tallowed') self.assertIn(b'Embedded-Fold-Tab: is\r\n\tallowed', conn._buffer) conn.putheader('Key Space', 'value') self.assertIn(b'Key Space: value', conn._buffer) conn.putheader('KeySpace ', 'value') self.assertIn(b'KeySpace : value', conn._buffer) conn.putheader(b'Nonbreak\xa0Space', 'value') self.assertIn(b'Nonbreak\xa0Space: value', conn._buffer) conn.putheader(b'\xa0NonbreakSpace', 'value') self.assertIn(b'\xa0NonbreakSpace: value', conn._buffer) def test_ipv6host_header(self): # Default host header on IPv6 transaction should be wrapped by [] if # it is an IPv6 address expected = b'GET /foo HTTP/1.1\r\nHost: [2001::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001::]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [2001:102A::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001:102A::]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) def test_malformed_headers_coped_with(self): # Issue 19996 body = "HTTP/1.1 200 OK\r\nFirst: val\r\n: nval\r\nSecond: val\r\n\r\n" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('First'), 'val') self.assertEqual(resp.getheader('Second'), 'val') def test_parse_all_octets(self): # Ensure no valid header field octet breaks the parser body = ( b'HTTP/1.1 200 OK\r\n' b"!#$%&'*+-.^_`|~: value\r\n" # Special token characters b'VCHAR: ' + bytes(range(0x21, 0x7E + 1)) + b'\r\n' b'obs-text: ' + bytes(range(0x80, 0xFF + 1)) + b'\r\n' 
b'obs-fold: text\r\n' b' folded with space\r\n' b'\tfolded with tab\r\n' b'Content-Length: 0\r\n' b'\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('Content-Length'), '0') self.assertEqual(resp.msg['Content-Length'], '0') self.assertEqual(resp.getheader("!#$%&'*+-.^_`|~"), 'value') self.assertEqual(resp.msg["!#$%&'*+-.^_`|~"], 'value') vchar = ''.join(map(chr, range(0x21, 0x7E + 1))) self.assertEqual(resp.getheader('VCHAR'), vchar) self.assertEqual(resp.msg['VCHAR'], vchar) self.assertIsNotNone(resp.getheader('obs-text')) self.assertIn('obs-text', resp.msg) for folded in (resp.getheader('obs-fold'), resp.msg['obs-fold']): self.assertTrue(folded.startswith('text')) self.assertIn(' folded with space', folded) self.assertTrue(folded.endswith('folded with tab')) def test_invalid_headers(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/') # http://tools.ietf.org/html/rfc7230#section-3.2.4, whitespace is no # longer allowed in header names cases = ( (b'Invalid\r\nName', b'ValidValue'), (b'Invalid\rName', b'ValidValue'), (b'Invalid\nName', b'ValidValue'), (b'\r\nInvalidName', b'ValidValue'), (b'\rInvalidName', b'ValidValue'), (b'\nInvalidName', b'ValidValue'), (b' InvalidName', b'ValidValue'), (b'\tInvalidName', b'ValidValue'), (b'Invalid:Name', b'ValidValue'), (b':InvalidName', b'ValidValue'), (b'ValidName', b'Invalid\r\nValue'), (b'ValidName', b'Invalid\rValue'), (b'ValidName', b'Invalid\nValue'), (b'ValidName', b'InvalidValue\r\n'), (b'ValidName', b'InvalidValue\r'), (b'ValidName', b'InvalidValue\n'), ) for name, value in cases: with self.subTest((name, value)): with self.assertRaisesRegex(ValueError, 'Invalid header'): conn.putheader(name, value) def test_headers_debuglevel(self): body = ( b'HTTP/1.1 200 OK\r\n' b'First: val\r\n' b'Second: val1\r\n' b'Second: val2\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock, debuglevel=1) with support.captured_stdout() as output: resp.begin() lines = output.getvalue().splitlines() self.assertEqual(lines[0], "reply: 'HTTP/1.1 200 OK\\r\\n'") self.assertEqual(lines[1], "header: First: val") self.assertEqual(lines[2], "header: Second: val1") self.assertEqual(lines[3], "header: Second: val2") class HttpMethodTests(TestCase): def test_invalid_method_names(self): methods = ( 'GET\r', 'POST\n', 'PUT\n\r', 'POST\nValue', 'POST\nHOST:abc', 'GET\nrHost:abc\n', 'POST\rRemainder:\r', 'GET\rHOST:\n', '\nPUT' ) for method in methods: with self.assertRaisesRegex( ValueError, "method can't contain control characters"): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.request(method=method, url="/") class TransferEncodingTest(TestCase): expected_body = b"It's just a flesh wound" def test_endheaders_chunked(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.putrequest('POST', '/') conn.endheaders(self._make_body(), encode_chunked=True) _, _, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) def test_explicit_headers(self): # explicit chunked conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') # this shouldn't actually be automatically chunk-encoded because the # calling code has explicitly stated that it's taking care of it conn.request( 'POST', '/', self._make_body(), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', 
[k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # explicit chunked, string body conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self.expected_body.decode('latin-1'), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # User-specified TE, but request() does the chunk encoding conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', headers={'Transfer-Encoding': 'gzip, chunked'}, encode_chunked=True, body=self._make_body()) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(headers['Transfer-Encoding'], 'gzip, chunked') self.assertEqual(self._parse_chunked(body), self.expected_body) def test_request(self): for empty_lines in (False, True,): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self._make_body(empty_lines=empty_lines)) _, headers, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) self.assertEqual(headers['Transfer-Encoding'], 'chunked') # Content-Length and Transfer-Encoding SHOULD not be sent in the # same request self.assertNotIn('content-length', [k.lower() for k in headers]) def test_empty_body(self): # Zero-length iterable should be treated like any other iterable conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', ()) _, headers, body = self._parse_request(conn.sock.data) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(body, b"0\r\n\r\n") def _make_body(self, empty_lines=False): lines = self.expected_body.split(b' ') for idx, line in enumerate(lines): # for testing handling empty lines if empty_lines and idx % 2: yield b'' if idx < len(lines) - 1: yield line + b' ' else: yield line def _parse_request(self, data): lines = data.split(b'\r\n') request = lines[0] headers = {} n = 1 while n < len(lines) and len(lines[n]) > 0: key, val = lines[n].split(b':') key = key.decode('latin-1').strip() headers[key] = val.decode('latin-1').strip() n += 1 return request, headers, b'\r\n'.join(lines[n + 1:]) def _parse_chunked(self, data): body = [] trailers = {} n = 0 lines = data.split(b'\r\n') # parse body while True: size, chunk = lines[n:n+2] size = int(size, 16) if size == 0: n += 1 break self.assertEqual(size, len(chunk)) body.append(chunk) n += 2 # we /should/ hit the end chunk, but check against the size of # lines so we're not stuck in an infinite loop should we get # malformed data if n > len(lines): break return b''.join(body) class BasicTest(TestCase): def test_dir_with_added_behavior_on_status(self): # see issue40084 self.assertTrue({'description', 'name', 'phrase', 'value'} <= set(dir(HTTPStatus(404)))) def test_simple_httpstatus(self): class CheckedHTTPStatus(enum.IntEnum): """HTTP status codes and reason phrases Status codes from the following RFCs are all observed: * RFC 7231: Hypertext Transfer Protocol (HTTP/1.1), obsoletes 2616 * RFC 6585: Additional HTTP Status Codes * RFC 3229: Delta encoding in HTTP * RFC 4918: HTTP Extensions for WebDAV, 
obsoletes 2518 * RFC 5842: Binding Extensions to WebDAV * RFC 7238: Permanent Redirect * RFC 2295: Transparent Content Negotiation in HTTP * RFC 2774: An HTTP Extension Framework * RFC 7725: An HTTP Status Code to Report Legal Obstacles * RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2) * RFC 2324: Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0) * RFC 8297: An HTTP Status Code for Indicating Hints * RFC 8470: Using Early Data in HTTP """ def __new__(cls, value, phrase, description=''): obj = int.__new__(cls, value) obj._value_ = value obj.phrase = phrase obj.description = description return obj # informational CONTINUE = 100, 'Continue', 'Request received, please continue' SWITCHING_PROTOCOLS = (101, 'Switching Protocols', 'Switching to new protocol; obey Upgrade header') PROCESSING = 102, 'Processing' EARLY_HINTS = 103, 'Early Hints' # success OK = 200, 'OK', 'Request fulfilled, document follows' CREATED = 201, 'Created', 'Document created, URL follows' ACCEPTED = (202, 'Accepted', 'Request accepted, processing continues off-line') NON_AUTHORITATIVE_INFORMATION = (203, 'Non-Authoritative Information', 'Request fulfilled from cache') NO_CONTENT = 204, 'No Content', 'Request fulfilled, nothing follows' RESET_CONTENT = 205, 'Reset Content', 'Clear input form for further input' PARTIAL_CONTENT = 206, 'Partial Content', 'Partial content follows' MULTI_STATUS = 207, 'Multi-Status' ALREADY_REPORTED = 208, 'Already Reported' IM_USED = 226, 'IM Used' # redirection MULTIPLE_CHOICES = (300, 'Multiple Choices', 'Object has several resources -- see URI list') MOVED_PERMANENTLY = (301, 'Moved Permanently', 'Object moved permanently -- see URI list') FOUND = 302, 'Found', 'Object moved temporarily -- see URI list' SEE_OTHER = 303, 'See Other', 'Object moved -- see Method and URL list' NOT_MODIFIED = (304, 'Not Modified', 'Document has not changed since given time') USE_PROXY = (305, 'Use Proxy', 'You must use proxy specified in Location to access this resource') TEMPORARY_REDIRECT = (307, 'Temporary Redirect', 'Object moved temporarily -- see URI list') PERMANENT_REDIRECT = (308, 'Permanent Redirect', 'Object moved permanently -- see URI list') # client error BAD_REQUEST = (400, 'Bad Request', 'Bad request syntax or unsupported method') UNAUTHORIZED = (401, 'Unauthorized', 'No permission -- see authorization schemes') PAYMENT_REQUIRED = (402, 'Payment Required', 'No payment -- see charging schemes') FORBIDDEN = (403, 'Forbidden', 'Request forbidden -- authorization will not help') NOT_FOUND = (404, 'Not Found', 'Nothing matches the given URI') METHOD_NOT_ALLOWED = (405, 'Method Not Allowed', 'Specified method is invalid for this resource') NOT_ACCEPTABLE = (406, 'Not Acceptable', 'URI not available in preferred format') PROXY_AUTHENTICATION_REQUIRED = (407, 'Proxy Authentication Required', 'You must authenticate with this proxy before proceeding') REQUEST_TIMEOUT = (408, 'Request Timeout', 'Request timed out; try again later') CONFLICT = 409, 'Conflict', 'Request conflict' GONE = (410, 'Gone', 'URI no longer exists and has been permanently removed') LENGTH_REQUIRED = (411, 'Length Required', 'Client must specify Content-Length') PRECONDITION_FAILED = (412, 'Precondition Failed', 'Precondition in headers is false') REQUEST_ENTITY_TOO_LARGE = (413, 'Request Entity Too Large', 'Entity is too large') REQUEST_URI_TOO_LONG = (414, 'Request-URI Too Long', 'URI is too long') UNSUPPORTED_MEDIA_TYPE = (415, 'Unsupported Media Type', 'Entity body in unsupported format') REQUESTED_RANGE_NOT_SATISFIABLE 
= (416, 'Requested Range Not Satisfiable', 'Cannot satisfy request range') EXPECTATION_FAILED = (417, 'Expectation Failed', 'Expect condition could not be satisfied') IM_A_TEAPOT = (418, 'I\'m a Teapot', 'Server refuses to brew coffee because it is a teapot.') MISDIRECTED_REQUEST = (421, 'Misdirected Request', 'Server is not able to produce a response') UNPROCESSABLE_ENTITY = 422, 'Unprocessable Entity' LOCKED = 423, 'Locked' FAILED_DEPENDENCY = 424, 'Failed Dependency' TOO_EARLY = 425, 'Too Early' UPGRADE_REQUIRED = 426, 'Upgrade Required' PRECONDITION_REQUIRED = (428, 'Precondition Required', 'The origin server requires the request to be conditional') TOO_MANY_REQUESTS = (429, 'Too Many Requests', 'The user has sent too many requests in ' 'a given amount of time ("rate limiting")') REQUEST_HEADER_FIELDS_TOO_LARGE = (431, 'Request Header Fields Too Large', 'The server is unwilling to process the request because its header ' 'fields are too large') UNAVAILABLE_FOR_LEGAL_REASONS = (451, 'Unavailable For Legal Reasons', 'The server is denying access to the ' 'resource as a consequence of a legal demand') # server errors INTERNAL_SERVER_ERROR = (500, 'Internal Server Error', 'Server got itself in trouble') NOT_IMPLEMENTED = (501, 'Not Implemented', 'Server does not support this operation') BAD_GATEWAY = (502, 'Bad Gateway', 'Invalid responses from another server/proxy') SERVICE_UNAVAILABLE = (503, 'Service Unavailable', 'The server cannot process the request due to a high load') GATEWAY_TIMEOUT = (504, 'Gateway Timeout', 'The gateway server did not receive a timely response') HTTP_VERSION_NOT_SUPPORTED = (505, 'HTTP Version Not Supported', 'Cannot fulfill request') VARIANT_ALSO_NEGOTIATES = 506, 'Variant Also Negotiates' INSUFFICIENT_STORAGE = 507, 'Insufficient Storage' LOOP_DETECTED = 508, 'Loop Detected' NOT_EXTENDED = 510, 'Not Extended' NETWORK_AUTHENTICATION_REQUIRED = (511, 'Network Authentication Required', 'The client needs to authenticate to gain network access') enum._test_simple_enum(CheckedHTTPStatus, HTTPStatus) def test_status_lines(self): # Test HTTP status lines body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(0), b'') # Issue #20007 self.assertFalse(resp.isclosed()) self.assertFalse(resp.closed) self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) self.assertRaises(client.BadStatusLine, resp.begin) def test_bad_status_repr(self): exc = client.BadStatusLine('') self.assertEqual(repr(exc), '''BadStatusLine("''")''') def test_partial_reads(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_mixed_reads(self): # readline() should update the remaining length, so that read() knows # how much data is left and does not raise IncompleteRead body = "HTTP/1.1 200 Ok\r\nContent-Length: 13\r\n\r\nText\r\nAnother" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() 
self.assertEqual(resp.readline(), b'Text\r\n') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(), b'Another') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_past_end(self): # if we have Content-Length, clip reads to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(10), b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_past_end(self): # if we have Content-Length, clip readintos to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(10) n = resp.readinto(b) self.assertEqual(n, 4) self.assertEqual(bytes(b)[:4], b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) def test_partial_readintos_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') 
self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:80", "www.python.org", 80), ("www.python.org:", "www.python.org", 80), ("www.python.org", "www.python.org", 80), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 80), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 80)): c = client.HTTPConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_response_headers(self): # test response with multiple message headers with the same field name. text = ('HTTP/1.1 200 OK\r\n' 'Set-Cookie: Customer="WILE_E_COYOTE"; ' 'Version="1"; Path="/acme"\r\n' 'Set-Cookie: Part_Number="Rocket_Launcher_0001"; Version="1";' ' Path="/acme"\r\n' '\r\n' 'No body\r\n') hdr = ('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"' ', ' 'Part_Number="Rocket_Launcher_0001"; Version="1"; Path="/acme"') s = FakeSocket(text) r = client.HTTPResponse(s) r.begin() cookies = r.getheader("Set-Cookie") self.assertEqual(cookies, hdr) def test_read_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() if resp.read(): self.fail("Did not expect response from HEAD request") def test_readinto_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) 
sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) if resp.readinto(b) != 0: self.fail("Did not expect response from HEAD request") self.assertEqual(bytes(b), b'\x00'*5) def test_too_many_headers(self): headers = '\r\n'.join('Header%d: foo' % i for i in range(client._MAXHEADERS + 1)) + '\r\n' text = ('HTTP/1.1 200 OK\r\n' + headers) s = FakeSocket(text) r = client.HTTPResponse(s) self.assertRaisesRegex(client.HTTPException, r"got more than \d+ headers", r.begin) def test_send_file(self): expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' b'Accept-Encoding: identity\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n') with open(__file__, 'rb') as body: conn = client.HTTPConnection('example.com') sock = FakeSocket(body) conn.sock = sock conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected), '%r != %r' % (sock.data[:len(expected)], expected)) def test_send(self): expected = b'this is a test this is only a test' conn = client.HTTPConnection('example.com') sock = FakeSocket(None) conn.sock = sock conn.send(expected) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(array.array('b', expected)) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(io.BytesIO(expected)) self.assertEqual(expected, sock.data) def test_send_updating_file(self): def data(): yield 'data' yield None yield 'data_two' class UpdatingFile(io.TextIOBase): mode = 'r' d = data() def read(self, blocksize=-1): return next(self.d) expected = b'data' conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.send(UpdatingFile()) self.assertEqual(sock.data, expected) def test_send_iter(self): expected = b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' \ b'Accept-Encoding: identity\r\nContent-Length: 11\r\n' \ b'\r\nonetwothree' def body(): yield b"one" yield b"two" yield b"three" conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.request('GET', '/foo', body(), {'Content-Length': '11'}) self.assertEqual(sock.data, expected) def test_blocksize_request(self): """Check that request() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.request("PUT", "/", io.BytesIO(expected), {"Content-Length": "9"}) self.assertEqual(sock.sendall_calls, 3) body = sock.data.split(b"\r\n\r\n", 1)[1] self.assertEqual(body, expected) def test_blocksize_send(self): """Check that send() respects the configured block size.""" blocksize = 8 # For easy debugging. 
conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.send(io.BytesIO(expected)) self.assertEqual(sock.sendall_calls, 2) self.assertEqual(sock.data, expected) def test_send_type_error(self): # See: Issue #12676 conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') with self.assertRaises(TypeError): conn.request('POST', 'test', conn) def test_chunked(self): expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(n) + resp.read(n) + resp.read(), expected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_readinto_chunked(self): expected = chunked_expected nexpected = len(expected) b = bytearray(128) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() n = resp.readinto(b) self.assertEqual(b[:nexpected], expected) self.assertEqual(n, nexpected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() m = memoryview(b) i = resp.readinto(m[0:n]) i += resp.readinto(m[i:n + i]) i += resp.readinto(m[i:]) self.assertEqual(b[:nexpected], expected) self.assertEqual(i, nexpected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: n = resp.readinto(b) except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() self.assertEqual(resp.read(), b'') self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) n = resp.readinto(b) self.assertEqual(n, 0) self.assertEqual(bytes(b), b'\x00'*5) self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_negative_content_length(self): sock 
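        # The payload is one byte longer than the configured block size, so a
        # correct implementation should push it to the fake socket in exactly
        # two sendall() calls, as asserted below.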
= FakeSocket( 'HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), b'Hello\r\n') self.assertTrue(resp.isclosed()) def test_incomplete_read(self): sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, b'Hello\r\n') self.assertEqual(repr(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertEqual(str(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertTrue(resp.isclosed()) else: self.fail('IncompleteRead expected') def test_epipe(self): sock = EPipeSocket( "HTTP/1.0 401 Authorization Required\r\n" "Content-type: text/html\r\n" "WWW-Authenticate: Basic realm=\"example\"\r\n", b"Content-Length") conn = client.HTTPConnection("example.com") conn.sock = sock self.assertRaises(OSError, lambda: conn.request("PUT", "/url", "body")) resp = conn.getresponse() self.assertEqual(401, resp.status) self.assertEqual("Basic realm=\"example\"", resp.getheader("www-authenticate")) # Test lines overflowing the max line size (_MAXLINE in http.client) def test_overflowing_status_line(self): body = "HTTP/1.1 200 Ok" + "k" * 65536 + "\r\n" resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises((client.LineTooLong, client.BadStatusLine), resp.begin) def test_overflowing_header_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'X-Foo: bar' + 'r' * 65536 + '\r\n\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises(client.LineTooLong, resp.begin) def test_overflowing_header_limit_after_100(self): body = ( 'HTTP/1.1 100 OK\r\n' 'r\n' * 32768 ) resp = client.HTTPResponse(FakeSocket(body)) with self.assertRaises(client.HTTPException) as cm: resp.begin() # We must assert more because other reasonable errors that we # do not want can also be HTTPException derived. 
self.assertIn('got more than ', str(cm.exception)) self.assertIn('headers', str(cm.exception)) def test_overflowing_chunked_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' + '0' * 65536 + 'a\r\n' 'hello world\r\n' '0\r\n' '\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) resp.begin() self.assertRaises(client.LineTooLong, resp.read) def test_early_eof(self): # Test httpresponse with no \r\n termination, body = "HTTP/1.1 200 Ok" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_error_leak(self): # Test that the socket is not leaked if getresponse() fails conn = client.HTTPConnection('example.com') response = None class Response(client.HTTPResponse): def __init__(self, *pos, **kw): nonlocal response response = self # Avoid garbage collector closing the socket client.HTTPResponse.__init__(self, *pos, **kw) conn.response_class = Response conn.sock = FakeSocket('Invalid status line') conn.request('GET', '/') self.assertRaises(client.BadStatusLine, conn.getresponse) self.assertTrue(response.closed) self.assertTrue(conn.sock.file_closed) def test_chunked_extension(self): extra = '3;foo=bar\r\n' + 'abc\r\n' expected = chunked_expected + b'abc' sock = FakeSocket(chunked_start + extra + last_chunk_extended + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_missing_end(self): """some servers may serve up a short chunked encoding stream""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk) #no terminating crlf resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_trailers(self): """See that trailers are read and ignored""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # we should have reached the end of the file self.assertEqual(sock.file.read(), b"") #we read to the end resp.close() def test_chunked_sync(self): """Check that we don't read past the end of the chunked-encoding stream""" expected = chunked_expected extradata = "extradata" sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata.encode("ascii")) #we read to the end resp.close() def test_content_length_sync(self): """Check that we don't read past the end of the Content-Length stream""" extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readlines_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readlines(2000), [expected]) # the file should now have 
our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(2000), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readline_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readline(10), expected) self.assertEqual(resp.readline(10), b"") # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 30\r\n\r\n' + expected*3 + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(20), expected*2) self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_response_fileno(self): # Make sure fd returned by fileno is valid. serv = socket.create_server((HOST, 0)) self.addCleanup(serv.close) result = None def run_server(): [conn, address] = serv.accept() with conn, conn.makefile("rb") as reader: # Read the request header until a blank line while True: line = reader.readline() if not line.rstrip(b"\r\n"): break conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n") nonlocal result result = reader.read() thread = threading.Thread(target=run_server) thread.start() self.addCleanup(thread.join, float(1)) conn = client.HTTPConnection(*serv.getsockname()) conn.request("CONNECT", "dummy:1234") response = conn.getresponse() try: self.assertEqual(response.status, client.OK) s = socket.socket(fileno=response.fileno()) try: s.sendall(b"proxied data\n") finally: s.detach() finally: response.close() conn.close() thread.join() self.assertEqual(result, b"proxied data\n") def test_putrequest_override_domain_validation(self): """ It should be possible to override the default validation behavior in putrequest (bpo-38216). """ class UnsafeHTTPConnection(client.HTTPConnection): def _validate_path(self, url): pass conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/\x00') def test_putrequest_override_host_validation(self): class UnsafeHTTPConnection(client.HTTPConnection): def _validate_host(self, url): pass conn = UnsafeHTTPConnection('example.com\r\n') conn.sock = FakeSocket('') # set skip_host so a ValueError is not raised upon adding the # invalid URL as the value of the "Host:" header conn.putrequest('GET', '/', skip_host=1) def test_putrequest_override_encoding(self): """ It should be possible to override the default encoding to transmit bytes in another encoding even if invalid (bpo-36274). 
""" class UnsafeHTTPConnection(client.HTTPConnection): def _encode_request(self, str_url): return str_url.encode('utf-8') conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/☃') class ExtendedReadTest(TestCase): """ Test peek(), read1(), readline() """ lines = ( 'HTTP/1.1 200 OK\r\n' '\r\n' 'hello world!\n' 'and now \n' 'for something completely different\n' 'foo' ) lines_expected = lines[lines.find('hello'):].encode("ascii") lines_chunked = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) def setUp(self): sock = FakeSocket(self.lines) resp = client.HTTPResponse(sock, method="GET") resp.begin() resp.fp = io.BufferedReader(resp.fp) self.resp = resp def test_peek(self): resp = self.resp # patch up the buffered peek so that it returns not too much stuff oldpeek = resp.fp.peek def mypeek(n=-1): p = oldpeek(n) if n >= 0: return p[:n] return p[:10] resp.fp.peek = mypeek all = [] while True: # try a short peek p = resp.peek(3) if p: self.assertGreater(len(p), 0) # then unbounded peek p2 = resp.peek() self.assertGreaterEqual(len(p2), len(p)) self.assertTrue(p2.startswith(p)) next = resp.read(len(p2)) self.assertEqual(next, p2) else: next = resp.read() self.assertFalse(next) all.append(next) if not next: break self.assertEqual(b"".join(all), self.lines_expected) def test_readline(self): resp = self.resp self._verify_readline(self.resp.readline, self.lines_expected) def test_readline_without_limit(self): self._verify_readline(self.resp.readline, self.lines_expected, limit=-1) def _verify_readline(self, readline, expected, limit=5): all = [] while True: # short readlines line = readline(limit) if line and line != b"foo": if len(line) < 5: self.assertTrue(line.endswith(b"\n")) all.append(line) if not line: break self.assertEqual(b"".join(all), expected) self.assertTrue(self.resp.isclosed()) def test_read1(self): resp = self.resp def r(): res = resp.read1(4) self.assertLessEqual(len(res), 4) return res readliner = Readliner(r) self._verify_readline(readliner.readline, self.lines_expected) def test_read1_unbounded(self): resp = self.resp all = [] while True: data = resp.read1() if not data: break all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_bounded(self): resp = self.resp all = [] while True: data = resp.read1(10) if not data: break self.assertLessEqual(len(data), 10) all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_0(self): self.assertEqual(self.resp.read1(0), b"") self.assertFalse(self.resp.isclosed()) def test_peek_0(self): p = self.resp.peek(0) self.assertLessEqual(0, len(p)) class ExtendedReadTestContentLengthKnown(ExtendedReadTest): _header, _body = ExtendedReadTest.lines.split('\r\n\r\n', 1) lines = _header + f'\r\nContent-Length: {len(_body)}\r\n\r\n' + _body class ExtendedReadTestChunked(ExtendedReadTest): """ Test peek(), read1(), readline() in chunked mode """ lines = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) class Readliner: """ a simple readline class that uses an arbitrary read function and 
buffering """ def __init__(self, readfunc): self.readfunc = readfunc self.remainder = b"" def readline(self, limit): data = [] datalen = 0 read = self.remainder try: while True: idx = read.find(b'\n') if idx != -1: break if datalen + len(read) >= limit: idx = limit - datalen - 1 # read more data data.append(read) read = self.readfunc() if not read: idx = 0 #eof condition break idx += 1 data.append(read[:idx]) self.remainder = read[idx:] return b"".join(data) except: self.remainder = b"".join(data) raise class OfflineTest(TestCase): def test_all(self): # Documented objects defined in the module should be in __all__ expected = {"responses"} # Allowlist documented dict() object # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity denylist = {"HTTPMessage", "parse_headers"} for name in dir(client): if name.startswith("_") or name in denylist: continue module_object = getattr(client, name) if getattr(module_object, "__module__", None) == "http.client": expected.add(name) self.assertCountEqual(client.__all__, expected) def test_responses(self): self.assertEqual(client.responses[client.NOT_FOUND], "Not Found") def test_client_constants(self): # Make sure we don't break backward compatibility with 3.4 expected = [ 'CONTINUE', 'SWITCHING_PROTOCOLS', 'PROCESSING', 'OK', 'CREATED', 'ACCEPTED', 'NON_AUTHORITATIVE_INFORMATION', 'NO_CONTENT', 'RESET_CONTENT', 'PARTIAL_CONTENT', 'MULTI_STATUS', 'IM_USED', 'MULTIPLE_CHOICES', 'MOVED_PERMANENTLY', 'FOUND', 'SEE_OTHER', 'NOT_MODIFIED', 'USE_PROXY', 'TEMPORARY_REDIRECT', 'BAD_REQUEST', 'UNAUTHORIZED', 'PAYMENT_REQUIRED', 'FORBIDDEN', 'NOT_FOUND', 'METHOD_NOT_ALLOWED', 'NOT_ACCEPTABLE', 'PROXY_AUTHENTICATION_REQUIRED', 'REQUEST_TIMEOUT', 'CONFLICT', 'GONE', 'LENGTH_REQUIRED', 'PRECONDITION_FAILED', 'REQUEST_ENTITY_TOO_LARGE', 'REQUEST_URI_TOO_LONG', 'UNSUPPORTED_MEDIA_TYPE', 'REQUESTED_RANGE_NOT_SATISFIABLE', 'EXPECTATION_FAILED', 'IM_A_TEAPOT', 'MISDIRECTED_REQUEST', 'UNPROCESSABLE_ENTITY', 'LOCKED', 'FAILED_DEPENDENCY', 'UPGRADE_REQUIRED', 'PRECONDITION_REQUIRED', 'TOO_MANY_REQUESTS', 'REQUEST_HEADER_FIELDS_TOO_LARGE', 'UNAVAILABLE_FOR_LEGAL_REASONS', 'INTERNAL_SERVER_ERROR', 'NOT_IMPLEMENTED', 'BAD_GATEWAY', 'SERVICE_UNAVAILABLE', 'GATEWAY_TIMEOUT', 'HTTP_VERSION_NOT_SUPPORTED', 'INSUFFICIENT_STORAGE', 'NOT_EXTENDED', 'NETWORK_AUTHENTICATION_REQUIRED', 'EARLY_HINTS', 'TOO_EARLY' ] for const in expected: with self.subTest(constant=const): self.assertTrue(hasattr(client, const)) class SourceAddressTest(TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.source_port = socket_helper.find_unused_port() self.serv.listen() self.conn = None def tearDown(self): if self.conn: self.conn.close() self.conn = None self.serv.close() self.serv = None def testHTTPConnectionSourceAddress(self): self.conn = client.HTTPConnection(HOST, self.port, source_address=('', self.source_port)) self.conn.connect() self.assertEqual(self.conn.sock.getsockname()[1], self.source_port) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not defined') def testHTTPSConnectionSourceAddress(self): self.conn = client.HTTPSConnection(HOST, self.port, source_address=('', self.source_port)) # We don't test anything here other than the constructor not barfing as # this code doesn't deal with setting up an active running SSL server # for an ssl_wrapped connect() to actually return from. 
class TimeoutTest(TestCase): PORT = None def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) TimeoutTest.PORT = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None def testTimeoutAttribute(self): # This will prove that the timeout gets through HTTPConnection # and into the socket. # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() # no timeout -- do not use global socket default self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=None) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), None) httpConn.close() # a value httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=30) httpConn.connect() self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() class PersistenceTest(TestCase): def test_reuse_reconnect(self): # Should reuse or reconnect depending on header from server tests = ( ('1.0', '', False), ('1.0', 'Connection: keep-alive\r\n', True), ('1.1', '', True), ('1.1', 'Connection: close\r\n', False), ('1.0', 'Connection: keep-ALIVE\r\n', True), ('1.1', 'Connection: cloSE\r\n', False), ) for version, header, reuse in tests: with self.subTest(version=version, header=header): msg = ( 'HTTP/{} 200 OK\r\n' '{}' 'Content-Length: 12\r\n' '\r\n' 'Dummy body\r\n' ).format(version, header) conn = FakeSocketHTTPConnection(msg) self.assertIsNone(conn.sock) conn.request('GET', '/open-connection') with conn.getresponse() as response: self.assertEqual(conn.sock is None, not reuse) response.read() self.assertEqual(conn.sock is None, not reuse) self.assertEqual(conn.connections, 1) conn.request('GET', '/subsequent-request') self.assertEqual(conn.connections, 1 if reuse else 2) def test_disconnected(self): def make_reset_reader(text): """Return BufferedReader that raises ECONNRESET at EOF""" stream = io.BytesIO(text) def readinto(buffer): size = io.BytesIO.readinto(stream, buffer) if size == 0: raise ConnectionResetError() return size stream.readinto = readinto return io.BufferedReader(stream) tests = ( (io.BytesIO, client.RemoteDisconnected), (make_reset_reader, ConnectionResetError), ) for stream_factory, exception in tests: with self.subTest(exception=exception): conn = FakeSocketHTTPConnection(b'', stream_factory) conn.request('GET', '/eof-response') self.assertRaises(exception, conn.getresponse) self.assertIsNone(conn.sock) # HTTPConnection.connect() should be automatically invoked conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) def test_100_close(self): conn = FakeSocketHTTPConnection( b'HTTP/1.1 100 Continue\r\n' b'\r\n' # Missing final response ) conn.request('GET', '/', headers={'Expect': '100-continue'}) self.assertRaises(client.RemoteDisconnected, conn.getresponse) self.assertIsNone(conn.sock) conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) class HTTPSTest(TestCase): def setUp(self): if not hasattr(client, 'HTTPSConnection'): self.skipTest('ssl support required') def make_server(self, certfile): from test.ssl_servers import make_https_server return make_https_server(self, certfile=certfile) def test_attributes(self): # simple test to check it's storing 
the timeout h = client.HTTPSConnection(HOST, TimeoutTest.PORT, timeout=30) self.assertEqual(h.timeout, 30) def test_networked(self): # Default settings: requires a valid cert from a trusted CA import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): h = client.HTTPSConnection('self-signed.pythontest.net', 443) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_networked_noverification(self): # Switch off cert verification import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl._create_unverified_context() h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) h.request('GET', '/') resp = h.getresponse() h.close() self.assertIn('nginx', resp.getheader('server')) resp.close() @support.system_must_validate_cert def test_networked_trusted_by_default_cert(self): # Default settings: requires a valid cert from a trusted CA support.requires('network') with socket_helper.transient_internet('www.python.org'): h = client.HTTPSConnection('www.python.org', 443) h.request('GET', '/') resp = h.getresponse() content_type = resp.getheader('content-type') resp.close() h.close() self.assertIn('text/html', content_type) def test_networked_good_cert(self): # We feed the server's cert as a validating cert import ssl support.requires('network') selfsigned_pythontestdotnet = 'self-signed.pythontest.net' with socket_helper.transient_internet(selfsigned_pythontestdotnet): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(context.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(context.check_hostname, True) context.load_verify_locations(CERT_selfsigned_pythontestdotnet) try: h = client.HTTPSConnection(selfsigned_pythontestdotnet, 443, context=context) h.request('GET', '/') resp = h.getresponse() except ssl.SSLError as ssl_err: ssl_err_str = str(ssl_err) # In the error message of [SSL: CERTIFICATE_VERIFY_FAILED] on # modern Linux distros (Debian Buster, etc) default OpenSSL # configurations it'll fail saying "key too weak" until we # address https://bugs.python.org/issue36816 to use a proper # key size on self-signed.pythontest.net. if re.search(r'(?i)key.too.weak', ssl_err_str): raise unittest.SkipTest( f'Got {ssl_err_str} trying to connect ' f'to {selfsigned_pythontestdotnet}. 
' 'See https://bugs.python.org/issue36816.') raise server_string = resp.getheader('server') resp.close() h.close() self.assertIn('nginx', server_string) @support.requires_resource('walltime') def test_networked_bad_cert(self): # We feed a "CA" cert that is unrelated to the server's cert import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_unknown_cert(self): # The custom cert isn't known to the default trust bundle import ssl server = self.make_server(CERT_localhost) h = client.HTTPSConnection('localhost', server.port) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_good_hostname(self): # The (valid) cert validates the HTTP hostname import ssl server = self.make_server(CERT_localhost) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('localhost', server.port, context=context) self.addCleanup(h.close) h.request('GET', '/nonexistent') resp = h.getresponse() self.addCleanup(resp.close) self.assertEqual(resp.status, 404) def test_local_bad_hostname(self): # The (valid) cert doesn't validate the HTTP hostname import ssl server = self.make_server(CERT_fakehostname) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_fakehostname) h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # Same with explicit check_hostname=True with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # With check_hostname=False, the mismatching is ignored context.check_hostname = False with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=False) h.request('GET', '/nonexistent') resp = h.getresponse() resp.close() h.close() self.assertEqual(resp.status, 404) # The context's check_hostname setting is used if one isn't passed to # HTTPSConnection. context.check_hostname = False h = client.HTTPSConnection('localhost', server.port, context=context) h.request('GET', '/nonexistent') resp = h.getresponse() self.assertEqual(resp.status, 404) resp.close() h.close() # Passing check_hostname to HTTPSConnection should override the # context's setting. 
with warnings_helper.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not available') def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPSConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:443", "www.python.org", 443), ("www.python.org:", "www.python.org", 443), ("www.python.org", "www.python.org", 443), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 443), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 443)): c = client.HTTPSConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_tls13_pha(self): import ssl if not ssl.HAS_TLSv1_3: self.skipTest('TLS 1.3 support required') # just check status of PHA flag h = client.HTTPSConnection('localhost', 443) self.assertTrue(h._context.post_handshake_auth) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertFalse(context.post_handshake_auth) h = client.HTTPSConnection('localhost', 443, context=context) self.assertIs(h._context, context) self.assertFalse(h._context.post_handshake_auth) with warnings.catch_warnings(): warnings.filterwarnings('ignore', 'key_file, cert_file and check_hostname are deprecated', DeprecationWarning) h = client.HTTPSConnection('localhost', 443, context=context, cert_file=CERT_localhost) self.assertTrue(h._context.post_handshake_auth) class RequestBodyTest(TestCase): """Test cases where a request includes a message body.""" def setUp(self): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket("") self.conn.sock = self.sock def get_headers_and_fp(self): f = io.BytesIO(self.sock.data) f.readline() # read the request line message = client.parse_headers(f) return message, f def test_list_body(self): # Note that no content-length is automatically calculated for # an iterable. The request will fall back to send chunked # transfer encoding. cases = ( ([b'foo', b'bar'], b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ((b'foo', b'bar'), b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ) for body, expected in cases: with self.subTest(body): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket('') self.conn.request('PUT', '/url', body) msg, f = self.get_headers_and_fp() self.assertNotIn('Content-Type', msg) self.assertNotIn('Content-Length', msg) self.assertEqual(msg.get('Transfer-Encoding'), 'chunked') self.assertEqual(expected, f.read()) def test_manual_content_length(self): # Set an incorrect content-length so that we can verify that # it will not be over-ridden by the library. 
self.conn.request("PUT", "/url", "body", {"Content-Length": "42"}) message, f = self.get_headers_and_fp() self.assertEqual("42", message.get("content-length")) self.assertEqual(4, len(f.read())) def test_ascii_body(self): self.conn.request("PUT", "/url", "body") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("4", message.get("content-length")) self.assertEqual(b'body', f.read()) def test_latin1_body(self): self.conn.request("PUT", "/url", "body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_bytes_body(self): self.conn.request("PUT", "/url", b"body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_text_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "w", encoding="utf-8") as f: f.write("body") with open(os_helper.TESTFN, encoding="utf-8") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) # No content-length will be determined for files; the body # will be sent using chunked transfer encoding instead. self.assertIsNone(message.get("content-length")) self.assertEqual("chunked", message.get("transfer-encoding")) self.assertEqual(b'4\r\nbody\r\n0\r\n\r\n', f.read()) def test_binary_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "wb") as f: f.write(b"body\xc1") with open(os_helper.TESTFN, "rb") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("chunked", message.get("Transfer-Encoding")) self.assertNotIn("Content-Length", message) self.assertEqual(b'5\r\nbody\xc1\r\n0\r\n\r\n', f.read()) class HTTPResponseTest(TestCase): def setUp(self): body = "HTTP/1.1 200 Ok\r\nMy-Header: first-value\r\nMy-Header: \ second-value\r\n\r\nText" sock = FakeSocket(body) self.resp = client.HTTPResponse(sock) self.resp.begin() def test_getting_header(self): header = self.resp.getheader('My-Header') self.assertEqual(header, 'first-value, second-value') header = self.resp.getheader('My-Header', 'some default') self.assertEqual(header, 'first-value, second-value') def test_getting_nonexistent_header_with_string_default(self): header = self.resp.getheader('No-Such-Header', 'default-value') self.assertEqual(header, 'default-value') def test_getting_nonexistent_header_with_iterable_default(self): header = self.resp.getheader('No-Such-Header', ['default', 'values']) self.assertEqual(header, 'default, values') header = self.resp.getheader('No-Such-Header', ('default', 'values')) self.assertEqual(header, 'default, values') def test_getting_nonexistent_header_without_default(self): header = self.resp.getheader('No-Such-Header') self.assertEqual(header, None) def test_getting_header_defaultint(self): header = self.resp.getheader('No-Such-Header',default=42) self.assertEqual(header, 42) class TunnelTests(TestCase): def setUp(self): response_text = ( 'HTTP/1.0 
200 OK\r\n\r\n' # Reply to CONNECT 'HTTP/1.1 200 OK\r\n' # Reply to HEAD 'Content-Length: 42\r\n\r\n' ) self.host = 'proxy.com' self.conn = client.HTTPConnection(self.host) self.conn._create_connection = self._create_connection(response_text) def tearDown(self): self.conn.close() def _create_connection(self, response_text): def create_connection(address, timeout=None, source_address=None): return FakeSocket(response_text, host=address[0], port=address[1]) return create_connection def test_set_tunnel_host_port_headers(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)'} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_disallow_set_tunnel_after_connect(self): # Once connected, we shouldn't be able to tunnel anymore self.conn.connect() self.assertRaises(RuntimeError, self.conn.set_tunnel, 'destination.com') def test_connect_with_tunnel(self): self.conn.set_tunnel('destination.com') self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) # issue22095 self.assertNotIn(b'Host: destination.com:None', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) # This test should be removed when CONNECT gets the HTTP/1.1 blessing self.assertNotIn(b'Host: proxy.com', self.conn.sock.data) def test_tunnel_connect_single_send_connection_setup(self): """Regresstion test for https://bugs.python.org/issue43332.""" with mock.patch.object(self.conn, 'send') as mock_send: self.conn.set_tunnel('destination.com') self.conn.connect() self.conn.request('GET', '/') mock_send.assert_called() # Likely 2, but this test only cares about the first. 
self.assertGreater( len(mock_send.mock_calls), 1, msg=f'unexpected number of send calls: {mock_send.mock_calls}') proxy_setup_data_sent = mock_send.mock_calls[0][1][0] self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent) self.assertTrue( proxy_setup_data_sent.endswith(b'\r\n\r\n'), msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}') def test_connect_put_request(self): self.conn.set_tunnel('destination.com') self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) def test_tunnel_debuglog(self): expected_header = 'X-Dummy: 1' response_text = 'HTTP/1.0 200 OK\r\n{}\r\n\r\n'.format(expected_header) self.conn.set_debuglevel(1) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') with support.captured_stdout() as output: self.conn.request('PUT', '/', '') lines = output.getvalue().splitlines() self.assertIn('header: {}'.format(expected_header), lines) def test_tunnel_leak(self): sock = None def _create_connection(address, timeout=None, source_address=None): nonlocal sock sock = FakeSocket( 'HTTP/1.1 404 NOT FOUND\r\n\r\n', host=address[0], port=address[1], ) return sock self.conn._create_connection = _create_connection self.conn.set_tunnel('destination.com') exc = None try: self.conn.request('HEAD', '/', '') except OSError as e: # keeping a reference to exc keeps response alive in the traceback exc = e self.assertIsNotNone(exc) self.assertTrue(sock.file_closed) if __name__ == '__main__': unittest.main(verbosity=2) gevent-24.11.1/src/greentest/3.11/test_select.py000066400000000000000000000066721471441230600212260ustar00rootroot00000000000000import errno import os import select import subprocess import sys import textwrap import unittest from test import support support.requires_working_socket(module=True) @unittest.skipIf((sys.platform[:3]=='win'), "can't easily test on this system") class SelectTestCase(unittest.TestCase): class Nope: pass class Almost: def fileno(self): return 'fileno' def test_error_conditions(self): self.assertRaises(TypeError, select.select, 1, 2, 3) self.assertRaises(TypeError, select.select, [self.Nope()], [], []) self.assertRaises(TypeError, select.select, [self.Almost()], [], []) self.assertRaises(TypeError, select.select, [], [], [], "not a number") self.assertRaises(ValueError, select.select, [], [], [], -1) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: self.assertEqual(err.errno, errno.EBADF) else: self.fail("exception not raised") def test_returned_list_identity(self): # See issue #8329 r, w, x = select.select([], [], [], 1) self.assertIsNot(r, w) self.assertIsNot(r, x) self.assertIsNot(w, x) @support.requires_fork() def test_select(self): code = textwrap.dedent(''' import time for i in range(10): print("testing...", flush=True) time.sleep(0.050) ''') cmd = [sys.executable, '-I', '-c', code] with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc: pipe = proc.stdout for timeout in (0, 1, 2, 4, 8, 16) + (None,)*10: if support.verbose: print(f'timeout = {timeout}') rfd, wfd, xfd = select.select([pipe], [], [], timeout) 
self.assertEqual(wfd, []) self.assertEqual(xfd, []) if not rfd: continue if rfd == [pipe]: line = pipe.readline() if support.verbose: print(repr(line)) if not line: if support.verbose: print('EOF') break continue self.fail('Unexpected return values from select():', rfd, wfd, xfd) # Issue 16230: Crash on select resized list @unittest.skipIf( support.is_emscripten, "Emscripten cannot select a fd multiple times." ) def test_select_mutated(self): a = [] class F: def fileno(self): del a[-1] return sys.__stdout__.fileno() a[:] = [F()] * 10 self.assertEqual(select.select([], a, []), ([], a[:5], [])) def test_disallow_instantiation(self): support.check_disallow_instantiation(self, type(select.poll())) if hasattr(select, 'devpoll'): support.check_disallow_instantiation(self, type(select.devpoll())) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_selectors.py000066400000000000000000000472151471441230600217500ustar00rootroot00000000000000import errno import os import random import selectors import signal import socket import sys from test import support from test.support import os_helper from test.support import socket_helper from time import sleep import unittest import unittest.mock import tempfile from time import monotonic as time try: import resource except ImportError: resource = None if support.is_emscripten or support.is_wasi: raise unittest.SkipTest("Cannot create socketpair on Emscripten/WASI.") if hasattr(socket, 'socketpair'): socketpair = socket.socketpair else: def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): with socket.socket(family, type, proto) as l: l.bind((socket_helper.HOST, 0)) l.listen() c = socket.socket(family, type, proto) try: c.connect(l.getsockname()) caddr = c.getsockname() while True: a, addr = l.accept() # check that we've got the correct client if addr == caddr: return c, a a.close() except OSError: c.close() raise def find_ready_matching(ready, flag): match = [] for key, events in ready: if events & flag: match.append(key.fileobj) return match class BaseSelectorTestCase: def make_socketpair(self): rd, wr = socketpair() self.addCleanup(rd.close) self.addCleanup(wr.close) return rd, wr def test_register(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertIsInstance(key, selectors.SelectorKey) self.assertEqual(key.fileobj, rd) self.assertEqual(key.fd, rd.fileno()) self.assertEqual(key.events, selectors.EVENT_READ) self.assertEqual(key.data, "data") # register an unknown event self.assertRaises(ValueError, s.register, 0, 999999) # register an invalid FD self.assertRaises(ValueError, s.register, -10, selectors.EVENT_READ) # register twice self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) # register the same FD, but with a different object self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) def test_unregister(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.unregister(rd) # unregister an unknown file obj self.assertRaises(KeyError, s.unregister, 999999) # unregister twice self.assertRaises(KeyError, s.unregister, rd) def test_unregister_after_fd_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd.close() wr.close() 
s.unregister(r) s.unregister(w) @unittest.skipUnless(os.name == 'posix', "requires posix") def test_unregister_after_fd_close_and_reuse(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd2, wr2 = self.make_socketpair() rd.close() wr.close() os.dup2(rd2.fileno(), r) os.dup2(wr2.fileno(), w) self.addCleanup(os.close, r) self.addCleanup(os.close, w) s.unregister(r) s.unregister(w) def test_unregister_after_socket_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(rd) s.unregister(wr) def test_modify(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ) # modify events key2 = s.modify(rd, selectors.EVENT_WRITE) self.assertNotEqual(key.events, key2.events) self.assertEqual(key2, s.get_key(rd)) s.unregister(rd) # modify data d1 = object() d2 = object() key = s.register(rd, selectors.EVENT_READ, d1) key2 = s.modify(rd, selectors.EVENT_READ, d2) self.assertEqual(key.events, key2.events) self.assertNotEqual(key.data, key2.data) self.assertEqual(key2, s.get_key(rd)) self.assertEqual(key2.data, d2) # modify unknown file obj self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ) # modify use a shortcut d3 = object() s.register = unittest.mock.Mock() s.unregister = unittest.mock.Mock() s.modify(rd, selectors.EVENT_READ, d3) self.assertFalse(s.register.called) self.assertFalse(s.unregister.called) def test_modify_unregister(self): # Make sure the fd is unregister()ed in case of error on # modify(): http://bugs.python.org/issue30014 if self.SELECTOR.__name__ == 'EpollSelector': patch = unittest.mock.patch( 'selectors.EpollSelector._selector_cls') elif self.SELECTOR.__name__ == 'PollSelector': patch = unittest.mock.patch( 'selectors.PollSelector._selector_cls') elif self.SELECTOR.__name__ == 'DevpollSelector': patch = unittest.mock.patch( 'selectors.DevpollSelector._selector_cls') else: raise self.skipTest("") with patch as m: m.return_value.modify = unittest.mock.Mock( side_effect=ZeroDivisionError) s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) self.assertEqual(len(s._map), 1) with self.assertRaises(ZeroDivisionError): s.modify(rd, selectors.EVENT_WRITE) self.assertEqual(len(s._map), 0) def test_close(self): s = self.SELECTOR() self.addCleanup(s.close) mapping = s.get_map() rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) s.close() self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) self.assertRaises(KeyError, mapping.__getitem__, rd) self.assertRaises(KeyError, mapping.__getitem__, wr) def test_get_key(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertEqual(key, s.get_key(rd)) # unknown file obj self.assertRaises(KeyError, s.get_key, 999999) def test_get_map(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() keys = s.get_map() self.assertFalse(keys) self.assertEqual(len(keys), 0) self.assertEqual(list(keys), []) key = s.register(rd, selectors.EVENT_READ, "data") self.assertIn(rd, keys) self.assertEqual(key, keys[rd]) self.assertEqual(len(keys), 
1) self.assertEqual(list(keys), [rd.fileno()]) self.assertEqual(list(keys.values()), [key]) # unknown file obj with self.assertRaises(KeyError): keys[999999] # Read-only mapping with self.assertRaises(TypeError): del keys[rd] def test_select(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) wr_key = s.register(wr, selectors.EVENT_WRITE) result = s.select() for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertTrue(events) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) self.assertEqual([(wr_key, selectors.EVENT_WRITE)], result) def test_select_read_write(self): # gh-110038: when a file descriptor is registered for both read and # write, the two events must be seen on a single call to select(). s = self.SELECTOR() self.addCleanup(s.close) sock1, sock2 = self.make_socketpair() sock2.send(b"foo") my_key = s.register(sock1, selectors.EVENT_READ | selectors.EVENT_WRITE) seen_read, seen_write = False, False result = s.select() # We get the read and write either in the same result entry or in two # distinct entries with the same key. self.assertLessEqual(len(result), 2) for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertEqual(key, my_key) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) if events & selectors.EVENT_READ: self.assertFalse(seen_read) seen_read = True if events & selectors.EVENT_WRITE: self.assertFalse(seen_write) seen_write = True self.assertTrue(seen_read) self.assertTrue(seen_write) def test_context_manager(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() with s as sel: sel.register(rd, selectors.EVENT_READ) sel.register(wr, selectors.EVENT_WRITE) self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) def test_fileno(self): s = self.SELECTOR() self.addCleanup(s.close) if hasattr(s, 'fileno'): fd = s.fileno() self.assertTrue(isinstance(fd, int)) self.assertGreaterEqual(fd, 0) def test_selector(self): s = self.SELECTOR() self.addCleanup(s.close) NUM_SOCKETS = 12 MSG = b" This is a test." MSG_LEN = len(MSG) readers = [] writers = [] r2w = {} w2r = {} for i in range(NUM_SOCKETS): rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) readers.append(rd) writers.append(wr) r2w[rd] = wr w2r[wr] = rd bufs = [] while writers: ready = s.select() ready_writers = find_ready_matching(ready, selectors.EVENT_WRITE) if not ready_writers: self.fail("no sockets ready for writing") wr = random.choice(ready_writers) wr.send(MSG) for i in range(10): ready = s.select() ready_readers = find_ready_matching(ready, selectors.EVENT_READ) if ready_readers: break # there might be a delay between the write to the write end and # the read end is reported ready sleep(0.1) else: self.fail("no sockets ready for reading") self.assertEqual([w2r[wr]], ready_readers) rd = ready_readers[0] buf = rd.recv(MSG_LEN) self.assertEqual(len(buf), MSG_LEN) bufs.append(buf) s.unregister(r2w[rd]) s.unregister(rd) writers.remove(r2w[rd]) self.assertEqual(bufs, [MSG] * NUM_SOCKETS) @unittest.skipIf(sys.platform == 'win32', 'select.select() cannot be used with empty fd sets') def test_empty_select(self): # Issue #23009: Make sure EpollSelector.select() works when no FD is # registered. 
s = self.SELECTOR() self.addCleanup(s.close) self.assertEqual(s.select(timeout=0), []) def test_timeout(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(wr, selectors.EVENT_WRITE) t = time() self.assertEqual(1, len(s.select(0))) self.assertEqual(1, len(s.select(-1))) self.assertLess(time() - t, 0.5) s.unregister(wr) s.register(rd, selectors.EVENT_READ) t = time() self.assertFalse(s.select(0)) self.assertFalse(s.select(-1)) self.assertLess(time() - t, 0.5) t0 = time() self.assertFalse(s.select(1)) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_exc(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() class InterruptSelect(Exception): pass def handler(*args): raise InterruptSelect orig_alrm_handler = signal.signal(signal.SIGALRM, handler) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal which raises an exception with self.assertRaises(InterruptSelect): s.select(30) # select() was interrupted before the timeout of 30 seconds self.assertLess(time() - t, 5.0) finally: signal.alarm(0) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_noraise(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda *args: None) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal, but the signal handler doesn't # raise an exception, so select() should by retries with a recomputed # timeout self.assertFalse(s.select(1.5)) self.assertGreaterEqual(time() - t, 1.0) finally: signal.alarm(0) class ScalableSelectorMixIn: # see issue #18963 for why it's skipped on older OS X versions @support.requires_mac_ver(10, 5) @unittest.skipUnless(resource, "Test needs resource module") @support.requires_resource('cpu') def test_above_fd_setsize(self): # A scalable implementation should have no problem with more than # FD_SETSIZE file descriptors. Since we don't know the value, we just # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling. soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) try: resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)) self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE, (soft, hard)) NUM_FDS = min(hard, 2**16) except (OSError, ValueError): NUM_FDS = soft # guard for already allocated FDs (stdin, stdout...) 
NUM_FDS -= 32 s = self.SELECTOR() self.addCleanup(s.close) for i in range(NUM_FDS // 2): try: rd, wr = self.make_socketpair() except OSError: # too many FDs, skip - note that we should only catch EMFILE # here, but apparently *BSD and Solaris can fail upon connect() # or bind() with EADDRNOTAVAIL, so let's be safe self.skipTest("FD limit reached") try: s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) except OSError as e: if e.errno == errno.ENOSPC: # this can be raised by epoll if we go over # fs.epoll.max_user_watches sysctl self.skipTest("FD limit reached") raise try: fds = s.select() except OSError as e: if e.errno == errno.EINVAL and sys.platform == 'darwin': # unexplainable errors on macOS don't need to fail the test self.skipTest("Invalid argument error calling poll()") raise self.assertEqual(NUM_FDS // 2, len(fds)) class DefaultSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.DefaultSelector class SelectSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.SelectSelector @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'PollSelector', None) @unittest.skipUnless(hasattr(selectors, 'EpollSelector'), "Test needs selectors.EpollSelector") class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'EpollSelector', None) def test_register_file(self): # epoll(7) returns EPERM when given a file to watch s = self.SELECTOR() with tempfile.NamedTemporaryFile() as f: with self.assertRaises(IOError): s.register(f, selectors.EVENT_READ) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(f) @unittest.skipUnless(hasattr(selectors, 'KqueueSelector'), "Test needs selectors.KqueueSelector)") class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'KqueueSelector', None) def test_register_bad_fd(self): # a file descriptor that's been closed should raise an OSError # with EBADF s = self.SELECTOR() bad_f = os_helper.make_bad_fd() with self.assertRaises(OSError) as cm: s.register(bad_f, selectors.EVENT_READ) self.assertEqual(cm.exception.errno, errno.EBADF) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(bad_f) def test_empty_select_timeout(self): # Issues #23009, #29255: Make sure timeout is applied when no fds # are registered. 
s = self.SELECTOR() self.addCleanup(s.close) t0 = time() self.assertEqual(s.select(1), []) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(selectors, 'DevpollSelector'), "Test needs selectors.DevpollSelector") class DevpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'DevpollSelector', None) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_signal.py000066400000000000000000001513401471441230600212150ustar00rootroot00000000000000import enum import errno import functools import inspect import os import random import signal import socket import statistics import subprocess import sys import threading import time import unittest from test import support from test.support import os_helper from test.support.script_helper import assert_python_ok, spawn_python from test.support import threading_helper try: import _testcapi except ImportError: _testcapi = None class GenericTests(unittest.TestCase): def test_enums(self): for name in dir(signal): sig = getattr(signal, name) if name in {'SIG_DFL', 'SIG_IGN'}: self.assertIsInstance(sig, signal.Handlers) elif name in {'SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'}: self.assertIsInstance(sig, signal.Sigmasks) elif name.startswith('SIG') and not name.startswith('SIG_'): self.assertIsInstance(sig, signal.Signals) elif name.startswith('CTRL_'): self.assertIsInstance(sig, signal.Signals) self.assertEqual(sys.platform, "win32") CheckedSignals = enum._old_convert_( enum.IntEnum, 'Signals', 'signal', lambda name: name.isupper() and (name.startswith('SIG') and not name.startswith('SIG_')) or name.startswith('CTRL_'), source=signal, ) enum._test_simple_enum(CheckedSignals, signal.Signals) CheckedHandlers = enum._old_convert_( enum.IntEnum, 'Handlers', 'signal', lambda name: name in ('SIG_DFL', 'SIG_IGN'), source=signal, ) enum._test_simple_enum(CheckedHandlers, signal.Handlers) Sigmasks = getattr(signal, 'Sigmasks', None) if Sigmasks is not None: CheckedSigmasks = enum._old_convert_( enum.IntEnum, 'Sigmasks', 'signal', lambda name: name in ('SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'), source=signal, ) enum._test_simple_enum(CheckedSigmasks, Sigmasks) def test_functions_module_attr(self): # Issue #27718: If __all__ is not defined all non-builtin functions # should have correct __module__ to be displayed by pydoc. 
for name in dir(signal): value = getattr(signal, name) if inspect.isroutine(value) and not inspect.isbuiltin(value): self.assertEqual(value.__module__, 'signal') @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class PosixTests(unittest.TestCase): def trivial_signal_handler(self, *args): pass def create_handler_with_partial(self, argument): return functools.partial(self.trivial_signal_handler, argument) def test_out_of_range_signal_number_raises_error(self): self.assertRaises(ValueError, signal.getsignal, 4242) self.assertRaises(ValueError, signal.signal, 4242, self.trivial_signal_handler) self.assertRaises(ValueError, signal.strsignal, 4242) def test_setting_signal_handler_to_none_raises_error(self): self.assertRaises(TypeError, signal.signal, signal.SIGUSR1, None) def test_getsignal(self): hup = signal.signal(signal.SIGHUP, self.trivial_signal_handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), self.trivial_signal_handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) def test_no_repr_is_called_on_signal_handler(self): # See https://github.com/python/cpython/issues/112559. class MyArgument: def __init__(self): self.repr_count = 0 def __repr__(self): self.repr_count += 1 return super().__repr__() argument = MyArgument() self.assertEqual(0, argument.repr_count) handler = self.create_handler_with_partial(argument) hup = signal.signal(signal.SIGHUP, handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) self.assertEqual(0, argument.repr_count) def test_strsignal(self): self.assertIn("Interrupt", signal.strsignal(signal.SIGINT)) self.assertIn("Terminated", signal.strsignal(signal.SIGTERM)) self.assertIn("Hangup", signal.strsignal(signal.SIGHUP)) # Issue 3864, unknown if this affects earlier versions of freebsd also def test_interprocess_signal(self): dirname = os.path.dirname(__file__) script = os.path.join(dirname, 'signalinterproctester.py') assert_python_ok(script) @unittest.skipUnless( hasattr(signal, "valid_signals"), "requires signal.valid_signals" ) def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertIn(signal.Signals.SIGINT, s) self.assertIn(signal.Signals.SIGALRM, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) # gh-91145: Make sure that all SIGxxx constants exposed by the Python # signal module have a number in the [0; signal.NSIG-1] range. for name in dir(signal): if not name.startswith("SIG"): continue if name in {"SIG_IGN", "SIG_DFL"}: # SIG_IGN and SIG_DFL are pointers continue with self.subTest(name=name): signum = getattr(signal, name) self.assertGreaterEqual(signum, 0) self.assertLess(signum, signal.NSIG) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers exit via SIGINT.""" process = subprocess.run( [sys.executable, "-c", "import os, signal, time\n" "os.kill(os.getpid(), signal.SIGINT)\n" "for _ in range(999): time.sleep(0.01)"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) self.assertEqual(process.returncode, -signal.SIGINT) # Caveat: The exit code is insufficient to guarantee we actually died # via a signal. POSIX shells do more than look at the 8 bit value. 
# Writing an automation friendly test of an interactive shell # to confirm that our process died via a SIGINT proved too complex. @unittest.skipUnless(sys.platform == "win32", "Windows specific") class WindowsSignalTests(unittest.TestCase): def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertGreaterEqual(len(s), 6) self.assertIn(signal.Signals.SIGINT, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) def test_issue9324(self): # Updated for issue #10003, adding SIGBREAK handler = lambda x, y: None checked = set() for sig in (signal.SIGABRT, signal.SIGBREAK, signal.SIGFPE, signal.SIGILL, signal.SIGINT, signal.SIGSEGV, signal.SIGTERM): # Set and then reset a handler for signals that work on windows. # Issue #18396, only for signals without a C-level handler. if signal.getsignal(sig) is not None: signal.signal(sig, signal.signal(sig, handler)) checked.add(sig) # Issue #18396: Ensure the above loop at least tested *something* self.assertTrue(checked) with self.assertRaises(ValueError): signal.signal(-1, handler) with self.assertRaises(ValueError): signal.signal(7, handler) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers an exit using STATUS_CONTROL_C_EXIT.""" # We don't test via os.kill(os.getpid(), signal.CTRL_C_EVENT) here # as that requires setting up a console control handler in a child # in its own process group. Doable, but quite complicated. (see # @eryksun on https://github.com/python/cpython/pull/11862) process = subprocess.run( [sys.executable, "-c", "raise KeyboardInterrupt"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) STATUS_CONTROL_C_EXIT = 0xC000013A self.assertEqual(process.returncode, STATUS_CONTROL_C_EXIT) class WakeupFDTests(unittest.TestCase): def test_invalid_call(self): # First parameter is positional-only with self.assertRaises(TypeError): signal.set_wakeup_fd(signum=signal.SIGINT) # warn_on_full_buffer is a keyword-only parameter with self.assertRaises(TypeError): signal.set_wakeup_fd(signal.SIGINT, False) def test_invalid_fd(self): fd = os_helper.make_bad_fd() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_invalid_socket(self): sock = socket.socket() fd = sock.fileno() sock.close() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) # Emscripten does not support fstat on pipes yet. 
# https://github.com/emscripten-core/emscripten/issues/16414 @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_result(self): r1, w1 = os.pipe() self.addCleanup(os.close, r1) self.addCleanup(os.close, w1) r2, w2 = os.pipe() self.addCleanup(os.close, r2) self.addCleanup(os.close, w2) if hasattr(os, 'set_blocking'): os.set_blocking(w1, False) os.set_blocking(w2, False) signal.set_wakeup_fd(w1) self.assertEqual(signal.set_wakeup_fd(w2), w1) self.assertEqual(signal.set_wakeup_fd(-1), w2) self.assertEqual(signal.set_wakeup_fd(-1), -1) @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_set_wakeup_fd_socket_result(self): sock1 = socket.socket() self.addCleanup(sock1.close) sock1.setblocking(False) fd1 = sock1.fileno() sock2 = socket.socket() self.addCleanup(sock2.close) sock2.setblocking(False) fd2 = sock2.fileno() signal.set_wakeup_fd(fd1) self.assertEqual(signal.set_wakeup_fd(fd2), fd1) self.assertEqual(signal.set_wakeup_fd(-1), fd2) self.assertEqual(signal.set_wakeup_fd(-1), -1) # On Windows, files are always blocking and Windows does not provide a # function to test if a socket is in non-blocking mode. @unittest.skipIf(sys.platform == "win32", "tests specific to POSIX") @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_blocking(self): rfd, wfd = os.pipe() self.addCleanup(os.close, rfd) self.addCleanup(os.close, wfd) # fd must be non-blocking os.set_blocking(wfd, True) with self.assertRaises(ValueError) as cm: signal.set_wakeup_fd(wfd) self.assertEqual(str(cm.exception), "the fd %s must be in non-blocking mode" % wfd) # non-blocking is ok os.set_blocking(wfd, False) signal.set_wakeup_fd(wfd) signal.set_wakeup_fd(-1) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class WakeupSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def check_wakeup(self, test_body, *signals, ordered=True): # use a subprocess to have only one thread code = """if 1: import _testcapi import os import signal import struct signals = {!r} def handler(signum, frame): pass def check_signum(signals): data = os.read(read, len(signals)+1) raised = struct.unpack('%uB' % len(data), data) if not {!r}: raised = set(raised) signals = set(signals) if raised != signals: raise Exception("%r != %r" % (raised, signals)) {} signal.signal(signal.SIGALRM, handler) read, write = os.pipe() os.set_blocking(write, False) signal.set_wakeup_fd(write) test() check_signum(signals) os.close(read) os.close(write) """.format(tuple(map(int, signals)), ordered, test_body) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_wakeup_write_error(self): # Issue #16105: write() errors in the C signal handler should not # pass silently. # Use a subprocess to have only one thread. 
code = """if 1: import _testcapi import errno import os import signal import sys from test.support import captured_stderr def handler(signum, frame): 1/0 signal.signal(signal.SIGALRM, handler) r, w = os.pipe() os.set_blocking(r, False) # Set wakeup_fd a read-only file descriptor to trigger the error signal.set_wakeup_fd(r) try: with captured_stderr() as err: signal.raise_signal(signal.SIGALRM) except ZeroDivisionError: # An ignored exception should have been printed out on stderr err = err.getvalue() if ('Exception ignored when trying to write to the signal wakeup fd' not in err): raise AssertionError(err) if ('OSError: [Errno %d]' % errno.EBADF) not in err: raise AssertionError(err) else: raise AssertionError("ZeroDivisionError not raised") os.close(r) os.close(w) """ r, w = os.pipe() try: os.write(r, b'x') except OSError: pass else: self.skipTest("OS doesn't report write() error on the read end of a pipe") finally: os.close(r) os.close(w) assert_python_ok('-c', code) def test_wakeup_fd_early(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) # We attempt to get a signal during the sleep, # before select is called try: select.select([], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") before_time = time.monotonic() select.select([read], [], [], TIMEOUT_FULL) after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_wakeup_fd_during(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) before_time = time.monotonic() # We attempt to get a signal during the select call try: select.select([read], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_signum(self): self.check_wakeup("""def test(): signal.signal(signal.SIGUSR1, handler) signal.raise_signal(signal.SIGUSR1) signal.raise_signal(signal.SIGALRM) """, signal.SIGUSR1, signal.SIGALRM) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pending(self): self.check_wakeup("""def test(): signum1 = signal.SIGUSR1 signum2 = signal.SIGUSR2 signal.signal(signum1, handler) signal.signal(signum2, handler) signal.pthread_sigmask(signal.SIG_BLOCK, (signum1, signum2)) signal.raise_signal(signum1) signal.raise_signal(signum2) # Unblocking the 2 signals calls the C signal handler twice signal.pthread_sigmask(signal.SIG_UNBLOCK, (signum1, signum2)) """, signal.SIGUSR1, signal.SIGUSR2, ordered=False) @unittest.skipUnless(hasattr(socket, 'socketpair'), 'need socket.socketpair') class WakeupSocketSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_socket(self): # use a subprocess to have only one thread code = """if 1: import signal import socket import struct import _testcapi signum = signal.SIGINT signals = (signum,) def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() 
write.setblocking(False) signal.set_wakeup_fd(write.fileno()) signal.raise_signal(signum) data = read.recv(1) if not data: raise Exception("no signum written") raised = struct.unpack('B', data) if raised != signals: raise Exception("%r != %r" % (raised, signals)) read.close() write.close() """ assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_send_error(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() read.setblocking(False) write.setblocking(False) signal.set_wakeup_fd(write.fileno()) # Close sockets: send() will fail read.close() write.close() with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if ('Exception ignored when trying to {action} to the signal wakeup fd' not in err): raise AssertionError(err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_warn_on_full_buffer(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT # This handler will be called, but we intentionally won't read from # the wakeup fd. def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() # Fill the socketpair buffer if sys.platform == 'win32': # bpo-34130: On Windows, sometimes non-blocking send fails to fill # the full socketpair buffer, so use a timeout of 50 ms instead. write.settimeout(0.050) else: write.setblocking(False) written = 0 if sys.platform == "vxworks": CHUNK_SIZES = (1,) else: # Start with large chunk size to reduce the # number of send needed to fill the buffer. 
CHUNK_SIZES = (2 ** 16, 2 ** 8, 1) for chunk_size in CHUNK_SIZES: chunk = b"x" * chunk_size try: while True: write.send(chunk) written += chunk_size except (BlockingIOError, TimeoutError): pass print(f"%s bytes written into the socketpair" % written, flush=True) write.setblocking(False) try: write.send(b"x") except BlockingIOError: # The socketpair buffer seems full pass else: raise AssertionError("%s bytes failed to fill the socketpair " "buffer" % written) # By default, we get a warning when a signal arrives msg = ('Exception ignored when trying to {action} ' 'to the signal wakeup fd') signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("first set_wakeup_fd() test failed, " "stderr: %r" % err) # And also if warn_on_full_buffer=True signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=True) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("set_wakeup_fd(warn_on_full_buffer=True) " "test failed, stderr: %r" % err) # But not if warn_on_full_buffer=False signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=False) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if err != "": raise AssertionError("set_wakeup_fd(warn_on_full_buffer=False) " "test failed, stderr: %r" % err) # And then check the default again, to make sure warn_on_full_buffer # settings don't leak across calls. signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("second set_wakeup_fd() test failed, " "stderr: %r" % err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'siginterrupt'), "needs signal.siginterrupt()") @support.requires_subprocess() @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") class SiginterruptTest(unittest.TestCase): def readpipe_interrupted(self, interrupt): """Perform a read during which a signal will arrive. Return True if the read is interrupted by the signal and raises an exception. Return False if it returns normally. 
""" # use a subprocess to have only one thread, to have a timeout on the # blocking read and to not touch signal handling in this process code = """if 1: import errno import os import signal import sys interrupt = %r r, w = os.pipe() def handler(signum, frame): 1 / 0 signal.signal(signal.SIGALRM, handler) if interrupt is not None: signal.siginterrupt(signal.SIGALRM, interrupt) print("ready") sys.stdout.flush() # run the test twice try: for loop in range(2): # send a SIGALRM in a second (during the read) signal.alarm(1) try: # blocking call: read from a pipe without data os.read(r, 1) except ZeroDivisionError: pass else: sys.exit(2) sys.exit(3) finally: os.close(r) os.close(w) """ % (interrupt,) with spawn_python('-c', code) as process: try: # wait until the child process is loaded and has started first_line = process.stdout.readline() stdout, stderr = process.communicate(timeout=support.SHORT_TIMEOUT) except subprocess.TimeoutExpired: process.kill() return False else: stdout = first_line + stdout exitcode = process.wait() if exitcode not in (2, 3): raise Exception("Child error (exit code %s): %r" % (exitcode, stdout)) return (exitcode == 3) def test_without_siginterrupt(self): # If a signal handler is installed and siginterrupt is not called # at all, when that signal arrives, it interrupts a syscall that's in # progress. interrupted = self.readpipe_interrupted(None) self.assertTrue(interrupted) def test_siginterrupt_on(self): # If a signal handler is installed and siginterrupt is called with # a true value for the second argument, when that signal arrives, it # interrupts a syscall that's in progress. interrupted = self.readpipe_interrupted(True) self.assertTrue(interrupted) @support.requires_resource('walltime') def test_siginterrupt_off(self): # If a signal handler is installed and siginterrupt is called with # a false value for the second argument, when that signal arrives, it # does not interrupt a syscall that's in progress. interrupted = self.readpipe_interrupted(False) self.assertFalse(interrupted) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'getitimer') and hasattr(signal, 'setitimer'), "needs signal.getitimer() and signal.setitimer()") class ItimerTest(unittest.TestCase): def setUp(self): self.hndl_called = False self.hndl_count = 0 self.itimer = None self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm) def tearDown(self): signal.signal(signal.SIGALRM, self.old_alarm) if self.itimer is not None: # test_itimer_exc doesn't change this attr # just ensure that itimer is stopped signal.setitimer(self.itimer, 0) def sig_alrm(self, *args): self.hndl_called = True def sig_vtalrm(self, *args): self.hndl_called = True if self.hndl_count > 3: # it shouldn't be here, because it should have been disabled. raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " "timer.") elif self.hndl_count == 3: # disable ITIMER_VIRTUAL, this function shouldn't be called anymore signal.setitimer(signal.ITIMER_VIRTUAL, 0) self.hndl_count += 1 def sig_prof(self, *args): self.hndl_called = True signal.setitimer(signal.ITIMER_PROF, 0) def test_itimer_exc(self): # XXX I'm assuming -1 is an invalid itimer, but maybe some platform # defines it ? self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) # Negative times are treated as zero on some platforms. 
if 0: self.assertRaises(signal.ItimerError, signal.setitimer, signal.ITIMER_REAL, -1) def test_itimer_real(self): self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1.0) signal.pause() self.assertEqual(self.hndl_called, True) # Issue 3864, unknown if this affects earlier versions of freebsd also @unittest.skipIf(sys.platform in ('netbsd5',), 'itimer not reliable (does not mix well with threading) on some BSDs.') def test_itimer_virtual(self): self.itimer = signal.ITIMER_VIRTUAL signal.signal(signal.SIGVTALRM, self.sig_vtalrm) signal.setitimer(self.itimer, 0.3, 0.2) for _ in support.busy_retry(60.0, error=False): # use up some virtual time by doing real work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): # sig_vtalrm handler stopped this itimer break else: # bpo-8424 self.skipTest("timeout: likely cause: machine too slow or load too " "high") # virtual itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_itimer_prof(self): self.itimer = signal.ITIMER_PROF signal.signal(signal.SIGPROF, self.sig_prof) signal.setitimer(self.itimer, 0.2, 0.2) for _ in support.busy_retry(60.0, error=False): # do some work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): # sig_prof handler stopped this itimer break else: # bpo-8424 self.skipTest("timeout: likely cause: machine too slow or load too " "high") # profiling itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_setitimer_tiny(self): # bpo-30807: C setitimer() takes a microsecond-resolution interval. # Check that float -> timeval conversion doesn't round # the interval down to zero, which would disable the timer. self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1e-6) time.sleep(1) self.assertEqual(self.hndl_called, True) class PendingSignalsTests(unittest.TestCase): """ Test pthread_sigmask(), pthread_kill(), sigpending() and sigwait() functions. 
""" @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending_empty(self): self.assertEqual(signal.sigpending(), set()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending(self): code = """if 1: import os import signal def handler(signum, frame): 1/0 signum = signal.SIGUSR1 signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) os.kill(os.getpid(), signum) pending = signal.sigpending() for sig in pending: assert isinstance(sig, signal.Signals), repr(pending) if pending != {signum}: raise Exception('%s != {%s}' % (pending, signum)) try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill(self): code = """if 1: import signal import threading import sys signum = signal.SIGUSR1 def handler(signum, frame): 1/0 signal.signal(signum, handler) tid = threading.get_ident() try: signal.pthread_kill(tid, signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def wait_helper(self, blocked, test): """ test: body of the "def test(signum):" function. blocked: number of the blocked signal """ code = '''if 1: import signal import sys from signal import Signals def handler(signum, frame): 1/0 %s blocked = %s signum = signal.SIGALRM # child: block and wait the signal try: signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [blocked]) # Do the tests test(signum) # The handler must not be called on unblock try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [blocked]) except ZeroDivisionError: print("the signal handler has been called", file=sys.stderr) sys.exit(1) except BaseException as err: print("error: {}".format(err), file=sys.stderr) sys.stderr.flush() sys.exit(1) ''' % (test.strip(), blocked) # sig*wait* must be called with the signal blocked: since the current # process might have several threads running, use a subprocess to have # a single thread. 
assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') def test_sigwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) received = signal.sigwait([signum]) assert isinstance(received, signal.Signals), received if received != signum: raise Exception('received %s, not %s' % (received, signum)) ''') @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'), 'need signal.sigwaitinfo()') def test_sigwaitinfo(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigwaitinfo([signum]) if info.si_signo != signum: raise Exception("info.si_signo != %s" % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigtimedwait([signum], 10.1000) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_poll(self): # check that polling with sigtimedwait works self.wait_helper(signal.SIGALRM, ''' def test(signum): import os os.kill(os.getpid(), signum) info = signal.sigtimedwait([signum], 0) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_timeout(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): received = signal.sigtimedwait([signum], 1.0) if received is not None: raise Exception("received=%r" % (received,)) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_negative_timeout(self): signum = signal.SIGALRM self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_sigwait_thread(self): # Check that calling sigwait() from a thread doesn't suspend the whole # process. A new interpreter is spawned to avoid problems when mixing # threads and fork(): only async-safe functions are allowed between # fork() and exec(). 
assert_python_ok("-c", """if True: import os, threading, sys, time, signal # the default handler terminates the process signum = signal.SIGUSR1 def kill_later(): # wait until the main thread is waiting in sigwait() time.sleep(1) os.kill(os.getpid(), signum) # the signal must be blocked by all the threads signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) killer = threading.Thread(target=kill_later) killer.start() received = signal.sigwait([signum]) if received != signum: print("sigwait() received %s, not %s" % (received, signum), file=sys.stderr) sys.exit(1) killer.join() # unblock the signal, which should have been cleared by sigwait() signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) """) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_arguments(self): self.assertRaises(TypeError, signal.pthread_sigmask) self.assertRaises(TypeError, signal.pthread_sigmask, 1) self.assertRaises(TypeError, signal.pthread_sigmask, 1, 2, 3) self.assertRaises(OSError, signal.pthread_sigmask, 1700, []) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [signal.NSIG]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [0]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [1<<1000]) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_valid_signals(self): s = signal.pthread_sigmask(signal.SIG_BLOCK, signal.valid_signals()) self.addCleanup(signal.pthread_sigmask, signal.SIG_SETMASK, s) # Get current blocked set s = signal.pthread_sigmask(signal.SIG_UNBLOCK, signal.valid_signals()) self.assertLessEqual(s, signal.valid_signals()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_pthread_sigmask(self): code = """if 1: import signal import os; import threading def handler(signum, frame): 1/0 def kill(signum): os.kill(os.getpid(), signum) def check_mask(mask): for sig in mask: assert isinstance(sig, signal.Signals), repr(sig) def read_sigmask(): sigmask = signal.pthread_sigmask(signal.SIG_BLOCK, []) check_mask(sigmask) return sigmask signum = signal.SIGUSR1 # Install our signal handler old_handler = signal.signal(signum, handler) # Unblock SIGUSR1 (and copy the old mask) to test our signal handler old_mask = signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) check_mask(old_mask) try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Block and then raise SIGUSR1. 
The signal is blocked: the signal # handler is not called, and the signal is now pending mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) check_mask(mask) kill(signum) # Check the new mask blocked = read_sigmask() check_mask(blocked) if signum not in blocked: raise Exception("%s not in %s" % (signum, blocked)) if old_mask ^ blocked != {signum}: raise Exception("%s ^ %s != {%s}" % (old_mask, blocked, signum)) # Unblock SIGUSR1 try: # unblock the pending signal calls immediately the signal handler signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Check the new mask unblocked = read_sigmask() if signum in unblocked: raise Exception("%s in %s" % (signum, unblocked)) if blocked ^ unblocked != {signum}: raise Exception("%s ^ %s != {%s}" % (blocked, unblocked, signum)) if old_mask != unblocked: raise Exception("%s != %s" % (old_mask, unblocked)) """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill_main_thread(self): # Test that a signal can be sent to the main thread with pthread_kill() # before any other thread has been created (see issue #12392). code = """if True: import threading import signal import sys def handler(signum, frame): sys.exit(3) signal.signal(signal.SIGUSR1, handler) signal.pthread_kill(threading.get_ident(), signal.SIGUSR1) sys.exit(2) """ with spawn_python('-c', code) as process: stdout, stderr = process.communicate() exitcode = process.wait() if exitcode != 3: raise Exception("Child error (exit code %s): %s" % (exitcode, stdout)) class StressTest(unittest.TestCase): """ Stress signal delivery, especially when a signal arrives in the middle of recomputing the signal state or executing previously tripped signal handlers. """ def setsig(self, signum, handler): old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) def measure_itimer_resolution(self): N = 20 times = [] def handler(signum=None, frame=None): if len(times) < N: times.append(time.perf_counter()) # 1 µs is the smallest possible timer interval, # we want to measure what the concrete duration # will be on this platform signal.setitimer(signal.ITIMER_REAL, 1e-6) self.addCleanup(signal.setitimer, signal.ITIMER_REAL, 0) self.setsig(signal.SIGALRM, handler) handler() while len(times) < N: time.sleep(1e-3) durations = [times[i+1] - times[i] for i in range(len(times) - 1)] med = statistics.median(durations) if support.verbose: print("detected median itimer() resolution: %.6f s." % (med,)) return med def decide_itimer_count(self): # Some systems have poor setitimer() resolution (for example # measured around 20 ms. on FreeBSD 9), so decide on a reasonable # number of sequential timers based on that. reso = self.measure_itimer_resolution() if reso <= 1e-4: return 10000 elif reso <= 1e-2: return 100 else: self.skipTest("detected itimer resolution (%.3f s.) too high " "(> 10 ms.) on this platform (or system too busy)" % (reso,)) @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_dependent(self): """ This test uses dependent signal handlers. """ N = self.decide_itimer_count() sigs = [] def first_handler(signum, frame): # 1e-6 is the minimum non-zero value for `setitimer()`. 
# Choose a random delay so as to improve chances of # triggering a race condition. Ideally the signal is received # when inside critical signal-handling routines such as # Py_MakePendingCalls(). signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) def second_handler(signum=None, frame=None): sigs.append(signum) # Here on Linux, SIGPROF > SIGALRM > SIGUSR1. By using both # ascending and descending sequences (SIGUSR1 then SIGALRM, # SIGPROF then SIGALRM), we maximize chances of hitting a bug. self.setsig(signal.SIGPROF, first_handler) self.setsig(signal.SIGUSR1, first_handler) self.setsig(signal.SIGALRM, second_handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: os.kill(os.getpid(), signal.SIGPROF) expected_sigs += 1 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 1 while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_simultaneous(self): """ This test uses simultaneous signal handlers. """ N = self.decide_itimer_count() sigs = [] def handler(signum, frame): sigs.append(signum) self.setsig(signal.SIGUSR1, handler) self.setsig(signal.SIGALRM, handler) # for ITIMER_REAL expected_sigs = 0 while expected_sigs < N: # Hopefully the SIGALRM will be received somewhere during # initial processing of SIGUSR1. signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 2 # Wait for handlers to run to avoid signal coalescing for _ in support.sleeping_retry(support.SHORT_TIMEOUT, error=False): if len(sigs) >= expected_sigs: break # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipIf(sys.platform == "darwin", "crashes due to system bug (FB13453490)") @unittest.skipUnless(hasattr(signal, "SIGUSR1"), "test needs SIGUSR1") @threading_helper.requires_working_threading() def test_stress_modifying_handlers(self): # bpo-43406: race condition between trip_signal() and signal.signal signum = signal.SIGUSR1 num_sent_signals = 0 num_received_signals = 0 do_stop = False def custom_handler(signum, frame): nonlocal num_received_signals num_received_signals += 1 def set_interrupts(): nonlocal num_sent_signals while not do_stop: signal.raise_signal(signum) num_sent_signals += 1 def cycle_handlers(): while num_sent_signals < 100 or num_received_signals < 1: for i in range(20000): # Cycle between a Python-defined and a non-Python handler for handler in [custom_handler, signal.SIG_IGN]: signal.signal(signum, handler) old_handler = signal.signal(signum, custom_handler) self.addCleanup(signal.signal, signum, old_handler) t = threading.Thread(target=set_interrupts) try: ignored = False with support.catch_unraisable_exception() as cm: t.start() cycle_handlers() do_stop = True t.join() if cm.unraisable is not None: # An unraisable exception may be printed out when # a signal is ignored due to the aforementioned # race condition, check it. 
self.assertIsInstance(cm.unraisable.exc_value, OSError) self.assertIn( f"Signal {signum:d} ignored due to race condition", str(cm.unraisable.exc_value)) ignored = True # bpo-43406: Even if it is unlikely, it's technically possible that # all signals were ignored because of race conditions. if not ignored: # Sanity check that some signals were received, but not all self.assertGreater(num_received_signals, 0) self.assertLessEqual(num_received_signals, num_sent_signals) finally: do_stop = True t.join() class RaiseSignalTest(unittest.TestCase): def test_sigint(self): with self.assertRaises(KeyboardInterrupt): signal.raise_signal(signal.SIGINT) @unittest.skipIf(sys.platform != "win32", "Windows specific test") def test_invalid_argument(self): try: SIGHUP = 1 # not supported on win32 signal.raise_signal(SIGHUP) self.fail("OSError (Invalid argument) expected") except OSError as e: if e.errno == errno.EINVAL: pass else: raise def test_handler(self): is_ok = False def handler(a, b): nonlocal is_ok is_ok = True old_signal = signal.signal(signal.SIGINT, handler) self.addCleanup(signal.signal, signal.SIGINT, old_signal) signal.raise_signal(signal.SIGINT) self.assertTrue(is_ok) def test__thread_interrupt_main(self): # See https://github.com/python/cpython/issues/102397 code = """if 1: import _thread class Foo(): def __del__(self): _thread.interrupt_main() x = Foo() """ rc, out, err = assert_python_ok('-c', code) self.assertIn(b'OSError: Signal 2 ignored due to race condition', err) class PidfdSignalTest(unittest.TestCase): @unittest.skipUnless( hasattr(signal, "pidfd_send_signal"), "pidfd support not built in", ) def test_pidfd_send_signal(self): with self.assertRaises(OSError) as cm: signal.pidfd_send_signal(0, signal.SIGINT) if cm.exception.errno == errno.ENOSYS: self.skipTest("kernel does not support pidfds") elif cm.exception.errno == errno.EPERM: self.skipTest("Not enough privileges to use pidfs") self.assertEqual(cm.exception.errno, errno.EBADF) my_pidfd = os.open(f'/proc/{os.getpid()}', os.O_DIRECTORY) self.addCleanup(os.close, my_pidfd) with self.assertRaisesRegex(TypeError, "^siginfo must be None$"): signal.pidfd_send_signal(my_pidfd, signal.SIGINT, object(), 0) with self.assertRaises(KeyboardInterrupt): signal.pidfd_send_signal(my_pidfd, signal.SIGINT) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_smtpd.py000066400000000000000000001213371471441230600210720ustar00rootroot00000000000000import unittest import textwrap from test import support, mock_socket from test.support import socket_helper from test.support import warnings_helper import socket import io smtpd = warnings_helper.import_deprecated('smtpd') asyncore = warnings_helper.import_deprecated('asyncore') if not socket_helper.has_gethostname: raise unittest.SkipTest("test requires gethostname()") class DummyServer(smtpd.SMTPServer): def __init__(self, *args, **kwargs): smtpd.SMTPServer.__init__(self, *args, **kwargs) self.messages = [] if self._decode_data: self.return_status = 'return status' else: self.return_status = b'return status' def process_message(self, peer, mailfrom, rcpttos, data, **kw): self.messages.append((peer, mailfrom, rcpttos, data)) if data == self.return_status: return '250 Okish' if 'mail_options' in kw and 'SMTPUTF8' in kw['mail_options']: return '250 SMTPUTF8 message okish' class DummyDispatcherBroken(Exception): pass class BrokenDummyServer(DummyServer): def listen(self, num): raise DummyDispatcherBroken() class 
SMTPDServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def test_process_message_unimplemented(self): server = smtpd.SMTPServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) def write_line(line): channel.socket.queue_recv(line) channel.handle_read() write_line(b'HELO example') write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') self.assertRaises(NotImplementedError, write_line, b'spam\r\n.\r\n') def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPServer, (socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket class DebuggingServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def send_data(self, channel, data, enable_SMTPUTF8=False): def write_line(line): channel.socket.queue_recv(line) channel.handle_read() write_line(b'EHLO example') if enable_SMTPUTF8: write_line(b'MAIL From:eggs@example BODY=8BITMIME SMTPUTF8') else: write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') write_line(data) write_line(b'.') def test_process_message_with_decode_data_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nhello\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- From: test X-Peer: peer-address hello ------------ END MESSAGE ------------ """)) def test_process_message_with_decode_data_false(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_SMTPUTF8_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n', enable_SMTPUTF8=True) stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- mail options: ['BODY=8BITMIME', 'SMTPUTF8'] b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def 
tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket class TestFamilyDetection(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") def test_socket_uses_IPv6(self): server = smtpd.SMTPServer((socket_helper.HOSTv6, 0), (socket_helper.HOSTv4, 0)) self.assertEqual(server.socket.family, socket.AF_INET6) def test_socket_uses_IPv4(self): server = smtpd.SMTPServer((socket_helper.HOSTv4, 0), (socket_helper.HOSTv6, 0)) self.assertEqual(server.socket.family, socket.AF_INET) class TestRcptOptionParsing(unittest.TestCase): error_response = (b'555 RCPT TO parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_params_rejected(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: foo=bar') self.assertEqual(channel.socket.last, self.error_response) def test_nothing_accepted(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: ') self.assertEqual(channel.socket.last, b'250 OK\r\n') class TestMailOptionParsing(unittest.TestCase): error_response = (b'555 MAIL FROM parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_with_decode_data_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', b'MAIL from: size=20 BODY=UNKNOWN', b'MAIL from: size=20 body=8bitmime', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line(channel, b'MAIL from: size=20') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_decode_data_false(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line( channel, b'MAIL from: size=20 SMTPUTF8 BODY=UNKNOWN') self.assertEqual( channel.socket.last, b'501 Error: BODY can only be one of 7BIT, 8BITMIME\r\n') 
self.write_line( channel, b'MAIL from: size=20 body=8bitmime') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_enable_smtputf8_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) self.write_line(channel, b'EHLO example') self.write_line( channel, b'MAIL from: size=20 body=8bitmime smtputf8') self.assertEqual(channel.socket.last, b'250 OK\r\n') class SMTPDChannelTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_broken_connect(self): self.assertRaises( DummyDispatcherBroken, BrokenDummyServer, (socket_helper.HOST, 0), ('b', 0), decode_data=True) def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPChannel, self.server, self.channel.conn, self.channel.addr, enable_SMTPUTF8=True, decode_data=True) def test_server_accept(self): self.server.handle_accept() def test_missing_data(self): self.write_line(b'') self.assertEqual(self.channel.socket.last, b'500 Error: bad syntax\r\n') def test_EHLO(self): self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'250 HELP\r\n') def test_EHLO_bad_syntax(self): self.write_line(b'EHLO') self.assertEqual(self.channel.socket.last, b'501 Syntax: EHLO hostname\r\n') def test_EHLO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_EHLO_HELO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO(self): name = smtpd.socket.getfqdn() self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, '250 {}\r\n'.format(name).encode('ascii')) def test_HELO_EHLO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELP(self): self.write_line(b'HELP') self.assertEqual(self.channel.socket.last, b'250 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELP_command(self): self.write_line(b'HELP MAIL') self.assertEqual(self.channel.socket.last, b'250 Syntax: MAIL FROM:
\r\n') def test_HELP_command_unknown(self): self.write_line(b'HELP SPAM') self.assertEqual(self.channel.socket.last, b'501 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELO_bad_syntax(self): self.write_line(b'HELO') self.assertEqual(self.channel.socket.last, b'501 Syntax: HELO hostname\r\n') def test_HELO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO_parameter_rejected_when_extensions_not_enabled(self): self.extended_smtp = False self.write_line(b'HELO example') self.write_line(b'MAIL from: SIZE=1234') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_allows_space_after_colon(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_extended_MAIL_allows_space_after_colon(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: size=20') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP(self): self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_HELO_NOOP(self): self.write_line(b'HELO example') self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP_bad_syntax(self): self.write_line(b'NOOP hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: NOOP\r\n') def test_QUIT(self): self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_HELO_QUIT(self): self.write_line(b'HELO example') self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_QUIT_arg_ignored(self): self.write_line(b'QUIT bye bye') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_bad_state(self): self.channel.smtp_state = 'BAD STATE' self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'451 Internal confusion\r\n') def test_command_too_long(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ' + b'a' * self.channel.command_size_limit + b'@example') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_limit_extended_with_SIZE(self): self.write_line(b'EHLO example') fill_len = self.channel.command_size_limit - len('MAIL from:<@example>') self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL from:<' + b'a' * (fill_len + 26) + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_rejects_SMTPUTF8_by_default(self): self.write_line(b'EHLO example') self.write_line( b'MAIL from: BODY=8BITMIME SMTPUTF8') self.assertEqual(self.channel.socket.last[0:1], b'5') def test_data_longer_than_default_data_size_limit(self): # Hack the default so we don't have to generate so much data. self.channel.data_size_limit = 1048 self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'A' * self.channel.data_size_limit + b'A\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') def test_MAIL_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=512') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_invalid_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=invalid') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_RCPT_unknown_parameters(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: ham=green') self.assertEqual(self.channel.socket.last, b'555 MAIL FROM parameters not recognized or not implemented\r\n') self.write_line(b'MAIL FROM:') self.write_line(b'RCPT TO: ham=green') self.assertEqual(self.channel.socket.last, b'555 RCPT TO parameters not recognized or not implemented\r\n') def test_MAIL_size_parameter_larger_than_default_data_size_limit(self): self.channel.data_size_limit = 1048 self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=2096') self.assertEqual(self.channel.socket.last, b'552 Error: message size exceeds fixed maximum message size\r\n') def test_need_MAIL(self): self.write_line(b'HELO example') self.write_line(b'RCPT to:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: need MAIL command\r\n') def test_MAIL_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_missing_address(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_chevrons(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_empty_chevrons(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from:<>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_quoted_localpart(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com> SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_nested_MAIL(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:eggs@example') self.write_line(b'MAIL from:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: nested MAIL command\r\n') def test_VRFY(self): self.write_line(b'VRFY eggs@example') self.assertEqual(self.channel.socket.last, b'252 Cannot VRFY user, but will accept message and attempt ' + \ b'delivery\r\n') def test_VRFY_syntax(self): self.write_line(b'VRFY') self.assertEqual(self.channel.socket.last, b'501 Syntax: VRFY
\r\n') def test_EXPN_not_implemented(self): self.write_line(b'EXPN') self.assertEqual(self.channel.socket.last, b'502 EXPN not implemented\r\n') def test_no_HELO_MAIL(self): self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_need_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'503 Error: need RCPT command\r\n') def test_RCPT_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
\r\n') def test_RCPT_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
[SP ]\r\n') def test_RCPT_lowercase_to_OK(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_no_HELO_RCPT(self): self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def test_DATA_syntax(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'501 Syntax: DATA\r\n') def test_no_HELO_DATA(self): self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_transparency_section_4_5_2(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'..\r\n.\r\n') self.assertEqual(self.channel.received_data, '.') def test_multiple_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RCPT To:ham@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example','ham@example'], 'data')]) def test_manual_status(self): # checks that the Channel is able to return a custom status message self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'return status\r\n.') self.assertEqual(self.channel.socket.last, b'250 Okish\r\n') def test_RSET(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL From:foo@example') self.write_line(b'RCPT To:eggs@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'foo@example', ['eggs@example'], 'data')]) def test_HELO_RSET(self): self.write_line(b'HELO example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_RSET_syntax(self): self.write_line(b'RSET hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: RSET\r\n') def test_unknown_command(self): self.write_line(b'UNKNOWN_CMD') self.assertEqual(self.channel.socket.last, b'500 Error: command "UNKNOWN_CMD" not ' + \ b'recognized\r\n') def test_attribute_deprecations(self): with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__server with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__server = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = 
self.channel._SMTPChannel__line with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__line = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__state with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__state = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__greeting with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__greeting = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__mailfrom with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__mailfrom = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__rcpttos with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__rcpttos = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__data with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__data = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__fqdn with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__fqdn = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__peer with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__peer = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__conn with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__conn = 'spam' with warnings_helper.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__addr with warnings_helper.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__addr = 'spam' @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class SMTPDChannelIPv6Test(SMTPDChannelTest): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOSTv6, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) class SMTPDChannelWithDataSizeLimitTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set DATA size limit to 32 bytes for easy testing self.channel = smtpd.SMTPChannel(self.server, conn, addr, 32, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_data_limit_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') 
self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def test_data_limit_dialog_too_much_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'This message is longer than 32 bytes\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') class SMTPDChannelWithDecodeDataFalse(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, b'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87\n' b'and some plain ascii') class SMTPDChannelWithDecodeDataTrue(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set decode_data to True self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, 'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, 'utf8 enriched text: żźć\nand some plain ascii') class SMTPDChannelTestWithEnableSMTPUTF8True(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = 
DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, enable_SMTPUTF8=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_MAIL_command_accepts_SMTPUTF8_when_announced(self): self.write_line(b'EHLO example') self.write_line( 'MAIL from: BODY=8BITMIME SMTPUTF8'.encode( 'utf-8') ) self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_process_smtputf8_message(self): self.write_line(b'EHLO example') for mail_parameters in [b'', b'BODY=8BITMIME SMTPUTF8']: self.write_line(b'MAIL from: ' + mail_parameters) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'rcpt to:') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'data') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'c\r\n.') if mail_parameters == b'': self.assertEqual(self.channel.socket.last, b'250 OK\r\n') else: self.assertEqual(self.channel.socket.last, b'250 SMTPUTF8 message okish\r\n') def test_utf8_data(self): self.write_line(b'EHLO example') self.write_line( 'MAIL From: naïve@examplé BODY=8BITMIME SMTPUTF8'.encode('utf-8')) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line('RCPT To:späm@examplé'.encode('utf-8')) self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'.') self.assertEqual( self.channel.received_data, b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') def test_MAIL_command_limit_extended_with_SIZE_and_SMTPUTF8(self): self.write_line(b'ehlo example') fill_len = (512 + 26 + 10) - len('mail from:<@example>') self.write_line(b'MAIL from:<' + b'a' * (fill_len + 1) + b'@example>') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_multiple_emails_with_extended_command_length(self): self.write_line(b'ehlo example') fill_len = (512 + 26 + 10) - len('mail from:<@example>') for char in [b'a', b'b', b'c']: self.write_line(b'MAIL from:<' + char * fill_len + b'a@example>') self.assertEqual(self.channel.socket.last[0:3], b'500') self.write_line(b'MAIL from:<' + char * fill_len + b'@example>') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'rcpt to:') self.assertEqual(self.channel.socket.last[0:3], b'250') self.write_line(b'data') self.assertEqual(self.channel.socket.last[0:3], b'354') self.write_line(b'test\r\n.') self.assertEqual(self.channel.socket.last[0:3], b'250') class MiscTestCase(unittest.TestCase): def test__all__(self): not_exported = { "program", "Devnull", "DEBUGSTREAM", "NEWLINE", "COMMASPACE", "DATA_SIZE_DEFAULT", "usage", "Options", "parseargs", } support.check__all__(self, smtpd, not_exported=not_exported) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_socket.py000066400000000000000000007626061471441230600212450ustar00rootroot00000000000000import unittest from test import support from test.support import os_helper from test.support import socket_helper from test.support import threading_helper import errno import io import itertools import socket import select 
import tempfile
import time
import traceback
import queue
import sys
import os
import platform
import array
import contextlib
from weakref import proxy
import signal
import math
import pickle
import struct
import random
import shutil
import string
import _thread as thread
import threading

try:
    import multiprocessing
except ImportError:
    multiprocessing = False
try:
    import fcntl
except ImportError:
    fcntl = None

support.requires_working_socket(module=True)

HOST = socket_helper.HOST
# test unicode string and carriage return
MSG = 'Michael Gilfix was here\u1234\r\n'.encode('utf-8')

VSOCKPORT = 1234
AIX = platform.system() == "AIX"

try:
    import _socket
except ImportError:
    _socket = None

def get_cid():
    if fcntl is None:
        return None
    if not hasattr(socket, 'IOCTL_VM_SOCKETS_GET_LOCAL_CID'):
        return None
    try:
        with open("/dev/vsock", "rb") as f:
            r = fcntl.ioctl(f, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, " ")
    except OSError:
        return None
    else:
        return struct.unpack("I", r)[0]

def _have_socket_can():
    """Check whether CAN sockets are supported on this host."""
    try:
        s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_can_isotp():
    """Check whether CAN ISOTP sockets are supported on this host."""
    try:
        s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_can_j1939():
    """Check whether CAN J1939 sockets are supported on this host."""
    try:
        s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_rds():
    """Check whether RDS sockets are supported on this host."""
    try:
        s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_alg():
    """Check whether AF_ALG sockets are supported on this host."""
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_qipcrtr():
    """Check whether AF_QIPCRTR sockets are supported on this host."""
    try:
        s = socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM, 0)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

def _have_socket_vsock():
    """Check whether AF_VSOCK sockets are supported on this host."""
    ret = get_cid() is not None
    return ret

def _have_socket_bluetooth():
    """Check whether AF_BLUETOOTH sockets are supported on this host."""
    try:
        # RFCOMM is supported by all platforms with bluetooth support. Windows
        # does not support omitting the protocol.
        s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                          socket.BTPROTO_RFCOMM)
    except (AttributeError, OSError):
        return False
    else:
        s.close()
    return True

@contextlib.contextmanager
def socket_setdefaulttimeout(timeout):
    old_timeout = socket.getdefaulttimeout()
    try:
        socket.setdefaulttimeout(timeout)
        yield
    finally:
        socket.setdefaulttimeout(old_timeout)

HAVE_SOCKET_CAN = _have_socket_can()
HAVE_SOCKET_CAN_ISOTP = _have_socket_can_isotp()
HAVE_SOCKET_CAN_J1939 = _have_socket_can_j1939()
HAVE_SOCKET_RDS = _have_socket_rds()
HAVE_SOCKET_ALG = _have_socket_alg()
HAVE_SOCKET_QIPCRTR = _have_socket_qipcrtr()
HAVE_SOCKET_VSOCK = _have_socket_vsock()
HAVE_SOCKET_UDPLITE = hasattr(socket, "IPPROTO_UDPLITE")
HAVE_SOCKET_BLUETOOTH = _have_socket_bluetooth()

# Size in bytes of the int type
SIZEOF_INT = array.array("i").itemsize

class SocketTCPTest(unittest.TestCase):

    def setUp(self):
        self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.port = socket_helper.bind_port(self.serv)
        self.serv.listen()

    def tearDown(self):
        self.serv.close()
        self.serv = None

class SocketUDPTest(unittest.TestCase):

    def setUp(self):
        self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.port = socket_helper.bind_port(self.serv)

    def tearDown(self):
        self.serv.close()
        self.serv = None

class SocketUDPLITETest(SocketUDPTest):

    def setUp(self):
        self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                                  socket.IPPROTO_UDPLITE)
        self.port = socket_helper.bind_port(self.serv)

class SocketCANTest(unittest.TestCase):

    """To be able to run this test, a `vcan0` CAN interface can be created with
    the following commands:
    # modprobe vcan
    # ip link add dev vcan0 type vcan
    # ip link set up vcan0
    """
    interface = 'vcan0'
    bufsize = 128

    """The CAN frame structure is defined in :

    struct can_frame {
        canid_t can_id;  /* 32 bit CAN_ID + EFF/RTR/ERR flags */
        __u8    can_dlc; /* data length code: 0 .. 8 */
        __u8    data[8] __attribute__((aligned(8)));
    };
    """
    can_frame_fmt = "=IB3x8s"
    can_frame_size = struct.calcsize(can_frame_fmt)

    """The Broadcast Management Command frame structure is defined in :

    struct bcm_msg_head {
        __u32 opcode;
        __u32 flags;
        __u32 count;
        struct timeval ival1, ival2;
        canid_t can_id;
        __u32 nframes;
        struct can_frame frames[0];
    }

    `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see
    `struct can_frame` definition). Must use native not standard types for packing.
    """
    bcm_cmd_msg_fmt = "@3I4l2I"
    bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8)

    def setUp(self):
        self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
        self.addCleanup(self.s.close)
        try:
            self.s.bind((self.interface,))
        except OSError:
            self.skipTest('network interface `%s` does not exist' %
                          self.interface)

class SocketRDSTest(unittest.TestCase):

    """To be able to run this test, the `rds` kernel module must be loaded:
    # modprobe rds
    """
    bufsize = 8192

    def setUp(self):
        self.serv = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0)
        self.addCleanup(self.serv.close)
        try:
            self.port = socket_helper.bind_port(self.serv)
        except OSError:
            self.skipTest('unable to bind RDS socket')

class ThreadableTest:
    """Threadable Test class

    The ThreadableTest class makes it easy to create a threaded
    client/server pair from an existing unit test.
To create a new threaded class from an existing unit test, use multiple inheritance: class NewClass (OldClass, ThreadableTest): pass This class defines two new fixture functions with obvious purposes for overriding: clientSetUp () clientTearDown () Any new test functions within the class must then define tests in pairs, where the test name is preceded with a '_' to indicate the client portion of the test. Ex: def testFoo(self): # Server portion def _testFoo(self): # Client portion Any exceptions raised by the clients during their tests are caught and transferred to the main thread to alert the testing framework. Note, the server setup function cannot call any blocking functions that rely on the client thread during setup, unless serverExplicitReady() is called just before the blocking call (such as in setting up a client/server connection and performing the accept() in setUp(). """ def __init__(self): # Swap the true setup function self.__setUp = self.setUp self.setUp = self._setUp def serverExplicitReady(self): """This method allows the server to explicitly indicate that it wants the client thread to proceed. This is useful if the server is about to execute a blocking routine that is dependent upon the client thread during its setup routine.""" self.server_ready.set() def _setUp(self): self.enterContext(threading_helper.wait_threads_exit()) self.server_ready = threading.Event() self.client_ready = threading.Event() self.done = threading.Event() self.queue = queue.Queue(1) self.server_crashed = False def raise_queued_exception(): if self.queue.qsize(): raise self.queue.get() self.addCleanup(raise_queued_exception) # Do some munging to start the client test. methodname = self.id() i = methodname.rfind('.') methodname = methodname[i+1:] test_method = getattr(self, '_' + methodname) self.client_thread = thread.start_new_thread( self.clientRun, (test_method,)) try: self.__setUp() except: self.server_crashed = True raise finally: self.server_ready.set() self.client_ready.wait() self.addCleanup(self.done.wait) def clientRun(self, test_func): self.server_ready.wait() try: self.clientSetUp() except BaseException as e: self.queue.put(e) self.clientTearDown() return finally: self.client_ready.set() if self.server_crashed: self.clientTearDown() return if not hasattr(test_func, '__call__'): raise TypeError("test_func must be a callable function") try: test_func() except BaseException as e: self.queue.put(e) finally: self.clientTearDown() def clientSetUp(self): raise NotImplementedError("clientSetUp must be implemented.") def clientTearDown(self): self.done.set() thread.exit() class ThreadedTCPSocketTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedUDPSocketTest(SocketUDPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class ThreadedUDPLITESocketTest(SocketUDPLITETest, ThreadableTest): def __init__(self, methodName='runTest'): 
SocketUDPLITETest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedCANSocketTest(SocketCANTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketCANTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) try: self.cli.bind((self.interface,)) except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedRDSSocketTest(SocketRDSTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketRDSTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) try: # RDS sockets must be bound explicitly to send or receive data self.cli.bind((HOST, 0)) self.cli_addr = self.cli.getsockname() except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') @unittest.skipUnless(get_cid() != 2, "This test can only be run on a virtual guest.") class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.serv.close) self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT)) self.serv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.serv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): time.sleep(0.1) self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.cli.close) cid = get_cid() self.cli.connect((cid, VSOCKPORT)) def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) def _testStream(self): self.cli.send(MSG) self.cli.close() class SocketConnectedTest(ThreadedTCPSocketTest): """Socket tests for client-server connection. self.cli_conn is a client socket connected to the server. The setUp() method guarantees that it is connected to the server. 
""" def __init__(self, methodName='runTest'): ThreadedTCPSocketTest.__init__(self, methodName=methodName) def setUp(self): ThreadedTCPSocketTest.setUp(self) # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None ThreadedTCPSocketTest.tearDown(self) def clientSetUp(self): ThreadedTCPSocketTest.clientSetUp(self) self.cli.connect((HOST, self.port)) self.serv_conn = self.cli def clientTearDown(self): self.serv_conn.close() self.serv_conn = None ThreadedTCPSocketTest.clientTearDown(self) class SocketPairTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv, self.cli = socket.socketpair() def tearDown(self): self.serv.close() self.serv = None def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) # The following classes are used by the sendmsg()/recvmsg() tests. # Combining, for instance, ConnectedStreamTestMixin and TCPTestBase # gives a drop-in replacement for SocketConnectedTest, but different # address families can be used, and the attributes serv_addr and # cli_addr will be set to the addresses of the endpoints. class SocketTestBase(unittest.TestCase): """A base class for socket tests. Subclasses must provide methods newSocket() to return a new socket and bindSock(sock) to bind it to an unused address. Creates a socket self.serv and sets self.serv_addr to its address. """ def setUp(self): self.serv = self.newSocket() self.bindServer() def bindServer(self): """Bind server socket and set self.serv_addr to its address.""" self.bindSock(self.serv) self.serv_addr = self.serv.getsockname() def tearDown(self): self.serv.close() self.serv = None class SocketListeningTestMixin(SocketTestBase): """Mixin to listen on the server socket.""" def setUp(self): super().setUp() self.serv.listen() class ThreadedSocketTestMixin(SocketTestBase, ThreadableTest): """Mixin to add client socket and allow client/server tests. Client socket is self.cli and its address is self.cli_addr. See ThreadableTest for usage information. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = self.newClientSocket() self.bindClient() def newClientSocket(self): """Return a new socket for use as client.""" return self.newSocket() def bindClient(self): """Bind client socket and set self.cli_addr to its address.""" self.bindSock(self.cli) self.cli_addr = self.cli.getsockname() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ConnectedStreamTestMixin(SocketListeningTestMixin, ThreadedSocketTestMixin): """Mixin to allow client/server stream tests with connected client. Server's socket representing connection to client is self.cli_conn and client's connection to server is self.serv_conn. (Based on SocketConnectedTest.) 
""" def setUp(self): super().setUp() # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None super().tearDown() def clientSetUp(self): super().clientSetUp() self.cli.connect(self.serv_addr) self.serv_conn = self.cli def clientTearDown(self): try: self.serv_conn.close() self.serv_conn = None except AttributeError: pass super().clientTearDown() class UnixSocketTestBase(SocketTestBase): """Base class for Unix-domain socket tests.""" # This class is used for file descriptor passing tests, so we # create the sockets in a private directory so that other users # can't send anything that might be problematic for a privileged # user running the tests. def setUp(self): self.dir_path = tempfile.mkdtemp() self.addCleanup(os.rmdir, self.dir_path) super().setUp() def bindSock(self, sock): path = tempfile.mktemp(dir=self.dir_path) socket_helper.bind_unix_socket(sock, path) self.addCleanup(os_helper.unlink, path) class UnixStreamBase(UnixSocketTestBase): """Base class for Unix-domain SOCK_STREAM tests.""" def newSocket(self): return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) class InetTestBase(SocketTestBase): """Base class for IPv4 socket tests.""" host = HOST def setUp(self): super().setUp() self.port = self.serv_addr[1] def bindSock(self, sock): socket_helper.bind_port(sock, host=self.host) class TCPTestBase(InetTestBase): """Base class for TCP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) class UDPTestBase(InetTestBase): """Base class for UDP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM) class UDPLITETestBase(InetTestBase): """Base class for UDPLITE-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) class SCTPStreamBase(InetTestBase): """Base class for SCTP tests in one-to-one (SOCK_STREAM) mode.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP) class Inet6TestBase(InetTestBase): """Base class for IPv6 socket tests.""" host = socket_helper.HOSTv6 class UDP6TestBase(Inet6TestBase): """Base class for UDP-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) class UDPLITE6TestBase(Inet6TestBase): """Base class for UDPLITE-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) # Test-skipping decorators for use with ThreadableTest. def skipWithClientIf(condition, reason): """Skip decorated test if condition is true, add client_skip decorator. If the decorated object is not a class, sets its attribute "client_skip" to a decorator which will return an empty function if the test is to be skipped, or the original function if it is not. This can be used to avoid running the client part of a skipped test when using ThreadableTest. 
""" def client_pass(*args, **kwargs): pass def skipdec(obj): retval = unittest.skip(reason)(obj) if not isinstance(obj, type): retval.client_skip = lambda f: client_pass return retval def noskipdec(obj): if not (isinstance(obj, type) or hasattr(obj, "client_skip")): obj.client_skip = lambda f: f return obj return skipdec if condition else noskipdec def requireAttrs(obj, *attributes): """Skip decorated test if obj is missing any of the given attributes. Sets client_skip attribute as skipWithClientIf() does. """ missing = [name for name in attributes if not hasattr(obj, name)] return skipWithClientIf( missing, "don't have " + ", ".join(name for name in missing)) def requireSocket(*args): """Skip decorated test if a socket cannot be created with given arguments. When an argument is given as a string, will use the value of that attribute of the socket module, or skip the test if it doesn't exist. Sets client_skip attribute as skipWithClientIf() does. """ err = None missing = [obj for obj in args if isinstance(obj, str) and not hasattr(socket, obj)] if missing: err = "don't have " + ", ".join(name for name in missing) else: callargs = [getattr(socket, obj) if isinstance(obj, str) else obj for obj in args] try: s = socket.socket(*callargs) except OSError as e: # XXX: check errno? err = str(e) else: s.close() return skipWithClientIf( err is not None, "can't create socket({0}): {1}".format( ", ".join(str(o) for o in args), err)) ####################################################################### ## Begin Tests class GeneralModuleTests(unittest.TestCase): def test_SocketType_is_socketobject(self): import _socket self.assertTrue(socket.SocketType is _socket.socket) s = socket.socket() self.assertIsInstance(s, socket.SocketType) s.close() def test_repr(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with s: self.assertIn('fd=%i' % s.fileno(), repr(s)) self.assertIn('family=%s' % socket.AF_INET, repr(s)) self.assertIn('type=%s' % socket.SOCK_STREAM, repr(s)) self.assertIn('proto=0', repr(s)) self.assertNotIn('raddr', repr(s)) s.bind(('127.0.0.1', 0)) self.assertIn('laddr', repr(s)) self.assertIn(str(s.getsockname()), repr(s)) self.assertIn('[closed]', repr(s)) self.assertNotIn('laddr', repr(s)) @unittest.skipUnless(_socket is not None, 'need _socket module') def test_csocket_repr(self): s = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) try: expected = ('' % (s.fileno(), s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) finally: s.close() expected = ('' % (s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) def test_weakref(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: p = proxy(s) self.assertEqual(p.fileno(), s.fileno()) s = None support.gc_collect() # For PyPy or other GCs. try: p.fileno() except ReferenceError: pass else: self.fail('Socket proxy still exists') def testSocketError(self): # Testing socket module exceptions msg = "Error raising socket exception (%s)." with self.assertRaises(OSError, msg=msg % 'OSError'): raise OSError with self.assertRaises(OSError, msg=msg % 'socket.herror'): raise socket.herror with self.assertRaises(OSError, msg=msg % 'socket.gaierror'): raise socket.gaierror def testSendtoErrors(self): # Testing that sendto doesn't mask failures. See #10169. 
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind(('', 0)) sockname = s.getsockname() # 2 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None) self.assertIn('not NoneType',str(cm.exception)) # 3 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, None) self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 'bar', sockname) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None, None) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto(b'foo') self.assertIn('(1 given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, sockname, 4) self.assertIn('(4 given)', str(cm.exception)) def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET if socket.has_ipv6: socket.AF_INET6 socket.SOCK_STREAM socket.SOCK_DGRAM socket.SOCK_RAW socket.SOCK_RDM socket.SOCK_SEQPACKET socket.SOL_SOCKET socket.SO_REUSEADDR def testCrucialIpProtoConstants(self): socket.IPPROTO_TCP socket.IPPROTO_UDP if socket.has_ipv6: socket.IPPROTO_IPV6 @unittest.skipUnless(os.name == "nt", "Windows specific") def testWindowsSpecificConstants(self): socket.IPPROTO_ICLFXBM socket.IPPROTO_ST socket.IPPROTO_CBT socket.IPPROTO_IGP socket.IPPROTO_RDP socket.IPPROTO_PGM socket.IPPROTO_L2TP socket.IPPROTO_SCTP @unittest.skipIf(support.is_wasi, "WASI is missing these methods") def test_socket_methods(self): # socket methods that depend on a configure HAVE_ check. They should # be present on all platforms except WASI. names = [ "_accept", "bind", "connect", "connect_ex", "getpeername", "getsockname", "listen", "recvfrom", "recvfrom_into", "sendto", "setsockopt", "shutdown" ] for name in names: if not hasattr(socket.socket, name): self.fail(f"socket method {name} is missing") @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test3542SocketOptions(self): # Ref. 
issue #35569 and https://tools.ietf.org/html/rfc3542 opts = { 'IPV6_CHECKSUM', 'IPV6_DONTFRAG', 'IPV6_DSTOPTS', 'IPV6_HOPLIMIT', 'IPV6_HOPOPTS', 'IPV6_NEXTHOP', 'IPV6_PATHMTU', 'IPV6_PKTINFO', 'IPV6_RECVDSTOPTS', 'IPV6_RECVHOPLIMIT', 'IPV6_RECVHOPOPTS', 'IPV6_RECVPATHMTU', 'IPV6_RECVPKTINFO', 'IPV6_RECVRTHDR', 'IPV6_RECVTCLASS', 'IPV6_RTHDR', 'IPV6_RTHDRDSTOPTS', 'IPV6_RTHDR_TYPE_0', 'IPV6_TCLASS', 'IPV6_USE_MIN_MTU', } for opt in opts: self.assertTrue( hasattr(socket, opt), f"Missing RFC3542 socket option '{opt}'" ) def testHostnameRes(self): # Testing hostname resolution mechanisms hostname = socket.gethostname() try: ip = socket.gethostbyname(hostname) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertTrue(ip.find('.') >= 0, "Error resolving host to ip.") try: hname, aliases, ipaddrs = socket.gethostbyaddr(ip) except OSError: # Probably a similar problem as above; skip this test self.skipTest('name lookup failure') all_host_names = [hostname, hname] + aliases fqhn = socket.getfqdn(ip) if not fqhn in all_host_names: self.fail("Error testing host resolution mechanisms. (fqdn: %s, all: %s)" % (fqhn, repr(all_host_names))) def test_host_resolution(self): for addr in [socket_helper.HOSTv4, '10.0.0.1', '255.255.255.255']: self.assertEqual(socket.gethostbyname(addr), addr) # we don't test socket_helper.HOSTv6 because there's a chance it doesn't have # a matching name entry (e.g. 'ip6-localhost') for host in [socket_helper.HOSTv4]: self.assertIn(host, socket.gethostbyaddr(host)[2]) def test_host_resolution_bad_address(self): # These are all malformed IP addresses and expected not to resolve to # any result. But some ISPs, e.g. AWS and AT&T, may successfully # resolve these IPs. In particular, AT&T's DNS Error Assist service # will break this test. See https://bugs.python.org/issue42092 for a # workaround. explanation = ( "resolving an invalid IP address did not raise OSError; " "can be caused by a broken DNS server" ) for addr in ['0.1.1.~1', '1+.1.1.1', '::1q', '::1::2', '1:1:1:1:1:1:1:1:1']: with self.assertRaises(OSError, msg=addr): socket.gethostbyname(addr) with self.assertRaises(OSError, msg=explanation): socket.gethostbyaddr(addr) @unittest.skipUnless(hasattr(socket, 'sethostname'), "test needs socket.sethostname()") @unittest.skipUnless(hasattr(socket, 'gethostname'), "test needs socket.gethostname()") def test_sethostname(self): oldhn = socket.gethostname() try: socket.sethostname('new') except OSError as e: if e.errno == errno.EPERM: self.skipTest("test should be run as root") else: raise try: # running test as root! 
self.assertEqual(socket.gethostname(), 'new') # Should work with bytes objects too socket.sethostname(b'bar') self.assertEqual(socket.gethostname(), 'bar') finally: socket.sethostname(oldhn) @unittest.skipUnless(hasattr(socket, 'if_nameindex'), 'socket.if_nameindex() not available.') def testInterfaceNameIndex(self): interfaces = socket.if_nameindex() for index, name in interfaces: self.assertIsInstance(index, int) self.assertIsInstance(name, str) # interface indices are non-zero integers self.assertGreater(index, 0) _index = socket.if_nametoindex(name) self.assertIsInstance(_index, int) self.assertEqual(index, _index) _name = socket.if_indextoname(index) self.assertIsInstance(_name, str) self.assertEqual(name, _name) @unittest.skipUnless(hasattr(socket, 'if_indextoname'), 'socket.if_indextoname() not available.') def testInvalidInterfaceIndexToName(self): self.assertRaises(OSError, socket.if_indextoname, 0) self.assertRaises(OverflowError, socket.if_indextoname, -1) self.assertRaises(OverflowError, socket.if_indextoname, 2**1000) self.assertRaises(TypeError, socket.if_indextoname, '_DEADBEEF') if hasattr(socket, 'if_nameindex'): indices = dict(socket.if_nameindex()) for index in indices: index2 = index + 2**32 if index2 not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index2) for index in 2**32-1, 2**64-1: if index not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index) @unittest.skipUnless(hasattr(socket, 'if_nametoindex'), 'socket.if_nametoindex() not available.') def testInvalidInterfaceNameToIndex(self): self.assertRaises(TypeError, socket.if_nametoindex, 0) self.assertRaises(OSError, socket.if_nametoindex, '_DEADBEEF') @unittest.skipUnless(hasattr(sys, 'getrefcount'), 'test needs sys.getrefcount()') def testRefCountGetNameInfo(self): # Testing reference count for getnameinfo try: # On some versions, this loses a reference orig = sys.getrefcount(__name__) socket.getnameinfo(__name__,0) except TypeError: if sys.getrefcount(__name__) != orig: self.fail("socket.getnameinfo loses a reference") def testInterpreterCrash(self): # Making sure getnameinfo doesn't crash the interpreter try: # On some versions, this crashes the interpreter. socket.getnameinfo(('x', 0, 0, 0), 0) except OSError: pass def testNtoH(self): # This just checks that htons etc. are their own inverse, # when looking at the lower 16 or 32 bits. sizes = {socket.htonl: 32, socket.ntohl: 32, socket.htons: 16, socket.ntohs: 16} for func, size in sizes.items(): mask = (1<= 23): port2 = socket.getservbyname(service) eq(port, port2) # Try udp, but don't barf if it doesn't exist try: udpport = socket.getservbyname(service, 'udp') except OSError: udpport = None else: eq(udpport, port) # Now make sure the lookup by port returns the same service name # Issue #26936: Android getservbyport() is broken. if not support.is_android: eq(socket.getservbyport(port2), service) eq(socket.getservbyport(port, 'tcp'), service) if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. 
self.assertRaises(OverflowError, socket.getservbyport, -1) self.assertRaises(OverflowError, socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout # The default timeout should initially be None self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as s: self.assertEqual(s.gettimeout(), None) # Set the default timeout to 10, and see if it propagates with socket_setdefaulttimeout(10): self.assertEqual(socket.getdefaulttimeout(), 10) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), 10) # Reset the default timeout to None, and see if it propagates socket.setdefaulttimeout(None) self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), None) # Check that setting it to an invalid value raises ValueError self.assertRaises(ValueError, socket.setdefaulttimeout, -1) # Check that setting it to an invalid type raises TypeError self.assertRaises(TypeError, socket.setdefaulttimeout, "spam") @unittest.skipUnless(hasattr(socket, 'inet_aton'), 'test needs socket.inet_aton()') def testIPv4_inet_aton_fourbytes(self): # Test that issue1008086 and issue767150 are fixed. # It must return 4 bytes. self.assertEqual(b'\x00'*4, socket.inet_aton('0.0.0.0')) self.assertEqual(b'\xff'*4, socket.inet_aton('255.255.255.255')) @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv4toString(self): from socket import inet_aton as f, inet_pton, AF_INET g = lambda a: inet_pton(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual(b'\x00\x00\x00\x00', f('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', f('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', f('170.170.170.170')) self.assertEqual(b'\x01\x02\x03\x04', f('1.2.3.4')) self.assertEqual(b'\xff\xff\xff\xff', f('255.255.255.255')) # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid(f, '0.0.0.') assertInvalid(f, '300.0.0.0') assertInvalid(f, 'a.0.0.0') assertInvalid(f, '1.2.3.4.5') assertInvalid(f, '::1') self.assertEqual(b'\x00\x00\x00\x00', g('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', g('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', g('170.170.170.170')) self.assertEqual(b'\xff\xff\xff\xff', g('255.255.255.255')) assertInvalid(g, '0.0.0.') assertInvalid(g, '300.0.0.0') assertInvalid(g, 'a.0.0.0') assertInvalid(g, '1.2.3.4.5') assertInvalid(g, '::1') @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv6toString(self): try: from socket import inet_pton, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_pton(AF_INET6, '::') except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_pton(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual(b'\x00' * 16, f('::')) self.assertEqual(b'\x00' * 16, f('0::0')) self.assertEqual(b'\x00\x01' + b'\x00' * 14, f('1::')) self.assertEqual( b'\x45\xef\x76\xcb\x00\x1a\x56\xef\xaf\xeb\x0b\xac\x19\x24\xae\xae', f('45ef:76cb:1a:56ef:afeb:bac:1924:aeae') ) self.assertEqual( b'\xad\x42\x0a\xbc' + b'\x00' * 4 + b'\x01\x27\x00\x00\x02\x54\x00\x02', f('ad42:abc::127:0:254:2') ) self.assertEqual(b'\x00\x12\x00\x0a' + b'\x00' * 12, f('12:a::')) assertInvalid('0x20::') assertInvalid(':::') assertInvalid('::0::') 
assertInvalid('1::abc::') assertInvalid('1::abc::def') assertInvalid('1:2:3:4:5:6') assertInvalid('1:2:3:4:5:6:') assertInvalid('1:2:3:4:5:6:7:8:0') # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid('1:2:3:4:5:6:7:8:') self.assertEqual(b'\x00' * 12 + b'\xfe\x2a\x17\x40', f('::254.42.23.64') ) self.assertEqual( b'\x00\x42' + b'\x00' * 8 + b'\xa2\x9b\xfe\x2a\x17\x40', f('42::a29b:254.42.23.64') ) self.assertEqual( b'\x00\x42\xa8\xb9\x00\x00\x00\x02\xff\xff\xa2\x9b\xfe\x2a\x17\x40', f('42:a8b9:0:2:ffff:a29b:254.42.23.64') ) assertInvalid('255.254.253.252') assertInvalid('1::260.2.3.0') assertInvalid('1::0.be.e.0') assertInvalid('1:2:3:4:5:6:7:1.2.3.4') assertInvalid('::1.2.3.4:0') assertInvalid('0.100.200.0:3:4:5:6:7:8') @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv4(self): from socket import inet_ntoa as f, inet_ntop, AF_INET g = lambda a: inet_ntop(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual('1.0.1.0', f(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', f(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', f(b'\xff\xff\xff\xff')) self.assertEqual('1.2.3.4', f(b'\x01\x02\x03\x04')) assertInvalid(f, b'\x00' * 3) assertInvalid(f, b'\x00' * 5) assertInvalid(f, b'\x00' * 16) self.assertEqual('170.85.170.85', f(bytearray(b'\xaa\x55\xaa\x55'))) self.assertEqual('1.0.1.0', g(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', g(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', g(b'\xff\xff\xff\xff')) assertInvalid(g, b'\x00' * 3) assertInvalid(g, b'\x00' * 5) assertInvalid(g, b'\x00' * 16) self.assertEqual('170.85.170.85', g(bytearray(b'\xaa\x55\xaa\x55'))) @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv6(self): try: from socket import inet_ntop, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_ntop(AF_INET6, b'\x00' * 16) except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_ntop(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual('::', f(b'\x00' * 16)) self.assertEqual('::1', f(b'\x00' * 15 + b'\x01')) self.assertEqual( 'aef:b01:506:1001:ffff:9997:55:170', f(b'\x0a\xef\x0b\x01\x05\x06\x10\x01\xff\xff\x99\x97\x00\x55\x01\x70') ) self.assertEqual('::1', f(bytearray(b'\x00' * 15 + b'\x01'))) assertInvalid(b'\x12' * 15) assertInvalid(b'\x12' * 17) assertInvalid(b'\x12' * 4) # XXX The following don't test module-level functionality... def testSockName(self): # Testing getsockname() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind(("0.0.0.0", port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break name = sock.getsockname() # XXX(nnorwitz): http://tinyurl.com/os5jz seems to indicate # it reasonable to get the host's addr in addition to 0.0.0.0. # At least for eCos. This is required for the S/390 to pass. 
try: my_ip_addr = socket.gethostbyname(socket.gethostname()) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertIn(name[0], ("0.0.0.0", my_ip_addr), '%s invalid' % name[0]) self.assertEqual(name[1], port) def testGetSockOpt(self): # Testing getsockopt() # We know a socket should start without reuse==0 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse != 0, "initial mode is reuse") def testSetSockOpt(self): # Testing setsockopt() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse == 0, "failed to set reuse mode") def testSendAfterClose(self): # testing send() after close() with timeout with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.settimeout(1) self.assertRaises(OSError, sock.send, b"spam") def testCloseException(self): sock = socket.socket() sock.bind((socket._LOCALHOST, 0)) socket.socket(fileno=sock.fileno()).close() try: sock.close() except OSError as err: # Winsock apparently raises ENOTSOCK self.assertIn(err.errno, (errno.EBADF, errno.ENOTSOCK)) else: self.fail("close() should raise EBADF/ENOTSOCK") def testNewAttributes(self): # testing .family, .type and .protocol with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: self.assertEqual(sock.family, socket.AF_INET) if hasattr(socket, 'SOCK_CLOEXEC'): self.assertIn(sock.type, (socket.SOCK_STREAM | socket.SOCK_CLOEXEC, socket.SOCK_STREAM)) else: self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def test_getsockaddrarg(self): sock = socket.socket() self.addCleanup(sock.close) port = socket_helper.find_unused_port() big_port = port + 65536 neg_port = port - 65536 self.assertRaises(OverflowError, sock.bind, (HOST, big_port)) self.assertRaises(OverflowError, sock.bind, (HOST, neg_port)) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. 
for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind((HOST, port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break @unittest.skipUnless(os.name == "nt", "Windows specific") def test_sock_ioctl(self): self.assertTrue(hasattr(socket.socket, 'ioctl')) self.assertTrue(hasattr(socket, 'SIO_RCVALL')) self.assertTrue(hasattr(socket, 'RCVALL_ON')) self.assertTrue(hasattr(socket, 'RCVALL_OFF')) self.assertTrue(hasattr(socket, 'SIO_KEEPALIVE_VALS')) s = socket.socket() self.addCleanup(s.close) self.assertRaises(ValueError, s.ioctl, -1, None) s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 100, 100)) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(hasattr(socket, 'SIO_LOOPBACK_FAST_PATH'), 'Loopback fast path support required for this test') def test_sio_loopback_fast_path(self): s = socket.socket() self.addCleanup(s.close) try: s.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True) except OSError as exc: WSAEOPNOTSUPP = 10045 if exc.winerror == WSAEOPNOTSUPP: self.skipTest("SIO_LOOPBACK_FAST_PATH is defined but " "doesn't implemented in this Windows version") raise self.assertRaises(TypeError, s.ioctl, socket.SIO_LOOPBACK_FAST_PATH, None) def testGetaddrinfo(self): try: socket.getaddrinfo('localhost', 80) except socket.gaierror as err: if err.errno == socket.EAI_SERVICE: # see http://bugs.python.org/issue1282647 self.skipTest("buggy libc version") raise # len of every sequence is supposed to be == 5 for info in socket.getaddrinfo(HOST, None): self.assertEqual(len(info), 5) # host can be a domain name, a string representation of an # IPv4/v6 address or None socket.getaddrinfo('localhost', 80) socket.getaddrinfo('127.0.0.1', 80) socket.getaddrinfo(None, 80) if socket_helper.IPV6_ENABLED: socket.getaddrinfo('::1', 80) # port can be a string service name such as "http", a numeric # port number or None # Issue #26936: Android getaddrinfo() was broken before API level 23. 
if (not hasattr(sys, 'getandroidapilevel') or sys.getandroidapilevel() >= 23): socket.getaddrinfo(HOST, "http") socket.getaddrinfo(HOST, 80) socket.getaddrinfo(HOST, None) # test family and socktype filters infos = socket.getaddrinfo(HOST, 80, socket.AF_INET, socket.SOCK_STREAM) for family, type, _, _, _ in infos: self.assertEqual(family, socket.AF_INET) self.assertEqual(repr(family), '' % family.value) self.assertEqual(str(family), str(family.value)) self.assertEqual(type, socket.SOCK_STREAM) self.assertEqual(repr(type), '' % type.value) self.assertEqual(str(type), str(type.value)) infos = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) for _, socktype, _, _, _ in infos: self.assertEqual(socktype, socket.SOCK_STREAM) # test proto and flags arguments socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) # a server willing to support both IPv4 and IPv6 will # usually do this socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) # test keyword arguments a = socket.getaddrinfo(HOST, None) b = socket.getaddrinfo(host=HOST, port=None) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, socket.AF_INET) b = socket.getaddrinfo(HOST, None, family=socket.AF_INET) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) b = socket.getaddrinfo(HOST, None, type=socket.SOCK_STREAM) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) b = socket.getaddrinfo(HOST, None, proto=socket.SOL_TCP) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(HOST, None, flags=socket.AI_PASSIVE) self.assertEqual(a, b) a = socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(host=None, port=0, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM, proto=0, flags=socket.AI_PASSIVE) self.assertEqual(a, b) # Issue #6697. self.assertRaises(UnicodeEncodeError, socket.getaddrinfo, 'localhost', '\uD800') # Issue 17269: test workaround for OS X platform bug segfault if hasattr(socket, 'AI_NUMERICSERV'): try: # The arguments here are undefined and the call may succeed # or fail. All we care here is that it doesn't segfault. socket.getaddrinfo("localhost", None, 0, 0, 0, socket.AI_NUMERICSERV) except socket.gaierror: pass def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) @unittest.skipUnless(support.is_resource_enabled('network'), 'network is not enabled') def test_idna(self): # Check for internet access before running test # (issue #12804, issue #25138). with socket_helper.transient_internet('python.org'): socket.gethostbyname('python.org') # these should all be successful domain = 'испытание.pythontest.net' socket.gethostbyname(domain) socket.gethostbyname_ex(domain) socket.getaddrinfo(domain,0,socket.AF_UNSPEC,socket.SOCK_STREAM) # this may not work if the forward lookup chooses the IPv6 address, as that doesn't # have a reverse entry yet # socket.gethostbyaddr('испытание.python.org') def check_sendall_interrupted(self, with_timeout): # socketpair() is not strictly required, but it makes things easier. if not hasattr(signal, 'alarm') or not hasattr(socket, 'socketpair'): self.skipTest("signal.alarm and socket.socketpair required for this test") # Our signal handlers clobber the C errno by calling a math function # with an invalid domain value. 
def ok_handler(*args): self.assertRaises(ValueError, math.acosh, 0) def raising_handler(*args): self.assertRaises(ValueError, math.acosh, 0) 1 // 0 c, s = socket.socketpair() old_alarm = signal.signal(signal.SIGALRM, raising_handler) try: if with_timeout: # Just above the one second minimum for signal.alarm c.settimeout(1.5) with self.assertRaises(ZeroDivisionError): signal.alarm(1) c.sendall(b"x" * support.SOCK_MAX_SIZE) if with_timeout: signal.signal(signal.SIGALRM, ok_handler) signal.alarm(1) self.assertRaises(TimeoutError, c.sendall, b"x" * support.SOCK_MAX_SIZE) finally: signal.alarm(0) signal.signal(signal.SIGALRM, old_alarm) c.close() s.close() def test_sendall_interrupted(self): self.check_sendall_interrupted(False) def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) def test_dealloc_warn(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) r = repr(sock) with self.assertWarns(ResourceWarning) as cm: sock = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) # An open socket file object gets dereferenced after the socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) f = sock.makefile('rb') r = repr(sock) sock = None support.gc_collect() with self.assertWarns(ResourceWarning): f = None support.gc_collect() def test_name_closed_socketio(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: fp = sock.makefile("rb") fp.close() self.assertEqual(repr(fp), "<_io.BufferedReader name=-1>") def test_unusable_closed_socketio(self): with socket.socket() as sock: fp = sock.makefile("rb", buffering=0) self.assertTrue(fp.readable()) self.assertFalse(fp.writable()) self.assertFalse(fp.seekable()) fp.close() self.assertRaises(ValueError, fp.readable) self.assertRaises(ValueError, fp.writable) self.assertRaises(ValueError, fp.seekable) def test_socket_close(self): sock = socket.socket() try: sock.bind((HOST, 0)) socket.close(sock.fileno()) with self.assertRaises(OSError): sock.listen(1) finally: with self.assertRaises(OSError): # sock.close() fails with EBADF sock.close() with self.assertRaises(TypeError): socket.close(None) with self.assertRaises(OSError): socket.close(-1) def test_makefile_mode(self): for mode in 'r', 'rb', 'rw', 'w', 'wb': with self.subTest(mode=mode): with socket.socket() as sock: encoding = None if "b" in mode else "utf-8" with sock.makefile(mode, encoding=encoding) as fp: self.assertEqual(fp.mode, mode) def test_makefile_invalid_mode(self): for mode in 'rt', 'x', '+', 'a': with self.subTest(mode=mode): with socket.socket() as sock: with self.assertRaisesRegex(ValueError, 'invalid mode'): sock.makefile(mode) def test_pickle(self): sock = socket.socket() with sock: for protocol in range(pickle.HIGHEST_PROTOCOL + 1): self.assertRaises(TypeError, pickle.dumps, sock, protocol) for protocol in range(pickle.HIGHEST_PROTOCOL + 1): family = pickle.loads(pickle.dumps(socket.AF_INET, protocol)) self.assertEqual(family, socket.AF_INET) type = pickle.loads(pickle.dumps(socket.SOCK_STREAM, protocol)) self.assertEqual(type, socket.SOCK_STREAM) def test_listen_backlog(self): for backlog in 0, -1: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen(backlog) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen() @support.cpython_only def test_listen_backlog_overflow(self): # Issue 15989 import _testcapi with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) self.assertRaises(OverflowError, 
srv.listen, _testcapi.INT_MAX + 1) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_flowinfo(self): self.assertRaises(OverflowError, socket.getnameinfo, (socket_helper.HOSTv6, 0, 0xffffffff), 0) with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s: self.assertRaises(OverflowError, s.bind, (socket_helper.HOSTv6, 0, -10)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_getaddrinfo_ipv6_basic(self): ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D', # Note capital letter `D`. 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, 0)) def test_getfqdn_filter_localhost(self): self.assertEqual(socket.getfqdn(), socket.getfqdn("0.0.0.0")) self.assertEqual(socket.getfqdn(), socket.getfqdn("::")) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getaddrinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface (Linux, Mac OS X) (ifindex, test_interface) = socket.if_nameindex()[0] ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + test_interface, 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getaddrinfo_ipv6_scopeid_numeric(self): # Also works on Linux and Mac OS X, but is not documented (?) # Windows, Linux and Max OS X allow nonexistent interface numbers here. ifindex = 42 ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + str(ifindex), 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getnameinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface. (ifindex, test_interface) = socket.if_nameindex()[0] sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + test_interface, '1234')) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getnameinfo_ipv6_scopeid_numeric(self): # Also works on Linux (undocumented), but does not work on Mac OS X # Windows and Linux allow nonexistent interface numbers here. ifindex = 42 sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. 
        nameinfo = socket.getnameinfo(sockaddr,
                                      socket.NI_NUMERICHOST | socket.NI_NUMERICSERV)
        self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + str(ifindex), '1234'))

    def test_str_for_enums(self):
        # Make sure that the AF_* and SOCK_* constants have enum-like string
        # reprs.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            self.assertEqual(repr(s.family), '<AddressFamily.AF_INET: %r>' % s.family.value)
            self.assertEqual(repr(s.type), '<SocketKind.SOCK_STREAM: %r>' % s.type.value)
            self.assertEqual(str(s.family), str(s.family.value))
            self.assertEqual(str(s.type), str(s.type.value))

    def test_socket_consistent_sock_type(self):
        SOCK_NONBLOCK = getattr(socket, 'SOCK_NONBLOCK', 0)
        SOCK_CLOEXEC = getattr(socket, 'SOCK_CLOEXEC', 0)
        sock_type = socket.SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC

        with socket.socket(socket.AF_INET, sock_type) as s:
            self.assertEqual(s.type, socket.SOCK_STREAM)
            s.settimeout(1)
            self.assertEqual(s.type, socket.SOCK_STREAM)
            s.settimeout(0)
            self.assertEqual(s.type, socket.SOCK_STREAM)
            s.setblocking(True)
            self.assertEqual(s.type, socket.SOCK_STREAM)
            s.setblocking(False)
            self.assertEqual(s.type, socket.SOCK_STREAM)

    def test_unknown_socket_family_repr(self):
        # Test that when created with a family that's not one of the known
        # AF_*/SOCK_* constants, socket.family just returns the number.
        #
        # To do this we fool socket.socket into believing it already has an
        # open fd because on this path it doesn't actually verify the family and
        # type and populates the socket object.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        fd = sock.detach()
        unknown_family = max(socket.AddressFamily.__members__.values()) + 1

        unknown_type = max(
            kind
            for name, kind in socket.SocketKind.__members__.items()
            if name not in {'SOCK_NONBLOCK', 'SOCK_CLOEXEC'}
        ) + 1

        with socket.socket(
                family=unknown_family, type=unknown_type, proto=23,
                fileno=fd) as s:
            self.assertEqual(s.family, unknown_family)
            self.assertEqual(s.type, unknown_type)
            # some OS like macOS ignore proto
            self.assertIn(s.proto, {0, 23})

    @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()')
    def test__sendfile_use_sendfile(self):
        class File:
            def __init__(self, fd):
                self.fd = fd

            def fileno(self):
                return self.fd
        with socket.socket() as sock:
            fd = os.open(os.curdir, os.O_RDONLY)
            os.close(fd)
            with self.assertRaises(socket._GiveupOnSendfile):
                sock._sendfile_use_sendfile(File(fd))
            with self.assertRaises(OverflowError):
                sock._sendfile_use_sendfile(File(2**1000))
            with self.assertRaises(TypeError):
                sock._sendfile_use_sendfile(File(None))

    def _test_socket_fileno(self, s, family, stype):
        self.assertEqual(s.family, family)
        self.assertEqual(s.type, stype)

        fd = s.fileno()
        s2 = socket.socket(fileno=fd)
        self.addCleanup(s2.close)
        # detach old fd to avoid double close
        s.detach()
        self.assertEqual(s2.family, family)
        self.assertEqual(s2.type, stype)
        self.assertEqual(s2.fileno(), fd)

    def test_socket_fileno(self):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.addCleanup(s.close)
        s.bind((socket_helper.HOST, 0))
        self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_STREAM)

        if hasattr(socket, "SOCK_DGRAM"):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.addCleanup(s.close)
            s.bind((socket_helper.HOST, 0))
            self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_DGRAM)

        if socket_helper.IPV6_ENABLED:
            s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            self.addCleanup(s.close)
            s.bind((socket_helper.HOSTv6, 0, 0, 0))
            self._test_socket_fileno(s, socket.AF_INET6, socket.SOCK_STREAM)

        if hasattr(socket, "AF_UNIX"):
            tmpdir = tempfile.mkdtemp()
            self.addCleanup(shutil.rmtree,
tmpdir) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.addCleanup(s.close) try: s.bind(os.path.join(tmpdir, 'socket')) except PermissionError: pass else: self._test_socket_fileno(s, socket.AF_UNIX, socket.SOCK_STREAM) def test_socket_fileno_rejects_float(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=42.5) def test_socket_fileno_rejects_other_types(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno="foo") def test_socket_fileno_rejects_invalid_socket(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1) @unittest.skipIf(os.name == "nt", "Windows disallows -1 only") def test_socket_fileno_rejects_negative(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-42) def test_socket_fileno_requires_valid_fd(self): WSAENOTSOCK = 10038 with self.assertRaises(OSError) as cm: socket.socket(fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) def test_socket_fileno_requires_socket_fd(self): with tempfile.NamedTemporaryFile() as afile: with self.assertRaises(OSError): socket.socket(fileno=afile.fileno()) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=afile.fileno()) self.assertEqual(cm.exception.errno, errno.ENOTSOCK) def test_addressfamily_enum(self): import _socket, enum CheckedAddressFamily = enum._old_convert_( enum.IntEnum, 'AddressFamily', 'socket', lambda C: C.isupper() and C.startswith('AF_'), source=_socket, ) enum._test_simple_enum(CheckedAddressFamily, socket.AddressFamily) def test_socketkind_enum(self): import _socket, enum CheckedSocketKind = enum._old_convert_( enum.IntEnum, 'SocketKind', 'socket', lambda C: C.isupper() and C.startswith('SOCK_'), source=_socket, ) enum._test_simple_enum(CheckedSocketKind, socket.SocketKind) def test_msgflag_enum(self): import _socket, enum CheckedMsgFlag = enum._old_convert_( enum.IntFlag, 'MsgFlag', 'socket', lambda C: C.isupper() and C.startswith('MSG_'), source=_socket, ) enum._test_simple_enum(CheckedMsgFlag, socket.MsgFlag) def test_addressinfo_enum(self): import _socket, enum CheckedAddressInfo = enum._old_convert_( enum.IntFlag, 'AddressInfo', 'socket', lambda C: C.isupper() and C.startswith('AI_'), source=_socket) enum._test_simple_enum(CheckedAddressInfo, socket.AddressInfo) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class BasicCANTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_RAW @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCMConstants(self): socket.CAN_BCM # opcodes socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task socket.CAN_BCM_TX_SEND # send one CAN frame socket.CAN_BCM_RX_SETUP # create RX content filter subscription socket.CAN_BCM_RX_DELETE # remove RX content filter subscription socket.CAN_BCM_RX_READ # read properties of RX content filter subscription socket.CAN_BCM_TX_STATUS # reply to TX_READ request 
socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) socket.CAN_BCM_RX_STATUS # reply to RX_READ request socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) # flags socket.CAN_BCM_SETTIMER socket.CAN_BCM_STARTTIMER socket.CAN_BCM_TX_COUNTEVT socket.CAN_BCM_TX_ANNOUNCE socket.CAN_BCM_TX_CP_CAN_ID socket.CAN_BCM_RX_FILTER_ID socket.CAN_BCM_RX_CHECK_DLC socket.CAN_BCM_RX_NO_AUTOTIMER socket.CAN_BCM_RX_ANNOUNCE_RESUME socket.CAN_BCM_TX_RESET_MULTI_IDX socket.CAN_BCM_RX_RTR_FRAME def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testCreateBCMSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: pass def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: address = ('', ) s.bind(address) self.assertEqual(s.getsockname(), address) def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: self.assertRaisesRegex(OSError, 'interface name too long', s.bind, ('x' * 1024,)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_LOOPBACK"), 'socket.CAN_RAW_LOOPBACK required for this test.') def testLoopback(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: for loopback in (0, 1): s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK, loopback) self.assertEqual(loopback, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_FILTER"), 'socket.CAN_RAW_FILTER required for this test.') def testFilter(self): can_id, can_mask = 0x200, 0x700 can_filter = struct.pack("=II", can_id, can_mask) with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter) self.assertEqual(can_filter, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, 8)) s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, bytearray(can_filter)) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class CANTest(ThreadedCANSocketTest): def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @classmethod def build_can_frame(cls, can_id, data): """Build a CAN frame.""" can_dlc = len(data) data = data.ljust(8, b'\x00') return struct.pack(cls.can_frame_fmt, can_id, can_dlc, data) @classmethod def dissect_can_frame(cls, frame): """Dissect a CAN frame.""" can_id, can_dlc, data = struct.unpack(cls.can_frame_fmt, frame) return (can_id, can_dlc, data[:can_dlc]) def testSendFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) self.assertEqual(addr[0], self.interface) def _testSendFrame(self): self.cf = self.build_can_frame(0x00, b'\x01\x02\x03\x04\x05') self.cli.send(self.cf) def testSendMaxFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) def _testSendMaxFrame(self): self.cf = self.build_can_frame(0x00, b'\x07' * 8) self.cli.send(self.cf) def testSendMultiFrames(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf1, cf) cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf2, cf) def _testSendMultiFrames(self): self.cf1 = self.build_can_frame(0x07, b'\x44\x33\x22\x11') self.cli.send(self.cf1) self.cf2 = 
self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def _testBCM(self): cf, addr = self.cli.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) can_id, can_dlc, data = self.dissect_can_frame(cf) self.assertEqual(self.can_id, can_id) self.assertEqual(self.data, data) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCM(self): bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) self.addCleanup(bcm.close) bcm.connect((self.interface,)) self.can_id = 0x123 self.data = bytes([0xc0, 0xff, 0xee]) self.cf = self.build_can_frame(self.can_id, self.data) opcode = socket.CAN_BCM_TX_SEND flags = 0 count = 0 ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 bcm_can_id = 0x0222 nframes = 1 assert len(self.cf) == 16 header = struct.pack(self.bcm_cmd_msg_fmt, opcode, flags, count, ival1_seconds, ival1_usec, ival2_seconds, ival2_usec, bcm_can_id, nframes, ) header_plus_frame = header + self.cf bytes_sent = bcm.send(header_plus_frame) self.assertEqual(bytes_sent, len(header_plus_frame)) @unittest.skipUnless(HAVE_SOCKET_CAN_ISOTP, 'CAN ISOTP required for this test.') class ISOTPTest(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_ISOTP socket.SOCK_DGRAM def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_ISOTP"), 'socket.CAN_ISOTP required for this test.') def testCreateISOTPSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: pass def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: with self.assertRaisesRegex(OSError, 'interface name too long'): s.bind(('x' * 1024, 1, 2)) def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: addr = self.interface, 0x123, 0x456 s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_CAN_J1939, 'CAN J1939 required for this test.') class J1939Test(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testJ1939Constants(self): socket.CAN_J1939 socket.J1939_MAX_UNICAST_ADDR socket.J1939_IDLE_ADDR socket.J1939_NO_ADDR socket.J1939_NO_NAME socket.J1939_PGN_REQUEST socket.J1939_PGN_ADDRESS_CLAIMED socket.J1939_PGN_ADDRESS_COMMANDED socket.J1939_PGN_PDU1_MAX socket.J1939_PGN_MAX socket.J1939_NO_PGN # J1939 socket options socket.SO_J1939_FILTER socket.SO_J1939_PROMISC socket.SO_J1939_SEND_PRIO socket.SO_J1939_ERRQUEUE socket.SCM_J1939_DEST_ADDR socket.SCM_J1939_DEST_NAME socket.SCM_J1939_PRIO socket.SCM_J1939_ERRQUEUE socket.J1939_NLA_PAD socket.J1939_NLA_BYTES_ACKED socket.J1939_EE_INFO_NONE socket.J1939_EE_INFO_TX_ABORT socket.J1939_FILTER_MAX @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testCreateJ1939Socket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: pass 
def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: addr = self.interface, socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_RDS socket.PF_RDS def testCreateSocket(self): with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: pass def testSocketBufferSize(self): bufsize = 16384 with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize) s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize) @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class RDSTest(ThreadedRDSSocketTest): def __init__(self, methodName='runTest'): ThreadedRDSSocketTest.__init__(self, methodName=methodName) def setUp(self): super().setUp() self.evt = threading.Event() def testSendAndRecv(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) self.assertEqual(self.cli_addr, addr) def _testSendAndRecv(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) def testPeek(self): data, addr = self.serv.recvfrom(self.bufsize, socket.MSG_PEEK) self.assertEqual(self.data, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testPeek(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) @requireAttrs(socket.socket, 'recvmsg') def testSendAndRecvMsg(self): data, ancdata, msg_flags, addr = self.serv.recvmsg(self.bufsize) self.assertEqual(self.data, data) @requireAttrs(socket.socket, 'sendmsg') def _testSendAndRecvMsg(self): self.data = b'hello ' * 10 self.cli.sendmsg([self.data], (), 0, (HOST, self.port)) def testSendAndRecvMulti(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data1, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data2, data) def _testSendAndRecvMulti(self): self.data1 = b'bacon' self.cli.sendto(self.data1, 0, (HOST, self.port)) self.data2 = b'egg' self.cli.sendto(self.data2, 0, (HOST, self.port)) def testSelect(self): r, w, x = select.select([self.serv], [], [], 3.0) self.assertIn(self.serv, r) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testSelect(self): self.data = b'select' self.cli.sendto(self.data, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_QIPCRTR, 'QIPCRTR sockets required for this test.') class BasicQIPCRTRTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_QIPCRTR def testCreateSocket(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: pass def testUnbound(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertEqual(s.getsockname()[1], 0) def testBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: socket_helper.bind_port(s, host=s.getsockname()[0]) self.assertNotEqual(s.getsockname()[1], 0) def testInvalidBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertRaises(OSError, socket_helper.bind_port, s, host=-2) def testAutoBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: s.connect((123, 123)) self.assertNotEqual(s.getsockname()[1], 0) 
@unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class BasicVSOCKTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_VSOCK def testVSOCKConstants(self): socket.SO_VM_SOCKETS_BUFFER_SIZE socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE socket.VMADDR_CID_ANY socket.VMADDR_PORT_ANY socket.VMADDR_CID_HOST socket.VM_SOCKETS_INVALID_VERSION socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID def testCreateSocket(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: pass def testSocketBufferSize(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: orig_max = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE) orig = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE) orig_min = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE, orig_max * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE, orig * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE, orig_min * 2) self.assertEqual(orig_max * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE)) self.assertEqual(orig * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE)) self.assertEqual(orig_min * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE)) @unittest.skipUnless(HAVE_SOCKET_BLUETOOTH, 'Bluetooth sockets required for this test.') class BasicBluetoothTest(unittest.TestCase): def testBluetoothConstants(self): socket.BDADDR_ANY socket.BDADDR_LOCAL socket.AF_BLUETOOTH socket.BTPROTO_RFCOMM if sys.platform != "win32": socket.BTPROTO_HCI socket.SOL_HCI socket.BTPROTO_L2CAP if not sys.platform.startswith("freebsd"): socket.BTPROTO_SCO def testCreateRfcommSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support L2CAP sockets") def testCreateL2capSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support HCI sockets") def testCreateHciSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI) as s: pass @unittest.skipIf(sys.platform == "win32" or sys.platform.startswith("freebsd"), "windows and freebsd do not support SCO sockets") def testCreateScoSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_SCO) as s: pass class BasicTCPTest(SocketConnectedTest): def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecv(self): # Testing large receive over TCP msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.serv_conn.send(MSG) def testOverFlowRecv(self): # Testing receive in chunks over TCP seg1 = self.cli_conn.recv(len(MSG) - 3) seg2 = self.cli_conn.recv(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecv(self): self.serv_conn.send(MSG) def testRecvFrom(self): # Testing large recvfrom() over TCP msg, addr = self.cli_conn.recvfrom(1024) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.serv_conn.send(MSG) def testOverFlowRecvFrom(self): # Testing recvfrom() in chunks over TCP seg1, addr = self.cli_conn.recvfrom(len(MSG)-3) seg2, addr = self.cli_conn.recvfrom(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def 
_testOverFlowRecvFrom(self): self.serv_conn.send(MSG) def testSendAll(self): # Testing sendall() with a 2048 byte string over TCP msg = b'' while 1: read = self.cli_conn.recv(1024) if not read: break msg += read self.assertEqual(msg, b'f' * 2048) def _testSendAll(self): big_chunk = b'f' * 2048 self.serv_conn.sendall(big_chunk) def testFromFd(self): # Testing fromfd() fd = self.cli_conn.fileno() sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) self.assertIsInstance(sock, socket.socket) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testFromFd(self): self.serv_conn.send(MSG) def testDup(self): # Testing dup() sock = self.cli_conn.dup() self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDup(self): self.serv_conn.send(MSG) def testShutdown(self): # Testing shutdown() msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) # wait for _testShutdown to finish: on OS X, when the server # closes the connection the client also becomes disconnected, # and the client's shutdown call will fail. (Issue #4397.) self.done.wait() def _testShutdown(self): self.serv_conn.send(MSG) self.serv_conn.shutdown(2) testShutdown_overflow = support.cpython_only(testShutdown) @support.cpython_only def _testShutdown_overflow(self): import _testcapi self.serv_conn.send(MSG) # Issue 15989 self.assertRaises(OverflowError, self.serv_conn.shutdown, _testcapi.INT_MAX + 1) self.assertRaises(OverflowError, self.serv_conn.shutdown, 2 + (_testcapi.UINT_MAX + 1)) self.serv_conn.shutdown(2) def testDetach(self): # Testing detach() fileno = self.cli_conn.fileno() f = self.cli_conn.detach() self.assertEqual(f, fileno) # cli_conn cannot be used anymore... self.assertTrue(self.cli_conn._closed) self.assertRaises(OSError, self.cli_conn.recv, 1024) self.cli_conn.close() # ...but we can create another socket using the (still open) # file descriptor sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=f) self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDetach(self): self.serv_conn.send(MSG) class BasicUDPTest(ThreadedUDPSocketTest): def __init__(self, methodName='runTest'): ThreadedUDPSocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDP msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDP msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. 
self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class BasicUDPLITETest(ThreadedUDPLITESocketTest): def __init__(self, methodName='runTest'): ThreadedUDPLITESocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDPLITE msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDPLITE msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) # Tests for the sendmsg()/recvmsg() interface. Where possible, the # same test code is used with different families and types of socket # (e.g. stream, datagram), and tests using recvmsg() are repeated # using recvmsg_into(). # # The generic test classes such as SendmsgTests and # RecvmsgGenericTests inherit from SendrecvmsgBase and expect to be # supplied with sockets cli_sock and serv_sock representing the # client's and the server's end of the connection respectively, and # attributes cli_addr and serv_addr holding their (numeric where # appropriate) addresses. # # The final concrete test classes combine these with subclasses of # SocketTestBase which set up client and server sockets of a specific # type, and with subclasses of SendrecvmsgBase such as # SendrecvmsgDgramBase and SendrecvmsgConnectedBase which map these # sockets to cli_sock and serv_sock and override the methods and # attributes of SendrecvmsgBase to fill in destination addresses if # needed when sending, check for specific flags in msg_flags, etc. # # RecvmsgIntoMixin provides a version of doRecvmsg() implemented using # recvmsg_into(). # XXX: like the other datagram (UDP) tests in this module, the code # here assumes that datagram delivery on the local machine will be # reliable. class SendrecvmsgBase: # Base class for sendmsg()/recvmsg() tests. # Time in seconds to wait before considering a test failed, or # None for no timeout. Not all tests actually set a timeout. fail_timeout = support.LOOPBACK_TIMEOUT def setUp(self): self.misc_event = threading.Event() super().setUp() def sendToServer(self, msg): # Send msg to the server. return self.cli_sock.send(msg) # Tuple of alternative default arguments for sendmsg() when called # via sendmsgToServer() (e.g. to include a destination address). sendmsg_to_server_defaults = () def sendmsgToServer(self, *args): # Call sendmsg() on self.cli_sock with the given arguments, # filling in any arguments which are not supplied with the # corresponding items of self.sendmsg_to_server_defaults, if # any. return self.cli_sock.sendmsg( *(args + self.sendmsg_to_server_defaults[len(args):])) def doRecvmsg(self, sock, bufsize, *args): # Call recvmsg() on sock with given arguments and return its # result. Should be used for tests which can use either # recvmsg() or recvmsg_into() - RecvmsgIntoMixin overrides # this method with one which emulates it using recvmsg_into(), # thus allowing the same test to be used for both methods. 
result = sock.recvmsg(bufsize, *args) self.registerRecvmsgResult(result) return result def registerRecvmsgResult(self, result): # Called by doRecvmsg() with the return value of recvmsg() or # recvmsg_into(). Can be overridden to arrange cleanup based # on the returned ancillary data, for instance. pass def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer. self.assertEqual(addr1, addr2) # Flags that are normally unset in msg_flags msg_flags_common_unset = 0 for name in ("MSG_CTRUNC", "MSG_OOB"): msg_flags_common_unset |= getattr(socket, name, 0) # Flags that are normally set msg_flags_common_set = 0 # Flags set when a complete record has been received (e.g. MSG_EOR # for SCTP) msg_flags_eor_indicator = 0 # Flags set when a complete record has not been received # (e.g. MSG_TRUNC for datagram sockets) msg_flags_non_eor_indicator = 0 def checkFlags(self, flags, eor=None, checkset=0, checkunset=0, ignore=0): # Method to check the value of msg_flags returned by recvmsg[_into](). # # Checks that all bits in msg_flags_common_set attribute are # set in "flags" and all bits in msg_flags_common_unset are # unset. # # The "eor" argument specifies whether the flags should # indicate that a full record (or datagram) has been received. # If "eor" is None, no checks are done; otherwise, checks # that: # # * if "eor" is true, all bits in msg_flags_eor_indicator are # set and all bits in msg_flags_non_eor_indicator are unset # # * if "eor" is false, all bits in msg_flags_non_eor_indicator # are set and all bits in msg_flags_eor_indicator are unset # # If "checkset" and/or "checkunset" are supplied, they require # the given bits to be set or unset respectively, overriding # what the attributes require for those bits. # # If any bits are set in "ignore", they will not be checked, # regardless of the other inputs. # # Will raise Exception if the inputs require a bit to be both # set and unset, and it is not ignored. defaultset = self.msg_flags_common_set defaultunset = self.msg_flags_common_unset if eor: defaultset |= self.msg_flags_eor_indicator defaultunset |= self.msg_flags_non_eor_indicator elif eor is not None: defaultset |= self.msg_flags_non_eor_indicator defaultunset |= self.msg_flags_eor_indicator # Function arguments override defaults defaultset &= ~checkunset defaultunset &= ~checkset # Merge arguments with remaining defaults, and check for conflicts checkset |= defaultset checkunset |= defaultunset inboth = checkset & checkunset & ~ignore if inboth: raise Exception("contradictory set, unset requirements for flags " "{0:#x}".format(inboth)) # Compare with given msg_flags value mask = (checkset | checkunset) & ~ignore self.assertEqual(flags & mask, checkset & mask) class RecvmsgIntoMixin(SendrecvmsgBase): # Mixin to implement doRecvmsg() using recvmsg_into(). def doRecvmsg(self, sock, bufsize, *args): buf = bytearray(bufsize) result = sock.recvmsg_into([buf], *args) self.registerRecvmsgResult(result) self.assertGreaterEqual(result[0], 0) self.assertLessEqual(result[0], bufsize) return (bytes(buf[:result[0]]),) + result[1:] class SendrecvmsgDgramFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for datagram sockets. @property def msg_flags_non_eor_indicator(self): return super().msg_flags_non_eor_indicator | socket.MSG_TRUNC class SendrecvmsgSCTPFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for SCTP sockets. 
@property def msg_flags_eor_indicator(self): return super().msg_flags_eor_indicator | socket.MSG_EOR class SendrecvmsgConnectionlessBase(SendrecvmsgBase): # Base class for tests on connectionless-mode sockets. Users must # supply sockets on attributes cli and serv to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.serv @property def cli_sock(self): return self.cli @property def sendmsg_to_server_defaults(self): return ([], [], 0, self.serv_addr) def sendToServer(self, msg): return self.cli_sock.sendto(msg, self.serv_addr) class SendrecvmsgConnectedBase(SendrecvmsgBase): # Base class for tests on connected sockets. Users must supply # sockets on attributes serv_conn and cli_conn (representing the # connections *to* the server and the client), to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.cli_conn @property def cli_sock(self): return self.serv_conn def checkRecvmsgAddress(self, addr1, addr2): # Address is currently "unspecified" for a connected socket, # so we don't examine it pass class SendrecvmsgServerTimeoutBase(SendrecvmsgBase): # Base class to set a timeout on server's socket. def setUp(self): super().setUp() self.serv_sock.settimeout(self.fail_timeout) class SendmsgTests(SendrecvmsgServerTimeoutBase): # Tests for sendmsg() which can use any socket type and do not # involve recvmsg() or recvmsg_into(). def testSendmsg(self): # Send a simple message with sendmsg(). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG]), len(MSG)) def testSendmsgDataGenerator(self): # Send from buffer obtained from a generator (not a sequence). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgDataGenerator(self): self.assertEqual(self.sendmsgToServer((o for o in [MSG])), len(MSG)) def testSendmsgAncillaryGenerator(self): # Gather (empty) ancillary data from a generator. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgAncillaryGenerator(self): self.assertEqual(self.sendmsgToServer([MSG], (o for o in [])), len(MSG)) def testSendmsgArray(self): # Send data from an array instead of the usual bytes object. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgArray(self): self.assertEqual(self.sendmsgToServer([array.array("B", MSG)]), len(MSG)) def testSendmsgGather(self): # Send message data from more than one buffer (gather write). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgGather(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) def testSendmsgBadArgs(self): # Check that sendmsg() rejects invalid arguments. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadArgs(self): self.assertRaises(TypeError, self.cli_sock.sendmsg) self.assertRaises(TypeError, self.sendmsgToServer, b"not in an iterable") self.assertRaises(TypeError, self.sendmsgToServer, object()) self.assertRaises(TypeError, self.sendmsgToServer, [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG, object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], 0, object()) self.sendToServer(b"done") def testSendmsgBadCmsg(self): # Check that invalid ancillary data items are rejected. 
self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(object(), 0, b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, object(), b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, object())]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0)]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b"data", 42)]) self.sendToServer(b"done") @requireAttrs(socket, "CMSG_SPACE") def testSendmsgBadMultiCmsg(self): # Check that invalid ancillary data items are rejected when # more than one item is present. self.assertEqual(self.serv_sock.recv(1000), b"done") @testSendmsgBadMultiCmsg.client_skip def _testSendmsgBadMultiCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [0, 0, b""]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b""), object()]) self.sendToServer(b"done") def testSendmsgExcessCmsgReject(self): # Check that sendmsg() rejects excess ancillary data items # when the number that can be sent is limited. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgExcessCmsgReject(self): if not hasattr(socket, "CMSG_SPACE"): # Can only send one item with self.assertRaises(OSError) as cm: self.sendmsgToServer([MSG], [(0, 0, b""), (0, 0, b"")]) self.assertIsNone(cm.exception.errno) self.sendToServer(b"done") def testSendmsgAfterClose(self): # Check that sendmsg() fails on a closed socket. pass def _testSendmsgAfterClose(self): self.cli_sock.close() self.assertRaises(OSError, self.sendmsgToServer, [MSG]) class SendmsgStreamTests(SendmsgTests): # Tests for sendmsg() which require a stream socket and do not # involve recvmsg() or recvmsg_into(). def testSendmsgExplicitNoneAddr(self): # Check that peer address can be specified as None. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgExplicitNoneAddr(self): self.assertEqual(self.sendmsgToServer([MSG], [], 0, None), len(MSG)) def testSendmsgTimeout(self): # Check that timeout works with sendmsg(). self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) def _testSendmsgTimeout(self): try: self.cli_sock.settimeout(0.03) try: while True: self.sendmsgToServer([b"a"*512]) except TimeoutError: pass except OSError as exc: if exc.errno != errno.ENOMEM: raise # bpo-33937 the test randomly fails on Travis CI with # "OSError: [Errno 12] Cannot allocate memory" else: self.fail("TimeoutError not raised") finally: self.misc_event.set() # XXX: would be nice to have more tests for sendmsg flags argument. # Linux supports MSG_DONTWAIT when sending, but in general, it # only works when receiving. Could add other platforms if they # support it too. @skipWithClientIf(sys.platform not in {"linux"}, "MSG_DONTWAIT not known to work on this platform when " "sending") def testSendmsgDontWait(self): # Check that MSG_DONTWAIT in flags causes non-blocking behaviour. 
self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @testSendmsgDontWait.client_skip def _testSendmsgDontWait(self): try: with self.assertRaises(OSError) as cm: while True: self.sendmsgToServer([b"a"*512], [], socket.MSG_DONTWAIT) # bpo-33937: catch also ENOMEM, the test randomly fails on Travis CI # with "OSError: [Errno 12] Cannot allocate memory" self.assertIn(cm.exception.errno, (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOMEM)) finally: self.misc_event.set() class SendmsgConnectionlessTests(SendmsgTests): # Tests for sendmsg() which require a connectionless-mode # (e.g. datagram) socket, and do not involve recvmsg() or # recvmsg_into(). def testSendmsgNoDestAddr(self): # Check that sendmsg() fails when no destination address is # given for unconnected socket. pass def _testSendmsgNoDestAddr(self): self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG]) self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG], [], 0, None) class RecvmsgGenericTests(SendrecvmsgBase): # Tests for recvmsg() which can also be emulated using # recvmsg_into(), and can use any socket type. def testRecvmsg(self): # Receive a simple message with recvmsg[_into](). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsg(self): self.sendToServer(MSG) def testRecvmsgExplicitDefaults(self): # Test recvmsg[_into]() with default arguments provided explicitly. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgExplicitDefaults(self): self.sendToServer(MSG) def testRecvmsgShorter(self): # Receive a message smaller than buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) + 42) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShorter(self): self.sendToServer(MSG) def testRecvmsgTrunc(self): # Receive part of message, check for truncation indicators. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) def _testRecvmsgTrunc(self): self.sendToServer(MSG) def testRecvmsgShortAncillaryBuf(self): # Test ancillary data buffer too small to hold any ancillary data. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShortAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgLongAncillaryBuf(self): # Test large ancillary data buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgLongAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgAfterClose(self): # Check that recvmsg[_into]() fails on a closed socket. self.serv_sock.close() self.assertRaises(OSError, self.doRecvmsg, self.serv_sock, 1024) def _testRecvmsgAfterClose(self): pass def testRecvmsgTimeout(self): # Check that timeout works. 
try: self.serv_sock.settimeout(0.03) self.assertRaises(TimeoutError, self.doRecvmsg, self.serv_sock, len(MSG)) finally: self.misc_event.set() def _testRecvmsgTimeout(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @requireAttrs(socket, "MSG_PEEK") def testRecvmsgPeek(self): # Check that MSG_PEEK in flags enables examination of pending # data without consuming it. # Receive part of data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3, 0, socket.MSG_PEEK) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) # Ignoring MSG_TRUNC here (so this test is the same for stream # and datagram sockets). Some wording in POSIX seems to # suggest that it needn't be set when peeking, but that may # just be a slip. self.checkFlags(flags, eor=False, ignore=getattr(socket, "MSG_TRUNC", 0)) # Receive all data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, socket.MSG_PEEK) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) # Check that the same data can still be received normally. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgPeek.client_skip def _testRecvmsgPeek(self): self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") def testRecvmsgFromSendmsg(self): # Test receiving with recvmsg[_into]() when message is sent # using sendmsg(). self.serv_sock.settimeout(self.fail_timeout) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgFromSendmsg.client_skip def _testRecvmsgFromSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) class RecvmsgGenericStreamTests(RecvmsgGenericTests): # Tests which require a stream socket and can use either recvmsg() # or recvmsg_into(). def testRecvmsgEOF(self): # Receive end-of-stream indicator (b"", peer socket closed). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.assertEqual(msg, b"") self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=None) # Might not have end-of-record marker def _testRecvmsgEOF(self): self.cli_sock.close() def testRecvmsgOverflow(self): # Receive a message in more than one chunk. seg1, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) seg2, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testRecvmsgOverflow(self): self.sendToServer(MSG) class RecvmsgTests(RecvmsgGenericTests): # Tests for recvmsg() which can use any socket type. def testRecvmsgBadArgs(self): # Check that recvmsg() rejects invalid arguments. 
self.assertRaises(TypeError, self.serv_sock.recvmsg) self.assertRaises(ValueError, self.serv_sock.recvmsg, -1, 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg, len(MSG), -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, [bytearray(10)], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, object(), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), 0, object()) msg, ancdata, flags, addr = self.serv_sock.recvmsg(len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgBadArgs(self): self.sendToServer(MSG) class RecvmsgIntoTests(RecvmsgIntoMixin, RecvmsgGenericTests): # Tests for recvmsg_into() which can use any socket type. def testRecvmsgIntoBadArgs(self): # Check that recvmsg_into() rejects invalid arguments. buf = bytearray(len(MSG)) self.assertRaises(TypeError, self.serv_sock.recvmsg_into) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, len(MSG), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, buf, 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [object()], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [b"I'm not writable"], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf, object()], 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg_into, [buf], -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], 0, object()) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf], 0, 0) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoBadArgs(self): self.sendToServer(MSG) def testRecvmsgIntoGenerator(self): # Receive into buffer obtained from a generator (not a sequence). buf = bytearray(len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( (o for o in [buf])) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoGenerator(self): self.sendToServer(MSG) def testRecvmsgIntoArray(self): # Receive into an array rather than the usual bytearray. buf = array.array("B", [0] * len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf]) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf.tobytes(), MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoArray(self): self.sendToServer(MSG) def testRecvmsgIntoScatter(self): # Receive into multiple buffers (scatter write). 
b1 = bytearray(b"----") b2 = bytearray(b"0123456789") b3 = bytearray(b"--------------") nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( [b1, memoryview(b2)[2:9], b3]) self.assertEqual(nbytes, len(b"Mary had a little lamb")) self.assertEqual(b1, bytearray(b"Mary")) self.assertEqual(b2, bytearray(b"01 had a 9")) self.assertEqual(b3, bytearray(b"little lamb---")) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoScatter(self): self.sendToServer(b"Mary had a little lamb") class CmsgMacroTests(unittest.TestCase): # Test the functions CMSG_LEN() and CMSG_SPACE(). Tests # assumptions used by sendmsg() and recvmsg[_into](), which share # code with these functions. # Match the definition in socketmodule.c try: import _testcapi except ImportError: socklen_t_limit = 0x7fffffff else: socklen_t_limit = min(0x7fffffff, _testcapi.INT_MAX) @requireAttrs(socket, "CMSG_LEN") def testCMSG_LEN(self): # Test CMSG_LEN() with various valid and invalid values, # checking the assumptions used by recvmsg() and sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_LEN(0) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(socket.CMSG_LEN(0), array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_LEN(n) # This is how recvmsg() calculates the data size self.assertEqual(ret - socket.CMSG_LEN(0), n) self.assertLessEqual(ret, self.socklen_t_limit) self.assertRaises(OverflowError, socket.CMSG_LEN, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_LEN, toobig) self.assertRaises(OverflowError, socket.CMSG_LEN, sys.maxsize) @requireAttrs(socket, "CMSG_SPACE") def testCMSG_SPACE(self): # Test CMSG_SPACE() with various valid and invalid values, # checking the assumptions used by sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_SPACE(1) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) last = socket.CMSG_SPACE(0) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(last, array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_SPACE(n) self.assertGreaterEqual(ret, last) self.assertGreaterEqual(ret, socket.CMSG_LEN(n)) self.assertGreaterEqual(ret, n + socket.CMSG_LEN(0)) self.assertLessEqual(ret, self.socklen_t_limit) last = ret self.assertRaises(OverflowError, socket.CMSG_SPACE, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_SPACE, toobig) self.assertRaises(OverflowError, socket.CMSG_SPACE, sys.maxsize) class SCMRightsTest(SendrecvmsgServerTimeoutBase): # Tests for file descriptor passing on Unix-domain sockets. # Invalid file descriptor value that's unlikely to evaluate to a # real FD even if one of its bytes is replaced with a different # value (which shouldn't actually happen). badfd = -0x5555 def newFDs(self, n): # Return a list of n file descriptors for newly-created files # containing their list indices as ASCII numbers. fds = [] for i in range(n): fd, path = tempfile.mkstemp() self.addCleanup(os.unlink, path) self.addCleanup(os.close, fd) os.write(fd, str(i).encode()) fds.append(fd) return fds def checkFDs(self, fds): # Check that the file descriptors in the given list contain # their correct list indices as ASCII numbers. 
for n, fd in enumerate(fds): os.lseek(fd, 0, os.SEEK_SET) self.assertEqual(os.read(fd, 1024), str(n).encode()) def registerRecvmsgResult(self, result): self.addCleanup(self.closeRecvmsgFDs, result) def closeRecvmsgFDs(self, recvmsg_result): # Close all file descriptors specified in the ancillary data # of the given return value from recvmsg() or recvmsg_into(). for cmsg_level, cmsg_type, cmsg_data in recvmsg_result[1]: if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS): fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) for fd in fds: os.close(fd) def createAndSendFDs(self, n): # Send n new file descriptors created by newFDs() to the # server, with the constant MSG as the non-ancillary data. self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(n)))]), len(MSG)) def checkRecvmsgFDs(self, numfds, result, maxcmsgs=1, ignoreflags=0): # Check that constant MSG was received with numfds file # descriptors in a maximum of maxcmsgs control messages (which # must contain only complete integers). By default, check # that MSG_CTRUNC is unset, but ignore any flags in # ignoreflags. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertIsInstance(ancdata, list) self.assertLessEqual(len(ancdata), maxcmsgs) fds = array.array("i") for item in ancdata: self.assertIsInstance(item, tuple) cmsg_level, cmsg_type, cmsg_data = item self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data) % SIZEOF_INT, 0) fds.frombytes(cmsg_data) self.assertEqual(len(fds), numfds) self.checkFDs(fds) def testFDPassSimple(self): # Pass a single FD (array read from bytes object). self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testFDPassSimple(self): self.assertEqual( self.sendmsgToServer( [MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(1)).tobytes())]), len(MSG)) def testMultipleFDPass(self): # Pass multiple FDs in a single array. self.checkRecvmsgFDs(4, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testMultipleFDPass(self): self.createAndSendFDs(4) @requireAttrs(socket, "CMSG_SPACE") def testFDPassCMSG_SPACE(self): # Test using CMSG_SPACE() to calculate ancillary buffer size. self.checkRecvmsgFDs( 4, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(4 * SIZEOF_INT))) @testFDPassCMSG_SPACE.client_skip def _testFDPassCMSG_SPACE(self): self.createAndSendFDs(4) def testFDPassCMSG_LEN(self): # Test using CMSG_LEN() to calculate ancillary buffer size. self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(4 * SIZEOF_INT)), # RFC 3542 says implementations may set # MSG_CTRUNC if there isn't enough space # for trailing padding. ignoreflags=socket.MSG_CTRUNC) def _testFDPassCMSG_LEN(self): self.createAndSendFDs(1) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparate(self): # Pass two FDs in two separate arrays. Arrays may be combined # into a single control message by the OS. 
self.checkRecvmsgFDs(2, self.doRecvmsg(self.serv_sock, len(MSG), 10240), maxcmsgs=2) @testFDPassSeparate.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparate(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparateMinSpace(self): # Pass two FDs in two separate arrays, receiving them into the # minimum space for two arrays. num_fds = 2 self.checkRecvmsgFDs(num_fds, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT * num_fds)), maxcmsgs=2, ignoreflags=socket.MSG_CTRUNC) @testFDPassSeparateMinSpace.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparateMinSpace(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) def sendAncillaryIfPossible(self, msg, ancdata): # Try to send msg and ancdata to server, but if the system # call fails, just send msg with no ancillary data. try: nbytes = self.sendmsgToServer([msg], ancdata) except OSError as e: # Check that it was the system call that failed self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer([msg]) self.assertEqual(nbytes, len(msg)) @unittest.skipIf(sys.platform == "darwin", "see issue #24725") def testFDPassEmpty(self): # Try to pass an empty FD array. Can receive either no array # or an empty array. self.checkRecvmsgFDs(0, self.doRecvmsg(self.serv_sock, len(MSG), 10240), ignoreflags=socket.MSG_CTRUNC) def _testFDPassEmpty(self): self.sendAncillaryIfPossible(MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, b"")]) def testFDPassPartialInt(self): # Try to pass a truncated FD array. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertLess(len(cmsg_data), SIZEOF_INT) def _testFDPassPartialInt(self): self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [self.badfd]).tobytes()[:-1])]) @requireAttrs(socket, "CMSG_SPACE") def testFDPassPartialIntInMiddle(self): # Try to pass two FD arrays, the first of which is truncated. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 2) fds = array.array("i") # Arrays may have been combined in a single control message for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.assertLessEqual(len(fds), 2) self.checkFDs(fds) @testFDPassPartialIntInMiddle.client_skip def _testFDPassPartialIntInMiddle(self): fd0, fd1 = self.newFDs(2) self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0, self.badfd]).tobytes()[:-1]), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]) def checkTruncatedHeader(self, result, ignoreflags=0): # Check that no ancillary data items are returned when data is # truncated inside the cmsghdr structure. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no buffer size # is specified. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG)), # BSD seems to set MSG_CTRUNC only # if an item has been partially # received. ignoreflags=socket.MSG_CTRUNC) def _testCmsgTruncNoBufSize(self): self.createAndSendFDs(1) def testCmsgTrunc0(self): # Check that no ancillary data is received when buffer size is 0. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 0), ignoreflags=socket.MSG_CTRUNC) def _testCmsgTrunc0(self): self.createAndSendFDs(1) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. def testCmsgTrunc1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 1)) def _testCmsgTrunc1(self): self.createAndSendFDs(1) def testCmsgTrunc2Int(self): # The cmsghdr structure has at least three members, two of # which are ints, so we still shouldn't see any ancillary # data. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), SIZEOF_INT * 2)) def _testCmsgTrunc2Int(self): self.createAndSendFDs(1) def testCmsgTruncLen0Minus1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(0) - 1)) def _testCmsgTruncLen0Minus1(self): self.createAndSendFDs(1) # The following tests try to truncate the control message in the # middle of the FD array. def checkTruncatedArray(self, ancbuf, maxdata, mindata=0): # Check that file descriptor data is truncated to between # mindata and maxdata bytes when received with buffer size # ancbuf, and that any complete file descriptor numbers are # valid. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbuf) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) if mindata == 0 and ancdata == []: return self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertGreaterEqual(len(cmsg_data), mindata) self.assertLessEqual(len(cmsg_data), maxdata) fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.checkFDs(fds) def testCmsgTruncLen0(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0), maxdata=0) def _testCmsgTruncLen0(self): self.createAndSendFDs(1) def testCmsgTruncLen0Plus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0) + 1, maxdata=1) def _testCmsgTruncLen0Plus1(self): self.createAndSendFDs(2) def testCmsgTruncLen1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(SIZEOF_INT), maxdata=SIZEOF_INT) def _testCmsgTruncLen1(self): self.createAndSendFDs(2) def testCmsgTruncLen2Minus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(2 * SIZEOF_INT) - 1, maxdata=(2 * SIZEOF_INT) - 1) def _testCmsgTruncLen2Minus1(self): self.createAndSendFDs(2) class RFC3542AncillaryTest(SendrecvmsgServerTimeoutBase): # Test sendmsg() and recvmsg[_into]() using the ancillary data # features of the RFC 3542 Advanced Sockets API for IPv6. # Currently we can only handle certain data items (e.g. traffic # class, hop limit, MTU discovery and fragmentation settings) # without resorting to unportable means such as the struct module, # but the tests here are aimed at testing the ancillary data # handling in sendmsg() and recvmsg() rather than the IPv6 API # itself. # Test value to use when setting hop limit of packet hop_limit = 2 # Test value to use when setting traffic class of packet. # -1 means "use kernel default". traffic_class = -1 def ancillaryMapping(self, ancdata): # Given ancillary data list ancdata, return a mapping from # pairs (cmsg_level, cmsg_type) to corresponding cmsg_data. # Check that no (level, type) pair appears more than once. d = {} for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertNotIn((cmsg_level, cmsg_type), d) d[(cmsg_level, cmsg_type)] = cmsg_data return d def checkHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space. Check that data is MSG, ancillary data is not # truncated (but ignore any flags in ignoreflags), and hop # limit is between 0 and maxhop inclusive. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) self.assertIsInstance(ancdata[0], tuple) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimit(self): # Test receiving the packet hop limit as ancillary data. 
self.checkHopLimit(ancbufsize=10240) @testRecvHopLimit.client_skip def _testRecvHopLimit(self): # Need to wait until server has asked to receive ancillary # data, as implementations are not required to buffer it # otherwise. self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimitCMSG_SPACE(self): # Test receiving hop limit, using CMSG_SPACE to calculate buffer size. self.checkHopLimit(ancbufsize=socket.CMSG_SPACE(SIZEOF_INT)) @testRecvHopLimitCMSG_SPACE.client_skip def _testRecvHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Could test receiving into buffer sized using CMSG_LEN, but RFC # 3542 says portable applications must provide space for trailing # padding. Implementations may set MSG_CTRUNC if there isn't # enough space for the padding. @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSetHopLimit(self): # Test setting hop limit on outgoing packet and receiving it # at the other end. self.checkHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetHopLimit.client_skip def _testSetHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) def checkTrafficClassAndHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space. Check that data is MSG, ancillary # data is not truncated (but ignore any flags in ignoreflags), # and traffic class and hop limit are in range (hop limit no # more than maxhop). self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 2) ancmap = self.ancillaryMapping(ancdata) tcdata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_TCLASS)] self.assertEqual(len(tcdata), SIZEOF_INT) a = array.array("i") a.frombytes(tcdata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) hldata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT)] self.assertEqual(len(hldata), SIZEOF_INT) a = array.array("i") a.frombytes(hldata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimit(self): # Test receiving traffic class and hop limit as ancillary data. self.checkTrafficClassAndHopLimit(ancbufsize=10240) @testRecvTrafficClassAndHopLimit.client_skip def _testRecvTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimitCMSG_SPACE(self): # Test receiving traffic class and hop limit, using # CMSG_SPACE() to calculate buffer size. 
self.checkTrafficClassAndHopLimit( ancbufsize=socket.CMSG_SPACE(SIZEOF_INT) * 2) @testRecvTrafficClassAndHopLimitCMSG_SPACE.client_skip def _testRecvTrafficClassAndHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSetTrafficClassAndHopLimit(self): # Test setting traffic class and hop limit on outgoing packet, # and receiving them at the other end. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetTrafficClassAndHopLimit.client_skip def _testSetTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testOddCmsgSize(self): # Try to send ancillary data with first item one byte too # long. Fall back to sending with correct size if this fails, # and check that second item was handled correctly. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testOddCmsgSize.client_skip def _testOddCmsgSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) try: nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class]).tobytes() + b"\x00"), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) except OSError as e: self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) self.assertEqual(nbytes, len(MSG)) # Tests for proper handling of truncated ancillary data def checkHopLimitTruncatedHeader(self, ancbufsize, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space, which should be too small to contain the ancillary # data header (if ancbufsize is None, pass no second argument # to recvmsg()). Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and no ancillary data is # returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() args = () if ancbufsize is None else (ancbufsize,) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), *args) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no ancillary # buffer size is provided. self.checkHopLimitTruncatedHeader(ancbufsize=None, # BSD seems to set # MSG_CTRUNC only if an item # has been partially # received. 
ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc0(self): # Check that no ancillary data is received when ancillary # buffer size is zero. self.checkHopLimitTruncatedHeader(ancbufsize=0, ignoreflags=socket.MSG_CTRUNC) @testSingleCmsgTrunc0.client_skip def _testSingleCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc1(self): self.checkHopLimitTruncatedHeader(ancbufsize=1) @testSingleCmsgTrunc1.client_skip def _testSingleCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc2Int(self): self.checkHopLimitTruncatedHeader(ancbufsize=2 * SIZEOF_INT) @testSingleCmsgTrunc2Int.client_skip def _testSingleCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncLen0Minus1(self): self.checkHopLimitTruncatedHeader(ancbufsize=socket.CMSG_LEN(0) - 1) @testSingleCmsgTruncLen0Minus1.client_skip def _testSingleCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncInData(self): # Test truncation of a control message inside its associated # data. The message may be returned with its data truncated, # or not returned at all. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertLess(len(cmsg_data), SIZEOF_INT) @testSingleCmsgTruncInData.client_skip def _testSingleCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) def checkTruncatedSecondHeader(self, ancbufsize, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space, which should be large enough to # contain the first item, but too small to contain the header # of the second. Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and only one ancillary # data item is returned. 
self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertIn(cmsg_type, {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT}) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) # Try the above test with various buffer sizes. @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc0(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT), ignoreflags=socket.MSG_CTRUNC) @testSecondCmsgTrunc0.client_skip def _testSecondCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 1) @testSecondCmsgTrunc1.client_skip def _testSecondCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc2Int(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 2 * SIZEOF_INT) @testSecondCmsgTrunc2Int.client_skip def _testSecondCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncLen0Minus1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(0) - 1) @testSecondCmsgTruncLen0Minus1.client_skip def _testSecondCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncInData(self): # Test truncation of the second of two control messages inside # its associated data. 
self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) cmsg_types = {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT} cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertLess(len(cmsg_data), SIZEOF_INT) self.assertEqual(ancdata, []) @testSecondCmsgTruncInData.client_skip def _testSecondCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Derive concrete test classes for different socket types. class SendrecvmsgUDPTestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPTest(SendmsgConnectionlessTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPTest(RecvmsgTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPTest(RecvmsgIntoTests, SendrecvmsgUDPTestBase): pass class SendrecvmsgUDP6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDP6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDP6Test(SendmsgConnectionlessTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDP6Test(RecvmsgTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDP6Test(RecvmsgIntoTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDP6Test(RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDP6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITETestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITETestBase): pass 
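# --- Illustrative sketch (not part of the vendored test module) ---
# The SCMRightsTest and RFC3542AncillaryTest cases above exercise the
# sendmsg()/recvmsg() ancillary-data API.  The helper below is a minimal,
# self-contained sketch of the same file-descriptor-passing pattern on an
# AF_UNIX socketpair.  It assumes a Unix platform with SCM_RIGHTS support,
# its name is arbitrary, and nothing in the test suite calls it.
def _scm_rights_sketch():
    import array
    import os
    import socket

    left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    read_fd, write_fd = os.pipe()
    try:
        # Pack the descriptor into a native int array and attach it to one
        # byte of ordinary data as a single SCM_RIGHTS control message.
        fds = array.array("i", [read_fd])
        left.sendmsg([b"x"],
                     [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
        # CMSG_SPACE() sizes the ancillary buffer so there is room for the
        # cmsg header, the data, and any trailing padding.
        _msg, ancdata, _flags, _addr = right.recvmsg(
            1, socket.CMSG_SPACE(fds.itemsize))
        received = array.array("i")
        for cmsg_level, cmsg_type, cmsg_data in ancdata:
            if (cmsg_level == socket.SOL_SOCKET
                    and cmsg_type == socket.SCM_RIGHTS):
                # Discard any partial trailing integer, as the tests do.
                received.frombytes(
                    cmsg_data[:len(cmsg_data) - (len(cmsg_data) % received.itemsize)])
        # Each received value is a new descriptor referring to the same pipe.
        for fd in received:
            os.close(fd)
    finally:
        os.close(read_fd)
        os.close(write_fd)
        left.close()
        right.close()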
@unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPLITETest(SendmsgConnectionlessTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPLITETest(RecvmsgTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPLITETest(RecvmsgIntoTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITE6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITE6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDPLITE6Test(SendmsgConnectionlessTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDPLITE6Test(RecvmsgTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDPLITE6Test(RecvmsgIntoTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDPLITE6Test(RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDPLITE6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass class SendrecvmsgTCPTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, TCPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgTCPTest(SendmsgStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgTCPTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoTCPTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass class SendrecvmsgSCTPStreamTestBase(SendrecvmsgSCTPFlagsBase, SendrecvmsgConnectedBase, ConnectedStreamTestMixin, SCTPStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") 
@requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class SendmsgSCTPStreamTest(SendmsgStreamTests, SendrecvmsgSCTPStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgSCTPStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgIntoSCTPStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgIntoSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") class SendrecvmsgUnixStreamTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, UnixStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "AF_UNIX") class SendmsgUnixStreamTest(SendmsgStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @requireAttrs(socket, "AF_UNIX") class RecvmsgUnixStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @requireAttrs(socket, "AF_UNIX") class RecvmsgIntoUnixStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgSCMRightsStreamTest(SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg_into") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgIntoSCMRightsStreamTest(RecvmsgIntoMixin, SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass # Test interrupting the interruptible send/receive methods with a # signal when a timeout is set. These tests avoid having multiple # threads alive during the test so that the OS cannot deliver the # signal to the wrong one. class InterruptedTimeoutBase: # Base class for interrupted send/receive tests. Installs an # empty handler for SIGALRM and removes it on teardown, along with # any scheduled alarms. def setUp(self): super().setUp() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda signum, frame: 1 / 0) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) # Timeout for socket operations timeout = support.LOOPBACK_TIMEOUT # Provide setAlarm() method to schedule delivery of SIGALRM after # given number of seconds, or cancel it if zero, and an # appropriate time value to use. Use setitimer() if available. if hasattr(signal, "setitimer"): alarm_time = 0.05 def setAlarm(self, seconds): signal.setitimer(signal.ITIMER_REAL, seconds) else: # Old systems may deliver the alarm up to one second early alarm_time = 2 def setAlarm(self, seconds): signal.alarm(seconds) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. 
@requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedRecvTimeoutTest(InterruptedTimeoutBase, UDPTestBase): # Test interrupting the recv*() methods with signals when a # timeout is set. def setUp(self): super().setUp() self.serv.settimeout(self.timeout) def checkInterruptedRecv(self, func, *args, **kwargs): # Check that func(*args, **kwargs) raises # errno of EINTR when interrupted by a signal. try: self.setAlarm(self.alarm_time) with self.assertRaises(ZeroDivisionError) as cm: func(*args, **kwargs) finally: self.setAlarm(0) def testInterruptedRecvTimeout(self): self.checkInterruptedRecv(self.serv.recv, 1024) def testInterruptedRecvIntoTimeout(self): self.checkInterruptedRecv(self.serv.recv_into, bytearray(1024)) def testInterruptedRecvfromTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom, 1024) def testInterruptedRecvfromIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom_into, bytearray(1024)) @requireAttrs(socket.socket, "recvmsg") def testInterruptedRecvmsgTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg, 1024) @requireAttrs(socket.socket, "recvmsg_into") def testInterruptedRecvmsgIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg_into, [bytearray(1024)]) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedSendTimeoutTest(InterruptedTimeoutBase, SocketListeningTestMixin, TCPTestBase): # Test interrupting the interruptible send*() methods with signals # when a timeout is set. def setUp(self): super().setUp() self.serv_conn = self.newSocket() self.addCleanup(self.serv_conn.close) # Use a thread to complete the connection, but wait for it to # terminate before running the test, so that there is only one # thread to accept the signal. cli_thread = threading.Thread(target=self.doConnect) cli_thread.start() self.cli_conn, addr = self.serv.accept() self.addCleanup(self.cli_conn.close) cli_thread.join() self.serv_conn.settimeout(self.timeout) def doConnect(self): self.serv_conn.connect(self.serv_addr) def checkInterruptedSend(self, func, *args, **kwargs): # Check that func(*args, **kwargs), run in a loop, raises # OSError with an errno of EINTR when interrupted by a # signal. try: with self.assertRaises(ZeroDivisionError) as cm: while True: self.setAlarm(self.alarm_time) func(*args, **kwargs) finally: self.setAlarm(0) # Issue #12958: The following tests have problems on OS X prior to 10.7 @support.requires_mac_ver(10, 7) def testInterruptedSendTimeout(self): self.checkInterruptedSend(self.serv_conn.send, b"a"*512) @support.requires_mac_ver(10, 7) def testInterruptedSendtoTimeout(self): # Passing an actual address here as Python's wrapper for # sendto() doesn't allow passing a zero-length one; POSIX # requires that the address is ignored since the socket is # connection-mode, however. 
self.checkInterruptedSend(self.serv_conn.sendto, b"a"*512, self.serv_addr) @support.requires_mac_ver(10, 7) @requireAttrs(socket.socket, "sendmsg") def testInterruptedSendmsgTimeout(self): self.checkInterruptedSend(self.serv_conn.sendmsg, [b"a"*512]) class TCPCloserTest(ThreadedTCPSocketTest): def testClose(self): conn, addr = self.serv.accept() conn.close() sd = self.cli read, write, err = select.select([sd], [], [], 1.0) self.assertEqual(read, [sd]) self.assertEqual(sd.recv(1), b'') # Calling close() many times should be safe. conn.close() conn.close() def _testClose(self): self.cli.connect((HOST, self.port)) time.sleep(1.0) class BasicSocketPairTest(SocketPairTest): def __init__(self, methodName='runTest'): SocketPairTest.__init__(self, methodName=methodName) def _check_defaults(self, sock): self.assertIsInstance(sock, socket.socket) if hasattr(socket, 'AF_UNIX'): self.assertEqual(sock.family, socket.AF_UNIX) else: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def _testDefaults(self): self._check_defaults(self.cli) def testDefaults(self): self._check_defaults(self.serv) def testRecv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.send(MSG) def testSend(self): self.serv.send(MSG) def _testSend(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) class NonBlockingTCPTests(ThreadedTCPSocketTest): def __init__(self, methodName='runTest'): self.event = threading.Event() ThreadedTCPSocketTest.__init__(self, methodName=methodName) def assert_sock_timeout(self, sock, timeout): self.assertEqual(self.serv.gettimeout(), timeout) blocking = (timeout != 0.0) self.assertEqual(sock.getblocking(), blocking) if fcntl is not None: # When a Python socket has a non-zero timeout, it's switched # internally to a non-blocking mode. Later, sock.sendall(), # sock.recv(), and other socket operations use a select() call and # handle EWOULDBLOCK/EGAIN on all socket operations. That's how # timeouts are enforced. 
fd_blocking = (timeout is None) flag = fcntl.fcntl(sock, fcntl.F_GETFL, os.O_NONBLOCK) self.assertEqual(not bool(flag & os.O_NONBLOCK), fd_blocking) def testSetBlocking(self): # Test setblocking() and settimeout() methods self.serv.setblocking(True) self.assert_sock_timeout(self.serv, None) self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.settimeout(None) self.assert_sock_timeout(self.serv, None) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) self.serv.settimeout(10) self.assert_sock_timeout(self.serv, 10) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) def _testSetBlocking(self): pass @support.cpython_only def testSetBlocking_overflow(self): # Issue 15989 import _testcapi if _testcapi.UINT_MAX >= _testcapi.ULONG_MAX: self.skipTest('needs UINT_MAX < ULONG_MAX') self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.setblocking(_testcapi.UINT_MAX + 1) self.assert_sock_timeout(self.serv, None) _testSetBlocking_overflow = support.cpython_only(_testSetBlocking) @unittest.skipUnless(hasattr(socket, 'SOCK_NONBLOCK'), 'test needs socket.SOCK_NONBLOCK') @support.requires_linux_version(2, 6, 28) def testInitNonBlocking(self): # create a socket with SOCK_NONBLOCK self.serv.close() self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) self.assert_sock_timeout(self.serv, 0) def _testInitNonBlocking(self): pass def testInheritFlagsBlocking(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must be blocking. with socket_setdefaulttimeout(None): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testInheritFlagsBlocking(self): self.cli.connect((HOST, self.port)) def testInheritFlagsTimeout(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must inherit # the default timeout. 
default_timeout = 20.0 with socket_setdefaulttimeout(default_timeout): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertEqual(conn.gettimeout(), default_timeout) def _testInheritFlagsTimeout(self): self.cli.connect((HOST, self.port)) def testAccept(self): # Testing non-blocking accept self.serv.setblocking(False) # connect() didn't start: non-blocking accept() fails start_time = time.monotonic() with self.assertRaises(BlockingIOError): conn, addr = self.serv.accept() dt = time.monotonic() - start_time self.assertLess(dt, 1.0) self.event.set() read, write, err = select.select([self.serv], [], [], support.LONG_TIMEOUT) if self.serv not in read: self.fail("Error trying to do accept after select.") # connect() completed: non-blocking accept() doesn't block conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testAccept(self): # don't connect before event is set to check # that non-blocking accept() raises BlockingIOError self.event.wait() self.cli.connect((HOST, self.port)) def testRecv(self): # Testing non-blocking recv conn, addr = self.serv.accept() self.addCleanup(conn.close) conn.setblocking(False) # the server didn't send data yet: non-blocking recv() fails with self.assertRaises(BlockingIOError): msg = conn.recv(len(MSG)) self.event.set() read, write, err = select.select([conn], [], [], support.LONG_TIMEOUT) if conn not in read: self.fail("Error during select call to non-blocking socket.") # the server sent data yet: non-blocking recv() doesn't block msg = conn.recv(len(MSG)) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.connect((HOST, self.port)) # don't send anything before event is set to check # that non-blocking recv() raises BlockingIOError self.event.wait() # send data: recv() will no longer block self.cli.sendall(MSG) class FileObjectClassTestCase(SocketConnectedTest): """Unit tests for the object returned by socket.makefile() self.read_file is the io object returned by makefile() on the client connection. You can read from this file to get output from the server. self.write_file is the io object returned by makefile() on the server connection. You can write to this file to send output to the client. """ bufsize = -1 # Use default buffer size encoding = 'utf-8' errors = 'strict' newline = None read_mode = 'rb' read_msg = MSG write_mode = 'wb' write_msg = MSG def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def setUp(self): self.evt1, self.evt2, self.serv_finished, self.cli_finished = [ threading.Event() for i in range(4)] SocketConnectedTest.setUp(self) self.read_file = self.cli_conn.makefile( self.read_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def tearDown(self): self.serv_finished.set() self.read_file.close() self.assertTrue(self.read_file.closed) self.read_file = None SocketConnectedTest.tearDown(self) def clientSetUp(self): SocketConnectedTest.clientSetUp(self) self.write_file = self.serv_conn.makefile( self.write_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def clientTearDown(self): self.cli_finished.set() self.write_file.close() self.assertTrue(self.write_file.closed) self.write_file = None SocketConnectedTest.clientTearDown(self) def testReadAfterTimeout(self): # Issue #7322: A file object must disallow further reads # after a timeout has occurred. 
self.cli_conn.settimeout(1) self.read_file.read(3) # First read raises a timeout self.assertRaises(TimeoutError, self.read_file.read, 1) # Second read is disallowed with self.assertRaises(OSError) as ctx: self.read_file.read(1) self.assertIn("cannot read from timed out object", str(ctx.exception)) def _testReadAfterTimeout(self): self.write_file.write(self.write_msg[0:3]) self.write_file.flush() self.serv_finished.wait() def testSmallRead(self): # Performing small file read test first_seg = self.read_file.read(len(self.read_msg)-3) second_seg = self.read_file.read(3) msg = first_seg + second_seg self.assertEqual(msg, self.read_msg) def _testSmallRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testFullRead(self): # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testFullRead(self): self.write_file.write(self.write_msg) self.write_file.close() def testUnbufferedRead(self): # Performing unbuffered file read test buf = type(self.read_msg)() while 1: char = self.read_file.read(1) if not char: break buf += char self.assertEqual(buf, self.read_msg) def _testUnbufferedRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testReadline(self): # Performing file readline test line = self.read_file.readline() self.assertEqual(line, self.read_msg) def _testReadline(self): self.write_file.write(self.write_msg) self.write_file.flush() def testCloseAfterMakefile(self): # The file returned by makefile should keep the socket open. self.cli_conn.close() # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testCloseAfterMakefile(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileAfterMakefileClose(self): self.read_file.close() msg = self.cli_conn.recv(len(MSG)) if isinstance(self.read_msg, str): msg = msg.decode() self.assertEqual(msg, self.read_msg) def _testMakefileAfterMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testClosedAttr(self): self.assertTrue(not self.read_file.closed) def _testClosedAttr(self): self.assertTrue(not self.write_file.closed) def testAttributes(self): self.assertEqual(self.read_file.mode, self.read_mode) self.assertEqual(self.read_file.name, self.cli_conn.fileno()) def _testAttributes(self): self.assertEqual(self.write_file.mode, self.write_mode) self.assertEqual(self.write_file.name, self.serv_conn.fileno()) def testRealClose(self): self.read_file.close() self.assertRaises(ValueError, self.read_file.fileno) self.cli_conn.close() self.assertRaises(OSError, self.cli_conn.getsockname) def _testRealClose(self): pass class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase): """Repeat the tests from FileObjectClassTestCase with bufsize==0. In this case (and in this case only), it should be possible to create a file object, read a line from it, create another file object, read another line from it, without loss of data in the first file object's buffer. Note that http.client relies on this when reading multiple requests from the same socket.""" bufsize = 0 # Use unbuffered mode def testUnbufferedReadline(self): # Read a line, create a new file object, read another line with it line = self.read_file.readline() # first line self.assertEqual(line, b"A. " + self.write_msg) # first line self.read_file = self.cli_conn.makefile('rb', 0) line = self.read_file.readline() # second line self.assertEqual(line, b"B. 
" + self.write_msg) # second line def _testUnbufferedReadline(self): self.write_file.write(b"A. " + self.write_msg) self.write_file.write(b"B. " + self.write_msg) self.write_file.flush() def testMakefileClose(self): # The file returned by makefile should keep the socket open... self.cli_conn.close() msg = self.cli_conn.recv(1024) self.assertEqual(msg, self.read_msg) # ...until the file is itself closed self.read_file.close() self.assertRaises(OSError, self.cli_conn.recv, 1024) def _testMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileCloseSocketDestroy(self): refcount_before = sys.getrefcount(self.cli_conn) self.read_file.close() refcount_after = sys.getrefcount(self.cli_conn) self.assertEqual(refcount_before - 1, refcount_after) def _testMakefileCloseSocketDestroy(self): pass # Non-blocking ops # NOTE: to set `read_file` as non-blocking, we must call # `cli_conn.setblocking` and vice-versa (see setUp / clientSetUp). def testSmallReadNonBlocking(self): self.cli_conn.setblocking(False) self.assertEqual(self.read_file.readinto(bytearray(10)), None) self.assertEqual(self.read_file.read(len(self.read_msg) - 3), None) self.evt1.set() self.evt2.wait(1.0) first_seg = self.read_file.read(len(self.read_msg) - 3) if first_seg is None: # Data not arrived (can happen under Windows), wait a bit time.sleep(0.5) first_seg = self.read_file.read(len(self.read_msg) - 3) buf = bytearray(10) n = self.read_file.readinto(buf) self.assertEqual(n, 3) msg = first_seg + buf[:n] self.assertEqual(msg, self.read_msg) self.assertEqual(self.read_file.readinto(bytearray(16)), None) self.assertEqual(self.read_file.read(1), None) def _testSmallReadNonBlocking(self): self.evt1.wait(1.0) self.write_file.write(self.write_msg) self.write_file.flush() self.evt2.set() # Avoid closing the socket before the server test has finished, # otherwise system recv() will return 0 instead of EWOULDBLOCK. self.serv_finished.wait(5.0) def testWriteNonBlocking(self): self.cli_finished.wait(5.0) # The client thread can't skip directly - the SkipTest exception # would appear as a failure. if self.serv_skipped: self.skipTest(self.serv_skipped) def _testWriteNonBlocking(self): self.serv_skipped = None self.serv_conn.setblocking(False) # Try to saturate the socket buffer pipe with repeated large writes. BIG = b"x" * support.SOCK_MAX_SIZE LIMIT = 10 # The first write() succeeds since a chunk of data can be buffered n = self.write_file.write(BIG) self.assertGreater(n, 0) for i in range(LIMIT): n = self.write_file.write(BIG) if n is None: # Succeeded break self.assertGreater(n, 0) else: # Let us know that this test didn't manage to establish # the expected conditions. This is not a failure in itself but, # if it happens repeatedly, the test should be fixed. 
self.serv_skipped = "failed to saturate the socket buffer" class LineBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 1 # Default-buffered for reading; line-buffered for writing class SmallBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 2 # Exercise the buffering code class UnicodeReadFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'wb' write_msg = MSG newline = '' class UnicodeWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'rb' read_msg = MSG write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class UnicodeReadWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class NetworkConnectionTest(object): """Prove network connection.""" def clientSetUp(self): # We're inherited below by BasicTCPTest2, which also inherits # BasicTCPTest, which defines self.port referenced below. self.cli = socket.create_connection((HOST, self.port)) self.serv_conn = self.cli class BasicTCPTest2(NetworkConnectionTest, BasicTCPTest): """Tests that NetworkConnection does not break existing TCP functionality. """ class NetworkConnectionNoServer(unittest.TestCase): class MockSocket(socket.socket): def connect(self, *args): raise TimeoutError('timed out') @contextlib.contextmanager def mocked_socket_module(self): """Return a socket which times out on connect""" old_socket = socket.socket socket.socket = self.MockSocket try: yield finally: socket.socket = old_socket @socket_helper.skip_if_tcp_blackhole def test_connect(self): port = socket_helper.find_unused_port() cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(cli.close) with self.assertRaises(OSError) as cm: cli.connect((HOST, port)) self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) @socket_helper.skip_if_tcp_blackhole def test_create_connection(self): # Issue #9792: errors raised by create_connection() should have # a proper errno attribute. port = socket_helper.find_unused_port() with self.assertRaises(OSError) as cm: socket.create_connection((HOST, port)) # Issue #16257: create_connection() calls getaddrinfo() against # 'localhost'. This may result in an IPV6 addr being returned # as well as an IPV4 one: # >>> socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) # >>> [(2, 2, 0, '', ('127.0.0.1', 41230)), # (26, 2, 0, '', ('::1', 41230, 0, 0))] # # create_connection() enumerates through all the addresses returned # and if it doesn't successfully bind to any of them, it propagates # the last exception it encountered. # # On Solaris, ENETUNREACH is returned in this circumstance instead # of ECONNREFUSED. So, if that errno exists, add it to our list of # expected errnos. 
expected_errnos = socket_helper.get_socket_conn_refused_errs() self.assertIn(cm.exception.errno, expected_errnos) def test_create_connection_all_errors(self): port = socket_helper.find_unused_port() try: socket.create_connection((HOST, port), all_errors=True) except ExceptionGroup as e: eg = e else: self.fail('expected connection to fail') self.assertIsInstance(eg, ExceptionGroup) for e in eg.exceptions: self.assertIsInstance(e, OSError) addresses = socket.getaddrinfo( 'localhost', port, 0, socket.SOCK_STREAM) # assert that we got an exception for each address self.assertEqual(len(addresses), len(eg.exceptions)) def test_create_connection_timeout(self): # Issue #9792: create_connection() should not recast timeout errors # as generic socket errors. with self.mocked_socket_module(): try: socket.create_connection((HOST, 1234)) except TimeoutError: pass except OSError as exc: if socket_helper.IPV6_ENABLED or exc.errno != errno.EAFNOSUPPORT: raise else: self.fail('TimeoutError not raised') class NetworkConnectionAttributesTest(SocketTCPTest, ThreadableTest): cli = None def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.source_port = socket_helper.find_unused_port() def clientTearDown(self): if self.cli is not None: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def _justAccept(self): conn, addr = self.serv.accept() conn.close() testFamily = _justAccept def _testFamily(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT) self.addCleanup(self.cli.close) self.assertEqual(self.cli.family, 2) testSourceAddress = _justAccept def _testSourceAddress(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT, source_address=('', self.source_port)) self.addCleanup(self.cli.close) self.assertEqual(self.cli.getsockname()[1], self.source_port) # The port number being used is sufficient to show that the bind() # call happened. 
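    # In these ThreadableTest-derived cases the plain test* method runs in the
    # server thread while the matching _test* method runs in the client
    # thread, so aliasing a test* name to _justAccept makes the server simply
    # accept and close one connection while the client-side _test* method
    # performs the real create_connection() assertions.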
testTimeoutDefault = _justAccept def _testTimeoutDefault(self): # passing no explicit timeout uses socket's global default self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(42) try: self.cli = socket.create_connection((HOST, self.port)) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), 42) testTimeoutNone = _justAccept def _testTimeoutNone(self): # None timeout means the same as sock.settimeout(None) self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(30) try: self.cli = socket.create_connection((HOST, self.port), timeout=None) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), None) testTimeoutValueNamed = _justAccept def _testTimeoutValueNamed(self): self.cli = socket.create_connection((HOST, self.port), timeout=30) self.assertEqual(self.cli.gettimeout(), 30) testTimeoutValueNonamed = _justAccept def _testTimeoutValueNonamed(self): self.cli = socket.create_connection((HOST, self.port), 30) self.addCleanup(self.cli.close) self.assertEqual(self.cli.gettimeout(), 30) class NetworkConnectionBehaviourTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def testInsideTimeout(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) time.sleep(3) conn.send(b"done!") testOutsideTimeout = testInsideTimeout def _testInsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port)) data = sock.recv(5) self.assertEqual(data, b"done!") def _testOutsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port), timeout=1) self.assertRaises(TimeoutError, lambda: sock.recv(5)) class TCPTimeoutTest(SocketTCPTest): def testTCPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.accept() self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (TCP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of error (TCP)") except OSError: ok = True except: self.fail("caught unexpected exception (TCP)") if not ok: self.fail("accept() returned success when we did not expect it") @unittest.skipUnless(hasattr(signal, 'alarm'), 'test needs signal.alarm()') def testInterruptedTimeout(self): # XXX I don't know how to do this test on MSWindows or any other # platform that doesn't support signal.alarm() or os.kill(), though # the bug should have existed on all platforms. self.serv.settimeout(5.0) # must be longer than alarm class Alarm(Exception): pass def alarm_handler(signal, frame): raise Alarm old_alarm = signal.signal(signal.SIGALRM, alarm_handler) try: try: signal.alarm(2) # POSIX allows alarm to be up to 1 second early foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of Alarm") except Alarm: pass except: self.fail("caught other exception instead of Alarm:" " %s(%s):\n%s" % (sys.exc_info()[:2] + (traceback.format_exc(),))) else: self.fail("nothing caught") finally: signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: # no alarm can be pending. Safe to restore old handler. 
signal.signal(signal.SIGALRM, old_alarm) class UDPTimeoutTest(SocketUDPTest): def testUDPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDP)") except OSError: ok = True except: self.fail("caught unexpected exception (UDP)") if not ok: self.fail("recv() returned success when we did not expect it") @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class UDPLITETimeoutTest(SocketUDPLITETest): def testUDPLITETimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDPLITE)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDPLITE)") except OSError: ok = True except: self.fail("caught unexpected exception (UDPLITE)") if not ok: self.fail("recv() returned success when we did not expect it") class TestExceptions(unittest.TestCase): def testExceptionTree(self): self.assertTrue(issubclass(OSError, Exception)) self.assertTrue(issubclass(socket.herror, OSError)) self.assertTrue(issubclass(socket.gaierror, OSError)) self.assertTrue(issubclass(socket.timeout, OSError)) self.assertIs(socket.error, OSError) self.assertIs(socket.timeout, TimeoutError) def test_setblocking_invalidfd(self): # Regression test for issue #28471 sock0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, 0, sock0.fileno()) sock0.close() self.addCleanup(sock.detach) with self.assertRaises(OSError): sock.setblocking(False) @unittest.skipUnless(sys.platform == 'linux', 'Linux specific test') class TestLinuxAbstractNamespace(unittest.TestCase): UNIX_PATH_MAX = 108 def testLinuxAbstractNamespace(self): address = b"\x00python-test-hello\x00\xff" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind(address) s1.listen() with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.connect(s1.getsockname()) with s1.accept()[0] as s3: self.assertEqual(s1.getsockname(), address) self.assertEqual(s2.getpeername(), address) def testMaxName(self): address = b"\x00" + b"h" * (self.UNIX_PATH_MAX - 1) with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(address) self.assertEqual(s.getsockname(), address) def testNameOverflow(self): address = "\x00" + "h" * self.UNIX_PATH_MAX with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: self.assertRaises(OSError, s.bind, address) def testStrName(self): # Check that an abstract name can be passed as a string. s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: s.bind("\x00python\x00test\x00") self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") finally: s.close() def testBytearrayName(self): # Check that an abstract name can be passed as a bytearray. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(bytearray(b"\x00python\x00test\x00")) self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") def testAutobind(self): # Check that binding to an empty string binds to an available address # in the abstract namespace as specified in unix(7) "Autobind feature". 
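# --- Illustrative aside (not part of the vendored CPython test file) ---
# Linux-only sketch of the abstract socket namespace exercised by
# TestLinuxAbstractNamespace above: a name starting with a NUL byte lives in
# the abstract namespace (no filesystem entry, no unlink needed), and binding
# to "" asks the kernel to autobind a b"\0" + five-hex-digit address.
# The socket name below is a placeholder.
import socket

def abstract_listener(name=b"\0demo-abstract-socket"):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.bind(name)                  # abstract: nothing appears on the filesystem
    s.listen()
    return s, s.getsockname()     # getsockname() echoes the abstract name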
abstract_address = b"^\0[0-9a-f]{5}" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind("") self.assertRegex(s1.getsockname(), abstract_address) # Each socket is bound to a different abstract address. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.bind("") self.assertRegex(s2.getsockname(), abstract_address) self.assertNotEqual(s1.getsockname(), s2.getsockname()) @unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX') class TestUnixDomain(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) def tearDown(self): self.sock.close() def encoded(self, path): # Return the given path encoded in the file system encoding, # or skip the test if this is not possible. try: return os.fsencode(path) except UnicodeEncodeError: self.skipTest( "Pathname {0!a} cannot be represented in file " "system encoding {1!r}".format( path, sys.getfilesystemencoding())) def bind(self, sock, path): # Bind the socket try: socket_helper.bind_unix_socket(sock, path) except OSError as e: if str(e) == "AF_UNIX path too long": self.skipTest( "Pathname {0!a} is too long to serve as an AF_UNIX path" .format(path)) else: raise def testUnbound(self): # Issue #30205 (note getsockname() can return None on OS X) self.assertIn(self.sock.getsockname(), ('', None)) def testStrAddr(self): # Test binding to and retrieving a normal string pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testBytesAddr(self): # Test binding to a bytes pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, self.encoded(path)) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testSurrogateescapeBind(self): # Test binding to a valid non-ASCII pathname, with the # non-ASCII bytes supplied using surrogateescape encoding. path = os.path.abspath(os_helper.TESTFN_UNICODE) b = self.encoded(path) self.bind(self.sock, b.decode("ascii", "surrogateescape")) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testUnencodableAddr(self): # Test binding to a pathname that cannot be encoded in the # file system encoding. if os_helper.TESTFN_UNENCODABLE is None: self.skipTest("No unencodable filename available") path = os.path.abspath(os_helper.TESTFN_UNENCODABLE) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) @unittest.skipIf(sys.platform == 'linux', 'Linux specific test') def testEmptyAddress(self): # Test that binding empty address fails. self.assertRaises(OSError, self.sock.bind, "") class BufferIOTest(SocketConnectedTest): """ Test the buffer versions of socket.recv() and socket.send(). 
""" def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecvIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvIntoBytearray(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoBytearray = _testRecvIntoArray def testRecvIntoMemoryview(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoMemoryview = _testRecvIntoArray def testRecvFromIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvFromIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvFromIntoBytearray(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoBytearray = _testRecvFromIntoArray def testRecvFromIntoMemoryview(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoMemoryview = _testRecvFromIntoArray def testRecvFromIntoSmallBuffer(self): # See issue #20246. buf = bytearray(8) self.assertRaises(ValueError, self.cli_conn.recvfrom_into, buf, 1024) def _testRecvFromIntoSmallBuffer(self): self.serv_conn.send(MSG) def testRecvFromIntoEmptyBuffer(self): buf = bytearray() self.cli_conn.recvfrom_into(buf) self.cli_conn.recvfrom_into(buf, 0) _testRecvFromIntoEmptyBuffer = _testRecvFromIntoArray TIPC_STYPE = 2000 TIPC_LOWER = 200 TIPC_UPPER = 210 def isTipcAvailable(): """Check if the TIPC module is loaded The TIPC module is not loaded automatically on Ubuntu and probably other Linux distros. """ if not hasattr(socket, "AF_TIPC"): return False try: f = open("/proc/modules", encoding="utf-8") except (FileNotFoundError, IsADirectoryError, PermissionError): # It's ok if the file does not exist, is a directory or if we # have not the permission to read it. 
return False with f: for line in f: if line.startswith("tipc "): return True return False @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCTest(unittest.TestCase): def testRDM(self): srv = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) cli = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) self.addCleanup(srv.close) self.addCleanup(cli.close) srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) srv.bind(srvaddr) sendaddr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) cli.sendto(MSG, sendaddr) msg, recvaddr = srv.recvfrom(1024) self.assertEqual(cli.getsockname(), recvaddr) self.assertEqual(msg, MSG) @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCThreadableTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName = 'runTest'): unittest.TestCase.__init__(self, methodName = methodName) ThreadableTest.__init__(self) def setUp(self): self.srv = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.srv.close) self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) self.srv.bind(srvaddr) self.srv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.srv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): # There is a hittable race between serverExplicitReady() and the # accept() call; sleep a little while to avoid it, otherwise # we could get an exception time.sleep(0.1) self.cli = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.cli.close) addr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) self.cli.connect(addr) self.cliaddr = self.cli.getsockname() def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) self.assertEqual(self.cliaddr, self.connaddr) def _testStream(self): self.cli.send(MSG) self.cli.close() class ContextManagersTest(ThreadedTCPSocketTest): def _testSocketClass(self): # base test with socket.socket() as sock: self.assertFalse(sock._closed) self.assertTrue(sock._closed) # close inside with block with socket.socket() as sock: sock.close() self.assertTrue(sock._closed) # exception inside with block with socket.socket() as sock: self.assertRaises(OSError, sock.sendall, b'foo') self.assertTrue(sock._closed) def testCreateConnectionBase(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionBase(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: self.assertFalse(sock._closed) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') self.assertTrue(sock._closed) def testCreateConnectionClose(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionClose(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: sock.close() self.assertTrue(sock._closed) self.assertRaises(OSError, sock.sendall, b'foo') class InheritanceTest(unittest.TestCase): @unittest.skipUnless(hasattr(socket, "SOCK_CLOEXEC"), "SOCK_CLOEXEC not defined") @support.requires_linux_version(2, 6, 28) def test_SOCK_CLOEXEC(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC) as s: self.assertEqual(s.type, 
socket.SOCK_STREAM) self.assertFalse(s.get_inheritable()) def test_default_inheritable(self): sock = socket.socket() with sock: self.assertEqual(sock.get_inheritable(), False) def test_dup(self): sock = socket.socket() with sock: newsock = sock.dup() sock.close() with newsock: self.assertEqual(newsock.get_inheritable(), False) def test_set_inheritable(self): sock = socket.socket() with sock: sock.set_inheritable(True) self.assertEqual(sock.get_inheritable(), True) sock.set_inheritable(False) self.assertEqual(sock.get_inheritable(), False) @unittest.skipIf(fcntl is None, "need fcntl") def test_get_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(sock.get_inheritable(), False) # clear FD_CLOEXEC flag flags = fcntl.fcntl(fd, fcntl.F_GETFD) flags &= ~fcntl.FD_CLOEXEC fcntl.fcntl(fd, fcntl.F_SETFD, flags) self.assertEqual(sock.get_inheritable(), True) @unittest.skipIf(fcntl is None, "need fcntl") def test_set_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, fcntl.FD_CLOEXEC) sock.set_inheritable(True) self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, 0) def test_socketpair(self): s1, s2 = socket.socketpair() self.addCleanup(s1.close) self.addCleanup(s2.close) self.assertEqual(s1.get_inheritable(), False) self.assertEqual(s2.get_inheritable(), False) @unittest.skipUnless(hasattr(socket, "SOCK_NONBLOCK"), "SOCK_NONBLOCK not defined") class NonblockConstantTest(unittest.TestCase): def checkNonblock(self, s, nonblock=True, timeout=0.0): if nonblock: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), timeout) self.assertTrue( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) if timeout == 0: # timeout == 0: means that getblocking() must be False. self.assertFalse(s.getblocking()) else: # If timeout > 0, the socket will be in a "blocking" mode # from the standpoint of the Python API. For Python socket # object, "blocking" means that operations like 'sock.recv()' # will block. Internally, file descriptors for # "blocking" Python sockets *with timeouts* are in a # *non-blocking* mode, and 'sock.recv()' uses 'select()' # and handles EWOULDBLOCK/EAGAIN to enforce the timeout. 
self.assertTrue(s.getblocking()) else: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), None) self.assertFalse( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) self.assertTrue(s.getblocking()) @support.requires_linux_version(2, 6, 28) def test_SOCK_NONBLOCK(self): # a lot of it seems silly and redundant, but I wanted to test that # changing back and forth worked ok with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) as s: self.checkNonblock(s) s.setblocking(True) self.checkNonblock(s, nonblock=False) s.setblocking(False) self.checkNonblock(s) s.settimeout(None) self.checkNonblock(s, nonblock=False) s.settimeout(2.0) self.checkNonblock(s, timeout=2.0) s.setblocking(True) self.checkNonblock(s, nonblock=False) # defaulttimeout t = socket.getdefaulttimeout() socket.setdefaulttimeout(0.0) with socket.socket() as s: self.checkNonblock(s) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(2.0) with socket.socket() as s: self.checkNonblock(s, timeout=2.0) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(t) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(multiprocessing, "need multiprocessing") class TestSocketSharing(SocketTCPTest): # This must be classmethod and not staticmethod or multiprocessing # won't be able to bootstrap it. @classmethod def remoteProcessServer(cls, q): # Recreate socket from shared data sdata = q.get() message = q.get() s = socket.fromshare(sdata) s2, c = s.accept() # Send the message s2.sendall(message) s2.close() s.close() def testShare(self): # Transfer the listening server socket to another process # and service it from there. # Create process: q = multiprocessing.Queue() p = multiprocessing.Process(target=self.remoteProcessServer, args=(q,)) p.start() # Get the shared socket data data = self.serv.share(p.pid) # Pass the shared socket to the other process addr = self.serv.getsockname() self.serv.close() q.put(data) # The data that the server will send us message = b"slapmahfro" q.put(message) # Connect s = socket.create_connection(addr) # listen for the data m = [] while True: data = s.recv(100) if not data: break m.append(data) s.close() received = b"".join(m) self.assertEqual(received, message) p.join() def testShareLength(self): data = self.serv.share(os.getpid()) self.assertRaises(ValueError, socket.fromshare, data[:-1]) self.assertRaises(ValueError, socket.fromshare, data+b"foo") def compareSockets(self, org, other): # socket sharing is expected to work only for blocking socket # since the internal python timeout value isn't transferred. self.assertEqual(org.gettimeout(), None) self.assertEqual(org.gettimeout(), other.gettimeout()) self.assertEqual(org.family, other.family) self.assertEqual(org.type, other.type) # If the user specified "0" for proto, then # internally windows will have picked the correct value. # Python introspection on the socket however will still return # 0. For the shared socket, the python value is recreated # from the actual value, so it may not compare correctly. 
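# --- Illustrative aside (not part of the vendored CPython test file) ---
# The three timeout modes described in the checkNonblock() comment above,
# shown on a plain socket: blocking (timeout None), non-blocking (timeout
# 0.0) and "blocking with a timeout" (positive timeout, internally
# non-blocking). Assumes the module default timeout has not been changed.
import socket

s = socket.socket()
assert s.gettimeout() is None and s.getblocking()       # default: blocking

s.setblocking(False)                                     # same as settimeout(0.0)
assert s.gettimeout() == 0.0 and not s.getblocking()

s.settimeout(2.0)                                        # timeout mode
assert s.gettimeout() == 2.0 and s.getblocking()         # still "blocking" to Python
s.close()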
if org.proto != 0: self.assertEqual(org.proto, other.proto) def testShareLocal(self): data = self.serv.share(os.getpid()) s = socket.fromshare(data) try: self.compareSockets(self.serv, s) finally: s.close() def testTypes(self): families = [socket.AF_INET, socket.AF_INET6] types = [socket.SOCK_STREAM, socket.SOCK_DGRAM] for f in families: for t in types: try: source = socket.socket(f, t) except OSError: continue # This combination is not supported try: data = source.share(os.getpid()) shared = socket.fromshare(data) try: self.compareSockets(source, shared) finally: shared.close() finally: source.close() class SendfileUsingSendTest(ThreadedTCPSocketTest): """ Test the send() implementation of socket.sendfile(). """ FILESIZE = (10 * 1024 * 1024) # 10 MiB BUFSIZE = 8192 FILEDATA = b"" TIMEOUT = support.LOOPBACK_TIMEOUT @classmethod def setUpClass(cls): def chunks(total, step): assert total >= step while total > step: yield step total -= step if total: yield total chunk = b"".join([random.choice(string.ascii_letters).encode() for i in range(cls.BUFSIZE)]) with open(os_helper.TESTFN, 'wb') as f: for csize in chunks(cls.FILESIZE, cls.BUFSIZE): f.write(chunk) with open(os_helper.TESTFN, 'rb') as f: cls.FILEDATA = f.read() assert len(cls.FILEDATA) == cls.FILESIZE @classmethod def tearDownClass(cls): os_helper.unlink(os_helper.TESTFN) def accept_conn(self): self.serv.settimeout(support.LONG_TIMEOUT) conn, addr = self.serv.accept() conn.settimeout(self.TIMEOUT) self.addCleanup(conn.close) return conn def recv_data(self, conn): received = [] while True: chunk = conn.recv(self.BUFSIZE) if not chunk: break received.append(chunk) return b''.join(received) def meth_from_sock(self, sock): # Depending on the mixin class being run return either send() # or sendfile() method implementation. 
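# --- Illustrative aside (not part of the vendored CPython test file) ---
# Windows-only sketch of the share()/fromshare() round trip driven by
# TestSocketSharing above: a listening socket is serialized for a target
# process id and reconstructed there. Both ends live in one process here
# purely for illustration.
import os
import socket
import sys

if sys.platform == "win32":
    srv = socket.create_server(("127.0.0.1", 0))
    blob = srv.share(os.getpid())         # bytes blob tied to the target pid
    clone = socket.fromshare(blob)        # rebuild the socket from the blob
    assert clone.family == srv.family and clone.type == srv.type
    clone.close()
    srv.close()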
return getattr(sock, "_sendfile_use_send") # regular file def _testRegularFile(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) def testRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # non regular file def _testNonRegularFile(self): address = self.serv.getsockname() file = io.BytesIO(self.FILEDATA) with socket.create_connection(address) as sock, file as file: sent = sock.sendfile(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) self.assertRaises(socket._GiveupOnSendfile, sock._sendfile_use_sendfile, file) def testNonRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # empty file def _testEmptyFileSend(self): address = self.serv.getsockname() filename = os_helper.TESTFN + "2" with open(filename, 'wb'): self.addCleanup(os_helper.unlink, filename) file = open(filename, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, 0) self.assertEqual(file.tell(), 0) def testEmptyFileSend(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(data, b"") # offset def _testOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file, offset=5000) self.assertEqual(sent, self.FILESIZE - 5000) self.assertEqual(file.tell(), self.FILESIZE) def testOffset(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE - 5000) self.assertEqual(data, self.FILEDATA[5000:]) # count def _testCount(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 5000007 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCount(self): count = 5000007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count small def _testCountSmall(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 1 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCountSmall(self): count = 1 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count + offset def _testCountWithOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address, timeout=2) as sock, file as file: count = 100007 meth = self.meth_from_sock(sock) sent = meth(file, offset=2007, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count + 2007) def testCountWithOffset(self): count = 100007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) 
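# --- Illustrative aside (not part of the vendored CPython test file) ---
# Sketch of the public socket.sendfile() API that the mixin tests above
# drive through the private _sendfile_use_send/_sendfile_use_sendfile
# helpers. The path and address are placeholders; the socket must be
# blocking and the file opened in binary mode.
import socket

def push_file(path, address=("127.0.0.1", 9000)):
    with open(path, "rb") as f, socket.create_connection(address) as conn:
        # Uses os.sendfile() where available, otherwise a send() loop;
        # returns the number of bytes sent.
        return conn.sendfile(f, offset=0, count=None)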
self.assertEqual(data, self.FILEDATA[2007:count+2007]) # non blocking sockets are not supposed to work def _testNonBlocking(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: sock.setblocking(False) meth = self.meth_from_sock(sock) self.assertRaises(ValueError, meth, file) self.assertRaises(ValueError, sock.sendfile, file) def testNonBlocking(self): conn = self.accept_conn() if conn.recv(8192): self.fail('was not supposed to receive any data') # timeout (non-triggered) def _testWithTimeout(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) def testWithTimeout(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # timeout (triggered) def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(os_helper.TESTFN, 'rb') as file: with socket.create_connection(address) as sock: sock.settimeout(0.01) meth = self.meth_from_sock(sock) self.assertRaises(TimeoutError, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) # bpo-45212: the wait here needs to be longer than the client-side timeout (0.01s) time.sleep(1) # errors def _test_errors(self): pass def test_errors(self): with open(os_helper.TESTFN, 'rb') as file: with socket.socket(type=socket.SOCK_DGRAM) as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "SOCK_STREAM", meth, file) with open(os_helper.TESTFN, encoding="utf-8") as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "binary mode", meth, file) with open(os_helper.TESTFN, 'rb') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex(TypeError, "positive integer", meth, file, count='2') self.assertRaisesRegex(TypeError, "positive integer", meth, file, count=0.1) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=0) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=-1) @unittest.skipUnless(hasattr(os, "sendfile"), 'os.sendfile() required for this test.') class SendfileUsingSendfileTest(SendfileUsingSendTest): """ Test the sendfile() implementation of socket.sendfile(). 
""" def meth_from_sock(self, sock): return getattr(sock, "_sendfile_use_sendfile") @unittest.skipUnless(HAVE_SOCKET_ALG, 'AF_ALG required') class LinuxKernelCryptoAPI(unittest.TestCase): # tests for AF_ALG def create_alg(self, typ, name): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) try: sock.bind((typ, name)) except FileNotFoundError as e: # type / algorithm is not available sock.close() raise unittest.SkipTest(str(e), typ, name) else: return sock # bpo-31705: On kernel older than 4.5, sendto() failed with ENOKEY, # at least on ppc64le architecture @support.requires_linux_version(4, 5) def test_sha256(self): expected = bytes.fromhex("ba7816bf8f01cfea414140de5dae2223b00361a396" "177a9cb410ff61f20015ad") with self.create_alg('hash', 'sha256') as algo: op, _ = algo.accept() with op: op.sendall(b"abc") self.assertEqual(op.recv(512), expected) op, _ = algo.accept() with op: op.send(b'a', socket.MSG_MORE) op.send(b'b', socket.MSG_MORE) op.send(b'c', socket.MSG_MORE) op.send(b'') self.assertEqual(op.recv(512), expected) def test_hmac_sha1(self): # gh-109396: In FIPS mode, Linux 6.5 requires a key # of at least 112 bits. Use a key of 152 bits. key = b"Python loves AF_ALG" data = b"what do ya want for nothing?" expected = bytes.fromhex("193dbb43c6297b47ea6277ec0ce67119a3f3aa66") with self.create_alg('hash', 'hmac(sha1)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendall(data) self.assertEqual(op.recv(512), expected) # Although it should work with 3.19 and newer the test blocks on # Ubuntu 15.10 with Kernel 4.2.0-19. @support.requires_linux_version(4, 3) def test_aes_cbc(self): key = bytes.fromhex('06a9214036b8a15b512e03d534120006') iv = bytes.fromhex('3dafba429d9eb430b422da802c9fac41') msg = b"Single block msg" ciphertext = bytes.fromhex('e353779c1079aeb82708942dbe77181a') msglen = len(msg) with self.create_alg('skcipher', 'cbc(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, flags=socket.MSG_MORE) op.sendall(msg) self.assertEqual(op.recv(msglen), ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=iv) self.assertEqual(op.recv(msglen), msg) # long message multiplier = 1024 longmsg = [msg] * multiplier op, _ = algo.accept() with op: op.sendmsg_afalg(longmsg, op=socket.ALG_OP_ENCRYPT, iv=iv) enc = op.recv(msglen * multiplier) self.assertEqual(len(enc), msglen * multiplier) self.assertEqual(enc[:msglen], ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([enc], op=socket.ALG_OP_DECRYPT, iv=iv) dec = op.recv(msglen * multiplier) self.assertEqual(len(dec), msglen * multiplier) self.assertEqual(dec, msg * multiplier) @support.requires_linux_version(4, 9) # see issue29324 def test_aead_aes_gcm(self): key = bytes.fromhex('c939cc13397c1d37de6ae0e1cb7c423c') iv = bytes.fromhex('b3d8cc017cbb89b39e0f67e2') plain = bytes.fromhex('c3b3c41f113a31b73d9a5cd432103069') assoc = bytes.fromhex('24825602bd12a984e0092d3e448eda5f') expected_ct = bytes.fromhex('93fe7d9e9bfd10348a5606e5cafa7354') expected_tag = bytes.fromhex('0032a1dc85f1c9786925a2e71d8272dd') taglen = len(expected_tag) assoclen = len(assoc) with self.create_alg('aead', 'gcm(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, taglen) # send assoc, plain and tag buffer in separate steps op, _ = algo.accept() with op: 
op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen, flags=socket.MSG_MORE) op.sendall(assoc, socket.MSG_MORE) op.sendall(plain) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # now with msg op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg_afalg([msg], op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # create anc data manually pack_uint32 = struct.Struct('I').pack op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg( [msg], ([socket.SOL_ALG, socket.ALG_SET_OP, pack_uint32(socket.ALG_OP_ENCRYPT)], [socket.SOL_ALG, socket.ALG_SET_IV, pack_uint32(len(iv)) + iv], [socket.SOL_ALG, socket.ALG_SET_AEAD_ASSOCLEN, pack_uint32(assoclen)], ) ) res = op.recv(len(msg) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # decrypt and verify op, _ = algo.accept() with op: msg = assoc + expected_ct + expected_tag op.sendmsg_afalg([msg], op=socket.ALG_OP_DECRYPT, iv=iv, assoclen=assoclen) res = op.recv(len(msg) - taglen) self.assertEqual(plain, res[assoclen:]) @support.requires_linux_version(4, 3) # see test_aes_cbc def test_drbg_pr_sha256(self): # deterministic random bit generator, prediction resistance, sha256 with self.create_alg('rng', 'drbg_pr_sha256') as algo: extra_seed = os.urandom(32) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, extra_seed) op, _ = algo.accept() with op: rn = op.recv(32) self.assertEqual(len(rn), 32) def test_sendmsg_afalg_args(self): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) with sock: with self.assertRaises(TypeError): sock.sendmsg_afalg() with self.assertRaises(TypeError): sock.sendmsg_afalg(op=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(1) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=-1) def test_length_restriction(self): # bpo-35050, off-by-one error in length check sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) self.addCleanup(sock.close) # salg_type[14] with self.assertRaises(FileNotFoundError): sock.bind(("t" * 13, "name")) with self.assertRaisesRegex(ValueError, "type too long"): sock.bind(("t" * 14, "name")) # salg_name[64] with self.assertRaises(FileNotFoundError): sock.bind(("type", "n" * 63)) with self.assertRaisesRegex(ValueError, "name too long"): sock.bind(("type", "n" * 64)) @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') class TestMacOSTCPFlags(unittest.TestCase): def test_tcp_keepalive(self): self.assertTrue(socket.TCP_KEEPALIVE) @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") class TestMSWindowsTCPFlags(unittest.TestCase): knownTCPFlags = { # available since long time ago 'TCP_MAXSEG', 'TCP_NODELAY', # available starting with Windows 10 1607 'TCP_FASTOPEN', # available starting with Windows 10 1703 'TCP_KEEPCNT', # available starting with Windows 10 1709 'TCP_KEEPIDLE', 'TCP_KEEPINTVL' } def test_new_tcp_flags(self): provided = [s for s in dir(socket) if s.startswith('TCP')] unknown = [s for s in provided if s not in self.knownTCPFlags] self.assertEqual([], unknown, "New TCP flags were discovered. 
See bpo-32394 for more information") class CreateServerTest(unittest.TestCase): def test_address(self): port = socket_helper.find_unused_port() with socket.create_server(("127.0.0.1", port)) as sock: self.assertEqual(sock.getsockname()[0], "127.0.0.1") self.assertEqual(sock.getsockname()[1], port) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", port), family=socket.AF_INET6) as sock: self.assertEqual(sock.getsockname()[0], "::1") self.assertEqual(sock.getsockname()[1], port) def test_family_and_type(self): with socket.create_server(("127.0.0.1", 0)) as sock: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", 0), family=socket.AF_INET6) as s: self.assertEqual(s.family, socket.AF_INET6) self.assertEqual(sock.type, socket.SOCK_STREAM) def test_reuse_port(self): if not hasattr(socket, "SO_REUSEPORT"): with self.assertRaises(ValueError): socket.create_server(("localhost", 0), reuse_port=True) else: with socket.create_server(("localhost", 0)) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertEqual(opt, 0) with socket.create_server(("localhost", 0), reuse_port=True) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertNotEqual(opt, 0) @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6_only_default(self): with socket.create_server(("::1", 0), family=socket.AF_INET6) as sock: assert sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dualstack_ipv6_family(self): with socket.create_server(("::1", 0), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.assertEqual(sock.family, socket.AF_INET6) class CreateServerFunctionalTest(unittest.TestCase): timeout = support.LOOPBACK_TIMEOUT def echo_server(self, sock): def run(sock): with sock: conn, _ = sock.accept() with conn: event.wait(self.timeout) msg = conn.recv(1024) if not msg: return conn.sendall(msg) event = threading.Event() sock.settimeout(self.timeout) thread = threading.Thread(target=run, args=(sock, )) thread.start() self.addCleanup(thread.join, self.timeout) event.set() def echo_client(self, addr, family): with socket.socket(family=family) as sock: sock.settimeout(self.timeout) sock.connect(addr) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') def test_tcp4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port)) as sock: self.echo_server(sock) self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_tcp6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) # --- dual stack tests @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) 
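# --- Illustrative aside (not part of the vendored CPython test file) ---
# Sketch of the socket.create_server() convenience wrapper exercised by
# CreateServerTest/CreateServerFunctionalTest above: it binds, applies
# sensible reuse defaults, optionally enables a dual-stack IPv6 listener,
# and returns a socket already in listening mode.
import socket

def make_listener(port=0):
    if socket.has_dualstack_ipv6():
        # One socket accepting both IPv4 and IPv6 clients.
        return socket.create_server(("", port), family=socket.AF_INET6,
                                    dualstack_ipv6=True)
    return socket.create_server(("", port))    # plain IPv4 listener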
self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) @requireAttrs(socket, "send_fds") @requireAttrs(socket, "recv_fds") @requireAttrs(socket, "AF_UNIX") class SendRecvFdsTests(unittest.TestCase): def testSendAndRecvFds(self): def close_pipes(pipes): for fd1, fd2 in pipes: os.close(fd1) os.close(fd2) def close_fds(fds): for fd in fds: os.close(fd) # send 10 file descriptors pipes = [os.pipe() for _ in range(10)] self.addCleanup(close_pipes, pipes) fds = [rfd for rfd, wfd in pipes] # use a UNIX socket pair to exchange file descriptors locally sock1, sock2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) with sock1, sock2: socket.send_fds(sock1, [MSG], fds) # request more data and file descriptors than expected msg, fds2, flags, addr = socket.recv_fds(sock2, len(MSG) * 2, len(fds) * 2) self.addCleanup(close_fds, fds2) self.assertEqual(msg, MSG) self.assertEqual(len(fds2), len(fds)) self.assertEqual(flags, 0) # don't test addr # test that file descriptors are connected for index, fds in enumerate(pipes): rfd, wfd = fds os.write(wfd, str(index).encode()) for index, rfd in enumerate(fds2): data = os.read(rfd, 100) self.assertEqual(data, str(index).encode()) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_ssl.py000066400000000000000000006777431471441230600205650ustar00rootroot00000000000000# Test the support for SSL and sockets import sys import unittest import unittest.mock from test import support from test.support import import_helper from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper import array import re import socket import select import struct import time import enum import gc import http.client import os import errno import pprint import urllib.request import threading import traceback import weakref import platform import sysconfig import functools try: import ctypes except ImportError: ctypes = None asyncore = warnings_helper.import_deprecated('asyncore') ssl = import_helper.import_module("ssl") import _ssl from ssl import TLSVersion, _TLSContentType, _TLSMessageType, _TLSAlertType Py_DEBUG = hasattr(sys, 'gettotalrefcount') Py_DEBUG_WIN32 = Py_DEBUG and sys.platform == 'win32' PROTOCOLS = sorted(ssl._PROTOCOL_NAMES) HOST = socket_helper.HOST IS_OPENSSL_3_0_0 = ssl.OPENSSL_VERSION_INFO >= (3, 0, 0) PY_SSL_DEFAULT_CIPHERS = sysconfig.get_config_var('PY_SSL_DEFAULT_CIPHERS') PROTOCOL_TO_TLS_VERSION = {} for proto, ver in ( ("PROTOCOL_SSLv23", "SSLv3"), ("PROTOCOL_TLSv1", "TLSv1"), ("PROTOCOL_TLSv1_1", "TLSv1_1"), ): try: proto = getattr(ssl, proto) ver = getattr(ssl.TLSVersion, ver) except AttributeError: continue PROTOCOL_TO_TLS_VERSION[proto] = ver def data_file(*name): return os.path.join(os.path.dirname(__file__), "certdata", *name) # The custom key and certificate files used in test_ssl are generated # using Lib/test/certdata/make_ssl_certs.py. 
# Other certificates are simply fetched from the internet servers they # are meant to authenticate. CERTFILE = data_file("keycert.pem") BYTES_CERTFILE = os.fsencode(CERTFILE) ONLYCERT = data_file("ssl_cert.pem") ONLYKEY = data_file("ssl_key.pem") BYTES_ONLYCERT = os.fsencode(ONLYCERT) BYTES_ONLYKEY = os.fsencode(ONLYKEY) CERTFILE_PROTECTED = data_file("keycert.passwd.pem") ONLYKEY_PROTECTED = data_file("ssl_key.passwd.pem") KEY_PASSWORD = "somepass" CAPATH = data_file("capath") BYTES_CAPATH = os.fsencode(CAPATH) CAFILE_NEURONIO = data_file("capath", "4e1295a3.0") CAFILE_CACERT = data_file("capath", "5ed36f99.0") CERTFILE_INFO = { 'issuer': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'notAfter': 'Aug 26 14:23:15 2028 GMT', 'notBefore': 'Aug 29 14:23:15 2018 GMT', 'serialNumber': '98A7CF88C74A32ED', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } # empty CRL CRLFILE = data_file("revocation.crl") # Two keys and certs signed by the same CA (for SNI tests) SIGNED_CERTFILE = data_file("keycert3.pem") SIGNED_CERTFILE_HOSTNAME = 'localhost' SIGNED_CERTFILE_INFO = { 'OCSP': ('http://testca.pythontest.net/testca/ocsp/',), 'caIssuers': ('http://testca.pythontest.net/testca/pycacert.cer',), 'crlDistributionPoints': ('http://testca.pythontest.net/testca/revocation.crl',), 'issuer': ((('countryName', 'XY'),), (('organizationName', 'Python Software Foundation CA'),), (('commonName', 'our-ca-server'),)), 'notAfter': 'Oct 28 14:23:16 2037 GMT', 'notBefore': 'Aug 29 14:23:16 2018 GMT', 'serialNumber': 'CB2D80995A69525C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } SIGNED_CERTFILE2 = data_file("keycert4.pem") SIGNED_CERTFILE2_HOSTNAME = 'fakehostname' SIGNED_CERTFILE_ECC = data_file("keycertecc.pem") SIGNED_CERTFILE_ECC_HOSTNAME = 'localhost-ecc' # Same certificate as pycacert.pem, but without extra text in file SIGNING_CA = data_file("capath", "ceff1710.0") # cert with all kinds of subject alt names ALLSANFILE = data_file("allsans.pem") IDNSANSFILE = data_file("idnsans.pem") NOSANFILE = data_file("nosan.pem") NOSAN_HOSTNAME = 'localhost' REMOTE_HOST = "self-signed.pythontest.net" EMPTYCERT = data_file("nullcert.pem") BADCERT = data_file("badcert.pem") NONEXISTINGCERT = data_file("XXXnonexisting.pem") BADKEY = data_file("badkey.pem") NOKIACERT = data_file("nokia.pem") NULLBYTECERT = data_file("nullbytecert.pem") TALOS_INVALID_CRLDP = data_file("talos-2019-0758.pem") DHFILE = data_file("ffdh3072.pem") BYTES_DHFILE = os.fsencode(DHFILE) # Not defined in all versions of OpenSSL OP_NO_COMPRESSION = getattr(ssl, "OP_NO_COMPRESSION", 0) OP_SINGLE_DH_USE = getattr(ssl, "OP_SINGLE_DH_USE", 0) OP_SINGLE_ECDH_USE = getattr(ssl, "OP_SINGLE_ECDH_USE", 0) OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0) OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0) # Ubuntu has patched OpenSSL and changed behavior of security level 2 # see https://bugs.python.org/issue41561#msg389003 def is_ubuntu(): try: # Assume that any references of "ubuntu" implies Ubuntu-like distro # The workaround is not required for 18.04, but doesn't 
hurt either. with open("/etc/os-release", encoding="utf-8") as f: return "ubuntu" in f.read() except FileNotFoundError: return False if is_ubuntu(): def seclevel_workaround(*ctxs): """"Lower security level to '1' and allow all ciphers for TLS 1.0/1""" for ctx in ctxs: if ( hasattr(ctx, "minimum_version") and ctx.minimum_version <= ssl.TLSVersion.TLSv1_1 ): ctx.set_ciphers("@SECLEVEL=1:ALL") else: def seclevel_workaround(*ctxs): pass def has_tls_protocol(protocol): """Check if a TLS protocol is available and enabled :param protocol: enum ssl._SSLMethod member or name :return: bool """ if isinstance(protocol, str): assert protocol.startswith('PROTOCOL_') protocol = getattr(ssl, protocol, None) if protocol is None: return False if protocol in { ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT }: # auto-negotiate protocols are always available return True name = protocol.name return has_tls_version(name[len('PROTOCOL_'):]) @functools.lru_cache def has_tls_version(version): """Check if a TLS/SSL version is enabled :param version: TLS version name or ssl.TLSVersion member :return: bool """ if version == "SSLv2": # never supported and not even in TLSVersion enum return False if isinstance(version, str): version = ssl.TLSVersion.__members__[version] # check compile time flags like ssl.HAS_TLSv1_2 if not getattr(ssl, f'HAS_{version.name}'): return False if IS_OPENSSL_3_0_0 and version < ssl.TLSVersion.TLSv1_2: # bpo43791: 3.0.0-alpha14 fails with TLSV1_ALERT_INTERNAL_ERROR return False # check runtime and dynamic crypto policy settings. A TLS version may # be compiled in but disabled by a policy or config option. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) if ( hasattr(ctx, 'minimum_version') and ctx.minimum_version != ssl.TLSVersion.MINIMUM_SUPPORTED and version < ctx.minimum_version ): return False if ( hasattr(ctx, 'maximum_version') and ctx.maximum_version != ssl.TLSVersion.MAXIMUM_SUPPORTED and version > ctx.maximum_version ): return False return True def requires_tls_version(version): """Decorator to skip tests when a required TLS version is not available :param version: TLS version name or ssl.TLSVersion member :return: """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if not has_tls_version(version): raise unittest.SkipTest(f"{version} is not available.") else: return func(*args, **kw) return wrapper return decorator def handle_error(prefix): exc_format = ' '.join(traceback.format_exception(*sys.exc_info())) if support.verbose: sys.stdout.write(prefix + exc_format) def utc_offset(): #NOTE: ignore issues like #1647654 # local time = utc time + utc offset if time.daylight and time.localtime().tm_isdst > 0: return -time.altzone # seconds return -time.timezone ignore_deprecation = warnings_helper.ignore_warnings( category=DeprecationWarning ) def test_wrap_socket(sock, *, cert_reqs=ssl.CERT_NONE, ca_certs=None, ciphers=None, certfile=None, keyfile=None, **kwargs): if not kwargs.get("server_side"): kwargs["server_hostname"] = SIGNED_CERTFILE_HOSTNAME context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) else: context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) if cert_reqs is not None: if cert_reqs == ssl.CERT_NONE: context.check_hostname = False context.verify_mode = cert_reqs if ca_certs is not None: context.load_verify_locations(ca_certs) if certfile is not None or keyfile is not None: context.load_cert_chain(certfile, keyfile) if ciphers is not None: context.set_ciphers(ciphers) return context.wrap_socket(sock, **kwargs) def 
testing_context(server_cert=SIGNED_CERTFILE, *, server_chain=True): """Create context client_context, server_context, hostname = testing_context() """ if server_cert == SIGNED_CERTFILE: hostname = SIGNED_CERTFILE_HOSTNAME elif server_cert == SIGNED_CERTFILE2: hostname = SIGNED_CERTFILE2_HOSTNAME elif server_cert == NOSANFILE: hostname = NOSAN_HOSTNAME else: raise ValueError(server_cert) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(server_cert) if server_chain: server_context.load_verify_locations(SIGNING_CA) return client_context, server_context, hostname class BasicSocketTests(unittest.TestCase): def test_constants(self): ssl.CERT_NONE ssl.CERT_OPTIONAL ssl.CERT_REQUIRED ssl.OP_CIPHER_SERVER_PREFERENCE ssl.OP_SINGLE_DH_USE ssl.OP_SINGLE_ECDH_USE ssl.OP_NO_COMPRESSION self.assertEqual(ssl.HAS_SNI, True) self.assertEqual(ssl.HAS_ECDH, True) self.assertEqual(ssl.HAS_TLSv1_2, True) self.assertEqual(ssl.HAS_TLSv1_3, True) ssl.OP_NO_SSLv2 ssl.OP_NO_SSLv3 ssl.OP_NO_TLSv1 ssl.OP_NO_TLSv1_3 ssl.OP_NO_TLSv1_1 ssl.OP_NO_TLSv1_2 self.assertEqual(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv23) def test_ssl_types(self): ssl_types = [ _ssl._SSLContext, _ssl._SSLSocket, _ssl.MemoryBIO, _ssl.Certificate, _ssl.SSLSession, _ssl.SSLError, ] for ssl_type in ssl_types: with self.subTest(ssl_type=ssl_type): with self.assertRaisesRegex(TypeError, "immutable type"): ssl_type.value = None support.check_disallow_instantiation(self, _ssl.Certificate) def test_private_init(self): with self.assertRaisesRegex(TypeError, "public constructor"): with socket.socket() as s: ssl.SSLSocket(s) def test_str_for_enums(self): # Make sure that the PROTOCOL_* constants have enum-like string # reprs. 
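# --- Illustrative aside (not part of the vendored CPython test file) ---
# Sketch of the client/server SSLContext pairing that testing_context()
# builds above. The certificate paths are placeholders; in the test suite
# they come from the certdata directory.
import ssl

def make_contexts(ca_file="ca.pem", cert_chain="server-keycert.pem"):
    client = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)    # verifies and checks hostname by default
    client.load_verify_locations(ca_file)

    server = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server.load_cert_chain(cert_chain)                  # key + certificate in one PEM
    return client, server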
proto = ssl.PROTOCOL_TLS_CLIENT self.assertEqual(repr(proto), '<_SSLMethod.PROTOCOL_TLS_CLIENT: %r>' % proto.value) self.assertEqual(str(proto), str(proto.value)) ctx = ssl.SSLContext(proto) self.assertIs(ctx.protocol, proto) def test_random(self): v = ssl.RAND_status() if support.verbose: sys.stdout.write("\n RAND_status is %d (%s)\n" % (v, (v and "sufficient randomness") or "insufficient randomness")) with warnings_helper.check_warnings(): data, is_cryptographic = ssl.RAND_pseudo_bytes(16) self.assertEqual(len(data), 16) self.assertEqual(is_cryptographic, v == 1) if v: data = ssl.RAND_bytes(16) self.assertEqual(len(data), 16) else: self.assertRaises(ssl.SSLError, ssl.RAND_bytes, 16) # negative num is invalid self.assertRaises(ValueError, ssl.RAND_bytes, -5) with warnings_helper.check_warnings(): self.assertRaises(ValueError, ssl.RAND_pseudo_bytes, -5) ssl.RAND_add("this is a random string", 75.0) ssl.RAND_add(b"this is a random bytes object", 75.0) ssl.RAND_add(bytearray(b"this is a random bytearray object"), 75.0) def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate # parsing code self.assertEqual( ssl._ssl._test_decode_cert(CERTFILE), CERTFILE_INFO ) self.assertEqual( ssl._ssl._test_decode_cert(SIGNED_CERTFILE), SIGNED_CERTFILE_INFO ) # Issue #13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subjectAltName'], (('DNS', 'projects.developer.nokia.com'), ('DNS', 'projects.forum.nokia.com')) ) # extra OCSP and AIA fields self.assertEqual(p['OCSP'], ('http://ocsp.verisign.com',)) self.assertEqual(p['caIssuers'], ('http://SVRIntl-G3-aia.verisign.com/SVRIntlG3.cer',)) self.assertEqual(p['crlDistributionPoints'], ('http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl',)) def test_parse_cert_CVE_2019_5010(self): p = ssl._ssl._test_decode_cert(TALOS_INVALID_CRLDP) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual( p, { 'issuer': ( (('countryName', 'UK'),), (('commonName', 'cody-ca'),)), 'notAfter': 'Jun 14 18:00:58 2028 GMT', 'notBefore': 'Jun 18 18:00:58 2018 GMT', 'serialNumber': '02', 'subject': ((('countryName', 'UK'),), (('commonName', 'codenomicon-vm-2.test.lal.cisco.com'),)), 'subjectAltName': ( ('DNS', 'codenomicon-vm-2.test.lal.cisco.com'),), 'version': 3 } ) def test_parse_cert_CVE_2013_4238(self): p = ssl._ssl._test_decode_cert(NULLBYTECERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") subject = ((('countryName', 'US'),), (('stateOrProvinceName', 'Oregon'),), (('localityName', 'Beaverton'),), (('organizationName', 'Python Software Foundation'),), (('organizationalUnitName', 'Python Core Development'),), (('commonName', 'null.python.org\x00example.org'),), (('emailAddress', 'python-dev@python.org'),)) self.assertEqual(p['subject'], subject) self.assertEqual(p['issuer'], subject) if ssl._OPENSSL_API_VERSION >= (0, 9, 8): san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '2001:DB8:0:0:0:0:0:1')) else: # OpenSSL 0.9.7 doesn't support IPv6 addresses in subjectAltName san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 
'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '')) self.assertEqual(p['subjectAltName'], san) def test_parse_all_sans(self): p = ssl._ssl._test_decode_cert(ALLSANFILE) self.assertEqual(p['subjectAltName'], ( ('DNS', 'allsans'), ('othername', ''), ('othername', ''), ('email', 'user@example.org'), ('DNS', 'www.example.org'), ('DirName', ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'dirname example'),))), ('URI', 'https://www.python.org/'), ('IP Address', '127.0.0.1'), ('IP Address', '0:0:0:0:0:0:0:1'), ('Registered ID', '1.2.3.4.5') ) ) def test_DER_to_PEM(self): with open(CAFILE_CACERT, 'r') as f: pem = f.read() d1 = ssl.PEM_cert_to_DER_cert(pem) p2 = ssl.DER_cert_to_PEM_cert(d1) d2 = ssl.PEM_cert_to_DER_cert(p2) self.assertEqual(d1, d2) if not p2.startswith(ssl.PEM_HEADER + '\n'): self.fail("DER-to-PEM didn't include correct header:\n%r\n" % p2) if not p2.endswith('\n' + ssl.PEM_FOOTER + '\n'): self.fail("DER-to-PEM didn't include correct footer:\n%r\n" % p2) def test_openssl_version(self): n = ssl.OPENSSL_VERSION_NUMBER t = ssl.OPENSSL_VERSION_INFO s = ssl.OPENSSL_VERSION self.assertIsInstance(n, int) self.assertIsInstance(t, tuple) self.assertIsInstance(s, str) # Some sanity checks follow # >= 1.1.1 self.assertGreaterEqual(n, 0x10101000) # < 4.0 self.assertLess(n, 0x40000000) major, minor, fix, patch, status = t self.assertGreaterEqual(major, 1) self.assertLess(major, 4) self.assertGreaterEqual(minor, 0) self.assertLess(minor, 256) self.assertGreaterEqual(fix, 0) self.assertLess(fix, 256) self.assertGreaterEqual(patch, 0) self.assertLessEqual(patch, 63) self.assertGreaterEqual(status, 0) self.assertLessEqual(status, 15) libressl_ver = f"LibreSSL {major:d}" if major >= 3: # 3.x uses 0xMNN00PP0L openssl_ver = f"OpenSSL {major:d}.{minor:d}.{patch:d}" else: openssl_ver = f"OpenSSL {major:d}.{minor:d}.{fix:d}" self.assertTrue( s.startswith((openssl_ver, libressl_ver)), (s, t, hex(n)) ) @support.cpython_only def test_refcycle(self): # Issue #7943: an SSL object doesn't create reference cycles with # itself. s = socket.socket(socket.AF_INET) ss = test_wrap_socket(s) wr = weakref.ref(ss) with warnings_helper.check_warnings(("", ResourceWarning)): del ss self.assertEqual(wr(), None) def test_wrapped_unconnected(self): # Methods on an unconnected SSLSocket propagate the original # OSError raise by the underlying socket object. s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertRaises(OSError, ss.recv, 1) self.assertRaises(OSError, ss.recv_into, bytearray(b'x')) self.assertRaises(OSError, ss.recvfrom, 1) self.assertRaises(OSError, ss.recvfrom_into, bytearray(b'x'), 1) self.assertRaises(OSError, ss.send, b'x') self.assertRaises(OSError, ss.sendto, b'x', ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.dup) self.assertRaises(NotImplementedError, ss.sendmsg, [b'x'], (), 0, ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.recvmsg, 100) self.assertRaises(NotImplementedError, ss.recvmsg_into, [bytearray(100)]) def test_timeout(self): # Issue #8524: when creating an SSL socket, the timeout of the # original socket should be retained. 
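# --- Illustrative aside (not part of the vendored CPython test file) ---
# The PEM <-> DER helpers exercised by test_DER_to_PEM above: converting a
# certificate both ways should round-trip byte-for-byte. The path is a
# placeholder for any PEM-encoded certificate.
import ssl

def roundtrip(pem_path="cert.pem"):
    with open(pem_path) as f:
        pem = f.read()
    der = ssl.PEM_cert_to_DER_cert(pem)     # strip header/footer, base64-decode
    pem2 = ssl.DER_cert_to_PEM_cert(der)    # re-wrap with PEM header/footer
    assert ssl.PEM_cert_to_DER_cert(pem2) == der
    return der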
for timeout in (None, 0.0, 5.0): s = socket.socket(socket.AF_INET) s.settimeout(timeout) with test_wrap_socket(s) as ss: self.assertEqual(timeout, ss.gettimeout()) def test_openssl111_deprecations(self): options = [ ssl.OP_NO_TLSv1, ssl.OP_NO_TLSv1_1, ssl.OP_NO_TLSv1_2, ssl.OP_NO_TLSv1_3 ] protocols = [ ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS ] versions = [ ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ] for option in options: with self.subTest(option=option): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.options |= option self.assertEqual( 'ssl.OP_NO_SSL*/ssl.OP_NO_TLS* options are deprecated', str(cm.warning) ) for protocol in protocols: if not has_tls_protocol(protocol): continue with self.subTest(protocol=protocol): with self.assertWarns(DeprecationWarning) as cm: ssl.SSLContext(protocol) self.assertEqual( f'ssl.{protocol.name} is deprecated', str(cm.warning) ) for version in versions: if not has_tls_version(version): continue with self.subTest(version=version): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.minimum_version = version version_text = '%s.%s' % (version.__class__.__name__, version.name) self.assertEqual( f'ssl.{version_text} is deprecated', str(cm.warning) ) @ignore_deprecation def test_errors_sslwrap(self): sock = socket.socket() self.assertRaisesRegex(ValueError, "certfile must be specified", ssl.wrap_socket, sock, keyfile=CERTFILE) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True, certfile="") with ssl.wrap_socket(sock, server_side=True, certfile=CERTFILE) as s: self.assertRaisesRegex(ValueError, "can't connect in server-side mode", s.connect, (HOST, 8080)) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=CERTFILE, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) def bad_cert_test(self, certfile): """Check that trying to use the given client certificate fails""" certfile = os.path.join(os.path.dirname(__file__) or os.curdir, "certdata", certfile) sock = socket.socket() self.addCleanup(sock.close) with self.assertRaises(ssl.SSLError): test_wrap_socket(sock, certfile=certfile) def test_empty_cert(self): """Wrapping with an empty cert file""" self.bad_cert_test("nullcert.pem") def test_malformed_cert(self): """Wrapping with a badly formatted certificate (syntax error)""" self.bad_cert_test("badcert.pem") def test_malformed_key(self): """Wrapping with a badly formatted key (syntax error)""" self.bad_cert_test("badkey.pem") @ignore_deprecation def test_match_hostname(self): def ok(cert, hostname): ssl.match_hostname(cert, hostname) def fail(cert, hostname): self.assertRaises(ssl.CertificateError, ssl.match_hostname, cert, hostname) # -- Hostname matching -- cert = {'subject': ((('commonName', 'example.com'),),)} ok(cert, 'example.com') ok(cert, 'ExAmple.cOm') 
fail(cert, 'www.example.com') fail(cert, '.example.com') fail(cert, 'example.org') fail(cert, 'exampleXcom') cert = {'subject': ((('commonName', '*.a.com'),),)} ok(cert, 'foo.a.com') fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') # only match wildcards when they are the only thing # in left-most segment cert = {'subject': ((('commonName', 'f*.com'),),)} fail(cert, 'foo.com') fail(cert, 'f.com') fail(cert, 'bar.com') fail(cert, 'foo.a.com') fail(cert, 'bar.foo.com') # NULL bytes are bad, CVE-2013-4073 cert = {'subject': ((('commonName', 'null.python.org\x00example.org'),),)} ok(cert, 'null.python.org\x00example.org') # or raise an error? fail(cert, 'example.org') fail(cert, 'null.python.org') # error cases with wildcards cert = {'subject': ((('commonName', '*.*.a.com'),),)} fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') cert = {'subject': ((('commonName', 'a.*.com'),),)} fail(cert, 'a.foo.com') fail(cert, 'a..com') fail(cert, 'a.com') # wildcard doesn't match IDNA prefix 'xn--' idna = 'püthon.python.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} ok(cert, idna) cert = {'subject': ((('commonName', 'x*.python.org'),),)} fail(cert, idna) cert = {'subject': ((('commonName', 'xn--p*.python.org'),),)} fail(cert, idna) # wildcard in first fragment and IDNA A-labels in sequent fragments # are supported. idna = 'www*.pythön.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} fail(cert, 'www.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'www1.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'ftp.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'pythön.org'.encode("idna").decode("ascii")) # Slightly fake real-world example cert = {'notAfter': 'Jun 26 21:41:46 2011 GMT', 'subject': ((('commonName', 'linuxfrz.org'),),), 'subjectAltName': (('DNS', 'linuxfr.org'), ('DNS', 'linuxfr.com'), ('othername', ''))} ok(cert, 'linuxfr.org') ok(cert, 'linuxfr.com') # Not a "DNS" entry fail(cert, '') # When there is a subjectAltName, commonName isn't used fail(cert, 'linuxfrz.org') # A pristine real-world example cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),), (('commonName', 'mail.google.com'),))} ok(cert, 'mail.google.com') fail(cert, 'gmail.com') # Only commonName is considered fail(cert, 'California') # -- IPv4 matching -- cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': (('DNS', 'example.com'), ('IP Address', '10.11.12.13'), ('IP Address', '14.15.16.17'), ('IP Address', '127.0.0.1'))} ok(cert, '10.11.12.13') ok(cert, '14.15.16.17') # socket.inet_ntoa(socket.inet_aton('127.1')) == '127.0.0.1' fail(cert, '127.1') fail(cert, '14.15.16.17 ') fail(cert, '14.15.16.17 extra data') fail(cert, '14.15.16.18') fail(cert, 'example.net') # -- IPv6 matching -- if socket_helper.IPV6_ENABLED: cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': ( ('DNS', 'example.com'), ('IP Address', '2001:0:0:0:0:0:0:CAFE\n'), ('IP Address', '2003:0:0:0:0:0:0:BABA\n'))} ok(cert, '2001::cafe') ok(cert, '2003::baba') fail(cert, '2003::baba ') fail(cert, '2003::baba extra data') fail(cert, '2003::bebe') fail(cert, 'example.net') # -- Miscellaneous -- # Neither commonName nor subjectAltName cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), 
(('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),))} fail(cert, 'mail.google.com') # No DNS entry in subjectAltName but a commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('commonName', 'mail.google.com'),)), 'subjectAltName': (('othername', 'blabla'), )} ok(cert, 'mail.google.com') # No DNS entry subjectAltName and no commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),)), 'subjectAltName': (('othername', 'blabla'),)} fail(cert, 'google.com') # Empty cert / no cert self.assertRaises(ValueError, ssl.match_hostname, None, 'example.com') self.assertRaises(ValueError, ssl.match_hostname, {}, 'example.com') # Issue #17980: avoid denials of service by refusing more than one # wildcard per fragment. cert = {'subject': ((('commonName', 'a*b.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "partial wildcards in leftmost label are not supported"): ssl.match_hostname(cert, 'axxb.example.com') cert = {'subject': ((('commonName', 'www.*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "wildcard can only be present in the leftmost label"): ssl.match_hostname(cert, 'www.sub.example.com') cert = {'subject': ((('commonName', 'a*b*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "too many wildcards"): ssl.match_hostname(cert, 'axxbxxc.example.com') cert = {'subject': ((('commonName', '*'),),)} with self.assertRaisesRegex( ssl.CertificateError, "sole wildcard without additional labels are not support"): ssl.match_hostname(cert, 'host') cert = {'subject': ((('commonName', '*.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, r"hostname 'com' doesn't match '\*.com'"): ssl.match_hostname(cert, 'com') # extra checks for _inet_paton() for invalid in ['1', '', '1.2.3', '256.0.0.1', '127.0.0.1/24']: with self.assertRaises(ValueError): ssl._inet_paton(invalid) for ipaddr in ['127.0.0.1', '192.168.0.1']: self.assertTrue(ssl._inet_paton(ipaddr)) if socket_helper.IPV6_ENABLED: for ipaddr in ['::1', '2001:db8:85a3::8a2e:370:7334']: self.assertTrue(ssl._inet_paton(ipaddr)) def test_server_side(self): # server_hostname doesn't work for server sockets ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with socket.socket() as sock: self.assertRaises(ValueError, ctx.wrap_socket, sock, True, server_hostname="some.hostname") def test_unknown_channel_binding(self): # should raise ValueError for unknown type s = socket.create_server(('127.0.0.1', 0)) c = socket.socket(socket.AF_INET) c.connect(s.getsockname()) with test_wrap_socket(c, do_handshake_on_connect=False) as ss: with self.assertRaises(ValueError): ss.get_channel_binding("unknown-type") s.close() @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): # unconnected should return None for known type s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) # the same for server-side s = socket.socket(socket.AF_INET) with test_wrap_socket(s, server_side=True, certfile=CERTFILE) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) def test_dealloc_warn(self): ss = 
test_wrap_socket(socket.socket(socket.AF_INET)) r = repr(ss) with self.assertWarns(ResourceWarning) as cm: ss = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) def test_get_default_verify_paths(self): paths = ssl.get_default_verify_paths() self.assertEqual(len(paths), 6) self.assertIsInstance(paths, ssl.DefaultVerifyPaths) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE paths = ssl.get_default_verify_paths() self.assertEqual(paths.cafile, CERTFILE) self.assertEqual(paths.capath, CAPATH) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_certificates(self): self.assertTrue(ssl.enum_certificates("CA")) self.assertTrue(ssl.enum_certificates("ROOT")) self.assertRaises(TypeError, ssl.enum_certificates) self.assertRaises(WindowsError, ssl.enum_certificates, "") trust_oids = set() for storename in ("CA", "ROOT"): store = ssl.enum_certificates(storename) self.assertIsInstance(store, list) for element in store: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 3) cert, enc, trust = element self.assertIsInstance(cert, bytes) self.assertIn(enc, {"x509_asn", "pkcs_7_asn"}) self.assertIsInstance(trust, (frozenset, set, bool)) if isinstance(trust, (frozenset, set)): trust_oids.update(trust) serverAuth = "1.3.6.1.5.5.7.3.1" self.assertIn(serverAuth, trust_oids) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_crls(self): self.assertTrue(ssl.enum_crls("CA")) self.assertRaises(TypeError, ssl.enum_crls) self.assertRaises(WindowsError, ssl.enum_crls, "") crls = ssl.enum_crls("CA") self.assertIsInstance(crls, list) for element in crls: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 2) self.assertIsInstance(element[0], bytes) self.assertIn(element[1], {"x509_asn", "pkcs_7_asn"}) def test_asn1object(self): expected = (129, 'serverAuth', 'TLS Web Server Authentication', '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertEqual(val, expected) self.assertEqual(val.nid, 129) self.assertEqual(val.shortname, 'serverAuth') self.assertEqual(val.longname, 'TLS Web Server Authentication') self.assertEqual(val.oid, '1.3.6.1.5.5.7.3.1') self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object, 'serverAuth') val = ssl._ASN1Object.fromnid(129) self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object.fromnid, -1) with self.assertRaisesRegex(ValueError, "unknown NID 100000"): ssl._ASN1Object.fromnid(100000) for i in range(1000): try: obj = ssl._ASN1Object.fromnid(i) except ValueError: pass else: self.assertIsInstance(obj.nid, int) self.assertIsInstance(obj.shortname, str) self.assertIsInstance(obj.longname, str) self.assertIsInstance(obj.oid, (str, type(None))) val = ssl._ASN1Object.fromname('TLS Web Server Authentication') self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertEqual(ssl._ASN1Object.fromname('serverAuth'), expected) self.assertEqual(ssl._ASN1Object.fromname('1.3.6.1.5.5.7.3.1'), expected) with self.assertRaisesRegex(ValueError, "unknown object 'serverauth'"): ssl._ASN1Object.fromname('serverauth') def test_purpose_enum(self): val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertIsInstance(ssl.Purpose.SERVER_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.SERVER_AUTH, val) self.assertEqual(ssl.Purpose.SERVER_AUTH.nid, 129) self.assertEqual(ssl.Purpose.SERVER_AUTH.shortname, 
'serverAuth') self.assertEqual(ssl.Purpose.SERVER_AUTH.oid, '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.2') self.assertIsInstance(ssl.Purpose.CLIENT_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.CLIENT_AUTH, val) self.assertEqual(ssl.Purpose.CLIENT_AUTH.nid, 130) self.assertEqual(ssl.Purpose.CLIENT_AUTH.shortname, 'clientAuth') self.assertEqual(ssl.Purpose.CLIENT_AUTH.oid, '1.3.6.1.5.5.7.3.2') def test_unsupported_dtls(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) with self.assertRaises(NotImplementedError) as cx: test_wrap_socket(s, cert_reqs=ssl.CERT_NONE) self.assertEqual(str(cx.exception), "only stream sockets are supported") ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(NotImplementedError) as cx: ctx.wrap_socket(s) self.assertEqual(str(cx.exception), "only stream sockets are supported") def cert_time_ok(self, timestring, timestamp): self.assertEqual(ssl.cert_time_to_seconds(timestring), timestamp) def cert_time_fail(self, timestring): with self.assertRaises(ValueError): ssl.cert_time_to_seconds(timestring) @unittest.skipUnless(utc_offset(), 'local time needs to be different from UTC') def test_cert_time_to_seconds_timezone(self): # Issue #19940: ssl.cert_time_to_seconds() returns wrong # results if local timezone is not UTC self.cert_time_ok("May 9 00:00:00 2007 GMT", 1178668800.0) self.cert_time_ok("Jan 5 09:34:43 2018 GMT", 1515144883.0) def test_cert_time_to_seconds(self): timestring = "Jan 5 09:34:43 2018 GMT" ts = 1515144883.0 self.cert_time_ok(timestring, ts) # accept keyword parameter, assert its name self.assertEqual(ssl.cert_time_to_seconds(cert_time=timestring), ts) # accept both %e and %d (space or zero generated by strftime) self.cert_time_ok("Jan 05 09:34:43 2018 GMT", ts) # case-insensitive self.cert_time_ok("JaN 5 09:34:43 2018 GmT", ts) self.cert_time_fail("Jan 5 09:34 2018 GMT") # no seconds self.cert_time_fail("Jan 5 09:34:43 2018") # no GMT self.cert_time_fail("Jan 5 09:34:43 2018 UTC") # not GMT timezone self.cert_time_fail("Jan 35 09:34:43 2018 GMT") # invalid day self.cert_time_fail("Jon 5 09:34:43 2018 GMT") # invalid month self.cert_time_fail("Jan 5 24:00:00 2018 GMT") # invalid hour self.cert_time_fail("Jan 5 09:60:43 2018 GMT") # invalid minute newyear_ts = 1230768000.0 # leap seconds self.cert_time_ok("Dec 31 23:59:60 2008 GMT", newyear_ts) # same timestamp self.cert_time_ok("Jan 1 00:00:00 2009 GMT", newyear_ts) self.cert_time_ok("Jan 5 09:34:59 2018 GMT", 1515144899) # allow 60th second (even if it is not a leap second) self.cert_time_ok("Jan 5 09:34:60 2018 GMT", 1515144900) # allow 2nd leap second for compatibility with time.strptime() self.cert_time_ok("Jan 5 09:34:61 2018 GMT", 1515144901) self.cert_time_fail("Jan 5 09:34:62 2018 GMT") # invalid seconds # no special treatment for the special value: # 99991231235959Z (rfc 5280) self.cert_time_ok("Dec 31 23:59:59 9999 GMT", 253402300799.0) @support.run_with_locale('LC_ALL', '') def test_cert_time_to_seconds_locale(self): # `cert_time_to_seconds()` should be locale independent def local_february_name(): return time.strftime('%b', (1, 2, 3, 4, 5, 6, 0, 0, 0)) if local_february_name().lower() == 'feb': self.skipTest("locale-specific month name needs to be " "different from C locale") # locale-independent self.cert_time_ok("Feb 9 00:00:00 2007 GMT", 1170979200.0) self.cert_time_fail(local_february_name() + " 9 00:00:00 2007 GMT") def test_connect_ex_error(self): server = socket.socket(socket.AF_INET) 
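# Bind a port without listening on it, then check that connect_ex() on the # wrapped socket reports an ordinary socket errno rather than raising an # SSL-level error.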
self.addCleanup(server.close) port = socket_helper.bind_port(server) # Reserve port but don't listen s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) rc = s.connect_ex((HOST, port)) # Issue #19919: Windows machines or VMs hosted on Windows # machines sometimes return EWOULDBLOCK. errors = ( errno.ECONNREFUSED, errno.EHOSTUNREACH, errno.ETIMEDOUT, errno.EWOULDBLOCK, ) self.assertIn(rc, errors) def test_read_write_zero(self): # empty reads and writes now work, bpo-42854, bpo-31711 client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.recv(0), b"") self.assertEqual(s.send(b""), 0) class ContextTests(unittest.TestCase): def test_constructor(self): for protocol in PROTOCOLS: if has_tls_protocol(protocol): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.protocol, protocol) with warnings_helper.check_warnings(): ctx = ssl.SSLContext() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertRaises(ValueError, ssl.SSLContext, -1) self.assertRaises(ValueError, ssl.SSLContext, 42) def test_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers("ALL") ctx.set_ciphers("DEFAULT") with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): ctx.set_ciphers("^$:,;?*'dorothyx") @unittest.skipUnless(PY_SSL_DEFAULT_CIPHERS == 1, "Test applies only to Python default ciphers") def test_python_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ciphers = ctx.get_ciphers() for suite in ciphers: name = suite['name'] self.assertNotIn("PSK", name) self.assertNotIn("SRP", name) self.assertNotIn("MD5", name) self.assertNotIn("RC4", name) self.assertNotIn("3DES", name) def test_get_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers('AESGCM') names = set(d['name'] for d in ctx.get_ciphers()) expected = { 'AES128-GCM-SHA256', 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'DHE-RSA-AES128-GCM-SHA256', 'AES256-GCM-SHA384', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'DHE-RSA-AES256-GCM-SHA384', } intersection = names.intersection(expected) self.assertGreaterEqual( len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}" ) def test_options(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # OP_ALL | OP_NO_SSLv2 | OP_NO_SSLv3 is the default value default = (ssl.OP_ALL | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) # SSLContext also enables these by default default |= (OP_NO_COMPRESSION | OP_CIPHER_SERVER_PREFERENCE | OP_SINGLE_DH_USE | OP_SINGLE_ECDH_USE | OP_ENABLE_MIDDLEBOX_COMPAT) self.assertEqual(default, ctx.options) with warnings_helper.check_warnings(): ctx.options |= ssl.OP_NO_TLSv1 self.assertEqual(default | ssl.OP_NO_TLSv1, ctx.options) with warnings_helper.check_warnings(): ctx.options = (ctx.options & ~ssl.OP_NO_TLSv1) self.assertEqual(default, ctx.options) ctx.options = 0 # Ubuntu has OP_NO_SSLv3 forced on by default self.assertEqual(0, ctx.options & ~ssl.OP_NO_SSLv3) def test_verify_mode_protocol(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) # Default value self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) ctx.verify_mode = ssl.CERT_OPTIONAL self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) ctx.verify_mode = ssl.CERT_REQUIRED 
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) with self.assertRaises(TypeError): ctx.verify_mode = None with self.assertRaises(ValueError): ctx.verify_mode = 42 ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) def test_hostname_checks_common_name(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.hostname_checks_common_name) if ssl.HAS_NEVER_CHECK_COMMON_NAME: ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = False self.assertFalse(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) else: with self.assertRaises(AttributeError): ctx.hostname_checks_common_name = True @ignore_deprecation def test_min_max_version(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # OpenSSL default is MINIMUM_SUPPORTED, however some vendors like # Fedora override the setting to TLS 1.0. minimum_range = { # stock OpenSSL ssl.TLSVersion.MINIMUM_SUPPORTED, # Fedora 29 uses TLS 1.0 by default ssl.TLSVersion.TLSv1, # RHEL 8 uses TLS 1.2 by default ssl.TLSVersion.TLSv1_2 } maximum_range = { # stock OpenSSL ssl.TLSVersion.MAXIMUM_SUPPORTED, # Fedora 32 uses TLS 1.3 by default ssl.TLSVersion.TLSv1_3 } self.assertIn( ctx.minimum_version, minimum_range ) self.assertIn( ctx.maximum_version, maximum_range ) ctx.minimum_version = ssl.TLSVersion.TLSv1_1 ctx.maximum_version = ssl.TLSVersion.TLSv1_2 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.TLSv1_1 ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1_2 ) ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED ctx.maximum_version = ssl.TLSVersion.TLSv1 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.MINIMUM_SUPPORTED ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1 ) ctx.maximum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED self.assertIn( ctx.maximum_version, {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3} ) ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertIn( ctx.minimum_version, {ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3} ) with self.assertRaises(ValueError): ctx.minimum_version = 42 if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1) self.assertIn( ctx.minimum_version, minimum_range ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) with self.assertRaises(ValueError): ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED with self.assertRaises(ValueError): ctx.maximum_version = ssl.TLSVersion.TLSv1 @unittest.skipUnless( hasattr(ssl.SSLContext, 'security_level'), "requires OpenSSL >= 1.1.0" ) def test_security_level(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # The default security callback allows for levels between 0-5 # with OpenSSL defaulting to 1, however some vendors override the # default value (e.g. 
Debian defaults to 2) security_level_range = { 0, 1, # OpenSSL default 2, # Debian 3, 4, 5, } self.assertIn(ctx.security_level, security_level_range) def test_verify_flags(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # default value tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT | tf) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_CHAIN self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_CHAIN) ctx.verify_flags = ssl.VERIFY_DEFAULT self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT) ctx.verify_flags = ssl.VERIFY_ALLOW_PROXY_CERTS self.assertEqual(ctx.verify_flags, ssl.VERIFY_ALLOW_PROXY_CERTS) # supports any value ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT) with self.assertRaises(TypeError): ctx.verify_flags = None def test_load_cert_chain(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Combined key and cert in a single file ctx.load_cert_chain(CERTFILE, keyfile=None) ctx.load_cert_chain(CERTFILE, keyfile=CERTFILE) self.assertRaises(TypeError, ctx.load_cert_chain, keyfile=CERTFILE) with self.assertRaises(OSError) as cm: ctx.load_cert_chain(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(BADCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(EMPTYCERT) # Separate key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_cert_chain(ONLYCERT, ONLYKEY) ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) ctx.load_cert_chain(certfile=BYTES_ONLYCERT, keyfile=BYTES_ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(certfile=ONLYKEY, keyfile=ONLYCERT) # Mismatching key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with self.assertRaisesRegex(ssl.SSLError, "key values mismatch"): ctx.load_cert_chain(CAFILE_CACERT, ONLYKEY) # Password protected key and cert ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD) ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD.encode()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=bytearray(KEY_PASSWORD.encode())) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD.encode()) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, bytearray(KEY_PASSWORD.encode())) with self.assertRaisesRegex(TypeError, "should be a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=True) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password="badpass") with self.assertRaisesRegex(ValueError, "cannot be longer"): # openssl has a fixed limit on the password buffer. # PEM_BUFSIZE is generally set to 1kb. # Return a string larger than this. 
ctx.load_cert_chain(CERTFILE_PROTECTED, password=b'a' * 102400) # Password callback def getpass_unicode(): return KEY_PASSWORD def getpass_bytes(): return KEY_PASSWORD.encode() def getpass_bytearray(): return bytearray(KEY_PASSWORD.encode()) def getpass_badpass(): return "badpass" def getpass_huge(): return b'a' * (1024 * 1024) def getpass_bad_type(): return 9 def getpass_exception(): raise Exception('getpass error') class GetPassCallable: def __call__(self): return KEY_PASSWORD def getpass(self): return KEY_PASSWORD ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_unicode) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytes) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytearray) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable().getpass) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_badpass) with self.assertRaisesRegex(ValueError, "cannot be longer"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_huge) with self.assertRaisesRegex(TypeError, "must return a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bad_type) with self.assertRaisesRegex(Exception, "getpass error"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_exception) # Make sure the password function isn't called if it isn't needed ctx.load_cert_chain(CERTFILE, password=getpass_exception) def test_load_verify_locations(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_verify_locations(CERTFILE) ctx.load_verify_locations(cafile=CERTFILE, capath=None) ctx.load_verify_locations(BYTES_CERTFILE) ctx.load_verify_locations(cafile=BYTES_CERTFILE, capath=None) self.assertRaises(TypeError, ctx.load_verify_locations) self.assertRaises(TypeError, ctx.load_verify_locations, None, None, None) with self.assertRaises(OSError) as cm: ctx.load_verify_locations(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_verify_locations(BADCERT) ctx.load_verify_locations(CERTFILE, CAPATH) ctx.load_verify_locations(CERTFILE, capath=BYTES_CAPATH) # Issue #10989: crash if the second argument type is invalid self.assertRaises(TypeError, ctx.load_verify_locations, None, True) def test_load_verify_cadata(self): # test cadata with open(CAFILE_CACERT) as f: cacert_pem = f.read() cacert_der = ssl.PEM_cert_to_DER_cert(cacert_pem) with open(CAFILE_NEURONIO) as f: neuronio_pem = f.read() neuronio_der = ssl.PEM_cert_to_DER_cert(neuronio_pem) # test PEM ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 0) ctx.load_verify_locations(cadata=cacert_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 1) ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = "\n".join((cacert_pem, neuronio_pem)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # with junk around the certs ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = ["head", cacert_pem, "other", neuronio_pem, "again", neuronio_pem, "tail"] ctx.load_verify_locations(cadata="\n".join(combined)) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # test DER ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=cacert_der) ctx.load_verify_locations(cadata=neuronio_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=cacert_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = b"".join((cacert_der, neuronio_der)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # error cases ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_verify_locations, cadata=object) with self.assertRaisesRegex( ssl.SSLError, "no start line: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata="broken") with self.assertRaisesRegex( ssl.SSLError, "not enough data: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata=b"broken") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_load_dh_params(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_dh_params(DHFILE) if os.name != 'nt': ctx.load_dh_params(BYTES_DHFILE) self.assertRaises(TypeError, ctx.load_dh_params) self.assertRaises(TypeError, ctx.load_dh_params, None) with self.assertRaises(FileNotFoundError) as cm: ctx.load_dh_params(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) def test_session_stats(self): for proto in {ssl.PROTOCOL_TLS_CLIENT, ssl.PROTOCOL_TLS_SERVER}: ctx = ssl.SSLContext(proto) self.assertEqual(ctx.session_stats(), { 'number': 0, 'connect': 0, 'connect_good': 0, 'connect_renegotiate': 0, 'accept': 0, 'accept_good': 0, 'accept_renegotiate': 0, 'hits': 0, 'misses': 0, 'timeouts': 0, 'cache_full': 0, }) def test_set_default_verify_paths(self): # There's not much we can do to test that it acts as expected, # so just check it doesn't crash or raise an exception. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_default_verify_paths() @unittest.skipUnless(ssl.HAS_ECDH, "ECDH disabled on this OpenSSL build") def test_set_ecdh_curve(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.set_ecdh_curve("prime256v1") ctx.set_ecdh_curve(b"prime256v1") self.assertRaises(TypeError, ctx.set_ecdh_curve) self.assertRaises(TypeError, ctx.set_ecdh_curve, None) self.assertRaises(ValueError, ctx.set_ecdh_curve, "foo") self.assertRaises(ValueError, ctx.set_ecdh_curve, b"foo") def test_sni_callback(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # set_servername_callback expects a callable, or None self.assertRaises(TypeError, ctx.set_servername_callback) self.assertRaises(TypeError, ctx.set_servername_callback, 4) self.assertRaises(TypeError, ctx.set_servername_callback, "") self.assertRaises(TypeError, ctx.set_servername_callback, ctx) def dummycallback(sock, servername, ctx): pass ctx.set_servername_callback(None) ctx.set_servername_callback(dummycallback) def test_sni_callback_refcycle(self): # Reference cycles through the servername callback are detected # and cleared. 
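# The callback below deliberately closes over the context via its default # argument; once both references are dropped, gc.collect() must be able to # free the context, which is verified through a weakref.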
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def dummycallback(sock, servername, ctx, cycle=ctx): pass ctx.set_servername_callback(dummycallback) wr = weakref.ref(ctx) del ctx, dummycallback gc.collect() self.assertIs(wr(), None) def test_cert_store_stats(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_cert_chain(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 1}) ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 1, 'crl': 0, 'x509': 2}) def test_get_ca_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.get_ca_certs(), []) # CERTFILE is not flagged as X509v3 Basic Constraints: CA:TRUE ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.get_ca_certs(), []) # but CAFILE_CACERT is a CA cert ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.get_ca_certs(), [{'issuer': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'notAfter': 'Mar 29 12:29:49 2033 GMT', 'notBefore': 'Mar 30 12:29:49 2003 GMT', 'serialNumber': '00', 'crlDistributionPoints': ('https://www.cacert.org/revoke.crl',), 'subject': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'version': 3}]) with open(CAFILE_CACERT) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) self.assertEqual(ctx.get_ca_certs(True), [der]) def test_load_default_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.SERVER_AUTH) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_default_certs, None) self.assertRaises(TypeError, ctx.load_default_certs, 'SERVER_AUTH') @unittest.skipIf(sys.platform == "win32", "not-Windows specific") def test_load_default_certs_env(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() self.assertEqual(ctx.cert_store_stats(), {"crl": 0, "x509": 1, "x509_ca": 0}) @unittest.skipUnless(sys.platform == "win32", "Windows specific") @unittest.skipIf(hasattr(sys, "gettotalrefcount"), "Debug build does not share environment between CRTs") def test_load_default_certs_env_windows(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() stats = ctx.cert_store_stats() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() stats["x509"] += 1 self.assertEqual(ctx.cert_store_stats(), stats) def _assert_context_options(self, ctx): self.assertEqual(ctx.options & ssl.OP_NO_SSLv2, ssl.OP_NO_SSLv2) if OP_NO_COMPRESSION != 0: self.assertEqual(ctx.options & OP_NO_COMPRESSION, OP_NO_COMPRESSION) if OP_SINGLE_DH_USE != 0: self.assertEqual(ctx.options & OP_SINGLE_DH_USE, OP_SINGLE_DH_USE) if OP_SINGLE_ECDH_USE != 0: 
self.assertEqual(ctx.options & OP_SINGLE_ECDH_USE, OP_SINGLE_ECDH_USE) if OP_CIPHER_SERVER_PREFERENCE != 0: self.assertEqual(ctx.options & OP_CIPHER_SERVER_PREFERENCE, OP_CIPHER_SERVER_PREFERENCE) def test_create_default_context(self): ctx = ssl.create_default_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) with open(SIGNING_CA) as f: cadata = f.read() ctx = ssl.create_default_context(cafile=SIGNING_CA, capath=CAPATH, cadata=cadata) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self._assert_context_options(ctx) ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test__create_stdlib_context(self): ctx = ssl._create_stdlib_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) self._assert_context_options(ctx) if has_tls_protocol(ssl.PROTOCOL_TLSv1): with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context( ssl.PROTOCOL_TLSv1_2, cert_reqs=ssl.CERT_REQUIRED, check_hostname=True ) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1_2) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) ctx = ssl._create_stdlib_context(purpose=ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test_check_hostname(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set CERT_REQUIRED ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_REQUIRED self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # Changing verify_mode does not affect check_hostname ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_OPTIONAL ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # keep CERT_OPTIONAL ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # Cannot set CERT_NONE with check_hostname enabled with self.assertRaises(ValueError): ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_client_server(self): # PROTOCOL_TLS_CLIENT has sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) 
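# Client contexts also require certificate verification by default; the # server-side context checked next disables both hostname checking and # verification, since servers normally do not authenticate their clients.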
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # PROTOCOL_TLS_SERVER has different but also sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_custom_class(self): class MySSLSocket(ssl.SSLSocket): pass class MySSLObject(ssl.SSLObject): pass ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.sslsocket_class = MySSLSocket ctx.sslobject_class = MySSLObject with ctx.wrap_socket(socket.socket(), server_side=True) as sock: self.assertIsInstance(sock, MySSLSocket) obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_side=True) self.assertIsInstance(obj, MySSLObject) def test_num_tickest(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.num_tickets, 2) ctx.num_tickets = 1 self.assertEqual(ctx.num_tickets, 1) ctx.num_tickets = 0 self.assertEqual(ctx.num_tickets, 0) with self.assertRaises(ValueError): ctx.num_tickets = -1 with self.assertRaises(TypeError): ctx.num_tickets = None ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.num_tickets, 2) with self.assertRaises(ValueError): ctx.num_tickets = 1 class SSLErrorTests(unittest.TestCase): def test_str(self): # The str() of a SSLError doesn't include the errno e = ssl.SSLError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) # Same for a subclass e = ssl.SSLZeroReturnError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_lib_reason(self): # Test the library and reason attributes ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) self.assertEqual(cm.exception.library, 'PEM') self.assertEqual(cm.exception.reason, 'NO_START_LINE') s = str(cm.exception) self.assertTrue(s.startswith("[PEM: NO_START_LINE] no start line"), s) def test_subclass(self): # Check that the appropriate SSLError subclass is raised # (this only tests one of them) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with socket.create_server(("127.0.0.1", 0)) as s: c = socket.create_connection(s.getsockname()) c.setblocking(False) with ctx.wrap_socket(c, False, do_handshake_on_connect=False) as c: with self.assertRaises(ssl.SSLWantReadError) as cm: c.do_handshake() s = str(cm.exception) self.assertTrue(s.startswith("The operation did not complete (read)"), s) # For compatibility self.assertEqual(cm.exception.errno, ssl.SSL_ERROR_WANT_READ) def test_bad_server_hostname(self): ctx = ssl.create_default_context() with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="") with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname=".example.org") with self.assertRaises(TypeError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="example.org\x00evil.com") class MemoryBIOTests(unittest.TestCase): def test_read_write(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') self.assertEqual(bio.read(), b'') bio.write(b'foo') bio.write(b'bar') self.assertEqual(bio.read(), b'foobar') self.assertEqual(bio.read(), b'') bio.write(b'baz') self.assertEqual(bio.read(2), b'ba') self.assertEqual(bio.read(1), b'z') self.assertEqual(bio.read(1), b'') def test_eof(self): bio = ssl.MemoryBIO() self.assertFalse(bio.eof) self.assertEqual(bio.read(), b'') 
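# eof only becomes true once write_eof() has been called and every buffered # byte has been drained by read(); until then it stays false.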
self.assertFalse(bio.eof) bio.write(b'foo') self.assertFalse(bio.eof) bio.write_eof() self.assertFalse(bio.eof) self.assertEqual(bio.read(2), b'fo') self.assertFalse(bio.eof) self.assertEqual(bio.read(1), b'o') self.assertTrue(bio.eof) self.assertEqual(bio.read(), b'') self.assertTrue(bio.eof) def test_pending(self): bio = ssl.MemoryBIO() self.assertEqual(bio.pending, 0) bio.write(b'foo') self.assertEqual(bio.pending, 3) for i in range(3): bio.read(1) self.assertEqual(bio.pending, 3-i-1) for i in range(3): bio.write(b'x') self.assertEqual(bio.pending, i+1) bio.read() self.assertEqual(bio.pending, 0) def test_buffer_types(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') bio.write(bytearray(b'bar')) self.assertEqual(bio.read(), b'bar') bio.write(memoryview(b'baz')) self.assertEqual(bio.read(), b'baz') m = memoryview(bytearray(b'noncontig')) noncontig_writable = m[::-2] with self.assertRaises(BufferError): bio.write(memoryview(noncontig_writable)) def test_error_types(self): bio = ssl.MemoryBIO() self.assertRaises(TypeError, bio.write, 'foo') self.assertRaises(TypeError, bio.write, None) self.assertRaises(TypeError, bio.write, True) self.assertRaises(TypeError, bio.write, 1) class SSLObjectTests(unittest.TestCase): def test_private_init(self): bio = ssl.MemoryBIO() with self.assertRaisesRegex(TypeError, "public constructor"): ssl.SSLObject(bio, bio) def test_unwrap(self): client_ctx, server_ctx, hostname = testing_context() c_in = ssl.MemoryBIO() c_out = ssl.MemoryBIO() s_in = ssl.MemoryBIO() s_out = ssl.MemoryBIO() client = client_ctx.wrap_bio(c_in, c_out, server_hostname=hostname) server = server_ctx.wrap_bio(s_in, s_out, server_side=True) # Loop on the handshake for a bit to get it settled for _ in range(5): try: client.do_handshake() except ssl.SSLWantReadError: pass if c_out.pending: s_in.write(c_out.read()) try: server.do_handshake() except ssl.SSLWantReadError: pass if s_out.pending: c_in.write(s_out.read()) # Now the handshakes should be complete (don't raise WantReadError) client.do_handshake() server.do_handshake() # Now if we unwrap one side unilaterally, it should send close-notify # and raise WantReadError: with self.assertRaises(ssl.SSLWantReadError): client.unwrap() # But server.unwrap() does not raise, because it reads the client's # close-notify: s_in.write(c_out.read()) server.unwrap() # And now that the client gets the server's close-notify, it doesn't # raise either. c_in.write(s_out.read()) client.unwrap() class SimpleBackgroundTests(unittest.TestCase): """Tests that connect to a simple server running in the background""" def setUp(self): self.server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.server_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=self.server_context) self.enterContext(server) self.server_addr = (HOST, server.port) def test_connect(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) self.assertFalse(s.server_side) # this should succeed because we specify the root cert with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) as s: s.connect(self.server_addr) self.assertTrue(s.getpeercert()) self.assertFalse(s.server_side) def test_connect_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. 
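# With CERT_REQUIRED set but no CA certificates loaded, the handshake is # expected to fail with "certificate verify failed".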
s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_ex(self): # Issue #11326: check connect_ex() implementation s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) self.addCleanup(s.close) self.assertEqual(0, s.connect_ex(self.server_addr)) self.assertTrue(s.getpeercert()) def test_non_blocking_connect_ex(self): # Issue #11326: non-blocking connect_ex() should allow handshake # to proceed after the socket gets ready. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA, do_handshake_on_connect=False) self.addCleanup(s.close) s.setblocking(False) rc = s.connect_ex(self.server_addr) # EWOULDBLOCK under Windows, EINPROGRESS elsewhere self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) # Wait for connect to finish select.select([], [s], [], 5.0) # Non-blocking handshake while True: try: s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], [], 5.0) except ssl.SSLWantWriteError: select.select([], [s], [], 5.0) # SSL established self.assertTrue(s.getpeercert()) def test_connect_with_context(self): # Same as test_connect, but with a separately created context ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) # Same with a server hostname with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname="dummy") as s: s.connect(self.server_addr) ctx.verify_mode = ssl.CERT_REQUIRED # This should succeed because we specify the root cert ctx.load_verify_locations(SIGNING_CA) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_with_context_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) s = ctx.wrap_socket( socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME ) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_capath(self): # Verify server certificates using the `capath` argument # NOTE: the subject hashing algorithm has been changed between # OpenSSL 0.9.8n and 1.0.0, as a result the capath directory must # contain both versions of each certificate (same content, different # filename) for this test to be portable across OpenSSL releases. 
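# (OpenSSL resolves CA certificates in a capath directory through files # named after the subject hash, e.g. <hash>.0, which is why both hash # variants need to be present.)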
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # Same with a bytes `capath` argument ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=BYTES_CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_cadata(self): with open(SIGNING_CA) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=pem) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # same with DER ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=der) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't # delay closing the underlying "real socket" (here tested with its # file descriptor, hence skipping the test under Windows). ss = test_wrap_socket(socket.socket(socket.AF_INET)) ss.connect(self.server_addr) fd = ss.fileno() f = ss.makefile() f.close() # The fd is still open os.read(fd, 0) # Closing the SSL socket should close the fd too ss.close() gc.collect() with self.assertRaises(OSError) as e: os.read(fd, 0) self.assertEqual(e.exception.errno, errno.EBADF) def test_non_blocking_handshake(self): s = socket.socket(socket.AF_INET) s.connect(self.server_addr) s.setblocking(False) s = test_wrap_socket(s, cert_reqs=ssl.CERT_NONE, do_handshake_on_connect=False) self.addCleanup(s.close) count = 0 while True: try: count += 1 s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], []) except ssl.SSLWantWriteError: select.select([], [s], []) if support.verbose: sys.stdout.write("\nNeeded %d calls to do_handshake() to establish session.\n" % count) def test_get_server_certificate(self): _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) def test_get_server_certificate_sni(self): host, port = self.server_addr server_names = [] # We store servername_cb arguments to make sure they match the host def servername_cb(ssl_sock, server_name, initial_context): server_names.append(server_name) self.server_context.set_servername_callback(servername_cb) pem = ssl.get_server_certificate((host, port)) if not pem: self.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=SIGNING_CA) if not pem: self.fail("No server certificate on %s:%s!" 
% (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port, pem)) self.assertEqual(server_names, [host, host]) def test_get_server_certificate_fail(self): # Connection failure crashes ThreadedEchoServer, so run this in an # independent test method _test_get_server_certificate_fail(self, *self.server_addr) def test_get_server_certificate_timeout(self): def servername_cb(ssl_sock, server_name, initial_context): time.sleep(0.2) self.server_context.set_servername_callback(servername_cb) with self.assertRaises(socket.timeout): ssl.get_server_certificate(self.server_addr, ca_certs=SIGNING_CA, timeout=0.1) def test_ciphers(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="ALL") as s: s.connect(self.server_addr) with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="DEFAULT") as s: s.connect(self.server_addr) # Error checking can happen at instantiation or when connecting with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): with socket.socket(socket.AF_INET) as sock: s = test_wrap_socket(sock, cert_reqs=ssl.CERT_NONE, ciphers="^$:,;?*'dorothyx") s.connect(self.server_addr) def test_get_ca_certs_capath(self): # capath certs are loaded on request ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) self.assertEqual(ctx.get_ca_certs(), []) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname='localhost') as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) self.assertEqual(len(ctx.get_ca_certs()), 1) def test_context_setget(self): # Check that the context of a connected socket can be replaced. ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx1.load_verify_locations(capath=CAPATH) ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx2.load_verify_locations(capath=CAPATH) s = socket.socket(socket.AF_INET) with ctx1.wrap_socket(s, server_hostname='localhost') as ss: ss.connect(self.server_addr) self.assertIs(ss.context, ctx1) self.assertIs(ss._sslobj.context, ctx1) ss.context = ctx2 self.assertIs(ss.context, ctx2) self.assertIs(ss._sslobj.context, ctx2) def ssl_io_loop(self, sock, incoming, outgoing, func, *args, **kwargs): # A simple IO loop. Call func(*args) depending on the error we get # (WANT_READ or WANT_WRITE) move data between the socket and the BIOs. timeout = kwargs.get('timeout', support.SHORT_TIMEOUT) count = 0 for _ in support.busy_retry(timeout): errno = None count += 1 try: ret = func(*args) except ssl.SSLError as e: if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): raise errno = e.errno # Get any data from the outgoing BIO irrespective of any error, and # send it to the socket. buf = outgoing.read() sock.sendall(buf) # If there's no error, we're done. For WANT_READ, we need to get # data from the socket and put it in the incoming BIO. 
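# An empty recv() below means the peer closed the transport, in which case # the incoming BIO is marked as end-of-stream with write_eof() instead.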
if errno is None: break elif errno == ssl.SSL_ERROR_WANT_READ: buf = sock.recv(32768) if buf: incoming.write(buf) else: incoming.write_eof() if support.verbose: sys.stdout.write("Needed %d calls to complete %s().\n" % (count, func.__name__)) return ret def test_bio_handshake(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.load_verify_locations(SIGNING_CA) sslobj = ctx.wrap_bio(incoming, outgoing, False, SIGNED_CERTFILE_HOSTNAME) self.assertIs(sslobj._sslobj.owner, sslobj) self.assertIsNone(sslobj.cipher()) self.assertIsNone(sslobj.version()) self.assertIsNone(sslobj.shared_ciphers()) self.assertRaises(ValueError, sslobj.getpeercert) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertIsNone(sslobj.get_channel_binding('tls-unique')) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) self.assertTrue(sslobj.cipher()) self.assertIsNone(sslobj.shared_ciphers()) self.assertIsNotNone(sslobj.version()) self.assertTrue(sslobj.getpeercert()) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertTrue(sslobj.get_channel_binding('tls-unique')) try: self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) except ssl.SSLSyscallError: # If the server shuts down the TCP connection without sending a # secure shutdown message, this is reported as SSL_ERROR_SYSCALL pass self.assertRaises(ssl.SSLError, sslobj.write, b'foo') def test_bio_read_write_data(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sslobj = ctx.wrap_bio(incoming, outgoing, False) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) req = b'FOO\n' self.ssl_io_loop(sock, incoming, outgoing, sslobj.write, req) buf = self.ssl_io_loop(sock, incoming, outgoing, sslobj.read, 1024) self.assertEqual(buf, b'foo\n') self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) def test_transport_eof(self): client_context, server_context, hostname = testing_context() with socket.socket(socket.AF_INET) as sock: sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() sslobj = client_context.wrap_bio(incoming, outgoing, server_hostname=hostname) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) # Simulate EOF from the transport. incoming.write_eof() self.assertRaises(ssl.SSLEOFError, sslobj.read) @support.requires_resource('network') class NetworkedTests(unittest.TestCase): def test_timeout_connect_ex(self): # Issue #12065: on a timeout, connect_ex() should return the original # errno (mimicking the behaviour of non-SSL sockets). 
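# With a near-zero timeout connect_ex() should surface the plain socket # errno (EAGAIN/EWOULDBLOCK) instead of raising an SSL-level error.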
with socket_helper.transient_internet(REMOTE_HOST): s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, do_handshake_on_connect=False) self.addCleanup(s.close) s.settimeout(0.0000001) rc = s.connect_ex((REMOTE_HOST, 443)) if rc == 0: self.skipTest("REMOTE_HOST responded too quickly") elif rc == errno.ENETUNREACH: self.skipTest("Network unreachable.") self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6') @support.requires_resource('walltime') def test_get_server_certificate_ipv6(self): with socket_helper.transient_internet('ipv6.google.com'): _test_get_server_certificate(self, 'ipv6.google.com', 443) _test_get_server_certificate_fail(self, 'ipv6.google.com', 443) def _test_get_server_certificate(test, host, port, cert=None): pem = ssl.get_server_certificate((host, port)) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=cert) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port ,pem)) def _test_get_server_certificate_fail(test, host, port): with warnings_helper.check_no_resource_warning(test): try: pem = ssl.get_server_certificate((host, port), ca_certs=CERTFILE) except ssl.SSLError as x: #should fail if support.verbose: sys.stdout.write("%s\n" % x) else: test.fail("Got server certificate %s for %s:%s!" % (pem, host, port)) from test.ssl_servers import make_https_server class ThreadedEchoServer(threading.Thread): class ConnectionHandler(threading.Thread): """A mildly complicated class, because we want it to work both with and without the SSL wrapper around the socket connection, so that we can test the STARTTLS functionality.""" def __init__(self, server, connsock, addr): self.server = server self.running = False self.sock = connsock self.addr = addr self.sock.setblocking(True) self.sslconn = None threading.Thread.__init__(self) self.daemon = True def wrap_conn(self): try: self.sslconn = self.server.context.wrap_socket( self.sock, server_side=True) self.server.selected_alpn_protocols.append(self.sslconn.selected_alpn_protocol()) except (ConnectionResetError, BrokenPipeError, ConnectionAbortedError) as e: # We treat ConnectionResetError as though it were an # SSLError - OpenSSL on Ubuntu abruptly closes the # connection when asked to use an unsupported protocol. # # BrokenPipeError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake. # https://github.com/openssl/openssl/issues/6342 # # ConnectionAbortedError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake when using WinSock. self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") self.running = False self.close() return False except (ssl.SSLError, OSError) as e: # OSError may occur with wrong protocols, e.g. both # sides use PROTOCOL_TLS_SERVER. # # XXX Various errors can have happened here, for example # a mismatching protocol version, an invalid certificate, # or a low-level bug. This should be made more discriminating. 
# # bpo-31323: Store the exception as string to prevent # a reference leak: server -> conn_errors -> exception # -> traceback -> self (ConnectionHandler) -> server self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") # bpo-44229, bpo-43855, bpo-44237, and bpo-33450: # Ignore spurious EPROTOTYPE returned by write() on macOS. # See also http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno != errno.EPROTOTYPE and sys.platform != "darwin": self.running = False self.server.stop() self.close() return False else: self.server.shared_ciphers.append(self.sslconn.shared_ciphers()) if self.server.context.verify_mode == ssl.CERT_REQUIRED: cert = self.sslconn.getpeercert() if support.verbose and self.server.chatty: sys.stdout.write(" client cert is " + pprint.pformat(cert) + "\n") cert_binary = self.sslconn.getpeercert(True) if support.verbose and self.server.chatty: if cert_binary is None: sys.stdout.write(" client did not provide a cert\n") else: sys.stdout.write(f" cert binary is {len(cert_binary)}b\n") cipher = self.sslconn.cipher() if support.verbose and self.server.chatty: sys.stdout.write(" server: connection cipher is now " + str(cipher) + "\n") return True def read(self): if self.sslconn: return self.sslconn.read() else: return self.sock.recv(1024) def write(self, bytes): if self.sslconn: return self.sslconn.write(bytes) else: return self.sock.send(bytes) def close(self): if self.sslconn: self.sslconn.close() else: self.sock.close() def run(self): self.running = True if not self.server.starttls_server: if not self.wrap_conn(): return while self.running: try: msg = self.read() stripped = msg.strip() if not stripped: # eof, so quit this handler self.running = False try: self.sock = self.sslconn.unwrap() except OSError: # Many tests shut the TCP connection down # without an SSL shutdown. This causes # unwrap() to raise OSError with errno=0! 
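                            # For reference, the graceful shutdown path looks roughly
                            # like this (a sketch using the names above, not new API):
                            #
                            #   plain = self.sslconn.unwrap()   # sends TLS close_notify
                            #   plain.close()
                            #
                            # Peers that simply drop the TCP connection skip the
                            # close_notify, which is why unwrap() can fail here.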
pass else: self.sslconn = None self.close() elif stripped == b'over': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: client closed connection\n") self.close() return elif (self.server.starttls_server and stripped == b'STARTTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read STARTTLS from client, sending OK...\n") self.write(b"OK\n") if not self.wrap_conn(): return elif (self.server.starttls_server and self.sslconn and stripped == b'ENDTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read ENDTLS from client, sending OK...\n") self.write(b"OK\n") self.sock = self.sslconn.unwrap() self.sslconn = None if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: connection is now unencrypted...\n") elif stripped == b'CB tls-unique': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read CB tls-unique from client, sending our CB data...\n") data = self.sslconn.get_channel_binding("tls-unique") self.write(repr(data).encode("us-ascii") + b"\n") elif stripped == b'PHA': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: initiating post handshake auth\n") try: self.sslconn.verify_client_post_handshake() except ssl.SSLError as e: self.write(repr(e).encode("us-ascii") + b"\n") else: self.write(b"OK\n") elif stripped == b'HASCERT': if self.sslconn.getpeercert() is not None: self.write(b'TRUE\n') else: self.write(b'FALSE\n') elif stripped == b'GETCERT': cert = self.sslconn.getpeercert() self.write(repr(cert).encode("us-ascii") + b"\n") elif stripped == b'VERIFIEDCHAIN': certs = self.sslconn._sslobj.get_verified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") elif stripped == b'UNVERIFIEDCHAIN': certs = self.sslconn._sslobj.get_unverified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") else: if (support.verbose and self.server.connectionchatty): ctype = (self.sslconn and "encrypted") or "unencrypted" sys.stdout.write(" server: read %r (%s), sending back %r (%s)...\n" % (msg, ctype, msg.lower(), ctype)) self.write(msg.lower()) except OSError as e: # handles SSLError and socket errors if self.server.chatty and support.verbose: if isinstance(e, ConnectionError): # OpenSSL 1.1.1 sometimes raises # ConnectionResetError when connection is not # shut down gracefully. 
print( f" Connection reset by peer: {self.addr}" ) else: handle_error("Test server failure:\n") try: self.write(b"ERROR\n") except OSError: pass self.close() self.running = False # normally, we'd just stop here, but for the test # harness, we want to stop the server self.server.stop() def __init__(self, certificate=None, ssl_version=None, certreqs=None, cacerts=None, chatty=True, connectionchatty=False, starttls_server=False, alpn_protocols=None, ciphers=None, context=None): if context: self.context = context else: self.context = ssl.SSLContext(ssl_version if ssl_version is not None else ssl.PROTOCOL_TLS_SERVER) self.context.verify_mode = (certreqs if certreqs is not None else ssl.CERT_NONE) if cacerts: self.context.load_verify_locations(cacerts) if certificate: self.context.load_cert_chain(certificate) if alpn_protocols: self.context.set_alpn_protocols(alpn_protocols) if ciphers: self.context.set_ciphers(ciphers) self.chatty = chatty self.connectionchatty = connectionchatty self.starttls_server = starttls_server self.sock = socket.socket() self.port = socket_helper.bind_port(self.sock) self.flag = None self.active = False self.selected_alpn_protocols = [] self.shared_ciphers = [] self.conn_errors = [] threading.Thread.__init__(self) self.daemon = True def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): self.stop() self.join() def start(self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.sock.settimeout(1.0) self.sock.listen(5) self.active = True if self.flag: # signal an event self.flag.set() while self.active: try: newconn, connaddr = self.sock.accept() if support.verbose and self.chatty: sys.stdout.write(' server: new connection from ' + repr(connaddr) + '\n') handler = self.ConnectionHandler(self, newconn, connaddr) handler.start() handler.join() except TimeoutError as e: if support.verbose: sys.stdout.write(f' connection timeout {e!r}\n') except KeyboardInterrupt: self.stop() except BaseException as e: if support.verbose and self.chatty: sys.stdout.write( ' connection handling failed: ' + repr(e) + '\n') self.close() def close(self): if self.sock is not None: self.sock.close() self.sock = None def stop(self): self.active = False class AsyncoreEchoServer(threading.Thread): # this one's based on asyncore.dispatcher class EchoServer (asyncore.dispatcher): class ConnectionHandler(asyncore.dispatcher_with_send): def __init__(self, conn, certfile): self.socket = test_wrap_socket(conn, server_side=True, certfile=certfile, do_handshake_on_connect=False) asyncore.dispatcher_with_send.__init__(self, self.socket) self._ssl_accepting = True self._do_ssl_handshake() def readable(self): if isinstance(self.socket, ssl.SSLSocket): while self.socket.pending() > 0: self.handle_read_event() return True def _do_ssl_handshake(self): try: self.socket.do_handshake() except (ssl.SSLWantReadError, ssl.SSLWantWriteError): return except ssl.SSLEOFError: return self.handle_close() except ssl.SSLError: raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def handle_read(self): if self._ssl_accepting: self._do_ssl_handshake() else: data = self.recv(1024) if support.verbose: sys.stdout.write(" server: read %s from client\n" % repr(data)) if not data: self.close() else: self.send(data.lower()) def handle_close(self): self.close() if support.verbose: sys.stdout.write(" server: closed connection %s\n" % self.socket) def handle_error(self): raise def 
__init__(self, certfile): self.certfile = certfile sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(sock, '') asyncore.dispatcher.__init__(self, sock) self.listen(5) def handle_accepted(self, sock_obj, addr): if support.verbose: sys.stdout.write(" server: new connection from %s:%s\n" %addr) self.ConnectionHandler(sock_obj, self.certfile) def handle_error(self): raise def __init__(self, certfile): self.flag = None self.active = False self.server = self.EchoServer(certfile) self.port = self.server.port threading.Thread.__init__(self) self.daemon = True def __str__(self): return "<%s %s>" % (self.__class__.__name__, self.server) def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): if support.verbose: sys.stdout.write(" cleanup: stopping server.\n") self.stop() if support.verbose: sys.stdout.write(" cleanup: joining server thread.\n") self.join() if support.verbose: sys.stdout.write(" cleanup: successfully joined.\n") # make sure that ConnectionHandler is removed from socket_map asyncore.close_all(ignore_all=True) def start (self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.active = True if self.flag: self.flag.set() while self.active: try: asyncore.loop(1) except: pass def stop(self): self.active = False self.server.close() def server_params_test(client_context, server_context, indata=b"FOO\n", chatty=True, connectionchatty=False, sni_name=None, session=None): """ Launch a server, connect a client to it and try various reads and writes. """ stats = {} server = ThreadedEchoServer(context=server_context, chatty=chatty, connectionchatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=sni_name, session=session) as s: s.connect((HOST, server.port)) for arg in [indata, bytearray(indata), memoryview(indata)]: if connectionchatty: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(arg) outdata = s.read() if connectionchatty: if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): raise AssertionError( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if connectionchatty: if support.verbose: sys.stdout.write(" client: closing connection.\n") stats.update({ 'compression': s.compression(), 'cipher': s.cipher(), 'peercert': s.getpeercert(), 'client_alpn_protocol': s.selected_alpn_protocol(), 'version': s.version(), 'session_reused': s.session_reused, 'session': s.session, }) s.close() stats['server_alpn_protocols'] = server.selected_alpn_protocols stats['server_shared_ciphers'] = server.shared_ciphers return stats def try_protocol_combo(server_protocol, client_protocol, expect_success, certsreqs=None, server_options=0, client_options=0): """ Try to SSL-connect using *client_protocol* to *server_protocol*. If *expect_success* is true, assert that the connection succeeds, if it's false, assert that the connection fails. Also, if *expect_success* is a string, assert that it is the protocol version actually used by the connection. 
""" if certsreqs is None: certsreqs = ssl.CERT_NONE certtype = { ssl.CERT_NONE: "CERT_NONE", ssl.CERT_OPTIONAL: "CERT_OPTIONAL", ssl.CERT_REQUIRED: "CERT_REQUIRED", }[certsreqs] if support.verbose: formatstr = (expect_success and " %s->%s %s\n") or " {%s->%s} %s\n" sys.stdout.write(formatstr % (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol), certtype)) with warnings_helper.check_warnings(): # ignore Deprecation warnings client_context = ssl.SSLContext(client_protocol) client_context.options |= client_options server_context = ssl.SSLContext(server_protocol) server_context.options |= server_options min_version = PROTOCOL_TO_TLS_VERSION.get(client_protocol, None) if (min_version is not None # SSLContext.minimum_version is only available on recent OpenSSL # (setter added in OpenSSL 1.1.0, getter added in OpenSSL 1.1.1) and hasattr(server_context, 'minimum_version') and server_protocol == ssl.PROTOCOL_TLS and server_context.minimum_version > min_version ): # If OpenSSL configuration is strict and requires more recent TLS # version, we have to change the minimum to test old TLS versions. with warnings_helper.check_warnings(): server_context.minimum_version = min_version # NOTE: we must enable "ALL" ciphers on the client, otherwise an # SSLv23 client will send an SSLv3 hello (rather than SSLv2) # starting from OpenSSL 1.0.0 (see issue #8322). if client_context.protocol == ssl.PROTOCOL_TLS: client_context.set_ciphers("ALL") seclevel_workaround(server_context, client_context) for ctx in (client_context, server_context): ctx.verify_mode = certsreqs ctx.load_cert_chain(SIGNED_CERTFILE) ctx.load_verify_locations(SIGNING_CA) try: stats = server_params_test(client_context, server_context, chatty=False, connectionchatty=False) # Protocol mismatch can result in either an SSLError, or a # "Connection reset by peer" error. except ssl.SSLError: if expect_success: raise except OSError as e: if expect_success or e.errno != errno.ECONNRESET: raise else: if not expect_success: raise AssertionError( "Client protocol %s succeeded with server protocol %s!" 
% (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol))) elif (expect_success is not True and expect_success != stats['version']): raise AssertionError("version mismatch: expected %r, got %r" % (expect_success, stats['version'])) class ThreadedTests(unittest.TestCase): @support.requires_resource('walltime') def test_echo(self): """Basic test of an SSL client connecting to a server""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_SERVER): server_params_test(client_context=client_context, server_context=server_context, chatty=True, connectionchatty=True, sni_name=hostname) client_context.check_hostname = False with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_SERVER): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=server_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception)) def test_getpeercert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), do_handshake_on_connect=False, server_hostname=hostname) as s: s.connect((HOST, server.port)) # getpeercert() raise ValueError while the handshake isn't # done. with self.assertRaises(ValueError): s.getpeercert() s.do_handshake() cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher() if support.verbose: sys.stdout.write(pprint.pformat(cert) + '\n') sys.stdout.write("Connection cipher is " + str(cipher) + '.\n') if 'subject' not in cert: self.fail("No subject field in certificate: %s." 
% pprint.pformat(cert)) if ((('organizationName', 'Python Software Foundation'),) not in cert['subject']): self.fail( "Missing or invalid 'organizationName' field in certificate subject; " "should be 'Python Software Foundation'.") self.assertIn('notBefore', cert) self.assertIn('notAfter', cert) before = ssl.cert_time_to_seconds(cert['notBefore']) after = ssl.cert_time_to_seconds(cert['notAfter']) self.assertLess(before, after) def test_crl_check(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(client_context.verify_flags, ssl.VERIFY_DEFAULT | tf) # VERIFY_DEFAULT should pass server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # VERIFY_CRL_CHECK_LEAF without a loaded CRL file fails client_context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaisesRegex(ssl.SSLError, "certificate verify failed"): s.connect((HOST, server.port)) # now load a CRL file. The CRL file is signed by the CA. client_context.load_verify_locations(CRLFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_check_hostname(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname="invalid") as s: with self.assertRaisesRegex( ssl.CertificateError, "Hostname mismatch, certificate is not valid for 'invalid'."): s.connect((HOST, server.port)) # missing server_hostname arg should cause an exception, too server = ThreadedEchoServer(context=server_context, chatty=True) with server: with socket.socket() as s: with self.assertRaisesRegex(ValueError, "check_hostname requires server_hostname"): client_context.wrap_socket(s) @unittest.skipUnless( ssl.HAS_NEVER_CHECK_COMMON_NAME, "test requires hostname_checks_common_name" ) def test_hostname_checks_common_name(self): client_context, server_context, hostname = testing_context() assert client_context.hostname_checks_common_name client_context.hostname_checks_common_name = False # default cert has a SAN server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) client_context, server_context, hostname = testing_context(NOSANFILE) client_context.hostname_checks_common_name = False server = ThreadedEchoServer(context=server_context, chatty=True) with server: with 
client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLCertVerificationError): s.connect((HOST, server.port)) def test_ecc_cert(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC cert server_context.load_cert_chain(SIGNED_CERTFILE_ECC) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_dual_rsa_ecc(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) # TODO: fix TLSv1.3 once SSLContext can restrict signature # algorithms. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # only ECDSA certs client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC and RSA key/cert pairs server_context.load_cert_chain(SIGNED_CERTFILE_ECC) server_context.load_cert_chain(SIGNED_CERTFILE) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_check_hostname_idn(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(IDNSANSFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations(SIGNING_CA) # correct hostname should verify, when specified in several # different ways idn_hostnames = [ ('könig.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), (b'xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('königsgäßchen.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), ('xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), (b'xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), # ('königsgäßchen.idna2008.pythontest.net', # 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ('xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), (b'xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ] for server_hostname, expected_hostname in idn_hostnames: server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=server_hostname) as s: self.assertEqual(s.server_hostname, expected_hostname) s.connect((HOST, server.port)) cert = s.getpeercert() self.assertEqual(s.server_hostname, expected_hostname) self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should 
raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname="python.example.org") as s: with self.assertRaises(ssl.CertificateError): s.connect((HOST, server.port)) with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeError): context.wrap_socket(socket.socket(), server_hostname='.pythontest.net') with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeDecodeError): context.wrap_socket(socket.socket(), server_hostname=b'k\xf6nig.idn.pythontest.net') def test_wrong_cert_tls12(self): """Connecting when the server rejects the client's certificate Launch a server with CERT_REQUIRED, and check that trying to connect to it with a wrong client certificate fails. """ client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) # require TLS client authentication server_context.verify_mode = ssl.CERT_REQUIRED # TLS 1.3 has different handshake client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: try: # Expect either an SSL error about the server rejecting # the connection, or a low-level connection reset (which # sometimes happens on Windows) s.connect((HOST, server.port)) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") @requires_tls_version('TLSv1_3') def test_wrong_cert_tls13(self): client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.minimum_version = ssl.TLSVersion.TLSv1_3 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex( ssl.SSLError, 'alert unknown ca|EOF occurred' ): # TLS 1.3 perform client cert exchange after handshake s.write(b'data') s.read(1000) s.write(b'should have failed already') s.read(1000) def test_rude_shutdown(self): """A brutal shutdown of an SSL server should raise an OSError in the client when attempting handshake. """ listener_ready = threading.Event() listener_gone = threading.Event() s = socket.socket() port = socket_helper.bind_port(s, HOST) # `listener` runs in a thread. It sits in an accept() until # the main thread connects. Then it rudely closes the socket, # and sets Event `listener_gone` to let the main thread know # the socket is gone. 
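        # Shape of the expected client-side failure (a sketch; the actual flow
        # is in listener()/connector() below):
        #
        #   c.connect((HOST, port))   # plain TCP connect succeeds
        #   ...server closes the accepted socket and goes away...
        #   test_wrap_socket(c)       # handshake now fails with an OSError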
def listener(): s.listen() listener_ready.set() newsock, addr = s.accept() newsock.close() s.close() listener_gone.set() def connector(): listener_ready.wait() with socket.socket() as c: c.connect((HOST, port)) listener_gone.wait() try: ssl_sock = test_wrap_socket(c) except OSError: pass else: self.fail('connecting to closed SSL socket should have failed') t = threading.Thread(target=listener) t.start() try: connector() finally: t.join() def test_ssl_cert_verify_error(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: try: s.connect((HOST, server.port)) except ssl.SSLError as e: msg = 'unable to get local issuer certificate' self.assertIsInstance(e, ssl.SSLCertVerificationError) self.assertEqual(e.verify_code, 20) self.assertEqual(e.verify_message, msg) self.assertIn(msg, repr(e)) self.assertIn('certificate verify failed', repr(e)) @requires_tls_version('SSLv2') def test_protocol_sslv2(self): """Connecting to an SSLv2 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLSv1, False) # SSLv23 client with specific SSL options try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) def test_PROTOCOL_TLS(self): """Connecting to an SSLv23 server with various client options""" if support.verbose: sys.stdout.write("\n") if has_tls_version('SSLv2'): try: try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv2, True) except OSError as x: # this fails on some older versions of OpenSSL (0.9.7l, for instance) if support.verbose: sys.stdout.write( " SSL2 client to SSL23 server test unexpectedly failed:\n %s\n" % str(x)) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_OPTIONAL) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_REQUIRED) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) # Server with specific SSL options if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, server_options=ssl.OP_NO_SSLv3) # Will choose TLSv1 try_protocol_combo(ssl.PROTOCOL_TLS, 
ssl.PROTOCOL_TLS, True, server_options=ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, False, server_options=ssl.OP_NO_TLSv1) @requires_tls_version('SSLv3') def test_protocol_sslv3(self): """Connecting to an SSLv3 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3') try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @requires_tls_version('TLSv1') def test_protocol_tlsv1(self): """Connecting to a TLSv1 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1') try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) @requires_tls_version('TLSv1_1') def test_protocol_tlsv1_1(self): """Connecting to a TLSv1.1 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_1) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) @requires_tls_version('TLSv1_2') def test_protocol_tlsv1_2(self): """Connecting to a TLSv1.2 server with various client options. 
Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2', server_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2, client_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2,) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_2) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2') if has_tls_protocol(ssl.PROTOCOL_TLSv1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False) if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) def test_starttls(self): """Switching from clear text to encrypted and back again.""" msgs = (b"msg 1", b"MSG 2", b"STARTTLS", b"MSG 3", b"msg 4", b"ENDTLS", b"msg 5", b"msg 6") server = ThreadedEchoServer(CERTFILE, starttls_server=True, chatty=True, connectionchatty=True) wrapped = False with server: s = socket.socket() s.setblocking(True) s.connect((HOST, server.port)) if support.verbose: sys.stdout.write("\n") for indata in msgs: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) if wrapped: conn.write(indata) outdata = conn.read() else: s.send(indata) outdata = s.recv(1024) msg = outdata.strip().lower() if indata == b"STARTTLS" and msg.startswith(b"ok"): # STARTTLS ok, switch to secure mode if support.verbose: sys.stdout.write( " client: read %r from server, starting TLS...\n" % msg) conn = test_wrap_socket(s) wrapped = True elif indata == b"ENDTLS" and msg.startswith(b"ok"): # ENDTLS ok, switch back to clear text if support.verbose: sys.stdout.write( " client: read %r from server, ending TLS...\n" % msg) s = conn.unwrap() wrapped = False else: if support.verbose: sys.stdout.write( " client: read %r from server\n" % msg) if support.verbose: sys.stdout.write(" client: closing connection.\n") if wrapped: conn.write(b"over\n") else: s.send(b"over\n") if wrapped: conn.close() else: s.close() def test_socketserver(self): """Using socketserver to create and manage SSL connections.""" server = make_https_server(self, certfile=SIGNED_CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') # Get this test file itself: with open(__file__, 'rb') as f: d1 = f.read() d2 = '' # now fetch the same data from the HTTPS server url = f'https://localhost:{server.port}/test_ssl.py' context = ssl.create_default_context(cafile=SIGNING_CA) f = urllib.request.urlopen(url, context=context) try: dlen = f.info().get("content-length") if dlen and (int(dlen) > 0): d2 = f.read(int(dlen)) if support.verbose: sys.stdout.write( " client: read %d bytes from remote server '%s'\n" % (len(d2), server)) finally: f.close() self.assertEqual(d1, d2) def test_asyncore_server(self): """Check the example asyncore integration.""" if support.verbose: sys.stdout.write("\n") indata = b"FOO\n" server = AsyncoreEchoServer(CERTFILE) with server: s = test_wrap_socket(socket.socket()) s.connect(('127.0.0.1', server.port)) if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(indata) outdata = s.read() if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): self.fail( "bad 
data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if support.verbose: sys.stdout.write(" client: closing connection.\n") s.close() if support.verbose: sys.stdout.write(" client: connection closed.\n") def test_recv_send(self): """Test recv(), send() and friends.""" if support.verbose: sys.stdout.write("\n") server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) # helper methods for standardising recv* method signatures def _recv_into(): b = bytearray(b"\0"*100) count = s.recv_into(b) return b[:count] def _recvfrom_into(): b = bytearray(b"\0"*100) count, addr = s.recvfrom_into(b) return b[:count] # (name, method, expect success?, *args, return value func) send_methods = [ ('send', s.send, True, [], len), ('sendto', s.sendto, False, ["some.address"], len), ('sendall', s.sendall, True, [], lambda x: None), ] # (name, method, whether to expect success, *args) recv_methods = [ ('recv', s.recv, True, []), ('recvfrom', s.recvfrom, False, ["some.address"]), ('recv_into', _recv_into, True, []), ('recvfrom_into', _recvfrom_into, False, []), ] data_prefix = "PREFIX_" for (meth_name, send_meth, expect_success, args, ret_val_meth) in send_methods: indata = (data_prefix + meth_name).encode('ascii') try: ret = send_meth(indata, *args) msg = "sending with {}".format(meth_name) self.assertEqual(ret, ret_val_meth(indata), msg=msg) outdata = s.read() if outdata != indata.lower(): self.fail( "While sending with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to send with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) for meth_name, recv_meth, expect_success, args in recv_methods: indata = (data_prefix + meth_name).encode('ascii') try: s.send(indata) outdata = recv_meth(*args) if outdata != indata.lower(): self.fail( "While receiving with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to receive with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) # consume data s.read() # read(-1, buffer) is supported, even though read(-1) is not data = b"data" s.send(data) buffer = bytearray(len(data)) self.assertEqual(s.read(-1, buffer), len(data)) self.assertEqual(buffer, data) # sendall accepts bytes-like objects if ctypes is not None: ubyte = ctypes.c_ubyte * len(data) byteslike = ubyte.from_buffer_copy(data) s.sendall(byteslike) self.assertEqual(s.read(), data) # Make sure sendmsg et al are disallowed to avoid # inadvertent disclosure of data and/or corruption # of the encrypted 
data stream self.assertRaises(NotImplementedError, s.dup) self.assertRaises(NotImplementedError, s.sendmsg, [b"data"]) self.assertRaises(NotImplementedError, s.recvmsg, 100) self.assertRaises(NotImplementedError, s.recvmsg_into, [bytearray(100)]) s.write(b"over\n") self.assertRaises(ValueError, s.recv, -1) self.assertRaises(ValueError, s.read, -1) s.close() def test_recv_zero(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) # recv/read(0) should return no data s.send(b"data") self.assertEqual(s.recv(0), b"") self.assertEqual(s.read(0), b"") self.assertEqual(s.read(), b"data") # Should not block if the other end sends no data s.setblocking(False) self.assertEqual(s.recv(0), b"") self.assertEqual(s.recv_into(bytearray()), 0) def test_recv_into_buffer_protocol_len(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) s.send(b"data") buf = array.array('I', [0, 0]) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf)[:4], b"data") class B(bytearray): def __len__(self): 1/0 s.send(b"data") buf = B(6) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf), b"data\0\0") def test_nonblocking_send(self): server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) s.setblocking(False) # If we keep sending data, at some point the buffers # will be full and the call will block buf = bytearray(8192) def fill_buffer(): while True: s.send(buf) self.assertRaises((ssl.SSLWantWriteError, ssl.SSLWantReadError), fill_buffer) # Now read all the output and discard it s.setblocking(True) s.close() def test_handshake_timeout(self): # Issue #5103: SSL handshake must respect the socket timeout server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) started = threading.Event() finish = False def serve(): server.listen() started.set() conns = [] while not finish: r, w, e = select.select([server], [], [], 0.1) if server in r: # Let the socket hang around rather than having # it closed by garbage collection. conns.append(server.accept()[0]) for sock in conns: sock.close() t = threading.Thread(target=serve) t.start() started.wait() try: try: c = socket.socket(socket.AF_INET) c.settimeout(0.2) c.connect((host, port)) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", test_wrap_socket, c) finally: c.close() try: c = socket.socket(socket.AF_INET) c = test_wrap_socket(c) c.settimeout(0.2) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", c.connect, (host, port)) finally: c.close() finally: finish = True t.join() server.close() def test_server_accept(self): # Issue #16357: accept() on a SSLSocket created through # SSLContext.wrap_socket(). 
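        # The pattern under test, in outline (a sketch of the code below, with
        # hypothetical variable names):
        #
        #   raw = socket.socket()
        #   raw.bind((host, 0))
        #   listener = server_ctx.wrap_socket(raw, server_side=True)
        #   listener.listen()
        #   conn, peer = listener.accept()   # conn is itself an ssl.SSLSocket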
        client_ctx, server_ctx, hostname = testing_context()
        server = socket.socket(socket.AF_INET)
        host = "127.0.0.1"
        port = socket_helper.bind_port(server)
        server = server_ctx.wrap_socket(server, server_side=True)
        self.assertTrue(server.server_side)

        evt = threading.Event()
        remote = None
        peer = None
        def serve():
            nonlocal remote, peer
            server.listen()
            # Block on the accept and wait on the connection to close.
            evt.set()
            remote, peer = server.accept()
            remote.send(remote.recv(4))

        t = threading.Thread(target=serve)
        t.start()
        # Client wait until server setup and perform a connect.
        evt.wait()
        client = client_ctx.wrap_socket(
            socket.socket(), server_hostname=hostname
        )
        client.connect((hostname, port))
        client.send(b'data')
        client.recv()
        client_addr = client.getsockname()
        client.close()
        t.join()
        remote.close()
        server.close()
        # Sanity checks.
        self.assertIsInstance(remote, ssl.SSLSocket)
        self.assertEqual(peer, client_addr)

    def test_getpeercert_enotconn(self):
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.check_hostname = False
        with context.wrap_socket(socket.socket()) as sock:
            with self.assertRaises(OSError) as cm:
                sock.getpeercert()
            self.assertEqual(cm.exception.errno, errno.ENOTCONN)

    def test_do_handshake_enotconn(self):
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        context.check_hostname = False
        with context.wrap_socket(socket.socket()) as sock:
            with self.assertRaises(OSError) as cm:
                sock.do_handshake()
            self.assertEqual(cm.exception.errno, errno.ENOTCONN)

    def test_no_shared_ciphers(self):
        client_context, server_context, hostname = testing_context()
        # OpenSSL enables all TLS 1.3 ciphers, enforce TLS 1.2 for test
        client_context.maximum_version = ssl.TLSVersion.TLSv1_2
        # Force different suites on client and server
        client_context.set_ciphers("AES128")
        server_context.set_ciphers("AES256")
        with ThreadedEchoServer(context=server_context) as server:
            with client_context.wrap_socket(socket.socket(),
                                            server_hostname=hostname) as s:
                with self.assertRaises(OSError):
                    s.connect((HOST, server.port))
        self.assertIn("no shared cipher", server.conn_errors[0])

    def test_version_basic(self):
        """
        Basic tests for SSLSocket.version().
        More tests are done in the test_protocol_*() methods.
""" context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with ThreadedEchoServer(CERTFILE, ssl_version=ssl.PROTOCOL_TLS_SERVER, chatty=False) as server: with context.wrap_socket(socket.socket()) as s: self.assertIs(s.version(), None) self.assertIs(s._sslobj, None) s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.3') self.assertIs(s._sslobj, None) self.assertIs(s.version(), None) @requires_tls_version('TLSv1_3') def test_tls1_3(self): client_context, server_context, hostname = testing_context() client_context.minimum_version = ssl.TLSVersion.TLSv1_3 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn(s.cipher()[0], { 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256', }) self.assertEqual(s.version(), 'TLSv1.3') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_tlsv1_2(self): client_context, server_context, hostname = testing_context() # client TLSv1.0 to 1.2 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # server only TLSv1.2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 server_context.maximum_version = ssl.TLSVersion.TLSv1_2 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.2') @requires_tls_version('TLSv1_1') @ignore_deprecation def test_min_max_version_tlsv1_1(self): client_context, server_context, hostname = testing_context() # client 1.0 to 1.2, server 1.0 to 1.1 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1 server_context.maximum_version = ssl.TLSVersion.TLSv1_1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.1') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_mismatch(self): client_context, server_context, hostname = testing_context() # client 1.0, server 1.2 (mismatch) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 client_context.maximum_version = ssl.TLSVersion.TLSv1 client_context.minimum_version = ssl.TLSVersion.TLSv1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError) as e: s.connect((HOST, server.port)) self.assertIn("alert", str(e.exception)) @requires_tls_version('SSLv3') def test_min_max_version_sslv3(self): client_context, server_context, hostname = testing_context() server_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.maximum_version = ssl.TLSVersion.SSLv3 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: 
s.connect((HOST, server.port)) self.assertEqual(s.version(), 'SSLv3') def test_default_ecdh_curve(self): # Issue #21015: elliptic curve-based Diffie Hellman key exchange # should be enabled by default on SSL contexts. client_context, server_context, hostname = testing_context() # TLSv1.3 defaults to PFS key agreement and no longer has KEA in # cipher name. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Prior to OpenSSL 1.0.0, ECDH ciphers have to be enabled # explicitly using the 'ECCdraft' cipher alias. Otherwise, # our default cipher list should prefer ECDH-based ciphers # automatically. with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn("ECDH", s.cipher()[0]) @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): """Test tls-unique channel binding.""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=True, connectionchatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # get the data cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( " got channel binding data: {0!r}\n".format(cb_data)) # check if it is sane self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 # and compare with the peers version s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(cb_data).encode("us-ascii")) # now, again with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) new_cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( "got another channel binding data: {0!r}\n".format( new_cb_data) ) # is it really unique self.assertNotEqual(cb_data, new_cb_data) self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(new_cb_data).encode("us-ascii")) def test_compression(self): client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) if support.verbose: sys.stdout.write(" got compression: {!r}\n".format(stats['compression'])) self.assertIn(stats['compression'], { None, 'ZLIB', 'RLE' }) @unittest.skipUnless(hasattr(ssl, 'OP_NO_COMPRESSION'), "ssl.OP_NO_COMPRESSION needed for this test") def test_compression_disabled(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_COMPRESSION server_context.options |= ssl.OP_NO_COMPRESSION stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['compression'], None) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_dh_params(self): # Check we can get a connection with ephemeral Diffie-Hellman client_context, server_context, hostname = testing_context() # test scenario needs TLS <= 1.2 client_context.maximum_version = 
ssl.TLSVersion.TLSv1_2 server_context.load_dh_params(DHFILE) server_context.set_ciphers("kEDH") server_context.maximum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) cipher = stats["cipher"][0] parts = cipher.split("-") if "ADH" not in parts and "EDH" not in parts and "DHE" not in parts: self.fail("Non-DH cipher: " + cipher[0]) def test_ecdh_curve(self): # server secp384r1, client auto client_context, server_context, hostname = testing_context() server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server auto, client secp384r1 client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server / client curve mismatch client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("prime256v1") server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 with self.assertRaises(ssl.SSLError): server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_selected_alpn_protocol(self): # selected_alpn_protocol() is None unless ALPN is used. client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_selected_alpn_protocol_if_server_uses_alpn(self): # selected_alpn_protocol() is None unless ALPN is used by the client. 
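        # ALPN negotiation in brief (a sketch; `client_ctx`, `server_ctx`,
        # `ssl_sock` and the 'h2' / 'http/1.1' tokens are only illustrative):
        #
        #   server_ctx.set_alpn_protocols(['h2', 'http/1.1'])
        #   client_ctx.set_alpn_protocols(['http/1.1'])
        #   ...TLS handshake...
        #   ssl_sock.selected_alpn_protocol()   # -> 'http/1.1', or None when
        #                                       #    either side skipped ALPN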
        client_context, server_context, hostname = testing_context()
        server_context.set_alpn_protocols(['foo', 'bar'])
        stats = server_params_test(client_context, server_context,
                                   chatty=True,
                                   connectionchatty=True,
                                   sni_name=hostname)
        self.assertIs(stats['client_alpn_protocol'], None)

    def test_alpn_protocols(self):
        server_protocols = ['foo', 'bar', 'milkshake']
        protocol_tests = [
            (['foo', 'bar'], 'foo'),
            (['bar', 'foo'], 'foo'),
            (['milkshake'], 'milkshake'),
            (['http/3.0', 'http/4.0'], None)
        ]
        for client_protocols, expected in protocol_tests:
            client_context, server_context, hostname = testing_context()
            server_context.set_alpn_protocols(server_protocols)
            client_context.set_alpn_protocols(client_protocols)
            try:
                stats = server_params_test(client_context,
                                            server_context,
                                            chatty=True,
                                            connectionchatty=True,
                                            sni_name=hostname)
            except ssl.SSLError as e:
                stats = e

            msg = "failed trying %s (s) and %s (c).\n" \
                  "was expecting %s, but got %%s from the %%s" \
                  % (str(server_protocols), str(client_protocols),
                     str(expected))
            client_result = stats['client_alpn_protocol']
            self.assertEqual(client_result, expected,
                             msg % (client_result, "client"))
            server_result = stats['server_alpn_protocols'][-1] \
                if len(stats['server_alpn_protocols']) else 'nothing'
            self.assertEqual(server_result, expected,
                             msg % (server_result, "server"))

    def test_npn_protocols(self):
        assert not ssl.HAS_NPN

    def sni_contexts(self):
        server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        server_context.load_cert_chain(SIGNED_CERTFILE)
        other_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        other_context.load_cert_chain(SIGNED_CERTFILE2)
        client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        client_context.load_verify_locations(SIGNING_CA)
        return server_context, other_context, client_context

    def check_common_name(self, stats, name):
        cert = stats['peercert']
        self.assertIn((('commonName', name),), cert['subject'])

    def test_sni_callback(self):
        calls = []
        server_context, other_context, client_context = self.sni_contexts()
        client_context.check_hostname = False

        def servername_cb(ssl_sock, server_name, initial_context):
            calls.append((server_name, initial_context))
            if server_name is not None:
                ssl_sock.context = other_context
        server_context.set_servername_callback(servername_cb)

        stats = server_params_test(client_context, server_context,
                                   chatty=True,
                                   sni_name='supermessage')
        # The hostname was fetched properly, and the certificate was
        # changed for the connection.
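        # For context, the servername-callback contract exercised here:
        # cb(ssl_sock, server_name, initial_context) runs during the handshake;
        # assigning ssl_sock.context switches certificates, returning None
        # continues normally, and returning an ssl.ALERT_DESCRIPTION_* constant
        # (see test_sni_callback_alert below) aborts with that alert.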
self.assertEqual(calls, [("supermessage", server_context)]) # CERTFILE4 was selected self.check_common_name(stats, 'fakehostname') calls = [] # The callback is called with server_name=None stats = server_params_test(client_context, server_context, chatty=True, sni_name=None) self.assertEqual(calls, [(None, server_context)]) self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) # Check disabling the callback calls = [] server_context.set_servername_callback(None) stats = server_params_test(client_context, server_context, chatty=True, sni_name='notfunny') # Certificate didn't change self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) self.assertEqual(calls, []) def test_sni_callback_alert(self): # Returning a TLS alert is reflected to the connecting client server_context, other_context, client_context = self.sni_contexts() def cb_returning_alert(ssl_sock, server_name, initial_context): return ssl.ALERT_DESCRIPTION_ACCESS_DENIED server_context.set_servername_callback(cb_returning_alert) with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_ACCESS_DENIED') def test_sni_callback_raising(self): # Raising fails the connection with a TLS handshake failure alert. server_context, other_context, client_context = self.sni_contexts() def cb_raising(ssl_sock, server_name, initial_context): 1/0 server_context.set_servername_callback(cb_raising) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'SSLV3_ALERT_HANDSHAKE_FAILURE') self.assertEqual(catch.unraisable.exc_type, ZeroDivisionError) def test_sni_callback_wrong_return_type(self): # Returning the wrong return type terminates the TLS connection # with an internal error alert. 
server_context, other_context, client_context = self.sni_contexts() def cb_wrong_return_type(ssl_sock, server_name, initial_context): return "foo" server_context.set_servername_callback(cb_wrong_return_type) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_INTERNAL_ERROR') self.assertEqual(catch.unraisable.exc_type, TypeError) def test_shared_ciphers(self): client_context, server_context, hostname = testing_context() client_context.set_ciphers("AES128:AES256") server_context.set_ciphers("AES256:eNULL") expected_algs = [ "AES256", "AES-256", # TLS 1.3 ciphers are always enabled "TLS_CHACHA20", "TLS_AES", ] stats = server_params_test(client_context, server_context, sni_name=hostname) ciphers = stats['server_shared_ciphers'][0] self.assertGreater(len(ciphers), 0) for name, tls_version, bits in ciphers: if not any(alg in name for alg in expected_algs): self.fail(name) def test_read_write_after_close_raises_valuerror(self): client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: s = client_context.wrap_socket(socket.socket(), server_hostname=hostname) s.connect((HOST, server.port)) s.close() self.assertRaises(ValueError, s.read, 1024) self.assertRaises(ValueError, s.write, b'hello') def test_sendfile(self): TEST_DATA = b"x" * 512 with open(os_helper.TESTFN, 'wb') as f: f.write(TEST_DATA) self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with open(os_helper.TESTFN, 'rb') as file: s.sendfile(file) self.assertEqual(s.recv(1024), TEST_DATA) def test_session(self): client_context, server_context, hostname = testing_context() # TODO: sessions aren't compatible with TLSv1.3 yet client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # first connection without session stats = server_params_test(client_context, server_context, sni_name=hostname) session = stats['session'] self.assertTrue(session.id) self.assertGreater(session.time, 0) self.assertGreater(session.timeout, 0) self.assertTrue(session.has_ticket) self.assertGreater(session.ticket_lifetime_hint, 0) self.assertFalse(stats['session_reused']) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 1) self.assertEqual(sess_stat['hits'], 0) # reuse session stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 2) self.assertEqual(sess_stat['hits'], 1) self.assertTrue(stats['session_reused']) session2 = stats['session'] self.assertEqual(session2.id, session.id) self.assertEqual(session2, session) self.assertIsNot(session2, session) self.assertGreaterEqual(session2.time, session.time) self.assertGreaterEqual(session2.timeout, session.timeout) # another one without session stats = server_params_test(client_context, server_context, sni_name=hostname) self.assertFalse(stats['session_reused']) session3 = stats['session'] self.assertNotEqual(session3.id, session.id) self.assertNotEqual(session3, session) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 3) 
self.assertEqual(sess_stat['hits'], 1) # reuse session again stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) self.assertTrue(stats['session_reused']) session4 = stats['session'] self.assertEqual(session4.id, session.id) self.assertEqual(session4, session) self.assertGreaterEqual(session4.time, session.time) self.assertGreaterEqual(session4.timeout, session.timeout) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 4) self.assertEqual(sess_stat['hits'], 2) def test_session_handling(self): client_context, server_context, hostname = testing_context() client_context2, _, _ = testing_context() # TODO: session reuse does not work with TLSv1.3 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context2.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # session is None before handshake self.assertEqual(s.session, None) self.assertEqual(s.session_reused, None) s.connect((HOST, server.port)) session = s.session self.assertTrue(session) with self.assertRaises(TypeError) as e: s.session = object self.assertEqual(str(e.exception), 'Value is not a SSLSession.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # cannot set session after handshake with self.assertRaises(ValueError) as e: s.session = session self.assertEqual(str(e.exception), 'Cannot set session after handshake.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # can set session before handshake and before the # connection was established s.session = session s.connect((HOST, server.port)) self.assertEqual(s.session.id, session.id) self.assertEqual(s.session, session) self.assertEqual(s.session_reused, True) with client_context2.wrap_socket(socket.socket(), server_hostname=hostname) as s: # cannot re-use session with a different SSLContext with self.assertRaises(ValueError) as e: s.session = session s.connect((HOST, server.port)) self.assertEqual(str(e.exception), 'Session refers to a different SSLContext.') @unittest.skipUnless(has_tls_version('TLSv1_3'), "Test needs TLS 1.3") class TestPostHandshakeAuth(unittest.TestCase): def test_pha_setter(self): protocols = [ ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT ] for protocol in protocols: ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.post_handshake_auth, False) ctx.post_handshake_auth = True self.assertEqual(ctx.post_handshake_auth, True) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, True) ctx.post_handshake_auth = False self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, False) ctx.verify_mode = ssl.CERT_OPTIONAL ctx.post_handshake_auth = True self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) self.assertEqual(ctx.post_handshake_auth, True) def test_pha_required(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') 
self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA method just returns true when cert is already available s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'GETCERT') cert_text = s.recv(4096).decode('us-ascii') self.assertIn('Python Software Foundation CA', cert_text) def test_pha_required_nocert(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True def msg_cb(conn, direction, version, content_type, msg_type, data): if support.verbose and content_type == _TLSContentType.ALERT: info = (conn, direction, version, content_type, msg_type, data) sys.stdout.write(f"TLS: {info!r}\n") server_context._msg_callback = msg_cb client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) s.write(b'PHA') # test sometimes fails with EOF error. Test passes as long as # server aborts connection with an error. with self.assertRaisesRegex( ssl.SSLError, '(certificate required|EOF occurred)' ): # receive CertificateRequest data = s.recv(1024) self.assertEqual(data, b'OK\n') # send empty Certificate + Finish s.write(b'HASCERT') # receive alert s.recv(1024) def test_pha_optional(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # check CERT_OPTIONAL server_context.verify_mode = ssl.CERT_OPTIONAL server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_optional_nocert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_OPTIONAL client_context.post_handshake_auth = True server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') # optional doesn't fail when client does not have a cert s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') def test_pha_no_pha_client(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex(ssl.SSLError, 'not server'): s.verify_client_post_handshake() s.write(b'PHA') self.assertIn(b'extension not received', 
s.recv(1024)) def test_pha_no_pha_server(self): # server doesn't have PHA enabled, cert is requested in handshake client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA doesn't fail if there is already a cert s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_not_tls13(self): # TLS 1.2 client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # PHA fails for TLS != 1.3 s.write(b'PHA') self.assertIn(b'WRONG_SSL_VERSION', s.recv(1024)) def test_bpo37428_pha_cert_none(self): # verify that post_handshake_auth does not implicitly enable cert # validation. hostname = SIGNED_CERTFILE_HOSTNAME client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # no cert validation and CA on client side client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) server_context.load_verify_locations(SIGNING_CA) server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # server cert has not been validated self.assertEqual(s.getpeercert(), {}) def test_internal_chain_client(self): client_context, server_context, hostname = testing_context( server_chain=False ) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) vc = s._sslobj.get_verified_chain() self.assertEqual(len(vc), 2) ee, ca = vc uvc = s._sslobj.get_unverified_chain() self.assertEqual(len(uvc), 1) self.assertEqual(ee, uvc[0]) self.assertEqual(hash(ee), hash(uvc[0])) self.assertEqual(repr(ee), repr(uvc[0])) self.assertNotEqual(ee, ca) self.assertNotEqual(hash(ee), hash(ca)) self.assertNotEqual(repr(ee), repr(ca)) self.assertNotEqual(ee.get_info(), ca.get_info()) self.assertIn("CN=localhost", repr(ee)) self.assertIn("CN=our-ca-server", repr(ca)) pem = ee.public_bytes(_ssl.ENCODING_PEM) der = ee.public_bytes(_ssl.ENCODING_DER) self.assertIsInstance(pem, str) self.assertIn("-----BEGIN CERTIFICATE-----", pem) self.assertIsInstance(der, bytes) self.assertEqual( ssl.PEM_cert_to_DER_cert(pem), der ) def test_internal_chain_server(self): client_context, 
server_context, hostname = testing_context() client_context.load_cert_chain(SIGNED_CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) s.write(b'VERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') s.write(b'UNVERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') HAS_KEYLOG = hasattr(ssl.SSLContext, 'keylog_filename') requires_keylog = unittest.skipUnless( HAS_KEYLOG, 'test requires OpenSSL 1.1.1 with keylog callback') class TestSSLDebug(unittest.TestCase): def keylog_lines(self, fname=os_helper.TESTFN): with open(fname) as f: return len(list(f)) @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_defaults(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) self.assertFalse(os.path.isfile(os_helper.TESTFN)) ctx.keylog_filename = os_helper.TESTFN self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) self.assertTrue(os.path.isfile(os_helper.TESTFN)) self.assertEqual(self.keylog_lines(), 1) ctx.keylog_filename = None self.assertEqual(ctx.keylog_filename, None) with self.assertRaises((IsADirectoryError, PermissionError)): # Windows raises PermissionError ctx.keylog_filename = os.path.dirname( os.path.abspath(os_helper.TESTFN)) with self.assertRaises(TypeError): ctx.keylog_filename = 1 @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_filename(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() client_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # header, 5 lines for TLS 1.3 self.assertEqual(self.keylog_lines(), 6) client_context.keylog_filename = None server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 11) client_context.keylog_filename = os_helper.TESTFN server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 21) client_context.keylog_filename = None server_context.keylog_filename = None @requires_keylog @unittest.skipIf(sys.flags.ignore_environment, "test is not compatible with ignore_environment") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_env(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with unittest.mock.patch.dict(os.environ): os.environ['SSLKEYLOGFILE'] = os_helper.TESTFN self.assertEqual(os.environ['SSLKEYLOGFILE'], os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) ctx = ssl.create_default_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) ctx = 
ssl._create_stdlib_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) def test_msg_callback(self): client_context, server_context, hostname = testing_context() def msg_cb(conn, direction, version, content_type, msg_type, data): pass self.assertIs(client_context._msg_callback, None) client_context._msg_callback = msg_cb self.assertIs(client_context._msg_callback, msg_cb) with self.assertRaises(TypeError): client_context._msg_callback = object() def test_msg_callback_tls12(self): client_context, server_context, hostname = testing_context() client_context.maximum_version = ssl.TLSVersion.TLSv1_2 msg = [] def msg_cb(conn, direction, version, content_type, msg_type, data): self.assertIsInstance(conn, ssl.SSLSocket) self.assertIsInstance(data, bytes) self.assertIn(direction, {'read', 'write'}) msg.append((direction, version, content_type, msg_type)) client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn( ("read", TLSVersion.TLSv1_2, _TLSContentType.HANDSHAKE, _TLSMessageType.SERVER_KEY_EXCHANGE), msg ) self.assertIn( ("write", TLSVersion.TLSv1_2, _TLSContentType.CHANGE_CIPHER_SPEC, _TLSMessageType.CHANGE_CIPHER_SPEC), msg ) def test_msg_callback_deadlock_bpo43577(self): client_context, server_context, hostname = testing_context() server_context2 = testing_context()[1] def msg_cb(conn, direction, version, content_type, msg_type, data): pass def sni_cb(sock, servername, ctx): sock.context = server_context2 server_context._msg_callback = msg_cb server_context.sni_callback = sni_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) def set_socket_so_linger_on_with_zero_timeout(sock): sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) class TestPreHandshakeClose(unittest.TestCase): """Verify behavior of close sockets with received data before to the handshake. 
""" class SingleConnectionTestServerThread(threading.Thread): def __init__(self, *, name, call_after_accept, timeout=None): self.call_after_accept = call_after_accept self.received_data = b'' # set by .run() self.wrap_error = None # set by .run() self.listener = None # set by .start() self.port = None # set by .start() if timeout is None: self.timeout = support.SHORT_TIMEOUT else: self.timeout = timeout super().__init__(name=name) def __enter__(self): self.start() return self def __exit__(self, *args): try: if self.listener: self.listener.close() except OSError: pass self.join() self.wrap_error = None # avoid dangling references def start(self): self.ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.ssl_ctx.verify_mode = ssl.CERT_REQUIRED self.ssl_ctx.load_verify_locations(cafile=ONLYCERT) self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) self.listener = socket.socket() self.port = socket_helper.bind_port(self.listener) self.listener.settimeout(self.timeout) self.listener.listen(1) super().start() def run(self): try: conn, address = self.listener.accept() except TimeoutError: # on timeout, just close the listener return finally: self.listener.close() with conn: if self.call_after_accept(conn): return try: tls_socket = self.ssl_ctx.wrap_socket(conn, server_side=True) except OSError as err: # ssl.SSLError inherits from OSError self.wrap_error = err else: try: self.received_data = tls_socket.recv(400) except OSError: pass # closed, protocol error, etc. def non_linux_skip_if_other_okay_error(self, err): if sys.platform == "linux": return # Expect the full test setup to always work on Linux. if (isinstance(err, ConnectionResetError) or (isinstance(err, OSError) and err.errno == errno.EINVAL) or re.search('wrong.version.number', getattr(err, "reason", ""), re.I)): # On Windows the TCP RST leads to a ConnectionResetError # (ECONNRESET) which Linux doesn't appear to surface to userspace. # If wrap_socket() winds up on the "if connected:" path and doing # the actual wrapping... we get an SSLError from OpenSSL. Typically # WRONG_VERSION_NUMBER. While appropriate, neither is the scenario # we're specifically trying to test. The way this test is written # is known to work on Linux. We'll skip it anywhere else that it # does not present as doing so. try: self.skipTest(f"Could not recreate conditions on {sys.platform}:" f" {err=}") finally: # gh-108342: Explicitly break the reference cycle err = None # If maintaining this conditional winds up being a problem. # just turn this into an unconditional skip anything but Linux. # The important thing is that our CI has the logic covered. def test_preauth_data_to_tls_server(self): server_accept_called = threading.Event() ready_for_server_wrap_socket = threading.Event() def call_after_accept(unused): server_accept_called.set() if not ready_for_server_wrap_socket.wait(support.SHORT_TIMEOUT): raise RuntimeError("wrap_socket event never set, test may fail.") return False # Tell the server thread to continue. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_server") self.enterContext(server) # starts it & unittest.TestCase stops it. with socket.socket() as client: client.connect(server.listener.getsockname()) # This forces an immediate connection close via RST on .close(). 
set_socket_so_linger_on_with_zero_timeout(client) client.setblocking(False) server_accept_called.wait() client.send(b"DELETE /data HTTP/1.0\r\n\r\n") client.close() # RST ready_for_server_wrap_socket.set() server.join() wrap_error = server.wrap_error server.wrap_error = None try: self.assertEqual(b"", server.received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle wrap_error = None server = None def test_preauth_data_to_tls_client(self): server_can_continue_with_wrap_socket = threading.Event() client_can_continue_with_wrap_socket = threading.Event() def call_after_accept(conn_to_client): if not server_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): print("ERROR: test client took too long") # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 307 Temporary Redirect\r\n" b"Location: https://example.com/someone-elses-server\r\n" b"\r\n") conn_to_client.close() # RST client_can_continue_with_wrap_socket.set() return True # Tell the server to stop. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_client") self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) with socket.socket() as client: client.connect(server.listener.getsockname()) server_can_continue_with_wrap_socket.set() if not client_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): self.fail("test server took too long") ssl_ctx = ssl.create_default_context() try: tls_client = ssl_ctx.wrap_socket( client, server_hostname="localhost") except OSError as err: # SSLError inherits from OSError wrap_error = err received_data = b"" else: wrap_error = None received_data = tls_client.recv(400) tls_client.close() server.join() try: self.assertEqual(b"", received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle with warnings_helper.check_no_resource_warning(self): wrap_error = None server = None def test_https_client_non_tls_response_ignored(self): server_responding = threading.Event() class SynchronizedHTTPSConnection(http.client.HTTPSConnection): def connect(self): # Call clear text HTTP connect(), not the encrypted HTTPS (TLS) # connect(): wrap_socket() is called manually below. http.client.HTTPConnection.connect(self) # Wait for our fault injection server to have done its thing. 
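# [editor's note] The wrap performed just below is the standard recipe for
# upgrading an already-connected plain socket to TLS after the fact: call
# SSLContext.wrap_socket() on the connected socket and pass server_hostname
# so certificate and hostname verification still apply.  Sketch outside this
# test, with "example.org" as an illustrative host only:
#
#     import socket, ssl
#     ctx = ssl.create_default_context()
#     raw = socket.create_connection(("example.org", 443))
#     tls = ctx.wrap_socket(raw, server_hostname="example.org")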
if not server_responding.wait(support.SHORT_TIMEOUT) and support.verbose: sys.stdout.write("server_responding event never set.") self.sock = self._context.wrap_socket( self.sock, server_hostname=self.host) def call_after_accept(conn_to_client): # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 402 Payment Required\r\n" b"\r\n") conn_to_client.close() # RST server_responding.set() return True # Tell the server to stop. timeout = 2.0 server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="non_tls_http_RST_responder", timeout=timeout) self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) connection = SynchronizedHTTPSConnection( server.listener.getsockname()[0], port=server.port, context=ssl.create_default_context(), timeout=timeout, ) # There are lots of reasons this raises as desired, long before this # test was added. Sending the request requires a successful TLS wrapped # socket; that fails if the connection is broken. It may seem pointless # to test this. It serves as an illustration of something that we never # want to happen... properly not happening. with warnings_helper.check_no_resource_warning(self), \ self.assertRaises(OSError): connection.request("HEAD", "/test", headers={"Host": "localhost"}) response = connection.getresponse() server.join() class TestEnumerations(unittest.TestCase): def test_tlsversion(self): class CheckedTLSVersion(enum.IntEnum): MINIMUM_SUPPORTED = _ssl.PROTO_MINIMUM_SUPPORTED SSLv3 = _ssl.PROTO_SSLv3 TLSv1 = _ssl.PROTO_TLSv1 TLSv1_1 = _ssl.PROTO_TLSv1_1 TLSv1_2 = _ssl.PROTO_TLSv1_2 TLSv1_3 = _ssl.PROTO_TLSv1_3 MAXIMUM_SUPPORTED = _ssl.PROTO_MAXIMUM_SUPPORTED enum._test_simple_enum(CheckedTLSVersion, TLSVersion) def test_tlscontenttype(self): class Checked_TLSContentType(enum.IntEnum): """Content types (record layer) See RFC 8446, section B.1 """ CHANGE_CIPHER_SPEC = 20 ALERT = 21 HANDSHAKE = 22 APPLICATION_DATA = 23 # pseudo content types HEADER = 0x100 INNER_CONTENT_TYPE = 0x101 enum._test_simple_enum(Checked_TLSContentType, _TLSContentType) def test_tlsalerttype(self): class Checked_TLSAlertType(enum.IntEnum): """Alert types for TLSContentType.ALERT messages See RFC 8466, section B.2 """ CLOSE_NOTIFY = 0 UNEXPECTED_MESSAGE = 10 BAD_RECORD_MAC = 20 DECRYPTION_FAILED = 21 RECORD_OVERFLOW = 22 DECOMPRESSION_FAILURE = 30 HANDSHAKE_FAILURE = 40 NO_CERTIFICATE = 41 BAD_CERTIFICATE = 42 UNSUPPORTED_CERTIFICATE = 43 CERTIFICATE_REVOKED = 44 CERTIFICATE_EXPIRED = 45 CERTIFICATE_UNKNOWN = 46 ILLEGAL_PARAMETER = 47 UNKNOWN_CA = 48 ACCESS_DENIED = 49 DECODE_ERROR = 50 DECRYPT_ERROR = 51 EXPORT_RESTRICTION = 60 PROTOCOL_VERSION = 70 INSUFFICIENT_SECURITY = 71 INTERNAL_ERROR = 80 INAPPROPRIATE_FALLBACK = 86 USER_CANCELED = 90 NO_RENEGOTIATION = 100 MISSING_EXTENSION = 109 UNSUPPORTED_EXTENSION = 110 CERTIFICATE_UNOBTAINABLE = 111 UNRECOGNIZED_NAME = 112 BAD_CERTIFICATE_STATUS_RESPONSE = 113 BAD_CERTIFICATE_HASH_VALUE = 114 UNKNOWN_PSK_IDENTITY = 115 CERTIFICATE_REQUIRED = 116 NO_APPLICATION_PROTOCOL = 120 enum._test_simple_enum(Checked_TLSAlertType, _TLSAlertType) def test_tlsmessagetype(self): class Checked_TLSMessageType(enum.IntEnum): """Message types (handshake protocol) See RFC 8446, section B.3 """ HELLO_REQUEST = 0 CLIENT_HELLO = 1 SERVER_HELLO = 2 HELLO_VERIFY_REQUEST = 3 
NEWSESSION_TICKET = 4 END_OF_EARLY_DATA = 5 HELLO_RETRY_REQUEST = 6 ENCRYPTED_EXTENSIONS = 8 CERTIFICATE = 11 SERVER_KEY_EXCHANGE = 12 CERTIFICATE_REQUEST = 13 SERVER_DONE = 14 CERTIFICATE_VERIFY = 15 CLIENT_KEY_EXCHANGE = 16 FINISHED = 20 CERTIFICATE_URL = 21 CERTIFICATE_STATUS = 22 SUPPLEMENTAL_DATA = 23 KEY_UPDATE = 24 NEXT_PROTO = 67 MESSAGE_HASH = 254 CHANGE_CIPHER_SPEC = 0x0101 enum._test_simple_enum(Checked_TLSMessageType, _TLSMessageType) def test_sslmethod(self): Checked_SSLMethod = enum._old_convert_( enum.IntEnum, '_SSLMethod', 'ssl', lambda name: name.startswith('PROTOCOL_') and name != 'PROTOCOL_SSLv23', source=ssl._ssl, ) # This member is assigned dynamically in `ssl.py`: Checked_SSLMethod.PROTOCOL_SSLv23 = Checked_SSLMethod.PROTOCOL_TLS enum._test_simple_enum(Checked_SSLMethod, ssl._SSLMethod) def test_options(self): CheckedOptions = enum._old_convert_( enum.IntFlag, 'Options', 'ssl', lambda name: name.startswith('OP_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedOptions, ssl.Options) def test_alertdescription(self): CheckedAlertDescription = enum._old_convert_( enum.IntEnum, 'AlertDescription', 'ssl', lambda name: name.startswith('ALERT_DESCRIPTION_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedAlertDescription, ssl.AlertDescription) def test_sslerrornumber(self): Checked_SSLErrorNumber = enum._old_convert_( enum.IntEnum, 'SSLErrorNumber', 'ssl', lambda name: name.startswith('SSL_ERROR_'), source=ssl._ssl, ) enum._test_simple_enum(Checked_SSLErrorNumber, ssl.SSLErrorNumber) def test_verifyflags(self): CheckedVerifyFlags = enum._old_convert_( enum.IntFlag, 'VerifyFlags', 'ssl', lambda name: name.startswith('VERIFY_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyFlags, ssl.VerifyFlags) def test_verifymode(self): CheckedVerifyMode = enum._old_convert_( enum.IntEnum, 'VerifyMode', 'ssl', lambda name: name.startswith('CERT_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyMode, ssl.VerifyMode) def setUpModule(): if support.verbose: plats = { 'Mac': platform.mac_ver, 'Windows': platform.win32_ver, } for name, func in plats.items(): plat = func() if plat and plat[0]: plat = '%s %r' % (name, plat) break else: plat = repr(platform.platform()) print("test_ssl: testing with %r %r" % (ssl.OPENSSL_VERSION, ssl.OPENSSL_VERSION_INFO)) print(" under %s" % plat) print(" HAS_SNI = %r" % ssl.HAS_SNI) print(" OP_ALL = 0x%8x" % ssl.OP_ALL) try: print(" OP_NO_TLSv1_1 = 0x%8x" % ssl.OP_NO_TLSv1_1) except AttributeError: pass for filename in [ CERTFILE, BYTES_CERTFILE, ONLYCERT, ONLYKEY, BYTES_ONLYCERT, BYTES_ONLYKEY, SIGNED_CERTFILE, SIGNED_CERTFILE2, SIGNING_CA, BADCERT, BADKEY, EMPTYCERT]: if not os.path.exists(filename): raise support.TestFailed("Can't read certificate file %r" % filename) thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_subprocess.py000066400000000000000000005017511471441230600221350ustar00rootroot00000000000000import unittest from unittest import mock from test import support from test.support import check_sanitizer from test.support import import_helper from test.support import os_helper from test.support import warnings_helper import subprocess import sys import signal import io import itertools import os import errno import tempfile import time import traceback import types import selectors import sysconfig import select import shutil import threading import gc import textwrap import 
json import pathlib from test.support.os_helper import FakePath try: import _testcapi except ImportError: _testcapi = None try: import pwd except ImportError: pwd = None try: import grp except ImportError: grp = None try: import fcntl except: fcntl = None if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") if not support.has_subprocess_support: raise unittest.SkipTest("test module requires subprocess") mswindows = (sys.platform == "win32") # # Depends on the following external programs: Python # if mswindows: SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' 'os.O_BINARY);') else: SETBINARY = '' NONEXISTING_CMD = ('nonexisting_i_hope',) # Ignore errors that indicate the command was not found NONEXISTING_ERRORS = (FileNotFoundError, NotADirectoryError, PermissionError) ZERO_RETURN_CMD = (sys.executable, '-c', 'pass') def setUpModule(): shell_true = shutil.which('true') if shell_true is None: return if (os.access(shell_true, os.X_OK) and subprocess.run([shell_true]).returncode == 0): global ZERO_RETURN_CMD ZERO_RETURN_CMD = (shell_true,) # Faster than Python startup. class BaseTestCase(unittest.TestCase): def setUp(self): # Try to minimize the number of children we have so this test # doesn't crash on some buildbots (Alphas in particular). support.reap_children() def tearDown(self): if not mswindows: # subprocess._active is not used on Windows and is set to None. for inst in subprocess._active: inst.wait() subprocess._cleanup() self.assertFalse( subprocess._active, "subprocess._active not empty" ) self.doCleanups() support.reap_children() class PopenTestException(Exception): pass class PopenExecuteChildRaises(subprocess.Popen): """Popen subclass for testing cleanup of subprocess.PIPE filehandles when _execute_child fails. """ def _execute_child(self, *args, **kwargs): raise PopenTestException("Forced Exception for Test") class ProcessTestCase(BaseTestCase): def test_io_buffered_by_default(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) try: self.assertIsInstance(p.stdin, io.BufferedIOBase) self.assertIsInstance(p.stdout, io.BufferedIOBase) self.assertIsInstance(p.stderr, io.BufferedIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_io_unbuffered_works(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0) try: self.assertIsInstance(p.stdin, io.RawIOBase) self.assertIsInstance(p.stdout, io.RawIOBase) self.assertIsInstance(p.stderr, io.RawIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_call_seq(self): # call() function with sequence argument rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(rc, 47) def test_call_timeout(self): # call() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.call waits for the # child. 
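# [editor's note] call() (like run()) enforces its timeout by killing the
# child and waiting for it before re-raising TimeoutExpired, which is why the
# deadlock described above cannot occur as long as the timeout machinery
# works.  A standalone equivalent of what this test exercises:
#
#     import subprocess, sys
#     try:
#         subprocess.call([sys.executable, "-c", "while True: pass"],
#                         timeout=0.1)
#     except subprocess.TimeoutExpired:
#         pass   # the spinning child has already been killed and reaped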
self.assertRaises(subprocess.TimeoutExpired, subprocess.call, [sys.executable, "-c", "while True: pass"], timeout=0.1) def test_check_call_zero(self): # check_call() function with zero return code rc = subprocess.check_call(ZERO_RETURN_CMD) self.assertEqual(rc, 0) def test_check_call_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(c.exception.returncode, 47) def test_check_output(self): # check_output() function with zero return code output = subprocess.check_output( [sys.executable, "-c", "print('BDFL')"]) self.assertIn(b'BDFL', output) with self.assertRaisesRegex(ValueError, "stdout argument not allowed, it will be overridden"): subprocess.check_output([], stdout=None) with self.assertRaisesRegex(ValueError, "check argument not allowed, it will be overridden"): subprocess.check_output([], check=False) def test_check_output_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_output( [sys.executable, "-c", "import sys; sys.exit(5)"]) self.assertEqual(c.exception.returncode, 5) def test_check_output_stderr(self): # check_output() function stderr redirected to stdout output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stderr.write('BDFL')"], stderr=subprocess.STDOUT) self.assertIn(b'BDFL', output) def test_check_output_stdin_arg(self): # check_output() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], stdin=tf) self.assertIn(b'PEAR', output) def test_check_output_input_arg(self): # check_output() can be called with input set to a string output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], input=b'pear') self.assertIn(b'PEAR', output) def test_check_output_input_none(self): """input=None has a legacy meaning of input='' on check_output.""" output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None) self.assertNotIn(b'XX', output) def test_check_output_input_none_text(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, text=True) self.assertNotIn('XX', output) def test_check_output_input_none_universal_newlines(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, universal_newlines=True) self.assertNotIn('XX', output) def test_check_output_input_none_encoding_errors(self): output = subprocess.check_output( [sys.executable, "-c", "print('foo')"], input=None, encoding='utf-8', errors='ignore') self.assertIn('foo', output) def test_check_output_stdout_arg(self): # check_output() refuses to accept 'stdout' argument with self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdout=sys.stdout) self.fail("Expected ValueError when stdout arg supplied.") self.assertIn('stdout', c.exception.args[0]) def test_check_output_stdin_with_input_arg(self): # check_output() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with 
self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdin=tf, input=b'hare') self.fail("Expected ValueError when stdin and input args supplied.") self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): # check_output() function with timeout arg with self.assertRaises(subprocess.TimeoutExpired) as c: output = subprocess.check_output( [sys.executable, "-c", "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"], # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3) self.fail("Expected TimeoutExpired.") self.assertEqual(c.exception.output, b'BDFL') def test_call_kwargs(self): # call() function with keyword args newenv = os.environ.copy() newenv["FRUIT"] = "banana" rc = subprocess.call([sys.executable, "-c", 'import sys, os;' 'sys.exit(os.getenv("FRUIT")=="banana")'], env=newenv) self.assertEqual(rc, 1) def test_invalid_args(self): # Popen() called with invalid arguments should raise TypeError # but Popen.__del__ should not complain (issue #12085) with support.captured_stderr() as s: self.assertRaises(TypeError, subprocess.Popen, invalid_arg_name=1) argcount = subprocess.Popen.__init__.__code__.co_argcount too_many_args = [0] * (argcount + 1) self.assertRaises(TypeError, subprocess.Popen, *too_many_args) self.assertEqual(s.getvalue(), '') def test_stdin_none(self): # .stdin is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) p.wait() self.assertEqual(p.stdin, None) def test_stdout_none(self): # .stdout is None when not redirected, and the child's stdout will # be inherited from the parent. In order to test this we run a # subprocess in a subprocess: # this_test # \-- subprocess created by this test (parent) # \-- subprocess created by the parent subprocess (child) # The parent doesn't specify stdout, so the child will use the # parent's stdout. This test checks that the message printed by the # child goes to the parent stdout. The parent also checks that the # child's stdout is None. See #11963. code = ('import sys; from subprocess import Popen, PIPE;' 'p = Popen([sys.executable, "-c", "print(\'test_stdout_none\')"],' ' stdin=PIPE, stderr=PIPE);' 'p.wait(); assert p.stdout is None;') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test_stdout_none') def test_stderr_none(self): # .stderr is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stdin.close) p.wait() self.assertEqual(p.stderr, None) def _assert_python(self, pre_args, **kwargs): # We include sys.exit() to prevent the test runner from hanging # whenever python is found. args = pre_args + ["import sys; sys.exit(47)"] p = subprocess.Popen(args, **kwargs) p.wait() self.assertEqual(47, p.returncode) def test_executable(self): # Check that the executable argument works. 
# # On Unix (non-Mac and non-Windows), Python looks at args[0] to # determine where its standard library is, so we need the directory # of args[0] to be valid for the Popen() call to Python to succeed. # See also issue #16170 and issue #7774. doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=sys.executable) def test_bytes_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=os.fsencode(sys.executable)) def test_pathlike_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=FakePath(sys.executable)) def test_executable_takes_precedence(self): # Check that the executable argument takes precedence over args[0]. # # Verify first that the call succeeds without the executable arg. pre_args = [sys.executable, "-c"] self._assert_python(pre_args) self.assertRaises(NONEXISTING_ERRORS, self._assert_python, pre_args, executable=NONEXISTING_CMD[0]) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_executable_replaces_shell(self): # Check that the executable argument replaces the default shell # when shell=True. self._assert_python([], executable=sys.executable, shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_bytes_executable_replaces_shell(self): self._assert_python([], executable=os.fsencode(sys.executable), shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_pathlike_executable_replaces_shell(self): self._assert_python([], executable=FakePath(sys.executable), shell=True) # For use in the test_cwd* tests below. def _normalize_cwd(self, cwd): # Normalize an expected cwd (for Tru64 support). # We can't use os.path.realpath since it doesn't expand Tru64 {memb} # strings. See bug #1063571. with os_helper.change_cwd(cwd): return os.getcwd() # For use in the test_cwd* tests below. def _split_python_path(self): # Return normalized (python_dir, python_base). python_path = os.path.realpath(sys.executable) return os.path.split(python_path) # For use in the test_cwd* tests below. def _assert_cwd(self, expected_cwd, python_arg, **kwargs): # Invoke Python via Popen, and assert that (1) the call succeeds, # and that (2) the current working directory of the child process # matches *expected_cwd*. p = subprocess.Popen([python_arg, "-c", "import os, sys; " "buf = sys.stdout.buffer; " "buf.write(os.getcwd().encode()); " "buf.flush(); " "sys.exit(47)"], stdout=subprocess.PIPE, **kwargs) self.addCleanup(p.stdout.close) p.wait() self.assertEqual(47, p.returncode) normcase = os.path.normcase self.assertEqual(normcase(expected_cwd), normcase(p.stdout.read().decode())) def test_cwd(self): # Check that cwd changes the cwd for the child process. 
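# [editor's note] A minimal illustration of the behaviour the _assert_cwd()
# helper checks: the cwd argument becomes the child's working directory, so
# os.getcwd() in the child (and relative-path resolution) follows it rather
# than the parent's directory.  Hedged sketch:
#
#     import subprocess, sys, tempfile
#     out = subprocess.check_output(
#         [sys.executable, "-c", "import os; print(os.getcwd())"],
#         cwd=tempfile.gettempdir(), text=True)
#     # out.strip() names the temporary directory (modulo symlink resolution)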
temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=temp_dir) def test_cwd_with_bytes(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=os.fsencode(temp_dir)) def test_cwd_with_pathlike(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=FakePath(temp_dir)) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_arg(self): # Check that Popen looks for args[0] relative to cwd if args[0] # is relative. python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python]) self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, rel_python, cwd=python_dir) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_executable(self): # Check that Popen looks for executable relative to cwd if executable # is relative (and that executable takes precedence over args[0]). python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) doesntexist = "somethingyoudonthave" with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python) self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python, cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, doesntexist, executable=rel_python, cwd=python_dir) def test_cwd_with_absolute_arg(self): # Check that Popen can find the executable when the cwd is wrong # if args[0] is an absolute path. python_dir, python_base = self._split_python_path() abs_python = os.path.join(python_dir, python_base) rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_dir() as wrong_dir: # Before calling with an absolute path, confirm that using a # relative path fails. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) wrong_dir = self._normalize_cwd(wrong_dir) self._assert_cwd(wrong_dir, abs_python, cwd=wrong_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') def test_executable_with_cwd(self): python_dir, python_base = self._split_python_path() python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, "somethingyoudonthave", executable=sys.executable, cwd=python_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') @unittest.skipIf(sysconfig.is_python_build(), "need an installed Python. See #7774") def test_executable_without_cwd(self): # For a normal installation, it should work without 'cwd' # argument. For test runs in the build directory, see #7774. 
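# [editor's note] The "somethingyoudonthave" placeholder used as args[0] in
# these executable-with/without-cwd tests highlights the documented split
# between args and executable on POSIX: executable names the program actually
# exec'd, while args[0] is still handed to the child as its argv[0].
# Illustrative sketch (the first list element is a made-up label, not a real
# program):
#
#     import subprocess, sys
#     subprocess.run(["not-really-python", "-c",
#                     "import sys; print(sys.argv)"],
#                    executable=sys.executable)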
self._assert_cwd(os.getcwd(), "somethingyoudonthave", executable=sys.executable) def test_stdin_pipe(self): # stdin redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.stdin.write(b"pear") p.stdin.close() p.wait() self.assertEqual(p.returncode, 1) def test_stdin_filedes(self): # stdin is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() os.write(d, b"pear") os.lseek(d, 0, 0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=d) p.wait() self.assertEqual(p.returncode, 1) def test_stdin_fileobj(self): # stdin is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b"pear") tf.seek(0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=tf) p.wait() self.assertEqual(p.returncode, 1) def test_stdout_pipe(self): # stdout redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read(), b"orange") def test_stdout_filedes(self): # stdout is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"orange") def test_stdout_fileobj(self): # stdout is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"orange") def test_stderr_pipe(self): # stderr redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=subprocess.PIPE) with p: self.assertEqual(p.stderr.read(), b"strawberry") def test_stderr_filedes(self): # stderr is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"strawberry") def test_stderr_fileobj(self): # stderr is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"strawberry") def test_stderr_redirect_with_no_stdout_redirect(self): # test stderr=STDOUT while stdout=None (not set) # - grandchild prints to stderr # - child redirects grandchild's stderr to its stdout # - the parent should get grandchild's stderr in child's stdout p = subprocess.Popen([sys.executable, "-c", 'import sys, subprocess;' 'rc = subprocess.call([sys.executable, "-c",' ' "import sys;"' ' "sys.stderr.write(\'42\')"],' ' stderr=subprocess.STDOUT);' 'sys.exit(rc)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() #NOTE: stdout should get stderr from grandchild self.assertEqual(stdout, b'42') self.assertEqual(stderr, b'') # should be empty self.assertEqual(p.returncode, 0) def test_stdout_stderr_pipe(self): # capture stdout and stderr to the same pipe p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=subprocess.PIPE, 
stderr=subprocess.STDOUT) with p: self.assertEqual(p.stdout.read(), b"appleorange") def test_stdout_stderr_file(self): # capture stdout and stderr to the same open file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=tf, stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"appleorange") def test_stdout_filedes_of_stdout(self): # stdout is set to 1 (#1531862). # To avoid printing the text on stdout, we do something similar to # test_stdout_none (see above). The parent subprocess calls the child # subprocess passing stdout=1, and this test uses stdout=PIPE in # order to capture and check the output of the parent. See #11963. code = ('import sys, subprocess; ' 'rc = subprocess.call([sys.executable, "-c", ' ' "import os, sys; sys.exit(os.write(sys.stdout.fileno(), ' 'b\'test with stdout=1\'))"], stdout=1); ' 'assert rc == 18') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test with stdout=1') def test_stdout_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'for i in range(10240):' 'print("x" * 1024)'], stdout=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdout, None) def test_stderr_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys\n' 'for i in range(10240):' 'sys.stderr.write("x" * 1024)'], stderr=subprocess.DEVNULL) p.wait() self.assertEqual(p.stderr, None) def test_stdin_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdin.read(1)'], stdin=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdin, None) @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesizes(self): test_pipe_r, test_pipe_w = os.pipe() try: # Get the default pipesize with F_GETPIPE_SZ pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) finally: os.close(test_pipe_r) os.close(test_pipe_w) pipesize = pipesize_default // 2 if pipesize < 512: # the POSIX minimum raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=pipesize) try: for fifo in [p.stdin, p.stdout, p.stderr]: self.assertEqual( fcntl.fcntl(fifo.fileno(), fcntl.F_GETPIPE_SZ), pipesize) # Windows pipe size can be acquired via GetNamedPipeInfoFunction # https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-getnamedpipeinfo # However, this function is not yet in _winapi. 
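# [editor's note] The pipe-size assertions in these pipesize tests rely on
# Linux-specific fcntl commands: F_GETPIPE_SZ reports a pipe's capacity and
# F_SETPIPE_SZ changes it, which is what Popen(pipesize=...) uses where the
# platform supports it.  A small standalone probe, assuming a Linux-style
# fcntl module:
#
#     import fcntl, os
#     r, w = os.pipe()
#     try:
#         size = fcntl.fcntl(w, fcntl.F_GETPIPE_SZ)   # commonly 65536
#     finally:
#         os.close(r)
#         os.close(w)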
p.stdin.write(b"pear") p.stdin.close() p.stdout.close() p.stderr.close() finally: p.kill() p.wait() @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesize_default(self): proc = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=-1) with proc: try: fp_r, fp_w = os.pipe() try: default_read_pipesize = fcntl.fcntl(fp_r, fcntl.F_GETPIPE_SZ) default_write_pipesize = fcntl.fcntl(fp_w, fcntl.F_GETPIPE_SZ) finally: os.close(fp_r) os.close(fp_w) self.assertEqual( fcntl.fcntl(proc.stdin.fileno(), fcntl.F_GETPIPE_SZ), default_read_pipesize) self.assertEqual( fcntl.fcntl(proc.stdout.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) self.assertEqual( fcntl.fcntl(proc.stderr.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) # On other platforms we cannot test the pipe size (yet). But above # code using pipesize=-1 should not crash. finally: proc.kill() def test_env(self): newenv = os.environ.copy() newenv["FRUIT"] = "orange" with subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_duplicate_envs(self): newenv = os.environ.copy() newenv["fRUit"] = "cherry" newenv["fruit"] = "lemon" newenv["FRUIT"] = "orange" newenv["frUit"] = "banana" with subprocess.Popen(["CMD", "/c", "SET", "fruit"], stdout=subprocess.PIPE, env=newenv) as p: stdout, _ = p.communicate() self.assertEqual(stdout.strip(), b"frUit=banana") # Windows requires at least the SYSTEMROOT environment variable to start # Python @unittest.skipIf(sys.platform == 'win32', 'cannot test an empty env on Windows') @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'with an empty environment.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_empty_env(self): """Verify that env={} is as empty as possible.""" def is_env_var_to_ignore(n): """Determine if an environment variable is under our control.""" # This excludes some __CF_* and VERSIONER_* keys MacOS insists # on adding even when the environment in exec is empty. # Gentoo sandboxes also force LD_PRELOAD and SANDBOX_* to exist. 
return ('VERSIONER' in n or '__CF' in n or # MacOS n == 'LD_PRELOAD' or n.startswith('SANDBOX') or # Gentoo n == 'LC_CTYPE') # Locale coercion triggered with subprocess.Popen([sys.executable, "-c", 'import os; print(list(os.environ.keys()))'], stdout=subprocess.PIPE, env={}) as p: stdout, stderr = p.communicate() child_env_names = eval(stdout.strip()) self.assertIsInstance(child_env_names, list) child_env_names = [k for k in child_env_names if not is_env_var_to_ignore(k)] self.assertEqual(child_env_names, []) @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'without some system environments.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_one_environment_variable(self): newenv = {'fruit': 'orange'} cmd = [sys.executable, '-c', 'import sys,os;' 'sys.stdout.write("fruit="+os.getenv("fruit"))'] if sys.platform == "win32": cmd = ["CMD", "/c", "SET", "fruit"] with subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() if p.returncode and support.verbose: print("STDOUT:", stdout.decode("ascii", "replace")) print("STDERR:", stderr.decode("ascii", "replace")) self.assertEqual(p.returncode, 0) self.assertEqual(stdout.strip(), b"fruit=orange") def test_invalid_cmd(self): # null character in the command name cmd = sys.executable + '\0' with self.assertRaises(ValueError): subprocess.Popen([cmd, "-c", "pass"]) # null character in the command argument with self.assertRaises(ValueError): subprocess.Popen([sys.executable, "-c", "pass#\0"]) def test_invalid_env(self): # null character in the environment variable name newenv = os.environ.copy() newenv["FRUIT\0VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # null character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange\0VEGETABLE=cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable name newenv = os.environ.copy() newenv["FRUIT=ORANGE"] = "lemon" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange=lemon" with subprocess.Popen([sys.executable, "-c", 'import sys, os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange=lemon") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_invalid_env(self): # '=' in the environment variable name newenv = os.environ.copy() newenv["FRUIT=VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) newenv = os.environ.copy() newenv["==FRUIT"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) def test_communicate_stdin(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.communicate(b"pear") self.assertEqual(p.returncode, 1) def test_communicate_stdout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("pineapple")'], stdout=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, b"pineapple") self.assertEqual(stderr, None) def test_communicate_stderr(self): p = 
subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("pineapple")'], stderr=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, b"pineapple") def test_communicate(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") self.assertEqual(stderr, b"pineapple") def test_communicate_timeout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stderr.write("pineapple\\n");' 'time.sleep(1);' 'sys.stderr.write("pear\\n");' 'sys.stdout.write(sys.stdin.read())'], universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, "banana", timeout=0.3) # Make sure we can keep waiting for it, and that we get the whole output # after it completes. (stdout, stderr) = p.communicate() self.assertEqual(stdout, "banana") self.assertEqual(stderr.encode(), b"pineapple\npear\n") def test_communicate_timeout_large_output(self): # Test an expiring timeout while the child is outputting lots of data. p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));'], stdout=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, timeout=0.4) (stdout, _) = p.communicate() self.assertEqual(len(stdout), 4 * 64 * 1024) # Test for the fd leak reported in http://bugs.python.org/issue2791. def test_communicate_pipe_fd_leak(self): for stdin_pipe in (False, True): for stdout_pipe in (False, True): for stderr_pipe in (False, True): options = {} if stdin_pipe: options['stdin'] = subprocess.PIPE if stdout_pipe: options['stdout'] = subprocess.PIPE if stderr_pipe: options['stderr'] = subprocess.PIPE if not options: continue p = subprocess.Popen(ZERO_RETURN_CMD, **options) p.communicate() if p.stdin is not None: self.assertTrue(p.stdin.closed) if p.stdout is not None: self.assertTrue(p.stdout.closed) if p.stderr is not None: self.assertTrue(p.stderr.closed) def test_communicate_returns(self): # communicate() should return None if no redirection is active p = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(47)"]) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, None) def test_communicate_pipe_buf(self): # communicate() with writes larger than pipe_buf # This test will probably deadlock rather than fail, if # communicate() does not work properly. 
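# --- Illustrative standalone sketch of the pattern test_communicate_timeout
# above exercises: when communicate() raises TimeoutExpired the child keeps
# running, and a second communicate() call waits for it and returns the
# complete output.
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.5); print('done')"],
    stdout=subprocess.PIPE, text=True)
try:
    proc.communicate(timeout=0.05)
except subprocess.TimeoutExpired:
    out, _ = proc.communicate()   # second call blocks until the child exits
    assert out == "done\n"
# ---------------------------------------------------------------------------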
x, y = os.pipe() os.close(x) os.close(y) p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read(47));' 'sys.stderr.write("x" * %d);' 'sys.stdout.write(sys.stdin.read())' % support.PIPE_MAX_SIZE], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) string_to_write = b"a" * support.PIPE_MAX_SIZE (stdout, stderr) = p.communicate(string_to_write) self.assertEqual(stdout, string_to_write) def test_writes_before_communicate(self): # stdin.write before communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.stdin.write(b"banana") (stdout, stderr) = p.communicate(b"split") self.assertEqual(stdout, b"bananasplit") self.assertEqual(stderr, b"") def test_universal_newlines_and_text(self): args = [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(sys.stdin.readline().encode());' 'buf.flush();' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(sys.stdin.read().encode());' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'] for extra_kwarg in ('universal_newlines', 'text'): p = subprocess.Popen(args, **{'stdin': subprocess.PIPE, 'stdout': subprocess.PIPE, extra_kwarg: True}) with p: p.stdin.write("line1\n") p.stdin.flush() self.assertEqual(p.stdout.readline(), "line1\n") p.stdin.write("line3\n") p.stdin.close() self.addCleanup(p.stdout.close) self.assertEqual(p.stdout.readline(), "line2\n") self.assertEqual(p.stdout.read(6), "line3\n") self.assertEqual(p.stdout.read(), "line4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate(self): # universal newlines through communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=1) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate() self.assertEqual(stdout, "line2\nline4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate_stdin(self): # universal newlines through communicate(), with only stdin p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.readline() assert s == "line1\\n", repr(s) s = sys.stdin.read() assert s == "line3\\n", repr(s) ''')], stdin=subprocess.PIPE, universal_newlines=1) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_input_none(self): # Test communicate(input=None) with universal newlines. # # We set stdout to PIPE because, as of this writing, a different # code path is tested when the number of pipes is zero or one. 
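# --- Small illustrative sketch (assumes a POSIX host), mirroring the
# universal-newlines tests above: with text=True every "\r\n" and "\r" in
# the child's output is translated to "\n" on the way back to the parent.
import subprocess
import sys

out = subprocess.run(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'line1\\r\\nline2\\rline3\\n')"],
    stdout=subprocess.PIPE, text=True).stdout
assert out == "line1\nline2\nline3\n"
# ---------------------------------------------------------------------------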
p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) p.communicate() self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_stdin_stdout_stderr(self): # universal newlines through communicate(), with stdin, stdout, stderr p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.buffer.readline() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line2\\r") sys.stderr.buffer.write(b"eline2\\n") s = sys.stdin.buffer.read() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line4\\n") sys.stdout.buffer.write(b"line5\\r\\n") sys.stderr.buffer.write(b"eline6\\r") sys.stderr.buffer.write(b"eline7\\r\\nz") ''')], stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) self.assertEqual("line1\nline2\nline3\nline4\nline5\n", stdout) # Python debug build push something like "[42442 refs]\n" # to stderr at exit of subprocess. self.assertTrue(stderr.startswith("eline2\neline6\neline7\n")) def test_universal_newlines_communicate_encodings(self): # Check that universal newlines mode works for various encodings, # in particular for encodings in the UTF-16 and UTF-32 families. # See issue #15595. # # UTF-16 and UTF-32-BE are sufficient to check both with BOM and # without, and UTF-16 and UTF-32. for encoding in ['utf-16', 'utf-32-be']: code = ("import sys; " r"sys.stdout.buffer.write('1\r\n2\r3\n4'.encode('%s'))" % encoding) args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding=encoding) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '1\n2\n3\n4') def test_communicate_errors(self): for errors, expected in [ ('ignore', ''), ('replace', '\ufffd\ufffd'), ('surrogateescape', '\udc80\udc80'), ('backslashreplace', '\\x80\\x80'), ]: code = ("import sys; " r"sys.stdout.buffer.write(b'[\x80\x80]')") args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8', errors=errors) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '[{}]'.format(expected)) def test_no_leaking(self): # Make sure we leak no resources if not mswindows: max_handles = 1026 # too much for most UNIX systems else: max_handles = 2050 # too much for (at least some) Windows setups handles = [] tmpdir = tempfile.mkdtemp() try: for i in range(max_handles): try: tmpfile = os.path.join(tmpdir, os_helper.TESTFN) handles.append(os.open(tmpfile, os.O_WRONLY|os.O_CREAT)) except OSError as e: if e.errno != errno.EMFILE: raise break else: self.skipTest("failed to reach the file descriptor limit " "(tried %d)" % max_handles) # Close a couple of them (should be enough for a subprocess) for i in range(10): os.close(handles.pop()) # Loop creating some subprocesses. If one of them leaks some fds, # the next loop iteration will fail by reaching the max fd limit. 
for i in range(15): p = subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.read())"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate(b"lime")[0] self.assertEqual(data, b"lime") finally: for h in handles: os.close(h) shutil.rmtree(tmpdir) def test_list2cmdline(self): self.assertEqual(subprocess.list2cmdline(['a b c', 'd', 'e']), '"a b c" d e') self.assertEqual(subprocess.list2cmdline(['ab"c', '\\', 'd']), 'ab\\"c \\ d') self.assertEqual(subprocess.list2cmdline(['ab"c', ' \\', 'd']), 'ab\\"c " \\\\" d') self.assertEqual(subprocess.list2cmdline(['a\\\\\\b', 'de fg', 'h']), 'a\\\\\\b "de fg" h') self.assertEqual(subprocess.list2cmdline(['a\\"b', 'c', 'd']), 'a\\\\\\"b c d') self.assertEqual(subprocess.list2cmdline(['a\\\\b c', 'd', 'e']), '"a\\\\b c" d e') self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), '"a\\\\b\\ c" d e') self.assertEqual(subprocess.list2cmdline(['ab', '']), 'ab ""') def test_poll(self): p = subprocess.Popen([sys.executable, "-c", "import os; os.read(0, 1)"], stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) self.assertIsNone(p.poll()) os.write(p.stdin.fileno(), b'A') p.wait() # Subsequent invocations should just return the returncode self.assertEqual(p.poll(), 0) def test_wait(self): p = subprocess.Popen(ZERO_RETURN_CMD) self.assertEqual(p.wait(), 0) # Subsequent invocations should just return the returncode self.assertEqual(p.wait(), 0) def test_wait_timeout(self): p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.3)"]) with self.assertRaises(subprocess.TimeoutExpired) as c: p.wait(timeout=0.0001) self.assertIn("0.0001", str(c.exception)) # For coverage of __str__. self.assertEqual(p.wait(timeout=support.SHORT_TIMEOUT), 0) def test_invalid_bufsize(self): # an invalid type of the bufsize argument should raise # TypeError. with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, "orange") def test_bufsize_is_none(self): # bufsize=None should be the same as bufsize=0. p = subprocess.Popen(ZERO_RETURN_CMD, None) self.assertEqual(p.wait(), 0) # Again with keyword arg p = subprocess.Popen(ZERO_RETURN_CMD, bufsize=None) self.assertEqual(p.wait(), 0) def _test_bufsize_equal_one(self, line, expected, universal_newlines): # subprocess may deadlock with bufsize=1, see issue #21332 with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.readline());" "sys.stdout.flush()"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, bufsize=1, universal_newlines=universal_newlines) as p: p.stdin.write(line) # expect that it flushes the line in text mode os.close(p.stdin.fileno()) # close it without flushing the buffer read_line = p.stdout.readline() with support.SuppressCrashReport(): try: p.stdin.close() except OSError: pass p.stdin = None self.assertEqual(p.returncode, 0) self.assertEqual(read_line, expected) def test_bufsize_equal_one_text_mode(self): # line is flushed in text mode with bufsize=1. # we should get the full line in return line = "line\n" self._test_bufsize_equal_one(line, line, universal_newlines=True) def test_bufsize_equal_one_binary_mode(self): # line is not flushed in binary mode with bufsize=1. 
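# --- Illustrative standalone sketch of the poll()/wait() contract exercised
# by test_poll, test_wait and test_wait_timeout above (the sleep only needs
# to outlive the first poll() call): poll() returns None while the child is
# running, wait() blocks until it exits, and both then report the same
# returncode.
import subprocess
import sys

p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
assert p.poll() is None    # still running
assert p.wait() == 0       # blocks until exit
assert p.poll() == 0       # later calls just return the stored returncode
# ---------------------------------------------------------------------------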
# we should get empty response line = b'line' + os.linesep.encode() # assume ascii-based locale with self.assertWarnsRegex(RuntimeWarning, 'line buffering'): self._test_bufsize_equal_one(line, b'', universal_newlines=False) @support.requires_resource('cpu') def test_leaking_fds_on_error(self): # see bug #5179: Popen leaks file descriptors to PIPEs if # the child fails to execute; this will eventually exhaust # the maximum number of open fds. 1024 seems a very common # value for that limit, but Windows has 2048, so we loop # 1024 times (each call leaked two fds). for i in range(1024): with self.assertRaises(NONEXISTING_ERRORS): subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def test_nonexisting_with_pipes(self): # bpo-30121: Popen with pipes must close properly pipes on error. # Previously, os.close() was called with a Windows handle which is not # a valid file descriptor. # # Run the test in a subprocess to control how the CRT reports errors # and to get stderr content. try: import msvcrt msvcrt.CrtSetReportMode except (AttributeError, ImportError): self.skipTest("need msvcrt.CrtSetReportMode") code = textwrap.dedent(f""" import msvcrt import subprocess cmd = {NONEXISTING_CMD!r} for report_type in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]: msvcrt.CrtSetReportMode(report_type, msvcrt.CRTDBG_MODE_FILE) msvcrt.CrtSetReportFile(report_type, msvcrt.CRTDBG_FILE_STDERR) try: subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError: pass """) cmd = [sys.executable, "-c", code] proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True) with proc: stderr = proc.communicate()[1] self.assertEqual(stderr, "") self.assertEqual(proc.returncode, 0) def test_double_close_on_error(self): # Issue #18851 fds = [] def open_fds(): for i in range(20): fds.extend(os.pipe()) time.sleep(0.001) t = threading.Thread(target=open_fds) t.start() try: with self.assertRaises(EnvironmentError): subprocess.Popen(NONEXISTING_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: t.join() exc = None for fd in fds: # If a double close occurred, some of those fds will # already have been closed by mistake, and os.close() # here will raise. try: os.close(fd) except OSError as e: exc = e if exc is not None: raise exc def test_threadsafe_wait(self): """Issue21291: Popen.wait() needs to be threadsafe for returncode.""" proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(12)']) self.assertEqual(proc.returncode, None) results = [] def kill_proc_timer_thread(): results.append(('thread-start-poll-result', proc.poll())) # terminate it from the thread and wait for the result. proc.kill() proc.wait() results.append(('thread-after-kill-and-wait', proc.returncode)) # this wait should be a no-op given the above. proc.wait() results.append(('thread-after-second-wait', proc.returncode)) # This is a timing sensitive test, the failure mode is # triggered when both the main thread and this thread are in # the wait() call at once. The delay here is to allow the # main thread to most likely be blocked in its wait() call. t = threading.Timer(0.2, kill_proc_timer_thread) t.start() if mswindows: expected_errorcode = 1 else: # Should be -9 because of the proc.kill() from the thread. expected_errorcode = -9 # Wait for the process to finish; the thread should kill it # long before it finishes on its own. Supplying a timeout # triggers a different code path for better coverage. 
proc.wait(timeout=support.SHORT_TIMEOUT) self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in wait from main thread") # This should be a no-op with no change in returncode. proc.wait() self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in second main wait.") t.join() # Ensure that all of the thread results are as expected. # When a race condition occurs in wait(), the returncode could # be set by the wrong thread that doesn't actually have it # leading to an incorrect value. self.assertEqual([('thread-start-poll-result', None), ('thread-after-kill-and-wait', expected_errorcode), ('thread-after-second-wait', expected_errorcode)], results) def test_issue8780(self): # Ensure that stdout is inherited from the parent # if stdout=PIPE is not used code = ';'.join(( 'import subprocess, sys', 'retcode = subprocess.call(' "[sys.executable, '-c', 'print(\"Hello World!\")'])", 'assert retcode == 0')) output = subprocess.check_output([sys.executable, '-c', code]) self.assertTrue(output.startswith(b'Hello World!'), ascii(output)) def test_handles_closed_on_exception(self): # If CreateProcess exits with an error, ensure the # duplicate output handles are released ifhandle, ifname = tempfile.mkstemp() ofhandle, ofname = tempfile.mkstemp() efhandle, efname = tempfile.mkstemp() try: subprocess.Popen (["*"], stdin=ifhandle, stdout=ofhandle, stderr=efhandle) except OSError: os.close(ifhandle) os.remove(ifname) os.close(ofhandle) os.remove(ofname) os.close(efhandle) os.remove(efname) self.assertFalse(os.path.exists(ifname)) self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) def test_communicate_epipe(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.communicate(b"x" * 2**20) def test_repr(self): path_cmd = pathlib.Path("my-tool.py") pathlib_cls = path_cmd.__class__.__name__ cases = [ ("ls", True, 123, ""), ('a' * 100, True, 0, ""), (["ls"], False, None, ""), (["ls", '--my-opts', 'a' * 100], False, None, ""), (path_cmd, False, 7, f"") ] with unittest.mock.patch.object(subprocess.Popen, '_execute_child'): for cmd, shell, code, sx in cases: p = subprocess.Popen(cmd, shell=shell) p.returncode = code self.assertEqual(repr(p), sx) def test_communicate_epipe_only_stdin(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) p.wait() p.communicate(b"x" * 2**20) @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), "Requires signal.SIGUSR1") @unittest.skipUnless(hasattr(os, 'kill'), "Requires os.kill") @unittest.skipUnless(hasattr(os, 'getppid'), "Requires os.getppid") def test_communicate_eintr(self): # Issue #12493: communicate() should handle EINTR def handler(signum, frame): pass old_handler = signal.signal(signal.SIGUSR1, handler) self.addCleanup(signal.signal, signal.SIGUSR1, old_handler) args = [sys.executable, "-c", 'import os, signal;' 'os.kill(os.getppid(), signal.SIGUSR1)'] for stream in ('stdout', 'stderr'): kw = {stream: subprocess.PIPE} with subprocess.Popen(args, **kw) as process: # communicate() will be interrupted by SIGUSR1 process.communicate() # This test is Linux-ish specific for simplicity to at least have # some coverage. It is not a platform specific bug. 
@unittest.skipUnless(os.path.isdir('/proc/%d/fd' % os.getpid()), "Linux specific") def test_failed_child_execute_fd_leak(self): """Test for the fork() failure fd leak reported in issue16327.""" fd_directory = '/proc/%d/fd' % os.getpid() fds_before_popen = os.listdir(fd_directory) with self.assertRaises(PopenTestException): PopenExecuteChildRaises( ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # NOTE: This test doesn't verify that the real _execute_child # does not close the file descriptors itself on the way out # during an exception. Code inspection has confirmed that. fds_after_exception = os.listdir(fd_directory) self.assertEqual(fds_before_popen, fds_after_exception) @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_includes_filename(self): with self.assertRaises(FileNotFoundError) as c: subprocess.call(['/opt/nonexistent_binary', 'with', 'some', 'args']) self.assertEqual(c.exception.filename, '/opt/nonexistent_binary') @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_with_bad_cwd(self): with self.assertRaises(FileNotFoundError) as c: subprocess.Popen(['exit', '0'], cwd='/some/nonexistent/directory') self.assertEqual(c.exception.filename, '/some/nonexistent/directory') def test_class_getitems(self): self.assertIsInstance(subprocess.Popen[bytes], types.GenericAlias) self.assertIsInstance(subprocess.CompletedProcess[str], types.GenericAlias) @unittest.skipIf(not sysconfig.get_config_var("HAVE_VFORK"), "vfork() not enabled by configure.") @mock.patch("subprocess._fork_exec") def test__use_vfork(self, mock_fork_exec): self.assertTrue(subprocess._USE_VFORK) # The default value regardless. mock_fork_exec.side_effect = RuntimeError("just testing args") with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) mock_fork_exec.assert_called_once() self.assertTrue(mock_fork_exec.call_args.args[-1]) with mock.patch.object(subprocess, '_USE_VFORK', False): with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) self.assertFalse(mock_fork_exec.call_args_list[-1].args[-1]) class RunFuncTestCase(BaseTestCase): def run_python(self, code, **kwargs): """Run Python code in a subprocess using subprocess.run""" argv = [sys.executable, "-c", code] return subprocess.run(argv, **kwargs) def test_returncode(self): # call() function with sequence argument cp = self.run_python("import sys; sys.exit(47)") self.assertEqual(cp.returncode, 47) with self.assertRaises(subprocess.CalledProcessError): cp.check_returncode() def test_check(self): with self.assertRaises(subprocess.CalledProcessError) as c: self.run_python("import sys; sys.exit(47)", check=True) self.assertEqual(c.exception.returncode, 47) def test_check_zero(self): # check_returncode shouldn't raise when returncode is zero cp = subprocess.run(ZERO_RETURN_CMD, check=True) self.assertEqual(cp.returncode, 0) def test_timeout(self): # run() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.run waits for the # child. 
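# --- Illustrative sketch (not one of the RunFuncTestCase tests) of the
# check= behaviour the surrounding tests exercise: check=True turns a
# non-zero exit status into CalledProcessError, and
# CompletedProcess.check_returncode() does the same after the fact.
import subprocess
import sys

cp = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
assert cp.returncode == 3
try:
    cp.check_returncode()
except subprocess.CalledProcessError as exc:
    assert exc.returncode == 3

try:
    subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"],
                   check=True)
except subprocess.CalledProcessError as exc:
    assert exc.returncode == 3
# ---------------------------------------------------------------------------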
with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: output = self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: cp = self.run_python(( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. 
timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) def test_run_with_pathlike_path(self): # bpo-31961: test run(pathlike_object) # the name of a command that can be run without # any arguments that exit fast prog = 'tree.com' if mswindows else 'ls' path = shutil.which(prog) if path is None: self.skipTest(f'{prog} required for this test') path = FakePath(path) res = subprocess.run(path, stdout=subprocess.DEVNULL) self.assertEqual(res.returncode, 0) with self.assertRaises(TypeError): subprocess.run(path, stdout=subprocess.DEVNULL, shell=True) def test_run_with_bytes_path_and_arguments(self): # bpo-31961: test run([bytes_object, b'additional arguments']) path = os.fsencode(sys.executable) args = [path, '-c', b'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_run_with_pathlike_path_and_arguments(self): # bpo-31961: test run([pathlike_object, 'additional arguments']) path = FakePath(sys.executable) args = [path, '-c', 'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) @unittest.skipUnless(mswindows, "Maybe test trigger a leak on Ubuntu") def test_run_with_an_empty_env(self): # gh-105436: fix subprocess.run(..., env={}) broken on Windows args = [sys.executable, "-c", 'pass'] # Ignore subprocess errors - we only care that the API doesn't # raise an OSError subprocess.run(args, env={}) def test_capture_output(self): cp = self.run_python(("import sys;" "sys.stdout.write('BDFL'); " "sys.stderr.write('FLUFL')"), capture_output=True) self.assertIn(b'BDFL', cp.stdout) self.assertIn(b'FLUFL', cp.stderr) def test_stdout_with_capture_output_arg(self): # run() refuses to accept 'stdout' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stdout and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stdout=tf) self.assertIn('stdout', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) def test_stderr_with_capture_output_arg(self): # run() refuses to accept 'stderr' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stderr and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stderr=tf) self.assertIn('stderr', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. @unittest.skipIf(mswindows, "requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): """Output capturing after a timeout mustn't hang forever on open filehandles.""" before_secs = time.monotonic() try: subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. 
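# --- Illustrative sketch of capture_output=True as exercised by
# test_capture_output above: both streams are collected and exposed as
# bytes on the CompletedProcess (capture_output may not be combined with
# explicit stdout=/stderr= arguments).
import subprocess
import sys

cp = subprocess.run(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stderr.write('err')"],
    capture_output=True)
assert cp.stdout == b"out"
assert cp.stderr == b"err"
# ---------------------------------------------------------------------------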
except subprocess.TimeoutExpired as exc: after_secs = time.monotonic() stacks = traceback.format_exc() # assertRaises doesn't give this. else: self.fail("TimeoutExpired not raised.") self.assertLess(after_secs - before_secs, 1.5, msg="TimeoutExpired was delayed! Bad traceback:\n```\n" f"{stacks}```") def test_encoding_warning(self): code = textwrap.dedent("""\ from subprocess import * run("echo hello", shell=True, text=True) check_output("echo hello", shell=True, text=True) """) cp = subprocess.run([sys.executable, "-Xwarn_default_encoding", "-c", code], capture_output=True) lines = cp.stderr.splitlines() self.assertEqual(len(lines), 2, lines) self.assertTrue(lines[0].startswith(b":2: EncodingWarning: ")) self.assertTrue(lines[1].startswith(b":3: EncodingWarning: ")) def _get_test_grp_name(): for name_group in ('staff', 'nogroup', 'grp', 'nobody', 'nfsnobody'): if grp: try: grp.getgrnam(name_group) except KeyError: continue return name_group else: raise unittest.SkipTest('No identified group name to use for this test on this platform.') @unittest.skipIf(mswindows, "POSIX specific tests") class POSIXProcessTestCase(BaseTestCase): def setUp(self): super().setUp() self._nonexistent_dir = "/_this/pa.th/does/not/exist" def _get_chdir_exception(self): try: os.chdir(self._nonexistent_dir) except OSError as e: # This avoids hard coding the errno value or the OS perror() # string and instead capture the exception that we want to see # below for comparison. desired_exception = e else: self.fail("chdir to nonexistent directory %s succeeded." % self._nonexistent_dir) return desired_exception def test_exception_cwd(self): """Test error in the child raised in the parent for a bad cwd.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], cwd=self._nonexistent_dir) except OSError as e: # Test that the child process chdir failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_executable(self): """Test error in the child raised in the parent for a bad executable.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], executable=self._nonexistent_dir) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_args_0(self): """Test error in the child raised in the parent for a bad args[0].""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([self._nonexistent_dir, "-c", ""]) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. 
self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) # We mock the __del__ method for Popen in the next two tests # because it does cleanup based on the pid returned by fork_exec # along with issuing a resource warning if it still exists. Since # we don't actually spawn a process in these tests we can forego # the destructor. An alternative would be to set _child_created to # False before the destructor is called but there is no easy way # to do that class PopenNoDestructor(subprocess.Popen): def __del__(self): pass @mock.patch("subprocess._fork_exec") def test_exception_errpipe_normal(self, fork_exec): """Test error passing done through errpipe_write in the good case""" def proper_error(*args): errpipe_write = args[13] # Write the hex for the error code EISDIR: 'is a directory' err_code = '{:x}'.format(errno.EISDIR).encode() os.write(errpipe_write, b"OSError:" + err_code + b":") return 0 fork_exec.side_effect = proper_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(IsADirectoryError): self.PopenNoDestructor(["non_existent_command"]) @mock.patch("subprocess._fork_exec") def test_exception_errpipe_bad_data(self, fork_exec): """Test error passing done through errpipe_write where its not in the expected format""" error_data = b"\xFF\x00\xDE\xAD" def bad_error(*args): errpipe_write = args[13] # Anything can be in the pipe, no assumptions should # be made about its encoding, so we'll write some # arbitrary hex bytes to test it out os.write(errpipe_write, error_data) return 0 fork_exec.side_effect = bad_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(subprocess.SubprocessError) as e: self.PopenNoDestructor(["non_existent_command"]) self.assertIn(repr(error_data), str(e.exception)) @unittest.skipIf(not os.path.exists('/proc/self/status'), "need /proc/self/status") def test_restore_signals(self): # Blindly assume that cat exists on systems with /proc/self/status... default_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=False) for line in default_proc_status.splitlines(): if line.startswith(b'SigIgn'): default_sig_ign_mask = line break else: self.skipTest("SigIgn not found in /proc/self/status.") restored_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=True) for line in restored_proc_status.splitlines(): if line.startswith(b'SigIgn'): restored_sig_ign_mask = line break self.assertNotEqual(default_sig_ign_mask, restored_sig_ign_mask, msg="restore_signals=True should've unblocked " "SIGPIPE and friends.") def test_start_new_session(self): # For code coverage of calling setsid(). We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getsid(0))"], start_new_session=True) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_sid = os.getsid(0) child_sid = int(output) self.assertNotEqual(parent_sid, child_sid) @unittest.skipUnless(hasattr(os, 'setpgid') and hasattr(os, 'getpgid'), 'no setpgid or getpgid on platform') def test_process_group_0(self): # For code coverage of calling setpgid(). 
We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getpgid(0))"], process_group=0) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_pgid = os.getpgid(0) child_pgid = int(output) self.assertNotEqual(parent_pgid, child_pgid) @unittest.skipUnless(hasattr(os, 'setreuid'), 'no setreuid on platform') def test_user(self): # For code coverage of the user parameter. We don't care if we get a # permission error from it depending on the test execution environment, # that still indicates that it was called. uid = os.geteuid() test_users = [65534 if uid != 65534 else 65533, uid] name_uid = "nobody" if sys.platform != 'darwin' else "unknown" if pwd is not None: try: pwd.getpwnam(name_uid) test_users.append(name_uid) except KeyError: # unknown user name name_uid = None for user in test_users: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(user=user, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getuid())"], user=user, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) if e.errno == errno.EACCES: self.assertEqual(e.filename, sys.executable) else: self.assertIsNone(e.filename) else: if isinstance(user, str): user_uid = pwd.getpwnam(user).pw_uid else: user_uid = user child_user = int(output) self.assertEqual(child_user, user_uid) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, user=2**64) if pwd is None and name_uid is not None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=name_uid) @unittest.skipIf(hasattr(os, 'setreuid'), 'setreuid() available on platform') def test_user_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=65535) @unittest.skipUnless(hasattr(os, 'setregid'), 'no setregid() on platform') def test_group(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) for group in group_list + [gid]: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(group=group, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getgid())"], group=group, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) self.assertIsNone(e.filename) else: if isinstance(group, str): group_gid = grp.getgrnam(group).gr_gid else: group_gid = group child_group = int(output) self.assertEqual(child_group, group_gid) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, group=2**64) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=name_group) @unittest.skipIf(hasattr(os, 'setregid'), 'setregid() available on platform') def test_group_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=65535) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups(self): gid = os.getegid() 
group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() perm_error = False if grp is not None: group_list.append(name_group) try: output = subprocess.check_output( [sys.executable, "-c", "import os, sys, json; json.dump(os.getgroups(), sys.stdout)"], extra_groups=group_list) except OSError as ex: if ex.errno != errno.EPERM: raise self.assertIsNone(ex.filename) perm_error = True else: parent_groups = os.getgroups() child_groups = json.loads(output) if grp is not None: desired_gids = [grp.getgrnam(g).gr_gid if isinstance(g, str) else g for g in group_list] else: desired_gids = group_list if perm_error: self.assertEqual(set(child_groups), set(parent_groups)) else: self.assertEqual(set(desired_gids), set(child_groups)) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[-1]) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, extra_groups=[2**64]) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[name_group]) @unittest.skipIf(hasattr(os, 'setgroups'), 'setgroups() available on platform') def test_extra_groups_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[]) @unittest.skipIf(mswindows or not hasattr(os, 'umask'), 'POSIX umask() is not available.') def test_umask(self): tmpdir = None try: tmpdir = tempfile.mkdtemp() name = os.path.join(tmpdir, "beans") # We set an unusual umask in the child so as a unique mode # for us to test the child's touched file for. subprocess.check_call( [sys.executable, "-c", f"open({name!r}, 'w').close()"], umask=0o053) # Ignore execute permissions entirely in our test, # filesystems could be mounted to ignore or force that. st_mode = os.stat(name).st_mode & 0o666 expected_mode = 0o624 self.assertEqual(expected_mode, st_mode, msg=f'{oct(expected_mode)} != {oct(st_mode)}') finally: if tmpdir is not None: shutil.rmtree(tmpdir) def test_run_abort(self): # returncode handles signal termination with support.SuppressCrashReport(): p = subprocess.Popen([sys.executable, "-c", 'import os; os.abort()']) p.wait() self.assertEqual(-p.returncode, signal.SIGABRT) def test_CalledProcessError_str_signal(self): err = subprocess.CalledProcessError(-int(signal.SIGABRT), "fake cmd") error_string = str(err) # We're relying on the repr() of the signal.Signals intenum to provide # the word signal, the signal name and the numeric value. self.assertIn("signal", error_string.lower()) # We're not being specific about the signal name as some signals have # multiple names and which name is revealed can vary. self.assertIn("SIG", error_string) self.assertIn(str(signal.SIGABRT), error_string) def test_CalledProcessError_str_unknown_signal(self): err = subprocess.CalledProcessError(-9876543, "fake cmd") error_string = str(err) self.assertIn("unknown signal 9876543.", error_string) def test_CalledProcessError_str_non_zero(self): err = subprocess.CalledProcessError(2, "fake cmd") error_string = str(err) self.assertIn("non-zero exit status 2.", error_string) def test_preexec(self): # DISCLAIMER: Setting environment variables is *not* a good use # of a preexec_fn. This is merely a test. 
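# --- Illustrative sketch (POSIX only, Python 3.9+) of the umask= argument
# exercised by test_umask above: the mask is applied in the child before
# exec, so files the child creates get their permission bits filtered by it.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    target = os.path.join(tmpdir, "created-by-child")
    subprocess.check_call(
        [sys.executable, "-c", f"open({target!r}, 'w').close()"],
        umask=0o077)
    # open() creates with mode 0o666, masked down to 0o600 by the child's umask.
    assert os.stat(target).st_mode & 0o777 == 0o600
# ---------------------------------------------------------------------------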
p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, preexec_fn=lambda: os.putenv("FRUIT", "apple")) with p: self.assertEqual(p.stdout.read(), b"apple") def test_preexec_exception(self): def raise_it(): raise ValueError("What if two swallows carried a coconut?") try: p = subprocess.Popen([sys.executable, "-c", ""], preexec_fn=raise_it) except subprocess.SubprocessError as e: self.assertTrue( subprocess._fork_exec, "Expected a ValueError from the preexec_fn") except ValueError as e: self.assertIn("coconut", e.args[0]) else: self.fail("Exception raised by preexec_fn did not make it " "to the parent process.") class _TestExecuteChildPopen(subprocess.Popen): """Used to test behavior at the end of _execute_child.""" def __init__(self, testcase, *args, **kwargs): self._testcase = testcase subprocess.Popen.__init__(self, *args, **kwargs) def _execute_child(self, *args, **kwargs): try: subprocess.Popen._execute_child(self, *args, **kwargs) finally: # Open a bunch of file descriptors and verify that # none of them are the same as the ones the Popen # instance is using for stdin/stdout/stderr. devzero_fds = [os.open("/dev/zero", os.O_RDONLY) for _ in range(8)] try: for fd in devzero_fds: self._testcase.assertNotIn( fd, (self.stdin.fileno(), self.stdout.fileno(), self.stderr.fileno()), msg="At least one fd was closed early.") finally: for fd in devzero_fds: os.close(fd) @unittest.skipIf(not os.path.exists("/dev/zero"), "/dev/zero required.") def test_preexec_errpipe_does_not_double_close_pipes(self): """Issue16140: Don't double close pipes on preexec error.""" def raise_it(): raise subprocess.SubprocessError( "force the _execute_child() errpipe_data path.") with self.assertRaises(subprocess.SubprocessError): self._TestExecuteChildPopen( self, ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=raise_it) def test_preexec_gc_module_failure(self): # This tests the code that disables garbage collection if the child # process will execute any Python. 
enabled = gc.isenabled() try: gc.disable() self.assertFalse(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertFalse(gc.isenabled(), "Popen enabled gc when it shouldn't.") gc.enable() self.assertTrue(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertTrue(gc.isenabled(), "Popen left gc disabled.") finally: if not enabled: gc.disable() @unittest.skipIf( sys.platform == 'darwin', 'setrlimit() seems to fail on OS X') def test_preexec_fork_failure(self): # The internal code did not preserve the previous exception when # re-enabling garbage collection try: from resource import getrlimit, setrlimit, RLIMIT_NPROC except ImportError as err: self.skipTest(err) # RLIMIT_NPROC is specific to Linux and BSD limits = getrlimit(RLIMIT_NPROC) [_, hard] = limits setrlimit(RLIMIT_NPROC, (0, hard)) self.addCleanup(setrlimit, RLIMIT_NPROC, limits) try: subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) except BlockingIOError: # Forking should raise EAGAIN, translated to BlockingIOError pass else: self.skipTest('RLIMIT_NPROC had no effect; probably superuser') def test_args_string(self): # args is a string fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) p = subprocess.Popen(fname) p.wait() os.remove(fname) self.assertEqual(p.returncode, 47) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], startupinfo=47) self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], creationflags=47) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen(["echo $FRUIT"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen("echo $FRUIT", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_call_string(self): # call() function with string argument on UNIX fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) rc = subprocess.call(fname) os.remove(fname) self.assertEqual(rc, 47) def test_specific_shell(self): # Issue #9265: Incorrect name passed as arg[0]. shells = [] for prefix in ['/bin', '/usr/bin/', '/usr/local/bin']: for name in ['bash', 'ksh']: sh = os.path.join(prefix, name) if os.path.isfile(sh): shells.append(sh) if not shells: # Will probably work for any shell but csh. self.skipTest("bash or ksh required for this test") sh = '/bin/sh' if os.path.isfile(sh) and not os.path.islink(sh): # Test will fail if /bin/sh is a symlink to csh. shells.append(sh) for sh in shells: p = subprocess.Popen("echo $0", executable=sh, shell=True, stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) def _kill_process(self, method, *args): # Do not inherit file handles from the parent. 
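# --- Illustrative sketch (POSIX shell) of the behaviour test_shell_string
# and test_shell_sequence above exercise: with shell=True the command is run
# by /bin/sh, so variables supplied through env= are expanded by the shell.
import os
import subprocess

child_env = dict(os.environ, FRUIT="apple")
out = subprocess.check_output("echo $FRUIT", shell=True, env=child_env)
assert out.strip() == b"apple"
# ---------------------------------------------------------------------------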
# It should fix failures on some platforms. # Also set the SIGINT handler to the default to make sure it's not # being ignored (some tests rely on that.) old_handler = signal.signal(signal.SIGINT, signal.default_int_handler) try: p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: signal.signal(signal.SIGINT, old_handler) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) return p @unittest.skipIf(sys.platform.startswith(('netbsd', 'openbsd')), "Due to known OS bug (issue #16762)") def _kill_dead_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) p.communicate() def test_send_signal(self): p = self._kill_process('send_signal', signal.SIGINT) _, stderr = p.communicate() self.assertIn(b'KeyboardInterrupt', stderr) self.assertNotEqual(p.wait(), 0) def test_kill(self): p = self._kill_process('kill') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGKILL) def test_terminate(self): p = self._kill_process('terminate') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGTERM) def test_send_signal_dead(self): # Sending a signal to a dead process self._kill_dead_process('send_signal', signal.SIGINT) def test_kill_dead(self): # Killing a dead process self._kill_dead_process('kill') def test_terminate_dead(self): # Terminating a dead process self._kill_dead_process('terminate') def _save_fds(self, save_fds): fds = [] for fd in save_fds: inheritable = os.get_inheritable(fd) saved = os.dup(fd) fds.append((fd, saved, inheritable)) return fds def _restore_fds(self, fds): for fd, saved, inheritable in fds: os.dup2(saved, fd, inheritable=inheritable) os.close(saved) def check_close_std_fds(self, fds): # Issue #9905: test that subprocess pipes still work properly with # some standard fds closed stdin = 0 saved_fds = self._save_fds(fds) for fd, saved, inheritable in saved_fds: if fd == 0: stdin = saved break try: for fd in fds: os.close(fd) out, err = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() self.assertEqual(out, b'apple') self.assertEqual(err, b'orange') finally: self._restore_fds(saved_fds) def test_close_fd_0(self): self.check_close_std_fds([0]) def test_close_fd_1(self): self.check_close_std_fds([1]) def test_close_fd_2(self): self.check_close_std_fds([2]) def test_close_fds_0_1(self): self.check_close_std_fds([0, 1]) def test_close_fds_0_2(self): self.check_close_std_fds([0, 2]) def test_close_fds_1_2(self): self.check_close_std_fds([1, 2]) def test_close_fds_0_1_2(self): # Issue #10806: test that subprocess pipes still work properly with # all standard fds closed. 
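# --- Illustrative sketch (POSIX) of the signal behaviour the _kill_process
# tests above exercise: terminate() delivers SIGTERM and the child's exit is
# then reported as the negative signal number.
import signal
import subprocess
import sys

p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
p.terminate()
assert p.wait() == -signal.SIGTERM
# ---------------------------------------------------------------------------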
self.check_close_std_fds([0, 1, 2]) def test_small_errpipe_write_fd(self): """Issue #15798: Popen should work when stdio fds are available.""" new_stdin = os.dup(0) new_stdout = os.dup(1) try: os.close(0) os.close(1) # Side test: if errpipe_write fails to have its CLOEXEC # flag set this should cause the parent to think the exec # failed. Extremely unlikely: everyone supports CLOEXEC. subprocess.Popen([ sys.executable, "-c", "print('AssertionError:0:CLOEXEC failure.')"]).wait() finally: # Restore original stdin and stdout os.dup2(new_stdin, 0) os.dup2(new_stdout, 1) os.close(new_stdin) os.close(new_stdout) def test_remapping_std_fds(self): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] try: temp_fds = [fd for fd, fname in temps] # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # write some data to what will become stdin, and rewind os.write(temp_fds[1], b"STDIN") os.lseek(temp_fds[1], 0, 0) # move the standard file descriptors out of the way saved_fds = self._save_fds(range(3)) try: # duplicate the file objects over the standard fd's for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # now use those files in the "wrong" order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=temp_fds[1], stdout=temp_fds[2], stderr=temp_fds[0]) p.wait() finally: self._restore_fds(saved_fds) for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(temp_fds[2], 1024) err = os.read(temp_fds[0], 1024).strip() self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) def check_swap_fds(self, stdin_no, stdout_no, stderr_no): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] temp_fds = [fd for fd, fname in temps] try: # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # save a copy of the standard file descriptors saved_fds = self._save_fds(range(3)) try: # duplicate the temp files over the standard fd's 0, 1, 2 for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # write some data to what will become stdin, and rewind os.write(stdin_no, b"STDIN") os.lseek(stdin_no, 0, 0) # now use those files in the given order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=stdin_no, stdout=stdout_no, stderr=stderr_no) p.wait() for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(stdout_no, 1024) err = os.read(stderr_no, 1024).strip() finally: self._restore_fds(saved_fds) self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) # When duping fds, if there arises a situation where one of the fds is # either 0, 1 or 2, it is possible that it is overwritten (#12607). # This tests all combinations of this. 
def test_swap_fds(self): self.check_swap_fds(0, 1, 2) self.check_swap_fds(0, 2, 1) self.check_swap_fds(1, 0, 2) self.check_swap_fds(1, 2, 0) self.check_swap_fds(2, 0, 1) self.check_swap_fds(2, 1, 0) def _check_swap_std_fds_with_one_closed(self, from_fds, to_fds): saved_fds = self._save_fds(range(3)) try: for from_fd in from_fds: with tempfile.TemporaryFile() as f: os.dup2(f.fileno(), from_fd) fd_to_close = (set(range(3)) - set(from_fds)).pop() os.close(fd_to_close) arg_names = ['stdin', 'stdout', 'stderr'] kwargs = {} for from_fd, to_fd in zip(from_fds, to_fds): kwargs[arg_names[to_fd]] = from_fd code = textwrap.dedent(r''' import os, sys skipped_fd = int(sys.argv[1]) for fd in range(3): if fd != skipped_fd: os.write(fd, str(fd).encode('ascii')) ''') skipped_fd = (set(range(3)) - set(to_fds)).pop() rc = subprocess.call([sys.executable, '-c', code, str(skipped_fd)], **kwargs) self.assertEqual(rc, 0) for from_fd, to_fd in zip(from_fds, to_fds): os.lseek(from_fd, 0, os.SEEK_SET) read_bytes = os.read(from_fd, 1024) read_fds = list(map(int, read_bytes.decode('ascii'))) msg = textwrap.dedent(f""" When testing {from_fds} to {to_fds} redirection, parent descriptor {from_fd} got redirected to descriptor(s) {read_fds} instead of descriptor {to_fd}. """) self.assertEqual([to_fd], read_fds, msg) finally: self._restore_fds(saved_fds) # Check that subprocess can remap std fds correctly even # if one of them is closed (#32844). def test_swap_std_fds_with_one_closed(self): for from_fds in itertools.combinations(range(3), 2): for to_fds in itertools.permutations(range(3), 2): self._check_swap_std_fds_with_one_closed(from_fds, to_fds) def test_surrogates_error_message(self): def prepare(): raise ValueError("surrogate:\uDCff") try: subprocess.call( ZERO_RETURN_CMD, preexec_fn=prepare) except ValueError as err: # Pure Python implementations keeps the message self.assertIsNone(subprocess._fork_exec) self.assertEqual(str(err), "surrogate:\uDCff") except subprocess.SubprocessError as err: # _posixsubprocess uses a default message self.assertIsNotNone(subprocess._fork_exec) self.assertEqual(str(err), "Exception occurred in preexec_fn.") else: self.fail("Expected ValueError or subprocess.SubprocessError") def test_undecodable_env(self): for key, value in (('test', 'abc\uDCFF'), ('test\uDCFF', '42')): encoded_value = value.encode("ascii", "surrogateescape") # test str with surrogates script = "import os; print(ascii(os.getenv(%s)))" % repr(key) env = os.environ.copy() env[key] = value # Use C locale to get ASCII for the locale encoding to force # surrogate-escaping of \xFF in the child process env['LC_ALL'] = 'C' decoded_value = value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(decoded_value)) # test bytes key = key.encode("ascii", "surrogateescape") script = "import os; print(ascii(os.getenvb(%s)))" % repr(key) env = os.environ.copy() env[key] = encoded_value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(encoded_value)) def test_bytes_program(self): abs_program = os.fsencode(ZERO_RETURN_CMD[0]) args = list(ZERO_RETURN_CMD[1:]) path, program = os.path.split(ZERO_RETURN_CMD[0]) program = os.fsencode(program) # absolute bytes path exitcode = subprocess.call([abs_program]+args) self.assertEqual(exitcode, 0) # absolute bytes path as a string cmd = b"'%s' %s" % (abs_program, " ".join(args).encode("utf-8")) 
exitcode = subprocess.call(cmd, shell=True) self.assertEqual(exitcode, 0) # bytes program, unicode PATH env = os.environ.copy() env["PATH"] = path exitcode = subprocess.call([program]+args, env=env) self.assertEqual(exitcode, 0) # bytes program, bytes PATH envb = os.environb.copy() envb[b"PATH"] = os.fsencode(path) exitcode = subprocess.call([program]+args, env=envb) self.assertEqual(exitcode, 0) def test_pipe_cloexec(self): sleeper = support.findfile("input_reader.py", subdir="subprocessdata") fd_status = support.findfile("fd_status.py", subdir="subprocessdata") p1 = subprocess.Popen([sys.executable, sleeper], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False) self.addCleanup(p1.communicate, b'') p2 = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, error = p2.communicate() result_fds = set(map(int, output.split(b','))) unwanted_fds = set([p1.stdin.fileno(), p1.stdout.fileno(), p1.stderr.fileno()]) self.assertFalse(result_fds & unwanted_fds, "Expected no fds from %r to be open in child, " "found %r" % (unwanted_fds, result_fds & unwanted_fds)) def test_pipe_cloexec_real_tools(self): qcat = support.findfile("qcat.py", subdir="subprocessdata") qgrep = support.findfile("qgrep.py", subdir="subprocessdata") subdata = b'zxcvbn' data = subdata * 4 + b'\n' p1 = subprocess.Popen([sys.executable, qcat], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=False) p2 = subprocess.Popen([sys.executable, qgrep, subdata], stdin=p1.stdout, stdout=subprocess.PIPE, close_fds=False) self.addCleanup(p1.wait) self.addCleanup(p2.wait) def kill_p1(): try: p1.terminate() except ProcessLookupError: pass def kill_p2(): try: p2.terminate() except ProcessLookupError: pass self.addCleanup(kill_p1) self.addCleanup(kill_p2) p1.stdin.write(data) p1.stdin.close() readfiles, ignored1, ignored2 = select.select([p2.stdout], [], [], 10) self.assertTrue(readfiles, "The child hung") self.assertEqual(p2.stdout.read(), data) p1.stdout.close() p2.stdout.close() def test_close_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) open_fds = set(fds) # add a bunch more fds for _ in range(9): fd = os.open(os.devnull, os.O_RDONLY) self.addCleanup(os.close, fd) open_fds.add(fd) for fd in open_fds: os.set_inheritable(fd, True) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertEqual(remaining_fds & open_fds, open_fds, "Some fds were closed") p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse(remaining_fds & open_fds, "Some fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") # Keep some of the fd's we opened open in the subprocess. # This tests _posixsubprocess.c's proper handling of fds_to_keep. 
fds_to_keep = set(open_fds.pop() for _ in range(8)) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=fds_to_keep) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse((remaining_fds - fds_to_keep) & open_fds, "Some fds not in pass_fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") @unittest.skipIf(sys.platform.startswith("freebsd") and os.stat("/dev").st_dev == os.stat("/dev/fd").st_dev, "Requires fdescfs mounted on /dev/fd on FreeBSD") def test_close_fds_when_max_fd_is_lowered(self): """Confirm that issue21618 is fixed (may fail under valgrind).""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # This launches the meat of the test in a child process to # avoid messing with the larger unittest processes maximum # number of file descriptors. # This process launches: # +--> Process that lowers its RLIMIT_NOFILE aftr setting up # a bunch of high open fds above the new lower rlimit. # Those are reported via stdout before launching a new # process with close_fds=False to run the actual test: # +--> The TEST: This one launches a fd_status.py # subprocess with close_fds=True so we can find out if # any of the fds above the lowered rlimit are still open. p = subprocess.Popen([sys.executable, '-c', textwrap.dedent( ''' import os, resource, subprocess, sys, textwrap open_fds = set() # Add a bunch more fds to pass down. for _ in range(40): fd = os.open(os.devnull, os.O_RDONLY) open_fds.add(fd) # Leave a two pairs of low ones available for use by the # internal child error pipe and the stdout pipe. # We also leave 10 more open as some Python buildbots run into # "too many open files" errors during the test if we do not. for fd in sorted(open_fds)[:14]: os.close(fd) open_fds.remove(fd) for fd in open_fds: #self.addCleanup(os.close, fd) os.set_inheritable(fd, True) max_fd_open = max(open_fds) # Communicate the open_fds to the parent unittest.TestCase process. print(','.join(map(str, sorted(open_fds)))) sys.stdout.flush() rlim_cur, rlim_max = resource.getrlimit(resource.RLIMIT_NOFILE) try: # 29 is lower than the highest fds we are leaving open. resource.setrlimit(resource.RLIMIT_NOFILE, (29, rlim_max)) # Launch a new Python interpreter with our low fd rlim_cur that # inherits open fds above that limit. It then uses subprocess # with close_fds=True to get a report of open fds in the child. # An explicit list of fds to check is passed to fd_status.py as # letting fd_status rely on its default logic would miss the # fds above rlim_cur as it normally only checks up to that limit. 
subprocess.Popen( [sys.executable, '-c', textwrap.dedent(""" import subprocess, sys subprocess.Popen([sys.executable, %r] + [str(x) for x in range({max_fd})], close_fds=True).wait() """.format(max_fd=max_fd_open+1))], close_fds=False).wait() finally: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_cur, rlim_max)) ''' % fd_status)], stdout=subprocess.PIPE) output, unused_stderr = p.communicate() output_lines = output.splitlines() self.assertEqual(len(output_lines), 2, msg="expected exactly two lines of output:\n%r" % output) opened_fds = set(map(int, output_lines[0].strip().split(b','))) remaining_fds = set(map(int, output_lines[1].strip().split(b','))) self.assertFalse(remaining_fds & opened_fds, msg="Some fds were left open.") # Mac OS X Tiger (10.4) has a kernel bug: sometimes, the file # descriptor of a pipe closed in the parent process is valid in the # child process according to fstat(), but the mode of the file # descriptor is invalid, and read or write raise an error. @support.requires_mac_ver(10, 5) def test_pass_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") open_fds = set() for x in range(5): fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) os.set_inheritable(fds[0], True) os.set_inheritable(fds[1], True) open_fds.update(fds) for fd in open_fds: p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=(fd, )) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) to_be_closed = open_fds - {fd} self.assertIn(fd, remaining_fds, "fd to be passed not passed") self.assertFalse(remaining_fds & to_be_closed, "fd to be closed passed") # pass_fds overrides close_fds with a warning. with self.assertWarns(RuntimeWarning) as context: self.assertFalse(subprocess.call( ZERO_RETURN_CMD, close_fds=False, pass_fds=(fd, ))) self.assertIn('overriding close_fds', str(context.warning)) def test_pass_fds_inheritable(self): script = support.findfile("fd_status.py", subdir="subprocessdata") inheritable, non_inheritable = os.pipe() self.addCleanup(os.close, inheritable) self.addCleanup(os.close, non_inheritable) os.set_inheritable(inheritable, True) os.set_inheritable(non_inheritable, False) pass_fds = (inheritable, non_inheritable) args = [sys.executable, script] args += list(map(str, pass_fds)) p = subprocess.Popen(args, stdout=subprocess.PIPE, close_fds=True, pass_fds=pass_fds) output, ignored = p.communicate() fds = set(map(int, output.split(b','))) # the inheritable file descriptor must be inherited, so its inheritable # flag must be set in the child process after fork() and before exec() self.assertEqual(fds, set(pass_fds), "output=%a" % output) # inheritable flag must not be changed in the parent process self.assertEqual(os.get_inheritable(inheritable), True) self.assertEqual(os.get_inheritable(non_inheritable), False) # bpo-32270: Ensure that descriptors specified in pass_fds # are inherited even if they are used in redirections. # Contributed by @izbyshev. 
def test_pass_fds_redirected(self): """Regression test for https://bugs.python.org/issue32270.""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") pass_fds = [] for _ in range(2): fd = os.open(os.devnull, os.O_RDWR) self.addCleanup(os.close, fd) pass_fds.append(fd) stdout_r, stdout_w = os.pipe() self.addCleanup(os.close, stdout_r) self.addCleanup(os.close, stdout_w) pass_fds.insert(1, stdout_w) with subprocess.Popen([sys.executable, fd_status], stdin=pass_fds[0], stdout=pass_fds[1], stderr=pass_fds[2], close_fds=True, pass_fds=pass_fds): output = os.read(stdout_r, 1024) fds = {int(num) for num in output.split(b',')} self.assertEqual(fds, {0, 1, 2} | frozenset(pass_fds), f"output={output!a}") def test_stdout_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stdin=inout) p.wait() def test_stdout_stderr_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stderr=inout) p.wait() def test_stderr_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stderr=inout, stdin=inout) p.wait() def test_wait_when_sigchild_ignored(self): # NOTE: sigchild_ignore.py may not be an effective test on all OSes. sigchild_ignore = support.findfile("sigchild_ignore.py", subdir="subprocessdata") p = subprocess.Popen([sys.executable, sigchild_ignore], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" " non-zero with this error:\n%s" % stderr.decode('utf-8')) def test_select_unbuffered(self): # Issue #11459: bufsize=0 should really set the pipes as # unbuffered (and therefore let select() work properly). select = import_helper.import_module("select") p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple")'], stdout=subprocess.PIPE, bufsize=0) f = p.stdout self.addCleanup(f.close) try: self.assertEqual(f.read(4), b"appl") self.assertIn(f, select.select([f], [], [], 0.0)[0]) finally: p.wait() def test_zombie_fast_process_del(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, it wouldn't be added to subprocess._active, and would # remain a zombie. # spawn a Popen, and delete its reference before it exits p = subprocess.Popen([sys.executable, "-c", 'import sys, time;' 'time.sleep(0.2)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) def test_leak_fast_process_del_killed(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, and the process got killed by a signal, it would never # be removed from subprocess._active, which triggered a FD and memory # leak. # spawn a Popen, delete its reference and kill it p = subprocess.Popen([sys.executable, "-c", 'import time;' 'time.sleep(3)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None support.gc_collect() # For PyPy or other GCs. 
os.kill(pid, signal.SIGKILL) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) # let some time for the process to exit, and create a new Popen: this # should trigger the wait() of p time.sleep(0.2) with self.assertRaises(OSError): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass # p should have been wait()ed on, and removed from the _active list self.assertRaises(OSError, os.waitpid, pid, 0) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: self.assertNotIn(ident, [id(o) for o in subprocess._active]) def test_close_fds_after_preexec(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # this FD is used as dup2() target by preexec_fn, and should be closed # in the child process fd = os.dup(1) self.addCleanup(os.close, fd) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, preexec_fn=lambda: os.dup2(1, fd)) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertNotIn(fd, remaining_fds) @support.cpython_only def test_fork_exec(self): # Issue #22290: fork_exec() must not crash on memory allocation failure # or other errors import _posixsubprocess gc_enabled = gc.isenabled() try: # Use a preexec function and enable the garbage collector # to force fork_exec() to re-enable the garbage collector # on error. func = lambda: None gc.enable() for args, exe_list, cwd, env_list in ( (123, [b"exe"], None, [b"env"]), ([b"arg"], 123, None, [b"env"]), ([b"arg"], [b"exe"], 123, [b"env"]), ([b"arg"], [b"exe"], None, 123), ): with self.assertRaises(TypeError) as err: _posixsubprocess.fork_exec( args, exe_list, True, (), cwd, env_list, -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, False, [], 0, -1, func, False) # Attempt to prevent # "TypeError: fork_exec() takes exactly N arguments (M given)" # from passing the test. More refactoring to have us start # with a valid *args list, confirm a good call with that works # before mutating it in various ways to ensure that bad calls # with individual arg type errors raise a typeerror would be # ideal. Saving that for a future PR... self.assertNotIn('takes exactly', str(err.exception)) finally: if not gc_enabled: gc.disable() @support.cpython_only def test_fork_exec_sorted_fd_sanity_check(self): # Issue #23564: sanity check the fork_exec() fds_to_keep sanity check. import _posixsubprocess class BadInt: first = True def __init__(self, value): self.value = value def __int__(self): if self.first: self.first = False return self.value raise ValueError gc_enabled = gc.isenabled() try: gc.enable() for fds_to_keep in ( (-1, 2, 3, 4, 5), # Negative number. ('str', 4), # Not an int. (18, 23, 42, 2**63), # Out of range. (5, 4), # Not sorted. (6, 7, 7, 8), # Duplicate. 
(BadInt(1), BadInt(2)), ): with self.assertRaises( ValueError, msg='fds_to_keep={}'.format(fds_to_keep)) as c: _posixsubprocess.fork_exec( [b"false"], [b"false"], True, fds_to_keep, None, [b"env"], -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, None, None, None, -1, None, True) self.assertIn('fds_to_keep', str(c.exception)) finally: if not gc_enabled: gc.disable() def test_communicate_BrokenPipeError_stdin_close(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError proc.communicate() # Should swallow BrokenPipeError from close. mock_proc_stdin.close.assert_called_with() def test_communicate_BrokenPipeError_stdin_write(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.write.side_effect = BrokenPipeError proc.communicate(b'stuff') # Should swallow the BrokenPipeError. mock_proc_stdin.write.assert_called_once_with(b'stuff') mock_proc_stdin.close.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_flush(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin, \ open(os.devnull, 'wb') as dev_null: mock_proc_stdin.flush.side_effect = BrokenPipeError # because _communicate registers a selector using proc.stdin... mock_proc_stdin.fileno.return_value = dev_null.fileno() # _communicate() should swallow BrokenPipeError from flush. proc.communicate(b'stuff') mock_proc_stdin.flush.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_close_with_timeout(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError # _communicate() should swallow BrokenPipeError from close. proc.communicate(timeout=999) mock_proc_stdin.close.assert_called_once_with() @unittest.skipUnless(_testcapi is not None and hasattr(_testcapi, 'W_STOPCODE'), 'need _testcapi.W_STOPCODE') def test_stopped(self): """Test wait() behavior when waitpid returns WIFSTOPPED; issue29335.""" args = ZERO_RETURN_CMD proc = subprocess.Popen(args) # Wait until the real process completes to avoid zombie process support.wait_process(proc.pid, exitcode=0) status = _testcapi.W_STOPCODE(3) with mock.patch('subprocess.os.waitpid', return_value=(proc.pid, status)): returncode = proc.wait() self.assertEqual(returncode, -3) def test_send_signal_race(self): # bpo-38630: send_signal() must poll the process exit status to reduce # the risk of sending the signal to the wrong process. proc = subprocess.Popen(ZERO_RETURN_CMD) # wait until the process completes without using the Popen APIs. support.wait_process(proc.pid, exitcode=0) # returncode is still None but the process completed. 
self.assertIsNone(proc.returncode) with mock.patch("os.kill") as mock_kill: proc.send_signal(signal.SIGTERM) # send_signal() didn't call os.kill() since the process already # completed. mock_kill.assert_not_called() # Don't check the returncode value: the test reads the exit status, # so Popen failed to read it and uses a default returncode instead. self.assertIsNotNone(proc.returncode) def test_send_signal_race2(self): # bpo-40550: the process might exist between the returncode check and # the kill operation p = subprocess.Popen([sys.executable, '-c', 'exit(1)']) # wait for process to exit while not p.returncode: p.poll() with mock.patch.object(p, 'poll', new=lambda: None): p.returncode = None p.send_signal(signal.SIGTERM) p.kill() def test_communicate_repeated_call_after_stdout_close(self): proc = subprocess.Popen([sys.executable, '-c', 'import os, time; os.close(1), time.sleep(2)'], stdout=subprocess.PIPE) while True: try: proc.communicate(timeout=0.1) return except subprocess.TimeoutExpired: pass @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): def test_startupinfo(self): # startupinfo argument # We uses hardcoded constants, because we do not want to # depend on win32all. STARTF_USESHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_MAXIMIZE # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_keywords(self): # startupinfo argument # We use hardcoded constants, because we do not want to # depend on win32all. STARTF_USERSHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO( dwFlags=STARTF_USERSHOWWINDOW, wShowWindow=SW_MAXIMIZE ) # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_copy(self): # bpo-34044: Popen must not modify input STARTUPINFO structure startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW startupinfo.wShowWindow = subprocess.SW_HIDE # Call Popen() twice with the same startupinfo object to make sure # that it's not modified for _ in range(2): cmd = ZERO_RETURN_CMD with open(os.devnull, 'w') as null: proc = subprocess.Popen(cmd, stdout=null, stderr=subprocess.STDOUT, startupinfo=startupinfo) with proc: proc.communicate() self.assertEqual(proc.returncode, 0) self.assertEqual(startupinfo.dwFlags, subprocess.STARTF_USESHOWWINDOW) self.assertIsNone(startupinfo.hStdInput) self.assertIsNone(startupinfo.hStdOutput) self.assertIsNone(startupinfo.hStdError) self.assertEqual(startupinfo.wShowWindow, subprocess.SW_HIDE) self.assertEqual(startupinfo.lpAttributeList, {"handle_list": []}) def test_creationflags(self): # creationflags argument CREATE_NEW_CONSOLE = 16 sys.stderr.write(" a DOS box should flash briefly ...\n") subprocess.call(sys.executable + ' -c "import time; time.sleep(0.25)"', creationflags=CREATE_NEW_CONSOLE) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], preexec_fn=lambda: 1) @support.cpython_only def test_issue31471(self): # There shouldn't be an assertion failure in Popen() in case the env # argument has a bad keys() method. 
class BadEnv(dict): keys = None with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, env=BadEnv()) def test_close_fds(self): # close file descriptors rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"], close_fds=True) self.assertEqual(rc, 47) def test_close_fds_with_stdio(self): import msvcrt fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) handles = [] for fd in fds: os.set_inheritable(fd, True) handles.append(msvcrt.get_osfhandle(fd)) p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) int(stdout.strip()) # Check that stdout is an integer p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # The same as the previous call, but with an empty handle_list handle_list = [] startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handle_list} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # Check for a warning due to using handle_list and close_fds=False with warnings_helper.check_warnings((".*overriding close_fds", RuntimeWarning)): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handles[:]} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) def test_empty_attribute_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_empty_handle_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": []} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen(["set"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_encodings(self): # Run command through the shell (string) for enc in ['ansi', 'oem']: newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv, encoding=enc) with p: self.assertIn("physalis", p.stdout.read(), enc) def test_call_string(self): # call() function with string argument on Windows rc = subprocess.call(sys.executable + ' -c "import sys; sys.exit(47)"') self.assertEqual(rc, 47) def _kill_process(self, method, *args): # Some win32 buildbot raises EOFError if stdin is inherited p = subprocess.Popen([sys.executable, 
"-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') returncode = p.wait() self.assertNotEqual(returncode, 0) def _kill_dead_process(self, method, *args): p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() sys.exit(42) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') rc = p.wait() self.assertEqual(rc, 42) def test_send_signal(self): self._kill_process('send_signal', signal.SIGTERM) def test_kill(self): self._kill_process('kill') def test_terminate(self): self._kill_process('terminate') def test_send_signal_dead(self): self._kill_dead_process('send_signal', signal.SIGTERM) def test_kill_dead(self): self._kill_dead_process('kill') def test_terminate_dead(self): self._kill_dead_process('terminate') class MiscTests(unittest.TestCase): class RecordingPopen(subprocess.Popen): """A Popen that saves a reference to each instance for testing.""" instances_created = [] def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.instances_created.append(self) @mock.patch.object(subprocess.Popen, "_communicate") def _test_keyboardinterrupt_no_kill(self, popener, mock__communicate, **kwargs): """Fake a SIGINT happening during Popen._communicate() and ._wait(). This avoids the need to actually try and get test environments to send and receive signals reliably across platforms. The net effect of a ^C happening during a blocking subprocess execution which we want to clean up from is a KeyboardInterrupt coming out of communicate() or wait(). """ mock__communicate.side_effect = KeyboardInterrupt try: with mock.patch.object(subprocess.Popen, "_wait") as mock__wait: # We patch out _wait() as no signal was involved so the # child process isn't actually going to exit rapidly. 
mock__wait.side_effect = KeyboardInterrupt with mock.patch.object(subprocess, "Popen", self.RecordingPopen): with self.assertRaises(KeyboardInterrupt): popener([sys.executable, "-c", "import time\ntime.sleep(9)\nimport sys\n" "sys.stderr.write('\\n!runaway child!\\n')"], stdout=subprocess.DEVNULL, **kwargs) for call in mock__wait.call_args_list[1:]: self.assertNotEqual( call, mock.call(timeout=None), "no open-ended wait() after the first allowed: " f"{mock__wait.call_args_list}") sigint_calls = [] for call in mock__wait.call_args_list: if call == mock.call(timeout=0.25): # from Popen.__init__ sigint_calls.append(call) self.assertLessEqual(mock__wait.call_count, 2, msg=mock__wait.call_args_list) self.assertEqual(len(sigint_calls), 1, msg=mock__wait.call_args_list) finally: # cleanup the forgotten (due to our mocks) child process process = self.RecordingPopen.instances_created.pop() process.kill() process.wait() self.assertEqual([], self.RecordingPopen.instances_created) def test_call_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.call, timeout=6.282) def test_run_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.run, timeout=6.282) def test_context_manager_keyboardinterrupt_no_kill(self): def popen_via_context_manager(*args, **kwargs): with subprocess.Popen(*args, **kwargs) as unused_process: raise KeyboardInterrupt # Test how __exit__ handles ^C. self._test_keyboardinterrupt_no_kill(popen_via_context_manager) def test_getoutput(self): self.assertEqual(subprocess.getoutput('echo xyzzy'), 'xyzzy') self.assertEqual(subprocess.getstatusoutput('echo xyzzy'), (0, 'xyzzy')) # we use mkdtemp in the next line to create an empty directory # under our exclusive control; from that, we can invent a pathname # that we _know_ won't exist. This is guaranteed to fail. 
dir = None try: dir = tempfile.mkdtemp() name = os.path.join(dir, "foo") status, output = subprocess.getstatusoutput( ("type " if mswindows else "cat ") + name) self.assertNotEqual(status, 0) finally: if dir is not None: os.rmdir(dir) def test__all__(self): """Ensure that __all__ is populated properly.""" intentionally_excluded = {"list2cmdline", "Handle", "pwd", "grp", "fcntl"} exported = set(subprocess.__all__) possible_exports = set() import types for name, value in subprocess.__dict__.items(): if name.startswith('_'): continue if isinstance(value, (types.ModuleType,)): continue possible_exports.add(name) self.assertEqual(exported, possible_exports - intentionally_excluded) @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class ProcessTestCaseNoPoll(ProcessTestCase): def setUp(self): self.orig_selector = subprocess._PopenSelector subprocess._PopenSelector = selectors.SelectSelector ProcessTestCase.setUp(self) def tearDown(self): subprocess._PopenSelector = self.orig_selector ProcessTestCase.tearDown(self) @unittest.skipUnless(mswindows, "Windows-specific tests") class CommandsWithSpaces (BaseTestCase): def setUp(self): super().setUp() f, fname = tempfile.mkstemp(".py", "te st") self.fname = fname.lower () os.write(f, b"import sys;" b"sys.stdout.write('%d %s' % (len(sys.argv), [a.lower () for a in sys.argv]))" ) os.close(f) def tearDown(self): os.remove(self.fname) super().tearDown() def with_spaces(self, *args, **kwargs): kwargs['stdout'] = subprocess.PIPE p = subprocess.Popen(*args, **kwargs) with p: self.assertEqual( p.stdout.read ().decode("mbcs"), "2 [%r, 'ab cd']" % self.fname ) def test_shell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd"), shell=1) def test_shell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"], shell=1) def test_noshell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd")) def test_noshell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"]) class ContextManagerTests(BaseTestCase): def test_pipe(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write('stdout');" "sys.stderr.write('stderr');"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: self.assertEqual(proc.stdout.read(), b"stdout") self.assertEqual(proc.stderr.read(), b"stderr") self.assertTrue(proc.stdout.closed) self.assertTrue(proc.stderr.closed) def test_returncode(self): with subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(100)"]) as proc: pass # __exit__ calls wait(), so the returncode should be set self.assertEqual(proc.returncode, 100) def test_communicate_stdin(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.exit(sys.stdin.read() == 'context')"], stdin=subprocess.PIPE) as proc: proc.communicate(b"context") self.assertEqual(proc.returncode, 1) def test_invalid_args(self): with self.assertRaises(NONEXISTING_ERRORS): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass def test_broken_pipe_cleanup(self): """Broken pipe error should not prevent wait() (Issue 21619)""" proc = subprocess.Popen(ZERO_RETURN_CMD, 
stdin=subprocess.PIPE, bufsize=support.PIPE_MAX_SIZE*2) proc = proc.__enter__() # Prepare to send enough data to overflow any OS pipe buffering and # guarantee a broken pipe error. Data is held in BufferedWriter # buffer until closed. proc.stdin.write(b'x' * support.PIPE_MAX_SIZE) self.assertIsNone(proc.returncode) # EPIPE expected under POSIX; EINVAL under Windows self.assertRaises(OSError, proc.__exit__, None, None, None) self.assertEqual(proc.returncode, 0) self.assertTrue(proc.stdin.closed) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_threading.py000066400000000000000000002003771471441230600217120ustar00rootroot00000000000000""" Tests for the threading module. """ import test.support from test.support import threading_helper, requires_subprocess from test.support import verbose, cpython_only, os_helper from test.support.import_helper import import_module from test.support.script_helper import assert_python_ok, assert_python_failure import random import sys import _thread import threading import time import unittest import weakref import os import subprocess import signal import textwrap import traceback from unittest import mock from test import lock_tests from test import support threading_helper.requires_working_threading(module=True) # Between fork() and exec(), only async-safe functions are allowed (issues # #12316 and #11870), and fork() from a worker thread is known to trigger # problems with some operating systems (issue #3863): skip problematic tests # on platforms known to behave badly. platforms_to_skip = ('netbsd5', 'hp-ux11') # Is Python built with Py_DEBUG macro defined? Py_DEBUG = hasattr(sys, 'gettotalrefcount') def skip_unless_reliable_fork(test): if not support.has_fork_support: return unittest.skip("requires working os.fork()")(test) if sys.platform in platforms_to_skip: return unittest.skip("due to known OS bug related to thread+fork")(test) if support.HAVE_ASAN_FORK_BUG: return unittest.skip("libasan has a pthread_create() dead lock related to thread+fork")(test) return test def restore_default_excepthook(testcase): testcase.addCleanup(setattr, threading, 'excepthook', threading.excepthook) threading.excepthook = threading.__excepthook__ # A trivial mutable counter. class Counter(object): def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % (self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. 
%d tasks are running' % (self.name, self.nrunning.get())) class BaseTestCase(unittest.TestCase): def setUp(self): self._threads = threading_helper.threading_setup() def tearDown(self): threading_helper.threading_cleanup(*self._threads) test.support.reap_children() class ThreadTests(BaseTestCase): maxDiff = 9999 @cpython_only def test_name(self): def func(): pass thread = threading.Thread(name="myname1") self.assertEqual(thread.name, "myname1") # Convert int name to str thread = threading.Thread(name=123) self.assertEqual(thread.name, "123") # target name is ignored if name is specified thread = threading.Thread(target=func, name="myname2") self.assertEqual(thread.name, "myname2") with mock.patch.object(threading, '_counter', return_value=2): thread = threading.Thread(name="") self.assertEqual(thread.name, "Thread-2") with mock.patch.object(threading, '_counter', return_value=3): thread = threading.Thread() self.assertEqual(thread.name, "Thread-3") with mock.patch.object(threading, '_counter', return_value=5): thread = threading.Thread(target=func) self.assertEqual(thread.name, "Thread-5 (func)") def test_args_argument(self): # bpo-45735: Using list or tuple as *args* in constructor could # achieve the same effect. num_list = [1] num_tuple = (1,) str_list = ["str"] str_tuple = ("str",) list_in_tuple = ([1],) tuple_in_list = [(1,)] test_cases = ( (num_list, lambda arg: self.assertEqual(arg, 1)), (num_tuple, lambda arg: self.assertEqual(arg, 1)), (str_list, lambda arg: self.assertEqual(arg, "str")), (str_tuple, lambda arg: self.assertEqual(arg, "str")), (list_in_tuple, lambda arg: self.assertEqual(arg, [1])), (tuple_in_list, lambda arg: self.assertEqual(arg, (1,))) ) for args, target in test_cases: with self.subTest(target=target, args=args): t = threading.Thread(target=target, args=args) t.start() t.join() @cpython_only def test_disallow_instantiation(self): # Ensure that the type disallows instantiation (bpo-43916) lock = threading.Lock() test.support.check_disallow_instantiation(self, type(lock)) # Create a bunch of threads, let each do some work, wait until all are # done. def test_various_ops(self): # This takes about n/3 seconds to run (about n/3 clumps of tasks, # times about 1 second per clump). NUMTASKS = 10 # no more than 3 of the 10 can run at once sema = threading.BoundedSemaphore(value=3) mutex = threading.RLock() numrunning = Counter() threads = [] for i in range(NUMTASKS): t = TestThread(""%i, self, sema, mutex, numrunning) threads.append(t) self.assertIsNone(t.ident) self.assertRegex(repr(t), r'^$') t.start() if hasattr(threading, 'get_native_id'): native_ids = set(t.native_id for t in threads) | {threading.get_native_id()} self.assertNotIn(None, native_ids) self.assertEqual(len(native_ids), NUMTASKS + 1) if verbose: print('waiting for all tasks to complete') for t in threads: t.join() self.assertFalse(t.is_alive()) self.assertNotEqual(t.ident, 0) self.assertIsNotNone(t.ident) self.assertRegex(repr(t), r'^$') if verbose: print('all tasks done') self.assertEqual(numrunning.get(), 0) def test_ident_of_no_threading_threads(self): # The ident still must work for the main thread and dummy threads. 
self.assertIsNotNone(threading.current_thread().ident) def f(): ident.append(threading.current_thread().ident) done.set() done = threading.Event() ident = [] with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, ()) done.wait() self.assertEqual(ident[0], tid) # Kill the "immortal" _DummyThread del threading._active[ident[0]] # run with a small(ish) thread stack size (256 KiB) def test_various_ops_small_stack(self): if verbose: print('with 256 KiB thread stack size...') try: threading.stack_size(262144) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1 MiB) def test_various_ops_large_stack(self): if verbose: print('with 1 MiB thread stack size...') try: threading.stack_size(0x100000) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. def f(mutex): # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. threading.current_thread() mutex.release() mutex = threading.Lock() mutex.acquire() with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() self.assertIn(tid, threading._active) self.assertIsInstance(threading._active[tid], threading._DummyThread) #Issue 29376 self.assertTrue(threading._active[tid].is_alive()) self.assertRegex(repr(threading._active[tid]), '_DummyThread') del threading._active[tid] # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def test_PyThreadState_SetAsyncExc(self): ctypes = import_module("ctypes") set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc set_async_exc.argtypes = (ctypes.c_ulong, ctypes.py_object) class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # First check it works when setting the exception from the same thread. tid = threading.get_ident() self.assertIsInstance(tid, int) self.assertGreater(tid, 0) try: result = set_async_exc(tid, exception) # The exception is async, so we might have to keep the VM busy until # it notices. while True: pass except AsyncExc: pass else: # This code is unreachable but it reflects the intent. If we wanted # to be smarter the above loop wouldn't be infinite. self.fail("AsyncExc not raised") try: self.assertEqual(result, 1) # one thread state modified except UnboundLocalError: # The exception was raised too quickly for us to get the result. pass # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): def run(self): self.id = threading.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. 
if verbose: print(" trying nonsensical thread id") result = set_async_exc(-1, exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") ret = worker_started.wait() self.assertTrue(ret) if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(t.id, exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=support.SHORT_TIMEOUT) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. def fail_new_thread(*args): raise threading.ThreadError() _start_new_thread = threading._start_new_thread threading._start_new_thread = fail_new_thread try: t = threading.Thread(target=lambda: None) self.assertRaises(threading.ThreadError, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") finally: threading._start_new_thread = _start_new_thread def test_finalize_running_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. import_module("ctypes") rc, out, err = assert_python_failure("-c", """if 1: import ctypes, sys, time, _thread # This lock is used as a simple event variable. ready = _thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) _thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. sys.exit(42) """) self.assertEqual(rc, 42) def test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown assert_python_ok("-c", """if 1: import sys, threading # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): import os, time time.sleep(2) print('program blocked; aborting') os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() # This is the trace function def func(frame, event, arg): threading.current_thread() return func sys.settrace(func) """) def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown rc, out, err = assert_python_ok("-c", """if 1: import threading from time import sleep def child(): sleep(1) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is:", sleep) threading.Thread(target=child).start() raise SystemExit """) self.assertEqual(out.strip(), b"Woke up, sleep function is: ") self.assertEqual(err, b"") def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. 
enum = threading.enumerate old_interval = sys.getswitchinterval() try: for i in range(1, 100): sys.setswitchinterval(i * 0.0002) t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertNotIn(t, l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setswitchinterval(old_interval) def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes. self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'yet_another':self}) self.thread.start() def _run(self, other_ref, yet_another): if self.should_raise: raise SystemExit restore_default_excepthook(self) cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) def test_old_threading_api(self): # Just a quick sanity check to make sure the old method names are # still present t = threading.Thread() with self.assertWarnsRegex(DeprecationWarning, r'get the daemon attribute'): t.isDaemon() with self.assertWarnsRegex(DeprecationWarning, r'set the daemon attribute'): t.setDaemon(True) with self.assertWarnsRegex(DeprecationWarning, r'get the name attribute'): t.getName() with self.assertWarnsRegex(DeprecationWarning, r'set the name attribute'): t.setName("name") e = threading.Event() with self.assertWarnsRegex(DeprecationWarning, 'use is_set()'): e.isSet() cond = threading.Condition() cond.acquire() with self.assertWarnsRegex(DeprecationWarning, 'use notify_all()'): cond.notifyAll() with self.assertWarnsRegex(DeprecationWarning, 'use active_count()'): threading.activeCount() with self.assertWarnsRegex(DeprecationWarning, 'use current_thread()'): threading.currentThread() def test_repr_daemon(self): t = threading.Thread() self.assertNotIn('daemon', repr(t)) t.daemon = True self.assertIn('daemon', repr(t)) def test_daemon_param(self): t = threading.Thread() self.assertFalse(t.daemon) t = threading.Thread(daemon=False) self.assertFalse(t.daemon) t = threading.Thread(daemon=True) self.assertTrue(t.daemon) @skip_unless_reliable_fork def test_fork_at_exit(self): # bpo-42350: Calling os.fork() after threading._shutdown() must # not log an error. code = textwrap.dedent(""" import atexit import os import sys from test.support import wait_process # Import the threading module to register its "at fork" callback import threading def exit_handler(): pid = os.fork() if not pid: print("child process ok", file=sys.stderr, flush=True) # child process else: wait_process(pid, exitcode=0) # exit_handler() will be called after threading._shutdown() atexit.register(exit_handler) """) _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err.rstrip(), b'child process ok') @skip_unless_reliable_fork def test_dummy_thread_after_fork(self): # Issue #14308: a dummy thread in the active list doesn't mess up # the after-fork mechanism. 
code = """if 1: import _thread, threading, os, time def background_thread(evt): # Creates and registers the _DummyThread instance threading.current_thread() evt.set() time.sleep(10) evt = threading.Event() _thread.start_new_thread(background_thread, (evt,)) evt.wait() assert threading.active_count() == 2, threading.active_count() if os.fork() == 0: assert threading.active_count() == 1, threading.active_count() os._exit(0) else: os.wait() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') @skip_unless_reliable_fork def test_is_alive_after_fork(self): # Try hard to trigger #18418: is_alive() could sometimes be True on # threads that vanished after a fork. old_interval = sys.getswitchinterval() self.addCleanup(sys.setswitchinterval, old_interval) # Make the bug more likely to manifest. test.support.setswitchinterval(1e-6) for i in range(20): t = threading.Thread(target=lambda: None) t.start() pid = os.fork() if pid == 0: os._exit(11 if t.is_alive() else 10) else: t.join() support.wait_process(pid, exitcode=10) def test_main_thread(self): main = threading.main_thread() self.assertEqual(main.name, 'MainThread') self.assertEqual(main.ident, threading.current_thread().ident) self.assertEqual(main.ident, threading.get_ident()) def f(): self.assertNotEqual(threading.main_thread().ident, threading.current_thread().ident) th = threading.Thread(target=f) th.start() th.join() @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork(self): code = """if 1: import os, threading from test import support ident = threading.get_ident() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) else: support.wait_process(pid, exitcode=0) """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "current ident True\n" "main MainThread\n" "main ident True\n" "current is main True\n") @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_nonmain_thread(self): code = """if 1: import os, threading, sys from test import support def func(): ident = threading.get_ident() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) # stdout is fully buffered because not a tty, # we have to flush before exit. 
sys.stdout.flush() th = threading.Thread(target=func) th.start() th.join() """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode('utf-8'), "") self.assertEqual(data, "current ident True\n" "main Thread-1 (func) Thread\n" "main ident True\n" "current is main True\n" ) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") @support.requires_fork() @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_foreign_thread(self, create_dummy=False): code = """if 1: import os, threading, sys, traceback, _thread from test import support def func(lock): ident = threading.get_ident() if %s: # call current_thread() before fork to allocate DummyThread current = threading.current_thread() print("current", current.name, type(current).__name__) print("ident in _active", ident in threading._active) # flush before fork, so child won't flush it again sys.stdout.flush() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) print("_dangling", [t.name for t in list(threading._dangling)]) # stdout is fully buffered because not a tty, # we have to flush before exit. sys.stdout.flush() try: threading._shutdown() os._exit(0) except: traceback.print_exc() sys.stderr.flush() os._exit(1) else: try: support.wait_process(pid, exitcode=0) except Exception: # avoid 'could not acquire lock for # <_io.BufferedWriter name=''> at interpreter shutdown,' traceback.print_exc() sys.stderr.flush() finally: lock.release() join_lock = _thread.allocate_lock() join_lock.acquire() th = _thread.start_new_thread(func, (join_lock,)) join_lock.acquire() """ % create_dummy # "DeprecationWarning: This process is multi-threaded, use of fork() # may lead to deadlocks in the child" _, out, err = assert_python_ok("-W", "ignore::DeprecationWarning", "-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode(), "") self.assertEqual(data, ("current Dummy-1 _DummyThread\n" if create_dummy else "") + f"ident in _active {create_dummy!s}\n" + "current ident True\n" "main MainThread _MainThread\n" "main ident True\n" "current is main True\n" "_dangling ['MainThread']\n") def test_main_thread_after_fork_from_dummy_thread(self, create_dummy=False): self.test_main_thread_after_fork_from_foreign_thread(create_dummy=True) def test_main_thread_during_shutdown(self): # bpo-31516: current_thread() should still point to the main thread # at shutdown code = """if 1: import gc, threading main_thread = threading.current_thread() assert main_thread is threading.main_thread() # sanity check class RefCycle: def __init__(self): self.cycle = self def __del__(self): print("GC:", threading.current_thread() is main_thread, threading.main_thread() is main_thread, threading.enumerate() == [main_thread]) RefCycle() gc.collect() # sanity check x = RefCycle() """ _, out, err = assert_python_ok("-c", code) data = out.decode() self.assertEqual(err, b"") self.assertEqual(data.splitlines(), ["GC: True True True"] * 2) def test_finalization_shutdown(self): # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait # until Python thread states of all non-daemon threads get deleted. # # Test similar to SubinterpThreadingTests.test_threads_join_2(), but # test the finalization of the main interpreter. 
code = """if 1: import os import threading import time import random def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_Finalize() is called. random_sleep() tls.x = Sleeper() random_sleep() threading.Thread(target=f).start() random_sleep() """ rc, out, err = assert_python_ok("-c", code) self.assertEqual(err, b"") def test_tstate_lock(self): # Test an implementation detail of Thread objects. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() time.sleep(0.01) # The tstate lock is None until the thread is started t = threading.Thread(target=f) self.assertIs(t._tstate_lock, None) t.start() started.acquire() self.assertTrue(t.is_alive()) # The tstate lock can't be acquired when the thread is running # (or suspended). tstate_lock = t._tstate_lock self.assertFalse(tstate_lock.acquire(timeout=0), False) finish.release() # When the thread ends, the state_lock can be successfully # acquired. self.assertTrue(tstate_lock.acquire(timeout=support.SHORT_TIMEOUT), False) # But is_alive() is still True: we hold _tstate_lock now, which # prevents is_alive() from knowing the thread's end-of-life C code # is done. self.assertTrue(t.is_alive()) # Let is_alive() find out the C code is done. tstate_lock.release() self.assertFalse(t.is_alive()) # And verify the thread disposed of _tstate_lock. self.assertIsNone(t._tstate_lock) t.join() def test_repr_stopped(self): # Verify that "stopped" shows up in repr(Thread) appropriately. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() t = threading.Thread(target=f) t.start() started.acquire() self.assertIn("started", repr(t)) finish.release() # "stopped" should appear in the repr in a reasonable amount of time. # Implementation detail: as of this writing, that's trivially true # if .join() is called, and almost trivially true if .is_alive() is # called. The detail we're testing here is that "stopped" shows up # "all on its own". LOOKING_FOR = "stopped" for i in range(500): if LOOKING_FOR in repr(t): break time.sleep(0.01) self.assertIn(LOOKING_FOR, repr(t)) # we waited at least 5 seconds t.join() def test_BoundedSemaphore_limit(self): # BoundedSemaphore should raise ValueError if released too often. for limit in range(1, 10): bs = threading.BoundedSemaphore(limit) threads = [threading.Thread(target=bs.acquire) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() threads = [threading.Thread(target=bs.release) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() self.assertRaises(ValueError, bs.release) @cpython_only def test_frame_tstate_tracing(self): # Issue #14432: Crash when a generator is created in a C thread that is # destroyed while the generator is still used. The issue was that a # generator contains a frame, and the frame kept a reference to the # Python state of the destroyed C thread. The crash occurs when a trace # function is setup. 
def noop_trace(frame, event, arg): # no operation return noop_trace def generator(): while 1: yield "generator" def callback(): if callback.gen is None: callback.gen = generator() return next(callback.gen) callback.gen = None old_trace = sys.gettrace() sys.settrace(noop_trace) try: # Install a trace function threading.settrace(noop_trace) # Create a generator in a C thread which exits after the call import _testcapi _testcapi.call_in_temporary_c_thread(callback) # Call the generator in a different Python thread, check that the # generator didn't keep a reference to the destroyed thread state for test in range(3): # The trace function is still called here callback() finally: sys.settrace(old_trace) def test_gettrace(self): def noop_trace(frame, event, arg): # no operation return noop_trace old_trace = threading.gettrace() try: threading.settrace(noop_trace) trace_func = threading.gettrace() self.assertEqual(noop_trace,trace_func) finally: threading.settrace(old_trace) def test_getprofile(self): def fn(*args): pass old_profile = threading.getprofile() try: threading.setprofile(fn) self.assertEqual(fn, threading.getprofile()) finally: threading.setprofile(old_profile) @cpython_only def test_shutdown_locks(self): for daemon in (False, True): with self.subTest(daemon=daemon): event = threading.Event() thread = threading.Thread(target=event.wait, daemon=daemon) # Thread.start() must add lock to _shutdown_locks, # but only for non-daemon thread thread.start() tstate_lock = thread._tstate_lock if not daemon: self.assertIn(tstate_lock, threading._shutdown_locks) else: self.assertNotIn(tstate_lock, threading._shutdown_locks) # unblock the thread and join it event.set() thread.join() # Thread._stop() must remove tstate_lock from _shutdown_locks. # Daemon threads must never add it to _shutdown_locks. self.assertNotIn(tstate_lock, threading._shutdown_locks) def test_locals_at_exit(self): # bpo-19466: thread locals must not be deleted before destructors # are called rc, out, err = assert_python_ok("-c", """if 1: import threading class Atexit: def __del__(self): print("thread_dict.atexit = %r" % thread_dict.atexit) thread_dict = threading.local() thread_dict.atexit = "value" atexit = Atexit() """) self.assertEqual(out.rstrip(), b"thread_dict.atexit = 'value'") def test_boolean_target(self): # bpo-41149: A thread that had a boolean value of False would not # run, regardless of whether it was callable. The correct behaviour # is for a thread to do nothing if its target is None, and to call # the target otherwise. class BooleanTarget(object): def __init__(self): self.ran = False def __bool__(self): return False def __call__(self): self.ran = True target = BooleanTarget() thread = threading.Thread(target=target) thread.start() thread.join() self.assertTrue(target.ran) def test_leak_without_join(self): # bpo-37788: Test that a thread which is not joined explicitly # does not leak. Test written for reference leak checks. 
def noop(): pass with threading_helper.wait_threads_exit(): threading.Thread(target=noop).start() # Thread.join() is not called @unittest.skipUnless(Py_DEBUG, 'need debug build (Py_DEBUG)') def test_debug_deprecation(self): # bpo-44584: The PYTHONTHREADDEBUG environment variable is deprecated rc, out, err = assert_python_ok("-Wdefault", "-c", "pass", PYTHONTHREADDEBUG="1") msg = (b'DeprecationWarning: The threading debug ' b'(PYTHONTHREADDEBUG environment variable) ' b'is deprecated and will be removed in Python 3.12') self.assertIn(msg, err) def test_import_from_another_thread(self): # bpo-1596321: If the threading module is first import from a thread # different than the main thread, threading._shutdown() must handle # this case without logging an error at Python exit. code = textwrap.dedent(''' import _thread import sys event = _thread.allocate_lock() event.acquire() def import_threading(): import threading event.release() if 'threading' in sys.modules: raise Exception('threading is already imported') _thread.start_new_thread(import_threading, ()) # wait until the threading module is imported event.acquire() event.release() if 'threading' not in sys.modules: raise Exception('threading is not imported') # don't wait until the thread completes ''') rc, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') class ThreadJoinOnShutdown(BaseTestCase): def _run_and_join(self, script): script = """if 1: import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') # stdout is fully buffered because not a tty, we have to flush # before exit. sys.stdout.flush() \n""" + script rc, out, err = assert_python_ok("-c", script) data = out.decode().replace('\r', '') self.assertEqual(data, "end of main\nend of thread\n") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.1) print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter script = """if 1: from test import support childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. script = """if 1: from test import support main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() """ self._run_and_join(script) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_4_daemon_threads(self): # Check that a daemon thread cannot crash the interpreter on shutdown # by manipulating internal structures that are being disposed of in # the main thread. 
script = """if True: import os import random import sys import time import threading thread_has_run = set() def random_io(): '''Loop for a while sleeping random tiny amounts and doing some I/O.''' import test.test_threading as mod while True: with open(mod.__file__, 'rb') as in_f: stuff = in_f.read(200) with open(os.devnull, 'wb') as null_f: null_f.write(stuff) time.sleep(random.random() / 1995) thread_has_run.add(threading.current_thread()) def main(): count = 0 for _ in range(40): new_thread = threading.Thread(target=random_io) new_thread.daemon = True new_thread.start() count += 1 while len(thread_has_run) < count: time.sleep(0.001) # Trigger process shutdown sys.exit(0) main() """ rc, out, err = assert_python_ok('-c', script) self.assertFalse(err) @skip_unless_reliable_fork def test_reinit_tls_after_fork(self): # Issue #13817: fork() would deadlock in a multithreaded program with # the ad-hoc TLS implementation. def do_fork_and_wait(): # just fork a child process and wait it pid = os.fork() if pid > 0: support.wait_process(pid, exitcode=50) else: os._exit(50) # start a bunch of threads that will fork() child processes threads = [] for i in range(16): t = threading.Thread(target=do_fork_and_wait) threads.append(t) t.start() for t in threads: t.join() @skip_unless_reliable_fork def test_clear_threads_states_after_fork(self): # Issue #17094: check that threads states are cleared after fork() # start a bunch of threads threads = [] for i in range(16): t = threading.Thread(target=lambda : time.sleep(0.3)) threads.append(t) t.start() pid = os.fork() if pid == 0: # check that threads states have been cleared if len(sys._current_frames()) == 1: os._exit(51) else: os._exit(52) else: support.wait_process(pid, exitcode=51) for t in threads: t.join() class SubinterpThreadingTests(BaseTestCase): def pipe(self): r, w = os.pipe() self.addCleanup(os.close, r) self.addCleanup(os.close, w) if hasattr(os, 'set_blocking'): os.set_blocking(r, False) return (r, w) def test_threads_join(self): # Non-daemon threads should be joined at subinterpreter shutdown # (issue #18808) r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. self.assertEqual(os.read(r, 1), b"x") def test_threads_join_2(self): # Same as above, but a delay gets introduced after the thread's # Python code returned but before the thread state is deleted. # To achieve this, we register a thread-local object which sleeps # a bit when deallocated. r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() tls.x = Sleeper() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. 
self.assertEqual(os.read(r, 1), b"x") @cpython_only def test_daemon_threads_fatal_error(self): subinterp_code = f"""if 1: import os import threading import time def f(): # Make sure the daemon thread is still running when # Py_EndInterpreter is called. time.sleep({test.support.SHORT_TIMEOUT}) threading.Thread(target=f, daemon=True).start() """ script = r"""if 1: import _testcapi _testcapi.run_in_subinterp(%r) """ % (subinterp_code,) with test.support.SuppressCrashReport(): rc, out, err = assert_python_failure("-c", script) self.assertIn("Fatal Python error: Py_EndInterpreter: " "not the last thread", err.decode()) class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. def test_start_thread_again(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, thread.start) thread.join() def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join); def test_joining_inactive_thread(self): thread = threading.Thread() self.assertRaises(RuntimeError, thread.join) def test_daemonize_active_thread(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, setattr, thread, "daemon", True) thread.join() def test_releasing_unacquired_lock(self): lock = threading.Lock() self.assertRaises(RuntimeError, lock.release) @requires_subprocess() def test_recursion_limit(self): # Issue 9670 # test that excessive recursion within a non-main thread causes # an exception rather than crashing the interpreter on platforms # like Mac OS X or FreeBSD which have small default stack sizes # for threads script = """if True: import threading def recurse(): return recurse() def outer(): try: recurse() except RecursionError: pass w = threading.Thread(target=outer) w.start() w.join() print('end of main thread') """ expected_output = "end of main thread\n" p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() data = stdout.decode().replace('\r', '') self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) self.assertEqual(data, expected_output) def test_print_exception(self): script = r"""if True: import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_1(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) sys.stderr = None running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_2(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = 
True while running: time.sleep(0.01) 1/0 sys.stderr = None t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise class Issue27558(threading.Thread): exc = None def run(self): try: bare_raise() except Exception as exc: self.exc = exc thread = Issue27558() thread.start() thread.join() self.assertIsNotNone(thread.exc) self.assertIsInstance(thread.exc, RuntimeError) # explicitly break the reference cycle to not leak a dangling thread thread.exc = None def test_multithread_modify_file_noerror(self): # See issue25872 def modify_file(): with open(os_helper.TESTFN, 'w', encoding='utf-8') as fp: fp.write(' ') traceback.format_stack() self.addCleanup(os_helper.unlink, os_helper.TESTFN) threads = [ threading.Thread(target=modify_file) for i in range(100) ] for t in threads: t.start() t.join() class ThreadRunFail(threading.Thread): def run(self): raise ValueError("run failed") class ExceptHookTests(BaseTestCase): def setUp(self): restore_default_excepthook(self) super().setUp() def test_excepthook(self): with support.captured_output("stderr") as stderr: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {thread.name}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("run failed")', stderr) self.assertIn('ValueError: run failed', stderr) @support.cpython_only def test_excepthook_thread_None(self): # threading.excepthook called with thread=None: log the thread # identifier in this case. 
with support.captured_output("stderr") as stderr: try: raise ValueError("bug") except Exception as exc: args = threading.ExceptHookArgs([*sys.exc_info(), None]) try: threading.excepthook(args) finally: # Explicitly break a reference cycle args = None stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {threading.get_ident()}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("bug")', stderr) self.assertIn('ValueError: bug', stderr) def test_system_exit(self): class ThreadExit(threading.Thread): def run(self): sys.exit(1) # threading.excepthook() silently ignores SystemExit with support.captured_output("stderr") as stderr: thread = ThreadExit() thread.start() thread.join() self.assertEqual(stderr.getvalue(), '') def test_custom_excepthook(self): args = None def hook(hook_args): nonlocal args args = hook_args try: with support.swap_attr(threading, 'excepthook', hook): thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(args.exc_type, ValueError) self.assertEqual(str(args.exc_value), 'run failed') self.assertEqual(args.exc_traceback, args.exc_value.__traceback__) self.assertIs(args.thread, thread) finally: # Break reference cycle args = None def test_custom_excepthook_fail(self): def threading_hook(args): raise ValueError("threading_hook failed") err_str = None def sys_hook(exc_type, exc_value, exc_traceback): nonlocal err_str err_str = str(exc_value) with support.swap_attr(threading, 'excepthook', threading_hook), \ support.swap_attr(sys, 'excepthook', sys_hook), \ support.captured_output('stderr') as stderr: thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(stderr.getvalue(), 'Exception in threading.excepthook:\n') self.assertEqual(err_str, 'threading_hook failed') def test_original_excepthook(self): def run_thread(): with support.captured_output("stderr") as output: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() return output.getvalue() def threading_hook(args): print("Running a thread failed", file=sys.stderr) default_output = run_thread() with support.swap_attr(threading, 'excepthook', threading_hook): custom_hook_output = run_thread() threading.excepthook = threading.__excepthook__ recovered_output = run_thread() self.assertEqual(default_output, recovered_output) self.assertNotEqual(default_output, custom_hook_output) self.assertEqual(custom_hook_output, "Running a thread failed\n") class TimerTests(BaseTestCase): def setUp(self): BaseTestCase.setUp(self) self.callback_args = [] self.callback_event = threading.Event() def test_init_immutable_default_args(self): # Issue 17435: constructor defaults were mutable objects, they could be # mutated via the object attributes and affect other Timer objects. 
timer1 = threading.Timer(0.01, self._callback_spy) timer1.start() self.callback_event.wait() timer1.args.append("blah") timer1.kwargs["foo"] = "bar" self.callback_event.clear() timer2 = threading.Timer(0.01, self._callback_spy) timer2.start() self.callback_event.wait() self.assertEqual(len(self.callback_args), 2) self.assertEqual(self.callback_args, [((), {}), ((), {})]) timer1.join() timer2.join() def _callback_spy(self, *args, **kwargs): self.callback_args.append((args[:], kwargs.copy())) self.callback_event.set() class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) class PyRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._PyRLock) @unittest.skipIf(threading._CRLock is None, 'RLock not implemented in C') class CRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._CRLock) class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) class ConditionAsRLockTests(lock_tests.RLockTests): # Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) def test_recursion_count(self): self.skipTest("Condition does not expose _recursion_count()") class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) class BarrierTests(lock_tests.BarrierTests): barriertype = staticmethod(threading.Barrier) class MiscTestCase(unittest.TestCase): def test__all__(self): restore_default_excepthook(self) extra = {"ThreadError"} not_exported = {'currentThread', 'activeCount'} support.check__all__(self, threading, ('threading', '_thread'), extra=extra, not_exported=not_exported) class InterruptMainTests(unittest.TestCase): def check_interrupt_main_with_signal_handler(self, signum): def handler(signum, frame): 1/0 old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) with self.assertRaises(ZeroDivisionError): _thread.interrupt_main() def check_interrupt_main_noerror(self, signum): handler = signal.getsignal(signum) try: # No exception should arise. signal.signal(signum, signal.SIG_IGN) _thread.interrupt_main(signum) signal.signal(signum, signal.SIG_DFL) _thread.interrupt_main(signum) finally: # Restore original handler signal.signal(signum, handler) def test_interrupt_main_subthread(self): # Calling start_new_thread with a function that executes interrupt_main # should raise KeyboardInterrupt upon completion. def call_interrupt(): _thread.interrupt_main() t = threading.Thread(target=call_interrupt) with self.assertRaises(KeyboardInterrupt): t.start() t.join() t.join() def test_interrupt_main_mainthread(self): # Make sure that if interrupt_main is called in main thread that # KeyboardInterrupt is raised instantly. 
with self.assertRaises(KeyboardInterrupt): _thread.interrupt_main() def test_interrupt_main_with_signal_handler(self): self.check_interrupt_main_with_signal_handler(signal.SIGINT) self.check_interrupt_main_with_signal_handler(signal.SIGTERM) def test_interrupt_main_noerror(self): self.check_interrupt_main_noerror(signal.SIGINT) self.check_interrupt_main_noerror(signal.SIGTERM) def test_interrupt_main_invalid_signal(self): self.assertRaises(ValueError, _thread.interrupt_main, -1) self.assertRaises(ValueError, _thread.interrupt_main, signal.NSIG) self.assertRaises(ValueError, _thread.interrupt_main, 1000000) @threading_helper.reap_threads def test_can_interrupt_tight_loops(self): cont = [True] started = [False] interrupted = [False] def worker(started, cont, interrupted): iterations = 100_000_000 started[0] = True while cont[0]: if iterations: iterations -= 1 else: return pass interrupted[0] = True t = threading.Thread(target=worker,args=(started, cont, interrupted)) t.start() while not started[0]: pass cont[0] = False t.join() self.assertTrue(interrupted[0]) class AtexitTests(unittest.TestCase): def test_atexit_output(self): rc, out, err = assert_python_ok("-c", """if True: import threading def run_last(): print('parrot') threading._register_atexit(run_last) """) self.assertFalse(err) self.assertEqual(out.strip(), b'parrot') def test_atexit_called_once(self): rc, out, err = assert_python_ok("-c", """if True: import threading from unittest.mock import Mock mock = Mock() threading._register_atexit(mock) mock.assert_not_called() # force early shutdown to ensure it was called once threading._shutdown() mock.assert_called_once() """) self.assertFalse(err) def test_atexit_after_shutdown(self): # The only way to do this is by registering an atexit within # an atexit, which is intended to raise an exception. 
rc, out, err = assert_python_ok("-c", """if True: import threading def func(): pass def run_last(): threading._register_atexit(func) threading._register_atexit(run_last) """) self.assertTrue(err) self.assertIn("RuntimeError: can't register atexit after shutdown", err.decode()) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/test_wsgiref.py000066400000000000000000000725131471441230600214120ustar00rootroot00000000000000from unittest import mock from test import support from test.support import socket_helper from test.test_httpservers import NoLogRequestHandler from unittest import TestCase from wsgiref.util import setup_testing_defaults from wsgiref.headers import Headers from wsgiref.handlers import BaseHandler, BaseCGIHandler, SimpleHandler from wsgiref import util from wsgiref.validate import validator from wsgiref.simple_server import WSGIServer, WSGIRequestHandler from wsgiref.simple_server import make_server from http.client import HTTPConnection from io import StringIO, BytesIO, BufferedReader from socketserver import BaseServer from platform import python_implementation import os import re import signal import sys import threading import unittest class MockServer(WSGIServer): """Non-socket HTTP server""" def __init__(self, server_address, RequestHandlerClass): BaseServer.__init__(self, server_address, RequestHandlerClass) self.server_bind() def server_bind(self): host, port = self.server_address self.server_name = host self.server_port = port self.setup_environ() class MockHandler(WSGIRequestHandler): """Non-socket HTTP handler""" def setup(self): self.connection = self.request self.rfile, self.wfile = self.connection def finish(self): pass def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), ('Date','Mon, 05 Jun 2006 18:49:54 GMT') ]) return [b"Hello, world!"] def header_app(environ, start_response): start_response("200 OK", [ ('Content-Type', 'text/plain'), ('Date', 'Mon, 05 Jun 2006 18:49:54 GMT') ]) return [';'.join([ environ['HTTP_X_TEST_HEADER'], environ['QUERY_STRING'], environ['PATH_INFO'] ]).encode('iso-8859-1')] def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): server = make_server("", 80, app, MockServer, MockHandler) inp = BufferedReader(BytesIO(data)) out = BytesIO() olderr = sys.stderr err = sys.stderr = StringIO() try: server.finish_request((inp, out), ("127.0.0.1",8888)) finally: sys.stderr = olderr return out.getvalue(), err.getvalue() def compare_generic_iter(make_it, match): """Utility to compare a generic iterator with an iterable This tests the iterator using iter()/next(). 
'make_it' must be a function returning a fresh iterator to be tested (since this may test the iterator twice).""" it = make_it() if not iter(it) is it: raise AssertionError for item in match: if not next(it) == item: raise AssertionError try: next(it) except StopIteration: pass else: raise AssertionError("Too many items from .__next__()", it) class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): pyver = (python_implementation() + "/" + sys.version.split()[0]) self.assertEqual(out, ("HTTP/1.0 200 OK\r\n" "Server: WSGIServer/0.2 " + pyver +"\r\n" "Content-Type: text/plain\r\n" "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" + (has_length and "Content-Length: 13\r\n" or "") + "\r\n" "Hello, world!").encode("iso-8859-1") ) def test_plain_hello(self): out, err = run_amock() self.check_hello(out) def test_environ(self): request = ( b"GET /p%61th/?query=test HTTP/1.0\n" b"X-Test-Header: Python test \n" b"X-Test-Header: Python test 2\n" b"Content-Length: 0\n\n" ) out, err = run_amock(header_app, request) self.assertEqual( out.splitlines()[-1], b"Python test,Python test 2;query=test;/path/" ) def test_request_length(self): out, err = run_amock(data=b"GET " + (b"x" * 65537) + b" HTTP/1.0\n\n") self.assertEqual(out.splitlines()[0], b"HTTP/1.0 414 Request-URI Too Long") def test_validated_hello(self): out, err = run_amock(validator(hello_app)) # the middleware doesn't support len(), so content-length isn't there self.check_hello(out, has_length=False) def test_simple_validation_error(self): def bad_app(environ,start_response): start_response("200 OK", ('Content-Type','text/plain')) return ["Hello, world!"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual( err.splitlines()[-2], "AssertionError: Headers (('Content-Type', 'text/plain')) must" " be of type list: " ) def test_status_validation_errors(self): def create_bad_app(status): def bad_app(environ, start_response): start_response(status, [("Content-Type", "text/plain; charset=utf-8")]) return [b"Hello, world!"] return bad_app tests = [ ('200', 'AssertionError: Status must be at least 4 characters'), ('20X OK', 'AssertionError: Status message must begin w/3-digit code'), ('200OK', 'AssertionError: Status message must have a space after code'), ] for status, exc_message in tests: with self.subTest(status=status): out, err = run_amock(create_bad_app(status)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual(err.splitlines()[-2], exc_message) def test_wsgi_input(self): def bad_app(e,s): e["wsgi.input"].read() s("200 OK", [("Content-Type", "text/plain; charset=utf-8")]) return [b"data"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." 
)) self.assertEqual( err.splitlines()[-2], "AssertionError" ) def test_bytes_validation(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) return [b"data"] out, err = run_amock(validator(app)) self.assertTrue(err.endswith('"GET / HTTP/1.0" 200 4\n')) ver = sys.version.split()[0].encode('ascii') py = python_implementation().encode('ascii') pyver = py + b"/" + ver self.assertEqual( b"HTTP/1.0 200 OK\r\n" b"Server: WSGIServer/0.2 "+ pyver + b"\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Date: Wed, 24 Dec 2008 13:29:32 GMT\r\n" b"\r\n" b"data", out) def test_cp1252_url(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) # PEP3333 says environ variables are decoded as latin1. # Encode as latin1 to get original bytes return [e["PATH_INFO"].encode("latin1")] out, err = run_amock( validator(app), data=b"GET /\x80%80 HTTP/1.0") self.assertEqual( [ b"HTTP/1.0 200 OK", mock.ANY, b"Content-Type: text/plain", b"Date: Wed, 24 Dec 2008 13:29:32 GMT", b"", b"/\x80\x80", ], out.splitlines()) def test_interrupted_write(self): # BaseHandler._write() and _flush() have to write all data, even if # it takes multiple send() calls. Test this by interrupting a send() # call with a Unix signal. pthread_kill = support.get_attribute(signal, "pthread_kill") def app(environ, start_response): start_response("200 OK", []) return [b'\0' * support.SOCK_MAX_SIZE] class WsgiHandler(NoLogRequestHandler, WSGIRequestHandler): pass server = make_server(socket_helper.HOST, 0, app, handler_class=WsgiHandler) self.addCleanup(server.server_close) interrupted = threading.Event() def signal_handler(signum, frame): interrupted.set() original = signal.signal(signal.SIGUSR1, signal_handler) self.addCleanup(signal.signal, signal.SIGUSR1, original) received = None main_thread = threading.get_ident() def run_client(): http = HTTPConnection(*server.server_address) http.request("GET", "/") with http.getresponse() as response: response.read(100) # The main thread should now be blocking in a send() system # call. But in theory, it could get interrupted by other # signals, and then retried. So keep sending the signal in a # loop, in case an earlier signal happens to be delivered at # an inconvenient moment. 
while True: pthread_kill(main_thread, signal.SIGUSR1) if interrupted.wait(timeout=float(1)): break nonlocal received received = len(response.read()) http.close() background = threading.Thread(target=run_client) background.start() server.handle_request() background.join() self.assertEqual(received, support.SOCK_MAX_SIZE - 100) class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in} util.setup_testing_defaults(env) self.assertEqual(util.shift_path_info(env),part) self.assertEqual(env['PATH_INFO'],pi_out) self.assertEqual(env['SCRIPT_NAME'],sn_out) return env def checkDefault(self, key, value, alt=None): # Check defaulting when empty env = {} util.setup_testing_defaults(env) if isinstance(value, StringIO): self.assertIsInstance(env[key], StringIO) elif isinstance(value,BytesIO): self.assertIsInstance(env[key],BytesIO) else: self.assertEqual(env[key], value) # Check existing value env = {key:alt} util.setup_testing_defaults(env) self.assertIs(env[key], alt) def checkCrossDefault(self,key,value,**kw): util.setup_testing_defaults(kw) self.assertEqual(kw[key],value) def checkAppURI(self,uri,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.application_uri(kw),uri) def checkReqURI(self,uri,query=1,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) def checkFW(self,text,size,match): def make_it(text=text,size=size): return util.FileWrapper(StringIO(text),size) compare_generic_iter(make_it,match) it = make_it() self.assertFalse(it.filelike.closed) for item in it: pass self.assertFalse(it.filelike.closed) it.close() self.assertTrue(it.filelike.closed) def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') self.checkShift('/','', None, '/', '') self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') self.checkShift('/a/b', '//y', 'y', '/a/b/y', '') self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '') self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/') self.checkShift('/a/b', '///', '', '/a/b/', '') self.checkShift('/a/b', '/.//', '', '/a/b/', '') self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), ('SERVER_PORT', '80'), ('SERVER_PROTOCOL','HTTP/1.0'), ('HTTP_HOST','127.0.0.1'), ('REQUEST_METHOD','GET'), ('SCRIPT_NAME',''), ('PATH_INFO','/'), ('wsgi.version', (1,0)), ('wsgi.run_once', 0), ('wsgi.multithread', 0), ('wsgi.multiprocess', 0), ('wsgi.input', BytesIO()), ('wsgi.errors', StringIO()), ('wsgi.url_scheme','http'), ]: self.checkDefault(key,value) def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes") self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") 
self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkAppURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkAppURI("http://spam.example.com:2071/", HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071") self.checkAppURI("http://spam.example.com/", SERVER_NAME="spam.example.com") self.checkAppURI("http://127.0.0.1/", HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com") self.checkAppURI("https://127.0.0.1/", HTTPS="on") self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000", HTTP_HOST=None) def testReqURIs(self): self.checkReqURI("http://127.0.0.1/") self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkReqURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam", SCRIPT_NAME="/spammity", PATH_INFO="/spam") self.checkReqURI("http://127.0.0.1/spammity/sp%E4m", SCRIPT_NAME="/spammity", PATH_INFO="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam;ham", SCRIPT_NAME="/spammity", PATH_INFO="/spam;ham") self.checkReqURI("http://127.0.0.1/spammity/spam;cookie=1234,5678", SCRIPT_NAME="/spammity", PATH_INFO="/spam;cookie=1234,5678") self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") self.checkReqURI("http://127.0.0.1/spammity/spam?s%E4y=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="s%E4y=ni") self.checkReqURI("http://127.0.0.1/spammity/spam", 0, SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") def testFileWrapper(self): self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10]) def testHopByHop(self): for hop in ( "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization " "TE Trailers Transfer-Encoding Upgrade" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertTrue(util.is_hop_by_hop(alt)) # Not comprehensive, just a few random header names for hop in ( "Accept Cache-Control Date Pragma Trailer Via Warning" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertFalse(util.is_hop_by_hop(alt)) class HeaderTests(TestCase): def testMappingInterface(self): test = [('x','y')] self.assertEqual(len(Headers()), 0) self.assertEqual(len(Headers([])),0) self.assertEqual(len(Headers(test[:])),1) self.assertEqual(Headers(test[:]).keys(), ['x']) self.assertEqual(Headers(test[:]).values(), ['y']) self.assertEqual(Headers(test[:]).items(), test) self.assertIsNot(Headers(test).items(), test) # must be copy! 
h = Headers() del h['foo'] # should not raise an error h['Foo'] = 'bar' for m in h.__contains__, h.get, h.get_all, h.__getitem__: self.assertTrue(m('foo')) self.assertTrue(m('Foo')) self.assertTrue(m('FOO')) self.assertFalse(m('bar')) self.assertEqual(h['foo'],'bar') h['foo'] = 'baz' self.assertEqual(h['FOO'],'baz') self.assertEqual(h.get_all('foo'),['baz']) self.assertEqual(h.get("foo","whee"), "baz") self.assertEqual(h.get("zoo","whee"), "whee") self.assertEqual(h.setdefault("foo","whee"), "baz") self.assertEqual(h.setdefault("zoo","whee"), "whee") self.assertEqual(h["foo"],"baz") self.assertEqual(h["zoo"],"whee") def testRequireList(self): self.assertRaises(TypeError, Headers, "foo") def testExtras(self): h = Headers() self.assertEqual(str(h),'\r\n') h.add_header('foo','bar',baz="spam") self.assertEqual(h['foo'], 'bar; baz="spam"') self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n') h.add_header('Foo','bar',cheese=None) self.assertEqual(h.get_all('foo'), ['bar; baz="spam"', 'bar; cheese']) self.assertEqual(str(h), 'foo: bar; baz="spam"\r\n' 'Foo: bar; cheese\r\n' '\r\n' ) class ErrorHandler(BaseCGIHandler): """Simple handler subclass for testing BaseHandler""" # BaseHandler records the OS environment at import time, but envvars # might have been changed later by other tests, which trips up # HandlerTests.testEnviron(). os_environ = dict(os.environ.items()) def __init__(self,**kw): setup_testing_defaults(kw) BaseCGIHandler.__init__( self, BytesIO(), BytesIO(), StringIO(), kw, multithread=True, multiprocess=True ) class TestHandler(ErrorHandler): """Simple handler subclass for testing BaseHandler, w/error passthru""" def handle_error(self): raise # for testing, we want to see what's happening class HandlerTests(TestCase): # testEnviron() can produce long error message maxDiff = 80 * 50 def testEnviron(self): os_environ = { # very basic environment 'HOME': '/my/home', 'PATH': '/my/path', 'LANG': 'fr_FR.UTF-8', # set some WSGI variables 'SCRIPT_NAME': 'test_script_name', 'SERVER_NAME': 'test_server_name', } with support.swap_attr(TestHandler, 'os_environ', os_environ): # override X and HOME variables handler = TestHandler(X="Y", HOME="/override/home") handler.setup_environ() # Check that wsgi_xxx attributes are copied to wsgi.xxx variables # of handler.environ for attr in ('version', 'multithread', 'multiprocess', 'run_once', 'file_wrapper'): self.assertEqual(getattr(handler, 'wsgi_' + attr), handler.environ['wsgi.' 
+ attr]) # Test handler.environ as a dict expected = {} setup_testing_defaults(expected) # Handler inherits os_environ variables which are not overridden # by SimpleHandler.add_cgi_vars() (SimpleHandler.base_env) for key, value in os_environ.items(): if key not in expected: expected[key] = value expected.update({ # X doesn't exist in os_environ "X": "Y", # HOME is overridden by TestHandler 'HOME': "/override/home", # overridden by setup_testing_defaults() "SCRIPT_NAME": "", "SERVER_NAME": "127.0.0.1", # set by BaseHandler.setup_environ() 'wsgi.input': handler.get_stdin(), 'wsgi.errors': handler.get_stderr(), 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.multithread': True, 'wsgi.multiprocess': True, 'wsgi.file_wrapper': util.FileWrapper, }) self.assertDictEqual(handler.environ, expected) def testCGIEnviron(self): h = BaseCGIHandler(None,None,None,{}) h.setup_environ() for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors': self.assertIn(key, h.environ) def testScheme(self): h=TestHandler(HTTPS="on"); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'https') h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') def testAbstractMethods(self): h = BaseHandler() for name in [ '_flush','get_stdin','get_stderr','add_cgi_vars' ]: self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") def testContentLength(self): # Demo one reason iteration is better than write()... ;) def trivial_app1(e,s): s('200 OK',[]) return [e['wsgi.url_scheme'].encode('iso-8859-1')] def trivial_app2(e,s): s('200 OK',[])(e['wsgi.url_scheme'].encode('iso-8859-1')) return [] def trivial_app3(e,s): s('200 OK',[]) return ['\u0442\u0435\u0441\u0442'.encode("utf-8")] def trivial_app4(e,s): # Simulate a response to a HEAD request s('200 OK',[('Content-Length', '12345')]) return [] h = TestHandler() h.run(trivial_app1) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 4\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app2) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app3) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 8\r\n' b'\r\n' b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82') h = TestHandler() h.run(trivial_app4) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 12345\r\n' b'\r\n') def testBasicErrorOutput(self): def non_error_app(e,s): s('200 OK',[]) return [] def error_app(e,s): raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(non_error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n").encode("iso-8859-1")) self.assertEqual(h.stderr.getvalue(),"") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: %s\r\n" "Content-Type: text/plain\r\n" "Content-Length: %d\r\n" "\r\n" % (h.error_status,len(h.error_body))).encode('iso-8859-1') + h.error_body) self.assertIn("AssertionError", h.stderr.getvalue()) def testErrorAfterOutput(self): MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) def testHeaderFormats(self): def non_error_app(e,s): 
s('200 OK',[]) return [] stdpat = ( r"HTTP/%s 200 OK\r\n" r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n" r"%s" r"Content-Length: 0\r\n" r"\r\n" ) shortpat = ( "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n" ).encode("iso-8859-1") for ssw in "FooBar/1.0", None: sw = ssw and "Server: %s\r\n" % ssw or "" for version in "1.0", "1.1": for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1": h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = False h.http_version = version h.server_software = ssw h.run(non_error_app) self.assertEqual(shortpat,h.stdout.getvalue()) h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = True h.http_version = version h.server_software = ssw h.run(non_error_app) if proto=="HTTP/0.9": self.assertEqual(h.stdout.getvalue(),b"") else: self.assertTrue( re.match((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()), ((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()) ) def testBytesData(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ]) return [b"data"] h = TestHandler() h.run(app) self.assertEqual(b"Status: 200 OK\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Content-Length: 4\r\n" b"\r\n" b"data", h.stdout.getvalue()) def testCloseOnError(self): side_effects = {'close_called': False} MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) class CrashyIterable(object): def __iter__(self): while True: yield b'blah' raise AssertionError("This should be caught by handler") def close(self): side_effects['close_called'] = True return CrashyIterable() h = ErrorHandler() h.run(error_app) self.assertEqual(side_effects['close_called'], True) def testPartialWrite(self): written = bytearray() class PartialWriter: def write(self, b): partial = b[:7] written.extend(partial) return len(partial) def flush(self): pass environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), PartialWriter(), sys.stderr, environ) msg = "should not do partial writes" with self.assertWarnsRegex(DeprecationWarning, msg): h.run(hello_app) self.assertEqual(b"HTTP/1.0 200 OK\r\n" b"Content-Type: text/plain\r\n" b"Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" b"Content-Length: 13\r\n" b"\r\n" b"Hello, world!", written) def testClientConnectionTerminations(self): environ = {"SERVER_PROTOCOL": "HTTP/1.0"} for exception in ( ConnectionAbortedError, BrokenPipeError, ConnectionResetError, ): with self.subTest(exception=exception): class AbortingWriter: def write(self, b): raise exception stderr = StringIO() h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertFalse(stderr.getvalue()) def testDontResetInternalStateOnException(self): class CustomException(ValueError): pass # We are raising CustomException here to trigger an exception # during the execution of SimpleHandler.finish_response(), so # we can easily test that the internal state of the handler is # preserved in case of an exception. class AbortingWriter: def write(self, b): raise CustomException stderr = StringIO() environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertIn("CustomException", stderr.getvalue()) # Test that the internal state of the handler is preserved. 
self.assertIsNotNone(h.result) self.assertIsNotNone(h.headers) self.assertIsNotNone(h.status) self.assertIsNotNone(h.environ) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.11/version000066400000000000000000000000071471441230600177300ustar00rootroot000000000000003.11.8 gevent-24.11.1/src/greentest/3.12/000077500000000000000000000000001471441230600163245ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/allsans.pem000066400000000000000000000235711471441230600204740ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 
8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/badcert.pem000066400000000000000000000036101471441230600204330ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L 
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/badkey.pem000066400000000000000000000041621471441230600202710ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/capath/000077500000000000000000000000001471441230600175645ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/capath/4e1295a3.0000066400000000000000000000014561471441230600207300ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD 
VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/capath/5ed36f99.0000066400000000000000000000050111471441230600210200ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.12/capath/6e88d7b8.0000066400000000000000000000014561471441230600210320ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/capath/99d0fa06.0000066400000000000000000000050111471441230600210040ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk 
zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/capath/b1930218.0000066400000000000000000000030721471441230600206400ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/capath/ceff1710.0000066400000000000000000000030721471441230600210630ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v 
dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/000077500000000000000000000000001471441230600201135ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/certdata/allsans.pem000066400000000000000000000235711471441230600222630ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 
7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs 
bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/badcert.pem000066400000000000000000000036101471441230600222220ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU 
D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/badkey.pem000066400000000000000000000041621471441230600220600ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/000077500000000000000000000000001471441230600213535ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/certdata/capath/4e1295a3.0000066400000000000000000000014561471441230600225170ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X 
DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/5ed36f99.0000066400000000000000000000050111471441230600226070ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/6e88d7b8.0000066400000000000000000000014561471441230600226210ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv 
bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/99d0fa06.0000066400000000000000000000050111471441230600225730ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/b1930218.0000066400000000000000000000030721471441230600224270ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- 
MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/capath/ceff1710.0000066400000000000000000000030721471441230600226520ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/ffdh3072.pem000066400000000000000000000042441471441230600220450ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 
4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.12/certdata/idnsans.pem000066400000000000000000000233321471441230600222600ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn 
/+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl 
Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV 
FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycert.passwd.pem000066400000000000000000000102011471441230600235560ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 
T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycert.pem000066400000000000000000000077321471441230600222750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== 
-----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycert2.pem000066400000000000000000000077561471441230600223650ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA 
jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycert3.pem000066400000000000000000000223501471441230600223510ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu 
hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software 
Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx 
P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycert4.pem000066400000000000000000000223661471441230600223610ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 
1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD 
vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/keycertecc.pem000066400000000000000000000130051471441230600227360ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: 
sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/make_ssl_certs.py000066400000000000000000000223751471441230600234740ustar00rootroot00000000000000"""Make the custom certificate and private key files used by test_ssl and friends.""" import os import pprint import shutil import tempfile from subprocess import * startdate = "20180829142316Z" enddate = "20371028142316Z" req_template = """ [ default ] base_url = http://testca.pythontest.net/testca [req] distinguished_name = req_distinguished_name prompt = no [req_distinguished_name] C = XY L = Castle Anthrax O = Python Software Foundation CN = {hostname} [req_x509_extensions_nosan] 
[req_x509_extensions_simple] subjectAltName = @san [req_x509_extensions_full] subjectAltName = @san keyUsage = critical,keyEncipherment,digitalSignature extendedKeyUsage = serverAuth,clientAuth basicConstraints = critical,CA:false subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer:always authorityInfoAccess = @issuer_ocsp_info crlDistributionPoints = @crl_info [ issuer_ocsp_info ] caIssuers;URI.0 = $base_url/pycacert.cer OCSP;URI.0 = $base_url/ocsp/ [ crl_info ] URI.0 = $base_url/revocation.crl [san] DNS.1 = {hostname} {extra_san} [dir_sect] C = XY L = Castle Anthrax O = Python Software Foundation CN = dirname example [princ_name] realm = EXP:0, GeneralString:KERBEROS.REALM principal_name = EXP:1, SEQUENCE:principal_seq [principal_seq] name_type = EXP:0, INTEGER:1 name_string = EXP:1, SEQUENCE:principals [principals] princ1 = GeneralString:username [ ca ] default_ca = CA_default [ CA_default ] dir = cadir database = $dir/index.txt crlnumber = $dir/crl.txt default_md = sha256 startdate = {startdate} default_startdate = {startdate} enddate = {enddate} default_enddate = {enddate} default_days = 7000 default_crl_days = 7000 certificate = pycacert.pem private_key = pycakey.pem serial = $dir/serial RANDFILE = $dir/.rand policy = policy_match [ policy_match ] countryName = match stateOrProvinceName = optional organizationName = match organizationalUnitName = optional commonName = supplied emailAddress = optional [ policy_anything ] countryName = optional stateOrProvinceName = optional localityName = optional organizationName = optional organizationalUnitName = optional commonName = supplied emailAddress = optional [ v3_ca ] subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer basicConstraints = CA:true """ here = os.path.abspath(os.path.dirname(__file__)) def make_cert_key(hostname, sign=False, extra_san='', ext='req_x509_extensions_full', key='rsa:3072'): print("creating cert for " + hostname) tempnames = [] for i in range(3): with tempfile.NamedTemporaryFile(delete=False) as f: tempnames.append(f.name) req_file, cert_file, key_file = tempnames try: req = req_template.format( hostname=hostname, extra_san=extra_san, startdate=startdate, enddate=enddate ) with open(req_file, 'w') as f: f.write(req) args = ['req', '-new', '-nodes', '-days', '7000', '-newkey', key, '-keyout', key_file, '-extensions', ext, '-config', req_file] if sign: with tempfile.NamedTemporaryFile(delete=False) as f: tempnames.append(f.name) reqfile = f.name args += ['-out', reqfile ] else: args += ['-x509', '-out', cert_file ] check_call(['openssl'] + args) if sign: args = [ 'ca', '-config', req_file, '-extensions', ext, '-out', cert_file, '-outdir', 'cadir', '-policy', 'policy_anything', '-batch', '-infiles', reqfile ] check_call(['openssl'] + args) with open(cert_file, 'r') as f: cert = f.read() with open(key_file, 'r') as f: key = f.read() return cert, key finally: for name in tempnames: os.remove(name) TMP_CADIR = 'cadir' def unmake_ca(): shutil.rmtree(TMP_CADIR) def make_ca(): os.mkdir(TMP_CADIR) with open(os.path.join('cadir','index.txt'),'a+') as f: pass # empty file with open(os.path.join('cadir','crl.txt'),'a+') as f: f.write("00") with open(os.path.join('cadir','index.txt.attr'),'w+') as f: f.write('unique_subject = no') # random start value for serial numbers with open(os.path.join('cadir','serial'), 'w') as f: f.write('CB2D80995A69525B\n') with tempfile.NamedTemporaryFile("w") as t: req = req_template.format( hostname='our-ca-server', extra_san='', startdate=startdate, 
enddate=enddate ) t.write(req) t.flush() with tempfile.NamedTemporaryFile() as f: args = ['req', '-config', t.name, '-new', '-nodes', '-newkey', 'rsa:3072', '-keyout', 'pycakey.pem', '-out', f.name, '-subj', '/C=XY/L=Castle Anthrax/O=Python Software Foundation CA/CN=our-ca-server'] check_call(['openssl'] + args) args = ['ca', '-config', t.name, '-out', 'pycacert.pem', '-batch', '-outdir', TMP_CADIR, '-keyfile', 'pycakey.pem', '-selfsign', '-extensions', 'v3_ca', '-infiles', f.name ] check_call(['openssl'] + args) args = ['ca', '-config', t.name, '-gencrl', '-out', 'revocation.crl'] check_call(['openssl'] + args) # capath hashes depend on subject! check_call([ 'openssl', 'x509', '-in', 'pycacert.pem', '-out', 'capath/ceff1710.0' ]) shutil.copy('capath/ceff1710.0', 'capath/b1930218.0') def print_cert(path): import _ssl pprint.pprint(_ssl._test_decode_cert(path)) if __name__ == '__main__': os.chdir(here) cert, key = make_cert_key('localhost', ext='req_x509_extensions_simple') with open('ssl_cert.pem', 'w') as f: f.write(cert) with open('ssl_key.pem', 'w') as f: f.write(key) print("password protecting ssl_key.pem in ssl_key.passwd.pem") check_call(['openssl','pkey','-in','ssl_key.pem','-out','ssl_key.passwd.pem','-aes256','-passout','pass:somepass']) check_call(['openssl','pkey','-in','ssl_key.pem','-out','keycert.passwd.pem','-aes256','-passout','pass:somepass']) with open('keycert.pem', 'w') as f: f.write(key) f.write(cert) with open('keycert.passwd.pem', 'a+') as f: f.write(cert) # For certificate matching tests make_ca() cert, key = make_cert_key('fakehostname', ext='req_x509_extensions_simple') with open('keycert2.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('localhost', sign=True) with open('keycert3.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('fakehostname', sign=True) with open('keycert4.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key( 'localhost-ecc', sign=True, key='param:secp384r1.pem' ) with open('keycertecc.pem', 'w') as f: f.write(key) f.write(cert) extra_san = [ 'otherName.1 = 1.2.3.4;UTF8:some other identifier', 'otherName.2 = 1.3.6.1.5.2.2;SEQUENCE:princ_name', 'email.1 = user@example.org', 'DNS.2 = www.example.org', # GEN_X400 'dirName.1 = dir_sect', # GEN_EDIPARTY 'URI.1 = https://www.python.org/', 'IP.1 = 127.0.0.1', 'IP.2 = ::1', 'RID.1 = 1.2.3.4.5', ] cert, key = make_cert_key('allsans', sign=True, extra_san='\n'.join(extra_san)) with open('allsans.pem', 'w') as f: f.write(key) f.write(cert) extra_san = [ # könig (king) 'DNS.2 = xn--knig-5qa.idn.pythontest.net', # königsgäßchen (king's alleyway) 'DNS.3 = xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'DNS.4 = xn--knigsgchen-b4a3dun.idna2008.pythontest.net', # βόλοσ (marble) 'DNS.5 = xn--nxasmq6b.idna2003.pythontest.net', 'DNS.6 = xn--nxasmm1c.idna2008.pythontest.net', ] # IDN SANS, signed cert, key = make_cert_key('idnsans', sign=True, extra_san='\n'.join(extra_san)) with open('idnsans.pem', 'w') as f: f.write(key) f.write(cert) cert, key = make_cert_key('nosan', sign=True, ext='req_x509_extensions_nosan') with open('nosan.pem', 'w') as f: f.write(key) f.write(cert) unmake_ca() print("update Lib/test/test_ssl.py and Lib/test/test_asyncio/utils.py") print_cert('keycert.pem') print_cert('keycert3.pem') gevent-24.11.1/src/greentest/3.12/certdata/nokia.pem000066400000000000000000000036031471441230600217210ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- 
MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/nosan.pem000066400000000000000000000170471471441230600217450ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf 
SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 
6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/nullbytecert.pem000066400000000000000000000124731471441230600233410ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: 
************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/nullcert.pem000066400000000000000000000000001471441230600224340ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/certdata/pycacert.pem000066400000000000000000000130401471441230600224260ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: 
Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi 
bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/pycakey.pem000066400000000000000000000046641471441230600222750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/certdata/revocation.crl000066400000000000000000000014401471441230600227650ustar00rootroot00000000000000-----BEGIN 
X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.12/certdata/secp384r1.pem000066400000000000000000000004001471441230600222440ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.12/certdata/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600264570ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.12/certdata/ssl_cert.pem000066400000000000000000000030421471441230600224330ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/certdata/ssl_key.passwd.pem000066400000000000000000000051361471441230600235740ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX 
LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/certdata/ssl_key.pem000066400000000000000000000046701471441230600222760ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H 
kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/certdata/talos-2019-0758.pem000066400000000000000000000024621471441230600227360ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/ffdh3072.pem000066400000000000000000000042441471441230600202560ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu 
N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.12/idnsans.pem000066400000000000000000000233321471441230600204710ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn /+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 
78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 
0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/keycert.passwd.pem000066400000000000000000000102011471441230600217670ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 
GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/keycert.pem000066400000000000000000000077321471441230600205060ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT 
sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.12/keycert2.pem000066400000000000000000000077561471441230600205760ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ 
n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/keycert3.pem000066400000000000000000000223501471441230600205620ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, 
L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/keycert4.pem000066400000000000000000000223661471441230600205720ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE 
Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 
76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/keycertecc.pem000066400000000000000000000130051471441230600211470ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 
0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV 
HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/nokia.pem000066400000000000000000000036031471441230600201320ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/nosan.pem000066400000000000000000000170471471441230600201560ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd 
LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: 
aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/nullbytecert.pem000066400000000000000000000124731471441230600215520ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not 
Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA 
oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/nullcert.pem000066400000000000000000000000001471441230600206450ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.12/pycacert.pem000066400000000000000000000130401471441230600206370ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 
3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/pycakey.pem000066400000000000000000000046641471441230600205060ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML 
rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/revocation.crl000066400000000000000000000014401471441230600211760ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.12/secp384r1.pem000066400000000000000000000004001471441230600204550ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.12/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600246700ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK 
DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/signalinterproctester.py000066400000000000000000000061751471441230600233410ustar00rootroot00000000000000import gc import os import signal import subprocess import sys import time import unittest from test import support class SIGUSR1Exception(Exception): pass class InterProcessSignalTests(unittest.TestCase): def setUp(self): self.got_signals = {'SIGHUP': 0, 'SIGUSR1': 0, 'SIGALRM': 0} def sighup_handler(self, signum, frame): self.got_signals['SIGHUP'] += 1 def sigusr1_handler(self, signum, frame): self.got_signals['SIGUSR1'] += 1 raise SIGUSR1Exception def wait_signal(self, child, signame): if child is not None: # This wait should be interrupted by exc_class # (if set) child.wait() start_time = time.monotonic() for _ in support.busy_retry(support.SHORT_TIMEOUT, error=False): if self.got_signals[signame]: return signal.pause() else: dt = time.monotonic() - start_time self.fail('signal %s not received after %.1f seconds' % (signame, dt)) def subprocess_send_signal(self, pid, signame): code = 'import os, signal; os.kill(%s, signal.%s)' % (pid, signame) args = [sys.executable, '-I', '-c', code] return subprocess.Popen(args) def test_interprocess_signal(self): # Install handlers. This function runs in a sub-process, so we # don't worry about re-setting the default handlers. signal.signal(signal.SIGHUP, self.sighup_handler) signal.signal(signal.SIGUSR1, self.sigusr1_handler) signal.signal(signal.SIGUSR2, signal.SIG_IGN) signal.signal(signal.SIGALRM, signal.default_int_handler) # Let the sub-processes know who to send signals to. pid = str(os.getpid()) with self.subprocess_send_signal(pid, "SIGHUP") as child: self.wait_signal(child, 'SIGHUP') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 0, 'SIGALRM': 0}) # gh-110033: Make sure that the subprocess.Popen is deleted before # the next test which raises an exception. Otherwise, the exception # may be raised when Popen.__del__() is executed and so be logged # as "Exception ignored in: ". 
child = None gc.collect() with self.assertRaises(SIGUSR1Exception): with self.subprocess_send_signal(pid, "SIGUSR1") as child: self.wait_signal(child, 'SIGUSR1') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) with self.subprocess_send_signal(pid, "SIGUSR2") as child: # Nothing should happen: SIGUSR2 is ignored child.wait() try: with self.assertRaises(KeyboardInterrupt): signal.alarm(1) self.wait_signal(None, 'SIGALRM') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) finally: signal.alarm(0) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/ssl_cert.pem000066400000000000000000000030421471441230600206440ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/ssl_key.passwd.pem000066400000000000000000000051361471441230600220050ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 
c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/ssl_key.pem000066400000000000000000000046701471441230600205070ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO 
yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.12/talos-2019-0758.pem000066400000000000000000000024621471441230600211470ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.12/test_context.py000066400000000000000000000754361471441230600214400ustar00rootroot00000000000000import concurrent.futures import contextvars import functools import gc import random import time import unittest import weakref from test import support from test.support import threading_helper try: from _testcapi import hamt except ImportError: hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): def test_context_var_new_1(self): with self.assertRaisesRegex(TypeError, 'takes exactly 1'): contextvars.ContextVar() with self.assertRaisesRegex(TypeError, 'must be a str'): contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = contextvars.ContextVar('a', default=123) self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', repr(t)) def 
test_context_subclassing_1(self): with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContextVar(contextvars.ContextVar): # Potentially we might want ContextVars to be subclassable. pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContext(contextvars.Context): pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyToken(contextvars.Token): pass def test_context_new_1(self): with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1, a=1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) def test_context_run_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'missing 1 required'): ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 42) ctx.run(fun) def test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() 
self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context @threading_helper.requires_working_threading() def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num tp = concurrent.futures.ThreadPoolExecutor(max_workers=10) try: results = list(tp.map(sub, range(10))) finally: tp.shutdown() self.assertEqual(results, list(range(10))) # HAMT Tests class HashKey: _crasher = None def __init__(self, hash, name, *, error_on_eq_to=None): assert hash != -1 self.name = name self.hash = hash self.error_on_eq_to = error_on_eq_to def __repr__(self): return 
f'' def __hash__(self): if self._crasher is not None and self._crasher.error_on_hash: raise HashingError return self.hash def __eq__(self, other): if not isinstance(other, HashKey): return NotImplemented if self._crasher is not None and self._crasher.error_on_eq: raise EqError if self.error_on_eq_to is not None and self.error_on_eq_to is other: raise ValueError(f'cannot compare {self!r} to {other!r}') if other.error_on_eq_to is not None and other.error_on_eq_to is self: raise ValueError(f'cannot compare {other!r} to {self!r}') return (self.name, self.hash) == (other.name, other.hash) class KeyStr(str): def __hash__(self): if HashKey._crasher is not None and HashKey._crasher.error_on_hash: raise HashingError return super().__hash__() def __eq__(self, other): if HashKey._crasher is not None and HashKey._crasher.error_on_eq: raise EqError return super().__eq__(other) class HaskKeyCrasher: def __init__(self, *, error_on_hash=False, error_on_eq=False): self.error_on_hash = error_on_hash self.error_on_eq = error_on_eq def __enter__(self): if HashKey._crasher is not None: raise RuntimeError('cannot nest crashers') HashKey._crasher = self def __exit__(self, *exc): HashKey._crasher = None class HashingError(Exception): pass class EqError(Exception): pass @unittest.skipIf(hamt is None, '_testcapi lacks "hamt()" function') class HamtTest(unittest.TestCase): def test_hashkey_helper_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') self.assertNotEqual(k1, k2) self.assertEqual(hash(k1), hash(k2)) d = dict() d[k1] = 'a' d[k2] = 'b' self.assertEqual(d[k1], 'a') self.assertEqual(d[k2], 'b') def test_hamt_basics_1(self): h = hamt() h = None # NoQA def test_hamt_basics_2(self): h = hamt() self.assertEqual(len(h), 0) h2 = h.set('a', 'b') self.assertIsNot(h, h2) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertIsNone(h.get('a')) self.assertEqual(h.get('a', 42), 42) self.assertEqual(h2.get('a'), 'b') h3 = h2.set('b', 10) self.assertIsNot(h2, h3) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(h3.get('a'), 'b') self.assertEqual(h3.get('b'), 10) self.assertIsNone(h.get('b')) self.assertIsNone(h2.get('b')) self.assertIsNone(h.get('a')) self.assertEqual(h2.get('a'), 'b') h = h2 = h3 = None def test_hamt_basics_3(self): h = hamt() o = object() h1 = h.set('1', o) h2 = h1.set('1', o) self.assertIs(h1, h2) def test_hamt_basics_4(self): h = hamt() h1 = h.set('key', []) h2 = h1.set('key', []) self.assertIsNot(h1, h2) self.assertEqual(len(h1), 1) self.assertEqual(len(h2), 1) self.assertIsNot(h1.get('key'), h2.get('key')) def test_hamt_collision_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') k3 = HashKey(10, 'ccc') h = hamt() h2 = h.set(k1, 'a') h3 = h2.set(k2, 'b') self.assertEqual(h.get(k1), None) self.assertEqual(h.get(k2), None) self.assertEqual(h2.get(k1), 'a') self.assertEqual(h2.get(k2), None) self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') h4 = h3.set(k2, 'cc') h5 = h4.set(k3, 'aa') self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') self.assertEqual(h4.get(k1), 'a') self.assertEqual(h4.get(k2), 'cc') self.assertEqual(h4.get(k3), None) self.assertEqual(h5.get(k1), 'a') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k3), 'aa') self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(len(h4), 2) self.assertEqual(len(h5), 3) def test_hamt_collision_3(self): # Test that iteration works with the deepest 
tree possible. # https://github.com/python/cpython/issues/93065 C = HashKey(0b10000000_00000000_00000000_00000000, 'C') D = HashKey(0b10000000_00000000_00000000_00000000, 'D') E = HashKey(0b00000000_00000000_00000000_00000000, 'E') h = hamt() h = h.set(C, 'C') h = h.set(D, 'D') h = h.set(E, 'E') # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=4 count=2 bitmap=0b101): # : 'E' # NULL: # CollisionNode(size=4 id=0x107a24520): # : 'C' # : 'D' self.assertEqual({k.name for k in h.keys()}, {'C', 'D', 'E'}) @support.requires_resource('cpu') def test_hamt_stress(self): COLLECTION_SIZE = 7000 TEST_ITERS_EVERY = 647 CRASH_HASH_EVERY = 97 CRASH_EQ_EVERY = 11 RUN_XTIMES = 3 for _ in range(RUN_XTIMES): h = hamt() d = dict() for i in range(COLLECTION_SIZE): key = KeyStr(i) if not (i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.set(key, i) h = h.set(key, i) if not (i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.get(KeyStr(i)) # really trigger __eq__ d[key] = i self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.items()), set(d.items())) self.assertEqual(len(h.items()), len(d.items())) self.assertEqual(len(h), COLLECTION_SIZE) for key in range(COLLECTION_SIZE): self.assertEqual(h.get(KeyStr(key), 'not found'), key) keys_to_delete = list(range(COLLECTION_SIZE)) random.shuffle(keys_to_delete) for iter_i, i in enumerate(keys_to_delete): key = KeyStr(i) if not (iter_i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.delete(key) if not (iter_i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.delete(KeyStr(i)) h = h.delete(key) self.assertEqual(h.get(key, 'not found'), 'not found') del d[key] self.assertEqual(len(d), len(h)) if iter_i == COLLECTION_SIZE // 2: hm = h dm = d.copy() if not (iter_i % TEST_ITERS_EVERY): self.assertEqual(set(h.keys()), set(d.keys())) self.assertEqual(len(h.keys()), len(d.keys())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) # ============ for key in dm: self.assertEqual(hm.get(str(key)), dm[key]) self.assertEqual(len(dm), len(hm)) for i, key in enumerate(keys_to_delete): hm = hm.delete(str(key)) self.assertEqual(hm.get(str(key), 'not found'), 'not found') dm.pop(str(key), None) self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.values()), set(d.values())) self.assertEqual(len(h.values()), len(d.values())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) self.assertEqual(list(h.items()), []) def test_hamt_delete_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(102, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(103, 'Er', error_on_eq_to=D) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # : 'a' # : 'b' # : 'c' # : 'd' # : 'e' h = h.delete(C) self.assertEqual(len(h), orig_len - 1) with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(D) self.assertEqual(len(h), orig_len - 2) h2 = h.delete(Z) self.assertIs(h2, h) h = h.delete(A) 
self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(A, 42), 42) self.assertEqual(h.get(B), 'b') self.assertEqual(h.get(E), 'e') def test_hamt_delete_2(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(201001, 'Er', error_on_eq_to=B) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=8 bitmap=0b1110010000): # : 'a' # : 'd' # : 'e' # NULL: # BitmapNode(size=4 bitmap=0b100000000001000000000): # : 'b' # : 'c' with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(Z) self.assertEqual(len(h), orig_len) h = h.delete(C) self.assertEqual(len(h), orig_len - 1) h = h.delete(B) self.assertEqual(len(h), orig_len - 2) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(D), 'd') self.assertEqual(h.get(E), 'e') h = h.delete(A) h = h.delete(B) h = h.delete(D) h = h.delete(E) self.assertEqual(len(h), 0) def test_hamt_delete_3(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(104, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=6 bitmap=0b100110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=4 id=0x108572410): # : 'c' # : 'd' # : 'b' # : 'e' h = h.delete(A) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) self.assertEqual(h.get(C), 'c') self.assertEqual(h.get(B), 'b') def test_hamt_delete_4(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=4 bitmap=0b110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=6 id=0x10515ef30): # : 'c' # : 'd' # : 'e' # : 'b' h = h.delete(D) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) h = h.delete(C) self.assertEqual(len(h), orig_len - 3) h = h.delete(A) self.assertEqual(len(h), orig_len - 4) h = h.delete(B) self.assertEqual(len(h), 0) def test_hamt_delete_5(self): h = hamt() keys = [] for i in range(17): key = HashKey(i, str(i)) keys.append(key) h = h.set(key, f'val-{i}') collision_key16 = HashKey(16, '18') h = h.set(collision_key16, 'collision') # ArrayNode(id=0x10f8b9318): # 0:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-0' # # ... 14 more BitmapNodes ... 
# # 15:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-15' # # 16:: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # CollisionNode(size=4 id=0x10f2f5af8): # : 'val-16' # : 'collision' self.assertEqual(len(h), 18) h = h.delete(keys[2]) self.assertEqual(len(h), 17) h = h.delete(collision_key16) self.assertEqual(len(h), 16) h = h.delete(keys[16]) self.assertEqual(len(h), 15) h = h.delete(keys[1]) self.assertEqual(len(h), 14) h = h.delete(keys[1]) self.assertEqual(len(h), 14) for key in keys: h = h.delete(key) self.assertEqual(len(h), 0) def test_hamt_items_1(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_items_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_keys_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) self.assertEqual(set(list(h)), {A, B, C, D, E, F}) def test_hamt_items_3(self): h = hamt() self.assertEqual(len(h.items()), 0) self.assertEqual(list(h.items()), []) def test_hamt_eq_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(120, 'E') h1 = hamt() h1 = h1.set(A, 'a') h1 = h1.set(B, 'b') h1 = h1.set(C, 'c') h1 = h1.set(D, 'd') h2 = hamt() h2 = h2.set(A, 'a') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(B, 'b') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(C, 'c') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd2') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd') self.assertTrue(h1 == h2) self.assertFalse(h1 != h2) h2 = h2.set(E, 'e') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.delete(D) self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(E, 'd') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) def test_hamt_eq_2(self): A = HashKey(100, 'A') Er = HashKey(100, 'Er', error_on_eq_to=A) h1 = hamt() h1 = h1.set(A, 'a') h2 = hamt() h2 = h2.set(Er, 'a') with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 == h2 with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 != h2 def test_hamt_gc_1(self): A = HashKey(100, 'A') h = hamt() h = h.set(0, 0) # empty HAMT node is memoized in hamt.c ref = weakref.ref(h) a = [] a.append(a) a.append(h) b = [] a.append(b) b.append(a) h = h.set(A, b) del h, a, b gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_gc_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 'a') h = h.set(A, h) ref = weakref.ref(h) hi = h.items() next(hi) del h, hi gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_in_1(self): A = HashKey(100, 'A') AA = 
HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertTrue(A in h) self.assertFalse(B in h) with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): AA in h with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): AA in h def test_hamt_getitem_1(self): A = HashKey(100, 'A') AA = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertEqual(h[A], 1) self.assertEqual(h[AA], 1) with self.assertRaises(KeyError): h[B] with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): h[AA] with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_ftplib.py000066400000000000000000001236761471441230600212340ustar00rootroot00000000000000"""Test script for ftplib module.""" # Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS # environment import ftplib import socket import io import errno import os import threading import time import unittest try: import ssl except ImportError: ssl = None from unittest import TestCase, skipUnless from test import support from test.support import threading_helper from test.support import socket_helper from test.support import warnings_helper from test.support import asynchat from test.support import asyncore from test.support.socket_helper import HOST, HOSTv6 support.requires_working_socket(module=True) TIMEOUT = support.LOOPBACK_TIMEOUT DEFAULT_ENCODING = 'utf-8' # the dummy data returned by server over the data channel when # RETR, LIST, NLST, MLSD commands are issued RETR_DATA = 'abcde\xB9\xB2\xB3\xA4\xA6\r\n' * 1000 LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n" "type=pdir;perm=e;unique==keVO1+d?3; ..\r\n" "type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n" "type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n" "type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n" "type=file;perm=awr;unique==keVO1+8G4; writable\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n" "type=dir;perm=;unique==keVO1+1t2; no-exec\r\n" "type=file;perm=r;unique==keVO1+EG4; two words\r\n" "type=file;perm=r;unique==keVO1+IH4; leading space\r\n" "type=file;perm=r;unique==keVO1+1G4; file1\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n" "type=file;perm=r;unique==keVO1+1G4; file2\r\n" "type=file;perm=r;unique==keVO1+1G4; file3\r\n" "type=file;perm=r;unique==keVO1+1G4; file4\r\n" "type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n" "type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n") def default_error_handler(): # bpo-44359: Silently ignore socket errors. Such errors occur when a client # socket is closed, in TestFTPClass.tearDown() and makepasv() tests, and # the server gets an error on its side. pass class DummyDTPHandler(asynchat.async_chat): dtp_conn_closed = False def __init__(self, conn, baseclass): asynchat.async_chat.__init__(self, conn) self.baseclass = baseclass self.baseclass.last_received_data = bytearray() self.encoding = baseclass.encoding def handle_read(self): new_data = self.recv(1024) self.baseclass.last_received_data += new_data def handle_close(self): # XXX: this method can be called many times in a row for a single # connection, including in clear-text (non-TLS) mode. 
# (behaviour witnessed with test_data_connection) if not self.dtp_conn_closed: self.baseclass.push('226 transfer complete') self.close() self.dtp_conn_closed = True def push(self, what): if self.baseclass.next_data is not None: what = self.baseclass.next_data self.baseclass.next_data = None if not what: return self.close_when_done() super(DummyDTPHandler, self).push(what.encode(self.encoding)) def handle_error(self): default_error_handler() class DummyFTPHandler(asynchat.async_chat): dtp_handler = DummyDTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): asynchat.async_chat.__init__(self, conn) # tells the socket to handle urgent data inline (ABOR command) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1) self.set_terminator(b"\r\n") self.in_buffer = [] self.dtp = None self.last_received_cmd = None self.last_received_data = bytearray() self.next_response = '' self.next_data = None self.rest = None self.next_retr_data = RETR_DATA self.push('220 welcome') self.encoding = encoding # We use this as the string IPv4 address to direct the client # to in response to a PASV command. To test security behavior. # https://bugs.python.org/issue43285/. self.fake_pasv_server_ip = '252.253.254.255' def collect_incoming_data(self, data): self.in_buffer.append(data) def found_terminator(self): line = b''.join(self.in_buffer).decode(self.encoding) self.in_buffer = [] if self.next_response: self.push(self.next_response) self.next_response = '' cmd = line.split(' ')[0].lower() self.last_received_cmd = cmd space = line.find(' ') if space != -1: arg = line[space + 1:] else: arg = "" if hasattr(self, 'cmd_' + cmd): method = getattr(self, 'cmd_' + cmd) method(arg) else: self.push('550 command "%s" not understood.' %cmd) def handle_error(self): default_error_handler() def push(self, data): asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n') def cmd_port(self, arg): addr = list(map(int, arg.split(','))) ip = '%d.%d.%d.%d' %tuple(addr[:4]) port = (addr[4] * 256) + addr[5] s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_pasv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0)) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] ip = self.fake_pasv_server_ip ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256 self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2)) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_eprt(self, arg): af, ip, port = arg.split(arg[0])[1:-1] port = int(port) s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_epsv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0), family=socket.AF_INET6) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] self.push('229 entering extended passive mode (|||%d|)' %port) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_echo(self, arg): # sends back the received string (used by the test suite) self.push(arg) def cmd_noop(self, arg): self.push('200 noop ok') def cmd_user(self, arg): self.push('331 username ok') def cmd_pass(self, arg): self.push('230 password ok') def cmd_acct(self, arg): self.push('230 acct ok') def cmd_rnfr(self, arg): self.push('350 rnfr ok') def cmd_rnto(self, arg): self.push('250 rnto ok') def cmd_dele(self, arg): 
self.push('250 dele ok') def cmd_cwd(self, arg): self.push('250 cwd ok') def cmd_size(self, arg): self.push('250 1000') def cmd_mkd(self, arg): self.push('257 "%s"' %arg) def cmd_rmd(self, arg): self.push('250 rmd ok') def cmd_pwd(self, arg): self.push('257 "pwd ok"') def cmd_type(self, arg): self.push('200 type ok') def cmd_quit(self, arg): self.push('221 quit ok') self.close() def cmd_abor(self, arg): self.push('226 abor ok') def cmd_stor(self, arg): self.push('125 stor ok') def cmd_rest(self, arg): self.rest = arg self.push('350 rest ok') def cmd_retr(self, arg): self.push('125 retr ok') if self.rest is not None: offset = int(self.rest) else: offset = 0 self.dtp.push(self.next_retr_data[offset:]) self.dtp.close_when_done() self.rest = None def cmd_list(self, arg): self.push('125 list ok') self.dtp.push(LIST_DATA) self.dtp.close_when_done() def cmd_nlst(self, arg): self.push('125 nlst ok') self.dtp.push(NLST_DATA) self.dtp.close_when_done() def cmd_opts(self, arg): self.push('200 opts ok') def cmd_mlsd(self, arg): self.push('125 mlsd ok') self.dtp.push(MLSD_DATA) self.dtp.close_when_done() def cmd_setlongretr(self, arg): # For testing. Next RETR will return long line. self.next_retr_data = 'x' * int(arg) self.push('125 setlongretr ok') class DummyFTPServer(asyncore.dispatcher, threading.Thread): handler = DummyFTPHandler def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING): threading.Thread.__init__(self) asyncore.dispatcher.__init__(self) self.daemon = True self.create_socket(af, socket.SOCK_STREAM) self.bind(address) self.listen(5) self.active = False self.active_lock = threading.Lock() self.host, self.port = self.socket.getsockname()[:2] self.handler_instance = None self.encoding = encoding def start(self): assert not self.active self.__flag = threading.Event() threading.Thread.start(self) self.__flag.wait() def run(self): self.active = True self.__flag.set() while self.active and asyncore.socket_map: self.active_lock.acquire() asyncore.loop(timeout=0.1, count=1) self.active_lock.release() asyncore.close_all(ignore_all=True) def stop(self): assert self.active self.active = False self.join() def handle_accepted(self, conn, addr): self.handler_instance = self.handler(conn, encoding=self.encoding) def handle_connect(self): self.close() handle_read = handle_connect def writable(self): return 0 def handle_error(self): default_error_handler() if ssl is not None: CERTFILE = os.path.join(os.path.dirname(__file__), "certdata", "keycert3.pem") CAFILE = os.path.join(os.path.dirname(__file__), "certdata", "pycacert.pem") class SSLConnection(asyncore.dispatcher): """An asyncore.dispatcher subclass supporting TLS/SSL.""" _ssl_accepting = False _ssl_closing = False def secure_connection(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain(CERTFILE) socket = context.wrap_socket(self.socket, suppress_ragged_eofs=False, server_side=True, do_handshake_on_connect=False) self.del_channel() self.set_socket(socket) self._ssl_accepting = True def _do_ssl_handshake(self): try: self.socket.do_handshake() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return elif err.args[0] == ssl.SSL_ERROR_EOF: return self.handle_close() # TODO: SSLError does not expose alert information elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]: return self.handle_close() raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def _do_ssl_shutdown(self): 
self._ssl_closing = True try: self.socket = self.socket.unwrap() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return except OSError: # Any "socket error" corresponds to a SSL_ERROR_SYSCALL return # from OpenSSL's SSL_shutdown(), corresponding to a # closed socket condition. See also: # http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html pass self._ssl_closing = False if getattr(self, '_ccc', False) is False: super(SSLConnection, self).close() else: pass def handle_read_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_read_event() def handle_write_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_write_event() def send(self, data): try: return super(SSLConnection, self).send(data) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN, ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return 0 raise def recv(self, buffer_size): try: return super(SSLConnection, self).recv(buffer_size) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return b'' if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN): self.handle_close() return b'' raise def handle_error(self): default_error_handler() def close(self): if (isinstance(self.socket, ssl.SSLSocket) and self.socket._sslobj is not None): self._do_ssl_shutdown() else: super(SSLConnection, self).close() class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler): """A DummyDTPHandler subclass supporting TLS/SSL.""" def __init__(self, conn, baseclass): DummyDTPHandler.__init__(self, conn, baseclass) if self.baseclass.secure_data_channel: self.secure_connection() class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler): """A DummyFTPHandler subclass supporting TLS/SSL.""" dtp_handler = DummyTLS_DTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): DummyFTPHandler.__init__(self, conn, encoding=encoding) self.secure_data_channel = False self._ccc = False def cmd_auth(self, line): """Set up secure control channel.""" self.push('234 AUTH TLS successful') self.secure_connection() def cmd_ccc(self, line): self.push('220 Reverting back to clear-text') self._ccc = True self._do_ssl_shutdown() def cmd_pbsz(self, line): """Negotiate size of buffer for secure data transfer. For TLS/SSL the only valid value for the parameter is '0'. Any other value is accepted but ignored. 
""" self.push('200 PBSZ=0 successful.') def cmd_prot(self, line): """Setup un/secure data channel.""" arg = line.upper() if arg == 'C': self.push('200 Protection set to Clear') self.secure_data_channel = False elif arg == 'P': self.push('200 Protection set to Private') self.secure_data_channel = True else: self.push("502 Unrecognized PROT type (use C or P).") class DummyTLS_FTPServer(DummyFTPServer): handler = DummyTLS_FTPHandler class TestFTPClass(TestCase): def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyFTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def check_data(self, received, expected): self.assertEqual(len(received), len(expected)) self.assertEqual(received, expected) def test_getwelcome(self): self.assertEqual(self.client.getwelcome(), '220 welcome') def test_sanitize(self): self.assertEqual(self.client.sanitize('foo'), repr('foo')) self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****')) self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****')) def test_exceptions(self): self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599') self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999') def test_all_errors(self): exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm, ftplib.error_proto, ftplib.Error, OSError, EOFError) for x in exceptions: try: raise x('exception not included in all_errors set') except ftplib.all_errors: pass def test_set_pasv(self): # passive mode is supposed to be enabled by default self.assertTrue(self.client.passiveserver) self.client.set_pasv(True) self.assertTrue(self.client.passiveserver) self.client.set_pasv(False) self.assertFalse(self.client.passiveserver) def test_voidcmd(self): self.assertEqual(self.client.voidcmd('echo 200'), '200') self.assertEqual(self.client.voidcmd('echo 299'), '299') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300') def test_login(self): self.client.login() def test_acct(self): self.client.acct('passwd') def test_rename(self): self.client.rename('a', 'b') self.server.handler_instance.next_response = '200' self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b') def test_delete(self): self.client.delete('foo') self.server.handler_instance.next_response = '199' self.assertRaises(ftplib.error_reply, self.client.delete, 'foo') def test_size(self): self.client.size('foo') def test_mkd(self): dir = self.client.mkd('/foo') self.assertEqual(dir, '/foo') def test_rmd(self): self.client.rmd('foo') def test_cwd(self): dir = self.client.cwd('/foo') self.assertEqual(dir, '250 cwd ok') def test_pwd(self): dir = self.client.pwd() self.assertEqual(dir, 'pwd ok') def test_quit(self): self.assertEqual(self.client.quit(), '221 quit ok') # Ensure the connection gets 
closed; sock attribute should be None self.assertEqual(self.client.sock, None) def test_abort(self): self.client.abort() def test_retrbinary(self): received = [] self.client.retrbinary('retr', received.append) self.check_data(b''.join(received), RETR_DATA.encode(self.client.encoding)) def test_retrbinary_rest(self): for rest in (0, 10, 20): received = [] self.client.retrbinary('retr', received.append, rest=rest) self.check_data(b''.join(received), RETR_DATA[rest:].encode(self.client.encoding)) def test_retrlines(self): received = [] self.client.retrlines('retr', received.append) self.check_data(''.join(received), RETR_DATA.replace('\r\n', '')) def test_storbinary(self): f = io.BytesIO(RETR_DATA.encode(self.client.encoding)) self.client.storbinary('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storbinary('stor', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) def test_storbinary_rest(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) for r in (30, '30'): f.seek(0) self.client.storbinary('stor', f, rest=r) self.assertEqual(self.server.handler_instance.rest, str(r)) def test_storlines(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) self.client.storlines('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storlines('stor foo', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) f = io.StringIO(RETR_DATA.replace('\r\n', '\n')) # storlines() expects a binary file, not a text file with warnings_helper.check_warnings(('', BytesWarning), quiet=True): self.assertRaises(TypeError, self.client.storlines, 'stor foo', f) def test_nlst(self): self.client.nlst() self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1]) def test_dir(self): l = [] self.client.dir(l.append) self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', '')) def test_mlsd(self): list(self.client.mlsd()) list(self.client.mlsd(path='/')) list(self.client.mlsd(path='/', facts=['size', 'type'])) ls = list(self.client.mlsd()) for name, facts in ls: self.assertIsInstance(name, str) self.assertIsInstance(facts, dict) self.assertTrue(name) self.assertIn('type', facts) self.assertIn('perm', facts) self.assertIn('unique', facts) def set_data(data): self.server.handler_instance.next_data = data def test_entry(line, type=None, perm=None, unique=None, name=None): type = 'type' if type is None else type perm = 'perm' if perm is None else perm unique = 'unique' if unique is None else unique name = 'name' if name is None else name set_data(line) _name, facts = next(self.client.mlsd()) self.assertEqual(_name, name) self.assertEqual(facts['type'], type) self.assertEqual(facts['perm'], perm) self.assertEqual(facts['unique'], unique) # plain test_entry('type=type;perm=perm;unique=unique; name\r\n') # "=" in fact value test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe") test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type") test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe") test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====") # spaces in name test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me") test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ") 
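        # --- Editor's sketch (illustrative only, not part of the vendored test) ---
        # The lines fed to test_entry() follow the RFC 3659 MLSD format: a
        # semicolon-separated list of "fact=value" pairs, a single space, then
        # the pathname.  A minimal stand-alone parser for one such line could
        # look like this; parse_mlsd_line is a hypothetical helper, not an
        # ftplib name (FTP.mlsd() performs the equivalent splitting internally).
        def parse_mlsd_line(line):
            facts_part, _, name = line.rstrip('\r\n').partition(' ')
            facts = {}
            for fact in facts_part.split(';'):
                if not fact:
                    continue  # skip the empty piece after the trailing ';'
                key, _, value = fact.partition('=')
                facts[key.lower()] = value  # fact names are case-insensitive
            return name, facts
        # e.g. parse_mlsd_line('type=dir;perm=el; docs\r\n') == ('docs', {'type': 'dir', 'perm': 'el'})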
test_entry('type=type;perm=perm;unique=unique; name\r\n', name=" name") test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e") # ";" in name test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me") test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name") test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;") test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;") # case sensitiveness set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n') _name, facts = next(self.client.mlsd()) for x in facts: self.assertTrue(x.islower()) # no data (directory empty) set_data('') self.assertRaises(StopIteration, next, self.client.mlsd()) set_data('') for x in self.client.mlsd(): self.fail("unexpected data %s" % x) def test_makeport(self): with self.client.makeport(): # IPv4 is in use, just make sure send_eprt has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'port') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() # IPv4 is in use, just make sure send_epsv has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv') def test_makepasv_issue43285_security_disabled(self): """Test the opt-in to the old vulnerable behavior.""" self.client.trust_server_pasv_ipv4_address = True bad_host, port = self.client.makepasv() self.assertEqual( bad_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. socket.create_connection((self.client.sock.getpeername()[0], port), timeout=TIMEOUT).close() def test_makepasv_issue43285_security_enabled_default(self): self.assertFalse(self.client.trust_server_pasv_ipv4_address) trusted_host, port = self.client.makepasv() self.assertNotEqual( trusted_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. 
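        # (The handler's cmd_pasv is still blocked in sock.accept() at this
        #  point, so this throwaway data connection lets it return.)
        # For reference, the "227 entering passive mode (h1,h2,h3,h4,p1,p2)"
        # reply encodes the data endpoint as host 'h1.h2.h3.h4' and
        # port p1 * 256 + p2, e.g. (127,0,0,1,4,210) -> ('127.0.0.1', 1234);
        # with the bpo-43285 fix the client ignores the advertised host by
        # default and reuses the control connection's peer address instead.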
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close() def test_with_statement(self): self.client.quit() def is_client_connected(): if self.client.sock is None: return False try: self.client.sendcmd('noop') except (OSError, EOFError): return False return True # base test with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.assertTrue(is_client_connected()) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # QUIT sent inside the with block with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.client.quit() self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # force a wrong response code to be sent on QUIT: error_perm # is expected and the connection is supposed to be closed try: with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.server.handler_instance.next_response = '550 error on quit' except ftplib.error_perm as err: self.assertEqual(str(err), '550 error on quit') else: self.fail('Exception not raised') # needed to give the threaded server some time to set the attribute # which otherwise would still be == 'noop' time.sleep(0.1) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) def test_source_address(self): self.client.quit() port = socket_helper.find_unused_port() try: self.client.connect(self.server.host, self.server.port, source_address=(HOST, port)) self.assertEqual(self.client.sock.getsockname()[1], port) self.client.quit() except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_source_address_passive_connection(self): port = socket_helper.find_unused_port() self.client.source_address = (HOST, port) try: with self.client.transfercmd('list') as sock: self.assertEqual(sock.getsockname()[1], port) except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_parse257(self): self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar') self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar') self.assertEqual(ftplib.parse257('257 ""'), '') self.assertEqual(ftplib.parse257('257 "" created'), '') self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"') # The 257 response is supposed to include the directory # name and in case it contains embedded double-quotes # they must be doubled (see RFC-959, chapter 7, appendix 2). 
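        # For example, a server whose new directory is /foo/b"ar replies
        #     257 "/foo/b""ar" created
        # and parse257() collapses each doubled quote back to a single one,
        # which is what the two assertions below verify.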
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar') self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar') def test_line_too_long(self): self.assertRaises(ftplib.Error, self.client.sendcmd, 'x' * self.client.maxline * 2) def test_retrlines_too_long(self): self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2)) received = [] self.assertRaises(ftplib.Error, self.client.retrlines, 'retr', received.append) def test_storlines_too_long(self): f = io.BytesIO(b'x' * self.client.maxline * 2) self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f) def test_encoding_param(self): encodings = ['latin-1', 'utf-8'] for encoding in encodings: with self.subTest(encoding=encoding): self.tearDown() self.setUp(encoding=encoding) self.assertEqual(encoding, self.client.encoding) self.test_retrbinary() self.test_storbinary() self.test_retrlines() new_dir = self.client.mkd('/non-ascii dir \xAE') self.check_data(new_dir, '/non-ascii dir \xAE') # Check default encoding client = ftplib.FTP(timeout=TIMEOUT) self.assertEqual(DEFAULT_ENCODING, client.encoding) @skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class TestIPv6Environment(TestCase): def setUp(self): self.server = DummyFTPServer((HOSTv6, 0), af=socket.AF_INET6, encoding=DEFAULT_ENCODING) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_af(self): self.assertEqual(self.client.af, socket.AF_INET6) def test_makeport(self): with self.client.makeport(): self.assertEqual(self.server.handler_instance.last_received_cmd, 'eprt') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv') def test_transfer(self): def retr(): received = [] self.client.retrbinary('retr', received.append) self.assertEqual(b''.join(received), RETR_DATA.encode(self.client.encoding)) self.client.set_pasv(True) retr() self.client.set_pasv(False) retr() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClassMixin(TestFTPClass): """Repeat TestFTPClass tests starting the TLS layer for both control and data connections first. 
""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) # enable TLS self.client.auth() self.client.prot_p() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClass(TestCase): """Specific TLS_FTP class tests.""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_control_connection(self): self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIsInstance(self.client.sock, ssl.SSLSocket) def test_data_connection(self): # clear text with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # secured, after PROT P self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIsInstance(sock, ssl.SSLSocket) # consume from SSL socket to finalize handshake and avoid # "SSLError [SSL] shutdown while in init" self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # PROT C is issued, the connection must be in cleartext again self.client.prot_c() with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") def test_login(self): # login() is supposed to implicitly secure the control connection self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.login() self.assertIsInstance(self.client.sock, ssl.SSLSocket) # make sure that AUTH TLS doesn't get issued again self.client.login() def test_auth_issued_twice(self): self.client.auth() self.assertRaises(ValueError, self.client.auth) def test_context(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE self.assertRaises(TypeError, ftplib.FTP_TLS, keyfile=CERTFILE, context=ctx) self.assertRaises(TypeError, ftplib.FTP_TLS, certfile=CERTFILE, context=ctx) self.assertRaises(TypeError, ftplib.FTP_TLS, certfile=CERTFILE, keyfile=CERTFILE, context=ctx) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIs(self.client.sock.context, ctx) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIs(sock.context, ctx) self.assertIsInstance(sock, ssl.SSLSocket) def test_ccc(self): self.assertRaises(ValueError, self.client.ccc) self.client.login(secure=True) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.ccc() self.assertRaises(ValueError, self.client.sock.unwrap) @skipUnless(False, "FIXME: bpo-32706") def test_check_hostname(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.check_hostname, True) ctx.load_verify_locations(CAFILE) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) # 127.0.0.1 doesn't match SAN self.client.connect(self.server.host, self.server.port) with self.assertRaises(ssl.CertificateError): self.client.auth() # exception quits connection self.client.connect(self.server.host, self.server.port) self.client.prot_p() with self.assertRaises(ssl.CertificateError): with self.client.transfercmd("list") as sock: pass self.client.quit() self.client.connect("localhost", self.server.port) self.client.auth() self.client.quit() self.client.connect("localhost", self.server.port) self.client.prot_p() with self.client.transfercmd("list") as sock: pass class TestTimeouts(TestCase): def setUp(self): self.evt = threading.Event() self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.settimeout(20) self.port = socket_helper.bind_port(self.sock) self.server_thread = threading.Thread(target=self.server) self.server_thread.daemon = True self.server_thread.start() # Wait for the server to be ready. self.evt.wait() self.evt.clear() self.old_port = ftplib.FTP.port ftplib.FTP.port = self.port def tearDown(self): ftplib.FTP.port = self.old_port self.server_thread.join() # Explicitly clear the attribute to prevent dangling thread self.server_thread = None def server(self): # This method sets the evt 3 times: # 1) when the connection is ready to be accepted. # 2) when it is safe for the caller to close the connection # 3) when we have closed the socket self.sock.listen() # (1) Signal the caller that we are ready to accept the connection. self.evt.set() try: conn, addr = self.sock.accept() except TimeoutError: pass else: conn.sendall(b"1 Hola mundo\n") conn.shutdown(socket.SHUT_WR) # (2) Signal the caller that it is safe to close the socket. 
self.evt.set() conn.close() finally: self.sock.close() def testTimeoutDefault(self): # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST) finally: socket.setdefaulttimeout(None) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutNone(self): # no timeout -- do not use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST, timeout=None) finally: socket.setdefaulttimeout(None) self.assertIsNone(ftp.sock.gettimeout()) self.evt.wait() ftp.close() def testTimeoutValue(self): # a value ftp = ftplib.FTP(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() # bpo-39259 with self.assertRaises(ValueError): ftplib.FTP(HOST, timeout=0) def testTimeoutConnect(self): ftp = ftplib.FTP() ftp.connect(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDifferentOrder(self): ftp = ftplib.FTP(timeout=30) ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDirectAccess(self): ftp = ftplib.FTP() ftp.timeout = 30 ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() class MiscTestCase(TestCase): def test__all__(self): not_exported = { 'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF', 'Error', 'parse150', 'parse227', 'parse229', 'parse257', 'print_line', 'ftpcp', 'test'} support.check__all__(self, ftplib, not_exported=not_exported) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/greentest/3.12/test_httplib.py000066400000000000000000003002211471441230600214010ustar00rootroot00000000000000import enum import errno from http import client, HTTPStatus import io import itertools import os import array import re import socket import threading import unittest from unittest import mock TestCase = unittest.TestCase from test import support from test.support import os_helper from test.support import socket_helper support.requires_working_socket(module=True) here = os.path.dirname(__file__) # Self-signed cert file for 'localhost' CERT_localhost = os.path.join(here, 'certdata', 'keycert.pem') # Self-signed cert file for 'fakehostname' CERT_fakehostname = os.path.join(here, 'certdata', 'keycert2.pem') # Self-signed cert file for self-signed.pythontest.net CERT_selfsigned_pythontestdotnet = os.path.join( here, 'certdata', 'selfsigned_pythontestdotnet.pem', ) # constants for testing chunked encoding chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd! \r\n' '8\r\n' 'and now \r\n' '22\r\n' 'for something completely different\r\n' ) chunked_expected = b'hello world! 
and now for something completely different' chunk_extension = ";foo=bar" last_chunk = "0\r\n" last_chunk_extended = "0" + chunk_extension + "\r\n" trailers = "X-Dummy: foo\r\nX-Dumm2: bar\r\n" chunked_end = "\r\n" HOST = socket_helper.HOST class FakeSocket: def __init__(self, text, fileclass=io.BytesIO, host=None, port=None): if isinstance(text, str): text = text.encode("ascii") self.text = text self.fileclass = fileclass self.data = b'' self.sendall_calls = 0 self.file_closed = False self.host = host self.port = port def sendall(self, data): self.sendall_calls += 1 self.data += data def makefile(self, mode, bufsize=None): if mode != 'r' and mode != 'rb': raise client.UnimplementedFileMode() # keep the file around so we can check how much was read from it self.file = self.fileclass(self.text) self.file.close = self.file_close #nerf close () return self.file def file_close(self): self.file_closed = True def close(self): pass def setsockopt(self, level, optname, value): pass class EPipeSocket(FakeSocket): def __init__(self, text, pipe_trigger): # When sendall() is called with pipe_trigger, raise EPIPE. FakeSocket.__init__(self, text) self.pipe_trigger = pipe_trigger def sendall(self, data): if self.pipe_trigger in data: raise OSError(errno.EPIPE, "gotcha") self.data += data def close(self): pass class NoEOFBytesIO(io.BytesIO): """Like BytesIO, but raises AssertionError on EOF. This is used below to test that http.client doesn't try to read more from the underlying file than it should. """ def read(self, n=-1): data = io.BytesIO.read(self, n) if data == b'': raise AssertionError('caller tried to read past EOF') return data def readline(self, length=None): data = io.BytesIO.readline(self, length) if data == b'': raise AssertionError('caller tried to read past EOF') return data class FakeSocketHTTPConnection(client.HTTPConnection): """HTTPConnection subclass using FakeSocket; counts connect() calls""" def __init__(self, *args): self.connections = 0 super().__init__('example.com') self.fake_socket_args = args self._create_connection = self.create_connection def connect(self): """Count the number of times connect() is invoked""" self.connections += 1 return super().connect() def create_connection(self, *pos, **kw): return FakeSocket(*self.fake_socket_args) class HeaderTests(TestCase): def test_auto_headers(self): # Some headers are added automatically, but should not be added by # .request() if they are explicitly set. 
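        # The headers checked here are the ones http.client generates on its
        # own when request() is used: Host, Accept-Encoding (sent as
        # "identity"), and Content-Length for requests that carry a body.
        # Passing any of them explicitly in the headers dict must suppress the
        # automatic copy, so each one should appear exactly once on the wire.
        # (At the lower putrequest() level the Host and Accept-Encoding lines
        # can also be suppressed with skip_host / skip_accept_encoding.)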
class HeaderCountingBuffer(list): def __init__(self): self.count = {} def append(self, item): kv = item.split(b':') if len(kv) > 1: # item is a 'Key: Value' header string lcKey = kv[0].decode('ascii').lower() self.count.setdefault(lcKey, 0) self.count[lcKey] += 1 list.append(self, item) for explicit_header in True, False: for header in 'Content-length', 'Host', 'Accept-encoding': conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('blahblahblah') conn._buffer = HeaderCountingBuffer() body = 'spamspamspam' headers = {} if explicit_header: headers[header] = str(len(body)) conn.request('POST', '/', body, headers) self.assertEqual(conn._buffer.count[header.lower()], 1) def test_content_length_0(self): class ContentLengthChecker(list): def __init__(self): list.__init__(self) self.content_length = None def append(self, item): kv = item.split(b':', 1) if len(kv) > 1 and kv[0].lower() == b'content-length': self.content_length = kv[1].strip() list.append(self, item) # Here, we're testing that methods expecting a body get a # content-length set to zero if the body is empty (either None or '') bodies = (None, '') methods_with_body = ('PUT', 'POST', 'PATCH') for method, body in itertools.product(methods_with_body, bodies): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', body) self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # For these methods, we make sure that content-length is not set when # the body is None because it might cause unexpected behaviour on the # server. methods_without_body = ( 'GET', 'CONNECT', 'DELETE', 'HEAD', 'OPTIONS', 'TRACE', ) for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', None) self.assertEqual( conn._buffer.content_length, None, 'Header Content-Length set for empty body on {}'.format(method) ) # If the body is set to '', that's considered to be "present but # empty" rather than "missing", so content length would be set, even # for methods that don't expect a body. for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', '') self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # If the body is set, make sure Content-Length is set. 
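        # (The body sent below is a single space, so the expected
        # Content-Length value is b'1'.)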
for method in itertools.chain(methods_without_body, methods_with_body): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', ' ') self.assertEqual( conn._buffer.content_length, b'1', 'Header Content-Length incorrect on {}'.format(method) ) def test_putheader(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.putrequest('GET','/') conn.putheader('Content-length', 42) self.assertIn(b'Content-length: 42', conn._buffer) conn.putheader('Foo', ' bar ') self.assertIn(b'Foo: bar ', conn._buffer) conn.putheader('Bar', '\tbaz\t') self.assertIn(b'Bar: \tbaz\t', conn._buffer) conn.putheader('Authorization', 'Bearer mytoken') self.assertIn(b'Authorization: Bearer mytoken', conn._buffer) conn.putheader('IterHeader', 'IterA', 'IterB') self.assertIn(b'IterHeader: IterA\r\n\tIterB', conn._buffer) conn.putheader('LatinHeader', b'\xFF') self.assertIn(b'LatinHeader: \xFF', conn._buffer) conn.putheader('Utf8Header', b'\xc3\x80') self.assertIn(b'Utf8Header: \xc3\x80', conn._buffer) conn.putheader('C1-Control', b'next\x85line') self.assertIn(b'C1-Control: next\x85line', conn._buffer) conn.putheader('Embedded-Fold-Space', 'is\r\n allowed') self.assertIn(b'Embedded-Fold-Space: is\r\n allowed', conn._buffer) conn.putheader('Embedded-Fold-Tab', 'is\r\n\tallowed') self.assertIn(b'Embedded-Fold-Tab: is\r\n\tallowed', conn._buffer) conn.putheader('Key Space', 'value') self.assertIn(b'Key Space: value', conn._buffer) conn.putheader('KeySpace ', 'value') self.assertIn(b'KeySpace : value', conn._buffer) conn.putheader(b'Nonbreak\xa0Space', 'value') self.assertIn(b'Nonbreak\xa0Space: value', conn._buffer) conn.putheader(b'\xa0NonbreakSpace', 'value') self.assertIn(b'\xa0NonbreakSpace: value', conn._buffer) def test_ipv6host_header(self): # Default host header on IPv6 transaction should be wrapped by [] if # it is an IPv6 address expected = b'GET /foo HTTP/1.1\r\nHost: [2001::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001::]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [2001:102A::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001:102A::]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) def test_malformed_headers_coped_with(self): # Issue 19996 body = "HTTP/1.1 200 OK\r\nFirst: val\r\n: nval\r\nSecond: val\r\n\r\n" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('First'), 'val') self.assertEqual(resp.getheader('Second'), 'val') def test_parse_all_octets(self): # Ensure no valid header field octet breaks the parser body = ( b'HTTP/1.1 200 OK\r\n' b"!#$%&'*+-.^_`|~: value\r\n" # Special token characters b'VCHAR: ' + bytes(range(0x21, 0x7E + 1)) + b'\r\n' b'obs-text: ' + bytes(range(0x80, 0xFF + 1)) + b'\r\n' 
b'obs-fold: text\r\n' b' folded with space\r\n' b'\tfolded with tab\r\n' b'Content-Length: 0\r\n' b'\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('Content-Length'), '0') self.assertEqual(resp.msg['Content-Length'], '0') self.assertEqual(resp.getheader("!#$%&'*+-.^_`|~"), 'value') self.assertEqual(resp.msg["!#$%&'*+-.^_`|~"], 'value') vchar = ''.join(map(chr, range(0x21, 0x7E + 1))) self.assertEqual(resp.getheader('VCHAR'), vchar) self.assertEqual(resp.msg['VCHAR'], vchar) self.assertIsNotNone(resp.getheader('obs-text')) self.assertIn('obs-text', resp.msg) for folded in (resp.getheader('obs-fold'), resp.msg['obs-fold']): self.assertTrue(folded.startswith('text')) self.assertIn(' folded with space', folded) self.assertTrue(folded.endswith('folded with tab')) def test_invalid_headers(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/') # http://tools.ietf.org/html/rfc7230#section-3.2.4, whitespace is no # longer allowed in header names cases = ( (b'Invalid\r\nName', b'ValidValue'), (b'Invalid\rName', b'ValidValue'), (b'Invalid\nName', b'ValidValue'), (b'\r\nInvalidName', b'ValidValue'), (b'\rInvalidName', b'ValidValue'), (b'\nInvalidName', b'ValidValue'), (b' InvalidName', b'ValidValue'), (b'\tInvalidName', b'ValidValue'), (b'Invalid:Name', b'ValidValue'), (b':InvalidName', b'ValidValue'), (b'ValidName', b'Invalid\r\nValue'), (b'ValidName', b'Invalid\rValue'), (b'ValidName', b'Invalid\nValue'), (b'ValidName', b'InvalidValue\r\n'), (b'ValidName', b'InvalidValue\r'), (b'ValidName', b'InvalidValue\n'), ) for name, value in cases: with self.subTest((name, value)): with self.assertRaisesRegex(ValueError, 'Invalid header'): conn.putheader(name, value) def test_headers_debuglevel(self): body = ( b'HTTP/1.1 200 OK\r\n' b'First: val\r\n' b'Second: val1\r\n' b'Second: val2\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock, debuglevel=1) with support.captured_stdout() as output: resp.begin() lines = output.getvalue().splitlines() self.assertEqual(lines[0], "reply: 'HTTP/1.1 200 OK\\r\\n'") self.assertEqual(lines[1], "header: First: val") self.assertEqual(lines[2], "header: Second: val1") self.assertEqual(lines[3], "header: Second: val2") class HttpMethodTests(TestCase): def test_invalid_method_names(self): methods = ( 'GET\r', 'POST\n', 'PUT\n\r', 'POST\nValue', 'POST\nHOST:abc', 'GET\nrHost:abc\n', 'POST\rRemainder:\r', 'GET\rHOST:\n', '\nPUT' ) for method in methods: with self.assertRaisesRegex( ValueError, "method can't contain control characters"): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.request(method=method, url="/") class TransferEncodingTest(TestCase): expected_body = b"It's just a flesh wound" def test_endheaders_chunked(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.putrequest('POST', '/') conn.endheaders(self._make_body(), encode_chunked=True) _, _, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) def test_explicit_headers(self): # explicit chunked conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') # this shouldn't actually be automatically chunk-encoded because the # calling code has explicitly stated that it's taking care of it conn.request( 'POST', '/', self._make_body(), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', 
[k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # explicit chunked, string body conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self.expected_body.decode('latin-1'), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # User-specified TE, but request() does the chunk encoding conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', headers={'Transfer-Encoding': 'gzip, chunked'}, encode_chunked=True, body=self._make_body()) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(headers['Transfer-Encoding'], 'gzip, chunked') self.assertEqual(self._parse_chunked(body), self.expected_body) def test_request(self): for empty_lines in (False, True,): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self._make_body(empty_lines=empty_lines)) _, headers, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) self.assertEqual(headers['Transfer-Encoding'], 'chunked') # Content-Length and Transfer-Encoding SHOULD not be sent in the # same request self.assertNotIn('content-length', [k.lower() for k in headers]) def test_empty_body(self): # Zero-length iterable should be treated like any other iterable conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', ()) _, headers, body = self._parse_request(conn.sock.data) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(body, b"0\r\n\r\n") def _make_body(self, empty_lines=False): lines = self.expected_body.split(b' ') for idx, line in enumerate(lines): # for testing handling empty lines if empty_lines and idx % 2: yield b'' if idx < len(lines) - 1: yield line + b' ' else: yield line def _parse_request(self, data): lines = data.split(b'\r\n') request = lines[0] headers = {} n = 1 while n < len(lines) and len(lines[n]) > 0: key, val = lines[n].split(b':') key = key.decode('latin-1').strip() headers[key] = val.decode('latin-1').strip() n += 1 return request, headers, b'\r\n'.join(lines[n + 1:]) def _parse_chunked(self, data): body = [] trailers = {} n = 0 lines = data.split(b'\r\n') # parse body while True: size, chunk = lines[n:n+2] size = int(size, 16) if size == 0: n += 1 break self.assertEqual(size, len(chunk)) body.append(chunk) n += 2 # we /should/ hit the end chunk, but check against the size of # lines so we're not stuck in an infinite loop should we get # malformed data if n > len(lines): break return b''.join(body) class BasicTest(TestCase): def test_dir_with_added_behavior_on_status(self): # see issue40084 self.assertTrue({'description', 'name', 'phrase', 'value'} <= set(dir(HTTPStatus(404)))) def test_simple_httpstatus(self): class CheckedHTTPStatus(enum.IntEnum): """HTTP status codes and reason phrases Status codes from the following RFCs are all observed: * RFC 7231: Hypertext Transfer Protocol (HTTP/1.1), obsoletes 2616 * RFC 6585: Additional HTTP Status Codes * RFC 3229: Delta encoding in HTTP * RFC 4918: HTTP Extensions for WebDAV, 
obsoletes 2518 * RFC 5842: Binding Extensions to WebDAV * RFC 7238: Permanent Redirect * RFC 2295: Transparent Content Negotiation in HTTP * RFC 2774: An HTTP Extension Framework * RFC 7725: An HTTP Status Code to Report Legal Obstacles * RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2) * RFC 2324: Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0) * RFC 8297: An HTTP Status Code for Indicating Hints * RFC 8470: Using Early Data in HTTP """ def __new__(cls, value, phrase, description=''): obj = int.__new__(cls, value) obj._value_ = value obj.phrase = phrase obj.description = description return obj @property def is_informational(self): return 100 <= self <= 199 @property def is_success(self): return 200 <= self <= 299 @property def is_redirection(self): return 300 <= self <= 399 @property def is_client_error(self): return 400 <= self <= 499 @property def is_server_error(self): return 500 <= self <= 599 # informational CONTINUE = 100, 'Continue', 'Request received, please continue' SWITCHING_PROTOCOLS = (101, 'Switching Protocols', 'Switching to new protocol; obey Upgrade header') PROCESSING = 102, 'Processing' EARLY_HINTS = 103, 'Early Hints' # success OK = 200, 'OK', 'Request fulfilled, document follows' CREATED = 201, 'Created', 'Document created, URL follows' ACCEPTED = (202, 'Accepted', 'Request accepted, processing continues off-line') NON_AUTHORITATIVE_INFORMATION = (203, 'Non-Authoritative Information', 'Request fulfilled from cache') NO_CONTENT = 204, 'No Content', 'Request fulfilled, nothing follows' RESET_CONTENT = 205, 'Reset Content', 'Clear input form for further input' PARTIAL_CONTENT = 206, 'Partial Content', 'Partial content follows' MULTI_STATUS = 207, 'Multi-Status' ALREADY_REPORTED = 208, 'Already Reported' IM_USED = 226, 'IM Used' # redirection MULTIPLE_CHOICES = (300, 'Multiple Choices', 'Object has several resources -- see URI list') MOVED_PERMANENTLY = (301, 'Moved Permanently', 'Object moved permanently -- see URI list') FOUND = 302, 'Found', 'Object moved temporarily -- see URI list' SEE_OTHER = 303, 'See Other', 'Object moved -- see Method and URL list' NOT_MODIFIED = (304, 'Not Modified', 'Document has not changed since given time') USE_PROXY = (305, 'Use Proxy', 'You must use proxy specified in Location to access this resource') TEMPORARY_REDIRECT = (307, 'Temporary Redirect', 'Object moved temporarily -- see URI list') PERMANENT_REDIRECT = (308, 'Permanent Redirect', 'Object moved permanently -- see URI list') # client error BAD_REQUEST = (400, 'Bad Request', 'Bad request syntax or unsupported method') UNAUTHORIZED = (401, 'Unauthorized', 'No permission -- see authorization schemes') PAYMENT_REQUIRED = (402, 'Payment Required', 'No payment -- see charging schemes') FORBIDDEN = (403, 'Forbidden', 'Request forbidden -- authorization will not help') NOT_FOUND = (404, 'Not Found', 'Nothing matches the given URI') METHOD_NOT_ALLOWED = (405, 'Method Not Allowed', 'Specified method is invalid for this resource') NOT_ACCEPTABLE = (406, 'Not Acceptable', 'URI not available in preferred format') PROXY_AUTHENTICATION_REQUIRED = (407, 'Proxy Authentication Required', 'You must authenticate with this proxy before proceeding') REQUEST_TIMEOUT = (408, 'Request Timeout', 'Request timed out; try again later') CONFLICT = 409, 'Conflict', 'Request conflict' GONE = (410, 'Gone', 'URI no longer exists and has been permanently removed') LENGTH_REQUIRED = (411, 'Length Required', 'Client must specify Content-Length') PRECONDITION_FAILED = (412, 'Precondition Failed', 
'Precondition in headers is false') REQUEST_ENTITY_TOO_LARGE = (413, 'Request Entity Too Large', 'Entity is too large') REQUEST_URI_TOO_LONG = (414, 'Request-URI Too Long', 'URI is too long') UNSUPPORTED_MEDIA_TYPE = (415, 'Unsupported Media Type', 'Entity body in unsupported format') REQUESTED_RANGE_NOT_SATISFIABLE = (416, 'Requested Range Not Satisfiable', 'Cannot satisfy request range') EXPECTATION_FAILED = (417, 'Expectation Failed', 'Expect condition could not be satisfied') IM_A_TEAPOT = (418, 'I\'m a Teapot', 'Server refuses to brew coffee because it is a teapot.') MISDIRECTED_REQUEST = (421, 'Misdirected Request', 'Server is not able to produce a response') UNPROCESSABLE_ENTITY = 422, 'Unprocessable Entity' LOCKED = 423, 'Locked' FAILED_DEPENDENCY = 424, 'Failed Dependency' TOO_EARLY = 425, 'Too Early' UPGRADE_REQUIRED = 426, 'Upgrade Required' PRECONDITION_REQUIRED = (428, 'Precondition Required', 'The origin server requires the request to be conditional') TOO_MANY_REQUESTS = (429, 'Too Many Requests', 'The user has sent too many requests in ' 'a given amount of time ("rate limiting")') REQUEST_HEADER_FIELDS_TOO_LARGE = (431, 'Request Header Fields Too Large', 'The server is unwilling to process the request because its header ' 'fields are too large') UNAVAILABLE_FOR_LEGAL_REASONS = (451, 'Unavailable For Legal Reasons', 'The server is denying access to the ' 'resource as a consequence of a legal demand') # server errors INTERNAL_SERVER_ERROR = (500, 'Internal Server Error', 'Server got itself in trouble') NOT_IMPLEMENTED = (501, 'Not Implemented', 'Server does not support this operation') BAD_GATEWAY = (502, 'Bad Gateway', 'Invalid responses from another server/proxy') SERVICE_UNAVAILABLE = (503, 'Service Unavailable', 'The server cannot process the request due to a high load') GATEWAY_TIMEOUT = (504, 'Gateway Timeout', 'The gateway server did not receive a timely response') HTTP_VERSION_NOT_SUPPORTED = (505, 'HTTP Version Not Supported', 'Cannot fulfill request') VARIANT_ALSO_NEGOTIATES = 506, 'Variant Also Negotiates' INSUFFICIENT_STORAGE = 507, 'Insufficient Storage' LOOP_DETECTED = 508, 'Loop Detected' NOT_EXTENDED = 510, 'Not Extended' NETWORK_AUTHENTICATION_REQUIRED = (511, 'Network Authentication Required', 'The client needs to authenticate to gain network access') enum._test_simple_enum(CheckedHTTPStatus, HTTPStatus) def test_httpstatus_range(self): """Checks that the statuses are in the 100-599 range""" for member in HTTPStatus.__members__.values(): self.assertGreaterEqual(member, 100) self.assertLessEqual(member, 599) def test_httpstatus_category(self): """Checks that the statuses belong to the standard categories""" categories = ( ((100, 199), "is_informational"), ((200, 299), "is_success"), ((300, 399), "is_redirection"), ((400, 499), "is_client_error"), ((500, 599), "is_server_error"), ) for member in HTTPStatus.__members__.values(): for (lower, upper), category in categories: category_indicator = getattr(member, category) if lower <= member <= upper: self.assertTrue(category_indicator) else: self.assertFalse(category_indicator) def test_status_lines(self): # Test HTTP status lines body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(0), b'') # Issue #20007 self.assertFalse(resp.isclosed()) self.assertFalse(resp.closed) self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 
Not Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) self.assertRaises(client.BadStatusLine, resp.begin) def test_bad_status_repr(self): exc = client.BadStatusLine('') self.assertEqual(repr(exc), '''BadStatusLine("''")''') def test_partial_reads(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_mixed_reads(self): # readline() should update the remaining length, so that read() knows # how much data is left and does not raise IncompleteRead body = "HTTP/1.1 200 Ok\r\nContent-Length: 13\r\n\r\nText\r\nAnother" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.readline(), b'Text\r\n') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(), b'Another') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_past_end(self): # if we have Content-Length, clip reads to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(10), b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_past_end(self): # if we have Content-Length, clip readintos to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(10) n = resp.readinto(b) self.assertEqual(n, 4) self.assertEqual(bytes(b)[:4], b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') 
self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) def test_partial_readintos_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:80", "www.python.org", 80), ("www.python.org:", "www.python.org", 80), ("www.python.org", "www.python.org", 80), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 80), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 80)): c = client.HTTPConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_response_headers(self): # test response with multiple message headers with the same field name. text = ('HTTP/1.1 200 OK\r\n' 'Set-Cookie: Customer="WILE_E_COYOTE"; ' 'Version="1"; Path="/acme"\r\n' 'Set-Cookie: Part_Number="Rocket_Launcher_0001"; Version="1";' ' Path="/acme"\r\n' '\r\n' 'No body\r\n') hdr = ('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"' ', ' 'Part_Number="Rocket_Launcher_0001"; Version="1"; Path="/acme"') s = FakeSocket(text) r = client.HTTPResponse(s) r.begin() cookies = r.getheader("Set-Cookie") self.assertEqual(cookies, hdr) def test_read_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() if resp.read(): self.fail("Did not expect response from HEAD request") def test_readinto_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) 
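        # The fake response below advertises a Content-Length but carries no
        # body at all; because the socket's file object is NoEOFBytesIO, any
        # attempt to read past the header block raises AssertionError, so the
        # test fails loudly if HTTPResponse tried to fetch a body for a HEAD
        # reply.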
sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) if resp.readinto(b) != 0: self.fail("Did not expect response from HEAD request") self.assertEqual(bytes(b), b'\x00'*5) def test_too_many_headers(self): headers = '\r\n'.join('Header%d: foo' % i for i in range(client._MAXHEADERS + 1)) + '\r\n' text = ('HTTP/1.1 200 OK\r\n' + headers) s = FakeSocket(text) r = client.HTTPResponse(s) self.assertRaisesRegex(client.HTTPException, r"got more than \d+ headers", r.begin) def test_send_file(self): expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' b'Accept-Encoding: identity\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n') with open(__file__, 'rb') as body: conn = client.HTTPConnection('example.com') sock = FakeSocket(body) conn.sock = sock conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected), '%r != %r' % (sock.data[:len(expected)], expected)) def test_send(self): expected = b'this is a test this is only a test' conn = client.HTTPConnection('example.com') sock = FakeSocket(None) conn.sock = sock conn.send(expected) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(array.array('b', expected)) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(io.BytesIO(expected)) self.assertEqual(expected, sock.data) def test_send_updating_file(self): def data(): yield 'data' yield None yield 'data_two' class UpdatingFile(io.TextIOBase): mode = 'r' d = data() def read(self, blocksize=-1): return next(self.d) expected = b'data' conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.send(UpdatingFile()) self.assertEqual(sock.data, expected) def test_send_iter(self): expected = b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' \ b'Accept-Encoding: identity\r\nContent-Length: 11\r\n' \ b'\r\nonetwothree' def body(): yield b"one" yield b"two" yield b"three" conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.request('GET', '/foo', body(), {'Content-Length': '11'}) self.assertEqual(sock.data, expected) def test_blocksize_request(self): """Check that request() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.request("PUT", "/", io.BytesIO(expected), {"Content-Length": "9"}) self.assertEqual(sock.sendall_calls, 3) body = sock.data.split(b"\r\n\r\n", 1)[1] self.assertEqual(body, expected) def test_blocksize_send(self): """Check that send() respects the configured block size.""" blocksize = 8 # For easy debugging. 
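        # send() reads the file-like body in blocksize-sized pieces, so the
        # 9-byte payload below (8 * b"a" + b"b") goes out as two sendall()
        # calls; test_blocksize_request() above saw three because the request
        # line and headers are flushed in a separate sendall() first.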
conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.send(io.BytesIO(expected)) self.assertEqual(sock.sendall_calls, 2) self.assertEqual(sock.data, expected) def test_send_type_error(self): # See: Issue #12676 conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') with self.assertRaises(TypeError): conn.request('POST', 'test', conn) def test_chunked(self): expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(n) + resp.read(n) + resp.read(), expected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_readinto_chunked(self): expected = chunked_expected nexpected = len(expected) b = bytearray(128) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() n = resp.readinto(b) self.assertEqual(b[:nexpected], expected) self.assertEqual(n, nexpected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() m = memoryview(b) i = resp.readinto(m[0:n]) i += resp.readinto(m[i:n + i]) i += resp.readinto(m[i:]) self.assertEqual(b[:nexpected], expected) self.assertEqual(i, nexpected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: n = resp.readinto(b) except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() self.assertEqual(resp.read(), b'') self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) n = resp.readinto(b) self.assertEqual(n, 0) self.assertEqual(bytes(b), b'\x00'*5) self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_negative_content_length(self): sock 
= FakeSocket( 'HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), b'Hello\r\n') self.assertTrue(resp.isclosed()) def test_incomplete_read(self): sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, b'Hello\r\n') self.assertEqual(repr(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertEqual(str(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertTrue(resp.isclosed()) else: self.fail('IncompleteRead expected') def test_epipe(self): sock = EPipeSocket( "HTTP/1.0 401 Authorization Required\r\n" "Content-type: text/html\r\n" "WWW-Authenticate: Basic realm=\"example\"\r\n", b"Content-Length") conn = client.HTTPConnection("example.com") conn.sock = sock self.assertRaises(OSError, lambda: conn.request("PUT", "/url", "body")) resp = conn.getresponse() self.assertEqual(401, resp.status) self.assertEqual("Basic realm=\"example\"", resp.getheader("www-authenticate")) # Test lines overflowing the max line size (_MAXLINE in http.client) def test_overflowing_status_line(self): body = "HTTP/1.1 200 Ok" + "k" * 65536 + "\r\n" resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises((client.LineTooLong, client.BadStatusLine), resp.begin) def test_overflowing_header_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'X-Foo: bar' + 'r' * 65536 + '\r\n\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises(client.LineTooLong, resp.begin) def test_overflowing_header_limit_after_100(self): body = ( 'HTTP/1.1 100 OK\r\n' 'r\n' * 32768 ) resp = client.HTTPResponse(FakeSocket(body)) with self.assertRaises(client.HTTPException) as cm: resp.begin() # We must assert more because other reasonable errors that we # do not want can also be HTTPException derived. 
self.assertIn('got more than ', str(cm.exception)) self.assertIn('headers', str(cm.exception)) def test_overflowing_chunked_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' + '0' * 65536 + 'a\r\n' 'hello world\r\n' '0\r\n' '\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) resp.begin() self.assertRaises(client.LineTooLong, resp.read) def test_early_eof(self): # Test httpresponse with no \r\n termination, body = "HTTP/1.1 200 Ok" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_error_leak(self): # Test that the socket is not leaked if getresponse() fails conn = client.HTTPConnection('example.com') response = None class Response(client.HTTPResponse): def __init__(self, *pos, **kw): nonlocal response response = self # Avoid garbage collector closing the socket client.HTTPResponse.__init__(self, *pos, **kw) conn.response_class = Response conn.sock = FakeSocket('Invalid status line') conn.request('GET', '/') self.assertRaises(client.BadStatusLine, conn.getresponse) self.assertTrue(response.closed) self.assertTrue(conn.sock.file_closed) def test_chunked_extension(self): extra = '3;foo=bar\r\n' + 'abc\r\n' expected = chunked_expected + b'abc' sock = FakeSocket(chunked_start + extra + last_chunk_extended + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_missing_end(self): """some servers may serve up a short chunked encoding stream""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk) #no terminating crlf resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_trailers(self): """See that trailers are read and ignored""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # we should have reached the end of the file self.assertEqual(sock.file.read(), b"") #we read to the end resp.close() def test_chunked_sync(self): """Check that we don't read past the end of the chunked-encoding stream""" expected = chunked_expected extradata = "extradata" sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata.encode("ascii")) #we read to the end resp.close() def test_content_length_sync(self): """Check that we don't read past the end of the Content-Length stream""" extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readlines_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readlines(2000), [expected]) # the file should now have 
our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(2000), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readline_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readline(10), expected) self.assertEqual(resp.readline(10), b"") # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 30\r\n\r\n' + expected*3 + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(20), expected*2) self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_response_fileno(self): # Make sure fd returned by fileno is valid. serv = socket.create_server((HOST, 0)) self.addCleanup(serv.close) result = None def run_server(): [conn, address] = serv.accept() with conn, conn.makefile("rb") as reader: # Read the request header until a blank line while True: line = reader.readline() if not line.rstrip(b"\r\n"): break conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n") nonlocal result result = reader.read() thread = threading.Thread(target=run_server) thread.start() self.addCleanup(thread.join, float(1)) conn = client.HTTPConnection(*serv.getsockname()) conn.request("CONNECT", "dummy:1234") response = conn.getresponse() try: self.assertEqual(response.status, client.OK) s = socket.socket(fileno=response.fileno()) try: s.sendall(b"proxied data\n") finally: s.detach() finally: response.close() conn.close() thread.join() self.assertEqual(result, b"proxied data\n") def test_putrequest_override_domain_validation(self): """ It should be possible to override the default validation behavior in putrequest (bpo-38216). """ class UnsafeHTTPConnection(client.HTTPConnection): def _validate_path(self, url): pass conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/\x00') def test_putrequest_override_host_validation(self): class UnsafeHTTPConnection(client.HTTPConnection): def _validate_host(self, url): pass conn = UnsafeHTTPConnection('example.com\r\n') conn.sock = FakeSocket('') # set skip_host so a ValueError is not raised upon adding the # invalid URL as the value of the "Host:" header conn.putrequest('GET', '/', skip_host=1) def test_putrequest_override_encoding(self): """ It should be possible to override the default encoding to transmit bytes in another encoding even if invalid (bpo-36274). 
""" class UnsafeHTTPConnection(client.HTTPConnection): def _encode_request(self, str_url): return str_url.encode('utf-8') conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/☃') class ExtendedReadTest(TestCase): """ Test peek(), read1(), readline() """ lines = ( 'HTTP/1.1 200 OK\r\n' '\r\n' 'hello world!\n' 'and now \n' 'for something completely different\n' 'foo' ) lines_expected = lines[lines.find('hello'):].encode("ascii") lines_chunked = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) def setUp(self): sock = FakeSocket(self.lines) resp = client.HTTPResponse(sock, method="GET") resp.begin() resp.fp = io.BufferedReader(resp.fp) self.resp = resp def test_peek(self): resp = self.resp # patch up the buffered peek so that it returns not too much stuff oldpeek = resp.fp.peek def mypeek(n=-1): p = oldpeek(n) if n >= 0: return p[:n] return p[:10] resp.fp.peek = mypeek all = [] while True: # try a short peek p = resp.peek(3) if p: self.assertGreater(len(p), 0) # then unbounded peek p2 = resp.peek() self.assertGreaterEqual(len(p2), len(p)) self.assertTrue(p2.startswith(p)) next = resp.read(len(p2)) self.assertEqual(next, p2) else: next = resp.read() self.assertFalse(next) all.append(next) if not next: break self.assertEqual(b"".join(all), self.lines_expected) def test_readline(self): resp = self.resp self._verify_readline(self.resp.readline, self.lines_expected) def test_readline_without_limit(self): self._verify_readline(self.resp.readline, self.lines_expected, limit=-1) def _verify_readline(self, readline, expected, limit=5): all = [] while True: # short readlines line = readline(limit) if line and line != b"foo": if len(line) < 5: self.assertTrue(line.endswith(b"\n")) all.append(line) if not line: break self.assertEqual(b"".join(all), expected) self.assertTrue(self.resp.isclosed()) def test_read1(self): resp = self.resp def r(): res = resp.read1(4) self.assertLessEqual(len(res), 4) return res readliner = Readliner(r) self._verify_readline(readliner.readline, self.lines_expected) def test_read1_unbounded(self): resp = self.resp all = [] while True: data = resp.read1() if not data: break all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_bounded(self): resp = self.resp all = [] while True: data = resp.read1(10) if not data: break self.assertLessEqual(len(data), 10) all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_0(self): self.assertEqual(self.resp.read1(0), b"") self.assertFalse(self.resp.isclosed()) def test_peek_0(self): p = self.resp.peek(0) self.assertLessEqual(0, len(p)) class ExtendedReadTestContentLengthKnown(ExtendedReadTest): _header, _body = ExtendedReadTest.lines.split('\r\n\r\n', 1) lines = _header + f'\r\nContent-Length: {len(_body)}\r\n\r\n' + _body class ExtendedReadTestChunked(ExtendedReadTest): """ Test peek(), read1(), readline() in chunked mode """ lines = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) class Readliner: """ a simple readline class that uses an arbitrary read function and 
buffering """ def __init__(self, readfunc): self.readfunc = readfunc self.remainder = b"" def readline(self, limit): data = [] datalen = 0 read = self.remainder try: while True: idx = read.find(b'\n') if idx != -1: break if datalen + len(read) >= limit: idx = limit - datalen - 1 # read more data data.append(read) read = self.readfunc() if not read: idx = 0 #eof condition break idx += 1 data.append(read[:idx]) self.remainder = read[idx:] return b"".join(data) except: self.remainder = b"".join(data) raise class OfflineTest(TestCase): def test_all(self): # Documented objects defined in the module should be in __all__ expected = {"responses"} # Allowlist documented dict() object # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity denylist = {"HTTPMessage", "parse_headers"} for name in dir(client): if name.startswith("_") or name in denylist: continue module_object = getattr(client, name) if getattr(module_object, "__module__", None) == "http.client": expected.add(name) self.assertCountEqual(client.__all__, expected) def test_responses(self): self.assertEqual(client.responses[client.NOT_FOUND], "Not Found") def test_client_constants(self): # Make sure we don't break backward compatibility with 3.4 expected = [ 'CONTINUE', 'SWITCHING_PROTOCOLS', 'PROCESSING', 'OK', 'CREATED', 'ACCEPTED', 'NON_AUTHORITATIVE_INFORMATION', 'NO_CONTENT', 'RESET_CONTENT', 'PARTIAL_CONTENT', 'MULTI_STATUS', 'IM_USED', 'MULTIPLE_CHOICES', 'MOVED_PERMANENTLY', 'FOUND', 'SEE_OTHER', 'NOT_MODIFIED', 'USE_PROXY', 'TEMPORARY_REDIRECT', 'BAD_REQUEST', 'UNAUTHORIZED', 'PAYMENT_REQUIRED', 'FORBIDDEN', 'NOT_FOUND', 'METHOD_NOT_ALLOWED', 'NOT_ACCEPTABLE', 'PROXY_AUTHENTICATION_REQUIRED', 'REQUEST_TIMEOUT', 'CONFLICT', 'GONE', 'LENGTH_REQUIRED', 'PRECONDITION_FAILED', 'REQUEST_ENTITY_TOO_LARGE', 'REQUEST_URI_TOO_LONG', 'UNSUPPORTED_MEDIA_TYPE', 'REQUESTED_RANGE_NOT_SATISFIABLE', 'EXPECTATION_FAILED', 'IM_A_TEAPOT', 'MISDIRECTED_REQUEST', 'UNPROCESSABLE_ENTITY', 'LOCKED', 'FAILED_DEPENDENCY', 'UPGRADE_REQUIRED', 'PRECONDITION_REQUIRED', 'TOO_MANY_REQUESTS', 'REQUEST_HEADER_FIELDS_TOO_LARGE', 'UNAVAILABLE_FOR_LEGAL_REASONS', 'INTERNAL_SERVER_ERROR', 'NOT_IMPLEMENTED', 'BAD_GATEWAY', 'SERVICE_UNAVAILABLE', 'GATEWAY_TIMEOUT', 'HTTP_VERSION_NOT_SUPPORTED', 'INSUFFICIENT_STORAGE', 'NOT_EXTENDED', 'NETWORK_AUTHENTICATION_REQUIRED', 'EARLY_HINTS', 'TOO_EARLY' ] for const in expected: with self.subTest(constant=const): self.assertTrue(hasattr(client, const)) class SourceAddressTest(TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.source_port = socket_helper.find_unused_port() self.serv.listen() self.conn = None def tearDown(self): if self.conn: self.conn.close() self.conn = None self.serv.close() self.serv = None def testHTTPConnectionSourceAddress(self): self.conn = client.HTTPConnection(HOST, self.port, source_address=('', self.source_port)) self.conn.connect() self.assertEqual(self.conn.sock.getsockname()[1], self.source_port) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not defined') def testHTTPSConnectionSourceAddress(self): self.conn = client.HTTPSConnection(HOST, self.port, source_address=('', self.source_port)) # We don't test anything here other than the constructor not barfing as # this code doesn't deal with setting up an active running SSL server # for an ssl_wrapped connect() to actually return from. 
class TimeoutTest(TestCase): PORT = None def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) TimeoutTest.PORT = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None def testTimeoutAttribute(self): # This will prove that the timeout gets through HTTPConnection # and into the socket. # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() # no timeout -- do not use global socket default self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=None) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), None) httpConn.close() # a value httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=30) httpConn.connect() self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() class PersistenceTest(TestCase): def test_reuse_reconnect(self): # Should reuse or reconnect depending on header from server tests = ( ('1.0', '', False), ('1.0', 'Connection: keep-alive\r\n', True), ('1.1', '', True), ('1.1', 'Connection: close\r\n', False), ('1.0', 'Connection: keep-ALIVE\r\n', True), ('1.1', 'Connection: cloSE\r\n', False), ) for version, header, reuse in tests: with self.subTest(version=version, header=header): msg = ( 'HTTP/{} 200 OK\r\n' '{}' 'Content-Length: 12\r\n' '\r\n' 'Dummy body\r\n' ).format(version, header) conn = FakeSocketHTTPConnection(msg) self.assertIsNone(conn.sock) conn.request('GET', '/open-connection') with conn.getresponse() as response: self.assertEqual(conn.sock is None, not reuse) response.read() self.assertEqual(conn.sock is None, not reuse) self.assertEqual(conn.connections, 1) conn.request('GET', '/subsequent-request') self.assertEqual(conn.connections, 1 if reuse else 2) def test_disconnected(self): def make_reset_reader(text): """Return BufferedReader that raises ECONNRESET at EOF""" stream = io.BytesIO(text) def readinto(buffer): size = io.BytesIO.readinto(stream, buffer) if size == 0: raise ConnectionResetError() return size stream.readinto = readinto return io.BufferedReader(stream) tests = ( (io.BytesIO, client.RemoteDisconnected), (make_reset_reader, ConnectionResetError), ) for stream_factory, exception in tests: with self.subTest(exception=exception): conn = FakeSocketHTTPConnection(b'', stream_factory) conn.request('GET', '/eof-response') self.assertRaises(exception, conn.getresponse) self.assertIsNone(conn.sock) # HTTPConnection.connect() should be automatically invoked conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) def test_100_close(self): conn = FakeSocketHTTPConnection( b'HTTP/1.1 100 Continue\r\n' b'\r\n' # Missing final response ) conn.request('GET', '/', headers={'Expect': '100-continue'}) self.assertRaises(client.RemoteDisconnected, conn.getresponse) self.assertIsNone(conn.sock) conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) class HTTPSTest(TestCase): def setUp(self): if not hasattr(client, 'HTTPSConnection'): self.skipTest('ssl support required') def make_server(self, certfile): from test.ssl_servers import make_https_server return make_https_server(self, certfile=certfile) def test_attributes(self): # simple test to check it's storing 
the timeout h = client.HTTPSConnection(HOST, TimeoutTest.PORT, timeout=30) self.assertEqual(h.timeout, 30) def test_networked(self): # Default settings: requires a valid cert from a trusted CA import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): h = client.HTTPSConnection('self-signed.pythontest.net', 443) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_networked_noverification(self): # Switch off cert verification import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl._create_unverified_context() h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) h.request('GET', '/') resp = h.getresponse() h.close() self.assertIn('nginx', resp.getheader('server')) resp.close() @support.system_must_validate_cert def test_networked_trusted_by_default_cert(self): # Default settings: requires a valid cert from a trusted CA support.requires('network') with socket_helper.transient_internet('www.python.org'): h = client.HTTPSConnection('www.python.org', 443) h.request('GET', '/') resp = h.getresponse() content_type = resp.getheader('content-type') resp.close() h.close() self.assertIn('text/html', content_type) def test_networked_good_cert(self): # We feed the server's cert as a validating cert import ssl support.requires('network') selfsigned_pythontestdotnet = 'self-signed.pythontest.net' with socket_helper.transient_internet(selfsigned_pythontestdotnet): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(context.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(context.check_hostname, True) context.load_verify_locations(CERT_selfsigned_pythontestdotnet) try: h = client.HTTPSConnection(selfsigned_pythontestdotnet, 443, context=context) h.request('GET', '/') resp = h.getresponse() except ssl.SSLError as ssl_err: ssl_err_str = str(ssl_err) # In the error message of [SSL: CERTIFICATE_VERIFY_FAILED] on # modern Linux distros (Debian Buster, etc) default OpenSSL # configurations it'll fail saying "key too weak" until we # address https://bugs.python.org/issue36816 to use a proper # key size on self-signed.pythontest.net. if re.search(r'(?i)key.too.weak', ssl_err_str): raise unittest.SkipTest( f'Got {ssl_err_str} trying to connect ' f'to {selfsigned_pythontestdotnet}. 
' 'See https://bugs.python.org/issue36816.') raise server_string = resp.getheader('server') resp.close() h.close() self.assertIn('nginx', server_string) @support.requires_resource('walltime') def test_networked_bad_cert(self): # We feed a "CA" cert that is unrelated to the server's cert import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_unknown_cert(self): # The custom cert isn't known to the default trust bundle import ssl server = self.make_server(CERT_localhost) h = client.HTTPSConnection('localhost', server.port) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_good_hostname(self): # The (valid) cert validates the HTTPS hostname import ssl server = self.make_server(CERT_localhost) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('localhost', server.port, context=context) self.addCleanup(h.close) h.request('GET', '/nonexistent') resp = h.getresponse() self.addCleanup(resp.close) self.assertEqual(resp.status, 404) def test_local_bad_hostname(self): # The (valid) cert doesn't validate the HTTPS hostname import ssl server = self.make_server(CERT_fakehostname) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_fakehostname) h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # Same with explicit context.check_hostname=True context.check_hostname = True h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # With context.check_hostname=False, the mismatching is ignored context.check_hostname = False h = client.HTTPSConnection('localhost', server.port, context=context) h.request('GET', '/nonexistent') resp = h.getresponse() resp.close() h.close() self.assertEqual(resp.status, 404) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not available') def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPSConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:443", "www.python.org", 443), ("www.python.org:", "www.python.org", 443), ("www.python.org", "www.python.org", 443), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 443), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 443)): c = client.HTTPSConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_tls13_pha(self): import ssl if not ssl.HAS_TLSv1_3: self.skipTest('TLS 1.3 support required') # just check status of PHA flag h = client.HTTPSConnection('localhost', 443) self.assertTrue(h._context.post_handshake_auth) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertFalse(context.post_handshake_auth) h = client.HTTPSConnection('localhost', 443, context=context) self.assertIs(h._context, context) self.assertFalse(h._context.post_handshake_auth) 
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT, cert_file=CERT_localhost) context.post_handshake_auth = True h = client.HTTPSConnection('localhost', 443, context=context) self.assertTrue(h._context.post_handshake_auth) class RequestBodyTest(TestCase): """Test cases where a request includes a message body.""" def setUp(self): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket("") self.conn.sock = self.sock def get_headers_and_fp(self): f = io.BytesIO(self.sock.data) f.readline() # read the request line message = client.parse_headers(f) return message, f def test_list_body(self): # Note that no content-length is automatically calculated for # an iterable. The request will fall back to send chunked # transfer encoding. cases = ( ([b'foo', b'bar'], b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ((b'foo', b'bar'), b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ) for body, expected in cases: with self.subTest(body): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket('') self.conn.request('PUT', '/url', body) msg, f = self.get_headers_and_fp() self.assertNotIn('Content-Type', msg) self.assertNotIn('Content-Length', msg) self.assertEqual(msg.get('Transfer-Encoding'), 'chunked') self.assertEqual(expected, f.read()) def test_manual_content_length(self): # Set an incorrect content-length so that we can verify that # it will not be over-ridden by the library. self.conn.request("PUT", "/url", "body", {"Content-Length": "42"}) message, f = self.get_headers_and_fp() self.assertEqual("42", message.get("content-length")) self.assertEqual(4, len(f.read())) def test_ascii_body(self): self.conn.request("PUT", "/url", "body") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("4", message.get("content-length")) self.assertEqual(b'body', f.read()) def test_latin1_body(self): self.conn.request("PUT", "/url", "body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_bytes_body(self): self.conn.request("PUT", "/url", b"body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_text_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "w", encoding="utf-8") as f: f.write("body") with open(os_helper.TESTFN, encoding="utf-8") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) # No content-length will be determined for files; the body # will be sent using chunked transfer encoding instead. 
self.assertIsNone(message.get("content-length")) self.assertEqual("chunked", message.get("transfer-encoding")) self.assertEqual(b'4\r\nbody\r\n0\r\n\r\n', f.read()) def test_binary_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "wb") as f: f.write(b"body\xc1") with open(os_helper.TESTFN, "rb") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("chunked", message.get("Transfer-Encoding")) self.assertNotIn("Content-Length", message) self.assertEqual(b'5\r\nbody\xc1\r\n0\r\n\r\n', f.read()) class HTTPResponseTest(TestCase): def setUp(self): body = "HTTP/1.1 200 Ok\r\nMy-Header: first-value\r\nMy-Header: \ second-value\r\n\r\nText" sock = FakeSocket(body) self.resp = client.HTTPResponse(sock) self.resp.begin() def test_getting_header(self): header = self.resp.getheader('My-Header') self.assertEqual(header, 'first-value, second-value') header = self.resp.getheader('My-Header', 'some default') self.assertEqual(header, 'first-value, second-value') def test_getting_nonexistent_header_with_string_default(self): header = self.resp.getheader('No-Such-Header', 'default-value') self.assertEqual(header, 'default-value') def test_getting_nonexistent_header_with_iterable_default(self): header = self.resp.getheader('No-Such-Header', ['default', 'values']) self.assertEqual(header, 'default, values') header = self.resp.getheader('No-Such-Header', ('default', 'values')) self.assertEqual(header, 'default, values') def test_getting_nonexistent_header_without_default(self): header = self.resp.getheader('No-Such-Header') self.assertEqual(header, None) def test_getting_header_defaultint(self): header = self.resp.getheader('No-Such-Header',default=42) self.assertEqual(header, 42) class TunnelTests(TestCase): def setUp(self): response_text = ( 'HTTP/1.1 200 OK\r\n\r\n' # Reply to CONNECT 'HTTP/1.1 200 OK\r\n' # Reply to HEAD 'Content-Length: 42\r\n\r\n' ) self.host = 'proxy.com' self.port = client.HTTP_PORT self.conn = client.HTTPConnection(self.host) self.conn._create_connection = self._create_connection(response_text) def tearDown(self): self.conn.close() def _create_connection(self, response_text): def create_connection(address, timeout=None, source_address=None): return FakeSocket(response_text, host=address[0], port=address[1]) return create_connection def test_set_tunnel_host_port_headers_add_host_missing(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)'} tunnel_headers_after = tunnel_headers.copy() tunnel_headers_after['Host'] = '%s:%d' % (tunnel_host, tunnel_port) self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers_after) def test_set_tunnel_host_port_headers_set_host_identical(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)', 'Host': '%s:%d' % (tunnel_host, tunnel_port)} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) 
self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_set_tunnel_host_port_headers_set_host_different(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)', 'Host': '%s:%d' % ('example.com', 4200)} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_disallow_set_tunnel_after_connect(self): # Once connected, we shouldn't be able to tunnel anymore self.conn.connect() self.assertRaises(RuntimeError, self.conn.set_tunnel, 'destination.com') def test_connect_with_tunnel(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_with_default_port(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii'), port=d[b'port']) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_with_nonstandard_port(self): d = { b'host': b'destination.com', b'port': 8888, } self.conn.set_tunnel(d[b'host'].decode('ascii'), port=d[b'port']) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s:%(port)d\r\n' % d, self.conn.sock.data) # This request is not RFC-valid, but it's been possible with the library # for years, so don't break it unexpectedly... This also tests # case-insensitivity when injecting Host: headers if they're missing. 
def test_connect_with_tunnel_with_different_host_header(self): d = { b'host': b'destination.com', b'tunnel_host_header': b'example.com:9876', b'port': client.HTTP_PORT, } self.conn.set_tunnel( d[b'host'].decode('ascii'), headers={'HOST': d[b'tunnel_host_header'].decode('ascii')}) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'HOST: %(tunnel_host_header)s\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_different_host(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_idna(self): dest = '\u03b4\u03c0\u03b8.gr' dest_port = b'%s:%d' % (dest.encode('idna'), client.HTTP_PORT) expected = b'CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n' % ( dest_port, dest_port) self.conn.set_tunnel(dest) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(expected, self.conn.sock.data) def test_tunnel_connect_single_send_connection_setup(self): """Regresstion test for https://bugs.python.org/issue43332.""" with mock.patch.object(self.conn, 'send') as mock_send: self.conn.set_tunnel('destination.com') self.conn.connect() self.conn.request('GET', '/') mock_send.assert_called() # Likely 2, but this test only cares about the first. 
self.assertGreater( len(mock_send.mock_calls), 1, msg=f'unexpected number of send calls: {mock_send.mock_calls}') proxy_setup_data_sent = mock_send.mock_calls[0][1][0] self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent) self.assertTrue( proxy_setup_data_sent.endswith(b'\r\n\r\n'), msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}') def test_connect_put_request(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'PUT / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_put_request_ipv6(self): self.conn.set_tunnel('[1:2:3::4]', 1234) self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT [1:2:3::4]:1234', self.conn.sock.data) self.assertIn(b'Host: [1:2:3::4]:1234', self.conn.sock.data) def test_connect_put_request_ipv6_port(self): self.conn.set_tunnel('[1:2:3::4]:1234') self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT [1:2:3::4]:1234', self.conn.sock.data) self.assertIn(b'Host: [1:2:3::4]:1234', self.conn.sock.data) def test_tunnel_debuglog(self): expected_header = 'X-Dummy: 1' response_text = 'HTTP/1.0 200 OK\r\n{}\r\n\r\n'.format(expected_header) self.conn.set_debuglevel(1) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') with support.captured_stdout() as output: self.conn.request('PUT', '/', '') lines = output.getvalue().splitlines() self.assertIn('header: {}'.format(expected_header), lines) def test_proxy_response_headers(self): expected_header = ('X-Dummy', '1') response_text = ( 'HTTP/1.0 200 OK\r\n' '{0}\r\n\r\n'.format(':'.join(expected_header)) ) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') self.conn.request('PUT', '/', '') headers = self.conn.get_proxy_response_headers() self.assertIn(expected_header, headers.items()) def test_no_proxy_response_headers(self): expected_header = ('X-Dummy', '1') response_text = ( 'HTTP/1.0 200 OK\r\n' '{0}\r\n\r\n'.format(':'.join(expected_header)) ) self.conn._create_connection = self._create_connection(response_text) self.conn.request('PUT', '/', '') headers = self.conn.get_proxy_response_headers() self.assertIsNone(headers) def test_tunnel_leak(self): sock = None def _create_connection(address, timeout=None, source_address=None): nonlocal sock sock = FakeSocket( 'HTTP/1.1 404 NOT FOUND\r\n\r\n', host=address[0], port=address[1], ) return sock self.conn._create_connection = _create_connection self.conn.set_tunnel('destination.com') exc = None try: self.conn.request('HEAD', '/', '') except OSError as e: # keeping a reference to exc keeps response alive in the traceback exc = e self.assertIsNotNone(exc) self.assertTrue(sock.file_closed) if __name__ == '__main__': unittest.main(verbosity=2) gevent-24.11.1/src/greentest/3.12/test_interpreters.py000066400000000000000000001001611471441230600224620ustar00rootroot00000000000000import contextlib import json import os import os.path import sys import threading from textwrap import dedent import 
unittest import time from test import support from test.support import import_helper from test.support import threading_helper from test.support import os_helper _interpreters = import_helper.import_module('_xxsubinterpreters') _channels = import_helper.import_module('_xxinterpchannels') from test.support import interpreters def _captured_script(script): r, w = os.pipe() indented = script.replace('\n', '\n ') wrapped = dedent(f""" import contextlib with open({w}, 'w', encoding='utf-8') as spipe: with contextlib.redirect_stdout(spipe): {indented} """) return wrapped, open(r, encoding='utf-8') def clean_up_interpreters(): for interp in interpreters.list_all(): if interp.id == 0: # main continue try: interp.close() except RuntimeError: pass # already destroyed def _run_output(interp, request, channels=None): script, rpipe = _captured_script(request) with rpipe: interp.run(script, channels=channels) return rpipe.read() @contextlib.contextmanager def _running(interp): r, w = os.pipe() def run(): interp.run(dedent(f""" # wait for "signal" with open({r}) as rpipe: rpipe.read() """)) t = threading.Thread(target=run) t.start() yield with open(w, 'w') as spipe: spipe.write('done') t.join() class TestBase(unittest.TestCase): def os_pipe(self): r, w = os.pipe() def cleanup(): try: os.close(w) except Exception: pass try: os.close(r) except Exception: pass self.addCleanup(cleanup) return r, w def tearDown(self): clean_up_interpreters() class CreateTests(TestBase): def test_in_main(self): interp = interpreters.create() self.assertIsInstance(interp, interpreters.Interpreter) self.assertIn(interp, interpreters.list_all()) def test_in_thread(self): lock = threading.Lock() interp = None def f(): nonlocal interp interp = interpreters.create() lock.acquire() lock.release() t = threading.Thread(target=f) with lock: t.start() t.join() self.assertIn(interp, interpreters.list_all()) def test_in_subinterpreter(self): main, = interpreters.list_all() interp = interpreters.create() out = _run_output(interp, dedent(""" from test.support import interpreters interp = interpreters.create() print(interp.id) """)) interp2 = interpreters.Interpreter(int(out)) self.assertEqual(interpreters.list_all(), [main, interp, interp2]) def test_after_destroy_all(self): before = set(interpreters.list_all()) # Create 3 subinterpreters. interp_lst = [] for _ in range(3): interps = interpreters.create() interp_lst.append(interps) # Now destroy them. for interp in interp_lst: interp.close() # Finally, create another. interp = interpreters.create() self.assertEqual(set(interpreters.list_all()), before | {interp}) def test_after_destroy_some(self): before = set(interpreters.list_all()) # Create 3 subinterpreters. interp1 = interpreters.create() interp2 = interpreters.create() interp3 = interpreters.create() # Now destroy 2 of them. interp1.close() interp2.close() # Finally, create another. 
interp = interpreters.create() self.assertEqual(set(interpreters.list_all()), before | {interp3, interp}) class GetCurrentTests(TestBase): def test_main(self): main = interpreters.get_main() current = interpreters.get_current() self.assertEqual(current, main) def test_subinterpreter(self): main = _interpreters.get_main() interp = interpreters.create() out = _run_output(interp, dedent(""" from test.support import interpreters cur = interpreters.get_current() print(cur.id) """)) current = interpreters.Interpreter(int(out)) self.assertNotEqual(current, main) class ListAllTests(TestBase): def test_initial(self): interps = interpreters.list_all() self.assertEqual(1, len(interps)) def test_after_creating(self): main = interpreters.get_current() first = interpreters.create() second = interpreters.create() ids = [] for interp in interpreters.list_all(): ids.append(interp.id) self.assertEqual(ids, [main.id, first.id, second.id]) def test_after_destroying(self): main = interpreters.get_current() first = interpreters.create() second = interpreters.create() first.close() ids = [] for interp in interpreters.list_all(): ids.append(interp.id) self.assertEqual(ids, [main.id, second.id]) class TestInterpreterAttrs(TestBase): def test_id_type(self): main = interpreters.get_main() current = interpreters.get_current() interp = interpreters.create() self.assertIsInstance(main.id, _interpreters.InterpreterID) self.assertIsInstance(current.id, _interpreters.InterpreterID) self.assertIsInstance(interp.id, _interpreters.InterpreterID) def test_main_id(self): main = interpreters.get_main() self.assertEqual(main.id, 0) def test_custom_id(self): interp = interpreters.Interpreter(1) self.assertEqual(interp.id, 1) with self.assertRaises(TypeError): interpreters.Interpreter('1') def test_id_readonly(self): interp = interpreters.Interpreter(1) with self.assertRaises(AttributeError): interp.id = 2 @unittest.skip('not ready yet (see bpo-32604)') def test_main_isolated(self): main = interpreters.get_main() self.assertFalse(main.isolated) @unittest.skip('not ready yet (see bpo-32604)') def test_subinterpreter_isolated_default(self): interp = interpreters.create() self.assertFalse(interp.isolated) def test_subinterpreter_isolated_explicit(self): interp1 = interpreters.create(isolated=True) interp2 = interpreters.create(isolated=False) self.assertTrue(interp1.isolated) self.assertFalse(interp2.isolated) @unittest.skip('not ready yet (see bpo-32604)') def test_custom_isolated_default(self): interp = interpreters.Interpreter(1) self.assertFalse(interp.isolated) def test_custom_isolated_explicit(self): interp1 = interpreters.Interpreter(1, isolated=True) interp2 = interpreters.Interpreter(1, isolated=False) self.assertTrue(interp1.isolated) self.assertFalse(interp2.isolated) def test_isolated_readonly(self): interp = interpreters.Interpreter(1) with self.assertRaises(AttributeError): interp.isolated = True def test_equality(self): interp1 = interpreters.create() interp2 = interpreters.create() self.assertEqual(interp1, interp1) self.assertNotEqual(interp1, interp2) class TestInterpreterIsRunning(TestBase): def test_main(self): main = interpreters.get_main() self.assertTrue(main.is_running()) @unittest.skip('Fails on FreeBSD') def test_subinterpreter(self): interp = interpreters.create() self.assertFalse(interp.is_running()) with _running(interp): self.assertTrue(interp.is_running()) self.assertFalse(interp.is_running()) def test_finished(self): r, w = self.os_pipe() interp = interpreters.create() interp.run(f"""if True: import os 
os.write({w}, b'x') """) self.assertFalse(interp.is_running()) self.assertEqual(os.read(r, 1), b'x') def test_from_subinterpreter(self): interp = interpreters.create() out = _run_output(interp, dedent(f""" import _xxsubinterpreters as _interpreters if _interpreters.is_running({interp.id}): print(True) else: print(False) """)) self.assertEqual(out.strip(), 'True') def test_already_destroyed(self): interp = interpreters.create() interp.close() with self.assertRaises(RuntimeError): interp.is_running() def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.is_running() def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.is_running() def test_with_only_background_threads(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() DONE = b'D' FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading def task(): v = os.read({r_thread}, 1) assert v == {DONE!r} os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() """) self.assertFalse(interp.is_running()) os.write(w_thread, DONE) interp.run('t.join()') self.assertEqual(os.read(r_interp, 1), FINISHED) class TestInterpreterClose(TestBase): def test_basic(self): main = interpreters.get_main() interp1 = interpreters.create() interp2 = interpreters.create() interp3 = interpreters.create() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp2, interp3}) interp2.close() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp3}) def test_all(self): before = set(interpreters.list_all()) interps = set() for _ in range(3): interp = interpreters.create() interps.add(interp) self.assertEqual(set(interpreters.list_all()), before | interps) for interp in interps: interp.close() self.assertEqual(set(interpreters.list_all()), before) def test_main(self): main, = interpreters.list_all() with self.assertRaises(RuntimeError): main.close() def f(): with self.assertRaises(RuntimeError): main.close() t = threading.Thread(target=f) t.start() t.join() def test_already_destroyed(self): interp = interpreters.create() interp.close() with self.assertRaises(RuntimeError): interp.close() def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.close() def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.close() def test_from_current(self): main, = interpreters.list_all() interp = interpreters.create() out = _run_output(interp, dedent(f""" from test.support import interpreters interp = interpreters.Interpreter({int(interp.id)}) try: interp.close() except RuntimeError: print('failed') """)) self.assertEqual(out.strip(), 'failed') self.assertEqual(set(interpreters.list_all()), {main, interp}) def test_from_sibling(self): main, = interpreters.list_all() interp1 = interpreters.create() interp2 = interpreters.create() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp2}) interp1.run(dedent(f""" from test.support import interpreters interp2 = interpreters.Interpreter(int({interp2.id})) interp2.close() interp3 = interpreters.create() interp3.close() """)) self.assertEqual(set(interpreters.list_all()), {main, interp1}) def test_from_other_thread(self): interp = interpreters.create() def f(): interp.close() t = threading.Thread(target=f) t.start() t.join() @unittest.skip('Fails on FreeBSD') def test_still_running(self): main, = 
interpreters.list_all() interp = interpreters.create() with _running(interp): with self.assertRaises(RuntimeError): interp.close() self.assertTrue(interp.is_running()) def test_subthreads_still_running(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading import time done = False def notify_fini(): global done done = True t.join() threading._register_atexit(notify_fini) def task(): while not done: time.sleep(0.1) os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() """) interp.close() self.assertEqual(os.read(r_interp, 1), FINISHED) class TestInterpreterRun(TestBase): def test_success(self): interp = interpreters.create() script, file = _captured_script('print("it worked!", end="")') with file: interp.run(script) out = file.read() self.assertEqual(out, 'it worked!') def test_in_thread(self): interp = interpreters.create() script, file = _captured_script('print("it worked!", end="")') with file: def f(): interp.run(script) t = threading.Thread(target=f) t.start() t.join() out = file.read() self.assertEqual(out, 'it worked!') @support.requires_fork() def test_fork(self): interp = interpreters.create() import tempfile with tempfile.NamedTemporaryFile('w+', encoding='utf-8') as file: file.write('') file.flush() expected = 'spam spam spam spam spam' script = dedent(f""" import os try: os.fork() except RuntimeError: with open('{file.name}', 'w', encoding='utf-8') as out: out.write('{expected}') """) interp.run(script) file.seek(0) content = file.read() self.assertEqual(content, expected) @unittest.skip('Fails on FreeBSD') def test_already_running(self): interp = interpreters.create() with _running(interp): with self.assertRaises(RuntimeError): interp.run('print("spam")') def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.run('print("spam")') def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.run('print("spam")') def test_bad_script(self): interp = interpreters.create() with self.assertRaises(TypeError): interp.run(10) def test_bytes_for_script(self): interp = interpreters.create() with self.assertRaises(TypeError): interp.run(b'print("spam")') def test_with_background_threads_still_running(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() RAN = b'R' DONE = b'D' FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading def task(): v = os.read({r_thread}, 1) assert v == {DONE!r} os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() os.write({w_interp}, {RAN!r}) """) interp.run(f"""if True: os.write({w_interp}, {RAN!r}) """) os.write(w_thread, DONE) interp.run('t.join()') self.assertEqual(os.read(r_interp, 1), RAN) self.assertEqual(os.read(r_interp, 1), RAN) self.assertEqual(os.read(r_interp, 1), FINISHED) # test_xxsubinterpreters covers the remaining Interpreter.run() behavior. @unittest.skip('these are crashing, likely just due just to _xxsubinterpreters (see gh-105699)') class StressTests(TestBase): # In these tests we generally want a lot of interpreters, # but not so many that any test takes too long. 
@support.requires_resource('cpu') def test_create_many_sequential(self): alive = [] for _ in range(100): interp = interpreters.create() alive.append(interp) @support.requires_resource('cpu') def test_create_many_threaded(self): alive = [] def task(): interp = interpreters.create() alive.append(interp) threads = (threading.Thread(target=task) for _ in range(200)) with threading_helper.start_threads(threads): pass class StartupTests(TestBase): # We want to ensure the initial state of subinterpreters # matches expectations. _subtest_count = 0 @contextlib.contextmanager def subTest(self, *args): with super().subTest(*args) as ctx: self._subtest_count += 1 try: yield ctx finally: if self._debugged_in_subtest: if self._subtest_count == 1: # The first subtest adds a leading newline, so we # compensate here by not printing a trailing newline. print('### end subtest debug ###', end='') else: print('### end subtest debug ###') self._debugged_in_subtest = False def debug(self, msg, *, header=None): if header: self._debug(f'--- {header} ---') if msg: if msg.endswith(os.linesep): self._debug(msg[:-len(os.linesep)]) else: self._debug(msg) self._debug('') self._debug('------') else: self._debug(msg) _debugged = False _debugged_in_subtest = False def _debug(self, msg): if not self._debugged: print() self._debugged = True if self._subtest is not None: if True: if not self._debugged_in_subtest: self._debugged_in_subtest = True print('### start subtest debug ###') print(msg) else: print(msg) def create_temp_dir(self): import tempfile tmp = tempfile.mkdtemp(prefix='test_interpreters_') tmp = os.path.realpath(tmp) self.addCleanup(os_helper.rmtree, tmp) return tmp def write_script(self, *path, text): filename = os.path.join(*path) dirname = os.path.dirname(filename) if dirname: os.makedirs(dirname, exist_ok=True) with open(filename, 'w', encoding='utf-8') as outfile: outfile.write(dedent(text)) return filename @support.requires_subprocess() def run_python(self, argv, *, cwd=None): # This method is inspired by # EmbeddingTestsMixin.run_embedded_interpreter() in test_embed.py. import shlex import subprocess if isinstance(argv, str): argv = shlex.split(argv) argv = [sys.executable, *argv] try: proc = subprocess.run( argv, cwd=cwd, capture_output=True, text=True, ) except Exception as exc: self.debug(f'# cmd: {shlex.join(argv)}') if isinstance(exc, FileNotFoundError) and not exc.filename: if os.path.exists(argv[0]): exists = 'exists' else: exists = 'does not exist' self.debug(f'{argv[0]} {exists}') raise # re-raise assert proc.stderr == '' or proc.returncode != 0, proc.stderr if proc.returncode != 0 and support.verbose: self.debug(f'# python3 {shlex.join(argv[1:])} failed:') self.debug(proc.stdout, header='stdout') self.debug(proc.stderr, header='stderr') self.assertEqual(proc.returncode, 0) self.assertEqual(proc.stderr, '') return proc.stdout def test_sys_path_0(self): # The main interpreter's sys.path[0] should be used by subinterpreters. 
script = ''' import sys from test.support import interpreters orig = sys.path[0] interp = interpreters.create() interp.run(f"""if True: import json import sys print(json.dumps({{ 'main': {orig!r}, 'sub': sys.path[0], }}, indent=4), flush=True) """) ''' # / # pkg/ # __init__.py # __main__.py # script.py # script.py cwd = self.create_temp_dir() self.write_script(cwd, 'pkg', '__init__.py', text='') self.write_script(cwd, 'pkg', '__main__.py', text=script) self.write_script(cwd, 'pkg', 'script.py', text=script) self.write_script(cwd, 'script.py', text=script) cases = [ ('script.py', cwd), ('-m script', cwd), ('-m pkg', cwd), ('-m pkg.script', cwd), ('-c "import script"', ''), ] for argv, expected in cases: with self.subTest(f'python3 {argv}'): out = self.run_python(argv, cwd=cwd) data = json.loads(out) sp0_main, sp0_sub = data['main'], data['sub'] self.assertEqual(sp0_sub, sp0_main) self.assertEqual(sp0_sub, expected) class FinalizationTests(TestBase): def test_gh_109793(self): import subprocess argv = [sys.executable, '-c', '''if True: import _xxsubinterpreters as _interpreters interpid = _interpreters.create() raise Exception '''] proc = subprocess.run(argv, capture_output=True, text=True) self.assertIn('Traceback', proc.stderr) if proc.returncode == 0 and support.verbose: print() print("--- cmd unexpected succeeded ---") print(f"stdout:\n{proc.stdout}") print(f"stderr:\n{proc.stderr}") print("------") self.assertEqual(proc.returncode, 1) class TestIsShareable(TestBase): def test_default_shareables(self): shareables = [ # singletons None, # builtin objects b'spam', 'spam', 10, -10, ] for obj in shareables: with self.subTest(obj): shareable = interpreters.is_shareable(obj) self.assertTrue(shareable) def test_not_shareable(self): class Cheese: def __init__(self, name): self.name = name def __str__(self): return self.name class SubBytes(bytes): """A subclass of a shareable type.""" not_shareables = [ # singletons True, False, NotImplemented, ..., # builtin types and objects type, object, object(), Exception(), 100.0, # user-defined types and objects Cheese, Cheese('Wensleydale'), SubBytes(b'spam'), ] for obj in not_shareables: with self.subTest(repr(obj)): self.assertFalse( interpreters.is_shareable(obj)) class TestChannels(TestBase): def test_create(self): r, s = interpreters.create_channel() self.assertIsInstance(r, interpreters.RecvChannel) self.assertIsInstance(s, interpreters.SendChannel) def test_list_all(self): self.assertEqual(interpreters.list_all_channels(), []) created = set() for _ in range(3): ch = interpreters.create_channel() created.add(ch) after = set(interpreters.list_all_channels()) self.assertEqual(after, created) class TestRecvChannelAttrs(TestBase): def test_id_type(self): rch, _ = interpreters.create_channel() self.assertIsInstance(rch.id, _channels.ChannelID) def test_custom_id(self): rch = interpreters.RecvChannel(1) self.assertEqual(rch.id, 1) with self.assertRaises(TypeError): interpreters.RecvChannel('1') def test_id_readonly(self): rch = interpreters.RecvChannel(1) with self.assertRaises(AttributeError): rch.id = 2 def test_equality(self): ch1, _ = interpreters.create_channel() ch2, _ = interpreters.create_channel() self.assertEqual(ch1, ch1) self.assertNotEqual(ch1, ch2) class TestSendChannelAttrs(TestBase): def test_id_type(self): _, sch = interpreters.create_channel() self.assertIsInstance(sch.id, _channels.ChannelID) def test_custom_id(self): sch = interpreters.SendChannel(1) self.assertEqual(sch.id, 1) with self.assertRaises(TypeError): 
interpreters.SendChannel('1') def test_id_readonly(self): sch = interpreters.SendChannel(1) with self.assertRaises(AttributeError): sch.id = 2 def test_equality(self): _, ch1 = interpreters.create_channel() _, ch2 = interpreters.create_channel() self.assertEqual(ch1, ch1) self.assertNotEqual(ch1, ch2) class TestSendRecv(TestBase): def test_send_recv_main(self): r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_same_interpreter(self): interp = interpreters.create() interp.run(dedent(""" from test.support import interpreters r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv() assert obj == orig, 'expected: obj == orig' assert obj is not orig, 'expected: obj is not orig' """)) @unittest.skip('broken (see BPO-...)') def test_send_recv_different_interpreters(self): r1, s1 = interpreters.create_channel() r2, s2 = interpreters.create_channel() orig1 = b'spam' s1.send_nowait(orig1) out = _run_output( interpreters.create(), dedent(f""" obj1 = r.recv() assert obj1 == b'spam', 'expected: obj1 == orig1' # When going to another interpreter we get a copy. assert id(obj1) != {id(orig1)}, 'expected: obj1 is not orig1' orig2 = b'eggs' print(id(orig2)) s.send_nowait(orig2) """), channels=dict(r=r1, s=s2), ) obj2 = r2.recv() self.assertEqual(obj2, b'eggs') self.assertNotEqual(id(obj2), int(out)) def test_send_recv_different_threads(self): r, s = interpreters.create_channel() def f(): while True: try: obj = r.recv() break except interpreters.ChannelEmptyError: time.sleep(0.1) s.send(obj) t = threading.Thread(target=f) t.start() orig = b'spam' s.send(orig) t.join() obj = r.recv() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_nowait_main(self): r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv_nowait() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_nowait_main_with_default(self): r, _ = interpreters.create_channel() obj = r.recv_nowait(None) self.assertIsNone(obj) def test_send_recv_nowait_same_interpreter(self): interp = interpreters.create() interp.run(dedent(""" from test.support import interpreters r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv_nowait() assert obj == orig, 'expected: obj == orig' # When going back to the same interpreter we get the same object. assert obj is not orig, 'expected: obj is not orig' """)) @unittest.skip('broken (see BPO-...)') def test_send_recv_nowait_different_interpreters(self): r1, s1 = interpreters.create_channel() r2, s2 = interpreters.create_channel() orig1 = b'spam' s1.send_nowait(orig1) out = _run_output( interpreters.create(), dedent(f""" obj1 = r.recv_nowait() assert obj1 == b'spam', 'expected: obj1 == orig1' # When going to another interpreter we get a copy. 
assert id(obj1) != {id(orig1)}, 'expected: obj1 is not orig1' orig2 = b'eggs' print(id(orig2)) s.send_nowait(orig2) """), channels=dict(r=r1, s=s2), ) obj2 = r2.recv_nowait() self.assertEqual(obj2, b'eggs') self.assertNotEqual(id(obj2), int(out)) def test_recv_channel_does_not_exist(self): ch = interpreters.RecvChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.recv() def test_send_channel_does_not_exist(self): ch = interpreters.SendChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.send(b'spam') def test_recv_nowait_channel_does_not_exist(self): ch = interpreters.RecvChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.recv_nowait() def test_send_nowait_channel_does_not_exist(self): ch = interpreters.SendChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.send_nowait(b'spam') def test_recv_nowait_empty(self): ch, _ = interpreters.create_channel() with self.assertRaises(interpreters.ChannelEmptyError): ch.recv_nowait() def test_recv_nowait_default(self): default = object() rch, sch = interpreters.create_channel() obj1 = rch.recv_nowait(default) sch.send_nowait(None) sch.send_nowait(1) sch.send_nowait(b'spam') sch.send_nowait(b'eggs') obj2 = rch.recv_nowait(default) obj3 = rch.recv_nowait(default) obj4 = rch.recv_nowait() obj5 = rch.recv_nowait(default) obj6 = rch.recv_nowait(default) self.assertIs(obj1, default) self.assertIs(obj2, None) self.assertEqual(obj3, 1) self.assertEqual(obj4, b'spam') self.assertEqual(obj5, b'eggs') self.assertIs(obj6, default) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_select.py000066400000000000000000000066601471441230600212240ustar00rootroot00000000000000import errno import select import subprocess import sys import textwrap import unittest from test import support support.requires_working_socket(module=True) @unittest.skipIf((sys.platform[:3]=='win'), "can't easily test on this system") class SelectTestCase(unittest.TestCase): class Nope: pass class Almost: def fileno(self): return 'fileno' def test_error_conditions(self): self.assertRaises(TypeError, select.select, 1, 2, 3) self.assertRaises(TypeError, select.select, [self.Nope()], [], []) self.assertRaises(TypeError, select.select, [self.Almost()], [], []) self.assertRaises(TypeError, select.select, [], [], [], "not a number") self.assertRaises(ValueError, select.select, [], [], [], -1) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: self.assertEqual(err.errno, errno.EBADF) else: self.fail("exception not raised") def test_returned_list_identity(self): # See issue #8329 r, w, x = select.select([], [], [], 1) self.assertIsNot(r, w) self.assertIsNot(r, x) self.assertIsNot(w, x) @support.requires_fork() def test_select(self): code = textwrap.dedent(''' import time for i in range(10): print("testing...", flush=True) time.sleep(0.050) ''') cmd = [sys.executable, '-I', '-c', code] with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc: pipe = proc.stdout for timeout in (0, 1, 2, 4, 8, 16) + (None,)*10: if support.verbose: print(f'timeout = {timeout}') rfd, wfd, xfd = select.select([pipe], [], [], timeout) self.assertEqual(wfd, []) self.assertEqual(xfd, []) if not rfd: continue if rfd == [pipe]: 
line = pipe.readline() if support.verbose: print(repr(line)) if not line: if support.verbose: print('EOF') break continue self.fail('Unexpected return values from select():', rfd, wfd, xfd) # Issue 16230: Crash on select resized list @unittest.skipIf( support.is_emscripten, "Emscripten cannot select a fd multiple times." ) def test_select_mutated(self): a = [] class F: def fileno(self): del a[-1] return sys.__stdout__.fileno() a[:] = [F()] * 10 self.assertEqual(select.select([], a, []), ([], a[:5], [])) def test_disallow_instantiation(self): support.check_disallow_instantiation(self, type(select.poll())) if hasattr(select, 'devpoll'): support.check_disallow_instantiation(self, type(select.devpoll())) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_selectors.py000066400000000000000000000472151471441230600217510ustar00rootroot00000000000000import errno import os import random import selectors import signal import socket import sys from test import support from test.support import os_helper from test.support import socket_helper from time import sleep import unittest import unittest.mock import tempfile from time import monotonic as time try: import resource except ImportError: resource = None if support.is_emscripten or support.is_wasi: raise unittest.SkipTest("Cannot create socketpair on Emscripten/WASI.") if hasattr(socket, 'socketpair'): socketpair = socket.socketpair else: def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): with socket.socket(family, type, proto) as l: l.bind((socket_helper.HOST, 0)) l.listen() c = socket.socket(family, type, proto) try: c.connect(l.getsockname()) caddr = c.getsockname() while True: a, addr = l.accept() # check that we've got the correct client if addr == caddr: return c, a a.close() except OSError: c.close() raise def find_ready_matching(ready, flag): match = [] for key, events in ready: if events & flag: match.append(key.fileobj) return match class BaseSelectorTestCase: def make_socketpair(self): rd, wr = socketpair() self.addCleanup(rd.close) self.addCleanup(wr.close) return rd, wr def test_register(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertIsInstance(key, selectors.SelectorKey) self.assertEqual(key.fileobj, rd) self.assertEqual(key.fd, rd.fileno()) self.assertEqual(key.events, selectors.EVENT_READ) self.assertEqual(key.data, "data") # register an unknown event self.assertRaises(ValueError, s.register, 0, 999999) # register an invalid FD self.assertRaises(ValueError, s.register, -10, selectors.EVENT_READ) # register twice self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) # register the same FD, but with a different object self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) def test_unregister(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.unregister(rd) # unregister an unknown file obj self.assertRaises(KeyError, s.unregister, 999999) # unregister twice self.assertRaises(KeyError, s.unregister, rd) def test_unregister_after_fd_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(r) s.unregister(w) @unittest.skipUnless(os.name == 'posix', "requires posix") def 
test_unregister_after_fd_close_and_reuse(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd2, wr2 = self.make_socketpair() rd.close() wr.close() os.dup2(rd2.fileno(), r) os.dup2(wr2.fileno(), w) self.addCleanup(os.close, r) self.addCleanup(os.close, w) s.unregister(r) s.unregister(w) def test_unregister_after_socket_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(rd) s.unregister(wr) def test_modify(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ) # modify events key2 = s.modify(rd, selectors.EVENT_WRITE) self.assertNotEqual(key.events, key2.events) self.assertEqual(key2, s.get_key(rd)) s.unregister(rd) # modify data d1 = object() d2 = object() key = s.register(rd, selectors.EVENT_READ, d1) key2 = s.modify(rd, selectors.EVENT_READ, d2) self.assertEqual(key.events, key2.events) self.assertNotEqual(key.data, key2.data) self.assertEqual(key2, s.get_key(rd)) self.assertEqual(key2.data, d2) # modify unknown file obj self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ) # modify use a shortcut d3 = object() s.register = unittest.mock.Mock() s.unregister = unittest.mock.Mock() s.modify(rd, selectors.EVENT_READ, d3) self.assertFalse(s.register.called) self.assertFalse(s.unregister.called) def test_modify_unregister(self): # Make sure the fd is unregister()ed in case of error on # modify(): http://bugs.python.org/issue30014 if self.SELECTOR.__name__ == 'EpollSelector': patch = unittest.mock.patch( 'selectors.EpollSelector._selector_cls') elif self.SELECTOR.__name__ == 'PollSelector': patch = unittest.mock.patch( 'selectors.PollSelector._selector_cls') elif self.SELECTOR.__name__ == 'DevpollSelector': patch = unittest.mock.patch( 'selectors.DevpollSelector._selector_cls') else: raise self.skipTest("") with patch as m: m.return_value.modify = unittest.mock.Mock( side_effect=ZeroDivisionError) s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) self.assertEqual(len(s._map), 1) with self.assertRaises(ZeroDivisionError): s.modify(rd, selectors.EVENT_WRITE) self.assertEqual(len(s._map), 0) def test_close(self): s = self.SELECTOR() self.addCleanup(s.close) mapping = s.get_map() rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) s.close() self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) self.assertRaises(KeyError, mapping.__getitem__, rd) self.assertRaises(KeyError, mapping.__getitem__, wr) def test_get_key(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertEqual(key, s.get_key(rd)) # unknown file obj self.assertRaises(KeyError, s.get_key, 999999) def test_get_map(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() keys = s.get_map() self.assertFalse(keys) self.assertEqual(len(keys), 0) self.assertEqual(list(keys), []) key = s.register(rd, selectors.EVENT_READ, "data") self.assertIn(rd, keys) self.assertEqual(key, keys[rd]) self.assertEqual(len(keys), 1) self.assertEqual(list(keys), [rd.fileno()]) self.assertEqual(list(keys.values()), [key]) # 
unknown file obj with self.assertRaises(KeyError): keys[999999] # Read-only mapping with self.assertRaises(TypeError): del keys[rd] def test_select(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) wr_key = s.register(wr, selectors.EVENT_WRITE) result = s.select() for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertTrue(events) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) self.assertEqual([(wr_key, selectors.EVENT_WRITE)], result) def test_select_read_write(self): # gh-110038: when a file descriptor is registered for both read and # write, the two events must be seen on a single call to select(). s = self.SELECTOR() self.addCleanup(s.close) sock1, sock2 = self.make_socketpair() sock2.send(b"foo") my_key = s.register(sock1, selectors.EVENT_READ | selectors.EVENT_WRITE) seen_read, seen_write = False, False result = s.select() # We get the read and write either in the same result entry or in two # distinct entries with the same key. self.assertLessEqual(len(result), 2) for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertEqual(key, my_key) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) if events & selectors.EVENT_READ: self.assertFalse(seen_read) seen_read = True if events & selectors.EVENT_WRITE: self.assertFalse(seen_write) seen_write = True self.assertTrue(seen_read) self.assertTrue(seen_write) def test_context_manager(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() with s as sel: sel.register(rd, selectors.EVENT_READ) sel.register(wr, selectors.EVENT_WRITE) self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) def test_fileno(self): s = self.SELECTOR() self.addCleanup(s.close) if hasattr(s, 'fileno'): fd = s.fileno() self.assertTrue(isinstance(fd, int)) self.assertGreaterEqual(fd, 0) def test_selector(self): s = self.SELECTOR() self.addCleanup(s.close) NUM_SOCKETS = 12 MSG = b" This is a test." MSG_LEN = len(MSG) readers = [] writers = [] r2w = {} w2r = {} for i in range(NUM_SOCKETS): rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) readers.append(rd) writers.append(wr) r2w[rd] = wr w2r[wr] = rd bufs = [] while writers: ready = s.select() ready_writers = find_ready_matching(ready, selectors.EVENT_WRITE) if not ready_writers: self.fail("no sockets ready for writing") wr = random.choice(ready_writers) wr.send(MSG) for i in range(10): ready = s.select() ready_readers = find_ready_matching(ready, selectors.EVENT_READ) if ready_readers: break # there might be a delay between the write to the write end and # the read end is reported ready sleep(0.1) else: self.fail("no sockets ready for reading") self.assertEqual([w2r[wr]], ready_readers) rd = ready_readers[0] buf = rd.recv(MSG_LEN) self.assertEqual(len(buf), MSG_LEN) bufs.append(buf) s.unregister(r2w[rd]) s.unregister(rd) writers.remove(r2w[rd]) self.assertEqual(bufs, [MSG] * NUM_SOCKETS) @unittest.skipIf(sys.platform == 'win32', 'select.select() cannot be used with empty fd sets') def test_empty_select(self): # Issue #23009: Make sure EpollSelector.select() works when no FD is # registered. 
        s = self.SELECTOR()
        self.addCleanup(s.close)
        self.assertEqual(s.select(timeout=0), [])

    def test_timeout(self):
        s = self.SELECTOR()
        self.addCleanup(s.close)
        rd, wr = self.make_socketpair()
        s.register(wr, selectors.EVENT_WRITE)
        t = time()
        self.assertEqual(1, len(s.select(0)))
        self.assertEqual(1, len(s.select(-1)))
        self.assertLess(time() - t, 0.5)

        s.unregister(wr)
        s.register(rd, selectors.EVENT_READ)
        t = time()
        self.assertFalse(s.select(0))
        self.assertFalse(s.select(-1))
        self.assertLess(time() - t, 0.5)

        t0 = time()
        self.assertFalse(s.select(1))
        t1 = time()
        dt = t1 - t0
        # Tolerate 2.0 seconds for very slow buildbots
        self.assertTrue(0.8 <= dt <= 2.0, dt)

    @unittest.skipUnless(hasattr(signal, "alarm"),
                         "signal.alarm() required for this test")
    def test_select_interrupt_exc(self):
        s = self.SELECTOR()
        self.addCleanup(s.close)

        rd, wr = self.make_socketpair()

        class InterruptSelect(Exception):
            pass

        def handler(*args):
            raise InterruptSelect

        orig_alrm_handler = signal.signal(signal.SIGALRM, handler)
        self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler)

        try:
            signal.alarm(1)

            s.register(rd, selectors.EVENT_READ)
            t = time()
            # select() is interrupted by a signal which raises an exception
            with self.assertRaises(InterruptSelect):
                s.select(30)
            # select() was interrupted before the timeout of 30 seconds
            self.assertLess(time() - t, 5.0)
        finally:
            signal.alarm(0)

    @unittest.skipUnless(hasattr(signal, "alarm"),
                         "signal.alarm() required for this test")
    def test_select_interrupt_noraise(self):
        s = self.SELECTOR()
        self.addCleanup(s.close)

        rd, wr = self.make_socketpair()

        orig_alrm_handler = signal.signal(signal.SIGALRM, lambda *args: None)
        self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler)

        try:
            signal.alarm(1)

            s.register(rd, selectors.EVENT_READ)
            t = time()
            # select() is interrupted by a signal, but the signal handler
            # doesn't raise an exception, so select() should be retried with
            # a recomputed timeout
            self.assertFalse(s.select(1.5))
            self.assertGreaterEqual(time() - t, 1.0)
        finally:
            signal.alarm(0)


class ScalableSelectorMixIn:

    # see issue #18963 for why it's skipped on older OS X versions
    @support.requires_mac_ver(10, 5)
    @unittest.skipUnless(resource, "Test needs resource module")
    @support.requires_resource('cpu')
    def test_above_fd_setsize(self):
        # A scalable implementation should have no problem with more than
        # FD_SETSIZE file descriptors. Since we don't know the value, we just
        # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        try:
            resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
            self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE,
                            (soft, hard))
            NUM_FDS = min(hard, 2**16)
        except (OSError, ValueError):
            NUM_FDS = soft

        # guard for already allocated FDs (stdin, stdout...)
        NUM_FDS -= 32

        s = self.SELECTOR()
        self.addCleanup(s.close)

        for i in range(NUM_FDS // 2):
            try:
                rd, wr = self.make_socketpair()
            except OSError:
                # too many FDs, skip - note that we should only catch EMFILE
                # here, but apparently *BSD and Solaris can fail upon connect()
                # or bind() with EADDRNOTAVAIL, so let's be safe
                self.skipTest("FD limit reached")

            try:
                s.register(rd, selectors.EVENT_READ)
                s.register(wr, selectors.EVENT_WRITE)
            except OSError as e:
                if e.errno == errno.ENOSPC:
                    # this can be raised by epoll if we go over
                    # fs.epoll.max_user_watches sysctl
                    self.skipTest("FD limit reached")
                raise

        try:
            fds = s.select()
        except OSError as e:
            if e.errno == errno.EINVAL and sys.platform == 'darwin':
                # unexplainable errors on macOS don't need to fail the test
                self.skipTest("Invalid argument error calling poll()")
            raise
        self.assertEqual(NUM_FDS // 2, len(fds))


class DefaultSelectorTestCase(BaseSelectorTestCase, unittest.TestCase):

    SELECTOR = selectors.DefaultSelector


class SelectSelectorTestCase(BaseSelectorTestCase, unittest.TestCase):

    SELECTOR = selectors.SelectSelector


@unittest.skipUnless(hasattr(selectors, 'PollSelector'),
                     "Test needs selectors.PollSelector")
class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn,
                           unittest.TestCase):

    SELECTOR = getattr(selectors, 'PollSelector', None)


@unittest.skipUnless(hasattr(selectors, 'EpollSelector'),
                     "Test needs selectors.EpollSelector")
class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn,
                            unittest.TestCase):

    SELECTOR = getattr(selectors, 'EpollSelector', None)

    def test_register_file(self):
        # epoll(7) returns EPERM when given a file to watch
        s = self.SELECTOR()
        with tempfile.NamedTemporaryFile() as f:
            with self.assertRaises(IOError):
                s.register(f, selectors.EVENT_READ)

            # the SelectorKey has been removed
            with self.assertRaises(KeyError):
                s.get_key(f)


@unittest.skipUnless(hasattr(selectors, 'KqueueSelector'),
                     "Test needs selectors.KqueueSelector")
class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn,
                             unittest.TestCase):

    SELECTOR = getattr(selectors, 'KqueueSelector', None)

    def test_register_bad_fd(self):
        # a file descriptor that's been closed should raise an OSError
        # with EBADF
        s = self.SELECTOR()
        bad_f = os_helper.make_bad_fd()
        with self.assertRaises(OSError) as cm:
            s.register(bad_f, selectors.EVENT_READ)
        self.assertEqual(cm.exception.errno, errno.EBADF)
        # the SelectorKey has been removed
        with self.assertRaises(KeyError):
            s.get_key(bad_f)

    def test_empty_select_timeout(self):
        # Issues #23009, #29255: Make sure timeout is applied when no fds
        # are registered.
s = self.SELECTOR() self.addCleanup(s.close) t0 = time() self.assertEqual(s.select(1), []) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(selectors, 'DevpollSelector'), "Test needs selectors.DevpollSelector") class DevpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'DevpollSelector', None) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_signal.py000066400000000000000000001507051471441230600212220ustar00rootroot00000000000000import enum import errno import functools import inspect import os import random import signal import socket import statistics import subprocess import sys import threading import time import unittest from test import support from test.support import os_helper from test.support.script_helper import assert_python_ok, spawn_python from test.support import threading_helper try: import _testcapi except ImportError: _testcapi = None class GenericTests(unittest.TestCase): def test_enums(self): for name in dir(signal): sig = getattr(signal, name) if name in {'SIG_DFL', 'SIG_IGN'}: self.assertIsInstance(sig, signal.Handlers) elif name in {'SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'}: self.assertIsInstance(sig, signal.Sigmasks) elif name.startswith('SIG') and not name.startswith('SIG_'): self.assertIsInstance(sig, signal.Signals) elif name.startswith('CTRL_'): self.assertIsInstance(sig, signal.Signals) self.assertEqual(sys.platform, "win32") CheckedSignals = enum._old_convert_( enum.IntEnum, 'Signals', 'signal', lambda name: name.isupper() and (name.startswith('SIG') and not name.startswith('SIG_')) or name.startswith('CTRL_'), source=signal, ) enum._test_simple_enum(CheckedSignals, signal.Signals) CheckedHandlers = enum._old_convert_( enum.IntEnum, 'Handlers', 'signal', lambda name: name in ('SIG_DFL', 'SIG_IGN'), source=signal, ) enum._test_simple_enum(CheckedHandlers, signal.Handlers) Sigmasks = getattr(signal, 'Sigmasks', None) if Sigmasks is not None: CheckedSigmasks = enum._old_convert_( enum.IntEnum, 'Sigmasks', 'signal', lambda name: name in ('SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'), source=signal, ) enum._test_simple_enum(CheckedSigmasks, Sigmasks) def test_functions_module_attr(self): # Issue #27718: If __all__ is not defined all non-builtin functions # should have correct __module__ to be displayed by pydoc. 
for name in dir(signal): value = getattr(signal, name) if inspect.isroutine(value) and not inspect.isbuiltin(value): self.assertEqual(value.__module__, 'signal') @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class PosixTests(unittest.TestCase): def trivial_signal_handler(self, *args): pass def create_handler_with_partial(self, argument): return functools.partial(self.trivial_signal_handler, argument) def test_out_of_range_signal_number_raises_error(self): self.assertRaises(ValueError, signal.getsignal, 4242) self.assertRaises(ValueError, signal.signal, 4242, self.trivial_signal_handler) self.assertRaises(ValueError, signal.strsignal, 4242) def test_setting_signal_handler_to_none_raises_error(self): self.assertRaises(TypeError, signal.signal, signal.SIGUSR1, None) def test_getsignal(self): hup = signal.signal(signal.SIGHUP, self.trivial_signal_handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), self.trivial_signal_handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) def test_no_repr_is_called_on_signal_handler(self): # See https://github.com/python/cpython/issues/112559. class MyArgument: def __init__(self): self.repr_count = 0 def __repr__(self): self.repr_count += 1 return super().__repr__() argument = MyArgument() self.assertEqual(0, argument.repr_count) handler = self.create_handler_with_partial(argument) hup = signal.signal(signal.SIGHUP, handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) self.assertEqual(0, argument.repr_count) def test_strsignal(self): self.assertIn("Interrupt", signal.strsignal(signal.SIGINT)) self.assertIn("Terminated", signal.strsignal(signal.SIGTERM)) self.assertIn("Hangup", signal.strsignal(signal.SIGHUP)) # Issue 3864, unknown if this affects earlier versions of freebsd also def test_interprocess_signal(self): dirname = os.path.dirname(__file__) script = os.path.join(dirname, 'signalinterproctester.py') assert_python_ok(script) @unittest.skipUnless( hasattr(signal, "valid_signals"), "requires signal.valid_signals" ) def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertIn(signal.Signals.SIGINT, s) self.assertIn(signal.Signals.SIGALRM, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) # gh-91145: Make sure that all SIGxxx constants exposed by the Python # signal module have a number in the [0; signal.NSIG-1] range. for name in dir(signal): if not name.startswith("SIG"): continue if name in {"SIG_IGN", "SIG_DFL"}: # SIG_IGN and SIG_DFL are pointers continue with self.subTest(name=name): signum = getattr(signal, name) self.assertGreaterEqual(signum, 0) self.assertLess(signum, signal.NSIG) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers exit via SIGINT.""" process = subprocess.run( [sys.executable, "-c", "import os, signal, time\n" "os.kill(os.getpid(), signal.SIGINT)\n" "for _ in range(999): time.sleep(0.01)"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) self.assertEqual(process.returncode, -signal.SIGINT) # Caveat: The exit code is insufficient to guarantee we actually died # via a signal. POSIX shells do more than look at the 8 bit value. 
# Writing an automation friendly test of an interactive shell # to confirm that our process died via a SIGINT proved too complex. @unittest.skipUnless(sys.platform == "win32", "Windows specific") class WindowsSignalTests(unittest.TestCase): def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertGreaterEqual(len(s), 6) self.assertIn(signal.Signals.SIGINT, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) def test_issue9324(self): # Updated for issue #10003, adding SIGBREAK handler = lambda x, y: None checked = set() for sig in (signal.SIGABRT, signal.SIGBREAK, signal.SIGFPE, signal.SIGILL, signal.SIGINT, signal.SIGSEGV, signal.SIGTERM): # Set and then reset a handler for signals that work on windows. # Issue #18396, only for signals without a C-level handler. if signal.getsignal(sig) is not None: signal.signal(sig, signal.signal(sig, handler)) checked.add(sig) # Issue #18396: Ensure the above loop at least tested *something* self.assertTrue(checked) with self.assertRaises(ValueError): signal.signal(-1, handler) with self.assertRaises(ValueError): signal.signal(7, handler) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers an exit using STATUS_CONTROL_C_EXIT.""" # We don't test via os.kill(os.getpid(), signal.CTRL_C_EVENT) here # as that requires setting up a console control handler in a child # in its own process group. Doable, but quite complicated. (see # @eryksun on https://github.com/python/cpython/pull/11862) process = subprocess.run( [sys.executable, "-c", "raise KeyboardInterrupt"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) STATUS_CONTROL_C_EXIT = 0xC000013A self.assertEqual(process.returncode, STATUS_CONTROL_C_EXIT) class WakeupFDTests(unittest.TestCase): def test_invalid_call(self): # First parameter is positional-only with self.assertRaises(TypeError): signal.set_wakeup_fd(signum=signal.SIGINT) # warn_on_full_buffer is a keyword-only parameter with self.assertRaises(TypeError): signal.set_wakeup_fd(signal.SIGINT, False) def test_invalid_fd(self): fd = os_helper.make_bad_fd() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_invalid_socket(self): sock = socket.socket() fd = sock.fileno() sock.close() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) # Emscripten does not support fstat on pipes yet. 
# https://github.com/emscripten-core/emscripten/issues/16414 @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_result(self): r1, w1 = os.pipe() self.addCleanup(os.close, r1) self.addCleanup(os.close, w1) r2, w2 = os.pipe() self.addCleanup(os.close, r2) self.addCleanup(os.close, w2) if hasattr(os, 'set_blocking'): os.set_blocking(w1, False) os.set_blocking(w2, False) signal.set_wakeup_fd(w1) self.assertEqual(signal.set_wakeup_fd(w2), w1) self.assertEqual(signal.set_wakeup_fd(-1), w2) self.assertEqual(signal.set_wakeup_fd(-1), -1) @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_set_wakeup_fd_socket_result(self): sock1 = socket.socket() self.addCleanup(sock1.close) sock1.setblocking(False) fd1 = sock1.fileno() sock2 = socket.socket() self.addCleanup(sock2.close) sock2.setblocking(False) fd2 = sock2.fileno() signal.set_wakeup_fd(fd1) self.assertEqual(signal.set_wakeup_fd(fd2), fd1) self.assertEqual(signal.set_wakeup_fd(-1), fd2) self.assertEqual(signal.set_wakeup_fd(-1), -1) # On Windows, files are always blocking and Windows does not provide a # function to test if a socket is in non-blocking mode. @unittest.skipIf(sys.platform == "win32", "tests specific to POSIX") @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_blocking(self): rfd, wfd = os.pipe() self.addCleanup(os.close, rfd) self.addCleanup(os.close, wfd) # fd must be non-blocking os.set_blocking(wfd, True) with self.assertRaises(ValueError) as cm: signal.set_wakeup_fd(wfd) self.assertEqual(str(cm.exception), "the fd %s must be in non-blocking mode" % wfd) # non-blocking is ok os.set_blocking(wfd, False) signal.set_wakeup_fd(wfd) signal.set_wakeup_fd(-1) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class WakeupSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def check_wakeup(self, test_body, *signals, ordered=True): # use a subprocess to have only one thread code = """if 1: import _testcapi import os import signal import struct signals = {!r} def handler(signum, frame): pass def check_signum(signals): data = os.read(read, len(signals)+1) raised = struct.unpack('%uB' % len(data), data) if not {!r}: raised = set(raised) signals = set(signals) if raised != signals: raise Exception("%r != %r" % (raised, signals)) {} signal.signal(signal.SIGALRM, handler) read, write = os.pipe() os.set_blocking(write, False) signal.set_wakeup_fd(write) test() check_signum(signals) os.close(read) os.close(write) """.format(tuple(map(int, signals)), ordered, test_body) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_wakeup_write_error(self): # Issue #16105: write() errors in the C signal handler should not # pass silently. # Use a subprocess to have only one thread. 
code = """if 1: import _testcapi import errno import os import signal import sys from test.support import captured_stderr def handler(signum, frame): 1/0 signal.signal(signal.SIGALRM, handler) r, w = os.pipe() os.set_blocking(r, False) # Set wakeup_fd a read-only file descriptor to trigger the error signal.set_wakeup_fd(r) try: with captured_stderr() as err: signal.raise_signal(signal.SIGALRM) except ZeroDivisionError: # An ignored exception should have been printed out on stderr err = err.getvalue() if ('Exception ignored when trying to write to the signal wakeup fd' not in err): raise AssertionError(err) if ('OSError: [Errno %d]' % errno.EBADF) not in err: raise AssertionError(err) else: raise AssertionError("ZeroDivisionError not raised") os.close(r) os.close(w) """ r, w = os.pipe() try: os.write(r, b'x') except OSError: pass else: self.skipTest("OS doesn't report write() error on the read end of a pipe") finally: os.close(r) os.close(w) assert_python_ok('-c', code) def test_wakeup_fd_early(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) # We attempt to get a signal during the sleep, # before select is called try: select.select([], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") before_time = time.monotonic() select.select([read], [], [], TIMEOUT_FULL) after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_wakeup_fd_during(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) before_time = time.monotonic() # We attempt to get a signal during the select call try: select.select([read], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_signum(self): self.check_wakeup("""def test(): signal.signal(signal.SIGUSR1, handler) signal.raise_signal(signal.SIGUSR1) signal.raise_signal(signal.SIGALRM) """, signal.SIGUSR1, signal.SIGALRM) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pending(self): self.check_wakeup("""def test(): signum1 = signal.SIGUSR1 signum2 = signal.SIGUSR2 signal.signal(signum1, handler) signal.signal(signum2, handler) signal.pthread_sigmask(signal.SIG_BLOCK, (signum1, signum2)) signal.raise_signal(signum1) signal.raise_signal(signum2) # Unblocking the 2 signals calls the C signal handler twice signal.pthread_sigmask(signal.SIG_UNBLOCK, (signum1, signum2)) """, signal.SIGUSR1, signal.SIGUSR2, ordered=False) @unittest.skipUnless(hasattr(socket, 'socketpair'), 'need socket.socketpair') class WakeupSocketSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_socket(self): # use a subprocess to have only one thread code = """if 1: import signal import socket import struct import _testcapi signum = signal.SIGINT signals = (signum,) def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() 
write.setblocking(False) signal.set_wakeup_fd(write.fileno()) signal.raise_signal(signum) data = read.recv(1) if not data: raise Exception("no signum written") raised = struct.unpack('B', data) if raised != signals: raise Exception("%r != %r" % (raised, signals)) read.close() write.close() """ assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_send_error(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() read.setblocking(False) write.setblocking(False) signal.set_wakeup_fd(write.fileno()) # Close sockets: send() will fail read.close() write.close() with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if ('Exception ignored when trying to {action} to the signal wakeup fd' not in err): raise AssertionError(err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_warn_on_full_buffer(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT # This handler will be called, but we intentionally won't read from # the wakeup fd. def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() # Fill the socketpair buffer if sys.platform == 'win32': # bpo-34130: On Windows, sometimes non-blocking send fails to fill # the full socketpair buffer, so use a timeout of 50 ms instead. write.settimeout(0.050) else: write.setblocking(False) written = 0 if sys.platform == "vxworks": CHUNK_SIZES = (1,) else: # Start with large chunk size to reduce the # number of send needed to fill the buffer. 
CHUNK_SIZES = (2 ** 16, 2 ** 8, 1) for chunk_size in CHUNK_SIZES: chunk = b"x" * chunk_size try: while True: write.send(chunk) written += chunk_size except (BlockingIOError, TimeoutError): pass print(f"%s bytes written into the socketpair" % written, flush=True) write.setblocking(False) try: write.send(b"x") except BlockingIOError: # The socketpair buffer seems full pass else: raise AssertionError("%s bytes failed to fill the socketpair " "buffer" % written) # By default, we get a warning when a signal arrives msg = ('Exception ignored when trying to {action} ' 'to the signal wakeup fd') signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("first set_wakeup_fd() test failed, " "stderr: %r" % err) # And also if warn_on_full_buffer=True signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=True) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("set_wakeup_fd(warn_on_full_buffer=True) " "test failed, stderr: %r" % err) # But not if warn_on_full_buffer=False signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=False) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if err != "": raise AssertionError("set_wakeup_fd(warn_on_full_buffer=False) " "test failed, stderr: %r" % err) # And then check the default again, to make sure warn_on_full_buffer # settings don't leak across calls. signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("second set_wakeup_fd() test failed, " "stderr: %r" % err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'siginterrupt'), "needs signal.siginterrupt()") @support.requires_subprocess() @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") class SiginterruptTest(unittest.TestCase): def readpipe_interrupted(self, interrupt, timeout=support.SHORT_TIMEOUT): """Perform a read during which a signal will arrive. Return True if the read is interrupted by the signal and raises an exception. Return False if it returns normally. 
""" # use a subprocess to have only one thread, to have a timeout on the # blocking read and to not touch signal handling in this process code = """if 1: import errno import os import signal import sys interrupt = %r r, w = os.pipe() def handler(signum, frame): 1 / 0 signal.signal(signal.SIGALRM, handler) if interrupt is not None: signal.siginterrupt(signal.SIGALRM, interrupt) print("ready") sys.stdout.flush() # run the test twice try: for loop in range(2): # send a SIGALRM in a second (during the read) signal.alarm(1) try: # blocking call: read from a pipe without data os.read(r, 1) except ZeroDivisionError: pass else: sys.exit(2) sys.exit(3) finally: os.close(r) os.close(w) """ % (interrupt,) with spawn_python('-c', code) as process: try: # wait until the child process is loaded and has started first_line = process.stdout.readline() stdout, stderr = process.communicate(timeout=timeout) except subprocess.TimeoutExpired: process.kill() return False else: stdout = first_line + stdout exitcode = process.wait() if exitcode not in (2, 3): raise Exception("Child error (exit code %s): %r" % (exitcode, stdout)) return (exitcode == 3) def test_without_siginterrupt(self): # If a signal handler is installed and siginterrupt is not called # at all, when that signal arrives, it interrupts a syscall that's in # progress. interrupted = self.readpipe_interrupted(None) self.assertTrue(interrupted) def test_siginterrupt_on(self): # If a signal handler is installed and siginterrupt is called with # a true value for the second argument, when that signal arrives, it # interrupts a syscall that's in progress. interrupted = self.readpipe_interrupted(True) self.assertTrue(interrupted) @support.requires_resource('walltime') def test_siginterrupt_off(self): # If a signal handler is installed and siginterrupt is called with # a false value for the second argument, when that signal arrives, it # does not interrupt a syscall that's in progress. interrupted = self.readpipe_interrupted(False, timeout=2) self.assertFalse(interrupted) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'getitimer') and hasattr(signal, 'setitimer'), "needs signal.getitimer() and signal.setitimer()") class ItimerTest(unittest.TestCase): def setUp(self): self.hndl_called = False self.hndl_count = 0 self.itimer = None self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm) def tearDown(self): signal.signal(signal.SIGALRM, self.old_alarm) if self.itimer is not None: # test_itimer_exc doesn't change this attr # just ensure that itimer is stopped signal.setitimer(self.itimer, 0) def sig_alrm(self, *args): self.hndl_called = True def sig_vtalrm(self, *args): self.hndl_called = True if self.hndl_count > 3: # it shouldn't be here, because it should have been disabled. raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " "timer.") elif self.hndl_count == 3: # disable ITIMER_VIRTUAL, this function shouldn't be called anymore signal.setitimer(signal.ITIMER_VIRTUAL, 0) self.hndl_count += 1 def sig_prof(self, *args): self.hndl_called = True signal.setitimer(signal.ITIMER_PROF, 0) def test_itimer_exc(self): # XXX I'm assuming -1 is an invalid itimer, but maybe some platform # defines it ? self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) # Negative times are treated as zero on some platforms. 
        if 0:
            self.assertRaises(signal.ItimerError,
                              signal.setitimer, signal.ITIMER_REAL, -1)

    def test_itimer_real(self):
        self.itimer = signal.ITIMER_REAL
        signal.setitimer(self.itimer, 1.0)
        signal.pause()
        self.assertEqual(self.hndl_called, True)

    # Issue 3864, unknown if this affects earlier versions of freebsd also
    @unittest.skipIf(sys.platform in ('netbsd5',),
        'itimer not reliable (does not mix well with threading) on some BSDs.')
    def test_itimer_virtual(self):
        self.itimer = signal.ITIMER_VIRTUAL
        signal.signal(signal.SIGVTALRM, self.sig_vtalrm)
        signal.setitimer(self.itimer, 0.3, 0.2)

        for _ in support.busy_retry(support.LONG_TIMEOUT):
            # use up some virtual time by doing real work
            _ = pow(12345, 67890, 10000019)
            if signal.getitimer(self.itimer) == (0.0, 0.0):
                # sig_vtalrm handler stopped this itimer
                break

        # virtual itimer should be (0.0, 0.0) now
        self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0))
        # and the handler should have been called
        self.assertEqual(self.hndl_called, True)

    def test_itimer_prof(self):
        self.itimer = signal.ITIMER_PROF
        signal.signal(signal.SIGPROF, self.sig_prof)
        signal.setitimer(self.itimer, 0.2, 0.2)

        for _ in support.busy_retry(support.LONG_TIMEOUT):
            # do some work
            _ = pow(12345, 67890, 10000019)
            if signal.getitimer(self.itimer) == (0.0, 0.0):
                # sig_prof handler stopped this itimer
                break

        # profiling itimer should be (0.0, 0.0) now
        self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0))
        # and the handler should have been called
        self.assertEqual(self.hndl_called, True)

    def test_setitimer_tiny(self):
        # bpo-30807: C setitimer() takes a microsecond-resolution interval.
        # Check that float -> timeval conversion doesn't round
        # the interval down to zero, which would disable the timer.
        self.itimer = signal.ITIMER_REAL
        signal.setitimer(self.itimer, 1e-6)
        time.sleep(1)
        self.assertEqual(self.hndl_called, True)


class PendingSignalsTests(unittest.TestCase):
    """
    Test pthread_sigmask(), pthread_kill(), sigpending() and sigwait()
    functions.
""" @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending_empty(self): self.assertEqual(signal.sigpending(), set()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending(self): code = """if 1: import os import signal def handler(signum, frame): 1/0 signum = signal.SIGUSR1 signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) os.kill(os.getpid(), signum) pending = signal.sigpending() for sig in pending: assert isinstance(sig, signal.Signals), repr(pending) if pending != {signum}: raise Exception('%s != {%s}' % (pending, signum)) try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill(self): code = """if 1: import signal import threading import sys signum = signal.SIGUSR1 def handler(signum, frame): 1/0 signal.signal(signum, handler) tid = threading.get_ident() try: signal.pthread_kill(tid, signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def wait_helper(self, blocked, test): """ test: body of the "def test(signum):" function. blocked: number of the blocked signal """ code = '''if 1: import signal import sys from signal import Signals def handler(signum, frame): 1/0 %s blocked = %s signum = signal.SIGALRM # child: block and wait the signal try: signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [blocked]) # Do the tests test(signum) # The handler must not be called on unblock try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [blocked]) except ZeroDivisionError: print("the signal handler has been called", file=sys.stderr) sys.exit(1) except BaseException as err: print("error: {}".format(err), file=sys.stderr) sys.stderr.flush() sys.exit(1) ''' % (test.strip(), blocked) # sig*wait* must be called with the signal blocked: since the current # process might have several threads running, use a subprocess to have # a single thread. 
assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') def test_sigwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) received = signal.sigwait([signum]) assert isinstance(received, signal.Signals), received if received != signum: raise Exception('received %s, not %s' % (received, signum)) ''') @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'), 'need signal.sigwaitinfo()') def test_sigwaitinfo(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigwaitinfo([signum]) if info.si_signo != signum: raise Exception("info.si_signo != %s" % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigtimedwait([signum], 10.1000) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_poll(self): # check that polling with sigtimedwait works self.wait_helper(signal.SIGALRM, ''' def test(signum): import os os.kill(os.getpid(), signum) info = signal.sigtimedwait([signum], 0) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_timeout(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): received = signal.sigtimedwait([signum], 1.0) if received is not None: raise Exception("received=%r" % (received,)) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_negative_timeout(self): signum = signal.SIGALRM self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_sigwait_thread(self): # Check that calling sigwait() from a thread doesn't suspend the whole # process. A new interpreter is spawned to avoid problems when mixing # threads and fork(): only async-safe functions are allowed between # fork() and exec(). 
assert_python_ok("-c", """if True: import os, threading, sys, time, signal # the default handler terminates the process signum = signal.SIGUSR1 def kill_later(): # wait until the main thread is waiting in sigwait() time.sleep(1) os.kill(os.getpid(), signum) # the signal must be blocked by all the threads signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) killer = threading.Thread(target=kill_later) killer.start() received = signal.sigwait([signum]) if received != signum: print("sigwait() received %s, not %s" % (received, signum), file=sys.stderr) sys.exit(1) killer.join() # unblock the signal, which should have been cleared by sigwait() signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) """) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_arguments(self): self.assertRaises(TypeError, signal.pthread_sigmask) self.assertRaises(TypeError, signal.pthread_sigmask, 1) self.assertRaises(TypeError, signal.pthread_sigmask, 1, 2, 3) self.assertRaises(OSError, signal.pthread_sigmask, 1700, []) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [signal.NSIG]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [0]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [1<<1000]) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_valid_signals(self): s = signal.pthread_sigmask(signal.SIG_BLOCK, signal.valid_signals()) self.addCleanup(signal.pthread_sigmask, signal.SIG_SETMASK, s) # Get current blocked set s = signal.pthread_sigmask(signal.SIG_UNBLOCK, signal.valid_signals()) self.assertLessEqual(s, signal.valid_signals()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_pthread_sigmask(self): code = """if 1: import signal import os; import threading def handler(signum, frame): 1/0 def kill(signum): os.kill(os.getpid(), signum) def check_mask(mask): for sig in mask: assert isinstance(sig, signal.Signals), repr(sig) def read_sigmask(): sigmask = signal.pthread_sigmask(signal.SIG_BLOCK, []) check_mask(sigmask) return sigmask signum = signal.SIGUSR1 # Install our signal handler old_handler = signal.signal(signum, handler) # Unblock SIGUSR1 (and copy the old mask) to test our signal handler old_mask = signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) check_mask(old_mask) try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Block and then raise SIGUSR1. 
The signal is blocked: the signal # handler is not called, and the signal is now pending mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) check_mask(mask) kill(signum) # Check the new mask blocked = read_sigmask() check_mask(blocked) if signum not in blocked: raise Exception("%s not in %s" % (signum, blocked)) if old_mask ^ blocked != {signum}: raise Exception("%s ^ %s != {%s}" % (old_mask, blocked, signum)) # Unblock SIGUSR1 try: # unblock the pending signal calls immediately the signal handler signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Check the new mask unblocked = read_sigmask() if signum in unblocked: raise Exception("%s in %s" % (signum, unblocked)) if blocked ^ unblocked != {signum}: raise Exception("%s ^ %s != {%s}" % (blocked, unblocked, signum)) if old_mask != unblocked: raise Exception("%s != %s" % (old_mask, unblocked)) """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill_main_thread(self): # Test that a signal can be sent to the main thread with pthread_kill() # before any other thread has been created (see issue #12392). code = """if True: import threading import signal import sys def handler(signum, frame): sys.exit(3) signal.signal(signal.SIGUSR1, handler) signal.pthread_kill(threading.get_ident(), signal.SIGUSR1) sys.exit(2) """ with spawn_python('-c', code) as process: stdout, stderr = process.communicate() exitcode = process.wait() if exitcode != 3: raise Exception("Child error (exit code %s): %s" % (exitcode, stdout)) class StressTest(unittest.TestCase): """ Stress signal delivery, especially when a signal arrives in the middle of recomputing the signal state or executing previously tripped signal handlers. """ def setsig(self, signum, handler): old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) def measure_itimer_resolution(self): N = 20 times = [] def handler(signum=None, frame=None): if len(times) < N: times.append(time.perf_counter()) # 1 µs is the smallest possible timer interval, # we want to measure what the concrete duration # will be on this platform signal.setitimer(signal.ITIMER_REAL, 1e-6) self.addCleanup(signal.setitimer, signal.ITIMER_REAL, 0) self.setsig(signal.SIGALRM, handler) handler() while len(times) < N: time.sleep(1e-3) durations = [times[i+1] - times[i] for i in range(len(times) - 1)] med = statistics.median(durations) if support.verbose: print("detected median itimer() resolution: %.6f s." % (med,)) return med def decide_itimer_count(self): # Some systems have poor setitimer() resolution (for example # measured around 20 ms. on FreeBSD 9), so decide on a reasonable # number of sequential timers based on that. reso = self.measure_itimer_resolution() if reso <= 1e-4: return 10000 elif reso <= 1e-2: return 100 else: self.skipTest("detected itimer resolution (%.3f s.) too high " "(> 10 ms.) on this platform (or system too busy)" % (reso,)) @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_dependent(self): """ This test uses dependent signal handlers. """ N = self.decide_itimer_count() sigs = [] def first_handler(signum, frame): # 1e-6 is the minimum non-zero value for `setitimer()`. 
# Choose a random delay so as to improve chances of # triggering a race condition. Ideally the signal is received # when inside critical signal-handling routines such as # Py_MakePendingCalls(). signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) def second_handler(signum=None, frame=None): sigs.append(signum) # Here on Linux, SIGPROF > SIGALRM > SIGUSR1. By using both # ascending and descending sequences (SIGUSR1 then SIGALRM, # SIGPROF then SIGALRM), we maximize chances of hitting a bug. self.setsig(signal.SIGPROF, first_handler) self.setsig(signal.SIGUSR1, first_handler) self.setsig(signal.SIGALRM, second_handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: os.kill(os.getpid(), signal.SIGPROF) expected_sigs += 1 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 1 while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_simultaneous(self): """ This test uses simultaneous signal handlers. """ N = self.decide_itimer_count() sigs = [] def handler(signum, frame): sigs.append(signum) self.setsig(signal.SIGUSR1, handler) self.setsig(signal.SIGALRM, handler) # for ITIMER_REAL expected_sigs = 0 while expected_sigs < N: # Hopefully the SIGALRM will be received somewhere during # initial processing of SIGUSR1. signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 2 # Wait for handlers to run to avoid signal coalescing for _ in support.sleeping_retry(support.SHORT_TIMEOUT): if len(sigs) >= expected_sigs: break # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipIf(sys.platform == "darwin", "crashes due to system bug (FB13453490)") @unittest.skipUnless(hasattr(signal, "SIGUSR1"), "test needs SIGUSR1") @threading_helper.requires_working_threading() def test_stress_modifying_handlers(self): # bpo-43406: race condition between trip_signal() and signal.signal signum = signal.SIGUSR1 num_sent_signals = 0 num_received_signals = 0 do_stop = False def custom_handler(signum, frame): nonlocal num_received_signals num_received_signals += 1 def set_interrupts(): nonlocal num_sent_signals while not do_stop: signal.raise_signal(signum) num_sent_signals += 1 def cycle_handlers(): while num_sent_signals < 100 or num_received_signals < 1: for i in range(20000): # Cycle between a Python-defined and a non-Python handler for handler in [custom_handler, signal.SIG_IGN]: signal.signal(signum, handler) old_handler = signal.signal(signum, custom_handler) self.addCleanup(signal.signal, signum, old_handler) t = threading.Thread(target=set_interrupts) try: ignored = False with support.catch_unraisable_exception() as cm: t.start() cycle_handlers() do_stop = True t.join() if cm.unraisable is not None: # An unraisable exception may be printed out when # a signal is ignored due to the aforementioned # race condition, check it. 
self.assertIsInstance(cm.unraisable.exc_value, OSError) self.assertIn( f"Signal {signum:d} ignored due to race condition", str(cm.unraisable.exc_value)) ignored = True # bpo-43406: Even if it is unlikely, it's technically possible that # all signals were ignored because of race conditions. if not ignored: # Sanity check that some signals were received, but not all self.assertGreater(num_received_signals, 0) self.assertLessEqual(num_received_signals, num_sent_signals) finally: do_stop = True t.join() class RaiseSignalTest(unittest.TestCase): def test_sigint(self): with self.assertRaises(KeyboardInterrupt): signal.raise_signal(signal.SIGINT) @unittest.skipIf(sys.platform != "win32", "Windows specific test") def test_invalid_argument(self): try: SIGHUP = 1 # not supported on win32 signal.raise_signal(SIGHUP) self.fail("OSError (Invalid argument) expected") except OSError as e: if e.errno == errno.EINVAL: pass else: raise def test_handler(self): is_ok = False def handler(a, b): nonlocal is_ok is_ok = True old_signal = signal.signal(signal.SIGINT, handler) self.addCleanup(signal.signal, signal.SIGINT, old_signal) signal.raise_signal(signal.SIGINT) self.assertTrue(is_ok) def test__thread_interrupt_main(self): # See https://github.com/python/cpython/issues/102397 code = """if 1: import _thread class Foo(): def __del__(self): _thread.interrupt_main() x = Foo() """ rc, out, err = assert_python_ok('-c', code) self.assertIn(b'OSError: Signal 2 ignored due to race condition', err) class PidfdSignalTest(unittest.TestCase): @unittest.skipUnless( hasattr(signal, "pidfd_send_signal"), "pidfd support not built in", ) def test_pidfd_send_signal(self): with self.assertRaises(OSError) as cm: signal.pidfd_send_signal(0, signal.SIGINT) if cm.exception.errno == errno.ENOSYS: self.skipTest("kernel does not support pidfds") elif cm.exception.errno == errno.EPERM: self.skipTest("Not enough privileges to use pidfs") self.assertEqual(cm.exception.errno, errno.EBADF) my_pidfd = os.open(f'/proc/{os.getpid()}', os.O_DIRECTORY) self.addCleanup(os.close, my_pidfd) with self.assertRaisesRegex(TypeError, "^siginfo must be None$"): signal.pidfd_send_signal(my_pidfd, signal.SIGINT, object(), 0) with self.assertRaises(KeyboardInterrupt): signal.pidfd_send_signal(my_pidfd, signal.SIGINT) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_socket.py000066400000000000000000010052541471441230600212340ustar00rootroot00000000000000import unittest from test import support from test.support import os_helper from test.support import socket_helper from test.support import threading_helper import _thread as thread import array import contextlib import errno import gc import io import itertools import math import os import pickle import platform import queue import random import re import select import signal import socket import string import struct import sys import tempfile import threading import time import traceback from weakref import proxy try: import multiprocessing except ImportError: multiprocessing = False try: import fcntl except ImportError: fcntl = None support.requires_working_socket(module=True) HOST = socket_helper.HOST # test unicode string and carriage return MSG = 'Michael Gilfix was here\u1234\r\n'.encode('utf-8') VMADDR_CID_LOCAL = 1 VSOCKPORT = 1234 AIX = platform.system() == "AIX" WSL = "microsoft-standard-WSL" in platform.release() try: import _socket except ImportError: _socket = None def get_cid(): if fcntl is None: return None if not 
hasattr(socket, 'IOCTL_VM_SOCKETS_GET_LOCAL_CID'): return None try: with open("/dev/vsock", "rb") as f: r = fcntl.ioctl(f, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, " ") except OSError: return None else: return struct.unpack("I", r)[0] def _have_socket_can(): """Check whether CAN sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_isotp(): """Check whether CAN ISOTP sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_j1939(): """Check whether CAN J1939 sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_rds(): """Check whether RDS sockets are supported on this host.""" try: s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_alg(): """Check whether AF_ALG sockets are supported on this host.""" try: s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_qipcrtr(): """Check whether AF_QIPCRTR sockets are supported on this host.""" try: s = socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_vsock(): """Check whether AF_VSOCK sockets are supported on this host.""" cid = get_cid() return (cid is not None) def _have_socket_bluetooth(): """Check whether AF_BLUETOOTH sockets are supported on this host.""" try: # RFCOMM is supported by all platforms with bluetooth support. Windows # does not support omitting the protocol. 
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_hyperv(): """Check whether AF_HYPERV sockets are supported on this host.""" try: s = socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) except (AttributeError, OSError): return False else: s.close() return True @contextlib.contextmanager def socket_setdefaulttimeout(timeout): old_timeout = socket.getdefaulttimeout() try: socket.setdefaulttimeout(timeout) yield finally: socket.setdefaulttimeout(old_timeout) HAVE_SOCKET_CAN = _have_socket_can() HAVE_SOCKET_CAN_ISOTP = _have_socket_can_isotp() HAVE_SOCKET_CAN_J1939 = _have_socket_can_j1939() HAVE_SOCKET_RDS = _have_socket_rds() HAVE_SOCKET_ALG = _have_socket_alg() HAVE_SOCKET_QIPCRTR = _have_socket_qipcrtr() HAVE_SOCKET_VSOCK = _have_socket_vsock() HAVE_SOCKET_UDPLITE = hasattr(socket, "IPPROTO_UDPLITE") HAVE_SOCKET_BLUETOOTH = _have_socket_bluetooth() HAVE_SOCKET_HYPERV = _have_socket_hyperv() # Size in bytes of the int type SIZEOF_INT = array.array("i").itemsize class SocketTCPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None class SocketUDPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.port = socket_helper.bind_port(self.serv) def tearDown(self): self.serv.close() self.serv = None class SocketUDPLITETest(SocketUDPTest): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) self.port = socket_helper.bind_port(self.serv) class SocketCANTest(unittest.TestCase): """To be able to run this test, a `vcan0` CAN interface can be created with the following commands: # modprobe vcan # ip link add dev vcan0 type vcan # ip link set up vcan0 """ interface = 'vcan0' bufsize = 128 """The CAN frame structure is defined in : struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ __u8 can_dlc; /* data length code: 0 .. 8 */ __u8 data[8] __attribute__((aligned(8))); }; """ can_frame_fmt = "=IB3x8s" can_frame_size = struct.calcsize(can_frame_fmt) """The Broadcast Management Command frame structure is defined in : struct bcm_msg_head { __u32 opcode; __u32 flags; __u32 count; struct timeval ival1, ival2; canid_t can_id; __u32 nframes; struct can_frame frames[0]; } `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see `struct can_frame` definition). Must use native not standard types for packing. """ bcm_cmd_msg_fmt = "@3I4l2I" bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8) def setUp(self): self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) self.addCleanup(self.s.close) try: self.s.bind((self.interface,)) except OSError: self.skipTest('network interface `%s` does not exist' % self.interface) class SocketRDSTest(unittest.TestCase): """To be able to run this test, the `rds` kernel module must be loaded: # modprobe rds """ bufsize = 8192 def setUp(self): self.serv = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) self.addCleanup(self.serv.close) try: self.port = socket_helper.bind_port(self.serv) except OSError: self.skipTest('unable to bind RDS socket') class ThreadableTest: """Threadable Test class The ThreadableTest class makes it easy to create a threaded client/server pair from an existing unit test. 
To create a new threaded class from an existing unit test, use multiple inheritance: class NewClass (OldClass, ThreadableTest): pass This class defines two new fixture functions with obvious purposes for overriding: clientSetUp () clientTearDown () Any new test functions within the class must then define tests in pairs, where the test name is preceded with a '_' to indicate the client portion of the test. Ex: def testFoo(self): # Server portion def _testFoo(self): # Client portion Any exceptions raised by the clients during their tests are caught and transferred to the main thread to alert the testing framework. Note, the server setup function cannot call any blocking functions that rely on the client thread during setup, unless serverExplicitReady() is called just before the blocking call (such as in setting up a client/server connection and performing the accept() in setUp(). """ def __init__(self): # Swap the true setup function self.__setUp = self.setUp self.setUp = self._setUp def serverExplicitReady(self): """This method allows the server to explicitly indicate that it wants the client thread to proceed. This is useful if the server is about to execute a blocking routine that is dependent upon the client thread during its setup routine.""" self.server_ready.set() def _setUp(self): self.enterContext(threading_helper.wait_threads_exit()) self.server_ready = threading.Event() self.client_ready = threading.Event() self.done = threading.Event() self.queue = queue.Queue(1) self.server_crashed = False def raise_queued_exception(): if self.queue.qsize(): raise self.queue.get() self.addCleanup(raise_queued_exception) # Do some munging to start the client test. methodname = self.id() i = methodname.rfind('.') methodname = methodname[i+1:] test_method = getattr(self, '_' + methodname) self.client_thread = thread.start_new_thread( self.clientRun, (test_method,)) try: self.__setUp() except: self.server_crashed = True raise finally: self.server_ready.set() self.client_ready.wait() self.addCleanup(self.done.wait) def clientRun(self, test_func): self.server_ready.wait() try: self.clientSetUp() except BaseException as e: self.queue.put(e) self.clientTearDown() return finally: self.client_ready.set() if self.server_crashed: self.clientTearDown() return if not hasattr(test_func, '__call__'): raise TypeError("test_func must be a callable function") try: test_func() except BaseException as e: self.queue.put(e) finally: self.clientTearDown() def clientSetUp(self): raise NotImplementedError("clientSetUp must be implemented.") def clientTearDown(self): self.done.set() thread.exit() class ThreadedTCPSocketTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedUDPSocketTest(SocketUDPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class ThreadedUDPLITESocketTest(SocketUDPLITETest, ThreadableTest): def __init__(self, methodName='runTest'): 
SocketUDPLITETest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedCANSocketTest(SocketCANTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketCANTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) try: self.cli.bind((self.interface,)) except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedRDSSocketTest(SocketRDSTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketRDSTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) try: # RDS sockets must be bound explicitly to send or receive data self.cli.bind((HOST, 0)) self.cli_addr = self.cli.getsockname() except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipIf(WSL, 'VSOCK does not work on Microsoft WSL') @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.serv.close) self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT)) self.serv.listen() self.serverExplicitReady() self.serv.settimeout(support.LOOPBACK_TIMEOUT) self.conn, self.connaddr = self.serv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): time.sleep(0.1) self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.cli.close) cid = get_cid() if cid in (socket.VMADDR_CID_HOST, socket.VMADDR_CID_ANY): # gh-119461: Use the local communication address (loopback) cid = VMADDR_CID_LOCAL self.cli.connect((cid, VSOCKPORT)) def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) def _testStream(self): self.cli.send(MSG) self.cli.close() class SocketConnectedTest(ThreadedTCPSocketTest): """Socket tests for client-server connection. self.cli_conn is a client socket connected to the server. The setUp() method guarantees that it is connected to the server. 
""" def __init__(self, methodName='runTest'): ThreadedTCPSocketTest.__init__(self, methodName=methodName) def setUp(self): ThreadedTCPSocketTest.setUp(self) # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None ThreadedTCPSocketTest.tearDown(self) def clientSetUp(self): ThreadedTCPSocketTest.clientSetUp(self) self.cli.connect((HOST, self.port)) self.serv_conn = self.cli def clientTearDown(self): self.serv_conn.close() self.serv_conn = None ThreadedTCPSocketTest.clientTearDown(self) class SocketPairTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) self.cli = None self.serv = None def socketpair(self): # To be overridden by some child classes. return socket.socketpair() def setUp(self): self.serv, self.cli = self.socketpair() def tearDown(self): if self.serv: self.serv.close() self.serv = None def clientSetUp(self): pass def clientTearDown(self): if self.cli: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) # The following classes are used by the sendmsg()/recvmsg() tests. # Combining, for instance, ConnectedStreamTestMixin and TCPTestBase # gives a drop-in replacement for SocketConnectedTest, but different # address families can be used, and the attributes serv_addr and # cli_addr will be set to the addresses of the endpoints. class SocketTestBase(unittest.TestCase): """A base class for socket tests. Subclasses must provide methods newSocket() to return a new socket and bindSock(sock) to bind it to an unused address. Creates a socket self.serv and sets self.serv_addr to its address. """ def setUp(self): self.serv = self.newSocket() self.addCleanup(self.close_server) self.bindServer() def close_server(self): self.serv.close() self.serv = None def bindServer(self): """Bind server socket and set self.serv_addr to its address.""" self.bindSock(self.serv) self.serv_addr = self.serv.getsockname() class SocketListeningTestMixin(SocketTestBase): """Mixin to listen on the server socket.""" def setUp(self): super().setUp() self.serv.listen() class ThreadedSocketTestMixin(SocketTestBase, ThreadableTest): """Mixin to add client socket and allow client/server tests. Client socket is self.cli and its address is self.cli_addr. See ThreadableTest for usage information. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = self.newClientSocket() self.bindClient() def newClientSocket(self): """Return a new socket for use as client.""" return self.newSocket() def bindClient(self): """Bind client socket and set self.cli_addr to its address.""" self.bindSock(self.cli) self.cli_addr = self.cli.getsockname() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ConnectedStreamTestMixin(SocketListeningTestMixin, ThreadedSocketTestMixin): """Mixin to allow client/server stream tests with connected client. Server's socket representing connection to client is self.cli_conn and client's connection to server is self.serv_conn. (Based on SocketConnectedTest.) 
""" def setUp(self): super().setUp() # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None super().tearDown() def clientSetUp(self): super().clientSetUp() self.cli.connect(self.serv_addr) self.serv_conn = self.cli def clientTearDown(self): try: self.serv_conn.close() self.serv_conn = None except AttributeError: pass super().clientTearDown() class UnixSocketTestBase(SocketTestBase): """Base class for Unix-domain socket tests.""" # This class is used for file descriptor passing tests, so we # create the sockets in a private directory so that other users # can't send anything that might be problematic for a privileged # user running the tests. def bindSock(self, sock): path = socket_helper.create_unix_domain_name() self.addCleanup(os_helper.unlink, path) socket_helper.bind_unix_socket(sock, path) class UnixStreamBase(UnixSocketTestBase): """Base class for Unix-domain SOCK_STREAM tests.""" def newSocket(self): return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) class InetTestBase(SocketTestBase): """Base class for IPv4 socket tests.""" host = HOST def setUp(self): super().setUp() self.port = self.serv_addr[1] def bindSock(self, sock): socket_helper.bind_port(sock, host=self.host) class TCPTestBase(InetTestBase): """Base class for TCP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) class UDPTestBase(InetTestBase): """Base class for UDP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM) class UDPLITETestBase(InetTestBase): """Base class for UDPLITE-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) class SCTPStreamBase(InetTestBase): """Base class for SCTP tests in one-to-one (SOCK_STREAM) mode.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP) class Inet6TestBase(InetTestBase): """Base class for IPv6 socket tests.""" host = socket_helper.HOSTv6 class UDP6TestBase(Inet6TestBase): """Base class for UDP-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) class UDPLITE6TestBase(Inet6TestBase): """Base class for UDPLITE-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) # Test-skipping decorators for use with ThreadableTest. def skipWithClientIf(condition, reason): """Skip decorated test if condition is true, add client_skip decorator. If the decorated object is not a class, sets its attribute "client_skip" to a decorator which will return an empty function if the test is to be skipped, or the original function if it is not. This can be used to avoid running the client part of a skipped test when using ThreadableTest. """ def client_pass(*args, **kwargs): pass def skipdec(obj): retval = unittest.skip(reason)(obj) if not isinstance(obj, type): retval.client_skip = lambda f: client_pass return retval def noskipdec(obj): if not (isinstance(obj, type) or hasattr(obj, "client_skip")): obj.client_skip = lambda f: f return obj return skipdec if condition else noskipdec def requireAttrs(obj, *attributes): """Skip decorated test if obj is missing any of the given attributes. Sets client_skip attribute as skipWithClientIf() does. 
""" missing = [name for name in attributes if not hasattr(obj, name)] return skipWithClientIf( missing, "don't have " + ", ".join(name for name in missing)) def requireSocket(*args): """Skip decorated test if a socket cannot be created with given arguments. When an argument is given as a string, will use the value of that attribute of the socket module, or skip the test if it doesn't exist. Sets client_skip attribute as skipWithClientIf() does. """ err = None missing = [obj for obj in args if isinstance(obj, str) and not hasattr(socket, obj)] if missing: err = "don't have " + ", ".join(name for name in missing) else: callargs = [getattr(socket, obj) if isinstance(obj, str) else obj for obj in args] try: s = socket.socket(*callargs) except OSError as e: # XXX: check errno? err = str(e) else: s.close() return skipWithClientIf( err is not None, "can't create socket({0}): {1}".format( ", ".join(str(o) for o in args), err)) ####################################################################### ## Begin Tests class GeneralModuleTests(unittest.TestCase): @unittest.skipUnless(_socket is not None, 'need _socket module') def test_socket_type(self): self.assertTrue(gc.is_tracked(_socket.socket)) with self.assertRaisesRegex(TypeError, "immutable"): _socket.socket.foo = 1 def test_SocketType_is_socketobject(self): import _socket self.assertTrue(socket.SocketType is _socket.socket) s = socket.socket() self.assertIsInstance(s, socket.SocketType) s.close() def test_repr(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with s: self.assertIn('fd=%i' % s.fileno(), repr(s)) self.assertIn('family=%s' % socket.AF_INET, repr(s)) self.assertIn('type=%s' % socket.SOCK_STREAM, repr(s)) self.assertIn('proto=0', repr(s)) self.assertNotIn('raddr', repr(s)) s.bind(('127.0.0.1', 0)) self.assertIn('laddr', repr(s)) self.assertIn(str(s.getsockname()), repr(s)) self.assertIn('[closed]', repr(s)) self.assertNotIn('laddr', repr(s)) @unittest.skipUnless(_socket is not None, 'need _socket module') def test_csocket_repr(self): s = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) try: expected = ('' % (s.fileno(), s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) finally: s.close() expected = ('' % (s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) def test_weakref(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: p = proxy(s) self.assertEqual(p.fileno(), s.fileno()) s = None support.gc_collect() # For PyPy or other GCs. try: p.fileno() except ReferenceError: pass else: self.fail('Socket proxy still exists') def testSocketError(self): # Testing socket module exceptions msg = "Error raising socket exception (%s)." with self.assertRaises(OSError, msg=msg % 'OSError'): raise OSError with self.assertRaises(OSError, msg=msg % 'socket.herror'): raise socket.herror with self.assertRaises(OSError, msg=msg % 'socket.gaierror'): raise socket.gaierror def testSendtoErrors(self): # Testing that sendto doesn't mask failures. See #10169. 
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind(('', 0)) sockname = s.getsockname() # 2 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None) self.assertIn('not NoneType',str(cm.exception)) # 3 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, None) self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 'bar', sockname) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None, None) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto(b'foo') self.assertIn('(1 given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, sockname, 4) self.assertIn('(4 given)', str(cm.exception)) def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET if socket.has_ipv6: socket.AF_INET6 socket.SOCK_STREAM socket.SOCK_DGRAM socket.SOCK_RAW socket.SOCK_RDM socket.SOCK_SEQPACKET socket.SOL_SOCKET socket.SO_REUSEADDR def testCrucialIpProtoConstants(self): socket.IPPROTO_TCP socket.IPPROTO_UDP if socket.has_ipv6: socket.IPPROTO_IPV6 @unittest.skipUnless(os.name == "nt", "Windows specific") def testWindowsSpecificConstants(self): socket.IPPROTO_ICLFXBM socket.IPPROTO_ST socket.IPPROTO_CBT socket.IPPROTO_IGP socket.IPPROTO_RDP socket.IPPROTO_PGM socket.IPPROTO_L2TP socket.IPPROTO_SCTP @unittest.skipIf(support.is_wasi, "WASI is missing these methods") def test_socket_methods(self): # socket methods that depend on a configure HAVE_ check. They should # be present on all platforms except WASI. names = [ "_accept", "bind", "connect", "connect_ex", "getpeername", "getsockname", "listen", "recvfrom", "recvfrom_into", "sendto", "setsockopt", "shutdown" ] for name in names: if not hasattr(socket.socket, name): self.fail(f"socket method {name} is missing") @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test3542SocketOptions(self): # Ref. 
issue #35569 and https://tools.ietf.org/html/rfc3542 opts = { 'IPV6_CHECKSUM', 'IPV6_DONTFRAG', 'IPV6_DSTOPTS', 'IPV6_HOPLIMIT', 'IPV6_HOPOPTS', 'IPV6_NEXTHOP', 'IPV6_PATHMTU', 'IPV6_PKTINFO', 'IPV6_RECVDSTOPTS', 'IPV6_RECVHOPLIMIT', 'IPV6_RECVHOPOPTS', 'IPV6_RECVPATHMTU', 'IPV6_RECVPKTINFO', 'IPV6_RECVRTHDR', 'IPV6_RECVTCLASS', 'IPV6_RTHDR', 'IPV6_RTHDRDSTOPTS', 'IPV6_RTHDR_TYPE_0', 'IPV6_TCLASS', 'IPV6_USE_MIN_MTU', } for opt in opts: self.assertTrue( hasattr(socket, opt), f"Missing RFC3542 socket option '{opt}'" ) def testHostnameRes(self): # Testing hostname resolution mechanisms hostname = socket.gethostname() try: ip = socket.gethostbyname(hostname) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertTrue(ip.find('.') >= 0, "Error resolving host to ip.") try: hname, aliases, ipaddrs = socket.gethostbyaddr(ip) except OSError: # Probably a similar problem as above; skip this test self.skipTest('name lookup failure') all_host_names = [hostname, hname] + aliases fqhn = socket.getfqdn(ip) if not fqhn in all_host_names: self.fail("Error testing host resolution mechanisms. (fqdn: %s, all: %s)" % (fqhn, repr(all_host_names))) def test_host_resolution(self): for addr in [socket_helper.HOSTv4, '10.0.0.1', '255.255.255.255']: self.assertEqual(socket.gethostbyname(addr), addr) # we don't test socket_helper.HOSTv6 because there's a chance it doesn't have # a matching name entry (e.g. 'ip6-localhost') for host in [socket_helper.HOSTv4]: self.assertIn(host, socket.gethostbyaddr(host)[2]) def test_host_resolution_bad_address(self): # These are all malformed IP addresses and expected not to resolve to # any result. But some ISPs, e.g. AWS and AT&T, may successfully # resolve these IPs. In particular, AT&T's DNS Error Assist service # will break this test. See https://bugs.python.org/issue42092 for a # workaround. explanation = ( "resolving an invalid IP address did not raise OSError; " "can be caused by a broken DNS server" ) for addr in ['0.1.1.~1', '1+.1.1.1', '::1q', '::1::2', '1:1:1:1:1:1:1:1:1']: with self.assertRaises(OSError, msg=addr): socket.gethostbyname(addr) with self.assertRaises(OSError, msg=explanation): socket.gethostbyaddr(addr) @unittest.skipUnless(hasattr(socket, 'sethostname'), "test needs socket.sethostname()") @unittest.skipUnless(hasattr(socket, 'gethostname'), "test needs socket.gethostname()") def test_sethostname(self): oldhn = socket.gethostname() try: socket.sethostname('new') except OSError as e: if e.errno == errno.EPERM: self.skipTest("test should be run as root") else: raise try: # running test as root! 
self.assertEqual(socket.gethostname(), 'new') # Should work with bytes objects too socket.sethostname(b'bar') self.assertEqual(socket.gethostname(), 'bar') finally: socket.sethostname(oldhn) @unittest.skipUnless(hasattr(socket, 'if_nameindex'), 'socket.if_nameindex() not available.') def testInterfaceNameIndex(self): interfaces = socket.if_nameindex() for index, name in interfaces: self.assertIsInstance(index, int) self.assertIsInstance(name, str) # interface indices are non-zero integers self.assertGreater(index, 0) _index = socket.if_nametoindex(name) self.assertIsInstance(_index, int) self.assertEqual(index, _index) _name = socket.if_indextoname(index) self.assertIsInstance(_name, str) self.assertEqual(name, _name) @unittest.skipUnless(hasattr(socket, 'if_indextoname'), 'socket.if_indextoname() not available.') def testInvalidInterfaceIndexToName(self): self.assertRaises(OSError, socket.if_indextoname, 0) self.assertRaises(OverflowError, socket.if_indextoname, -1) self.assertRaises(OverflowError, socket.if_indextoname, 2**1000) self.assertRaises(TypeError, socket.if_indextoname, '_DEADBEEF') if hasattr(socket, 'if_nameindex'): indices = dict(socket.if_nameindex()) for index in indices: index2 = index + 2**32 if index2 not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index2) for index in 2**32-1, 2**64-1: if index not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index) @unittest.skipUnless(hasattr(socket, 'if_nametoindex'), 'socket.if_nametoindex() not available.') def testInvalidInterfaceNameToIndex(self): self.assertRaises(TypeError, socket.if_nametoindex, 0) self.assertRaises(OSError, socket.if_nametoindex, '_DEADBEEF') @unittest.skipUnless(hasattr(sys, 'getrefcount'), 'test needs sys.getrefcount()') def testRefCountGetNameInfo(self): # Testing reference count for getnameinfo try: # On some versions, this loses a reference orig = sys.getrefcount(__name__) socket.getnameinfo(__name__,0) except TypeError: if sys.getrefcount(__name__) != orig: self.fail("socket.getnameinfo loses a reference") def testInterpreterCrash(self): # Making sure getnameinfo doesn't crash the interpreter try: # On some versions, this crashes the interpreter. socket.getnameinfo(('x', 0, 0, 0), 0) except OSError: pass def testNtoH(self): # This just checks that htons etc. are their own inverse, # when looking at the lower 16 or 32 bits. sizes = {socket.htonl: 32, socket.ntohl: 32, socket.htons: 16, socket.ntohs: 16} for func, size in sizes.items(): mask = (1<= 23): port2 = socket.getservbyname(service) eq(port, port2) # Try udp, but don't barf if it doesn't exist try: udpport = socket.getservbyname(service, 'udp') except OSError: udpport = None else: eq(udpport, port) # Now make sure the lookup by port returns the same service name # Issue #26936: Android getservbyport() is broken. if not support.is_android: eq(socket.getservbyport(port2), service) eq(socket.getservbyport(port, 'tcp'), service) if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. 
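# --- Illustrative sketch, not part of the original test file. ---
# The name <-> port round trip that testGetServBy() checks, reduced to a
# single service. The helper name is hypothetical, and it assumes the
# platform's services database lists "domain" (port 53/tcp) under that
# canonical name.
def _sketch_getservby_roundtrip():
    import socket

    try:
        port = socket.getservbyname('domain', 'tcp')  # usually 53
    except OSError:
        return  # no usable services database on this platform
    assert socket.getservbyport(port, 'tcp') == 'domain'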
self.assertRaises(OverflowError, socket.getservbyport, -1) self.assertRaises(OverflowError, socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout # The default timeout should initially be None self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as s: self.assertEqual(s.gettimeout(), None) # Set the default timeout to 10, and see if it propagates with socket_setdefaulttimeout(10): self.assertEqual(socket.getdefaulttimeout(), 10) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), 10) # Reset the default timeout to None, and see if it propagates socket.setdefaulttimeout(None) self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), None) # Check that setting it to an invalid value raises ValueError self.assertRaises(ValueError, socket.setdefaulttimeout, -1) # Check that setting it to an invalid type raises TypeError self.assertRaises(TypeError, socket.setdefaulttimeout, "spam") @unittest.skipUnless(hasattr(socket, 'inet_aton'), 'test needs socket.inet_aton()') def testIPv4_inet_aton_fourbytes(self): # Test that issue1008086 and issue767150 are fixed. # It must return 4 bytes. self.assertEqual(b'\x00'*4, socket.inet_aton('0.0.0.0')) self.assertEqual(b'\xff'*4, socket.inet_aton('255.255.255.255')) @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv4toString(self): from socket import inet_aton as f, inet_pton, AF_INET g = lambda a: inet_pton(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual(b'\x00\x00\x00\x00', f('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', f('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', f('170.170.170.170')) self.assertEqual(b'\x01\x02\x03\x04', f('1.2.3.4')) self.assertEqual(b'\xff\xff\xff\xff', f('255.255.255.255')) # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid(f, '0.0.0.') assertInvalid(f, '300.0.0.0') assertInvalid(f, 'a.0.0.0') assertInvalid(f, '1.2.3.4.5') assertInvalid(f, '::1') self.assertEqual(b'\x00\x00\x00\x00', g('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', g('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', g('170.170.170.170')) self.assertEqual(b'\xff\xff\xff\xff', g('255.255.255.255')) assertInvalid(g, '0.0.0.') assertInvalid(g, '300.0.0.0') assertInvalid(g, 'a.0.0.0') assertInvalid(g, '1.2.3.4.5') assertInvalid(g, '::1') @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv6toString(self): try: from socket import inet_pton, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_pton(AF_INET6, '::') except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_pton(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual(b'\x00' * 16, f('::')) self.assertEqual(b'\x00' * 16, f('0::0')) self.assertEqual(b'\x00\x01' + b'\x00' * 14, f('1::')) self.assertEqual( b'\x45\xef\x76\xcb\x00\x1a\x56\xef\xaf\xeb\x0b\xac\x19\x24\xae\xae', f('45ef:76cb:1a:56ef:afeb:bac:1924:aeae') ) self.assertEqual( b'\xad\x42\x0a\xbc' + b'\x00' * 4 + b'\x01\x27\x00\x00\x02\x54\x00\x02', f('ad42:abc::127:0:254:2') ) self.assertEqual(b'\x00\x12\x00\x0a' + b'\x00' * 12, f('12:a::')) assertInvalid('0x20::') assertInvalid(':::') assertInvalid('::0::') 
assertInvalid('1::abc::') assertInvalid('1::abc::def') assertInvalid('1:2:3:4:5:6') assertInvalid('1:2:3:4:5:6:') assertInvalid('1:2:3:4:5:6:7:8:0') # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid('1:2:3:4:5:6:7:8:') self.assertEqual(b'\x00' * 12 + b'\xfe\x2a\x17\x40', f('::254.42.23.64') ) self.assertEqual( b'\x00\x42' + b'\x00' * 8 + b'\xa2\x9b\xfe\x2a\x17\x40', f('42::a29b:254.42.23.64') ) self.assertEqual( b'\x00\x42\xa8\xb9\x00\x00\x00\x02\xff\xff\xa2\x9b\xfe\x2a\x17\x40', f('42:a8b9:0:2:ffff:a29b:254.42.23.64') ) assertInvalid('255.254.253.252') assertInvalid('1::260.2.3.0') assertInvalid('1::0.be.e.0') assertInvalid('1:2:3:4:5:6:7:1.2.3.4') assertInvalid('::1.2.3.4:0') assertInvalid('0.100.200.0:3:4:5:6:7:8') @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv4(self): from socket import inet_ntoa as f, inet_ntop, AF_INET g = lambda a: inet_ntop(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual('1.0.1.0', f(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', f(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', f(b'\xff\xff\xff\xff')) self.assertEqual('1.2.3.4', f(b'\x01\x02\x03\x04')) assertInvalid(f, b'\x00' * 3) assertInvalid(f, b'\x00' * 5) assertInvalid(f, b'\x00' * 16) self.assertEqual('170.85.170.85', f(bytearray(b'\xaa\x55\xaa\x55'))) self.assertEqual('1.0.1.0', g(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', g(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', g(b'\xff\xff\xff\xff')) assertInvalid(g, b'\x00' * 3) assertInvalid(g, b'\x00' * 5) assertInvalid(g, b'\x00' * 16) self.assertEqual('170.85.170.85', g(bytearray(b'\xaa\x55\xaa\x55'))) @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv6(self): try: from socket import inet_ntop, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_ntop(AF_INET6, b'\x00' * 16) except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_ntop(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual('::', f(b'\x00' * 16)) self.assertEqual('::1', f(b'\x00' * 15 + b'\x01')) self.assertEqual( 'aef:b01:506:1001:ffff:9997:55:170', f(b'\x0a\xef\x0b\x01\x05\x06\x10\x01\xff\xff\x99\x97\x00\x55\x01\x70') ) self.assertEqual('::1', f(bytearray(b'\x00' * 15 + b'\x01'))) assertInvalid(b'\x12' * 15) assertInvalid(b'\x12' * 17) assertInvalid(b'\x12' * 4) # XXX The following don't test module-level functionality... def testSockName(self): # Testing getsockname() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind(("0.0.0.0", port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break name = sock.getsockname() # XXX(nnorwitz): http://tinyurl.com/os5jz seems to indicate # it reasonable to get the host's addr in addition to 0.0.0.0. # At least for eCos. This is required for the S/390 to pass. 
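# --- Illustrative sketch, not part of the original test file. ---
# The text <-> packed-bytes round trip behind testIPv4toString()/
# testStringToIPv4() and their IPv6 counterparts above. The helper name is
# hypothetical; 192.0.2.1 is just a documentation-range address.
def _sketch_inet_pton_roundtrip():
    import socket

    packed = socket.inet_pton(socket.AF_INET, '192.0.2.1')
    assert packed == b'\xc0\x00\x02\x01'
    assert socket.inet_ntop(socket.AF_INET, packed) == '192.0.2.1'
    if socket.has_ipv6:
        packed6 = socket.inet_pton(socket.AF_INET6, '::1')
        assert socket.inet_ntop(socket.AF_INET6, packed6) == '::1'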
try: my_ip_addr = socket.gethostbyname(socket.gethostname()) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertIn(name[0], ("0.0.0.0", my_ip_addr), '%s invalid' % name[0]) self.assertEqual(name[1], port) def testGetSockOpt(self): # Testing getsockopt() # We know a socket should start without reuse==0 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse != 0, "initial mode is reuse") def testSetSockOpt(self): # Testing setsockopt() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse == 0, "failed to set reuse mode") def testSendAfterClose(self): # testing send() after close() with timeout with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.settimeout(1) self.assertRaises(OSError, sock.send, b"spam") def testCloseException(self): sock = socket.socket() sock.bind((socket._LOCALHOST, 0)) socket.socket(fileno=sock.fileno()).close() try: sock.close() except OSError as err: # Winsock apparently raises ENOTSOCK self.assertIn(err.errno, (errno.EBADF, errno.ENOTSOCK)) else: self.fail("close() should raise EBADF/ENOTSOCK") def testNewAttributes(self): # testing .family, .type and .protocol with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: self.assertEqual(sock.family, socket.AF_INET) if hasattr(socket, 'SOCK_CLOEXEC'): self.assertIn(sock.type, (socket.SOCK_STREAM | socket.SOCK_CLOEXEC, socket.SOCK_STREAM)) else: self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def test_getsockaddrarg(self): sock = socket.socket() self.addCleanup(sock.close) port = socket_helper.find_unused_port() big_port = port + 65536 neg_port = port - 65536 self.assertRaises(OverflowError, sock.bind, (HOST, big_port)) self.assertRaises(OverflowError, sock.bind, (HOST, neg_port)) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. 
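# --- Illustrative sketch, not part of the original test file. ---
# The setsockopt()/getsockopt() round trip that testGetSockOpt() and
# testSetSockOpt() above rely on: SO_REUSEADDR starts off disabled on a
# fresh socket and reads back non-zero once enabled. Helper name is
# hypothetical.
def _sketch_setsockopt_roundtrip():
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        assert s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) == 0
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        assert s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) != 0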
for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind((HOST, port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break @unittest.skipUnless(os.name == "nt", "Windows specific") def test_sock_ioctl(self): self.assertTrue(hasattr(socket.socket, 'ioctl')) self.assertTrue(hasattr(socket, 'SIO_RCVALL')) self.assertTrue(hasattr(socket, 'RCVALL_ON')) self.assertTrue(hasattr(socket, 'RCVALL_OFF')) self.assertTrue(hasattr(socket, 'SIO_KEEPALIVE_VALS')) s = socket.socket() self.addCleanup(s.close) self.assertRaises(ValueError, s.ioctl, -1, None) s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 100, 100)) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(hasattr(socket, 'SIO_LOOPBACK_FAST_PATH'), 'Loopback fast path support required for this test') def test_sio_loopback_fast_path(self): s = socket.socket() self.addCleanup(s.close) try: s.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True) except OSError as exc: WSAEOPNOTSUPP = 10045 if exc.winerror == WSAEOPNOTSUPP: self.skipTest("SIO_LOOPBACK_FAST_PATH is defined but " "doesn't implemented in this Windows version") raise self.assertRaises(TypeError, s.ioctl, socket.SIO_LOOPBACK_FAST_PATH, None) def testGetaddrinfo(self): try: socket.getaddrinfo('localhost', 80) except socket.gaierror as err: if err.errno == socket.EAI_SERVICE: # see http://bugs.python.org/issue1282647 self.skipTest("buggy libc version") raise # len of every sequence is supposed to be == 5 for info in socket.getaddrinfo(HOST, None): self.assertEqual(len(info), 5) # host can be a domain name, a string representation of an # IPv4/v6 address or None socket.getaddrinfo('localhost', 80) socket.getaddrinfo('127.0.0.1', 80) socket.getaddrinfo(None, 80) if socket_helper.IPV6_ENABLED: socket.getaddrinfo('::1', 80) # port can be a string service name such as "http", a numeric # port number or None # Issue #26936: Android getaddrinfo() was broken before API level 23. 
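# --- Illustrative sketch, not part of the original test file. ---
# Each getaddrinfo() result is the 5-tuple that testGetaddrinfo() above
# takes apart; the first three items can be passed straight to socket() and
# the last one to connect()/bind(). Helper name is hypothetical.
def _sketch_getaddrinfo_tuples():
    import socket

    results = socket.getaddrinfo('127.0.0.1', 80, type=socket.SOCK_STREAM)
    for family, type_, proto, canonname, sockaddr in results:
        assert family == socket.AF_INET
        assert type_ == socket.SOCK_STREAM
        assert sockaddr == ('127.0.0.1', 80)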
if (not hasattr(sys, 'getandroidapilevel') or sys.getandroidapilevel() >= 23): socket.getaddrinfo(HOST, "http") socket.getaddrinfo(HOST, 80) socket.getaddrinfo(HOST, None) # test family and socktype filters infos = socket.getaddrinfo(HOST, 80, socket.AF_INET, socket.SOCK_STREAM) for family, type, _, _, _ in infos: self.assertEqual(family, socket.AF_INET) self.assertEqual(repr(family), '' % family.value) self.assertEqual(str(family), str(family.value)) self.assertEqual(type, socket.SOCK_STREAM) self.assertEqual(repr(type), '' % type.value) self.assertEqual(str(type), str(type.value)) infos = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) for _, socktype, _, _, _ in infos: self.assertEqual(socktype, socket.SOCK_STREAM) # test proto and flags arguments socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) # a server willing to support both IPv4 and IPv6 will # usually do this socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) # test keyword arguments a = socket.getaddrinfo(HOST, None) b = socket.getaddrinfo(host=HOST, port=None) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, socket.AF_INET) b = socket.getaddrinfo(HOST, None, family=socket.AF_INET) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) b = socket.getaddrinfo(HOST, None, type=socket.SOCK_STREAM) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) b = socket.getaddrinfo(HOST, None, proto=socket.SOL_TCP) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(HOST, None, flags=socket.AI_PASSIVE) self.assertEqual(a, b) a = socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(host=None, port=0, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM, proto=0, flags=socket.AI_PASSIVE) self.assertEqual(a, b) # Issue #6697. self.assertRaises(UnicodeEncodeError, socket.getaddrinfo, 'localhost', '\uD800') # Issue 17269: test workaround for OS X platform bug segfault if hasattr(socket, 'AI_NUMERICSERV'): try: # The arguments here are undefined and the call may succeed # or fail. All we care here is that it doesn't segfault. socket.getaddrinfo("localhost", None, 0, 0, 0, socket.AI_NUMERICSERV) except socket.gaierror: pass def test_getaddrinfo_int_port_overflow(self): # gh-74895: Test that getaddrinfo does not raise OverflowError on port. # # POSIX getaddrinfo() never specify the valid range for "service" # decimal port number values. For IPv4 and IPv6 they are technically # unsigned 16-bit values, but the API is protocol agnostic. Which values # trigger an error from the C library function varies by platform as # they do not all perform validation. # The key here is that we don't want to produce OverflowError as Python # prior to 3.12 did for ints outside of a [LONG_MIN, LONG_MAX] range. # Leave the error up to the underlying string based platform C API. from _testcapi import ULONG_MAX, LONG_MAX, LONG_MIN try: socket.getaddrinfo(None, ULONG_MAX + 1, type=socket.SOCK_STREAM) except OverflowError: # Platforms differ as to what values consitute a getaddrinfo() error # return. Some fail for LONG_MAX+1, others ULONG_MAX+1, and Windows # silently accepts such huge "port" aka "service" numeric values. 
self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MAX + 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MAX - 0xffff + 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MIN - 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass socket.getaddrinfo(None, 0, type=socket.SOCK_STREAM) # No error expected. socket.getaddrinfo(None, 0xffff, type=socket.SOCK_STREAM) # No error expected. def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) @unittest.skipUnless(support.is_resource_enabled('network'), 'network is not enabled') def test_idna(self): # Check for internet access before running test # (issue #12804, issue #25138). with socket_helper.transient_internet('python.org'): socket.gethostbyname('python.org') # these should all be successful domain = 'испытание.pythontest.net' socket.gethostbyname(domain) socket.gethostbyname_ex(domain) socket.getaddrinfo(domain,0,socket.AF_UNSPEC,socket.SOCK_STREAM) # this may not work if the forward lookup chooses the IPv6 address, as that doesn't # have a reverse entry yet # socket.gethostbyaddr('испытание.python.org') def check_sendall_interrupted(self, with_timeout): # socketpair() is not strictly required, but it makes things easier. if not hasattr(signal, 'alarm') or not hasattr(socket, 'socketpair'): self.skipTest("signal.alarm and socket.socketpair required for this test") # Our signal handlers clobber the C errno by calling a math function # with an invalid domain value. 
def ok_handler(*args): self.assertRaises(ValueError, math.acosh, 0) def raising_handler(*args): self.assertRaises(ValueError, math.acosh, 0) 1 // 0 c, s = socket.socketpair() old_alarm = signal.signal(signal.SIGALRM, raising_handler) try: if with_timeout: # Just above the one second minimum for signal.alarm c.settimeout(1.5) with self.assertRaises(ZeroDivisionError): signal.alarm(1) c.sendall(b"x" * support.SOCK_MAX_SIZE) if with_timeout: signal.signal(signal.SIGALRM, ok_handler) signal.alarm(1) self.assertRaises(TimeoutError, c.sendall, b"x" * support.SOCK_MAX_SIZE) finally: signal.alarm(0) signal.signal(signal.SIGALRM, old_alarm) c.close() s.close() def test_sendall_interrupted(self): self.check_sendall_interrupted(False) def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) def test_dealloc_warn(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) r = repr(sock) with self.assertWarns(ResourceWarning) as cm: sock = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) # An open socket file object gets dereferenced after the socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) f = sock.makefile('rb') r = repr(sock) sock = None support.gc_collect() with self.assertWarns(ResourceWarning): f = None support.gc_collect() def test_name_closed_socketio(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: fp = sock.makefile("rb") fp.close() self.assertEqual(repr(fp), "<_io.BufferedReader name=-1>") def test_unusable_closed_socketio(self): with socket.socket() as sock: fp = sock.makefile("rb", buffering=0) self.assertTrue(fp.readable()) self.assertFalse(fp.writable()) self.assertFalse(fp.seekable()) fp.close() self.assertRaises(ValueError, fp.readable) self.assertRaises(ValueError, fp.writable) self.assertRaises(ValueError, fp.seekable) def test_socket_close(self): sock = socket.socket() try: sock.bind((HOST, 0)) socket.close(sock.fileno()) with self.assertRaises(OSError): sock.listen(1) finally: with self.assertRaises(OSError): # sock.close() fails with EBADF sock.close() with self.assertRaises(TypeError): socket.close(None) with self.assertRaises(OSError): socket.close(-1) def test_makefile_mode(self): for mode in 'r', 'rb', 'rw', 'w', 'wb': with self.subTest(mode=mode): with socket.socket() as sock: encoding = None if "b" in mode else "utf-8" with sock.makefile(mode, encoding=encoding) as fp: self.assertEqual(fp.mode, mode) def test_makefile_invalid_mode(self): for mode in 'rt', 'x', '+', 'a': with self.subTest(mode=mode): with socket.socket() as sock: with self.assertRaisesRegex(ValueError, 'invalid mode'): sock.makefile(mode) def test_pickle(self): sock = socket.socket() with sock: for protocol in range(pickle.HIGHEST_PROTOCOL + 1): self.assertRaises(TypeError, pickle.dumps, sock, protocol) for protocol in range(pickle.HIGHEST_PROTOCOL + 1): family = pickle.loads(pickle.dumps(socket.AF_INET, protocol)) self.assertEqual(family, socket.AF_INET) type = pickle.loads(pickle.dumps(socket.SOCK_STREAM, protocol)) self.assertEqual(type, socket.SOCK_STREAM) def test_listen_backlog(self): for backlog in 0, -1: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen(backlog) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen() @support.cpython_only def test_listen_backlog_overflow(self): # Issue 15989 import _testcapi with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) self.assertRaises(OverflowError, 
srv.listen, _testcapi.INT_MAX + 1) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_flowinfo(self): self.assertRaises(OverflowError, socket.getnameinfo, (socket_helper.HOSTv6, 0, 0xffffffff), 0) with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s: self.assertRaises(OverflowError, s.bind, (socket_helper.HOSTv6, 0, -10)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_getaddrinfo_ipv6_basic(self): ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D', # Note capital letter `D`. 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, 0)) def test_getfqdn_filter_localhost(self): self.assertEqual(socket.getfqdn(), socket.getfqdn("0.0.0.0")) self.assertEqual(socket.getfqdn(), socket.getfqdn("::")) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getaddrinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface (Linux, Mac OS X) (ifindex, test_interface) = socket.if_nameindex()[0] ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + test_interface, 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getaddrinfo_ipv6_scopeid_numeric(self): # Also works on Linux and Mac OS X, but is not documented (?) # Windows, Linux and Max OS X allow nonexistent interface numbers here. ifindex = 42 ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + str(ifindex), 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getnameinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface. (ifindex, test_interface) = socket.if_nameindex()[0] sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + test_interface, '1234')) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getnameinfo_ipv6_scopeid_numeric(self): # Also works on Linux (undocumented), but does not work on Mac OS X # Windows and Linux allow nonexistent interface numbers here. ifindex = 42 sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. 
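# RFC 4007 scoped-address notation appends the zone/scope id to the address as # 'address%zone'; with NI_NUMERICHOST the zone comes back in numeric form, # hence the '%42' suffix expected below.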
nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + str(ifindex), '1234')) def test_str_for_enums(self): # Make sure that the AF_* and SOCK_* constants have enum-like string # reprs. with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: self.assertEqual(repr(s.family), '<AddressFamily.AF_INET: %r>' % s.family.value) self.assertEqual(repr(s.type), '<SocketKind.SOCK_STREAM: %r>' % s.type.value) self.assertEqual(str(s.family), str(s.family.value)) self.assertEqual(str(s.type), str(s.type.value)) def test_socket_consistent_sock_type(self): SOCK_NONBLOCK = getattr(socket, 'SOCK_NONBLOCK', 0) SOCK_CLOEXEC = getattr(socket, 'SOCK_CLOEXEC', 0) sock_type = socket.SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC with socket.socket(socket.AF_INET, sock_type) as s: self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(1) self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(0) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(True) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(False) self.assertEqual(s.type, socket.SOCK_STREAM) def test_unknown_socket_family_repr(self): # Test that when created with a family that's not one of the known # AF_*/SOCK_* constants, socket.family just returns the number. # # To do this we fool socket.socket into believing it already has an # open fd because on this path it doesn't actually verify the family and # type and populates the socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) fd = sock.detach() unknown_family = max(socket.AddressFamily.__members__.values()) + 1 unknown_type = max( kind for name, kind in socket.SocketKind.__members__.items() if name not in {'SOCK_NONBLOCK', 'SOCK_CLOEXEC'} ) + 1 with socket.socket( family=unknown_family, type=unknown_type, proto=23, fileno=fd) as s: self.assertEqual(s.family, unknown_family) self.assertEqual(s.type, unknown_type) # some OS like macOS ignore proto self.assertIn(s.proto, {0, 23}) @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()') def test__sendfile_use_sendfile(self): class File: def __init__(self, fd): self.fd = fd def fileno(self): return self.fd with socket.socket() as sock: fd = os.open(os.curdir, os.O_RDONLY) os.close(fd) with self.assertRaises(socket._GiveupOnSendfile): sock._sendfile_use_sendfile(File(fd)) with self.assertRaises(OverflowError): sock._sendfile_use_sendfile(File(2**1000)) with self.assertRaises(TypeError): sock._sendfile_use_sendfile(File(None)) def _test_socket_fileno(self, s, family, stype): self.assertEqual(s.family, family) self.assertEqual(s.type, stype) fd = s.fileno() s2 = socket.socket(fileno=fd) self.addCleanup(s2.close) # detach old fd to avoid double close s.detach() self.assertEqual(s2.family, family) self.assertEqual(s2.type, stype) self.assertEqual(s2.fileno(), fd) def test_socket_fileno(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_STREAM) if hasattr(socket, "SOCK_DGRAM"): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_DGRAM) if socket_helper.IPV6_ENABLED: s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOSTv6, 0, 0, 0)) self._test_socket_fileno(s, socket.AF_INET6, socket.SOCK_STREAM) if hasattr(socket, "AF_UNIX"): unix_name = socket_helper.create_unix_domain_name()
self.addCleanup(os_helper.unlink, unix_name) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) with s: try: s.bind(unix_name) except PermissionError: pass else: self._test_socket_fileno(s, socket.AF_UNIX, socket.SOCK_STREAM) def test_socket_fileno_rejects_float(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=42.5) def test_socket_fileno_rejects_other_types(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno="foo") def test_socket_fileno_rejects_invalid_socket(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1) @unittest.skipIf(os.name == "nt", "Windows disallows -1 only") def test_socket_fileno_rejects_negative(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-42) def test_socket_fileno_requires_valid_fd(self): WSAENOTSOCK = 10038 with self.assertRaises(OSError) as cm: socket.socket(fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) def test_socket_fileno_requires_socket_fd(self): with tempfile.NamedTemporaryFile() as afile: with self.assertRaises(OSError): socket.socket(fileno=afile.fileno()) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=afile.fileno()) self.assertEqual(cm.exception.errno, errno.ENOTSOCK) def test_addressfamily_enum(self): import _socket, enum CheckedAddressFamily = enum._old_convert_( enum.IntEnum, 'AddressFamily', 'socket', lambda C: C.isupper() and C.startswith('AF_'), source=_socket, ) enum._test_simple_enum(CheckedAddressFamily, socket.AddressFamily) def test_socketkind_enum(self): import _socket, enum CheckedSocketKind = enum._old_convert_( enum.IntEnum, 'SocketKind', 'socket', lambda C: C.isupper() and C.startswith('SOCK_'), source=_socket, ) enum._test_simple_enum(CheckedSocketKind, socket.SocketKind) def test_msgflag_enum(self): import _socket, enum CheckedMsgFlag = enum._old_convert_( enum.IntFlag, 'MsgFlag', 'socket', lambda C: C.isupper() and C.startswith('MSG_'), source=_socket, ) enum._test_simple_enum(CheckedMsgFlag, socket.MsgFlag) def test_addressinfo_enum(self): import _socket, enum CheckedAddressInfo = enum._old_convert_( enum.IntFlag, 'AddressInfo', 'socket', lambda C: C.isupper() and C.startswith('AI_'), source=_socket) enum._test_simple_enum(CheckedAddressInfo, socket.AddressInfo) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class BasicCANTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_RAW @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCMConstants(self): socket.CAN_BCM # opcodes socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task socket.CAN_BCM_TX_SEND # send one CAN frame socket.CAN_BCM_RX_SETUP # create RX content filter subscription socket.CAN_BCM_RX_DELETE # remove RX content filter subscription socket.CAN_BCM_RX_READ # read properties of RX content filter subscription socket.CAN_BCM_TX_STATUS # reply to TX_READ request 
socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) socket.CAN_BCM_RX_STATUS # reply to RX_READ request socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) # flags socket.CAN_BCM_SETTIMER socket.CAN_BCM_STARTTIMER socket.CAN_BCM_TX_COUNTEVT socket.CAN_BCM_TX_ANNOUNCE socket.CAN_BCM_TX_CP_CAN_ID socket.CAN_BCM_RX_FILTER_ID socket.CAN_BCM_RX_CHECK_DLC socket.CAN_BCM_RX_NO_AUTOTIMER socket.CAN_BCM_RX_ANNOUNCE_RESUME socket.CAN_BCM_TX_RESET_MULTI_IDX socket.CAN_BCM_RX_RTR_FRAME def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testCreateBCMSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: pass def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: address = ('', ) s.bind(address) self.assertEqual(s.getsockname(), address) def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: self.assertRaisesRegex(OSError, 'interface name too long', s.bind, ('x' * 1024,)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_LOOPBACK"), 'socket.CAN_RAW_LOOPBACK required for this test.') def testLoopback(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: for loopback in (0, 1): s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK, loopback) self.assertEqual(loopback, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_FILTER"), 'socket.CAN_RAW_FILTER required for this test.') def testFilter(self): can_id, can_mask = 0x200, 0x700 can_filter = struct.pack("=II", can_id, can_mask) with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter) self.assertEqual(can_filter, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, 8)) s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, bytearray(can_filter)) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class CANTest(ThreadedCANSocketTest): def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @classmethod def build_can_frame(cls, can_id, data): """Build a CAN frame.""" can_dlc = len(data) data = data.ljust(8, b'\x00') return struct.pack(cls.can_frame_fmt, can_id, can_dlc, data) @classmethod def dissect_can_frame(cls, frame): """Dissect a CAN frame.""" can_id, can_dlc, data = struct.unpack(cls.can_frame_fmt, frame) return (can_id, can_dlc, data[:can_dlc]) def testSendFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) self.assertEqual(addr[0], self.interface) def _testSendFrame(self): self.cf = self.build_can_frame(0x00, b'\x01\x02\x03\x04\x05') self.cli.send(self.cf) def testSendMaxFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) def _testSendMaxFrame(self): self.cf = self.build_can_frame(0x00, b'\x07' * 8) self.cli.send(self.cf) def testSendMultiFrames(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf1, cf) cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf2, cf) def _testSendMultiFrames(self): self.cf1 = self.build_can_frame(0x07, b'\x44\x33\x22\x11') self.cli.send(self.cf1) self.cf2 = 
self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def _testBCM(self): cf, addr = self.cli.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) can_id, can_dlc, data = self.dissect_can_frame(cf) self.assertEqual(self.can_id, can_id) self.assertEqual(self.data, data) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCM(self): bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) self.addCleanup(bcm.close) bcm.connect((self.interface,)) self.can_id = 0x123 self.data = bytes([0xc0, 0xff, 0xee]) self.cf = self.build_can_frame(self.can_id, self.data) opcode = socket.CAN_BCM_TX_SEND flags = 0 count = 0 ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 bcm_can_id = 0x0222 nframes = 1 assert len(self.cf) == 16 header = struct.pack(self.bcm_cmd_msg_fmt, opcode, flags, count, ival1_seconds, ival1_usec, ival2_seconds, ival2_usec, bcm_can_id, nframes, ) header_plus_frame = header + self.cf bytes_sent = bcm.send(header_plus_frame) self.assertEqual(bytes_sent, len(header_plus_frame)) @unittest.skipUnless(HAVE_SOCKET_CAN_ISOTP, 'CAN ISOTP required for this test.') class ISOTPTest(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_ISOTP socket.SOCK_DGRAM def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_ISOTP"), 'socket.CAN_ISOTP required for this test.') def testCreateISOTPSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: pass def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: with self.assertRaisesRegex(OSError, 'interface name too long'): s.bind(('x' * 1024, 1, 2)) def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: addr = self.interface, 0x123, 0x456 s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_CAN_J1939, 'CAN J1939 required for this test.') class J1939Test(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testJ1939Constants(self): socket.CAN_J1939 socket.J1939_MAX_UNICAST_ADDR socket.J1939_IDLE_ADDR socket.J1939_NO_ADDR socket.J1939_NO_NAME socket.J1939_PGN_REQUEST socket.J1939_PGN_ADDRESS_CLAIMED socket.J1939_PGN_ADDRESS_COMMANDED socket.J1939_PGN_PDU1_MAX socket.J1939_PGN_MAX socket.J1939_NO_PGN # J1939 socket options socket.SO_J1939_FILTER socket.SO_J1939_PROMISC socket.SO_J1939_SEND_PRIO socket.SO_J1939_ERRQUEUE socket.SCM_J1939_DEST_ADDR socket.SCM_J1939_DEST_NAME socket.SCM_J1939_PRIO socket.SCM_J1939_ERRQUEUE socket.J1939_NLA_PAD socket.J1939_NLA_BYTES_ACKED socket.J1939_EE_INFO_NONE socket.J1939_EE_INFO_TX_ABORT socket.J1939_FILTER_MAX @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testCreateJ1939Socket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: pass 
def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: addr = self.interface, socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_RDS socket.PF_RDS def testCreateSocket(self): with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: pass def testSocketBufferSize(self): bufsize = 16384 with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize) s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize) @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class RDSTest(ThreadedRDSSocketTest): def __init__(self, methodName='runTest'): ThreadedRDSSocketTest.__init__(self, methodName=methodName) def setUp(self): super().setUp() self.evt = threading.Event() def testSendAndRecv(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) self.assertEqual(self.cli_addr, addr) def _testSendAndRecv(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) def testPeek(self): data, addr = self.serv.recvfrom(self.bufsize, socket.MSG_PEEK) self.assertEqual(self.data, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testPeek(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) @requireAttrs(socket.socket, 'recvmsg') def testSendAndRecvMsg(self): data, ancdata, msg_flags, addr = self.serv.recvmsg(self.bufsize) self.assertEqual(self.data, data) @requireAttrs(socket.socket, 'sendmsg') def _testSendAndRecvMsg(self): self.data = b'hello ' * 10 self.cli.sendmsg([self.data], (), 0, (HOST, self.port)) def testSendAndRecvMulti(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data1, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data2, data) def _testSendAndRecvMulti(self): self.data1 = b'bacon' self.cli.sendto(self.data1, 0, (HOST, self.port)) self.data2 = b'egg' self.cli.sendto(self.data2, 0, (HOST, self.port)) def testSelect(self): r, w, x = select.select([self.serv], [], [], 3.0) self.assertIn(self.serv, r) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testSelect(self): self.data = b'select' self.cli.sendto(self.data, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_QIPCRTR, 'QIPCRTR sockets required for this test.') class BasicQIPCRTRTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_QIPCRTR def testCreateSocket(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: pass def testUnbound(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertEqual(s.getsockname()[1], 0) def testBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: socket_helper.bind_port(s, host=s.getsockname()[0]) self.assertNotEqual(s.getsockname()[1], 0) def testInvalidBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertRaises(OSError, socket_helper.bind_port, s, host=-2) def testAutoBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: s.connect((123, 123)) self.assertNotEqual(s.getsockname()[1], 0) 
@unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class BasicVSOCKTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_VSOCK def testVSOCKConstants(self): socket.SO_VM_SOCKETS_BUFFER_SIZE socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE socket.VMADDR_CID_ANY socket.VMADDR_PORT_ANY socket.VMADDR_CID_HOST socket.VM_SOCKETS_INVALID_VERSION socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID def testCreateSocket(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: pass def testSocketBufferSize(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: orig_max = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE) orig = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE) orig_min = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE, orig_max * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE, orig * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE, orig_min * 2) self.assertEqual(orig_max * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE)) self.assertEqual(orig * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE)) self.assertEqual(orig_min * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE)) @unittest.skipUnless(HAVE_SOCKET_BLUETOOTH, 'Bluetooth sockets required for this test.') class BasicBluetoothTest(unittest.TestCase): def testBluetoothConstants(self): socket.BDADDR_ANY socket.BDADDR_LOCAL socket.AF_BLUETOOTH socket.BTPROTO_RFCOMM if sys.platform != "win32": socket.BTPROTO_HCI socket.SOL_HCI socket.BTPROTO_L2CAP if not sys.platform.startswith("freebsd"): socket.BTPROTO_SCO def testCreateRfcommSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support L2CAP sockets") def testCreateL2capSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support HCI sockets") def testCreateHciSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI) as s: pass @unittest.skipIf(sys.platform == "win32" or sys.platform.startswith("freebsd"), "windows and freebsd do not support SCO sockets") def testCreateScoSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_SCO) as s: pass @unittest.skipUnless(HAVE_SOCKET_HYPERV, 'Hyper-V sockets required for this test.') class BasicHyperVTest(unittest.TestCase): def testHyperVConstants(self): socket.HVSOCKET_CONNECT_TIMEOUT socket.HVSOCKET_CONNECT_TIMEOUT_MAX socket.HVSOCKET_CONNECTED_SUSPEND socket.HVSOCKET_ADDRESS_FLAG_PASSTHRU socket.HV_GUID_ZERO socket.HV_GUID_WILDCARD socket.HV_GUID_BROADCAST socket.HV_GUID_CHILDREN socket.HV_GUID_LOOPBACK socket.HV_GUID_PARENT def testCreateHyperVSocketWithUnknownProtoFailure(self): expected = r"\[WinError 10041\]" with self.assertRaisesRegex(OSError, expected): socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM) def testCreateHyperVSocketAddrNotTupleFailure(self): expected = "connect(): AF_HYPERV address must be tuple, not str" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect(socket.HV_GUID_ZERO) 
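# An AF_HYPERV address is a (vm_id, service_id) tuple of GUID strings; the # following tests walk through the malformed-address failures one field at a time.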
def testCreateHyperVSocketAddrNotTupleOf2StrsFailure(self): expected = "AF_HYPERV address must be a str tuple (vm_id, service_id)" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect((socket.HV_GUID_ZERO,)) def testCreateHyperVSocketAddrNotTupleOfStrsFailure(self): expected = "AF_HYPERV address must be a str tuple (vm_id, service_id)" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect((1, 2)) def testCreateHyperVSocketAddrVmIdNotValidUUIDFailure(self): expected = "connect(): AF_HYPERV address vm_id is not a valid UUID string" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(ValueError, re.escape(expected)): s.connect(("00", socket.HV_GUID_ZERO)) def testCreateHyperVSocketAddrServiceIdNotValidUUIDFailure(self): expected = "connect(): AF_HYPERV address service_id is not a valid UUID string" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(ValueError, re.escape(expected)): s.connect((socket.HV_GUID_ZERO, "00")) class BasicTCPTest(SocketConnectedTest): def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecv(self): # Testing large receive over TCP msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.serv_conn.send(MSG) def testOverFlowRecv(self): # Testing receive in chunks over TCP seg1 = self.cli_conn.recv(len(MSG) - 3) seg2 = self.cli_conn.recv(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecv(self): self.serv_conn.send(MSG) def testRecvFrom(self): # Testing large recvfrom() over TCP msg, addr = self.cli_conn.recvfrom(1024) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.serv_conn.send(MSG) def testOverFlowRecvFrom(self): # Testing recvfrom() in chunks over TCP seg1, addr = self.cli_conn.recvfrom(len(MSG)-3) seg2, addr = self.cli_conn.recvfrom(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecvFrom(self): self.serv_conn.send(MSG) def testSendAll(self): # Testing sendall() with a 2048 byte string over TCP msg = b'' while 1: read = self.cli_conn.recv(1024) if not read: break msg += read self.assertEqual(msg, b'f' * 2048) def _testSendAll(self): big_chunk = b'f' * 2048 self.serv_conn.sendall(big_chunk) def testFromFd(self): # Testing fromfd() fd = self.cli_conn.fileno() sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) self.assertIsInstance(sock, socket.socket) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testFromFd(self): self.serv_conn.send(MSG) def testDup(self): # Testing dup() sock = self.cli_conn.dup() self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDup(self): self.serv_conn.send(MSG) def testShutdown(self): # Testing shutdown() msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) # wait for _testShutdown to finish: on OS X, when the server # closes the connection the client also becomes disconnected, # and the client's shutdown call will fail. (Issue #4397.) 
self.done.wait() def _testShutdown(self): self.serv_conn.send(MSG) self.serv_conn.shutdown(2) testShutdown_overflow = support.cpython_only(testShutdown) @support.cpython_only def _testShutdown_overflow(self): import _testcapi self.serv_conn.send(MSG) # Issue 15989 self.assertRaises(OverflowError, self.serv_conn.shutdown, _testcapi.INT_MAX + 1) self.assertRaises(OverflowError, self.serv_conn.shutdown, 2 + (_testcapi.UINT_MAX + 1)) self.serv_conn.shutdown(2) def testDetach(self): # Testing detach() fileno = self.cli_conn.fileno() f = self.cli_conn.detach() self.assertEqual(f, fileno) # cli_conn cannot be used anymore... self.assertTrue(self.cli_conn._closed) self.assertRaises(OSError, self.cli_conn.recv, 1024) self.cli_conn.close() # ...but we can create another socket using the (still open) # file descriptor sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=f) self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDetach(self): self.serv_conn.send(MSG) class BasicUDPTest(ThreadedUDPSocketTest): def __init__(self, methodName='runTest'): ThreadedUDPSocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDP msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDP msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class BasicUDPLITETest(ThreadedUDPLITESocketTest): def __init__(self, methodName='runTest'): ThreadedUDPLITESocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDPLITE msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDPLITE msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) # Tests for the sendmsg()/recvmsg() interface. Where possible, the # same test code is used with different families and types of socket # (e.g. stream, datagram), and tests using recvmsg() are repeated # using recvmsg_into(). # # The generic test classes such as SendmsgTests and # RecvmsgGenericTests inherit from SendrecvmsgBase and expect to be # supplied with sockets cli_sock and serv_sock representing the # client's and the server's end of the connection respectively, and # attributes cli_addr and serv_addr holding their (numeric where # appropriate) addresses. 
# # The final concrete test classes combine these with subclasses of # SocketTestBase which set up client and server sockets of a specific # type, and with subclasses of SendrecvmsgBase such as # SendrecvmsgDgramBase and SendrecvmsgConnectedBase which map these # sockets to cli_sock and serv_sock and override the methods and # attributes of SendrecvmsgBase to fill in destination addresses if # needed when sending, check for specific flags in msg_flags, etc. # # RecvmsgIntoMixin provides a version of doRecvmsg() implemented using # recvmsg_into(). # XXX: like the other datagram (UDP) tests in this module, the code # here assumes that datagram delivery on the local machine will be # reliable. class SendrecvmsgBase: # Base class for sendmsg()/recvmsg() tests. # Time in seconds to wait before considering a test failed, or # None for no timeout. Not all tests actually set a timeout. fail_timeout = support.LOOPBACK_TIMEOUT def setUp(self): self.misc_event = threading.Event() super().setUp() def sendToServer(self, msg): # Send msg to the server. return self.cli_sock.send(msg) # Tuple of alternative default arguments for sendmsg() when called # via sendmsgToServer() (e.g. to include a destination address). sendmsg_to_server_defaults = () def sendmsgToServer(self, *args): # Call sendmsg() on self.cli_sock with the given arguments, # filling in any arguments which are not supplied with the # corresponding items of self.sendmsg_to_server_defaults, if # any. return self.cli_sock.sendmsg( *(args + self.sendmsg_to_server_defaults[len(args):])) def doRecvmsg(self, sock, bufsize, *args): # Call recvmsg() on sock with given arguments and return its # result. Should be used for tests which can use either # recvmsg() or recvmsg_into() - RecvmsgIntoMixin overrides # this method with one which emulates it using recvmsg_into(), # thus allowing the same test to be used for both methods. result = sock.recvmsg(bufsize, *args) self.registerRecvmsgResult(result) return result def registerRecvmsgResult(self, result): # Called by doRecvmsg() with the return value of recvmsg() or # recvmsg_into(). Can be overridden to arrange cleanup based # on the returned ancillary data, for instance. pass def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer. self.assertEqual(addr1, addr2) # Flags that are normally unset in msg_flags msg_flags_common_unset = 0 for name in ("MSG_CTRUNC", "MSG_OOB"): msg_flags_common_unset |= getattr(socket, name, 0) # Flags that are normally set msg_flags_common_set = 0 # Flags set when a complete record has been received (e.g. MSG_EOR # for SCTP) msg_flags_eor_indicator = 0 # Flags set when a complete record has not been received # (e.g. MSG_TRUNC for datagram sockets) msg_flags_non_eor_indicator = 0 def checkFlags(self, flags, eor=None, checkset=0, checkunset=0, ignore=0): # Method to check the value of msg_flags returned by recvmsg[_into](). # # Checks that all bits in msg_flags_common_set attribute are # set in "flags" and all bits in msg_flags_common_unset are # unset. # # The "eor" argument specifies whether the flags should # indicate that a full record (or datagram) has been received. 
# If "eor" is None, no checks are done; otherwise, checks # that: # # * if "eor" is true, all bits in msg_flags_eor_indicator are # set and all bits in msg_flags_non_eor_indicator are unset # # * if "eor" is false, all bits in msg_flags_non_eor_indicator # are set and all bits in msg_flags_eor_indicator are unset # # If "checkset" and/or "checkunset" are supplied, they require # the given bits to be set or unset respectively, overriding # what the attributes require for those bits. # # If any bits are set in "ignore", they will not be checked, # regardless of the other inputs. # # Will raise Exception if the inputs require a bit to be both # set and unset, and it is not ignored. defaultset = self.msg_flags_common_set defaultunset = self.msg_flags_common_unset if eor: defaultset |= self.msg_flags_eor_indicator defaultunset |= self.msg_flags_non_eor_indicator elif eor is not None: defaultset |= self.msg_flags_non_eor_indicator defaultunset |= self.msg_flags_eor_indicator # Function arguments override defaults defaultset &= ~checkunset defaultunset &= ~checkset # Merge arguments with remaining defaults, and check for conflicts checkset |= defaultset checkunset |= defaultunset inboth = checkset & checkunset & ~ignore if inboth: raise Exception("contradictory set, unset requirements for flags " "{0:#x}".format(inboth)) # Compare with given msg_flags value mask = (checkset | checkunset) & ~ignore self.assertEqual(flags & mask, checkset & mask) class RecvmsgIntoMixin(SendrecvmsgBase): # Mixin to implement doRecvmsg() using recvmsg_into(). def doRecvmsg(self, sock, bufsize, *args): buf = bytearray(bufsize) result = sock.recvmsg_into([buf], *args) self.registerRecvmsgResult(result) self.assertGreaterEqual(result[0], 0) self.assertLessEqual(result[0], bufsize) return (bytes(buf[:result[0]]),) + result[1:] class SendrecvmsgDgramFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for datagram sockets. @property def msg_flags_non_eor_indicator(self): return super().msg_flags_non_eor_indicator | socket.MSG_TRUNC class SendrecvmsgSCTPFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for SCTP sockets. @property def msg_flags_eor_indicator(self): return super().msg_flags_eor_indicator | socket.MSG_EOR class SendrecvmsgConnectionlessBase(SendrecvmsgBase): # Base class for tests on connectionless-mode sockets. Users must # supply sockets on attributes cli and serv to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.serv @property def cli_sock(self): return self.cli @property def sendmsg_to_server_defaults(self): return ([], [], 0, self.serv_addr) def sendToServer(self, msg): return self.cli_sock.sendto(msg, self.serv_addr) class SendrecvmsgConnectedBase(SendrecvmsgBase): # Base class for tests on connected sockets. Users must supply # sockets on attributes serv_conn and cli_conn (representing the # connections *to* the server and the client), to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.cli_conn @property def cli_sock(self): return self.serv_conn def checkRecvmsgAddress(self, addr1, addr2): # Address is currently "unspecified" for a connected socket, # so we don't examine it pass class SendrecvmsgServerTimeoutBase(SendrecvmsgBase): # Base class to set a timeout on server's socket. 
def setUp(self): super().setUp() self.serv_sock.settimeout(self.fail_timeout) class SendmsgTests(SendrecvmsgServerTimeoutBase): # Tests for sendmsg() which can use any socket type and do not # involve recvmsg() or recvmsg_into(). def testSendmsg(self): # Send a simple message with sendmsg(). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG]), len(MSG)) def testSendmsgDataGenerator(self): # Send from buffer obtained from a generator (not a sequence). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgDataGenerator(self): self.assertEqual(self.sendmsgToServer((o for o in [MSG])), len(MSG)) def testSendmsgAncillaryGenerator(self): # Gather (empty) ancillary data from a generator. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgAncillaryGenerator(self): self.assertEqual(self.sendmsgToServer([MSG], (o for o in [])), len(MSG)) def testSendmsgArray(self): # Send data from an array instead of the usual bytes object. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgArray(self): self.assertEqual(self.sendmsgToServer([array.array("B", MSG)]), len(MSG)) def testSendmsgGather(self): # Send message data from more than one buffer (gather write). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgGather(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) def testSendmsgBadArgs(self): # Check that sendmsg() rejects invalid arguments. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadArgs(self): self.assertRaises(TypeError, self.cli_sock.sendmsg) self.assertRaises(TypeError, self.sendmsgToServer, b"not in an iterable") self.assertRaises(TypeError, self.sendmsgToServer, object()) self.assertRaises(TypeError, self.sendmsgToServer, [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG, object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], 0, object()) self.sendToServer(b"done") def testSendmsgBadCmsg(self): # Check that invalid ancillary data items are rejected. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(object(), 0, b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, object(), b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, object())]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0)]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b"data", 42)]) self.sendToServer(b"done") @requireAttrs(socket, "CMSG_SPACE") def testSendmsgBadMultiCmsg(self): # Check that invalid ancillary data items are rejected when # more than one item is present. self.assertEqual(self.serv_sock.recv(1000), b"done") @testSendmsgBadMultiCmsg.client_skip def _testSendmsgBadMultiCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [0, 0, b""]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b""), object()]) self.sendToServer(b"done") def testSendmsgExcessCmsgReject(self): # Check that sendmsg() rejects excess ancillary data items # when the number that can be sent is limited. 
self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgExcessCmsgReject(self): if not hasattr(socket, "CMSG_SPACE"): # Can only send one item with self.assertRaises(OSError) as cm: self.sendmsgToServer([MSG], [(0, 0, b""), (0, 0, b"")]) self.assertIsNone(cm.exception.errno) self.sendToServer(b"done") def testSendmsgAfterClose(self): # Check that sendmsg() fails on a closed socket. pass def _testSendmsgAfterClose(self): self.cli_sock.close() self.assertRaises(OSError, self.sendmsgToServer, [MSG]) class SendmsgStreamTests(SendmsgTests): # Tests for sendmsg() which require a stream socket and do not # involve recvmsg() or recvmsg_into(). def testSendmsgExplicitNoneAddr(self): # Check that peer address can be specified as None. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgExplicitNoneAddr(self): self.assertEqual(self.sendmsgToServer([MSG], [], 0, None), len(MSG)) def testSendmsgTimeout(self): # Check that timeout works with sendmsg(). self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) def _testSendmsgTimeout(self): try: self.cli_sock.settimeout(0.03) try: while True: self.sendmsgToServer([b"a"*512]) except TimeoutError: pass except OSError as exc: if exc.errno != errno.ENOMEM: raise # bpo-33937 the test randomly fails on Travis CI with # "OSError: [Errno 12] Cannot allocate memory" else: self.fail("TimeoutError not raised") finally: self.misc_event.set() # XXX: would be nice to have more tests for sendmsg flags argument. # Linux supports MSG_DONTWAIT when sending, but in general, it # only works when receiving. Could add other platforms if they # support it too. @skipWithClientIf(sys.platform not in {"linux"}, "MSG_DONTWAIT not known to work on this platform when " "sending") def testSendmsgDontWait(self): # Check that MSG_DONTWAIT in flags causes non-blocking behaviour. self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @testSendmsgDontWait.client_skip def _testSendmsgDontWait(self): try: with self.assertRaises(OSError) as cm: while True: self.sendmsgToServer([b"a"*512], [], socket.MSG_DONTWAIT) # bpo-33937: catch also ENOMEM, the test randomly fails on Travis CI # with "OSError: [Errno 12] Cannot allocate memory" self.assertIn(cm.exception.errno, (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOMEM)) finally: self.misc_event.set() class SendmsgConnectionlessTests(SendmsgTests): # Tests for sendmsg() which require a connectionless-mode # (e.g. datagram) socket, and do not involve recvmsg() or # recvmsg_into(). def testSendmsgNoDestAddr(self): # Check that sendmsg() fails when no destination address is # given for unconnected socket. pass def _testSendmsgNoDestAddr(self): self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG]) self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG], [], 0, None) class RecvmsgGenericTests(SendrecvmsgBase): # Tests for recvmsg() which can also be emulated using # recvmsg_into(), and can use any socket type. def testRecvmsg(self): # Receive a simple message with recvmsg[_into](). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsg(self): self.sendToServer(MSG) def testRecvmsgExplicitDefaults(self): # Test recvmsg[_into]() with default arguments provided explicitly. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgExplicitDefaults(self): self.sendToServer(MSG) def testRecvmsgShorter(self): # Receive a message smaller than buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) + 42) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShorter(self): self.sendToServer(MSG) def testRecvmsgTrunc(self): # Receive part of message, check for truncation indicators. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) def _testRecvmsgTrunc(self): self.sendToServer(MSG) def testRecvmsgShortAncillaryBuf(self): # Test ancillary data buffer too small to hold any ancillary data. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShortAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgLongAncillaryBuf(self): # Test large ancillary data buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgLongAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgAfterClose(self): # Check that recvmsg[_into]() fails on a closed socket. self.serv_sock.close() self.assertRaises(OSError, self.doRecvmsg, self.serv_sock, 1024) def _testRecvmsgAfterClose(self): pass def testRecvmsgTimeout(self): # Check that timeout works. try: self.serv_sock.settimeout(0.03) self.assertRaises(TimeoutError, self.doRecvmsg, self.serv_sock, len(MSG)) finally: self.misc_event.set() def _testRecvmsgTimeout(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @requireAttrs(socket, "MSG_PEEK") def testRecvmsgPeek(self): # Check that MSG_PEEK in flags enables examination of pending # data without consuming it. # Receive part of data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3, 0, socket.MSG_PEEK) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) # Ignoring MSG_TRUNC here (so this test is the same for stream # and datagram sockets). Some wording in POSIX seems to # suggest that it needn't be set when peeking, but that may # just be a slip. self.checkFlags(flags, eor=False, ignore=getattr(socket, "MSG_TRUNC", 0)) # Receive all data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, socket.MSG_PEEK) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) # Check that the same data can still be received normally. 
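# MSG_PEEK leaves the peeked bytes in the kernel receive queue, so a plain # recvmsg() afterwards must still return the full message.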
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgPeek.client_skip def _testRecvmsgPeek(self): self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") def testRecvmsgFromSendmsg(self): # Test receiving with recvmsg[_into]() when message is sent # using sendmsg(). self.serv_sock.settimeout(self.fail_timeout) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgFromSendmsg.client_skip def _testRecvmsgFromSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) class RecvmsgGenericStreamTests(RecvmsgGenericTests): # Tests which require a stream socket and can use either recvmsg() # or recvmsg_into(). def testRecvmsgEOF(self): # Receive end-of-stream indicator (b"", peer socket closed). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.assertEqual(msg, b"") self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=None) # Might not have end-of-record marker def _testRecvmsgEOF(self): self.cli_sock.close() def testRecvmsgOverflow(self): # Receive a message in more than one chunk. seg1, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) seg2, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testRecvmsgOverflow(self): self.sendToServer(MSG) class RecvmsgTests(RecvmsgGenericTests): # Tests for recvmsg() which can use any socket type. def testRecvmsgBadArgs(self): # Check that recvmsg() rejects invalid arguments. self.assertRaises(TypeError, self.serv_sock.recvmsg) self.assertRaises(ValueError, self.serv_sock.recvmsg, -1, 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg, len(MSG), -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, [bytearray(10)], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, object(), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), 0, object()) msg, ancdata, flags, addr = self.serv_sock.recvmsg(len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgBadArgs(self): self.sendToServer(MSG) class RecvmsgIntoTests(RecvmsgIntoMixin, RecvmsgGenericTests): # Tests for recvmsg_into() which can use any socket type. def testRecvmsgIntoBadArgs(self): # Check that recvmsg_into() rejects invalid arguments. 
buf = bytearray(len(MSG)) self.assertRaises(TypeError, self.serv_sock.recvmsg_into) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, len(MSG), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, buf, 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [object()], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [b"I'm not writable"], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf, object()], 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg_into, [buf], -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], 0, object()) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf], 0, 0) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoBadArgs(self): self.sendToServer(MSG) def testRecvmsgIntoGenerator(self): # Receive into buffer obtained from a generator (not a sequence). buf = bytearray(len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( (o for o in [buf])) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoGenerator(self): self.sendToServer(MSG) def testRecvmsgIntoArray(self): # Receive into an array rather than the usual bytearray. buf = array.array("B", [0] * len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf]) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf.tobytes(), MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoArray(self): self.sendToServer(MSG) def testRecvmsgIntoScatter(self): # Receive into multiple buffers (scatter write). b1 = bytearray(b"----") b2 = bytearray(b"0123456789") b3 = bytearray(b"--------------") nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( [b1, memoryview(b2)[2:9], b3]) self.assertEqual(nbytes, len(b"Mary had a little lamb")) self.assertEqual(b1, bytearray(b"Mary")) self.assertEqual(b2, bytearray(b"01 had a 9")) self.assertEqual(b3, bytearray(b"little lamb---")) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoScatter(self): self.sendToServer(b"Mary had a little lamb") class CmsgMacroTests(unittest.TestCase): # Test the functions CMSG_LEN() and CMSG_SPACE(). Tests # assumptions used by sendmsg() and recvmsg[_into](), which share # code with these functions. # Match the definition in socketmodule.c try: import _testcapi except ImportError: socklen_t_limit = 0x7fffffff else: socklen_t_limit = min(0x7fffffff, _testcapi.INT_MAX) @requireAttrs(socket, "CMSG_LEN") def testCMSG_LEN(self): # Test CMSG_LEN() with various valid and invalid values, # checking the assumptions used by recvmsg() and sendmsg(). 
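# Background for the checks below: CMSG_LEN(n) is the (aligned) cmsghdr header # size plus exactly n payload bytes with no trailing padding, while CMSG_SPACE(n) # also pads the payload for alignment, so CMSG_SPACE(n) >= CMSG_LEN(n) and # CMSG_LEN(n) - CMSG_LEN(0) == n.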
toobig = self.socklen_t_limit - socket.CMSG_LEN(0) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(socket.CMSG_LEN(0), array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_LEN(n) # This is how recvmsg() calculates the data size self.assertEqual(ret - socket.CMSG_LEN(0), n) self.assertLessEqual(ret, self.socklen_t_limit) self.assertRaises(OverflowError, socket.CMSG_LEN, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_LEN, toobig) self.assertRaises(OverflowError, socket.CMSG_LEN, sys.maxsize) @requireAttrs(socket, "CMSG_SPACE") def testCMSG_SPACE(self): # Test CMSG_SPACE() with various valid and invalid values, # checking the assumptions used by sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_SPACE(1) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) last = socket.CMSG_SPACE(0) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(last, array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_SPACE(n) self.assertGreaterEqual(ret, last) self.assertGreaterEqual(ret, socket.CMSG_LEN(n)) self.assertGreaterEqual(ret, n + socket.CMSG_LEN(0)) self.assertLessEqual(ret, self.socklen_t_limit) last = ret self.assertRaises(OverflowError, socket.CMSG_SPACE, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_SPACE, toobig) self.assertRaises(OverflowError, socket.CMSG_SPACE, sys.maxsize) class SCMRightsTest(SendrecvmsgServerTimeoutBase): # Tests for file descriptor passing on Unix-domain sockets. # Invalid file descriptor value that's unlikely to evaluate to a # real FD even if one of its bytes is replaced with a different # value (which shouldn't actually happen). badfd = -0x5555 def newFDs(self, n): # Return a list of n file descriptors for newly-created files # containing their list indices as ASCII numbers. fds = [] for i in range(n): fd, path = tempfile.mkstemp() self.addCleanup(os.unlink, path) self.addCleanup(os.close, fd) os.write(fd, str(i).encode()) fds.append(fd) return fds def checkFDs(self, fds): # Check that the file descriptors in the given list contain # their correct list indices as ASCII numbers. for n, fd in enumerate(fds): os.lseek(fd, 0, os.SEEK_SET) self.assertEqual(os.read(fd, 1024), str(n).encode()) def registerRecvmsgResult(self, result): self.addCleanup(self.closeRecvmsgFDs, result) def closeRecvmsgFDs(self, recvmsg_result): # Close all file descriptors specified in the ancillary data # of the given return value from recvmsg() or recvmsg_into(). for cmsg_level, cmsg_type, cmsg_data in recvmsg_result[1]: if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS): fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) for fd in fds: os.close(fd) def createAndSendFDs(self, n): # Send n new file descriptors created by newFDs() to the # server, with the constant MSG as the non-ancillary data. self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(n)))]), len(MSG)) def checkRecvmsgFDs(self, numfds, result, maxcmsgs=1, ignoreflags=0): # Check that constant MSG was received with numfds file # descriptors in a maximum of maxcmsgs control messages (which # must contain only complete integers). 
By default, check # that MSG_CTRUNC is unset, but ignore any flags in # ignoreflags. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertIsInstance(ancdata, list) self.assertLessEqual(len(ancdata), maxcmsgs) fds = array.array("i") for item in ancdata: self.assertIsInstance(item, tuple) cmsg_level, cmsg_type, cmsg_data = item self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data) % SIZEOF_INT, 0) fds.frombytes(cmsg_data) self.assertEqual(len(fds), numfds) self.checkFDs(fds) def testFDPassSimple(self): # Pass a single FD (array read from bytes object). self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testFDPassSimple(self): self.assertEqual( self.sendmsgToServer( [MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(1)).tobytes())]), len(MSG)) def testMultipleFDPass(self): # Pass multiple FDs in a single array. self.checkRecvmsgFDs(4, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testMultipleFDPass(self): self.createAndSendFDs(4) @requireAttrs(socket, "CMSG_SPACE") def testFDPassCMSG_SPACE(self): # Test using CMSG_SPACE() to calculate ancillary buffer size. self.checkRecvmsgFDs( 4, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(4 * SIZEOF_INT))) @testFDPassCMSG_SPACE.client_skip def _testFDPassCMSG_SPACE(self): self.createAndSendFDs(4) def testFDPassCMSG_LEN(self): # Test using CMSG_LEN() to calculate ancillary buffer size. self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(4 * SIZEOF_INT)), # RFC 3542 says implementations may set # MSG_CTRUNC if there isn't enough space # for trailing padding. ignoreflags=socket.MSG_CTRUNC) def _testFDPassCMSG_LEN(self): self.createAndSendFDs(1) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparate(self): # Pass two FDs in two separate arrays. Arrays may be combined # into a single control message by the OS. self.checkRecvmsgFDs(2, self.doRecvmsg(self.serv_sock, len(MSG), 10240), maxcmsgs=2) @testFDPassSeparate.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparate(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparateMinSpace(self): # Pass two FDs in two separate arrays, receiving them into the # minimum space for two arrays. 
num_fds = 2 self.checkRecvmsgFDs(num_fds, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT * num_fds)), maxcmsgs=2, ignoreflags=socket.MSG_CTRUNC) @testFDPassSeparateMinSpace.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparateMinSpace(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) def sendAncillaryIfPossible(self, msg, ancdata): # Try to send msg and ancdata to server, but if the system # call fails, just send msg with no ancillary data. try: nbytes = self.sendmsgToServer([msg], ancdata) except OSError as e: # Check that it was the system call that failed self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer([msg]) self.assertEqual(nbytes, len(msg)) @unittest.skipIf(sys.platform == "darwin", "see issue #24725") def testFDPassEmpty(self): # Try to pass an empty FD array. Can receive either no array # or an empty array. self.checkRecvmsgFDs(0, self.doRecvmsg(self.serv_sock, len(MSG), 10240), ignoreflags=socket.MSG_CTRUNC) def _testFDPassEmpty(self): self.sendAncillaryIfPossible(MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, b"")]) def testFDPassPartialInt(self): # Try to pass a truncated FD array. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertLess(len(cmsg_data), SIZEOF_INT) def _testFDPassPartialInt(self): self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [self.badfd]).tobytes()[:-1])]) @requireAttrs(socket, "CMSG_SPACE") def testFDPassPartialIntInMiddle(self): # Try to pass two FD arrays, the first of which is truncated. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 2) fds = array.array("i") # Arrays may have been combined in a single control message for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.assertLessEqual(len(fds), 2) self.checkFDs(fds) @testFDPassPartialIntInMiddle.client_skip def _testFDPassPartialIntInMiddle(self): fd0, fd1 = self.newFDs(2) self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0, self.badfd]).tobytes()[:-1]), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]) def checkTruncatedHeader(self, result, ignoreflags=0): # Check that no ancillary data items are returned when data is # truncated inside the cmsghdr structure. 
msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no buffer size # is specified. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG)), # BSD seems to set MSG_CTRUNC only # if an item has been partially # received. ignoreflags=socket.MSG_CTRUNC) def _testCmsgTruncNoBufSize(self): self.createAndSendFDs(1) def testCmsgTrunc0(self): # Check that no ancillary data is received when buffer size is 0. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 0), ignoreflags=socket.MSG_CTRUNC) def _testCmsgTrunc0(self): self.createAndSendFDs(1) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. def testCmsgTrunc1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 1)) def _testCmsgTrunc1(self): self.createAndSendFDs(1) def testCmsgTrunc2Int(self): # The cmsghdr structure has at least three members, two of # which are ints, so we still shouldn't see any ancillary # data. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), SIZEOF_INT * 2)) def _testCmsgTrunc2Int(self): self.createAndSendFDs(1) def testCmsgTruncLen0Minus1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(0) - 1)) def _testCmsgTruncLen0Minus1(self): self.createAndSendFDs(1) # The following tests try to truncate the control message in the # middle of the FD array. def checkTruncatedArray(self, ancbuf, maxdata, mindata=0): # Check that file descriptor data is truncated to between # mindata and maxdata bytes when received with buffer size # ancbuf, and that any complete file descriptor numbers are # valid. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbuf) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) if mindata == 0 and ancdata == []: return self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertGreaterEqual(len(cmsg_data), mindata) self.assertLessEqual(len(cmsg_data), maxdata) fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.checkFDs(fds) def testCmsgTruncLen0(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0), maxdata=0) def _testCmsgTruncLen0(self): self.createAndSendFDs(1) def testCmsgTruncLen0Plus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0) + 1, maxdata=1) def _testCmsgTruncLen0Plus1(self): self.createAndSendFDs(2) def testCmsgTruncLen1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(SIZEOF_INT), maxdata=SIZEOF_INT) def _testCmsgTruncLen1(self): self.createAndSendFDs(2) def testCmsgTruncLen2Minus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(2 * SIZEOF_INT) - 1, maxdata=(2 * SIZEOF_INT) - 1) def _testCmsgTruncLen2Minus1(self): self.createAndSendFDs(2) class RFC3542AncillaryTest(SendrecvmsgServerTimeoutBase): # Test sendmsg() and recvmsg[_into]() using the ancillary data # features of the RFC 3542 Advanced Sockets API for IPv6. # Currently we can only handle certain data items (e.g. 
traffic # class, hop limit, MTU discovery and fragmentation settings) # without resorting to unportable means such as the struct module, # but the tests here are aimed at testing the ancillary data # handling in sendmsg() and recvmsg() rather than the IPv6 API # itself. # Test value to use when setting hop limit of packet hop_limit = 2 # Test value to use when setting traffic class of packet. # -1 means "use kernel default". traffic_class = -1 def ancillaryMapping(self, ancdata): # Given ancillary data list ancdata, return a mapping from # pairs (cmsg_level, cmsg_type) to corresponding cmsg_data. # Check that no (level, type) pair appears more than once. d = {} for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertNotIn((cmsg_level, cmsg_type), d) d[(cmsg_level, cmsg_type)] = cmsg_data return d def checkHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space. Check that data is MSG, ancillary data is not # truncated (but ignore any flags in ignoreflags), and hop # limit is between 0 and maxhop inclusive. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) self.assertIsInstance(ancdata[0], tuple) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimit(self): # Test receiving the packet hop limit as ancillary data. self.checkHopLimit(ancbufsize=10240) @testRecvHopLimit.client_skip def _testRecvHopLimit(self): # Need to wait until server has asked to receive ancillary # data, as implementations are not required to buffer it # otherwise. self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimitCMSG_SPACE(self): # Test receiving hop limit, using CMSG_SPACE to calculate buffer size. self.checkHopLimit(ancbufsize=socket.CMSG_SPACE(SIZEOF_INT)) @testRecvHopLimitCMSG_SPACE.client_skip def _testRecvHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Could test receiving into buffer sized using CMSG_LEN, but RFC # 3542 says portable applications must provide space for trailing # padding. Implementations may set MSG_CTRUNC if there isn't # enough space for the padding. @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSetHopLimit(self): # Test setting hop limit on outgoing packet and receiving it # at the other end. 
self.checkHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetHopLimit.client_skip def _testSetHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) def checkTrafficClassAndHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space. Check that data is MSG, ancillary # data is not truncated (but ignore any flags in ignoreflags), # and traffic class and hop limit are in range (hop limit no # more than maxhop). self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 2) ancmap = self.ancillaryMapping(ancdata) tcdata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_TCLASS)] self.assertEqual(len(tcdata), SIZEOF_INT) a = array.array("i") a.frombytes(tcdata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) hldata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT)] self.assertEqual(len(hldata), SIZEOF_INT) a = array.array("i") a.frombytes(hldata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimit(self): # Test receiving traffic class and hop limit as ancillary data. self.checkTrafficClassAndHopLimit(ancbufsize=10240) @testRecvTrafficClassAndHopLimit.client_skip def _testRecvTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimitCMSG_SPACE(self): # Test receiving traffic class and hop limit, using # CMSG_SPACE() to calculate buffer size. self.checkTrafficClassAndHopLimit( ancbufsize=socket.CMSG_SPACE(SIZEOF_INT) * 2) @testRecvTrafficClassAndHopLimitCMSG_SPACE.client_skip def _testRecvTrafficClassAndHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSetTrafficClassAndHopLimit(self): # Test setting traffic class and hop limit on outgoing packet, # and receiving them at the other end. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetTrafficClassAndHopLimit.client_skip def _testSetTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testOddCmsgSize(self): # Try to send ancillary data with first item one byte too # long. 
Fall back to sending with correct size if this fails, # and check that second item was handled correctly. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testOddCmsgSize.client_skip def _testOddCmsgSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) try: nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class]).tobytes() + b"\x00"), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) except OSError as e: self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) self.assertEqual(nbytes, len(MSG)) # Tests for proper handling of truncated ancillary data def checkHopLimitTruncatedHeader(self, ancbufsize, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space, which should be too small to contain the ancillary # data header (if ancbufsize is None, pass no second argument # to recvmsg()). Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and no ancillary data is # returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() args = () if ancbufsize is None else (ancbufsize,) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), *args) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no ancillary # buffer size is provided. self.checkHopLimitTruncatedHeader(ancbufsize=None, # BSD seems to set # MSG_CTRUNC only if an item # has been partially # received. ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc0(self): # Check that no ancillary data is received when ancillary # buffer size is zero. self.checkHopLimitTruncatedHeader(ancbufsize=0, ignoreflags=socket.MSG_CTRUNC) @testSingleCmsgTrunc0.client_skip def _testSingleCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. 
@requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc1(self): self.checkHopLimitTruncatedHeader(ancbufsize=1) @testSingleCmsgTrunc1.client_skip def _testSingleCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc2Int(self): self.checkHopLimitTruncatedHeader(ancbufsize=2 * SIZEOF_INT) @testSingleCmsgTrunc2Int.client_skip def _testSingleCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncLen0Minus1(self): self.checkHopLimitTruncatedHeader(ancbufsize=socket.CMSG_LEN(0) - 1) @testSingleCmsgTruncLen0Minus1.client_skip def _testSingleCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncInData(self): # Test truncation of a control message inside its associated # data. The message may be returned with its data truncated, # or not returned at all. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertLess(len(cmsg_data), SIZEOF_INT) @testSingleCmsgTruncInData.client_skip def _testSingleCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) def checkTruncatedSecondHeader(self, ancbufsize, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space, which should be large enough to # contain the first item, but too small to contain the header # of the second. Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and only one ancillary # data item is returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertIn(cmsg_type, {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT}) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) # Try the above test with various buffer sizes. 
@requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc0(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT), ignoreflags=socket.MSG_CTRUNC) @testSecondCmsgTrunc0.client_skip def _testSecondCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 1) @testSecondCmsgTrunc1.client_skip def _testSecondCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc2Int(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 2 * SIZEOF_INT) @testSecondCmsgTrunc2Int.client_skip def _testSecondCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncLen0Minus1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(0) - 1) @testSecondCmsgTruncLen0Minus1.client_skip def _testSecondCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncInData(self): # Test truncation of the second of two control messages inside # its associated data. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) cmsg_types = {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT} cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertLess(len(cmsg_data), SIZEOF_INT) self.assertEqual(ancdata, []) @testSecondCmsgTruncInData.client_skip def _testSecondCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Derive concrete test classes for different socket types. 
class SendrecvmsgUDPTestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPTest(SendmsgConnectionlessTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPTest(RecvmsgTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPTest(RecvmsgIntoTests, SendrecvmsgUDPTestBase): pass class SendrecvmsgUDP6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDP6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDP6Test(SendmsgConnectionlessTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDP6Test(RecvmsgTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDP6Test(RecvmsgIntoTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDP6Test(RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDP6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITETestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPLITETest(SendmsgConnectionlessTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPLITETest(RecvmsgTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPLITETest(RecvmsgIntoTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITE6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITE6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') 
@requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDPLITE6Test(SendmsgConnectionlessTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDPLITE6Test(RecvmsgTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDPLITE6Test(RecvmsgIntoTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDPLITE6Test(RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDPLITE6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass class SendrecvmsgTCPTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, TCPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgTCPTest(SendmsgStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgTCPTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoTCPTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass class SendrecvmsgSCTPStreamTestBase(SendrecvmsgSCTPFlagsBase, SendrecvmsgConnectedBase, ConnectedStreamTestMixin, SCTPStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class SendmsgSCTPStreamTest(SendmsgStreamTests, SendrecvmsgSCTPStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgSCTPStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgIntoSCTPStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgIntoSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) 
- see issue #13876") class SendrecvmsgUnixStreamTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, UnixStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "AF_UNIX") class SendmsgUnixStreamTest(SendmsgStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @requireAttrs(socket, "AF_UNIX") class RecvmsgUnixStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @requireAttrs(socket, "AF_UNIX") class RecvmsgIntoUnixStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgSCMRightsStreamTest(SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg_into") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgIntoSCMRightsStreamTest(RecvmsgIntoMixin, SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass # Test interrupting the interruptible send/receive methods with a # signal when a timeout is set. These tests avoid having multiple # threads alive during the test so that the OS cannot deliver the # signal to the wrong one. class InterruptedTimeoutBase: # Base class for interrupted send/receive tests. Installs an # empty handler for SIGALRM and removes it on teardown, along with # any scheduled alarms. def setUp(self): super().setUp() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda signum, frame: 1 / 0) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) # Timeout for socket operations timeout = support.LOOPBACK_TIMEOUT # Provide setAlarm() method to schedule delivery of SIGALRM after # given number of seconds, or cancel it if zero, and an # appropriate time value to use. Use setitimer() if available. if hasattr(signal, "setitimer"): alarm_time = 0.05 def setAlarm(self, seconds): signal.setitimer(signal.ITIMER_REAL, seconds) else: # Old systems may deliver the alarm up to one second early alarm_time = 2 def setAlarm(self, seconds): signal.alarm(seconds) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedRecvTimeoutTest(InterruptedTimeoutBase, UDPTestBase): # Test interrupting the recv*() methods with signals when a # timeout is set. def setUp(self): super().setUp() self.serv.settimeout(self.timeout) def checkInterruptedRecv(self, func, *args, **kwargs): # Check that func(*args, **kwargs) raises # errno of EINTR when interrupted by a signal. 
try: self.setAlarm(self.alarm_time) with self.assertRaises(ZeroDivisionError) as cm: func(*args, **kwargs) finally: self.setAlarm(0) def testInterruptedRecvTimeout(self): self.checkInterruptedRecv(self.serv.recv, 1024) def testInterruptedRecvIntoTimeout(self): self.checkInterruptedRecv(self.serv.recv_into, bytearray(1024)) def testInterruptedRecvfromTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom, 1024) def testInterruptedRecvfromIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom_into, bytearray(1024)) @requireAttrs(socket.socket, "recvmsg") def testInterruptedRecvmsgTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg, 1024) @requireAttrs(socket.socket, "recvmsg_into") def testInterruptedRecvmsgIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg_into, [bytearray(1024)]) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedSendTimeoutTest(InterruptedTimeoutBase, SocketListeningTestMixin, TCPTestBase): # Test interrupting the interruptible send*() methods with signals # when a timeout is set. def setUp(self): super().setUp() self.serv_conn = self.newSocket() self.addCleanup(self.serv_conn.close) # Use a thread to complete the connection, but wait for it to # terminate before running the test, so that there is only one # thread to accept the signal. cli_thread = threading.Thread(target=self.doConnect) cli_thread.start() self.cli_conn, addr = self.serv.accept() self.addCleanup(self.cli_conn.close) cli_thread.join() self.serv_conn.settimeout(self.timeout) def doConnect(self): self.serv_conn.connect(self.serv_addr) def checkInterruptedSend(self, func, *args, **kwargs): # Check that func(*args, **kwargs), run in a loop, raises # OSError with an errno of EINTR when interrupted by a # signal. try: with self.assertRaises(ZeroDivisionError) as cm: while True: self.setAlarm(self.alarm_time) func(*args, **kwargs) finally: self.setAlarm(0) # Issue #12958: The following tests have problems on OS X prior to 10.7 @support.requires_mac_ver(10, 7) def testInterruptedSendTimeout(self): self.checkInterruptedSend(self.serv_conn.send, b"a"*512) @support.requires_mac_ver(10, 7) def testInterruptedSendtoTimeout(self): # Passing an actual address here as Python's wrapper for # sendto() doesn't allow passing a zero-length one; POSIX # requires that the address is ignored since the socket is # connection-mode, however. self.checkInterruptedSend(self.serv_conn.sendto, b"a"*512, self.serv_addr) @support.requires_mac_ver(10, 7) @requireAttrs(socket.socket, "sendmsg") def testInterruptedSendmsgTimeout(self): self.checkInterruptedSend(self.serv_conn.sendmsg, [b"a"*512]) class TCPCloserTest(ThreadedTCPSocketTest): def testClose(self): conn, addr = self.serv.accept() conn.close() sd = self.cli read, write, err = select.select([sd], [], [], 1.0) self.assertEqual(read, [sd]) self.assertEqual(sd.recv(1), b'') # Calling close() many times should be safe. 
conn.close() conn.close() def _testClose(self): self.cli.connect((HOST, self.port)) time.sleep(1.0) class BasicSocketPairTest(SocketPairTest): def __init__(self, methodName='runTest'): SocketPairTest.__init__(self, methodName=methodName) def _check_defaults(self, sock): self.assertIsInstance(sock, socket.socket) if hasattr(socket, 'AF_UNIX'): self.assertEqual(sock.family, socket.AF_UNIX) else: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def _testDefaults(self): self._check_defaults(self.cli) def testDefaults(self): self._check_defaults(self.serv) def testRecv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.send(MSG) def testSend(self): self.serv.send(MSG) def _testSend(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) class PurePythonSocketPairTest(SocketPairTest): # Explicitly use socketpair AF_INET or AF_INET6 to ensure that is the # code path we're using regardless platform is the pure python one where # `_socket.socketpair` does not exist. (AF_INET does not work with # _socket.socketpair on many platforms). def socketpair(self): # called by super().setUp(). try: return socket.socketpair(socket.AF_INET6) except OSError: return socket.socketpair(socket.AF_INET) # Local imports in this class make for easy security fix backporting. def setUp(self): if hasattr(_socket, "socketpair"): self._orig_sp = socket.socketpair # This forces the version using the non-OS provided socketpair # emulation via an AF_INET socket in Lib/socket.py. socket.socketpair = socket._fallback_socketpair else: # This platform already uses the non-OS provided version. self._orig_sp = None super().setUp() def tearDown(self): super().tearDown() if self._orig_sp is not None: # Restore the default socket.socketpair definition. socket.socketpair = self._orig_sp def test_recv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _test_recv(self): self.cli.send(MSG) def test_send(self): self.serv.send(MSG) def _test_send(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) def test_ipv4(self): cli, srv = socket.socketpair(socket.AF_INET) cli.close() srv.close() def _test_ipv4(self): pass @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6(self): cli, srv = socket.socketpair(socket.AF_INET6) cli.close() srv.close() def _test_ipv6(self): pass def test_injected_authentication_failure(self): orig_getsockname = socket.socket.getsockname inject_sock = None def inject_getsocketname(self): nonlocal inject_sock sockname = orig_getsockname(self) # Connect to the listening socket ahead of the # client socket. if inject_sock is None: inject_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) inject_sock.setblocking(False) try: inject_sock.connect(sockname[:2]) except (BlockingIOError, InterruptedError): pass inject_sock.setblocking(True) return sockname sock1 = sock2 = None try: socket.socket.getsockname = inject_getsocketname with self.assertRaises(OSError): sock1, sock2 = socket.socketpair() finally: socket.socket.getsockname = orig_getsockname if inject_sock: inject_sock.close() if sock1: # This cleanup isn't needed on a successful test. sock1.close() if sock2: sock2.close() def _test_injected_authentication_failure(self): # No-op. Exists for base class threading infrastructure to call. 
# We could refactor this test into its own lesser class along with the # setUp and tearDown code to construct an ideal; it is simpler to keep # it here and live with extra overhead one this _one_ failure test. pass class NonBlockingTCPTests(ThreadedTCPSocketTest): def __init__(self, methodName='runTest'): self.event = threading.Event() ThreadedTCPSocketTest.__init__(self, methodName=methodName) def assert_sock_timeout(self, sock, timeout): self.assertEqual(self.serv.gettimeout(), timeout) blocking = (timeout != 0.0) self.assertEqual(sock.getblocking(), blocking) if fcntl is not None: # When a Python socket has a non-zero timeout, it's switched # internally to a non-blocking mode. Later, sock.sendall(), # sock.recv(), and other socket operations use a select() call and # handle EWOULDBLOCK/EGAIN on all socket operations. That's how # timeouts are enforced. fd_blocking = (timeout is None) flag = fcntl.fcntl(sock, fcntl.F_GETFL, os.O_NONBLOCK) self.assertEqual(not bool(flag & os.O_NONBLOCK), fd_blocking) def testSetBlocking(self): # Test setblocking() and settimeout() methods self.serv.setblocking(True) self.assert_sock_timeout(self.serv, None) self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.settimeout(None) self.assert_sock_timeout(self.serv, None) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) self.serv.settimeout(10) self.assert_sock_timeout(self.serv, 10) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) def _testSetBlocking(self): pass @support.cpython_only def testSetBlocking_overflow(self): # Issue 15989 import _testcapi if _testcapi.UINT_MAX >= _testcapi.ULONG_MAX: self.skipTest('needs UINT_MAX < ULONG_MAX') self.serv.setblocking(False) self.assertEqual(self.serv.gettimeout(), 0.0) self.serv.setblocking(_testcapi.UINT_MAX + 1) self.assertIsNone(self.serv.gettimeout()) _testSetBlocking_overflow = support.cpython_only(_testSetBlocking) @unittest.skipUnless(hasattr(socket, 'SOCK_NONBLOCK'), 'test needs socket.SOCK_NONBLOCK') @support.requires_linux_version(2, 6, 28) def testInitNonBlocking(self): # create a socket with SOCK_NONBLOCK self.serv.close() self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) self.assert_sock_timeout(self.serv, 0) def _testInitNonBlocking(self): pass def testInheritFlagsBlocking(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must be blocking. with socket_setdefaulttimeout(None): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testInheritFlagsBlocking(self): self.cli.connect((HOST, self.port)) def testInheritFlagsTimeout(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must inherit # the default timeout. 
default_timeout = 20.0 with socket_setdefaulttimeout(default_timeout): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertEqual(conn.gettimeout(), default_timeout) def _testInheritFlagsTimeout(self): self.cli.connect((HOST, self.port)) def testAccept(self): # Testing non-blocking accept self.serv.setblocking(False) # connect() didn't start: non-blocking accept() fails start_time = time.monotonic() with self.assertRaises(BlockingIOError): conn, addr = self.serv.accept() dt = time.monotonic() - start_time self.assertLess(dt, 1.0) self.event.set() read, write, err = select.select([self.serv], [], [], support.LONG_TIMEOUT) if self.serv not in read: self.fail("Error trying to do accept after select.") # connect() completed: non-blocking accept() doesn't block conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testAccept(self): # don't connect before event is set to check # that non-blocking accept() raises BlockingIOError self.event.wait() self.cli.connect((HOST, self.port)) def testRecv(self): # Testing non-blocking recv conn, addr = self.serv.accept() self.addCleanup(conn.close) conn.setblocking(False) # the server didn't send data yet: non-blocking recv() fails with self.assertRaises(BlockingIOError): msg = conn.recv(len(MSG)) self.event.set() read, write, err = select.select([conn], [], [], support.LONG_TIMEOUT) if conn not in read: self.fail("Error during select call to non-blocking socket.") # the server sent data yet: non-blocking recv() doesn't block msg = conn.recv(len(MSG)) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.connect((HOST, self.port)) # don't send anything before event is set to check # that non-blocking recv() raises BlockingIOError self.event.wait() # send data: recv() will no longer block self.cli.sendall(MSG) class FileObjectClassTestCase(SocketConnectedTest): """Unit tests for the object returned by socket.makefile() self.read_file is the io object returned by makefile() on the client connection. You can read from this file to get output from the server. self.write_file is the io object returned by makefile() on the server connection. You can write to this file to send output to the client. """ bufsize = -1 # Use default buffer size encoding = 'utf-8' errors = 'strict' newline = None read_mode = 'rb' read_msg = MSG write_mode = 'wb' write_msg = MSG def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def setUp(self): self.evt1, self.evt2, self.serv_finished, self.cli_finished = [ threading.Event() for i in range(4)] SocketConnectedTest.setUp(self) self.read_file = self.cli_conn.makefile( self.read_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def tearDown(self): self.serv_finished.set() self.read_file.close() self.assertTrue(self.read_file.closed) self.read_file = None SocketConnectedTest.tearDown(self) def clientSetUp(self): SocketConnectedTest.clientSetUp(self) self.write_file = self.serv_conn.makefile( self.write_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def clientTearDown(self): self.cli_finished.set() self.write_file.close() self.assertTrue(self.write_file.closed) self.write_file = None SocketConnectedTest.clientTearDown(self) def testReadAfterTimeout(self): # Issue #7322: A file object must disallow further reads # after a timeout has occurred. 
self.cli_conn.settimeout(1) self.read_file.read(3) # First read raises a timeout self.assertRaises(TimeoutError, self.read_file.read, 1) # Second read is disallowed with self.assertRaises(OSError) as ctx: self.read_file.read(1) self.assertIn("cannot read from timed out object", str(ctx.exception)) def _testReadAfterTimeout(self): self.write_file.write(self.write_msg[0:3]) self.write_file.flush() self.serv_finished.wait() def testSmallRead(self): # Performing small file read test first_seg = self.read_file.read(len(self.read_msg)-3) second_seg = self.read_file.read(3) msg = first_seg + second_seg self.assertEqual(msg, self.read_msg) def _testSmallRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testFullRead(self): # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testFullRead(self): self.write_file.write(self.write_msg) self.write_file.close() def testUnbufferedRead(self): # Performing unbuffered file read test buf = type(self.read_msg)() while 1: char = self.read_file.read(1) if not char: break buf += char self.assertEqual(buf, self.read_msg) def _testUnbufferedRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testReadline(self): # Performing file readline test line = self.read_file.readline() self.assertEqual(line, self.read_msg) def _testReadline(self): self.write_file.write(self.write_msg) self.write_file.flush() def testCloseAfterMakefile(self): # The file returned by makefile should keep the socket open. self.cli_conn.close() # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testCloseAfterMakefile(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileAfterMakefileClose(self): self.read_file.close() msg = self.cli_conn.recv(len(MSG)) if isinstance(self.read_msg, str): msg = msg.decode() self.assertEqual(msg, self.read_msg) def _testMakefileAfterMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testClosedAttr(self): self.assertTrue(not self.read_file.closed) def _testClosedAttr(self): self.assertTrue(not self.write_file.closed) def testAttributes(self): self.assertEqual(self.read_file.mode, self.read_mode) self.assertEqual(self.read_file.name, self.cli_conn.fileno()) def _testAttributes(self): self.assertEqual(self.write_file.mode, self.write_mode) self.assertEqual(self.write_file.name, self.serv_conn.fileno()) def testRealClose(self): self.read_file.close() self.assertRaises(ValueError, self.read_file.fileno) self.cli_conn.close() self.assertRaises(OSError, self.cli_conn.getsockname) def _testRealClose(self): pass class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase): """Repeat the tests from FileObjectClassTestCase with bufsize==0. In this case (and in this case only), it should be possible to create a file object, read a line from it, create another file object, read another line from it, without loss of data in the first file object's buffer. Note that http.client relies on this when reading multiple requests from the same socket.""" bufsize = 0 # Use unbuffered mode def testUnbufferedReadline(self): # Read a line, create a new file object, read another line with it line = self.read_file.readline() # first line self.assertEqual(line, b"A. " + self.write_msg) # first line self.read_file = self.cli_conn.makefile('rb', 0) line = self.read_file.readline() # second line self.assertEqual(line, b"B. 
" + self.write_msg) # second line def _testUnbufferedReadline(self): self.write_file.write(b"A. " + self.write_msg) self.write_file.write(b"B. " + self.write_msg) self.write_file.flush() def testMakefileClose(self): # The file returned by makefile should keep the socket open... self.cli_conn.close() msg = self.cli_conn.recv(1024) self.assertEqual(msg, self.read_msg) # ...until the file is itself closed self.read_file.close() self.assertRaises(OSError, self.cli_conn.recv, 1024) def _testMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileCloseSocketDestroy(self): refcount_before = sys.getrefcount(self.cli_conn) self.read_file.close() refcount_after = sys.getrefcount(self.cli_conn) self.assertEqual(refcount_before - 1, refcount_after) def _testMakefileCloseSocketDestroy(self): pass # Non-blocking ops # NOTE: to set `read_file` as non-blocking, we must call # `cli_conn.setblocking` and vice-versa (see setUp / clientSetUp). def testSmallReadNonBlocking(self): self.cli_conn.setblocking(False) self.assertEqual(self.read_file.readinto(bytearray(10)), None) self.assertEqual(self.read_file.read(len(self.read_msg) - 3), None) self.evt1.set() self.evt2.wait(1.0) first_seg = self.read_file.read(len(self.read_msg) - 3) if first_seg is None: # Data not arrived (can happen under Windows), wait a bit time.sleep(0.5) first_seg = self.read_file.read(len(self.read_msg) - 3) buf = bytearray(10) n = self.read_file.readinto(buf) self.assertEqual(n, 3) msg = first_seg + buf[:n] self.assertEqual(msg, self.read_msg) self.assertEqual(self.read_file.readinto(bytearray(16)), None) self.assertEqual(self.read_file.read(1), None) def _testSmallReadNonBlocking(self): self.evt1.wait(1.0) self.write_file.write(self.write_msg) self.write_file.flush() self.evt2.set() # Avoid closing the socket before the server test has finished, # otherwise system recv() will return 0 instead of EWOULDBLOCK. self.serv_finished.wait(5.0) def testWriteNonBlocking(self): self.cli_finished.wait(5.0) # The client thread can't skip directly - the SkipTest exception # would appear as a failure. if self.serv_skipped: self.skipTest(self.serv_skipped) def _testWriteNonBlocking(self): self.serv_skipped = None self.serv_conn.setblocking(False) # Try to saturate the socket buffer pipe with repeated large writes. BIG = b"x" * support.SOCK_MAX_SIZE LIMIT = 10 # The first write() succeeds since a chunk of data can be buffered n = self.write_file.write(BIG) self.assertGreater(n, 0) for i in range(LIMIT): n = self.write_file.write(BIG) if n is None: # Succeeded break self.assertGreater(n, 0) else: # Let us know that this test didn't manage to establish # the expected conditions. This is not a failure in itself but, # if it happens repeatedly, the test should be fixed. 
self.serv_skipped = "failed to saturate the socket buffer" class LineBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 1 # Default-buffered for reading; line-buffered for writing class SmallBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 2 # Exercise the buffering code class UnicodeReadFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'wb' write_msg = MSG newline = '' class UnicodeWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'rb' read_msg = MSG write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class UnicodeReadWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class NetworkConnectionTest(object): """Prove network connection.""" def clientSetUp(self): # We're inherited below by BasicTCPTest2, which also inherits # BasicTCPTest, which defines self.port referenced below. self.cli = socket.create_connection((HOST, self.port)) self.serv_conn = self.cli class BasicTCPTest2(NetworkConnectionTest, BasicTCPTest): """Tests that NetworkConnection does not break existing TCP functionality. """ class NetworkConnectionNoServer(unittest.TestCase): class MockSocket(socket.socket): def connect(self, *args): raise TimeoutError('timed out') @contextlib.contextmanager def mocked_socket_module(self): """Return a socket which times out on connect""" old_socket = socket.socket socket.socket = self.MockSocket try: yield finally: socket.socket = old_socket @socket_helper.skip_if_tcp_blackhole def test_connect(self): port = socket_helper.find_unused_port() cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(cli.close) with self.assertRaises(OSError) as cm: cli.connect((HOST, port)) self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) @socket_helper.skip_if_tcp_blackhole def test_create_connection(self): # Issue #9792: errors raised by create_connection() should have # a proper errno attribute. port = socket_helper.find_unused_port() with self.assertRaises(OSError) as cm: socket.create_connection((HOST, port)) # Issue #16257: create_connection() calls getaddrinfo() against # 'localhost'. This may result in an IPV6 addr being returned # as well as an IPV4 one: # >>> socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) # >>> [(2, 2, 0, '', ('127.0.0.1', 41230)), # (26, 2, 0, '', ('::1', 41230, 0, 0))] # # create_connection() enumerates through all the addresses returned # and if it doesn't successfully bind to any of them, it propagates # the last exception it encountered. # # On Solaris, ENETUNREACH is returned in this circumstance instead # of ECONNREFUSED. So, if that errno exists, add it to our list of # expected errnos. 
expected_errnos = socket_helper.get_socket_conn_refused_errs() self.assertIn(cm.exception.errno, expected_errnos) def test_create_connection_all_errors(self): port = socket_helper.find_unused_port() try: socket.create_connection((HOST, port), all_errors=True) except ExceptionGroup as e: eg = e else: self.fail('expected connection to fail') self.assertIsInstance(eg, ExceptionGroup) for e in eg.exceptions: self.assertIsInstance(e, OSError) addresses = socket.getaddrinfo( 'localhost', port, 0, socket.SOCK_STREAM) # assert that we got an exception for each address self.assertEqual(len(addresses), len(eg.exceptions)) def test_create_connection_timeout(self): # Issue #9792: create_connection() should not recast timeout errors # as generic socket errors. with self.mocked_socket_module(): try: socket.create_connection((HOST, 1234)) except TimeoutError: pass except OSError as exc: if socket_helper.IPV6_ENABLED or exc.errno != errno.EAFNOSUPPORT: raise else: self.fail('TimeoutError not raised') class NetworkConnectionAttributesTest(SocketTCPTest, ThreadableTest): cli = None def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.source_port = socket_helper.find_unused_port() def clientTearDown(self): if self.cli is not None: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def _justAccept(self): conn, addr = self.serv.accept() conn.close() testFamily = _justAccept def _testFamily(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT) self.addCleanup(self.cli.close) self.assertEqual(self.cli.family, 2) testSourceAddress = _justAccept def _testSourceAddress(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT, source_address=('', self.source_port)) self.addCleanup(self.cli.close) self.assertEqual(self.cli.getsockname()[1], self.source_port) # The port number being used is sufficient to show that the bind() # call happened. 
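# Note: the timeout tests that follow all reuse _justAccept as the server half
# of each ThreadableTest pair; the behavior under test is exercised entirely by
# the client-side _test* methods.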
testTimeoutDefault = _justAccept def _testTimeoutDefault(self): # passing no explicit timeout uses socket's global default self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(42) try: self.cli = socket.create_connection((HOST, self.port)) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), 42) testTimeoutNone = _justAccept def _testTimeoutNone(self): # None timeout means the same as sock.settimeout(None) self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(30) try: self.cli = socket.create_connection((HOST, self.port), timeout=None) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), None) testTimeoutValueNamed = _justAccept def _testTimeoutValueNamed(self): self.cli = socket.create_connection((HOST, self.port), timeout=30) self.assertEqual(self.cli.gettimeout(), 30) testTimeoutValueNonamed = _justAccept def _testTimeoutValueNonamed(self): self.cli = socket.create_connection((HOST, self.port), 30) self.addCleanup(self.cli.close) self.assertEqual(self.cli.gettimeout(), 30) class NetworkConnectionBehaviourTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def testInsideTimeout(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) time.sleep(3) conn.send(b"done!") testOutsideTimeout = testInsideTimeout def _testInsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port)) data = sock.recv(5) self.assertEqual(data, b"done!") def _testOutsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port), timeout=1) self.assertRaises(TimeoutError, lambda: sock.recv(5)) class TCPTimeoutTest(SocketTCPTest): def testTCPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.accept() self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (TCP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of error (TCP)") except OSError: ok = True except: self.fail("caught unexpected exception (TCP)") if not ok: self.fail("accept() returned success when we did not expect it") @unittest.skipUnless(hasattr(signal, 'alarm'), 'test needs signal.alarm()') def testInterruptedTimeout(self): # XXX I don't know how to do this test on MSWindows or any other # platform that doesn't support signal.alarm() or os.kill(), though # the bug should have existed on all platforms. self.serv.settimeout(5.0) # must be longer than alarm class Alarm(Exception): pass def alarm_handler(signal, frame): raise Alarm old_alarm = signal.signal(signal.SIGALRM, alarm_handler) try: try: signal.alarm(2) # POSIX allows alarm to be up to 1 second early foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of Alarm") except Alarm: pass except BaseException as e: self.fail("caught other exception instead of Alarm:" " %s(%s):\n%s" % (type(e), e, traceback.format_exc())) else: self.fail("nothing caught") finally: signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: # no alarm can be pending. Safe to restore old handler. 
signal.signal(signal.SIGALRM, old_alarm) class UDPTimeoutTest(SocketUDPTest): def testUDPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDP)") except OSError: ok = True except: self.fail("caught unexpected exception (UDP)") if not ok: self.fail("recv() returned success when we did not expect it") @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class UDPLITETimeoutTest(SocketUDPLITETest): def testUDPLITETimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDPLITE)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDPLITE)") except OSError: ok = True except: self.fail("caught unexpected exception (UDPLITE)") if not ok: self.fail("recv() returned success when we did not expect it") class TestExceptions(unittest.TestCase): def testExceptionTree(self): self.assertTrue(issubclass(OSError, Exception)) self.assertTrue(issubclass(socket.herror, OSError)) self.assertTrue(issubclass(socket.gaierror, OSError)) self.assertTrue(issubclass(socket.timeout, OSError)) self.assertIs(socket.error, OSError) self.assertIs(socket.timeout, TimeoutError) def test_setblocking_invalidfd(self): # Regression test for issue #28471 sock0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, 0, sock0.fileno()) sock0.close() self.addCleanup(sock.detach) with self.assertRaises(OSError): sock.setblocking(False) @unittest.skipUnless(sys.platform == 'linux', 'Linux specific test') class TestLinuxAbstractNamespace(unittest.TestCase): UNIX_PATH_MAX = 108 def testLinuxAbstractNamespace(self): address = b"\x00python-test-hello\x00\xff" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind(address) s1.listen() with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.connect(s1.getsockname()) with s1.accept()[0] as s3: self.assertEqual(s1.getsockname(), address) self.assertEqual(s2.getpeername(), address) def testMaxName(self): address = b"\x00" + b"h" * (self.UNIX_PATH_MAX - 1) with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(address) self.assertEqual(s.getsockname(), address) def testNameOverflow(self): address = "\x00" + "h" * self.UNIX_PATH_MAX with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: self.assertRaises(OSError, s.bind, address) def testStrName(self): # Check that an abstract name can be passed as a string. s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: s.bind("\x00python\x00test\x00") self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") finally: s.close() def testBytearrayName(self): # Check that an abstract name can be passed as a bytearray. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(bytearray(b"\x00python\x00test\x00")) self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") def testAutobind(self): # Check that binding to an empty string binds to an available address # in the abstract namespace as specified in unix(7) "Autobind feature". 
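# unix(7) documents the autobound abstract address as a leading NUL byte
# followed by 5 bytes from the set [0-9a-f]; the regex below matches exactly
# that form as returned by getsockname() on an autobound socket.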
abstract_address = b"^\0[0-9a-f]{5}" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind("") self.assertRegex(s1.getsockname(), abstract_address) # Each socket is bound to a different abstract address. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.bind("") self.assertRegex(s2.getsockname(), abstract_address) self.assertNotEqual(s1.getsockname(), s2.getsockname()) @unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX') class TestUnixDomain(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) def tearDown(self): self.sock.close() def encoded(self, path): # Return the given path encoded in the file system encoding, # or skip the test if this is not possible. try: return os.fsencode(path) except UnicodeEncodeError: self.skipTest( "Pathname {0!a} cannot be represented in file " "system encoding {1!r}".format( path, sys.getfilesystemencoding())) def bind(self, sock, path): # Bind the socket try: socket_helper.bind_unix_socket(sock, path) except OSError as e: if str(e) == "AF_UNIX path too long": self.skipTest( "Pathname {0!a} is too long to serve as an AF_UNIX path" .format(path)) else: raise def testUnbound(self): # Issue #30205 (note getsockname() can return None on OS X) self.assertIn(self.sock.getsockname(), ('', None)) def testStrAddr(self): # Test binding to and retrieving a normal string pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testBytesAddr(self): # Test binding to a bytes pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, self.encoded(path)) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testSurrogateescapeBind(self): # Test binding to a valid non-ASCII pathname, with the # non-ASCII bytes supplied using surrogateescape encoding. path = os.path.abspath(os_helper.TESTFN_UNICODE) b = self.encoded(path) self.bind(self.sock, b.decode("ascii", "surrogateescape")) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testUnencodableAddr(self): # Test binding to a pathname that cannot be encoded in the # file system encoding. if os_helper.TESTFN_UNENCODABLE is None: self.skipTest("No unencodable filename available") path = os.path.abspath(os_helper.TESTFN_UNENCODABLE) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) @unittest.skipIf(sys.platform == 'linux', 'Linux specific test') def testEmptyAddress(self): # Test that binding empty address fails. self.assertRaises(OSError, self.sock.bind, "") class BufferIOTest(SocketConnectedTest): """ Test the buffer versions of socket.recv() and socket.send(). 
""" def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecvIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvIntoBytearray(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoBytearray = _testRecvIntoArray def testRecvIntoMemoryview(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoMemoryview = _testRecvIntoArray def testRecvFromIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvFromIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvFromIntoBytearray(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoBytearray = _testRecvFromIntoArray def testRecvFromIntoMemoryview(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoMemoryview = _testRecvFromIntoArray def testRecvFromIntoSmallBuffer(self): # See issue #20246. buf = bytearray(8) self.assertRaises(ValueError, self.cli_conn.recvfrom_into, buf, 1024) def _testRecvFromIntoSmallBuffer(self): self.serv_conn.send(MSG) def testRecvFromIntoEmptyBuffer(self): buf = bytearray() self.cli_conn.recvfrom_into(buf) self.cli_conn.recvfrom_into(buf, 0) _testRecvFromIntoEmptyBuffer = _testRecvFromIntoArray TIPC_STYPE = 2000 TIPC_LOWER = 200 TIPC_UPPER = 210 def isTipcAvailable(): """Check if the TIPC module is loaded The TIPC module is not loaded automatically on Ubuntu and probably other Linux distros. """ if not hasattr(socket, "AF_TIPC"): return False try: f = open("/proc/modules", encoding="utf-8") except (FileNotFoundError, IsADirectoryError, PermissionError): # It's ok if the file does not exist, is a directory or if we # have not the permission to read it. 
return False with f: for line in f: if line.startswith("tipc "): return True return False @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCTest(unittest.TestCase): def testRDM(self): srv = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) cli = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) self.addCleanup(srv.close) self.addCleanup(cli.close) srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) srv.bind(srvaddr) sendaddr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) cli.sendto(MSG, sendaddr) msg, recvaddr = srv.recvfrom(1024) self.assertEqual(cli.getsockname(), recvaddr) self.assertEqual(msg, MSG) @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCThreadableTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName = 'runTest'): unittest.TestCase.__init__(self, methodName = methodName) ThreadableTest.__init__(self) def setUp(self): self.srv = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.srv.close) self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) self.srv.bind(srvaddr) self.srv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.srv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): # There is a hittable race between serverExplicitReady() and the # accept() call; sleep a little while to avoid it, otherwise # we could get an exception time.sleep(0.1) self.cli = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.cli.close) addr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) self.cli.connect(addr) self.cliaddr = self.cli.getsockname() def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) self.assertEqual(self.cliaddr, self.connaddr) def _testStream(self): self.cli.send(MSG) self.cli.close() class ContextManagersTest(ThreadedTCPSocketTest): def _testSocketClass(self): # base test with socket.socket() as sock: self.assertFalse(sock._closed) self.assertTrue(sock._closed) # close inside with block with socket.socket() as sock: sock.close() self.assertTrue(sock._closed) # exception inside with block with socket.socket() as sock: self.assertRaises(OSError, sock.sendall, b'foo') self.assertTrue(sock._closed) def testCreateConnectionBase(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionBase(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: self.assertFalse(sock._closed) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') self.assertTrue(sock._closed) def testCreateConnectionClose(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionClose(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: sock.close() self.assertTrue(sock._closed) self.assertRaises(OSError, sock.sendall, b'foo') class InheritanceTest(unittest.TestCase): @unittest.skipUnless(hasattr(socket, "SOCK_CLOEXEC"), "SOCK_CLOEXEC not defined") @support.requires_linux_version(2, 6, 28) def test_SOCK_CLOEXEC(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC) as s: self.assertEqual(s.type, 
socket.SOCK_STREAM) self.assertFalse(s.get_inheritable()) def test_default_inheritable(self): sock = socket.socket() with sock: self.assertEqual(sock.get_inheritable(), False) def test_dup(self): sock = socket.socket() with sock: newsock = sock.dup() sock.close() with newsock: self.assertEqual(newsock.get_inheritable(), False) def test_set_inheritable(self): sock = socket.socket() with sock: sock.set_inheritable(True) self.assertEqual(sock.get_inheritable(), True) sock.set_inheritable(False) self.assertEqual(sock.get_inheritable(), False) @unittest.skipIf(fcntl is None, "need fcntl") def test_get_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(sock.get_inheritable(), False) # clear FD_CLOEXEC flag flags = fcntl.fcntl(fd, fcntl.F_GETFD) flags &= ~fcntl.FD_CLOEXEC fcntl.fcntl(fd, fcntl.F_SETFD, flags) self.assertEqual(sock.get_inheritable(), True) @unittest.skipIf(fcntl is None, "need fcntl") def test_set_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, fcntl.FD_CLOEXEC) sock.set_inheritable(True) self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, 0) def test_socketpair(self): s1, s2 = socket.socketpair() self.addCleanup(s1.close) self.addCleanup(s2.close) self.assertEqual(s1.get_inheritable(), False) self.assertEqual(s2.get_inheritable(), False) @unittest.skipUnless(hasattr(socket, "SOCK_NONBLOCK"), "SOCK_NONBLOCK not defined") class NonblockConstantTest(unittest.TestCase): def checkNonblock(self, s, nonblock=True, timeout=0.0): if nonblock: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), timeout) self.assertTrue( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) if timeout == 0: # timeout == 0: means that getblocking() must be False. self.assertFalse(s.getblocking()) else: # If timeout > 0, the socket will be in a "blocking" mode # from the standpoint of the Python API. For Python socket # object, "blocking" means that operations like 'sock.recv()' # will block. Internally, file descriptors for # "blocking" Python sockets *with timeouts* are in a # *non-blocking* mode, and 'sock.recv()' uses 'select()' # and handles EWOULDBLOCK/EAGAIN to enforce the timeout. 
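# Put differently: settimeout(None) is equivalent to setblocking(True),
# settimeout(0.0) to setblocking(False), and settimeout(t) with t > 0 keeps
# getblocking() returning True even though O_NONBLOCK is set on the fd,
# which is exactly what the assertions in this helper verify.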
self.assertTrue(s.getblocking()) else: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), None) self.assertFalse( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) self.assertTrue(s.getblocking()) @support.requires_linux_version(2, 6, 28) def test_SOCK_NONBLOCK(self): # a lot of it seems silly and redundant, but I wanted to test that # changing back and forth worked ok with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) as s: self.checkNonblock(s) s.setblocking(True) self.checkNonblock(s, nonblock=False) s.setblocking(False) self.checkNonblock(s) s.settimeout(None) self.checkNonblock(s, nonblock=False) s.settimeout(2.0) self.checkNonblock(s, timeout=2.0) s.setblocking(True) self.checkNonblock(s, nonblock=False) # defaulttimeout t = socket.getdefaulttimeout() socket.setdefaulttimeout(0.0) with socket.socket() as s: self.checkNonblock(s) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(2.0) with socket.socket() as s: self.checkNonblock(s, timeout=2.0) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(t) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(multiprocessing, "need multiprocessing") class TestSocketSharing(SocketTCPTest): # This must be classmethod and not staticmethod or multiprocessing # won't be able to bootstrap it. @classmethod def remoteProcessServer(cls, q): # Recreate socket from shared data sdata = q.get() message = q.get() s = socket.fromshare(sdata) s2, c = s.accept() # Send the message s2.sendall(message) s2.close() s.close() def testShare(self): # Transfer the listening server socket to another process # and service it from there. # Create process: q = multiprocessing.Queue() p = multiprocessing.Process(target=self.remoteProcessServer, args=(q,)) p.start() # Get the shared socket data data = self.serv.share(p.pid) # Pass the shared socket to the other process addr = self.serv.getsockname() self.serv.close() q.put(data) # The data that the server will send us message = b"slapmahfro" q.put(message) # Connect s = socket.create_connection(addr) # listen for the data m = [] while True: data = s.recv(100) if not data: break m.append(data) s.close() received = b"".join(m) self.assertEqual(received, message) p.join() def testShareLength(self): data = self.serv.share(os.getpid()) self.assertRaises(ValueError, socket.fromshare, data[:-1]) self.assertRaises(ValueError, socket.fromshare, data+b"foo") def compareSockets(self, org, other): # socket sharing is expected to work only for blocking socket # since the internal python timeout value isn't transferred. self.assertEqual(org.gettimeout(), None) self.assertEqual(org.gettimeout(), other.gettimeout()) self.assertEqual(org.family, other.family) self.assertEqual(org.type, other.type) # If the user specified "0" for proto, then # internally windows will have picked the correct value. # Python introspection on the socket however will still return # 0. For the shared socket, the python value is recreated # from the actual value, so it may not compare correctly. 
if org.proto != 0: self.assertEqual(org.proto, other.proto) def testShareLocal(self): data = self.serv.share(os.getpid()) s = socket.fromshare(data) try: self.compareSockets(self.serv, s) finally: s.close() def testTypes(self): families = [socket.AF_INET, socket.AF_INET6] types = [socket.SOCK_STREAM, socket.SOCK_DGRAM] for f in families: for t in types: try: source = socket.socket(f, t) except OSError: continue # This combination is not supported try: data = source.share(os.getpid()) shared = socket.fromshare(data) try: self.compareSockets(source, shared) finally: shared.close() finally: source.close() class SendfileUsingSendTest(ThreadedTCPSocketTest): """ Test the send() implementation of socket.sendfile(). """ FILESIZE = (10 * 1024 * 1024) # 10 MiB BUFSIZE = 8192 FILEDATA = b"" TIMEOUT = support.LOOPBACK_TIMEOUT @classmethod def setUpClass(cls): def chunks(total, step): assert total >= step while total > step: yield step total -= step if total: yield total chunk = b"".join([random.choice(string.ascii_letters).encode() for i in range(cls.BUFSIZE)]) with open(os_helper.TESTFN, 'wb') as f: for csize in chunks(cls.FILESIZE, cls.BUFSIZE): f.write(chunk) with open(os_helper.TESTFN, 'rb') as f: cls.FILEDATA = f.read() assert len(cls.FILEDATA) == cls.FILESIZE @classmethod def tearDownClass(cls): os_helper.unlink(os_helper.TESTFN) def accept_conn(self): self.serv.settimeout(support.LONG_TIMEOUT) conn, addr = self.serv.accept() conn.settimeout(self.TIMEOUT) self.addCleanup(conn.close) return conn def recv_data(self, conn): received = [] while True: chunk = conn.recv(self.BUFSIZE) if not chunk: break received.append(chunk) return b''.join(received) def meth_from_sock(self, sock): # Depending on the mixin class being run return either send() # or sendfile() method implementation. 
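# SendfileUsingSendfileTest (further below) overrides meth_from_sock() to
# return _sendfile_use_sendfile, so the same client/server test bodies
# exercise both the send()-based and the os.sendfile()-based implementations.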
return getattr(sock, "_sendfile_use_send") # regular file def _testRegularFile(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) def testRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # non regular file def _testNonRegularFile(self): address = self.serv.getsockname() file = io.BytesIO(self.FILEDATA) with socket.create_connection(address) as sock, file as file: sent = sock.sendfile(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) self.assertRaises(socket._GiveupOnSendfile, sock._sendfile_use_sendfile, file) def testNonRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # empty file def _testEmptyFileSend(self): address = self.serv.getsockname() filename = os_helper.TESTFN + "2" with open(filename, 'wb'): self.addCleanup(os_helper.unlink, filename) file = open(filename, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, 0) self.assertEqual(file.tell(), 0) def testEmptyFileSend(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(data, b"") # offset def _testOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file, offset=5000) self.assertEqual(sent, self.FILESIZE - 5000) self.assertEqual(file.tell(), self.FILESIZE) def testOffset(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE - 5000) self.assertEqual(data, self.FILEDATA[5000:]) # count def _testCount(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 5000007 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCount(self): count = 5000007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count small def _testCountSmall(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 1 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCountSmall(self): count = 1 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count + offset def _testCountWithOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address, timeout=2) as sock, file as file: count = 100007 meth = self.meth_from_sock(sock) sent = meth(file, offset=2007, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count + 2007) def testCountWithOffset(self): count = 100007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) 
self.assertEqual(data, self.FILEDATA[2007:count+2007]) # non blocking sockets are not supposed to work def _testNonBlocking(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: sock.setblocking(False) meth = self.meth_from_sock(sock) self.assertRaises(ValueError, meth, file) self.assertRaises(ValueError, sock.sendfile, file) def testNonBlocking(self): conn = self.accept_conn() if conn.recv(8192): self.fail('was not supposed to receive any data') # timeout (non-triggered) def _testWithTimeout(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) def testWithTimeout(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # timeout (triggered) def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(os_helper.TESTFN, 'rb') as file: with socket.create_connection(address) as sock: sock.settimeout(0.01) meth = self.meth_from_sock(sock) self.assertRaises(TimeoutError, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) # bpo-45212: the wait here needs to be longer than the client-side timeout (0.01s) time.sleep(1) # errors def _test_errors(self): pass def test_errors(self): with open(os_helper.TESTFN, 'rb') as file: with socket.socket(type=socket.SOCK_DGRAM) as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "SOCK_STREAM", meth, file) with open(os_helper.TESTFN, encoding="utf-8") as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "binary mode", meth, file) with open(os_helper.TESTFN, 'rb') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex(TypeError, "positive integer", meth, file, count='2') self.assertRaisesRegex(TypeError, "positive integer", meth, file, count=0.1) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=0) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=-1) @unittest.skipUnless(hasattr(os, "sendfile"), 'os.sendfile() required for this test.') class SendfileUsingSendfileTest(SendfileUsingSendTest): """ Test the sendfile() implementation of socket.sendfile(). 
""" def meth_from_sock(self, sock): return getattr(sock, "_sendfile_use_sendfile") @unittest.skipUnless(HAVE_SOCKET_ALG, 'AF_ALG required') class LinuxKernelCryptoAPI(unittest.TestCase): # tests for AF_ALG def create_alg(self, typ, name): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) try: sock.bind((typ, name)) except FileNotFoundError as e: # type / algorithm is not available sock.close() raise unittest.SkipTest(str(e), typ, name) else: return sock # bpo-31705: On kernel older than 4.5, sendto() failed with ENOKEY, # at least on ppc64le architecture @support.requires_linux_version(4, 5) def test_sha256(self): expected = bytes.fromhex("ba7816bf8f01cfea414140de5dae2223b00361a396" "177a9cb410ff61f20015ad") with self.create_alg('hash', 'sha256') as algo: op, _ = algo.accept() with op: op.sendall(b"abc") self.assertEqual(op.recv(512), expected) op, _ = algo.accept() with op: op.send(b'a', socket.MSG_MORE) op.send(b'b', socket.MSG_MORE) op.send(b'c', socket.MSG_MORE) op.send(b'') self.assertEqual(op.recv(512), expected) def test_hmac_sha1(self): # gh-109396: In FIPS mode, Linux 6.5 requires a key # of at least 112 bits. Use a key of 152 bits. key = b"Python loves AF_ALG" data = b"what do ya want for nothing?" expected = bytes.fromhex("193dbb43c6297b47ea6277ec0ce67119a3f3aa66") with self.create_alg('hash', 'hmac(sha1)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendall(data) self.assertEqual(op.recv(512), expected) # Although it should work with 3.19 and newer the test blocks on # Ubuntu 15.10 with Kernel 4.2.0-19. @support.requires_linux_version(4, 3) def test_aes_cbc(self): key = bytes.fromhex('06a9214036b8a15b512e03d534120006') iv = bytes.fromhex('3dafba429d9eb430b422da802c9fac41') msg = b"Single block msg" ciphertext = bytes.fromhex('e353779c1079aeb82708942dbe77181a') msglen = len(msg) with self.create_alg('skcipher', 'cbc(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, flags=socket.MSG_MORE) op.sendall(msg) self.assertEqual(op.recv(msglen), ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=iv) self.assertEqual(op.recv(msglen), msg) # long message multiplier = 1024 longmsg = [msg] * multiplier op, _ = algo.accept() with op: op.sendmsg_afalg(longmsg, op=socket.ALG_OP_ENCRYPT, iv=iv) enc = op.recv(msglen * multiplier) self.assertEqual(len(enc), msglen * multiplier) self.assertEqual(enc[:msglen], ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([enc], op=socket.ALG_OP_DECRYPT, iv=iv) dec = op.recv(msglen * multiplier) self.assertEqual(len(dec), msglen * multiplier) self.assertEqual(dec, msg * multiplier) @support.requires_linux_version(4, 9) # see issue29324 def test_aead_aes_gcm(self): key = bytes.fromhex('c939cc13397c1d37de6ae0e1cb7c423c') iv = bytes.fromhex('b3d8cc017cbb89b39e0f67e2') plain = bytes.fromhex('c3b3c41f113a31b73d9a5cd432103069') assoc = bytes.fromhex('24825602bd12a984e0092d3e448eda5f') expected_ct = bytes.fromhex('93fe7d9e9bfd10348a5606e5cafa7354') expected_tag = bytes.fromhex('0032a1dc85f1c9786925a2e71d8272dd') taglen = len(expected_tag) assoclen = len(assoc) with self.create_alg('aead', 'gcm(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, taglen) # send assoc, plain and tag buffer in separate steps op, _ = algo.accept() with op: 
op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen, flags=socket.MSG_MORE) op.sendall(assoc, socket.MSG_MORE) op.sendall(plain) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # now with msg op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg_afalg([msg], op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # create anc data manually pack_uint32 = struct.Struct('I').pack op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg( [msg], ([socket.SOL_ALG, socket.ALG_SET_OP, pack_uint32(socket.ALG_OP_ENCRYPT)], [socket.SOL_ALG, socket.ALG_SET_IV, pack_uint32(len(iv)) + iv], [socket.SOL_ALG, socket.ALG_SET_AEAD_ASSOCLEN, pack_uint32(assoclen)], ) ) res = op.recv(len(msg) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # decrypt and verify op, _ = algo.accept() with op: msg = assoc + expected_ct + expected_tag op.sendmsg_afalg([msg], op=socket.ALG_OP_DECRYPT, iv=iv, assoclen=assoclen) res = op.recv(len(msg) - taglen) self.assertEqual(plain, res[assoclen:]) @support.requires_linux_version(4, 3) # see test_aes_cbc def test_drbg_pr_sha256(self): # deterministic random bit generator, prediction resistance, sha256 with self.create_alg('rng', 'drbg_pr_sha256') as algo: extra_seed = os.urandom(32) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, extra_seed) op, _ = algo.accept() with op: rn = op.recv(32) self.assertEqual(len(rn), 32) def test_sendmsg_afalg_args(self): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) with sock: with self.assertRaises(TypeError): sock.sendmsg_afalg() with self.assertRaises(TypeError): sock.sendmsg_afalg(op=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(1) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=-1) def test_length_restriction(self): # bpo-35050, off-by-one error in length check sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) self.addCleanup(sock.close) # salg_type[14] with self.assertRaises(FileNotFoundError): sock.bind(("t" * 13, "name")) with self.assertRaisesRegex(ValueError, "type too long"): sock.bind(("t" * 14, "name")) # salg_name[64] with self.assertRaises(FileNotFoundError): sock.bind(("type", "n" * 63)) with self.assertRaisesRegex(ValueError, "name too long"): sock.bind(("type", "n" * 64)) @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') class TestMacOSTCPFlags(unittest.TestCase): def test_tcp_keepalive(self): self.assertTrue(socket.TCP_KEEPALIVE) @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") class TestMSWindowsTCPFlags(unittest.TestCase): knownTCPFlags = { # available since long time ago 'TCP_MAXSEG', 'TCP_NODELAY', # available starting with Windows 10 1607 'TCP_FASTOPEN', # available starting with Windows 10 1703 'TCP_KEEPCNT', # available starting with Windows 10 1709 'TCP_KEEPIDLE', 'TCP_KEEPINTVL' } def test_new_tcp_flags(self): provided = [s for s in dir(socket) if s.startswith('TCP')] unknown = [s for s in provided if s not in self.knownTCPFlags] self.assertEqual([], unknown, "New TCP flags were discovered. 
See bpo-32394 for more information") class CreateServerTest(unittest.TestCase): def test_address(self): port = socket_helper.find_unused_port() with socket.create_server(("127.0.0.1", port)) as sock: self.assertEqual(sock.getsockname()[0], "127.0.0.1") self.assertEqual(sock.getsockname()[1], port) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", port), family=socket.AF_INET6) as sock: self.assertEqual(sock.getsockname()[0], "::1") self.assertEqual(sock.getsockname()[1], port) def test_family_and_type(self): with socket.create_server(("127.0.0.1", 0)) as sock: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", 0), family=socket.AF_INET6) as s: self.assertEqual(s.family, socket.AF_INET6) self.assertEqual(sock.type, socket.SOCK_STREAM) def test_reuse_port(self): if not hasattr(socket, "SO_REUSEPORT"): with self.assertRaises(ValueError): socket.create_server(("localhost", 0), reuse_port=True) else: with socket.create_server(("localhost", 0)) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertEqual(opt, 0) with socket.create_server(("localhost", 0), reuse_port=True) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertNotEqual(opt, 0) @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6_only_default(self): with socket.create_server(("::1", 0), family=socket.AF_INET6) as sock: assert sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dualstack_ipv6_family(self): with socket.create_server(("::1", 0), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.assertEqual(sock.family, socket.AF_INET6) class CreateServerFunctionalTest(unittest.TestCase): timeout = support.LOOPBACK_TIMEOUT def echo_server(self, sock): def run(sock): with sock: conn, _ = sock.accept() with conn: event.wait(self.timeout) msg = conn.recv(1024) if not msg: return conn.sendall(msg) event = threading.Event() sock.settimeout(self.timeout) thread = threading.Thread(target=run, args=(sock, )) thread.start() self.addCleanup(thread.join, self.timeout) event.set() def echo_client(self, addr, family): with socket.socket(family=family) as sock: sock.settimeout(self.timeout) sock.connect(addr) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') def test_tcp4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port)) as sock: self.echo_server(sock) self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_tcp6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) # --- dual stack tests @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) 
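# A listening socket created with family=AF_INET6 and dualstack_ipv6=True also
# accepts plain IPv4 connections, so the client below deliberately connects
# with AF_INET to 127.0.0.1.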
self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) @requireAttrs(socket, "send_fds") @requireAttrs(socket, "recv_fds") @requireAttrs(socket, "AF_UNIX") class SendRecvFdsTests(unittest.TestCase): def testSendAndRecvFds(self): def close_pipes(pipes): for fd1, fd2 in pipes: os.close(fd1) os.close(fd2) def close_fds(fds): for fd in fds: os.close(fd) # send 10 file descriptors pipes = [os.pipe() for _ in range(10)] self.addCleanup(close_pipes, pipes) fds = [rfd for rfd, wfd in pipes] # use a UNIX socket pair to exchange file descriptors locally sock1, sock2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) with sock1, sock2: socket.send_fds(sock1, [MSG], fds) # request more data and file descriptors than expected msg, fds2, flags, addr = socket.recv_fds(sock2, len(MSG) * 2, len(fds) * 2) self.addCleanup(close_fds, fds2) self.assertEqual(msg, MSG) self.assertEqual(len(fds2), len(fds)) self.assertEqual(flags, 0) # don't test addr # test that file descriptors are connected for index, fds in enumerate(pipes): rfd, wfd = fds os.write(wfd, str(index).encode()) for index, rfd in enumerate(fds2): data = os.read(rfd, 100) self.assertEqual(data, str(index).encode()) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_ssl.py000066400000000000000000006563631471441230600205610ustar00rootroot00000000000000# Test the support for SSL and sockets import sys import unittest import unittest.mock from test import support from test.support import import_helper from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper from test.support import asyncore import array import re import socket import select import struct import time import enum import gc import http.client import os import errno import pprint import urllib.request import threading import traceback import weakref import platform import sysconfig import functools try: import ctypes except ImportError: ctypes = None ssl = import_helper.import_module("ssl") import _ssl from ssl import TLSVersion, _TLSContentType, _TLSMessageType, _TLSAlertType Py_DEBUG_WIN32 = support.Py_DEBUG and sys.platform == 'win32' PROTOCOLS = sorted(ssl._PROTOCOL_NAMES) HOST = socket_helper.HOST IS_OPENSSL_3_0_0 = ssl.OPENSSL_VERSION_INFO >= (3, 0, 0) PY_SSL_DEFAULT_CIPHERS = sysconfig.get_config_var('PY_SSL_DEFAULT_CIPHERS') PROTOCOL_TO_TLS_VERSION = {} for proto, ver in ( ("PROTOCOL_SSLv3", "SSLv3"), ("PROTOCOL_TLSv1", "TLSv1"), ("PROTOCOL_TLSv1_1", "TLSv1_1"), ): try: proto = getattr(ssl, proto) ver = getattr(ssl.TLSVersion, ver) except AttributeError: continue PROTOCOL_TO_TLS_VERSION[proto] = ver def data_file(*name): return os.path.join(os.path.dirname(__file__), "certdata", *name) # The custom key and certificate files used in test_ssl are generated # using Lib/test/certdata/make_ssl_certs.py. # Other certificates are simply fetched from the internet servers they # are meant to authenticate. 
CERTFILE = data_file("keycert.pem") BYTES_CERTFILE = os.fsencode(CERTFILE) ONLYCERT = data_file("ssl_cert.pem") ONLYKEY = data_file("ssl_key.pem") BYTES_ONLYCERT = os.fsencode(ONLYCERT) BYTES_ONLYKEY = os.fsencode(ONLYKEY) CERTFILE_PROTECTED = data_file("keycert.passwd.pem") ONLYKEY_PROTECTED = data_file("ssl_key.passwd.pem") KEY_PASSWORD = "somepass" CAPATH = data_file("capath") BYTES_CAPATH = os.fsencode(CAPATH) CAFILE_NEURONIO = data_file("capath", "4e1295a3.0") CAFILE_CACERT = data_file("capath", "5ed36f99.0") CERTFILE_INFO = { 'issuer': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'notAfter': 'Aug 26 14:23:15 2028 GMT', 'notBefore': 'Aug 29 14:23:15 2018 GMT', 'serialNumber': '98A7CF88C74A32ED', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } # empty CRL CRLFILE = data_file("revocation.crl") # Two keys and certs signed by the same CA (for SNI tests) SIGNED_CERTFILE = data_file("keycert3.pem") SIGNED_CERTFILE_HOSTNAME = 'localhost' SIGNED_CERTFILE_INFO = { 'OCSP': ('http://testca.pythontest.net/testca/ocsp/',), 'caIssuers': ('http://testca.pythontest.net/testca/pycacert.cer',), 'crlDistributionPoints': ('http://testca.pythontest.net/testca/revocation.crl',), 'issuer': ((('countryName', 'XY'),), (('organizationName', 'Python Software Foundation CA'),), (('commonName', 'our-ca-server'),)), 'notAfter': 'Oct 28 14:23:16 2037 GMT', 'notBefore': 'Aug 29 14:23:16 2018 GMT', 'serialNumber': 'CB2D80995A69525C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } SIGNED_CERTFILE2 = data_file("keycert4.pem") SIGNED_CERTFILE2_HOSTNAME = 'fakehostname' SIGNED_CERTFILE_ECC = data_file("keycertecc.pem") SIGNED_CERTFILE_ECC_HOSTNAME = 'localhost-ecc' # Same certificate as pycacert.pem, but without extra text in file SIGNING_CA = data_file("capath", "ceff1710.0") # cert with all kinds of subject alt names ALLSANFILE = data_file("allsans.pem") IDNSANSFILE = data_file("idnsans.pem") NOSANFILE = data_file("nosan.pem") NOSAN_HOSTNAME = 'localhost' REMOTE_HOST = "self-signed.pythontest.net" EMPTYCERT = data_file("nullcert.pem") BADCERT = data_file("badcert.pem") NONEXISTINGCERT = data_file("XXXnonexisting.pem") BADKEY = data_file("badkey.pem") NOKIACERT = data_file("nokia.pem") NULLBYTECERT = data_file("nullbytecert.pem") TALOS_INVALID_CRLDP = data_file("talos-2019-0758.pem") DHFILE = data_file("ffdh3072.pem") BYTES_DHFILE = os.fsencode(DHFILE) # Not defined in all versions of OpenSSL OP_NO_COMPRESSION = getattr(ssl, "OP_NO_COMPRESSION", 0) OP_SINGLE_DH_USE = getattr(ssl, "OP_SINGLE_DH_USE", 0) OP_SINGLE_ECDH_USE = getattr(ssl, "OP_SINGLE_ECDH_USE", 0) OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0) OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0) # Ubuntu has patched OpenSSL and changed behavior of security level 2 # see https://bugs.python.org/issue41561#msg389003 def is_ubuntu(): try: # Assume that any references of "ubuntu" implies Ubuntu-like distro # The workaround is not required for 18.04, but doesn't hurt either. 
with open("/etc/os-release", encoding="utf-8") as f: return "ubuntu" in f.read() except FileNotFoundError: return False if is_ubuntu(): def seclevel_workaround(*ctxs): """"Lower security level to '1' and allow all ciphers for TLS 1.0/1""" for ctx in ctxs: if ( hasattr(ctx, "minimum_version") and ctx.minimum_version <= ssl.TLSVersion.TLSv1_1 ): ctx.set_ciphers("@SECLEVEL=1:ALL") else: def seclevel_workaround(*ctxs): pass def has_tls_protocol(protocol): """Check if a TLS protocol is available and enabled :param protocol: enum ssl._SSLMethod member or name :return: bool """ if isinstance(protocol, str): assert protocol.startswith('PROTOCOL_') protocol = getattr(ssl, protocol, None) if protocol is None: return False if protocol in { ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT }: # auto-negotiate protocols are always available return True name = protocol.name return has_tls_version(name[len('PROTOCOL_'):]) @functools.lru_cache def has_tls_version(version): """Check if a TLS/SSL version is enabled :param version: TLS version name or ssl.TLSVersion member :return: bool """ if isinstance(version, str): version = ssl.TLSVersion.__members__[version] # check compile time flags like ssl.HAS_TLSv1_2 if not getattr(ssl, f'HAS_{version.name}'): return False if IS_OPENSSL_3_0_0 and version < ssl.TLSVersion.TLSv1_2: # bpo43791: 3.0.0-alpha14 fails with TLSV1_ALERT_INTERNAL_ERROR return False # check runtime and dynamic crypto policy settings. A TLS version may # be compiled in but disabled by a policy or config option. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) if ( hasattr(ctx, 'minimum_version') and ctx.minimum_version != ssl.TLSVersion.MINIMUM_SUPPORTED and version < ctx.minimum_version ): return False if ( hasattr(ctx, 'maximum_version') and ctx.maximum_version != ssl.TLSVersion.MAXIMUM_SUPPORTED and version > ctx.maximum_version ): return False return True def requires_tls_version(version): """Decorator to skip tests when a required TLS version is not available :param version: TLS version name or ssl.TLSVersion member :return: """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if not has_tls_version(version): raise unittest.SkipTest(f"{version} is not available.") else: return func(*args, **kw) return wrapper return decorator def handle_error(prefix): exc_format = ' '.join(traceback.format_exception(sys.exception())) if support.verbose: sys.stdout.write(prefix + exc_format) def utc_offset(): #NOTE: ignore issues like #1647654 # local time = utc time + utc offset if time.daylight and time.localtime().tm_isdst > 0: return -time.altzone # seconds return -time.timezone ignore_deprecation = warnings_helper.ignore_warnings( category=DeprecationWarning ) def test_wrap_socket(sock, *, cert_reqs=ssl.CERT_NONE, ca_certs=None, ciphers=None, certfile=None, keyfile=None, **kwargs): if not kwargs.get("server_side"): kwargs["server_hostname"] = SIGNED_CERTFILE_HOSTNAME context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) else: context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) if cert_reqs is not None: if cert_reqs == ssl.CERT_NONE: context.check_hostname = False context.verify_mode = cert_reqs if ca_certs is not None: context.load_verify_locations(ca_certs) if certfile is not None or keyfile is not None: context.load_cert_chain(certfile, keyfile) if ciphers is not None: context.set_ciphers(ciphers) return context.wrap_socket(sock, **kwargs) def testing_context(server_cert=SIGNED_CERTFILE, *, server_chain=True): """Create context client_context, server_context, hostname = 
testing_context() """ if server_cert == SIGNED_CERTFILE: hostname = SIGNED_CERTFILE_HOSTNAME elif server_cert == SIGNED_CERTFILE2: hostname = SIGNED_CERTFILE2_HOSTNAME elif server_cert == NOSANFILE: hostname = NOSAN_HOSTNAME else: raise ValueError(server_cert) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(server_cert) if server_chain: server_context.load_verify_locations(SIGNING_CA) return client_context, server_context, hostname class BasicSocketTests(unittest.TestCase): def test_constants(self): ssl.CERT_NONE ssl.CERT_OPTIONAL ssl.CERT_REQUIRED ssl.OP_CIPHER_SERVER_PREFERENCE ssl.OP_SINGLE_DH_USE ssl.OP_SINGLE_ECDH_USE ssl.OP_NO_COMPRESSION self.assertEqual(ssl.HAS_SNI, True) self.assertEqual(ssl.HAS_ECDH, True) self.assertEqual(ssl.HAS_TLSv1_2, True) self.assertEqual(ssl.HAS_TLSv1_3, True) ssl.OP_NO_SSLv2 ssl.OP_NO_SSLv3 ssl.OP_NO_TLSv1 ssl.OP_NO_TLSv1_3 ssl.OP_NO_TLSv1_1 ssl.OP_NO_TLSv1_2 self.assertEqual(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv23) def test_options(self): # gh-106687: SSL options values are unsigned integer (uint64_t) for name in dir(ssl): if not name.startswith('OP_'): continue with self.subTest(option=name): value = getattr(ssl, name) self.assertGreaterEqual(value, 0, f"ssl.{name}") def test_ssl_types(self): ssl_types = [ _ssl._SSLContext, _ssl._SSLSocket, _ssl.MemoryBIO, _ssl.Certificate, _ssl.SSLSession, _ssl.SSLError, ] for ssl_type in ssl_types: with self.subTest(ssl_type=ssl_type): with self.assertRaisesRegex(TypeError, "immutable type"): ssl_type.value = None support.check_disallow_instantiation(self, _ssl.Certificate) def test_private_init(self): with self.assertRaisesRegex(TypeError, "public constructor"): with socket.socket() as s: ssl.SSLSocket(s) def test_str_for_enums(self): # Make sure that the PROTOCOL_* constants have enum-like string # reprs. 
proto = ssl.PROTOCOL_TLS_CLIENT self.assertEqual(repr(proto), '<_SSLMethod.PROTOCOL_TLS_CLIENT: %r>' % proto.value) self.assertEqual(str(proto), str(proto.value)) ctx = ssl.SSLContext(proto) self.assertIs(ctx.protocol, proto) def test_random(self): v = ssl.RAND_status() if support.verbose: sys.stdout.write("\n RAND_status is %d (%s)\n" % (v, (v and "sufficient randomness") or "insufficient randomness")) if v: data = ssl.RAND_bytes(16) self.assertEqual(len(data), 16) else: self.assertRaises(ssl.SSLError, ssl.RAND_bytes, 16) # negative num is invalid self.assertRaises(ValueError, ssl.RAND_bytes, -5) ssl.RAND_add("this is a random string", 75.0) ssl.RAND_add(b"this is a random bytes object", 75.0) ssl.RAND_add(bytearray(b"this is a random bytearray object"), 75.0) def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate # parsing code self.assertEqual( ssl._ssl._test_decode_cert(CERTFILE), CERTFILE_INFO ) self.assertEqual( ssl._ssl._test_decode_cert(SIGNED_CERTFILE), SIGNED_CERTFILE_INFO ) # Issue #13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subjectAltName'], (('DNS', 'projects.developer.nokia.com'), ('DNS', 'projects.forum.nokia.com')) ) # extra OCSP and AIA fields self.assertEqual(p['OCSP'], ('http://ocsp.verisign.com',)) self.assertEqual(p['caIssuers'], ('http://SVRIntl-G3-aia.verisign.com/SVRIntlG3.cer',)) self.assertEqual(p['crlDistributionPoints'], ('http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl',)) def test_parse_cert_CVE_2019_5010(self): p = ssl._ssl._test_decode_cert(TALOS_INVALID_CRLDP) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual( p, { 'issuer': ( (('countryName', 'UK'),), (('commonName', 'cody-ca'),)), 'notAfter': 'Jun 14 18:00:58 2028 GMT', 'notBefore': 'Jun 18 18:00:58 2018 GMT', 'serialNumber': '02', 'subject': ((('countryName', 'UK'),), (('commonName', 'codenomicon-vm-2.test.lal.cisco.com'),)), 'subjectAltName': ( ('DNS', 'codenomicon-vm-2.test.lal.cisco.com'),), 'version': 3 } ) def test_parse_cert_CVE_2013_4238(self): p = ssl._ssl._test_decode_cert(NULLBYTECERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") subject = ((('countryName', 'US'),), (('stateOrProvinceName', 'Oregon'),), (('localityName', 'Beaverton'),), (('organizationName', 'Python Software Foundation'),), (('organizationalUnitName', 'Python Core Development'),), (('commonName', 'null.python.org\x00example.org'),), (('emailAddress', 'python-dev@python.org'),)) self.assertEqual(p['subject'], subject) self.assertEqual(p['issuer'], subject) if ssl._OPENSSL_API_VERSION >= (0, 9, 8): san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '2001:DB8:0:0:0:0:0:1')) else: # OpenSSL 0.9.7 doesn't support IPv6 addresses in subjectAltName san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '')) self.assertEqual(p['subjectAltName'], san) def test_parse_all_sans(self): p = ssl._ssl._test_decode_cert(ALLSANFILE) self.assertEqual(p['subjectAltName'], ( ('DNS', 'allsans'), ('othername', 
''), ('othername', ''), ('email', 'user@example.org'), ('DNS', 'www.example.org'), ('DirName', ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'dirname example'),))), ('URI', 'https://www.python.org/'), ('IP Address', '127.0.0.1'), ('IP Address', '0:0:0:0:0:0:0:1'), ('Registered ID', '1.2.3.4.5') ) ) def test_DER_to_PEM(self): with open(CAFILE_CACERT, 'r') as f: pem = f.read() d1 = ssl.PEM_cert_to_DER_cert(pem) p2 = ssl.DER_cert_to_PEM_cert(d1) d2 = ssl.PEM_cert_to_DER_cert(p2) self.assertEqual(d1, d2) if not p2.startswith(ssl.PEM_HEADER + '\n'): self.fail("DER-to-PEM didn't include correct header:\n%r\n" % p2) if not p2.endswith('\n' + ssl.PEM_FOOTER + '\n'): self.fail("DER-to-PEM didn't include correct footer:\n%r\n" % p2) def test_openssl_version(self): n = ssl.OPENSSL_VERSION_NUMBER t = ssl.OPENSSL_VERSION_INFO s = ssl.OPENSSL_VERSION self.assertIsInstance(n, int) self.assertIsInstance(t, tuple) self.assertIsInstance(s, str) # Some sanity checks follow # >= 1.1.1 self.assertGreaterEqual(n, 0x10101000) # < 4.0 self.assertLess(n, 0x40000000) major, minor, fix, patch, status = t self.assertGreaterEqual(major, 1) self.assertLess(major, 4) self.assertGreaterEqual(minor, 0) self.assertLess(minor, 256) self.assertGreaterEqual(fix, 0) self.assertLess(fix, 256) self.assertGreaterEqual(patch, 0) self.assertLessEqual(patch, 63) self.assertGreaterEqual(status, 0) self.assertLessEqual(status, 15) libressl_ver = f"LibreSSL {major:d}" if major >= 3: # 3.x uses 0xMNN00PP0L openssl_ver = f"OpenSSL {major:d}.{minor:d}.{patch:d}" else: openssl_ver = f"OpenSSL {major:d}.{minor:d}.{fix:d}" self.assertTrue( s.startswith((openssl_ver, libressl_ver, "AWS-LC")), (s, t, hex(n)) ) @support.cpython_only def test_refcycle(self): # Issue #7943: an SSL object doesn't create reference cycles with # itself. s = socket.socket(socket.AF_INET) ss = test_wrap_socket(s) wr = weakref.ref(ss) with warnings_helper.check_warnings(("", ResourceWarning)): del ss self.assertEqual(wr(), None) def test_wrapped_unconnected(self): # Methods on an unconnected SSLSocket propagate the original # OSError raise by the underlying socket object. s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertRaises(OSError, ss.recv, 1) self.assertRaises(OSError, ss.recv_into, bytearray(b'x')) self.assertRaises(OSError, ss.recvfrom, 1) self.assertRaises(OSError, ss.recvfrom_into, bytearray(b'x'), 1) self.assertRaises(OSError, ss.send, b'x') self.assertRaises(OSError, ss.sendto, b'x', ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.dup) self.assertRaises(NotImplementedError, ss.sendmsg, [b'x'], (), 0, ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.recvmsg, 100) self.assertRaises(NotImplementedError, ss.recvmsg_into, [bytearray(100)]) def test_timeout(self): # Issue #8524: when creating an SSL socket, the timeout of the # original socket should be retained. 
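# Cover the three interesting cases: blocking (None), non-blocking (0.0), and a finite timeout.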
for timeout in (None, 0.0, 5.0): s = socket.socket(socket.AF_INET) s.settimeout(timeout) with test_wrap_socket(s) as ss: self.assertEqual(timeout, ss.gettimeout()) def test_openssl111_deprecations(self): options = [ ssl.OP_NO_TLSv1, ssl.OP_NO_TLSv1_1, ssl.OP_NO_TLSv1_2, ssl.OP_NO_TLSv1_3 ] protocols = [ ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS ] versions = [ ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ] for option in options: with self.subTest(option=option): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.options |= option self.assertEqual( 'ssl.OP_NO_SSL*/ssl.OP_NO_TLS* options are deprecated', str(cm.warning) ) for protocol in protocols: if not has_tls_protocol(protocol): continue with self.subTest(protocol=protocol): with self.assertWarns(DeprecationWarning) as cm: ssl.SSLContext(protocol) self.assertEqual( f'ssl.{protocol.name} is deprecated', str(cm.warning) ) for version in versions: if not has_tls_version(version): continue with self.subTest(version=version): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.minimum_version = version version_text = '%s.%s' % (version.__class__.__name__, version.name) self.assertEqual( f'ssl.{version_text} is deprecated', str(cm.warning) ) def bad_cert_test(self, certfile): """Check that trying to use the given client certificate fails""" certfile = os.path.join(os.path.dirname(__file__) or os.curdir, "certdata", certfile) sock = socket.socket() self.addCleanup(sock.close) with self.assertRaises(ssl.SSLError): test_wrap_socket(sock, certfile=certfile) def test_empty_cert(self): """Wrapping with an empty cert file""" self.bad_cert_test("nullcert.pem") def test_malformed_cert(self): """Wrapping with a badly formatted certificate (syntax error)""" self.bad_cert_test("badcert.pem") def test_malformed_key(self): """Wrapping with a badly formatted key (syntax error)""" self.bad_cert_test("badkey.pem") def test_server_side(self): # server_hostname doesn't work for server sockets ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with socket.socket() as sock: self.assertRaises(ValueError, ctx.wrap_socket, sock, True, server_hostname="some.hostname") def test_unknown_channel_binding(self): # should raise ValueError for unknown type s = socket.create_server(('127.0.0.1', 0)) c = socket.socket(socket.AF_INET) c.connect(s.getsockname()) with test_wrap_socket(c, do_handshake_on_connect=False) as ss: with self.assertRaises(ValueError): ss.get_channel_binding("unknown-type") s.close() @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): # unconnected should return None for known type s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) # the same for server-side s = socket.socket(socket.AF_INET) with test_wrap_socket(s, server_side=True, certfile=CERTFILE) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) def test_dealloc_warn(self): ss = test_wrap_socket(socket.socket(socket.AF_INET)) r = repr(ss) with self.assertWarns(ResourceWarning) as cm: ss = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) def test_get_default_verify_paths(self): paths = ssl.get_default_verify_paths() self.assertEqual(len(paths), 6) self.assertIsInstance(paths, ssl.DefaultVerifyPaths) with os_helper.EnvironmentVarGuard() as env: 
env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE paths = ssl.get_default_verify_paths() self.assertEqual(paths.cafile, CERTFILE) self.assertEqual(paths.capath, CAPATH) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_certificates(self): self.assertTrue(ssl.enum_certificates("CA")) self.assertTrue(ssl.enum_certificates("ROOT")) self.assertRaises(TypeError, ssl.enum_certificates) self.assertRaises(WindowsError, ssl.enum_certificates, "") trust_oids = set() for storename in ("CA", "ROOT"): store = ssl.enum_certificates(storename) self.assertIsInstance(store, list) for element in store: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 3) cert, enc, trust = element self.assertIsInstance(cert, bytes) self.assertIn(enc, {"x509_asn", "pkcs_7_asn"}) self.assertIsInstance(trust, (frozenset, set, bool)) if isinstance(trust, (frozenset, set)): trust_oids.update(trust) serverAuth = "1.3.6.1.5.5.7.3.1" self.assertIn(serverAuth, trust_oids) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_crls(self): self.assertTrue(ssl.enum_crls("CA")) self.assertRaises(TypeError, ssl.enum_crls) self.assertRaises(WindowsError, ssl.enum_crls, "") crls = ssl.enum_crls("CA") self.assertIsInstance(crls, list) for element in crls: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 2) self.assertIsInstance(element[0], bytes) self.assertIn(element[1], {"x509_asn", "pkcs_7_asn"}) def test_asn1object(self): expected = (129, 'serverAuth', 'TLS Web Server Authentication', '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertEqual(val, expected) self.assertEqual(val.nid, 129) self.assertEqual(val.shortname, 'serverAuth') self.assertEqual(val.longname, 'TLS Web Server Authentication') self.assertEqual(val.oid, '1.3.6.1.5.5.7.3.1') self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object, 'serverAuth') val = ssl._ASN1Object.fromnid(129) self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object.fromnid, -1) with self.assertRaisesRegex(ValueError, "unknown NID 100000"): ssl._ASN1Object.fromnid(100000) for i in range(1000): try: obj = ssl._ASN1Object.fromnid(i) except ValueError: pass else: self.assertIsInstance(obj.nid, int) self.assertIsInstance(obj.shortname, str) self.assertIsInstance(obj.longname, str) self.assertIsInstance(obj.oid, (str, type(None))) val = ssl._ASN1Object.fromname('TLS Web Server Authentication') self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertEqual(ssl._ASN1Object.fromname('serverAuth'), expected) self.assertEqual(ssl._ASN1Object.fromname('1.3.6.1.5.5.7.3.1'), expected) with self.assertRaisesRegex(ValueError, "unknown object 'serverauth'"): ssl._ASN1Object.fromname('serverauth') def test_purpose_enum(self): val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertIsInstance(ssl.Purpose.SERVER_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.SERVER_AUTH, val) self.assertEqual(ssl.Purpose.SERVER_AUTH.nid, 129) self.assertEqual(ssl.Purpose.SERVER_AUTH.shortname, 'serverAuth') self.assertEqual(ssl.Purpose.SERVER_AUTH.oid, '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.2') self.assertIsInstance(ssl.Purpose.CLIENT_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.CLIENT_AUTH, val) self.assertEqual(ssl.Purpose.CLIENT_AUTH.nid, 130) self.assertEqual(ssl.Purpose.CLIENT_AUTH.shortname, 'clientAuth') 
self.assertEqual(ssl.Purpose.CLIENT_AUTH.oid, '1.3.6.1.5.5.7.3.2') def test_unsupported_dtls(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) with self.assertRaises(NotImplementedError) as cx: test_wrap_socket(s, cert_reqs=ssl.CERT_NONE) self.assertEqual(str(cx.exception), "only stream sockets are supported") ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(NotImplementedError) as cx: ctx.wrap_socket(s) self.assertEqual(str(cx.exception), "only stream sockets are supported") def cert_time_ok(self, timestring, timestamp): self.assertEqual(ssl.cert_time_to_seconds(timestring), timestamp) def cert_time_fail(self, timestring): with self.assertRaises(ValueError): ssl.cert_time_to_seconds(timestring) @unittest.skipUnless(utc_offset(), 'local time needs to be different from UTC') def test_cert_time_to_seconds_timezone(self): # Issue #19940: ssl.cert_time_to_seconds() returns wrong # results if local timezone is not UTC self.cert_time_ok("May 9 00:00:00 2007 GMT", 1178668800.0) self.cert_time_ok("Jan 5 09:34:43 2018 GMT", 1515144883.0) def test_cert_time_to_seconds(self): timestring = "Jan 5 09:34:43 2018 GMT" ts = 1515144883.0 self.cert_time_ok(timestring, ts) # accept keyword parameter, assert its name self.assertEqual(ssl.cert_time_to_seconds(cert_time=timestring), ts) # accept both %e and %d (space or zero generated by strftime) self.cert_time_ok("Jan 05 09:34:43 2018 GMT", ts) # case-insensitive self.cert_time_ok("JaN 5 09:34:43 2018 GmT", ts) self.cert_time_fail("Jan 5 09:34 2018 GMT") # no seconds self.cert_time_fail("Jan 5 09:34:43 2018") # no GMT self.cert_time_fail("Jan 5 09:34:43 2018 UTC") # not GMT timezone self.cert_time_fail("Jan 35 09:34:43 2018 GMT") # invalid day self.cert_time_fail("Jon 5 09:34:43 2018 GMT") # invalid month self.cert_time_fail("Jan 5 24:00:00 2018 GMT") # invalid hour self.cert_time_fail("Jan 5 09:60:43 2018 GMT") # invalid minute newyear_ts = 1230768000.0 # leap seconds self.cert_time_ok("Dec 31 23:59:60 2008 GMT", newyear_ts) # same timestamp self.cert_time_ok("Jan 1 00:00:00 2009 GMT", newyear_ts) self.cert_time_ok("Jan 5 09:34:59 2018 GMT", 1515144899) # allow 60th second (even if it is not a leap second) self.cert_time_ok("Jan 5 09:34:60 2018 GMT", 1515144900) # allow 2nd leap second for compatibility with time.strptime() self.cert_time_ok("Jan 5 09:34:61 2018 GMT", 1515144901) self.cert_time_fail("Jan 5 09:34:62 2018 GMT") # invalid seconds # no special treatment for the special value: # 99991231235959Z (rfc 5280) self.cert_time_ok("Dec 31 23:59:59 9999 GMT", 253402300799.0) @support.run_with_locale('LC_ALL', '') def test_cert_time_to_seconds_locale(self): # `cert_time_to_seconds()` should be locale independent def local_february_name(): return time.strftime('%b', (1, 2, 3, 4, 5, 6, 0, 0, 0)) if local_february_name().lower() == 'feb': self.skipTest("locale-specific month name needs to be " "different from C locale") # locale-independent self.cert_time_ok("Feb 9 00:00:00 2007 GMT", 1170979200.0) self.cert_time_fail(local_february_name() + " 9 00:00:00 2007 GMT") def test_connect_ex_error(self): server = socket.socket(socket.AF_INET) self.addCleanup(server.close) port = socket_helper.bind_port(server) # Reserve port but don't listen s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) rc = s.connect_ex((HOST, port)) # Issue #19919: Windows machines or VMs hosted on Windows # machines sometimes return EWOULDBLOCK. 
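# Accept any of the errno values below; which one comes back depends on the platform and network stack.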
errors = ( errno.ECONNREFUSED, errno.EHOSTUNREACH, errno.ETIMEDOUT, errno.EWOULDBLOCK, ) self.assertIn(rc, errors) def test_read_write_zero(self): # empty reads and writes now work, bpo-42854, bpo-31711 client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.recv(0), b"") self.assertEqual(s.send(b""), 0) class ContextTests(unittest.TestCase): def test_constructor(self): for protocol in PROTOCOLS: if has_tls_protocol(protocol): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.protocol, protocol) with warnings_helper.check_warnings(): ctx = ssl.SSLContext() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertRaises(ValueError, ssl.SSLContext, -1) self.assertRaises(ValueError, ssl.SSLContext, 42) def test_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers("ALL") ctx.set_ciphers("DEFAULT") with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): ctx.set_ciphers("^$:,;?*'dorothyx") @unittest.skipUnless(PY_SSL_DEFAULT_CIPHERS == 1, "Test applies only to Python default ciphers") def test_python_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ciphers = ctx.get_ciphers() for suite in ciphers: name = suite['name'] self.assertNotIn("PSK", name) self.assertNotIn("SRP", name) self.assertNotIn("MD5", name) self.assertNotIn("RC4", name) self.assertNotIn("3DES", name) def test_get_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers('AESGCM') names = set(d['name'] for d in ctx.get_ciphers()) expected = { 'AES128-GCM-SHA256', 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'DHE-RSA-AES128-GCM-SHA256', 'AES256-GCM-SHA384', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'DHE-RSA-AES256-GCM-SHA384', } intersection = names.intersection(expected) self.assertGreaterEqual( len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}" ) def test_options(self): # Test default SSLContext options ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # OP_ALL | OP_NO_SSLv2 | OP_NO_SSLv3 is the default value default = (ssl.OP_ALL | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) # SSLContext also enables these by default default |= (OP_NO_COMPRESSION | OP_CIPHER_SERVER_PREFERENCE | OP_SINGLE_DH_USE | OP_SINGLE_ECDH_USE | OP_ENABLE_MIDDLEBOX_COMPAT) self.assertEqual(default, ctx.options) # disallow TLSv1 with warnings_helper.check_warnings(): ctx.options |= ssl.OP_NO_TLSv1 self.assertEqual(default | ssl.OP_NO_TLSv1, ctx.options) # allow TLSv1 with warnings_helper.check_warnings(): ctx.options = (ctx.options & ~ssl.OP_NO_TLSv1) self.assertEqual(default, ctx.options) # clear all options ctx.options = 0 # Ubuntu has OP_NO_SSLv3 forced on by default self.assertEqual(0, ctx.options & ~ssl.OP_NO_SSLv3) # invalid options with self.assertRaises(OverflowError): ctx.options = -1 with self.assertRaises(OverflowError): ctx.options = 2 ** 100 with self.assertRaises(TypeError): ctx.options = "abc" def test_verify_mode_protocol(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) # Default value self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) ctx.verify_mode = ssl.CERT_OPTIONAL self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.verify_mode = ssl.CERT_NONE 
self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) with self.assertRaises(TypeError): ctx.verify_mode = None with self.assertRaises(ValueError): ctx.verify_mode = 42 ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) def test_hostname_checks_common_name(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.hostname_checks_common_name) if ssl.HAS_NEVER_CHECK_COMMON_NAME: ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = False self.assertFalse(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) else: with self.assertRaises(AttributeError): ctx.hostname_checks_common_name = True @ignore_deprecation def test_min_max_version(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # OpenSSL default is MINIMUM_SUPPORTED, however some vendors like # Fedora override the setting to TLS 1.0. minimum_range = { # stock OpenSSL ssl.TLSVersion.MINIMUM_SUPPORTED, # Fedora 29 uses TLS 1.0 by default ssl.TLSVersion.TLSv1, # RHEL 8 uses TLS 1.2 by default ssl.TLSVersion.TLSv1_2 } maximum_range = { # stock OpenSSL ssl.TLSVersion.MAXIMUM_SUPPORTED, # Fedora 32 uses TLS 1.3 by default ssl.TLSVersion.TLSv1_3 } self.assertIn( ctx.minimum_version, minimum_range ) self.assertIn( ctx.maximum_version, maximum_range ) ctx.minimum_version = ssl.TLSVersion.TLSv1_1 ctx.maximum_version = ssl.TLSVersion.TLSv1_2 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.TLSv1_1 ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1_2 ) ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED ctx.maximum_version = ssl.TLSVersion.TLSv1 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.MINIMUM_SUPPORTED ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1 ) ctx.maximum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED self.assertIn( ctx.maximum_version, {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3} ) ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertIn( ctx.minimum_version, {ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3} ) with self.assertRaises(ValueError): ctx.minimum_version = 42 if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1) self.assertIn( ctx.minimum_version, minimum_range ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) with self.assertRaises(ValueError): ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED with self.assertRaises(ValueError): ctx.maximum_version = ssl.TLSVersion.TLSv1 @unittest.skipUnless( hasattr(ssl.SSLContext, 'security_level'), "requires OpenSSL >= 1.1.0" ) def test_security_level(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # The default security callback allows for levels between 0-5 # with OpenSSL defaulting to 1, however some vendors override the # default value (e.g. 
Debian defaults to 2) security_level_range = { 0, 1, # OpenSSL default 2, # Debian 3, 4, 5, } self.assertIn(ctx.security_level, security_level_range) def test_verify_flags(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # default value tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT | tf) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_CHAIN self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_CHAIN) ctx.verify_flags = ssl.VERIFY_DEFAULT self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT) ctx.verify_flags = ssl.VERIFY_ALLOW_PROXY_CERTS self.assertEqual(ctx.verify_flags, ssl.VERIFY_ALLOW_PROXY_CERTS) # supports any value ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT) with self.assertRaises(TypeError): ctx.verify_flags = None def test_load_cert_chain(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Combined key and cert in a single file ctx.load_cert_chain(CERTFILE, keyfile=None) ctx.load_cert_chain(CERTFILE, keyfile=CERTFILE) self.assertRaises(TypeError, ctx.load_cert_chain, keyfile=CERTFILE) with self.assertRaises(OSError) as cm: ctx.load_cert_chain(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(BADCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(EMPTYCERT) # Separate key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_cert_chain(ONLYCERT, ONLYKEY) ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) ctx.load_cert_chain(certfile=BYTES_ONLYCERT, keyfile=BYTES_ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(ONLYCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(certfile=ONLYKEY, keyfile=ONLYCERT) # Mismatching key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Allow for flexible libssl error messages. regex = re.compile(r"""( key values mismatch # OpenSSL | KEY_VALUES_MISMATCH # AWS-LC )""", re.X) with self.assertRaisesRegex(ssl.SSLError, regex): ctx.load_cert_chain(CAFILE_CACERT, ONLYKEY) # Password protected key and cert ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD) ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD.encode()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=bytearray(KEY_PASSWORD.encode())) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD.encode()) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, bytearray(KEY_PASSWORD.encode())) with self.assertRaisesRegex(TypeError, "should be a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=True) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password="badpass") with self.assertRaisesRegex(ValueError, "cannot be longer"): # openssl has a fixed limit on the password buffer. # PEM_BUFSIZE is generally set to 1kb. # Return a string larger than this. 
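# A 102400-byte password comfortably exceeds PEM_BUFSIZE, so load_cert_chain must reject it with ValueError.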
ctx.load_cert_chain(CERTFILE_PROTECTED, password=b'a' * 102400) # Password callback def getpass_unicode(): return KEY_PASSWORD def getpass_bytes(): return KEY_PASSWORD.encode() def getpass_bytearray(): return bytearray(KEY_PASSWORD.encode()) def getpass_badpass(): return "badpass" def getpass_huge(): return b'a' * (1024 * 1024) def getpass_bad_type(): return 9 def getpass_exception(): raise Exception('getpass error') class GetPassCallable: def __call__(self): return KEY_PASSWORD def getpass(self): return KEY_PASSWORD ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_unicode) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytes) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytearray) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable().getpass) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_badpass) with self.assertRaisesRegex(ValueError, "cannot be longer"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_huge) with self.assertRaisesRegex(TypeError, "must return a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bad_type) with self.assertRaisesRegex(Exception, "getpass error"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_exception) # Make sure the password function isn't called if it isn't needed ctx.load_cert_chain(CERTFILE, password=getpass_exception) def test_load_verify_locations(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_verify_locations(CERTFILE) ctx.load_verify_locations(cafile=CERTFILE, capath=None) ctx.load_verify_locations(BYTES_CERTFILE) ctx.load_verify_locations(cafile=BYTES_CERTFILE, capath=None) self.assertRaises(TypeError, ctx.load_verify_locations) self.assertRaises(TypeError, ctx.load_verify_locations, None, None, None) with self.assertRaises(OSError) as cm: ctx.load_verify_locations(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_verify_locations(BADCERT) ctx.load_verify_locations(CERTFILE, CAPATH) ctx.load_verify_locations(CERTFILE, capath=BYTES_CAPATH) # Issue #10989: crash if the second argument type is invalid self.assertRaises(TypeError, ctx.load_verify_locations, None, True) def test_load_verify_cadata(self): # test cadata with open(CAFILE_CACERT) as f: cacert_pem = f.read() cacert_der = ssl.PEM_cert_to_DER_cert(cacert_pem) with open(CAFILE_NEURONIO) as f: neuronio_pem = f.read() neuronio_der = ssl.PEM_cert_to_DER_cert(neuronio_pem) # test PEM ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 0) ctx.load_verify_locations(cadata=cacert_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 1) ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = "\n".join((cacert_pem, neuronio_pem)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # with junk around the certs ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = ["head", cacert_pem, "other", neuronio_pem, "again", neuronio_pem, "tail"] ctx.load_verify_locations(cadata="\n".join(combined)) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # test DER ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=cacert_der) ctx.load_verify_locations(cadata=neuronio_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=cacert_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = b"".join((cacert_der, neuronio_der)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # error cases ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_verify_locations, cadata=object) with self.assertRaisesRegex( ssl.SSLError, "no start line: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata="broken") with self.assertRaisesRegex( ssl.SSLError, "not enough data: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata=b"broken") with self.assertRaises(ssl.SSLError): ctx.load_verify_locations(cadata=cacert_der + b"A") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_load_dh_params(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_dh_params(DHFILE) if os.name != 'nt': ctx.load_dh_params(BYTES_DHFILE) self.assertRaises(TypeError, ctx.load_dh_params) self.assertRaises(TypeError, ctx.load_dh_params, None) with self.assertRaises(FileNotFoundError) as cm: ctx.load_dh_params(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) def test_session_stats(self): for proto in {ssl.PROTOCOL_TLS_CLIENT, ssl.PROTOCOL_TLS_SERVER}: ctx = ssl.SSLContext(proto) self.assertEqual(ctx.session_stats(), { 'number': 0, 'connect': 0, 'connect_good': 0, 'connect_renegotiate': 0, 'accept': 0, 'accept_good': 0, 'accept_renegotiate': 0, 'hits': 0, 'misses': 0, 'timeouts': 0, 'cache_full': 0, }) def test_set_default_verify_paths(self): # There's not much we can do to test that it acts as expected, # so just check it doesn't crash or raise an exception. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_default_verify_paths() @unittest.skipUnless(ssl.HAS_ECDH, "ECDH disabled on this OpenSSL build") def test_set_ecdh_curve(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.set_ecdh_curve("prime256v1") ctx.set_ecdh_curve(b"prime256v1") self.assertRaises(TypeError, ctx.set_ecdh_curve) self.assertRaises(TypeError, ctx.set_ecdh_curve, None) self.assertRaises(ValueError, ctx.set_ecdh_curve, "foo") self.assertRaises(ValueError, ctx.set_ecdh_curve, b"foo") def test_sni_callback(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # set_servername_callback expects a callable, or None self.assertRaises(TypeError, ctx.set_servername_callback) self.assertRaises(TypeError, ctx.set_servername_callback, 4) self.assertRaises(TypeError, ctx.set_servername_callback, "") self.assertRaises(TypeError, ctx.set_servername_callback, ctx) def dummycallback(sock, servername, ctx): pass ctx.set_servername_callback(None) ctx.set_servername_callback(dummycallback) def test_sni_callback_refcycle(self): # Reference cycles through the servername callback are detected # and cleared. 
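# The callback's default argument (cycle=ctx) keeps a reference back to the context; the weakref can only clear once the garbage collector breaks that cycle.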
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def dummycallback(sock, servername, ctx, cycle=ctx): pass ctx.set_servername_callback(dummycallback) wr = weakref.ref(ctx) del ctx, dummycallback gc.collect() self.assertIs(wr(), None) def test_cert_store_stats(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_cert_chain(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 1}) ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 1, 'crl': 0, 'x509': 2}) def test_get_ca_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.get_ca_certs(), []) # CERTFILE is not flagged as X509v3 Basic Constraints: CA:TRUE ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.get_ca_certs(), []) # but CAFILE_CACERT is a CA cert ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.get_ca_certs(), [{'issuer': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'notAfter': 'Mar 29 12:29:49 2033 GMT', 'notBefore': 'Mar 30 12:29:49 2003 GMT', 'serialNumber': '00', 'crlDistributionPoints': ('https://www.cacert.org/revoke.crl',), 'subject': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'version': 3}]) with open(CAFILE_CACERT) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) self.assertEqual(ctx.get_ca_certs(True), [der]) def test_load_default_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.SERVER_AUTH) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_default_certs, None) self.assertRaises(TypeError, ctx.load_default_certs, 'SERVER_AUTH') @unittest.skipIf(sys.platform == "win32", "not-Windows specific") def test_load_default_certs_env(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() self.assertEqual(ctx.cert_store_stats(), {"crl": 0, "x509": 1, "x509_ca": 0}) @unittest.skipUnless(sys.platform == "win32", "Windows specific") @unittest.skipIf(support.Py_DEBUG, "Debug build does not share environment between CRTs") def test_load_default_certs_env_windows(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() stats = ctx.cert_store_stats() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() stats["x509"] += 1 self.assertEqual(ctx.cert_store_stats(), stats) def _assert_context_options(self, ctx): self.assertEqual(ctx.options & ssl.OP_NO_SSLv2, ssl.OP_NO_SSLv2) if OP_NO_COMPRESSION != 0: self.assertEqual(ctx.options & OP_NO_COMPRESSION, OP_NO_COMPRESSION) if OP_SINGLE_DH_USE != 0: self.assertEqual(ctx.options & OP_SINGLE_DH_USE, OP_SINGLE_DH_USE) if OP_SINGLE_ECDH_USE != 0: self.assertEqual(ctx.options 
& OP_SINGLE_ECDH_USE, OP_SINGLE_ECDH_USE) if OP_CIPHER_SERVER_PREFERENCE != 0: self.assertEqual(ctx.options & OP_CIPHER_SERVER_PREFERENCE, OP_CIPHER_SERVER_PREFERENCE) self.assertEqual(ctx.options & ssl.OP_LEGACY_SERVER_CONNECT, 0 if IS_OPENSSL_3_0_0 else ssl.OP_LEGACY_SERVER_CONNECT) def test_create_default_context(self): ctx = ssl.create_default_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) with open(SIGNING_CA) as f: cadata = f.read() ctx = ssl.create_default_context(cafile=SIGNING_CA, capath=CAPATH, cadata=cadata) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self._assert_context_options(ctx) ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test__create_stdlib_context(self): ctx = ssl._create_stdlib_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) self._assert_context_options(ctx) if has_tls_protocol(ssl.PROTOCOL_TLSv1): with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context( ssl.PROTOCOL_TLSv1_2, cert_reqs=ssl.CERT_REQUIRED, check_hostname=True ) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1_2) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) ctx = ssl._create_stdlib_context(purpose=ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test_check_hostname(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set CERT_REQUIRED ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_REQUIRED self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # Changing verify_mode does not affect check_hostname ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_OPTIONAL ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # keep CERT_OPTIONAL ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # Cannot set CERT_NONE with check_hostname enabled with self.assertRaises(ValueError): ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_client_server(self): # PROTOCOL_TLS_CLIENT has sane defaults ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # PROTOCOL_TLS_SERVER has different but also sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_custom_class(self): class MySSLSocket(ssl.SSLSocket): pass class MySSLObject(ssl.SSLObject): pass ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.sslsocket_class = MySSLSocket ctx.sslobject_class = MySSLObject with ctx.wrap_socket(socket.socket(), server_side=True) as sock: self.assertIsInstance(sock, MySSLSocket) obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_side=True) self.assertIsInstance(obj, MySSLObject) def test_num_tickest(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.num_tickets, 2) ctx.num_tickets = 1 self.assertEqual(ctx.num_tickets, 1) ctx.num_tickets = 0 self.assertEqual(ctx.num_tickets, 0) with self.assertRaises(ValueError): ctx.num_tickets = -1 with self.assertRaises(TypeError): ctx.num_tickets = None ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.num_tickets, 2) with self.assertRaises(ValueError): ctx.num_tickets = 1 class SSLErrorTests(unittest.TestCase): def test_str(self): # The str() of a SSLError doesn't include the errno e = ssl.SSLError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) # Same for a subclass e = ssl.SSLZeroReturnError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_lib_reason(self): # Test the library and reason attributes ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) self.assertEqual(cm.exception.library, 'PEM') regex = "(NO_START_LINE|UNSUPPORTED_PUBLIC_KEY_TYPE)" self.assertRegex(cm.exception.reason, regex) s = str(cm.exception) self.assertTrue("NO_START_LINE" in s, s) def test_subclass(self): # Check that the appropriate SSLError subclass is raised # (this only tests one of them) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with socket.create_server(("127.0.0.1", 0)) as s: c = socket.create_connection(s.getsockname()) c.setblocking(False) with ctx.wrap_socket(c, False, do_handshake_on_connect=False) as c: with self.assertRaises(ssl.SSLWantReadError) as cm: c.do_handshake() s = str(cm.exception) self.assertTrue(s.startswith("The operation did not complete (read)"), s) # For compatibility self.assertEqual(cm.exception.errno, ssl.SSL_ERROR_WANT_READ) def test_bad_server_hostname(self): ctx = ssl.create_default_context() with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="") with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname=".example.org") with self.assertRaises(TypeError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="example.org\x00evil.com") class MemoryBIOTests(unittest.TestCase): def test_read_write(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') self.assertEqual(bio.read(), b'') bio.write(b'foo') bio.write(b'bar') self.assertEqual(bio.read(), b'foobar') self.assertEqual(bio.read(), b'') bio.write(b'baz') self.assertEqual(bio.read(2), b'ba') self.assertEqual(bio.read(1), b'z') self.assertEqual(bio.read(1), b'') def test_eof(self): bio = 
ssl.MemoryBIO() self.assertFalse(bio.eof) self.assertEqual(bio.read(), b'') self.assertFalse(bio.eof) bio.write(b'foo') self.assertFalse(bio.eof) bio.write_eof() self.assertFalse(bio.eof) self.assertEqual(bio.read(2), b'fo') self.assertFalse(bio.eof) self.assertEqual(bio.read(1), b'o') self.assertTrue(bio.eof) self.assertEqual(bio.read(), b'') self.assertTrue(bio.eof) def test_pending(self): bio = ssl.MemoryBIO() self.assertEqual(bio.pending, 0) bio.write(b'foo') self.assertEqual(bio.pending, 3) for i in range(3): bio.read(1) self.assertEqual(bio.pending, 3-i-1) for i in range(3): bio.write(b'x') self.assertEqual(bio.pending, i+1) bio.read() self.assertEqual(bio.pending, 0) def test_buffer_types(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') bio.write(bytearray(b'bar')) self.assertEqual(bio.read(), b'bar') bio.write(memoryview(b'baz')) self.assertEqual(bio.read(), b'baz') m = memoryview(bytearray(b'noncontig')) noncontig_writable = m[::-2] with self.assertRaises(BufferError): bio.write(memoryview(noncontig_writable)) def test_error_types(self): bio = ssl.MemoryBIO() self.assertRaises(TypeError, bio.write, 'foo') self.assertRaises(TypeError, bio.write, None) self.assertRaises(TypeError, bio.write, True) self.assertRaises(TypeError, bio.write, 1) class SSLObjectTests(unittest.TestCase): def test_private_init(self): bio = ssl.MemoryBIO() with self.assertRaisesRegex(TypeError, "public constructor"): ssl.SSLObject(bio, bio) def test_unwrap(self): client_ctx, server_ctx, hostname = testing_context() c_in = ssl.MemoryBIO() c_out = ssl.MemoryBIO() s_in = ssl.MemoryBIO() s_out = ssl.MemoryBIO() client = client_ctx.wrap_bio(c_in, c_out, server_hostname=hostname) server = server_ctx.wrap_bio(s_in, s_out, server_side=True) # Loop on the handshake for a bit to get it settled for _ in range(5): try: client.do_handshake() except ssl.SSLWantReadError: pass if c_out.pending: s_in.write(c_out.read()) try: server.do_handshake() except ssl.SSLWantReadError: pass if s_out.pending: c_in.write(s_out.read()) # Now the handshakes should be complete (don't raise WantReadError) client.do_handshake() server.do_handshake() # Now if we unwrap one side unilaterally, it should send close-notify # and raise WantReadError: with self.assertRaises(ssl.SSLWantReadError): client.unwrap() # But server.unwrap() does not raise, because it reads the client's # close-notify: s_in.write(c_out.read()) server.unwrap() # And now that the client gets the server's close-notify, it doesn't # raise either. c_in.write(s_out.read()) client.unwrap() class SimpleBackgroundTests(unittest.TestCase): """Tests that connect to a simple server running in the background""" def setUp(self): self.server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.server_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=self.server_context) self.enterContext(server) self.server_addr = (HOST, server.port) def test_connect(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) self.assertFalse(s.server_side) # this should succeed because we specify the root cert with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) as s: s.connect(self.server_addr) self.assertTrue(s.getpeercert()) self.assertFalse(s.server_side) def test_connect_fail(self): # This should fail because we have no verification certs. 
Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRaisesRegex(ssl.SSLError, regex, s.connect, self.server_addr) def test_connect_ex(self): # Issue #11326: check connect_ex() implementation s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) self.addCleanup(s.close) self.assertEqual(0, s.connect_ex(self.server_addr)) self.assertTrue(s.getpeercert()) def test_non_blocking_connect_ex(self): # Issue #11326: non-blocking connect_ex() should allow handshake # to proceed after the socket gets ready. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA, do_handshake_on_connect=False) self.addCleanup(s.close) s.setblocking(False) rc = s.connect_ex(self.server_addr) # EWOULDBLOCK under Windows, EINPROGRESS elsewhere self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) # Wait for connect to finish select.select([], [s], [], 5.0) # Non-blocking handshake while True: try: s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], [], 5.0) except ssl.SSLWantWriteError: select.select([], [s], [], 5.0) # SSL established self.assertTrue(s.getpeercert()) def test_connect_with_context(self): # Same as test_connect, but with a separately created context ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) # Same with a server hostname with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname="dummy") as s: s.connect(self.server_addr) ctx.verify_mode = ssl.CERT_REQUIRED # This should succeed because we specify the root cert ctx.load_verify_locations(SIGNING_CA) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_with_context_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) s = ctx.wrap_socket( socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME ) self.addCleanup(s.close) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRaisesRegex(ssl.SSLError, regex, s.connect, self.server_addr) def test_connect_capath(self): # Verify server certificates using the `capath` argument # NOTE: the subject hashing algorithm has been changed between # OpenSSL 0.9.8n and 1.0.0, as a result the capath directory must # contain both versions of each certificate (same content, different # filename) for this test to be portable across OpenSSL releases. 
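# Both the str and bytes forms of the capath argument are exercised below.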
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # Same with a bytes `capath` argument ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=BYTES_CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_cadata(self): with open(SIGNING_CA) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=pem) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # same with DER ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=der) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't # delay closing the underlying "real socket" (here tested with its # file descriptor, hence skipping the test under Windows). ss = test_wrap_socket(socket.socket(socket.AF_INET)) ss.connect(self.server_addr) fd = ss.fileno() f = ss.makefile() f.close() # The fd is still open os.read(fd, 0) # Closing the SSL socket should close the fd too ss.close() gc.collect() with self.assertRaises(OSError) as e: os.read(fd, 0) self.assertEqual(e.exception.errno, errno.EBADF) def test_non_blocking_handshake(self): s = socket.socket(socket.AF_INET) s.connect(self.server_addr) s.setblocking(False) s = test_wrap_socket(s, cert_reqs=ssl.CERT_NONE, do_handshake_on_connect=False) self.addCleanup(s.close) count = 0 while True: try: count += 1 s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], []) except ssl.SSLWantWriteError: select.select([], [s], []) if support.verbose: sys.stdout.write("\nNeeded %d calls to do_handshake() to establish session.\n" % count) def test_get_server_certificate(self): _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) def test_get_server_certificate_sni(self): host, port = self.server_addr server_names = [] # We store servername_cb arguments to make sure they match the host def servername_cb(ssl_sock, server_name, initial_context): server_names.append(server_name) self.server_context.set_servername_callback(servername_cb) pem = ssl.get_server_certificate((host, port)) if not pem: self.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=SIGNING_CA) if not pem: self.fail("No server certificate on %s:%s!" 
% (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port, pem)) self.assertEqual(server_names, [host, host]) def test_get_server_certificate_fail(self): # Connection failure crashes ThreadedEchoServer, so run this in an # independent test method _test_get_server_certificate_fail(self, *self.server_addr) def test_get_server_certificate_timeout(self): def servername_cb(ssl_sock, server_name, initial_context): time.sleep(0.2) self.server_context.set_servername_callback(servername_cb) with self.assertRaises(socket.timeout): ssl.get_server_certificate(self.server_addr, ca_certs=SIGNING_CA, timeout=0.1) def test_ciphers(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="ALL") as s: s.connect(self.server_addr) with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="DEFAULT") as s: s.connect(self.server_addr) # Error checking can happen at instantiation or when connecting with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): with socket.socket(socket.AF_INET) as sock: s = test_wrap_socket(sock, cert_reqs=ssl.CERT_NONE, ciphers="^$:,;?*'dorothyx") s.connect(self.server_addr) def test_get_ca_certs_capath(self): # capath certs are loaded on request ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) self.assertEqual(ctx.get_ca_certs(), []) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname='localhost') as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) self.assertEqual(len(ctx.get_ca_certs()), 1) def test_context_setget(self): # Check that the context of a connected socket can be replaced. ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx1.load_verify_locations(capath=CAPATH) ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx2.load_verify_locations(capath=CAPATH) s = socket.socket(socket.AF_INET) with ctx1.wrap_socket(s, server_hostname='localhost') as ss: ss.connect(self.server_addr) self.assertIs(ss.context, ctx1) self.assertIs(ss._sslobj.context, ctx1) ss.context = ctx2 self.assertIs(ss.context, ctx2) self.assertIs(ss._sslobj.context, ctx2) def ssl_io_loop(self, sock, incoming, outgoing, func, *args, **kwargs): # A simple IO loop. Call func(*args) depending on the error we get # (WANT_READ or WANT_WRITE) move data between the socket and the BIOs. timeout = kwargs.get('timeout', support.SHORT_TIMEOUT) count = 0 for _ in support.busy_retry(timeout): errno = None count += 1 try: ret = func(*args) except ssl.SSLError as e: if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): raise errno = e.errno # Get any data from the outgoing BIO irrespective of any error, and # send it to the socket. buf = outgoing.read() sock.sendall(buf) # If there's no error, we're done. For WANT_READ, we need to get # data from the socket and put it in the incoming BIO. 
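# An empty recv() means the peer closed the transport, so signal EOF to the incoming BIO instead of writing.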
if errno is None: break elif errno == ssl.SSL_ERROR_WANT_READ: buf = sock.recv(32768) if buf: incoming.write(buf) else: incoming.write_eof() if support.verbose: sys.stdout.write("Needed %d calls to complete %s().\n" % (count, func.__name__)) return ret def test_bio_handshake(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.load_verify_locations(SIGNING_CA) sslobj = ctx.wrap_bio(incoming, outgoing, False, SIGNED_CERTFILE_HOSTNAME) self.assertIs(sslobj._sslobj.owner, sslobj) self.assertIsNone(sslobj.cipher()) self.assertIsNone(sslobj.version()) self.assertIsNone(sslobj.shared_ciphers()) self.assertRaises(ValueError, sslobj.getpeercert) # tls-unique is not defined for TLSv1.3 # https://datatracker.ietf.org/doc/html/rfc8446#appendix-C.5 if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES and sslobj.version() != "TLSv1.3": self.assertIsNone(sslobj.get_channel_binding('tls-unique')) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) self.assertTrue(sslobj.cipher()) self.assertIsNone(sslobj.shared_ciphers()) self.assertIsNotNone(sslobj.version()) self.assertTrue(sslobj.getpeercert()) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES and sslobj.version() != "TLSv1.3": self.assertTrue(sslobj.get_channel_binding('tls-unique')) try: self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) except ssl.SSLSyscallError: # If the server shuts down the TCP connection without sending a # secure shutdown message, this is reported as SSL_ERROR_SYSCALL pass self.assertRaises(ssl.SSLError, sslobj.write, b'foo') def test_bio_read_write_data(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sslobj = ctx.wrap_bio(incoming, outgoing, False) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) req = b'FOO\n' self.ssl_io_loop(sock, incoming, outgoing, sslobj.write, req) buf = self.ssl_io_loop(sock, incoming, outgoing, sslobj.read, 1024) self.assertEqual(buf, b'foo\n') self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) def test_transport_eof(self): client_context, server_context, hostname = testing_context() with socket.socket(socket.AF_INET) as sock: sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() sslobj = client_context.wrap_bio(incoming, outgoing, server_hostname=hostname) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) # Simulate EOF from the transport. incoming.write_eof() self.assertRaises(ssl.SSLEOFError, sslobj.read) @support.requires_resource('network') class NetworkedTests(unittest.TestCase): def test_timeout_connect_ex(self): # Issue #12065: on a timeout, connect_ex() should return the original # errno (mimicking the behaviour of non-SSL sockets). 
with socket_helper.transient_internet(REMOTE_HOST): s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, do_handshake_on_connect=False) self.addCleanup(s.close) s.settimeout(0.0000001) rc = s.connect_ex((REMOTE_HOST, 443)) if rc == 0: self.skipTest("REMOTE_HOST responded too quickly") elif rc == errno.ENETUNREACH: self.skipTest("Network unreachable.") self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6') @support.requires_resource('walltime') def test_get_server_certificate_ipv6(self): with socket_helper.transient_internet('ipv6.google.com'): _test_get_server_certificate(self, 'ipv6.google.com', 443) _test_get_server_certificate_fail(self, 'ipv6.google.com', 443) def _test_get_server_certificate(test, host, port, cert=None): pem = ssl.get_server_certificate((host, port)) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=cert) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port ,pem)) def _test_get_server_certificate_fail(test, host, port): with warnings_helper.check_no_resource_warning(test): try: pem = ssl.get_server_certificate((host, port), ca_certs=CERTFILE) except ssl.SSLError as x: #should fail if support.verbose: sys.stdout.write("%s\n" % x) else: test.fail("Got server certificate %s for %s:%s!" % (pem, host, port)) from test.ssl_servers import make_https_server class ThreadedEchoServer(threading.Thread): class ConnectionHandler(threading.Thread): """A mildly complicated class, because we want it to work both with and without the SSL wrapper around the socket connection, so that we can test the STARTTLS functionality.""" def __init__(self, server, connsock, addr): self.server = server self.running = False self.sock = connsock self.addr = addr self.sock.setblocking(True) self.sslconn = None threading.Thread.__init__(self) self.daemon = True def wrap_conn(self): try: self.sslconn = self.server.context.wrap_socket( self.sock, server_side=True) self.server.selected_alpn_protocols.append(self.sslconn.selected_alpn_protocol()) except (ConnectionResetError, BrokenPipeError, ConnectionAbortedError) as e: # We treat ConnectionResetError as though it were an # SSLError - OpenSSL on Ubuntu abruptly closes the # connection when asked to use an unsupported protocol. # # BrokenPipeError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake. # https://github.com/openssl/openssl/issues/6342 # # ConnectionAbortedError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake when using WinSock. self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") self.running = False self.close() return False except (ssl.SSLError, OSError) as e: # OSError may occur with wrong protocols, e.g. both # sides use PROTOCOL_TLS_SERVER. # # XXX Various errors can have happened here, for example # a mismatching protocol version, an invalid certificate, # or a low-level bug. This should be made more discriminating. 
# # bpo-31323: Store the exception as string to prevent # a reference leak: server -> conn_errors -> exception # -> traceback -> self (ConnectionHandler) -> server self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") # bpo-44229, bpo-43855, bpo-44237, and bpo-33450: # Ignore spurious EPROTOTYPE returned by write() on macOS. # See also http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno != errno.EPROTOTYPE and sys.platform != "darwin": self.running = False self.server.stop() self.close() return False else: self.server.shared_ciphers.append(self.sslconn.shared_ciphers()) if self.server.context.verify_mode == ssl.CERT_REQUIRED: cert = self.sslconn.getpeercert() if support.verbose and self.server.chatty: sys.stdout.write(" client cert is " + pprint.pformat(cert) + "\n") cert_binary = self.sslconn.getpeercert(True) if support.verbose and self.server.chatty: if cert_binary is None: sys.stdout.write(" client did not provide a cert\n") else: sys.stdout.write(f" cert binary is {len(cert_binary)}b\n") cipher = self.sslconn.cipher() if support.verbose and self.server.chatty: sys.stdout.write(" server: connection cipher is now " + str(cipher) + "\n") return True def read(self): if self.sslconn: return self.sslconn.read() else: return self.sock.recv(1024) def write(self, bytes): if self.sslconn: return self.sslconn.write(bytes) else: return self.sock.send(bytes) def close(self): if self.sslconn: self.sslconn.close() else: self.sock.close() def run(self): self.running = True if not self.server.starttls_server: if not self.wrap_conn(): return while self.running: try: msg = self.read() stripped = msg.strip() if not stripped: # eof, so quit this handler self.running = False try: self.sock = self.sslconn.unwrap() except OSError: # Many tests shut the TCP connection down # without an SSL shutdown. This causes # unwrap() to raise OSError with errno=0! 
pass else: self.sslconn = None self.close() elif stripped == b'over': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: client closed connection\n") self.close() return elif (self.server.starttls_server and stripped == b'STARTTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read STARTTLS from client, sending OK...\n") self.write(b"OK\n") if not self.wrap_conn(): return elif (self.server.starttls_server and self.sslconn and stripped == b'ENDTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read ENDTLS from client, sending OK...\n") self.write(b"OK\n") self.sock = self.sslconn.unwrap() self.sslconn = None if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: connection is now unencrypted...\n") elif stripped == b'CB tls-unique': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read CB tls-unique from client, sending our CB data...\n") data = self.sslconn.get_channel_binding("tls-unique") self.write(repr(data).encode("us-ascii") + b"\n") elif stripped == b'PHA': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: initiating post handshake auth\n") try: self.sslconn.verify_client_post_handshake() except ssl.SSLError as e: self.write(repr(e).encode("us-ascii") + b"\n") else: self.write(b"OK\n") elif stripped == b'HASCERT': if self.sslconn.getpeercert() is not None: self.write(b'TRUE\n') else: self.write(b'FALSE\n') elif stripped == b'GETCERT': cert = self.sslconn.getpeercert() self.write(repr(cert).encode("us-ascii") + b"\n") elif stripped == b'VERIFIEDCHAIN': certs = self.sslconn._sslobj.get_verified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") elif stripped == b'UNVERIFIEDCHAIN': certs = self.sslconn._sslobj.get_unverified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") else: if (support.verbose and self.server.connectionchatty): ctype = (self.sslconn and "encrypted") or "unencrypted" sys.stdout.write(" server: read %r (%s), sending back %r (%s)...\n" % (msg, ctype, msg.lower(), ctype)) self.write(msg.lower()) except OSError as e: # handles SSLError and socket errors if self.server.chatty and support.verbose: if isinstance(e, ConnectionError): # OpenSSL 1.1.1 sometimes raises # ConnectionResetError when connection is not # shut down gracefully. 
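# --- Editorial sketch (not part of the vendored test file) ------------------
# The ConnectionHandler above implements a toy line-based protocol in which a
# client sends b"STARTTLS", waits for b"OK", and only then upgrades the same
# TCP connection to TLS.  A minimal client for that exchange could look like
# this; the function name, host/port and client_context are hypothetical, and
# client_context is assumed to trust the server's certificate.
def _example_starttls_client(host, port, client_context):
    import socket
    sock = socket.create_connection((host, port))
    sock.sendall(b"STARTTLS\n")
    reply = sock.recv(1024)
    if not reply.strip().lower().startswith(b"ok"):
        raise RuntimeError("server refused STARTTLS: %r" % (reply,))
    # Upgrade the existing connection in place.
    tls_sock = client_context.wrap_socket(sock, server_hostname=host)
    tls_sock.sendall(b"ping\n")
    return tls_sock.recv(1024)
# ---------------------------------------------------------------------------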
print( f" Connection reset by peer: {self.addr}" ) else: handle_error("Test server failure:\n") try: self.write(b"ERROR\n") except OSError: pass self.close() self.running = False # normally, we'd just stop here, but for the test # harness, we want to stop the server self.server.stop() def __init__(self, certificate=None, ssl_version=None, certreqs=None, cacerts=None, chatty=True, connectionchatty=False, starttls_server=False, alpn_protocols=None, ciphers=None, context=None): if context: self.context = context else: self.context = ssl.SSLContext(ssl_version if ssl_version is not None else ssl.PROTOCOL_TLS_SERVER) self.context.verify_mode = (certreqs if certreqs is not None else ssl.CERT_NONE) if cacerts: self.context.load_verify_locations(cacerts) if certificate: self.context.load_cert_chain(certificate) if alpn_protocols: self.context.set_alpn_protocols(alpn_protocols) if ciphers: self.context.set_ciphers(ciphers) self.chatty = chatty self.connectionchatty = connectionchatty self.starttls_server = starttls_server self.sock = socket.socket() self.port = socket_helper.bind_port(self.sock) self.flag = None self.active = False self.selected_alpn_protocols = [] self.shared_ciphers = [] self.conn_errors = [] threading.Thread.__init__(self) self.daemon = True def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): self.stop() self.join() def start(self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.sock.settimeout(1.0) self.sock.listen(5) self.active = True if self.flag: # signal an event self.flag.set() while self.active: try: newconn, connaddr = self.sock.accept() if support.verbose and self.chatty: sys.stdout.write(' server: new connection from ' + repr(connaddr) + '\n') handler = self.ConnectionHandler(self, newconn, connaddr) handler.start() handler.join() except TimeoutError as e: if support.verbose: sys.stdout.write(f' connection timeout {e!r}\n') except KeyboardInterrupt: self.stop() except BaseException as e: if support.verbose and self.chatty: sys.stdout.write( ' connection handling failed: ' + repr(e) + '\n') self.close() def close(self): if self.sock is not None: self.sock.close() self.sock = None def stop(self): self.active = False class AsyncoreEchoServer(threading.Thread): # this one's based on asyncore.dispatcher class EchoServer (asyncore.dispatcher): class ConnectionHandler(asyncore.dispatcher_with_send): def __init__(self, conn, certfile): self.socket = test_wrap_socket(conn, server_side=True, certfile=certfile, do_handshake_on_connect=False) asyncore.dispatcher_with_send.__init__(self, self.socket) self._ssl_accepting = True self._do_ssl_handshake() def readable(self): if isinstance(self.socket, ssl.SSLSocket): while self.socket.pending() > 0: self.handle_read_event() return True def _do_ssl_handshake(self): try: self.socket.do_handshake() except (ssl.SSLWantReadError, ssl.SSLWantWriteError): return except ssl.SSLEOFError: return self.handle_close() except ssl.SSLError: raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def handle_read(self): if self._ssl_accepting: self._do_ssl_handshake() else: data = self.recv(1024) if support.verbose: sys.stdout.write(" server: read %s from client\n" % repr(data)) if not data: self.close() else: self.send(data.lower()) def handle_close(self): self.close() if support.verbose: sys.stdout.write(" server: closed connection %s\n" % self.socket) def handle_error(self): raise def 
__init__(self, certfile): self.certfile = certfile sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(sock, '') asyncore.dispatcher.__init__(self, sock) self.listen(5) def handle_accepted(self, sock_obj, addr): if support.verbose: sys.stdout.write(" server: new connection from %s:%s\n" %addr) self.ConnectionHandler(sock_obj, self.certfile) def handle_error(self): raise def __init__(self, certfile): self.flag = None self.active = False self.server = self.EchoServer(certfile) self.port = self.server.port threading.Thread.__init__(self) self.daemon = True def __str__(self): return "<%s %s>" % (self.__class__.__name__, self.server) def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): if support.verbose: sys.stdout.write(" cleanup: stopping server.\n") self.stop() if support.verbose: sys.stdout.write(" cleanup: joining server thread.\n") self.join() if support.verbose: sys.stdout.write(" cleanup: successfully joined.\n") # make sure that ConnectionHandler is removed from socket_map asyncore.close_all(ignore_all=True) def start (self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.active = True if self.flag: self.flag.set() while self.active: try: asyncore.loop(1) except: pass def stop(self): self.active = False self.server.close() def server_params_test(client_context, server_context, indata=b"FOO\n", chatty=True, connectionchatty=False, sni_name=None, session=None): """ Launch a server, connect a client to it and try various reads and writes. """ stats = {} server = ThreadedEchoServer(context=server_context, chatty=chatty, connectionchatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=sni_name, session=session) as s: s.connect((HOST, server.port)) for arg in [indata, bytearray(indata), memoryview(indata)]: if connectionchatty: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(arg) outdata = s.read() if connectionchatty: if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): raise AssertionError( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if connectionchatty: if support.verbose: sys.stdout.write(" client: closing connection.\n") stats.update({ 'compression': s.compression(), 'cipher': s.cipher(), 'peercert': s.getpeercert(), 'client_alpn_protocol': s.selected_alpn_protocol(), 'version': s.version(), 'session_reused': s.session_reused, 'session': s.session, }) s.close() stats['server_alpn_protocols'] = server.selected_alpn_protocols stats['server_shared_ciphers'] = server.shared_ciphers return stats def try_protocol_combo(server_protocol, client_protocol, expect_success, certsreqs=None, server_options=0, client_options=0): """ Try to SSL-connect using *client_protocol* to *server_protocol*. If *expect_success* is true, assert that the connection succeeds, if it's false, assert that the connection fails. Also, if *expect_success* is a string, assert that it is the protocol version actually used by the connection. 
""" if certsreqs is None: certsreqs = ssl.CERT_NONE certtype = { ssl.CERT_NONE: "CERT_NONE", ssl.CERT_OPTIONAL: "CERT_OPTIONAL", ssl.CERT_REQUIRED: "CERT_REQUIRED", }[certsreqs] if support.verbose: formatstr = (expect_success and " %s->%s %s\n") or " {%s->%s} %s\n" sys.stdout.write(formatstr % (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol), certtype)) with warnings_helper.check_warnings(): # ignore Deprecation warnings client_context = ssl.SSLContext(client_protocol) client_context.options |= client_options server_context = ssl.SSLContext(server_protocol) server_context.options |= server_options min_version = PROTOCOL_TO_TLS_VERSION.get(client_protocol, None) if (min_version is not None # SSLContext.minimum_version is only available on recent OpenSSL # (setter added in OpenSSL 1.1.0, getter added in OpenSSL 1.1.1) and hasattr(server_context, 'minimum_version') and server_protocol == ssl.PROTOCOL_TLS and server_context.minimum_version > min_version ): # If OpenSSL configuration is strict and requires more recent TLS # version, we have to change the minimum to test old TLS versions. with warnings_helper.check_warnings(): server_context.minimum_version = min_version # NOTE: we must enable "ALL" ciphers on the client, otherwise an # SSLv23 client will send an SSLv3 hello (rather than SSLv2) # starting from OpenSSL 1.0.0 (see issue #8322). if client_context.protocol == ssl.PROTOCOL_TLS: client_context.set_ciphers("ALL") seclevel_workaround(server_context, client_context) for ctx in (client_context, server_context): ctx.verify_mode = certsreqs ctx.load_cert_chain(SIGNED_CERTFILE) ctx.load_verify_locations(SIGNING_CA) try: stats = server_params_test(client_context, server_context, chatty=False, connectionchatty=False) # Protocol mismatch can result in either an SSLError, or a # "Connection reset by peer" error. except ssl.SSLError: if expect_success: raise except OSError as e: if expect_success or e.errno != errno.ECONNRESET: raise else: if not expect_success: raise AssertionError( "Client protocol %s succeeded with server protocol %s!" 
% (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol))) elif (expect_success is not True and expect_success != stats['version']): raise AssertionError("version mismatch: expected %r, got %r" % (expect_success, stats['version'])) class ThreadedTests(unittest.TestCase): @support.requires_resource('walltime') def test_echo(self): """Basic test of an SSL client connecting to a server""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_SERVER): server_params_test(client_context=client_context, server_context=server_context, chatty=True, connectionchatty=True, sni_name=hostname) client_context.check_hostname = False with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_SERVER): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=server_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception)) def test_getpeercert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), do_handshake_on_connect=False, server_hostname=hostname) as s: s.connect((HOST, server.port)) # getpeercert() raise ValueError while the handshake isn't # done. with self.assertRaises(ValueError): s.getpeercert() s.do_handshake() cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher() if support.verbose: sys.stdout.write(pprint.pformat(cert) + '\n') sys.stdout.write("Connection cipher is " + str(cipher) + '.\n') if 'subject' not in cert: self.fail("No subject field in certificate: %s." 
% pprint.pformat(cert)) if ((('organizationName', 'Python Software Foundation'),) not in cert['subject']): self.fail( "Missing or invalid 'organizationName' field in certificate subject; " "should be 'Python Software Foundation'.") self.assertIn('notBefore', cert) self.assertIn('notAfter', cert) before = ssl.cert_time_to_seconds(cert['notBefore']) after = ssl.cert_time_to_seconds(cert['notAfter']) self.assertLess(before, after) def test_crl_check(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(client_context.verify_flags, ssl.VERIFY_DEFAULT | tf) # VERIFY_DEFAULT should pass server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # VERIFY_CRL_CHECK_LEAF without a loaded CRL file fails client_context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF server = ThreadedEchoServer(context=server_context, chatty=True) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaisesRegex(ssl.SSLError, regex): s.connect((HOST, server.port)) # now load a CRL file. The CRL file is signed by the CA. client_context.load_verify_locations(CRLFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_check_hostname(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) # Allow for flexible libssl error messages. 
regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) with server: with client_context.wrap_socket(socket.socket(), server_hostname="invalid") as s: with self.assertRaisesRegex(ssl.CertificateError, regex): s.connect((HOST, server.port)) # missing server_hostname arg should cause an exception, too server = ThreadedEchoServer(context=server_context, chatty=True) with server: with socket.socket() as s: with self.assertRaisesRegex(ValueError, "check_hostname requires server_hostname"): client_context.wrap_socket(s) @unittest.skipUnless( ssl.HAS_NEVER_CHECK_COMMON_NAME, "test requires hostname_checks_common_name" ) def test_hostname_checks_common_name(self): client_context, server_context, hostname = testing_context() assert client_context.hostname_checks_common_name client_context.hostname_checks_common_name = False # default cert has a SAN server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) client_context, server_context, hostname = testing_context(NOSANFILE) client_context.hostname_checks_common_name = False server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLCertVerificationError): s.connect((HOST, server.port)) def test_ecc_cert(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC cert server_context.load_cert_chain(SIGNED_CERTFILE_ECC) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_dual_rsa_ecc(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) # TODO: fix TLSv1.3 once SSLContext can restrict signature # algorithms. 
client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # only ECDSA certs client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC and RSA key/cert pairs server_context.load_cert_chain(SIGNED_CERTFILE_ECC) server_context.load_cert_chain(SIGNED_CERTFILE) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_check_hostname_idn(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(IDNSANSFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations(SIGNING_CA) # correct hostname should verify, when specified in several # different ways idn_hostnames = [ ('könig.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), (b'xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('königsgäßchen.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), ('xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), (b'xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), # ('königsgäßchen.idna2008.pythontest.net', # 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ('xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), (b'xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ] for server_hostname, expected_hostname in idn_hostnames: server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=server_hostname) as s: self.assertEqual(s.server_hostname, expected_hostname) s.connect((HOST, server.port)) cert = s.getpeercert() self.assertEqual(s.server_hostname, expected_hostname) self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname="python.example.org") as s: with self.assertRaises(ssl.CertificateError): s.connect((HOST, server.port)) with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeError): context.wrap_socket(socket.socket(), server_hostname='.pythontest.net') with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeDecodeError): context.wrap_socket(socket.socket(), server_hostname=b'k\xf6nig.idn.pythontest.net') def test_wrong_cert_tls12(self): """Connecting when the server rejects the client's certificate Launch a server with CERT_REQUIRED, and check that trying to connect to it with a wrong client certificate fails. 
""" client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) # require TLS client authentication server_context.verify_mode = ssl.CERT_REQUIRED # TLS 1.3 has different handshake client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: try: # Expect either an SSL error about the server rejecting # the connection, or a low-level connection reset (which # sometimes happens on Windows) s.connect((HOST, server.port)) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") @requires_tls_version('TLSv1_3') def test_wrong_cert_tls13(self): client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.minimum_version = ssl.TLSVersion.TLSv1_3 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex( ssl.SSLError, 'alert unknown ca|EOF occurred|TLSV1_ALERT_UNKNOWN_CA' ): # TLS 1.3 perform client cert exchange after handshake s.write(b'data') s.read(1000) s.write(b'should have failed already') s.read(1000) def test_rude_shutdown(self): """A brutal shutdown of an SSL server should raise an OSError in the client when attempting handshake. """ listener_ready = threading.Event() listener_gone = threading.Event() s = socket.socket() port = socket_helper.bind_port(s, HOST) # `listener` runs in a thread. It sits in an accept() until # the main thread connects. Then it rudely closes the socket, # and sets Event `listener_gone` to let the main thread know # the socket is gone. def listener(): s.listen() listener_ready.set() newsock, addr = s.accept() newsock.close() s.close() listener_gone.set() def connector(): listener_ready.wait() with socket.socket() as c: c.connect((HOST, port)) listener_gone.wait() try: ssl_sock = test_wrap_socket(c) except OSError: pass else: self.fail('connecting to closed SSL socket should have failed') t = threading.Thread(target=listener) t.start() try: connector() finally: t.join() def test_ssl_cert_verify_error(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: try: s.connect((HOST, server.port)) self.fail("Expected connection failure") except ssl.SSLError as e: msg = 'unable to get local issuer certificate' self.assertIsInstance(e, ssl.SSLCertVerificationError) self.assertEqual(e.verify_code, 20) self.assertEqual(e.verify_message, msg) # Allow for flexible libssl error messages. 
regex = f"({msg}|CERTIFICATE_VERIFY_FAILED)" self.assertRegex(repr(e), regex) regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRegex(repr(e), regex) def test_PROTOCOL_TLS(self): """Connecting to an SSLv23 server with various client options""" if support.verbose: sys.stdout.write("\n") if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_OPTIONAL) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_REQUIRED) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) # Server with specific SSL options if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, server_options=ssl.OP_NO_SSLv3) # Will choose TLSv1 try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, server_options=ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, False, server_options=ssl.OP_NO_TLSv1) @requires_tls_version('SSLv3') def test_protocol_sslv3(self): """Connecting to an SSLv3 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3') try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @requires_tls_version('TLSv1') def test_protocol_tlsv1(self): """Connecting to a TLSv1 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1') try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) @requires_tls_version('TLSv1_1') def test_protocol_tlsv1_1(self): """Connecting to a TLSv1.1 server with various client options. 
Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_1) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) @requires_tls_version('TLSv1_2') def test_protocol_tlsv1_2(self): """Connecting to a TLSv1.2 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2', server_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2, client_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2,) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_2) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2') if has_tls_protocol(ssl.PROTOCOL_TLSv1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False) if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) def test_starttls(self): """Switching from clear text to encrypted and back again.""" msgs = (b"msg 1", b"MSG 2", b"STARTTLS", b"MSG 3", b"msg 4", b"ENDTLS", b"msg 5", b"msg 6") server = ThreadedEchoServer(CERTFILE, starttls_server=True, chatty=True, connectionchatty=True) wrapped = False with server: s = socket.socket() s.setblocking(True) s.connect((HOST, server.port)) if support.verbose: sys.stdout.write("\n") for indata in msgs: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) if wrapped: conn.write(indata) outdata = conn.read() else: s.send(indata) outdata = s.recv(1024) msg = outdata.strip().lower() if indata == b"STARTTLS" and msg.startswith(b"ok"): # STARTTLS ok, switch to secure mode if support.verbose: sys.stdout.write( " client: read %r from server, starting TLS...\n" % msg) conn = test_wrap_socket(s) wrapped = True elif indata == b"ENDTLS" and msg.startswith(b"ok"): # ENDTLS ok, switch back to clear text if support.verbose: sys.stdout.write( " client: read %r from server, ending TLS...\n" % msg) s = conn.unwrap() wrapped = False else: if support.verbose: sys.stdout.write( " client: read %r from server\n" % msg) if support.verbose: sys.stdout.write(" client: closing connection.\n") if wrapped: conn.write(b"over\n") else: s.send(b"over\n") if wrapped: conn.close() else: s.close() def test_socketserver(self): """Using socketserver to create and manage SSL connections.""" server = make_https_server(self, certfile=SIGNED_CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') # Get this test file itself: with open(__file__, 'rb') as f: d1 = f.read() d2 = '' # now fetch the same data from the HTTPS server url = f'https://localhost:{server.port}/test_ssl.py' context = ssl.create_default_context(cafile=SIGNING_CA) f = urllib.request.urlopen(url, context=context) try: dlen = f.info().get("content-length") if dlen and (int(dlen) > 0): d2 = f.read(int(dlen)) if support.verbose: sys.stdout.write( " client: 
read %d bytes from remote server '%s'\n" % (len(d2), server)) finally: f.close() self.assertEqual(d1, d2) def test_asyncore_server(self): """Check the example asyncore integration.""" if support.verbose: sys.stdout.write("\n") indata = b"FOO\n" server = AsyncoreEchoServer(CERTFILE) with server: s = test_wrap_socket(socket.socket()) s.connect(('127.0.0.1', server.port)) if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(indata) outdata = s.read() if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): self.fail( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if support.verbose: sys.stdout.write(" client: closing connection.\n") s.close() if support.verbose: sys.stdout.write(" client: connection closed.\n") def test_recv_send(self): """Test recv(), send() and friends.""" if support.verbose: sys.stdout.write("\n") server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) # helper methods for standardising recv* method signatures def _recv_into(): b = bytearray(b"\0"*100) count = s.recv_into(b) return b[:count] def _recvfrom_into(): b = bytearray(b"\0"*100) count, addr = s.recvfrom_into(b) return b[:count] # (name, method, expect success?, *args, return value func) send_methods = [ ('send', s.send, True, [], len), ('sendto', s.sendto, False, ["some.address"], len), ('sendall', s.sendall, True, [], lambda x: None), ] # (name, method, whether to expect success, *args) recv_methods = [ ('recv', s.recv, True, []), ('recvfrom', s.recvfrom, False, ["some.address"]), ('recv_into', _recv_into, True, []), ('recvfrom_into', _recvfrom_into, False, []), ] data_prefix = "PREFIX_" for (meth_name, send_meth, expect_success, args, ret_val_meth) in send_methods: indata = (data_prefix + meth_name).encode('ascii') try: ret = send_meth(indata, *args) msg = "sending with {}".format(meth_name) self.assertEqual(ret, ret_val_meth(indata), msg=msg) outdata = s.read() if outdata != indata.lower(): self.fail( "While sending with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to send with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) for meth_name, recv_meth, expect_success, args in recv_methods: indata = (data_prefix + meth_name).encode('ascii') try: s.send(indata) outdata = recv_meth(*args) if outdata != indata.lower(): self.fail( "While receiving with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to receive with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " 
"exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) # consume data s.read() # read(-1, buffer) is supported, even though read(-1) is not data = b"data" s.send(data) buffer = bytearray(len(data)) self.assertEqual(s.read(-1, buffer), len(data)) self.assertEqual(buffer, data) # sendall accepts bytes-like objects if ctypes is not None: ubyte = ctypes.c_ubyte * len(data) byteslike = ubyte.from_buffer_copy(data) s.sendall(byteslike) self.assertEqual(s.read(), data) # Make sure sendmsg et al are disallowed to avoid # inadvertent disclosure of data and/or corruption # of the encrypted data stream self.assertRaises(NotImplementedError, s.dup) self.assertRaises(NotImplementedError, s.sendmsg, [b"data"]) self.assertRaises(NotImplementedError, s.recvmsg, 100) self.assertRaises(NotImplementedError, s.recvmsg_into, [bytearray(100)]) s.write(b"over\n") self.assertRaises(ValueError, s.recv, -1) self.assertRaises(ValueError, s.read, -1) s.close() def test_recv_zero(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) # recv/read(0) should return no data s.send(b"data") self.assertEqual(s.recv(0), b"") self.assertEqual(s.read(0), b"") self.assertEqual(s.read(), b"data") # Should not block if the other end sends no data s.setblocking(False) self.assertEqual(s.recv(0), b"") self.assertEqual(s.recv_into(bytearray()), 0) def test_recv_into_buffer_protocol_len(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) s.send(b"data") buf = array.array('I', [0, 0]) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf)[:4], b"data") class B(bytearray): def __len__(self): 1/0 s.send(b"data") buf = B(6) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf), b"data\0\0") def test_nonblocking_send(self): server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) s.setblocking(False) # If we keep sending data, at some point the buffers # will be full and the call will block buf = bytearray(8192) def fill_buffer(): while True: s.send(buf) self.assertRaises((ssl.SSLWantWriteError, ssl.SSLWantReadError), fill_buffer) # Now read all the output and discard it s.setblocking(True) s.close() def test_handshake_timeout(self): # Issue #5103: SSL handshake must respect the socket timeout server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) started = threading.Event() finish = False def serve(): server.listen() started.set() conns = [] while not finish: r, w, e = select.select([server], [], [], 0.1) if server in r: # Let the socket hang around rather than having # it closed by garbage collection. 
conns.append(server.accept()[0]) for sock in conns: sock.close() t = threading.Thread(target=serve) t.start() started.wait() try: try: c = socket.socket(socket.AF_INET) c.settimeout(0.2) c.connect((host, port)) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", test_wrap_socket, c) finally: c.close() try: c = socket.socket(socket.AF_INET) c = test_wrap_socket(c) c.settimeout(0.2) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", c.connect, (host, port)) finally: c.close() finally: finish = True t.join() server.close() def test_server_accept(self): # Issue #16357: accept() on a SSLSocket created through # SSLContext.wrap_socket(). client_ctx, server_ctx, hostname = testing_context() server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) server = server_ctx.wrap_socket(server, server_side=True) self.assertTrue(server.server_side) evt = threading.Event() remote = None peer = None def serve(): nonlocal remote, peer server.listen() # Block on the accept and wait on the connection to close. evt.set() remote, peer = server.accept() remote.send(remote.recv(4)) t = threading.Thread(target=serve) t.start() # Client wait until server setup and perform a connect. evt.wait() client = client_ctx.wrap_socket( socket.socket(), server_hostname=hostname ) client.connect((hostname, port)) client.send(b'data') client.recv() client_addr = client.getsockname() client.close() t.join() remote.close() server.close() # Sanity checks. self.assertIsInstance(remote, ssl.SSLSocket) self.assertEqual(peer, client_addr) def test_getpeercert_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.getpeercert() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_do_handshake_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.do_handshake() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_no_shared_ciphers(self): client_context, server_context, hostname = testing_context() # OpenSSL enables all TLS 1.3 ciphers, enforce TLS 1.2 for test client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Force different suites on client and server client_context.set_ciphers("AES128") server_context.set_ciphers("AES256") with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(OSError): s.connect((HOST, server.port)) self.assertIn("NO_SHARED_CIPHER", server.conn_errors[0]) def test_version_basic(self): """ Basic tests for SSLSocket.version(). More tests are done in the test_protocol_*() methods. 
""" context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with ThreadedEchoServer(CERTFILE, ssl_version=ssl.PROTOCOL_TLS_SERVER, chatty=False) as server: with context.wrap_socket(socket.socket()) as s: self.assertIs(s.version(), None) self.assertIs(s._sslobj, None) s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.3') self.assertIs(s._sslobj, None) self.assertIs(s.version(), None) @requires_tls_version('TLSv1_3') def test_tls1_3(self): client_context, server_context, hostname = testing_context() client_context.minimum_version = ssl.TLSVersion.TLSv1_3 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn(s.cipher()[0], { 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256', }) self.assertEqual(s.version(), 'TLSv1.3') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_tlsv1_2(self): client_context, server_context, hostname = testing_context() # client TLSv1.0 to 1.2 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # server only TLSv1.2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 server_context.maximum_version = ssl.TLSVersion.TLSv1_2 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.2') @requires_tls_version('TLSv1_1') @ignore_deprecation def test_min_max_version_tlsv1_1(self): client_context, server_context, hostname = testing_context() # client 1.0 to 1.2, server 1.0 to 1.1 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1 server_context.maximum_version = ssl.TLSVersion.TLSv1_1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.1') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_mismatch(self): client_context, server_context, hostname = testing_context() # client 1.0, server 1.2 (mismatch) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 client_context.maximum_version = ssl.TLSVersion.TLSv1 client_context.minimum_version = ssl.TLSVersion.TLSv1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError) as e: s.connect((HOST, server.port)) self.assertRegex(str(e.exception), "(alert|ALERT)") @requires_tls_version('SSLv3') def test_min_max_version_sslv3(self): client_context, server_context, hostname = testing_context() server_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.maximum_version = ssl.TLSVersion.SSLv3 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: 
s.connect((HOST, server.port)) self.assertEqual(s.version(), 'SSLv3') def test_default_ecdh_curve(self): # Issue #21015: elliptic curve-based Diffie Hellman key exchange # should be enabled by default on SSL contexts. client_context, server_context, hostname = testing_context() # TLSv1.3 defaults to PFS key agreement and no longer has KEA in # cipher name. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Prior to OpenSSL 1.0.0, ECDH ciphers have to be enabled # explicitly using the 'ECCdraft' cipher alias. Otherwise, # our default cipher list should prefer ECDH-based ciphers # automatically. with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn("ECDH", s.cipher()[0]) @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): """Test tls-unique channel binding.""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # tls-unique is not defined for TLSv1.3 # https://datatracker.ietf.org/doc/html/rfc8446#appendix-C.5 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=True, connectionchatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # get the data cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( " got channel binding data: {0!r}\n".format(cb_data)) # check if it is sane self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 # and compare with the peers version s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(cb_data).encode("us-ascii")) # now, again with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) new_cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( "got another channel binding data: {0!r}\n".format( new_cb_data) ) # is it really unique self.assertNotEqual(cb_data, new_cb_data) self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(new_cb_data).encode("us-ascii")) def test_compression(self): client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) if support.verbose: sys.stdout.write(" got compression: {!r}\n".format(stats['compression'])) self.assertIn(stats['compression'], { None, 'ZLIB', 'RLE' }) @unittest.skipUnless(hasattr(ssl, 'OP_NO_COMPRESSION'), "ssl.OP_NO_COMPRESSION needed for this test") def test_compression_disabled(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_COMPRESSION server_context.options |= ssl.OP_NO_COMPRESSION stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['compression'], None) def test_legacy_server_connect(self): client_context, server_context, hostname = testing_context() client_context.options |= 
ssl.OP_LEGACY_SERVER_CONNECT server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_no_legacy_server_connect(self): client_context, server_context, hostname = testing_context() client_context.options &= ~ssl.OP_LEGACY_SERVER_CONNECT server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_dh_params(self): # Check we can get a connection with ephemeral Diffie-Hellman client_context, server_context, hostname = testing_context() # test scenario needs TLS <= 1.2 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.load_dh_params(DHFILE) server_context.set_ciphers("kEDH") server_context.maximum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) cipher = stats["cipher"][0] parts = cipher.split("-") if "ADH" not in parts and "EDH" not in parts and "DHE" not in parts: self.fail("Non-DH key exchange: " + cipher[0]) def test_ecdh_curve(self): # server secp384r1, client auto client_context, server_context, hostname = testing_context() server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server auto, client secp384r1 client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server / client curve mismatch client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("prime256v1") server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 with self.assertRaises(ssl.SSLError): server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_selected_alpn_protocol(self): # selected_alpn_protocol() is None unless ALPN is used. client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_selected_alpn_protocol_if_server_uses_alpn(self): # selected_alpn_protocol() is None unless ALPN is used by the client. 
client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(['foo', 'bar']) stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_alpn_protocols(self): server_protocols = ['foo', 'bar', 'milkshake'] protocol_tests = [ (['foo', 'bar'], 'foo'), (['bar', 'foo'], 'foo'), (['milkshake'], 'milkshake'), (['http/3.0', 'http/4.0'], None) ] for client_protocols, expected in protocol_tests: client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(server_protocols) client_context.set_alpn_protocols(client_protocols) try: stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) except ssl.SSLError as e: stats = e msg = "failed trying %s (s) and %s (c).\n" \ "was expecting %s, but got %%s from the %%s" \ % (str(server_protocols), str(client_protocols), str(expected)) client_result = stats['client_alpn_protocol'] self.assertEqual(client_result, expected, msg % (client_result, "client")) server_result = stats['server_alpn_protocols'][-1] \ if len(stats['server_alpn_protocols']) else 'nothing' self.assertEqual(server_result, expected, msg % (server_result, "server")) def test_npn_protocols(self): assert not ssl.HAS_NPN def sni_contexts(self): server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) other_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) other_context.load_cert_chain(SIGNED_CERTFILE2) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) return server_context, other_context, client_context def check_common_name(self, stats, name): cert = stats['peercert'] self.assertIn((('commonName', name),), cert['subject']) def test_sni_callback(self): calls = [] server_context, other_context, client_context = self.sni_contexts() client_context.check_hostname = False def servername_cb(ssl_sock, server_name, initial_context): calls.append((server_name, initial_context)) if server_name is not None: ssl_sock.context = other_context server_context.set_servername_callback(servername_cb) stats = server_params_test(client_context, server_context, chatty=True, sni_name='supermessage') # The hostname was fetched properly, and the certificate was # changed for the connection. 
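        # The callback receives the SSLSocket, the SNI hostname sent by the
        # client, and the context originally attached to the socket; assigning
        # ssl_sock.context inside the callback is what switches the served
        # certificate to the one loaded into other_context.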
self.assertEqual(calls, [("supermessage", server_context)]) # CERTFILE4 was selected self.check_common_name(stats, 'fakehostname') calls = [] # The callback is called with server_name=None stats = server_params_test(client_context, server_context, chatty=True, sni_name=None) self.assertEqual(calls, [(None, server_context)]) self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) # Check disabling the callback calls = [] server_context.set_servername_callback(None) stats = server_params_test(client_context, server_context, chatty=True, sni_name='notfunny') # Certificate didn't change self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) self.assertEqual(calls, []) def test_sni_callback_alert(self): # Returning a TLS alert is reflected to the connecting client server_context, other_context, client_context = self.sni_contexts() def cb_returning_alert(ssl_sock, server_name, initial_context): return ssl.ALERT_DESCRIPTION_ACCESS_DENIED server_context.set_servername_callback(cb_returning_alert) with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_ACCESS_DENIED') def test_sni_callback_raising(self): # Raising fails the connection with a TLS handshake failure alert. server_context, other_context, client_context = self.sni_contexts() def cb_raising(ssl_sock, server_name, initial_context): 1/0 server_context.set_servername_callback(cb_raising) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') # Allow for flexible libssl error messages. regex = "(SSLV3_ALERT_HANDSHAKE_FAILURE|NO_PRIVATE_VALUE)" self.assertRegex(cm.exception.reason, regex) self.assertEqual(catch.unraisable.exc_type, ZeroDivisionError) def test_sni_callback_wrong_return_type(self): # Returning the wrong return type terminates the TLS connection # with an internal error alert. 
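        # A servername callback is expected to return either None or an
        # ssl.ALERT_DESCRIPTION_* integer; anything else (here a str) is a
        # type error and surfaces to the client as an internal-error alert.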
server_context, other_context, client_context = self.sni_contexts() def cb_wrong_return_type(ssl_sock, server_name, initial_context): return "foo" server_context.set_servername_callback(cb_wrong_return_type) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_INTERNAL_ERROR') self.assertEqual(catch.unraisable.exc_type, TypeError) def test_shared_ciphers(self): client_context, server_context, hostname = testing_context() client_context.set_ciphers("AES128:AES256") server_context.set_ciphers("AES256:eNULL") expected_algs = [ "AES256", "AES-256", # TLS 1.3 ciphers are always enabled "TLS_CHACHA20", "TLS_AES", ] stats = server_params_test(client_context, server_context, sni_name=hostname) ciphers = stats['server_shared_ciphers'][0] self.assertGreater(len(ciphers), 0) for name, tls_version, bits in ciphers: if not any(alg in name for alg in expected_algs): self.fail(name) def test_read_write_after_close_raises_valuerror(self): client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: s = client_context.wrap_socket(socket.socket(), server_hostname=hostname) s.connect((HOST, server.port)) s.close() self.assertRaises(ValueError, s.read, 1024) self.assertRaises(ValueError, s.write, b'hello') def test_sendfile(self): TEST_DATA = b"x" * 512 with open(os_helper.TESTFN, 'wb') as f: f.write(TEST_DATA) self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with open(os_helper.TESTFN, 'rb') as file: s.sendfile(file) self.assertEqual(s.recv(1024), TEST_DATA) def test_session(self): client_context, server_context, hostname = testing_context() # TODO: sessions aren't compatible with TLSv1.3 yet client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # first connection without session stats = server_params_test(client_context, server_context, sni_name=hostname) session = stats['session'] self.assertTrue(session.id) self.assertGreater(session.time, 0) self.assertGreater(session.timeout, 0) self.assertTrue(session.has_ticket) self.assertGreater(session.ticket_lifetime_hint, 0) self.assertFalse(stats['session_reused']) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 1) self.assertEqual(sess_stat['hits'], 0) # reuse session stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 2) self.assertEqual(sess_stat['hits'], 1) self.assertTrue(stats['session_reused']) session2 = stats['session'] self.assertEqual(session2.id, session.id) self.assertEqual(session2, session) self.assertIsNot(session2, session) self.assertGreaterEqual(session2.time, session.time) self.assertGreaterEqual(session2.timeout, session.timeout) # another one without session stats = server_params_test(client_context, server_context, sni_name=hostname) self.assertFalse(stats['session_reused']) session3 = stats['session'] self.assertNotEqual(session3.id, session.id) self.assertNotEqual(session3, session) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 3) 
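        # In session_stats(), 'accept' counts server-side handshakes and
        # 'hits' counts how many of them reused a cached session, so a fresh
        # connection bumps 'accept' but leaves 'hits' unchanged.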
self.assertEqual(sess_stat['hits'], 1) # reuse session again stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) self.assertTrue(stats['session_reused']) session4 = stats['session'] self.assertEqual(session4.id, session.id) self.assertEqual(session4, session) self.assertGreaterEqual(session4.time, session.time) self.assertGreaterEqual(session4.timeout, session.timeout) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 4) self.assertEqual(sess_stat['hits'], 2) def test_session_handling(self): client_context, server_context, hostname = testing_context() client_context2, _, _ = testing_context() # TODO: session reuse does not work with TLSv1.3 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context2.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # session is None before handshake self.assertEqual(s.session, None) self.assertEqual(s.session_reused, None) s.connect((HOST, server.port)) session = s.session self.assertTrue(session) with self.assertRaises(TypeError) as e: s.session = object self.assertEqual(str(e.exception), 'Value is not a SSLSession.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # cannot set session after handshake with self.assertRaises(ValueError) as e: s.session = session self.assertEqual(str(e.exception), 'Cannot set session after handshake.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # can set session before handshake and before the # connection was established s.session = session s.connect((HOST, server.port)) self.assertEqual(s.session.id, session.id) self.assertEqual(s.session, session) self.assertEqual(s.session_reused, True) with client_context2.wrap_socket(socket.socket(), server_hostname=hostname) as s: # cannot re-use session with a different SSLContext with self.assertRaises(ValueError) as e: s.session = session s.connect((HOST, server.port)) self.assertEqual(str(e.exception), 'Session refers to a different SSLContext.') @unittest.skipUnless(has_tls_version('TLSv1_3'), "Test needs TLS 1.3") class TestPostHandshakeAuth(unittest.TestCase): def test_pha_setter(self): protocols = [ ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT ] for protocol in protocols: ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.post_handshake_auth, False) ctx.post_handshake_auth = True self.assertEqual(ctx.post_handshake_auth, True) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, True) ctx.post_handshake_auth = False self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, False) ctx.verify_mode = ssl.CERT_OPTIONAL ctx.post_handshake_auth = True self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) self.assertEqual(ctx.post_handshake_auth, True) def test_pha_required(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') 
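                # The threaded echo server appears to implement a small command
                # protocol: HASCERT reports whether a client certificate is
                # present, PHA triggers verify_client_post_handshake(), and
                # GETCERT echoes back the peer certificate, so the exchange
                # below drives post-handshake auth step by step.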
self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA method just returns true when cert is already available s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'GETCERT') cert_text = s.recv(4096).decode('us-ascii') self.assertIn('Python Software Foundation CA', cert_text) def test_pha_required_nocert(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True def msg_cb(conn, direction, version, content_type, msg_type, data): if support.verbose and content_type == _TLSContentType.ALERT: info = (conn, direction, version, content_type, msg_type, data) sys.stdout.write(f"TLS: {info!r}\n") server_context._msg_callback = msg_cb client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) s.write(b'PHA') # test sometimes fails with EOF error. Test passes as long as # server aborts connection with an error. with self.assertRaisesRegex( ssl.SSLError, '(certificate required|EOF occurred)' ): # receive CertificateRequest data = s.recv(1024) self.assertEqual(data, b'OK\n') # send empty Certificate + Finish s.write(b'HASCERT') # receive alert s.recv(1024) def test_pha_optional(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # check CERT_OPTIONAL server_context.verify_mode = ssl.CERT_OPTIONAL server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_optional_nocert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_OPTIONAL client_context.post_handshake_auth = True server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') # optional doesn't fail when client does not have a cert s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') def test_pha_no_pha_client(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex(ssl.SSLError, 'not server'): s.verify_client_post_handshake() s.write(b'PHA') self.assertIn(b'extension not received', 
s.recv(1024)) def test_pha_no_pha_server(self): # server doesn't have PHA enabled, cert is requested in handshake client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA doesn't fail if there is already a cert s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_not_tls13(self): # TLS 1.2 client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # PHA fails for TLS != 1.3 s.write(b'PHA') self.assertIn(b'WRONG_SSL_VERSION', s.recv(1024)) def test_bpo37428_pha_cert_none(self): # verify that post_handshake_auth does not implicitly enable cert # validation. hostname = SIGNED_CERTFILE_HOSTNAME client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # no cert validation and CA on client side client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) server_context.load_verify_locations(SIGNING_CA) server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # server cert has not been validated self.assertEqual(s.getpeercert(), {}) def test_internal_chain_client(self): client_context, server_context, hostname = testing_context( server_chain=False ) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) vc = s._sslobj.get_verified_chain() self.assertEqual(len(vc), 2) ee, ca = vc uvc = s._sslobj.get_unverified_chain() self.assertEqual(len(uvc), 1) self.assertEqual(ee, uvc[0]) self.assertEqual(hash(ee), hash(uvc[0])) self.assertEqual(repr(ee), repr(uvc[0])) self.assertNotEqual(ee, ca) self.assertNotEqual(hash(ee), hash(ca)) self.assertNotEqual(repr(ee), repr(ca)) self.assertNotEqual(ee.get_info(), ca.get_info()) self.assertIn("CN=localhost", repr(ee)) self.assertIn("CN=our-ca-server", repr(ca)) pem = ee.public_bytes(_ssl.ENCODING_PEM) der = ee.public_bytes(_ssl.ENCODING_DER) self.assertIsInstance(pem, str) self.assertIn("-----BEGIN CERTIFICATE-----", pem) self.assertIsInstance(der, bytes) self.assertEqual( ssl.PEM_cert_to_DER_cert(pem), der ) def test_internal_chain_server(self): client_context, 
server_context, hostname = testing_context() client_context.load_cert_chain(SIGNED_CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) s.write(b'VERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') s.write(b'UNVERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') HAS_KEYLOG = hasattr(ssl.SSLContext, 'keylog_filename') requires_keylog = unittest.skipUnless( HAS_KEYLOG, 'test requires OpenSSL 1.1.1 with keylog callback') class TestSSLDebug(unittest.TestCase): def keylog_lines(self, fname=os_helper.TESTFN): with open(fname) as f: return len(list(f)) @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_defaults(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) self.assertFalse(os.path.isfile(os_helper.TESTFN)) ctx.keylog_filename = os_helper.TESTFN self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) self.assertTrue(os.path.isfile(os_helper.TESTFN)) self.assertEqual(self.keylog_lines(), 1) ctx.keylog_filename = None self.assertEqual(ctx.keylog_filename, None) with self.assertRaises((IsADirectoryError, PermissionError)): # Windows raises PermissionError ctx.keylog_filename = os.path.dirname( os.path.abspath(os_helper.TESTFN)) with self.assertRaises(TypeError): ctx.keylog_filename = 1 @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_filename(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() client_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # header, 5 lines for TLS 1.3 self.assertEqual(self.keylog_lines(), 6) client_context.keylog_filename = None server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 11) client_context.keylog_filename = os_helper.TESTFN server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 21) client_context.keylog_filename = None server_context.keylog_filename = None @requires_keylog @unittest.skipIf(sys.flags.ignore_environment, "test is not compatible with ignore_environment") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_env(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with unittest.mock.patch.dict(os.environ): os.environ['SSLKEYLOGFILE'] = os_helper.TESTFN self.assertEqual(os.environ['SSLKEYLOGFILE'], os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) ctx = ssl.create_default_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) ctx = 
ssl._create_stdlib_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) def test_msg_callback(self): client_context, server_context, hostname = testing_context() def msg_cb(conn, direction, version, content_type, msg_type, data): pass self.assertIs(client_context._msg_callback, None) client_context._msg_callback = msg_cb self.assertIs(client_context._msg_callback, msg_cb) with self.assertRaises(TypeError): client_context._msg_callback = object() def test_msg_callback_tls12(self): client_context, server_context, hostname = testing_context() client_context.maximum_version = ssl.TLSVersion.TLSv1_2 msg = [] def msg_cb(conn, direction, version, content_type, msg_type, data): self.assertIsInstance(conn, ssl.SSLSocket) self.assertIsInstance(data, bytes) self.assertIn(direction, {'read', 'write'}) msg.append((direction, version, content_type, msg_type)) client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn( ("read", TLSVersion.TLSv1_2, _TLSContentType.HANDSHAKE, _TLSMessageType.SERVER_KEY_EXCHANGE), msg ) self.assertIn( ("write", TLSVersion.TLSv1_2, _TLSContentType.CHANGE_CIPHER_SPEC, _TLSMessageType.CHANGE_CIPHER_SPEC), msg ) def test_msg_callback_deadlock_bpo43577(self): client_context, server_context, hostname = testing_context() server_context2 = testing_context()[1] def msg_cb(conn, direction, version, content_type, msg_type, data): pass def sni_cb(sock, servername, ctx): sock.context = server_context2 server_context._msg_callback = msg_cb server_context.sni_callback = sni_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) def set_socket_so_linger_on_with_zero_timeout(sock): sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) class TestPreHandshakeClose(unittest.TestCase): """Verify behavior of close sockets with received data before to the handshake. 
""" class SingleConnectionTestServerThread(threading.Thread): def __init__(self, *, name, call_after_accept, timeout=None): self.call_after_accept = call_after_accept self.received_data = b'' # set by .run() self.wrap_error = None # set by .run() self.listener = None # set by .start() self.port = None # set by .start() if timeout is None: self.timeout = support.SHORT_TIMEOUT else: self.timeout = timeout super().__init__(name=name) def __enter__(self): self.start() return self def __exit__(self, *args): try: if self.listener: self.listener.close() except OSError: pass self.join() self.wrap_error = None # avoid dangling references def start(self): self.ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.ssl_ctx.verify_mode = ssl.CERT_REQUIRED self.ssl_ctx.load_verify_locations(cafile=ONLYCERT) self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) self.listener = socket.socket() self.port = socket_helper.bind_port(self.listener) self.listener.settimeout(self.timeout) self.listener.listen(1) super().start() def run(self): try: conn, address = self.listener.accept() except TimeoutError: # on timeout, just close the listener return finally: self.listener.close() with conn: if self.call_after_accept(conn): return try: tls_socket = self.ssl_ctx.wrap_socket(conn, server_side=True) except OSError as err: # ssl.SSLError inherits from OSError self.wrap_error = err else: try: self.received_data = tls_socket.recv(400) except OSError: pass # closed, protocol error, etc. def non_linux_skip_if_other_okay_error(self, err): if sys.platform == "linux": return # Expect the full test setup to always work on Linux. if (isinstance(err, ConnectionResetError) or (isinstance(err, OSError) and err.errno == errno.EINVAL) or re.search('wrong.version.number', getattr(err, "reason", ""), re.I)): # On Windows the TCP RST leads to a ConnectionResetError # (ECONNRESET) which Linux doesn't appear to surface to userspace. # If wrap_socket() winds up on the "if connected:" path and doing # the actual wrapping... we get an SSLError from OpenSSL. Typically # WRONG_VERSION_NUMBER. While appropriate, neither is the scenario # we're specifically trying to test. The way this test is written # is known to work on Linux. We'll skip it anywhere else that it # does not present as doing so. try: self.skipTest(f"Could not recreate conditions on {sys.platform}:" f" {err=}") finally: # gh-108342: Explicitly break the reference cycle err = None # If maintaining this conditional winds up being a problem. # just turn this into an unconditional skip anything but Linux. # The important thing is that our CI has the logic covered. def test_preauth_data_to_tls_server(self): server_accept_called = threading.Event() ready_for_server_wrap_socket = threading.Event() def call_after_accept(unused): server_accept_called.set() if not ready_for_server_wrap_socket.wait(support.SHORT_TIMEOUT): raise RuntimeError("wrap_socket event never set, test may fail.") return False # Tell the server thread to continue. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_server") self.enterContext(server) # starts it & unittest.TestCase stops it. with socket.socket() as client: client.connect(server.listener.getsockname()) # This forces an immediate connection close via RST on .close(). 
set_socket_so_linger_on_with_zero_timeout(client) client.setblocking(False) server_accept_called.wait() client.send(b"DELETE /data HTTP/1.0\r\n\r\n") client.close() # RST ready_for_server_wrap_socket.set() server.join() wrap_error = server.wrap_error server.wrap_error = None try: self.assertEqual(b"", server.received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle wrap_error = None server = None def test_preauth_data_to_tls_client(self): server_can_continue_with_wrap_socket = threading.Event() client_can_continue_with_wrap_socket = threading.Event() def call_after_accept(conn_to_client): if not server_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): print("ERROR: test client took too long") # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 307 Temporary Redirect\r\n" b"Location: https://example.com/someone-elses-server\r\n" b"\r\n") conn_to_client.close() # RST client_can_continue_with_wrap_socket.set() return True # Tell the server to stop. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_client") self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) with socket.socket() as client: client.connect(server.listener.getsockname()) server_can_continue_with_wrap_socket.set() if not client_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): self.fail("test server took too long") ssl_ctx = ssl.create_default_context() try: tls_client = ssl_ctx.wrap_socket( client, server_hostname="localhost") except OSError as err: # SSLError inherits from OSError wrap_error = err received_data = b"" else: wrap_error = None received_data = tls_client.recv(400) tls_client.close() server.join() try: self.assertEqual(b"", received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle with warnings_helper.check_no_resource_warning(self): wrap_error = None server = None def test_https_client_non_tls_response_ignored(self): server_responding = threading.Event() class SynchronizedHTTPSConnection(http.client.HTTPSConnection): def connect(self): # Call clear text HTTP connect(), not the encrypted HTTPS (TLS) # connect(): wrap_socket() is called manually below. http.client.HTTPConnection.connect(self) # Wait for our fault injection server to have done its thing. 
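                # By the time wrap_socket() runs below, the server has already
                # answered with a plaintext HTTP response and reset the
                # connection, so the client's TLS handshake sees non-TLS data.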
if not server_responding.wait(support.SHORT_TIMEOUT) and support.verbose: sys.stdout.write("server_responding event never set.") self.sock = self._context.wrap_socket( self.sock, server_hostname=self.host) def call_after_accept(conn_to_client): # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 402 Payment Required\r\n" b"\r\n") conn_to_client.close() # RST server_responding.set() return True # Tell the server to stop. timeout = 2.0 server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="non_tls_http_RST_responder", timeout=timeout) self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) connection = SynchronizedHTTPSConnection( server.listener.getsockname()[0], port=server.port, context=ssl.create_default_context(), timeout=timeout, ) # There are lots of reasons this raises as desired, long before this # test was added. Sending the request requires a successful TLS wrapped # socket; that fails if the connection is broken. It may seem pointless # to test this. It serves as an illustration of something that we never # want to happen... properly not happening. with warnings_helper.check_no_resource_warning(self), \ self.assertRaises(OSError): connection.request("HEAD", "/test", headers={"Host": "localhost"}) response = connection.getresponse() server.join() class TestEnumerations(unittest.TestCase): def test_tlsversion(self): class CheckedTLSVersion(enum.IntEnum): MINIMUM_SUPPORTED = _ssl.PROTO_MINIMUM_SUPPORTED SSLv3 = _ssl.PROTO_SSLv3 TLSv1 = _ssl.PROTO_TLSv1 TLSv1_1 = _ssl.PROTO_TLSv1_1 TLSv1_2 = _ssl.PROTO_TLSv1_2 TLSv1_3 = _ssl.PROTO_TLSv1_3 MAXIMUM_SUPPORTED = _ssl.PROTO_MAXIMUM_SUPPORTED enum._test_simple_enum(CheckedTLSVersion, TLSVersion) def test_tlscontenttype(self): class Checked_TLSContentType(enum.IntEnum): """Content types (record layer) See RFC 8446, section B.1 """ CHANGE_CIPHER_SPEC = 20 ALERT = 21 HANDSHAKE = 22 APPLICATION_DATA = 23 # pseudo content types HEADER = 0x100 INNER_CONTENT_TYPE = 0x101 enum._test_simple_enum(Checked_TLSContentType, _TLSContentType) def test_tlsalerttype(self): class Checked_TLSAlertType(enum.IntEnum): """Alert types for TLSContentType.ALERT messages See RFC 8466, section B.2 """ CLOSE_NOTIFY = 0 UNEXPECTED_MESSAGE = 10 BAD_RECORD_MAC = 20 DECRYPTION_FAILED = 21 RECORD_OVERFLOW = 22 DECOMPRESSION_FAILURE = 30 HANDSHAKE_FAILURE = 40 NO_CERTIFICATE = 41 BAD_CERTIFICATE = 42 UNSUPPORTED_CERTIFICATE = 43 CERTIFICATE_REVOKED = 44 CERTIFICATE_EXPIRED = 45 CERTIFICATE_UNKNOWN = 46 ILLEGAL_PARAMETER = 47 UNKNOWN_CA = 48 ACCESS_DENIED = 49 DECODE_ERROR = 50 DECRYPT_ERROR = 51 EXPORT_RESTRICTION = 60 PROTOCOL_VERSION = 70 INSUFFICIENT_SECURITY = 71 INTERNAL_ERROR = 80 INAPPROPRIATE_FALLBACK = 86 USER_CANCELED = 90 NO_RENEGOTIATION = 100 MISSING_EXTENSION = 109 UNSUPPORTED_EXTENSION = 110 CERTIFICATE_UNOBTAINABLE = 111 UNRECOGNIZED_NAME = 112 BAD_CERTIFICATE_STATUS_RESPONSE = 113 BAD_CERTIFICATE_HASH_VALUE = 114 UNKNOWN_PSK_IDENTITY = 115 CERTIFICATE_REQUIRED = 116 NO_APPLICATION_PROTOCOL = 120 enum._test_simple_enum(Checked_TLSAlertType, _TLSAlertType) def test_tlsmessagetype(self): class Checked_TLSMessageType(enum.IntEnum): """Message types (handshake protocol) See RFC 8446, section B.3 """ HELLO_REQUEST = 0 CLIENT_HELLO = 1 SERVER_HELLO = 2 HELLO_VERIFY_REQUEST = 3 
NEWSESSION_TICKET = 4 END_OF_EARLY_DATA = 5 HELLO_RETRY_REQUEST = 6 ENCRYPTED_EXTENSIONS = 8 CERTIFICATE = 11 SERVER_KEY_EXCHANGE = 12 CERTIFICATE_REQUEST = 13 SERVER_DONE = 14 CERTIFICATE_VERIFY = 15 CLIENT_KEY_EXCHANGE = 16 FINISHED = 20 CERTIFICATE_URL = 21 CERTIFICATE_STATUS = 22 SUPPLEMENTAL_DATA = 23 KEY_UPDATE = 24 NEXT_PROTO = 67 MESSAGE_HASH = 254 CHANGE_CIPHER_SPEC = 0x0101 enum._test_simple_enum(Checked_TLSMessageType, _TLSMessageType) def test_sslmethod(self): Checked_SSLMethod = enum._old_convert_( enum.IntEnum, '_SSLMethod', 'ssl', lambda name: name.startswith('PROTOCOL_') and name != 'PROTOCOL_SSLv23', source=ssl._ssl, ) # This member is assigned dynamically in `ssl.py`: Checked_SSLMethod.PROTOCOL_SSLv23 = Checked_SSLMethod.PROTOCOL_TLS enum._test_simple_enum(Checked_SSLMethod, ssl._SSLMethod) def test_options(self): CheckedOptions = enum._old_convert_( enum.IntFlag, 'Options', 'ssl', lambda name: name.startswith('OP_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedOptions, ssl.Options) def test_alertdescription(self): CheckedAlertDescription = enum._old_convert_( enum.IntEnum, 'AlertDescription', 'ssl', lambda name: name.startswith('ALERT_DESCRIPTION_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedAlertDescription, ssl.AlertDescription) def test_sslerrornumber(self): Checked_SSLErrorNumber = enum._old_convert_( enum.IntEnum, 'SSLErrorNumber', 'ssl', lambda name: name.startswith('SSL_ERROR_'), source=ssl._ssl, ) enum._test_simple_enum(Checked_SSLErrorNumber, ssl.SSLErrorNumber) def test_verifyflags(self): CheckedVerifyFlags = enum._old_convert_( enum.IntFlag, 'VerifyFlags', 'ssl', lambda name: name.startswith('VERIFY_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyFlags, ssl.VerifyFlags) def test_verifymode(self): CheckedVerifyMode = enum._old_convert_( enum.IntEnum, 'VerifyMode', 'ssl', lambda name: name.startswith('CERT_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyMode, ssl.VerifyMode) def setUpModule(): if support.verbose: plats = { 'Mac': platform.mac_ver, 'Windows': platform.win32_ver, } for name, func in plats.items(): plat = func() if plat and plat[0]: plat = '%s %r' % (name, plat) break else: plat = repr(platform.platform()) print("test_ssl: testing with %r %r" % (ssl.OPENSSL_VERSION, ssl.OPENSSL_VERSION_INFO)) print(" under %s" % plat) print(" HAS_SNI = %r" % ssl.HAS_SNI) print(" OP_ALL = 0x%8x" % ssl.OP_ALL) try: print(" OP_NO_TLSv1_1 = 0x%8x" % ssl.OP_NO_TLSv1_1) except AttributeError: pass for filename in [ CERTFILE, BYTES_CERTFILE, ONLYCERT, ONLYKEY, BYTES_ONLYCERT, BYTES_ONLYKEY, SIGNED_CERTFILE, SIGNED_CERTFILE2, SIGNING_CA, BADCERT, BADKEY, EMPTYCERT]: if not os.path.exists(filename): raise support.TestFailed("Can't read certificate file %r" % filename) thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_subprocess.py000066400000000000000000005043241471441230600221350ustar00rootroot00000000000000import unittest from unittest import mock from test import support from test.support import check_sanitizer from test.support import import_helper from test.support import os_helper from test.support import warnings_helper from test.support.script_helper import assert_python_ok import subprocess import sys import signal import io import itertools import os import errno import tempfile import time import traceback import types import selectors import sysconfig import select import 
shutil import threading import gc import textwrap import json from test.support.os_helper import FakePath try: import _testcapi except ImportError: _testcapi = None try: import pwd except ImportError: pwd = None try: import grp except ImportError: grp = None try: import fcntl except: fcntl = None if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") if not support.has_subprocess_support: raise unittest.SkipTest("test module requires subprocess") mswindows = (sys.platform == "win32") # # Depends on the following external programs: Python # if mswindows: SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' 'os.O_BINARY);') else: SETBINARY = '' NONEXISTING_CMD = ('nonexisting_i_hope',) # Ignore errors that indicate the command was not found NONEXISTING_ERRORS = (FileNotFoundError, NotADirectoryError, PermissionError) ZERO_RETURN_CMD = (sys.executable, '-c', 'pass') def setUpModule(): shell_true = shutil.which('true') if shell_true is None: return if (os.access(shell_true, os.X_OK) and subprocess.run([shell_true]).returncode == 0): global ZERO_RETURN_CMD ZERO_RETURN_CMD = (shell_true,) # Faster than Python startup. class BaseTestCase(unittest.TestCase): def setUp(self): # Try to minimize the number of children we have so this test # doesn't crash on some buildbots (Alphas in particular). support.reap_children() def tearDown(self): if not mswindows: # subprocess._active is not used on Windows and is set to None. for inst in subprocess._active: inst.wait() subprocess._cleanup() self.assertFalse( subprocess._active, "subprocess._active not empty" ) self.doCleanups() support.reap_children() class PopenTestException(Exception): pass class PopenExecuteChildRaises(subprocess.Popen): """Popen subclass for testing cleanup of subprocess.PIPE filehandles when _execute_child fails. """ def _execute_child(self, *args, **kwargs): raise PopenTestException("Forced Exception for Test") class ProcessTestCase(BaseTestCase): def test_io_buffered_by_default(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) try: self.assertIsInstance(p.stdin, io.BufferedIOBase) self.assertIsInstance(p.stdout, io.BufferedIOBase) self.assertIsInstance(p.stderr, io.BufferedIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_io_unbuffered_works(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0) try: self.assertIsInstance(p.stdin, io.RawIOBase) self.assertIsInstance(p.stdout, io.RawIOBase) self.assertIsInstance(p.stderr, io.RawIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_call_seq(self): # call() function with sequence argument rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(rc, 47) def test_call_timeout(self): # call() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.call waits for the # child. 
self.assertRaises(subprocess.TimeoutExpired, subprocess.call, [sys.executable, "-c", "while True: pass"], timeout=0.1) def test_check_call_zero(self): # check_call() function with zero return code rc = subprocess.check_call(ZERO_RETURN_CMD) self.assertEqual(rc, 0) def test_check_call_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(c.exception.returncode, 47) def test_check_output(self): # check_output() function with zero return code output = subprocess.check_output( [sys.executable, "-c", "print('BDFL')"]) self.assertIn(b'BDFL', output) with self.assertRaisesRegex(ValueError, "stdout argument not allowed, it will be overridden"): subprocess.check_output([], stdout=None) with self.assertRaisesRegex(ValueError, "check argument not allowed, it will be overridden"): subprocess.check_output([], check=False) def test_check_output_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_output( [sys.executable, "-c", "import sys; sys.exit(5)"]) self.assertEqual(c.exception.returncode, 5) def test_check_output_stderr(self): # check_output() function stderr redirected to stdout output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stderr.write('BDFL')"], stderr=subprocess.STDOUT) self.assertIn(b'BDFL', output) def test_check_output_stdin_arg(self): # check_output() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], stdin=tf) self.assertIn(b'PEAR', output) def test_check_output_input_arg(self): # check_output() can be called with input set to a string output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], input=b'pear') self.assertIn(b'PEAR', output) def test_check_output_input_none(self): """input=None has a legacy meaning of input='' on check_output.""" output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None) self.assertNotIn(b'XX', output) def test_check_output_input_none_text(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, text=True) self.assertNotIn('XX', output) def test_check_output_input_none_universal_newlines(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, universal_newlines=True) self.assertNotIn('XX', output) def test_check_output_input_none_encoding_errors(self): output = subprocess.check_output( [sys.executable, "-c", "print('foo')"], input=None, encoding='utf-8', errors='ignore') self.assertIn('foo', output) def test_check_output_stdout_arg(self): # check_output() refuses to accept 'stdout' argument with self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdout=sys.stdout) self.fail("Expected ValueError when stdout arg supplied.") self.assertIn('stdout', c.exception.args[0]) def test_check_output_stdin_with_input_arg(self): # check_output() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with 
self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdin=tf, input=b'hare') self.fail("Expected ValueError when stdin and input args supplied.") self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): # check_output() function with timeout arg with self.assertRaises(subprocess.TimeoutExpired) as c: output = subprocess.check_output( [sys.executable, "-c", "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"], # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3) self.fail("Expected TimeoutExpired.") self.assertEqual(c.exception.output, b'BDFL') def test_call_kwargs(self): # call() function with keyword args newenv = os.environ.copy() newenv["FRUIT"] = "banana" rc = subprocess.call([sys.executable, "-c", 'import sys, os;' 'sys.exit(os.getenv("FRUIT")=="banana")'], env=newenv) self.assertEqual(rc, 1) def test_invalid_args(self): # Popen() called with invalid arguments should raise TypeError # but Popen.__del__ should not complain (issue #12085) with support.captured_stderr() as s: self.assertRaises(TypeError, subprocess.Popen, invalid_arg_name=1) argcount = subprocess.Popen.__init__.__code__.co_argcount too_many_args = [0] * (argcount + 1) self.assertRaises(TypeError, subprocess.Popen, *too_many_args) self.assertEqual(s.getvalue(), '') def test_stdin_none(self): # .stdin is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) p.wait() self.assertEqual(p.stdin, None) def test_stdout_none(self): # .stdout is None when not redirected, and the child's stdout will # be inherited from the parent. In order to test this we run a # subprocess in a subprocess: # this_test # \-- subprocess created by this test (parent) # \-- subprocess created by the parent subprocess (child) # The parent doesn't specify stdout, so the child will use the # parent's stdout. This test checks that the message printed by the # child goes to the parent stdout. The parent also checks that the # child's stdout is None. See #11963. code = ('import sys; from subprocess import Popen, PIPE;' 'p = Popen([sys.executable, "-c", "print(\'test_stdout_none\')"],' ' stdin=PIPE, stderr=PIPE);' 'p.wait(); assert p.stdout is None;') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test_stdout_none') def test_stderr_none(self): # .stderr is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stdin.close) p.wait() self.assertEqual(p.stderr, None) def _assert_python(self, pre_args, **kwargs): # We include sys.exit() to prevent the test runner from hanging # whenever python is found. args = pre_args + ["import sys; sys.exit(47)"] p = subprocess.Popen(args, **kwargs) p.wait() self.assertEqual(47, p.returncode) def test_executable(self): # Check that the executable argument works. 
# # On Unix (non-Mac and non-Windows), Python looks at args[0] to # determine where its standard library is, so we need the directory # of args[0] to be valid for the Popen() call to Python to succeed. # See also issue #16170 and issue #7774. doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=sys.executable) def test_bytes_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=os.fsencode(sys.executable)) def test_pathlike_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=FakePath(sys.executable)) def test_executable_takes_precedence(self): # Check that the executable argument takes precedence over args[0]. # # Verify first that the call succeeds without the executable arg. pre_args = [sys.executable, "-c"] self._assert_python(pre_args) self.assertRaises(NONEXISTING_ERRORS, self._assert_python, pre_args, executable=NONEXISTING_CMD[0]) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_executable_replaces_shell(self): # Check that the executable argument replaces the default shell # when shell=True. self._assert_python([], executable=sys.executable, shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_bytes_executable_replaces_shell(self): self._assert_python([], executable=os.fsencode(sys.executable), shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_pathlike_executable_replaces_shell(self): self._assert_python([], executable=FakePath(sys.executable), shell=True) # For use in the test_cwd* tests below. def _normalize_cwd(self, cwd): # Normalize an expected cwd (for Tru64 support). # We can't use os.path.realpath since it doesn't expand Tru64 {memb} # strings. See bug #1063571. with os_helper.change_cwd(cwd): return os.getcwd() # For use in the test_cwd* tests below. def _split_python_path(self): # Return normalized (python_dir, python_base). python_path = os.path.realpath(sys.executable) return os.path.split(python_path) # For use in the test_cwd* tests below. def _assert_cwd(self, expected_cwd, python_arg, **kwargs): # Invoke Python via Popen, and assert that (1) the call succeeds, # and that (2) the current working directory of the child process # matches *expected_cwd*. p = subprocess.Popen([python_arg, "-c", "import os, sys; " "buf = sys.stdout.buffer; " "buf.write(os.getcwd().encode()); " "buf.flush(); " "sys.exit(47)"], stdout=subprocess.PIPE, **kwargs) self.addCleanup(p.stdout.close) p.wait() self.assertEqual(47, p.returncode) normcase = os.path.normcase self.assertEqual(normcase(expected_cwd), normcase(p.stdout.read().decode())) def test_cwd(self): # Check that cwd changes the cwd for the child process. 
temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=temp_dir) def test_cwd_with_bytes(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=os.fsencode(temp_dir)) def test_cwd_with_pathlike(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=FakePath(temp_dir)) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_arg(self): # Check that Popen looks for args[0] relative to cwd if args[0] # is relative. python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python]) self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, rel_python, cwd=python_dir) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_executable(self): # Check that Popen looks for executable relative to cwd if executable # is relative (and that executable takes precedence over args[0]). python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) doesntexist = "somethingyoudonthave" with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python) self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python, cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, doesntexist, executable=rel_python, cwd=python_dir) def test_cwd_with_absolute_arg(self): # Check that Popen can find the executable when the cwd is wrong # if args[0] is an absolute path. python_dir, python_base = self._split_python_path() abs_python = os.path.join(python_dir, python_base) rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_dir() as wrong_dir: # Before calling with an absolute path, confirm that using a # relative path fails. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) wrong_dir = self._normalize_cwd(wrong_dir) self._assert_cwd(wrong_dir, abs_python, cwd=wrong_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') def test_executable_with_cwd(self): python_dir, python_base = self._split_python_path() python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, "somethingyoudonthave", executable=sys.executable, cwd=python_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') @unittest.skipIf(sysconfig.is_python_build(), "need an installed Python. See #7774") def test_executable_without_cwd(self): # For a normal installation, it should work without 'cwd' # argument. For test runs in the build directory, see #7774. 
self._assert_cwd(os.getcwd(), "somethingyoudonthave", executable=sys.executable) def test_stdin_pipe(self): # stdin redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.stdin.write(b"pear") p.stdin.close() p.wait() self.assertEqual(p.returncode, 1) def test_stdin_filedes(self): # stdin is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() os.write(d, b"pear") os.lseek(d, 0, 0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=d) p.wait() self.assertEqual(p.returncode, 1) def test_stdin_fileobj(self): # stdin is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b"pear") tf.seek(0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=tf) p.wait() self.assertEqual(p.returncode, 1) def test_stdout_pipe(self): # stdout redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read(), b"orange") def test_stdout_filedes(self): # stdout is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"orange") def test_stdout_fileobj(self): # stdout is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"orange") def test_stderr_pipe(self): # stderr redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=subprocess.PIPE) with p: self.assertEqual(p.stderr.read(), b"strawberry") def test_stderr_filedes(self): # stderr is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"strawberry") def test_stderr_fileobj(self): # stderr is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"strawberry") def test_stderr_redirect_with_no_stdout_redirect(self): # test stderr=STDOUT while stdout=None (not set) # - grandchild prints to stderr # - child redirects grandchild's stderr to its stdout # - the parent should get grandchild's stderr in child's stdout p = subprocess.Popen([sys.executable, "-c", 'import sys, subprocess;' 'rc = subprocess.call([sys.executable, "-c",' ' "import sys;"' ' "sys.stderr.write(\'42\')"],' ' stderr=subprocess.STDOUT);' 'sys.exit(rc)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() #NOTE: stdout should get stderr from grandchild self.assertEqual(stdout, b'42') self.assertEqual(stderr, b'') # should be empty self.assertEqual(p.returncode, 0) def test_stdout_stderr_pipe(self): # capture stdout and stderr to the same pipe p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=subprocess.PIPE, 
stderr=subprocess.STDOUT) with p: self.assertEqual(p.stdout.read(), b"appleorange") def test_stdout_stderr_file(self): # capture stdout and stderr to the same open file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=tf, stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"appleorange") def test_stdout_filedes_of_stdout(self): # stdout is set to 1 (#1531862). # To avoid printing the text on stdout, we do something similar to # test_stdout_none (see above). The parent subprocess calls the child # subprocess passing stdout=1, and this test uses stdout=PIPE in # order to capture and check the output of the parent. See #11963. code = ('import sys, subprocess; ' 'rc = subprocess.call([sys.executable, "-c", ' ' "import os, sys; sys.exit(os.write(sys.stdout.fileno(), ' 'b\'test with stdout=1\'))"], stdout=1); ' 'assert rc == 18') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test with stdout=1') def test_stdout_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'for i in range(10240):' 'print("x" * 1024)'], stdout=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdout, None) def test_stderr_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys\n' 'for i in range(10240):' 'sys.stderr.write("x" * 1024)'], stderr=subprocess.DEVNULL) p.wait() self.assertEqual(p.stderr, None) def test_stdin_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdin.read(1)'], stdin=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdin, None) @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesizes(self): test_pipe_r, test_pipe_w = os.pipe() try: # Get the default pipesize with F_GETPIPE_SZ pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) finally: os.close(test_pipe_r) os.close(test_pipe_w) pipesize = pipesize_default // 2 pagesize_default = support.get_pagesize() if pipesize < pagesize_default: # the POSIX minimum raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=pipesize) try: for fifo in [p.stdin, p.stdout, p.stderr]: self.assertEqual( fcntl.fcntl(fifo.fileno(), fcntl.F_GETPIPE_SZ), pipesize) # Windows pipe size can be acquired via GetNamedPipeInfoFunction # https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-getnamedpipeinfo # However, this function is not yet in _winapi. 
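# A short, self-contained sketch of the stream-merging idiom used above:
# stderr=subprocess.STDOUT folds the child's stderr into the same pipe as its
# stdout, so a single read sees both streams in write order.
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('out'); sys.stdout.flush(); "
     "sys.stderr.write('err')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
merged, _ = proc.communicate()
assert merged == b"outerr"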
p.stdin.write(b"pear") p.stdin.close() p.stdout.close() p.stderr.close() finally: p.kill() p.wait() @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesize_default(self): proc = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=-1) with proc: try: fp_r, fp_w = os.pipe() try: default_read_pipesize = fcntl.fcntl(fp_r, fcntl.F_GETPIPE_SZ) default_write_pipesize = fcntl.fcntl(fp_w, fcntl.F_GETPIPE_SZ) finally: os.close(fp_r) os.close(fp_w) self.assertEqual( fcntl.fcntl(proc.stdin.fileno(), fcntl.F_GETPIPE_SZ), default_read_pipesize) self.assertEqual( fcntl.fcntl(proc.stdout.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) self.assertEqual( fcntl.fcntl(proc.stderr.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) # On other platforms we cannot test the pipe size (yet). But above # code using pipesize=-1 should not crash. finally: proc.kill() def test_env(self): newenv = os.environ.copy() newenv["FRUIT"] = "orange" with subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_duplicate_envs(self): newenv = os.environ.copy() newenv["fRUit"] = "cherry" newenv["fruit"] = "lemon" newenv["FRUIT"] = "orange" newenv["frUit"] = "banana" with subprocess.Popen(["CMD", "/c", "SET", "fruit"], stdout=subprocess.PIPE, env=newenv) as p: stdout, _ = p.communicate() self.assertEqual(stdout.strip(), b"frUit=banana") # Windows requires at least the SYSTEMROOT environment variable to start # Python @unittest.skipIf(sys.platform == 'win32', 'cannot test an empty env on Windows') @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'with an empty environment.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_empty_env(self): """Verify that env={} is as empty as possible.""" def is_env_var_to_ignore(n): """Determine if an environment variable is under our control.""" # This excludes some __CF_* and VERSIONER_* keys MacOS insists # on adding even when the environment in exec is empty. # Gentoo sandboxes also force LD_PRELOAD and SANDBOX_* to exist. 
return ('VERSIONER' in n or '__CF' in n or # MacOS n == 'LD_PRELOAD' or n.startswith('SANDBOX') or # Gentoo n == 'LC_CTYPE') # Locale coercion triggered with subprocess.Popen([sys.executable, "-c", 'import os; print(list(os.environ.keys()))'], stdout=subprocess.PIPE, env={}) as p: stdout, stderr = p.communicate() child_env_names = eval(stdout.strip()) self.assertIsInstance(child_env_names, list) child_env_names = [k for k in child_env_names if not is_env_var_to_ignore(k)] self.assertEqual(child_env_names, []) @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'without some system environments.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_one_environment_variable(self): newenv = {'fruit': 'orange'} cmd = [sys.executable, '-c', 'import sys,os;' 'sys.stdout.write("fruit="+os.getenv("fruit"))'] if sys.platform == "win32": cmd = ["CMD", "/c", "SET", "fruit"] with subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() if p.returncode and support.verbose: print("STDOUT:", stdout.decode("ascii", "replace")) print("STDERR:", stderr.decode("ascii", "replace")) self.assertEqual(p.returncode, 0) self.assertEqual(stdout.strip(), b"fruit=orange") def test_invalid_cmd(self): # null character in the command name cmd = sys.executable + '\0' with self.assertRaises(ValueError): subprocess.Popen([cmd, "-c", "pass"]) # null character in the command argument with self.assertRaises(ValueError): subprocess.Popen([sys.executable, "-c", "pass#\0"]) def test_invalid_env(self): # null character in the environment variable name newenv = os.environ.copy() newenv["FRUIT\0VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # null character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange\0VEGETABLE=cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable name newenv = os.environ.copy() newenv["FRUIT=ORANGE"] = "lemon" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange=lemon" with subprocess.Popen([sys.executable, "-c", 'import sys, os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange=lemon") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_invalid_env(self): # '=' in the environment variable name newenv = os.environ.copy() newenv["FRUIT=VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) newenv = os.environ.copy() newenv["==FRUIT"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) def test_communicate_stdin(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.communicate(b"pear") self.assertEqual(p.returncode, 1) def test_communicate_stdout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("pineapple")'], stdout=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, b"pineapple") self.assertEqual(stderr, None) def test_communicate_stderr(self): p = 
subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("pineapple")'], stderr=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, b"pineapple") def test_communicate(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") self.assertEqual(stderr, b"pineapple") def test_communicate_timeout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stderr.write("pineapple\\n");' 'time.sleep(1);' 'sys.stderr.write("pear\\n");' 'sys.stdout.write(sys.stdin.read())'], universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, "banana", timeout=0.3) # Make sure we can keep waiting for it, and that we get the whole output # after it completes. (stdout, stderr) = p.communicate() self.assertEqual(stdout, "banana") self.assertEqual(stderr.encode(), b"pineapple\npear\n") def test_communicate_timeout_large_output(self): # Test an expiring timeout while the child is outputting lots of data. p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));'], stdout=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, timeout=0.4) (stdout, _) = p.communicate() self.assertEqual(len(stdout), 4 * 64 * 1024) # Test for the fd leak reported in http://bugs.python.org/issue2791. def test_communicate_pipe_fd_leak(self): for stdin_pipe in (False, True): for stdout_pipe in (False, True): for stderr_pipe in (False, True): options = {} if stdin_pipe: options['stdin'] = subprocess.PIPE if stdout_pipe: options['stdout'] = subprocess.PIPE if stderr_pipe: options['stderr'] = subprocess.PIPE if not options: continue p = subprocess.Popen(ZERO_RETURN_CMD, **options) p.communicate() if p.stdin is not None: self.assertTrue(p.stdin.closed) if p.stdout is not None: self.assertTrue(p.stdout.closed) if p.stderr is not None: self.assertTrue(p.stderr.closed) def test_communicate_returns(self): # communicate() should return None if no redirection is active p = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(47)"]) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, None) def test_communicate_pipe_buf(self): # communicate() with writes larger than pipe_buf # This test will probably deadlock rather than fail, if # communicate() does not work properly. 
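# A self-contained sketch of the usual TimeoutExpired idiom behind the
# communicate() timeout tests above: after a timeout the child is still
# running, so kill it and call communicate() again to drain the pipes and
# reap the process.
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    stdout=subprocess.PIPE,
)
try:
    out, _ = proc.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
    proc.kill()
    out, _ = proc.communicate()
assert proc.returncode is not None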
x, y = os.pipe() os.close(x) os.close(y) p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read(47));' 'sys.stderr.write("x" * %d);' 'sys.stdout.write(sys.stdin.read())' % support.PIPE_MAX_SIZE], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) string_to_write = b"a" * support.PIPE_MAX_SIZE (stdout, stderr) = p.communicate(string_to_write) self.assertEqual(stdout, string_to_write) def test_writes_before_communicate(self): # stdin.write before communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.stdin.write(b"banana") (stdout, stderr) = p.communicate(b"split") self.assertEqual(stdout, b"bananasplit") self.assertEqual(stderr, b"") def test_universal_newlines_and_text(self): args = [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(sys.stdin.readline().encode());' 'buf.flush();' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(sys.stdin.read().encode());' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'] for extra_kwarg in ('universal_newlines', 'text'): p = subprocess.Popen(args, **{'stdin': subprocess.PIPE, 'stdout': subprocess.PIPE, extra_kwarg: True}) with p: p.stdin.write("line1\n") p.stdin.flush() self.assertEqual(p.stdout.readline(), "line1\n") p.stdin.write("line3\n") p.stdin.close() self.addCleanup(p.stdout.close) self.assertEqual(p.stdout.readline(), "line2\n") self.assertEqual(p.stdout.read(6), "line3\n") self.assertEqual(p.stdout.read(), "line4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate(self): # universal newlines through communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=1) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate() self.assertEqual(stdout, "line2\nline4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate_stdin(self): # universal newlines through communicate(), with only stdin p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.readline() assert s == "line1\\n", repr(s) s = sys.stdin.read() assert s == "line3\\n", repr(s) ''')], stdin=subprocess.PIPE, universal_newlines=1) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_input_none(self): # Test communicate(input=None) with universal newlines. # # We set stdout to PIPE because, as of this writing, a different # code path is tested when the number of pipes is zero or one. 
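# A minimal sketch of the text-mode behaviour the universal_newlines tests
# check: text=True makes the pipes str-based and translates \r\n and \r line
# endings to \n on read.
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'line1\\r\\nline2\\rline3\\n')"],
    text=True,
)
assert out == "line1\nline2\nline3\n"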
p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) p.communicate() self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_stdin_stdout_stderr(self): # universal newlines through communicate(), with stdin, stdout, stderr p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.buffer.readline() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line2\\r") sys.stderr.buffer.write(b"eline2\\n") s = sys.stdin.buffer.read() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line4\\n") sys.stdout.buffer.write(b"line5\\r\\n") sys.stderr.buffer.write(b"eline6\\r") sys.stderr.buffer.write(b"eline7\\r\\nz") ''')], stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) self.assertEqual("line1\nline2\nline3\nline4\nline5\n", stdout) # Python debug build push something like "[42442 refs]\n" # to stderr at exit of subprocess. self.assertTrue(stderr.startswith("eline2\neline6\neline7\n")) def test_universal_newlines_communicate_encodings(self): # Check that universal newlines mode works for various encodings, # in particular for encodings in the UTF-16 and UTF-32 families. # See issue #15595. # # UTF-16 and UTF-32-BE are sufficient to check both with BOM and # without, and UTF-16 and UTF-32. for encoding in ['utf-16', 'utf-32-be']: code = ("import sys; " r"sys.stdout.buffer.write('1\r\n2\r3\n4'.encode('%s'))" % encoding) args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding=encoding) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '1\n2\n3\n4') def test_communicate_errors(self): for errors, expected in [ ('ignore', ''), ('replace', '\ufffd\ufffd'), ('surrogateescape', '\udc80\udc80'), ('backslashreplace', '\\x80\\x80'), ]: code = ("import sys; " r"sys.stdout.buffer.write(b'[\x80\x80]')") args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8', errors=errors) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '[{}]'.format(expected)) def test_no_leaking(self): # Make sure we leak no resources if not mswindows: max_handles = 1026 # too much for most UNIX systems else: max_handles = 2050 # too much for (at least some) Windows setups handles = [] tmpdir = tempfile.mkdtemp() try: for i in range(max_handles): try: tmpfile = os.path.join(tmpdir, os_helper.TESTFN) handles.append(os.open(tmpfile, os.O_WRONLY|os.O_CREAT)) except OSError as e: if e.errno != errno.EMFILE: raise break else: self.skipTest("failed to reach the file descriptor limit " "(tried %d)" % max_handles) # Close a couple of them (should be enough for a subprocess) for i in range(10): os.close(handles.pop()) # Loop creating some subprocesses. If one of them leaks some fds, # the next loop iteration will fail by reaching the max fd limit. 
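# A minimal sketch of the explicit-codec variant exercised above: passing
# ``encoding`` (and optionally ``errors``) puts the pipes in text mode using
# that codec instead of the locale default, still applying universal newlines.
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write('1\\r\\n2\\r3\\n4'.encode('utf-16'))"],
    encoding="utf-16",
)
assert out == "1\n2\n3\n4"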
for i in range(15): p = subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.read())"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate(b"lime")[0] self.assertEqual(data, b"lime") finally: for h in handles: os.close(h) shutil.rmtree(tmpdir) def test_list2cmdline(self): self.assertEqual(subprocess.list2cmdline(['a b c', 'd', 'e']), '"a b c" d e') self.assertEqual(subprocess.list2cmdline(['ab"c', '\\', 'd']), 'ab\\"c \\ d') self.assertEqual(subprocess.list2cmdline(['ab"c', ' \\', 'd']), 'ab\\"c " \\\\" d') self.assertEqual(subprocess.list2cmdline(['a\\\\\\b', 'de fg', 'h']), 'a\\\\\\b "de fg" h') self.assertEqual(subprocess.list2cmdline(['a\\"b', 'c', 'd']), 'a\\\\\\"b c d') self.assertEqual(subprocess.list2cmdline(['a\\\\b c', 'd', 'e']), '"a\\\\b c" d e') self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), '"a\\\\b\\ c" d e') self.assertEqual(subprocess.list2cmdline(['ab', '']), 'ab ""') def test_poll(self): p = subprocess.Popen([sys.executable, "-c", "import os; os.read(0, 1)"], stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) self.assertIsNone(p.poll()) os.write(p.stdin.fileno(), b'A') p.wait() # Subsequent invocations should just return the returncode self.assertEqual(p.poll(), 0) def test_wait(self): p = subprocess.Popen(ZERO_RETURN_CMD) self.assertEqual(p.wait(), 0) # Subsequent invocations should just return the returncode self.assertEqual(p.wait(), 0) def test_wait_timeout(self): p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.3)"]) with self.assertRaises(subprocess.TimeoutExpired) as c: p.wait(timeout=0.0001) self.assertIn("0.0001", str(c.exception)) # For coverage of __str__. self.assertEqual(p.wait(timeout=support.SHORT_TIMEOUT), 0) def test_invalid_bufsize(self): # an invalid type of the bufsize argument should raise # TypeError. with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, "orange") def test_bufsize_is_none(self): # bufsize=None should be the same as bufsize=0. p = subprocess.Popen(ZERO_RETURN_CMD, None) self.assertEqual(p.wait(), 0) # Again with keyword arg p = subprocess.Popen(ZERO_RETURN_CMD, bufsize=None) self.assertEqual(p.wait(), 0) def _test_bufsize_equal_one(self, line, expected, universal_newlines): # subprocess may deadlock with bufsize=1, see issue #21332 with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.readline());" "sys.stdout.flush()"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, bufsize=1, universal_newlines=universal_newlines) as p: p.stdin.write(line) # expect that it flushes the line in text mode os.close(p.stdin.fileno()) # close it without flushing the buffer read_line = p.stdout.readline() with support.SuppressCrashReport(): try: p.stdin.close() except OSError: pass p.stdin = None self.assertEqual(p.returncode, 0) self.assertEqual(read_line, expected) def test_bufsize_equal_one_text_mode(self): # line is flushed in text mode with bufsize=1. # we should get the full line in return line = "line\n" self._test_bufsize_equal_one(line, line, universal_newlines=True) def test_bufsize_equal_one_binary_mode(self): # line is not flushed in binary mode with bufsize=1. 
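# A small, self-contained sketch of the poll()/wait() contract tested above:
# poll() returns None while the child runs and the exit status afterwards;
# wait() blocks, and repeated calls just report the cached returncode.
import subprocess
import sys

proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
assert proc.poll() is None      # still starting up / sleeping
assert proc.wait() == 0         # blocks until the child exits
assert proc.poll() == 0         # subsequent calls only report returncode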
# we should get empty response line = b'line' + os.linesep.encode() # assume ascii-based locale with self.assertWarnsRegex(RuntimeWarning, 'line buffering'): self._test_bufsize_equal_one(line, b'', universal_newlines=False) @support.requires_resource('cpu') def test_leaking_fds_on_error(self): # see bug #5179: Popen leaks file descriptors to PIPEs if # the child fails to execute; this will eventually exhaust # the maximum number of open fds. 1024 seems a very common # value for that limit, but Windows has 2048, so we loop # 1024 times (each call leaked two fds). for i in range(1024): with self.assertRaises(NONEXISTING_ERRORS): subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def test_nonexisting_with_pipes(self): # bpo-30121: Popen with pipes must close properly pipes on error. # Previously, os.close() was called with a Windows handle which is not # a valid file descriptor. # # Run the test in a subprocess to control how the CRT reports errors # and to get stderr content. try: import msvcrt msvcrt.CrtSetReportMode except (AttributeError, ImportError): self.skipTest("need msvcrt.CrtSetReportMode") code = textwrap.dedent(f""" import msvcrt import subprocess cmd = {NONEXISTING_CMD!r} for report_type in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]: msvcrt.CrtSetReportMode(report_type, msvcrt.CRTDBG_MODE_FILE) msvcrt.CrtSetReportFile(report_type, msvcrt.CRTDBG_FILE_STDERR) try: subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError: pass """) cmd = [sys.executable, "-c", code] proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True) with proc: stderr = proc.communicate()[1] self.assertEqual(stderr, "") self.assertEqual(proc.returncode, 0) def test_double_close_on_error(self): # Issue #18851 fds = [] def open_fds(): for i in range(20): fds.extend(os.pipe()) time.sleep(0.001) t = threading.Thread(target=open_fds) t.start() try: with self.assertRaises(OSError): subprocess.Popen(NONEXISTING_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: t.join() exc = None for fd in fds: # If a double close occurred, some of those fds will # already have been closed by mistake, and os.close() # here will raise. try: os.close(fd) except OSError as e: exc = e if exc is not None: raise exc def test_threadsafe_wait(self): """Issue21291: Popen.wait() needs to be threadsafe for returncode.""" proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(12)']) self.assertEqual(proc.returncode, None) results = [] def kill_proc_timer_thread(): results.append(('thread-start-poll-result', proc.poll())) # terminate it from the thread and wait for the result. proc.kill() proc.wait() results.append(('thread-after-kill-and-wait', proc.returncode)) # this wait should be a no-op given the above. proc.wait() results.append(('thread-after-second-wait', proc.returncode)) # This is a timing sensitive test, the failure mode is # triggered when both the main thread and this thread are in # the wait() call at once. The delay here is to allow the # main thread to most likely be blocked in its wait() call. t = threading.Timer(0.2, kill_proc_timer_thread) t.start() if mswindows: expected_errorcode = 1 else: # Should be -9 because of the proc.kill() from the thread. expected_errorcode = -9 # Wait for the process to finish; the thread should kill it # long before it finishes on its own. Supplying a timeout # triggers a different code path for better coverage. 
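# A minimal sketch of the failure mode the fd-leak tests above guard against:
# when the program cannot be executed, Popen raises in the parent and must not
# leak the pipe fds it already created.  The program name is deliberately
# made up.
import subprocess

try:
    subprocess.Popen(["definitely-not-a-real-program-12345"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except FileNotFoundError:
    pass  # expected: the exec failure is reported in the parent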
proc.wait(timeout=support.SHORT_TIMEOUT) self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in wait from main thread") # This should be a no-op with no change in returncode. proc.wait() self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in second main wait.") t.join() # Ensure that all of the thread results are as expected. # When a race condition occurs in wait(), the returncode could # be set by the wrong thread that doesn't actually have it # leading to an incorrect value. self.assertEqual([('thread-start-poll-result', None), ('thread-after-kill-and-wait', expected_errorcode), ('thread-after-second-wait', expected_errorcode)], results) def test_issue8780(self): # Ensure that stdout is inherited from the parent # if stdout=PIPE is not used code = ';'.join(( 'import subprocess, sys', 'retcode = subprocess.call(' "[sys.executable, '-c', 'print(\"Hello World!\")'])", 'assert retcode == 0')) output = subprocess.check_output([sys.executable, '-c', code]) self.assertTrue(output.startswith(b'Hello World!'), ascii(output)) def test_handles_closed_on_exception(self): # If CreateProcess exits with an error, ensure the # duplicate output handles are released ifhandle, ifname = tempfile.mkstemp() ofhandle, ofname = tempfile.mkstemp() efhandle, efname = tempfile.mkstemp() try: subprocess.Popen (["*"], stdin=ifhandle, stdout=ofhandle, stderr=efhandle) except OSError: os.close(ifhandle) os.remove(ifname) os.close(ofhandle) os.remove(ofname) os.close(efhandle) os.remove(efname) self.assertFalse(os.path.exists(ifname)) self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) def test_communicate_epipe(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.communicate(b"x" * 2**20) def test_repr(self): cases = [ ("ls", True, 123, ""), ('a' * 100, True, 0, ""), (["ls"], False, None, ""), (["ls", '--my-opts', 'a' * 100], False, None, ""), (os_helper.FakePath("my-tool.py"), False, 7, ">") ] with unittest.mock.patch.object(subprocess.Popen, '_execute_child'): for cmd, shell, code, sx in cases: p = subprocess.Popen(cmd, shell=shell) p.returncode = code self.assertEqual(repr(p), sx) def test_communicate_epipe_only_stdin(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) p.wait() p.communicate(b"x" * 2**20) @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), "Requires signal.SIGUSR1") @unittest.skipUnless(hasattr(os, 'kill'), "Requires os.kill") @unittest.skipUnless(hasattr(os, 'getppid'), "Requires os.getppid") def test_communicate_eintr(self): # Issue #12493: communicate() should handle EINTR def handler(signum, frame): pass old_handler = signal.signal(signal.SIGUSR1, handler) self.addCleanup(signal.signal, signal.SIGUSR1, old_handler) args = [sys.executable, "-c", 'import os, signal;' 'os.kill(os.getppid(), signal.SIGUSR1)'] for stream in ('stdout', 'stderr'): kw = {stream: subprocess.PIPE} with subprocess.Popen(args, **kw) as process: # communicate() will be interrupted by SIGUSR1 process.communicate() # This test is Linux-ish specific for simplicity to at least have # some coverage. It is not a platform specific bug. 
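# A self-contained sketch of subprocess.run(), the high-level API that the
# RunFuncTestCase below exercises: capture_output=True collects both streams,
# timeout= raises TimeoutExpired if it expires, and check=True turns a
# non-zero exit status into CalledProcessError.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.stdout.write('BDFL')"],
    capture_output=True, timeout=30, check=True,
)
assert result.returncode == 0 and result.stdout == b"BDFL"

try:
    subprocess.run([sys.executable, "-c", "import sys; sys.exit(47)"],
                   check=True)
except subprocess.CalledProcessError as exc:
    assert exc.returncode == 47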
@unittest.skipUnless(os.path.isdir('/proc/%d/fd' % os.getpid()), "Linux specific") def test_failed_child_execute_fd_leak(self): """Test for the fork() failure fd leak reported in issue16327.""" fd_directory = '/proc/%d/fd' % os.getpid() fds_before_popen = os.listdir(fd_directory) with self.assertRaises(PopenTestException): PopenExecuteChildRaises( ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # NOTE: This test doesn't verify that the real _execute_child # does not close the file descriptors itself on the way out # during an exception. Code inspection has confirmed that. fds_after_exception = os.listdir(fd_directory) self.assertEqual(fds_before_popen, fds_after_exception) @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_includes_filename(self): with self.assertRaises(FileNotFoundError) as c: subprocess.call(['/opt/nonexistent_binary', 'with', 'some', 'args']) self.assertEqual(c.exception.filename, '/opt/nonexistent_binary') @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_with_bad_cwd(self): with self.assertRaises(FileNotFoundError) as c: subprocess.Popen(['exit', '0'], cwd='/some/nonexistent/directory') self.assertEqual(c.exception.filename, '/some/nonexistent/directory') def test_class_getitems(self): self.assertIsInstance(subprocess.Popen[bytes], types.GenericAlias) self.assertIsInstance(subprocess.CompletedProcess[str], types.GenericAlias) @unittest.skipIf(not sysconfig.get_config_var("HAVE_VFORK"), "vfork() not enabled by configure.") @mock.patch("subprocess._fork_exec") def test__use_vfork(self, mock_fork_exec): self.assertTrue(subprocess._USE_VFORK) # The default value regardless. mock_fork_exec.side_effect = RuntimeError("just testing args") with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) mock_fork_exec.assert_called_once() self.assertTrue(mock_fork_exec.call_args.args[-1]) with mock.patch.object(subprocess, '_USE_VFORK', False): with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) self.assertFalse(mock_fork_exec.call_args_list[-1].args[-1]) @unittest.skipUnless(hasattr(subprocess, '_winapi'), 'need subprocess._winapi') def test_wait_negative_timeout(self): proc = subprocess.Popen(ZERO_RETURN_CMD) with proc: patch = mock.patch.object( subprocess._winapi, 'WaitForSingleObject', return_value=subprocess._winapi.WAIT_OBJECT_0) with patch as mock_wait: proc.wait(-1) # negative timeout mock_wait.assert_called_once_with(proc._handle, 0) proc.returncode = None self.assertEqual(proc.wait(), 0) class RunFuncTestCase(BaseTestCase): def run_python(self, code, **kwargs): """Run Python code in a subprocess using subprocess.run""" argv = [sys.executable, "-c", code] return subprocess.run(argv, **kwargs) def test_returncode(self): # call() function with sequence argument cp = self.run_python("import sys; sys.exit(47)") self.assertEqual(cp.returncode, 47) with self.assertRaises(subprocess.CalledProcessError): cp.check_returncode() def test_check(self): with self.assertRaises(subprocess.CalledProcessError) as c: self.run_python("import sys; sys.exit(47)", check=True) self.assertEqual(c.exception.returncode, 47) def test_check_zero(self): # check_returncode shouldn't raise when returncode is zero cp = subprocess.run(ZERO_RETURN_CMD, check=True) self.assertEqual(cp.returncode, 0) def test_timeout(self): # run() function with timeout argument; we want to test that the child # process gets killed 
when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.run waits for the # child. with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: output = self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: cp = self.run_python(( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. 
timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) def test_run_with_pathlike_path(self): # bpo-31961: test run(pathlike_object) # the name of a command that can be run without # any arguments that exit fast prog = 'tree.com' if mswindows else 'ls' path = shutil.which(prog) if path is None: self.skipTest(f'{prog} required for this test') path = FakePath(path) res = subprocess.run(path, stdout=subprocess.DEVNULL) self.assertEqual(res.returncode, 0) with self.assertRaises(TypeError): subprocess.run(path, stdout=subprocess.DEVNULL, shell=True) def test_run_with_bytes_path_and_arguments(self): # bpo-31961: test run([bytes_object, b'additional arguments']) path = os.fsencode(sys.executable) args = [path, '-c', b'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_run_with_pathlike_path_and_arguments(self): # bpo-31961: test run([pathlike_object, 'additional arguments']) path = FakePath(sys.executable) args = [path, '-c', 'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) @unittest.skipUnless(mswindows, "Maybe test trigger a leak on Ubuntu") def test_run_with_an_empty_env(self): # gh-105436: fix subprocess.run(..., env={}) broken on Windows args = [sys.executable, "-c", 'pass'] # Ignore subprocess errors - we only care that the API doesn't # raise an OSError subprocess.run(args, env={}) def test_capture_output(self): cp = self.run_python(("import sys;" "sys.stdout.write('BDFL'); " "sys.stderr.write('FLUFL')"), capture_output=True) self.assertIn(b'BDFL', cp.stdout) self.assertIn(b'FLUFL', cp.stderr) def test_stdout_with_capture_output_arg(self): # run() refuses to accept 'stdout' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stdout and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stdout=tf) self.assertIn('stdout', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) def test_stderr_with_capture_output_arg(self): # run() refuses to accept 'stderr' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stderr and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stderr=tf) self.assertIn('stderr', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. @unittest.skipIf(mswindows, "requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): """Output capturing after a timeout mustn't hang forever on open filehandles.""" before_secs = time.monotonic() try: subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. 
except subprocess.TimeoutExpired as exc: after_secs = time.monotonic() stacks = traceback.format_exc() # assertRaises doesn't give this. else: self.fail("TimeoutExpired not raised.") self.assertLess(after_secs - before_secs, 1.5, msg="TimeoutExpired was delayed! Bad traceback:\n```\n" f"{stacks}```") def test_encoding_warning(self): code = textwrap.dedent("""\ from subprocess import * run("echo hello", shell=True, text=True) check_output("echo hello", shell=True, text=True) """) cp = subprocess.run([sys.executable, "-Xwarn_default_encoding", "-c", code], capture_output=True) lines = cp.stderr.splitlines() self.assertEqual(len(lines), 2, lines) self.assertTrue(lines[0].startswith(b":2: EncodingWarning: ")) self.assertTrue(lines[1].startswith(b":3: EncodingWarning: ")) def _get_test_grp_name(): for name_group in ('staff', 'nogroup', 'grp', 'nobody', 'nfsnobody'): if grp: try: grp.getgrnam(name_group) except KeyError: continue return name_group else: raise unittest.SkipTest('No identified group name to use for this test on this platform.') @unittest.skipIf(mswindows, "POSIX specific tests") class POSIXProcessTestCase(BaseTestCase): def setUp(self): super().setUp() self._nonexistent_dir = "/_this/pa.th/does/not/exist" def _get_chdir_exception(self): try: os.chdir(self._nonexistent_dir) except OSError as e: # This avoids hard coding the errno value or the OS perror() # string and instead capture the exception that we want to see # below for comparison. desired_exception = e else: self.fail("chdir to nonexistent directory %s succeeded." % self._nonexistent_dir) return desired_exception def test_exception_cwd(self): """Test error in the child raised in the parent for a bad cwd.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], cwd=self._nonexistent_dir) except OSError as e: # Test that the child process chdir failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_executable(self): """Test error in the child raised in the parent for a bad executable.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], executable=self._nonexistent_dir) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_args_0(self): """Test error in the child raised in the parent for a bad args[0].""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([self._nonexistent_dir, "-c", ""]) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. 
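# A minimal sketch (POSIX) of the error propagation tested above: a failure in
# the forked child, such as chdir() to a missing directory, is re-raised in
# the parent with the offending path attached to the exception.
import subprocess
import sys

try:
    subprocess.Popen([sys.executable, "-c", "pass"],
                     cwd="/this/path/does/not/exist")
except OSError as exc:
    assert exc.filename == "/this/path/does/not/exist"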
self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) # We mock the __del__ method for Popen in the next two tests # because it does cleanup based on the pid returned by fork_exec # along with issuing a resource warning if it still exists. Since # we don't actually spawn a process in these tests we can forego # the destructor. An alternative would be to set _child_created to # False before the destructor is called but there is no easy way # to do that class PopenNoDestructor(subprocess.Popen): def __del__(self): pass @mock.patch("subprocess._fork_exec") def test_exception_errpipe_normal(self, fork_exec): """Test error passing done through errpipe_write in the good case""" def proper_error(*args): errpipe_write = args[13] # Write the hex for the error code EISDIR: 'is a directory' err_code = '{:x}'.format(errno.EISDIR).encode() os.write(errpipe_write, b"OSError:" + err_code + b":") return 0 fork_exec.side_effect = proper_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(IsADirectoryError): self.PopenNoDestructor(["non_existent_command"]) @mock.patch("subprocess._fork_exec") def test_exception_errpipe_bad_data(self, fork_exec): """Test error passing done through errpipe_write where its not in the expected format""" error_data = b"\xFF\x00\xDE\xAD" def bad_error(*args): errpipe_write = args[13] # Anything can be in the pipe, no assumptions should # be made about its encoding, so we'll write some # arbitrary hex bytes to test it out os.write(errpipe_write, error_data) return 0 fork_exec.side_effect = bad_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(subprocess.SubprocessError) as e: self.PopenNoDestructor(["non_existent_command"]) self.assertIn(repr(error_data), str(e.exception)) @unittest.skipIf(not os.path.exists('/proc/self/status'), "need /proc/self/status") def test_restore_signals(self): # Blindly assume that cat exists on systems with /proc/self/status... default_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=False) for line in default_proc_status.splitlines(): if line.startswith(b'SigIgn'): default_sig_ign_mask = line break else: self.skipTest("SigIgn not found in /proc/self/status.") restored_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=True) for line in restored_proc_status.splitlines(): if line.startswith(b'SigIgn'): restored_sig_ign_mask = line break self.assertNotEqual(default_sig_ign_mask, restored_sig_ign_mask, msg="restore_signals=True should've unblocked " "SIGPIPE and friends.") def test_start_new_session(self): # For code coverage of calling setsid(). We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getsid(0))"], start_new_session=True) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_sid = os.getsid(0) child_sid = int(output) self.assertNotEqual(parent_sid, child_sid) @unittest.skipUnless(hasattr(os, 'setpgid') and hasattr(os, 'getpgid'), 'no setpgid or getpgid on platform') def test_process_group_0(self): # For code coverage of calling setpgid(). 
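# A minimal sketch (POSIX) of start_new_session=True, covered by the test
# above: it calls setsid() in the child, giving it a session id distinct from
# the parent's.
import os
import subprocess
import sys

out = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.getsid(0))"],
    start_new_session=True,
)
assert int(out) != os.getsid(0)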
We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getpgid(0))"], process_group=0) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_pgid = os.getpgid(0) child_pgid = int(output) self.assertNotEqual(parent_pgid, child_pgid) @unittest.skipUnless(hasattr(os, 'setreuid'), 'no setreuid on platform') def test_user(self): # For code coverage of the user parameter. We don't care if we get a # permission error from it depending on the test execution environment, # that still indicates that it was called. uid = os.geteuid() test_users = [65534 if uid != 65534 else 65533, uid] name_uid = "nobody" if sys.platform != 'darwin' else "unknown" if pwd is not None: try: pwd.getpwnam(name_uid) test_users.append(name_uid) except KeyError: # unknown user name name_uid = None for user in test_users: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(user=user, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getuid())"], user=user, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) if e.errno == errno.EACCES: self.assertEqual(e.filename, sys.executable) else: self.assertIsNone(e.filename) else: if isinstance(user, str): user_uid = pwd.getpwnam(user).pw_uid else: user_uid = user child_user = int(output) self.assertEqual(child_user, user_uid) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, user=2**64) if pwd is None and name_uid is not None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=name_uid) @unittest.skipIf(hasattr(os, 'setreuid'), 'setreuid() available on platform') def test_user_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=65535) @unittest.skipUnless(hasattr(os, 'setregid'), 'no setregid() on platform') def test_group(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) for group in group_list + [gid]: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(group=group, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getgid())"], group=group, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) self.assertIsNone(e.filename) else: if isinstance(group, str): group_gid = grp.getgrnam(group).gr_gid else: group_gid = group child_group = int(output) self.assertEqual(child_group, group_gid) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, group=2**64) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=name_group) @unittest.skipIf(hasattr(os, 'setregid'), 'setregid() available on platform') def test_group_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=65535) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups(self): gid = os.getegid() 
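# A hedged, self-contained sketch (POSIX, normally requires root) of the
# ``user``/``group`` parameters tested above; "nobody" is only an assumed
# account name and may not exist on every system.
import subprocess
import sys

try:
    out = subprocess.check_output(
        [sys.executable, "-c", "import os; print(os.getuid())"],
        user="nobody",
    )
    print("child ran with uid", int(out))
except (PermissionError, KeyError):
    print("cannot switch user here (not privileged, or no such account)")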
group_list = [65534 if gid != 65534 else 65533] self._test_extra_groups_impl(gid=gid, group_list=group_list) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups_empty_list(self): self._test_extra_groups_impl(gid=os.getegid(), group_list=[]) def _test_extra_groups_impl(self, *, gid, group_list): name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) try: output = subprocess.check_output( [sys.executable, "-c", "import os, sys, json; json.dump(os.getgroups(), sys.stdout)"], extra_groups=group_list) except PermissionError as e: self.assertIsNone(e.filename) self.skipTest("setgroup() EPERM; this test may require root.") else: parent_groups = os.getgroups() child_groups = json.loads(output) if grp is not None: desired_gids = [grp.getgrnam(g).gr_gid if isinstance(g, str) else g for g in group_list] else: desired_gids = group_list self.assertEqual(set(desired_gids), set(child_groups)) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[name_group]) # No skip necessary, this test won't make it to a setgroup() call. def test_extra_groups_invalid_gid_t_values(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[-1]) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, extra_groups=[2**64]) @unittest.skipIf(mswindows or not hasattr(os, 'umask'), 'POSIX umask() is not available.') def test_umask(self): tmpdir = None try: tmpdir = tempfile.mkdtemp() name = os.path.join(tmpdir, "beans") # We set an unusual umask in the child so as a unique mode # for us to test the child's touched file for. subprocess.check_call( [sys.executable, "-c", f"open({name!r}, 'w').close()"], umask=0o053) # Ignore execute permissions entirely in our test, # filesystems could be mounted to ignore or force that. st_mode = os.stat(name).st_mode & 0o666 expected_mode = 0o624 self.assertEqual(expected_mode, st_mode, msg=f'{oct(expected_mode)} != {oct(st_mode)}') finally: if tmpdir is not None: shutil.rmtree(tmpdir) def test_run_abort(self): # returncode handles signal termination with support.SuppressCrashReport(): p = subprocess.Popen([sys.executable, "-c", 'import os; os.abort()']) p.wait() self.assertEqual(-p.returncode, signal.SIGABRT) def test_CalledProcessError_str_signal(self): err = subprocess.CalledProcessError(-int(signal.SIGABRT), "fake cmd") error_string = str(err) # We're relying on the repr() of the signal.Signals intenum to provide # the word signal, the signal name and the numeric value. self.assertIn("signal", error_string.lower()) # We're not being specific about the signal name as some signals have # multiple names and which name is revealed can vary. self.assertIn("SIG", error_string) self.assertIn(str(signal.SIGABRT), error_string) def test_CalledProcessError_str_unknown_signal(self): err = subprocess.CalledProcessError(-9876543, "fake cmd") error_string = str(err) self.assertIn("unknown signal 9876543.", error_string) def test_CalledProcessError_str_non_zero(self): err = subprocess.CalledProcessError(2, "fake cmd") error_string = str(err) self.assertIn("non-zero exit status 2.", error_string) def test_preexec(self): # DISCLAIMER: Setting environment variables is *not* a good use # of a preexec_fn. This is merely a test. 
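# A minimal sketch (POSIX, Python 3.9+) of the ``umask`` parameter tested
# above: the child's umask is set before exec, which shows up in the mode of
# files the child creates.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "created-by-child")
    subprocess.check_call(
        [sys.executable, "-c", f"open({target!r}, 'w').close()"],
        umask=0o077,
    )
    assert os.stat(target).st_mode & 0o077 == 0  # group/other bits stripped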
p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, preexec_fn=lambda: os.putenv("FRUIT", "apple")) with p: self.assertEqual(p.stdout.read(), b"apple") def test_preexec_exception(self): def raise_it(): raise ValueError("What if two swallows carried a coconut?") try: p = subprocess.Popen([sys.executable, "-c", ""], preexec_fn=raise_it) except subprocess.SubprocessError as e: self.assertTrue( subprocess._fork_exec, "Expected a ValueError from the preexec_fn") except ValueError as e: self.assertIn("coconut", e.args[0]) else: self.fail("Exception raised by preexec_fn did not make it " "to the parent process.") class _TestExecuteChildPopen(subprocess.Popen): """Used to test behavior at the end of _execute_child.""" def __init__(self, testcase, *args, **kwargs): self._testcase = testcase subprocess.Popen.__init__(self, *args, **kwargs) def _execute_child(self, *args, **kwargs): try: subprocess.Popen._execute_child(self, *args, **kwargs) finally: # Open a bunch of file descriptors and verify that # none of them are the same as the ones the Popen # instance is using for stdin/stdout/stderr. devzero_fds = [os.open("/dev/zero", os.O_RDONLY) for _ in range(8)] try: for fd in devzero_fds: self._testcase.assertNotIn( fd, (self.stdin.fileno(), self.stdout.fileno(), self.stderr.fileno()), msg="At least one fd was closed early.") finally: for fd in devzero_fds: os.close(fd) @unittest.skipIf(not os.path.exists("/dev/zero"), "/dev/zero required.") def test_preexec_errpipe_does_not_double_close_pipes(self): """Issue16140: Don't double close pipes on preexec error.""" def raise_it(): raise subprocess.SubprocessError( "force the _execute_child() errpipe_data path.") with self.assertRaises(subprocess.SubprocessError): self._TestExecuteChildPopen( self, ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=raise_it) def test_preexec_gc_module_failure(self): # This tests the code that disables garbage collection if the child # process will execute any Python. 
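# A self-contained sketch mirroring test_preexec above (POSIX only):
# preexec_fn runs in the child between fork() and exec().  The subprocess
# docs warn it is unsafe in the presence of threads, so prefer a dedicated
# argument (start_new_session, umask, user, ...) when one exists.
import os
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.getenv('FRUIT'))"],
    stdout=subprocess.PIPE,
    preexec_fn=lambda: os.putenv("FRUIT", "apple"),
)
out, _ = proc.communicate()
assert out.strip() == b"apple"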
enabled = gc.isenabled() try: gc.disable() self.assertFalse(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertFalse(gc.isenabled(), "Popen enabled gc when it shouldn't.") gc.enable() self.assertTrue(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertTrue(gc.isenabled(), "Popen left gc disabled.") finally: if not enabled: gc.disable() @unittest.skipIf( sys.platform == 'darwin', 'setrlimit() seems to fail on OS X') def test_preexec_fork_failure(self): # The internal code did not preserve the previous exception when # re-enabling garbage collection try: from resource import getrlimit, setrlimit, RLIMIT_NPROC except ImportError as err: self.skipTest(err) # RLIMIT_NPROC is specific to Linux and BSD limits = getrlimit(RLIMIT_NPROC) [_, hard] = limits setrlimit(RLIMIT_NPROC, (0, hard)) self.addCleanup(setrlimit, RLIMIT_NPROC, limits) try: subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) except BlockingIOError: # Forking should raise EAGAIN, translated to BlockingIOError pass else: self.skipTest('RLIMIT_NPROC had no effect; probably superuser') def test_args_string(self): # args is a string fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) p = subprocess.Popen(fname) p.wait() os.remove(fname) self.assertEqual(p.returncode, 47) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], startupinfo=47) self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], creationflags=47) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen(["echo $FRUIT"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen("echo $FRUIT", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_call_string(self): # call() function with string argument on UNIX fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) rc = subprocess.call(fname) os.remove(fname) self.assertEqual(rc, 47) def test_specific_shell(self): # Issue #9265: Incorrect name passed as arg[0]. shells = [] for prefix in ['/bin', '/usr/bin/', '/usr/local/bin']: for name in ['bash', 'ksh']: sh = os.path.join(prefix, name) if os.path.isfile(sh): shells.append(sh) if not shells: # Will probably work for any shell but csh. self.skipTest("bash or ksh required for this test") sh = '/bin/sh' if os.path.isfile(sh) and not os.path.islink(sh): # Test will fail if /bin/sh is a symlink to csh. shells.append(sh) for sh in shells: p = subprocess.Popen("echo $0", executable=sh, shell=True, stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) def _kill_process(self, method, *args): # Do not inherit file handles from the parent. 
# It should fix failures on some platforms. # Also set the SIGINT handler to the default to make sure it's not # being ignored (some tests rely on that.) old_handler = signal.signal(signal.SIGINT, signal.default_int_handler) try: p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: signal.signal(signal.SIGINT, old_handler) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) return p @unittest.skipIf(sys.platform.startswith(('netbsd', 'openbsd')), "Due to known OS bug (issue #16762)") def _kill_dead_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) p.communicate() def test_send_signal(self): p = self._kill_process('send_signal', signal.SIGINT) _, stderr = p.communicate() self.assertIn(b'KeyboardInterrupt', stderr) self.assertNotEqual(p.wait(), 0) def test_kill(self): p = self._kill_process('kill') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGKILL) def test_terminate(self): p = self._kill_process('terminate') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGTERM) def test_send_signal_dead(self): # Sending a signal to a dead process self._kill_dead_process('send_signal', signal.SIGINT) def test_kill_dead(self): # Killing a dead process self._kill_dead_process('kill') def test_terminate_dead(self): # Terminating a dead process self._kill_dead_process('terminate') def _save_fds(self, save_fds): fds = [] for fd in save_fds: inheritable = os.get_inheritable(fd) saved = os.dup(fd) fds.append((fd, saved, inheritable)) return fds def _restore_fds(self, fds): for fd, saved, inheritable in fds: os.dup2(saved, fd, inheritable=inheritable) os.close(saved) def check_close_std_fds(self, fds): # Issue #9905: test that subprocess pipes still work properly with # some standard fds closed stdin = 0 saved_fds = self._save_fds(fds) for fd, saved, inheritable in saved_fds: if fd == 0: stdin = saved break try: for fd in fds: os.close(fd) out, err = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() self.assertEqual(out, b'apple') self.assertEqual(err, b'orange') finally: self._restore_fds(saved_fds) def test_close_fd_0(self): self.check_close_std_fds([0]) def test_close_fd_1(self): self.check_close_std_fds([1]) def test_close_fd_2(self): self.check_close_std_fds([2]) def test_close_fds_0_1(self): self.check_close_std_fds([0, 1]) def test_close_fds_0_2(self): self.check_close_std_fds([0, 2]) def test_close_fds_1_2(self): self.check_close_std_fds([1, 2]) def test_close_fds_0_1_2(self): # Issue #10806: test that subprocess pipes still work properly with # all standard fds closed. 
self.check_close_std_fds([0, 1, 2]) def test_small_errpipe_write_fd(self): """Issue #15798: Popen should work when stdio fds are available.""" new_stdin = os.dup(0) new_stdout = os.dup(1) try: os.close(0) os.close(1) # Side test: if errpipe_write fails to have its CLOEXEC # flag set this should cause the parent to think the exec # failed. Extremely unlikely: everyone supports CLOEXEC. subprocess.Popen([ sys.executable, "-c", "print('AssertionError:0:CLOEXEC failure.')"]).wait() finally: # Restore original stdin and stdout os.dup2(new_stdin, 0) os.dup2(new_stdout, 1) os.close(new_stdin) os.close(new_stdout) def test_remapping_std_fds(self): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] try: temp_fds = [fd for fd, fname in temps] # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # write some data to what will become stdin, and rewind os.write(temp_fds[1], b"STDIN") os.lseek(temp_fds[1], 0, 0) # move the standard file descriptors out of the way saved_fds = self._save_fds(range(3)) try: # duplicate the file objects over the standard fd's for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # now use those files in the "wrong" order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=temp_fds[1], stdout=temp_fds[2], stderr=temp_fds[0]) p.wait() finally: self._restore_fds(saved_fds) for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(temp_fds[2], 1024) err = os.read(temp_fds[0], 1024).strip() self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) def check_swap_fds(self, stdin_no, stdout_no, stderr_no): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] temp_fds = [fd for fd, fname in temps] try: # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # save a copy of the standard file descriptors saved_fds = self._save_fds(range(3)) try: # duplicate the temp files over the standard fd's 0, 1, 2 for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # write some data to what will become stdin, and rewind os.write(stdin_no, b"STDIN") os.lseek(stdin_no, 0, 0) # now use those files in the given order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=stdin_no, stdout=stdout_no, stderr=stderr_no) p.wait() for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(stdout_no, 1024) err = os.read(stderr_no, 1024).strip() finally: self._restore_fds(saved_fds) self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) # When duping fds, if there arises a situation where one of the fds is # either 0, 1 or 2, it is possible that it is overwritten (#12607). # This tests all combinations of this. 
def test_swap_fds(self): self.check_swap_fds(0, 1, 2) self.check_swap_fds(0, 2, 1) self.check_swap_fds(1, 0, 2) self.check_swap_fds(1, 2, 0) self.check_swap_fds(2, 0, 1) self.check_swap_fds(2, 1, 0) def _check_swap_std_fds_with_one_closed(self, from_fds, to_fds): saved_fds = self._save_fds(range(3)) try: for from_fd in from_fds: with tempfile.TemporaryFile() as f: os.dup2(f.fileno(), from_fd) fd_to_close = (set(range(3)) - set(from_fds)).pop() os.close(fd_to_close) arg_names = ['stdin', 'stdout', 'stderr'] kwargs = {} for from_fd, to_fd in zip(from_fds, to_fds): kwargs[arg_names[to_fd]] = from_fd code = textwrap.dedent(r''' import os, sys skipped_fd = int(sys.argv[1]) for fd in range(3): if fd != skipped_fd: os.write(fd, str(fd).encode('ascii')) ''') skipped_fd = (set(range(3)) - set(to_fds)).pop() rc = subprocess.call([sys.executable, '-c', code, str(skipped_fd)], **kwargs) self.assertEqual(rc, 0) for from_fd, to_fd in zip(from_fds, to_fds): os.lseek(from_fd, 0, os.SEEK_SET) read_bytes = os.read(from_fd, 1024) read_fds = list(map(int, read_bytes.decode('ascii'))) msg = textwrap.dedent(f""" When testing {from_fds} to {to_fds} redirection, parent descriptor {from_fd} got redirected to descriptor(s) {read_fds} instead of descriptor {to_fd}. """) self.assertEqual([to_fd], read_fds, msg) finally: self._restore_fds(saved_fds) # Check that subprocess can remap std fds correctly even # if one of them is closed (#32844). def test_swap_std_fds_with_one_closed(self): for from_fds in itertools.combinations(range(3), 2): for to_fds in itertools.permutations(range(3), 2): self._check_swap_std_fds_with_one_closed(from_fds, to_fds) def test_surrogates_error_message(self): def prepare(): raise ValueError("surrogate:\uDCff") try: subprocess.call( ZERO_RETURN_CMD, preexec_fn=prepare) except ValueError as err: # Pure Python implementations keeps the message self.assertIsNone(subprocess._fork_exec) self.assertEqual(str(err), "surrogate:\uDCff") except subprocess.SubprocessError as err: # _posixsubprocess uses a default message self.assertIsNotNone(subprocess._fork_exec) self.assertEqual(str(err), "Exception occurred in preexec_fn.") else: self.fail("Expected ValueError or subprocess.SubprocessError") def test_undecodable_env(self): for key, value in (('test', 'abc\uDCFF'), ('test\uDCFF', '42')): encoded_value = value.encode("ascii", "surrogateescape") # test str with surrogates script = "import os; print(ascii(os.getenv(%s)))" % repr(key) env = os.environ.copy() env[key] = value # Use C locale to get ASCII for the locale encoding to force # surrogate-escaping of \xFF in the child process env['LC_ALL'] = 'C' decoded_value = value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(decoded_value)) # test bytes key = key.encode("ascii", "surrogateescape") script = "import os; print(ascii(os.getenvb(%s)))" % repr(key) env = os.environ.copy() env[key] = encoded_value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(encoded_value)) def test_bytes_program(self): abs_program = os.fsencode(ZERO_RETURN_CMD[0]) args = list(ZERO_RETURN_CMD[1:]) path, program = os.path.split(ZERO_RETURN_CMD[0]) program = os.fsencode(program) # absolute bytes path exitcode = subprocess.call([abs_program]+args) self.assertEqual(exitcode, 0) # absolute bytes path as a string cmd = b"'%s' %s" % (abs_program, " ".join(args).encode("utf-8")) 
exitcode = subprocess.call(cmd, shell=True) self.assertEqual(exitcode, 0) # bytes program, unicode PATH env = os.environ.copy() env["PATH"] = path exitcode = subprocess.call([program]+args, env=env) self.assertEqual(exitcode, 0) # bytes program, bytes PATH envb = os.environb.copy() envb[b"PATH"] = os.fsencode(path) exitcode = subprocess.call([program]+args, env=envb) self.assertEqual(exitcode, 0) def test_pipe_cloexec(self): sleeper = support.findfile("input_reader.py", subdir="subprocessdata") fd_status = support.findfile("fd_status.py", subdir="subprocessdata") p1 = subprocess.Popen([sys.executable, sleeper], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False) self.addCleanup(p1.communicate, b'') p2 = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, error = p2.communicate() result_fds = set(map(int, output.split(b','))) unwanted_fds = set([p1.stdin.fileno(), p1.stdout.fileno(), p1.stderr.fileno()]) self.assertFalse(result_fds & unwanted_fds, "Expected no fds from %r to be open in child, " "found %r" % (unwanted_fds, result_fds & unwanted_fds)) def test_pipe_cloexec_real_tools(self): qcat = support.findfile("qcat.py", subdir="subprocessdata") qgrep = support.findfile("qgrep.py", subdir="subprocessdata") subdata = b'zxcvbn' data = subdata * 4 + b'\n' p1 = subprocess.Popen([sys.executable, qcat], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=False) p2 = subprocess.Popen([sys.executable, qgrep, subdata], stdin=p1.stdout, stdout=subprocess.PIPE, close_fds=False) self.addCleanup(p1.wait) self.addCleanup(p2.wait) def kill_p1(): try: p1.terminate() except ProcessLookupError: pass def kill_p2(): try: p2.terminate() except ProcessLookupError: pass self.addCleanup(kill_p1) self.addCleanup(kill_p2) p1.stdin.write(data) p1.stdin.close() readfiles, ignored1, ignored2 = select.select([p2.stdout], [], [], 10) self.assertTrue(readfiles, "The child hung") self.assertEqual(p2.stdout.read(), data) p1.stdout.close() p2.stdout.close() def test_close_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) open_fds = set(fds) # add a bunch more fds for _ in range(9): fd = os.open(os.devnull, os.O_RDONLY) self.addCleanup(os.close, fd) open_fds.add(fd) for fd in open_fds: os.set_inheritable(fd, True) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertEqual(remaining_fds & open_fds, open_fds, "Some fds were closed") p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse(remaining_fds & open_fds, "Some fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") # Keep some of the fd's we opened open in the subprocess. # This tests _posixsubprocess.c's proper handling of fds_to_keep. 
fds_to_keep = set(open_fds.pop() for _ in range(8)) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=fds_to_keep) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse((remaining_fds - fds_to_keep) & open_fds, "Some fds not in pass_fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") @unittest.skipIf(sys.platform.startswith("freebsd") and os.stat("/dev").st_dev == os.stat("/dev/fd").st_dev, "Requires fdescfs mounted on /dev/fd on FreeBSD") def test_close_fds_when_max_fd_is_lowered(self): """Confirm that issue21618 is fixed (may fail under valgrind).""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # This launches the meat of the test in a child process to # avoid messing with the larger unittest processes maximum # number of file descriptors. # This process launches: # +--> Process that lowers its RLIMIT_NOFILE aftr setting up # a bunch of high open fds above the new lower rlimit. # Those are reported via stdout before launching a new # process with close_fds=False to run the actual test: # +--> The TEST: This one launches a fd_status.py # subprocess with close_fds=True so we can find out if # any of the fds above the lowered rlimit are still open. p = subprocess.Popen([sys.executable, '-c', textwrap.dedent( ''' import os, resource, subprocess, sys, textwrap open_fds = set() # Add a bunch more fds to pass down. for _ in range(40): fd = os.open(os.devnull, os.O_RDONLY) open_fds.add(fd) # Leave a two pairs of low ones available for use by the # internal child error pipe and the stdout pipe. # We also leave 10 more open as some Python buildbots run into # "too many open files" errors during the test if we do not. for fd in sorted(open_fds)[:14]: os.close(fd) open_fds.remove(fd) for fd in open_fds: #self.addCleanup(os.close, fd) os.set_inheritable(fd, True) max_fd_open = max(open_fds) # Communicate the open_fds to the parent unittest.TestCase process. print(','.join(map(str, sorted(open_fds)))) sys.stdout.flush() rlim_cur, rlim_max = resource.getrlimit(resource.RLIMIT_NOFILE) try: # 29 is lower than the highest fds we are leaving open. resource.setrlimit(resource.RLIMIT_NOFILE, (29, rlim_max)) # Launch a new Python interpreter with our low fd rlim_cur that # inherits open fds above that limit. It then uses subprocess # with close_fds=True to get a report of open fds in the child. # An explicit list of fds to check is passed to fd_status.py as # letting fd_status rely on its default logic would miss the # fds above rlim_cur as it normally only checks up to that limit. 
subprocess.Popen( [sys.executable, '-c', textwrap.dedent(""" import subprocess, sys subprocess.Popen([sys.executable, %r] + [str(x) for x in range({max_fd})], close_fds=True).wait() """.format(max_fd=max_fd_open+1))], close_fds=False).wait() finally: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_cur, rlim_max)) ''' % fd_status)], stdout=subprocess.PIPE) output, unused_stderr = p.communicate() output_lines = output.splitlines() self.assertEqual(len(output_lines), 2, msg="expected exactly two lines of output:\n%r" % output) opened_fds = set(map(int, output_lines[0].strip().split(b','))) remaining_fds = set(map(int, output_lines[1].strip().split(b','))) self.assertFalse(remaining_fds & opened_fds, msg="Some fds were left open.") # Mac OS X Tiger (10.4) has a kernel bug: sometimes, the file # descriptor of a pipe closed in the parent process is valid in the # child process according to fstat(), but the mode of the file # descriptor is invalid, and read or write raise an error. @support.requires_mac_ver(10, 5) def test_pass_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") open_fds = set() for x in range(5): fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) os.set_inheritable(fds[0], True) os.set_inheritable(fds[1], True) open_fds.update(fds) for fd in open_fds: p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=(fd, )) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) to_be_closed = open_fds - {fd} self.assertIn(fd, remaining_fds, "fd to be passed not passed") self.assertFalse(remaining_fds & to_be_closed, "fd to be closed passed") # pass_fds overrides close_fds with a warning. with self.assertWarns(RuntimeWarning) as context: self.assertFalse(subprocess.call( ZERO_RETURN_CMD, close_fds=False, pass_fds=(fd, ))) self.assertIn('overriding close_fds', str(context.warning)) def test_pass_fds_inheritable(self): script = support.findfile("fd_status.py", subdir="subprocessdata") inheritable, non_inheritable = os.pipe() self.addCleanup(os.close, inheritable) self.addCleanup(os.close, non_inheritable) os.set_inheritable(inheritable, True) os.set_inheritable(non_inheritable, False) pass_fds = (inheritable, non_inheritable) args = [sys.executable, script] args += list(map(str, pass_fds)) p = subprocess.Popen(args, stdout=subprocess.PIPE, close_fds=True, pass_fds=pass_fds) output, ignored = p.communicate() fds = set(map(int, output.split(b','))) # the inheritable file descriptor must be inherited, so its inheritable # flag must be set in the child process after fork() and before exec() self.assertEqual(fds, set(pass_fds), "output=%a" % output) # inheritable flag must not be changed in the parent process self.assertEqual(os.get_inheritable(inheritable), True) self.assertEqual(os.get_inheritable(non_inheritable), False) # bpo-32270: Ensure that descriptors specified in pass_fds # are inherited even if they are used in redirections. # Contributed by @izbyshev. 
def test_pass_fds_redirected(self): """Regression test for https://bugs.python.org/issue32270.""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") pass_fds = [] for _ in range(2): fd = os.open(os.devnull, os.O_RDWR) self.addCleanup(os.close, fd) pass_fds.append(fd) stdout_r, stdout_w = os.pipe() self.addCleanup(os.close, stdout_r) self.addCleanup(os.close, stdout_w) pass_fds.insert(1, stdout_w) with subprocess.Popen([sys.executable, fd_status], stdin=pass_fds[0], stdout=pass_fds[1], stderr=pass_fds[2], close_fds=True, pass_fds=pass_fds): output = os.read(stdout_r, 1024) fds = {int(num) for num in output.split(b',')} self.assertEqual(fds, {0, 1, 2} | frozenset(pass_fds), f"output={output!a}") def test_stdout_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stdin=inout) p.wait() def test_stdout_stderr_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stderr=inout) p.wait() def test_stderr_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stderr=inout, stdin=inout) p.wait() def test_wait_when_sigchild_ignored(self): # NOTE: sigchild_ignore.py may not be an effective test on all OSes. sigchild_ignore = support.findfile("sigchild_ignore.py", subdir="subprocessdata") p = subprocess.Popen([sys.executable, sigchild_ignore], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" " non-zero with this error:\n%s" % stderr.decode('utf-8')) def test_select_unbuffered(self): # Issue #11459: bufsize=0 should really set the pipes as # unbuffered (and therefore let select() work properly). select = import_helper.import_module("select") p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple")'], stdout=subprocess.PIPE, bufsize=0) f = p.stdout self.addCleanup(f.close) try: self.assertEqual(f.read(4), b"appl") self.assertIn(f, select.select([f], [], [], 0.0)[0]) finally: p.wait() def test_zombie_fast_process_del(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, it wouldn't be added to subprocess._active, and would # remain a zombie. # spawn a Popen, and delete its reference before it exits p = subprocess.Popen([sys.executable, "-c", 'import sys, time;' 'time.sleep(0.2)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) def test_leak_fast_process_del_killed(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, and the process got killed by a signal, it would never # be removed from subprocess._active, which triggered a FD and memory # leak. # spawn a Popen, delete its reference and kill it p = subprocess.Popen([sys.executable, "-c", 'import time;' 'time.sleep(3)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None support.gc_collect() # For PyPy or other GCs. 
os.kill(pid, signal.SIGKILL) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) # let some time for the process to exit, and create a new Popen: this # should trigger the wait() of p time.sleep(0.2) with self.assertRaises(OSError): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass # p should have been wait()ed on, and removed from the _active list self.assertRaises(OSError, os.waitpid, pid, 0) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: self.assertNotIn(ident, [id(o) for o in subprocess._active]) def test_close_fds_after_preexec(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # this FD is used as dup2() target by preexec_fn, and should be closed # in the child process fd = os.dup(1) self.addCleanup(os.close, fd) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, preexec_fn=lambda: os.dup2(1, fd)) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertNotIn(fd, remaining_fds) @support.cpython_only def test_fork_exec(self): # Issue #22290: fork_exec() must not crash on memory allocation failure # or other errors import _posixsubprocess gc_enabled = gc.isenabled() try: # Use a preexec function and enable the garbage collector # to force fork_exec() to re-enable the garbage collector # on error. func = lambda: None gc.enable() for args, exe_list, cwd, env_list in ( (123, [b"exe"], None, [b"env"]), ([b"arg"], 123, None, [b"env"]), ([b"arg"], [b"exe"], 123, [b"env"]), ([b"arg"], [b"exe"], None, 123), ): with self.assertRaises(TypeError) as err: _posixsubprocess.fork_exec( args, exe_list, True, (), cwd, env_list, -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, False, [], 0, -1, func, False) # Attempt to prevent # "TypeError: fork_exec() takes exactly N arguments (M given)" # from passing the test. More refactoring to have us start # with a valid *args list, confirm a good call with that works # before mutating it in various ways to ensure that bad calls # with individual arg type errors raise a typeerror would be # ideal. Saving that for a future PR... self.assertNotIn('takes exactly', str(err.exception)) finally: if not gc_enabled: gc.disable() @support.cpython_only def test_fork_exec_sorted_fd_sanity_check(self): # Issue #23564: sanity check the fork_exec() fds_to_keep sanity check. import _posixsubprocess class BadInt: first = True def __init__(self, value): self.value = value def __int__(self): if self.first: self.first = False return self.value raise ValueError gc_enabled = gc.isenabled() try: gc.enable() for fds_to_keep in ( (-1, 2, 3, 4, 5), # Negative number. ('str', 4), # Not an int. (18, 23, 42, 2**63), # Out of range. (5, 4), # Not sorted. (6, 7, 7, 8), # Duplicate. 
(BadInt(1), BadInt(2)), ): with self.assertRaises( ValueError, msg='fds_to_keep={}'.format(fds_to_keep)) as c: _posixsubprocess.fork_exec( [b"false"], [b"false"], True, fds_to_keep, None, [b"env"], -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, None, None, None, -1, None, True) self.assertIn('fds_to_keep', str(c.exception)) finally: if not gc_enabled: gc.disable() def test_communicate_BrokenPipeError_stdin_close(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError proc.communicate() # Should swallow BrokenPipeError from close. mock_proc_stdin.close.assert_called_with() def test_communicate_BrokenPipeError_stdin_write(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.write.side_effect = BrokenPipeError proc.communicate(b'stuff') # Should swallow the BrokenPipeError. mock_proc_stdin.write.assert_called_once_with(b'stuff') mock_proc_stdin.close.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_flush(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin, \ open(os.devnull, 'wb') as dev_null: mock_proc_stdin.flush.side_effect = BrokenPipeError # because _communicate registers a selector using proc.stdin... mock_proc_stdin.fileno.return_value = dev_null.fileno() # _communicate() should swallow BrokenPipeError from flush. proc.communicate(b'stuff') mock_proc_stdin.flush.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_close_with_timeout(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError # _communicate() should swallow BrokenPipeError from close. proc.communicate(timeout=999) mock_proc_stdin.close.assert_called_once_with() @unittest.skipUnless(_testcapi is not None and hasattr(_testcapi, 'W_STOPCODE'), 'need _testcapi.W_STOPCODE') def test_stopped(self): """Test wait() behavior when waitpid returns WIFSTOPPED; issue29335.""" args = ZERO_RETURN_CMD proc = subprocess.Popen(args) # Wait until the real process completes to avoid zombie process support.wait_process(proc.pid, exitcode=0) status = _testcapi.W_STOPCODE(3) with mock.patch('subprocess.os.waitpid', return_value=(proc.pid, status)): returncode = proc.wait() self.assertEqual(returncode, -3) def test_send_signal_race(self): # bpo-38630: send_signal() must poll the process exit status to reduce # the risk of sending the signal to the wrong process. proc = subprocess.Popen(ZERO_RETURN_CMD) # wait until the process completes without using the Popen APIs. support.wait_process(proc.pid, exitcode=0) # returncode is still None but the process completed. 
self.assertIsNone(proc.returncode) with mock.patch("os.kill") as mock_kill: proc.send_signal(signal.SIGTERM) # send_signal() didn't call os.kill() since the process already # completed. mock_kill.assert_not_called() # Don't check the returncode value: the test reads the exit status, # so Popen failed to read it and uses a default returncode instead. self.assertIsNotNone(proc.returncode) def test_send_signal_race2(self): # bpo-40550: the process might exist between the returncode check and # the kill operation p = subprocess.Popen([sys.executable, '-c', 'exit(1)']) # wait for process to exit while not p.returncode: p.poll() with mock.patch.object(p, 'poll', new=lambda: None): p.returncode = None p.send_signal(signal.SIGTERM) p.kill() def test_communicate_repeated_call_after_stdout_close(self): proc = subprocess.Popen([sys.executable, '-c', 'import os, time; os.close(1), time.sleep(2)'], stdout=subprocess.PIPE) while True: try: proc.communicate(timeout=0.1) return except subprocess.TimeoutExpired: pass def test_preexec_at_exit(self): code = f"""if 1: import atexit import subprocess def dummy(): pass class AtFinalization: def __del__(self): print("OK") subprocess.Popen({ZERO_RETURN_CMD}, preexec_fn=dummy) print("shouldn't be printed") at_finalization = AtFinalization() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out.strip(), b"OK") self.assertIn(b"preexec_fn not supported at interpreter shutdown", err) @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): def test_startupinfo(self): # startupinfo argument # We uses hardcoded constants, because we do not want to # depend on win32all. STARTF_USESHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_MAXIMIZE # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_keywords(self): # startupinfo argument # We use hardcoded constants, because we do not want to # depend on win32all. 
STARTF_USERSHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO( dwFlags=STARTF_USERSHOWWINDOW, wShowWindow=SW_MAXIMIZE ) # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_copy(self): # bpo-34044: Popen must not modify input STARTUPINFO structure startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW startupinfo.wShowWindow = subprocess.SW_HIDE # Call Popen() twice with the same startupinfo object to make sure # that it's not modified for _ in range(2): cmd = ZERO_RETURN_CMD with open(os.devnull, 'w') as null: proc = subprocess.Popen(cmd, stdout=null, stderr=subprocess.STDOUT, startupinfo=startupinfo) with proc: proc.communicate() self.assertEqual(proc.returncode, 0) self.assertEqual(startupinfo.dwFlags, subprocess.STARTF_USESHOWWINDOW) self.assertIsNone(startupinfo.hStdInput) self.assertIsNone(startupinfo.hStdOutput) self.assertIsNone(startupinfo.hStdError) self.assertEqual(startupinfo.wShowWindow, subprocess.SW_HIDE) self.assertEqual(startupinfo.lpAttributeList, {"handle_list": []}) def test_creationflags(self): # creationflags argument CREATE_NEW_CONSOLE = 16 sys.stderr.write(" a DOS box should flash briefly ...\n") subprocess.call(sys.executable + ' -c "import time; time.sleep(0.25)"', creationflags=CREATE_NEW_CONSOLE) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], preexec_fn=lambda: 1) @support.cpython_only def test_issue31471(self): # There shouldn't be an assertion failure in Popen() in case the env # argument has a bad keys() method. 
class BadEnv(dict): keys = None with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, env=BadEnv()) def test_close_fds(self): # close file descriptors rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"], close_fds=True) self.assertEqual(rc, 47) def test_close_fds_with_stdio(self): import msvcrt fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) handles = [] for fd in fds: os.set_inheritable(fd, True) handles.append(msvcrt.get_osfhandle(fd)) p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) int(stdout.strip()) # Check that stdout is an integer p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # The same as the previous call, but with an empty handle_list handle_list = [] startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handle_list} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # Check for a warning due to using handle_list and close_fds=False with warnings_helper.check_warnings((".*overriding close_fds", RuntimeWarning)): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handles[:]} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) def test_empty_attribute_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_empty_handle_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": []} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen(["set"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_encodings(self): # Run command through the shell (string) for enc in ['ansi', 'oem']: newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv, encoding=enc) with p: self.assertIn("physalis", p.stdout.read(), enc) def test_call_string(self): # call() function with string argument on Windows rc = subprocess.call(sys.executable + ' -c "import sys; sys.exit(47)"') self.assertEqual(rc, 47) def _kill_process(self, method, *args): # Some win32 buildbot raises EOFError if stdin is inherited p = subprocess.Popen([sys.executable, 
"-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') returncode = p.wait() self.assertNotEqual(returncode, 0) def _kill_dead_process(self, method, *args): p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() sys.exit(42) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') rc = p.wait() self.assertEqual(rc, 42) def test_send_signal(self): self._kill_process('send_signal', signal.SIGTERM) def test_kill(self): self._kill_process('kill') def test_terminate(self): self._kill_process('terminate') def test_send_signal_dead(self): self._kill_dead_process('send_signal', signal.SIGTERM) def test_kill_dead(self): self._kill_dead_process('kill') def test_terminate_dead(self): self._kill_dead_process('terminate') class MiscTests(unittest.TestCase): class RecordingPopen(subprocess.Popen): """A Popen that saves a reference to each instance for testing.""" instances_created = [] def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.instances_created.append(self) @mock.patch.object(subprocess.Popen, "_communicate") def _test_keyboardinterrupt_no_kill(self, popener, mock__communicate, **kwargs): """Fake a SIGINT happening during Popen._communicate() and ._wait(). This avoids the need to actually try and get test environments to send and receive signals reliably across platforms. The net effect of a ^C happening during a blocking subprocess execution which we want to clean up from is a KeyboardInterrupt coming out of communicate() or wait(). """ mock__communicate.side_effect = KeyboardInterrupt try: with mock.patch.object(subprocess.Popen, "_wait") as mock__wait: # We patch out _wait() as no signal was involved so the # child process isn't actually going to exit rapidly. 
mock__wait.side_effect = KeyboardInterrupt with mock.patch.object(subprocess, "Popen", self.RecordingPopen): with self.assertRaises(KeyboardInterrupt): popener([sys.executable, "-c", "import time\ntime.sleep(9)\nimport sys\n" "sys.stderr.write('\\n!runaway child!\\n')"], stdout=subprocess.DEVNULL, **kwargs) for call in mock__wait.call_args_list[1:]: self.assertNotEqual( call, mock.call(timeout=None), "no open-ended wait() after the first allowed: " f"{mock__wait.call_args_list}") sigint_calls = [] for call in mock__wait.call_args_list: if call == mock.call(timeout=0.25): # from Popen.__init__ sigint_calls.append(call) self.assertLessEqual(mock__wait.call_count, 2, msg=mock__wait.call_args_list) self.assertEqual(len(sigint_calls), 1, msg=mock__wait.call_args_list) finally: # cleanup the forgotten (due to our mocks) child process process = self.RecordingPopen.instances_created.pop() process.kill() process.wait() self.assertEqual([], self.RecordingPopen.instances_created) def test_call_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.call, timeout=6.282) def test_run_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.run, timeout=6.282) def test_context_manager_keyboardinterrupt_no_kill(self): def popen_via_context_manager(*args, **kwargs): with subprocess.Popen(*args, **kwargs) as unused_process: raise KeyboardInterrupt # Test how __exit__ handles ^C. self._test_keyboardinterrupt_no_kill(popen_via_context_manager) def test_getoutput(self): self.assertEqual(subprocess.getoutput('echo xyzzy'), 'xyzzy') self.assertEqual(subprocess.getstatusoutput('echo xyzzy'), (0, 'xyzzy')) # we use mkdtemp in the next line to create an empty directory # under our exclusive control; from that, we can invent a pathname # that we _know_ won't exist. This is guaranteed to fail. 
dir = None try: dir = tempfile.mkdtemp() name = os.path.join(dir, "foo") status, output = subprocess.getstatusoutput( ("type " if mswindows else "cat ") + name) self.assertNotEqual(status, 0) finally: if dir is not None: os.rmdir(dir) def test__all__(self): """Ensure that __all__ is populated properly.""" intentionally_excluded = {"list2cmdline", "Handle", "pwd", "grp", "fcntl"} exported = set(subprocess.__all__) possible_exports = set() import types for name, value in subprocess.__dict__.items(): if name.startswith('_'): continue if isinstance(value, (types.ModuleType,)): continue possible_exports.add(name) self.assertEqual(exported, possible_exports - intentionally_excluded) @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class ProcessTestCaseNoPoll(ProcessTestCase): def setUp(self): self.orig_selector = subprocess._PopenSelector subprocess._PopenSelector = selectors.SelectSelector ProcessTestCase.setUp(self) def tearDown(self): subprocess._PopenSelector = self.orig_selector ProcessTestCase.tearDown(self) @unittest.skipUnless(mswindows, "Windows-specific tests") class CommandsWithSpaces (BaseTestCase): def setUp(self): super().setUp() f, fname = tempfile.mkstemp(".py", "te st") self.fname = fname.lower () os.write(f, b"import sys;" b"sys.stdout.write('%d %s' % (len(sys.argv), [a.lower () for a in sys.argv]))" ) os.close(f) def tearDown(self): os.remove(self.fname) super().tearDown() def with_spaces(self, *args, **kwargs): kwargs['stdout'] = subprocess.PIPE p = subprocess.Popen(*args, **kwargs) with p: self.assertEqual( p.stdout.read ().decode("mbcs"), "2 [%r, 'ab cd']" % self.fname ) def test_shell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd"), shell=1) def test_shell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"], shell=1) def test_noshell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd")) def test_noshell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"]) class ContextManagerTests(BaseTestCase): def test_pipe(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write('stdout');" "sys.stderr.write('stderr');"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: self.assertEqual(proc.stdout.read(), b"stdout") self.assertEqual(proc.stderr.read(), b"stderr") self.assertTrue(proc.stdout.closed) self.assertTrue(proc.stderr.closed) def test_returncode(self): with subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(100)"]) as proc: pass # __exit__ calls wait(), so the returncode should be set self.assertEqual(proc.returncode, 100) def test_communicate_stdin(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.exit(sys.stdin.read() == 'context')"], stdin=subprocess.PIPE) as proc: proc.communicate(b"context") self.assertEqual(proc.returncode, 1) def test_invalid_args(self): with self.assertRaises(NONEXISTING_ERRORS): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass def test_broken_pipe_cleanup(self): """Broken pipe error should not prevent wait() (Issue 21619)""" proc = subprocess.Popen(ZERO_RETURN_CMD, 
stdin=subprocess.PIPE, bufsize=support.PIPE_MAX_SIZE*2) proc = proc.__enter__() # Prepare to send enough data to overflow any OS pipe buffering and # guarantee a broken pipe error. Data is held in BufferedWriter # buffer until closed. proc.stdin.write(b'x' * support.PIPE_MAX_SIZE) self.assertIsNone(proc.returncode) # EPIPE expected under POSIX; EINVAL under Windows self.assertRaises(OSError, proc.__exit__, None, None, None) self.assertEqual(proc.returncode, 0) self.assertTrue(proc.stdin.closed) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_threading.py000066400000000000000000002220051471441230600217030ustar00rootroot00000000000000""" Tests for the threading module. """ import test.support from test.support import threading_helper, requires_subprocess from test.support import verbose, cpython_only, os_helper from test.support.import_helper import import_module from test.support.script_helper import assert_python_ok, assert_python_failure import random import sys import _thread import threading import time import unittest import weakref import os import subprocess import signal import textwrap import traceback import warnings from unittest import mock from test import lock_tests from test import support try: from test.support import interpreters except ModuleNotFoundError: interpreters = None threading_helper.requires_working_threading(module=True) # Between fork() and exec(), only async-safe functions are allowed (issues # #12316 and #11870), and fork() from a worker thread is known to trigger # problems with some operating systems (issue #3863): skip problematic tests # on platforms known to behave badly. platforms_to_skip = ('netbsd5', 'hp-ux11') def skip_unless_reliable_fork(test): if not support.has_fork_support: return unittest.skip("requires working os.fork()")(test) if sys.platform in platforms_to_skip: return unittest.skip("due to known OS bug related to thread+fork")(test) if support.HAVE_ASAN_FORK_BUG: return unittest.skip("libasan has a pthread_create() dead lock related to thread+fork")(test) if support.check_sanitizer(thread=True): return unittest.skip("TSAN doesn't support threads after fork") return test def requires_subinterpreters(meth): """Decorator to skip a test if subinterpreters are not supported.""" return unittest.skipIf(interpreters is None, 'subinterpreters required')(meth) def restore_default_excepthook(testcase): testcase.addCleanup(setattr, threading, 'excepthook', threading.excepthook) threading.excepthook = threading.__excepthook__ # A trivial mutable counter. class Counter(object): def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % (self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. 
%d tasks are running' % (self.name, self.nrunning.get()))


class BaseTestCase(unittest.TestCase):
    def setUp(self):
        self._threads = threading_helper.threading_setup()

    def tearDown(self):
        threading_helper.threading_cleanup(*self._threads)
        test.support.reap_children()


class ThreadTests(BaseTestCase):
    maxDiff = 9999

    @cpython_only
    def test_name(self):
        def func(): pass

        thread = threading.Thread(name="myname1")
        self.assertEqual(thread.name, "myname1")

        # Convert int name to str
        thread = threading.Thread(name=123)
        self.assertEqual(thread.name, "123")

        # target name is ignored if name is specified
        thread = threading.Thread(target=func, name="myname2")
        self.assertEqual(thread.name, "myname2")

        with mock.patch.object(threading, '_counter', return_value=2):
            thread = threading.Thread(name="")
            self.assertEqual(thread.name, "Thread-2")

        with mock.patch.object(threading, '_counter', return_value=3):
            thread = threading.Thread()
            self.assertEqual(thread.name, "Thread-3")

        with mock.patch.object(threading, '_counter', return_value=5):
            thread = threading.Thread(target=func)
            self.assertEqual(thread.name, "Thread-5 (func)")

    def test_args_argument(self):
        # bpo-45735: Using list or tuple as *args* in constructor could
        # achieve the same effect.
        num_list = [1]
        num_tuple = (1,)

        str_list = ["str"]
        str_tuple = ("str",)

        list_in_tuple = ([1],)
        tuple_in_list = [(1,)]

        test_cases = (
            (num_list, lambda arg: self.assertEqual(arg, 1)),
            (num_tuple, lambda arg: self.assertEqual(arg, 1)),
            (str_list, lambda arg: self.assertEqual(arg, "str")),
            (str_tuple, lambda arg: self.assertEqual(arg, "str")),
            (list_in_tuple, lambda arg: self.assertEqual(arg, [1])),
            (tuple_in_list, lambda arg: self.assertEqual(arg, (1,)))
        )

        for args, target in test_cases:
            with self.subTest(target=target, args=args):
                t = threading.Thread(target=target, args=args)
                t.start()
                t.join()

    @cpython_only
    def test_disallow_instantiation(self):
        # Ensure that the type disallows instantiation (bpo-43916)
        lock = threading.Lock()
        test.support.check_disallow_instantiation(self, type(lock))

    # Create a bunch of threads, let each do some work, wait until all are
    # done.
    def test_various_ops(self):
        # This takes about n/3 seconds to run (about n/3 clumps of tasks,
        # times about 1 second per clump).
        NUMTASKS = 10

        # no more than 3 of the 10 can run at once
        sema = threading.BoundedSemaphore(value=3)
        mutex = threading.RLock()
        numrunning = Counter()

        threads = []

        for i in range(NUMTASKS):
            t = TestThread("<thread %d>"%i, self, sema, mutex, numrunning)
            threads.append(t)
            self.assertIsNone(t.ident)
            self.assertRegex(repr(t), r'^<TestThread\(.*, initial\)>$')
            t.start()

        if hasattr(threading, 'get_native_id'):
            native_ids = set(t.native_id for t in threads) | {threading.get_native_id()}
            self.assertNotIn(None, native_ids)
            self.assertEqual(len(native_ids), NUMTASKS + 1)

        if verbose:
            print('waiting for all tasks to complete')
        for t in threads:
            t.join()
            self.assertFalse(t.is_alive())
            self.assertNotEqual(t.ident, 0)
            self.assertIsNotNone(t.ident)
            self.assertRegex(repr(t), r'^<TestThread\(.*, stopped -?\d+\)>$')
        if verbose:
            print('all tasks done')
        self.assertEqual(numrunning.get(), 0)

    def test_ident_of_no_threading_threads(self):
        # The ident still must work for the main thread and dummy threads.
self.assertIsNotNone(threading.current_thread().ident) def f(): ident.append(threading.current_thread().ident) done.set() done = threading.Event() ident = [] with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, ()) done.wait() self.assertEqual(ident[0], tid) # Kill the "immortal" _DummyThread del threading._active[ident[0]] # run with a small(ish) thread stack size (256 KiB) def test_various_ops_small_stack(self): if verbose: print('with 256 KiB thread stack size...') try: threading.stack_size(262144) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1 MiB) def test_various_ops_large_stack(self): if verbose: print('with 1 MiB thread stack size...') try: threading.stack_size(0x100000) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. def f(mutex): # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. threading.current_thread() mutex.release() mutex = threading.Lock() mutex.acquire() with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() self.assertIn(tid, threading._active) self.assertIsInstance(threading._active[tid], threading._DummyThread) #Issue 29376 self.assertTrue(threading._active[tid].is_alive()) self.assertRegex(repr(threading._active[tid]), '_DummyThread') del threading._active[tid] # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def test_PyThreadState_SetAsyncExc(self): ctypes = import_module("ctypes") set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc set_async_exc.argtypes = (ctypes.c_ulong, ctypes.py_object) class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # First check it works when setting the exception from the same thread. tid = threading.get_ident() self.assertIsInstance(tid, int) self.assertGreater(tid, 0) try: result = set_async_exc(tid, exception) # The exception is async, so we might have to keep the VM busy until # it notices. while True: pass except AsyncExc: pass else: # This code is unreachable but it reflects the intent. If we wanted # to be smarter the above loop wouldn't be infinite. self.fail("AsyncExc not raised") try: self.assertEqual(result, 1) # one thread state modified except UnboundLocalError: # The exception was raised too quickly for us to get the result. pass # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): def run(self): self.id = threading.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. 
if verbose: print(" trying nonsensical thread id") result = set_async_exc(-1, exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") ret = worker_started.wait() self.assertTrue(ret) if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(t.id, exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=support.SHORT_TIMEOUT) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. def fail_new_thread(*args): raise threading.ThreadError() _start_new_thread = threading._start_new_thread threading._start_new_thread = fail_new_thread try: t = threading.Thread(target=lambda: None) self.assertRaises(threading.ThreadError, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") finally: threading._start_new_thread = _start_new_thread def test_finalize_running_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. if support.check_sanitizer(thread=True): # the thread running `time.sleep(100)` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") import_module("ctypes") rc, out, err = assert_python_failure("-c", """if 1: import ctypes, sys, time, _thread # This lock is used as a simple event variable. ready = _thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) _thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. 
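# Exiting now finalizes the interpreter while waitingThread() is still
# sleeping; the C instance in its frame is only torn down then, so its
# __del__ exercises PyGILState_Ensure / PyGILState_Release very late
# in shutdown, which is the situation Issue 1402 describes.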
sys.exit(42) """) self.assertEqual(rc, 42) def test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown if support.check_sanitizer(thread=True): # the thread running `time.sleep(2)` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") assert_python_ok("-c", """if 1: import sys, threading # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): import os, time time.sleep(2) print('program blocked; aborting') os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() # This is the trace function def func(frame, event, arg): threading.current_thread() return func sys.settrace(func) """) def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown rc, out, err = assert_python_ok("-c", """if 1: import threading from time import sleep def child(): sleep(1) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is:", sleep) threading.Thread(target=child).start() raise SystemExit """) self.assertEqual(out.strip(), b"Woke up, sleep function is: ") self.assertEqual(err, b"") def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. enum = threading.enumerate old_interval = sys.getswitchinterval() try: for i in range(1, 100): sys.setswitchinterval(i * 0.0002) t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertNotIn(t, l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setswitchinterval(old_interval) def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes. 
self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'yet_another':self}) self.thread.start() def _run(self, other_ref, yet_another): if self.should_raise: raise SystemExit restore_default_excepthook(self) cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) def test_old_threading_api(self): # Just a quick sanity check to make sure the old method names are # still present t = threading.Thread() with self.assertWarnsRegex(DeprecationWarning, r'get the daemon attribute'): t.isDaemon() with self.assertWarnsRegex(DeprecationWarning, r'set the daemon attribute'): t.setDaemon(True) with self.assertWarnsRegex(DeprecationWarning, r'get the name attribute'): t.getName() with self.assertWarnsRegex(DeprecationWarning, r'set the name attribute'): t.setName("name") e = threading.Event() with self.assertWarnsRegex(DeprecationWarning, 'use is_set()'): e.isSet() cond = threading.Condition() cond.acquire() with self.assertWarnsRegex(DeprecationWarning, 'use notify_all()'): cond.notifyAll() with self.assertWarnsRegex(DeprecationWarning, 'use active_count()'): threading.activeCount() with self.assertWarnsRegex(DeprecationWarning, 'use current_thread()'): threading.currentThread() def test_repr_daemon(self): t = threading.Thread() self.assertNotIn('daemon', repr(t)) t.daemon = True self.assertIn('daemon', repr(t)) def test_daemon_param(self): t = threading.Thread() self.assertFalse(t.daemon) t = threading.Thread(daemon=False) self.assertFalse(t.daemon) t = threading.Thread(daemon=True) self.assertTrue(t.daemon) @skip_unless_reliable_fork def test_dummy_thread_after_fork(self): # Issue #14308: a dummy thread in the active list doesn't mess up # the after-fork mechanism. code = """if 1: import _thread, threading, os, time, warnings def background_thread(evt): # Creates and registers the _DummyThread instance threading.current_thread() evt.set() time.sleep(10) evt = threading.Event() _thread.start_new_thread(background_thread, (evt,)) evt.wait() assert threading.active_count() == 2, threading.active_count() with warnings.catch_warnings(record=True) as ws: warnings.filterwarnings( "always", category=DeprecationWarning) if os.fork() == 0: assert threading.active_count() == 1, threading.active_count() os._exit(0) else: assert ws[0].category == DeprecationWarning, ws[0] assert 'fork' in str(ws[0].message), ws[0] os.wait() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') @skip_unless_reliable_fork def test_is_alive_after_fork(self): # Try hard to trigger #18418: is_alive() could sometimes be True on # threads that vanished after a fork. old_interval = sys.getswitchinterval() self.addCleanup(sys.setswitchinterval, old_interval) # Make the bug more likely to manifest. test.support.setswitchinterval(1e-6) for i in range(20): t = threading.Thread(target=lambda: None) t.start() # Ignore the warning about fork with threads. 
with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): if (pid := os.fork()) == 0: os._exit(11 if t.is_alive() else 10) else: t.join() support.wait_process(pid, exitcode=10) def test_main_thread(self): main = threading.main_thread() self.assertEqual(main.name, 'MainThread') self.assertEqual(main.ident, threading.current_thread().ident) self.assertEqual(main.ident, threading.get_ident()) def f(): self.assertNotEqual(threading.main_thread().ident, threading.current_thread().ident) th = threading.Thread(target=f) th.start() th.join() @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork(self): code = """if 1: import os, threading from test import support ident = threading.get_ident() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) else: support.wait_process(pid, exitcode=0) """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "current ident True\n" "main MainThread\n" "main ident True\n" "current is main True\n") @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_nonmain_thread(self): code = """if 1: import os, threading, sys, warnings from test import support def func(): ident = threading.get_ident() with warnings.catch_warnings(record=True) as ws: warnings.filterwarnings( "always", category=DeprecationWarning) pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) # stdout is fully buffered because not a tty, # we have to flush before exit. 
sys.stdout.flush() else: assert ws[0].category == DeprecationWarning, ws[0] assert 'fork' in str(ws[0].message), ws[0] support.wait_process(pid, exitcode=0) th = threading.Thread(target=func) th.start() th.join() """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode('utf-8'), "") self.assertEqual(data, "current ident True\n" "main Thread-1 (func) Thread\n" "main ident True\n" "current is main True\n" ) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") @support.requires_fork() @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_foreign_thread(self, create_dummy=False): code = """if 1: import os, threading, sys, traceback, _thread from test import support def func(lock): ident = threading.get_ident() if %s: # call current_thread() before fork to allocate DummyThread current = threading.current_thread() print("current", current.name, type(current).__name__) print("ident in _active", ident in threading._active) # flush before fork, so child won't flush it again sys.stdout.flush() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) print("_dangling", [t.name for t in list(threading._dangling)]) # stdout is fully buffered because not a tty, # we have to flush before exit. sys.stdout.flush() try: threading._shutdown() os._exit(0) except: traceback.print_exc() sys.stderr.flush() os._exit(1) else: try: support.wait_process(pid, exitcode=0) except Exception: # avoid 'could not acquire lock for # <_io.BufferedWriter name=''> at interpreter shutdown,' traceback.print_exc() sys.stderr.flush() finally: lock.release() join_lock = _thread.allocate_lock() join_lock.acquire() th = _thread.start_new_thread(func, (join_lock,)) join_lock.acquire() """ % create_dummy # "DeprecationWarning: This process is multi-threaded, use of fork() # may lead to deadlocks in the child" _, out, err = assert_python_ok("-W", "ignore::DeprecationWarning", "-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode(), "") self.assertEqual(data, ("current Dummy-1 _DummyThread\n" if create_dummy else "") + f"ident in _active {create_dummy!s}\n" + "current ident True\n" "main MainThread _MainThread\n" "main ident True\n" "current is main True\n" "_dangling ['MainThread']\n") def test_main_thread_after_fork_from_dummy_thread(self, create_dummy=False): self.test_main_thread_after_fork_from_foreign_thread(create_dummy=True) def test_main_thread_during_shutdown(self): # bpo-31516: current_thread() should still point to the main thread # at shutdown code = """if 1: import gc, threading main_thread = threading.current_thread() assert main_thread is threading.main_thread() # sanity check class RefCycle: def __init__(self): self.cycle = self def __del__(self): print("GC:", threading.current_thread() is main_thread, threading.main_thread() is main_thread, threading.enumerate() == [main_thread]) RefCycle() gc.collect() # sanity check x = RefCycle() """ _, out, err = assert_python_ok("-c", code) data = out.decode() self.assertEqual(err, b"") self.assertEqual(data.splitlines(), ["GC: True True True"] * 2) def test_finalization_shutdown(self): # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait # until Python thread states of all non-daemon threads get deleted. 
# # Test similar to SubinterpThreadingTests.test_threads_join_2(), but # test the finalization of the main interpreter. code = """if 1: import os import threading import time import random def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_Finalize() is called. random_sleep() tls.x = Sleeper() random_sleep() threading.Thread(target=f).start() random_sleep() """ rc, out, err = assert_python_ok("-c", code) self.assertEqual(err, b"") def test_tstate_lock(self): # Test an implementation detail of Thread objects. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() time.sleep(0.01) # The tstate lock is None until the thread is started t = threading.Thread(target=f) self.assertIs(t._tstate_lock, None) t.start() started.acquire() self.assertTrue(t.is_alive()) # The tstate lock can't be acquired when the thread is running # (or suspended). tstate_lock = t._tstate_lock self.assertFalse(tstate_lock.acquire(timeout=0), False) finish.release() # When the thread ends, the state_lock can be successfully # acquired. self.assertTrue(tstate_lock.acquire(timeout=support.SHORT_TIMEOUT), False) # But is_alive() is still True: we hold _tstate_lock now, which # prevents is_alive() from knowing the thread's end-of-life C code # is done. self.assertTrue(t.is_alive()) # Let is_alive() find out the C code is done. tstate_lock.release() self.assertFalse(t.is_alive()) # And verify the thread disposed of _tstate_lock. self.assertIsNone(t._tstate_lock) t.join() def test_repr_stopped(self): # Verify that "stopped" shows up in repr(Thread) appropriately. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() t = threading.Thread(target=f) t.start() started.acquire() self.assertIn("started", repr(t)) finish.release() # "stopped" should appear in the repr in a reasonable amount of time. # Implementation detail: as of this writing, that's trivially true # if .join() is called, and almost trivially true if .is_alive() is # called. The detail we're testing here is that "stopped" shows up # "all on its own". LOOKING_FOR = "stopped" for i in range(500): if LOOKING_FOR in repr(t): break time.sleep(0.01) self.assertIn(LOOKING_FOR, repr(t)) # we waited at least 5 seconds t.join() def test_BoundedSemaphore_limit(self): # BoundedSemaphore should raise ValueError if released too often. for limit in range(1, 10): bs = threading.BoundedSemaphore(limit) threads = [threading.Thread(target=bs.acquire) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() threads = [threading.Thread(target=bs.release) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() self.assertRaises(ValueError, bs.release) @cpython_only def test_frame_tstate_tracing(self): # Issue #14432: Crash when a generator is created in a C thread that is # destroyed while the generator is still used. The issue was that a # generator contains a frame, and the frame kept a reference to the # Python state of the destroyed C thread. The crash occurs when a trace # function is setup. 
def noop_trace(frame, event, arg): # no operation return noop_trace def generator(): while 1: yield "generator" def callback(): if callback.gen is None: callback.gen = generator() return next(callback.gen) callback.gen = None old_trace = sys.gettrace() sys.settrace(noop_trace) try: # Install a trace function threading.settrace(noop_trace) # Create a generator in a C thread which exits after the call import _testcapi _testcapi.call_in_temporary_c_thread(callback) # Call the generator in a different Python thread, check that the # generator didn't keep a reference to the destroyed thread state for test in range(3): # The trace function is still called here callback() finally: sys.settrace(old_trace) threading.settrace(old_trace) def test_gettrace(self): def noop_trace(frame, event, arg): # no operation return noop_trace old_trace = threading.gettrace() try: threading.settrace(noop_trace) trace_func = threading.gettrace() self.assertEqual(noop_trace,trace_func) finally: threading.settrace(old_trace) def test_gettrace_all_threads(self): def fn(*args): pass old_trace = threading.gettrace() first_check = threading.Event() second_check = threading.Event() trace_funcs = [] def checker(): trace_funcs.append(sys.gettrace()) first_check.set() second_check.wait() trace_funcs.append(sys.gettrace()) try: t = threading.Thread(target=checker) t.start() first_check.wait() threading.settrace_all_threads(fn) second_check.set() t.join() self.assertEqual(trace_funcs, [None, fn]) self.assertEqual(threading.gettrace(), fn) self.assertEqual(sys.gettrace(), fn) finally: threading.settrace_all_threads(old_trace) self.assertEqual(threading.gettrace(), old_trace) self.assertEqual(sys.gettrace(), old_trace) def test_getprofile(self): def fn(*args): pass old_profile = threading.getprofile() try: threading.setprofile(fn) self.assertEqual(fn, threading.getprofile()) finally: threading.setprofile(old_profile) def test_getprofile_all_threads(self): def fn(*args): pass old_profile = threading.getprofile() first_check = threading.Event() second_check = threading.Event() profile_funcs = [] def checker(): profile_funcs.append(sys.getprofile()) first_check.set() second_check.wait() profile_funcs.append(sys.getprofile()) try: t = threading.Thread(target=checker) t.start() first_check.wait() threading.setprofile_all_threads(fn) second_check.set() t.join() self.assertEqual(profile_funcs, [None, fn]) self.assertEqual(threading.getprofile(), fn) self.assertEqual(sys.getprofile(), fn) finally: threading.setprofile_all_threads(old_profile) self.assertEqual(threading.getprofile(), old_profile) self.assertEqual(sys.getprofile(), old_profile) @cpython_only def test_shutdown_locks(self): for daemon in (False, True): with self.subTest(daemon=daemon): event = threading.Event() thread = threading.Thread(target=event.wait, daemon=daemon) # Thread.start() must add lock to _shutdown_locks, # but only for non-daemon thread thread.start() tstate_lock = thread._tstate_lock if not daemon: self.assertIn(tstate_lock, threading._shutdown_locks) else: self.assertNotIn(tstate_lock, threading._shutdown_locks) # unblock the thread and join it event.set() thread.join() # Thread._stop() must remove tstate_lock from _shutdown_locks. # Daemon threads must never add it to _shutdown_locks. 
self.assertNotIn(tstate_lock, threading._shutdown_locks) def test_locals_at_exit(self): # bpo-19466: thread locals must not be deleted before destructors # are called rc, out, err = assert_python_ok("-c", """if 1: import threading class Atexit: def __del__(self): print("thread_dict.atexit = %r" % thread_dict.atexit) thread_dict = threading.local() thread_dict.atexit = "value" atexit = Atexit() """) self.assertEqual(out.rstrip(), b"thread_dict.atexit = 'value'") def test_boolean_target(self): # bpo-41149: A thread that had a boolean value of False would not # run, regardless of whether it was callable. The correct behaviour # is for a thread to do nothing if its target is None, and to call # the target otherwise. class BooleanTarget(object): def __init__(self): self.ran = False def __bool__(self): return False def __call__(self): self.ran = True target = BooleanTarget() thread = threading.Thread(target=target) thread.start() thread.join() self.assertTrue(target.ran) def test_leak_without_join(self): # bpo-37788: Test that a thread which is not joined explicitly # does not leak. Test written for reference leak checks. def noop(): pass with threading_helper.wait_threads_exit(): threading.Thread(target=noop).start() # Thread.join() is not called def test_import_from_another_thread(self): # bpo-1596321: If the threading module is first import from a thread # different than the main thread, threading._shutdown() must handle # this case without logging an error at Python exit. code = textwrap.dedent(''' import _thread import sys event = _thread.allocate_lock() event.acquire() def import_threading(): import threading event.release() if 'threading' in sys.modules: raise Exception('threading is already imported') _thread.start_new_thread(import_threading, ()) # wait until the threading module is imported event.acquire() event.release() if 'threading' not in sys.modules: raise Exception('threading is not imported') # don't wait until the thread completes ''') rc, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') def test_start_new_thread_at_finalization(self): code = """if 1: import _thread def f(): print("shouldn't be printed") class AtFinalization: def __del__(self): print("OK") _thread.start_new_thread(f, ()) at_finalization = AtFinalization() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out.strip(), b"OK") self.assertIn(b"can't create new thread at interpreter shutdown", err) class ThreadJoinOnShutdown(BaseTestCase): def _run_and_join(self, script): script = """if 1: import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') # stdout is fully buffered because not a tty, we have to flush # before exit. 
sys.stdout.flush() \n""" + script rc, out, err = assert_python_ok("-c", script) data = out.decode().replace('\r', '') self.assertEqual(data, "end of main\nend of thread\n") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.1) print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter script = """if 1: from test import support childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. script = """if 1: from test import support main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() """ self._run_and_join(script) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_4_daemon_threads(self): # Check that a daemon thread cannot crash the interpreter on shutdown # by manipulating internal structures that are being disposed of in # the main thread. if support.check_sanitizer(thread=True): # some of the threads running `random_io` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") script = """if True: import os import random import sys import time import threading thread_has_run = set() def random_io(): '''Loop for a while sleeping random tiny amounts and doing some I/O.''' import test.test_threading as mod while True: with open(mod.__file__, 'rb') as in_f: stuff = in_f.read(200) with open(os.devnull, 'wb') as null_f: null_f.write(stuff) time.sleep(random.random() / 1995) thread_has_run.add(threading.current_thread()) def main(): count = 0 for _ in range(40): new_thread = threading.Thread(target=random_io) new_thread.daemon = True new_thread.start() count += 1 while len(thread_has_run) < count: time.sleep(0.001) # Trigger process shutdown sys.exit(0) main() """ rc, out, err = assert_python_ok('-c', script) self.assertFalse(err) def test_thread_from_thread(self): script = """if True: import threading import time def thread2(): time.sleep(0.05) print("OK") def thread1(): time.sleep(0.05) t2 = threading.Thread(target=thread2) t2.start() t = threading.Thread(target=thread1) t.start() # do not join() -- the interpreter waits for non-daemon threads to # finish. """ rc, out, err = assert_python_ok('-c', script) self.assertEqual(err, b"") self.assertEqual(out.strip(), b"OK") self.assertEqual(rc, 0) @skip_unless_reliable_fork def test_reinit_tls_after_fork(self): # Issue #13817: fork() would deadlock in a multithreaded program with # the ad-hoc TLS implementation. 
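# Each worker thread forks; the child exits immediately with a known
# status and the parent side waits for it, so a post-fork deadlock in
# the TLS bookkeeping would show up as a hang or an unexpected exit
# status rather than a passing run.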
def do_fork_and_wait(): # just fork a child process and wait it pid = os.fork() if pid > 0: support.wait_process(pid, exitcode=50) else: os._exit(50) # Ignore the warning about fork with threads. with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): # start a bunch of threads that will fork() child processes threads = [] for i in range(16): t = threading.Thread(target=do_fork_and_wait) threads.append(t) t.start() for t in threads: t.join() @skip_unless_reliable_fork def test_clear_threads_states_after_fork(self): # Issue #17094: check that threads states are cleared after fork() # start a bunch of threads threads = [] for i in range(16): t = threading.Thread(target=lambda : time.sleep(0.3)) threads.append(t) t.start() try: # Ignore the warning about fork with threads. with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): pid = os.fork() if pid == 0: # check that threads states have been cleared if len(sys._current_frames()) == 1: os._exit(51) else: os._exit(52) else: support.wait_process(pid, exitcode=51) finally: for t in threads: t.join() class SubinterpThreadingTests(BaseTestCase): def pipe(self): r, w = os.pipe() self.addCleanup(os.close, r) self.addCleanup(os.close, w) if hasattr(os, 'set_blocking'): os.set_blocking(r, False) return (r, w) def test_threads_join(self): # Non-daemon threads should be joined at subinterpreter shutdown # (issue #18808) r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. self.assertEqual(os.read(r, 1), b"x") def test_threads_join_2(self): # Same as above, but a delay gets introduced after the thread's # Python code returned but before the thread state is deleted. # To achieve this, we register a thread-local object which sleeps # a bit when deallocated. r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() tls.x = Sleeper() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. 
self.assertEqual(os.read(r, 1), b"x") @requires_subinterpreters def test_threads_join_with_no_main(self): r_interp, w_interp = self.pipe() INTERP = b'I' FINI = b'F' DONE = b'D' interp = interpreters.create() interp.run(f"""if True: import os import threading import time done = False def notify_fini(): global done done = True os.write({w_interp}, {FINI!r}) t.join() threading._register_atexit(notify_fini) def task(): while not done: time.sleep(0.1) os.write({w_interp}, {DONE!r}) t = threading.Thread(target=task) t.start() os.write({w_interp}, {INTERP!r}) """) interp.close() self.assertEqual(os.read(r_interp, 1), INTERP) self.assertEqual(os.read(r_interp, 1), FINI) self.assertEqual(os.read(r_interp, 1), DONE) @cpython_only def test_daemon_threads_fatal_error(self): subinterp_code = f"""if 1: import os import threading import time def f(): # Make sure the daemon thread is still running when # Py_EndInterpreter is called. time.sleep({test.support.SHORT_TIMEOUT}) threading.Thread(target=f, daemon=True).start() """ script = r"""if 1: import _testcapi _testcapi.run_in_subinterp(%r) """ % (subinterp_code,) with test.support.SuppressCrashReport(): rc, out, err = assert_python_failure("-c", script) self.assertIn("Fatal Python error: Py_EndInterpreter: " "not the last thread", err.decode()) def _check_allowed(self, before_start='', *, allowed=True, daemon_allowed=True, daemon=False, ): subinterp_code = textwrap.dedent(f""" import test.support import threading def func(): print('this should not have run!') t = threading.Thread(target=func, daemon={daemon}) {before_start} t.start() """) script = textwrap.dedent(f""" import test.support test.support.run_in_subinterp_with_config( {subinterp_code!r}, use_main_obmalloc=True, allow_fork=True, allow_exec=True, allow_threads={allowed}, allow_daemon_threads={daemon_allowed}, check_multi_interp_extensions=False, own_gil=False, ) """) with test.support.SuppressCrashReport(): _, _, err = assert_python_ok("-c", script) return err.decode() @cpython_only def test_threads_not_allowed(self): err = self._check_allowed( allowed=False, daemon_allowed=False, daemon=False, ) self.assertIn('RuntimeError', err) @cpython_only def test_daemon_threads_not_allowed(self): with self.subTest('via Thread()'): err = self._check_allowed( allowed=True, daemon_allowed=False, daemon=True, ) self.assertIn('RuntimeError', err) with self.subTest('via Thread.daemon setter'): err = self._check_allowed( 't.daemon = True', allowed=True, daemon_allowed=False, daemon=False, ) self.assertIn('RuntimeError', err) class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. 
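# Thread objects are one-shot: once start() has run, the internal
# "started" flag stays set, so a second start() raises RuntimeError
# even after the thread has finished.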
def test_start_thread_again(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, thread.start) thread.join() def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join); def test_joining_inactive_thread(self): thread = threading.Thread() self.assertRaises(RuntimeError, thread.join) def test_daemonize_active_thread(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, setattr, thread, "daemon", True) thread.join() def test_releasing_unacquired_lock(self): lock = threading.Lock() self.assertRaises(RuntimeError, lock.release) @requires_subprocess() def test_recursion_limit(self): # Issue 9670 # test that excessive recursion within a non-main thread causes # an exception rather than crashing the interpreter on platforms # like Mac OS X or FreeBSD which have small default stack sizes # for threads script = """if True: import threading def recurse(): return recurse() def outer(): try: recurse() except RecursionError: pass w = threading.Thread(target=outer) w.start() w.join() print('end of main thread') """ expected_output = "end of main thread\n" p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() data = stdout.decode().replace('\r', '') self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) self.assertEqual(data, expected_output) def test_print_exception(self): script = r"""if True: import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_1(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) sys.stderr = None running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_2(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 sys.stderr = None t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) def test_print_exception_gh_102056(self): # This used to crash. See gh-102056. 
script = r"""if True: import time import threading import _thread def f(): try: f() except RecursionError: f() def g(): try: raise ValueError() except* ValueError: f() def h(): time.sleep(1) _thread.interrupt_main() t = threading.Thread(target=h) t.start() g() t.join() """ assert_python_failure("-c", script) def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise class Issue27558(threading.Thread): exc = None def run(self): try: bare_raise() except Exception as exc: self.exc = exc thread = Issue27558() thread.start() thread.join() self.assertIsNotNone(thread.exc) self.assertIsInstance(thread.exc, RuntimeError) # explicitly break the reference cycle to not leak a dangling thread thread.exc = None def test_multithread_modify_file_noerror(self): # See issue25872 def modify_file(): with open(os_helper.TESTFN, 'w', encoding='utf-8') as fp: fp.write(' ') traceback.format_stack() self.addCleanup(os_helper.unlink, os_helper.TESTFN) threads = [ threading.Thread(target=modify_file) for i in range(100) ] for t in threads: t.start() t.join() class ThreadRunFail(threading.Thread): def run(self): raise ValueError("run failed") class ExceptHookTests(BaseTestCase): def setUp(self): restore_default_excepthook(self) super().setUp() def test_excepthook(self): with support.captured_output("stderr") as stderr: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {thread.name}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("run failed")', stderr) self.assertIn('ValueError: run failed', stderr) @support.cpython_only def test_excepthook_thread_None(self): # threading.excepthook called with thread=None: log the thread # identifier in this case. 
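# ExceptHookArgs is a namedtuple of (exc_type, exc_value,
# exc_traceback, thread); with thread=None the default excepthook has
# no thread name to report, so it falls back to threading.get_ident()
# when writing the traceback to stderr.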
with support.captured_output("stderr") as stderr: try: raise ValueError("bug") except Exception as exc: args = threading.ExceptHookArgs([*sys.exc_info(), None]) try: threading.excepthook(args) finally: # Explicitly break a reference cycle args = None stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {threading.get_ident()}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("bug")', stderr) self.assertIn('ValueError: bug', stderr) def test_system_exit(self): class ThreadExit(threading.Thread): def run(self): sys.exit(1) # threading.excepthook() silently ignores SystemExit with support.captured_output("stderr") as stderr: thread = ThreadExit() thread.start() thread.join() self.assertEqual(stderr.getvalue(), '') def test_custom_excepthook(self): args = None def hook(hook_args): nonlocal args args = hook_args try: with support.swap_attr(threading, 'excepthook', hook): thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(args.exc_type, ValueError) self.assertEqual(str(args.exc_value), 'run failed') self.assertEqual(args.exc_traceback, args.exc_value.__traceback__) self.assertIs(args.thread, thread) finally: # Break reference cycle args = None def test_custom_excepthook_fail(self): def threading_hook(args): raise ValueError("threading_hook failed") err_str = None def sys_hook(exc_type, exc_value, exc_traceback): nonlocal err_str err_str = str(exc_value) with support.swap_attr(threading, 'excepthook', threading_hook), \ support.swap_attr(sys, 'excepthook', sys_hook), \ support.captured_output('stderr') as stderr: thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(stderr.getvalue(), 'Exception in threading.excepthook:\n') self.assertEqual(err_str, 'threading_hook failed') def test_original_excepthook(self): def run_thread(): with support.captured_output("stderr") as output: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() return output.getvalue() def threading_hook(args): print("Running a thread failed", file=sys.stderr) default_output = run_thread() with support.swap_attr(threading, 'excepthook', threading_hook): custom_hook_output = run_thread() threading.excepthook = threading.__excepthook__ recovered_output = run_thread() self.assertEqual(default_output, recovered_output) self.assertNotEqual(default_output, custom_hook_output) self.assertEqual(custom_hook_output, "Running a thread failed\n") class TimerTests(BaseTestCase): def setUp(self): BaseTestCase.setUp(self) self.callback_args = [] self.callback_event = threading.Event() def test_init_immutable_default_args(self): # Issue 17435: constructor defaults were mutable objects, they could be # mutated via the object attributes and affect other Timer objects. 
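# Mutating timer1.args / timer1.kwargs after its callback has fired
# must not leak into timer2, which is constructed with the default
# (empty) args and kwargs.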
timer1 = threading.Timer(0.01, self._callback_spy) timer1.start() self.callback_event.wait() timer1.args.append("blah") timer1.kwargs["foo"] = "bar" self.callback_event.clear() timer2 = threading.Timer(0.01, self._callback_spy) timer2.start() self.callback_event.wait() self.assertEqual(len(self.callback_args), 2) self.assertEqual(self.callback_args, [((), {}), ((), {})]) timer1.join() timer2.join() def _callback_spy(self, *args, **kwargs): self.callback_args.append((args[:], kwargs.copy())) self.callback_event.set() class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) class PyRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._PyRLock) @unittest.skipIf(threading._CRLock is None, 'RLock not implemented in C') class CRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._CRLock) class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) class ConditionAsRLockTests(lock_tests.RLockTests): # Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) def test_recursion_count(self): self.skipTest("Condition does not expose _recursion_count()") class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) class BarrierTests(lock_tests.BarrierTests): barriertype = staticmethod(threading.Barrier) class MiscTestCase(unittest.TestCase): def test__all__(self): restore_default_excepthook(self) extra = {"ThreadError"} not_exported = {'currentThread', 'activeCount'} support.check__all__(self, threading, ('threading', '_thread'), extra=extra, not_exported=not_exported) @requires_subprocess() def test_gh112826_missing__thread__is_main_interpreter(self): with os_helper.temp_dir() as tempdir: modname = '_thread_fake' import os.path filename = os.path.join(tempdir, modname + '.py') with open(filename, 'w') as outfile: outfile.write("""if True: import _thread globals().update(vars(_thread)) del _is_main_interpreter """) expected_output = b'success!' _, out, err = assert_python_ok("-c", f"""if True: import sys sys.path.insert(0, {tempdir!r}) import {modname} sys.modules['_thread'] = {modname} del sys.modules[{modname!r}] import threading print({expected_output.decode('utf-8')!r}, end='') """) self.assertEqual(out, expected_output) self.assertEqual(err, b'') class InterruptMainTests(unittest.TestCase): def check_interrupt_main_with_signal_handler(self, signum): def handler(signum, frame): 1/0 old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) with self.assertRaises(ZeroDivisionError): _thread.interrupt_main() def check_interrupt_main_noerror(self, signum): handler = signal.getsignal(signum) try: # No exception should arise. signal.signal(signum, signal.SIG_IGN) _thread.interrupt_main(signum) signal.signal(signum, signal.SIG_DFL) _thread.interrupt_main(signum) finally: # Restore original handler signal.signal(signum, handler) def test_interrupt_main_subthread(self): # Calling start_new_thread with a function that executes interrupt_main # should raise KeyboardInterrupt upon completion. 
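# interrupt_main() only schedules the equivalent of an incoming
# SIGINT; the KeyboardInterrupt is delivered in the main thread,
# typically while it is blocked in join(), not in the worker that
# requested it.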
def call_interrupt(): _thread.interrupt_main() t = threading.Thread(target=call_interrupt) with self.assertRaises(KeyboardInterrupt): t.start() t.join() t.join() def test_interrupt_main_mainthread(self): # Make sure that if interrupt_main is called in main thread that # KeyboardInterrupt is raised instantly. with self.assertRaises(KeyboardInterrupt): _thread.interrupt_main() def test_interrupt_main_with_signal_handler(self): self.check_interrupt_main_with_signal_handler(signal.SIGINT) self.check_interrupt_main_with_signal_handler(signal.SIGTERM) def test_interrupt_main_noerror(self): self.check_interrupt_main_noerror(signal.SIGINT) self.check_interrupt_main_noerror(signal.SIGTERM) def test_interrupt_main_invalid_signal(self): self.assertRaises(ValueError, _thread.interrupt_main, -1) self.assertRaises(ValueError, _thread.interrupt_main, signal.NSIG) self.assertRaises(ValueError, _thread.interrupt_main, 1000000) @threading_helper.reap_threads def test_can_interrupt_tight_loops(self): cont = [True] started = [False] interrupted = [False] def worker(started, cont, interrupted): iterations = 100_000_000 started[0] = True while cont[0]: if iterations: iterations -= 1 else: return pass interrupted[0] = True t = threading.Thread(target=worker,args=(started, cont, interrupted)) t.start() while not started[0]: pass cont[0] = False t.join() self.assertTrue(interrupted[0]) class AtexitTests(unittest.TestCase): def test_atexit_output(self): rc, out, err = assert_python_ok("-c", """if True: import threading def run_last(): print('parrot') threading._register_atexit(run_last) """) self.assertFalse(err) self.assertEqual(out.strip(), b'parrot') def test_atexit_called_once(self): rc, out, err = assert_python_ok("-c", """if True: import threading from unittest.mock import Mock mock = Mock() threading._register_atexit(mock) mock.assert_not_called() # force early shutdown to ensure it was called once threading._shutdown() mock.assert_called_once() """) self.assertFalse(err) def test_atexit_after_shutdown(self): # The only way to do this is by registering an atexit within # an atexit, which is intended to raise an exception. rc, out, err = assert_python_ok("-c", """if True: import threading def func(): pass def run_last(): threading._register_atexit(func) threading._register_atexit(run_last) """) self.assertTrue(err) self.assertIn("RuntimeError: can't register atexit after shutdown", err.decode()) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_weakref.py000066400000000000000000002245061471441230600213720ustar00rootroot00000000000000import gc import sys import doctest import unittest import collections import weakref import operator import contextlib import copy import threading import time import random from test import support from test.support import script_helper, ALWAYS_EQ from test.support import gc_collect from test.support import threading_helper # Used in ReferencesTestCase.test_ref_created_during_del() . ref_from_del = None # Used by FinalizeTestCase as a global that may be replaced by None # when the interpreter shuts down. 
_global_var = 'foobar' class C: def method(self): pass class Callable: bar = None def __call__(self, x): self.bar = x def create_function(): def f(): pass return f def create_bound_method(): return C().method class Object: def __init__(self, arg): self.arg = arg def __repr__(self): return "" % self.arg def __eq__(self, other): if isinstance(other, Object): return self.arg == other.arg return NotImplemented def __lt__(self, other): if isinstance(other, Object): return self.arg < other.arg return NotImplemented def __hash__(self): return hash(self.arg) def some_method(self): return 4 def other_method(self): return 5 class RefCycle: def __init__(self): self.cycle = self class TestBase(unittest.TestCase): def setUp(self): self.cbcalled = 0 def callback(self, ref): self.cbcalled += 1 @contextlib.contextmanager def collect_in_thread(period=0.0001): """ Ensure GC collections happen in a different thread, at a high frequency. """ please_stop = False def collect(): while not please_stop: time.sleep(period) gc.collect() with support.disable_gc(): t = threading.Thread(target=collect) t.start() try: yield finally: please_stop = True t.join() class ReferencesTestCase(TestBase): def test_basic_ref(self): self.check_basic_ref(C) self.check_basic_ref(create_function) self.check_basic_ref(create_bound_method) # Just make sure the tp_repr handler doesn't raise an exception. # Live reference: o = C() wr = weakref.ref(o) repr(wr) # Dead reference: del o repr(wr) def test_repr_failure_gh99184(self): class MyConfig(dict): def __getattr__(self, x): return self[x] obj = MyConfig(offset=5) obj_weakref = weakref.ref(obj) self.assertIn('MyConfig', repr(obj_weakref)) self.assertIn('MyConfig', str(obj_weakref)) def test_basic_callback(self): self.check_basic_callback(C) self.check_basic_callback(create_function) self.check_basic_callback(create_bound_method) @support.cpython_only def test_cfunction(self): import _testcapi create_cfunction = _testcapi.create_cfunction f = create_cfunction() wr = weakref.ref(f) self.assertIs(wr(), f) del f self.assertIsNone(wr()) self.check_basic_ref(create_cfunction) self.check_basic_callback(create_cfunction) def test_multiple_callbacks(self): o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o gc_collect() # For PyPy or other GCs. self.assertIsNone(ref1(), "expected reference to be invalidated") self.assertIsNone(ref2(), "expected reference to be invalidated") self.assertEqual(self.cbcalled, 2, "callback not called the right number of times") def test_multiple_selfref_callbacks(self): # Make sure all references are invalidated before callbacks are called # # What's important here is that we're using the first # reference in the callback invoked on the second reference # (the most recently created ref is cleaned up first). This # tests that all references to the object are invalidated # before any of the callbacks are invoked, so that we only # have one invocation of _weakref.c:cleanup_helper() active # for a particular object at a time. # def callback(object, self=self): self.ref() c = C() self.ref = weakref.ref(c, callback) ref1 = weakref.ref(c, callback) del c def test_constructor_kwargs(self): c = C() self.assertRaises(TypeError, weakref.ref, c, callback=None) def test_proxy_ref(self): o = C() o.bar = 1 ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o gc_collect() # For PyPy or other GCs. 
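# With the referent gone, any attribute access through a proxy (even
# for an attribute that used to exist, like .bar) must raise
# ReferenceError rather than returning stale data.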
def check(proxy): proxy.bar self.assertRaises(ReferenceError, check, ref1) self.assertRaises(ReferenceError, check, ref2) ref3 = weakref.proxy(C()) gc_collect() # For PyPy or other GCs. self.assertRaises(ReferenceError, bool, ref3) self.assertEqual(self.cbcalled, 2) def check_basic_ref(self, factory): o = factory() ref = weakref.ref(o) self.assertIsNotNone(ref(), "weak reference to live object should be live") o2 = ref() self.assertIs(o, o2, "() should return original object if live") def check_basic_callback(self, factory): self.cbcalled = 0 o = factory() ref = weakref.ref(o, self.callback) del o gc_collect() # For PyPy or other GCs. self.assertEqual(self.cbcalled, 1, "callback did not properly set 'cbcalled'") self.assertIsNone(ref(), "ref2 should be dead after deleting object reference") def test_ref_reuse(self): o = C() ref1 = weakref.ref(o) # create a proxy to make sure that there's an intervening creation # between these two; it should make no difference proxy = weakref.proxy(o) ref2 = weakref.ref(o) self.assertIs(ref1, ref2, "reference object w/out callback should be re-used") o = C() proxy = weakref.proxy(o) ref1 = weakref.ref(o) ref2 = weakref.ref(o) self.assertIs(ref1, ref2, "reference object w/out callback should be re-used") self.assertEqual(weakref.getweakrefcount(o), 2, "wrong weak ref count for object") del proxy gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefcount(o), 1, "wrong weak ref count for object after deleting proxy") def test_proxy_reuse(self): o = C() proxy1 = weakref.proxy(o) ref = weakref.ref(o) proxy2 = weakref.proxy(o) self.assertIs(proxy1, proxy2, "proxy object w/out callback should have been re-used") def test_basic_proxy(self): o = C() self.check_proxy(o, weakref.proxy(o)) L = collections.UserList() p = weakref.proxy(L) self.assertFalse(p, "proxy for empty UserList should be false") p.append(12) self.assertEqual(len(L), 1) self.assertTrue(p, "proxy for non-empty UserList should be true") p[:] = [2, 3] self.assertEqual(len(L), 2) self.assertEqual(len(p), 2) self.assertIn(3, p, "proxy didn't support __contains__() properly") p[1] = 5 self.assertEqual(L[1], 5) self.assertEqual(p[1], 5) L2 = collections.UserList(L) p2 = weakref.proxy(L2) self.assertEqual(p, p2) ## self.assertEqual(repr(L2), repr(p2)) L3 = collections.UserList(range(10)) p3 = weakref.proxy(L3) self.assertEqual(L3[:], p3[:]) self.assertEqual(L3[5:], p3[5:]) self.assertEqual(L3[:5], p3[:5]) self.assertEqual(L3[2:5], p3[2:5]) def test_proxy_unicode(self): # See bug 5037 class C(object): def __str__(self): return "string" def __bytes__(self): return b"bytes" instance = C() self.assertIn("__bytes__", dir(weakref.proxy(instance))) self.assertEqual(bytes(weakref.proxy(instance)), b"bytes") def test_proxy_index(self): class C: def __index__(self): return 10 o = C() p = weakref.proxy(o) self.assertEqual(operator.index(p), 10) def test_proxy_div(self): class C: def __floordiv__(self, other): return 42 def __ifloordiv__(self, other): return 21 o = C() p = weakref.proxy(o) self.assertEqual(p // 5, 42) p //= 5 self.assertEqual(p, 21) def test_proxy_matmul(self): class C: def __matmul__(self, other): return 1729 def __rmatmul__(self, other): return -163 def __imatmul__(self, other): return 561 o = C() p = weakref.proxy(o) self.assertEqual(p @ 5, 1729) self.assertEqual(5 @ p, -163) p @= 5 self.assertEqual(p, 561) # The PyWeakref_* C API is documented as allowing either NULL or # None as the value for the callback, where either means "no # callback". 
The "no callback" ref and proxy objects are supposed # to be shared so long as they exist by all callers so long as # they are active. In Python 2.3.3 and earlier, this guarantee # was not honored, and was broken in different ways for # PyWeakref_NewRef() and PyWeakref_NewProxy(). (Two tests.) def test_shared_ref_without_callback(self): self.check_shared_without_callback(weakref.ref) def test_shared_proxy_without_callback(self): self.check_shared_without_callback(weakref.proxy) def check_shared_without_callback(self, makeref): o = Object(1) p1 = makeref(o, None) p2 = makeref(o, None) self.assertIs(p1, p2, "both callbacks were None in the C API") del p1, p2 p1 = makeref(o) p2 = makeref(o, None) self.assertIs(p1, p2, "callbacks were NULL, None in the C API") del p1, p2 p1 = makeref(o) p2 = makeref(o) self.assertIs(p1, p2, "both callbacks were NULL in the C API") del p1, p2 p1 = makeref(o, None) p2 = makeref(o) self.assertIs(p1, p2, "callbacks were None, NULL in the C API") def test_callable_proxy(self): o = Callable() ref1 = weakref.proxy(o) self.check_proxy(o, ref1) self.assertIs(type(ref1), weakref.CallableProxyType, "proxy is not of callable type") ref1('twinkies!') self.assertEqual(o.bar, 'twinkies!', "call through proxy not passed through to original") ref1(x='Splat.') self.assertEqual(o.bar, 'Splat.', "call through proxy not passed through to original") # expect due to too few args self.assertRaises(TypeError, ref1) # expect due to too many args self.assertRaises(TypeError, ref1, 1, 2, 3) def check_proxy(self, o, proxy): o.foo = 1 self.assertEqual(proxy.foo, 1, "proxy does not reflect attribute addition") o.foo = 2 self.assertEqual(proxy.foo, 2, "proxy does not reflect attribute modification") del o.foo self.assertFalse(hasattr(proxy, 'foo'), "proxy does not reflect attribute removal") proxy.foo = 1 self.assertEqual(o.foo, 1, "object does not reflect attribute addition via proxy") proxy.foo = 2 self.assertEqual(o.foo, 2, "object does not reflect attribute modification via proxy") del proxy.foo self.assertFalse(hasattr(o, 'foo'), "object does not reflect attribute removal via proxy") def test_proxy_deletion(self): # Test clearing of SF bug #762891 class Foo: result = None def __delitem__(self, accessor): self.result = accessor g = Foo() f = weakref.proxy(g) del f[0] self.assertEqual(f.result, 0) def test_proxy_bool(self): # Test clearing of SF bug #1170766 class List(list): pass lyst = List() self.assertEqual(bool(weakref.proxy(lyst)), bool(lyst)) def test_proxy_iter(self): # Test fails with a debug build of the interpreter # (see bpo-38395). obj = None class MyObj: def __iter__(self): nonlocal obj del obj return NotImplemented obj = MyObj() p = weakref.proxy(obj) with self.assertRaises(TypeError): # "blech" in p calls MyObj.__iter__ through the proxy, # without keeping a reference to the real object, so it # can be killed in the middle of the call "blech" in p def test_proxy_next(self): arr = [4, 5, 6] def iterator_func(): yield from arr it = iterator_func() class IteratesWeakly: def __iter__(self): return weakref.proxy(it) weak_it = IteratesWeakly() # Calls proxy.__next__ self.assertEqual(list(weak_it), [4, 5, 6]) def test_proxy_bad_next(self): # bpo-44720: PyIter_Next() shouldn't be called if the reference # isn't an iterator. 
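# __iter__ here hands back a proxy to a plain callable, so iterating
# over A() must fail with the "non-iterator" TypeError instead of the
# proxy being passed on to PyIter_Next().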
not_an_iterator = lambda: 0 class A: def __iter__(self): return weakref.proxy(not_an_iterator) a = A() msg = "Weakref proxy referenced a non-iterator" with self.assertRaisesRegex(TypeError, msg): list(a) def test_proxy_reversed(self): class MyObj: def __len__(self): return 3 def __reversed__(self): return iter('cba') obj = MyObj() self.assertEqual("".join(reversed(weakref.proxy(obj))), "cba") def test_proxy_hash(self): class MyObj: def __hash__(self): return 42 obj = MyObj() with self.assertRaises(TypeError): hash(weakref.proxy(obj)) class MyObj: __hash__ = None obj = MyObj() with self.assertRaises(TypeError): hash(weakref.proxy(obj)) def test_getweakrefcount(self): o = C() ref1 = weakref.ref(o) ref2 = weakref.ref(o, self.callback) self.assertEqual(weakref.getweakrefcount(o), 2, "got wrong number of weak reference objects") proxy1 = weakref.proxy(o) proxy2 = weakref.proxy(o, self.callback) self.assertEqual(weakref.getweakrefcount(o), 4, "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefcount(o), 0, "weak reference objects not unlinked from" " referent when discarded.") # assumes ints do not support weakrefs self.assertEqual(weakref.getweakrefcount(1), 0, "got wrong number of weak reference objects for int") def test_getweakrefs(self): o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [ref2], "list of refs does not match") o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [ref1], "list of refs does not match") del ref1 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [], "list of refs not cleared") # assumes ints do not support weakrefs self.assertEqual(weakref.getweakrefs(1), [], "list of refs does not match for int") def test_newstyle_number_ops(self): class F(float): pass f = F(2.0) p = weakref.proxy(f) self.assertEqual(p + 1.0, 3.0) self.assertEqual(1.0 + p, 3.0) # this used to SEGV def test_callbacks_protected(self): # Callbacks protected from already-set exceptions? # Regression test for SF bug #478534. class BogusError(Exception): pass data = {} def remove(k): del data[k] def encapsulate(): f = lambda : () data[weakref.ref(f, remove)] = None raise BogusError try: encapsulate() except BogusError: pass else: self.fail("exception not properly restored") try: encapsulate() except BogusError: pass else: self.fail("exception not properly restored") def test_sf_bug_840829(self): # "weakref callbacks and gc corrupt memory" # subtype_dealloc erroneously exposed a new-style instance # already in the process of getting deallocated to gc, # causing double-deallocation if the instance had a weakref # callback that triggered gc. # If the bug exists, there probably won't be an obvious symptom # in a release build. In a debug build, a segfault will occur # when the second attempt to remove the instance from the "list # of all objects" occurs. import gc class C(object): pass c = C() wr = weakref.ref(c, lambda ignore: gc.collect()) del c # There endeth the first part. It gets worse. 
del wr c1 = C() c1.i = C() wr = weakref.ref(c1.i, lambda ignore: gc.collect()) c2 = C() c2.c1 = c1 del c1 # still alive because c2 points to it # Now when subtype_dealloc gets called on c2, it's not enough just # that c2 is immune from gc while the weakref callbacks associated # with c2 execute (there are none in this 2nd half of the test, btw). # subtype_dealloc goes on to call the base classes' deallocs too, # so any gc triggered by weakref callbacks associated with anything # torn down by a base class dealloc can also trigger double # deallocation of c2. del c2 def test_callback_in_cycle(self): import gc class J(object): pass class II(object): def acallback(self, ignore): self.J I = II() I.J = J I.wr = weakref.ref(J, I.acallback) # Now J and II are each in a self-cycle (as all new-style class # objects are, since their __mro__ points back to them). I holds # both a weak reference (I.wr) and a strong reference (I.J) to class # J. I is also in a cycle (I.wr points to a weakref that references # I.acallback). When we del these three, they all become trash, but # the cycles prevent any of them from getting cleaned up immediately. # Instead they have to wait for cyclic gc to deduce that they're # trash. # # gc used to call tp_clear on all of them, and the order in which # it does that is pretty accidental. The exact order in which we # built up these things manages to provoke gc into running tp_clear # in just the right order (I last). Calling tp_clear on II leaves # behind an insane class object (its __mro__ becomes NULL). Calling # tp_clear on J breaks its self-cycle, but J doesn't get deleted # just then because of the strong reference from I.J. Calling # tp_clear on I starts to clear I's __dict__, and just happens to # clear I.J first -- I.wr is still intact. That removes the last # reference to J, which triggers the weakref callback. The callback # tries to do "self.J", and instances of new-style classes look up # attributes ("J") in the class dict first. The class (II) wants to # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II gc.collect() def test_callback_reachable_one_way(self): import gc # This one broke the first patch that fixed the previous test. In this case, # the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to # c1 from c2, but not vice-versa. The result was that c2's __dict__ # got tp_clear'ed by the time the c2.cb callback got invoked. class C: def cb(self, ignore): self.me self.c1 self.wr c1, c2 = C(), C() c2.me = c2 c2.c1 = c1 c2.wr = weakref.ref(c1, c2.cb) del c1, c2 gc.collect() def test_callback_different_classes(self): import gc # Like test_callback_reachable_one_way, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop # c2's class (C) from getting tp_clear'ed before c2.cb is invoked. # The result was a segfault (C.__mro__ was NULL when the callback # tried to look up self.me). class C(object): def cb(self, ignore): self.me self.c1 self.wr class D: pass c1, c2 = D(), C() c2.me = c2 c2.c1 = c1 c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D gc.collect() def test_callback_in_cycle_resurrection(self): import gc # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the # objects reachable via the callback couldn't be in cyclic trash # to begin with -- the callback would act like an external root). # But gc clears trash weakrefs with callbacks early now, which # disables the callbacks, so the callbacks shouldn't get called # at all (and so nothing actually gets resurrected). alist = [] class C(object): def __init__(self, value): self.attribute = value def acallback(self, ignore): alist.append(self.c) c1, c2 = C(1), C(2) c1.c = c2 c2.c = c1 c1.wr = weakref.ref(c2, c1.acallback) c2.wr = weakref.ref(c1, c2.acallback) def C_went_away(ignore): alist.append("C went away") wr = weakref.ref(C, C_went_away) del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything gc.collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that # callback should have been invoked when C went away. self.assertEqual(alist, ["C went away"]) # The remaining weakref should be dead now (its callback ran). self.assertEqual(wr(), None) del alist[:] gc.collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): import gc # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): alist.append("safe_callback called") class C(object): def cb(self, ignore): alist.append("cb called") c, d = C(), C() c.other = d d.other = c callback = c.cb c.wr = weakref.ref(d, callback) # this won't trigger d.wr = weakref.ref(callback, d.cb) # ditto external_wr = weakref.ref(callback, safe_callback) # but this will self.assertIs(external_wr(), callback) # The weakrefs attached to c and d should get cleared, so that # C.cb is never called. But external_wr isn't part of the cyclic # trash, and no cyclic trash is reachable from it, so safe_callback # should get invoked when the bound method object callback (c.cb) # -- which is itself a callback, and also part of the cyclic trash -- # gets reclaimed at the end of gc. del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles gc.collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] gc.collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): self.check_gc_during_creation(weakref.ref) def test_gc_during_proxy_creation(self): self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): thresholds = gc.get_threshold() gc.set_threshold(1, 1, 1) gc.collect() class A: pass def callback(*args): pass referenced = A() a = A() a.a = a a.wr = makeref(referenced) try: # now make sure the object and the ref get labeled as # cyclic trash: a = A() weakref.ref(referenced, callback) finally: gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 # A weakref created in an object's __del__() would crash the # interpreter when the weakref was cleaned up since it would refer to # non-existent memory. This test should not segfault the interpreter. 
class Target(object): def __del__(self): global ref_from_del ref_from_del = weakref.ref(self) w = Target() def test_init(self): # Issue 3634 # .__init__() doesn't check errors correctly r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here gc.collect() def test_classes(self): # Check that classes are weakrefable. class A(object): pass l = [] weakref.ref(int) a = weakref.ref(A, l.append) A = None gc.collect() self.assertEqual(a(), None) self.assertEqual(l, [a]) def test_equality(self): # Alive weakrefs defer equality testing to their underlying object. x = Object(1) y = Object(1) z = Object(2) a = weakref.ref(x) b = weakref.ref(y) c = weakref.ref(z) d = weakref.ref(x) # Note how we directly test the operators here, to stress both # __eq__ and __ne__. self.assertTrue(a == b) self.assertFalse(a != b) self.assertFalse(a == c) self.assertTrue(a != c) self.assertTrue(a == d) self.assertFalse(a != d) self.assertFalse(a == x) self.assertTrue(a != x) self.assertTrue(a == ALWAYS_EQ) self.assertFalse(a != ALWAYS_EQ) del x, y, z gc.collect() for r in a, b, c: # Sanity check self.assertIs(r(), None) # Dead weakrefs compare by identity: whether `a` and `d` are the # same weakref object is an implementation detail, since they pointed # to the same original object and didn't have a callback. # (see issue #16453). self.assertFalse(a == b) self.assertTrue(a != b) self.assertFalse(a == c) self.assertTrue(a != c) self.assertEqual(a == d, a is d) self.assertEqual(a != d, a is not d) def test_ordering(self): # weakrefs cannot be ordered, even if the underlying objects can. ops = [operator.lt, operator.gt, operator.le, operator.ge] x = Object(1) y = Object(1) a = weakref.ref(x) b = weakref.ref(y) for op in ops: self.assertRaises(TypeError, op, a, b) # Same when dead. del x, y gc.collect() for op in ops: self.assertRaises(TypeError, op, a, b) def test_hashing(self): # Alive weakrefs hash the same as the underlying object x = Object(42) y = Object(42) a = weakref.ref(x) b = weakref.ref(y) self.assertEqual(hash(a), hash(42)) del x, y gc.collect() # Dead weakrefs: # - retain their hash is they were hashed when alive; # - otherwise, cannot be hashed. self.assertEqual(hash(a), hash(42)) self.assertRaises(TypeError, hash, b) def test_trashcan_16602(self): # Issue #16602: when a weakref's target was part of a long # deallocation chain, the trashcan mechanism could delay clearing # of the weakref and make the target object visible from outside # code even though its refcount had dropped to 0. A crash ensued. 
class C: def __init__(self, parent): if not parent: return wself = weakref.ref(self) def cb(wparent): o = wself() self.wparent = weakref.ref(parent, cb) d = weakref.WeakKeyDictionary() root = c = C(None) for n in range(100): d[c] = c = C(c) del root gc.collect() def test_callback_attribute(self): x = Object(1) callback = lambda ref: None ref1 = weakref.ref(x, callback) self.assertIs(ref1.__callback__, callback) ref2 = weakref.ref(x) self.assertIsNone(ref2.__callback__) def test_callback_attribute_after_deletion(self): x = Object(1) ref = weakref.ref(x, self.callback) self.assertIsNotNone(ref.__callback__) del x support.gc_collect() self.assertIsNone(ref.__callback__) def test_set_callback_attribute(self): x = Object(1) callback = lambda ref: None ref1 = weakref.ref(x, callback) with self.assertRaises(AttributeError): ref1.__callback__ = lambda ref: None def test_callback_gcs(self): class ObjectWithDel(Object): def __del__(self): pass x = ObjectWithDel(1) ref1 = weakref.ref(x, lambda ref: support.gc_collect()) del x support.gc_collect() class SubclassableWeakrefTestCase(TestBase): def test_subclass_refs(self): class MyRef(weakref.ref): def __init__(self, ob, callback=None, value=42): self.value = value super().__init__(ob, callback) def __call__(self): self.called = True return super().__call__() o = Object("foo") mr = MyRef(o, value=24) self.assertIs(mr(), o) self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o gc_collect() # For PyPy or other GCs. self.assertIsNone(mr()) self.assertTrue(mr.called) def test_subclass_refs_dont_replace_standard_refs(self): class MyRef(weakref.ref): pass o = Object(42) r1 = MyRef(o) r2 = weakref.ref(o) self.assertIsNot(r1, r2) self.assertEqual(weakref.getweakrefs(o), [r2, r1]) self.assertEqual(weakref.getweakrefcount(o), 2) r3 = MyRef(o) self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) self.assertIs(r2, refs[0]) self.assertIn(r1, refs[1:]) self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): pass o = Object(42) r1 = MyRef(o, id) r2 = MyRef(o, str) self.assertIsNot(r1, r2) refs = weakref.getweakrefs(o) self.assertIn(r1, refs) self.assertIn(r2, refs) def test_subclass_refs_with_slots(self): class MyRef(weakref.ref): __slots__ = "slot1", "slot2" def __new__(type, ob, callback, slot1, slot2): return weakref.ref.__new__(type, ob, callback) def __init__(self, ob, callback, slot1, slot2): self.slot1 = slot1 self.slot2 = slot2 def meth(self): return self.slot1 + self.slot2 o = Object(42) r = MyRef(o, None, "abc", "def") self.assertEqual(r.slot1, "abc") self.assertEqual(r.slot2, "def") self.assertEqual(r.meth(), "abcdef") self.assertFalse(hasattr(r, "__dict__")) def test_subclass_refs_with_cycle(self): """Confirm https://bugs.python.org/issue3100 is fixed.""" # An instance of a weakref subclass can have attributes. # If such a weakref holds the only strong reference to the object, # deleting the weakref will delete the object. In this case, # the callback must not be called, because the ref object is # being deleted. 
class MyRef(weakref.ref): pass # Use a local callback, for "regrtest -R::" # to detect refcounting problems def callback(w): self.cbcalled += 1 o = C() r1 = MyRef(o, callback) r1.o = o del o del r1 # Used to crash here self.assertEqual(self.cbcalled, 0) # Same test, with two weakrefs to the same object # (since code paths are different) o = C() r1 = MyRef(o, callback) r2 = MyRef(o, callback) r1.r = r2 r2.o = o del o del r2 del r1 # Used to crash here self.assertEqual(self.cbcalled, 0) class WeakMethodTestCase(unittest.TestCase): def _subclass(self): """Return an Object subclass overriding `some_method`.""" class C(Object): def some_method(self): return 6 return C def test_alive(self): o = Object(1) r = weakref.WeakMethod(o.some_method) self.assertIsInstance(r, weakref.ReferenceType) self.assertIsInstance(r(), type(o.some_method)) self.assertIs(r().__self__, o) self.assertIs(r().__func__, o.some_method.__func__) self.assertEqual(r()(), 4) def test_object_dead(self): o = Object(1) r = weakref.WeakMethod(o.some_method) del o gc.collect() self.assertIs(r(), None) def test_method_dead(self): C = self._subclass() o = C(1) r = weakref.WeakMethod(o.some_method) del C.some_method gc.collect() self.assertIs(r(), None) def test_callback_when_object_dead(self): # Test callback behaviour when object dies first. C = self._subclass() calls = [] def cb(arg): calls.append(arg) o = C(1) r = weakref.WeakMethod(o.some_method, cb) del o gc.collect() self.assertEqual(calls, [r]) # Callback is only called once. C.some_method = Object.some_method gc.collect() self.assertEqual(calls, [r]) def test_callback_when_method_dead(self): # Test callback behaviour when method dies first. C = self._subclass() calls = [] def cb(arg): calls.append(arg) o = C(1) r = weakref.WeakMethod(o.some_method, cb) del C.some_method gc.collect() self.assertEqual(calls, [r]) # Callback is only called once. del o gc.collect() self.assertEqual(calls, [r]) @support.cpython_only def test_no_cycles(self): # A WeakMethod doesn't create any reference cycle to itself. o = Object(1) def cb(_): pass r = weakref.WeakMethod(o.some_method, cb) wr = weakref.ref(r) del r self.assertIs(wr(), None) def test_equality(self): def _eq(a, b): self.assertTrue(a == b) self.assertFalse(a != b) def _ne(a, b): self.assertTrue(a != b) self.assertFalse(a == b) x = Object(1) y = Object(1) a = weakref.WeakMethod(x.some_method) b = weakref.WeakMethod(y.some_method) c = weakref.WeakMethod(x.other_method) d = weakref.WeakMethod(y.other_method) # Objects equal, same method _eq(a, b) _eq(c, d) # Objects equal, different method _ne(a, c) _ne(a, d) _ne(b, c) _ne(b, d) # Objects unequal, same or different method z = Object(2) e = weakref.WeakMethod(z.some_method) f = weakref.WeakMethod(z.other_method) _ne(a, e) _ne(a, f) _ne(b, e) _ne(b, f) # Compare with different types _ne(a, x.some_method) _eq(a, ALWAYS_EQ) del x, y, z gc.collect() # Dead WeakMethods compare by identity refs = a, b, c, d, e, f for q in refs: for r in refs: self.assertEqual(q == r, q is r) self.assertEqual(q != r, q is not r) def test_hashing(self): # Alive WeakMethods are hashable if the underlying object is # hashable. x = Object(1) y = Object(1) a = weakref.WeakMethod(x.some_method) b = weakref.WeakMethod(y.some_method) c = weakref.WeakMethod(y.other_method) # Since WeakMethod objects are equal, the hashes should be equal. 
self.assertEqual(hash(a), hash(b)) ha = hash(a) # Dead WeakMethods retain their old hash value del x, y gc.collect() self.assertEqual(hash(a), ha) self.assertEqual(hash(b), ha) # If it wasn't hashed when alive, a dead WeakMethod cannot be hashed. self.assertRaises(TypeError, hash, c) class MappingTestCase(TestBase): COUNT = 10 def check_len_cycles(self, dict_type, cons): N = 20 items = [RefCycle() for i in range(N)] dct = dict_type(cons(o) for o in items) # Keep an iterator alive it = dct.items() try: next(it) except StopIteration: pass del items gc.collect() n1 = len(dct) del it gc.collect() n2 = len(dct) # one item may be kept alive inside the iterator self.assertIn(n1, (0, 1)) self.assertEqual(n2, 0) def test_weak_keyed_len_cycles(self): self.check_len_cycles(weakref.WeakKeyDictionary, lambda k: (k, 1)) def test_weak_valued_len_cycles(self): self.check_len_cycles(weakref.WeakValueDictionary, lambda k: (1, k)) def check_len_race(self, dict_type, cons): # Extended sanity checks for len() in the face of cyclic collection self.addCleanup(gc.set_threshold, *gc.get_threshold()) for th in range(1, 100): N = 20 gc.collect(0) gc.set_threshold(th, th, th) items = [RefCycle() for i in range(N)] dct = dict_type(cons(o) for o in items) del items # All items will be collected at next garbage collection pass it = dct.items() try: next(it) except StopIteration: pass n1 = len(dct) del it n2 = len(dct) self.assertGreaterEqual(n1, 0) self.assertLessEqual(n1, N) self.assertGreaterEqual(n2, 0) self.assertLessEqual(n2, n1) def test_weak_keyed_len_race(self): self.check_len_race(weakref.WeakKeyDictionary, lambda k: (k, 1)) def test_weak_valued_len_race(self): self.check_len_race(weakref.WeakValueDictionary, lambda k: (1, k)) def test_weak_values(self): # # This exercises d.copy(), d.items(), d[], del d[], len(d). # dict, objects = self.make_weak_valued_dict() for o in objects: self.assertEqual(weakref.getweakrefcount(o), 1) self.assertIs(o, dict[o.arg], "wrong object returned by weak dict!") items1 = list(dict.items()) items2 = list(dict.copy().items()) items1.sort() items2.sort() self.assertEqual(items1, items2, "cloning of weak-valued dictionary did not work!") del items1, items2 self.assertEqual(len(dict), self.COUNT) del objects[0] gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), self.COUNT - 1, "deleting object did not cause dictionary update") del objects, o gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() gc_collect() # For PyPy or other GCs. self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): # # This exercises d.copy(), d.items(), d[] = v, d[], del d[], # len(d), k in d. # dict, objects = self.make_weak_keyed_dict() for o in objects: self.assertEqual(weakref.getweakrefcount(o), 1, "wrong number of weak references to %r!" % o) self.assertIs(o.arg, dict[o], "wrong object returned by weak dict!") items1 = dict.items() items2 = dict.copy().items() self.assertEqual(set(items1), set(items2), "cloning of weak-keyed dictionary did not work!") del items1, items2 self.assertEqual(len(dict), self.COUNT) del objects[0] gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o gc_collect() # For PyPy or other GCs. 
self.assertEqual(len(dict), 0, "deleting the keys did not clear the dictionary") o = Object(42) dict[o] = "What is the meaning of the universe?" self.assertIn(o, dict) self.assertNotIn(34, dict) def test_weak_keyed_iters(self): dict, objects = self.make_weak_keyed_dict() self.check_iters(dict) # Test keyrefs() refs = dict.keyrefs() self.assertEqual(len(refs), len(objects)) objects2 = list(objects) for wr in refs: ob = wr() self.assertIn(ob, dict) self.assertIn(ob, dict) self.assertEqual(ob.arg, dict[ob]) objects2.remove(ob) self.assertEqual(len(objects2), 0) # Test iterkeyrefs() objects2 = list(objects) self.assertEqual(len(list(dict.keyrefs())), len(objects)) for wr in dict.keyrefs(): ob = wr() self.assertIn(ob, dict) self.assertIn(ob, dict) self.assertEqual(ob.arg, dict[ob]) objects2.remove(ob) self.assertEqual(len(objects2), 0) def test_weak_valued_iters(self): dict, objects = self.make_weak_valued_dict() self.check_iters(dict) # Test valuerefs() refs = dict.valuerefs() self.assertEqual(len(refs), len(objects)) objects2 = list(objects) for wr in refs: ob = wr() self.assertEqual(ob, dict[ob.arg]) self.assertEqual(ob.arg, dict[ob.arg].arg) objects2.remove(ob) self.assertEqual(len(objects2), 0) # Test itervaluerefs() objects2 = list(objects) self.assertEqual(len(list(dict.itervaluerefs())), len(objects)) for wr in dict.itervaluerefs(): ob = wr() self.assertEqual(ob, dict[ob.arg]) self.assertEqual(ob.arg, dict[ob.arg].arg) objects2.remove(ob) self.assertEqual(len(objects2), 0) def check_iters(self, dict): # item iterator: items = list(dict.items()) for item in dict.items(): items.remove(item) self.assertFalse(items, "items() did not touch all items") # key iterator, via __iter__(): keys = list(dict.keys()) for k in dict: keys.remove(k) self.assertFalse(keys, "__iter__() did not touch all keys") # key iterator, via iterkeys(): keys = list(dict.keys()) for k in dict.keys(): keys.remove(k) self.assertFalse(keys, "iterkeys() did not touch all keys") # value iterator: values = list(dict.values()) for v in dict.values(): values.remove(v) self.assertFalse(values, "itervalues() did not touch all values") def check_weak_destroy_while_iterating(self, dict, objects, iter_name): n = len(dict) it = iter(getattr(dict, iter_name)()) next(it) # Trigger internal iteration # Destroy an object del objects[-1] gc.collect() # just in case # We have removed either the first consumed object, or another one self.assertIn(len(list(it)), [len(objects), len(objects) - 1]) del it # The removal has been committed self.assertEqual(len(dict), n - 1) def check_weak_destroy_and_mutate_while_iterating(self, dict, testcontext): # Check that we can explicitly mutate the weak dict without # interfering with delayed removal. # `testcontext` should create an iterator, destroy one of the # weakref'ed objects and then return a new key/value pair corresponding # to the destroyed object. 
with testcontext() as (k, v): self.assertNotIn(k, dict) with testcontext() as (k, v): self.assertRaises(KeyError, dict.__delitem__, k) self.assertNotIn(k, dict) with testcontext() as (k, v): self.assertRaises(KeyError, dict.pop, k) self.assertNotIn(k, dict) with testcontext() as (k, v): dict[k] = v self.assertEqual(dict[k], v) ddict = copy.copy(dict) with testcontext() as (k, v): dict.update(ddict) self.assertEqual(dict, ddict) with testcontext() as (k, v): dict.clear() self.assertEqual(len(dict), 0) def check_weak_del_and_len_while_iterating(self, dict, testcontext): # Check that len() works when both iterating and removing keys # explicitly through various means (.pop(), .clear()...), while # implicit mutation is deferred because an iterator is alive. # (each call to testcontext() should schedule one item for removal # for this test to work properly) o = Object(123456) with testcontext(): n = len(dict) # Since underlying dict is ordered, first item is popped dict.pop(next(dict.keys())) self.assertEqual(len(dict), n - 1) dict[o] = o self.assertEqual(len(dict), n) # last item in objects is removed from dict in context shutdown with testcontext(): self.assertEqual(len(dict), n - 1) # Then, (o, o) is popped dict.popitem() self.assertEqual(len(dict), n - 2) with testcontext(): self.assertEqual(len(dict), n - 3) del dict[next(dict.keys())] self.assertEqual(len(dict), n - 4) with testcontext(): self.assertEqual(len(dict), n - 5) dict.popitem() self.assertEqual(len(dict), n - 6) with testcontext(): dict.clear() self.assertEqual(len(dict), 0) self.assertEqual(len(dict), 0) def test_weak_keys_destroy_while_iterating(self): # Issue #7105: iterators shouldn't crash when a key is implicitly removed dict, objects = self.make_weak_keyed_dict() self.check_weak_destroy_while_iterating(dict, objects, 'keys') self.check_weak_destroy_while_iterating(dict, objects, 'items') self.check_weak_destroy_while_iterating(dict, objects, 'values') self.check_weak_destroy_while_iterating(dict, objects, 'keyrefs') dict, objects = self.make_weak_keyed_dict() @contextlib.contextmanager def testcontext(): try: it = iter(dict.items()) next(it) # Schedule a key/value for removal and recreate it v = objects.pop().arg gc.collect() # just in case yield Object(v), v finally: it = None # should commit all removals gc.collect() self.check_weak_destroy_and_mutate_while_iterating(dict, testcontext) # Issue #21173: len() fragile when keys are both implicitly and # explicitly removed. 
dict, objects = self.make_weak_keyed_dict() self.check_weak_del_and_len_while_iterating(dict, testcontext) def test_weak_values_destroy_while_iterating(self): # Issue #7105: iterators shouldn't crash when a key is implicitly removed dict, objects = self.make_weak_valued_dict() self.check_weak_destroy_while_iterating(dict, objects, 'keys') self.check_weak_destroy_while_iterating(dict, objects, 'items') self.check_weak_destroy_while_iterating(dict, objects, 'values') self.check_weak_destroy_while_iterating(dict, objects, 'itervaluerefs') self.check_weak_destroy_while_iterating(dict, objects, 'valuerefs') dict, objects = self.make_weak_valued_dict() @contextlib.contextmanager def testcontext(): try: it = iter(dict.items()) next(it) # Schedule a key/value for removal and recreate it k = objects.pop().arg gc.collect() # just in case yield k, Object(k) finally: it = None # should commit all removals gc.collect() self.check_weak_destroy_and_mutate_while_iterating(dict, testcontext) dict, objects = self.make_weak_valued_dict() self.check_weak_del_and_len_while_iterating(dict, testcontext) def test_make_weak_keyed_dict_from_dict(self): o = Object(3) dict = weakref.WeakKeyDictionary({o:364}) self.assertEqual(dict[o], 364) def test_make_weak_keyed_dict_from_weak_keyed_dict(self): o = Object(3) dict = weakref.WeakKeyDictionary({o:364}) dict2 = weakref.WeakKeyDictionary(dict) self.assertEqual(dict[o], 364) def make_weak_keyed_dict(self): dict = weakref.WeakKeyDictionary() objects = list(map(Object, range(self.COUNT))) for o in objects: dict[o] = o.arg return dict, objects def test_make_weak_valued_dict_from_dict(self): o = Object(3) dict = weakref.WeakValueDictionary({364:o}) self.assertEqual(dict[364], o) def test_make_weak_valued_dict_from_weak_valued_dict(self): o = Object(3) dict = weakref.WeakValueDictionary({364:o}) dict2 = weakref.WeakValueDictionary(dict) self.assertEqual(dict[364], o) def test_make_weak_valued_dict_misc(self): # errors self.assertRaises(TypeError, weakref.WeakValueDictionary.__init__) self.assertRaises(TypeError, weakref.WeakValueDictionary, {}, {}) self.assertRaises(TypeError, weakref.WeakValueDictionary, (), ()) # special keyword arguments o = Object(3) for kw in 'self', 'dict', 'other', 'iterable': d = weakref.WeakValueDictionary(**{kw: o}) self.assertEqual(list(d.keys()), [kw]) self.assertEqual(d[kw], o) def make_weak_valued_dict(self): dict = weakref.WeakValueDictionary() objects = list(map(Object, range(self.COUNT))) for o in objects: dict[o.arg] = o return dict, objects def check_popitem(self, klass, key1, value1, key2, value2): weakdict = klass() weakdict[key1] = value1 weakdict[key2] = value2 self.assertEqual(len(weakdict), 2) k, v = weakdict.popitem() self.assertEqual(len(weakdict), 1) if k is key1: self.assertIs(v, value1) else: self.assertIs(v, value2) k, v = weakdict.popitem() self.assertEqual(len(weakdict), 0) if k is key1: self.assertIs(v, value1) else: self.assertIs(v, value2) def test_weak_valued_dict_popitem(self): self.check_popitem(weakref.WeakValueDictionary, "key1", C(), "key2", C()) def test_weak_keyed_dict_popitem(self): self.check_popitem(weakref.WeakKeyDictionary, C(), "value 1", C(), "value 2") def check_setdefault(self, klass, key, value1, value2): self.assertIsNot(value1, value2, "invalid test" " -- value parameters must be distinct objects") weakdict = klass() o = weakdict.setdefault(key, value1) self.assertIs(o, value1) self.assertIn(key, weakdict) self.assertIs(weakdict.get(key), value1) self.assertIs(weakdict[key], value1) o = 
weakdict.setdefault(key, value2) self.assertIs(o, value1) self.assertIn(key, weakdict) self.assertIs(weakdict.get(key), value1) self.assertIs(weakdict[key], value1) def test_weak_valued_dict_setdefault(self): self.check_setdefault(weakref.WeakValueDictionary, "key", C(), C()) def test_weak_keyed_dict_setdefault(self): self.check_setdefault(weakref.WeakKeyDictionary, C(), "value 1", "value 2") def check_update(self, klass, dict): # # This exercises d.update(), len(d), d.keys(), k in d, # d.get(), d[]. # weakdict = klass() weakdict.update(dict) self.assertEqual(len(weakdict), len(dict)) for k in weakdict.keys(): self.assertIn(k, dict, "mysterious new key appeared in weak dict") v = dict.get(k) self.assertIs(v, weakdict[k]) self.assertIs(v, weakdict.get(k)) for k in dict.keys(): self.assertIn(k, weakdict, "original key disappeared in weak dict") v = dict[k] self.assertIs(v, weakdict[k]) self.assertIs(v, weakdict.get(k)) def test_weak_valued_dict_update(self): self.check_update(weakref.WeakValueDictionary, {1: C(), 'a': C(), C(): C()}) # errors self.assertRaises(TypeError, weakref.WeakValueDictionary.update) d = weakref.WeakValueDictionary() self.assertRaises(TypeError, d.update, {}, {}) self.assertRaises(TypeError, d.update, (), ()) self.assertEqual(list(d.keys()), []) # special keyword arguments o = Object(3) for kw in 'self', 'dict', 'other', 'iterable': d = weakref.WeakValueDictionary() d.update(**{kw: o}) self.assertEqual(list(d.keys()), [kw]) self.assertEqual(d[kw], o) def test_weak_valued_union_operators(self): a = C() b = C() c = C() wvd1 = weakref.WeakValueDictionary({1: a}) wvd2 = weakref.WeakValueDictionary({1: b, 2: a}) wvd3 = wvd1.copy() d1 = {1: c, 3: b} pairs = [(5, c), (6, b)] tmp1 = wvd1 | wvd2 # Between two WeakValueDictionaries self.assertEqual(dict(tmp1), dict(wvd1) | dict(wvd2)) self.assertIs(type(tmp1), weakref.WeakValueDictionary) wvd1 |= wvd2 self.assertEqual(wvd1, tmp1) tmp2 = wvd2 | d1 # Between WeakValueDictionary and mapping self.assertEqual(dict(tmp2), dict(wvd2) | d1) self.assertIs(type(tmp2), weakref.WeakValueDictionary) wvd2 |= d1 self.assertEqual(wvd2, tmp2) tmp3 = wvd3.copy() # Between WeakValueDictionary and iterable key, value tmp3 |= pairs self.assertEqual(dict(tmp3), dict(wvd3) | dict(pairs)) self.assertIs(type(tmp3), weakref.WeakValueDictionary) tmp4 = d1 | wvd3 # Testing .__ror__ self.assertEqual(dict(tmp4), d1 | dict(wvd3)) self.assertIs(type(tmp4), weakref.WeakValueDictionary) del a self.assertNotIn(2, tmp1) self.assertNotIn(2, tmp2) self.assertNotIn(1, tmp3) self.assertNotIn(1, tmp4) def test_weak_keyed_dict_update(self): self.check_update(weakref.WeakKeyDictionary, {C(): 1, C(): 2, C(): 3}) def test_weak_keyed_delitem(self): d = weakref.WeakKeyDictionary() o1 = Object('1') o2 = Object('2') d[o1] = 'something' d[o2] = 'something' self.assertEqual(len(d), 2) del d[o1] self.assertEqual(len(d), 1) self.assertEqual(list(d.keys()), [o2]) def test_weak_keyed_union_operators(self): o1 = C() o2 = C() o3 = C() wkd1 = weakref.WeakKeyDictionary({o1: 1, o2: 2}) wkd2 = weakref.WeakKeyDictionary({o3: 3, o1: 4}) wkd3 = wkd1.copy() d1 = {o2: '5', o3: '6'} pairs = [(o2, 7), (o3, 8)] tmp1 = wkd1 | wkd2 # Between two WeakKeyDictionaries self.assertEqual(dict(tmp1), dict(wkd1) | dict(wkd2)) self.assertIs(type(tmp1), weakref.WeakKeyDictionary) wkd1 |= wkd2 self.assertEqual(wkd1, tmp1) tmp2 = wkd2 | d1 # Between WeakKeyDictionary and mapping self.assertEqual(dict(tmp2), dict(wkd2) | d1) self.assertIs(type(tmp2), weakref.WeakKeyDictionary) wkd2 |= d1 self.assertEqual(wkd2, 
tmp2) tmp3 = wkd3.copy() # Between WeakKeyDictionary and iterable key, value tmp3 |= pairs self.assertEqual(dict(tmp3), dict(wkd3) | dict(pairs)) self.assertIs(type(tmp3), weakref.WeakKeyDictionary) tmp4 = d1 | wkd3 # Testing .__ror__ self.assertEqual(dict(tmp4), d1 | dict(wkd3)) self.assertIs(type(tmp4), weakref.WeakKeyDictionary) del o1 self.assertNotIn(4, tmp1.values()) self.assertNotIn(4, tmp2.values()) self.assertNotIn(1, tmp3.values()) self.assertNotIn(1, tmp4.values()) def test_weak_valued_delitem(self): d = weakref.WeakValueDictionary() o1 = Object('1') o2 = Object('2') d['something'] = o1 d['something else'] = o2 self.assertEqual(len(d), 2) del d['something'] self.assertEqual(len(d), 1) self.assertEqual(list(d.items()), [('something else', o2)]) def test_weak_keyed_bad_delitem(self): d = weakref.WeakKeyDictionary() o = Object('1') # An attempt to delete an object that isn't there should raise # KeyError. It didn't before 2.3. self.assertRaises(KeyError, d.__delitem__, o) self.assertRaises(KeyError, d.__getitem__, o) # If a key isn't of a weakly referencable type, __getitem__ and # __setitem__ raise TypeError. __delitem__ should too. self.assertRaises(TypeError, d.__delitem__, 13) self.assertRaises(TypeError, d.__getitem__, 13) self.assertRaises(TypeError, d.__setitem__, 13, 13) def test_weak_keyed_cascading_deletes(self): # SF bug 742860. For some reason, before 2.3 __delitem__ iterated # over the keys via self.data.iterkeys(). If things vanished from # the dict during this (or got added), that caused a RuntimeError. d = weakref.WeakKeyDictionary() mutate = False class C(object): def __init__(self, i): self.value = i def __hash__(self): return hash(self.value) def __eq__(self, other): if mutate: # Side effect that mutates the dict, by removing the # last strong reference to a key. del objs[-1] return self.value == other.value objs = [C(i) for i in range(4)] for o in objs: d[o] = o.value del o # now the only strong references to keys are in objs # Find the order in which iterkeys sees the keys. objs = list(d.keys()) # Reverse it, so that the iteration implementation of __delitem__ # has to keep looping to find the first object we delete. objs.reverse() # Turn on mutation in C.__eq__. The first time through the loop, # under the iterkeys() business the first comparison will delete # the last item iterkeys() would see, and that causes a # RuntimeError: dictionary changed size during iteration # when the iterkeys() loop goes around to try comparing the next # key. After this was fixed, it just deletes the last object *our* # "for o in obj" loop would have gotten to. mutate = True count = 0 for o in objs: count += 1 del d[o] gc_collect() # For PyPy or other GCs. self.assertEqual(len(d), 0) self.assertEqual(count, 2) def test_make_weak_valued_dict_repr(self): dict = weakref.WeakValueDictionary() self.assertRegex(repr(dict), '') def test_make_weak_keyed_dict_repr(self): dict = weakref.WeakKeyDictionary() self.assertRegex(repr(dict), '') @threading_helper.requires_working_threading() def test_threaded_weak_valued_setdefault(self): d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(100000): x = d.setdefault(10, RefCycle()) self.assertIsNot(x, None) # we never put None in there! del x @threading_helper.requires_working_threading() def test_threaded_weak_valued_pop(self): d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(100000): d[10] = RefCycle() x = d.pop(10, 10) self.assertIsNot(x, None) # we never put None in there! 
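    # The threaded tests above and below stress WeakValueDictionary under a
    # concurrent collector; the behaviour they all rely on is simply that an
    # entry disappears once the last strong reference to its value is gone,
    # while it stays present as long as a strong reference is held.
    # A minimal sketch of that behaviour (an illustrative, hypothetical helper:
    # the name is made up and it is deliberately not a test_* method, so test
    # discovery ignores it):
    def _sketch_weak_value_lifetime(self):
        import gc
        import weakref

        class Value:
            pass

        d = weakref.WeakValueDictionary()
        v = Value()
        d[10] = v
        assert len(d) == 1      # still alive: ``v`` holds a strong reference
        v = None                # drop the last strong reference
        gc.collect()            # needed on PyPy; CPython clears it eagerly
        assert len(d) == 0      # the dead entry was removed automatically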
@threading_helper.requires_working_threading() def test_threaded_weak_valued_consistency(self): # Issue #28427: old keys should not remove new values from # WeakValueDictionary when collecting from another thread. d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(200000): o = RefCycle() d[10] = o # o is still alive, so the dict can't be empty self.assertEqual(len(d), 1) o = None # lose ref def check_threaded_weak_dict_copy(self, type_, deepcopy): # `type_` should be either WeakKeyDictionary or WeakValueDictionary. # `deepcopy` should be either True or False. exc = [] class DummyKey: def __init__(self, ctr): self.ctr = ctr class DummyValue: def __init__(self, ctr): self.ctr = ctr def dict_copy(d, exc): try: if deepcopy is True: _ = copy.deepcopy(d) else: _ = d.copy() except Exception as ex: exc.append(ex) def pop_and_collect(lst): gc_ctr = 0 while lst: i = random.randint(0, len(lst) - 1) gc_ctr += 1 lst.pop(i) if gc_ctr % 10000 == 0: gc.collect() # just in case self.assertIn(type_, (weakref.WeakKeyDictionary, weakref.WeakValueDictionary)) d = type_() keys = [] values = [] # Initialize d with many entries for i in range(70000): k, v = DummyKey(i), DummyValue(i) keys.append(k) values.append(v) d[k] = v del k del v t_copy = threading.Thread(target=dict_copy, args=(d, exc,)) if type_ is weakref.WeakKeyDictionary: t_collect = threading.Thread(target=pop_and_collect, args=(keys,)) else: # weakref.WeakValueDictionary t_collect = threading.Thread(target=pop_and_collect, args=(values,)) t_copy.start() t_collect.start() t_copy.join() t_collect.join() # Test exceptions if exc: raise exc[0] @threading_helper.requires_working_threading() def test_threaded_weak_key_dict_copy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakKeyDictionary, False) @threading_helper.requires_working_threading() @support.requires_resource('cpu') def test_threaded_weak_key_dict_deepcopy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakKeyDictionary, True) @threading_helper.requires_working_threading() def test_threaded_weak_value_dict_copy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakValueDictionary, False) @threading_helper.requires_working_threading() @support.requires_resource('cpu') def test_threaded_weak_value_dict_deepcopy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. 
self.check_threaded_weak_dict_copy(weakref.WeakValueDictionary, True) @support.cpython_only def test_remove_closure(self): d = weakref.WeakValueDictionary() self.assertIsNone(d._remove.__closure__) from test import mapping_tests class WeakValueDictionaryTestCase(mapping_tests.BasicTestMappingProtocol): """Check that WeakValueDictionary conforms to the mapping protocol""" __ref = {"key1":Object(1), "key2":Object(2), "key3":Object(3)} type2test = weakref.WeakValueDictionary def _reference(self): return self.__ref.copy() class WeakKeyDictionaryTestCase(mapping_tests.BasicTestMappingProtocol): """Check that WeakKeyDictionary conforms to the mapping protocol""" __ref = {Object("key1"):1, Object("key2"):2, Object("key3"):3} type2test = weakref.WeakKeyDictionary def _reference(self): return self.__ref.copy() class FinalizeTestCase(unittest.TestCase): class A: pass def _collect_if_necessary(self): # we create no ref-cycles so in CPython no gc should be needed if sys.implementation.name != 'cpython': support.gc_collect() def test_finalize(self): def add(x,y,z): res.append(x + y + z) return x + y + z a = self.A() res = [] f = weakref.finalize(a, add, 67, 43, z=89) self.assertEqual(f.alive, True) self.assertEqual(f.peek(), (a, add, (67,43), {'z':89})) self.assertEqual(f(), 199) self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, [199]) res = [] f = weakref.finalize(a, add, 67, 43, 89) self.assertEqual(f.peek(), (a, add, (67,43,89), {})) self.assertEqual(f.detach(), (a, add, (67,43,89), {})) self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, []) res = [] f = weakref.finalize(a, add, x=67, y=43, z=89) del a self._collect_if_necessary() self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, [199]) def test_arg_errors(self): def fin(*args, **kwargs): res.append((args, kwargs)) a = self.A() res = [] f = weakref.finalize(a, fin, 1, 2, func=3, obj=4) self.assertEqual(f.peek(), (a, fin, (1, 2), {'func': 3, 'obj': 4})) f() self.assertEqual(res, [((1, 2), {'func': 3, 'obj': 4})]) with self.assertRaises(TypeError): weakref.finalize(a, func=fin, arg=1) with self.assertRaises(TypeError): weakref.finalize(obj=a, func=fin, arg=1) self.assertRaises(TypeError, weakref.finalize, a) self.assertRaises(TypeError, weakref.finalize) def test_order(self): a = self.A() res = [] f1 = weakref.finalize(a, res.append, 'f1') f2 = weakref.finalize(a, res.append, 'f2') f3 = weakref.finalize(a, res.append, 'f3') f4 = weakref.finalize(a, res.append, 'f4') f5 = weakref.finalize(a, res.append, 'f5') # make sure finalizers can keep themselves alive del f1, f4 self.assertTrue(f2.alive) self.assertTrue(f3.alive) self.assertTrue(f5.alive) self.assertTrue(f5.detach()) self.assertFalse(f5.alive) f5() # nothing because previously unregistered res.append('A') f3() # => res.append('f3') self.assertFalse(f3.alive) res.append('B') f3() # nothing because previously called res.append('C') del a self._collect_if_necessary() # => res.append('f4') # => res.append('f2') # => res.append('f1') self.assertFalse(f2.alive) res.append('D') f2() # nothing because previously called by gc expected = ['A', 'f3', 'B', 'C', 'f4', 'f2', 'f1', 'D'] self.assertEqual(res, expected) def 
test_all_freed(self): # we want a weakrefable subclass of weakref.finalize class MyFinalizer(weakref.finalize): pass a = self.A() res = [] def callback(): res.append(123) f = MyFinalizer(a, callback) wr_callback = weakref.ref(callback) wr_f = weakref.ref(f) del callback, f self.assertIsNotNone(wr_callback()) self.assertIsNotNone(wr_f()) del a self._collect_if_necessary() self.assertIsNone(wr_callback()) self.assertIsNone(wr_f()) self.assertEqual(res, [123]) @classmethod def run_in_child(cls): def error(): # Create an atexit finalizer from inside a finalizer called # at exit. This should be the next to be run. g1 = weakref.finalize(cls, print, 'g1') print('f3 error') 1/0 # cls should stay alive till atexit callbacks run f1 = weakref.finalize(cls, print, 'f1', _global_var) f2 = weakref.finalize(cls, print, 'f2', _global_var) f3 = weakref.finalize(cls, error) f4 = weakref.finalize(cls, print, 'f4', _global_var) assert f1.atexit == True f2.atexit = False assert f3.atexit == True assert f4.atexit == True def test_atexit(self): prog = ('from test.test_weakref import FinalizeTestCase;'+ 'FinalizeTestCase.run_in_child()') rc, out, err = script_helper.assert_python_ok('-c', prog) out = out.decode('ascii').splitlines() self.assertEqual(out, ['f4 foobar', 'f3 error', 'g1', 'f1 foobar']) self.assertTrue(b'ZeroDivisionError' in err) class ModuleTestCase(unittest.TestCase): def test_names(self): for name in ('ReferenceType', 'ProxyType', 'CallableProxyType', 'WeakMethod', 'WeakSet', 'WeakKeyDictionary', 'WeakValueDictionary'): obj = getattr(weakref, name) if name != 'WeakSet': self.assertEqual(obj.__module__, 'weakref') self.assertEqual(obj.__name__, name) self.assertEqual(obj.__qualname__, name) libreftest = """ Doctest for examples in the library reference: weakref.rst >>> from test.support import gc_collect >>> import weakref >>> class Dict(dict): ... pass ... >>> obj = Dict(red=1, green=2, blue=3) # this object is weak referencable >>> r = weakref.ref(obj) >>> print(r() is obj) True >>> import weakref >>> class Object: ... pass ... >>> o = Object() >>> r = weakref.ref(o) >>> o2 = r() >>> o is o2 True >>> del o, o2 >>> gc_collect() # For PyPy or other GCs. >>> print(r()) None >>> import weakref >>> class ExtendedRef(weakref.ref): ... def __init__(self, ob, callback=None, **annotations): ... super().__init__(ob, callback) ... self.__counter = 0 ... for k, v in annotations.items(): ... setattr(self, k, v) ... def __call__(self): ... '''Return a pair containing the referent and the number of ... times the reference has been called. ... ''' ... ob = super().__call__() ... if ob is not None: ... self.__counter += 1 ... ob = (ob, self.__counter) ... return ob ... >>> class A: # not in docs from here, just testing the ExtendedRef ... pass ... >>> a = A() >>> r = ExtendedRef(a, foo=1, bar="baz") >>> r.foo 1 >>> r.bar 'baz' >>> r()[1] 1 >>> r()[1] 2 >>> r()[0] is a True >>> import weakref >>> _id2obj_dict = weakref.WeakValueDictionary() >>> def remember(obj): ... oid = id(obj) ... _id2obj_dict[oid] = obj ... return oid ... >>> def id2obj(oid): ... return _id2obj_dict[oid] ... >>> a = A() # from here, just testing >>> a_id = remember(a) >>> id2obj(a_id) is a True >>> del a >>> gc_collect() # For PyPy or other GCs. >>> try: ... id2obj(a_id) ... except KeyError: ... print('OK') ... else: ... 
print('WeakValueDictionary error') OK """ __test__ = {'libreftest' : libreftest} def load_tests(loader, tests, pattern): tests.addTest(doctest.DocTestSuite()) return tests if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/test_wsgiref.py000066400000000000000000000725131471441230600214130ustar00rootroot00000000000000from unittest import mock from test import support from test.support import socket_helper from test.test_httpservers import NoLogRequestHandler from unittest import TestCase from wsgiref.util import setup_testing_defaults from wsgiref.headers import Headers from wsgiref.handlers import BaseHandler, BaseCGIHandler, SimpleHandler from wsgiref import util from wsgiref.validate import validator from wsgiref.simple_server import WSGIServer, WSGIRequestHandler from wsgiref.simple_server import make_server from http.client import HTTPConnection from io import StringIO, BytesIO, BufferedReader from socketserver import BaseServer from platform import python_implementation import os import re import signal import sys import threading import unittest class MockServer(WSGIServer): """Non-socket HTTP server""" def __init__(self, server_address, RequestHandlerClass): BaseServer.__init__(self, server_address, RequestHandlerClass) self.server_bind() def server_bind(self): host, port = self.server_address self.server_name = host self.server_port = port self.setup_environ() class MockHandler(WSGIRequestHandler): """Non-socket HTTP handler""" def setup(self): self.connection = self.request self.rfile, self.wfile = self.connection def finish(self): pass def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), ('Date','Mon, 05 Jun 2006 18:49:54 GMT') ]) return [b"Hello, world!"] def header_app(environ, start_response): start_response("200 OK", [ ('Content-Type', 'text/plain'), ('Date', 'Mon, 05 Jun 2006 18:49:54 GMT') ]) return [';'.join([ environ['HTTP_X_TEST_HEADER'], environ['QUERY_STRING'], environ['PATH_INFO'] ]).encode('iso-8859-1')] def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): server = make_server("", 80, app, MockServer, MockHandler) inp = BufferedReader(BytesIO(data)) out = BytesIO() olderr = sys.stderr err = sys.stderr = StringIO() try: server.finish_request((inp, out), ("127.0.0.1",8888)) finally: sys.stderr = olderr return out.getvalue(), err.getvalue() def compare_generic_iter(make_it, match): """Utility to compare a generic iterator with an iterable This tests the iterator using iter()/next(). 
'make_it' must be a function returning a fresh iterator to be tested (since this may test the iterator twice).""" it = make_it() if not iter(it) is it: raise AssertionError for item in match: if not next(it) == item: raise AssertionError try: next(it) except StopIteration: pass else: raise AssertionError("Too many items from .__next__()", it) class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): pyver = (python_implementation() + "/" + sys.version.split()[0]) self.assertEqual(out, ("HTTP/1.0 200 OK\r\n" "Server: WSGIServer/0.2 " + pyver +"\r\n" "Content-Type: text/plain\r\n" "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" + (has_length and "Content-Length: 13\r\n" or "") + "\r\n" "Hello, world!").encode("iso-8859-1") ) def test_plain_hello(self): out, err = run_amock() self.check_hello(out) def test_environ(self): request = ( b"GET /p%61th/?query=test HTTP/1.0\n" b"X-Test-Header: Python test \n" b"X-Test-Header: Python test 2\n" b"Content-Length: 0\n\n" ) out, err = run_amock(header_app, request) self.assertEqual( out.splitlines()[-1], b"Python test,Python test 2;query=test;/path/" ) def test_request_length(self): out, err = run_amock(data=b"GET " + (b"x" * 65537) + b" HTTP/1.0\n\n") self.assertEqual(out.splitlines()[0], b"HTTP/1.0 414 Request-URI Too Long") def test_validated_hello(self): out, err = run_amock(validator(hello_app)) # the middleware doesn't support len(), so content-length isn't there self.check_hello(out, has_length=False) def test_simple_validation_error(self): def bad_app(environ,start_response): start_response("200 OK", ('Content-Type','text/plain')) return ["Hello, world!"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual( err.splitlines()[-2], "AssertionError: Headers (('Content-Type', 'text/plain')) must" " be of type list: " ) def test_status_validation_errors(self): def create_bad_app(status): def bad_app(environ, start_response): start_response(status, [("Content-Type", "text/plain; charset=utf-8")]) return [b"Hello, world!"] return bad_app tests = [ ('200', 'AssertionError: Status must be at least 4 characters'), ('20X OK', 'AssertionError: Status message must begin w/3-digit code'), ('200OK', 'AssertionError: Status message must have a space after code'), ] for status, exc_message in tests: with self.subTest(status=status): out, err = run_amock(create_bad_app(status)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual(err.splitlines()[-2], exc_message) def test_wsgi_input(self): def bad_app(e,s): e["wsgi.input"].read() s("200 OK", [("Content-Type", "text/plain; charset=utf-8")]) return [b"data"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." 
)) self.assertEqual( err.splitlines()[-2], "AssertionError" ) def test_bytes_validation(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) return [b"data"] out, err = run_amock(validator(app)) self.assertTrue(err.endswith('"GET / HTTP/1.0" 200 4\n')) ver = sys.version.split()[0].encode('ascii') py = python_implementation().encode('ascii') pyver = py + b"/" + ver self.assertEqual( b"HTTP/1.0 200 OK\r\n" b"Server: WSGIServer/0.2 "+ pyver + b"\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Date: Wed, 24 Dec 2008 13:29:32 GMT\r\n" b"\r\n" b"data", out) def test_cp1252_url(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) # PEP3333 says environ variables are decoded as latin1. # Encode as latin1 to get original bytes return [e["PATH_INFO"].encode("latin1")] out, err = run_amock( validator(app), data=b"GET /\x80%80 HTTP/1.0") self.assertEqual( [ b"HTTP/1.0 200 OK", mock.ANY, b"Content-Type: text/plain", b"Date: Wed, 24 Dec 2008 13:29:32 GMT", b"", b"/\x80\x80", ], out.splitlines()) def test_interrupted_write(self): # BaseHandler._write() and _flush() have to write all data, even if # it takes multiple send() calls. Test this by interrupting a send() # call with a Unix signal. pthread_kill = support.get_attribute(signal, "pthread_kill") def app(environ, start_response): start_response("200 OK", []) return [b'\0' * support.SOCK_MAX_SIZE] class WsgiHandler(NoLogRequestHandler, WSGIRequestHandler): pass server = make_server(socket_helper.HOST, 0, app, handler_class=WsgiHandler) self.addCleanup(server.server_close) interrupted = threading.Event() def signal_handler(signum, frame): interrupted.set() original = signal.signal(signal.SIGUSR1, signal_handler) self.addCleanup(signal.signal, signal.SIGUSR1, original) received = None main_thread = threading.get_ident() def run_client(): http = HTTPConnection(*server.server_address) http.request("GET", "/") with http.getresponse() as response: response.read(100) # The main thread should now be blocking in a send() system # call. But in theory, it could get interrupted by other # signals, and then retried. So keep sending the signal in a # loop, in case an earlier signal happens to be delivered at # an inconvenient moment. 
while True: pthread_kill(main_thread, signal.SIGUSR1) if interrupted.wait(timeout=float(1)): break nonlocal received received = len(response.read()) http.close() background = threading.Thread(target=run_client) background.start() server.handle_request() background.join() self.assertEqual(received, support.SOCK_MAX_SIZE - 100) class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in} util.setup_testing_defaults(env) self.assertEqual(util.shift_path_info(env),part) self.assertEqual(env['PATH_INFO'],pi_out) self.assertEqual(env['SCRIPT_NAME'],sn_out) return env def checkDefault(self, key, value, alt=None): # Check defaulting when empty env = {} util.setup_testing_defaults(env) if isinstance(value, StringIO): self.assertIsInstance(env[key], StringIO) elif isinstance(value,BytesIO): self.assertIsInstance(env[key],BytesIO) else: self.assertEqual(env[key], value) # Check existing value env = {key:alt} util.setup_testing_defaults(env) self.assertIs(env[key], alt) def checkCrossDefault(self,key,value,**kw): util.setup_testing_defaults(kw) self.assertEqual(kw[key],value) def checkAppURI(self,uri,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.application_uri(kw),uri) def checkReqURI(self,uri,query=1,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) def checkFW(self,text,size,match): def make_it(text=text,size=size): return util.FileWrapper(StringIO(text),size) compare_generic_iter(make_it,match) it = make_it() self.assertFalse(it.filelike.closed) for item in it: pass self.assertFalse(it.filelike.closed) it.close() self.assertTrue(it.filelike.closed) def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') self.checkShift('/','', None, '/', '') self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') self.checkShift('/a/b', '//y', 'y', '/a/b/y', '') self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '') self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/') self.checkShift('/a/b', '///', '', '/a/b/', '') self.checkShift('/a/b', '/.//', '', '/a/b/', '') self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), ('SERVER_PORT', '80'), ('SERVER_PROTOCOL','HTTP/1.0'), ('HTTP_HOST','127.0.0.1'), ('REQUEST_METHOD','GET'), ('SCRIPT_NAME',''), ('PATH_INFO','/'), ('wsgi.version', (1,0)), ('wsgi.run_once', 0), ('wsgi.multithread', 0), ('wsgi.multiprocess', 0), ('wsgi.input', BytesIO()), ('wsgi.errors', StringIO()), ('wsgi.url_scheme','http'), ]: self.checkDefault(key,value) def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes") self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") 
self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkAppURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkAppURI("http://spam.example.com:2071/", HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071") self.checkAppURI("http://spam.example.com/", SERVER_NAME="spam.example.com") self.checkAppURI("http://127.0.0.1/", HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com") self.checkAppURI("https://127.0.0.1/", HTTPS="on") self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000", HTTP_HOST=None) def testReqURIs(self): self.checkReqURI("http://127.0.0.1/") self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkReqURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam", SCRIPT_NAME="/spammity", PATH_INFO="/spam") self.checkReqURI("http://127.0.0.1/spammity/sp%E4m", SCRIPT_NAME="/spammity", PATH_INFO="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam;ham", SCRIPT_NAME="/spammity", PATH_INFO="/spam;ham") self.checkReqURI("http://127.0.0.1/spammity/spam;cookie=1234,5678", SCRIPT_NAME="/spammity", PATH_INFO="/spam;cookie=1234,5678") self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") self.checkReqURI("http://127.0.0.1/spammity/spam?s%E4y=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="s%E4y=ni") self.checkReqURI("http://127.0.0.1/spammity/spam", 0, SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") def testFileWrapper(self): self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10]) def testHopByHop(self): for hop in ( "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization " "TE Trailers Transfer-Encoding Upgrade" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertTrue(util.is_hop_by_hop(alt)) # Not comprehensive, just a few random header names for hop in ( "Accept Cache-Control Date Pragma Trailer Via Warning" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertFalse(util.is_hop_by_hop(alt)) class HeaderTests(TestCase): def testMappingInterface(self): test = [('x','y')] self.assertEqual(len(Headers()), 0) self.assertEqual(len(Headers([])),0) self.assertEqual(len(Headers(test[:])),1) self.assertEqual(Headers(test[:]).keys(), ['x']) self.assertEqual(Headers(test[:]).values(), ['y']) self.assertEqual(Headers(test[:]).items(), test) self.assertIsNot(Headers(test).items(), test) # must be copy! 
h = Headers() del h['foo'] # should not raise an error h['Foo'] = 'bar' for m in h.__contains__, h.get, h.get_all, h.__getitem__: self.assertTrue(m('foo')) self.assertTrue(m('Foo')) self.assertTrue(m('FOO')) self.assertFalse(m('bar')) self.assertEqual(h['foo'],'bar') h['foo'] = 'baz' self.assertEqual(h['FOO'],'baz') self.assertEqual(h.get_all('foo'),['baz']) self.assertEqual(h.get("foo","whee"), "baz") self.assertEqual(h.get("zoo","whee"), "whee") self.assertEqual(h.setdefault("foo","whee"), "baz") self.assertEqual(h.setdefault("zoo","whee"), "whee") self.assertEqual(h["foo"],"baz") self.assertEqual(h["zoo"],"whee") def testRequireList(self): self.assertRaises(TypeError, Headers, "foo") def testExtras(self): h = Headers() self.assertEqual(str(h),'\r\n') h.add_header('foo','bar',baz="spam") self.assertEqual(h['foo'], 'bar; baz="spam"') self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n') h.add_header('Foo','bar',cheese=None) self.assertEqual(h.get_all('foo'), ['bar; baz="spam"', 'bar; cheese']) self.assertEqual(str(h), 'foo: bar; baz="spam"\r\n' 'Foo: bar; cheese\r\n' '\r\n' ) class ErrorHandler(BaseCGIHandler): """Simple handler subclass for testing BaseHandler""" # BaseHandler records the OS environment at import time, but envvars # might have been changed later by other tests, which trips up # HandlerTests.testEnviron(). os_environ = dict(os.environ.items()) def __init__(self,**kw): setup_testing_defaults(kw) BaseCGIHandler.__init__( self, BytesIO(), BytesIO(), StringIO(), kw, multithread=True, multiprocess=True ) class TestHandler(ErrorHandler): """Simple handler subclass for testing BaseHandler, w/error passthru""" def handle_error(self): raise # for testing, we want to see what's happening class HandlerTests(TestCase): # testEnviron() can produce long error message maxDiff = 80 * 50 def testEnviron(self): os_environ = { # very basic environment 'HOME': '/my/home', 'PATH': '/my/path', 'LANG': 'fr_FR.UTF-8', # set some WSGI variables 'SCRIPT_NAME': 'test_script_name', 'SERVER_NAME': 'test_server_name', } with support.swap_attr(TestHandler, 'os_environ', os_environ): # override X and HOME variables handler = TestHandler(X="Y", HOME="/override/home") handler.setup_environ() # Check that wsgi_xxx attributes are copied to wsgi.xxx variables # of handler.environ for attr in ('version', 'multithread', 'multiprocess', 'run_once', 'file_wrapper'): self.assertEqual(getattr(handler, 'wsgi_' + attr), handler.environ['wsgi.' 
+ attr]) # Test handler.environ as a dict expected = {} setup_testing_defaults(expected) # Handler inherits os_environ variables which are not overridden # by SimpleHandler.add_cgi_vars() (SimpleHandler.base_env) for key, value in os_environ.items(): if key not in expected: expected[key] = value expected.update({ # X doesn't exist in os_environ "X": "Y", # HOME is overridden by TestHandler 'HOME': "/override/home", # overridden by setup_testing_defaults() "SCRIPT_NAME": "", "SERVER_NAME": "127.0.0.1", # set by BaseHandler.setup_environ() 'wsgi.input': handler.get_stdin(), 'wsgi.errors': handler.get_stderr(), 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.multithread': True, 'wsgi.multiprocess': True, 'wsgi.file_wrapper': util.FileWrapper, }) self.assertDictEqual(handler.environ, expected) def testCGIEnviron(self): h = BaseCGIHandler(None,None,None,{}) h.setup_environ() for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors': self.assertIn(key, h.environ) def testScheme(self): h=TestHandler(HTTPS="on"); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'https') h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') def testAbstractMethods(self): h = BaseHandler() for name in [ '_flush','get_stdin','get_stderr','add_cgi_vars' ]: self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") def testContentLength(self): # Demo one reason iteration is better than write()... ;) def trivial_app1(e,s): s('200 OK',[]) return [e['wsgi.url_scheme'].encode('iso-8859-1')] def trivial_app2(e,s): s('200 OK',[])(e['wsgi.url_scheme'].encode('iso-8859-1')) return [] def trivial_app3(e,s): s('200 OK',[]) return ['\u0442\u0435\u0441\u0442'.encode("utf-8")] def trivial_app4(e,s): # Simulate a response to a HEAD request s('200 OK',[('Content-Length', '12345')]) return [] h = TestHandler() h.run(trivial_app1) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 4\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app2) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app3) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 8\r\n' b'\r\n' b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82') h = TestHandler() h.run(trivial_app4) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 12345\r\n' b'\r\n') def testBasicErrorOutput(self): def non_error_app(e,s): s('200 OK',[]) return [] def error_app(e,s): raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(non_error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n").encode("iso-8859-1")) self.assertEqual(h.stderr.getvalue(),"") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: %s\r\n" "Content-Type: text/plain\r\n" "Content-Length: %d\r\n" "\r\n" % (h.error_status,len(h.error_body))).encode('iso-8859-1') + h.error_body) self.assertIn("AssertionError", h.stderr.getvalue()) def testErrorAfterOutput(self): MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) def testHeaderFormats(self): def non_error_app(e,s): 
s('200 OK',[]) return [] stdpat = ( r"HTTP/%s 200 OK\r\n" r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n" r"%s" r"Content-Length: 0\r\n" r"\r\n" ) shortpat = ( "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n" ).encode("iso-8859-1") for ssw in "FooBar/1.0", None: sw = ssw and "Server: %s\r\n" % ssw or "" for version in "1.0", "1.1": for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1": h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = False h.http_version = version h.server_software = ssw h.run(non_error_app) self.assertEqual(shortpat,h.stdout.getvalue()) h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = True h.http_version = version h.server_software = ssw h.run(non_error_app) if proto=="HTTP/0.9": self.assertEqual(h.stdout.getvalue(),b"") else: self.assertTrue( re.match((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()), ((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()) ) def testBytesData(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ]) return [b"data"] h = TestHandler() h.run(app) self.assertEqual(b"Status: 200 OK\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Content-Length: 4\r\n" b"\r\n" b"data", h.stdout.getvalue()) def testCloseOnError(self): side_effects = {'close_called': False} MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) class CrashyIterable(object): def __iter__(self): while True: yield b'blah' raise AssertionError("This should be caught by handler") def close(self): side_effects['close_called'] = True return CrashyIterable() h = ErrorHandler() h.run(error_app) self.assertEqual(side_effects['close_called'], True) def testPartialWrite(self): written = bytearray() class PartialWriter: def write(self, b): partial = b[:7] written.extend(partial) return len(partial) def flush(self): pass environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), PartialWriter(), sys.stderr, environ) msg = "should not do partial writes" with self.assertWarnsRegex(DeprecationWarning, msg): h.run(hello_app) self.assertEqual(b"HTTP/1.0 200 OK\r\n" b"Content-Type: text/plain\r\n" b"Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" b"Content-Length: 13\r\n" b"\r\n" b"Hello, world!", written) def testClientConnectionTerminations(self): environ = {"SERVER_PROTOCOL": "HTTP/1.0"} for exception in ( ConnectionAbortedError, BrokenPipeError, ConnectionResetError, ): with self.subTest(exception=exception): class AbortingWriter: def write(self, b): raise exception stderr = StringIO() h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertFalse(stderr.getvalue()) def testDontResetInternalStateOnException(self): class CustomException(ValueError): pass # We are raising CustomException here to trigger an exception # during the execution of SimpleHandler.finish_response(), so # we can easily test that the internal state of the handler is # preserved in case of an exception. class AbortingWriter: def write(self, b): raise CustomException stderr = StringIO() environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertIn("CustomException", stderr.getvalue()) # Test that the internal state of the handler is preserved. 
self.assertIsNotNone(h.result) self.assertIsNotNone(h.headers) self.assertIsNotNone(h.status) self.assertIsNotNone(h.environ) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.12/version000066400000000000000000000000071471441230600177310ustar00rootroot000000000000003.12.6 gevent-24.11.1/src/greentest/3.13/000077500000000000000000000000001471441230600163255ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/allsans.pem000066400000000000000000000235711471441230600204750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 
8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/badcert.pem000066400000000000000000000036101471441230600204340ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L 
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/badkey.pem000066400000000000000000000041621471441230600202720ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/capath/000077500000000000000000000000001471441230600175655ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/capath/4e1295a3.0000066400000000000000000000014561471441230600207310ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD 
VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/capath/5ed36f99.0000066400000000000000000000050111471441230600210210ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/capath/6e88d7b8.0000066400000000000000000000014561471441230600210330ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/capath/99d0fa06.0000066400000000000000000000050111471441230600210050ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk 
zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/capath/b1930218.0000066400000000000000000000030721471441230600206410ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/capath/ceff1710.0000066400000000000000000000030721471441230600210640ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v 
dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/000077500000000000000000000000001471441230600201145ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/certdata/allsans.pem000066400000000000000000000236221471441230600222610ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCczEVv5D2UDtn6 DMmZ/uCWCLyL+K5xTZp/5j3cyISoaTuU1Ku3kD97eLgpHj4Fgk5ZJi21zsQqepCj jAhBk6tj6RYUcnMbb8MuxUkQMEDW+5LfSyp+HCaetlHosWdhEDqX4kpJ5ajBwNRt 07mxQExtC4kcno0ut9rG5XzLN29XpCpRHlFFrntOgQAEoiz9/fc8qaTgb37RgGYP Qsxh7PcRDRe4ZGx1l06Irr8Y+2W50zWCfkwCS3DaLDOKIjSOfPHNqmfcfsTpzrj8 330cdPklrMIuiBv+iGklCjkPZJiEhxvY2k6ERM4HAxxuPCivrH5MCeMNYvBVUcLr GROm7JRXRllI/XubwwoAaAb+y+dZtCZ9AnzHIb+nyKiJxWAjzjR+QPL6jHrVWBVA WTc83YP5FvxUXMfY3sVv9tNSCV3cpYOW5+iXcQzLuczXnOLRYk7p9wkb0/hk9KuK 4BMA90eBhvFMCFgHJ1/xJg2nFmBHPo/xbcwPG/ma5T/McA8mAlECAwEAAQKCAYAB m29nxPNjod5Wm4xydWQYbZj/J0qkcyru/i1qpqyDbGa1sRNcg5A/A/8BPuPcWxhR /hvwVeD5XX2/i2cnQuv6D3DQP1cSNCxQPanwzknP2k7IVqUmG0RDErPWuoDIhCnR ljp0NPQsnj0fLhEkcbgG0xwx7KceUDigGsiTbatIvvBHGhQzrmTpqlVVdtMWvGRt HQEJYuMuIw6IwALHyy3CITv5wh/Bec5OhNoFF8iUZceR4ZkGWf8bYWIa25xlzH6K 4rhOOh1G2ObHHTjhZq4mGXTHY1MEkAxXKWlR3DJc0Lh5E1UETSI6WBHWRb08iwQ5 AkLOPyMpt08xHFWbJqywvlxenpri+gjY3xbXqGNhyDYWHZqlQmJVnzxoUOuuHi2R dO86IckUc4Thjbm7a6AilL9t8juNZvyeQUVgtVi25uPkm/cK6r5vo8y4UOUU41EN NOathlF69gh93il4t6zJW9jPV2WENv1H/vhKUWKW6cabX3vy4rANwy3q4V//GDEC gcEAuniGCHaEdSjV2sUHyt/yrCLbU6+eLTfNk6AQyXJk6Wrvj6g3gx90ewEq5i/C ukmSKDslr9pupq8Z/UNfYHZfJfpwEsYvIZ8DdFSd62/h66DhIoEn1v3Lwt+aexgX yGJHF0BG9JA2CU5Z5NGjlnQYqQBobO9uZMq62l15Ig1MAMHGL0ZYVvOqGZD7XvtC 4UnclK5kjp51Vd5rydEQxyi5qkyLl9Q6T0FQXOphGIOd8ifYeUGe7YC72cFPevdx wDXZAoHBANdDVvCMrjmzdrS6td39/2nHTeerFPbsOe2LIQYzqjeEe6GWqd2NL9NZ bk3/cAuVgbWtdvSQQhhmSqOC7JZic4hbZb3lK6v/sr4F/Zu0CfAu80swWFMeS7vq eQeYzN4w4dKpJArvU3ll7N9AlZhdlYkbPf0WdeOIjZawdAOxNtNe0O+j+5MsXR59 qkULatumhcKUnqxFCiVHzy21CVJtRzrtu6oGoSdFbmG82eSJ1rPXiuuDnCyzjyMW iClYRM4NOQKBwERnO/vUxihYT4LOLlqcpl/A9aYQUT0TMGWMHTxYq2343WJceeiu 3ELXHc6NDKjbnjMF54BH57lbmHQQh+dR5PuAkCZC7z0tIM5G0Bty0nRmcs/+gwfZ 2Cpnbjrjjq3iZ2O/H4hNcpUdWdqXkKP7eKReUvBLMLrmp369NVdpe0z3yGTFMFjN T8PLLHsePt14A+PCyX6L4E0cp3vEJpx4cwtmwvpyTuWN9xXuoKmmdoVDWqS4jr1f MQnjYO2h4ed5mQKBwGVttWli4DUP+r7tuwP+ynptDqg6VIaEiEcFZ2okre+63QYm l6NtAzvyx6a41XKf355bPdG+p2YXzNN+vTue6BE3/5iagxloQjCHYhgbnRMvDDRB c1y2ybihoqWRufZ30fARAoqkehCZliMbq2E/t1YDIBJAowuzLAP04LVcqxitdIV2 HvQZ00aqr7AY0SDuNdiZbqp9XWpzi4td4iaUlxuNKP/UX9rBPGGROpoU2LWkujB+ svfdI3TFCSNyE/mDAQKBwQCP++WZKxExrSFRk3W+TcHKHZb2pusfoPWE7WH6EnDW dkTZpa3PZaf0xgeglmNBv4Paxw2eMPsIhyNv62XY/6GbY6VJWRyx/s+NsazeP4ji xUOufnwTePjYw6x0pcl6BknZrHn8LCJU741h0yTum8cDdNfRKdc0AMy0gVXk4ZTG 2cAtbEcWb3J+a5kYf6mp5yx3BNwtewkGZhc2VuQ9mQNbMmOOS/pHQQTRWcxsQwyt GPAhMKawjrL1KFmu7vIqDSw= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:9c:cc:45:6f:e4:3d:94:0e:d9:fa:0c:c9:99:fe: e0:96:08:bc:8b:f8:ae:71:4d:9a:7f:e6:3d:dc:c8: 84:a8:69:3b:94:d4:ab:b7:90:3f:7b:78:b8:29:1e: 3e:05:82:4e:59:26:2d:b5:ce:c4:2a:7a:90:a3:8c: 08:41:93:ab:63:e9:16:14:72:73:1b:6f:c3:2e:c5: 49:10:30:40:d6:fb:92:df:4b:2a:7e:1c:26:9e:b6: 
51:e8:b1:67:61:10:3a:97:e2:4a:49:e5:a8:c1:c0: d4:6d:d3:b9:b1:40:4c:6d:0b:89:1c:9e:8d:2e:b7: da:c6:e5:7c:cb:37:6f:57:a4:2a:51:1e:51:45:ae: 7b:4e:81:00:04:a2:2c:fd:fd:f7:3c:a9:a4:e0:6f: 7e:d1:80:66:0f:42:cc:61:ec:f7:11:0d:17:b8:64: 6c:75:97:4e:88:ae:bf:18:fb:65:b9:d3:35:82:7e: 4c:02:4b:70:da:2c:33:8a:22:34:8e:7c:f1:cd:aa: 67:dc:7e:c4:e9:ce:b8:fc:df:7d:1c:74:f9:25:ac: c2:2e:88:1b:fe:88:69:25:0a:39:0f:64:98:84:87: 1b:d8:da:4e:84:44:ce:07:03:1c:6e:3c:28:af:ac: 7e:4c:09:e3:0d:62:f0:55:51:c2:eb:19:13:a6:ec: 94:57:46:59:48:fd:7b:9b:c3:0a:00:68:06:fe:cb: e7:59:b4:26:7d:02:7c:c7:21:bf:a7:c8:a8:89:c5: 60:23:ce:34:7e:40:f2:fa:8c:7a:d5:58:15:40:59: 37:3c:dd:83:f9:16:fc:54:5c:c7:d8:de:c5:6f:f6: d3:52:09:5d:dc:a5:83:96:e7:e8:97:71:0c:cb:b9: cc:d7:9c:e2:d1:62:4e:e9:f7:09:1b:d3:f8:64:f4: ab:8a:e0:13:00:f7:47:81:86:f1:4c:08:58:07:27: 5f:f1:26:0d:a7:16:60:47:3e:8f:f1:6d:cc:0f:1b: f9:9a:e5:3f:cc:70:0f:26:02:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername: 1.2.3.4::some other identifier, othername: 1.3.6.1.5.2.2::, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 31:5E:C0:5E:2F:47:FF:8B:92:F9:EE:3D:B1:87:D0:53:75:3B:B1:48 X509v3 Authority Key Identifier: keyid:F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption Signature Value: 72:42:a6:fc:ee:3c:21:47:05:33:e8:8c:6b:27:07:4a:ed:e2: 81:47:96:79:43:ff:0f:ef:5a:06:aa:4c:01:70:5b:21:c4:b7: 5d:17:29:c8:10:02:c3:08:7b:8c:86:56:9e:e9:7c:6e:a8:b6: 26:13:9e:1e:1f:93:66:85:67:63:9e:08:fb:55:39:56:82:f5: be:0c:38:1e:eb:c4:54:b2:a7:7b:18:55:bb:00:87:43:50:50: bb:e1:29:10:cf:3d:c9:07:c7:d2:5d:b6:45:68:1f:d6:de:00: 96:3e:29:73:f6:22:70:21:a2:ba:68:28:94:ec:37:bc:a7:00: 70:58:4e:d1:48:ae:ef:8d:11:a4:6e:10:2f:92:83:07:e2:76: ac:bf:4f:bb:d6:9f:47:9e:a4:02:03:16:f8:a8:0a:3d:67:17: 31:44:0e:68:d0:d3:24:d5:e7:bf:67:30:8f:88:97:92:0a:1e: d7:74:df:7e:7b:4c:c6:d9:c3:84:92:2b:a0:89:11:08:4c:dd: 32:49:df:36:23:d4:63:56:e4:f1:68:5a:6f:a0:c3:3c:e2:36: ee:f3:46:60:78:4d:76:a5:5a:4a:61:c6:f8:ae:18:68:c2:8d: 0e:2f:76:50:bb:be:b9:56:f1:04:5c:ac:ad:d7:d6:a4:1e:45: 45:52:f4:10:a2:0f:9b:e3:d9:73:17:b6:52:42:a6:5b:c9:e9: 8d:60:74:68:d0:1f:7a:ce:01:8e:9e:55:cb:cf:64:c1:cc:9a: 72:aa:b4:5f:b5:55:13:41:10:51:a0:2c:a5:5b:43:12:ca:cc: b7:c4:ac:f2:6f:72:fd:0d:50:6a:d6:81:c1:91:93:21:fe:de: 9a:be:e5:3c:2a:98:95:a1:42:f8:f2:5c:75:c6:f1:fd:11:b1: 22:26:33:5b:43:63:21:06:61:d2:cd:04:f3:30:c6:a8:3f:17: d3:05:a3:87:45:2e:52:1e:51:88:e3:59:4c:78:51:b0:7b:b4: 58:d9:27:22:6e:8c -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj 
MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCczEVv5D2UDtn6 DMmZ/uCWCLyL+K5xTZp/5j3cyISoaTuU1Ku3kD97eLgpHj4Fgk5ZJi21zsQqepCj jAhBk6tj6RYUcnMbb8MuxUkQMEDW+5LfSyp+HCaetlHosWdhEDqX4kpJ5ajBwNRt 07mxQExtC4kcno0ut9rG5XzLN29XpCpRHlFFrntOgQAEoiz9/fc8qaTgb37RgGYP Qsxh7PcRDRe4ZGx1l06Irr8Y+2W50zWCfkwCS3DaLDOKIjSOfPHNqmfcfsTpzrj8 330cdPklrMIuiBv+iGklCjkPZJiEhxvY2k6ERM4HAxxuPCivrH5MCeMNYvBVUcLr GROm7JRXRllI/XubwwoAaAb+y+dZtCZ9AnzHIb+nyKiJxWAjzjR+QPL6jHrVWBVA WTc83YP5FvxUXMfY3sVv9tNSCV3cpYOW5+iXcQzLuczXnOLRYk7p9wkb0/hk9KuK 4BMA90eBhvFMCFgHJ1/xJg2nFmBHPo/xbcwPG/ma5T/McA8mAlECAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBQx XsBeL0f/i5L57j2xh9BTdTuxSDB9BgNVHSMEdjB0gBTz7JSO8o4wxI5owr+OahnA wZ92ZaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAckKm/O48IUcFM+iMaycH Su3igUeWeUP/D+9aBqpMAXBbIcS3XRcpyBACwwh7jIZWnul8bqi2JhOeHh+TZoVn Y54I+1U5VoL1vgw4HuvEVLKnexhVuwCHQ1BQu+EpEM89yQfH0l22RWgf1t4Alj4p c/YicCGiumgolOw3vKcAcFhO0Uiu740RpG4QL5KDB+J2rL9Pu9afR56kAgMW+KgK PWcXMUQOaNDTJNXnv2cwj4iXkgoe13TffntMxtnDhJIroIkRCEzdMknfNiPUY1bk 8Whab6DDPOI27vNGYHhNdqVaSmHG+K4YaMKNDi92ULu+uVbxBFysrdfWpB5FRVL0 EKIPm+PZcxe2UkKmW8npjWB0aNAfes4Bjp5Vy89kwcyacqq0X7VVE0EQUaAspVtD EsrMt8Ss8m9y/Q1QataBwZGTIf7emr7lPCqYlaFC+PJcdcbx/RGxIiYzW0NjIQZh 0s0E8zDGqD8X0wWjh0UuUh5RiONZTHhRsHu0WNknIm6M -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/badcert.pem000066400000000000000000000036101471441230600222230ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB 
AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/badkey.pem000066400000000000000000000041621471441230600220610ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/capath/000077500000000000000000000000001471441230600213545ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/certdata/capath/4e1295a3.0000066400000000000000000000014561471441230600225200ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy 
dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/capath/5ed36f99.0000066400000000000000000000050111471441230600226100ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/capath/6e88d7b8.0000066400000000000000000000014561471441230600226220ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD 
VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/capath/99d0fa06.0000066400000000000000000000050111471441230600225740ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/certdata/capath/b1930218.0000066400000000000000000000031271471441230600224310ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEgDCCAuigAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBANCgm7G5O3nuMS+4URwBde0JWUysyL9qCvh6 CPAl4yV7avjE2KqgYAclsM9zcQVSaL8Gk64QYZa8s2mBGn0Z/CCGj5poG+3N4mxh Z8dOVepDBiEb6bm+hF/C2uuJiOBCpkVJKtC5a4yTyUQ7yvw8lH/dcMWt2Es73B74 VUu1J4b437CDz/cWN78TFzTUyVXtaxbJf60gTvAe2Ru/jbrNypbvHmnLUWZhSA3o eaNZYdQQjeANOwuFttWFEt2lB8VL+iP6VDn3lwvJREceVnc8PBMBC2131hS6RPRT NVbZPbk+NV/bM5pPWrk4RMkySf5m9h8al6rKTEr2uF5Af/sLHfhbodz4wC7QbUn1 0kbUkFf+koE0ri04u6gXDOHlP+L3JgVUUPVksxxuRP9vqbQDlukOwojYclKQmcZB D0aQWbg+b9Linh02gpXTWIoS8+LYDSBRI/CQLZo+fSaGsqfX+ShgA+N3x4gEyf6J d3AQT8Ogijv0q0J74xSS2K4W1qHefQIDAQABo2MwYTAdBgNVHQ4EFgQU8+yUjvKO MMSOaMK/jmoZwMGfdmUwHwYDVR0jBBgwFoAU8+yUjvKOMMSOaMK/jmoZwMGfdmUw DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD ggGBAIsAVHKzjevzrzSf1mDq3oQ/jASPGaa+AmfEY8V040c3WYOUBvFFGegHL9ZO S0+oPccHByeS9H5zT4syGZRGeiXE2cQnsBFjOmCLheFzTzQ7a6Q0jEmOzc9PsmUn QRmw/IAxePJzapt9cTRQ/Hio2gW0nFs6mXprXe870+k7MwESZc9eB9gZr9VT6vAQ rMS2Jjw0LnTuZN0dNnWJRACwDf0vswHMGosCzWzogILKv4LXAJ3YNhXSBzf8bHMd 2qgc6CCOMnr+bScW5Fhs6z7w/iRSKXG4lntTS0UgVUBehhvsyUaRku6sk2WRLpS2 tqzoozSJpBoSDU1EpVLti5HuL6avpJUl+c7HW6cA05PKtDxdTfexPMxttEW+gu0Y kMiG0XVRUARM6E/S1lCqdede/6F7Jxkca0ksbE1rY8w7cwDzmSbQgofTqTactD25 SGiokvAnjgzNFXZChIDJP6N+tN3X+Kx2umCXPFofTt5x7gk5EN0x1WhXXRrlQroO aOZF0w== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/capath/ceff1710.0000066400000000000000000000031271471441230600226540ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEgDCCAuigAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBANCgm7G5O3nuMS+4URwBde0JWUysyL9qCvh6 CPAl4yV7avjE2KqgYAclsM9zcQVSaL8Gk64QYZa8s2mBGn0Z/CCGj5poG+3N4mxh Z8dOVepDBiEb6bm+hF/C2uuJiOBCpkVJKtC5a4yTyUQ7yvw8lH/dcMWt2Es73B74 VUu1J4b437CDz/cWN78TFzTUyVXtaxbJf60gTvAe2Ru/jbrNypbvHmnLUWZhSA3o eaNZYdQQjeANOwuFttWFEt2lB8VL+iP6VDn3lwvJREceVnc8PBMBC2131hS6RPRT NVbZPbk+NV/bM5pPWrk4RMkySf5m9h8al6rKTEr2uF5Af/sLHfhbodz4wC7QbUn1 0kbUkFf+koE0ri04u6gXDOHlP+L3JgVUUPVksxxuRP9vqbQDlukOwojYclKQmcZB D0aQWbg+b9Linh02gpXTWIoS8+LYDSBRI/CQLZo+fSaGsqfX+ShgA+N3x4gEyf6J d3AQT8Ogijv0q0J74xSS2K4W1qHefQIDAQABo2MwYTAdBgNVHQ4EFgQU8+yUjvKO MMSOaMK/jmoZwMGfdmUwHwYDVR0jBBgwFoAU8+yUjvKOMMSOaMK/jmoZwMGfdmUw DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD ggGBAIsAVHKzjevzrzSf1mDq3oQ/jASPGaa+AmfEY8V040c3WYOUBvFFGegHL9ZO S0+oPccHByeS9H5zT4syGZRGeiXE2cQnsBFjOmCLheFzTzQ7a6Q0jEmOzc9PsmUn QRmw/IAxePJzapt9cTRQ/Hio2gW0nFs6mXprXe870+k7MwESZc9eB9gZr9VT6vAQ rMS2Jjw0LnTuZN0dNnWJRACwDf0vswHMGosCzWzogILKv4LXAJ3YNhXSBzf8bHMd 2qgc6CCOMnr+bScW5Fhs6z7w/iRSKXG4lntTS0UgVUBehhvsyUaRku6sk2WRLpS2 tqzoozSJpBoSDU1EpVLti5HuL6avpJUl+c7HW6cA05PKtDxdTfexPMxttEW+gu0Y kMiG0XVRUARM6E/S1lCqdede/6F7Jxkca0ksbE1rY8w7cwDzmSbQgofTqTactD25 SGiokvAnjgzNFXZChIDJP6N+tN3X+Kx2umCXPFofTt5x7gk5EN0x1WhXXRrlQroO aOZF0w== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/certdata/cert3.pem000066400000000000000000000041111471441230600216340ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKAqKHEL7aDt 3swl8hQF8VaK4zDGDRaF3E/IZTMwCN7FsQ4ejSiOe3E90f0phHCIpEpv2OebNenY IpOGoFgkh62r/cthmnhu8Mn+FUIv17iOq7WX7B30OSqEpnr1voLX93XYkAq8LlMh P79vsSCVhTwow3HZY7krEgl5WlfryOfj1i1TODSFPRCJePh66BsOTUvV/33GC+Qd pVZVDGLowU1Ycmr/FdRvwT+F39Dehp03UFcxaX0/joPhH5gYpBB1kWTAQmxuqKMW 9ZZs6hrPtMXF/yfSrrXrzTdpct9paKR8RcufOcS8qju/ISK+1P/LXg2b5KJHedLo TTIO3yCZ4d1odyuZBP7JDrI05gMJx95gz6sG685Qc+52MzLSTwr/Qg+MOjQoBy0o 8fRRVvIMEwoN0ZDb4uFEUuwZceUP1vTk/GGpNQt7ct4ropn6K4Zta3BUtovlLjZa IIBhc1KETUqjRDvC6ACKmlcJ/5pY/dbH1lOux+IMFsh+djmaV90b3QIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUP7HpT6C+MGY+ChjID0caTzRqD0IwfQYDVR0jBHYwdIAU8+yUjvKOMMSOaMK/ jmoZwMGfdmWhUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAMo0usXQzycxMtYN JzC42xfftzmnu7E7hsQx/fur22MazJCruU6rNEkMXow+cKOnay+nmiV7AVoYlkh2 +DZ4dPq8fWh/5cqmnXvccr2jJVEXaOjp1wKGLH0WfLXcRLIK4/fJM6NRNoO81HDN hJGfBrot0gUKZcPZVQmouAlpu5OGwrfCkHR8v/BdvA5jE4zr+g/x+uUScE0M64wu okJCAAQP/PkfQZxjePBmk7KPLuiTHFDLLX+2uldvUmLXOQsJgqumU03MBT4Z8NTA zqmtEM65ceSP8lo8Zbrcy+AEkCulFaZ92tyjtbe8oN4wTmTLFw06oFLSZzuiOgDV OaphdVKf/pvA6KBpr6izox0KQFIE5z3AAJZfKzMGDDD20xhy7jjQZNMAhjfsT+k4 SeYB/6KafNxq08uoulj7w4Z4R/EGpkXnU96ZHYHmvGN0RnxwI1cpYHCazG8AjsK/ anN9brBi5twTGrn+D8LRBqF5Yn+2MKkD0EdXJdtIENHP+32sPQ== -----END CERTIFICATE-----gevent-24.11.1/src/greentest/3.13/certdata/ffdh3072.pem000066400000000000000000000042441471441230600220460ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: 
ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.13/certdata/idnsans.pem000066400000000000000000000233211471441230600222570ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/AIBADANBgkqhkiG9w0BAQEFAASCBuYwggbiAgEAAoIBgQCp6zt40WB3K7yj BGugnRuqI3ApftThZWDIpvW0cVmN0nqQxsO6CCnS4dS7SYhGFiIqWjNVc2WG0gv7 nC5DFguqbndNZk9/SjX8EOxKz4ANjd61WnTkDO5Tbiiyd+TuEBxhmbEF69bF9dtd 1Sgo8jmM7j+aa6ClYh/49bx+blJDF76EGSrmB1q+obMeZURhPXNBeoiqKR83x5Hc LTJYMocvb6m8uABwuSka13Gb3QGu06p5ldK6TDK38HsoOy6MFO5F1PrkakG/eBHO jcBOGPfNmTwWOqvwlcQWykr4QspWS+yTzdkgZ+mxar/yQuq7wuYSNaEfGH5yoYtV WIgKwwZRDPqpSQuVe+J+MWLPQ6RTM+rXIHVzHtPk1f8DrgN+hSepJy/sVBBEQCzj nyB+scn76ETWch3iyVoMj3oVOGs0b4XTDMmUw/DmEt5TDah7TqE3G+fpBIbgMSjx MzUQZl27izmM9nQCJRAosNoNwXqlM754K9WcY6gT8kkcj1CfTmMCAwEAAQKCAYAz 9ZdHkDsf5fN2pAznXfOOOOz8+2hMjmwkn42GAp1gdWr+Z5GFiyaC8oTTSp6N1AnZ iqCk8jcrHYMFi1JIOG8TzFjWBcGsinxsmp4vGDmvq2Ddcw5IiD2+rHJsdKZAOBP9 snpD9cTE3zQYAu0XbE617krrxRqoSBO/1SExRjoIgzPCgFGyarBQl/DGjC/3Tku2 y6oL4qxFqdTMD9QTzUuycUJlz5xu2+gaaaQ3hcMUe2xnZq28Qz3FKpf2ivZmZqWf 4+AIe0lRosmFoLAFjIyyuGCkWZ2t9KDIZV0OOS4+DvVOC/Um9r4VojeikripCGKY 2FzkkuQP3jz6pJ1UxCDg7YXZdR2IbcS18F1OYmLViU8oLDR6T01s0Npmp39dDazf A4U+WyV3o1ydiSpwAiN8pJadmr5BHrCSmawV8ztW5yholufrO+FR5ze2+QG99txm 6l7lUI8Nz01lYG6D10MjaQ9INk2WSjBPVNbfsTl73/hR76/319ctfOINRMHilJ0C gcEAvFgTdc5Qf9E7xEZE5UtpSTPvZ24DxQ7hmGW3pTs6+Nw4ccknnW0lLkWvY4Sf gXl4TyTTUhe6PAN3UHzbjN2vdsTmAVUlnkNH40ymF8FgRGFNDuvuCZvf5ZWNddSl 6Vu/e5TFPDAqS8rGnl23DgXhlT+3Y0/mrftWoigdOxgximLAsmmqp/tANGi9jqE1 3L0BN+xKqMMKSXkMtuzJHuini8MaWQgQcW/4czh4ArdesMzuVrstOD8947XHreY9 pesVAoHBAOb0y/AFEdR+mhk/cfqjTzsNU2sS9yHlzMVgecF8BA26xcfAwg4d47VS +LK8fV6KC4hHk4uQWjQzCG2PYXBbFT52xeJ3EC8DwWxJP09b4HV/1mWxXl5htjnr dfyTmXKvEe5ZBpKGWc8i7s7jBi7R5EpgIfc586iNRyjYAk60dyG0iP13SurRvXBg ID25VR4wABl3HQ3Hhv61dqC9FPrdHZQJdysfUqNrAFniWsSR2eyG5i4S1uHa3G+i MzBTOuBRlwKBwBNXUBhG6YlWqTaMqMKLLfKwfKM4bvargost1uAG5xVrN/inWYQX EzxfN5WWpvKa0Ln/5BuICD3ldTk0uS8MDNq7eYslfUl1S0qSMnQ6DXK4MzuXCsi9 0w42f2JcRfVi0JUWP/LgV1eVKTRWF1g/Tl0PP/vY1q2DI/BfAjFxWJUHcxZfN4Es kflP0Dd3YpqaZieiAkC2VrYY0i9uvXCJH7uAe5Is+9NKVk8uu1Q8FGM/iDIr4obm J6rcnfbDsAz7yQKBwGtIbW9qO3UU9ioiQaTmtYg90XEclzXk1HEfNo+9NvjVuMfo b3w1QDBbgXEtg6MlxuOgNBaRkIVM625ROzcA6GZir9tZ6Wede/z8LW+Ew0hxgLsu YCLBiu9uxBj2y0HttwubySTJSfChToNGC/o1v7EY5M492kSCk/qSFMhQpkI+5Z+w CVn44eHQlUl2zOY/79vka9eZxsiMrLVP/+3kRrgciYG7hByrOLeIIRfMlIl9xHDE iZLSorEsjFC3aNMIswKBwFELC2fvlziW9rECQcGXnvc1DPmZcxm1ATFZ93FpKleF TjLIWSdst0PmO8CSIuEZ2ZXPoK9CMJyQG+kt3k7IgZ1xKXg9y6ThwbznurXp1jaW NjEnYtFMBK9Ur3oaAsrG2XwZ2PMvnI/Yp8tciGvjJlzSM8gHJ9BL8Yf+3gIJi/0D KtaF9ha9J/SDDZdEiLIQ4LvSqYmlUgsCgiUvY3SVwCh8xDfBWD1hKw9vUiZu5cnJ 81hAHFgeD4f+C8fLols/sA== -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not 
Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:a9:eb:3b:78:d1:60:77:2b:bc:a3:04:6b:a0:9d: 1b:aa:23:70:29:7e:d4:e1:65:60:c8:a6:f5:b4:71: 59:8d:d2:7a:90:c6:c3:ba:08:29:d2:e1:d4:bb:49: 88:46:16:22:2a:5a:33:55:73:65:86:d2:0b:fb:9c: 2e:43:16:0b:aa:6e:77:4d:66:4f:7f:4a:35:fc:10: ec:4a:cf:80:0d:8d:de:b5:5a:74:e4:0c:ee:53:6e: 28:b2:77:e4:ee:10:1c:61:99:b1:05:eb:d6:c5:f5: db:5d:d5:28:28:f2:39:8c:ee:3f:9a:6b:a0:a5:62: 1f:f8:f5:bc:7e:6e:52:43:17:be:84:19:2a:e6:07: 5a:be:a1:b3:1e:65:44:61:3d:73:41:7a:88:aa:29: 1f:37:c7:91:dc:2d:32:58:32:87:2f:6f:a9:bc:b8: 00:70:b9:29:1a:d7:71:9b:dd:01:ae:d3:aa:79:95: d2:ba:4c:32:b7:f0:7b:28:3b:2e:8c:14:ee:45:d4: fa:e4:6a:41:bf:78:11:ce:8d:c0:4e:18:f7:cd:99: 3c:16:3a:ab:f0:95:c4:16:ca:4a:f8:42:ca:56:4b: ec:93:cd:d9:20:67:e9:b1:6a:bf:f2:42:ea:bb:c2: e6:12:35:a1:1f:18:7e:72:a1:8b:55:58:88:0a:c3: 06:51:0c:fa:a9:49:0b:95:7b:e2:7e:31:62:cf:43: a4:53:33:ea:d7:20:75:73:1e:d3:e4:d5:ff:03:ae: 03:7e:85:27:a9:27:2f:ec:54:10:44:40:2c:e3:9f: 20:7e:b1:c9:fb:e8:44:d6:72:1d:e2:c9:5a:0c:8f: 7a:15:38:6b:34:6f:85:d3:0c:c9:94:c3:f0:e6:12: de:53:0d:a8:7b:4e:a1:37:1b:e7:e9:04:86:e0:31: 28:f1:33:35:10:66:5d:bb:8b:39:8c:f6:74:02:25: 10:28:b0:da:0d:c1:7a:a5:33:be:78:2b:d5:9c:63: a8:13:f2:49:1c:8f:50:9f:4e:63 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5B:93:42:58:B0:B4:18:CC:41:4C:15:EB:42:33:66:77:4C:71:2F:42 X509v3 Authority Key Identifier: keyid:F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption Signature Value: 5f:d8:9b:dc:22:55:80:47:e1:9b:04:3e:46:53:9b:e5:a7:4a: 8f:eb:53:01:39:d5:04:f6:cf:dc:48:84:8a:a9:c3:a5:35:22: 2f:ab:74:77:ec:a6:fd:b1:e6:e6:74:82:38:54:0b:27:36:e6: ec:3d:fe:92:1a:b2:7a:35:0d:a3:e5:7c:ff:e5:5b:1a:28:4b: 29:1f:99:1b:3e:11:e9:e2:e0:d7:da:06:4f:e3:7b:8c:ad:30: f4:39:24:e8:ad:2a:0e:71:74:ab:ed:62:e9:9f:85:7e:6a:b0: bb:53:b4:d7:6b:b8:da:54:15:5c:9a:41:cf:61:f1:ab:67:d6: 27:5c:0c:a3:d7:41:e7:27:3e:58:89:d6:1f:3f:2a:52:cc:13: 0b:4b:e6:d6:ba:a0:c7:fd:e3:17:a4:b8:da:cc:cb:88:70:21: 3b:70:df:09:40:6c:e7:02:81:08:80:b0:36:77:fb:44:c5:cf: bf:19:54:7c:d1:4e:1f:a2:44:9e:d8:56:0e:bf:4b:0b:e0:84: 6f:bc:f6:c6:7f:35:7a:17:ca:83:b3:82:c6:4e:d3:f3:d8:30: 05:fd:6d:3c:8a:ab:63:55:6f:c5:18:ba:66:fe:e2:35:04:2b: ae:76:34:f0:56:18:e8:54:db:83:b2:1b:93:0a:25:81:81:f0: 25:ca:0a:95:be:8e:2f:05:3f:6c:e7:de:d1:7c:b8:a3:71:7c: 6f:8a:05:c3:69:eb:6f:e6:76:8c:11:e1:59:0b:12:53:07:42: 84:e8:89:ee:ab:7d:28:81:48:e8:79:d5:cf:a2:05:a4:fd:72: 2c:7d:b4:1c:08:90:4e:0d:10:05:d1:9a:c0:69:4c:0a:14:39: 
17:fb:4d:5b:f6:42:bb:46:27:23:0f:5e:57:5b:b8:ae:9b:a3: 0e:23:59:41:63:41:a4:f1:69:df:b3:a3:5c:10:d5:63:30:74: a8:3c:0c:8e:1c:6b:10:e1:13:27:02:26:9b:fd:88:93:7e:91: 9c:f9:c2:07:27:a4 -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQCp6zt40WB3K7yj BGugnRuqI3ApftThZWDIpvW0cVmN0nqQxsO6CCnS4dS7SYhGFiIqWjNVc2WG0gv7 nC5DFguqbndNZk9/SjX8EOxKz4ANjd61WnTkDO5Tbiiyd+TuEBxhmbEF69bF9dtd 1Sgo8jmM7j+aa6ClYh/49bx+blJDF76EGSrmB1q+obMeZURhPXNBeoiqKR83x5Hc LTJYMocvb6m8uABwuSka13Gb3QGu06p5ldK6TDK38HsoOy6MFO5F1PrkakG/eBHO jcBOGPfNmTwWOqvwlcQWykr4QspWS+yTzdkgZ+mxar/yQuq7wuYSNaEfGH5yoYtV WIgKwwZRDPqpSQuVe+J+MWLPQ6RTM+rXIHVzHtPk1f8DrgN+hSepJy/sVBBEQCzj nyB+scn76ETWch3iyVoMj3oVOGs0b4XTDMmUw/DmEt5TDah7TqE3G+fpBIbgMSjx MzUQZl27izmM9nQCJRAosNoNwXqlM754K9WcY6gT8kkcj1CfTmMCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUW5NCWLC0GMxBTBXrQjNmd0xxL0IwfQYDVR0jBHYwdIAU 8+yUjvKOMMSOaMK/jmoZwMGfdmWhUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF/Ym9wiVYBH4ZsEPkZTm+WnSo/rUwE51QT2z9xIhIqpw6U1Ii+rdHfspv2x5uZ0 gjhUCyc25uw9/pIasno1DaPlfP/lWxooSykfmRs+Eeni4NfaBk/je4ytMPQ5JOit Kg5xdKvtYumfhX5qsLtTtNdruNpUFVyaQc9h8atn1idcDKPXQecnPliJ1h8/KlLM EwtL5ta6oMf94xekuNrMy4hwITtw3wlAbOcCgQiAsDZ3+0TFz78ZVHzRTh+iRJ7Y Vg6/SwvghG+89sZ/NXoXyoOzgsZO0/PYMAX9bTyKq2NVb8UYumb+4jUEK652NPBW GOhU24OyG5MKJYGB8CXKCpW+ji8FP2zn3tF8uKNxfG+KBcNp62/mdowR4VkLElMH QoToie6rfSiBSOh51c+iBaT9cix9tBwIkE4NEAXRmsBpTAoUORf7TVv2QrtGJyMP XldbuK6bow4jWUFjQaTxad+zo1wQ1WMwdKg8DI4caxDhEycCJpv9iJN+kZz5wgcn pA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/keycert.passwd.pem000066400000000000000000000102711471441230600235660ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIc17oH9riZswCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDwi0Mkj59S0hplpnDSNHwPBIIH EFGdZuO4Cwzg0bspLhE1UpBN5cBq1rKbf4PyVtCczIqJt3KjO3H5I4KdQd9zihkN A1qzMiqVZOnQZw1eWFXMdyWuCgvNe1S/PRLWY3iZfnuZ9gZXQvyMEHy4JU7pe2Ib GNm9mzadzJtGv0YZ05Kkza20zRlOxC/cgaNUV6TPeTSwW9CR2bylxw0lTFKBph+o uFGcAzhqQuw9vsURYJf1f1iE7bQsnWU2kKmb9cx6kaUXiGJpkUMUraBL/rShoHa0 eet6saiFnK3XGMCIK0mhS9s92CIQV5H9oQQPo/7s6MOoUHjC/gFoWBXoIDOcN9aR ngybosCLtofY2m14WcHXvu4NJnfnKStx73K3dy3ZLr2iyjnsqGD1OhqGEWOVG/ho QiZEhZ+9sOnqWI2OuMhMoQJNvrLj7AY4QbdkahdjNvLjDAQSuMI2uSUDFDNfkQdy hqF/iiEM28PmSHCapgCpzR4+VfEfXBoyBCqs973asa9qhrorfnBVxXnvsqmKNLGH dymtEPei9scpoftE5T9TPqQj46446bXk23Xpg8QIFa8InQC2Y+yZqqlqvzCAbN6S Qcq1DcTSAMnbmBXVu9hPmJYIYOlBMHL8JGbsGrkVOhLiiIou4w3G+DyAvIwPj6j9 
BHLqa7HgUnUEC+zL4azVHOSMqmDsOiF3w9fkBWNSkOyNoZpe+gBjbxq7sp+GjAJv 1CemRC3LSoNzLcjRG2IEGs1jlEHSSfijvwlE4lEy3JVc+QK8BOkKXXDVhY1SQHcS pniEnj95RFVmAujdFDBoUgySyxK/y6Ju/tHPpSTG9VMNbJTKTdBWAVWWHVJzBFhR 0Ug62VrBK7fmfUdH1b37aIxqsPND2De6WLm0RX+7r3XPDJ7hm+baKCchI5CvnG19 ky8InhMbU4qV+9LceMETmNKKDkhKl4Zx/Y3nab7DG9s/RZfrTdCHojc9Va/t0Ykp qlVrvdj/893CdI78SW3VjWBJGWfKMyT16hBMY3TPz6ulbFXk6Pul/KcLLWslghS+ GKZjyBe96UwfH4C7WjuIB+zo+De3Wr8xOCdJR5zwEutBMM+L/Wul8B6wIEGS71kB TN/CAoeIgHLQFbcw4YE80dllTnSEsqF+ahVTTcCt3iLUaOgeTUxteMbXY9+nekSX x8aUcvkMhbU9omdEowFr5/HIMKXo4UXat4fIGgh2pG8v8fA46hZXkhWUh/PhbnQw StXzn4fA13erqVI679kHMmOIQebv4oqdcwkImrH5fEsACNjQbkYZF5fD4z+1GHkA e2eGqejVT+OV14I8qfx9oqs2f8aqijH8fYLU0TymE7p53DYZy4WvDwk22I4rMzoQ sGkOZwfKUYpdBI2t6tEf1ROBjoNG0E2Onq+5iooibN08rKXKAQMWsK+2vNHNHwBW 49vRheQNnRqSuLY+b7QAjA0KuRWo9YptCbnXyF/Aw64jMfAGjggDLoaZfALGZk3n P+ZoL9xc7rYRpIca44BeYI6AhHFcWWIOX7Sm69FvmyHlfsgTAXVgY1lQPuGy68Au PHSkgUyydDtkrfb2W2gJuqD/+h+9X2z+o/+nETYPCZm3sH5xvTY/DTcTx9kTpXxx YQBaFTt12eVX7wZVr5K3u9M371rg+SeXC2SzL4T6APHD52cxbA1jgM0JFh3KJTuk fADxIzM1NdzYQ45J6i2w+/Fh4VPnXZ0oiUSwE094XTBlvhI6zHgar2Q0Qx1P51vB odd9XzyDLULuIzei0DYjTIg0KhE+wAGq1I5qtiMhmy5TdCKKNA9WGb1Pq38zpyjU wGmztzSzCEjfLyhChaUObVRRxEfD5ioxKer/fczOhKQe8FXmGy5u/04tVmmEyNOO JkkDtZy+UbKuJ257QnY72wPjgtHNy+S4Iv7zHUbNJNhxk+xBlRcmRNWCEM20LBSO Tj4S9gyan+gH2+WFxy8FaENUhM+vHFEeJcjQIBFBeWICmSmdkh/r0YK1UVJ9NLfR l0HiKm3lKg+kNCexTAPLMt2rGZ4PAKVnhVaxtuHMYYDpl2GYmyH73B9BfcPdA/Zx GUBmd9hwcLz9SuUg+fjHcogZRRRlcZlKhw3zUCsqHSCQXZCQm7mBlG/5C/7cM7wQ IRtsNospLStOg51gv21ClQ+uWx30XEcwmnIfVoLl1vMaguuf1u5u3dWBD/UgmqiP 1Ym8jv0BF/AS+u/CtUpwe7ZWxFT0vbyi10xxIF7O07fwFa+5dME3ycZwcyiE95K1 ftcHlGOIhuVBMSNZXC4I9LM+7IWy+hanUcK+v5RvwBDSJV3fnAOdfrka1L/HyEEb x/FYKEiU/TAjXDw2NtZ2itpADTSG5KbdJSwPr01Ak7aE+QYe7TIKJhBDZXGQlqq8 1wv77zyv7V5Xq2cxSEKgSqzB9fhYZCASe8+HWlV2T+Sd -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEgzCCAuugAwIBAgIUU+FIM/dUbCklbdDwNPd2xemDAEwwDQYJKoZIhvcNAQEL BQAwXzELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxo b3N0MB4XDTIzMTEyNTA0MjEzNloXDTQzMDEyNDA0MjEzNlowXzELMAkGA1UEBhMC WFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MIIBojANBgkqhkiG 9w0BAQEFAAOCAY8AMIIBigKCAYEAzXTIl1su11AGu6sDPsoxqcRGyAX0yjxIcswF vj+eW/fBs2GcBby95VEOKpJPKRYYB7fAEAjAKK59zFdsDX/ynxPZLqyLQocBkFVq tclhCRZu//KZND+uQuHSx3PjGkSvK/nrGjg5T0bkM4SFeb0YdLb+0aDTKGozUC82 oBAilNcrFz1VXpEF0qUe9QeKQhyd0MaW5T1oSn+U3RAj2MXm3TGExyZeaicpIM5O HFlnwUxsYSDZo0jUj342MbPOZh8szZDWi042jdtSA3i8uMSplEf4O8ZPmX0JCtrz fVjRVdaKXIjrhMNWB8K44q6AeyhqJcVHtOmPYoHDm0qIjcrurt0LZaGhmCuKimNd njcPxW0VQmDIS/mO5+s24SK+Mpznm5q/clXEwyD8FbrtrzV5cHCE8eNkxjuQjkmi wW9uadK1s54tDwRWMl6DRWRyxoF0an885UQWmbsgEB5aRmEx2L0JeD0/q6Iw1Nta As8DG4AaWuYMrgZXz7XvyiMq3IxVAgMBAAGjNzA1MBQGA1UdEQQNMAuCCWxvY2Fs aG9zdDAdBgNVHQ4EFgQUl2wd7iWE1JTZUVq2yFBKGm9N36owDQYJKoZIhvcNAQEL BQADggGBAF0f5x6QXFbgdyLOyeAPD/1DDxNjM68fJSmNM/6vxHJeDFzK0Pja+iJo xv54YiS9F2tiKPpejk4ujvLQgvrYrTQvliIE+7fUT0dV74wZKPdLphftT9uEo1dH TeIld+549fqcfZCJfVPE2Ka4vfyMGij9hVfY5FoZL1Xpnq/ZGYyWZNAPbkG292p8 KrfLZm/0fFYAhq8tG/6DX7+2btxeX4MP/49tzskcYWgOjlkknyhJ76aMG9BJ1D7F /TIEh5ihNwRTmyt023RBz/xWiN4xBLyIlpQ6d5ECKmFNFr0qnEui6UovfCHUF6lZ qcAQ5VFQQ2CayNlVmQ+UGmWIqANlacYWBt7Q6VqpGg24zTMec1/Pqd6X07ScSfrm MAtywrWrU7p1aEkN5lBa4n/XKZHGYMjor/YcMdF5yjdSrZr274YYO1pafmTFwRwH 5o16c8WPc0aPvTFbkGIFT5ddxYstw+QwsBtLKE2lJ4Qfmxt0Ew/0L7xkbK1BaCOo EGD2IF7VDQ== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/certdata/keycert.pem000066400000000000000000000100171471441230600222640ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQDNdMiXWy7XUAa7 qwM+yjGpxEbIBfTKPEhyzAW+P55b98GzYZwFvL3lUQ4qkk8pFhgHt8AQCMAorn3M V2wNf/KfE9kurItChwGQVWq1yWEJFm7/8pk0P65C4dLHc+MaRK8r+esaODlPRuQz hIV5vRh0tv7RoNMoajNQLzagECKU1ysXPVVekQXSpR71B4pCHJ3QxpblPWhKf5Td ECPYxebdMYTHJl5qJykgzk4cWWfBTGxhINmjSNSPfjYxs85mHyzNkNaLTjaN21ID eLy4xKmUR/g7xk+ZfQkK2vN9WNFV1opciOuEw1YHwrjiroB7KGolxUe06Y9igcOb SoiNyu6u3QtloaGYK4qKY12eNw/FbRVCYMhL+Y7n6zbhIr4ynOebmr9yVcTDIPwV uu2vNXlwcITx42TGO5COSaLBb25p0rWzni0PBFYyXoNFZHLGgXRqfzzlRBaZuyAQ HlpGYTHYvQl4PT+rojDU21oCzwMbgBpa5gyuBlfPte/KIyrcjFUCAwEAAQKCAYAO M1r0+TCy4Z1hhceu5JdLql0RELZTbxi71IW2GVwW87gv75hy3hGLAs/1mdC+YIBP MkBka1JqzWq0/7rgcP5CSAMsInFqqv2s7fZ286ERGXuZFbnInnkrNsQUlJo3E9W+ tqKtGIM/i0EVHX0DRdJlqMtSjmjh43tB+M1wAUV+n6OjEtJue5wZK+AIpBmGicdP qZY+6IBnm8tcfzPXFRCoq7ZHdIu0jxnc4l2MQJK3DdL04KoiStOkSl8xDsI+lTtq D3qa41LE0TY8X2jJ/w6KK3cUeK7F4DQYs+kfCKWMVPpn0/5u6TbC1F7gLvkrseph 7cIgrruNNs9iKacnR1w3U72R+hNxHsNfo4RGHFa192p/Mfc+kiBd5RNR/M9oHdeq U6T/+KM+QyF5dDOyonY0QjwfAcEx+ZsV72nj8AerjM907I6dgHo/9YZ2S1Dt/xuG ntD+76GDzmrOvXmmpF0DsTn+Wql7AC4uzaOjv6PVziqz03pR61RpjPDemyJEWMkC gcEA7BkGGX3enBENs3X6BYFoeXfGO/hV7/aNpA6ykLzw657dqwy2b6bWLiIaqZdZ u0oiY6+SpOtavkZBFTq4bTVD58FHL0n73Yvvaft507kijpYBrxyDOfTJOETv+dVG XiY8AUSAE6GjPi0ebuYIVUxoDnMeWDuRJNvTck4byn1hJ1aVlEhwXNxt/nAjq48s 5QDuR6Z9F8lqEACRYCHSMQYFm35c7c1pPsHJnElX8a7eZ9lT7HGPXHaf/ypMkOzo dvJNAoHBAN7GhDomff/kSgQLyzmqKqQowTZlyihnReapygwr8YpNcqKDqq6VlnfH Jl1+qtSMSVI0csmccwJWkz1WtSjDsvY+oMdv4gUK3028vQAMQZo+Sh7OElFPFET3 UmL+Nh73ACPgpiommsdLZQPcIqpWNT5NzO+Jm5xa+U9ToVZgQ7xjrqee5NUiMutr r7UWAz7vDWu3x7bzYRRdUJxU18NogGbFGWJ1KM0c67GUXu2E7wBQdjVdS78UWs+4 XBxKQkG2KQKBwQCtO+M82x122BB8iGkulvhogBjlMd8klnzxTpN5HhmMWWH+uvI1 1G29Jer4WwRNJyU6jb4E4mgPyw7AG/jssLOlniy0Jw32TlIaKpoGXwZbJvgPW9Vx tgnbDsIiR3o9ZMKMj42GWgike4ikCIc+xzRmvdMbHIHwUJfCfEtp9TtPGPnh9pDz og3XLsMNg52GXnt3+VI6HOCE41XH+qj2rZt5r2tSVXEOyjQ7R5mOzSeFfXJVwDFX v/a/zHKnuB0OAdUCgcBLrxPTEaqy2eMPdtZHM/mipbnmejRw/4zu7XYYJoG7483z SlodT/K7pKvzDYqKBVMPm4P33K/x9mm1aBTJ0ZqmL+a9etRFtEjjByEKuB89gLX7 uzTb7MrNF10lBopqgK3KgpLRNSZWWNXrtskMJ5eVICdkpdJ5Dyst+RKR3siEYzU9 +yxxAFpeQsqB8gWORva/RsOR8yNjIMS3J9fZqlIdGA8ktPr0nEOyo96QQR5VdACE 5rpKI2cqtM6OSegynOkCgcAnr2Xzjef6tdcrxrQrq0DjEFTMoCAxQRa6tuF/NYHV AK70Y4hBNX84Bvym4hmfbMUEuOCJU+QHQf/iDQrHXPhtX3X2/t8M+AlIzmwLKf2o VwCYnZ8SqiwSaWVg+GANWLh0JuKn/ZYyR8urR79dAXFfp0UK+N39vIxNoBisBf+F G8mca7zx3UtK2eOW8WgGHz+Y20VZy0m/nkNekd1ZTXoSGhL+iN4XsTRn1YQIn69R kNdcwhtZZ3dpChUdf+w/LIc= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEgzCCAuugAwIBAgIUU+FIM/dUbCklbdDwNPd2xemDAEwwDQYJKoZIhvcNAQEL BQAwXzELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxo b3N0MB4XDTIzMTEyNTA0MjEzNloXDTQzMDEyNDA0MjEzNlowXzELMAkGA1UEBhMC WFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MIIBojANBgkqhkiG 9w0BAQEFAAOCAY8AMIIBigKCAYEAzXTIl1su11AGu6sDPsoxqcRGyAX0yjxIcswF vj+eW/fBs2GcBby95VEOKpJPKRYYB7fAEAjAKK59zFdsDX/ynxPZLqyLQocBkFVq tclhCRZu//KZND+uQuHSx3PjGkSvK/nrGjg5T0bkM4SFeb0YdLb+0aDTKGozUC82 oBAilNcrFz1VXpEF0qUe9QeKQhyd0MaW5T1oSn+U3RAj2MXm3TGExyZeaicpIM5O HFlnwUxsYSDZo0jUj342MbPOZh8szZDWi042jdtSA3i8uMSplEf4O8ZPmX0JCtrz fVjRVdaKXIjrhMNWB8K44q6AeyhqJcVHtOmPYoHDm0qIjcrurt0LZaGhmCuKimNd njcPxW0VQmDIS/mO5+s24SK+Mpznm5q/clXEwyD8FbrtrzV5cHCE8eNkxjuQjkmi 
wW9uadK1s54tDwRWMl6DRWRyxoF0an885UQWmbsgEB5aRmEx2L0JeD0/q6Iw1Nta As8DG4AaWuYMrgZXz7XvyiMq3IxVAgMBAAGjNzA1MBQGA1UdEQQNMAuCCWxvY2Fs aG9zdDAdBgNVHQ4EFgQUl2wd7iWE1JTZUVq2yFBKGm9N36owDQYJKoZIhvcNAQEL BQADggGBAF0f5x6QXFbgdyLOyeAPD/1DDxNjM68fJSmNM/6vxHJeDFzK0Pja+iJo xv54YiS9F2tiKPpejk4ujvLQgvrYrTQvliIE+7fUT0dV74wZKPdLphftT9uEo1dH TeIld+549fqcfZCJfVPE2Ka4vfyMGij9hVfY5FoZL1Xpnq/ZGYyWZNAPbkG292p8 KrfLZm/0fFYAhq8tG/6DX7+2btxeX4MP/49tzskcYWgOjlkknyhJ76aMG9BJ1D7F /TIEh5ihNwRTmyt023RBz/xWiN4xBLyIlpQ6d5ECKmFNFr0qnEui6UovfCHUF6lZ qcAQ5VFQQ2CayNlVmQ+UGmWIqANlacYWBt7Q6VqpGg24zTMec1/Pqd6X07ScSfrm MAtywrWrU7p1aEkN5lBa4n/XKZHGYMjor/YcMdF5yjdSrZr274YYO1pafmTFwRwH 5o16c8WPc0aPvTFbkGIFT5ddxYstw+QwsBtLKE2lJ4Qfmxt0Ew/0L7xkbK1BaCOo EGD2IF7VDQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/keycert2.pem000066400000000000000000000100331471441230600223440ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCyAUXjczgUEn7m mOwDMi/++wDRxqJAJ2f7F9ADxTuOm+EtdpfYr4mBn8Uz9e3I+ZheG5y3QZ1ddBYA 9YTfcUL0on8UXLOOBVZCetxsQXoSAuDMPV0IXeEgtZZDXe7STqKSQeYk7Cz+VtHe lZ8j7oOOcx5sJgpbaD+OGJnPoAdB8l8nQfxqAG45sW4P6gfLKoJLviKctDe5pvgi JC8tvytg/IhESKeefLZ4ix2dNjj2GNUaL+khU6UEuM1kJHcPVjPoYc+y8fop/qhQ 0ithBhO2OvJ+YmOFdCE67SyCwU3p8zJpN+XkwbHttgmNg4OSs7H6V7E52/CsTNTA SthBHXtxqaM+vjbGARrz2Fpc/n+LwRt7MGIR0gVtntTgnP0HoeHskhAIeDtaPrZ6 zHdl3aDwgAecVebTEBT5YPboz+X1lWdOrRD2JW3bqXSRIN3E4qz5IMuNx3VvhpSR eFZzR6QIbxQqzO/Vp93Ivy8hPZ6WMgfSYWs7CGtu4NP79PJfdMsCAwEAAQKCAYAc e3yp2NlbyNvaXRTCrCim5ZXrexuiJUwLjvNfbxNJDeM5iZThfLEFd0GwP0U1l86M HGH2pr6d4gHVVHPW5wIeL9Qit3SZoHv9djhH8DAuqpw6wgTdXlw0BipNjD23FBMK URYYyVuntM+vDITi1Hrjc8Ml7e5RUvx8aa5O3R3cLQKRvwq7EWeRvrTMQhfOJ/ai VQGnzmRuRevFVsHf0YuI4M+TEYcUooL2BdiOu8rggfezUYA9r2sjtshSok0UvKeb 79pNzWmg9EWVeFk+A0HQpyLq+3EVyB5UZ3CZRkT0XhEm1B7mpKrtcGMjaumNAam7 jkhidGdhT/PV9BB1TttcqwTf+JH9P9sSpY9ZTA1LkkeWe9Rwqpxjjssqxxl6Xnds +wUfuovVvHuBinsO+5PLE5UQByn21WcIBNnPCAMvALy/68T7z8+ATJ+I2CqBXuM2 z5663hNrvdCu93PpK4j/k/1l3NTrthaorbnPhkmNYHQkBicpAfFQywrv6enD+30C gcEA7Vlw76og4oxI7SSD6gTfo85OqTLp2CUZhNNxzYiUOOssRnGHBKsGZ8p0OhLN vk9/SgbeHL5jsmnyd8ZuYWmsOQHRWgg1zO3S25iuq+VAo4cL/7IynoJ0RP5xP0Pw QD+xJLZQp0XuLUtXnlc6dM5Hg7tOTthOP9UxA1i57lzpYfkRnKmSeWi+4IDJymOt WoWnEK7Yr7qSg6aScLWRyIvAPVmKF9LToSFaTq0eOD0GIwAQxqNbIwN3U0UJ5Ruc KRBVAoHBAL/+DNGqnEzhhWS6zqZp2eH90YR+b3R4yOK4PROm2AVA3h1GhIAiWX68 PvKYZK9dZ9EdAswlFf9PVQYIXUraR3az0UiIywnNMri+kO1ZxwofGvljrOfRRLg0 B46wuHi6dVgTWzjTl503G9+FpAYNHv184xsr1tne0pf2TKEnN7oyQciCV8qtr8vV HrL46uaD0w1fcXIXbO3F/7ErLsvsgLzKfxR5BeQo6Fq0GmzD+lCmzVNirtfl2CZj 2ukROXUQnwKBwQDR1CqFlm/wGHk4PPnp31ke5XqhFoOpNFM1HAEV5VK0ZyQDOsZU mCXXiCHsXUdKodk0RpIB80cMKaHTxbc7o0JAO50q7OszOmUZAggZq1jTuMYgzRb3 DvlfLVpMxfEVu7kNbagr2STRIjRZpV/md569lM+L4Kp8wCrOfJgTZExm8txhFYCK mNF2hCThKfHNfy7NDuY9pMF2ZcI8pig1lWbkVc5BdX7miifeOinnKfvM4XfzQ+OE NsI8+WHgC+KoYukCgcBwrOpdCmHchOZCbZfl9m1Wwh16QrGqi1BqLnI53EsfGijA yaftgzs+s7/FpEZC3PCWuw3vPTyhr69YcQQ/b8dNFM8YYJ+4SuMfpUds5Kl5eTPd dO/+xMQtzus4hOJeiB9h50o8GYH7VGJZVhcjLgQoBGlMgvf+uVSitnvWgCumbORK hqR7YF+xoov3wToquubcDE2KBdF54h/jnFJEf7I2GilmnHgmpRNoWBbCCmoXdy09 aMbwEgY+0Y+iBOfRmkUCgcEAoHJLw7VnZQGQQL4l3lnoGU9o06RPkNbpda9G/Ptz v+K7DXmHiLFVDszvZaPVreohuc0tKdrT0cZpZ21h0GQD4B6JX66R/y6CCAr0QpdA pFZO9sc5ky6RJ4xZCoCsNJzORNUb36cagEzBWExb7Jz2v6gNa044K5bs3CVv5h15 rJtTxZNn/gcnIk+gt//67WUnKLS4PR5PVCCqYhSbhFwx/OvVTJmflIBUinAclf2Q M4mhHOfwBicqYzzEYbOE9Vk9 -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEjDCCAvSgAwIBAgIUQ2S3jJ5nve5k5956sgsrWY3vw9MwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD 
VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIzMTEyNTA0MjEzN1oXDTQzMDEyNDA0MjEzN1owYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAsgFF43M4FBJ+5pjsAzIv/vsA0cai QCdn+xfQA8U7jpvhLXaX2K+JgZ/FM/XtyPmYXhuct0GdXXQWAPWE33FC9KJ/FFyz jgVWQnrcbEF6EgLgzD1dCF3hILWWQ13u0k6ikkHmJOws/lbR3pWfI+6DjnMebCYK W2g/jhiZz6AHQfJfJ0H8agBuObFuD+oHyyqCS74inLQ3uab4IiQvLb8rYPyIREin nny2eIsdnTY49hjVGi/pIVOlBLjNZCR3D1Yz6GHPsvH6Kf6oUNIrYQYTtjryfmJj hXQhOu0sgsFN6fMyaTfl5MGx7bYJjYODkrOx+lexOdvwrEzUwErYQR17camjPr42 xgEa89haXP5/i8EbezBiEdIFbZ7U4Jz9B6Hh7JIQCHg7Wj62esx3Zd2g8IAHnFXm 0xAU+WD26M/l9ZVnTq0Q9iVt26l0kSDdxOKs+SDLjcd1b4aUkXhWc0ekCG8UKszv 1afdyL8vIT2eljIH0mFrOwhrbuDT+/TyX3TLAgMBAAGjOjA4MBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTAdBgNVHQ4EFgQU5wVOIuQD/Jxmam/97g91+igosWQwDQYJ KoZIhvcNAQELBQADggGBAFv5gW5x4ET5NXEw6vILlOtwxwplEbU/x6eUVR/AXtEz jtq9zIk2svX/JIzSLRQnjJmb/nCDCeNcFMkkgIiB64I3yMJT9n50fO4EhSGEaITZ vYAw0/U6QXw+B1VS1ijNA44X2zvC+aw1q9W+0SKtvnu7l16TQ654ey0Qh9hOF1HS AZQ46593T9gaZMeexz4CShoBZ80oFOJezfNhyT3FK6tzXNbkVoJDhlLvr/ep81GG mABUGtKQYYMhuSSp0TDvf7jnXxtQcZI5lQOxZp0fnWUcK4gMVJqFVicwY8NiOhAG 6TlvXYP4COLAvGmqBB+xUhekIS0jVzaMyek+hKK0sT/OE+W/fR5V9YT5QlHFJCf5 IUIfDCpBZrBpsOTwsUm8eL0krLiBjYf0HgH5oFBc7aF4w1kuUJjlsJ68bzO9mLEF HXDaOWbe00+7BMMDnyuEyLN8KaAGiN8x0NQRX+nTAjCdPs6E0NftcXtznWBID6tA j5m7qjsoGurj6TlDsBJb1A== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/keycert3.pem000066400000000000000000000223371471441230600223570ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCgKihxC+2g7d7M JfIUBfFWiuMwxg0WhdxPyGUzMAjexbEOHo0ojntxPdH9KYRwiKRKb9jnmzXp2CKT hqBYJIetq/3LYZp4bvDJ/hVCL9e4jqu1l+wd9DkqhKZ69b6C1/d12JAKvC5TIT+/ b7EglYU8KMNx2WO5KxIJeVpX68jn49YtUzg0hT0QiXj4eugbDk1L1f99xgvkHaVW VQxi6MFNWHJq/xXUb8E/hd/Q3oadN1BXMWl9P46D4R+YGKQQdZFkwEJsbqijFvWW bOoaz7TFxf8n0q616803aXLfaWikfEXLnznEvKo7vyEivtT/y14Nm+SiR3nS6E0y Dt8gmeHdaHcrmQT+yQ6yNOYDCcfeYM+rBuvOUHPudjMy0k8K/0IPjDo0KActKPH0 UVbyDBMKDdGQ2+LhRFLsGXHlD9b05PxhqTULe3LeK6KZ+iuGbWtwVLaL5S42WiCA YXNShE1Ko0Q7wugAippXCf+aWP3Wx9ZTrsfiDBbIfnY5mlfdG90CAwEAAQKCAYAA ogoE4FoxD5+YyPGa+KcKg4QAVlgI5cCIJC+aMy9lyfw4JRDDv0RnnynsSTS3ySJ1 FNoTmD5vTSZd1ONfVc2fdxWKrzkQDsgu1C07VLsShKXTEuWg/K0ZKOsLg1scY0Qc GB4BnNrGA1SgKg3WJiEfqr2S/pvxSGVK2krsHAdwOytGhJStSHWEUjbDLKEsMjNG AHOBCL5VSXS00aM55NeWuanCGH36l/J4kMvgpHB9wJE1twFGuHCUvtgEHtzPH9fQ plmI0QDREm6UE6Qh01lxmwx3Xc5ASBURmxs+bxpk94BPRpj8/eF2HPiJalrkJksj Xk3QQ7k23v6XnmHKV3QqpjUgJTdbuMoTrVMu14cIH6FtXfwVhtthPnCI8rk5Lh8N cqLC7HT+NE1JyygzuMToOHMmSJTQ8L6BTIaRCZjvGTPYaZfFgeMHvvhAJtP5zAcc xQzyCyNBU8RdPGT8tJTyDUIRs20poqe7dKrPEIocKJX7tvNSI2QxkQ96Adxo1gEC gcEAvI8m6QCDGgDWI8yTH9EvZQwq+tF8I+WMC+jbPuDwKg5ZKC7VjRO//9JzPY+c TxmLnQu64OkECHbu7pswDBbtnPMbodF9inYEY5RkfufEjEMJGEdkoBJWnNx78EkV bcffWik0wXwdt6jd1CAnjmS9qaPz0T1NV8m5rQQn5JUYXlC9eB2kOojZYLbZBl3g xUSRbIqHC7h8HuyAU26EPiprHsIxrOpbxABFOdvo2optr50U7X10Eqb4mRQ4z22W ojJdAoHBANlzJjjEgGVB9W50pJqkTw8wXiTUG8AmhqrVvqEttLPcWpK6QwRkRC+i 5N1iUObf/kOlun2gNfHF6gM68Ja9wb2eGvE5sApq9hPpyYF0LS3g8BbJ9GOs6NU9 BfM1CkPrDCdc4kzlUpDibvc6Fc9raCqvrZRlKEukqQS8dumVdb74IaPsP6q8sZMz jibOk0eUrbx2c5vEnd0W8zMeNCuCwO1oXbfenPp/GLX9ZRlolWS/3cQoZYOSQc9J lFQYkxL3gQKBwQCy3Pwk9AZoqTh4dvtsqArUSImQqRygFIQXXAh1ifxneHrcYijS jVSIwEHuuIamhe3oyBK6fG8F9IPLtUwLe8hkJDwm8Misiiy5pS77LrFD9+btr/Nk 4GBmpcOveDQqkflt1j6j9y9dY4MhUGsVaLx86fhDmGoAh2tpEtMgwsl91gsUoNGD cQL6+he+MVkg510nX/Sgipy63M8R1Xj+W1CHueBTTXBE6ZjBPLiSbdOETXZnnaR4 
eQjCdOs64JKOQ0UCgcBZ4kFAYel48aTT/Z801QphCus/afX2nXY5E5Vy5oO1fTZr RFcDb7bHwhu8bzFl3d0qdUz7NMhXoimzIB/nD5UQHlSgtenQxJnnbVIAEtfCCSL1 KJG+yfCMhGb7O0d8/6HMe5aHlptkjFS2GOp/DLTIQEoN9yqK6gt7i7PTphY/1C2D ptpCZzE32a2+2NEEW67dIlFzZ/ihNSVeUfPasHezKtricECPQw4h3BZ4RETMmoq+ 1LvxgPl3B8EqaeYRhwECgcEAjjp/0hu/ukQhiNeR5a9p1ECBFP8qFh6Cpo0Az/DT 1kX0qU8tnT3cYYhwbVGwLxn2HVRdLrbjMj/t88W/LM2IaQ162m7TvvBMxNmr058y sW/LADp5YWWsY70EJ8AfaTmdQriqKsNiLLpNdgcm1bkwHJ1CNlvEpDs1OOI3cCGi BEuUmeKxpRhwCaZeaR5tREmbD70My+BMDTDLfrXoKqzl4JrRua4jFTpHeZaFdkkh gDq3K6+KpVREQFEhyOtIB2kk -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:a0:2a:28:71:0b:ed:a0:ed:de:cc:25:f2:14:05: f1:56:8a:e3:30:c6:0d:16:85:dc:4f:c8:65:33:30: 08:de:c5:b1:0e:1e:8d:28:8e:7b:71:3d:d1:fd:29: 84:70:88:a4:4a:6f:d8:e7:9b:35:e9:d8:22:93:86: a0:58:24:87:ad:ab:fd:cb:61:9a:78:6e:f0:c9:fe: 15:42:2f:d7:b8:8e:ab:b5:97:ec:1d:f4:39:2a:84: a6:7a:f5:be:82:d7:f7:75:d8:90:0a:bc:2e:53:21: 3f:bf:6f:b1:20:95:85:3c:28:c3:71:d9:63:b9:2b: 12:09:79:5a:57:eb:c8:e7:e3:d6:2d:53:38:34:85: 3d:10:89:78:f8:7a:e8:1b:0e:4d:4b:d5:ff:7d:c6: 0b:e4:1d:a5:56:55:0c:62:e8:c1:4d:58:72:6a:ff: 15:d4:6f:c1:3f:85:df:d0:de:86:9d:37:50:57:31: 69:7d:3f:8e:83:e1:1f:98:18:a4:10:75:91:64:c0: 42:6c:6e:a8:a3:16:f5:96:6c:ea:1a:cf:b4:c5:c5: ff:27:d2:ae:b5:eb:cd:37:69:72:df:69:68:a4:7c: 45:cb:9f:39:c4:bc:aa:3b:bf:21:22:be:d4:ff:cb: 5e:0d:9b:e4:a2:47:79:d2:e8:4d:32:0e:df:20:99: e1:dd:68:77:2b:99:04:fe:c9:0e:b2:34:e6:03:09: c7:de:60:cf:ab:06:eb:ce:50:73:ee:76:33:32:d2: 4f:0a:ff:42:0f:8c:3a:34:28:07:2d:28:f1:f4:51: 56:f2:0c:13:0a:0d:d1:90:db:e2:e1:44:52:ec:19: 71:e5:0f:d6:f4:e4:fc:61:a9:35:0b:7b:72:de:2b: a2:99:fa:2b:86:6d:6b:70:54:b6:8b:e5:2e:36:5a: 20:80:61:73:52:84:4d:4a:a3:44:3b:c2:e8:00:8a: 9a:57:09:ff:9a:58:fd:d6:c7:d6:53:ae:c7:e2:0c: 16:c8:7e:76:39:9a:57:dd:1b:dd Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 3F:B1:E9:4F:A0:BE:30:66:3E:0A:18:C8:0F:47:1A:4F:34:6A:0F:42 X509v3 Authority Key Identifier: keyid:F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption Signature Value: ca:34:ba:c5:d0:cf:27:31:32:d6:0d:27:30:b8:db:17:df:b7: 39:a7:bb:b1:3b:86:c4:31:fd:fb:ab:db:63:1a:cc:90:ab:b9: 4e:ab:34:49:0c:5e:8c:3e:70:a3:a7:6b:2f:a7:9a:25:7b:01: 5a:18:96:48:76:f8:36:78:74:fa:bc:7d:68:7f:e5:ca:a6:9d: 7b:dc:72:bd:a3:25:51:17:68:e8:e9:d7:02:86:2c:7d:16:7c: b5:dc:44:b2:0a:e3:f7:c9:33:a3:51:36:83:bc:d4:70:cd:84: 91:9f:06:ba:2d:d2:05:0a:65:c3:d9:55:09:a8:b8:09:69:bb: 93:86:c2:b7:c2:90:74:7c:bf:f0:5d:bc:0e:63:13:8c:eb:fa: 
0f:f1:fa:e5:12:70:4d:0c:eb:8c:2e:a2:42:42:00:04:0f:fc: f9:1f:41:9c:63:78:f0:66:93:b2:8f:2e:e8:93:1c:50:cb:2d: 7f:b6:ba:57:6f:52:62:d7:39:0b:09:82:ab:a6:53:4d:cc:05: 3e:19:f0:d4:c0:ce:a9:ad:10:ce:b9:71:e4:8f:f2:5a:3c:65: ba:dc:cb:e0:04:90:2b:a5:15:a6:7d:da:dc:a3:b5:b7:bc:a0: de:30:4e:64:cb:17:0d:3a:a0:52:d2:67:3b:a2:3a:00:d5:39: aa:61:75:52:9f:fe:9b:c0:e8:a0:69:af:a8:b3:a3:1d:0a:40: 52:04:e7:3d:c0:00:96:5f:2b:33:06:0c:30:f6:d3:18:72:ee: 38:d0:64:d3:00:86:37:ec:4f:e9:38:49:e6:01:ff:a2:9a:7c: dc:6a:d3:cb:a8:ba:58:fb:c3:86:78:47:f1:06:a6:45:e7:53: de:99:1d:81:e6:bc:63:74:46:7c:70:23:57:29:60:70:9a:cc: 6f:00:8e:c2:bf:6a:73:7d:6e:b0:62:e6:dc:13:1a:b9:fe:0f: c2:d1:06:a1:79:62:7f:b6:30:a9:03:d0:47:57:25:db:48:10: d1:cf:fb:7d:ac:3d -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAKAqKHEL7aDt 3swl8hQF8VaK4zDGDRaF3E/IZTMwCN7FsQ4ejSiOe3E90f0phHCIpEpv2OebNenY IpOGoFgkh62r/cthmnhu8Mn+FUIv17iOq7WX7B30OSqEpnr1voLX93XYkAq8LlMh P79vsSCVhTwow3HZY7krEgl5WlfryOfj1i1TODSFPRCJePh66BsOTUvV/33GC+Qd pVZVDGLowU1Ycmr/FdRvwT+F39Dehp03UFcxaX0/joPhH5gYpBB1kWTAQmxuqKMW 9ZZs6hrPtMXF/yfSrrXrzTdpct9paKR8RcufOcS8qju/ISK+1P/LXg2b5KJHedLo TTIO3yCZ4d1odyuZBP7JDrI05gMJx95gz6sG685Qc+52MzLSTwr/Qg+MOjQoBy0o 8fRRVvIMEwoN0ZDb4uFEUuwZceUP1vTk/GGpNQt7ct4ropn6K4Zta3BUtovlLjZa IIBhc1KETUqjRDvC6ACKmlcJ/5pY/dbH1lOux+IMFsh+djmaV90b3QIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUP7HpT6C+MGY+ChjID0caTzRqD0IwfQYDVR0jBHYwdIAU8+yUjvKOMMSOaMK/ jmoZwMGfdmWhUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAMo0usXQzycxMtYN JzC42xfftzmnu7E7hsQx/fur22MazJCruU6rNEkMXow+cKOnay+nmiV7AVoYlkh2 +DZ4dPq8fWh/5cqmnXvccr2jJVEXaOjp1wKGLH0WfLXcRLIK4/fJM6NRNoO81HDN hJGfBrot0gUKZcPZVQmouAlpu5OGwrfCkHR8v/BdvA5jE4zr+g/x+uUScE0M64wu okJCAAQP/PkfQZxjePBmk7KPLuiTHFDLLX+2uldvUmLXOQsJgqumU03MBT4Z8NTA zqmtEM65ceSP8lo8Zbrcy+AEkCulFaZ92tyjtbe8oN4wTmTLFw06oFLSZzuiOgDV OaphdVKf/pvA6KBpr6izox0KQFIE5z3AAJZfKzMGDDD20xhy7jjQZNMAhjfsT+k4 SeYB/6KafNxq08uoulj7w4Z4R/EGpkXnU96ZHYHmvGN0RnxwI1cpYHCazG8AjsK/ anN9brBi5twTGrn+D8LRBqF5Yn+2MKkD0EdXJdtIENHP+32sPQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/keycert4.pem000066400000000000000000000223551471441230600223600ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQDGKA1zZDjeNPh2 J9WHVXXMUf8h5N4/bHCM3CbIaZ1dShkCgfmFWmOtruEihgbfRYaSWZAwCmVAQGjm gvUfgOIgsFfM8yO+zDByPhza7XvWPZfEe7mNRFe5ZlYntbeM/vuWCM4VzwDq/mqF TFxNRmwInqE7hx0WnfCoQWe9N41hJyl1K0OjADb+SjlpJ0/UJ63hsB+dowGjaaBv J8HduQcRqNg8s6FcyJJ8Mjss1uRMFK2j9QrmgbA61XuIPCxzc3J57mW8FN2KsR8D 2HOhe9nsTGlxp+O5Cudf/RBWB443xcoyduwRXOFTdEAU45MS4tKGP2hzezuxMFQn LKARXVW4/gFxZk7kU8TweZUS6LAYPfYJnlfteb6z37LAbtoDvzKUKBEDf/nmoa7C uKxSPC5HIKhLbjU/6kuPglSVEfJPJWu2bZJDAkFL85Ot3gPs10EX2lMUy0Jt3tf+ 
TaQjEvFZhpKN8KAdYj3eVgOfzIBbQyjotHJjFe9Jkq4q7RoI+ncCAwEAAQKCAYAH tRsdRh1Z7JmHOasy+tPDsvhVuWLHMaYlScvAYhJh/W65YSKd56+zFKINlX3fYcp5 Fz67Yy+uWahXVE2QgFou3KX0u+9ucRiLFXfYheWL3xSMXJgRee0LI/T7tRe7uAHu CnoURqKCulIqzLOO1efx1eKasXmVuhEtmjhVpcmDGv8SChSKTIjzgOjqT7QGE9Xq eSRhq7mulpq9zWq+/369yG+0SvPs60vTxNovDIaBn/RHSW5FjeDss5QnmYMh/ukN dggoKllQlkTzHSxHmKrIJuryZC+bsqvEPUFXN0NMUYcZRvt1lwdjzq/A+w4gDDZG 7QqAzYMYQZMw9PJeHqu4mxfUX5hJWuAwG5I2eV3kBRheoFw7MxP0tw40fPlFU+Zh pRXbKwhMAlIHi0D8NyMn3hkVPyToWVVY3vHRknBB/52RqRq3MjqEFaAZfp0nFkiF ytv3Dd5aeBb1vraOIREyhxIxE/qY8CtZC+6JI8CpufLmFXB412WPwl0OrVpWYfEC gcEA486zOI46xRDgDw0jqTpOFHzh+3VZ8UoPoiqCjKzJGnrh2EeLvTsXX/GZOj0m 5zl6RHEGFjm5vKCh2C72Vj/m+AFVy7V9iJRzTYzP8So/3paaqo7ZaROTa6uStxdD VPnY1uIgVQz9w5coN4dmr+RLBpFvvWaHp1wuC08YIWxcC9HSTQpbi1EP5eo08fOk 8reNkDEHxihDGHr1xW0z0qJqK1IVyLP7wDkmapudMZjkjqJSGJwwefV4qyGMTV2b suW1AoHBAN6t9n6LBH553MF5iUrNJYxXh/SCom4Zft9aD6W4bZV/xL4XPpKBB4HX aWdeI0iYZU9U+CZ88tBoQCt+JMrJ9cz03ENOvA/MBMREwbZ2hKmQgnoDZsV0vNry 6UsxeQmeNpGQFUz9foVJQVRdQCceN2YEABdehV1HZoSBbuGZkzqGJXrWwaf/ZhpB dPYGUGOsczoD2/QLuWy2M7f7v0Ews6Heww3zipWzvdxKE0IpyVs30ZwVi8CRQiWU bEcleXP6+wKBwAi3xEwJxV39Q1XQHuk+/fXywYMp/oMpXmfKUKypgBivUy0/r61S MZbOXBrKdE6s+GzeFmmLU/xP+WGYinzKfUBIbMwa6e7sH218UgjcoQ0Xnlugk9ld kmqwajDvhvgdh5rRlIMsuBlgE33shJV+mxBpSGlrHw3cjTaJlFbTGsKpCO9B0jcG pyEZUWVg+ZMASz6VYcLHj6nEKtufTjhlVsLJpWPE34F/rmSuB9n6C+UZeSLP91rz dea2pfPf/TFfcQKBwF4DSj9Qx/vxzS7t9fXbuM+QoPitMpCTOQppRpPr0nA8uj6b J7LIwPejj3+xsenTVWpx8DanqAgvC3CRWE05iQoYEupj0mhE9Xo7oSE81nOUbFHB H+GbkKRLzA0P/Q7/egBouWWA3Kq/K9LHb+9UBYWPiM5U/K9OFs04rCyZHxylSCud gbNA08Wf/xZjwgri4t8KhBF75bQtFJbHtY57Vkuv9d/tA4SCl1Tq/UiAxd86KMfi HNeXPDsLd89t1eIOgwKBwQDJkqwZXkhwkhoNuHRdzPO/1f5FyKpQxFs+x+OBulzG zuwVKIawsLlUR4TBtF7PChOSZSH50VZaBI5kVti79kEtfNjfAzg4kleHrY8jQ/eq HludZ3nmiPqqlbH4MH8NWczPEjee6z4ODROsAe31pz3S8YQK7KVoEuSf0+usJ894 FtzS5wl6POAXTo2QeSNg9zTbb6JjVYcq6KCTnflDm4YEvFKI+ARqAXQHxm05wEOe DbKC6hxxQbDaNOvXEAda8wU= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:c6:28:0d:73:64:38:de:34:f8:76:27:d5:87:55: 75:cc:51:ff:21:e4:de:3f:6c:70:8c:dc:26:c8:69: 9d:5d:4a:19:02:81:f9:85:5a:63:ad:ae:e1:22:86: 06:df:45:86:92:59:90:30:0a:65:40:40:68:e6:82: f5:1f:80:e2:20:b0:57:cc:f3:23:be:cc:30:72:3e: 1c:da:ed:7b:d6:3d:97:c4:7b:b9:8d:44:57:b9:66: 56:27:b5:b7:8c:fe:fb:96:08:ce:15:cf:00:ea:fe: 6a:85:4c:5c:4d:46:6c:08:9e:a1:3b:87:1d:16:9d: f0:a8:41:67:bd:37:8d:61:27:29:75:2b:43:a3:00: 36:fe:4a:39:69:27:4f:d4:27:ad:e1:b0:1f:9d:a3: 01:a3:69:a0:6f:27:c1:dd:b9:07:11:a8:d8:3c:b3: a1:5c:c8:92:7c:32:3b:2c:d6:e4:4c:14:ad:a3:f5: 0a:e6:81:b0:3a:d5:7b:88:3c:2c:73:73:72:79:ee: 65:bc:14:dd:8a:b1:1f:03:d8:73:a1:7b:d9:ec:4c: 69:71:a7:e3:b9:0a:e7:5f:fd:10:56:07:8e:37:c5: ca:32:76:ec:11:5c:e1:53:74:40:14:e3:93:12:e2: d2:86:3f:68:73:7b:3b:b1:30:54:27:2c:a0:11:5d: 55:b8:fe:01:71:66:4e:e4:53:c4:f0:79:95:12:e8: b0:18:3d:f6:09:9e:57:ed:79:be:b3:df:b2:c0:6e: da:03:bf:32:94:28:11:03:7f:f9:e6:a1:ae:c2:b8: ac:52:3c:2e:47:20:a8:4b:6e:35:3f:ea:4b:8f:82: 54:95:11:f2:4f:25:6b:b6:6d:92:43:02:41:4b:f3: 93:ad:de:03:ec:d7:41:17:da:53:14:cb:42:6d:de: d7:fe:4d:a4:23:12:f1:59:86:92:8d:f0:a0:1d:62: 3d:de:56:03:9f:cc:80:5b:43:28:e8:b4:72:63:15: ef:49:92:ae:2a:ed:1a:08:fa:77 
Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 1C:70:14:B0:20:DD:08:76:A4:3B:56:59:FA:5F:34:F8:36:66:E8:56 X509v3 Authority Key Identifier: keyid:F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption Signature Value: 75:14:e5:68:45:8d:ed:6c:f1:27:1e:0e:f3:35:ae:0e:60:c1: 65:36:62:b8:07:78:e1:b9:8d:7a:50:70:af:06:c9:d4:ee:50: ef:d2:76:b2:a2:b6:cb:dc:a6:18:b5:3d:d2:f7:eb:0e:ec:b7: 95:cd:2e:b1:36:6f:a8:9f:b8:4d:ff:ce:8a:c4:8e:62:37:32: 80:3e:05:4a:4d:39:87:69:09:00:e8:40:64:d2:9d:f9:1f:9f: ab:67:1f:f9:c6:84:ba:7e:17:6c:8b:8d:08:ee:fb:8a:d7:cd: 06:25:72:9f:4e:1a:c2:71:e1:1b:cf:a2:d7:1c:05:12:95:d6: 49:4b:e9:95:95:89:cf:68:18:46:a3:ea:0d:9d:8e:ca:1c:28: 55:49:6b:c0:4b:58:f5:42:b9:0a:ec:0e:6e:21:a4:ff:60:c0: 1b:6e:40:72:d0:a5:c5:b5:db:4e:87:67:3a:31:70:cb:32:84: 70:a9:e2:ff:e0:f2:db:cd:03:b4:85:45:d3:07:cc:0f:c7:49: d8:c2:17:eb:73:f7:4a:c0:d9:8c:59:ef:c0:0a:ce:13:0b:84: c9:aa:0d:11:14:b4:e5:74:aa:ec:18:de:5f:26:18:98:4a:76: f0:7f:cd:e6:c4:b5:58:03:03:f5:10:01:5d:8f:63:88:ba:65: d7:b4:7f:5a:1a:51:0e:ed:e5:68:fa:18:03:72:15:a1:ec:27: 1f:ea:ac:24:46:18:6e:f1:97:db:4a:f4:d6:a1:91:a0:8c:b0: 2f:be:87:3b:44:b0:8d:2a:89:85:5f:f2:d9:e3:2e:66:b2:88: 98:04:2c:96:32:38:99:19:a9:83:fd:94:0c:dd:63:d4:1b:60: 9d:43:98:35:ac:b4:23:38:de:7f:85:52:57:a0:37:df:a5:cf: be:54:2c:3c:50:27:2b:d4:54:a9:9d:a3:d4:a5:b3:c0:ea:3d: 0e:e2:70:6b:fb:cb:a5:56:05:ec:64:72:f0:1a:db:64:01:cb: 5d:27:c4:a1:c4:63 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMYoDXNk ON40+HYn1YdVdcxR/yHk3j9scIzcJshpnV1KGQKB+YVaY62u4SKGBt9FhpJZkDAK ZUBAaOaC9R+A4iCwV8zzI77MMHI+HNrte9Y9l8R7uY1EV7lmVie1t4z++5YIzhXP AOr+aoVMXE1GbAieoTuHHRad8KhBZ703jWEnKXUrQ6MANv5KOWknT9QnreGwH52j AaNpoG8nwd25BxGo2DyzoVzIknwyOyzW5EwUraP1CuaBsDrVe4g8LHNzcnnuZbwU 3YqxHwPYc6F72exMaXGn47kK51/9EFYHjjfFyjJ27BFc4VN0QBTjkxLi0oY/aHN7 O7EwVCcsoBFdVbj+AXFmTuRTxPB5lRLosBg99gmeV+15vrPfssBu2gO/MpQoEQN/ +eahrsK4rFI8LkcgqEtuNT/qS4+CVJUR8k8la7ZtkkMCQUvzk63eA+zXQRfaUxTL Qm3e1/5NpCMS8VmGko3woB1iPd5WA5/MgFtDKOi0cmMV70mSrirtGgj6dwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUHHAUsCDdCHakO1ZZ+l80+DZm6FYwfQYDVR0jBHYwdIAU8+yUjvKO MMSOaMK/jmoZwMGfdmWhUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz 
cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHUU5WhF je1s8SceDvM1rg5gwWU2YrgHeOG5jXpQcK8GydTuUO/SdrKitsvcphi1PdL36w7s t5XNLrE2b6ifuE3/zorEjmI3MoA+BUpNOYdpCQDoQGTSnfkfn6tnH/nGhLp+F2yL jQju+4rXzQYlcp9OGsJx4RvPotccBRKV1klL6ZWVic9oGEaj6g2djsocKFVJa8BL WPVCuQrsDm4hpP9gwBtuQHLQpcW1206HZzoxcMsyhHCp4v/g8tvNA7SFRdMHzA/H SdjCF+tz90rA2YxZ78AKzhMLhMmqDREUtOV0quwY3l8mGJhKdvB/zebEtVgDA/UQ AV2PY4i6Zde0f1oaUQ7t5Wj6GANyFaHsJx/qrCRGGG7xl9tK9NahkaCMsC++hztE sI0qiYVf8tnjLmayiJgELJYyOJkZqYP9lAzdY9QbYJ1DmDWstCM43n+FUlegN9+l z75ULDxQJyvUVKmdo9Sls8DqPQ7icGv7y6VWBexkcvAa22QBy10nxKHEYw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/keycertecc.pem000066400000000000000000000130001471441230600227320ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDDRUbCeT3hMph4Y/ahL 1sy9Qfy4DYotuAP06UetzG6syv+EoQ02kX3xvazqwiJDrEyhZANiAAQef97STEPn 4Nk6C153VEx24MNkJUcmLe771u6lr3Q8Em3J/YPaA1i9Ys7KZA3WvoKBPoWaaikn 4yLQbd/6YE6AAjMuaThlR1/cqH5QnmS3DXHUjmxnLjWy/dZl0CJG1qo= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:1e:7f:de:d2:4c:43:e7:e0:d9:3a:0b:5e:77:54: 4c:76:e0:c3:64:25:47:26:2d:ee:fb:d6:ee:a5:af: 74:3c:12:6d:c9:fd:83:da:03:58:bd:62:ce:ca:64: 0d:d6:be:82:81:3e:85:9a:6a:29:27:e3:22:d0:6d: df:fa:60:4e:80:02:33:2e:69:38:65:47:5f:dc:a8: 7e:50:9e:64:b7:0d:71:d4:8e:6c:67:2e:35:b2:fd: d6:65:d0:22:46:d6:aa ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 45:ED:32:14:6D:51:A2:3B:B0:80:55:E0:A6:9B:74:4C:A5:56:88:B1 X509v3 Authority Key Identifier: keyid:F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption Signature Value: 07:e4:91:0b:d3:ed:4b:52:7f:50:68:c7:8d:80:48:9f:b7:4a: 13:66:bf:9d:4c:2d:18:19:68:a0:da:3b:12:85:05:16:fa:8d: 9c:58:c6:81:b3:96:ba:11:62:65:d3:76:f1:1c:ab:95:e4:d8: 2a:e0:1f:7b:c5:20:2e:7c:8f:de:87:7a:2b:52:54:ca:d1:41: b0:5e:20:72:df:44:00:4a:69:1a:ef:10:63:52:13:ed:49:02: ee:dc:9d:f3:c8:ba:c4:01:81:5a:a9:1c:15:12:b6:21:de:44: a5:fd:7e:f9:22:d1:3e:ee:22:dd:31:55:32:4e:41:68:27:c5: 95:1b:7e:6b:18:74:f9:22:d6:b7:b9:31:72:51:a0:5a:2c:ff: 62:76:e9:a0:55:8d:78:33:52:4a:58:b2:f4:4b:0c:43:82:2f: a9:84:68:05:dd:11:47:70:24:fe:5c:92:fd:17:21:63:bb:fa: 93:fa:54:54:05:72:48:ed:81:48:ab:95:fc:6d:a8:62:96:f9: 3b:e2:71:18:05:3e:76:bb:df:95:17:7b:81:4b:1f:7f:e1:67: 76:c4:07:cb:65:a7:f2:cf:e6:b4:fb:75:7c:ee:df:a1:f5:34: 20:2b:48:fd:2e:49:ff:f3:a6:3b:00:49:6c:88:79:ed:9c:16: 
2a:04:72:e2:93:e4:7e:3f:2a:dd:30:47:9a:99:84:2a:b9:c4: 40:31:a6:68:f3:20:d1:75:f1:1e:c8:18:64:5b:f8:4c:ce:9a: 3c:57:2c:e3:63:64:29:0a:c2:b6:8e:20:01:55:9f:fe:10:ba: 12:42:38:0a:9b:53:01:a5:b4:08:76:ec:e8:a6:fc:69:2c:f7: 7f:5e:0f:44:07:55:e1:7c:2e:58:e5:d6:fc:6f:c2:4d:83:65: bd:f3:32:e3:14:48:22:8d:80:18:ea:44:f8:24:79:ff:ff:c6: 04:c2:e9:90:34:40:d6:59:3f:59:1e:4a:9a:58:60:ce:ab:f9: 76:0e:ef:f7:05:17 -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAAQef97STEPn4Nk6C153 VEx24MNkJUcmLe771u6lr3Q8Em3J/YPaA1i9Ys7KZA3WvoKBPoWaaikn4yLQbd/6 YE6AAjMuaThlR1/cqH5QnmS3DXHUjmxnLjWy/dZl0CJG1qqjggHEMIIBwDAYBgNV HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQURe0y FG1RojuwgFXgppt0TKVWiLEwfQYDVR0jBHYwdIAU8+yUjvKOMMSOaMK/jmoZwMGf dmWhUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAAfkkQvT7UtSf1Box42ASJ+3 ShNmv51MLRgZaKDaOxKFBRb6jZxYxoGzlroRYmXTdvEcq5Xk2CrgH3vFIC58j96H eitSVMrRQbBeIHLfRABKaRrvEGNSE+1JAu7cnfPIusQBgVqpHBUStiHeRKX9fvki 0T7uIt0xVTJOQWgnxZUbfmsYdPki1re5MXJRoFos/2J26aBVjXgzUkpYsvRLDEOC L6mEaAXdEUdwJP5ckv0XIWO7+pP6VFQFckjtgUirlfxtqGKW+TvicRgFPna735UX e4FLH3/hZ3bEB8tlp/LP5rT7dXzu36H1NCArSP0uSf/zpjsASWyIee2cFioEcuKT 5H4/Kt0wR5qZhCq5xEAxpmjzINF18R7IGGRb+EzOmjxXLONjZCkKwraOIAFVn/4Q uhJCOAqbUwGltAh27Oim/Gks939eD0QHVeF8Lljl1vxvwk2DZb3zMuMUSCKNgBjq RPgkef//xgTC6ZA0QNZZP1keSppYYM6r+XYO7/cFFw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/leaf-missing-aki.ca.pem000066400000000000000000000013541471441230600243240ustar00rootroot00000000000000# Taken from x509-limbo's `rfc5280::aki::leaf-missing-aki` testcase. # See: https://x509-limbo.com/testcases/rfc5280/#rfc5280akileaf-missing-aki -----BEGIN CERTIFICATE----- MIIBkDCCATWgAwIBAgIUGjIb/aYm9u9fBh2o4GAYRJwk5XIwCgYIKoZIzj0EAwIw GjEYMBYGA1UEAwwPeDUwOS1saW1iby1yb290MCAXDTcwMDEwMTAwMDAwMVoYDzI5 NjkwNTAzMDAwMDAxWjAaMRgwFgYDVQQDDA94NTA5LWxpbWJvLXJvb3QwWTATBgcq hkjOPQIBBggqhkjOPQMBBwNCAARUzBhjMOkO911U65Fvs4YmL1YPNj63P9Fa+g9U KrUqiIy8WjaDXdIe8g8Zj0TalpbU1gYCs3atteMxgIp6qxwHo1cwVTAPBgNVHRMB Af8EBTADAQH/MAsGA1UdDwQEAwICBDAWBgNVHREEDzANggtleGFtcGxlLmNvbTAd BgNVHQ4EFgQUcv1fyqgezMGzmo+lhmUkdUuAbIowCgYIKoZIzj0EAwIDSQAwRgIh AIOErPSRlWpnyMub9UgtPF/lSzdvnD4Q8KjLQppHx6oPAiEA373p4L/HvUbs0xg8 6/pLyn0RT02toKKJcMV3ChohLtM= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/leaf-missing-aki.keycert.pem000066400000000000000000000017131471441230600254060ustar00rootroot00000000000000# Taken from x509-limbo's `rfc5280::aki::leaf-missing-aki` testcase. 
# See: https://x509-limbo.com/testcases/rfc5280/#rfc5280akileaf-missing-aki
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIF5Re+/FP3rg+7c1odKEQPXhb9V65kXnlZIWHDG9gKrLoAoGCCqGSM49
AwEHoUQDQgAE1WAQMdC7ims7T9lpK9uzaCuKqHb/oNMbGjh1f10pOHv3Z+oAvsqF
Sv3hGzreu69YLy01afA6sUCf1AA/95dKkg==
-----END EC PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIBjjCCATWgAwIBAgIUVlBgclml+OXlrWzZfcgYCiNm96UwCgYIKoZIzj0EAwIw
GjEYMBYGA1UEAwwPeDUwOS1saW1iby1yb290MCAXDTcwMDEwMTAwMDAwMVoYDzI5
NjkwNTAzMDAwMDAxWjAWMRQwEgYDVQQDDAtleGFtcGxlLmNvbTBZMBMGByqGSM49
AgEGCCqGSM49AwEHA0IABNVgEDHQu4prO0/ZaSvbs2griqh2/6DTGxo4dX9dKTh7
92fqAL7KhUr94Rs63ruvWC8tNWnwOrFAn9QAP/eXSpKjWzBZMB0GA1UdDgQWBBS3
yYRQQwo3syjGVQ8Yw7/XRZHbpzALBgNVHQ8EBAMCB4AwEwYDVR0lBAwwCgYIKwYB
BQUHAwEwFgYDVR0RBA8wDYILZXhhbXBsZS5jb20wCgYIKoZIzj0EAwIDRwAwRAIg
BVq7lw4Y5MPEyisPhowMWd4KnERupdM5qeImDO+dD7ICIE/ksd6Wz1b8rMAfllNV
yiYst9lfwTd2SkFgdDNUDFud
-----END CERTIFICATE-----
gevent-24.11.1/src/greentest/3.13/certdata/make_ssl_certs.py000066400000000000000000000225071471441230600234720ustar00rootroot00000000000000"""Make the custom certificate and private key files used by test_ssl and friends."""

import os
import pprint
import shutil
import tempfile
from subprocess import *

startdate = "20180829142316Z"
enddate = "20371028142316Z"

req_template = """
[ default ]
base_url = http://testca.pythontest.net/testca

[req]
distinguished_name = req_distinguished_name
prompt = no

[req_distinguished_name]
C = XY
L = Castle Anthrax
O = Python Software Foundation
CN = {hostname}

[req_x509_extensions_nosan]

[req_x509_extensions_simple]
subjectAltName = @san

[req_x509_extensions_full]
subjectAltName = @san
keyUsage = critical,keyEncipherment,digitalSignature
extendedKeyUsage = serverAuth,clientAuth
basicConstraints = critical,CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
authorityInfoAccess = @issuer_ocsp_info
crlDistributionPoints = @crl_info

[ issuer_ocsp_info ]
caIssuers;URI.0 = $base_url/pycacert.cer
OCSP;URI.0 = $base_url/ocsp/

[ crl_info ]
URI.0 = $base_url/revocation.crl

[san]
DNS.1 = {hostname}
{extra_san}

[dir_sect]
C = XY
L = Castle Anthrax
O = Python Software Foundation
CN = dirname example

[princ_name]
realm = EXP:0, GeneralString:KERBEROS.REALM
principal_name = EXP:1, SEQUENCE:principal_seq

[principal_seq]
name_type = EXP:0, INTEGER:1
name_string = EXP:1, SEQUENCE:principals

[principals]
princ1 = GeneralString:username

[ ca ]
default_ca = CA_default

[ CA_default ]
dir = cadir
database = $dir/index.txt
crlnumber = $dir/crl.txt
default_md = sha256
startdate = {startdate}
default_startdate = {startdate}
enddate = {enddate}
default_enddate = {enddate}
default_days = 7000
default_crl_days = 7000
certificate = pycacert.pem
private_key = pycakey.pem
serial = $dir/serial
RANDFILE = $dir/.rand
policy = policy_match

[ policy_match ]
countryName = match
stateOrProvinceName = optional
organizationName = match
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

[ policy_anything ]
countryName = optional
stateOrProvinceName = optional
localityName = optional
organizationName = optional
organizationalUnitName = optional
commonName = supplied
emailAddress = optional

[ v3_ca ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, keyCertSign, cRLSign
"""

here = os.path.abspath(os.path.dirname(__file__))

def make_cert_key(hostname, sign=False, extra_san='',
                  ext='req_x509_extensions_full', key='rsa:3072'):
    print("creating cert for " + hostname)
    tempnames = []
    for i in range(3):
        with tempfile.NamedTemporaryFile(delete=False) as f:
            tempnames.append(f.name)
    req_file, cert_file, key_file = tempnames
    try:
        req = req_template.format(
            hostname=hostname,
            extra_san=extra_san,
            startdate=startdate,
            enddate=enddate
        )
        with open(req_file, 'w') as f:
            f.write(req)
        args = ['req', '-new', '-nodes', '-days', '7000',
                '-newkey', key, '-keyout', key_file,
                '-extensions', ext,
                '-config', req_file]
        if sign:
            with tempfile.NamedTemporaryFile(delete=False) as f:
                tempnames.append(f.name)
                reqfile = f.name
            args += ['-out', reqfile ]
        else:
            args += ['-x509', '-out', cert_file ]
        check_call(['openssl'] + args)

        if sign:
            args = [
                'ca',
                '-config', req_file,
                '-extensions', ext,
                '-out', cert_file,
                '-outdir', 'cadir',
                '-policy', 'policy_anything',
                '-batch',
                '-infiles', reqfile
            ]
            check_call(['openssl'] + args)

        with open(cert_file, 'r') as f:
            cert = f.read()
        with open(key_file, 'r') as f:
            key = f.read()
        return cert, key
    finally:
        for name in tempnames:
            os.remove(name)

TMP_CADIR = 'cadir'

def unmake_ca():
    shutil.rmtree(TMP_CADIR)

def make_ca():
    os.mkdir(TMP_CADIR)
    with open(os.path.join('cadir','index.txt'),'a+') as f:
        pass # empty file
    with open(os.path.join('cadir','crl.txt'),'a+') as f:
        f.write("00")
    with open(os.path.join('cadir','index.txt.attr'),'w+') as f:
        f.write('unique_subject = no')
    # random start value for serial numbers
    with open(os.path.join('cadir','serial'), 'w') as f:
        f.write('CB2D80995A69525B\n')

    with tempfile.NamedTemporaryFile("w") as t:
        req = req_template.format(
            hostname='our-ca-server',
            extra_san='',
            startdate=startdate,
            enddate=enddate
        )
        t.write(req)
        t.flush()
        with tempfile.NamedTemporaryFile() as f:
            args = ['req', '-config', t.name, '-new',
                    '-nodes',
                    '-newkey', 'rsa:3072',
                    '-keyout', 'pycakey.pem',
                    '-out', f.name,
                    '-subj', '/C=XY/L=Castle Anthrax/O=Python Software Foundation CA/CN=our-ca-server']
            check_call(['openssl'] + args)
            args = ['ca', '-config', t.name,
                    '-out', 'pycacert.pem', '-batch', '-outdir', TMP_CADIR,
                    '-keyfile', 'pycakey.pem',
                    '-selfsign', '-extensions', 'v3_ca',
                    '-infiles', f.name ]
            check_call(['openssl'] + args)
            args = ['ca', '-config', t.name, '-gencrl', '-out', 'revocation.crl']
            check_call(['openssl'] + args)
    # capath hashes depend on subject!
    check_call([
        'openssl', 'x509', '-in', 'pycacert.pem', '-out', 'capath/ceff1710.0'
    ])
    shutil.copy('capath/ceff1710.0', 'capath/b1930218.0')

def print_cert(path):
    import _ssl
    pprint.pprint(_ssl._test_decode_cert(path))

if __name__ == '__main__':
    os.chdir(here)
    cert, key = make_cert_key('localhost', ext='req_x509_extensions_simple')
    with open('ssl_cert.pem', 'w') as f:
        f.write(cert)
    with open('ssl_key.pem', 'w') as f:
        f.write(key)
    print("password protecting ssl_key.pem in ssl_key.passwd.pem")
    check_call(['openssl','pkey','-in','ssl_key.pem','-out','ssl_key.passwd.pem','-aes256','-passout','pass:somepass'])
    check_call(['openssl','pkey','-in','ssl_key.pem','-out','keycert.passwd.pem','-aes256','-passout','pass:somepass'])
    with open('keycert.pem', 'w') as f:
        f.write(key)
        f.write(cert)
    with open('keycert.passwd.pem', 'a+') as f:
        f.write(cert)

    # For certificate matching tests
    make_ca()
    cert, key = make_cert_key('fakehostname', ext='req_x509_extensions_simple')
    with open('keycert2.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    cert, key = make_cert_key('localhost', sign=True)
    with open('keycert3.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    cert, key = make_cert_key('fakehostname', sign=True)
    with open('keycert4.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    cert, key = make_cert_key(
        'localhost-ecc', sign=True, key='param:secp384r1.pem'
    )
    with open('keycertecc.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    extra_san = [
        'otherName.1 = 1.2.3.4;UTF8:some other identifier',
        'otherName.2 = 1.3.6.1.5.2.2;SEQUENCE:princ_name',
        'email.1 = user@example.org',
        'DNS.2 = www.example.org',
        # GEN_X400
        'dirName.1 = dir_sect',
        # GEN_EDIPARTY
        'URI.1 = https://www.python.org/',
        'IP.1 = 127.0.0.1',
        'IP.2 = ::1',
        'RID.1 = 1.2.3.4.5',
    ]
    cert, key = make_cert_key('allsans', sign=True, extra_san='\n'.join(extra_san))
    with open('allsans.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    extra_san = [
        # könig (king)
        'DNS.2 = xn--knig-5qa.idn.pythontest.net',
        # königsgäßchen (king's alleyway)
        'DNS.3 = xn--knigsgsschen-lcb0w.idna2003.pythontest.net',
        'DNS.4 = xn--knigsgchen-b4a3dun.idna2008.pythontest.net',
        # βόλοσ (marble)
        'DNS.5 = xn--nxasmq6b.idna2003.pythontest.net',
        'DNS.6 = xn--nxasmm1c.idna2008.pythontest.net',
    ]
    # IDN SANS, signed
    cert, key = make_cert_key('idnsans', sign=True, extra_san='\n'.join(extra_san))
    with open('idnsans.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    cert, key = make_cert_key('nosan', sign=True, ext='req_x509_extensions_nosan')
    with open('nosan.pem', 'w') as f:
        f.write(key)
        f.write(cert)

    unmake_ca()
    print("update Lib/test/test_ssl.py and Lib/test/test_asyncio/utils.py")
    print_cert('keycert.pem')
    print_cert('keycert3.pem')
gevent-24.11.1/src/greentest/3.13/certdata/nokia.pem000066400000000000000000000036031471441230600217220ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034)
-----BEGIN CERTIFICATE-----
MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB
vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug
YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt
VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X
DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM
BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ
BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2
lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow
CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/nosan.pem000066400000000000000000000170421471441230600217410ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC99xEYPTwFN/ji i0lm11ckEGhcxciSsIgTgior54CLgQy7JXllTYmAWFTTg2zNBvDMexGI0h+xtZ4q 1Renghgt33N3Y6CT3v/L7JkE1abQbFveKW/ydlxH0+jLlsENSWjySwC80+f9L3bX TcD8T4Fu9Uty2Rg1a/Eyekng5RmfkmLNgxfnX5R5nWhh0Aia7h3Ax2zCALfxqZIm fxwavEgHsW/yZi+T+eoJwe0i7a6LaUoLqsPV9ZhagziNDaappPHH42NW39WlRhx1 UjtiRm2Jihnzxcfs+90zLXSp5pxo/cE9Ia4d8ieq3Rxd/XgjlF6FXXFJjwfL36Dw ehy8m3PKKAuO+fyMgPPPMQb7oaRy/MBG0NayRreTwyKILS2zafIW/iKpgICbxrWJ r/H1b3S6PBKYUE2uQs0/ZPnRjjh0VeNnue7JcRoNbe27I2d56KUBsVEPdokjU59v NYi6Se+ViZXtUbM1u/I0kvDMprAiobwtJFYgcE86N1lFJjHSwDMCAwEAAQKCAYBb lvnJBA0iPwBiyeFUElNTcg2/XST9hNu2/DU1AeM6X7gxqznCnAXFudD8Qgt9NvF2 xYeIvjbFydk+sYs8Gj9qLqhPUdukMAqI2cRVTmWla/lHPhdZgbOwdf1x23es3k4Z NAxg/pKFwhK8cCKyA+tWAjKkZwODDk42ljt0kUEvbLbye1hVGAJQOJKRRmo/uLrj rcNELnCBtc5ffT2hrlHUU7qz1ozt/brXhYa+JnbXhKZMxcKyMD2KtmXXrFNEy99o jXbrpDCos82bzQfPDo8IpCbVbEd2J00aFmrNjQWhZuXX5dXflrujW4J0nzeHrZ78 rNAz2/YuZ543BTB3XbogeFuLC5RqBgAMmw2WJ96Oa/UG8nZNvEw54N5r6dhfXj6A VlJFLVwlfBQdAdaM3P4uZ6WECrH3EerQa27qyUdRrcDaGPLt7wG9FmMivnW1KQsy 5ow/gM0CsxFj2xNoGw1S5jtclbgSy8HNJaBsNk4XMQ+ORABZdG1MTTE+GMSjD/EC gcEA+6JYiZEo+QrvItIZYB6Go4suu/F8df1pEjJlxwp2GmObitRhtV6r9g9IySFv 5SL7ZxARr4aQxvM7fNp57p9ssmkBtY0ofMjJAxhvs4T37bAcGK/2xCegNSmbqh24 FAWpRDMgE5PjtuWC5jTvSOYFeUxwI/cu0HxWdxJl2dPUSL1nI2jP+ok3pZobEQk9 E2+MlHpKmU+s/lAkuQiP+AW9a4M+ZJNWxocJjmtwj4OjJXPm7GddA/5x622DxFe6 4K2vAoHBAMFC0An25bwGoFrCV/96s45K4qZcZcJ660+aK3xXaq6/8KfiusJnWds2 nc0B6jYjKs8A7yTAGXke6fmyVsoLosZiXsbpW2m16g8jL79Tc85O9oDNmDIGk1uT tRLZc2BvmHmy/dNrdbT/EHC3FKNWQVqWc2sHhPeB6F3hIEXDSUO/GB0njMZNXrPJ 547RlhN0xCLb3vTzzGHwNmwfI81YjV/XI4vpJjq1YceN8Xyd1r5ZOFfU8woIACO3 I4dvBQ1avQKBwQCLDs9wzohfAFzg2Exvos7y6AKemDgYmD8NcE5wbWaQ9MTLNsz8 RuIu64lkpRbKAMf/z5CGeI3fdCFGwRGq/e06tu7b3rMmKmtzS3jHM08zyiPsvKlZ AzD00BaXLy8/2VUOPFaYmxy3QSRShaRKm9sgik5agcocKuo5iTBB7V8eB5VMqyps IJJg8MXOZ1WaPQXqM56wFKjcLXvtyT6OaNWh6Xh8ajQFKDDuxI8CsFNjaiaONBzi DSX1XaL4ySab7T8CgcEAsI+7xP6+EDP1mDVpc8zD8kHUI6zSgwUNqiHtjKHIo3JU CO2JNkZ5v158eGlBcshaOdheozKlkxR9KlSWGezbf2crs4pKq585AS9iVeeGK3vU lQRAAaQkSEv/6AKl9/q8UKMIZnkMhploibGZt0f8WSiOtb+e6QjUI8CjXVj2vF// RdN2N01EMflKBh7Qf2H0NuytGxkJJojxD4K7kMVQE7lXjmEpPgWsGUZC01jYcfrN EOFKUWXRys9sNDVnZjX5AoHAFRyOC1BlmVEtcOsgzum4+JEDWvRnO1hP1tm68eTZ 
ijB/XppDtVESpq3+1+gx2YOmzlUNEhKlcn6eHPWEJwdVxJ87Gdh03rIV/ZQUKe46 3+j6l/5coN4WfCBloy4b+Tcj+ZTL4sKaLm33RoD2UEWS5mmItfZuoEFQB958h3JD 1Ka1tgsLnuYGjcrg+ALvbM5nQlefzPqPJh0C8UV3Ny/4Gd02YgHw7Yoe4m6OUqQv hctFUL/gjIGg9PVqTWzVVKaI -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:bd:f7:11:18:3d:3c:05:37:f8:e2:8b:49:66:d7: 57:24:10:68:5c:c5:c8:92:b0:88:13:82:2a:2b:e7: 80:8b:81:0c:bb:25:79:65:4d:89:80:58:54:d3:83: 6c:cd:06:f0:cc:7b:11:88:d2:1f:b1:b5:9e:2a:d5: 17:a7:82:18:2d:df:73:77:63:a0:93:de:ff:cb:ec: 99:04:d5:a6:d0:6c:5b:de:29:6f:f2:76:5c:47:d3: e8:cb:96:c1:0d:49:68:f2:4b:00:bc:d3:e7:fd:2f: 76:d7:4d:c0:fc:4f:81:6e:f5:4b:72:d9:18:35:6b: f1:32:7a:49:e0:e5:19:9f:92:62:cd:83:17:e7:5f: 94:79:9d:68:61:d0:08:9a:ee:1d:c0:c7:6c:c2:00: b7:f1:a9:92:26:7f:1c:1a:bc:48:07:b1:6f:f2:66: 2f:93:f9:ea:09:c1:ed:22:ed:ae:8b:69:4a:0b:aa: c3:d5:f5:98:5a:83:38:8d:0d:a6:a9:a4:f1:c7:e3: 63:56:df:d5:a5:46:1c:75:52:3b:62:46:6d:89:8a: 19:f3:c5:c7:ec:fb:dd:33:2d:74:a9:e6:9c:68:fd: c1:3d:21:ae:1d:f2:27:aa:dd:1c:5d:fd:78:23:94: 5e:85:5d:71:49:8f:07:cb:df:a0:f0:7a:1c:bc:9b: 73:ca:28:0b:8e:f9:fc:8c:80:f3:cf:31:06:fb:a1: a4:72:fc:c0:46:d0:d6:b2:46:b7:93:c3:22:88:2d: 2d:b3:69:f2:16:fe:22:a9:80:80:9b:c6:b5:89:af: f1:f5:6f:74:ba:3c:12:98:50:4d:ae:42:cd:3f:64: f9:d1:8e:38:74:55:e3:67:b9:ee:c9:71:1a:0d:6d: ed:bb:23:67:79:e8:a5:01:b1:51:0f:76:89:23:53: 9f:6f:35:88:ba:49:ef:95:89:95:ed:51:b3:35:bb: f2:34:92:f0:cc:a6:b0:22:a1:bc:2d:24:56:20:70: 4f:3a:37:59:45:26:31:d2:c0:33 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption Signature Value: 7e:dd:64:64:92:6c:b9:41:ce:f3:e3:f8:e6:9f:c8:5b:32:39: 8c:03:5b:5e:7e:b3:23:ca:6c:d1:99:2f:53:af:9d:3c:84:cd: c6:ce:0a:ee:94:de:ff:a7:06:81:7e:e2:38:a5:05:39:58:22: dc:13:83:53:e7:f8:16:cb:93:dc:cf:4b:e6:1b:9f:9e:71:ef: ee:ba:ea:b6:68:5c:32:22:7e:54:4f:46:a6:0b:11:8f:ef:05: 6e:d3:0b:d0:a8:be:95:23:a2:e4:e7:a8:a2:a4:7d:98:52:86: a4:15:fb:74:7a:9a:89:23:43:20:26:3a:56:9e:a3:6e:54:02: 76:4e:25:9c:a1:8c:03:99:e5:eb:a6:61:b4:9c:2a:b1:ed:eb: 94:f9:14:aa:a4:c3:f0:f7:7a:03:a3:b1:f8:c0:83:79:ab:8a: 93:7f:0a:95:08:50:ff:55:19:ac:28:a2:c8:9f:a6:77:72:a3: da:37:a9:ff:f3:57:70:c8:65:d9:55:14:84:b4:b3:78:86:82: da:84:2c:48:19:51:ec:9d:20:b1:4d:18:fb:82:9f:7b:a7:80: 22:69:25:83:4d:bf:ac:31:64:f5:39:11:f1:ed:53:fb:67:ab: 91:86:c5:4d:87:e8:6b:fe:9a:84:fe:6a:92:6b:62:c1:ae:d2: f0:cb:06:6e:f3:50:f4:8d:6d:fa:7d:6a:1c:64:c3:98:91:da: c9:8c:a9:79:e5:48:4c:a2:de:42:28:e8:0e:9f:52:6a:a4:e0: c7:ac:11:9c:ba:5d:d6:84:93:56:28:f1:6d:83:aa:62:b2:b7: 56:c6:64:d9:96:4e:97:ab:4e:8f:ba:f6:ab:b9:17:52:98:32: 7f:b5:12:fa:39:d7:34:2a:f3:ed:40:90:6f:66:7b:b6:c1:9d: b9:53:d0:e3:e9:69:8c:cf:7a:fd:08:0a:62:47:d4:ce:72:f7: 6f:80:b4:1d:18:7a:ba:a2:a9:45:49:ef:9c:0b:99:89:03:ab: 5f:7a:9d:c5:77:b7 -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAL33ERg9PAU3+OKLSWbXVyQQ 
aFzFyJKwiBOCKivngIuBDLsleWVNiYBYVNODbM0G8Mx7EYjSH7G1nirVF6eCGC3f c3djoJPe/8vsmQTVptBsW94pb/J2XEfT6MuWwQ1JaPJLALzT5/0vdtdNwPxPgW71 S3LZGDVr8TJ6SeDlGZ+SYs2DF+dflHmdaGHQCJruHcDHbMIAt/GpkiZ/HBq8SAex b/JmL5P56gnB7SLtrotpSguqw9X1mFqDOI0Npqmk8cfjY1bf1aVGHHVSO2JGbYmK GfPFx+z73TMtdKnmnGj9wT0hrh3yJ6rdHF39eCOUXoVdcUmPB8vfoPB6HLybc8oo C475/IyA888xBvuhpHL8wEbQ1rJGt5PDIogtLbNp8hb+IqmAgJvGtYmv8fVvdLo8 EphQTa5CzT9k+dGOOHRV42e57slxGg1t7bsjZ3nopQGxUQ92iSNTn281iLpJ75WJ le1RszW78jSS8MymsCKhvC0kViBwTzo3WUUmMdLAMwIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQB+3WRkkmy5Qc7z4/jmn8hbMjmMA1tefrMjymzRmS9Tr508hM3Gzgru lN7/pwaBfuI4pQU5WCLcE4NT5/gWy5Pcz0vmG5+ece/uuuq2aFwyIn5UT0amCxGP 7wVu0wvQqL6VI6Lk56iipH2YUoakFft0epqJI0MgJjpWnqNuVAJ2TiWcoYwDmeXr pmG0nCqx7euU+RSqpMPw93oDo7H4wIN5q4qTfwqVCFD/VRmsKKLIn6Z3cqPaN6n/ 81dwyGXZVRSEtLN4hoLahCxIGVHsnSCxTRj7gp97p4AiaSWDTb+sMWT1ORHx7VP7 Z6uRhsVNh+hr/pqE/mqSa2LBrtLwywZu81D0jW36fWocZMOYkdrJjKl55UhMot5C KOgOn1JqpODHrBGcul3WhJNWKPFtg6pisrdWxmTZlk6Xq06PuvaruRdSmDJ/tRL6 Odc0KvPtQJBvZnu2wZ25U9Dj6WmMz3r9CApiR9TOcvdvgLQdGHq6oqlFSe+cC5mJ A6tfep3Fd7c= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/nullbytecert.pem000066400000000000000000000124731471441230600233420ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. 
************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/nullcert.pem000066400000000000000000000000001471441230600224350ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/certdata/pycacert.pem000066400000000000000000000132361471441230600224360ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (3072 bit) Modulus: 00:d0:a0:9b:b1:b9:3b:79:ee:31:2f:b8:51:1c:01: 75:ed:09:59:4c:ac:c8:bf:6a:0a:f8:7a:08:f0:25: 
e3:25:7b:6a:f8:c4:d8:aa:a0:60:07:25:b0:cf:73: 71:05:52:68:bf:06:93:ae:10:61:96:bc:b3:69:81: 1a:7d:19:fc:20:86:8f:9a:68:1b:ed:cd:e2:6c:61: 67:c7:4e:55:ea:43:06:21:1b:e9:b9:be:84:5f:c2: da:eb:89:88:e0:42:a6:45:49:2a:d0:b9:6b:8c:93: c9:44:3b:ca:fc:3c:94:7f:dd:70:c5:ad:d8:4b:3b: dc:1e:f8:55:4b:b5:27:86:f8:df:b0:83:cf:f7:16: 37:bf:13:17:34:d4:c9:55:ed:6b:16:c9:7f:ad:20: 4e:f0:1e:d9:1b:bf:8d:ba:cd:ca:96:ef:1e:69:cb: 51:66:61:48:0d:e8:79:a3:59:61:d4:10:8d:e0:0d: 3b:0b:85:b6:d5:85:12:dd:a5:07:c5:4b:fa:23:fa: 54:39:f7:97:0b:c9:44:47:1e:56:77:3c:3c:13:01: 0b:6d:77:d6:14:ba:44:f4:53:35:56:d9:3d:b9:3e: 35:5f:db:33:9a:4f:5a:b9:38:44:c9:32:49:fe:66: f6:1f:1a:97:aa:ca:4c:4a:f6:b8:5e:40:7f:fb:0b: 1d:f8:5b:a1:dc:f8:c0:2e:d0:6d:49:f5:d2:46:d4: 90:57:fe:92:81:34:ae:2d:38:bb:a8:17:0c:e1:e5: 3f:e2:f7:26:05:54:50:f5:64:b3:1c:6e:44:ff:6f: a9:b4:03:96:e9:0e:c2:88:d8:72:52:90:99:c6:41: 0f:46:90:59:b8:3e:6f:d2:e2:9e:1d:36:82:95:d3: 58:8a:12:f3:e2:d8:0d:20:51:23:f0:90:2d:9a:3e: 7d:26:86:b2:a7:d7:f9:28:60:03:e3:77:c7:88:04: c9:fe:89:77:70:10:4f:c3:a0:8a:3b:f4:ab:42:7b: e3:14:92:d8:ae:16:d6:a1:de:7d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 X509v3 Authority Key Identifier: F3:EC:94:8E:F2:8E:30:C4:8E:68:C2:BF:8E:6A:19:C0:C1:9F:76:65 X509v3 Basic Constraints: critical CA:TRUE X509v3 Key Usage: critical Digital Signature, Certificate Sign, CRL Sign Signature Algorithm: sha256WithRSAEncryption Signature Value: 8b:00:54:72:b3:8d:eb:f3:af:34:9f:d6:60:ea:de:84:3f:8c: 04:8f:19:a6:be:02:67:c4:63:c5:74:e3:47:37:59:83:94:06: f1:45:19:e8:07:2f:d6:4e:4b:4f:a8:3d:c7:07:07:27:92:f4: 7e:73:4f:8b:32:19:94:46:7a:25:c4:d9:c4:27:b0:11:63:3a: 60:8b:85:e1:73:4f:34:3b:6b:a4:34:8c:49:8e:cd:cf:4f:b2: 65:27:41:19:b0:fc:80:31:78:f2:73:6a:9b:7d:71:34:50:fc: 78:a8:da:05:b4:9c:5b:3a:99:7a:6b:5d:ef:3b:d3:e9:3b:33: 01:12:65:cf:5e:07:d8:19:af:d5:53:ea:f0:10:ac:c4:b6:26: 3c:34:2e:74:ee:64:dd:1d:36:75:89:44:00:b0:0d:fd:2f:b3: 01:cc:1a:8b:02:cd:6c:e8:80:82:ca:bf:82:d7:00:9d:d8:36: 15:d2:07:37:fc:6c:73:1d:da:a8:1c:e8:20:8e:32:7a:fe:6d: 27:16:e4:58:6c:eb:3e:f0:fe:24:52:29:71:b8:96:7b:53:4b: 45:20:55:40:5e:86:1b:ec:c9:46:91:92:ee:ac:93:65:91:2e: 94:b6:b6:ac:e8:a3:34:89:a4:1a:12:0d:4d:44:a5:52:ed:8b: 91:ee:2f:a6:af:a4:95:25:f9:ce:c7:5b:a7:00:d3:93:ca:b4: 3c:5d:4d:f7:b1:3c:cc:6d:b4:45:be:82:ed:18:90:c8:86:d1: 75:51:50:04:4c:e8:4f:d2:d6:50:aa:75:e7:5e:ff:a1:7b:27: 19:1c:6b:49:2c:6c:4d:6b:63:cc:3b:73:00:f3:99:26:d0:82: 87:d3:a9:36:9c:b4:3d:b9:48:68:a8:92:f0:27:8e:0c:cd:15: 76:42:84:80:c9:3f:a3:7e:b4:dd:d7:f8:ac:76:ba:60:97:3c: 5a:1f:4e:de:71:ee:09:39:10:dd:31:d5:68:57:5d:1a:e5:42: ba:0e:68:e6:45:d3 -----BEGIN CERTIFICATE----- MIIEgDCCAuigAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBANCgm7G5O3nuMS+4URwBde0JWUysyL9qCvh6 CPAl4yV7avjE2KqgYAclsM9zcQVSaL8Gk64QYZa8s2mBGn0Z/CCGj5poG+3N4mxh Z8dOVepDBiEb6bm+hF/C2uuJiOBCpkVJKtC5a4yTyUQ7yvw8lH/dcMWt2Es73B74 VUu1J4b437CDz/cWN78TFzTUyVXtaxbJf60gTvAe2Ru/jbrNypbvHmnLUWZhSA3o eaNZYdQQjeANOwuFttWFEt2lB8VL+iP6VDn3lwvJREceVnc8PBMBC2131hS6RPRT NVbZPbk+NV/bM5pPWrk4RMkySf5m9h8al6rKTEr2uF5Af/sLHfhbodz4wC7QbUn1 0kbUkFf+koE0ri04u6gXDOHlP+L3JgVUUPVksxxuRP9vqbQDlukOwojYclKQmcZB D0aQWbg+b9Linh02gpXTWIoS8+LYDSBRI/CQLZo+fSaGsqfX+ShgA+N3x4gEyf6J 
d3AQT8Ogijv0q0J74xSS2K4W1qHefQIDAQABo2MwYTAdBgNVHQ4EFgQU8+yUjvKO MMSOaMK/jmoZwMGfdmUwHwYDVR0jBBgwFoAU8+yUjvKOMMSOaMK/jmoZwMGfdmUw DwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAYYwDQYJKoZIhvcNAQELBQAD ggGBAIsAVHKzjevzrzSf1mDq3oQ/jASPGaa+AmfEY8V040c3WYOUBvFFGegHL9ZO S0+oPccHByeS9H5zT4syGZRGeiXE2cQnsBFjOmCLheFzTzQ7a6Q0jEmOzc9PsmUn QRmw/IAxePJzapt9cTRQ/Hio2gW0nFs6mXprXe870+k7MwESZc9eB9gZr9VT6vAQ rMS2Jjw0LnTuZN0dNnWJRACwDf0vswHMGosCzWzogILKv4LXAJ3YNhXSBzf8bHMd 2qgc6CCOMnr+bScW5Fhs6z7w/iRSKXG4lntTS0UgVUBehhvsyUaRku6sk2WRLpS2 tqzoozSJpBoSDU1EpVLti5HuL6avpJUl+c7HW6cA05PKtDxdTfexPMxttEW+gu0Y kMiG0XVRUARM6E/S1lCqdede/6F7Jxkca0ksbE1rY8w7cwDzmSbQgofTqTactD25 SGiokvAnjgzNFXZChIDJP6N+tN3X+Kx2umCXPFofTt5x7gk5EN0x1WhXXRrlQroO aOZF0w== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/pycakey.pem000066400000000000000000000046641471441230600222760ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDQoJuxuTt57jEv uFEcAXXtCVlMrMi/agr4egjwJeMle2r4xNiqoGAHJbDPc3EFUmi/BpOuEGGWvLNp gRp9Gfwgho+aaBvtzeJsYWfHTlXqQwYhG+m5voRfwtrriYjgQqZFSSrQuWuMk8lE O8r8PJR/3XDFrdhLO9we+FVLtSeG+N+wg8/3Fje/Exc01MlV7WsWyX+tIE7wHtkb v426zcqW7x5py1FmYUgN6HmjWWHUEI3gDTsLhbbVhRLdpQfFS/oj+lQ595cLyURH HlZ3PDwTAQttd9YUukT0UzVW2T25PjVf2zOaT1q5OETJMkn+ZvYfGpeqykxK9rhe QH/7Cx34W6Hc+MAu0G1J9dJG1JBX/pKBNK4tOLuoFwzh5T/i9yYFVFD1ZLMcbkT/ b6m0A5bpDsKI2HJSkJnGQQ9GkFm4Pm/S4p4dNoKV01iKEvPi2A0gUSPwkC2aPn0m hrKn1/koYAPjd8eIBMn+iXdwEE/DoIo79KtCe+MUktiuFtah3n0CAwEAAQKCAYAD iUK0/k2ZRqXJHXKBKy8rWjYMHCj3lvMM/M3g+tYWS7i88w00cIJ1geM006FDSf8i LxjatvFd2OCg9ay+w8LSbvrJJGGbeXAQjo1v7ePRPttAPWphQ8RCS+8NAKhJcNJu UzapZ13WJKfL2HLw1+VbziORXjMlLKRnAVDkzHMZO70C5MEQ0EIX+C6zrmBOl2HH du6LPy8crSaDQg8YxFCI7WWnvRKp+Gp8aIfYnR+7ifT1qr5o9sEUw8GAReyooJ3a yJ9uBUbcelO8fNjEABf9xjx+jOmOVsQfig2KuBEi0qXlQSpilZfUdYJhtNke9ADu Hui6MBn04D4RIzeKXV+OLjiLwqkJyNlPuxJ2EGpIHNMcx3gpjXIApAwc47BQwLKJ VhMWMXS0EWhCLtEzf5UrbMNX+Io3J7noEUu6jxmJV1BKhrnlYeoo4JryN0DUpkSb rOAOJLOkpfj7+gvqmWI4MT6SQXSr6BK+3m4J5bVSq4pej9uG5NR3Utghi5hF7DEC gcEA3cYNPYPFSTj9YAR3GUZvwUPVL3ZEFcwjrIeg87JhuZOH/hSQ33SgeEoAtaqL cLbimj7YzUYx3FOUCp/7yK/bAF1dhAbFab1yZE46Qv2Vi4e+/KEBBftqxyJl5KyV vc/HE1dXZGZIO1X5Z5MX8nO3rz/YayiozYVmMibrbHxgTEDC4BrbWtPJQNkflWEb FXNjkm0s2+J3kFANpL94NUKMGtArxQV3hWydGN8wS3Fn7LDnHDoM5mOt/naeKRES fwwpAoHBAPDTKsKs2LEe4YFzO1EClycDelppjxh5pHSnzTWSq40aKx533SG4aLyI DmghzoA3OmY0xpAy1WpT9FeiDNbYpiFCH3qBkArQR2QCu+WGUQ9tDoeN0C2Dje4e Yix49BjcGSWzSNvh+tU9PzRc/9eVBMAQuaCm3yNEL+Z7hFTzkrCWK23+jP/OzIIC XhnKdOveIYVAjlVgv8CoWIy3xhwXyqPAcstcPmlv9sDAYn37Ot7rGIS7e0WyQxvg gxnOxFzKNQKBwQDOPOn/NNV5HKh0bHKdbKVs4zoT4zW515etUIvbVR4QSCSFonZ/ d6PreVZjmvAFp+3fZ2aSrx6bOJZJszGhFfjhw/G9X9aiWO1SXnVL6yrxERIJOWkM ORy5h0GegOjYFauaTvUUhxHRLEi9i0sPy5EcRpFqReuFBPNe3Fa/EoMzJl6TriYj tyRHTCNU9XMMZbxJZYH8EgUCjY/Cj9SoIvTL0p+Bn23hBHqrsJLm9dWhhXnHBC0O 68/Y/lJi+l9rCtECgcEAt6PfTJovl0j8HxF23vyBtK9TQtSR2NERlh9LPZn9lViq Hs66YndT7sg1bDSzWlRDBSMjc1xAH5erkJOzBLYqYNwiUvGvnH9coSfwjkMRVxkL ZlS+taZGuZiTtmP5h2d3CaegXIQDGU5d/xkXwxYQjEF0u8vkBel+OVxg+cLPTjcF IRhl/r98dXtGtJYM+LvnhcxHfVWMg2YcOBn/SPbfgGVFZEuQECjf2fYaZQUJzGkr xjOM+gXIZN6cOjbQyA0tAoHADgR5/bMbcf6Jk0w56c/khFZz/pusne5cjXw5a6qq fClAqnqjGBpkRxs7HoCR3aje0Pd0pCS93a6Wiqneo4x4HDrpo+pWR2KGAAF4MeO3 3K94hncmiLAiZo8iqULLKCqJW2EGB2b7QzGpY7jCPiI1g80KuYPesf4ZohSfrr1w DoqGoNrcIVdVmUgX47lLqIiWarbbDRY0Am9j58dovmNINYr5wCYGbeh2RuUmHr4u E2bb0CdekSHf05HPiF9QpK1z -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/certdata/revocation.crl000066400000000000000000000014401471441230600227660ustar00rootroot00000000000000-----BEGIN X509 CRL----- 
MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIzMTEyNTA0MjEzNloXDTQzMDEyNDA0MjEzNlqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQDMZ4XLQlzUrqBbszEq9I/nXK3jN8/p VZ2aScU2le0ySJqIthe0yXEYuoFu+I4ZULyNkCA79baStIl8/Lt48DOHfBVv8SVx ZqF7/fdUZBCLJV1kuhuSSknbtNmja5NI4/lcRRXrodRWDMcOmqlKbAC6RMQz/gMG vpewGPX1oj5AQnqqd9spKtHbeqeDiyyWYr9ZZFO/433lP7GdsoriTPggYJJMWJvs 819buE0iGwWf+rTLB51VyGluhcz2pqimej6Ra2cdnYh5IztZlDFR99HywzWhVz/A 2fwUA91GR7zATerweXVKNd59mcgF4PZWiXmQMwcE0qQOMqMmAqYPLim1mretZsAs t1X+nDM0Ak3sKumIjteQF7I6VpSsG4NCtq23G8KpNHnBZVOt0U065lQEvx0ZmB94 1z7SzjfSZMVXYxBjSXljwuoc1keGpNT5xCmHyrOIxaHsmizzwNESW4dGVLu7/JfK w40uGbwH09w4Cfbwuo7w6sRWDWPnlW2mkoc= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.13/certdata/secp384r1.pem000066400000000000000000000004001471441230600222450ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.13/certdata/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600264600ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/certdata/ssl_cert.pem000066400000000000000000000031331471441230600224350ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEgzCCAuugAwIBAgIUU+FIM/dUbCklbdDwNPd2xemDAEwwDQYJKoZIhvcNAQEL BQAwXzELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxo b3N0MB4XDTIzMTEyNTA0MjEzNloXDTQzMDEyNDA0MjEzNlowXzELMAkGA1UEBhMC WFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjESMBAGA1UEAwwJbG9jYWxob3N0MIIBojANBgkqhkiG 9w0BAQEFAAOCAY8AMIIBigKCAYEAzXTIl1su11AGu6sDPsoxqcRGyAX0yjxIcswF vj+eW/fBs2GcBby95VEOKpJPKRYYB7fAEAjAKK59zFdsDX/ynxPZLqyLQocBkFVq tclhCRZu//KZND+uQuHSx3PjGkSvK/nrGjg5T0bkM4SFeb0YdLb+0aDTKGozUC82 oBAilNcrFz1VXpEF0qUe9QeKQhyd0MaW5T1oSn+U3RAj2MXm3TGExyZeaicpIM5O HFlnwUxsYSDZo0jUj342MbPOZh8szZDWi042jdtSA3i8uMSplEf4O8ZPmX0JCtrz fVjRVdaKXIjrhMNWB8K44q6AeyhqJcVHtOmPYoHDm0qIjcrurt0LZaGhmCuKimNd njcPxW0VQmDIS/mO5+s24SK+Mpznm5q/clXEwyD8FbrtrzV5cHCE8eNkxjuQjkmi wW9uadK1s54tDwRWMl6DRWRyxoF0an885UQWmbsgEB5aRmEx2L0JeD0/q6Iw1Nta As8DG4AaWuYMrgZXz7XvyiMq3IxVAgMBAAGjNzA1MBQGA1UdEQQNMAuCCWxvY2Fs aG9zdDAdBgNVHQ4EFgQUl2wd7iWE1JTZUVq2yFBKGm9N36owDQYJKoZIhvcNAQEL BQADggGBAF0f5x6QXFbgdyLOyeAPD/1DDxNjM68fJSmNM/6vxHJeDFzK0Pja+iJo xv54YiS9F2tiKPpejk4ujvLQgvrYrTQvliIE+7fUT0dV74wZKPdLphftT9uEo1dH TeIld+549fqcfZCJfVPE2Ka4vfyMGij9hVfY5FoZL1Xpnq/ZGYyWZNAPbkG292p8 KrfLZm/0fFYAhq8tG/6DX7+2btxeX4MP/49tzskcYWgOjlkknyhJ76aMG9BJ1D7F /TIEh5ihNwRTmyt023RBz/xWiN4xBLyIlpQ6d5ECKmFNFr0qnEui6UovfCHUF6lZ qcAQ5VFQQ2CayNlVmQ+UGmWIqANlacYWBt7Q6VqpGg24zTMec1/Pqd6X07ScSfrm MAtywrWrU7p1aEkN5lBa4n/XKZHGYMjor/YcMdF5yjdSrZr274YYO1pafmTFwRwH 5o16c8WPc0aPvTFbkGIFT5ddxYstw+QwsBtLKE2lJ4Qfmxt0Ew/0L7xkbK1BaCOo EGD2IF7VDQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/certdata/ssl_key.passwd.pem000066400000000000000000000051361471441230600235750ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIsc9l0YPybNICAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDxb9ekR9MERvIff73hFLc6BIIH ENhkFePApZj7ZqpjBltINRnaZhu8sEfG1/y3ejDBOa5Sq3C/UPykPfJh0IXsraAB STZO22UQEDpJzDnf1aLCo2cJpdz4Mr+Uj8OUdPiX83OlhC36gMrkgSYUdhSFQEas MLiBnXU6Z5Mv1Lxe7TJrnMyA4A8JYXXu5XVTErJrC0YT6iCPQh7eAoEtml9a/tJM OPg6kn58zmzVDp8LAau4Th1yhdD/cUQM09wg2i5JHLeC9akD+CkNlujVoAirLMTh xoMXTy2dkv/lIwI9QVx6WE/VKIngBAPIi3Q+YCIm0PaTgWj5U10C8j4t7kW2AEZK z82+vDOpLRGLo/ItNCO9F/a9e4PK4xxwFCOfR80tQNhs5gjKnbDz5IQv2p+pUfUX u+AIO0rBb3M9Yya1MC2pc5VLAeQ3UF6YPrNyNjoDsQOytY3YtRVyxiKW72QzeUcX Vpc3U6u8ZyHhkxK6bMv3dkPHGW1MOBd9/U5z+9lhHOfCGFStIQ9M8N48ZCWEGyty oZT3UApxgqiBAi1h14ZyagA2mjsMNtTmmkSa3v26WUfrwnjm7LD1/0Vm+ptBOFH2 CkP/aAvr8Ie+ehWobXGpqwB6rlOAwdpPrePtEZiZtdt58anmCquRgE5GIYtVz30f flRABM8waJ196RDGkNAmDA3p/sqHy4vbsIOMl8faZ3QxvGVZlPbUEwPhiTIetA5Q 95fT/uIcuBLfpbaN23j/Av3LiJAeABSmGZ+dA+NXC5UMvuX8COyBU0YF2V6ofpIu gP3UC7Tn4yV3Pbes81LEDCskaN6qVRil47l0G+dNcEHVkrGKcSaRCN+joBSCbuin Rol34ir9azh8DqHRKdVlLlzTmDQcOwmi0Vx0ASgBXx4UI3IfK45gLJVoz6dkUz+3 GIPrnh5cw2DvIgIApwmuCQUXPbWZwUW0zuyzhtny9W6S72GUE/P5oUCV+kGYBsup FNiAyR9+n/xUuzB5HqIosj4rX+M4il4Ovt+KaCO6/COi+YjAO/9EnSttu8OTxsXl wvgblsT7Y1d+iUfmIVNGtbc5NX46ktrbGiqgPX7oR7YDy5/+FQlnPS1YL0ThUiAC 2RbItu6b0uUyfu2jfWaGqy+SiRZ81rLwKPU3vJSEPfooVcJTG49EE006ZC4TvRzu fNkId+P+BxvhEpUM4+VKzfzViEPuzR1u/DuwLAavS7nr5qb+zaUq+Fte5vDQmjjC fflT8hS0BGpYEGndeZT4k+mZunHgs3NVUQ4/HW0nflf1j6qAn4+yIB79dH9d/ubt RyBG29K+rN0TI/kH9BQZfsAcbnmhpT/ud0mJfeHZ0Lknn6mdJ/k4LXN0T1IlLKz3 cSleOWY3zjKaOsbuju1o5IiVIr+AF/w+M4nzzDX6DDVpBPAt9iUnDGqjh6mJ3QWQ 
CyCJDLNP0X8rZ8va2KOPorIBhmfDwJKEtIoXkb2hqWURTE0chC444QqiMsMXsX6+ mOmiWGkdBFnEpGITISFTGERCjEfqOgTMweCANpquiLymJXgDURL603N2WexSgwnu Gy1Ws1cA+1cT65ZLqjSqayZ6WdQvsKBBAnGW5LbwBhoCkX0vahs5nZiw0KnskP60 wNMnyxaS1SuDJ65n+vuLUl7WeysRyz10RWliYZFiUE7jIXfWeYGonAo4eyCEeV/f HInxxpswsg/na8BGBPMsx2SfBIiIvSIT4VNxHrL3sIfDrnb2HH/ut/oSLBgSKzY5 DdkPz309kMM5dqnHANAgRrtVhqzLQE3kNGZ9mO/X1FAyXx8eB7NSeB6ysD8CAHvm lkyfsGTzVsnuWWpeHqplds0wx5+XouVtFRI5J3RGa39mbpM1hMyIbS0O24CBKW6K 7n2UunbABwepL1hSa4e01OPdz4Zx/oayOevTtlfVqh68cEEc6ePdzf7z69pjot7B eqlNaqa1POOmkuygL+fiP1BAR3rGEoQKXqb+6JjzLM9CnhCQHHPR2UdqukkEYwsa bh9CU8AlfAJ19KFDria4JZXtl8LLMLLqWIO8fmQx7VqkEkEkl8jecO8YMaZTzFEb bW7QtIZ1qHWH0UIHH3Qlav72NJTKvGIbtp1JNrLdsHcYNcojLZkEeA83UPaiTB2R udltVUd016cktRVzLOKrust8kzPq3iSjpoIXFyFqIYHvWxGHgc7qD5gVBlazqSsV qudDv+0PCBjLWLjS6HkFI8BfyXd3ME2wvSmTzSSgSh4nVJNNrZ/RVTtQ5MLVcdh0 sJ3qsq2Pokf61XXjsTiQorX+cgI9zF6zETXHvnLf9FL+G/VSlcLUsQ0wC584qwQt OSASYTbM79xgmjRmolZOptcYXGktfi2C4iq6V6zpFJuNMVgzZ+SbaQw9bvzUo2jG VMwrTuQQ+fsAyn66WZvtkSGAdp58+3PNq31ZjafJXBzN -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/certdata/ssl_key.pem000066400000000000000000000046641471441230600223020ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQDNdMiXWy7XUAa7 qwM+yjGpxEbIBfTKPEhyzAW+P55b98GzYZwFvL3lUQ4qkk8pFhgHt8AQCMAorn3M V2wNf/KfE9kurItChwGQVWq1yWEJFm7/8pk0P65C4dLHc+MaRK8r+esaODlPRuQz hIV5vRh0tv7RoNMoajNQLzagECKU1ysXPVVekQXSpR71B4pCHJ3QxpblPWhKf5Td ECPYxebdMYTHJl5qJykgzk4cWWfBTGxhINmjSNSPfjYxs85mHyzNkNaLTjaN21ID eLy4xKmUR/g7xk+ZfQkK2vN9WNFV1opciOuEw1YHwrjiroB7KGolxUe06Y9igcOb SoiNyu6u3QtloaGYK4qKY12eNw/FbRVCYMhL+Y7n6zbhIr4ynOebmr9yVcTDIPwV uu2vNXlwcITx42TGO5COSaLBb25p0rWzni0PBFYyXoNFZHLGgXRqfzzlRBaZuyAQ HlpGYTHYvQl4PT+rojDU21oCzwMbgBpa5gyuBlfPte/KIyrcjFUCAwEAAQKCAYAO M1r0+TCy4Z1hhceu5JdLql0RELZTbxi71IW2GVwW87gv75hy3hGLAs/1mdC+YIBP MkBka1JqzWq0/7rgcP5CSAMsInFqqv2s7fZ286ERGXuZFbnInnkrNsQUlJo3E9W+ tqKtGIM/i0EVHX0DRdJlqMtSjmjh43tB+M1wAUV+n6OjEtJue5wZK+AIpBmGicdP qZY+6IBnm8tcfzPXFRCoq7ZHdIu0jxnc4l2MQJK3DdL04KoiStOkSl8xDsI+lTtq D3qa41LE0TY8X2jJ/w6KK3cUeK7F4DQYs+kfCKWMVPpn0/5u6TbC1F7gLvkrseph 7cIgrruNNs9iKacnR1w3U72R+hNxHsNfo4RGHFa192p/Mfc+kiBd5RNR/M9oHdeq U6T/+KM+QyF5dDOyonY0QjwfAcEx+ZsV72nj8AerjM907I6dgHo/9YZ2S1Dt/xuG ntD+76GDzmrOvXmmpF0DsTn+Wql7AC4uzaOjv6PVziqz03pR61RpjPDemyJEWMkC gcEA7BkGGX3enBENs3X6BYFoeXfGO/hV7/aNpA6ykLzw657dqwy2b6bWLiIaqZdZ u0oiY6+SpOtavkZBFTq4bTVD58FHL0n73Yvvaft507kijpYBrxyDOfTJOETv+dVG XiY8AUSAE6GjPi0ebuYIVUxoDnMeWDuRJNvTck4byn1hJ1aVlEhwXNxt/nAjq48s 5QDuR6Z9F8lqEACRYCHSMQYFm35c7c1pPsHJnElX8a7eZ9lT7HGPXHaf/ypMkOzo dvJNAoHBAN7GhDomff/kSgQLyzmqKqQowTZlyihnReapygwr8YpNcqKDqq6VlnfH Jl1+qtSMSVI0csmccwJWkz1WtSjDsvY+oMdv4gUK3028vQAMQZo+Sh7OElFPFET3 UmL+Nh73ACPgpiommsdLZQPcIqpWNT5NzO+Jm5xa+U9ToVZgQ7xjrqee5NUiMutr r7UWAz7vDWu3x7bzYRRdUJxU18NogGbFGWJ1KM0c67GUXu2E7wBQdjVdS78UWs+4 XBxKQkG2KQKBwQCtO+M82x122BB8iGkulvhogBjlMd8klnzxTpN5HhmMWWH+uvI1 1G29Jer4WwRNJyU6jb4E4mgPyw7AG/jssLOlniy0Jw32TlIaKpoGXwZbJvgPW9Vx tgnbDsIiR3o9ZMKMj42GWgike4ikCIc+xzRmvdMbHIHwUJfCfEtp9TtPGPnh9pDz og3XLsMNg52GXnt3+VI6HOCE41XH+qj2rZt5r2tSVXEOyjQ7R5mOzSeFfXJVwDFX v/a/zHKnuB0OAdUCgcBLrxPTEaqy2eMPdtZHM/mipbnmejRw/4zu7XYYJoG7483z SlodT/K7pKvzDYqKBVMPm4P33K/x9mm1aBTJ0ZqmL+a9etRFtEjjByEKuB89gLX7 uzTb7MrNF10lBopqgK3KgpLRNSZWWNXrtskMJ5eVICdkpdJ5Dyst+RKR3siEYzU9 +yxxAFpeQsqB8gWORva/RsOR8yNjIMS3J9fZqlIdGA8ktPr0nEOyo96QQR5VdACE 5rpKI2cqtM6OSegynOkCgcAnr2Xzjef6tdcrxrQrq0DjEFTMoCAxQRa6tuF/NYHV AK70Y4hBNX84Bvym4hmfbMUEuOCJU+QHQf/iDQrHXPhtX3X2/t8M+AlIzmwLKf2o 
VwCYnZ8SqiwSaWVg+GANWLh0JuKn/ZYyR8urR79dAXFfp0UK+N39vIxNoBisBf+F G8mca7zx3UtK2eOW8WgGHz+Y20VZy0m/nkNekd1ZTXoSGhL+iN4XsTRn1YQIn69R kNdcwhtZZ3dpChUdf+w/LIc= -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/certdata/talos-2019-0758.pem000066400000000000000000000024621471441230600227370ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/ffdh3072.pem000066400000000000000000000042441471441230600202570ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 
nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.13/idnsans.pem000066400000000000000000000233321471441230600204720ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn /+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: 
ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 
0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/keycert.passwd.pem000066400000000000000000000102011471441230600217700ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 
GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/keycert.pem000066400000000000000000000077321471441230600205070ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT 
sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.13/keycert2.pem000066400000000000000000000077561471441230600205770ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ 
n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/keycert3.pem000066400000000000000000000223501471441230600205630ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, 
L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/keycert4.pem000066400000000000000000000223661471441230600205730ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE 
Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 
76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/keycertecc.pem000066400000000000000000000130051471441230600211500ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 
0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV 
HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/nokia.pem000066400000000000000000000036031471441230600201330ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/nosan.pem000066400000000000000000000170471471441230600201570ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd 
LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: 
aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/nullbytecert.pem000066400000000000000000000124731471441230600215530ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not 
Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. ************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA 
oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/nullcert.pem000066400000000000000000000000001471441230600206460ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.13/pycacert.pem000066400000000000000000000130401471441230600206400ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 
3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/pycakey.pem000066400000000000000000000046641471441230600205070ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML 
rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/revocation.crl000066400000000000000000000014401471441230600211770ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.13/secp384r1.pem000066400000000000000000000004001471441230600204560ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.13/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600246710ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK 
DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/signalinterproctester.py000066400000000000000000000061751471441230600233420ustar00rootroot00000000000000import gc import os import signal import subprocess import sys import time import unittest from test import support class SIGUSR1Exception(Exception): pass class InterProcessSignalTests(unittest.TestCase): def setUp(self): self.got_signals = {'SIGHUP': 0, 'SIGUSR1': 0, 'SIGALRM': 0} def sighup_handler(self, signum, frame): self.got_signals['SIGHUP'] += 1 def sigusr1_handler(self, signum, frame): self.got_signals['SIGUSR1'] += 1 raise SIGUSR1Exception def wait_signal(self, child, signame): if child is not None: # This wait should be interrupted by exc_class # (if set) child.wait() start_time = time.monotonic() for _ in support.busy_retry(support.SHORT_TIMEOUT, error=False): if self.got_signals[signame]: return signal.pause() else: dt = time.monotonic() - start_time self.fail('signal %s not received after %.1f seconds' % (signame, dt)) def subprocess_send_signal(self, pid, signame): code = 'import os, signal; os.kill(%s, signal.%s)' % (pid, signame) args = [sys.executable, '-I', '-c', code] return subprocess.Popen(args) def test_interprocess_signal(self): # Install handlers. This function runs in a sub-process, so we # don't worry about re-setting the default handlers. signal.signal(signal.SIGHUP, self.sighup_handler) signal.signal(signal.SIGUSR1, self.sigusr1_handler) signal.signal(signal.SIGUSR2, signal.SIG_IGN) signal.signal(signal.SIGALRM, signal.default_int_handler) # Let the sub-processes know who to send signals to. pid = str(os.getpid()) with self.subprocess_send_signal(pid, "SIGHUP") as child: self.wait_signal(child, 'SIGHUP') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 0, 'SIGALRM': 0}) # gh-110033: Make sure that the subprocess.Popen is deleted before # the next test which raises an exception. Otherwise, the exception # may be raised when Popen.__del__() is executed and so be logged # as "Exception ignored in: ". 
child = None gc.collect() with self.assertRaises(SIGUSR1Exception): with self.subprocess_send_signal(pid, "SIGUSR1") as child: self.wait_signal(child, 'SIGUSR1') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) with self.subprocess_send_signal(pid, "SIGUSR2") as child: # Nothing should happen: SIGUSR2 is ignored child.wait() try: with self.assertRaises(KeyboardInterrupt): signal.alarm(1) self.wait_signal(None, 'SIGALRM') self.assertEqual(self.got_signals, {'SIGHUP': 1, 'SIGUSR1': 1, 'SIGALRM': 0}) finally: signal.alarm(0) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/ssl_cert.pem000066400000000000000000000030421471441230600206450ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/ssl_key.passwd.pem000066400000000000000000000051361471441230600220060ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 
c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/ssl_key.pem000066400000000000000000000046701471441230600205100ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO 
yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.13/talos-2019-0758.pem000066400000000000000000000024621471441230600211500ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.13/test_context.py000066400000000000000000000754531471441230600214400ustar00rootroot00000000000000import concurrent.futures import contextvars import functools import gc import random import time import unittest import weakref from test import support from test.support import threading_helper try: from _testinternalcapi import hamt except ImportError: hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): def test_context_var_new_1(self): with self.assertRaisesRegex(TypeError, 'takes exactly 1'): contextvars.ContextVar() with self.assertRaisesRegex(TypeError, 'must be a str'): contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = contextvars.ContextVar('a', default=123) self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', 
repr(t)) def test_context_subclassing_1(self): with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContextVar(contextvars.ContextVar): # Potentially we might want ContextVars to be subclassable. pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContext(contextvars.Context): pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyToken(contextvars.Token): pass def test_context_new_1(self): with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1, a=1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) def test_context_run_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'missing 1 required'): ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 42) ctx.run(fun) def test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() 
self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context @threading_helper.requires_working_threading() def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num tp = concurrent.futures.ThreadPoolExecutor(max_workers=10) try: results = list(tp.map(sub, range(10))) finally: tp.shutdown() self.assertEqual(results, list(range(10))) # HAMT Tests class HashKey: _crasher = None def __init__(self, hash, name, *, error_on_eq_to=None): assert hash != -1 self.name = name self.hash = hash self.error_on_eq_to = error_on_eq_to def __repr__(self): return 
f'<Key name:{self.name} hash:{self.hash}>' def __hash__(self): if self._crasher is not None and self._crasher.error_on_hash: raise HashingError return self.hash def __eq__(self, other): if not isinstance(other, HashKey): return NotImplemented if self._crasher is not None and self._crasher.error_on_eq: raise EqError if self.error_on_eq_to is not None and self.error_on_eq_to is other: raise ValueError(f'cannot compare {self!r} to {other!r}') if other.error_on_eq_to is not None and other.error_on_eq_to is self: raise ValueError(f'cannot compare {other!r} to {self!r}') return (self.name, self.hash) == (other.name, other.hash) class KeyStr(str): def __hash__(self): if HashKey._crasher is not None and HashKey._crasher.error_on_hash: raise HashingError return super().__hash__() def __eq__(self, other): if HashKey._crasher is not None and HashKey._crasher.error_on_eq: raise EqError return super().__eq__(other) class HaskKeyCrasher: def __init__(self, *, error_on_hash=False, error_on_eq=False): self.error_on_hash = error_on_hash self.error_on_eq = error_on_eq def __enter__(self): if HashKey._crasher is not None: raise RuntimeError('cannot nest crashers') HashKey._crasher = self def __exit__(self, *exc): HashKey._crasher = None class HashingError(Exception): pass class EqError(Exception): pass @unittest.skipIf(hamt is None, '_testinternalcapi.hamt() not available') class HamtTest(unittest.TestCase): def test_hashkey_helper_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') self.assertNotEqual(k1, k2) self.assertEqual(hash(k1), hash(k2)) d = dict() d[k1] = 'a' d[k2] = 'b' self.assertEqual(d[k1], 'a') self.assertEqual(d[k2], 'b') def test_hamt_basics_1(self): h = hamt() h = None # NoQA def test_hamt_basics_2(self): h = hamt() self.assertEqual(len(h), 0) h2 = h.set('a', 'b') self.assertIsNot(h, h2) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertIsNone(h.get('a')) self.assertEqual(h.get('a', 42), 42) self.assertEqual(h2.get('a'), 'b') h3 = h2.set('b', 10) self.assertIsNot(h2, h3) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(h3.get('a'), 'b') self.assertEqual(h3.get('b'), 10) self.assertIsNone(h.get('b')) self.assertIsNone(h2.get('b')) self.assertIsNone(h.get('a')) self.assertEqual(h2.get('a'), 'b') h = h2 = h3 = None def test_hamt_basics_3(self): h = hamt() o = object() h1 = h.set('1', o) h2 = h1.set('1', o) self.assertIs(h1, h2) def test_hamt_basics_4(self): h = hamt() h1 = h.set('key', []) h2 = h1.set('key', []) self.assertIsNot(h1, h2) self.assertEqual(len(h1), 1) self.assertEqual(len(h2), 1) self.assertIsNot(h1.get('key'), h2.get('key')) def test_hamt_collision_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') k3 = HashKey(10, 'ccc') h = hamt() h2 = h.set(k1, 'a') h3 = h2.set(k2, 'b') self.assertEqual(h.get(k1), None) self.assertEqual(h.get(k2), None) self.assertEqual(h2.get(k1), 'a') self.assertEqual(h2.get(k2), None) self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') h4 = h3.set(k2, 'cc') h5 = h4.set(k3, 'aa') self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') self.assertEqual(h4.get(k1), 'a') self.assertEqual(h4.get(k2), 'cc') self.assertEqual(h4.get(k3), None) self.assertEqual(h5.get(k1), 'a') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k3), 'aa') self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(len(h4), 2) self.assertEqual(len(h5), 3) def test_hamt_collision_3(self): # Test that iteration works with the
deepest tree possible. # https://github.com/python/cpython/issues/93065 C = HashKey(0b10000000_00000000_00000000_00000000, 'C') D = HashKey(0b10000000_00000000_00000000_00000000, 'D') E = HashKey(0b00000000_00000000_00000000_00000000, 'E') h = hamt() h = h.set(C, 'C') h = h.set(D, 'D') h = h.set(E, 'E') # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=4 count=2 bitmap=0b101): # : 'E' # NULL: # CollisionNode(size=4 id=0x107a24520): # : 'C' # : 'D' self.assertEqual({k.name for k in h.keys()}, {'C', 'D', 'E'}) @support.requires_resource('cpu') def test_hamt_stress(self): COLLECTION_SIZE = 7000 TEST_ITERS_EVERY = 647 CRASH_HASH_EVERY = 97 CRASH_EQ_EVERY = 11 RUN_XTIMES = 3 for _ in range(RUN_XTIMES): h = hamt() d = dict() for i in range(COLLECTION_SIZE): key = KeyStr(i) if not (i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.set(key, i) h = h.set(key, i) if not (i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.get(KeyStr(i)) # really trigger __eq__ d[key] = i self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.items()), set(d.items())) self.assertEqual(len(h.items()), len(d.items())) self.assertEqual(len(h), COLLECTION_SIZE) for key in range(COLLECTION_SIZE): self.assertEqual(h.get(KeyStr(key), 'not found'), key) keys_to_delete = list(range(COLLECTION_SIZE)) random.shuffle(keys_to_delete) for iter_i, i in enumerate(keys_to_delete): key = KeyStr(i) if not (iter_i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.delete(key) if not (iter_i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.delete(KeyStr(i)) h = h.delete(key) self.assertEqual(h.get(key, 'not found'), 'not found') del d[key] self.assertEqual(len(d), len(h)) if iter_i == COLLECTION_SIZE // 2: hm = h dm = d.copy() if not (iter_i % TEST_ITERS_EVERY): self.assertEqual(set(h.keys()), set(d.keys())) self.assertEqual(len(h.keys()), len(d.keys())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) # ============ for key in dm: self.assertEqual(hm.get(str(key)), dm[key]) self.assertEqual(len(dm), len(hm)) for i, key in enumerate(keys_to_delete): hm = hm.delete(str(key)) self.assertEqual(hm.get(str(key), 'not found'), 'not found') dm.pop(str(key), None) self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.values()), set(d.values())) self.assertEqual(len(h.values()), len(d.values())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) self.assertEqual(list(h.items()), []) def test_hamt_delete_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(102, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(103, 'Er', error_on_eq_to=D) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # : 'a' # : 'b' # : 'c' # : 'd' # : 'e' h = h.delete(C) self.assertEqual(len(h), orig_len - 1) with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(D) self.assertEqual(len(h), orig_len - 2) h2 = h.delete(Z) self.assertIs(h2, h) h = 
h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(A, 42), 42) self.assertEqual(h.get(B), 'b') self.assertEqual(h.get(E), 'e') def test_hamt_delete_2(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(201001, 'Er', error_on_eq_to=B) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=8 bitmap=0b1110010000): # : 'a' # : 'd' # : 'e' # NULL: # BitmapNode(size=4 bitmap=0b100000000001000000000): # : 'b' # : 'c' with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(Z) self.assertEqual(len(h), orig_len) h = h.delete(C) self.assertEqual(len(h), orig_len - 1) h = h.delete(B) self.assertEqual(len(h), orig_len - 2) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(D), 'd') self.assertEqual(h.get(E), 'e') h = h.delete(A) h = h.delete(B) h = h.delete(D) h = h.delete(E) self.assertEqual(len(h), 0) def test_hamt_delete_3(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(104, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=6 bitmap=0b100110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=4 id=0x108572410): # : 'c' # : 'd' # : 'b' # : 'e' h = h.delete(A) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) self.assertEqual(h.get(C), 'c') self.assertEqual(h.get(B), 'b') def test_hamt_delete_4(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=4 bitmap=0b110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=6 id=0x10515ef30): # : 'c' # : 'd' # : 'e' # : 'b' h = h.delete(D) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) h = h.delete(C) self.assertEqual(len(h), orig_len - 3) h = h.delete(A) self.assertEqual(len(h), orig_len - 4) h = h.delete(B) self.assertEqual(len(h), 0) def test_hamt_delete_5(self): h = hamt() keys = [] for i in range(17): key = HashKey(i, str(i)) keys.append(key) h = h.set(key, f'val-{i}') collision_key16 = HashKey(16, '18') h = h.set(collision_key16, 'collision') # ArrayNode(id=0x10f8b9318): # 0:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-0' # # ... 14 more BitmapNodes ... 
# # 15:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-15' # # 16:: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # CollisionNode(size=4 id=0x10f2f5af8): # : 'val-16' # : 'collision' self.assertEqual(len(h), 18) h = h.delete(keys[2]) self.assertEqual(len(h), 17) h = h.delete(collision_key16) self.assertEqual(len(h), 16) h = h.delete(keys[16]) self.assertEqual(len(h), 15) h = h.delete(keys[1]) self.assertEqual(len(h), 14) h = h.delete(keys[1]) self.assertEqual(len(h), 14) for key in keys: h = h.delete(key) self.assertEqual(len(h), 0) def test_hamt_items_1(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_items_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_keys_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) self.assertEqual(set(list(h)), {A, B, C, D, E, F}) def test_hamt_items_3(self): h = hamt() self.assertEqual(len(h.items()), 0) self.assertEqual(list(h.items()), []) def test_hamt_eq_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(120, 'E') h1 = hamt() h1 = h1.set(A, 'a') h1 = h1.set(B, 'b') h1 = h1.set(C, 'c') h1 = h1.set(D, 'd') h2 = hamt() h2 = h2.set(A, 'a') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(B, 'b') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(C, 'c') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd2') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd') self.assertTrue(h1 == h2) self.assertFalse(h1 != h2) h2 = h2.set(E, 'e') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.delete(D) self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(E, 'd') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) def test_hamt_eq_2(self): A = HashKey(100, 'A') Er = HashKey(100, 'Er', error_on_eq_to=A) h1 = hamt() h1 = h1.set(A, 'a') h2 = hamt() h2 = h2.set(Er, 'a') with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 == h2 with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 != h2 def test_hamt_gc_1(self): A = HashKey(100, 'A') h = hamt() h = h.set(0, 0) # empty HAMT node is memoized in hamt.c ref = weakref.ref(h) a = [] a.append(a) a.append(h) b = [] a.append(b) b.append(a) h = h.set(A, b) del h, a, b gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_gc_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 'a') h = h.set(A, h) ref = weakref.ref(h) hi = h.items() next(hi) del h, hi gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_in_1(self): A = HashKey(100, 'A') AA = 
HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertTrue(A in h) self.assertFalse(B in h) with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): AA in h with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): AA in h def test_hamt_getitem_1(self): A = HashKey(100, 'A') AA = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertEqual(h[A], 1) self.assertEqual(h[AA], 1) with self.assertRaises(KeyError): h[B] with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): h[AA] with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_ftplib.py000066400000000000000000001240311471441230600212170ustar00rootroot00000000000000"""Test script for ftplib module.""" # Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS # environment import ftplib import socket import io import errno import os import threading import time import unittest try: import ssl except ImportError: ssl = None from unittest import TestCase, skipUnless from test import support from test.support import requires_subprocess from test.support import threading_helper from test.support import socket_helper from test.support import warnings_helper from test.support import asynchat from test.support import asyncore from test.support.socket_helper import HOST, HOSTv6 support.requires_working_socket(module=True) TIMEOUT = support.LOOPBACK_TIMEOUT DEFAULT_ENCODING = 'utf-8' # the dummy data returned by server over the data channel when # RETR, LIST, NLST, MLSD commands are issued RETR_DATA = 'abcde\xB9\xB2\xB3\xA4\xA6\r\n' * 1000 LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n" "type=pdir;perm=e;unique==keVO1+d?3; ..\r\n" "type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n" "type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n" "type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n" "type=file;perm=awr;unique==keVO1+8G4; writable\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n" "type=dir;perm=;unique==keVO1+1t2; no-exec\r\n" "type=file;perm=r;unique==keVO1+EG4; two words\r\n" "type=file;perm=r;unique==keVO1+IH4; leading space\r\n" "type=file;perm=r;unique==keVO1+1G4; file1\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n" "type=file;perm=r;unique==keVO1+1G4; file2\r\n" "type=file;perm=r;unique==keVO1+1G4; file3\r\n" "type=file;perm=r;unique==keVO1+1G4; file4\r\n" "type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n" "type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n") def default_error_handler(): # bpo-44359: Silently ignore socket errors. Such errors occur when a client # socket is closed, in TestFTPClass.tearDown() and makepasv() tests, and # the server gets an error on its side. pass class DummyDTPHandler(asynchat.async_chat): dtp_conn_closed = False def __init__(self, conn, baseclass): asynchat.async_chat.__init__(self, conn) self.baseclass = baseclass self.baseclass.last_received_data = bytearray() self.encoding = baseclass.encoding def handle_read(self): new_data = self.recv(1024) self.baseclass.last_received_data += new_data def handle_close(self): # XXX: this method can be called many times in a row for a single # connection, including in clear-text (non-TLS) mode. 
# (behaviour witnessed with test_data_connection) if not self.dtp_conn_closed: self.baseclass.push('226 transfer complete') self.close() self.dtp_conn_closed = True def push(self, what): if self.baseclass.next_data is not None: what = self.baseclass.next_data self.baseclass.next_data = None if not what: return self.close_when_done() super(DummyDTPHandler, self).push(what.encode(self.encoding)) def handle_error(self): default_error_handler() class DummyFTPHandler(asynchat.async_chat): dtp_handler = DummyDTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): asynchat.async_chat.__init__(self, conn) # tells the socket to handle urgent data inline (ABOR command) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1) self.set_terminator(b"\r\n") self.in_buffer = [] self.dtp = None self.last_received_cmd = None self.last_received_data = bytearray() self.next_response = '' self.next_data = None self.rest = None self.next_retr_data = RETR_DATA self.push('220 welcome') self.encoding = encoding # We use this as the string IPv4 address to direct the client # to in response to a PASV command. To test security behavior. # https://bugs.python.org/issue43285/. self.fake_pasv_server_ip = '252.253.254.255' def collect_incoming_data(self, data): self.in_buffer.append(data) def found_terminator(self): line = b''.join(self.in_buffer).decode(self.encoding) self.in_buffer = [] if self.next_response: self.push(self.next_response) self.next_response = '' cmd = line.split(' ')[0].lower() self.last_received_cmd = cmd space = line.find(' ') if space != -1: arg = line[space + 1:] else: arg = "" if hasattr(self, 'cmd_' + cmd): method = getattr(self, 'cmd_' + cmd) method(arg) else: self.push('550 command "%s" not understood.' %cmd) def handle_error(self): default_error_handler() def push(self, data): asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n') def cmd_port(self, arg): addr = list(map(int, arg.split(','))) ip = '%d.%d.%d.%d' %tuple(addr[:4]) port = (addr[4] * 256) + addr[5] s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_pasv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0)) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] ip = self.fake_pasv_server_ip ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256 self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2)) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_eprt(self, arg): af, ip, port = arg.split(arg[0])[1:-1] port = int(port) s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_epsv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0), family=socket.AF_INET6) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] self.push('229 entering extended passive mode (|||%d|)' %port) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_echo(self, arg): # sends back the received string (used by the test suite) self.push(arg) def cmd_noop(self, arg): self.push('200 noop ok') def cmd_user(self, arg): self.push('331 username ok') def cmd_pass(self, arg): self.push('230 password ok') def cmd_acct(self, arg): self.push('230 acct ok') def cmd_rnfr(self, arg): self.push('350 rnfr ok') def cmd_rnto(self, arg): self.push('250 rnto ok') def cmd_dele(self, arg): 
self.push('250 dele ok') def cmd_cwd(self, arg): self.push('250 cwd ok') def cmd_size(self, arg): self.push('250 1000') def cmd_mkd(self, arg): self.push('257 "%s"' %arg) def cmd_rmd(self, arg): self.push('250 rmd ok') def cmd_pwd(self, arg): self.push('257 "pwd ok"') def cmd_type(self, arg): self.push('200 type ok') def cmd_quit(self, arg): self.push('221 quit ok') self.close() def cmd_abor(self, arg): self.push('226 abor ok') def cmd_stor(self, arg): self.push('125 stor ok') def cmd_rest(self, arg): self.rest = arg self.push('350 rest ok') def cmd_retr(self, arg): self.push('125 retr ok') if self.rest is not None: offset = int(self.rest) else: offset = 0 self.dtp.push(self.next_retr_data[offset:]) self.dtp.close_when_done() self.rest = None def cmd_list(self, arg): self.push('125 list ok') self.dtp.push(LIST_DATA) self.dtp.close_when_done() def cmd_nlst(self, arg): self.push('125 nlst ok') self.dtp.push(NLST_DATA) self.dtp.close_when_done() def cmd_opts(self, arg): self.push('200 opts ok') def cmd_mlsd(self, arg): self.push('125 mlsd ok') self.dtp.push(MLSD_DATA) self.dtp.close_when_done() def cmd_setlongretr(self, arg): # For testing. Next RETR will return long line. self.next_retr_data = 'x' * int(arg) self.push('125 setlongretr ok') class DummyFTPServer(asyncore.dispatcher, threading.Thread): handler = DummyFTPHandler def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING): threading.Thread.__init__(self) asyncore.dispatcher.__init__(self) self.daemon = True self.create_socket(af, socket.SOCK_STREAM) self.bind(address) self.listen(5) self.active = False self.active_lock = threading.Lock() self.host, self.port = self.socket.getsockname()[:2] self.handler_instance = None self.encoding = encoding def start(self): assert not self.active self.__flag = threading.Event() threading.Thread.start(self) self.__flag.wait() def run(self): self.active = True self.__flag.set() while self.active and asyncore.socket_map: self.active_lock.acquire() asyncore.loop(timeout=0.1, count=1) self.active_lock.release() asyncore.close_all(ignore_all=True) def stop(self): assert self.active self.active = False self.join() def handle_accepted(self, conn, addr): self.handler_instance = self.handler(conn, encoding=self.encoding) def handle_connect(self): self.close() handle_read = handle_connect def writable(self): return 0 def handle_error(self): default_error_handler() if ssl is not None: CERTFILE = os.path.join(os.path.dirname(__file__), "certdata", "keycert3.pem") CAFILE = os.path.join(os.path.dirname(__file__), "certdata", "pycacert.pem") class SSLConnection(asyncore.dispatcher): """An asyncore.dispatcher subclass supporting TLS/SSL.""" _ssl_accepting = False _ssl_closing = False def secure_connection(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain(CERTFILE) socket = context.wrap_socket(self.socket, suppress_ragged_eofs=False, server_side=True, do_handshake_on_connect=False) self.del_channel() self.set_socket(socket) self._ssl_accepting = True def _do_ssl_handshake(self): try: self.socket.do_handshake() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return elif err.args[0] == ssl.SSL_ERROR_EOF: return self.handle_close() # TODO: SSLError does not expose alert information elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]: return self.handle_close() raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def _do_ssl_shutdown(self): 
self._ssl_closing = True try: self.socket = self.socket.unwrap() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return except OSError: # Any "socket error" corresponds to a SSL_ERROR_SYSCALL return # from OpenSSL's SSL_shutdown(), corresponding to a # closed socket condition. See also: # http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html pass self._ssl_closing = False if getattr(self, '_ccc', False) is False: super(SSLConnection, self).close() else: pass def handle_read_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_read_event() def handle_write_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_write_event() def send(self, data): try: return super(SSLConnection, self).send(data) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN, ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return 0 raise def recv(self, buffer_size): try: return super(SSLConnection, self).recv(buffer_size) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return b'' if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN): self.handle_close() return b'' raise def handle_error(self): default_error_handler() def close(self): if (isinstance(self.socket, ssl.SSLSocket) and self.socket._sslobj is not None): self._do_ssl_shutdown() else: super(SSLConnection, self).close() class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler): """A DummyDTPHandler subclass supporting TLS/SSL.""" def __init__(self, conn, baseclass): DummyDTPHandler.__init__(self, conn, baseclass) if self.baseclass.secure_data_channel: self.secure_connection() class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler): """A DummyFTPHandler subclass supporting TLS/SSL.""" dtp_handler = DummyTLS_DTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): DummyFTPHandler.__init__(self, conn, encoding=encoding) self.secure_data_channel = False self._ccc = False def cmd_auth(self, line): """Set up secure control channel.""" self.push('234 AUTH TLS successful') self.secure_connection() def cmd_ccc(self, line): self.push('220 Reverting back to clear-text') self._ccc = True self._do_ssl_shutdown() def cmd_pbsz(self, line): """Negotiate size of buffer for secure data transfer. For TLS/SSL the only valid value for the parameter is '0'. Any other value is accepted but ignored. 
""" self.push('200 PBSZ=0 successful.') def cmd_prot(self, line): """Setup un/secure data channel.""" arg = line.upper() if arg == 'C': self.push('200 Protection set to Clear') self.secure_data_channel = False elif arg == 'P': self.push('200 Protection set to Private') self.secure_data_channel = True else: self.push("502 Unrecognized PROT type (use C or P).") class DummyTLS_FTPServer(DummyFTPServer): handler = DummyTLS_FTPHandler class TestFTPClass(TestCase): def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyFTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def check_data(self, received, expected): self.assertEqual(len(received), len(expected)) self.assertEqual(received, expected) def test_getwelcome(self): self.assertEqual(self.client.getwelcome(), '220 welcome') def test_sanitize(self): self.assertEqual(self.client.sanitize('foo'), repr('foo')) self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****')) self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****')) def test_exceptions(self): self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599') self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999') def test_all_errors(self): exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm, ftplib.error_proto, ftplib.Error, OSError, EOFError) for x in exceptions: try: raise x('exception not included in all_errors set') except ftplib.all_errors: pass def test_set_pasv(self): # passive mode is supposed to be enabled by default self.assertTrue(self.client.passiveserver) self.client.set_pasv(True) self.assertTrue(self.client.passiveserver) self.client.set_pasv(False) self.assertFalse(self.client.passiveserver) def test_voidcmd(self): self.assertEqual(self.client.voidcmd('echo 200'), '200') self.assertEqual(self.client.voidcmd('echo 299'), '299') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300') def test_login(self): self.client.login() def test_acct(self): self.client.acct('passwd') def test_rename(self): self.client.rename('a', 'b') self.server.handler_instance.next_response = '200' self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b') def test_delete(self): self.client.delete('foo') self.server.handler_instance.next_response = '199' self.assertRaises(ftplib.error_reply, self.client.delete, 'foo') def test_size(self): self.client.size('foo') def test_mkd(self): dir = self.client.mkd('/foo') self.assertEqual(dir, '/foo') def test_rmd(self): self.client.rmd('foo') def test_cwd(self): dir = self.client.cwd('/foo') self.assertEqual(dir, '250 cwd ok') def test_pwd(self): dir = self.client.pwd() self.assertEqual(dir, 'pwd ok') def test_quit(self): self.assertEqual(self.client.quit(), '221 quit ok') # Ensure the connection gets 
closed; sock attribute should be None self.assertEqual(self.client.sock, None) def test_abort(self): self.client.abort() def test_retrbinary(self): received = [] self.client.retrbinary('retr', received.append) self.check_data(b''.join(received), RETR_DATA.encode(self.client.encoding)) def test_retrbinary_rest(self): for rest in (0, 10, 20): received = [] self.client.retrbinary('retr', received.append, rest=rest) self.check_data(b''.join(received), RETR_DATA[rest:].encode(self.client.encoding)) def test_retrlines(self): received = [] self.client.retrlines('retr', received.append) self.check_data(''.join(received), RETR_DATA.replace('\r\n', '')) def test_storbinary(self): f = io.BytesIO(RETR_DATA.encode(self.client.encoding)) self.client.storbinary('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storbinary('stor', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) def test_storbinary_rest(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) for r in (30, '30'): f.seek(0) self.client.storbinary('stor', f, rest=r) self.assertEqual(self.server.handler_instance.rest, str(r)) def test_storlines(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) self.client.storlines('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA.encode(self.server.encoding)) # test new callback arg flag = [] f.seek(0) self.client.storlines('stor foo', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) f = io.StringIO(RETR_DATA.replace('\r\n', '\n')) # storlines() expects a binary file, not a text file with warnings_helper.check_warnings(('', BytesWarning), quiet=True): self.assertRaises(TypeError, self.client.storlines, 'stor foo', f) def test_nlst(self): self.client.nlst() self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1]) def test_dir(self): l = [] self.client.dir(l.append) self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', '')) def test_mlsd(self): list(self.client.mlsd()) list(self.client.mlsd(path='/')) list(self.client.mlsd(path='/', facts=['size', 'type'])) ls = list(self.client.mlsd()) for name, facts in ls: self.assertIsInstance(name, str) self.assertIsInstance(facts, dict) self.assertTrue(name) self.assertIn('type', facts) self.assertIn('perm', facts) self.assertIn('unique', facts) def set_data(data): self.server.handler_instance.next_data = data def test_entry(line, type=None, perm=None, unique=None, name=None): type = 'type' if type is None else type perm = 'perm' if perm is None else perm unique = 'unique' if unique is None else unique name = 'name' if name is None else name set_data(line) _name, facts = next(self.client.mlsd()) self.assertEqual(_name, name) self.assertEqual(facts['type'], type) self.assertEqual(facts['perm'], perm) self.assertEqual(facts['unique'], unique) # plain test_entry('type=type;perm=perm;unique=unique; name\r\n') # "=" in fact value test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe") test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type") test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe") test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====") # spaces in name test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me") test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ") 
test_entry('type=type;perm=perm;unique=unique; name\r\n', name=" name") test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e") # ";" in name test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me") test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name") test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;") test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;") # case sensitiveness set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n') _name, facts = next(self.client.mlsd()) for x in facts: self.assertTrue(x.islower()) # no data (directory empty) set_data('') self.assertRaises(StopIteration, next, self.client.mlsd()) set_data('') for x in self.client.mlsd(): self.fail("unexpected data %s" % x) def test_makeport(self): with self.client.makeport(): # IPv4 is in use, just make sure send_eprt has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'port') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() # IPv4 is in use, just make sure send_epsv has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv') def test_makepasv_issue43285_security_disabled(self): """Test the opt-in to the old vulnerable behavior.""" self.client.trust_server_pasv_ipv4_address = True bad_host, port = self.client.makepasv() self.assertEqual( bad_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. socket.create_connection((self.client.sock.getpeername()[0], port), timeout=TIMEOUT).close() def test_makepasv_issue43285_security_enabled_default(self): self.assertFalse(self.client.trust_server_pasv_ipv4_address) trusted_host, port = self.client.makepasv() self.assertNotEqual( trusted_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. 
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close() def test_with_statement(self): self.client.quit() def is_client_connected(): if self.client.sock is None: return False try: self.client.sendcmd('noop') except (OSError, EOFError): return False return True # base test with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.assertTrue(is_client_connected()) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # QUIT sent inside the with block with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.client.quit() self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # force a wrong response code to be sent on QUIT: error_perm # is expected and the connection is supposed to be closed try: with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.server.handler_instance.next_response = '550 error on quit' except ftplib.error_perm as err: self.assertEqual(str(err), '550 error on quit') else: self.fail('Exception not raised') # needed to give the threaded server some time to set the attribute # which otherwise would still be == 'noop' time.sleep(0.1) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) def test_source_address(self): self.client.quit() port = socket_helper.find_unused_port() try: self.client.connect(self.server.host, self.server.port, source_address=(HOST, port)) self.assertEqual(self.client.sock.getsockname()[1], port) self.client.quit() except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_source_address_passive_connection(self): port = socket_helper.find_unused_port() self.client.source_address = (HOST, port) try: with self.client.transfercmd('list') as sock: self.assertEqual(sock.getsockname()[1], port) except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_parse257(self): self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar') self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar') self.assertEqual(ftplib.parse257('257 ""'), '') self.assertEqual(ftplib.parse257('257 "" created'), '') self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"') # The 257 response is supposed to include the directory # name and in case it contains embedded double-quotes # they must be doubled (see RFC-959, chapter 7, appendix 2). 
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar') self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar') def test_line_too_long(self): self.assertRaises(ftplib.Error, self.client.sendcmd, 'x' * self.client.maxline * 2) def test_retrlines_too_long(self): self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2)) received = [] self.assertRaises(ftplib.Error, self.client.retrlines, 'retr', received.append) def test_storlines_too_long(self): f = io.BytesIO(b'x' * self.client.maxline * 2) self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f) def test_encoding_param(self): encodings = ['latin-1', 'utf-8'] for encoding in encodings: with self.subTest(encoding=encoding): self.tearDown() self.setUp(encoding=encoding) self.assertEqual(encoding, self.client.encoding) self.test_retrbinary() self.test_storbinary() self.test_retrlines() new_dir = self.client.mkd('/non-ascii dir \xAE') self.check_data(new_dir, '/non-ascii dir \xAE') # Check default encoding client = ftplib.FTP(timeout=TIMEOUT) self.assertEqual(DEFAULT_ENCODING, client.encoding) @skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class TestIPv6Environment(TestCase): def setUp(self): self.server = DummyFTPServer((HOSTv6, 0), af=socket.AF_INET6, encoding=DEFAULT_ENCODING) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_af(self): self.assertEqual(self.client.af, socket.AF_INET6) def test_makeport(self): with self.client.makeport(): self.assertEqual(self.server.handler_instance.last_received_cmd, 'eprt') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv') def test_transfer(self): def retr(): received = [] self.client.retrbinary('retr', received.append) self.assertEqual(b''.join(received), RETR_DATA.encode(self.client.encoding)) self.client.set_pasv(True) retr() self.client.set_pasv(False) retr() @skipUnless(ssl, "SSL not available") @requires_subprocess() class TestTLS_FTPClassMixin(TestFTPClass): """Repeat TestFTPClass tests starting the TLS layer for both control and data connections first. 
""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) # enable TLS self.client.auth() self.client.prot_p() @skipUnless(ssl, "SSL not available") @requires_subprocess() class TestTLS_FTPClass(TestCase): """Specific TLS_FTP class tests.""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_control_connection(self): self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIsInstance(self.client.sock, ssl.SSLSocket) def test_data_connection(self): # clear text with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # secured, after PROT P self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIsInstance(sock, ssl.SSLSocket) # consume from SSL socket to finalize handshake and avoid # "SSLError [SSL] shutdown while in init" self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # PROT C is issued, the connection must be in cleartext again self.client.prot_c() with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") def test_login(self): # login() is supposed to implicitly secure the control connection self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.login() self.assertIsInstance(self.client.sock, ssl.SSLSocket) # make sure that AUTH TLS doesn't get issued again self.client.login() def test_auth_issued_twice(self): self.client.auth() self.assertRaises(ValueError, self.client.auth) def test_context(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE self.assertRaises(TypeError, ftplib.FTP_TLS, keyfile=CERTFILE, context=ctx) self.assertRaises(TypeError, ftplib.FTP_TLS, certfile=CERTFILE, context=ctx) self.assertRaises(TypeError, ftplib.FTP_TLS, certfile=CERTFILE, keyfile=CERTFILE, context=ctx) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIs(self.client.sock.context, ctx) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIs(sock.context, ctx) self.assertIsInstance(sock, ssl.SSLSocket) def test_ccc(self): self.assertRaises(ValueError, self.client.ccc) self.client.login(secure=True) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.ccc() self.assertRaises(ValueError, self.client.sock.unwrap) @skipUnless(False, "FIXME: bpo-32706") def test_check_hostname(self): self.client.quit() ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.check_hostname, True) ctx.load_verify_locations(CAFILE) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) # 127.0.0.1 doesn't match SAN self.client.connect(self.server.host, self.server.port) with self.assertRaises(ssl.CertificateError): self.client.auth() # exception quits connection self.client.connect(self.server.host, self.server.port) self.client.prot_p() with self.assertRaises(ssl.CertificateError): with self.client.transfercmd("list") as sock: pass self.client.quit() self.client.connect("localhost", self.server.port) self.client.auth() self.client.quit() self.client.connect("localhost", self.server.port) self.client.prot_p() with self.client.transfercmd("list") as sock: pass class TestTimeouts(TestCase): def setUp(self): self.evt = threading.Event() self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.settimeout(20) self.port = socket_helper.bind_port(self.sock) self.server_thread = threading.Thread(target=self.server) self.server_thread.daemon = True self.server_thread.start() # Wait for the server to be ready. self.evt.wait() self.evt.clear() self.old_port = ftplib.FTP.port ftplib.FTP.port = self.port def tearDown(self): ftplib.FTP.port = self.old_port self.server_thread.join() # Explicitly clear the attribute to prevent dangling thread self.server_thread = None def server(self): # This method sets the evt 3 times: # 1) when the connection is ready to be accepted. # 2) when it is safe for the caller to close the connection # 3) when we have closed the socket self.sock.listen() # (1) Signal the caller that we are ready to accept the connection. self.evt.set() try: conn, addr = self.sock.accept() except TimeoutError: pass else: conn.sendall(b"1 Hola mundo\n") conn.shutdown(socket.SHUT_WR) # (2) Signal the caller that it is safe to close the socket. 
self.evt.set() conn.close() finally: self.sock.close() def testTimeoutDefault(self): # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST) finally: socket.setdefaulttimeout(None) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutNone(self): # no timeout -- do not use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST, timeout=None) finally: socket.setdefaulttimeout(None) self.assertIsNone(ftp.sock.gettimeout()) self.evt.wait() ftp.close() def testTimeoutValue(self): # a value ftp = ftplib.FTP(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() # bpo-39259 with self.assertRaises(ValueError): ftplib.FTP(HOST, timeout=0) def testTimeoutConnect(self): ftp = ftplib.FTP() ftp.connect(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDifferentOrder(self): ftp = ftplib.FTP(timeout=30) ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDirectAccess(self): ftp = ftplib.FTP() ftp.timeout = 30 ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() class MiscTestCase(TestCase): def test__all__(self): not_exported = { 'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF', 'Error', 'parse150', 'parse227', 'parse229', 'parse257', 'print_line', 'ftpcp', 'test'} support.check__all__(self, ftplib, not_exported=not_exported) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/greentest/3.13/test_httplib.py000066400000000000000000003006771471441230600214210ustar00rootroot00000000000000import enum import errno from http import client, HTTPStatus import io import itertools import os import array import re import socket import threading import unittest from unittest import mock TestCase = unittest.TestCase from test import support from test.support import os_helper from test.support import socket_helper support.requires_working_socket(module=True) here = os.path.dirname(__file__) # Self-signed cert file for 'localhost' CERT_localhost = os.path.join(here, 'certdata', 'keycert.pem') # Self-signed cert file for 'fakehostname' CERT_fakehostname = os.path.join(here, 'certdata', 'keycert2.pem') # Self-signed cert file for self-signed.pythontest.net CERT_selfsigned_pythontestdotnet = os.path.join( here, 'certdata', 'selfsigned_pythontestdotnet.pem', ) # constants for testing chunked encoding chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd! \r\n' '8\r\n' 'and now \r\n' '22\r\n' 'for something completely different\r\n' ) chunked_expected = b'hello world! 
and now for something completely different' chunk_extension = ";foo=bar" last_chunk = "0\r\n" last_chunk_extended = "0" + chunk_extension + "\r\n" trailers = "X-Dummy: foo\r\nX-Dumm2: bar\r\n" chunked_end = "\r\n" HOST = socket_helper.HOST class FakeSocket: def __init__(self, text, fileclass=io.BytesIO, host=None, port=None): if isinstance(text, str): text = text.encode("ascii") self.text = text self.fileclass = fileclass self.data = b'' self.sendall_calls = 0 self.file_closed = False self.host = host self.port = port def sendall(self, data): self.sendall_calls += 1 self.data += data def makefile(self, mode, bufsize=None): if mode != 'r' and mode != 'rb': raise client.UnimplementedFileMode() # keep the file around so we can check how much was read from it self.file = self.fileclass(self.text) self.file.close = self.file_close #nerf close () return self.file def file_close(self): self.file_closed = True def close(self): pass def setsockopt(self, level, optname, value): pass class EPipeSocket(FakeSocket): def __init__(self, text, pipe_trigger): # When sendall() is called with pipe_trigger, raise EPIPE. FakeSocket.__init__(self, text) self.pipe_trigger = pipe_trigger def sendall(self, data): if self.pipe_trigger in data: raise OSError(errno.EPIPE, "gotcha") self.data += data def close(self): pass class NoEOFBytesIO(io.BytesIO): """Like BytesIO, but raises AssertionError on EOF. This is used below to test that http.client doesn't try to read more from the underlying file than it should. """ def read(self, n=-1): data = io.BytesIO.read(self, n) if data == b'': raise AssertionError('caller tried to read past EOF') return data def readline(self, length=None): data = io.BytesIO.readline(self, length) if data == b'': raise AssertionError('caller tried to read past EOF') return data class FakeSocketHTTPConnection(client.HTTPConnection): """HTTPConnection subclass using FakeSocket; counts connect() calls""" def __init__(self, *args): self.connections = 0 super().__init__('example.com') self.fake_socket_args = args self._create_connection = self.create_connection def connect(self): """Count the number of times connect() is invoked""" self.connections += 1 return super().connect() def create_connection(self, *pos, **kw): return FakeSocket(*self.fake_socket_args) class HeaderTests(TestCase): def test_auto_headers(self): # Some headers are added automatically, but should not be added by # .request() if they are explicitly set. 
class HeaderCountingBuffer(list): def __init__(self): self.count = {} def append(self, item): kv = item.split(b':') if len(kv) > 1: # item is a 'Key: Value' header string lcKey = kv[0].decode('ascii').lower() self.count.setdefault(lcKey, 0) self.count[lcKey] += 1 list.append(self, item) for explicit_header in True, False: for header in 'Content-length', 'Host', 'Accept-encoding': conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('blahblahblah') conn._buffer = HeaderCountingBuffer() body = 'spamspamspam' headers = {} if explicit_header: headers[header] = str(len(body)) conn.request('POST', '/', body, headers) self.assertEqual(conn._buffer.count[header.lower()], 1) def test_content_length_0(self): class ContentLengthChecker(list): def __init__(self): list.__init__(self) self.content_length = None def append(self, item): kv = item.split(b':', 1) if len(kv) > 1 and kv[0].lower() == b'content-length': self.content_length = kv[1].strip() list.append(self, item) # Here, we're testing that methods expecting a body get a # content-length set to zero if the body is empty (either None or '') bodies = (None, '') methods_with_body = ('PUT', 'POST', 'PATCH') for method, body in itertools.product(methods_with_body, bodies): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', body) self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # For these methods, we make sure that content-length is not set when # the body is None because it might cause unexpected behaviour on the # server. methods_without_body = ( 'GET', 'CONNECT', 'DELETE', 'HEAD', 'OPTIONS', 'TRACE', ) for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', None) self.assertEqual( conn._buffer.content_length, None, 'Header Content-Length set for empty body on {}'.format(method) ) # If the body is set to '', that's considered to be "present but # empty" rather than "missing", so content length would be set, even # for methods that don't expect a body. for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', '') self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # If the body is set, make sure Content-Length is set. 
for method in itertools.chain(methods_without_body, methods_with_body): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', ' ') self.assertEqual( conn._buffer.content_length, b'1', 'Header Content-Length incorrect on {}'.format(method) ) def test_putheader(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.putrequest('GET','/') conn.putheader('Content-length', 42) self.assertIn(b'Content-length: 42', conn._buffer) conn.putheader('Foo', ' bar ') self.assertIn(b'Foo: bar ', conn._buffer) conn.putheader('Bar', '\tbaz\t') self.assertIn(b'Bar: \tbaz\t', conn._buffer) conn.putheader('Authorization', 'Bearer mytoken') self.assertIn(b'Authorization: Bearer mytoken', conn._buffer) conn.putheader('IterHeader', 'IterA', 'IterB') self.assertIn(b'IterHeader: IterA\r\n\tIterB', conn._buffer) conn.putheader('LatinHeader', b'\xFF') self.assertIn(b'LatinHeader: \xFF', conn._buffer) conn.putheader('Utf8Header', b'\xc3\x80') self.assertIn(b'Utf8Header: \xc3\x80', conn._buffer) conn.putheader('C1-Control', b'next\x85line') self.assertIn(b'C1-Control: next\x85line', conn._buffer) conn.putheader('Embedded-Fold-Space', 'is\r\n allowed') self.assertIn(b'Embedded-Fold-Space: is\r\n allowed', conn._buffer) conn.putheader('Embedded-Fold-Tab', 'is\r\n\tallowed') self.assertIn(b'Embedded-Fold-Tab: is\r\n\tallowed', conn._buffer) conn.putheader('Key Space', 'value') self.assertIn(b'Key Space: value', conn._buffer) conn.putheader('KeySpace ', 'value') self.assertIn(b'KeySpace : value', conn._buffer) conn.putheader(b'Nonbreak\xa0Space', 'value') self.assertIn(b'Nonbreak\xa0Space: value', conn._buffer) conn.putheader(b'\xa0NonbreakSpace', 'value') self.assertIn(b'\xa0NonbreakSpace: value', conn._buffer) def test_ipv6host_header(self): # Default host header on IPv6 transaction should be wrapped by [] if # it is an IPv6 address expected = b'GET /foo HTTP/1.1\r\nHost: [2001::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001::]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [2001:102A::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001:102A::]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [fe80::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[fe80::%2]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) def test_malformed_headers_coped_with(self): # Issue 19996 body = "HTTP/1.1 200 OK\r\nFirst: val\r\n: nval\r\nSecond: val\r\n\r\n" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('First'), 'val') self.assertEqual(resp.getheader('Second'), 'val') def test_parse_all_octets(self): # Ensure no valid header field octet breaks the parser body = ( b'HTTP/1.1 200 OK\r\n' b"!#$%&'*+-.^_`|~: value\r\n" # Special token characters b'VCHAR: ' + bytes(range(0x21, 0x7E + 1)) + b'\r\n' b'obs-text: ' + bytes(range(0x80, 0xFF + 1)) + b'\r\n' 
b'obs-fold: text\r\n' b' folded with space\r\n' b'\tfolded with tab\r\n' b'Content-Length: 0\r\n' b'\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('Content-Length'), '0') self.assertEqual(resp.msg['Content-Length'], '0') self.assertEqual(resp.getheader("!#$%&'*+-.^_`|~"), 'value') self.assertEqual(resp.msg["!#$%&'*+-.^_`|~"], 'value') vchar = ''.join(map(chr, range(0x21, 0x7E + 1))) self.assertEqual(resp.getheader('VCHAR'), vchar) self.assertEqual(resp.msg['VCHAR'], vchar) self.assertIsNotNone(resp.getheader('obs-text')) self.assertIn('obs-text', resp.msg) for folded in (resp.getheader('obs-fold'), resp.msg['obs-fold']): self.assertTrue(folded.startswith('text')) self.assertIn(' folded with space', folded) self.assertTrue(folded.endswith('folded with tab')) def test_invalid_headers(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/') # http://tools.ietf.org/html/rfc7230#section-3.2.4, whitespace is no # longer allowed in header names cases = ( (b'Invalid\r\nName', b'ValidValue'), (b'Invalid\rName', b'ValidValue'), (b'Invalid\nName', b'ValidValue'), (b'\r\nInvalidName', b'ValidValue'), (b'\rInvalidName', b'ValidValue'), (b'\nInvalidName', b'ValidValue'), (b' InvalidName', b'ValidValue'), (b'\tInvalidName', b'ValidValue'), (b'Invalid:Name', b'ValidValue'), (b':InvalidName', b'ValidValue'), (b'ValidName', b'Invalid\r\nValue'), (b'ValidName', b'Invalid\rValue'), (b'ValidName', b'Invalid\nValue'), (b'ValidName', b'InvalidValue\r\n'), (b'ValidName', b'InvalidValue\r'), (b'ValidName', b'InvalidValue\n'), ) for name, value in cases: with self.subTest((name, value)): with self.assertRaisesRegex(ValueError, 'Invalid header'): conn.putheader(name, value) def test_headers_debuglevel(self): body = ( b'HTTP/1.1 200 OK\r\n' b'First: val\r\n' b'Second: val1\r\n' b'Second: val2\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock, debuglevel=1) with support.captured_stdout() as output: resp.begin() lines = output.getvalue().splitlines() self.assertEqual(lines[0], "reply: 'HTTP/1.1 200 OK\\r\\n'") self.assertEqual(lines[1], "header: First: val") self.assertEqual(lines[2], "header: Second: val1") self.assertEqual(lines[3], "header: Second: val2") class HttpMethodTests(TestCase): def test_invalid_method_names(self): methods = ( 'GET\r', 'POST\n', 'PUT\n\r', 'POST\nValue', 'POST\nHOST:abc', 'GET\nrHost:abc\n', 'POST\rRemainder:\r', 'GET\rHOST:\n', '\nPUT' ) for method in methods: with self.assertRaisesRegex( ValueError, "method can't contain control characters"): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.request(method=method, url="/") class TransferEncodingTest(TestCase): expected_body = b"It's just a flesh wound" def test_endheaders_chunked(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.putrequest('POST', '/') conn.endheaders(self._make_body(), encode_chunked=True) _, _, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) def test_explicit_headers(self): # explicit chunked conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') # this shouldn't actually be automatically chunk-encoded because the # calling code has explicitly stated that it's taking care of it conn.request( 'POST', '/', self._make_body(), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', 
[k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # explicit chunked, string body conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self.expected_body.decode('latin-1'), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # User-specified TE, but request() does the chunk encoding conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', headers={'Transfer-Encoding': 'gzip, chunked'}, encode_chunked=True, body=self._make_body()) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(headers['Transfer-Encoding'], 'gzip, chunked') self.assertEqual(self._parse_chunked(body), self.expected_body) def test_request(self): for empty_lines in (False, True,): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self._make_body(empty_lines=empty_lines)) _, headers, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) self.assertEqual(headers['Transfer-Encoding'], 'chunked') # Content-Length and Transfer-Encoding SHOULD not be sent in the # same request self.assertNotIn('content-length', [k.lower() for k in headers]) def test_empty_body(self): # Zero-length iterable should be treated like any other iterable conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', ()) _, headers, body = self._parse_request(conn.sock.data) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(body, b"0\r\n\r\n") def _make_body(self, empty_lines=False): lines = self.expected_body.split(b' ') for idx, line in enumerate(lines): # for testing handling empty lines if empty_lines and idx % 2: yield b'' if idx < len(lines) - 1: yield line + b' ' else: yield line def _parse_request(self, data): lines = data.split(b'\r\n') request = lines[0] headers = {} n = 1 while n < len(lines) and len(lines[n]) > 0: key, val = lines[n].split(b':') key = key.decode('latin-1').strip() headers[key] = val.decode('latin-1').strip() n += 1 return request, headers, b'\r\n'.join(lines[n + 1:]) def _parse_chunked(self, data): body = [] trailers = {} n = 0 lines = data.split(b'\r\n') # parse body while True: size, chunk = lines[n:n+2] size = int(size, 16) if size == 0: n += 1 break self.assertEqual(size, len(chunk)) body.append(chunk) n += 2 # we /should/ hit the end chunk, but check against the size of # lines so we're not stuck in an infinite loop should we get # malformed data if n > len(lines): break return b''.join(body) class BasicTest(TestCase): def test_dir_with_added_behavior_on_status(self): # see issue40084 self.assertTrue({'description', 'name', 'phrase', 'value'} <= set(dir(HTTPStatus(404)))) def test_simple_httpstatus(self): class CheckedHTTPStatus(enum.IntEnum): """HTTP status codes and reason phrases Status codes from the following RFCs are all observed: * RFC 7231: Hypertext Transfer Protocol (HTTP/1.1), obsoletes 2616 * RFC 6585: Additional HTTP Status Codes * RFC 3229: Delta encoding in HTTP * RFC 4918: HTTP Extensions for WebDAV, 
obsoletes 2518 * RFC 5842: Binding Extensions to WebDAV * RFC 7238: Permanent Redirect * RFC 2295: Transparent Content Negotiation in HTTP * RFC 2774: An HTTP Extension Framework * RFC 7725: An HTTP Status Code to Report Legal Obstacles * RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2) * RFC 2324: Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0) * RFC 8297: An HTTP Status Code for Indicating Hints * RFC 8470: Using Early Data in HTTP """ def __new__(cls, value, phrase, description=''): obj = int.__new__(cls, value) obj._value_ = value obj.phrase = phrase obj.description = description return obj @property def is_informational(self): return 100 <= self <= 199 @property def is_success(self): return 200 <= self <= 299 @property def is_redirection(self): return 300 <= self <= 399 @property def is_client_error(self): return 400 <= self <= 499 @property def is_server_error(self): return 500 <= self <= 599 # informational CONTINUE = 100, 'Continue', 'Request received, please continue' SWITCHING_PROTOCOLS = (101, 'Switching Protocols', 'Switching to new protocol; obey Upgrade header') PROCESSING = 102, 'Processing' EARLY_HINTS = 103, 'Early Hints' # success OK = 200, 'OK', 'Request fulfilled, document follows' CREATED = 201, 'Created', 'Document created, URL follows' ACCEPTED = (202, 'Accepted', 'Request accepted, processing continues off-line') NON_AUTHORITATIVE_INFORMATION = (203, 'Non-Authoritative Information', 'Request fulfilled from cache') NO_CONTENT = 204, 'No Content', 'Request fulfilled, nothing follows' RESET_CONTENT = 205, 'Reset Content', 'Clear input form for further input' PARTIAL_CONTENT = 206, 'Partial Content', 'Partial content follows' MULTI_STATUS = 207, 'Multi-Status' ALREADY_REPORTED = 208, 'Already Reported' IM_USED = 226, 'IM Used' # redirection MULTIPLE_CHOICES = (300, 'Multiple Choices', 'Object has several resources -- see URI list') MOVED_PERMANENTLY = (301, 'Moved Permanently', 'Object moved permanently -- see URI list') FOUND = 302, 'Found', 'Object moved temporarily -- see URI list' SEE_OTHER = 303, 'See Other', 'Object moved -- see Method and URL list' NOT_MODIFIED = (304, 'Not Modified', 'Document has not changed since given time') USE_PROXY = (305, 'Use Proxy', 'You must use proxy specified in Location to access this resource') TEMPORARY_REDIRECT = (307, 'Temporary Redirect', 'Object moved temporarily -- see URI list') PERMANENT_REDIRECT = (308, 'Permanent Redirect', 'Object moved permanently -- see URI list') # client error BAD_REQUEST = (400, 'Bad Request', 'Bad request syntax or unsupported method') UNAUTHORIZED = (401, 'Unauthorized', 'No permission -- see authorization schemes') PAYMENT_REQUIRED = (402, 'Payment Required', 'No payment -- see charging schemes') FORBIDDEN = (403, 'Forbidden', 'Request forbidden -- authorization will not help') NOT_FOUND = (404, 'Not Found', 'Nothing matches the given URI') METHOD_NOT_ALLOWED = (405, 'Method Not Allowed', 'Specified method is invalid for this resource') NOT_ACCEPTABLE = (406, 'Not Acceptable', 'URI not available in preferred format') PROXY_AUTHENTICATION_REQUIRED = (407, 'Proxy Authentication Required', 'You must authenticate with this proxy before proceeding') REQUEST_TIMEOUT = (408, 'Request Timeout', 'Request timed out; try again later') CONFLICT = 409, 'Conflict', 'Request conflict' GONE = (410, 'Gone', 'URI no longer exists and has been permanently removed') LENGTH_REQUIRED = (411, 'Length Required', 'Client must specify Content-Length') PRECONDITION_FAILED = (412, 'Precondition Failed', 
'Precondition in headers is false') CONTENT_TOO_LARGE = (413, 'Content Too Large', 'Content is too large') REQUEST_ENTITY_TOO_LARGE = CONTENT_TOO_LARGE URI_TOO_LONG = (414, 'URI Too Long', 'URI is too long') REQUEST_URI_TOO_LONG = URI_TOO_LONG UNSUPPORTED_MEDIA_TYPE = (415, 'Unsupported Media Type', 'Entity body in unsupported format') RANGE_NOT_SATISFIABLE = (416, 'Range Not Satisfiable', 'Cannot satisfy request range') REQUESTED_RANGE_NOT_SATISFIABLE = RANGE_NOT_SATISFIABLE EXPECTATION_FAILED = (417, 'Expectation Failed', 'Expect condition could not be satisfied') IM_A_TEAPOT = (418, 'I\'m a Teapot', 'Server refuses to brew coffee because it is a teapot.') MISDIRECTED_REQUEST = (421, 'Misdirected Request', 'Server is not able to produce a response') UNPROCESSABLE_CONTENT = 422, 'Unprocessable Content' UNPROCESSABLE_ENTITY = UNPROCESSABLE_CONTENT LOCKED = 423, 'Locked' FAILED_DEPENDENCY = 424, 'Failed Dependency' TOO_EARLY = 425, 'Too Early' UPGRADE_REQUIRED = 426, 'Upgrade Required' PRECONDITION_REQUIRED = (428, 'Precondition Required', 'The origin server requires the request to be conditional') TOO_MANY_REQUESTS = (429, 'Too Many Requests', 'The user has sent too many requests in ' 'a given amount of time ("rate limiting")') REQUEST_HEADER_FIELDS_TOO_LARGE = (431, 'Request Header Fields Too Large', 'The server is unwilling to process the request because its header ' 'fields are too large') UNAVAILABLE_FOR_LEGAL_REASONS = (451, 'Unavailable For Legal Reasons', 'The server is denying access to the ' 'resource as a consequence of a legal demand') # server errors INTERNAL_SERVER_ERROR = (500, 'Internal Server Error', 'Server got itself in trouble') NOT_IMPLEMENTED = (501, 'Not Implemented', 'Server does not support this operation') BAD_GATEWAY = (502, 'Bad Gateway', 'Invalid responses from another server/proxy') SERVICE_UNAVAILABLE = (503, 'Service Unavailable', 'The server cannot process the request due to a high load') GATEWAY_TIMEOUT = (504, 'Gateway Timeout', 'The gateway server did not receive a timely response') HTTP_VERSION_NOT_SUPPORTED = (505, 'HTTP Version Not Supported', 'Cannot fulfill request') VARIANT_ALSO_NEGOTIATES = 506, 'Variant Also Negotiates' INSUFFICIENT_STORAGE = 507, 'Insufficient Storage' LOOP_DETECTED = 508, 'Loop Detected' NOT_EXTENDED = 510, 'Not Extended' NETWORK_AUTHENTICATION_REQUIRED = (511, 'Network Authentication Required', 'The client needs to authenticate to gain network access') enum._test_simple_enum(CheckedHTTPStatus, HTTPStatus) def test_httpstatus_range(self): """Checks that the statuses are in the 100-599 range""" for member in HTTPStatus.__members__.values(): self.assertGreaterEqual(member, 100) self.assertLessEqual(member, 599) def test_httpstatus_category(self): """Checks that the statuses belong to the standard categories""" categories = ( ((100, 199), "is_informational"), ((200, 299), "is_success"), ((300, 399), "is_redirection"), ((400, 499), "is_client_error"), ((500, 599), "is_server_error"), ) for member in HTTPStatus.__members__.values(): for (lower, upper), category in categories: category_indicator = getattr(member, category) if lower <= member <= upper: self.assertTrue(category_indicator) else: self.assertFalse(category_indicator) def test_status_lines(self): # Test HTTP status lines body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(0), b'') # Issue #20007 self.assertFalse(resp.isclosed()) self.assertFalse(resp.closed) self.assertEqual(resp.read(), 
b"Text") self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) self.assertRaises(client.BadStatusLine, resp.begin) def test_bad_status_repr(self): exc = client.BadStatusLine('') self.assertEqual(repr(exc), '''BadStatusLine("''")''') def test_partial_reads(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_mixed_reads(self): # readline() should update the remaining length, so that read() knows # how much data is left and does not raise IncompleteRead body = "HTTP/1.1 200 Ok\r\nContent-Length: 13\r\n\r\nText\r\nAnother" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.readline(), b'Text\r\n') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(), b'Another') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_past_end(self): # if we have Content-Length, clip reads to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(10), b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_past_end(self): # if we have Content-Length, clip readintos to the end body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(10) n = resp.readinto(b) self.assertEqual(n, 4) self.assertEqual(bytes(b)[:4], b'Text') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = 
client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) def test_partial_readintos_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:80", "www.python.org", 80), ("www.python.org:", "www.python.org", 80), ("www.python.org", "www.python.org", 80), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 80), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 80)): c = client.HTTPConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_response_headers(self): # test response with multiple message headers with the same field name. text = ('HTTP/1.1 200 OK\r\n' 'Set-Cookie: Customer="WILE_E_COYOTE"; ' 'Version="1"; Path="/acme"\r\n' 'Set-Cookie: Part_Number="Rocket_Launcher_0001"; Version="1";' ' Path="/acme"\r\n' '\r\n' 'No body\r\n') hdr = ('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"' ', ' 'Part_Number="Rocket_Launcher_0001"; Version="1"; Path="/acme"') s = FakeSocket(text) r = client.HTTPResponse(s) r.begin() cookies = r.getheader("Set-Cookie") self.assertEqual(cookies, hdr) def test_read_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() if resp.read(): self.fail("Did not expect response from HEAD request") def test_readinto_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) 
sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) if resp.readinto(b) != 0: self.fail("Did not expect response from HEAD request") self.assertEqual(bytes(b), b'\x00'*5) def test_too_many_headers(self): headers = '\r\n'.join('Header%d: foo' % i for i in range(client._MAXHEADERS + 1)) + '\r\n' text = ('HTTP/1.1 200 OK\r\n' + headers) s = FakeSocket(text) r = client.HTTPResponse(s) self.assertRaisesRegex(client.HTTPException, r"got more than \d+ headers", r.begin) def test_send_file(self): expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' b'Accept-Encoding: identity\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n') with open(__file__, 'rb') as body: conn = client.HTTPConnection('example.com') sock = FakeSocket(body) conn.sock = sock conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected), '%r != %r' % (sock.data[:len(expected)], expected)) def test_send(self): expected = b'this is a test this is only a test' conn = client.HTTPConnection('example.com') sock = FakeSocket(None) conn.sock = sock conn.send(expected) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(array.array('b', expected)) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(io.BytesIO(expected)) self.assertEqual(expected, sock.data) def test_send_updating_file(self): def data(): yield 'data' yield None yield 'data_two' class UpdatingFile(io.TextIOBase): mode = 'r' d = data() def read(self, blocksize=-1): return next(self.d) expected = b'data' conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.send(UpdatingFile()) self.assertEqual(sock.data, expected) def test_send_iter(self): expected = b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' \ b'Accept-Encoding: identity\r\nContent-Length: 11\r\n' \ b'\r\nonetwothree' def body(): yield b"one" yield b"two" yield b"three" conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.request('GET', '/foo', body(), {'Content-Length': '11'}) self.assertEqual(sock.data, expected) def test_blocksize_request(self): """Check that request() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.request("PUT", "/", io.BytesIO(expected), {"Content-Length": "9"}) self.assertEqual(sock.sendall_calls, 3) body = sock.data.split(b"\r\n\r\n", 1)[1] self.assertEqual(body, expected) def test_blocksize_send(self): """Check that send() respects the configured block size.""" blocksize = 8 # For easy debugging. 
conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.send(io.BytesIO(expected)) self.assertEqual(sock.sendall_calls, 2) self.assertEqual(sock.data, expected) def test_send_type_error(self): # See: Issue #12676 conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') with self.assertRaises(TypeError): conn.request('POST', 'test', conn) def test_chunked(self): expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(n) + resp.read(n) + resp.read(), expected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_readinto_chunked(self): expected = chunked_expected nexpected = len(expected) b = bytearray(128) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() n = resp.readinto(b) self.assertEqual(b[:nexpected], expected) self.assertEqual(n, nexpected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() m = memoryview(b) i = resp.readinto(m[0:n]) i += resp.readinto(m[i:n + i]) i += resp.readinto(m[i:]) self.assertEqual(b[:nexpected], expected) self.assertEqual(i, nexpected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: n = resp.readinto(b) except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() self.assertEqual(resp.read(), b'') self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) n = resp.readinto(b) self.assertEqual(n, 0) self.assertEqual(bytes(b), b'\x00'*5) self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_negative_content_length(self): sock 
= FakeSocket( 'HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), b'Hello\r\n') self.assertTrue(resp.isclosed()) def test_incomplete_read(self): sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, b'Hello\r\n') self.assertEqual(repr(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertEqual(str(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertTrue(resp.isclosed()) else: self.fail('IncompleteRead expected') def test_epipe(self): sock = EPipeSocket( "HTTP/1.0 401 Authorization Required\r\n" "Content-type: text/html\r\n" "WWW-Authenticate: Basic realm=\"example\"\r\n", b"Content-Length") conn = client.HTTPConnection("example.com") conn.sock = sock self.assertRaises(OSError, lambda: conn.request("PUT", "/url", "body")) resp = conn.getresponse() self.assertEqual(401, resp.status) self.assertEqual("Basic realm=\"example\"", resp.getheader("www-authenticate")) # Test lines overflowing the max line size (_MAXLINE in http.client) def test_overflowing_status_line(self): body = "HTTP/1.1 200 Ok" + "k" * 65536 + "\r\n" resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises((client.LineTooLong, client.BadStatusLine), resp.begin) def test_overflowing_header_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'X-Foo: bar' + 'r' * 65536 + '\r\n\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises(client.LineTooLong, resp.begin) def test_overflowing_header_limit_after_100(self): body = ( 'HTTP/1.1 100 OK\r\n' 'r\n' * 32768 ) resp = client.HTTPResponse(FakeSocket(body)) with self.assertRaises(client.HTTPException) as cm: resp.begin() # We must assert more because other reasonable errors that we # do not want can also be HTTPException derived. 
self.assertIn('got more than ', str(cm.exception)) self.assertIn('headers', str(cm.exception)) def test_overflowing_chunked_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' + '0' * 65536 + 'a\r\n' 'hello world\r\n' '0\r\n' '\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) resp.begin() self.assertRaises(client.LineTooLong, resp.read) def test_early_eof(self): # Test httpresponse with no \r\n termination, body = "HTTP/1.1 200 Ok" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_error_leak(self): # Test that the socket is not leaked if getresponse() fails conn = client.HTTPConnection('example.com') response = None class Response(client.HTTPResponse): def __init__(self, *pos, **kw): nonlocal response response = self # Avoid garbage collector closing the socket client.HTTPResponse.__init__(self, *pos, **kw) conn.response_class = Response conn.sock = FakeSocket('Invalid status line') conn.request('GET', '/') self.assertRaises(client.BadStatusLine, conn.getresponse) self.assertTrue(response.closed) self.assertTrue(conn.sock.file_closed) def test_chunked_extension(self): extra = '3;foo=bar\r\n' + 'abc\r\n' expected = chunked_expected + b'abc' sock = FakeSocket(chunked_start + extra + last_chunk_extended + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_missing_end(self): """some servers may serve up a short chunked encoding stream""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk) #no terminating crlf resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_trailers(self): """See that trailers are read and ignored""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # we should have reached the end of the file self.assertEqual(sock.file.read(), b"") #we read to the end resp.close() def test_chunked_sync(self): """Check that we don't read past the end of the chunked-encoding stream""" expected = chunked_expected extradata = "extradata" sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata.encode("ascii")) #we read to the end resp.close() def test_content_length_sync(self): """Check that we don't read past the end of the Content-Length stream""" extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readlines_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readlines(2000), [expected]) # the file should now have 
our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(2000), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readline_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readline(10), expected) self.assertEqual(resp.readline(10), b"") # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 30\r\n\r\n' + expected*3 + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(20), expected*2) self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_response_fileno(self): # Make sure fd returned by fileno is valid. serv = socket.create_server((HOST, 0)) self.addCleanup(serv.close) result = None def run_server(): [conn, address] = serv.accept() with conn, conn.makefile("rb") as reader: # Read the request header until a blank line while True: line = reader.readline() if not line.rstrip(b"\r\n"): break conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n") nonlocal result result = reader.read() thread = threading.Thread(target=run_server) thread.start() self.addCleanup(thread.join, float(1)) conn = client.HTTPConnection(*serv.getsockname()) conn.request("CONNECT", "dummy:1234") response = conn.getresponse() try: self.assertEqual(response.status, client.OK) s = socket.socket(fileno=response.fileno()) try: s.sendall(b"proxied data\n") finally: s.detach() finally: response.close() conn.close() thread.join() self.assertEqual(result, b"proxied data\n") def test_putrequest_override_domain_validation(self): """ It should be possible to override the default validation behavior in putrequest (bpo-38216). """ class UnsafeHTTPConnection(client.HTTPConnection): def _validate_path(self, url): pass conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/\x00') def test_putrequest_override_host_validation(self): class UnsafeHTTPConnection(client.HTTPConnection): def _validate_host(self, url): pass conn = UnsafeHTTPConnection('example.com\r\n') conn.sock = FakeSocket('') # set skip_host so a ValueError is not raised upon adding the # invalid URL as the value of the "Host:" header conn.putrequest('GET', '/', skip_host=1) def test_putrequest_override_encoding(self): """ It should be possible to override the default encoding to transmit bytes in another encoding even if invalid (bpo-36274). 
""" class UnsafeHTTPConnection(client.HTTPConnection): def _encode_request(self, str_url): return str_url.encode('utf-8') conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/☃') class ExtendedReadTest(TestCase): """ Test peek(), read1(), readline() """ lines = ( 'HTTP/1.1 200 OK\r\n' '\r\n' 'hello world!\n' 'and now \n' 'for something completely different\n' 'foo' ) lines_expected = lines[lines.find('hello'):].encode("ascii") lines_chunked = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) def setUp(self): sock = FakeSocket(self.lines) resp = client.HTTPResponse(sock, method="GET") resp.begin() resp.fp = io.BufferedReader(resp.fp) self.resp = resp def test_peek(self): resp = self.resp # patch up the buffered peek so that it returns not too much stuff oldpeek = resp.fp.peek def mypeek(n=-1): p = oldpeek(n) if n >= 0: return p[:n] return p[:10] resp.fp.peek = mypeek all = [] while True: # try a short peek p = resp.peek(3) if p: self.assertGreater(len(p), 0) # then unbounded peek p2 = resp.peek() self.assertGreaterEqual(len(p2), len(p)) self.assertTrue(p2.startswith(p)) next = resp.read(len(p2)) self.assertEqual(next, p2) else: next = resp.read() self.assertFalse(next) all.append(next) if not next: break self.assertEqual(b"".join(all), self.lines_expected) def test_readline(self): resp = self.resp self._verify_readline(self.resp.readline, self.lines_expected) def test_readline_without_limit(self): self._verify_readline(self.resp.readline, self.lines_expected, limit=-1) def _verify_readline(self, readline, expected, limit=5): all = [] while True: # short readlines line = readline(limit) if line and line != b"foo": if len(line) < 5: self.assertTrue(line.endswith(b"\n")) all.append(line) if not line: break self.assertEqual(b"".join(all), expected) self.assertTrue(self.resp.isclosed()) def test_read1(self): resp = self.resp def r(): res = resp.read1(4) self.assertLessEqual(len(res), 4) return res readliner = Readliner(r) self._verify_readline(readliner.readline, self.lines_expected) def test_read1_unbounded(self): resp = self.resp all = [] while True: data = resp.read1() if not data: break all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_bounded(self): resp = self.resp all = [] while True: data = resp.read1(10) if not data: break self.assertLessEqual(len(data), 10) all.append(data) self.assertEqual(b"".join(all), self.lines_expected) self.assertTrue(resp.isclosed()) def test_read1_0(self): self.assertEqual(self.resp.read1(0), b"") self.assertFalse(self.resp.isclosed()) def test_peek_0(self): p = self.resp.peek(0) self.assertLessEqual(0, len(p)) class ExtendedReadTestContentLengthKnown(ExtendedReadTest): _header, _body = ExtendedReadTest.lines.split('\r\n\r\n', 1) lines = _header + f'\r\nContent-Length: {len(_body)}\r\n\r\n' + _body class ExtendedReadTestChunked(ExtendedReadTest): """ Test peek(), read1(), readline() in chunked mode """ lines = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) class Readliner: """ a simple readline class that uses an arbitrary read function and 
buffering """ def __init__(self, readfunc): self.readfunc = readfunc self.remainder = b"" def readline(self, limit): data = [] datalen = 0 read = self.remainder try: while True: idx = read.find(b'\n') if idx != -1: break if datalen + len(read) >= limit: idx = limit - datalen - 1 # read more data data.append(read) read = self.readfunc() if not read: idx = 0 #eof condition break idx += 1 data.append(read[:idx]) self.remainder = read[idx:] return b"".join(data) except: self.remainder = b"".join(data) raise class OfflineTest(TestCase): def test_all(self): # Documented objects defined in the module should be in __all__ expected = {"responses"} # Allowlist documented dict() object # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity denylist = {"HTTPMessage", "parse_headers"} for name in dir(client): if name.startswith("_") or name in denylist: continue module_object = getattr(client, name) if getattr(module_object, "__module__", None) == "http.client": expected.add(name) self.assertCountEqual(client.__all__, expected) def test_responses(self): self.assertEqual(client.responses[client.NOT_FOUND], "Not Found") def test_client_constants(self): # Make sure we don't break backward compatibility with 3.4 expected = [ 'CONTINUE', 'SWITCHING_PROTOCOLS', 'PROCESSING', 'OK', 'CREATED', 'ACCEPTED', 'NON_AUTHORITATIVE_INFORMATION', 'NO_CONTENT', 'RESET_CONTENT', 'PARTIAL_CONTENT', 'MULTI_STATUS', 'IM_USED', 'MULTIPLE_CHOICES', 'MOVED_PERMANENTLY', 'FOUND', 'SEE_OTHER', 'NOT_MODIFIED', 'USE_PROXY', 'TEMPORARY_REDIRECT', 'BAD_REQUEST', 'UNAUTHORIZED', 'PAYMENT_REQUIRED', 'FORBIDDEN', 'NOT_FOUND', 'METHOD_NOT_ALLOWED', 'NOT_ACCEPTABLE', 'PROXY_AUTHENTICATION_REQUIRED', 'REQUEST_TIMEOUT', 'CONFLICT', 'GONE', 'LENGTH_REQUIRED', 'PRECONDITION_FAILED', 'CONTENT_TOO_LARGE', 'REQUEST_ENTITY_TOO_LARGE', 'URI_TOO_LONG', 'REQUEST_URI_TOO_LONG', 'UNSUPPORTED_MEDIA_TYPE', 'RANGE_NOT_SATISFIABLE', 'REQUESTED_RANGE_NOT_SATISFIABLE', 'EXPECTATION_FAILED', 'IM_A_TEAPOT', 'MISDIRECTED_REQUEST', 'UNPROCESSABLE_CONTENT', 'UNPROCESSABLE_ENTITY', 'LOCKED', 'FAILED_DEPENDENCY', 'UPGRADE_REQUIRED', 'PRECONDITION_REQUIRED', 'TOO_MANY_REQUESTS', 'REQUEST_HEADER_FIELDS_TOO_LARGE', 'UNAVAILABLE_FOR_LEGAL_REASONS', 'INTERNAL_SERVER_ERROR', 'NOT_IMPLEMENTED', 'BAD_GATEWAY', 'SERVICE_UNAVAILABLE', 'GATEWAY_TIMEOUT', 'HTTP_VERSION_NOT_SUPPORTED', 'INSUFFICIENT_STORAGE', 'NOT_EXTENDED', 'NETWORK_AUTHENTICATION_REQUIRED', 'EARLY_HINTS', 'TOO_EARLY' ] for const in expected: with self.subTest(constant=const): self.assertTrue(hasattr(client, const)) class SourceAddressTest(TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.source_port = socket_helper.find_unused_port() self.serv.listen() self.conn = None def tearDown(self): if self.conn: self.conn.close() self.conn = None self.serv.close() self.serv = None def testHTTPConnectionSourceAddress(self): self.conn = client.HTTPConnection(HOST, self.port, source_address=('', self.source_port)) self.conn.connect() self.assertEqual(self.conn.sock.getsockname()[1], self.source_port) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not defined') def testHTTPSConnectionSourceAddress(self): self.conn = client.HTTPSConnection(HOST, self.port, source_address=('', self.source_port)) # We don't test anything here other than the constructor not barfing as # this code doesn't deal with setting up an active running SSL server # for 
an ssl_wrapped connect() to actually return from. class TimeoutTest(TestCase): PORT = None def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) TimeoutTest.PORT = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None def testTimeoutAttribute(self): # This will prove that the timeout gets through HTTPConnection # and into the socket. # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() # no timeout -- do not use global socket default self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=None) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), None) httpConn.close() # a value httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=30) httpConn.connect() self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() class PersistenceTest(TestCase): def test_reuse_reconnect(self): # Should reuse or reconnect depending on header from server tests = ( ('1.0', '', False), ('1.0', 'Connection: keep-alive\r\n', True), ('1.1', '', True), ('1.1', 'Connection: close\r\n', False), ('1.0', 'Connection: keep-ALIVE\r\n', True), ('1.1', 'Connection: cloSE\r\n', False), ) for version, header, reuse in tests: with self.subTest(version=version, header=header): msg = ( 'HTTP/{} 200 OK\r\n' '{}' 'Content-Length: 12\r\n' '\r\n' 'Dummy body\r\n' ).format(version, header) conn = FakeSocketHTTPConnection(msg) self.assertIsNone(conn.sock) conn.request('GET', '/open-connection') with conn.getresponse() as response: self.assertEqual(conn.sock is None, not reuse) response.read() self.assertEqual(conn.sock is None, not reuse) self.assertEqual(conn.connections, 1) conn.request('GET', '/subsequent-request') self.assertEqual(conn.connections, 1 if reuse else 2) def test_disconnected(self): def make_reset_reader(text): """Return BufferedReader that raises ECONNRESET at EOF""" stream = io.BytesIO(text) def readinto(buffer): size = io.BytesIO.readinto(stream, buffer) if size == 0: raise ConnectionResetError() return size stream.readinto = readinto return io.BufferedReader(stream) tests = ( (io.BytesIO, client.RemoteDisconnected), (make_reset_reader, ConnectionResetError), ) for stream_factory, exception in tests: with self.subTest(exception=exception): conn = FakeSocketHTTPConnection(b'', stream_factory) conn.request('GET', '/eof-response') self.assertRaises(exception, conn.getresponse) self.assertIsNone(conn.sock) # HTTPConnection.connect() should be automatically invoked conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) def test_100_close(self): conn = FakeSocketHTTPConnection( b'HTTP/1.1 100 Continue\r\n' b'\r\n' # Missing final response ) conn.request('GET', '/', headers={'Expect': '100-continue'}) self.assertRaises(client.RemoteDisconnected, conn.getresponse) self.assertIsNone(conn.sock) conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) class HTTPSTest(TestCase): def setUp(self): if not hasattr(client, 'HTTPSConnection'): self.skipTest('ssl support required') def make_server(self, certfile): from test.ssl_servers import make_https_server return make_https_server(self, certfile=certfile) def 
test_attributes(self): # simple test to check it's storing the timeout h = client.HTTPSConnection(HOST, TimeoutTest.PORT, timeout=30) self.assertEqual(h.timeout, 30) def test_networked(self): # Default settings: requires a valid cert from a trusted CA import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): h = client.HTTPSConnection('self-signed.pythontest.net', 443) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_networked_noverification(self): # Switch off cert verification import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl._create_unverified_context() h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) h.request('GET', '/') resp = h.getresponse() h.close() self.assertIn('nginx', resp.getheader('server')) resp.close() @support.system_must_validate_cert def test_networked_trusted_by_default_cert(self): # Default settings: requires a valid cert from a trusted CA support.requires('network') with socket_helper.transient_internet('www.python.org'): h = client.HTTPSConnection('www.python.org', 443) h.request('GET', '/') resp = h.getresponse() content_type = resp.getheader('content-type') resp.close() h.close() self.assertIn('text/html', content_type) def test_networked_good_cert(self): # We feed the server's cert as a validating cert import ssl support.requires('network') selfsigned_pythontestdotnet = 'self-signed.pythontest.net' with socket_helper.transient_internet(selfsigned_pythontestdotnet): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(context.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(context.check_hostname, True) context.load_verify_locations(CERT_selfsigned_pythontestdotnet) try: h = client.HTTPSConnection(selfsigned_pythontestdotnet, 443, context=context) h.request('GET', '/') resp = h.getresponse() except ssl.SSLError as ssl_err: ssl_err_str = str(ssl_err) # In the error message of [SSL: CERTIFICATE_VERIFY_FAILED] on # modern Linux distros (Debian Buster, etc) default OpenSSL # configurations it'll fail saying "key too weak" until we # address https://bugs.python.org/issue36816 to use a proper # key size on self-signed.pythontest.net. if re.search(r'(?i)key.too.weak', ssl_err_str): raise unittest.SkipTest( f'Got {ssl_err_str} trying to connect ' f'to {selfsigned_pythontestdotnet}. 
' 'See https://bugs.python.org/issue36816.') raise server_string = resp.getheader('server') resp.close() h.close() self.assertIn('nginx', server_string) @support.requires_resource('walltime') def test_networked_bad_cert(self): # We feed a "CA" cert that is unrelated to the server's cert import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_unknown_cert(self): # The custom cert isn't known to the default trust bundle import ssl server = self.make_server(CERT_localhost) h = client.HTTPSConnection('localhost', server.port) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_good_hostname(self): # The (valid) cert validates the HTTPS hostname import ssl server = self.make_server(CERT_localhost) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('localhost', server.port, context=context) self.addCleanup(h.close) h.request('GET', '/nonexistent') resp = h.getresponse() self.addCleanup(resp.close) self.assertEqual(resp.status, 404) def test_local_bad_hostname(self): # The (valid) cert doesn't validate the HTTPS hostname import ssl server = self.make_server(CERT_fakehostname) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_fakehostname) h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # Same with explicit context.check_hostname=True context.check_hostname = True h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # With context.check_hostname=False, the mismatching is ignored context.check_hostname = False h = client.HTTPSConnection('localhost', server.port, context=context) h.request('GET', '/nonexistent') resp = h.getresponse() resp.close() h.close() self.assertEqual(resp.status, 404) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not available') def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPSConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:443", "www.python.org", 443), ("www.python.org:", "www.python.org", 443), ("www.python.org", "www.python.org", 443), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 443), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 443)): c = client.HTTPSConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_tls13_pha(self): import ssl if not ssl.HAS_TLSv1_3: self.skipTest('TLS 1.3 support required') # just check status of PHA flag h = client.HTTPSConnection('localhost', 443) self.assertTrue(h._context.post_handshake_auth) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertFalse(context.post_handshake_auth) h = client.HTTPSConnection('localhost', 443, context=context) self.assertIs(h._context, context) self.assertFalse(h._context.post_handshake_auth) 
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT, cert_file=CERT_localhost) context.post_handshake_auth = True h = client.HTTPSConnection('localhost', 443, context=context) self.assertTrue(h._context.post_handshake_auth) class RequestBodyTest(TestCase): """Test cases where a request includes a message body.""" def setUp(self): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket("") self.conn.sock = self.sock def get_headers_and_fp(self): f = io.BytesIO(self.sock.data) f.readline() # read the request line message = client.parse_headers(f) return message, f def test_list_body(self): # Note that no content-length is automatically calculated for # an iterable. The request will fall back to send chunked # transfer encoding. cases = ( ([b'foo', b'bar'], b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ((b'foo', b'bar'), b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ) for body, expected in cases: with self.subTest(body): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket('') self.conn.request('PUT', '/url', body) msg, f = self.get_headers_and_fp() self.assertNotIn('Content-Type', msg) self.assertNotIn('Content-Length', msg) self.assertEqual(msg.get('Transfer-Encoding'), 'chunked') self.assertEqual(expected, f.read()) def test_manual_content_length(self): # Set an incorrect content-length so that we can verify that # it will not be over-ridden by the library. self.conn.request("PUT", "/url", "body", {"Content-Length": "42"}) message, f = self.get_headers_and_fp() self.assertEqual("42", message.get("content-length")) self.assertEqual(4, len(f.read())) def test_ascii_body(self): self.conn.request("PUT", "/url", "body") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("4", message.get("content-length")) self.assertEqual(b'body', f.read()) def test_latin1_body(self): self.conn.request("PUT", "/url", "body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_bytes_body(self): self.conn.request("PUT", "/url", b"body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_text_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "w", encoding="utf-8") as f: f.write("body") with open(os_helper.TESTFN, encoding="utf-8") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) # No content-length will be determined for files; the body # will be sent using chunked transfer encoding instead. 
self.assertIsNone(message.get("content-length")) self.assertEqual("chunked", message.get("transfer-encoding")) self.assertEqual(b'4\r\nbody\r\n0\r\n\r\n', f.read()) def test_binary_file_body(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with open(os_helper.TESTFN, "wb") as f: f.write(b"body\xc1") with open(os_helper.TESTFN, "rb") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("chunked", message.get("Transfer-Encoding")) self.assertNotIn("Content-Length", message) self.assertEqual(b'5\r\nbody\xc1\r\n0\r\n\r\n', f.read()) class HTTPResponseTest(TestCase): def setUp(self): body = "HTTP/1.1 200 Ok\r\nMy-Header: first-value\r\nMy-Header: \ second-value\r\n\r\nText" sock = FakeSocket(body) self.resp = client.HTTPResponse(sock) self.resp.begin() def test_getting_header(self): header = self.resp.getheader('My-Header') self.assertEqual(header, 'first-value, second-value') header = self.resp.getheader('My-Header', 'some default') self.assertEqual(header, 'first-value, second-value') def test_getting_nonexistent_header_with_string_default(self): header = self.resp.getheader('No-Such-Header', 'default-value') self.assertEqual(header, 'default-value') def test_getting_nonexistent_header_with_iterable_default(self): header = self.resp.getheader('No-Such-Header', ['default', 'values']) self.assertEqual(header, 'default, values') header = self.resp.getheader('No-Such-Header', ('default', 'values')) self.assertEqual(header, 'default, values') def test_getting_nonexistent_header_without_default(self): header = self.resp.getheader('No-Such-Header') self.assertEqual(header, None) def test_getting_header_defaultint(self): header = self.resp.getheader('No-Such-Header',default=42) self.assertEqual(header, 42) class TunnelTests(TestCase): def setUp(self): response_text = ( 'HTTP/1.1 200 OK\r\n\r\n' # Reply to CONNECT 'HTTP/1.1 200 OK\r\n' # Reply to HEAD 'Content-Length: 42\r\n\r\n' ) self.host = 'proxy.com' self.port = client.HTTP_PORT self.conn = client.HTTPConnection(self.host) self.conn._create_connection = self._create_connection(response_text) def tearDown(self): self.conn.close() def _create_connection(self, response_text): def create_connection(address, timeout=None, source_address=None): return FakeSocket(response_text, host=address[0], port=address[1]) return create_connection def test_set_tunnel_host_port_headers_add_host_missing(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)'} tunnel_headers_after = tunnel_headers.copy() tunnel_headers_after['Host'] = '%s:%d' % (tunnel_host, tunnel_port) self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers_after) def test_set_tunnel_host_port_headers_set_host_identical(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)', 'Host': '%s:%d' % (tunnel_host, tunnel_port)} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) 
self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_set_tunnel_host_port_headers_set_host_different(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)', 'Host': '%s:%d' % ('example.com', 4200)} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_disallow_set_tunnel_after_connect(self): # Once connected, we shouldn't be able to tunnel anymore self.conn.connect() self.assertRaises(RuntimeError, self.conn.set_tunnel, 'destination.com') def test_connect_with_tunnel(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_with_default_port(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii'), port=d[b'port']) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_with_nonstandard_port(self): d = { b'host': b'destination.com', b'port': 8888, } self.conn.set_tunnel(d[b'host'].decode('ascii'), port=d[b'port']) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s:%(port)d\r\n' % d, self.conn.sock.data) # This request is not RFC-valid, but it's been possible with the library # for years, so don't break it unexpectedly... This also tests # case-insensitivity when injecting Host: headers if they're missing. 
def test_connect_with_tunnel_with_different_host_header(self): d = { b'host': b'destination.com', b'tunnel_host_header': b'example.com:9876', b'port': client.HTTP_PORT, } self.conn.set_tunnel( d[b'host'].decode('ascii'), headers={'HOST': d[b'tunnel_host_header'].decode('ascii')}) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'HOST: %(tunnel_host_header)s\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_different_host(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'HEAD / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_with_tunnel_idna(self): dest = '\u03b4\u03c0\u03b8.gr' dest_port = b'%s:%d' % (dest.encode('idna'), client.HTTP_PORT) expected = b'CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n' % ( dest_port, dest_port) self.conn.set_tunnel(dest) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(expected, self.conn.sock.data) def test_tunnel_connect_single_send_connection_setup(self): """Regression test for https://bugs.python.org/issue43332.""" with mock.patch.object(self.conn, 'send') as mock_send: self.conn.set_tunnel('destination.com') self.conn.connect() self.conn.request('GET', '/') mock_send.assert_called() # Likely 2, but this test only cares about the first.
self.assertGreater( len(mock_send.mock_calls), 1, msg=f'unexpected number of send calls: {mock_send.mock_calls}') proxy_setup_data_sent = mock_send.mock_calls[0][1][0] self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent) self.assertTrue( proxy_setup_data_sent.endswith(b'\r\n\r\n'), msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}') def test_connect_put_request(self): d = { b'host': b'destination.com', b'port': client.HTTP_PORT, } self.conn.set_tunnel(d[b'host'].decode('ascii')) self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, self.port) self.assertIn(b'CONNECT %(host)s:%(port)d HTTP/1.1\r\n' b'Host: %(host)s:%(port)d\r\n\r\n' % d, self.conn.sock.data) self.assertIn(b'PUT / HTTP/1.1\r\nHost: %(host)s\r\n' % d, self.conn.sock.data) def test_connect_put_request_ipv6(self): self.conn.set_tunnel('[1:2:3::4]', 1234) self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT [1:2:3::4]:1234', self.conn.sock.data) self.assertIn(b'Host: [1:2:3::4]:1234', self.conn.sock.data) def test_connect_put_request_ipv6_port(self): self.conn.set_tunnel('[1:2:3::4]:1234') self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT [1:2:3::4]:1234', self.conn.sock.data) self.assertIn(b'Host: [1:2:3::4]:1234', self.conn.sock.data) def test_tunnel_debuglog(self): expected_header = 'X-Dummy: 1' response_text = 'HTTP/1.0 200 OK\r\n{}\r\n\r\n'.format(expected_header) self.conn.set_debuglevel(1) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') with support.captured_stdout() as output: self.conn.request('PUT', '/', '') lines = output.getvalue().splitlines() self.assertIn('header: {}'.format(expected_header), lines) def test_proxy_response_headers(self): expected_header = ('X-Dummy', '1') response_text = ( 'HTTP/1.0 200 OK\r\n' '{0}\r\n\r\n'.format(':'.join(expected_header)) ) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') self.conn.request('PUT', '/', '') headers = self.conn.get_proxy_response_headers() self.assertIn(expected_header, headers.items()) def test_no_proxy_response_headers(self): expected_header = ('X-Dummy', '1') response_text = ( 'HTTP/1.0 200 OK\r\n' '{0}\r\n\r\n'.format(':'.join(expected_header)) ) self.conn._create_connection = self._create_connection(response_text) self.conn.request('PUT', '/', '') headers = self.conn.get_proxy_response_headers() self.assertIsNone(headers) def test_tunnel_leak(self): sock = None def _create_connection(address, timeout=None, source_address=None): nonlocal sock sock = FakeSocket( 'HTTP/1.1 404 NOT FOUND\r\n\r\n', host=address[0], port=address[1], ) return sock self.conn._create_connection = _create_connection self.conn.set_tunnel('destination.com') exc = None try: self.conn.request('HEAD', '/', '') except OSError as e: # keeping a reference to exc keeps response alive in the traceback exc = e self.assertIsNotNone(exc) self.assertTrue(sock.file_closed) if __name__ == '__main__': unittest.main(verbosity=2) gevent-24.11.1/src/greentest/3.13/test_interpreters.py000066400000000000000000001001611471441230600224630ustar00rootroot00000000000000import contextlib import json import os import os.path import sys import threading from textwrap import dedent import 
unittest import time from test import support from test.support import import_helper from test.support import threading_helper from test.support import os_helper _interpreters = import_helper.import_module('_xxsubinterpreters') _channels = import_helper.import_module('_xxinterpchannels') from test.support import interpreters def _captured_script(script): r, w = os.pipe() indented = script.replace('\n', '\n ') wrapped = dedent(f""" import contextlib with open({w}, 'w', encoding='utf-8') as spipe: with contextlib.redirect_stdout(spipe): {indented} """) return wrapped, open(r, encoding='utf-8') def clean_up_interpreters(): for interp in interpreters.list_all(): if interp.id == 0: # main continue try: interp.close() except RuntimeError: pass # already destroyed def _run_output(interp, request, channels=None): script, rpipe = _captured_script(request) with rpipe: interp.run(script, channels=channels) return rpipe.read() @contextlib.contextmanager def _running(interp): r, w = os.pipe() def run(): interp.run(dedent(f""" # wait for "signal" with open({r}) as rpipe: rpipe.read() """)) t = threading.Thread(target=run) t.start() yield with open(w, 'w') as spipe: spipe.write('done') t.join() class TestBase(unittest.TestCase): def os_pipe(self): r, w = os.pipe() def cleanup(): try: os.close(w) except Exception: pass try: os.close(r) except Exception: pass self.addCleanup(cleanup) return r, w def tearDown(self): clean_up_interpreters() class CreateTests(TestBase): def test_in_main(self): interp = interpreters.create() self.assertIsInstance(interp, interpreters.Interpreter) self.assertIn(interp, interpreters.list_all()) def test_in_thread(self): lock = threading.Lock() interp = None def f(): nonlocal interp interp = interpreters.create() lock.acquire() lock.release() t = threading.Thread(target=f) with lock: t.start() t.join() self.assertIn(interp, interpreters.list_all()) def test_in_subinterpreter(self): main, = interpreters.list_all() interp = interpreters.create() out = _run_output(interp, dedent(""" from test.support import interpreters interp = interpreters.create() print(interp.id) """)) interp2 = interpreters.Interpreter(int(out)) self.assertEqual(interpreters.list_all(), [main, interp, interp2]) def test_after_destroy_all(self): before = set(interpreters.list_all()) # Create 3 subinterpreters. interp_lst = [] for _ in range(3): interps = interpreters.create() interp_lst.append(interps) # Now destroy them. for interp in interp_lst: interp.close() # Finally, create another. interp = interpreters.create() self.assertEqual(set(interpreters.list_all()), before | {interp}) def test_after_destroy_some(self): before = set(interpreters.list_all()) # Create 3 subinterpreters. interp1 = interpreters.create() interp2 = interpreters.create() interp3 = interpreters.create() # Now destroy 2 of them. interp1.close() interp2.close() # Finally, create another. 
interp = interpreters.create() self.assertEqual(set(interpreters.list_all()), before | {interp3, interp}) class GetCurrentTests(TestBase): def test_main(self): main = interpreters.get_main() current = interpreters.get_current() self.assertEqual(current, main) def test_subinterpreter(self): main = _interpreters.get_main() interp = interpreters.create() out = _run_output(interp, dedent(""" from test.support import interpreters cur = interpreters.get_current() print(cur.id) """)) current = interpreters.Interpreter(int(out)) self.assertNotEqual(current, main) class ListAllTests(TestBase): def test_initial(self): interps = interpreters.list_all() self.assertEqual(1, len(interps)) def test_after_creating(self): main = interpreters.get_current() first = interpreters.create() second = interpreters.create() ids = [] for interp in interpreters.list_all(): ids.append(interp.id) self.assertEqual(ids, [main.id, first.id, second.id]) def test_after_destroying(self): main = interpreters.get_current() first = interpreters.create() second = interpreters.create() first.close() ids = [] for interp in interpreters.list_all(): ids.append(interp.id) self.assertEqual(ids, [main.id, second.id]) class TestInterpreterAttrs(TestBase): def test_id_type(self): main = interpreters.get_main() current = interpreters.get_current() interp = interpreters.create() self.assertIsInstance(main.id, _interpreters.InterpreterID) self.assertIsInstance(current.id, _interpreters.InterpreterID) self.assertIsInstance(interp.id, _interpreters.InterpreterID) def test_main_id(self): main = interpreters.get_main() self.assertEqual(main.id, 0) def test_custom_id(self): interp = interpreters.Interpreter(1) self.assertEqual(interp.id, 1) with self.assertRaises(TypeError): interpreters.Interpreter('1') def test_id_readonly(self): interp = interpreters.Interpreter(1) with self.assertRaises(AttributeError): interp.id = 2 @unittest.skip('not ready yet (see bpo-32604)') def test_main_isolated(self): main = interpreters.get_main() self.assertFalse(main.isolated) @unittest.skip('not ready yet (see bpo-32604)') def test_subinterpreter_isolated_default(self): interp = interpreters.create() self.assertFalse(interp.isolated) def test_subinterpreter_isolated_explicit(self): interp1 = interpreters.create(isolated=True) interp2 = interpreters.create(isolated=False) self.assertTrue(interp1.isolated) self.assertFalse(interp2.isolated) @unittest.skip('not ready yet (see bpo-32604)') def test_custom_isolated_default(self): interp = interpreters.Interpreter(1) self.assertFalse(interp.isolated) def test_custom_isolated_explicit(self): interp1 = interpreters.Interpreter(1, isolated=True) interp2 = interpreters.Interpreter(1, isolated=False) self.assertTrue(interp1.isolated) self.assertFalse(interp2.isolated) def test_isolated_readonly(self): interp = interpreters.Interpreter(1) with self.assertRaises(AttributeError): interp.isolated = True def test_equality(self): interp1 = interpreters.create() interp2 = interpreters.create() self.assertEqual(interp1, interp1) self.assertNotEqual(interp1, interp2) class TestInterpreterIsRunning(TestBase): def test_main(self): main = interpreters.get_main() self.assertTrue(main.is_running()) @unittest.skip('Fails on FreeBSD') def test_subinterpreter(self): interp = interpreters.create() self.assertFalse(interp.is_running()) with _running(interp): self.assertTrue(interp.is_running()) self.assertFalse(interp.is_running()) def test_finished(self): r, w = self.os_pipe() interp = interpreters.create() interp.run(f"""if True: import os 
os.write({w}, b'x') """) self.assertFalse(interp.is_running()) self.assertEqual(os.read(r, 1), b'x') def test_from_subinterpreter(self): interp = interpreters.create() out = _run_output(interp, dedent(f""" import _xxsubinterpreters as _interpreters if _interpreters.is_running({interp.id}): print(True) else: print(False) """)) self.assertEqual(out.strip(), 'True') def test_already_destroyed(self): interp = interpreters.create() interp.close() with self.assertRaises(RuntimeError): interp.is_running() def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.is_running() def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.is_running() def test_with_only_background_threads(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() DONE = b'D' FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading def task(): v = os.read({r_thread}, 1) assert v == {DONE!r} os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() """) self.assertFalse(interp.is_running()) os.write(w_thread, DONE) interp.run('t.join()') self.assertEqual(os.read(r_interp, 1), FINISHED) class TestInterpreterClose(TestBase): def test_basic(self): main = interpreters.get_main() interp1 = interpreters.create() interp2 = interpreters.create() interp3 = interpreters.create() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp2, interp3}) interp2.close() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp3}) def test_all(self): before = set(interpreters.list_all()) interps = set() for _ in range(3): interp = interpreters.create() interps.add(interp) self.assertEqual(set(interpreters.list_all()), before | interps) for interp in interps: interp.close() self.assertEqual(set(interpreters.list_all()), before) def test_main(self): main, = interpreters.list_all() with self.assertRaises(RuntimeError): main.close() def f(): with self.assertRaises(RuntimeError): main.close() t = threading.Thread(target=f) t.start() t.join() def test_already_destroyed(self): interp = interpreters.create() interp.close() with self.assertRaises(RuntimeError): interp.close() def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.close() def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.close() def test_from_current(self): main, = interpreters.list_all() interp = interpreters.create() out = _run_output(interp, dedent(f""" from test.support import interpreters interp = interpreters.Interpreter({int(interp.id)}) try: interp.close() except RuntimeError: print('failed') """)) self.assertEqual(out.strip(), 'failed') self.assertEqual(set(interpreters.list_all()), {main, interp}) def test_from_sibling(self): main, = interpreters.list_all() interp1 = interpreters.create() interp2 = interpreters.create() self.assertEqual(set(interpreters.list_all()), {main, interp1, interp2}) interp1.run(dedent(f""" from test.support import interpreters interp2 = interpreters.Interpreter(int({interp2.id})) interp2.close() interp3 = interpreters.create() interp3.close() """)) self.assertEqual(set(interpreters.list_all()), {main, interp1}) def test_from_other_thread(self): interp = interpreters.create() def f(): interp.close() t = threading.Thread(target=f) t.start() t.join() @unittest.skip('Fails on FreeBSD') def test_still_running(self): main, = 
interpreters.list_all() interp = interpreters.create() with _running(interp): with self.assertRaises(RuntimeError): interp.close() self.assertTrue(interp.is_running()) def test_subthreads_still_running(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading import time done = False def notify_fini(): global done done = True t.join() threading._register_atexit(notify_fini) def task(): while not done: time.sleep(0.1) os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() """) interp.close() self.assertEqual(os.read(r_interp, 1), FINISHED) class TestInterpreterRun(TestBase): def test_success(self): interp = interpreters.create() script, file = _captured_script('print("it worked!", end="")') with file: interp.run(script) out = file.read() self.assertEqual(out, 'it worked!') def test_in_thread(self): interp = interpreters.create() script, file = _captured_script('print("it worked!", end="")') with file: def f(): interp.run(script) t = threading.Thread(target=f) t.start() t.join() out = file.read() self.assertEqual(out, 'it worked!') @support.requires_fork() def test_fork(self): interp = interpreters.create() import tempfile with tempfile.NamedTemporaryFile('w+', encoding='utf-8') as file: file.write('') file.flush() expected = 'spam spam spam spam spam' script = dedent(f""" import os try: os.fork() except RuntimeError: with open('{file.name}', 'w', encoding='utf-8') as out: out.write('{expected}') """) interp.run(script) file.seek(0) content = file.read() self.assertEqual(content, expected) @unittest.skip('Fails on FreeBSD') def test_already_running(self): interp = interpreters.create() with _running(interp): with self.assertRaises(RuntimeError): interp.run('print("spam")') def test_does_not_exist(self): interp = interpreters.Interpreter(1_000_000) with self.assertRaises(RuntimeError): interp.run('print("spam")') def test_bad_id(self): interp = interpreters.Interpreter(-1) with self.assertRaises(ValueError): interp.run('print("spam")') def test_bad_script(self): interp = interpreters.create() with self.assertRaises(TypeError): interp.run(10) def test_bytes_for_script(self): interp = interpreters.create() with self.assertRaises(TypeError): interp.run(b'print("spam")') def test_with_background_threads_still_running(self): r_interp, w_interp = self.os_pipe() r_thread, w_thread = self.os_pipe() RAN = b'R' DONE = b'D' FINISHED = b'F' interp = interpreters.create() interp.run(f"""if True: import os import threading def task(): v = os.read({r_thread}, 1) assert v == {DONE!r} os.write({w_interp}, {FINISHED!r}) t = threading.Thread(target=task) t.start() os.write({w_interp}, {RAN!r}) """) interp.run(f"""if True: os.write({w_interp}, {RAN!r}) """) os.write(w_thread, DONE) interp.run('t.join()') self.assertEqual(os.read(r_interp, 1), RAN) self.assertEqual(os.read(r_interp, 1), RAN) self.assertEqual(os.read(r_interp, 1), FINISHED) # test_xxsubinterpreters covers the remaining Interpreter.run() behavior. @unittest.skip('these are crashing, likely just due just to _xxsubinterpreters (see gh-105699)') class StressTests(TestBase): # In these tests we generally want a lot of interpreters, # but not so many that any test takes too long. 
@support.requires_resource('cpu') def test_create_many_sequential(self): alive = [] for _ in range(100): interp = interpreters.create() alive.append(interp) @support.requires_resource('cpu') def test_create_many_threaded(self): alive = [] def task(): interp = interpreters.create() alive.append(interp) threads = (threading.Thread(target=task) for _ in range(200)) with threading_helper.start_threads(threads): pass class StartupTests(TestBase): # We want to ensure the initial state of subinterpreters # matches expectations. _subtest_count = 0 @contextlib.contextmanager def subTest(self, *args): with super().subTest(*args) as ctx: self._subtest_count += 1 try: yield ctx finally: if self._debugged_in_subtest: if self._subtest_count == 1: # The first subtest adds a leading newline, so we # compensate here by not printing a trailing newline. print('### end subtest debug ###', end='') else: print('### end subtest debug ###') self._debugged_in_subtest = False def debug(self, msg, *, header=None): if header: self._debug(f'--- {header} ---') if msg: if msg.endswith(os.linesep): self._debug(msg[:-len(os.linesep)]) else: self._debug(msg) self._debug('') self._debug('------') else: self._debug(msg) _debugged = False _debugged_in_subtest = False def _debug(self, msg): if not self._debugged: print() self._debugged = True if self._subtest is not None: if True: if not self._debugged_in_subtest: self._debugged_in_subtest = True print('### start subtest debug ###') print(msg) else: print(msg) def create_temp_dir(self): import tempfile tmp = tempfile.mkdtemp(prefix='test_interpreters_') tmp = os.path.realpath(tmp) self.addCleanup(os_helper.rmtree, tmp) return tmp def write_script(self, *path, text): filename = os.path.join(*path) dirname = os.path.dirname(filename) if dirname: os.makedirs(dirname, exist_ok=True) with open(filename, 'w', encoding='utf-8') as outfile: outfile.write(dedent(text)) return filename @support.requires_subprocess() def run_python(self, argv, *, cwd=None): # This method is inspired by # EmbeddingTestsMixin.run_embedded_interpreter() in test_embed.py. import shlex import subprocess if isinstance(argv, str): argv = shlex.split(argv) argv = [sys.executable, *argv] try: proc = subprocess.run( argv, cwd=cwd, capture_output=True, text=True, ) except Exception as exc: self.debug(f'# cmd: {shlex.join(argv)}') if isinstance(exc, FileNotFoundError) and not exc.filename: if os.path.exists(argv[0]): exists = 'exists' else: exists = 'does not exist' self.debug(f'{argv[0]} {exists}') raise # re-raise assert proc.stderr == '' or proc.returncode != 0, proc.stderr if proc.returncode != 0 and support.verbose: self.debug(f'# python3 {shlex.join(argv[1:])} failed:') self.debug(proc.stdout, header='stdout') self.debug(proc.stderr, header='stderr') self.assertEqual(proc.returncode, 0) self.assertEqual(proc.stderr, '') return proc.stdout def test_sys_path_0(self): # The main interpreter's sys.path[0] should be used by subinterpreters. 
script = ''' import sys from test.support import interpreters orig = sys.path[0] interp = interpreters.create() interp.run(f"""if True: import json import sys print(json.dumps({{ 'main': {orig!r}, 'sub': sys.path[0], }}, indent=4), flush=True) """) ''' # / # pkg/ # __init__.py # __main__.py # script.py # script.py cwd = self.create_temp_dir() self.write_script(cwd, 'pkg', '__init__.py', text='') self.write_script(cwd, 'pkg', '__main__.py', text=script) self.write_script(cwd, 'pkg', 'script.py', text=script) self.write_script(cwd, 'script.py', text=script) cases = [ ('script.py', cwd), ('-m script', cwd), ('-m pkg', cwd), ('-m pkg.script', cwd), ('-c "import script"', ''), ] for argv, expected in cases: with self.subTest(f'python3 {argv}'): out = self.run_python(argv, cwd=cwd) data = json.loads(out) sp0_main, sp0_sub = data['main'], data['sub'] self.assertEqual(sp0_sub, sp0_main) self.assertEqual(sp0_sub, expected) class FinalizationTests(TestBase): def test_gh_109793(self): import subprocess argv = [sys.executable, '-c', '''if True: import _xxsubinterpreters as _interpreters interpid = _interpreters.create() raise Exception '''] proc = subprocess.run(argv, capture_output=True, text=True) self.assertIn('Traceback', proc.stderr) if proc.returncode == 0 and support.verbose: print() print("--- cmd unexpected succeeded ---") print(f"stdout:\n{proc.stdout}") print(f"stderr:\n{proc.stderr}") print("------") self.assertEqual(proc.returncode, 1) class TestIsShareable(TestBase): def test_default_shareables(self): shareables = [ # singletons None, # builtin objects b'spam', 'spam', 10, -10, ] for obj in shareables: with self.subTest(obj): shareable = interpreters.is_shareable(obj) self.assertTrue(shareable) def test_not_shareable(self): class Cheese: def __init__(self, name): self.name = name def __str__(self): return self.name class SubBytes(bytes): """A subclass of a shareable type.""" not_shareables = [ # singletons True, False, NotImplemented, ..., # builtin types and objects type, object, object(), Exception(), 100.0, # user-defined types and objects Cheese, Cheese('Wensleydale'), SubBytes(b'spam'), ] for obj in not_shareables: with self.subTest(repr(obj)): self.assertFalse( interpreters.is_shareable(obj)) class TestChannels(TestBase): def test_create(self): r, s = interpreters.create_channel() self.assertIsInstance(r, interpreters.RecvChannel) self.assertIsInstance(s, interpreters.SendChannel) def test_list_all(self): self.assertEqual(interpreters.list_all_channels(), []) created = set() for _ in range(3): ch = interpreters.create_channel() created.add(ch) after = set(interpreters.list_all_channels()) self.assertEqual(after, created) class TestRecvChannelAttrs(TestBase): def test_id_type(self): rch, _ = interpreters.create_channel() self.assertIsInstance(rch.id, _channels.ChannelID) def test_custom_id(self): rch = interpreters.RecvChannel(1) self.assertEqual(rch.id, 1) with self.assertRaises(TypeError): interpreters.RecvChannel('1') def test_id_readonly(self): rch = interpreters.RecvChannel(1) with self.assertRaises(AttributeError): rch.id = 2 def test_equality(self): ch1, _ = interpreters.create_channel() ch2, _ = interpreters.create_channel() self.assertEqual(ch1, ch1) self.assertNotEqual(ch1, ch2) class TestSendChannelAttrs(TestBase): def test_id_type(self): _, sch = interpreters.create_channel() self.assertIsInstance(sch.id, _channels.ChannelID) def test_custom_id(self): sch = interpreters.SendChannel(1) self.assertEqual(sch.id, 1) with self.assertRaises(TypeError): 
interpreters.SendChannel('1') def test_id_readonly(self): sch = interpreters.SendChannel(1) with self.assertRaises(AttributeError): sch.id = 2 def test_equality(self): _, ch1 = interpreters.create_channel() _, ch2 = interpreters.create_channel() self.assertEqual(ch1, ch1) self.assertNotEqual(ch1, ch2) class TestSendRecv(TestBase): def test_send_recv_main(self): r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_same_interpreter(self): interp = interpreters.create() interp.run(dedent(""" from test.support import interpreters r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv() assert obj == orig, 'expected: obj == orig' assert obj is not orig, 'expected: obj is not orig' """)) @unittest.skip('broken (see BPO-...)') def test_send_recv_different_interpreters(self): r1, s1 = interpreters.create_channel() r2, s2 = interpreters.create_channel() orig1 = b'spam' s1.send_nowait(orig1) out = _run_output( interpreters.create(), dedent(f""" obj1 = r.recv() assert obj1 == b'spam', 'expected: obj1 == orig1' # When going to another interpreter we get a copy. assert id(obj1) != {id(orig1)}, 'expected: obj1 is not orig1' orig2 = b'eggs' print(id(orig2)) s.send_nowait(orig2) """), channels=dict(r=r1, s=s2), ) obj2 = r2.recv() self.assertEqual(obj2, b'eggs') self.assertNotEqual(id(obj2), int(out)) def test_send_recv_different_threads(self): r, s = interpreters.create_channel() def f(): while True: try: obj = r.recv() break except interpreters.ChannelEmptyError: time.sleep(0.1) s.send(obj) t = threading.Thread(target=f) t.start() orig = b'spam' s.send(orig) t.join() obj = r.recv() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_nowait_main(self): r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv_nowait() self.assertEqual(obj, orig) self.assertIsNot(obj, orig) def test_send_recv_nowait_main_with_default(self): r, _ = interpreters.create_channel() obj = r.recv_nowait(None) self.assertIsNone(obj) def test_send_recv_nowait_same_interpreter(self): interp = interpreters.create() interp.run(dedent(""" from test.support import interpreters r, s = interpreters.create_channel() orig = b'spam' s.send_nowait(orig) obj = r.recv_nowait() assert obj == orig, 'expected: obj == orig' # When going back to the same interpreter we get the same object. assert obj is not orig, 'expected: obj is not orig' """)) @unittest.skip('broken (see BPO-...)') def test_send_recv_nowait_different_interpreters(self): r1, s1 = interpreters.create_channel() r2, s2 = interpreters.create_channel() orig1 = b'spam' s1.send_nowait(orig1) out = _run_output( interpreters.create(), dedent(f""" obj1 = r.recv_nowait() assert obj1 == b'spam', 'expected: obj1 == orig1' # When going to another interpreter we get a copy. 
assert id(obj1) != {id(orig1)}, 'expected: obj1 is not orig1' orig2 = b'eggs' print(id(orig2)) s.send_nowait(orig2) """), channels=dict(r=r1, s=s2), ) obj2 = r2.recv_nowait() self.assertEqual(obj2, b'eggs') self.assertNotEqual(id(obj2), int(out)) def test_recv_channel_does_not_exist(self): ch = interpreters.RecvChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.recv() def test_send_channel_does_not_exist(self): ch = interpreters.SendChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.send(b'spam') def test_recv_nowait_channel_does_not_exist(self): ch = interpreters.RecvChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.recv_nowait() def test_send_nowait_channel_does_not_exist(self): ch = interpreters.SendChannel(1_000_000) with self.assertRaises(interpreters.ChannelNotFoundError): ch.send_nowait(b'spam') def test_recv_nowait_empty(self): ch, _ = interpreters.create_channel() with self.assertRaises(interpreters.ChannelEmptyError): ch.recv_nowait() def test_recv_nowait_default(self): default = object() rch, sch = interpreters.create_channel() obj1 = rch.recv_nowait(default) sch.send_nowait(None) sch.send_nowait(1) sch.send_nowait(b'spam') sch.send_nowait(b'eggs') obj2 = rch.recv_nowait(default) obj3 = rch.recv_nowait(default) obj4 = rch.recv_nowait() obj5 = rch.recv_nowait(default) obj6 = rch.recv_nowait(default) self.assertIs(obj1, default) self.assertIs(obj2, None) self.assertEqual(obj3, 1) self.assertEqual(obj4, b'spam') self.assertEqual(obj5, b'eggs') self.assertIs(obj6, default) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_queue.py000066400000000000000000001026101471441230600210620ustar00rootroot00000000000000# Some simple queue module tests, plus some failure conditions # to ensure the Queue locks remain stable. import itertools import random import sys import threading import time import unittest import weakref from test.support import gc_collect from test.support import import_helper from test.support import threading_helper # queue module depends on threading primitives threading_helper.requires_working_threading(module=True) py_queue = import_helper.import_fresh_module('queue', blocked=['_queue']) c_queue = import_helper.import_fresh_module('queue', fresh=['_queue']) need_c_queue = unittest.skipUnless(c_queue, "No _queue module found") QUEUE_SIZE = 5 def qfull(q): return q.maxsize > 0 and q.qsize() == q.maxsize # A thread to run a function that unclogs a blocked Queue. class _TriggerThread(threading.Thread): def __init__(self, fn, args): self.fn = fn self.args = args self.startedEvent = threading.Event() threading.Thread.__init__(self) def run(self): # The sleep isn't necessary, but is intended to give the blocking # function in the main thread a chance at actually blocking before # we unclog it. But if the sleep is longer than the timeout-based # tests wait in their blocking functions, those tests will fail. # So we give them much longer timeout values compared to the # sleep here (I aimed at 10 seconds for blocking functions -- # they should never actually wait that long - they should make # progress as soon as we call self.fn()). time.sleep(0.1) self.startedEvent.set() self.fn(*self.args) # Execute a function that blocks, and in a separate thread, a function that # triggers the release. Returns the result of the blocking function. 
Caution: # block_func must guarantee to block until trigger_func is called, and # trigger_func must guarantee to change queue state so that block_func can make # enough progress to return. In particular, a block_func that just raises an # exception regardless of whether trigger_func is called will lead to # timing-dependent sporadic failures, and one of those went rarely seen but # undiagnosed for years. Now block_func must be unexceptional. If block_func # is supposed to raise an exception, call do_exceptional_blocking_test() # instead. class BlockingTestMixin: def do_blocking_test(self, block_func, block_args, trigger_func, trigger_args): thread = _TriggerThread(trigger_func, trigger_args) thread.start() try: self.result = block_func(*block_args) # If block_func returned before our thread made the call, we failed! if not thread.startedEvent.is_set(): self.fail("blocking function %r appeared not to block" % block_func) return self.result finally: threading_helper.join_thread(thread) # make sure the thread terminates # Call this instead if block_func is supposed to raise an exception. def do_exceptional_blocking_test(self,block_func, block_args, trigger_func, trigger_args, expected_exception_class): thread = _TriggerThread(trigger_func, trigger_args) thread.start() try: try: block_func(*block_args) except expected_exception_class: raise else: self.fail("expected exception of kind %r" % expected_exception_class) finally: threading_helper.join_thread(thread) # make sure the thread terminates if not thread.startedEvent.is_set(): self.fail("trigger thread ended but event never set") class BaseQueueTestMixin(BlockingTestMixin): def setUp(self): self.cum = 0 self.cumlock = threading.Lock() def basic_queue_test(self, q): if q.qsize(): raise RuntimeError("Call this function with an empty queue") self.assertTrue(q.empty()) self.assertFalse(q.full()) # I guess we better check things actually queue correctly a little :) q.put(111) q.put(333) q.put(222) target_order = dict(Queue = [111, 333, 222], LifoQueue = [222, 333, 111], PriorityQueue = [111, 222, 333]) actual_order = [q.get(), q.get(), q.get()] self.assertEqual(actual_order, target_order[q.__class__.__name__], "Didn't seem to queue the correct data!") for i in range(QUEUE_SIZE-1): q.put(i) self.assertTrue(q.qsize(), "Queue should not be empty") self.assertTrue(not qfull(q), "Queue should not be full") last = 2 * QUEUE_SIZE full = 3 * 2 * QUEUE_SIZE q.put(last) self.assertTrue(qfull(q), "Queue should be full") self.assertFalse(q.empty()) self.assertTrue(q.full()) try: q.put(full, block=0) self.fail("Didn't appear to block with a full queue") except self.queue.Full: pass try: q.put(full, timeout=0.01) self.fail("Didn't appear to time-out with a full queue") except self.queue.Full: pass # Test a blocking put self.do_blocking_test(q.put, (full,), q.get, ()) self.do_blocking_test(q.put, (full, True, 10), q.get, ()) # Empty it for i in range(QUEUE_SIZE): q.get() self.assertTrue(not q.qsize(), "Queue should be empty") try: q.get(block=0) self.fail("Didn't appear to block with an empty queue") except self.queue.Empty: pass try: q.get(timeout=0.01) self.fail("Didn't appear to time-out with an empty queue") except self.queue.Empty: pass # Test a blocking get self.do_blocking_test(q.get, (), q.put, ('empty',)) self.do_blocking_test(q.get, (True, 10), q.put, ('empty',)) def worker(self, q): while True: x = q.get() if x < 0: q.task_done() return with self.cumlock: self.cum += x q.task_done() def queue_join_test(self, q): self.cum = 0 threads = [] for i in 
(0,1): thread = threading.Thread(target=self.worker, args=(q,)) thread.start() threads.append(thread) for i in range(100): q.put(i) q.join() self.assertEqual(self.cum, sum(range(100)), "q.join() did not block until all tasks were done") for i in (0,1): q.put(-1) # instruct the threads to close q.join() # verify that you can join twice for thread in threads: thread.join() def test_queue_task_done(self): # Test to make sure a queue task completed successfully. q = self.type2test() try: q.task_done() except ValueError: pass else: self.fail("Did not detect task count going negative") def test_queue_join(self): # Test that a queue join()s successfully, and before anything else # (done twice for insurance). q = self.type2test() self.queue_join_test(q) self.queue_join_test(q) try: q.task_done() except ValueError: pass else: self.fail("Did not detect task count going negative") def test_basic(self): # Do it a couple of times on the same queue. # Done twice to make sure works with same instance reused. q = self.type2test(QUEUE_SIZE) self.basic_queue_test(q) self.basic_queue_test(q) def test_negative_timeout_raises_exception(self): q = self.type2test(QUEUE_SIZE) with self.assertRaises(ValueError): q.put(1, timeout=-1) with self.assertRaises(ValueError): q.get(1, timeout=-1) def test_nowait(self): q = self.type2test(QUEUE_SIZE) for i in range(QUEUE_SIZE): q.put_nowait(1) with self.assertRaises(self.queue.Full): q.put_nowait(1) for i in range(QUEUE_SIZE): q.get_nowait() with self.assertRaises(self.queue.Empty): q.get_nowait() def test_shrinking_queue(self): # issue 10110 q = self.type2test(3) q.put(1) q.put(2) q.put(3) with self.assertRaises(self.queue.Full): q.put_nowait(4) self.assertEqual(q.qsize(), 3) q.maxsize = 2 # shrink the queue with self.assertRaises(self.queue.Full): q.put_nowait(4) def test_shutdown_empty(self): q = self.type2test() q.shutdown() with self.assertRaises(self.queue.ShutDown): q.put("data") with self.assertRaises(self.queue.ShutDown): q.get() def test_shutdown_nonempty(self): q = self.type2test() q.put("data") q.shutdown() q.get() with self.assertRaises(self.queue.ShutDown): q.get() def test_shutdown_immediate(self): q = self.type2test() q.put("data") q.shutdown(immediate=True) with self.assertRaises(self.queue.ShutDown): q.get() def test_shutdown_allowed_transitions(self): # allowed transitions would be from alive via shutdown to immediate q = self.type2test() self.assertFalse(q.is_shutdown) q.shutdown() self.assertTrue(q.is_shutdown) q.shutdown(immediate=True) self.assertTrue(q.is_shutdown) q.shutdown(immediate=False) def _shutdown_all_methods_in_one_thread(self, immediate): q = self.type2test(2) q.put("L") q.put_nowait("O") q.shutdown(immediate) with self.assertRaises(self.queue.ShutDown): q.put("E") with self.assertRaises(self.queue.ShutDown): q.put_nowait("W") if immediate: with self.assertRaises(self.queue.ShutDown): q.get() with self.assertRaises(self.queue.ShutDown): q.get_nowait() with self.assertRaises(ValueError): q.task_done() q.join() else: self.assertIn(q.get(), "LO") q.task_done() self.assertIn(q.get(), "LO") q.task_done() q.join() # on shutdown(immediate=False) # when queue is empty, should raise ShutDown Exception with self.assertRaises(self.queue.ShutDown): q.get() # p.get(True) with self.assertRaises(self.queue.ShutDown): q.get_nowait() # p.get(False) with self.assertRaises(self.queue.ShutDown): q.get(True, 1.0) def test_shutdown_all_methods_in_one_thread(self): return self._shutdown_all_methods_in_one_thread(False) def 
test_shutdown_immediate_all_methods_in_one_thread(self): return self._shutdown_all_methods_in_one_thread(True) def _write_msg_thread(self, q, n, results, i_when_exec_shutdown, event_shutdown, barrier_start): # All `write_msg_threads` # put several items into the queue. for i in range(0, i_when_exec_shutdown//2): q.put((i, 'LOYD')) # Wait for the barrier to be complete. barrier_start.wait() for i in range(i_when_exec_shutdown//2, n): try: q.put((i, "YDLO")) except self.queue.ShutDown: results.append(False) break # Trigger queue shutdown. if i == i_when_exec_shutdown: # Only one thread should call shutdown(). if not event_shutdown.is_set(): event_shutdown.set() results.append(True) def _read_msg_thread(self, q, results, barrier_start): # Get at least one item. q.get(True) q.task_done() # Wait for the barrier to be complete. barrier_start.wait() while True: try: q.get(False) q.task_done() except self.queue.ShutDown: results.append(True) break except self.queue.Empty: pass def _shutdown_thread(self, q, results, event_end, immediate): event_end.wait() q.shutdown(immediate) results.append(q.qsize() == 0) def _join_thread(self, q, barrier_start): # Wait for the barrier to be complete. barrier_start.wait() q.join() def _shutdown_all_methods_in_many_threads(self, immediate): # Run a 'multi-producers/consumers queue' use case, # with enough items into the queue. # When shutdown, all running threads will be joined. q = self.type2test() ps = [] res_puts = [] res_gets = [] res_shutdown = [] write_threads = 4 read_threads = 6 join_threads = 2 nb_msgs = 1024*64 nb_msgs_w = nb_msgs // write_threads when_exec_shutdown = nb_msgs_w // 2 # Use of a Barrier to ensure that # - all write threads put all their items into the queue, # - all read thread get at least one item from the queue, # and keep on running until shutdown. # The join thread is started only when shutdown is immediate. nparties = write_threads + read_threads if immediate: nparties += join_threads barrier_start = threading.Barrier(nparties) ev_exec_shutdown = threading.Event() lprocs = [ (self._write_msg_thread, write_threads, (q, nb_msgs_w, res_puts, when_exec_shutdown, ev_exec_shutdown, barrier_start)), (self._read_msg_thread, read_threads, (q, res_gets, barrier_start)), (self._shutdown_thread, 1, (q, res_shutdown, ev_exec_shutdown, immediate)), ] if immediate: lprocs.append((self._join_thread, join_threads, (q, barrier_start))) # start all threads. 
for func, n, args in lprocs: for i in range(n): ps.append(threading.Thread(target=func, args=args)) ps[-1].start() for thread in ps: thread.join() self.assertTrue(True in res_puts) self.assertEqual(res_gets.count(True), read_threads) if immediate: self.assertListEqual(res_shutdown, [True]) self.assertTrue(q.empty()) def test_shutdown_all_methods_in_many_threads(self): return self._shutdown_all_methods_in_many_threads(False) def test_shutdown_immediate_all_methods_in_many_threads(self): return self._shutdown_all_methods_in_many_threads(True) def _get(self, q, go, results, shutdown=False): go.wait() try: msg = q.get() results.append(not shutdown) return not shutdown except self.queue.ShutDown: results.append(shutdown) return shutdown def _get_shutdown(self, q, go, results): return self._get(q, go, results, True) def _get_task_done(self, q, go, results): go.wait() try: msg = q.get() q.task_done() results.append(True) return msg except self.queue.ShutDown: results.append(False) return False def _put(self, q, msg, go, results, shutdown=False): go.wait() try: q.put(msg) results.append(not shutdown) return not shutdown except self.queue.ShutDown: results.append(shutdown) return shutdown def _put_shutdown(self, q, msg, go, results): return self._put(q, msg, go, results, True) def _join(self, q, results, shutdown=False): try: q.join() results.append(not shutdown) return not shutdown except self.queue.ShutDown: results.append(shutdown) return shutdown def _join_shutdown(self, q, results): return self._join(q, results, True) def _shutdown_get(self, immediate): q = self.type2test(2) results = [] go = threading.Event() q.put("Y") q.put("D") # queue full if immediate: thrds = ( (self._get_shutdown, (q, go, results)), (self._get_shutdown, (q, go, results)), ) else: thrds = ( # on shutdown(immediate=False) # one of these threads shoud raise Shutdown (self._get, (q, go, results)), (self._get, (q, go, results)), (self._get, (q, go, results)), ) threads = [] for func, params in thrds: threads.append(threading.Thread(target=func, args=params)) threads[-1].start() q.shutdown(immediate) go.set() for t in threads: t.join() if immediate: self.assertListEqual(results, [True, True]) else: self.assertListEqual(sorted(results), [False] + [True]*(len(thrds)-1)) def test_shutdown_get(self): return self._shutdown_get(False) def test_shutdown_immediate_get(self): return self._shutdown_get(True) def _shutdown_put(self, immediate): q = self.type2test(2) results = [] go = threading.Event() q.put("Y") q.put("D") # queue fulled thrds = ( (self._put_shutdown, (q, "E", go, results)), (self._put_shutdown, (q, "W", go, results)), ) threads = [] for func, params in thrds: threads.append(threading.Thread(target=func, args=params)) threads[-1].start() q.shutdown() go.set() for t in threads: t.join() self.assertEqual(results, [True]*len(thrds)) def test_shutdown_put(self): return self._shutdown_put(False) def test_shutdown_immediate_put(self): return self._shutdown_put(True) def _shutdown_join(self, immediate): q = self.type2test() results = [] q.put("Y") go = threading.Event() nb = q.qsize() thrds = ( (self._join, (q, results)), (self._join, (q, results)), ) threads = [] for func, params in thrds: threads.append(threading.Thread(target=func, args=params)) threads[-1].start() if not immediate: res = [] for i in range(nb): threads.append(threading.Thread(target=self._get_task_done, args=(q, go, res))) threads[-1].start() q.shutdown(immediate) go.set() for t in threads: t.join() self.assertEqual(results, [True]*len(thrds)) def 
test_shutdown_immediate_join(self): return self._shutdown_join(True) def test_shutdown_join(self): return self._shutdown_join(False) def _shutdown_put_join(self, immediate): q = self.type2test(2) results = [] go = threading.Event() q.put("Y") # queue not fulled thrds = ( (self._put_shutdown, (q, "E", go, results)), (self._join, (q, results)), ) threads = [] for func, params in thrds: threads.append(threading.Thread(target=func, args=params)) threads[-1].start() self.assertEqual(q.unfinished_tasks, 1) q.shutdown(immediate) go.set() if immediate: with self.assertRaises(self.queue.ShutDown): q.get_nowait() else: result = q.get() self.assertEqual(result, "Y") q.task_done() for t in threads: t.join() self.assertEqual(results, [True]*len(thrds)) def test_shutdown_immediate_put_join(self): return self._shutdown_put_join(True) def test_shutdown_put_join(self): return self._shutdown_put_join(False) def test_shutdown_get_task_done_join(self): q = self.type2test(2) results = [] go = threading.Event() q.put("Y") q.put("D") self.assertEqual(q.unfinished_tasks, q.qsize()) thrds = ( (self._get_task_done, (q, go, results)), (self._get_task_done, (q, go, results)), (self._join, (q, results)), (self._join, (q, results)), ) threads = [] for func, params in thrds: threads.append(threading.Thread(target=func, args=params)) threads[-1].start() go.set() q.shutdown(False) for t in threads: t.join() self.assertEqual(results, [True]*len(thrds)) def test_shutdown_pending_get(self): def get(): try: results.append(q.get()) except Exception as e: results.append(e) q = self.type2test() results = [] get_thread = threading.Thread(target=get) get_thread.start() q.shutdown(immediate=False) get_thread.join(timeout=10.0) self.assertFalse(get_thread.is_alive()) self.assertEqual(len(results), 1) self.assertIsInstance(results[0], self.queue.ShutDown) class QueueTest(BaseQueueTestMixin): def setUp(self): self.type2test = self.queue.Queue super().setUp() class PyQueueTest(QueueTest, unittest.TestCase): queue = py_queue @need_c_queue class CQueueTest(QueueTest, unittest.TestCase): queue = c_queue class LifoQueueTest(BaseQueueTestMixin): def setUp(self): self.type2test = self.queue.LifoQueue super().setUp() class PyLifoQueueTest(LifoQueueTest, unittest.TestCase): queue = py_queue @need_c_queue class CLifoQueueTest(LifoQueueTest, unittest.TestCase): queue = c_queue class PriorityQueueTest(BaseQueueTestMixin): def setUp(self): self.type2test = self.queue.PriorityQueue super().setUp() class PyPriorityQueueTest(PriorityQueueTest, unittest.TestCase): queue = py_queue @need_c_queue class CPriorityQueueTest(PriorityQueueTest, unittest.TestCase): queue = c_queue # A Queue subclass that can provoke failure at a moment's notice :) class FailingQueueException(Exception): pass class FailingQueueTest(BlockingTestMixin): def setUp(self): Queue = self.queue.Queue class FailingQueue(Queue): def __init__(self, *args): self.fail_next_put = False self.fail_next_get = False Queue.__init__(self, *args) def _put(self, item): if self.fail_next_put: self.fail_next_put = False raise FailingQueueException("You Lose") return Queue._put(self, item) def _get(self): if self.fail_next_get: self.fail_next_get = False raise FailingQueueException("You Lose") return Queue._get(self) self.FailingQueue = FailingQueue super().setUp() def failing_queue_test(self, q): if q.qsize(): raise RuntimeError("Call this function with an empty queue") for i in range(QUEUE_SIZE-1): q.put(i) # Test a failing non-blocking put. 
q.fail_next_put = True try: q.put("oops", block=0) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass q.fail_next_put = True try: q.put("oops", timeout=0.1) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass q.put("last") self.assertTrue(qfull(q), "Queue should be full") # Test a failing blocking put q.fail_next_put = True try: self.do_blocking_test(q.put, ("full",), q.get, ()) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass # Check the Queue isn't damaged. # put failed, but get succeeded - re-add q.put("last") # Test a failing timeout put q.fail_next_put = True try: self.do_exceptional_blocking_test(q.put, ("full", True, 10), q.get, (), FailingQueueException) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass # Check the Queue isn't damaged. # put failed, but get succeeded - re-add q.put("last") self.assertTrue(qfull(q), "Queue should be full") q.get() self.assertTrue(not qfull(q), "Queue should not be full") q.put("last") self.assertTrue(qfull(q), "Queue should be full") # Test a blocking put self.do_blocking_test(q.put, ("full",), q.get, ()) # Empty it for i in range(QUEUE_SIZE): q.get() self.assertTrue(not q.qsize(), "Queue should be empty") q.put("first") q.fail_next_get = True try: q.get() self.fail("The queue didn't fail when it should have") except FailingQueueException: pass self.assertTrue(q.qsize(), "Queue should not be empty") q.fail_next_get = True try: q.get(timeout=0.1) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass self.assertTrue(q.qsize(), "Queue should not be empty") q.get() self.assertTrue(not q.qsize(), "Queue should be empty") q.fail_next_get = True try: self.do_exceptional_blocking_test(q.get, (), q.put, ('empty',), FailingQueueException) self.fail("The queue didn't fail when it should have") except FailingQueueException: pass # put succeeded, but get failed. self.assertTrue(q.qsize(), "Queue should not be empty") q.get() self.assertTrue(not q.qsize(), "Queue should be empty") def test_failing_queue(self): # Test to make sure a queue is functioning correctly. # Done twice to the same instance. 
q = self.FailingQueue(QUEUE_SIZE) self.failing_queue_test(q) self.failing_queue_test(q) class PyFailingQueueTest(FailingQueueTest, unittest.TestCase): queue = py_queue @need_c_queue class CFailingQueueTest(FailingQueueTest, unittest.TestCase): queue = c_queue class BaseSimpleQueueTest: def setUp(self): self.q = self.type2test() def feed(self, q, seq, rnd, sentinel): while True: try: val = seq.pop() except IndexError: q.put(sentinel) return q.put(val) if rnd.random() > 0.5: time.sleep(rnd.random() * 1e-3) def consume(self, q, results, sentinel): while True: val = q.get() if val == sentinel: return results.append(val) def consume_nonblock(self, q, results, sentinel): while True: while True: try: val = q.get(block=False) except self.queue.Empty: time.sleep(1e-5) else: break if val == sentinel: return results.append(val) def consume_timeout(self, q, results, sentinel): while True: while True: try: val = q.get(timeout=1e-5) except self.queue.Empty: pass else: break if val == sentinel: return results.append(val) def run_threads(self, n_threads, q, inputs, feed_func, consume_func): results = [] sentinel = None seq = inputs.copy() seq.reverse() rnd = random.Random(42) exceptions = [] def log_exceptions(f): def wrapper(*args, **kwargs): try: f(*args, **kwargs) except BaseException as e: exceptions.append(e) return wrapper feeders = [threading.Thread(target=log_exceptions(feed_func), args=(q, seq, rnd, sentinel)) for i in range(n_threads)] consumers = [threading.Thread(target=log_exceptions(consume_func), args=(q, results, sentinel)) for i in range(n_threads)] with threading_helper.start_threads(feeders + consumers): pass self.assertFalse(exceptions) self.assertTrue(q.empty()) self.assertEqual(q.qsize(), 0) return results def test_basic(self): # Basic tests for get(), put() etc. 
q = self.q self.assertTrue(q.empty()) self.assertEqual(q.qsize(), 0) q.put(1) self.assertFalse(q.empty()) self.assertEqual(q.qsize(), 1) q.put(2) q.put_nowait(3) q.put(4) self.assertFalse(q.empty()) self.assertEqual(q.qsize(), 4) self.assertEqual(q.get(), 1) self.assertEqual(q.qsize(), 3) self.assertEqual(q.get_nowait(), 2) self.assertEqual(q.qsize(), 2) self.assertEqual(q.get(block=False), 3) self.assertFalse(q.empty()) self.assertEqual(q.qsize(), 1) self.assertEqual(q.get(timeout=0.1), 4) self.assertTrue(q.empty()) self.assertEqual(q.qsize(), 0) with self.assertRaises(self.queue.Empty): q.get(block=False) with self.assertRaises(self.queue.Empty): q.get(timeout=1e-3) with self.assertRaises(self.queue.Empty): q.get_nowait() self.assertTrue(q.empty()) self.assertEqual(q.qsize(), 0) def test_negative_timeout_raises_exception(self): q = self.q q.put(1) with self.assertRaises(ValueError): q.get(timeout=-1) def test_order(self): # Test a pair of concurrent put() and get() q = self.q inputs = list(range(100)) results = self.run_threads(1, q, inputs, self.feed, self.consume) # One producer, one consumer => results appended in well-defined order self.assertEqual(results, inputs) def test_many_threads(self): # Test multiple concurrent put() and get() N = 50 q = self.q inputs = list(range(10000)) results = self.run_threads(N, q, inputs, self.feed, self.consume) # Multiple consumers without synchronization append the # results in random order self.assertEqual(sorted(results), inputs) def test_many_threads_nonblock(self): # Test multiple concurrent put() and get(block=False) N = 50 q = self.q inputs = list(range(10000)) results = self.run_threads(N, q, inputs, self.feed, self.consume_nonblock) self.assertEqual(sorted(results), inputs) def test_many_threads_timeout(self): # Test multiple concurrent put() and get(timeout=...) N = 50 q = self.q inputs = list(range(1000)) results = self.run_threads(N, q, inputs, self.feed, self.consume_timeout) self.assertEqual(sorted(results), inputs) def test_references(self): # The queue should lose references to each item as soon as # it leaves the queue. class C: pass N = 20 q = self.q for i in range(N): q.put(C()) for i in range(N): wr = weakref.ref(q.get()) gc_collect() # For PyPy or other GCs. self.assertIsNone(wr()) class PySimpleQueueTest(BaseSimpleQueueTest, unittest.TestCase): queue = py_queue def setUp(self): self.type2test = self.queue._PySimpleQueue super().setUp() @need_c_queue class CSimpleQueueTest(BaseSimpleQueueTest, unittest.TestCase): queue = c_queue def setUp(self): self.type2test = self.queue.SimpleQueue super().setUp() def test_is_default(self): self.assertIs(self.type2test, self.queue.SimpleQueue) self.assertIs(self.type2test, self.queue.SimpleQueue) def test_reentrancy(self): # bpo-14976: put() may be called reentrantly in an asynchronous # callback. q = self.q gen = itertools.count() N = 10000 results = [] # This test exploits the fact that __del__ in a reference cycle # can be called any time the GC may run. 
class Circular(object): def __init__(self): self.circular = self def __del__(self): q.put(next(gen)) while True: o = Circular() q.put(next(gen)) del o results.append(q.get()) if results[-1] >= N: break self.assertEqual(results, list(range(N + 1))) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_select.py000066400000000000000000000066601471441230600212250ustar00rootroot00000000000000import errno import select import subprocess import sys import textwrap import unittest from test import support support.requires_working_socket(module=True) @unittest.skipIf((sys.platform[:3]=='win'), "can't easily test on this system") class SelectTestCase(unittest.TestCase): class Nope: pass class Almost: def fileno(self): return 'fileno' def test_error_conditions(self): self.assertRaises(TypeError, select.select, 1, 2, 3) self.assertRaises(TypeError, select.select, [self.Nope()], [], []) self.assertRaises(TypeError, select.select, [self.Almost()], [], []) self.assertRaises(TypeError, select.select, [], [], [], "not a number") self.assertRaises(ValueError, select.select, [], [], [], -1) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: self.assertEqual(err.errno, errno.EBADF) else: self.fail("exception not raised") def test_returned_list_identity(self): # See issue #8329 r, w, x = select.select([], [], [], 1) self.assertIsNot(r, w) self.assertIsNot(r, x) self.assertIsNot(w, x) @support.requires_fork() def test_select(self): code = textwrap.dedent(''' import time for i in range(10): print("testing...", flush=True) time.sleep(0.050) ''') cmd = [sys.executable, '-I', '-c', code] with subprocess.Popen(cmd, stdout=subprocess.PIPE) as proc: pipe = proc.stdout for timeout in (0, 1, 2, 4, 8, 16) + (None,)*10: if support.verbose: print(f'timeout = {timeout}') rfd, wfd, xfd = select.select([pipe], [], [], timeout) self.assertEqual(wfd, []) self.assertEqual(xfd, []) if not rfd: continue if rfd == [pipe]: line = pipe.readline() if support.verbose: print(repr(line)) if not line: if support.verbose: print('EOF') break continue self.fail('Unexpected return values from select():', rfd, wfd, xfd) # Issue 16230: Crash on select resized list @unittest.skipIf( support.is_emscripten, "Emscripten cannot select a fd multiple times." 
) def test_select_mutated(self): a = [] class F: def fileno(self): del a[-1] return sys.__stdout__.fileno() a[:] = [F()] * 10 self.assertEqual(select.select([], a, []), ([], a[:5], [])) def test_disallow_instantiation(self): support.check_disallow_instantiation(self, type(select.poll())) if hasattr(select, 'devpoll'): support.check_disallow_instantiation(self, type(select.devpoll())) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_selectors.py000066400000000000000000000475771471441230600217650ustar00rootroot00000000000000import errno import os import random import selectors import signal import socket import sys from test import support from test.support import is_apple, os_helper, socket_helper from time import sleep import unittest import unittest.mock import tempfile from time import monotonic as time try: import resource except ImportError: resource = None if support.is_emscripten or support.is_wasi: raise unittest.SkipTest("Cannot create socketpair on Emscripten/WASI.") if hasattr(socket, 'socketpair'): socketpair = socket.socketpair else: def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): with socket.socket(family, type, proto) as l: l.bind((socket_helper.HOST, 0)) l.listen() c = socket.socket(family, type, proto) try: c.connect(l.getsockname()) caddr = c.getsockname() while True: a, addr = l.accept() # check that we've got the correct client if addr == caddr: return c, a a.close() except OSError: c.close() raise def find_ready_matching(ready, flag): match = [] for key, events in ready: if events & flag: match.append(key.fileobj) return match class BaseSelectorTestCase: def make_socketpair(self): rd, wr = socketpair() self.addCleanup(rd.close) self.addCleanup(wr.close) return rd, wr def test_register(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertIsInstance(key, selectors.SelectorKey) self.assertEqual(key.fileobj, rd) self.assertEqual(key.fd, rd.fileno()) self.assertEqual(key.events, selectors.EVENT_READ) self.assertEqual(key.data, "data") # register an unknown event self.assertRaises(ValueError, s.register, 0, 999999) # register an invalid FD self.assertRaises(ValueError, s.register, -10, selectors.EVENT_READ) # register twice self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) # register the same FD, but with a different object self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) def test_unregister(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.unregister(rd) # unregister an unknown file obj self.assertRaises(KeyError, s.unregister, 999999) # unregister twice self.assertRaises(KeyError, s.unregister, rd) def test_unregister_after_fd_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(r) s.unregister(w) @unittest.skipUnless(os.name == 'posix', "requires posix") def test_unregister_after_fd_close_and_reuse(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd2, wr2 = self.make_socketpair() rd.close() wr.close() os.dup2(rd2.fileno(), r) os.dup2(wr2.fileno(), w) 
self.addCleanup(os.close, r) self.addCleanup(os.close, w) s.unregister(r) s.unregister(w) def test_unregister_after_socket_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(rd) s.unregister(wr) def test_modify(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ) # modify events key2 = s.modify(rd, selectors.EVENT_WRITE) self.assertNotEqual(key.events, key2.events) self.assertEqual(key2, s.get_key(rd)) s.unregister(rd) # modify data d1 = object() d2 = object() key = s.register(rd, selectors.EVENT_READ, d1) key2 = s.modify(rd, selectors.EVENT_READ, d2) self.assertEqual(key.events, key2.events) self.assertNotEqual(key.data, key2.data) self.assertEqual(key2, s.get_key(rd)) self.assertEqual(key2.data, d2) # modify unknown file obj self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ) # modify use a shortcut d3 = object() s.register = unittest.mock.Mock() s.unregister = unittest.mock.Mock() s.modify(rd, selectors.EVENT_READ, d3) self.assertFalse(s.register.called) self.assertFalse(s.unregister.called) def test_modify_unregister(self): # Make sure the fd is unregister()ed in case of error on # modify(): http://bugs.python.org/issue30014 if self.SELECTOR.__name__ == 'EpollSelector': patch = unittest.mock.patch( 'selectors.EpollSelector._selector_cls') elif self.SELECTOR.__name__ == 'PollSelector': patch = unittest.mock.patch( 'selectors.PollSelector._selector_cls') elif self.SELECTOR.__name__ == 'DevpollSelector': patch = unittest.mock.patch( 'selectors.DevpollSelector._selector_cls') else: raise self.skipTest("") with patch as m: m.return_value.modify = unittest.mock.Mock( side_effect=ZeroDivisionError) s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) self.assertEqual(len(s._map), 1) with self.assertRaises(ZeroDivisionError): s.modify(rd, selectors.EVENT_WRITE) self.assertEqual(len(s._map), 0) def test_close(self): s = self.SELECTOR() self.addCleanup(s.close) mapping = s.get_map() rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) s.close() self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) self.assertRaises(KeyError, mapping.__getitem__, rd) self.assertRaises(KeyError, mapping.__getitem__, wr) self.assertEqual(mapping.get(rd), None) self.assertEqual(mapping.get(wr), None) def test_get_key(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertEqual(key, s.get_key(rd)) # unknown file obj self.assertRaises(KeyError, s.get_key, 999999) def test_get_map(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() sentinel = object() keys = s.get_map() self.assertFalse(keys) self.assertEqual(len(keys), 0) self.assertEqual(list(keys), []) self.assertEqual(keys.get(rd), None) self.assertEqual(keys.get(rd, sentinel), sentinel) key = s.register(rd, selectors.EVENT_READ, "data") self.assertIn(rd, keys) self.assertEqual(key, keys.get(rd)) self.assertEqual(key, keys[rd]) self.assertEqual(len(keys), 1) self.assertEqual(list(keys), [rd.fileno()]) self.assertEqual(list(keys.values()), [key]) # unknown file obj with self.assertRaises(KeyError): keys[999999] # Read-only mapping with 
self.assertRaises(TypeError): del keys[rd] def test_select(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) wr_key = s.register(wr, selectors.EVENT_WRITE) result = s.select() for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertTrue(events) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) self.assertEqual([(wr_key, selectors.EVENT_WRITE)], result) def test_select_read_write(self): # gh-110038: when a file descriptor is registered for both read and # write, the two events must be seen on a single call to select(). s = self.SELECTOR() self.addCleanup(s.close) sock1, sock2 = self.make_socketpair() sock2.send(b"foo") my_key = s.register(sock1, selectors.EVENT_READ | selectors.EVENT_WRITE) seen_read, seen_write = False, False result = s.select() # We get the read and write either in the same result entry or in two # distinct entries with the same key. self.assertLessEqual(len(result), 2) for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertEqual(key, my_key) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) if events & selectors.EVENT_READ: self.assertFalse(seen_read) seen_read = True if events & selectors.EVENT_WRITE: self.assertFalse(seen_write) seen_write = True self.assertTrue(seen_read) self.assertTrue(seen_write) def test_context_manager(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() with s as sel: sel.register(rd, selectors.EVENT_READ) sel.register(wr, selectors.EVENT_WRITE) self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) def test_fileno(self): s = self.SELECTOR() self.addCleanup(s.close) if hasattr(s, 'fileno'): fd = s.fileno() self.assertTrue(isinstance(fd, int)) self.assertGreaterEqual(fd, 0) def test_selector(self): s = self.SELECTOR() self.addCleanup(s.close) NUM_SOCKETS = 12 MSG = b" This is a test." MSG_LEN = len(MSG) readers = [] writers = [] r2w = {} w2r = {} for i in range(NUM_SOCKETS): rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) readers.append(rd) writers.append(wr) r2w[rd] = wr w2r[wr] = rd bufs = [] while writers: ready = s.select() ready_writers = find_ready_matching(ready, selectors.EVENT_WRITE) if not ready_writers: self.fail("no sockets ready for writing") wr = random.choice(ready_writers) wr.send(MSG) for i in range(10): ready = s.select() ready_readers = find_ready_matching(ready, selectors.EVENT_READ) if ready_readers: break # there might be a delay between the write to the write end and # the read end is reported ready sleep(0.1) else: self.fail("no sockets ready for reading") self.assertEqual([w2r[wr]], ready_readers) rd = ready_readers[0] buf = rd.recv(MSG_LEN) self.assertEqual(len(buf), MSG_LEN) bufs.append(buf) s.unregister(r2w[rd]) s.unregister(rd) writers.remove(r2w[rd]) self.assertEqual(bufs, [MSG] * NUM_SOCKETS) @unittest.skipIf(sys.platform == 'win32', 'select.select() cannot be used with empty fd sets') def test_empty_select(self): # Issue #23009: Make sure EpollSelector.select() works when no FD is # registered. 
s = self.SELECTOR() self.addCleanup(s.close) self.assertEqual(s.select(timeout=0), []) def test_timeout(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(wr, selectors.EVENT_WRITE) t = time() self.assertEqual(1, len(s.select(0))) self.assertEqual(1, len(s.select(-1))) self.assertLess(time() - t, 0.5) s.unregister(wr) s.register(rd, selectors.EVENT_READ) t = time() self.assertFalse(s.select(0)) self.assertFalse(s.select(-1)) self.assertLess(time() - t, 0.5) t0 = time() self.assertFalse(s.select(1)) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_exc(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() class InterruptSelect(Exception): pass def handler(*args): raise InterruptSelect orig_alrm_handler = signal.signal(signal.SIGALRM, handler) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal which raises an exception with self.assertRaises(InterruptSelect): s.select(30) # select() was interrupted before the timeout of 30 seconds self.assertLess(time() - t, 5.0) finally: signal.alarm(0) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_noraise(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda *args: None) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal, but the signal handler doesn't # raise an exception, so select() should by retries with a recomputed # timeout self.assertFalse(s.select(1.5)) self.assertGreaterEqual(time() - t, 1.0) finally: signal.alarm(0) class ScalableSelectorMixIn: # see issue #18963 for why it's skipped on older OS X versions @support.requires_mac_ver(10, 5) @unittest.skipUnless(resource, "Test needs resource module") @support.requires_resource('cpu') def test_above_fd_setsize(self): # A scalable implementation should have no problem with more than # FD_SETSIZE file descriptors. Since we don't know the value, we just # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling. soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) try: resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)) self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE, (soft, hard)) NUM_FDS = min(hard, 2**16) except (OSError, ValueError): NUM_FDS = soft # guard for already allocated FDs (stdin, stdout...) 
NUM_FDS -= 32 s = self.SELECTOR() self.addCleanup(s.close) for i in range(NUM_FDS // 2): try: rd, wr = self.make_socketpair() except OSError: # too many FDs, skip - note that we should only catch EMFILE # here, but apparently *BSD and Solaris can fail upon connect() # or bind() with EADDRNOTAVAIL, so let's be safe self.skipTest("FD limit reached") try: s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) except OSError as e: if e.errno == errno.ENOSPC: # this can be raised by epoll if we go over # fs.epoll.max_user_watches sysctl self.skipTest("FD limit reached") raise try: fds = s.select() except OSError as e: if e.errno == errno.EINVAL and is_apple: # unexplainable errors on macOS don't need to fail the test self.skipTest("Invalid argument error calling poll()") raise self.assertEqual(NUM_FDS // 2, len(fds)) class DefaultSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.DefaultSelector class SelectSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.SelectSelector @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'PollSelector', None) @unittest.skipUnless(hasattr(selectors, 'EpollSelector'), "Test needs selectors.EpollSelector") class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'EpollSelector', None) def test_register_file(self): # epoll(7) returns EPERM when given a file to watch s = self.SELECTOR() with tempfile.NamedTemporaryFile() as f: with self.assertRaises(IOError): s.register(f, selectors.EVENT_READ) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(f) @unittest.skipUnless(hasattr(selectors, 'KqueueSelector'), "Test needs selectors.KqueueSelector)") class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'KqueueSelector', None) def test_register_bad_fd(self): # a file descriptor that's been closed should raise an OSError # with EBADF s = self.SELECTOR() bad_f = os_helper.make_bad_fd() with self.assertRaises(OSError) as cm: s.register(bad_f, selectors.EVENT_READ) self.assertEqual(cm.exception.errno, errno.EBADF) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(bad_f) def test_empty_select_timeout(self): # Issues #23009, #29255: Make sure timeout is applied when no fds # are registered. 
s = self.SELECTOR() self.addCleanup(s.close) t0 = time() self.assertEqual(s.select(1), []) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(selectors, 'DevpollSelector'), "Test needs selectors.DevpollSelector") class DevpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'DevpollSelector', None) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_signal.py000066400000000000000000001507231471441230600212230ustar00rootroot00000000000000import enum import errno import functools import inspect import os import random import signal import socket import statistics import subprocess import sys import threading import time import unittest from test import support from test.support import ( is_apple, is_apple_mobile, os_helper, threading_helper ) from test.support.script_helper import assert_python_ok, spawn_python try: import _testcapi except ImportError: _testcapi = None class GenericTests(unittest.TestCase): def test_enums(self): for name in dir(signal): sig = getattr(signal, name) if name in {'SIG_DFL', 'SIG_IGN'}: self.assertIsInstance(sig, signal.Handlers) elif name in {'SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'}: self.assertIsInstance(sig, signal.Sigmasks) elif name.startswith('SIG') and not name.startswith('SIG_'): self.assertIsInstance(sig, signal.Signals) elif name.startswith('CTRL_'): self.assertIsInstance(sig, signal.Signals) self.assertEqual(sys.platform, "win32") CheckedSignals = enum._old_convert_( enum.IntEnum, 'Signals', 'signal', lambda name: name.isupper() and (name.startswith('SIG') and not name.startswith('SIG_')) or name.startswith('CTRL_'), source=signal, ) enum._test_simple_enum(CheckedSignals, signal.Signals) CheckedHandlers = enum._old_convert_( enum.IntEnum, 'Handlers', 'signal', lambda name: name in ('SIG_DFL', 'SIG_IGN'), source=signal, ) enum._test_simple_enum(CheckedHandlers, signal.Handlers) Sigmasks = getattr(signal, 'Sigmasks', None) if Sigmasks is not None: CheckedSigmasks = enum._old_convert_( enum.IntEnum, 'Sigmasks', 'signal', lambda name: name in ('SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'), source=signal, ) enum._test_simple_enum(CheckedSigmasks, Sigmasks) def test_functions_module_attr(self): # Issue #27718: If __all__ is not defined all non-builtin functions # should have correct __module__ to be displayed by pydoc. 
for name in dir(signal): value = getattr(signal, name) if inspect.isroutine(value) and not inspect.isbuiltin(value): self.assertEqual(value.__module__, 'signal') @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class PosixTests(unittest.TestCase): def trivial_signal_handler(self, *args): pass def create_handler_with_partial(self, argument): return functools.partial(self.trivial_signal_handler, argument) def test_out_of_range_signal_number_raises_error(self): self.assertRaises(ValueError, signal.getsignal, 4242) self.assertRaises(ValueError, signal.signal, 4242, self.trivial_signal_handler) self.assertRaises(ValueError, signal.strsignal, 4242) def test_setting_signal_handler_to_none_raises_error(self): self.assertRaises(TypeError, signal.signal, signal.SIGUSR1, None) def test_getsignal(self): hup = signal.signal(signal.SIGHUP, self.trivial_signal_handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), self.trivial_signal_handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) def test_no_repr_is_called_on_signal_handler(self): # See https://github.com/python/cpython/issues/112559. class MyArgument: def __init__(self): self.repr_count = 0 def __repr__(self): self.repr_count += 1 return super().__repr__() argument = MyArgument() self.assertEqual(0, argument.repr_count) handler = self.create_handler_with_partial(argument) hup = signal.signal(signal.SIGHUP, handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) self.assertEqual(0, argument.repr_count) def test_strsignal(self): self.assertIn("Interrupt", signal.strsignal(signal.SIGINT)) self.assertIn("Terminated", signal.strsignal(signal.SIGTERM)) self.assertIn("Hangup", signal.strsignal(signal.SIGHUP)) # Issue 3864, unknown if this affects earlier versions of freebsd also def test_interprocess_signal(self): dirname = os.path.dirname(__file__) script = os.path.join(dirname, 'signalinterproctester.py') assert_python_ok(script) @unittest.skipUnless( hasattr(signal, "valid_signals"), "requires signal.valid_signals" ) def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertIn(signal.Signals.SIGINT, s) self.assertIn(signal.Signals.SIGALRM, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) # gh-91145: Make sure that all SIGxxx constants exposed by the Python # signal module have a number in the [0; signal.NSIG-1] range. for name in dir(signal): if not name.startswith("SIG"): continue if name in {"SIG_IGN", "SIG_DFL"}: # SIG_IGN and SIG_DFL are pointers continue with self.subTest(name=name): signum = getattr(signal, name) self.assertGreaterEqual(signum, 0) self.assertLess(signum, signal.NSIG) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers exit via SIGINT.""" process = subprocess.run( [sys.executable, "-c", "import os, signal, time\n" "os.kill(os.getpid(), signal.SIGINT)\n" "for _ in range(999): time.sleep(0.01)"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) self.assertEqual(process.returncode, -signal.SIGINT) # Caveat: The exit code is insufficient to guarantee we actually died # via a signal. POSIX shells do more than look at the 8 bit value. 
# Writing an automation friendly test of an interactive shell # to confirm that our process died via a SIGINT proved too complex. @unittest.skipUnless(sys.platform == "win32", "Windows specific") class WindowsSignalTests(unittest.TestCase): def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertGreaterEqual(len(s), 6) self.assertIn(signal.Signals.SIGINT, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) def test_issue9324(self): # Updated for issue #10003, adding SIGBREAK handler = lambda x, y: None checked = set() for sig in (signal.SIGABRT, signal.SIGBREAK, signal.SIGFPE, signal.SIGILL, signal.SIGINT, signal.SIGSEGV, signal.SIGTERM): # Set and then reset a handler for signals that work on windows. # Issue #18396, only for signals without a C-level handler. if signal.getsignal(sig) is not None: signal.signal(sig, signal.signal(sig, handler)) checked.add(sig) # Issue #18396: Ensure the above loop at least tested *something* self.assertTrue(checked) with self.assertRaises(ValueError): signal.signal(-1, handler) with self.assertRaises(ValueError): signal.signal(7, handler) @unittest.skipUnless(sys.executable, "sys.executable required.") @support.requires_subprocess() def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers an exit using STATUS_CONTROL_C_EXIT.""" # We don't test via os.kill(os.getpid(), signal.CTRL_C_EVENT) here # as that requires setting up a console control handler in a child # in its own process group. Doable, but quite complicated. (see # @eryksun on https://github.com/python/cpython/pull/11862) process = subprocess.run( [sys.executable, "-c", "raise KeyboardInterrupt"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) STATUS_CONTROL_C_EXIT = 0xC000013A self.assertEqual(process.returncode, STATUS_CONTROL_C_EXIT) class WakeupFDTests(unittest.TestCase): def test_invalid_call(self): # First parameter is positional-only with self.assertRaises(TypeError): signal.set_wakeup_fd(signum=signal.SIGINT) # warn_on_full_buffer is a keyword-only parameter with self.assertRaises(TypeError): signal.set_wakeup_fd(signal.SIGINT, False) def test_invalid_fd(self): fd = os_helper.make_bad_fd() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_invalid_socket(self): sock = socket.socket() fd = sock.fileno() sock.close() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) # Emscripten does not support fstat on pipes yet. 
# https://github.com/emscripten-core/emscripten/issues/16414 @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_result(self): r1, w1 = os.pipe() self.addCleanup(os.close, r1) self.addCleanup(os.close, w1) r2, w2 = os.pipe() self.addCleanup(os.close, r2) self.addCleanup(os.close, w2) if hasattr(os, 'set_blocking'): os.set_blocking(w1, False) os.set_blocking(w2, False) signal.set_wakeup_fd(w1) self.assertEqual(signal.set_wakeup_fd(w2), w1) self.assertEqual(signal.set_wakeup_fd(-1), w2) self.assertEqual(signal.set_wakeup_fd(-1), -1) @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(support.has_socket_support, "needs working sockets.") def test_set_wakeup_fd_socket_result(self): sock1 = socket.socket() self.addCleanup(sock1.close) sock1.setblocking(False) fd1 = sock1.fileno() sock2 = socket.socket() self.addCleanup(sock2.close) sock2.setblocking(False) fd2 = sock2.fileno() signal.set_wakeup_fd(fd1) self.assertEqual(signal.set_wakeup_fd(fd2), fd1) self.assertEqual(signal.set_wakeup_fd(-1), fd2) self.assertEqual(signal.set_wakeup_fd(-1), -1) # On Windows, files are always blocking and Windows does not provide a # function to test if a socket is in non-blocking mode. @unittest.skipIf(sys.platform == "win32", "tests specific to POSIX") @unittest.skipIf(support.is_emscripten, "Emscripten cannot fstat pipes.") @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_set_wakeup_fd_blocking(self): rfd, wfd = os.pipe() self.addCleanup(os.close, rfd) self.addCleanup(os.close, wfd) # fd must be non-blocking os.set_blocking(wfd, True) with self.assertRaises(ValueError) as cm: signal.set_wakeup_fd(wfd) self.assertEqual(str(cm.exception), "the fd %s must be in non-blocking mode" % wfd) # non-blocking is ok os.set_blocking(wfd, False) signal.set_wakeup_fd(wfd) signal.set_wakeup_fd(-1) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class WakeupSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def check_wakeup(self, test_body, *signals, ordered=True): # use a subprocess to have only one thread code = """if 1: import _testcapi import os import signal import struct signals = {!r} def handler(signum, frame): pass def check_signum(signals): data = os.read(read, len(signals)+1) raised = struct.unpack('%uB' % len(data), data) if not {!r}: raised = set(raised) signals = set(signals) if raised != signals: raise Exception("%r != %r" % (raised, signals)) {} signal.signal(signal.SIGALRM, handler) read, write = os.pipe() os.set_blocking(write, False) signal.set_wakeup_fd(write) test() check_signum(signals) os.close(read) os.close(write) """.format(tuple(map(int, signals)), ordered, test_body) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") def test_wakeup_write_error(self): # Issue #16105: write() errors in the C signal handler should not # pass silently. # Use a subprocess to have only one thread. 
code = """if 1: import _testcapi import errno import os import signal import sys from test.support import captured_stderr def handler(signum, frame): 1/0 signal.signal(signal.SIGALRM, handler) r, w = os.pipe() os.set_blocking(r, False) # Set wakeup_fd a read-only file descriptor to trigger the error signal.set_wakeup_fd(r) try: with captured_stderr() as err: signal.raise_signal(signal.SIGALRM) except ZeroDivisionError: # An ignored exception should have been printed out on stderr err = err.getvalue() if ('Exception ignored when trying to write to the signal wakeup fd' not in err): raise AssertionError(err) if ('OSError: [Errno %d]' % errno.EBADF) not in err: raise AssertionError(err) else: raise AssertionError("ZeroDivisionError not raised") os.close(r) os.close(w) """ r, w = os.pipe() try: os.write(r, b'x') except OSError: pass else: self.skipTest("OS doesn't report write() error on the read end of a pipe") finally: os.close(r) os.close(w) assert_python_ok('-c', code) def test_wakeup_fd_early(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) # We attempt to get a signal during the sleep, # before select is called try: select.select([], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") before_time = time.monotonic() select.select([read], [], [], TIMEOUT_FULL) after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_wakeup_fd_during(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) before_time = time.monotonic() # We attempt to get a signal during the select call try: select.select([read], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_signum(self): self.check_wakeup("""def test(): signal.signal(signal.SIGUSR1, handler) signal.raise_signal(signal.SIGUSR1) signal.raise_signal(signal.SIGALRM) """, signal.SIGUSR1, signal.SIGALRM) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pending(self): self.check_wakeup("""def test(): signum1 = signal.SIGUSR1 signum2 = signal.SIGUSR2 signal.signal(signum1, handler) signal.signal(signum2, handler) signal.pthread_sigmask(signal.SIG_BLOCK, (signum1, signum2)) signal.raise_signal(signum1) signal.raise_signal(signum2) # Unblocking the 2 signals calls the C signal handler twice signal.pthread_sigmask(signal.SIG_UNBLOCK, (signum1, signum2)) """, signal.SIGUSR1, signal.SIGUSR2, ordered=False) @unittest.skipUnless(hasattr(socket, 'socketpair'), 'need socket.socketpair') class WakeupSocketSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_socket(self): # use a subprocess to have only one thread code = """if 1: import signal import socket import struct import _testcapi signum = signal.SIGINT signals = (signum,) def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() 
write.setblocking(False) signal.set_wakeup_fd(write.fileno()) signal.raise_signal(signum) data = read.recv(1) if not data: raise Exception("no signum written") raised = struct.unpack('B', data) if raised != signals: raise Exception("%r != %r" % (raised, signals)) read.close() write.close() """ assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_send_error(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() read.setblocking(False) write.setblocking(False) signal.set_wakeup_fd(write.fileno()) # Close sockets: send() will fail read.close() write.close() with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if ('Exception ignored when trying to {action} to the signal wakeup fd' not in err): raise AssertionError(err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_warn_on_full_buffer(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT # This handler will be called, but we intentionally won't read from # the wakeup fd. def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() # Fill the socketpair buffer if sys.platform == 'win32': # bpo-34130: On Windows, sometimes non-blocking send fails to fill # the full socketpair buffer, so use a timeout of 50 ms instead. write.settimeout(0.050) else: write.setblocking(False) written = 0 if sys.platform == "vxworks": CHUNK_SIZES = (1,) else: # Start with large chunk size to reduce the # number of send needed to fill the buffer. 
CHUNK_SIZES = (2 ** 16, 2 ** 8, 1) for chunk_size in CHUNK_SIZES: chunk = b"x" * chunk_size try: while True: write.send(chunk) written += chunk_size except (BlockingIOError, TimeoutError): pass print(f"%s bytes written into the socketpair" % written, flush=True) write.setblocking(False) try: write.send(b"x") except BlockingIOError: # The socketpair buffer seems full pass else: raise AssertionError("%s bytes failed to fill the socketpair " "buffer" % written) # By default, we get a warning when a signal arrives msg = ('Exception ignored when trying to {action} ' 'to the signal wakeup fd') signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("first set_wakeup_fd() test failed, " "stderr: %r" % err) # And also if warn_on_full_buffer=True signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=True) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("set_wakeup_fd(warn_on_full_buffer=True) " "test failed, stderr: %r" % err) # But not if warn_on_full_buffer=False signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=False) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if err != "": raise AssertionError("set_wakeup_fd(warn_on_full_buffer=False) " "test failed, stderr: %r" % err) # And then check the default again, to make sure warn_on_full_buffer # settings don't leak across calls. signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("second set_wakeup_fd() test failed, " "stderr: %r" % err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'siginterrupt'), "needs signal.siginterrupt()") @support.requires_subprocess() @unittest.skipUnless(hasattr(os, "pipe"), "requires os.pipe()") class SiginterruptTest(unittest.TestCase): def readpipe_interrupted(self, interrupt, timeout=support.SHORT_TIMEOUT): """Perform a read during which a signal will arrive. Return True if the read is interrupted by the signal and raises an exception. Return False if it returns normally. 
""" # use a subprocess to have only one thread, to have a timeout on the # blocking read and to not touch signal handling in this process code = """if 1: import errno import os import signal import sys interrupt = %r r, w = os.pipe() def handler(signum, frame): 1 / 0 signal.signal(signal.SIGALRM, handler) if interrupt is not None: signal.siginterrupt(signal.SIGALRM, interrupt) print("ready") sys.stdout.flush() # run the test twice try: for loop in range(2): # send a SIGALRM in a second (during the read) signal.alarm(1) try: # blocking call: read from a pipe without data os.read(r, 1) except ZeroDivisionError: pass else: sys.exit(2) sys.exit(3) finally: os.close(r) os.close(w) """ % (interrupt,) with spawn_python('-c', code) as process: try: # wait until the child process is loaded and has started first_line = process.stdout.readline() stdout, stderr = process.communicate(timeout=timeout) except subprocess.TimeoutExpired: process.kill() return False else: stdout = first_line + stdout exitcode = process.wait() if exitcode not in (2, 3): raise Exception("Child error (exit code %s): %r" % (exitcode, stdout)) return (exitcode == 3) def test_without_siginterrupt(self): # If a signal handler is installed and siginterrupt is not called # at all, when that signal arrives, it interrupts a syscall that's in # progress. interrupted = self.readpipe_interrupted(None) self.assertTrue(interrupted) def test_siginterrupt_on(self): # If a signal handler is installed and siginterrupt is called with # a true value for the second argument, when that signal arrives, it # interrupts a syscall that's in progress. interrupted = self.readpipe_interrupted(True) self.assertTrue(interrupted) @support.requires_resource('walltime') def test_siginterrupt_off(self): # If a signal handler is installed and siginterrupt is called with # a false value for the second argument, when that signal arrives, it # does not interrupt a syscall that's in progress. interrupted = self.readpipe_interrupted(False, timeout=2) self.assertFalse(interrupted) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") @unittest.skipUnless(hasattr(signal, 'getitimer') and hasattr(signal, 'setitimer'), "needs signal.getitimer() and signal.setitimer()") class ItimerTest(unittest.TestCase): def setUp(self): self.hndl_called = False self.hndl_count = 0 self.itimer = None self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm) def tearDown(self): signal.signal(signal.SIGALRM, self.old_alarm) if self.itimer is not None: # test_itimer_exc doesn't change this attr # just ensure that itimer is stopped signal.setitimer(self.itimer, 0) def sig_alrm(self, *args): self.hndl_called = True def sig_vtalrm(self, *args): self.hndl_called = True if self.hndl_count > 3: # it shouldn't be here, because it should have been disabled. raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " "timer.") elif self.hndl_count == 3: # disable ITIMER_VIRTUAL, this function shouldn't be called anymore signal.setitimer(signal.ITIMER_VIRTUAL, 0) self.hndl_count += 1 def sig_prof(self, *args): self.hndl_called = True signal.setitimer(signal.ITIMER_PROF, 0) def test_itimer_exc(self): # XXX I'm assuming -1 is an invalid itimer, but maybe some platform # defines it ? self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) # Negative times are treated as zero on some platforms. 
if 0: self.assertRaises(signal.ItimerError, signal.setitimer, signal.ITIMER_REAL, -1) def test_itimer_real(self): self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1.0) signal.pause() self.assertEqual(self.hndl_called, True) # Issue 3864, unknown if this affects earlier versions of freebsd also @unittest.skipIf(sys.platform in ('netbsd5',) or is_apple_mobile, 'itimer not reliable (does not mix well with threading) on some BSDs.') def test_itimer_virtual(self): self.itimer = signal.ITIMER_VIRTUAL signal.signal(signal.SIGVTALRM, self.sig_vtalrm) signal.setitimer(self.itimer, 0.3, 0.2) for _ in support.busy_retry(support.LONG_TIMEOUT): # use up some virtual time by doing real work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): # sig_vtalrm handler stopped this itimer break # virtual itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_itimer_prof(self): self.itimer = signal.ITIMER_PROF signal.signal(signal.SIGPROF, self.sig_prof) signal.setitimer(self.itimer, 0.2, 0.2) for _ in support.busy_retry(support.LONG_TIMEOUT): # do some work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): # sig_prof handler stopped this itimer break # profiling itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_setitimer_tiny(self): # bpo-30807: C setitimer() takes a microsecond-resolution interval. # Check that float -> timeval conversion doesn't round # the interval down to zero, which would disable the timer. self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1e-6) time.sleep(1) self.assertEqual(self.hndl_called, True) class PendingSignalsTests(unittest.TestCase): """ Test pthread_sigmask(), pthread_kill(), sigpending() and sigwait() functions. 
""" @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending_empty(self): self.assertEqual(signal.sigpending(), set()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending(self): code = """if 1: import os import signal def handler(signum, frame): 1/0 signum = signal.SIGUSR1 signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) os.kill(os.getpid(), signum) pending = signal.sigpending() for sig in pending: assert isinstance(sig, signal.Signals), repr(pending) if pending != {signum}: raise Exception('%s != {%s}' % (pending, signum)) try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill(self): code = """if 1: import signal import threading import sys signum = signal.SIGUSR1 def handler(signum, frame): 1/0 signal.signal(signum, handler) tid = threading.get_ident() try: signal.pthread_kill(tid, signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def wait_helper(self, blocked, test): """ test: body of the "def test(signum):" function. blocked: number of the blocked signal """ code = '''if 1: import signal import sys from signal import Signals def handler(signum, frame): 1/0 %s blocked = %s signum = signal.SIGALRM # child: block and wait the signal try: signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [blocked]) # Do the tests test(signum) # The handler must not be called on unblock try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [blocked]) except ZeroDivisionError: print("the signal handler has been called", file=sys.stderr) sys.exit(1) except BaseException as err: print("error: {}".format(err), file=sys.stderr) sys.stderr.flush() sys.exit(1) ''' % (test.strip(), blocked) # sig*wait* must be called with the signal blocked: since the current # process might have several threads running, use a subprocess to have # a single thread. 
assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') def test_sigwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) received = signal.sigwait([signum]) assert isinstance(received, signal.Signals), received if received != signum: raise Exception('received %s, not %s' % (received, signum)) ''') @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'), 'need signal.sigwaitinfo()') def test_sigwaitinfo(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigwaitinfo([signum]) if info.si_signo != signum: raise Exception("info.si_signo != %s" % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigtimedwait([signum], 10.1000) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_poll(self): # check that polling with sigtimedwait works self.wait_helper(signal.SIGALRM, ''' def test(signum): import os os.kill(os.getpid(), signum) info = signal.sigtimedwait([signum], 0) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_timeout(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): received = signal.sigtimedwait([signum], 1.0) if received is not None: raise Exception("received=%r" % (received,)) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_negative_timeout(self): signum = signal.SIGALRM self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_sigwait_thread(self): # Check that calling sigwait() from a thread doesn't suspend the whole # process. A new interpreter is spawned to avoid problems when mixing # threads and fork(): only async-safe functions are allowed between # fork() and exec(). 
assert_python_ok("-c", """if True: import os, threading, sys, time, signal # the default handler terminates the process signum = signal.SIGUSR1 def kill_later(): # wait until the main thread is waiting in sigwait() time.sleep(1) os.kill(os.getpid(), signum) # the signal must be blocked by all the threads signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) killer = threading.Thread(target=kill_later) killer.start() received = signal.sigwait([signum]) if received != signum: print("sigwait() received %s, not %s" % (received, signum), file=sys.stderr) sys.exit(1) killer.join() # unblock the signal, which should have been cleared by sigwait() signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) """) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_arguments(self): self.assertRaises(TypeError, signal.pthread_sigmask) self.assertRaises(TypeError, signal.pthread_sigmask, 1) self.assertRaises(TypeError, signal.pthread_sigmask, 1, 2, 3) self.assertRaises(OSError, signal.pthread_sigmask, 1700, []) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [signal.NSIG]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [0]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [1<<1000]) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_valid_signals(self): s = signal.pthread_sigmask(signal.SIG_BLOCK, signal.valid_signals()) self.addCleanup(signal.pthread_sigmask, signal.SIG_SETMASK, s) # Get current blocked set s = signal.pthread_sigmask(signal.SIG_UNBLOCK, signal.valid_signals()) self.assertLessEqual(s, signal.valid_signals()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @threading_helper.requires_working_threading() def test_pthread_sigmask(self): code = """if 1: import signal import os; import threading def handler(signum, frame): 1/0 def kill(signum): os.kill(os.getpid(), signum) def check_mask(mask): for sig in mask: assert isinstance(sig, signal.Signals), repr(sig) def read_sigmask(): sigmask = signal.pthread_sigmask(signal.SIG_BLOCK, []) check_mask(sigmask) return sigmask signum = signal.SIGUSR1 # Install our signal handler old_handler = signal.signal(signum, handler) # Unblock SIGUSR1 (and copy the old mask) to test our signal handler old_mask = signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) check_mask(old_mask) try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Block and then raise SIGUSR1. 
The signal is blocked: the signal # handler is not called, and the signal is now pending mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) check_mask(mask) kill(signum) # Check the new mask blocked = read_sigmask() check_mask(blocked) if signum not in blocked: raise Exception("%s not in %s" % (signum, blocked)) if old_mask ^ blocked != {signum}: raise Exception("%s ^ %s != {%s}" % (old_mask, blocked, signum)) # Unblock SIGUSR1 try: # unblock the pending signal calls immediately the signal handler signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Check the new mask unblocked = read_sigmask() if signum in unblocked: raise Exception("%s in %s" % (signum, unblocked)) if blocked ^ unblocked != {signum}: raise Exception("%s ^ %s != {%s}" % (blocked, unblocked, signum)) if old_mask != unblocked: raise Exception("%s != %s" % (old_mask, unblocked)) """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') @threading_helper.requires_working_threading() def test_pthread_kill_main_thread(self): # Test that a signal can be sent to the main thread with pthread_kill() # before any other thread has been created (see issue #12392). code = """if True: import threading import signal import sys def handler(signum, frame): sys.exit(3) signal.signal(signal.SIGUSR1, handler) signal.pthread_kill(threading.get_ident(), signal.SIGUSR1) sys.exit(2) """ with spawn_python('-c', code) as process: stdout, stderr = process.communicate() exitcode = process.wait() if exitcode != 3: raise Exception("Child error (exit code %s): %s" % (exitcode, stdout)) class StressTest(unittest.TestCase): """ Stress signal delivery, especially when a signal arrives in the middle of recomputing the signal state or executing previously tripped signal handlers. """ def setsig(self, signum, handler): old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) def measure_itimer_resolution(self): N = 20 times = [] def handler(signum=None, frame=None): if len(times) < N: times.append(time.perf_counter()) # 1 µs is the smallest possible timer interval, # we want to measure what the concrete duration # will be on this platform signal.setitimer(signal.ITIMER_REAL, 1e-6) self.addCleanup(signal.setitimer, signal.ITIMER_REAL, 0) self.setsig(signal.SIGALRM, handler) handler() while len(times) < N: time.sleep(1e-3) durations = [times[i+1] - times[i] for i in range(len(times) - 1)] med = statistics.median(durations) if support.verbose: print("detected median itimer() resolution: %.6f s." % (med,)) return med def decide_itimer_count(self): # Some systems have poor setitimer() resolution (for example # measured around 20 ms. on FreeBSD 9), so decide on a reasonable # number of sequential timers based on that. reso = self.measure_itimer_resolution() if reso <= 1e-4: return 10000 elif reso <= 1e-2: return 100 else: self.skipTest("detected itimer resolution (%.3f s.) too high " "(> 10 ms.) on this platform (or system too busy)" % (reso,)) @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_dependent(self): """ This test uses dependent signal handlers. """ N = self.decide_itimer_count() sigs = [] def first_handler(signum, frame): # 1e-6 is the minimum non-zero value for `setitimer()`. 
# Choose a random delay so as to improve chances of # triggering a race condition. Ideally the signal is received # when inside critical signal-handling routines such as # Py_MakePendingCalls(). signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) def second_handler(signum=None, frame=None): sigs.append(signum) # Here on Linux, SIGPROF > SIGALRM > SIGUSR1. By using both # ascending and descending sequences (SIGUSR1 then SIGALRM, # SIGPROF then SIGALRM), we maximize chances of hitting a bug. self.setsig(signal.SIGPROF, first_handler) self.setsig(signal.SIGUSR1, first_handler) self.setsig(signal.SIGALRM, second_handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: os.kill(os.getpid(), signal.SIGPROF) expected_sigs += 1 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 1 while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_simultaneous(self): """ This test uses simultaneous signal handlers. """ N = self.decide_itimer_count() sigs = [] def handler(signum, frame): sigs.append(signum) self.setsig(signal.SIGUSR1, handler) self.setsig(signal.SIGALRM, handler) # for ITIMER_REAL expected_sigs = 0 while expected_sigs < N: # Hopefully the SIGALRM will be received somewhere during # initial processing of SIGUSR1. signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 2 # Wait for handlers to run to avoid signal coalescing for _ in support.sleeping_retry(support.SHORT_TIMEOUT): if len(sigs) >= expected_sigs: break # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipIf(is_apple, "crashes due to system bug (FB13453490)") @unittest.skipUnless(hasattr(signal, "SIGUSR1"), "test needs SIGUSR1") @threading_helper.requires_working_threading() def test_stress_modifying_handlers(self): # bpo-43406: race condition between trip_signal() and signal.signal signum = signal.SIGUSR1 num_sent_signals = 0 num_received_signals = 0 do_stop = False def custom_handler(signum, frame): nonlocal num_received_signals num_received_signals += 1 def set_interrupts(): nonlocal num_sent_signals while not do_stop: signal.raise_signal(signum) num_sent_signals += 1 def cycle_handlers(): while num_sent_signals < 100 or num_received_signals < 1: for i in range(20000): # Cycle between a Python-defined and a non-Python handler for handler in [custom_handler, signal.SIG_IGN]: signal.signal(signum, handler) old_handler = signal.signal(signum, custom_handler) self.addCleanup(signal.signal, signum, old_handler) t = threading.Thread(target=set_interrupts) try: ignored = False with support.catch_unraisable_exception() as cm: t.start() cycle_handlers() do_stop = True t.join() if cm.unraisable is not None: # An unraisable exception may be printed out when # a signal is ignored due to the aforementioned # race condition, check it. 
self.assertIsInstance(cm.unraisable.exc_value, OSError) self.assertIn( f"Signal {signum:d} ignored due to race condition", str(cm.unraisable.exc_value)) ignored = True # bpo-43406: Even if it is unlikely, it's technically possible that # all signals were ignored because of race conditions. if not ignored: # Sanity check that some signals were received, but not all self.assertGreater(num_received_signals, 0) self.assertLessEqual(num_received_signals, num_sent_signals) finally: do_stop = True t.join() class RaiseSignalTest(unittest.TestCase): def test_sigint(self): with self.assertRaises(KeyboardInterrupt): signal.raise_signal(signal.SIGINT) @unittest.skipIf(sys.platform != "win32", "Windows specific test") def test_invalid_argument(self): try: SIGHUP = 1 # not supported on win32 signal.raise_signal(SIGHUP) self.fail("OSError (Invalid argument) expected") except OSError as e: if e.errno == errno.EINVAL: pass else: raise def test_handler(self): is_ok = False def handler(a, b): nonlocal is_ok is_ok = True old_signal = signal.signal(signal.SIGINT, handler) self.addCleanup(signal.signal, signal.SIGINT, old_signal) signal.raise_signal(signal.SIGINT) self.assertTrue(is_ok) def test__thread_interrupt_main(self): # See https://github.com/python/cpython/issues/102397 code = """if 1: import _thread class Foo(): def __del__(self): _thread.interrupt_main() x = Foo() """ rc, out, err = assert_python_ok('-c', code) self.assertIn(b'OSError: Signal 2 ignored due to race condition', err) class PidfdSignalTest(unittest.TestCase): @unittest.skipUnless( hasattr(signal, "pidfd_send_signal"), "pidfd support not built in", ) def test_pidfd_send_signal(self): with self.assertRaises(OSError) as cm: signal.pidfd_send_signal(0, signal.SIGINT) if cm.exception.errno == errno.ENOSYS: self.skipTest("kernel does not support pidfds") elif cm.exception.errno == errno.EPERM: self.skipTest("Not enough privileges to use pidfs") self.assertEqual(cm.exception.errno, errno.EBADF) my_pidfd = os.open(f'/proc/{os.getpid()}', os.O_DIRECTORY) self.addCleanup(os.close, my_pidfd) with self.assertRaisesRegex(TypeError, "^siginfo must be None$"): signal.pidfd_send_signal(my_pidfd, signal.SIGINT, object(), 0) with self.assertRaises(KeyboardInterrupt): signal.pidfd_send_signal(my_pidfd, signal.SIGINT) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_socket.py000066400000000000000000010121611471441230600212300ustar00rootroot00000000000000import unittest from test import support from test.support import ( is_apple, os_helper, refleak_helper, socket_helper, threading_helper ) import _thread as thread import array import contextlib import errno import gc import io import itertools import math import os import pickle import platform import queue import random import re import select import signal import socket import string import struct import sys import tempfile import threading import time import traceback from weakref import proxy try: import multiprocessing except ImportError: multiprocessing = False try: import fcntl except ImportError: fcntl = None try: import _testcapi except ImportError: _testcapi = None support.requires_working_socket(module=True) HOST = socket_helper.HOST # test unicode string and carriage return MSG = 'Michael Gilfix was here\u1234\r\n'.encode('utf-8') VMADDR_CID_LOCAL = 1 VSOCKPORT = 1234 AIX = platform.system() == "AIX" WSL = "microsoft-standard-WSL" in platform.release() try: import _socket except ImportError: _socket = None def 
skipForRefleakHuntinIf(condition, issueref): if not condition: def decorator(f): f.client_skip = lambda f: f return f else: def decorator(f): @contextlib.wraps(f) def wrapper(*args, **kwds): if refleak_helper.hunting_for_refleaks(): raise unittest.SkipTest(f"ignore while hunting for refleaks, see {issueref}") return f(*args, **kwds) def client_skip(f): @contextlib.wraps(f) def wrapper(*args, **kwds): if refleak_helper.hunting_for_refleaks(): return return f(*args, **kwds) return wrapper wrapper.client_skip = client_skip return wrapper return decorator def get_cid(): if fcntl is None: return None if not hasattr(socket, 'IOCTL_VM_SOCKETS_GET_LOCAL_CID'): return None try: with open("/dev/vsock", "rb") as f: r = fcntl.ioctl(f, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, " ") except OSError: return None else: return struct.unpack("I", r)[0] def _have_socket_can(): """Check whether CAN sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_isotp(): """Check whether CAN ISOTP sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_j1939(): """Check whether CAN J1939 sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_rds(): """Check whether RDS sockets are supported on this host.""" try: s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_alg(): """Check whether AF_ALG sockets are supported on this host.""" try: s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_qipcrtr(): """Check whether AF_QIPCRTR sockets are supported on this host.""" try: s = socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_vsock(): """Check whether AF_VSOCK sockets are supported on this host.""" cid = get_cid() return (cid is not None) def _have_socket_bluetooth(): """Check whether AF_BLUETOOTH sockets are supported on this host.""" try: # RFCOMM is supported by all platforms with bluetooth support. Windows # does not support omitting the protocol. s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_hyperv(): """Check whether AF_HYPERV sockets are supported on this host.""" try: s = socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) except (AttributeError, OSError): return False else: s.close() return True @contextlib.contextmanager def socket_setdefaulttimeout(timeout): old_timeout = socket.getdefaulttimeout() try: socket.setdefaulttimeout(timeout) yield finally: socket.setdefaulttimeout(old_timeout) HAVE_SOCKET_CAN = _have_socket_can() HAVE_SOCKET_CAN_ISOTP = _have_socket_can_isotp() HAVE_SOCKET_CAN_J1939 = _have_socket_can_j1939() HAVE_SOCKET_RDS = _have_socket_rds() HAVE_SOCKET_ALG = _have_socket_alg() HAVE_SOCKET_QIPCRTR = _have_socket_qipcrtr() HAVE_SOCKET_VSOCK = _have_socket_vsock() # Older Android versions block UDPLITE with SELinux. 
HAVE_SOCKET_UDPLITE = ( hasattr(socket, "IPPROTO_UDPLITE") and not (support.is_android and platform.android_ver().api_level < 29)) HAVE_SOCKET_BLUETOOTH = _have_socket_bluetooth() HAVE_SOCKET_HYPERV = _have_socket_hyperv() # Size in bytes of the int type SIZEOF_INT = array.array("i").itemsize class SocketTCPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None class SocketUDPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.port = socket_helper.bind_port(self.serv) def tearDown(self): self.serv.close() self.serv = None class SocketUDPLITETest(SocketUDPTest): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) self.port = socket_helper.bind_port(self.serv) class SocketCANTest(unittest.TestCase): """To be able to run this test, a `vcan0` CAN interface can be created with the following commands: # modprobe vcan # ip link add dev vcan0 type vcan # ip link set up vcan0 """ interface = 'vcan0' bufsize = 128 """The CAN frame structure is defined in : struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ __u8 can_dlc; /* data length code: 0 .. 8 */ __u8 data[8] __attribute__((aligned(8))); }; """ can_frame_fmt = "=IB3x8s" can_frame_size = struct.calcsize(can_frame_fmt) """The Broadcast Management Command frame structure is defined in : struct bcm_msg_head { __u32 opcode; __u32 flags; __u32 count; struct timeval ival1, ival2; canid_t can_id; __u32 nframes; struct can_frame frames[0]; } `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see `struct can_frame` definition). Must use native not standard types for packing. """ bcm_cmd_msg_fmt = "@3I4l2I" bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8) def setUp(self): self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) self.addCleanup(self.s.close) try: self.s.bind((self.interface,)) except OSError: self.skipTest('network interface `%s` does not exist' % self.interface) class SocketRDSTest(unittest.TestCase): """To be able to run this test, the `rds` kernel module must be loaded: # modprobe rds """ bufsize = 8192 def setUp(self): self.serv = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) self.addCleanup(self.serv.close) try: self.port = socket_helper.bind_port(self.serv) except OSError: self.skipTest('unable to bind RDS socket') class ThreadableTest: """Threadable Test class The ThreadableTest class makes it easy to create a threaded client/server pair from an existing unit test. To create a new threaded class from an existing unit test, use multiple inheritance: class NewClass (OldClass, ThreadableTest): pass This class defines two new fixture functions with obvious purposes for overriding: clientSetUp () clientTearDown () Any new test functions within the class must then define tests in pairs, where the test name is preceded with a '_' to indicate the client portion of the test. Ex: def testFoo(self): # Server portion def _testFoo(self): # Client portion Any exceptions raised by the clients during their tests are caught and transferred to the main thread to alert the testing framework. 
Note, the server setup function cannot call any blocking functions that rely on the client thread during setup, unless serverExplicitReady() is called just before the blocking call (such as in setting up a client/server connection and performing the accept() in setUp(). """ def __init__(self): # Swap the true setup function self.__setUp = self.setUp self.setUp = self._setUp def serverExplicitReady(self): """This method allows the server to explicitly indicate that it wants the client thread to proceed. This is useful if the server is about to execute a blocking routine that is dependent upon the client thread during its setup routine.""" self.server_ready.set() def _setUp(self): self.enterContext(threading_helper.wait_threads_exit()) self.server_ready = threading.Event() self.client_ready = threading.Event() self.done = threading.Event() self.queue = queue.Queue(1) self.server_crashed = False def raise_queued_exception(): if self.queue.qsize(): raise self.queue.get() self.addCleanup(raise_queued_exception) # Do some munging to start the client test. methodname = self.id() i = methodname.rfind('.') methodname = methodname[i+1:] test_method = getattr(self, '_' + methodname) self.client_thread = thread.start_new_thread( self.clientRun, (test_method,)) try: self.__setUp() except: self.server_crashed = True raise finally: self.server_ready.set() self.client_ready.wait() self.addCleanup(self.done.wait) def clientRun(self, test_func): self.server_ready.wait() try: self.clientSetUp() except BaseException as e: self.queue.put(e) self.clientTearDown() return finally: self.client_ready.set() if self.server_crashed: self.clientTearDown() return if not hasattr(test_func, '__call__'): raise TypeError("test_func must be a callable function") try: test_func() except BaseException as e: self.queue.put(e) finally: self.clientTearDown() def clientSetUp(self): raise NotImplementedError("clientSetUp must be implemented.") def clientTearDown(self): self.done.set() thread.exit() class ThreadedTCPSocketTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedUDPSocketTest(SocketUDPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class ThreadedUDPLITESocketTest(SocketUDPLITETest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPLITETest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedCANSocketTest(SocketCANTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketCANTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) try: self.cli.bind((self.interface,)) except OSError: # skipTest should not 
be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedRDSSocketTest(SocketRDSTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketRDSTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) try: # RDS sockets must be bound explicitly to send or receive data self.cli.bind((HOST, 0)) self.cli_addr = self.cli.getsockname() except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipIf(WSL, 'VSOCK does not work on Microsoft WSL') @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.serv.close) self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT)) self.serv.listen() self.serverExplicitReady() self.serv.settimeout(support.LOOPBACK_TIMEOUT) self.conn, self.connaddr = self.serv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): time.sleep(0.1) self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.cli.close) cid = get_cid() if cid in (socket.VMADDR_CID_HOST, socket.VMADDR_CID_ANY): # gh-119461: Use the local communication address (loopback) cid = VMADDR_CID_LOCAL self.cli.connect((cid, VSOCKPORT)) def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) def _testStream(self): self.cli.send(MSG) self.cli.close() class SocketConnectedTest(ThreadedTCPSocketTest): """Socket tests for client-server connection. self.cli_conn is a client socket connected to the server. The setUp() method guarantees that it is connected to the server. """ def __init__(self, methodName='runTest'): ThreadedTCPSocketTest.__init__(self, methodName=methodName) def setUp(self): ThreadedTCPSocketTest.setUp(self) # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None ThreadedTCPSocketTest.tearDown(self) def clientSetUp(self): ThreadedTCPSocketTest.clientSetUp(self) self.cli.connect((HOST, self.port)) self.serv_conn = self.cli def clientTearDown(self): self.serv_conn.close() self.serv_conn = None ThreadedTCPSocketTest.clientTearDown(self) class SocketPairTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) self.cli = None self.serv = None def socketpair(self): # To be overridden by some child classes. return socket.socketpair() def setUp(self): self.serv, self.cli = self.socketpair() def tearDown(self): if self.serv: self.serv.close() self.serv = None def clientSetUp(self): pass def clientTearDown(self): if self.cli: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) # The following classes are used by the sendmsg()/recvmsg() tests. 
# Combining, for instance, ConnectedStreamTestMixin and TCPTestBase # gives a drop-in replacement for SocketConnectedTest, but different # address families can be used, and the attributes serv_addr and # cli_addr will be set to the addresses of the endpoints. class SocketTestBase(unittest.TestCase): """A base class for socket tests. Subclasses must provide methods newSocket() to return a new socket and bindSock(sock) to bind it to an unused address. Creates a socket self.serv and sets self.serv_addr to its address. """ def setUp(self): self.serv = self.newSocket() self.addCleanup(self.close_server) self.bindServer() def close_server(self): self.serv.close() self.serv = None def bindServer(self): """Bind server socket and set self.serv_addr to its address.""" self.bindSock(self.serv) self.serv_addr = self.serv.getsockname() class SocketListeningTestMixin(SocketTestBase): """Mixin to listen on the server socket.""" def setUp(self): super().setUp() self.serv.listen() class ThreadedSocketTestMixin(SocketTestBase, ThreadableTest): """Mixin to add client socket and allow client/server tests. Client socket is self.cli and its address is self.cli_addr. See ThreadableTest for usage information. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = self.newClientSocket() self.bindClient() def newClientSocket(self): """Return a new socket for use as client.""" return self.newSocket() def bindClient(self): """Bind client socket and set self.cli_addr to its address.""" self.bindSock(self.cli) self.cli_addr = self.cli.getsockname() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ConnectedStreamTestMixin(SocketListeningTestMixin, ThreadedSocketTestMixin): """Mixin to allow client/server stream tests with connected client. Server's socket representing connection to client is self.cli_conn and client's connection to server is self.serv_conn. (Based on SocketConnectedTest.) """ def setUp(self): super().setUp() # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None super().tearDown() def clientSetUp(self): super().clientSetUp() self.cli.connect(self.serv_addr) self.serv_conn = self.cli def clientTearDown(self): try: self.serv_conn.close() self.serv_conn = None except AttributeError: pass super().clientTearDown() class UnixSocketTestBase(SocketTestBase): """Base class for Unix-domain socket tests.""" # This class is used for file descriptor passing tests, so we # create the sockets in a private directory so that other users # can't send anything that might be problematic for a privileged # user running the tests. 
def bindSock(self, sock): path = socket_helper.create_unix_domain_name() self.addCleanup(os_helper.unlink, path) socket_helper.bind_unix_socket(sock, path) class UnixStreamBase(UnixSocketTestBase): """Base class for Unix-domain SOCK_STREAM tests.""" def newSocket(self): return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) class InetTestBase(SocketTestBase): """Base class for IPv4 socket tests.""" host = HOST def setUp(self): super().setUp() self.port = self.serv_addr[1] def bindSock(self, sock): socket_helper.bind_port(sock, host=self.host) class TCPTestBase(InetTestBase): """Base class for TCP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) class UDPTestBase(InetTestBase): """Base class for UDP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM) class UDPLITETestBase(InetTestBase): """Base class for UDPLITE-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) class SCTPStreamBase(InetTestBase): """Base class for SCTP tests in one-to-one (SOCK_STREAM) mode.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP) class Inet6TestBase(InetTestBase): """Base class for IPv6 socket tests.""" host = socket_helper.HOSTv6 class UDP6TestBase(Inet6TestBase): """Base class for UDP-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) class UDPLITE6TestBase(Inet6TestBase): """Base class for UDPLITE-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) # Test-skipping decorators for use with ThreadableTest. def skipWithClientIf(condition, reason): """Skip decorated test if condition is true, add client_skip decorator. If the decorated object is not a class, sets its attribute "client_skip" to a decorator which will return an empty function if the test is to be skipped, or the original function if it is not. This can be used to avoid running the client part of a skipped test when using ThreadableTest. """ def client_pass(*args, **kwargs): pass def skipdec(obj): retval = unittest.skip(reason)(obj) if not isinstance(obj, type): retval.client_skip = lambda f: client_pass return retval def noskipdec(obj): if not (isinstance(obj, type) or hasattr(obj, "client_skip")): obj.client_skip = lambda f: f return obj return skipdec if condition else noskipdec def requireAttrs(obj, *attributes): """Skip decorated test if obj is missing any of the given attributes. Sets client_skip attribute as skipWithClientIf() does. """ missing = [name for name in attributes if not hasattr(obj, name)] return skipWithClientIf( missing, "don't have " + ", ".join(name for name in missing)) def requireSocket(*args): """Skip decorated test if a socket cannot be created with given arguments. When an argument is given as a string, will use the value of that attribute of the socket module, or skip the test if it doesn't exist. Sets client_skip attribute as skipWithClientIf() does. """ err = None missing = [obj for obj in args if isinstance(obj, str) and not hasattr(socket, obj)] if missing: err = "don't have " + ", ".join(name for name in missing) else: callargs = [getattr(socket, obj) if isinstance(obj, str) else obj for obj in args] try: s = socket.socket(*callargs) except OSError as e: # XXX: check errno? 
err = str(e) else: s.close() return skipWithClientIf( err is not None, "can't create socket({0}): {1}".format( ", ".join(str(o) for o in args), err)) ####################################################################### ## Begin Tests class GeneralModuleTests(unittest.TestCase): @unittest.skipUnless(_socket is not None, 'need _socket module') def test_socket_type(self): self.assertTrue(gc.is_tracked(_socket.socket)) with self.assertRaisesRegex(TypeError, "immutable"): _socket.socket.foo = 1 def test_SocketType_is_socketobject(self): import _socket self.assertTrue(socket.SocketType is _socket.socket) s = socket.socket() self.assertIsInstance(s, socket.SocketType) s.close() def test_repr(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with s: self.assertIn('fd=%i' % s.fileno(), repr(s)) self.assertIn('family=%s' % socket.AF_INET, repr(s)) self.assertIn('type=%s' % socket.SOCK_STREAM, repr(s)) self.assertIn('proto=0', repr(s)) self.assertNotIn('raddr', repr(s)) s.bind(('127.0.0.1', 0)) self.assertIn('laddr', repr(s)) self.assertIn(str(s.getsockname()), repr(s)) self.assertIn('[closed]', repr(s)) self.assertNotIn('laddr', repr(s)) @unittest.skipUnless(_socket is not None, 'need _socket module') def test_csocket_repr(self): s = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) try: expected = ('' % (s.fileno(), s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) finally: s.close() expected = ('' % (s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) def test_weakref(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: p = proxy(s) self.assertEqual(p.fileno(), s.fileno()) s = None support.gc_collect() # For PyPy or other GCs. try: p.fileno() except ReferenceError: pass else: self.fail('Socket proxy still exists') def testSocketError(self): # Testing socket module exceptions msg = "Error raising socket exception (%s)." with self.assertRaises(OSError, msg=msg % 'OSError'): raise OSError with self.assertRaises(OSError, msg=msg % 'socket.herror'): raise socket.herror with self.assertRaises(OSError, msg=msg % 'socket.gaierror'): raise socket.gaierror def testSendtoErrors(self): # Testing that sendto doesn't mask failures. See #10169. 
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind(('', 0)) sockname = s.getsockname() # 2 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None) self.assertIn('not NoneType',str(cm.exception)) # 3 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, None) self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 'bar', sockname) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None, None) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto(b'foo') self.assertIn('(1 given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, sockname, 4) self.assertIn('(4 given)', str(cm.exception)) def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET if socket.has_ipv6: socket.AF_INET6 socket.SOCK_STREAM socket.SOCK_DGRAM socket.SOCK_RAW socket.SOCK_RDM socket.SOCK_SEQPACKET socket.SOL_SOCKET socket.SO_REUSEADDR def testCrucialIpProtoConstants(self): socket.IPPROTO_TCP socket.IPPROTO_UDP if socket.has_ipv6: socket.IPPROTO_IPV6 @unittest.skipUnless(os.name == "nt", "Windows specific") def testWindowsSpecificConstants(self): socket.IPPROTO_ICLFXBM socket.IPPROTO_ST socket.IPPROTO_CBT socket.IPPROTO_IGP socket.IPPROTO_RDP socket.IPPROTO_PGM socket.IPPROTO_L2TP socket.IPPROTO_SCTP @unittest.skipIf(support.is_wasi, "WASI is missing these methods") def test_socket_methods(self): # socket methods that depend on a configure HAVE_ check. They should # be present on all platforms except WASI. names = [ "_accept", "bind", "connect", "connect_ex", "getpeername", "getsockname", "listen", "recvfrom", "recvfrom_into", "sendto", "setsockopt", "shutdown" ] for name in names: if not hasattr(socket.socket, name): self.fail(f"socket method {name} is missing") @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test3542SocketOptions(self): # Ref. 
issue #35569 and https://tools.ietf.org/html/rfc3542 opts = { 'IPV6_CHECKSUM', 'IPV6_DONTFRAG', 'IPV6_DSTOPTS', 'IPV6_HOPLIMIT', 'IPV6_HOPOPTS', 'IPV6_NEXTHOP', 'IPV6_PATHMTU', 'IPV6_PKTINFO', 'IPV6_RECVDSTOPTS', 'IPV6_RECVHOPLIMIT', 'IPV6_RECVHOPOPTS', 'IPV6_RECVPATHMTU', 'IPV6_RECVPKTINFO', 'IPV6_RECVRTHDR', 'IPV6_RECVTCLASS', 'IPV6_RTHDR', 'IPV6_RTHDRDSTOPTS', 'IPV6_RTHDR_TYPE_0', 'IPV6_TCLASS', 'IPV6_USE_MIN_MTU', } for opt in opts: self.assertTrue( hasattr(socket, opt), f"Missing RFC3542 socket option '{opt}'" ) def testHostnameRes(self): # Testing hostname resolution mechanisms hostname = socket.gethostname() try: ip = socket.gethostbyname(hostname) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertTrue(ip.find('.') >= 0, "Error resolving host to ip.") try: hname, aliases, ipaddrs = socket.gethostbyaddr(ip) except OSError: # Probably a similar problem as above; skip this test self.skipTest('name lookup failure') all_host_names = [hostname, hname] + aliases fqhn = socket.getfqdn(ip) if not fqhn in all_host_names: self.fail("Error testing host resolution mechanisms. (fqdn: %s, all: %s)" % (fqhn, repr(all_host_names))) def test_host_resolution(self): for addr in [socket_helper.HOSTv4, '10.0.0.1', '255.255.255.255']: self.assertEqual(socket.gethostbyname(addr), addr) # we don't test socket_helper.HOSTv6 because there's a chance it doesn't have # a matching name entry (e.g. 'ip6-localhost') for host in [socket_helper.HOSTv4]: self.assertIn(host, socket.gethostbyaddr(host)[2]) def test_host_resolution_bad_address(self): # These are all malformed IP addresses and expected not to resolve to # any result. But some ISPs, e.g. AWS and AT&T, may successfully # resolve these IPs. In particular, AT&T's DNS Error Assist service # will break this test. See https://bugs.python.org/issue42092 for a # workaround. explanation = ( "resolving an invalid IP address did not raise OSError; " "can be caused by a broken DNS server" ) for addr in ['0.1.1.~1', '1+.1.1.1', '::1q', '::1::2', '1:1:1:1:1:1:1:1:1']: with self.assertRaises(OSError, msg=addr): socket.gethostbyname(addr) with self.assertRaises(OSError, msg=explanation): socket.gethostbyaddr(addr) @unittest.skipUnless(hasattr(socket, 'sethostname'), "test needs socket.sethostname()") @unittest.skipUnless(hasattr(socket, 'gethostname'), "test needs socket.gethostname()") def test_sethostname(self): oldhn = socket.gethostname() try: socket.sethostname('new') except OSError as e: if e.errno == errno.EPERM: self.skipTest("test should be run as root") else: raise try: # running test as root! 
self.assertEqual(socket.gethostname(), 'new') # Should work with bytes objects too socket.sethostname(b'bar') self.assertEqual(socket.gethostname(), 'bar') finally: socket.sethostname(oldhn) @unittest.skipUnless(hasattr(socket, 'if_nameindex'), 'socket.if_nameindex() not available.') def testInterfaceNameIndex(self): interfaces = socket.if_nameindex() for index, name in interfaces: self.assertIsInstance(index, int) self.assertIsInstance(name, str) # interface indices are non-zero integers self.assertGreater(index, 0) _index = socket.if_nametoindex(name) self.assertIsInstance(_index, int) self.assertEqual(index, _index) _name = socket.if_indextoname(index) self.assertIsInstance(_name, str) self.assertEqual(name, _name) @unittest.skipUnless(hasattr(socket, 'if_indextoname'), 'socket.if_indextoname() not available.') def testInvalidInterfaceIndexToName(self): self.assertRaises(OSError, socket.if_indextoname, 0) self.assertRaises(OverflowError, socket.if_indextoname, -1) self.assertRaises(OverflowError, socket.if_indextoname, 2**1000) self.assertRaises(TypeError, socket.if_indextoname, '_DEADBEEF') if hasattr(socket, 'if_nameindex'): indices = dict(socket.if_nameindex()) for index in indices: index2 = index + 2**32 if index2 not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index2) for index in 2**32-1, 2**64-1: if index not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index) @unittest.skipUnless(hasattr(socket, 'if_nametoindex'), 'socket.if_nametoindex() not available.') def testInvalidInterfaceNameToIndex(self): self.assertRaises(TypeError, socket.if_nametoindex, 0) self.assertRaises(OSError, socket.if_nametoindex, '_DEADBEEF') @unittest.skipUnless(hasattr(sys, 'getrefcount'), 'test needs sys.getrefcount()') def testRefCountGetNameInfo(self): # Testing reference count for getnameinfo try: # On some versions, this loses a reference orig = sys.getrefcount(__name__) socket.getnameinfo(__name__,0) except TypeError: if sys.getrefcount(__name__) != orig: self.fail("socket.getnameinfo loses a reference") def testInterpreterCrash(self): # Making sure getnameinfo doesn't crash the interpreter try: # On some versions, this crashes the interpreter. socket.getnameinfo(('x', 0, 0, 0), 0) except OSError: pass def testNtoH(self): # This just checks that htons etc. are their own inverse, # when looking at the lower 16 or 32 bits. 
sizes = {socket.htonl: 32, socket.ntohl: 32, socket.htons: 16, socket.ntohs: 16} for func, size in sizes.items(): mask = (1<' % family.value) self.assertEqual(str(family), str(family.value)) self.assertEqual(type, socket.SOCK_STREAM) self.assertEqual(repr(type), '' % type.value) self.assertEqual(str(type), str(type.value)) infos = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) for _, socktype, _, _, _ in infos: self.assertEqual(socktype, socket.SOCK_STREAM) # test proto and flags arguments socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) # a server willing to support both IPv4 and IPv6 will # usually do this socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) # test keyword arguments a = socket.getaddrinfo(HOST, None) b = socket.getaddrinfo(host=HOST, port=None) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, socket.AF_INET) b = socket.getaddrinfo(HOST, None, family=socket.AF_INET) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) b = socket.getaddrinfo(HOST, None, type=socket.SOCK_STREAM) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) b = socket.getaddrinfo(HOST, None, proto=socket.SOL_TCP) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(HOST, None, flags=socket.AI_PASSIVE) self.assertEqual(a, b) a = socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(host=None, port=0, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM, proto=0, flags=socket.AI_PASSIVE) self.assertEqual(a, b) # Issue #6697. self.assertRaises(UnicodeEncodeError, socket.getaddrinfo, 'localhost', '\uD800') # Issue 17269: test workaround for OS X platform bug segfault if hasattr(socket, 'AI_NUMERICSERV'): try: # The arguments here are undefined and the call may succeed # or fail. All we care here is that it doesn't segfault. socket.getaddrinfo("localhost", None, 0, 0, 0, socket.AI_NUMERICSERV) except socket.gaierror: pass @unittest.skipIf(_testcapi is None, "requires _testcapi") def test_getaddrinfo_int_port_overflow(self): # gh-74895: Test that getaddrinfo does not raise OverflowError on port. # # POSIX getaddrinfo() never specify the valid range for "service" # decimal port number values. For IPv4 and IPv6 they are technically # unsigned 16-bit values, but the API is protocol agnostic. Which values # trigger an error from the C library function varies by platform as # they do not all perform validation. # The key here is that we don't want to produce OverflowError as Python # prior to 3.12 did for ints outside of a [LONG_MIN, LONG_MAX] range. # Leave the error up to the underlying string based platform C API. from _testcapi import ULONG_MAX, LONG_MAX, LONG_MIN try: socket.getaddrinfo(None, ULONG_MAX + 1, type=socket.SOCK_STREAM) except OverflowError: # Platforms differ as to what values consitute a getaddrinfo() error # return. Some fail for LONG_MAX+1, others ULONG_MAX+1, and Windows # silently accepts such huge "port" aka "service" numeric values. 
self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MAX + 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MAX - 0xffff + 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass try: socket.getaddrinfo(None, LONG_MIN - 1, type=socket.SOCK_STREAM) except OverflowError: self.fail("Either no error or socket.gaierror expected.") except socket.gaierror: pass socket.getaddrinfo(None, 0, type=socket.SOCK_STREAM) # No error expected. socket.getaddrinfo(None, 0xffff, type=socket.SOCK_STREAM) # No error expected. def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) @unittest.skipUnless(support.is_resource_enabled('network'), 'network is not enabled') def test_idna(self): # Check for internet access before running test # (issue #12804, issue #25138). with socket_helper.transient_internet('python.org'): socket.gethostbyname('python.org') # these should all be successful domain = 'испытание.pythontest.net' socket.gethostbyname(domain) socket.gethostbyname_ex(domain) socket.getaddrinfo(domain,0,socket.AF_UNSPEC,socket.SOCK_STREAM) # this may not work if the forward lookup chooses the IPv6 address, as that doesn't # have a reverse entry yet # socket.gethostbyaddr('испытание.python.org') def check_sendall_interrupted(self, with_timeout): # socketpair() is not strictly required, but it makes things easier. if not hasattr(signal, 'alarm') or not hasattr(socket, 'socketpair'): self.skipTest("signal.alarm and socket.socketpair required for this test") # Our signal handlers clobber the C errno by calling a math function # with an invalid domain value. 
def ok_handler(*args): self.assertRaises(ValueError, math.acosh, 0) def raising_handler(*args): self.assertRaises(ValueError, math.acosh, 0) 1 // 0 c, s = socket.socketpair() old_alarm = signal.signal(signal.SIGALRM, raising_handler) try: if with_timeout: # Just above the one second minimum for signal.alarm c.settimeout(1.5) with self.assertRaises(ZeroDivisionError): signal.alarm(1) c.sendall(b"x" * support.SOCK_MAX_SIZE) if with_timeout: signal.signal(signal.SIGALRM, ok_handler) signal.alarm(1) self.assertRaises(TimeoutError, c.sendall, b"x" * support.SOCK_MAX_SIZE) finally: signal.alarm(0) signal.signal(signal.SIGALRM, old_alarm) c.close() s.close() def test_sendall_interrupted(self): self.check_sendall_interrupted(False) def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) def test_dealloc_warn(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) r = repr(sock) with self.assertWarns(ResourceWarning) as cm: sock = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) # An open socket file object gets dereferenced after the socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) f = sock.makefile('rb') r = repr(sock) sock = None support.gc_collect() with self.assertWarns(ResourceWarning): f = None support.gc_collect() def test_name_closed_socketio(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: fp = sock.makefile("rb") fp.close() self.assertEqual(repr(fp), "<_io.BufferedReader name=-1>") def test_unusable_closed_socketio(self): with socket.socket() as sock: fp = sock.makefile("rb", buffering=0) self.assertTrue(fp.readable()) self.assertFalse(fp.writable()) self.assertFalse(fp.seekable()) fp.close() self.assertRaises(ValueError, fp.readable) self.assertRaises(ValueError, fp.writable) self.assertRaises(ValueError, fp.seekable) def test_socket_close(self): sock = socket.socket() try: sock.bind((HOST, 0)) socket.close(sock.fileno()) with self.assertRaises(OSError): sock.listen(1) finally: with self.assertRaises(OSError): # sock.close() fails with EBADF sock.close() with self.assertRaises(TypeError): socket.close(None) with self.assertRaises(OSError): socket.close(-1) def test_makefile_mode(self): for mode in 'r', 'rb', 'rw', 'w', 'wb': with self.subTest(mode=mode): with socket.socket() as sock: encoding = None if "b" in mode else "utf-8" with sock.makefile(mode, encoding=encoding) as fp: self.assertEqual(fp.mode, mode) def test_makefile_invalid_mode(self): for mode in 'rt', 'x', '+', 'a': with self.subTest(mode=mode): with socket.socket() as sock: with self.assertRaisesRegex(ValueError, 'invalid mode'): sock.makefile(mode) def test_pickle(self): sock = socket.socket() with sock: for protocol in range(pickle.HIGHEST_PROTOCOL + 1): self.assertRaises(TypeError, pickle.dumps, sock, protocol) for protocol in range(pickle.HIGHEST_PROTOCOL + 1): family = pickle.loads(pickle.dumps(socket.AF_INET, protocol)) self.assertEqual(family, socket.AF_INET) type = pickle.loads(pickle.dumps(socket.SOCK_STREAM, protocol)) self.assertEqual(type, socket.SOCK_STREAM) def test_listen_backlog(self): for backlog in 0, -1: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen(backlog) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen() @support.cpython_only @unittest.skipIf(_testcapi is None, "requires _testcapi") def test_listen_backlog_overflow(self): # Issue 15989 import _testcapi with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: 
srv.bind((HOST, 0)) self.assertRaises(OverflowError, srv.listen, _testcapi.INT_MAX + 1) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_flowinfo(self): self.assertRaises(OverflowError, socket.getnameinfo, (socket_helper.HOSTv6, 0, 0xffffffff), 0) with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s: self.assertRaises(OverflowError, s.bind, (socket_helper.HOSTv6, 0, -10)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_getaddrinfo_ipv6_basic(self): ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D', # Note capital letter `D`. 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, 0)) def test_getfqdn_filter_localhost(self): self.assertEqual(socket.getfqdn(), socket.getfqdn("0.0.0.0")) self.assertEqual(socket.getfqdn(), socket.getfqdn("::")) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getaddrinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface (Linux, Mac OS X) (ifindex, test_interface) = socket.if_nameindex()[0] ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + test_interface, 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getaddrinfo_ipv6_scopeid_numeric(self): # Also works on Linux and Mac OS X, but is not documented (?) # Windows, Linux and Max OS X allow nonexistent interface numbers here. ifindex = 42 ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + str(ifindex), 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') @unittest.skipUnless(hasattr(socket, 'if_nameindex'), "test needs socket.if_nameindex()") def test_getnameinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface. (ifindex, test_interface) = socket.if_nameindex()[0] sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + test_interface, '1234')) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getnameinfo_ipv6_scopeid_numeric(self): # Also works on Linux (undocumented), but does not work on Mac OS X # Windows and Linux allow nonexistent interface numbers here. ifindex = 42 sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. 
nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + str(ifindex), '1234')) def test_str_for_enums(self): # Make sure that the AF_* and SOCK_* constants have enum-like string # reprs. with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: self.assertEqual(repr(s.family), '' % s.family.value) self.assertEqual(repr(s.type), '' % s.type.value) self.assertEqual(str(s.family), str(s.family.value)) self.assertEqual(str(s.type), str(s.type.value)) def test_socket_consistent_sock_type(self): SOCK_NONBLOCK = getattr(socket, 'SOCK_NONBLOCK', 0) SOCK_CLOEXEC = getattr(socket, 'SOCK_CLOEXEC', 0) sock_type = socket.SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC with socket.socket(socket.AF_INET, sock_type) as s: self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(1) self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(0) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(True) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(False) self.assertEqual(s.type, socket.SOCK_STREAM) def test_unknown_socket_family_repr(self): # Test that when created with a family that's not one of the known # AF_*/SOCK_* constants, socket.family just returns the number. # # To do this we fool socket.socket into believing it already has an # open fd because on this path it doesn't actually verify the family and # type and populates the socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) fd = sock.detach() unknown_family = max(socket.AddressFamily.__members__.values()) + 1 unknown_type = max( kind for name, kind in socket.SocketKind.__members__.items() if name not in {'SOCK_NONBLOCK', 'SOCK_CLOEXEC'} ) + 1 with socket.socket( family=unknown_family, type=unknown_type, proto=23, fileno=fd) as s: self.assertEqual(s.family, unknown_family) self.assertEqual(s.type, unknown_type) # some OS like macOS ignore proto self.assertIn(s.proto, {0, 23}) @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()') def test__sendfile_use_sendfile(self): class File: def __init__(self, fd): self.fd = fd def fileno(self): return self.fd with socket.socket() as sock: fd = os.open(os.curdir, os.O_RDONLY) os.close(fd) with self.assertRaises(socket._GiveupOnSendfile): sock._sendfile_use_sendfile(File(fd)) with self.assertRaises(OverflowError): sock._sendfile_use_sendfile(File(2**1000)) with self.assertRaises(TypeError): sock._sendfile_use_sendfile(File(None)) def _test_socket_fileno(self, s, family, stype): self.assertEqual(s.family, family) self.assertEqual(s.type, stype) fd = s.fileno() s2 = socket.socket(fileno=fd) self.addCleanup(s2.close) # detach old fd to avoid double close s.detach() self.assertEqual(s2.family, family) self.assertEqual(s2.type, stype) self.assertEqual(s2.fileno(), fd) def test_socket_fileno(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_STREAM) if hasattr(socket, "SOCK_DGRAM"): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_DGRAM) if socket_helper.IPV6_ENABLED: s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOSTv6, 0, 0, 0)) self._test_socket_fileno(s, socket.AF_INET6, socket.SOCK_STREAM) if hasattr(socket, "AF_UNIX"): unix_name = socket_helper.create_unix_domain_name() 
self.addCleanup(os_helper.unlink, unix_name) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) with s: try: s.bind(unix_name) except PermissionError: pass else: self._test_socket_fileno(s, socket.AF_UNIX, socket.SOCK_STREAM) def test_socket_fileno_rejects_float(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=42.5) def test_socket_fileno_rejects_other_types(self): with self.assertRaises(TypeError): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno="foo") def test_socket_fileno_rejects_invalid_socket(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1) @unittest.skipIf(os.name == "nt", "Windows disallows -1 only") def test_socket_fileno_rejects_negative(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-42) def test_socket_fileno_requires_valid_fd(self): WSAENOTSOCK = 10038 with self.assertRaises(OSError) as cm: socket.socket(fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=os_helper.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) def test_socket_fileno_requires_socket_fd(self): with tempfile.NamedTemporaryFile() as afile: with self.assertRaises(OSError): socket.socket(fileno=afile.fileno()) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=afile.fileno()) self.assertEqual(cm.exception.errno, errno.ENOTSOCK) def test_addressfamily_enum(self): import _socket, enum CheckedAddressFamily = enum._old_convert_( enum.IntEnum, 'AddressFamily', 'socket', lambda C: C.isupper() and C.startswith('AF_'), source=_socket, ) enum._test_simple_enum(CheckedAddressFamily, socket.AddressFamily) def test_socketkind_enum(self): import _socket, enum CheckedSocketKind = enum._old_convert_( enum.IntEnum, 'SocketKind', 'socket', lambda C: C.isupper() and C.startswith('SOCK_'), source=_socket, ) enum._test_simple_enum(CheckedSocketKind, socket.SocketKind) def test_msgflag_enum(self): import _socket, enum CheckedMsgFlag = enum._old_convert_( enum.IntFlag, 'MsgFlag', 'socket', lambda C: C.isupper() and C.startswith('MSG_'), source=_socket, ) enum._test_simple_enum(CheckedMsgFlag, socket.MsgFlag) def test_addressinfo_enum(self): import _socket, enum CheckedAddressInfo = enum._old_convert_( enum.IntFlag, 'AddressInfo', 'socket', lambda C: C.isupper() and C.startswith('AI_'), source=_socket) enum._test_simple_enum(CheckedAddressInfo, socket.AddressInfo) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class BasicCANTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_RAW @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCMConstants(self): socket.CAN_BCM # opcodes socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task socket.CAN_BCM_TX_SEND # send one CAN frame socket.CAN_BCM_RX_SETUP # create RX content filter subscription socket.CAN_BCM_RX_DELETE # remove RX content filter subscription socket.CAN_BCM_RX_READ # read properties of RX content filter subscription socket.CAN_BCM_TX_STATUS # reply to TX_READ request 
socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) socket.CAN_BCM_RX_STATUS # reply to RX_READ request socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) # flags socket.CAN_BCM_SETTIMER socket.CAN_BCM_STARTTIMER socket.CAN_BCM_TX_COUNTEVT socket.CAN_BCM_TX_ANNOUNCE socket.CAN_BCM_TX_CP_CAN_ID socket.CAN_BCM_RX_FILTER_ID socket.CAN_BCM_RX_CHECK_DLC socket.CAN_BCM_RX_NO_AUTOTIMER socket.CAN_BCM_RX_ANNOUNCE_RESUME socket.CAN_BCM_TX_RESET_MULTI_IDX socket.CAN_BCM_RX_RTR_FRAME def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testCreateBCMSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: pass def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: address = ('', ) s.bind(address) self.assertEqual(s.getsockname(), address) def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: self.assertRaisesRegex(OSError, 'interface name too long', s.bind, ('x' * 1024,)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_LOOPBACK"), 'socket.CAN_RAW_LOOPBACK required for this test.') def testLoopback(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: for loopback in (0, 1): s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK, loopback) self.assertEqual(loopback, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_FILTER"), 'socket.CAN_RAW_FILTER required for this test.') def testFilter(self): can_id, can_mask = 0x200, 0x700 can_filter = struct.pack("=II", can_id, can_mask) with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter) self.assertEqual(can_filter, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, 8)) s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, bytearray(can_filter)) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class CANTest(ThreadedCANSocketTest): def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @classmethod def build_can_frame(cls, can_id, data): """Build a CAN frame.""" can_dlc = len(data) data = data.ljust(8, b'\x00') return struct.pack(cls.can_frame_fmt, can_id, can_dlc, data) @classmethod def dissect_can_frame(cls, frame): """Dissect a CAN frame.""" can_id, can_dlc, data = struct.unpack(cls.can_frame_fmt, frame) return (can_id, can_dlc, data[:can_dlc]) def testSendFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) self.assertEqual(addr[0], self.interface) def _testSendFrame(self): self.cf = self.build_can_frame(0x00, b'\x01\x02\x03\x04\x05') self.cli.send(self.cf) def testSendMaxFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) def _testSendMaxFrame(self): self.cf = self.build_can_frame(0x00, b'\x07' * 8) self.cli.send(self.cf) def testSendMultiFrames(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf1, cf) cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf2, cf) def _testSendMultiFrames(self): self.cf1 = self.build_can_frame(0x07, b'\x44\x33\x22\x11') self.cli.send(self.cf1) self.cf2 = 
self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def _testBCM(self): cf, addr = self.cli.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) can_id, can_dlc, data = self.dissect_can_frame(cf) self.assertEqual(self.can_id, can_id) self.assertEqual(self.data, data) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCM(self): bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) self.addCleanup(bcm.close) bcm.connect((self.interface,)) self.can_id = 0x123 self.data = bytes([0xc0, 0xff, 0xee]) self.cf = self.build_can_frame(self.can_id, self.data) opcode = socket.CAN_BCM_TX_SEND flags = 0 count = 0 ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 bcm_can_id = 0x0222 nframes = 1 assert len(self.cf) == 16 header = struct.pack(self.bcm_cmd_msg_fmt, opcode, flags, count, ival1_seconds, ival1_usec, ival2_seconds, ival2_usec, bcm_can_id, nframes, ) header_plus_frame = header + self.cf bytes_sent = bcm.send(header_plus_frame) self.assertEqual(bytes_sent, len(header_plus_frame)) @unittest.skipUnless(HAVE_SOCKET_CAN_ISOTP, 'CAN ISOTP required for this test.') class ISOTPTest(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_ISOTP socket.SOCK_DGRAM def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_ISOTP"), 'socket.CAN_ISOTP required for this test.') def testCreateISOTPSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: pass def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: with self.assertRaisesRegex(OSError, 'interface name too long'): s.bind(('x' * 1024, 1, 2)) def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: addr = self.interface, 0x123, 0x456 s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_CAN_J1939, 'CAN J1939 required for this test.') class J1939Test(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testJ1939Constants(self): socket.CAN_J1939 socket.J1939_MAX_UNICAST_ADDR socket.J1939_IDLE_ADDR socket.J1939_NO_ADDR socket.J1939_NO_NAME socket.J1939_PGN_REQUEST socket.J1939_PGN_ADDRESS_CLAIMED socket.J1939_PGN_ADDRESS_COMMANDED socket.J1939_PGN_PDU1_MAX socket.J1939_PGN_MAX socket.J1939_NO_PGN # J1939 socket options socket.SO_J1939_FILTER socket.SO_J1939_PROMISC socket.SO_J1939_SEND_PRIO socket.SO_J1939_ERRQUEUE socket.SCM_J1939_DEST_ADDR socket.SCM_J1939_DEST_NAME socket.SCM_J1939_PRIO socket.SCM_J1939_ERRQUEUE socket.J1939_NLA_PAD socket.J1939_NLA_BYTES_ACKED socket.J1939_EE_INFO_NONE socket.J1939_EE_INFO_TX_ABORT socket.J1939_FILTER_MAX @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testCreateJ1939Socket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: pass 
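# Illustrative sketch, hedged: the pack/unpack pattern behind build_can_frame()
# and dissect_can_frame() above, shown standalone.  The format string below is
# an assumption (the classic 16-byte SocketCAN layout: 32-bit CAN id, 8-bit DLC,
# 3 pad bytes, 8 data bytes); the test class defines its own can_frame_fmt,
# which is not shown in this excerpt.
import struct

ASSUMED_CAN_FRAME_FMT = "=IB3x8s"   # assumed layout, not taken from the tests

def pack_can_frame(can_id, data):
    # Record the real payload length in the DLC field and pad the payload
    # out to the fixed 8-byte data field.
    return struct.pack(ASSUMED_CAN_FRAME_FMT, can_id, len(data),
                       data.ljust(8, b'\x00'))

def unpack_can_frame(frame):
    # Reverse the packing and trim the padding back off using the DLC field.
    can_id, can_dlc, data = struct.unpack(ASSUMED_CAN_FRAME_FMT, frame)
    return can_id, data[:can_dlc]

frame = pack_can_frame(0x123, b'\x01\x02\x03')
assert len(frame) == 16                # the same size testBCM() asserts above
assert unpack_can_frame(frame) == (0x123, b'\x01\x02\x03')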
def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: addr = self.interface, socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_RDS socket.PF_RDS def testCreateSocket(self): with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: pass def testSocketBufferSize(self): bufsize = 16384 with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize) s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize) @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class RDSTest(ThreadedRDSSocketTest): def __init__(self, methodName='runTest'): ThreadedRDSSocketTest.__init__(self, methodName=methodName) def setUp(self): super().setUp() self.evt = threading.Event() def testSendAndRecv(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) self.assertEqual(self.cli_addr, addr) def _testSendAndRecv(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) def testPeek(self): data, addr = self.serv.recvfrom(self.bufsize, socket.MSG_PEEK) self.assertEqual(self.data, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testPeek(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) @requireAttrs(socket.socket, 'recvmsg') def testSendAndRecvMsg(self): data, ancdata, msg_flags, addr = self.serv.recvmsg(self.bufsize) self.assertEqual(self.data, data) @requireAttrs(socket.socket, 'sendmsg') def _testSendAndRecvMsg(self): self.data = b'hello ' * 10 self.cli.sendmsg([self.data], (), 0, (HOST, self.port)) def testSendAndRecvMulti(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data1, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data2, data) def _testSendAndRecvMulti(self): self.data1 = b'bacon' self.cli.sendto(self.data1, 0, (HOST, self.port)) self.data2 = b'egg' self.cli.sendto(self.data2, 0, (HOST, self.port)) def testSelect(self): r, w, x = select.select([self.serv], [], [], 3.0) self.assertIn(self.serv, r) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testSelect(self): self.data = b'select' self.cli.sendto(self.data, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_QIPCRTR, 'QIPCRTR sockets required for this test.') class BasicQIPCRTRTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_QIPCRTR def testCreateSocket(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: pass def testUnbound(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertEqual(s.getsockname()[1], 0) def testBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: socket_helper.bind_port(s, host=s.getsockname()[0]) self.assertNotEqual(s.getsockname()[1], 0) def testInvalidBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertRaises(OSError, socket_helper.bind_port, s, host=-2) def testAutoBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: s.connect((123, 123)) self.assertNotEqual(s.getsockname()[1], 0) 
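# A standalone sketch of the MSG_PEEK behaviour exercised by RDSTest.testPeek()
# above, using an ordinary UDP loopback pair so it does not require RDS
# support.  Like the datagram tests in this module, it assumes local datagram
# delivery is reliable.  Peeking returns the pending datagram without consuming
# it, so a following recvfrom() sees the same data again.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))          # let the OS pick a free port
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b'spam', recv_sock.getsockname())

peeked, _ = recv_sock.recvfrom(1024, socket.MSG_PEEK)   # leaves data queued
consumed, _ = recv_sock.recvfrom(1024)                  # same datagram again
assert peeked == consumed == b'spam'

send_sock.close()
recv_sock.close()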
@unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class BasicVSOCKTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_VSOCK def testVSOCKConstants(self): socket.SO_VM_SOCKETS_BUFFER_SIZE socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE socket.VMADDR_CID_ANY socket.VMADDR_PORT_ANY socket.VMADDR_CID_HOST socket.VM_SOCKETS_INVALID_VERSION socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID def testCreateSocket(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: pass def testSocketBufferSize(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: orig_max = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE) orig = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE) orig_min = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE, orig_max * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE, orig * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE, orig_min * 2) self.assertEqual(orig_max * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE)) self.assertEqual(orig * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE)) self.assertEqual(orig_min * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE)) @unittest.skipUnless(HAVE_SOCKET_BLUETOOTH, 'Bluetooth sockets required for this test.') class BasicBluetoothTest(unittest.TestCase): def testBluetoothConstants(self): socket.BDADDR_ANY socket.BDADDR_LOCAL socket.AF_BLUETOOTH socket.BTPROTO_RFCOMM if sys.platform != "win32": socket.BTPROTO_HCI socket.SOL_HCI socket.BTPROTO_L2CAP if not sys.platform.startswith("freebsd"): socket.BTPROTO_SCO def testCreateRfcommSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support L2CAP sockets") def testCreateL2capSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support HCI sockets") def testCreateHciSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI) as s: pass @unittest.skipIf(sys.platform == "win32" or sys.platform.startswith("freebsd"), "windows and freebsd do not support SCO sockets") def testCreateScoSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_SCO) as s: pass @unittest.skipUnless(HAVE_SOCKET_HYPERV, 'Hyper-V sockets required for this test.') class BasicHyperVTest(unittest.TestCase): def testHyperVConstants(self): socket.HVSOCKET_CONNECT_TIMEOUT socket.HVSOCKET_CONNECT_TIMEOUT_MAX socket.HVSOCKET_CONNECTED_SUSPEND socket.HVSOCKET_ADDRESS_FLAG_PASSTHRU socket.HV_GUID_ZERO socket.HV_GUID_WILDCARD socket.HV_GUID_BROADCAST socket.HV_GUID_CHILDREN socket.HV_GUID_LOOPBACK socket.HV_GUID_PARENT def testCreateHyperVSocketWithUnknownProtoFailure(self): expected = r"\[WinError 10041\]" with self.assertRaisesRegex(OSError, expected): socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM) def testCreateHyperVSocketAddrNotTupleFailure(self): expected = "connect(): AF_HYPERV address must be tuple, not str" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect(socket.HV_GUID_ZERO) 
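# A portable sketch of the setsockopt()/getsockopt() round-trip pattern used by
# BasicVSOCKTest.testSocketBufferSize() above, using SO_RCVBUF on an AF_INET
# socket so it can run without AF_VSOCK support.  Unlike the VSOCK buffer
# options, SO_RCVBUF may be rounded, doubled (Linux), or clamped by the kernel,
# so the value read back is reported rather than asserted to equal the request.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    orig = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, orig * 2)
    print('SO_RCVBUF:', orig, '->',
          s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))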
def testCreateHyperVSocketAddrNotTupleOf2StrsFailure(self): expected = "AF_HYPERV address must be a str tuple (vm_id, service_id)" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect((socket.HV_GUID_ZERO,)) def testCreateHyperVSocketAddrNotTupleOfStrsFailure(self): expected = "AF_HYPERV address must be a str tuple (vm_id, service_id)" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(TypeError, re.escape(expected)): s.connect((1, 2)) def testCreateHyperVSocketAddrVmIdNotValidUUIDFailure(self): expected = "connect(): AF_HYPERV address vm_id is not a valid UUID string" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(ValueError, re.escape(expected)): s.connect(("00", socket.HV_GUID_ZERO)) def testCreateHyperVSocketAddrServiceIdNotValidUUIDFailure(self): expected = "connect(): AF_HYPERV address service_id is not a valid UUID string" with socket.socket(socket.AF_HYPERV, socket.SOCK_STREAM, socket.HV_PROTOCOL_RAW) as s: with self.assertRaisesRegex(ValueError, re.escape(expected)): s.connect((socket.HV_GUID_ZERO, "00")) class BasicTCPTest(SocketConnectedTest): def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecv(self): # Testing large receive over TCP msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.serv_conn.send(MSG) def testOverFlowRecv(self): # Testing receive in chunks over TCP seg1 = self.cli_conn.recv(len(MSG) - 3) seg2 = self.cli_conn.recv(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecv(self): self.serv_conn.send(MSG) def testRecvFrom(self): # Testing large recvfrom() over TCP msg, addr = self.cli_conn.recvfrom(1024) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.serv_conn.send(MSG) def testOverFlowRecvFrom(self): # Testing recvfrom() in chunks over TCP seg1, addr = self.cli_conn.recvfrom(len(MSG)-3) seg2, addr = self.cli_conn.recvfrom(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecvFrom(self): self.serv_conn.send(MSG) def testSendAll(self): # Testing sendall() with a 2048 byte string over TCP msg = b'' while 1: read = self.cli_conn.recv(1024) if not read: break msg += read self.assertEqual(msg, b'f' * 2048) def _testSendAll(self): big_chunk = b'f' * 2048 self.serv_conn.sendall(big_chunk) def testFromFd(self): # Testing fromfd() fd = self.cli_conn.fileno() sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) self.assertIsInstance(sock, socket.socket) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testFromFd(self): self.serv_conn.send(MSG) def testDup(self): # Testing dup() sock = self.cli_conn.dup() self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDup(self): self.serv_conn.send(MSG) def check_shutdown(self): # Test shutdown() helper msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) # wait for _testShutdown[_overflow] to finish: on OS X, when the server # closes the connection the client also becomes disconnected, # and the client's shutdown call will fail. (Issue #4397.) 
self.done.wait() def testShutdown(self): self.check_shutdown() def _testShutdown(self): self.serv_conn.send(MSG) self.serv_conn.shutdown(2) @support.cpython_only @unittest.skipIf(_testcapi is None, "requires _testcapi") def testShutdown_overflow(self): self.check_shutdown() @support.cpython_only @unittest.skipIf(_testcapi is None, "requires _testcapi") def _testShutdown_overflow(self): import _testcapi self.serv_conn.send(MSG) # Issue 15989 self.assertRaises(OverflowError, self.serv_conn.shutdown, _testcapi.INT_MAX + 1) self.assertRaises(OverflowError, self.serv_conn.shutdown, 2 + (_testcapi.UINT_MAX + 1)) self.serv_conn.shutdown(2) def testDetach(self): # Testing detach() fileno = self.cli_conn.fileno() f = self.cli_conn.detach() self.assertEqual(f, fileno) # cli_conn cannot be used anymore... self.assertTrue(self.cli_conn._closed) self.assertRaises(OSError, self.cli_conn.recv, 1024) self.cli_conn.close() # ...but we can create another socket using the (still open) # file descriptor sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=f) self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDetach(self): self.serv_conn.send(MSG) class BasicUDPTest(ThreadedUDPSocketTest): def __init__(self, methodName='runTest'): ThreadedUDPSocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDP msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDP msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class BasicUDPLITETest(ThreadedUDPLITESocketTest): def __init__(self, methodName='runTest'): ThreadedUDPLITESocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDPLITE msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDPLITE msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) # Tests for the sendmsg()/recvmsg() interface. Where possible, the # same test code is used with different families and types of socket # (e.g. stream, datagram), and tests using recvmsg() are repeated # using recvmsg_into(). # # The generic test classes such as SendmsgTests and # RecvmsgGenericTests inherit from SendrecvmsgBase and expect to be # supplied with sockets cli_sock and serv_sock representing the # client's and the server's end of the connection respectively, and # attributes cli_addr and serv_addr holding their (numeric where # appropriate) addresses. 
# # The final concrete test classes combine these with subclasses of # SocketTestBase which set up client and server sockets of a specific # type, and with subclasses of SendrecvmsgBase such as # SendrecvmsgDgramBase and SendrecvmsgConnectedBase which map these # sockets to cli_sock and serv_sock and override the methods and # attributes of SendrecvmsgBase to fill in destination addresses if # needed when sending, check for specific flags in msg_flags, etc. # # RecvmsgIntoMixin provides a version of doRecvmsg() implemented using # recvmsg_into(). # XXX: like the other datagram (UDP) tests in this module, the code # here assumes that datagram delivery on the local machine will be # reliable. class SendrecvmsgBase: # Base class for sendmsg()/recvmsg() tests. # Time in seconds to wait before considering a test failed, or # None for no timeout. Not all tests actually set a timeout. fail_timeout = support.LOOPBACK_TIMEOUT def setUp(self): self.misc_event = threading.Event() super().setUp() def sendToServer(self, msg): # Send msg to the server. return self.cli_sock.send(msg) # Tuple of alternative default arguments for sendmsg() when called # via sendmsgToServer() (e.g. to include a destination address). sendmsg_to_server_defaults = () def sendmsgToServer(self, *args): # Call sendmsg() on self.cli_sock with the given arguments, # filling in any arguments which are not supplied with the # corresponding items of self.sendmsg_to_server_defaults, if # any. return self.cli_sock.sendmsg( *(args + self.sendmsg_to_server_defaults[len(args):])) def doRecvmsg(self, sock, bufsize, *args): # Call recvmsg() on sock with given arguments and return its # result. Should be used for tests which can use either # recvmsg() or recvmsg_into() - RecvmsgIntoMixin overrides # this method with one which emulates it using recvmsg_into(), # thus allowing the same test to be used for both methods. result = sock.recvmsg(bufsize, *args) self.registerRecvmsgResult(result) return result def registerRecvmsgResult(self, result): # Called by doRecvmsg() with the return value of recvmsg() or # recvmsg_into(). Can be overridden to arrange cleanup based # on the returned ancillary data, for instance. pass def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer. self.assertEqual(addr1, addr2) # Flags that are normally unset in msg_flags msg_flags_common_unset = 0 for name in ("MSG_CTRUNC", "MSG_OOB"): msg_flags_common_unset |= getattr(socket, name, 0) # Flags that are normally set msg_flags_common_set = 0 # Flags set when a complete record has been received (e.g. MSG_EOR # for SCTP) msg_flags_eor_indicator = 0 # Flags set when a complete record has not been received # (e.g. MSG_TRUNC for datagram sockets) msg_flags_non_eor_indicator = 0 def checkFlags(self, flags, eor=None, checkset=0, checkunset=0, ignore=0): # Method to check the value of msg_flags returned by recvmsg[_into](). # # Checks that all bits in msg_flags_common_set attribute are # set in "flags" and all bits in msg_flags_common_unset are # unset. # # The "eor" argument specifies whether the flags should # indicate that a full record (or datagram) has been received. 
# If "eor" is None, no checks are done; otherwise, checks # that: # # * if "eor" is true, all bits in msg_flags_eor_indicator are # set and all bits in msg_flags_non_eor_indicator are unset # # * if "eor" is false, all bits in msg_flags_non_eor_indicator # are set and all bits in msg_flags_eor_indicator are unset # # If "checkset" and/or "checkunset" are supplied, they require # the given bits to be set or unset respectively, overriding # what the attributes require for those bits. # # If any bits are set in "ignore", they will not be checked, # regardless of the other inputs. # # Will raise Exception if the inputs require a bit to be both # set and unset, and it is not ignored. defaultset = self.msg_flags_common_set defaultunset = self.msg_flags_common_unset if eor: defaultset |= self.msg_flags_eor_indicator defaultunset |= self.msg_flags_non_eor_indicator elif eor is not None: defaultset |= self.msg_flags_non_eor_indicator defaultunset |= self.msg_flags_eor_indicator # Function arguments override defaults defaultset &= ~checkunset defaultunset &= ~checkset # Merge arguments with remaining defaults, and check for conflicts checkset |= defaultset checkunset |= defaultunset inboth = checkset & checkunset & ~ignore if inboth: raise Exception("contradictory set, unset requirements for flags " "{0:#x}".format(inboth)) # Compare with given msg_flags value mask = (checkset | checkunset) & ~ignore self.assertEqual(flags & mask, checkset & mask) class RecvmsgIntoMixin(SendrecvmsgBase): # Mixin to implement doRecvmsg() using recvmsg_into(). def doRecvmsg(self, sock, bufsize, *args): buf = bytearray(bufsize) result = sock.recvmsg_into([buf], *args) self.registerRecvmsgResult(result) self.assertGreaterEqual(result[0], 0) self.assertLessEqual(result[0], bufsize) return (bytes(buf[:result[0]]),) + result[1:] class SendrecvmsgDgramFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for datagram sockets. @property def msg_flags_non_eor_indicator(self): return super().msg_flags_non_eor_indicator | socket.MSG_TRUNC class SendrecvmsgSCTPFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for SCTP sockets. @property def msg_flags_eor_indicator(self): return super().msg_flags_eor_indicator | socket.MSG_EOR class SendrecvmsgConnectionlessBase(SendrecvmsgBase): # Base class for tests on connectionless-mode sockets. Users must # supply sockets on attributes cli and serv to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.serv @property def cli_sock(self): return self.cli @property def sendmsg_to_server_defaults(self): return ([], [], 0, self.serv_addr) def sendToServer(self, msg): return self.cli_sock.sendto(msg, self.serv_addr) class SendrecvmsgConnectedBase(SendrecvmsgBase): # Base class for tests on connected sockets. Users must supply # sockets on attributes serv_conn and cli_conn (representing the # connections *to* the server and the client), to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.cli_conn @property def cli_sock(self): return self.serv_conn def checkRecvmsgAddress(self, addr1, addr2): # Address is currently "unspecified" for a connected socket, # so we don't examine it pass class SendrecvmsgServerTimeoutBase(SendrecvmsgBase): # Base class to set a timeout on server's socket. 
def setUp(self): super().setUp() self.serv_sock.settimeout(self.fail_timeout) class SendmsgTests(SendrecvmsgServerTimeoutBase): # Tests for sendmsg() which can use any socket type and do not # involve recvmsg() or recvmsg_into(). def testSendmsg(self): # Send a simple message with sendmsg(). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG]), len(MSG)) def testSendmsgDataGenerator(self): # Send from buffer obtained from a generator (not a sequence). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgDataGenerator(self): self.assertEqual(self.sendmsgToServer((o for o in [MSG])), len(MSG)) def testSendmsgAncillaryGenerator(self): # Gather (empty) ancillary data from a generator. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgAncillaryGenerator(self): self.assertEqual(self.sendmsgToServer([MSG], (o for o in [])), len(MSG)) def testSendmsgArray(self): # Send data from an array instead of the usual bytes object. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgArray(self): self.assertEqual(self.sendmsgToServer([array.array("B", MSG)]), len(MSG)) def testSendmsgGather(self): # Send message data from more than one buffer (gather write). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgGather(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) def testSendmsgBadArgs(self): # Check that sendmsg() rejects invalid arguments. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadArgs(self): self.assertRaises(TypeError, self.cli_sock.sendmsg) self.assertRaises(TypeError, self.sendmsgToServer, b"not in an iterable") self.assertRaises(TypeError, self.sendmsgToServer, object()) self.assertRaises(TypeError, self.sendmsgToServer, [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG, object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], 0, object()) self.sendToServer(b"done") def testSendmsgBadCmsg(self): # Check that invalid ancillary data items are rejected. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(object(), 0, b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, object(), b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, object())]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0)]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b"data", 42)]) self.sendToServer(b"done") @requireAttrs(socket, "CMSG_SPACE") def testSendmsgBadMultiCmsg(self): # Check that invalid ancillary data items are rejected when # more than one item is present. self.assertEqual(self.serv_sock.recv(1000), b"done") @testSendmsgBadMultiCmsg.client_skip def _testSendmsgBadMultiCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [0, 0, b""]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b""), object()]) self.sendToServer(b"done") def testSendmsgExcessCmsgReject(self): # Check that sendmsg() rejects excess ancillary data items # when the number that can be sent is limited. 
self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgExcessCmsgReject(self): if not hasattr(socket, "CMSG_SPACE"): # Can only send one item with self.assertRaises(OSError) as cm: self.sendmsgToServer([MSG], [(0, 0, b""), (0, 0, b"")]) self.assertIsNone(cm.exception.errno) self.sendToServer(b"done") def testSendmsgAfterClose(self): # Check that sendmsg() fails on a closed socket. pass def _testSendmsgAfterClose(self): self.cli_sock.close() self.assertRaises(OSError, self.sendmsgToServer, [MSG]) class SendmsgStreamTests(SendmsgTests): # Tests for sendmsg() which require a stream socket and do not # involve recvmsg() or recvmsg_into(). def testSendmsgExplicitNoneAddr(self): # Check that peer address can be specified as None. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgExplicitNoneAddr(self): self.assertEqual(self.sendmsgToServer([MSG], [], 0, None), len(MSG)) def testSendmsgTimeout(self): # Check that timeout works with sendmsg(). self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) def _testSendmsgTimeout(self): try: self.cli_sock.settimeout(0.03) try: while True: self.sendmsgToServer([b"a"*512]) except TimeoutError: pass except OSError as exc: if exc.errno != errno.ENOMEM: raise # bpo-33937 the test randomly fails on Travis CI with # "OSError: [Errno 12] Cannot allocate memory" else: self.fail("TimeoutError not raised") finally: self.misc_event.set() # XXX: would be nice to have more tests for sendmsg flags argument. # Linux supports MSG_DONTWAIT when sending, but in general, it # only works when receiving. Could add other platforms if they # support it too. @skipWithClientIf(sys.platform not in {"linux", "android"}, "MSG_DONTWAIT not known to work on this platform when " "sending") def testSendmsgDontWait(self): # Check that MSG_DONTWAIT in flags causes non-blocking behaviour. self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @testSendmsgDontWait.client_skip def _testSendmsgDontWait(self): try: with self.assertRaises(OSError) as cm: while True: self.sendmsgToServer([b"a"*512], [], socket.MSG_DONTWAIT) # bpo-33937: catch also ENOMEM, the test randomly fails on Travis CI # with "OSError: [Errno 12] Cannot allocate memory" self.assertIn(cm.exception.errno, (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOMEM)) finally: self.misc_event.set() class SendmsgConnectionlessTests(SendmsgTests): # Tests for sendmsg() which require a connectionless-mode # (e.g. datagram) socket, and do not involve recvmsg() or # recvmsg_into(). def testSendmsgNoDestAddr(self): # Check that sendmsg() fails when no destination address is # given for unconnected socket. pass def _testSendmsgNoDestAddr(self): self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG]) self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG], [], 0, None) class RecvmsgGenericTests(SendrecvmsgBase): # Tests for recvmsg() which can also be emulated using # recvmsg_into(), and can use any socket type. def testRecvmsg(self): # Receive a simple message with recvmsg[_into](). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsg(self): self.sendToServer(MSG) def testRecvmsgExplicitDefaults(self): # Test recvmsg[_into]() with default arguments provided explicitly. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgExplicitDefaults(self): self.sendToServer(MSG) def testRecvmsgShorter(self): # Receive a message smaller than buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) + 42) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShorter(self): self.sendToServer(MSG) def testRecvmsgTrunc(self): # Receive part of message, check for truncation indicators. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) def _testRecvmsgTrunc(self): self.sendToServer(MSG) def testRecvmsgShortAncillaryBuf(self): # Test ancillary data buffer too small to hold any ancillary data. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShortAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgLongAncillaryBuf(self): # Test large ancillary data buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgLongAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgAfterClose(self): # Check that recvmsg[_into]() fails on a closed socket. self.serv_sock.close() self.assertRaises(OSError, self.doRecvmsg, self.serv_sock, 1024) def _testRecvmsgAfterClose(self): pass def testRecvmsgTimeout(self): # Check that timeout works. try: self.serv_sock.settimeout(0.03) self.assertRaises(TimeoutError, self.doRecvmsg, self.serv_sock, len(MSG)) finally: self.misc_event.set() def _testRecvmsgTimeout(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @requireAttrs(socket, "MSG_PEEK") def testRecvmsgPeek(self): # Check that MSG_PEEK in flags enables examination of pending # data without consuming it. # Receive part of data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3, 0, socket.MSG_PEEK) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) # Ignoring MSG_TRUNC here (so this test is the same for stream # and datagram sockets). Some wording in POSIX seems to # suggest that it needn't be set when peeking, but that may # just be a slip. self.checkFlags(flags, eor=False, ignore=getattr(socket, "MSG_TRUNC", 0)) # Receive all data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, socket.MSG_PEEK) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) # Check that the same data can still be received normally. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgPeek.client_skip def _testRecvmsgPeek(self): self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") def testRecvmsgFromSendmsg(self): # Test receiving with recvmsg[_into]() when message is sent # using sendmsg(). self.serv_sock.settimeout(self.fail_timeout) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgFromSendmsg.client_skip def _testRecvmsgFromSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) class RecvmsgGenericStreamTests(RecvmsgGenericTests): # Tests which require a stream socket and can use either recvmsg() # or recvmsg_into(). def testRecvmsgEOF(self): # Receive end-of-stream indicator (b"", peer socket closed). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.assertEqual(msg, b"") self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=None) # Might not have end-of-record marker def _testRecvmsgEOF(self): self.cli_sock.close() def testRecvmsgOverflow(self): # Receive a message in more than one chunk. seg1, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) seg2, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testRecvmsgOverflow(self): self.sendToServer(MSG) class RecvmsgTests(RecvmsgGenericTests): # Tests for recvmsg() which can use any socket type. def testRecvmsgBadArgs(self): # Check that recvmsg() rejects invalid arguments. self.assertRaises(TypeError, self.serv_sock.recvmsg) self.assertRaises(ValueError, self.serv_sock.recvmsg, -1, 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg, len(MSG), -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, [bytearray(10)], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, object(), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), 0, object()) msg, ancdata, flags, addr = self.serv_sock.recvmsg(len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgBadArgs(self): self.sendToServer(MSG) class RecvmsgIntoTests(RecvmsgIntoMixin, RecvmsgGenericTests): # Tests for recvmsg_into() which can use any socket type. def testRecvmsgIntoBadArgs(self): # Check that recvmsg_into() rejects invalid arguments. 
buf = bytearray(len(MSG)) self.assertRaises(TypeError, self.serv_sock.recvmsg_into) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, len(MSG), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, buf, 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [object()], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [b"I'm not writable"], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf, object()], 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg_into, [buf], -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], 0, object()) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf], 0, 0) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoBadArgs(self): self.sendToServer(MSG) def testRecvmsgIntoGenerator(self): # Receive into buffer obtained from a generator (not a sequence). buf = bytearray(len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( (o for o in [buf])) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoGenerator(self): self.sendToServer(MSG) def testRecvmsgIntoArray(self): # Receive into an array rather than the usual bytearray. buf = array.array("B", [0] * len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf]) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf.tobytes(), MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoArray(self): self.sendToServer(MSG) def testRecvmsgIntoScatter(self): # Receive into multiple buffers (scatter write). b1 = bytearray(b"----") b2 = bytearray(b"0123456789") b3 = bytearray(b"--------------") nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( [b1, memoryview(b2)[2:9], b3]) self.assertEqual(nbytes, len(b"Mary had a little lamb")) self.assertEqual(b1, bytearray(b"Mary")) self.assertEqual(b2, bytearray(b"01 had a 9")) self.assertEqual(b3, bytearray(b"little lamb---")) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoScatter(self): self.sendToServer(b"Mary had a little lamb") class CmsgMacroTests(unittest.TestCase): # Test the functions CMSG_LEN() and CMSG_SPACE(). Tests # assumptions used by sendmsg() and recvmsg[_into](), which share # code with these functions. # Match the definition in socketmodule.c try: import _testcapi except ImportError: socklen_t_limit = 0x7fffffff else: socklen_t_limit = min(0x7fffffff, _testcapi.INT_MAX) @requireAttrs(socket, "CMSG_LEN") def testCMSG_LEN(self): # Test CMSG_LEN() with various valid and invalid values, # checking the assumptions used by recvmsg() and sendmsg(). 
toobig = self.socklen_t_limit - socket.CMSG_LEN(0) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(socket.CMSG_LEN(0), array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_LEN(n) # This is how recvmsg() calculates the data size self.assertEqual(ret - socket.CMSG_LEN(0), n) self.assertLessEqual(ret, self.socklen_t_limit) self.assertRaises(OverflowError, socket.CMSG_LEN, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_LEN, toobig) self.assertRaises(OverflowError, socket.CMSG_LEN, sys.maxsize) @requireAttrs(socket, "CMSG_SPACE") def testCMSG_SPACE(self): # Test CMSG_SPACE() with various valid and invalid values, # checking the assumptions used by sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_SPACE(1) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) last = socket.CMSG_SPACE(0) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(last, array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_SPACE(n) self.assertGreaterEqual(ret, last) self.assertGreaterEqual(ret, socket.CMSG_LEN(n)) self.assertGreaterEqual(ret, n + socket.CMSG_LEN(0)) self.assertLessEqual(ret, self.socklen_t_limit) last = ret self.assertRaises(OverflowError, socket.CMSG_SPACE, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_SPACE, toobig) self.assertRaises(OverflowError, socket.CMSG_SPACE, sys.maxsize) class SCMRightsTest(SendrecvmsgServerTimeoutBase): # Tests for file descriptor passing on Unix-domain sockets. # Invalid file descriptor value that's unlikely to evaluate to a # real FD even if one of its bytes is replaced with a different # value (which shouldn't actually happen). badfd = -0x5555 def newFDs(self, n): # Return a list of n file descriptors for newly-created files # containing their list indices as ASCII numbers. fds = [] for i in range(n): fd, path = tempfile.mkstemp() self.addCleanup(os.unlink, path) self.addCleanup(os.close, fd) os.write(fd, str(i).encode()) fds.append(fd) return fds def checkFDs(self, fds): # Check that the file descriptors in the given list contain # their correct list indices as ASCII numbers. for n, fd in enumerate(fds): os.lseek(fd, 0, os.SEEK_SET) self.assertEqual(os.read(fd, 1024), str(n).encode()) def registerRecvmsgResult(self, result): self.addCleanup(self.closeRecvmsgFDs, result) def closeRecvmsgFDs(self, recvmsg_result): # Close all file descriptors specified in the ancillary data # of the given return value from recvmsg() or recvmsg_into(). for cmsg_level, cmsg_type, cmsg_data in recvmsg_result[1]: if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS): fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) for fd in fds: os.close(fd) def createAndSendFDs(self, n): # Send n new file descriptors created by newFDs() to the # server, with the constant MSG as the non-ancillary data. self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(n)))]), len(MSG)) def checkRecvmsgFDs(self, numfds, result, maxcmsgs=1, ignoreflags=0): # Check that constant MSG was received with numfds file # descriptors in a maximum of maxcmsgs control messages (which # must contain only complete integers). 
By default, check # that MSG_CTRUNC is unset, but ignore any flags in # ignoreflags. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertIsInstance(ancdata, list) self.assertLessEqual(len(ancdata), maxcmsgs) fds = array.array("i") for item in ancdata: self.assertIsInstance(item, tuple) cmsg_level, cmsg_type, cmsg_data = item self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data) % SIZEOF_INT, 0) fds.frombytes(cmsg_data) self.assertEqual(len(fds), numfds) self.checkFDs(fds) def testFDPassSimple(self): # Pass a single FD (array read from bytes object). self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testFDPassSimple(self): self.assertEqual( self.sendmsgToServer( [MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(1)).tobytes())]), len(MSG)) def testMultipleFDPass(self): # Pass multiple FDs in a single array. self.checkRecvmsgFDs(4, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testMultipleFDPass(self): self.createAndSendFDs(4) @requireAttrs(socket, "CMSG_SPACE") def testFDPassCMSG_SPACE(self): # Test using CMSG_SPACE() to calculate ancillary buffer size. self.checkRecvmsgFDs( 4, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(4 * SIZEOF_INT))) @testFDPassCMSG_SPACE.client_skip def _testFDPassCMSG_SPACE(self): self.createAndSendFDs(4) def testFDPassCMSG_LEN(self): # Test using CMSG_LEN() to calculate ancillary buffer size. self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(4 * SIZEOF_INT)), # RFC 3542 says implementations may set # MSG_CTRUNC if there isn't enough space # for trailing padding. ignoreflags=socket.MSG_CTRUNC) def _testFDPassCMSG_LEN(self): self.createAndSendFDs(1) @unittest.skipIf(is_apple, "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparate(self): # Pass two FDs in two separate arrays. Arrays may be combined # into a single control message by the OS. self.checkRecvmsgFDs(2, self.doRecvmsg(self.serv_sock, len(MSG), 10240), maxcmsgs=2) @testFDPassSeparate.client_skip @unittest.skipIf(is_apple, "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparate(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) @unittest.skipIf(is_apple, "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparateMinSpace(self): # Pass two FDs in two separate arrays, receiving them into the # minimum space for two arrays. 
num_fds = 2 self.checkRecvmsgFDs(num_fds, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT * num_fds)), maxcmsgs=2, ignoreflags=socket.MSG_CTRUNC) @testFDPassSeparateMinSpace.client_skip @unittest.skipIf(is_apple, "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparateMinSpace(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) def sendAncillaryIfPossible(self, msg, ancdata): # Try to send msg and ancdata to server, but if the system # call fails, just send msg with no ancillary data. try: nbytes = self.sendmsgToServer([msg], ancdata) except OSError as e: # Check that it was the system call that failed self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer([msg]) self.assertEqual(nbytes, len(msg)) @unittest.skipIf(is_apple, "skipping, see issue #12958") def testFDPassEmpty(self): # Try to pass an empty FD array. Can receive either no array # or an empty array. self.checkRecvmsgFDs(0, self.doRecvmsg(self.serv_sock, len(MSG), 10240), ignoreflags=socket.MSG_CTRUNC) def _testFDPassEmpty(self): self.sendAncillaryIfPossible(MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, b"")]) def testFDPassPartialInt(self): # Try to pass a truncated FD array. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertLess(len(cmsg_data), SIZEOF_INT) def _testFDPassPartialInt(self): self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [self.badfd]).tobytes()[:-1])]) @requireAttrs(socket, "CMSG_SPACE") def testFDPassPartialIntInMiddle(self): # Try to pass two FD arrays, the first of which is truncated. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 2) fds = array.array("i") # Arrays may have been combined in a single control message for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.assertLessEqual(len(fds), 2) self.checkFDs(fds) @testFDPassPartialIntInMiddle.client_skip def _testFDPassPartialIntInMiddle(self): fd0, fd1 = self.newFDs(2) self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0, self.badfd]).tobytes()[:-1]), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]) def checkTruncatedHeader(self, result, ignoreflags=0): # Check that no ancillary data items are returned when data is # truncated inside the cmsghdr structure. 
msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no buffer size # is specified. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG)), # BSD seems to set MSG_CTRUNC only # if an item has been partially # received. ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.createAndSendFDs(1) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTrunc0(self): # Check that no ancillary data is received when buffer size is 0. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 0), ignoreflags=socket.MSG_CTRUNC) @testCmsgTrunc0.client_skip def _testCmsgTrunc0(self): self.createAndSendFDs(1) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTrunc1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 1)) @testCmsgTrunc1.client_skip def _testCmsgTrunc1(self): self.createAndSendFDs(1) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTrunc2Int(self): # The cmsghdr structure has at least three members, two of # which are ints, so we still shouldn't see any ancillary # data. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), SIZEOF_INT * 2)) @testCmsgTrunc2Int.client_skip def _testCmsgTrunc2Int(self): self.createAndSendFDs(1) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncLen0Minus1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(0) - 1)) @testCmsgTruncLen0Minus1.client_skip def _testCmsgTruncLen0Minus1(self): self.createAndSendFDs(1) # The following tests try to truncate the control message in the # middle of the FD array. def checkTruncatedArray(self, ancbuf, maxdata, mindata=0): # Check that file descriptor data is truncated to between # mindata and maxdata bytes when received with buffer size # ancbuf, and that any complete file descriptor numbers are # valid. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbuf) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) if mindata == 0 and ancdata == []: return self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertGreaterEqual(len(cmsg_data), mindata) self.assertLessEqual(len(cmsg_data), maxdata) fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.checkFDs(fds) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncLen0(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0), maxdata=0) @testCmsgTruncLen0.client_skip def _testCmsgTruncLen0(self): self.createAndSendFDs(1) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncLen0Plus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0) + 1, maxdata=1) @testCmsgTruncLen0Plus1.client_skip def _testCmsgTruncLen0Plus1(self): self.createAndSendFDs(2) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncLen1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(SIZEOF_INT), maxdata=SIZEOF_INT) @testCmsgTruncLen1.client_skip def _testCmsgTruncLen1(self): self.createAndSendFDs(2) @skipForRefleakHuntinIf(sys.platform == "darwin", "#80931") def testCmsgTruncLen2Minus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(2 * SIZEOF_INT) - 1, maxdata=(2 * SIZEOF_INT) - 1) @testCmsgTruncLen2Minus1.client_skip def _testCmsgTruncLen2Minus1(self): self.createAndSendFDs(2) class RFC3542AncillaryTest(SendrecvmsgServerTimeoutBase): # Test sendmsg() and recvmsg[_into]() using the ancillary data # features of the RFC 3542 Advanced Sockets API for IPv6. # Currently we can only handle certain data items (e.g. traffic # class, hop limit, MTU discovery and fragmentation settings) # without resorting to unportable means such as the struct module, # but the tests here are aimed at testing the ancillary data # handling in sendmsg() and recvmsg() rather than the IPv6 API # itself. # Test value to use when setting hop limit of packet hop_limit = 2 # Test value to use when setting traffic class of packet. # -1 means "use kernel default". traffic_class = -1 def ancillaryMapping(self, ancdata): # Given ancillary data list ancdata, return a mapping from # pairs (cmsg_level, cmsg_type) to corresponding cmsg_data. # Check that no (level, type) pair appears more than once. d = {} for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertNotIn((cmsg_level, cmsg_type), d) d[(cmsg_level, cmsg_type)] = cmsg_data return d def checkHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space. Check that data is MSG, ancillary data is not # truncated (but ignore any flags in ignoreflags), and hop # limit is between 0 and maxhop inclusive. 
self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) self.assertIsInstance(ancdata[0], tuple) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimit(self): # Test receiving the packet hop limit as ancillary data. self.checkHopLimit(ancbufsize=10240) @testRecvHopLimit.client_skip def _testRecvHopLimit(self): # Need to wait until server has asked to receive ancillary # data, as implementations are not required to buffer it # otherwise. self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimitCMSG_SPACE(self): # Test receiving hop limit, using CMSG_SPACE to calculate buffer size. self.checkHopLimit(ancbufsize=socket.CMSG_SPACE(SIZEOF_INT)) @testRecvHopLimitCMSG_SPACE.client_skip def _testRecvHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Could test receiving into buffer sized using CMSG_LEN, but RFC # 3542 says portable applications must provide space for trailing # padding. Implementations may set MSG_CTRUNC if there isn't # enough space for the padding. @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSetHopLimit(self): # Test setting hop limit on outgoing packet and receiving it # at the other end. self.checkHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetHopLimit.client_skip def _testSetHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) def checkTrafficClassAndHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space. Check that data is MSG, ancillary # data is not truncated (but ignore any flags in ignoreflags), # and traffic class and hop limit are in range (hop limit no # more than maxhop). 
self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 2) ancmap = self.ancillaryMapping(ancdata) tcdata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_TCLASS)] self.assertEqual(len(tcdata), SIZEOF_INT) a = array.array("i") a.frombytes(tcdata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) hldata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT)] self.assertEqual(len(hldata), SIZEOF_INT) a = array.array("i") a.frombytes(hldata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimit(self): # Test receiving traffic class and hop limit as ancillary data. self.checkTrafficClassAndHopLimit(ancbufsize=10240) @testRecvTrafficClassAndHopLimit.client_skip def _testRecvTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimitCMSG_SPACE(self): # Test receiving traffic class and hop limit, using # CMSG_SPACE() to calculate buffer size. self.checkTrafficClassAndHopLimit( ancbufsize=socket.CMSG_SPACE(SIZEOF_INT) * 2) @testRecvTrafficClassAndHopLimitCMSG_SPACE.client_skip def _testRecvTrafficClassAndHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSetTrafficClassAndHopLimit(self): # Test setting traffic class and hop limit on outgoing packet, # and receiving them at the other end. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetTrafficClassAndHopLimit.client_skip def _testSetTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testOddCmsgSize(self): # Try to send ancillary data with first item one byte too # long. Fall back to sending with correct size if this fails, # and check that second item was handled correctly. 
self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testOddCmsgSize.client_skip def _testOddCmsgSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) try: nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class]).tobytes() + b"\x00"), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) except OSError as e: self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) self.assertEqual(nbytes, len(MSG)) # Tests for proper handling of truncated ancillary data def checkHopLimitTruncatedHeader(self, ancbufsize, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space, which should be too small to contain the ancillary # data header (if ancbufsize is None, pass no second argument # to recvmsg()). Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and no ancillary data is # returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() args = () if ancbufsize is None else (ancbufsize,) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), *args) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no ancillary # buffer size is provided. self.checkHopLimitTruncatedHeader(ancbufsize=None, # BSD seems to set # MSG_CTRUNC only if an item # has been partially # received. ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc0(self): # Check that no ancillary data is received when ancillary # buffer size is zero. self.checkHopLimitTruncatedHeader(ancbufsize=0, ignoreflags=socket.MSG_CTRUNC) @testSingleCmsgTrunc0.client_skip def _testSingleCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. 
@requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc1(self): self.checkHopLimitTruncatedHeader(ancbufsize=1) @testSingleCmsgTrunc1.client_skip def _testSingleCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc2Int(self): self.checkHopLimitTruncatedHeader(ancbufsize=2 * SIZEOF_INT) @testSingleCmsgTrunc2Int.client_skip def _testSingleCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncLen0Minus1(self): self.checkHopLimitTruncatedHeader(ancbufsize=socket.CMSG_LEN(0) - 1) @testSingleCmsgTruncLen0Minus1.client_skip def _testSingleCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncInData(self): # Test truncation of a control message inside its associated # data. The message may be returned with its data truncated, # or not returned at all. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertLess(len(cmsg_data), SIZEOF_INT) @testSingleCmsgTruncInData.client_skip def _testSingleCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) def checkTruncatedSecondHeader(self, ancbufsize, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space, which should be large enough to # contain the first item, but too small to contain the header # of the second. Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and only one ancillary # data item is returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertIn(cmsg_type, {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT}) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) # Try the above test with various buffer sizes. 
@requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc0(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT), ignoreflags=socket.MSG_CTRUNC) @testSecondCmsgTrunc0.client_skip def _testSecondCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 1) @testSecondCmsgTrunc1.client_skip def _testSecondCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc2Int(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 2 * SIZEOF_INT) @testSecondCmsgTrunc2Int.client_skip def _testSecondCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncLen0Minus1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(0) - 1) @testSecondCmsgTruncLen0Minus1.client_skip def _testSecondCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncInData(self): # Test truncation of the second of two control messages inside # its associated data. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) cmsg_types = {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT} cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertLess(len(cmsg_data), SIZEOF_INT) self.assertEqual(ancdata, []) @testSecondCmsgTruncInData.client_skip def _testSecondCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Derive concrete test classes for different socket types. 
class SendrecvmsgUDPTestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPTest(SendmsgConnectionlessTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPTest(RecvmsgTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPTest(RecvmsgIntoTests, SendrecvmsgUDPTestBase): pass class SendrecvmsgUDP6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDP6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDP6Test(SendmsgConnectionlessTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDP6Test(RecvmsgTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDP6Test(RecvmsgIntoTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDP6Test(RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDP6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITETestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPLITETest(SendmsgConnectionlessTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPLITETest(RecvmsgTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPLITETest(RecvmsgIntoTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITE6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITE6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') 
@requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDPLITE6Test(SendmsgConnectionlessTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDPLITE6Test(RecvmsgTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDPLITE6Test(RecvmsgIntoTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDPLITE6Test(RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDPLITE6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass class SendrecvmsgTCPTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, TCPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgTCPTest(SendmsgStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgTCPTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoTCPTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass class SendrecvmsgSCTPStreamTestBase(SendrecvmsgSCTPFlagsBase, SendrecvmsgConnectedBase, ConnectedStreamTestMixin, SCTPStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class SendmsgSCTPStreamTest(SendmsgStreamTests, SendrecvmsgSCTPStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgSCTPStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgIntoSCTPStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgIntoSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) 
- see issue #13876") class SendrecvmsgUnixStreamTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, UnixStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "AF_UNIX") class SendmsgUnixStreamTest(SendmsgStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @requireAttrs(socket, "AF_UNIX") class RecvmsgUnixStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @requireAttrs(socket, "AF_UNIX") class RecvmsgIntoUnixStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgSCMRightsStreamTest(SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg_into") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgIntoSCMRightsStreamTest(RecvmsgIntoMixin, SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass # Test interrupting the interruptible send/receive methods with a # signal when a timeout is set. These tests avoid having multiple # threads alive during the test so that the OS cannot deliver the # signal to the wrong one. class InterruptedTimeoutBase: # Base class for interrupted send/receive tests. Installs an # empty handler for SIGALRM and removes it on teardown, along with # any scheduled alarms. def setUp(self): super().setUp() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda signum, frame: 1 / 0) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) # Timeout for socket operations timeout = support.LOOPBACK_TIMEOUT # Provide setAlarm() method to schedule delivery of SIGALRM after # given number of seconds, or cancel it if zero, and an # appropriate time value to use. Use setitimer() if available. if hasattr(signal, "setitimer"): alarm_time = 0.05 def setAlarm(self, seconds): signal.setitimer(signal.ITIMER_REAL, seconds) else: # Old systems may deliver the alarm up to one second early alarm_time = 2 def setAlarm(self, seconds): signal.alarm(seconds) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedRecvTimeoutTest(InterruptedTimeoutBase, UDPTestBase): # Test interrupting the recv*() methods with signals when a # timeout is set. def setUp(self): super().setUp() self.serv.settimeout(self.timeout) def checkInterruptedRecv(self, func, *args, **kwargs): # Check that func(*args, **kwargs) raises # errno of EINTR when interrupted by a signal. 
try: self.setAlarm(self.alarm_time) with self.assertRaises(ZeroDivisionError) as cm: func(*args, **kwargs) finally: self.setAlarm(0) def testInterruptedRecvTimeout(self): self.checkInterruptedRecv(self.serv.recv, 1024) def testInterruptedRecvIntoTimeout(self): self.checkInterruptedRecv(self.serv.recv_into, bytearray(1024)) def testInterruptedRecvfromTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom, 1024) def testInterruptedRecvfromIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom_into, bytearray(1024)) @requireAttrs(socket.socket, "recvmsg") def testInterruptedRecvmsgTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg, 1024) @requireAttrs(socket.socket, "recvmsg_into") def testInterruptedRecvmsgIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg_into, [bytearray(1024)]) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedSendTimeoutTest(InterruptedTimeoutBase, SocketListeningTestMixin, TCPTestBase): # Test interrupting the interruptible send*() methods with signals # when a timeout is set. def setUp(self): super().setUp() self.serv_conn = self.newSocket() self.addCleanup(self.serv_conn.close) # Use a thread to complete the connection, but wait for it to # terminate before running the test, so that there is only one # thread to accept the signal. cli_thread = threading.Thread(target=self.doConnect) cli_thread.start() self.cli_conn, addr = self.serv.accept() self.addCleanup(self.cli_conn.close) cli_thread.join() self.serv_conn.settimeout(self.timeout) def doConnect(self): self.serv_conn.connect(self.serv_addr) def checkInterruptedSend(self, func, *args, **kwargs): # Check that func(*args, **kwargs), run in a loop, raises # OSError with an errno of EINTR when interrupted by a # signal. try: with self.assertRaises(ZeroDivisionError) as cm: while True: self.setAlarm(self.alarm_time) func(*args, **kwargs) finally: self.setAlarm(0) # Issue #12958: The following tests have problems on OS X prior to 10.7 @support.requires_mac_ver(10, 7) def testInterruptedSendTimeout(self): self.checkInterruptedSend(self.serv_conn.send, b"a"*512) @support.requires_mac_ver(10, 7) def testInterruptedSendtoTimeout(self): # Passing an actual address here as Python's wrapper for # sendto() doesn't allow passing a zero-length one; POSIX # requires that the address is ignored since the socket is # connection-mode, however. self.checkInterruptedSend(self.serv_conn.sendto, b"a"*512, self.serv_addr) @support.requires_mac_ver(10, 7) @requireAttrs(socket.socket, "sendmsg") def testInterruptedSendmsgTimeout(self): self.checkInterruptedSend(self.serv_conn.sendmsg, [b"a"*512]) class TCPCloserTest(ThreadedTCPSocketTest): def testClose(self): conn, addr = self.serv.accept() conn.close() sd = self.cli read, write, err = select.select([sd], [], [], 1.0) self.assertEqual(read, [sd]) self.assertEqual(sd.recv(1), b'') # Calling close() many times should be safe. 
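        # socket.close() is idempotent: the fd is released on the first
        # call and later calls are no-ops, so the repeated close() calls
        # below must not raise.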
conn.close() conn.close() def _testClose(self): self.cli.connect((HOST, self.port)) time.sleep(1.0) class BasicSocketPairTest(SocketPairTest): def __init__(self, methodName='runTest'): SocketPairTest.__init__(self, methodName=methodName) def _check_defaults(self, sock): self.assertIsInstance(sock, socket.socket) if hasattr(socket, 'AF_UNIX'): self.assertEqual(sock.family, socket.AF_UNIX) else: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def _testDefaults(self): self._check_defaults(self.cli) def testDefaults(self): self._check_defaults(self.serv) def testRecv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.send(MSG) def testSend(self): self.serv.send(MSG) def _testSend(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) class PurePythonSocketPairTest(SocketPairTest): # Explicitly use socketpair AF_INET or AF_INET6 to ensure that is the # code path we're using regardless platform is the pure python one where # `_socket.socketpair` does not exist. (AF_INET does not work with # _socket.socketpair on many platforms). def socketpair(self): # called by super().setUp(). try: return socket.socketpair(socket.AF_INET6) except OSError: return socket.socketpair(socket.AF_INET) # Local imports in this class make for easy security fix backporting. def setUp(self): if hasattr(_socket, "socketpair"): self._orig_sp = socket.socketpair # This forces the version using the non-OS provided socketpair # emulation via an AF_INET socket in Lib/socket.py. socket.socketpair = socket._fallback_socketpair else: # This platform already uses the non-OS provided version. self._orig_sp = None super().setUp() def tearDown(self): super().tearDown() if self._orig_sp is not None: # Restore the default socket.socketpair definition. socket.socketpair = self._orig_sp def test_recv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _test_recv(self): self.cli.send(MSG) def test_send(self): self.serv.send(MSG) def _test_send(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) def test_ipv4(self): cli, srv = socket.socketpair(socket.AF_INET) cli.close() srv.close() def _test_ipv4(self): pass @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6(self): cli, srv = socket.socketpair(socket.AF_INET6) cli.close() srv.close() def _test_ipv6(self): pass def test_injected_authentication_failure(self): orig_getsockname = socket.socket.getsockname inject_sock = None def inject_getsocketname(self): nonlocal inject_sock sockname = orig_getsockname(self) # Connect to the listening socket ahead of the # client socket. if inject_sock is None: inject_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) inject_sock.setblocking(False) try: inject_sock.connect(sockname[:2]) except (BlockingIOError, InterruptedError): pass inject_sock.setblocking(True) return sockname sock1 = sock2 = None try: socket.socket.getsockname = inject_getsocketname with self.assertRaises(OSError): sock1, sock2 = socket.socketpair() finally: socket.socket.getsockname = orig_getsockname if inject_sock: inject_sock.close() if sock1: # This cleanup isn't needed on a successful test. sock1.close() if sock2: sock2.close() def _test_injected_authentication_failure(self): # No-op. Exists for base class threading infrastructure to call. 
# We could refactor this test into its own lesser class along with the # setUp and tearDown code to construct an ideal; it is simpler to keep # it here and live with extra overhead one this _one_ failure test. pass class NonBlockingTCPTests(ThreadedTCPSocketTest): def __init__(self, methodName='runTest'): self.event = threading.Event() ThreadedTCPSocketTest.__init__(self, methodName=methodName) def assert_sock_timeout(self, sock, timeout): self.assertEqual(self.serv.gettimeout(), timeout) blocking = (timeout != 0.0) self.assertEqual(sock.getblocking(), blocking) if fcntl is not None: # When a Python socket has a non-zero timeout, it's switched # internally to a non-blocking mode. Later, sock.sendall(), # sock.recv(), and other socket operations use a select() call and # handle EWOULDBLOCK/EGAIN on all socket operations. That's how # timeouts are enforced. fd_blocking = (timeout is None) flag = fcntl.fcntl(sock, fcntl.F_GETFL, os.O_NONBLOCK) self.assertEqual(not bool(flag & os.O_NONBLOCK), fd_blocking) def testSetBlocking(self): # Test setblocking() and settimeout() methods self.serv.setblocking(True) self.assert_sock_timeout(self.serv, None) self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.settimeout(None) self.assert_sock_timeout(self.serv, None) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) self.serv.settimeout(10) self.assert_sock_timeout(self.serv, 10) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) def _testSetBlocking(self): pass @support.cpython_only @unittest.skipIf(_testcapi is None, "requires _testcapi") def testSetBlocking_overflow(self): # Issue 15989 import _testcapi if _testcapi.UINT_MAX >= _testcapi.ULONG_MAX: self.skipTest('needs UINT_MAX < ULONG_MAX') self.serv.setblocking(False) self.assertEqual(self.serv.gettimeout(), 0.0) self.serv.setblocking(_testcapi.UINT_MAX + 1) self.assertIsNone(self.serv.gettimeout()) _testSetBlocking_overflow = support.cpython_only(_testSetBlocking) @unittest.skipUnless(hasattr(socket, 'SOCK_NONBLOCK'), 'test needs socket.SOCK_NONBLOCK') @support.requires_linux_version(2, 6, 28) def testInitNonBlocking(self): # create a socket with SOCK_NONBLOCK self.serv.close() self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) self.assert_sock_timeout(self.serv, 0) def _testInitNonBlocking(self): pass def testInheritFlagsBlocking(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must be blocking. with socket_setdefaulttimeout(None): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testInheritFlagsBlocking(self): self.cli.connect((HOST, self.port)) def testInheritFlagsTimeout(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must inherit # the default timeout. 
default_timeout = 20.0 with socket_setdefaulttimeout(default_timeout): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertEqual(conn.gettimeout(), default_timeout) def _testInheritFlagsTimeout(self): self.cli.connect((HOST, self.port)) def testAccept(self): # Testing non-blocking accept self.serv.setblocking(False) # connect() didn't start: non-blocking accept() fails start_time = time.monotonic() with self.assertRaises(BlockingIOError): conn, addr = self.serv.accept() dt = time.monotonic() - start_time self.assertLess(dt, 1.0) self.event.set() read, write, err = select.select([self.serv], [], [], support.LONG_TIMEOUT) if self.serv not in read: self.fail("Error trying to do accept after select.") # connect() completed: non-blocking accept() doesn't block conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testAccept(self): # don't connect before event is set to check # that non-blocking accept() raises BlockingIOError self.event.wait() self.cli.connect((HOST, self.port)) def testRecv(self): # Testing non-blocking recv conn, addr = self.serv.accept() self.addCleanup(conn.close) conn.setblocking(False) # the server didn't send data yet: non-blocking recv() fails with self.assertRaises(BlockingIOError): msg = conn.recv(len(MSG)) self.event.set() read, write, err = select.select([conn], [], [], support.LONG_TIMEOUT) if conn not in read: self.fail("Error during select call to non-blocking socket.") # the server sent data yet: non-blocking recv() doesn't block msg = conn.recv(len(MSG)) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.connect((HOST, self.port)) # don't send anything before event is set to check # that non-blocking recv() raises BlockingIOError self.event.wait() # send data: recv() will no longer block self.cli.sendall(MSG) class FileObjectClassTestCase(SocketConnectedTest): """Unit tests for the object returned by socket.makefile() self.read_file is the io object returned by makefile() on the client connection. You can read from this file to get output from the server. self.write_file is the io object returned by makefile() on the server connection. You can write to this file to send output to the client. """ bufsize = -1 # Use default buffer size encoding = 'utf-8' errors = 'strict' newline = None read_mode = 'rb' read_msg = MSG write_mode = 'wb' write_msg = MSG def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def setUp(self): self.evt1, self.evt2, self.serv_finished, self.cli_finished = [ threading.Event() for i in range(4)] SocketConnectedTest.setUp(self) self.read_file = self.cli_conn.makefile( self.read_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def tearDown(self): self.serv_finished.set() self.read_file.close() self.assertTrue(self.read_file.closed) self.read_file = None SocketConnectedTest.tearDown(self) def clientSetUp(self): SocketConnectedTest.clientSetUp(self) self.write_file = self.serv_conn.makefile( self.write_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def clientTearDown(self): self.cli_finished.set() self.write_file.close() self.assertTrue(self.write_file.closed) self.write_file = None SocketConnectedTest.clientTearDown(self) def testReadAfterTimeout(self): # Issue #7322: A file object must disallow further reads # after a timeout has occurred. 
self.cli_conn.settimeout(1) self.read_file.read(3) # First read raises a timeout self.assertRaises(TimeoutError, self.read_file.read, 1) # Second read is disallowed with self.assertRaises(OSError) as ctx: self.read_file.read(1) self.assertIn("cannot read from timed out object", str(ctx.exception)) def _testReadAfterTimeout(self): self.write_file.write(self.write_msg[0:3]) self.write_file.flush() self.serv_finished.wait() def testSmallRead(self): # Performing small file read test first_seg = self.read_file.read(len(self.read_msg)-3) second_seg = self.read_file.read(3) msg = first_seg + second_seg self.assertEqual(msg, self.read_msg) def _testSmallRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testFullRead(self): # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testFullRead(self): self.write_file.write(self.write_msg) self.write_file.close() def testUnbufferedRead(self): # Performing unbuffered file read test buf = type(self.read_msg)() while 1: char = self.read_file.read(1) if not char: break buf += char self.assertEqual(buf, self.read_msg) def _testUnbufferedRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testReadline(self): # Performing file readline test line = self.read_file.readline() self.assertEqual(line, self.read_msg) def _testReadline(self): self.write_file.write(self.write_msg) self.write_file.flush() def testCloseAfterMakefile(self): # The file returned by makefile should keep the socket open. self.cli_conn.close() # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testCloseAfterMakefile(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileAfterMakefileClose(self): self.read_file.close() msg = self.cli_conn.recv(len(MSG)) if isinstance(self.read_msg, str): msg = msg.decode() self.assertEqual(msg, self.read_msg) def _testMakefileAfterMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testClosedAttr(self): self.assertTrue(not self.read_file.closed) def _testClosedAttr(self): self.assertTrue(not self.write_file.closed) def testAttributes(self): self.assertEqual(self.read_file.mode, self.read_mode) self.assertEqual(self.read_file.name, self.cli_conn.fileno()) def _testAttributes(self): self.assertEqual(self.write_file.mode, self.write_mode) self.assertEqual(self.write_file.name, self.serv_conn.fileno()) def testRealClose(self): self.read_file.close() self.assertRaises(ValueError, self.read_file.fileno) self.cli_conn.close() self.assertRaises(OSError, self.cli_conn.getsockname) def _testRealClose(self): pass class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase): """Repeat the tests from FileObjectClassTestCase with bufsize==0. In this case (and in this case only), it should be possible to create a file object, read a line from it, create another file object, read another line from it, without loss of data in the first file object's buffer. Note that http.client relies on this when reading multiple requests from the same socket.""" bufsize = 0 # Use unbuffered mode def testUnbufferedReadline(self): # Read a line, create a new file object, read another line with it line = self.read_file.readline() # first line self.assertEqual(line, b"A. " + self.write_msg) # first line self.read_file = self.cli_conn.makefile('rb', 0) line = self.read_file.readline() # second line self.assertEqual(line, b"B. 
" + self.write_msg) # second line def _testUnbufferedReadline(self): self.write_file.write(b"A. " + self.write_msg) self.write_file.write(b"B. " + self.write_msg) self.write_file.flush() def testMakefileClose(self): # The file returned by makefile should keep the socket open... self.cli_conn.close() msg = self.cli_conn.recv(1024) self.assertEqual(msg, self.read_msg) # ...until the file is itself closed self.read_file.close() self.assertRaises(OSError, self.cli_conn.recv, 1024) def _testMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileCloseSocketDestroy(self): refcount_before = sys.getrefcount(self.cli_conn) self.read_file.close() refcount_after = sys.getrefcount(self.cli_conn) self.assertEqual(refcount_before - 1, refcount_after) def _testMakefileCloseSocketDestroy(self): pass # Non-blocking ops # NOTE: to set `read_file` as non-blocking, we must call # `cli_conn.setblocking` and vice-versa (see setUp / clientSetUp). def testSmallReadNonBlocking(self): self.cli_conn.setblocking(False) self.assertEqual(self.read_file.readinto(bytearray(10)), None) self.assertEqual(self.read_file.read(len(self.read_msg) - 3), None) self.evt1.set() self.evt2.wait(1.0) first_seg = self.read_file.read(len(self.read_msg) - 3) if first_seg is None: # Data not arrived (can happen under Windows), wait a bit time.sleep(0.5) first_seg = self.read_file.read(len(self.read_msg) - 3) buf = bytearray(10) n = self.read_file.readinto(buf) self.assertEqual(n, 3) msg = first_seg + buf[:n] self.assertEqual(msg, self.read_msg) self.assertEqual(self.read_file.readinto(bytearray(16)), None) self.assertEqual(self.read_file.read(1), None) def _testSmallReadNonBlocking(self): self.evt1.wait(1.0) self.write_file.write(self.write_msg) self.write_file.flush() self.evt2.set() # Avoid closing the socket before the server test has finished, # otherwise system recv() will return 0 instead of EWOULDBLOCK. self.serv_finished.wait(5.0) def testWriteNonBlocking(self): self.cli_finished.wait(5.0) # The client thread can't skip directly - the SkipTest exception # would appear as a failure. if self.serv_skipped: self.skipTest(self.serv_skipped) def _testWriteNonBlocking(self): self.serv_skipped = None self.serv_conn.setblocking(False) # Try to saturate the socket buffer pipe with repeated large writes. BIG = b"x" * support.SOCK_MAX_SIZE LIMIT = 10 # The first write() succeeds since a chunk of data can be buffered n = self.write_file.write(BIG) self.assertGreater(n, 0) for i in range(LIMIT): n = self.write_file.write(BIG) if n is None: # Succeeded break self.assertGreater(n, 0) else: # Let us know that this test didn't manage to establish # the expected conditions. This is not a failure in itself but, # if it happens repeatedly, the test should be fixed. 
self.serv_skipped = "failed to saturate the socket buffer" class LineBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 1 # Default-buffered for reading; line-buffered for writing class SmallBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 2 # Exercise the buffering code class UnicodeReadFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'wb' write_msg = MSG newline = '' class UnicodeWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'rb' read_msg = MSG write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class UnicodeReadWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class NetworkConnectionTest(object): """Prove network connection.""" def clientSetUp(self): # We're inherited below by BasicTCPTest2, which also inherits # BasicTCPTest, which defines self.port referenced below. self.cli = socket.create_connection((HOST, self.port)) self.serv_conn = self.cli class BasicTCPTest2(NetworkConnectionTest, BasicTCPTest): """Tests that NetworkConnection does not break existing TCP functionality. """ class NetworkConnectionNoServer(unittest.TestCase): class MockSocket(socket.socket): def connect(self, *args): raise TimeoutError('timed out') @contextlib.contextmanager def mocked_socket_module(self): """Return a socket which times out on connect""" old_socket = socket.socket socket.socket = self.MockSocket try: yield finally: socket.socket = old_socket @socket_helper.skip_if_tcp_blackhole def test_connect(self): port = socket_helper.find_unused_port() cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(cli.close) with self.assertRaises(OSError) as cm: cli.connect((HOST, port)) self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) @socket_helper.skip_if_tcp_blackhole def test_create_connection(self): # Issue #9792: errors raised by create_connection() should have # a proper errno attribute. port = socket_helper.find_unused_port() with self.assertRaises(OSError) as cm: socket.create_connection((HOST, port)) # Issue #16257: create_connection() calls getaddrinfo() against # 'localhost'. This may result in an IPV6 addr being returned # as well as an IPV4 one: # >>> socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) # >>> [(2, 2, 0, '', ('127.0.0.1', 41230)), # (26, 2, 0, '', ('::1', 41230, 0, 0))] # # create_connection() enumerates through all the addresses returned # and if it doesn't successfully bind to any of them, it propagates # the last exception it encountered. # # On Solaris, ENETUNREACH is returned in this circumstance instead # of ECONNREFUSED. So, if that errno exists, add it to our list of # expected errnos. 
expected_errnos = socket_helper.get_socket_conn_refused_errs() self.assertIn(cm.exception.errno, expected_errnos) def test_create_connection_all_errors(self): port = socket_helper.find_unused_port() try: socket.create_connection((HOST, port), all_errors=True) except ExceptionGroup as e: eg = e else: self.fail('expected connection to fail') self.assertIsInstance(eg, ExceptionGroup) for e in eg.exceptions: self.assertIsInstance(e, OSError) addresses = socket.getaddrinfo( 'localhost', port, 0, socket.SOCK_STREAM) # assert that we got an exception for each address self.assertEqual(len(addresses), len(eg.exceptions)) def test_create_connection_timeout(self): # Issue #9792: create_connection() should not recast timeout errors # as generic socket errors. with self.mocked_socket_module(): try: socket.create_connection((HOST, 1234)) except TimeoutError: pass except OSError as exc: if socket_helper.IPV6_ENABLED or exc.errno != errno.EAFNOSUPPORT: raise else: self.fail('TimeoutError not raised') class NetworkConnectionAttributesTest(SocketTCPTest, ThreadableTest): cli = None def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.source_port = socket_helper.find_unused_port() def clientTearDown(self): if self.cli is not None: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def _justAccept(self): conn, addr = self.serv.accept() conn.close() testFamily = _justAccept def _testFamily(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT) self.addCleanup(self.cli.close) self.assertEqual(self.cli.family, 2) testSourceAddress = _justAccept def _testSourceAddress(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT, source_address=('', self.source_port)) self.addCleanup(self.cli.close) self.assertEqual(self.cli.getsockname()[1], self.source_port) # The port number being used is sufficient to show that the bind() # call happened. 
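    # The server side of each of the following tests only needs to accept
    # and close a connection, so the test* names are bound to _justAccept;
    # the real assertions run in the matching _test* method on the client
    # thread (per the ThreadableTest convention used in this module).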
testTimeoutDefault = _justAccept def _testTimeoutDefault(self): # passing no explicit timeout uses socket's global default self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(42) try: self.cli = socket.create_connection((HOST, self.port)) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), 42) testTimeoutNone = _justAccept def _testTimeoutNone(self): # None timeout means the same as sock.settimeout(None) self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(30) try: self.cli = socket.create_connection((HOST, self.port), timeout=None) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), None) testTimeoutValueNamed = _justAccept def _testTimeoutValueNamed(self): self.cli = socket.create_connection((HOST, self.port), timeout=30) self.assertEqual(self.cli.gettimeout(), 30) testTimeoutValueNonamed = _justAccept def _testTimeoutValueNonamed(self): self.cli = socket.create_connection((HOST, self.port), 30) self.addCleanup(self.cli.close) self.assertEqual(self.cli.gettimeout(), 30) class NetworkConnectionBehaviourTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def testInsideTimeout(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) time.sleep(3) conn.send(b"done!") testOutsideTimeout = testInsideTimeout def _testInsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port)) data = sock.recv(5) self.assertEqual(data, b"done!") def _testOutsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port), timeout=1) self.assertRaises(TimeoutError, lambda: sock.recv(5)) class TCPTimeoutTest(SocketTCPTest): def testTCPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.accept() self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (TCP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of error (TCP)") except OSError: ok = True except: self.fail("caught unexpected exception (TCP)") if not ok: self.fail("accept() returned success when we did not expect it") @unittest.skipUnless(hasattr(signal, 'alarm'), 'test needs signal.alarm()') def testInterruptedTimeout(self): # XXX I don't know how to do this test on MSWindows or any other # platform that doesn't support signal.alarm() or os.kill(), though # the bug should have existed on all platforms. self.serv.settimeout(5.0) # must be longer than alarm class Alarm(Exception): pass def alarm_handler(signal, frame): raise Alarm old_alarm = signal.signal(signal.SIGALRM, alarm_handler) try: try: signal.alarm(2) # POSIX allows alarm to be up to 1 second early foo = self.serv.accept() except TimeoutError: self.fail("caught timeout instead of Alarm") except Alarm: pass except BaseException as e: self.fail("caught other exception instead of Alarm:" " %s(%s):\n%s" % (type(e), e, traceback.format_exc())) else: self.fail("nothing caught") finally: signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: # no alarm can be pending. Safe to restore old handler. 
signal.signal(signal.SIGALRM, old_alarm) class UDPTimeoutTest(SocketUDPTest): def testUDPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDP)") except OSError: ok = True except: self.fail("caught unexpected exception (UDP)") if not ok: self.fail("recv() returned success when we did not expect it") @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class UDPLITETimeoutTest(SocketUDPLITETest): def testUDPLITETimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(TimeoutError, raise_timeout, "Error generating a timeout exception (UDPLITE)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except TimeoutError: self.fail("caught timeout instead of error (UDPLITE)") except OSError: ok = True except: self.fail("caught unexpected exception (UDPLITE)") if not ok: self.fail("recv() returned success when we did not expect it") class TestExceptions(unittest.TestCase): def testExceptionTree(self): self.assertTrue(issubclass(OSError, Exception)) self.assertTrue(issubclass(socket.herror, OSError)) self.assertTrue(issubclass(socket.gaierror, OSError)) self.assertTrue(issubclass(socket.timeout, OSError)) self.assertIs(socket.error, OSError) self.assertIs(socket.timeout, TimeoutError) def test_setblocking_invalidfd(self): # Regression test for issue #28471 sock0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, 0, sock0.fileno()) sock0.close() self.addCleanup(sock.detach) with self.assertRaises(OSError): sock.setblocking(False) @unittest.skipUnless(sys.platform in ('linux', 'android'), 'Linux specific test') class TestLinuxAbstractNamespace(unittest.TestCase): UNIX_PATH_MAX = 108 def testLinuxAbstractNamespace(self): address = b"\x00python-test-hello\x00\xff" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind(address) s1.listen() with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.connect(s1.getsockname()) with s1.accept()[0] as s3: self.assertEqual(s1.getsockname(), address) self.assertEqual(s2.getpeername(), address) def testMaxName(self): address = b"\x00" + b"h" * (self.UNIX_PATH_MAX - 1) with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(address) self.assertEqual(s.getsockname(), address) def testNameOverflow(self): address = "\x00" + "h" * self.UNIX_PATH_MAX with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: self.assertRaises(OSError, s.bind, address) def testStrName(self): # Check that an abstract name can be passed as a string. s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: s.bind("\x00python\x00test\x00") self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") finally: s.close() def testBytearrayName(self): # Check that an abstract name can be passed as a bytearray. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(bytearray(b"\x00python\x00test\x00")) self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") def testAutobind(self): # Check that binding to an empty string binds to an available address # in the abstract namespace as specified in unix(7) "Autobind feature". 
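        # Per unix(7), autobinding assigns an abstract address consisting of
        # a NUL byte followed by 5 bytes from [0-9a-f], which is what the
        # regex below matches.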
abstract_address = b"^\0[0-9a-f]{5}" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind("") self.assertRegex(s1.getsockname(), abstract_address) # Each socket is bound to a different abstract address. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.bind("") self.assertRegex(s2.getsockname(), abstract_address) self.assertNotEqual(s1.getsockname(), s2.getsockname()) @unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX') class TestUnixDomain(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) def tearDown(self): self.sock.close() def encoded(self, path): # Return the given path encoded in the file system encoding, # or skip the test if this is not possible. try: return os.fsencode(path) except UnicodeEncodeError: self.skipTest( "Pathname {0!a} cannot be represented in file " "system encoding {1!r}".format( path, sys.getfilesystemencoding())) def bind(self, sock, path): # Bind the socket try: socket_helper.bind_unix_socket(sock, path) except OSError as e: if str(e) == "AF_UNIX path too long": self.skipTest( "Pathname {0!a} is too long to serve as an AF_UNIX path" .format(path)) else: raise def testUnbound(self): # Issue #30205 (note getsockname() can return None on OS X) self.assertIn(self.sock.getsockname(), ('', None)) def testStrAddr(self): # Test binding to and retrieving a normal string pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testBytesAddr(self): # Test binding to a bytes pathname. path = os.path.abspath(os_helper.TESTFN) self.bind(self.sock, self.encoded(path)) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testSurrogateescapeBind(self): # Test binding to a valid non-ASCII pathname, with the # non-ASCII bytes supplied using surrogateescape encoding. path = os.path.abspath(os_helper.TESTFN_UNICODE) b = self.encoded(path) self.bind(self.sock, b.decode("ascii", "surrogateescape")) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testUnencodableAddr(self): # Test binding to a pathname that cannot be encoded in the # file system encoding. if os_helper.TESTFN_UNENCODABLE is None: self.skipTest("No unencodable filename available") path = os.path.abspath(os_helper.TESTFN_UNENCODABLE) self.bind(self.sock, path) self.addCleanup(os_helper.unlink, path) self.assertEqual(self.sock.getsockname(), path) @unittest.skipIf(sys.platform in ('linux', 'android'), 'Linux behavior is tested by TestLinuxAbstractNamespace') def testEmptyAddress(self): # Test that binding empty address fails. self.assertRaises(OSError, self.sock.bind, "") class BufferIOTest(SocketConnectedTest): """ Test the buffer versions of socket.recv() and socket.send(). 
""" def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecvIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvIntoBytearray(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoBytearray = _testRecvIntoArray def testRecvIntoMemoryview(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoMemoryview = _testRecvIntoArray def testRecvFromIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvFromIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvFromIntoBytearray(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoBytearray = _testRecvFromIntoArray def testRecvFromIntoMemoryview(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoMemoryview = _testRecvFromIntoArray def testRecvFromIntoSmallBuffer(self): # See issue #20246. buf = bytearray(8) self.assertRaises(ValueError, self.cli_conn.recvfrom_into, buf, 1024) def _testRecvFromIntoSmallBuffer(self): self.serv_conn.send(MSG) def testRecvFromIntoEmptyBuffer(self): buf = bytearray() self.cli_conn.recvfrom_into(buf) self.cli_conn.recvfrom_into(buf, 0) _testRecvFromIntoEmptyBuffer = _testRecvFromIntoArray TIPC_STYPE = 2000 TIPC_LOWER = 200 TIPC_UPPER = 210 def isTipcAvailable(): """Check if the TIPC module is loaded The TIPC module is not loaded automatically on Ubuntu and probably other Linux distros. """ if not hasattr(socket, "AF_TIPC"): return False try: f = open("/proc/modules", encoding="utf-8") except (FileNotFoundError, IsADirectoryError, PermissionError): # It's ok if the file does not exist, is a directory or if we # have not the permission to read it. 
return False with f: for line in f: if line.startswith("tipc "): return True return False @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCTest(unittest.TestCase): def testRDM(self): srv = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) cli = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) self.addCleanup(srv.close) self.addCleanup(cli.close) srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) srv.bind(srvaddr) sendaddr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) cli.sendto(MSG, sendaddr) msg, recvaddr = srv.recvfrom(1024) self.assertEqual(cli.getsockname(), recvaddr) self.assertEqual(msg, MSG) @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCThreadableTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName = 'runTest'): unittest.TestCase.__init__(self, methodName = methodName) ThreadableTest.__init__(self) def setUp(self): self.srv = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.srv.close) self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) self.srv.bind(srvaddr) self.srv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.srv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): # There is a hittable race between serverExplicitReady() and the # accept() call; sleep a little while to avoid it, otherwise # we could get an exception time.sleep(0.1) self.cli = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.cli.close) addr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) self.cli.connect(addr) self.cliaddr = self.cli.getsockname() def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) self.assertEqual(self.cliaddr, self.connaddr) def _testStream(self): self.cli.send(MSG) self.cli.close() class ContextManagersTest(ThreadedTCPSocketTest): def _testSocketClass(self): # base test with socket.socket() as sock: self.assertFalse(sock._closed) self.assertTrue(sock._closed) # close inside with block with socket.socket() as sock: sock.close() self.assertTrue(sock._closed) # exception inside with block with socket.socket() as sock: self.assertRaises(OSError, sock.sendall, b'foo') self.assertTrue(sock._closed) def testCreateConnectionBase(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionBase(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: self.assertFalse(sock._closed) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') self.assertTrue(sock._closed) def testCreateConnectionClose(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionClose(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: sock.close() self.assertTrue(sock._closed) self.assertRaises(OSError, sock.sendall, b'foo') class InheritanceTest(unittest.TestCase): @unittest.skipUnless(hasattr(socket, "SOCK_CLOEXEC"), "SOCK_CLOEXEC not defined") @support.requires_linux_version(2, 6, 28) def test_SOCK_CLOEXEC(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC) as s: self.assertEqual(s.type, 
socket.SOCK_STREAM) self.assertFalse(s.get_inheritable()) def test_default_inheritable(self): sock = socket.socket() with sock: self.assertEqual(sock.get_inheritable(), False) def test_dup(self): sock = socket.socket() with sock: newsock = sock.dup() sock.close() with newsock: self.assertEqual(newsock.get_inheritable(), False) def test_set_inheritable(self): sock = socket.socket() with sock: sock.set_inheritable(True) self.assertEqual(sock.get_inheritable(), True) sock.set_inheritable(False) self.assertEqual(sock.get_inheritable(), False) @unittest.skipIf(fcntl is None, "need fcntl") def test_get_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(sock.get_inheritable(), False) # clear FD_CLOEXEC flag flags = fcntl.fcntl(fd, fcntl.F_GETFD) flags &= ~fcntl.FD_CLOEXEC fcntl.fcntl(fd, fcntl.F_SETFD, flags) self.assertEqual(sock.get_inheritable(), True) @unittest.skipIf(fcntl is None, "need fcntl") def test_set_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, fcntl.FD_CLOEXEC) sock.set_inheritable(True) self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, 0) def test_socketpair(self): s1, s2 = socket.socketpair() self.addCleanup(s1.close) self.addCleanup(s2.close) self.assertEqual(s1.get_inheritable(), False) self.assertEqual(s2.get_inheritable(), False) @unittest.skipUnless(hasattr(socket, "SOCK_NONBLOCK"), "SOCK_NONBLOCK not defined") class NonblockConstantTest(unittest.TestCase): def checkNonblock(self, s, nonblock=True, timeout=0.0): if nonblock: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), timeout) self.assertTrue( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) if timeout == 0: # timeout == 0: means that getblocking() must be False. self.assertFalse(s.getblocking()) else: # If timeout > 0, the socket will be in a "blocking" mode # from the standpoint of the Python API. For Python socket # object, "blocking" means that operations like 'sock.recv()' # will block. Internally, file descriptors for # "blocking" Python sockets *with timeouts* are in a # *non-blocking* mode, and 'sock.recv()' uses 'select()' # and handles EWOULDBLOCK/EAGAIN to enforce the timeout. 
self.assertTrue(s.getblocking()) else: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), None) self.assertFalse( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) self.assertTrue(s.getblocking()) @support.requires_linux_version(2, 6, 28) def test_SOCK_NONBLOCK(self): # a lot of it seems silly and redundant, but I wanted to test that # changing back and forth worked ok with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) as s: self.checkNonblock(s) s.setblocking(True) self.checkNonblock(s, nonblock=False) s.setblocking(False) self.checkNonblock(s) s.settimeout(None) self.checkNonblock(s, nonblock=False) s.settimeout(2.0) self.checkNonblock(s, timeout=2.0) s.setblocking(True) self.checkNonblock(s, nonblock=False) # defaulttimeout t = socket.getdefaulttimeout() socket.setdefaulttimeout(0.0) with socket.socket() as s: self.checkNonblock(s) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(2.0) with socket.socket() as s: self.checkNonblock(s, timeout=2.0) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(t) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(multiprocessing, "need multiprocessing") class TestSocketSharing(SocketTCPTest): # This must be classmethod and not staticmethod or multiprocessing # won't be able to bootstrap it. @classmethod def remoteProcessServer(cls, q): # Recreate socket from shared data sdata = q.get() message = q.get() s = socket.fromshare(sdata) s2, c = s.accept() # Send the message s2.sendall(message) s2.close() s.close() def testShare(self): # Transfer the listening server socket to another process # and service it from there. # Create process: q = multiprocessing.Queue() p = multiprocessing.Process(target=self.remoteProcessServer, args=(q,)) p.start() # Get the shared socket data data = self.serv.share(p.pid) # Pass the shared socket to the other process addr = self.serv.getsockname() self.serv.close() q.put(data) # The data that the server will send us message = b"slapmahfro" q.put(message) # Connect s = socket.create_connection(addr) # listen for the data m = [] while True: data = s.recv(100) if not data: break m.append(data) s.close() received = b"".join(m) self.assertEqual(received, message) p.join() def testShareLength(self): data = self.serv.share(os.getpid()) self.assertRaises(ValueError, socket.fromshare, data[:-1]) self.assertRaises(ValueError, socket.fromshare, data+b"foo") def compareSockets(self, org, other): # socket sharing is expected to work only for blocking socket # since the internal python timeout value isn't transferred. self.assertEqual(org.gettimeout(), None) self.assertEqual(org.gettimeout(), other.gettimeout()) self.assertEqual(org.family, other.family) self.assertEqual(org.type, other.type) # If the user specified "0" for proto, then # internally windows will have picked the correct value. # Python introspection on the socket however will still return # 0. For the shared socket, the python value is recreated # from the actual value, so it may not compare correctly. 
if org.proto != 0: self.assertEqual(org.proto, other.proto) def testShareLocal(self): data = self.serv.share(os.getpid()) s = socket.fromshare(data) try: self.compareSockets(self.serv, s) finally: s.close() def testTypes(self): families = [socket.AF_INET, socket.AF_INET6] types = [socket.SOCK_STREAM, socket.SOCK_DGRAM] for f in families: for t in types: try: source = socket.socket(f, t) except OSError: continue # This combination is not supported try: data = source.share(os.getpid()) shared = socket.fromshare(data) try: self.compareSockets(source, shared) finally: shared.close() finally: source.close() class SendfileUsingSendTest(ThreadedTCPSocketTest): """ Test the send() implementation of socket.sendfile(). """ FILESIZE = (10 * 1024 * 1024) # 10 MiB BUFSIZE = 8192 FILEDATA = b"" TIMEOUT = support.LOOPBACK_TIMEOUT @classmethod def setUpClass(cls): def chunks(total, step): assert total >= step while total > step: yield step total -= step if total: yield total chunk = b"".join([random.choice(string.ascii_letters).encode() for i in range(cls.BUFSIZE)]) with open(os_helper.TESTFN, 'wb') as f: for csize in chunks(cls.FILESIZE, cls.BUFSIZE): f.write(chunk) with open(os_helper.TESTFN, 'rb') as f: cls.FILEDATA = f.read() assert len(cls.FILEDATA) == cls.FILESIZE @classmethod def tearDownClass(cls): os_helper.unlink(os_helper.TESTFN) def accept_conn(self): self.serv.settimeout(support.LONG_TIMEOUT) conn, addr = self.serv.accept() conn.settimeout(self.TIMEOUT) self.addCleanup(conn.close) return conn def recv_data(self, conn): received = [] while True: chunk = conn.recv(self.BUFSIZE) if not chunk: break received.append(chunk) return b''.join(received) def meth_from_sock(self, sock): # Depending on the mixin class being run return either send() # or sendfile() method implementation. 
return getattr(sock, "_sendfile_use_send") # regular file def _testRegularFile(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) def testRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # non regular file def _testNonRegularFile(self): address = self.serv.getsockname() file = io.BytesIO(self.FILEDATA) with socket.create_connection(address) as sock, file as file: sent = sock.sendfile(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) self.assertRaises(socket._GiveupOnSendfile, sock._sendfile_use_sendfile, file) def testNonRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # empty file def _testEmptyFileSend(self): address = self.serv.getsockname() filename = os_helper.TESTFN + "2" with open(filename, 'wb'): self.addCleanup(os_helper.unlink, filename) file = open(filename, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, 0) self.assertEqual(file.tell(), 0) def testEmptyFileSend(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(data, b"") # offset def _testOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file, offset=5000) self.assertEqual(sent, self.FILESIZE - 5000) self.assertEqual(file.tell(), self.FILESIZE) def testOffset(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE - 5000) self.assertEqual(data, self.FILEDATA[5000:]) # count def _testCount(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 5000007 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCount(self): count = 5000007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count small def _testCountSmall(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 1 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCountSmall(self): count = 1 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count + offset def _testCountWithOffset(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address, timeout=2) as sock, file as file: count = 100007 meth = self.meth_from_sock(sock) sent = meth(file, offset=2007, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count + 2007) def testCountWithOffset(self): count = 100007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) 
self.assertEqual(data, self.FILEDATA[2007:count+2007]) # non blocking sockets are not supposed to work def _testNonBlocking(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: sock.setblocking(False) meth = self.meth_from_sock(sock) self.assertRaises(ValueError, meth, file) self.assertRaises(ValueError, sock.sendfile, file) def testNonBlocking(self): conn = self.accept_conn() if conn.recv(8192): self.fail('was not supposed to receive any data') # timeout (non-triggered) def _testWithTimeout(self): address = self.serv.getsockname() file = open(os_helper.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) def testWithTimeout(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # timeout (triggered) def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(os_helper.TESTFN, 'rb') as file: with socket.create_connection(address) as sock: sock.settimeout(0.01) meth = self.meth_from_sock(sock) self.assertRaises(TimeoutError, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) # bpo-45212: the wait here needs to be longer than the client-side timeout (0.01s) time.sleep(1) # errors def _test_errors(self): pass def test_errors(self): with open(os_helper.TESTFN, 'rb') as file: with socket.socket(type=socket.SOCK_DGRAM) as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "SOCK_STREAM", meth, file) with open(os_helper.TESTFN, encoding="utf-8") as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "binary mode", meth, file) with open(os_helper.TESTFN, 'rb') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex(TypeError, "positive integer", meth, file, count='2') self.assertRaisesRegex(TypeError, "positive integer", meth, file, count=0.1) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=0) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=-1) @unittest.skipUnless(hasattr(os, "sendfile"), 'os.sendfile() required for this test.') class SendfileUsingSendfileTest(SendfileUsingSendTest): """ Test the sendfile() implementation of socket.sendfile(). 
""" def meth_from_sock(self, sock): return getattr(sock, "_sendfile_use_sendfile") @unittest.skipUnless(HAVE_SOCKET_ALG, 'AF_ALG required') class LinuxKernelCryptoAPI(unittest.TestCase): # tests for AF_ALG def create_alg(self, typ, name): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) try: sock.bind((typ, name)) except FileNotFoundError as e: # type / algorithm is not available sock.close() raise unittest.SkipTest(str(e), typ, name) else: return sock # bpo-31705: On kernel older than 4.5, sendto() failed with ENOKEY, # at least on ppc64le architecture @support.requires_linux_version(4, 5) def test_sha256(self): expected = bytes.fromhex("ba7816bf8f01cfea414140de5dae2223b00361a396" "177a9cb410ff61f20015ad") with self.create_alg('hash', 'sha256') as algo: op, _ = algo.accept() with op: op.sendall(b"abc") self.assertEqual(op.recv(512), expected) op, _ = algo.accept() with op: op.send(b'a', socket.MSG_MORE) op.send(b'b', socket.MSG_MORE) op.send(b'c', socket.MSG_MORE) op.send(b'') self.assertEqual(op.recv(512), expected) def test_hmac_sha1(self): # gh-109396: In FIPS mode, Linux 6.5 requires a key # of at least 112 bits. Use a key of 152 bits. key = b"Python loves AF_ALG" data = b"what do ya want for nothing?" expected = bytes.fromhex("193dbb43c6297b47ea6277ec0ce67119a3f3aa66") with self.create_alg('hash', 'hmac(sha1)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendall(data) self.assertEqual(op.recv(512), expected) # Although it should work with 3.19 and newer the test blocks on # Ubuntu 15.10 with Kernel 4.2.0-19. @support.requires_linux_version(4, 3) def test_aes_cbc(self): key = bytes.fromhex('06a9214036b8a15b512e03d534120006') iv = bytes.fromhex('3dafba429d9eb430b422da802c9fac41') msg = b"Single block msg" ciphertext = bytes.fromhex('e353779c1079aeb82708942dbe77181a') msglen = len(msg) with self.create_alg('skcipher', 'cbc(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, flags=socket.MSG_MORE) op.sendall(msg) self.assertEqual(op.recv(msglen), ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=iv) self.assertEqual(op.recv(msglen), msg) # long message multiplier = 1024 longmsg = [msg] * multiplier op, _ = algo.accept() with op: op.sendmsg_afalg(longmsg, op=socket.ALG_OP_ENCRYPT, iv=iv) enc = op.recv(msglen * multiplier) self.assertEqual(len(enc), msglen * multiplier) self.assertEqual(enc[:msglen], ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([enc], op=socket.ALG_OP_DECRYPT, iv=iv) dec = op.recv(msglen * multiplier) self.assertEqual(len(dec), msglen * multiplier) self.assertEqual(dec, msg * multiplier) @support.requires_linux_version(4, 9) # see issue29324 def test_aead_aes_gcm(self): key = bytes.fromhex('c939cc13397c1d37de6ae0e1cb7c423c') iv = bytes.fromhex('b3d8cc017cbb89b39e0f67e2') plain = bytes.fromhex('c3b3c41f113a31b73d9a5cd432103069') assoc = bytes.fromhex('24825602bd12a984e0092d3e448eda5f') expected_ct = bytes.fromhex('93fe7d9e9bfd10348a5606e5cafa7354') expected_tag = bytes.fromhex('0032a1dc85f1c9786925a2e71d8272dd') taglen = len(expected_tag) assoclen = len(assoc) with self.create_alg('aead', 'gcm(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, taglen) # send assoc, plain and tag buffer in separate steps op, _ = algo.accept() with op: 
op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen, flags=socket.MSG_MORE) op.sendall(assoc, socket.MSG_MORE) op.sendall(plain) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # now with msg op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg_afalg([msg], op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # create anc data manually pack_uint32 = struct.Struct('I').pack op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg( [msg], ([socket.SOL_ALG, socket.ALG_SET_OP, pack_uint32(socket.ALG_OP_ENCRYPT)], [socket.SOL_ALG, socket.ALG_SET_IV, pack_uint32(len(iv)) + iv], [socket.SOL_ALG, socket.ALG_SET_AEAD_ASSOCLEN, pack_uint32(assoclen)], ) ) res = op.recv(len(msg) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # decrypt and verify op, _ = algo.accept() with op: msg = assoc + expected_ct + expected_tag op.sendmsg_afalg([msg], op=socket.ALG_OP_DECRYPT, iv=iv, assoclen=assoclen) res = op.recv(len(msg) - taglen) self.assertEqual(plain, res[assoclen:]) @support.requires_linux_version(4, 3) # see test_aes_cbc def test_drbg_pr_sha256(self): # deterministic random bit generator, prediction resistance, sha256 with self.create_alg('rng', 'drbg_pr_sha256') as algo: extra_seed = os.urandom(32) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, extra_seed) op, _ = algo.accept() with op: rn = op.recv(32) self.assertEqual(len(rn), 32) def test_sendmsg_afalg_args(self): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) with sock: with self.assertRaises(TypeError): sock.sendmsg_afalg() with self.assertRaises(TypeError): sock.sendmsg_afalg(op=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(1) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=-1) def test_length_restriction(self): # bpo-35050, off-by-one error in length check sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) self.addCleanup(sock.close) # salg_type[14] with self.assertRaises(FileNotFoundError): sock.bind(("t" * 13, "name")) with self.assertRaisesRegex(ValueError, "type too long"): sock.bind(("t" * 14, "name")) # salg_name[64] with self.assertRaises(FileNotFoundError): sock.bind(("type", "n" * 63)) with self.assertRaisesRegex(ValueError, "name too long"): sock.bind(("type", "n" * 64)) @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') class TestMacOSTCPFlags(unittest.TestCase): def test_tcp_keepalive(self): self.assertTrue(socket.TCP_KEEPALIVE) @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") class TestMSWindowsTCPFlags(unittest.TestCase): knownTCPFlags = { # available since long time ago 'TCP_MAXSEG', 'TCP_NODELAY', # available starting with Windows 10 1607 'TCP_FASTOPEN', # available starting with Windows 10 1703 'TCP_KEEPCNT', # available starting with Windows 10 1709 'TCP_KEEPIDLE', 'TCP_KEEPINTVL' } def test_new_tcp_flags(self): provided = [s for s in dir(socket) if s.startswith('TCP')] unknown = [s for s in provided if s not in self.knownTCPFlags] self.assertEqual([], unknown, "New TCP flags were discovered. 
See bpo-32394 for more information") class CreateServerTest(unittest.TestCase): def test_address(self): port = socket_helper.find_unused_port() with socket.create_server(("127.0.0.1", port)) as sock: self.assertEqual(sock.getsockname()[0], "127.0.0.1") self.assertEqual(sock.getsockname()[1], port) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", port), family=socket.AF_INET6) as sock: self.assertEqual(sock.getsockname()[0], "::1") self.assertEqual(sock.getsockname()[1], port) def test_family_and_type(self): with socket.create_server(("127.0.0.1", 0)) as sock: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", 0), family=socket.AF_INET6) as s: self.assertEqual(s.family, socket.AF_INET6) self.assertEqual(sock.type, socket.SOCK_STREAM) def test_reuse_port(self): if not hasattr(socket, "SO_REUSEPORT"): with self.assertRaises(ValueError): socket.create_server(("localhost", 0), reuse_port=True) else: with socket.create_server(("localhost", 0)) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertEqual(opt, 0) with socket.create_server(("localhost", 0), reuse_port=True) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertNotEqual(opt, 0) @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6_only_default(self): with socket.create_server(("::1", 0), family=socket.AF_INET6) as sock: assert sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dualstack_ipv6_family(self): with socket.create_server(("::1", 0), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.assertEqual(sock.family, socket.AF_INET6) class CreateServerFunctionalTest(unittest.TestCase): timeout = support.LOOPBACK_TIMEOUT def echo_server(self, sock): def run(sock): with sock: conn, _ = sock.accept() with conn: event.wait(self.timeout) msg = conn.recv(1024) if not msg: return conn.sendall(msg) event = threading.Event() sock.settimeout(self.timeout) thread = threading.Thread(target=run, args=(sock, )) thread.start() self.addCleanup(thread.join, self.timeout) event.set() def echo_client(self, addr, family): with socket.socket(family=family) as sock: sock.settimeout(self.timeout) sock.connect(addr) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') def test_tcp4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port)) as sock: self.echo_server(sock) self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_tcp6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) # --- dual stack tests @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) 
self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) @requireAttrs(socket, "send_fds") @requireAttrs(socket, "recv_fds") @requireAttrs(socket, "AF_UNIX") class SendRecvFdsTests(unittest.TestCase): def testSendAndRecvFds(self): def close_pipes(pipes): for fd1, fd2 in pipes: os.close(fd1) os.close(fd2) def close_fds(fds): for fd in fds: os.close(fd) # send 10 file descriptors pipes = [os.pipe() for _ in range(10)] self.addCleanup(close_pipes, pipes) fds = [rfd for rfd, wfd in pipes] # use a UNIX socket pair to exchange file descriptors locally sock1, sock2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) with sock1, sock2: socket.send_fds(sock1, [MSG], fds) # request more data and file descriptors than expected msg, fds2, flags, addr = socket.recv_fds(sock2, len(MSG) * 2, len(fds) * 2) self.addCleanup(close_fds, fds2) self.assertEqual(msg, MSG) self.assertEqual(len(fds2), len(fds)) self.assertEqual(flags, 0) # don't test addr # test that file descriptors are connected for index, fds in enumerate(pipes): rfd, wfd = fds os.write(wfd, str(index).encode()) for index, rfd in enumerate(fds2): data = os.read(rfd, 100) self.assertEqual(data, str(index).encode()) def setUpModule(): thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_ssl.py000066400000000000000000006767131471441230600205630ustar00rootroot00000000000000# Test the support for SSL and sockets import sys import unittest import unittest.mock from test import support from test.support import import_helper from test.support import os_helper from test.support import socket_helper from test.support import threading_helper from test.support import warnings_helper from test.support import asyncore import array import re import socket import select import struct import time import enum import gc import http.client import os import errno import pprint import urllib.request import threading import traceback import weakref import platform import sysconfig import functools try: import ctypes except ImportError: ctypes = None ssl = import_helper.import_module("ssl") import _ssl from ssl import Purpose, TLSVersion, _TLSContentType, _TLSMessageType, _TLSAlertType Py_DEBUG_WIN32 = support.Py_DEBUG and sys.platform == 'win32' PROTOCOLS = sorted(ssl._PROTOCOL_NAMES) HOST = socket_helper.HOST IS_OPENSSL_3_0_0 = ssl.OPENSSL_VERSION_INFO >= (3, 0, 0) PY_SSL_DEFAULT_CIPHERS = sysconfig.get_config_var('PY_SSL_DEFAULT_CIPHERS') PROTOCOL_TO_TLS_VERSION = {} for proto, ver in ( ("PROTOCOL_SSLv3", "SSLv3"), ("PROTOCOL_TLSv1", "TLSv1"), ("PROTOCOL_TLSv1_1", "TLSv1_1"), ): try: proto = getattr(ssl, proto) ver = getattr(ssl.TLSVersion, ver) except AttributeError: continue PROTOCOL_TO_TLS_VERSION[proto] = ver def data_file(*name): return os.path.join(os.path.dirname(__file__), "certdata", *name) # The custom key and certificate files used in test_ssl are generated # using Lib/test/certdata/make_ssl_certs.py. # Other certificates are simply fetched from the internet servers they # are meant to authenticate. 
CERTFILE = data_file("keycert.pem") BYTES_CERTFILE = os.fsencode(CERTFILE) ONLYCERT = data_file("ssl_cert.pem") ONLYKEY = data_file("ssl_key.pem") BYTES_ONLYCERT = os.fsencode(ONLYCERT) BYTES_ONLYKEY = os.fsencode(ONLYKEY) CERTFILE_PROTECTED = data_file("keycert.passwd.pem") ONLYKEY_PROTECTED = data_file("ssl_key.passwd.pem") KEY_PASSWORD = "somepass" CAPATH = data_file("capath") BYTES_CAPATH = os.fsencode(CAPATH) CAFILE_NEURONIO = data_file("capath", "4e1295a3.0") CAFILE_CACERT = data_file("capath", "5ed36f99.0") CERTFILE_INFO = { 'issuer': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'notAfter': 'Jan 24 04:21:36 2043 GMT', 'notBefore': 'Nov 25 04:21:36 2023 GMT', 'serialNumber': '53E14833F7546C29256DD0F034F776C5E983004C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } # empty CRL CRLFILE = data_file("revocation.crl") # Two keys and certs signed by the same CA (for SNI tests) SIGNED_CERTFILE = data_file("keycert3.pem") SINGED_CERTFILE_ONLY = data_file("cert3.pem") SIGNED_CERTFILE_HOSTNAME = 'localhost' SIGNED_CERTFILE_INFO = { 'OCSP': ('http://testca.pythontest.net/testca/ocsp/',), 'caIssuers': ('http://testca.pythontest.net/testca/pycacert.cer',), 'crlDistributionPoints': ('http://testca.pythontest.net/testca/revocation.crl',), 'issuer': ((('countryName', 'XY'),), (('organizationName', 'Python Software Foundation CA'),), (('commonName', 'our-ca-server'),)), 'notAfter': 'Oct 28 14:23:16 2037 GMT', 'notBefore': 'Aug 29 14:23:16 2018 GMT', 'serialNumber': 'CB2D80995A69525C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } SIGNED_CERTFILE2 = data_file("keycert4.pem") SIGNED_CERTFILE2_HOSTNAME = 'fakehostname' SIGNED_CERTFILE_ECC = data_file("keycertecc.pem") SIGNED_CERTFILE_ECC_HOSTNAME = 'localhost-ecc' # A custom testcase, extracted from `rfc5280::aki::leaf-missing-aki` in x509-limbo: # The leaf (server) certificate has no AKI, which is forbidden under RFC 5280. 
# See: https://x509-limbo.com/testcases/rfc5280/#rfc5280akileaf-missing-aki LEAF_MISSING_AKI_CERTFILE = data_file("leaf-missing-aki.keycert.pem") LEAF_MISSING_AKI_CERTFILE_HOSTNAME = "example.com" LEAF_MISSING_AKI_CA = data_file("leaf-missing-aki.ca.pem") # Same certificate as pycacert.pem, but without extra text in file SIGNING_CA = data_file("capath", "ceff1710.0") # cert with all kinds of subject alt names ALLSANFILE = data_file("allsans.pem") IDNSANSFILE = data_file("idnsans.pem") NOSANFILE = data_file("nosan.pem") NOSAN_HOSTNAME = 'localhost' REMOTE_HOST = "self-signed.pythontest.net" EMPTYCERT = data_file("nullcert.pem") BADCERT = data_file("badcert.pem") NONEXISTINGCERT = data_file("XXXnonexisting.pem") BADKEY = data_file("badkey.pem") NOKIACERT = data_file("nokia.pem") NULLBYTECERT = data_file("nullbytecert.pem") TALOS_INVALID_CRLDP = data_file("talos-2019-0758.pem") DHFILE = data_file("ffdh3072.pem") BYTES_DHFILE = os.fsencode(DHFILE) # Not defined in all versions of OpenSSL OP_NO_COMPRESSION = getattr(ssl, "OP_NO_COMPRESSION", 0) OP_SINGLE_DH_USE = getattr(ssl, "OP_SINGLE_DH_USE", 0) OP_SINGLE_ECDH_USE = getattr(ssl, "OP_SINGLE_ECDH_USE", 0) OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0) OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0) # Ubuntu has patched OpenSSL and changed behavior of security level 2 # see https://bugs.python.org/issue41561#msg389003 def is_ubuntu(): try: # Assume that any references of "ubuntu" implies Ubuntu-like distro # The workaround is not required for 18.04, but doesn't hurt either. with open("/etc/os-release", encoding="utf-8") as f: return "ubuntu" in f.read() except FileNotFoundError: return False if is_ubuntu(): def seclevel_workaround(*ctxs): """"Lower security level to '1' and allow all ciphers for TLS 1.0/1""" for ctx in ctxs: if ( hasattr(ctx, "minimum_version") and ctx.minimum_version <= ssl.TLSVersion.TLSv1_1 ): ctx.set_ciphers("@SECLEVEL=1:ALL") else: def seclevel_workaround(*ctxs): pass def has_tls_protocol(protocol): """Check if a TLS protocol is available and enabled :param protocol: enum ssl._SSLMethod member or name :return: bool """ if isinstance(protocol, str): assert protocol.startswith('PROTOCOL_') protocol = getattr(ssl, protocol, None) if protocol is None: return False if protocol in { ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT }: # auto-negotiate protocols are always available return True name = protocol.name return has_tls_version(name[len('PROTOCOL_'):]) @functools.lru_cache def has_tls_version(version): """Check if a TLS/SSL version is enabled :param version: TLS version name or ssl.TLSVersion member :return: bool """ if isinstance(version, str): version = ssl.TLSVersion.__members__[version] # check compile time flags like ssl.HAS_TLSv1_2 if not getattr(ssl, f'HAS_{version.name}'): return False if IS_OPENSSL_3_0_0 and version < ssl.TLSVersion.TLSv1_2: # bpo43791: 3.0.0-alpha14 fails with TLSV1_ALERT_INTERNAL_ERROR return False # check runtime and dynamic crypto policy settings. A TLS version may # be compiled in but disabled by a policy or config option. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) if ( hasattr(ctx, 'minimum_version') and ctx.minimum_version != ssl.TLSVersion.MINIMUM_SUPPORTED and version < ctx.minimum_version ): return False if ( hasattr(ctx, 'maximum_version') and ctx.maximum_version != ssl.TLSVersion.MAXIMUM_SUPPORTED and version > ctx.maximum_version ): return False return True def requires_tls_version(version): """Decorator to skip tests when a required TLS version is not available :param version: TLS version name or ssl.TLSVersion member :return: """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if not has_tls_version(version): raise unittest.SkipTest(f"{version} is not available.") else: return func(*args, **kw) return wrapper return decorator def handle_error(prefix): exc_format = ' '.join(traceback.format_exception(sys.exception())) if support.verbose: sys.stdout.write(prefix + exc_format) def utc_offset(): #NOTE: ignore issues like #1647654 # local time = utc time + utc offset if time.daylight and time.localtime().tm_isdst > 0: return -time.altzone # seconds return -time.timezone ignore_deprecation = warnings_helper.ignore_warnings( category=DeprecationWarning ) def test_wrap_socket(sock, *, cert_reqs=ssl.CERT_NONE, ca_certs=None, ciphers=None, certfile=None, keyfile=None, **kwargs): if not kwargs.get("server_side"): kwargs["server_hostname"] = SIGNED_CERTFILE_HOSTNAME context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) else: context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) if cert_reqs is not None: if cert_reqs == ssl.CERT_NONE: context.check_hostname = False context.verify_mode = cert_reqs if ca_certs is not None: context.load_verify_locations(ca_certs) if certfile is not None or keyfile is not None: context.load_cert_chain(certfile, keyfile) if ciphers is not None: context.set_ciphers(ciphers) return context.wrap_socket(sock, **kwargs) def testing_context(server_cert=SIGNED_CERTFILE, *, server_chain=True): """Create context client_context, server_context, hostname = testing_context() """ if server_cert == SIGNED_CERTFILE: hostname = SIGNED_CERTFILE_HOSTNAME elif server_cert == SIGNED_CERTFILE2: hostname = SIGNED_CERTFILE2_HOSTNAME elif server_cert == NOSANFILE: hostname = NOSAN_HOSTNAME else: raise ValueError(server_cert) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(server_cert) if server_chain: server_context.load_verify_locations(SIGNING_CA) return client_context, server_context, hostname class BasicSocketTests(unittest.TestCase): def test_constants(self): ssl.CERT_NONE ssl.CERT_OPTIONAL ssl.CERT_REQUIRED ssl.OP_CIPHER_SERVER_PREFERENCE ssl.OP_SINGLE_DH_USE ssl.OP_SINGLE_ECDH_USE ssl.OP_NO_COMPRESSION self.assertEqual(ssl.HAS_SNI, True) self.assertEqual(ssl.HAS_ECDH, True) self.assertEqual(ssl.HAS_TLSv1_2, True) self.assertEqual(ssl.HAS_TLSv1_3, True) ssl.OP_NO_SSLv2 ssl.OP_NO_SSLv3 ssl.OP_NO_TLSv1 ssl.OP_NO_TLSv1_3 ssl.OP_NO_TLSv1_1 ssl.OP_NO_TLSv1_2 self.assertEqual(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv23) def test_options(self): # gh-106687: SSL options values are unsigned integer (uint64_t) for name in dir(ssl): if not name.startswith('OP_'): continue with self.subTest(option=name): value = getattr(ssl, name) self.assertGreaterEqual(value, 0, f"ssl.{name}") def test_ssl_types(self): ssl_types = [ _ssl._SSLContext, _ssl._SSLSocket, _ssl.MemoryBIO, _ssl.Certificate, _ssl.SSLSession, _ssl.SSLError, ] for ssl_type in ssl_types: with 
self.subTest(ssl_type=ssl_type): with self.assertRaisesRegex(TypeError, "immutable type"): ssl_type.value = None support.check_disallow_instantiation(self, _ssl.Certificate) def test_private_init(self): with self.assertRaisesRegex(TypeError, "public constructor"): with socket.socket() as s: ssl.SSLSocket(s) def test_str_for_enums(self): # Make sure that the PROTOCOL_* constants have enum-like string # reprs. proto = ssl.PROTOCOL_TLS_CLIENT self.assertEqual(repr(proto), '<_SSLMethod.PROTOCOL_TLS_CLIENT: %r>' % proto.value) self.assertEqual(str(proto), str(proto.value)) ctx = ssl.SSLContext(proto) self.assertIs(ctx.protocol, proto) def test_random(self): v = ssl.RAND_status() if support.verbose: sys.stdout.write("\n RAND_status is %d (%s)\n" % (v, (v and "sufficient randomness") or "insufficient randomness")) if v: data = ssl.RAND_bytes(16) self.assertEqual(len(data), 16) else: self.assertRaises(ssl.SSLError, ssl.RAND_bytes, 16) # negative num is invalid self.assertRaises(ValueError, ssl.RAND_bytes, -5) ssl.RAND_add("this is a random string", 75.0) ssl.RAND_add(b"this is a random bytes object", 75.0) ssl.RAND_add(bytearray(b"this is a random bytearray object"), 75.0) def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate # parsing code self.assertEqual( ssl._ssl._test_decode_cert(CERTFILE), CERTFILE_INFO ) self.assertEqual( ssl._ssl._test_decode_cert(SIGNED_CERTFILE), SIGNED_CERTFILE_INFO ) # Issue #13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subjectAltName'], (('DNS', 'projects.developer.nokia.com'), ('DNS', 'projects.forum.nokia.com')) ) # extra OCSP and AIA fields self.assertEqual(p['OCSP'], ('http://ocsp.verisign.com',)) self.assertEqual(p['caIssuers'], ('http://SVRIntl-G3-aia.verisign.com/SVRIntlG3.cer',)) self.assertEqual(p['crlDistributionPoints'], ('http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl',)) def test_parse_cert_CVE_2019_5010(self): p = ssl._ssl._test_decode_cert(TALOS_INVALID_CRLDP) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual( p, { 'issuer': ( (('countryName', 'UK'),), (('commonName', 'cody-ca'),)), 'notAfter': 'Jun 14 18:00:58 2028 GMT', 'notBefore': 'Jun 18 18:00:58 2018 GMT', 'serialNumber': '02', 'subject': ((('countryName', 'UK'),), (('commonName', 'codenomicon-vm-2.test.lal.cisco.com'),)), 'subjectAltName': ( ('DNS', 'codenomicon-vm-2.test.lal.cisco.com'),), 'version': 3 } ) def test_parse_cert_CVE_2013_4238(self): p = ssl._ssl._test_decode_cert(NULLBYTECERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") subject = ((('countryName', 'US'),), (('stateOrProvinceName', 'Oregon'),), (('localityName', 'Beaverton'),), (('organizationName', 'Python Software Foundation'),), (('organizationalUnitName', 'Python Core Development'),), (('commonName', 'null.python.org\x00example.org'),), (('emailAddress', 'python-dev@python.org'),)) self.assertEqual(p['subject'], subject) self.assertEqual(p['issuer'], subject) if ssl._OPENSSL_API_VERSION >= (0, 9, 8): san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '2001:DB8:0:0:0:0:0:1')) else: # OpenSSL 0.9.7 doesn't support IPv6 addresses in 
subjectAltName san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '')) self.assertEqual(p['subjectAltName'], san) def test_parse_all_sans(self): p = ssl._ssl._test_decode_cert(ALLSANFILE) self.assertEqual(p['subjectAltName'], ( ('DNS', 'allsans'), ('othername', ''), ('othername', ''), ('email', 'user@example.org'), ('DNS', 'www.example.org'), ('DirName', ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'dirname example'),))), ('URI', 'https://www.python.org/'), ('IP Address', '127.0.0.1'), ('IP Address', '0:0:0:0:0:0:0:1'), ('Registered ID', '1.2.3.4.5') ) ) def test_DER_to_PEM(self): with open(CAFILE_CACERT, 'r') as f: pem = f.read() d1 = ssl.PEM_cert_to_DER_cert(pem) p2 = ssl.DER_cert_to_PEM_cert(d1) d2 = ssl.PEM_cert_to_DER_cert(p2) self.assertEqual(d1, d2) if not p2.startswith(ssl.PEM_HEADER + '\n'): self.fail("DER-to-PEM didn't include correct header:\n%r\n" % p2) if not p2.endswith('\n' + ssl.PEM_FOOTER + '\n'): self.fail("DER-to-PEM didn't include correct footer:\n%r\n" % p2) def test_openssl_version(self): n = ssl.OPENSSL_VERSION_NUMBER t = ssl.OPENSSL_VERSION_INFO s = ssl.OPENSSL_VERSION self.assertIsInstance(n, int) self.assertIsInstance(t, tuple) self.assertIsInstance(s, str) # Some sanity checks follow # >= 1.1.1 self.assertGreaterEqual(n, 0x10101000) # < 4.0 self.assertLess(n, 0x40000000) major, minor, fix, patch, status = t self.assertGreaterEqual(major, 1) self.assertLess(major, 4) self.assertGreaterEqual(minor, 0) self.assertLess(minor, 256) self.assertGreaterEqual(fix, 0) self.assertLess(fix, 256) self.assertGreaterEqual(patch, 0) self.assertLessEqual(patch, 63) self.assertGreaterEqual(status, 0) self.assertLessEqual(status, 15) libressl_ver = f"LibreSSL {major:d}" if major >= 3: # 3.x uses 0xMNN00PP0L openssl_ver = f"OpenSSL {major:d}.{minor:d}.{patch:d}" else: openssl_ver = f"OpenSSL {major:d}.{minor:d}.{fix:d}" self.assertTrue( s.startswith((openssl_ver, libressl_ver, "AWS-LC")), (s, t, hex(n)) ) @support.cpython_only def test_refcycle(self): # Issue #7943: an SSL object doesn't create reference cycles with # itself. s = socket.socket(socket.AF_INET) ss = test_wrap_socket(s) wr = weakref.ref(ss) with warnings_helper.check_warnings(("", ResourceWarning)): del ss self.assertEqual(wr(), None) def test_wrapped_unconnected(self): # Methods on an unconnected SSLSocket propagate the original # OSError raise by the underlying socket object. s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertRaises(OSError, ss.recv, 1) self.assertRaises(OSError, ss.recv_into, bytearray(b'x')) self.assertRaises(OSError, ss.recvfrom, 1) self.assertRaises(OSError, ss.recvfrom_into, bytearray(b'x'), 1) self.assertRaises(OSError, ss.send, b'x') self.assertRaises(OSError, ss.sendto, b'x', ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.dup) self.assertRaises(NotImplementedError, ss.sendmsg, [b'x'], (), 0, ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.recvmsg, 100) self.assertRaises(NotImplementedError, ss.recvmsg_into, [bytearray(100)]) def test_timeout(self): # Issue #8524: when creating an SSL socket, the timeout of the # original socket should be retained. 
for timeout in (None, 0.0, 5.0): s = socket.socket(socket.AF_INET) s.settimeout(timeout) with test_wrap_socket(s) as ss: self.assertEqual(timeout, ss.gettimeout()) def test_openssl111_deprecations(self): options = [ ssl.OP_NO_TLSv1, ssl.OP_NO_TLSv1_1, ssl.OP_NO_TLSv1_2, ssl.OP_NO_TLSv1_3 ] protocols = [ ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS ] versions = [ ssl.TLSVersion.SSLv3, ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ] for option in options: with self.subTest(option=option): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.options |= option self.assertEqual( 'ssl.OP_NO_SSL*/ssl.OP_NO_TLS* options are deprecated', str(cm.warning) ) for protocol in protocols: if not has_tls_protocol(protocol): continue with self.subTest(protocol=protocol): with self.assertWarns(DeprecationWarning) as cm: ssl.SSLContext(protocol) self.assertEqual( f'ssl.{protocol.name} is deprecated', str(cm.warning) ) for version in versions: if not has_tls_version(version): continue with self.subTest(version=version): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertWarns(DeprecationWarning) as cm: ctx.minimum_version = version version_text = '%s.%s' % (version.__class__.__name__, version.name) self.assertEqual( f'ssl.{version_text} is deprecated', str(cm.warning) ) def bad_cert_test(self, certfile): """Check that trying to use the given client certificate fails""" certfile = os.path.join(os.path.dirname(__file__) or os.curdir, "certdata", certfile) sock = socket.socket() self.addCleanup(sock.close) with self.assertRaises(ssl.SSLError): test_wrap_socket(sock, certfile=certfile) def test_empty_cert(self): """Wrapping with an empty cert file""" self.bad_cert_test("nullcert.pem") def test_malformed_cert(self): """Wrapping with a badly formatted certificate (syntax error)""" self.bad_cert_test("badcert.pem") def test_malformed_key(self): """Wrapping with a badly formatted key (syntax error)""" self.bad_cert_test("badkey.pem") def test_server_side(self): # server_hostname doesn't work for server sockets ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with socket.socket() as sock: self.assertRaises(ValueError, ctx.wrap_socket, sock, True, server_hostname="some.hostname") def test_unknown_channel_binding(self): # should raise ValueError for unknown type s = socket.create_server(('127.0.0.1', 0)) c = socket.socket(socket.AF_INET) c.connect(s.getsockname()) with test_wrap_socket(c, do_handshake_on_connect=False) as ss: with self.assertRaises(ValueError): ss.get_channel_binding("unknown-type") s.close() @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): # unconnected should return None for known type s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) # the same for server-side s = socket.socket(socket.AF_INET) with test_wrap_socket(s, server_side=True, certfile=CERTFILE) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) def test_dealloc_warn(self): ss = test_wrap_socket(socket.socket(socket.AF_INET)) r = repr(ss) with self.assertWarns(ResourceWarning) as cm: ss = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) def test_get_default_verify_paths(self): paths = ssl.get_default_verify_paths() self.assertEqual(len(paths), 6) self.assertIsInstance(paths, ssl.DefaultVerifyPaths) with os_helper.EnvironmentVarGuard() as env: 
env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE paths = ssl.get_default_verify_paths() self.assertEqual(paths.cafile, CERTFILE) self.assertEqual(paths.capath, CAPATH) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_certificates(self): self.assertTrue(ssl.enum_certificates("CA")) self.assertTrue(ssl.enum_certificates("ROOT")) self.assertRaises(TypeError, ssl.enum_certificates) self.assertRaises(WindowsError, ssl.enum_certificates, "") trust_oids = set() for storename in ("CA", "ROOT"): store = ssl.enum_certificates(storename) self.assertIsInstance(store, list) for element in store: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 3) cert, enc, trust = element self.assertIsInstance(cert, bytes) self.assertIn(enc, {"x509_asn", "pkcs_7_asn"}) self.assertIsInstance(trust, (frozenset, set, bool)) if isinstance(trust, (frozenset, set)): trust_oids.update(trust) serverAuth = "1.3.6.1.5.5.7.3.1" self.assertIn(serverAuth, trust_oids) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_crls(self): self.assertTrue(ssl.enum_crls("CA")) self.assertRaises(TypeError, ssl.enum_crls) self.assertRaises(WindowsError, ssl.enum_crls, "") crls = ssl.enum_crls("CA") self.assertIsInstance(crls, list) for element in crls: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 2) self.assertIsInstance(element[0], bytes) self.assertIn(element[1], {"x509_asn", "pkcs_7_asn"}) def test_asn1object(self): expected = (129, 'serverAuth', 'TLS Web Server Authentication', '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertEqual(val, expected) self.assertEqual(val.nid, 129) self.assertEqual(val.shortname, 'serverAuth') self.assertEqual(val.longname, 'TLS Web Server Authentication') self.assertEqual(val.oid, '1.3.6.1.5.5.7.3.1') self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object, 'serverAuth') val = ssl._ASN1Object.fromnid(129) self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object.fromnid, -1) with self.assertRaisesRegex(ValueError, "unknown NID 100000"): ssl._ASN1Object.fromnid(100000) for i in range(1000): try: obj = ssl._ASN1Object.fromnid(i) except ValueError: pass else: self.assertIsInstance(obj.nid, int) self.assertIsInstance(obj.shortname, str) self.assertIsInstance(obj.longname, str) self.assertIsInstance(obj.oid, (str, type(None))) val = ssl._ASN1Object.fromname('TLS Web Server Authentication') self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertEqual(ssl._ASN1Object.fromname('serverAuth'), expected) self.assertEqual(ssl._ASN1Object.fromname('1.3.6.1.5.5.7.3.1'), expected) with self.assertRaisesRegex(ValueError, "unknown object 'serverauth'"): ssl._ASN1Object.fromname('serverauth') def test_purpose_enum(self): val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertIsInstance(ssl.Purpose.SERVER_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.SERVER_AUTH, val) self.assertEqual(ssl.Purpose.SERVER_AUTH.nid, 129) self.assertEqual(ssl.Purpose.SERVER_AUTH.shortname, 'serverAuth') self.assertEqual(ssl.Purpose.SERVER_AUTH.oid, '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.2') self.assertIsInstance(ssl.Purpose.CLIENT_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.CLIENT_AUTH, val) self.assertEqual(ssl.Purpose.CLIENT_AUTH.nid, 130) self.assertEqual(ssl.Purpose.CLIENT_AUTH.shortname, 'clientAuth') 
self.assertEqual(ssl.Purpose.CLIENT_AUTH.oid, '1.3.6.1.5.5.7.3.2') def test_unsupported_dtls(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) with self.assertRaises(NotImplementedError) as cx: test_wrap_socket(s, cert_reqs=ssl.CERT_NONE) self.assertEqual(str(cx.exception), "only stream sockets are supported") ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(NotImplementedError) as cx: ctx.wrap_socket(s) self.assertEqual(str(cx.exception), "only stream sockets are supported") def cert_time_ok(self, timestring, timestamp): self.assertEqual(ssl.cert_time_to_seconds(timestring), timestamp) def cert_time_fail(self, timestring): with self.assertRaises(ValueError): ssl.cert_time_to_seconds(timestring) @unittest.skipUnless(utc_offset(), 'local time needs to be different from UTC') def test_cert_time_to_seconds_timezone(self): # Issue #19940: ssl.cert_time_to_seconds() returns wrong # results if local timezone is not UTC self.cert_time_ok("May 9 00:00:00 2007 GMT", 1178668800.0) self.cert_time_ok("Jan 5 09:34:43 2018 GMT", 1515144883.0) def test_cert_time_to_seconds(self): timestring = "Jan 5 09:34:43 2018 GMT" ts = 1515144883.0 self.cert_time_ok(timestring, ts) # accept keyword parameter, assert its name self.assertEqual(ssl.cert_time_to_seconds(cert_time=timestring), ts) # accept both %e and %d (space or zero generated by strftime) self.cert_time_ok("Jan 05 09:34:43 2018 GMT", ts) # case-insensitive self.cert_time_ok("JaN 5 09:34:43 2018 GmT", ts) self.cert_time_fail("Jan 5 09:34 2018 GMT") # no seconds self.cert_time_fail("Jan 5 09:34:43 2018") # no GMT self.cert_time_fail("Jan 5 09:34:43 2018 UTC") # not GMT timezone self.cert_time_fail("Jan 35 09:34:43 2018 GMT") # invalid day self.cert_time_fail("Jon 5 09:34:43 2018 GMT") # invalid month self.cert_time_fail("Jan 5 24:00:00 2018 GMT") # invalid hour self.cert_time_fail("Jan 5 09:60:43 2018 GMT") # invalid minute newyear_ts = 1230768000.0 # leap seconds self.cert_time_ok("Dec 31 23:59:60 2008 GMT", newyear_ts) # same timestamp self.cert_time_ok("Jan 1 00:00:00 2009 GMT", newyear_ts) self.cert_time_ok("Jan 5 09:34:59 2018 GMT", 1515144899) # allow 60th second (even if it is not a leap second) self.cert_time_ok("Jan 5 09:34:60 2018 GMT", 1515144900) # allow 2nd leap second for compatibility with time.strptime() self.cert_time_ok("Jan 5 09:34:61 2018 GMT", 1515144901) self.cert_time_fail("Jan 5 09:34:62 2018 GMT") # invalid seconds # no special treatment for the special value: # 99991231235959Z (rfc 5280) self.cert_time_ok("Dec 31 23:59:59 9999 GMT", 253402300799.0) @support.run_with_locale('LC_ALL', '') def test_cert_time_to_seconds_locale(self): # `cert_time_to_seconds()` should be locale independent def local_february_name(): return time.strftime('%b', (1, 2, 3, 4, 5, 6, 0, 0, 0)) if local_february_name().lower() == 'feb': self.skipTest("locale-specific month name needs to be " "different from C locale") # locale-independent self.cert_time_ok("Feb 9 00:00:00 2007 GMT", 1170979200.0) self.cert_time_fail(local_february_name() + " 9 00:00:00 2007 GMT") def test_connect_ex_error(self): server = socket.socket(socket.AF_INET) self.addCleanup(server.close) port = socket_helper.bind_port(server) # Reserve port but don't listen s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) rc = s.connect_ex((HOST, port)) # Issue #19919: Windows machines or VMs hosted on Windows # machines sometimes return EWOULDBLOCK. 
errors = ( errno.ECONNREFUSED, errno.EHOSTUNREACH, errno.ETIMEDOUT, errno.EWOULDBLOCK, ) self.assertIn(rc, errors) def test_read_write_zero(self): # empty reads and writes now work, bpo-42854, bpo-31711 client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.recv(0), b"") self.assertEqual(s.send(b""), 0) class ContextTests(unittest.TestCase): def test_constructor(self): for protocol in PROTOCOLS: if has_tls_protocol(protocol): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.protocol, protocol) with warnings_helper.check_warnings(): ctx = ssl.SSLContext() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertRaises(ValueError, ssl.SSLContext, -1) self.assertRaises(ValueError, ssl.SSLContext, 42) def test_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers("ALL") ctx.set_ciphers("DEFAULT") with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): ctx.set_ciphers("^$:,;?*'dorothyx") @unittest.skipUnless(PY_SSL_DEFAULT_CIPHERS == 1, "Test applies only to Python default ciphers") def test_python_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ciphers = ctx.get_ciphers() for suite in ciphers: name = suite['name'] self.assertNotIn("PSK", name) self.assertNotIn("SRP", name) self.assertNotIn("MD5", name) self.assertNotIn("RC4", name) self.assertNotIn("3DES", name) def test_get_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers('AESGCM') names = set(d['name'] for d in ctx.get_ciphers()) expected = { 'AES128-GCM-SHA256', 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'DHE-RSA-AES128-GCM-SHA256', 'AES256-GCM-SHA384', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'DHE-RSA-AES256-GCM-SHA384', } intersection = names.intersection(expected) self.assertGreaterEqual( len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}" ) def test_options(self): # Test default SSLContext options ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # OP_ALL | OP_NO_SSLv2 | OP_NO_SSLv3 is the default value default = (ssl.OP_ALL | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) # SSLContext also enables these by default default |= (OP_NO_COMPRESSION | OP_CIPHER_SERVER_PREFERENCE | OP_SINGLE_DH_USE | OP_SINGLE_ECDH_USE | OP_ENABLE_MIDDLEBOX_COMPAT) self.assertEqual(default, ctx.options) # disallow TLSv1 with warnings_helper.check_warnings(): ctx.options |= ssl.OP_NO_TLSv1 self.assertEqual(default | ssl.OP_NO_TLSv1, ctx.options) # allow TLSv1 with warnings_helper.check_warnings(): ctx.options = (ctx.options & ~ssl.OP_NO_TLSv1) self.assertEqual(default, ctx.options) # clear all options ctx.options = 0 # Ubuntu has OP_NO_SSLv3 forced on by default self.assertEqual(0, ctx.options & ~ssl.OP_NO_SSLv3) # invalid options with self.assertRaises(OverflowError): ctx.options = -1 with self.assertRaises(OverflowError): ctx.options = 2 ** 100 with self.assertRaises(TypeError): ctx.options = "abc" def test_verify_mode_protocol(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) # Default value self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) ctx.verify_mode = ssl.CERT_OPTIONAL self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.verify_mode = ssl.CERT_NONE 
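# The CERT_* constants round-trip through the property as members of the
# ssl.VerifyMode IntEnum; assignments outside that set are rejected, which
# is what the TypeError (None) and ValueError (42) checks below pin down.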
self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) with self.assertRaises(TypeError): ctx.verify_mode = None with self.assertRaises(ValueError): ctx.verify_mode = 42 ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) def test_hostname_checks_common_name(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.hostname_checks_common_name) if ssl.HAS_NEVER_CHECK_COMMON_NAME: ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = False self.assertFalse(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) else: with self.assertRaises(AttributeError): ctx.hostname_checks_common_name = True @ignore_deprecation def test_min_max_version(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # OpenSSL default is MINIMUM_SUPPORTED, however some vendors like # Fedora override the setting to TLS 1.0. minimum_range = { # stock OpenSSL ssl.TLSVersion.MINIMUM_SUPPORTED, # Fedora 29 uses TLS 1.0 by default ssl.TLSVersion.TLSv1, # RHEL 8 uses TLS 1.2 by default ssl.TLSVersion.TLSv1_2 } maximum_range = { # stock OpenSSL ssl.TLSVersion.MAXIMUM_SUPPORTED, # Fedora 32 uses TLS 1.3 by default ssl.TLSVersion.TLSv1_3 } self.assertIn( ctx.minimum_version, minimum_range ) self.assertIn( ctx.maximum_version, maximum_range ) ctx.minimum_version = ssl.TLSVersion.TLSv1_1 ctx.maximum_version = ssl.TLSVersion.TLSv1_2 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.TLSv1_1 ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1_2 ) ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED ctx.maximum_version = ssl.TLSVersion.TLSv1 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.MINIMUM_SUPPORTED ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1 ) ctx.maximum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED self.assertIn( ctx.maximum_version, {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3} ) ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertIn( ctx.minimum_version, {ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3} ) with self.assertRaises(ValueError): ctx.minimum_version = 42 if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1) self.assertIn( ctx.minimum_version, minimum_range ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) with self.assertRaises(ValueError): ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED with self.assertRaises(ValueError): ctx.maximum_version = ssl.TLSVersion.TLSv1 @unittest.skipUnless( hasattr(ssl.SSLContext, 'security_level'), "requires OpenSSL >= 1.1.0" ) def test_security_level(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # The default security callback allows for levels between 0-5 # with OpenSSL defaulting to 1, however some vendors override the # default value (e.g. 
Debian defaults to 2) security_level_range = { 0, 1, # OpenSSL default 2, # Debian 3, 4, 5, } self.assertIn(ctx.security_level, security_level_range) def test_verify_flags(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # default value tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT | tf) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_CHAIN self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_CHAIN) ctx.verify_flags = ssl.VERIFY_DEFAULT self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT) ctx.verify_flags = ssl.VERIFY_ALLOW_PROXY_CERTS self.assertEqual(ctx.verify_flags, ssl.VERIFY_ALLOW_PROXY_CERTS) # supports any value ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT) with self.assertRaises(TypeError): ctx.verify_flags = None def test_load_cert_chain(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Combined key and cert in a single file ctx.load_cert_chain(CERTFILE, keyfile=None) ctx.load_cert_chain(CERTFILE, keyfile=CERTFILE) self.assertRaises(TypeError, ctx.load_cert_chain, keyfile=CERTFILE) with self.assertRaises(OSError) as cm: ctx.load_cert_chain(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(BADCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(EMPTYCERT) # Separate key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_cert_chain(ONLYCERT, ONLYKEY) ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) ctx.load_cert_chain(certfile=BYTES_ONLYCERT, keyfile=BYTES_ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(ONLYCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_cert_chain(certfile=ONLYKEY, keyfile=ONLYCERT) # Mismatching key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Allow for flexible libssl error messages. regex = re.compile(r"""( key values mismatch # OpenSSL | KEY_VALUES_MISMATCH # AWS-LC )""", re.X) with self.assertRaisesRegex(ssl.SSLError, regex): ctx.load_cert_chain(CAFILE_CACERT, ONLYKEY) # Password protected key and cert ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD) ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD.encode()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=bytearray(KEY_PASSWORD.encode())) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD.encode()) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, bytearray(KEY_PASSWORD.encode())) with self.assertRaisesRegex(TypeError, "should be a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=True) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password="badpass") with self.assertRaisesRegex(ValueError, "cannot be longer"): # openssl has a fixed limit on the password buffer. # PEM_BUFSIZE is generally set to 1kb. # Return a string larger than this. 
ctx.load_cert_chain(CERTFILE_PROTECTED, password=b'a' * 102400) # Password callback def getpass_unicode(): return KEY_PASSWORD def getpass_bytes(): return KEY_PASSWORD.encode() def getpass_bytearray(): return bytearray(KEY_PASSWORD.encode()) def getpass_badpass(): return "badpass" def getpass_huge(): return b'a' * (1024 * 1024) def getpass_bad_type(): return 9 def getpass_exception(): raise Exception('getpass error') class GetPassCallable: def __call__(self): return KEY_PASSWORD def getpass(self): return KEY_PASSWORD ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_unicode) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytes) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytearray) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable().getpass) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_badpass) with self.assertRaisesRegex(ValueError, "cannot be longer"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_huge) with self.assertRaisesRegex(TypeError, "must return a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bad_type) with self.assertRaisesRegex(Exception, "getpass error"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_exception) # Make sure the password function isn't called if it isn't needed ctx.load_cert_chain(CERTFILE, password=getpass_exception) def test_load_verify_locations(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_verify_locations(CERTFILE) ctx.load_verify_locations(cafile=CERTFILE, capath=None) ctx.load_verify_locations(BYTES_CERTFILE) ctx.load_verify_locations(cafile=BYTES_CERTFILE, capath=None) self.assertRaises(TypeError, ctx.load_verify_locations) self.assertRaises(TypeError, ctx.load_verify_locations, None, None, None) with self.assertRaises(OSError) as cm: ctx.load_verify_locations(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM (lib|routines)"): ctx.load_verify_locations(BADCERT) ctx.load_verify_locations(CERTFILE, CAPATH) ctx.load_verify_locations(CERTFILE, capath=BYTES_CAPATH) # Issue #10989: crash if the second argument type is invalid self.assertRaises(TypeError, ctx.load_verify_locations, None, True) def test_load_verify_cadata(self): # test cadata with open(CAFILE_CACERT) as f: cacert_pem = f.read() cacert_der = ssl.PEM_cert_to_DER_cert(cacert_pem) with open(CAFILE_NEURONIO) as f: neuronio_pem = f.read() neuronio_der = ssl.PEM_cert_to_DER_cert(neuronio_pem) # test PEM ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 0) ctx.load_verify_locations(cadata=cacert_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 1) ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = "\n".join((cacert_pem, neuronio_pem)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # with junk around the certs ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = ["head", cacert_pem, "other", neuronio_pem, "again", neuronio_pem, "tail"] ctx.load_verify_locations(cadata="\n".join(combined)) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # test DER ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=cacert_der) ctx.load_verify_locations(cadata=neuronio_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=cacert_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = b"".join((cacert_der, neuronio_der)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # error cases ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_verify_locations, cadata=object) with self.assertRaisesRegex( ssl.SSLError, "no start line: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata="broken") with self.assertRaisesRegex( ssl.SSLError, "not enough data: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata=b"broken") with self.assertRaises(ssl.SSLError): ctx.load_verify_locations(cadata=cacert_der + b"A") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_load_dh_params(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_dh_params(DHFILE) if os.name != 'nt': ctx.load_dh_params(BYTES_DHFILE) self.assertRaises(TypeError, ctx.load_dh_params) self.assertRaises(TypeError, ctx.load_dh_params, None) with self.assertRaises(FileNotFoundError) as cm: ctx.load_dh_params(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) def test_session_stats(self): for proto in {ssl.PROTOCOL_TLS_CLIENT, ssl.PROTOCOL_TLS_SERVER}: ctx = ssl.SSLContext(proto) self.assertEqual(ctx.session_stats(), { 'number': 0, 'connect': 0, 'connect_good': 0, 'connect_renegotiate': 0, 'accept': 0, 'accept_good': 0, 'accept_renegotiate': 0, 'hits': 0, 'misses': 0, 'timeouts': 0, 'cache_full': 0, }) def test_set_default_verify_paths(self): # There's not much we can do to test that it acts as expected, # so just check it doesn't crash or raise an exception. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_default_verify_paths() @unittest.skipUnless(ssl.HAS_ECDH, "ECDH disabled on this OpenSSL build") def test_set_ecdh_curve(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.set_ecdh_curve("prime256v1") ctx.set_ecdh_curve(b"prime256v1") self.assertRaises(TypeError, ctx.set_ecdh_curve) self.assertRaises(TypeError, ctx.set_ecdh_curve, None) self.assertRaises(ValueError, ctx.set_ecdh_curve, "foo") self.assertRaises(ValueError, ctx.set_ecdh_curve, b"foo") def test_sni_callback(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # set_servername_callback expects a callable, or None self.assertRaises(TypeError, ctx.set_servername_callback) self.assertRaises(TypeError, ctx.set_servername_callback, 4) self.assertRaises(TypeError, ctx.set_servername_callback, "") self.assertRaises(TypeError, ctx.set_servername_callback, ctx) def dummycallback(sock, servername, ctx): pass ctx.set_servername_callback(None) ctx.set_servername_callback(dummycallback) def test_sni_callback_refcycle(self): # Reference cycles through the servername callback are detected # and cleared. 
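# The cycle here is deliberate: the callback's default argument keeps the
# context alive, and set_servername_callback() makes the context keep the
# callback alive.  The weakref + gc.collect() idiom below is the standard
# way to prove such a cycle is still collectable; the same skeleton works
# for any suspected cycle (sketch with a hypothetical factory):
#
#   obj = make_object_with_cycle()
#   ref = weakref.ref(obj)
#   del obj
#   gc.collect()
#   assert ref() is None   # the cycle did not leak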
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def dummycallback(sock, servername, ctx, cycle=ctx): pass ctx.set_servername_callback(dummycallback) wr = weakref.ref(ctx) del ctx, dummycallback gc.collect() self.assertIs(wr(), None) def test_cert_store_stats(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_cert_chain(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 1}) ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 1, 'crl': 0, 'x509': 2}) def test_get_ca_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.get_ca_certs(), []) # CERTFILE is not flagged as X509v3 Basic Constraints: CA:TRUE ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.get_ca_certs(), []) # but CAFILE_CACERT is a CA cert ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.get_ca_certs(), [{'issuer': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'notAfter': 'Mar 29 12:29:49 2033 GMT', 'notBefore': 'Mar 30 12:29:49 2003 GMT', 'serialNumber': '00', 'crlDistributionPoints': ('https://www.cacert.org/revoke.crl',), 'subject': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'version': 3}]) with open(CAFILE_CACERT) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) self.assertEqual(ctx.get_ca_certs(True), [der]) def test_load_default_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.SERVER_AUTH) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_default_certs, None) self.assertRaises(TypeError, ctx.load_default_certs, 'SERVER_AUTH') @unittest.skipIf(sys.platform == "win32", "not-Windows specific") def test_load_default_certs_env(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() self.assertEqual(ctx.cert_store_stats(), {"crl": 0, "x509": 1, "x509_ca": 0}) @unittest.skipUnless(sys.platform == "win32", "Windows specific") @unittest.skipIf(support.Py_DEBUG, "Debug build does not share environment between CRTs") def test_load_default_certs_env_windows(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() stats = ctx.cert_store_stats() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with os_helper.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() stats["x509"] += 1 self.assertEqual(ctx.cert_store_stats(), stats) def _assert_context_options(self, ctx): self.assertEqual(ctx.options & ssl.OP_NO_SSLv2, ssl.OP_NO_SSLv2) if OP_NO_COMPRESSION != 0: self.assertEqual(ctx.options & OP_NO_COMPRESSION, OP_NO_COMPRESSION) if OP_SINGLE_DH_USE != 0: self.assertEqual(ctx.options & OP_SINGLE_DH_USE, OP_SINGLE_DH_USE) if OP_SINGLE_ECDH_USE != 0: self.assertEqual(ctx.options 
& OP_SINGLE_ECDH_USE, OP_SINGLE_ECDH_USE) if OP_CIPHER_SERVER_PREFERENCE != 0: self.assertEqual(ctx.options & OP_CIPHER_SERVER_PREFERENCE, OP_CIPHER_SERVER_PREFERENCE) self.assertEqual(ctx.options & ssl.OP_LEGACY_SERVER_CONNECT, 0 if IS_OPENSSL_3_0_0 else ssl.OP_LEGACY_SERVER_CONNECT) def test_create_default_context(self): ctx = ssl.create_default_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.verify_flags & ssl.VERIFY_X509_PARTIAL_CHAIN, ssl.VERIFY_X509_PARTIAL_CHAIN) self.assertEqual(ctx.verify_flags & ssl.VERIFY_X509_STRICT, ssl.VERIFY_X509_STRICT) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) with open(SIGNING_CA) as f: cadata = f.read() ctx = ssl.create_default_context(cafile=SIGNING_CA, capath=CAPATH, cadata=cadata) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self._assert_context_options(ctx) ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test__create_stdlib_context(self): ctx = ssl._create_stdlib_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) self._assert_context_options(ctx) if has_tls_protocol(ssl.PROTOCOL_TLSv1): with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context( ssl.PROTOCOL_TLSv1_2, cert_reqs=ssl.CERT_REQUIRED, check_hostname=True ) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1_2) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) ctx = ssl._create_stdlib_context(purpose=ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test_check_hostname(self): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set CERT_REQUIRED ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_REQUIRED self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # Changing verify_mode does not affect check_hostname ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_OPTIONAL ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # keep CERT_OPTIONAL ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # Cannot set CERT_NONE with check_hostname enabled with self.assertRaises(ValueError): ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False 
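# Order matters when building a "no verification" client context:
# check_hostname has to be switched off before verify_mode may drop to
# CERT_NONE (the ValueError above enforces this).  The usual recipe is:
#
#   ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
#   ctx.check_hostname = False       # first
#   ctx.verify_mode = ssl.CERT_NONE  # then this assignment is accepted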
self.assertFalse(ctx.check_hostname) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_client_server(self): # PROTOCOL_TLS_CLIENT has sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # PROTOCOL_TLS_SERVER has different but also sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_custom_class(self): class MySSLSocket(ssl.SSLSocket): pass class MySSLObject(ssl.SSLObject): pass ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.sslsocket_class = MySSLSocket ctx.sslobject_class = MySSLObject with ctx.wrap_socket(socket.socket(), server_side=True) as sock: self.assertIsInstance(sock, MySSLSocket) obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_side=True) self.assertIsInstance(obj, MySSLObject) def test_num_tickest(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.num_tickets, 2) ctx.num_tickets = 1 self.assertEqual(ctx.num_tickets, 1) ctx.num_tickets = 0 self.assertEqual(ctx.num_tickets, 0) with self.assertRaises(ValueError): ctx.num_tickets = -1 with self.assertRaises(TypeError): ctx.num_tickets = None ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.num_tickets, 2) with self.assertRaises(ValueError): ctx.num_tickets = 1 class SSLErrorTests(unittest.TestCase): def test_str(self): # The str() of a SSLError doesn't include the errno e = ssl.SSLError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) # Same for a subclass e = ssl.SSLZeroReturnError(1, "foo") self.assertEqual(str(e), "foo") self.assertEqual(e.errno, 1) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_lib_reason(self): # Test the library and reason attributes ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) self.assertEqual(cm.exception.library, 'PEM') regex = "(NO_START_LINE|UNSUPPORTED_PUBLIC_KEY_TYPE)" self.assertRegex(cm.exception.reason, regex) s = str(cm.exception) self.assertTrue("NO_START_LINE" in s, s) def test_subclass(self): # Check that the appropriate SSLError subclass is raised # (this only tests one of them) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with socket.create_server(("127.0.0.1", 0)) as s: c = socket.create_connection(s.getsockname()) c.setblocking(False) with ctx.wrap_socket(c, False, do_handshake_on_connect=False) as c: with self.assertRaises(ssl.SSLWantReadError) as cm: c.do_handshake() s = str(cm.exception) self.assertTrue(s.startswith("The operation did not complete (read)"), s) # For compatibility self.assertEqual(cm.exception.errno, ssl.SSL_ERROR_WANT_READ) def test_bad_server_hostname(self): ctx = ssl.create_default_context() with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="") with self.assertRaises(ValueError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname=".example.org") with self.assertRaises(TypeError): ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(), server_hostname="example.org\x00evil.com") class MemoryBIOTests(unittest.TestCase): def test_read_write(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') self.assertEqual(bio.read(), b'') bio.write(b'foo') bio.write(b'bar') self.assertEqual(bio.read(), 
b'foobar') self.assertEqual(bio.read(), b'') bio.write(b'baz') self.assertEqual(bio.read(2), b'ba') self.assertEqual(bio.read(1), b'z') self.assertEqual(bio.read(1), b'') def test_eof(self): bio = ssl.MemoryBIO() self.assertFalse(bio.eof) self.assertEqual(bio.read(), b'') self.assertFalse(bio.eof) bio.write(b'foo') self.assertFalse(bio.eof) bio.write_eof() self.assertFalse(bio.eof) self.assertEqual(bio.read(2), b'fo') self.assertFalse(bio.eof) self.assertEqual(bio.read(1), b'o') self.assertTrue(bio.eof) self.assertEqual(bio.read(), b'') self.assertTrue(bio.eof) def test_pending(self): bio = ssl.MemoryBIO() self.assertEqual(bio.pending, 0) bio.write(b'foo') self.assertEqual(bio.pending, 3) for i in range(3): bio.read(1) self.assertEqual(bio.pending, 3-i-1) for i in range(3): bio.write(b'x') self.assertEqual(bio.pending, i+1) bio.read() self.assertEqual(bio.pending, 0) def test_buffer_types(self): bio = ssl.MemoryBIO() bio.write(b'foo') self.assertEqual(bio.read(), b'foo') bio.write(bytearray(b'bar')) self.assertEqual(bio.read(), b'bar') bio.write(memoryview(b'baz')) self.assertEqual(bio.read(), b'baz') m = memoryview(bytearray(b'noncontig')) noncontig_writable = m[::-2] with self.assertRaises(BufferError): bio.write(memoryview(noncontig_writable)) def test_error_types(self): bio = ssl.MemoryBIO() self.assertRaises(TypeError, bio.write, 'foo') self.assertRaises(TypeError, bio.write, None) self.assertRaises(TypeError, bio.write, True) self.assertRaises(TypeError, bio.write, 1) class SSLObjectTests(unittest.TestCase): def test_private_init(self): bio = ssl.MemoryBIO() with self.assertRaisesRegex(TypeError, "public constructor"): ssl.SSLObject(bio, bio) def test_unwrap(self): client_ctx, server_ctx, hostname = testing_context() c_in = ssl.MemoryBIO() c_out = ssl.MemoryBIO() s_in = ssl.MemoryBIO() s_out = ssl.MemoryBIO() client = client_ctx.wrap_bio(c_in, c_out, server_hostname=hostname) server = server_ctx.wrap_bio(s_in, s_out, server_side=True) # Loop on the handshake for a bit to get it settled for _ in range(5): try: client.do_handshake() except ssl.SSLWantReadError: pass if c_out.pending: s_in.write(c_out.read()) try: server.do_handshake() except ssl.SSLWantReadError: pass if s_out.pending: c_in.write(s_out.read()) # Now the handshakes should be complete (don't raise WantReadError) client.do_handshake() server.do_handshake() # Now if we unwrap one side unilaterally, it should send close-notify # and raise WantReadError: with self.assertRaises(ssl.SSLWantReadError): client.unwrap() # But server.unwrap() does not raise, because it reads the client's # close-notify: s_in.write(c_out.read()) server.unwrap() # And now that the client gets the server's close-notify, it doesn't # raise either. 
c_in.write(s_out.read()) client.unwrap() class SimpleBackgroundTests(unittest.TestCase): """Tests that connect to a simple server running in the background""" def setUp(self): self.server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.server_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=self.server_context) self.enterContext(server) self.server_addr = (HOST, server.port) def test_connect(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) self.assertFalse(s.server_side) # this should succeed because we specify the root cert with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) as s: s.connect(self.server_addr) self.assertTrue(s.getpeercert()) self.assertFalse(s.server_side) def test_connect_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRaisesRegex(ssl.SSLError, regex, s.connect, self.server_addr) def test_connect_ex(self): # Issue #11326: check connect_ex() implementation s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) self.addCleanup(s.close) self.assertEqual(0, s.connect_ex(self.server_addr)) self.assertTrue(s.getpeercert()) def test_non_blocking_connect_ex(self): # Issue #11326: non-blocking connect_ex() should allow handshake # to proceed after the socket gets ready. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA, do_handshake_on_connect=False) self.addCleanup(s.close) s.setblocking(False) rc = s.connect_ex(self.server_addr) # EWOULDBLOCK under Windows, EINPROGRESS elsewhere self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) # Wait for connect to finish select.select([], [s], [], 5.0) # Non-blocking handshake while True: try: s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], [], 5.0) except ssl.SSLWantWriteError: select.select([], [s], [], 5.0) # SSL established self.assertTrue(s.getpeercert()) def test_connect_with_context(self): # Same as test_connect, but with a separately created context ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) # Same with a server hostname with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname="dummy") as s: s.connect(self.server_addr) ctx.verify_mode = ssl.CERT_REQUIRED # This should succeed because we specify the root cert ctx.load_verify_locations(SIGNING_CA) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_with_context_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) s = ctx.wrap_socket( socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME ) self.addCleanup(s.close) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRaisesRegex(ssl.SSLError, regex, s.connect, self.server_addr) def test_connect_capath(self): # Verify server certificates using the `capath` argument # NOTE: the subject hashing algorithm has been changed between # OpenSSL 0.9.8n and 1.0.0, as a result the capath directory must # contain both versions of each certificate (same content, different # filename) for this test to be portable across OpenSSL releases. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # Same with a bytes `capath` argument ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=BYTES_CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_cadata(self): with open(SIGNING_CA) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=pem) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # same with DER ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=der) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't # delay closing the underlying "real socket" (here tested with its # file descriptor, hence skipping the test under Windows). 
ss = test_wrap_socket(socket.socket(socket.AF_INET)) ss.connect(self.server_addr) fd = ss.fileno() f = ss.makefile() f.close() # The fd is still open os.read(fd, 0) # Closing the SSL socket should close the fd too ss.close() gc.collect() with self.assertRaises(OSError) as e: os.read(fd, 0) self.assertEqual(e.exception.errno, errno.EBADF) def test_non_blocking_handshake(self): s = socket.socket(socket.AF_INET) s.connect(self.server_addr) s.setblocking(False) s = test_wrap_socket(s, cert_reqs=ssl.CERT_NONE, do_handshake_on_connect=False) self.addCleanup(s.close) count = 0 while True: try: count += 1 s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], []) except ssl.SSLWantWriteError: select.select([], [s], []) if support.verbose: sys.stdout.write("\nNeeded %d calls to do_handshake() to establish session.\n" % count) def test_get_server_certificate(self): _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) def test_get_server_certificate_sni(self): host, port = self.server_addr server_names = [] # We store servername_cb arguments to make sure they match the host def servername_cb(ssl_sock, server_name, initial_context): server_names.append(server_name) self.server_context.set_servername_callback(servername_cb) pem = ssl.get_server_certificate((host, port)) if not pem: self.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=SIGNING_CA) if not pem: self.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port, pem)) self.assertEqual(server_names, [host, host]) def test_get_server_certificate_fail(self): # Connection failure crashes ThreadedEchoServer, so run this in an # independent test method _test_get_server_certificate_fail(self, *self.server_addr) def test_get_server_certificate_timeout(self): def servername_cb(ssl_sock, server_name, initial_context): time.sleep(0.2) self.server_context.set_servername_callback(servername_cb) with self.assertRaises(socket.timeout): ssl.get_server_certificate(self.server_addr, ca_certs=SIGNING_CA, timeout=0.1) def test_ciphers(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="ALL") as s: s.connect(self.server_addr) with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="DEFAULT") as s: s.connect(self.server_addr) # Error checking can happen at instantiation or when connecting with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): with socket.socket(socket.AF_INET) as sock: s = test_wrap_socket(sock, cert_reqs=ssl.CERT_NONE, ciphers="^$:,;?*'dorothyx") s.connect(self.server_addr) def test_get_ca_certs_capath(self): # capath certs are loaded on request ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) self.assertEqual(ctx.get_ca_certs(), []) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname='localhost') as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) self.assertEqual(len(ctx.get_ca_certs()), 1) def test_context_setget(self): # Check that the context of a connected socket can be replaced. 
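# Swapping .context on a live socket is the same mechanism an SNI
# servername callback uses to switch certificates per hostname, e.g.:
#
#   def pick_cert(sslsock, server_name, initial_ctx):
#       sslsock.context = contexts_by_name.get(server_name, initial_ctx)
#   server_context.set_servername_callback(pick_cert)
#
# where contexts_by_name is a hypothetical mapping kept by the server.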
ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx1.load_verify_locations(capath=CAPATH) ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx2.load_verify_locations(capath=CAPATH) s = socket.socket(socket.AF_INET) with ctx1.wrap_socket(s, server_hostname='localhost') as ss: ss.connect(self.server_addr) self.assertIs(ss.context, ctx1) self.assertIs(ss._sslobj.context, ctx1) ss.context = ctx2 self.assertIs(ss.context, ctx2) self.assertIs(ss._sslobj.context, ctx2) def ssl_io_loop(self, sock, incoming, outgoing, func, *args, **kwargs): # A simple IO loop. Call func(*args) depending on the error we get # (WANT_READ or WANT_WRITE) move data between the socket and the BIOs. timeout = kwargs.get('timeout', support.SHORT_TIMEOUT) count = 0 for _ in support.busy_retry(timeout): errno = None count += 1 try: ret = func(*args) except ssl.SSLError as e: if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): raise errno = e.errno # Get any data from the outgoing BIO irrespective of any error, and # send it to the socket. buf = outgoing.read() sock.sendall(buf) # If there's no error, we're done. For WANT_READ, we need to get # data from the socket and put it in the incoming BIO. if errno is None: break elif errno == ssl.SSL_ERROR_WANT_READ: buf = sock.recv(32768) if buf: incoming.write(buf) else: incoming.write_eof() if support.verbose: sys.stdout.write("Needed %d calls to complete %s().\n" % (count, func.__name__)) return ret def test_bio_handshake(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.load_verify_locations(SIGNING_CA) sslobj = ctx.wrap_bio(incoming, outgoing, False, SIGNED_CERTFILE_HOSTNAME) self.assertIs(sslobj._sslobj.owner, sslobj) self.assertIsNone(sslobj.cipher()) self.assertIsNone(sslobj.version()) self.assertIsNone(sslobj.shared_ciphers()) self.assertRaises(ValueError, sslobj.getpeercert) # tls-unique is not defined for TLSv1.3 # https://datatracker.ietf.org/doc/html/rfc8446#appendix-C.5 if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES and sslobj.version() != "TLSv1.3": self.assertIsNone(sslobj.get_channel_binding('tls-unique')) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) self.assertTrue(sslobj.cipher()) self.assertIsNone(sslobj.shared_ciphers()) self.assertIsNotNone(sslobj.version()) self.assertTrue(sslobj.getpeercert()) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES and sslobj.version() != "TLSv1.3": self.assertTrue(sslobj.get_channel_binding('tls-unique')) try: self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) except ssl.SSLSyscallError: # If the server shuts down the TCP connection without sending a # secure shutdown message, this is reported as SSL_ERROR_SYSCALL pass self.assertRaises(ssl.SSLError, sslobj.write, b'foo') def test_bio_read_write_data(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE sslobj = ctx.wrap_bio(incoming, outgoing, False) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) req = b'FOO\n' self.ssl_io_loop(sock, incoming, outgoing, sslobj.write, req) buf = self.ssl_io_loop(sock, incoming, outgoing, sslobj.read, 1024) self.assertEqual(buf, b'foo\n') 
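# The wrap_bio() tests in this class all follow the same pump pattern:
# call the SSLObject method, flush whatever it queued in the outgoing BIO
# to the transport, and feed transport bytes into the incoming BIO whenever
# WANT_READ is reported.  A stripped-down sketch of that loop (hypothetical
# helper, not used by the tests themselves):
#
#   def pump(sock, incoming, outgoing, op, *args):
#       while True:
#           try:
#               result = op(*args)
#               sock.sendall(outgoing.read())
#               return result
#           except ssl.SSLWantReadError:
#               sock.sendall(outgoing.read())
#               data = sock.recv(32768)
#               if data:
#                   incoming.write(data)
#               else:
#                   incoming.write_eof()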
self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) def test_transport_eof(self): client_context, server_context, hostname = testing_context() with socket.socket(socket.AF_INET) as sock: sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() sslobj = client_context.wrap_bio(incoming, outgoing, server_hostname=hostname) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) # Simulate EOF from the transport. incoming.write_eof() self.assertRaises(ssl.SSLEOFError, sslobj.read) @support.requires_resource('network') class NetworkedTests(unittest.TestCase): def test_timeout_connect_ex(self): # Issue #12065: on a timeout, connect_ex() should return the original # errno (mimicking the behaviour of non-SSL sockets). with socket_helper.transient_internet(REMOTE_HOST): s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, do_handshake_on_connect=False) self.addCleanup(s.close) s.settimeout(0.0000001) rc = s.connect_ex((REMOTE_HOST, 443)) if rc == 0: self.skipTest("REMOTE_HOST responded too quickly") elif rc == errno.ENETUNREACH: self.skipTest("Network unreachable.") self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6') @support.requires_resource('walltime') def test_get_server_certificate_ipv6(self): with socket_helper.transient_internet('ipv6.google.com'): _test_get_server_certificate(self, 'ipv6.google.com', 443) _test_get_server_certificate_fail(self, 'ipv6.google.com', 443) def _test_get_server_certificate(test, host, port, cert=None): pem = ssl.get_server_certificate((host, port)) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=cert) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port ,pem)) def _test_get_server_certificate_fail(test, host, port): with warnings_helper.check_no_resource_warning(test): try: pem = ssl.get_server_certificate((host, port), ca_certs=CERTFILE) except ssl.SSLError as x: #should fail if support.verbose: sys.stdout.write("%s\n" % x) else: test.fail("Got server certificate %s for %s:%s!" % (pem, host, port)) from test.ssl_servers import make_https_server class ThreadedEchoServer(threading.Thread): class ConnectionHandler(threading.Thread): """A mildly complicated class, because we want it to work both with and without the SSL wrapper around the socket connection, so that we can test the STARTTLS functionality.""" def __init__(self, server, connsock, addr): self.server = server self.running = False self.sock = connsock self.addr = addr self.sock.setblocking(True) self.sslconn = None threading.Thread.__init__(self) self.daemon = True def wrap_conn(self): try: self.sslconn = self.server.context.wrap_socket( self.sock, server_side=True) self.server.selected_alpn_protocols.append(self.sslconn.selected_alpn_protocol()) except (ConnectionResetError, BrokenPipeError, ConnectionAbortedError) as e: # We treat ConnectionResetError as though it were an # SSLError - OpenSSL on Ubuntu abruptly closes the # connection when asked to use an unsupported protocol. # # BrokenPipeError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake. # https://github.com/openssl/openssl/issues/6342 # # ConnectionAbortedError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake when using WinSock. 
self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") self.running = False self.close() return False except (ssl.SSLError, OSError) as e: # OSError may occur with wrong protocols, e.g. both # sides use PROTOCOL_TLS_SERVER. # # XXX Various errors can have happened here, for example # a mismatching protocol version, an invalid certificate, # or a low-level bug. This should be made more discriminating. # # bpo-31323: Store the exception as string to prevent # a reference leak: server -> conn_errors -> exception # -> traceback -> self (ConnectionHandler) -> server self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") # bpo-44229, bpo-43855, bpo-44237, and bpo-33450: # Ignore spurious EPROTOTYPE returned by write() on macOS. # See also http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno != errno.EPROTOTYPE and sys.platform != "darwin": self.running = False self.server.stop() self.close() return False else: self.server.shared_ciphers.append(self.sslconn.shared_ciphers()) if self.server.context.verify_mode == ssl.CERT_REQUIRED: cert = self.sslconn.getpeercert() if support.verbose and self.server.chatty: sys.stdout.write(" client cert is " + pprint.pformat(cert) + "\n") cert_binary = self.sslconn.getpeercert(True) if support.verbose and self.server.chatty: if cert_binary is None: sys.stdout.write(" client did not provide a cert\n") else: sys.stdout.write(f" cert binary is {len(cert_binary)}b\n") cipher = self.sslconn.cipher() if support.verbose and self.server.chatty: sys.stdout.write(" server: connection cipher is now " + str(cipher) + "\n") return True def read(self): if self.sslconn: return self.sslconn.read() else: return self.sock.recv(1024) def write(self, bytes): if self.sslconn: return self.sslconn.write(bytes) else: return self.sock.send(bytes) def close(self): if self.sslconn: self.sslconn.close() else: self.sock.close() def run(self): self.running = True if not self.server.starttls_server: if not self.wrap_conn(): return while self.running: try: msg = self.read() stripped = msg.strip() if not stripped: # eof, so quit this handler self.running = False try: self.sock = self.sslconn.unwrap() except OSError: # Many tests shut the TCP connection down # without an SSL shutdown. This causes # unwrap() to raise OSError with errno=0! 
pass else: self.sslconn = None self.close() elif stripped == b'over': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: client closed connection\n") self.close() return elif (self.server.starttls_server and stripped == b'STARTTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read STARTTLS from client, sending OK...\n") self.write(b"OK\n") if not self.wrap_conn(): return elif (self.server.starttls_server and self.sslconn and stripped == b'ENDTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read ENDTLS from client, sending OK...\n") self.write(b"OK\n") self.sock = self.sslconn.unwrap() self.sslconn = None if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: connection is now unencrypted...\n") elif stripped == b'CB tls-unique': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read CB tls-unique from client, sending our CB data...\n") data = self.sslconn.get_channel_binding("tls-unique") self.write(repr(data).encode("us-ascii") + b"\n") elif stripped == b'PHA': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: initiating post handshake auth\n") try: self.sslconn.verify_client_post_handshake() except ssl.SSLError as e: self.write(repr(e).encode("us-ascii") + b"\n") else: self.write(b"OK\n") elif stripped == b'HASCERT': if self.sslconn.getpeercert() is not None: self.write(b'TRUE\n') else: self.write(b'FALSE\n') elif stripped == b'GETCERT': cert = self.sslconn.getpeercert() self.write(repr(cert).encode("us-ascii") + b"\n") elif stripped == b'VERIFIEDCHAIN': certs = self.sslconn._sslobj.get_verified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") elif stripped == b'UNVERIFIEDCHAIN': certs = self.sslconn._sslobj.get_unverified_chain() self.write(len(certs).to_bytes(1, "big") + b"\n") else: if (support.verbose and self.server.connectionchatty): ctype = (self.sslconn and "encrypted") or "unencrypted" sys.stdout.write(" server: read %r (%s), sending back %r (%s)...\n" % (msg, ctype, msg.lower(), ctype)) self.write(msg.lower()) except OSError as e: # handles SSLError and socket errors if isinstance(e, ConnectionError): # OpenSSL 1.1.1 sometimes raises # ConnectionResetError when connection is not # shut down gracefully. 
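# The dispatch above gives the echo server a tiny line-oriented control
# protocol: b'over' closes the connection, b'STARTTLS'/b'ENDTLS' switch TLS
# on and off for the STARTTLS tests, b'CB tls-unique' returns the channel
# binding, b'PHA' triggers post-handshake auth, b'HASCERT'/b'GETCERT'/
# b'VERIFIEDCHAIN'/b'UNVERIFIEDCHAIN' report client-certificate state, and
# anything else is echoed back lower-cased.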
if self.server.chatty and support.verbose: print(f" Connection reset by peer: {self.addr}") self.close() self.running = False return if self.server.chatty and support.verbose: handle_error("Test server failure:\n") try: self.write(b"ERROR\n") except OSError: pass self.close() self.running = False # normally, we'd just stop here, but for the test # harness, we want to stop the server self.server.stop() def __init__(self, certificate=None, ssl_version=None, certreqs=None, cacerts=None, chatty=True, connectionchatty=False, starttls_server=False, alpn_protocols=None, ciphers=None, context=None): if context: self.context = context else: self.context = ssl.SSLContext(ssl_version if ssl_version is not None else ssl.PROTOCOL_TLS_SERVER) self.context.verify_mode = (certreqs if certreqs is not None else ssl.CERT_NONE) if cacerts: self.context.load_verify_locations(cacerts) if certificate: self.context.load_cert_chain(certificate) if alpn_protocols: self.context.set_alpn_protocols(alpn_protocols) if ciphers: self.context.set_ciphers(ciphers) self.chatty = chatty self.connectionchatty = connectionchatty self.starttls_server = starttls_server self.sock = socket.socket() self.port = socket_helper.bind_port(self.sock) self.flag = None self.active = False self.selected_alpn_protocols = [] self.shared_ciphers = [] self.conn_errors = [] threading.Thread.__init__(self) self.daemon = True def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): self.stop() self.join() def start(self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.sock.settimeout(1.0) self.sock.listen(5) self.active = True if self.flag: # signal an event self.flag.set() while self.active: try: newconn, connaddr = self.sock.accept() if support.verbose and self.chatty: sys.stdout.write(' server: new connection from ' + repr(connaddr) + '\n') handler = self.ConnectionHandler(self, newconn, connaddr) handler.start() handler.join() except TimeoutError as e: if support.verbose: sys.stdout.write(f' connection timeout {e!r}\n') except KeyboardInterrupt: self.stop() except BaseException as e: if support.verbose and self.chatty: sys.stdout.write( ' connection handling failed: ' + repr(e) + '\n') self.close() def close(self): if self.sock is not None: self.sock.close() self.sock = None def stop(self): self.active = False class AsyncoreEchoServer(threading.Thread): # this one's based on asyncore.dispatcher class EchoServer (asyncore.dispatcher): class ConnectionHandler(asyncore.dispatcher_with_send): def __init__(self, conn, certfile): self.socket = test_wrap_socket(conn, server_side=True, certfile=certfile, do_handshake_on_connect=False) asyncore.dispatcher_with_send.__init__(self, self.socket) self._ssl_accepting = True self._do_ssl_handshake() def readable(self): if isinstance(self.socket, ssl.SSLSocket): while self.socket.pending() > 0: self.handle_read_event() return True def _do_ssl_handshake(self): try: self.socket.do_handshake() except (ssl.SSLWantReadError, ssl.SSLWantWriteError): return except ssl.SSLEOFError: return self.handle_close() except ssl.SSLError: raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def handle_read(self): if self._ssl_accepting: self._do_ssl_handshake() else: data = self.recv(1024) if support.verbose: sys.stdout.write(" server: read %s from client\n" % repr(data)) if not data: self.close() else: self.send(data.lower()) def handle_close(self): self.close() if 
support.verbose: sys.stdout.write(" server: closed connection %s\n" % self.socket) def handle_error(self): raise def __init__(self, certfile): self.certfile = certfile sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(sock, '') asyncore.dispatcher.__init__(self, sock) self.listen(5) def handle_accepted(self, sock_obj, addr): if support.verbose: sys.stdout.write(" server: new connection from %s:%s\n" %addr) self.ConnectionHandler(sock_obj, self.certfile) def handle_error(self): raise def __init__(self, certfile): self.flag = None self.active = False self.server = self.EchoServer(certfile) self.port = self.server.port threading.Thread.__init__(self) self.daemon = True def __str__(self): return "<%s %s>" % (self.__class__.__name__, self.server) def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): if support.verbose: sys.stdout.write(" cleanup: stopping server.\n") self.stop() if support.verbose: sys.stdout.write(" cleanup: joining server thread.\n") self.join() if support.verbose: sys.stdout.write(" cleanup: successfully joined.\n") # make sure that ConnectionHandler is removed from socket_map asyncore.close_all(ignore_all=True) def start (self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.active = True if self.flag: self.flag.set() while self.active: try: asyncore.loop(1) except: pass def stop(self): self.active = False self.server.close() def server_params_test(client_context, server_context, indata=b"FOO\n", chatty=True, connectionchatty=False, sni_name=None, session=None): """ Launch a server, connect a client to it and try various reads and writes. """ stats = {} server = ThreadedEchoServer(context=server_context, chatty=chatty, connectionchatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=sni_name, session=session) as s: s.connect((HOST, server.port)) for arg in [indata, bytearray(indata), memoryview(indata)]: if connectionchatty: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(arg) outdata = s.read() if connectionchatty: if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): raise AssertionError( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if connectionchatty: if support.verbose: sys.stdout.write(" client: closing connection.\n") stats.update({ 'compression': s.compression(), 'cipher': s.cipher(), 'peercert': s.getpeercert(), 'client_alpn_protocol': s.selected_alpn_protocol(), 'version': s.version(), 'session_reused': s.session_reused, 'session': s.session, }) s.close() stats['server_alpn_protocols'] = server.selected_alpn_protocols stats['server_shared_ciphers'] = server.shared_ciphers return stats def try_protocol_combo(server_protocol, client_protocol, expect_success, certsreqs=None, server_options=0, client_options=0): """ Try to SSL-connect using *client_protocol* to *server_protocol*. If *expect_success* is true, assert that the connection succeeds, if it's false, assert that the connection fails. Also, if *expect_success* is a string, assert that it is the protocol version actually used by the connection. 
""" if certsreqs is None: certsreqs = ssl.CERT_NONE certtype = { ssl.CERT_NONE: "CERT_NONE", ssl.CERT_OPTIONAL: "CERT_OPTIONAL", ssl.CERT_REQUIRED: "CERT_REQUIRED", }[certsreqs] if support.verbose: formatstr = (expect_success and " %s->%s %s\n") or " {%s->%s} %s\n" sys.stdout.write(formatstr % (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol), certtype)) with warnings_helper.check_warnings(): # ignore Deprecation warnings client_context = ssl.SSLContext(client_protocol) client_context.options |= client_options server_context = ssl.SSLContext(server_protocol) server_context.options |= server_options min_version = PROTOCOL_TO_TLS_VERSION.get(client_protocol, None) if (min_version is not None # SSLContext.minimum_version is only available on recent OpenSSL # (setter added in OpenSSL 1.1.0, getter added in OpenSSL 1.1.1) and hasattr(server_context, 'minimum_version') and server_protocol == ssl.PROTOCOL_TLS and server_context.minimum_version > min_version ): # If OpenSSL configuration is strict and requires more recent TLS # version, we have to change the minimum to test old TLS versions. with warnings_helper.check_warnings(): server_context.minimum_version = min_version # NOTE: we must enable "ALL" ciphers on the client, otherwise an # SSLv23 client will send an SSLv3 hello (rather than SSLv2) # starting from OpenSSL 1.0.0 (see issue #8322). if client_context.protocol == ssl.PROTOCOL_TLS: client_context.set_ciphers("ALL") seclevel_workaround(server_context, client_context) for ctx in (client_context, server_context): ctx.verify_mode = certsreqs ctx.load_cert_chain(SIGNED_CERTFILE) ctx.load_verify_locations(SIGNING_CA) try: stats = server_params_test(client_context, server_context, chatty=False, connectionchatty=False) # Protocol mismatch can result in either an SSLError, or a # "Connection reset by peer" error. except ssl.SSLError: if expect_success: raise except OSError as e: if expect_success or e.errno != errno.ECONNRESET: raise else: if not expect_success: raise AssertionError( "Client protocol %s succeeded with server protocol %s!" 
% (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol))) elif (expect_success is not True and expect_success != stats['version']): raise AssertionError("version mismatch: expected %r, got %r" % (expect_success, stats['version'])) class ThreadedTests(unittest.TestCase): @support.requires_resource('walltime') def test_echo(self): """Basic test of an SSL client connecting to a server""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_SERVER): server_params_test(client_context=client_context, server_context=server_context, chatty=True, connectionchatty=True, sni_name=hostname) client_context.check_hostname = False with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_SERVER): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=server_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception) ) with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True) self.assertIn( 'Cannot create a client socket with a PROTOCOL_TLS_SERVER context', str(e.exception)) def test_getpeercert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), do_handshake_on_connect=False, server_hostname=hostname) as s: s.connect((HOST, server.port)) # getpeercert() raise ValueError while the handshake isn't # done. with self.assertRaises(ValueError): s.getpeercert() s.do_handshake() cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher() if support.verbose: sys.stdout.write(pprint.pformat(cert) + '\n') sys.stdout.write("Connection cipher is " + str(cipher) + '.\n') if 'subject' not in cert: self.fail("No subject field in certificate: %s." 
% pprint.pformat(cert)) if ((('organizationName', 'Python Software Foundation'),) not in cert['subject']): self.fail( "Missing or invalid 'organizationName' field in certificate subject; " "should be 'Python Software Foundation'.") self.assertIn('notBefore', cert) self.assertIn('notAfter', cert) before = ssl.cert_time_to_seconds(cert['notBefore']) after = ssl.cert_time_to_seconds(cert['notAfter']) self.assertLess(before, after) def test_crl_check(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(client_context.verify_flags, ssl.VERIFY_DEFAULT | tf) # VERIFY_DEFAULT should pass server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # VERIFY_CRL_CHECK_LEAF without a loaded CRL file fails client_context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF server = ThreadedEchoServer(context=server_context, chatty=True) # Allow for flexible libssl error messages. regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaisesRegex(ssl.SSLError, regex): s.connect((HOST, server.port)) # now load a CRL file. The CRL file is signed by the CA. client_context.load_verify_locations(CRLFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_check_hostname(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) # Allow for flexible libssl error messages. 
regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) with server: with client_context.wrap_socket(socket.socket(), server_hostname="invalid") as s: with self.assertRaisesRegex(ssl.CertificateError, regex): s.connect((HOST, server.port)) # missing server_hostname arg should cause an exception, too server = ThreadedEchoServer(context=server_context, chatty=True) with server: with socket.socket() as s: with self.assertRaisesRegex(ValueError, "check_hostname requires server_hostname"): client_context.wrap_socket(s) @unittest.skipUnless( ssl.HAS_NEVER_CHECK_COMMON_NAME, "test requires hostname_checks_common_name" ) def test_hostname_checks_common_name(self): client_context, server_context, hostname = testing_context() assert client_context.hostname_checks_common_name client_context.hostname_checks_common_name = False # default cert has a SAN server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) client_context, server_context, hostname = testing_context(NOSANFILE) client_context.hostname_checks_common_name = False server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLCertVerificationError): s.connect((HOST, server.port)) def test_ecc_cert(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC cert server_context.load_cert_chain(SIGNED_CERTFILE_ECC) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) @unittest.skipUnless(IS_OPENSSL_3_0_0, "test requires RFC 5280 check added in OpenSSL 3.0+") def test_verify_strict(self): # verification fails by default, since the server cert is non-conforming client_context = ssl.create_default_context() client_context.load_verify_locations(LEAF_MISSING_AKI_CA) hostname = LEAF_MISSING_AKI_CERTFILE_HOSTNAME server_context = ssl.create_default_context(purpose=Purpose.CLIENT_AUTH) server_context.load_cert_chain(LEAF_MISSING_AKI_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError): s.connect((HOST, server.port)) # explicitly disabling VERIFY_X509_STRICT allows it to succeed client_context = ssl.create_default_context() client_context.load_verify_locations(LEAF_MISSING_AKI_CA) client_context.verify_flags &= ~ssl.VERIFY_X509_STRICT server_context = ssl.create_default_context(purpose=Purpose.CLIENT_AUTH) server_context.load_cert_chain(LEAF_MISSING_AKI_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_dual_rsa_ecc(self): 
client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) # TODO: fix TLSv1.3 once SSLContext can restrict signature # algorithms. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # only ECDSA certs client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC and RSA key/cert pairs server_context.load_cert_chain(SIGNED_CERTFILE_ECC) server_context.load_cert_chain(SIGNED_CERTFILE) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_check_hostname_idn(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(IDNSANSFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations(SIGNING_CA) # correct hostname should verify, when specified in several # different ways idn_hostnames = [ ('könig.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), (b'xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('königsgäßchen.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), ('xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), (b'xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), # ('königsgäßchen.idna2008.pythontest.net', # 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ('xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), (b'xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ] for server_hostname, expected_hostname in idn_hostnames: server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=server_hostname) as s: self.assertEqual(s.server_hostname, expected_hostname) s.connect((HOST, server.port)) cert = s.getpeercert() self.assertEqual(s.server_hostname, expected_hostname) self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname="python.example.org") as s: with self.assertRaises(ssl.CertificateError): s.connect((HOST, server.port)) with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeError): context.wrap_socket(socket.socket(), server_hostname='.pythontest.net') with ThreadedEchoServer(context=server_context, chatty=True) as server: with warnings_helper.check_no_resource_warning(self): with self.assertRaises(UnicodeDecodeError): context.wrap_socket(socket.socket(), server_hostname=b'k\xf6nig.idn.pythontest.net') def test_wrong_cert_tls12(self): """Connecting when the server rejects the client's certificate Launch a server with CERT_REQUIRED, and check that trying to connect to it 
with a wrong client certificate fails. """ client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) # require TLS client authentication server_context.verify_mode = ssl.CERT_REQUIRED # TLS 1.3 has different handshake client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: try: # Expect either an SSL error about the server rejecting # the connection, or a low-level connection reset (which # sometimes happens on Windows) s.connect((HOST, server.port)) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") @requires_tls_version('TLSv1_3') def test_wrong_cert_tls13(self): client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.minimum_version = ssl.TLSVersion.TLSv1_3 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex( OSError, 'alert unknown ca|EOF occurred|TLSV1_ALERT_UNKNOWN_CA|' 'closed by the remote host|Connection reset by peer|' 'Broken pipe' ): # TLS 1.3 perform client cert exchange after handshake s.write(b'data') s.read(1000) s.write(b'should have failed already') s.read(1000) def test_rude_shutdown(self): """A brutal shutdown of an SSL server should raise an OSError in the client when attempting handshake. """ listener_ready = threading.Event() listener_gone = threading.Event() s = socket.socket() port = socket_helper.bind_port(s, HOST) # `listener` runs in a thread. It sits in an accept() until # the main thread connects. Then it rudely closes the socket, # and sets Event `listener_gone` to let the main thread know # the socket is gone. 
def listener(): s.listen() listener_ready.set() newsock, addr = s.accept() newsock.close() s.close() listener_gone.set() def connector(): listener_ready.wait() with socket.socket() as c: c.connect((HOST, port)) listener_gone.wait() try: ssl_sock = test_wrap_socket(c) except OSError: pass else: self.fail('connecting to closed SSL socket should have failed') t = threading.Thread(target=listener) t.start() try: connector() finally: t.join() def test_ssl_cert_verify_error(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: try: s.connect((HOST, server.port)) self.fail("Expected connection failure") except ssl.SSLError as e: msg = 'unable to get local issuer certificate' self.assertIsInstance(e, ssl.SSLCertVerificationError) self.assertEqual(e.verify_code, 20) self.assertEqual(e.verify_message, msg) # Allow for flexible libssl error messages. regex = f"({msg}|CERTIFICATE_VERIFY_FAILED)" self.assertRegex(repr(e), regex) regex = re.compile(r"""( certificate verify failed # OpenSSL | CERTIFICATE_VERIFY_FAILED # AWS-LC )""", re.X) self.assertRegex(repr(e), regex) def test_PROTOCOL_TLS(self): """Connecting to an SSLv23 server with various client options""" if support.verbose: sys.stdout.write("\n") if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_OPTIONAL) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_REQUIRED) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) # Server with specific SSL options if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, server_options=ssl.OP_NO_SSLv3) # Will choose TLSv1 try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, server_options=ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, False, server_options=ssl.OP_NO_TLSv1) @requires_tls_version('SSLv3') def test_protocol_sslv3(self): """Connecting to an SSLv3 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3') try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) @requires_tls_version('TLSv1') def test_protocol_tlsv1(self): """Connecting to a TLSv1 server with various client options""" if support.verbose: sys.stdout.write("\n") 
try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1') try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) @requires_tls_version('TLSv1_1') def test_protocol_tlsv1_1(self): """Connecting to a TLSv1.1 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_1) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) @requires_tls_version('TLSv1_2') def test_protocol_tlsv1_2(self): """Connecting to a TLSv1.2 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2', server_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2, client_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2,) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_2) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2') if has_tls_protocol(ssl.PROTOCOL_TLSv1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False) if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) def test_starttls(self): """Switching from clear text to encrypted and back again.""" msgs = (b"msg 1", b"MSG 2", b"STARTTLS", b"MSG 3", b"msg 4", b"ENDTLS", b"msg 5", b"msg 6") server = ThreadedEchoServer(CERTFILE, starttls_server=True, chatty=True, connectionchatty=True) wrapped = False with server: s = socket.socket() s.setblocking(True) s.connect((HOST, server.port)) if support.verbose: sys.stdout.write("\n") for indata in msgs: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) if wrapped: conn.write(indata) outdata = conn.read() else: s.send(indata) outdata = s.recv(1024) msg = outdata.strip().lower() if indata == b"STARTTLS" and msg.startswith(b"ok"): # STARTTLS ok, switch to secure mode if support.verbose: sys.stdout.write( " client: read %r from server, starting TLS...\n" % msg) conn = test_wrap_socket(s) wrapped = True elif indata == b"ENDTLS" and msg.startswith(b"ok"): # ENDTLS ok, switch back to clear text if support.verbose: sys.stdout.write( " client: read %r from server, ending TLS...\n" % msg) s = conn.unwrap() wrapped = False else: if support.verbose: sys.stdout.write( " client: read %r from server\n" % msg) if support.verbose: sys.stdout.write(" client: closing connection.\n") if wrapped: conn.write(b"over\n") else: s.send(b"over\n") if wrapped: conn.close() else: s.close() def test_socketserver(self): """Using socketserver to create and manage SSL 
connections.""" server = make_https_server(self, certfile=SIGNED_CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') # Get this test file itself: with open(__file__, 'rb') as f: d1 = f.read() d2 = '' # now fetch the same data from the HTTPS server url = f'https://localhost:{server.port}/test_ssl.py' context = ssl.create_default_context(cafile=SIGNING_CA) f = urllib.request.urlopen(url, context=context) try: dlen = f.info().get("content-length") if dlen and (int(dlen) > 0): d2 = f.read(int(dlen)) if support.verbose: sys.stdout.write( " client: read %d bytes from remote server '%s'\n" % (len(d2), server)) finally: f.close() self.assertEqual(d1, d2) def test_asyncore_server(self): """Check the example asyncore integration.""" if support.verbose: sys.stdout.write("\n") indata = b"FOO\n" server = AsyncoreEchoServer(CERTFILE) with server: s = test_wrap_socket(socket.socket()) s.connect(('127.0.0.1', server.port)) if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(indata) outdata = s.read() if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): self.fail( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if support.verbose: sys.stdout.write(" client: closing connection.\n") s.close() if support.verbose: sys.stdout.write(" client: connection closed.\n") def test_recv_send(self): """Test recv(), send() and friends.""" if support.verbose: sys.stdout.write("\n") server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) # helper methods for standardising recv* method signatures def _recv_into(): b = bytearray(b"\0"*100) count = s.recv_into(b) return b[:count] def _recvfrom_into(): b = bytearray(b"\0"*100) count, addr = s.recvfrom_into(b) return b[:count] # (name, method, expect success?, *args, return value func) send_methods = [ ('send', s.send, True, [], len), ('sendto', s.sendto, False, ["some.address"], len), ('sendall', s.sendall, True, [], lambda x: None), ] # (name, method, whether to expect success, *args) recv_methods = [ ('recv', s.recv, True, []), ('recvfrom', s.recvfrom, False, ["some.address"]), ('recv_into', _recv_into, True, []), ('recvfrom_into', _recvfrom_into, False, []), ] data_prefix = "PREFIX_" for (meth_name, send_meth, expect_success, args, ret_val_meth) in send_methods: indata = (data_prefix + meth_name).encode('ascii') try: ret = send_meth(indata, *args) msg = "sending with {}".format(meth_name) self.assertEqual(ret, ret_val_meth(indata), msg=msg) outdata = s.read() if outdata != indata.lower(): self.fail( "While sending with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to send with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) for meth_name, recv_meth, expect_success, args in recv_methods: indata = (data_prefix + 
meth_name).encode('ascii') try: s.send(indata) outdata = recv_meth(*args) if outdata != indata.lower(): self.fail( "While receiving with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to receive with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) # consume data s.read() # read(-1, buffer) is supported, even though read(-1) is not data = b"data" s.send(data) buffer = bytearray(len(data)) self.assertEqual(s.read(-1, buffer), len(data)) self.assertEqual(buffer, data) # sendall accepts bytes-like objects if ctypes is not None: ubyte = ctypes.c_ubyte * len(data) byteslike = ubyte.from_buffer_copy(data) s.sendall(byteslike) self.assertEqual(s.read(), data) # Make sure sendmsg et al are disallowed to avoid # inadvertent disclosure of data and/or corruption # of the encrypted data stream self.assertRaises(NotImplementedError, s.dup) self.assertRaises(NotImplementedError, s.sendmsg, [b"data"]) self.assertRaises(NotImplementedError, s.recvmsg, 100) self.assertRaises(NotImplementedError, s.recvmsg_into, [bytearray(100)]) s.write(b"over\n") self.assertRaises(ValueError, s.recv, -1) self.assertRaises(ValueError, s.read, -1) s.close() def test_recv_zero(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) # recv/read(0) should return no data s.send(b"data") self.assertEqual(s.recv(0), b"") self.assertEqual(s.read(0), b"") self.assertEqual(s.read(), b"data") # Should not block if the other end sends no data s.setblocking(False) self.assertEqual(s.recv(0), b"") self.assertEqual(s.recv_into(bytearray()), 0) def test_recv_into_buffer_protocol_len(self): server = ThreadedEchoServer(CERTFILE) self.enterContext(server) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) s.send(b"data") buf = array.array('I', [0, 0]) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf)[:4], b"data") class B(bytearray): def __len__(self): 1/0 s.send(b"data") buf = B(6) self.assertEqual(s.recv_into(buf), 4) self.assertEqual(bytes(buf), b"data\0\0") def test_nonblocking_send(self): server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE) s.connect((HOST, server.port)) s.setblocking(False) # If we keep sending data, at some point the buffers # will be full and the call will block buf = bytearray(8192) def fill_buffer(): while True: s.send(buf) self.assertRaises((ssl.SSLWantWriteError, ssl.SSLWantReadError), fill_buffer) # Now read all the output and discard it s.setblocking(True) s.close() def test_handshake_timeout(self): # Issue #5103: SSL handshake must respect the socket timeout server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) started = threading.Event() finish = False def 
serve(): server.listen() started.set() conns = [] while not finish: r, w, e = select.select([server], [], [], 0.1) if server in r: # Let the socket hang around rather than having # it closed by garbage collection. conns.append(server.accept()[0]) for sock in conns: sock.close() t = threading.Thread(target=serve) t.start() started.wait() try: try: c = socket.socket(socket.AF_INET) c.settimeout(0.2) c.connect((host, port)) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", test_wrap_socket, c) finally: c.close() try: c = socket.socket(socket.AF_INET) c = test_wrap_socket(c) c.settimeout(0.2) # Will attempt handshake and time out self.assertRaisesRegex(TimeoutError, "timed out", c.connect, (host, port)) finally: c.close() finally: finish = True t.join() server.close() def test_server_accept(self): # Issue #16357: accept() on a SSLSocket created through # SSLContext.wrap_socket(). client_ctx, server_ctx, hostname = testing_context() server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) server = server_ctx.wrap_socket(server, server_side=True) self.assertTrue(server.server_side) evt = threading.Event() remote = None peer = None def serve(): nonlocal remote, peer server.listen() # Block on the accept and wait on the connection to close. evt.set() remote, peer = server.accept() remote.send(remote.recv(4)) t = threading.Thread(target=serve) t.start() # Client wait until server setup and perform a connect. evt.wait() client = client_ctx.wrap_socket( socket.socket(), server_hostname=hostname ) client.connect((hostname, port)) client.send(b'data') client.recv() client_addr = client.getsockname() client.close() t.join() remote.close() server.close() # Sanity checks. self.assertIsInstance(remote, ssl.SSLSocket) self.assertEqual(peer, client_addr) def test_getpeercert_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.getpeercert() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_do_handshake_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.do_handshake() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_no_shared_ciphers(self): client_context, server_context, hostname = testing_context() # OpenSSL enables all TLS 1.3 ciphers, enforce TLS 1.2 for test client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Force different suites on client and server client_context.set_ciphers("AES128") server_context.set_ciphers("AES256") with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(OSError): s.connect((HOST, server.port)) self.assertIn("NO_SHARED_CIPHER", server.conn_errors[0]) def test_version_basic(self): """ Basic tests for SSLSocket.version(). More tests are done in the test_protocol_*() methods. 
""" context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with ThreadedEchoServer(CERTFILE, ssl_version=ssl.PROTOCOL_TLS_SERVER, chatty=False) as server: with context.wrap_socket(socket.socket()) as s: self.assertIs(s.version(), None) self.assertIs(s._sslobj, None) s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.3') self.assertIs(s._sslobj, None) self.assertIs(s.version(), None) @requires_tls_version('TLSv1_3') def test_tls1_3(self): client_context, server_context, hostname = testing_context() client_context.minimum_version = ssl.TLSVersion.TLSv1_3 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn(s.cipher()[0], { 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256', }) self.assertEqual(s.version(), 'TLSv1.3') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_tlsv1_2(self): client_context, server_context, hostname = testing_context() # client TLSv1.0 to 1.2 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # server only TLSv1.2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 server_context.maximum_version = ssl.TLSVersion.TLSv1_2 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.2') @requires_tls_version('TLSv1_1') @ignore_deprecation def test_min_max_version_tlsv1_1(self): client_context, server_context, hostname = testing_context() # client 1.0 to 1.2, server 1.0 to 1.1 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1 server_context.maximum_version = ssl.TLSVersion.TLSv1_1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.1') @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') @ignore_deprecation def test_min_max_version_mismatch(self): client_context, server_context, hostname = testing_context() # client 1.0, server 1.2 (mismatch) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 client_context.maximum_version = ssl.TLSVersion.TLSv1 client_context.minimum_version = ssl.TLSVersion.TLSv1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError) as e: s.connect((HOST, server.port)) self.assertRegex(str(e.exception), "(alert|ALERT)") @requires_tls_version('SSLv3') def test_min_max_version_sslv3(self): client_context, server_context, hostname = testing_context() server_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.maximum_version = ssl.TLSVersion.SSLv3 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: 
s.connect((HOST, server.port)) self.assertEqual(s.version(), 'SSLv3') def test_default_ecdh_curve(self): # Issue #21015: elliptic curve-based Diffie Hellman key exchange # should be enabled by default on SSL contexts. client_context, server_context, hostname = testing_context() # TLSv1.3 defaults to PFS key agreement and no longer has KEA in # cipher name. client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # Prior to OpenSSL 1.0.0, ECDH ciphers have to be enabled # explicitly using the 'ECCdraft' cipher alias. Otherwise, # our default cipher list should prefer ECDH-based ciphers # automatically. with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn("ECDH", s.cipher()[0]) @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): """Test tls-unique channel binding.""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # tls-unique is not defined for TLSv1.3 # https://datatracker.ietf.org/doc/html/rfc8446#appendix-C.5 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=True, connectionchatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # get the data cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( " got channel binding data: {0!r}\n".format(cb_data)) # check if it is sane self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 # and compare with the peers version s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(cb_data).encode("us-ascii")) # now, again with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) new_cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( "got another channel binding data: {0!r}\n".format( new_cb_data) ) # is it really unique self.assertNotEqual(cb_data, new_cb_data) self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(new_cb_data).encode("us-ascii")) def test_compression(self): client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) if support.verbose: sys.stdout.write(" got compression: {!r}\n".format(stats['compression'])) self.assertIn(stats['compression'], { None, 'ZLIB', 'RLE' }) @unittest.skipUnless(hasattr(ssl, 'OP_NO_COMPRESSION'), "ssl.OP_NO_COMPRESSION needed for this test") def test_compression_disabled(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_COMPRESSION server_context.options |= ssl.OP_NO_COMPRESSION stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['compression'], None) def test_legacy_server_connect(self): client_context, server_context, hostname = testing_context() client_context.options |= 
ssl.OP_LEGACY_SERVER_CONNECT server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_no_legacy_server_connect(self): client_context, server_context, hostname = testing_context() client_context.options &= ~ssl.OP_LEGACY_SERVER_CONNECT server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_dh_params(self): # Check we can get a connection with ephemeral Diffie-Hellman client_context, server_context, hostname = testing_context() # test scenario needs TLS <= 1.2 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.load_dh_params(DHFILE) server_context.set_ciphers("kEDH") server_context.maximum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) cipher = stats["cipher"][0] parts = cipher.split("-") if "ADH" not in parts and "EDH" not in parts and "DHE" not in parts: self.fail("Non-DH key exchange: " + cipher[0]) def test_ecdh_curve(self): # server secp384r1, client auto client_context, server_context, hostname = testing_context() server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server auto, client secp384r1 client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server / client curve mismatch client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("prime256v1") server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.minimum_version = ssl.TLSVersion.TLSv1_2 with self.assertRaises(ssl.SSLError): server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) def test_selected_alpn_protocol(self): # selected_alpn_protocol() is None unless ALPN is used. client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_selected_alpn_protocol_if_server_uses_alpn(self): # selected_alpn_protocol() is None unless ALPN is used by the client. 
client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(['foo', 'bar']) stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) def test_alpn_protocols(self): server_protocols = ['foo', 'bar', 'milkshake'] protocol_tests = [ (['foo', 'bar'], 'foo'), (['bar', 'foo'], 'foo'), (['milkshake'], 'milkshake'), (['http/3.0', 'http/4.0'], None) ] for client_protocols, expected in protocol_tests: client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(server_protocols) client_context.set_alpn_protocols(client_protocols) try: stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) except ssl.SSLError as e: stats = e msg = "failed trying %s (s) and %s (c).\n" \ "was expecting %s, but got %%s from the %%s" \ % (str(server_protocols), str(client_protocols), str(expected)) client_result = stats['client_alpn_protocol'] self.assertEqual(client_result, expected, msg % (client_result, "client")) server_result = stats['server_alpn_protocols'][-1] \ if len(stats['server_alpn_protocols']) else 'nothing' self.assertEqual(server_result, expected, msg % (server_result, "server")) def test_npn_protocols(self): assert not ssl.HAS_NPN def sni_contexts(self): server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) other_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) other_context.load_cert_chain(SIGNED_CERTFILE2) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) return server_context, other_context, client_context def check_common_name(self, stats, name): cert = stats['peercert'] self.assertIn((('commonName', name),), cert['subject']) def test_sni_callback(self): calls = [] server_context, other_context, client_context = self.sni_contexts() client_context.check_hostname = False def servername_cb(ssl_sock, server_name, initial_context): calls.append((server_name, initial_context)) if server_name is not None: ssl_sock.context = other_context server_context.set_servername_callback(servername_cb) stats = server_params_test(client_context, server_context, chatty=True, sni_name='supermessage') # The hostname was fetched properly, and the certificate was # changed for the connection. 
self.assertEqual(calls, [("supermessage", server_context)]) # CERTFILE4 was selected self.check_common_name(stats, 'fakehostname') calls = [] # The callback is called with server_name=None stats = server_params_test(client_context, server_context, chatty=True, sni_name=None) self.assertEqual(calls, [(None, server_context)]) self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) # Check disabling the callback calls = [] server_context.set_servername_callback(None) stats = server_params_test(client_context, server_context, chatty=True, sni_name='notfunny') # Certificate didn't change self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) self.assertEqual(calls, []) def test_sni_callback_alert(self): # Returning a TLS alert is reflected to the connecting client server_context, other_context, client_context = self.sni_contexts() def cb_returning_alert(ssl_sock, server_name, initial_context): return ssl.ALERT_DESCRIPTION_ACCESS_DENIED server_context.set_servername_callback(cb_returning_alert) with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_ACCESS_DENIED') def test_sni_callback_raising(self): # Raising fails the connection with a TLS handshake failure alert. server_context, other_context, client_context = self.sni_contexts() def cb_raising(ssl_sock, server_name, initial_context): 1/0 server_context.set_servername_callback(cb_raising) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') # Allow for flexible libssl error messages. regex = "(SSLV3_ALERT_HANDSHAKE_FAILURE|NO_PRIVATE_VALUE)" self.assertRegex(cm.exception.reason, regex) self.assertEqual(catch.unraisable.exc_type, ZeroDivisionError) def test_sni_callback_wrong_return_type(self): # Returning the wrong return type terminates the TLS connection # with an internal error alert. 
server_context, other_context, client_context = self.sni_contexts() def cb_wrong_return_type(ssl_sock, server_name, initial_context): return "foo" server_context.set_servername_callback(cb_wrong_return_type) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_INTERNAL_ERROR') self.assertEqual(catch.unraisable.exc_type, TypeError) def test_shared_ciphers(self): client_context, server_context, hostname = testing_context() client_context.set_ciphers("AES128:AES256") server_context.set_ciphers("AES256:eNULL") expected_algs = [ "AES256", "AES-256", # TLS 1.3 ciphers are always enabled "TLS_CHACHA20", "TLS_AES", ] stats = server_params_test(client_context, server_context, sni_name=hostname) ciphers = stats['server_shared_ciphers'][0] self.assertGreater(len(ciphers), 0) for name, tls_version, bits in ciphers: if not any(alg in name for alg in expected_algs): self.fail(name) def test_read_write_after_close_raises_valuerror(self): client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: s = client_context.wrap_socket(socket.socket(), server_hostname=hostname) s.connect((HOST, server.port)) s.close() self.assertRaises(ValueError, s.read, 1024) self.assertRaises(ValueError, s.write, b'hello') def test_sendfile(self): TEST_DATA = b"x" * 512 with open(os_helper.TESTFN, 'wb') as f: f.write(TEST_DATA) self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with open(os_helper.TESTFN, 'rb') as file: s.sendfile(file) self.assertEqual(s.recv(1024), TEST_DATA) def test_session(self): client_context, server_context, hostname = testing_context() # TODO: sessions aren't compatible with TLSv1.3 yet client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # first connection without session stats = server_params_test(client_context, server_context, sni_name=hostname) session = stats['session'] self.assertTrue(session.id) self.assertGreater(session.time, 0) self.assertGreater(session.timeout, 0) self.assertTrue(session.has_ticket) self.assertGreater(session.ticket_lifetime_hint, 0) self.assertFalse(stats['session_reused']) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 1) self.assertEqual(sess_stat['hits'], 0) # reuse session stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 2) self.assertEqual(sess_stat['hits'], 1) self.assertTrue(stats['session_reused']) session2 = stats['session'] self.assertEqual(session2.id, session.id) self.assertEqual(session2, session) self.assertIsNot(session2, session) self.assertGreaterEqual(session2.time, session.time) self.assertGreaterEqual(session2.timeout, session.timeout) # another one without session stats = server_params_test(client_context, server_context, sni_name=hostname) self.assertFalse(stats['session_reused']) session3 = stats['session'] self.assertNotEqual(session3.id, session.id) self.assertNotEqual(session3, session) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 3) 
self.assertEqual(sess_stat['hits'], 1) # reuse session again stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) self.assertTrue(stats['session_reused']) session4 = stats['session'] self.assertEqual(session4.id, session.id) self.assertEqual(session4, session) self.assertGreaterEqual(session4.time, session.time) self.assertGreaterEqual(session4.timeout, session.timeout) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 4) self.assertEqual(sess_stat['hits'], 2) def test_session_handling(self): client_context, server_context, hostname = testing_context() client_context2, _, _ = testing_context() # TODO: session reuse does not work with TLSv1.3 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context2.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # session is None before handshake self.assertEqual(s.session, None) self.assertEqual(s.session_reused, None) s.connect((HOST, server.port)) session = s.session self.assertTrue(session) with self.assertRaises(TypeError) as e: s.session = object self.assertEqual(str(e.exception), 'Value is not a SSLSession.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # cannot set session after handshake with self.assertRaises(ValueError) as e: s.session = session self.assertEqual(str(e.exception), 'Cannot set session after handshake.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # can set session before handshake and before the # connection was established s.session = session s.connect((HOST, server.port)) self.assertEqual(s.session.id, session.id) self.assertEqual(s.session, session) self.assertEqual(s.session_reused, True) with client_context2.wrap_socket(socket.socket(), server_hostname=hostname) as s: # cannot re-use session with a different SSLContext with self.assertRaises(ValueError) as e: s.session = session s.connect((HOST, server.port)) self.assertEqual(str(e.exception), 'Session refers to a different SSLContext.') @requires_tls_version('TLSv1_2') @unittest.skipUnless(ssl.HAS_PSK, 'TLS-PSK disabled on this OpenSSL build') def test_psk(self): psk = bytes.fromhex('deadbeef') client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.set_ciphers('PSK') client_context.set_psk_client_callback(lambda hint: (None, psk)) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.set_ciphers('PSK') server_context.set_psk_server_callback(lambda identity: psk) # correct PSK should connect server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) # incorrect PSK should fail incorrect_psk = bytes.fromhex('cafebabe') client_context.set_psk_client_callback(lambda hint: (None, incorrect_psk)) server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket()) as s: with self.assertRaises(ssl.SSLError): s.connect((HOST, server.port)) # identity_hint and client_identity should be sent to the other side identity_hint = 'identity-hint' client_identity = 'client-identity' def 
client_callback(hint): self.assertEqual(hint, identity_hint) return client_identity, psk def server_callback(identity): self.assertEqual(identity, client_identity) return psk client_context.set_psk_client_callback(client_callback) server_context.set_psk_server_callback(server_callback, identity_hint) server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) # adding client callback to server or vice versa raises an exception with self.assertRaisesRegex(ssl.SSLError, 'Cannot add PSK server callback'): client_context.set_psk_server_callback(server_callback, identity_hint) with self.assertRaisesRegex(ssl.SSLError, 'Cannot add PSK client callback'): server_context.set_psk_client_callback(client_callback) # test with UTF-8 identities identity_hint = '身份暗示' # Translation: "Identity hint" client_identity = '客户身份' # Translation: "Customer identity" client_context.set_psk_client_callback(client_callback) server_context.set_psk_server_callback(server_callback, identity_hint) server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) @requires_tls_version('TLSv1_3') @unittest.skipUnless(ssl.HAS_PSK, 'TLS-PSK disabled on this OpenSSL build') def test_psk_tls1_3(self): psk = bytes.fromhex('deadbeef') identity_hint = 'identity-hint' client_identity = 'client-identity' def client_callback(hint): # identity_hint is not sent to the client in TLS 1.3 self.assertIsNone(hint) return client_identity, psk def server_callback(identity): self.assertEqual(identity, client_identity) return psk client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE client_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.set_ciphers('PSK') client_context.set_psk_client_callback(client_callback) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.minimum_version = ssl.TLSVersion.TLSv1_3 server_context.set_ciphers('PSK') server_context.set_psk_server_callback(server_callback, identity_hint) server = ThreadedEchoServer(context=server_context) with server: with client_context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) @unittest.skipUnless(has_tls_version('TLSv1_3'), "Test needs TLS 1.3") class TestPostHandshakeAuth(unittest.TestCase): def test_pha_setter(self): protocols = [ ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT ] for protocol in protocols: ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.post_handshake_auth, False) ctx.post_handshake_auth = True self.assertEqual(ctx.post_handshake_auth, True) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, True) ctx.post_handshake_auth = False self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, False) ctx.verify_mode = ssl.CERT_OPTIONAL ctx.post_handshake_auth = True self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) self.assertEqual(ctx.post_handshake_auth, True) def test_pha_required(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with 
client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA method just returns true when cert is already available s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'GETCERT') cert_text = s.recv(4096).decode('us-ascii') self.assertIn('Python Software Foundation CA', cert_text) def test_pha_required_nocert(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True def msg_cb(conn, direction, version, content_type, msg_type, data): if support.verbose and content_type == _TLSContentType.ALERT: info = (conn, direction, version, content_type, msg_type, data) sys.stdout.write(f"TLS: {info!r}\n") server_context._msg_callback = msg_cb client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname, suppress_ragged_eofs=False) as s: s.connect((HOST, server.port)) s.write(b'PHA') # test sometimes fails with EOF error. Test passes as long as # server aborts connection with an error. with self.assertRaisesRegex( OSError, ('certificate required' '|EOF occurred' '|closed by the remote host' '|Connection reset by peer' '|Broken pipe') ): # receive CertificateRequest data = s.recv(1024) self.assertEqual(data, b'OK\n') # send empty Certificate + Finish s.write(b'HASCERT') # receive alert s.recv(1024) def test_pha_optional(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # check CERT_OPTIONAL server_context.verify_mode = ssl.CERT_OPTIONAL server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_optional_nocert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_OPTIONAL client_context.post_handshake_auth = True server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') # optional doesn't fail when client does not have a cert s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') def test_pha_no_pha_client(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), 
server_hostname=hostname) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex(ssl.SSLError, 'not server'): s.verify_client_post_handshake() s.write(b'PHA') self.assertIn(b'extension not received', s.recv(1024)) def test_pha_no_pha_server(self): # server doesn't have PHA enabled, cert is requested in handshake client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA doesn't fail if there is already a cert s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_not_tls13(self): # TLS 1.2 client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # PHA fails for TLS != 1.3 s.write(b'PHA') self.assertIn(b'WRONG_SSL_VERSION', s.recv(1024)) def test_bpo37428_pha_cert_none(self): # verify that post_handshake_auth does not implicitly enable cert # validation. hostname = SIGNED_CERTFILE_HOSTNAME client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # no cert validation and CA on client side client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) server_context.load_verify_locations(SIGNING_CA) server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # server cert has not been validated self.assertEqual(s.getpeercert(), {}) def test_internal_chain_client(self): client_context, server_context, hostname = testing_context( server_chain=False ) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) vc = s._sslobj.get_verified_chain() self.assertEqual(len(vc), 2) ee, ca = vc uvc = s._sslobj.get_unverified_chain() self.assertEqual(len(uvc), 1) self.assertEqual(ee, uvc[0]) self.assertEqual(hash(ee), hash(uvc[0])) self.assertEqual(repr(ee), repr(uvc[0])) self.assertNotEqual(ee, ca) self.assertNotEqual(hash(ee), hash(ca)) self.assertNotEqual(repr(ee), repr(ca)) self.assertNotEqual(ee.get_info(), ca.get_info()) self.assertIn("CN=localhost", repr(ee)) self.assertIn("CN=our-ca-server", repr(ca)) pem = ee.public_bytes(_ssl.ENCODING_PEM) der = ee.public_bytes(_ssl.ENCODING_DER) self.assertIsInstance(pem, 
str) self.assertIn("-----BEGIN CERTIFICATE-----", pem) self.assertIsInstance(der, bytes) self.assertEqual( ssl.PEM_cert_to_DER_cert(pem), der ) def test_certificate_chain(self): client_context, server_context, hostname = testing_context( server_chain=False ) server = ThreadedEchoServer(context=server_context, chatty=False) with open(SIGNING_CA) as f: expected_ca_cert = ssl.PEM_cert_to_DER_cert(f.read()) with open(SINGED_CERTFILE_ONLY) as f: expected_ee_cert = ssl.PEM_cert_to_DER_cert(f.read()) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) vc = s.get_verified_chain() self.assertEqual(len(vc), 2) ee, ca = vc self.assertIsInstance(ee, bytes) self.assertIsInstance(ca, bytes) self.assertEqual(expected_ca_cert, ca) self.assertEqual(expected_ee_cert, ee) uvc = s.get_unverified_chain() self.assertEqual(len(uvc), 1) self.assertIsInstance(uvc[0], bytes) self.assertEqual(ee, uvc[0]) self.assertNotEqual(ee, ca) def test_internal_chain_server(self): client_context, server_context, hostname = testing_context() client_context.load_cert_chain(SIGNED_CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname ) as s: s.connect((HOST, server.port)) s.write(b'VERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') s.write(b'UNVERIFIEDCHAIN\n') res = s.recv(1024) self.assertEqual(res, b'\x02\n') HAS_KEYLOG = hasattr(ssl.SSLContext, 'keylog_filename') requires_keylog = unittest.skipUnless( HAS_KEYLOG, 'test requires OpenSSL 1.1.1 with keylog callback') class TestSSLDebug(unittest.TestCase): def keylog_lines(self, fname=os_helper.TESTFN): with open(fname) as f: return len(list(f)) @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_defaults(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) self.assertFalse(os.path.isfile(os_helper.TESTFN)) ctx.keylog_filename = os_helper.TESTFN self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) self.assertTrue(os.path.isfile(os_helper.TESTFN)) self.assertEqual(self.keylog_lines(), 1) ctx.keylog_filename = None self.assertEqual(ctx.keylog_filename, None) with self.assertRaises((IsADirectoryError, PermissionError)): # Windows raises PermissionError ctx.keylog_filename = os.path.dirname( os.path.abspath(os_helper.TESTFN)) with self.assertRaises(TypeError): ctx.keylog_filename = 1 @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_filename(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) client_context, server_context, hostname = testing_context() client_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # header, 5 lines for TLS 1.3 self.assertEqual(self.keylog_lines(), 6) client_context.keylog_filename = None server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) 
self.assertGreaterEqual(self.keylog_lines(), 11) client_context.keylog_filename = os_helper.TESTFN server_context.keylog_filename = os_helper.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 21) client_context.keylog_filename = None server_context.keylog_filename = None @requires_keylog @unittest.skipIf(sys.flags.ignore_environment, "test is not compatible with ignore_environment") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_env(self): self.addCleanup(os_helper.unlink, os_helper.TESTFN) with unittest.mock.patch.dict(os.environ): os.environ['SSLKEYLOGFILE'] = os_helper.TESTFN self.assertEqual(os.environ['SSLKEYLOGFILE'], os_helper.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) ctx = ssl.create_default_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) ctx = ssl._create_stdlib_context() self.assertEqual(ctx.keylog_filename, os_helper.TESTFN) def test_msg_callback(self): client_context, server_context, hostname = testing_context() def msg_cb(conn, direction, version, content_type, msg_type, data): pass self.assertIs(client_context._msg_callback, None) client_context._msg_callback = msg_cb self.assertIs(client_context._msg_callback, msg_cb) with self.assertRaises(TypeError): client_context._msg_callback = object() def test_msg_callback_tls12(self): client_context, server_context, hostname = testing_context() client_context.maximum_version = ssl.TLSVersion.TLSv1_2 msg = [] def msg_cb(conn, direction, version, content_type, msg_type, data): self.assertIsInstance(conn, ssl.SSLSocket) self.assertIsInstance(data, bytes) self.assertIn(direction, {'read', 'write'}) msg.append((direction, version, content_type, msg_type)) client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn( ("read", TLSVersion.TLSv1_2, _TLSContentType.HANDSHAKE, _TLSMessageType.SERVER_KEY_EXCHANGE), msg ) self.assertIn( ("write", TLSVersion.TLSv1_2, _TLSContentType.CHANGE_CIPHER_SPEC, _TLSMessageType.CHANGE_CIPHER_SPEC), msg ) def test_msg_callback_deadlock_bpo43577(self): client_context, server_context, hostname = testing_context() server_context2 = testing_context()[1] def msg_cb(conn, direction, version, content_type, msg_type, data): pass def sni_cb(sock, servername, ctx): sock.context = server_context2 server_context._msg_callback = msg_cb server_context.sni_callback = sni_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) def set_socket_so_linger_on_with_zero_timeout(sock): sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) class TestPreHandshakeClose(unittest.TestCase): """Verify behavior of close sockets with received data before to the handshake. 
""" class SingleConnectionTestServerThread(threading.Thread): def __init__(self, *, name, call_after_accept, timeout=None): self.call_after_accept = call_after_accept self.received_data = b'' # set by .run() self.wrap_error = None # set by .run() self.listener = None # set by .start() self.port = None # set by .start() if timeout is None: self.timeout = support.SHORT_TIMEOUT else: self.timeout = timeout super().__init__(name=name) def __enter__(self): self.start() return self def __exit__(self, *args): try: if self.listener: self.listener.close() except OSError: pass self.join() self.wrap_error = None # avoid dangling references def start(self): self.ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.ssl_ctx.verify_mode = ssl.CERT_REQUIRED self.ssl_ctx.load_verify_locations(cafile=ONLYCERT) self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) self.listener = socket.socket() self.port = socket_helper.bind_port(self.listener) self.listener.settimeout(self.timeout) self.listener.listen(1) super().start() def run(self): try: conn, address = self.listener.accept() except TimeoutError: # on timeout, just close the listener return finally: self.listener.close() with conn: if self.call_after_accept(conn): return try: tls_socket = self.ssl_ctx.wrap_socket(conn, server_side=True) except OSError as err: # ssl.SSLError inherits from OSError self.wrap_error = err else: try: self.received_data = tls_socket.recv(400) except OSError: pass # closed, protocol error, etc. def non_linux_skip_if_other_okay_error(self, err): if sys.platform in ("linux", "android"): return # Expect the full test setup to always work on Linux. if (isinstance(err, ConnectionResetError) or (isinstance(err, OSError) and err.errno == errno.EINVAL) or re.search('wrong.version.number', getattr(err, "reason", ""), re.I)): # On Windows the TCP RST leads to a ConnectionResetError # (ECONNRESET) which Linux doesn't appear to surface to userspace. # If wrap_socket() winds up on the "if connected:" path and doing # the actual wrapping... we get an SSLError from OpenSSL. Typically # WRONG_VERSION_NUMBER. While appropriate, neither is the scenario # we're specifically trying to test. The way this test is written # is known to work on Linux. We'll skip it anywhere else that it # does not present as doing so. try: self.skipTest(f"Could not recreate conditions on {sys.platform}:" f" {err=}") finally: # gh-108342: Explicitly break the reference cycle err = None # If maintaining this conditional winds up being a problem. # just turn this into an unconditional skip anything but Linux. # The important thing is that our CI has the logic covered. def test_preauth_data_to_tls_server(self): server_accept_called = threading.Event() ready_for_server_wrap_socket = threading.Event() def call_after_accept(unused): server_accept_called.set() if not ready_for_server_wrap_socket.wait(support.SHORT_TIMEOUT): raise RuntimeError("wrap_socket event never set, test may fail.") return False # Tell the server thread to continue. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_server") self.enterContext(server) # starts it & unittest.TestCase stops it. with socket.socket() as client: client.connect(server.listener.getsockname()) # This forces an immediate connection close via RST on .close(). 
set_socket_so_linger_on_with_zero_timeout(client) client.setblocking(False) server_accept_called.wait() client.send(b"DELETE /data HTTP/1.0\r\n\r\n") client.close() # RST ready_for_server_wrap_socket.set() server.join() wrap_error = server.wrap_error server.wrap_error = None try: self.assertEqual(b"", server.received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle wrap_error = None server = None def test_preauth_data_to_tls_client(self): server_can_continue_with_wrap_socket = threading.Event() client_can_continue_with_wrap_socket = threading.Event() def call_after_accept(conn_to_client): if not server_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): print("ERROR: test client took too long") # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 307 Temporary Redirect\r\n" b"Location: https://example.com/someone-elses-server\r\n" b"\r\n") conn_to_client.close() # RST client_can_continue_with_wrap_socket.set() return True # Tell the server to stop. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_client") self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) with socket.socket() as client: client.connect(server.listener.getsockname()) server_can_continue_with_wrap_socket.set() if not client_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): self.fail("test server took too long") ssl_ctx = ssl.create_default_context() try: tls_client = ssl_ctx.wrap_socket( client, server_hostname="localhost") except OSError as err: # SSLError inherits from OSError wrap_error = err received_data = b"" else: wrap_error = None received_data = tls_client.recv(400) tls_client.close() server.join() try: self.assertEqual(b"", received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle with warnings_helper.check_no_resource_warning(self): wrap_error = None server = None def test_https_client_non_tls_response_ignored(self): server_responding = threading.Event() class SynchronizedHTTPSConnection(http.client.HTTPSConnection): def connect(self): # Call clear text HTTP connect(), not the encrypted HTTPS (TLS) # connect(): wrap_socket() is called manually below. http.client.HTTPConnection.connect(self) # Wait for our fault injection server to have done its thing. 
if not server_responding.wait(support.SHORT_TIMEOUT) and support.verbose: sys.stdout.write("server_responding event never set.") self.sock = self._context.wrap_socket( self.sock, server_hostname=self.host) def call_after_accept(conn_to_client): # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 402 Payment Required\r\n" b"\r\n") conn_to_client.close() # RST server_responding.set() return True # Tell the server to stop. timeout = 2.0 server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="non_tls_http_RST_responder", timeout=timeout) self.enterContext(server) # starts it & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) connection = SynchronizedHTTPSConnection( server.listener.getsockname()[0], port=server.port, context=ssl.create_default_context(), timeout=timeout, ) # There are lots of reasons this raises as desired, long before this # test was added. Sending the request requires a successful TLS wrapped # socket; that fails if the connection is broken. It may seem pointless # to test this. It serves as an illustration of something that we never # want to happen... properly not happening. with warnings_helper.check_no_resource_warning(self), \ self.assertRaises(OSError): connection.request("HEAD", "/test", headers={"Host": "localhost"}) response = connection.getresponse() server.join() class TestEnumerations(unittest.TestCase): def test_tlsversion(self): class CheckedTLSVersion(enum.IntEnum): MINIMUM_SUPPORTED = _ssl.PROTO_MINIMUM_SUPPORTED SSLv3 = _ssl.PROTO_SSLv3 TLSv1 = _ssl.PROTO_TLSv1 TLSv1_1 = _ssl.PROTO_TLSv1_1 TLSv1_2 = _ssl.PROTO_TLSv1_2 TLSv1_3 = _ssl.PROTO_TLSv1_3 MAXIMUM_SUPPORTED = _ssl.PROTO_MAXIMUM_SUPPORTED enum._test_simple_enum(CheckedTLSVersion, TLSVersion) def test_tlscontenttype(self): class Checked_TLSContentType(enum.IntEnum): """Content types (record layer) See RFC 8446, section B.1 """ CHANGE_CIPHER_SPEC = 20 ALERT = 21 HANDSHAKE = 22 APPLICATION_DATA = 23 # pseudo content types HEADER = 0x100 INNER_CONTENT_TYPE = 0x101 enum._test_simple_enum(Checked_TLSContentType, _TLSContentType) def test_tlsalerttype(self): class Checked_TLSAlertType(enum.IntEnum): """Alert types for TLSContentType.ALERT messages See RFC 8466, section B.2 """ CLOSE_NOTIFY = 0 UNEXPECTED_MESSAGE = 10 BAD_RECORD_MAC = 20 DECRYPTION_FAILED = 21 RECORD_OVERFLOW = 22 DECOMPRESSION_FAILURE = 30 HANDSHAKE_FAILURE = 40 NO_CERTIFICATE = 41 BAD_CERTIFICATE = 42 UNSUPPORTED_CERTIFICATE = 43 CERTIFICATE_REVOKED = 44 CERTIFICATE_EXPIRED = 45 CERTIFICATE_UNKNOWN = 46 ILLEGAL_PARAMETER = 47 UNKNOWN_CA = 48 ACCESS_DENIED = 49 DECODE_ERROR = 50 DECRYPT_ERROR = 51 EXPORT_RESTRICTION = 60 PROTOCOL_VERSION = 70 INSUFFICIENT_SECURITY = 71 INTERNAL_ERROR = 80 INAPPROPRIATE_FALLBACK = 86 USER_CANCELED = 90 NO_RENEGOTIATION = 100 MISSING_EXTENSION = 109 UNSUPPORTED_EXTENSION = 110 CERTIFICATE_UNOBTAINABLE = 111 UNRECOGNIZED_NAME = 112 BAD_CERTIFICATE_STATUS_RESPONSE = 113 BAD_CERTIFICATE_HASH_VALUE = 114 UNKNOWN_PSK_IDENTITY = 115 CERTIFICATE_REQUIRED = 116 NO_APPLICATION_PROTOCOL = 120 enum._test_simple_enum(Checked_TLSAlertType, _TLSAlertType) def test_tlsmessagetype(self): class Checked_TLSMessageType(enum.IntEnum): """Message types (handshake protocol) See RFC 8446, section B.3 """ HELLO_REQUEST = 0 CLIENT_HELLO = 1 SERVER_HELLO = 2 HELLO_VERIFY_REQUEST = 3 
NEWSESSION_TICKET = 4 END_OF_EARLY_DATA = 5 HELLO_RETRY_REQUEST = 6 ENCRYPTED_EXTENSIONS = 8 CERTIFICATE = 11 SERVER_KEY_EXCHANGE = 12 CERTIFICATE_REQUEST = 13 SERVER_DONE = 14 CERTIFICATE_VERIFY = 15 CLIENT_KEY_EXCHANGE = 16 FINISHED = 20 CERTIFICATE_URL = 21 CERTIFICATE_STATUS = 22 SUPPLEMENTAL_DATA = 23 KEY_UPDATE = 24 NEXT_PROTO = 67 MESSAGE_HASH = 254 CHANGE_CIPHER_SPEC = 0x0101 enum._test_simple_enum(Checked_TLSMessageType, _TLSMessageType) def test_sslmethod(self): Checked_SSLMethod = enum._old_convert_( enum.IntEnum, '_SSLMethod', 'ssl', lambda name: name.startswith('PROTOCOL_') and name != 'PROTOCOL_SSLv23', source=ssl._ssl, ) # This member is assigned dynamically in `ssl.py`: Checked_SSLMethod.PROTOCOL_SSLv23 = Checked_SSLMethod.PROTOCOL_TLS enum._test_simple_enum(Checked_SSLMethod, ssl._SSLMethod) def test_options(self): CheckedOptions = enum._old_convert_( enum.IntFlag, 'Options', 'ssl', lambda name: name.startswith('OP_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedOptions, ssl.Options) def test_alertdescription(self): CheckedAlertDescription = enum._old_convert_( enum.IntEnum, 'AlertDescription', 'ssl', lambda name: name.startswith('ALERT_DESCRIPTION_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedAlertDescription, ssl.AlertDescription) def test_sslerrornumber(self): Checked_SSLErrorNumber = enum._old_convert_( enum.IntEnum, 'SSLErrorNumber', 'ssl', lambda name: name.startswith('SSL_ERROR_'), source=ssl._ssl, ) enum._test_simple_enum(Checked_SSLErrorNumber, ssl.SSLErrorNumber) def test_verifyflags(self): CheckedVerifyFlags = enum._old_convert_( enum.IntFlag, 'VerifyFlags', 'ssl', lambda name: name.startswith('VERIFY_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyFlags, ssl.VerifyFlags) def test_verifymode(self): CheckedVerifyMode = enum._old_convert_( enum.IntEnum, 'VerifyMode', 'ssl', lambda name: name.startswith('CERT_'), source=ssl._ssl, ) enum._test_simple_enum(CheckedVerifyMode, ssl.VerifyMode) def setUpModule(): if support.verbose: plats = { 'Mac': platform.mac_ver, 'Windows': platform.win32_ver, } for name, func in plats.items(): plat = func() if plat and plat[0]: plat = '%s %r' % (name, plat) break else: plat = repr(platform.platform()) print("test_ssl: testing with %r %r" % (ssl.OPENSSL_VERSION, ssl.OPENSSL_VERSION_INFO)) print(" under %s" % plat) print(" HAS_SNI = %r" % ssl.HAS_SNI) print(" OP_ALL = 0x%8x" % ssl.OP_ALL) try: print(" OP_NO_TLSv1_1 = 0x%8x" % ssl.OP_NO_TLSv1_1) except AttributeError: pass for filename in [ CERTFILE, BYTES_CERTFILE, ONLYCERT, ONLYKEY, BYTES_ONLYCERT, BYTES_ONLYKEY, SIGNED_CERTFILE, SIGNED_CERTFILE2, SIGNING_CA, BADCERT, BADKEY, EMPTYCERT]: if not os.path.exists(filename): raise support.TestFailed("Can't read certificate file %r" % filename) thread_info = threading_helper.threading_setup() unittest.addModuleCleanup(threading_helper.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_subprocess.py000066400000000000000000005140141471441230600221330ustar00rootroot00000000000000import unittest from unittest import mock from test import support from test.support import check_sanitizer from test.support import import_helper from test.support import os_helper from test.support import warnings_helper from test.support.script_helper import assert_python_ok import subprocess import sys import signal import io import itertools import os import errno import tempfile import time import traceback import types import selectors import sysconfig import select import 
shutil import threading import gc import textwrap import json from test.support.os_helper import FakePath try: import _testcapi except ImportError: _testcapi = None try: import pwd except ImportError: pwd = None try: import grp except ImportError: grp = None try: import fcntl except: fcntl = None if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") if not support.has_subprocess_support: raise unittest.SkipTest("test module requires subprocess") mswindows = (sys.platform == "win32") # # Depends on the following external programs: Python # if mswindows: SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' 'os.O_BINARY);') else: SETBINARY = '' NONEXISTING_CMD = ('nonexisting_i_hope',) # Ignore errors that indicate the command was not found NONEXISTING_ERRORS = (FileNotFoundError, NotADirectoryError, PermissionError) ZERO_RETURN_CMD = (sys.executable, '-c', 'pass') def setUpModule(): shell_true = shutil.which('true') if shell_true is None: return if (os.access(shell_true, os.X_OK) and subprocess.run([shell_true]).returncode == 0): global ZERO_RETURN_CMD ZERO_RETURN_CMD = (shell_true,) # Faster than Python startup. class BaseTestCase(unittest.TestCase): def setUp(self): # Try to minimize the number of children we have so this test # doesn't crash on some buildbots (Alphas in particular). support.reap_children() def tearDown(self): if not mswindows: # subprocess._active is not used on Windows and is set to None. for inst in subprocess._active: inst.wait() subprocess._cleanup() self.assertFalse( subprocess._active, "subprocess._active not empty" ) self.doCleanups() support.reap_children() class PopenTestException(Exception): pass class PopenExecuteChildRaises(subprocess.Popen): """Popen subclass for testing cleanup of subprocess.PIPE filehandles when _execute_child fails. """ def _execute_child(self, *args, **kwargs): raise PopenTestException("Forced Exception for Test") class ProcessTestCase(BaseTestCase): def test_io_buffered_by_default(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) try: self.assertIsInstance(p.stdin, io.BufferedIOBase) self.assertIsInstance(p.stdout, io.BufferedIOBase) self.assertIsInstance(p.stderr, io.BufferedIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_io_unbuffered_works(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0) try: self.assertIsInstance(p.stdin, io.RawIOBase) self.assertIsInstance(p.stdout, io.RawIOBase) self.assertIsInstance(p.stderr, io.RawIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_call_seq(self): # call() function with sequence argument rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(rc, 47) def test_call_timeout(self): # call() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.call waits for the # child. 
self.assertRaises(subprocess.TimeoutExpired, subprocess.call, [sys.executable, "-c", "while True: pass"], timeout=0.1) def test_check_call_zero(self): # check_call() function with zero return code rc = subprocess.check_call(ZERO_RETURN_CMD) self.assertEqual(rc, 0) def test_check_call_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(c.exception.returncode, 47) def test_check_output(self): # check_output() function with zero return code output = subprocess.check_output( [sys.executable, "-c", "print('BDFL')"]) self.assertIn(b'BDFL', output) with self.assertRaisesRegex(ValueError, "stdout argument not allowed, it will be overridden"): subprocess.check_output([], stdout=None) with self.assertRaisesRegex(ValueError, "check argument not allowed, it will be overridden"): subprocess.check_output([], check=False) def test_check_output_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_output( [sys.executable, "-c", "import sys; sys.exit(5)"]) self.assertEqual(c.exception.returncode, 5) def test_check_output_stderr(self): # check_output() function stderr redirected to stdout output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stderr.write('BDFL')"], stderr=subprocess.STDOUT) self.assertIn(b'BDFL', output) def test_check_output_stdin_arg(self): # check_output() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], stdin=tf) self.assertIn(b'PEAR', output) def test_check_output_input_arg(self): # check_output() can be called with input set to a string output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], input=b'pear') self.assertIn(b'PEAR', output) def test_check_output_input_none(self): """input=None has a legacy meaning of input='' on check_output.""" output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None) self.assertNotIn(b'XX', output) def test_check_output_input_none_text(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, text=True) self.assertNotIn('XX', output) def test_check_output_input_none_universal_newlines(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, universal_newlines=True) self.assertNotIn('XX', output) def test_check_output_input_none_encoding_errors(self): output = subprocess.check_output( [sys.executable, "-c", "print('foo')"], input=None, encoding='utf-8', errors='ignore') self.assertIn('foo', output) def test_check_output_stdout_arg(self): # check_output() refuses to accept 'stdout' argument with self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdout=sys.stdout) self.fail("Expected ValueError when stdout arg supplied.") self.assertIn('stdout', c.exception.args[0]) def test_check_output_stdin_with_input_arg(self): # check_output() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with 
self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdin=tf, input=b'hare') self.fail("Expected ValueError when stdin and input args supplied.") self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): # check_output() function with timeout arg with self.assertRaises(subprocess.TimeoutExpired) as c: output = subprocess.check_output( [sys.executable, "-c", "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"], # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3) self.fail("Expected TimeoutExpired.") self.assertEqual(c.exception.output, b'BDFL') def test_call_kwargs(self): # call() function with keyword args newenv = os.environ.copy() newenv["FRUIT"] = "banana" rc = subprocess.call([sys.executable, "-c", 'import sys, os;' 'sys.exit(os.getenv("FRUIT")=="banana")'], env=newenv) self.assertEqual(rc, 1) def test_invalid_args(self): # Popen() called with invalid arguments should raise TypeError # but Popen.__del__ should not complain (issue #12085) with support.captured_stderr() as s: self.assertRaises(TypeError, subprocess.Popen, invalid_arg_name=1) argcount = subprocess.Popen.__init__.__code__.co_argcount too_many_args = [0] * (argcount + 1) self.assertRaises(TypeError, subprocess.Popen, *too_many_args) self.assertEqual(s.getvalue(), '') def test_stdin_none(self): # .stdin is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) p.wait() self.assertEqual(p.stdin, None) def test_stdout_none(self): # .stdout is None when not redirected, and the child's stdout will # be inherited from the parent. In order to test this we run a # subprocess in a subprocess: # this_test # \-- subprocess created by this test (parent) # \-- subprocess created by the parent subprocess (child) # The parent doesn't specify stdout, so the child will use the # parent's stdout. This test checks that the message printed by the # child goes to the parent stdout. The parent also checks that the # child's stdout is None. See #11963. code = ('import sys; from subprocess import Popen, PIPE;' 'p = Popen([sys.executable, "-c", "print(\'test_stdout_none\')"],' ' stdin=PIPE, stderr=PIPE);' 'p.wait(); assert p.stdout is None;') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test_stdout_none') def test_stderr_none(self): # .stderr is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stdin.close) p.wait() self.assertEqual(p.stderr, None) def _assert_python(self, pre_args, **kwargs): # We include sys.exit() to prevent the test runner from hanging # whenever python is found. args = pre_args + ["import sys; sys.exit(47)"] p = subprocess.Popen(args, **kwargs) p.wait() self.assertEqual(47, p.returncode) def test_executable(self): # Check that the executable argument works. 
# # On Unix (non-Mac and non-Windows), Python looks at args[0] to # determine where its standard library is, so we need the directory # of args[0] to be valid for the Popen() call to Python to succeed. # See also issue #16170 and issue #7774. doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=sys.executable) def test_bytes_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=os.fsencode(sys.executable)) def test_pathlike_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=FakePath(sys.executable)) def test_executable_takes_precedence(self): # Check that the executable argument takes precedence over args[0]. # # Verify first that the call succeeds without the executable arg. pre_args = [sys.executable, "-c"] self._assert_python(pre_args) self.assertRaises(NONEXISTING_ERRORS, self._assert_python, pre_args, executable=NONEXISTING_CMD[0]) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_executable_replaces_shell(self): # Check that the executable argument replaces the default shell # when shell=True. self._assert_python([], executable=sys.executable, shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_bytes_executable_replaces_shell(self): self._assert_python([], executable=os.fsencode(sys.executable), shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_pathlike_executable_replaces_shell(self): self._assert_python([], executable=FakePath(sys.executable), shell=True) # For use in the test_cwd* tests below. def _normalize_cwd(self, cwd): # Normalize an expected cwd (for Tru64 support). # We can't use os.path.realpath since it doesn't expand Tru64 {memb} # strings. See bug #1063571. with os_helper.change_cwd(cwd): return os.getcwd() # For use in the test_cwd* tests below. def _split_python_path(self): # Return normalized (python_dir, python_base). python_path = os.path.realpath(sys.executable) return os.path.split(python_path) # For use in the test_cwd* tests below. def _assert_cwd(self, expected_cwd, python_arg, **kwargs): # Invoke Python via Popen, and assert that (1) the call succeeds, # and that (2) the current working directory of the child process # matches *expected_cwd*. p = subprocess.Popen([python_arg, "-c", "import os, sys; " "buf = sys.stdout.buffer; " "buf.write(os.getcwd().encode()); " "buf.flush(); " "sys.exit(47)"], stdout=subprocess.PIPE, **kwargs) self.addCleanup(p.stdout.close) p.wait() self.assertEqual(47, p.returncode) normcase = os.path.normcase self.assertEqual(normcase(expected_cwd), normcase(p.stdout.read().decode())) def test_cwd(self): # Check that cwd changes the cwd for the child process. 
temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=temp_dir) def test_cwd_with_bytes(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=os.fsencode(temp_dir)) def test_cwd_with_pathlike(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=FakePath(temp_dir)) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_arg(self): # Check that Popen looks for args[0] relative to cwd if args[0] # is relative. python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python]) self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, rel_python, cwd=python_dir) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_executable(self): # Check that Popen looks for executable relative to cwd if executable # is relative (and that executable takes precedence over args[0]). python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) doesntexist = "somethingyoudonthave" with os_helper.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python) self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python, cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, doesntexist, executable=rel_python, cwd=python_dir) def test_cwd_with_absolute_arg(self): # Check that Popen can find the executable when the cwd is wrong # if args[0] is an absolute path. python_dir, python_base = self._split_python_path() abs_python = os.path.join(python_dir, python_base) rel_python = os.path.join(os.curdir, python_base) with os_helper.temp_dir() as wrong_dir: # Before calling with an absolute path, confirm that using a # relative path fails. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) wrong_dir = self._normalize_cwd(wrong_dir) self._assert_cwd(wrong_dir, abs_python, cwd=wrong_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') def test_executable_with_cwd(self): python_dir, python_base = self._split_python_path() python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, "somethingyoudonthave", executable=sys.executable, cwd=python_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') @unittest.skipIf(sysconfig.is_python_build(), "need an installed Python. See #7774") def test_executable_without_cwd(self): # For a normal installation, it should work without 'cwd' # argument. For test runs in the build directory, see #7774. 
self._assert_cwd(os.getcwd(), "somethingyoudonthave", executable=sys.executable) def test_stdin_pipe(self): # stdin redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.stdin.write(b"pear") p.stdin.close() p.wait() self.assertEqual(p.returncode, 1) def test_stdin_filedes(self): # stdin is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() os.write(d, b"pear") os.lseek(d, 0, 0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=d) p.wait() self.assertEqual(p.returncode, 1) def test_stdin_fileobj(self): # stdin is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b"pear") tf.seek(0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=tf) p.wait() self.assertEqual(p.returncode, 1) def test_stdout_pipe(self): # stdout redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read(), b"orange") def test_stdout_filedes(self): # stdout is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"orange") def test_stdout_fileobj(self): # stdout is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"orange") def test_stderr_pipe(self): # stderr redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=subprocess.PIPE) with p: self.assertEqual(p.stderr.read(), b"strawberry") def test_stderr_filedes(self): # stderr is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"strawberry") def test_stderr_fileobj(self): # stderr is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"strawberry") def test_stderr_redirect_with_no_stdout_redirect(self): # test stderr=STDOUT while stdout=None (not set) # - grandchild prints to stderr # - child redirects grandchild's stderr to its stdout # - the parent should get grandchild's stderr in child's stdout p = subprocess.Popen([sys.executable, "-c", 'import sys, subprocess;' 'rc = subprocess.call([sys.executable, "-c",' ' "import sys;"' ' "sys.stderr.write(\'42\')"],' ' stderr=subprocess.STDOUT);' 'sys.exit(rc)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() #NOTE: stdout should get stderr from grandchild self.assertEqual(stdout, b'42') self.assertEqual(stderr, b'') # should be empty self.assertEqual(p.returncode, 0) def test_stdout_stderr_pipe(self): # capture stdout and stderr to the same pipe p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=subprocess.PIPE, 
stderr=subprocess.STDOUT) with p: self.assertEqual(p.stdout.read(), b"appleorange") def test_stdout_stderr_file(self): # capture stdout and stderr to the same open file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=tf, stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"appleorange") def test_stdout_filedes_of_stdout(self): # stdout is set to 1 (#1531862). # To avoid printing the text on stdout, we do something similar to # test_stdout_none (see above). The parent subprocess calls the child # subprocess passing stdout=1, and this test uses stdout=PIPE in # order to capture and check the output of the parent. See #11963. code = ('import sys, subprocess; ' 'rc = subprocess.call([sys.executable, "-c", ' ' "import os, sys; sys.exit(os.write(sys.stdout.fileno(), ' 'b\'test with stdout=1\'))"], stdout=1); ' 'assert rc == 18') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test with stdout=1') def test_stdout_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'for i in range(10240):' 'print("x" * 1024)'], stdout=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdout, None) def test_stderr_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys\n' 'for i in range(10240):' 'sys.stderr.write("x" * 1024)'], stderr=subprocess.DEVNULL) p.wait() self.assertEqual(p.stderr, None) def test_stdin_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdin.read(1)'], stdin=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdin, None) @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesizes(self): test_pipe_r, test_pipe_w = os.pipe() try: # Get the default pipesize with F_GETPIPE_SZ pipesize_default = fcntl.fcntl(test_pipe_w, fcntl.F_GETPIPE_SZ) finally: os.close(test_pipe_r) os.close(test_pipe_w) pipesize = pipesize_default // 2 pagesize_default = support.get_pagesize() if pipesize < pagesize_default: # the POSIX minimum raise unittest.SkipTest( 'default pipesize too small to perform test.') p = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=pipesize) try: for fifo in [p.stdin, p.stdout, p.stderr]: self.assertEqual( fcntl.fcntl(fifo.fileno(), fcntl.F_GETPIPE_SZ), pipesize) # Windows pipe size can be acquired via GetNamedPipeInfoFunction # https://docs.microsoft.com/en-us/windows/win32/api/namedpipeapi/nf-namedpipeapi-getnamedpipeinfo # However, this function is not yet in _winapi. 
p.stdin.write(b"pear") p.stdin.close() p.stdout.close() p.stderr.close() finally: p.kill() p.wait() @unittest.skipUnless(fcntl and hasattr(fcntl, 'F_GETPIPE_SZ'), 'fcntl.F_GETPIPE_SZ required for test.') def test_pipesize_default(self): proc = subprocess.Popen( [sys.executable, "-c", 'import sys; sys.stdin.read(); sys.stdout.write("out"); ' 'sys.stderr.write("error!")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pipesize=-1) with proc: try: fp_r, fp_w = os.pipe() try: default_read_pipesize = fcntl.fcntl(fp_r, fcntl.F_GETPIPE_SZ) default_write_pipesize = fcntl.fcntl(fp_w, fcntl.F_GETPIPE_SZ) finally: os.close(fp_r) os.close(fp_w) self.assertEqual( fcntl.fcntl(proc.stdin.fileno(), fcntl.F_GETPIPE_SZ), default_read_pipesize) self.assertEqual( fcntl.fcntl(proc.stdout.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) self.assertEqual( fcntl.fcntl(proc.stderr.fileno(), fcntl.F_GETPIPE_SZ), default_write_pipesize) # On other platforms we cannot test the pipe size (yet). But above # code using pipesize=-1 should not crash. finally: proc.kill() def test_env(self): newenv = os.environ.copy() newenv["FRUIT"] = "orange" with subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_duplicate_envs(self): newenv = os.environ.copy() newenv["fRUit"] = "cherry" newenv["fruit"] = "lemon" newenv["FRUIT"] = "orange" newenv["frUit"] = "banana" with subprocess.Popen(["CMD", "/c", "SET", "fruit"], stdout=subprocess.PIPE, env=newenv) as p: stdout, _ = p.communicate() self.assertEqual(stdout.strip(), b"frUit=banana") # Windows requires at least the SYSTEMROOT environment variable to start # Python @unittest.skipIf(sys.platform == 'win32', 'cannot test an empty env on Windows') @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'with an empty environment.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_empty_env(self): """Verify that env={} is as empty as possible.""" def is_env_var_to_ignore(n): """Determine if an environment variable is under our control.""" # This excludes some __CF_* and VERSIONER_* keys MacOS insists # on adding even when the environment in exec is empty. # Gentoo sandboxes also force LD_PRELOAD and SANDBOX_* to exist. 
return ('VERSIONER' in n or '__CF' in n or # MacOS n == 'LD_PRELOAD' or n.startswith('SANDBOX') or # Gentoo n == 'LC_CTYPE') # Locale coercion triggered with subprocess.Popen([sys.executable, "-c", 'import os; print(list(os.environ.keys()))'], stdout=subprocess.PIPE, env={}) as p: stdout, stderr = p.communicate() child_env_names = eval(stdout.strip()) self.assertIsInstance(child_env_names, list) child_env_names = [k for k in child_env_names if not is_env_var_to_ignore(k)] self.assertEqual(child_env_names, []) @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'without some system environments.') @unittest.skipIf(check_sanitizer(address=True), 'AddressSanitizer adds to the environment.') def test_one_environment_variable(self): newenv = {'fruit': 'orange'} cmd = [sys.executable, '-c', 'import sys,os;' 'sys.stdout.write("fruit="+os.getenv("fruit"))'] if sys.platform == "win32": cmd = ["CMD", "/c", "SET", "fruit"] with subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() if p.returncode and support.verbose: print("STDOUT:", stdout.decode("ascii", "replace")) print("STDERR:", stderr.decode("ascii", "replace")) self.assertEqual(p.returncode, 0) self.assertEqual(stdout.strip(), b"fruit=orange") def test_invalid_cmd(self): # null character in the command name cmd = sys.executable + '\0' with self.assertRaises(ValueError): subprocess.Popen([cmd, "-c", "pass"]) # null character in the command argument with self.assertRaises(ValueError): subprocess.Popen([sys.executable, "-c", "pass#\0"]) def test_invalid_env(self): # null character in the environment variable name newenv = os.environ.copy() newenv["FRUIT\0VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # null character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange\0VEGETABLE=cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable name newenv = os.environ.copy() newenv["FRUIT=ORANGE"] = "lemon" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange=lemon" with subprocess.Popen([sys.executable, "-c", 'import sys, os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange=lemon") @unittest.skipUnless(sys.platform == "win32", "Windows only issue") def test_win32_invalid_env(self): # '=' in the environment variable name newenv = os.environ.copy() newenv["FRUIT=VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) newenv = os.environ.copy() newenv["==FRUIT"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) def test_communicate_stdin(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.communicate(b"pear") self.assertEqual(p.returncode, 1) def test_communicate_stdout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("pineapple")'], stdout=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, b"pineapple") self.assertEqual(stderr, None) def test_communicate_stderr(self): p = 
subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("pineapple")'], stderr=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, b"pineapple") def test_communicate(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") self.assertEqual(stderr, b"pineapple") def test_communicate_timeout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stderr.write("pineapple\\n");' 'time.sleep(1);' 'sys.stderr.write("pear\\n");' 'sys.stdout.write(sys.stdin.read())'], universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, "banana", timeout=0.3) # Make sure we can keep waiting for it, and that we get the whole output # after it completes. (stdout, stderr) = p.communicate() self.assertEqual(stdout, "banana") self.assertEqual(stderr.encode(), b"pineapple\npear\n") def test_communicate_timeout_large_output(self): # Test an expiring timeout while the child is outputting lots of data. p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));' 'time.sleep(0.2);' 'sys.stdout.write("a" * (64 * 1024));'], stdout=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, timeout=0.4) (stdout, _) = p.communicate() self.assertEqual(len(stdout), 4 * 64 * 1024) # Test for the fd leak reported in http://bugs.python.org/issue2791. def test_communicate_pipe_fd_leak(self): for stdin_pipe in (False, True): for stdout_pipe in (False, True): for stderr_pipe in (False, True): options = {} if stdin_pipe: options['stdin'] = subprocess.PIPE if stdout_pipe: options['stdout'] = subprocess.PIPE if stderr_pipe: options['stderr'] = subprocess.PIPE if not options: continue p = subprocess.Popen(ZERO_RETURN_CMD, **options) p.communicate() if p.stdin is not None: self.assertTrue(p.stdin.closed) if p.stdout is not None: self.assertTrue(p.stdout.closed) if p.stderr is not None: self.assertTrue(p.stderr.closed) def test_communicate_returns(self): # communicate() should return None if no redirection is active p = subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(47)"]) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, None) def test_communicate_pipe_buf(self): # communicate() with writes larger than pipe_buf # This test will probably deadlock rather than fail, if # communicate() does not work properly. 
x, y = os.pipe() os.close(x) os.close(y) p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read(47));' 'sys.stderr.write("x" * %d);' 'sys.stdout.write(sys.stdin.read())' % support.PIPE_MAX_SIZE], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) string_to_write = b"a" * support.PIPE_MAX_SIZE (stdout, stderr) = p.communicate(string_to_write) self.assertEqual(stdout, string_to_write) def test_writes_before_communicate(self): # stdin.write before communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.stdin.write(b"banana") (stdout, stderr) = p.communicate(b"split") self.assertEqual(stdout, b"bananasplit") self.assertEqual(stderr, b"") def test_universal_newlines_and_text(self): args = [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(sys.stdin.readline().encode());' 'buf.flush();' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(sys.stdin.read().encode());' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'] for extra_kwarg in ('universal_newlines', 'text'): p = subprocess.Popen(args, **{'stdin': subprocess.PIPE, 'stdout': subprocess.PIPE, extra_kwarg: True}) with p: p.stdin.write("line1\n") p.stdin.flush() self.assertEqual(p.stdout.readline(), "line1\n") p.stdin.write("line3\n") p.stdin.close() self.addCleanup(p.stdout.close) self.assertEqual(p.stdout.readline(), "line2\n") self.assertEqual(p.stdout.read(6), "line3\n") self.assertEqual(p.stdout.read(), "line4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate(self): # universal newlines through communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=1) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate() self.assertEqual(stdout, "line2\nline4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate_stdin(self): # universal newlines through communicate(), with only stdin p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.readline() assert s == "line1\\n", repr(s) s = sys.stdin.read() assert s == "line3\\n", repr(s) ''')], stdin=subprocess.PIPE, universal_newlines=1) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_input_none(self): # Test communicate(input=None) with universal newlines. # # We set stdout to PIPE because, as of this writing, a different # code path is tested when the number of pipes is zero or one. 
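        # (With no timeout and at most one pipe, communicate() can use a plain
        # blocking read/write; adding the stdout pipe here pushes it onto the
        # general multiplexing path instead.)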
p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) p.communicate() self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_stdin_stdout_stderr(self): # universal newlines through communicate(), with stdin, stdout, stderr p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.buffer.readline() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line2\\r") sys.stderr.buffer.write(b"eline2\\n") s = sys.stdin.buffer.read() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line4\\n") sys.stdout.buffer.write(b"line5\\r\\n") sys.stderr.buffer.write(b"eline6\\r") sys.stderr.buffer.write(b"eline7\\r\\nz") ''')], stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) self.assertEqual("line1\nline2\nline3\nline4\nline5\n", stdout) # Python debug build push something like "[42442 refs]\n" # to stderr at exit of subprocess. self.assertTrue(stderr.startswith("eline2\neline6\neline7\n")) def test_universal_newlines_communicate_encodings(self): # Check that universal newlines mode works for various encodings, # in particular for encodings in the UTF-16 and UTF-32 families. # See issue #15595. # # UTF-16 and UTF-32-BE are sufficient to check both with BOM and # without, and UTF-16 and UTF-32. for encoding in ['utf-16', 'utf-32-be']: code = ("import sys; " r"sys.stdout.buffer.write('1\r\n2\r3\n4'.encode('%s'))" % encoding) args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding=encoding) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '1\n2\n3\n4') def test_communicate_errors(self): for errors, expected in [ ('ignore', ''), ('replace', '\ufffd\ufffd'), ('surrogateescape', '\udc80\udc80'), ('backslashreplace', '\\x80\\x80'), ]: code = ("import sys; " r"sys.stdout.buffer.write(b'[\x80\x80]')") args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8', errors=errors) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '[{}]'.format(expected)) def test_no_leaking(self): # Make sure we leak no resources if not mswindows: max_handles = 1026 # too much for most UNIX systems else: max_handles = 2050 # too much for (at least some) Windows setups handles = [] tmpdir = tempfile.mkdtemp() try: for i in range(max_handles): try: tmpfile = os.path.join(tmpdir, os_helper.TESTFN) handles.append(os.open(tmpfile, os.O_WRONLY|os.O_CREAT)) except OSError as e: if e.errno != errno.EMFILE: raise break else: self.skipTest("failed to reach the file descriptor limit " "(tried %d)" % max_handles) # Close a couple of them (should be enough for a subprocess) for i in range(10): os.close(handles.pop()) # Loop creating some subprocesses. If one of them leaks some fds, # the next loop iteration will fail by reaching the max fd limit. 
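            # Each Popen here needs a handful of descriptors (three pipes,
            # plus an internal error pipe on POSIX), so any per-call leak
            # would exhaust the ~10 fds freed above within a few iterations.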
for i in range(15): p = subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.read())"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate(b"lime")[0] self.assertEqual(data, b"lime") finally: for h in handles: os.close(h) shutil.rmtree(tmpdir) def test_list2cmdline(self): self.assertEqual(subprocess.list2cmdline(['a b c', 'd', 'e']), '"a b c" d e') self.assertEqual(subprocess.list2cmdline(['ab"c', '\\', 'd']), 'ab\\"c \\ d') self.assertEqual(subprocess.list2cmdline(['ab"c', ' \\', 'd']), 'ab\\"c " \\\\" d') self.assertEqual(subprocess.list2cmdline(['a\\\\\\b', 'de fg', 'h']), 'a\\\\\\b "de fg" h') self.assertEqual(subprocess.list2cmdline(['a\\"b', 'c', 'd']), 'a\\\\\\"b c d') self.assertEqual(subprocess.list2cmdline(['a\\\\b c', 'd', 'e']), '"a\\\\b c" d e') self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), '"a\\\\b\\ c" d e') self.assertEqual(subprocess.list2cmdline(['ab', '']), 'ab ""') def test_poll(self): p = subprocess.Popen([sys.executable, "-c", "import os; os.read(0, 1)"], stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) self.assertIsNone(p.poll()) os.write(p.stdin.fileno(), b'A') p.wait() # Subsequent invocations should just return the returncode self.assertEqual(p.poll(), 0) def test_wait(self): p = subprocess.Popen(ZERO_RETURN_CMD) self.assertEqual(p.wait(), 0) # Subsequent invocations should just return the returncode self.assertEqual(p.wait(), 0) def test_wait_timeout(self): p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.3)"]) with self.assertRaises(subprocess.TimeoutExpired) as c: p.wait(timeout=0.0001) self.assertIn("0.0001", str(c.exception)) # For coverage of __str__. self.assertEqual(p.wait(timeout=support.SHORT_TIMEOUT), 0) def test_invalid_bufsize(self): # an invalid type of the bufsize argument should raise # TypeError. with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, "orange") def test_bufsize_is_none(self): # bufsize=None should be the same as bufsize=0. p = subprocess.Popen(ZERO_RETURN_CMD, None) self.assertEqual(p.wait(), 0) # Again with keyword arg p = subprocess.Popen(ZERO_RETURN_CMD, bufsize=None) self.assertEqual(p.wait(), 0) def _test_bufsize_equal_one(self, line, expected, universal_newlines): # subprocess may deadlock with bufsize=1, see issue #21332 with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.readline());" "sys.stdout.flush()"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, bufsize=1, universal_newlines=universal_newlines) as p: p.stdin.write(line) # expect that it flushes the line in text mode os.close(p.stdin.fileno()) # close it without flushing the buffer read_line = p.stdout.readline() with support.SuppressCrashReport(): try: p.stdin.close() except OSError: pass p.stdin = None self.assertEqual(p.returncode, 0) self.assertEqual(read_line, expected) def test_bufsize_equal_one_text_mode(self): # line is flushed in text mode with bufsize=1. # we should get the full line in return line = "line\n" self._test_bufsize_equal_one(line, line, universal_newlines=True) def test_bufsize_equal_one_binary_mode(self): # line is not flushed in binary mode with bufsize=1. 
# we should get empty response line = b'line' + os.linesep.encode() # assume ascii-based locale with self.assertWarnsRegex(RuntimeWarning, 'line buffering'): self._test_bufsize_equal_one(line, b'', universal_newlines=False) @support.requires_resource('cpu') def test_leaking_fds_on_error(self): # see bug #5179: Popen leaks file descriptors to PIPEs if # the child fails to execute; this will eventually exhaust # the maximum number of open fds. 1024 seems a very common # value for that limit, but Windows has 2048, so we loop # 1024 times (each call leaked two fds). for i in range(1024): with self.assertRaises(NONEXISTING_ERRORS): subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def test_nonexisting_with_pipes(self): # bpo-30121: Popen with pipes must close properly pipes on error. # Previously, os.close() was called with a Windows handle which is not # a valid file descriptor. # # Run the test in a subprocess to control how the CRT reports errors # and to get stderr content. try: import msvcrt msvcrt.CrtSetReportMode except (AttributeError, ImportError): self.skipTest("need msvcrt.CrtSetReportMode") code = textwrap.dedent(f""" import msvcrt import subprocess cmd = {NONEXISTING_CMD!r} for report_type in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]: msvcrt.CrtSetReportMode(report_type, msvcrt.CRTDBG_MODE_FILE) msvcrt.CrtSetReportFile(report_type, msvcrt.CRTDBG_FILE_STDERR) try: subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError: pass """) cmd = [sys.executable, "-c", code] proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True) with proc: stderr = proc.communicate()[1] self.assertEqual(stderr, "") self.assertEqual(proc.returncode, 0) def test_double_close_on_error(self): # Issue #18851 fds = [] def open_fds(): for i in range(20): fds.extend(os.pipe()) time.sleep(0.001) t = threading.Thread(target=open_fds) t.start() try: with self.assertRaises(OSError): subprocess.Popen(NONEXISTING_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: t.join() exc = None for fd in fds: # If a double close occurred, some of those fds will # already have been closed by mistake, and os.close() # here will raise. try: os.close(fd) except OSError as e: exc = e if exc is not None: raise exc def test_threadsafe_wait(self): """Issue21291: Popen.wait() needs to be threadsafe for returncode.""" proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(12)']) self.assertEqual(proc.returncode, None) results = [] def kill_proc_timer_thread(): results.append(('thread-start-poll-result', proc.poll())) # terminate it from the thread and wait for the result. proc.kill() proc.wait() results.append(('thread-after-kill-and-wait', proc.returncode)) # this wait should be a no-op given the above. proc.wait() results.append(('thread-after-second-wait', proc.returncode)) # This is a timing sensitive test, the failure mode is # triggered when both the main thread and this thread are in # the wait() call at once. The delay here is to allow the # main thread to most likely be blocked in its wait() call. t = threading.Timer(0.2, kill_proc_timer_thread) t.start() if mswindows: expected_errorcode = 1 else: # Should be -9 because of the proc.kill() from the thread. expected_errorcode = -9 # Wait for the process to finish; the thread should kill it # long before it finishes on its own. Supplying a timeout # triggers a different code path for better coverage. 
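        # The timer thread kills the child after ~0.2s, so this timed wait is
        # expected to return quickly; SHORT_TIMEOUT is only a safety margin.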
proc.wait(timeout=support.SHORT_TIMEOUT) self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in wait from main thread") # This should be a no-op with no change in returncode. proc.wait() self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in second main wait.") t.join() # Ensure that all of the thread results are as expected. # When a race condition occurs in wait(), the returncode could # be set by the wrong thread that doesn't actually have it # leading to an incorrect value. self.assertEqual([('thread-start-poll-result', None), ('thread-after-kill-and-wait', expected_errorcode), ('thread-after-second-wait', expected_errorcode)], results) def test_issue8780(self): # Ensure that stdout is inherited from the parent # if stdout=PIPE is not used code = ';'.join(( 'import subprocess, sys', 'retcode = subprocess.call(' "[sys.executable, '-c', 'print(\"Hello World!\")'])", 'assert retcode == 0')) output = subprocess.check_output([sys.executable, '-c', code]) self.assertTrue(output.startswith(b'Hello World!'), ascii(output)) def test_handles_closed_on_exception(self): # If CreateProcess exits with an error, ensure the # duplicate output handles are released ifhandle, ifname = tempfile.mkstemp() ofhandle, ofname = tempfile.mkstemp() efhandle, efname = tempfile.mkstemp() try: subprocess.Popen (["*"], stdin=ifhandle, stdout=ofhandle, stderr=efhandle) except OSError: os.close(ifhandle) os.remove(ifname) os.close(ofhandle) os.remove(ofname) os.close(efhandle) os.remove(efname) self.assertFalse(os.path.exists(ifname)) self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) def test_communicate_epipe(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.communicate(b"x" * 2**20) def test_repr(self): cases = [ ("ls", True, 123, ""), ('a' * 100, True, 0, ""), (["ls"], False, None, ""), (["ls", '--my-opts', 'a' * 100], False, None, ""), (os_helper.FakePath("my-tool.py"), False, 7, ">") ] with unittest.mock.patch.object(subprocess.Popen, '_execute_child'): for cmd, shell, code, sx in cases: p = subprocess.Popen(cmd, shell=shell) p.returncode = code self.assertEqual(repr(p), sx) def test_communicate_epipe_only_stdin(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) p.wait() p.communicate(b"x" * 2**20) @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), "Requires signal.SIGUSR1") @unittest.skipUnless(hasattr(os, 'kill'), "Requires os.kill") @unittest.skipUnless(hasattr(os, 'getppid'), "Requires os.getppid") def test_communicate_eintr(self): # Issue #12493: communicate() should handle EINTR def handler(signum, frame): pass old_handler = signal.signal(signal.SIGUSR1, handler) self.addCleanup(signal.signal, signal.SIGUSR1, old_handler) args = [sys.executable, "-c", 'import os, signal;' 'os.kill(os.getppid(), signal.SIGUSR1)'] for stream in ('stdout', 'stderr'): kw = {stream: subprocess.PIPE} with subprocess.Popen(args, **kw) as process: # communicate() will be interrupted by SIGUSR1 process.communicate() # This test is Linux-ish specific for simplicity to at least have # some coverage. It is not a platform specific bug. 
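    # The check below simply lists /proc/<pid>/fd for the parent before and
    # after the failed spawn and asserts the two listings are identical.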
@unittest.skipUnless(os.path.isdir('/proc/%d/fd' % os.getpid()), "Linux specific") def test_failed_child_execute_fd_leak(self): """Test for the fork() failure fd leak reported in issue16327.""" fd_directory = '/proc/%d/fd' % os.getpid() fds_before_popen = os.listdir(fd_directory) with self.assertRaises(PopenTestException): PopenExecuteChildRaises( ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # NOTE: This test doesn't verify that the real _execute_child # does not close the file descriptors itself on the way out # during an exception. Code inspection has confirmed that. fds_after_exception = os.listdir(fd_directory) self.assertEqual(fds_before_popen, fds_after_exception) @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_includes_filename(self): with self.assertRaises(FileNotFoundError) as c: subprocess.call(['/opt/nonexistent_binary', 'with', 'some', 'args']) self.assertEqual(c.exception.filename, '/opt/nonexistent_binary') @unittest.skipIf(mswindows, "behavior currently not supported on Windows") def test_file_not_found_with_bad_cwd(self): with self.assertRaises(FileNotFoundError) as c: subprocess.Popen(['exit', '0'], cwd='/some/nonexistent/directory') self.assertEqual(c.exception.filename, '/some/nonexistent/directory') def test_class_getitems(self): self.assertIsInstance(subprocess.Popen[bytes], types.GenericAlias) self.assertIsInstance(subprocess.CompletedProcess[str], types.GenericAlias) @unittest.skipUnless(hasattr(subprocess, '_winapi'), 'need subprocess._winapi') def test_wait_negative_timeout(self): proc = subprocess.Popen(ZERO_RETURN_CMD) with proc: patch = mock.patch.object( subprocess._winapi, 'WaitForSingleObject', return_value=subprocess._winapi.WAIT_OBJECT_0) with patch as mock_wait: proc.wait(-1) # negative timeout mock_wait.assert_called_once_with(proc._handle, 0) proc.returncode = None self.assertEqual(proc.wait(), 0) class RunFuncTestCase(BaseTestCase): def run_python(self, code, **kwargs): """Run Python code in a subprocess using subprocess.run""" argv = [sys.executable, "-c", code] return subprocess.run(argv, **kwargs) def test_returncode(self): # call() function with sequence argument cp = self.run_python("import sys; sys.exit(47)") self.assertEqual(cp.returncode, 47) with self.assertRaises(subprocess.CalledProcessError): cp.check_returncode() def test_check(self): with self.assertRaises(subprocess.CalledProcessError) as c: self.run_python("import sys; sys.exit(47)", check=True) self.assertEqual(c.exception.returncode, 47) def test_check_zero(self): # check_returncode shouldn't raise when returncode is zero cp = subprocess.run(ZERO_RETURN_CMD, check=True) self.assertEqual(cp.returncode, 0) def test_timeout(self): # run() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.run waits for the # child. 
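        # run() guards against this by catching TimeoutExpired from
        # communicate(), killing the child, and re-raising, so a hung child
        # cannot be leaked.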
with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: output = self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) @support.requires_resource('walltime') def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: cp = self.run_python(( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. 
timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) def test_run_with_pathlike_path(self): # bpo-31961: test run(pathlike_object) # the name of a command that can be run without # any arguments that exit fast prog = 'tree.com' if mswindows else 'ls' path = shutil.which(prog) if path is None: self.skipTest(f'{prog} required for this test') path = FakePath(path) res = subprocess.run(path, stdout=subprocess.DEVNULL) self.assertEqual(res.returncode, 0) with self.assertRaises(TypeError): subprocess.run(path, stdout=subprocess.DEVNULL, shell=True) def test_run_with_bytes_path_and_arguments(self): # bpo-31961: test run([bytes_object, b'additional arguments']) path = os.fsencode(sys.executable) args = [path, '-c', b'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_run_with_pathlike_path_and_arguments(self): # bpo-31961: test run([pathlike_object, 'additional arguments']) path = FakePath(sys.executable) args = [path, '-c', 'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) @unittest.skipUnless(mswindows, "Maybe test trigger a leak on Ubuntu") def test_run_with_an_empty_env(self): # gh-105436: fix subprocess.run(..., env={}) broken on Windows args = [sys.executable, "-c", 'pass'] # Ignore subprocess errors - we only care that the API doesn't # raise an OSError subprocess.run(args, env={}) def test_capture_output(self): cp = self.run_python(("import sys;" "sys.stdout.write('BDFL'); " "sys.stderr.write('FLUFL')"), capture_output=True) self.assertIn(b'BDFL', cp.stdout) self.assertIn(b'FLUFL', cp.stderr) def test_stdout_stdout(self): # run() refuses to accept stdout=STDOUT with self.assertRaises(ValueError, msg=("STDOUT can only be used for stderr")): self.run_python("print('will not be run')", stdout=subprocess.STDOUT) def test_stdout_with_capture_output_arg(self): # run() refuses to accept 'stdout' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stdout and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stdout=tf) self.assertIn('stdout', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) def test_stderr_with_capture_output_arg(self): # run() refuses to accept 'stderr' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stderr and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stderr=tf) self.assertIn('stderr', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. 
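    # The concern in the next test is that the 'sleep 3' grandchild spawned by
    # the shell inherits the capture pipes and keeps them open after the shell
    # itself is killed on timeout, so a naive unbounded read could block.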
@unittest.skipIf(mswindows, "requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): """Output capturing after a timeout mustn't hang forever on open filehandles.""" before_secs = time.monotonic() try: subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. except subprocess.TimeoutExpired as exc: after_secs = time.monotonic() stacks = traceback.format_exc() # assertRaises doesn't give this. else: self.fail("TimeoutExpired not raised.") self.assertLess(after_secs - before_secs, 1.5, msg="TimeoutExpired was delayed! Bad traceback:\n```\n" f"{stacks}```") def test_encoding_warning(self): code = textwrap.dedent("""\ from subprocess import * run("echo hello", shell=True, text=True) check_output("echo hello", shell=True, text=True) """) cp = subprocess.run([sys.executable, "-Xwarn_default_encoding", "-c", code], capture_output=True) lines = cp.stderr.splitlines() self.assertEqual(len(lines), 4, lines) self.assertTrue(lines[0].startswith(b":2: EncodingWarning: ")) self.assertTrue(lines[2].startswith(b":3: EncodingWarning: ")) def _get_test_grp_name(): for name_group in ('staff', 'nogroup', 'grp', 'nobody', 'nfsnobody'): if grp: try: grp.getgrnam(name_group) except KeyError: continue return name_group else: raise unittest.SkipTest('No identified group name to use for this test on this platform.') @unittest.skipIf(mswindows, "POSIX specific tests") class POSIXProcessTestCase(BaseTestCase): def setUp(self): super().setUp() self._nonexistent_dir = "/_this/pa.th/does/not/exist" def _get_chdir_exception(self): try: os.chdir(self._nonexistent_dir) except OSError as e: # This avoids hard coding the errno value or the OS perror() # string and instead capture the exception that we want to see # below for comparison. desired_exception = e else: self.fail("chdir to nonexistent directory %s succeeded." % self._nonexistent_dir) return desired_exception def test_exception_cwd(self): """Test error in the child raised in the parent for a bad cwd.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], cwd=self._nonexistent_dir) except OSError as e: # Test that the child process chdir failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_executable(self): """Test error in the child raised in the parent for a bad executable.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], executable=self._nonexistent_dir) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_args_0(self): """Test error in the child raised in the parent for a bad args[0].""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([self._nonexistent_dir, "-c", ""]) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. 
self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) # We mock the __del__ method for Popen in the next two tests # because it does cleanup based on the pid returned by fork_exec # along with issuing a resource warning if it still exists. Since # we don't actually spawn a process in these tests we can forego # the destructor. An alternative would be to set _child_created to # False before the destructor is called but there is no easy way # to do that class PopenNoDestructor(subprocess.Popen): def __del__(self): pass @mock.patch("subprocess._fork_exec") def test_exception_errpipe_normal(self, fork_exec): """Test error passing done through errpipe_write in the good case""" def proper_error(*args): errpipe_write = args[13] # Write the hex for the error code EISDIR: 'is a directory' err_code = '{:x}'.format(errno.EISDIR).encode() os.write(errpipe_write, b"OSError:" + err_code + b":") return 0 fork_exec.side_effect = proper_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(IsADirectoryError): self.PopenNoDestructor(["non_existent_command"]) @mock.patch("subprocess._fork_exec") def test_exception_errpipe_bad_data(self, fork_exec): """Test error passing done through errpipe_write where its not in the expected format""" error_data = b"\xFF\x00\xDE\xAD" def bad_error(*args): errpipe_write = args[13] # Anything can be in the pipe, no assumptions should # be made about its encoding, so we'll write some # arbitrary hex bytes to test it out os.write(errpipe_write, error_data) return 0 fork_exec.side_effect = bad_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(subprocess.SubprocessError) as e: self.PopenNoDestructor(["non_existent_command"]) self.assertIn(repr(error_data), str(e.exception)) @unittest.skipIf(not os.path.exists('/proc/self/status'), "need /proc/self/status") def test_restore_signals(self): # Blindly assume that cat exists on systems with /proc/self/status... default_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=False) for line in default_proc_status.splitlines(): if line.startswith(b'SigIgn'): default_sig_ign_mask = line break else: self.skipTest("SigIgn not found in /proc/self/status.") restored_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=True) for line in restored_proc_status.splitlines(): if line.startswith(b'SigIgn'): restored_sig_ign_mask = line break self.assertNotEqual(default_sig_ign_mask, restored_sig_ign_mask, msg="restore_signals=True should've unblocked " "SIGPIPE and friends.") def test_start_new_session(self): # For code coverage of calling setsid(). We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getsid(0))"], start_new_session=True) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_sid = os.getsid(0) child_sid = int(output) self.assertNotEqual(parent_sid, child_sid) @unittest.skipUnless(hasattr(os, 'setpgid') and hasattr(os, 'getpgid'), 'no setpgid or getpgid on platform') def test_process_group_0(self): # For code coverage of calling setpgid(). 
We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getpgid(0))"], process_group=0) except PermissionError as e: if e.errno != errno.EPERM: raise # EACCES? else: parent_pgid = os.getpgid(0) child_pgid = int(output) self.assertNotEqual(parent_pgid, child_pgid) @unittest.skipUnless(hasattr(os, 'setreuid'), 'no setreuid on platform') def test_user(self): # For code coverage of the user parameter. We don't care if we get a # permission error from it depending on the test execution environment, # that still indicates that it was called. uid = os.geteuid() test_users = [65534 if uid != 65534 else 65533, uid] name_uid = "nobody" if sys.platform != 'darwin' else "unknown" if pwd is not None: try: pwd.getpwnam(name_uid) test_users.append(name_uid) except KeyError: # unknown user name name_uid = None for user in test_users: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(user=user, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getuid())"], user=user, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) if e.errno == errno.EACCES: self.assertEqual(e.filename, sys.executable) else: self.assertIsNone(e.filename) else: if isinstance(user, str): user_uid = pwd.getpwnam(user).pw_uid else: user_uid = user child_user = int(output) self.assertEqual(child_user, user_uid) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, user=2**64) if pwd is None and name_uid is not None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=name_uid) @unittest.skipIf(hasattr(os, 'setreuid'), 'setreuid() available on platform') def test_user_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=65535) @unittest.skipUnless(hasattr(os, 'setregid'), 'no setregid() on platform') def test_group(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) for group in group_list + [gid]: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(group=group, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getgid())"], group=group, close_fds=close_fds) except PermissionError as e: # (EACCES, EPERM) self.assertIsNone(e.filename) else: if isinstance(group, str): group_gid = grp.getgrnam(group).gr_gid else: group_gid = group child_group = int(output) self.assertEqual(child_group, group_gid) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, group=2**64) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=name_group) @unittest.skipIf(hasattr(os, 'setregid'), 'setregid() available on platform') def test_group_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=65535) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups(self): gid = os.getegid() 
group_list = [65534 if gid != 65534 else 65533] self._test_extra_groups_impl(gid=gid, group_list=group_list) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups_empty_list(self): self._test_extra_groups_impl(gid=os.getegid(), group_list=[]) def _test_extra_groups_impl(self, *, gid, group_list): name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) try: output = subprocess.check_output( [sys.executable, "-c", "import os, sys, json; json.dump(os.getgroups(), sys.stdout)"], extra_groups=group_list) except PermissionError as e: self.assertIsNone(e.filename) self.skipTest("setgroup() EPERM; this test may require root.") else: parent_groups = os.getgroups() child_groups = json.loads(output) if grp is not None: desired_gids = [grp.getgrnam(g).gr_gid if isinstance(g, str) else g for g in group_list] else: desired_gids = group_list self.assertEqual(set(desired_gids), set(child_groups)) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[name_group]) # No skip necessary, this test won't make it to a setgroup() call. def test_extra_groups_invalid_gid_t_values(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[-1]) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, extra_groups=[2**64]) @unittest.skipIf(mswindows or not hasattr(os, 'umask'), 'POSIX umask() is not available.') def test_umask(self): tmpdir = None try: tmpdir = tempfile.mkdtemp() name = os.path.join(tmpdir, "beans") # We set an unusual umask in the child so as a unique mode # for us to test the child's touched file for. subprocess.check_call( [sys.executable, "-c", f"open({name!r}, 'w').close()"], umask=0o053) # Ignore execute permissions entirely in our test, # filesystems could be mounted to ignore or force that. st_mode = os.stat(name).st_mode & 0o666 expected_mode = 0o624 self.assertEqual(expected_mode, st_mode, msg=f'{oct(expected_mode)} != {oct(st_mode)}') finally: if tmpdir is not None: shutil.rmtree(tmpdir) def test_run_abort(self): # returncode handles signal termination with support.SuppressCrashReport(): p = subprocess.Popen([sys.executable, "-c", 'import os; os.abort()']) p.wait() self.assertEqual(-p.returncode, signal.SIGABRT) def test_CalledProcessError_str_signal(self): err = subprocess.CalledProcessError(-int(signal.SIGABRT), "fake cmd") error_string = str(err) # We're relying on the repr() of the signal.Signals intenum to provide # the word signal, the signal name and the numeric value. self.assertIn("signal", error_string.lower()) # We're not being specific about the signal name as some signals have # multiple names and which name is revealed can vary. self.assertIn("SIG", error_string) self.assertIn(str(signal.SIGABRT), error_string) def test_CalledProcessError_str_unknown_signal(self): err = subprocess.CalledProcessError(-9876543, "fake cmd") error_string = str(err) self.assertIn("unknown signal 9876543.", error_string) def test_CalledProcessError_str_non_zero(self): err = subprocess.CalledProcessError(2, "fake cmd") error_string = str(err) self.assertIn("non-zero exit status 2.", error_string) def test_preexec(self): # DISCLAIMER: Setting environment variables is *not* a good use # of a preexec_fn. This is merely a test. 
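        # (preexec_fn runs in the forked child just before exec(); the stdlib
        # documentation warns it can deadlock in threaded programs, hence the
        # disclaimer above.)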
p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, preexec_fn=lambda: os.putenv("FRUIT", "apple")) with p: self.assertEqual(p.stdout.read(), b"apple") def test_preexec_exception(self): def raise_it(): raise ValueError("What if two swallows carried a coconut?") try: p = subprocess.Popen([sys.executable, "-c", ""], preexec_fn=raise_it) except subprocess.SubprocessError as e: self.assertTrue( subprocess._fork_exec, "Expected a ValueError from the preexec_fn") except ValueError as e: self.assertIn("coconut", e.args[0]) else: self.fail("Exception raised by preexec_fn did not make it " "to the parent process.") class _TestExecuteChildPopen(subprocess.Popen): """Used to test behavior at the end of _execute_child.""" def __init__(self, testcase, *args, **kwargs): self._testcase = testcase subprocess.Popen.__init__(self, *args, **kwargs) def _execute_child(self, *args, **kwargs): try: subprocess.Popen._execute_child(self, *args, **kwargs) finally: # Open a bunch of file descriptors and verify that # none of them are the same as the ones the Popen # instance is using for stdin/stdout/stderr. devzero_fds = [os.open("/dev/zero", os.O_RDONLY) for _ in range(8)] try: for fd in devzero_fds: self._testcase.assertNotIn( fd, (self.stdin.fileno(), self.stdout.fileno(), self.stderr.fileno()), msg="At least one fd was closed early.") finally: for fd in devzero_fds: os.close(fd) @unittest.skipIf(not os.path.exists("/dev/zero"), "/dev/zero required.") def test_preexec_errpipe_does_not_double_close_pipes(self): """Issue16140: Don't double close pipes on preexec error.""" def raise_it(): raise subprocess.SubprocessError( "force the _execute_child() errpipe_data path.") with self.assertRaises(subprocess.SubprocessError): self._TestExecuteChildPopen( self, ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=raise_it) def test_preexec_gc_module_failure(self): # This tests the code that disables garbage collection if the child # process will execute any Python. 
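        # Popen disables gc around fork() when a preexec_fn is given (see
        # issue #1336: a collection in the child could deallocate a file and
        # write to stderr, hanging the child); afterwards gc must be restored
        # to whatever state the parent had.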
enabled = gc.isenabled() try: gc.disable() self.assertFalse(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertFalse(gc.isenabled(), "Popen enabled gc when it shouldn't.") gc.enable() self.assertTrue(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertTrue(gc.isenabled(), "Popen left gc disabled.") finally: if not enabled: gc.disable() @unittest.skipIf( sys.platform == 'darwin', 'setrlimit() seems to fail on OS X') def test_preexec_fork_failure(self): # The internal code did not preserve the previous exception when # re-enabling garbage collection try: from resource import getrlimit, setrlimit, RLIMIT_NPROC except ImportError as err: self.skipTest(err) # RLIMIT_NPROC is specific to Linux and BSD limits = getrlimit(RLIMIT_NPROC) [_, hard] = limits setrlimit(RLIMIT_NPROC, (0, hard)) self.addCleanup(setrlimit, RLIMIT_NPROC, limits) try: subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) except BlockingIOError: # Forking should raise EAGAIN, translated to BlockingIOError pass else: self.skipTest('RLIMIT_NPROC had no effect; probably superuser') def test_args_string(self): # args is a string fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) p = subprocess.Popen(fname) p.wait() os.remove(fname) self.assertEqual(p.returncode, 47) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], startupinfo=47) self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], creationflags=47) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen(["echo $FRUIT"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen("echo $FRUIT", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_call_string(self): # call() function with string argument on UNIX fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) rc = subprocess.call(fname) os.remove(fname) self.assertEqual(rc, 47) def test_specific_shell(self): # Issue #9265: Incorrect name passed as arg[0]. shells = [] for prefix in ['/bin', '/usr/bin/', '/usr/local/bin']: for name in ['bash', 'ksh']: sh = os.path.join(prefix, name) if os.path.isfile(sh): shells.append(sh) if not shells: # Will probably work for any shell but csh. self.skipTest("bash or ksh required for this test") sh = '/bin/sh' if os.path.isfile(sh) and not os.path.islink(sh): # Test will fail if /bin/sh is a symlink to csh. shells.append(sh) for sh in shells: p = subprocess.Popen("echo $0", executable=sh, shell=True, stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) def _kill_process(self, method, *args): # Do not inherit file handles from the parent. 
# It should fix failures on some platforms. # Also set the SIGINT handler to the default to make sure it's not # being ignored (some tests rely on that.) old_handler = signal.signal(signal.SIGINT, signal.default_int_handler) try: p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: signal.signal(signal.SIGINT, old_handler) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) return p @unittest.skipIf(sys.platform.startswith(('netbsd', 'openbsd')), "Due to known OS bug (issue #16762)") def _kill_dead_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) p.communicate() def test_send_signal(self): p = self._kill_process('send_signal', signal.SIGINT) _, stderr = p.communicate() self.assertIn(b'KeyboardInterrupt', stderr) self.assertNotEqual(p.wait(), 0) def test_kill(self): p = self._kill_process('kill') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGKILL) def test_terminate(self): p = self._kill_process('terminate') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGTERM) def test_send_signal_dead(self): # Sending a signal to a dead process self._kill_dead_process('send_signal', signal.SIGINT) def test_kill_dead(self): # Killing a dead process self._kill_dead_process('kill') def test_terminate_dead(self): # Terminating a dead process self._kill_dead_process('terminate') def _save_fds(self, save_fds): fds = [] for fd in save_fds: inheritable = os.get_inheritable(fd) saved = os.dup(fd) fds.append((fd, saved, inheritable)) return fds def _restore_fds(self, fds): for fd, saved, inheritable in fds: os.dup2(saved, fd, inheritable=inheritable) os.close(saved) def check_close_std_fds(self, fds): # Issue #9905: test that subprocess pipes still work properly with # some standard fds closed stdin = 0 saved_fds = self._save_fds(fds) for fd, saved, inheritable in saved_fds: if fd == 0: stdin = saved break try: for fd in fds: os.close(fd) out, err = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() self.assertEqual(out, b'apple') self.assertEqual(err, b'orange') finally: self._restore_fds(saved_fds) def test_close_fd_0(self): self.check_close_std_fds([0]) def test_close_fd_1(self): self.check_close_std_fds([1]) def test_close_fd_2(self): self.check_close_std_fds([2]) def test_close_fds_0_1(self): self.check_close_std_fds([0, 1]) def test_close_fds_0_2(self): self.check_close_std_fds([0, 2]) def test_close_fds_1_2(self): self.check_close_std_fds([1, 2]) def test_close_fds_0_1_2(self): # Issue #10806: test that subprocess pipes still work properly with # all standard fds closed. 
self.check_close_std_fds([0, 1, 2]) def test_small_errpipe_write_fd(self): """Issue #15798: Popen should work when stdio fds are available.""" new_stdin = os.dup(0) new_stdout = os.dup(1) try: os.close(0) os.close(1) # Side test: if errpipe_write fails to have its CLOEXEC # flag set this should cause the parent to think the exec # failed. Extremely unlikely: everyone supports CLOEXEC. subprocess.Popen([ sys.executable, "-c", "print('AssertionError:0:CLOEXEC failure.')"]).wait() finally: # Restore original stdin and stdout os.dup2(new_stdin, 0) os.dup2(new_stdout, 1) os.close(new_stdin) os.close(new_stdout) def test_remapping_std_fds(self): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] try: temp_fds = [fd for fd, fname in temps] # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # write some data to what will become stdin, and rewind os.write(temp_fds[1], b"STDIN") os.lseek(temp_fds[1], 0, 0) # move the standard file descriptors out of the way saved_fds = self._save_fds(range(3)) try: # duplicate the file objects over the standard fd's for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # now use those files in the "wrong" order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=temp_fds[1], stdout=temp_fds[2], stderr=temp_fds[0]) p.wait() finally: self._restore_fds(saved_fds) for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(temp_fds[2], 1024) err = os.read(temp_fds[0], 1024).strip() self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) def check_swap_fds(self, stdin_no, stdout_no, stderr_no): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] temp_fds = [fd for fd, fname in temps] try: # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # save a copy of the standard file descriptors saved_fds = self._save_fds(range(3)) try: # duplicate the temp files over the standard fd's 0, 1, 2 for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # write some data to what will become stdin, and rewind os.write(stdin_no, b"STDIN") os.lseek(stdin_no, 0, 0) # now use those files in the given order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=stdin_no, stdout=stdout_no, stderr=stderr_no) p.wait() for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(stdout_no, 1024) err = os.read(stderr_no, 1024).strip() finally: self._restore_fds(saved_fds) self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) # When duping fds, if there arises a situation where one of the fds is # either 0, 1 or 2, it is possible that it is overwritten (#12607). # This tests all combinations of this. 
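    # For example, mapping stdin=1 and stdout=0 requires the child setup code
    # to save fd 0 (or dup it out of the way) before dup2()'ing fd 1 onto it.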
def test_swap_fds(self): self.check_swap_fds(0, 1, 2) self.check_swap_fds(0, 2, 1) self.check_swap_fds(1, 0, 2) self.check_swap_fds(1, 2, 0) self.check_swap_fds(2, 0, 1) self.check_swap_fds(2, 1, 0) def _check_swap_std_fds_with_one_closed(self, from_fds, to_fds): saved_fds = self._save_fds(range(3)) try: for from_fd in from_fds: with tempfile.TemporaryFile() as f: os.dup2(f.fileno(), from_fd) fd_to_close = (set(range(3)) - set(from_fds)).pop() os.close(fd_to_close) arg_names = ['stdin', 'stdout', 'stderr'] kwargs = {} for from_fd, to_fd in zip(from_fds, to_fds): kwargs[arg_names[to_fd]] = from_fd code = textwrap.dedent(r''' import os, sys skipped_fd = int(sys.argv[1]) for fd in range(3): if fd != skipped_fd: os.write(fd, str(fd).encode('ascii')) ''') skipped_fd = (set(range(3)) - set(to_fds)).pop() rc = subprocess.call([sys.executable, '-c', code, str(skipped_fd)], **kwargs) self.assertEqual(rc, 0) for from_fd, to_fd in zip(from_fds, to_fds): os.lseek(from_fd, 0, os.SEEK_SET) read_bytes = os.read(from_fd, 1024) read_fds = list(map(int, read_bytes.decode('ascii'))) msg = textwrap.dedent(f""" When testing {from_fds} to {to_fds} redirection, parent descriptor {from_fd} got redirected to descriptor(s) {read_fds} instead of descriptor {to_fd}. """) self.assertEqual([to_fd], read_fds, msg) finally: self._restore_fds(saved_fds) # Check that subprocess can remap std fds correctly even # if one of them is closed (#32844). def test_swap_std_fds_with_one_closed(self): for from_fds in itertools.combinations(range(3), 2): for to_fds in itertools.permutations(range(3), 2): self._check_swap_std_fds_with_one_closed(from_fds, to_fds) def test_surrogates_error_message(self): def prepare(): raise ValueError("surrogate:\uDCff") try: subprocess.call( ZERO_RETURN_CMD, preexec_fn=prepare) except ValueError as err: # Pure Python implementations keeps the message self.assertIsNone(subprocess._fork_exec) self.assertEqual(str(err), "surrogate:\uDCff") except subprocess.SubprocessError as err: # _posixsubprocess uses a default message self.assertIsNotNone(subprocess._fork_exec) self.assertEqual(str(err), "Exception occurred in preexec_fn.") else: self.fail("Expected ValueError or subprocess.SubprocessError") def test_undecodable_env(self): for key, value in (('test', 'abc\uDCFF'), ('test\uDCFF', '42')): encoded_value = value.encode("ascii", "surrogateescape") # test str with surrogates script = "import os; print(ascii(os.getenv(%s)))" % repr(key) env = os.environ.copy() env[key] = value # Use C locale to get ASCII for the locale encoding to force # surrogate-escaping of \xFF in the child process env['LC_ALL'] = 'C' decoded_value = value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(decoded_value)) # test bytes key = key.encode("ascii", "surrogateescape") script = "import os; print(ascii(os.getenvb(%s)))" % repr(key) env = os.environ.copy() env[key] = encoded_value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(encoded_value)) def test_bytes_program(self): abs_program = os.fsencode(ZERO_RETURN_CMD[0]) args = list(ZERO_RETURN_CMD[1:]) path, program = os.path.split(ZERO_RETURN_CMD[0]) program = os.fsencode(program) # absolute bytes path exitcode = subprocess.call([abs_program]+args) self.assertEqual(exitcode, 0) # absolute bytes path as a string cmd = b"'%s' %s" % (abs_program, " ".join(args).encode("utf-8")) 
exitcode = subprocess.call(cmd, shell=True) self.assertEqual(exitcode, 0) # bytes program, unicode PATH env = os.environ.copy() env["PATH"] = path exitcode = subprocess.call([program]+args, env=env) self.assertEqual(exitcode, 0) # bytes program, bytes PATH envb = os.environb.copy() envb[b"PATH"] = os.fsencode(path) exitcode = subprocess.call([program]+args, env=envb) self.assertEqual(exitcode, 0) def test_pipe_cloexec(self): sleeper = support.findfile("input_reader.py", subdir="subprocessdata") fd_status = support.findfile("fd_status.py", subdir="subprocessdata") p1 = subprocess.Popen([sys.executable, sleeper], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False) self.addCleanup(p1.communicate, b'') p2 = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, error = p2.communicate() result_fds = set(map(int, output.split(b','))) unwanted_fds = set([p1.stdin.fileno(), p1.stdout.fileno(), p1.stderr.fileno()]) self.assertFalse(result_fds & unwanted_fds, "Expected no fds from %r to be open in child, " "found %r" % (unwanted_fds, result_fds & unwanted_fds)) def test_pipe_cloexec_real_tools(self): qcat = support.findfile("qcat.py", subdir="subprocessdata") qgrep = support.findfile("qgrep.py", subdir="subprocessdata") subdata = b'zxcvbn' data = subdata * 4 + b'\n' p1 = subprocess.Popen([sys.executable, qcat], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=False) p2 = subprocess.Popen([sys.executable, qgrep, subdata], stdin=p1.stdout, stdout=subprocess.PIPE, close_fds=False) self.addCleanup(p1.wait) self.addCleanup(p2.wait) def kill_p1(): try: p1.terminate() except ProcessLookupError: pass def kill_p2(): try: p2.terminate() except ProcessLookupError: pass self.addCleanup(kill_p1) self.addCleanup(kill_p2) p1.stdin.write(data) p1.stdin.close() readfiles, ignored1, ignored2 = select.select([p2.stdout], [], [], 10) self.assertTrue(readfiles, "The child hung") self.assertEqual(p2.stdout.read(), data) p1.stdout.close() p2.stdout.close() def test_close_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) open_fds = set(fds) # add a bunch more fds for _ in range(9): fd = os.open(os.devnull, os.O_RDONLY) self.addCleanup(os.close, fd) open_fds.add(fd) for fd in open_fds: os.set_inheritable(fd, True) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertEqual(remaining_fds & open_fds, open_fds, "Some fds were closed") p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse(remaining_fds & open_fds, "Some fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") # Keep some of the fd's we opened open in the subprocess. # This tests _posixsubprocess.c's proper handling of fds_to_keep. 
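        # pass_fds asks the child setup code to leave the listed descriptors
        # open (and inheritable) while still closing everything else when
        # close_fds=True.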
fds_to_keep = set(open_fds.pop() for _ in range(8)) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=fds_to_keep) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse((remaining_fds - fds_to_keep) & open_fds, "Some fds not in pass_fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") @unittest.skipIf(sys.platform.startswith("freebsd") and os.stat("/dev").st_dev == os.stat("/dev/fd").st_dev, "Requires fdescfs mounted on /dev/fd on FreeBSD") def test_close_fds_when_max_fd_is_lowered(self): """Confirm that issue21618 is fixed (may fail under valgrind).""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # This launches the meat of the test in a child process to # avoid messing with the larger unittest processes maximum # number of file descriptors. # This process launches: # +--> Process that lowers its RLIMIT_NOFILE aftr setting up # a bunch of high open fds above the new lower rlimit. # Those are reported via stdout before launching a new # process with close_fds=False to run the actual test: # +--> The TEST: This one launches a fd_status.py # subprocess with close_fds=True so we can find out if # any of the fds above the lowered rlimit are still open. p = subprocess.Popen([sys.executable, '-c', textwrap.dedent( ''' import os, resource, subprocess, sys, textwrap open_fds = set() # Add a bunch more fds to pass down. for _ in range(40): fd = os.open(os.devnull, os.O_RDONLY) open_fds.add(fd) # Leave a two pairs of low ones available for use by the # internal child error pipe and the stdout pipe. # We also leave 10 more open as some Python buildbots run into # "too many open files" errors during the test if we do not. for fd in sorted(open_fds)[:14]: os.close(fd) open_fds.remove(fd) for fd in open_fds: #self.addCleanup(os.close, fd) os.set_inheritable(fd, True) max_fd_open = max(open_fds) # Communicate the open_fds to the parent unittest.TestCase process. print(','.join(map(str, sorted(open_fds)))) sys.stdout.flush() rlim_cur, rlim_max = resource.getrlimit(resource.RLIMIT_NOFILE) try: # 29 is lower than the highest fds we are leaving open. resource.setrlimit(resource.RLIMIT_NOFILE, (29, rlim_max)) # Launch a new Python interpreter with our low fd rlim_cur that # inherits open fds above that limit. It then uses subprocess # with close_fds=True to get a report of open fds in the child. # An explicit list of fds to check is passed to fd_status.py as # letting fd_status rely on its default logic would miss the # fds above rlim_cur as it normally only checks up to that limit. 
subprocess.Popen( [sys.executable, '-c', textwrap.dedent(""" import subprocess, sys subprocess.Popen([sys.executable, %r] + [str(x) for x in range({max_fd})], close_fds=True).wait() """.format(max_fd=max_fd_open+1))], close_fds=False).wait() finally: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_cur, rlim_max)) ''' % fd_status)], stdout=subprocess.PIPE) output, unused_stderr = p.communicate() output_lines = output.splitlines() self.assertEqual(len(output_lines), 2, msg="expected exactly two lines of output:\n%r" % output) opened_fds = set(map(int, output_lines[0].strip().split(b','))) remaining_fds = set(map(int, output_lines[1].strip().split(b','))) self.assertFalse(remaining_fds & opened_fds, msg="Some fds were left open.") # Mac OS X Tiger (10.4) has a kernel bug: sometimes, the file # descriptor of a pipe closed in the parent process is valid in the # child process according to fstat(), but the mode of the file # descriptor is invalid, and read or write raise an error. @support.requires_mac_ver(10, 5) def test_pass_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") open_fds = set() for x in range(5): fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) os.set_inheritable(fds[0], True) os.set_inheritable(fds[1], True) open_fds.update(fds) for fd in open_fds: p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=(fd, )) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) to_be_closed = open_fds - {fd} self.assertIn(fd, remaining_fds, "fd to be passed not passed") self.assertFalse(remaining_fds & to_be_closed, "fd to be closed passed") # pass_fds overrides close_fds with a warning. with self.assertWarns(RuntimeWarning) as context: self.assertFalse(subprocess.call( ZERO_RETURN_CMD, close_fds=False, pass_fds=(fd, ))) self.assertIn('overriding close_fds', str(context.warning)) def test_pass_fds_inheritable(self): script = support.findfile("fd_status.py", subdir="subprocessdata") inheritable, non_inheritable = os.pipe() self.addCleanup(os.close, inheritable) self.addCleanup(os.close, non_inheritable) os.set_inheritable(inheritable, True) os.set_inheritable(non_inheritable, False) pass_fds = (inheritable, non_inheritable) args = [sys.executable, script] args += list(map(str, pass_fds)) p = subprocess.Popen(args, stdout=subprocess.PIPE, close_fds=True, pass_fds=pass_fds) output, ignored = p.communicate() fds = set(map(int, output.split(b','))) # the inheritable file descriptor must be inherited, so its inheritable # flag must be set in the child process after fork() and before exec() self.assertEqual(fds, set(pass_fds), "output=%a" % output) # inheritable flag must not be changed in the parent process self.assertEqual(os.get_inheritable(inheritable), True) self.assertEqual(os.get_inheritable(non_inheritable), False) # bpo-32270: Ensure that descriptors specified in pass_fds # are inherited even if they are used in redirections. # Contributed by @izbyshev. 
def test_pass_fds_redirected(self): """Regression test for https://bugs.python.org/issue32270.""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") pass_fds = [] for _ in range(2): fd = os.open(os.devnull, os.O_RDWR) self.addCleanup(os.close, fd) pass_fds.append(fd) stdout_r, stdout_w = os.pipe() self.addCleanup(os.close, stdout_r) self.addCleanup(os.close, stdout_w) pass_fds.insert(1, stdout_w) with subprocess.Popen([sys.executable, fd_status], stdin=pass_fds[0], stdout=pass_fds[1], stderr=pass_fds[2], close_fds=True, pass_fds=pass_fds): output = os.read(stdout_r, 1024) fds = {int(num) for num in output.split(b',')} self.assertEqual(fds, {0, 1, 2} | frozenset(pass_fds), f"output={output!a}") def test_stdout_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stdin=inout) p.wait() def test_stdout_stderr_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stderr=inout) p.wait() def test_stderr_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stderr=inout, stdin=inout) p.wait() def test_wait_when_sigchild_ignored(self): # NOTE: sigchild_ignore.py may not be an effective test on all OSes. sigchild_ignore = support.findfile("sigchild_ignore.py", subdir="subprocessdata") p = subprocess.Popen([sys.executable, sigchild_ignore], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" " non-zero with this error:\n%s" % stderr.decode('utf-8')) def test_select_unbuffered(self): # Issue #11459: bufsize=0 should really set the pipes as # unbuffered (and therefore let select() work properly). select = import_helper.import_module("select") p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple")'], stdout=subprocess.PIPE, bufsize=0) f = p.stdout self.addCleanup(f.close) try: self.assertEqual(f.read(4), b"appl") self.assertIn(f, select.select([f], [], [], 0.0)[0]) finally: p.wait() def test_zombie_fast_process_del(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, it wouldn't be added to subprocess._active, and would # remain a zombie. # spawn a Popen, and delete its reference before it exits p = subprocess.Popen([sys.executable, "-c", 'import sys, time;' 'time.sleep(0.2)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) def test_leak_fast_process_del_killed(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, and the process got killed by a signal, it would never # be removed from subprocess._active, which triggered a FD and memory # leak. # spawn a Popen, delete its reference and kill it p = subprocess.Popen([sys.executable, "-c", 'import time;' 'time.sleep(3)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with warnings_helper.check_warnings(('', ResourceWarning)): p = None support.gc_collect() # For PyPy or other GCs. 
os.kill(pid, signal.SIGKILL) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) # let some time for the process to exit, and create a new Popen: this # should trigger the wait() of p time.sleep(0.2) with self.assertRaises(OSError): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass # p should have been wait()ed on, and removed from the _active list self.assertRaises(OSError, os.waitpid, pid, 0) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: self.assertNotIn(ident, [id(o) for o in subprocess._active]) def test_close_fds_after_preexec(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # this FD is used as dup2() target by preexec_fn, and should be closed # in the child process fd = os.dup(1) self.addCleanup(os.close, fd) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, preexec_fn=lambda: os.dup2(1, fd)) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertNotIn(fd, remaining_fds) @support.cpython_only def test_fork_exec(self): # Issue #22290: fork_exec() must not crash on memory allocation failure # or other errors import _posixsubprocess gc_enabled = gc.isenabled() try: # Use a preexec function and enable the garbage collector # to force fork_exec() to re-enable the garbage collector # on error. func = lambda: None gc.enable() for args, exe_list, cwd, env_list in ( (123, [b"exe"], None, [b"env"]), ([b"arg"], 123, None, [b"env"]), ([b"arg"], [b"exe"], 123, [b"env"]), ([b"arg"], [b"exe"], None, 123), ): with self.assertRaises(TypeError) as err: _posixsubprocess.fork_exec( args, exe_list, True, (), cwd, env_list, -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, False, [], 0, -1, func, False) # Attempt to prevent # "TypeError: fork_exec() takes exactly N arguments (M given)" # from passing the test. More refactoring to have us start # with a valid *args list, confirm a good call with that works # before mutating it in various ways to ensure that bad calls # with individual arg type errors raise a typeerror would be # ideal. Saving that for a future PR... self.assertNotIn('takes exactly', str(err.exception)) finally: if not gc_enabled: gc.disable() @support.cpython_only def test_fork_exec_sorted_fd_sanity_check(self): # Issue #23564: sanity check the fork_exec() fds_to_keep sanity check. import _posixsubprocess class BadInt: first = True def __init__(self, value): self.value = value def __int__(self): if self.first: self.first = False return self.value raise ValueError gc_enabled = gc.isenabled() try: gc.enable() for fds_to_keep in ( (-1, 2, 3, 4, 5), # Negative number. ('str', 4), # Not an int. (18, 23, 42, 2**63), # Out of range. (5, 4), # Not sorted. (6, 7, 7, 8), # Duplicate. 
(BadInt(1), BadInt(2)), ): with self.assertRaises( ValueError, msg='fds_to_keep={}'.format(fds_to_keep)) as c: _posixsubprocess.fork_exec( [b"false"], [b"false"], True, fds_to_keep, None, [b"env"], -1, -1, -1, -1, 1, 2, 3, 4, True, True, 0, None, None, None, -1, None, True) self.assertIn('fds_to_keep', str(c.exception)) finally: if not gc_enabled: gc.disable() def test_communicate_BrokenPipeError_stdin_close(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError proc.communicate() # Should swallow BrokenPipeError from close. mock_proc_stdin.close.assert_called_with() def test_communicate_BrokenPipeError_stdin_write(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.write.side_effect = BrokenPipeError proc.communicate(b'stuff') # Should swallow the BrokenPipeError. mock_proc_stdin.write.assert_called_once_with(b'stuff') mock_proc_stdin.close.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_flush(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin, \ open(os.devnull, 'wb') as dev_null: mock_proc_stdin.flush.side_effect = BrokenPipeError # because _communicate registers a selector using proc.stdin... mock_proc_stdin.fileno.return_value = dev_null.fileno() # _communicate() should swallow BrokenPipeError from flush. proc.communicate(b'stuff') mock_proc_stdin.flush.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_close_with_timeout(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError # _communicate() should swallow BrokenPipeError from close. proc.communicate(timeout=999) mock_proc_stdin.close.assert_called_once_with() @unittest.skipUnless(_testcapi is not None and hasattr(_testcapi, 'W_STOPCODE'), 'need _testcapi.W_STOPCODE') def test_stopped(self): """Test wait() behavior when waitpid returns WIFSTOPPED; issue29335.""" args = ZERO_RETURN_CMD proc = subprocess.Popen(args) # Wait until the real process completes to avoid zombie process support.wait_process(proc.pid, exitcode=0) status = _testcapi.W_STOPCODE(3) with mock.patch('subprocess.os.waitpid', return_value=(proc.pid, status)): returncode = proc.wait() self.assertEqual(returncode, -3) def test_send_signal_race(self): # bpo-38630: send_signal() must poll the process exit status to reduce # the risk of sending the signal to the wrong process. proc = subprocess.Popen(ZERO_RETURN_CMD) # wait until the process completes without using the Popen APIs. support.wait_process(proc.pid, exitcode=0) # returncode is still None but the process completed. 
self.assertIsNone(proc.returncode) with mock.patch("os.kill") as mock_kill: proc.send_signal(signal.SIGTERM) # send_signal() didn't call os.kill() since the process already # completed. mock_kill.assert_not_called() # Don't check the returncode value: the test reads the exit status, # so Popen failed to read it and uses a default returncode instead. self.assertIsNotNone(proc.returncode) def test_send_signal_race2(self): # bpo-40550: the process might exist between the returncode check and # the kill operation p = subprocess.Popen([sys.executable, '-c', 'exit(1)']) # wait for process to exit while not p.returncode: p.poll() with mock.patch.object(p, 'poll', new=lambda: None): p.returncode = None p.send_signal(signal.SIGTERM) p.kill() def test_communicate_repeated_call_after_stdout_close(self): proc = subprocess.Popen([sys.executable, '-c', 'import os, time; os.close(1), time.sleep(2)'], stdout=subprocess.PIPE) while True: try: proc.communicate(timeout=0.1) return except subprocess.TimeoutExpired: pass def test_preexec_at_exit(self): code = f"""if 1: import atexit import subprocess def dummy(): pass class AtFinalization: def __del__(self): print("OK") subprocess.Popen({ZERO_RETURN_CMD}, preexec_fn=dummy) print("shouldn't be printed") at_finalization = AtFinalization() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out.strip(), b"OK") self.assertIn(b"preexec_fn not supported at interpreter shutdown", err) @unittest.skipIf(not sysconfig.get_config_var("HAVE_VFORK"), "vfork() not enabled by configure.") @mock.patch("subprocess._fork_exec") @mock.patch("subprocess._USE_POSIX_SPAWN", new=False) def test__use_vfork(self, mock_fork_exec): self.assertTrue(subprocess._USE_VFORK) # The default value regardless. mock_fork_exec.side_effect = RuntimeError("just testing args") with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) mock_fork_exec.assert_called_once() # NOTE: These assertions are *ugly* as they require the last arg # to remain the have_vfork boolean. We really need to refactor away # from the giant "wall of args" internal C extension API. self.assertTrue(mock_fork_exec.call_args.args[-1]) with mock.patch.object(subprocess, '_USE_VFORK', False): with self.assertRaises(RuntimeError): subprocess.run([sys.executable, "-c", "pass"]) self.assertFalse(mock_fork_exec.call_args_list[-1].args[-1]) @unittest.skipIf(not sysconfig.get_config_var("HAVE_VFORK"), "vfork() not enabled by configure.") @unittest.skipIf(sys.platform != "linux", "Linux only, requires strace.") @mock.patch("subprocess._USE_POSIX_SPAWN", new=False) def test_vfork_used_when_expected(self): # This is a performance regression test to ensure we default to using # vfork() when possible. # Technically this test could pass when posix_spawn is used as well # because libc tends to implement that internally using vfork. But # that'd just be testing a libc+kernel implementation detail. strace_binary = "/usr/bin/strace" # The only system calls we are interested in. 
strace_filter = "--trace=clone,clone2,clone3,fork,vfork,exit,exit_group" true_binary = "/bin/true" strace_command = [strace_binary, strace_filter] try: does_strace_work_process = subprocess.run( strace_command + [true_binary], stderr=subprocess.PIPE, stdout=subprocess.DEVNULL, ) rc = does_strace_work_process.returncode stderr = does_strace_work_process.stderr except OSError: rc = -1 stderr = "" if rc or (b"+++ exited with 0 +++" not in stderr): self.skipTest("strace not found or not working as expected.") with self.subTest(name="default_is_vfork"): vfork_result = assert_python_ok( "-c", textwrap.dedent(f"""\ import subprocess subprocess.check_call([{true_binary!r}])"""), __run_using_command=strace_command, ) # Match both vfork() and clone(..., flags=...|CLONE_VFORK|...) self.assertRegex(vfork_result.err, br"(?i)vfork") # Do NOT check that fork() or other clones did not happen. # If the OS denys the vfork it'll fallback to plain fork(). # Test that each individual thing that would disable the use of vfork # actually disables it. for sub_name, preamble, sp_kwarg, expect_permission_error in ( ("!use_vfork", "subprocess._USE_VFORK = False", "", False), ("preexec", "", "preexec_fn=lambda: None", False), ("setgid", "", f"group={os.getgid()}", True), ("setuid", "", f"user={os.getuid()}", True), ("setgroups", "", "extra_groups=[]", True), ): with self.subTest(name=sub_name): non_vfork_result = assert_python_ok( "-c", textwrap.dedent(f"""\ import subprocess {preamble} try: subprocess.check_call( [{true_binary!r}], **dict({sp_kwarg})) except PermissionError: if not {expect_permission_error}: raise"""), __run_using_command=strace_command, ) # Ensure neither vfork() or clone(..., flags=...|CLONE_VFORK|...). self.assertNotRegex(non_vfork_result.err, br"(?i)vfork") @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): def test_startupinfo(self): # startupinfo argument # We uses hardcoded constants, because we do not want to # depend on win32all. STARTF_USESHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_MAXIMIZE # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_keywords(self): # startupinfo argument # We use hardcoded constants, because we do not want to # depend on win32all. 
STARTF_USERSHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO( dwFlags=STARTF_USERSHOWWINDOW, wShowWindow=SW_MAXIMIZE ) # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_copy(self): # bpo-34044: Popen must not modify input STARTUPINFO structure startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW startupinfo.wShowWindow = subprocess.SW_HIDE # Call Popen() twice with the same startupinfo object to make sure # that it's not modified for _ in range(2): cmd = ZERO_RETURN_CMD with open(os.devnull, 'w') as null: proc = subprocess.Popen(cmd, stdout=null, stderr=subprocess.STDOUT, startupinfo=startupinfo) with proc: proc.communicate() self.assertEqual(proc.returncode, 0) self.assertEqual(startupinfo.dwFlags, subprocess.STARTF_USESHOWWINDOW) self.assertIsNone(startupinfo.hStdInput) self.assertIsNone(startupinfo.hStdOutput) self.assertIsNone(startupinfo.hStdError) self.assertEqual(startupinfo.wShowWindow, subprocess.SW_HIDE) self.assertEqual(startupinfo.lpAttributeList, {"handle_list": []}) def test_creationflags(self): # creationflags argument CREATE_NEW_CONSOLE = 16 sys.stderr.write(" a DOS box should flash briefly ...\n") subprocess.call(sys.executable + ' -c "import time; time.sleep(0.25)"', creationflags=CREATE_NEW_CONSOLE) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], preexec_fn=lambda: 1) @support.cpython_only def test_issue31471(self): # There shouldn't be an assertion failure in Popen() in case the env # argument has a bad keys() method. 
class BadEnv(dict): keys = None with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, env=BadEnv()) def test_close_fds(self): # close file descriptors rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"], close_fds=True) self.assertEqual(rc, 47) def test_close_fds_with_stdio(self): import msvcrt fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) handles = [] for fd in fds: os.set_inheritable(fd, True) handles.append(msvcrt.get_osfhandle(fd)) p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) int(stdout.strip()) # Check that stdout is an integer p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # The same as the previous call, but with an empty handle_list handle_list = [] startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handle_list} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # Check for a warning due to using handle_list and close_fds=False with warnings_helper.check_warnings((".*overriding close_fds", RuntimeWarning)): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handles[:]} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) def test_empty_attribute_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_empty_handle_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": []} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen(["set"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_encodings(self): # Run command through the shell (string) for enc in ['ansi', 'oem']: newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv, encoding=enc) with p: self.assertIn("physalis", p.stdout.read(), enc) def test_call_string(self): # call() function with string argument on Windows rc = subprocess.call(sys.executable + ' -c "import sys; sys.exit(47)"') self.assertEqual(rc, 47) def _kill_process(self, method, *args): # Some win32 buildbot raises EOFError if stdin is inherited p = subprocess.Popen([sys.executable, 
"-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') returncode = p.wait() self.assertNotEqual(returncode, 0) def _kill_dead_process(self, method, *args): p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() sys.exit(42) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') rc = p.wait() self.assertEqual(rc, 42) def test_send_signal(self): self._kill_process('send_signal', signal.SIGTERM) def test_kill(self): self._kill_process('kill') def test_terminate(self): self._kill_process('terminate') def test_send_signal_dead(self): self._kill_dead_process('send_signal', signal.SIGTERM) def test_kill_dead(self): self._kill_dead_process('kill') def test_terminate_dead(self): self._kill_dead_process('terminate') class MiscTests(unittest.TestCase): class RecordingPopen(subprocess.Popen): """A Popen that saves a reference to each instance for testing.""" instances_created = [] def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.instances_created.append(self) @mock.patch.object(subprocess.Popen, "_communicate") def _test_keyboardinterrupt_no_kill(self, popener, mock__communicate, **kwargs): """Fake a SIGINT happening during Popen._communicate() and ._wait(). This avoids the need to actually try and get test environments to send and receive signals reliably across platforms. The net effect of a ^C happening during a blocking subprocess execution which we want to clean up from is a KeyboardInterrupt coming out of communicate() or wait(). """ mock__communicate.side_effect = KeyboardInterrupt try: with mock.patch.object(subprocess.Popen, "_wait") as mock__wait: # We patch out _wait() as no signal was involved so the # child process isn't actually going to exit rapidly. 
mock__wait.side_effect = KeyboardInterrupt with mock.patch.object(subprocess, "Popen", self.RecordingPopen): with self.assertRaises(KeyboardInterrupt): popener([sys.executable, "-c", "import time\ntime.sleep(9)\nimport sys\n" "sys.stderr.write('\\n!runaway child!\\n')"], stdout=subprocess.DEVNULL, **kwargs) for call in mock__wait.call_args_list[1:]: self.assertNotEqual( call, mock.call(timeout=None), "no open-ended wait() after the first allowed: " f"{mock__wait.call_args_list}") sigint_calls = [] for call in mock__wait.call_args_list: if call == mock.call(timeout=0.25): # from Popen.__init__ sigint_calls.append(call) self.assertLessEqual(mock__wait.call_count, 2, msg=mock__wait.call_args_list) self.assertEqual(len(sigint_calls), 1, msg=mock__wait.call_args_list) finally: # cleanup the forgotten (due to our mocks) child process process = self.RecordingPopen.instances_created.pop() process.kill() process.wait() self.assertEqual([], self.RecordingPopen.instances_created) def test_call_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.call, timeout=6.282) def test_run_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.run, timeout=6.282) def test_context_manager_keyboardinterrupt_no_kill(self): def popen_via_context_manager(*args, **kwargs): with subprocess.Popen(*args, **kwargs) as unused_process: raise KeyboardInterrupt # Test how __exit__ handles ^C. self._test_keyboardinterrupt_no_kill(popen_via_context_manager) def test_getoutput(self): self.assertEqual(subprocess.getoutput('echo xyzzy'), 'xyzzy') self.assertEqual(subprocess.getstatusoutput('echo xyzzy'), (0, 'xyzzy')) # we use mkdtemp in the next line to create an empty directory # under our exclusive control; from that, we can invent a pathname # that we _know_ won't exist. This is guaranteed to fail. 
dir = None try: dir = tempfile.mkdtemp() name = os.path.join(dir, "foo") status, output = subprocess.getstatusoutput( ("type " if mswindows else "cat ") + name) self.assertNotEqual(status, 0) finally: if dir is not None: os.rmdir(dir) def test__all__(self): """Ensure that __all__ is populated properly.""" intentionally_excluded = {"list2cmdline", "Handle", "pwd", "grp", "fcntl"} exported = set(subprocess.__all__) possible_exports = set() import types for name, value in subprocess.__dict__.items(): if name.startswith('_'): continue if isinstance(value, (types.ModuleType,)): continue possible_exports.add(name) self.assertEqual(exported, possible_exports - intentionally_excluded) @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class ProcessTestCaseNoPoll(ProcessTestCase): def setUp(self): self.orig_selector = subprocess._PopenSelector subprocess._PopenSelector = selectors.SelectSelector ProcessTestCase.setUp(self) def tearDown(self): subprocess._PopenSelector = self.orig_selector ProcessTestCase.tearDown(self) @unittest.skipUnless(mswindows, "Windows-specific tests") class CommandsWithSpaces (BaseTestCase): def setUp(self): super().setUp() f, fname = tempfile.mkstemp(".py", "te st") self.fname = fname.lower () os.write(f, b"import sys;" b"sys.stdout.write('%d %s' % (len(sys.argv), [a.lower () for a in sys.argv]))" ) os.close(f) def tearDown(self): os.remove(self.fname) super().tearDown() def with_spaces(self, *args, **kwargs): kwargs['stdout'] = subprocess.PIPE p = subprocess.Popen(*args, **kwargs) with p: self.assertEqual( p.stdout.read ().decode("mbcs"), "2 [%r, 'ab cd']" % self.fname ) def test_shell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd"), shell=1) def test_shell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"], shell=1) def test_noshell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd")) def test_noshell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"]) class ContextManagerTests(BaseTestCase): def test_pipe(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write('stdout');" "sys.stderr.write('stderr');"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: self.assertEqual(proc.stdout.read(), b"stdout") self.assertEqual(proc.stderr.read(), b"stderr") self.assertTrue(proc.stdout.closed) self.assertTrue(proc.stderr.closed) def test_returncode(self): with subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(100)"]) as proc: pass # __exit__ calls wait(), so the returncode should be set self.assertEqual(proc.returncode, 100) def test_communicate_stdin(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.exit(sys.stdin.read() == 'context')"], stdin=subprocess.PIPE) as proc: proc.communicate(b"context") self.assertEqual(proc.returncode, 1) def test_invalid_args(self): with self.assertRaises(NONEXISTING_ERRORS): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass def test_broken_pipe_cleanup(self): """Broken pipe error should not prevent wait() (Issue 21619)""" proc = subprocess.Popen(ZERO_RETURN_CMD, 
stdin=subprocess.PIPE, bufsize=support.PIPE_MAX_SIZE*2) proc = proc.__enter__() # Prepare to send enough data to overflow any OS pipe buffering and # guarantee a broken pipe error. Data is held in BufferedWriter # buffer until closed. proc.stdin.write(b'x' * support.PIPE_MAX_SIZE) self.assertIsNone(proc.returncode) # EPIPE expected under POSIX; EINVAL under Windows self.assertRaises(OSError, proc.__exit__, None, None, None) self.assertEqual(proc.returncode, 0) self.assertTrue(proc.stdin.closed) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_threading.py000066400000000000000000002230521471441230600217070ustar00rootroot00000000000000""" Tests for the threading module. """ import test.support from test.support import threading_helper, requires_subprocess, requires_gil_enabled from test.support import verbose, cpython_only, os_helper from test.support.import_helper import import_module from test.support.script_helper import assert_python_ok, assert_python_failure from test.support import force_not_colorized import random import sys import _thread import threading import time import unittest import weakref import os import subprocess import signal import textwrap import traceback import warnings from unittest import mock from test import lock_tests from test import support try: from test.support import interpreters except ImportError: interpreters = None threading_helper.requires_working_threading(module=True) # Between fork() and exec(), only async-safe functions are allowed (issues # #12316 and #11870), and fork() from a worker thread is known to trigger # problems with some operating systems (issue #3863): skip problematic tests # on platforms known to behave badly. platforms_to_skip = ('netbsd5', 'hp-ux11') def skip_unless_reliable_fork(test): if not support.has_fork_support: return unittest.skip("requires working os.fork()")(test) if sys.platform in platforms_to_skip: return unittest.skip("due to known OS bug related to thread+fork")(test) if support.HAVE_ASAN_FORK_BUG: return unittest.skip("libasan has a pthread_create() dead lock related to thread+fork")(test) if support.check_sanitizer(thread=True): return unittest.skip("TSAN doesn't support threads after fork")(test) return test def requires_subinterpreters(meth): """Decorator to skip a test if subinterpreters are not supported.""" return unittest.skipIf(interpreters is None, 'subinterpreters required')(meth) def restore_default_excepthook(testcase): testcase.addCleanup(setattr, threading, 'excepthook', threading.excepthook) threading.excepthook = threading.__excepthook__ # A trivial mutable counter. class Counter(object): def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % (self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. 
%d tasks are running' % (self.name, self.nrunning.get())) class BaseTestCase(unittest.TestCase): def setUp(self): self._threads = threading_helper.threading_setup() def tearDown(self): threading_helper.threading_cleanup(*self._threads) test.support.reap_children() class ThreadTests(BaseTestCase): maxDiff = 9999 @cpython_only def test_name(self): def func(): pass thread = threading.Thread(name="myname1") self.assertEqual(thread.name, "myname1") # Convert int name to str thread = threading.Thread(name=123) self.assertEqual(thread.name, "123") # target name is ignored if name is specified thread = threading.Thread(target=func, name="myname2") self.assertEqual(thread.name, "myname2") with mock.patch.object(threading, '_counter', return_value=2): thread = threading.Thread(name="") self.assertEqual(thread.name, "Thread-2") with mock.patch.object(threading, '_counter', return_value=3): thread = threading.Thread() self.assertEqual(thread.name, "Thread-3") with mock.patch.object(threading, '_counter', return_value=5): thread = threading.Thread(target=func) self.assertEqual(thread.name, "Thread-5 (func)") def test_args_argument(self): # bpo-45735: Using list or tuple as *args* in constructor could # achieve the same effect. num_list = [1] num_tuple = (1,) str_list = ["str"] str_tuple = ("str",) list_in_tuple = ([1],) tuple_in_list = [(1,)] test_cases = ( (num_list, lambda arg: self.assertEqual(arg, 1)), (num_tuple, lambda arg: self.assertEqual(arg, 1)), (str_list, lambda arg: self.assertEqual(arg, "str")), (str_tuple, lambda arg: self.assertEqual(arg, "str")), (list_in_tuple, lambda arg: self.assertEqual(arg, [1])), (tuple_in_list, lambda arg: self.assertEqual(arg, (1,))) ) for args, target in test_cases: with self.subTest(target=target, args=args): t = threading.Thread(target=target, args=args) t.start() t.join() def test_lock_no_args(self): threading.Lock() # works self.assertRaises(TypeError, threading.Lock, 1) self.assertRaises(TypeError, threading.Lock, a=1) self.assertRaises(TypeError, threading.Lock, 1, 2, a=1, b=2) def test_lock_no_subclass(self): # Intentionally disallow subclasses of threading.Lock because they have # never been allowed, so why start now just because the type is public? with self.assertRaises(TypeError): class MyLock(threading.Lock): pass def test_lock_or_none(self): import types self.assertIsInstance(threading.Lock | None, types.UnionType) # Create a bunch of threads, let each do some work, wait until all are # done. def test_various_ops(self): # This takes about n/3 seconds to run (about n/3 clumps of tasks, # times about 1 second per clump). NUMTASKS = 10 # no more than 3 of the 10 can run at once sema = threading.BoundedSemaphore(value=3) mutex = threading.RLock() numrunning = Counter() threads = [] for i in range(NUMTASKS): t = TestThread("<thread %d>"%i, self, sema, mutex, numrunning) threads.append(t) self.assertIsNone(t.ident) self.assertRegex(repr(t), r'^<TestThread\(.*, initial\)>$') t.start() if hasattr(threading, 'get_native_id'): native_ids = set(t.native_id for t in threads) | {threading.get_native_id()} self.assertNotIn(None, native_ids) self.assertEqual(len(native_ids), NUMTASKS + 1) if verbose: print('waiting for all tasks to complete') for t in threads: t.join() self.assertFalse(t.is_alive()) self.assertNotEqual(t.ident, 0) self.assertIsNotNone(t.ident) self.assertRegex(repr(t), r'^<TestThread\(.*, stopped -?\d+\)>$') if verbose: print('all tasks done') self.assertEqual(numrunning.get(), 0) def test_ident_of_no_threading_threads(self): # The ident still must work for the main thread and dummy threads.
self.assertIsNotNone(threading.current_thread().ident) def f(): ident.append(threading.current_thread().ident) done.set() done = threading.Event() ident = [] with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, ()) done.wait() self.assertEqual(ident[0], tid) # run with a small(ish) thread stack size (256 KiB) def test_various_ops_small_stack(self): if verbose: print('with 256 KiB thread stack size...') try: threading.stack_size(262144) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1 MiB) def test_various_ops_large_stack(self): if verbose: print('with 1 MiB thread stack size...') try: threading.stack_size(0x100000) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. dummy_thread = None error = None def f(mutex): try: nonlocal dummy_thread nonlocal error # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. dummy_thread = threading.current_thread() tid = dummy_thread.ident self.assertIn(tid, threading._active) self.assertIsInstance(dummy_thread, threading._DummyThread) self.assertIs(threading._active.get(tid), dummy_thread) # gh-29376 self.assertTrue( dummy_thread.is_alive(), 'Expected _DummyThread to be considered alive.' ) self.assertIn('_DummyThread', repr(dummy_thread)) except BaseException as e: error = e finally: mutex.release() mutex = threading.Lock() mutex.acquire() with threading_helper.wait_threads_exit(): tid = _thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() if error is not None: raise error self.assertEqual(tid, dummy_thread.ident) # Issue gh-106236: with self.assertRaises(RuntimeError): dummy_thread.join() dummy_thread._started.clear() with self.assertRaises(RuntimeError): dummy_thread.is_alive() # Busy wait for the following condition: after the thread dies, the # related dummy thread must be removed from threading._active. timeout = 5 timeout_at = time.monotonic() + timeout while time.monotonic() < timeout_at: if threading._active.get(dummy_thread.ident) is not dummy_thread: break time.sleep(.1) else: self.fail('It was expected that the created threading._DummyThread was removed from threading._active.') # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def test_PyThreadState_SetAsyncExc(self): ctypes = import_module("ctypes") set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc set_async_exc.argtypes = (ctypes.c_ulong, ctypes.py_object) class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # First check it works when setting the exception from the same thread. tid = threading.get_ident() self.assertIsInstance(tid, int) self.assertGreater(tid, 0) try: result = set_async_exc(tid, exception) # The exception is async, so we might have to keep the VM busy until # it notices. while True: pass except AsyncExc: pass else: # This code is unreachable but it reflects the intent. If we wanted # to be smarter the above loop wouldn't be infinite. self.fail("AsyncExc not raised") try: self.assertEqual(result, 1) # one thread state modified except UnboundLocalError: # The exception was raised too quickly for us to get the result. 
pass # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): def run(self): self.id = threading.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. if verbose: print(" trying nonsensical thread id") result = set_async_exc(-1, exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") ret = worker_started.wait() self.assertTrue(ret) if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(t.id, exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=support.SHORT_TIMEOUT) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. def fail_new_thread(*args, **kwargs): raise threading.ThreadError() _start_joinable_thread = threading._start_joinable_thread threading._start_joinable_thread = fail_new_thread try: t = threading.Thread(target=lambda: None) self.assertRaises(threading.ThreadError, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") finally: threading._start_joinable_thread = _start_joinable_thread def test_finalize_running_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. if support.check_sanitizer(thread=True): # the thread running `time.sleep(100)` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") import_module("ctypes") rc, out, err = assert_python_failure("-c", """if 1: import ctypes, sys, time, _thread # This lock is used as a simple event variable. ready = _thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) _thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. 
sys.exit(42) """) self.assertEqual(rc, 42) def test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown if support.check_sanitizer(thread=True): # the thread running `time.sleep(2)` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") assert_python_ok("-c", """if 1: import sys, threading # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): import os, time time.sleep(2) print('program blocked; aborting') os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() # This is the trace function def func(frame, event, arg): threading.current_thread() return func sys.settrace(func) """) def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown rc, out, err = assert_python_ok("-c", """if 1: import threading from time import sleep def child(): sleep(1) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is:", sleep) threading.Thread(target=child).start() raise SystemExit """) self.assertEqual(out.strip(), b"Woke up, sleep function is: <built-in function sleep>") self.assertEqual(err, b"") def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. enum = threading.enumerate old_interval = sys.getswitchinterval() try: for i in range(1, 100): support.setswitchinterval(i * 0.0002) t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertNotIn(t, l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setswitchinterval(old_interval) def test_join_from_multiple_threads(self): # Thread.join() should be thread-safe errors = [] def worker(): time.sleep(0.005) def joiner(thread): try: thread.join() except Exception as e: errors.append(e) for N in range(2, 20): threads = [threading.Thread(target=worker)] for i in range(N): threads.append(threading.Thread(target=joiner, args=(threads[0],))) for t in threads: t.start() time.sleep(0.01) for t in threads: t.join() if errors: raise errors[0] def test_join_with_timeout(self): lock = _thread.allocate_lock() lock.acquire() def worker(): lock.acquire() thread = threading.Thread(target=worker) thread.start() thread.join(timeout=0.01) assert thread.is_alive() lock.release() thread.join() assert not thread.is_alive() def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes.
self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'yet_another':self}) self.thread.start() def _run(self, other_ref, yet_another): if self.should_raise: raise SystemExit restore_default_excepthook(self) cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) def test_old_threading_api(self): # Just a quick sanity check to make sure the old method names are # still present t = threading.Thread() with self.assertWarnsRegex(DeprecationWarning, r'get the daemon attribute'): t.isDaemon() with self.assertWarnsRegex(DeprecationWarning, r'set the daemon attribute'): t.setDaemon(True) with self.assertWarnsRegex(DeprecationWarning, r'get the name attribute'): t.getName() with self.assertWarnsRegex(DeprecationWarning, r'set the name attribute'): t.setName("name") e = threading.Event() with self.assertWarnsRegex(DeprecationWarning, 'use is_set()'): e.isSet() cond = threading.Condition() cond.acquire() with self.assertWarnsRegex(DeprecationWarning, 'use notify_all()'): cond.notifyAll() with self.assertWarnsRegex(DeprecationWarning, 'use active_count()'): threading.activeCount() with self.assertWarnsRegex(DeprecationWarning, 'use current_thread()'): threading.currentThread() def test_repr_daemon(self): t = threading.Thread() self.assertNotIn('daemon', repr(t)) t.daemon = True self.assertIn('daemon', repr(t)) def test_daemon_param(self): t = threading.Thread() self.assertFalse(t.daemon) t = threading.Thread(daemon=False) self.assertFalse(t.daemon) t = threading.Thread(daemon=True) self.assertTrue(t.daemon) @skip_unless_reliable_fork def test_dummy_thread_after_fork(self): # Issue #14308: a dummy thread in the active list doesn't mess up # the after-fork mechanism. code = """if 1: import _thread, threading, os, time, warnings def background_thread(evt): # Creates and registers the _DummyThread instance threading.current_thread() evt.set() time.sleep(10) evt = threading.Event() _thread.start_new_thread(background_thread, (evt,)) evt.wait() assert threading.active_count() == 2, threading.active_count() with warnings.catch_warnings(record=True) as ws: warnings.filterwarnings( "always", category=DeprecationWarning) if os.fork() == 0: assert threading.active_count() == 1, threading.active_count() os._exit(0) else: assert ws[0].category == DeprecationWarning, ws[0] assert 'fork' in str(ws[0].message), ws[0] os.wait() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') @skip_unless_reliable_fork def test_is_alive_after_fork(self): # Try hard to trigger #18418: is_alive() could sometimes be True on # threads that vanished after a fork. old_interval = sys.getswitchinterval() self.addCleanup(sys.setswitchinterval, old_interval) # Make the bug more likely to manifest. test.support.setswitchinterval(1e-6) for i in range(20): t = threading.Thread(target=lambda: None) t.start() # Ignore the warning about fork with threads. 
with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): if (pid := os.fork()) == 0: os._exit(11 if t.is_alive() else 10) else: t.join() support.wait_process(pid, exitcode=10) def test_main_thread(self): main = threading.main_thread() self.assertEqual(main.name, 'MainThread') self.assertEqual(main.ident, threading.current_thread().ident) self.assertEqual(main.ident, threading.get_ident()) def f(): self.assertNotEqual(threading.main_thread().ident, threading.current_thread().ident) th = threading.Thread(target=f) th.start() th.join() @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork(self): code = """if 1: import os, threading from test import support ident = threading.get_ident() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) else: support.wait_process(pid, exitcode=0) """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "current ident True\n" "main MainThread\n" "main ident True\n" "current is main True\n") @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_nonmain_thread(self): code = """if 1: import os, threading, sys, warnings from test import support def func(): ident = threading.get_ident() with warnings.catch_warnings(record=True) as ws: warnings.filterwarnings( "always", category=DeprecationWarning) pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) # stdout is fully buffered because not a tty, # we have to flush before exit. sys.stdout.flush() else: assert ws[0].category == DeprecationWarning, ws[0] assert 'fork' in str(ws[0].message), ws[0] support.wait_process(pid, exitcode=0) th = threading.Thread(target=func) th.start() th.join() """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode('utf-8'), "") self.assertEqual(data, "current ident True\n" "main Thread-1 (func) Thread\n" "main ident True\n" "current is main True\n" ) @skip_unless_reliable_fork @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_foreign_thread(self, create_dummy=False): code = """if 1: import os, threading, sys, traceback, _thread from test import support def func(lock): ident = threading.get_ident() if %s: # call current_thread() before fork to allocate DummyThread current = threading.current_thread() print("current", current.name, type(current).__name__) print("ident in _active", ident in threading._active) # flush before fork, so child won't flush it again sys.stdout.flush() pid = os.fork() if pid == 0: print("current ident", threading.get_ident() == ident) main = threading.main_thread() print("main", main.name, type(main).__name__) print("main ident", main.ident == ident) print("current is main", threading.current_thread() is main) print("_dangling", [t.name for t in list(threading._dangling)]) # stdout is fully buffered because not a tty, # we have to flush before exit. 
sys.stdout.flush() try: threading._shutdown() os._exit(0) except: traceback.print_exc() sys.stderr.flush() os._exit(1) else: try: support.wait_process(pid, exitcode=0) except Exception: # avoid 'could not acquire lock for # <_io.BufferedWriter name=''> at interpreter shutdown,' traceback.print_exc() sys.stderr.flush() finally: lock.release() join_lock = _thread.allocate_lock() join_lock.acquire() th = _thread.start_new_thread(func, (join_lock,)) join_lock.acquire() """ % create_dummy # "DeprecationWarning: This process is multi-threaded, use of fork() # may lead to deadlocks in the child" _, out, err = assert_python_ok("-W", "ignore::DeprecationWarning", "-c", code) data = out.decode().replace('\r', '') self.assertEqual(err.decode(), "") self.assertEqual(data, ("current Dummy-1 _DummyThread\n" if create_dummy else "") + f"ident in _active {create_dummy!s}\n" + "current ident True\n" "main MainThread _MainThread\n" "main ident True\n" "current is main True\n" "_dangling ['MainThread']\n") def test_main_thread_after_fork_from_dummy_thread(self, create_dummy=False): self.test_main_thread_after_fork_from_foreign_thread(create_dummy=True) def test_main_thread_during_shutdown(self): # bpo-31516: current_thread() should still point to the main thread # at shutdown code = """if 1: import gc, threading main_thread = threading.current_thread() assert main_thread is threading.main_thread() # sanity check class RefCycle: def __init__(self): self.cycle = self def __del__(self): print("GC:", threading.current_thread() is main_thread, threading.main_thread() is main_thread, threading.enumerate() == [main_thread]) RefCycle() gc.collect() # sanity check x = RefCycle() """ _, out, err = assert_python_ok("-c", code) data = out.decode() self.assertEqual(err, b"") self.assertEqual(data.splitlines(), ["GC: True True True"] * 2) def test_finalization_shutdown(self): # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait # until Python thread states of all non-daemon threads get deleted. # # Test similar to SubinterpThreadingTests.test_threads_join_2(), but # test the finalization of the main interpreter. code = """if 1: import os import threading import time import random def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_Finalize() is called. random_sleep() tls.x = Sleeper() random_sleep() threading.Thread(target=f).start() random_sleep() """ rc, out, err = assert_python_ok("-c", code) self.assertEqual(err, b"") def test_repr_stopped(self): # Verify that "stopped" shows up in repr(Thread) appropriately. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() t = threading.Thread(target=f) t.start() started.acquire() self.assertIn("started", repr(t)) finish.release() # "stopped" should appear in the repr in a reasonable amount of time. # Implementation detail: as of this writing, that's trivially true # if .join() is called, and almost trivially true if .is_alive() is # called. The detail we're testing here is that "stopped" shows up # "all on its own". LOOKING_FOR = "stopped" for i in range(500): if LOOKING_FOR in repr(t): break time.sleep(0.01) self.assertIn(LOOKING_FOR, repr(t)) # we waited at least 5 seconds t.join() def test_BoundedSemaphore_limit(self): # BoundedSemaphore should raise ValueError if released too often. 
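        # Illustrative sketch (not part of the vendored upstream test): the
        # behaviour exercised below in its simplest standalone form, assuming
        # only the stdlib threading module.
        #
        #     import threading
        #     bs = threading.BoundedSemaphore(2)
        #     bs.acquire(); bs.acquire()
        #     bs.release(); bs.release()
        #     bs.release()      # one release too many -> raises ValueError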
for limit in range(1, 10): bs = threading.BoundedSemaphore(limit) threads = [threading.Thread(target=bs.acquire) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() threads = [threading.Thread(target=bs.release) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() self.assertRaises(ValueError, bs.release) @cpython_only def test_frame_tstate_tracing(self): _testcapi = import_module("_testcapi") # Issue #14432: Crash when a generator is created in a C thread that is # destroyed while the generator is still used. The issue was that a # generator contains a frame, and the frame kept a reference to the # Python state of the destroyed C thread. The crash occurs when a trace # function is setup. def noop_trace(frame, event, arg): # no operation return noop_trace def generator(): while 1: yield "generator" def callback(): if callback.gen is None: callback.gen = generator() return next(callback.gen) callback.gen = None old_trace = sys.gettrace() sys.settrace(noop_trace) try: # Install a trace function threading.settrace(noop_trace) # Create a generator in a C thread which exits after the call _testcapi.call_in_temporary_c_thread(callback) # Call the generator in a different Python thread, check that the # generator didn't keep a reference to the destroyed thread state for test in range(3): # The trace function is still called here callback() finally: sys.settrace(old_trace) threading.settrace(old_trace) def test_gettrace(self): def noop_trace(frame, event, arg): # no operation return noop_trace old_trace = threading.gettrace() try: threading.settrace(noop_trace) trace_func = threading.gettrace() self.assertEqual(noop_trace,trace_func) finally: threading.settrace(old_trace) def test_gettrace_all_threads(self): def fn(*args): pass old_trace = threading.gettrace() first_check = threading.Event() second_check = threading.Event() trace_funcs = [] def checker(): trace_funcs.append(sys.gettrace()) first_check.set() second_check.wait() trace_funcs.append(sys.gettrace()) try: t = threading.Thread(target=checker) t.start() first_check.wait() threading.settrace_all_threads(fn) second_check.set() t.join() self.assertEqual(trace_funcs, [None, fn]) self.assertEqual(threading.gettrace(), fn) self.assertEqual(sys.gettrace(), fn) finally: threading.settrace_all_threads(old_trace) self.assertEqual(threading.gettrace(), old_trace) self.assertEqual(sys.gettrace(), old_trace) def test_getprofile(self): def fn(*args): pass old_profile = threading.getprofile() try: threading.setprofile(fn) self.assertEqual(fn, threading.getprofile()) finally: threading.setprofile(old_profile) def test_getprofile_all_threads(self): def fn(*args): pass old_profile = threading.getprofile() first_check = threading.Event() second_check = threading.Event() profile_funcs = [] def checker(): profile_funcs.append(sys.getprofile()) first_check.set() second_check.wait() profile_funcs.append(sys.getprofile()) try: t = threading.Thread(target=checker) t.start() first_check.wait() threading.setprofile_all_threads(fn) second_check.set() t.join() self.assertEqual(profile_funcs, [None, fn]) self.assertEqual(threading.getprofile(), fn) self.assertEqual(sys.getprofile(), fn) finally: threading.setprofile_all_threads(old_profile) self.assertEqual(threading.getprofile(), old_profile) self.assertEqual(sys.getprofile(), old_profile) def test_locals_at_exit(self): # bpo-19466: thread locals must not be deleted before destructors # are called rc, out, err = assert_python_ok("-c", """if 1: import threading class 
Atexit: def __del__(self): print("thread_dict.atexit = %r" % thread_dict.atexit) thread_dict = threading.local() thread_dict.atexit = "value" atexit = Atexit() """) self.assertEqual(out.rstrip(), b"thread_dict.atexit = 'value'") def test_boolean_target(self): # bpo-41149: A thread that had a boolean value of False would not # run, regardless of whether it was callable. The correct behaviour # is for a thread to do nothing if its target is None, and to call # the target otherwise. class BooleanTarget(object): def __init__(self): self.ran = False def __bool__(self): return False def __call__(self): self.ran = True target = BooleanTarget() thread = threading.Thread(target=target) thread.start() thread.join() self.assertTrue(target.ran) def test_leak_without_join(self): # bpo-37788: Test that a thread which is not joined explicitly # does not leak. Test written for reference leak checks. def noop(): pass with threading_helper.wait_threads_exit(): threading.Thread(target=noop).start() # Thread.join() is not called def test_import_from_another_thread(self): # bpo-1596321: If the threading module is first import from a thread # different than the main thread, threading._shutdown() must handle # this case without logging an error at Python exit. code = textwrap.dedent(''' import _thread import sys event = _thread.allocate_lock() event.acquire() def import_threading(): import threading event.release() if 'threading' in sys.modules: raise Exception('threading is already imported') _thread.start_new_thread(import_threading, ()) # wait until the threading module is imported event.acquire() event.release() if 'threading' not in sys.modules: raise Exception('threading is not imported') # don't wait until the thread completes ''') rc, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') def test_start_new_thread_at_finalization(self): code = """if 1: import _thread def f(): print("shouldn't be printed") class AtFinalization: def __del__(self): print("OK") _thread.start_new_thread(f, ()) at_finalization = AtFinalization() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out.strip(), b"OK") self.assertIn(b"can't create new thread at interpreter shutdown", err) class ThreadJoinOnShutdown(BaseTestCase): def _run_and_join(self, script): script = """if 1: import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') # stdout is fully buffered because not a tty, we have to flush # before exit. 
sys.stdout.flush() \n""" + script rc, out, err = assert_python_ok("-c", script) data = out.decode().replace('\r', '') self.assertEqual(data, "end of main\nend of thread\n") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.1) print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter script = """if 1: from test import support childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) @skip_unless_reliable_fork def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. script = """if 1: from test import support main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() """ self._run_and_join(script) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_4_daemon_threads(self): # Check that a daemon thread cannot crash the interpreter on shutdown # by manipulating internal structures that are being disposed of in # the main thread. if support.check_sanitizer(thread=True): # some of the threads running `random_io` below will still be alive # at process exit self.skipTest("TSAN would report thread leak") script = """if True: import os import random import sys import time import threading thread_has_run = set() def random_io(): '''Loop for a while sleeping random tiny amounts and doing some I/O.''' import test.test_threading as mod while True: with open(mod.__file__, 'rb') as in_f: stuff = in_f.read(200) with open(os.devnull, 'wb') as null_f: null_f.write(stuff) time.sleep(random.random() / 1995) thread_has_run.add(threading.current_thread()) def main(): count = 0 for _ in range(40): new_thread = threading.Thread(target=random_io) new_thread.daemon = True new_thread.start() count += 1 while len(thread_has_run) < count: time.sleep(0.001) # Trigger process shutdown sys.exit(0) main() """ rc, out, err = assert_python_ok('-c', script) self.assertFalse(err) def test_thread_from_thread(self): script = """if True: import threading import time def thread2(): time.sleep(0.05) print("OK") def thread1(): time.sleep(0.05) t2 = threading.Thread(target=thread2) t2.start() t = threading.Thread(target=thread1) t.start() # do not join() -- the interpreter waits for non-daemon threads to # finish. """ rc, out, err = assert_python_ok('-c', script) self.assertEqual(err, b"") self.assertEqual(out.strip(), b"OK") self.assertEqual(rc, 0) @skip_unless_reliable_fork def test_reinit_tls_after_fork(self): # Issue #13817: fork() would deadlock in a multithreaded program with # the ad-hoc TLS implementation. 
def do_fork_and_wait(): # just fork a child process and wait it pid = os.fork() if pid > 0: support.wait_process(pid, exitcode=50) else: os._exit(50) # Ignore the warning about fork with threads. with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): # start a bunch of threads that will fork() child processes threads = [] for i in range(16): t = threading.Thread(target=do_fork_and_wait) threads.append(t) t.start() for t in threads: t.join() @skip_unless_reliable_fork def test_clear_threads_states_after_fork(self): # Issue #17094: check that threads states are cleared after fork() # start a bunch of threads threads = [] for i in range(16): t = threading.Thread(target=lambda : time.sleep(0.3)) threads.append(t) t.start() try: # Ignore the warning about fork with threads. with warnings.catch_warnings(category=DeprecationWarning, action="ignore"): pid = os.fork() if pid == 0: # check that threads states have been cleared if len(sys._current_frames()) == 1: os._exit(51) else: os._exit(52) else: support.wait_process(pid, exitcode=51) finally: for t in threads: t.join() class SubinterpThreadingTests(BaseTestCase): def pipe(self): r, w = os.pipe() self.addCleanup(os.close, r) self.addCleanup(os.close, w) if hasattr(os, 'set_blocking'): os.set_blocking(r, False) return (r, w) def test_threads_join(self): # Non-daemon threads should be joined at subinterpreter shutdown # (issue #18808) r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. self.assertEqual(os.read(r, 1), b"x") def test_threads_join_2(self): # Same as above, but a delay gets introduced after the thread's # Python code returned but before the thread state is deleted. # To achieve this, we register a thread-local object which sleeps # a bit when deallocated. r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() tls.x = Sleeper() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. 
self.assertEqual(os.read(r, 1), b"x") @requires_subinterpreters def test_threads_join_with_no_main(self): r_interp, w_interp = self.pipe() INTERP = b'I' FINI = b'F' DONE = b'D' interp = interpreters.create() interp.exec(f"""if True: import os import threading import time done = False def notify_fini(): global done done = True os.write({w_interp}, {FINI!r}) t.join() threading._register_atexit(notify_fini) def task(): while not done: time.sleep(0.1) os.write({w_interp}, {DONE!r}) t = threading.Thread(target=task) t.start() os.write({w_interp}, {INTERP!r}) """) interp.close() self.assertEqual(os.read(r_interp, 1), INTERP) self.assertEqual(os.read(r_interp, 1), FINI) self.assertEqual(os.read(r_interp, 1), DONE) @cpython_only def test_daemon_threads_fatal_error(self): import_module("_testcapi") subinterp_code = f"""if 1: import os import threading import time def f(): # Make sure the daemon thread is still running when # Py_EndInterpreter is called. time.sleep({test.support.SHORT_TIMEOUT}) threading.Thread(target=f, daemon=True).start() """ script = r"""if 1: import _testcapi _testcapi.run_in_subinterp(%r) """ % (subinterp_code,) with test.support.SuppressCrashReport(): rc, out, err = assert_python_failure("-c", script) self.assertIn("Fatal Python error: Py_EndInterpreter: " "not the last thread", err.decode()) def _check_allowed(self, before_start='', *, allowed=True, daemon_allowed=True, daemon=False, ): import_module("_testinternalcapi") subinterp_code = textwrap.dedent(f""" import test.support import threading def func(): print('this should not have run!') t = threading.Thread(target=func, daemon={daemon}) {before_start} t.start() """) check_multi_interp_extensions = bool(support.Py_GIL_DISABLED) script = textwrap.dedent(f""" import test.support test.support.run_in_subinterp_with_config( {subinterp_code!r}, use_main_obmalloc=True, allow_fork=True, allow_exec=True, allow_threads={allowed}, allow_daemon_threads={daemon_allowed}, check_multi_interp_extensions={check_multi_interp_extensions}, own_gil=False, ) """) with test.support.SuppressCrashReport(): _, _, err = assert_python_ok("-c", script) return err.decode() @cpython_only def test_threads_not_allowed(self): err = self._check_allowed( allowed=False, daemon_allowed=False, daemon=False, ) self.assertIn('RuntimeError', err) @cpython_only def test_daemon_threads_not_allowed(self): with self.subTest('via Thread()'): err = self._check_allowed( allowed=True, daemon_allowed=False, daemon=True, ) self.assertIn('RuntimeError', err) with self.subTest('via Thread.daemon setter'): err = self._check_allowed( 't.daemon = True', allowed=True, daemon_allowed=False, daemon=False, ) self.assertIn('RuntimeError', err) class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. 
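    # Illustrative sketch (not part of the vendored upstream test): a Thread
    # object is single-use, so a second start() fails.  Minimal standalone
    # form, assuming only the stdlib threading module.
    #
    #     import threading
    #     t = threading.Thread(target=lambda: None)
    #     t.start()
    #     try:
    #         t.start()     # second call -> RuntimeError
    #     except RuntimeError:
    #         pass
    #     t.join()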
def test_start_thread_again(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, thread.start) thread.join() def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join); def test_joining_inactive_thread(self): thread = threading.Thread() self.assertRaises(RuntimeError, thread.join) def test_daemonize_active_thread(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, setattr, thread, "daemon", True) thread.join() def test_releasing_unacquired_lock(self): lock = threading.Lock() self.assertRaises(RuntimeError, lock.release) @requires_subprocess() def test_recursion_limit(self): # Issue 9670 # test that excessive recursion within a non-main thread causes # an exception rather than crashing the interpreter on platforms # like Mac OS X or FreeBSD which have small default stack sizes # for threads script = """if True: import threading def recurse(): return recurse() def outer(): try: recurse() except RecursionError: pass w = threading.Thread(target=outer) w.start() w.join() print('end of main thread') """ expected_output = "end of main thread\n" p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() data = stdout.decode().replace('\r', '') self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) self.assertEqual(data, expected_output) def test_print_exception(self): script = r"""if True: import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_1(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) sys.stderr = None running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_2(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 sys.stderr = None t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) def test_print_exception_gh_102056(self): # This used to crash. See gh-102056. 
script = r"""if True: import time import threading import _thread def f(): try: f() except RecursionError: f() def g(): try: raise ValueError() except* ValueError: f() def h(): time.sleep(1) _thread.interrupt_main() t = threading.Thread(target=h) t.start() g() t.join() """ assert_python_failure("-c", script) def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise class Issue27558(threading.Thread): exc = None def run(self): try: bare_raise() except Exception as exc: self.exc = exc thread = Issue27558() thread.start() thread.join() self.assertIsNotNone(thread.exc) self.assertIsInstance(thread.exc, RuntimeError) # explicitly break the reference cycle to not leak a dangling thread thread.exc = None def test_multithread_modify_file_noerror(self): # See issue25872 def modify_file(): with open(os_helper.TESTFN, 'w', encoding='utf-8') as fp: fp.write(' ') traceback.format_stack() self.addCleanup(os_helper.unlink, os_helper.TESTFN) threads = [ threading.Thread(target=modify_file) for i in range(100) ] for t in threads: t.start() t.join() class ThreadRunFail(threading.Thread): def run(self): raise ValueError("run failed") class ExceptHookTests(BaseTestCase): def setUp(self): restore_default_excepthook(self) super().setUp() @force_not_colorized def test_excepthook(self): with support.captured_output("stderr") as stderr: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {thread.name}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("run failed")', stderr) self.assertIn('ValueError: run failed', stderr) @support.cpython_only @force_not_colorized def test_excepthook_thread_None(self): # threading.excepthook called with thread=None: log the thread # identifier in this case. 
with support.captured_output("stderr") as stderr: try: raise ValueError("bug") except Exception as exc: args = threading.ExceptHookArgs([*sys.exc_info(), None]) try: threading.excepthook(args) finally: # Explicitly break a reference cycle args = None stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {threading.get_ident()}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("bug")', stderr) self.assertIn('ValueError: bug', stderr) def test_system_exit(self): class ThreadExit(threading.Thread): def run(self): sys.exit(1) # threading.excepthook() silently ignores SystemExit with support.captured_output("stderr") as stderr: thread = ThreadExit() thread.start() thread.join() self.assertEqual(stderr.getvalue(), '') def test_custom_excepthook(self): args = None def hook(hook_args): nonlocal args args = hook_args try: with support.swap_attr(threading, 'excepthook', hook): thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(args.exc_type, ValueError) self.assertEqual(str(args.exc_value), 'run failed') self.assertEqual(args.exc_traceback, args.exc_value.__traceback__) self.assertIs(args.thread, thread) finally: # Break reference cycle args = None def test_custom_excepthook_fail(self): def threading_hook(args): raise ValueError("threading_hook failed") err_str = None def sys_hook(exc_type, exc_value, exc_traceback): nonlocal err_str err_str = str(exc_value) with support.swap_attr(threading, 'excepthook', threading_hook), \ support.swap_attr(sys, 'excepthook', sys_hook), \ support.captured_output('stderr') as stderr: thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(stderr.getvalue(), 'Exception in threading.excepthook:\n') self.assertEqual(err_str, 'threading_hook failed') def test_original_excepthook(self): def run_thread(): with support.captured_output("stderr") as output: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() return output.getvalue() def threading_hook(args): print("Running a thread failed", file=sys.stderr) default_output = run_thread() with support.swap_attr(threading, 'excepthook', threading_hook): custom_hook_output = run_thread() threading.excepthook = threading.__excepthook__ recovered_output = run_thread() self.assertEqual(default_output, recovered_output) self.assertNotEqual(default_output, custom_hook_output) self.assertEqual(custom_hook_output, "Running a thread failed\n") class TimerTests(BaseTestCase): def setUp(self): BaseTestCase.setUp(self) self.callback_args = [] self.callback_event = threading.Event() def test_init_immutable_default_args(self): # Issue 17435: constructor defaults were mutable objects, they could be # mutated via the object attributes and affect other Timer objects. 
timer1 = threading.Timer(0.01, self._callback_spy) timer1.start() self.callback_event.wait() timer1.args.append("blah") timer1.kwargs["foo"] = "bar" self.callback_event.clear() timer2 = threading.Timer(0.01, self._callback_spy) timer2.start() self.callback_event.wait() self.assertEqual(len(self.callback_args), 2) self.assertEqual(self.callback_args, [((), {}), ((), {})]) timer1.join() timer2.join() def _callback_spy(self, *args, **kwargs): self.callback_args.append((args[:], kwargs.copy())) self.callback_event.set() class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) class PyRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._PyRLock) @unittest.skipIf(threading._CRLock is None, 'RLock not implemented in C') class CRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._CRLock) def test_signature(self): # gh-102029 with warnings.catch_warnings(record=True) as warnings_log: threading.RLock() self.assertEqual(warnings_log, []) arg_types = [ ((1,), {}), ((), {'a': 1}), ((1, 2), {'a': 1}), ] for args, kwargs in arg_types: with self.subTest(args=args, kwargs=kwargs): with self.assertWarns(DeprecationWarning): threading.RLock(*args, **kwargs) # Subtypes with custom `__init__` are allowed (but, not recommended): class CustomRLock(self.locktype): def __init__(self, a, *, b) -> None: super().__init__() with warnings.catch_warnings(record=True) as warnings_log: CustomRLock(1, b=2) self.assertEqual(warnings_log, []) class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) class ConditionAsRLockTests(lock_tests.RLockTests): # Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) def test_recursion_count(self): self.skipTest("Condition does not expose _recursion_count()") class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) class BarrierTests(lock_tests.BarrierTests): barriertype = staticmethod(threading.Barrier) class MiscTestCase(unittest.TestCase): def test__all__(self): restore_default_excepthook(self) extra = {"ThreadError"} not_exported = {'currentThread', 'activeCount'} support.check__all__(self, threading, ('threading', '_thread'), extra=extra, not_exported=not_exported) class InterruptMainTests(unittest.TestCase): def check_interrupt_main_with_signal_handler(self, signum): def handler(signum, frame): 1/0 old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) with self.assertRaises(ZeroDivisionError): _thread.interrupt_main() def check_interrupt_main_noerror(self, signum): handler = signal.getsignal(signum) try: # No exception should arise. signal.signal(signum, signal.SIG_IGN) _thread.interrupt_main(signum) signal.signal(signum, signal.SIG_DFL) _thread.interrupt_main(signum) finally: # Restore original handler signal.signal(signum, handler) @requires_gil_enabled("gh-118433: Flaky due to a longstanding bug") def test_interrupt_main_subthread(self): # Calling start_new_thread with a function that executes interrupt_main # should raise KeyboardInterrupt upon completion. 
def call_interrupt(): _thread.interrupt_main() t = threading.Thread(target=call_interrupt) with self.assertRaises(KeyboardInterrupt): t.start() t.join() t.join() def test_interrupt_main_mainthread(self): # Make sure that if interrupt_main is called in main thread that # KeyboardInterrupt is raised instantly. with self.assertRaises(KeyboardInterrupt): _thread.interrupt_main() def test_interrupt_main_with_signal_handler(self): self.check_interrupt_main_with_signal_handler(signal.SIGINT) self.check_interrupt_main_with_signal_handler(signal.SIGTERM) def test_interrupt_main_noerror(self): self.check_interrupt_main_noerror(signal.SIGINT) self.check_interrupt_main_noerror(signal.SIGTERM) def test_interrupt_main_invalid_signal(self): self.assertRaises(ValueError, _thread.interrupt_main, -1) self.assertRaises(ValueError, _thread.interrupt_main, signal.NSIG) self.assertRaises(ValueError, _thread.interrupt_main, 1000000) @threading_helper.reap_threads def test_can_interrupt_tight_loops(self): cont = [True] started = [False] interrupted = [False] def worker(started, cont, interrupted): iterations = 100_000_000 started[0] = True while cont[0]: if iterations: iterations -= 1 else: return pass interrupted[0] = True t = threading.Thread(target=worker,args=(started, cont, interrupted)) t.start() while not started[0]: pass cont[0] = False t.join() self.assertTrue(interrupted[0]) class AtexitTests(unittest.TestCase): def test_atexit_output(self): rc, out, err = assert_python_ok("-c", """if True: import threading def run_last(): print('parrot') threading._register_atexit(run_last) """) self.assertFalse(err) self.assertEqual(out.strip(), b'parrot') def test_atexit_called_once(self): rc, out, err = assert_python_ok("-c", """if True: import threading from unittest.mock import Mock mock = Mock() threading._register_atexit(mock) mock.assert_not_called() # force early shutdown to ensure it was called once threading._shutdown() mock.assert_called_once() """) self.assertFalse(err) def test_atexit_after_shutdown(self): # The only way to do this is by registering an atexit within # an atexit, which is intended to raise an exception. rc, out, err = assert_python_ok("-c", """if True: import threading def func(): pass def run_last(): threading._register_atexit(func) threading._register_atexit(run_last) """) self.assertTrue(err) self.assertIn("RuntimeError: can't register atexit after shutdown", err.decode()) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_weakref.py000066400000000000000000002341651471441230600213750ustar00rootroot00000000000000import gc import sys import doctest import unittest import collections import weakref import operator import contextlib import copy import threading import time import random import textwrap from test import support from test.support import script_helper, ALWAYS_EQ, suppress_immortalization from test.support import gc_collect from test.support import import_helper from test.support import threading_helper from test.support import is_wasi, Py_DEBUG # Used in ReferencesTestCase.test_ref_created_during_del() . ref_from_del = None # Used by FinalizeTestCase as a global that may be replaced by None # when the interpreter shuts down. 
_global_var = 'foobar' class C: def method(self): pass class Callable: bar = None def __call__(self, x): self.bar = x def create_function(): def f(): pass return f def create_bound_method(): return C().method class Object: def __init__(self, arg): self.arg = arg def __repr__(self): return "" % self.arg def __eq__(self, other): if isinstance(other, Object): return self.arg == other.arg return NotImplemented def __lt__(self, other): if isinstance(other, Object): return self.arg < other.arg return NotImplemented def __hash__(self): return hash(self.arg) def some_method(self): return 4 def other_method(self): return 5 class RefCycle: def __init__(self): self.cycle = self class TestBase(unittest.TestCase): def setUp(self): self.cbcalled = 0 def callback(self, ref): self.cbcalled += 1 @contextlib.contextmanager def collect_in_thread(period=0.005): """ Ensure GC collections happen in a different thread, at a high frequency. """ please_stop = False def collect(): while not please_stop: time.sleep(period) gc.collect() with support.disable_gc(): t = threading.Thread(target=collect) t.start() try: yield finally: please_stop = True t.join() class ReferencesTestCase(TestBase): def test_basic_ref(self): self.check_basic_ref(C) self.check_basic_ref(create_function) self.check_basic_ref(create_bound_method) # Just make sure the tp_repr handler doesn't raise an exception. # Live reference: o = C() wr = weakref.ref(o) repr(wr) # Dead reference: del o repr(wr) @support.cpython_only def test_ref_repr(self): obj = C() ref = weakref.ref(obj) regex = ( rf"" ) self.assertRegex(repr(ref), regex) obj = None gc_collect() self.assertRegex(repr(ref), rf'') # test type with __name__ class WithName: @property def __name__(self): return "custom_name" obj2 = WithName() ref2 = weakref.ref(obj2) regex = ( rf"" ) self.assertRegex(repr(ref2), regex) def test_repr_failure_gh99184(self): class MyConfig(dict): def __getattr__(self, x): return self[x] obj = MyConfig(offset=5) obj_weakref = weakref.ref(obj) self.assertIn('MyConfig', repr(obj_weakref)) self.assertIn('MyConfig', str(obj_weakref)) def test_basic_callback(self): self.check_basic_callback(C) self.check_basic_callback(create_function) self.check_basic_callback(create_bound_method) @support.cpython_only def test_cfunction(self): _testcapi = import_helper.import_module("_testcapi") create_cfunction = _testcapi.create_cfunction f = create_cfunction() wr = weakref.ref(f) self.assertIs(wr(), f) del f self.assertIsNone(wr()) self.check_basic_ref(create_cfunction) self.check_basic_callback(create_cfunction) def test_multiple_callbacks(self): o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del o gc_collect() # For PyPy or other GCs. self.assertIsNone(ref1(), "expected reference to be invalidated") self.assertIsNone(ref2(), "expected reference to be invalidated") self.assertEqual(self.cbcalled, 2, "callback not called the right number of times") def test_multiple_selfref_callbacks(self): # Make sure all references are invalidated before callbacks are called # # What's important here is that we're using the first # reference in the callback invoked on the second reference # (the most recently created ref is cleaned up first). This # tests that all references to the object are invalidated # before any of the callbacks are invoked, so that we only # have one invocation of _weakref.c:cleanup_helper() active # for a particular object at a time. 
# def callback(object, self=self): self.ref() c = C() self.ref = weakref.ref(c, callback) ref1 = weakref.ref(c, callback) del c def test_constructor_kwargs(self): c = C() self.assertRaises(TypeError, weakref.ref, c, callback=None) def test_proxy_ref(self): o = C() o.bar = 1 ref1 = weakref.proxy(o, self.callback) ref2 = weakref.proxy(o, self.callback) del o gc_collect() # For PyPy or other GCs. def check(proxy): proxy.bar self.assertRaises(ReferenceError, check, ref1) self.assertRaises(ReferenceError, check, ref2) ref3 = weakref.proxy(C()) gc_collect() # For PyPy or other GCs. self.assertRaises(ReferenceError, bool, ref3) self.assertEqual(self.cbcalled, 2) @support.cpython_only def test_proxy_repr(self): obj = C() ref = weakref.proxy(obj, self.callback) regex = ( rf"" ) self.assertRegex(repr(ref), regex) obj = None gc_collect() self.assertRegex(repr(ref), rf'') def check_basic_ref(self, factory): o = factory() ref = weakref.ref(o) self.assertIsNotNone(ref(), "weak reference to live object should be live") o2 = ref() self.assertIs(o, o2, "() should return original object if live") def check_basic_callback(self, factory): self.cbcalled = 0 o = factory() ref = weakref.ref(o, self.callback) del o gc_collect() # For PyPy or other GCs. self.assertEqual(self.cbcalled, 1, "callback did not properly set 'cbcalled'") self.assertIsNone(ref(), "ref2 should be dead after deleting object reference") def test_ref_reuse(self): o = C() ref1 = weakref.ref(o) # create a proxy to make sure that there's an intervening creation # between these two; it should make no difference proxy = weakref.proxy(o) ref2 = weakref.ref(o) self.assertIs(ref1, ref2, "reference object w/out callback should be re-used") o = C() proxy = weakref.proxy(o) ref1 = weakref.ref(o) ref2 = weakref.ref(o) self.assertIs(ref1, ref2, "reference object w/out callback should be re-used") self.assertEqual(weakref.getweakrefcount(o), 2, "wrong weak ref count for object") del proxy gc_collect() # For PyPy or other GCs. 
self.assertEqual(weakref.getweakrefcount(o), 1, "wrong weak ref count for object after deleting proxy") def test_proxy_reuse(self): o = C() proxy1 = weakref.proxy(o) ref = weakref.ref(o) proxy2 = weakref.proxy(o) self.assertIs(proxy1, proxy2, "proxy object w/out callback should have been re-used") def test_basic_proxy(self): o = C() self.check_proxy(o, weakref.proxy(o)) L = collections.UserList() p = weakref.proxy(L) self.assertFalse(p, "proxy for empty UserList should be false") p.append(12) self.assertEqual(len(L), 1) self.assertTrue(p, "proxy for non-empty UserList should be true") p[:] = [2, 3] self.assertEqual(len(L), 2) self.assertEqual(len(p), 2) self.assertIn(3, p, "proxy didn't support __contains__() properly") p[1] = 5 self.assertEqual(L[1], 5) self.assertEqual(p[1], 5) L2 = collections.UserList(L) p2 = weakref.proxy(L2) self.assertEqual(p, p2) ## self.assertEqual(repr(L2), repr(p2)) L3 = collections.UserList(range(10)) p3 = weakref.proxy(L3) self.assertEqual(L3[:], p3[:]) self.assertEqual(L3[5:], p3[5:]) self.assertEqual(L3[:5], p3[:5]) self.assertEqual(L3[2:5], p3[2:5]) def test_proxy_unicode(self): # See bug 5037 class C(object): def __str__(self): return "string" def __bytes__(self): return b"bytes" instance = C() self.assertIn("__bytes__", dir(weakref.proxy(instance))) self.assertEqual(bytes(weakref.proxy(instance)), b"bytes") def test_proxy_index(self): class C: def __index__(self): return 10 o = C() p = weakref.proxy(o) self.assertEqual(operator.index(p), 10) def test_proxy_div(self): class C: def __floordiv__(self, other): return 42 def __ifloordiv__(self, other): return 21 o = C() p = weakref.proxy(o) self.assertEqual(p // 5, 42) p //= 5 self.assertEqual(p, 21) def test_proxy_matmul(self): class C: def __matmul__(self, other): return 1729 def __rmatmul__(self, other): return -163 def __imatmul__(self, other): return 561 o = C() p = weakref.proxy(o) self.assertEqual(p @ 5, 1729) self.assertEqual(5 @ p, -163) p @= 5 self.assertEqual(p, 561) # The PyWeakref_* C API is documented as allowing either NULL or # None as the value for the callback, where either means "no # callback". The "no callback" ref and proxy objects are supposed # to be shared so long as they exist by all callers so long as # they are active. In Python 2.3.3 and earlier, this guarantee # was not honored, and was broken in different ways for # PyWeakref_NewRef() and PyWeakref_NewProxy(). (Two tests.) 
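    # Illustrative sketch (not part of the vendored upstream tests): the
    # sharing guarantee as seen from pure Python -- while a callback-less ref
    # or proxy to an object is alive, asking for another one returns the very
    # same object.
    #
    #     import weakref
    #     class C:
    #         pass
    #     o = C()
    #     r1 = weakref.ref(o)
    #     r2 = weakref.ref(o)
    #     assert r1 is r2                  # shared no-callback ref
    #     p1 = weakref.proxy(o)
    #     p2 = weakref.proxy(o)
    #     assert p1 is p2                  # shared no-callback proxy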
def test_shared_ref_without_callback(self): self.check_shared_without_callback(weakref.ref) def test_shared_proxy_without_callback(self): self.check_shared_without_callback(weakref.proxy) def check_shared_without_callback(self, makeref): o = Object(1) p1 = makeref(o, None) p2 = makeref(o, None) self.assertIs(p1, p2, "both callbacks were None in the C API") del p1, p2 p1 = makeref(o) p2 = makeref(o, None) self.assertIs(p1, p2, "callbacks were NULL, None in the C API") del p1, p2 p1 = makeref(o) p2 = makeref(o) self.assertIs(p1, p2, "both callbacks were NULL in the C API") del p1, p2 p1 = makeref(o, None) p2 = makeref(o) self.assertIs(p1, p2, "callbacks were None, NULL in the C API") def test_callable_proxy(self): o = Callable() ref1 = weakref.proxy(o) self.check_proxy(o, ref1) self.assertIs(type(ref1), weakref.CallableProxyType, "proxy is not of callable type") ref1('twinkies!') self.assertEqual(o.bar, 'twinkies!', "call through proxy not passed through to original") ref1(x='Splat.') self.assertEqual(o.bar, 'Splat.', "call through proxy not passed through to original") # expect due to too few args self.assertRaises(TypeError, ref1) # expect due to too many args self.assertRaises(TypeError, ref1, 1, 2, 3) def check_proxy(self, o, proxy): o.foo = 1 self.assertEqual(proxy.foo, 1, "proxy does not reflect attribute addition") o.foo = 2 self.assertEqual(proxy.foo, 2, "proxy does not reflect attribute modification") del o.foo self.assertFalse(hasattr(proxy, 'foo'), "proxy does not reflect attribute removal") proxy.foo = 1 self.assertEqual(o.foo, 1, "object does not reflect attribute addition via proxy") proxy.foo = 2 self.assertEqual(o.foo, 2, "object does not reflect attribute modification via proxy") del proxy.foo self.assertFalse(hasattr(o, 'foo'), "object does not reflect attribute removal via proxy") def test_proxy_deletion(self): # Test clearing of SF bug #762891 class Foo: result = None def __delitem__(self, accessor): self.result = accessor g = Foo() f = weakref.proxy(g) del f[0] self.assertEqual(f.result, 0) def test_proxy_bool(self): # Test clearing of SF bug #1170766 class List(list): pass lyst = List() self.assertEqual(bool(weakref.proxy(lyst)), bool(lyst)) def test_proxy_iter(self): # Test fails with a debug build of the interpreter # (see bpo-38395). obj = None class MyObj: def __iter__(self): nonlocal obj del obj return NotImplemented obj = MyObj() p = weakref.proxy(obj) with self.assertRaises(TypeError): # "blech" in p calls MyObj.__iter__ through the proxy, # without keeping a reference to the real object, so it # can be killed in the middle of the call "blech" in p def test_proxy_next(self): arr = [4, 5, 6] def iterator_func(): yield from arr it = iterator_func() class IteratesWeakly: def __iter__(self): return weakref.proxy(it) weak_it = IteratesWeakly() # Calls proxy.__next__ self.assertEqual(list(weak_it), [4, 5, 6]) def test_proxy_bad_next(self): # bpo-44720: PyIter_Next() shouldn't be called if the reference # isn't an iterator. 
not_an_iterator = lambda: 0 class A: def __iter__(self): return weakref.proxy(not_an_iterator) a = A() msg = "Weakref proxy referenced a non-iterator" with self.assertRaisesRegex(TypeError, msg): list(a) def test_proxy_reversed(self): class MyObj: def __len__(self): return 3 def __reversed__(self): return iter('cba') obj = MyObj() self.assertEqual("".join(reversed(weakref.proxy(obj))), "cba") def test_proxy_hash(self): class MyObj: def __hash__(self): return 42 obj = MyObj() with self.assertRaises(TypeError): hash(weakref.proxy(obj)) class MyObj: __hash__ = None obj = MyObj() with self.assertRaises(TypeError): hash(weakref.proxy(obj)) def test_getweakrefcount(self): o = C() ref1 = weakref.ref(o) ref2 = weakref.ref(o, self.callback) self.assertEqual(weakref.getweakrefcount(o), 2, "got wrong number of weak reference objects") proxy1 = weakref.proxy(o) proxy2 = weakref.proxy(o, self.callback) self.assertEqual(weakref.getweakrefcount(o), 4, "got wrong number of weak reference objects") del ref1, ref2, proxy1, proxy2 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefcount(o), 0, "weak reference objects not unlinked from" " referent when discarded.") # assumes ints do not support weakrefs self.assertEqual(weakref.getweakrefcount(1), 0, "got wrong number of weak reference objects for int") def test_getweakrefs(self): o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref1 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [ref2], "list of refs does not match") o = C() ref1 = weakref.ref(o, self.callback) ref2 = weakref.ref(o, self.callback) del ref2 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [ref1], "list of refs does not match") del ref1 gc_collect() # For PyPy or other GCs. self.assertEqual(weakref.getweakrefs(o), [], "list of refs not cleared") # assumes ints do not support weakrefs self.assertEqual(weakref.getweakrefs(1), [], "list of refs does not match for int") def test_newstyle_number_ops(self): class F(float): pass f = F(2.0) p = weakref.proxy(f) self.assertEqual(p + 1.0, 3.0) self.assertEqual(1.0 + p, 3.0) # this used to SEGV def test_callbacks_protected(self): # Callbacks protected from already-set exceptions? # Regression test for SF bug #478534. class BogusError(Exception): pass data = {} def remove(k): del data[k] def encapsulate(): f = lambda : () data[weakref.ref(f, remove)] = None raise BogusError try: encapsulate() except BogusError: pass else: self.fail("exception not properly restored") try: encapsulate() except BogusError: pass else: self.fail("exception not properly restored") def test_sf_bug_840829(self): # "weakref callbacks and gc corrupt memory" # subtype_dealloc erroneously exposed a new-style instance # already in the process of getting deallocated to gc, # causing double-deallocation if the instance had a weakref # callback that triggered gc. # If the bug exists, there probably won't be an obvious symptom # in a release build. In a debug build, a segfault will occur # when the second attempt to remove the instance from the "list # of all objects" occurs. import gc class C(object): pass c = C() wr = weakref.ref(c, lambda ignore: gc.collect()) del c # There endeth the first part. It gets worse. 
del wr c1 = C() c1.i = C() wr = weakref.ref(c1.i, lambda ignore: gc.collect()) c2 = C() c2.c1 = c1 del c1 # still alive because c2 points to it # Now when subtype_dealloc gets called on c2, it's not enough just # that c2 is immune from gc while the weakref callbacks associated # with c2 execute (there are none in this 2nd half of the test, btw). # subtype_dealloc goes on to call the base classes' deallocs too, # so any gc triggered by weakref callbacks associated with anything # torn down by a base class dealloc can also trigger double # deallocation of c2. del c2 @suppress_immortalization() def test_callback_in_cycle(self): import gc class J(object): pass class II(object): def acallback(self, ignore): self.J I = II() I.J = J I.wr = weakref.ref(J, I.acallback) # Now J and II are each in a self-cycle (as all new-style class # objects are, since their __mro__ points back to them). I holds # both a weak reference (I.wr) and a strong reference (I.J) to class # J. I is also in a cycle (I.wr points to a weakref that references # I.acallback). When we del these three, they all become trash, but # the cycles prevent any of them from getting cleaned up immediately. # Instead they have to wait for cyclic gc to deduce that they're # trash. # # gc used to call tp_clear on all of them, and the order in which # it does that is pretty accidental. The exact order in which we # built up these things manages to provoke gc into running tp_clear # in just the right order (I last). Calling tp_clear on II leaves # behind an insane class object (its __mro__ becomes NULL). Calling # tp_clear on J breaks its self-cycle, but J doesn't get deleted # just then because of the strong reference from I.J. Calling # tp_clear on I starts to clear I's __dict__, and just happens to # clear I.J first -- I.wr is still intact. That removes the last # reference to J, which triggers the weakref callback. The callback # tries to do "self.J", and instances of new-style classes look up # attributes ("J") in the class dict first. The class (II) wants to # search II.__mro__, but that's NULL. The result was a segfault in # a release build, and an assert failure in a debug build. del I, J, II gc.collect() def test_callback_reachable_one_way(self): import gc # This one broke the first patch that fixed the previous test. In this case, # the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to # c1 from c2, but not vice-versa. The result was that c2's __dict__ # got tp_clear'ed by the time the c2.cb callback got invoked. class C: def cb(self, ignore): self.me self.c1 self.wr c1, c2 = C(), C() c2.me = c2 c2.c1 = c1 c2.wr = weakref.ref(c1, c2.cb) del c1, c2 gc.collect() def test_callback_different_classes(self): import gc # Like test_callback_reachable_one_way, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop # c2's class (C) from getting tp_clear'ed before c2.cb is invoked. # The result was a segfault (C.__mro__ was NULL when the callback # tried to look up self.me). class C(object): def cb(self, ignore): self.me self.c1 self.wr class D: pass c1, c2 = D(), C() c2.me = c2 c2.c1 = c1 c2.wr = weakref.ref(c1, c2.cb) del c1, c2, C, D gc.collect() @suppress_immortalization() def test_callback_in_cycle_resurrection(self): import gc # Do something nasty in a weakref callback: resurrect objects # from dead cycles. 
For this to be attempted, the weakref and # its callback must also be part of the cyclic trash (else the # objects reachable via the callback couldn't be in cyclic trash # to begin with -- the callback would act like an external root). # But gc clears trash weakrefs with callbacks early now, which # disables the callbacks, so the callbacks shouldn't get called # at all (and so nothing actually gets resurrected). alist = [] class C(object): def __init__(self, value): self.attribute = value def acallback(self, ignore): alist.append(self.c) c1, c2 = C(1), C(2) c1.c = c2 c2.c = c1 c1.wr = weakref.ref(c2, c1.acallback) c2.wr = weakref.ref(c1, c2.acallback) def C_went_away(ignore): alist.append("C went away") wr = weakref.ref(C, C_went_away) del c1, c2, C # make them all trash self.assertEqual(alist, []) # del isn't enough to reclaim anything gc.collect() # c1.wr and c2.wr were part of the cyclic trash, so should have # been cleared without their callbacks executing. OTOH, the weakref # to C is bound to a function local (wr), and wasn't trash, so that # callback should have been invoked when C went away. self.assertEqual(alist, ["C went away"]) # The remaining weakref should be dead now (its callback ran). self.assertEqual(wr(), None) del alist[:] gc.collect() self.assertEqual(alist, []) def test_callbacks_on_callback(self): import gc # Set up weakref callbacks *on* weakref callbacks. alist = [] def safe_callback(ignore): alist.append("safe_callback called") class C(object): def cb(self, ignore): alist.append("cb called") c, d = C(), C() c.other = d d.other = c callback = c.cb c.wr = weakref.ref(d, callback) # this won't trigger d.wr = weakref.ref(callback, d.cb) # ditto external_wr = weakref.ref(callback, safe_callback) # but this will self.assertIs(external_wr(), callback) # The weakrefs attached to c and d should get cleared, so that # C.cb is never called. But external_wr isn't part of the cyclic # trash, and no cyclic trash is reachable from it, so safe_callback # should get invoked when the bound method object callback (c.cb) # -- which is itself a callback, and also part of the cyclic trash -- # gets reclaimed at the end of gc. del callback, c, d, C self.assertEqual(alist, []) # del isn't enough to clean up cycles gc.collect() self.assertEqual(alist, ["safe_callback called"]) self.assertEqual(external_wr(), None) del alist[:] gc.collect() self.assertEqual(alist, []) def test_gc_during_ref_creation(self): self.check_gc_during_creation(weakref.ref) def test_gc_during_proxy_creation(self): self.check_gc_during_creation(weakref.proxy) def check_gc_during_creation(self, makeref): thresholds = gc.get_threshold() gc.set_threshold(1, 1, 1) gc.collect() class A: pass def callback(*args): pass referenced = A() a = A() a.a = a a.wr = makeref(referenced) try: # now make sure the object and the ref get labeled as # cyclic trash: a = A() weakref.ref(referenced, callback) finally: gc.set_threshold(*thresholds) def test_ref_created_during_del(self): # Bug #1377858 # A weakref created in an object's __del__() would crash the # interpreter when the weakref was cleaned up since it would refer to # non-existent memory. This test should not segfault the interpreter. 
class Target(object): def __del__(self): global ref_from_del ref_from_del = weakref.ref(self) w = Target() def test_init(self): # Issue 3634 # .__init__() doesn't check errors correctly r = weakref.ref(Exception) self.assertRaises(TypeError, r.__init__, 0, 0, 0, 0, 0) # No exception should be raised here gc.collect() @suppress_immortalization() def test_classes(self): # Check that classes are weakrefable. class A(object): pass l = [] weakref.ref(int) a = weakref.ref(A, l.append) A = None gc.collect() self.assertEqual(a(), None) self.assertEqual(l, [a]) def test_equality(self): # Alive weakrefs defer equality testing to their underlying object. x = Object(1) y = Object(1) z = Object(2) a = weakref.ref(x) b = weakref.ref(y) c = weakref.ref(z) d = weakref.ref(x) # Note how we directly test the operators here, to stress both # __eq__ and __ne__. self.assertTrue(a == b) self.assertFalse(a != b) self.assertFalse(a == c) self.assertTrue(a != c) self.assertTrue(a == d) self.assertFalse(a != d) self.assertFalse(a == x) self.assertTrue(a != x) self.assertTrue(a == ALWAYS_EQ) self.assertFalse(a != ALWAYS_EQ) del x, y, z gc.collect() for r in a, b, c: # Sanity check self.assertIs(r(), None) # Dead weakrefs compare by identity: whether `a` and `d` are the # same weakref object is an implementation detail, since they pointed # to the same original object and didn't have a callback. # (see issue #16453). self.assertFalse(a == b) self.assertTrue(a != b) self.assertFalse(a == c) self.assertTrue(a != c) self.assertEqual(a == d, a is d) self.assertEqual(a != d, a is not d) def test_ordering(self): # weakrefs cannot be ordered, even if the underlying objects can. ops = [operator.lt, operator.gt, operator.le, operator.ge] x = Object(1) y = Object(1) a = weakref.ref(x) b = weakref.ref(y) for op in ops: self.assertRaises(TypeError, op, a, b) # Same when dead. del x, y gc.collect() for op in ops: self.assertRaises(TypeError, op, a, b) def test_hashing(self): # Alive weakrefs hash the same as the underlying object x = Object(42) y = Object(42) a = weakref.ref(x) b = weakref.ref(y) self.assertEqual(hash(a), hash(42)) del x, y gc.collect() # Dead weakrefs: # - retain their hash is they were hashed when alive; # - otherwise, cannot be hashed. self.assertEqual(hash(a), hash(42)) self.assertRaises(TypeError, hash, b) @unittest.skipIf(is_wasi and Py_DEBUG, "requires deep stack") def test_trashcan_16602(self): # Issue #16602: when a weakref's target was part of a long # deallocation chain, the trashcan mechanism could delay clearing # of the weakref and make the target object visible from outside # code even though its refcount had dropped to 0. A crash ensued. 
class C: def __init__(self, parent): if not parent: return wself = weakref.ref(self) def cb(wparent): o = wself() self.wparent = weakref.ref(parent, cb) d = weakref.WeakKeyDictionary() root = c = C(None) for n in range(100): d[c] = c = C(c) del root gc.collect() def test_callback_attribute(self): x = Object(1) callback = lambda ref: None ref1 = weakref.ref(x, callback) self.assertIs(ref1.__callback__, callback) ref2 = weakref.ref(x) self.assertIsNone(ref2.__callback__) def test_callback_attribute_after_deletion(self): x = Object(1) ref = weakref.ref(x, self.callback) self.assertIsNotNone(ref.__callback__) del x support.gc_collect() self.assertIsNone(ref.__callback__) def test_set_callback_attribute(self): x = Object(1) callback = lambda ref: None ref1 = weakref.ref(x, callback) with self.assertRaises(AttributeError): ref1.__callback__ = lambda ref: None def test_callback_gcs(self): class ObjectWithDel(Object): def __del__(self): pass x = ObjectWithDel(1) ref1 = weakref.ref(x, lambda ref: support.gc_collect()) del x support.gc_collect() @support.cpython_only def test_no_memory_when_clearing(self): # gh-118331: Make sure we do not raise an exception from the destructor # when clearing weakrefs if allocating the intermediate tuple fails. code = textwrap.dedent(""" import _testcapi import weakref class TestObj: pass def callback(obj): pass obj = TestObj() # The choice of 50 is arbitrary, but must be large enough to ensure # the allocation won't be serviced by the free list. wrs = [weakref.ref(obj, callback) for _ in range(50)] _testcapi.set_nomemory(0) del obj """).strip() res, _ = script_helper.run_python_until_end("-c", code) stderr = res.err.decode("ascii", "backslashreplace") self.assertNotRegex(stderr, "_Py_Dealloc: Deallocator of type 'TestObj'") class SubclassableWeakrefTestCase(TestBase): def test_subclass_refs(self): class MyRef(weakref.ref): def __init__(self, ob, callback=None, value=42): self.value = value super().__init__(ob, callback) def __call__(self): self.called = True return super().__call__() o = Object("foo") mr = MyRef(o, value=24) self.assertIs(mr(), o) self.assertTrue(mr.called) self.assertEqual(mr.value, 24) del o gc_collect() # For PyPy or other GCs. 
self.assertIsNone(mr()) self.assertTrue(mr.called) def test_subclass_refs_dont_replace_standard_refs(self): class MyRef(weakref.ref): pass o = Object(42) r1 = MyRef(o) r2 = weakref.ref(o) self.assertIsNot(r1, r2) self.assertEqual(weakref.getweakrefs(o), [r2, r1]) self.assertEqual(weakref.getweakrefcount(o), 2) r3 = MyRef(o) self.assertEqual(weakref.getweakrefcount(o), 3) refs = weakref.getweakrefs(o) self.assertEqual(len(refs), 3) self.assertIs(r2, refs[0]) self.assertIn(r1, refs[1:]) self.assertIn(r3, refs[1:]) def test_subclass_refs_dont_conflate_callbacks(self): class MyRef(weakref.ref): pass o = Object(42) r1 = MyRef(o, id) r2 = MyRef(o, str) self.assertIsNot(r1, r2) refs = weakref.getweakrefs(o) self.assertIn(r1, refs) self.assertIn(r2, refs) def test_subclass_refs_with_slots(self): class MyRef(weakref.ref): __slots__ = "slot1", "slot2" def __new__(type, ob, callback, slot1, slot2): return weakref.ref.__new__(type, ob, callback) def __init__(self, ob, callback, slot1, slot2): self.slot1 = slot1 self.slot2 = slot2 def meth(self): return self.slot1 + self.slot2 o = Object(42) r = MyRef(o, None, "abc", "def") self.assertEqual(r.slot1, "abc") self.assertEqual(r.slot2, "def") self.assertEqual(r.meth(), "abcdef") self.assertFalse(hasattr(r, "__dict__")) def test_subclass_refs_with_cycle(self): """Confirm https://bugs.python.org/issue3100 is fixed.""" # An instance of a weakref subclass can have attributes. # If such a weakref holds the only strong reference to the object, # deleting the weakref will delete the object. In this case, # the callback must not be called, because the ref object is # being deleted. class MyRef(weakref.ref): pass # Use a local callback, for "regrtest -R::" # to detect refcounting problems def callback(w): self.cbcalled += 1 o = C() r1 = MyRef(o, callback) r1.o = o del o del r1 # Used to crash here self.assertEqual(self.cbcalled, 0) # Same test, with two weakrefs to the same object # (since code paths are different) o = C() r1 = MyRef(o, callback) r2 = MyRef(o, callback) r1.r = r2 r2.o = o del o del r2 del r1 # Used to crash here self.assertEqual(self.cbcalled, 0) class WeakMethodTestCase(unittest.TestCase): def _subclass(self): """Return an Object subclass overriding `some_method`.""" class C(Object): def some_method(self): return 6 return C def test_alive(self): o = Object(1) r = weakref.WeakMethod(o.some_method) self.assertIsInstance(r, weakref.ReferenceType) self.assertIsInstance(r(), type(o.some_method)) self.assertIs(r().__self__, o) self.assertIs(r().__func__, o.some_method.__func__) self.assertEqual(r()(), 4) def test_object_dead(self): o = Object(1) r = weakref.WeakMethod(o.some_method) del o gc.collect() self.assertIs(r(), None) def test_method_dead(self): C = self._subclass() o = C(1) r = weakref.WeakMethod(o.some_method) del C.some_method gc.collect() self.assertIs(r(), None) def test_callback_when_object_dead(self): # Test callback behaviour when object dies first. C = self._subclass() calls = [] def cb(arg): calls.append(arg) o = C(1) r = weakref.WeakMethod(o.some_method, cb) del o gc.collect() self.assertEqual(calls, [r]) # Callback is only called once. C.some_method = Object.some_method gc.collect() self.assertEqual(calls, [r]) def test_callback_when_method_dead(self): # Test callback behaviour when method dies first. C = self._subclass() calls = [] def cb(arg): calls.append(arg) o = C(1) r = weakref.WeakMethod(o.some_method, cb) del C.some_method gc.collect() self.assertEqual(calls, [r]) # Callback is only called once. 
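# A small stand-alone sketch of why weakref.WeakMethod (covered by the
# surrounding test case) exists: a plain ref to a bound method dies at once,
# because bound methods are created on demand.  `Service` is a made-up class
# used only for illustration.
#
#     import gc, weakref
#
#     class Service:
#         def ping(self):
#             return "pong"
#
#     s = Service()
#     dead = weakref.ref(s.ping)   # temporary bound method, gone right away
#     gc.collect()
#     assert dead() is None
#     wm = weakref.WeakMethod(s.ping)
#     assert wm()() == "pong"      # re-creates the bound method on demand
#     del s
#     gc.collect()
#     assert wm() is None          # dies together with the instance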
del o gc.collect() self.assertEqual(calls, [r]) @support.cpython_only def test_no_cycles(self): # A WeakMethod doesn't create any reference cycle to itself. o = Object(1) def cb(_): pass r = weakref.WeakMethod(o.some_method, cb) wr = weakref.ref(r) del r self.assertIs(wr(), None) def test_equality(self): def _eq(a, b): self.assertTrue(a == b) self.assertFalse(a != b) def _ne(a, b): self.assertTrue(a != b) self.assertFalse(a == b) x = Object(1) y = Object(1) a = weakref.WeakMethod(x.some_method) b = weakref.WeakMethod(y.some_method) c = weakref.WeakMethod(x.other_method) d = weakref.WeakMethod(y.other_method) # Objects equal, same method _eq(a, b) _eq(c, d) # Objects equal, different method _ne(a, c) _ne(a, d) _ne(b, c) _ne(b, d) # Objects unequal, same or different method z = Object(2) e = weakref.WeakMethod(z.some_method) f = weakref.WeakMethod(z.other_method) _ne(a, e) _ne(a, f) _ne(b, e) _ne(b, f) # Compare with different types _ne(a, x.some_method) _eq(a, ALWAYS_EQ) del x, y, z gc.collect() # Dead WeakMethods compare by identity refs = a, b, c, d, e, f for q in refs: for r in refs: self.assertEqual(q == r, q is r) self.assertEqual(q != r, q is not r) def test_hashing(self): # Alive WeakMethods are hashable if the underlying object is # hashable. x = Object(1) y = Object(1) a = weakref.WeakMethod(x.some_method) b = weakref.WeakMethod(y.some_method) c = weakref.WeakMethod(y.other_method) # Since WeakMethod objects are equal, the hashes should be equal. self.assertEqual(hash(a), hash(b)) ha = hash(a) # Dead WeakMethods retain their old hash value del x, y gc.collect() self.assertEqual(hash(a), ha) self.assertEqual(hash(b), ha) # If it wasn't hashed when alive, a dead WeakMethod cannot be hashed. self.assertRaises(TypeError, hash, c) class MappingTestCase(TestBase): COUNT = 10 if support.check_sanitizer(thread=True) and support.Py_GIL_DISABLED: # Reduce iteration count to get acceptable latency NUM_THREADED_ITERATIONS = 1000 else: NUM_THREADED_ITERATIONS = 100000 def check_len_cycles(self, dict_type, cons): N = 20 items = [RefCycle() for i in range(N)] dct = dict_type(cons(o) for o in items) # Keep an iterator alive it = dct.items() try: next(it) except StopIteration: pass del items gc.collect() n1 = len(dct) del it gc.collect() n2 = len(dct) # one item may be kept alive inside the iterator self.assertIn(n1, (0, 1)) self.assertEqual(n2, 0) def test_weak_keyed_len_cycles(self): self.check_len_cycles(weakref.WeakKeyDictionary, lambda k: (k, 1)) def test_weak_valued_len_cycles(self): self.check_len_cycles(weakref.WeakValueDictionary, lambda k: (1, k)) def check_len_race(self, dict_type, cons): # Extended sanity checks for len() in the face of cyclic collection self.addCleanup(gc.set_threshold, *gc.get_threshold()) for th in range(1, 100): N = 20 gc.collect(0) gc.set_threshold(th, th, th) items = [RefCycle() for i in range(N)] dct = dict_type(cons(o) for o in items) del items # All items will be collected at next garbage collection pass it = dct.items() try: next(it) except StopIteration: pass n1 = len(dct) del it n2 = len(dct) self.assertGreaterEqual(n1, 0) self.assertLessEqual(n1, N) self.assertGreaterEqual(n2, 0) self.assertLessEqual(n2, n1) def test_weak_keyed_len_race(self): self.check_len_race(weakref.WeakKeyDictionary, lambda k: (k, 1)) def test_weak_valued_len_race(self): self.check_len_race(weakref.WeakValueDictionary, lambda k: (1, k)) def test_weak_values(self): # # This exercises d.copy(), d.items(), d[], del d[], len(d). 
# dict, objects = self.make_weak_valued_dict() for o in objects: self.assertEqual(weakref.getweakrefcount(o), 1) self.assertIs(o, dict[o.arg], "wrong object returned by weak dict!") items1 = list(dict.items()) items2 = list(dict.copy().items()) items1.sort() items2.sort() self.assertEqual(items1, items2, "cloning of weak-valued dictionary did not work!") del items1, items2 self.assertEqual(len(dict), self.COUNT) del objects[0] gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), self.COUNT - 1, "deleting object did not cause dictionary update") del objects, o gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), 0, "deleting the values did not clear the dictionary") # regression on SF bug #447152: dict = weakref.WeakValueDictionary() self.assertRaises(KeyError, dict.__getitem__, 1) dict[2] = C() gc_collect() # For PyPy or other GCs. self.assertRaises(KeyError, dict.__getitem__, 2) def test_weak_keys(self): # # This exercises d.copy(), d.items(), d[] = v, d[], del d[], # len(d), k in d. # dict, objects = self.make_weak_keyed_dict() for o in objects: self.assertEqual(weakref.getweakrefcount(o), 1, "wrong number of weak references to %r!" % o) self.assertIs(o.arg, dict[o], "wrong object returned by weak dict!") items1 = dict.items() items2 = dict.copy().items() self.assertEqual(set(items1), set(items2), "cloning of weak-keyed dictionary did not work!") del items1, items2 self.assertEqual(len(dict), self.COUNT) del objects[0] gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), (self.COUNT - 1), "deleting object did not cause dictionary update") del objects, o gc_collect() # For PyPy or other GCs. self.assertEqual(len(dict), 0, "deleting the keys did not clear the dictionary") o = Object(42) dict[o] = "What is the meaning of the universe?" 
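# The dictionary behaviour driven by the tests around here, in a short
# self-contained sketch (assumes only the stdlib; `Value` is made up):
# entries of a WeakValueDictionary vanish as soon as their value is
# collected, and WeakKeyDictionary behaves symmetrically for keys.
#
#     import gc, weakref
#
#     class Value:
#         pass
#
#     cache = weakref.WeakValueDictionary()
#     v = Value()
#     cache["answer"] = v
#     assert cache["answer"] is v
#     del v
#     gc.collect()                  # PyPy needs an explicit collect
#     assert "answer" not in cache  # the entry went away by itself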
self.assertIn(o, dict) self.assertNotIn(34, dict) def test_weak_keyed_iters(self): dict, objects = self.make_weak_keyed_dict() self.check_iters(dict) # Test keyrefs() refs = dict.keyrefs() self.assertEqual(len(refs), len(objects)) objects2 = list(objects) for wr in refs: ob = wr() self.assertIn(ob, dict) self.assertIn(ob, dict) self.assertEqual(ob.arg, dict[ob]) objects2.remove(ob) self.assertEqual(len(objects2), 0) # Test iterkeyrefs() objects2 = list(objects) self.assertEqual(len(list(dict.keyrefs())), len(objects)) for wr in dict.keyrefs(): ob = wr() self.assertIn(ob, dict) self.assertIn(ob, dict) self.assertEqual(ob.arg, dict[ob]) objects2.remove(ob) self.assertEqual(len(objects2), 0) def test_weak_valued_iters(self): dict, objects = self.make_weak_valued_dict() self.check_iters(dict) # Test valuerefs() refs = dict.valuerefs() self.assertEqual(len(refs), len(objects)) objects2 = list(objects) for wr in refs: ob = wr() self.assertEqual(ob, dict[ob.arg]) self.assertEqual(ob.arg, dict[ob.arg].arg) objects2.remove(ob) self.assertEqual(len(objects2), 0) # Test itervaluerefs() objects2 = list(objects) self.assertEqual(len(list(dict.itervaluerefs())), len(objects)) for wr in dict.itervaluerefs(): ob = wr() self.assertEqual(ob, dict[ob.arg]) self.assertEqual(ob.arg, dict[ob.arg].arg) objects2.remove(ob) self.assertEqual(len(objects2), 0) def check_iters(self, dict): # item iterator: items = list(dict.items()) for item in dict.items(): items.remove(item) self.assertFalse(items, "items() did not touch all items") # key iterator, via __iter__(): keys = list(dict.keys()) for k in dict: keys.remove(k) self.assertFalse(keys, "__iter__() did not touch all keys") # key iterator, via iterkeys(): keys = list(dict.keys()) for k in dict.keys(): keys.remove(k) self.assertFalse(keys, "iterkeys() did not touch all keys") # value iterator: values = list(dict.values()) for v in dict.values(): values.remove(v) self.assertFalse(values, "itervalues() did not touch all values") def check_weak_destroy_while_iterating(self, dict, objects, iter_name): n = len(dict) it = iter(getattr(dict, iter_name)()) next(it) # Trigger internal iteration # Destroy an object del objects[-1] gc.collect() # just in case # We have removed either the first consumed object, or another one self.assertIn(len(list(it)), [len(objects), len(objects) - 1]) del it # The removal has been committed self.assertEqual(len(dict), n - 1) def check_weak_destroy_and_mutate_while_iterating(self, dict, testcontext): # Check that we can explicitly mutate the weak dict without # interfering with delayed removal. # `testcontext` should create an iterator, destroy one of the # weakref'ed objects and then return a new key/value pair corresponding # to the destroyed object. with testcontext() as (k, v): self.assertNotIn(k, dict) with testcontext() as (k, v): self.assertRaises(KeyError, dict.__delitem__, k) self.assertNotIn(k, dict) with testcontext() as (k, v): self.assertRaises(KeyError, dict.pop, k) self.assertNotIn(k, dict) with testcontext() as (k, v): dict[k] = v self.assertEqual(dict[k], v) ddict = copy.copy(dict) with testcontext() as (k, v): dict.update(ddict) self.assertEqual(dict, ddict) with testcontext() as (k, v): dict.clear() self.assertEqual(len(dict), 0) def check_weak_del_and_len_while_iterating(self, dict, testcontext): # Check that len() works when both iterating and removing keys # explicitly through various means (.pop(), .clear()...), while # implicit mutation is deferred because an iterator is alive. 
# (each call to testcontext() should schedule one item for removal # for this test to work properly) o = Object(123456) with testcontext(): n = len(dict) # Since underlying dict is ordered, first item is popped dict.pop(next(dict.keys())) self.assertEqual(len(dict), n - 1) dict[o] = o self.assertEqual(len(dict), n) # last item in objects is removed from dict in context shutdown with testcontext(): self.assertEqual(len(dict), n - 1) # Then, (o, o) is popped dict.popitem() self.assertEqual(len(dict), n - 2) with testcontext(): self.assertEqual(len(dict), n - 3) del dict[next(dict.keys())] self.assertEqual(len(dict), n - 4) with testcontext(): self.assertEqual(len(dict), n - 5) dict.popitem() self.assertEqual(len(dict), n - 6) with testcontext(): dict.clear() self.assertEqual(len(dict), 0) self.assertEqual(len(dict), 0) def test_weak_keys_destroy_while_iterating(self): # Issue #7105: iterators shouldn't crash when a key is implicitly removed dict, objects = self.make_weak_keyed_dict() self.check_weak_destroy_while_iterating(dict, objects, 'keys') self.check_weak_destroy_while_iterating(dict, objects, 'items') self.check_weak_destroy_while_iterating(dict, objects, 'values') self.check_weak_destroy_while_iterating(dict, objects, 'keyrefs') dict, objects = self.make_weak_keyed_dict() @contextlib.contextmanager def testcontext(): try: it = iter(dict.items()) next(it) # Schedule a key/value for removal and recreate it v = objects.pop().arg gc.collect() # just in case yield Object(v), v finally: it = None # should commit all removals gc.collect() self.check_weak_destroy_and_mutate_while_iterating(dict, testcontext) # Issue #21173: len() fragile when keys are both implicitly and # explicitly removed. dict, objects = self.make_weak_keyed_dict() self.check_weak_del_and_len_while_iterating(dict, testcontext) def test_weak_values_destroy_while_iterating(self): # Issue #7105: iterators shouldn't crash when a key is implicitly removed dict, objects = self.make_weak_valued_dict() self.check_weak_destroy_while_iterating(dict, objects, 'keys') self.check_weak_destroy_while_iterating(dict, objects, 'items') self.check_weak_destroy_while_iterating(dict, objects, 'values') self.check_weak_destroy_while_iterating(dict, objects, 'itervaluerefs') self.check_weak_destroy_while_iterating(dict, objects, 'valuerefs') dict, objects = self.make_weak_valued_dict() @contextlib.contextmanager def testcontext(): try: it = iter(dict.items()) next(it) # Schedule a key/value for removal and recreate it k = objects.pop().arg gc.collect() # just in case yield k, Object(k) finally: it = None # should commit all removals gc.collect() self.check_weak_destroy_and_mutate_while_iterating(dict, testcontext) dict, objects = self.make_weak_valued_dict() self.check_weak_del_and_len_while_iterating(dict, testcontext) def test_make_weak_keyed_dict_from_dict(self): o = Object(3) dict = weakref.WeakKeyDictionary({o:364}) self.assertEqual(dict[o], 364) def test_make_weak_keyed_dict_from_weak_keyed_dict(self): o = Object(3) dict = weakref.WeakKeyDictionary({o:364}) dict2 = weakref.WeakKeyDictionary(dict) self.assertEqual(dict[o], 364) def make_weak_keyed_dict(self): dict = weakref.WeakKeyDictionary() objects = list(map(Object, range(self.COUNT))) for o in objects: dict[o] = o.arg return dict, objects def test_make_weak_valued_dict_from_dict(self): o = Object(3) dict = weakref.WeakValueDictionary({364:o}) self.assertEqual(dict[364], o) def test_make_weak_valued_dict_from_weak_valued_dict(self): o = Object(3) dict = 
weakref.WeakValueDictionary({364:o}) dict2 = weakref.WeakValueDictionary(dict) self.assertEqual(dict[364], o) def test_make_weak_valued_dict_misc(self): # errors self.assertRaises(TypeError, weakref.WeakValueDictionary.__init__) self.assertRaises(TypeError, weakref.WeakValueDictionary, {}, {}) self.assertRaises(TypeError, weakref.WeakValueDictionary, (), ()) # special keyword arguments o = Object(3) for kw in 'self', 'dict', 'other', 'iterable': d = weakref.WeakValueDictionary(**{kw: o}) self.assertEqual(list(d.keys()), [kw]) self.assertEqual(d[kw], o) def make_weak_valued_dict(self): dict = weakref.WeakValueDictionary() objects = list(map(Object, range(self.COUNT))) for o in objects: dict[o.arg] = o return dict, objects def check_popitem(self, klass, key1, value1, key2, value2): weakdict = klass() weakdict[key1] = value1 weakdict[key2] = value2 self.assertEqual(len(weakdict), 2) k, v = weakdict.popitem() self.assertEqual(len(weakdict), 1) if k is key1: self.assertIs(v, value1) else: self.assertIs(v, value2) k, v = weakdict.popitem() self.assertEqual(len(weakdict), 0) if k is key1: self.assertIs(v, value1) else: self.assertIs(v, value2) def test_weak_valued_dict_popitem(self): self.check_popitem(weakref.WeakValueDictionary, "key1", C(), "key2", C()) def test_weak_keyed_dict_popitem(self): self.check_popitem(weakref.WeakKeyDictionary, C(), "value 1", C(), "value 2") def check_setdefault(self, klass, key, value1, value2): self.assertIsNot(value1, value2, "invalid test" " -- value parameters must be distinct objects") weakdict = klass() o = weakdict.setdefault(key, value1) self.assertIs(o, value1) self.assertIn(key, weakdict) self.assertIs(weakdict.get(key), value1) self.assertIs(weakdict[key], value1) o = weakdict.setdefault(key, value2) self.assertIs(o, value1) self.assertIn(key, weakdict) self.assertIs(weakdict.get(key), value1) self.assertIs(weakdict[key], value1) def test_weak_valued_dict_setdefault(self): self.check_setdefault(weakref.WeakValueDictionary, "key", C(), C()) def test_weak_keyed_dict_setdefault(self): self.check_setdefault(weakref.WeakKeyDictionary, C(), "value 1", "value 2") def check_update(self, klass, dict): # # This exercises d.update(), len(d), d.keys(), k in d, # d.get(), d[]. 
# weakdict = klass() weakdict.update(dict) self.assertEqual(len(weakdict), len(dict)) for k in weakdict.keys(): self.assertIn(k, dict, "mysterious new key appeared in weak dict") v = dict.get(k) self.assertIs(v, weakdict[k]) self.assertIs(v, weakdict.get(k)) for k in dict.keys(): self.assertIn(k, weakdict, "original key disappeared in weak dict") v = dict[k] self.assertIs(v, weakdict[k]) self.assertIs(v, weakdict.get(k)) def test_weak_valued_dict_update(self): self.check_update(weakref.WeakValueDictionary, {1: C(), 'a': C(), C(): C()}) # errors self.assertRaises(TypeError, weakref.WeakValueDictionary.update) d = weakref.WeakValueDictionary() self.assertRaises(TypeError, d.update, {}, {}) self.assertRaises(TypeError, d.update, (), ()) self.assertEqual(list(d.keys()), []) # special keyword arguments o = Object(3) for kw in 'self', 'dict', 'other', 'iterable': d = weakref.WeakValueDictionary() d.update(**{kw: o}) self.assertEqual(list(d.keys()), [kw]) self.assertEqual(d[kw], o) def test_weak_valued_union_operators(self): a = C() b = C() c = C() wvd1 = weakref.WeakValueDictionary({1: a}) wvd2 = weakref.WeakValueDictionary({1: b, 2: a}) wvd3 = wvd1.copy() d1 = {1: c, 3: b} pairs = [(5, c), (6, b)] tmp1 = wvd1 | wvd2 # Between two WeakValueDictionaries self.assertEqual(dict(tmp1), dict(wvd1) | dict(wvd2)) self.assertIs(type(tmp1), weakref.WeakValueDictionary) wvd1 |= wvd2 self.assertEqual(wvd1, tmp1) tmp2 = wvd2 | d1 # Between WeakValueDictionary and mapping self.assertEqual(dict(tmp2), dict(wvd2) | d1) self.assertIs(type(tmp2), weakref.WeakValueDictionary) wvd2 |= d1 self.assertEqual(wvd2, tmp2) tmp3 = wvd3.copy() # Between WeakValueDictionary and iterable key, value tmp3 |= pairs self.assertEqual(dict(tmp3), dict(wvd3) | dict(pairs)) self.assertIs(type(tmp3), weakref.WeakValueDictionary) tmp4 = d1 | wvd3 # Testing .__ror__ self.assertEqual(dict(tmp4), d1 | dict(wvd3)) self.assertIs(type(tmp4), weakref.WeakValueDictionary) del a self.assertNotIn(2, tmp1) self.assertNotIn(2, tmp2) self.assertNotIn(1, tmp3) self.assertNotIn(1, tmp4) def test_weak_keyed_dict_update(self): self.check_update(weakref.WeakKeyDictionary, {C(): 1, C(): 2, C(): 3}) def test_weak_keyed_delitem(self): d = weakref.WeakKeyDictionary() o1 = Object('1') o2 = Object('2') d[o1] = 'something' d[o2] = 'something' self.assertEqual(len(d), 2) del d[o1] self.assertEqual(len(d), 1) self.assertEqual(list(d.keys()), [o2]) def test_weak_keyed_union_operators(self): o1 = C() o2 = C() o3 = C() wkd1 = weakref.WeakKeyDictionary({o1: 1, o2: 2}) wkd2 = weakref.WeakKeyDictionary({o3: 3, o1: 4}) wkd3 = wkd1.copy() d1 = {o2: '5', o3: '6'} pairs = [(o2, 7), (o3, 8)] tmp1 = wkd1 | wkd2 # Between two WeakKeyDictionaries self.assertEqual(dict(tmp1), dict(wkd1) | dict(wkd2)) self.assertIs(type(tmp1), weakref.WeakKeyDictionary) wkd1 |= wkd2 self.assertEqual(wkd1, tmp1) tmp2 = wkd2 | d1 # Between WeakKeyDictionary and mapping self.assertEqual(dict(tmp2), dict(wkd2) | d1) self.assertIs(type(tmp2), weakref.WeakKeyDictionary) wkd2 |= d1 self.assertEqual(wkd2, tmp2) tmp3 = wkd3.copy() # Between WeakKeyDictionary and iterable key, value tmp3 |= pairs self.assertEqual(dict(tmp3), dict(wkd3) | dict(pairs)) self.assertIs(type(tmp3), weakref.WeakKeyDictionary) tmp4 = d1 | wkd3 # Testing .__ror__ self.assertEqual(dict(tmp4), d1 | dict(wkd3)) self.assertIs(type(tmp4), weakref.WeakKeyDictionary) del o1 self.assertNotIn(4, tmp1.values()) self.assertNotIn(4, tmp2.values()) self.assertNotIn(1, tmp3.values()) self.assertNotIn(1, tmp4.values()) def 
test_weak_valued_delitem(self): d = weakref.WeakValueDictionary() o1 = Object('1') o2 = Object('2') d['something'] = o1 d['something else'] = o2 self.assertEqual(len(d), 2) del d['something'] self.assertEqual(len(d), 1) self.assertEqual(list(d.items()), [('something else', o2)]) def test_weak_keyed_bad_delitem(self): d = weakref.WeakKeyDictionary() o = Object('1') # An attempt to delete an object that isn't there should raise # KeyError. It didn't before 2.3. self.assertRaises(KeyError, d.__delitem__, o) self.assertRaises(KeyError, d.__getitem__, o) # If a key isn't of a weakly referencable type, __getitem__ and # __setitem__ raise TypeError. __delitem__ should too. self.assertRaises(TypeError, d.__delitem__, 13) self.assertRaises(TypeError, d.__getitem__, 13) self.assertRaises(TypeError, d.__setitem__, 13, 13) def test_weak_keyed_cascading_deletes(self): # SF bug 742860. For some reason, before 2.3 __delitem__ iterated # over the keys via self.data.iterkeys(). If things vanished from # the dict during this (or got added), that caused a RuntimeError. d = weakref.WeakKeyDictionary() mutate = False class C(object): def __init__(self, i): self.value = i def __hash__(self): return hash(self.value) def __eq__(self, other): if mutate: # Side effect that mutates the dict, by removing the # last strong reference to a key. del objs[-1] return self.value == other.value objs = [C(i) for i in range(4)] for o in objs: d[o] = o.value del o # now the only strong references to keys are in objs # Find the order in which iterkeys sees the keys. objs = list(d.keys()) # Reverse it, so that the iteration implementation of __delitem__ # has to keep looping to find the first object we delete. objs.reverse() # Turn on mutation in C.__eq__. The first time through the loop, # under the iterkeys() business the first comparison will delete # the last item iterkeys() would see, and that causes a # RuntimeError: dictionary changed size during iteration # when the iterkeys() loop goes around to try comparing the next # key. After this was fixed, it just deletes the last object *our* # "for o in obj" loop would have gotten to. mutate = True count = 0 for o in objs: count += 1 del d[o] gc_collect() # For PyPy or other GCs. self.assertEqual(len(d), 0) self.assertEqual(count, 2) def test_make_weak_valued_dict_repr(self): dict = weakref.WeakValueDictionary() self.assertRegex(repr(dict), '') def test_make_weak_keyed_dict_repr(self): dict = weakref.WeakKeyDictionary() self.assertRegex(repr(dict), '') @threading_helper.requires_working_threading() def test_threaded_weak_valued_setdefault(self): d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(self.NUM_THREADED_ITERATIONS): x = d.setdefault(10, RefCycle()) self.assertIsNot(x, None) # we never put None in there! del x @threading_helper.requires_working_threading() def test_threaded_weak_valued_pop(self): d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(self.NUM_THREADED_ITERATIONS): d[10] = RefCycle() x = d.pop(10, 10) self.assertIsNot(x, None) # we never put None in there! @threading_helper.requires_working_threading() def test_threaded_weak_valued_consistency(self): # Issue #28427: old keys should not remove new values from # WeakValueDictionary when collecting from another thread. 
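# A single-threaded restatement (illustrative only, not the upstream repro)
# of the property the threaded tests below guard: replacing the value stored
# under a key must not let the old value's cleanup callback evict the new
# value.  `Value` is a made-up class.
#
#     import gc, weakref
#
#     class Value:
#         pass
#
#     d = weakref.WeakValueDictionary()
#     d[10] = Value()        # becomes garbage as soon as it is replaced
#     new = Value()
#     d[10] = new
#     gc.collect()           # old value (and its cleanup callback) go away
#     assert d[10] is new    # the replacement must survive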
d = weakref.WeakValueDictionary() with collect_in_thread(): for i in range(2 * self.NUM_THREADED_ITERATIONS): o = RefCycle() d[10] = o # o is still alive, so the dict can't be empty self.assertEqual(len(d), 1) o = None # lose ref @support.cpython_only def test_weak_valued_consistency(self): # A single-threaded, deterministic repro for issue #28427: old keys # should not remove new values from WeakValueDictionary. This relies on # an implementation detail of CPython's WeakValueDictionary (its # underlying dictionary of KeyedRefs) to reproduce the issue. d = weakref.WeakValueDictionary() with support.disable_gc(): d[10] = RefCycle() # Keep the KeyedRef alive after it's replaced so that GC will invoke # the callback. wr = d.data[10] # Replace the value with something that isn't cyclic garbage o = RefCycle() d[10] = o # Trigger GC, which will invoke the callback for `wr` gc.collect() self.assertEqual(len(d), 1) def check_threaded_weak_dict_copy(self, type_, deepcopy): # `type_` should be either WeakKeyDictionary or WeakValueDictionary. # `deepcopy` should be either True or False. exc = [] class DummyKey: def __init__(self, ctr): self.ctr = ctr class DummyValue: def __init__(self, ctr): self.ctr = ctr def dict_copy(d, exc): try: if deepcopy is True: _ = copy.deepcopy(d) else: _ = d.copy() except Exception as ex: exc.append(ex) def pop_and_collect(lst): gc_ctr = 0 while lst: i = random.randint(0, len(lst) - 1) gc_ctr += 1 lst.pop(i) if gc_ctr % 10000 == 0: gc.collect() # just in case self.assertIn(type_, (weakref.WeakKeyDictionary, weakref.WeakValueDictionary)) d = type_() keys = [] values = [] # Initialize d with many entries for i in range(70000): k, v = DummyKey(i), DummyValue(i) keys.append(k) values.append(v) d[k] = v del k del v t_copy = threading.Thread(target=dict_copy, args=(d, exc,)) if type_ is weakref.WeakKeyDictionary: t_collect = threading.Thread(target=pop_and_collect, args=(keys,)) else: # weakref.WeakValueDictionary t_collect = threading.Thread(target=pop_and_collect, args=(values,)) t_copy.start() t_collect.start() t_copy.join() t_collect.join() # Test exceptions if exc: raise exc[0] @threading_helper.requires_working_threading() def test_threaded_weak_key_dict_copy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakKeyDictionary, False) @threading_helper.requires_working_threading() @support.requires_resource('cpu') def test_threaded_weak_key_dict_deepcopy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakKeyDictionary, True) @threading_helper.requires_working_threading() def test_threaded_weak_value_dict_copy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. self.check_threaded_weak_dict_copy(weakref.WeakValueDictionary, False) @threading_helper.requires_working_threading() @support.requires_resource('cpu') def test_threaded_weak_value_dict_deepcopy(self): # Issue #35615: Weakref keys or values getting GC'ed during dict # copying should not result in a crash. 
self.check_threaded_weak_dict_copy(weakref.WeakValueDictionary, True) @support.cpython_only def test_remove_closure(self): d = weakref.WeakValueDictionary() self.assertIsNone(d._remove.__closure__) from test import mapping_tests class WeakValueDictionaryTestCase(mapping_tests.BasicTestMappingProtocol): """Check that WeakValueDictionary conforms to the mapping protocol""" __ref = {"key1":Object(1), "key2":Object(2), "key3":Object(3)} type2test = weakref.WeakValueDictionary def _reference(self): return self.__ref.copy() class WeakKeyDictionaryTestCase(mapping_tests.BasicTestMappingProtocol): """Check that WeakKeyDictionary conforms to the mapping protocol""" __ref = {Object("key1"):1, Object("key2"):2, Object("key3"):3} type2test = weakref.WeakKeyDictionary def _reference(self): return self.__ref.copy() class FinalizeTestCase(unittest.TestCase): class A: pass def _collect_if_necessary(self): # we create no ref-cycles so in CPython no gc should be needed if sys.implementation.name != 'cpython': support.gc_collect() def test_finalize(self): def add(x,y,z): res.append(x + y + z) return x + y + z a = self.A() res = [] f = weakref.finalize(a, add, 67, 43, z=89) self.assertEqual(f.alive, True) self.assertEqual(f.peek(), (a, add, (67,43), {'z':89})) self.assertEqual(f(), 199) self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, [199]) res = [] f = weakref.finalize(a, add, 67, 43, 89) self.assertEqual(f.peek(), (a, add, (67,43,89), {})) self.assertEqual(f.detach(), (a, add, (67,43,89), {})) self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, []) res = [] f = weakref.finalize(a, add, x=67, y=43, z=89) del a self._collect_if_necessary() self.assertEqual(f(), None) self.assertEqual(f(), None) self.assertEqual(f.peek(), None) self.assertEqual(f.detach(), None) self.assertEqual(f.alive, False) self.assertEqual(res, [199]) def test_arg_errors(self): def fin(*args, **kwargs): res.append((args, kwargs)) a = self.A() res = [] f = weakref.finalize(a, fin, 1, 2, func=3, obj=4) self.assertEqual(f.peek(), (a, fin, (1, 2), {'func': 3, 'obj': 4})) f() self.assertEqual(res, [((1, 2), {'func': 3, 'obj': 4})]) with self.assertRaises(TypeError): weakref.finalize(a, func=fin, arg=1) with self.assertRaises(TypeError): weakref.finalize(obj=a, func=fin, arg=1) self.assertRaises(TypeError, weakref.finalize, a) self.assertRaises(TypeError, weakref.finalize) def test_order(self): a = self.A() res = [] f1 = weakref.finalize(a, res.append, 'f1') f2 = weakref.finalize(a, res.append, 'f2') f3 = weakref.finalize(a, res.append, 'f3') f4 = weakref.finalize(a, res.append, 'f4') f5 = weakref.finalize(a, res.append, 'f5') # make sure finalizers can keep themselves alive del f1, f4 self.assertTrue(f2.alive) self.assertTrue(f3.alive) self.assertTrue(f5.alive) self.assertTrue(f5.detach()) self.assertFalse(f5.alive) f5() # nothing because previously unregistered res.append('A') f3() # => res.append('f3') self.assertFalse(f3.alive) res.append('B') f3() # nothing because previously called res.append('C') del a self._collect_if_necessary() # => res.append('f4') # => res.append('f2') # => res.append('f1') self.assertFalse(f2.alive) res.append('D') f2() # nothing because previously called by gc expected = ['A', 'f3', 'B', 'C', 'f4', 'f2', 'f1', 'D'] self.assertEqual(res, expected) def 
test_all_freed(self): # we want a weakrefable subclass of weakref.finalize class MyFinalizer(weakref.finalize): pass a = self.A() res = [] def callback(): res.append(123) f = MyFinalizer(a, callback) wr_callback = weakref.ref(callback) wr_f = weakref.ref(f) del callback, f self.assertIsNotNone(wr_callback()) self.assertIsNotNone(wr_f()) del a self._collect_if_necessary() self.assertIsNone(wr_callback()) self.assertIsNone(wr_f()) self.assertEqual(res, [123]) @classmethod def run_in_child(cls): def error(): # Create an atexit finalizer from inside a finalizer called # at exit. This should be the next to be run. g1 = weakref.finalize(cls, print, 'g1') print('f3 error') 1/0 # cls should stay alive till atexit callbacks run f1 = weakref.finalize(cls, print, 'f1', _global_var) f2 = weakref.finalize(cls, print, 'f2', _global_var) f3 = weakref.finalize(cls, error) f4 = weakref.finalize(cls, print, 'f4', _global_var) assert f1.atexit == True f2.atexit = False assert f3.atexit == True assert f4.atexit == True def test_atexit(self): prog = ('from test.test_weakref import FinalizeTestCase;'+ 'FinalizeTestCase.run_in_child()') rc, out, err = script_helper.assert_python_ok('-c', prog) out = out.decode('ascii').splitlines() self.assertEqual(out, ['f4 foobar', 'f3 error', 'g1', 'f1 foobar']) self.assertTrue(b'ZeroDivisionError' in err) class ModuleTestCase(unittest.TestCase): def test_names(self): for name in ('ReferenceType', 'ProxyType', 'CallableProxyType', 'WeakMethod', 'WeakSet', 'WeakKeyDictionary', 'WeakValueDictionary'): obj = getattr(weakref, name) if name != 'WeakSet': self.assertEqual(obj.__module__, 'weakref') self.assertEqual(obj.__name__, name) self.assertEqual(obj.__qualname__, name) libreftest = """ Doctest for examples in the library reference: weakref.rst >>> from test.support import gc_collect >>> import weakref >>> class Dict(dict): ... pass ... >>> obj = Dict(red=1, green=2, blue=3) # this object is weak referencable >>> r = weakref.ref(obj) >>> print(r() is obj) True >>> import weakref >>> class Object: ... pass ... >>> o = Object() >>> r = weakref.ref(o) >>> o2 = r() >>> o is o2 True >>> del o, o2 >>> gc_collect() # For PyPy or other GCs. >>> print(r()) None >>> import weakref >>> class ExtendedRef(weakref.ref): ... def __init__(self, ob, callback=None, **annotations): ... super().__init__(ob, callback) ... self.__counter = 0 ... for k, v in annotations.items(): ... setattr(self, k, v) ... def __call__(self): ... '''Return a pair containing the referent and the number of ... times the reference has been called. ... ''' ... ob = super().__call__() ... if ob is not None: ... self.__counter += 1 ... ob = (ob, self.__counter) ... return ob ... >>> class A: # not in docs from here, just testing the ExtendedRef ... pass ... >>> a = A() >>> r = ExtendedRef(a, foo=1, bar="baz") >>> r.foo 1 >>> r.bar 'baz' >>> r()[1] 1 >>> r()[1] 2 >>> r()[0] is a True >>> import weakref >>> _id2obj_dict = weakref.WeakValueDictionary() >>> def remember(obj): ... oid = id(obj) ... _id2obj_dict[oid] = obj ... return oid ... >>> def id2obj(oid): ... return _id2obj_dict[oid] ... >>> a = A() # from here, just testing >>> a_id = remember(a) >>> id2obj(a_id) is a True >>> del a >>> gc_collect() # For PyPy or other GCs. >>> try: ... id2obj(a_id) ... except KeyError: ... print('OK') ... else: ... 
print('WeakValueDictionary error') OK """ __test__ = {'libreftest' : libreftest} def load_tests(loader, tests, pattern): tests.addTest(doctest.DocTestSuite()) return tests if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/test_wsgiref.py000066400000000000000000000725031471441230600214130ustar00rootroot00000000000000from unittest import mock from test import support from test.support import socket_helper from test.test_httpservers import NoLogRequestHandler from unittest import TestCase from wsgiref.util import setup_testing_defaults from wsgiref.headers import Headers from wsgiref.handlers import BaseHandler, BaseCGIHandler, SimpleHandler from wsgiref import util from wsgiref.validate import validator from wsgiref.simple_server import WSGIServer, WSGIRequestHandler from wsgiref.simple_server import make_server from http.client import HTTPConnection from io import StringIO, BytesIO, BufferedReader from socketserver import BaseServer from platform import python_implementation import os import re import signal import sys import threading import unittest class MockServer(WSGIServer): """Non-socket HTTP server""" def __init__(self, server_address, RequestHandlerClass): BaseServer.__init__(self, server_address, RequestHandlerClass) self.server_bind() def server_bind(self): host, port = self.server_address self.server_name = host self.server_port = port self.setup_environ() class MockHandler(WSGIRequestHandler): """Non-socket HTTP handler""" def setup(self): self.connection = self.request self.rfile, self.wfile = self.connection def finish(self): pass def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), ('Date','Mon, 05 Jun 2006 18:49:54 GMT') ]) return [b"Hello, world!"] def header_app(environ, start_response): start_response("200 OK", [ ('Content-Type', 'text/plain'), ('Date', 'Mon, 05 Jun 2006 18:49:54 GMT') ]) return [';'.join([ environ['HTTP_X_TEST_HEADER'], environ['QUERY_STRING'], environ['PATH_INFO'] ]).encode('iso-8859-1')] def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): server = make_server("", 80, app, MockServer, MockHandler) inp = BufferedReader(BytesIO(data)) out = BytesIO() olderr = sys.stderr err = sys.stderr = StringIO() try: server.finish_request((inp, out), ("127.0.0.1",8888)) finally: sys.stderr = olderr return out.getvalue(), err.getvalue() def compare_generic_iter(make_it, match): """Utility to compare a generic iterator with an iterable This tests the iterator using iter()/next(). 
'make_it' must be a function returning a fresh iterator to be tested (since this may test the iterator twice).""" it = make_it() if not iter(it) is it: raise AssertionError for item in match: if not next(it) == item: raise AssertionError try: next(it) except StopIteration: pass else: raise AssertionError("Too many items from .__next__()", it) class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): pyver = (python_implementation() + "/" + sys.version.split()[0]) self.assertEqual(out, ("HTTP/1.0 200 OK\r\n" "Server: WSGIServer/0.2 " + pyver +"\r\n" "Content-Type: text/plain\r\n" "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" + (has_length and "Content-Length: 13\r\n" or "") + "\r\n" "Hello, world!").encode("iso-8859-1") ) def test_plain_hello(self): out, err = run_amock() self.check_hello(out) def test_environ(self): request = ( b"GET /p%61th/?query=test HTTP/1.0\n" b"X-Test-Header: Python test \n" b"X-Test-Header: Python test 2\n" b"Content-Length: 0\n\n" ) out, err = run_amock(header_app, request) self.assertEqual( out.splitlines()[-1], b"Python test,Python test 2;query=test;/path/" ) def test_request_length(self): out, err = run_amock(data=b"GET " + (b"x" * 65537) + b" HTTP/1.0\n\n") self.assertEqual(out.splitlines()[0], b"HTTP/1.0 414 URI Too Long") def test_validated_hello(self): out, err = run_amock(validator(hello_app)) # the middleware doesn't support len(), so content-length isn't there self.check_hello(out, has_length=False) def test_simple_validation_error(self): def bad_app(environ,start_response): start_response("200 OK", ('Content-Type','text/plain')) return ["Hello, world!"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual( err.splitlines()[-2], "AssertionError: Headers (('Content-Type', 'text/plain')) must" " be of type list: " ) def test_status_validation_errors(self): def create_bad_app(status): def bad_app(environ, start_response): start_response(status, [("Content-Type", "text/plain; charset=utf-8")]) return [b"Hello, world!"] return bad_app tests = [ ('200', 'AssertionError: Status must be at least 4 characters'), ('20X OK', 'AssertionError: Status message must begin w/3-digit code'), ('200OK', 'AssertionError: Status message must have a space after code'), ] for status, exc_message in tests: with self.subTest(status=status): out, err = run_amock(create_bad_app(status)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual(err.splitlines()[-2], exc_message) def test_wsgi_input(self): def bad_app(e,s): e["wsgi.input"].read() s("200 OK", [("Content-Type", "text/plain; charset=utf-8")]) return [b"data"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." 
)) self.assertEqual( err.splitlines()[-2], "AssertionError" ) def test_bytes_validation(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) return [b"data"] out, err = run_amock(validator(app)) self.assertTrue(err.endswith('"GET / HTTP/1.0" 200 4\n')) ver = sys.version.split()[0].encode('ascii') py = python_implementation().encode('ascii') pyver = py + b"/" + ver self.assertEqual( b"HTTP/1.0 200 OK\r\n" b"Server: WSGIServer/0.2 "+ pyver + b"\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Date: Wed, 24 Dec 2008 13:29:32 GMT\r\n" b"\r\n" b"data", out) def test_cp1252_url(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) # PEP3333 says environ variables are decoded as latin1. # Encode as latin1 to get original bytes return [e["PATH_INFO"].encode("latin1")] out, err = run_amock( validator(app), data=b"GET /\x80%80 HTTP/1.0") self.assertEqual( [ b"HTTP/1.0 200 OK", mock.ANY, b"Content-Type: text/plain", b"Date: Wed, 24 Dec 2008 13:29:32 GMT", b"", b"/\x80\x80", ], out.splitlines()) def test_interrupted_write(self): # BaseHandler._write() and _flush() have to write all data, even if # it takes multiple send() calls. Test this by interrupting a send() # call with a Unix signal. pthread_kill = support.get_attribute(signal, "pthread_kill") def app(environ, start_response): start_response("200 OK", []) return [b'\0' * support.SOCK_MAX_SIZE] class WsgiHandler(NoLogRequestHandler, WSGIRequestHandler): pass server = make_server(socket_helper.HOST, 0, app, handler_class=WsgiHandler) self.addCleanup(server.server_close) interrupted = threading.Event() def signal_handler(signum, frame): interrupted.set() original = signal.signal(signal.SIGUSR1, signal_handler) self.addCleanup(signal.signal, signal.SIGUSR1, original) received = None main_thread = threading.get_ident() def run_client(): http = HTTPConnection(*server.server_address) http.request("GET", "/") with http.getresponse() as response: response.read(100) # The main thread should now be blocking in a send() system # call. But in theory, it could get interrupted by other # signals, and then retried. So keep sending the signal in a # loop, in case an earlier signal happens to be delivered at # an inconvenient moment. 
while True: pthread_kill(main_thread, signal.SIGUSR1) if interrupted.wait(timeout=float(1)): break nonlocal received received = len(response.read()) http.close() background = threading.Thread(target=run_client) background.start() server.handle_request() background.join() self.assertEqual(received, support.SOCK_MAX_SIZE - 100) class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in} util.setup_testing_defaults(env) self.assertEqual(util.shift_path_info(env),part) self.assertEqual(env['PATH_INFO'],pi_out) self.assertEqual(env['SCRIPT_NAME'],sn_out) return env def checkDefault(self, key, value, alt=None): # Check defaulting when empty env = {} util.setup_testing_defaults(env) if isinstance(value, StringIO): self.assertIsInstance(env[key], StringIO) elif isinstance(value,BytesIO): self.assertIsInstance(env[key],BytesIO) else: self.assertEqual(env[key], value) # Check existing value env = {key:alt} util.setup_testing_defaults(env) self.assertIs(env[key], alt) def checkCrossDefault(self,key,value,**kw): util.setup_testing_defaults(kw) self.assertEqual(kw[key],value) def checkAppURI(self,uri,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.application_uri(kw),uri) def checkReqURI(self,uri,query=1,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) def checkFW(self,text,size,match): def make_it(text=text,size=size): return util.FileWrapper(StringIO(text),size) compare_generic_iter(make_it,match) it = make_it() self.assertFalse(it.filelike.closed) for item in it: pass self.assertFalse(it.filelike.closed) it.close() self.assertTrue(it.filelike.closed) def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') self.checkShift('/','', None, '/', '') self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') self.checkShift('/a/b', '//y', 'y', '/a/b/y', '') self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '') self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/') self.checkShift('/a/b', '///', '', '/a/b/', '') self.checkShift('/a/b', '/.//', '', '/a/b/', '') self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), ('SERVER_PORT', '80'), ('SERVER_PROTOCOL','HTTP/1.0'), ('HTTP_HOST','127.0.0.1'), ('REQUEST_METHOD','GET'), ('SCRIPT_NAME',''), ('PATH_INFO','/'), ('wsgi.version', (1,0)), ('wsgi.run_once', 0), ('wsgi.multithread', 0), ('wsgi.multiprocess', 0), ('wsgi.input', BytesIO()), ('wsgi.errors', StringIO()), ('wsgi.url_scheme','http'), ]: self.checkDefault(key,value) def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes") self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") 
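# The wsgiref.util helpers exercised by UtilityTests, shown directly
# (a hedged sketch following the documented behaviour):
#
#     from wsgiref import util
#
#     environ = {"SCRIPT_NAME": "/app", "PATH_INFO": "/users/42"}
#     util.setup_testing_defaults(environ)       # fills in missing keys only
#     assert util.shift_path_info(environ) == "users"
#     assert environ["SCRIPT_NAME"] == "/app/users"
#     assert environ["PATH_INFO"] == "/42"
#     assert util.guess_scheme({"HTTPS": "on"}) == "https"
#     assert util.guess_scheme({}) == "http"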
self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkAppURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkAppURI("http://spam.example.com:2071/", HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071") self.checkAppURI("http://spam.example.com/", SERVER_NAME="spam.example.com") self.checkAppURI("http://127.0.0.1/", HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com") self.checkAppURI("https://127.0.0.1/", HTTPS="on") self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000", HTTP_HOST=None) def testReqURIs(self): self.checkReqURI("http://127.0.0.1/") self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkReqURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam", SCRIPT_NAME="/spammity", PATH_INFO="/spam") self.checkReqURI("http://127.0.0.1/spammity/sp%E4m", SCRIPT_NAME="/spammity", PATH_INFO="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam;ham", SCRIPT_NAME="/spammity", PATH_INFO="/spam;ham") self.checkReqURI("http://127.0.0.1/spammity/spam;cookie=1234,5678", SCRIPT_NAME="/spammity", PATH_INFO="/spam;cookie=1234,5678") self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") self.checkReqURI("http://127.0.0.1/spammity/spam?s%E4y=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="s%E4y=ni") self.checkReqURI("http://127.0.0.1/spammity/spam", 0, SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") def testFileWrapper(self): self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10]) def testHopByHop(self): for hop in ( "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization " "TE Trailers Transfer-Encoding Upgrade" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertTrue(util.is_hop_by_hop(alt)) # Not comprehensive, just a few random header names for hop in ( "Accept Cache-Control Date Pragma Trailer Via Warning" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertFalse(util.is_hop_by_hop(alt)) class HeaderTests(TestCase): def testMappingInterface(self): test = [('x','y')] self.assertEqual(len(Headers()), 0) self.assertEqual(len(Headers([])),0) self.assertEqual(len(Headers(test[:])),1) self.assertEqual(Headers(test[:]).keys(), ['x']) self.assertEqual(Headers(test[:]).values(), ['y']) self.assertEqual(Headers(test[:]).items(), test) self.assertIsNot(Headers(test).items(), test) # must be copy! 
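# wsgiref.headers.Headers in brief (the mapping interface the surrounding
# test walks through): case-insensitive keys, duplicates allowed, and
# add_header() appends MIME-style parameters.
#
#     from wsgiref.headers import Headers
#
#     h = Headers()
#     h["Content-Type"] = "text/plain"
#     assert h["content-type"] == "text/plain"
#     h.add_header("Content-Disposition", "attachment", filename="x.txt")
#     assert h["Content-Disposition"] == 'attachment; filename="x.txt"'
#     assert str(h).endswith("\r\n\r\n")    # formatted, ready to send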
h = Headers() del h['foo'] # should not raise an error h['Foo'] = 'bar' for m in h.__contains__, h.get, h.get_all, h.__getitem__: self.assertTrue(m('foo')) self.assertTrue(m('Foo')) self.assertTrue(m('FOO')) self.assertFalse(m('bar')) self.assertEqual(h['foo'],'bar') h['foo'] = 'baz' self.assertEqual(h['FOO'],'baz') self.assertEqual(h.get_all('foo'),['baz']) self.assertEqual(h.get("foo","whee"), "baz") self.assertEqual(h.get("zoo","whee"), "whee") self.assertEqual(h.setdefault("foo","whee"), "baz") self.assertEqual(h.setdefault("zoo","whee"), "whee") self.assertEqual(h["foo"],"baz") self.assertEqual(h["zoo"],"whee") def testRequireList(self): self.assertRaises(TypeError, Headers, "foo") def testExtras(self): h = Headers() self.assertEqual(str(h),'\r\n') h.add_header('foo','bar',baz="spam") self.assertEqual(h['foo'], 'bar; baz="spam"') self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n') h.add_header('Foo','bar',cheese=None) self.assertEqual(h.get_all('foo'), ['bar; baz="spam"', 'bar; cheese']) self.assertEqual(str(h), 'foo: bar; baz="spam"\r\n' 'Foo: bar; cheese\r\n' '\r\n' ) class ErrorHandler(BaseCGIHandler): """Simple handler subclass for testing BaseHandler""" # BaseHandler records the OS environment at import time, but envvars # might have been changed later by other tests, which trips up # HandlerTests.testEnviron(). os_environ = dict(os.environ.items()) def __init__(self,**kw): setup_testing_defaults(kw) BaseCGIHandler.__init__( self, BytesIO(), BytesIO(), StringIO(), kw, multithread=True, multiprocess=True ) class TestHandler(ErrorHandler): """Simple handler subclass for testing BaseHandler, w/error passthru""" def handle_error(self): raise # for testing, we want to see what's happening class HandlerTests(TestCase): # testEnviron() can produce long error message maxDiff = 80 * 50 def testEnviron(self): os_environ = { # very basic environment 'HOME': '/my/home', 'PATH': '/my/path', 'LANG': 'fr_FR.UTF-8', # set some WSGI variables 'SCRIPT_NAME': 'test_script_name', 'SERVER_NAME': 'test_server_name', } with support.swap_attr(TestHandler, 'os_environ', os_environ): # override X and HOME variables handler = TestHandler(X="Y", HOME="/override/home") handler.setup_environ() # Check that wsgi_xxx attributes are copied to wsgi.xxx variables # of handler.environ for attr in ('version', 'multithread', 'multiprocess', 'run_once', 'file_wrapper'): self.assertEqual(getattr(handler, 'wsgi_' + attr), handler.environ['wsgi.' 
+ attr]) # Test handler.environ as a dict expected = {} setup_testing_defaults(expected) # Handler inherits os_environ variables which are not overridden # by SimpleHandler.add_cgi_vars() (SimpleHandler.base_env) for key, value in os_environ.items(): if key not in expected: expected[key] = value expected.update({ # X doesn't exist in os_environ "X": "Y", # HOME is overridden by TestHandler 'HOME': "/override/home", # overridden by setup_testing_defaults() "SCRIPT_NAME": "", "SERVER_NAME": "127.0.0.1", # set by BaseHandler.setup_environ() 'wsgi.input': handler.get_stdin(), 'wsgi.errors': handler.get_stderr(), 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.multithread': True, 'wsgi.multiprocess': True, 'wsgi.file_wrapper': util.FileWrapper, }) self.assertDictEqual(handler.environ, expected) def testCGIEnviron(self): h = BaseCGIHandler(None,None,None,{}) h.setup_environ() for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors': self.assertIn(key, h.environ) def testScheme(self): h=TestHandler(HTTPS="on"); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'https') h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') def testAbstractMethods(self): h = BaseHandler() for name in [ '_flush','get_stdin','get_stderr','add_cgi_vars' ]: self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") def testContentLength(self): # Demo one reason iteration is better than write()... ;) def trivial_app1(e,s): s('200 OK',[]) return [e['wsgi.url_scheme'].encode('iso-8859-1')] def trivial_app2(e,s): s('200 OK',[])(e['wsgi.url_scheme'].encode('iso-8859-1')) return [] def trivial_app3(e,s): s('200 OK',[]) return ['\u0442\u0435\u0441\u0442'.encode("utf-8")] def trivial_app4(e,s): # Simulate a response to a HEAD request s('200 OK',[('Content-Length', '12345')]) return [] h = TestHandler() h.run(trivial_app1) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 4\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app2) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app3) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 8\r\n' b'\r\n' b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82') h = TestHandler() h.run(trivial_app4) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 12345\r\n' b'\r\n') def testBasicErrorOutput(self): def non_error_app(e,s): s('200 OK',[]) return [] def error_app(e,s): raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(non_error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n").encode("iso-8859-1")) self.assertEqual(h.stderr.getvalue(),"") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: %s\r\n" "Content-Type: text/plain\r\n" "Content-Length: %d\r\n" "\r\n" % (h.error_status,len(h.error_body))).encode('iso-8859-1') + h.error_body) self.assertIn("AssertionError", h.stderr.getvalue()) def testErrorAfterOutput(self): MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) def testHeaderFormats(self): def non_error_app(e,s): 
s('200 OK',[]) return [] stdpat = ( r"HTTP/%s 200 OK\r\n" r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n" r"%s" r"Content-Length: 0\r\n" r"\r\n" ) shortpat = ( "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n" ).encode("iso-8859-1") for ssw in "FooBar/1.0", None: sw = ssw and "Server: %s\r\n" % ssw or "" for version in "1.0", "1.1": for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1": h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = False h.http_version = version h.server_software = ssw h.run(non_error_app) self.assertEqual(shortpat,h.stdout.getvalue()) h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = True h.http_version = version h.server_software = ssw h.run(non_error_app) if proto=="HTTP/0.9": self.assertEqual(h.stdout.getvalue(),b"") else: self.assertTrue( re.match((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()), ((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()) ) def testBytesData(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ]) return [b"data"] h = TestHandler() h.run(app) self.assertEqual(b"Status: 200 OK\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Content-Length: 4\r\n" b"\r\n" b"data", h.stdout.getvalue()) def testCloseOnError(self): side_effects = {'close_called': False} MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) class CrashyIterable(object): def __iter__(self): while True: yield b'blah' raise AssertionError("This should be caught by handler") def close(self): side_effects['close_called'] = True return CrashyIterable() h = ErrorHandler() h.run(error_app) self.assertEqual(side_effects['close_called'], True) def testPartialWrite(self): written = bytearray() class PartialWriter: def write(self, b): partial = b[:7] written.extend(partial) return len(partial) def flush(self): pass environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), PartialWriter(), sys.stderr, environ) msg = "should not do partial writes" with self.assertWarnsRegex(DeprecationWarning, msg): h.run(hello_app) self.assertEqual(b"HTTP/1.0 200 OK\r\n" b"Content-Type: text/plain\r\n" b"Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" b"Content-Length: 13\r\n" b"\r\n" b"Hello, world!", written) def testClientConnectionTerminations(self): environ = {"SERVER_PROTOCOL": "HTTP/1.0"} for exception in ( ConnectionAbortedError, BrokenPipeError, ConnectionResetError, ): with self.subTest(exception=exception): class AbortingWriter: def write(self, b): raise exception stderr = StringIO() h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertFalse(stderr.getvalue()) def testDontResetInternalStateOnException(self): class CustomException(ValueError): pass # We are raising CustomException here to trigger an exception # during the execution of SimpleHandler.finish_response(), so # we can easily test that the internal state of the handler is # preserved in case of an exception. class AbortingWriter: def write(self, b): raise CustomException stderr = StringIO() environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertIn("CustomException", stderr.getvalue()) # Test that the internal state of the handler is preserved. 
self.assertIsNotNone(h.result) self.assertIsNotNone(h.headers) self.assertIsNotNone(h.status) self.assertIsNotNone(h.environ) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.13/version000066400000000000000000000000121471441230600177260ustar00rootroot000000000000003.13.0rc2 gevent-24.11.1/src/greentest/3.9/000077500000000000000000000000001471441230600162525ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.9/allsans.pem000066400000000000000000000235711471441230600204220ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAQKCAYAe BaCCgdJk+xk1USg9cuo5ykBqzTSYlQLXdDlN2oO7sGehJhgvVEGX+QdM3ze+oM2B wNd3tQDB2iKo11oCunDh4/m2xhq6wA+iPK8POoWRSUf+VJb6xlsTmurENV1s8IHz GrPqM87OePFGqg/fEuQVuAotObzppVMfNdxHm0er4W6zRMw2rWqDnAOCQ5zDQ1/p ryp5rYpA49M+R9NoAMlByHRbR7s+6Qnk3NuIMDmUcpF2xeQ/KIMUiHnLEU/gKDpi bsk+VtyjlibR4zhh9/cJrLTApAIA+4eC176EJvKXCh5UIjd92JC7741HTNQXJpvG 9PXbzhyUCmncr04U+46snGHdwD+lG4LS7oBGACTLMtpcMrlgAm6XCg4T8gRVE/9n FvCkqPHBR+vnhOxm+0x0yUY/DstJby6IPYPsfGK/s2n//j/vJrAZE1Pxlm9EPU13 MRLcHstwjAc/NXRPnUN1DfcQvPLx6Tt6rqw3Wm1KO75kM+HZ56BX9/Bi1TgkiI0C gcEA5JTlXssJ3W8Cz6w1ZtGsThHQBDbvHF2D5AdqO7y6/eqzCQgBQl9BTfXOzsvP I1gf2CLEFBtGK09UjAuJQg90/NlKur7i7xt7HpAzEfGsDAL4P5BW5JnMNrzpJjjL 0uUDsPJlA75Wi29N2SFiaIslY0sZ6nckInat5GRe4O1AMSHoJ5suY9yTZTU3XB4O A+XyddutI1GsFZgl8/8LyyNMcyNjxG3T5sr7IKf5/nIv6oMDjC2zLVZa8QS/MEnL Kaa7AoHBANhEsxfcjw2MaPkrsqAsOP0dDf7g2rdz6wKT5BzZu9e+/E76NmvVDpns e+kCjql9Os3/wonOMINvn1bTCQGTgk8+dw1fMyqg+zQCvH4ImcE6LSqhzblVHsIB zZ7rW86trri1U9+olNHG4nwkus0i4LV8eeORns+j8DgXr6/eOvjX3ZW5TyU7/Qgm SiSdBapzJbom3xJrbo9KQsrN5PVCOwuwrgY0o+2BeKyKhnt4uGv0bR+ii06EOJUA WvjD7gLI9wKBwGVRXk3jH29IOm3EvjLh80bzfEmx89CV3tUfOEZcRGIyOsNhCfXa dP7SWqWtDxZyhELwPgtPf43I7wfYQTHH2ioNQqN94ubrPmpwrkJg5cq5MkIyf2F6 jlsg5xMrD6VeH4G6H25GWuQZJN9+fbkrHBpj+ovD3X9tLWzT1H5Miyx8BAQyM6DN 74Nn0C8Dn2C49vyor5i9JdK4ivIY9ahH8CYE5L73k3p0NFXoPtY61ORUyCjFROtu oIa+fOQxgVzn6wKBwQC3DD7BnY7/Gq7m51ODOqrpoaPs7Qhyagyp298hhDD3hNEt T56sWmLHaV/fcqipUDNrlGRmGzz4ooutA2YGDYIn7Gj7ym4WULcN6Jr92e25nLIJ +XWUvjUQZFJThkXogxz1fZSGI7wCamHcTYJGipTDR54rPV+7w7hY4cN0CZbEdIE6 buRMUZ/zO+VZZAYdpORz0N7SSlgDtAkgenCmHe64EEzbN8bgCcvHzl/RNfZyeSm7 supSBJuXkfttvvg/JzUCgcEAlx0Pep9qCLvpk0WqzijBVHc3zK4wYxjhN2MBkF42 SLWfogKpiPfIqxX6YF94roIA0VlW6Pj50v+sbPwq8nwsgFNhml80A4ODKr3O3Y3M fXDBJW5W5ZRb/vhIKRjXyCSckSRfj7N8HUYjCLkxQansNWimrldmSet0H2mYJN0Y JpBXdqpa76zoHzWpKFwD0fSVzvnMelPHSDCNOdIEHmR8e1x2F1/ufR/9/dBzPULY HMj0OhQHoi8kJyMIj3+bQkbC -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5f Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=allsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c1:1a:f8:fe:53:2f:d5:53:24:d1:9a:62:00:d5: 41:04:35:38:61:d3:ea:56:38:37:2f:bd:49:64:78: 
8a:3e:5d:32:77:16:11:78:fd:15:63:22:60:c7:8e: 8c:e5:db:c5:df:d9:3c:ca:9e:f4:09:6b:52:82:a6: a6:bb:7e:bf:1c:2f:d4:0c:c2:4e:29:1f:e4:b8:ba: 5c:4e:bb:4d:81:97:76:1a:7f:1d:a8:25:55:0c:2f: 7e:ef:72:22:60:fb:39:33:3f:1d:64:de:d5:c5:8d: 79:2b:2c:68:d9:c0:ea:2e:7c:10:b7:02:63:ee:ab: 6c:47:14:1c:c7:ae:fa:79:fc:32:11:1f:6b:25:40: 53:3d:7e:95:59:cc:de:fb:81:8c:b3:c5:b6:b4:c0: 27:c2:3e:90:9f:78:91:51:c9:82:96:f1:ce:cc:5e: bc:27:33:cd:98:b4:4e:d1:96:77:f6:db:b1:24:09: d3:0d:69:27:99:2d:42:31:79:5a:5c:9d:27:2a:66: 5d:12:21:b4:77:60:48:95:d0:b3:c5:93:1d:30:24: 59:bc:a9:41:05:53:f8:7e:d2:36:a6:83:2f:ce:37: ed:75:9e:a9:8c:96:9d:c1:8c:d8:bf:25:35:6c:6b: b3:7b:03:77:6f:74:70:bb:55:59:6b:5a:75:20:53: a3:28:4a:78:b2:2f:a8:a3:a6:e7:32:1e:d6:73:2b: 69:89:cb:4b:07:47:c3:da:74:72:a8:c3:43:b8:db: 7f:f9:37:c1:8a:4d:23:af:68:63:17:4e:30:1e:38: 6b:3e:f7:f3:f5:65:8a:37:22:38:d0:3f:3f:cd:57: 74:25:84:af:33:46:ac:45:dd:c5:b4:7a:41:c7:91: 3f:bf:8d:98:c2:bd:22:a6:ea:67:5b:31:0b:a7:28: 4d:56:f9:da:24:01:cf:35:e6:96:f8:f0:cc:df:d5: e5:8a:77:fe:d4:c9:47:fb:09:7b:ac:b3:20:1a:27: 77:25:a5:a2:b5:b1:b6:e7:f6:6d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:allsans, othername:, othername:, email:user@example.org, DNS:www.example.org, DirName:/C=XY/L=Castle Anthrax/O=Python Software Foundation/CN=dirname example, URI:https://www.python.org/, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, Registered ID:1.2.3.4.5 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: D4:F1:D8:23:E0:A7:E9:CA:12:45:A0:0D:03:C2:25:A6:E8:65:BC:EE X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 70:77:d8:82:b0:f4:ab:de:84:ce:88:32:63:5e:23:0f:b6:58: a2:b1:65:ff:12:22:0b:88:a6:fa:06:40:9a:e7:63:a7:5d:ae: 94:c5:68:3c:4b:e9:95:34:01:75:24:df:9d:6e:9b:e4:ff:3f: 61:97:29:7b:ab:34:2c:14:d3:01:d2:eb:fb:84:40:db:12:54: 7e:7a:44:bc:08:eb:9f:e2:15:0b:11:4f:25:d2:56:51:95:ad: 6d:ad:07:aa:6a:61:f9:39:d5:82:8c:45:31:9f:2a:ff:18:98: 49:0c:bb:17:ad:d5:24:d3:d1:c7:c4:10:3e:c4:79:26:58:f4: c5:de:82:16:c4:c3:c4:a7:a3:62:22:41:90:36:0f:bc:4c:fd: 6a:18:22:f2:87:e9:07:db:b4:3d:65:00:e4:70:f9:d6:e5:a8: a1:b9:c9:9d:e7:5d:78:aa:98:d5:f8:f4:fd:5c:d9:4c:d0:6d: bf:87:71:d3:5b:ec:f4:bf:46:f9:c8:f8:10:c5:72:af:c3:15: b9:c4:06:67:0b:3f:f6:f4:64:c5:27:74:c1:6b:00:37:da:ea: 18:36:77:36:a7:3e:80:2e:5d:54:0f:01:df:ce:9e:97:dd:c9: f2:8b:59:82:c5:65:31:c8:73:20:fd:24:23:25:d8:00:df:90: 93:26:76:08:0a:06:a9:0e:d3:d3:4c:6f:ef:a7:fb:de:eb:2a: 40:b9:e4:b1:44:0c:37:ca:c6:9e:44:4a:b4:7c:2c:40:52:35: bb:b3:71:28:3d:35:fd:be:c9:4f:54:b3:99:c5:5f:84:38:fb: 2b:fb:ea:dd:88:e8:9d:c1:9b:67:87:3d:79:7b:3d:7e:61:1f: 70:3c:b7:c8:4c:17:a5:0c:a3:28:c7:ab:48:11:14:f7:98:7a: da:4e:fb:91:76:89:0a:a6:c6:72:e0:96:d9:f1:80:ea:68:90: 37:5c:c6:69:c7:d7:bc:c7:d1:ae:5b:a9:12:59:c6:e4:6c:61: a9:8b:ba:51:b3:13 -----BEGIN CERTIFICATE----- MIIHDTCCBXWgAwIBAgIJAMstgJlaaVJfMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW 
MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2Fs bHNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQDBGvj+Uy/VUyTR mmIA1UEENThh0+pWODcvvUlkeIo+XTJ3FhF4/RVjImDHjozl28Xf2TzKnvQJa1KC pqa7fr8cL9QMwk4pH+S4ulxOu02Bl3Yafx2oJVUML37vciJg+zkzPx1k3tXFjXkr LGjZwOoufBC3AmPuq2xHFBzHrvp5/DIRH2slQFM9fpVZzN77gYyzxba0wCfCPpCf eJFRyYKW8c7MXrwnM82YtE7Rlnf227EkCdMNaSeZLUIxeVpcnScqZl0SIbR3YEiV 0LPFkx0wJFm8qUEFU/h+0jamgy/ON+11nqmMlp3BjNi/JTVsa7N7A3dvdHC7VVlr WnUgU6MoSniyL6ijpucyHtZzK2mJy0sHR8PadHKow0O423/5N8GKTSOvaGMXTjAe OGs+9/P1ZYo3IjjQPz/NV3QlhK8zRqxF3cW0ekHHkT+/jZjCvSKm6mdbMQunKE1W +dokAc815pb48Mzf1eWKd/7UyUf7CXussyAaJ3clpaK1sbbn9m0CAwEAAaOCAt4w ggLaMIIBMAYDVR0RBIIBJzCCASOCB2FsbHNhbnOgHgYDKgMEoBcMFXNvbWUgb3Ro ZXIgaWRlbnRpZmllcqA1BgYrBgEFAgKgKzApoBAbDktFUkJFUk9TLlJFQUxNoRUw E6ADAgEBoQwwChsIdXNlcm5hbWWBEHVzZXJAZXhhbXBsZS5vcmeCD3d3dy5leGFt cGxlLm9yZ6RnMGUxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJh eDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xGDAWBgNVBAMM D2Rpcm5hbWUgZXhhbXBsZYYXaHR0cHM6Ly93d3cucHl0aG9uLm9yZy+HBH8AAAGH EAAAAAAAAAAAAAAAAAAAAAGIBCoDBAUwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQW MBQGCCsGAQUFBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTU 8dgj4KfpyhJFoA0DwiWm6GW87jB9BgNVHSMEdjB0gBSziqCiunHxqCR51KRbJTYV HknIzaFRpE8wTTELMAkGA1UEBhMCWFkxJjAkBgNVBAoMHVB5dGhvbiBTb2Z0d2Fy ZSBGb3VuZGF0aW9uIENBMRYwFAYDVQQDDA1vdXItY2Etc2VydmVyggkAyy2AmVpp UlswgYMGCCsGAQUFBwEBBHcwdTA8BggrBgEFBQcwAoYwaHR0cDovL3Rlc3RjYS5w eXRob250ZXN0Lm5ldC90ZXN0Y2EvcHljYWNlcnQuY2VyMDUGCCsGAQUFBzABhilo dHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9vY3NwLzBDBgNVHR8E PDA6MDigNqA0hjJodHRwOi8vdGVzdGNhLnB5dGhvbnRlc3QubmV0L3Rlc3RjYS9y ZXZvY2F0aW9uLmNybDANBgkqhkiG9w0BAQsFAAOCAYEAcHfYgrD0q96EzogyY14j D7ZYorFl/xIiC4im+gZAmudjp12ulMVoPEvplTQBdSTfnW6b5P8/YZcpe6s0LBTT AdLr+4RA2xJUfnpEvAjrn+IVCxFPJdJWUZWtba0Hqmph+TnVgoxFMZ8q/xiYSQy7 F63VJNPRx8QQPsR5Jlj0xd6CFsTDxKejYiJBkDYPvEz9ahgi8ofpB9u0PWUA5HD5 1uWoobnJneddeKqY1fj0/VzZTNBtv4dx01vs9L9G+cj4EMVyr8MVucQGZws/9vRk xSd0wWsAN9rqGDZ3Nqc+gC5dVA8B386el93J8otZgsVlMchzIP0kIyXYAN+QkyZ2 CAoGqQ7T00xv76f73usqQLnksUQMN8rGnkRKtHwsQFI1u7NxKD01/b7JT1SzmcVf hDj7K/vq3YjoncGbZ4c9eXs9fmEfcDy3yEwXpQyjKMerSBEU95h62k77kXaJCqbG cuCW2fGA6miQN1zGacfXvMfRrlupElnG5GxhqYu6UbMT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/badcert.pem000066400000000000000000000036101471441230600203610ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIICXwIBAAKBgQC8ddrhm+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9L 
opdJhTvbGfEj0DQs1IE8M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVH fhi/VwovESJlaBOp+WMnfhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQAB AoGBAK0FZpaKj6WnJZN0RqhhK+ggtBWwBnc0U/ozgKz2j1s3fsShYeiGtW6CK5nU D1dZ5wzhbGThI7LiOXDvRucc9n7vUgi0alqPQ/PFodPxAN/eEYkmXQ7W2k7zwsDA IUK0KUhktQbLu8qF/m8qM86ba9y9/9YkXuQbZ3COl5ahTZrhAkEA301P08RKv3KM oXnGU2UHTuJ1MAD2hOrPxjD4/wxA/39EWG9bZczbJyggB4RHu0I3NOSFjAm3HQm0 ANOu5QK9owJBANgOeLfNNcF4pp+UikRFqxk5hULqRAWzVxVrWe85FlPm0VVmHbb/ loif7mqjU8o1jTd/LM7RD9f2usZyE2psaw8CQQCNLhkpX3KO5kKJmS9N7JMZSc4j oog58yeYO8BBqKKzpug0LXuQultYv2K4veaIO04iL9VLe5z9S/Q1jaCHBBuXAkEA z8gjGoi1AOp6PBBLZNsncCvcV/0aC+1se4HxTNo2+duKSDnbq+ljqOM+E7odU+Nq ewvIWOG//e8fssd0mq3HywJBAJ8l/c8GVmrpFTx8r/nZ2Pyyjt3dH1widooDXYSV q6Gbf41Llo5sYAtmxdndTLASuHKecacTgZVhy0FryZpLKrU= -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- Just bad cert data -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/badkey.pem000066400000000000000000000041621471441230600202170ustar00rootroot00000000000000-----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- Bad Key, though the cert should be OK -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIICpzCCAhCgAwIBAgIJAP+qStv1cIGNMA0GCSqGSIb3DQEBBQUAMIGJMQswCQYD VQQGEwJVUzERMA8GA1UECBMIRGVsYXdhcmUxEzARBgNVBAcTCldpbG1pbmd0b24x IzAhBgNVBAoTGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQwwCgYDVQQLEwNT U0wxHzAdBgNVBAMTFnNvbWVtYWNoaW5lLnB5dGhvbi5vcmcwHhcNMDcwODI3MTY1 NDUwWhcNMTMwMjE2MTY1NDUwWjCBiTELMAkGA1UEBhMCVVMxETAPBgNVBAgTCERl bGF3YXJlMRMwEQYDVQQHEwpXaWxtaW5ndG9uMSMwIQYDVQQKExpQeXRob24gU29m dHdhcmUgRm91bmRhdGlvbjEMMAoGA1UECxMDU1NMMR8wHQYDVQQDExZzb21lbWFj aGluZS5weXRob24ub3JnMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC8ddrh m+LutBvjYcQlnH21PPIseJ1JVG2HMmN2CmZk2YukO+9LopdJhTvbGfEj0DQs1IE8 M+kTUyOmuKfVrFMKwtVeCJphrAnhoz7TYOuLBSqt7lVHfhi/VwovESJlaBOp+WMn fhcduPEYHYx/6cnVapIkZnLt30zu2um+DzA9jQIDAQABoxUwEzARBglghkgBhvhC AQEEBAMCBkAwDQYJKoZIhvcNAQEFBQADgYEAF4Q5BVqmCOLv1n8je/Jw9K669VXb 08hyGzQhkemEBYQd6fzQ9A/1ZzHkJKb1P6yreOLSEh4KcxYPyrLRC1ll8nr5OlCx CMhKkTnR6qBsdNV0XtdU2+N25hqW+Ma4ZeqsN/iiJVCGNOZGnvQuvCAGWF8+J/f/ iHkC6gGdBJhogs4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/capath/000077500000000000000000000000001471441230600175125ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.9/capath/4e1295a3.0000066400000000000000000000014561471441230600206560ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD 
VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/capath/5ed36f99.0000066400000000000000000000050111471441230600207460ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- 
gevent-24.11.1/src/greentest/3.9/capath/6e88d7b8.0000066400000000000000000000014561471441230600207600ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIICLDCCAdYCAQAwDQYJKoZIhvcNAQEEBQAwgaAxCzAJBgNVBAYTAlBUMRMwEQYD VQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5ldXJv bmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMTEmJy dXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZpMB4X DTk2MDkwNTAzNDI0M1oXDTk2MTAwNTAzNDI0M1owgaAxCzAJBgNVBAYTAlBUMRMw EQYDVQQIEwpRdWVlbnNsYW5kMQ8wDQYDVQQHEwZMaXNib2ExFzAVBgNVBAoTDk5l dXJvbmlvLCBMZGEuMRgwFgYDVQQLEw9EZXNlbnZvbHZpbWVudG8xGzAZBgNVBAMT EmJydXR1cy5uZXVyb25pby5wdDEbMBkGCSqGSIb3DQEJARYMc2FtcG9AaWtpLmZp MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7+aty3S1iBA/+yxjxv4q1MUTd1kjNw L4lYKbpzzlmC5beaQXeQ2RmGMTXU+mDvuqItjVHOK3DvPK7lTcSGftUCAwEAATAN BgkqhkiG9w0BAQQFAANBAFqPEKFjk6T6CKTHvaQeEAsX0/8YHPHqH/9AnhSjrwuX 9EBc0n6bVGhN7XaXd6sJ7dym9sbsWxb+pJdurnkxjx4= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/capath/99d0fa06.0000066400000000000000000000050111471441230600207320ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIHPTCCBSWgAwIBAgIBADANBgkqhkiG9w0BAQQFADB5MRAwDgYDVQQKEwdSb290 IENBMR4wHAYDVQQLExVodHRwOi8vd3d3LmNhY2VydC5vcmcxIjAgBgNVBAMTGUNB IENlcnQgU2lnbmluZyBBdXRob3JpdHkxITAfBgkqhkiG9w0BCQEWEnN1cHBvcnRA Y2FjZXJ0Lm9yZzAeFw0wMzAzMzAxMjI5NDlaFw0zMzAzMjkxMjI5NDlaMHkxEDAO BgNVBAoTB1Jvb3QgQ0ExHjAcBgNVBAsTFWh0dHA6Ly93d3cuY2FjZXJ0Lm9yZzEi MCAGA1UEAxMZQ0EgQ2VydCBTaWduaW5nIEF1dGhvcml0eTEhMB8GCSqGSIb3DQEJ ARYSc3VwcG9ydEBjYWNlcnQub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIIC CgKCAgEAziLA4kZ97DYoB1CW8qAzQIxL8TtmPzHlawI229Z89vGIj053NgVBlfkJ 8BLPRoZzYLdufujAWGSuzbCtRRcMY/pnCujW0r8+55jE8Ez64AO7NV1sId6eINm6 zWYyN3L69wj1x81YyY7nDl7qPv4coRQKFWyGhFtkZip6qUtTefWIonvuLwphK42y fk1WpRPs6tqSnqxEQR5YYGUFZvjARL3LlPdCfgv3ZWiYUQXw8wWRBB0bF4LsyFe7 w2t6iPGwcswlWyCR7BYCEo8y6RcYSNDHBS4CMEK4JZwFaz+qOqfrU0j36NK2B5jc G8Y0f3/JHIJ6BVgrCFvzOKKrF11myZjXnhCLotLddJr3cQxyYN/Nb5gznZY0dj4k epKwDpUeb+agRThHqtdB7Uq3EvbXG4OKDy7YCbZZ16oE/9KTfWgu3YtLq1i6L43q laegw1SJpfvbi1EinbLDvhG+LJGGi5Z4rSDTii8aP8bQUWWHIbEZAWV/RRyH9XzQ QUxPKZgh/TMfdQwEUfoZd9vUFBzugcMd9Zi3aQaRIt0AUMyBMawSB3s42mhb5ivU fslfrejrckzzAeVLIL+aplfKkQABi6F1ITe1Yw1nPkZPcCBnzsXWWdsC4PDSy826 YreQQejdIOQpvGQpQsgi3Hia/0PsmBsJUUtaWsJx8cTLc6nloQsCAwEAAaOCAc4w ggHKMB0GA1UdDgQWBBQWtTIb1Mfz4OaO873SsDrusjkY0TCBowYDVR0jBIGbMIGY gBQWtTIb1Mfz4OaO873SsDrusjkY0aF9pHsweTEQMA4GA1UEChMHUm9vdCBDQTEe MBwGA1UECxMVaHR0cDovL3d3dy5jYWNlcnQub3JnMSIwIAYDVQQDExlDQSBDZXJ0 IFNpZ25pbmcgQXV0aG9yaXR5MSEwHwYJKoZIhvcNAQkBFhJzdXBwb3J0QGNhY2Vy dC5vcmeCAQAwDwYDVR0TAQH/BAUwAwEB/zAyBgNVHR8EKzApMCegJaAjhiFodHRw czovL3d3dy5jYWNlcnQub3JnL3Jldm9rZS5jcmwwMAYJYIZIAYb4QgEEBCMWIWh0 dHBzOi8vd3d3LmNhY2VydC5vcmcvcmV2b2tlLmNybDA0BglghkgBhvhCAQgEJxYl aHR0cDovL3d3dy5jYWNlcnQub3JnL2luZGV4LnBocD9pZD0xMDBWBglghkgBhvhC AQ0ESRZHVG8gZ2V0IHlvdXIgb3duIGNlcnRpZmljYXRlIGZvciBGUkVFIGhlYWQg b3ZlciB0byBodHRwOi8vd3d3LmNhY2VydC5vcmcwDQYJKoZIhvcNAQEEBQADggIB ACjH7pyCArpcgBLKNQodgW+JapnM8mgPf6fhjViVPr3yBsOQWqy1YPaZQwGjiHCc nWKdpIevZ1gNMDY75q1I08t0AoZxPuIrA2jxNGJARjtT6ij0rPtmlVOKTV39O9lg 18p5aTuxZZKmxoGCXJzN600BiqXfEVWqFcofN8CCmHBh22p8lqOOLlQ+TyGpkO/c gr/c6EWtTZBzCDyUZbAEmXZ/4rzCahWqlwQ3JNgelE5tDlG+1sSPypZt90Pf6DBl Jzt7u0NDY8RD97LsaMzhGY4i+5jhe1o+ATc7iwiwovOVThrLm82asduycPAtStvY sONvRUgzEv/+PDIqVPfE94rwiCPCR/5kenHA0R6mY7AHfqQv0wGP3J8rtsYIqQ+T SCX8Ev2fQtzzxD72V7DX3WnRBnc0CkvSyqD/HMaMyRa+xMwyN2hzXwj7UfdJUzYF CpUCTPJ5GhD22Dp1nPMd8aINcGeGG7MW9S/lpOt5hvk9C8JzC6WZrG/8Z7jlLwum GCSNe9FINSkYQKyTYOGWhlC0elnYjyELn8+CkcY7v2vcB5G5l1YjqrZslMZIBjzk 
zk6q5PYvCdxTby78dOs6Y5nCpqyJvKeyRKANihDjbPIky/qbn3BHLt4Ui9SyIAmW omTxJBzcoTWcFbLUvFUufQb1nA5V9FrWk9p2rSVzTMVD -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/capath/b1930218.0000066400000000000000000000030721471441230600205660ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/capath/ceff1710.0000066400000000000000000000030721471441230600210110ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v 
dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/ffdh3072.pem000066400000000000000000000042441471441230600202040ustar00rootroot00000000000000 DH Parameters: (3072 bit) prime: 00:ff:ff:ff:ff:ff:ff:ff:ff:ad:f8:54:58:a2:bb: 4a:9a:af:dc:56:20:27:3d:3c:f1:d8:b9:c5:83:ce: 2d:36:95:a9:e1:36:41:14:64:33:fb:cc:93:9d:ce: 24:9b:3e:f9:7d:2f:e3:63:63:0c:75:d8:f6:81:b2: 02:ae:c4:61:7a:d3:df:1e:d5:d5:fd:65:61:24:33: f5:1f:5f:06:6e:d0:85:63:65:55:3d:ed:1a:f3:b5: 57:13:5e:7f:57:c9:35:98:4f:0c:70:e0:e6:8b:77: e2:a6:89:da:f3:ef:e8:72:1d:f1:58:a1:36:ad:e7: 35:30:ac:ca:4f:48:3a:79:7a:bc:0a:b1:82:b3:24: fb:61:d1:08:a9:4b:b2:c8:e3:fb:b9:6a:da:b7:60: d7:f4:68:1d:4f:42:a3:de:39:4d:f4:ae:56:ed:e7: 63:72:bb:19:0b:07:a7:c8:ee:0a:6d:70:9e:02:fc: e1:cd:f7:e2:ec:c0:34:04:cd:28:34:2f:61:91:72: fe:9c:e9:85:83:ff:8e:4f:12:32:ee:f2:81:83:c3: fe:3b:1b:4c:6f:ad:73:3b:b5:fc:bc:2e:c2:20:05: c5:8e:f1:83:7d:16:83:b2:c6:f3:4a:26:c1:b2:ef: fa:88:6b:42:38:61:1f:cf:dc:de:35:5b:3b:65:19: 03:5b:bc:34:f4:de:f9:9c:02:38:61:b4:6f:c9:d6: e6:c9:07:7a:d9:1d:26:91:f7:f7:ee:59:8c:b0:fa: c1:86:d9:1c:ae:fe:13:09:85:13:92:70:b4:13:0c: 93:bc:43:79:44:f4:fd:44:52:e2:d7:4d:d3:64:f2: e2:1e:71:f5:4b:ff:5c:ae:82:ab:9c:9d:f6:9e:e8: 6d:2b:c5:22:36:3a:0d:ab:c5:21:97:9b:0d:ea:da: 1d:bf:9a:42:d5:c4:48:4e:0a:bc:d0:6b:fa:53:dd: ef:3c:1b:20:ee:3f:d5:9d:7c:25:e4:1d:2b:66:c6: 2e:37:ff:ff:ff:ff:ff:ff:ff:ff generator: 2 (0x2) recommended-private-length: 276 bits -----BEGIN DH PARAMETERS----- MIIBjAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz +8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a 87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7 YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi 7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3 7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32 nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu N///////////AgECAgIBFA== -----END DH PARAMETERS----- gevent-24.11.1/src/greentest/3.9/idnsans.pem000066400000000000000000000233321471441230600204170ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAQKCAYEA uaYWWwHW6pzxOrnabcVLYX0WunW9LVShbIw97AElI2n/LuhkXh6xkK48BsqP0vaK oDHJ5VYxgQdmoP03Zs8sX4BSWe7twg1u8wJxkA+cUXI1BAn0opHjpwJlalEEfe2v s8PwjMrF59nsCq56W42PrDlms5UmuQ5WLsw6Co++hZmfxW7LPu+GIS6qBZfluNT5 kBpZlDDCtkyteUD4SVI3wvmOSi+Wzv4e7P2wC9kByjENIcfhC5QQURRD4sA1hWCp 2SThYWqJOCEc2SvGgoqgTRaJuQ2aVG9qrntXt0N4V+WdJWXBK0jedkB2flLve1fR KmDYuc9k/c1svmS3Y+iZohBha9H8jpuJmXYBxxg1iNg9m7qkfg8F8wxCYLQKB+U6 tjRS7by+jSE08On7mpDDhJORnlh+rfEuWPPwAKQpLpdp76KDTvR++GvfOMUiOrFM e9s5aXp+vcgkSSqYvigE+sFpCjQWwkGBkMdT16Pf9CzhQaM08YuLnzfLEYgLFw6R AoHBAN5NQINBmlq/cptGSru66kfecqHfI7xHnnGWKAkto/B1x7Crrgs4Tk5b4vaA JmAqatt5P1e7zco7uAXXebY5VURuH/30TlkuaB+oGFp0OMw6165n8RVPT2ZaDViK ssJ9LT8fJ+23TWCCT2Z1zUlM/NnHAMjKOVsJK3/KEkVvlc7ROC7uVooc78AsQehg 
zpL3GBYEeBukT8aNUMqUlesCsIs/dQHW7DzQL2xGkQagm5/PDsxaCsT7ynA8eL3X TW+IXwKBwQDZTV3TaG6wqtL8y2DR0lN5jY/eYayX4e18iZ+XEZVTntPdVVyJIE4d 0A5ZfcILb9WE8R21iptROYSjcH/05j+3fQMJ1WAK0sNfGTUNNT3jYU8YzLvos+wW G8E+mNMpFPWNvLV5Qrl4VvoifGh8AMvplUEz8uAzGJbXbRxUPcmjth2ph8zULEDn /+o4OcT3gh1bp+HCqch0OuiJRn9qNUpsJG5GMm5FtjBjZM97ucZ1/0DaWl3JUxUN /pueo3J9vCUCgcBg2Fjdlcvv8u2z1aijJmgATVm1SWfhE3ZkV50zem2sSTNotTJK cwoyOveimeueA3ywBp9g0lFx5Bhkex3sFAggmrVXRoKHeZ8lA28woOdJmezybxfp R7b4iQy9YRdFgZEfqawUdMHB5KNAqNt5LpANNBQUZX0dOt53eooBM/6Yri8CyxRq cPbFysIfwWTdQ8Z7eRD2Qdv7TP9AcgDp9C8DSu7nkUEzsSKn0gpGT9vcgDEbN7Lv ZB4qTT3wvoZeq5MCgcBIG18eDtJkN1sp3Yb0OTnP5QSvg3PVNngq0jQt2fzWMacW FARP0HN7exW35n4kc2jD44q7OhJOAqsb3PHo3xqXlZkTg0WKceO4w9GR32/46spn bVCRaFrX/z/BuM6hHD5bWRpS8aw/3YTFOsklFNKVYRyw01BIREmRlLhIz/QAKidv oQt8AG9NTON44tqUUw3Q40WL5fEJeJ6/JrCTGrnmZrRdANEMuucVpFchNEVB1IC9 tCzY6IPdD/atzojoZi0CgcB2x9oWLjJ0XJIp2pMAb8nCMVjkKrznKFjZbDm8EQBs ou7pM2zkO3VRcWT1BXQocinJsjQqjQiTawP6IN2FQgT0d89V+pwd+jdvpdildQhP 1/6SErVRZV//oopKTsC6TIBL/EmW1TkP3ulQIZs8YklFgybeHdDyNFi+VgPXkVGe IHp0nEzrui9q0YJsjHfFHBeGyzDSfbiBYiF7Auk66gYZbXufebP/LZNG/FIamPP3 rwYIeeV1IVwk9tPBw6fGwrs= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:60 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=idnsans Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:bc:b2:aa:65:4e:e1:ee:2e:36:d6:4c:be:52:2f: 60:f5:7f:0f:54:e9:64:33:2d:50:05:e6:0b:32:85: de:65:af:61:69:7d:8c:23:ae:86:f9:31:ab:fd:d5: a3:8b:6c:94:53:be:83:72:5e:c2:76:e8:53:d2:d1: 1c:9c:44:65:db:a2:67:08:10:9c:57:1b:2f:5a:23: ed:0c:2d:80:6d:d4:6e:66:82:a0:87:f8:77:15:0b: 03:f3:08:d9:61:56:74:f0:be:98:00:ef:2f:33:b3: 7f:ba:7b:36:40:3d:69:05:d3:25:8e:31:82:ed:4d: ca:bd:03:91:96:79:ab:ed:e5:53:20:9b:52:99:17: 78:0e:e2:4e:7c:a0:fc:a4:dc:07:bd:0f:42:c0:69: 8b:17:e8:31:62:05:8c:78:d5:e5:11:e5:46:d2:6f: 92:18:5d:a0:dd:f1:de:a3:a9:6f:e3:9d:88:60:73: a0:b1:92:fd:60:4c:91:67:f3:b5:79:96:b1:b4:bd: 83:12:45:4b:56:0d:0f:58:26:1f:c4:28:0c:63:1b: 0e:c0:79:8e:36:f6:9e:93:13:85:28:26:10:e6:a0: 56:11:d8:d2:ef:6b:08:4d:22:99:71:a2:5f:ef:d1: fb:34:bd:e1:50:8c:8f:d4:b1:30:fc:da:d4:5f:9d: 82:f8:21:7f:2c:ce:12:ec:13:9f:f9:22:af:1a:88: b1:e3:55:b2:0c:c2:60:d8:01:ad:0f:eb:70:29:da: 47:f5:6e:24:a7:f6:6a:43:2f:c9:50:6b:34:a8:ca: bf:31:cc:8a:b6:41:2e:47:32:f1:9b:78:c0:26:4b: 48:a1:d7:46:71:f3:8b:95:9a:45:a5:6a:f8:2f:b5: 27:e5:c3:c2:bf:65:74:fd:73:bd:2b:66:9f:d3:74: 11:98:f7:97:0e:16:c6:e0:e5:4f:f6:d0:cf:cb:96: 98:ac:f6:d7:01:09:aa:15:69:84:85:ba:96:ad:ac: ff:a9:f3:2d:7d:a8:fd:a7:79:bb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:idnsans, DNS:xn--knig-5qa.idn.pythontest.net, DNS:xn--knigsgsschen-lcb0w.idna2003.pythontest.net, DNS:xn--knigsgchen-b4a3dun.idna2008.pythontest.net, DNS:xn--nxasmq6b.idna2003.pythontest.net, DNS:xn--nxasmm1c.idna2008.pythontest.net X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 5C:BE:18:7F:7B:3F:CE:99:66:80:79:53:4B:DD:33:1B:42:A5:7E:00 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server 
serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 5d:7a:f8:81:e0:a7:c1:3f:39:eb:d3:52:2c:e1:cb:4d:29:b3: 77:18:17:18:9e:12:fc:11:cc:3c:49:cb:6b:f4:4d:6c:b8:d2: f4:e9:37:f8:6b:ed:f5:d7:f1:eb:5a:41:04:c7:f3:8c:da:e1: 05:8e:ae:58:71:d9:01:8a:32:46:b2:dd:95:46:e1:ce:82:04: fa:0b:1c:29:75:07:85:ce:cd:59:d4:cc:f3:56:b3:72:4d:cb: 90:0f:ce:02:21:ce:5d:17:84:96:7f:6a:00:57:42:b7:24:5b: 07:25:1e:77:a8:9d:da:41:09:8e:29:79:b4:b0:a1:45:c8:70: ae:2c:86:24:ae:3d:9a:74:a7:04:78:d6:1f:1b:17:c5:c1:6d: b1:1a:fd:f4:50:2e:61:16:84:89:d0:42:3f:b6:bf:bd:52:bd: c8:3e:8e:87:b4:f0:bd:ad:c7:51:65:2f:77:e8:69:79:0e:03: 63:89:e7:70:ad:c8:d1:2f:1a:a5:06:d2:90:db:7c:07:35:9a: 0b:0e:85:87:d1:70:17:a7:88:0f:c6:b5:9c:88:00:fa:f9:b2: 0a:19:5a:4b:8d:91:12:51:5e:0e:c1:d8:9e:02:78:d0:2d:24: 09:fe:d4:97:3c:cb:a0:1f:9a:ab:f7:0f:e2:fa:64:23:4e:53: 0a:15:3e:f5:04:01:86:29:8b:8e:24:40:2f:b1:90:87:5c:3b: 7b:a7:4c:06:af:c3:90:7f:e9:c6:56:42:61:15:2c:83:f1:7c: 4f:89:17:f3:a0:11:34:3f:8d:af:75:34:60:1e:e0:f2:f3:02: e7:aa:b3:f7:9f:1c:f8:69:f4:fe:da:57:6e:1b:95:53:70:cd: ed:b6:bb:2a:84:eb:ab:c3:a9:b4:d5:15:a0:b2:cc:81:2d:f1: 56:c1:54:9b:5f:14:4c:5f:ad:5f:f5:06:ee:22:60:45:e4:50: 35:64:ac:ac:ca:4a:bf:86:78:f8:53:2d:17:d8:e8:84:c8:07: a4:c2:29:76:c7:1f -----BEGIN CERTIFICATE----- MIIGvTCCBSWgAwIBAgIJAMstgJlaaVJgMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF0xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEDAOBgNVBAMMB2lk bnNhbnMwggGiMA0GCSqGSIb3DQEBAQUAA4IBjwAwggGKAoIBgQC8sqplTuHuLjbW TL5SL2D1fw9U6WQzLVAF5gsyhd5lr2FpfYwjrob5Mav91aOLbJRTvoNyXsJ26FPS 0RycRGXbomcIEJxXGy9aI+0MLYBt1G5mgqCH+HcVCwPzCNlhVnTwvpgA7y8zs3+6 ezZAPWkF0yWOMYLtTcq9A5GWeavt5VMgm1KZF3gO4k58oPyk3Ae9D0LAaYsX6DFi BYx41eUR5UbSb5IYXaDd8d6jqW/jnYhgc6Cxkv1gTJFn87V5lrG0vYMSRUtWDQ9Y Jh/EKAxjGw7AeY429p6TE4UoJhDmoFYR2NLvawhNIplxol/v0fs0veFQjI/UsTD8 2tRfnYL4IX8szhLsE5/5Iq8aiLHjVbIMwmDYAa0P63Ap2kf1biSn9mpDL8lQazSo yr8xzIq2QS5HMvGbeMAmS0ih10Zx84uVmkWlavgvtSflw8K/ZXT9c70rZp/TdBGY 95cOFsbg5U/20M/Llpis9tcBCaoVaYSFupatrP+p8y19qP2nebsCAwEAAaOCAo4w ggKKMIHhBgNVHREEgdkwgdaCB2lkbnNhbnOCH3huLS1rbmlnLTVxYS5pZG4ucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2dzc2NoZW4tbGNiMHcuaWRuYTIwMDMucHl0 aG9udGVzdC5uZXSCLnhuLS1rbmlnc2djaGVuLWI0YTNkdW4uaWRuYTIwMDgucHl0 aG9udGVzdC5uZXSCJHhuLS1ueGFzbXE2Yi5pZG5hMjAwMy5weXRob250ZXN0Lm5l dIIkeG4tLW54YXNtbTFjLmlkbmEyMDA4LnB5dGhvbnRlc3QubmV0MA4GA1UdDwEB /wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/ BAIwADAdBgNVHQ4EFgQUXL4Yf3s/zplmgHlTS90zG0KlfgAwfQYDVR0jBHYwdIAU s4qgorpx8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQK DB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNh LXNlcnZlcoIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKG MGh0dHA6Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNl cjA1BggrBgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2Evb2NzcC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250 ZXN0Lm5ldC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGB AF16+IHgp8E/OevTUizhy00ps3cYFxieEvwRzDxJy2v0TWy40vTpN/hr7fXX8eta QQTH84za4QWOrlhx2QGKMkay3ZVG4c6CBPoLHCl1B4XOzVnUzPNWs3JNy5APzgIh zl0XhJZ/agBXQrckWwclHneondpBCY4pebSwoUXIcK4shiSuPZp0pwR41h8bF8XB 
bbEa/fRQLmEWhInQQj+2v71Svcg+joe08L2tx1FlL3foaXkOA2OJ53CtyNEvGqUG 0pDbfAc1mgsOhYfRcBeniA/GtZyIAPr5sgoZWkuNkRJRXg7B2J4CeNAtJAn+1Jc8 y6Afmqv3D+L6ZCNOUwoVPvUEAYYpi44kQC+xkIdcO3unTAavw5B/6cZWQmEVLIPx fE+JF/OgETQ/ja91NGAe4PLzAueqs/efHPhp9P7aV24blVNwze22uyqE66vDqbTV FaCyzIEt8VbBVJtfFExfrV/1Bu4iYEXkUDVkrKzKSr+GePhTLRfY6ITIB6TCKXbH Hw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycert.passwd.pem000066400000000000000000000102011471441230600217150ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQIhD+rJdxqb6ECAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBDTdyjCP3riOSUfxix4aXEvBIIH ECGkbsFabrcFMZcplw5jHMaOlG7rYjUzwDJ80JM8uzbv2Jb8SvNlns2+xmnEvH/M mNvRmnXmplbVjH3XBMK8o2Psnr2V/a0j7/pgqpRxHykG+koOY4gzdt3MAg8JPbS2 hymSl+Y5EpciO3xLfz4aFL1ZNqspQbO/TD13Ij7DUIy7xIRBMp4taoZCrP0cEBAZ +wgu9m23I4dh3E8RUBzWyFFNic2MVVHrui6JbHc4dIHfyKLtXJDhUcS0vIC9PvcV jhorh3UZC4lM+/jjXV5AhzQ0VrJ2tXAUX2dA144XHzkSH2QmwfnajPsci7BL2CGC rjyTy4NfB/lDwU+55dqJZQSKXMxAapJMrtgw7LD5CKQcN6zmfhXGssJ7HQUXKkaX I1YOFzuUD7oo56BVCnVswv0jX9RxrE5QYNreMlOP9cS+kIYH65N+PAhlURuQC14K PgDkHn5knSa2UQA5tc5f7zdHOZhGRUfcjLP+KAWA3nh+/2OKw/X3zuPx75YT/FKe tACPw5hjEpl62m9Xa0eWepZXwqkIOkzHMmCyNCsbC0mmRoEjmvfnslfsmnh4Dg/c 4YsTYMOLLIeCa+WIc38aA5W2lNO9lW0LwLhX1rP+GRVPv+TVHXlfoyaI+jp0iXrJ t3xxT0gaiIR/VznyS7Py68QV/zB7VdqbsNzS7LdquHK1k8+7OYiWjY3gqyU40Iu2 d1eSnIoDvQJwyYp7XYXbOlXNLY+s1Qb7yxcW3vXm0Bg3gKT8r1XHWJ9rj+CxAn5r ysfkPs1JsesxzzQjwTiDNvHnBnZnwxuxfBr26ektEHmuAXSl8V6dzLN/aaPjpTj4 CkE7KyqX3U9bLkp+ztl4xWKEmW44nskzm0+iqrtrxMyTfvvID4QrABjZL4zmWIqc e3ZfA3AYk9VDIegk/YKGC5VZ8YS7ZXQ0ASK652XqJ7QlMKTxxV7zda6Fp4uW6/qN ezt5wgbGGhZQXj2wDQmWNQYyG/juIgYTpCUA54U5XBIjuR6pg+Ytm0UrvNjsUoAC wGelyqaLDq8U8jdIFYVTJy9aJjQOYXjsUJ0dZN2aGHSlju0ZGIZc49cTIVQ9BTC5 Yc0Vlwzpl+LuA25DzKZNSb/ci0lO/cQGJ2uXQQgaNgdsHlu8nukENGJhnIzx4fzK wEh3yHxhTRCzPPwDfXmx0IHXrPqJhSpAgaXBVIm8OjvmMxO+W75W4uLfNY/B7e2H 3cjklGuvkofOf7sEOrGUYf4cb6Obg8FpvHgpKo5Twwmoh/qvEKckBFqNhZXDDl88 GbGlSEgyaAV1Ig8s1NJKBolWFa0juyPAwJ8vT1T4iwW7kQ7KXKt2UNn96K/HxkLu pikvukz8oRHMlfVHa0R48UB1fFHwZLzPmwkpu6ancIxk3uO3yfhf6iDk3bmnyMlz g3k/b6MrLYaOVByRxay85jH3Vvgqfgn6wa6BJ7xQ81eZ8B45gFuTH0J5JtLL7SH8 darRPLCYfA+Ums9/H6pU5EXfd3yfjMIbvhCXHkJrrljkZ+th3p8dyto6wmYqIY6I qR9sU+o6DhRaiP8tCICuhHxQpXylUM6WeJkJwduTJ8KWIvzsj4mReIKOl/oC2jSd gIdKhb9Q3zj9ce4N5m6v66tyvjxGZ+xf3BvUPDD+LwZeXgf7OBsNVbXzQbzto594 nbCzPocFi3gERE50ru4K70eQCy08TPG5NpOz+DDdO5vpAuMLYEuI7O3L+3GjW40Q G5bu7H5/i7o/RWR67qhG/7p9kPw3nkUtYgnvnWaPMIuTfb4c2d069kjlfgWjIbbI tpSKmm5DHlqTE4/ECAbIEDtSaw9dXHCdL3nh5+n428xDdGbjN4lT86tfu17EYKzl ydH1RJ1LX3o3TEj9UkmDPt7LnftvwybMFEcP7hM2xD4lC++wKQs7Alg6dTkBnJV4 5xU78WRntJkJTU7kFkpPKA0QfyCuSF1fAMoukDBkqUdOj6jE0BlJQlHk5iwgnJlt uEdkTjHZEjIUxWC6llPcAzaPNlmnD45AgfEW+Jn21IvutmJiQAz5lm9Z9PXaR0C8 hXB6owRY67C0YKQwXhoNf6xQun2xGBGYy5rPEEezX1S1tUH5GR/KW1Lh+FzFqHXI ZEb5avfDqHKehGAjPON+Br7akuQ125M9LLjKuSyPaQzeeCAy356Xd7XzVwbPddbm 9S9WSPqzaPgh10chIHoNoC8HMd33dB5j9/Q6jrbU/oPlptu/GlorWblvJdcTuBGI IVn45RFnkG8hCz0GJSNzW7+70YdESQbfJW79vssWMaiSjFE0pMyFXrFR5lBywBTx PiGEUWtvrKG94X1TMlGUzDzDJOQNZ9dT94bonNe9pVmP5BP4/DzwwiWh6qrzWk6p j8OE4cfCSh2WvHnhJbH7/N0v+JKjtxeIeJ16jx/K2oK5 -----END ENCRYPTED PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k 
YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycert.pem000066400000000000000000000077321471441230600204340ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 
SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycert2.pem000066400000000000000000000077561471441230600205240ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCf8FWxi4oVlDVx e8NDFgb+IYAGr/hZWuY1Zq7d7g57yPoxJrgt+bN89+U7qTduqyB2Hy8G0TqeACOr IdpPZ8P7V5E5YiASwfJ72nbVo7qR9DAKA5FE8PU0bJFmFLjDDihc970zc4ilRDfR WylUpj68nefOY4CzFzeiqVOLX2wezs7Z0hflkSXGBmC0j1FbQU2I3YJg3CKCabhT tU6OyKItzjJ2vVaOoQ+B0Kv8leaRQ6ANZBAFQF2LepSy5F2+oSD+QHjPr+012V5D mrsdIc9We8YyonS1u/3HI7lLohf3W+qFroQWjn0DJI56ScV1uEr/B0+hn2jBRTM5 d1F9BeVWm1u8BOJu50CvOeuxiVLsxJpa4T41DJznJk5V+hE4hKvDKmlrwulsRp8o jUEyUi8dzWOBRfAijIWv3qAPjGA/J33n6+PllCczC2BsVZhVmLqSMCwp1g2JTCM/ KC7T4vOl/EGkm76fcmLeA1Ef8oUdRg+3T77VP+HqZ2JP06J8O8MCAwEAAQKCAYAw YvJZ82BEJQGCIrIxMpHNAm+MFmKpDdIFp9oRdDrXgjcG9bLU3e1KSmkEgq4tggIh GlAM3PHB6ULhPC2ixj7JZHWgCaqwYhKtG6vF+HGyRFDgRrIFTGyyfoICgxReloLp lV2dGj/l19yXLuAzJtRmFdOSYhIGnGiNgnKvAKBiNajoxyHJpv7piPZqyc0QMZJ2 bKVMDm02TSuhz4FDuzktaGtl9uQf5GQfnvTZRrRpkC70vigGnrFuSBiCgopF6NLq 6AXl8YS3Jcu2oGWrZDfS/GlG1QmvGGsmr9wndJSGG43jcpcRZt0g1nJNu4Fioq3e 7y6Gap9TEsciuQOv/6RD457XkNARmTQxFpEwmSgOPQn2pFcDspo71Ej7azzL/Z+3 jvnVo3wxgxBcrpyh+vhBtJARp4pT4anW4PcD6IcPSOWbnI8Ldoj1XN5QkJcBcykK 6LmsAUqsmEQDNsmnGZWyYSCns4P2vUJi0hwQz8UiQwgAta3xnq4v5On7l3cq35kC gcEA0+joOFbZBeGlCb27tDW4VCW0cQuczzuNEoBUKnsNSqy0nx1O7hgHm/f/NQDD cpxiD15bRQ0KM9QbQC4dGaVoLsM07hUGk97dCxQPs2zot4CodCKGohs7E154tEDP zVg3YS5mubUmqdqtn8ZCKeeZye/Tv2ageyF300sEgj2Cd7EZ8S4sB0PxZ2tqT3jy cBL5cDruLEWuHIQjN7WwSjxnXocpb1OU7dJ+v4zFPCkSCOoa0DTTw4jFhPEOBdqV T619AoHBAME3QyW4QVtU2Ct9u0B1XThhqSEyOpUrcH9nOoefggwP4WF3phVx16BG aDKUIGQ62klRa5fi2eooxcjQRLv1sWO0UzssnO6ABMnGkUiRdrowo6xukNak0RTp 
0gvNoJ0SZxGF0yWSCw1Rq3qP2Koj7XDumFChAzLMyUsnoOl29SA7GfXcZp1pZTiq kOfFMWt0CIHu/EK03YWcd4vfQEq6lus39RCSXuL++Jva3yiEl5s069RFZvP1bNrD emkfetDSPwKBwQClk+8fVnzs44sZOW9ZOEB3P57mVbSJGHb6Zdtd9hhEqP3Y9gWe dJg9fmGjAJ23CAp3B7s5ER9PsAQ6+c0zJNNq9ox9G2CwWgtNhLdf81FDUPxPAktA jxZx4/dcoOe+A5gCD0elA67aOUxA86DvLVA1QXeqrn3muBfwuUUknvs6mt8yXGl6 o9QUgxHmVxLYD3tn/iPr4+ZP0c/Sz9yXpOsAKYxuuFg+G6N9+HiEsXKuFH4vAZgV yODNJ61VVZ4lS+ECgcAqFqOl39E81+qO7sCPdgFsermg5ZQlUmUbG52AVZq6jesG lE21disGWs/v1JyJuNg8CGRrnZriiycqa1PNreOKWImY5kr5GSHx4jNbn3RBcr70 nNEoMJbq+1QqBgzqqkuRYZlxIbMOn6++7v6/cTwT0aWUSr6rnjhrCqLeuG8FKlqp V+1ydLb79QvDsQzm30vLIggJb+ShakgQS/1xSdv+OR5FEd1hjTESokbiSJ/Ny2Vj xAp9MgUYUmSj6ZuTSXkCgcAggshdRQLom/EK2pYwffIpKfBiyLbi+KIjKxkiPEsb jrrQbvh9ZN6iAG3StVAYB5c6vewfeIlcDT0YJDyy1hGRLRG7vf9ubPf+n7Xp1y0W oo9L9qfCHu0jmWwtinkFYjpTDkXlxXCG2v3TllNsNX/5afYo8sb9oxXHLTpBlwZB fw6IgNZblWQevdgmUMTP9W2W7AZUxEz4gOM6lQkOwC3U59Dx2yO6rD3An6G1tlZF 2MClyf8o5d5ePObH8rkxrpY= -----END PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIUF15VKdwjiTzzKgs6PnNpEekV9QQwDQYJKoZIhvcNAQEL BQAwYjELMAkGA1UEBhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYD VQQKDBpQeXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhv c3RuYW1lMB4XDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFowYjELMAkGA1UE BhMCWFkxFzAVBgNVBAcMDkNhc3RsZSBBbnRocmF4MSMwIQYDVQQKDBpQeXRob24g U29mdHdhcmUgRm91bmRhdGlvbjEVMBMGA1UEAwwMZmFrZWhvc3RuYW1lMIIBojAN BgkqhkiG9w0BAQEFAAOCAY8AMIIBigKCAYEAn/BVsYuKFZQ1cXvDQxYG/iGABq/4 WVrmNWau3e4Oe8j6MSa4LfmzfPflO6k3bqsgdh8vBtE6ngAjqyHaT2fD+1eROWIg EsHye9p21aO6kfQwCgORRPD1NGyRZhS4ww4oXPe9M3OIpUQ30VspVKY+vJ3nzmOA sxc3oqlTi19sHs7O2dIX5ZElxgZgtI9RW0FNiN2CYNwigmm4U7VOjsiiLc4ydr1W jqEPgdCr/JXmkUOgDWQQBUBdi3qUsuRdvqEg/kB4z6/tNdleQ5q7HSHPVnvGMqJ0 tbv9xyO5S6IX91vqha6EFo59AySOeknFdbhK/wdPoZ9owUUzOXdRfQXlVptbvATi budArznrsYlS7MSaWuE+NQyc5yZOVfoROISrwyppa8LpbEafKI1BMlIvHc1jgUXw IoyFr96gD4xgPyd95+vj5ZQnMwtgbFWYVZi6kjAsKdYNiUwjPygu0+LzpfxBpJu+ n3Ji3gNRH/KFHUYPt0++1T/h6mdiT9OifDvDAgMBAAGjGzAZMBcGA1UdEQQQMA6C DGZha2Vob3N0bmFtZTANBgkqhkiG9w0BAQsFAAOCAYEARzdkuqa0Hexi/saMkdi3 bubpQkc7X0RYKWnjy/PgcmbvQXLiWRMZOH9rMWvd5v+ZfkgAtsbOQuP8ycioNIFY Il5SEmxHEN81z5UNSPLOib6ky13gzrnXRAxnnO7cICG7AaMu1dHv57fqjevcx/n/ nxPNKwKL+TDpMw7ATVZw7Py7JciKyFAfwtkvt17j/ldvaQvuwmWHzyFVrQniQcQq QEa4jy/Y/pXHAgCKq1qbe0ush17j1ChyH7l4SkF2xJKcYYQF5ipw8zg6WeOL2NFE G1KDJN0SsMmM3PMN1e0lLQP3G+UaatervrKXu51QleKL32Xlby+pp1w9KKs39/Tb RT8EMe9A6cecod6TL0ZUQHow6ykNYBkfSKDLTKWnL9ifZ0C/DvgmS7DpJg3oAa1e GhIglMrgqJflTHAI/PvEsCKM1O0Un2dVGWsUCzPfhj1cKmagyb0Zd+2Tk9xGSRs9 2ceXMxRCjOJwEHUCFuTYeqowabdlpi0nyPbSn7JIwCpT -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycert3.pem000066400000000000000000000223501471441230600205100ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQDFtLOteQlQojN7 ztkux7m0hmGKkP1hh0hbKqTcD87jkLAqAwZWenjZMjCbbZ3vP+AObCIkYIKzPXY7 Yi+H5M3O2mXIDxoHGjL/GWtoEyDNXvm9UC+MRuSOq2MaLHHQG0Rx2TxcYrMVUM7b 93rpN1LGRrCv1gISXM4EvEJooAR7Aadj0pG/o0fqDAdFjH6QZbhn1iZle+eGbjcf dgH/H0F8dn1PPGoViHXicbsQ4kB6002Pf+aXP4b2QKAbflyNHEKHPHEOOTXrFjMd c+bqKW24epEsMZI59qx9hU/4Rvp3/v+vEwTL7Nm7ilptzZn2cvGCW39LC0nNYLOz kO3H8xwA75h6uykdB+WO/v2CKIK9M/ZO+9QNrmaokfKDamCk39b8hlCwNL6LsVpv d3XTS5Wn4YWn92EqiltUJJoPo7pc7VTdWCg4zVFn4Q8Zh4NFNn/qTB8lEMgrsNTV 5cyZ7zhoBiUMSO45bmo2NsnE7ce/JUhlqe5uh0PT1MIBgTV+oDMCAwEAAQKCAYEA udsy4gwblqK0tVnxz0lQqYV+os3EdO/BNHr1Oi7eNg2pngTz603812mYSjUVOHma vtQmkH3twGQyBoc52Y1dcGzdK+IOfMjDUg7qao840ffL3I1J9ZwbdodlhZBsec94 W3J1jP/4DDzICf8vm5g3h0+i/9m2Xt7BibAU2dg7/grC+lNUUoxDqaEfIOF/hW0q muq1c8e0EisAROIh5FzUqhWVnWxU6eM7tuFlkuyu4whLLHB3LI466Lo+CTqT9M+v 
jJYlvS5+AZW3qMBp6WOI8C+VIiBL178mo+Igkyyy5AYXcWeNkjp6ygRWvtWXIhCv CI29mf+BP/54jAY0rQRXJ2UcSHXmM6PTDkE/L2OKeiY1Ou8gLOwun3yBVdbkXJMb PWmUW4N8qSIJQ+vE2TDqmkqAT6m+ilzOXl1O+LLTvGyMnOiiSLXK9mC4ND3tqaQu hvKivnI1doErcWUaIf1DHiJmLrGxrTCUKjCEoefqVq2/dDdtCfx7CqUvjl3DYKMB AoHBAP+Vdi6D07gZFepEGCaJ+YH6cxEyO73CNnea/F1whVAzOv91kHS32jC9PAI3 /wYlX+DLcN9mVF/q62V4SLZYfOxTPW4vWO0A45URe9s9Z795fdAcQ5jt3QFOVSnk 3XSaCkIOwckuwabGJi4+foiUEOnLLzQi1/g7x12dwejxVNhqhz5KFkOQPv8fQRed sb5LVLYDeprsB2Vsx0fHwg4z9FvTIxLBeI7+sJD30lNpYZrCl/T9x4e1SV2Rwn2W bghxgQKBwQDGBx07biZK9RB5g4qPl+G6vz0M+/KBfpwQbMYxSyct7u6gfGD9mWBO qocIIr39Unac3kUL237Cn3HbgiGCRe7Mwd7XqnSSGWM5oWSlVQxEKTXYUlTbd9O9 DKuyQGOl/AMEwD4ZbEOfQNmnd1U4nh1AV052FQY8Ry/atGFT9fApA/5X/bbenOwQ YGDsokLzPf2BIDncpE+VNevUMoMI7EnySgjjfpL+cRld0qpLqBMo2h5VddeJ/5YM 1YcNfMQiw7MCgcEAwXqXuKa7A8aZvHpH/gS9CRRbP01TxFbdfLWrDeE8SnY9111c Ob9kQTk/0D4rpK9uYXIgxD1m6iWghXQFN2TNTOnGuz7EhsYBgrt1k4Zsn5qND5oV 4hNPFsoB1nEW5EooMdGSCYaHuoSOKrvMdgAAvbu+xC0MaTJ3vfrK7Fik7h/WueTD 7emohuFWGVabU38bZZ5EljrPboxmX4Rs9uuFtG2lQ3GKnlVXvKaeZd6EsO9WsXPc NHOcUmUhYokaSvIBAoHAGCxGJTsM8Zl4qVylTWH87A7sJOmccLJD2r1sdBf4cGL6 PhzwugQ+/VtToGqdRo8Ka5u2Ufw5PQi5nVIFRSHERLpluW3VTQBMXHyXDJeVJ7zg Fcf3E9NMxYcGbnvtrhVVSP8ulWvh1U7VQtwOSxsB9xixOzjVygXmkYvzVYxwBJG4 OoV+DS6aomUhb8Fe6tJmX5zPc1+bV1t9ril8VVqCrFDdROfuiaDEt+8/Wnzp2dLG YShBZ1cLugVWtw7D4nqBAoHAF29k64iAxY5Y4OOibVkqjUCPyqG2oxiXqgO7CxZp FGUat5UtV2mIBlSENs1o5AZ1nPlgWtPtg0xVCaG2t/Rq7ugvUfAnAhUK6zX8FS+T gCXE+7iKuuIJiCo13/iAwF/CLfuXvj4CZ71ta0wX9w99f1FcPEk0x+ytiyuWJK8K tyubL34JwNrnkh/8e3LcV3L88Sk9ZmxeTz31f3cA3Fy2ZJOAUMD9dKXeKtY7azzt MkhXedRsdLSKqMh0VGeGHoLS -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5c Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:c5:b4:b3:ad:79:09:50:a2:33:7b:ce:d9:2e:c7: b9:b4:86:61:8a:90:fd:61:87:48:5b:2a:a4:dc:0f: ce:e3:90:b0:2a:03:06:56:7a:78:d9:32:30:9b:6d: 9d:ef:3f:e0:0e:6c:22:24:60:82:b3:3d:76:3b:62: 2f:87:e4:cd:ce:da:65:c8:0f:1a:07:1a:32:ff:19: 6b:68:13:20:cd:5e:f9:bd:50:2f:8c:46:e4:8e:ab: 63:1a:2c:71:d0:1b:44:71:d9:3c:5c:62:b3:15:50: ce:db:f7:7a:e9:37:52:c6:46:b0:af:d6:02:12:5c: ce:04:bc:42:68:a0:04:7b:01:a7:63:d2:91:bf:a3: 47:ea:0c:07:45:8c:7e:90:65:b8:67:d6:26:65:7b: e7:86:6e:37:1f:76:01:ff:1f:41:7c:76:7d:4f:3c: 6a:15:88:75:e2:71:bb:10:e2:40:7a:d3:4d:8f:7f: e6:97:3f:86:f6:40:a0:1b:7e:5c:8d:1c:42:87:3c: 71:0e:39:35:eb:16:33:1d:73:e6:ea:29:6d:b8:7a: 91:2c:31:92:39:f6:ac:7d:85:4f:f8:46:fa:77:fe: ff:af:13:04:cb:ec:d9:bb:8a:5a:6d:cd:99:f6:72: f1:82:5b:7f:4b:0b:49:cd:60:b3:b3:90:ed:c7:f3: 1c:00:ef:98:7a:bb:29:1d:07:e5:8e:fe:fd:82:28: 82:bd:33:f6:4e:fb:d4:0d:ae:66:a8:91:f2:83:6a: 60:a4:df:d6:fc:86:50:b0:34:be:8b:b1:5a:6f:77: 75:d3:4b:95:a7:e1:85:a7:f7:61:2a:8a:5b:54:24: 9a:0f:a3:ba:5c:ed:54:dd:58:28:38:cd:51:67:e1: 0f:19:87:83:45:36:7f:ea:4c:1f:25:10:c8:2b:b0: d4:d5:e5:cc:99:ef:38:68:06:25:0c:48:ee:39:6e: 6a:36:36:c9:c4:ed:c7:bf:25:48:65:a9:ee:6e:87: 43:d3:d4:c2:01:81:35:7e:a0:33 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 
85:75:10:25:D0:2C:80:50:24:1A:5B:57:70:DE:B5:CB:71:A9:3B:7B X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 95:f3:56:bb:d5:8c:70:bd:d1:de:da:63:b0:29:d7:db:60:27: d6:59:fd:61:1b:30:c6:d0:5d:73:7d:34:e1:68:e3:28:a6:89: e6:60:bd:89:d3:0e:f4:72:ad:72:76:f8:86:21:fd:75:3c:f8: 6d:be:9c:04:e1:82:03:69:6c:ae:d0:55:ba:5e:f2:ca:f5:0f: 8e:d6:d9:8d:c8:56:46:f4:f8:ac:74:2a:19:7b:8e:47:70:1f: fb:fb:bd:69:02:a1:a5:4a:6e:21:1c:04:14:15:55:bf:bf:24: 43:c8:17:03:be:3e:2c:ea:db:c8:af:1d:fd:52:df:d6:15:49: 9e:c2:44:69:ef:f1:45:43:83:b2:1e:cf:14:1c:13:3f:fe:9c: 71:cb:e7:1b:18:56:36:a7:af:44:f1:0b:a1:79:44:46:f9:43: 46:29:d8:b0:ca:49:4d:65:60:d3:f6:8e:74:bc:62:9e:1e:8d: 4b:29:9a:b4:0d:f0:a2:77:5b:34:e4:11:2f:a7:25:c5:e5:07: 76:12:ae:be:75:73:15:e4:0a:7d:53:38:56:3f:79:6d:6e:ca: ed:80:ab:56:ed:7e:8b:1c:e7:e3:d4:62:30:22:70:e7:29:b2: 03:3c:fe:fa:3d:f0:36:c0:4d:11:a2:99:d3:29:31:27:b8:c5: b8:15:a3:3c:4f:9b:73:5e:2b:b2:fb:cb:fd:75:47:b8:17:bd: 21:d8:e6:c1:b9:ff:73:81:d8:25:08:6d:08:5e:1c:a5:83:50: de:67:e6:da:d0:8e:5a:d3:f2:2a:b1:3f:b8:80:21:07:6a:71: 15:6d:05:eb:51:b3:59:8d:d4:15:46:7e:02:a8:13:01:16:99: bd:03:cc:70:71:2a:23:16:78:af:d1:d5:01:9d:04:b4:63:93: 9a:04:3a:92:2e:e6:7e:73:93:a5:fe:50:9b:bd:0e:ea:54:86: 6f:7c:e5:14:77:fe:c2:28:5a:4a:0e:d7:2d:8c:e9:ed:61:29: b2:53:ff:6c:04:bc -----BEGIN CERTIFICATE----- MIIF8TCCBFmgAwIBAgIJAMstgJlaaVJcMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxv Y2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAMW0s615CVCi M3vO2S7HubSGYYqQ/WGHSFsqpNwPzuOQsCoDBlZ6eNkyMJttne8/4A5sIiRggrM9 djtiL4fkzc7aZcgPGgcaMv8Za2gTIM1e+b1QL4xG5I6rYxoscdAbRHHZPFxisxVQ ztv3euk3UsZGsK/WAhJczgS8QmigBHsBp2PSkb+jR+oMB0WMfpBluGfWJmV754Zu Nx92Af8fQXx2fU88ahWIdeJxuxDiQHrTTY9/5pc/hvZAoBt+XI0cQoc8cQ45NesW Mx1z5uopbbh6kSwxkjn2rH2FT/hG+nf+/68TBMvs2buKWm3NmfZy8YJbf0sLSc1g s7OQ7cfzHADvmHq7KR0H5Y7+/YIogr0z9k771A2uZqiR8oNqYKTf1vyGULA0voux Wm93ddNLlafhhaf3YSqKW1Qkmg+julztVN1YKDjNUWfhDxmHg0U2f+pMHyUQyCuw 1NXlzJnvOGgGJQxI7jluajY2ycTtx78lSGWp7m6HQ9PUwgGBNX6gMwIDAQABo4IB wDCCAbwwFAYDVR0RBA0wC4IJbG9jYWxob3N0MA4GA1UdDwEB/wQEAwIFoDAdBgNV HSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4E FgQUhXUQJdAsgFAkGltXcN61y3GpO3swfQYDVR0jBHYwdIAUs4qgorpx8agkedSk WyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29m dHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMst gJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0 Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcw AYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYD VR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0 Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAJXzVrvVjHC90d7a Y7Ap19tgJ9ZZ/WEbMMbQXXN9NOFo4yimieZgvYnTDvRyrXJ2+IYh/XU8+G2+nATh ggNpbK7QVbpe8sr1D47W2Y3IVkb0+Kx0Khl7jkdwH/v7vWkCoaVKbiEcBBQVVb+/ JEPIFwO+Pizq28ivHf1S39YVSZ7CRGnv8UVDg7IezxQcEz/+nHHL5xsYVjanr0Tx 
C6F5REb5Q0Yp2LDKSU1lYNP2jnS8Yp4ejUspmrQN8KJ3WzTkES+nJcXlB3YSrr51 cxXkCn1TOFY/eW1uyu2Aq1btfosc5+PUYjAicOcpsgM8/vo98DbATRGimdMpMSe4 xbgVozxPm3NeK7L7y/11R7gXvSHY5sG5/3OB2CUIbQheHKWDUN5n5trQjlrT8iqx P7iAIQdqcRVtBetRs1mN1BVGfgKoEwEWmb0DzHBxKiMWeK/R1QGdBLRjk5oEOpIu 5n5zk6X+UJu9DupUhm985RR3/sIoWkoO1y2M6e1hKbJT/2wEvA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycert4.pem000066400000000000000000000223661471441230600205200ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQC34y3S6iXdmdvd M/2aFBe6CvRvZwhh1huGl7IQRtdoakPqMLlEdNHJtNeF5M27xLei+p4wt7N1Jyi0 2keHQb1m9TqH5AruOkE2ti+15zEoKoU9aWydTiH+epKTT0yjg2NcKQjRUaWcbhzB H4EMKuCIlzIIz8/EIKkOqhCDwq6+Fv3Ays+z7Bz+yR80ixivKu/l7SjxQ7z7R/kC I7OViRcIO5QBQPj7VLvCTz4VA6u/LdXngK2HNuau6WXm5yNNQbqrB11AEJcYZf/c VrneV4F+ZjLloAKgSn9GB8eWOyilTQ18TcKd+H2icipRaP/+QR/KPx5GK/SXU3my qm62QOGI7t/5ktVdjGhs6tHZxw1SRiipiLYWbtVRrSxa4wYlgpgoUwvrvvtC5kAN nTw1VGWsxcs+6a7+PocYnJiq7k4b5OAUb3Ryvl9DLAMy8NqpRWo4cHD/XQ3FCYwF HlOSgx/dL5Se0i3dW1KzbP6OvaNg6nl/1EXPUsJ1ATS8nzvzhccCAwEAAQKCAYEA nD3GvaJ9MeB802JNZBEWZ9jO/6jHknldQeq6POI0PF+t/NoRUH0BkyS4yucxdw0a CrxulG5BaJUxHRkqFV5iE4zhgnzcXLXamyYJO8GIHtyiASAGTVIJyDNVPxztvTDx x2iGOXPqBxP4Eo82EqSLywLMXHhVzAsEGZWeGpXb61+Vk62+9Nz1dfZlMTvOaWdO Fkp/sx8e/1KT3KGBANlOXIxioP4Xj1Tbg6nY0fogf3vud5j52B1pu8xL7PkPIaFq DEGz3XvWhBF/+Cs5iDeYz8eQpfQig7HdHVn2D8dZmzQgpLw1yGbPAnqrgopWfm7R MqiyFe82p2t+vfSoG5jz28XxPtzBJV3ljxKxlbnclqu/CAYSjzaYohDzyhjdZOZI r9DOfWOqu01Ha3EEsApn95fusHHGTH2FOy0u61FSTrfLfqsLw9WRJPWleirKikhf SZzi223QrmzZMtuCF7VgTx3ghDhBmFD8uzVVQ1SwPZ8CgftRkFcn1llXIAfJ3iHB AoHBAOg3DOIdtUVgpjMKhpAyuH54fYvGl7afIMNbKRle0kCiP45wtGJ43RPMqiR8 1rxZB3+iapICI/lnhk3O7vVRkR64yiqQBcl/hXZ1BhyD6iDXWYmm5mcnymcoqfwc p9TfzEPyGPb3SM2YlI0cSPRqM/jDvGvnDeKIpzEKvUlwJ59WoN2HOHTIXf+XbN5n unpuTt6YKJvc48DrXsPnUzkCmUfbOmgHfeb9/qBs/8kY4YJMsZEjqf88o7mCJCIy BtDxTwKBwQDKuOwE8e0GIA01ZHd6RfR+ZCvmp2oauxal4EJsBx+ZZnhEWGaSm1fE Bf/ih074ghcSKoSrdYpD1xGZ6fGVWMx3jcL11yLDOUiiPDJsm8hUBZ0IW1qXyfCP l7xy1bUkWwPXdmFuGp1exrcjooKrFNuTdYiK4nQZSKuCfXQRADrmEJmM+gYwhqI7 4XsYo848B9A4hbY6RLEox4uvo/RmafY0iR0PMhVEc+ydNLKB/4LpahZqBQ4kTpMv o4+rEvYt1gkCgcB08gx177ozx1nMCLf99N0/LBUmCIytNvR8DfPjyAIg9NUHOjFO CkpkR0VEfO50Cm4hVD1RbOyLFRzpIJbtSvfHvg5qYv/XG3auUn8Sa0jE408/aKNO PhbL3wnEYvYO2ep4KXtzHNQ4XmgprJ39IWMtG/5PZRx0ApgYtazgSDBcKXd4OTow bhwQtUTpuNmMAPONXJnO7O5yYNbn2B7sbiedrYV7kJJSe4X5awtiTjp7sX4XdxuM 5BAcQ7NI2WLfZTcCgcBp/X9hIoATmMRvKwUQx+yJ/KO7Z8KhETpJJdR0mNDbqmit Cy8t7cxYb+6WqLoQUivv0o0k/EJ7L8JDH76woAnfZB4P3RiOy69/K0wN3vFBhOHS kbju7aU53lKoE7YuuOtsRrewEng/KlRsbDY3bqNTGLt4KegbpBQQGLmLffxNd1Zh EAQWcP33ou9yNYrJdihWtQpOssWRlash/O32ceZJF3s7C6t068tFclz2fPocQdxQ OC5pqy9nU/P0tOhDlMkCgcEAosaBJLIeAYlOU0+2uSx5g5mIqOOTyrDEmqqad6T/ wkB7vW2QaoDvLL22Yrzdn9vQ0V0rqzhVtan7sq5pn/BQJAueZYN8rFxS3uuW+UQk Nsc4GLJzU8Az/2DvqEIrnE7zRc5E1FOI9gKLrBlpJB2o0hVcBznDe05Gax6Kjqbm jHqzyU73SpxpEy3OesClCeCQIMr47HaL9aSqaEX4U9bMpgHi0HgTTHqvJ5pch0hY dYl+WAE9LAyF1DF29BirEXVw -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5d Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=fakehostname Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b7:e3:2d:d2:ea:25:dd:99:db:dd:33:fd:9a:14: 17:ba:0a:f4:6f:67:08:61:d6:1b:86:97:b2:10:46: d7:68:6a:43:ea:30:b9:44:74:d1:c9:b4:d7:85:e4: 
cd:bb:c4:b7:a2:fa:9e:30:b7:b3:75:27:28:b4:da: 47:87:41:bd:66:f5:3a:87:e4:0a:ee:3a:41:36:b6: 2f:b5:e7:31:28:2a:85:3d:69:6c:9d:4e:21:fe:7a: 92:93:4f:4c:a3:83:63:5c:29:08:d1:51:a5:9c:6e: 1c:c1:1f:81:0c:2a:e0:88:97:32:08:cf:cf:c4:20: a9:0e:aa:10:83:c2:ae:be:16:fd:c0:ca:cf:b3:ec: 1c:fe:c9:1f:34:8b:18:af:2a:ef:e5:ed:28:f1:43: bc:fb:47:f9:02:23:b3:95:89:17:08:3b:94:01:40: f8:fb:54:bb:c2:4f:3e:15:03:ab:bf:2d:d5:e7:80: ad:87:36:e6:ae:e9:65:e6:e7:23:4d:41:ba:ab:07: 5d:40:10:97:18:65:ff:dc:56:b9:de:57:81:7e:66: 32:e5:a0:02:a0:4a:7f:46:07:c7:96:3b:28:a5:4d: 0d:7c:4d:c2:9d:f8:7d:a2:72:2a:51:68:ff:fe:41: 1f:ca:3f:1e:46:2b:f4:97:53:79:b2:aa:6e:b6:40: e1:88:ee:df:f9:92:d5:5d:8c:68:6c:ea:d1:d9:c7: 0d:52:46:28:a9:88:b6:16:6e:d5:51:ad:2c:5a:e3: 06:25:82:98:28:53:0b:eb:be:fb:42:e6:40:0d:9d: 3c:35:54:65:ac:c5:cb:3e:e9:ae:fe:3e:87:18:9c: 98:aa:ee:4e:1b:e4:e0:14:6f:74:72:be:5f:43:2c: 03:32:f0:da:a9:45:6a:38:70:70:ff:5d:0d:c5:09: 8c:05:1e:53:92:83:1f:dd:2f:94:9e:d2:2d:dd:5b: 52:b3:6c:fe:8e:bd:a3:60:ea:79:7f:d4:45:cf:52: c2:75:01:34:bc:9f:3b:f3:85:c7 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:fakehostname X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: C8:BD:A8:B4:C0:F2:32:10:73:47:9C:48:81:32:F8:BA:BB:26:84:97 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 76:87:76:4d:e4:0f:88:bf:2c:f3:58:67:c0:97:6c:cd:59:18: 82:83:4c:04:19:a5:6d:aa:fa:64:3d:49:32:3e:e1:56:95:b2: 13:f7:cf:d3:11:b0:72:b7:5b:e7:d7:85:69:51:3c:b6:54:80: 45:2f:28:10:21:20:b9:ba:e9:27:5a:b7:3f:82:b7:69:f5:46: f5:bf:a2:8b:17:7f:f2:14:d1:46:97:b5:8b:47:fb:9f:e8:5c: 05:0e:9d:11:bd:7c:9a:03:84:0b:ca:29:66:4a:ca:0d:6f:09: 1e:7a:27:c1:7f:03:96:70:8d:18:a5:2f:a4:98:a5:19:aa:8c: 5d:1e:8c:3e:bb:6d:3b:c0:33:c0:15:e1:bd:09:3d:9f:e8:dc: 12:d4:cb:44:1d:06:f5:e8:d6:4e:a1:2d:5c:9f:5d:1f:5b:2a: c3:4d:40:8d:da:d1:78:80:d0:c6:31:72:10:48:8a:e9:10:7a: 13:30:11:b2:9e:67:0e:ed:a1:aa:ec:73:2d:f0:b8:8a:22:75: 0f:30:69:5c:50:7e:91:ce:da:91:c7:70:8c:65:ff:f6:58:fb: 00:bd:45:cc:e2:e4:e3:e5:16:36:7d:f3:a2:4a:9c:45:ff:d9: a5:16:e0:2f:b5:5b:6c:e6:8a:13:15:48:73:bd:7c:80:33:c3: d4:3b:3a:1d:85:0e:a4:f7:f7:fb:48:0c:e9:a0:4b:5e:8a:5c: 67:f8:25:02:6f:cd:72:c1:aa:5a:93:64:7c:14:20:43:e0:13: 7f:0d:e1:0d:61:5e:2e:2c:cd:7a:2e:2a:ae:b6:75:6a:5f:a0: 1a:9b:b6:67:2d:b0:a5:1c:54:bc:8c:70:7e:15:2b:c0:50:e3: 03:bb:a4:a5:fc:45:01:c9:3f:a7:b8:18:dc:3e:08:07:a1:9b: f5:bd:95:bd:49:e8:10:7c:91:7d:2d:c4:c2:98:b6:b7:51:69: d7:0a:68:40:b5:0f:85:a0:a9:67:77:c6:68:cb:0e:58:34:b3: 58:e7:c8:7c:09:67 -----BEGIN CERTIFICATE----- MIIF9zCCBF+gAwIBAgIJAMstgJlaaVJdMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGIxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFTATBgNVBAMMDGZh a2Vob3N0bmFtZTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBALfjLdLq 
Jd2Z290z/ZoUF7oK9G9nCGHWG4aXshBG12hqQ+owuUR00cm014XkzbvEt6L6njC3 s3UnKLTaR4dBvWb1OofkCu46QTa2L7XnMSgqhT1pbJ1OIf56kpNPTKODY1wpCNFR pZxuHMEfgQwq4IiXMgjPz8QgqQ6qEIPCrr4W/cDKz7PsHP7JHzSLGK8q7+XtKPFD vPtH+QIjs5WJFwg7lAFA+PtUu8JPPhUDq78t1eeArYc25q7pZebnI01BuqsHXUAQ lxhl/9xWud5XgX5mMuWgAqBKf0YHx5Y7KKVNDXxNwp34faJyKlFo//5BH8o/HkYr 9JdTebKqbrZA4Yju3/mS1V2MaGzq0dnHDVJGKKmIthZu1VGtLFrjBiWCmChTC+u+ +0LmQA2dPDVUZazFyz7prv4+hxicmKruThvk4BRvdHK+X0MsAzLw2qlFajhwcP9d DcUJjAUeU5KDH90vlJ7SLd1bUrNs/o69o2DqeX/URc9SwnUBNLyfO/OFxwIDAQAB o4IBwzCCAb8wFwYDVR0RBBAwDoIMZmFrZWhvc3RuYW1lMA4GA1UdDwEB/wQEAwIF oDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAd BgNVHQ4EFgQUyL2otMDyMhBzR5xIgTL4ursmhJcwfQYDVR0jBHYwdIAUs4qgorpx 8agkedSkWyU2FR5JyM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRo b24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZl coIJAMstgJlaaVJbMIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6 Ly90ZXN0Y2EucHl0aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1Bggr BgEFBQcwAYYpaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2Nz cC8wQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5l dC90ZXN0Y2EvcmV2b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAHaHdk3k D4i/LPNYZ8CXbM1ZGIKDTAQZpW2q+mQ9STI+4VaVshP3z9MRsHK3W+fXhWlRPLZU gEUvKBAhILm66Sdatz+Ct2n1RvW/oosXf/IU0UaXtYtH+5/oXAUOnRG9fJoDhAvK KWZKyg1vCR56J8F/A5ZwjRilL6SYpRmqjF0ejD67bTvAM8AV4b0JPZ/o3BLUy0Qd BvXo1k6hLVyfXR9bKsNNQI3a0XiA0MYxchBIiukQehMwEbKeZw7toarscy3wuIoi dQ8waVxQfpHO2pHHcIxl//ZY+wC9Rczi5OPlFjZ986JKnEX/2aUW4C+1W2zmihMV SHO9fIAzw9Q7Oh2FDqT39/tIDOmgS16KXGf4JQJvzXLBqlqTZHwUIEPgE38N4Q1h Xi4szXouKq62dWpfoBqbtmctsKUcVLyMcH4VK8BQ4wO7pKX8RQHJP6e4GNw+CAeh m/W9lb1J6BB8kX0txMKYtrdRadcKaEC1D4WgqWd3xmjLDlg0s1jnyHwJZw== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/keycertecc.pem000066400000000000000000000130051471441230600210750ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIG2AgEAMBAGByqGSM49AgEGBSuBBAAiBIGeMIGbAgEBBDBcNwE+cm17mmr7Yg6d 0DNCnheGFOjkYH4tYzTyCkcZGShkmF/tKhIqb3imKz0Kx9+hZANiAATyp8ws6CuN OI2/3MC4jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoG JJYHhZNQXEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQAQ= -----END PRIVATE KEY----- Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5e Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=localhost-ecc Subject Public Key Info: Public Key Algorithm: id-ecPublicKey Public-Key: (384 bit) pub: 04:f2:a7:cc:2c:e8:2b:8d:38:8d:bf:dc:c0:b8:8d: 95:52:92:6a:03:ce:6f:d7:fd:9a:e4:12:6e:13:54: 72:8f:49:9e:a4:cd:94:69:9a:65:25:2d:2f:65:ed: 24:19:48:b6:32:0c:01:41:17:3a:06:24:96:07:85: 93:50:5c:43:bb:1c:5b:33:9f:a2:a7:bc:b8:e1:c1: 2e:9d:77:39:5a:1c:fc:e2:12:49:eb:4a:bd:0e:2a: 12:66:74:91:eb:40:04 ASN1 OID: secp384r1 NIST CURVE: P-384 X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost-ecc X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 79:11:98:86:15:4F:48:F4:31:0B:D2:CC:C8:26:3A:09:07:5D:96:40 X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD DirName:/C=XY/O=Python Software Foundation CA/CN=our-ca-server serial:CB:2D:80:99:5A:69:52:5B Authority Information Access: CA Issuers - 
URI:http://testca.pythontest.net/testca/pycacert.cer OCSP - URI:http://testca.pythontest.net/testca/ocsp/ X509v3 CRL Distribution Points: Full Name: URI:http://testca.pythontest.net/testca/revocation.crl Signature Algorithm: sha256WithRSAEncryption 6e:42:e8:a2:2d:28:14:e3:25:5c:c1:7e:54:e9:3a:ff:30:db: 94:ba:b2:f6:5f:ae:9a:c1:90:b3:4f:ce:65:1d:84:64:c0:71: 2c:44:8e:7e:00:79:f5:8c:4a:1d:34:13:44:de:99:2e:db:53: ee:ec:74:97:4d:59:1a:09:82:4f:98:75:91:a7:a0:b9:da:5e: 68:f5:32:85:be:36:3d:83:d4:ee:f9:87:67:31:85:41:53:9a: e7:05:96:13:1c:88:2e:7f:33:b1:ee:bd:f9:50:52:24:ed:3d: 92:95:6e:30:c3:af:74:a9:ee:15:bb:da:7c:14:50:8e:e3:99: ea:ba:b4:37:8a:50:61:26:de:01:93:b8:a2:6b:d9:c7:38:5e: b2:f8:96:3d:a8:9f:7d:0c:71:d4:7e:cc:a0:57:af:7e:ce:3f: a7:a7:27:68:c1:28:d7:4f:44:c1:b4:93:c3:c7:35:2b:50:c3: 8e:2c:d0:46:c1:3f:e1:67:d3:f0:81:ae:f3:5c:3e:4f:d5:a8: 07:8f:e0:eb:ef:d8:dc:47:e0:3d:58:eb:de:0e:7f:b2:58:cb: 5c:f1:2f:65:7e:0f:0d:cc:ca:ba:83:53:63:bc:dd:18:0c:ee: ed:ec:96:88:d0:38:c5:d7:ab:e7:55:79:7b:6d:ba:c0:a0:e9: 5c:ca:7c:fb:f8:70:c7:fb:f5:b2:b5:74:cb:f7:c0:0d:20:9f: 1d:b7:4c:bf:8a:8d:cd:e3:bc:4e:30:78:02:12:a0:9b:d5:8f: 49:3c:95:91:76:6e:7c:54:dc:61:7a:2e:20:ed:35:25:e0:c5: 17:50:02:83:00:74:8f:f0:1c:97:96:08:fc:2e:63:a4:f7:97: 87:43:2a:32:04:2d:4c:f9:1a:07:bf:68:91:fc:50:21:a1:3c: 8d:8f:fb:83:57:83:1f:b6:55:5c:55:2f:58:64:ad:f3:27:ba: d0:e3:cd:58:01:a3:c9:ba:1d:95:dc:30:d5:af:b9:20:ad:d9: 48:ba:8d:9a:66:ee -----BEGIN CERTIFICATE----- MIIEyzCCAzOgAwIBAgIJAMstgJlaaVJeMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaMGMxCzAJBgNVBAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEj MCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24xFjAUBgNVBAMMDWxv Y2FsaG9zdC1lY2MwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATyp8ws6CuNOI2/3MC4 jZVSkmoDzm/X/ZrkEm4TVHKPSZ6kzZRpmmUlLS9l7SQZSLYyDAFBFzoGJJYHhZNQ XEO7HFszn6KnvLjhwS6ddzlaHPziEknrSr0OKhJmdJHrQASjggHEMIIBwDAYBgNV HREEETAPgg1sb2NhbGhvc3QtZWNjMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAU BggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQUeRGY hhVPSPQxC9LMyCY6CQddlkAwfQYDVR0jBHYwdIAUs4qgorpx8agkedSkWyU2FR5J yM2hUaRPME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcoIJAMstgJlaaVJb MIGDBggrBgEFBQcBAQR3MHUwPAYIKwYBBQUHMAKGMGh0dHA6Ly90ZXN0Y2EucHl0 aG9udGVzdC5uZXQvdGVzdGNhL3B5Y2FjZXJ0LmNlcjA1BggrBgEFBQcwAYYpaHR0 cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2Evb2NzcC8wQwYDVR0fBDww OjA4oDagNIYyaHR0cDovL3Rlc3RjYS5weXRob250ZXN0Lm5ldC90ZXN0Y2EvcmV2 b2NhdGlvbi5jcmwwDQYJKoZIhvcNAQELBQADggGBAG5C6KItKBTjJVzBflTpOv8w 25S6svZfrprBkLNPzmUdhGTAcSxEjn4AefWMSh00E0TemS7bU+7sdJdNWRoJgk+Y dZGnoLnaXmj1MoW+Nj2D1O75h2cxhUFTmucFlhMciC5/M7HuvflQUiTtPZKVbjDD r3Sp7hW72nwUUI7jmeq6tDeKUGEm3gGTuKJr2cc4XrL4lj2on30McdR+zKBXr37O P6enJ2jBKNdPRMG0k8PHNStQw44s0EbBP+Fn0/CBrvNcPk/VqAeP4Ovv2NxH4D1Y 694Of7JYy1zxL2V+Dw3MyrqDU2O83RgM7u3slojQOMXXq+dVeXttusCg6VzKfPv4 cMf79bK1dMv3wA0gnx23TL+Kjc3jvE4weAISoJvVj0k8lZF2bnxU3GF6LiDtNSXg xRdQAoMAdI/wHJeWCPwuY6T3l4dDKjIELUz5Gge/aJH8UCGhPI2P+4NXgx+2VVxV L1hkrfMnutDjzVgBo8m6HZXcMNWvuSCt2Ui6jZpm7g== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/nokia.pem000066400000000000000000000036031471441230600200600ustar00rootroot00000000000000# Certificate for projects.developer.nokia.com:443 (see issue 13034) -----BEGIN CERTIFICATE----- MIIFLDCCBBSgAwIBAgIQLubqdkCgdc7lAF9NfHlUmjANBgkqhkiG9w0BAQUFADCB vDELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMR8wHQYDVQQL 
ExZWZXJpU2lnbiBUcnVzdCBOZXR3b3JrMTswOQYDVQQLEzJUZXJtcyBvZiB1c2Ug YXQgaHR0cHM6Ly93d3cudmVyaXNpZ24uY29tL3JwYSAoYykxMDE2MDQGA1UEAxMt VmVyaVNpZ24gQ2xhc3MgMyBJbnRlcm5hdGlvbmFsIFNlcnZlciBDQSAtIEczMB4X DTExMDkyMTAwMDAwMFoXDTEyMDkyMDIzNTk1OVowcTELMAkGA1UEBhMCRkkxDjAM BgNVBAgTBUVzcG9vMQ4wDAYDVQQHFAVFc3BvbzEOMAwGA1UEChQFTm9raWExCzAJ BgNVBAsUAkJJMSUwIwYDVQQDFBxwcm9qZWN0cy5kZXZlbG9wZXIubm9raWEuY29t MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCr92w1bpHYSYxUEx8N/8Iddda2 lYi+aXNtQfV/l2Fw9Ykv3Ipw4nLeGTj18FFlAZgMdPRlgrzF/NNXGw/9l3/qKdow CypkQf8lLaxb9Ze1E/KKmkRJa48QTOqvo6GqKuTI6HCeGlG1RxDb8YSKcQWLiytn yj3Wp4MgRQO266xmMQIDAQABo4IB9jCCAfIwQQYDVR0RBDowOIIccHJvamVjdHMu ZGV2ZWxvcGVyLm5va2lhLmNvbYIYcHJvamVjdHMuZm9ydW0ubm9raWEuY29tMAkG A1UdEwQCMAAwCwYDVR0PBAQDAgWgMEEGA1UdHwQ6MDgwNqA0oDKGMGh0dHA6Ly9T VlJJbnRsLUczLWNybC52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNybDBEBgNVHSAE PTA7MDkGC2CGSAGG+EUBBxcDMCowKAYIKwYBBQUHAgEWHGh0dHBzOi8vd3d3LnZl cmlzaWduLmNvbS9ycGEwKAYDVR0lBCEwHwYJYIZIAYb4QgQBBggrBgEFBQcDAQYI KwYBBQUHAwIwcgYIKwYBBQUHAQEEZjBkMCQGCCsGAQUFBzABhhhodHRwOi8vb2Nz cC52ZXJpc2lnbi5jb20wPAYIKwYBBQUHMAKGMGh0dHA6Ly9TVlJJbnRsLUczLWFp YS52ZXJpc2lnbi5jb20vU1ZSSW50bEczLmNlcjBuBggrBgEFBQcBDARiMGChXqBc MFowWDBWFglpbWFnZS9naWYwITAfMAcGBSsOAwIaBBRLa7kolgYMu9BSOJsprEsH iyEFGDAmFiRodHRwOi8vbG9nby52ZXJpc2lnbi5jb20vdnNsb2dvMS5naWYwDQYJ KoZIhvcNAQEFBQADggEBACQuPyIJqXwUyFRWw9x5yDXgMW4zYFopQYOw/ItRY522 O5BsySTh56BWS6mQB07XVfxmYUGAvRQDA5QHpmY8jIlNwSmN3s8RKo+fAtiNRlcL x/mWSfuMs3D/S6ev3D6+dpEMZtjrhOdctsarMKp8n/hPbwhAbg5hVjpkW5n8vz2y 0KxvvkA1AxpLwpVv7OlK17ttzIHw8bp9HTlHBU5s8bKz4a565V/a5HI0CSEv/+0y ko4/ghTnZc1CkmUngKKeFMSah/mT/xAh8XnE2l1AazFa8UKuYki1e+ArHaGZc4ix UYOtiRphwfuYQhRZ7qX9q2MMkCMI65XNK/SaFrAbbG0= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/nosan.pem000066400000000000000000000170471471441230600201040ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/QIBADANBgkqhkiG9w0BAQEFAASCBucwggbjAgEAAoIBgQCv3sUoOE4F7Pye AT2Q6XpXrGUOu1fYgdnItLLLhvn7ACuHMj7TA5UKXxsepJn5m2Ji9LvAbksr1IWd LZAvNgjwsUR+E4HbY108BhVt9sk3HFkvE0OOFbAa14ICtYPe18P/4Hv6Zfu/GJDU rwXHNCUu0p6i/mospZ5O3sx5MgVaShknGAEC3Kp7zOgusMmE8XSbkNQa3ARMkW4o kTqWKAeAHDjVFVyyhzZQmo+gaLzhWfJVSZhlJsuiLoZGGrVTq85EiXsE4l8rPaI+ mKkVzWP13IZW+Fx1tiIktumdHWb1OQWrvm8AiT9b8PcFCUUrvhQFcLDSCZjKlQ0t RWrSSKrrVsSldOreqRLtpjGzFJpGnTcvslL7rP5pg5DjBsYmVcDjrmRuJuhGq52X /6HEC97GouVK8tT1LVMv1wufVPn+i9TzwxOuRWeUvVqLAJgWQ9N3yKdymH+VrpZk /oB9ScyDakGezZBW5CeOQbNJ8WoX58jNxefGjtqKxmyztu43r3ECAwEAAQKCAYBQ fVoqYCqFV8L95X9x1QljGsldhqxbsIIl811o/KtoDtndFEfgd2E8z+4vhhHaRR0w QOW02kWZF7jXCMVWdhp9XgQE15S0/bLsB7TDERFiIZ1HiD+AxbhFcKBV8REbahCQ CQN0xDwFZ47RaBDy7JCf71EfM+UP7fSYECvww83jVspQNBIyZx+3bT5OMCbqqz88 +3m3mT52dJDADEeN9WAJZ+Ey1IYKRwu6tCJLvePEF1BrbDVNBgZogXZ+mzalxpjr 4RpGPMMa+VWc8HmDVd+LtpwKJcQD00GvUP4fNywn+5jvNWl54FdQiTLPrieTWxas XUQ2crxP7Aqr2/vsU5Ruru5uF7H+ssMHp9YQDhpJ2+SVhQ9P+/loXCuKGt+BrB2Z MlitO3f+vfRtzATmJ8G0qFrOqZK1A/qsiyIze240C1hAl3oy2xpZqTDGp4gRWwoi OIN0HmH9UbP7bbNQY1x/zstTbza4/7rGb1+DZKeZIMu7QjBCU0rtsJpGtUvcQGEC gcEA42GMYSL/HljZMF1LsDhTX/cmP8FDNgONhWYxT+w0Csnj1usLNBaT63dYnEiW QKydRR4casAR1Kdy4Yfcy2lCy1kCfwqkQYk8fxSsOSHRjUfwC1SnfdYlwKFMxw4a oZF0R4oVCBYrfP+8kqrj+5gs/gXblsw72XkYtbCdIriKKdmUzTx7MegzSqh2PVRi rJzuwCZQ/O0NfhwdOHxLQDo0dgD+vv9e+KOSoJ9FDv8HH1tnolpRMdkSA8AJR/Nk DXt1AoHBAMYBfTKQZ2jqLKybe4tP+YKjvjVp8vJx0iNUXFN/P6hBaSBOgq85uxXL X3s7N/pkOCjyE95B8QusIkbnbfdyEP89O4bTbUHPXyAkHyRkR7Vny49HYuaR/aXQ mXC0J2z5bXVpCQ514l/R/Io3wBph+hbG3To7pp9pMOV4qzvibUZaTZFwH+q+xDwf SKSFy3fcomgH4/K5/QuKVj0jOUQsYjQQWb8GukS2KZK3zYJIAG1bBcsCVpSuBdW0 eCZgqjnwjQKBwCUyUwWc9QEg5b68tGIKhNEhHDe3xOf0ItWcxxpc+JJ/Pm9tGfMW 
cnJFntBKK5I+6qdg6qMn8oLINcnhMORxvsSHNhpUQlSaP7RGTHo4JxCmoQUpfxDd 1GUzvdyeWQrvQYdmdlRRVCHpsA6KOCtzVIDlsmtz06Ka5cjrMHl6mNeJyYbdiwW6 B5ICBv23bUDxlzkFy5/ko51qufkAlErYeraHKSVTn1SrZZQzGdf/LkoZ6NUtUzUF XqYQZzRHA6oU9QKBwDslzLljC5D6ivfQxln6POV6dmJMUOd9erFVDPNgSqq/R2EA MueXDjzXcKFGMlWYxHHuxmKZPiEnfWHC1kWZjFxCdVq0I6oKATd/stHTJtyYseUO BQwtRiDXLE7PcguKgtkU1EC+lC3dc1vyhW8cH3HYW9N+aCqsaI/TuQr9e3kNlqhA XzhnXgU7rx5+XSZkARukZ8JlLqLY4yQGNqAXxgoZbEW1A8VsyQRr5XbqfT4td5CK FUT6qwGIlG+aZp9CLQKBwQCQkwdW9A/Q4Ffq8+XTL1hJ24m/q11OLAPODUypOhWw OCbX2fkv59pSBe6niZDBls1NpHB9mzalBrJCfU+yKC667gKcKULOnWULIoOQvmcg Ka3hkkW28gTnCjfDIYm3IdsLjc67zJplOixaKgxhO8NtJZGtg0oLIrofG8EYRInv OmtGw+XE+s4TVs6WgXnEg9zWQ5ZYtqQVn6PT5jsz+Nrvipi61HWHVBd7g+78ojps 3suWxl0FvgzTW5HD16WRXeI= -----END PRIVATE KEY----- Certificate: Data: Version: 1 (0x0) Serial Number: cb:2d:80:99:5a:69:52:61 Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, L=Castle Anthrax, O=Python Software Foundation, CN=nosan Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:af:de:c5:28:38:4e:05:ec:fc:9e:01:3d:90:e9: 7a:57:ac:65:0e:bb:57:d8:81:d9:c8:b4:b2:cb:86: f9:fb:00:2b:87:32:3e:d3:03:95:0a:5f:1b:1e:a4: 99:f9:9b:62:62:f4:bb:c0:6e:4b:2b:d4:85:9d:2d: 90:2f:36:08:f0:b1:44:7e:13:81:db:63:5d:3c:06: 15:6d:f6:c9:37:1c:59:2f:13:43:8e:15:b0:1a:d7: 82:02:b5:83:de:d7:c3:ff:e0:7b:fa:65:fb:bf:18: 90:d4:af:05:c7:34:25:2e:d2:9e:a2:fe:6a:2c:a5: 9e:4e:de:cc:79:32:05:5a:4a:19:27:18:01:02:dc: aa:7b:cc:e8:2e:b0:c9:84:f1:74:9b:90:d4:1a:dc: 04:4c:91:6e:28:91:3a:96:28:07:80:1c:38:d5:15: 5c:b2:87:36:50:9a:8f:a0:68:bc:e1:59:f2:55:49: 98:65:26:cb:a2:2e:86:46:1a:b5:53:ab:ce:44:89: 7b:04:e2:5f:2b:3d:a2:3e:98:a9:15:cd:63:f5:dc: 86:56:f8:5c:75:b6:22:24:b6:e9:9d:1d:66:f5:39: 05:ab:be:6f:00:89:3f:5b:f0:f7:05:09:45:2b:be: 14:05:70:b0:d2:09:98:ca:95:0d:2d:45:6a:d2:48: aa:eb:56:c4:a5:74:ea:de:a9:12:ed:a6:31:b3:14: 9a:46:9d:37:2f:b2:52:fb:ac:fe:69:83:90:e3:06: c6:26:55:c0:e3:ae:64:6e:26:e8:46:ab:9d:97:ff: a1:c4:0b:de:c6:a2:e5:4a:f2:d4:f5:2d:53:2f:d7: 0b:9f:54:f9:fe:8b:d4:f3:c3:13:ae:45:67:94:bd: 5a:8b:00:98:16:43:d3:77:c8:a7:72:98:7f:95:ae: 96:64:fe:80:7d:49:cc:83:6a:41:9e:cd:90:56:e4: 27:8e:41:b3:49:f1:6a:17:e7:c8:cd:c5:e7:c6:8e: da:8a:c6:6c:b3:b6:ee:37:af:71 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 91:42:c2:15:57:42:47:77:e7:0f:c5:55:26:b1:5b:c3:5e:ba: 81:db:e1:a4:9f:b8:42:5a:21:c9:8c:18:ae:0f:90:ab:9a:24: e7:d2:78:fc:bd:97:29:b1:5c:46:1f:5b:b8:d2:a7:87:f1:50: 53:5b:d3:be:57:74:bd:e5:75:db:50:81:f7:37:95:0b:69:ef: 39:8c:5c:82:d5:64:62:d5:8b:e9:e0:31:e1:73:d2:5a:2c:de: 43:5a:06:e5:d3:4d:d0:35:e0:9f:c2:73:31:bc:35:69:d4:fb: 7d:f0:1a:33:f7:f6:25:72:9c:a6:84:05:08:f6:b5:e8:04:10: f1:1f:f2:95:ad:a1:f8:d8:80:a5:eb:75:43:99:33:90:0c:79: fc:c0:87:08:95:20:aa:c2:81:0b:22:6f:56:f4:8f:2a:23:f8: 40:47:1c:03:a5:b1:04:0a:04:4a:df:d0:88:a8:bc:31:f2:42: 9b:d8:11:14:9e:e3:68:ea:07:2c:15:de:d2:36:5a:15:38:ed: d2:af:0e:b4:b6:1d:a0:57:94:ea:c3:c7:4c:14:57:81:00:57: 94:d3:b0:27:69:d7:48:02:6c:e5:97:f7:be:22:7c:38:24:af: b2:b0:7b:08:75:1e:ca:2e:c7:41:ef:8b:74:cf:c9:c3:6f:39: b9:52:41:18:c6:70:24:54:51:04:fe:5f:88:70:35:e5:1c:8e: d6:67:69:44:44:33:9b:8c:fe:a5:b9:95:48:66:84:f3:1a:04: ab:a3:57:c1:b6:b4:2f:28:12:45:2b:cb:42:d3:f4:a5:ce:7b: 6c:1f:e4:c8:a9:e7:d4:6d:c8:27:2d:69:26:c5:e8:73:10:54: 1f:c3:bf:fd:aa:f5:95:6f:f6:ca:d5:06:8f:1b:79:93:e3:86: ba:8d:fe:a8:10:8f:95:3e:14:09:bf:ca:88:59:e2:93:b6:ec: 
03:a9:7e:dd:1f:5f:13:d3:29:b3:a6:f3:6a:df:30:53:44:c8: cd:e5:82:57:bc:9c -----BEGIN CERTIFICATE----- MIIEJDCCAowCCQDLLYCZWmlSYTANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJY WTEmMCQGA1UECgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNV BAMMDW91ci1jYS1zZXJ2ZXIwHhcNMTgwODI5MTQyMzE2WhcNMzcxMDI4MTQyMzE2 WjBbMQswCQYDVQQGEwJYWTEXMBUGA1UEBwwOQ2FzdGxlIEFudGhyYXgxIzAhBgNV BAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMQ4wDAYDVQQDDAVub3NhbjCC AaIwDQYJKoZIhvcNAQEBBQADggGPADCCAYoCggGBAK/exSg4TgXs/J4BPZDpeles ZQ67V9iB2ci0ssuG+fsAK4cyPtMDlQpfGx6kmfmbYmL0u8BuSyvUhZ0tkC82CPCx RH4TgdtjXTwGFW32yTccWS8TQ44VsBrXggK1g97Xw//ge/pl+78YkNSvBcc0JS7S nqL+aiylnk7ezHkyBVpKGScYAQLcqnvM6C6wyYTxdJuQ1BrcBEyRbiiROpYoB4Ac ONUVXLKHNlCaj6BovOFZ8lVJmGUmy6IuhkYatVOrzkSJewTiXys9oj6YqRXNY/Xc hlb4XHW2IiS26Z0dZvU5Bau+bwCJP1vw9wUJRSu+FAVwsNIJmMqVDS1FatJIqutW xKV06t6pEu2mMbMUmkadNy+yUvus/mmDkOMGxiZVwOOuZG4m6EarnZf/ocQL3sai 5Ury1PUtUy/XC59U+f6L1PPDE65FZ5S9WosAmBZD03fIp3KYf5WulmT+gH1JzINq QZ7NkFbkJ45Bs0nxahfnyM3F58aO2orGbLO27jevcQIDAQABMA0GCSqGSIb3DQEB CwUAA4IBgQCRQsIVV0JHd+cPxVUmsVvDXrqB2+Gkn7hCWiHJjBiuD5CrmiTn0nj8 vZcpsVxGH1u40qeH8VBTW9O+V3S95XXbUIH3N5ULae85jFyC1WRi1Yvp4DHhc9Ja LN5DWgbl003QNeCfwnMxvDVp1Pt98Boz9/YlcpymhAUI9rXoBBDxH/KVraH42ICl 63VDmTOQDHn8wIcIlSCqwoELIm9W9I8qI/hARxwDpbEECgRK39CIqLwx8kKb2BEU nuNo6gcsFd7SNloVOO3Srw60th2gV5Tqw8dMFFeBAFeU07AnaddIAmzll/e+Inw4 JK+ysHsIdR7KLsdB74t0z8nDbzm5UkEYxnAkVFEE/l+IcDXlHI7WZ2lERDObjP6l uZVIZoTzGgSro1fBtrQvKBJFK8tC0/SlzntsH+TIqefUbcgnLWkmxehzEFQfw7/9 qvWVb/bK1QaPG3mT44a6jf6oEI+VPhQJv8qIWeKTtuwDqX7dH18T0ymzpvNq3zBT RMjN5YJXvJw= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/nullbytecert.pem000066400000000000000000000124731471441230600215000ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: 0 (0x0) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Validity Not Before: Aug 7 13:11:52 2013 GMT Not After : Aug 7 13:12:52 2013 GMT Subject: C=US, ST=Oregon, L=Beaverton, O=Python Software Foundation, OU=Python Core Development, CN=null.python.org\x00example.org/emailAddress=python-dev@python.org Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:b5:ea:ed:c9:fb:46:7d:6f:3b:76:80:dd:3a:f3: 03:94:0b:a7:a6:db:ec:1d:df:ff:23:74:08:9d:97: 16:3f:a3:a4:7b:3e:1b:0e:96:59:25:03:a7:26:e2: 88:a9:cf:79:cd:f7:04:56:b0:ab:79:32:6e:59:c1: 32:30:54:eb:58:a8:cb:91:f0:42:a5:64:27:cb:d4: 56:31:88:52:ad:cf:bd:7f:f0:06:64:1f:cc:27:b8: a3:8b:8c:f3:d8:29:1f:25:0b:f5:46:06:1b:ca:02: 45:ad:7b:76:0a:9c:bf:bb:b9:ae:0d:16:ab:60:75: ae:06:3e:9c:7c:31:dc:92:2f:29:1a:e0:4b:0c:91: 90:6c:e9:37:c5:90:d7:2a:d7:97:15:a3:80:8f:5d: 7b:49:8f:54:30:d4:97:2c:1c:5b:37:b5:ab:69:30: 68:43:d3:33:78:4b:02:60:f5:3c:44:80:a1:8f:e7: f0:0f:d1:5e:87:9e:46:cf:62:fc:f9:bf:0c:65:12: f1:93:c8:35:79:3f:c8:ec:ec:47:f5:ef:be:44:d5: ae:82:1e:2d:9a:9f:98:5a:67:65:e1:74:70:7c:cb: d3:c2:ce:0e:45:49:27:dc:e3:2d:d4:fb:48:0e:2f: 9e:77:b8:14:46:c0:c4:36:ca:02:ae:6a:91:8c:da: 2f:85 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Subject Key Identifier: 88:5A:55:C0:52:FF:61:CD:52:A3:35:0F:EA:5A:9C:24:38:22:F7:5C X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment X509v3 Subject Alternative Name: ************************************************************* WARNING: The values for DNS, email and URI are WRONG. OpenSSL doesn't print the text after a NULL byte. 
************************************************************* DNS:altnull.python.org, email:null@python.org, URI:http://null.python.org, IP Address:192.0.2.1, IP Address:2001:DB8:0:0:0:0:0:1 Signature Algorithm: sha1WithRSAEncryption ac:4f:45:ef:7d:49:a8:21:70:8e:88:59:3e:d4:36:42:70:f5: a3:bd:8b:d7:a8:d0:58:f6:31:4a:b1:a4:a6:dd:6f:d9:e8:44: 3c:b6:0a:71:d6:7f:b1:08:61:9d:60:ce:75:cf:77:0c:d2:37: 86:02:8d:5e:5d:f9:0f:71:b4:16:a8:c1:3d:23:1c:f1:11:b3: 56:6e:ca:d0:8d:34:94:e6:87:2a:99:f2:ae:ae:cc:c2:e8:86: de:08:a8:7f:c5:05:fa:6f:81:a7:82:e6:d0:53:9d:34:f4:ac: 3e:40:fe:89:57:7a:29:a4:91:7e:0b:c6:51:31:e5:10:2f:a4: 60:76:cd:95:51:1a:be:8b:a1:b0:fd:ad:52:bd:d7:1b:87:60: d2:31:c7:17:c4:18:4f:2d:08:25:a3:a7:4f:b7:92:ca:e2:f5: 25:f1:54:75:81:9d:b3:3d:61:a2:f7:da:ed:e1:c6:6f:2c:60: 1f:d8:6f:c5:92:05:ab:c9:09:62:49:a9:14:ad:55:11:cc:d6: 4a:19:94:99:97:37:1d:81:5f:8b:cf:a3:a8:96:44:51:08:3d: 0b:05:65:12:eb:b6:70:80:88:48:72:4f:c6:c2:da:cf:cd:8e: 5b:ba:97:2f:60:b4:96:56:49:5e:3a:43:76:63:04:be:2a:f6: c1:ca:a9:94 -----BEGIN CERTIFICATE----- MIIE2DCCA8CgAwIBAgIBADANBgkqhkiG9w0BAQUFADCBxTELMAkGA1UEBhMCVVMx DzANBgNVBAgMBk9yZWdvbjESMBAGA1UEBwwJQmVhdmVydG9uMSMwIQYDVQQKDBpQ eXRob24gU29mdHdhcmUgRm91bmRhdGlvbjEgMB4GA1UECwwXUHl0aG9uIENvcmUg RGV2ZWxvcG1lbnQxJDAiBgNVBAMMG251bGwucHl0aG9uLm9yZwBleGFtcGxlLm9y ZzEkMCIGCSqGSIb3DQEJARYVcHl0aG9uLWRldkBweXRob24ub3JnMB4XDTEzMDgw NzEzMTE1MloXDTEzMDgwNzEzMTI1MlowgcUxCzAJBgNVBAYTAlVTMQ8wDQYDVQQI DAZPcmVnb24xEjAQBgNVBAcMCUJlYXZlcnRvbjEjMCEGA1UECgwaUHl0aG9uIFNv ZnR3YXJlIEZvdW5kYXRpb24xIDAeBgNVBAsMF1B5dGhvbiBDb3JlIERldmVsb3Bt ZW50MSQwIgYDVQQDDBtudWxsLnB5dGhvbi5vcmcAZXhhbXBsZS5vcmcxJDAiBgkq hkiG9w0BCQEWFXB5dGhvbi1kZXZAcHl0aG9uLm9yZzCCASIwDQYJKoZIhvcNAQEB BQADggEPADCCAQoCggEBALXq7cn7Rn1vO3aA3TrzA5QLp6bb7B3f/yN0CJ2XFj+j pHs+Gw6WWSUDpybiiKnPec33BFawq3kyblnBMjBU61ioy5HwQqVkJ8vUVjGIUq3P vX/wBmQfzCe4o4uM89gpHyUL9UYGG8oCRa17dgqcv7u5rg0Wq2B1rgY+nHwx3JIv KRrgSwyRkGzpN8WQ1yrXlxWjgI9de0mPVDDUlywcWze1q2kwaEPTM3hLAmD1PESA oY/n8A/RXoeeRs9i/Pm/DGUS8ZPINXk/yOzsR/XvvkTVroIeLZqfmFpnZeF0cHzL 08LODkVJJ9zjLdT7SA4vnne4FEbAxDbKAq5qkYzaL4UCAwEAAaOB0DCBzTAMBgNV HRMBAf8EAjAAMB0GA1UdDgQWBBSIWlXAUv9hzVKjNQ/qWpwkOCL3XDALBgNVHQ8E BAMCBeAwgZAGA1UdEQSBiDCBhYIeYWx0bnVsbC5weXRob24ub3JnAGV4YW1wbGUu Y29tgSBudWxsQHB5dGhvbi5vcmcAdXNlckBleGFtcGxlLm9yZ4YpaHR0cDovL251 bGwucHl0aG9uLm9yZwBodHRwOi8vZXhhbXBsZS5vcmeHBMAAAgGHECABDbgAAAAA AAAAAAAAAAEwDQYJKoZIhvcNAQEFBQADggEBAKxPRe99SaghcI6IWT7UNkJw9aO9 i9eo0Fj2MUqxpKbdb9noRDy2CnHWf7EIYZ1gznXPdwzSN4YCjV5d+Q9xtBaowT0j HPERs1ZuytCNNJTmhyqZ8q6uzMLoht4IqH/FBfpvgaeC5tBTnTT0rD5A/olXeimk kX4LxlEx5RAvpGB2zZVRGr6LobD9rVK91xuHYNIxxxfEGE8tCCWjp0+3ksri9SXx VHWBnbM9YaL32u3hxm8sYB/Yb8WSBavJCWJJqRStVRHM1koZlJmXNx2BX4vPo6iW RFEIPQsFZRLrtnCAiEhyT8bC2s/Njlu6ly9gtJZWSV46Q3ZjBL4q9sHKqZQ= -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/nullcert.pem000066400000000000000000000000001471441230600205730ustar00rootroot00000000000000gevent-24.11.1/src/greentest/3.9/pycacert.pem000066400000000000000000000130401471441230600205650ustar00rootroot00000000000000Certificate: Data: Version: 3 (0x2) Serial Number: cb:2d:80:99:5a:69:52:5b Signature Algorithm: sha256WithRSAEncryption Issuer: C=XY, O=Python Software Foundation CA, CN=our-ca-server Validity Not Before: Aug 29 14:23:16 2018 GMT Not After : Oct 28 14:23:16 2037 GMT Subject: C=XY, O=Python Software Foundation CA, CN=our-ca-server Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (3072 bit) Modulus: 00:b1:84:d3:4f:5c:04:80:91:4f:82:49:ba:30:0b: f7:e8:cb:f9:14:ef:3d:9f:0b:3f:0a:62:fc:1b:20: 
a5:20:d1:60:5f:87:5a:1f:16:d1:ed:97:70:a6:da: 1b:03:2c:7e:a0:5b:3c:4e:2f:16:7e:0e:89:29:89: e1:10:0d:38:da:6a:77:5f:37:13:b3:28:8f:7b:5c: 76:ad:9e:e8:d3:f5:9e:f5:83:aa:10:07:8d:e6:51: 98:f0:7c:0d:52:f2:0c:21:1e:d8:b9:99:26:a9:25: 03:27:bb:5c:ab:2e:33:27:a2:d6:23:a8:83:87:44: 29:9f:97:b5:24:6f:d7:b9:0a:fd:28:ee:bb:fb:41: 58:ea:1d:99:dd:44:86:ab:98:be:1c:dc:cb:a9:89: 1d:36:5c:a9:e8:47:b5:f4:52:48:aa:b5:a4:67:ef: 3e:d7:e2:d3:33:de:98:29:d8:7a:b0:59:5c:e7:b1: 0e:cc:fd:9f:eb:f6:d5:3a:0e:0b:cf:fe:0b:3d:a2: bf:45:18:ce:94:e7:a9:55:60:88:d4:d8:84:50:79: 05:2e:41:03:74:ae:67:26:f6:5b:12:08:98:ce:0a: 97:ed:01:0f:89:4f:17:5c:fa:3e:1d:35:24:47:92: 32:bf:f7:a4:18:2b:3c:d0:48:99:e1:a2:cd:a3:cc: 50:53:20:b5:c6:e3:66:85:7b:57:10:ec:33:4f:c1: 77:e7:1b:7e:81:c6:c4:f3:45:20:c0:91:dd:13:76: 7b:03:af:f6:76:8e:a2:83:63:57:dd:63:bc:bb:5a: 1c:17:52:8a:d6:06:48:cc:0f:c7:d3:4f:e8:da:22: 6c:86:f9:4e:5c:a6:29:07:3b:d8:56:4c:59:b3:20: 49:07:7b:94:84:cf:2b:c3:1c:1a:4e:87:64:92:ba: 42:e1:e6:ad:7d:1d:f6:54:90:6f:2b:e9:b3:cc:4b: 2b:33:26:23:fd:65:c0:3c:f0:79:ad:c9:c1:81:ef: 37:04:e0:27:3e:b0:ee:15:be:51 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Key Identifier: B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Authority Key Identifier: keyid:B3:8A:A0:A2:BA:71:F1:A8:24:79:D4:A4:5B:25:36:15:1E:49:C8:CD X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha256WithRSAEncryption 6b:32:2f:e7:05:18:ea:5c:c9:95:f4:e0:c2:0c:41:5f:1a:0a: 95:c9:c7:7d:05:ee:8a:56:29:35:50:40:b7:fe:9f:7b:5b:1c: c3:69:2f:a0:cb:d2:b8:91:2f:50:19:62:f7:27:18:6d:95:7b: 53:16:15:a2:5a:dc:14:e3:fb:b1:32:a9:69:db:a6:33:47:3c: bb:1f:d2:dc:70:f9:6a:2e:0c:d8:8c:6d:e5:5d:1d:43:3c:4e: 91:de:a0:c8:da:a0:4b:0e:9d:5e:b6:0f:4a:49:f0:7b:b6:53: 9e:fd:35:14:5b:e3:4d:b4:18:a6:36:61:e8:8f:33:9b:d4:05: f9:54:66:df:e0:cb:18:a3:4e:dc:17:a8:a0:b3:c1:a8:f4:d6: 9d:ca:7f:68:53:1a:d7:95:da:e8:d3:9e:48:00:71:95:99:11: 07:cf:96:c0:7d:ce:7d:30:e8:4f:e1:83:16:33:a1:ff:59:9b: 3e:4c:e7:3a:38:01:9f:0f:67:4c:fd:2d:8b:4a:d4:01:46:37: 33:e8:13:6b:15:a9:1d:68:76:45:a2:82:33:69:26:30:60:05: c8:8f:bd:b4:75:ab:be:7a:8b:48:68:70:40:b4:1b:51:c5:e6: 7a:ad:6b:4f:db:17:c0:60:67:2e:63:61:9b:2c:48:99:b8:76: 45:a0:9e:cc:ef:33:1e:50:4e:ab:72:c3:65:c8:b2:79:b3:35: 83:21:78:d3:8b:6c:3a:18:e8:65:32:39:b8:c0:9d:71:2f:35: 36:8a:c0:17:62:d8:8b:3e:e1:22:18:2b:4c:63:a6:0e:9d:0a: fa:ab:5b:35:fb:88:91:77:4c:8d:8c:9d:a9:cf:fc:ab:c2:e6: 5a:05:7b:7e:04:6e:39:cf:93:ce:67:3b:7a:cb:af:b6:36:e1: fb:71:64:45:d4:a6:f0:ce:ef:75:04:99:69:9a:e5:88:0a:10: 02:74:89:ec:75:84:44:80:48:df:c1:f7:e9:37:ce:ce:92:92: 5c:89:22:08:73:1f -----BEGIN CERTIFICATE----- MIIEbTCCAtWgAwIBAgIJAMstgJlaaVJbMA0GCSqGSIb3DQEBCwUAME0xCzAJBgNV BAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUgRm91bmRhdGlvbiBDQTEW MBQGA1UEAwwNb3VyLWNhLXNlcnZlcjAeFw0xODA4MjkxNDIzMTZaFw0zNzEwMjgx NDIzMTZaME0xCzAJBgNVBAYTAlhZMSYwJAYDVQQKDB1QeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbiBDQTEWMBQGA1UEAwwNb3VyLWNhLXNlcnZlcjCCAaIwDQYJKoZI hvcNAQEBBQADggGPADCCAYoCggGBALGE009cBICRT4JJujAL9+jL+RTvPZ8LPwpi /BsgpSDRYF+HWh8W0e2XcKbaGwMsfqBbPE4vFn4OiSmJ4RANONpqd183E7Moj3tc dq2e6NP1nvWDqhAHjeZRmPB8DVLyDCEe2LmZJqklAye7XKsuMyei1iOog4dEKZ+X tSRv17kK/Sjuu/tBWOodmd1EhquYvhzcy6mJHTZcqehHtfRSSKq1pGfvPtfi0zPe mCnYerBZXOexDsz9n+v21ToOC8/+Cz2iv0UYzpTnqVVgiNTYhFB5BS5BA3SuZyb2 WxIImM4Kl+0BD4lPF1z6Ph01JEeSMr/3pBgrPNBImeGizaPMUFMgtcbjZoV7VxDs M0/Bd+cbfoHGxPNFIMCR3RN2ewOv9naOooNjV91jvLtaHBdSitYGSMwPx9NP6Noi bIb5TlymKQc72FZMWbMgSQd7lITPK8McGk6HZJK6QuHmrX0d9lSQbyvps8xLKzMm I/1lwDzwea3JwYHvNwTgJz6w7hW+UQIDAQABo1AwTjAdBgNVHQ4EFgQUs4qgorpx 
8agkedSkWyU2FR5JyM0wHwYDVR0jBBgwFoAUs4qgorpx8agkedSkWyU2FR5JyM0w DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAYEAazIv5wUY6lzJlfTgwgxB XxoKlcnHfQXuilYpNVBAt/6fe1scw2kvoMvSuJEvUBli9ycYbZV7UxYVolrcFOP7 sTKpadumM0c8ux/S3HD5ai4M2Ixt5V0dQzxOkd6gyNqgSw6dXrYPSknwe7ZTnv01 FFvjTbQYpjZh6I8zm9QF+VRm3+DLGKNO3BeooLPBqPTWncp/aFMa15Xa6NOeSABx lZkRB8+WwH3OfTDoT+GDFjOh/1mbPkznOjgBnw9nTP0ti0rUAUY3M+gTaxWpHWh2 RaKCM2kmMGAFyI+9tHWrvnqLSGhwQLQbUcXmeq1rT9sXwGBnLmNhmyxImbh2RaCe zO8zHlBOq3LDZciyebM1gyF404tsOhjoZTI5uMCdcS81NorAF2LYiz7hIhgrTGOm Dp0K+qtbNfuIkXdMjYydqc/8q8LmWgV7fgRuOc+Tzmc7esuvtjbh+3FkRdSm8M7v dQSZaZrliAoQAnSJ7HWERIBI38H36TfOzpKSXIkiCHMf -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/pycakey.pem000066400000000000000000000046641471441230600204340ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/gIBADANBgkqhkiG9w0BAQEFAASCBugwggbkAgEAAoIBgQCxhNNPXASAkU+C SbowC/foy/kU7z2fCz8KYvwbIKUg0WBfh1ofFtHtl3Cm2hsDLH6gWzxOLxZ+Dokp ieEQDTjaandfNxOzKI97XHatnujT9Z71g6oQB43mUZjwfA1S8gwhHti5mSapJQMn u1yrLjMnotYjqIOHRCmfl7Ukb9e5Cv0o7rv7QVjqHZndRIarmL4c3MupiR02XKno R7X0UkiqtaRn7z7X4tMz3pgp2HqwWVznsQ7M/Z/r9tU6DgvP/gs9or9FGM6U56lV YIjU2IRQeQUuQQN0rmcm9lsSCJjOCpftAQ+JTxdc+j4dNSRHkjK/96QYKzzQSJnh os2jzFBTILXG42aFe1cQ7DNPwXfnG36BxsTzRSDAkd0TdnsDr/Z2jqKDY1fdY7y7 WhwXUorWBkjMD8fTT+jaImyG+U5cpikHO9hWTFmzIEkHe5SEzyvDHBpOh2SSukLh 5q19HfZUkG8r6bPMSyszJiP9ZcA88HmtycGB7zcE4Cc+sO4VvlECAwEAAQKCAYB7 gUnzALYxLOgAYYMkQm9si9zz768TpCNr+ooj5YZ9Wq6OSAEveBT+FErQCxaYErDW qCNA0gn4Eezj9YWcQVa4vzHmEM+n6iRJU39ONC0Qqua5Ma10EY1sHIEnb2dlufku YeOu3RrEu3eCgRxsDGySuvv5OxinV4kN++KPQzD3EOopPE+U81YFLCsMgsyfPlmm gwc/IKIuXDHp5Vp2bXkZK98CYLV8RddjUw7SrkZNwx6cI9eET0CgTs7y4SrevoOy jCdnA0j1HvL8AbLQuYoXo9fdGYDeq55hyYlxSMYLaEToZG3DJ0UAldrT+r7x52D8 2QMnJUo2XHzVYPlXPJIAkFJisZZ36TkBvywCgXZMMLibPo9U6V0nfkybTtXKoory nmgBv+XSGSNrVWMiygpDPqpX1G6bBgqUX3CiTlxtSkYYz1M4Vgj2cux5XEPTnVCq CLVzvNIXZt1RyzXPxGWpPidCjOaiWBRT4u1Dol9fs3PmVvDaRxcKo9nspiUHCfEC gcEA4GgxZ+IJwpAMHkdYId0oxjKgTqIg+Ua+EwfUoQT10ERl/k/V4cDwJRHT8lML rKhTNQJMEE040jq+6mPJDl1KqMb/v05Q7fF22ToGw1HkZwK52O6CeEiJW4/J6bR1 pZGN0irsa6GvzV65Y6gZVFEUl0JPRf8wPvQHXsWAw8/2LuXkXjV0ieIMq4pbWJf4 kaid7dYLHnobiP9RVk7BGr7ifmCshoPjWp4TRMwYf6iIZrqMxUSX0QY8Xsqx6bch LLx/AoHBAMqCvvwUKTrF4gKh5jyl6T6DTZ/Dujaz7BuAJdsSSHvuTa/Y1EfsQHZN jABn89ZqHYDiyyCuVFO3dqhLtsPjhyFMSXj+98JYcL3FGKnqQqRTwtzzx2P2lV5X U0WhrNRb3iLu79Tr8pE/2EPnvTr+J5b0DHEeRyM72LWs43zrDYHorH0/Aa5Qd37F gDLCTBEl8jO5irRuAIq/KV9ZFnn8JDjNGVpXgHPW3354ON1YaMLnPASk7FQizSOQ QZAsyxtdLwKBwGUosvTYYXvygXP4x1LkpmfKFJe94E1exXpAsmovmTvcSXn9tTXC Sr77LWb0ZrPbYT7pHS7QEMg8MSnp941hIrG4mzs666KHkgLUdI4B0YtaIDsZMXlV gY3j4KpYbhxH4/2U2eSfC2fxxnKVKW3n6vdQrfmo0q/eQ6BGOgiLK7fybCLHyBQL 8Zg2k3z5bNUEhMTdE0AW3WjBZ4IXmFcdK26616r/szJ7RcZilrydVXexqpmWlTVl sTst9kucAPlwswKBwQCwf7my/GNezR8Jik+fZj7edBQQfcdra+8JnOvhfpLcKLte 2s1RjjA0q6usou1bYAsszP2bEzV97XWmgq7dFg4tUE7s/NO1d91zGDhBx2Gj1TkN 2A5dKonOuq9iDeITB6qYqcUvvyEfxRRZQr2jj+WzZCr/4BLCO6PJ29A9jKOuKLtF QcfWRF2RiNMN6lffzkHFIR4p2YHxa2DEsGGtmbt8Ig3Jtl/HFmydzmxJRoev71dY +ODdB6PhLhZmcRPoWpMCgcEAhGArwL68GwwRMqAX79gMv8tVT0CJnDyGk5mD/ZIB Nzt0yQFO7rTEa1l1vAtOiVJ9IpAak2lgbEwodOfGnQst7lujNYDFzTRPTFt/lID1 u6JBxmqawOSlqa00bt4l2YsTZV+BfSznBP6XO1PK4iR3o5G3NkoKJjZWm3e3asHk 6eTeMLcsIJ+Fp7gG0ve2EdQwhVSVMFEu4Q4C2FcJeU++L4kYpY7sTnAjUtiLvtHn yp3jllEn3CBD8Uhs4B+sL/6p -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.9/revocation.crl000066400000000000000000000014401471441230600211240ustar00rootroot00000000000000-----BEGIN X509 CRL----- MIICJjCBjwIBATANBgkqhkiG9w0BAQsFADBNMQswCQYDVQQGEwJYWTEmMCQGA1UE CgwdUHl0aG9uIFNvZnR3YXJlIEZvdW5kYXRpb24gQ0ExFjAUBgNVBAMMDW91ci1j 
YS1zZXJ2ZXIXDTIxMDMxNzA4NDgyMFoXDTQwMDUxNjA4NDgyMFqgDjAMMAoGA1Ud FAQDAgEAMA0GCSqGSIb3DQEBCwUAA4IBgQCd2GrHb4zr2R8eK7YMHwlkgICxbWP1 4nuEi55yzUcmMcCZJ6ZQV3yYqTlAULGQ9qWAUdhsyH+yu3hRKFKHQv0DAdKKxgow 66YasAQQ99DskXOPxmRoIA7qtIWZbLtBwHQJWh+uUFlTdUXitGIX5Xie74xu5YIr moa3QeuZyG5+gigSTUyst5T/J/cHfBzlAJLc2k3Ty4EPYXKHCVnrZWJbRmxq199l A7S+eBb9qWXSYXCn6v+EZ76pUS3u/66kZ86PO3h9294BzdhxbCJ27dQXNHw6owe2 Iyiv0aWx+TNSGSf4yCqaYTH6RtEoviI3h/inVFHNGgjlMzdaGw/0I3bkB0rt2WSR Vck37HnXvQvVEkgO/39C0WKZus6m4gmOgZcbJbXaR8uIR5Hmw3SEyGEPEIBu6tXV BLJOSOSu2vVUH5GUIrpvK9FTySKYa+MGryoPasuqZNfwpaXK+ON2G6QsmcXPWZY0 Dry6t0w2geW6UYVGmb831i8ZP3JVVVwcwi0= -----END X509 CRL----- gevent-24.11.1/src/greentest/3.9/secp384r1.pem000066400000000000000000000004001471441230600204030ustar00rootroot00000000000000$ openssl genpkey -genparam -algorithm EC -pkeyopt ec_paramgen_curve:secp384r1 -pkeyopt ec_param_enc:named_curve -text -----BEGIN EC PARAMETERS----- BgUrgQQAIg== -----END EC PARAMETERS----- ECDSA-Parameters: (384 bit) ASN1 OID: secp384r1 NIST CURVE: P-384 gevent-24.11.1/src/greentest/3.9/selfsigned_pythontestdotnet.pem000066400000000000000000000041221471441230600246160ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIF9zCCA9+gAwIBAgIUH98b4Fw/DyugC9cV7VK7ZODzHsIwDQYJKoZIhvcNAQEL BQAwgYoxCzAJBgNVBAYTAlhZMRcwFQYDVQQIDA5DYXN0bGUgQW50aHJheDEYMBYG A1UEBwwPQXJndW1lbnQgQ2xpbmljMSMwIQYDVQQKDBpQeXRob24gU29mdHdhcmUg Rm91bmRhdGlvbjEjMCEGA1UEAwwac2VsZi1zaWduZWQucHl0aG9udGVzdC5uZXQw HhcNMTkwNTA4MDEwMjQzWhcNMjcwNzI0MDEwMjQzWjCBijELMAkGA1UEBhMCWFkx FzAVBgNVBAgMDkNhc3RsZSBBbnRocmF4MRgwFgYDVQQHDA9Bcmd1bWVudCBDbGlu aWMxIzAhBgNVBAoMGlB5dGhvbiBTb2Z0d2FyZSBGb3VuZGF0aW9uMSMwIQYDVQQD DBpzZWxmLXNpZ25lZC5weXRob250ZXN0Lm5ldDCCAiIwDQYJKoZIhvcNAQEBBQAD ggIPADCCAgoCggIBAMKdJlyCThkahwoBb7pl5q64Pe9Fn5jrIvzsveHTc97TpjV2 RLfICnXKrltPk/ohkVl6K5SUZQZwMVzFubkyxE0nZPHYHlpiKWQxbsYVkYv01rix IFdLvaxxbGYke2jwQao31s4o61AdlsfK1SdpHQUynBBMssqI3SB4XPmcA7e+wEEx jxjVish4ixA1vuIZOx8yibu+CFCf/geEjoBMF3QPdzULzlrCSw8k/45iZCSoNbvK DoL4TVV07PHOxpheDh8ZQmepGvU6pVqhb9m4lgmV0OGWHgozd5Ur9CbTVDmxIEz3 TSoRtNJK7qtyZdGNqwjksQxgZTjM/d/Lm/BJG99AiOmYOjsl9gbQMZgvQmMAtUsI aMJnQuZ6R+KEpW/TR5qSKLWZSG45z/op+tzI2m+cE6HwTRVAWbcuJxcAA55MZjqU OOOu3BBYMjS5nf2sQ9uoXsVBFH7i0mQqoW1SLzr9opI8KsWwFxQmO2vBxWYaN+lH OmwBZBwyODIsmI1YGXmTp09NxRYz3Qe5GCgFzYowpMrcxUC24iduIdMwwhRM7rKg 7GtIWMSrFfuI1XCLRmSlhDbhNN6fVg2f8Bo9PdH9ihiIyxSrc+FOUasUYCCJvlSZ 8hFUlLvcmrZlWuazohm0lsXuMK1JflmQr/DA/uXxP9xzFfRy+RU3jDyxJbRHAgMB AAGjUzBRMB0GA1UdDgQWBBSQJyxiPMRK01i+0BsV9zUwDiBaHzAfBgNVHSMEGDAW gBSQJyxiPMRK01i+0BsV9zUwDiBaHzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3 DQEBCwUAA4ICAQCR+7a7N/m+WLkxPPIA/CB4MOr2Uf8ixTv435Nyv6rXOun0+lTP ExSZ0uYQ+L0WylItI3cQHULldDueD+s8TGzxf5woaLKf6tqyr0NYhKs+UeNEzDnN 9PHQIhX0SZw3XyXGUgPNBfRCg2ZDdtMMdOU4XlQN/IN/9hbYTrueyY7eXq9hmtI9 1srftAMqr9SR1JP7aHI6DVgrEsZVMTDnfT8WmLSGLlY1HmGfdEn1Ip5sbo9uSkiH AEPgPfjYIvR5LqTOMn4KsrlZyBbFIDh9Sl99M1kZzgH6zUGVLCDg1y6Cms69fx/e W1HoIeVkY4b4TY7Bk7JsqyNhIuqu7ARaxkdaZWhYaA2YyknwANdFfNpfH+elCLIk BUt5S3f4i7DaUePTvKukCZiCq4Oyln7RcOn5If73wCeLB/ZM9Ei1HforyLWP1CN8 XLfpHaoeoPSWIveI0XHUl65LsPN2UbMbul/F23hwl+h8+BLmyAS680Yhn4zEN6Ku B7Po90HoFa1Du3bmx4jsN73UkT/dwMTi6K072FbipnC1904oGlWmLwvAHvrtxxmL Pl3pvEaZIu8wa/PNF6Y7J7VIewikIJq6Ta6FrWeFfzMWOj2qA1ZZi6fUaDSNYvuV J5quYKCc/O+I/yDDf8wyBbZ/gvUXzUHTMYGG+bFrn1p7XDbYYeEJ6R/xEg== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/ssl_cert.pem000066400000000000000000000030421471441230600205720ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIEWTCCAsGgAwIBAgIJAJinz4jHSjLtMA0GCSqGSIb3DQEBCwUAMF8xCzAJBgNV 
BAYTAlhZMRcwFQYDVQQHDA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9u IFNvZnR3YXJlIEZvdW5kYXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xODA4 MjkxNDIzMTVaFw0yODA4MjYxNDIzMTVaMF8xCzAJBgNVBAYTAlhZMRcwFQYDVQQH DA5DYXN0bGUgQW50aHJheDEjMCEGA1UECgwaUHl0aG9uIFNvZnR3YXJlIEZvdW5k YXRpb24xEjAQBgNVBAMMCWxvY2FsaG9zdDCCAaIwDQYJKoZIhvcNAQEBBQADggGP ADCCAYoCggGBALKUqUtopT6E68kN+uJNEt34i2EbmG/bwjcD8IaMsgJPSsMO2Bpd 3S6qWgkCeOyCfmAwBxK2kNbxGb63ouysEv7l8GCTJTWv3hG/HQcejJpnAEGi6K1U fDbyE/db6yZ12SoHVTGkadN4vYGCPd1Wj9ZO1F877SHQ8rDWX3xgTWkxN2ojBw44 T8RHSDiG8D/CvG4uEy+VUszL+Uvny5y2poNSqvI3J56sptWSrh8nIIbkPZPBdUne LYMOHTFK3ZjXSmhlXgziTxK71nnzM3Y9K9gxPnRqoXbvu/wFo55hQCkETiRkYgmm jXcBMZ0TClQVnQWuLjMthRnWFZs4Lfmwqjs7FZD/61581R2BYehvpWbLvvuOJhwv DFzexL2sXcAl7SsxbzeQKRHqGbIDfbnQTXfs3/VC6Ye5P82P2ucj+XC32N9piRmO gCBP8L3ub+YzzdxikZN2gZXXE2jsb3QyE/R2LkWdWyshpKe+RsZP1SBRbHShUyOh yJ90baoiEwj2mwIDAQABoxgwFjAUBgNVHREEDTALgglsb2NhbGhvc3QwDQYJKoZI hvcNAQELBQADggGBAHRUO/UIHl3jXQENewYayHxkIx8t7nu40iO2DXbicSijz5bo 5//xAB6RxhBAlsDBehgQP1uoZg+WJW+nHu3CIVOU3qZNZRaozxiCl2UFKcNqLOmx R3NKpo1jYf4REQIeG8Yw9+hSWLRbshNteP6bKUUf+vanhg9+axyOEOH/iOQvgk/m b8wA8wNa4ujWljPbTQnj7ry8RqhTM0GcAN5LSdSvcKcpzLcs3aYwh+Z8e30sQWna F40sa5u7izgBTOrwpcDm/w5kC46vpRQ5fnbshVw6pne2by0mdMECASid/p25N103 jMqTFlmO7kpf/jpCSmamp3/JSEE1BJKHwQ6Ql4nzRA2N1mnvWH7Zxcv043gkHeAu 0x8evpvwuhdIyproejNFlBpKmW8OX7yKTCPPMC/VkX8Q1rVkxU0DQ6hmvwZlhoKa 9Wc2uXpw9xF8itV4Uvcdr3dwqByvIqn7iI/gB+4l41e0u8OmH2MKOx4Nxlly5TNW HcVKQHyOeyvnINuBAQ== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/ssl_key.passwd.pem000066400000000000000000000051361471441230600217330ustar00rootroot00000000000000-----BEGIN ENCRYPTED PRIVATE KEY----- MIIHbTBXBgkqhkiG9w0BBQ0wSjApBgkqhkiG9w0BBQwwHAQI072N7W+PDDMCAggA MAwGCCqGSIb3DQIJBQAwHQYJYIZIAWUDBAEqBBA/AuaRNi4vE4KGqI4In+70BIIH ENGS5Vex5NID873frmd1UZEHZ+O/Bd0wDb+NUpIqesHkRYf7kKi6Gnr+nKQ/oVVn Lm3JjE7c8ECP0OkOOXmiXuWL1SkzBBWqCI4stSGUPvBiHsGwNnvJAaGjUffgMlcC aJOA2+dnejLkzblq4CB2LQdm06N3Xoe9tyqtQaUHxfzJAf5Ydd8uj7vpKN2MMhY7 icIPJwSyh0N7S6XWVtHEokr9Kp4y2hS5a+BgCWV1/1z0aF7agnSVndmT1VR+nWmc lM14k+lethmHMB+fsNSjnqeJ7XOPlOTHqhiZ9bBSTgF/xr5Bck/NiKRzHjdovBox TKg+xchaBhpRh7wBPBIlNJeHmIjv+8obOKjKU98Ig/7R9+IryZaNcKAH0PuOT+Sw QHXiCGQbOiYHB9UyhDTWiB7YVjd8KHefOFxfHzOQb/iBhbv1x3bTl3DgepvRN6VO dIsPLoIZe42sdf9GeMsk8mGJyZUQ6AzsfhWk3grb/XscizPSvrNsJ2VL1R7YTyT3 3WA4ZXR1EqvXnWL7N/raemQjy62iOG6t7fcF5IdP9CMbWP+Plpsz4cQW7FtesCTq a5ZXraochQz361ODFNIeBEGU+0qqXUtZDlmos/EySkZykSeU/L0bImS62VGE3afo YXBmznTTT9kkFkqv7H0MerfJsrE/wF8puP3GM01DW2JRgXRpSWlvbPV/2LnMtRuD II7iH4rWDtTjCN6BWKAgDOnPkc9sZ4XulqT32lcUeV6LTdMBfq8kMEc8eDij1vUT maVCRpuwaq8EIT3lVgNLufHiG96ojlyYtj3orzw22IjkgC/9ee8UDik9CqbMVmFf fVHhsw8LNSg8Q4bmwm5Eg2w2it2gtI68+mwr75oCxuJ/8OMjW21Prj8XDh5reie2 c0lDKQOFZ9UnLU1bXR/6qUM+JFKR4DMq+fOCuoQSVoyVUEOsJpvBOYnYZN9cxsZm vh9dKafMEcKZ8flsbr+gOmOw7+Py2ifSlf25E/Frb1W4gtbTb0LQVHb6+drutrZj 8HEu4CnHYFCD4ZnOJb26XlZCb8GFBddW86yJYyUqMMV6Q1aJfAOAglsTo1LjIMOZ byo0BTAmwUevU/iuOXQ4qRBXXcoidDcTCrxfUSPG9wdt9l+m5SdQpWqfQ+fx5O7m SLlrHyZCiPSFMtC9DxqjIklHjf5W3wslGLgaD30YXa4VDYkRihf3CNsxGQ+tVvef l0ZjoAitF7Gaua06IESmKnpHe23dkr1cjYq+u2IV+xGH8LeExdwsQ9kpuTeXPnQs JOA99SsFx1ct32RrwjxnDDsiNkaViTKo9GDkV3jQTfoFgAVqfSgg9wGXpqUqhNG7 TiSIHCowllLny2zn4XrXCy2niD3VDt0skb3l/PaegHE2z7S5YY85nQtYwpLiwB9M SQ08DYKxPBZYKtS2iZ/fsA1gjSRQDPg/SIxMhUC3M3qH8iWny1Lzl25F2Uq7VVEX LdTUtaby49jRTT3CQGr5n6z7bMbUegiY7h8WmOekuThGDH+4xZp6+rDP4GFk4FeK JcF70vMQYIjQZhadic6olv+9VtUP42ltGG/yP9a3eWRkzfAf2eCh6B1rYdgEWwE8 rlcZzwM+y6eUmeNF2FVWB8iWtTMQHy+dYNPM+Jtus1KQKxiiq/yCRs7nWvzWRFWA HRyqV0J6/lqgm4FvfktFt1T0W+mDoLJOR2/zIwMy2lgL5zeHuR3SaMJnCikJbqKS 
HB3UvrhAWUcZqdH29+FhVWeM7ybyF1Wccmf+IIC/ePLa6gjtqPV8lG/5kbpcpnB6 UQY8WWaKMxyr3jJ9bAX5QKshchp04cDecOLZrpFGNNQngR8RxSEkiIgAqNxWunIu KrdBDrupv/XAgEOclmgToY3iywLJSV5gHAyHWDUhRH4cFCLiGPl4XIcnXOuTze3H 3j+EYSiS3v3DhHjp33YU2pXlJDjiYsKzAXejEh66++Y8qaQdCAad3ruWRCzW3kgk Md0A1VGzntTnQsewvExQEMZH2LtYIsPv3KCYGeSAuLabX4tbGk79PswjnjLLEOr0 Ghf6RF6qf5/iFyJoG4vrbKT8kx6ywh0InILCdjUunuDskIBxX6tEcr9XwajoIvb2 kcmGdjam5kKLS7QOWQTl8/r/cuFes0dj34cX5Qpq+Gd7tRq/D+b0207926Cxvftv qQ1cVn8HiLxKkZzd3tpf2xnoV1zkTL0oHrNg+qzxoxXUTUcwtIf1d/HRbYEAhi/d bBBoFeftEHWNq+sJgS9bH+XNzo/yK4u04B5miOq8v4CSkJdzu+ZdF22d4cjiGmtQ 8BTmcn0Unzm+u5H0+QSZe54QBHJGNXXOIKMTkgnOdW27g4DbI1y7fCqJiSMbRW6L oHmMfbdB3GWqGbsUkhY8i6h9op0MU6WOX7ea2Rxyt4t6 -----END ENCRYPTED PRIVATE KEY----- gevent-24.11.1/src/greentest/3.9/ssl_key.pem000066400000000000000000000046701471441230600204350ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIG/wIBADANBgkqhkiG9w0BAQEFAASCBukwggblAgEAAoIBgQCylKlLaKU+hOvJ DfriTRLd+IthG5hv28I3A/CGjLICT0rDDtgaXd0uqloJAnjsgn5gMAcStpDW8Rm+ t6LsrBL+5fBgkyU1r94Rvx0HHoyaZwBBouitVHw28hP3W+smddkqB1UxpGnTeL2B gj3dVo/WTtRfO+0h0PKw1l98YE1pMTdqIwcOOE/ER0g4hvA/wrxuLhMvlVLMy/lL 58uctqaDUqryNyeerKbVkq4fJyCG5D2TwXVJ3i2DDh0xSt2Y10poZV4M4k8Su9Z5 8zN2PSvYMT50aqF277v8BaOeYUApBE4kZGIJpo13ATGdEwpUFZ0Fri4zLYUZ1hWb OC35sKo7OxWQ/+tefNUdgWHob6Vmy777jiYcLwxc3sS9rF3AJe0rMW83kCkR6hmy A3250E137N/1QumHuT/Nj9rnI/lwt9jfaYkZjoAgT/C97m/mM83cYpGTdoGV1xNo 7G90MhP0di5FnVsrIaSnvkbGT9UgUWx0oVMjocifdG2qIhMI9psCAwEAAQKCAYBT sHmaPmNaZj59jZCqp0YVQlpHWwBYQ5vD3pPE6oCttm0p9nXt/VkfenQRTthOtmT1 POzDp00/feP7zeGLmqSYUjgRekPw4gdnN7Ip2PY5kdW77NWwDSzdLxuOS8Rq1MW9 /Yu+ZPe3RBlDbT8C0IM+Atlh/BqIQ3zIxN4g0pzUlF0M33d6AYfYSzOcUhibOO7H j84r+YXBNkIRgYKZYbutRXuZYaGuqejRpBj3voVu0d3Ntdb6lCWuClpB9HzfGN0c RTv8g6UYO4sK3qyFn90ibIR/1GB9watvtoWVZqggiWeBzSWVWRsGEf9O+Cx4oJw1 IphglhmhbgNksbj7bD24on/icldSOiVkoUemUOFmHWhCm4PnB1GmbD8YMfEdSbks qDr1Ps1zg4mGOinVD/4cY7vuPFO/HCH07wfeaUGzRt4g0/yLr+XjVofOA3oowyxv JAzr+niHA3lg5ecj4r7M68efwzN1OCyjMrVJw2RAzwvGxE+rm5NiT08SWlKQZnkC gcEA4wvyLpIur/UB84nV3XVJ89UMNBLm++aTFzld047BLJtMaOhvNqx6Cl5c8VuW l261KHjiVzpfNM3/A2LBQJcYkhX7avkqEXlj57cl+dCWAVwUzKmLJTPjfaTTZnYJ xeN3dMYjJz2z2WtgvfvDoJLukVwIMmhTY8wtqqYyQBJ/l06pBsfw5TNvmVIOQHds 8ASOiFt+WRLk2bl9xrGGayqt3VV93KVRzF27cpjOgEcG74F3c0ZW9snERN7vIYwB JfrlAoHBAMlahPwMP2TYylG8OzHe7EiehTekSO26LGh0Cq3wTGXYsK/q8hQCzL14 kWW638vpwXL6L9ntvrd7hjzWRO3vX/VxnYEA6f0bpqHq1tZi6lzix5CTUN5McpDg QnjenSJNrNjS1zEF8WeY9iLEuDI/M/iUW4y9R6s3WpgQhPDXpSvd2g3gMGRUYhxQ Xna8auiJeYFq0oNaOxvJj+VeOfJ3ZMJttd+Y7gTOYZcbg3SdRb/kdxYki0RMD2hF 4ZvjJ6CTfwKBwQDiMqiZFTJGQwYqp4vWEmAW+I4r4xkUpWatoI2Fk5eI5T9+1PLX uYXsho56NxEU1UrOg4Cb/p+TcBc8PErkGqR0BkpxDMOInTOXSrQe6lxIBoECVXc3 HTbrmiay0a5y5GfCgxPKqIJhfcToAceoVjovv0y7S4yoxGZKuUEe7E8JY2iqRNAO yOvKCCICv/hcN235E44RF+2/rDlOltagNej5tY6rIFkaDdgOF4bD7f9O5eEni1Bg litfoesDtQP/3rECgcEAkQfvQ7D6tIPmbqsbJBfCr6fmoqZllT4FIJN84b50+OL0 mTGsfjdqC4tdhx3sdu7/VPbaIqm5NmX10bowWgWSY7MbVME4yQPyqSwC5NbIonEC d6N0mzoLR0kQ+Ai4u+2g82gicgAq2oj1uSNi3WZi48jQjHYFulCbo246o1NgeFFK 77WshYe2R1ioQfQDOU1URKCR0uTaMHClgfu112yiGd12JAD+aF3TM0kxDXz+sXI5 SKy311DFxECZeXRLpcC3AoHBAJkNMJWTyPYbeVu+CTQkec8Uun233EkXa2kUNZc/ 5DuXDaK+A3DMgYRufTKSPpDHGaCZ1SYPInX1Uoe2dgVjWssRL2uitR4ENabDoAOA ICVYXYYNagqQu5wwirF0QeaMXo1fjhuuHQh8GsMdXZvYEaAITZ9/NG5x/oY08+8H kr78SMBOPy3XQn964uKG+e3JwpOG14GKABdAlrHKFXNWchu/6dgcYXB87mrC/GhO zNwzC+QhFTZoOomFoqMgFWujng== -----END PRIVATE KEY----- gevent-24.11.1/src/greentest/3.9/talos-2019-0758.pem000066400000000000000000000024621471441230600210750ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- 
MIIDqDCCApKgAwIBAgIBAjALBgkqhkiG9w0BAQswHzELMAkGA1UEBhMCVUsxEDAO BgNVBAMTB2NvZHktY2EwHhcNMTgwNjE4MTgwMDU4WhcNMjgwNjE0MTgwMDU4WjA7 MQswCQYDVQQGEwJVSzEsMCoGA1UEAxMjY29kZW5vbWljb24tdm0tMi50ZXN0Lmxh bC5jaXNjby5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC63fGB J80A9Av1GB0bptslKRIUtJm8EeEu34HkDWbL6AJY0P8WfDtlXjlPaLqFa6sqH6ES V48prSm1ZUbDSVL8R6BYVYpOlK8/48xk4pGTgRzv69gf5SGtQLwHy8UPBKgjSZoD 5a5k5wJXGswhKFFNqyyxqCvWmMnJWxXTt2XDCiWc4g4YAWi4O4+6SeeHVAV9rV7C 1wxqjzKovVe2uZOHjKEzJbbIU6JBPb6TRfMdRdYOw98n1VXDcKVgdX2DuuqjCzHP WhU4Tw050M9NaK3eXp4Mh69VuiKoBGOLSOcS8reqHIU46Reg0hqeL8LIL6OhFHIF j7HR6V1X6F+BfRS/AgMBAAGjgdYwgdMwCQYDVR0TBAIwADAdBgNVHQ4EFgQUOktp HQjxDXXUg8prleY9jeLKeQ4wTwYDVR0jBEgwRoAUx6zgPygZ0ZErF9sPC4+5e2Io UU+hI6QhMB8xCzAJBgNVBAYTAlVLMRAwDgYDVQQDEwdjb2R5LWNhggkA1QEAuwb7 2s0wCQYDVR0SBAIwADAuBgNVHREEJzAlgiNjb2Rlbm9taWNvbi12bS0yLnRlc3Qu bGFsLmNpc2NvLmNvbTAOBgNVHQ8BAf8EBAMCBaAwCwYDVR0fBAQwAjAAMAsGCSqG SIb3DQEBCwOCAQEAvqantx2yBlM11RoFiCfi+AfSblXPdrIrHvccepV4pYc/yO6p t1f2dxHQb8rWH3i6cWag/EgIZx+HJQvo0rgPY1BFJsX1WnYf1/znZpkUBGbVmlJr t/dW1gSkNS6sPsM0Q+7HPgEv8CPDNK5eo7vU2seE0iWOkxSyVUuiCEY9ZVGaLVit p0C78nZ35Pdv4I+1cosmHl28+es1WI22rrnmdBpH8J1eY6WvUw2xuZHLeNVN0TzV Q3qq53AaCWuLOD1AjESWuUCxMZTK9DPS4JKXTK8RLyDeqOvJGjsSWp3kL0y3GaQ+ 10T1rfkKJub2+m9A9duin1fn6tHc2wSvB7m3DA== -----END CERTIFICATE----- gevent-24.11.1/src/greentest/3.9/test_asyncore.py000066400000000000000000000635341471441230600215210ustar00rootroot00000000000000import asyncore import unittest import select import os import socket import sys import time import errno import struct import threading from test import support from test.support import socket_helper from io import BytesIO if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") HAS_UNIX_SOCKETS = hasattr(socket, 'AF_UNIX') class dummysocket: def __init__(self): self.closed = False def close(self): self.closed = True def fileno(self): return 42 class dummychannel: def __init__(self): self.socket = dummysocket() def close(self): self.socket.close() class exitingdummy: def __init__(self): pass def handle_read_event(self): raise asyncore.ExitNow() handle_write_event = handle_read_event handle_close = handle_read_event handle_expt_event = handle_read_event class crashingdummy: def __init__(self): self.error_handled = False def handle_read_event(self): raise Exception() handle_write_event = handle_read_event handle_close = handle_read_event handle_expt_event = handle_read_event def handle_error(self): self.error_handled = True # used when testing senders; just collects what it gets until newline is sent def capture_server(evt, buf, serv): try: serv.listen() conn, addr = serv.accept() except socket.timeout: pass else: n = 200 start = time.monotonic() while n > 0 and time.monotonic() - start < 3.0: r, w, e = select.select([conn], [], [], 0.1) if r: n -= 1 data = conn.recv(10) # keep everything except for the newline terminator buf.write(data.replace(b'\n', b'')) if b'\n' in data: break time.sleep(0.01) conn.close() finally: serv.close() evt.set() def bind_af_aware(sock, addr): """Helper function to bind a socket according to its family.""" if HAS_UNIX_SOCKETS and sock.family == socket.AF_UNIX: # Make sure the path doesn't exist. 
support.unlink(addr) socket_helper.bind_unix_socket(sock, addr) else: sock.bind(addr) class HelperFunctionTests(unittest.TestCase): def test_readwriteexc(self): # Check exception handling behavior of read, write and _exception # check that ExitNow exceptions in the object handler method # bubbles all the way up through asyncore read/write/_exception calls tr1 = exitingdummy() self.assertRaises(asyncore.ExitNow, asyncore.read, tr1) self.assertRaises(asyncore.ExitNow, asyncore.write, tr1) self.assertRaises(asyncore.ExitNow, asyncore._exception, tr1) # check that an exception other than ExitNow in the object handler # method causes the handle_error method to get called tr2 = crashingdummy() asyncore.read(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore.write(tr2) self.assertEqual(tr2.error_handled, True) tr2 = crashingdummy() asyncore._exception(tr2) self.assertEqual(tr2.error_handled, True) # asyncore.readwrite uses constants in the select module that # are not present in Windows systems (see this thread: # http://mail.python.org/pipermail/python-list/2001-October/109973.html) # These constants should be present as long as poll is available @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') def test_readwrite(self): # Check that correct methods are called by readwrite() attributes = ('read', 'expt', 'write', 'closed', 'error_handled') expected = ( (select.POLLIN, 'read'), (select.POLLPRI, 'expt'), (select.POLLOUT, 'write'), (select.POLLERR, 'closed'), (select.POLLHUP, 'closed'), (select.POLLNVAL, 'closed'), ) class testobj: def __init__(self): self.read = False self.write = False self.closed = False self.expt = False self.error_handled = False def handle_read_event(self): self.read = True def handle_write_event(self): self.write = True def handle_close(self): self.closed = True def handle_expt_event(self): self.expt = True def handle_error(self): self.error_handled = True for flag, expectedattr in expected: tobj = testobj() self.assertEqual(getattr(tobj, expectedattr), False) asyncore.readwrite(tobj, flag) # Only the attribute modified by the routine we expect to be # called should be True. 
            for attr in attributes:
                self.assertEqual(getattr(tobj, attr), attr==expectedattr)

            # check that ExitNow exceptions in the object handler method
            # bubbles all the way up through asyncore readwrite call
            tr1 = exitingdummy()
            self.assertRaises(asyncore.ExitNow, asyncore.readwrite, tr1, flag)

            # check that an exception other than ExitNow in the object handler
            # method causes the handle_error method to get called
            tr2 = crashingdummy()
            self.assertEqual(tr2.error_handled, False)
            asyncore.readwrite(tr2, flag)
            self.assertEqual(tr2.error_handled, True)

    def test_closeall(self):
        self.closeall_check(False)

    def test_closeall_default(self):
        self.closeall_check(True)

    def closeall_check(self, usedefault):
        # Check that close_all() closes everything in a given map

        l = []
        testmap = {}
        for i in range(10):
            c = dummychannel()
            l.append(c)
            self.assertEqual(c.socket.closed, False)
            testmap[i] = c

        if usedefault:
            socketmap = asyncore.socket_map
            try:
                asyncore.socket_map = testmap
                asyncore.close_all()
            finally:
                testmap, asyncore.socket_map = asyncore.socket_map, socketmap
        else:
            asyncore.close_all(testmap)

        self.assertEqual(len(testmap), 0)

        for c in l:
            self.assertEqual(c.socket.closed, True)

    def test_compact_traceback(self):
        try:
            raise Exception("I don't like spam!")
        except:
            real_t, real_v, real_tb = sys.exc_info()
            r = asyncore.compact_traceback()
        else:
            self.fail("Expected exception")

        (f, function, line), t, v, info = r
        self.assertEqual(os.path.split(f)[-1], 'test_asyncore.py')
        self.assertEqual(function, 'test_compact_traceback')
        self.assertEqual(t, real_t)
        self.assertEqual(v, real_v)
        self.assertEqual(info, '[%s|%s|%s]' % (f, function, line))


class DispatcherTests(unittest.TestCase):

    def setUp(self):
        pass

    def tearDown(self):
        asyncore.close_all()

    def test_basic(self):
        d = asyncore.dispatcher()
        self.assertEqual(d.readable(), True)
        self.assertEqual(d.writable(), True)

    def test_repr(self):
        d = asyncore.dispatcher()
        self.assertEqual(repr(d), '<asyncore.dispatcher at %#x>' % id(d))

    def test_log(self):
        d = asyncore.dispatcher()

        # capture output of dispatcher.log() (to stderr)
        l1 = "Lovely spam! Wonderful spam!"
        l2 = "I don't like spam!"
        with support.captured_stderr() as stderr:
            d.log(l1)
            d.log(l2)

        lines = stderr.getvalue().splitlines()
        self.assertEqual(lines, ['log: %s' % l1, 'log: %s' % l2])

    def test_log_info(self):
        d = asyncore.dispatcher()

        # capture output of dispatcher.log_info() (to stdout via print)
        l1 = "Have you got anything without spam?"
        l2 = "Why can't she have egg bacon spam and sausage?"
        l3 = "THAT'S got spam in it!"
with support.captured_stdout() as stdout: d.log_info(l1, 'EGGS') d.log_info(l2) d.log_info(l3, 'SPAM') lines = stdout.getvalue().splitlines() expected = ['EGGS: %s' % l1, 'info: %s' % l2, 'SPAM: %s' % l3] self.assertEqual(lines, expected) def test_unhandled(self): d = asyncore.dispatcher() d.ignore_log_types = () # capture output of dispatcher.log_info() (to stdout via print) with support.captured_stdout() as stdout: d.handle_expt() d.handle_read() d.handle_write() d.handle_connect() lines = stdout.getvalue().splitlines() expected = ['warning: unhandled incoming priority event', 'warning: unhandled read event', 'warning: unhandled write event', 'warning: unhandled connect event'] self.assertEqual(lines, expected) def test_strerror(self): # refers to bug #8573 err = asyncore._strerror(errno.EPERM) if hasattr(os, 'strerror'): self.assertEqual(err, os.strerror(errno.EPERM)) err = asyncore._strerror(-1) self.assertTrue(err != "") class dispatcherwithsend_noread(asyncore.dispatcher_with_send): def readable(self): return False def handle_connect(self): pass class DispatcherWithSendTests(unittest.TestCase): def setUp(self): pass def tearDown(self): asyncore.close_all() @support.reap_threads def test_send(self): evt = threading.Event() sock = socket.socket() sock.settimeout(3) port = socket_helper.bind_port(sock) cap = BytesIO() args = (evt, cap, sock) t = threading.Thread(target=capture_server, args=args) t.start() try: # wait a little longer for the server to initialize (it sometimes # refuses connections on slow machines without this wait) time.sleep(0.2) data = b"Suppose there isn't a 16-ton weight?" d = dispatcherwithsend_noread() d.create_socket() d.connect((socket_helper.HOST, port)) # give time for socket to connect time.sleep(0.1) d.send(data) d.send(data) d.send(b'\n') n = 1000 while d.out_buffer and n > 0: asyncore.poll() n -= 1 evt.wait() self.assertEqual(cap.getvalue(), data*2) finally: support.join_thread(t) @unittest.skipUnless(hasattr(asyncore, 'file_wrapper'), 'asyncore.file_wrapper required') class FileWrapperTest(unittest.TestCase): def setUp(self): self.d = b"It's not dead, it's sleeping!" with open(support.TESTFN, 'wb') as file: file.write(self.d) def tearDown(self): support.unlink(support.TESTFN) def test_recv(self): fd = os.open(support.TESTFN, os.O_RDONLY) w = asyncore.file_wrapper(fd) os.close(fd) self.assertNotEqual(w.fd, fd) self.assertNotEqual(w.fileno(), fd) self.assertEqual(w.recv(13), b"It's not dead") self.assertEqual(w.read(6), b", it's") w.close() self.assertRaises(OSError, w.read, 1) def test_send(self): d1 = b"Come again?" d2 = b"I want to buy some cheese." 
fd = os.open(support.TESTFN, os.O_WRONLY | os.O_APPEND) w = asyncore.file_wrapper(fd) os.close(fd) w.write(d1) w.send(d2) w.close() with open(support.TESTFN, 'rb') as file: self.assertEqual(file.read(), self.d + d1 + d2) @unittest.skipUnless(hasattr(asyncore, 'file_dispatcher'), 'asyncore.file_dispatcher required') def test_dispatcher(self): fd = os.open(support.TESTFN, os.O_RDONLY) data = [] class FileDispatcher(asyncore.file_dispatcher): def handle_read(self): data.append(self.recv(29)) s = FileDispatcher(fd) os.close(fd) asyncore.loop(timeout=0.01, use_poll=True, count=2) self.assertEqual(b"".join(data), self.d) def test_resource_warning(self): # Issue #11453 fd = os.open(support.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) with support.check_warnings(('', ResourceWarning)): f = None support.gc_collect() def test_close_twice(self): fd = os.open(support.TESTFN, os.O_RDONLY) f = asyncore.file_wrapper(fd) os.close(fd) os.close(f.fd) # file_wrapper dupped fd with self.assertRaises(OSError): f.close() self.assertEqual(f.fd, -1) # calling close twice should not fail f.close() class BaseTestHandler(asyncore.dispatcher): def __init__(self, sock=None): asyncore.dispatcher.__init__(self, sock) self.flag = False def handle_accept(self): raise Exception("handle_accept not supposed to be called") def handle_accepted(self): raise Exception("handle_accepted not supposed to be called") def handle_connect(self): raise Exception("handle_connect not supposed to be called") def handle_expt(self): raise Exception("handle_expt not supposed to be called") def handle_close(self): raise Exception("handle_close not supposed to be called") def handle_error(self): raise class BaseServer(asyncore.dispatcher): """A server which listens on an address and dispatches the connection to a handler. 
""" def __init__(self, family, addr, handler=BaseTestHandler): asyncore.dispatcher.__init__(self) self.create_socket(family) self.set_reuse_addr() bind_af_aware(self.socket, addr) self.listen(5) self.handler = handler @property def address(self): return self.socket.getsockname() def handle_accepted(self, sock, addr): self.handler(sock) def handle_error(self): raise class BaseClient(BaseTestHandler): def __init__(self, family, address): BaseTestHandler.__init__(self) self.create_socket(family) self.connect(address) def handle_connect(self): pass class BaseTestAPI: def tearDown(self): asyncore.close_all(ignore_all=True) def loop_waiting_for_flag(self, instance, timeout=5): timeout = float(timeout) / 100 count = 100 while asyncore.socket_map and count > 0: asyncore.loop(timeout=0.01, count=1, use_poll=self.use_poll) if instance.flag: return count -= 1 time.sleep(timeout) self.fail("flag not set") def test_handle_connect(self): # make sure handle_connect is called on connect() class TestClient(BaseClient): def handle_connect(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_accept(self): # make sure handle_accept() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_accepted(self): # make sure handle_accepted() is called when a client connects class TestListener(BaseTestHandler): def __init__(self, family, addr): BaseTestHandler.__init__(self) self.create_socket(family) bind_af_aware(self.socket, addr) self.listen(5) self.address = self.socket.getsockname() def handle_accept(self): asyncore.dispatcher.handle_accept(self) def handle_accepted(self, sock, addr): sock.close() self.flag = True server = TestListener(self.family, self.addr) client = BaseClient(self.family, server.address) self.loop_waiting_for_flag(server) def test_handle_read(self): # make sure handle_read is called on data received class TestClient(BaseClient): def handle_read(self): self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.send(b'x' * 1024) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_write(self): # make sure handle_write is called class TestClient(BaseClient): def handle_write(self): self.flag = True server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close(self): # make sure handle_close is called when the other end closes # the connection class TestClient(BaseClient): def handle_read(self): # in order to make handle_close be called we are supposed # to make at least one recv() call self.recv(1024) def handle_close(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.close() server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_close_after_conn_broken(self): # Check 
that ECONNRESET/EPIPE is correctly handled (issues #5661 and # #11265). data = b'\0' * 128 class TestClient(BaseClient): def handle_write(self): self.send(data) def handle_close(self): self.flag = True self.close() def handle_expt(self): self.flag = True self.close() class TestHandler(BaseTestHandler): def handle_read(self): self.recv(len(data)) self.close() def writable(self): return False server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) @unittest.skipIf(sys.platform.startswith("sunos"), "OOB support is broken on Solaris") def test_handle_expt(self): # Make sure handle_expt is called on OOB data received. # Note: this might fail on some platforms as OOB data is # tenuously supported and rarely used. if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") if sys.platform == "darwin" and self.use_poll: self.skipTest("poll may fail on macOS; see issue #28087") class TestClient(BaseClient): def handle_expt(self): self.socket.recv(1024, socket.MSG_OOB) self.flag = True class TestHandler(BaseTestHandler): def __init__(self, conn): BaseTestHandler.__init__(self, conn) self.socket.send(bytes(chr(244), 'latin-1'), socket.MSG_OOB) server = BaseServer(self.family, self.addr, TestHandler) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_handle_error(self): class TestClient(BaseClient): def handle_write(self): 1.0 / 0 def handle_error(self): self.flag = True try: raise except ZeroDivisionError: pass else: raise Exception("exception not raised") server = BaseServer(self.family, self.addr) client = TestClient(self.family, server.address) self.loop_waiting_for_flag(client) def test_connection_attributes(self): server = BaseServer(self.family, self.addr) client = BaseClient(self.family, server.address) # we start disconnected self.assertFalse(server.connected) self.assertTrue(server.accepting) # this can't be taken for granted across all platforms #self.assertFalse(client.connected) self.assertFalse(client.accepting) # execute some loops so that client connects to server asyncore.loop(timeout=0.01, use_poll=self.use_poll, count=100) self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertTrue(client.connected) self.assertFalse(client.accepting) # disconnect the client client.close() self.assertFalse(server.connected) self.assertTrue(server.accepting) self.assertFalse(client.connected) self.assertFalse(client.accepting) # stop serving server.close() self.assertFalse(server.connected) self.assertFalse(server.accepting) def test_create_socket(self): s = asyncore.dispatcher() s.create_socket(self.family) self.assertEqual(s.socket.type, socket.SOCK_STREAM) self.assertEqual(s.socket.family, self.family) self.assertEqual(s.socket.gettimeout(), 0) self.assertFalse(s.socket.get_inheritable()) def test_bind(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") s1 = asyncore.dispatcher() s1.create_socket(self.family) s1.bind(self.addr) s1.listen(5) port = s1.socket.getsockname()[1] s2 = asyncore.dispatcher() s2.create_socket(self.family) # EADDRINUSE indicates the socket was correctly bound self.assertRaises(OSError, s2.bind, (self.addr[0], port)) def test_set_reuse_addr(self): if HAS_UNIX_SOCKETS and self.family == socket.AF_UNIX: self.skipTest("Not applicable to AF_UNIX sockets.") with socket.socket(self.family) as sock: try: 
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) except OSError: unittest.skip("SO_REUSEADDR not supported on this platform") else: # if SO_REUSEADDR succeeded for sock we expect asyncore # to do the same s = asyncore.dispatcher(socket.socket(self.family)) self.assertFalse(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) s.socket.close() s.create_socket(self.family) s.set_reuse_addr() self.assertTrue(s.socket.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) @support.reap_threads def test_quick_connect(self): # see: http://bugs.python.org/issue10340 if self.family not in (socket.AF_INET, getattr(socket, "AF_INET6", object())): self.skipTest("test specific to AF_INET and AF_INET6") server = BaseServer(self.family, self.addr) # run the thread 500 ms: the socket should be connected in 200 ms t = threading.Thread(target=lambda: asyncore.loop(timeout=0.1, count=5)) t.start() try: with socket.socket(self.family, socket.SOCK_STREAM) as s: s.settimeout(.2) s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) try: s.connect(server.address) except OSError: pass finally: support.join_thread(t) class TestAPI_UseIPv4Sockets(BaseTestAPI): family = socket.AF_INET addr = (socket_helper.HOST, 0) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 support required') class TestAPI_UseIPv6Sockets(BaseTestAPI): family = socket.AF_INET6 addr = (socket_helper.HOSTv6, 0) @unittest.skipUnless(HAS_UNIX_SOCKETS, 'Unix sockets required') class TestAPI_UseUnixSockets(BaseTestAPI): if HAS_UNIX_SOCKETS: family = socket.AF_UNIX addr = support.TESTFN def tearDown(self): support.unlink(self.addr) BaseTestAPI.tearDown(self) class TestAPI_UseIPv4Select(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv4Poll(TestAPI_UseIPv4Sockets, unittest.TestCase): use_poll = True class TestAPI_UseIPv6Select(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseIPv6Poll(TestAPI_UseIPv6Sockets, unittest.TestCase): use_poll = True class TestAPI_UseUnixSocketsSelect(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = False @unittest.skipUnless(hasattr(select, 'poll'), 'select.poll required') class TestAPI_UseUnixSocketsPoll(TestAPI_UseUnixSockets, unittest.TestCase): use_poll = True if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_context.py000066400000000000000000000752021471441230600213550ustar00rootroot00000000000000import concurrent.futures import contextvars import functools import gc import random import time import unittest import weakref try: from _testcapi import hamt except ImportError: hamt = None def isolated_context(func): """Needed to make reftracking test mode work.""" @functools.wraps(func) def wrapper(*args, **kwargs): ctx = contextvars.Context() return ctx.run(func, *args, **kwargs) return wrapper class ContextTest(unittest.TestCase): def test_context_var_new_1(self): with self.assertRaisesRegex(TypeError, 'takes exactly 1'): contextvars.ContextVar() with self.assertRaisesRegex(TypeError, 'must be a str'): contextvars.ContextVar(1) c = contextvars.ContextVar('aaa') self.assertEqual(c.name, 'aaa') with self.assertRaises(AttributeError): c.name = 'bbb' self.assertNotEqual(hash(c), hash('aaa')) @isolated_context def test_context_var_repr_1(self): c = contextvars.ContextVar('a') self.assertIn('a', repr(c)) c = contextvars.ContextVar('a', default=123) 
self.assertIn('123', repr(c)) lst = [] c = contextvars.ContextVar('a', default=lst) lst.append(c) self.assertIn('...', repr(c)) self.assertIn('...', repr(lst)) t = c.set(1) self.assertIn(repr(c), repr(t)) self.assertNotIn(' used ', repr(t)) c.reset(t) self.assertIn(' used ', repr(t)) def test_context_subclassing_1(self): with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContextVar(contextvars.ContextVar): # Potentially we might want ContextVars to be subclassable. pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyContext(contextvars.Context): pass with self.assertRaisesRegex(TypeError, 'not an acceptable base type'): class MyToken(contextvars.Token): pass def test_context_new_1(self): with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(1, a=1) with self.assertRaisesRegex(TypeError, 'any arguments'): contextvars.Context(a=1) contextvars.Context(**{}) def test_context_typerrors_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx[1] with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): 1 in ctx with self.assertRaisesRegex(TypeError, 'ContextVar key was expected'): ctx.get(1) def test_context_get_context_1(self): ctx = contextvars.copy_context() self.assertIsInstance(ctx, contextvars.Context) def test_context_run_1(self): ctx = contextvars.Context() with self.assertRaisesRegex(TypeError, 'missing 1 required'): ctx.run() def test_context_run_2(self): ctx = contextvars.Context() def func(*args, **kwargs): kwargs['spam'] = 'foo' args += ('bar',) return args, kwargs for f in (func, functools.partial(func)): # partial doesn't support FASTCALL self.assertEqual(ctx.run(f), (('bar',), {'spam': 'foo'})) self.assertEqual(ctx.run(f, 1), ((1, 'bar'), {'spam': 'foo'})) self.assertEqual( ctx.run(f, a=2), (('bar',), {'a': 2, 'spam': 'foo'})) self.assertEqual( ctx.run(f, 11, a=2), ((11, 'bar'), {'a': 2, 'spam': 'foo'})) a = {} self.assertEqual( ctx.run(f, 11, **a), ((11, 'bar'), {'spam': 'foo'})) self.assertEqual(a, {}) def test_context_run_3(self): ctx = contextvars.Context() def func(*args, **kwargs): 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2) with self.assertRaises(ZeroDivisionError): ctx.run(func, 1, 2, a=123) @isolated_context def test_context_run_4(self): ctx1 = contextvars.Context() ctx2 = contextvars.Context() var = contextvars.ContextVar('var') def func2(): self.assertIsNone(var.get(None)) def func1(): self.assertIsNone(var.get(None)) var.set('spam') ctx2.run(func2) self.assertEqual(var.get(None), 'spam') cur = contextvars.copy_context() self.assertEqual(len(cur), 1) self.assertEqual(cur[var], 'spam') return cur returned_ctx = ctx1.run(func1) self.assertEqual(ctx1, returned_ctx) self.assertEqual(returned_ctx[var], 'spam') self.assertIn(var, returned_ctx) def test_context_run_5(self): ctx = contextvars.Context() var = contextvars.ContextVar('var') def func(): self.assertIsNone(var.get(None)) var.set('spam') 1 / 0 with self.assertRaises(ZeroDivisionError): ctx.run(func) self.assertIsNone(var.get(None)) def test_context_run_6(self): ctx = contextvars.Context() c = contextvars.ContextVar('a', default=0) def fun(): self.assertEqual(c.get(), 0) self.assertIsNone(ctx.get(c)) c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(ctx.get(c), 42) ctx.run(fun) def 
test_context_run_7(self): ctx = contextvars.Context() def fun(): with self.assertRaisesRegex(RuntimeError, 'is already entered'): ctx.run(fun) ctx.run(fun) @isolated_context def test_context_getset_1(self): c = contextvars.ContextVar('c') with self.assertRaises(LookupError): c.get() self.assertIsNone(c.get(None)) t0 = c.set(42) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) self.assertIs(t0.old_value, t0.MISSING) self.assertIs(t0.old_value, contextvars.Token.MISSING) self.assertIs(t0.var, c) t = c.set('spam') self.assertEqual(c.get(), 'spam') self.assertEqual(c.get(None), 'spam') self.assertEqual(t.old_value, 42) c.reset(t) self.assertEqual(c.get(), 42) self.assertEqual(c.get(None), 42) c.set('spam2') with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t) self.assertEqual(c.get(), 'spam2') ctx1 = contextvars.copy_context() self.assertIn(c, ctx1) c.reset(t0) with self.assertRaisesRegex(RuntimeError, 'has already been used'): c.reset(t0) self.assertIsNone(c.get(None)) self.assertIn(c, ctx1) self.assertEqual(ctx1[c], 'spam2') self.assertEqual(ctx1.get(c, 'aa'), 'spam2') self.assertEqual(len(ctx1), 1) self.assertEqual(list(ctx1.items()), [(c, 'spam2')]) self.assertEqual(list(ctx1.values()), ['spam2']) self.assertEqual(list(ctx1.keys()), [c]) self.assertEqual(list(ctx1), [c]) ctx2 = contextvars.copy_context() self.assertNotIn(c, ctx2) with self.assertRaises(KeyError): ctx2[c] self.assertEqual(ctx2.get(c, 'aa'), 'aa') self.assertEqual(len(ctx2), 0) self.assertEqual(list(ctx2), []) @isolated_context def test_context_getset_2(self): v1 = contextvars.ContextVar('v1') v2 = contextvars.ContextVar('v2') t1 = v1.set(42) with self.assertRaisesRegex(ValueError, 'by a different'): v2.reset(t1) @isolated_context def test_context_getset_3(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() def fun(): self.assertEqual(c.get(), 42) with self.assertRaises(KeyError): ctx[c] self.assertIsNone(ctx.get(c)) self.assertEqual(ctx.get(c, 'spam'), 'spam') self.assertNotIn(c, ctx) self.assertEqual(list(ctx.keys()), []) t = c.set(1) self.assertEqual(list(ctx.keys()), [c]) self.assertEqual(ctx[c], 1) c.reset(t) self.assertEqual(list(ctx.keys()), []) with self.assertRaises(KeyError): ctx[c] ctx.run(fun) @isolated_context def test_context_getset_4(self): c = contextvars.ContextVar('c', default=42) ctx = contextvars.Context() tok = ctx.run(c.set, 1) with self.assertRaisesRegex(ValueError, 'different Context'): c.reset(tok) @isolated_context def test_context_getset_5(self): c = contextvars.ContextVar('c', default=42) c.set([]) def fun(): c.set([]) c.get().append(42) self.assertEqual(c.get(), [42]) contextvars.copy_context().run(fun) self.assertEqual(c.get(), []) def test_context_copy_1(self): ctx1 = contextvars.Context() c = contextvars.ContextVar('c', default=42) def ctx1_fun(): c.set(10) ctx2 = ctx1.copy() self.assertEqual(ctx2[c], 10) c.set(20) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 10) ctx2.run(ctx2_fun) self.assertEqual(ctx1[c], 20) self.assertEqual(ctx2[c], 30) def ctx2_fun(): self.assertEqual(c.get(), 10) c.set(30) self.assertEqual(c.get(), 30) ctx1.run(ctx1_fun) @isolated_context def test_context_threads_1(self): cvar = contextvars.ContextVar('cvar') def sub(num): for i in range(10): cvar.set(num + i) time.sleep(random.uniform(0.001, 0.05)) self.assertEqual(cvar.get(), num + i) return num tp = concurrent.futures.ThreadPoolExecutor(max_workers=10) try: results = list(tp.map(sub, range(10))) finally: tp.shutdown() 
self.assertEqual(results, list(range(10))) # HAMT Tests class HashKey: _crasher = None def __init__(self, hash, name, *, error_on_eq_to=None): assert hash != -1 self.name = name self.hash = hash self.error_on_eq_to = error_on_eq_to def __repr__(self): return f'' def __hash__(self): if self._crasher is not None and self._crasher.error_on_hash: raise HashingError return self.hash def __eq__(self, other): if not isinstance(other, HashKey): return NotImplemented if self._crasher is not None and self._crasher.error_on_eq: raise EqError if self.error_on_eq_to is not None and self.error_on_eq_to is other: raise ValueError(f'cannot compare {self!r} to {other!r}') if other.error_on_eq_to is not None and other.error_on_eq_to is self: raise ValueError(f'cannot compare {other!r} to {self!r}') return (self.name, self.hash) == (other.name, other.hash) class KeyStr(str): def __hash__(self): if HashKey._crasher is not None and HashKey._crasher.error_on_hash: raise HashingError return super().__hash__() def __eq__(self, other): if HashKey._crasher is not None and HashKey._crasher.error_on_eq: raise EqError return super().__eq__(other) class HaskKeyCrasher: def __init__(self, *, error_on_hash=False, error_on_eq=False): self.error_on_hash = error_on_hash self.error_on_eq = error_on_eq def __enter__(self): if HashKey._crasher is not None: raise RuntimeError('cannot nest crashers') HashKey._crasher = self def __exit__(self, *exc): HashKey._crasher = None class HashingError(Exception): pass class EqError(Exception): pass @unittest.skipIf(hamt is None, '_testcapi lacks "hamt()" function') class HamtTest(unittest.TestCase): def test_hashkey_helper_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') self.assertNotEqual(k1, k2) self.assertEqual(hash(k1), hash(k2)) d = dict() d[k1] = 'a' d[k2] = 'b' self.assertEqual(d[k1], 'a') self.assertEqual(d[k2], 'b') def test_hamt_basics_1(self): h = hamt() h = None # NoQA def test_hamt_basics_2(self): h = hamt() self.assertEqual(len(h), 0) h2 = h.set('a', 'b') self.assertIsNot(h, h2) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertIsNone(h.get('a')) self.assertEqual(h.get('a', 42), 42) self.assertEqual(h2.get('a'), 'b') h3 = h2.set('b', 10) self.assertIsNot(h2, h3) self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(h3.get('a'), 'b') self.assertEqual(h3.get('b'), 10) self.assertIsNone(h.get('b')) self.assertIsNone(h2.get('b')) self.assertIsNone(h.get('a')) self.assertEqual(h2.get('a'), 'b') h = h2 = h3 = None def test_hamt_basics_3(self): h = hamt() o = object() h1 = h.set('1', o) h2 = h1.set('1', o) self.assertIs(h1, h2) def test_hamt_basics_4(self): h = hamt() h1 = h.set('key', []) h2 = h1.set('key', []) self.assertIsNot(h1, h2) self.assertEqual(len(h1), 1) self.assertEqual(len(h2), 1) self.assertIsNot(h1.get('key'), h2.get('key')) def test_hamt_collision_1(self): k1 = HashKey(10, 'aaa') k2 = HashKey(10, 'bbb') k3 = HashKey(10, 'ccc') h = hamt() h2 = h.set(k1, 'a') h3 = h2.set(k2, 'b') self.assertEqual(h.get(k1), None) self.assertEqual(h.get(k2), None) self.assertEqual(h2.get(k1), 'a') self.assertEqual(h2.get(k2), None) self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') h4 = h3.set(k2, 'cc') h5 = h4.set(k3, 'aa') self.assertEqual(h3.get(k1), 'a') self.assertEqual(h3.get(k2), 'b') self.assertEqual(h4.get(k1), 'a') self.assertEqual(h4.get(k2), 'cc') self.assertEqual(h4.get(k3), None) self.assertEqual(h5.get(k1), 'a') self.assertEqual(h5.get(k2), 'cc') self.assertEqual(h5.get(k2), 
'cc') self.assertEqual(h5.get(k3), 'aa') self.assertEqual(len(h), 0) self.assertEqual(len(h2), 1) self.assertEqual(len(h3), 2) self.assertEqual(len(h4), 2) self.assertEqual(len(h5), 3) def test_hamt_collision_3(self): # Test that iteration works with the deepest tree possible. # https://github.com/python/cpython/issues/93065 C = HashKey(0b10000000_00000000_00000000_00000000, 'C') D = HashKey(0b10000000_00000000_00000000_00000000, 'D') E = HashKey(0b00000000_00000000_00000000_00000000, 'E') h = hamt() h = h.set(C, 'C') h = h.set(D, 'D') h = h.set(E, 'E') # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # BitmapNode(size=4 count=2 bitmap=0b101): # : 'E' # NULL: # CollisionNode(size=4 id=0x107a24520): # : 'C' # : 'D' self.assertEqual({k.name for k in h.keys()}, {'C', 'D', 'E'}) def test_hamt_stress(self): COLLECTION_SIZE = 7000 TEST_ITERS_EVERY = 647 CRASH_HASH_EVERY = 97 CRASH_EQ_EVERY = 11 RUN_XTIMES = 3 for _ in range(RUN_XTIMES): h = hamt() d = dict() for i in range(COLLECTION_SIZE): key = KeyStr(i) if not (i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.set(key, i) h = h.set(key, i) if not (i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.get(KeyStr(i)) # really trigger __eq__ d[key] = i self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.items()), set(d.items())) self.assertEqual(len(h.items()), len(d.items())) self.assertEqual(len(h), COLLECTION_SIZE) for key in range(COLLECTION_SIZE): self.assertEqual(h.get(KeyStr(key), 'not found'), key) keys_to_delete = list(range(COLLECTION_SIZE)) random.shuffle(keys_to_delete) for iter_i, i in enumerate(keys_to_delete): key = KeyStr(i) if not (iter_i % CRASH_HASH_EVERY): with HaskKeyCrasher(error_on_hash=True): with self.assertRaises(HashingError): h.delete(key) if not (iter_i % CRASH_EQ_EVERY): with HaskKeyCrasher(error_on_eq=True): with self.assertRaises(EqError): h.delete(KeyStr(i)) h = h.delete(key) self.assertEqual(h.get(key, 'not found'), 'not found') del d[key] self.assertEqual(len(d), len(h)) if iter_i == COLLECTION_SIZE // 2: hm = h dm = d.copy() if not (iter_i % TEST_ITERS_EVERY): self.assertEqual(set(h.keys()), set(d.keys())) self.assertEqual(len(h.keys()), len(d.keys())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) # ============ for key in dm: self.assertEqual(hm.get(str(key)), dm[key]) self.assertEqual(len(dm), len(hm)) for i, key in enumerate(keys_to_delete): hm = hm.delete(str(key)) self.assertEqual(hm.get(str(key), 'not found'), 'not found') dm.pop(str(key), None) self.assertEqual(len(d), len(h)) if not (i % TEST_ITERS_EVERY): self.assertEqual(set(h.values()), set(d.values())) self.assertEqual(len(h.values()), len(d.values())) self.assertEqual(len(d), 0) self.assertEqual(len(h), 0) self.assertEqual(list(h.items()), []) def test_hamt_delete_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(102, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(103, 'Er', error_on_eq_to=D) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=10 bitmap=0b111110000 id=0x10eadc618): # : 'a' # : 'b' # : 'c' # : 'd' # : 'e' h = 
h.delete(C) self.assertEqual(len(h), orig_len - 1) with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(D) self.assertEqual(len(h), orig_len - 2) h2 = h.delete(Z) self.assertIs(h2, h) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(A, 42), 42) self.assertEqual(h.get(B), 'b') self.assertEqual(h.get(E), 'e') def test_hamt_delete_2(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') Z = HashKey(-100, 'Z') Er = HashKey(201001, 'Er', error_on_eq_to=B) h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=8 bitmap=0b1110010000): # : 'a' # : 'd' # : 'e' # NULL: # BitmapNode(size=4 bitmap=0b100000000001000000000): # : 'b' # : 'c' with self.assertRaisesRegex(ValueError, 'cannot compare'): h.delete(Er) h = h.delete(Z) self.assertEqual(len(h), orig_len) h = h.delete(C) self.assertEqual(len(h), orig_len - 1) h = h.delete(B) self.assertEqual(len(h), orig_len - 2) h = h.delete(A) self.assertEqual(len(h), orig_len - 3) self.assertEqual(h.get(D), 'd') self.assertEqual(h.get(E), 'e') h = h.delete(A) h = h.delete(B) h = h.delete(D) h = h.delete(E) self.assertEqual(len(h), 0) def test_hamt_delete_3(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(104, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=6 bitmap=0b100110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=4 id=0x108572410): # : 'c' # : 'd' # : 'b' # : 'e' h = h.delete(A) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) self.assertEqual(h.get(C), 'c') self.assertEqual(h.get(B), 'b') def test_hamt_delete_4(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') orig_len = len(h) # BitmapNode(size=4 bitmap=0b110000): # NULL: # BitmapNode(size=4 bitmap=0b1000000000000000000001000): # : 'a' # NULL: # CollisionNode(size=6 id=0x10515ef30): # : 'c' # : 'd' # : 'e' # : 'b' h = h.delete(D) self.assertEqual(len(h), orig_len - 1) h = h.delete(E) self.assertEqual(len(h), orig_len - 2) h = h.delete(C) self.assertEqual(len(h), orig_len - 3) h = h.delete(A) self.assertEqual(len(h), orig_len - 4) h = h.delete(B) self.assertEqual(len(h), 0) def test_hamt_delete_5(self): h = hamt() keys = [] for i in range(17): key = HashKey(i, str(i)) keys.append(key) h = h.set(key, f'val-{i}') collision_key16 = HashKey(16, '18') h = h.set(collision_key16, 'collision') # ArrayNode(id=0x10f8b9318): # 0:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-0' # # ... 14 more BitmapNodes ... 
# # 15:: # BitmapNode(size=2 count=1 bitmap=0b1): # : 'val-15' # # 16:: # BitmapNode(size=2 count=1 bitmap=0b1): # NULL: # CollisionNode(size=4 id=0x10f2f5af8): # : 'val-16' # : 'collision' self.assertEqual(len(h), 18) h = h.delete(keys[2]) self.assertEqual(len(h), 17) h = h.delete(collision_key16) self.assertEqual(len(h), 16) h = h.delete(keys[16]) self.assertEqual(len(h), 15) h = h.delete(keys[1]) self.assertEqual(len(h), 14) h = h.delete(keys[1]) self.assertEqual(len(h), 14) for key in keys: h = h.delete(key) self.assertEqual(len(h), 0) def test_hamt_items_1(self): A = HashKey(100, 'A') B = HashKey(201001, 'B') C = HashKey(101001, 'C') D = HashKey(103, 'D') E = HashKey(104, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_items_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') it = h.items() self.assertEqual( set(list(it)), {(A, 'a'), (B, 'b'), (C, 'c'), (D, 'd'), (E, 'e'), (F, 'f')}) def test_hamt_keys_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(100100, 'E') F = HashKey(110, 'F') h = hamt() h = h.set(A, 'a') h = h.set(B, 'b') h = h.set(C, 'c') h = h.set(D, 'd') h = h.set(E, 'e') h = h.set(F, 'f') self.assertEqual(set(list(h.keys())), {A, B, C, D, E, F}) self.assertEqual(set(list(h)), {A, B, C, D, E, F}) def test_hamt_items_3(self): h = hamt() self.assertEqual(len(h.items()), 0) self.assertEqual(list(h.items()), []) def test_hamt_eq_1(self): A = HashKey(100, 'A') B = HashKey(101, 'B') C = HashKey(100100, 'C') D = HashKey(100100, 'D') E = HashKey(120, 'E') h1 = hamt() h1 = h1.set(A, 'a') h1 = h1.set(B, 'b') h1 = h1.set(C, 'c') h1 = h1.set(D, 'd') h2 = hamt() h2 = h2.set(A, 'a') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(B, 'b') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(C, 'c') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd2') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(D, 'd') self.assertTrue(h1 == h2) self.assertFalse(h1 != h2) h2 = h2.set(E, 'e') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.delete(D) self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) h2 = h2.set(E, 'd') self.assertFalse(h1 == h2) self.assertTrue(h1 != h2) def test_hamt_eq_2(self): A = HashKey(100, 'A') Er = HashKey(100, 'Er', error_on_eq_to=A) h1 = hamt() h1 = h1.set(A, 'a') h2 = hamt() h2 = h2.set(Er, 'a') with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 == h2 with self.assertRaisesRegex(ValueError, 'cannot compare'): h1 != h2 def test_hamt_gc_1(self): A = HashKey(100, 'A') h = hamt() h = h.set(0, 0) # empty HAMT node is memoized in hamt.c ref = weakref.ref(h) a = [] a.append(a) a.append(h) b = [] a.append(b) b.append(a) h = h.set(A, b) del h, a, b gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_gc_2(self): A = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 'a') h = h.set(A, h) ref = weakref.ref(h) hi = h.items() next(hi) del h, hi gc.collect() gc.collect() gc.collect() self.assertIsNone(ref()) def test_hamt_in_1(self): A = HashKey(100, 'A') AA = 
HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertTrue(A in h) self.assertFalse(B in h) with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): AA in h with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): AA in h def test_hamt_getitem_1(self): A = HashKey(100, 'A') AA = HashKey(100, 'A') B = HashKey(101, 'B') h = hamt() h = h.set(A, 1) self.assertEqual(h[A], 1) self.assertEqual(h[AA], 1) with self.assertRaises(KeyError): h[B] with self.assertRaises(EqError): with HaskKeyCrasher(error_on_eq=True): h[AA] with self.assertRaises(HashingError): with HaskKeyCrasher(error_on_hash=True): h[AA] if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_ftplib.py000066400000000000000000001233731471441230600211540ustar00rootroot00000000000000"""Test script for ftplib module.""" # Modified by Giampaolo Rodola' to test FTP class, IPv6 and TLS # environment import ftplib import asyncore import asynchat import socket import io import errno import os import threading import time import unittest try: import ssl except ImportError: ssl = None from unittest import TestCase, skipUnless from test import support from test.support import socket_helper from test.support.socket_helper import HOST, HOSTv6 TIMEOUT = support.LOOPBACK_TIMEOUT DEFAULT_ENCODING = 'utf-8' # the dummy data returned by server over the data channel when # RETR, LIST, NLST, MLSD commands are issued RETR_DATA = 'abcde12345\r\n' * 1000 + 'non-ascii char \xAE\r\n' LIST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' NLST_DATA = 'foo\r\nbar\r\n non-ascii char \xAE\r\n' MLSD_DATA = ("type=cdir;perm=el;unique==keVO1+ZF4; test\r\n" "type=pdir;perm=e;unique==keVO1+d?3; ..\r\n" "type=OS.unix=slink:/foobar;perm=;unique==keVO1+4G4; foobar\r\n" "type=OS.unix=chr-13/29;perm=;unique==keVO1+5G4; device\r\n" "type=OS.unix=blk-11/108;perm=;unique==keVO1+6G4; block\r\n" "type=file;perm=awr;unique==keVO1+8G4; writable\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; promiscuous\r\n" "type=dir;perm=;unique==keVO1+1t2; no-exec\r\n" "type=file;perm=r;unique==keVO1+EG4; two words\r\n" "type=file;perm=r;unique==keVO1+IH4; leading space\r\n" "type=file;perm=r;unique==keVO1+1G4; file1\r\n" "type=dir;perm=cpmel;unique==keVO1+7G4; incoming\r\n" "type=file;perm=r;unique==keVO1+1G4; file2\r\n" "type=file;perm=r;unique==keVO1+1G4; file3\r\n" "type=file;perm=r;unique==keVO1+1G4; file4\r\n" "type=dir;perm=cpmel;unique==SGP1; dir \xAE non-ascii char\r\n" "type=file;perm=r;unique==SGP2; file \xAE non-ascii char\r\n") def default_error_handler(): # bpo-44359: Silently ignore socket errors. Such errors occur when a client # socket is closed, in TestFTPClass.tearDown() and makepasv() tests, and # the server gets an error on its side. pass class DummyDTPHandler(asynchat.async_chat): dtp_conn_closed = False def __init__(self, conn, baseclass): asynchat.async_chat.__init__(self, conn) self.baseclass = baseclass self.baseclass.last_received_data = '' self.encoding = baseclass.encoding def handle_read(self): new_data = self.recv(1024).decode(self.encoding, 'replace') self.baseclass.last_received_data += new_data def handle_close(self): # XXX: this method can be called many times in a row for a single # connection, including in clear-text (non-TLS) mode. 
# (behaviour witnessed with test_data_connection) if not self.dtp_conn_closed: self.baseclass.push('226 transfer complete') self.close() self.dtp_conn_closed = True def push(self, what): if self.baseclass.next_data is not None: what = self.baseclass.next_data self.baseclass.next_data = None if not what: return self.close_when_done() super(DummyDTPHandler, self).push(what.encode(self.encoding)) def handle_error(self): default_error_handler() class DummyFTPHandler(asynchat.async_chat): dtp_handler = DummyDTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): asynchat.async_chat.__init__(self, conn) # tells the socket to handle urgent data inline (ABOR command) self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1) self.set_terminator(b"\r\n") self.in_buffer = [] self.dtp = None self.last_received_cmd = None self.last_received_data = '' self.next_response = '' self.next_data = None self.rest = None self.next_retr_data = RETR_DATA self.push('220 welcome') self.encoding = encoding # We use this as the string IPv4 address to direct the client # to in response to a PASV command. To test security behavior. # https://bugs.python.org/issue43285/. self.fake_pasv_server_ip = '252.253.254.255' def collect_incoming_data(self, data): self.in_buffer.append(data) def found_terminator(self): line = b''.join(self.in_buffer).decode(self.encoding) self.in_buffer = [] if self.next_response: self.push(self.next_response) self.next_response = '' cmd = line.split(' ')[0].lower() self.last_received_cmd = cmd space = line.find(' ') if space != -1: arg = line[space + 1:] else: arg = "" if hasattr(self, 'cmd_' + cmd): method = getattr(self, 'cmd_' + cmd) method(arg) else: self.push('550 command "%s" not understood.' %cmd) def handle_error(self): default_error_handler() def push(self, data): asynchat.async_chat.push(self, data.encode(self.encoding) + b'\r\n') def cmd_port(self, arg): addr = list(map(int, arg.split(','))) ip = '%d.%d.%d.%d' %tuple(addr[:4]) port = (addr[4] * 256) + addr[5] s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_pasv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0)) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] ip = self.fake_pasv_server_ip ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256 self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2)) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_eprt(self, arg): af, ip, port = arg.split(arg[0])[1:-1] port = int(port) s = socket.create_connection((ip, port), timeout=TIMEOUT) self.dtp = self.dtp_handler(s, baseclass=self) self.push('200 active data connection established') def cmd_epsv(self, arg): with socket.create_server((self.socket.getsockname()[0], 0), family=socket.AF_INET6) as sock: sock.settimeout(TIMEOUT) port = sock.getsockname()[1] self.push('229 entering extended passive mode (|||%d|)' %port) conn, addr = sock.accept() self.dtp = self.dtp_handler(conn, baseclass=self) def cmd_echo(self, arg): # sends back the received string (used by the test suite) self.push(arg) def cmd_noop(self, arg): self.push('200 noop ok') def cmd_user(self, arg): self.push('331 username ok') def cmd_pass(self, arg): self.push('230 password ok') def cmd_acct(self, arg): self.push('230 acct ok') def cmd_rnfr(self, arg): self.push('350 rnfr ok') def cmd_rnto(self, arg): self.push('250 rnto ok') def cmd_dele(self, arg): self.push('250 dele 
ok') def cmd_cwd(self, arg): self.push('250 cwd ok') def cmd_size(self, arg): self.push('250 1000') def cmd_mkd(self, arg): self.push('257 "%s"' %arg) def cmd_rmd(self, arg): self.push('250 rmd ok') def cmd_pwd(self, arg): self.push('257 "pwd ok"') def cmd_type(self, arg): self.push('200 type ok') def cmd_quit(self, arg): self.push('221 quit ok') self.close() def cmd_abor(self, arg): self.push('226 abor ok') def cmd_stor(self, arg): self.push('125 stor ok') def cmd_rest(self, arg): self.rest = arg self.push('350 rest ok') def cmd_retr(self, arg): self.push('125 retr ok') if self.rest is not None: offset = int(self.rest) else: offset = 0 self.dtp.push(self.next_retr_data[offset:]) self.dtp.close_when_done() self.rest = None def cmd_list(self, arg): self.push('125 list ok') self.dtp.push(LIST_DATA) self.dtp.close_when_done() def cmd_nlst(self, arg): self.push('125 nlst ok') self.dtp.push(NLST_DATA) self.dtp.close_when_done() def cmd_opts(self, arg): self.push('200 opts ok') def cmd_mlsd(self, arg): self.push('125 mlsd ok') self.dtp.push(MLSD_DATA) self.dtp.close_when_done() def cmd_setlongretr(self, arg): # For testing. Next RETR will return long line. self.next_retr_data = 'x' * int(arg) self.push('125 setlongretr ok') class DummyFTPServer(asyncore.dispatcher, threading.Thread): handler = DummyFTPHandler def __init__(self, address, af=socket.AF_INET, encoding=DEFAULT_ENCODING): threading.Thread.__init__(self) asyncore.dispatcher.__init__(self) self.daemon = True self.create_socket(af, socket.SOCK_STREAM) self.bind(address) self.listen(5) self.active = False self.active_lock = threading.Lock() self.host, self.port = self.socket.getsockname()[:2] self.handler_instance = None self.encoding = encoding def start(self): assert not self.active self.__flag = threading.Event() threading.Thread.start(self) self.__flag.wait() def run(self): self.active = True self.__flag.set() while self.active and asyncore.socket_map: self.active_lock.acquire() asyncore.loop(timeout=0.1, count=1) self.active_lock.release() asyncore.close_all(ignore_all=True) def stop(self): assert self.active self.active = False self.join() def handle_accepted(self, conn, addr): self.handler_instance = self.handler(conn, encoding=self.encoding) def handle_connect(self): self.close() handle_read = handle_connect def writable(self): return 0 def handle_error(self): default_error_handler() if ssl is not None: CERTFILE = os.path.join(os.path.dirname(__file__), "keycert3.pem") CAFILE = os.path.join(os.path.dirname(__file__), "pycacert.pem") class SSLConnection(asyncore.dispatcher): """An asyncore.dispatcher subclass supporting TLS/SSL.""" _ssl_accepting = False _ssl_closing = False def secure_connection(self): context = ssl.SSLContext() context.load_cert_chain(CERTFILE) socket = context.wrap_socket(self.socket, suppress_ragged_eofs=False, server_side=True, do_handshake_on_connect=False) self.del_channel() self.set_socket(socket) self._ssl_accepting = True def _do_ssl_handshake(self): try: self.socket.do_handshake() except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return elif err.args[0] == ssl.SSL_ERROR_EOF: return self.handle_close() # TODO: SSLError does not expose alert information elif "SSLV3_ALERT_BAD_CERTIFICATE" in err.args[1]: return self.handle_close() raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def _do_ssl_shutdown(self): self._ssl_closing = True try: self.socket = self.socket.unwrap() except 
ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return except OSError: # Any "socket error" corresponds to a SSL_ERROR_SYSCALL return # from OpenSSL's SSL_shutdown(), corresponding to a # closed socket condition. See also: # http://www.mail-archive.com/openssl-users@openssl.org/msg60710.html pass self._ssl_closing = False if getattr(self, '_ccc', False) is False: super(SSLConnection, self).close() else: pass def handle_read_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_read_event() def handle_write_event(self): if self._ssl_accepting: self._do_ssl_handshake() elif self._ssl_closing: self._do_ssl_shutdown() else: super(SSLConnection, self).handle_write_event() def send(self, data): try: return super(SSLConnection, self).send(data) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN, ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return 0 raise def recv(self, buffer_size): try: return super(SSLConnection, self).recv(buffer_size) except ssl.SSLError as err: if err.args[0] in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): return b'' if err.args[0] in (ssl.SSL_ERROR_EOF, ssl.SSL_ERROR_ZERO_RETURN): self.handle_close() return b'' raise def handle_error(self): default_error_handler() def close(self): if (isinstance(self.socket, ssl.SSLSocket) and self.socket._sslobj is not None): self._do_ssl_shutdown() else: super(SSLConnection, self).close() class DummyTLS_DTPHandler(SSLConnection, DummyDTPHandler): """A DummyDTPHandler subclass supporting TLS/SSL.""" def __init__(self, conn, baseclass): DummyDTPHandler.__init__(self, conn, baseclass) if self.baseclass.secure_data_channel: self.secure_connection() class DummyTLS_FTPHandler(SSLConnection, DummyFTPHandler): """A DummyFTPHandler subclass supporting TLS/SSL.""" dtp_handler = DummyTLS_DTPHandler def __init__(self, conn, encoding=DEFAULT_ENCODING): DummyFTPHandler.__init__(self, conn, encoding=encoding) self.secure_data_channel = False self._ccc = False def cmd_auth(self, line): """Set up secure control channel.""" self.push('234 AUTH TLS successful') self.secure_connection() def cmd_ccc(self, line): self.push('220 Reverting back to clear-text') self._ccc = True self._do_ssl_shutdown() def cmd_pbsz(self, line): """Negotiate size of buffer for secure data transfer. For TLS/SSL the only valid value for the parameter is '0'. Any other value is accepted but ignored. 
""" self.push('200 PBSZ=0 successful.') def cmd_prot(self, line): """Setup un/secure data channel.""" arg = line.upper() if arg == 'C': self.push('200 Protection set to Clear') self.secure_data_channel = False elif arg == 'P': self.push('200 Protection set to Private') self.secure_data_channel = True else: self.push("502 Unrecognized PROT type (use C or P).") class DummyTLS_FTPServer(DummyFTPServer): handler = DummyTLS_FTPHandler class TestFTPClass(TestCase): def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyFTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def check_data(self, received, expected): self.assertEqual(len(received), len(expected)) self.assertEqual(received, expected) def test_getwelcome(self): self.assertEqual(self.client.getwelcome(), '220 welcome') def test_sanitize(self): self.assertEqual(self.client.sanitize('foo'), repr('foo')) self.assertEqual(self.client.sanitize('pass 12345'), repr('pass *****')) self.assertEqual(self.client.sanitize('PASS 12345'), repr('PASS *****')) def test_exceptions(self): self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\n0') self.assertRaises(ValueError, self.client.sendcmd, 'echo 40\r0') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 400') self.assertRaises(ftplib.error_temp, self.client.sendcmd, 'echo 499') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 500') self.assertRaises(ftplib.error_perm, self.client.sendcmd, 'echo 599') self.assertRaises(ftplib.error_proto, self.client.sendcmd, 'echo 999') def test_all_errors(self): exceptions = (ftplib.error_reply, ftplib.error_temp, ftplib.error_perm, ftplib.error_proto, ftplib.Error, OSError, EOFError) for x in exceptions: try: raise x('exception not included in all_errors set') except ftplib.all_errors: pass def test_set_pasv(self): # passive mode is supposed to be enabled by default self.assertTrue(self.client.passiveserver) self.client.set_pasv(True) self.assertTrue(self.client.passiveserver) self.client.set_pasv(False) self.assertFalse(self.client.passiveserver) def test_voidcmd(self): self.client.voidcmd('echo 200') self.client.voidcmd('echo 299') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 199') self.assertRaises(ftplib.error_reply, self.client.voidcmd, 'echo 300') def test_login(self): self.client.login() def test_acct(self): self.client.acct('passwd') def test_rename(self): self.client.rename('a', 'b') self.server.handler_instance.next_response = '200' self.assertRaises(ftplib.error_reply, self.client.rename, 'a', 'b') def test_delete(self): self.client.delete('foo') self.server.handler_instance.next_response = '199' self.assertRaises(ftplib.error_reply, self.client.delete, 'foo') def test_size(self): self.client.size('foo') def test_mkd(self): dir = self.client.mkd('/foo') self.assertEqual(dir, '/foo') def test_rmd(self): self.client.rmd('foo') def test_cwd(self): dir = self.client.cwd('/foo') self.assertEqual(dir, '250 cwd ok') def test_pwd(self): dir = self.client.pwd() self.assertEqual(dir, 'pwd ok') def test_quit(self): self.assertEqual(self.client.quit(), '221 quit ok') # Ensure the connection gets closed; sock attribute should be None 
self.assertEqual(self.client.sock, None) def test_abort(self): self.client.abort() def test_retrbinary(self): def callback(data): received.append(data.decode(self.client.encoding)) received = [] self.client.retrbinary('retr', callback) self.check_data(''.join(received), RETR_DATA) def test_retrbinary_rest(self): def callback(data): received.append(data.decode(self.client.encoding)) for rest in (0, 10, 20): received = [] self.client.retrbinary('retr', callback, rest=rest) self.check_data(''.join(received), RETR_DATA[rest:]) def test_retrlines(self): received = [] self.client.retrlines('retr', received.append) self.check_data(''.join(received), RETR_DATA.replace('\r\n', '')) def test_storbinary(self): f = io.BytesIO(RETR_DATA.encode(self.client.encoding)) self.client.storbinary('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA) # test new callback arg flag = [] f.seek(0) self.client.storbinary('stor', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) def test_storbinary_rest(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) for r in (30, '30'): f.seek(0) self.client.storbinary('stor', f, rest=r) self.assertEqual(self.server.handler_instance.rest, str(r)) def test_storlines(self): data = RETR_DATA.replace('\r\n', '\n').encode(self.client.encoding) f = io.BytesIO(data) self.client.storlines('stor', f) self.check_data(self.server.handler_instance.last_received_data, RETR_DATA) # test new callback arg flag = [] f.seek(0) self.client.storlines('stor foo', f, callback=lambda x: flag.append(None)) self.assertTrue(flag) f = io.StringIO(RETR_DATA.replace('\r\n', '\n')) # storlines() expects a binary file, not a text file with support.check_warnings(('', BytesWarning), quiet=True): self.assertRaises(TypeError, self.client.storlines, 'stor foo', f) def test_nlst(self): self.client.nlst() self.assertEqual(self.client.nlst(), NLST_DATA.split('\r\n')[:-1]) def test_dir(self): l = [] self.client.dir(lambda x: l.append(x)) self.assertEqual(''.join(l), LIST_DATA.replace('\r\n', '')) def test_mlsd(self): list(self.client.mlsd()) list(self.client.mlsd(path='/')) list(self.client.mlsd(path='/', facts=['size', 'type'])) ls = list(self.client.mlsd()) for name, facts in ls: self.assertIsInstance(name, str) self.assertIsInstance(facts, dict) self.assertTrue(name) self.assertIn('type', facts) self.assertIn('perm', facts) self.assertIn('unique', facts) def set_data(data): self.server.handler_instance.next_data = data def test_entry(line, type=None, perm=None, unique=None, name=None): type = 'type' if type is None else type perm = 'perm' if perm is None else perm unique = 'unique' if unique is None else unique name = 'name' if name is None else name set_data(line) _name, facts = next(self.client.mlsd()) self.assertEqual(_name, name) self.assertEqual(facts['type'], type) self.assertEqual(facts['perm'], perm) self.assertEqual(facts['unique'], unique) # plain test_entry('type=type;perm=perm;unique=unique; name\r\n') # "=" in fact value test_entry('type=ty=pe;perm=perm;unique=unique; name\r\n', type="ty=pe") test_entry('type==type;perm=perm;unique=unique; name\r\n', type="=type") test_entry('type=t=y=pe;perm=perm;unique=unique; name\r\n', type="t=y=pe") test_entry('type=====;perm=perm;unique=unique; name\r\n', type="====") # spaces in name test_entry('type=type;perm=perm;unique=unique; na me\r\n', name="na me") test_entry('type=type;perm=perm;unique=unique; name \r\n', name="name ") test_entry('type=type;perm=perm;unique=unique; 
name\r\n', name=" name") test_entry('type=type;perm=perm;unique=unique; n am e\r\n', name="n am e") # ";" in name test_entry('type=type;perm=perm;unique=unique; na;me\r\n', name="na;me") test_entry('type=type;perm=perm;unique=unique; ;name\r\n', name=";name") test_entry('type=type;perm=perm;unique=unique; ;name;\r\n', name=";name;") test_entry('type=type;perm=perm;unique=unique; ;;;;\r\n', name=";;;;") # case sensitiveness set_data('Type=type;TyPe=perm;UNIQUE=unique; name\r\n') _name, facts = next(self.client.mlsd()) for x in facts: self.assertTrue(x.islower()) # no data (directory empty) set_data('') self.assertRaises(StopIteration, next, self.client.mlsd()) set_data('') for x in self.client.mlsd(): self.fail("unexpected data %s" % x) def test_makeport(self): with self.client.makeport(): # IPv4 is in use, just make sure send_eprt has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'port') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() # IPv4 is in use, just make sure send_epsv has not been used self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv') def test_makepasv_issue43285_security_disabled(self): """Test the opt-in to the old vulnerable behavior.""" self.client.trust_server_pasv_ipv4_address = True bad_host, port = self.client.makepasv() self.assertEqual( bad_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. socket.create_connection((self.client.sock.getpeername()[0], port), timeout=TIMEOUT).close() def test_makepasv_issue43285_security_enabled_default(self): self.assertFalse(self.client.trust_server_pasv_ipv4_address) trusted_host, port = self.client.makepasv() self.assertNotEqual( trusted_host, self.server.handler_instance.fake_pasv_server_ip) # Opening and closing a connection keeps the dummy server happy # instead of timing out on accept. 
socket.create_connection((trusted_host, port), timeout=TIMEOUT).close() def test_with_statement(self): self.client.quit() def is_client_connected(): if self.client.sock is None: return False try: self.client.sendcmd('noop') except (OSError, EOFError): return False return True # base test with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.assertTrue(is_client_connected()) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # QUIT sent inside the with block with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.client.quit() self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) # force a wrong response code to be sent on QUIT: error_perm # is expected and the connection is supposed to be closed try: with ftplib.FTP(timeout=TIMEOUT) as self.client: self.client.connect(self.server.host, self.server.port) self.client.sendcmd('noop') self.server.handler_instance.next_response = '550 error on quit' except ftplib.error_perm as err: self.assertEqual(str(err), '550 error on quit') else: self.fail('Exception not raised') # needed to give the threaded server some time to set the attribute # which otherwise would still be == 'noop' time.sleep(0.1) self.assertEqual(self.server.handler_instance.last_received_cmd, 'quit') self.assertFalse(is_client_connected()) def test_source_address(self): self.client.quit() port = socket_helper.find_unused_port() try: self.client.connect(self.server.host, self.server.port, source_address=(HOST, port)) self.assertEqual(self.client.sock.getsockname()[1], port) self.client.quit() except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_source_address_passive_connection(self): port = socket_helper.find_unused_port() self.client.source_address = (HOST, port) try: with self.client.transfercmd('list') as sock: self.assertEqual(sock.getsockname()[1], port) except OSError as e: if e.errno == errno.EADDRINUSE: self.skipTest("couldn't bind to port %d" % port) raise def test_parse257(self): self.assertEqual(ftplib.parse257('257 "/foo/bar"'), '/foo/bar') self.assertEqual(ftplib.parse257('257 "/foo/bar" created'), '/foo/bar') self.assertEqual(ftplib.parse257('257 ""'), '') self.assertEqual(ftplib.parse257('257 "" created'), '') self.assertRaises(ftplib.error_reply, ftplib.parse257, '250 "/foo/bar"') # The 257 response is supposed to include the directory # name and in case it contains embedded double-quotes # they must be doubled (see RFC-959, chapter 7, appendix 2). 
self.assertEqual(ftplib.parse257('257 "/foo/b""ar"'), '/foo/b"ar') self.assertEqual(ftplib.parse257('257 "/foo/b""ar" created'), '/foo/b"ar') def test_line_too_long(self): self.assertRaises(ftplib.Error, self.client.sendcmd, 'x' * self.client.maxline * 2) def test_retrlines_too_long(self): self.client.sendcmd('SETLONGRETR %d' % (self.client.maxline * 2)) received = [] self.assertRaises(ftplib.Error, self.client.retrlines, 'retr', received.append) def test_storlines_too_long(self): f = io.BytesIO(b'x' * self.client.maxline * 2) self.assertRaises(ftplib.Error, self.client.storlines, 'stor', f) def test_encoding_param(self): encodings = ['latin-1', 'utf-8'] for encoding in encodings: with self.subTest(encoding=encoding): self.tearDown() self.setUp(encoding=encoding) self.assertEqual(encoding, self.client.encoding) self.test_retrbinary() self.test_storbinary() self.test_retrlines() new_dir = self.client.mkd('/non-ascii dir \xAE') self.check_data(new_dir, '/non-ascii dir \xAE') # Check default encoding client = ftplib.FTP(timeout=TIMEOUT) self.assertEqual(DEFAULT_ENCODING, client.encoding) @skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class TestIPv6Environment(TestCase): def setUp(self): self.server = DummyFTPServer((HOSTv6, 0), af=socket.AF_INET6, encoding=DEFAULT_ENCODING) self.server.start() self.client = ftplib.FTP(timeout=TIMEOUT, encoding=DEFAULT_ENCODING) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_af(self): self.assertEqual(self.client.af, socket.AF_INET6) def test_makeport(self): with self.client.makeport(): self.assertEqual(self.server.handler_instance.last_received_cmd, 'eprt') def test_makepasv(self): host, port = self.client.makepasv() conn = socket.create_connection((host, port), timeout=TIMEOUT) conn.close() self.assertEqual(self.server.handler_instance.last_received_cmd, 'epsv') def test_transfer(self): def retr(): def callback(data): received.append(data.decode(self.client.encoding)) received = [] self.client.retrbinary('retr', callback) self.assertEqual(len(''.join(received)), len(RETR_DATA)) self.assertEqual(''.join(received), RETR_DATA) self.client.set_pasv(True) retr() self.client.set_pasv(False) retr() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClassMixin(TestFTPClass): """Repeat TestFTPClass tests starting the TLS layer for both control and data connections first. 
""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT, encoding=encoding) self.client.connect(self.server.host, self.server.port) # enable TLS self.client.auth() self.client.prot_p() @skipUnless(ssl, "SSL not available") class TestTLS_FTPClass(TestCase): """Specific TLS_FTP class tests.""" def setUp(self, encoding=DEFAULT_ENCODING): self.server = DummyTLS_FTPServer((HOST, 0), encoding=encoding) self.server.start() self.client = ftplib.FTP_TLS(timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) def tearDown(self): self.client.close() self.server.stop() # Explicitly clear the attribute to prevent dangling thread self.server = None asyncore.close_all(ignore_all=True) def test_control_connection(self): self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIsInstance(self.client.sock, ssl.SSLSocket) def test_data_connection(self): # clear text with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # secured, after PROT P self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIsInstance(sock, ssl.SSLSocket) # consume from SSL socket to finalize handshake and avoid # "SSLError [SSL] shutdown while in init" self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") # PROT C is issued, the connection must be in cleartext again self.client.prot_c() with self.client.transfercmd('list') as sock: self.assertNotIsInstance(sock, ssl.SSLSocket) self.assertEqual(sock.recv(1024), LIST_DATA.encode(self.client.encoding)) self.assertEqual(self.client.voidresp(), "226 transfer complete") def test_login(self): # login() is supposed to implicitly secure the control connection self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.login() self.assertIsInstance(self.client.sock, ssl.SSLSocket) # make sure that AUTH TLS doesn't get issued again self.client.login() def test_auth_issued_twice(self): self.client.auth() self.assertRaises(ValueError, self.client.auth) def test_context(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE self.assertRaises(ValueError, ftplib.FTP_TLS, keyfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, context=ctx) self.assertRaises(ValueError, ftplib.FTP_TLS, certfile=CERTFILE, keyfile=CERTFILE, context=ctx) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) self.client.connect(self.server.host, self.server.port) self.assertNotIsInstance(self.client.sock, ssl.SSLSocket) self.client.auth() self.assertIs(self.client.sock.context, ctx) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.prot_p() with self.client.transfercmd('list') as sock: self.assertIs(sock.context, ctx) self.assertIsInstance(sock, ssl.SSLSocket) def test_ccc(self): self.assertRaises(ValueError, self.client.ccc) self.client.login(secure=True) self.assertIsInstance(self.client.sock, ssl.SSLSocket) self.client.ccc() self.assertRaises(ValueError, self.client.sock.unwrap) @skipUnless(False, "FIXME: bpo-32706") def test_check_hostname(self): self.client.quit() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 
self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.check_hostname, True) ctx.load_verify_locations(CAFILE) self.client = ftplib.FTP_TLS(context=ctx, timeout=TIMEOUT) # 127.0.0.1 doesn't match SAN self.client.connect(self.server.host, self.server.port) with self.assertRaises(ssl.CertificateError): self.client.auth() # exception quits connection self.client.connect(self.server.host, self.server.port) self.client.prot_p() with self.assertRaises(ssl.CertificateError): with self.client.transfercmd("list") as sock: pass self.client.quit() self.client.connect("localhost", self.server.port) self.client.auth() self.client.quit() self.client.connect("localhost", self.server.port) self.client.prot_p() with self.client.transfercmd("list") as sock: pass class TestTimeouts(TestCase): def setUp(self): self.evt = threading.Event() self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.settimeout(20) self.port = socket_helper.bind_port(self.sock) self.server_thread = threading.Thread(target=self.server) self.server_thread.daemon = True self.server_thread.start() # Wait for the server to be ready. self.evt.wait() self.evt.clear() self.old_port = ftplib.FTP.port ftplib.FTP.port = self.port def tearDown(self): ftplib.FTP.port = self.old_port self.server_thread.join() # Explicitly clear the attribute to prevent dangling thread self.server_thread = None def server(self): # This method sets the evt 3 times: # 1) when the connection is ready to be accepted. # 2) when it is safe for the caller to close the connection # 3) when we have closed the socket self.sock.listen() # (1) Signal the caller that we are ready to accept the connection. self.evt.set() try: conn, addr = self.sock.accept() except socket.timeout: pass else: conn.sendall(b"1 Hola mundo\n") conn.shutdown(socket.SHUT_WR) # (2) Signal the caller that it is safe to close the socket. 
self.evt.set() conn.close() finally: self.sock.close() def testTimeoutDefault(self): # default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST) finally: socket.setdefaulttimeout(None) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutNone(self): # no timeout -- do not use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: ftp = ftplib.FTP(HOST, timeout=None) finally: socket.setdefaulttimeout(None) self.assertIsNone(ftp.sock.gettimeout()) self.evt.wait() ftp.close() def testTimeoutValue(self): # a value ftp = ftplib.FTP(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() # bpo-39259 with self.assertRaises(ValueError): ftplib.FTP(HOST, timeout=0) def testTimeoutConnect(self): ftp = ftplib.FTP() ftp.connect(HOST, timeout=30) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDifferentOrder(self): ftp = ftplib.FTP(timeout=30) ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() def testTimeoutDirectAccess(self): ftp = ftplib.FTP() ftp.timeout = 30 ftp.connect(HOST) self.assertEqual(ftp.sock.gettimeout(), 30) self.evt.wait() ftp.close() class MiscTestCase(TestCase): def test__all__(self): blacklist = {'MSG_OOB', 'FTP_PORT', 'MAXLINE', 'CRLF', 'B_CRLF', 'Error', 'parse150', 'parse227', 'parse229', 'parse257', 'print_line', 'ftpcp', 'test'} support.check__all__(self, ftplib, blacklist=blacklist) def setUpModule(): thread_info = support.threading_setup() unittest.addModuleCleanup(support.threading_cleanup, *thread_info) if __name__ == '__main__': unittest.main() gevent-24.11.1/src/greentest/3.9/test_httplib.py000066400000000000000000002340271471441230600213410ustar00rootroot00000000000000import errno from http import client, HTTPStatus import io import itertools import os import array import re import socket import threading import warnings import unittest from unittest import mock TestCase = unittest.TestCase from test import support from test.support import socket_helper here = os.path.dirname(__file__) # Self-signed cert file for 'localhost' CERT_localhost = os.path.join(here, 'keycert.pem') # Self-signed cert file for 'fakehostname' CERT_fakehostname = os.path.join(here, 'keycert2.pem') # Self-signed cert file for self-signed.pythontest.net CERT_selfsigned_pythontestdotnet = os.path.join(here, 'selfsigned_pythontestdotnet.pem') # constants for testing chunked encoding chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd! \r\n' '8\r\n' 'and now \r\n' '22\r\n' 'for something completely different\r\n' ) chunked_expected = b'hello world! 
and now for something completely different' chunk_extension = ";foo=bar" last_chunk = "0\r\n" last_chunk_extended = "0" + chunk_extension + "\r\n" trailers = "X-Dummy: foo\r\nX-Dumm2: bar\r\n" chunked_end = "\r\n" HOST = socket_helper.HOST class FakeSocket: def __init__(self, text, fileclass=io.BytesIO, host=None, port=None): if isinstance(text, str): text = text.encode("ascii") self.text = text self.fileclass = fileclass self.data = b'' self.sendall_calls = 0 self.file_closed = False self.host = host self.port = port def sendall(self, data): self.sendall_calls += 1 self.data += data def makefile(self, mode, bufsize=None): if mode != 'r' and mode != 'rb': raise client.UnimplementedFileMode() # keep the file around so we can check how much was read from it self.file = self.fileclass(self.text) self.file.close = self.file_close #nerf close () return self.file def file_close(self): self.file_closed = True def close(self): pass def setsockopt(self, level, optname, value): pass class EPipeSocket(FakeSocket): def __init__(self, text, pipe_trigger): # When sendall() is called with pipe_trigger, raise EPIPE. FakeSocket.__init__(self, text) self.pipe_trigger = pipe_trigger def sendall(self, data): if self.pipe_trigger in data: raise OSError(errno.EPIPE, "gotcha") self.data += data def close(self): pass class NoEOFBytesIO(io.BytesIO): """Like BytesIO, but raises AssertionError on EOF. This is used below to test that http.client doesn't try to read more from the underlying file than it should. """ def read(self, n=-1): data = io.BytesIO.read(self, n) if data == b'': raise AssertionError('caller tried to read past EOF') return data def readline(self, length=None): data = io.BytesIO.readline(self, length) if data == b'': raise AssertionError('caller tried to read past EOF') return data class FakeSocketHTTPConnection(client.HTTPConnection): """HTTPConnection subclass using FakeSocket; counts connect() calls""" def __init__(self, *args): self.connections = 0 super().__init__('example.com') self.fake_socket_args = args self._create_connection = self.create_connection def connect(self): """Count the number of times connect() is invoked""" self.connections += 1 return super().connect() def create_connection(self, *pos, **kw): return FakeSocket(*self.fake_socket_args) class HeaderTests(TestCase): def test_auto_headers(self): # Some headers are added automatically, but should not be added by # .request() if they are explicitly set. 
class HeaderCountingBuffer(list): def __init__(self): self.count = {} def append(self, item): kv = item.split(b':') if len(kv) > 1: # item is a 'Key: Value' header string lcKey = kv[0].decode('ascii').lower() self.count.setdefault(lcKey, 0) self.count[lcKey] += 1 list.append(self, item) for explicit_header in True, False: for header in 'Content-length', 'Host', 'Accept-encoding': conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('blahblahblah') conn._buffer = HeaderCountingBuffer() body = 'spamspamspam' headers = {} if explicit_header: headers[header] = str(len(body)) conn.request('POST', '/', body, headers) self.assertEqual(conn._buffer.count[header.lower()], 1) def test_content_length_0(self): class ContentLengthChecker(list): def __init__(self): list.__init__(self) self.content_length = None def append(self, item): kv = item.split(b':', 1) if len(kv) > 1 and kv[0].lower() == b'content-length': self.content_length = kv[1].strip() list.append(self, item) # Here, we're testing that methods expecting a body get a # content-length set to zero if the body is empty (either None or '') bodies = (None, '') methods_with_body = ('PUT', 'POST', 'PATCH') for method, body in itertools.product(methods_with_body, bodies): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', body) self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # For these methods, we make sure that content-length is not set when # the body is None because it might cause unexpected behaviour on the # server. methods_without_body = ( 'GET', 'CONNECT', 'DELETE', 'HEAD', 'OPTIONS', 'TRACE', ) for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', None) self.assertEqual( conn._buffer.content_length, None, 'Header Content-Length set for empty body on {}'.format(method) ) # If the body is set to '', that's considered to be "present but # empty" rather than "missing", so content length would be set, even # for methods that don't expect a body. for method in methods_without_body: conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', '') self.assertEqual( conn._buffer.content_length, b'0', 'Header Content-Length incorrect on {}'.format(method) ) # If the body is set, make sure Content-Length is set. 
for method in itertools.chain(methods_without_body, methods_with_body): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn._buffer = ContentLengthChecker() conn.request(method, '/', ' ') self.assertEqual( conn._buffer.content_length, b'1', 'Header Content-Length incorrect on {}'.format(method) ) def test_putheader(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.putrequest('GET','/') conn.putheader('Content-length', 42) self.assertIn(b'Content-length: 42', conn._buffer) conn.putheader('Foo', ' bar ') self.assertIn(b'Foo: bar ', conn._buffer) conn.putheader('Bar', '\tbaz\t') self.assertIn(b'Bar: \tbaz\t', conn._buffer) conn.putheader('Authorization', 'Bearer mytoken') self.assertIn(b'Authorization: Bearer mytoken', conn._buffer) conn.putheader('IterHeader', 'IterA', 'IterB') self.assertIn(b'IterHeader: IterA\r\n\tIterB', conn._buffer) conn.putheader('LatinHeader', b'\xFF') self.assertIn(b'LatinHeader: \xFF', conn._buffer) conn.putheader('Utf8Header', b'\xc3\x80') self.assertIn(b'Utf8Header: \xc3\x80', conn._buffer) conn.putheader('C1-Control', b'next\x85line') self.assertIn(b'C1-Control: next\x85line', conn._buffer) conn.putheader('Embedded-Fold-Space', 'is\r\n allowed') self.assertIn(b'Embedded-Fold-Space: is\r\n allowed', conn._buffer) conn.putheader('Embedded-Fold-Tab', 'is\r\n\tallowed') self.assertIn(b'Embedded-Fold-Tab: is\r\n\tallowed', conn._buffer) conn.putheader('Key Space', 'value') self.assertIn(b'Key Space: value', conn._buffer) conn.putheader('KeySpace ', 'value') self.assertIn(b'KeySpace : value', conn._buffer) conn.putheader(b'Nonbreak\xa0Space', 'value') self.assertIn(b'Nonbreak\xa0Space: value', conn._buffer) conn.putheader(b'\xa0NonbreakSpace', 'value') self.assertIn(b'\xa0NonbreakSpace: value', conn._buffer) def test_ipv6host_header(self): # Default host header on IPv6 transaction should be wrapped by [] if # it is an IPv6 address expected = b'GET /foo HTTP/1.1\r\nHost: [2001::]:81\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001::]:81') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) expected = b'GET /foo HTTP/1.1\r\nHost: [2001:102A::]\r\n' \ b'Accept-Encoding: identity\r\n\r\n' conn = client.HTTPConnection('[2001:102A::]') sock = FakeSocket('') conn.sock = sock conn.request('GET', '/foo') self.assertTrue(sock.data.startswith(expected)) def test_malformed_headers_coped_with(self): # Issue 19996 body = "HTTP/1.1 200 OK\r\nFirst: val\r\n: nval\r\nSecond: val\r\n\r\n" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('First'), 'val') self.assertEqual(resp.getheader('Second'), 'val') def test_parse_all_octets(self): # Ensure no valid header field octet breaks the parser body = ( b'HTTP/1.1 200 OK\r\n' b"!#$%&'*+-.^_`|~: value\r\n" # Special token characters b'VCHAR: ' + bytes(range(0x21, 0x7E + 1)) + b'\r\n' b'obs-text: ' + bytes(range(0x80, 0xFF + 1)) + b'\r\n' b'obs-fold: text\r\n' b' folded with space\r\n' b'\tfolded with tab\r\n' b'Content-Length: 0\r\n' b'\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.getheader('Content-Length'), '0') self.assertEqual(resp.msg['Content-Length'], '0') self.assertEqual(resp.getheader("!#$%&'*+-.^_`|~"), 'value') self.assertEqual(resp.msg["!#$%&'*+-.^_`|~"], 'value') vchar = ''.join(map(chr, range(0x21, 0x7E + 1))) self.assertEqual(resp.getheader('VCHAR'), vchar) 
self.assertEqual(resp.msg['VCHAR'], vchar) self.assertIsNotNone(resp.getheader('obs-text')) self.assertIn('obs-text', resp.msg) for folded in (resp.getheader('obs-fold'), resp.msg['obs-fold']): self.assertTrue(folded.startswith('text')) self.assertIn(' folded with space', folded) self.assertTrue(folded.endswith('folded with tab')) def test_invalid_headers(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/') # http://tools.ietf.org/html/rfc7230#section-3.2.4, whitespace is no # longer allowed in header names cases = ( (b'Invalid\r\nName', b'ValidValue'), (b'Invalid\rName', b'ValidValue'), (b'Invalid\nName', b'ValidValue'), (b'\r\nInvalidName', b'ValidValue'), (b'\rInvalidName', b'ValidValue'), (b'\nInvalidName', b'ValidValue'), (b' InvalidName', b'ValidValue'), (b'\tInvalidName', b'ValidValue'), (b'Invalid:Name', b'ValidValue'), (b':InvalidName', b'ValidValue'), (b'ValidName', b'Invalid\r\nValue'), (b'ValidName', b'Invalid\rValue'), (b'ValidName', b'Invalid\nValue'), (b'ValidName', b'InvalidValue\r\n'), (b'ValidName', b'InvalidValue\r'), (b'ValidName', b'InvalidValue\n'), ) for name, value in cases: with self.subTest((name, value)): with self.assertRaisesRegex(ValueError, 'Invalid header'): conn.putheader(name, value) def test_headers_debuglevel(self): body = ( b'HTTP/1.1 200 OK\r\n' b'First: val\r\n' b'Second: val1\r\n' b'Second: val2\r\n' ) sock = FakeSocket(body) resp = client.HTTPResponse(sock, debuglevel=1) with support.captured_stdout() as output: resp.begin() lines = output.getvalue().splitlines() self.assertEqual(lines[0], "reply: 'HTTP/1.1 200 OK\\r\\n'") self.assertEqual(lines[1], "header: First: val") self.assertEqual(lines[2], "header: Second: val1") self.assertEqual(lines[3], "header: Second: val2") class HttpMethodTests(TestCase): def test_invalid_method_names(self): methods = ( 'GET\r', 'POST\n', 'PUT\n\r', 'POST\nValue', 'POST\nHOST:abc', 'GET\nrHost:abc\n', 'POST\rRemainder:\r', 'GET\rHOST:\n', '\nPUT' ) for method in methods: with self.assertRaisesRegex( ValueError, "method can't contain control characters"): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(None) conn.request(method=method, url="/") class TransferEncodingTest(TestCase): expected_body = b"It's just a flesh wound" def test_endheaders_chunked(self): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.putrequest('POST', '/') conn.endheaders(self._make_body(), encode_chunked=True) _, _, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) def test_explicit_headers(self): # explicit chunked conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') # this shouldn't actually be automatically chunk-encoded because the # calling code has explicitly stated that it's taking care of it conn.request( 'POST', '/', self._make_body(), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # explicit chunked, string body conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self.expected_body.decode('latin-1'), {'Transfer-Encoding': 'chunked'}) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers.keys()]) 
self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertEqual(body, self.expected_body) # User-specified TE, but request() does the chunk encoding conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', headers={'Transfer-Encoding': 'gzip, chunked'}, encode_chunked=True, body=self._make_body()) _, headers, body = self._parse_request(conn.sock.data) self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(headers['Transfer-Encoding'], 'gzip, chunked') self.assertEqual(self._parse_chunked(body), self.expected_body) def test_request(self): for empty_lines in (False, True,): conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request( 'POST', '/', self._make_body(empty_lines=empty_lines)) _, headers, body = self._parse_request(conn.sock.data) body = self._parse_chunked(body) self.assertEqual(body, self.expected_body) self.assertEqual(headers['Transfer-Encoding'], 'chunked') # Content-Length and Transfer-Encoding SHOULD not be sent in the # same request self.assertNotIn('content-length', [k.lower() for k in headers]) def test_empty_body(self): # Zero-length iterable should be treated like any other iterable conn = client.HTTPConnection('example.com') conn.sock = FakeSocket(b'') conn.request('POST', '/', ()) _, headers, body = self._parse_request(conn.sock.data) self.assertEqual(headers['Transfer-Encoding'], 'chunked') self.assertNotIn('content-length', [k.lower() for k in headers]) self.assertEqual(body, b"0\r\n\r\n") def _make_body(self, empty_lines=False): lines = self.expected_body.split(b' ') for idx, line in enumerate(lines): # for testing handling empty lines if empty_lines and idx % 2: yield b'' if idx < len(lines) - 1: yield line + b' ' else: yield line def _parse_request(self, data): lines = data.split(b'\r\n') request = lines[0] headers = {} n = 1 while n < len(lines) and len(lines[n]) > 0: key, val = lines[n].split(b':') key = key.decode('latin-1').strip() headers[key] = val.decode('latin-1').strip() n += 1 return request, headers, b'\r\n'.join(lines[n + 1:]) def _parse_chunked(self, data): body = [] trailers = {} n = 0 lines = data.split(b'\r\n') # parse body while True: size, chunk = lines[n:n+2] size = int(size, 16) if size == 0: n += 1 break self.assertEqual(size, len(chunk)) body.append(chunk) n += 2 # we /should/ hit the end chunk, but check against the size of # lines so we're not stuck in an infinite loop should we get # malformed data if n > len(lines): break return b''.join(body) class BasicTest(TestCase): def test_dir_with_added_behavior_on_status(self): # see issue40084 self.assertTrue({'description', 'name', 'phrase', 'value'} <= set(dir(HTTPStatus(404)))) def test_status_lines(self): # Test HTTP status lines body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(0), b'') # Issue #20007 self.assertFalse(resp.isclosed()) self.assertFalse(resp.closed) self.assertEqual(resp.read(), b"Text") self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) body = "HTTP/1.1 400.100 Not Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) self.assertRaises(client.BadStatusLine, resp.begin) def test_bad_status_repr(self): exc = client.BadStatusLine('') self.assertEqual(repr(exc), '''BadStatusLine("''")''') def test_partial_reads(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we 
read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_mixed_reads(self): # readline() should update the remaining length, so that read() knows # how much data is left and does not raise IncompleteRead body = "HTTP/1.1 200 Ok\r\nContent-Length: 13\r\n\r\nText\r\nAnother" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.readline(), b'Text\r\n') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(), b'Another') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos(self): # if we have Content-Length, HTTPResponse knows when to close itself, # the same behaviour as when we read the whole thing with read() body = "HTTP/1.1 200 Ok\r\nContent-Length: 4\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_reads_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_partial_readintos_no_content_length(self): # when no length is present, the socket should be gracefully closed when # all data was read body = "HTTP/1.1 200 Ok\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) def test_partial_reads_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(2), b'Te') self.assertFalse(resp.isclosed()) self.assertEqual(resp.read(2), b'xt') self.assertEqual(resp.read(1), b'') self.assertTrue(resp.isclosed()) def test_partial_readintos_incomplete_body(self): # if the server shuts down the connection before the whole # content-length is delivered, the socket is gracefully closed body = "HTTP/1.1 200 Ok\r\nContent-Length: 10\r\n\r\nText" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() b = bytearray(2) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'Te') self.assertFalse(resp.isclosed()) n = resp.readinto(b) self.assertEqual(n, 2) self.assertEqual(bytes(b), b'xt') n = resp.readinto(b) 
self.assertEqual(n, 0) self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:80", "www.python.org", 80), ("www.python.org:", "www.python.org", 80), ("www.python.org", "www.python.org", 80), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 80), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 80)): c = client.HTTPConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_response_headers(self): # test response with multiple message headers with the same field name. text = ('HTTP/1.1 200 OK\r\n' 'Set-Cookie: Customer="WILE_E_COYOTE"; ' 'Version="1"; Path="/acme"\r\n' 'Set-Cookie: Part_Number="Rocket_Launcher_0001"; Version="1";' ' Path="/acme"\r\n' '\r\n' 'No body\r\n') hdr = ('Customer="WILE_E_COYOTE"; Version="1"; Path="/acme"' ', ' 'Part_Number="Rocket_Launcher_0001"; Version="1"; Path="/acme"') s = FakeSocket(text) r = client.HTTPResponse(s) r.begin() cookies = r.getheader("Set-Cookie") self.assertEqual(cookies, hdr) def test_read_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() if resp.read(): self.fail("Did not expect response from HEAD request") def test_readinto_head(self): # Test that the library doesn't attempt to read any data # from a HEAD request. (Tickles SF bug #622042.) sock = FakeSocket( 'HTTP/1.1 200 OK\r\n' 'Content-Length: 14432\r\n' '\r\n', NoEOFBytesIO) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) if resp.readinto(b) != 0: self.fail("Did not expect response from HEAD request") self.assertEqual(bytes(b), b'\x00'*5) def test_too_many_headers(self): headers = '\r\n'.join('Header%d: foo' % i for i in range(client._MAXHEADERS + 1)) + '\r\n' text = ('HTTP/1.1 200 OK\r\n' + headers) s = FakeSocket(text) r = client.HTTPResponse(s) self.assertRaisesRegex(client.HTTPException, r"got more than \d+ headers", r.begin) def test_send_file(self): expected = (b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' b'Accept-Encoding: identity\r\n' b'Transfer-Encoding: chunked\r\n' b'\r\n') with open(__file__, 'rb') as body: conn = client.HTTPConnection('example.com') sock = FakeSocket(body) conn.sock = sock conn.request('GET', '/foo', body) self.assertTrue(sock.data.startswith(expected), '%r != %r' % (sock.data[:len(expected)], expected)) def test_send(self): expected = b'this is a test this is only a test' conn = client.HTTPConnection('example.com') sock = FakeSocket(None) conn.sock = sock conn.send(expected) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(array.array('b', expected)) self.assertEqual(expected, sock.data) sock.data = b'' conn.send(io.BytesIO(expected)) self.assertEqual(expected, sock.data) def test_send_updating_file(self): def data(): yield 'data' yield None yield 'data_two' class UpdatingFile(io.TextIOBase): mode = 'r' d = data() def read(self, blocksize=-1): return next(self.d) expected = b'data' conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.send(UpdatingFile()) self.assertEqual(sock.data, expected) def test_send_iter(self): 
expected = b'GET /foo HTTP/1.1\r\nHost: example.com\r\n' \ b'Accept-Encoding: identity\r\nContent-Length: 11\r\n' \ b'\r\nonetwothree' def body(): yield b"one" yield b"two" yield b"three" conn = client.HTTPConnection('example.com') sock = FakeSocket("") conn.sock = sock conn.request('GET', '/foo', body(), {'Content-Length': '11'}) self.assertEqual(sock.data, expected) def test_blocksize_request(self): """Check that request() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.request("PUT", "/", io.BytesIO(expected), {"Content-Length": "9"}) self.assertEqual(sock.sendall_calls, 3) body = sock.data.split(b"\r\n\r\n", 1)[1] self.assertEqual(body, expected) def test_blocksize_send(self): """Check that send() respects the configured block size.""" blocksize = 8 # For easy debugging. conn = client.HTTPConnection('example.com', blocksize=blocksize) sock = FakeSocket(None) conn.sock = sock expected = b"a" * blocksize + b"b" conn.send(io.BytesIO(expected)) self.assertEqual(sock.sendall_calls, 2) self.assertEqual(sock.data, expected) def test_send_type_error(self): # See: Issue #12676 conn = client.HTTPConnection('example.com') conn.sock = FakeSocket('') with self.assertRaises(TypeError): conn.request('POST', 'test', conn) def test_chunked(self): expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(n) + resp.read(n) + resp.read(), expected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_readinto_chunked(self): expected = chunked_expected nexpected = len(expected) b = bytearray(128) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() n = resp.readinto(b) self.assertEqual(b[:nexpected], expected) self.assertEqual(n, nexpected) resp.close() # Various read sizes for n in range(1, 12): sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() m = memoryview(b) i = resp.readinto(m[0:n]) i += resp.readinto(m[i:n + i]) i += resp.readinto(m[i:]) self.assertEqual(b[:nexpected], expected) self.assertEqual(i, nexpected) resp.close() for x in ('', 'foo\r\n'): sock = FakeSocket(chunked_start + x) resp = client.HTTPResponse(sock, method="GET") resp.begin() try: n = resp.readinto(b) except client.IncompleteRead as i: self.assertEqual(i.partial, expected) expected_message = 'IncompleteRead(%d bytes read)' % len(expected) self.assertEqual(repr(i), expected_message) self.assertEqual(str(i), expected_message) else: self.fail('IncompleteRead expected') finally: resp.close() def test_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello 
world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() self.assertEqual(resp.read(), b'') self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_readinto_chunked_head(self): chunked_start = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello world\r\n' '1\r\n' 'd\r\n' ) sock = FakeSocket(chunked_start + last_chunk + chunked_end) resp = client.HTTPResponse(sock, method="HEAD") resp.begin() b = bytearray(5) n = resp.readinto(b) self.assertEqual(n, 0) self.assertEqual(bytes(b), b'\x00'*5) self.assertEqual(resp.status, 200) self.assertEqual(resp.reason, 'OK') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_negative_content_length(self): sock = FakeSocket( 'HTTP/1.1 200 OK\r\nContent-Length: -1\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), b'Hello\r\n') self.assertTrue(resp.isclosed()) def test_incomplete_read(self): sock = FakeSocket('HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\nHello\r\n') resp = client.HTTPResponse(sock, method="GET") resp.begin() try: resp.read() except client.IncompleteRead as i: self.assertEqual(i.partial, b'Hello\r\n') self.assertEqual(repr(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertEqual(str(i), "IncompleteRead(7 bytes read, 3 more expected)") self.assertTrue(resp.isclosed()) else: self.fail('IncompleteRead expected') def test_epipe(self): sock = EPipeSocket( "HTTP/1.0 401 Authorization Required\r\n" "Content-type: text/html\r\n" "WWW-Authenticate: Basic realm=\"example\"\r\n", b"Content-Length") conn = client.HTTPConnection("example.com") conn.sock = sock self.assertRaises(OSError, lambda: conn.request("PUT", "/url", "body")) resp = conn.getresponse() self.assertEqual(401, resp.status) self.assertEqual("Basic realm=\"example\"", resp.getheader("www-authenticate")) # Test lines overflowing the max line size (_MAXLINE in http.client) def test_overflowing_status_line(self): body = "HTTP/1.1 200 Ok" + "k" * 65536 + "\r\n" resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises((client.LineTooLong, client.BadStatusLine), resp.begin) def test_overflowing_header_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'X-Foo: bar' + 'r' * 65536 + '\r\n\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) self.assertRaises(client.LineTooLong, resp.begin) def test_overflowing_header_limit_after_100(self): body = ( 'HTTP/1.1 100 OK\r\n' 'r\n' * 32768 ) resp = client.HTTPResponse(FakeSocket(body)) with self.assertRaises(client.HTTPException) as cm: resp.begin() # We must assert more because other reasonable errors that we # do not want can also be HTTPException derived. 
self.assertIn('got more than ', str(cm.exception)) self.assertIn('headers', str(cm.exception)) def test_overflowing_chunked_line(self): body = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' + '0' * 65536 + 'a\r\n' 'hello world\r\n' '0\r\n' '\r\n' ) resp = client.HTTPResponse(FakeSocket(body)) resp.begin() self.assertRaises(client.LineTooLong, resp.read) def test_early_eof(self): # Test httpresponse with no \r\n termination, body = "HTTP/1.1 200 Ok" sock = FakeSocket(body) resp = client.HTTPResponse(sock) resp.begin() self.assertEqual(resp.read(), b'') self.assertTrue(resp.isclosed()) self.assertFalse(resp.closed) resp.close() self.assertTrue(resp.closed) def test_error_leak(self): # Test that the socket is not leaked if getresponse() fails conn = client.HTTPConnection('example.com') response = None class Response(client.HTTPResponse): def __init__(self, *pos, **kw): nonlocal response response = self # Avoid garbage collector closing the socket client.HTTPResponse.__init__(self, *pos, **kw) conn.response_class = Response conn.sock = FakeSocket('Invalid status line') conn.request('GET', '/') self.assertRaises(client.BadStatusLine, conn.getresponse) self.assertTrue(response.closed) self.assertTrue(conn.sock.file_closed) def test_chunked_extension(self): extra = '3;foo=bar\r\n' + 'abc\r\n' expected = chunked_expected + b'abc' sock = FakeSocket(chunked_start + extra + last_chunk_extended + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_missing_end(self): """some servers may serve up a short chunked encoding stream""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk) #no terminating crlf resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) resp.close() def test_chunked_trailers(self): """See that trailers are read and ignored""" expected = chunked_expected sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # we should have reached the end of the file self.assertEqual(sock.file.read(), b"") #we read to the end resp.close() def test_chunked_sync(self): """Check that we don't read past the end of the chunked-encoding stream""" expected = chunked_expected extradata = "extradata" sock = FakeSocket(chunked_start + last_chunk + trailers + chunked_end + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata.encode("ascii")) #we read to the end resp.close() def test_content_length_sync(self): """Check that we don't read past the end of the Content-Length stream""" extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readlines_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readlines(2000), [expected]) # the file should now have 
our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(2000), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_readline_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 10\r\n\r\n' + expected + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.readline(10), expected) self.assertEqual(resp.readline(10), b"") # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_read1_bound_content_length(self): extradata = b"extradata" expected = b"Hello123\r\n" sock = FakeSocket(b'HTTP/1.1 200 OK\r\nContent-Length: 30\r\n\r\n' + expected*3 + extradata) resp = client.HTTPResponse(sock, method="GET") resp.begin() self.assertEqual(resp.read1(20), expected*2) self.assertEqual(resp.read(), expected) # the file should now have our extradata ready to be read self.assertEqual(sock.file.read(), extradata) #we read to the end resp.close() def test_response_fileno(self): # Make sure fd returned by fileno is valid. serv = socket.create_server((HOST, 0)) self.addCleanup(serv.close) result = None def run_server(): [conn, address] = serv.accept() with conn, conn.makefile("rb") as reader: # Read the request header until a blank line while True: line = reader.readline() if not line.rstrip(b"\r\n"): break conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n") nonlocal result result = reader.read() thread = threading.Thread(target=run_server) thread.start() self.addCleanup(thread.join, float(1)) conn = client.HTTPConnection(*serv.getsockname()) conn.request("CONNECT", "dummy:1234") response = conn.getresponse() try: self.assertEqual(response.status, client.OK) s = socket.socket(fileno=response.fileno()) try: s.sendall(b"proxied data\n") finally: s.detach() finally: response.close() conn.close() thread.join() self.assertEqual(result, b"proxied data\n") def test_putrequest_override_domain_validation(self): """ It should be possible to override the default validation behavior in putrequest (bpo-38216). """ class UnsafeHTTPConnection(client.HTTPConnection): def _validate_path(self, url): pass conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/\x00') def test_putrequest_override_host_validation(self): class UnsafeHTTPConnection(client.HTTPConnection): def _validate_host(self, url): pass conn = UnsafeHTTPConnection('example.com\r\n') conn.sock = FakeSocket('') # set skip_host so a ValueError is not raised upon adding the # invalid URL as the value of the "Host:" header conn.putrequest('GET', '/', skip_host=1) def test_putrequest_override_encoding(self): """ It should be possible to override the default encoding to transmit bytes in another encoding even if invalid (bpo-36274). 
""" class UnsafeHTTPConnection(client.HTTPConnection): def _encode_request(self, str_url): return str_url.encode('utf-8') conn = UnsafeHTTPConnection('example.com') conn.sock = FakeSocket('') conn.putrequest('GET', '/☃') class ExtendedReadTest(TestCase): """ Test peek(), read1(), readline() """ lines = ( 'HTTP/1.1 200 OK\r\n' '\r\n' 'hello world!\n' 'and now \n' 'for something completely different\n' 'foo' ) lines_expected = lines[lines.find('hello'):].encode("ascii") lines_chunked = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) def setUp(self): sock = FakeSocket(self.lines) resp = client.HTTPResponse(sock, method="GET") resp.begin() resp.fp = io.BufferedReader(resp.fp) self.resp = resp def test_peek(self): resp = self.resp # patch up the buffered peek so that it returns not too much stuff oldpeek = resp.fp.peek def mypeek(n=-1): p = oldpeek(n) if n >= 0: return p[:n] return p[:10] resp.fp.peek = mypeek all = [] while True: # try a short peek p = resp.peek(3) if p: self.assertGreater(len(p), 0) # then unbounded peek p2 = resp.peek() self.assertGreaterEqual(len(p2), len(p)) self.assertTrue(p2.startswith(p)) next = resp.read(len(p2)) self.assertEqual(next, p2) else: next = resp.read() self.assertFalse(next) all.append(next) if not next: break self.assertEqual(b"".join(all), self.lines_expected) def test_readline(self): resp = self.resp self._verify_readline(self.resp.readline, self.lines_expected) def _verify_readline(self, readline, expected): all = [] while True: # short readlines line = readline(5) if line and line != b"foo": if len(line) < 5: self.assertTrue(line.endswith(b"\n")) all.append(line) if not line: break self.assertEqual(b"".join(all), expected) def test_read1(self): resp = self.resp def r(): res = resp.read1(4) self.assertLessEqual(len(res), 4) return res readliner = Readliner(r) self._verify_readline(readliner.readline, self.lines_expected) def test_read1_unbounded(self): resp = self.resp all = [] while True: data = resp.read1() if not data: break all.append(data) self.assertEqual(b"".join(all), self.lines_expected) def test_read1_bounded(self): resp = self.resp all = [] while True: data = resp.read1(10) if not data: break self.assertLessEqual(len(data), 10) all.append(data) self.assertEqual(b"".join(all), self.lines_expected) def test_read1_0(self): self.assertEqual(self.resp.read1(0), b"") def test_peek_0(self): p = self.resp.peek(0) self.assertLessEqual(0, len(p)) class ExtendedReadTestChunked(ExtendedReadTest): """ Test peek(), read1(), readline() in chunked mode """ lines = ( 'HTTP/1.1 200 OK\r\n' 'Transfer-Encoding: chunked\r\n\r\n' 'a\r\n' 'hello worl\r\n' '3\r\n' 'd!\n\r\n' '9\r\n' 'and now \n\r\n' '23\r\n' 'for something completely different\n\r\n' '3\r\n' 'foo\r\n' '0\r\n' # terminating chunk '\r\n' # end of trailers ) class Readliner: """ a simple readline class that uses an arbitrary read function and buffering """ def __init__(self, readfunc): self.readfunc = readfunc self.remainder = b"" def readline(self, limit): data = [] datalen = 0 read = self.remainder try: while True: idx = read.find(b'\n') if idx != -1: break if datalen + len(read) >= limit: idx = limit - datalen - 1 # read more data data.append(read) read = self.readfunc() if not read: idx = 0 #eof condition break idx += 1 data.append(read[:idx]) self.remainder = read[idx:] return b"".join(data) 
except: self.remainder = b"".join(data) raise class OfflineTest(TestCase): def test_all(self): # Documented objects defined in the module should be in __all__ expected = {"responses"} # Allowlist documented dict() object # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity blacklist = {"HTTPMessage", "parse_headers"} for name in dir(client): if name.startswith("_") or name in blacklist: continue module_object = getattr(client, name) if getattr(module_object, "__module__", None) == "http.client": expected.add(name) self.assertCountEqual(client.__all__, expected) def test_responses(self): self.assertEqual(client.responses[client.NOT_FOUND], "Not Found") def test_client_constants(self): # Make sure we don't break backward compatibility with 3.4 expected = [ 'CONTINUE', 'SWITCHING_PROTOCOLS', 'PROCESSING', 'OK', 'CREATED', 'ACCEPTED', 'NON_AUTHORITATIVE_INFORMATION', 'NO_CONTENT', 'RESET_CONTENT', 'PARTIAL_CONTENT', 'MULTI_STATUS', 'IM_USED', 'MULTIPLE_CHOICES', 'MOVED_PERMANENTLY', 'FOUND', 'SEE_OTHER', 'NOT_MODIFIED', 'USE_PROXY', 'TEMPORARY_REDIRECT', 'BAD_REQUEST', 'UNAUTHORIZED', 'PAYMENT_REQUIRED', 'FORBIDDEN', 'NOT_FOUND', 'METHOD_NOT_ALLOWED', 'NOT_ACCEPTABLE', 'PROXY_AUTHENTICATION_REQUIRED', 'REQUEST_TIMEOUT', 'CONFLICT', 'GONE', 'LENGTH_REQUIRED', 'PRECONDITION_FAILED', 'REQUEST_ENTITY_TOO_LARGE', 'REQUEST_URI_TOO_LONG', 'UNSUPPORTED_MEDIA_TYPE', 'REQUESTED_RANGE_NOT_SATISFIABLE', 'EXPECTATION_FAILED', 'IM_A_TEAPOT', 'MISDIRECTED_REQUEST', 'UNPROCESSABLE_ENTITY', 'LOCKED', 'FAILED_DEPENDENCY', 'UPGRADE_REQUIRED', 'PRECONDITION_REQUIRED', 'TOO_MANY_REQUESTS', 'REQUEST_HEADER_FIELDS_TOO_LARGE', 'UNAVAILABLE_FOR_LEGAL_REASONS', 'INTERNAL_SERVER_ERROR', 'NOT_IMPLEMENTED', 'BAD_GATEWAY', 'SERVICE_UNAVAILABLE', 'GATEWAY_TIMEOUT', 'HTTP_VERSION_NOT_SUPPORTED', 'INSUFFICIENT_STORAGE', 'NOT_EXTENDED', 'NETWORK_AUTHENTICATION_REQUIRED', 'EARLY_HINTS', 'TOO_EARLY' ] for const in expected: with self.subTest(constant=const): self.assertTrue(hasattr(client, const)) class SourceAddressTest(TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.source_port = socket_helper.find_unused_port() self.serv.listen() self.conn = None def tearDown(self): if self.conn: self.conn.close() self.conn = None self.serv.close() self.serv = None def testHTTPConnectionSourceAddress(self): self.conn = client.HTTPConnection(HOST, self.port, source_address=('', self.source_port)) self.conn.connect() self.assertEqual(self.conn.sock.getsockname()[1], self.source_port) @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not defined') def testHTTPSConnectionSourceAddress(self): self.conn = client.HTTPSConnection(HOST, self.port, source_address=('', self.source_port)) # We don't test anything here other than the constructor not barfing as # this code doesn't deal with setting up an active running SSL server # for an ssl_wrapped connect() to actually return from. class TimeoutTest(TestCase): PORT = None def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) TimeoutTest.PORT = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None def testTimeoutAttribute(self): # This will prove that the timeout gets through HTTPConnection # and into the socket. 
# default -- use global socket timeout self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() # no timeout -- do not use global socket default self.assertIsNone(socket.getdefaulttimeout()) socket.setdefaulttimeout(30) try: httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=None) httpConn.connect() finally: socket.setdefaulttimeout(None) self.assertEqual(httpConn.sock.gettimeout(), None) httpConn.close() # a value httpConn = client.HTTPConnection(HOST, TimeoutTest.PORT, timeout=30) httpConn.connect() self.assertEqual(httpConn.sock.gettimeout(), 30) httpConn.close() class PersistenceTest(TestCase): def test_reuse_reconnect(self): # Should reuse or reconnect depending on header from server tests = ( ('1.0', '', False), ('1.0', 'Connection: keep-alive\r\n', True), ('1.1', '', True), ('1.1', 'Connection: close\r\n', False), ('1.0', 'Connection: keep-ALIVE\r\n', True), ('1.1', 'Connection: cloSE\r\n', False), ) for version, header, reuse in tests: with self.subTest(version=version, header=header): msg = ( 'HTTP/{} 200 OK\r\n' '{}' 'Content-Length: 12\r\n' '\r\n' 'Dummy body\r\n' ).format(version, header) conn = FakeSocketHTTPConnection(msg) self.assertIsNone(conn.sock) conn.request('GET', '/open-connection') with conn.getresponse() as response: self.assertEqual(conn.sock is None, not reuse) response.read() self.assertEqual(conn.sock is None, not reuse) self.assertEqual(conn.connections, 1) conn.request('GET', '/subsequent-request') self.assertEqual(conn.connections, 1 if reuse else 2) def test_disconnected(self): def make_reset_reader(text): """Return BufferedReader that raises ECONNRESET at EOF""" stream = io.BytesIO(text) def readinto(buffer): size = io.BytesIO.readinto(stream, buffer) if size == 0: raise ConnectionResetError() return size stream.readinto = readinto return io.BufferedReader(stream) tests = ( (io.BytesIO, client.RemoteDisconnected), (make_reset_reader, ConnectionResetError), ) for stream_factory, exception in tests: with self.subTest(exception=exception): conn = FakeSocketHTTPConnection(b'', stream_factory) conn.request('GET', '/eof-response') self.assertRaises(exception, conn.getresponse) self.assertIsNone(conn.sock) # HTTPConnection.connect() should be automatically invoked conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) def test_100_close(self): conn = FakeSocketHTTPConnection( b'HTTP/1.1 100 Continue\r\n' b'\r\n' # Missing final response ) conn.request('GET', '/', headers={'Expect': '100-continue'}) self.assertRaises(client.RemoteDisconnected, conn.getresponse) self.assertIsNone(conn.sock) conn.request('GET', '/reconnect') self.assertEqual(conn.connections, 2) class HTTPSTest(TestCase): def setUp(self): if not hasattr(client, 'HTTPSConnection'): self.skipTest('ssl support required') def make_server(self, certfile): from test.ssl_servers import make_https_server return make_https_server(self, certfile=certfile) def test_attributes(self): # simple test to check it's storing the timeout h = client.HTTPSConnection(HOST, TimeoutTest.PORT, timeout=30) self.assertEqual(h.timeout, 30) def test_networked(self): # Default settings: requires a valid cert from a trusted CA import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): h = client.HTTPSConnection('self-signed.pythontest.net', 443) with 
self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_networked_noverification(self): # Switch off cert verification import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl._create_unverified_context() h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) h.request('GET', '/') resp = h.getresponse() h.close() self.assertIn('nginx', resp.getheader('server')) resp.close() @support.system_must_validate_cert def test_networked_trusted_by_default_cert(self): # Default settings: requires a valid cert from a trusted CA support.requires('network') with socket_helper.transient_internet('www.python.org'): h = client.HTTPSConnection('www.python.org', 443) h.request('GET', '/') resp = h.getresponse() content_type = resp.getheader('content-type') resp.close() h.close() self.assertIn('text/html', content_type) def test_networked_good_cert(self): # We feed the server's cert as a validating cert import ssl support.requires('network') selfsigned_pythontestdotnet = 'self-signed.pythontest.net' with socket_helper.transient_internet(selfsigned_pythontestdotnet): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(context.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(context.check_hostname, True) context.load_verify_locations(CERT_selfsigned_pythontestdotnet) try: h = client.HTTPSConnection(selfsigned_pythontestdotnet, 443, context=context) h.request('GET', '/') resp = h.getresponse() except ssl.SSLError as ssl_err: ssl_err_str = str(ssl_err) # In the error message of [SSL: CERTIFICATE_VERIFY_FAILED] on # modern Linux distros (Debian Buster, etc) default OpenSSL # configurations it'll fail saying "key too weak" until we # address https://bugs.python.org/issue36816 to use a proper # key size on self-signed.pythontest.net. if re.search(r'(?i)key.too.weak', ssl_err_str): raise unittest.SkipTest( f'Got {ssl_err_str} trying to connect ' f'to {selfsigned_pythontestdotnet}. 
' 'See https://bugs.python.org/issue36816.') raise server_string = resp.getheader('server') resp.close() h.close() self.assertIn('nginx', server_string) def test_networked_bad_cert(self): # We feed a "CA" cert that is unrelated to the server's cert import ssl support.requires('network') with socket_helper.transient_internet('self-signed.pythontest.net'): context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('self-signed.pythontest.net', 443, context=context) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_unknown_cert(self): # The custom cert isn't known to the default trust bundle import ssl server = self.make_server(CERT_localhost) h = client.HTTPSConnection('localhost', server.port) with self.assertRaises(ssl.SSLError) as exc_info: h.request('GET', '/') self.assertEqual(exc_info.exception.reason, 'CERTIFICATE_VERIFY_FAILED') def test_local_good_hostname(self): # The (valid) cert validates the HTTP hostname import ssl server = self.make_server(CERT_localhost) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_localhost) h = client.HTTPSConnection('localhost', server.port, context=context) self.addCleanup(h.close) h.request('GET', '/nonexistent') resp = h.getresponse() self.addCleanup(resp.close) self.assertEqual(resp.status, 404) def test_local_bad_hostname(self): # The (valid) cert doesn't validate the HTTP hostname import ssl server = self.make_server(CERT_fakehostname) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.load_verify_locations(CERT_fakehostname) h = client.HTTPSConnection('localhost', server.port, context=context) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # Same with explicit check_hostname=True with support.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') # With check_hostname=False, the mismatching is ignored context.check_hostname = False with support.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=False) h.request('GET', '/nonexistent') resp = h.getresponse() resp.close() h.close() self.assertEqual(resp.status, 404) # The context's check_hostname setting is used if one isn't passed to # HTTPSConnection. context.check_hostname = False h = client.HTTPSConnection('localhost', server.port, context=context) h.request('GET', '/nonexistent') resp = h.getresponse() self.assertEqual(resp.status, 404) resp.close() h.close() # Passing check_hostname to HTTPSConnection should override the # context's setting. 
with support.check_warnings(('', DeprecationWarning)): h = client.HTTPSConnection('localhost', server.port, context=context, check_hostname=True) with self.assertRaises(ssl.CertificateError): h.request('GET', '/') @unittest.skipIf(not hasattr(client, 'HTTPSConnection'), 'http.client.HTTPSConnection not available') def test_host_port(self): # Check invalid host_port for hp in ("www.python.org:abc", "user:password@www.python.org"): self.assertRaises(client.InvalidURL, client.HTTPSConnection, hp) for hp, h, p in (("[fe80::207:e9ff:fe9b]:8000", "fe80::207:e9ff:fe9b", 8000), ("www.python.org:443", "www.python.org", 443), ("www.python.org:", "www.python.org", 443), ("www.python.org", "www.python.org", 443), ("[fe80::207:e9ff:fe9b]", "fe80::207:e9ff:fe9b", 443), ("[fe80::207:e9ff:fe9b]:", "fe80::207:e9ff:fe9b", 443)): c = client.HTTPSConnection(hp) self.assertEqual(h, c.host) self.assertEqual(p, c.port) def test_tls13_pha(self): import ssl if not ssl.HAS_TLSv1_3: self.skipTest('TLS 1.3 support required') # just check status of PHA flag h = client.HTTPSConnection('localhost', 443) self.assertTrue(h._context.post_handshake_auth) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertFalse(context.post_handshake_auth) h = client.HTTPSConnection('localhost', 443, context=context) self.assertIs(h._context, context) self.assertFalse(h._context.post_handshake_auth) with warnings.catch_warnings(): warnings.filterwarnings('ignore', 'key_file, cert_file and check_hostname are deprecated', DeprecationWarning) h = client.HTTPSConnection('localhost', 443, context=context, cert_file=CERT_localhost) self.assertTrue(h._context.post_handshake_auth) class RequestBodyTest(TestCase): """Test cases where a request includes a message body.""" def setUp(self): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket("") self.conn.sock = self.sock def get_headers_and_fp(self): f = io.BytesIO(self.sock.data) f.readline() # read the request line message = client.parse_headers(f) return message, f def test_list_body(self): # Note that no content-length is automatically calculated for # an iterable. The request will fall back to send chunked # transfer encoding. cases = ( ([b'foo', b'bar'], b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ((b'foo', b'bar'), b'3\r\nfoo\r\n3\r\nbar\r\n0\r\n\r\n'), ) for body, expected in cases: with self.subTest(body): self.conn = client.HTTPConnection('example.com') self.conn.sock = self.sock = FakeSocket('') self.conn.request('PUT', '/url', body) msg, f = self.get_headers_and_fp() self.assertNotIn('Content-Type', msg) self.assertNotIn('Content-Length', msg) self.assertEqual(msg.get('Transfer-Encoding'), 'chunked') self.assertEqual(expected, f.read()) def test_manual_content_length(self): # Set an incorrect content-length so that we can verify that # it will not be over-ridden by the library. 
self.conn.request("PUT", "/url", "body", {"Content-Length": "42"}) message, f = self.get_headers_and_fp() self.assertEqual("42", message.get("content-length")) self.assertEqual(4, len(f.read())) def test_ascii_body(self): self.conn.request("PUT", "/url", "body") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("4", message.get("content-length")) self.assertEqual(b'body', f.read()) def test_latin1_body(self): self.conn.request("PUT", "/url", "body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_bytes_body(self): self.conn.request("PUT", "/url", b"body\xc1") message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("5", message.get("content-length")) self.assertEqual(b'body\xc1', f.read()) def test_text_file_body(self): self.addCleanup(support.unlink, support.TESTFN) with open(support.TESTFN, "w") as f: f.write("body") with open(support.TESTFN) as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) # No content-length will be determined for files; the body # will be sent using chunked transfer encoding instead. self.assertIsNone(message.get("content-length")) self.assertEqual("chunked", message.get("transfer-encoding")) self.assertEqual(b'4\r\nbody\r\n0\r\n\r\n', f.read()) def test_binary_file_body(self): self.addCleanup(support.unlink, support.TESTFN) with open(support.TESTFN, "wb") as f: f.write(b"body\xc1") with open(support.TESTFN, "rb") as f: self.conn.request("PUT", "/url", f) message, f = self.get_headers_and_fp() self.assertEqual("text/plain", message.get_content_type()) self.assertIsNone(message.get_charset()) self.assertEqual("chunked", message.get("Transfer-Encoding")) self.assertNotIn("Content-Length", message) self.assertEqual(b'5\r\nbody\xc1\r\n0\r\n\r\n', f.read()) class HTTPResponseTest(TestCase): def setUp(self): body = "HTTP/1.1 200 Ok\r\nMy-Header: first-value\r\nMy-Header: \ second-value\r\n\r\nText" sock = FakeSocket(body) self.resp = client.HTTPResponse(sock) self.resp.begin() def test_getting_header(self): header = self.resp.getheader('My-Header') self.assertEqual(header, 'first-value, second-value') header = self.resp.getheader('My-Header', 'some default') self.assertEqual(header, 'first-value, second-value') def test_getting_nonexistent_header_with_string_default(self): header = self.resp.getheader('No-Such-Header', 'default-value') self.assertEqual(header, 'default-value') def test_getting_nonexistent_header_with_iterable_default(self): header = self.resp.getheader('No-Such-Header', ['default', 'values']) self.assertEqual(header, 'default, values') header = self.resp.getheader('No-Such-Header', ('default', 'values')) self.assertEqual(header, 'default, values') def test_getting_nonexistent_header_without_default(self): header = self.resp.getheader('No-Such-Header') self.assertEqual(header, None) def test_getting_header_defaultint(self): header = self.resp.getheader('No-Such-Header',default=42) self.assertEqual(header, 42) class TunnelTests(TestCase): def setUp(self): response_text = ( 'HTTP/1.0 200 OK\r\n\r\n' # Reply to CONNECT 'HTTP/1.1 200 
OK\r\n' # Reply to HEAD 'Content-Length: 42\r\n\r\n' ) self.host = 'proxy.com' self.conn = client.HTTPConnection(self.host) self.conn._create_connection = self._create_connection(response_text) def tearDown(self): self.conn.close() def _create_connection(self, response_text): def create_connection(address, timeout=None, source_address=None): return FakeSocket(response_text, host=address[0], port=address[1]) return create_connection def test_set_tunnel_host_port_headers(self): tunnel_host = 'destination.com' tunnel_port = 8888 tunnel_headers = {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11)'} self.conn.set_tunnel(tunnel_host, port=tunnel_port, headers=tunnel_headers) self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertEqual(self.conn._tunnel_host, tunnel_host) self.assertEqual(self.conn._tunnel_port, tunnel_port) self.assertEqual(self.conn._tunnel_headers, tunnel_headers) def test_disallow_set_tunnel_after_connect(self): # Once connected, we shouldn't be able to tunnel anymore self.conn.connect() self.assertRaises(RuntimeError, self.conn.set_tunnel, 'destination.com') def test_connect_with_tunnel(self): self.conn.set_tunnel('destination.com') self.conn.request('HEAD', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) # issue22095 self.assertNotIn(b'Host: destination.com:None', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) # This test should be removed when CONNECT gets the HTTP/1.1 blessing self.assertNotIn(b'Host: proxy.com', self.conn.sock.data) def test_tunnel_connect_single_send_connection_setup(self): """Regression test for https://bugs.python.org/issue43332.""" with mock.patch.object(self.conn, 'send') as mock_send: self.conn.set_tunnel('destination.com') self.conn.connect() self.conn.request('GET', '/') mock_send.assert_called() # Likely 2, but this test only cares about the first.
self.assertGreater( len(mock_send.mock_calls), 1, msg=f'unexpected number of send calls: {mock_send.mock_calls}') proxy_setup_data_sent = mock_send.mock_calls[0][1][0] self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent) self.assertTrue( proxy_setup_data_sent.endswith(b'\r\n\r\n'), msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}') def test_connect_put_request(self): self.conn.set_tunnel('destination.com') self.conn.request('PUT', '/', '') self.assertEqual(self.conn.sock.host, self.host) self.assertEqual(self.conn.sock.port, client.HTTP_PORT) self.assertIn(b'CONNECT destination.com', self.conn.sock.data) self.assertIn(b'Host: destination.com', self.conn.sock.data) def test_tunnel_debuglog(self): expected_header = 'X-Dummy: 1' response_text = 'HTTP/1.0 200 OK\r\n{}\r\n\r\n'.format(expected_header) self.conn.set_debuglevel(1) self.conn._create_connection = self._create_connection(response_text) self.conn.set_tunnel('destination.com') with support.captured_stdout() as output: self.conn.request('PUT', '/', '') lines = output.getvalue().splitlines() self.assertIn('header: {}'.format(expected_header), lines) if __name__ == '__main__': unittest.main(verbosity=2) gevent-24.11.1/src/greentest/3.9/test_select.py000066400000000000000000000053061471441230600211460ustar00rootroot00000000000000import errno import os import select import sys import unittest from test import support @unittest.skipIf((sys.platform[:3]=='win'), "can't easily test on this system") class SelectTestCase(unittest.TestCase): class Nope: pass class Almost: def fileno(self): return 'fileno' def test_error_conditions(self): self.assertRaises(TypeError, select.select, 1, 2, 3) self.assertRaises(TypeError, select.select, [self.Nope()], [], []) self.assertRaises(TypeError, select.select, [self.Almost()], [], []) self.assertRaises(TypeError, select.select, [], [], [], "not a number") self.assertRaises(ValueError, select.select, [], [], [], -1) # Issue #12367: http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/155606 @unittest.skipIf(sys.platform.startswith('freebsd'), 'skip because of a FreeBSD bug: kern/155606') def test_errno(self): with open(__file__, 'rb') as fp: fd = fp.fileno() fp.close() try: select.select([fd], [], [], 0) except OSError as err: self.assertEqual(err.errno, errno.EBADF) else: self.fail("exception not raised") def test_returned_list_identity(self): # See issue #8329 r, w, x = select.select([], [], [], 1) self.assertIsNot(r, w) self.assertIsNot(r, x) self.assertIsNot(w, x) def test_select(self): cmd = 'for i in 0 1 2 3 4 5 6 7 8 9; do echo testing...; sleep 1; done' with os.popen(cmd) as p: for tout in (0, 1, 2, 4, 8, 16) + (None,)*10: if support.verbose: print('timeout =', tout) rfd, wfd, xfd = select.select([p], [], [], tout) if (rfd, wfd, xfd) == ([], [], []): continue if (rfd, wfd, xfd) == ([p], [], []): line = p.readline() if support.verbose: print(repr(line)) if not line: if support.verbose: print('EOF') break continue self.fail('Unexpected return values from select():', rfd, wfd, xfd) # Issue 16230: Crash on select resized list def test_select_mutated(self): a = [] class F: def fileno(self): del a[-1] return sys.__stdout__.fileno() a[:] = [F()] * 10 self.assertEqual(select.select([], a, []), ([], a[:5], [])) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_selectors.py000066400000000000000000000443561471441230600217020ustar00rootroot00000000000000import errno import os import random import selectors import signal 
import socket import sys from test import support from test.support import socket_helper from time import sleep import unittest import unittest.mock import tempfile from time import monotonic as time try: import resource except ImportError: resource = None if hasattr(socket, 'socketpair'): socketpair = socket.socketpair else: def socketpair(family=socket.AF_INET, type=socket.SOCK_STREAM, proto=0): with socket.socket(family, type, proto) as l: l.bind((socket_helper.HOST, 0)) l.listen() c = socket.socket(family, type, proto) try: c.connect(l.getsockname()) caddr = c.getsockname() while True: a, addr = l.accept() # check that we've got the correct client if addr == caddr: return c, a a.close() except OSError: c.close() raise def find_ready_matching(ready, flag): match = [] for key, events in ready: if events & flag: match.append(key.fileobj) return match class BaseSelectorTestCase: def make_socketpair(self): rd, wr = socketpair() self.addCleanup(rd.close) self.addCleanup(wr.close) return rd, wr def test_register(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertIsInstance(key, selectors.SelectorKey) self.assertEqual(key.fileobj, rd) self.assertEqual(key.fd, rd.fileno()) self.assertEqual(key.events, selectors.EVENT_READ) self.assertEqual(key.data, "data") # register an unknown event self.assertRaises(ValueError, s.register, 0, 999999) # register an invalid FD self.assertRaises(ValueError, s.register, -10, selectors.EVENT_READ) # register twice self.assertRaises(KeyError, s.register, rd, selectors.EVENT_READ) # register the same FD, but with a different object self.assertRaises(KeyError, s.register, rd.fileno(), selectors.EVENT_READ) def test_unregister(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.unregister(rd) # unregister an unknown file obj self.assertRaises(KeyError, s.unregister, 999999) # unregister twice self.assertRaises(KeyError, s.unregister, rd) def test_unregister_after_fd_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(r) s.unregister(w) @unittest.skipUnless(os.name == 'posix', "requires posix") def test_unregister_after_fd_close_and_reuse(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() r, w = rd.fileno(), wr.fileno() s.register(r, selectors.EVENT_READ) s.register(w, selectors.EVENT_WRITE) rd2, wr2 = self.make_socketpair() rd.close() wr.close() os.dup2(rd2.fileno(), r) os.dup2(wr2.fileno(), w) self.addCleanup(os.close, r) self.addCleanup(os.close, w) s.unregister(r) s.unregister(w) def test_unregister_after_socket_close(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) rd.close() wr.close() s.unregister(rd) s.unregister(wr) def test_modify(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ) # modify events key2 = s.modify(rd, selectors.EVENT_WRITE) self.assertNotEqual(key.events, key2.events) self.assertEqual(key2, s.get_key(rd)) s.unregister(rd) # modify data d1 = object() d2 = object() key = s.register(rd, selectors.EVENT_READ, d1) key2 = s.modify(rd, selectors.EVENT_READ, d2) self.assertEqual(key.events, 
key2.events) self.assertNotEqual(key.data, key2.data) self.assertEqual(key2, s.get_key(rd)) self.assertEqual(key2.data, d2) # modify unknown file obj self.assertRaises(KeyError, s.modify, 999999, selectors.EVENT_READ) # modify use a shortcut d3 = object() s.register = unittest.mock.Mock() s.unregister = unittest.mock.Mock() s.modify(rd, selectors.EVENT_READ, d3) self.assertFalse(s.register.called) self.assertFalse(s.unregister.called) def test_modify_unregister(self): # Make sure the fd is unregister()ed in case of error on # modify(): http://bugs.python.org/issue30014 if self.SELECTOR.__name__ == 'EpollSelector': patch = unittest.mock.patch( 'selectors.EpollSelector._selector_cls') elif self.SELECTOR.__name__ == 'PollSelector': patch = unittest.mock.patch( 'selectors.PollSelector._selector_cls') elif self.SELECTOR.__name__ == 'DevpollSelector': patch = unittest.mock.patch( 'selectors.DevpollSelector._selector_cls') else: raise self.skipTest("") with patch as m: m.return_value.modify = unittest.mock.Mock( side_effect=ZeroDivisionError) s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) self.assertEqual(len(s._map), 1) with self.assertRaises(ZeroDivisionError): s.modify(rd, selectors.EVENT_WRITE) self.assertEqual(len(s._map), 0) def test_close(self): s = self.SELECTOR() self.addCleanup(s.close) mapping = s.get_map() rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) s.close() self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) self.assertRaises(KeyError, mapping.__getitem__, rd) self.assertRaises(KeyError, mapping.__getitem__, wr) def test_get_key(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() key = s.register(rd, selectors.EVENT_READ, "data") self.assertEqual(key, s.get_key(rd)) # unknown file obj self.assertRaises(KeyError, s.get_key, 999999) def test_get_map(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() keys = s.get_map() self.assertFalse(keys) self.assertEqual(len(keys), 0) self.assertEqual(list(keys), []) key = s.register(rd, selectors.EVENT_READ, "data") self.assertIn(rd, keys) self.assertEqual(key, keys[rd]) self.assertEqual(len(keys), 1) self.assertEqual(list(keys), [rd.fileno()]) self.assertEqual(list(keys.values()), [key]) # unknown file obj with self.assertRaises(KeyError): keys[999999] # Read-only mapping with self.assertRaises(TypeError): del keys[rd] def test_select(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) wr_key = s.register(wr, selectors.EVENT_WRITE) result = s.select() for key, events in result: self.assertTrue(isinstance(key, selectors.SelectorKey)) self.assertTrue(events) self.assertFalse(events & ~(selectors.EVENT_READ | selectors.EVENT_WRITE)) self.assertEqual([(wr_key, selectors.EVENT_WRITE)], result) def test_context_manager(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() with s as sel: sel.register(rd, selectors.EVENT_READ) sel.register(wr, selectors.EVENT_WRITE) self.assertRaises(RuntimeError, s.get_key, rd) self.assertRaises(RuntimeError, s.get_key, wr) def test_fileno(self): s = self.SELECTOR() self.addCleanup(s.close) if hasattr(s, 'fileno'): fd = s.fileno() self.assertTrue(isinstance(fd, int)) self.assertGreaterEqual(fd, 0) def test_selector(self): s = self.SELECTOR() self.addCleanup(s.close) NUM_SOCKETS = 12 MSG = 
b" This is a test." MSG_LEN = len(MSG) readers = [] writers = [] r2w = {} w2r = {} for i in range(NUM_SOCKETS): rd, wr = self.make_socketpair() s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) readers.append(rd) writers.append(wr) r2w[rd] = wr w2r[wr] = rd bufs = [] while writers: ready = s.select() ready_writers = find_ready_matching(ready, selectors.EVENT_WRITE) if not ready_writers: self.fail("no sockets ready for writing") wr = random.choice(ready_writers) wr.send(MSG) for i in range(10): ready = s.select() ready_readers = find_ready_matching(ready, selectors.EVENT_READ) if ready_readers: break # there might be a delay between the write to the write end and # the read end is reported ready sleep(0.1) else: self.fail("no sockets ready for reading") self.assertEqual([w2r[wr]], ready_readers) rd = ready_readers[0] buf = rd.recv(MSG_LEN) self.assertEqual(len(buf), MSG_LEN) bufs.append(buf) s.unregister(r2w[rd]) s.unregister(rd) writers.remove(r2w[rd]) self.assertEqual(bufs, [MSG] * NUM_SOCKETS) @unittest.skipIf(sys.platform == 'win32', 'select.select() cannot be used with empty fd sets') def test_empty_select(self): # Issue #23009: Make sure EpollSelector.select() works when no FD is # registered. s = self.SELECTOR() self.addCleanup(s.close) self.assertEqual(s.select(timeout=0), []) def test_timeout(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() s.register(wr, selectors.EVENT_WRITE) t = time() self.assertEqual(1, len(s.select(0))) self.assertEqual(1, len(s.select(-1))) self.assertLess(time() - t, 0.5) s.unregister(wr) s.register(rd, selectors.EVENT_READ) t = time() self.assertFalse(s.select(0)) self.assertFalse(s.select(-1)) self.assertLess(time() - t, 0.5) t0 = time() self.assertFalse(s.select(1)) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_exc(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() class InterruptSelect(Exception): pass def handler(*args): raise InterruptSelect orig_alrm_handler = signal.signal(signal.SIGALRM, handler) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal which raises an exception with self.assertRaises(InterruptSelect): s.select(30) # select() was interrupted before the timeout of 30 seconds self.assertLess(time() - t, 5.0) finally: signal.alarm(0) @unittest.skipUnless(hasattr(signal, "alarm"), "signal.alarm() required for this test") def test_select_interrupt_noraise(self): s = self.SELECTOR() self.addCleanup(s.close) rd, wr = self.make_socketpair() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda *args: None) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) try: signal.alarm(1) s.register(rd, selectors.EVENT_READ) t = time() # select() is interrupted by a signal, but the signal handler doesn't # raise an exception, so select() should by retries with a recomputed # timeout self.assertFalse(s.select(1.5)) self.assertGreaterEqual(time() - t, 1.0) finally: signal.alarm(0) class ScalableSelectorMixIn: # see issue #18963 for why it's skipped on older OS X versions @support.requires_mac_ver(10, 5) @unittest.skipUnless(resource, "Test needs resource module") def test_above_fd_setsize(self): # A scalable implementation should have no problem with more 
than # FD_SETSIZE file descriptors. Since we don't know the value, we just # try to set the soft RLIMIT_NOFILE to the hard RLIMIT_NOFILE ceiling. soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) try: resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)) self.addCleanup(resource.setrlimit, resource.RLIMIT_NOFILE, (soft, hard)) NUM_FDS = min(hard, 2**16) except (OSError, ValueError): NUM_FDS = soft # guard for already allocated FDs (stdin, stdout...) NUM_FDS -= 32 s = self.SELECTOR() self.addCleanup(s.close) for i in range(NUM_FDS // 2): try: rd, wr = self.make_socketpair() except OSError: # too many FDs, skip - note that we should only catch EMFILE # here, but apparently *BSD and Solaris can fail upon connect() # or bind() with EADDRNOTAVAIL, so let's be safe self.skipTest("FD limit reached") try: s.register(rd, selectors.EVENT_READ) s.register(wr, selectors.EVENT_WRITE) except OSError as e: if e.errno == errno.ENOSPC: # this can be raised by epoll if we go over # fs.epoll.max_user_watches sysctl self.skipTest("FD limit reached") raise try: fds = s.select() except OSError as e: if e.errno == errno.EINVAL and sys.platform == 'darwin': # unexplainable errors on macOS don't need to fail the test self.skipTest("Invalid argument error calling poll()") raise self.assertEqual(NUM_FDS // 2, len(fds)) class DefaultSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.DefaultSelector class SelectSelectorTestCase(BaseSelectorTestCase, unittest.TestCase): SELECTOR = selectors.SelectSelector @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class PollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'PollSelector', None) @unittest.skipUnless(hasattr(selectors, 'EpollSelector'), "Test needs selectors.EpollSelector") class EpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'EpollSelector', None) def test_register_file(self): # epoll(7) returns EPERM when given a file to watch s = self.SELECTOR() with tempfile.NamedTemporaryFile() as f: with self.assertRaises(IOError): s.register(f, selectors.EVENT_READ) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(f) @unittest.skipUnless(hasattr(selectors, 'KqueueSelector'), "Test needs selectors.KqueueSelector)") class KqueueSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'KqueueSelector', None) def test_register_bad_fd(self): # a file descriptor that's been closed should raise an OSError # with EBADF s = self.SELECTOR() bad_f = support.make_bad_fd() with self.assertRaises(OSError) as cm: s.register(bad_f, selectors.EVENT_READ) self.assertEqual(cm.exception.errno, errno.EBADF) # the SelectorKey has been removed with self.assertRaises(KeyError): s.get_key(bad_f) def test_empty_select_timeout(self): # Issues #23009, #29255: Make sure timeout is applied when no fds # are registered. 
s = self.SELECTOR() self.addCleanup(s.close) t0 = time() self.assertEqual(s.select(1), []) t1 = time() dt = t1 - t0 # Tolerate 2.0 seconds for very slow buildbots self.assertTrue(0.8 <= dt <= 2.0, dt) @unittest.skipUnless(hasattr(selectors, 'DevpollSelector'), "Test needs selectors.DevpollSelector") class DevpollSelectorTestCase(BaseSelectorTestCase, ScalableSelectorMixIn, unittest.TestCase): SELECTOR = getattr(selectors, 'DevpollSelector', None) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_signal.py000066400000000000000000001377731471441230600211620ustar00rootroot00000000000000import errno import inspect import os import random import signal import socket import statistics import subprocess import sys import threading import time import unittest from test import support from test.support.script_helper import assert_python_ok, spawn_python try: import _testcapi except ImportError: _testcapi = None class GenericTests(unittest.TestCase): def test_enums(self): for name in dir(signal): sig = getattr(signal, name) if name in {'SIG_DFL', 'SIG_IGN'}: self.assertIsInstance(sig, signal.Handlers) elif name in {'SIG_BLOCK', 'SIG_UNBLOCK', 'SIG_SETMASK'}: self.assertIsInstance(sig, signal.Sigmasks) elif name.startswith('SIG') and not name.startswith('SIG_'): self.assertIsInstance(sig, signal.Signals) elif name.startswith('CTRL_'): self.assertIsInstance(sig, signal.Signals) self.assertEqual(sys.platform, "win32") def test_functions_module_attr(self): # Issue #27718: If __all__ is not defined all non-builtin functions # should have correct __module__ to be displayed by pydoc. for name in dir(signal): value = getattr(signal, name) if inspect.isroutine(value) and not inspect.isbuiltin(value): self.assertEqual(value.__module__, 'signal') @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class PosixTests(unittest.TestCase): def trivial_signal_handler(self, *args): pass def test_out_of_range_signal_number_raises_error(self): self.assertRaises(ValueError, signal.getsignal, 4242) self.assertRaises(ValueError, signal.signal, 4242, self.trivial_signal_handler) self.assertRaises(ValueError, signal.strsignal, 4242) def test_setting_signal_handler_to_none_raises_error(self): self.assertRaises(TypeError, signal.signal, signal.SIGUSR1, None) def test_getsignal(self): hup = signal.signal(signal.SIGHUP, self.trivial_signal_handler) self.assertIsInstance(hup, signal.Handlers) self.assertEqual(signal.getsignal(signal.SIGHUP), self.trivial_signal_handler) signal.signal(signal.SIGHUP, hup) self.assertEqual(signal.getsignal(signal.SIGHUP), hup) def test_strsignal(self): self.assertIn("Interrupt", signal.strsignal(signal.SIGINT)) self.assertIn("Terminated", signal.strsignal(signal.SIGTERM)) self.assertIn("Hangup", signal.strsignal(signal.SIGHUP)) # Issue 3864, unknown if this affects earlier versions of freebsd also def test_interprocess_signal(self): dirname = os.path.dirname(__file__) script = os.path.join(dirname, 'signalinterproctester.py') assert_python_ok(script) def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertIn(signal.Signals.SIGINT, s) self.assertIn(signal.Signals.SIGALRM, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) @unittest.skipUnless(sys.executable, "sys.executable required.") def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers exit via SIGINT.""" process = subprocess.run( [sys.executable, 
"-c", "import os, signal, time\n" "os.kill(os.getpid(), signal.SIGINT)\n" "for _ in range(999): time.sleep(0.01)"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) self.assertEqual(process.returncode, -signal.SIGINT) # Caveat: The exit code is insufficient to guarantee we actually died # via a signal. POSIX shells do more than look at the 8 bit value. # Writing an automation friendly test of an interactive shell # to confirm that our process died via a SIGINT proved too complex. @unittest.skipUnless(sys.platform == "win32", "Windows specific") class WindowsSignalTests(unittest.TestCase): def test_valid_signals(self): s = signal.valid_signals() self.assertIsInstance(s, set) self.assertGreaterEqual(len(s), 6) self.assertIn(signal.Signals.SIGINT, s) self.assertNotIn(0, s) self.assertNotIn(signal.NSIG, s) self.assertLess(len(s), signal.NSIG) def test_issue9324(self): # Updated for issue #10003, adding SIGBREAK handler = lambda x, y: None checked = set() for sig in (signal.SIGABRT, signal.SIGBREAK, signal.SIGFPE, signal.SIGILL, signal.SIGINT, signal.SIGSEGV, signal.SIGTERM): # Set and then reset a handler for signals that work on windows. # Issue #18396, only for signals without a C-level handler. if signal.getsignal(sig) is not None: signal.signal(sig, signal.signal(sig, handler)) checked.add(sig) # Issue #18396: Ensure the above loop at least tested *something* self.assertTrue(checked) with self.assertRaises(ValueError): signal.signal(-1, handler) with self.assertRaises(ValueError): signal.signal(7, handler) @unittest.skipUnless(sys.executable, "sys.executable required.") def test_keyboard_interrupt_exit_code(self): """KeyboardInterrupt triggers an exit using STATUS_CONTROL_C_EXIT.""" # We don't test via os.kill(os.getpid(), signal.CTRL_C_EVENT) here # as that requires setting up a console control handler in a child # in its own process group. Doable, but quite complicated. 
(see # @eryksun on https://github.com/python/cpython/pull/11862) process = subprocess.run( [sys.executable, "-c", "raise KeyboardInterrupt"], stderr=subprocess.PIPE) self.assertIn(b"KeyboardInterrupt", process.stderr) STATUS_CONTROL_C_EXIT = 0xC000013A self.assertEqual(process.returncode, STATUS_CONTROL_C_EXIT) class WakeupFDTests(unittest.TestCase): def test_invalid_call(self): # First parameter is positional-only with self.assertRaises(TypeError): signal.set_wakeup_fd(signum=signal.SIGINT) # warn_on_full_buffer is a keyword-only parameter with self.assertRaises(TypeError): signal.set_wakeup_fd(signal.SIGINT, False) def test_invalid_fd(self): fd = support.make_bad_fd() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) def test_invalid_socket(self): sock = socket.socket() fd = sock.fileno() sock.close() self.assertRaises((ValueError, OSError), signal.set_wakeup_fd, fd) def test_set_wakeup_fd_result(self): r1, w1 = os.pipe() self.addCleanup(os.close, r1) self.addCleanup(os.close, w1) r2, w2 = os.pipe() self.addCleanup(os.close, r2) self.addCleanup(os.close, w2) if hasattr(os, 'set_blocking'): os.set_blocking(w1, False) os.set_blocking(w2, False) signal.set_wakeup_fd(w1) self.assertEqual(signal.set_wakeup_fd(w2), w1) self.assertEqual(signal.set_wakeup_fd(-1), w2) self.assertEqual(signal.set_wakeup_fd(-1), -1) def test_set_wakeup_fd_socket_result(self): sock1 = socket.socket() self.addCleanup(sock1.close) sock1.setblocking(False) fd1 = sock1.fileno() sock2 = socket.socket() self.addCleanup(sock2.close) sock2.setblocking(False) fd2 = sock2.fileno() signal.set_wakeup_fd(fd1) self.assertEqual(signal.set_wakeup_fd(fd2), fd1) self.assertEqual(signal.set_wakeup_fd(-1), fd2) self.assertEqual(signal.set_wakeup_fd(-1), -1) # On Windows, files are always blocking and Windows does not provide a # function to test if a socket is in non-blocking mode. @unittest.skipIf(sys.platform == "win32", "tests specific to POSIX") def test_set_wakeup_fd_blocking(self): rfd, wfd = os.pipe() self.addCleanup(os.close, rfd) self.addCleanup(os.close, wfd) # fd must be non-blocking os.set_blocking(wfd, True) with self.assertRaises(ValueError) as cm: signal.set_wakeup_fd(wfd) self.assertEqual(str(cm.exception), "the fd %s must be in non-blocking mode" % wfd) # non-blocking is ok os.set_blocking(wfd, False) signal.set_wakeup_fd(wfd) signal.set_wakeup_fd(-1) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class WakeupSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def check_wakeup(self, test_body, *signals, ordered=True): # use a subprocess to have only one thread code = """if 1: import _testcapi import os import signal import struct signals = {!r} def handler(signum, frame): pass def check_signum(signals): data = os.read(read, len(signals)+1) raised = struct.unpack('%uB' % len(data), data) if not {!r}: raised = set(raised) signals = set(signals) if raised != signals: raise Exception("%r != %r" % (raised, signals)) {} signal.signal(signal.SIGALRM, handler) read, write = os.pipe() os.set_blocking(write, False) signal.set_wakeup_fd(write) test() check_signum(signals) os.close(read) os.close(write) """.format(tuple(map(int, signals)), ordered, test_body) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_wakeup_write_error(self): # Issue #16105: write() errors in the C signal handler should not # pass silently. # Use a subprocess to have only one thread. 
code = """if 1: import _testcapi import errno import os import signal import sys from test.support import captured_stderr def handler(signum, frame): 1/0 signal.signal(signal.SIGALRM, handler) r, w = os.pipe() os.set_blocking(r, False) # Set wakeup_fd a read-only file descriptor to trigger the error signal.set_wakeup_fd(r) try: with captured_stderr() as err: signal.raise_signal(signal.SIGALRM) except ZeroDivisionError: # An ignored exception should have been printed out on stderr err = err.getvalue() if ('Exception ignored when trying to write to the signal wakeup fd' not in err): raise AssertionError(err) if ('OSError: [Errno %d]' % errno.EBADF) not in err: raise AssertionError(err) else: raise AssertionError("ZeroDivisionError not raised") os.close(r) os.close(w) """ r, w = os.pipe() try: os.write(r, b'x') except OSError: pass else: self.skipTest("OS doesn't report write() error on the read end of a pipe") finally: os.close(r) os.close(w) assert_python_ok('-c', code) def test_wakeup_fd_early(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) # We attempt to get a signal during the sleep, # before select is called try: select.select([], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") before_time = time.monotonic() select.select([read], [], [], TIMEOUT_FULL) after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_wakeup_fd_during(self): self.check_wakeup("""def test(): import select import time TIMEOUT_FULL = 10 TIMEOUT_HALF = 5 class InterruptSelect(Exception): pass def handler(signum, frame): raise InterruptSelect signal.signal(signal.SIGALRM, handler) signal.alarm(1) before_time = time.monotonic() # We attempt to get a signal during the select call try: select.select([read], [], [], TIMEOUT_FULL) except InterruptSelect: pass else: raise Exception("select() was not interrupted") after_time = time.monotonic() dt = after_time - before_time if dt >= TIMEOUT_HALF: raise Exception("%s >= %s" % (dt, TIMEOUT_HALF)) """, signal.SIGALRM) def test_signum(self): self.check_wakeup("""def test(): signal.signal(signal.SIGUSR1, handler) signal.raise_signal(signal.SIGUSR1) signal.raise_signal(signal.SIGALRM) """, signal.SIGUSR1, signal.SIGALRM) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pending(self): self.check_wakeup("""def test(): signum1 = signal.SIGUSR1 signum2 = signal.SIGUSR2 signal.signal(signum1, handler) signal.signal(signum2, handler) signal.pthread_sigmask(signal.SIG_BLOCK, (signum1, signum2)) signal.raise_signal(signum1) signal.raise_signal(signum2) # Unblocking the 2 signals calls the C signal handler twice signal.pthread_sigmask(signal.SIG_UNBLOCK, (signum1, signum2)) """, signal.SIGUSR1, signal.SIGUSR2, ordered=False) @unittest.skipUnless(hasattr(socket, 'socketpair'), 'need socket.socketpair') class WakeupSocketSignalTests(unittest.TestCase): @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_socket(self): # use a subprocess to have only one thread code = """if 1: import signal import socket import struct import _testcapi signum = signal.SIGINT signals = (signum,) def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() 
write.setblocking(False) signal.set_wakeup_fd(write.fileno()) signal.raise_signal(signum) data = read.recv(1) if not data: raise Exception("no signum written") raised = struct.unpack('B', data) if raised != signals: raise Exception("%r != %r" % (raised, signals)) read.close() write.close() """ assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_send_error(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() read.setblocking(False) write.setblocking(False) signal.set_wakeup_fd(write.fileno()) # Close sockets: send() will fail read.close() write.close() with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if ('Exception ignored when trying to {action} to the signal wakeup fd' not in err): raise AssertionError(err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(_testcapi is None, 'need _testcapi') def test_warn_on_full_buffer(self): # Use a subprocess to have only one thread. if os.name == 'nt': action = 'send' else: action = 'write' code = """if 1: import errno import signal import socket import sys import time import _testcapi from test.support import captured_stderr signum = signal.SIGINT # This handler will be called, but we intentionally won't read from # the wakeup fd. def handler(signum, frame): pass signal.signal(signum, handler) read, write = socket.socketpair() # Fill the socketpair buffer if sys.platform == 'win32': # bpo-34130: On Windows, sometimes non-blocking send fails to fill # the full socketpair buffer, so use a timeout of 50 ms instead. write.settimeout(0.050) else: write.setblocking(False) # Start with large chunk size to reduce the # number of send needed to fill the buffer. written = 0 for chunk_size in (2 ** 16, 2 ** 8, 1): chunk = b"x" * chunk_size try: while True: write.send(chunk) written += chunk_size except (BlockingIOError, socket.timeout): pass print(f"%s bytes written into the socketpair" % written, flush=True) write.setblocking(False) try: write.send(b"x") except BlockingIOError: # The socketpair buffer seems full pass else: raise AssertionError("%s bytes failed to fill the socketpair " "buffer" % written) # By default, we get a warning when a signal arrives msg = ('Exception ignored when trying to {action} ' 'to the signal wakeup fd') signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("first set_wakeup_fd() test failed, " "stderr: %r" % err) # And also if warn_on_full_buffer=True signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=True) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("set_wakeup_fd(warn_on_full_buffer=True) " "test failed, stderr: %r" % err) # But not if warn_on_full_buffer=False signal.set_wakeup_fd(write.fileno(), warn_on_full_buffer=False) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if err != "": raise AssertionError("set_wakeup_fd(warn_on_full_buffer=False) " "test failed, stderr: %r" % err) # And then check the default again, to make sure warn_on_full_buffer # settings don't leak across calls. 
signal.set_wakeup_fd(write.fileno()) with captured_stderr() as err: signal.raise_signal(signum) err = err.getvalue() if msg not in err: raise AssertionError("second set_wakeup_fd() test failed, " "stderr: %r" % err) """.format(action=action) assert_python_ok('-c', code) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class SiginterruptTest(unittest.TestCase): def readpipe_interrupted(self, interrupt): """Perform a read during which a signal will arrive. Return True if the read is interrupted by the signal and raises an exception. Return False if it returns normally. """ # use a subprocess to have only one thread, to have a timeout on the # blocking read and to not touch signal handling in this process code = """if 1: import errno import os import signal import sys interrupt = %r r, w = os.pipe() def handler(signum, frame): 1 / 0 signal.signal(signal.SIGALRM, handler) if interrupt is not None: signal.siginterrupt(signal.SIGALRM, interrupt) print("ready") sys.stdout.flush() # run the test twice try: for loop in range(2): # send a SIGALRM in a second (during the read) signal.alarm(1) try: # blocking call: read from a pipe without data os.read(r, 1) except ZeroDivisionError: pass else: sys.exit(2) sys.exit(3) finally: os.close(r) os.close(w) """ % (interrupt,) with spawn_python('-c', code) as process: try: # wait until the child process is loaded and has started first_line = process.stdout.readline() stdout, stderr = process.communicate(timeout=support.SHORT_TIMEOUT) except subprocess.TimeoutExpired: process.kill() return False else: stdout = first_line + stdout exitcode = process.wait() if exitcode not in (2, 3): raise Exception("Child error (exit code %s): %r" % (exitcode, stdout)) return (exitcode == 3) def test_without_siginterrupt(self): # If a signal handler is installed and siginterrupt is not called # at all, when that signal arrives, it interrupts a syscall that's in # progress. interrupted = self.readpipe_interrupted(None) self.assertTrue(interrupted) def test_siginterrupt_on(self): # If a signal handler is installed and siginterrupt is called with # a true value for the second argument, when that signal arrives, it # interrupts a syscall that's in progress. interrupted = self.readpipe_interrupted(True) self.assertTrue(interrupted) def test_siginterrupt_off(self): # If a signal handler is installed and siginterrupt is called with # a false value for the second argument, when that signal arrives, it # does not interrupt a syscall that's in progress. interrupted = self.readpipe_interrupted(False) self.assertFalse(interrupted) @unittest.skipIf(sys.platform == "win32", "Not valid on Windows") class ItimerTest(unittest.TestCase): def setUp(self): self.hndl_called = False self.hndl_count = 0 self.itimer = None self.old_alarm = signal.signal(signal.SIGALRM, self.sig_alrm) def tearDown(self): signal.signal(signal.SIGALRM, self.old_alarm) if self.itimer is not None: # test_itimer_exc doesn't change this attr # just ensure that itimer is stopped signal.setitimer(self.itimer, 0) def sig_alrm(self, *args): self.hndl_called = True def sig_vtalrm(self, *args): self.hndl_called = True if self.hndl_count > 3: # it shouldn't be here, because it should have been disabled. 
raise signal.ItimerError("setitimer didn't disable ITIMER_VIRTUAL " "timer.") elif self.hndl_count == 3: # disable ITIMER_VIRTUAL, this function shouldn't be called anymore signal.setitimer(signal.ITIMER_VIRTUAL, 0) self.hndl_count += 1 def sig_prof(self, *args): self.hndl_called = True signal.setitimer(signal.ITIMER_PROF, 0) def test_itimer_exc(self): # XXX I'm assuming -1 is an invalid itimer, but maybe some platform # defines it ? self.assertRaises(signal.ItimerError, signal.setitimer, -1, 0) # Negative times are treated as zero on some platforms. if 0: self.assertRaises(signal.ItimerError, signal.setitimer, signal.ITIMER_REAL, -1) def test_itimer_real(self): self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1.0) signal.pause() self.assertEqual(self.hndl_called, True) # Issue 3864, unknown if this affects earlier versions of freebsd also @unittest.skipIf(sys.platform in ('netbsd5',), 'itimer not reliable (does not mix well with threading) on some BSDs.') def test_itimer_virtual(self): self.itimer = signal.ITIMER_VIRTUAL signal.signal(signal.SIGVTALRM, self.sig_vtalrm) signal.setitimer(self.itimer, 0.3, 0.2) start_time = time.monotonic() while time.monotonic() - start_time < 60.0: # use up some virtual time by doing real work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): break # sig_vtalrm handler stopped this itimer else: # Issue 8424 self.skipTest("timeout: likely cause: machine too slow or load too " "high") # virtual itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_itimer_prof(self): self.itimer = signal.ITIMER_PROF signal.signal(signal.SIGPROF, self.sig_prof) signal.setitimer(self.itimer, 0.2, 0.2) start_time = time.monotonic() while time.monotonic() - start_time < 60.0: # do some work _ = pow(12345, 67890, 10000019) if signal.getitimer(self.itimer) == (0.0, 0.0): break # sig_prof handler stopped this itimer else: # Issue 8424 self.skipTest("timeout: likely cause: machine too slow or load too " "high") # profiling itimer should be (0.0, 0.0) now self.assertEqual(signal.getitimer(self.itimer), (0.0, 0.0)) # and the handler should have been called self.assertEqual(self.hndl_called, True) def test_setitimer_tiny(self): # bpo-30807: C setitimer() takes a microsecond-resolution interval. # Check that float -> timeval conversion doesn't round # the interval down to zero, which would disable the timer. self.itimer = signal.ITIMER_REAL signal.setitimer(self.itimer, 1e-6) time.sleep(1) self.assertEqual(self.hndl_called, True) class PendingSignalsTests(unittest.TestCase): """ Test pthread_sigmask(), pthread_kill(), sigpending() and sigwait() functions. 
""" @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending_empty(self): self.assertEqual(signal.sigpending(), set()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') @unittest.skipUnless(hasattr(signal, 'sigpending'), 'need signal.sigpending()') def test_sigpending(self): code = """if 1: import os import signal def handler(signum, frame): 1/0 signum = signal.SIGUSR1 signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) os.kill(os.getpid(), signum) pending = signal.sigpending() for sig in pending: assert isinstance(sig, signal.Signals), repr(pending) if pending != {signum}: raise Exception('%s != {%s}' % (pending, signum)) try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') def test_pthread_kill(self): code = """if 1: import signal import threading import sys signum = signal.SIGUSR1 def handler(signum, frame): 1/0 signal.signal(signum, handler) tid = threading.get_ident() try: signal.pthread_kill(tid, signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def wait_helper(self, blocked, test): """ test: body of the "def test(signum):" function. blocked: number of the blocked signal """ code = '''if 1: import signal import sys from signal import Signals def handler(signum, frame): 1/0 %s blocked = %s signum = signal.SIGALRM # child: block and wait the signal try: signal.signal(signum, handler) signal.pthread_sigmask(signal.SIG_BLOCK, [blocked]) # Do the tests test(signum) # The handler must not be called on unblock try: signal.pthread_sigmask(signal.SIG_UNBLOCK, [blocked]) except ZeroDivisionError: print("the signal handler has been called", file=sys.stderr) sys.exit(1) except BaseException as err: print("error: {}".format(err), file=sys.stderr) sys.stderr.flush() sys.exit(1) ''' % (test.strip(), blocked) # sig*wait* must be called with the signal blocked: since the current # process might have several threads running, use a subprocess to have # a single thread. 
assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') def test_sigwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) received = signal.sigwait([signum]) assert isinstance(received, signal.Signals), received if received != signum: raise Exception('received %s, not %s' % (received, signum)) ''') @unittest.skipUnless(hasattr(signal, 'sigwaitinfo'), 'need signal.sigwaitinfo()') def test_sigwaitinfo(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigwaitinfo([signum]) if info.si_signo != signum: raise Exception("info.si_signo != %s" % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): signal.alarm(1) info = signal.sigtimedwait([signum], 10.1000) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_poll(self): # check that polling with sigtimedwait works self.wait_helper(signal.SIGALRM, ''' def test(signum): import os os.kill(os.getpid(), signum) info = signal.sigtimedwait([signum], 0) if info.si_signo != signum: raise Exception('info.si_signo != %s' % signum) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_timeout(self): self.wait_helper(signal.SIGALRM, ''' def test(signum): received = signal.sigtimedwait([signum], 1.0) if received is not None: raise Exception("received=%r" % (received,)) ''') @unittest.skipUnless(hasattr(signal, 'sigtimedwait'), 'need signal.sigtimedwait()') def test_sigtimedwait_negative_timeout(self): signum = signal.SIGALRM self.assertRaises(ValueError, signal.sigtimedwait, [signum], -1.0) @unittest.skipUnless(hasattr(signal, 'sigwait'), 'need signal.sigwait()') @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_sigwait_thread(self): # Check that calling sigwait() from a thread doesn't suspend the whole # process. A new interpreter is spawned to avoid problems when mixing # threads and fork(): only async-safe functions are allowed between # fork() and exec(). 
assert_python_ok("-c", """if True: import os, threading, sys, time, signal # the default handler terminates the process signum = signal.SIGUSR1 def kill_later(): # wait until the main thread is waiting in sigwait() time.sleep(1) os.kill(os.getpid(), signum) # the signal must be blocked by all the threads signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) killer = threading.Thread(target=kill_later) killer.start() received = signal.sigwait([signum]) if received != signum: print("sigwait() received %s, not %s" % (received, signum), file=sys.stderr) sys.exit(1) killer.join() # unblock the signal, which should have been cleared by sigwait() signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) """) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_arguments(self): self.assertRaises(TypeError, signal.pthread_sigmask) self.assertRaises(TypeError, signal.pthread_sigmask, 1) self.assertRaises(TypeError, signal.pthread_sigmask, 1, 2, 3) self.assertRaises(OSError, signal.pthread_sigmask, 1700, []) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [signal.NSIG]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [0]) with self.assertRaises(ValueError): signal.pthread_sigmask(signal.SIG_BLOCK, [1<<1000]) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask_valid_signals(self): s = signal.pthread_sigmask(signal.SIG_BLOCK, signal.valid_signals()) self.addCleanup(signal.pthread_sigmask, signal.SIG_SETMASK, s) # Get current blocked set s = signal.pthread_sigmask(signal.SIG_UNBLOCK, signal.valid_signals()) self.assertLessEqual(s, signal.valid_signals()) @unittest.skipUnless(hasattr(signal, 'pthread_sigmask'), 'need signal.pthread_sigmask()') def test_pthread_sigmask(self): code = """if 1: import signal import os; import threading def handler(signum, frame): 1/0 def kill(signum): os.kill(os.getpid(), signum) def check_mask(mask): for sig in mask: assert isinstance(sig, signal.Signals), repr(sig) def read_sigmask(): sigmask = signal.pthread_sigmask(signal.SIG_BLOCK, []) check_mask(sigmask) return sigmask signum = signal.SIGUSR1 # Install our signal handler old_handler = signal.signal(signum, handler) # Unblock SIGUSR1 (and copy the old mask) to test our signal handler old_mask = signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) check_mask(old_mask) try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Block and then raise SIGUSR1. 
The signal is blocked: the signal # handler is not called, and the signal is now pending mask = signal.pthread_sigmask(signal.SIG_BLOCK, [signum]) check_mask(mask) kill(signum) # Check the new mask blocked = read_sigmask() check_mask(blocked) if signum not in blocked: raise Exception("%s not in %s" % (signum, blocked)) if old_mask ^ blocked != {signum}: raise Exception("%s ^ %s != {%s}" % (old_mask, blocked, signum)) # Unblock SIGUSR1 try: # unblock the pending signal calls immediately the signal handler signal.pthread_sigmask(signal.SIG_UNBLOCK, [signum]) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") try: kill(signum) except ZeroDivisionError: pass else: raise Exception("ZeroDivisionError not raised") # Check the new mask unblocked = read_sigmask() if signum in unblocked: raise Exception("%s in %s" % (signum, unblocked)) if blocked ^ unblocked != {signum}: raise Exception("%s ^ %s != {%s}" % (blocked, unblocked, signum)) if old_mask != unblocked: raise Exception("%s != %s" % (old_mask, unblocked)) """ assert_python_ok('-c', code) @unittest.skipUnless(hasattr(signal, 'pthread_kill'), 'need signal.pthread_kill()') def test_pthread_kill_main_thread(self): # Test that a signal can be sent to the main thread with pthread_kill() # before any other thread has been created (see issue #12392). code = """if True: import threading import signal import sys def handler(signum, frame): sys.exit(3) signal.signal(signal.SIGUSR1, handler) signal.pthread_kill(threading.get_ident(), signal.SIGUSR1) sys.exit(2) """ with spawn_python('-c', code) as process: stdout, stderr = process.communicate() exitcode = process.wait() if exitcode != 3: raise Exception("Child error (exit code %s): %s" % (exitcode, stdout)) class StressTest(unittest.TestCase): """ Stress signal delivery, especially when a signal arrives in the middle of recomputing the signal state or executing previously tripped signal handlers. """ def setsig(self, signum, handler): old_handler = signal.signal(signum, handler) self.addCleanup(signal.signal, signum, old_handler) def measure_itimer_resolution(self): N = 20 times = [] def handler(signum=None, frame=None): if len(times) < N: times.append(time.perf_counter()) # 1 µs is the smallest possible timer interval, # we want to measure what the concrete duration # will be on this platform signal.setitimer(signal.ITIMER_REAL, 1e-6) self.addCleanup(signal.setitimer, signal.ITIMER_REAL, 0) self.setsig(signal.SIGALRM, handler) handler() while len(times) < N: time.sleep(1e-3) durations = [times[i+1] - times[i] for i in range(len(times) - 1)] med = statistics.median(durations) if support.verbose: print("detected median itimer() resolution: %.6f s." % (med,)) return med def decide_itimer_count(self): # Some systems have poor setitimer() resolution (for example # measured around 20 ms. on FreeBSD 9), so decide on a reasonable # number of sequential timers based on that. reso = self.measure_itimer_resolution() if reso <= 1e-4: return 10000 elif reso <= 1e-2: return 100 else: self.skipTest("detected itimer resolution (%.3f s.) too high " "(> 10 ms.) on this platform (or system too busy)" % (reso,)) @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_dependent(self): """ This test uses dependent signal handlers. """ N = self.decide_itimer_count() sigs = [] def first_handler(signum, frame): # 1e-6 is the minimum non-zero value for `setitimer()`. # Choose a random delay so as to improve chances of # triggering a race condition. 
Ideally the signal is received # when inside critical signal-handling routines such as # Py_MakePendingCalls(). signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) def second_handler(signum=None, frame=None): sigs.append(signum) # Here on Linux, SIGPROF > SIGALRM > SIGUSR1. By using both # ascending and descending sequences (SIGUSR1 then SIGALRM, # SIGPROF then SIGALRM), we maximize chances of hitting a bug. self.setsig(signal.SIGPROF, first_handler) self.setsig(signal.SIGUSR1, first_handler) self.setsig(signal.SIGALRM, second_handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: os.kill(os.getpid(), signal.SIGPROF) expected_sigs += 1 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 1 while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "setitimer"), "test needs setitimer()") def test_stress_delivery_simultaneous(self): """ This test uses simultaneous signal handlers. """ N = self.decide_itimer_count() sigs = [] def handler(signum, frame): sigs.append(signum) self.setsig(signal.SIGUSR1, handler) self.setsig(signal.SIGALRM, handler) # for ITIMER_REAL expected_sigs = 0 deadline = time.monotonic() + support.SHORT_TIMEOUT while expected_sigs < N: # Hopefully the SIGALRM will be received somewhere during # initial processing of SIGUSR1. signal.setitimer(signal.ITIMER_REAL, 1e-6 + random.random() * 1e-5) os.kill(os.getpid(), signal.SIGUSR1) expected_sigs += 2 # Wait for handlers to run to avoid signal coalescing while len(sigs) < expected_sigs and time.monotonic() < deadline: time.sleep(1e-5) # All ITIMER_REAL signals should have been delivered to the # Python handler self.assertEqual(len(sigs), N, "Some signals were lost") @unittest.skipUnless(hasattr(signal, "SIGUSR1"), "test needs SIGUSR1") def test_stress_modifying_handlers(self): # bpo-43406: race condition between trip_signal() and signal.signal signum = signal.SIGUSR1 num_sent_signals = 0 num_received_signals = 0 do_stop = False def custom_handler(signum, frame): nonlocal num_received_signals num_received_signals += 1 def set_interrupts(): nonlocal num_sent_signals while not do_stop: signal.raise_signal(signum) num_sent_signals += 1 def cycle_handlers(): while num_sent_signals < 100: for i in range(20000): # Cycle between a Python-defined and a non-Python handler for handler in [custom_handler, signal.SIG_IGN]: signal.signal(signum, handler) old_handler = signal.signal(signum, custom_handler) self.addCleanup(signal.signal, signum, old_handler) t = threading.Thread(target=set_interrupts) try: ignored = False with support.catch_unraisable_exception() as cm: t.start() cycle_handlers() do_stop = True t.join() if cm.unraisable is not None: # An unraisable exception may be printed out when # a signal is ignored due to the aforementioned # race condition, check it. self.assertIsInstance(cm.unraisable.exc_value, OSError) self.assertIn( f"Signal {signum} ignored due to race condition", str(cm.unraisable.exc_value)) ignored = True # bpo-43406: Even if it is unlikely, it's technically possible that # all signals were ignored because of race conditions. 
if not ignored: # Sanity check that some signals were received, but not all self.assertGreater(num_received_signals, 0) self.assertLess(num_received_signals, num_sent_signals) finally: do_stop = True t.join() class RaiseSignalTest(unittest.TestCase): def test_sigint(self): with self.assertRaises(KeyboardInterrupt): signal.raise_signal(signal.SIGINT) @unittest.skipIf(sys.platform != "win32", "Windows specific test") def test_invalid_argument(self): try: SIGHUP = 1 # not supported on win32 signal.raise_signal(SIGHUP) self.fail("OSError (Invalid argument) expected") except OSError as e: if e.errno == errno.EINVAL: pass else: raise def test_handler(self): is_ok = False def handler(a, b): nonlocal is_ok is_ok = True old_signal = signal.signal(signal.SIGINT, handler) self.addCleanup(signal.signal, signal.SIGINT, old_signal) signal.raise_signal(signal.SIGINT) self.assertTrue(is_ok) class PidfdSignalTest(unittest.TestCase): @unittest.skipUnless( hasattr(signal, "pidfd_send_signal"), "pidfd support not built in", ) def test_pidfd_send_signal(self): with self.assertRaises(OSError) as cm: signal.pidfd_send_signal(0, signal.SIGINT) if cm.exception.errno == errno.ENOSYS: self.skipTest("kernel does not support pidfds") elif cm.exception.errno == errno.EPERM: self.skipTest("Not enough privileges to use pidfs") self.assertEqual(cm.exception.errno, errno.EBADF) my_pidfd = os.open(f'/proc/{os.getpid()}', os.O_DIRECTORY) self.addCleanup(os.close, my_pidfd) with self.assertRaisesRegex(TypeError, "^siginfo must be None$"): signal.pidfd_send_signal(my_pidfd, signal.SIGINT, object(), 0) with self.assertRaises(KeyboardInterrupt): signal.pidfd_send_signal(my_pidfd, signal.SIGINT) def tearDownModule(): support.reap_children() if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_smtpd.py000066400000000000000000001205131471441230600210140ustar00rootroot00000000000000import unittest import textwrap from test import support, mock_socket from test.support import socket_helper import socket import io import smtpd import asyncore class DummyServer(smtpd.SMTPServer): def __init__(self, *args, **kwargs): smtpd.SMTPServer.__init__(self, *args, **kwargs) self.messages = [] if self._decode_data: self.return_status = 'return status' else: self.return_status = b'return status' def process_message(self, peer, mailfrom, rcpttos, data, **kw): self.messages.append((peer, mailfrom, rcpttos, data)) if data == self.return_status: return '250 Okish' if 'mail_options' in kw and 'SMTPUTF8' in kw['mail_options']: return '250 SMTPUTF8 message okish' class DummyDispatcherBroken(Exception): pass class BrokenDummyServer(DummyServer): def listen(self, num): raise DummyDispatcherBroken() class SMTPDServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def test_process_message_unimplemented(self): server = smtpd.SMTPServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) def write_line(line): channel.socket.queue_recv(line) channel.handle_read() write_line(b'HELO example') write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') self.assertRaises(NotImplementedError, write_line, b'spam\r\n.\r\n') def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPServer, (socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = 
socket class DebuggingServerTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def send_data(self, channel, data, enable_SMTPUTF8=False): def write_line(line): channel.socket.queue_recv(line) channel.handle_read() write_line(b'EHLO example') if enable_SMTPUTF8: write_line(b'MAIL From:eggs@example BODY=8BITMIME SMTPUTF8') else: write_line(b'MAIL From:eggs@example') write_line(b'RCPT To:spam@example') write_line(b'DATA') write_line(data) write_line(b'.') def test_process_message_with_decode_data_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nhello\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- From: test X-Peer: peer-address hello ------------ END MESSAGE ------------ """)) def test_process_message_with_decode_data_false(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n') stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def test_process_SMTPUTF8_message_with_enable_SMTPUTF8_true(self): server = smtpd.DebuggingServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) with support.captured_stdout() as s: self.send_data(channel, b'From: test\n\nh\xc3\xa9llo\xff\n', enable_SMTPUTF8=True) stdout = s.getvalue() self.assertEqual(stdout, textwrap.dedent("""\ ---------- MESSAGE FOLLOWS ---------- mail options: ['BODY=8BITMIME', 'SMTPUTF8'] b'From: test' b'X-Peer: peer-address' b'' b'h\\xc3\\xa9llo\\xff' ------------ END MESSAGE ------------ """)) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket class TestFamilyDetection(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") def test_socket_uses_IPv6(self): server = smtpd.SMTPServer((socket_helper.HOSTv6, 0), (socket_helper.HOSTv4, 0)) self.assertEqual(server.socket.family, socket.AF_INET6) def test_socket_uses_IPv4(self): server = smtpd.SMTPServer((socket_helper.HOSTv4, 0), (socket_helper.HOSTv6, 0)) self.assertEqual(server.socket.family, socket.AF_INET) class TestRcptOptionParsing(unittest.TestCase): error_response = (b'555 RCPT TO parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = 
asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_params_rejected(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: foo=bar') self.assertEqual(channel.socket.last, self.error_response) def test_nothing_accepted(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') self.write_line(channel, b'MAIL from: size=20') self.write_line(channel, b'RCPT to: ') self.assertEqual(channel.socket.last, b'250 OK\r\n') class TestMailOptionParsing(unittest.TestCase): error_response = (b'555 MAIL FROM parameters not recognized or not ' b'implemented\r\n') def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, channel, line): channel.socket.queue_recv(line) channel.handle_read() def test_with_decode_data_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, decode_data=True) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', b'MAIL from: size=20 BODY=UNKNOWN', b'MAIL from: size=20 body=8bitmime', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line(channel, b'MAIL from: size=20') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_decode_data_false(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr) self.write_line(channel, b'EHLO example') for line in [ b'MAIL from: size=20 SMTPUTF8', b'MAIL from: size=20 SMTPUTF8 BODY=8BITMIME', ]: self.write_line(channel, line) self.assertEqual(channel.socket.last, self.error_response) self.write_line( channel, b'MAIL from: size=20 SMTPUTF8 BODY=UNKNOWN') self.assertEqual( channel.socket.last, b'501 Error: BODY can only be one of 7BIT, 8BITMIME\r\n') self.write_line( channel, b'MAIL from: size=20 body=8bitmime') self.assertEqual(channel.socket.last, b'250 OK\r\n') def test_with_enable_smtputf8_true(self): server = DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = server.accept() channel = smtpd.SMTPChannel(server, conn, addr, enable_SMTPUTF8=True) self.write_line(channel, b'EHLO example') self.write_line( channel, b'MAIL from: size=20 body=8bitmime smtputf8') self.assertEqual(channel.socket.last, b'250 OK\r\n') class SMTPDChannelTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = 
smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_broken_connect(self): self.assertRaises( DummyDispatcherBroken, BrokenDummyServer, (socket_helper.HOST, 0), ('b', 0), decode_data=True) def test_decode_data_and_enable_SMTPUTF8_raises(self): self.assertRaises( ValueError, smtpd.SMTPChannel, self.server, self.channel.conn, self.channel.addr, enable_SMTPUTF8=True, decode_data=True) def test_server_accept(self): self.server.handle_accept() def test_missing_data(self): self.write_line(b'') self.assertEqual(self.channel.socket.last, b'500 Error: bad syntax\r\n') def test_EHLO(self): self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'250 HELP\r\n') def test_EHLO_bad_syntax(self): self.write_line(b'EHLO') self.assertEqual(self.channel.socket.last, b'501 Syntax: EHLO hostname\r\n') def test_EHLO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_EHLO_HELO_duplicate(self): self.write_line(b'EHLO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO(self): name = smtpd.socket.getfqdn() self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, '250 {}\r\n'.format(name).encode('ascii')) def test_HELO_EHLO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'EHLO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELP(self): self.write_line(b'HELP') self.assertEqual(self.channel.socket.last, b'250 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELP_command(self): self.write_line(b'HELP MAIL') self.assertEqual(self.channel.socket.last, b'250 Syntax: MAIL FROM:
\r\n') def test_HELP_command_unknown(self): self.write_line(b'HELP SPAM') self.assertEqual(self.channel.socket.last, b'501 Supported commands: EHLO HELO MAIL RCPT ' + \ b'DATA RSET NOOP QUIT VRFY\r\n') def test_HELO_bad_syntax(self): self.write_line(b'HELO') self.assertEqual(self.channel.socket.last, b'501 Syntax: HELO hostname\r\n') def test_HELO_duplicate(self): self.write_line(b'HELO example') self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'503 Duplicate HELO/EHLO\r\n') def test_HELO_parameter_rejected_when_extensions_not_enabled(self): self.extended_smtp = False self.write_line(b'HELO example') self.write_line(b'MAIL from: SIZE=1234') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_allows_space_after_colon(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_extended_MAIL_allows_space_after_colon(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: size=20') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP(self): self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_HELO_NOOP(self): self.write_line(b'HELO example') self.write_line(b'NOOP') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_NOOP_bad_syntax(self): self.write_line(b'NOOP hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: NOOP\r\n') def test_QUIT(self): self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_HELO_QUIT(self): self.write_line(b'HELO example') self.write_line(b'QUIT') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_QUIT_arg_ignored(self): self.write_line(b'QUIT bye bye') self.assertEqual(self.channel.socket.last, b'221 Bye\r\n') def test_bad_state(self): self.channel.smtp_state = 'BAD STATE' self.write_line(b'HELO example') self.assertEqual(self.channel.socket.last, b'451 Internal confusion\r\n') def test_command_too_long(self): self.write_line(b'HELO example') self.write_line(b'MAIL from: ' + b'a' * self.channel.command_size_limit + b'@example') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_limit_extended_with_SIZE(self): self.write_line(b'EHLO example') fill_len = self.channel.command_size_limit - len('MAIL from:<@example>') self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL from:<' + b'a' * (fill_len + 26) + b'@example> SIZE=1234') self.assertEqual(self.channel.socket.last, b'500 Error: line too long\r\n') def test_MAIL_command_rejects_SMTPUTF8_by_default(self): self.write_line(b'EHLO example') self.write_line( b'MAIL from: BODY=8BITMIME SMTPUTF8') self.assertEqual(self.channel.socket.last[0:1], b'5') def test_data_longer_than_default_data_size_limit(self): # Hack the default so we don't have to generate so much data. self.channel.data_size_limit = 1048 self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'A' * self.channel.data_size_limit + b'A\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') def test_MAIL_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=512') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_invalid_size_parameter(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=invalid') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_RCPT_unknown_parameters(self): self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: ham=green') self.assertEqual(self.channel.socket.last, b'555 MAIL FROM parameters not recognized or not implemented\r\n') self.write_line(b'MAIL FROM:') self.write_line(b'RCPT TO: ham=green') self.assertEqual(self.channel.socket.last, b'555 RCPT TO parameters not recognized or not implemented\r\n') def test_MAIL_size_parameter_larger_than_default_data_size_limit(self): self.channel.data_size_limit = 1048 self.write_line(b'EHLO example') self.write_line(b'MAIL FROM: SIZE=2096') self.assertEqual(self.channel.socket.last, b'552 Error: message size exceeds fixed maximum message size\r\n') def test_need_MAIL(self): self.write_line(b'HELO example') self.write_line(b'RCPT to:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: need MAIL command\r\n') def test_MAIL_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
[SP ]\r\n') def test_MAIL_missing_address(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'501 Syntax: MAIL FROM:
\r\n') def test_MAIL_chevrons(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_empty_chevrons(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from:<>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_MAIL_quoted_localpart(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com>') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: <"Fred Blogs"@example.com> SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_MAIL_quoted_localpart_with_size_no_angles(self): self.write_line(b'EHLO example') self.write_line(b'MAIL from: "Fred Blogs"@example.com SIZE=1000') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.channel.mailfrom, '"Fred Blogs"@example.com') def test_nested_MAIL(self): self.write_line(b'HELO example') self.write_line(b'MAIL from:eggs@example') self.write_line(b'MAIL from:spam@example') self.assertEqual(self.channel.socket.last, b'503 Error: nested MAIL command\r\n') def test_VRFY(self): self.write_line(b'VRFY eggs@example') self.assertEqual(self.channel.socket.last, b'252 Cannot VRFY user, but will accept message and attempt ' + \ b'delivery\r\n') def test_VRFY_syntax(self): self.write_line(b'VRFY') self.assertEqual(self.channel.socket.last, b'501 Syntax: VRFY
\r\n') def test_EXPN_not_implemented(self): self.write_line(b'EXPN') self.assertEqual(self.channel.socket.last, b'502 EXPN not implemented\r\n') def test_no_HELO_MAIL(self): self.write_line(b'MAIL from:') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_need_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'503 Error: need RCPT command\r\n') def test_RCPT_syntax_HELO(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
\r\n') def test_RCPT_syntax_EHLO(self): self.write_line(b'EHLO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'501 Syntax: RCPT TO:
[SP ]\r\n') def test_RCPT_lowercase_to_OK(self): self.write_line(b'HELO example') self.write_line(b'MAIL From: eggs@example') self.write_line(b'RCPT to: ') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_no_HELO_RCPT(self): self.write_line(b'RCPT to eggs@example') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def test_DATA_syntax(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'501 Syntax: DATA\r\n') def test_no_HELO_DATA(self): self.write_line(b'DATA spam') self.assertEqual(self.channel.socket.last, b'503 Error: send HELO first\r\n') def test_data_transparency_section_4_5_2(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'..\r\n.\r\n') self.assertEqual(self.channel.received_data, '.') def test_multiple_RCPT(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RCPT To:ham@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example','ham@example'], 'data')]) def test_manual_status(self): # checks that the Channel is able to return a custom status message self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'return status\r\n.') self.assertEqual(self.channel.socket.last, b'250 Okish\r\n') def test_RSET(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'MAIL From:foo@example') self.write_line(b'RCPT To:eggs@example') self.write_line(b'DATA') self.write_line(b'data\r\n.') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'foo@example', ['eggs@example'], 'data')]) def test_HELO_RSET(self): self.write_line(b'HELO example') self.write_line(b'RSET') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') def test_RSET_syntax(self): self.write_line(b'RSET hi') self.assertEqual(self.channel.socket.last, b'501 Syntax: RSET\r\n') def test_unknown_command(self): self.write_line(b'UNKNOWN_CMD') self.assertEqual(self.channel.socket.last, b'500 Error: command "UNKNOWN_CMD" not ' + \ b'recognized\r\n') def test_attribute_deprecations(self): with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__server with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__server = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__line with 
support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__line = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__state with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__state = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__greeting with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__greeting = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__mailfrom with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__mailfrom = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__rcpttos with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__rcpttos = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__data with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__data = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__fqdn with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__fqdn = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__peer with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__peer = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__conn with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__conn = 'spam' with support.check_warnings(('', DeprecationWarning)): spam = self.channel._SMTPChannel__addr with support.check_warnings(('', DeprecationWarning)): self.channel._SMTPChannel__addr = 'spam' @unittest.skipUnless(socket_helper.IPV6_ENABLED, "IPv6 not enabled") class SMTPDChannelIPv6Test(SMTPDChannelTest): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOSTv6, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) class SMTPDChannelWithDataSizeLimitTest(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set DATA size limit to 32 bytes for easy testing self.channel = smtpd.SMTPChannel(self.server, conn, addr, 32, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_data_limit_dialog(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'data\r\nmore\r\n.') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.assertEqual(self.server.messages, [(('peer-address', 'peer-port'), 'eggs@example', ['spam@example'], 'data\nmore')]) def 
test_data_limit_dialog_too_much_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'RCPT To:spam@example') self.assertEqual(self.channel.socket.last, b'250 OK\r\n') self.write_line(b'DATA') self.assertEqual(self.channel.socket.last, b'354 End data with .\r\n') self.write_line(b'This message is longer than 32 bytes\r\n.') self.assertEqual(self.channel.socket.last, b'552 Error: Too much mail data\r\n') class SMTPDChannelWithDecodeDataFalse(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0)) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, b'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87\n' b'and some plain ascii') class SMTPDChannelWithDecodeDataTrue(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), decode_data=True) conn, addr = self.server.accept() # Set decode_data to True self.channel = smtpd.SMTPChannel(self.server, conn, addr, decode_data=True) def tearDown(self): asyncore.close_all() asyncore.socket = smtpd.socket = socket smtpd.DEBUGSTREAM = self.old_debugstream def write_line(self, line): self.channel.socket.queue_recv(line) self.channel.handle_read() def test_ascii_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'plain ascii text') self.write_line(b'.') self.assertEqual(self.channel.received_data, 'plain ascii text') def test_utf8_data(self): self.write_line(b'HELO example') self.write_line(b'MAIL From:eggs@example') self.write_line(b'RCPT To:spam@example') self.write_line(b'DATA') self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87') self.write_line(b'and some plain ascii') self.write_line(b'.') self.assertEqual( self.channel.received_data, 'utf8 enriched text: żźć\nand some plain ascii') class SMTPDChannelTestWithEnableSMTPUTF8True(unittest.TestCase): def setUp(self): smtpd.socket = asyncore.socket = mock_socket self.old_debugstream = smtpd.DEBUGSTREAM self.debug = smtpd.DEBUGSTREAM = io.StringIO() self.server = DummyServer((socket_helper.HOST, 0), ('b', 0), enable_SMTPUTF8=True) conn, addr = self.server.accept() self.channel = smtpd.SMTPChannel(self.server, conn, addr, enable_SMTPUTF8=True) def tearDown(self): 
        asyncore.close_all()
        asyncore.socket = smtpd.socket = socket
        smtpd.DEBUGSTREAM = self.old_debugstream

    def write_line(self, line):
        self.channel.socket.queue_recv(line)
        self.channel.handle_read()

    def test_MAIL_command_accepts_SMTPUTF8_when_announced(self):
        self.write_line(b'EHLO example')
        self.write_line(
            'MAIL from: BODY=8BITMIME SMTPUTF8'.encode(
                'utf-8')
        )
        self.assertEqual(self.channel.socket.last, b'250 OK\r\n')

    def test_process_smtputf8_message(self):
        self.write_line(b'EHLO example')
        for mail_parameters in [b'', b'BODY=8BITMIME SMTPUTF8']:
            self.write_line(b'MAIL from: ' + mail_parameters)
            self.assertEqual(self.channel.socket.last[0:3], b'250')
            self.write_line(b'rcpt to:')
            self.assertEqual(self.channel.socket.last[0:3], b'250')
            self.write_line(b'data')
            self.assertEqual(self.channel.socket.last[0:3], b'354')
            self.write_line(b'c\r\n.')
            if mail_parameters == b'':
                self.assertEqual(self.channel.socket.last, b'250 OK\r\n')
            else:
                self.assertEqual(self.channel.socket.last,
                                 b'250 SMTPUTF8 message okish\r\n')

    def test_utf8_data(self):
        self.write_line(b'EHLO example')
        self.write_line(
            'MAIL From: naïve@examplé BODY=8BITMIME SMTPUTF8'.encode('utf-8'))
        self.assertEqual(self.channel.socket.last[0:3], b'250')
        self.write_line('RCPT To:späm@examplé'.encode('utf-8'))
        self.assertEqual(self.channel.socket.last[0:3], b'250')
        self.write_line(b'DATA')
        self.assertEqual(self.channel.socket.last[0:3], b'354')
        self.write_line(b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87')
        self.write_line(b'.')
        self.assertEqual(
            self.channel.received_data,
            b'utf8 enriched text: \xc5\xbc\xc5\xba\xc4\x87')

    def test_MAIL_command_limit_extended_with_SIZE_and_SMTPUTF8(self):
        self.write_line(b'ehlo example')
        fill_len = (512 + 26 + 10) - len('mail from:<@example>')
        self.write_line(b'MAIL from:<' + b'a' * (fill_len + 1) + b'@example>')
        self.assertEqual(self.channel.socket.last,
                         b'500 Error: line too long\r\n')
        self.write_line(b'MAIL from:<' + b'a' * fill_len + b'@example>')
        self.assertEqual(self.channel.socket.last, b'250 OK\r\n')

    def test_multiple_emails_with_extended_command_length(self):
        self.write_line(b'ehlo example')
        fill_len = (512 + 26 + 10) - len('mail from:<@example>')
        for char in [b'a', b'b', b'c']:
            self.write_line(b'MAIL from:<' + char * fill_len + b'a@example>')
            self.assertEqual(self.channel.socket.last[0:3], b'500')
            self.write_line(b'MAIL from:<' + char * fill_len + b'@example>')
            self.assertEqual(self.channel.socket.last[0:3], b'250')
            self.write_line(b'rcpt to:')
            self.assertEqual(self.channel.socket.last[0:3], b'250')
            self.write_line(b'data')
            self.assertEqual(self.channel.socket.last[0:3], b'354')
            self.write_line(b'test\r\n.')
            self.assertEqual(self.channel.socket.last[0:3], b'250')


class MiscTestCase(unittest.TestCase):
    def test__all__(self):
        blacklist = {
            "program", "Devnull", "DEBUGSTREAM", "NEWLINE", "COMMASPACE",
            "DATA_SIZE_DEFAULT", "usage", "Options", "parseargs",
        }
        support.check__all__(self, smtpd, blacklist=blacklist)


if __name__ == "__main__":
    unittest.main()

gevent-24.11.1/src/greentest/3.9/test_socket.py

import unittest
from test import support
from test.support import socket_helper
import errno
import io
import itertools
import socket
import select
import tempfile
import time
import traceback
import queue
import sys
import os
import platform
import array
import contextlib
from weakref import proxy
import signal
import math
import pickle
import struct
import random
import shutil
import string
import _thread as thread
import threading
try: import multiprocessing except ImportError: multiprocessing = False try: import fcntl except ImportError: fcntl = None HOST = socket_helper.HOST # test unicode string and carriage return MSG = 'Michael Gilfix was here\u1234\r\n'.encode('utf-8') VSOCKPORT = 1234 AIX = platform.system() == "AIX" try: import _socket except ImportError: _socket = None def get_cid(): if fcntl is None: return None if not hasattr(socket, 'IOCTL_VM_SOCKETS_GET_LOCAL_CID'): return None try: with open("/dev/vsock", "rb") as f: r = fcntl.ioctl(f, socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID, " ") except OSError: return None else: return struct.unpack("I", r)[0] def _have_socket_can(): """Check whether CAN sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_isotp(): """Check whether CAN ISOTP sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_can_j1939(): """Check whether CAN J1939 sockets are supported on this host.""" try: s = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_rds(): """Check whether RDS sockets are supported on this host.""" try: s = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_alg(): """Check whether AF_ALG sockets are supported on this host.""" try: s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_qipcrtr(): """Check whether AF_QIPCRTR sockets are supported on this host.""" try: s = socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM, 0) except (AttributeError, OSError): return False else: s.close() return True def _have_socket_vsock(): """Check whether AF_VSOCK sockets are supported on this host.""" ret = get_cid() is not None return ret def _have_socket_bluetooth(): """Check whether AF_BLUETOOTH sockets are supported on this host.""" try: # RFCOMM is supported by all platforms with bluetooth support. Windows # does not support omitting the protocol. 
s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) except (AttributeError, OSError): return False else: s.close() return True @contextlib.contextmanager def socket_setdefaulttimeout(timeout): old_timeout = socket.getdefaulttimeout() try: socket.setdefaulttimeout(timeout) yield finally: socket.setdefaulttimeout(old_timeout) HAVE_SOCKET_CAN = _have_socket_can() HAVE_SOCKET_CAN_ISOTP = _have_socket_can_isotp() HAVE_SOCKET_CAN_J1939 = _have_socket_can_j1939() HAVE_SOCKET_RDS = _have_socket_rds() HAVE_SOCKET_ALG = _have_socket_alg() HAVE_SOCKET_QIPCRTR = _have_socket_qipcrtr() HAVE_SOCKET_VSOCK = _have_socket_vsock() HAVE_SOCKET_UDPLITE = hasattr(socket, "IPPROTO_UDPLITE") HAVE_SOCKET_BLUETOOTH = _have_socket_bluetooth() # Size in bytes of the int type SIZEOF_INT = array.array("i").itemsize class SocketTCPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(self.serv) self.serv.listen() def tearDown(self): self.serv.close() self.serv = None class SocketUDPTest(unittest.TestCase): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.port = socket_helper.bind_port(self.serv) def tearDown(self): self.serv.close() self.serv = None class SocketUDPLITETest(SocketUDPTest): def setUp(self): self.serv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) self.port = socket_helper.bind_port(self.serv) class ThreadSafeCleanupTestCase: """Subclass of unittest.TestCase with thread-safe cleanup methods. This subclass protects the addCleanup() and doCleanups() methods with a recursive lock. """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._cleanup_lock = threading.RLock() def addCleanup(self, *args, **kwargs): with self._cleanup_lock: return super().addCleanup(*args, **kwargs) def doCleanups(self, *args, **kwargs): with self._cleanup_lock: return super().doCleanups(*args, **kwargs) class SocketCANTest(unittest.TestCase): """To be able to run this test, a `vcan0` CAN interface can be created with the following commands: # modprobe vcan # ip link add dev vcan0 type vcan # ip link set up vcan0 """ interface = 'vcan0' bufsize = 128 """The CAN frame structure is defined in : struct can_frame { canid_t can_id; /* 32 bit CAN_ID + EFF/RTR/ERR flags */ __u8 can_dlc; /* data length code: 0 .. 8 */ __u8 data[8] __attribute__((aligned(8))); }; """ can_frame_fmt = "=IB3x8s" can_frame_size = struct.calcsize(can_frame_fmt) """The Broadcast Management Command frame structure is defined in : struct bcm_msg_head { __u32 opcode; __u32 flags; __u32 count; struct timeval ival1, ival2; canid_t can_id; __u32 nframes; struct can_frame frames[0]; } `bcm_msg_head` must be 8 bytes aligned because of the `frames` member (see `struct can_frame` definition). Must use native not standard types for packing. 
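    Editorial illustration (added by the editor, not part of the upstream
    CPython test): using the `can_frame_fmt` defined above, a classic CAN
    frame could be packed and unpacked roughly as follows (the identifier
    and payload are arbitrary example values):

        data = b'ab'                      # payload of two bytes
        frame = struct.pack(can_frame_fmt,
                            0x123,        # can_id
                            len(data),    # can_dlc
                            data)         # '8s' pads the payload to 8 bytes
        can_id, can_dlc, padded = struct.unpack(can_frame_fmt, frame)
        payload = padded[:can_dlc]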
""" bcm_cmd_msg_fmt = "@3I4l2I" bcm_cmd_msg_fmt += "x" * (struct.calcsize(bcm_cmd_msg_fmt) % 8) def setUp(self): self.s = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) self.addCleanup(self.s.close) try: self.s.bind((self.interface,)) except OSError: self.skipTest('network interface `%s` does not exist' % self.interface) class SocketRDSTest(unittest.TestCase): """To be able to run this test, the `rds` kernel module must be loaded: # modprobe rds """ bufsize = 8192 def setUp(self): self.serv = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) self.addCleanup(self.serv.close) try: self.port = socket_helper.bind_port(self.serv) except OSError: self.skipTest('unable to bind RDS socket') class ThreadableTest: """Threadable Test class The ThreadableTest class makes it easy to create a threaded client/server pair from an existing unit test. To create a new threaded class from an existing unit test, use multiple inheritance: class NewClass (OldClass, ThreadableTest): pass This class defines two new fixture functions with obvious purposes for overriding: clientSetUp () clientTearDown () Any new test functions within the class must then define tests in pairs, where the test name is preceded with a '_' to indicate the client portion of the test. Ex: def testFoo(self): # Server portion def _testFoo(self): # Client portion Any exceptions raised by the clients during their tests are caught and transferred to the main thread to alert the testing framework. Note, the server setup function cannot call any blocking functions that rely on the client thread during setup, unless serverExplicitReady() is called just before the blocking call (such as in setting up a client/server connection and performing the accept() in setUp(). """ def __init__(self): # Swap the true setup function self.__setUp = self.setUp self.setUp = self._setUp def serverExplicitReady(self): """This method allows the server to explicitly indicate that it wants the client thread to proceed. This is useful if the server is about to execute a blocking routine that is dependent upon the client thread during its setup routine.""" self.server_ready.set() def _setUp(self): self.wait_threads = support.wait_threads_exit() self.wait_threads.__enter__() self.addCleanup(self.wait_threads.__exit__, None, None, None) self.server_ready = threading.Event() self.client_ready = threading.Event() self.done = threading.Event() self.queue = queue.Queue(1) self.server_crashed = False def raise_queued_exception(): if self.queue.qsize(): raise self.queue.get() self.addCleanup(raise_queued_exception) # Do some munging to start the client test. 
methodname = self.id() i = methodname.rfind('.') methodname = methodname[i+1:] test_method = getattr(self, '_' + methodname) self.client_thread = thread.start_new_thread( self.clientRun, (test_method,)) try: self.__setUp() except: self.server_crashed = True raise finally: self.server_ready.set() self.client_ready.wait() self.addCleanup(self.done.wait) def clientRun(self, test_func): self.server_ready.wait() try: self.clientSetUp() except BaseException as e: self.queue.put(e) self.clientTearDown() return finally: self.client_ready.set() if self.server_crashed: self.clientTearDown() return if not hasattr(test_func, '__call__'): raise TypeError("test_func must be a callable function") try: test_func() except BaseException as e: self.queue.put(e) finally: self.clientTearDown() def clientSetUp(self): raise NotImplementedError("clientSetUp must be implemented.") def clientTearDown(self): self.done.set() thread.exit() class ThreadedTCPSocketTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedUDPSocketTest(SocketUDPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class ThreadedUDPLITESocketTest(SocketUDPLITETest, ThreadableTest): def __init__(self, methodName='runTest'): SocketUDPLITETest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedCANSocketTest(SocketCANTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketCANTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) try: self.cli.bind((self.interface,)) except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ThreadedRDSSocketTest(SocketRDSTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketRDSTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) try: # RDS sockets must be bound explicitly to send or receive data self.cli.bind((HOST, 0)) self.cli_addr = self.cli.getsockname() except OSError: # skipTest should not be called here, and will be called in the # server instead pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') @unittest.skipUnless(get_cid() != 2, "This test can only be run on a virtual guest.") class ThreadedVSOCKSocketStreamTest(unittest.TestCase, ThreadableTest): def __init__(self, 
methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def setUp(self): self.serv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.serv.close) self.serv.bind((socket.VMADDR_CID_ANY, VSOCKPORT)) self.serv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.serv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): time.sleep(0.1) self.cli = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) self.addCleanup(self.cli.close) cid = get_cid() self.cli.connect((cid, VSOCKPORT)) def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) def _testStream(self): self.cli.send(MSG) self.cli.close() class SocketConnectedTest(ThreadedTCPSocketTest): """Socket tests for client-server connection. self.cli_conn is a client socket connected to the server. The setUp() method guarantees that it is connected to the server. """ def __init__(self, methodName='runTest'): ThreadedTCPSocketTest.__init__(self, methodName=methodName) def setUp(self): ThreadedTCPSocketTest.setUp(self) # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None ThreadedTCPSocketTest.tearDown(self) def clientSetUp(self): ThreadedTCPSocketTest.clientSetUp(self) self.cli.connect((HOST, self.port)) self.serv_conn = self.cli def clientTearDown(self): self.serv_conn.close() self.serv_conn = None ThreadedTCPSocketTest.clientTearDown(self) class SocketPairTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName='runTest'): unittest.TestCase.__init__(self, methodName=methodName) ThreadableTest.__init__(self) self.cli = None self.serv = None def socketpair(self): # To be overridden by some child classes. return socket.socketpair() def setUp(self): self.serv, self.cli = self.socketpair() def tearDown(self): if self.serv: self.serv.close() self.serv = None def clientSetUp(self): pass def clientTearDown(self): if self.cli: self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) # The following classes are used by the sendmsg()/recvmsg() tests. # Combining, for instance, ConnectedStreamTestMixin and TCPTestBase # gives a drop-in replacement for SocketConnectedTest, but different # address families can be used, and the attributes serv_addr and # cli_addr will be set to the addresses of the endpoints. class SocketTestBase(unittest.TestCase): """A base class for socket tests. Subclasses must provide methods newSocket() to return a new socket and bindSock(sock) to bind it to an unused address. Creates a socket self.serv and sets self.serv_addr to its address. """ def setUp(self): self.serv = self.newSocket() self.bindServer() def bindServer(self): """Bind server socket and set self.serv_addr to its address.""" self.bindSock(self.serv) self.serv_addr = self.serv.getsockname() def tearDown(self): self.serv.close() self.serv = None class SocketListeningTestMixin(SocketTestBase): """Mixin to listen on the server socket.""" def setUp(self): super().setUp() self.serv.listen() class ThreadedSocketTestMixin(ThreadSafeCleanupTestCase, SocketTestBase, ThreadableTest): """Mixin to add client socket and allow client/server tests. Client socket is self.cli and its address is self.cli_addr. See ThreadableTest for usage information. 
""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) ThreadableTest.__init__(self) def clientSetUp(self): self.cli = self.newClientSocket() self.bindClient() def newClientSocket(self): """Return a new socket for use as client.""" return self.newSocket() def bindClient(self): """Bind client socket and set self.cli_addr to its address.""" self.bindSock(self.cli) self.cli_addr = self.cli.getsockname() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) class ConnectedStreamTestMixin(SocketListeningTestMixin, ThreadedSocketTestMixin): """Mixin to allow client/server stream tests with connected client. Server's socket representing connection to client is self.cli_conn and client's connection to server is self.serv_conn. (Based on SocketConnectedTest.) """ def setUp(self): super().setUp() # Indicate explicitly we're ready for the client thread to # proceed and then perform the blocking call to accept self.serverExplicitReady() conn, addr = self.serv.accept() self.cli_conn = conn def tearDown(self): self.cli_conn.close() self.cli_conn = None super().tearDown() def clientSetUp(self): super().clientSetUp() self.cli.connect(self.serv_addr) self.serv_conn = self.cli def clientTearDown(self): try: self.serv_conn.close() self.serv_conn = None except AttributeError: pass super().clientTearDown() class UnixSocketTestBase(SocketTestBase): """Base class for Unix-domain socket tests.""" # This class is used for file descriptor passing tests, so we # create the sockets in a private directory so that other users # can't send anything that might be problematic for a privileged # user running the tests. def setUp(self): self.dir_path = tempfile.mkdtemp() self.addCleanup(os.rmdir, self.dir_path) super().setUp() def bindSock(self, sock): path = tempfile.mktemp(dir=self.dir_path) socket_helper.bind_unix_socket(sock, path) self.addCleanup(support.unlink, path) class UnixStreamBase(UnixSocketTestBase): """Base class for Unix-domain SOCK_STREAM tests.""" def newSocket(self): return socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) class InetTestBase(SocketTestBase): """Base class for IPv4 socket tests.""" host = HOST def setUp(self): super().setUp() self.port = self.serv_addr[1] def bindSock(self, sock): socket_helper.bind_port(sock, host=self.host) class TCPTestBase(InetTestBase): """Base class for TCP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM) class UDPTestBase(InetTestBase): """Base class for UDP-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM) class UDPLITETestBase(InetTestBase): """Base class for UDPLITE-over-IPv4 tests.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) class SCTPStreamBase(InetTestBase): """Base class for SCTP tests in one-to-one (SOCK_STREAM) mode.""" def newSocket(self): return socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP) class Inet6TestBase(InetTestBase): """Base class for IPv6 socket tests.""" host = socket_helper.HOSTv6 class UDP6TestBase(Inet6TestBase): """Base class for UDP-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) class UDPLITE6TestBase(Inet6TestBase): """Base class for UDPLITE-over-IPv6 tests.""" def newSocket(self): return socket.socket(socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE) # Test-skipping decorators for use with ThreadableTest. 
def skipWithClientIf(condition, reason): """Skip decorated test if condition is true, add client_skip decorator. If the decorated object is not a class, sets its attribute "client_skip" to a decorator which will return an empty function if the test is to be skipped, or the original function if it is not. This can be used to avoid running the client part of a skipped test when using ThreadableTest. """ def client_pass(*args, **kwargs): pass def skipdec(obj): retval = unittest.skip(reason)(obj) if not isinstance(obj, type): retval.client_skip = lambda f: client_pass return retval def noskipdec(obj): if not (isinstance(obj, type) or hasattr(obj, "client_skip")): obj.client_skip = lambda f: f return obj return skipdec if condition else noskipdec def requireAttrs(obj, *attributes): """Skip decorated test if obj is missing any of the given attributes. Sets client_skip attribute as skipWithClientIf() does. """ missing = [name for name in attributes if not hasattr(obj, name)] return skipWithClientIf( missing, "don't have " + ", ".join(name for name in missing)) def requireSocket(*args): """Skip decorated test if a socket cannot be created with given arguments. When an argument is given as a string, will use the value of that attribute of the socket module, or skip the test if it doesn't exist. Sets client_skip attribute as skipWithClientIf() does. """ err = None missing = [obj for obj in args if isinstance(obj, str) and not hasattr(socket, obj)] if missing: err = "don't have " + ", ".join(name for name in missing) else: callargs = [getattr(socket, obj) if isinstance(obj, str) else obj for obj in args] try: s = socket.socket(*callargs) except OSError as e: # XXX: check errno? err = str(e) else: s.close() return skipWithClientIf( err is not None, "can't create socket({0}): {1}".format( ", ".join(str(o) for o in args), err)) ####################################################################### ## Begin Tests class GeneralModuleTests(unittest.TestCase): def test_SocketType_is_socketobject(self): import _socket self.assertTrue(socket.SocketType is _socket.socket) s = socket.socket() self.assertIsInstance(s, socket.SocketType) s.close() def test_repr(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) with s: self.assertIn('fd=%i' % s.fileno(), repr(s)) self.assertIn('family=%s' % socket.AF_INET, repr(s)) self.assertIn('type=%s' % socket.SOCK_STREAM, repr(s)) self.assertIn('proto=0', repr(s)) self.assertNotIn('raddr', repr(s)) s.bind(('127.0.0.1', 0)) self.assertIn('laddr', repr(s)) self.assertIn(str(s.getsockname()), repr(s)) self.assertIn('[closed]', repr(s)) self.assertNotIn('laddr', repr(s)) @unittest.skipUnless(_socket is not None, 'need _socket module') def test_csocket_repr(self): s = _socket.socket(_socket.AF_INET, _socket.SOCK_STREAM) try: expected = ('' % (s.fileno(), s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) finally: s.close() expected = ('' % (s.family, s.type, s.proto)) self.assertEqual(repr(s), expected) def test_weakref(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: p = proxy(s) self.assertEqual(p.fileno(), s.fileno()) s = None support.gc_collect() # For PyPy or other GCs. try: p.fileno() except ReferenceError: pass else: self.fail('Socket proxy still exists') def testSocketError(self): # Testing socket module exceptions msg = "Error raising socket exception (%s)." 
with self.assertRaises(OSError, msg=msg % 'OSError'): raise OSError with self.assertRaises(OSError, msg=msg % 'socket.herror'): raise socket.herror with self.assertRaises(OSError, msg=msg % 'socket.gaierror'): raise socket.gaierror def testSendtoErrors(self): # Testing that sendto doesn't mask failures. See #10169. s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind(('', 0)) sockname = s.getsockname() # 2 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None) self.assertIn('not NoneType',str(cm.exception)) # 3 args with self.assertRaises(TypeError) as cm: s.sendto('\u2620', 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'str'") with self.assertRaises(TypeError) as cm: s.sendto(5j, 0, sockname) self.assertEqual(str(cm.exception), "a bytes-like object is required, not 'complex'") with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, None) self.assertIn('not NoneType', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 'bar', sockname) self.assertIn('an integer is required', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', None, None) self.assertIn('an integer is required', str(cm.exception)) # wrong number of args with self.assertRaises(TypeError) as cm: s.sendto(b'foo') self.assertIn('(1 given)', str(cm.exception)) with self.assertRaises(TypeError) as cm: s.sendto(b'foo', 0, sockname, 4) self.assertIn('(4 given)', str(cm.exception)) def testCrucialConstants(self): # Testing for mission critical constants socket.AF_INET if socket.has_ipv6: socket.AF_INET6 socket.SOCK_STREAM socket.SOCK_DGRAM socket.SOCK_RAW socket.SOCK_RDM socket.SOCK_SEQPACKET socket.SOL_SOCKET socket.SO_REUSEADDR def testCrucialIpProtoConstants(self): socket.IPPROTO_TCP socket.IPPROTO_UDP if socket.has_ipv6: socket.IPPROTO_IPV6 @unittest.skipUnless(os.name == "nt", "Windows specific") def testWindowsSpecificConstants(self): socket.IPPROTO_ICLFXBM socket.IPPROTO_ST socket.IPPROTO_CBT socket.IPPROTO_IGP socket.IPPROTO_RDP socket.IPPROTO_PGM socket.IPPROTO_L2TP socket.IPPROTO_SCTP @unittest.skipUnless(sys.platform == 'darwin', 'macOS specific test') @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test3542SocketOptions(self): # Ref. 
issue #35569 and https://tools.ietf.org/html/rfc3542 opts = { 'IPV6_CHECKSUM', 'IPV6_DONTFRAG', 'IPV6_DSTOPTS', 'IPV6_HOPLIMIT', 'IPV6_HOPOPTS', 'IPV6_NEXTHOP', 'IPV6_PATHMTU', 'IPV6_PKTINFO', 'IPV6_RECVDSTOPTS', 'IPV6_RECVHOPLIMIT', 'IPV6_RECVHOPOPTS', 'IPV6_RECVPATHMTU', 'IPV6_RECVPKTINFO', 'IPV6_RECVRTHDR', 'IPV6_RECVTCLASS', 'IPV6_RTHDR', 'IPV6_RTHDRDSTOPTS', 'IPV6_RTHDR_TYPE_0', 'IPV6_TCLASS', 'IPV6_USE_MIN_MTU', } for opt in opts: self.assertTrue( hasattr(socket, opt), f"Missing RFC3542 socket option '{opt}'" ) def testHostnameRes(self): # Testing hostname resolution mechanisms hostname = socket.gethostname() try: ip = socket.gethostbyname(hostname) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertTrue(ip.find('.') >= 0, "Error resolving host to ip.") try: hname, aliases, ipaddrs = socket.gethostbyaddr(ip) except OSError: # Probably a similar problem as above; skip this test self.skipTest('name lookup failure') all_host_names = [hostname, hname] + aliases fqhn = socket.getfqdn(ip) if not fqhn in all_host_names: self.fail("Error testing host resolution mechanisms. (fqdn: %s, all: %s)" % (fqhn, repr(all_host_names))) def test_host_resolution(self): for addr in [socket_helper.HOSTv4, '10.0.0.1', '255.255.255.255']: self.assertEqual(socket.gethostbyname(addr), addr) # we don't test socket_helper.HOSTv6 because there's a chance it doesn't have # a matching name entry (e.g. 'ip6-localhost') for host in [socket_helper.HOSTv4]: self.assertIn(host, socket.gethostbyaddr(host)[2]) def test_host_resolution_bad_address(self): # These are all malformed IP addresses and expected not to resolve to # any result. But some ISPs, e.g. AWS, may successfully resolve these # IPs. explanation = ( "resolving an invalid IP address did not raise OSError; " "can be caused by a broken DNS server" ) for addr in ['0.1.1.~1', '1+.1.1.1', '::1q', '::1::2', '1:1:1:1:1:1:1:1:1']: with self.assertRaises(OSError, msg=addr): socket.gethostbyname(addr) with self.assertRaises(OSError, msg=explanation): socket.gethostbyaddr(addr) @unittest.skipUnless(hasattr(socket, 'sethostname'), "test needs socket.sethostname()") @unittest.skipUnless(hasattr(socket, 'gethostname'), "test needs socket.gethostname()") def test_sethostname(self): oldhn = socket.gethostname() try: socket.sethostname('new') except OSError as e: if e.errno == errno.EPERM: self.skipTest("test should be run as root") else: raise try: # running test as root! 
self.assertEqual(socket.gethostname(), 'new') # Should work with bytes objects too socket.sethostname(b'bar') self.assertEqual(socket.gethostname(), 'bar') finally: socket.sethostname(oldhn) @unittest.skipUnless(hasattr(socket, 'if_nameindex'), 'socket.if_nameindex() not available.') def testInterfaceNameIndex(self): interfaces = socket.if_nameindex() for index, name in interfaces: self.assertIsInstance(index, int) self.assertIsInstance(name, str) # interface indices are non-zero integers self.assertGreater(index, 0) _index = socket.if_nametoindex(name) self.assertIsInstance(_index, int) self.assertEqual(index, _index) _name = socket.if_indextoname(index) self.assertIsInstance(_name, str) self.assertEqual(name, _name) @unittest.skipUnless(hasattr(socket, 'if_indextoname'), 'socket.if_indextoname() not available.') def testInvalidInterfaceIndexToName(self): self.assertRaises(OSError, socket.if_indextoname, 0) self.assertRaises(OverflowError, socket.if_indextoname, -1) self.assertRaises(OverflowError, socket.if_indextoname, 2**1000) self.assertRaises(TypeError, socket.if_indextoname, '_DEADBEEF') if hasattr(socket, 'if_nameindex'): indices = dict(socket.if_nameindex()) for index in indices: index2 = index + 2**32 if index2 not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index2) for index in 2**32-1, 2**64-1: if index not in indices: with self.assertRaises((OverflowError, OSError)): socket.if_indextoname(index) @unittest.skipUnless(hasattr(socket, 'if_nametoindex'), 'socket.if_nametoindex() not available.') def testInvalidInterfaceNameToIndex(self): self.assertRaises(TypeError, socket.if_nametoindex, 0) self.assertRaises(OSError, socket.if_nametoindex, '_DEADBEEF') @unittest.skipUnless(hasattr(sys, 'getrefcount'), 'test needs sys.getrefcount()') def testRefCountGetNameInfo(self): # Testing reference count for getnameinfo try: # On some versions, this loses a reference orig = sys.getrefcount(__name__) socket.getnameinfo(__name__,0) except TypeError: if sys.getrefcount(__name__) != orig: self.fail("socket.getnameinfo loses a reference") def testInterpreterCrash(self): # Making sure getnameinfo doesn't crash the interpreter try: # On some versions, this crashes the interpreter. socket.getnameinfo(('x', 0, 0, 0), 0) except OSError: pass def testNtoH(self): # This just checks that htons etc. are their own inverse, # when looking at the lower 16 or 32 bits. sizes = {socket.htonl: 32, socket.ntohl: 32, socket.htons: 16, socket.ntohs: 16} for func, size in sizes.items(): mask = (1<= 23): port2 = socket.getservbyname(service) eq(port, port2) # Try udp, but don't barf if it doesn't exist try: udpport = socket.getservbyname(service, 'udp') except OSError: udpport = None else: eq(udpport, port) # Now make sure the lookup by port returns the same service name # Issue #26936: Android getservbyport() is broken. if not support.is_android: eq(socket.getservbyport(port2), service) eq(socket.getservbyport(port, 'tcp'), service) if udpport is not None: eq(socket.getservbyport(udpport, 'udp'), service) # Make sure getservbyport does not accept out of range ports. 
self.assertRaises(OverflowError, socket.getservbyport, -1) self.assertRaises(OverflowError, socket.getservbyport, 65536) def testDefaultTimeout(self): # Testing default timeout # The default timeout should initially be None self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as s: self.assertEqual(s.gettimeout(), None) # Set the default timeout to 10, and see if it propagates with socket_setdefaulttimeout(10): self.assertEqual(socket.getdefaulttimeout(), 10) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), 10) # Reset the default timeout to None, and see if it propagates socket.setdefaulttimeout(None) self.assertEqual(socket.getdefaulttimeout(), None) with socket.socket() as sock: self.assertEqual(sock.gettimeout(), None) # Check that setting it to an invalid value raises ValueError self.assertRaises(ValueError, socket.setdefaulttimeout, -1) # Check that setting it to an invalid type raises TypeError self.assertRaises(TypeError, socket.setdefaulttimeout, "spam") @unittest.skipUnless(hasattr(socket, 'inet_aton'), 'test needs socket.inet_aton()') def testIPv4_inet_aton_fourbytes(self): # Test that issue1008086 and issue767150 are fixed. # It must return 4 bytes. self.assertEqual(b'\x00'*4, socket.inet_aton('0.0.0.0')) self.assertEqual(b'\xff'*4, socket.inet_aton('255.255.255.255')) @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv4toString(self): from socket import inet_aton as f, inet_pton, AF_INET g = lambda a: inet_pton(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual(b'\x00\x00\x00\x00', f('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', f('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', f('170.170.170.170')) self.assertEqual(b'\x01\x02\x03\x04', f('1.2.3.4')) self.assertEqual(b'\xff\xff\xff\xff', f('255.255.255.255')) # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid(f, '0.0.0.') assertInvalid(f, '300.0.0.0') assertInvalid(f, 'a.0.0.0') assertInvalid(f, '1.2.3.4.5') assertInvalid(f, '::1') self.assertEqual(b'\x00\x00\x00\x00', g('0.0.0.0')) self.assertEqual(b'\xff\x00\xff\x00', g('255.0.255.0')) self.assertEqual(b'\xaa\xaa\xaa\xaa', g('170.170.170.170')) self.assertEqual(b'\xff\xff\xff\xff', g('255.255.255.255')) assertInvalid(g, '0.0.0.') assertInvalid(g, '300.0.0.0') assertInvalid(g, 'a.0.0.0') assertInvalid(g, '1.2.3.4.5') assertInvalid(g, '::1') @unittest.skipUnless(hasattr(socket, 'inet_pton'), 'test needs socket.inet_pton()') def testIPv6toString(self): try: from socket import inet_pton, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_pton(AF_INET6, '::') except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_pton(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual(b'\x00' * 16, f('::')) self.assertEqual(b'\x00' * 16, f('0::0')) self.assertEqual(b'\x00\x01' + b'\x00' * 14, f('1::')) self.assertEqual( b'\x45\xef\x76\xcb\x00\x1a\x56\xef\xaf\xeb\x0b\xac\x19\x24\xae\xae', f('45ef:76cb:1a:56ef:afeb:bac:1924:aeae') ) self.assertEqual( b'\xad\x42\x0a\xbc' + b'\x00' * 4 + b'\x01\x27\x00\x00\x02\x54\x00\x02', f('ad42:abc::127:0:254:2') ) self.assertEqual(b'\x00\x12\x00\x0a' + b'\x00' * 12, f('12:a::')) assertInvalid('0x20::') assertInvalid(':::') assertInvalid('::0::') 
assertInvalid('1::abc::') assertInvalid('1::abc::def') assertInvalid('1:2:3:4:5:6') assertInvalid('1:2:3:4:5:6:') assertInvalid('1:2:3:4:5:6:7:8:0') # bpo-29972: inet_pton() doesn't fail on AIX if not AIX: assertInvalid('1:2:3:4:5:6:7:8:') self.assertEqual(b'\x00' * 12 + b'\xfe\x2a\x17\x40', f('::254.42.23.64') ) self.assertEqual( b'\x00\x42' + b'\x00' * 8 + b'\xa2\x9b\xfe\x2a\x17\x40', f('42::a29b:254.42.23.64') ) self.assertEqual( b'\x00\x42\xa8\xb9\x00\x00\x00\x02\xff\xff\xa2\x9b\xfe\x2a\x17\x40', f('42:a8b9:0:2:ffff:a29b:254.42.23.64') ) assertInvalid('255.254.253.252') assertInvalid('1::260.2.3.0') assertInvalid('1::0.be.e.0') assertInvalid('1:2:3:4:5:6:7:1.2.3.4') assertInvalid('::1.2.3.4:0') assertInvalid('0.100.200.0:3:4:5:6:7:8') @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv4(self): from socket import inet_ntoa as f, inet_ntop, AF_INET g = lambda a: inet_ntop(AF_INET, a) assertInvalid = lambda func,a: self.assertRaises( (OSError, ValueError), func, a ) self.assertEqual('1.0.1.0', f(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', f(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', f(b'\xff\xff\xff\xff')) self.assertEqual('1.2.3.4', f(b'\x01\x02\x03\x04')) assertInvalid(f, b'\x00' * 3) assertInvalid(f, b'\x00' * 5) assertInvalid(f, b'\x00' * 16) self.assertEqual('170.85.170.85', f(bytearray(b'\xaa\x55\xaa\x55'))) self.assertEqual('1.0.1.0', g(b'\x01\x00\x01\x00')) self.assertEqual('170.85.170.85', g(b'\xaa\x55\xaa\x55')) self.assertEqual('255.255.255.255', g(b'\xff\xff\xff\xff')) assertInvalid(g, b'\x00' * 3) assertInvalid(g, b'\x00' * 5) assertInvalid(g, b'\x00' * 16) self.assertEqual('170.85.170.85', g(bytearray(b'\xaa\x55\xaa\x55'))) @unittest.skipUnless(hasattr(socket, 'inet_ntop'), 'test needs socket.inet_ntop()') def testStringToIPv6(self): try: from socket import inet_ntop, AF_INET6, has_ipv6 if not has_ipv6: self.skipTest('IPv6 not available') except ImportError: self.skipTest('could not import needed symbols from socket') if sys.platform == "win32": try: inet_ntop(AF_INET6, b'\x00' * 16) except OSError as e: if e.winerror == 10022: self.skipTest('IPv6 might not be supported') f = lambda a: inet_ntop(AF_INET6, a) assertInvalid = lambda a: self.assertRaises( (OSError, ValueError), f, a ) self.assertEqual('::', f(b'\x00' * 16)) self.assertEqual('::1', f(b'\x00' * 15 + b'\x01')) self.assertEqual( 'aef:b01:506:1001:ffff:9997:55:170', f(b'\x0a\xef\x0b\x01\x05\x06\x10\x01\xff\xff\x99\x97\x00\x55\x01\x70') ) self.assertEqual('::1', f(bytearray(b'\x00' * 15 + b'\x01'))) assertInvalid(b'\x12' * 15) assertInvalid(b'\x12' * 17) assertInvalid(b'\x12' * 4) # XXX The following don't test module-level functionality... def testSockName(self): # Testing getsockname() port = socket_helper.find_unused_port() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) sock.bind(("0.0.0.0", port)) name = sock.getsockname() # XXX(nnorwitz): http://tinyurl.com/os5jz seems to indicate # it reasonable to get the host's addr in addition to 0.0.0.0. # At least for eCos. This is required for the S/390 to pass. 
try: my_ip_addr = socket.gethostbyname(socket.gethostname()) except OSError: # Probably name lookup wasn't set up right; skip this test self.skipTest('name lookup failure') self.assertIn(name[0], ("0.0.0.0", my_ip_addr), '%s invalid' % name[0]) self.assertEqual(name[1], port) def testGetSockOpt(self): # Testing getsockopt() # We know a socket should start without reuse==0 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse != 0, "initial mode is reuse") def testSetSockOpt(self): # Testing setsockopt() sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) reuse = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR) self.assertFalse(reuse == 0, "failed to set reuse mode") def testSendAfterClose(self): # testing send() after close() with timeout with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.settimeout(1) self.assertRaises(OSError, sock.send, b"spam") def testCloseException(self): sock = socket.socket() sock.bind((socket._LOCALHOST, 0)) socket.socket(fileno=sock.fileno()).close() try: sock.close() except OSError as err: # Winsock apparently raises ENOTSOCK self.assertIn(err.errno, (errno.EBADF, errno.ENOTSOCK)) else: self.fail("close() should raise EBADF/ENOTSOCK") def testNewAttributes(self): # testing .family, .type and .protocol with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: self.assertEqual(sock.family, socket.AF_INET) if hasattr(socket, 'SOCK_CLOEXEC'): self.assertIn(sock.type, (socket.SOCK_STREAM | socket.SOCK_CLOEXEC, socket.SOCK_STREAM)) else: self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def test_getsockaddrarg(self): sock = socket.socket() self.addCleanup(sock.close) port = socket_helper.find_unused_port() big_port = port + 65536 neg_port = port - 65536 self.assertRaises(OverflowError, sock.bind, (HOST, big_port)) self.assertRaises(OverflowError, sock.bind, (HOST, neg_port)) # Since find_unused_port() is inherently subject to race conditions, we # call it a couple times if necessary. 
for i in itertools.count(): port = socket_helper.find_unused_port() try: sock.bind((HOST, port)) except OSError as e: if e.errno != errno.EADDRINUSE or i == 5: raise else: break @unittest.skipUnless(os.name == "nt", "Windows specific") def test_sock_ioctl(self): self.assertTrue(hasattr(socket.socket, 'ioctl')) self.assertTrue(hasattr(socket, 'SIO_RCVALL')) self.assertTrue(hasattr(socket, 'RCVALL_ON')) self.assertTrue(hasattr(socket, 'RCVALL_OFF')) self.assertTrue(hasattr(socket, 'SIO_KEEPALIVE_VALS')) s = socket.socket() self.addCleanup(s.close) self.assertRaises(ValueError, s.ioctl, -1, None) s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 100, 100)) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(hasattr(socket, 'SIO_LOOPBACK_FAST_PATH'), 'Loopback fast path support required for this test') def test_sio_loopback_fast_path(self): s = socket.socket() self.addCleanup(s.close) try: s.ioctl(socket.SIO_LOOPBACK_FAST_PATH, True) except OSError as exc: WSAEOPNOTSUPP = 10045 if exc.winerror == WSAEOPNOTSUPP: self.skipTest("SIO_LOOPBACK_FAST_PATH is defined but " "doesn't implemented in this Windows version") raise self.assertRaises(TypeError, s.ioctl, socket.SIO_LOOPBACK_FAST_PATH, None) def testGetaddrinfo(self): try: socket.getaddrinfo('localhost', 80) except socket.gaierror as err: if err.errno == socket.EAI_SERVICE: # see http://bugs.python.org/issue1282647 self.skipTest("buggy libc version") raise # len of every sequence is supposed to be == 5 for info in socket.getaddrinfo(HOST, None): self.assertEqual(len(info), 5) # host can be a domain name, a string representation of an # IPv4/v6 address or None socket.getaddrinfo('localhost', 80) socket.getaddrinfo('127.0.0.1', 80) socket.getaddrinfo(None, 80) if socket_helper.IPV6_ENABLED: socket.getaddrinfo('::1', 80) # port can be a string service name such as "http", a numeric # port number or None # Issue #26936: Android getaddrinfo() was broken before API level 23. 
if (not hasattr(sys, 'getandroidapilevel') or sys.getandroidapilevel() >= 23): socket.getaddrinfo(HOST, "http") socket.getaddrinfo(HOST, 80) socket.getaddrinfo(HOST, None) # test family and socktype filters infos = socket.getaddrinfo(HOST, 80, socket.AF_INET, socket.SOCK_STREAM) for family, type, _, _, _ in infos: self.assertEqual(family, socket.AF_INET) self.assertEqual(str(family), 'AddressFamily.AF_INET') self.assertEqual(type, socket.SOCK_STREAM) self.assertEqual(str(type), 'SocketKind.SOCK_STREAM') infos = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) for _, socktype, _, _, _ in infos: self.assertEqual(socktype, socket.SOCK_STREAM) # test proto and flags arguments socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) # a server willing to support both IPv4 and IPv6 will # usually do this socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) # test keyword arguments a = socket.getaddrinfo(HOST, None) b = socket.getaddrinfo(host=HOST, port=None) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, socket.AF_INET) b = socket.getaddrinfo(HOST, None, family=socket.AF_INET) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, socket.SOCK_STREAM) b = socket.getaddrinfo(HOST, None, type=socket.SOCK_STREAM) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, socket.SOL_TCP) b = socket.getaddrinfo(HOST, None, proto=socket.SOL_TCP) self.assertEqual(a, b) a = socket.getaddrinfo(HOST, None, 0, 0, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(HOST, None, flags=socket.AI_PASSIVE) self.assertEqual(a, b) a = socket.getaddrinfo(None, 0, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE) b = socket.getaddrinfo(host=None, port=0, family=socket.AF_UNSPEC, type=socket.SOCK_STREAM, proto=0, flags=socket.AI_PASSIVE) self.assertEqual(a, b) # Issue #6697. self.assertRaises(UnicodeEncodeError, socket.getaddrinfo, 'localhost', '\uD800') # Issue 17269: test workaround for OS X platform bug segfault if hasattr(socket, 'AI_NUMERICSERV'): try: # The arguments here are undefined and the call may succeed # or fail. All we care here is that it doesn't segfault. socket.getaddrinfo("localhost", None, 0, 0, 0, socket.AI_NUMERICSERV) except socket.gaierror: pass def test_getnameinfo(self): # only IP addresses are allowed self.assertRaises(OSError, socket.getnameinfo, ('mail.python.org',0), 0) @unittest.skipUnless(support.is_resource_enabled('network'), 'network is not enabled') def test_idna(self): # Check for internet access before running test # (issue #12804, issue #25138). with socket_helper.transient_internet('python.org'): socket.gethostbyname('python.org') # these should all be successful domain = 'испытание.pythontest.net' socket.gethostbyname(domain) socket.gethostbyname_ex(domain) socket.getaddrinfo(domain,0,socket.AF_UNSPEC,socket.SOCK_STREAM) # this may not work if the forward lookup chooses the IPv6 address, as that doesn't # have a reverse entry yet # socket.gethostbyaddr('испытание.python.org') def check_sendall_interrupted(self, with_timeout): # socketpair() is not strictly required, but it makes things easier. if not hasattr(signal, 'alarm') or not hasattr(socket, 'socketpair'): self.skipTest("signal.alarm and socket.socketpair required for this test") # Our signal handlers clobber the C errno by calling a math function # with an invalid domain value. 
def ok_handler(*args): self.assertRaises(ValueError, math.acosh, 0) def raising_handler(*args): self.assertRaises(ValueError, math.acosh, 0) 1 // 0 c, s = socket.socketpair() old_alarm = signal.signal(signal.SIGALRM, raising_handler) try: if with_timeout: # Just above the one second minimum for signal.alarm c.settimeout(1.5) with self.assertRaises(ZeroDivisionError): signal.alarm(1) c.sendall(b"x" * support.SOCK_MAX_SIZE) if with_timeout: signal.signal(signal.SIGALRM, ok_handler) signal.alarm(1) self.assertRaises(socket.timeout, c.sendall, b"x" * support.SOCK_MAX_SIZE) finally: signal.alarm(0) signal.signal(signal.SIGALRM, old_alarm) c.close() s.close() def test_sendall_interrupted(self): self.check_sendall_interrupted(False) def test_sendall_interrupted_with_timeout(self): self.check_sendall_interrupted(True) def test_dealloc_warn(self): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) r = repr(sock) with self.assertWarns(ResourceWarning) as cm: sock = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) # An open socket file object gets dereferenced after the socket sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) f = sock.makefile('rb') r = repr(sock) sock = None support.gc_collect() with self.assertWarns(ResourceWarning): f = None support.gc_collect() def test_name_closed_socketio(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: fp = sock.makefile("rb") fp.close() self.assertEqual(repr(fp), "<_io.BufferedReader name=-1>") def test_unusable_closed_socketio(self): with socket.socket() as sock: fp = sock.makefile("rb", buffering=0) self.assertTrue(fp.readable()) self.assertFalse(fp.writable()) self.assertFalse(fp.seekable()) fp.close() self.assertRaises(ValueError, fp.readable) self.assertRaises(ValueError, fp.writable) self.assertRaises(ValueError, fp.seekable) def test_socket_close(self): sock = socket.socket() try: sock.bind((HOST, 0)) socket.close(sock.fileno()) with self.assertRaises(OSError): sock.listen(1) finally: with self.assertRaises(OSError): # sock.close() fails with EBADF sock.close() with self.assertRaises(TypeError): socket.close(None) with self.assertRaises(OSError): socket.close(-1) def test_makefile_mode(self): for mode in 'r', 'rb', 'rw', 'w', 'wb': with self.subTest(mode=mode): with socket.socket() as sock: with sock.makefile(mode) as fp: self.assertEqual(fp.mode, mode) def test_makefile_invalid_mode(self): for mode in 'rt', 'x', '+', 'a': with self.subTest(mode=mode): with socket.socket() as sock: with self.assertRaisesRegex(ValueError, 'invalid mode'): sock.makefile(mode) def test_pickle(self): sock = socket.socket() with sock: for protocol in range(pickle.HIGHEST_PROTOCOL + 1): self.assertRaises(TypeError, pickle.dumps, sock, protocol) for protocol in range(pickle.HIGHEST_PROTOCOL + 1): family = pickle.loads(pickle.dumps(socket.AF_INET, protocol)) self.assertEqual(family, socket.AF_INET) type = pickle.loads(pickle.dumps(socket.SOCK_STREAM, protocol)) self.assertEqual(type, socket.SOCK_STREAM) def test_listen_backlog(self): for backlog in 0, -1: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen(backlog) with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) srv.listen() @support.cpython_only def test_listen_backlog_overflow(self): # Issue 15989 import _testcapi with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv: srv.bind((HOST, 0)) self.assertRaises(OverflowError, srv.listen, _testcapi.INT_MAX + 1) 
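# --- Illustrative sketch, not part of the vendored CPython test suite ---
# test_dealloc_warn above relies on the interpreter emitting a ResourceWarning
# when a socket object is garbage-collected while still open.  A minimal
# standalone reproduction of that behaviour looks roughly like the helper
# below; the function name is hypothetical and exists only for illustration.
def _demo_dealloc_resource_warning():
    import gc
    import socket as _stdlib_socket
    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always", ResourceWarning)
        sock = _stdlib_socket.socket(_stdlib_socket.AF_INET,
                                     _stdlib_socket.SOCK_STREAM)
        del sock       # drop the last reference without calling close()
        gc.collect()   # let the collector finalize the still-open socket
    # The recorded warning mentions the unclosed socket and its descriptor.
    return any(issubclass(w.category, ResourceWarning) for w in caught)
# e.g. _demo_dealloc_resource_warning() is expected to return True on CPython.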
@unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_flowinfo(self): self.assertRaises(OverflowError, socket.getnameinfo, (socket_helper.HOSTv6, 0, 0xffffffff), 0) with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s: self.assertRaises(OverflowError, s.bind, (socket_helper.HOSTv6, 0, -10)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') def test_getaddrinfo_ipv6_basic(self): ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D', # Note capital letter `D`. 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, 0)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') def test_getaddrinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface (Linux, Mac OS X) (ifindex, test_interface) = socket.if_nameindex()[0] ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + test_interface, 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getaddrinfo_ipv6_scopeid_numeric(self): # Also works on Linux and Mac OS X, but is not documented (?) # Windows, Linux and Max OS X allow nonexistent interface numbers here. ifindex = 42 ((*_, sockaddr),) = socket.getaddrinfo( 'ff02::1de:c0:face:8D%' + str(ifindex), 1234, socket.AF_INET6, socket.SOCK_DGRAM, socket.IPPROTO_UDP ) # Note missing interface name part in IPv6 address self.assertEqual(sockaddr, ('ff02::1de:c0:face:8d', 1234, 0, ifindex)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipIf(sys.platform == 'win32', 'does not work on Windows') @unittest.skipIf(AIX, 'Symbolic scope id does not work') def test_getnameinfo_ipv6_scopeid_symbolic(self): # Just pick up any network interface. (ifindex, test_interface) = socket.if_nameindex()[0] sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + test_interface, '1234')) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless( sys.platform == 'win32', 'Numeric scope id does not work or undocumented') def test_getnameinfo_ipv6_scopeid_numeric(self): # Also works on Linux (undocumented), but does not work on Mac OS X # Windows and Linux allow nonexistent interface numbers here. ifindex = 42 sockaddr = ('ff02::1de:c0:face:8D', 1234, 0, ifindex) # Note capital letter `D`. nameinfo = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) self.assertEqual(nameinfo, ('ff02::1de:c0:face:8d%' + str(ifindex), '1234')) def test_str_for_enums(self): # Make sure that the AF_* and SOCK_* constants have enum-like string # reprs. 
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: self.assertEqual(str(s.family), 'AddressFamily.AF_INET') self.assertEqual(str(s.type), 'SocketKind.SOCK_STREAM') def test_socket_consistent_sock_type(self): SOCK_NONBLOCK = getattr(socket, 'SOCK_NONBLOCK', 0) SOCK_CLOEXEC = getattr(socket, 'SOCK_CLOEXEC', 0) sock_type = socket.SOCK_STREAM | SOCK_NONBLOCK | SOCK_CLOEXEC with socket.socket(socket.AF_INET, sock_type) as s: self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(1) self.assertEqual(s.type, socket.SOCK_STREAM) s.settimeout(0) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(True) self.assertEqual(s.type, socket.SOCK_STREAM) s.setblocking(False) self.assertEqual(s.type, socket.SOCK_STREAM) def test_unknown_socket_family_repr(self): # Test that when created with a family that's not one of the known # AF_*/SOCK_* constants, socket.family just returns the number. # # To do this we fool socket.socket into believing it already has an # open fd because on this path it doesn't actually verify the family and # type and populates the socket object. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) fd = sock.detach() unknown_family = max(socket.AddressFamily.__members__.values()) + 1 unknown_type = max( kind for name, kind in socket.SocketKind.__members__.items() if name not in {'SOCK_NONBLOCK', 'SOCK_CLOEXEC'} ) + 1 with socket.socket( family=unknown_family, type=unknown_type, proto=23, fileno=fd) as s: self.assertEqual(s.family, unknown_family) self.assertEqual(s.type, unknown_type) # some OS like macOS ignore proto self.assertIn(s.proto, {0, 23}) @unittest.skipUnless(hasattr(os, 'sendfile'), 'test needs os.sendfile()') def test__sendfile_use_sendfile(self): class File: def __init__(self, fd): self.fd = fd def fileno(self): return self.fd with socket.socket() as sock: fd = os.open(os.curdir, os.O_RDONLY) os.close(fd) with self.assertRaises(socket._GiveupOnSendfile): sock._sendfile_use_sendfile(File(fd)) with self.assertRaises(OverflowError): sock._sendfile_use_sendfile(File(2**1000)) with self.assertRaises(TypeError): sock._sendfile_use_sendfile(File(None)) def _test_socket_fileno(self, s, family, stype): self.assertEqual(s.family, family) self.assertEqual(s.type, stype) fd = s.fileno() s2 = socket.socket(fileno=fd) self.addCleanup(s2.close) # detach old fd to avoid double close s.detach() self.assertEqual(s2.family, family) self.assertEqual(s2.type, stype) self.assertEqual(s2.fileno(), fd) def test_socket_fileno(self): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_STREAM) if hasattr(socket, "SOCK_DGRAM"): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.addCleanup(s.close) s.bind((socket_helper.HOST, 0)) self._test_socket_fileno(s, socket.AF_INET, socket.SOCK_DGRAM) if socket_helper.IPV6_ENABLED: s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.addCleanup(s.close) s.bind((socket_helper.HOSTv6, 0, 0, 0)) self._test_socket_fileno(s, socket.AF_INET6, socket.SOCK_STREAM) if hasattr(socket, "AF_UNIX"): tmpdir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, tmpdir) s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.addCleanup(s.close) try: s.bind(os.path.join(tmpdir, 'socket')) except PermissionError: pass else: self._test_socket_fileno(s, socket.AF_UNIX, socket.SOCK_STREAM) def test_socket_fileno_rejects_float(self): with self.assertRaisesRegex(TypeError, "integer argument expected"): 
socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=42.5) def test_socket_fileno_rejects_other_types(self): with self.assertRaisesRegex(TypeError, "integer is required"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno="foo") def test_socket_fileno_rejects_invalid_socket(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-1) @unittest.skipIf(os.name == "nt", "Windows disallows -1 only") def test_socket_fileno_rejects_negative(self): with self.assertRaisesRegex(ValueError, "negative file descriptor"): socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=-42) def test_socket_fileno_requires_valid_fd(self): WSAENOTSOCK = 10038 with self.assertRaises(OSError) as cm: socket.socket(fileno=support.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=support.make_bad_fd()) self.assertIn(cm.exception.errno, (errno.EBADF, WSAENOTSOCK)) def test_socket_fileno_requires_socket_fd(self): with tempfile.NamedTemporaryFile() as afile: with self.assertRaises(OSError): socket.socket(fileno=afile.fileno()) with self.assertRaises(OSError) as cm: socket.socket( socket.AF_INET, socket.SOCK_STREAM, fileno=afile.fileno()) self.assertEqual(cm.exception.errno, errno.ENOTSOCK) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class BasicCANTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_RAW @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCMConstants(self): socket.CAN_BCM # opcodes socket.CAN_BCM_TX_SETUP # create (cyclic) transmission task socket.CAN_BCM_TX_DELETE # remove (cyclic) transmission task socket.CAN_BCM_TX_READ # read properties of (cyclic) transmission task socket.CAN_BCM_TX_SEND # send one CAN frame socket.CAN_BCM_RX_SETUP # create RX content filter subscription socket.CAN_BCM_RX_DELETE # remove RX content filter subscription socket.CAN_BCM_RX_READ # read properties of RX content filter subscription socket.CAN_BCM_TX_STATUS # reply to TX_READ request socket.CAN_BCM_TX_EXPIRED # notification on performed transmissions (count=0) socket.CAN_BCM_RX_STATUS # reply to RX_READ request socket.CAN_BCM_RX_TIMEOUT # cyclic message is absent socket.CAN_BCM_RX_CHANGED # updated CAN frame (detected content change) # flags socket.CAN_BCM_SETTIMER socket.CAN_BCM_STARTTIMER socket.CAN_BCM_TX_COUNTEVT socket.CAN_BCM_TX_ANNOUNCE socket.CAN_BCM_TX_CP_CAN_ID socket.CAN_BCM_RX_FILTER_ID socket.CAN_BCM_RX_CHECK_DLC socket.CAN_BCM_RX_NO_AUTOTIMER socket.CAN_BCM_RX_ANNOUNCE_RESUME socket.CAN_BCM_TX_RESET_MULTI_IDX socket.CAN_BCM_RX_RTR_FRAME def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testCreateBCMSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) as s: pass def testBindAny(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: address = ('', ) s.bind(address) self.assertEqual(s.getsockname(), address) def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: self.assertRaisesRegex(OSError, 'interface name too long', s.bind, ('x' * 1024,)) 
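# --- Illustrative sketch, not part of the vendored CPython test suite ---
# The CAN tests below (CANTest.build_can_frame()/dissect_can_frame()) pack and
# unpack classic CAN frames with the struct module.  Assuming the usual
# 16-byte "struct can_frame" layout from <linux/can.h> (32-bit id, 8-bit data
# length code, 3 pad bytes, 8 data bytes) -- consistent with the
# "assert len(self.cf) == 16" check in testBCM -- a standalone round trip
# looks roughly like this; the names here are hypothetical.
import struct as _struct

_CAN_FRAME_FMT = "=IB3x8s"  # assumed layout; the suite defines its own can_frame_fmt

def _demo_can_frame_roundtrip(can_id, payload):
    # Pack: record the real payload length in the DLC byte and null-pad the
    # data field out to 8 bytes, as build_can_frame() does.
    frame = _struct.pack(_CAN_FRAME_FMT, can_id, len(payload),
                         payload.ljust(8, b"\x00"))
    # Unpack: slice the 8-byte data field back down to the DLC length.
    unpacked_id, dlc, data = _struct.unpack(_CAN_FRAME_FMT, frame)
    return unpacked_id, data[:dlc]
# e.g. _demo_can_frame_roundtrip(0x123, b"\xc0\xff\xee") -> (0x123, b"\xc0\xff\xee")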
@unittest.skipUnless(hasattr(socket, "CAN_RAW_LOOPBACK"), 'socket.CAN_RAW_LOOPBACK required for this test.') def testLoopback(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: for loopback in (0, 1): s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK, loopback) self.assertEqual(loopback, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_LOOPBACK)) @unittest.skipUnless(hasattr(socket, "CAN_RAW_FILTER"), 'socket.CAN_RAW_FILTER required for this test.') def testFilter(self): can_id, can_mask = 0x200, 0x700 can_filter = struct.pack("=II", can_id, can_mask) with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter) self.assertEqual(can_filter, s.getsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, 8)) s.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, bytearray(can_filter)) @unittest.skipUnless(HAVE_SOCKET_CAN, 'SocketCan required for this test.') class CANTest(ThreadedCANSocketTest): def __init__(self, methodName='runTest'): ThreadedCANSocketTest.__init__(self, methodName=methodName) @classmethod def build_can_frame(cls, can_id, data): """Build a CAN frame.""" can_dlc = len(data) data = data.ljust(8, b'\x00') return struct.pack(cls.can_frame_fmt, can_id, can_dlc, data) @classmethod def dissect_can_frame(cls, frame): """Dissect a CAN frame.""" can_id, can_dlc, data = struct.unpack(cls.can_frame_fmt, frame) return (can_id, can_dlc, data[:can_dlc]) def testSendFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) self.assertEqual(addr[0], self.interface) def _testSendFrame(self): self.cf = self.build_can_frame(0x00, b'\x01\x02\x03\x04\x05') self.cli.send(self.cf) def testSendMaxFrame(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) def _testSendMaxFrame(self): self.cf = self.build_can_frame(0x00, b'\x07' * 8) self.cli.send(self.cf) def testSendMultiFrames(self): cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf1, cf) cf, addr = self.s.recvfrom(self.bufsize) self.assertEqual(self.cf2, cf) def _testSendMultiFrames(self): self.cf1 = self.build_can_frame(0x07, b'\x44\x33\x22\x11') self.cli.send(self.cf1) self.cf2 = self.build_can_frame(0x12, b'\x99\x22\x33') self.cli.send(self.cf2) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def _testBCM(self): cf, addr = self.cli.recvfrom(self.bufsize) self.assertEqual(self.cf, cf) can_id, can_dlc, data = self.dissect_can_frame(cf) self.assertEqual(self.can_id, can_id) self.assertEqual(self.data, data) @unittest.skipUnless(hasattr(socket, "CAN_BCM"), 'socket.CAN_BCM required for this test.') def testBCM(self): bcm = socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_BCM) self.addCleanup(bcm.close) bcm.connect((self.interface,)) self.can_id = 0x123 self.data = bytes([0xc0, 0xff, 0xee]) self.cf = self.build_can_frame(self.can_id, self.data) opcode = socket.CAN_BCM_TX_SEND flags = 0 count = 0 ival1_seconds = ival1_usec = ival2_seconds = ival2_usec = 0 bcm_can_id = 0x0222 nframes = 1 assert len(self.cf) == 16 header = struct.pack(self.bcm_cmd_msg_fmt, opcode, flags, count, ival1_seconds, ival1_usec, ival2_seconds, ival2_usec, bcm_can_id, nframes, ) header_plus_frame = header + self.cf bytes_sent = bcm.send(header_plus_frame) self.assertEqual(bytes_sent, len(header_plus_frame)) @unittest.skipUnless(HAVE_SOCKET_CAN_ISOTP, 'CAN ISOTP required for this test.') class ISOTPTest(unittest.TestCase): def __init__(self, *args, 
**kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" def testCrucialConstants(self): socket.AF_CAN socket.PF_CAN socket.CAN_ISOTP socket.SOCK_DGRAM def testCreateSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as s: pass @unittest.skipUnless(hasattr(socket, "CAN_ISOTP"), 'socket.CAN_ISOTP required for this test.') def testCreateISOTPSocket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: pass def testTooLongInterfaceName(self): # most systems limit IFNAMSIZ to 16, take 1024 to be sure with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: with self.assertRaisesRegex(OSError, 'interface name too long'): s.bind(('x' * 1024, 1, 2)) def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP) as s: addr = self.interface, 0x123, 0x456 s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_CAN_J1939, 'CAN J1939 required for this test.') class J1939Test(unittest.TestCase): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.interface = "vcan0" @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testJ1939Constants(self): socket.CAN_J1939 socket.J1939_MAX_UNICAST_ADDR socket.J1939_IDLE_ADDR socket.J1939_NO_ADDR socket.J1939_NO_NAME socket.J1939_PGN_REQUEST socket.J1939_PGN_ADDRESS_CLAIMED socket.J1939_PGN_ADDRESS_COMMANDED socket.J1939_PGN_PDU1_MAX socket.J1939_PGN_MAX socket.J1939_NO_PGN # J1939 socket options socket.SO_J1939_FILTER socket.SO_J1939_PROMISC socket.SO_J1939_SEND_PRIO socket.SO_J1939_ERRQUEUE socket.SCM_J1939_DEST_ADDR socket.SCM_J1939_DEST_NAME socket.SCM_J1939_PRIO socket.SCM_J1939_ERRQUEUE socket.J1939_NLA_PAD socket.J1939_NLA_BYTES_ACKED socket.J1939_EE_INFO_NONE socket.J1939_EE_INFO_TX_ABORT socket.J1939_FILTER_MAX @unittest.skipUnless(hasattr(socket, "CAN_J1939"), 'socket.CAN_J1939 required for this test.') def testCreateJ1939Socket(self): with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: pass def testBind(self): try: with socket.socket(socket.PF_CAN, socket.SOCK_DGRAM, socket.CAN_J1939) as s: addr = self.interface, socket.J1939_NO_NAME, socket.J1939_NO_PGN, socket.J1939_NO_ADDR s.bind(addr) self.assertEqual(s.getsockname(), addr) except OSError as e: if e.errno == errno.ENODEV: self.skipTest('network interface `%s` does not exist' % self.interface) else: raise @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class BasicRDSTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_RDS socket.PF_RDS def testCreateSocket(self): with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: pass def testSocketBufferSize(self): bufsize = 16384 with socket.socket(socket.PF_RDS, socket.SOCK_SEQPACKET, 0) as s: s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize) s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize) @unittest.skipUnless(HAVE_SOCKET_RDS, 'RDS sockets required for this test.') class RDSTest(ThreadedRDSSocketTest): def __init__(self, methodName='runTest'): ThreadedRDSSocketTest.__init__(self, methodName=methodName) def setUp(self): super().setUp() self.evt = threading.Event() def testSendAndRecv(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) self.assertEqual(self.cli_addr, addr) def 
_testSendAndRecv(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) def testPeek(self): data, addr = self.serv.recvfrom(self.bufsize, socket.MSG_PEEK) self.assertEqual(self.data, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testPeek(self): self.data = b'spam' self.cli.sendto(self.data, 0, (HOST, self.port)) @requireAttrs(socket.socket, 'recvmsg') def testSendAndRecvMsg(self): data, ancdata, msg_flags, addr = self.serv.recvmsg(self.bufsize) self.assertEqual(self.data, data) @requireAttrs(socket.socket, 'sendmsg') def _testSendAndRecvMsg(self): self.data = b'hello ' * 10 self.cli.sendmsg([self.data], (), 0, (HOST, self.port)) def testSendAndRecvMulti(self): data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data1, data) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data2, data) def _testSendAndRecvMulti(self): self.data1 = b'bacon' self.cli.sendto(self.data1, 0, (HOST, self.port)) self.data2 = b'egg' self.cli.sendto(self.data2, 0, (HOST, self.port)) def testSelect(self): r, w, x = select.select([self.serv], [], [], 3.0) self.assertIn(self.serv, r) data, addr = self.serv.recvfrom(self.bufsize) self.assertEqual(self.data, data) def _testSelect(self): self.data = b'select' self.cli.sendto(self.data, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_QIPCRTR, 'QIPCRTR sockets required for this test.') class BasicQIPCRTRTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_QIPCRTR def testCreateSocket(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: pass def testUnbound(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertEqual(s.getsockname()[1], 0) def testBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: socket_helper.bind_port(s, host=s.getsockname()[0]) self.assertNotEqual(s.getsockname()[1], 0) def testInvalidBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: self.assertRaises(OSError, socket_helper.bind_port, s, host=-2) def testAutoBindSock(self): with socket.socket(socket.AF_QIPCRTR, socket.SOCK_DGRAM) as s: s.connect((123, 123)) self.assertNotEqual(s.getsockname()[1], 0) @unittest.skipIf(fcntl is None, "need fcntl") @unittest.skipUnless(HAVE_SOCKET_VSOCK, 'VSOCK sockets required for this test.') class BasicVSOCKTest(unittest.TestCase): def testCrucialConstants(self): socket.AF_VSOCK def testVSOCKConstants(self): socket.SO_VM_SOCKETS_BUFFER_SIZE socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE socket.VMADDR_CID_ANY socket.VMADDR_PORT_ANY socket.VMADDR_CID_HOST socket.VM_SOCKETS_INVALID_VERSION socket.IOCTL_VM_SOCKETS_GET_LOCAL_CID def testCreateSocket(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: pass def testSocketBufferSize(self): with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s: orig_max = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE) orig = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE) orig_min = s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE, orig_max * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_SIZE, orig * 2) s.setsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE, orig_min * 2) self.assertEqual(orig_max * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MAX_SIZE)) self.assertEqual(orig * 2, s.getsockopt(socket.AF_VSOCK, 
socket.SO_VM_SOCKETS_BUFFER_SIZE)) self.assertEqual(orig_min * 2, s.getsockopt(socket.AF_VSOCK, socket.SO_VM_SOCKETS_BUFFER_MIN_SIZE)) @unittest.skipUnless(HAVE_SOCKET_BLUETOOTH, 'Bluetooth sockets required for this test.') class BasicBluetoothTest(unittest.TestCase): def testBluetoothConstants(self): socket.BDADDR_ANY socket.BDADDR_LOCAL socket.AF_BLUETOOTH socket.BTPROTO_RFCOMM if sys.platform != "win32": socket.BTPROTO_HCI socket.SOL_HCI socket.BTPROTO_L2CAP if not sys.platform.startswith("freebsd"): socket.BTPROTO_SCO def testCreateRfcommSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support L2CAP sockets") def testCreateL2capSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_L2CAP) as s: pass @unittest.skipIf(sys.platform == "win32", "windows does not support HCI sockets") def testCreateHciSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI) as s: pass @unittest.skipIf(sys.platform == "win32" or sys.platform.startswith("freebsd"), "windows and freebsd do not support SCO sockets") def testCreateScoSocket(self): with socket.socket(socket.AF_BLUETOOTH, socket.SOCK_SEQPACKET, socket.BTPROTO_SCO) as s: pass class BasicTCPTest(SocketConnectedTest): def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecv(self): # Testing large receive over TCP msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.serv_conn.send(MSG) def testOverFlowRecv(self): # Testing receive in chunks over TCP seg1 = self.cli_conn.recv(len(MSG) - 3) seg2 = self.cli_conn.recv(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecv(self): self.serv_conn.send(MSG) def testRecvFrom(self): # Testing large recvfrom() over TCP msg, addr = self.cli_conn.recvfrom(1024) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.serv_conn.send(MSG) def testOverFlowRecvFrom(self): # Testing recvfrom() in chunks over TCP seg1, addr = self.cli_conn.recvfrom(len(MSG)-3) seg2, addr = self.cli_conn.recvfrom(1024) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testOverFlowRecvFrom(self): self.serv_conn.send(MSG) def testSendAll(self): # Testing sendall() with a 2048 byte string over TCP msg = b'' while 1: read = self.cli_conn.recv(1024) if not read: break msg += read self.assertEqual(msg, b'f' * 2048) def _testSendAll(self): big_chunk = b'f' * 2048 self.serv_conn.sendall(big_chunk) def testFromFd(self): # Testing fromfd() fd = self.cli_conn.fileno() sock = socket.fromfd(fd, socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) self.assertIsInstance(sock, socket.socket) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testFromFd(self): self.serv_conn.send(MSG) def testDup(self): # Testing dup() sock = self.cli_conn.dup() self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDup(self): self.serv_conn.send(MSG) def testShutdown(self): # Testing shutdown() msg = self.cli_conn.recv(1024) self.assertEqual(msg, MSG) # wait for _testShutdown to finish: on OS X, when the server # closes the connection the client also becomes disconnected, # and the client's shutdown call will fail. (Issue #4397.) 
self.done.wait() def _testShutdown(self): self.serv_conn.send(MSG) self.serv_conn.shutdown(2) testShutdown_overflow = support.cpython_only(testShutdown) @support.cpython_only def _testShutdown_overflow(self): import _testcapi self.serv_conn.send(MSG) # Issue 15989 self.assertRaises(OverflowError, self.serv_conn.shutdown, _testcapi.INT_MAX + 1) self.assertRaises(OverflowError, self.serv_conn.shutdown, 2 + (_testcapi.UINT_MAX + 1)) self.serv_conn.shutdown(2) def testDetach(self): # Testing detach() fileno = self.cli_conn.fileno() f = self.cli_conn.detach() self.assertEqual(f, fileno) # cli_conn cannot be used anymore... self.assertTrue(self.cli_conn._closed) self.assertRaises(OSError, self.cli_conn.recv, 1024) self.cli_conn.close() # ...but we can create another socket using the (still open) # file descriptor sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=f) self.addCleanup(sock.close) msg = sock.recv(1024) self.assertEqual(msg, MSG) def _testDetach(self): self.serv_conn.send(MSG) class BasicUDPTest(ThreadedUDPSocketTest): def __init__(self, methodName='runTest'): ThreadedUDPSocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDP msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDP msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class BasicUDPLITETest(ThreadedUDPLITESocketTest): def __init__(self, methodName='runTest'): ThreadedUDPLITESocketTest.__init__(self, methodName=methodName) def testSendtoAndRecv(self): # Testing sendto() and Recv() over UDPLITE msg = self.serv.recv(len(MSG)) self.assertEqual(msg, MSG) def _testSendtoAndRecv(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFrom(self): # Testing recvfrom() over UDPLITE msg, addr = self.serv.recvfrom(len(MSG)) self.assertEqual(msg, MSG) def _testRecvFrom(self): self.cli.sendto(MSG, 0, (HOST, self.port)) def testRecvFromNegative(self): # Negative lengths passed to recvfrom should give ValueError. self.assertRaises(ValueError, self.serv.recvfrom, -1) def _testRecvFromNegative(self): self.cli.sendto(MSG, 0, (HOST, self.port)) # Tests for the sendmsg()/recvmsg() interface. Where possible, the # same test code is used with different families and types of socket # (e.g. stream, datagram), and tests using recvmsg() are repeated # using recvmsg_into(). # # The generic test classes such as SendmsgTests and # RecvmsgGenericTests inherit from SendrecvmsgBase and expect to be # supplied with sockets cli_sock and serv_sock representing the # client's and the server's end of the connection respectively, and # attributes cli_addr and serv_addr holding their (numeric where # appropriate) addresses. 
# # The final concrete test classes combine these with subclasses of # SocketTestBase which set up client and server sockets of a specific # type, and with subclasses of SendrecvmsgBase such as # SendrecvmsgDgramBase and SendrecvmsgConnectedBase which map these # sockets to cli_sock and serv_sock and override the methods and # attributes of SendrecvmsgBase to fill in destination addresses if # needed when sending, check for specific flags in msg_flags, etc. # # RecvmsgIntoMixin provides a version of doRecvmsg() implemented using # recvmsg_into(). # XXX: like the other datagram (UDP) tests in this module, the code # here assumes that datagram delivery on the local machine will be # reliable. class SendrecvmsgBase(ThreadSafeCleanupTestCase): # Base class for sendmsg()/recvmsg() tests. # Time in seconds to wait before considering a test failed, or # None for no timeout. Not all tests actually set a timeout. fail_timeout = support.LOOPBACK_TIMEOUT def setUp(self): self.misc_event = threading.Event() super().setUp() def sendToServer(self, msg): # Send msg to the server. return self.cli_sock.send(msg) # Tuple of alternative default arguments for sendmsg() when called # via sendmsgToServer() (e.g. to include a destination address). sendmsg_to_server_defaults = () def sendmsgToServer(self, *args): # Call sendmsg() on self.cli_sock with the given arguments, # filling in any arguments which are not supplied with the # corresponding items of self.sendmsg_to_server_defaults, if # any. return self.cli_sock.sendmsg( *(args + self.sendmsg_to_server_defaults[len(args):])) def doRecvmsg(self, sock, bufsize, *args): # Call recvmsg() on sock with given arguments and return its # result. Should be used for tests which can use either # recvmsg() or recvmsg_into() - RecvmsgIntoMixin overrides # this method with one which emulates it using recvmsg_into(), # thus allowing the same test to be used for both methods. result = sock.recvmsg(bufsize, *args) self.registerRecvmsgResult(result) return result def registerRecvmsgResult(self, result): # Called by doRecvmsg() with the return value of recvmsg() or # recvmsg_into(). Can be overridden to arrange cleanup based # on the returned ancillary data, for instance. pass def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer. self.assertEqual(addr1, addr2) # Flags that are normally unset in msg_flags msg_flags_common_unset = 0 for name in ("MSG_CTRUNC", "MSG_OOB"): msg_flags_common_unset |= getattr(socket, name, 0) # Flags that are normally set msg_flags_common_set = 0 # Flags set when a complete record has been received (e.g. MSG_EOR # for SCTP) msg_flags_eor_indicator = 0 # Flags set when a complete record has not been received # (e.g. MSG_TRUNC for datagram sockets) msg_flags_non_eor_indicator = 0 def checkFlags(self, flags, eor=None, checkset=0, checkunset=0, ignore=0): # Method to check the value of msg_flags returned by recvmsg[_into](). # # Checks that all bits in msg_flags_common_set attribute are # set in "flags" and all bits in msg_flags_common_unset are # unset. # # The "eor" argument specifies whether the flags should # indicate that a full record (or datagram) has been received. 
# If "eor" is None, no checks are done; otherwise, checks # that: # # * if "eor" is true, all bits in msg_flags_eor_indicator are # set and all bits in msg_flags_non_eor_indicator are unset # # * if "eor" is false, all bits in msg_flags_non_eor_indicator # are set and all bits in msg_flags_eor_indicator are unset # # If "checkset" and/or "checkunset" are supplied, they require # the given bits to be set or unset respectively, overriding # what the attributes require for those bits. # # If any bits are set in "ignore", they will not be checked, # regardless of the other inputs. # # Will raise Exception if the inputs require a bit to be both # set and unset, and it is not ignored. defaultset = self.msg_flags_common_set defaultunset = self.msg_flags_common_unset if eor: defaultset |= self.msg_flags_eor_indicator defaultunset |= self.msg_flags_non_eor_indicator elif eor is not None: defaultset |= self.msg_flags_non_eor_indicator defaultunset |= self.msg_flags_eor_indicator # Function arguments override defaults defaultset &= ~checkunset defaultunset &= ~checkset # Merge arguments with remaining defaults, and check for conflicts checkset |= defaultset checkunset |= defaultunset inboth = checkset & checkunset & ~ignore if inboth: raise Exception("contradictory set, unset requirements for flags " "{0:#x}".format(inboth)) # Compare with given msg_flags value mask = (checkset | checkunset) & ~ignore self.assertEqual(flags & mask, checkset & mask) class RecvmsgIntoMixin(SendrecvmsgBase): # Mixin to implement doRecvmsg() using recvmsg_into(). def doRecvmsg(self, sock, bufsize, *args): buf = bytearray(bufsize) result = sock.recvmsg_into([buf], *args) self.registerRecvmsgResult(result) self.assertGreaterEqual(result[0], 0) self.assertLessEqual(result[0], bufsize) return (bytes(buf[:result[0]]),) + result[1:] class SendrecvmsgDgramFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for datagram sockets. @property def msg_flags_non_eor_indicator(self): return super().msg_flags_non_eor_indicator | socket.MSG_TRUNC class SendrecvmsgSCTPFlagsBase(SendrecvmsgBase): # Defines flags to be checked in msg_flags for SCTP sockets. @property def msg_flags_eor_indicator(self): return super().msg_flags_eor_indicator | socket.MSG_EOR class SendrecvmsgConnectionlessBase(SendrecvmsgBase): # Base class for tests on connectionless-mode sockets. Users must # supply sockets on attributes cli and serv to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.serv @property def cli_sock(self): return self.cli @property def sendmsg_to_server_defaults(self): return ([], [], 0, self.serv_addr) def sendToServer(self, msg): return self.cli_sock.sendto(msg, self.serv_addr) class SendrecvmsgConnectedBase(SendrecvmsgBase): # Base class for tests on connected sockets. Users must supply # sockets on attributes serv_conn and cli_conn (representing the # connections *to* the server and the client), to be mapped to # cli_sock and serv_sock respectively. @property def serv_sock(self): return self.cli_conn @property def cli_sock(self): return self.serv_conn def checkRecvmsgAddress(self, addr1, addr2): # Address is currently "unspecified" for a connected socket, # so we don't examine it pass class SendrecvmsgServerTimeoutBase(SendrecvmsgBase): # Base class to set a timeout on server's socket. 
def setUp(self): super().setUp() self.serv_sock.settimeout(self.fail_timeout) class SendmsgTests(SendrecvmsgServerTimeoutBase): # Tests for sendmsg() which can use any socket type and do not # involve recvmsg() or recvmsg_into(). def testSendmsg(self): # Send a simple message with sendmsg(). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG]), len(MSG)) def testSendmsgDataGenerator(self): # Send from buffer obtained from a generator (not a sequence). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgDataGenerator(self): self.assertEqual(self.sendmsgToServer((o for o in [MSG])), len(MSG)) def testSendmsgAncillaryGenerator(self): # Gather (empty) ancillary data from a generator. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgAncillaryGenerator(self): self.assertEqual(self.sendmsgToServer([MSG], (o for o in [])), len(MSG)) def testSendmsgArray(self): # Send data from an array instead of the usual bytes object. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgArray(self): self.assertEqual(self.sendmsgToServer([array.array("B", MSG)]), len(MSG)) def testSendmsgGather(self): # Send message data from more than one buffer (gather write). self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgGather(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) def testSendmsgBadArgs(self): # Check that sendmsg() rejects invalid arguments. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadArgs(self): self.assertRaises(TypeError, self.cli_sock.sendmsg) self.assertRaises(TypeError, self.sendmsgToServer, b"not in an iterable") self.assertRaises(TypeError, self.sendmsgToServer, object()) self.assertRaises(TypeError, self.sendmsgToServer, [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG, object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], object()) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [], 0, object()) self.sendToServer(b"done") def testSendmsgBadCmsg(self): # Check that invalid ancillary data items are rejected. self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgBadCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [object()]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(object(), 0, b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, object(), b"data")]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, object())]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0)]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b"data", 42)]) self.sendToServer(b"done") @requireAttrs(socket, "CMSG_SPACE") def testSendmsgBadMultiCmsg(self): # Check that invalid ancillary data items are rejected when # more than one item is present. self.assertEqual(self.serv_sock.recv(1000), b"done") @testSendmsgBadMultiCmsg.client_skip def _testSendmsgBadMultiCmsg(self): self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [0, 0, b""]) self.assertRaises(TypeError, self.sendmsgToServer, [MSG], [(0, 0, b""), object()]) self.sendToServer(b"done") def testSendmsgExcessCmsgReject(self): # Check that sendmsg() rejects excess ancillary data items # when the number that can be sent is limited. 
self.assertEqual(self.serv_sock.recv(1000), b"done") def _testSendmsgExcessCmsgReject(self): if not hasattr(socket, "CMSG_SPACE"): # Can only send one item with self.assertRaises(OSError) as cm: self.sendmsgToServer([MSG], [(0, 0, b""), (0, 0, b"")]) self.assertIsNone(cm.exception.errno) self.sendToServer(b"done") def testSendmsgAfterClose(self): # Check that sendmsg() fails on a closed socket. pass def _testSendmsgAfterClose(self): self.cli_sock.close() self.assertRaises(OSError, self.sendmsgToServer, [MSG]) class SendmsgStreamTests(SendmsgTests): # Tests for sendmsg() which require a stream socket and do not # involve recvmsg() or recvmsg_into(). def testSendmsgExplicitNoneAddr(self): # Check that peer address can be specified as None. self.assertEqual(self.serv_sock.recv(len(MSG)), MSG) def _testSendmsgExplicitNoneAddr(self): self.assertEqual(self.sendmsgToServer([MSG], [], 0, None), len(MSG)) def testSendmsgTimeout(self): # Check that timeout works with sendmsg(). self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) def _testSendmsgTimeout(self): try: self.cli_sock.settimeout(0.03) try: while True: self.sendmsgToServer([b"a"*512]) except socket.timeout: pass except OSError as exc: if exc.errno != errno.ENOMEM: raise # bpo-33937 the test randomly fails on Travis CI with # "OSError: [Errno 12] Cannot allocate memory" else: self.fail("socket.timeout not raised") finally: self.misc_event.set() # XXX: would be nice to have more tests for sendmsg flags argument. # Linux supports MSG_DONTWAIT when sending, but in general, it # only works when receiving. Could add other platforms if they # support it too. @skipWithClientIf(sys.platform not in {"linux"}, "MSG_DONTWAIT not known to work on this platform when " "sending") def testSendmsgDontWait(self): # Check that MSG_DONTWAIT in flags causes non-blocking behaviour. self.assertEqual(self.serv_sock.recv(512), b"a"*512) self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @testSendmsgDontWait.client_skip def _testSendmsgDontWait(self): try: with self.assertRaises(OSError) as cm: while True: self.sendmsgToServer([b"a"*512], [], socket.MSG_DONTWAIT) # bpo-33937: catch also ENOMEM, the test randomly fails on Travis CI # with "OSError: [Errno 12] Cannot allocate memory" self.assertIn(cm.exception.errno, (errno.EAGAIN, errno.EWOULDBLOCK, errno.ENOMEM)) finally: self.misc_event.set() class SendmsgConnectionlessTests(SendmsgTests): # Tests for sendmsg() which require a connectionless-mode # (e.g. datagram) socket, and do not involve recvmsg() or # recvmsg_into(). def testSendmsgNoDestAddr(self): # Check that sendmsg() fails when no destination address is # given for unconnected socket. pass def _testSendmsgNoDestAddr(self): self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG]) self.assertRaises(OSError, self.cli_sock.sendmsg, [MSG], [], 0, None) class RecvmsgGenericTests(SendrecvmsgBase): # Tests for recvmsg() which can also be emulated using # recvmsg_into(), and can use any socket type. def testRecvmsg(self): # Receive a simple message with recvmsg[_into](). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsg(self): self.sendToServer(MSG) def testRecvmsgExplicitDefaults(self): # Test recvmsg[_into]() with default arguments provided explicitly. 
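        # For reference (background only, not something this test asserts
        # beyond the call below): recvmsg(bufsize[, ancbufsize[, flags]])
        # returns a 4-tuple,
        #
        #     data, ancdata, msg_flags, address = sock.recvmsg(1024, 0, 0)
        #
        # where ancdata is a list of (cmsg_level, cmsg_type, cmsg_data)
        # tuples.  The documented defaults for ancbufsize and flags are both
        # 0, so passing them explicitly, as done here, should behave exactly
        # like omitting them.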
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgExplicitDefaults(self): self.sendToServer(MSG) def testRecvmsgShorter(self): # Receive a message smaller than buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) + 42) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShorter(self): self.sendToServer(MSG) def testRecvmsgTrunc(self): # Receive part of message, check for truncation indicators. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) def _testRecvmsgTrunc(self): self.sendToServer(MSG) def testRecvmsgShortAncillaryBuf(self): # Test ancillary data buffer too small to hold any ancillary data. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgShortAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgLongAncillaryBuf(self): # Test large ancillary data buffer. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgLongAncillaryBuf(self): self.sendToServer(MSG) def testRecvmsgAfterClose(self): # Check that recvmsg[_into]() fails on a closed socket. self.serv_sock.close() self.assertRaises(OSError, self.doRecvmsg, self.serv_sock, 1024) def _testRecvmsgAfterClose(self): pass def testRecvmsgTimeout(self): # Check that timeout works. try: self.serv_sock.settimeout(0.03) self.assertRaises(socket.timeout, self.doRecvmsg, self.serv_sock, len(MSG)) finally: self.misc_event.set() def _testRecvmsgTimeout(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) @requireAttrs(socket, "MSG_PEEK") def testRecvmsgPeek(self): # Check that MSG_PEEK in flags enables examination of pending # data without consuming it. # Receive part of data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3, 0, socket.MSG_PEEK) self.assertEqual(msg, MSG[:-3]) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) # Ignoring MSG_TRUNC here (so this test is the same for stream # and datagram sockets). Some wording in POSIX seems to # suggest that it needn't be set when peeking, but that may # just be a slip. self.checkFlags(flags, eor=False, ignore=getattr(socket, "MSG_TRUNC", 0)) # Receive all data with MSG_PEEK. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 0, socket.MSG_PEEK) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) # Check that the same data can still be received normally. 
msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgPeek.client_skip def _testRecvmsgPeek(self): self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") def testRecvmsgFromSendmsg(self): # Test receiving with recvmsg[_into]() when message is sent # using sendmsg(). self.serv_sock.settimeout(self.fail_timeout) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG)) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) @testRecvmsgFromSendmsg.client_skip def _testRecvmsgFromSendmsg(self): self.assertEqual(self.sendmsgToServer([MSG[:3], MSG[3:]]), len(MSG)) class RecvmsgGenericStreamTests(RecvmsgGenericTests): # Tests which require a stream socket and can use either recvmsg() # or recvmsg_into(). def testRecvmsgEOF(self): # Receive end-of-stream indicator (b"", peer socket closed). msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.assertEqual(msg, b"") self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=None) # Might not have end-of-record marker def _testRecvmsgEOF(self): self.cli_sock.close() def testRecvmsgOverflow(self): # Receive a message in more than one chunk. seg1, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG) - 3) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=False) seg2, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, 1024) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) msg = seg1 + seg2 self.assertEqual(msg, MSG) def _testRecvmsgOverflow(self): self.sendToServer(MSG) class RecvmsgTests(RecvmsgGenericTests): # Tests for recvmsg() which can use any socket type. def testRecvmsgBadArgs(self): # Check that recvmsg() rejects invalid arguments. self.assertRaises(TypeError, self.serv_sock.recvmsg) self.assertRaises(ValueError, self.serv_sock.recvmsg, -1, 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg, len(MSG), -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, [bytearray(10)], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, object(), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg, len(MSG), 0, object()) msg, ancdata, flags, addr = self.serv_sock.recvmsg(len(MSG), 0, 0) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgBadArgs(self): self.sendToServer(MSG) class RecvmsgIntoTests(RecvmsgIntoMixin, RecvmsgGenericTests): # Tests for recvmsg_into() which can use any socket type. def testRecvmsgIntoBadArgs(self): # Check that recvmsg_into() rejects invalid arguments. 
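        # For reference (background only): recvmsg_into(buffers[,
        # ancbufsize[, flags]]) scatters the received bytes into the
        # writable buffers it is given and returns
        #
        #     nbytes, ancdata, msg_flags, address = sock.recvmsg_into([buf])
        #
        # i.e. the first element is a byte count rather than the data
        # itself, which is why the checks below always pass a *list* of
        # buffers and reject bare buffers or non-writable objects.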
buf = bytearray(len(MSG)) self.assertRaises(TypeError, self.serv_sock.recvmsg_into) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, len(MSG), 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, buf, 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [object()], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [b"I'm not writable"], 0, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf, object()], 0, 0) self.assertRaises(ValueError, self.serv_sock.recvmsg_into, [buf], -1, 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], object(), 0) self.assertRaises(TypeError, self.serv_sock.recvmsg_into, [buf], 0, object()) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf], 0, 0) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoBadArgs(self): self.sendToServer(MSG) def testRecvmsgIntoGenerator(self): # Receive into buffer obtained from a generator (not a sequence). buf = bytearray(len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( (o for o in [buf])) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf, bytearray(MSG)) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoGenerator(self): self.sendToServer(MSG) def testRecvmsgIntoArray(self): # Receive into an array rather than the usual bytearray. buf = array.array("B", [0] * len(MSG)) nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into([buf]) self.assertEqual(nbytes, len(MSG)) self.assertEqual(buf.tobytes(), MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoArray(self): self.sendToServer(MSG) def testRecvmsgIntoScatter(self): # Receive into multiple buffers (scatter write). b1 = bytearray(b"----") b2 = bytearray(b"0123456789") b3 = bytearray(b"--------------") nbytes, ancdata, flags, addr = self.serv_sock.recvmsg_into( [b1, memoryview(b2)[2:9], b3]) self.assertEqual(nbytes, len(b"Mary had a little lamb")) self.assertEqual(b1, bytearray(b"Mary")) self.assertEqual(b2, bytearray(b"01 had a 9")) self.assertEqual(b3, bytearray(b"little lamb---")) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True) def _testRecvmsgIntoScatter(self): self.sendToServer(b"Mary had a little lamb") class CmsgMacroTests(unittest.TestCase): # Test the functions CMSG_LEN() and CMSG_SPACE(). Tests # assumptions used by sendmsg() and recvmsg[_into](), which share # code with these functions. # Match the definition in socketmodule.c try: import _testcapi except ImportError: socklen_t_limit = 0x7fffffff else: socklen_t_limit = min(0x7fffffff, _testcapi.INT_MAX) @requireAttrs(socket, "CMSG_LEN") def testCMSG_LEN(self): # Test CMSG_LEN() with various valid and invalid values, # checking the assumptions used by recvmsg() and sendmsg(). 
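        # Worked example of the relationship exercised below (the concrete
        # numbers are illustrative only; real values vary by platform): if
        # CMSG_LEN(0) were 16, then CMSG_LEN(4) would be 20, so
        # CMSG_LEN(4) - CMSG_LEN(0) recovers the 4 bytes of payload.  That
        # difference is the "data size" calculation referred to in the loop.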
toobig = self.socklen_t_limit - socket.CMSG_LEN(0) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(socket.CMSG_LEN(0), array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_LEN(n) # This is how recvmsg() calculates the data size self.assertEqual(ret - socket.CMSG_LEN(0), n) self.assertLessEqual(ret, self.socklen_t_limit) self.assertRaises(OverflowError, socket.CMSG_LEN, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_LEN, toobig) self.assertRaises(OverflowError, socket.CMSG_LEN, sys.maxsize) @requireAttrs(socket, "CMSG_SPACE") def testCMSG_SPACE(self): # Test CMSG_SPACE() with various valid and invalid values, # checking the assumptions used by sendmsg(). toobig = self.socklen_t_limit - socket.CMSG_SPACE(1) + 1 values = list(range(257)) + list(range(toobig - 257, toobig)) last = socket.CMSG_SPACE(0) # struct cmsghdr has at least three members, two of which are ints self.assertGreater(last, array.array("i").itemsize * 2) for n in values: ret = socket.CMSG_SPACE(n) self.assertGreaterEqual(ret, last) self.assertGreaterEqual(ret, socket.CMSG_LEN(n)) self.assertGreaterEqual(ret, n + socket.CMSG_LEN(0)) self.assertLessEqual(ret, self.socklen_t_limit) last = ret self.assertRaises(OverflowError, socket.CMSG_SPACE, -1) # sendmsg() shares code with these functions, and requires # that it reject values over the limit. self.assertRaises(OverflowError, socket.CMSG_SPACE, toobig) self.assertRaises(OverflowError, socket.CMSG_SPACE, sys.maxsize) class SCMRightsTest(SendrecvmsgServerTimeoutBase): # Tests for file descriptor passing on Unix-domain sockets. # Invalid file descriptor value that's unlikely to evaluate to a # real FD even if one of its bytes is replaced with a different # value (which shouldn't actually happen). badfd = -0x5555 def newFDs(self, n): # Return a list of n file descriptors for newly-created files # containing their list indices as ASCII numbers. fds = [] for i in range(n): fd, path = tempfile.mkstemp() self.addCleanup(os.unlink, path) self.addCleanup(os.close, fd) os.write(fd, str(i).encode()) fds.append(fd) return fds def checkFDs(self, fds): # Check that the file descriptors in the given list contain # their correct list indices as ASCII numbers. for n, fd in enumerate(fds): os.lseek(fd, 0, os.SEEK_SET) self.assertEqual(os.read(fd, 1024), str(n).encode()) def registerRecvmsgResult(self, result): self.addCleanup(self.closeRecvmsgFDs, result) def closeRecvmsgFDs(self, recvmsg_result): # Close all file descriptors specified in the ancillary data # of the given return value from recvmsg() or recvmsg_into(). for cmsg_level, cmsg_type, cmsg_data in recvmsg_result[1]: if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS): fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) for fd in fds: os.close(fd) def createAndSendFDs(self, n): # Send n new file descriptors created by newFDs() to the # server, with the constant MSG as the non-ancillary data. self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(n)))]), len(MSG)) def checkRecvmsgFDs(self, numfds, result, maxcmsgs=1, ignoreflags=0): # Check that constant MSG was received with numfds file # descriptors in a maximum of maxcmsgs control messages (which # must contain only complete integers). 
By default, check # that MSG_CTRUNC is unset, but ignore any flags in # ignoreflags. msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertIsInstance(ancdata, list) self.assertLessEqual(len(ancdata), maxcmsgs) fds = array.array("i") for item in ancdata: self.assertIsInstance(item, tuple) cmsg_level, cmsg_type, cmsg_data = item self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data) % SIZEOF_INT, 0) fds.frombytes(cmsg_data) self.assertEqual(len(fds), numfds) self.checkFDs(fds) def testFDPassSimple(self): # Pass a single FD (array read from bytes object). self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testFDPassSimple(self): self.assertEqual( self.sendmsgToServer( [MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", self.newFDs(1)).tobytes())]), len(MSG)) def testMultipleFDPass(self): # Pass multiple FDs in a single array. self.checkRecvmsgFDs(4, self.doRecvmsg(self.serv_sock, len(MSG), 10240)) def _testMultipleFDPass(self): self.createAndSendFDs(4) @requireAttrs(socket, "CMSG_SPACE") def testFDPassCMSG_SPACE(self): # Test using CMSG_SPACE() to calculate ancillary buffer size. self.checkRecvmsgFDs( 4, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(4 * SIZEOF_INT))) @testFDPassCMSG_SPACE.client_skip def _testFDPassCMSG_SPACE(self): self.createAndSendFDs(4) def testFDPassCMSG_LEN(self): # Test using CMSG_LEN() to calculate ancillary buffer size. self.checkRecvmsgFDs(1, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(4 * SIZEOF_INT)), # RFC 3542 says implementations may set # MSG_CTRUNC if there isn't enough space # for trailing padding. ignoreflags=socket.MSG_CTRUNC) def _testFDPassCMSG_LEN(self): self.createAndSendFDs(1) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparate(self): # Pass two FDs in two separate arrays. Arrays may be combined # into a single control message by the OS. self.checkRecvmsgFDs(2, self.doRecvmsg(self.serv_sock, len(MSG), 10240), maxcmsgs=2) @testFDPassSeparate.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparate(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") @requireAttrs(socket, "CMSG_SPACE") def testFDPassSeparateMinSpace(self): # Pass two FDs in two separate arrays, receiving them into the # minimum space for two arrays. 
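        # Background on the buffer sizing used below (general behaviour,
        # not specific to these exact operands): CMSG_SPACE(n) includes the
        # trailing padding required when a further ancillary item follows,
        # while CMSG_LEN(n) omits that padding and so may only suffice for
        # the last item.  RFC 3542 permits MSG_CTRUNC to be set when the
        # trailing padding does not fit, hence the
        # ignoreflags=socket.MSG_CTRUNC passed here.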
num_fds = 2 self.checkRecvmsgFDs(num_fds, self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT * num_fds)), maxcmsgs=2, ignoreflags=socket.MSG_CTRUNC) @testFDPassSeparateMinSpace.client_skip @unittest.skipIf(sys.platform == "darwin", "skipping, see issue #12958") @unittest.skipIf(AIX, "skipping, see issue #22397") def _testFDPassSeparateMinSpace(self): fd0, fd1 = self.newFDs(2) self.assertEqual( self.sendmsgToServer([MSG], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0])), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]), len(MSG)) def sendAncillaryIfPossible(self, msg, ancdata): # Try to send msg and ancdata to server, but if the system # call fails, just send msg with no ancillary data. try: nbytes = self.sendmsgToServer([msg], ancdata) except OSError as e: # Check that it was the system call that failed self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer([msg]) self.assertEqual(nbytes, len(msg)) @unittest.skipIf(sys.platform == "darwin", "see issue #24725") def testFDPassEmpty(self): # Try to pass an empty FD array. Can receive either no array # or an empty array. self.checkRecvmsgFDs(0, self.doRecvmsg(self.serv_sock, len(MSG), 10240), ignoreflags=socket.MSG_CTRUNC) def _testFDPassEmpty(self): self.sendAncillaryIfPossible(MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, b"")]) def testFDPassPartialInt(self): # Try to pass a truncated FD array. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertLess(len(cmsg_data), SIZEOF_INT) def _testFDPassPartialInt(self): self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [self.badfd]).tobytes()[:-1])]) @requireAttrs(socket, "CMSG_SPACE") def testFDPassPartialIntInMiddle(self): # Try to pass two FD arrays, the first of which is truncated. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), 10240) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, ignore=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 2) fds = array.array("i") # Arrays may have been combined in a single control message for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.assertLessEqual(len(fds), 2) self.checkFDs(fds) @testFDPassPartialIntInMiddle.client_skip def _testFDPassPartialIntInMiddle(self): fd0, fd1 = self.newFDs(2) self.sendAncillaryIfPossible( MSG, [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd0, self.badfd]).tobytes()[:-1]), (socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd1]))]) def checkTruncatedHeader(self, result, ignoreflags=0): # Check that no ancillary data items are returned when data is # truncated inside the cmsghdr structure. 
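        # Background: MSG_CTRUNC in msg_flags is the kernel's indication
        # that some control (ancillary) data was discarded for lack of
        # buffer space.  The BSD caveat noted at the call sites below is
        # why callers sometimes pass ignoreflags=socket.MSG_CTRUNC.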
msg, ancdata, flags, addr = result self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no buffer size # is specified. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG)), # BSD seems to set MSG_CTRUNC only # if an item has been partially # received. ignoreflags=socket.MSG_CTRUNC) def _testCmsgTruncNoBufSize(self): self.createAndSendFDs(1) def testCmsgTrunc0(self): # Check that no ancillary data is received when buffer size is 0. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 0), ignoreflags=socket.MSG_CTRUNC) def _testCmsgTrunc0(self): self.createAndSendFDs(1) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. def testCmsgTrunc1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), 1)) def _testCmsgTrunc1(self): self.createAndSendFDs(1) def testCmsgTrunc2Int(self): # The cmsghdr structure has at least three members, two of # which are ints, so we still shouldn't see any ancillary # data. self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), SIZEOF_INT * 2)) def _testCmsgTrunc2Int(self): self.createAndSendFDs(1) def testCmsgTruncLen0Minus1(self): self.checkTruncatedHeader(self.doRecvmsg(self.serv_sock, len(MSG), socket.CMSG_LEN(0) - 1)) def _testCmsgTruncLen0Minus1(self): self.createAndSendFDs(1) # The following tests try to truncate the control message in the # middle of the FD array. def checkTruncatedArray(self, ancbuf, maxdata, mindata=0): # Check that file descriptor data is truncated to between # mindata and maxdata bytes when received with buffer size # ancbuf, and that any complete file descriptor numbers are # valid. msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbuf) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) if mindata == 0 and ancdata == []: return self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.SOL_SOCKET) self.assertEqual(cmsg_type, socket.SCM_RIGHTS) self.assertGreaterEqual(len(cmsg_data), mindata) self.assertLessEqual(len(cmsg_data), maxdata) fds = array.array("i") fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) self.checkFDs(fds) def testCmsgTruncLen0(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0), maxdata=0) def _testCmsgTruncLen0(self): self.createAndSendFDs(1) def testCmsgTruncLen0Plus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(0) + 1, maxdata=1) def _testCmsgTruncLen0Plus1(self): self.createAndSendFDs(2) def testCmsgTruncLen1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(SIZEOF_INT), maxdata=SIZEOF_INT) def _testCmsgTruncLen1(self): self.createAndSendFDs(2) def testCmsgTruncLen2Minus1(self): self.checkTruncatedArray(ancbuf=socket.CMSG_LEN(2 * SIZEOF_INT) - 1, maxdata=(2 * SIZEOF_INT) - 1) def _testCmsgTruncLen2Minus1(self): self.createAndSendFDs(2) class RFC3542AncillaryTest(SendrecvmsgServerTimeoutBase): # Test sendmsg() and recvmsg[_into]() using the ancillary data # features of the RFC 3542 Advanced Sockets API for IPv6. # Currently we can only handle certain data items (e.g. 
traffic # class, hop limit, MTU discovery and fragmentation settings) # without resorting to unportable means such as the struct module, # but the tests here are aimed at testing the ancillary data # handling in sendmsg() and recvmsg() rather than the IPv6 API # itself. # Test value to use when setting hop limit of packet hop_limit = 2 # Test value to use when setting traffic class of packet. # -1 means "use kernel default". traffic_class = -1 def ancillaryMapping(self, ancdata): # Given ancillary data list ancdata, return a mapping from # pairs (cmsg_level, cmsg_type) to corresponding cmsg_data. # Check that no (level, type) pair appears more than once. d = {} for cmsg_level, cmsg_type, cmsg_data in ancdata: self.assertNotIn((cmsg_level, cmsg_type), d) d[(cmsg_level, cmsg_type)] = cmsg_data return d def checkHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space. Check that data is MSG, ancillary data is not # truncated (but ignore any flags in ignoreflags), and hop # limit is between 0 and maxhop inclusive. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) self.assertIsInstance(ancdata[0], tuple) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertIsInstance(cmsg_data, bytes) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimit(self): # Test receiving the packet hop limit as ancillary data. self.checkHopLimit(ancbufsize=10240) @testRecvHopLimit.client_skip def _testRecvHopLimit(self): # Need to wait until server has asked to receive ancillary # data, as implementations are not required to buffer it # otherwise. self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testRecvHopLimitCMSG_SPACE(self): # Test receiving hop limit, using CMSG_SPACE to calculate buffer size. self.checkHopLimit(ancbufsize=socket.CMSG_SPACE(SIZEOF_INT)) @testRecvHopLimitCMSG_SPACE.client_skip def _testRecvHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Could test receiving into buffer sized using CMSG_LEN, but RFC # 3542 says portable applications must provide space for trailing # padding. Implementations may set MSG_CTRUNC if there isn't # enough space for the padding. @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSetHopLimit(self): # Test setting hop limit on outgoing packet and receiving it # at the other end. 
self.checkHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetHopLimit.client_skip def _testSetHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) def checkTrafficClassAndHopLimit(self, ancbufsize, maxhop=255, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space. Check that data is MSG, ancillary # data is not truncated (but ignore any flags in ignoreflags), # and traffic class and hop limit are in range (hop limit no # more than maxhop). self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkunset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 2) ancmap = self.ancillaryMapping(ancdata) tcdata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_TCLASS)] self.assertEqual(len(tcdata), SIZEOF_INT) a = array.array("i") a.frombytes(tcdata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) hldata = ancmap[(socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT)] self.assertEqual(len(hldata), SIZEOF_INT) a = array.array("i") a.frombytes(hldata) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], maxhop) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimit(self): # Test receiving traffic class and hop limit as ancillary data. self.checkTrafficClassAndHopLimit(ancbufsize=10240) @testRecvTrafficClassAndHopLimit.client_skip def _testRecvTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testRecvTrafficClassAndHopLimitCMSG_SPACE(self): # Test receiving traffic class and hop limit, using # CMSG_SPACE() to calculate buffer size. self.checkTrafficClassAndHopLimit( ancbufsize=socket.CMSG_SPACE(SIZEOF_INT) * 2) @testRecvTrafficClassAndHopLimitCMSG_SPACE.client_skip def _testRecvTrafficClassAndHopLimitCMSG_SPACE(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSetTrafficClassAndHopLimit(self): # Test setting traffic class and hop limit on outgoing packet, # and receiving them at the other end. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testSetTrafficClassAndHopLimit.client_skip def _testSetTrafficClassAndHopLimit(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.assertEqual( self.sendmsgToServer([MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]), len(MSG)) @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testOddCmsgSize(self): # Try to send ancillary data with first item one byte too # long. 
Fall back to sending with correct size if this fails, # and check that second item was handled correctly. self.checkTrafficClassAndHopLimit(ancbufsize=10240, maxhop=self.hop_limit) @testOddCmsgSize.client_skip def _testOddCmsgSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) try: nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class]).tobytes() + b"\x00"), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) except OSError as e: self.assertIsInstance(e.errno, int) nbytes = self.sendmsgToServer( [MSG], [(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, array.array("i", [self.traffic_class])), (socket.IPPROTO_IPV6, socket.IPV6_HOPLIMIT, array.array("i", [self.hop_limit]))]) self.assertEqual(nbytes, len(MSG)) # Tests for proper handling of truncated ancillary data def checkHopLimitTruncatedHeader(self, ancbufsize, ignoreflags=0): # Receive hop limit into ancbufsize bytes of ancillary data # space, which should be too small to contain the ancillary # data header (if ancbufsize is None, pass no second argument # to recvmsg()). Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and no ancillary data is # returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() args = () if ancbufsize is None else (ancbufsize,) msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), *args) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.assertEqual(ancdata, []) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testCmsgTruncNoBufSize(self): # Check that no ancillary data is received when no ancillary # buffer size is provided. self.checkHopLimitTruncatedHeader(ancbufsize=None, # BSD seems to set # MSG_CTRUNC only if an item # has been partially # received. ignoreflags=socket.MSG_CTRUNC) @testCmsgTruncNoBufSize.client_skip def _testCmsgTruncNoBufSize(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc0(self): # Check that no ancillary data is received when ancillary # buffer size is zero. self.checkHopLimitTruncatedHeader(ancbufsize=0, ignoreflags=socket.MSG_CTRUNC) @testSingleCmsgTrunc0.client_skip def _testSingleCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Check that no ancillary data is returned for various non-zero # (but still too small) buffer sizes. 
@requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc1(self): self.checkHopLimitTruncatedHeader(ancbufsize=1) @testSingleCmsgTrunc1.client_skip def _testSingleCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTrunc2Int(self): self.checkHopLimitTruncatedHeader(ancbufsize=2 * SIZEOF_INT) @testSingleCmsgTrunc2Int.client_skip def _testSingleCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncLen0Minus1(self): self.checkHopLimitTruncatedHeader(ancbufsize=socket.CMSG_LEN(0) - 1) @testSingleCmsgTruncLen0Minus1.client_skip def _testSingleCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT") def testSingleCmsgTruncInData(self): # Test truncation of a control message inside its associated # data. The message may be returned with its data truncated, # or not returned at all. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) self.assertLessEqual(len(ancdata), 1) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertEqual(cmsg_type, socket.IPV6_HOPLIMIT) self.assertLess(len(cmsg_data), SIZEOF_INT) @testSingleCmsgTruncInData.client_skip def _testSingleCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) def checkTruncatedSecondHeader(self, ancbufsize, ignoreflags=0): # Receive traffic class and hop limit into ancbufsize bytes of # ancillary data space, which should be large enough to # contain the first item, but too small to contain the header # of the second. Check that data is MSG, MSG_CTRUNC is set # (unless included in ignoreflags), and only one ancillary # data item is returned. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock, len(MSG), ancbufsize) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC, ignore=ignoreflags) self.assertEqual(len(ancdata), 1) cmsg_level, cmsg_type, cmsg_data = ancdata[0] self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) self.assertIn(cmsg_type, {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT}) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) # Try the above test with various buffer sizes. 
@requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc0(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT), ignoreflags=socket.MSG_CTRUNC) @testSecondCmsgTrunc0.client_skip def _testSecondCmsgTrunc0(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 1) @testSecondCmsgTrunc1.client_skip def _testSecondCmsgTrunc1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTrunc2Int(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + 2 * SIZEOF_INT) @testSecondCmsgTrunc2Int.client_skip def _testSecondCmsgTrunc2Int(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecondCmsgTruncLen0Minus1(self): self.checkTruncatedSecondHeader(socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(0) - 1) @testSecondCmsgTruncLen0Minus1.client_skip def _testSecondCmsgTruncLen0Minus1(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) @requireAttrs(socket, "CMSG_SPACE", "IPV6_RECVHOPLIMIT", "IPV6_HOPLIMIT", "IPV6_RECVTCLASS", "IPV6_TCLASS") def testSecomdCmsgTruncInData(self): # Test truncation of the second of two control messages inside # its associated data. self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVHOPLIMIT, 1) self.serv_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVTCLASS, 1) self.misc_event.set() msg, ancdata, flags, addr = self.doRecvmsg( self.serv_sock, len(MSG), socket.CMSG_SPACE(SIZEOF_INT) + socket.CMSG_LEN(SIZEOF_INT) - 1) self.assertEqual(msg, MSG) self.checkRecvmsgAddress(addr, self.cli_addr) self.checkFlags(flags, eor=True, checkset=socket.MSG_CTRUNC) cmsg_types = {socket.IPV6_TCLASS, socket.IPV6_HOPLIMIT} cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertEqual(len(cmsg_data), SIZEOF_INT) a = array.array("i") a.frombytes(cmsg_data) self.assertGreaterEqual(a[0], 0) self.assertLessEqual(a[0], 255) if ancdata: cmsg_level, cmsg_type, cmsg_data = ancdata.pop(0) self.assertEqual(cmsg_level, socket.IPPROTO_IPV6) cmsg_types.remove(cmsg_type) self.assertLess(len(cmsg_data), SIZEOF_INT) self.assertEqual(ancdata, []) @testSecomdCmsgTruncInData.client_skip def _testSecomdCmsgTruncInData(self): self.assertTrue(self.misc_event.wait(timeout=self.fail_timeout)) self.sendToServer(MSG) # Derive concrete test classes for different socket types. 
class SendrecvmsgUDPTestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPTest(SendmsgConnectionlessTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPTest(RecvmsgTests, SendrecvmsgUDPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPTest(RecvmsgIntoTests, SendrecvmsgUDPTestBase): pass class SendrecvmsgUDP6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDP6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDP6Test(SendmsgConnectionlessTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDP6Test(RecvmsgTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDP6Test(RecvmsgIntoTests, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDP6Test(RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDP6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDP6TestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITETestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "sendmsg") class SendmsgUDPLITETest(SendmsgConnectionlessTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg") class RecvmsgUDPLITETest(RecvmsgTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoUDPLITETest(RecvmsgIntoTests, SendrecvmsgUDPLITETestBase): pass @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class SendrecvmsgUDPLITE6TestBase(SendrecvmsgDgramFlagsBase, SendrecvmsgConnectionlessBase, ThreadedSocketTestMixin, UDPLITE6TestBase): def checkRecvmsgAddress(self, addr1, addr2): # Called to compare the received address with the address of # the peer, ignoring scope ID self.assertEqual(addr1[:-1], addr2[:-1]) @requireAttrs(socket.socket, "sendmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') 
@requireSocket("AF_INET6", "SOCK_DGRAM") class SendmsgUDPLITE6Test(SendmsgConnectionlessTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgUDPLITE6Test(RecvmsgTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoUDPLITE6Test(RecvmsgIntoTests, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgRFC3542AncillaryUDPLITE6Test(RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test.') @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') @requireAttrs(socket, "IPPROTO_IPV6") @requireSocket("AF_INET6", "SOCK_DGRAM") class RecvmsgIntoRFC3542AncillaryUDPLITE6Test(RecvmsgIntoMixin, RFC3542AncillaryTest, SendrecvmsgUDPLITE6TestBase): pass class SendrecvmsgTCPTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, TCPTestBase): pass @requireAttrs(socket.socket, "sendmsg") class SendmsgTCPTest(SendmsgStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg") class RecvmsgTCPTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") class RecvmsgIntoTCPTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgTCPTestBase): pass class SendrecvmsgSCTPStreamTestBase(SendrecvmsgSCTPFlagsBase, SendrecvmsgConnectedBase, ConnectedStreamTestMixin, SCTPStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class SendmsgSCTPStreamTest(SendmsgStreamTests, SendrecvmsgSCTPStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgSCTPStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) - see issue #13876") @requireAttrs(socket.socket, "recvmsg_into") @unittest.skipIf(AIX, "IPPROTO_SCTP: [Errno 62] Protocol not supported on AIX") @requireSocket("AF_INET", "SOCK_STREAM", "IPPROTO_SCTP") class RecvmsgIntoSCTPStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgSCTPStreamTestBase): def testRecvmsgEOF(self): try: super(RecvmsgIntoSCTPStreamTest, self).testRecvmsgEOF() except OSError as e: if e.errno != errno.ENOTCONN: raise self.skipTest("sporadic ENOTCONN (kernel issue?) 
- see issue #13876") class SendrecvmsgUnixStreamTestBase(SendrecvmsgConnectedBase, ConnectedStreamTestMixin, UnixStreamBase): pass @requireAttrs(socket.socket, "sendmsg") @requireAttrs(socket, "AF_UNIX") class SendmsgUnixStreamTest(SendmsgStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg") @requireAttrs(socket, "AF_UNIX") class RecvmsgUnixStreamTest(RecvmsgTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "recvmsg_into") @requireAttrs(socket, "AF_UNIX") class RecvmsgIntoUnixStreamTest(RecvmsgIntoTests, RecvmsgGenericStreamTests, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgSCMRightsStreamTest(SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass @requireAttrs(socket.socket, "sendmsg", "recvmsg_into") @requireAttrs(socket, "AF_UNIX", "SOL_SOCKET", "SCM_RIGHTS") class RecvmsgIntoSCMRightsStreamTest(RecvmsgIntoMixin, SCMRightsTest, SendrecvmsgUnixStreamTestBase): pass # Test interrupting the interruptible send/receive methods with a # signal when a timeout is set. These tests avoid having multiple # threads alive during the test so that the OS cannot deliver the # signal to the wrong one. class InterruptedTimeoutBase: # Base class for interrupted send/receive tests. Installs an # empty handler for SIGALRM and removes it on teardown, along with # any scheduled alarms. def setUp(self): super().setUp() orig_alrm_handler = signal.signal(signal.SIGALRM, lambda signum, frame: 1 / 0) self.addCleanup(signal.signal, signal.SIGALRM, orig_alrm_handler) # Timeout for socket operations timeout = support.LOOPBACK_TIMEOUT # Provide setAlarm() method to schedule delivery of SIGALRM after # given number of seconds, or cancel it if zero, and an # appropriate time value to use. Use setitimer() if available. if hasattr(signal, "setitimer"): alarm_time = 0.05 def setAlarm(self, seconds): signal.setitimer(signal.ITIMER_REAL, seconds) else: # Old systems may deliver the alarm up to one second early alarm_time = 2 def setAlarm(self, seconds): signal.alarm(seconds) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedRecvTimeoutTest(InterruptedTimeoutBase, UDPTestBase): # Test interrupting the recv*() methods with signals when a # timeout is set. def setUp(self): super().setUp() self.serv.settimeout(self.timeout) def checkInterruptedRecv(self, func, *args, **kwargs): # Check that func(*args, **kwargs) raises # errno of EINTR when interrupted by a signal. 
try: self.setAlarm(self.alarm_time) with self.assertRaises(ZeroDivisionError) as cm: func(*args, **kwargs) finally: self.setAlarm(0) def testInterruptedRecvTimeout(self): self.checkInterruptedRecv(self.serv.recv, 1024) def testInterruptedRecvIntoTimeout(self): self.checkInterruptedRecv(self.serv.recv_into, bytearray(1024)) def testInterruptedRecvfromTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom, 1024) def testInterruptedRecvfromIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvfrom_into, bytearray(1024)) @requireAttrs(socket.socket, "recvmsg") def testInterruptedRecvmsgTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg, 1024) @requireAttrs(socket.socket, "recvmsg_into") def testInterruptedRecvmsgIntoTimeout(self): self.checkInterruptedRecv(self.serv.recvmsg_into, [bytearray(1024)]) # Require siginterrupt() in order to ensure that system calls are # interrupted by default. @requireAttrs(signal, "siginterrupt") @unittest.skipUnless(hasattr(signal, "alarm") or hasattr(signal, "setitimer"), "Don't have signal.alarm or signal.setitimer") class InterruptedSendTimeoutTest(InterruptedTimeoutBase, ThreadSafeCleanupTestCase, SocketListeningTestMixin, TCPTestBase): # Test interrupting the interruptible send*() methods with signals # when a timeout is set. def setUp(self): super().setUp() self.serv_conn = self.newSocket() self.addCleanup(self.serv_conn.close) # Use a thread to complete the connection, but wait for it to # terminate before running the test, so that there is only one # thread to accept the signal. cli_thread = threading.Thread(target=self.doConnect) cli_thread.start() self.cli_conn, addr = self.serv.accept() self.addCleanup(self.cli_conn.close) cli_thread.join() self.serv_conn.settimeout(self.timeout) def doConnect(self): self.serv_conn.connect(self.serv_addr) def checkInterruptedSend(self, func, *args, **kwargs): # Check that func(*args, **kwargs), run in a loop, raises # OSError with an errno of EINTR when interrupted by a # signal. try: with self.assertRaises(ZeroDivisionError) as cm: while True: self.setAlarm(self.alarm_time) func(*args, **kwargs) finally: self.setAlarm(0) # Issue #12958: The following tests have problems on OS X prior to 10.7 @support.requires_mac_ver(10, 7) def testInterruptedSendTimeout(self): self.checkInterruptedSend(self.serv_conn.send, b"a"*512) @support.requires_mac_ver(10, 7) def testInterruptedSendtoTimeout(self): # Passing an actual address here as Python's wrapper for # sendto() doesn't allow passing a zero-length one; POSIX # requires that the address is ignored since the socket is # connection-mode, however. self.checkInterruptedSend(self.serv_conn.sendto, b"a"*512, self.serv_addr) @support.requires_mac_ver(10, 7) @requireAttrs(socket.socket, "sendmsg") def testInterruptedSendmsgTimeout(self): self.checkInterruptedSend(self.serv_conn.sendmsg, [b"a"*512]) class TCPCloserTest(ThreadedTCPSocketTest): def testClose(self): conn, addr = self.serv.accept() conn.close() sd = self.cli read, write, err = select.select([sd], [], [], 1.0) self.assertEqual(read, [sd]) self.assertEqual(sd.recv(1), b'') # Calling close() many times should be safe. 
conn.close() conn.close() def _testClose(self): self.cli.connect((HOST, self.port)) time.sleep(1.0) class BasicSocketPairTest(SocketPairTest): def __init__(self, methodName='runTest'): SocketPairTest.__init__(self, methodName=methodName) def _check_defaults(self, sock): self.assertIsInstance(sock, socket.socket) if hasattr(socket, 'AF_UNIX'): self.assertEqual(sock.family, socket.AF_UNIX) else: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) self.assertEqual(sock.proto, 0) def _testDefaults(self): self._check_defaults(self.cli) def testDefaults(self): self._check_defaults(self.serv) def testRecv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.send(MSG) def testSend(self): self.serv.send(MSG) def _testSend(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) class PurePythonSocketPairTest(SocketPairTest): # Explicitly use socketpair AF_INET or AF_INET6 to ensure that is the # code path we're using regardless platform is the pure python one where # `_socket.socketpair` does not exist. (AF_INET does not work with # _socket.socketpair on many platforms). def socketpair(self): # called by super().setUp(). try: return socket.socketpair(socket.AF_INET6) except OSError: return socket.socketpair(socket.AF_INET) # Local imports in this class make for easy security fix backporting. def setUp(self): if hasattr(_socket, "socketpair"): self._orig_sp = socket.socketpair # This forces the version using the non-OS provided socketpair # emulation via an AF_INET socket in Lib/socket.py. socket.socketpair = socket._fallback_socketpair else: # This platform already uses the non-OS provided version. self._orig_sp = None super().setUp() def tearDown(self): super().tearDown() if self._orig_sp is not None: # Restore the default socket.socketpair definition. socket.socketpair = self._orig_sp def test_recv(self): msg = self.serv.recv(1024) self.assertEqual(msg, MSG) def _test_recv(self): self.cli.send(MSG) def test_send(self): self.serv.send(MSG) def _test_send(self): msg = self.cli.recv(1024) self.assertEqual(msg, MSG) def test_ipv4(self): cli, srv = socket.socketpair(socket.AF_INET) cli.close() srv.close() def _test_ipv4(self): pass @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6(self): cli, srv = socket.socketpair(socket.AF_INET6) cli.close() srv.close() def _test_ipv6(self): pass def test_injected_authentication_failure(self): orig_getsockname = socket.socket.getsockname inject_sock = None def inject_getsocketname(self): nonlocal inject_sock sockname = orig_getsockname(self) # Connect to the listening socket ahead of the # client socket. if inject_sock is None: inject_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) inject_sock.setblocking(False) try: inject_sock.connect(sockname[:2]) except (BlockingIOError, InterruptedError): pass inject_sock.setblocking(True) return sockname sock1 = sock2 = None try: socket.socket.getsockname = inject_getsocketname with self.assertRaises(OSError): sock1, sock2 = socket.socketpair() finally: socket.socket.getsockname = orig_getsockname if inject_sock: inject_sock.close() if sock1: # This cleanup isn't needed on a successful test. sock1.close() if sock2: sock2.close() def _test_injected_authentication_failure(self): # No-op. Exists for base class threading infrastructure to call. 
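        # For orientation, a rough sketch of the loopback-based strategy that the
        # pure-Python socketpair fallback relies on (and that the injection test
        # above attacks): listen on 127.0.0.1, connect a client, accept, and verify
        # that the accepted peer really is our own client before returning the pair.
        # A simplified illustration under those assumptions, not the stdlib
        # implementation itself.
        def _sketch_inet_socketpair():
            import socket
            lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                lsock.bind(("127.0.0.1", 0))
                lsock.listen(1)
                csock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                csock.connect(lsock.getsockname())
                ssock, peer = lsock.accept()
                if peer != csock.getsockname():
                    # Some other process connected first; refuse to pair with it.
                    csock.close()
                    ssock.close()
                    raise OSError("socketpair authentication failed")
                return csock, ssock
            finally:
                lsock.close()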
# We could refactor this test into its own lesser class along with the # setUp and tearDown code to construct an ideal; it is simpler to keep # it here and live with extra overhead one this _one_ failure test. pass class NonBlockingTCPTests(ThreadedTCPSocketTest): def __init__(self, methodName='runTest'): self.event = threading.Event() ThreadedTCPSocketTest.__init__(self, methodName=methodName) def assert_sock_timeout(self, sock, timeout): self.assertEqual(self.serv.gettimeout(), timeout) blocking = (timeout != 0.0) self.assertEqual(sock.getblocking(), blocking) if fcntl is not None: # When a Python socket has a non-zero timeout, it's switched # internally to a non-blocking mode. Later, sock.sendall(), # sock.recv(), and other socket operations use a select() call and # handle EWOULDBLOCK/EGAIN on all socket operations. That's how # timeouts are enforced. fd_blocking = (timeout is None) flag = fcntl.fcntl(sock, fcntl.F_GETFL, os.O_NONBLOCK) self.assertEqual(not bool(flag & os.O_NONBLOCK), fd_blocking) def testSetBlocking(self): # Test setblocking() and settimeout() methods self.serv.setblocking(True) self.assert_sock_timeout(self.serv, None) self.serv.setblocking(False) self.assert_sock_timeout(self.serv, 0.0) self.serv.settimeout(None) self.assert_sock_timeout(self.serv, None) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) self.serv.settimeout(10) self.assert_sock_timeout(self.serv, 10) self.serv.settimeout(0) self.assert_sock_timeout(self.serv, 0) def _testSetBlocking(self): pass @support.cpython_only def testSetBlocking_overflow(self): # Issue 15989 import _testcapi if _testcapi.UINT_MAX >= _testcapi.ULONG_MAX: self.skipTest('needs UINT_MAX < ULONG_MAX') self.serv.setblocking(False) self.assertEqual(self.serv.gettimeout(), 0.0) self.serv.setblocking(_testcapi.UINT_MAX + 1) self.assertIsNone(self.serv.gettimeout()) _testSetBlocking_overflow = support.cpython_only(_testSetBlocking) @unittest.skipUnless(hasattr(socket, 'SOCK_NONBLOCK'), 'test needs socket.SOCK_NONBLOCK') @support.requires_linux_version(2, 6, 28) def testInitNonBlocking(self): # create a socket with SOCK_NONBLOCK self.serv.close() self.serv = socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) self.assert_sock_timeout(self.serv, 0) def _testInitNonBlocking(self): pass def testInheritFlagsBlocking(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must be blocking. with socket_setdefaulttimeout(None): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testInheritFlagsBlocking(self): self.cli.connect((HOST, self.port)) def testInheritFlagsTimeout(self): # bpo-7995: accept() on a listening socket with a timeout and the # default timeout is None, the resulting socket must inherit # the default timeout. 
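        # A small sketch of the timeout/O_NONBLOCK relationship described in
        # assert_sock_timeout() above: only settimeout(None) leaves the descriptor
        # itself in blocking mode; settimeout(0) and any positive timeout both put
        # the fd into non-blocking mode, with positive timeouts enforced in Python
        # via select(). Illustrative only, and it assumes a platform with fcntl
        # (i.e. not Windows).
        def _sketch_timeout_vs_nonblock():
            import fcntl
            import os
            import socket
            results = {}
            with socket.socket() as s:
                for timeout in (None, 0.0, 5.0):
                    s.settimeout(timeout)
                    flags = fcntl.fcntl(s, fcntl.F_GETFL)
                    results[timeout] = bool(flags & os.O_NONBLOCK)
            # Expected on CPython: {None: False, 0.0: True, 5.0: True}
            return results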
default_timeout = 20.0 with socket_setdefaulttimeout(default_timeout): self.serv.settimeout(10) conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertEqual(conn.gettimeout(), default_timeout) def _testInheritFlagsTimeout(self): self.cli.connect((HOST, self.port)) def testAccept(self): # Testing non-blocking accept self.serv.setblocking(False) # connect() didn't start: non-blocking accept() fails start_time = time.monotonic() with self.assertRaises(BlockingIOError): conn, addr = self.serv.accept() dt = time.monotonic() - start_time self.assertLess(dt, 1.0) self.event.set() read, write, err = select.select([self.serv], [], [], support.LONG_TIMEOUT) if self.serv not in read: self.fail("Error trying to do accept after select.") # connect() completed: non-blocking accept() doesn't block conn, addr = self.serv.accept() self.addCleanup(conn.close) self.assertIsNone(conn.gettimeout()) def _testAccept(self): # don't connect before event is set to check # that non-blocking accept() raises BlockingIOError self.event.wait() self.cli.connect((HOST, self.port)) def testRecv(self): # Testing non-blocking recv conn, addr = self.serv.accept() self.addCleanup(conn.close) conn.setblocking(False) # the server didn't send data yet: non-blocking recv() fails with self.assertRaises(BlockingIOError): msg = conn.recv(len(MSG)) self.event.set() read, write, err = select.select([conn], [], [], support.LONG_TIMEOUT) if conn not in read: self.fail("Error during select call to non-blocking socket.") # the server sent data yet: non-blocking recv() doesn't block msg = conn.recv(len(MSG)) self.assertEqual(msg, MSG) def _testRecv(self): self.cli.connect((HOST, self.port)) # don't send anything before event is set to check # that non-blocking recv() raises BlockingIOError self.event.wait() # send data: recv() will no longer block self.cli.sendall(MSG) class FileObjectClassTestCase(SocketConnectedTest): """Unit tests for the object returned by socket.makefile() self.read_file is the io object returned by makefile() on the client connection. You can read from this file to get output from the server. self.write_file is the io object returned by makefile() on the server connection. You can write to this file to send output to the client. """ bufsize = -1 # Use default buffer size encoding = 'utf-8' errors = 'strict' newline = None read_mode = 'rb' read_msg = MSG write_mode = 'wb' write_msg = MSG def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def setUp(self): self.evt1, self.evt2, self.serv_finished, self.cli_finished = [ threading.Event() for i in range(4)] SocketConnectedTest.setUp(self) self.read_file = self.cli_conn.makefile( self.read_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def tearDown(self): self.serv_finished.set() self.read_file.close() self.assertTrue(self.read_file.closed) self.read_file = None SocketConnectedTest.tearDown(self) def clientSetUp(self): SocketConnectedTest.clientSetUp(self) self.write_file = self.serv_conn.makefile( self.write_mode, self.bufsize, encoding = self.encoding, errors = self.errors, newline = self.newline) def clientTearDown(self): self.cli_finished.set() self.write_file.close() self.assertTrue(self.write_file.closed) self.write_file = None SocketConnectedTest.clientTearDown(self) def testReadAfterTimeout(self): # Issue #7322: A file object must disallow further reads # after a timeout has occurred. 
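        # A compact illustration of the makefile() pattern that FileObjectClassTestCase
        # builds on: wrap one end of a connected pair in a binary file object for
        # writing and the other for reading, then use ordinary file I/O. A sketch
        # only; the helper name is made up for illustration.
        def _sketch_makefile_roundtrip():
            import socket
            a, b = socket.socketpair()
            with a, b:
                writer = b.makefile("wb")
                reader = a.makefile("rb")
                with writer, reader:
                    writer.write(b"hello over a socket\n")
                    writer.flush()
                    return reader.readline()           # -> b"hello over a socket\n"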
self.cli_conn.settimeout(1) self.read_file.read(3) # First read raises a timeout self.assertRaises(socket.timeout, self.read_file.read, 1) # Second read is disallowed with self.assertRaises(OSError) as ctx: self.read_file.read(1) self.assertIn("cannot read from timed out object", str(ctx.exception)) def _testReadAfterTimeout(self): self.write_file.write(self.write_msg[0:3]) self.write_file.flush() self.serv_finished.wait() def testSmallRead(self): # Performing small file read test first_seg = self.read_file.read(len(self.read_msg)-3) second_seg = self.read_file.read(3) msg = first_seg + second_seg self.assertEqual(msg, self.read_msg) def _testSmallRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testFullRead(self): # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testFullRead(self): self.write_file.write(self.write_msg) self.write_file.close() def testUnbufferedRead(self): # Performing unbuffered file read test buf = type(self.read_msg)() while 1: char = self.read_file.read(1) if not char: break buf += char self.assertEqual(buf, self.read_msg) def _testUnbufferedRead(self): self.write_file.write(self.write_msg) self.write_file.flush() def testReadline(self): # Performing file readline test line = self.read_file.readline() self.assertEqual(line, self.read_msg) def _testReadline(self): self.write_file.write(self.write_msg) self.write_file.flush() def testCloseAfterMakefile(self): # The file returned by makefile should keep the socket open. self.cli_conn.close() # read until EOF msg = self.read_file.read() self.assertEqual(msg, self.read_msg) def _testCloseAfterMakefile(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileAfterMakefileClose(self): self.read_file.close() msg = self.cli_conn.recv(len(MSG)) if isinstance(self.read_msg, str): msg = msg.decode() self.assertEqual(msg, self.read_msg) def _testMakefileAfterMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testClosedAttr(self): self.assertTrue(not self.read_file.closed) def _testClosedAttr(self): self.assertTrue(not self.write_file.closed) def testAttributes(self): self.assertEqual(self.read_file.mode, self.read_mode) self.assertEqual(self.read_file.name, self.cli_conn.fileno()) def _testAttributes(self): self.assertEqual(self.write_file.mode, self.write_mode) self.assertEqual(self.write_file.name, self.serv_conn.fileno()) def testRealClose(self): self.read_file.close() self.assertRaises(ValueError, self.read_file.fileno) self.cli_conn.close() self.assertRaises(OSError, self.cli_conn.getsockname) def _testRealClose(self): pass class UnbufferedFileObjectClassTestCase(FileObjectClassTestCase): """Repeat the tests from FileObjectClassTestCase with bufsize==0. In this case (and in this case only), it should be possible to create a file object, read a line from it, create another file object, read another line from it, without loss of data in the first file object's buffer. Note that http.client relies on this when reading multiple requests from the same socket.""" bufsize = 0 # Use unbuffered mode def testUnbufferedReadline(self): # Read a line, create a new file object, read another line with it line = self.read_file.readline() # first line self.assertEqual(line, b"A. " + self.write_msg) # first line self.read_file = self.cli_conn.makefile('rb', 0) line = self.read_file.readline() # second line self.assertEqual(line, b"B. 
" + self.write_msg) # second line def _testUnbufferedReadline(self): self.write_file.write(b"A. " + self.write_msg) self.write_file.write(b"B. " + self.write_msg) self.write_file.flush() def testMakefileClose(self): # The file returned by makefile should keep the socket open... self.cli_conn.close() msg = self.cli_conn.recv(1024) self.assertEqual(msg, self.read_msg) # ...until the file is itself closed self.read_file.close() self.assertRaises(OSError, self.cli_conn.recv, 1024) def _testMakefileClose(self): self.write_file.write(self.write_msg) self.write_file.flush() def testMakefileCloseSocketDestroy(self): refcount_before = sys.getrefcount(self.cli_conn) self.read_file.close() refcount_after = sys.getrefcount(self.cli_conn) self.assertEqual(refcount_before - 1, refcount_after) def _testMakefileCloseSocketDestroy(self): pass # Non-blocking ops # NOTE: to set `read_file` as non-blocking, we must call # `cli_conn.setblocking` and vice-versa (see setUp / clientSetUp). def testSmallReadNonBlocking(self): self.cli_conn.setblocking(False) self.assertEqual(self.read_file.readinto(bytearray(10)), None) self.assertEqual(self.read_file.read(len(self.read_msg) - 3), None) self.evt1.set() self.evt2.wait(1.0) first_seg = self.read_file.read(len(self.read_msg) - 3) if first_seg is None: # Data not arrived (can happen under Windows), wait a bit time.sleep(0.5) first_seg = self.read_file.read(len(self.read_msg) - 3) buf = bytearray(10) n = self.read_file.readinto(buf) self.assertEqual(n, 3) msg = first_seg + buf[:n] self.assertEqual(msg, self.read_msg) self.assertEqual(self.read_file.readinto(bytearray(16)), None) self.assertEqual(self.read_file.read(1), None) def _testSmallReadNonBlocking(self): self.evt1.wait(1.0) self.write_file.write(self.write_msg) self.write_file.flush() self.evt2.set() # Avoid closing the socket before the server test has finished, # otherwise system recv() will return 0 instead of EWOULDBLOCK. self.serv_finished.wait(5.0) def testWriteNonBlocking(self): self.cli_finished.wait(5.0) # The client thread can't skip directly - the SkipTest exception # would appear as a failure. if self.serv_skipped: self.skipTest(self.serv_skipped) def _testWriteNonBlocking(self): self.serv_skipped = None self.serv_conn.setblocking(False) # Try to saturate the socket buffer pipe with repeated large writes. BIG = b"x" * support.SOCK_MAX_SIZE LIMIT = 10 # The first write() succeeds since a chunk of data can be buffered n = self.write_file.write(BIG) self.assertGreater(n, 0) for i in range(LIMIT): n = self.write_file.write(BIG) if n is None: # Succeeded break self.assertGreater(n, 0) else: # Let us know that this test didn't manage to establish # the expected conditions. This is not a failure in itself but, # if it happens repeatedly, the test should be fixed. 
self.serv_skipped = "failed to saturate the socket buffer" class LineBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 1 # Default-buffered for reading; line-buffered for writing class SmallBufferedFileObjectClassTestCase(FileObjectClassTestCase): bufsize = 2 # Exercise the buffering code class UnicodeReadFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'wb' write_msg = MSG newline = '' class UnicodeWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'rb' read_msg = MSG write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class UnicodeReadWriteFileObjectClassTestCase(FileObjectClassTestCase): """Tests for socket.makefile() in text mode (rather than binary)""" read_mode = 'r' read_msg = MSG.decode('utf-8') write_mode = 'w' write_msg = MSG.decode('utf-8') newline = '' class NetworkConnectionTest(object): """Prove network connection.""" def clientSetUp(self): # We're inherited below by BasicTCPTest2, which also inherits # BasicTCPTest, which defines self.port referenced below. self.cli = socket.create_connection((HOST, self.port)) self.serv_conn = self.cli class BasicTCPTest2(NetworkConnectionTest, BasicTCPTest): """Tests that NetworkConnection does not break existing TCP functionality. """ class NetworkConnectionNoServer(unittest.TestCase): class MockSocket(socket.socket): def connect(self, *args): raise socket.timeout('timed out') @contextlib.contextmanager def mocked_socket_module(self): """Return a socket which times out on connect""" old_socket = socket.socket socket.socket = self.MockSocket try: yield finally: socket.socket = old_socket def test_connect(self): port = socket_helper.find_unused_port() cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(cli.close) with self.assertRaises(OSError) as cm: cli.connect((HOST, port)) self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) def test_create_connection(self): # Issue #9792: errors raised by create_connection() should have # a proper errno attribute. port = socket_helper.find_unused_port() with self.assertRaises(OSError) as cm: socket.create_connection((HOST, port)) # Issue #16257: create_connection() calls getaddrinfo() against # 'localhost'. This may result in an IPV6 addr being returned # as well as an IPV4 one: # >>> socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) # >>> [(2, 2, 0, '', ('127.0.0.1', 41230)), # (26, 2, 0, '', ('::1', 41230, 0, 0))] # # create_connection() enumerates through all the addresses returned # and if it doesn't successfully bind to any of them, it propagates # the last exception it encountered. # # On Solaris, ENETUNREACH is returned in this circumstance instead # of ECONNREFUSED. So, if that errno exists, add it to our list of # expected errnos. expected_errnos = socket_helper.get_socket_conn_refused_errs() self.assertIn(cm.exception.errno, expected_errnos) def test_create_connection_timeout(self): # Issue #9792: create_connection() should not recast timeout errors # as generic socket errors. 
with self.mocked_socket_module(): try: socket.create_connection((HOST, 1234)) except socket.timeout: pass except OSError as exc: if socket_helper.IPV6_ENABLED or exc.errno != errno.EAFNOSUPPORT: raise else: self.fail('socket.timeout not raised') class NetworkConnectionAttributesTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): self.source_port = socket_helper.find_unused_port() def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def _justAccept(self): conn, addr = self.serv.accept() conn.close() testFamily = _justAccept def _testFamily(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT) self.addCleanup(self.cli.close) self.assertEqual(self.cli.family, 2) testSourceAddress = _justAccept def _testSourceAddress(self): self.cli = socket.create_connection((HOST, self.port), timeout=support.LOOPBACK_TIMEOUT, source_address=('', self.source_port)) self.addCleanup(self.cli.close) self.assertEqual(self.cli.getsockname()[1], self.source_port) # The port number being used is sufficient to show that the bind() # call happened. testTimeoutDefault = _justAccept def _testTimeoutDefault(self): # passing no explicit timeout uses socket's global default self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(42) try: self.cli = socket.create_connection((HOST, self.port)) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), 42) testTimeoutNone = _justAccept def _testTimeoutNone(self): # None timeout means the same as sock.settimeout(None) self.assertTrue(socket.getdefaulttimeout() is None) socket.setdefaulttimeout(30) try: self.cli = socket.create_connection((HOST, self.port), timeout=None) self.addCleanup(self.cli.close) finally: socket.setdefaulttimeout(None) self.assertEqual(self.cli.gettimeout(), None) testTimeoutValueNamed = _justAccept def _testTimeoutValueNamed(self): self.cli = socket.create_connection((HOST, self.port), timeout=30) self.assertEqual(self.cli.gettimeout(), 30) testTimeoutValueNonamed = _justAccept def _testTimeoutValueNonamed(self): self.cli = socket.create_connection((HOST, self.port), 30) self.addCleanup(self.cli.close) self.assertEqual(self.cli.gettimeout(), 30) class NetworkConnectionBehaviourTest(SocketTCPTest, ThreadableTest): def __init__(self, methodName='runTest'): SocketTCPTest.__init__(self, methodName=methodName) ThreadableTest.__init__(self) def clientSetUp(self): pass def clientTearDown(self): self.cli.close() self.cli = None ThreadableTest.clientTearDown(self) def testInsideTimeout(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) time.sleep(3) conn.send(b"done!") testOutsideTimeout = testInsideTimeout def _testInsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port)) data = sock.recv(5) self.assertEqual(data, b"done!") def _testOutsideTimeout(self): self.cli = sock = socket.create_connection((HOST, self.port), timeout=1) self.assertRaises(socket.timeout, lambda: sock.recv(5)) class TCPTimeoutTest(SocketTCPTest): def testTCPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.accept() self.assertRaises(socket.timeout, raise_timeout, "Error generating a timeout exception (TCP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.accept() except socket.timeout: 
self.fail("caught timeout instead of error (TCP)") except OSError: ok = True except: self.fail("caught unexpected exception (TCP)") if not ok: self.fail("accept() returned success when we did not expect it") @unittest.skipUnless(hasattr(signal, 'alarm'), 'test needs signal.alarm()') def testInterruptedTimeout(self): # XXX I don't know how to do this test on MSWindows or any other # platform that doesn't support signal.alarm() or os.kill(), though # the bug should have existed on all platforms. self.serv.settimeout(5.0) # must be longer than alarm class Alarm(Exception): pass def alarm_handler(signal, frame): raise Alarm old_alarm = signal.signal(signal.SIGALRM, alarm_handler) try: try: signal.alarm(2) # POSIX allows alarm to be up to 1 second early foo = self.serv.accept() except socket.timeout: self.fail("caught timeout instead of Alarm") except Alarm: pass except: self.fail("caught other exception instead of Alarm:" " %s(%s):\n%s" % (sys.exc_info()[:2] + (traceback.format_exc(),))) else: self.fail("nothing caught") finally: signal.alarm(0) # shut off alarm except Alarm: self.fail("got Alarm in wrong place") finally: # no alarm can be pending. Safe to restore old handler. signal.signal(signal.SIGALRM, old_alarm) class UDPTimeoutTest(SocketUDPTest): def testUDPTimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(socket.timeout, raise_timeout, "Error generating a timeout exception (UDP)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except socket.timeout: self.fail("caught timeout instead of error (UDP)") except OSError: ok = True except: self.fail("caught unexpected exception (UDP)") if not ok: self.fail("recv() returned success when we did not expect it") @unittest.skipUnless(HAVE_SOCKET_UDPLITE, 'UDPLITE sockets required for this test.') class UDPLITETimeoutTest(SocketUDPLITETest): def testUDPLITETimeout(self): def raise_timeout(*args, **kwargs): self.serv.settimeout(1.0) self.serv.recv(1024) self.assertRaises(socket.timeout, raise_timeout, "Error generating a timeout exception (UDPLITE)") def testTimeoutZero(self): ok = False try: self.serv.settimeout(0.0) foo = self.serv.recv(1024) except socket.timeout: self.fail("caught timeout instead of error (UDPLITE)") except OSError: ok = True except: self.fail("caught unexpected exception (UDPLITE)") if not ok: self.fail("recv() returned success when we did not expect it") class TestExceptions(unittest.TestCase): def testExceptionTree(self): self.assertTrue(issubclass(OSError, Exception)) self.assertTrue(issubclass(socket.herror, OSError)) self.assertTrue(issubclass(socket.gaierror, OSError)) self.assertTrue(issubclass(socket.timeout, OSError)) def test_setblocking_invalidfd(self): # Regression test for issue #28471 sock0 = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM, 0, sock0.fileno()) sock0.close() self.addCleanup(sock.detach) with self.assertRaises(OSError): sock.setblocking(False) @unittest.skipUnless(sys.platform == 'linux', 'Linux specific test') class TestLinuxAbstractNamespace(unittest.TestCase): UNIX_PATH_MAX = 108 def testLinuxAbstractNamespace(self): address = b"\x00python-test-hello\x00\xff" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind(address) s1.listen() with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.connect(s1.getsockname()) with s1.accept()[0] as s3: self.assertEqual(s1.getsockname(), address) 
self.assertEqual(s2.getpeername(), address) def testMaxName(self): address = b"\x00" + b"h" * (self.UNIX_PATH_MAX - 1) with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(address) self.assertEqual(s.getsockname(), address) def testNameOverflow(self): address = "\x00" + "h" * self.UNIX_PATH_MAX with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: self.assertRaises(OSError, s.bind, address) def testStrName(self): # Check that an abstract name can be passed as a string. s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) try: s.bind("\x00python\x00test\x00") self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") finally: s.close() def testBytearrayName(self): # Check that an abstract name can be passed as a bytearray. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s: s.bind(bytearray(b"\x00python\x00test\x00")) self.assertEqual(s.getsockname(), b"\x00python\x00test\x00") def testAutobind(self): # Check that binding to an empty string binds to an available address # in the abstract namespace as specified in unix(7) "Autobind feature". abstract_address = b"^\0[0-9a-f]{5}" with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1: s1.bind("") self.assertRegex(s1.getsockname(), abstract_address) # Each socket is bound to a different abstract address. with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2: s2.bind("") self.assertRegex(s2.getsockname(), abstract_address) self.assertNotEqual(s1.getsockname(), s2.getsockname()) @unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX') class TestUnixDomain(unittest.TestCase): def setUp(self): self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) def tearDown(self): self.sock.close() def encoded(self, path): # Return the given path encoded in the file system encoding, # or skip the test if this is not possible. try: return os.fsencode(path) except UnicodeEncodeError: self.skipTest( "Pathname {0!a} cannot be represented in file " "system encoding {1!r}".format( path, sys.getfilesystemencoding())) def bind(self, sock, path): # Bind the socket try: socket_helper.bind_unix_socket(sock, path) except OSError as e: if str(e) == "AF_UNIX path too long": self.skipTest( "Pathname {0!a} is too long to serve as an AF_UNIX path" .format(path)) else: raise def testUnbound(self): # Issue #30205 (note getsockname() can return None on OS X) self.assertIn(self.sock.getsockname(), ('', None)) def testStrAddr(self): # Test binding to and retrieving a normal string pathname. path = os.path.abspath(support.TESTFN) self.bind(self.sock, path) self.addCleanup(support.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testBytesAddr(self): # Test binding to a bytes pathname. path = os.path.abspath(support.TESTFN) self.bind(self.sock, self.encoded(path)) self.addCleanup(support.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testSurrogateescapeBind(self): # Test binding to a valid non-ASCII pathname, with the # non-ASCII bytes supplied using surrogateescape encoding. path = os.path.abspath(support.TESTFN_UNICODE) b = self.encoded(path) self.bind(self.sock, b.decode("ascii", "surrogateescape")) self.addCleanup(support.unlink, path) self.assertEqual(self.sock.getsockname(), path) def testUnencodableAddr(self): # Test binding to a pathname that cannot be encoded in the # file system encoding. 
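        # A short sketch of the Linux abstract-namespace convention exercised by
        # TestLinuxAbstractNamespace above: a leading NUL byte places the name in
        # the abstract namespace (no filesystem entry, nothing to unlink), and
        # binding to "" asks the kernel to autobind a unique abstract address.
        # Linux-only; the name used here is a made-up example.
        def _sketch_abstract_unix_socket():
            import socket
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
                srv.bind(b"\x00example-abstract-name")
                srv.listen(1)
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
                    cli.connect(srv.getsockname())
                    conn, _ = srv.accept()
                    conn.close()
                return srv.getsockname()               # b"\x00example-abstract-name"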
if support.TESTFN_UNENCODABLE is None: self.skipTest("No unencodable filename available") path = os.path.abspath(support.TESTFN_UNENCODABLE) self.bind(self.sock, path) self.addCleanup(support.unlink, path) self.assertEqual(self.sock.getsockname(), path) @unittest.skipIf(sys.platform == 'linux', 'Linux specific test') def testEmptyAddress(self): # Test that binding empty address fails. self.assertRaises(OSError, self.sock.bind, "") class BufferIOTest(SocketConnectedTest): """ Test the buffer versions of socket.recv() and socket.send(). """ def __init__(self, methodName='runTest'): SocketConnectedTest.__init__(self, methodName=methodName) def testRecvIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvIntoBytearray(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoBytearray = _testRecvIntoArray def testRecvIntoMemoryview(self): buf = bytearray(1024) nbytes = self.cli_conn.recv_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvIntoMemoryview = _testRecvIntoArray def testRecvFromIntoArray(self): buf = array.array("B", [0] * len(MSG)) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) buf = buf.tobytes() msg = buf[:len(MSG)] self.assertEqual(msg, MSG) def _testRecvFromIntoArray(self): buf = bytes(MSG) self.serv_conn.send(buf) def testRecvFromIntoBytearray(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(buf) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoBytearray = _testRecvFromIntoArray def testRecvFromIntoMemoryview(self): buf = bytearray(1024) nbytes, addr = self.cli_conn.recvfrom_into(memoryview(buf)) self.assertEqual(nbytes, len(MSG)) msg = buf[:len(MSG)] self.assertEqual(msg, MSG) _testRecvFromIntoMemoryview = _testRecvFromIntoArray def testRecvFromIntoSmallBuffer(self): # See issue #20246. buf = bytearray(8) self.assertRaises(ValueError, self.cli_conn.recvfrom_into, buf, 1024) def _testRecvFromIntoSmallBuffer(self): self.serv_conn.send(MSG) def testRecvFromIntoEmptyBuffer(self): buf = bytearray() self.cli_conn.recvfrom_into(buf) self.cli_conn.recvfrom_into(buf, 0) _testRecvFromIntoEmptyBuffer = _testRecvFromIntoArray TIPC_STYPE = 2000 TIPC_LOWER = 200 TIPC_UPPER = 210 def isTipcAvailable(): """Check if the TIPC module is loaded The TIPC module is not loaded automatically on Ubuntu and probably other Linux distros. """ if not hasattr(socket, "AF_TIPC"): return False try: f = open("/proc/modules") except (FileNotFoundError, IsADirectoryError, PermissionError): # It's ok if the file does not exist, is a directory or if we # have not the permission to read it. 
return False with f: for line in f: if line.startswith("tipc "): return True return False @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCTest(unittest.TestCase): def testRDM(self): srv = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) cli = socket.socket(socket.AF_TIPC, socket.SOCK_RDM) self.addCleanup(srv.close) self.addCleanup(cli.close) srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) srv.bind(srvaddr) sendaddr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) cli.sendto(MSG, sendaddr) msg, recvaddr = srv.recvfrom(1024) self.assertEqual(cli.getsockname(), recvaddr) self.assertEqual(msg, MSG) @unittest.skipUnless(isTipcAvailable(), "TIPC module is not loaded, please 'sudo modprobe tipc'") class TIPCThreadableTest(unittest.TestCase, ThreadableTest): def __init__(self, methodName = 'runTest'): unittest.TestCase.__init__(self, methodName = methodName) ThreadableTest.__init__(self) def setUp(self): self.srv = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.srv.close) self.srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) srvaddr = (socket.TIPC_ADDR_NAMESEQ, TIPC_STYPE, TIPC_LOWER, TIPC_UPPER) self.srv.bind(srvaddr) self.srv.listen() self.serverExplicitReady() self.conn, self.connaddr = self.srv.accept() self.addCleanup(self.conn.close) def clientSetUp(self): # There is a hittable race between serverExplicitReady() and the # accept() call; sleep a little while to avoid it, otherwise # we could get an exception time.sleep(0.1) self.cli = socket.socket(socket.AF_TIPC, socket.SOCK_STREAM) self.addCleanup(self.cli.close) addr = (socket.TIPC_ADDR_NAME, TIPC_STYPE, TIPC_LOWER + int((TIPC_UPPER - TIPC_LOWER) / 2), 0) self.cli.connect(addr) self.cliaddr = self.cli.getsockname() def testStream(self): msg = self.conn.recv(1024) self.assertEqual(msg, MSG) self.assertEqual(self.cliaddr, self.connaddr) def _testStream(self): self.cli.send(MSG) self.cli.close() class ContextManagersTest(ThreadedTCPSocketTest): def _testSocketClass(self): # base test with socket.socket() as sock: self.assertFalse(sock._closed) self.assertTrue(sock._closed) # close inside with block with socket.socket() as sock: sock.close() self.assertTrue(sock._closed) # exception inside with block with socket.socket() as sock: self.assertRaises(OSError, sock.sendall, b'foo') self.assertTrue(sock._closed) def testCreateConnectionBase(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionBase(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: self.assertFalse(sock._closed) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') self.assertTrue(sock._closed) def testCreateConnectionClose(self): conn, addr = self.serv.accept() self.addCleanup(conn.close) data = conn.recv(1024) conn.sendall(data) def _testCreateConnectionClose(self): address = self.serv.getsockname() with socket.create_connection(address) as sock: sock.close() self.assertTrue(sock._closed) self.assertRaises(OSError, sock.sendall, b'foo') class InheritanceTest(unittest.TestCase): @unittest.skipUnless(hasattr(socket, "SOCK_CLOEXEC"), "SOCK_CLOEXEC not defined") @support.requires_linux_version(2, 6, 28) def test_SOCK_CLOEXEC(self): with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_CLOEXEC) as s: self.assertEqual(s.type, 
socket.SOCK_STREAM) self.assertFalse(s.get_inheritable()) def test_default_inheritable(self): sock = socket.socket() with sock: self.assertEqual(sock.get_inheritable(), False) def test_dup(self): sock = socket.socket() with sock: newsock = sock.dup() sock.close() with newsock: self.assertEqual(newsock.get_inheritable(), False) def test_set_inheritable(self): sock = socket.socket() with sock: sock.set_inheritable(True) self.assertEqual(sock.get_inheritable(), True) sock.set_inheritable(False) self.assertEqual(sock.get_inheritable(), False) @unittest.skipIf(fcntl is None, "need fcntl") def test_get_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(sock.get_inheritable(), False) # clear FD_CLOEXEC flag flags = fcntl.fcntl(fd, fcntl.F_GETFD) flags &= ~fcntl.FD_CLOEXEC fcntl.fcntl(fd, fcntl.F_SETFD, flags) self.assertEqual(sock.get_inheritable(), True) @unittest.skipIf(fcntl is None, "need fcntl") def test_set_inheritable_cloexec(self): sock = socket.socket() with sock: fd = sock.fileno() self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, fcntl.FD_CLOEXEC) sock.set_inheritable(True) self.assertEqual(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC, 0) def test_socketpair(self): s1, s2 = socket.socketpair() self.addCleanup(s1.close) self.addCleanup(s2.close) self.assertEqual(s1.get_inheritable(), False) self.assertEqual(s2.get_inheritable(), False) @unittest.skipUnless(hasattr(socket, "SOCK_NONBLOCK"), "SOCK_NONBLOCK not defined") class NonblockConstantTest(unittest.TestCase): def checkNonblock(self, s, nonblock=True, timeout=0.0): if nonblock: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), timeout) self.assertTrue( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) if timeout == 0: # timeout == 0: means that getblocking() must be False. self.assertFalse(s.getblocking()) else: # If timeout > 0, the socket will be in a "blocking" mode # from the standpoint of the Python API. For Python socket # object, "blocking" means that operations like 'sock.recv()' # will block. Internally, file descriptors for # "blocking" Python sockets *with timeouts* are in a # *non-blocking* mode, and 'sock.recv()' uses 'select()' # and handles EWOULDBLOCK/EAGAIN to enforce the timeout. 
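            # To make the three states discussed in the comment above concrete: a socket
            # created with SOCK_NONBLOCK starts out with a zero timeout and
            # getblocking() == False, and setblocking()/settimeout() move it between the
            # modes that checkNonblock() verifies. A Linux-only sketch (SOCK_NONBLOCK is
            # not available on every platform); the helper name is illustrative.
            def _sketch_sock_nonblock_states():
                import socket
                s = socket.socket(socket.AF_INET,
                                  socket.SOCK_STREAM | socket.SOCK_NONBLOCK)
                with s:
                    states = [(s.gettimeout(), s.getblocking())]       # (0.0, False)
                    s.settimeout(2.0)
                    states.append((s.gettimeout(), s.getblocking()))   # (2.0, True)
                    s.setblocking(True)
                    states.append((s.gettimeout(), s.getblocking()))   # (None, True)
                    return states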
self.assertTrue(s.getblocking()) else: self.assertEqual(s.type, socket.SOCK_STREAM) self.assertEqual(s.gettimeout(), None) self.assertFalse( fcntl.fcntl(s, fcntl.F_GETFL, os.O_NONBLOCK) & os.O_NONBLOCK) self.assertTrue(s.getblocking()) @support.requires_linux_version(2, 6, 28) def test_SOCK_NONBLOCK(self): # a lot of it seems silly and redundant, but I wanted to test that # changing back and forth worked ok with socket.socket(socket.AF_INET, socket.SOCK_STREAM | socket.SOCK_NONBLOCK) as s: self.checkNonblock(s) s.setblocking(True) self.checkNonblock(s, nonblock=False) s.setblocking(False) self.checkNonblock(s) s.settimeout(None) self.checkNonblock(s, nonblock=False) s.settimeout(2.0) self.checkNonblock(s, timeout=2.0) s.setblocking(True) self.checkNonblock(s, nonblock=False) # defaulttimeout t = socket.getdefaulttimeout() socket.setdefaulttimeout(0.0) with socket.socket() as s: self.checkNonblock(s) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(2.0) with socket.socket() as s: self.checkNonblock(s, timeout=2.0) socket.setdefaulttimeout(None) with socket.socket() as s: self.checkNonblock(s, False) socket.setdefaulttimeout(t) @unittest.skipUnless(os.name == "nt", "Windows specific") @unittest.skipUnless(multiprocessing, "need multiprocessing") class TestSocketSharing(SocketTCPTest): # This must be classmethod and not staticmethod or multiprocessing # won't be able to bootstrap it. @classmethod def remoteProcessServer(cls, q): # Recreate socket from shared data sdata = q.get() message = q.get() s = socket.fromshare(sdata) s2, c = s.accept() # Send the message s2.sendall(message) s2.close() s.close() def testShare(self): # Transfer the listening server socket to another process # and service it from there. # Create process: q = multiprocessing.Queue() p = multiprocessing.Process(target=self.remoteProcessServer, args=(q,)) p.start() # Get the shared socket data data = self.serv.share(p.pid) # Pass the shared socket to the other process addr = self.serv.getsockname() self.serv.close() q.put(data) # The data that the server will send us message = b"slapmahfro" q.put(message) # Connect s = socket.create_connection(addr) # listen for the data m = [] while True: data = s.recv(100) if not data: break m.append(data) s.close() received = b"".join(m) self.assertEqual(received, message) p.join() def testShareLength(self): data = self.serv.share(os.getpid()) self.assertRaises(ValueError, socket.fromshare, data[:-1]) self.assertRaises(ValueError, socket.fromshare, data+b"foo") def compareSockets(self, org, other): # socket sharing is expected to work only for blocking socket # since the internal python timeout value isn't transferred. self.assertEqual(org.gettimeout(), None) self.assertEqual(org.gettimeout(), other.gettimeout()) self.assertEqual(org.family, other.family) self.assertEqual(org.type, other.type) # If the user specified "0" for proto, then # internally windows will have picked the correct value. # Python introspection on the socket however will still return # 0. For the shared socket, the python value is recreated # from the actual value, so it may not compare correctly. 
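        # The share()/fromshare() round trip tested above, reduced to its core: the
        # parent serialises a listening socket for a specific child pid, hands the
        # opaque bytes to that process, and the child rebuilds a working socket from
        # them. Windows-only; the two function names and the split into "parent" and
        # "child" helpers are illustrative.
        def _sketch_share_parent(listener, child_pid):
            # The returned bytes are only usable by the process with child_pid.
            return listener.share(child_pid)

        def _sketch_share_child(shared_bytes):
            import socket
            s = socket.fromshare(shared_bytes)         # recreate the socket in the child
            conn, _addr = s.accept()
            conn.close()
            s.close()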
if org.proto != 0: self.assertEqual(org.proto, other.proto) def testShareLocal(self): data = self.serv.share(os.getpid()) s = socket.fromshare(data) try: self.compareSockets(self.serv, s) finally: s.close() def testTypes(self): families = [socket.AF_INET, socket.AF_INET6] types = [socket.SOCK_STREAM, socket.SOCK_DGRAM] for f in families: for t in types: try: source = socket.socket(f, t) except OSError: continue # This combination is not supported try: data = source.share(os.getpid()) shared = socket.fromshare(data) try: self.compareSockets(source, shared) finally: shared.close() finally: source.close() class SendfileUsingSendTest(ThreadedTCPSocketTest): """ Test the send() implementation of socket.sendfile(). """ FILESIZE = (10 * 1024 * 1024) # 10 MiB BUFSIZE = 8192 FILEDATA = b"" TIMEOUT = support.LOOPBACK_TIMEOUT @classmethod def setUpClass(cls): def chunks(total, step): assert total >= step while total > step: yield step total -= step if total: yield total chunk = b"".join([random.choice(string.ascii_letters).encode() for i in range(cls.BUFSIZE)]) with open(support.TESTFN, 'wb') as f: for csize in chunks(cls.FILESIZE, cls.BUFSIZE): f.write(chunk) with open(support.TESTFN, 'rb') as f: cls.FILEDATA = f.read() assert len(cls.FILEDATA) == cls.FILESIZE @classmethod def tearDownClass(cls): support.unlink(support.TESTFN) def accept_conn(self): self.serv.settimeout(support.LONG_TIMEOUT) conn, addr = self.serv.accept() conn.settimeout(self.TIMEOUT) self.addCleanup(conn.close) return conn def recv_data(self, conn): received = [] while True: chunk = conn.recv(self.BUFSIZE) if not chunk: break received.append(chunk) return b''.join(received) def meth_from_sock(self, sock): # Depending on the mixin class being run return either send() # or sendfile() method implementation. 
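        # The public entry point behind both code paths compared here is
        # socket.sendfile(), which uses os.sendfile() when the platform and the file
        # support it and otherwise falls back to the plain send() loop that this
        # class forces via _sendfile_use_send. A minimal usage sketch, assuming a
        # regular file path and a reachable (host, port) address:
        def _sketch_sendfile(path, address):
            import socket
            with open(path, "rb") as f, socket.create_connection(address) as sock:
                return sock.sendfile(f)                # total bytes sent; needs a blocking socket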
return getattr(sock, "_sendfile_use_send") # regular file def _testRegularFile(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) def testRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # non regular file def _testNonRegularFile(self): address = self.serv.getsockname() file = io.BytesIO(self.FILEDATA) with socket.create_connection(address) as sock, file as file: sent = sock.sendfile(file) self.assertEqual(sent, self.FILESIZE) self.assertEqual(file.tell(), self.FILESIZE) self.assertRaises(socket._GiveupOnSendfile, sock._sendfile_use_sendfile, file) def testNonRegularFile(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # empty file def _testEmptyFileSend(self): address = self.serv.getsockname() filename = support.TESTFN + "2" with open(filename, 'wb'): self.addCleanup(support.unlink, filename) file = open(filename, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, 0) self.assertEqual(file.tell(), 0) def testEmptyFileSend(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(data, b"") # offset def _testOffset(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: meth = self.meth_from_sock(sock) sent = meth(file, offset=5000) self.assertEqual(sent, self.FILESIZE - 5000) self.assertEqual(file.tell(), self.FILESIZE) def testOffset(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE - 5000) self.assertEqual(data, self.FILEDATA[5000:]) # count def _testCount(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 5000007 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCount(self): count = 5000007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count small def _testCountSmall(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: count = 1 meth = self.meth_from_sock(sock) sent = meth(file, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count) def testCountSmall(self): count = 1 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) self.assertEqual(data, self.FILEDATA[:count]) # count + offset def _testCountWithOffset(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') with socket.create_connection(address, timeout=2) as sock, file as file: count = 100007 meth = self.meth_from_sock(sock) sent = meth(file, offset=2007, count=count) self.assertEqual(sent, count) self.assertEqual(file.tell(), count + 2007) def testCountWithOffset(self): count = 100007 conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), count) 
self.assertEqual(data, self.FILEDATA[2007:count+2007]) # non blocking sockets are not supposed to work def _testNonBlocking(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') with socket.create_connection(address) as sock, file as file: sock.setblocking(False) meth = self.meth_from_sock(sock) self.assertRaises(ValueError, meth, file) self.assertRaises(ValueError, sock.sendfile, file) def testNonBlocking(self): conn = self.accept_conn() if conn.recv(8192): self.fail('was not supposed to receive any data') # timeout (non-triggered) def _testWithTimeout(self): address = self.serv.getsockname() file = open(support.TESTFN, 'rb') sock = socket.create_connection(address, timeout=support.LOOPBACK_TIMEOUT) with sock, file: meth = self.meth_from_sock(sock) sent = meth(file) self.assertEqual(sent, self.FILESIZE) def testWithTimeout(self): conn = self.accept_conn() data = self.recv_data(conn) self.assertEqual(len(data), self.FILESIZE) self.assertEqual(data, self.FILEDATA) # timeout (triggered) def _testWithTimeoutTriggeredSend(self): address = self.serv.getsockname() with open(support.TESTFN, 'rb') as file: with socket.create_connection(address) as sock: sock.settimeout(0.01) meth = self.meth_from_sock(sock) self.assertRaises(socket.timeout, meth, file) def testWithTimeoutTriggeredSend(self): conn = self.accept_conn() conn.recv(88192) time.sleep(1) # errors def _test_errors(self): pass def test_errors(self): with open(support.TESTFN, 'rb') as file: with socket.socket(type=socket.SOCK_DGRAM) as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "SOCK_STREAM", meth, file) with open(support.TESTFN, 'rt') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex( ValueError, "binary mode", meth, file) with open(support.TESTFN, 'rb') as file: with socket.socket() as s: meth = self.meth_from_sock(s) self.assertRaisesRegex(TypeError, "positive integer", meth, file, count='2') self.assertRaisesRegex(TypeError, "positive integer", meth, file, count=0.1) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=0) self.assertRaisesRegex(ValueError, "positive integer", meth, file, count=-1) @unittest.skipUnless(hasattr(os, "sendfile"), 'os.sendfile() required for this test.') class SendfileUsingSendfileTest(SendfileUsingSendTest): """ Test the sendfile() implementation of socket.sendfile(). 
""" def meth_from_sock(self, sock): return getattr(sock, "_sendfile_use_sendfile") @unittest.skipUnless(HAVE_SOCKET_ALG, 'AF_ALG required') class LinuxKernelCryptoAPI(unittest.TestCase): # tests for AF_ALG def create_alg(self, typ, name): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) try: sock.bind((typ, name)) except FileNotFoundError as e: # type / algorithm is not available sock.close() raise unittest.SkipTest(str(e), typ, name) else: return sock # bpo-31705: On kernel older than 4.5, sendto() failed with ENOKEY, # at least on ppc64le architecture @support.requires_linux_version(4, 5) def test_sha256(self): expected = bytes.fromhex("ba7816bf8f01cfea414140de5dae2223b00361a396" "177a9cb410ff61f20015ad") with self.create_alg('hash', 'sha256') as algo: op, _ = algo.accept() with op: op.sendall(b"abc") self.assertEqual(op.recv(512), expected) op, _ = algo.accept() with op: op.send(b'a', socket.MSG_MORE) op.send(b'b', socket.MSG_MORE) op.send(b'c', socket.MSG_MORE) op.send(b'') self.assertEqual(op.recv(512), expected) def test_hmac_sha1(self): expected = bytes.fromhex("effcdf6ae5eb2fa2d27416d5f184df9c259a7c79") with self.create_alg('hash', 'hmac(sha1)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, b"Jefe") op, _ = algo.accept() with op: op.sendall(b"what do ya want for nothing?") self.assertEqual(op.recv(512), expected) # Although it should work with 3.19 and newer the test blocks on # Ubuntu 15.10 with Kernel 4.2.0-19. @support.requires_linux_version(4, 3) def test_aes_cbc(self): key = bytes.fromhex('06a9214036b8a15b512e03d534120006') iv = bytes.fromhex('3dafba429d9eb430b422da802c9fac41') msg = b"Single block msg" ciphertext = bytes.fromhex('e353779c1079aeb82708942dbe77181a') msglen = len(msg) with self.create_alg('skcipher', 'cbc(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, flags=socket.MSG_MORE) op.sendall(msg) self.assertEqual(op.recv(msglen), ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=iv) self.assertEqual(op.recv(msglen), msg) # long message multiplier = 1024 longmsg = [msg] * multiplier op, _ = algo.accept() with op: op.sendmsg_afalg(longmsg, op=socket.ALG_OP_ENCRYPT, iv=iv) enc = op.recv(msglen * multiplier) self.assertEqual(len(enc), msglen * multiplier) self.assertEqual(enc[:msglen], ciphertext) op, _ = algo.accept() with op: op.sendmsg_afalg([enc], op=socket.ALG_OP_DECRYPT, iv=iv) dec = op.recv(msglen * multiplier) self.assertEqual(len(dec), msglen * multiplier) self.assertEqual(dec, msg * multiplier) @support.requires_linux_version(4, 9) # see issue29324 def test_aead_aes_gcm(self): key = bytes.fromhex('c939cc13397c1d37de6ae0e1cb7c423c') iv = bytes.fromhex('b3d8cc017cbb89b39e0f67e2') plain = bytes.fromhex('c3b3c41f113a31b73d9a5cd432103069') assoc = bytes.fromhex('24825602bd12a984e0092d3e448eda5f') expected_ct = bytes.fromhex('93fe7d9e9bfd10348a5606e5cafa7354') expected_tag = bytes.fromhex('0032a1dc85f1c9786925a2e71d8272dd') taglen = len(expected_tag) assoclen = len(assoc) with self.create_alg('aead', 'gcm(aes)') as algo: algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, taglen) # send assoc, plain and tag buffer in separate steps op, _ = algo.accept() with op: op.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen, flags=socket.MSG_MORE) op.sendall(assoc, socket.MSG_MORE) op.sendall(plain) res = 
op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # now with msg op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg_afalg([msg], op=socket.ALG_OP_ENCRYPT, iv=iv, assoclen=assoclen) res = op.recv(assoclen + len(plain) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # create anc data manually pack_uint32 = struct.Struct('I').pack op, _ = algo.accept() with op: msg = assoc + plain op.sendmsg( [msg], ([socket.SOL_ALG, socket.ALG_SET_OP, pack_uint32(socket.ALG_OP_ENCRYPT)], [socket.SOL_ALG, socket.ALG_SET_IV, pack_uint32(len(iv)) + iv], [socket.SOL_ALG, socket.ALG_SET_AEAD_ASSOCLEN, pack_uint32(assoclen)], ) ) res = op.recv(len(msg) + taglen) self.assertEqual(expected_ct, res[assoclen:-taglen]) self.assertEqual(expected_tag, res[-taglen:]) # decrypt and verify op, _ = algo.accept() with op: msg = assoc + expected_ct + expected_tag op.sendmsg_afalg([msg], op=socket.ALG_OP_DECRYPT, iv=iv, assoclen=assoclen) res = op.recv(len(msg) - taglen) self.assertEqual(plain, res[assoclen:]) @support.requires_linux_version(4, 3) # see test_aes_cbc def test_drbg_pr_sha256(self): # deterministic random bit generator, prediction resistance, sha256 with self.create_alg('rng', 'drbg_pr_sha256') as algo: extra_seed = os.urandom(32) algo.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, extra_seed) op, _ = algo.accept() with op: rn = op.recv(32) self.assertEqual(len(rn), 32) def test_sendmsg_afalg_args(self): sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) with sock: with self.assertRaises(TypeError): sock.sendmsg_afalg() with self.assertRaises(TypeError): sock.sendmsg_afalg(op=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(1) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=None) with self.assertRaises(TypeError): sock.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, assoclen=-1) def test_length_restriction(self): # bpo-35050, off-by-one error in length check sock = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) self.addCleanup(sock.close) # salg_type[14] with self.assertRaises(FileNotFoundError): sock.bind(("t" * 13, "name")) with self.assertRaisesRegex(ValueError, "type too long"): sock.bind(("t" * 14, "name")) # salg_name[64] with self.assertRaises(FileNotFoundError): sock.bind(("type", "n" * 63)) with self.assertRaisesRegex(ValueError, "name too long"): sock.bind(("type", "n" * 64)) @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows") class TestMSWindowsTCPFlags(unittest.TestCase): knownTCPFlags = { # available since long time ago 'TCP_MAXSEG', 'TCP_NODELAY', # available starting with Windows 10 1607 'TCP_FASTOPEN', # available starting with Windows 10 1703 'TCP_KEEPCNT', # available starting with Windows 10 1709 'TCP_KEEPIDLE', 'TCP_KEEPINTVL' } def test_new_tcp_flags(self): provided = [s for s in dir(socket) if s.startswith('TCP')] unknown = [s for s in provided if s not in self.knownTCPFlags] self.assertEqual([], unknown, "New TCP flags were discovered. 
See bpo-32394 for more information") class CreateServerTest(unittest.TestCase): def test_address(self): port = socket_helper.find_unused_port() with socket.create_server(("127.0.0.1", port)) as sock: self.assertEqual(sock.getsockname()[0], "127.0.0.1") self.assertEqual(sock.getsockname()[1], port) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", port), family=socket.AF_INET6) as sock: self.assertEqual(sock.getsockname()[0], "::1") self.assertEqual(sock.getsockname()[1], port) def test_family_and_type(self): with socket.create_server(("127.0.0.1", 0)) as sock: self.assertEqual(sock.family, socket.AF_INET) self.assertEqual(sock.type, socket.SOCK_STREAM) if socket_helper.IPV6_ENABLED: with socket.create_server(("::1", 0), family=socket.AF_INET6) as s: self.assertEqual(s.family, socket.AF_INET6) self.assertEqual(sock.type, socket.SOCK_STREAM) def test_reuse_port(self): if not hasattr(socket, "SO_REUSEPORT"): with self.assertRaises(ValueError): socket.create_server(("localhost", 0), reuse_port=True) else: with socket.create_server(("localhost", 0)) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertEqual(opt, 0) with socket.create_server(("localhost", 0), reuse_port=True) as sock: opt = sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT) self.assertNotEqual(opt, 0) @unittest.skipIf(not hasattr(_socket, 'IPPROTO_IPV6') or not hasattr(_socket, 'IPV6_V6ONLY'), "IPV6_V6ONLY option not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_ipv6_only_default(self): with socket.create_server(("::1", 0), family=socket.AF_INET6) as sock: assert sock.getsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dualstack_ipv6_family(self): with socket.create_server(("::1", 0), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.assertEqual(sock.family, socket.AF_INET6) class CreateServerFunctionalTest(unittest.TestCase): timeout = support.LOOPBACK_TIMEOUT def echo_server(self, sock): def run(sock): with sock: conn, _ = sock.accept() with conn: event.wait(self.timeout) msg = conn.recv(1024) if not msg: return conn.sendall(msg) event = threading.Event() sock.settimeout(self.timeout) thread = threading.Thread(target=run, args=(sock, )) thread.start() self.addCleanup(thread.join, self.timeout) event.set() def echo_client(self, addr, family): with socket.socket(family=family) as sock: sock.settimeout(self.timeout) sock.connect(addr) sock.sendall(b'foo') self.assertEqual(sock.recv(1024), b'foo') def test_tcp4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port)) as sock: self.echo_server(sock) self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_tcp6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) # --- dual stack tests @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v4(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) 
self.echo_client(("127.0.0.1", port), socket.AF_INET) @unittest.skipIf(not socket.has_dualstack_ipv6(), "dualstack_ipv6 not supported") @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'IPv6 required for this test') def test_dual_stack_client_v6(self): port = socket_helper.find_unused_port() with socket.create_server(("", port), family=socket.AF_INET6, dualstack_ipv6=True) as sock: self.echo_server(sock) self.echo_client(("::1", port), socket.AF_INET6) @requireAttrs(socket, "send_fds") @requireAttrs(socket, "recv_fds") @requireAttrs(socket, "AF_UNIX") class SendRecvFdsTests(unittest.TestCase): def testSendAndRecvFds(self): def close_pipes(pipes): for fd1, fd2 in pipes: os.close(fd1) os.close(fd2) def close_fds(fds): for fd in fds: os.close(fd) # send 10 file descriptors pipes = [os.pipe() for _ in range(10)] self.addCleanup(close_pipes, pipes) fds = [rfd for rfd, wfd in pipes] # use a UNIX socket pair to exchange file descriptors locally sock1, sock2 = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) with sock1, sock2: socket.send_fds(sock1, [MSG], fds) # request more data and file descriptors than expected msg, fds2, flags, addr = socket.recv_fds(sock2, len(MSG) * 2, len(fds) * 2) self.addCleanup(close_fds, fds2) self.assertEqual(msg, MSG) self.assertEqual(len(fds2), len(fds)) self.assertEqual(flags, 0) # don't test addr # test that file descriptors are connected for index, fds in enumerate(pipes): rfd, wfd = fds os.write(wfd, str(index).encode()) for index, rfd in enumerate(fds2): data = os.read(rfd, 100) self.assertEqual(data, str(index).encode()) def setUpModule(): thread_info = support.threading_setup() unittest.addModuleCleanup(support.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_ssl.py000066400000000000000000006577051471441230600205100ustar00rootroot00000000000000# Test the support for SSL and sockets import sys import unittest import unittest.mock from test import support from test.support import socket_helper, warnings_helper import re import socket import select import struct import time import datetime import gc import http.client import os import errno import pprint import urllib.request import threading import traceback import asyncore import weakref import platform import sysconfig import functools try: import ctypes except ImportError: ctypes = None ssl = support.import_module("ssl") from ssl import TLSVersion, _TLSContentType, _TLSMessageType Py_DEBUG = hasattr(sys, 'gettotalrefcount') Py_DEBUG_WIN32 = Py_DEBUG and sys.platform == 'win32' PROTOCOLS = sorted(ssl._PROTOCOL_NAMES) HOST = socket_helper.HOST IS_LIBRESSL = ssl.OPENSSL_VERSION.startswith('LibreSSL') IS_OPENSSL_1_1_0 = not IS_LIBRESSL and ssl.OPENSSL_VERSION_INFO >= (1, 1, 0) IS_OPENSSL_1_1_1 = not IS_LIBRESSL and ssl.OPENSSL_VERSION_INFO >= (1, 1, 1) IS_OPENSSL_3_0_0 = not IS_LIBRESSL and ssl.OPENSSL_VERSION_INFO >= (3, 0, 0) PY_SSL_DEFAULT_CIPHERS = sysconfig.get_config_var('PY_SSL_DEFAULT_CIPHERS') PROTOCOL_TO_TLS_VERSION = {} for proto, ver in ( ("PROTOCOL_SSLv23", "SSLv3"), ("PROTOCOL_TLSv1", "TLSv1"), ("PROTOCOL_TLSv1_1", "TLSv1_1"), ): try: proto = getattr(ssl, proto) ver = getattr(ssl.TLSVersion, ver) except AttributeError: continue PROTOCOL_TO_TLS_VERSION[proto] = ver def data_file(*name): return os.path.join(os.path.dirname(__file__), *name) # The custom key and certificate files used in test_ssl are generated # using Lib/test/make_ssl_certs.py. 
# Other certificates are simply fetched from the Internet servers they # are meant to authenticate. CERTFILE = data_file("keycert.pem") BYTES_CERTFILE = os.fsencode(CERTFILE) ONLYCERT = data_file("ssl_cert.pem") ONLYKEY = data_file("ssl_key.pem") BYTES_ONLYCERT = os.fsencode(ONLYCERT) BYTES_ONLYKEY = os.fsencode(ONLYKEY) CERTFILE_PROTECTED = data_file("keycert.passwd.pem") ONLYKEY_PROTECTED = data_file("ssl_key.passwd.pem") KEY_PASSWORD = "somepass" CAPATH = data_file("capath") BYTES_CAPATH = os.fsencode(CAPATH) CAFILE_NEURONIO = data_file("capath", "4e1295a3.0") CAFILE_CACERT = data_file("capath", "5ed36f99.0") CERTFILE_INFO = { 'issuer': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'notAfter': 'Aug 26 14:23:15 2028 GMT', 'notBefore': 'Aug 29 14:23:15 2018 GMT', 'serialNumber': '98A7CF88C74A32ED', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } # empty CRL CRLFILE = data_file("revocation.crl") # Two keys and certs signed by the same CA (for SNI tests) SIGNED_CERTFILE = data_file("keycert3.pem") SIGNED_CERTFILE_HOSTNAME = 'localhost' SIGNED_CERTFILE_INFO = { 'OCSP': ('http://testca.pythontest.net/testca/ocsp/',), 'caIssuers': ('http://testca.pythontest.net/testca/pycacert.cer',), 'crlDistributionPoints': ('http://testca.pythontest.net/testca/revocation.crl',), 'issuer': ((('countryName', 'XY'),), (('organizationName', 'Python Software Foundation CA'),), (('commonName', 'our-ca-server'),)), 'notAfter': 'Oct 28 14:23:16 2037 GMT', 'notBefore': 'Aug 29 14:23:16 2018 GMT', 'serialNumber': 'CB2D80995A69525C', 'subject': ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'localhost'),)), 'subjectAltName': (('DNS', 'localhost'),), 'version': 3 } SIGNED_CERTFILE2 = data_file("keycert4.pem") SIGNED_CERTFILE2_HOSTNAME = 'fakehostname' SIGNED_CERTFILE_ECC = data_file("keycertecc.pem") SIGNED_CERTFILE_ECC_HOSTNAME = 'localhost-ecc' # Same certificate as pycacert.pem, but without extra text in file SIGNING_CA = data_file("capath", "ceff1710.0") # cert with all kinds of subject alt names ALLSANFILE = data_file("allsans.pem") IDNSANSFILE = data_file("idnsans.pem") NOSANFILE = data_file("nosan.pem") NOSAN_HOSTNAME = 'localhost' REMOTE_HOST = "self-signed.pythontest.net" EMPTYCERT = data_file("nullcert.pem") BADCERT = data_file("badcert.pem") NONEXISTINGCERT = data_file("XXXnonexisting.pem") BADKEY = data_file("badkey.pem") NOKIACERT = data_file("nokia.pem") NULLBYTECERT = data_file("nullbytecert.pem") TALOS_INVALID_CRLDP = data_file("talos-2019-0758.pem") DHFILE = data_file("ffdh3072.pem") BYTES_DHFILE = os.fsencode(DHFILE) # Not defined in all versions of OpenSSL OP_NO_COMPRESSION = getattr(ssl, "OP_NO_COMPRESSION", 0) OP_SINGLE_DH_USE = getattr(ssl, "OP_SINGLE_DH_USE", 0) OP_SINGLE_ECDH_USE = getattr(ssl, "OP_SINGLE_ECDH_USE", 0) OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0) OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0) OP_IGNORE_UNEXPECTED_EOF = getattr(ssl, "OP_IGNORE_UNEXPECTED_EOF", 0) # Ubuntu has patched OpenSSL and changed behavior of security level 2 # see https://bugs.python.org/issue41561#msg389003 def is_ubuntu(): try: # Assume that any references of "ubuntu" implies 
Ubuntu-like distro # The workaround is not required for 18.04, but doesn't hurt either. with open("/etc/os-release", encoding="utf-8") as f: return "ubuntu" in f.read() except FileNotFoundError: return False if is_ubuntu(): def seclevel_workaround(*ctxs): """"Lower security level to '1' and allow all ciphers for TLS 1.0/1""" for ctx in ctxs: if ( hasattr(ctx, "minimum_version") and ctx.minimum_version <= ssl.TLSVersion.TLSv1_1 ): ctx.set_ciphers("@SECLEVEL=1:ALL") else: def seclevel_workaround(*ctxs): pass def has_tls_protocol(protocol): """Check if a TLS protocol is available and enabled :param protocol: enum ssl._SSLMethod member or name :return: bool """ if isinstance(protocol, str): assert protocol.startswith('PROTOCOL_') protocol = getattr(ssl, protocol, None) if protocol is None: return False if protocol in { ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT }: # auto-negotiate protocols are always available return True name = protocol.name return has_tls_version(name[len('PROTOCOL_'):]) @functools.lru_cache def has_tls_version(version): """Check if a TLS/SSL version is enabled :param version: TLS version name or ssl.TLSVersion member :return: bool """ if version == "SSLv2": # never supported and not even in TLSVersion enum return False if isinstance(version, str): version = ssl.TLSVersion.__members__[version] # check compile time flags like ssl.HAS_TLSv1_2 if not getattr(ssl, f'HAS_{version.name}'): return False if IS_OPENSSL_3_0_0 and version < ssl.TLSVersion.TLSv1_2: # bpo43791: 3.0.0-alpha14 fails with TLSV1_ALERT_INTERNAL_ERROR return False # check runtime and dynamic crypto policy settings. A TLS version may # be compiled in but disabled by a policy or config option. ctx = ssl.SSLContext() if ( hasattr(ctx, 'minimum_version') and ctx.minimum_version != ssl.TLSVersion.MINIMUM_SUPPORTED and version < ctx.minimum_version ): return False if ( hasattr(ctx, 'maximum_version') and ctx.maximum_version != ssl.TLSVersion.MAXIMUM_SUPPORTED and version > ctx.maximum_version ): return False return True def requires_tls_version(version): """Decorator to skip tests when a required TLS version is not available :param version: TLS version name or ssl.TLSVersion member :return: """ def decorator(func): @functools.wraps(func) def wrapper(*args, **kw): if not has_tls_version(version): raise unittest.SkipTest(f"{version} is not available.") else: return func(*args, **kw) return wrapper return decorator requires_minimum_version = unittest.skipUnless( hasattr(ssl.SSLContext, 'minimum_version'), "required OpenSSL >= 1.1.0g" ) def handle_error(prefix): exc_format = ' '.join(traceback.format_exception(*sys.exc_info())) if support.verbose: sys.stdout.write(prefix + exc_format) def can_clear_options(): # 0.9.8m or higher return ssl._OPENSSL_API_VERSION >= (0, 9, 8, 13, 15) def no_sslv2_implies_sslv3_hello(): # 0.9.7h or higher return ssl.OPENSSL_VERSION_INFO >= (0, 9, 7, 8, 15) def have_verify_flags(): # 0.9.8 or higher return ssl.OPENSSL_VERSION_INFO >= (0, 9, 8, 0, 15) def _have_secp_curves(): if not ssl.HAS_ECDH: return False ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) try: ctx.set_ecdh_curve("secp384r1") except ValueError: return False else: return True HAVE_SECP_CURVES = _have_secp_curves() def utc_offset(): #NOTE: ignore issues like #1647654 # local time = utc time + utc offset if time.daylight and time.localtime().tm_isdst > 0: return -time.altzone # seconds return -time.timezone def asn1time(cert_time): # Some versions of OpenSSL ignore seconds, see #18207 # 0.9.8.i if 
ssl._OPENSSL_API_VERSION == (0, 9, 8, 9, 15): fmt = "%b %d %H:%M:%S %Y GMT" dt = datetime.datetime.strptime(cert_time, fmt) dt = dt.replace(second=0) cert_time = dt.strftime(fmt) # %d adds leading zero but ASN1_TIME_print() uses leading space if cert_time[4] == "0": cert_time = cert_time[:4] + " " + cert_time[5:] return cert_time needs_sni = unittest.skipUnless(ssl.HAS_SNI, "SNI support needed for this test") def test_wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLS, *, cert_reqs=ssl.CERT_NONE, ca_certs=None, ciphers=None, certfile=None, keyfile=None, **kwargs): context = ssl.SSLContext(ssl_version) if cert_reqs is not None: if cert_reqs == ssl.CERT_NONE: context.check_hostname = False context.verify_mode = cert_reqs if ca_certs is not None: context.load_verify_locations(ca_certs) if certfile is not None or keyfile is not None: context.load_cert_chain(certfile, keyfile) if ciphers is not None: context.set_ciphers(ciphers) return context.wrap_socket(sock, **kwargs) def testing_context(server_cert=SIGNED_CERTFILE): """Create context client_context, server_context, hostname = testing_context() """ if server_cert == SIGNED_CERTFILE: hostname = SIGNED_CERTFILE_HOSTNAME elif server_cert == SIGNED_CERTFILE2: hostname = SIGNED_CERTFILE2_HOSTNAME elif server_cert == NOSANFILE: hostname = NOSAN_HOSTNAME else: raise ValueError(server_cert) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(server_cert) server_context.load_verify_locations(SIGNING_CA) return client_context, server_context, hostname class BasicSocketTests(unittest.TestCase): def test_constants(self): ssl.CERT_NONE ssl.CERT_OPTIONAL ssl.CERT_REQUIRED ssl.OP_CIPHER_SERVER_PREFERENCE ssl.OP_SINGLE_DH_USE if ssl.HAS_ECDH: ssl.OP_SINGLE_ECDH_USE if ssl.OPENSSL_VERSION_INFO >= (1, 0): ssl.OP_NO_COMPRESSION self.assertIn(ssl.HAS_SNI, {True, False}) self.assertIn(ssl.HAS_ECDH, {True, False}) ssl.OP_NO_SSLv2 ssl.OP_NO_SSLv3 ssl.OP_NO_TLSv1 ssl.OP_NO_TLSv1_3 if ssl.OPENSSL_VERSION_INFO >= (1, 0, 1): ssl.OP_NO_TLSv1_1 ssl.OP_NO_TLSv1_2 self.assertEqual(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv23) def test_private_init(self): with self.assertRaisesRegex(TypeError, "public constructor"): with socket.socket() as s: ssl.SSLSocket(s) def test_str_for_enums(self): # Make sure that the PROTOCOL_* constants have enum-like string # reprs. 
proto = ssl.PROTOCOL_TLS self.assertEqual(str(proto), '_SSLMethod.PROTOCOL_TLS') ctx = ssl.SSLContext(proto) self.assertIs(ctx.protocol, proto) def test_random(self): v = ssl.RAND_status() if support.verbose: sys.stdout.write("\n RAND_status is %d (%s)\n" % (v, (v and "sufficient randomness") or "insufficient randomness")) data, is_cryptographic = ssl.RAND_pseudo_bytes(16) self.assertEqual(len(data), 16) self.assertEqual(is_cryptographic, v == 1) if v: data = ssl.RAND_bytes(16) self.assertEqual(len(data), 16) else: self.assertRaises(ssl.SSLError, ssl.RAND_bytes, 16) # negative num is invalid self.assertRaises(ValueError, ssl.RAND_bytes, -5) self.assertRaises(ValueError, ssl.RAND_pseudo_bytes, -5) if hasattr(ssl, 'RAND_egd'): self.assertRaises(TypeError, ssl.RAND_egd, 1) self.assertRaises(TypeError, ssl.RAND_egd, 'foo', 1) ssl.RAND_add("this is a random string", 75.0) ssl.RAND_add(b"this is a random bytes object", 75.0) ssl.RAND_add(bytearray(b"this is a random bytearray object"), 75.0) @unittest.skipUnless(os.name == 'posix', 'requires posix') def test_random_fork(self): status = ssl.RAND_status() if not status: self.fail("OpenSSL's PRNG has insufficient randomness") rfd, wfd = os.pipe() pid = os.fork() if pid == 0: try: os.close(rfd) child_random = ssl.RAND_pseudo_bytes(16)[0] self.assertEqual(len(child_random), 16) os.write(wfd, child_random) os.close(wfd) except BaseException: os._exit(1) else: os._exit(0) else: os.close(wfd) self.addCleanup(os.close, rfd) support.wait_process(pid, exitcode=0) child_random = os.read(rfd, 16) self.assertEqual(len(child_random), 16) parent_random = ssl.RAND_pseudo_bytes(16)[0] self.assertEqual(len(parent_random), 16) self.assertNotEqual(child_random, parent_random) maxDiff = None def test_parse_cert(self): # note that this uses an 'unofficial' function in _ssl.c, # provided solely for this test, to exercise the certificate # parsing code self.assertEqual( ssl._ssl._test_decode_cert(CERTFILE), CERTFILE_INFO ) self.assertEqual( ssl._ssl._test_decode_cert(SIGNED_CERTFILE), SIGNED_CERTFILE_INFO ) # Issue #13034: the subjectAltName in some certificates # (notably projects.developer.nokia.com:443) wasn't parsed p = ssl._ssl._test_decode_cert(NOKIACERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual(p['subjectAltName'], (('DNS', 'projects.developer.nokia.com'), ('DNS', 'projects.forum.nokia.com')) ) # extra OCSP and AIA fields self.assertEqual(p['OCSP'], ('http://ocsp.verisign.com',)) self.assertEqual(p['caIssuers'], ('http://SVRIntl-G3-aia.verisign.com/SVRIntlG3.cer',)) self.assertEqual(p['crlDistributionPoints'], ('http://SVRIntl-G3-crl.verisign.com/SVRIntlG3.crl',)) def test_parse_cert_CVE_2019_5010(self): p = ssl._ssl._test_decode_cert(TALOS_INVALID_CRLDP) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") self.assertEqual( p, { 'issuer': ( (('countryName', 'UK'),), (('commonName', 'cody-ca'),)), 'notAfter': 'Jun 14 18:00:58 2028 GMT', 'notBefore': 'Jun 18 18:00:58 2018 GMT', 'serialNumber': '02', 'subject': ((('countryName', 'UK'),), (('commonName', 'codenomicon-vm-2.test.lal.cisco.com'),)), 'subjectAltName': ( ('DNS', 'codenomicon-vm-2.test.lal.cisco.com'),), 'version': 3 } ) def test_parse_cert_CVE_2013_4238(self): p = ssl._ssl._test_decode_cert(NULLBYTECERT) if support.verbose: sys.stdout.write("\n" + pprint.pformat(p) + "\n") subject = ((('countryName', 'US'),), (('stateOrProvinceName', 'Oregon'),), (('localityName', 'Beaverton'),), (('organizationName', 'Python Software Foundation'),), 
(('organizationalUnitName', 'Python Core Development'),), (('commonName', 'null.python.org\x00example.org'),), (('emailAddress', 'python-dev@python.org'),)) self.assertEqual(p['subject'], subject) self.assertEqual(p['issuer'], subject) if ssl._OPENSSL_API_VERSION >= (0, 9, 8): san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '2001:DB8:0:0:0:0:0:1')) else: # OpenSSL 0.9.7 doesn't support IPv6 addresses in subjectAltName san = (('DNS', 'altnull.python.org\x00example.com'), ('email', 'null@python.org\x00user@example.org'), ('URI', 'http://null.python.org\x00http://example.org'), ('IP Address', '192.0.2.1'), ('IP Address', '')) self.assertEqual(p['subjectAltName'], san) def test_parse_all_sans(self): p = ssl._ssl._test_decode_cert(ALLSANFILE) self.assertEqual(p['subjectAltName'], ( ('DNS', 'allsans'), ('othername', ''), ('othername', ''), ('email', 'user@example.org'), ('DNS', 'www.example.org'), ('DirName', ((('countryName', 'XY'),), (('localityName', 'Castle Anthrax'),), (('organizationName', 'Python Software Foundation'),), (('commonName', 'dirname example'),))), ('URI', 'https://www.python.org/'), ('IP Address', '127.0.0.1'), ('IP Address', '0:0:0:0:0:0:0:1'), ('Registered ID', '1.2.3.4.5') ) ) def test_DER_to_PEM(self): with open(CAFILE_CACERT, 'r') as f: pem = f.read() d1 = ssl.PEM_cert_to_DER_cert(pem) p2 = ssl.DER_cert_to_PEM_cert(d1) d2 = ssl.PEM_cert_to_DER_cert(p2) self.assertEqual(d1, d2) if not p2.startswith(ssl.PEM_HEADER + '\n'): self.fail("DER-to-PEM didn't include correct header:\n%r\n" % p2) if not p2.endswith('\n' + ssl.PEM_FOOTER + '\n'): self.fail("DER-to-PEM didn't include correct footer:\n%r\n" % p2) def test_openssl_version(self): n = ssl.OPENSSL_VERSION_NUMBER t = ssl.OPENSSL_VERSION_INFO s = ssl.OPENSSL_VERSION self.assertIsInstance(n, int) self.assertIsInstance(t, tuple) self.assertIsInstance(s, str) # Some sanity checks follow # >= 0.9 self.assertGreaterEqual(n, 0x900000) # < 4.0 self.assertLess(n, 0x40000000) major, minor, fix, patch, status = t self.assertGreaterEqual(major, 1) self.assertLess(major, 4) self.assertGreaterEqual(minor, 0) self.assertLess(minor, 256) self.assertGreaterEqual(fix, 0) self.assertLess(fix, 256) self.assertGreaterEqual(patch, 0) self.assertLessEqual(patch, 63) self.assertGreaterEqual(status, 0) self.assertLessEqual(status, 15) libressl_ver = f"LibreSSL {major:d}" if major >= 3: # 3.x uses 0xMNN00PP0L openssl_ver = f"OpenSSL {major:d}.{minor:d}.{patch:d}" else: openssl_ver = f"OpenSSL {major:d}.{minor:d}.{fix:d}" self.assertTrue( s.startswith((openssl_ver, libressl_ver)), (s, t, hex(n)) ) @support.cpython_only def test_refcycle(self): # Issue #7943: an SSL object doesn't create reference cycles with # itself. s = socket.socket(socket.AF_INET) ss = test_wrap_socket(s) wr = weakref.ref(ss) with support.check_warnings(("", ResourceWarning)): del ss self.assertEqual(wr(), None) def test_wrapped_unconnected(self): # Methods on an unconnected SSLSocket propagate the original # OSError raise by the underlying socket object. 
s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertRaises(OSError, ss.recv, 1) self.assertRaises(OSError, ss.recv_into, bytearray(b'x')) self.assertRaises(OSError, ss.recvfrom, 1) self.assertRaises(OSError, ss.recvfrom_into, bytearray(b'x'), 1) self.assertRaises(OSError, ss.send, b'x') self.assertRaises(OSError, ss.sendto, b'x', ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.dup) self.assertRaises(NotImplementedError, ss.sendmsg, [b'x'], (), 0, ('0.0.0.0', 0)) self.assertRaises(NotImplementedError, ss.recvmsg, 100) self.assertRaises(NotImplementedError, ss.recvmsg_into, [bytearray(100)]) def test_timeout(self): # Issue #8524: when creating an SSL socket, the timeout of the # original socket should be retained. for timeout in (None, 0.0, 5.0): s = socket.socket(socket.AF_INET) s.settimeout(timeout) with test_wrap_socket(s) as ss: self.assertEqual(timeout, ss.gettimeout()) def test_errors_sslwrap(self): sock = socket.socket() self.assertRaisesRegex(ValueError, "certfile must be specified", ssl.wrap_socket, sock, keyfile=CERTFILE) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True) self.assertRaisesRegex(ValueError, "certfile must be specified for server-side operations", ssl.wrap_socket, sock, server_side=True, certfile="") with ssl.wrap_socket(sock, server_side=True, certfile=CERTFILE) as s: self.assertRaisesRegex(ValueError, "can't connect in server-side mode", s.connect, (HOST, 8080)) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=CERTFILE, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(OSError) as cm: with socket.socket() as sock: ssl.wrap_socket(sock, certfile=NONEXISTINGCERT, keyfile=NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) def bad_cert_test(self, certfile): """Check that trying to use the given client certificate fails""" certfile = os.path.join(os.path.dirname(__file__) or os.curdir, certfile) sock = socket.socket() self.addCleanup(sock.close) with self.assertRaises(ssl.SSLError): test_wrap_socket(sock, certfile=certfile) def test_empty_cert(self): """Wrapping with an empty cert file""" self.bad_cert_test("nullcert.pem") def test_malformed_cert(self): """Wrapping with a badly formatted certificate (syntax error)""" self.bad_cert_test("badcert.pem") def test_malformed_key(self): """Wrapping with a badly formatted key (syntax error)""" self.bad_cert_test("badkey.pem") def test_match_hostname(self): def ok(cert, hostname): ssl.match_hostname(cert, hostname) def fail(cert, hostname): self.assertRaises(ssl.CertificateError, ssl.match_hostname, cert, hostname) # -- Hostname matching -- cert = {'subject': ((('commonName', 'example.com'),),)} ok(cert, 'example.com') ok(cert, 'ExAmple.cOm') fail(cert, 'www.example.com') fail(cert, '.example.com') fail(cert, 'example.org') fail(cert, 'exampleXcom') cert = {'subject': ((('commonName', '*.a.com'),),)} ok(cert, 'foo.a.com') fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') # only match wildcards when they are the only thing # in left-most segment cert = {'subject': ((('commonName', 'f*.com'),),)} fail(cert, 'foo.com') fail(cert, 'f.com') fail(cert, 'bar.com') fail(cert, 'foo.a.com') fail(cert, 
'bar.foo.com') # NULL bytes are bad, CVE-2013-4073 cert = {'subject': ((('commonName', 'null.python.org\x00example.org'),),)} ok(cert, 'null.python.org\x00example.org') # or raise an error? fail(cert, 'example.org') fail(cert, 'null.python.org') # error cases with wildcards cert = {'subject': ((('commonName', '*.*.a.com'),),)} fail(cert, 'bar.foo.a.com') fail(cert, 'a.com') fail(cert, 'Xa.com') fail(cert, '.a.com') cert = {'subject': ((('commonName', 'a.*.com'),),)} fail(cert, 'a.foo.com') fail(cert, 'a..com') fail(cert, 'a.com') # wildcard doesn't match IDNA prefix 'xn--' idna = 'püthon.python.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} ok(cert, idna) cert = {'subject': ((('commonName', 'x*.python.org'),),)} fail(cert, idna) cert = {'subject': ((('commonName', 'xn--p*.python.org'),),)} fail(cert, idna) # wildcard in first fragment and IDNA A-labels in sequent fragments # are supported. idna = 'www*.pythön.org'.encode("idna").decode("ascii") cert = {'subject': ((('commonName', idna),),)} fail(cert, 'www.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'www1.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'ftp.pythön.org'.encode("idna").decode("ascii")) fail(cert, 'pythön.org'.encode("idna").decode("ascii")) # Slightly fake real-world example cert = {'notAfter': 'Jun 26 21:41:46 2011 GMT', 'subject': ((('commonName', 'linuxfrz.org'),),), 'subjectAltName': (('DNS', 'linuxfr.org'), ('DNS', 'linuxfr.com'), ('othername', ''))} ok(cert, 'linuxfr.org') ok(cert, 'linuxfr.com') # Not a "DNS" entry fail(cert, '') # When there is a subjectAltName, commonName isn't used fail(cert, 'linuxfrz.org') # A pristine real-world example cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),), (('commonName', 'mail.google.com'),))} ok(cert, 'mail.google.com') fail(cert, 'gmail.com') # Only commonName is considered fail(cert, 'California') # -- IPv4 matching -- cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': (('DNS', 'example.com'), ('IP Address', '10.11.12.13'), ('IP Address', '14.15.16.17'), ('IP Address', '127.0.0.1'))} ok(cert, '10.11.12.13') ok(cert, '14.15.16.17') # socket.inet_ntoa(socket.inet_aton('127.1')) == '127.0.0.1' fail(cert, '127.1') fail(cert, '14.15.16.17 ') fail(cert, '14.15.16.17 extra data') fail(cert, '14.15.16.18') fail(cert, 'example.net') # -- IPv6 matching -- if socket_helper.IPV6_ENABLED: cert = {'subject': ((('commonName', 'example.com'),),), 'subjectAltName': ( ('DNS', 'example.com'), ('IP Address', '2001:0:0:0:0:0:0:CAFE\n'), ('IP Address', '2003:0:0:0:0:0:0:BABA\n'))} ok(cert, '2001::cafe') ok(cert, '2003::baba') fail(cert, '2003::baba ') fail(cert, '2003::baba extra data') fail(cert, '2003::bebe') fail(cert, 'example.net') # -- Miscellaneous -- # Neither commonName nor subjectAltName cert = {'notAfter': 'Dec 18 23:59:59 2011 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),))} fail(cert, 'mail.google.com') # No DNS entry in subjectAltName but a commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('commonName', 'mail.google.com'),)), 'subjectAltName': (('othername', 'blabla'), )} ok(cert, 'mail.google.com') # No DNS entry 
subjectAltName and no commonName cert = {'notAfter': 'Dec 18 23:59:59 2099 GMT', 'subject': ((('countryName', 'US'),), (('stateOrProvinceName', 'California'),), (('localityName', 'Mountain View'),), (('organizationName', 'Google Inc'),)), 'subjectAltName': (('othername', 'blabla'),)} fail(cert, 'google.com') # Empty cert / no cert self.assertRaises(ValueError, ssl.match_hostname, None, 'example.com') self.assertRaises(ValueError, ssl.match_hostname, {}, 'example.com') # Issue #17980: avoid denials of service by refusing more than one # wildcard per fragment. cert = {'subject': ((('commonName', 'a*b.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "partial wildcards in leftmost label are not supported"): ssl.match_hostname(cert, 'axxb.example.com') cert = {'subject': ((('commonName', 'www.*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "wildcard can only be present in the leftmost label"): ssl.match_hostname(cert, 'www.sub.example.com') cert = {'subject': ((('commonName', 'a*b*.example.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, "too many wildcards"): ssl.match_hostname(cert, 'axxbxxc.example.com') cert = {'subject': ((('commonName', '*'),),)} with self.assertRaisesRegex( ssl.CertificateError, "sole wildcard without additional labels are not support"): ssl.match_hostname(cert, 'host') cert = {'subject': ((('commonName', '*.com'),),)} with self.assertRaisesRegex( ssl.CertificateError, r"hostname 'com' doesn't match '\*.com'"): ssl.match_hostname(cert, 'com') # extra checks for _inet_paton() for invalid in ['1', '', '1.2.3', '256.0.0.1', '127.0.0.1/24']: with self.assertRaises(ValueError): ssl._inet_paton(invalid) for ipaddr in ['127.0.0.1', '192.168.0.1']: self.assertTrue(ssl._inet_paton(ipaddr)) if socket_helper.IPV6_ENABLED: for ipaddr in ['::1', '2001:db8:85a3::8a2e:370:7334']: self.assertTrue(ssl._inet_paton(ipaddr)) def test_server_side(self): # server_hostname doesn't work for server sockets ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with socket.socket() as sock: self.assertRaises(ValueError, ctx.wrap_socket, sock, True, server_hostname="some.hostname") def test_unknown_channel_binding(self): # should raise ValueError for unknown type s = socket.create_server(('127.0.0.1', 0)) c = socket.socket(socket.AF_INET) c.connect(s.getsockname()) with test_wrap_socket(c, do_handshake_on_connect=False) as ss: with self.assertRaises(ValueError): ss.get_channel_binding("unknown-type") s.close() @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): # unconnected should return None for known type s = socket.socket(socket.AF_INET) with test_wrap_socket(s) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) # the same for server-side s = socket.socket(socket.AF_INET) with test_wrap_socket(s, server_side=True, certfile=CERTFILE) as ss: self.assertIsNone(ss.get_channel_binding("tls-unique")) def test_dealloc_warn(self): ss = test_wrap_socket(socket.socket(socket.AF_INET)) r = repr(ss) with self.assertWarns(ResourceWarning) as cm: ss = None support.gc_collect() self.assertIn(r, str(cm.warning.args[0])) def test_get_default_verify_paths(self): paths = ssl.get_default_verify_paths() self.assertEqual(len(paths), 6) self.assertIsInstance(paths, ssl.DefaultVerifyPaths) with support.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE paths = ssl.get_default_verify_paths() 
self.assertEqual(paths.cafile, CERTFILE) self.assertEqual(paths.capath, CAPATH) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_certificates(self): self.assertTrue(ssl.enum_certificates("CA")) self.assertTrue(ssl.enum_certificates("ROOT")) self.assertRaises(TypeError, ssl.enum_certificates) self.assertRaises(WindowsError, ssl.enum_certificates, "") trust_oids = set() for storename in ("CA", "ROOT"): store = ssl.enum_certificates(storename) self.assertIsInstance(store, list) for element in store: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 3) cert, enc, trust = element self.assertIsInstance(cert, bytes) self.assertIn(enc, {"x509_asn", "pkcs_7_asn"}) self.assertIsInstance(trust, (frozenset, set, bool)) if isinstance(trust, (frozenset, set)): trust_oids.update(trust) serverAuth = "1.3.6.1.5.5.7.3.1" self.assertIn(serverAuth, trust_oids) @unittest.skipUnless(sys.platform == "win32", "Windows specific") def test_enum_crls(self): self.assertTrue(ssl.enum_crls("CA")) self.assertRaises(TypeError, ssl.enum_crls) self.assertRaises(WindowsError, ssl.enum_crls, "") crls = ssl.enum_crls("CA") self.assertIsInstance(crls, list) for element in crls: self.assertIsInstance(element, tuple) self.assertEqual(len(element), 2) self.assertIsInstance(element[0], bytes) self.assertIn(element[1], {"x509_asn", "pkcs_7_asn"}) def test_asn1object(self): expected = (129, 'serverAuth', 'TLS Web Server Authentication', '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertEqual(val, expected) self.assertEqual(val.nid, 129) self.assertEqual(val.shortname, 'serverAuth') self.assertEqual(val.longname, 'TLS Web Server Authentication') self.assertEqual(val.oid, '1.3.6.1.5.5.7.3.1') self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object, 'serverAuth') val = ssl._ASN1Object.fromnid(129) self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertRaises(ValueError, ssl._ASN1Object.fromnid, -1) with self.assertRaisesRegex(ValueError, "unknown NID 100000"): ssl._ASN1Object.fromnid(100000) for i in range(1000): try: obj = ssl._ASN1Object.fromnid(i) except ValueError: pass else: self.assertIsInstance(obj.nid, int) self.assertIsInstance(obj.shortname, str) self.assertIsInstance(obj.longname, str) self.assertIsInstance(obj.oid, (str, type(None))) val = ssl._ASN1Object.fromname('TLS Web Server Authentication') self.assertEqual(val, expected) self.assertIsInstance(val, ssl._ASN1Object) self.assertEqual(ssl._ASN1Object.fromname('serverAuth'), expected) self.assertEqual(ssl._ASN1Object.fromname('1.3.6.1.5.5.7.3.1'), expected) with self.assertRaisesRegex(ValueError, "unknown object 'serverauth'"): ssl._ASN1Object.fromname('serverauth') def test_purpose_enum(self): val = ssl._ASN1Object('1.3.6.1.5.5.7.3.1') self.assertIsInstance(ssl.Purpose.SERVER_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.SERVER_AUTH, val) self.assertEqual(ssl.Purpose.SERVER_AUTH.nid, 129) self.assertEqual(ssl.Purpose.SERVER_AUTH.shortname, 'serverAuth') self.assertEqual(ssl.Purpose.SERVER_AUTH.oid, '1.3.6.1.5.5.7.3.1') val = ssl._ASN1Object('1.3.6.1.5.5.7.3.2') self.assertIsInstance(ssl.Purpose.CLIENT_AUTH, ssl._ASN1Object) self.assertEqual(ssl.Purpose.CLIENT_AUTH, val) self.assertEqual(ssl.Purpose.CLIENT_AUTH.nid, 130) self.assertEqual(ssl.Purpose.CLIENT_AUTH.shortname, 'clientAuth') self.assertEqual(ssl.Purpose.CLIENT_AUTH.oid, '1.3.6.1.5.5.7.3.2') def test_unsupported_dtls(self): s = socket.socket(socket.AF_INET, 
socket.SOCK_DGRAM) self.addCleanup(s.close) with self.assertRaises(NotImplementedError) as cx: test_wrap_socket(s, cert_reqs=ssl.CERT_NONE) self.assertEqual(str(cx.exception), "only stream sockets are supported") ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with self.assertRaises(NotImplementedError) as cx: ctx.wrap_socket(s) self.assertEqual(str(cx.exception), "only stream sockets are supported") def cert_time_ok(self, timestring, timestamp): self.assertEqual(ssl.cert_time_to_seconds(timestring), timestamp) def cert_time_fail(self, timestring): with self.assertRaises(ValueError): ssl.cert_time_to_seconds(timestring) @unittest.skipUnless(utc_offset(), 'local time needs to be different from UTC') def test_cert_time_to_seconds_timezone(self): # Issue #19940: ssl.cert_time_to_seconds() returns wrong # results if local timezone is not UTC self.cert_time_ok("May 9 00:00:00 2007 GMT", 1178668800.0) self.cert_time_ok("Jan 5 09:34:43 2018 GMT", 1515144883.0) def test_cert_time_to_seconds(self): timestring = "Jan 5 09:34:43 2018 GMT" ts = 1515144883.0 self.cert_time_ok(timestring, ts) # accept keyword parameter, assert its name self.assertEqual(ssl.cert_time_to_seconds(cert_time=timestring), ts) # accept both %e and %d (space or zero generated by strftime) self.cert_time_ok("Jan 05 09:34:43 2018 GMT", ts) # case-insensitive self.cert_time_ok("JaN 5 09:34:43 2018 GmT", ts) self.cert_time_fail("Jan 5 09:34 2018 GMT") # no seconds self.cert_time_fail("Jan 5 09:34:43 2018") # no GMT self.cert_time_fail("Jan 5 09:34:43 2018 UTC") # not GMT timezone self.cert_time_fail("Jan 35 09:34:43 2018 GMT") # invalid day self.cert_time_fail("Jon 5 09:34:43 2018 GMT") # invalid month self.cert_time_fail("Jan 5 24:00:00 2018 GMT") # invalid hour self.cert_time_fail("Jan 5 09:60:43 2018 GMT") # invalid minute newyear_ts = 1230768000.0 # leap seconds self.cert_time_ok("Dec 31 23:59:60 2008 GMT", newyear_ts) # same timestamp self.cert_time_ok("Jan 1 00:00:00 2009 GMT", newyear_ts) self.cert_time_ok("Jan 5 09:34:59 2018 GMT", 1515144899) # allow 60th second (even if it is not a leap second) self.cert_time_ok("Jan 5 09:34:60 2018 GMT", 1515144900) # allow 2nd leap second for compatibility with time.strptime() self.cert_time_ok("Jan 5 09:34:61 2018 GMT", 1515144901) self.cert_time_fail("Jan 5 09:34:62 2018 GMT") # invalid seconds # no special treatment for the special value: # 99991231235959Z (rfc 5280) self.cert_time_ok("Dec 31 23:59:59 9999 GMT", 253402300799.0) @support.run_with_locale('LC_ALL', '') def test_cert_time_to_seconds_locale(self): # `cert_time_to_seconds()` should be locale independent def local_february_name(): return time.strftime('%b', (1, 2, 3, 4, 5, 6, 0, 0, 0)) if local_february_name().lower() == 'feb': self.skipTest("locale-specific month name needs to be " "different from C locale") # locale-independent self.cert_time_ok("Feb 9 00:00:00 2007 GMT", 1170979200.0) self.cert_time_fail(local_february_name() + " 9 00:00:00 2007 GMT") def test_connect_ex_error(self): server = socket.socket(socket.AF_INET) self.addCleanup(server.close) port = socket_helper.bind_port(server) # Reserve port but don't listen s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) rc = s.connect_ex((HOST, port)) # Issue #19919: Windows machines or VMs hosted on Windows # machines sometimes return EWOULDBLOCK. 
errors = ( errno.ECONNREFUSED, errno.EHOSTUNREACH, errno.ETIMEDOUT, errno.EWOULDBLOCK, ) self.assertIn(rc, errors) class ContextTests(unittest.TestCase): def test_constructor(self): for protocol in PROTOCOLS: if has_tls_protocol(protocol): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.protocol, protocol) with warnings_helper.check_warnings(): ctx = ssl.SSLContext() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertRaises(ValueError, ssl.SSLContext, -1) self.assertRaises(ValueError, ssl.SSLContext, 42) def test_protocol(self): for proto in PROTOCOLS: if has_tls_protocol(proto): with warnings_helper.check_warnings(): ctx = ssl.SSLContext(proto) self.assertEqual(ctx.protocol, proto) def test_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers("ALL") ctx.set_ciphers("DEFAULT") with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): ctx.set_ciphers("^$:,;?*'dorothyx") @unittest.skipUnless(PY_SSL_DEFAULT_CIPHERS == 1, "Test applies only to Python default ciphers") def test_python_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ciphers = ctx.get_ciphers() for suite in ciphers: name = suite['name'] self.assertNotIn("PSK", name) self.assertNotIn("SRP", name) self.assertNotIn("MD5", name) self.assertNotIn("RC4", name) self.assertNotIn("3DES", name) @unittest.skipIf(ssl.OPENSSL_VERSION_INFO < (1, 0, 2, 0, 0), 'OpenSSL too old') def test_get_ciphers(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_ciphers('AESGCM') names = set(d['name'] for d in ctx.get_ciphers()) expected = { 'AES128-GCM-SHA256', 'ECDHE-ECDSA-AES128-GCM-SHA256', 'ECDHE-RSA-AES128-GCM-SHA256', 'DHE-RSA-AES128-GCM-SHA256', 'AES256-GCM-SHA384', 'ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384', 'DHE-RSA-AES256-GCM-SHA384', } intersection = names.intersection(expected) self.assertGreaterEqual( len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}" ) def test_options(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) # OP_ALL | OP_NO_SSLv2 | OP_NO_SSLv3 is the default value default = (ssl.OP_ALL | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) # SSLContext also enables these by default default |= (OP_NO_COMPRESSION | OP_CIPHER_SERVER_PREFERENCE | OP_SINGLE_DH_USE | OP_SINGLE_ECDH_USE | OP_ENABLE_MIDDLEBOX_COMPAT | OP_IGNORE_UNEXPECTED_EOF) self.assertEqual(default, ctx.options) ctx.options |= ssl.OP_NO_TLSv1 self.assertEqual(default | ssl.OP_NO_TLSv1, ctx.options) if can_clear_options(): ctx.options = (ctx.options & ~ssl.OP_NO_TLSv1) self.assertEqual(default, ctx.options) ctx.options = 0 # Ubuntu has OP_NO_SSLv3 forced on by default self.assertEqual(0, ctx.options & ~ssl.OP_NO_SSLv3) else: with self.assertRaises(ValueError): ctx.options = 0 def test_verify_mode_protocol(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) # Default value self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) ctx.verify_mode = ssl.CERT_OPTIONAL self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) with self.assertRaises(TypeError): ctx.verify_mode = None with self.assertRaises(ValueError): ctx.verify_mode = 42 ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) 
self.assertTrue(ctx.check_hostname) def test_hostname_checks_common_name(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.hostname_checks_common_name) if ssl.HAS_NEVER_CHECK_COMMON_NAME: ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = False self.assertFalse(ctx.hostname_checks_common_name) ctx.hostname_checks_common_name = True self.assertTrue(ctx.hostname_checks_common_name) else: with self.assertRaises(AttributeError): ctx.hostname_checks_common_name = True @requires_minimum_version @unittest.skipIf(IS_LIBRESSL, "see bpo-34001") def test_min_max_version(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # OpenSSL default is MINIMUM_SUPPORTED, however some vendors like # Fedora override the setting to TLS 1.0. minimum_range = { # stock OpenSSL ssl.TLSVersion.MINIMUM_SUPPORTED, # Fedora 29 uses TLS 1.0 by default ssl.TLSVersion.TLSv1, # RHEL 8 uses TLS 1.2 by default ssl.TLSVersion.TLSv1_2 } maximum_range = { # stock OpenSSL ssl.TLSVersion.MAXIMUM_SUPPORTED, # Fedora 32 uses TLS 1.3 by default ssl.TLSVersion.TLSv1_3 } self.assertIn( ctx.minimum_version, minimum_range ) self.assertIn( ctx.maximum_version, maximum_range ) ctx.minimum_version = ssl.TLSVersion.TLSv1_1 ctx.maximum_version = ssl.TLSVersion.TLSv1_2 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.TLSv1_1 ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1_2 ) ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED ctx.maximum_version = ssl.TLSVersion.TLSv1 self.assertEqual( ctx.minimum_version, ssl.TLSVersion.MINIMUM_SUPPORTED ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.TLSv1 ) ctx.maximum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED self.assertIn( ctx.maximum_version, {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3} ) ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED self.assertIn( ctx.minimum_version, {ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3} ) with self.assertRaises(ValueError): ctx.minimum_version = 42 if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1) self.assertIn( ctx.minimum_version, minimum_range ) self.assertEqual( ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED ) with self.assertRaises(ValueError): ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED with self.assertRaises(ValueError): ctx.maximum_version = ssl.TLSVersion.TLSv1 @unittest.skipUnless(have_verify_flags(), "verify_flags need OpenSSL > 0.9.8") def test_verify_flags(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # default value tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT | tf) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF) ctx.verify_flags = ssl.VERIFY_CRL_CHECK_CHAIN self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_CHAIN) ctx.verify_flags = ssl.VERIFY_DEFAULT self.assertEqual(ctx.verify_flags, ssl.VERIFY_DEFAULT) # supports any value ctx.verify_flags = ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT self.assertEqual(ctx.verify_flags, ssl.VERIFY_CRL_CHECK_LEAF | ssl.VERIFY_X509_STRICT) with self.assertRaises(TypeError): ctx.verify_flags = None def test_load_cert_chain(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # Combined key and cert in a single file ctx.load_cert_chain(CERTFILE, keyfile=None) 
ctx.load_cert_chain(CERTFILE, keyfile=CERTFILE) self.assertRaises(TypeError, ctx.load_cert_chain, keyfile=CERTFILE) with self.assertRaises(OSError) as cm: ctx.load_cert_chain(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(BADCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(EMPTYCERT) # Separate key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_cert_chain(ONLYCERT, ONLYKEY) ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) ctx.load_cert_chain(certfile=BYTES_ONLYCERT, keyfile=BYTES_ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYCERT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(ONLYKEY) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_cert_chain(certfile=ONLYKEY, keyfile=ONLYCERT) # Mismatching key and cert ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) with self.assertRaisesRegex(ssl.SSLError, "key values mismatch"): ctx.load_cert_chain(CAFILE_CACERT, ONLYKEY) # Password protected key and cert ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD) ctx.load_cert_chain(CERTFILE_PROTECTED, password=KEY_PASSWORD.encode()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=bytearray(KEY_PASSWORD.encode())) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, KEY_PASSWORD.encode()) ctx.load_cert_chain(ONLYCERT, ONLYKEY_PROTECTED, bytearray(KEY_PASSWORD.encode())) with self.assertRaisesRegex(TypeError, "should be a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=True) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password="badpass") with self.assertRaisesRegex(ValueError, "cannot be longer"): # openssl has a fixed limit on the password buffer. # PEM_BUFSIZE is generally set to 1kb. # Return a string larger than this. 
ctx.load_cert_chain(CERTFILE_PROTECTED, password=b'a' * 102400) # Password callback def getpass_unicode(): return KEY_PASSWORD def getpass_bytes(): return KEY_PASSWORD.encode() def getpass_bytearray(): return bytearray(KEY_PASSWORD.encode()) def getpass_badpass(): return "badpass" def getpass_huge(): return b'a' * (1024 * 1024) def getpass_bad_type(): return 9 def getpass_exception(): raise Exception('getpass error') class GetPassCallable: def __call__(self): return KEY_PASSWORD def getpass(self): return KEY_PASSWORD ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_unicode) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytes) ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bytearray) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable()) ctx.load_cert_chain(CERTFILE_PROTECTED, password=GetPassCallable().getpass) with self.assertRaises(ssl.SSLError): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_badpass) with self.assertRaisesRegex(ValueError, "cannot be longer"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_huge) with self.assertRaisesRegex(TypeError, "must return a string"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_bad_type) with self.assertRaisesRegex(Exception, "getpass error"): ctx.load_cert_chain(CERTFILE_PROTECTED, password=getpass_exception) # Make sure the password function isn't called if it isn't needed ctx.load_cert_chain(CERTFILE, password=getpass_exception) def test_load_verify_locations(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_verify_locations(CERTFILE) ctx.load_verify_locations(cafile=CERTFILE, capath=None) ctx.load_verify_locations(BYTES_CERTFILE) ctx.load_verify_locations(cafile=BYTES_CERTFILE, capath=None) self.assertRaises(TypeError, ctx.load_verify_locations) self.assertRaises(TypeError, ctx.load_verify_locations, None, None, None) with self.assertRaises(OSError) as cm: ctx.load_verify_locations(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaisesRegex(ssl.SSLError, "PEM lib"): ctx.load_verify_locations(BADCERT) ctx.load_verify_locations(CERTFILE, CAPATH) ctx.load_verify_locations(CERTFILE, capath=BYTES_CAPATH) # Issue #10989: crash if the second argument type is invalid self.assertRaises(TypeError, ctx.load_verify_locations, None, True) def test_load_verify_cadata(self): # test cadata with open(CAFILE_CACERT) as f: cacert_pem = f.read() cacert_der = ssl.PEM_cert_to_DER_cert(cacert_pem) with open(CAFILE_NEURONIO) as f: neuronio_pem = f.read() neuronio_der = ssl.PEM_cert_to_DER_cert(neuronio_pem) # test PEM ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 0) ctx.load_verify_locations(cadata=cacert_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 1) ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=neuronio_pem) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = "\n".join((cacert_pem, neuronio_pem)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # with junk around the certs ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = ["head", cacert_pem, "other", neuronio_pem, "again", neuronio_pem, "tail"] ctx.load_verify_locations(cadata="\n".join(combined)) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # test DER ctx = 
ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(cadata=cacert_der) ctx.load_verify_locations(cadata=neuronio_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # cert already in hash table ctx.load_verify_locations(cadata=cacert_der) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # combined ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) combined = b"".join((cacert_der, neuronio_der)) ctx.load_verify_locations(cadata=combined) self.assertEqual(ctx.cert_store_stats()["x509_ca"], 2) # error cases ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_verify_locations, cadata=object) with self.assertRaisesRegex( ssl.SSLError, "no start line: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata="broken") with self.assertRaisesRegex( ssl.SSLError, "not enough data: cadata does not contain a certificate" ): ctx.load_verify_locations(cadata=b"broken") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_load_dh_params(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.load_dh_params(DHFILE) if os.name != 'nt': ctx.load_dh_params(BYTES_DHFILE) self.assertRaises(TypeError, ctx.load_dh_params) self.assertRaises(TypeError, ctx.load_dh_params, None) with self.assertRaises(FileNotFoundError) as cm: ctx.load_dh_params(NONEXISTINGCERT) self.assertEqual(cm.exception.errno, errno.ENOENT) with self.assertRaises(ssl.SSLError) as cm: ctx.load_dh_params(CERTFILE) def test_session_stats(self): for proto in PROTOCOLS: if not has_tls_protocol(proto): continue with warnings_helper.check_warnings(): ctx = ssl.SSLContext(proto) self.assertEqual(ctx.session_stats(), { 'number': 0, 'connect': 0, 'connect_good': 0, 'connect_renegotiate': 0, 'accept': 0, 'accept_good': 0, 'accept_renegotiate': 0, 'hits': 0, 'misses': 0, 'timeouts': 0, 'cache_full': 0, }) def test_set_default_verify_paths(self): # There's not much we can do to test that it acts as expected, # so just check it doesn't crash or raise an exception. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.set_default_verify_paths() @unittest.skipUnless(ssl.HAS_ECDH, "ECDH disabled on this OpenSSL build") def test_set_ecdh_curve(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) ctx.set_ecdh_curve("prime256v1") ctx.set_ecdh_curve(b"prime256v1") self.assertRaises(TypeError, ctx.set_ecdh_curve) self.assertRaises(TypeError, ctx.set_ecdh_curve, None) self.assertRaises(ValueError, ctx.set_ecdh_curve, "foo") self.assertRaises(ValueError, ctx.set_ecdh_curve, b"foo") @needs_sni def test_sni_callback(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # set_servername_callback expects a callable, or None self.assertRaises(TypeError, ctx.set_servername_callback) self.assertRaises(TypeError, ctx.set_servername_callback, 4) self.assertRaises(TypeError, ctx.set_servername_callback, "") self.assertRaises(TypeError, ctx.set_servername_callback, ctx) def dummycallback(sock, servername, ctx): pass ctx.set_servername_callback(None) ctx.set_servername_callback(dummycallback) @needs_sni def test_sni_callback_refcycle(self): # Reference cycles through the servername callback are detected # and cleared. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) def dummycallback(sock, servername, ctx, cycle=ctx): pass ctx.set_servername_callback(dummycallback) wr = weakref.ref(ctx) del ctx, dummycallback gc.collect() self.assertIs(wr(), None) def test_cert_store_stats(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_cert_chain(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 0}) ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 0, 'crl': 0, 'x509': 1}) ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.cert_store_stats(), {'x509_ca': 1, 'crl': 0, 'x509': 2}) def test_get_ca_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.get_ca_certs(), []) # CERTFILE is not flagged as X509v3 Basic Constraints: CA:TRUE ctx.load_verify_locations(CERTFILE) self.assertEqual(ctx.get_ca_certs(), []) # but CAFILE_CACERT is a CA cert ctx.load_verify_locations(CAFILE_CACERT) self.assertEqual(ctx.get_ca_certs(), [{'issuer': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'notAfter': asn1time('Mar 29 12:29:49 2033 GMT'), 'notBefore': asn1time('Mar 30 12:29:49 2003 GMT'), 'serialNumber': '00', 'crlDistributionPoints': ('https://www.cacert.org/revoke.crl',), 'subject': ((('organizationName', 'Root CA'),), (('organizationalUnitName', 'http://www.cacert.org'),), (('commonName', 'CA Cert Signing Authority'),), (('emailAddress', 'support@cacert.org'),)), 'version': 3}]) with open(CAFILE_CACERT) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) self.assertEqual(ctx.get_ca_certs(True), [der]) def test_load_default_certs(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.SERVER_AUTH) ctx.load_default_certs() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs(ssl.Purpose.CLIENT_AUTH) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertRaises(TypeError, ctx.load_default_certs, None) self.assertRaises(TypeError, ctx.load_default_certs, 'SERVER_AUTH') @unittest.skipIf(sys.platform == "win32", "not-Windows specific") @unittest.skipIf(IS_LIBRESSL, "LibreSSL doesn't support env vars") def test_load_default_certs_env(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with support.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() self.assertEqual(ctx.cert_store_stats(), {"crl": 0, "x509": 1, "x509_ca": 0}) @unittest.skipUnless(sys.platform == "win32", "Windows specific") @unittest.skipIf(hasattr(sys, "gettotalrefcount"), "Debug build does not share environment between CRTs") def test_load_default_certs_env_windows(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_default_certs() stats = ctx.cert_store_stats() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) with support.EnvironmentVarGuard() as env: env["SSL_CERT_DIR"] = CAPATH env["SSL_CERT_FILE"] = CERTFILE ctx.load_default_certs() stats["x509"] += 1 self.assertEqual(ctx.cert_store_stats(), stats) def _assert_context_options(self, ctx): self.assertEqual(ctx.options & ssl.OP_NO_SSLv2, ssl.OP_NO_SSLv2) if OP_NO_COMPRESSION != 0: self.assertEqual(ctx.options & OP_NO_COMPRESSION, OP_NO_COMPRESSION) if OP_SINGLE_DH_USE != 0: 
self.assertEqual(ctx.options & OP_SINGLE_DH_USE, OP_SINGLE_DH_USE) if OP_SINGLE_ECDH_USE != 0: self.assertEqual(ctx.options & OP_SINGLE_ECDH_USE, OP_SINGLE_ECDH_USE) if OP_CIPHER_SERVER_PREFERENCE != 0: self.assertEqual(ctx.options & OP_CIPHER_SERVER_PREFERENCE, OP_CIPHER_SERVER_PREFERENCE) def test_create_default_context(self): ctx = ssl.create_default_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) with open(SIGNING_CA) as f: cadata = f.read() ctx = ssl.create_default_context(cafile=SIGNING_CA, capath=CAPATH, cadata=cadata) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self._assert_context_options(ctx) ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test__create_stdlib_context(self): ctx = ssl._create_stdlib_context() self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self.assertFalse(ctx.check_hostname) self._assert_context_options(ctx) if has_tls_protocol(ssl.PROTOCOL_TLSv1): with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) with warnings_helper.check_warnings(): ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1, cert_reqs=ssl.CERT_REQUIRED, check_hostname=True) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertTrue(ctx.check_hostname) self._assert_context_options(ctx) ctx = ssl._create_stdlib_context(purpose=ssl.Purpose.CLIENT_AUTH) self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) self._assert_context_options(ctx) def test_check_hostname(self): ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set CERT_REQUIRED ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_REQUIRED self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) # Changing verify_mode does not affect check_hostname ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) # Auto set ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.check_hostname = False ctx.verify_mode = ssl.CERT_OPTIONAL ctx.check_hostname = False self.assertFalse(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # keep CERT_OPTIONAL ctx.check_hostname = True self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) # Cannot set CERT_NONE with check_hostname enabled with self.assertRaises(ValueError): ctx.verify_mode = ssl.CERT_NONE ctx.check_hostname = False self.assertFalse(ctx.check_hostname) ctx.verify_mode = ssl.CERT_NONE self.assertEqual(ctx.verify_mode, ssl.CERT_NONE) def test_context_client_server(self): # PROTOCOL_TLS_CLIENT has sane defaults ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) 
        self.assertTrue(ctx.check_hostname)
        self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED)

        # PROTOCOL_TLS_SERVER has different but also sane defaults
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        self.assertFalse(ctx.check_hostname)
        self.assertEqual(ctx.verify_mode, ssl.CERT_NONE)

    def test_context_custom_class(self):
        class MySSLSocket(ssl.SSLSocket):
            pass

        class MySSLObject(ssl.SSLObject):
            pass

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.sslsocket_class = MySSLSocket
        ctx.sslobject_class = MySSLObject

        with ctx.wrap_socket(socket.socket(), server_side=True) as sock:
            self.assertIsInstance(sock, MySSLSocket)
        obj = ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO())
        self.assertIsInstance(obj, MySSLObject)

    @unittest.skipUnless(IS_OPENSSL_1_1_1, "Test requires OpenSSL 1.1.1")
    def test_num_tickets(self):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        self.assertEqual(ctx.num_tickets, 2)
        ctx.num_tickets = 1
        self.assertEqual(ctx.num_tickets, 1)
        ctx.num_tickets = 0
        self.assertEqual(ctx.num_tickets, 0)
        with self.assertRaises(ValueError):
            ctx.num_tickets = -1
        with self.assertRaises(TypeError):
            ctx.num_tickets = None

        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        self.assertEqual(ctx.num_tickets, 2)
        with self.assertRaises(ValueError):
            ctx.num_tickets = 1


class SSLErrorTests(unittest.TestCase):

    def test_str(self):
        # The str() of a SSLError doesn't include the errno
        e = ssl.SSLError(1, "foo")
        self.assertEqual(str(e), "foo")
        self.assertEqual(e.errno, 1)
        # Same for a subclass
        e = ssl.SSLZeroReturnError(1, "foo")
        self.assertEqual(str(e), "foo")
        self.assertEqual(e.errno, 1)

    @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows")
    def test_lib_reason(self):
        # Test the library and reason attributes
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        with self.assertRaises(ssl.SSLError) as cm:
            ctx.load_dh_params(CERTFILE)
        self.assertEqual(cm.exception.library, 'PEM')
        self.assertEqual(cm.exception.reason, 'NO_START_LINE')
        s = str(cm.exception)
        self.assertTrue(s.startswith("[PEM: NO_START_LINE] no start line"), s)

    def test_subclass(self):
        # Check that the appropriate SSLError subclass is raised
        # (this only tests one of them)
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_server(("127.0.0.1", 0)) as s:
            c = socket.create_connection(s.getsockname())
            c.setblocking(False)
            with ctx.wrap_socket(c, False, do_handshake_on_connect=False) as c:
                with self.assertRaises(ssl.SSLWantReadError) as cm:
                    c.do_handshake()
                s = str(cm.exception)
                self.assertTrue(s.startswith("The operation did not complete (read)"), s)
                # For compatibility
                self.assertEqual(cm.exception.errno, ssl.SSL_ERROR_WANT_READ)

    def test_bad_server_hostname(self):
        ctx = ssl.create_default_context()
        with self.assertRaises(ValueError):
            ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                         server_hostname="")
        with self.assertRaises(ValueError):
            ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                         server_hostname=".example.org")
        with self.assertRaises(TypeError):
            ctx.wrap_bio(ssl.MemoryBIO(), ssl.MemoryBIO(),
                         server_hostname="example.org\x00evil.com")


class MemoryBIOTests(unittest.TestCase):

    def test_read_write(self):
        bio = ssl.MemoryBIO()
        bio.write(b'foo')
        self.assertEqual(bio.read(), b'foo')
        self.assertEqual(bio.read(), b'')
        bio.write(b'foo')
        bio.write(b'bar')
        self.assertEqual(bio.read(), b'foobar')
        self.assertEqual(bio.read(), b'')
        bio.write(b'baz')
        self.assertEqual(bio.read(2), b'ba')
        self.assertEqual(bio.read(1), b'z')
        self.assertEqual(bio.read(1), b'')

    def test_eof(self):
        bio = ssl.MemoryBIO()
        self.assertFalse(bio.eof)
        self.assertEqual(bio.read(), b'')
        self.assertFalse(bio.eof)
        bio.write(b'foo')
        self.assertFalse(bio.eof)
        bio.write_eof()
        self.assertFalse(bio.eof)
        self.assertEqual(bio.read(2), b'fo')
        self.assertFalse(bio.eof)
        self.assertEqual(bio.read(1), b'o')
        self.assertTrue(bio.eof)
        self.assertEqual(bio.read(), b'')
        self.assertTrue(bio.eof)

    def test_pending(self):
        bio = ssl.MemoryBIO()
        self.assertEqual(bio.pending, 0)
        bio.write(b'foo')
        self.assertEqual(bio.pending, 3)
        for i in range(3):
            bio.read(1)
            self.assertEqual(bio.pending, 3-i-1)
        for i in range(3):
            bio.write(b'x')
            self.assertEqual(bio.pending, i+1)
        bio.read()
        self.assertEqual(bio.pending, 0)

    def test_buffer_types(self):
        bio = ssl.MemoryBIO()
        bio.write(b'foo')
        self.assertEqual(bio.read(), b'foo')
        bio.write(bytearray(b'bar'))
        self.assertEqual(bio.read(), b'bar')
        bio.write(memoryview(b'baz'))
        self.assertEqual(bio.read(), b'baz')

    def test_error_types(self):
        bio = ssl.MemoryBIO()
        self.assertRaises(TypeError, bio.write, 'foo')
        self.assertRaises(TypeError, bio.write, None)
        self.assertRaises(TypeError, bio.write, True)
        self.assertRaises(TypeError, bio.write, 1)


class SSLObjectTests(unittest.TestCase):

    def test_private_init(self):
        bio = ssl.MemoryBIO()
        with self.assertRaisesRegex(TypeError, "public constructor"):
            ssl.SSLObject(bio, bio)

    def test_unwrap(self):
        client_ctx, server_ctx, hostname = testing_context()
        c_in = ssl.MemoryBIO()
        c_out = ssl.MemoryBIO()
        s_in = ssl.MemoryBIO()
        s_out = ssl.MemoryBIO()
        client = client_ctx.wrap_bio(c_in, c_out, server_hostname=hostname)
        server = server_ctx.wrap_bio(s_in, s_out, server_side=True)

        # Loop on the handshake for a bit to get it settled
        for _ in range(5):
            try:
                client.do_handshake()
            except ssl.SSLWantReadError:
                pass
            if c_out.pending:
                s_in.write(c_out.read())
            try:
                server.do_handshake()
            except ssl.SSLWantReadError:
                pass
            if s_out.pending:
                c_in.write(s_out.read())

        # Now the handshakes should be complete (don't raise WantReadError)
        client.do_handshake()
        server.do_handshake()

        # Now if we unwrap one side unilaterally, it should send close-notify
        # and raise WantReadError:
        with self.assertRaises(ssl.SSLWantReadError):
            client.unwrap()

        # But server.unwrap() does not raise, because it reads the client's
        # close-notify:
        s_in.write(c_out.read())
        server.unwrap()

        # And now that the client gets the server's close-notify, it doesn't
        # raise either.
        c_in.write(s_out.read())
        client.unwrap()


class SimpleBackgroundTests(unittest.TestCase):
    """Tests that connect to a simple server running in the background"""

    def setUp(self):
        server = ThreadedEchoServer(SIGNED_CERTFILE)
        self.server_addr = (HOST, server.port)
        server.__enter__()
        self.addCleanup(server.__exit__, None, None, None)

    def test_connect(self):
        with test_wrap_socket(socket.socket(socket.AF_INET),
                              cert_reqs=ssl.CERT_NONE) as s:
            s.connect(self.server_addr)
            self.assertEqual({}, s.getpeercert())
            self.assertFalse(s.server_side)

        # this should succeed because we specify the root cert
        with test_wrap_socket(socket.socket(socket.AF_INET),
                              cert_reqs=ssl.CERT_REQUIRED,
                              ca_certs=SIGNING_CA) as s:
            s.connect(self.server_addr)
            self.assertTrue(s.getpeercert())
            self.assertFalse(s.server_side)

    def test_connect_fail(self):
        # This should fail because we have no verification certs. Connection
        # failure crashes ThreadedEchoServer, so run this in an independent
        # test method.
s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_ex(self): # Issue #11326: check connect_ex() implementation s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA) self.addCleanup(s.close) self.assertEqual(0, s.connect_ex(self.server_addr)) self.assertTrue(s.getpeercert()) def test_non_blocking_connect_ex(self): # Issue #11326: non-blocking connect_ex() should allow handshake # to proceed after the socket gets ready. s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, ca_certs=SIGNING_CA, do_handshake_on_connect=False) self.addCleanup(s.close) s.setblocking(False) rc = s.connect_ex(self.server_addr) # EWOULDBLOCK under Windows, EINPROGRESS elsewhere self.assertIn(rc, (0, errno.EINPROGRESS, errno.EWOULDBLOCK)) # Wait for connect to finish select.select([], [s], [], 5.0) # Non-blocking handshake while True: try: s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], [], 5.0) except ssl.SSLWantWriteError: select.select([], [s], [], 5.0) # SSL established self.assertTrue(s.getpeercert()) def test_connect_with_context(self): # Same as test_connect, but with a separately created context ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) self.assertEqual({}, s.getpeercert()) # Same with a server hostname with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname="dummy") as s: s.connect(self.server_addr) ctx.verify_mode = ssl.CERT_REQUIRED # This should succeed because we specify the root cert ctx.load_verify_locations(SIGNING_CA) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_with_context_fail(self): # This should fail because we have no verification certs. Connection # failure crashes ThreadedEchoServer, so run this in an independent # test method. ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_REQUIRED s = ctx.wrap_socket(socket.socket(socket.AF_INET)) self.addCleanup(s.close) self.assertRaisesRegex(ssl.SSLError, "certificate verify failed", s.connect, self.server_addr) def test_connect_capath(self): # Verify server certificates using the `capath` argument # NOTE: the subject hashing algorithm has been changed between # OpenSSL 0.9.8n and 1.0.0, as a result the capath directory must # contain both versions of each certificate (same content, different # filename) for this test to be portable across OpenSSL releases. 
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_REQUIRED ctx.load_verify_locations(capath=CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # Same with a bytes `capath` argument ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_REQUIRED ctx.load_verify_locations(capath=BYTES_CAPATH) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) def test_connect_cadata(self): with open(SIGNING_CA) as f: pem = f.read() der = ssl.PEM_cert_to_DER_cert(pem) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_REQUIRED ctx.load_verify_locations(cadata=pem) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) # same with DER ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_REQUIRED ctx.load_verify_locations(cadata=der) with ctx.wrap_socket(socket.socket(socket.AF_INET)) as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) @unittest.skipIf(os.name == "nt", "Can't use a socket as a file under Windows") def test_makefile_close(self): # Issue #5238: creating a file-like object with makefile() shouldn't # delay closing the underlying "real socket" (here tested with its # file descriptor, hence skipping the test under Windows). ss = test_wrap_socket(socket.socket(socket.AF_INET)) ss.connect(self.server_addr) fd = ss.fileno() f = ss.makefile() f.close() # The fd is still open os.read(fd, 0) # Closing the SSL socket should close the fd too ss.close() gc.collect() with self.assertRaises(OSError) as e: os.read(fd, 0) self.assertEqual(e.exception.errno, errno.EBADF) def test_non_blocking_handshake(self): s = socket.socket(socket.AF_INET) s.connect(self.server_addr) s.setblocking(False) s = test_wrap_socket(s, cert_reqs=ssl.CERT_NONE, do_handshake_on_connect=False) self.addCleanup(s.close) count = 0 while True: try: count += 1 s.do_handshake() break except ssl.SSLWantReadError: select.select([s], [], []) except ssl.SSLWantWriteError: select.select([], [s], []) if support.verbose: sys.stdout.write("\nNeeded %d calls to do_handshake() to establish session.\n" % count) def test_get_server_certificate(self): _test_get_server_certificate(self, *self.server_addr, cert=SIGNING_CA) def test_get_server_certificate_fail(self): # Connection failure crashes ThreadedEchoServer, so run this in an # independent test method _test_get_server_certificate_fail(self, *self.server_addr) def test_ciphers(self): with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="ALL") as s: s.connect(self.server_addr) with test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_NONE, ciphers="DEFAULT") as s: s.connect(self.server_addr) # Error checking can happen at instantiation or when connecting with self.assertRaisesRegex(ssl.SSLError, "No cipher can be selected"): with socket.socket(socket.AF_INET) as sock: s = test_wrap_socket(sock, cert_reqs=ssl.CERT_NONE, ciphers="^$:,;?*'dorothyx") s.connect(self.server_addr) def test_get_ca_certs_capath(self): # capath certs are loaded on request ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx.load_verify_locations(capath=CAPATH) self.assertEqual(ctx.get_ca_certs(), []) with ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname='localhost') as s: s.connect(self.server_addr) cert = s.getpeercert() self.assertTrue(cert) 
self.assertEqual(len(ctx.get_ca_certs()), 1) @needs_sni def test_context_setget(self): # Check that the context of a connected socket can be replaced. ctx1 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx1.load_verify_locations(capath=CAPATH) ctx2 = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) ctx2.load_verify_locations(capath=CAPATH) s = socket.socket(socket.AF_INET) with ctx1.wrap_socket(s, server_hostname='localhost') as ss: ss.connect(self.server_addr) self.assertIs(ss.context, ctx1) self.assertIs(ss._sslobj.context, ctx1) ss.context = ctx2 self.assertIs(ss.context, ctx2) self.assertIs(ss._sslobj.context, ctx2) def ssl_io_loop(self, sock, incoming, outgoing, func, *args, **kwargs): # A simple IO loop. Call func(*args) depending on the error we get # (WANT_READ or WANT_WRITE) move data between the socket and the BIOs. timeout = kwargs.get('timeout', support.SHORT_TIMEOUT) deadline = time.monotonic() + timeout count = 0 while True: if time.monotonic() > deadline: self.fail("timeout") errno = None count += 1 try: ret = func(*args) except ssl.SSLError as e: if e.errno not in (ssl.SSL_ERROR_WANT_READ, ssl.SSL_ERROR_WANT_WRITE): raise errno = e.errno # Get any data from the outgoing BIO irrespective of any error, and # send it to the socket. buf = outgoing.read() sock.sendall(buf) # If there's no error, we're done. For WANT_READ, we need to get # data from the socket and put it in the incoming BIO. if errno is None: break elif errno == ssl.SSL_ERROR_WANT_READ: buf = sock.recv(32768) if buf: incoming.write(buf) else: incoming.write_eof() if support.verbose: sys.stdout.write("Needed %d calls to complete %s().\n" % (count, func.__name__)) return ret def test_bio_handshake(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertTrue(ctx.check_hostname) self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) ctx.load_verify_locations(SIGNING_CA) sslobj = ctx.wrap_bio(incoming, outgoing, False, SIGNED_CERTFILE_HOSTNAME) self.assertIs(sslobj._sslobj.owner, sslobj) self.assertIsNone(sslobj.cipher()) self.assertIsNone(sslobj.version()) self.assertIsNotNone(sslobj.shared_ciphers()) self.assertRaises(ValueError, sslobj.getpeercert) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertIsNone(sslobj.get_channel_binding('tls-unique')) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) self.assertTrue(sslobj.cipher()) self.assertIsNotNone(sslobj.shared_ciphers()) self.assertIsNotNone(sslobj.version()) self.assertTrue(sslobj.getpeercert()) if 'tls-unique' in ssl.CHANNEL_BINDING_TYPES: self.assertTrue(sslobj.get_channel_binding('tls-unique')) try: self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) except ssl.SSLSyscallError: # If the server shuts down the TCP connection without sending a # secure shutdown message, this is reported as SSL_ERROR_SYSCALL pass self.assertRaises(ssl.SSLError, sslobj.write, b'foo') def test_bio_read_write_data(self): sock = socket.socket(socket.AF_INET) self.addCleanup(sock.close) sock.connect(self.server_addr) incoming = ssl.MemoryBIO() outgoing = ssl.MemoryBIO() ctx = ssl.SSLContext(ssl.PROTOCOL_TLS) ctx.verify_mode = ssl.CERT_NONE sslobj = ctx.wrap_bio(incoming, outgoing, False) self.ssl_io_loop(sock, incoming, outgoing, sslobj.do_handshake) req = b'FOO\n' self.ssl_io_loop(sock, incoming, outgoing, sslobj.write, req) buf = self.ssl_io_loop(sock, incoming, outgoing, sslobj.read, 1024) self.assertEqual(buf, 
b'foo\n') self.ssl_io_loop(sock, incoming, outgoing, sslobj.unwrap) @support.requires_resource('network') class NetworkedTests(unittest.TestCase): def test_timeout_connect_ex(self): # Issue #12065: on a timeout, connect_ex() should return the original # errno (mimicking the behaviour of non-SSL sockets). with socket_helper.transient_internet(REMOTE_HOST): s = test_wrap_socket(socket.socket(socket.AF_INET), cert_reqs=ssl.CERT_REQUIRED, do_handshake_on_connect=False) self.addCleanup(s.close) s.settimeout(0.0000001) rc = s.connect_ex((REMOTE_HOST, 443)) if rc == 0: self.skipTest("REMOTE_HOST responded too quickly") elif rc == errno.ENETUNREACH: self.skipTest("Network unreachable.") self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK)) @unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6') def test_get_server_certificate_ipv6(self): with socket_helper.transient_internet('ipv6.google.com'): _test_get_server_certificate(self, 'ipv6.google.com', 443) _test_get_server_certificate_fail(self, 'ipv6.google.com', 443) def _test_get_server_certificate(test, host, port, cert=None): pem = ssl.get_server_certificate((host, port)) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) pem = ssl.get_server_certificate((host, port), ca_certs=cert) if not pem: test.fail("No server certificate on %s:%s!" % (host, port)) if support.verbose: sys.stdout.write("\nVerified certificate for %s:%s is\n%s\n" % (host, port ,pem)) def _test_get_server_certificate_fail(test, host, port): try: pem = ssl.get_server_certificate((host, port), ca_certs=CERTFILE) except ssl.SSLError as x: #should fail if support.verbose: sys.stdout.write("%s\n" % x) else: test.fail("Got server certificate %s for %s:%s!" % (pem, host, port)) from test.ssl_servers import make_https_server class ThreadedEchoServer(threading.Thread): class ConnectionHandler(threading.Thread): """A mildly complicated class, because we want it to work both with and without the SSL wrapper around the socket connection, so that we can test the STARTTLS functionality.""" def __init__(self, server, connsock, addr): self.server = server self.running = False self.sock = connsock self.addr = addr self.sock.setblocking(True) self.sslconn = None threading.Thread.__init__(self) self.daemon = True def wrap_conn(self): try: self.sslconn = self.server.context.wrap_socket( self.sock, server_side=True) self.server.selected_npn_protocols.append(self.sslconn.selected_npn_protocol()) self.server.selected_alpn_protocols.append(self.sslconn.selected_alpn_protocol()) except (ConnectionResetError, BrokenPipeError, ConnectionAbortedError) as e: # We treat ConnectionResetError as though it were an # SSLError - OpenSSL on Ubuntu abruptly closes the # connection when asked to use an unsupported protocol. # # BrokenPipeError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake. # https://github.com/openssl/openssl/issues/6342 # # ConnectionAbortedError is raised in TLS 1.3 mode, when OpenSSL # tries to send session tickets after handshake when using WinSock. self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") self.running = False self.close() return False except (ssl.SSLError, OSError) as e: # OSError may occur with wrong protocols, e.g. both # sides use PROTOCOL_TLS_SERVER. # # XXX Various errors can have happened here, for example # a mismatching protocol version, an invalid certificate, # or a low-level bug. 
This should be made more discriminating. # # bpo-31323: Store the exception as string to prevent # a reference leak: server -> conn_errors -> exception # -> traceback -> self (ConnectionHandler) -> server self.server.conn_errors.append(str(e)) if self.server.chatty: handle_error("\n server: bad connection attempt from " + repr(self.addr) + ":\n") # bpo-44229, bpo-43855, bpo-44237, and bpo-33450: # Ignore spurious EPROTOTYPE returned by write() on macOS. # See also http://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ if e.errno != errno.EPROTOTYPE and sys.platform != "darwin": self.running = False self.server.stop() self.close() return False else: self.server.shared_ciphers.append(self.sslconn.shared_ciphers()) if self.server.context.verify_mode == ssl.CERT_REQUIRED: cert = self.sslconn.getpeercert() if support.verbose and self.server.chatty: sys.stdout.write(" client cert is " + pprint.pformat(cert) + "\n") cert_binary = self.sslconn.getpeercert(True) if support.verbose and self.server.chatty: sys.stdout.write(" cert binary is " + str(len(cert_binary)) + " bytes\n") cipher = self.sslconn.cipher() if support.verbose and self.server.chatty: sys.stdout.write(" server: connection cipher is now " + str(cipher) + "\n") sys.stdout.write(" server: selected protocol is now " + str(self.sslconn.selected_npn_protocol()) + "\n") return True def read(self): if self.sslconn: return self.sslconn.read() else: return self.sock.recv(1024) def write(self, bytes): if self.sslconn: return self.sslconn.write(bytes) else: return self.sock.send(bytes) def close(self): if self.sslconn: self.sslconn.close() else: self.sock.close() def run(self): self.running = True if not self.server.starttls_server: if not self.wrap_conn(): return while self.running: try: msg = self.read() stripped = msg.strip() if not stripped: # eof, so quit this handler self.running = False try: self.sock = self.sslconn.unwrap() except OSError: # Many tests shut the TCP connection down # without an SSL shutdown. This causes # unwrap() to raise OSError with errno=0! 
pass else: self.sslconn = None self.close() elif stripped == b'over': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: client closed connection\n") self.close() return elif (self.server.starttls_server and stripped == b'STARTTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read STARTTLS from client, sending OK...\n") self.write(b"OK\n") if not self.wrap_conn(): return elif (self.server.starttls_server and self.sslconn and stripped == b'ENDTLS'): if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read ENDTLS from client, sending OK...\n") self.write(b"OK\n") self.sock = self.sslconn.unwrap() self.sslconn = None if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: connection is now unencrypted...\n") elif stripped == b'CB tls-unique': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: read CB tls-unique from client, sending our CB data...\n") data = self.sslconn.get_channel_binding("tls-unique") self.write(repr(data).encode("us-ascii") + b"\n") elif stripped == b'PHA': if support.verbose and self.server.connectionchatty: sys.stdout.write(" server: initiating post handshake auth\n") try: self.sslconn.verify_client_post_handshake() except ssl.SSLError as e: self.write(repr(e).encode("us-ascii") + b"\n") else: self.write(b"OK\n") elif stripped == b'HASCERT': if self.sslconn.getpeercert() is not None: self.write(b'TRUE\n') else: self.write(b'FALSE\n') elif stripped == b'GETCERT': cert = self.sslconn.getpeercert() self.write(repr(cert).encode("us-ascii") + b"\n") else: if (support.verbose and self.server.connectionchatty): ctype = (self.sslconn and "encrypted") or "unencrypted" sys.stdout.write(" server: read %r (%s), sending back %r (%s)...\n" % (msg, ctype, msg.lower(), ctype)) self.write(msg.lower()) except (ConnectionResetError, ConnectionAbortedError): # XXX: OpenSSL 1.1.1 sometimes raises ConnectionResetError # when connection is not shut down gracefully. if self.server.chatty and support.verbose: sys.stdout.write( " Connection reset by peer: {}\n".format( self.addr) ) self.close() self.running = False except ssl.SSLError as err: # On Windows sometimes test_pha_required_nocert receives the # PEER_DID_NOT_RETURN_A_CERTIFICATE exception # before the 'tlsv13 alert certificate required' exception. 
# If the server is stopped when PEER_DID_NOT_RETURN_A_CERTIFICATE # is received test_pha_required_nocert fails with ConnectionResetError # because the underlying socket is closed if 'PEER_DID_NOT_RETURN_A_CERTIFICATE' == err.reason: if self.server.chatty and support.verbose: sys.stdout.write(err.args[1]) # test_pha_required_nocert is expecting this exception raise ssl.SSLError('tlsv13 alert certificate required') except OSError: if self.server.chatty: handle_error("Test server failure:\n") self.close() self.running = False # normally, we'd just stop here, but for the test # harness, we want to stop the server self.server.stop() def __init__(self, certificate=None, ssl_version=None, certreqs=None, cacerts=None, chatty=True, connectionchatty=False, starttls_server=False, npn_protocols=None, alpn_protocols=None, ciphers=None, context=None): if context: self.context = context else: self.context = ssl.SSLContext(ssl_version if ssl_version is not None else ssl.PROTOCOL_TLS_SERVER) self.context.verify_mode = (certreqs if certreqs is not None else ssl.CERT_NONE) if cacerts: self.context.load_verify_locations(cacerts) if certificate: self.context.load_cert_chain(certificate) if npn_protocols: self.context.set_npn_protocols(npn_protocols) if alpn_protocols: self.context.set_alpn_protocols(alpn_protocols) if ciphers: self.context.set_ciphers(ciphers) self.chatty = chatty self.connectionchatty = connectionchatty self.starttls_server = starttls_server self.sock = socket.socket() self.port = socket_helper.bind_port(self.sock) self.flag = None self.active = False self.selected_npn_protocols = [] self.selected_alpn_protocols = [] self.shared_ciphers = [] self.conn_errors = [] threading.Thread.__init__(self) self.daemon = True def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): self.stop() self.join() def start(self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.sock.settimeout(0.05) self.sock.listen() self.active = True if self.flag: # signal an event self.flag.set() while self.active: try: newconn, connaddr = self.sock.accept() if support.verbose and self.chatty: sys.stdout.write(' server: new connection from ' + repr(connaddr) + '\n') handler = self.ConnectionHandler(self, newconn, connaddr) handler.start() handler.join() except socket.timeout: pass except KeyboardInterrupt: self.stop() except BaseException as e: if support.verbose and self.chatty: sys.stdout.write( ' connection handling failed: ' + repr(e) + '\n') self.sock.close() def stop(self): self.active = False class AsyncoreEchoServer(threading.Thread): # this one's based on asyncore.dispatcher class EchoServer (asyncore.dispatcher): class ConnectionHandler(asyncore.dispatcher_with_send): def __init__(self, conn, certfile): self.socket = test_wrap_socket(conn, server_side=True, certfile=certfile, do_handshake_on_connect=False) asyncore.dispatcher_with_send.__init__(self, self.socket) self._ssl_accepting = True self._do_ssl_handshake() def readable(self): if isinstance(self.socket, ssl.SSLSocket): while self.socket.pending() > 0: self.handle_read_event() return True def _do_ssl_handshake(self): try: self.socket.do_handshake() except (ssl.SSLWantReadError, ssl.SSLWantWriteError): return except ssl.SSLEOFError: return self.handle_close() except ssl.SSLError: raise except OSError as err: if err.args[0] == errno.ECONNABORTED: return self.handle_close() else: self._ssl_accepting = False def handle_read(self): if self._ssl_accepting: self._do_ssl_handshake() else: 
data = self.recv(1024) if support.verbose: sys.stdout.write(" server: read %s from client\n" % repr(data)) if not data: self.close() else: self.send(data.lower()) def handle_close(self): self.close() if support.verbose: sys.stdout.write(" server: closed connection %s\n" % self.socket) def handle_error(self): raise def __init__(self, certfile): self.certfile = certfile sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.port = socket_helper.bind_port(sock, '') asyncore.dispatcher.__init__(self, sock) self.listen(5) def handle_accepted(self, sock_obj, addr): if support.verbose: sys.stdout.write(" server: new connection from %s:%s\n" %addr) self.ConnectionHandler(sock_obj, self.certfile) def handle_error(self): raise def __init__(self, certfile): self.flag = None self.active = False self.server = self.EchoServer(certfile) self.port = self.server.port threading.Thread.__init__(self) self.daemon = True def __str__(self): return "<%s %s>" % (self.__class__.__name__, self.server) def __enter__(self): self.start(threading.Event()) self.flag.wait() return self def __exit__(self, *args): if support.verbose: sys.stdout.write(" cleanup: stopping server.\n") self.stop() if support.verbose: sys.stdout.write(" cleanup: joining server thread.\n") self.join() if support.verbose: sys.stdout.write(" cleanup: successfully joined.\n") # make sure that ConnectionHandler is removed from socket_map asyncore.close_all(ignore_all=True) def start (self, flag=None): self.flag = flag threading.Thread.start(self) def run(self): self.active = True if self.flag: self.flag.set() while self.active: try: asyncore.loop(1) except: pass def stop(self): self.active = False self.server.close() def server_params_test(client_context, server_context, indata=b"FOO\n", chatty=True, connectionchatty=False, sni_name=None, session=None): """ Launch a server, connect a client to it and try various reads and writes. """ stats = {} server = ThreadedEchoServer(context=server_context, chatty=chatty, connectionchatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=sni_name, session=session) as s: s.connect((HOST, server.port)) for arg in [indata, bytearray(indata), memoryview(indata)]: if connectionchatty: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(arg) outdata = s.read() if connectionchatty: if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): raise AssertionError( "bad data <<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if connectionchatty: if support.verbose: sys.stdout.write(" client: closing connection.\n") stats.update({ 'compression': s.compression(), 'cipher': s.cipher(), 'peercert': s.getpeercert(), 'client_alpn_protocol': s.selected_alpn_protocol(), 'client_npn_protocol': s.selected_npn_protocol(), 'version': s.version(), 'session_reused': s.session_reused, 'session': s.session, }) s.close() stats['server_alpn_protocols'] = server.selected_alpn_protocols stats['server_npn_protocols'] = server.selected_npn_protocols stats['server_shared_ciphers'] = server.shared_ciphers return stats def try_protocol_combo(server_protocol, client_protocol, expect_success, certsreqs=None, server_options=0, client_options=0): """ Try to SSL-connect using *client_protocol* to *server_protocol*. If *expect_success* is true, assert that the connection succeeds, if it's false, assert that the connection fails. 
Also, if *expect_success* is a string, assert that it is the protocol version actually used by the connection. """ if certsreqs is None: certsreqs = ssl.CERT_NONE certtype = { ssl.CERT_NONE: "CERT_NONE", ssl.CERT_OPTIONAL: "CERT_OPTIONAL", ssl.CERT_REQUIRED: "CERT_REQUIRED", }[certsreqs] if support.verbose: formatstr = (expect_success and " %s->%s %s\n") or " {%s->%s} %s\n" sys.stdout.write(formatstr % (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol), certtype)) client_context = ssl.SSLContext(client_protocol) client_context.options |= client_options server_context = ssl.SSLContext(server_protocol) server_context.options |= server_options min_version = PROTOCOL_TO_TLS_VERSION.get(client_protocol, None) if (min_version is not None # SSLContext.minimum_version is only available on recent OpenSSL # (setter added in OpenSSL 1.1.0, getter added in OpenSSL 1.1.1) and hasattr(server_context, 'minimum_version') and server_protocol == ssl.PROTOCOL_TLS and server_context.minimum_version > min_version): # If OpenSSL configuration is strict and requires more recent TLS # version, we have to change the minimum to test old TLS versions. server_context.minimum_version = min_version # NOTE: we must enable "ALL" ciphers on the client, otherwise an # SSLv23 client will send an SSLv3 hello (rather than SSLv2) # starting from OpenSSL 1.0.0 (see issue #8322). if client_context.protocol == ssl.PROTOCOL_TLS: client_context.set_ciphers("ALL") seclevel_workaround(server_context, client_context) for ctx in (client_context, server_context): ctx.verify_mode = certsreqs ctx.load_cert_chain(SIGNED_CERTFILE) ctx.load_verify_locations(SIGNING_CA) try: stats = server_params_test(client_context, server_context, chatty=False, connectionchatty=False) # Protocol mismatch can result in either an SSLError, or a # "Connection reset by peer" error. except ssl.SSLError: if expect_success: raise except OSError as e: if expect_success or e.errno != errno.ECONNRESET: raise else: if not expect_success: raise AssertionError( "Client protocol %s succeeded with server protocol %s!" 
% (ssl.get_protocol_name(client_protocol), ssl.get_protocol_name(server_protocol))) elif (expect_success is not True and expect_success != stats['version']): raise AssertionError("version mismatch: expected %r, got %r" % (expect_success, stats['version'])) class ThreadedTests(unittest.TestCase): def test_echo(self): """Basic test of an SSL client connecting to a server""" if support.verbose: sys.stdout.write("\n") for protocol in PROTOCOLS: if protocol in {ssl.PROTOCOL_TLS_CLIENT, ssl.PROTOCOL_TLS_SERVER}: continue if not has_tls_protocol(protocol): continue with self.subTest(protocol=ssl._PROTOCOL_NAMES[protocol]): context = ssl.SSLContext(protocol) context.load_cert_chain(CERTFILE) seclevel_workaround(context) server_params_test(context, context, chatty=True, connectionchatty=True) client_context, server_context, hostname = testing_context() with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_SERVER): server_params_test(client_context=client_context, server_context=server_context, chatty=True, connectionchatty=True, sni_name=hostname) client_context.check_hostname = False with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIn('called a function you should not call', str(e.exception)) with self.subTest(client=ssl.PROTOCOL_TLS_SERVER, server=ssl.PROTOCOL_TLS_SERVER): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=server_context, chatty=True, connectionchatty=True) self.assertIn('called a function you should not call', str(e.exception)) with self.subTest(client=ssl.PROTOCOL_TLS_CLIENT, server=ssl.PROTOCOL_TLS_CLIENT): with self.assertRaises(ssl.SSLError) as e: server_params_test(client_context=server_context, server_context=client_context, chatty=True, connectionchatty=True) self.assertIn('called a function you should not call', str(e.exception)) def test_getpeercert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), do_handshake_on_connect=False, server_hostname=hostname) as s: s.connect((HOST, server.port)) # getpeercert() raise ValueError while the handshake isn't # done. with self.assertRaises(ValueError): s.getpeercert() s.do_handshake() cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher() if support.verbose: sys.stdout.write(pprint.pformat(cert) + '\n') sys.stdout.write("Connection cipher is " + str(cipher) + '.\n') if 'subject' not in cert: self.fail("No subject field in certificate: %s." 
% pprint.pformat(cert)) if ((('organizationName', 'Python Software Foundation'),) not in cert['subject']): self.fail( "Missing or invalid 'organizationName' field in certificate subject; " "should be 'Python Software Foundation'.") self.assertIn('notBefore', cert) self.assertIn('notAfter', cert) before = ssl.cert_time_to_seconds(cert['notBefore']) after = ssl.cert_time_to_seconds(cert['notAfter']) self.assertLess(before, after) @unittest.skipUnless(have_verify_flags(), "verify_flags need OpenSSL > 0.9.8") def test_crl_check(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() tf = getattr(ssl, "VERIFY_X509_TRUSTED_FIRST", 0) self.assertEqual(client_context.verify_flags, ssl.VERIFY_DEFAULT | tf) # VERIFY_DEFAULT should pass server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # VERIFY_CRL_CHECK_LEAF without a loaded CRL file fails client_context.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaisesRegex(ssl.SSLError, "certificate verify failed"): s.connect((HOST, server.port)) # now load a CRL file. The CRL file is signed by the CA. client_context.load_verify_locations(CRLFILE) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") def test_check_hostname(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname="invalid") as s: with self.assertRaisesRegex( ssl.CertificateError, "Hostname mismatch, certificate is not valid for 'invalid'."): s.connect((HOST, server.port)) # missing server_hostname arg should cause an exception, too server = ThreadedEchoServer(context=server_context, chatty=True) with server: with socket.socket() as s: with self.assertRaisesRegex(ValueError, "check_hostname requires server_hostname"): client_context.wrap_socket(s) @unittest.skipUnless( ssl.HAS_NEVER_CHECK_COMMON_NAME, "test requires hostname_checks_common_name" ) def test_hostname_checks_common_name(self): client_context, server_context, hostname = testing_context() assert client_context.hostname_checks_common_name client_context.hostname_checks_common_name = False # default cert has a SAN server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) client_context, server_context, hostname = testing_context(NOSANFILE) client_context.hostname_checks_common_name = False server = 
ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLCertVerificationError): s.connect((HOST, server.port)) def test_ecc_cert(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC cert server_context.load_cert_chain(SIGNED_CERTFILE_ECC) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_dual_rsa_ecc(self): client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) # TODO: fix TLSv1.3 once SSLContext can restrict signature # algorithms. client_context.options |= ssl.OP_NO_TLSv1_3 # only ECDSA certs client_context.set_ciphers('ECDHE:ECDSA:!NULL:!aRSA') hostname = SIGNED_CERTFILE_ECC_HOSTNAME server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) # load ECC and RSA key/cert pairs server_context.load_cert_chain(SIGNED_CERTFILE_ECC) server_context.load_cert_chain(SIGNED_CERTFILE) # correct hostname should verify server = ThreadedEchoServer(context=server_context, chatty=True) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) cert = s.getpeercert() self.assertTrue(cert, "Can't get peer certificate.") cipher = s.cipher()[0].split('-') self.assertTrue(cipher[:2], ('ECDHE', 'ECDSA')) def test_check_hostname_idn(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(IDNSANSFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.verify_mode = ssl.CERT_REQUIRED context.check_hostname = True context.load_verify_locations(SIGNING_CA) # correct hostname should verify, when specified in several # different ways idn_hostnames = [ ('könig.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), (b'xn--knig-5qa.idn.pythontest.net', 'xn--knig-5qa.idn.pythontest.net'), ('königsgäßchen.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), ('xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), (b'xn--knigsgsschen-lcb0w.idna2003.pythontest.net', 'xn--knigsgsschen-lcb0w.idna2003.pythontest.net'), # ('königsgäßchen.idna2008.pythontest.net', # 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ('xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), (b'xn--knigsgchen-b4a3dun.idna2008.pythontest.net', 'xn--knigsgchen-b4a3dun.idna2008.pythontest.net'), ] for server_hostname, expected_hostname in idn_hostnames: server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=server_hostname) as s: self.assertEqual(s.server_hostname, expected_hostname) s.connect((HOST, server.port)) cert = s.getpeercert() self.assertEqual(s.server_hostname, expected_hostname) self.assertTrue(cert, 
"Can't get peer certificate.") # incorrect hostname should raise an exception server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname="python.example.org") as s: with self.assertRaises(ssl.CertificateError): s.connect((HOST, server.port)) def test_wrong_cert_tls12(self): """Connecting when the server rejects the client's certificate Launch a server with CERT_REQUIRED, and check that trying to connect to it with a wrong client certificate fails. """ client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) # require TLS client authentication server_context.verify_mode = ssl.CERT_REQUIRED # TLS 1.3 has different handshake client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: try: # Expect either an SSL error about the server rejecting # the connection, or a low-level connection reset (which # sometimes happens on Windows) s.connect((HOST, server.port)) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") @requires_tls_version('TLSv1_3') def test_wrong_cert_tls13(self): client_context, server_context, hostname = testing_context() # load client cert that is not signed by trusted CA client_context.load_cert_chain(CERTFILE) server_context.verify_mode = ssl.CERT_REQUIRED server_context.minimum_version = ssl.TLSVersion.TLSv1_3 client_context.minimum_version = ssl.TLSVersion.TLSv1_3 server = ThreadedEchoServer( context=server_context, chatty=True, connectionchatty=True, ) with server, \ client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # TLS 1.3 perform client cert exchange after handshake s.connect((HOST, server.port)) try: s.write(b'data') s.read(4) except ssl.SSLError as e: if support.verbose: sys.stdout.write("\nSSLError is %r\n" % e) except OSError as e: if e.errno != errno.ECONNRESET: raise if support.verbose: sys.stdout.write("\nsocket.error is %r\n" % e) else: self.fail("Use of invalid cert should have failed!") def test_rude_shutdown(self): """A brutal shutdown of an SSL server should raise an OSError in the client when attempting handshake. """ listener_ready = threading.Event() listener_gone = threading.Event() s = socket.socket() port = socket_helper.bind_port(s, HOST) # `listener` runs in a thread. It sits in an accept() until # the main thread connects. Then it rudely closes the socket, # and sets Event `listener_gone` to let the main thread know # the socket is gone. 
def listener(): s.listen() listener_ready.set() newsock, addr = s.accept() newsock.close() s.close() listener_gone.set() def connector(): listener_ready.wait() with socket.socket() as c: c.connect((HOST, port)) listener_gone.wait() try: ssl_sock = test_wrap_socket(c) except OSError: pass else: self.fail('connecting to closed SSL socket should have failed') t = threading.Thread(target=listener) t.start() try: connector() finally: t.join() def test_ssl_cert_verify_error(self): if support.verbose: sys.stdout.write("\n") server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) server = ThreadedEchoServer(context=server_context, chatty=True) with server: with context.wrap_socket(socket.socket(), server_hostname=SIGNED_CERTFILE_HOSTNAME) as s: try: s.connect((HOST, server.port)) except ssl.SSLError as e: msg = 'unable to get local issuer certificate' self.assertIsInstance(e, ssl.SSLCertVerificationError) self.assertEqual(e.verify_code, 20) self.assertEqual(e.verify_message, msg) self.assertIn(msg, repr(e)) self.assertIn('certificate verify failed', repr(e)) @requires_tls_version('SSLv2') def test_protocol_sslv2(self): """Connecting to an SSLv2 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv2, True, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLSv1, False) # SSLv23 client with specific SSL options if no_sslv2_implies_sslv3_hello(): # No SSLv2 => client will use an SSLv3 hello on recent OpenSSLs try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv2) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) def test_PROTOCOL_TLS(self): """Connecting to an SSLv23 server with various client options""" if support.verbose: sys.stdout.write("\n") if has_tls_version('SSLv2'): try: try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv2, True) except OSError as x: # this fails on some older versions of OpenSSL (0.9.7l, for instance) if support.verbose: sys.stdout.write( " SSL2 client to SSL23 server test unexpectedly failed:\n %s\n" % str(x)) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1') if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_OPTIONAL) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, ssl.CERT_REQUIRED) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, ssl.CERT_REQUIRED) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) # Server with specific SSL options 
if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_SSLv3, False, server_options=ssl.OP_NO_SSLv3) # Will choose TLSv1 try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS, True, server_options=ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3) if has_tls_version('TLSv1'): try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1, False, server_options=ssl.OP_NO_TLSv1) @requires_tls_version('SSLv3') def test_protocol_sslv3(self): """Connecting to an SSLv3 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3') try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv3, 'SSLv3', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_SSLv2, False) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv3) try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLSv1, False) if no_sslv2_implies_sslv3_hello(): # No SSLv2 => client will use an SSLv3 hello on recent OpenSSLs try_protocol_combo(ssl.PROTOCOL_SSLv3, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_SSLv2) @requires_tls_version('TLSv1') def test_protocol_tlsv1(self): """Connecting to a TLSv1 server with various client options""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1') try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_OPTIONAL) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1, 'TLSv1', ssl.CERT_REQUIRED) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1) @requires_tls_version('TLSv1_1') def test_protocol_tlsv1_1(self): """Connecting to a TLSv1.1 server with various client options. Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_1) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_1, 'TLSv1.1') try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) @requires_tls_version('TLSv1_2') def test_protocol_tlsv1_2(self): """Connecting to a TLSv1.2 server with various client options. 
Testing against older TLS versions.""" if support.verbose: sys.stdout.write("\n") try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2', server_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2, client_options=ssl.OP_NO_SSLv3|ssl.OP_NO_SSLv2,) if has_tls_version('SSLv2'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv2, False) if has_tls_version('SSLv3'): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_SSLv3, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLS, False, client_options=ssl.OP_NO_TLSv1_2) try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2') if has_tls_protocol(ssl.PROTOCOL_TLSv1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False) if has_tls_protocol(ssl.PROTOCOL_TLSv1_1): try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False) try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False) def test_starttls(self): """Switching from clear text to encrypted and back again.""" msgs = (b"msg 1", b"MSG 2", b"STARTTLS", b"MSG 3", b"msg 4", b"ENDTLS", b"msg 5", b"msg 6") server = ThreadedEchoServer(CERTFILE, starttls_server=True, chatty=True, connectionchatty=True) wrapped = False with server: s = socket.socket() s.setblocking(True) s.connect((HOST, server.port)) if support.verbose: sys.stdout.write("\n") for indata in msgs: if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) if wrapped: conn.write(indata) outdata = conn.read() else: s.send(indata) outdata = s.recv(1024) msg = outdata.strip().lower() if indata == b"STARTTLS" and msg.startswith(b"ok"): # STARTTLS ok, switch to secure mode if support.verbose: sys.stdout.write( " client: read %r from server, starting TLS...\n" % msg) conn = test_wrap_socket(s) wrapped = True elif indata == b"ENDTLS" and msg.startswith(b"ok"): # ENDTLS ok, switch back to clear text if support.verbose: sys.stdout.write( " client: read %r from server, ending TLS...\n" % msg) s = conn.unwrap() wrapped = False else: if support.verbose: sys.stdout.write( " client: read %r from server\n" % msg) if support.verbose: sys.stdout.write(" client: closing connection.\n") if wrapped: conn.write(b"over\n") else: s.send(b"over\n") if wrapped: conn.close() else: s.close() def test_socketserver(self): """Using socketserver to create and manage SSL connections.""" server = make_https_server(self, certfile=SIGNED_CERTFILE) # try to connect if support.verbose: sys.stdout.write('\n') with open(CERTFILE, 'rb') as f: d1 = f.read() d2 = '' # now fetch the same data from the HTTPS server url = 'https://localhost:%d/%s' % ( server.port, os.path.split(CERTFILE)[1]) context = ssl.create_default_context(cafile=SIGNING_CA) f = urllib.request.urlopen(url, context=context) try: dlen = f.info().get("content-length") if dlen and (int(dlen) > 0): d2 = f.read(int(dlen)) if support.verbose: sys.stdout.write( " client: read %d bytes from remote server '%s'\n" % (len(d2), server)) finally: f.close() self.assertEqual(d1, d2) def test_asyncore_server(self): """Check the example asyncore integration.""" if support.verbose: sys.stdout.write("\n") indata = b"FOO\n" server = AsyncoreEchoServer(CERTFILE) with server: s = test_wrap_socket(socket.socket()) s.connect(('127.0.0.1', server.port)) if support.verbose: sys.stdout.write( " client: sending %r...\n" % indata) s.write(indata) outdata = s.read() if support.verbose: sys.stdout.write(" client: read %r\n" % outdata) if outdata != indata.lower(): self.fail( "bad data 
<<%r>> (%d) received; expected <<%r>> (%d)\n" % (outdata[:20], len(outdata), indata[:20].lower(), len(indata))) s.write(b"over\n") if support.verbose: sys.stdout.write(" client: closing connection.\n") s.close() if support.verbose: sys.stdout.write(" client: connection closed.\n") def test_recv_send(self): """Test recv(), send() and friends.""" if support.verbose: sys.stdout.write("\n") server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_CLIENT) s.connect((HOST, server.port)) # helper methods for standardising recv* method signatures def _recv_into(): b = bytearray(b"\0"*100) count = s.recv_into(b) return b[:count] def _recvfrom_into(): b = bytearray(b"\0"*100) count, addr = s.recvfrom_into(b) return b[:count] # (name, method, expect success?, *args, return value func) send_methods = [ ('send', s.send, True, [], len), ('sendto', s.sendto, False, ["some.address"], len), ('sendall', s.sendall, True, [], lambda x: None), ] # (name, method, whether to expect success, *args) recv_methods = [ ('recv', s.recv, True, []), ('recvfrom', s.recvfrom, False, ["some.address"]), ('recv_into', _recv_into, True, []), ('recvfrom_into', _recvfrom_into, False, []), ] data_prefix = "PREFIX_" for (meth_name, send_meth, expect_success, args, ret_val_meth) in send_methods: indata = (data_prefix + meth_name).encode('ascii') try: ret = send_meth(indata, *args) msg = "sending with {}".format(meth_name) self.assertEqual(ret, ret_val_meth(indata), msg=msg) outdata = s.read() if outdata != indata.lower(): self.fail( "While sending with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to send with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) for meth_name, recv_meth, expect_success, args in recv_methods: indata = (data_prefix + meth_name).encode('ascii') try: s.send(indata) outdata = recv_meth(*args) if outdata != indata.lower(): self.fail( "While receiving with <<{name:s}>> bad data " "<<{outdata:r}>> ({nout:d}) received; " "expected <<{indata:r}>> ({nin:d})\n".format( name=meth_name, outdata=outdata[:20], nout=len(outdata), indata=indata[:20], nin=len(indata) ) ) except ValueError as e: if expect_success: self.fail( "Failed to receive with method <<{name:s}>>; " "expected to succeed.\n".format(name=meth_name) ) if not str(e).startswith(meth_name): self.fail( "Method <<{name:s}>> failed with unexpected " "exception message: {exp:s}\n".format( name=meth_name, exp=e ) ) # consume data s.read() # read(-1, buffer) is supported, even though read(-1) is not data = b"data" s.send(data) buffer = bytearray(len(data)) self.assertEqual(s.read(-1, buffer), len(data)) self.assertEqual(buffer, data) # sendall accepts bytes-like objects if ctypes is not None: ubyte = ctypes.c_ubyte * len(data) byteslike = ubyte.from_buffer_copy(data) s.sendall(byteslike) self.assertEqual(s.read(), data) # Make sure sendmsg et al are disallowed to avoid # inadvertent disclosure of data and/or 
corruption # of the encrypted data stream self.assertRaises(NotImplementedError, s.dup) self.assertRaises(NotImplementedError, s.sendmsg, [b"data"]) self.assertRaises(NotImplementedError, s.recvmsg, 100) self.assertRaises(NotImplementedError, s.recvmsg_into, [bytearray(100)]) s.write(b"over\n") self.assertRaises(ValueError, s.recv, -1) self.assertRaises(ValueError, s.read, -1) s.close() def test_recv_zero(self): server = ThreadedEchoServer(CERTFILE) server.__enter__() self.addCleanup(server.__exit__, None, None) s = socket.create_connection((HOST, server.port)) self.addCleanup(s.close) s = test_wrap_socket(s, suppress_ragged_eofs=False) self.addCleanup(s.close) # recv/read(0) should return no data s.send(b"data") self.assertEqual(s.recv(0), b"") self.assertEqual(s.read(0), b"") self.assertEqual(s.read(), b"data") # Should not block if the other end sends no data s.setblocking(False) self.assertEqual(s.recv(0), b"") self.assertEqual(s.recv_into(bytearray()), 0) def test_nonblocking_send(self): server = ThreadedEchoServer(CERTFILE, certreqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_SERVER, cacerts=CERTFILE, chatty=True, connectionchatty=False) with server: s = test_wrap_socket(socket.socket(), server_side=False, certfile=CERTFILE, ca_certs=CERTFILE, cert_reqs=ssl.CERT_NONE, ssl_version=ssl.PROTOCOL_TLS_CLIENT) s.connect((HOST, server.port)) s.setblocking(False) # If we keep sending data, at some point the buffers # will be full and the call will block buf = bytearray(8192) def fill_buffer(): while True: s.send(buf) self.assertRaises((ssl.SSLWantWriteError, ssl.SSLWantReadError), fill_buffer) # Now read all the output and discard it s.setblocking(True) s.close() def test_handshake_timeout(self): # Issue #5103: SSL handshake must respect the socket timeout server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) started = threading.Event() finish = False def serve(): server.listen() started.set() conns = [] while not finish: r, w, e = select.select([server], [], [], 0.1) if server in r: # Let the socket hang around rather than having # it closed by garbage collection. conns.append(server.accept()[0]) for sock in conns: sock.close() t = threading.Thread(target=serve) t.start() started.wait() try: try: c = socket.socket(socket.AF_INET) c.settimeout(0.2) c.connect((host, port)) # Will attempt handshake and time out self.assertRaisesRegex(socket.timeout, "timed out", test_wrap_socket, c) finally: c.close() try: c = socket.socket(socket.AF_INET) c = test_wrap_socket(c) c.settimeout(0.2) # Will attempt handshake and time out self.assertRaisesRegex(socket.timeout, "timed out", c.connect, (host, port)) finally: c.close() finally: finish = True t.join() server.close() def test_server_accept(self): # Issue #16357: accept() on a SSLSocket created through # SSLContext.wrap_socket(). context = ssl.SSLContext(ssl.PROTOCOL_TLS) context.verify_mode = ssl.CERT_REQUIRED context.load_verify_locations(SIGNING_CA) context.load_cert_chain(SIGNED_CERTFILE) server = socket.socket(socket.AF_INET) host = "127.0.0.1" port = socket_helper.bind_port(server) server = context.wrap_socket(server, server_side=True) self.assertTrue(server.server_side) evt = threading.Event() remote = None peer = None def serve(): nonlocal remote, peer server.listen() # Block on the accept and wait on the connection to close. evt.set() remote, peer = server.accept() remote.send(remote.recv(4)) t = threading.Thread(target=serve) t.start() # Client wait until server setup and perform a connect. 
evt.wait() client = context.wrap_socket(socket.socket()) client.connect((host, port)) client.send(b'data') client.recv() client_addr = client.getsockname() client.close() t.join() remote.close() server.close() # Sanity checks. self.assertIsInstance(remote, ssl.SSLSocket) self.assertEqual(peer, client_addr) def test_getpeercert_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS) with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.getpeercert() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_do_handshake_enotconn(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS) with context.wrap_socket(socket.socket()) as sock: with self.assertRaises(OSError) as cm: sock.do_handshake() self.assertEqual(cm.exception.errno, errno.ENOTCONN) def test_no_shared_ciphers(self): client_context, server_context, hostname = testing_context() # OpenSSL enables all TLS 1.3 ciphers, enforce TLS 1.2 for test client_context.options |= ssl.OP_NO_TLSv1_3 # Force different suites on client and server client_context.set_ciphers("AES128") server_context.set_ciphers("AES256") with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(OSError): s.connect((HOST, server.port)) self.assertIn("no shared cipher", server.conn_errors[0]) def test_version_basic(self): """ Basic tests for SSLSocket.version(). More tests are done in the test_protocol_*() methods. """ context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) context.check_hostname = False context.verify_mode = ssl.CERT_NONE with ThreadedEchoServer(CERTFILE, ssl_version=ssl.PROTOCOL_TLS_SERVER, chatty=False) as server: with context.wrap_socket(socket.socket()) as s: self.assertIs(s.version(), None) self.assertIs(s._sslobj, None) s.connect((HOST, server.port)) if IS_OPENSSL_1_1_1 and has_tls_version('TLSv1_3'): self.assertEqual(s.version(), 'TLSv1.3') elif ssl.OPENSSL_VERSION_INFO >= (1, 0, 2): self.assertEqual(s.version(), 'TLSv1.2') else: # 0.9.8 to 1.0.1 self.assertIn(s.version(), ('TLSv1', 'TLSv1.2')) self.assertIs(s._sslobj, None) self.assertIs(s.version(), None) @requires_tls_version('TLSv1_3') def test_tls1_3(self): context = ssl.SSLContext(ssl.PROTOCOL_TLS) context.load_cert_chain(CERTFILE) context.options |= ( ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 | ssl.OP_NO_TLSv1_2 ) with ThreadedEchoServer(context=context) as server: with context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) self.assertIn(s.cipher()[0], { 'TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256', }) self.assertEqual(s.version(), 'TLSv1.3') @requires_minimum_version @requires_tls_version('TLSv1_2') def test_min_max_version_tlsv1_2(self): client_context, server_context, hostname = testing_context() # client TLSv1.0 to 1.2 client_context.minimum_version = ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 # server only TLSv1.2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 server_context.maximum_version = ssl.TLSVersion.TLSv1_2 with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.2') @requires_minimum_version @requires_tls_version('TLSv1_1') def test_min_max_version_tlsv1_1(self): client_context, server_context, hostname = testing_context() # client 1.0 to 1.2, server 1.0 to 1.1 client_context.minimum_version = 
ssl.TLSVersion.TLSv1 client_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1 server_context.maximum_version = ssl.TLSVersion.TLSv1_1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'TLSv1.1') @requires_minimum_version @requires_tls_version('TLSv1_2') @requires_tls_version('TLSv1') def test_min_max_version_mismatch(self): client_context, server_context, hostname = testing_context() # client 1.0, server 1.2 (mismatch) server_context.maximum_version = ssl.TLSVersion.TLSv1_2 server_context.minimum_version = ssl.TLSVersion.TLSv1_2 client_context.maximum_version = ssl.TLSVersion.TLSv1 client_context.minimum_version = ssl.TLSVersion.TLSv1 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: with self.assertRaises(ssl.SSLError) as e: s.connect((HOST, server.port)) self.assertIn("alert", str(e.exception)) @requires_minimum_version @requires_tls_version('SSLv3') def test_min_max_version_sslv3(self): client_context, server_context, hostname = testing_context() server_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.minimum_version = ssl.TLSVersion.SSLv3 client_context.maximum_version = ssl.TLSVersion.SSLv3 seclevel_workaround(client_context, server_context) with ThreadedEchoServer(context=server_context) as server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertEqual(s.version(), 'SSLv3') @unittest.skipUnless(ssl.HAS_ECDH, "test requires ECDH-enabled OpenSSL") def test_default_ecdh_curve(self): # Issue #21015: elliptic curve-based Diffie Hellman key exchange # should be enabled by default on SSL contexts. context = ssl.SSLContext(ssl.PROTOCOL_TLS) context.load_cert_chain(CERTFILE) # TLSv1.3 defaults to PFS key agreement and no longer has KEA in # cipher name. context.options |= ssl.OP_NO_TLSv1_3 # Prior to OpenSSL 1.0.0, ECDH ciphers have to be enabled # explicitly using the 'ECCdraft' cipher alias. Otherwise, # our default cipher list should prefer ECDH-based ciphers # automatically. 
if ssl.OPENSSL_VERSION_INFO < (1, 0, 0): context.set_ciphers("ECCdraft:ECDH") with ThreadedEchoServer(context=context) as server: with context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) self.assertIn("ECDH", s.cipher()[0]) @unittest.skipUnless("tls-unique" in ssl.CHANNEL_BINDING_TYPES, "'tls-unique' channel binding not available") def test_tls_unique_channel_binding(self): """Test tls-unique channel binding.""" if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=True, connectionchatty=False) with server: with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # get the data cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( " got channel binding data: {0!r}\n".format(cb_data)) # check if it is sane self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 # and compare with the peers version s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(cb_data).encode("us-ascii")) # now, again with client_context.wrap_socket( socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) new_cb_data = s.get_channel_binding("tls-unique") if support.verbose: sys.stdout.write( "got another channel binding data: {0!r}\n".format( new_cb_data) ) # is it really unique self.assertNotEqual(cb_data, new_cb_data) self.assertIsNotNone(cb_data) if s.version() == 'TLSv1.3': self.assertEqual(len(cb_data), 48) else: self.assertEqual(len(cb_data), 12) # True for TLSv1 s.write(b"CB tls-unique\n") peer_data_repr = s.read().strip() self.assertEqual(peer_data_repr, repr(new_cb_data).encode("us-ascii")) def test_compression(self): client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) if support.verbose: sys.stdout.write(" got compression: {!r}\n".format(stats['compression'])) self.assertIn(stats['compression'], { None, 'ZLIB', 'RLE' }) @unittest.skipUnless(hasattr(ssl, 'OP_NO_COMPRESSION'), "ssl.OP_NO_COMPRESSION needed for this test") def test_compression_disabled(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_COMPRESSION server_context.options |= ssl.OP_NO_COMPRESSION stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['compression'], None) @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_dh_params(self): # Check we can get a connection with ephemeral Diffie-Hellman client_context, server_context, hostname = testing_context() # test scenario needs TLS <= 1.2 client_context.options |= ssl.OP_NO_TLSv1_3 server_context.load_dh_params(DHFILE) server_context.set_ciphers("kEDH") server_context.options |= ssl.OP_NO_TLSv1_3 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) cipher = stats["cipher"][0] parts = cipher.split("-") if "ADH" not in parts and "EDH" not in parts and "DHE" not in parts: self.fail("Non-DH cipher: " + cipher[0]) @unittest.skipUnless(HAVE_SECP_CURVES, "needs secp384r1 curve support") @unittest.skipIf(IS_OPENSSL_1_1_1, "TODO: Test doesn't work on 1.1.1") def 
test_ecdh_curve(self): # server secp384r1, client auto client_context, server_context, hostname = testing_context() server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server auto, client secp384r1 client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) # server / client curve mismatch client_context, server_context, hostname = testing_context() client_context.set_ecdh_curve("prime256v1") server_context.set_ecdh_curve("secp384r1") server_context.set_ciphers("ECDHE:!eNULL:!aNULL") server_context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 try: stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) except ssl.SSLError: pass else: # OpenSSL 1.0.2 does not fail although it should. if IS_OPENSSL_1_1_0: self.fail("mismatch curve did not fail") def test_selected_alpn_protocol(self): # selected_alpn_protocol() is None unless ALPN is used. client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) @unittest.skipUnless(ssl.HAS_ALPN, "ALPN support required") def test_selected_alpn_protocol_if_server_uses_alpn(self): # selected_alpn_protocol() is None unless ALPN is used by the client. 
client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(['foo', 'bar']) stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_alpn_protocol'], None) @unittest.skipUnless(ssl.HAS_ALPN, "ALPN support needed for this test") def test_alpn_protocols(self): server_protocols = ['foo', 'bar', 'milkshake'] protocol_tests = [ (['foo', 'bar'], 'foo'), (['bar', 'foo'], 'foo'), (['milkshake'], 'milkshake'), (['http/3.0', 'http/4.0'], None) ] for client_protocols, expected in protocol_tests: client_context, server_context, hostname = testing_context() server_context.set_alpn_protocols(server_protocols) client_context.set_alpn_protocols(client_protocols) try: stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) except ssl.SSLError as e: stats = e if (expected is None and IS_OPENSSL_1_1_0 and ssl.OPENSSL_VERSION_INFO < (1, 1, 0, 6)): # OpenSSL 1.1.0 to 1.1.0e raises handshake error self.assertIsInstance(stats, ssl.SSLError) else: msg = "failed trying %s (s) and %s (c).\n" \ "was expecting %s, but got %%s from the %%s" \ % (str(server_protocols), str(client_protocols), str(expected)) client_result = stats['client_alpn_protocol'] self.assertEqual(client_result, expected, msg % (client_result, "client")) server_result = stats['server_alpn_protocols'][-1] \ if len(stats['server_alpn_protocols']) else 'nothing' self.assertEqual(server_result, expected, msg % (server_result, "server")) def test_selected_npn_protocol(self): # selected_npn_protocol() is None unless NPN is used client_context, server_context, hostname = testing_context() stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) self.assertIs(stats['client_npn_protocol'], None) @unittest.skipUnless(ssl.HAS_NPN, "NPN support needed for this test") def test_npn_protocols(self): server_protocols = ['http/1.1', 'spdy/2'] protocol_tests = [ (['http/1.1', 'spdy/2'], 'http/1.1'), (['spdy/2', 'http/1.1'], 'http/1.1'), (['spdy/2', 'test'], 'spdy/2'), (['abc', 'def'], 'abc') ] for client_protocols, expected in protocol_tests: client_context, server_context, hostname = testing_context() server_context.set_npn_protocols(server_protocols) client_context.set_npn_protocols(client_protocols) stats = server_params_test(client_context, server_context, chatty=True, connectionchatty=True, sni_name=hostname) msg = "failed trying %s (s) and %s (c).\n" \ "was expecting %s, but got %%s from the %%s" \ % (str(server_protocols), str(client_protocols), str(expected)) client_result = stats['client_npn_protocol'] self.assertEqual(client_result, expected, msg % (client_result, "client")) server_result = stats['server_npn_protocols'][-1] \ if len(stats['server_npn_protocols']) else 'nothing' self.assertEqual(server_result, expected, msg % (server_result, "server")) def sni_contexts(self): server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) other_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) other_context.load_cert_chain(SIGNED_CERTFILE2) client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.load_verify_locations(SIGNING_CA) return server_context, other_context, client_context def check_common_name(self, stats, name): cert = stats['peercert'] self.assertIn((('commonName', name),), cert['subject']) @needs_sni def test_sni_callback(self): calls = [] server_context, 
other_context, client_context = self.sni_contexts() client_context.check_hostname = False def servername_cb(ssl_sock, server_name, initial_context): calls.append((server_name, initial_context)) if server_name is not None: ssl_sock.context = other_context server_context.set_servername_callback(servername_cb) stats = server_params_test(client_context, server_context, chatty=True, sni_name='supermessage') # The hostname was fetched properly, and the certificate was # changed for the connection. self.assertEqual(calls, [("supermessage", server_context)]) # CERTFILE4 was selected self.check_common_name(stats, 'fakehostname') calls = [] # The callback is called with server_name=None stats = server_params_test(client_context, server_context, chatty=True, sni_name=None) self.assertEqual(calls, [(None, server_context)]) self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) # Check disabling the callback calls = [] server_context.set_servername_callback(None) stats = server_params_test(client_context, server_context, chatty=True, sni_name='notfunny') # Certificate didn't change self.check_common_name(stats, SIGNED_CERTFILE_HOSTNAME) self.assertEqual(calls, []) @needs_sni def test_sni_callback_alert(self): # Returning a TLS alert is reflected to the connecting client server_context, other_context, client_context = self.sni_contexts() def cb_returning_alert(ssl_sock, server_name, initial_context): return ssl.ALERT_DESCRIPTION_ACCESS_DENIED server_context.set_servername_callback(cb_returning_alert) with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_ACCESS_DENIED') @needs_sni def test_sni_callback_raising(self): # Raising fails the connection with a TLS handshake failure alert. server_context, other_context, client_context = self.sni_contexts() def cb_raising(ssl_sock, server_name, initial_context): 1/0 server_context.set_servername_callback(cb_raising) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'SSLV3_ALERT_HANDSHAKE_FAILURE') self.assertEqual(catch.unraisable.exc_type, ZeroDivisionError) @needs_sni def test_sni_callback_wrong_return_type(self): # Returning the wrong return type terminates the TLS connection # with an internal error alert. 
server_context, other_context, client_context = self.sni_contexts() def cb_wrong_return_type(ssl_sock, server_name, initial_context): return "foo" server_context.set_servername_callback(cb_wrong_return_type) with support.catch_unraisable_exception() as catch: with self.assertRaises(ssl.SSLError) as cm: stats = server_params_test(client_context, server_context, chatty=False, sni_name='supermessage') self.assertEqual(cm.exception.reason, 'TLSV1_ALERT_INTERNAL_ERROR') self.assertEqual(catch.unraisable.exc_type, TypeError) def test_shared_ciphers(self): client_context, server_context, hostname = testing_context() client_context.set_ciphers("AES128:AES256") server_context.set_ciphers("AES256") expected_algs = [ "AES256", "AES-256", # TLS 1.3 ciphers are always enabled "TLS_CHACHA20", "TLS_AES", ] stats = server_params_test(client_context, server_context, sni_name=hostname) ciphers = stats['server_shared_ciphers'][0] self.assertGreater(len(ciphers), 0) for name, tls_version, bits in ciphers: if not any(alg in name for alg in expected_algs): self.fail(name) def test_read_write_after_close_raises_valuerror(self): client_context, server_context, hostname = testing_context() server = ThreadedEchoServer(context=server_context, chatty=False) with server: s = client_context.wrap_socket(socket.socket(), server_hostname=hostname) s.connect((HOST, server.port)) s.close() self.assertRaises(ValueError, s.read, 1024) self.assertRaises(ValueError, s.write, b'hello') def test_sendfile(self): TEST_DATA = b"x" * 512 with open(support.TESTFN, 'wb') as f: f.write(TEST_DATA) self.addCleanup(support.unlink, support.TESTFN) context = ssl.SSLContext(ssl.PROTOCOL_TLS) context.verify_mode = ssl.CERT_REQUIRED context.load_verify_locations(SIGNING_CA) context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=context, chatty=False) with server: with context.wrap_socket(socket.socket()) as s: s.connect((HOST, server.port)) with open(support.TESTFN, 'rb') as file: s.sendfile(file) self.assertEqual(s.recv(1024), TEST_DATA) def test_session(self): client_context, server_context, hostname = testing_context() # TODO: sessions aren't compatible with TLSv1.3 yet client_context.options |= ssl.OP_NO_TLSv1_3 # first connection without session stats = server_params_test(client_context, server_context, sni_name=hostname) session = stats['session'] self.assertTrue(session.id) self.assertGreater(session.time, 0) self.assertGreater(session.timeout, 0) self.assertTrue(session.has_ticket) if ssl.OPENSSL_VERSION_INFO > (1, 0, 1): self.assertGreater(session.ticket_lifetime_hint, 0) self.assertFalse(stats['session_reused']) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 1) self.assertEqual(sess_stat['hits'], 0) # reuse session stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 2) self.assertEqual(sess_stat['hits'], 1) self.assertTrue(stats['session_reused']) session2 = stats['session'] self.assertEqual(session2.id, session.id) self.assertEqual(session2, session) self.assertIsNot(session2, session) self.assertGreaterEqual(session2.time, session.time) self.assertGreaterEqual(session2.timeout, session.timeout) # another one without session stats = server_params_test(client_context, server_context, sni_name=hostname) self.assertFalse(stats['session_reused']) session3 = stats['session'] self.assertNotEqual(session3.id, session.id) self.assertNotEqual(session3, session) 
sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 3) self.assertEqual(sess_stat['hits'], 1) # reuse session again stats = server_params_test(client_context, server_context, session=session, sni_name=hostname) self.assertTrue(stats['session_reused']) session4 = stats['session'] self.assertEqual(session4.id, session.id) self.assertEqual(session4, session) self.assertGreaterEqual(session4.time, session.time) self.assertGreaterEqual(session4.timeout, session.timeout) sess_stat = server_context.session_stats() self.assertEqual(sess_stat['accept'], 4) self.assertEqual(sess_stat['hits'], 2) def test_session_handling(self): client_context, server_context, hostname = testing_context() client_context2, _, _ = testing_context() # TODO: session reuse does not work with TLSv1.3 client_context.options |= ssl.OP_NO_TLSv1_3 client_context2.options |= ssl.OP_NO_TLSv1_3 server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # session is None before handshake self.assertEqual(s.session, None) self.assertEqual(s.session_reused, None) s.connect((HOST, server.port)) session = s.session self.assertTrue(session) with self.assertRaises(TypeError) as e: s.session = object self.assertEqual(str(e.exception), 'Value is not a SSLSession.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # cannot set session after handshake with self.assertRaises(ValueError) as e: s.session = session self.assertEqual(str(e.exception), 'Cannot set session after handshake.') with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: # can set session before handshake and before the # connection was established s.session = session s.connect((HOST, server.port)) self.assertEqual(s.session.id, session.id) self.assertEqual(s.session, session) self.assertEqual(s.session_reused, True) with client_context2.wrap_socket(socket.socket(), server_hostname=hostname) as s: # cannot re-use session with a different SSLContext with self.assertRaises(ValueError) as e: s.session = session s.connect((HOST, server.port)) self.assertEqual(str(e.exception), 'Session refers to a different SSLContext.') @unittest.skipUnless(has_tls_version('TLSv1_3'), "Test needs TLS 1.3") class TestPostHandshakeAuth(unittest.TestCase): def test_pha_setter(self): protocols = [ ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLS_SERVER, ssl.PROTOCOL_TLS_CLIENT ] for protocol in protocols: ctx = ssl.SSLContext(protocol) self.assertEqual(ctx.post_handshake_auth, False) ctx.post_handshake_auth = True self.assertEqual(ctx.post_handshake_auth, True) ctx.verify_mode = ssl.CERT_REQUIRED self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, True) ctx.post_handshake_auth = False self.assertEqual(ctx.verify_mode, ssl.CERT_REQUIRED) self.assertEqual(ctx.post_handshake_auth, False) ctx.verify_mode = ssl.CERT_OPTIONAL ctx.post_handshake_auth = True self.assertEqual(ctx.verify_mode, ssl.CERT_OPTIONAL) self.assertEqual(ctx.post_handshake_auth, True) def test_pha_required(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), 
server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA method just returns true when cert is already available s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'GETCERT') cert_text = s.recv(4096).decode('us-ascii') self.assertIn('Python Software Foundation CA', cert_text) def test_pha_required_nocert(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True # Ignore expected SSLError in ConnectionHandler of ThreadedEchoServer # (it is only raised sometimes on Windows) with support.catch_threading_exception() as cm: server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'PHA') # receive CertificateRequest self.assertEqual(s.recv(1024), b'OK\n') # send empty Certificate + Finish s.write(b'HASCERT') # receive alert with self.assertRaisesRegex( ssl.SSLError, 'tlsv13 alert certificate required'): s.recv(1024) def test_pha_optional(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # check CERT_OPTIONAL server_context.verify_mode = ssl.CERT_OPTIONAL server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_optional_nocert(self): if support.verbose: sys.stdout.write("\n") client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_OPTIONAL client_context.post_handshake_auth = True server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') # optional doesn't fail when client does not have a cert s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') def test_pha_no_pha_client(self): client_context, server_context, hostname = testing_context() server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with self.assertRaisesRegex(ssl.SSLError, 'not server'): s.verify_client_post_handshake() s.write(b'PHA') self.assertIn(b'extension not received', s.recv(1024)) def test_pha_no_pha_server(self): # server doesn't have PHA enabled, cert is requested in handshake client_context, server_context, hostname = testing_context() server_context.verify_mode = 
ssl.CERT_REQUIRED client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # PHA doesn't fail if there is already a cert s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') def test_pha_not_tls13(self): # TLS 1.2 client_context, server_context, hostname = testing_context() server_context.verify_mode = ssl.CERT_REQUIRED client_context.maximum_version = ssl.TLSVersion.TLSv1_2 client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # PHA fails for TLS != 1.3 s.write(b'PHA') self.assertIn(b'WRONG_SSL_VERSION', s.recv(1024)) def test_bpo37428_pha_cert_none(self): # verify that post_handshake_auth does not implicitly enable cert # validation. hostname = SIGNED_CERTFILE_HOSTNAME client_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) client_context.post_handshake_auth = True client_context.load_cert_chain(SIGNED_CERTFILE) # no cert validation and CA on client side client_context.check_hostname = False client_context.verify_mode = ssl.CERT_NONE server_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) server_context.load_cert_chain(SIGNED_CERTFILE) server_context.load_verify_locations(SIGNING_CA) server_context.post_handshake_auth = True server_context.verify_mode = ssl.CERT_REQUIRED server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'FALSE\n') s.write(b'PHA') self.assertEqual(s.recv(1024), b'OK\n') s.write(b'HASCERT') self.assertEqual(s.recv(1024), b'TRUE\n') # server cert has not been validated self.assertEqual(s.getpeercert(), {}) HAS_KEYLOG = hasattr(ssl.SSLContext, 'keylog_filename') requires_keylog = unittest.skipUnless( HAS_KEYLOG, 'test requires OpenSSL 1.1.1 with keylog callback') class TestSSLDebug(unittest.TestCase): def keylog_lines(self, fname=support.TESTFN): with open(fname) as f: return len(list(f)) @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_defaults(self): self.addCleanup(support.unlink, support.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) self.assertFalse(os.path.isfile(support.TESTFN)) ctx.keylog_filename = support.TESTFN self.assertEqual(ctx.keylog_filename, support.TESTFN) self.assertTrue(os.path.isfile(support.TESTFN)) self.assertEqual(self.keylog_lines(), 1) ctx.keylog_filename = None self.assertEqual(ctx.keylog_filename, None) with self.assertRaises((IsADirectoryError, PermissionError)): # Windows raises PermissionError ctx.keylog_filename = os.path.dirname( os.path.abspath(support.TESTFN)) with self.assertRaises(TypeError): ctx.keylog_filename = 1 @requires_keylog @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_filename(self): self.addCleanup(support.unlink, support.TESTFN) client_context, server_context, hostname = testing_context() 
client_context.keylog_filename = support.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) # header, 5 lines for TLS 1.3 self.assertEqual(self.keylog_lines(), 6) client_context.keylog_filename = None server_context.keylog_filename = support.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 11) client_context.keylog_filename = support.TESTFN server_context.keylog_filename = support.TESTFN server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertGreaterEqual(self.keylog_lines(), 21) client_context.keylog_filename = None server_context.keylog_filename = None @requires_keylog @unittest.skipIf(sys.flags.ignore_environment, "test is not compatible with ignore_environment") @unittest.skipIf(Py_DEBUG_WIN32, "Avoid mixing debug/release CRT on Windows") def test_keylog_env(self): self.addCleanup(support.unlink, support.TESTFN) with unittest.mock.patch.dict(os.environ): os.environ['SSLKEYLOGFILE'] = support.TESTFN self.assertEqual(os.environ['SSLKEYLOGFILE'], support.TESTFN) ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) self.assertEqual(ctx.keylog_filename, None) ctx = ssl.create_default_context() self.assertEqual(ctx.keylog_filename, support.TESTFN) ctx = ssl._create_stdlib_context() self.assertEqual(ctx.keylog_filename, support.TESTFN) def test_msg_callback(self): client_context, server_context, hostname = testing_context() def msg_cb(conn, direction, version, content_type, msg_type, data): pass self.assertIs(client_context._msg_callback, None) client_context._msg_callback = msg_cb self.assertIs(client_context._msg_callback, msg_cb) with self.assertRaises(TypeError): client_context._msg_callback = object() def test_msg_callback_tls12(self): client_context, server_context, hostname = testing_context() client_context.options |= ssl.OP_NO_TLSv1_3 msg = [] def msg_cb(conn, direction, version, content_type, msg_type, data): self.assertIsInstance(conn, ssl.SSLSocket) self.assertIsInstance(data, bytes) self.assertIn(direction, {'read', 'write'}) msg.append((direction, version, content_type, msg_type)) client_context._msg_callback = msg_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) self.assertIn( ("read", TLSVersion.TLSv1_2, _TLSContentType.HANDSHAKE, _TLSMessageType.SERVER_KEY_EXCHANGE), msg ) self.assertIn( ("write", TLSVersion.TLSv1_2, _TLSContentType.CHANGE_CIPHER_SPEC, _TLSMessageType.CHANGE_CIPHER_SPEC), msg ) def test_msg_callback_deadlock_bpo43577(self): client_context, server_context, hostname = testing_context() server_context2 = testing_context()[1] def msg_cb(conn, direction, version, content_type, msg_type, data): pass def sni_cb(sock, servername, ctx): sock.context = server_context2 server_context._msg_callback = msg_cb server_context.sni_callback = sni_cb server = ThreadedEchoServer(context=server_context, chatty=False) with server: with client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) with 
client_context.wrap_socket(socket.socket(), server_hostname=hostname) as s: s.connect((HOST, server.port)) def set_socket_so_linger_on_with_zero_timeout(sock): sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0)) class TestPreHandshakeClose(unittest.TestCase): """Verify behavior of close sockets with received data before to the handshake. """ class SingleConnectionTestServerThread(threading.Thread): def __init__(self, *, name, call_after_accept, timeout=None): self.call_after_accept = call_after_accept self.received_data = b'' # set by .run() self.wrap_error = None # set by .run() self.listener = None # set by .start() self.port = None # set by .start() if timeout is None: self.timeout = support.SHORT_TIMEOUT else: self.timeout = timeout super().__init__(name=name) def __enter__(self): self.start() return self def __exit__(self, *args): try: if self.listener: self.listener.close() except OSError: pass self.join() self.wrap_error = None # avoid dangling references def start(self): self.ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) self.ssl_ctx.verify_mode = ssl.CERT_REQUIRED self.ssl_ctx.load_verify_locations(cafile=ONLYCERT) self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY) self.listener = socket.socket() self.port = socket_helper.bind_port(self.listener) self.listener.settimeout(self.timeout) self.listener.listen(1) super().start() def run(self): try: conn, address = self.listener.accept() except TimeoutError: # on timeout, just close the listener return finally: self.listener.close() with conn: if self.call_after_accept(conn): return try: tls_socket = self.ssl_ctx.wrap_socket(conn, server_side=True) except OSError as err: # ssl.SSLError inherits from OSError self.wrap_error = err else: try: self.received_data = tls_socket.recv(400) except OSError: pass # closed, protocol error, etc. def non_linux_skip_if_other_okay_error(self, err): if sys.platform == "linux": return # Expect the full test setup to always work on Linux. if (isinstance(err, ConnectionResetError) or (isinstance(err, OSError) and err.errno == errno.EINVAL) or re.search('wrong.version.number', getattr(err, "reason", ""), re.I)): # On Windows the TCP RST leads to a ConnectionResetError # (ECONNRESET) which Linux doesn't appear to surface to userspace. # If wrap_socket() winds up on the "if connected:" path and doing # the actual wrapping... we get an SSLError from OpenSSL. Typically # WRONG_VERSION_NUMBER. While appropriate, neither is the scenario # we're specifically trying to test. The way this test is written # is known to work on Linux. We'll skip it anywhere else that it # does not present as doing so. try: self.skipTest(f"Could not recreate conditions on {sys.platform}:" f" {err=}") finally: # gh-108342: Explicitly break the reference cycle err = None # If maintaining this conditional winds up being a problem. # just turn this into an unconditional skip anything but Linux. # The important thing is that our CI has the logic covered. def test_preauth_data_to_tls_server(self): server_accept_called = threading.Event() ready_for_server_wrap_socket = threading.Event() def call_after_accept(unused): server_accept_called.set() if not ready_for_server_wrap_socket.wait(support.SHORT_TIMEOUT): raise RuntimeError("wrap_socket event never set, test may fail.") return False # Tell the server thread to continue. 
server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_server") server.__enter__() # starts it self.addCleanup(server.__exit__) # ... & unittest.TestCase stops it. with socket.socket() as client: client.connect(server.listener.getsockname()) # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(client) client.setblocking(False) server_accept_called.wait() client.send(b"DELETE /data HTTP/1.0\r\n\r\n") client.close() # RST ready_for_server_wrap_socket.set() server.join() wrap_error = server.wrap_error server.wrap_error = None try: self.assertEqual(b"", server.received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle wrap_error = None server = None def test_preauth_data_to_tls_client(self): server_can_continue_with_wrap_socket = threading.Event() client_can_continue_with_wrap_socket = threading.Event() def call_after_accept(conn_to_client): if not server_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): print("ERROR: test client took too long") # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 307 Temporary Redirect\r\n" b"Location: https://example.com/someone-elses-server\r\n" b"\r\n") conn_to_client.close() # RST client_can_continue_with_wrap_socket.set() return True # Tell the server to stop. server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="preauth_data_to_tls_client") server.__enter__() # starts it self.addCleanup(server.__exit__) # ... & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) with socket.socket() as client: client.connect(server.listener.getsockname()) server_can_continue_with_wrap_socket.set() if not client_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT): self.fail("test server took too long") ssl_ctx = ssl.create_default_context() try: tls_client = ssl_ctx.wrap_socket( client, server_hostname="localhost") except OSError as err: # SSLError inherits from OSError wrap_error = err received_data = b"" else: wrap_error = None received_data = tls_client.recv(400) tls_client.close() server.join() try: self.assertEqual(b"", received_data) self.assertIsInstance(wrap_error, OSError) # All platforms. 
self.non_linux_skip_if_other_okay_error(wrap_error) self.assertIsInstance(wrap_error, ssl.SSLError) self.assertIn("before TLS handshake with data", wrap_error.args[1]) self.assertIn("before TLS handshake with data", wrap_error.reason) self.assertNotEqual(0, wrap_error.args[0]) self.assertIsNone(wrap_error.library, msg="attr must exist") finally: # gh-108342: Explicitly break the reference cycle wrap_error = None server = None def test_https_client_non_tls_response_ignored(self): server_responding = threading.Event() class SynchronizedHTTPSConnection(http.client.HTTPSConnection): def connect(self): # Call clear text HTTP connect(), not the encrypted HTTPS (TLS) # connect(): wrap_socket() is called manually below. http.client.HTTPConnection.connect(self) # Wait for our fault injection server to have done its thing. if not server_responding.wait(support.SHORT_TIMEOUT) and support.verbose: sys.stdout.write("server_responding event never set.") self.sock = self._context.wrap_socket( self.sock, server_hostname=self.host) def call_after_accept(conn_to_client): # This forces an immediate connection close via RST on .close(). set_socket_so_linger_on_with_zero_timeout(conn_to_client) conn_to_client.send( b"HTTP/1.0 402 Payment Required\r\n" b"\r\n") conn_to_client.close() # RST server_responding.set() return True # Tell the server to stop. timeout = 2.0 server = self.SingleConnectionTestServerThread( call_after_accept=call_after_accept, name="non_tls_http_RST_responder", timeout=timeout) server.__enter__() # starts it self.addCleanup(server.__exit__) # ... & unittest.TestCase stops it. # Redundant; call_after_accept sets SO_LINGER on the accepted conn. set_socket_so_linger_on_with_zero_timeout(server.listener) connection = SynchronizedHTTPSConnection( server.listener.getsockname()[0], port=server.port, context=ssl.create_default_context(), timeout=timeout, ) # There are lots of reasons this raises as desired, long before this # test was added. Sending the request requires a successful TLS wrapped # socket; that fails if the connection is broken. It may seem pointless # to test this. It serves as an illustration of something that we never # want to happen... properly not happening. 
with self.assertRaises(OSError): connection.request("HEAD", "/test", headers={"Host": "localhost"}) response = connection.getresponse() server.join() def setUpModule(): if support.verbose: plats = { 'Mac': platform.mac_ver, 'Windows': platform.win32_ver, } for name, func in plats.items(): plat = func() if plat and plat[0]: plat = '%s %r' % (name, plat) break else: plat = repr(platform.platform()) print("test_ssl: testing with %r %r" % (ssl.OPENSSL_VERSION, ssl.OPENSSL_VERSION_INFO)) print(" under %s" % plat) print(" HAS_SNI = %r" % ssl.HAS_SNI) print(" OP_ALL = 0x%8x" % ssl.OP_ALL) try: print(" OP_NO_TLSv1_1 = 0x%8x" % ssl.OP_NO_TLSv1_1) except AttributeError: pass for filename in [ CERTFILE, BYTES_CERTFILE, ONLYCERT, ONLYKEY, BYTES_ONLYCERT, BYTES_ONLYKEY, SIGNED_CERTFILE, SIGNED_CERTFILE2, SIGNING_CA, BADCERT, BADKEY, EMPTYCERT]: if not os.path.exists(filename): raise support.TestFailed("Can't read certificate file %r" % filename) thread_info = support.threading_setup() unittest.addModuleCleanup(support.threading_cleanup, *thread_info) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_subprocess.py000066400000000000000000004613221471441230600220630ustar00rootroot00000000000000import unittest from unittest import mock from test import support import subprocess import sys import signal import io import itertools import os import errno import tempfile import time import traceback import types import selectors import sysconfig import select import shutil import threading import gc import textwrap import json import pathlib from test.support import FakePath try: import _testcapi except ImportError: _testcapi = None try: import pwd except ImportError: pwd = None try: import grp except ImportError: grp = None if support.PGO: raise unittest.SkipTest("test is not helpful for PGO") mswindows = (sys.platform == "win32") # # Depends on the following external programs: Python # if mswindows: SETBINARY = ('import msvcrt; msvcrt.setmode(sys.stdout.fileno(), ' 'os.O_BINARY);') else: SETBINARY = '' NONEXISTING_CMD = ('nonexisting_i_hope',) # Ignore errors that indicate the command was not found NONEXISTING_ERRORS = (FileNotFoundError, NotADirectoryError, PermissionError) ZERO_RETURN_CMD = (sys.executable, '-c', 'pass') def setUpModule(): shell_true = shutil.which('true') if shell_true is None: return if (os.access(shell_true, os.X_OK) and subprocess.run([shell_true]).returncode == 0): global ZERO_RETURN_CMD ZERO_RETURN_CMD = (shell_true,) # Faster than Python startup. class BaseTestCase(unittest.TestCase): def setUp(self): # Try to minimize the number of children we have so this test # doesn't crash on some buildbots (Alphas in particular). support.reap_children() def tearDown(self): if not mswindows: # subprocess._active is not used on Windows and is set to None. for inst in subprocess._active: inst.wait() subprocess._cleanup() self.assertFalse( subprocess._active, "subprocess._active not empty" ) self.doCleanups() support.reap_children() class PopenTestException(Exception): pass class PopenExecuteChildRaises(subprocess.Popen): """Popen subclass for testing cleanup of subprocess.PIPE filehandles when _execute_child fails. 
""" def _execute_child(self, *args, **kwargs): raise PopenTestException("Forced Exception for Test") class ProcessTestCase(BaseTestCase): def test_io_buffered_by_default(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) try: self.assertIsInstance(p.stdin, io.BufferedIOBase) self.assertIsInstance(p.stdout, io.BufferedIOBase) self.assertIsInstance(p.stderr, io.BufferedIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_io_unbuffered_works(self): p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0) try: self.assertIsInstance(p.stdin, io.RawIOBase) self.assertIsInstance(p.stdout, io.RawIOBase) self.assertIsInstance(p.stderr, io.RawIOBase) finally: p.stdin.close() p.stdout.close() p.stderr.close() p.wait() def test_call_seq(self): # call() function with sequence argument rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(rc, 47) def test_call_timeout(self): # call() function with timeout argument; we want to test that the child # process gets killed when the timeout expires. If the child isn't # killed, this call will deadlock since subprocess.call waits for the # child. self.assertRaises(subprocess.TimeoutExpired, subprocess.call, [sys.executable, "-c", "while True: pass"], timeout=0.1) def test_check_call_zero(self): # check_call() function with zero return code rc = subprocess.check_call(ZERO_RETURN_CMD) self.assertEqual(rc, 0) def test_check_call_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_call([sys.executable, "-c", "import sys; sys.exit(47)"]) self.assertEqual(c.exception.returncode, 47) def test_check_output(self): # check_output() function with zero return code output = subprocess.check_output( [sys.executable, "-c", "print('BDFL')"]) self.assertIn(b'BDFL', output) def test_check_output_nonzero(self): # check_call() function with non-zero return code with self.assertRaises(subprocess.CalledProcessError) as c: subprocess.check_output( [sys.executable, "-c", "import sys; sys.exit(5)"]) self.assertEqual(c.exception.returncode, 5) def test_check_output_stderr(self): # check_output() function stderr redirected to stdout output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stderr.write('BDFL')"], stderr=subprocess.STDOUT) self.assertIn(b'BDFL', output) def test_check_output_stdin_arg(self): # check_output() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], stdin=tf) self.assertIn(b'PEAR', output) def test_check_output_input_arg(self): # check_output() can be called with input set to a string output = subprocess.check_output( [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read().upper())"], input=b'pear') self.assertIn(b'PEAR', output) def test_check_output_input_none(self): """input=None has a legacy meaning of input='' on check_output.""" output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None) self.assertNotIn(b'XX', output) def test_check_output_input_none_text(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, text=True) 
self.assertNotIn('XX', output) def test_check_output_input_none_universal_newlines(self): output = subprocess.check_output( [sys.executable, "-c", "import sys; print('XX' if sys.stdin.read() else '')"], input=None, universal_newlines=True) self.assertNotIn('XX', output) def test_check_output_stdout_arg(self): # check_output() refuses to accept 'stdout' argument with self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdout=sys.stdout) self.fail("Expected ValueError when stdout arg supplied.") self.assertIn('stdout', c.exception.args[0]) def test_check_output_stdin_with_input_arg(self): # check_output() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError) as c: output = subprocess.check_output( [sys.executable, "-c", "print('will not be run')"], stdin=tf, input=b'hare') self.fail("Expected ValueError when stdin and input args supplied.") self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) def test_check_output_timeout(self): # check_output() function with timeout arg with self.assertRaises(subprocess.TimeoutExpired) as c: output = subprocess.check_output( [sys.executable, "-c", "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"], # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3) self.fail("Expected TimeoutExpired.") self.assertEqual(c.exception.output, b'BDFL') def test_call_kwargs(self): # call() function with keyword args newenv = os.environ.copy() newenv["FRUIT"] = "banana" rc = subprocess.call([sys.executable, "-c", 'import sys, os;' 'sys.exit(os.getenv("FRUIT")=="banana")'], env=newenv) self.assertEqual(rc, 1) def test_invalid_args(self): # Popen() called with invalid arguments should raise TypeError # but Popen.__del__ should not complain (issue #12085) with support.captured_stderr() as s: self.assertRaises(TypeError, subprocess.Popen, invalid_arg_name=1) argcount = subprocess.Popen.__init__.__code__.co_argcount too_many_args = [0] * (argcount + 1) self.assertRaises(TypeError, subprocess.Popen, *too_many_args) self.assertEqual(s.getvalue(), '') def test_stdin_none(self): # .stdin is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) p.wait() self.assertEqual(p.stdin, None) def test_stdout_none(self): # .stdout is None when not redirected, and the child's stdout will # be inherited from the parent. In order to test this we run a # subprocess in a subprocess: # this_test # \-- subprocess created by this test (parent) # \-- subprocess created by the parent subprocess (child) # The parent doesn't specify stdout, so the child will use the # parent's stdout. This test checks that the message printed by the # child goes to the parent stdout. The parent also checks that the # child's stdout is None. See #11963. 
code = ('import sys; from subprocess import Popen, PIPE;' 'p = Popen([sys.executable, "-c", "print(\'test_stdout_none\')"],' ' stdin=PIPE, stderr=PIPE);' 'p.wait(); assert p.stdout is None;') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test_stdout_none') def test_stderr_none(self): # .stderr is None when not redirected p = subprocess.Popen([sys.executable, "-c", 'print("banana")'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stdin.close) p.wait() self.assertEqual(p.stderr, None) def _assert_python(self, pre_args, **kwargs): # We include sys.exit() to prevent the test runner from hanging # whenever python is found. args = pre_args + ["import sys; sys.exit(47)"] p = subprocess.Popen(args, **kwargs) p.wait() self.assertEqual(47, p.returncode) def test_executable(self): # Check that the executable argument works. # # On Unix (non-Mac and non-Windows), Python looks at args[0] to # determine where its standard library is, so we need the directory # of args[0] to be valid for the Popen() call to Python to succeed. # See also issue #16170 and issue #7774. doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=sys.executable) def test_bytes_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=os.fsencode(sys.executable)) def test_pathlike_executable(self): doesnotexist = os.path.join(os.path.dirname(sys.executable), "doesnotexist") self._assert_python([doesnotexist, "-c"], executable=FakePath(sys.executable)) def test_executable_takes_precedence(self): # Check that the executable argument takes precedence over args[0]. # # Verify first that the call succeeds without the executable arg. pre_args = [sys.executable, "-c"] self._assert_python(pre_args) self.assertRaises(NONEXISTING_ERRORS, self._assert_python, pre_args, executable=NONEXISTING_CMD[0]) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_executable_replaces_shell(self): # Check that the executable argument replaces the default shell # when shell=True. self._assert_python([], executable=sys.executable, shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_bytes_executable_replaces_shell(self): self._assert_python([], executable=os.fsencode(sys.executable), shell=True) @unittest.skipIf(mswindows, "executable argument replaces shell") def test_pathlike_executable_replaces_shell(self): self._assert_python([], executable=FakePath(sys.executable), shell=True) # For use in the test_cwd* tests below. def _normalize_cwd(self, cwd): # Normalize an expected cwd (for Tru64 support). # We can't use os.path.realpath since it doesn't expand Tru64 {memb} # strings. See bug #1063571. with support.change_cwd(cwd): return os.getcwd() # For use in the test_cwd* tests below. def _split_python_path(self): # Return normalized (python_dir, python_base). python_path = os.path.realpath(sys.executable) return os.path.split(python_path) # For use in the test_cwd* tests below. 
def _assert_cwd(self, expected_cwd, python_arg, **kwargs): # Invoke Python via Popen, and assert that (1) the call succeeds, # and that (2) the current working directory of the child process # matches *expected_cwd*. p = subprocess.Popen([python_arg, "-c", "import os, sys; " "buf = sys.stdout.buffer; " "buf.write(os.getcwd().encode()); " "buf.flush(); " "sys.exit(47)"], stdout=subprocess.PIPE, **kwargs) self.addCleanup(p.stdout.close) p.wait() self.assertEqual(47, p.returncode) normcase = os.path.normcase self.assertEqual(normcase(expected_cwd), normcase(p.stdout.read().decode())) def test_cwd(self): # Check that cwd changes the cwd for the child process. temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=temp_dir) def test_cwd_with_bytes(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=os.fsencode(temp_dir)) def test_cwd_with_pathlike(self): temp_dir = tempfile.gettempdir() temp_dir = self._normalize_cwd(temp_dir) self._assert_cwd(temp_dir, sys.executable, cwd=FakePath(temp_dir)) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_arg(self): # Check that Popen looks for args[0] relative to cwd if args[0] # is relative. python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) with support.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python]) self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, rel_python, cwd=python_dir) @unittest.skipIf(mswindows, "pending resolution of issue #15533") def test_cwd_with_relative_executable(self): # Check that Popen looks for executable relative to cwd if executable # is relative (and that executable takes precedence over args[0]). python_dir, python_base = self._split_python_path() rel_python = os.path.join(os.curdir, python_base) doesntexist = "somethingyoudonthave" with support.temp_cwd() as wrong_dir: # Before calling with the correct cwd, confirm that the call fails # without cwd and with the wrong cwd. self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python) self.assertRaises(FileNotFoundError, subprocess.Popen, [doesntexist], executable=rel_python, cwd=wrong_dir) python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, doesntexist, executable=rel_python, cwd=python_dir) def test_cwd_with_absolute_arg(self): # Check that Popen can find the executable when the cwd is wrong # if args[0] is an absolute path. python_dir, python_base = self._split_python_path() abs_python = os.path.join(python_dir, python_base) rel_python = os.path.join(os.curdir, python_base) with support.temp_dir() as wrong_dir: # Before calling with an absolute path, confirm that using a # relative path fails. 
self.assertRaises(FileNotFoundError, subprocess.Popen, [rel_python], cwd=wrong_dir) wrong_dir = self._normalize_cwd(wrong_dir) self._assert_cwd(wrong_dir, abs_python, cwd=wrong_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') def test_executable_with_cwd(self): python_dir, python_base = self._split_python_path() python_dir = self._normalize_cwd(python_dir) self._assert_cwd(python_dir, "somethingyoudonthave", executable=sys.executable, cwd=python_dir) @unittest.skipIf(sys.base_prefix != sys.prefix, 'Test is not venv-compatible') @unittest.skipIf(sysconfig.is_python_build(), "need an installed Python. See #7774") def test_executable_without_cwd(self): # For a normal installation, it should work without 'cwd' # argument. For test runs in the build directory, see #7774. self._assert_cwd(os.getcwd(), "somethingyoudonthave", executable=sys.executable) def test_stdin_pipe(self): # stdin redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.stdin.write(b"pear") p.stdin.close() p.wait() self.assertEqual(p.returncode, 1) def test_stdin_filedes(self): # stdin is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() os.write(d, b"pear") os.lseek(d, 0, 0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=d) p.wait() self.assertEqual(p.returncode, 1) def test_stdin_fileobj(self): # stdin is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b"pear") tf.seek(0) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.exit(sys.stdin.read() == "pear")'], stdin=tf) p.wait() self.assertEqual(p.returncode, 1) def test_stdout_pipe(self): # stdout redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read(), b"orange") def test_stdout_filedes(self): # stdout is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"orange") def test_stdout_fileobj(self): # stdout is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("orange")'], stdout=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"orange") def test_stderr_pipe(self): # stderr redirection p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=subprocess.PIPE) with p: self.assertEqual(p.stderr.read(), b"strawberry") def test_stderr_filedes(self): # stderr is set to open file descriptor tf = tempfile.TemporaryFile() self.addCleanup(tf.close) d = tf.fileno() p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=d) p.wait() os.lseek(d, 0, 0) self.assertEqual(os.read(d, 1024), b"strawberry") def test_stderr_fileobj(self): # stderr is set to open file object tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("strawberry")'], stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"strawberry") def test_stderr_redirect_with_no_stdout_redirect(self): # test stderr=STDOUT while stdout=None (not set) # - grandchild prints to stderr # - child 
redirects grandchild's stderr to its stdout # - the parent should get grandchild's stderr in child's stdout p = subprocess.Popen([sys.executable, "-c", 'import sys, subprocess;' 'rc = subprocess.call([sys.executable, "-c",' ' "import sys;"' ' "sys.stderr.write(\'42\')"],' ' stderr=subprocess.STDOUT);' 'sys.exit(rc)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() #NOTE: stdout should get stderr from grandchild self.assertEqual(stdout, b'42') self.assertEqual(stderr, b'') # should be empty self.assertEqual(p.returncode, 0) def test_stdout_stderr_pipe(self): # capture stdout and stderr to the same pipe p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) with p: self.assertEqual(p.stdout.read(), b"appleorange") def test_stdout_stderr_file(self): # capture stdout and stderr to the same open file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdout=tf, stderr=tf) p.wait() tf.seek(0) self.assertEqual(tf.read(), b"appleorange") def test_stdout_filedes_of_stdout(self): # stdout is set to 1 (#1531862). # To avoid printing the text on stdout, we do something similar to # test_stdout_none (see above). The parent subprocess calls the child # subprocess passing stdout=1, and this test uses stdout=PIPE in # order to capture and check the output of the parent. See #11963. code = ('import sys, subprocess; ' 'rc = subprocess.call([sys.executable, "-c", ' ' "import os, sys; sys.exit(os.write(sys.stdout.fileno(), ' 'b\'test with stdout=1\'))"], stdout=1); ' 'assert rc == 18') p = subprocess.Popen([sys.executable, "-c", code], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) out, err = p.communicate() self.assertEqual(p.returncode, 0, err) self.assertEqual(out.rstrip(), b'test with stdout=1') def test_stdout_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'for i in range(10240):' 'print("x" * 1024)'], stdout=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdout, None) def test_stderr_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys\n' 'for i in range(10240):' 'sys.stderr.write("x" * 1024)'], stderr=subprocess.DEVNULL) p.wait() self.assertEqual(p.stderr, None) def test_stdin_devnull(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdin.read(1)'], stdin=subprocess.DEVNULL) p.wait() self.assertEqual(p.stdin, None) def test_env(self): newenv = os.environ.copy() newenv["FRUIT"] = "orange" with subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange") # Windows requires at least the SYSTEMROOT environment variable to start # Python @unittest.skipIf(sys.platform == 'win32', 'cannot test an empty env on Windows') @unittest.skipIf(sysconfig.get_config_var('Py_ENABLE_SHARED') == 1, 'The Python shared library cannot be loaded ' 'with an empty environment.') def test_empty_env(self): """Verify that env={} is as empty as possible.""" def is_env_var_to_ignore(n): """Determine if an environment variable is under our control.""" # This excludes some __CF_* and VERSIONER_* keys MacOS insists # on adding even when the environment in exec is 
empty. # Gentoo sandboxes also force LD_PRELOAD and SANDBOX_* to exist. return ('VERSIONER' in n or '__CF' in n or # MacOS n == 'LD_PRELOAD' or n.startswith('SANDBOX') or # Gentoo n == 'LC_CTYPE') # Locale coercion triggered with subprocess.Popen([sys.executable, "-c", 'import os; print(list(os.environ.keys()))'], stdout=subprocess.PIPE, env={}) as p: stdout, stderr = p.communicate() child_env_names = eval(stdout.strip()) self.assertIsInstance(child_env_names, list) child_env_names = [k for k in child_env_names if not is_env_var_to_ignore(k)] self.assertEqual(child_env_names, []) def test_invalid_cmd(self): # null character in the command name cmd = sys.executable + '\0' with self.assertRaises(ValueError): subprocess.Popen([cmd, "-c", "pass"]) # null character in the command argument with self.assertRaises(ValueError): subprocess.Popen([sys.executable, "-c", "pass#\0"]) def test_invalid_env(self): # null character in the environment variable name newenv = os.environ.copy() newenv["FRUIT\0VEGETABLE"] = "cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # null character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange\0VEGETABLE=cabbage" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable name newenv = os.environ.copy() newenv["FRUIT=ORANGE"] = "lemon" with self.assertRaises(ValueError): subprocess.Popen(ZERO_RETURN_CMD, env=newenv) # equal character in the environment variable value newenv = os.environ.copy() newenv["FRUIT"] = "orange=lemon" with subprocess.Popen([sys.executable, "-c", 'import sys, os;' 'sys.stdout.write(os.getenv("FRUIT"))'], stdout=subprocess.PIPE, env=newenv) as p: stdout, stderr = p.communicate() self.assertEqual(stdout, b"orange=lemon") def test_communicate_stdin(self): p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.exit(sys.stdin.read() == "pear")'], stdin=subprocess.PIPE) p.communicate(b"pear") self.assertEqual(p.returncode, 1) def test_communicate_stdout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stdout.write("pineapple")'], stdout=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, b"pineapple") self.assertEqual(stderr, None) def test_communicate_stderr(self): p = subprocess.Popen([sys.executable, "-c", 'import sys; sys.stderr.write("pineapple")'], stderr=subprocess.PIPE) (stdout, stderr) = p.communicate() self.assertEqual(stdout, None) self.assertEqual(stderr, b"pineapple") def test_communicate(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stderr.write("pineapple");' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) (stdout, stderr) = p.communicate(b"banana") self.assertEqual(stdout, b"banana") self.assertEqual(stderr, b"pineapple") def test_communicate_timeout(self): p = subprocess.Popen([sys.executable, "-c", 'import sys,os,time;' 'sys.stderr.write("pineapple\\n");' 'time.sleep(1);' 'sys.stderr.write("pear\\n");' 'sys.stdout.write(sys.stdin.read())'], universal_newlines=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.assertRaises(subprocess.TimeoutExpired, p.communicate, "banana", timeout=0.3) # Make sure we can keep waiting for it, and that we get the whole output # after it completes. 
        (stdout, stderr) = p.communicate()
        self.assertEqual(stdout, "banana")
        self.assertEqual(stderr.encode(), b"pineapple\npear\n")

    def test_communicate_timeout_large_output(self):
        # Test an expiring timeout while the child is outputting lots of data.
        p = subprocess.Popen([sys.executable, "-c",
                              'import sys,os,time;'
                              'sys.stdout.write("a" * (64 * 1024));'
                              'time.sleep(0.2);'
                              'sys.stdout.write("a" * (64 * 1024));'
                              'time.sleep(0.2);'
                              'sys.stdout.write("a" * (64 * 1024));'
                              'time.sleep(0.2);'
                              'sys.stdout.write("a" * (64 * 1024));'],
                             stdout=subprocess.PIPE)
        self.assertRaises(subprocess.TimeoutExpired, p.communicate,
                          timeout=0.4)
        (stdout, _) = p.communicate()
        self.assertEqual(len(stdout), 4 * 64 * 1024)

    # Test for the fd leak reported in http://bugs.python.org/issue2791.
    def test_communicate_pipe_fd_leak(self):
        for stdin_pipe in (False, True):
            for stdout_pipe in (False, True):
                for stderr_pipe in (False, True):
                    options = {}
                    if stdin_pipe:
                        options['stdin'] = subprocess.PIPE
                    if stdout_pipe:
                        options['stdout'] = subprocess.PIPE
                    if stderr_pipe:
                        options['stderr'] = subprocess.PIPE
                    if not options:
                        continue
                    p = subprocess.Popen(ZERO_RETURN_CMD, **options)
                    p.communicate()
                    if p.stdin is not None:
                        self.assertTrue(p.stdin.closed)
                    if p.stdout is not None:
                        self.assertTrue(p.stdout.closed)
                    if p.stderr is not None:
                        self.assertTrue(p.stderr.closed)

    def test_communicate_returns(self):
        # communicate() should return None if no redirection is active
        p = subprocess.Popen([sys.executable, "-c",
                              "import sys; sys.exit(47)"])
        (stdout, stderr) = p.communicate()
        self.assertEqual(stdout, None)
        self.assertEqual(stderr, None)

    def test_communicate_pipe_buf(self):
        # communicate() with writes larger than pipe_buf
        # This test will probably deadlock rather than fail, if
        # communicate() does not work properly.
x, y = os.pipe() os.close(x) os.close(y) p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read(47));' 'sys.stderr.write("x" * %d);' 'sys.stdout.write(sys.stdin.read())' % support.PIPE_MAX_SIZE], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) string_to_write = b"a" * support.PIPE_MAX_SIZE (stdout, stderr) = p.communicate(string_to_write) self.assertEqual(stdout, string_to_write) def test_writes_before_communicate(self): # stdin.write before communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' 'sys.stdout.write(sys.stdin.read())'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.stdin.write(b"banana") (stdout, stderr) = p.communicate(b"split") self.assertEqual(stdout, b"bananasplit") self.assertEqual(stderr, b"") def test_universal_newlines_and_text(self): args = [ sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(sys.stdin.readline().encode());' 'buf.flush();' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(sys.stdin.read().encode());' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'] for extra_kwarg in ('universal_newlines', 'text'): p = subprocess.Popen(args, **{'stdin': subprocess.PIPE, 'stdout': subprocess.PIPE, extra_kwarg: True}) with p: p.stdin.write("line1\n") p.stdin.flush() self.assertEqual(p.stdout.readline(), "line1\n") p.stdin.write("line3\n") p.stdin.close() self.addCleanup(p.stdout.close) self.assertEqual(p.stdout.readline(), "line2\n") self.assertEqual(p.stdout.read(6), "line3\n") self.assertEqual(p.stdout.read(), "line4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate(self): # universal newlines through communicate() p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + 'buf = sys.stdout.buffer;' 'buf.write(b"line2\\n");' 'buf.flush();' 'buf.write(b"line4\\n");' 'buf.flush();' 'buf.write(b"line5\\r\\n");' 'buf.flush();' 'buf.write(b"line6\\r");' 'buf.flush();' 'buf.write(b"\\nline7");' 'buf.flush();' 'buf.write(b"\\nline8");'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=1) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate() self.assertEqual(stdout, "line2\nline4\nline5\nline6\nline7\nline8") def test_universal_newlines_communicate_stdin(self): # universal newlines through communicate(), with only stdin p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.readline() assert s == "line1\\n", repr(s) s = sys.stdin.read() assert s == "line3\\n", repr(s) ''')], stdin=subprocess.PIPE, universal_newlines=1) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_input_none(self): # Test communicate(input=None) with universal newlines. # # We set stdout to PIPE because, as of this writing, a different # code path is tested when the number of pipes is zero or one. 
p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) p.communicate() self.assertEqual(p.returncode, 0) def test_universal_newlines_communicate_stdin_stdout_stderr(self): # universal newlines through communicate(), with stdin, stdout, stderr p = subprocess.Popen([sys.executable, "-c", 'import sys,os;' + SETBINARY + textwrap.dedent(''' s = sys.stdin.buffer.readline() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line2\\r") sys.stderr.buffer.write(b"eline2\\n") s = sys.stdin.buffer.read() sys.stdout.buffer.write(s) sys.stdout.buffer.write(b"line4\\n") sys.stdout.buffer.write(b"line5\\r\\n") sys.stderr.buffer.write(b"eline6\\r") sys.stderr.buffer.write(b"eline7\\r\\nz") ''')], stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, universal_newlines=True) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) (stdout, stderr) = p.communicate("line1\nline3\n") self.assertEqual(p.returncode, 0) self.assertEqual("line1\nline2\nline3\nline4\nline5\n", stdout) # Python debug build push something like "[42442 refs]\n" # to stderr at exit of subprocess. self.assertTrue(stderr.startswith("eline2\neline6\neline7\n")) def test_universal_newlines_communicate_encodings(self): # Check that universal newlines mode works for various encodings, # in particular for encodings in the UTF-16 and UTF-32 families. # See issue #15595. # # UTF-16 and UTF-32-BE are sufficient to check both with BOM and # without, and UTF-16 and UTF-32. for encoding in ['utf-16', 'utf-32-be']: code = ("import sys; " r"sys.stdout.buffer.write('1\r\n2\r3\n4'.encode('%s'))" % encoding) args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding=encoding) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '1\n2\n3\n4') def test_communicate_errors(self): for errors, expected in [ ('ignore', ''), ('replace', '\ufffd\ufffd'), ('surrogateescape', '\udc80\udc80'), ('backslashreplace', '\\x80\\x80'), ]: code = ("import sys; " r"sys.stdout.buffer.write(b'[\x80\x80]')") args = [sys.executable, '-c', code] # We set stdin to be non-None because, as of this writing, # a different code path is used when the number of pipes is # zero or one. popen = subprocess.Popen(args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, encoding='utf-8', errors=errors) stdout, stderr = popen.communicate(input='') self.assertEqual(stdout, '[{}]'.format(expected)) def test_no_leaking(self): # Make sure we leak no resources if not mswindows: max_handles = 1026 # too much for most UNIX systems else: max_handles = 2050 # too much for (at least some) Windows setups handles = [] tmpdir = tempfile.mkdtemp() try: for i in range(max_handles): try: tmpfile = os.path.join(tmpdir, support.TESTFN) handles.append(os.open(tmpfile, os.O_WRONLY|os.O_CREAT)) except OSError as e: if e.errno != errno.EMFILE: raise break else: self.skipTest("failed to reach the file descriptor limit " "(tried %d)" % max_handles) # Close a couple of them (should be enough for a subprocess) for i in range(10): os.close(handles.pop()) # Loop creating some subprocesses. If one of them leaks some fds, # the next loop iteration will fail by reaching the max fd limit. 
for i in range(15): p = subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.read())"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) data = p.communicate(b"lime")[0] self.assertEqual(data, b"lime") finally: for h in handles: os.close(h) shutil.rmtree(tmpdir) def test_list2cmdline(self): self.assertEqual(subprocess.list2cmdline(['a b c', 'd', 'e']), '"a b c" d e') self.assertEqual(subprocess.list2cmdline(['ab"c', '\\', 'd']), 'ab\\"c \\ d') self.assertEqual(subprocess.list2cmdline(['ab"c', ' \\', 'd']), 'ab\\"c " \\\\" d') self.assertEqual(subprocess.list2cmdline(['a\\\\\\b', 'de fg', 'h']), 'a\\\\\\b "de fg" h') self.assertEqual(subprocess.list2cmdline(['a\\"b', 'c', 'd']), 'a\\\\\\"b c d') self.assertEqual(subprocess.list2cmdline(['a\\\\b c', 'd', 'e']), '"a\\\\b c" d e') self.assertEqual(subprocess.list2cmdline(['a\\\\b\\ c', 'd', 'e']), '"a\\\\b\\ c" d e') self.assertEqual(subprocess.list2cmdline(['ab', '']), 'ab ""') def test_poll(self): p = subprocess.Popen([sys.executable, "-c", "import os; os.read(0, 1)"], stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) self.assertIsNone(p.poll()) os.write(p.stdin.fileno(), b'A') p.wait() # Subsequent invocations should just return the returncode self.assertEqual(p.poll(), 0) def test_wait(self): p = subprocess.Popen(ZERO_RETURN_CMD) self.assertEqual(p.wait(), 0) # Subsequent invocations should just return the returncode self.assertEqual(p.wait(), 0) def test_wait_timeout(self): p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.3)"]) with self.assertRaises(subprocess.TimeoutExpired) as c: p.wait(timeout=0.0001) self.assertIn("0.0001", str(c.exception)) # For coverage of __str__. self.assertEqual(p.wait(timeout=support.SHORT_TIMEOUT), 0) def test_invalid_bufsize(self): # an invalid type of the bufsize argument should raise # TypeError. with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, "orange") def test_bufsize_is_none(self): # bufsize=None should be the same as bufsize=0. p = subprocess.Popen(ZERO_RETURN_CMD, None) self.assertEqual(p.wait(), 0) # Again with keyword arg p = subprocess.Popen(ZERO_RETURN_CMD, bufsize=None) self.assertEqual(p.wait(), 0) def _test_bufsize_equal_one(self, line, expected, universal_newlines): # subprocess may deadlock with bufsize=1, see issue #21332 with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write(sys.stdin.readline());" "sys.stdout.flush()"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, bufsize=1, universal_newlines=universal_newlines) as p: p.stdin.write(line) # expect that it flushes the line in text mode os.close(p.stdin.fileno()) # close it without flushing the buffer read_line = p.stdout.readline() with support.SuppressCrashReport(): try: p.stdin.close() except OSError: pass p.stdin = None self.assertEqual(p.returncode, 0) self.assertEqual(read_line, expected) def test_bufsize_equal_one_text_mode(self): # line is flushed in text mode with bufsize=1. # we should get the full line in return line = "line\n" self._test_bufsize_equal_one(line, line, universal_newlines=True) def test_bufsize_equal_one_binary_mode(self): # line is not flushed in binary mode with bufsize=1. 
# we should get empty response line = b'line' + os.linesep.encode() # assume ascii-based locale with self.assertWarnsRegex(RuntimeWarning, 'line buffering'): self._test_bufsize_equal_one(line, b'', universal_newlines=False) def test_leaking_fds_on_error(self): # see bug #5179: Popen leaks file descriptors to PIPEs if # the child fails to execute; this will eventually exhaust # the maximum number of open fds. 1024 seems a very common # value for that limit, but Windows has 2048, so we loop # 1024 times (each call leaked two fds). for i in range(1024): with self.assertRaises(NONEXISTING_ERRORS): subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) def test_nonexisting_with_pipes(self): # bpo-30121: Popen with pipes must close properly pipes on error. # Previously, os.close() was called with a Windows handle which is not # a valid file descriptor. # # Run the test in a subprocess to control how the CRT reports errors # and to get stderr content. try: import msvcrt msvcrt.CrtSetReportMode except (AttributeError, ImportError): self.skipTest("need msvcrt.CrtSetReportMode") code = textwrap.dedent(f""" import msvcrt import subprocess cmd = {NONEXISTING_CMD!r} for report_type in [msvcrt.CRT_WARN, msvcrt.CRT_ERROR, msvcrt.CRT_ASSERT]: msvcrt.CrtSetReportMode(report_type, msvcrt.CRTDBG_MODE_FILE) msvcrt.CrtSetReportFile(report_type, msvcrt.CRTDBG_FILE_STDERR) try: subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) except OSError: pass """) cmd = [sys.executable, "-c", code] proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, universal_newlines=True) with proc: stderr = proc.communicate()[1] self.assertEqual(stderr, "") self.assertEqual(proc.returncode, 0) def test_double_close_on_error(self): # Issue #18851 fds = [] def open_fds(): for i in range(20): fds.extend(os.pipe()) time.sleep(0.001) t = threading.Thread(target=open_fds) t.start() try: with self.assertRaises(EnvironmentError): subprocess.Popen(NONEXISTING_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: t.join() exc = None for fd in fds: # If a double close occurred, some of those fds will # already have been closed by mistake, and os.close() # here will raise. try: os.close(fd) except OSError as e: exc = e if exc is not None: raise exc def test_threadsafe_wait(self): """Issue21291: Popen.wait() needs to be threadsafe for returncode.""" proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(12)']) self.assertEqual(proc.returncode, None) results = [] def kill_proc_timer_thread(): results.append(('thread-start-poll-result', proc.poll())) # terminate it from the thread and wait for the result. proc.kill() proc.wait() results.append(('thread-after-kill-and-wait', proc.returncode)) # this wait should be a no-op given the above. proc.wait() results.append(('thread-after-second-wait', proc.returncode)) # This is a timing sensitive test, the failure mode is # triggered when both the main thread and this thread are in # the wait() call at once. The delay here is to allow the # main thread to most likely be blocked in its wait() call. t = threading.Timer(0.2, kill_proc_timer_thread) t.start() if mswindows: expected_errorcode = 1 else: # Should be -9 because of the proc.kill() from the thread. expected_errorcode = -9 # Wait for the process to finish; the thread should kill it # long before it finishes on its own. Supplying a timeout # triggers a different code path for better coverage. 
proc.wait(timeout=support.SHORT_TIMEOUT) self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in wait from main thread") # This should be a no-op with no change in returncode. proc.wait() self.assertEqual(proc.returncode, expected_errorcode, msg="unexpected result in second main wait.") t.join() # Ensure that all of the thread results are as expected. # When a race condition occurs in wait(), the returncode could # be set by the wrong thread that doesn't actually have it # leading to an incorrect value. self.assertEqual([('thread-start-poll-result', None), ('thread-after-kill-and-wait', expected_errorcode), ('thread-after-second-wait', expected_errorcode)], results) def test_issue8780(self): # Ensure that stdout is inherited from the parent # if stdout=PIPE is not used code = ';'.join(( 'import subprocess, sys', 'retcode = subprocess.call(' "[sys.executable, '-c', 'print(\"Hello World!\")'])", 'assert retcode == 0')) output = subprocess.check_output([sys.executable, '-c', code]) self.assertTrue(output.startswith(b'Hello World!'), ascii(output)) def test_handles_closed_on_exception(self): # If CreateProcess exits with an error, ensure the # duplicate output handles are released ifhandle, ifname = tempfile.mkstemp() ofhandle, ofname = tempfile.mkstemp() efhandle, efname = tempfile.mkstemp() try: subprocess.Popen (["*"], stdin=ifhandle, stdout=ofhandle, stderr=efhandle) except OSError: os.close(ifhandle) os.remove(ifname) os.close(ofhandle) os.remove(ofname) os.close(efhandle) os.remove(efname) self.assertFalse(os.path.exists(ifname)) self.assertFalse(os.path.exists(ofname)) self.assertFalse(os.path.exists(efname)) def test_communicate_epipe(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) self.addCleanup(p.stdin.close) p.communicate(b"x" * 2**20) def test_repr(self): path_cmd = pathlib.Path("my-tool.py") pathlib_cls = path_cmd.__class__.__name__ cases = [ ("ls", True, 123, ""), ('a' * 100, True, 0, ""), (["ls"], False, None, ""), (["ls", '--my-opts', 'a' * 100], False, None, ""), (path_cmd, False, 7, f"") ] with unittest.mock.patch.object(subprocess.Popen, '_execute_child'): for cmd, shell, code, sx in cases: p = subprocess.Popen(cmd, shell=shell) p.returncode = code self.assertEqual(repr(p), sx) def test_communicate_epipe_only_stdin(self): # Issue 10963: communicate() should hide EPIPE p = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE) self.addCleanup(p.stdin.close) p.wait() p.communicate(b"x" * 2**20) @unittest.skipUnless(hasattr(signal, 'SIGUSR1'), "Requires signal.SIGUSR1") @unittest.skipUnless(hasattr(os, 'kill'), "Requires os.kill") @unittest.skipUnless(hasattr(os, 'getppid'), "Requires os.getppid") def test_communicate_eintr(self): # Issue #12493: communicate() should handle EINTR def handler(signum, frame): pass old_handler = signal.signal(signal.SIGUSR1, handler) self.addCleanup(signal.signal, signal.SIGUSR1, old_handler) args = [sys.executable, "-c", 'import os, signal;' 'os.kill(os.getppid(), signal.SIGUSR1)'] for stream in ('stdout', 'stderr'): kw = {stream: subprocess.PIPE} with subprocess.Popen(args, **kw) as process: # communicate() will be interrupted by SIGUSR1 process.communicate() # This test is Linux-ish specific for simplicity to at least have # some coverage. It is not a platform specific bug. 
    @unittest.skipUnless(os.path.isdir('/proc/%d/fd' % os.getpid()),
                         "Linux specific")
    def test_failed_child_execute_fd_leak(self):
        """Test for the fork() failure fd leak reported in issue16327."""
        fd_directory = '/proc/%d/fd' % os.getpid()
        fds_before_popen = os.listdir(fd_directory)
        with self.assertRaises(PopenTestException):
            PopenExecuteChildRaises(
                    ZERO_RETURN_CMD, stdin=subprocess.PIPE,
                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        # NOTE: This test doesn't verify that the real _execute_child
        # does not close the file descriptors itself on the way out
        # during an exception.  Code inspection has confirmed that.

        fds_after_exception = os.listdir(fd_directory)
        self.assertEqual(fds_before_popen, fds_after_exception)

    @unittest.skipIf(mswindows, "behavior currently not supported on Windows")
    def test_file_not_found_includes_filename(self):
        with self.assertRaises(FileNotFoundError) as c:
            subprocess.call(['/opt/nonexistent_binary', 'with', 'some', 'args'])
        self.assertEqual(c.exception.filename, '/opt/nonexistent_binary')

    @unittest.skipIf(mswindows, "behavior currently not supported on Windows")
    def test_file_not_found_with_bad_cwd(self):
        with self.assertRaises(FileNotFoundError) as c:
            subprocess.Popen(['exit', '0'], cwd='/some/nonexistent/directory')
        self.assertEqual(c.exception.filename, '/some/nonexistent/directory')

    def test_class_getitems(self):
        self.assertIsInstance(subprocess.Popen[bytes], types.GenericAlias)
        self.assertIsInstance(subprocess.CompletedProcess[str],
                              types.GenericAlias)


class RunFuncTestCase(BaseTestCase):
    def run_python(self, code, **kwargs):
        """Run Python code in a subprocess using subprocess.run"""
        argv = [sys.executable, "-c", code]
        return subprocess.run(argv, **kwargs)

    def test_returncode(self):
        # call() function with sequence argument
        cp = self.run_python("import sys; sys.exit(47)")
        self.assertEqual(cp.returncode, 47)
        with self.assertRaises(subprocess.CalledProcessError):
            cp.check_returncode()

    def test_check(self):
        with self.assertRaises(subprocess.CalledProcessError) as c:
            self.run_python("import sys; sys.exit(47)", check=True)
        self.assertEqual(c.exception.returncode, 47)

    def test_check_zero(self):
        # check_returncode shouldn't raise when returncode is zero
        cp = subprocess.run(ZERO_RETURN_CMD, check=True)
        self.assertEqual(cp.returncode, 0)

    def test_timeout(self):
        # run() function with timeout argument; we want to test that the child
        # process gets killed when the timeout expires.  If the child isn't
        # killed, this call will deadlock since subprocess.run waits for the
        # child.
with self.assertRaises(subprocess.TimeoutExpired): self.run_python("while True: pass", timeout=0.0001) def test_capture_stdout(self): # capture stdout with zero return code cp = self.run_python("print('BDFL')", stdout=subprocess.PIPE) self.assertIn(b'BDFL', cp.stdout) def test_capture_stderr(self): cp = self.run_python("import sys; sys.stderr.write('BDFL')", stderr=subprocess.PIPE) self.assertIn(b'BDFL', cp.stderr) def test_check_output_stdin_arg(self): # run() can be called with stdin set to a file tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", stdin=tf, stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_input_arg(self): # check_output() can be called with input set to a string cp = self.run_python( "import sys; sys.stdout.write(sys.stdin.read().upper())", input=b'pear', stdout=subprocess.PIPE) self.assertIn(b'PEAR', cp.stdout) def test_check_output_stdin_with_input_arg(self): # run() refuses to accept 'stdin' with 'input' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) tf.write(b'pear') tf.seek(0) with self.assertRaises(ValueError, msg="Expected ValueError when stdin and input args supplied.") as c: output = self.run_python("print('will not be run')", stdin=tf, input=b'hare') self.assertIn('stdin', c.exception.args[0]) self.assertIn('input', c.exception.args[0]) def test_check_output_timeout(self): with self.assertRaises(subprocess.TimeoutExpired) as c: cp = self.run_python(( "import sys, time\n" "sys.stdout.write('BDFL')\n" "sys.stdout.flush()\n" "time.sleep(3600)"), # Some heavily loaded buildbots (sparc Debian 3.x) require # this much time to start and print. timeout=3, stdout=subprocess.PIPE) self.assertEqual(c.exception.output, b'BDFL') # output is aliased to stdout self.assertEqual(c.exception.stdout, b'BDFL') def test_run_kwargs(self): newenv = os.environ.copy() newenv["FRUIT"] = "banana" cp = self.run_python(('import sys, os;' 'sys.exit(33 if os.getenv("FRUIT")=="banana" else 31)'), env=newenv) self.assertEqual(cp.returncode, 33) def test_run_with_pathlike_path(self): # bpo-31961: test run(pathlike_object) # the name of a command that can be run without # any arguments that exit fast prog = 'tree.com' if mswindows else 'ls' path = shutil.which(prog) if path is None: self.skipTest(f'{prog} required for this test') path = FakePath(path) res = subprocess.run(path, stdout=subprocess.DEVNULL) self.assertEqual(res.returncode, 0) with self.assertRaises(TypeError): subprocess.run(path, stdout=subprocess.DEVNULL, shell=True) def test_run_with_bytes_path_and_arguments(self): # bpo-31961: test run([bytes_object, b'additional arguments']) path = os.fsencode(sys.executable) args = [path, '-c', b'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_run_with_pathlike_path_and_arguments(self): # bpo-31961: test run([pathlike_object, 'additional arguments']) path = FakePath(sys.executable) args = [path, '-c', 'import sys; sys.exit(57)'] res = subprocess.run(args) self.assertEqual(res.returncode, 57) def test_capture_output(self): cp = self.run_python(("import sys;" "sys.stdout.write('BDFL'); " "sys.stderr.write('FLUFL')"), capture_output=True) self.assertIn(b'BDFL', cp.stdout) self.assertIn(b'FLUFL', cp.stderr) def test_stdout_with_capture_output_arg(self): # run() refuses to accept 'stdout' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with 
self.assertRaises(ValueError, msg=("Expected ValueError when stdout and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stdout=tf) self.assertIn('stdout', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) def test_stderr_with_capture_output_arg(self): # run() refuses to accept 'stderr' with 'capture_output' tf = tempfile.TemporaryFile() self.addCleanup(tf.close) with self.assertRaises(ValueError, msg=("Expected ValueError when stderr and capture_output " "args supplied.")) as c: output = self.run_python("print('will not be run')", capture_output=True, stderr=tf) self.assertIn('stderr', c.exception.args[0]) self.assertIn('capture_output', c.exception.args[0]) # This test _might_ wind up a bit fragile on loaded build+test machines # as it depends on the timing with wide enough margins for normal situations # but does assert that it happened "soon enough" to believe the right thing # happened. @unittest.skipIf(mswindows, "requires posix like 'sleep' shell command") def test_run_with_shell_timeout_and_capture_output(self): """Output capturing after a timeout mustn't hang forever on open filehandles.""" before_secs = time.monotonic() try: subprocess.run('sleep 3', shell=True, timeout=0.1, capture_output=True) # New session unspecified. except subprocess.TimeoutExpired as exc: after_secs = time.monotonic() stacks = traceback.format_exc() # assertRaises doesn't give this. else: self.fail("TimeoutExpired not raised.") self.assertLess(after_secs - before_secs, 1.5, msg="TimeoutExpired was delayed! Bad traceback:\n```\n" f"{stacks}```") def _get_test_grp_name(): for name_group in ('staff', 'nogroup', 'grp', 'nobody', 'nfsnobody'): if grp: try: grp.getgrnam(name_group) except KeyError: continue return name_group else: raise unittest.SkipTest('No identified group name to use for this test on this platform.') @unittest.skipIf(mswindows, "POSIX specific tests") class POSIXProcessTestCase(BaseTestCase): def setUp(self): super().setUp() self._nonexistent_dir = "/_this/pa.th/does/not/exist" def _get_chdir_exception(self): try: os.chdir(self._nonexistent_dir) except OSError as e: # This avoids hard coding the errno value or the OS perror() # string and instead capture the exception that we want to see # below for comparison. desired_exception = e else: self.fail("chdir to nonexistent directory %s succeeded." % self._nonexistent_dir) return desired_exception def test_exception_cwd(self): """Test error in the child raised in the parent for a bad cwd.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], cwd=self._nonexistent_dir) except OSError as e: # Test that the child process chdir failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_executable(self): """Test error in the child raised in the parent for a bad executable.""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([sys.executable, "-c", ""], executable=self._nonexistent_dir) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. 
self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) def test_exception_bad_args_0(self): """Test error in the child raised in the parent for a bad args[0].""" desired_exception = self._get_chdir_exception() try: p = subprocess.Popen([self._nonexistent_dir, "-c", ""]) except OSError as e: # Test that the child process exec failure actually makes # it up to the parent process as the correct exception. self.assertEqual(desired_exception.errno, e.errno) self.assertEqual(desired_exception.strerror, e.strerror) self.assertEqual(desired_exception.filename, e.filename) else: self.fail("Expected OSError: %s" % desired_exception) # We mock the __del__ method for Popen in the next two tests # because it does cleanup based on the pid returned by fork_exec # along with issuing a resource warning if it still exists. Since # we don't actually spawn a process in these tests we can forego # the destructor. An alternative would be to set _child_created to # False before the destructor is called but there is no easy way # to do that class PopenNoDestructor(subprocess.Popen): def __del__(self): pass @mock.patch("subprocess._posixsubprocess.fork_exec") def test_exception_errpipe_normal(self, fork_exec): """Test error passing done through errpipe_write in the good case""" def proper_error(*args): errpipe_write = args[13] # Write the hex for the error code EISDIR: 'is a directory' err_code = '{:x}'.format(errno.EISDIR).encode() os.write(errpipe_write, b"OSError:" + err_code + b":") return 0 fork_exec.side_effect = proper_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(IsADirectoryError): self.PopenNoDestructor(["non_existent_command"]) @mock.patch("subprocess._posixsubprocess.fork_exec") def test_exception_errpipe_bad_data(self, fork_exec): """Test error passing done through errpipe_write where its not in the expected format""" error_data = b"\xFF\x00\xDE\xAD" def bad_error(*args): errpipe_write = args[13] # Anything can be in the pipe, no assumptions should # be made about its encoding, so we'll write some # arbitrary hex bytes to test it out os.write(errpipe_write, error_data) return 0 fork_exec.side_effect = bad_error with mock.patch("subprocess.os.waitpid", side_effect=ChildProcessError): with self.assertRaises(subprocess.SubprocessError) as e: self.PopenNoDestructor(["non_existent_command"]) self.assertIn(repr(error_data), str(e.exception)) @unittest.skipIf(not os.path.exists('/proc/self/status'), "need /proc/self/status") def test_restore_signals(self): # Blindly assume that cat exists on systems with /proc/self/status... default_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=False) for line in default_proc_status.splitlines(): if line.startswith(b'SigIgn'): default_sig_ign_mask = line break else: self.skipTest("SigIgn not found in /proc/self/status.") restored_proc_status = subprocess.check_output( ['cat', '/proc/self/status'], restore_signals=True) for line in restored_proc_status.splitlines(): if line.startswith(b'SigIgn'): restored_sig_ign_mask = line break self.assertNotEqual(default_sig_ign_mask, restored_sig_ign_mask, msg="restore_signals=True should've unblocked " "SIGPIPE and friends.") def test_start_new_session(self): # For code coverage of calling setsid(). 
We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getsid(0))"], start_new_session=True) except OSError as e: if e.errno != errno.EPERM: raise else: parent_sid = os.getsid(0) child_sid = int(output) self.assertNotEqual(parent_sid, child_sid) @unittest.skipUnless(hasattr(os, 'setreuid'), 'no setreuid on platform') def test_user(self): # For code coverage of the user parameter. We don't care if we get an # EPERM error from it depending on the test execution environment, that # still indicates that it was called. uid = os.geteuid() test_users = [65534 if uid != 65534 else 65533, uid] name_uid = "nobody" if sys.platform != 'darwin' else "unknown" if pwd is not None: try: pwd.getpwnam(name_uid) test_users.append(name_uid) except KeyError: # unknown user name name_uid = None for user in test_users: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(user=user, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getuid())"], user=user, close_fds=close_fds) except PermissionError: # (EACCES, EPERM) pass except OSError as e: if e.errno not in (errno.EACCES, errno.EPERM): raise else: if isinstance(user, str): user_uid = pwd.getpwnam(user).pw_uid else: user_uid = user child_user = int(output) self.assertEqual(child_user, user_uid) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, user=2**64) if pwd is None and name_uid is not None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=name_uid) @unittest.skipIf(hasattr(os, 'setreuid'), 'setreuid() available on platform') def test_user_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, user=65535) @unittest.skipUnless(hasattr(os, 'setregid'), 'no setregid() on platform') def test_group(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() if grp is not None: group_list.append(name_group) for group in group_list + [gid]: # posix_spawn() may be used with close_fds=False for close_fds in (False, True): with self.subTest(group=group, close_fds=close_fds): try: output = subprocess.check_output( [sys.executable, "-c", "import os; print(os.getgid())"], group=group, close_fds=close_fds) except PermissionError: # (EACCES, EPERM) pass else: if isinstance(group, str): group_gid = grp.getgrnam(group).gr_gid else: group_gid = group child_group = int(output) self.assertEqual(child_group, group_gid) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=-1) with self.assertRaises(OverflowError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, group=2**64) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=name_group) @unittest.skipIf(hasattr(os, 'setregid'), 'setregid() available on platform') def test_group_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, group=65535) @unittest.skipUnless(hasattr(os, 'setgroups'), 'no setgroups() on platform') def test_extra_groups(self): gid = os.getegid() group_list = [65534 if gid != 65534 else 65533] name_group = _get_test_grp_name() 
perm_error = False if grp is not None: group_list.append(name_group) try: output = subprocess.check_output( [sys.executable, "-c", "import os, sys, json; json.dump(os.getgroups(), sys.stdout)"], extra_groups=group_list) except OSError as ex: if ex.errno != errno.EPERM: raise perm_error = True else: parent_groups = os.getgroups() child_groups = json.loads(output) if grp is not None: desired_gids = [grp.getgrnam(g).gr_gid if isinstance(g, str) else g for g in group_list] else: desired_gids = group_list if perm_error: self.assertEqual(set(child_groups), set(parent_groups)) else: self.assertEqual(set(desired_gids), set(child_groups)) # make sure we bomb on negative values with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[-1]) with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, cwd=os.curdir, env=os.environ, extra_groups=[2**64]) if grp is None: with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[name_group]) @unittest.skipIf(hasattr(os, 'setgroups'), 'setgroups() available on platform') def test_extra_groups_error(self): with self.assertRaises(ValueError): subprocess.check_call(ZERO_RETURN_CMD, extra_groups=[]) @unittest.skipIf(mswindows or not hasattr(os, 'umask'), 'POSIX umask() is not available.') def test_umask(self): tmpdir = None try: tmpdir = tempfile.mkdtemp() name = os.path.join(tmpdir, "beans") # We set an unusual umask in the child so as a unique mode # for us to test the child's touched file for. subprocess.check_call( [sys.executable, "-c", f"open({name!r}, 'w').close()"], umask=0o053) # Ignore execute permissions entirely in our test, # filesystems could be mounted to ignore or force that. st_mode = os.stat(name).st_mode & 0o666 expected_mode = 0o624 self.assertEqual(expected_mode, st_mode, msg=f'{oct(expected_mode)} != {oct(st_mode)}') finally: if tmpdir is not None: shutil.rmtree(tmpdir) def test_run_abort(self): # returncode handles signal termination with support.SuppressCrashReport(): p = subprocess.Popen([sys.executable, "-c", 'import os; os.abort()']) p.wait() self.assertEqual(-p.returncode, signal.SIGABRT) def test_CalledProcessError_str_signal(self): err = subprocess.CalledProcessError(-int(signal.SIGABRT), "fake cmd") error_string = str(err) # We're relying on the repr() of the signal.Signals intenum to provide # the word signal, the signal name and the numeric value. self.assertIn("signal", error_string.lower()) # We're not being specific about the signal name as some signals have # multiple names and which name is revealed can vary. self.assertIn("SIG", error_string) self.assertIn(str(signal.SIGABRT), error_string) def test_CalledProcessError_str_unknown_signal(self): err = subprocess.CalledProcessError(-9876543, "fake cmd") error_string = str(err) self.assertIn("unknown signal 9876543.", error_string) def test_CalledProcessError_str_non_zero(self): err = subprocess.CalledProcessError(2, "fake cmd") error_string = str(err) self.assertIn("non-zero exit status 2.", error_string) def test_preexec(self): # DISCLAIMER: Setting environment variables is *not* a good use # of a preexec_fn. This is merely a test. 
        p = subprocess.Popen([sys.executable, "-c",
                              'import sys,os;'
                              'sys.stdout.write(os.getenv("FRUIT"))'],
                             stdout=subprocess.PIPE,
                             preexec_fn=lambda: os.putenv("FRUIT", "apple"))
        with p:
            self.assertEqual(p.stdout.read(), b"apple")

    def test_preexec_exception(self):
        def raise_it():
            raise ValueError("What if two swallows carried a coconut?")
        try:
            p = subprocess.Popen([sys.executable, "-c", ""],
                                 preexec_fn=raise_it)
        except subprocess.SubprocessError as e:
            self.assertTrue(
                    subprocess._posixsubprocess,
                    "Expected a ValueError from the preexec_fn")
        except ValueError as e:
            self.assertIn("coconut", e.args[0])
        else:
            self.fail("Exception raised by preexec_fn did not make it "
                      "to the parent process.")

    class _TestExecuteChildPopen(subprocess.Popen):
        """Used to test behavior at the end of _execute_child."""
        def __init__(self, testcase, *args, **kwargs):
            self._testcase = testcase
            subprocess.Popen.__init__(self, *args, **kwargs)

        def _execute_child(self, *args, **kwargs):
            try:
                subprocess.Popen._execute_child(self, *args, **kwargs)
            finally:
                # Open a bunch of file descriptors and verify that
                # none of them are the same as the ones the Popen
                # instance is using for stdin/stdout/stderr.
                devzero_fds = [os.open("/dev/zero", os.O_RDONLY)
                               for _ in range(8)]
                try:
                    for fd in devzero_fds:
                        self._testcase.assertNotIn(
                                fd, (self.stdin.fileno(),
                                     self.stdout.fileno(),
                                     self.stderr.fileno()),
                                msg="At least one fd was closed early.")
                finally:
                    for fd in devzero_fds:
                        os.close(fd)

    @unittest.skipIf(not os.path.exists("/dev/zero"), "/dev/zero required.")
    def test_preexec_errpipe_does_not_double_close_pipes(self):
        """Issue16140: Don't double close pipes on preexec error."""
        def raise_it():
            raise subprocess.SubprocessError(
                    "force the _execute_child() errpipe_data path.")

        with self.assertRaises(subprocess.SubprocessError):
            self._TestExecuteChildPopen(
                    self, ZERO_RETURN_CMD,
                    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE, preexec_fn=raise_it)

    def test_preexec_gc_module_failure(self):
        # This tests the code that disables garbage collection if the child
        # process will execute any Python.
def raise_runtime_error(): raise RuntimeError("this shouldn't escape") enabled = gc.isenabled() orig_gc_disable = gc.disable orig_gc_isenabled = gc.isenabled try: gc.disable() self.assertFalse(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertFalse(gc.isenabled(), "Popen enabled gc when it shouldn't.") gc.enable() self.assertTrue(gc.isenabled()) subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) self.assertTrue(gc.isenabled(), "Popen left gc disabled.") gc.disable = raise_runtime_error self.assertRaises(RuntimeError, subprocess.Popen, [sys.executable, '-c', ''], preexec_fn=lambda: None) del gc.isenabled # force an AttributeError self.assertRaises(AttributeError, subprocess.Popen, [sys.executable, '-c', ''], preexec_fn=lambda: None) finally: gc.disable = orig_gc_disable gc.isenabled = orig_gc_isenabled if not enabled: gc.disable() @unittest.skipIf( sys.platform == 'darwin', 'setrlimit() seems to fail on OS X') def test_preexec_fork_failure(self): # The internal code did not preserve the previous exception when # re-enabling garbage collection try: from resource import getrlimit, setrlimit, RLIMIT_NPROC except ImportError as err: self.skipTest(err) # RLIMIT_NPROC is specific to Linux and BSD limits = getrlimit(RLIMIT_NPROC) [_, hard] = limits setrlimit(RLIMIT_NPROC, (0, hard)) self.addCleanup(setrlimit, RLIMIT_NPROC, limits) try: subprocess.call([sys.executable, '-c', ''], preexec_fn=lambda: None) except BlockingIOError: # Forking should raise EAGAIN, translated to BlockingIOError pass else: self.skipTest('RLIMIT_NPROC had no effect; probably superuser') def test_args_string(self): # args is a string fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) p = subprocess.Popen(fname) p.wait() os.remove(fname) self.assertEqual(p.returncode, 47) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], startupinfo=47) self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], creationflags=47) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen(["echo $FRUIT"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "apple" p = subprocess.Popen("echo $FRUIT", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertEqual(p.stdout.read().strip(b" \t\r\n\f"), b"apple") def test_call_string(self): # call() function with string argument on UNIX fd, fname = tempfile.mkstemp() # reopen in text mode with open(fd, "w", errors="surrogateescape") as fobj: fobj.write("#!%s\n" % support.unix_shell) fobj.write("exec '%s' -c 'import sys; sys.exit(47)'\n" % sys.executable) os.chmod(fname, 0o700) rc = subprocess.call(fname) os.remove(fname) self.assertEqual(rc, 47) def test_specific_shell(self): # Issue #9265: Incorrect name passed as arg[0]. 
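# --- Illustrative aside (not part of the vendored CPython test file) --------
# The shell tests above rely on the shell expanding $FRUIT out of the
# environment supplied via env=. A standalone sketch of that mechanism,
# assuming a POSIX shell; with shell=True on Windows the command would go
# through cmd.exe and use %FRUIT% syntax instead.
import os
import subprocess

env = dict(os.environ, FRUIT="apple")
completed = subprocess.run(
    "echo $FRUIT",            # a single string is handed to /bin/sh -c
    shell=True,
    env=env,
    capture_output=True,
)
assert completed.stdout.strip() == b"apple"
# -----------------------------------------------------------------------------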
shells = [] for prefix in ['/bin', '/usr/bin/', '/usr/local/bin']: for name in ['bash', 'ksh']: sh = os.path.join(prefix, name) if os.path.isfile(sh): shells.append(sh) if not shells: # Will probably work for any shell but csh. self.skipTest("bash or ksh required for this test") sh = '/bin/sh' if os.path.isfile(sh) and not os.path.islink(sh): # Test will fail if /bin/sh is a symlink to csh. shells.append(sh) for sh in shells: p = subprocess.Popen("echo $0", executable=sh, shell=True, stdout=subprocess.PIPE) with p: self.assertEqual(p.stdout.read().strip(), bytes(sh, 'ascii')) def _kill_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. # Also set the SIGINT handler to the default to make sure it's not # being ignored (some tests rely on that.) old_handler = signal.signal(signal.SIGINT, signal.default_int_handler) try: p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) finally: signal.signal(signal.SIGINT, old_handler) # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) return p @unittest.skipIf(sys.platform.startswith(('netbsd', 'openbsd')), "Due to known OS bug (issue #16762)") def _kill_dead_process(self, method, *args): # Do not inherit file handles from the parent. # It should fix failures on some platforms. p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() """], close_fds=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Wait for the interpreter to be completely initialized before # sending any signal. 
p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) p.communicate() def test_send_signal(self): p = self._kill_process('send_signal', signal.SIGINT) _, stderr = p.communicate() self.assertIn(b'KeyboardInterrupt', stderr) self.assertNotEqual(p.wait(), 0) def test_kill(self): p = self._kill_process('kill') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGKILL) def test_terminate(self): p = self._kill_process('terminate') _, stderr = p.communicate() self.assertEqual(stderr, b'') self.assertEqual(p.wait(), -signal.SIGTERM) def test_send_signal_dead(self): # Sending a signal to a dead process self._kill_dead_process('send_signal', signal.SIGINT) def test_kill_dead(self): # Killing a dead process self._kill_dead_process('kill') def test_terminate_dead(self): # Terminating a dead process self._kill_dead_process('terminate') def _save_fds(self, save_fds): fds = [] for fd in save_fds: inheritable = os.get_inheritable(fd) saved = os.dup(fd) fds.append((fd, saved, inheritable)) return fds def _restore_fds(self, fds): for fd, saved, inheritable in fds: os.dup2(saved, fd, inheritable=inheritable) os.close(saved) def check_close_std_fds(self, fds): # Issue #9905: test that subprocess pipes still work properly with # some standard fds closed stdin = 0 saved_fds = self._save_fds(fds) for fd, saved, inheritable in saved_fds: if fd == 0: stdin = saved break try: for fd in fds: os.close(fd) out, err = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple");' 'sys.stdout.flush();' 'sys.stderr.write("orange")'], stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate() self.assertEqual(out, b'apple') self.assertEqual(err, b'orange') finally: self._restore_fds(saved_fds) def test_close_fd_0(self): self.check_close_std_fds([0]) def test_close_fd_1(self): self.check_close_std_fds([1]) def test_close_fd_2(self): self.check_close_std_fds([2]) def test_close_fds_0_1(self): self.check_close_std_fds([0, 1]) def test_close_fds_0_2(self): self.check_close_std_fds([0, 2]) def test_close_fds_1_2(self): self.check_close_std_fds([1, 2]) def test_close_fds_0_1_2(self): # Issue #10806: test that subprocess pipes still work properly with # all standard fds closed. self.check_close_std_fds([0, 1, 2]) def test_small_errpipe_write_fd(self): """Issue #15798: Popen should work when stdio fds are available.""" new_stdin = os.dup(0) new_stdout = os.dup(1) try: os.close(0) os.close(1) # Side test: if errpipe_write fails to have its CLOEXEC # flag set this should cause the parent to think the exec # failed. Extremely unlikely: everyone supports CLOEXEC. 
subprocess.Popen([ sys.executable, "-c", "print('AssertionError:0:CLOEXEC failure.')"]).wait() finally: # Restore original stdin and stdout os.dup2(new_stdin, 0) os.dup2(new_stdout, 1) os.close(new_stdin) os.close(new_stdout) def test_remapping_std_fds(self): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] try: temp_fds = [fd for fd, fname in temps] # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # write some data to what will become stdin, and rewind os.write(temp_fds[1], b"STDIN") os.lseek(temp_fds[1], 0, 0) # move the standard file descriptors out of the way saved_fds = self._save_fds(range(3)) try: # duplicate the file objects over the standard fd's for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # now use those files in the "wrong" order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=temp_fds[1], stdout=temp_fds[2], stderr=temp_fds[0]) p.wait() finally: self._restore_fds(saved_fds) for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(temp_fds[2], 1024) err = os.read(temp_fds[0], 1024).strip() self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) def check_swap_fds(self, stdin_no, stdout_no, stderr_no): # open up some temporary files temps = [tempfile.mkstemp() for i in range(3)] temp_fds = [fd for fd, fname in temps] try: # unlink the files -- we won't need to reopen them for fd, fname in temps: os.unlink(fname) # save a copy of the standard file descriptors saved_fds = self._save_fds(range(3)) try: # duplicate the temp files over the standard fd's 0, 1, 2 for fd, temp_fd in enumerate(temp_fds): os.dup2(temp_fd, fd) # write some data to what will become stdin, and rewind os.write(stdin_no, b"STDIN") os.lseek(stdin_no, 0, 0) # now use those files in the given order, so that subprocess # has to rearrange them in the child p = subprocess.Popen([sys.executable, "-c", 'import sys; got = sys.stdin.read();' 'sys.stdout.write("got %s"%got); sys.stderr.write("err")'], stdin=stdin_no, stdout=stdout_no, stderr=stderr_no) p.wait() for fd in temp_fds: os.lseek(fd, 0, 0) out = os.read(stdout_no, 1024) err = os.read(stderr_no, 1024).strip() finally: self._restore_fds(saved_fds) self.assertEqual(out, b"got STDIN") self.assertEqual(err, b"err") finally: for fd in temp_fds: os.close(fd) # When duping fds, if there arises a situation where one of the fds is # either 0, 1 or 2, it is possible that it is overwritten (#12607). # This tests all combinations of this. 
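# --- Illustrative aside (not part of the vendored CPython test file) --------
# The _save_fds()/_restore_fds() helpers above implement a common POSIX idiom:
# duplicate a low descriptor out of the way with os.dup(), point the low fd
# somewhere else with os.dup2(), and later dup2() the saved copy back. This is
# a minimal sketch of the same idiom, assuming a POSIX platform: it redirects
# fd 1 (stdout) into a temporary file and then restores it.
import os
import tempfile

saved_stdout = os.dup(1)                  # keep a handle on the real stdout
with tempfile.TemporaryFile() as tmp:
    os.dup2(tmp.fileno(), 1)              # fd 1 now refers to the temp file
    try:
        os.write(1, b"redirected\n")      # lands in the temp file
    finally:
        os.dup2(saved_stdout, 1)          # put the original stdout back
        os.close(saved_stdout)
    tmp.seek(0)
    assert tmp.read() == b"redirected\n"
# -----------------------------------------------------------------------------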
def test_swap_fds(self): self.check_swap_fds(0, 1, 2) self.check_swap_fds(0, 2, 1) self.check_swap_fds(1, 0, 2) self.check_swap_fds(1, 2, 0) self.check_swap_fds(2, 0, 1) self.check_swap_fds(2, 1, 0) def _check_swap_std_fds_with_one_closed(self, from_fds, to_fds): saved_fds = self._save_fds(range(3)) try: for from_fd in from_fds: with tempfile.TemporaryFile() as f: os.dup2(f.fileno(), from_fd) fd_to_close = (set(range(3)) - set(from_fds)).pop() os.close(fd_to_close) arg_names = ['stdin', 'stdout', 'stderr'] kwargs = {} for from_fd, to_fd in zip(from_fds, to_fds): kwargs[arg_names[to_fd]] = from_fd code = textwrap.dedent(r''' import os, sys skipped_fd = int(sys.argv[1]) for fd in range(3): if fd != skipped_fd: os.write(fd, str(fd).encode('ascii')) ''') skipped_fd = (set(range(3)) - set(to_fds)).pop() rc = subprocess.call([sys.executable, '-c', code, str(skipped_fd)], **kwargs) self.assertEqual(rc, 0) for from_fd, to_fd in zip(from_fds, to_fds): os.lseek(from_fd, 0, os.SEEK_SET) read_bytes = os.read(from_fd, 1024) read_fds = list(map(int, read_bytes.decode('ascii'))) msg = textwrap.dedent(f""" When testing {from_fds} to {to_fds} redirection, parent descriptor {from_fd} got redirected to descriptor(s) {read_fds} instead of descriptor {to_fd}. """) self.assertEqual([to_fd], read_fds, msg) finally: self._restore_fds(saved_fds) # Check that subprocess can remap std fds correctly even # if one of them is closed (#32844). def test_swap_std_fds_with_one_closed(self): for from_fds in itertools.combinations(range(3), 2): for to_fds in itertools.permutations(range(3), 2): self._check_swap_std_fds_with_one_closed(from_fds, to_fds) def test_surrogates_error_message(self): def prepare(): raise ValueError("surrogate:\uDCff") try: subprocess.call( ZERO_RETURN_CMD, preexec_fn=prepare) except ValueError as err: # Pure Python implementations keeps the message self.assertIsNone(subprocess._posixsubprocess) self.assertEqual(str(err), "surrogate:\uDCff") except subprocess.SubprocessError as err: # _posixsubprocess uses a default message self.assertIsNotNone(subprocess._posixsubprocess) self.assertEqual(str(err), "Exception occurred in preexec_fn.") else: self.fail("Expected ValueError or subprocess.SubprocessError") def test_undecodable_env(self): for key, value in (('test', 'abc\uDCFF'), ('test\uDCFF', '42')): encoded_value = value.encode("ascii", "surrogateescape") # test str with surrogates script = "import os; print(ascii(os.getenv(%s)))" % repr(key) env = os.environ.copy() env[key] = value # Use C locale to get ASCII for the locale encoding to force # surrogate-escaping of \xFF in the child process env['LC_ALL'] = 'C' decoded_value = value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(decoded_value)) # test bytes key = key.encode("ascii", "surrogateescape") script = "import os; print(ascii(os.getenvb(%s)))" % repr(key) env = os.environ.copy() env[key] = encoded_value stdout = subprocess.check_output( [sys.executable, "-c", script], env=env) stdout = stdout.rstrip(b'\n\r') self.assertEqual(stdout.decode('ascii'), ascii(encoded_value)) def test_bytes_program(self): abs_program = os.fsencode(ZERO_RETURN_CMD[0]) args = list(ZERO_RETURN_CMD[1:]) path, program = os.path.split(ZERO_RETURN_CMD[0]) program = os.fsencode(program) # absolute bytes path exitcode = subprocess.call([abs_program]+args) self.assertEqual(exitcode, 0) # absolute bytes path as a string cmd = b"'%s' %s" % (abs_program, " 
".join(args).encode("utf-8")) exitcode = subprocess.call(cmd, shell=True) self.assertEqual(exitcode, 0) # bytes program, unicode PATH env = os.environ.copy() env["PATH"] = path exitcode = subprocess.call([program]+args, env=env) self.assertEqual(exitcode, 0) # bytes program, bytes PATH envb = os.environb.copy() envb[b"PATH"] = os.fsencode(path) exitcode = subprocess.call([program]+args, env=envb) self.assertEqual(exitcode, 0) def test_pipe_cloexec(self): sleeper = support.findfile("input_reader.py", subdir="subprocessdata") fd_status = support.findfile("fd_status.py", subdir="subprocessdata") p1 = subprocess.Popen([sys.executable, sleeper], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False) self.addCleanup(p1.communicate, b'') p2 = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, error = p2.communicate() result_fds = set(map(int, output.split(b','))) unwanted_fds = set([p1.stdin.fileno(), p1.stdout.fileno(), p1.stderr.fileno()]) self.assertFalse(result_fds & unwanted_fds, "Expected no fds from %r to be open in child, " "found %r" % (unwanted_fds, result_fds & unwanted_fds)) def test_pipe_cloexec_real_tools(self): qcat = support.findfile("qcat.py", subdir="subprocessdata") qgrep = support.findfile("qgrep.py", subdir="subprocessdata") subdata = b'zxcvbn' data = subdata * 4 + b'\n' p1 = subprocess.Popen([sys.executable, qcat], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds=False) p2 = subprocess.Popen([sys.executable, qgrep, subdata], stdin=p1.stdout, stdout=subprocess.PIPE, close_fds=False) self.addCleanup(p1.wait) self.addCleanup(p2.wait) def kill_p1(): try: p1.terminate() except ProcessLookupError: pass def kill_p2(): try: p2.terminate() except ProcessLookupError: pass self.addCleanup(kill_p1) self.addCleanup(kill_p2) p1.stdin.write(data) p1.stdin.close() readfiles, ignored1, ignored2 = select.select([p2.stdout], [], [], 10) self.assertTrue(readfiles, "The child hung") self.assertEqual(p2.stdout.read(), data) p1.stdout.close() p2.stdout.close() def test_close_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) open_fds = set(fds) # add a bunch more fds for _ in range(9): fd = os.open(os.devnull, os.O_RDONLY) self.addCleanup(os.close, fd) open_fds.add(fd) for fd in open_fds: os.set_inheritable(fd, True) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=False) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertEqual(remaining_fds & open_fds, open_fds, "Some fds were closed") p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse(remaining_fds & open_fds, "Some fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") # Keep some of the fd's we opened open in the subprocess. # This tests _posixsubprocess.c's proper handling of fds_to_keep. 
fds_to_keep = set(open_fds.pop() for _ in range(8)) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=fds_to_keep) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertFalse((remaining_fds - fds_to_keep) & open_fds, "Some fds not in pass_fds were left open") self.assertIn(1, remaining_fds, "Subprocess failed") @unittest.skipIf(sys.platform.startswith("freebsd") and os.stat("/dev").st_dev == os.stat("/dev/fd").st_dev, "Requires fdescfs mounted on /dev/fd on FreeBSD.") def test_close_fds_when_max_fd_is_lowered(self): """Confirm that issue21618 is fixed (may fail under valgrind).""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # This launches the meat of the test in a child process to # avoid messing with the larger unittest processes maximum # number of file descriptors. # This process launches: # +--> Process that lowers its RLIMIT_NOFILE aftr setting up # a bunch of high open fds above the new lower rlimit. # Those are reported via stdout before launching a new # process with close_fds=False to run the actual test: # +--> The TEST: This one launches a fd_status.py # subprocess with close_fds=True so we can find out if # any of the fds above the lowered rlimit are still open. p = subprocess.Popen([sys.executable, '-c', textwrap.dedent( ''' import os, resource, subprocess, sys, textwrap open_fds = set() # Add a bunch more fds to pass down. for _ in range(40): fd = os.open(os.devnull, os.O_RDONLY) open_fds.add(fd) # Leave a two pairs of low ones available for use by the # internal child error pipe and the stdout pipe. # We also leave 10 more open as some Python buildbots run into # "too many open files" errors during the test if we do not. for fd in sorted(open_fds)[:14]: os.close(fd) open_fds.remove(fd) for fd in open_fds: #self.addCleanup(os.close, fd) os.set_inheritable(fd, True) max_fd_open = max(open_fds) # Communicate the open_fds to the parent unittest.TestCase process. print(','.join(map(str, sorted(open_fds)))) sys.stdout.flush() rlim_cur, rlim_max = resource.getrlimit(resource.RLIMIT_NOFILE) try: # 29 is lower than the highest fds we are leaving open. resource.setrlimit(resource.RLIMIT_NOFILE, (29, rlim_max)) # Launch a new Python interpreter with our low fd rlim_cur that # inherits open fds above that limit. It then uses subprocess # with close_fds=True to get a report of open fds in the child. # An explicit list of fds to check is passed to fd_status.py as # letting fd_status rely on its default logic would miss the # fds above rlim_cur as it normally only checks up to that limit. 
subprocess.Popen( [sys.executable, '-c', textwrap.dedent(""" import subprocess, sys subprocess.Popen([sys.executable, %r] + [str(x) for x in range({max_fd})], close_fds=True).wait() """.format(max_fd=max_fd_open+1))], close_fds=False).wait() finally: resource.setrlimit(resource.RLIMIT_NOFILE, (rlim_cur, rlim_max)) ''' % fd_status)], stdout=subprocess.PIPE) output, unused_stderr = p.communicate() output_lines = output.splitlines() self.assertEqual(len(output_lines), 2, msg="expected exactly two lines of output:\n%r" % output) opened_fds = set(map(int, output_lines[0].strip().split(b','))) remaining_fds = set(map(int, output_lines[1].strip().split(b','))) self.assertFalse(remaining_fds & opened_fds, msg="Some fds were left open.") # Mac OS X Tiger (10.4) has a kernel bug: sometimes, the file # descriptor of a pipe closed in the parent process is valid in the # child process according to fstat(), but the mode of the file # descriptor is invalid, and read or write raise an error. @support.requires_mac_ver(10, 5) def test_pass_fds(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") open_fds = set() for x in range(5): fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) os.set_inheritable(fds[0], True) os.set_inheritable(fds[1], True) open_fds.update(fds) for fd in open_fds: p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, pass_fds=(fd, )) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) to_be_closed = open_fds - {fd} self.assertIn(fd, remaining_fds, "fd to be passed not passed") self.assertFalse(remaining_fds & to_be_closed, "fd to be closed passed") # pass_fds overrides close_fds with a warning. with self.assertWarns(RuntimeWarning) as context: self.assertFalse(subprocess.call( ZERO_RETURN_CMD, close_fds=False, pass_fds=(fd, ))) self.assertIn('overriding close_fds', str(context.warning)) def test_pass_fds_inheritable(self): script = support.findfile("fd_status.py", subdir="subprocessdata") inheritable, non_inheritable = os.pipe() self.addCleanup(os.close, inheritable) self.addCleanup(os.close, non_inheritable) os.set_inheritable(inheritable, True) os.set_inheritable(non_inheritable, False) pass_fds = (inheritable, non_inheritable) args = [sys.executable, script] args += list(map(str, pass_fds)) p = subprocess.Popen(args, stdout=subprocess.PIPE, close_fds=True, pass_fds=pass_fds) output, ignored = p.communicate() fds = set(map(int, output.split(b','))) # the inheritable file descriptor must be inherited, so its inheritable # flag must be set in the child process after fork() and before exec() self.assertEqual(fds, set(pass_fds), "output=%a" % output) # inheritable flag must not be changed in the parent process self.assertEqual(os.get_inheritable(inheritable), True) self.assertEqual(os.get_inheritable(non_inheritable), False) # bpo-32270: Ensure that descriptors specified in pass_fds # are inherited even if they are used in redirections. # Contributed by @izbyshev. 
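# --- Illustrative aside (not part of the vendored CPython test file) --------
# pass_fds is the mechanism the surrounding tests exercise: with
# close_fds=True (the default), only descriptors named in pass_fds survive
# into the child, and Popen keeps them usable across exec(). A minimal
# POSIX-only sketch: the parent creates a pipe, hands the write end to the
# child via pass_fds, and reads back what the child wrote.
import os
import subprocess
import sys

read_fd, write_fd = os.pipe()
try:
    child = subprocess.Popen(
        [sys.executable, "-c",
         "import os, sys; fd = int(sys.argv[1]); os.write(fd, b'hello'); os.close(fd)",
         str(write_fd)],
        pass_fds=(write_fd,),     # keep exactly this fd open in the child
        close_fds=True,           # every other non-stdio descriptor is closed
    )
    os.close(write_fd)            # the parent's copy is no longer needed
    data = os.read(read_fd, 1024)
    child.wait()
    assert data == b"hello"
finally:
    os.close(read_fd)
# -----------------------------------------------------------------------------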
def test_pass_fds_redirected(self): """Regression test for https://bugs.python.org/issue32270.""" fd_status = support.findfile("fd_status.py", subdir="subprocessdata") pass_fds = [] for _ in range(2): fd = os.open(os.devnull, os.O_RDWR) self.addCleanup(os.close, fd) pass_fds.append(fd) stdout_r, stdout_w = os.pipe() self.addCleanup(os.close, stdout_r) self.addCleanup(os.close, stdout_w) pass_fds.insert(1, stdout_w) with subprocess.Popen([sys.executable, fd_status], stdin=pass_fds[0], stdout=pass_fds[1], stderr=pass_fds[2], close_fds=True, pass_fds=pass_fds): output = os.read(stdout_r, 1024) fds = {int(num) for num in output.split(b',')} self.assertEqual(fds, {0, 1, 2} | frozenset(pass_fds), f"output={output!a}") def test_stdout_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stdin=inout) p.wait() def test_stdout_stderr_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stdout=inout, stderr=inout) p.wait() def test_stderr_stdin_are_single_inout_fd(self): with io.open(os.devnull, "r+") as inout: p = subprocess.Popen(ZERO_RETURN_CMD, stderr=inout, stdin=inout) p.wait() def test_wait_when_sigchild_ignored(self): # NOTE: sigchild_ignore.py may not be an effective test on all OSes. sigchild_ignore = support.findfile("sigchild_ignore.py", subdir="subprocessdata") p = subprocess.Popen([sys.executable, sigchild_ignore], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() self.assertEqual(0, p.returncode, "sigchild_ignore.py exited" " non-zero with this error:\n%s" % stderr.decode('utf-8')) def test_select_unbuffered(self): # Issue #11459: bufsize=0 should really set the pipes as # unbuffered (and therefore let select() work properly). select = support.import_module("select") p = subprocess.Popen([sys.executable, "-c", 'import sys;' 'sys.stdout.write("apple")'], stdout=subprocess.PIPE, bufsize=0) f = p.stdout self.addCleanup(f.close) try: self.assertEqual(f.read(4), b"appl") self.assertIn(f, select.select([f], [], [], 0.0)[0]) finally: p.wait() def test_zombie_fast_process_del(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, it wouldn't be added to subprocess._active, and would # remain a zombie. # spawn a Popen, and delete its reference before it exits p = subprocess.Popen([sys.executable, "-c", 'import sys, time;' 'time.sleep(0.2)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with support.check_warnings(('', ResourceWarning)): p = None if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) def test_leak_fast_process_del_killed(self): # Issue #12650: on Unix, if Popen.__del__() was called before the # process exited, and the process got killed by a signal, it would never # be removed from subprocess._active, which triggered a FD and memory # leak. # spawn a Popen, delete its reference and kill it p = subprocess.Popen([sys.executable, "-c", 'import time;' 'time.sleep(3)'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) self.addCleanup(p.stdout.close) self.addCleanup(p.stderr.close) ident = id(p) pid = p.pid with support.check_warnings(('', ResourceWarning)): p = None support.gc_collect() # For PyPy or other GCs. 
os.kill(pid, signal.SIGKILL) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: # check that p is in the active processes list self.assertIn(ident, [id(o) for o in subprocess._active]) # let some time for the process to exit, and create a new Popen: this # should trigger the wait() of p time.sleep(0.2) with self.assertRaises(OSError): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass # p should have been wait()ed on, and removed from the _active list self.assertRaises(OSError, os.waitpid, pid, 0) if mswindows: # subprocess._active is not used on Windows and is set to None. self.assertIsNone(subprocess._active) else: self.assertNotIn(ident, [id(o) for o in subprocess._active]) def test_close_fds_after_preexec(self): fd_status = support.findfile("fd_status.py", subdir="subprocessdata") # this FD is used as dup2() target by preexec_fn, and should be closed # in the child process fd = os.dup(1) self.addCleanup(os.close, fd) p = subprocess.Popen([sys.executable, fd_status], stdout=subprocess.PIPE, close_fds=True, preexec_fn=lambda: os.dup2(1, fd)) output, ignored = p.communicate() remaining_fds = set(map(int, output.split(b','))) self.assertNotIn(fd, remaining_fds) @support.cpython_only def test_fork_exec(self): # Issue #22290: fork_exec() must not crash on memory allocation failure # or other errors import _posixsubprocess gc_enabled = gc.isenabled() try: # Use a preexec function and enable the garbage collector # to force fork_exec() to re-enable the garbage collector # on error. func = lambda: None gc.enable() for args, exe_list, cwd, env_list in ( (123, [b"exe"], None, [b"env"]), ([b"arg"], 123, None, [b"env"]), ([b"arg"], [b"exe"], 123, [b"env"]), ([b"arg"], [b"exe"], None, 123), ): with self.assertRaises(TypeError) as err: _posixsubprocess.fork_exec( args, exe_list, True, (), cwd, env_list, -1, -1, -1, -1, 1, 2, 3, 4, True, True, False, [], 0, -1, func) # Attempt to prevent # "TypeError: fork_exec() takes exactly N arguments (M given)" # from passing the test. More refactoring to have us start # with a valid *args list, confirm a good call with that works # before mutating it in various ways to ensure that bad calls # with individual arg type errors raise a typeerror would be # ideal. Saving that for a future PR... self.assertNotIn('takes exactly', str(err.exception)) finally: if not gc_enabled: gc.disable() @support.cpython_only def test_fork_exec_sorted_fd_sanity_check(self): # Issue #23564: sanity check the fork_exec() fds_to_keep sanity check. import _posixsubprocess class BadInt: first = True def __init__(self, value): self.value = value def __int__(self): if self.first: self.first = False return self.value raise ValueError gc_enabled = gc.isenabled() try: gc.enable() for fds_to_keep in ( (-1, 2, 3, 4, 5), # Negative number. ('str', 4), # Not an int. (18, 23, 42, 2**63), # Out of range. (5, 4), # Not sorted. (6, 7, 7, 8), # Duplicate. 
(BadInt(1), BadInt(2)), ): with self.assertRaises( ValueError, msg='fds_to_keep={}'.format(fds_to_keep)) as c: _posixsubprocess.fork_exec( [b"false"], [b"false"], True, fds_to_keep, None, [b"env"], -1, -1, -1, -1, 1, 2, 3, 4, True, True, None, None, None, -1, None) self.assertIn('fds_to_keep', str(c.exception)) finally: if not gc_enabled: gc.disable() def test_communicate_BrokenPipeError_stdin_close(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError proc.communicate() # Should swallow BrokenPipeError from close. mock_proc_stdin.close.assert_called_with() def test_communicate_BrokenPipeError_stdin_write(self): # By not setting stdout or stderr or a timeout we force the fast path # that just calls _stdin_write() internally due to our mock. proc = subprocess.Popen(ZERO_RETURN_CMD) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.write.side_effect = BrokenPipeError proc.communicate(b'stuff') # Should swallow the BrokenPipeError. mock_proc_stdin.write.assert_called_once_with(b'stuff') mock_proc_stdin.close.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_flush(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin, \ open(os.devnull, 'wb') as dev_null: mock_proc_stdin.flush.side_effect = BrokenPipeError # because _communicate registers a selector using proc.stdin... mock_proc_stdin.fileno.return_value = dev_null.fileno() # _communicate() should swallow BrokenPipeError from flush. proc.communicate(b'stuff') mock_proc_stdin.flush.assert_called_once_with() def test_communicate_BrokenPipeError_stdin_close_with_timeout(self): # Setting stdin and stdout forces the ._communicate() code path. # python -h exits faster than python -c pass (but spams stdout). proc = subprocess.Popen([sys.executable, '-h'], stdin=subprocess.PIPE, stdout=subprocess.PIPE) with proc, mock.patch.object(proc, 'stdin') as mock_proc_stdin: mock_proc_stdin.close.side_effect = BrokenPipeError # _communicate() should swallow BrokenPipeError from close. proc.communicate(timeout=999) mock_proc_stdin.close.assert_called_once_with() @unittest.skipUnless(_testcapi is not None and hasattr(_testcapi, 'W_STOPCODE'), 'need _testcapi.W_STOPCODE') def test_stopped(self): """Test wait() behavior when waitpid returns WIFSTOPPED; issue29335.""" args = ZERO_RETURN_CMD proc = subprocess.Popen(args) # Wait until the real process completes to avoid zombie process support.wait_process(proc.pid, exitcode=0) status = _testcapi.W_STOPCODE(3) with mock.patch('subprocess.os.waitpid', return_value=(proc.pid, status)): returncode = proc.wait() self.assertEqual(returncode, -3) def test_send_signal_race(self): # bpo-38630: send_signal() must poll the process exit status to reduce # the risk of sending the signal to the wrong process. proc = subprocess.Popen(ZERO_RETURN_CMD) # wait until the process completes without using the Popen APIs. support.wait_process(proc.pid, exitcode=0) # returncode is still None but the process completed. 
self.assertIsNone(proc.returncode) with mock.patch("os.kill") as mock_kill: proc.send_signal(signal.SIGTERM) # send_signal() didn't call os.kill() since the process already # completed. mock_kill.assert_not_called() # Don't check the returncode value: the test reads the exit status, # so Popen failed to read it and uses a default returncode instead. self.assertIsNotNone(proc.returncode) def test_send_signal_race2(self): # bpo-40550: the process might exist between the returncode check and # the kill operation p = subprocess.Popen([sys.executable, '-c', 'exit(1)']) # wait for process to exit while not p.returncode: p.poll() with mock.patch.object(p, 'poll', new=lambda: None): p.returncode = None p.send_signal(signal.SIGTERM) def test_communicate_repeated_call_after_stdout_close(self): proc = subprocess.Popen([sys.executable, '-c', 'import os, time; os.close(1), time.sleep(2)'], stdout=subprocess.PIPE) while True: try: proc.communicate(timeout=0.1) return except subprocess.TimeoutExpired: pass @unittest.skipUnless(mswindows, "Windows specific tests") class Win32ProcessTestCase(BaseTestCase): def test_startupinfo(self): # startupinfo argument # We uses hardcoded constants, because we do not want to # depend on win32all. STARTF_USESHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = STARTF_USESHOWWINDOW startupinfo.wShowWindow = SW_MAXIMIZE # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_keywords(self): # startupinfo argument # We use hardcoded constants, because we do not want to # depend on win32all. STARTF_USERSHOWWINDOW = 1 SW_MAXIMIZE = 3 startupinfo = subprocess.STARTUPINFO( dwFlags=STARTF_USERSHOWWINDOW, wShowWindow=SW_MAXIMIZE ) # Since Python is a console process, it won't be affected # by wShowWindow, but the argument should be silently # ignored subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_startupinfo_copy(self): # bpo-34044: Popen must not modify input STARTUPINFO structure startupinfo = subprocess.STARTUPINFO() startupinfo.dwFlags = subprocess.STARTF_USESHOWWINDOW startupinfo.wShowWindow = subprocess.SW_HIDE # Call Popen() twice with the same startupinfo object to make sure # that it's not modified for _ in range(2): cmd = ZERO_RETURN_CMD with open(os.devnull, 'w') as null: proc = subprocess.Popen(cmd, stdout=null, stderr=subprocess.STDOUT, startupinfo=startupinfo) with proc: proc.communicate() self.assertEqual(proc.returncode, 0) self.assertEqual(startupinfo.dwFlags, subprocess.STARTF_USESHOWWINDOW) self.assertIsNone(startupinfo.hStdInput) self.assertIsNone(startupinfo.hStdOutput) self.assertIsNone(startupinfo.hStdError) self.assertEqual(startupinfo.wShowWindow, subprocess.SW_HIDE) self.assertEqual(startupinfo.lpAttributeList, {"handle_list": []}) def test_creationflags(self): # creationflags argument CREATE_NEW_CONSOLE = 16 sys.stderr.write(" a DOS box should flash briefly ...\n") subprocess.call(sys.executable + ' -c "import time; time.sleep(0.25)"', creationflags=CREATE_NEW_CONSOLE) def test_invalid_args(self): # invalid arguments should raise ValueError self.assertRaises(ValueError, subprocess.call, [sys.executable, "-c", "import sys; sys.exit(47)"], preexec_fn=lambda: 1) @support.cpython_only def test_issue31471(self): # There shouldn't be an assertion failure in Popen() in case the env # argument has a bad keys() method. 
class BadEnv(dict): keys = None with self.assertRaises(TypeError): subprocess.Popen(ZERO_RETURN_CMD, env=BadEnv()) def test_close_fds(self): # close file descriptors rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(47)"], close_fds=True) self.assertEqual(rc, 47) def test_close_fds_with_stdio(self): import msvcrt fds = os.pipe() self.addCleanup(os.close, fds[0]) self.addCleanup(os.close, fds[1]) handles = [] for fd in fds: os.set_inheritable(fd, True) handles.append(msvcrt.get_osfhandle(fd)) p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) int(stdout.strip()) # Check that stdout is an integer p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # The same as the previous call, but with an empty handle_list handle_list = [] startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handle_list} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=True) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 1) self.assertIn(b"OSError", stderr) # Check for a warning due to using handle_list and close_fds=False with support.check_warnings((".*overriding close_fds", RuntimeWarning)): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": handles[:]} p = subprocess.Popen([sys.executable, "-c", "import msvcrt; print(msvcrt.open_osfhandle({}, 0))".format(handles[0])], stdout=subprocess.PIPE, stderr=subprocess.PIPE, startupinfo=startupinfo, close_fds=False) stdout, stderr = p.communicate() self.assertEqual(p.returncode, 0) def test_empty_attribute_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_empty_handle_list(self): startupinfo = subprocess.STARTUPINFO() startupinfo.lpAttributeList = {"handle_list": []} subprocess.call(ZERO_RETURN_CMD, startupinfo=startupinfo) def test_shell_sequence(self): # Run command through the shell (sequence) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen(["set"], shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_string(self): # Run command through the shell (string) newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv) with p: self.assertIn(b"physalis", p.stdout.read()) def test_shell_encodings(self): # Run command through the shell (string) for enc in ['ansi', 'oem']: newenv = os.environ.copy() newenv["FRUIT"] = "physalis" p = subprocess.Popen("set", shell=1, stdout=subprocess.PIPE, env=newenv, encoding=enc) with p: self.assertIn("physalis", p.stdout.read(), enc) def test_call_string(self): # call() function with string argument on Windows rc = subprocess.call(sys.executable + ' -c "import sys; sys.exit(47)"') self.assertEqual(rc, 47) def _kill_process(self, method, *args): # Some win32 buildbot raises EOFError if stdin is inherited p = subprocess.Popen([sys.executable, "-c", """if 
1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() time.sleep(30) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') returncode = p.wait() self.assertNotEqual(returncode, 0) def _kill_dead_process(self, method, *args): p = subprocess.Popen([sys.executable, "-c", """if 1: import sys, time sys.stdout.write('x\\n') sys.stdout.flush() sys.exit(42) """], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) with p: # Wait for the interpreter to be completely initialized before # sending any signal. p.stdout.read(1) # The process should end after this time.sleep(1) # This shouldn't raise even though the child is now dead getattr(p, method)(*args) _, stderr = p.communicate() self.assertEqual(stderr, b'') rc = p.wait() self.assertEqual(rc, 42) def test_send_signal(self): self._kill_process('send_signal', signal.SIGTERM) def test_kill(self): self._kill_process('kill') def test_terminate(self): self._kill_process('terminate') def test_send_signal_dead(self): self._kill_dead_process('send_signal', signal.SIGTERM) def test_kill_dead(self): self._kill_dead_process('kill') def test_terminate_dead(self): self._kill_dead_process('terminate') class MiscTests(unittest.TestCase): class RecordingPopen(subprocess.Popen): """A Popen that saves a reference to each instance for testing.""" instances_created = [] def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.instances_created.append(self) @mock.patch.object(subprocess.Popen, "_communicate") def _test_keyboardinterrupt_no_kill(self, popener, mock__communicate, **kwargs): """Fake a SIGINT happening during Popen._communicate() and ._wait(). This avoids the need to actually try and get test environments to send and receive signals reliably across platforms. The net effect of a ^C happening during a blocking subprocess execution which we want to clean up from is a KeyboardInterrupt coming out of communicate() or wait(). """ mock__communicate.side_effect = KeyboardInterrupt try: with mock.patch.object(subprocess.Popen, "_wait") as mock__wait: # We patch out _wait() as no signal was involved so the # child process isn't actually going to exit rapidly. 
mock__wait.side_effect = KeyboardInterrupt with mock.patch.object(subprocess, "Popen", self.RecordingPopen): with self.assertRaises(KeyboardInterrupt): popener([sys.executable, "-c", "import time\ntime.sleep(9)\nimport sys\n" "sys.stderr.write('\\n!runaway child!\\n')"], stdout=subprocess.DEVNULL, **kwargs) for call in mock__wait.call_args_list[1:]: self.assertNotEqual( call, mock.call(timeout=None), "no open-ended wait() after the first allowed: " f"{mock__wait.call_args_list}") sigint_calls = [] for call in mock__wait.call_args_list: if call == mock.call(timeout=0.25): # from Popen.__init__ sigint_calls.append(call) self.assertLessEqual(mock__wait.call_count, 2, msg=mock__wait.call_args_list) self.assertEqual(len(sigint_calls), 1, msg=mock__wait.call_args_list) finally: # cleanup the forgotten (due to our mocks) child process process = self.RecordingPopen.instances_created.pop() process.kill() process.wait() self.assertEqual([], self.RecordingPopen.instances_created) def test_call_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.call, timeout=6.282) def test_run_keyboardinterrupt_no_kill(self): self._test_keyboardinterrupt_no_kill(subprocess.run, timeout=6.282) def test_context_manager_keyboardinterrupt_no_kill(self): def popen_via_context_manager(*args, **kwargs): with subprocess.Popen(*args, **kwargs) as unused_process: raise KeyboardInterrupt # Test how __exit__ handles ^C. self._test_keyboardinterrupt_no_kill(popen_via_context_manager) def test_getoutput(self): self.assertEqual(subprocess.getoutput('echo xyzzy'), 'xyzzy') self.assertEqual(subprocess.getstatusoutput('echo xyzzy'), (0, 'xyzzy')) # we use mkdtemp in the next line to create an empty directory # under our exclusive control; from that, we can invent a pathname # that we _know_ won't exist. This is guaranteed to fail. 
dir = None try: dir = tempfile.mkdtemp() name = os.path.join(dir, "foo") status, output = subprocess.getstatusoutput( ("type " if mswindows else "cat ") + name) self.assertNotEqual(status, 0) finally: if dir is not None: os.rmdir(dir) def test__all__(self): """Ensure that __all__ is populated properly.""" intentionally_excluded = {"list2cmdline", "Handle", "pwd", "grp"} exported = set(subprocess.__all__) possible_exports = set() import types for name, value in subprocess.__dict__.items(): if name.startswith('_'): continue if isinstance(value, (types.ModuleType,)): continue possible_exports.add(name) self.assertEqual(exported, possible_exports - intentionally_excluded) @unittest.skipUnless(hasattr(selectors, 'PollSelector'), "Test needs selectors.PollSelector") class ProcessTestCaseNoPoll(ProcessTestCase): def setUp(self): self.orig_selector = subprocess._PopenSelector subprocess._PopenSelector = selectors.SelectSelector ProcessTestCase.setUp(self) def tearDown(self): subprocess._PopenSelector = self.orig_selector ProcessTestCase.tearDown(self) @unittest.skipUnless(mswindows, "Windows-specific tests") class CommandsWithSpaces (BaseTestCase): def setUp(self): super().setUp() f, fname = tempfile.mkstemp(".py", "te st") self.fname = fname.lower () os.write(f, b"import sys;" b"sys.stdout.write('%d %s' % (len(sys.argv), [a.lower () for a in sys.argv]))" ) os.close(f) def tearDown(self): os.remove(self.fname) super().tearDown() def with_spaces(self, *args, **kwargs): kwargs['stdout'] = subprocess.PIPE p = subprocess.Popen(*args, **kwargs) with p: self.assertEqual( p.stdout.read ().decode("mbcs"), "2 [%r, 'ab cd']" % self.fname ) def test_shell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd"), shell=1) def test_shell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"], shell=1) def test_noshell_string_with_spaces(self): # call() function with string argument with spaces on Windows self.with_spaces('"%s" "%s" "%s"' % (sys.executable, self.fname, "ab cd")) def test_noshell_sequence_with_spaces(self): # call() function with sequence argument with spaces on Windows self.with_spaces([sys.executable, self.fname, "ab cd"]) class ContextManagerTests(BaseTestCase): def test_pipe(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.stdout.write('stdout');" "sys.stderr.write('stderr');"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: self.assertEqual(proc.stdout.read(), b"stdout") self.assertEqual(proc.stderr.read(), b"stderr") self.assertTrue(proc.stdout.closed) self.assertTrue(proc.stderr.closed) def test_returncode(self): with subprocess.Popen([sys.executable, "-c", "import sys; sys.exit(100)"]) as proc: pass # __exit__ calls wait(), so the returncode should be set self.assertEqual(proc.returncode, 100) def test_communicate_stdin(self): with subprocess.Popen([sys.executable, "-c", "import sys;" "sys.exit(sys.stdin.read() == 'context')"], stdin=subprocess.PIPE) as proc: proc.communicate(b"context") self.assertEqual(proc.returncode, 1) def test_invalid_args(self): with self.assertRaises(NONEXISTING_ERRORS): with subprocess.Popen(NONEXISTING_CMD, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as proc: pass def test_broken_pipe_cleanup(self): """Broken pipe error should not prevent wait() (Issue 21619)""" proc = subprocess.Popen(ZERO_RETURN_CMD, stdin=subprocess.PIPE, 
bufsize=support.PIPE_MAX_SIZE*2) proc = proc.__enter__() # Prepare to send enough data to overflow any OS pipe buffering and # guarantee a broken pipe error. Data is held in BufferedWriter # buffer until closed. proc.stdin.write(b'x' * support.PIPE_MAX_SIZE) self.assertIsNone(proc.returncode) # EPIPE expected under POSIX; EINVAL under Windows self.assertRaises(OSError, proc.__exit__, None, None, None) self.assertEqual(proc.returncode, 0) self.assertTrue(proc.stdin.closed) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_threading.py000066400000000000000000001511551471441230600216400ustar00rootroot00000000000000""" Tests for the threading module. """ import test.support from test.support import verbose, import_module, cpython_only, unlink from test.support.script_helper import assert_python_ok, assert_python_failure import random import sys import _thread import threading import time import unittest import weakref import os import subprocess import signal import textwrap import traceback from gevent.tests import lock_tests # gevent: use our local copy #from test import lock_tests from test import support # Between fork() and exec(), only async-safe functions are allowed (issues # #12316 and #11870), and fork() from a worker thread is known to trigger # problems with some operating systems (issue #3863): skip problematic tests # on platforms known to behave badly. platforms_to_skip = ('netbsd5', 'hp-ux11') # A trivial mutable counter. class Counter(object): def __init__(self): self.value = 0 def inc(self): self.value += 1 def dec(self): self.value -= 1 def get(self): return self.value class TestThread(threading.Thread): def __init__(self, name, testcase, sema, mutex, nrunning): threading.Thread.__init__(self, name=name) self.testcase = testcase self.sema = sema self.mutex = mutex self.nrunning = nrunning def run(self): delay = random.random() / 10000.0 if verbose: print('task %s will run for %.1f usec' % (self.name, delay * 1e6)) with self.sema: with self.mutex: self.nrunning.inc() if verbose: print(self.nrunning.get(), 'tasks are running') self.testcase.assertLessEqual(self.nrunning.get(), 3) time.sleep(delay) if verbose: print('task', self.name, 'done') with self.mutex: self.nrunning.dec() self.testcase.assertGreaterEqual(self.nrunning.get(), 0) if verbose: print('%s is finished. %d tasks are running' % (self.name, self.nrunning.get())) class BaseTestCase(unittest.TestCase): def setUp(self): self._threads = test.support.threading_setup() def tearDown(self): test.support.threading_cleanup(*self._threads) test.support.reap_children() class ThreadTests(BaseTestCase): # Create a bunch of threads, let each do some work, wait until all are # done. def test_various_ops(self): # This takes about n/3 seconds to run (about n/3 clumps of tasks, # times about 1 second per clump). 
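# --- Illustrative aside (not part of the vendored CPython test file) --------
# The TestThread class above pairs a BoundedSemaphore (to cap how many workers
# run at once) with a Lock-protected counter (to observe that cap). A compact
# standalone sketch of the same pattern, assuming only the stdlib threading
# module:
import threading

limit = threading.BoundedSemaphore(3)     # at most 3 workers inside the block
lock = threading.Lock()
running = 0
peak = 0

def worker():
    global running, peak
    with limit:
        with lock:
            running += 1
            peak = max(peak, running)
        # ... the real work would happen here ...
        with lock:
            running -= 1

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert peak <= 3
# -----------------------------------------------------------------------------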
NUMTASKS = 10 # no more than 3 of the 10 can run at once sema = threading.BoundedSemaphore(value=3) mutex = threading.RLock() numrunning = Counter() threads = [] for i in range(NUMTASKS): t = TestThread(""%i, self, sema, mutex, numrunning) threads.append(t) self.assertIsNone(t.ident) self.assertRegex(repr(t), r'^$') t.start() if hasattr(threading, 'get_native_id'): native_ids = set(t.native_id for t in threads) | {threading.get_native_id()} self.assertNotIn(None, native_ids) self.assertEqual(len(native_ids), NUMTASKS + 1) if verbose: print('waiting for all tasks to complete') for t in threads: t.join() self.assertFalse(t.is_alive()) self.assertNotEqual(t.ident, 0) self.assertIsNotNone(t.ident) self.assertRegex(repr(t), r'^$') if verbose: print('all tasks done') self.assertEqual(numrunning.get(), 0) def test_ident_of_no_threading_threads(self): # The ident still must work for the main thread and dummy threads. self.assertIsNotNone(threading.currentThread().ident) def f(): ident.append(threading.currentThread().ident) done.set() done = threading.Event() ident = [] with support.wait_threads_exit(): tid = _thread.start_new_thread(f, ()) done.wait() self.assertEqual(ident[0], tid) # Kill the "immortal" _DummyThread del threading._active[ident[0]] # run with a small(ish) thread stack size (256 KiB) def test_various_ops_small_stack(self): if verbose: print('with 256 KiB thread stack size...') try: threading.stack_size(262144) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) # run with a large thread stack size (1 MiB) def test_various_ops_large_stack(self): if verbose: print('with 1 MiB thread stack size...') try: threading.stack_size(0x100000) except _thread.error: raise unittest.SkipTest( 'platform does not support changing thread stack size') self.test_various_ops() threading.stack_size(0) def test_foreign_thread(self): # Check that a "foreign" thread can use the threading module. def f(mutex): # Calling current_thread() forces an entry for the foreign # thread to get made in the threading._active map. threading.current_thread() mutex.release() mutex = threading.Lock() mutex.acquire() with support.wait_threads_exit(): tid = _thread.start_new_thread(f, (mutex,)) # Wait for the thread to finish. mutex.acquire() self.assertIn(tid, threading._active) self.assertIsInstance(threading._active[tid], threading._DummyThread) #Issue 29376 self.assertTrue(threading._active[tid].is_alive()) self.assertRegex(repr(threading._active[tid]), '_DummyThread') del threading._active[tid] # PyThreadState_SetAsyncExc() is a CPython-only gimmick, not (currently) # exposed at the Python level. This test relies on ctypes to get at it. def test_PyThreadState_SetAsyncExc(self): ctypes = import_module("ctypes") set_async_exc = ctypes.pythonapi.PyThreadState_SetAsyncExc set_async_exc.argtypes = (ctypes.c_ulong, ctypes.py_object) class AsyncExc(Exception): pass exception = ctypes.py_object(AsyncExc) # First check it works when setting the exception from the same thread. tid = threading.get_ident() self.assertIsInstance(tid, int) self.assertGreater(tid, 0) try: result = set_async_exc(tid, exception) # The exception is async, so we might have to keep the VM busy until # it notices. while True: pass except AsyncExc: pass else: # This code is unreachable but it reflects the intent. If we wanted # to be smarter the above loop wouldn't be infinite. 
self.fail("AsyncExc not raised") try: self.assertEqual(result, 1) # one thread state modified except UnboundLocalError: # The exception was raised too quickly for us to get the result. pass # `worker_started` is set by the thread when it's inside a try/except # block waiting to catch the asynchronously set AsyncExc exception. # `worker_saw_exception` is set by the thread upon catching that # exception. worker_started = threading.Event() worker_saw_exception = threading.Event() class Worker(threading.Thread): def run(self): self.id = threading.get_ident() self.finished = False try: while True: worker_started.set() time.sleep(0.1) except AsyncExc: self.finished = True worker_saw_exception.set() t = Worker() t.daemon = True # so if this fails, we don't hang Python at shutdown t.start() if verbose: print(" started worker thread") # Try a thread id that doesn't make sense. if verbose: print(" trying nonsensical thread id") result = set_async_exc(-1, exception) self.assertEqual(result, 0) # no thread states modified # Now raise an exception in the worker thread. if verbose: print(" waiting for worker thread to get started") ret = worker_started.wait() self.assertTrue(ret) if verbose: print(" verifying worker hasn't exited") self.assertFalse(t.finished) if verbose: print(" attempting to raise asynch exception in worker") result = set_async_exc(t.id, exception) self.assertEqual(result, 1) # one thread state modified if verbose: print(" waiting for worker to say it caught the exception") worker_saw_exception.wait(timeout=support.SHORT_TIMEOUT) self.assertTrue(t.finished) if verbose: print(" all OK -- joining worker") if t.finished: t.join() # else the thread is still running, and we have no way to kill it def test_limbo_cleanup(self): # Issue 7481: Failure to start thread should cleanup the limbo map. def fail_new_thread(*args): raise threading.ThreadError() _start_new_thread = threading._start_new_thread threading._start_new_thread = fail_new_thread try: t = threading.Thread(target=lambda: None) self.assertRaises(threading.ThreadError, t.start) self.assertFalse( t in threading._limbo, "Failed to cleanup _limbo map on failure of Thread.start().") finally: threading._start_new_thread = _start_new_thread def test_finalize_running_thread(self): # Issue 1402: the PyGILState_Ensure / _Release functions may be called # very late on python exit: on deallocation of a running thread for # example. import_module("ctypes") rc, out, err = assert_python_failure("-c", """if 1: import ctypes, sys, time, _thread # This lock is used as a simple event variable. ready = _thread.allocate_lock() ready.acquire() # Module globals are cleared before __del__ is run # So we save the functions in class dict class C: ensure = ctypes.pythonapi.PyGILState_Ensure release = ctypes.pythonapi.PyGILState_Release def __del__(self): state = self.ensure() self.release(state) def waitingThread(): x = C() ready.release() time.sleep(100) _thread.start_new_thread(waitingThread, ()) ready.acquire() # Be sure the other thread is waiting. 
sys.exit(42) """) self.assertEqual(rc, 42) def test_finalize_with_trace(self): # Issue1733757 # Avoid a deadlock when sys.settrace steps into threading._shutdown assert_python_ok("-c", """if 1: import sys, threading # A deadlock-killer, to prevent the # testsuite to hang forever def killer(): import os, time time.sleep(2) print('program blocked; aborting') os._exit(2) t = threading.Thread(target=killer) t.daemon = True t.start() # This is the trace function def func(frame, event, arg): threading.current_thread() return func sys.settrace(func) """) def test_join_nondaemon_on_shutdown(self): # Issue 1722344 # Raising SystemExit skipped threading._shutdown rc, out, err = assert_python_ok("-c", """if 1: import threading from time import sleep def child(): sleep(1) # As a non-daemon thread we SHOULD wake up and nothing # should be torn down yet print("Woke up, sleep function is:", sleep) threading.Thread(target=child).start() raise SystemExit """) self.assertEqual(out.strip(), b"Woke up, sleep function is: ") self.assertEqual(err, b"") def test_enumerate_after_join(self): # Try hard to trigger #1703448: a thread is still returned in # threading.enumerate() after it has been join()ed. enum = threading.enumerate old_interval = sys.getswitchinterval() try: for i in range(1, 100): sys.setswitchinterval(i * 0.0002) t = threading.Thread(target=lambda: None) t.start() t.join() l = enum() self.assertNotIn(t, l, "#1703448 triggered after %d trials: %s" % (i, l)) finally: sys.setswitchinterval(old_interval) def test_no_refcycle_through_target(self): class RunSelfFunction(object): def __init__(self, should_raise): # The links in this refcycle from Thread back to self # should be cleaned up when the thread completes. self.should_raise = should_raise self.thread = threading.Thread(target=self._run, args=(self,), kwargs={'yet_another':self}) self.thread.start() def _run(self, other_ref, yet_another): if self.should_raise: raise SystemExit cyclic_object = RunSelfFunction(should_raise=False) weak_cyclic_object = weakref.ref(cyclic_object) cyclic_object.thread.join() del cyclic_object self.assertIsNone(weak_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_cyclic_object()))) raising_cyclic_object = RunSelfFunction(should_raise=True) weak_raising_cyclic_object = weakref.ref(raising_cyclic_object) raising_cyclic_object.thread.join() del raising_cyclic_object self.assertIsNone(weak_raising_cyclic_object(), msg=('%d references still around' % sys.getrefcount(weak_raising_cyclic_object()))) def test_old_threading_api(self): # Just a quick sanity check to make sure the old method names are # still present t = threading.Thread() t.isDaemon() t.setDaemon(True) t.getName() t.setName("name") e = threading.Event() e.isSet() threading.activeCount() def test_repr_daemon(self): t = threading.Thread() self.assertNotIn('daemon', repr(t)) t.daemon = True self.assertIn('daemon', repr(t)) def test_daemon_param(self): t = threading.Thread() self.assertFalse(t.daemon) t = threading.Thread(daemon=False) self.assertFalse(t.daemon) t = threading.Thread(daemon=True) self.assertTrue(t.daemon) @unittest.skipUnless(hasattr(os, 'fork'), 'needs os.fork()') def test_fork_at_exit(self): # bpo-42350: Calling os.fork() after threading._shutdown() must # not log an error. 
code = textwrap.dedent(""" import atexit import os import sys from test.support import wait_process # Import the threading module to register its "at fork" callback import threading def exit_handler(): pid = os.fork() if not pid: print("child process ok", file=sys.stderr, flush=True) # child process sys.exit() else: wait_process(pid, exitcode=0) # exit_handler() will be called after threading._shutdown() atexit.register(exit_handler) """) _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err.rstrip(), b'child process ok') @unittest.skipUnless(hasattr(os, 'fork'), 'test needs fork()') def test_dummy_thread_after_fork(self): # Issue #14308: a dummy thread in the active list doesn't mess up # the after-fork mechanism. code = """if 1: import _thread, threading, os, time def background_thread(evt): # Creates and registers the _DummyThread instance threading.current_thread() evt.set() time.sleep(10) evt = threading.Event() _thread.start_new_thread(background_thread, (evt,)) evt.wait() assert threading.active_count() == 2, threading.active_count() if os.fork() == 0: assert threading.active_count() == 1, threading.active_count() os._exit(0) else: os.wait() """ _, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") def test_is_alive_after_fork(self): # Try hard to trigger #18418: is_alive() could sometimes be True on # threads that vanished after a fork. old_interval = sys.getswitchinterval() self.addCleanup(sys.setswitchinterval, old_interval) # Make the bug more likely to manifest. test.support.setswitchinterval(1e-6) for i in range(20): t = threading.Thread(target=lambda: None) t.start() pid = os.fork() if pid == 0: os._exit(11 if t.is_alive() else 10) else: t.join() support.wait_process(pid, exitcode=10) def test_main_thread(self): main = threading.main_thread() self.assertEqual(main.name, 'MainThread') self.assertEqual(main.ident, threading.current_thread().ident) self.assertEqual(main.ident, threading.get_ident()) def f(): self.assertNotEqual(threading.main_thread().ident, threading.current_thread().ident) th = threading.Thread(target=f) th.start() th.join() @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()") @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork(self): code = """if 1: import os, threading from test import support pid = os.fork() if pid == 0: main = threading.main_thread() print(main.name) print(main.ident == threading.current_thread().ident) print(main.ident == threading.get_ident()) else: support.wait_process(pid, exitcode=0) """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "MainThread\nTrue\nTrue\n") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") @unittest.skipUnless(hasattr(os, 'fork'), "test needs os.fork()") @unittest.skipUnless(hasattr(os, 'waitpid'), "test needs os.waitpid()") def test_main_thread_after_fork_from_nonmain_thread(self): code = """if 1: import os, threading, sys from test import support def f(): pid = os.fork() if pid == 0: main = threading.main_thread() print(main.name) print(main.ident == threading.current_thread().ident) print(main.ident == threading.get_ident()) # stdout is fully buffered because not a tty, # we have to flush before exit. 
sys.stdout.flush() else: support.wait_process(pid, exitcode=0) th = threading.Thread(target=f) th.start() th.join() """ _, out, err = assert_python_ok("-c", code) data = out.decode().replace('\r', '') self.assertEqual(err, b"") self.assertEqual(data, "Thread-1\nTrue\nTrue\n") def test_main_thread_during_shutdown(self): # bpo-31516: current_thread() should still point to the main thread # at shutdown code = """if 1: import gc, threading main_thread = threading.current_thread() assert main_thread is threading.main_thread() # sanity check class RefCycle: def __init__(self): self.cycle = self def __del__(self): print("GC:", threading.current_thread() is main_thread, threading.main_thread() is main_thread, threading.enumerate() == [main_thread]) RefCycle() gc.collect() # sanity check x = RefCycle() """ _, out, err = assert_python_ok("-c", code) data = out.decode() self.assertEqual(err, b"") self.assertEqual(data.splitlines(), ["GC: True True True"] * 2) def test_finalization_shutdown(self): # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait # until Python thread states of all non-daemon threads get deleted. # # Test similar to SubinterpThreadingTests.test_threads_join_2(), but # test the finalization of the main interpreter. code = """if 1: import os import threading import time import random def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_Finalize() is called. random_sleep() tls.x = Sleeper() random_sleep() threading.Thread(target=f).start() random_sleep() """ rc, out, err = assert_python_ok("-c", code) self.assertEqual(err, b"") def test_tstate_lock(self): # Test an implementation detail of Thread objects. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() time.sleep(0.01) # The tstate lock is None until the thread is started t = threading.Thread(target=f) self.assertIs(t._tstate_lock, None) t.start() started.acquire() self.assertTrue(t.is_alive()) # The tstate lock can't be acquired when the thread is running # (or suspended). tstate_lock = t._tstate_lock self.assertFalse(tstate_lock.acquire(timeout=0), False) finish.release() # When the thread ends, the state_lock can be successfully # acquired. self.assertTrue(tstate_lock.acquire(timeout=support.SHORT_TIMEOUT), False) # But is_alive() is still True: we hold _tstate_lock now, which # prevents is_alive() from knowing the thread's end-of-life C code # is done. self.assertTrue(t.is_alive()) # Let is_alive() find out the C code is done. tstate_lock.release() self.assertFalse(t.is_alive()) # And verify the thread disposed of _tstate_lock. self.assertIsNone(t._tstate_lock) t.join() def test_repr_stopped(self): # Verify that "stopped" shows up in repr(Thread) appropriately. started = _thread.allocate_lock() finish = _thread.allocate_lock() started.acquire() finish.acquire() def f(): started.release() finish.acquire() t = threading.Thread(target=f) t.start() started.acquire() self.assertIn("started", repr(t)) finish.release() # "stopped" should appear in the repr in a reasonable amount of time. # Implementation detail: as of this writing, that's trivially true # if .join() is called, and almost trivially true if .is_alive() is # called. The detail we're testing here is that "stopped" shows up # "all on its own". 
LOOKING_FOR = "stopped" for i in range(500): if LOOKING_FOR in repr(t): break time.sleep(0.01) self.assertIn(LOOKING_FOR, repr(t)) # we waited at least 5 seconds t.join() def test_BoundedSemaphore_limit(self): # BoundedSemaphore should raise ValueError if released too often. for limit in range(1, 10): bs = threading.BoundedSemaphore(limit) threads = [threading.Thread(target=bs.acquire) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() threads = [threading.Thread(target=bs.release) for _ in range(limit)] for t in threads: t.start() for t in threads: t.join() self.assertRaises(ValueError, bs.release) @cpython_only def test_frame_tstate_tracing(self): # Issue #14432: Crash when a generator is created in a C thread that is # destroyed while the generator is still used. The issue was that a # generator contains a frame, and the frame kept a reference to the # Python state of the destroyed C thread. The crash occurs when a trace # function is setup. def noop_trace(frame, event, arg): # no operation return noop_trace def generator(): while 1: yield "generator" def callback(): if callback.gen is None: callback.gen = generator() return next(callback.gen) callback.gen = None old_trace = sys.gettrace() sys.settrace(noop_trace) try: # Install a trace function threading.settrace(noop_trace) # Create a generator in a C thread which exits after the call import _testcapi _testcapi.call_in_temporary_c_thread(callback) # Call the generator in a different Python thread, check that the # generator didn't keep a reference to the destroyed thread state for test in range(3): # The trace function is still called here callback() finally: sys.settrace(old_trace) @cpython_only def test_shutdown_locks(self): for daemon in (False, True): with self.subTest(daemon=daemon): event = threading.Event() thread = threading.Thread(target=event.wait, daemon=daemon) # Thread.start() must add lock to _shutdown_locks, # but only for non-daemon thread thread.start() tstate_lock = thread._tstate_lock if not daemon: self.assertIn(tstate_lock, threading._shutdown_locks) else: self.assertNotIn(tstate_lock, threading._shutdown_locks) # unblock the thread and join it event.set() thread.join() # Thread._stop() must remove tstate_lock from _shutdown_locks. # Daemon threads must never add it to _shutdown_locks. self.assertNotIn(tstate_lock, threading._shutdown_locks) def test_locals_at_exit(self): # bpo-19466: thread locals must not be deleted before destructors # are called rc, out, err = assert_python_ok("-c", """if 1: import threading class Atexit: def __del__(self): print("thread_dict.atexit = %r" % thread_dict.atexit) thread_dict = threading.local() thread_dict.atexit = "value" atexit = Atexit() """) self.assertEqual(out.rstrip(), b"thread_dict.atexit = 'value'") def test_leak_without_join(self): # bpo-37788: Test that a thread which is not joined explicitly # does not leak. Test written for reference leak checks. def noop(): pass with support.wait_threads_exit(): threading.Thread(target=noop).start() # Thread.join() is not called def test_import_from_another_thread(self): # bpo-1596321: If the threading module is first import from a thread # different than the main thread, threading._shutdown() must handle # this case without logging an error at Python exit. 
code = textwrap.dedent(''' import _thread import sys event = _thread.allocate_lock() event.acquire() def import_threading(): import threading event.release() if 'threading' in sys.modules: raise Exception('threading is already imported') _thread.start_new_thread(import_threading, ()) # wait until the threading module is imported event.acquire() event.release() if 'threading' not in sys.modules: raise Exception('threading is not imported') # don't wait until the thread completes ''') rc, out, err = assert_python_ok("-c", code) self.assertEqual(out, b'') self.assertEqual(err, b'') class ThreadJoinOnShutdown(BaseTestCase): def _run_and_join(self, script): script = """if 1: import sys, os, time, threading # a thread, which waits for the main program to terminate def joiningfunc(mainthread): mainthread.join() print('end of thread') # stdout is fully buffered because not a tty, we have to flush # before exit. sys.stdout.flush() \n""" + script rc, out, err = assert_python_ok("-c", script) data = out.decode().replace('\r', '') self.assertEqual(data, "end of main\nend of thread\n") def test_1_join_on_shutdown(self): # The usual case: on exit, wait for a non-daemon thread script = """if 1: import os t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() time.sleep(0.1) print('end of main') """ self._run_and_join(script) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_2_join_in_forked_process(self): # Like the test above, but from a forked interpreter script = """if 1: from test import support childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(threading.current_thread(),)) t.start() print('end of main') """ self._run_and_join(script) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_3_join_in_forked_from_thread(self): # Like the test above, but fork() was called from a worker thread # In the forked process, the main Thread object must be marked as stopped. script = """if 1: from test import support main_thread = threading.current_thread() def worker(): childpid = os.fork() if childpid != 0: # parent process support.wait_process(childpid, exitcode=0) sys.exit(0) # child process t = threading.Thread(target=joiningfunc, args=(main_thread,)) print('end of main') t.start() t.join() # Should not block: main_thread is already stopped w = threading.Thread(target=worker) w.start() """ self._run_and_join(script) @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_4_daemon_threads(self): # Check that a daemon thread cannot crash the interpreter on shutdown # by manipulating internal structures that are being disposed of in # the main thread. 
script = """if True: import os import random import sys import time import threading thread_has_run = set() def random_io(): '''Loop for a while sleeping random tiny amounts and doing some I/O.''' while True: with open(os.__file__, 'rb') as in_f: stuff = in_f.read(200) with open(os.devnull, 'wb') as null_f: null_f.write(stuff) time.sleep(random.random() / 1995) thread_has_run.add(threading.current_thread()) def main(): count = 0 for _ in range(40): new_thread = threading.Thread(target=random_io) new_thread.daemon = True new_thread.start() count += 1 while len(thread_has_run) < count: time.sleep(0.001) # Trigger process shutdown sys.exit(0) main() """ rc, out, err = assert_python_ok('-c', script) self.assertFalse(err) @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") @unittest.skipIf(sys.platform in platforms_to_skip, "due to known OS bug") def test_reinit_tls_after_fork(self): # Issue #13817: fork() would deadlock in a multithreaded program with # the ad-hoc TLS implementation. def do_fork_and_wait(): # just fork a child process and wait it pid = os.fork() if pid > 0: support.wait_process(pid, exitcode=50) else: os._exit(50) # start a bunch of threads that will fork() child processes threads = [] for i in range(16): t = threading.Thread(target=do_fork_and_wait) threads.append(t) t.start() for t in threads: t.join() @unittest.skipUnless(hasattr(os, 'fork'), "needs os.fork()") def test_clear_threads_states_after_fork(self): # Issue #17094: check that threads states are cleared after fork() # start a bunch of threads threads = [] for i in range(16): t = threading.Thread(target=lambda : time.sleep(0.3)) threads.append(t) t.start() pid = os.fork() if pid == 0: # check that threads states have been cleared if len(sys._current_frames()) == 1: os._exit(51) else: os._exit(52) else: support.wait_process(pid, exitcode=51) for t in threads: t.join() class SubinterpThreadingTests(BaseTestCase): def pipe(self): r, w = os.pipe() self.addCleanup(os.close, r) self.addCleanup(os.close, w) if hasattr(os, 'set_blocking'): os.set_blocking(r, False) return (r, w) def test_threads_join(self): # Non-daemon threads should be joined at subinterpreter shutdown # (issue #18808) r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. self.assertEqual(os.read(r, 1), b"x") def test_threads_join_2(self): # Same as above, but a delay gets introduced after the thread's # Python code returned but before the thread state is deleted. # To achieve this, we register a thread-local object which sleeps # a bit when deallocated. r, w = self.pipe() code = textwrap.dedent(r""" import os import random import threading import time def random_sleep(): seconds = random.random() * 0.010 time.sleep(seconds) class Sleeper: def __del__(self): random_sleep() tls = threading.local() def f(): # Sleep a bit so that the thread is still running when # Py_EndInterpreter is called. random_sleep() tls.x = Sleeper() os.write(%d, b"x") threading.Thread(target=f).start() random_sleep() """ % (w,)) ret = test.support.run_in_subinterp(code) self.assertEqual(ret, 0) # The thread was joined properly. 
self.assertEqual(os.read(r, 1), b"x") @cpython_only def test_daemon_threads_fatal_error(self): subinterp_code = f"""if 1: import os import threading import time def f(): # Make sure the daemon thread is still running when # Py_EndInterpreter is called. time.sleep({test.support.SHORT_TIMEOUT}) threading.Thread(target=f, daemon=True).start() """ script = r"""if 1: import _testcapi _testcapi.run_in_subinterp(%r) """ % (subinterp_code,) with test.support.SuppressCrashReport(): rc, out, err = assert_python_failure("-c", script) self.assertIn("Fatal Python error: Py_EndInterpreter: " "not the last thread", err.decode()) class ThreadingExceptionTests(BaseTestCase): # A RuntimeError should be raised if Thread.start() is called # multiple times. def test_start_thread_again(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, thread.start) thread.join() def test_joining_current_thread(self): current_thread = threading.current_thread() self.assertRaises(RuntimeError, current_thread.join); def test_joining_inactive_thread(self): thread = threading.Thread() self.assertRaises(RuntimeError, thread.join) def test_daemonize_active_thread(self): thread = threading.Thread() thread.start() self.assertRaises(RuntimeError, setattr, thread, "daemon", True) thread.join() def test_releasing_unacquired_lock(self): lock = threading.Lock() self.assertRaises(RuntimeError, lock.release) def test_recursion_limit(self): # Issue 9670 # test that excessive recursion within a non-main thread causes # an exception rather than crashing the interpreter on platforms # like Mac OS X or FreeBSD which have small default stack sizes # for threads script = """if True: import threading def recurse(): return recurse() def outer(): try: recurse() except RecursionError: pass w = threading.Thread(target=outer) w.start() w.join() print('end of main thread') """ expected_output = "end of main thread\n" p = subprocess.Popen([sys.executable, "-c", script], stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() data = stdout.decode().replace('\r', '') self.assertEqual(p.returncode, 0, "Unexpected error: " + stderr.decode()) self.assertEqual(data, expected_output) def test_print_exception(self): script = r"""if True: import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_1(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: time.sleep(0.01) 1/0 t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) sys.stderr = None running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') err = err.decode() self.assertIn("Exception in thread", err) self.assertIn("Traceback (most recent call last):", err) self.assertIn("ZeroDivisionError", err) self.assertNotIn("Unhandled exception", err) def test_print_exception_stderr_is_none_2(self): script = r"""if True: import sys import threading import time running = False def run(): global running running = True while running: 
time.sleep(0.01) 1/0 sys.stderr = None t = threading.Thread(target=run) t.start() while not running: time.sleep(0.01) running = False t.join() """ rc, out, err = assert_python_ok("-c", script) self.assertEqual(out, b'') self.assertNotIn("Unhandled exception", err.decode()) def test_bare_raise_in_brand_new_thread(self): def bare_raise(): raise class Issue27558(threading.Thread): exc = None def run(self): try: bare_raise() except Exception as exc: self.exc = exc thread = Issue27558() thread.start() thread.join() self.assertIsNotNone(thread.exc) self.assertIsInstance(thread.exc, RuntimeError) # explicitly break the reference cycle to not leak a dangling thread thread.exc = None def test_multithread_modify_file_noerror(self): # See issue25872 def modify_file(): with open(test.support.TESTFN, 'w', encoding='utf-8') as fp: fp.write(' ') traceback.format_stack() self.addCleanup(unlink, test.support.TESTFN) threads = [ threading.Thread(target=modify_file) for i in range(100) ] for t in threads: t.start() t.join() class ThreadRunFail(threading.Thread): def run(self): raise ValueError("run failed") class ExceptHookTests(BaseTestCase): def test_excepthook(self): with support.captured_output("stderr") as stderr: thread = ThreadRunFail(name="excepthook thread") thread.start() thread.join() stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {thread.name}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("run failed")', stderr) self.assertIn('ValueError: run failed', stderr) @support.cpython_only def test_excepthook_thread_None(self): # threading.excepthook called with thread=None: log the thread # identifier in this case. with support.captured_output("stderr") as stderr: try: raise ValueError("bug") except Exception as exc: args = threading.ExceptHookArgs([*sys.exc_info(), None]) try: threading.excepthook(args) finally: # Explicitly break a reference cycle args = None stderr = stderr.getvalue().strip() self.assertIn(f'Exception in thread {threading.get_ident()}:\n', stderr) self.assertIn('Traceback (most recent call last):\n', stderr) self.assertIn(' raise ValueError("bug")', stderr) self.assertIn('ValueError: bug', stderr) def test_system_exit(self): class ThreadExit(threading.Thread): def run(self): sys.exit(1) # threading.excepthook() silently ignores SystemExit with support.captured_output("stderr") as stderr: thread = ThreadExit() thread.start() thread.join() self.assertEqual(stderr.getvalue(), '') def test_custom_excepthook(self): args = None def hook(hook_args): nonlocal args args = hook_args try: with support.swap_attr(threading, 'excepthook', hook): thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(args.exc_type, ValueError) self.assertEqual(str(args.exc_value), 'run failed') self.assertEqual(args.exc_traceback, args.exc_value.__traceback__) self.assertIs(args.thread, thread) finally: # Break reference cycle args = None def test_custom_excepthook_fail(self): def threading_hook(args): raise ValueError("threading_hook failed") err_str = None def sys_hook(exc_type, exc_value, exc_traceback): nonlocal err_str err_str = str(exc_value) with support.swap_attr(threading, 'excepthook', threading_hook), \ support.swap_attr(sys, 'excepthook', sys_hook), \ support.captured_output('stderr') as stderr: thread = ThreadRunFail() thread.start() thread.join() self.assertEqual(stderr.getvalue(), 'Exception in threading.excepthook:\n') self.assertEqual(err_str, 'threading_hook failed') class TimerTests(BaseTestCase): 
def setUp(self): BaseTestCase.setUp(self) self.callback_args = [] self.callback_event = threading.Event() def test_init_immutable_default_args(self): # Issue 17435: constructor defaults were mutable objects, they could be # mutated via the object attributes and affect other Timer objects. timer1 = threading.Timer(0.01, self._callback_spy) timer1.start() self.callback_event.wait() timer1.args.append("blah") timer1.kwargs["foo"] = "bar" self.callback_event.clear() timer2 = threading.Timer(0.01, self._callback_spy) timer2.start() self.callback_event.wait() self.assertEqual(len(self.callback_args), 2) self.assertEqual(self.callback_args, [((), {}), ((), {})]) timer1.join() timer2.join() def _callback_spy(self, *args, **kwargs): self.callback_args.append((args[:], kwargs.copy())) self.callback_event.set() class LockTests(lock_tests.LockTests): locktype = staticmethod(threading.Lock) class PyRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._PyRLock) @unittest.skipIf(threading._CRLock is None, 'RLock not implemented in C') class CRLockTests(lock_tests.RLockTests): locktype = staticmethod(threading._CRLock) class EventTests(lock_tests.EventTests): eventtype = staticmethod(threading.Event) class ConditionAsRLockTests(lock_tests.RLockTests): # Condition uses an RLock by default and exports its API. locktype = staticmethod(threading.Condition) class ConditionTests(lock_tests.ConditionTests): condtype = staticmethod(threading.Condition) class SemaphoreTests(lock_tests.SemaphoreTests): semtype = staticmethod(threading.Semaphore) class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests): semtype = staticmethod(threading.BoundedSemaphore) class BarrierTests(lock_tests.BarrierTests): barriertype = staticmethod(threading.Barrier) class MiscTestCase(unittest.TestCase): def test__all__(self): extra = {"ThreadError"} blacklist = {'currentThread', 'activeCount'} support.check__all__(self, threading, ('threading', '_thread'), extra=extra, blacklist=blacklist) class InterruptMainTests(unittest.TestCase): def test_interrupt_main_subthread(self): # Calling start_new_thread with a function that executes interrupt_main # should raise KeyboardInterrupt upon completion. def call_interrupt(): _thread.interrupt_main() t = threading.Thread(target=call_interrupt) with self.assertRaises(KeyboardInterrupt): t.start() t.join() t.join() def test_interrupt_main_mainthread(self): # Make sure that if interrupt_main is called in main thread that # KeyboardInterrupt is raised instantly. with self.assertRaises(KeyboardInterrupt): _thread.interrupt_main() def test_interrupt_main_noerror(self): handler = signal.getsignal(signal.SIGINT) try: # No exception should arise. 
signal.signal(signal.SIGINT, signal.SIG_IGN) _thread.interrupt_main() signal.signal(signal.SIGINT, signal.SIG_DFL) _thread.interrupt_main() finally: # Restore original handler signal.signal(signal.SIGINT, handler) class AtexitTests(unittest.TestCase): def test_atexit_output(self): rc, out, err = assert_python_ok("-c", """if True: import threading def run_last(): print('parrot') threading._register_atexit(run_last) """) self.assertFalse(err) self.assertEqual(out.strip(), b'parrot') def test_atexit_called_once(self): rc, out, err = assert_python_ok("-c", """if True: import threading from unittest.mock import Mock mock = Mock() threading._register_atexit(mock) mock.assert_not_called() # force early shutdown to ensure it was called once threading._shutdown() mock.assert_called_once() """) self.assertFalse(err) def test_atexit_after_shutdown(self): # The only way to do this is by registering an atexit within # an atexit, which is intended to raise an exception. rc, out, err = assert_python_ok("-c", """if True: import threading def func(): pass def run_last(): threading._register_atexit(func) threading._register_atexit(run_last) """) self.assertTrue(err) self.assertIn("RuntimeError: can't register atexit after shutdown", err.decode()) if __name__ == "__main__": unittest.main() gevent-24.11.1/src/greentest/3.9/test_wsgiref.py000066400000000000000000000742301471441230600213370ustar00rootroot00000000000000from unittest import mock from test import support from test.support import socket_helper from test.test_httpservers import NoLogRequestHandler from unittest import TestCase from wsgiref.util import setup_testing_defaults from wsgiref.headers import Headers from wsgiref.handlers import BaseHandler, BaseCGIHandler, SimpleHandler from wsgiref import util from wsgiref.validate import validator from wsgiref.simple_server import WSGIServer, WSGIRequestHandler from wsgiref.simple_server import make_server from http.client import HTTPConnection from io import StringIO, BytesIO, BufferedReader from socketserver import BaseServer from platform import python_implementation import os import re import signal import sys import threading import unittest class MockServer(WSGIServer): """Non-socket HTTP server""" def __init__(self, server_address, RequestHandlerClass): BaseServer.__init__(self, server_address, RequestHandlerClass) self.server_bind() def server_bind(self): host, port = self.server_address self.server_name = host self.server_port = port self.setup_environ() class MockHandler(WSGIRequestHandler): """Non-socket HTTP handler""" def setup(self): self.connection = self.request self.rfile, self.wfile = self.connection def finish(self): pass def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), ('Date','Mon, 05 Jun 2006 18:49:54 GMT') ]) return [b"Hello, world!"] def header_app(environ, start_response): start_response("200 OK", [ ('Content-Type', 'text/plain'), ('Date', 'Mon, 05 Jun 2006 18:49:54 GMT') ]) return [';'.join([ environ['HTTP_X_TEST_HEADER'], environ['QUERY_STRING'], environ['PATH_INFO'] ]).encode('iso-8859-1')] def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): server = make_server("", 80, app, MockServer, MockHandler) inp = BufferedReader(BytesIO(data)) out = BytesIO() olderr = sys.stderr err = sys.stderr = StringIO() try: server.finish_request((inp, out), ("127.0.0.1",8888)) finally: sys.stderr = olderr return out.getvalue(), err.getvalue() def compare_generic_iter(make_it,match): """Utility to compare a generic 2.1/2.2+ iterator with an 
iterable If running under Python 2.2+, this tests the iterator using iter()/next(), as well as __getitem__. 'make_it' must be a function returning a fresh iterator to be tested (since this may test the iterator twice).""" it = make_it() n = 0 for item in match: if not it[n]==item: raise AssertionError n+=1 try: it[n] except IndexError: pass else: raise AssertionError("Too many items from __getitem__",it) try: iter, StopIteration except NameError: pass else: # Only test iter mode under 2.2+ it = make_it() if not iter(it) is it: raise AssertionError for item in match: if not next(it) == item: raise AssertionError try: next(it) except StopIteration: pass else: raise AssertionError("Too many items from .__next__()", it) class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): pyver = (python_implementation() + "/" + sys.version.split()[0]) self.assertEqual(out, ("HTTP/1.0 200 OK\r\n" "Server: WSGIServer/0.2 " + pyver +"\r\n" "Content-Type: text/plain\r\n" "Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" + (has_length and "Content-Length: 13\r\n" or "") + "\r\n" "Hello, world!").encode("iso-8859-1") ) def test_plain_hello(self): out, err = run_amock() self.check_hello(out) def test_environ(self): request = ( b"GET /p%61th/?query=test HTTP/1.0\n" b"X-Test-Header: Python test \n" b"X-Test-Header: Python test 2\n" b"Content-Length: 0\n\n" ) out, err = run_amock(header_app, request) self.assertEqual( out.splitlines()[-1], b"Python test,Python test 2;query=test;/path/" ) def test_request_length(self): out, err = run_amock(data=b"GET " + (b"x" * 65537) + b" HTTP/1.0\n\n") self.assertEqual(out.splitlines()[0], b"HTTP/1.0 414 Request-URI Too Long") def test_validated_hello(self): out, err = run_amock(validator(hello_app)) # the middleware doesn't support len(), so content-length isn't there self.check_hello(out, has_length=False) def test_simple_validation_error(self): def bad_app(environ,start_response): start_response("200 OK", ('Content-Type','text/plain')) return ["Hello, world!"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual( err.splitlines()[-2], "AssertionError: Headers (('Content-Type', 'text/plain')) must" " be of type list: " ) def test_status_validation_errors(self): def create_bad_app(status): def bad_app(environ, start_response): start_response(status, [("Content-Type", "text/plain; charset=utf-8")]) return [b"Hello, world!"] return bad_app tests = [ ('200', 'AssertionError: Status must be at least 4 characters'), ('20X OK', 'AssertionError: Status message must begin w/3-digit code'), ('200OK', 'AssertionError: Status message must have a space after code'), ] for status, exc_message in tests: with self.subTest(status=status): out, err = run_amock(create_bad_app(status)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." )) self.assertEqual(err.splitlines()[-2], exc_message) def test_wsgi_input(self): def bad_app(e,s): e["wsgi.input"].read() s("200 OK", [("Content-Type", "text/plain; charset=utf-8")]) return [b"data"] out, err = run_amock(validator(bad_app)) self.assertTrue(out.endswith( b"A server error occurred. Please contact the administrator." 
)) self.assertEqual( err.splitlines()[-2], "AssertionError" ) def test_bytes_validation(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) return [b"data"] out, err = run_amock(validator(app)) self.assertTrue(err.endswith('"GET / HTTP/1.0" 200 4\n')) ver = sys.version.split()[0].encode('ascii') py = python_implementation().encode('ascii') pyver = py + b"/" + ver self.assertEqual( b"HTTP/1.0 200 OK\r\n" b"Server: WSGIServer/0.2 "+ pyver + b"\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Date: Wed, 24 Dec 2008 13:29:32 GMT\r\n" b"\r\n" b"data", out) def test_cp1252_url(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain"), ("Date", "Wed, 24 Dec 2008 13:29:32 GMT"), ]) # PEP3333 says environ variables are decoded as latin1. # Encode as latin1 to get original bytes return [e["PATH_INFO"].encode("latin1")] out, err = run_amock( validator(app), data=b"GET /\x80%80 HTTP/1.0") self.assertEqual( [ b"HTTP/1.0 200 OK", mock.ANY, b"Content-Type: text/plain", b"Date: Wed, 24 Dec 2008 13:29:32 GMT", b"", b"/\x80\x80", ], out.splitlines()) def test_interrupted_write(self): # BaseHandler._write() and _flush() have to write all data, even if # it takes multiple send() calls. Test this by interrupting a send() # call with a Unix signal. pthread_kill = support.get_attribute(signal, "pthread_kill") def app(environ, start_response): start_response("200 OK", []) return [b'\0' * support.SOCK_MAX_SIZE] class WsgiHandler(NoLogRequestHandler, WSGIRequestHandler): pass server = make_server(socket_helper.HOST, 0, app, handler_class=WsgiHandler) self.addCleanup(server.server_close) interrupted = threading.Event() def signal_handler(signum, frame): interrupted.set() original = signal.signal(signal.SIGUSR1, signal_handler) self.addCleanup(signal.signal, signal.SIGUSR1, original) received = None main_thread = threading.get_ident() def run_client(): http = HTTPConnection(*server.server_address) http.request("GET", "/") with http.getresponse() as response: response.read(100) # The main thread should now be blocking in a send() system # call. But in theory, it could get interrupted by other # signals, and then retried. So keep sending the signal in a # loop, in case an earlier signal happens to be delivered at # an inconvenient moment. 
while True: pthread_kill(main_thread, signal.SIGUSR1) if interrupted.wait(timeout=float(1)): break nonlocal received received = len(response.read()) http.close() background = threading.Thread(target=run_client) background.start() server.handle_request() background.join() self.assertEqual(received, support.SOCK_MAX_SIZE - 100) class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): env = {'SCRIPT_NAME':sn_in,'PATH_INFO':pi_in} util.setup_testing_defaults(env) self.assertEqual(util.shift_path_info(env),part) self.assertEqual(env['PATH_INFO'],pi_out) self.assertEqual(env['SCRIPT_NAME'],sn_out) return env def checkDefault(self, key, value, alt=None): # Check defaulting when empty env = {} util.setup_testing_defaults(env) if isinstance(value, StringIO): self.assertIsInstance(env[key], StringIO) elif isinstance(value,BytesIO): self.assertIsInstance(env[key],BytesIO) else: self.assertEqual(env[key], value) # Check existing value env = {key:alt} util.setup_testing_defaults(env) self.assertIs(env[key], alt) def checkCrossDefault(self,key,value,**kw): util.setup_testing_defaults(kw) self.assertEqual(kw[key],value) def checkAppURI(self,uri,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.application_uri(kw),uri) def checkReqURI(self,uri,query=1,**kw): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) @support.ignore_warnings(category=DeprecationWarning) def checkFW(self,text,size,match): def make_it(text=text,size=size): return util.FileWrapper(StringIO(text),size) compare_generic_iter(make_it,match) it = make_it() self.assertFalse(it.filelike.closed) for item in it: pass self.assertFalse(it.filelike.closed) it.close() self.assertTrue(it.filelike.closed) def test_filewrapper_getitem_deprecation(self): wrapper = util.FileWrapper(StringIO('foobar'), 3) with self.assertWarnsRegex(DeprecationWarning, r'Use iterator protocol instead'): # This should have returned 'bar'. 
self.assertEqual(wrapper[1], 'foo') def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') self.checkShift('/','', None, '/', '') self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') self.checkShift('/a/b', '//y', 'y', '/a/b/y', '') self.checkShift('/a/b', '//y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '/./y', 'y', '/a/b/y', '') self.checkShift('/a/b', '/./y/', 'y', '/a/b/y', '/') self.checkShift('/a/b', '///./..//y/.//', '..', '/a', '/y/') self.checkShift('/a/b', '///', '', '/a/b/', '') self.checkShift('/a/b', '/.//', '', '/a/b/', '') self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), ('SERVER_PORT', '80'), ('SERVER_PROTOCOL','HTTP/1.0'), ('HTTP_HOST','127.0.0.1'), ('REQUEST_METHOD','GET'), ('SCRIPT_NAME',''), ('PATH_INFO','/'), ('wsgi.version', (1,0)), ('wsgi.run_once', 0), ('wsgi.multithread', 0), ('wsgi.multiprocess', 0), ('wsgi.input', BytesIO()), ('wsgi.errors', StringIO()), ('wsgi.url_scheme','http'), ]: self.checkDefault(key,value) def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="1") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="yes") self.checkCrossDefault('wsgi.url_scheme',"http",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"on"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkAppURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkAppURI("http://spam.example.com:2071/", HTTP_HOST="spam.example.com:2071", SERVER_PORT="2071") self.checkAppURI("http://spam.example.com/", SERVER_NAME="spam.example.com") self.checkAppURI("http://127.0.0.1/", HTTP_HOST="127.0.0.1", SERVER_NAME="spam.example.com") self.checkAppURI("https://127.0.0.1/", HTTPS="on") self.checkAppURI("http://127.0.0.1:8000/", SERVER_PORT="8000", HTTP_HOST=None) def testReqURIs(self): self.checkReqURI("http://127.0.0.1/") self.checkReqURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") self.checkReqURI("http://127.0.0.1/sp%E4m", SCRIPT_NAME="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam", SCRIPT_NAME="/spammity", PATH_INFO="/spam") self.checkReqURI("http://127.0.0.1/spammity/sp%E4m", SCRIPT_NAME="/spammity", PATH_INFO="/sp\xe4m") self.checkReqURI("http://127.0.0.1/spammity/spam;ham", SCRIPT_NAME="/spammity", PATH_INFO="/spam;ham") self.checkReqURI("http://127.0.0.1/spammity/spam;cookie=1234,5678", SCRIPT_NAME="/spammity", PATH_INFO="/spam;cookie=1234,5678") self.checkReqURI("http://127.0.0.1/spammity/spam?say=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") self.checkReqURI("http://127.0.0.1/spammity/spam?s%E4y=ni", SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="s%E4y=ni") 
self.checkReqURI("http://127.0.0.1/spammity/spam", 0, SCRIPT_NAME="/spammity", PATH_INFO="/spam",QUERY_STRING="say=ni") def testFileWrapper(self): self.checkFW("xyz"*50, 120, ["xyz"*40,"xyz"*10]) def testHopByHop(self): for hop in ( "Connection Keep-Alive Proxy-Authenticate Proxy-Authorization " "TE Trailers Transfer-Encoding Upgrade" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertTrue(util.is_hop_by_hop(alt)) # Not comprehensive, just a few random header names for hop in ( "Accept Cache-Control Date Pragma Trailer Via Warning" ).split(): for alt in hop, hop.title(), hop.upper(), hop.lower(): self.assertFalse(util.is_hop_by_hop(alt)) class HeaderTests(TestCase): def testMappingInterface(self): test = [('x','y')] self.assertEqual(len(Headers()), 0) self.assertEqual(len(Headers([])),0) self.assertEqual(len(Headers(test[:])),1) self.assertEqual(Headers(test[:]).keys(), ['x']) self.assertEqual(Headers(test[:]).values(), ['y']) self.assertEqual(Headers(test[:]).items(), test) self.assertIsNot(Headers(test).items(), test) # must be copy! h = Headers() del h['foo'] # should not raise an error h['Foo'] = 'bar' for m in h.__contains__, h.get, h.get_all, h.__getitem__: self.assertTrue(m('foo')) self.assertTrue(m('Foo')) self.assertTrue(m('FOO')) self.assertFalse(m('bar')) self.assertEqual(h['foo'],'bar') h['foo'] = 'baz' self.assertEqual(h['FOO'],'baz') self.assertEqual(h.get_all('foo'),['baz']) self.assertEqual(h.get("foo","whee"), "baz") self.assertEqual(h.get("zoo","whee"), "whee") self.assertEqual(h.setdefault("foo","whee"), "baz") self.assertEqual(h.setdefault("zoo","whee"), "whee") self.assertEqual(h["foo"],"baz") self.assertEqual(h["zoo"],"whee") def testRequireList(self): self.assertRaises(TypeError, Headers, "foo") def testExtras(self): h = Headers() self.assertEqual(str(h),'\r\n') h.add_header('foo','bar',baz="spam") self.assertEqual(h['foo'], 'bar; baz="spam"') self.assertEqual(str(h),'foo: bar; baz="spam"\r\n\r\n') h.add_header('Foo','bar',cheese=None) self.assertEqual(h.get_all('foo'), ['bar; baz="spam"', 'bar; cheese']) self.assertEqual(str(h), 'foo: bar; baz="spam"\r\n' 'Foo: bar; cheese\r\n' '\r\n' ) class ErrorHandler(BaseCGIHandler): """Simple handler subclass for testing BaseHandler""" # BaseHandler records the OS environment at import time, but envvars # might have been changed later by other tests, which trips up # HandlerTests.testEnviron(). os_environ = dict(os.environ.items()) def __init__(self,**kw): setup_testing_defaults(kw) BaseCGIHandler.__init__( self, BytesIO(), BytesIO(), StringIO(), kw, multithread=True, multiprocess=True ) class TestHandler(ErrorHandler): """Simple handler subclass for testing BaseHandler, w/error passthru""" def handle_error(self): raise # for testing, we want to see what's happening class HandlerTests(TestCase): # testEnviron() can produce long error message maxDiff = 80 * 50 def testEnviron(self): os_environ = { # very basic environment 'HOME': '/my/home', 'PATH': '/my/path', 'LANG': 'fr_FR.UTF-8', # set some WSGI variables 'SCRIPT_NAME': 'test_script_name', 'SERVER_NAME': 'test_server_name', } with support.swap_attr(TestHandler, 'os_environ', os_environ): # override X and HOME variables handler = TestHandler(X="Y", HOME="/override/home") handler.setup_environ() # Check that wsgi_xxx attributes are copied to wsgi.xxx variables # of handler.environ for attr in ('version', 'multithread', 'multiprocess', 'run_once', 'file_wrapper'): self.assertEqual(getattr(handler, 'wsgi_' + attr), handler.environ['wsgi.' 
+ attr]) # Test handler.environ as a dict expected = {} setup_testing_defaults(expected) # Handler inherits os_environ variables which are not overridden # by SimpleHandler.add_cgi_vars() (SimpleHandler.base_env) for key, value in os_environ.items(): if key not in expected: expected[key] = value expected.update({ # X doesn't exist in os_environ "X": "Y", # HOME is overridden by TestHandler 'HOME': "/override/home", # overridden by setup_testing_defaults() "SCRIPT_NAME": "", "SERVER_NAME": "127.0.0.1", # set by BaseHandler.setup_environ() 'wsgi.input': handler.get_stdin(), 'wsgi.errors': handler.get_stderr(), 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.url_scheme': 'http', 'wsgi.multithread': True, 'wsgi.multiprocess': True, 'wsgi.file_wrapper': util.FileWrapper, }) self.assertDictEqual(handler.environ, expected) def testCGIEnviron(self): h = BaseCGIHandler(None,None,None,{}) h.setup_environ() for key in 'wsgi.url_scheme', 'wsgi.input', 'wsgi.errors': self.assertIn(key, h.environ) def testScheme(self): h=TestHandler(HTTPS="on"); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'https') h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') def testAbstractMethods(self): h = BaseHandler() for name in [ '_flush','get_stdin','get_stderr','add_cgi_vars' ]: self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") def testContentLength(self): # Demo one reason iteration is better than write()... ;) def trivial_app1(e,s): s('200 OK',[]) return [e['wsgi.url_scheme'].encode('iso-8859-1')] def trivial_app2(e,s): s('200 OK',[])(e['wsgi.url_scheme'].encode('iso-8859-1')) return [] def trivial_app3(e,s): s('200 OK',[]) return ['\u0442\u0435\u0441\u0442'.encode("utf-8")] def trivial_app4(e,s): # Simulate a response to a HEAD request s('200 OK',[('Content-Length', '12345')]) return [] h = TestHandler() h.run(trivial_app1) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 4\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app2) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n" "http").encode("iso-8859-1")) h = TestHandler() h.run(trivial_app3) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 8\r\n' b'\r\n' b'\xd1\x82\xd0\xb5\xd1\x81\xd1\x82') h = TestHandler() h.run(trivial_app4) self.assertEqual(h.stdout.getvalue(), b'Status: 200 OK\r\n' b'Content-Length: 12345\r\n' b'\r\n') def testBasicErrorOutput(self): def non_error_app(e,s): s('200 OK',[]) return [] def error_app(e,s): raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(non_error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n").encode("iso-8859-1")) self.assertEqual(h.stderr.getvalue(),"") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: %s\r\n" "Content-Type: text/plain\r\n" "Content-Length: %d\r\n" "\r\n" % (h.error_status,len(h.error_body))).encode('iso-8859-1') + h.error_body) self.assertIn("AssertionError", h.stderr.getvalue()) def testErrorAfterOutput(self): MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) raise AssertionError("This should be caught by handler") h = ErrorHandler() h.run(error_app) self.assertEqual(h.stdout.getvalue(), ("Status: 200 OK\r\n" "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) def testHeaderFormats(self): def non_error_app(e,s): 
s('200 OK',[]) return [] stdpat = ( r"HTTP/%s 200 OK\r\n" r"Date: \w{3}, [ 0123]\d \w{3} \d{4} \d\d:\d\d:\d\d GMT\r\n" r"%s" r"Content-Length: 0\r\n" r"\r\n" ) shortpat = ( "Status: 200 OK\r\n" "Content-Length: 0\r\n" "\r\n" ).encode("iso-8859-1") for ssw in "FooBar/1.0", None: sw = ssw and "Server: %s\r\n" % ssw or "" for version in "1.0", "1.1": for proto in "HTTP/0.9", "HTTP/1.0", "HTTP/1.1": h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = False h.http_version = version h.server_software = ssw h.run(non_error_app) self.assertEqual(shortpat,h.stdout.getvalue()) h = TestHandler(SERVER_PROTOCOL=proto) h.origin_server = True h.http_version = version h.server_software = ssw h.run(non_error_app) if proto=="HTTP/0.9": self.assertEqual(h.stdout.getvalue(),b"") else: self.assertTrue( re.match((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()), ((stdpat%(version,sw)).encode("iso-8859-1"), h.stdout.getvalue()) ) def testBytesData(self): def app(e, s): s("200 OK", [ ("Content-Type", "text/plain; charset=utf-8"), ]) return [b"data"] h = TestHandler() h.run(app) self.assertEqual(b"Status: 200 OK\r\n" b"Content-Type: text/plain; charset=utf-8\r\n" b"Content-Length: 4\r\n" b"\r\n" b"data", h.stdout.getvalue()) def testCloseOnError(self): side_effects = {'close_called': False} MSG = b"Some output has been sent" def error_app(e,s): s("200 OK",[])(MSG) class CrashyIterable(object): def __iter__(self): while True: yield b'blah' raise AssertionError("This should be caught by handler") def close(self): side_effects['close_called'] = True return CrashyIterable() h = ErrorHandler() h.run(error_app) self.assertEqual(side_effects['close_called'], True) def testPartialWrite(self): written = bytearray() class PartialWriter: def write(self, b): partial = b[:7] written.extend(partial) return len(partial) def flush(self): pass environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), PartialWriter(), sys.stderr, environ) msg = "should not do partial writes" with self.assertWarnsRegex(DeprecationWarning, msg): h.run(hello_app) self.assertEqual(b"HTTP/1.0 200 OK\r\n" b"Content-Type: text/plain\r\n" b"Date: Mon, 05 Jun 2006 18:49:54 GMT\r\n" b"Content-Length: 13\r\n" b"\r\n" b"Hello, world!", written) def testClientConnectionTerminations(self): environ = {"SERVER_PROTOCOL": "HTTP/1.0"} for exception in ( ConnectionAbortedError, BrokenPipeError, ConnectionResetError, ): with self.subTest(exception=exception): class AbortingWriter: def write(self, b): raise exception stderr = StringIO() h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertFalse(stderr.getvalue()) def testDontResetInternalStateOnException(self): class CustomException(ValueError): pass # We are raising CustomException here to trigger an exception # during the execution of SimpleHandler.finish_response(), so # we can easily test that the internal state of the handler is # preserved in case of an exception. class AbortingWriter: def write(self, b): raise CustomException stderr = StringIO() environ = {"SERVER_PROTOCOL": "HTTP/1.0"} h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ) h.run(hello_app) self.assertIn("CustomException", stderr.getvalue()) # Test that the internal state of the handler is preserved. 
        self.assertIsNotNone(h.result)
        self.assertIsNotNone(h.headers)
        self.assertIsNotNone(h.status)
        self.assertIsNotNone(h.environ)


if __name__ == "__main__":
    unittest.main()

gevent-24.11.1/src/greentest/3.9/version

3.9.20

gevent-24.11.1/src/greentest/README.rst

=================
 Versioned Tests
=================

The test directories that begin with a number (e.g., 2.7 and 3.5) are
copies of the standard library tests for that specific version of
Python. Each directory has a ``version`` file that identifies the
specific point release the tests come from. The tests are only
expected to pass if the version of python running the tests exactly
matches the version in that file. If this is not the case, the test
runner will print a warning.

.. caution::

   For ease of updating the standard library tests, gevent tries very
   hard not to modify the tests if at all possible. Prefer to use the
   ``patched_tests_setup.py`` or ``known_failures.py`` file if
   necessary.

One exception to this is ``test_threading.py``, where we find it
necessary to change 'from test import lock_tests' to our own
'from gevent.tests import lock_tests'.

gevent-24.11.1/tox.ini

[tox]
envlist =
    py38,py39,py310,py311,py311-cffi,py311-libuv,py312,pypy3,lint,leak

[testenv]
usedevelop = true
extras =
    test
    events
    dnspython
deps =
    cffi
whitelist_externals =
    *
commands =
    python -m gevent.tests

[testenv:lint]
skip_install = true
skipsdist = true
basepython = python3.11
deps =
    cffi
    pylint
commands =
    pylint --rcfile=.pylintrc src/gevent

[testenv:py311-cffi]
basepython = python3.11
setenv =
    GEVENT_LOOP=libev-cffi

[testenv:py311-libuv]
basepython = python3.11
setenv =
    GEVENT_LOOP=libuv-cffi

[testenv:leak]
basepython = python3.11
commands =
    GEVENTTEST_LEAKCHECK=1 python -mgevent.tests --config known_failures.py --quiet --ignore tests_that_dont_do_leakchecks.txt
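As a worked illustration of the version-matching behaviour described in the
README above, here is a minimal sketch of how a runner could compare a
versioned test directory's ``version`` file against the interpreter that is
executing the tests. This is only an illustrative example written for this
document: the function name ``check_test_version`` and the hard-coded
directory path are hypothetical, and this is not gevent's actual test-runner
code.

import os
import sys

def check_test_version(test_dir):
    # Hypothetical helper (not part of gevent): read the exact point release
    # recorded in the directory's ``version`` file, e.g. "3.9.20".
    with open(os.path.join(test_dir, 'version')) as f:
        expected = f.read().strip()
    # The interpreter actually running the tests.
    running = '%d.%d.%d' % sys.version_info[:3]
    if running != expected:
        # Mirror the behaviour the README describes: warn, do not fail.
        print('Warning: tests in %s were taken from Python %s, but they are '
              'running on Python %s; failures may not indicate real problems.'
              % (test_dir, expected, running))

if __name__ == '__main__':
    # Example invocation; adjust the path to wherever the versioned tests live.
    check_test_version('src/greentest/3.9')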